From xen-devel-bounces@lists.xenproject.org Wed Jul 01 04:04:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jul 2020 04:04:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jqTz8-0002Qo-Hy; Wed, 01 Jul 2020 04:04:34 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=r09v=AM=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jqTz7-0002Qj-DA
 for xen-devel@lists.xenproject.org; Wed, 01 Jul 2020 04:04:33 +0000
X-Inumbo-ID: f6856360-bb4f-11ea-86ca-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f6856360-bb4f-11ea-86ca-12813bfff9fa;
 Wed, 01 Jul 2020 04:04:30 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=B+jrj4r81dZ2uOZm15cD5uHlYb5vC/N4/uT7gCBTODY=; b=eOaZp6sy6YADodPsvC3+kSNdu
 HnGIzzgOevLsB7Hk7dhxLNPyp5AXQyRvFhCCsuiA6Rhe7PduFzoCKLnPzSy0oW8WZKq3yXxELM6xu
 0+aAvbuWbOg2HRdK1JvXXqt7/RgVQIKmKAOz5zRxprhbmRbJLhZlpkrsEw7lxfo0DyHcQ=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jqTz3-0007u7-Lx; Wed, 01 Jul 2020 04:04:29 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jqTz3-00069n-24; Wed, 01 Jul 2020 04:04:29 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jqTz3-000710-1F; Wed, 01 Jul 2020 04:04:29 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151480-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 151480: regressions - FAIL
X-Osstest-Failures: linux-linus:test-arm64-arm64-libvirt-xsm:guest-start/debian.repeat:fail:regression
 linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
X-Osstest-Versions-This: linux=7c30b859a947535f2213277e827d7ac7dcff9c84
X-Osstest-Versions-That: linux=1b5044021070efa3259f3e9548dc35d1eb6aa844
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 01 Jul 2020 04:04:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151480 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151480/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-libvirt-xsm 16 guest-start/debian.repeat fail REGR. vs. 151214

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 151214
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 151214
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 151214
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 151214
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 151214
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 151214
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 151214
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 151214
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass

version targeted for testing:
 linux                7c30b859a947535f2213277e827d7ac7dcff9c84
baseline version:
 linux                1b5044021070efa3259f3e9548dc35d1eb6aa844

Last test of basis   151214  2020-06-18 02:27:46 Z   13 days
Failing since        151236  2020-06-19 19:10:35 Z   11 days   14 attempts
Testing same since   151467  2020-06-30 02:29:41 Z    1 days    2 attempts

------------------------------------------------------------
478 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 22766 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Jul 01 05:56:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jul 2020 05:56:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jqVjW-0003Hz-2y; Wed, 01 Jul 2020 05:56:34 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=7C8Y=AM=redhat.com=armbru@srs-us1.protection.inumbo.net>)
 id 1jqVjV-0003Hu-5J
 for xen-devel@lists.xenproject.org; Wed, 01 Jul 2020 05:56:33 +0000
X-Inumbo-ID: 9d3d8bc4-bb5f-11ea-86cd-12813bfff9fa
Received: from us-smtp-1.mimecast.com (unknown [207.211.31.81])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 9d3d8bc4-bb5f-11ea-86cd-12813bfff9fa;
 Wed, 01 Jul 2020 05:56:32 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
 s=mimecast20190719; t=1593582992;
 h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
 in-reply-to:in-reply-to:references:references;
 bh=P9e6pxRlhLqdi/Qte6Z9/WWcMved/5ud2vMwKx50UFk=;
 b=I0SpaZEE2YBLOd6ZL3U0e1vaoWDAsCYEqgXkPCmfsP4hnAzbYduAdJs2kvorodYlh3BaYQ
 h8+0LGIk5nnt4BiaTxfjOyzlC98mx7sGhpSyAZ1FmFbpKB6V08QRp7+OBdHSC9SQTnn6SO
 uCnBMk6J/vqxcMpcQBRDDX/ZMX+cRjM=
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-45-zcMumqTuPlySm6ML7C6EDw-1; Wed, 01 Jul 2020 01:56:30 -0400
X-MC-Unique: zcMumqTuPlySm6ML7C6EDw-1
Received: from smtp.corp.redhat.com (int-mx04.intmail.prod.int.phx2.redhat.com
 [10.5.11.14])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 1B91919067E0;
 Wed,  1 Jul 2020 05:56:29 +0000 (UTC)
Received: from blackfin.pond.sub.org (ovpn-112-143.ams2.redhat.com
 [10.36.112.143])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id 0D68E612A5;
 Wed,  1 Jul 2020 05:56:26 +0000 (UTC)
Received: by blackfin.pond.sub.org (Postfix, from userid 1000)
 id 8BD5311384A6; Wed,  1 Jul 2020 07:56:24 +0200 (CEST)
From: Markus Armbruster <armbru@redhat.com>
To: Anthony PERARD <anthony.perard@citrix.com>
Subject: Re: [PATCH 2/2] xen: cleanup unrealized flash devices
References: <20200624121841.17971-1-paul@xen.org>
 <20200624121841.17971-3-paul@xen.org>
 <20200630150849.GA2110@perard.uk.xensource.com>
Date: Wed, 01 Jul 2020 07:56:24 +0200
In-Reply-To: <20200630150849.GA2110@perard.uk.xensource.com> (Anthony PERARD's
 message of "Tue, 30 Jun 2020 16:08:49 +0100")
Message-ID: <87eepvdbhz.fsf@dusky.pond.sub.org>
User-Agent: Gnus/5.13 (Gnus v5.13) Emacs/26.3 (gnu/linux)
MIME-Version: 1.0
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.14
Authentication-Results: relay.mimecast.com;
 auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=armbru@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Eduardo Habkost <ehabkost@redhat.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Paul Durrant <pdurrant@amazon.com>,
 Jason Andryuk <jandryuk@gmail.com>, qemu-devel@nongnu.org,
 Paul Durrant <paul@xen.org>, xen-devel@lists.xenproject.org,
 Paolo Bonzini <pbonzini@redhat.com>, Richard Henderson <rth@twiddle.net>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Anthony PERARD <anthony.perard@citrix.com> writes:

> On Wed, Jun 24, 2020 at 01:18:41PM +0100, Paul Durrant wrote:
>> From: Paul Durrant <pdurrant@amazon.com>
>> 
>> The generic pc_machine_initfn() calls pc_system_flash_create() which creates
>> 'system.flash0' and 'system.flash1' devices. These devices are then realized
>> by pc_system_flash_map() which is called from pc_system_firmware_init() which
>> itself is called via pc_memory_init(). The latter however is not called when
>> xen_enable() is true and hence the following assertion fails:
>> 
>> qemu-system-i386: hw/core/qdev.c:439: qdev_assert_realized_properly:
>> Assertion `dev->realized' failed
>> 
>> These flash devices are unneeded when using Xen so this patch avoids the
>> assertion by simply removing them using pc_system_flash_cleanup_unused().
>> 
>> Reported-by: Jason Andryuk <jandryuk@gmail.com>
>> Fixes: ebc29e1beab0 ("pc: Support firmware configuration with -blockdev")
>> Signed-off-by: Paul Durrant <pdurrant@amazon.com>
>> Tested-by: Jason Andryuk <jandryuk@gmail.com>
>
> Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>
>
> I think I would add:
>
> Fixes: dfe8c79c4468 ("qdev: Assert onboard devices all get realized properly")
>
> as this is the first commit where the unrealized flash devices are an
> issue.

They were an issue before, but commit dfe8c79c4468 turned the minor
issue into a crash bug.  No objections.



From xen-devel-bounces@lists.xenproject.org Wed Jul 01 05:59:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jul 2020 05:59:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jqVm4-0003P9-Gw; Wed, 01 Jul 2020 05:59:12 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=7C8Y=AM=redhat.com=armbru@srs-us1.protection.inumbo.net>)
 id 1jqVm3-0003P1-86
 for xen-devel@lists.xenproject.org; Wed, 01 Jul 2020 05:59:11 +0000
X-Inumbo-ID: fbd390e9-bb5f-11ea-86cd-12813bfff9fa
Received: from us-smtp-1.mimecast.com (unknown [207.211.31.120])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id fbd390e9-bb5f-11ea-86cd-12813bfff9fa;
 Wed, 01 Jul 2020 05:59:11 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
 s=mimecast20190719; t=1593583150;
 h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
 content-transfer-encoding:content-transfer-encoding:
 in-reply-to:in-reply-to:references:references;
 bh=Yp53qNv2a6t0Mz/ydfPlIxanTqAlC2QRvorB2eFQvzo=;
 b=aJrNXhnnDqf09lVs04jBHMq+KmuFQ4uBEGj9Al5rYrpCRxJcx/7S8COWWtT9U/vEwObIPW
 PaEsjZ5lZIUxD7tY0IYUuektE3jj2kTy6dtxUIM6BFqZ7VXertA3rdG7r318CbT86cmPzJ
 Bpo1FiMnrACZt6u3oG5WfF4FxYV0u4g=
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-143-9GxAnRZcPomIJdFk8qov4Q-1; Wed, 01 Jul 2020 01:59:07 -0400
X-MC-Unique: 9GxAnRZcPomIJdFk8qov4Q-1
Received: from smtp.corp.redhat.com (int-mx01.intmail.prod.int.phx2.redhat.com
 [10.5.11.11])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id F2B96800C64;
 Wed,  1 Jul 2020 05:59:05 +0000 (UTC)
Received: from blackfin.pond.sub.org (ovpn-112-143.ams2.redhat.com
 [10.36.112.143])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id E83F574191;
 Wed,  1 Jul 2020 05:59:02 +0000 (UTC)
Received: by blackfin.pond.sub.org (Postfix, from userid 1000)
 id 954E011384A6; Wed,  1 Jul 2020 07:59:01 +0200 (CEST)
From: Markus Armbruster <armbru@redhat.com>
To: Philippe Mathieu-Daudé <philmd@redhat.com>
Subject: Re: [PATCH 2/2] xen: cleanup unrealized flash devices
References: <20200624121841.17971-1-paul@xen.org>
 <20200624121841.17971-3-paul@xen.org>
 <33e594dd-dbfa-7c57-1cf5-0852e8fc8e1d@redhat.com>
 <000701d64ef5$6568f660$303ae320$@xen.org>
 <9e591254-d215-d5af-38d2-fd5b65f84a43@redhat.com>
Date: Wed, 01 Jul 2020 07:59:01 +0200
In-Reply-To: <9e591254-d215-d5af-38d2-fd5b65f84a43@redhat.com> ("Philippe
 Mathieu-Daudé"'s message of "Tue, 30 Jun 2020 19:27:22 +0200")
Message-ID: <877dvndbdm.fsf@dusky.pond.sub.org>
User-Agent: Gnus/5.13 (Gnus v5.13) Emacs/26.3 (gnu/linux)
MIME-Version: 1.0
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.11
Authentication-Results: relay.mimecast.com;
 auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=armbru@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: 'Eduardo Habkost' <ehabkost@redhat.com>,
 'Jason Andryuk' <jandryuk@gmail.com>, 'Paul Durrant' <pdurrant@amazon.com>,
 paul@xen.org, qemu-devel@nongnu.org, "'Michael S. Tsirkin'" <mst@redhat.com>,
 'Paolo Bonzini' <pbonzini@redhat.com>, xen-devel@lists.xenproject.org,
 'Richard Henderson' <rth@twiddle.net>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Philippe Mathieu-Daud=C3=A9 <philmd@redhat.com> writes:

> On 6/30/20 5:44 PM, Paul Durrant wrote:
>>> -----Original Message-----
>>> From: Philippe Mathieu-Daud=C3=A9 <philmd@redhat.com>
>>> Sent: 30 June 2020 16:26
>>> To: Paul Durrant <paul@xen.org>; xen-devel@lists.xenproject.org; qemu-d=
evel@nongnu.org
>>> Cc: Eduardo Habkost <ehabkost@redhat.com>; Michael S. Tsirkin <mst@redh=
at.com>; Paul Durrant
>>> <pdurrant@amazon.com>; Jason Andryuk <jandryuk@gmail.com>; Paolo Bonzin=
i <pbonzini@redhat.com>;
>>> Richard Henderson <rth@twiddle.net>
>>> Subject: Re: [PATCH 2/2] xen: cleanup unrealized flash devices
>>>
>>> On 6/24/20 2:18 PM, Paul Durrant wrote:
>>>> From: Paul Durrant <pdurrant@amazon.com>
>>>>
>>>> The generic pc_machine_initfn() calls pc_system_flash_create() which c=
reates
>>>> 'system.flash0' and 'system.flash1' devices. These devices are then re=
alized
>>>> by pc_system_flash_map() which is called from pc_system_firmware_init(=
) which
>>>> itself is called via pc_memory_init(). The latter however is not calle=
d when
>>>> xen_enable() is true and hence the following assertion fails:
>>>>
>>>> qemu-system-i386: hw/core/qdev.c:439: qdev_assert_realized_properly:
>>>> Assertion `dev->realized' failed
>>>>
>>>> These flash devices are unneeded when using Xen so this patch avoids t=
he
>>>> assertion by simply removing them using pc_system_flash_cleanup_unused=
().
>>>>
>>>> Reported-by: Jason Andryuk <jandryuk@gmail.com>
>>>> Fixes: ebc29e1beab0 ("pc: Support firmware configuration with -blockde=
v")
>>>> Signed-off-by: Paul Durrant <pdurrant@amazon.com>
>>>> Tested-by: Jason Andryuk <jandryuk@gmail.com>
>>>> ---
>>>> Cc: Paolo Bonzini <pbonzini@redhat.com>
>>>> Cc: Richard Henderson <rth@twiddle.net>
>>>> Cc: Eduardo Habkost <ehabkost@redhat.com>
>>>> Cc: "Michael S. Tsirkin" <mst@redhat.com>
>>>> Cc: Marcel Apfelbaum <marcel.apfelbaum@gmail.com>
>>>> ---
>>>>  hw/i386/pc_piix.c    | 9 ++++++---
>>>>  hw/i386/pc_sysfw.c   | 2 +-
>>>>  include/hw/i386/pc.h | 1 +
>>>>  3 files changed, 8 insertions(+), 4 deletions(-)
>>>>
>>>> diff --git a/hw/i386/pc_piix.c b/hw/i386/pc_piix.c
>>>> index 1497d0e4ae..977d40afb8 100644
>>>> --- a/hw/i386/pc_piix.c
>>>> +++ b/hw/i386/pc_piix.c
>>>> @@ -186,9 +186,12 @@ static void pc_init1(MachineState *machine,
>>>>      if (!xen_enabled()) {
>>>>          pc_memory_init(pcms, system_memory,
>>>>                         rom_memory, &ram_memory);
>>>> -    } else if (machine->kernel_filename != NULL) {
>>>> -        /* For xen HVM direct kernel boot, load linux here */
>>>> -        xen_load_linux(pcms);
>>>> +    } else {
>>>> +        pc_system_flash_cleanup_unused(pcms);
>>>
>>> TIL pc_system_flash_cleanup_unused().
>>>
>>> What about restricting at the source?
>>>
>>
>> And leave the devices in place? They are not relevant for Xen, so why not clean up?
>
> No, I meant to not create them in the first place, instead of
> create+destroy.

Better.  Opinion, not demand :)

[...]



From xen-devel-bounces@lists.xenproject.org Wed Jul 01 07:04:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jul 2020 07:04:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jqWmh-0000gb-H7; Wed, 01 Jul 2020 07:03:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3SYL=AM=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1jqWmg-0000gV-C0
 for xen-devel@lists.xenproject.org; Wed, 01 Jul 2020 07:03:54 +0000
X-Inumbo-ID: 04ef213e-bb69-11ea-bca7-bc764e2007e4
Received: from mail-ed1-x544.google.com (unknown [2a00:1450:4864:20::544])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 04ef213e-bb69-11ea-bca7-bc764e2007e4;
 Wed, 01 Jul 2020 07:03:52 +0000 (UTC)
Received: by mail-ed1-x544.google.com with SMTP id by13so8737129edb.11
 for <xen-devel@lists.xenproject.org>; Wed, 01 Jul 2020 00:03:52 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
 :mime-version:content-transfer-encoding:thread-index
 :content-language;
 bh=6eNZuzFNJ6OxbyFGv/sTyrbXRJOgNAq4o3gC0+HLpag=;
 b=UAoPm0YKD/s7r6iPfz/T0PuEhqOaM4+nyi4tMmyeXBtD9suQ2kjadh3JuiLeOe4eHP
 cyhesJE195NEHpLjShWKGowwJ2vFh9NDRWQgUT0BOvk7KdBczKx8k+qTYqp0X9p0Z0K4
 0FnjU6d71yidlpIGI+FHodFAj4jEQR75u/8Sa1hZPgcvfRPmZ3YEOhFwlJCbVJiyLhD8
 DR5ywCRQB021jF6sxzXcVOBKC4sn6d3B50DgGWdNU1ppEIvCEKWFrfDD9AtLaSCZREJM
 hwZsxh+Wgg2SJEPcE2oYiNXvfs9b3wvVtt2+uqVAeJ6XDju0Kpexs3neZwdAbxVEPuom
 zsAw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
 :subject:date:message-id:mime-version:content-transfer-encoding
 :thread-index:content-language;
 bh=6eNZuzFNJ6OxbyFGv/sTyrbXRJOgNAq4o3gC0+HLpag=;
 b=oD3bevQ6xTAS2lz0wbSIhO8aqJKMNYL4dsNQcbR3l9jZHfZscAHTLyl0ujZydBed92
 iXpvpQidickJywYFUeCUpis4GRXYcTQtcRoEt1REL42+Q1cmD7kJ7WvIq4ZDPA5RHGKn
 hw3Jt7EKRnnDnewvJA+aWVkr+1GZ8p979w5HHLtVVFtQVdd+JWLAb2bdx9rt2BiAxEdV
 3Oa4aoAW1VC+qzgQOLKDDUw0Vj1xJVAdaZ/7yB6C4iudlM14uSpcMSoO4PyXD27+8345
 Wsri+ognYbV78t7tRB+2eu21a35Lau7ClVRfvR14IjIz8JfFuxtiBCPtlfl44gaVjRZm
 IeJQ==
X-Gm-Message-State: AOAM533REPFIZ1OwgqAshvuEJxNCYACN1I1/TZeR9c4N1n+Kvw47LgHF
 FIdoG2IbULpv1mRwcYpPrCs=
X-Google-Smtp-Source: ABdhPJz4LIGFdOnkn9jdZqEhUBlzB6LaMc2aOiaq72jXKhCB/fO/5680G+Tj706S1ohuE+kvZnnySA==
X-Received: by 2002:a05:6402:1ca6:: with SMTP id
 cz6mr9805142edb.171.1593587031355; 
 Wed, 01 Jul 2020 00:03:51 -0700 (PDT)
Received: from CBGR90WXYV0 (54-240-197-238.amazon.com. [54.240.197.238])
 by smtp.gmail.com with ESMTPSA id z1sm3961362eji.92.2020.07.01.00.03.49
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Wed, 01 Jul 2020 00:03:50 -0700 (PDT)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
To: =?utf-8?Q?'Philippe_Mathieu-Daud=C3=A9'?= <philmd@redhat.com>,
 <xen-devel@lists.xenproject.org>, <qemu-devel@nongnu.org>
References: <20200624121841.17971-1-paul@xen.org>
 <20200624121841.17971-3-paul@xen.org>
 <33e594dd-dbfa-7c57-1cf5-0852e8fc8e1d@redhat.com>
 <000701d64ef5$6568f660$303ae320$@xen.org>
 <9e591254-d215-d5af-38d2-fd5b65f84a43@redhat.com>
In-Reply-To: <9e591254-d215-d5af-38d2-fd5b65f84a43@redhat.com>
Subject: RE: [PATCH 2/2] xen: cleanup unrealized flash devices
Date: Wed, 1 Jul 2020 08:03:49 +0100
Message-ID: <000801d64f75$c604f570$520ee050$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="utf-8"
Content-Transfer-Encoding: quoted-printable
X-Mailer: Microsoft Outlook 16.0
Thread-Index: AQJSqy6H9p+wwq7WTKLIQgTsJ4xkIwGyH0+lAvluRd8CNjldPAM3wS46p6jIZ4A=
Content-Language: en-gb
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Reply-To: paul@xen.org
Cc: 'Eduardo Habkost' <ehabkost@redhat.com>,
 'Jason Andryuk' <jandryuk@gmail.com>, 'Paul Durrant' <pdurrant@amazon.com>,
 "'Michael S. Tsirkin'" <mst@redhat.com>, 'Paolo Bonzini' <pbonzini@redhat.com>,
 'Richard Henderson' <rth@twiddle.net>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> -----Original Message-----
> From: Philippe Mathieu-Daudé <philmd@redhat.com>
> Sent: 30 June 2020 18:27
> To: paul@xen.org; xen-devel@lists.xenproject.org; qemu-devel@nongnu.org
> Cc: 'Eduardo Habkost' <ehabkost@redhat.com>; 'Michael S. Tsirkin' <mst@redhat.com>; 'Paul Durrant'
> <pdurrant@amazon.com>; 'Jason Andryuk' <jandryuk@gmail.com>; 'Paolo Bonzini' <pbonzini@redhat.com>;
> 'Richard Henderson' <rth@twiddle.net>
> Subject: Re: [PATCH 2/2] xen: cleanup unrealized flash devices
>
> On 6/30/20 5:44 PM, Paul Durrant wrote:
> >> -----Original Message-----
> >> From: Philippe Mathieu-Daudé <philmd@redhat.com>
> >> Sent: 30 June 2020 16:26
> >> To: Paul Durrant <paul@xen.org>; xen-devel@lists.xenproject.org; qemu-devel@nongnu.org
> >> Cc: Eduardo Habkost <ehabkost@redhat.com>; Michael S. Tsirkin <mst@redhat.com>; Paul Durrant
> >> <pdurrant@amazon.com>; Jason Andryuk <jandryuk@gmail.com>; Paolo Bonzini <pbonzini@redhat.com>;
> >> Richard Henderson <rth@twiddle.net>
> >> Richard Henderson <rth@twiddle.net>
> >> Subject: Re: [PATCH 2/2] xen: cleanup unrealized flash devices
> >>
> >> On 6/24/20 2:18 PM, Paul Durrant wrote:
> >>> From: Paul Durrant <pdurrant@amazon.com>
> >>>
> >>> The generic pc_machine_initfn() calls pc_system_flash_create() which creates
> >>> 'system.flash0' and 'system.flash1' devices. These devices are then realized
> >>> by pc_system_flash_map() which is called from pc_system_firmware_init() which
> >>> itself is called via pc_memory_init(). The latter however is not called when
> >>> xen_enabled() is true and hence the following assertion fails:
> >>>
> >>> qemu-system-i386: hw/core/qdev.c:439: qdev_assert_realized_properly:
> >>> Assertion `dev->realized' failed
> >>>
> >>> These flash devices are unneeded when using Xen so this patch avoids the
> >>> assertion by simply removing them using pc_system_flash_cleanup_unused().
> >>>
> >>> Reported-by: Jason Andryuk <jandryuk@gmail.com>
> >>> Fixes: ebc29e1beab0 ("pc: Support firmware configuration with -blockdev")
> >>> Signed-off-by: Paul Durrant <pdurrant@amazon.com>
> >>> Tested-by: Jason Andryuk <jandryuk@gmail.com>
> >>> ---
> >>> Cc: Paolo Bonzini <pbonzini@redhat.com>
> >>> Cc: Richard Henderson <rth@twiddle.net>
> >>> Cc: Eduardo Habkost <ehabkost@redhat.com>
> >>> Cc: "Michael S. Tsirkin" <mst@redhat.com>
> >>> Cc: Marcel Apfelbaum <marcel.apfelbaum@gmail.com>
> >>> ---
> >>>  hw/i386/pc_piix.c    | 9 ++++++---
> >>>  hw/i386/pc_sysfw.c   | 2 +-
> >>>  include/hw/i386/pc.h | 1 +
> >>>  3 files changed, 8 insertions(+), 4 deletions(-)
> >>>
> >>> diff --git a/hw/i386/pc_piix.c b/hw/i386/pc_piix.c
> >>> index 1497d0e4ae..977d40afb8 100644
> >>> --- a/hw/i386/pc_piix.c
> >>> +++ b/hw/i386/pc_piix.c
> >>> @@ -186,9 +186,12 @@ static void pc_init1(MachineState *machine,
> >>>      if (!xen_enabled()) {
> >>>          pc_memory_init(pcms, system_memory,
> >>>                         rom_memory, &ram_memory);
> >>> -    } else if (machine->kernel_filename != NULL) {
> >>> -        /* For xen HVM direct kernel boot, load linux here */
> >>> -        xen_load_linux(pcms);
> >>> +    } else {
> >>> +        pc_system_flash_cleanup_unused(pcms);
> >>
> >> TIL pc_system_flash_cleanup_unused().
> >>
> >> What about restricting at the source?
> >>
> >
> > And leave the devices in place? They are not relevant for Xen, so why not clean up?
>
> No, I meant to not create them in the first place, instead of
> create+destroy.
>
> Anyway what you did works, so I don't have any problem.

IIUC Jason originally tried restricting creation but encountered a
problem because xen_enabled() would always return false at that point,
because machine creation occurs before accelerators are initialized.

  Paul

>
> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
>
> >
> >   Paul
> >
> >> -- >8 --
> >> --- a/hw/i386/pc.c
> >> +++ b/hw/i386/pc.c
> >> @@ -1004,24 +1004,26 @@ void pc_memory_init(PCMachineState *pcms,
> >>                                      &machine->device_memory->mr);
> >>      }
> >>
> >> -    /* Initialize PC system firmware */
> >> -    pc_system_firmware_init(pcms, rom_memory);
> >> -
> >> -    option_rom_mr = g_malloc(sizeof(*option_rom_mr));
> >> -    memory_region_init_ram(option_rom_mr, NULL, "pc.rom", PC_ROM_SIZE,
> >> -                           &error_fatal);
> >> -    if (pcmc->pci_enabled) {
> >> -        memory_region_set_readonly(option_rom_mr, true);
> >> -    }
> >> -    memory_region_add_subregion_overlap(rom_memory,
> >> -                                        PC_ROM_MIN_VGA,
> >> -                                        option_rom_mr,
> >> -                                        1);
> >> -
> >>      fw_cfg = fw_cfg_arch_create(machine,
> >>                                  x86ms->boot_cpus, x86ms->apic_id_limit);
> >>
> >> -    rom_set_fw(fw_cfg);
> >> +    /* Initialize PC system firmware */
> >> +    if (!xen_enabled()) {
> >> +        pc_system_firmware_init(pcms, rom_memory);
> >> +
> >> +        option_rom_mr = g_malloc(sizeof(*option_rom_mr));
> >> +        memory_region_init_ram(option_rom_mr, NULL, "pc.rom", PC_ROM_SIZE,
> >> +                               &error_fatal);
> >> +        if (pcmc->pci_enabled) {
> >> +            memory_region_set_readonly(option_rom_mr, true);
> >> +        }
> >> +        memory_region_add_subregion_overlap(rom_memory,
> >> +                                            PC_ROM_MIN_VGA,
> >> +                                            option_rom_mr,
> >> +                                            1);
> >> +
> >> +        rom_set_fw(fw_cfg);
> >> +    }
> >>
> >>      if (pcmc->has_reserved_memory && machine->device_memory->base) {
> >>          uint64_t *val = g_malloc(sizeof(*val));
> >> ---
> >>
> >>> +        if (machine->kernel_filename != NULL) {
> >>> +            /* For xen HVM direct kernel boot, load linux here */
> >>> +            xen_load_linux(pcms);
> >>> +        }
> >>>      }
> >>>
> >>>      gsi_state = pc_gsi_create(&x86ms->gsi, pcmc->pci_enabled);
> >>> diff --git a/hw/i386/pc_sysfw.c b/hw/i386/pc_sysfw.c
> >>> index ec2a3b3e7e..0ff47a4b59 100644
> >>> --- a/hw/i386/pc_sysfw.c
> >>> +++ b/hw/i386/pc_sysfw.c
> >>> @@ -108,7 +108,7 @@ void pc_system_flash_create(PCMachineState *pcms)
> >>>      }
> >>>  }
> >>>
> >>> -static void pc_system_flash_cleanup_unused(PCMachineState *pcms)
> >>> +void pc_system_flash_cleanup_unused(PCMachineState *pcms)
> >>>  {
> >>>      char *prop_name;
> >>>      int i;
> >>> diff --git a/include/hw/i386/pc.h b/include/hw/i386/pc.h
> >>> index e6135c34d6..497f2b7ab7 100644
> >>> --- a/include/hw/i386/pc.h
> >>> +++ b/include/hw/i386/pc.h
> >>> @@ -187,6 +187,7 @@ int cmos_get_fd_drive_type(FloppyDriveType fd0);
> >>>
> >>>  /* pc_sysfw.c */
> >>>  void pc_system_flash_create(PCMachineState *pcms);
> >>> +void pc_system_flash_cleanup_unused(PCMachineState *pcms);
> >>>  void pc_system_firmware_init(PCMachineState *pcms, MemoryRegion *rom_memory);
> >>>
> >>>  /* acpi-build.c */
> >>>
> >
> >




From xen-devel-bounces@lists.xenproject.org Wed Jul 01 07:19:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jul 2020 07:19:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jqX1W-0001dB-Sq; Wed, 01 Jul 2020 07:19:14 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=r09v=AM=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jqX1V-0001d6-Dd
 for xen-devel@lists.xenproject.org; Wed, 01 Jul 2020 07:19:13 +0000
X-Inumbo-ID: 28df3ba4-bb6b-11ea-bca7-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 28df3ba4-bb6b-11ea-bca7-bc764e2007e4;
 Wed, 01 Jul 2020 07:19:11 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=P1dQVb4d5nM63GK5fgE73Gs4XB+Iq1bw7Tqlk73aimU=; b=aEiKt6uwAJm/IED5fM8d5WkdY
 6CoJaDJBq3yXG9XLX+JXjLYMLLgti7oTyPCPNCcxfNwntmJ0EMRtAZET58OPx/pvuCDMzlQ9FkJjj
 uFje2pssmU4oG5hRfuhOQQSpDZfsR3vrXeG+66dxkT3leSDPkOUeCtwrIsVrQakJtEmQo=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jqX1S-0003ZB-JP; Wed, 01 Jul 2020 07:19:10 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jqX1S-0005tp-BJ; Wed, 01 Jul 2020 07:19:10 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jqX1S-0007AK-Ad; Wed, 01 Jul 2020 07:19:10 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151485-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 151485: regressions - FAIL
X-Osstest-Failures: qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-amd64-libvirt-vhd:debian-di-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-libvirt-xsm:guest-start:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qcow2:debian-di-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:redhat-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-libvirt-pair:guest-start/debian:fail:regression
 qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-freebsd10-i386:guest-start:fail:regression
 qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:redhat-install:fail:regression
 qemu-mainline:test-amd64-amd64-libvirt:guest-start:fail:regression
 qemu-mainline:test-amd64-amd64-libvirt-xsm:guest-start:fail:regression
 qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-start:fail:regression
 qemu-mainline:test-amd64-i386-libvirt:guest-start:fail:regression
 qemu-mainline:test-amd64-amd64-libvirt-pair:guest-start/debian:fail:regression
 qemu-mainline:test-armhf-armhf-xl-vhd:debian-di-install:fail:regression
 qemu-mainline:test-arm64-arm64-libvirt-xsm:guest-start:fail:regression
 qemu-mainline:test-armhf-armhf-libvirt-raw:debian-di-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:windows-install:fail:regression
 qemu-mainline:test-armhf-armhf-libvirt:guest-start:fail:regression
 qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: qemuu=fc1bff958998910ec8d25db86cd2f53ff125f7ab
X-Osstest-Versions-That: qemuu=9e3903136d9acde2fb2dd9e967ba928050a6cb4a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 01 Jul 2020 07:19:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151485 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151485/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-ovmf-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-win7-amd64 10 windows-install  fail REGR. vs. 151065
 test-amd64-amd64-libvirt-vhd 10 debian-di-install        fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-win7-amd64 10 windows-install   fail REGR. vs. 151065
 test-amd64-amd64-qemuu-nested-intel 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-libvirt-xsm  12 guest-start              fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-ovmf-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-qemuu-nested-amd 10 debian-hvm-install  fail REGR. vs. 151065
 test-amd64-amd64-xl-qcow2    10 debian-di-install        fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-qemuu-rhel6hvm-amd 10 redhat-install     fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-ws16-amd64 10 windows-install   fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-debianhvm-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-libvirt-pair 21 guest-start/debian       fail REGR. vs. 151065
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-freebsd10-i386 11 guest-start            fail REGR. vs. 151065
 test-amd64-i386-qemuu-rhel6hvm-intel 10 redhat-install   fail REGR. vs. 151065
 test-amd64-amd64-libvirt     12 guest-start              fail REGR. vs. 151065
 test-amd64-amd64-libvirt-xsm 12 guest-start              fail REGR. vs. 151065
 test-amd64-i386-freebsd10-amd64 11 guest-start           fail REGR. vs. 151065
 test-amd64-i386-libvirt      12 guest-start              fail REGR. vs. 151065
 test-amd64-amd64-libvirt-pair 21 guest-start/debian      fail REGR. vs. 151065
 test-armhf-armhf-xl-vhd      10 debian-di-install        fail REGR. vs. 151065
 test-arm64-arm64-libvirt-xsm 12 guest-start              fail REGR. vs. 151065
 test-armhf-armhf-libvirt-raw 10 debian-di-install        fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-ws16-amd64 10 windows-install  fail REGR. vs. 151065
 test-armhf-armhf-libvirt     12 guest-start              fail REGR. vs. 151065

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                fc1bff958998910ec8d25db86cd2f53ff125f7ab
baseline version:
 qemuu                9e3903136d9acde2fb2dd9e967ba928050a6cb4a

Last test of basis   151065  2020-06-12 22:27:51 Z   18 days
Failing since        151101  2020-06-14 08:32:51 Z   16 days   18 attempts
Testing same since   151471  2020-06-30 05:19:07 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ahmed Karaman <ahmedkhaledkaraman@gmail.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Bulekov <alxndr@bu.edu>
  Alexey Krasikov <alex-krasikov@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Allan Peramaki <aperamak@pp1.inet.fi>
  Andrew Jones <drjones@redhat.com>
  Ani Sinha <ani.sinha@nutanix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Ard Biesheuvel <ardb@kernel.org>
  Artyom Tarasenko <atar4qemu@gmail.com>
  Aurelien Jarno <aurelien@aurel32.net>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bin Meng <bin.meng@windriver.com>
  Cathy Zhang <cathy.zhang@intel.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Claudio Fontana <cfontana@suse.de>
  Cleber Rosa <crosa@redhat.com>
  Colin Xu <colin.xu@intel.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniele Buono <dbuono@linux.vnet.ibm.com>
  David Carlier <devnexen@gmail.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Derek Su <dereksu@qnap.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Emilio G. Cota <cota@braap.org>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Evgeny Yakovlev <eyakovlev@virtuozzo.com>
  fangying <fangying1@huawei.com>
  Farhan Ali <alifm@linux.ibm.com>
  Finn Thain <fthain@telegraphics.com.au>
  Geoffrey McRae <geoff@hostfission.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Greg Kurz <groug@kaod.org>
  Guenter Roeck <linux@roeck-us.net>
  Gustavo Romero <gromero@linux.ibm.com>
  Helge Deller <deller@gmx.de>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Ian Jiang <ianjiang.ict@gmail.com>
  Igor Mammedov <imammedo@redhat.com>
  Janne Grunau <j@jannau.net>
  Jason Wang <jasowang@redhat.com>
  Jay Zhou <jianjay.zhou@huawei.com>
  Jean-Christophe Dubois <jcd@tribudubois.net>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jingqi Liu <jingqi.liu@intel.com>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Joseph Myers <joseph@codesourcery.com>
  Julio Faracco <jcfaracco@gmail.com>
  Kevin Wolf <kwolf@redhat.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Leonid Bloch <lb.workbox@gmail.com>
  Leonid Bloch <lbloch@janustech.com>
  Li Qiang <liq3ea@gmail.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Like Xu <like.xu@linux.intel.com>
  Lingfeng Yang <lfy@google.com>
  Liran Alon <liran.alon@oracle.com>
  Lukas Straub <lukasstraub2@web.de>
  Maciej S. Szmigiero <maciej.szmigiero@oracle.com>
  Magnus Damm <magnus.damm@gmail.com>
  Mao Zhongyi <maozhongyi@cmss.chinamobile.com>
  Marcelo Tosatti <mtosatti@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Masahiro Yamada <masahiroy@kernel.org>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Michael S. Tsirkin <mst@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Raphael Norwitz <raphael.norwitz@nutanix.com>
  Richard Henderson <richard.henderson@linaro.org>
  Richard W.M. Jones <rjones@redhat.com>
  Robert Foley <robert.foley@linaro.org>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Roman Kagan <rkagan@virtuozzo.com>
  Roman Kagan <rvkagan@yandex-team.ru>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Sebastian Rasmussen <sebras@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Tao Xu <tao3.xu@intel.com>
  Thomas Huth <thuth@redhat.com>
  Tong Ho <tong.ho@xilinx.com>
  Volker Rümelin <vr_qemu@t-online.de>
  WangBowen <bowen.wang@intel.com>
  Wei Huang <wei.huang2@amd.com>
  Wei Wang <wei.w.wang@intel.com>
  Ying Fang <fangying1@huawei.com>
  Yoshinori Sato <ysato@users.sourceforge.jp>
  Yuri Benditovich <yuri.benditovich@daynix.com>
  Zhang Chen <chen.zhang@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 14425 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Jul 01 07:19:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jul 2020 07:19:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jqX1l-0001fA-9u; Wed, 01 Jul 2020 07:19:29 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=QDW7=AM=gmail.com=andr2000@srs-us1.protection.inumbo.net>)
 id 1jqX1k-0001f0-9R
 for xen-devel@lists.xenproject.org; Wed, 01 Jul 2020 07:19:28 +0000
X-Inumbo-ID: 323dc5c6-bb6b-11ea-bb8b-bc764e2007e4
Received: from mail-lj1-x244.google.com (unknown [2a00:1450:4864:20::244])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 323dc5c6-bb6b-11ea-bb8b-bc764e2007e4;
 Wed, 01 Jul 2020 07:19:27 +0000 (UTC)
Received: by mail-lj1-x244.google.com with SMTP id h22so18537886lji.9
 for <xen-devel@lists.xenproject.org>; Wed, 01 Jul 2020 00:19:27 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:to:cc:subject:date:message-id;
 bh=Dzidh0Nswx9r0je/a6UkI0vRAjpCHlIaVow9ZwgeZoY=;
 b=SWcZvAHvHllVZRCfxUBGg8Rg05tjPl8Woq7ZnTkpvLRnA9jgCokK28s9asoUU5Vx9e
 LZlwGaVsrofdhr+9PSjihfd2v8d2poYKcjtVKmlxHhq+ca0DtYiBIMC0TU+fQ4ypRT02
 zcZmv9nyU7Zvdk4RS1cC8WiuKG8foMbNnbHlC6c0cjxcaclMWcRpwsaPBhYkDVi8M+2Y
 72e06QiAThuTKvxMEWlJXdbyTZotjIFun2iY9eAibIXNNoNsaTDK0cLnkYYDqJJdAtSh
 8BTdoBfZo1zdJ4AFrfHCR7pMiT/m3dK2d34dfOxHekoyKMCjRV2aUtRMGS/BZRP766WX
 ErOw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:to:cc:subject:date:message-id;
 bh=Dzidh0Nswx9r0je/a6UkI0vRAjpCHlIaVow9ZwgeZoY=;
 b=GRFaxKkf0l/qYXsFsdrvY8QQg1AotkeHFKARbDHP2uBbwLl7hDkD4eJpLbv5q8famS
 F6v01pPSbl7iHHErFBvhVxRH5qZG6ld1tRaXQC3idON776fr150VaQFv+Dx5IRkNFw9o
 mAEFt8HkiWXEvit1gWpvIDfV29DtZQPVQLQxT5GFNcSrDUxWrftk7JwdwO51sk6MLoCa
 e7suuGdeGxA7cOczssFenUjV9JN+dAjM9vjOTkr0Jr/OcKiY6yyOLR4zYPmdFJqD6nXl
 BJomkPmV6sZ6IbYskjykn645ukZpC4M/vA6X4wGjJmQCDHx0L8X7iYpMqKtjuDT8/pa4
 tqiw==
X-Gm-Message-State: AOAM533CrxfJeq/8MbNkH0HtzlOGUmy4Y3qdec1QzpX2RBBWtXwKyKwH
 /xsyJ9mxoc9Wf929yNABs7OAXSpEhQs=
X-Google-Smtp-Source: ABdhPJzHeP5UUKvdKY9UVyzftvsX49OGMGlY57Lr8VA82TTNT1JkFN1xWJw8Jcb1mGcoT2PMhv+p2A==
X-Received: by 2002:a2e:a494:: with SMTP id h20mr7376403lji.435.1593587965653; 
 Wed, 01 Jul 2020 00:19:25 -0700 (PDT)
Received: from a2klaptop.localdomain ([185.199.97.5])
 by smtp.gmail.com with ESMTPSA id z11sm1501163ljh.115.2020.07.01.00.19.24
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 01 Jul 2020 00:19:24 -0700 (PDT)
From: Oleksandr Andrushchenko <andr2000@gmail.com>
To: xen-devel@lists.xenproject.org, jgross@suse.com, ian.jackson@eu.citrix.com,
 wl@xen.org
Subject: [PATCH v2] xen/displif: Protocol version 2
Date: Wed,  1 Jul 2020 10:19:23 +0300
Message-Id: <20200701071923.18883-1-andr2000@gmail.com>
X-Mailer: git-send-email 2.17.1
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>

1. Add protocol version as an integer

The version string, which is in fact an integer, is hard to handle in
code that supports different protocol versions. To simplify such code,
also provide the version as an integer.

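As an illustration (not part of the patch), a frontend might parse the
XenStore version string once and compare plain integers from then on;
the helper names below are hypothetical:

```c
#include <stdlib.h>

#define XENDISPL_PROTOCOL_VERSION_INT  2

/* Hypothetical frontend helper: turn the XenStore "version" string
 * into an integer, or -1 if it is malformed. */
static int displ_parse_version(const char *version_str)
{
    char *end;
    long v = strtol(version_str, &end, 10);

    if (end == version_str || *end != '\0' || v < 1)
        return -1;
    return (int)v;
}

/* With an integer in hand, feature checks become trivial comparisons. */
static int displ_has_get_edid(int version)
{
    return version >= XENDISPL_PROTOCOL_VERSION_INT;
}
```
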
2. Pass buffer offset with XENDISPL_OP_DBUF_CREATE

There are cases when the display data buffer is created with a non-zero
offset to the start of its data. Handle such cases by providing that
offset when creating a display buffer.

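Purely as a sketch (not part of the patch): with data_ofs known, a
backend can locate where the data starts inside the granted buffer,
assuming 4 KiB grant pages:

```c
#include <stdint.h>

#define XEN_PAGE_SIZE 4096u

/* Illustrative only: given the data_ofs passed in
 * XENDISPL_OP_DBUF_CREATE, find the first page of actual data in the
 * grant page directory and the byte offset within that page. */
struct data_pos {
    uint32_t page_idx;   /* index into the grant page directory */
    uint32_t page_ofs;   /* byte offset within that page */
};

static struct data_pos displ_locate_data(uint32_t data_ofs)
{
    struct data_pos p = {
        .page_idx = data_ofs / XEN_PAGE_SIZE,
        .page_ofs = data_ofs % XEN_PAGE_SIZE,
    };
    return p;
}
```
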
3. Add XENDISPL_OP_GET_EDID command

Add an optional request for reading the Extended Display Identification
Data (EDID) structure, which allows better configuration of the
display connectors than the configuration set in XenStore.
With this change connectors may have multiple resolutions defined,
together with detailed timing definitions and additional properties
normally provided by displays.

If this request is not supported by the backend, then the visible area
is defined by the relevant XenStore "resolution" property.

If the backend provides extended display identification data (EDID)
via the XENDISPL_OP_GET_EDID request, then the EDID values must take
precedence over the resolutions defined in XenStore.

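For illustration only (not part of the patch): standard EDID blocks
are 128 octets whose bytes sum to 0 modulo 256, so a frontend could
sanity-check the backend-provided data roughly like this (the helper
name is hypothetical):

```c
#include <stdint.h>

#define XENDISPL_EDID_BLOCK_SIZE 128

/* Sketch of a frontend-side sanity check on the EDID returned by
 * XENDISPL_OP_GET_EDID: edid_sz must be a whole number of 128-octet
 * blocks, and each block's bytes must sum to 0 modulo 256. */
static int displ_edid_valid(const uint8_t *edid, uint32_t edid_sz)
{
    uint32_t blk, i;

    if (edid_sz == 0 || edid_sz % XENDISPL_EDID_BLOCK_SIZE)
        return 0;

    for (blk = 0; blk < edid_sz; blk += XENDISPL_EDID_BLOCK_SIZE) {
        uint8_t sum = 0;  /* wraps modulo 256 by design */

        for (i = 0; i < XENDISPL_EDID_BLOCK_SIZE; i++)
            sum += edid[blk + i];
        if (sum != 0)
            return 0;
    }
    return 1;
}
```
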
4. Bump protocol version to 2.

Signed-off-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
---
 xen/include/public/io/displif.h | 91 +++++++++++++++++++++++++++++++--
 1 file changed, 88 insertions(+), 3 deletions(-)

diff --git a/xen/include/public/io/displif.h b/xen/include/public/io/displif.h
index cc5de9cb1f35..0055895510f7 100644
--- a/xen/include/public/io/displif.h
+++ b/xen/include/public/io/displif.h
@@ -38,7 +38,8 @@
  *                           Protocol version
  ******************************************************************************
  */
-#define XENDISPL_PROTOCOL_VERSION     "1"
+#define XENDISPL_PROTOCOL_VERSION     "2"
+#define XENDISPL_PROTOCOL_VERSION_INT  2
 
 /*
  ******************************************************************************
@@ -202,6 +203,9 @@
  *      Width and height of the connector in pixels separated by
  *      XENDISPL_RESOLUTION_SEPARATOR. This defines visible area of the
  *      display.
+ *      If backend provides extended display identification data (EDID) with
+ *      XENDISPL_OP_GET_EDID request then EDID values must take precedence
+ *      over the resolutions defined here.
  *
  *------------------ Connector Request Transport Parameters -------------------
  *
@@ -349,6 +353,8 @@
 #define XENDISPL_OP_FB_DETACH         0x13
 #define XENDISPL_OP_SET_CONFIG        0x14
 #define XENDISPL_OP_PG_FLIP           0x15
+/* The below command is available in protocol version 2 and above. */
+#define XENDISPL_OP_GET_EDID          0x16
 
 /*
  ******************************************************************************
@@ -377,6 +383,10 @@
 #define XENDISPL_FIELD_BE_ALLOC       "be-alloc"
 #define XENDISPL_FIELD_UNIQUE_ID      "unique-id"
 
+#define XENDISPL_EDID_BLOCK_SIZE      128
+#define XENDISPL_EDID_BLOCK_COUNT     256
+#define XENDISPL_EDID_MAX_SIZE        (XENDISPL_EDID_BLOCK_SIZE * XENDISPL_EDID_BLOCK_COUNT)
+
 /*
  ******************************************************************************
  *                          STATUS RETURN CODES
@@ -451,7 +461,9 @@
  * +----------------+----------------+----------------+----------------+
  * |                           gref_directory                          | 40
  * +----------------+----------------+----------------+----------------+
- * |                             reserved                              | 44
+ * |                             data_ofs                              | 44
+ * +----------------+----------------+----------------+----------------+
+ * |                             reserved                              | 48
  * +----------------+----------------+----------------+----------------+
  * |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/|
  * +----------------+----------------+----------------+----------------+
@@ -494,6 +506,7 @@
  *   buffer size (buffer_sz) exceeds what can be addressed by this single page,
  *   then reference to the next page must be supplied (see gref_dir_next_page
  *   below)
+ * data_ofs - uint32_t, offset of the data in the buffer, octets
  */
 
 #define XENDISPL_DBUF_FLG_REQ_ALLOC       (1 << 0)
@@ -506,6 +519,7 @@ struct xendispl_dbuf_create_req {
     uint32_t buffer_sz;
     uint32_t flags;
     grant_ref_t gref_directory;
+    uint32_t data_ofs;
 };
 
 /*
@@ -731,6 +745,44 @@ struct xendispl_page_flip_req {
     uint64_t fb_cookie;
 };
 
+/*
+ * Request EDID - request EDID describing current connector:
+ *         0                1                 2               3        octet
+ * +----------------+----------------+----------------+----------------+
+ * |               id                | _OP_GET_EDID   |   reserved     | 4
+ * +----------------+----------------+----------------+----------------+
+ * |                             buffer_sz                             | 8
+ * +----------------+----------------+----------------+----------------+
+ * |                          gref_directory                           | 12
+ * +----------------+----------------+----------------+----------------+
+ * |                             reserved                              | 16
+ * +----------------+----------------+----------------+----------------+
+ * |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/|
+ * +----------------+----------------+----------------+----------------+
+ * |                             reserved                              | 64
+ * +----------------+----------------+----------------+----------------+
+ *
+ * Notes:
+ *   - This command is not available in protocol version 1 and should be
+ *     ignored.
+ *   - This request is optional and if not supported then visible area
+ *     is defined by the relevant XenStore's "resolution" property.
+ *   - Shared buffer, allocated for EDID storage, must not be less than
+ *     XENDISPL_EDID_MAX_SIZE octets.
+ *
+ * buffer_sz - uint32_t, buffer size to be allocated, octets
+ * gref_directory - grant_ref_t, a reference to the first shared page
+ *   describing EDID buffer references. See XENDISPL_OP_DBUF_CREATE for
+ *   grant page directory structure (struct xendispl_page_directory).
+ *
+ * See response format for this request.
+ */
+
+struct xendispl_get_edid_req {
+    uint32_t buffer_sz;
+    grant_ref_t gref_directory;
+};
+
 /*
  *---------------------------------- Responses --------------------------------
  *
@@ -753,6 +805,35 @@ struct xendispl_page_flip_req {
  * id - uint16_t, private guest value, echoed from request
  * status - int32_t, response status, zero on success and -XEN_EXX on failure
  *
+ *
+ * Get EDID response - response for XENDISPL_OP_GET_EDID:
+ *         0                1                 2               3        octet
+ * +----------------+----------------+----------------+----------------+
+ * |               id                |    operation   |    reserved    | 4
+ * +----------------+----------------+----------------+----------------+
+ * |                              status                               | 8
+ * +----------------+----------------+----------------+----------------+
+ * |                             edid_sz                               | 12
+ * +----------------+----------------+----------------+----------------+
+ * |                             reserved                              | 16
+ * +----------------+----------------+----------------+----------------+
+ * |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/|
+ * +----------------+----------------+----------------+----------------+
+ * |                             reserved                              | 64
+ * +----------------+----------------+----------------+----------------+
+ *
+ * Notes:
+ *   - This response is not available in protocol version 1 and should be
+ *     ignored.
+ *
+ * edid_sz - uint32_t, size of the EDID, octets
+ */
+
+struct xendispl_get_edid_resp {
+    uint32_t edid_sz;
+};
+
+/*
  *----------------------------------- Events ----------------------------------
  *
  * Events are sent via a shared page allocated by the front and propagated by
@@ -804,6 +885,7 @@ struct xendispl_req {
         struct xendispl_fb_detach_req fb_detach;
         struct xendispl_set_config_req set_config;
         struct xendispl_page_flip_req pg_flip;
+        struct xendispl_get_edid_req get_edid;
         uint8_t reserved[56];
     } op;
 };
@@ -813,7 +895,10 @@ struct xendispl_resp {
     uint8_t operation;
     uint8_t reserved;
     int32_t status;
-    uint8_t reserved1[56];
+    union {
+        struct xendispl_get_edid_resp get_edid;
+        uint8_t reserved1[56];
+    } op;
 };
 
 struct xendispl_evt {
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Wed Jul 01 07:53:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jul 2020 07:53:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jqXYV-0004sF-2F; Wed, 01 Jul 2020 07:53:19 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=w4aC=AM=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jqXYT-0004sA-F5
 for xen-devel@lists.xenproject.org; Wed, 01 Jul 2020 07:53:17 +0000
X-Inumbo-ID: e9736c56-bb6f-11ea-86d7-12813bfff9fa
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e9736c56-bb6f-11ea-86d7-12813bfff9fa;
 Wed, 01 Jul 2020 07:53:12 +0000 (UTC)
Authentication-Results: esa6.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: fu4Ta5yH48MNwr0gP2MVL6yw9LSfyA5rlJq8yvKUM8DgJerDobGFXP5s5Y5Vowq8bH1xZ895ok
 TNKWZvURzOpr2Db5nmRC+iie24wL0URnYhjvZ3GZfsBsTEea/XZH3fcDLlhJd0K3JlkdQHAsJm
 wax/R2cohQpN7wNZzhkPU4y/UpX27bhjAFxwWq7JOeZeKdQwae+VnCPQxvvGKX2nnoL/klr22b
 URHOhlRPdbmYDESKpUS0b/a17KI+LklSqE9swgF/wM4pzuXyCZWZvOS0Mu+zN9YkWede3EmEKu
 srI=
X-SBRS: 2.7
X-MesageID: 21708591
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,299,1589256000"; d="scan'208";a="21708591"
Date: Wed, 1 Jul 2020 09:52:57 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Anthony PERARD <anthony.perard@citrix.com>
Subject: Re: [XEN PATCH] hvmloader: Fix reading ACPI PM1 CNT value
Message-ID: <20200701075257.GM735@Air-de-Roger>
References: <20200630170913.123646-1-anthony.perard@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
In-Reply-To: <20200630170913.123646-1-anthony.perard@citrix.com>
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, Ian Jackson <ian.jackson@eu.citrix.com>,
 Wei Liu <wl@xen.org>, Jan Beulich <jbeulich@suse.com>, Andrew
 Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Tue, Jun 30, 2020 at 06:09:13PM +0100, Anthony PERARD wrote:
> In order to get the CNT value from QEMU, we were supposed to read a
> word, according to the implementation in QEMU. But QEMU had been lax
> and allowed reading a single byte. This changed with commit
> 5d971f9e6725 ("memory: Revert "memory: accept mismatching sizes in
> memory_region_access_valid"") and results in hvmloader crashing on
> the BUG_ON.

This is a bug on the QEMU side: the ACPI spec states that "Accesses to
PM1 control registers are accessed through byte and word accesses."
That's in section 4.8.3.2.1 (PM1 Control Registers) of my copy of the
ACPI spec (6.2A).
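
To make that rule concrete, here is a toy model (not hvmloader or QEMU
code) of the access sizes a spec-conforming PM1 control register
implementation has to accept:

```c
/* Toy model of ACPI 6.2A section 4.8.3.2.1: PM1 control registers are
 * "accessed through byte and word accesses", so an emulator following
 * the spec must accept 1- and 2-byte accesses and may reject others. */
static int pm1_cnt_access_valid(unsigned int size_bytes)
{
    return size_bytes == 1 || size_bytes == 2;
}
```
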

> 
> Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>

I'm fine with this if such bogus behavior has made its way into a
released version of QEMU, but the commit message needs to state that
this is a workaround for a QEMU bug, not a bug in hvmloader.

IMO the QEMU change should be reverted.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Wed Jul 01 09:02:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jul 2020 09:02:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jqYdH-0002uB-4L; Wed, 01 Jul 2020 09:02:19 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=w4aC=AM=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jqYdG-0002u4-CH
 for xen-devel@lists.xenproject.org; Wed, 01 Jul 2020 09:02:18 +0000
X-Inumbo-ID: 902528b0-bb79-11ea-bca7-bc764e2007e4
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 902528b0-bb79-11ea-bca7-bc764e2007e4;
 Wed, 01 Jul 2020 09:02:17 +0000 (UTC)
Authentication-Results: esa4.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: i8xnSH4MPIXch7CDs6lVi0mHG8WXUgDdXmURI54kbExyL3Z1gYHmQ/MIONC1VwCIl9dXwU8Y4K
 lPaCcdjNGB1TeAr6GCrU2lk9S+Y+oxcBn7kNkvmDDXTxMsNZT5gzzgt47/3LLrzcq4jpHS1KCm
 2abuATUcY6TJFW3SoEFpPRalqkanCAH5XfkUFSC8IXz3HW+4jM6j78/VOTdRJKjM7BIUCzILsp
 AmoiT1LRijD/kM/tCGJW0A6A0Jkw0ybgaBcfD5hyd1vb+D25Ia1JR3bXCzcZgKqfEVAj4vRh3l
 GG8=
X-SBRS: 2.7
X-MesageID: 22186773
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,299,1589256000"; d="scan'208";a="22186773"
Date: Wed, 1 Jul 2020 11:02:10 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: <xen-devel@lists.xenproject.org>
Subject: vPT rework (and timer mode)
Message-ID: <20200701090210.GN735@Air-de-Roger>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Jan Beulich <jbeulich@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hello,

I've been doing some work with the virtual timers infrastructure in
order to improve some of its shortcomings. See:

https://lists.xenproject.org/archives/html/xen-devel/2020-06/msg00919.html

for an example of such issues, and of how the emulated timers are not
architecturally correct.

It's my understanding that the purpose of pt_update_irq and
pt_intr_post is to attempt to implement the "delay for missed ticks"
mode, where Xen will accumulate timer interrupts if they cannot be
injected. As shown by the patch above, this is all broken when the
timer is added to a vCPU (pt->vcpu) different from the actual target
vCPU where the interrupt gets delivered (note this can also be a list
of vCPUs if routed from the IO-APIC using Fixed mode).

I'm at a loss as to how to fix this so that virtual timers work
properly while also keeping the "delay for missed ticks" mode, short
of doing a massive rework and somehow keeping track of where injected
interrupts originated, which seems an overly complicated solution.

My proposal hence would be to completely remove timer_mode and just
treat virtual timer interrupts like other interrupts, i.e. they will
be injected from the callback (pt_timer_fn) and the vCPU(s) will be
kicked. Whether interrupts get lost (i.e. injected while a previous
one is still pending) depends on the contention on the system. I'm not
aware of any current OS that uses timer interrupts as a way to track
time; I think current OSes know the difference between a timer counter
and an event timer, and will use them appropriately.
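
As a toy model of that behaviour (not actual Xen code; all names here
are made up): the callback latches a pending tick and drops, rather
than accumulates, ticks that arrive while a previous one is still
pending:

```c
#include <stdint.h>

/* Toy model of the proposal: the timer callback only latches a
 * pending interrupt and kicks the vCPU; if a previous tick has not
 * been consumed yet, the new one is dropped instead of being
 * accumulated as "delay for missed ticks" would do. */
struct toy_vtimer {
    int irq_pending;          /* tick latched but not yet injected */
    uint64_t ticks_dropped;   /* ticks lost to contention */
    uint64_t ticks_injected;
};

static void toy_timer_fn(struct toy_vtimer *t)
{
    if (t->irq_pending)
        t->ticks_dropped++;   /* previous tick still pending: drop */
    else
        t->irq_pending = 1;   /* latch and (conceptually) kick vCPU */
}

static void toy_vcpu_ack(struct toy_vtimer *t)
{
    if (t->irq_pending) {
        t->irq_pending = 0;
        t->ticks_injected++;
    }
}
```
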

This would allow us to get rid of the pt_update_irq and pt_intr_post
calls in the VMX/SVM interrupt injection paths, and would likely
simplify the virtual timers code quite a lot. Note the guest would
then also always track the real wallclock.

AFAICT such a change would also allow us to get rid of the per-vCPU
vpt lists.

I wanted to get some feedback on this approach before starting the
work, since, as said above, it will involve dropping the timer modes.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Wed Jul 01 09:10:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jul 2020 09:10:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jqYlL-0003sd-0K; Wed, 01 Jul 2020 09:10:39 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=eXJY=AM=citrix.com=anthony.perard@srs-us1.protection.inumbo.net>)
 id 1jqYlJ-0003sY-Qy
 for xen-devel@lists.xenproject.org; Wed, 01 Jul 2020 09:10:37 +0000
X-Inumbo-ID: b9a37574-bb7a-11ea-8496-bc764e2007e4
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b9a37574-bb7a-11ea-8496-bc764e2007e4;
 Wed, 01 Jul 2020 09:10:37 +0000 (UTC)
Authentication-Results: esa6.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: 11Kw2xoQRVL7WqilHuFkwhtyXZdyrzkrO9PEu8args1R1Vh7pOTeVnDou8nwpb16a6fhSiw3c9
 WlLn3YeMG5GgizWYT44ZurqCtoQRgSNZCsecYuEnkGcRItvKIJGrdGCJNwCv+zrk1+AefS3Usz
 PPlsClfpbO5yL3vDelgTzziBeRyHktAqeT1Rki/2XBulSPIzhVvBfGSTnN6BdiQ8J6bgllYFTk
 iyJCNuIL2cB4B7kES2t3hajJigArvcDNiswFiVu3ZMPW5wAAVFIJO9+zUHfba6QEuHQH7MnI3u
 nUc=
X-SBRS: 2.7
X-MesageID: 21712682
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,299,1589256000"; d="scan'208";a="21712682"
Date: Wed, 1 Jul 2020 10:10:31 +0100
From: Anthony PERARD <anthony.perard@citrix.com>
To: Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>
Subject: Re: [XEN PATCH] hvmloader: Fix reading ACPI PM1 CNT value
Message-ID: <20200701091031.GC2030@perard.uk.xensource.com>
References: <20200630170913.123646-1-anthony.perard@citrix.com>
 <20200701075257.GM735@Air-de-Roger>
MIME-Version: 1.0
Content-Type: text/plain; charset="iso-8859-1"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20200701075257.GM735@Air-de-Roger>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, Ian Jackson <ian.jackson@eu.citrix.com>,
 Wei Liu <wl@xen.org>, Jan Beulich <jbeulich@suse.com>, Andrew
 Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Wed, Jul 01, 2020 at 09:52:57AM +0200, Roger Pau Monn wrote:
> On Tue, Jun 30, 2020 at 06:09:13PM +0100, Anthony PERARD wrote:
> > In order to get the CNT value from QEMU, we were supposed to read a
> > word, according to the implementation in QEMU. But QEMU had been lax
> > and allowed reading a single byte. This changed with commit
> > 5d971f9e6725 ("memory: Revert "memory: accept mismatching sizes in
> > memory_region_access_valid"") and results in hvmloader crashing on
> > the BUG_ON.
> 
> This is a bug on the QEMU side; the ACPI spec states: "Accesses to PM1
> control registers are accessed through byte and word accesses".
> That's in section 4.8.3.2.1 PM1 Control Registers of my copy of the
> ACPI spec (6.2A).

I guess we can ignore this patch then, and I should write a patch for
QEMU instead.

> I'm fine with this if such bogus behavior has made its way into a
> release version of QEMU, but it needs to state that it's a workaround
> for a QEMU bug, not a bug in hvmloader.

It hasn't, but might.

> IMO the QEMU change should be reverted.

The change can't be reverted; it fixes a CVE and isn't related to
ACPI. But we can fix the emulator.
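For illustration, the access-width issue can be sketched as follows. This is not hvmloader code: inw()/inb() are hypothetical stand-ins for real port I/O, backed by a simulated register so the snippet is self-contained, and the port number is an example (the real address comes from the FADT).

```c
#include <stdint.h>

#define PM1A_CNT_PORT 0xb004u   /* example port; the real one comes from the FADT */

static uint16_t pm1a_cnt_reg = 0x2001;  /* simulated register contents */

static uint16_t inw(uint16_t port)
{
    (void)port;
    return pm1a_cnt_reg;        /* full 16-bit read: always a valid access */
}

static uint8_t inb(uint16_t port)
{
    /* With strict size checking, a 1-byte access to a 2-byte region is
     * rejected by the device model; modelled here as reading all-ones. */
    (void)port;
    return 0xff;
}

/* The fix under discussion: read the PM1a CNT register with a word
 * access, matching the width the device model registers. */
static uint16_t read_pm1a_cnt(void)
{
    return inw(PM1A_CNT_PORT);
}
```

A byte read that used to "work" only did so because mismatched access sizes were tolerated; the word read is valid under both behaviors.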

> Thanks, Roger.

Thanks,

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Wed Jul 01 09:50:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jul 2020 09:50:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jqZNC-0006TH-8p; Wed, 01 Jul 2020 09:49:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=w4aC=AM=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jqZNB-0006TC-FE
 for xen-devel@lists.xenproject.org; Wed, 01 Jul 2020 09:49:45 +0000
X-Inumbo-ID: 30e13f90-bb80-11ea-b7bb-bc764e2007e4
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 30e13f90-bb80-11ea-b7bb-bc764e2007e4;
 Wed, 01 Jul 2020 09:49:44 +0000 (UTC)
Authentication-Results: esa1.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: 8NjfsZi9mpxIgJgtsrk9zDhCrFEyJe8Ljy5xso3uZS9HpODYUo1U9AlaoS3FgRi6gFPUO52Q6V
 IJ1UQwZg3ENv/ss6jlLgmJ1ovuu3RwZ5IyMikQ84n+LO6xb33IjSjsJncBVhvLWhr3V090XViI
 XErI8005NLcLSD2pBq8lDsP9Vl8jqOLGPMufwrdpW06Dd9QfI6OutmVceEf9zk7arrSg9pcu1v
 YsEuPd8yVbmYTkO4UQ3Y1ahH2pJswdjSMdQLqsqeIXyU2RB48/A8s8HIaqNv+rABHgd346JClF
 FCM=
X-SBRS: 2.7
X-MesageID: 21688457
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,299,1589256000"; d="scan'208";a="21688457"
Date: Wed, 1 Jul 2020 11:49:35 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: =?utf-8?Q?Micha=C5=82_Leszczy=C5=84ski?= <michal.leszczynski@cert.pl>
Subject: Re: [PATCH v4 02/10] x86/vmx: add IPT cpu feature
Message-ID: <20200701094935.GO735@Air-de-Roger>
References: <cover.1593519420.git.michal.leszczynski@cert.pl>
 <7302dbfcd07dfaad9e50bb772673e588fcc4de67.1593519420.git.michal.leszczynski@cert.pl>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <7302dbfcd07dfaad9e50bb772673e588fcc4de67.1593519420.git.michal.leszczynski@cert.pl>
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Julien Grall <julien@xen.org>, Kevin Tian <kevin.tian@intel.com>,
 Stefano Stabellini <sstabellini@kernel.org>, tamas.lengyel@intel.com,
 Jun Nakajima <jun.nakajima@intel.com>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, luwei.kang@intel.com,
 Jan Beulich <jbeulich@suse.com>, xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Tue, Jun 30, 2020 at 02:33:45PM +0200, Michał Leszczyński wrote:
> From: Michal Leszczynski <michal.leszczynski@cert.pl>
> 
> > Check if the Intel Processor Trace feature is supported by the
> > current processor. Define the vmtrace_supported global variable.
> 
> Signed-off-by: Michal Leszczynski <michal.leszczynski@cert.pl>
> ---
>  xen/arch/x86/hvm/vmx/vmcs.c                 | 7 ++++++-
>  xen/common/domain.c                         | 2 ++
>  xen/include/asm-x86/cpufeature.h            | 1 +
>  xen/include/asm-x86/hvm/vmx/vmcs.h          | 1 +
>  xen/include/public/arch-x86/cpufeatureset.h | 1 +
>  xen/include/xen/domain.h                    | 2 ++
>  6 files changed, 13 insertions(+), 1 deletion(-)
> 
> diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
> index ca94c2bedc..b73d824357 100644
> --- a/xen/arch/x86/hvm/vmx/vmcs.c
> +++ b/xen/arch/x86/hvm/vmx/vmcs.c
> @@ -291,6 +291,12 @@ static int vmx_init_vmcs_config(void)
>          _vmx_cpu_based_exec_control &=
>              ~(CPU_BASED_CR8_LOAD_EXITING | CPU_BASED_CR8_STORE_EXITING);
>  
> +    rdmsrl(MSR_IA32_VMX_MISC, _vmx_misc_cap);
> +
> +    /* Check whether IPT is supported in VMX operation. */
> +    vmtrace_supported = cpu_has_ipt &&
> +                        (_vmx_misc_cap & VMX_MISC_PT_SUPPORTED);

This function gets called for every CPU that's brought up, so you need
to set it on the BSP, and then check that the APs also support the
feature or else fail to bring them up, AFAICT. Otherwise you could end
up with a non-working system.

I agree it's very unlikely to boot on a system with such differences
between CPUs, but better be safe than sorry.
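As a sketch of that check (hypothetical struct and helper names; the real logic would live in vmx_init_vmcs_config() and the CPU bring-up path, using cpu_has_ipt and the VMX_MISC_PT_SUPPORTED bit of MSR_IA32_VMX_MISC):

```c
#include <stdbool.h>

struct cpu_caps {
    bool has_ipt;           /* CPUID reports Intel Processor Trace */
    bool vmx_pt_supported;  /* MSR_IA32_VMX_MISC advertises PT in VMX */
};

static bool vmtrace_supported;  /* would be __read_mostly in Xen */

/* Latch the capability on the BSP; refuse to online an AP that lacks
 * a feature the BSP advertised. */
static int vmtrace_check_cpu(const struct cpu_caps *c, bool is_bsp)
{
    bool this_cpu = c->has_ipt && c->vmx_pt_supported;

    if ( is_bsp )
    {
        vmtrace_supported = this_cpu;   /* BSP sets the baseline */
        return 0;
    }

    /* An AP missing a feature the BSP has must fail bring-up. */
    if ( vmtrace_supported && !this_cpu )
        return -1;

    return 0;
}
```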

> +
>      if ( _vmx_cpu_based_exec_control & CPU_BASED_ACTIVATE_SECONDARY_CONTROLS )
>      {
>          min = 0;
> @@ -305,7 +311,6 @@ static int vmx_init_vmcs_config(void)
>                 SECONDARY_EXEC_ENABLE_VIRT_EXCEPTIONS |
>                 SECONDARY_EXEC_XSAVES |
>                 SECONDARY_EXEC_TSC_SCALING);
> -        rdmsrl(MSR_IA32_VMX_MISC, _vmx_misc_cap);
>          if ( _vmx_misc_cap & VMX_MISC_VMWRITE_ALL )
>              opt |= SECONDARY_EXEC_ENABLE_VMCS_SHADOWING;
>          if ( opt_vpid_enabled )
> diff --git a/xen/common/domain.c b/xen/common/domain.c
> index 7cc9526139..0a33e0dfd6 100644
> --- a/xen/common/domain.c
> +++ b/xen/common/domain.c
> @@ -82,6 +82,8 @@ struct vcpu *idle_vcpu[NR_CPUS] __read_mostly;
>  
>  vcpu_info_t dummy_vcpu_info;
>  
> +bool_t vmtrace_supported;

Plain bool, and I think it wants to be __read_mostly.

I'm also unsure whether this is the best place to put such a variable;
since there are no users introduced in this patch, it's hard to tell.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Wed Jul 01 10:03:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jul 2020 10:03:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jqZaH-000881-HY; Wed, 01 Jul 2020 10:03:17 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Rv6a=AM=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jqZaG-00087w-EM
 for xen-devel@lists.xenproject.org; Wed, 01 Jul 2020 10:03:16 +0000
X-Inumbo-ID: 1493cba8-bb82-11ea-86e4-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1493cba8-bb82-11ea-86e4-12813bfff9fa;
 Wed, 01 Jul 2020 10:03:15 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=azibvWKuR1lyx61qcunppEepW+0QHhUdMpJTjtiAPMo=; b=T0peDUOYPJSY6WxBAPeugO90ei
 v0HQsDadXoX9hojnxdwDTasDTPpL3+brNZP0oJ3o5tsLuaUTOfBvnlT5JTnHADIE1fLCJzGT7Xy8/
 KDw1oegWESTbEOlAaPAiXYjG2v/kFq0wxNjLjmNyXHitsie4tn/qTNZm4jrDfeSPNyYU=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jqZaE-0007CD-JI; Wed, 01 Jul 2020 10:03:14 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jqZaE-0007m3-6O; Wed, 01 Jul 2020 10:03:14 +0000
Subject: Re: [PATCH v2 2/2] optee: allow plain TMEM buffers with NULL address
To: paul@xen.org, 'Stefano Stabellini' <sstabellini@kernel.org>
References: <20200619223332.438344-1-volodymyr_babchuk@epam.com>
 <20200619223332.438344-3-volodymyr_babchuk@epam.com>
 <alpine.DEB.2.21.2006221809380.8121@sstabellini-ThinkPad-T480s>
 <87ftampkd7.fsf@epam.com> <2df789f3-e881-36a3-51f4-010b499990f5@xen.org>
 <alpine.DEB.2.21.2006231403220.8121@sstabellini-ThinkPad-T480s>
 <b1891206-b883-46b9-70a3-3027a931d2ed@xen.org>
 <000301d64de8$d1811f20$74835d60$@xen.org>
From: Julien Grall <julien@xen.org>
Message-ID: <43be71f1-60a7-8b52-dbf3-9b4b79e69a36@xen.org>
Date: Wed, 1 Jul 2020 11:03:12 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:68.0)
 Gecko/20100101 Thunderbird/68.9.0
MIME-Version: 1.0
In-Reply-To: <000301d64de8$d1811f20$74835d60$@xen.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, op-tee@lists.trustedfirmware.org,
 'Volodymyr Babchuk' <Volodymyr_Babchuk@epam.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>



On 29/06/2020 08:42, Paul Durrant wrote:
>> -----Original Message-----
>> From: Julien Grall <julien@xen.org>
>> Sent: 26 June 2020 18:54
>> To: Stefano Stabellini <sstabellini@kernel.org>; paul@xen.org
>> Cc: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>; xen-devel@lists.xenproject.org; op-
>> tee@lists.trustedfirmware.org
>> Subject: Re: [PATCH v2 2/2] optee: allow plain TMEM buffers with NULL address
>>
>> (using paul xen.org's email)
>>
> 
> Thanks. Avoids annoying warning banners :-)
> 
>> Hi,
>>
>> Apologies for the late answer.
>>
>> On 23/06/2020 22:09, Stefano Stabellini wrote:
>>> On Tue, 23 Jun 2020, Julien Grall wrote:
>>>> On 23/06/2020 03:49, Volodymyr Babchuk wrote:
>>>>>
>>>>> Hi Stefano,
>>>>>
>>>>> Stefano Stabellini writes:
>>>>>
>>>>>> On Fri, 19 Jun 2020, Volodymyr Babchuk wrote:
>>>>>>> Trusted Applications use a popular approach to determine the required
>>>>>>> size of a buffer: the client provides a memory reference with a NULL
>>>>>>> pointer to a buffer. This is the so-called "Null memory reference".
>>>>>>> The TA updates the reference with the required size and returns it
>>>>>>> back to the client. The client then allocates a buffer of the needed
>>>>>>> size and repeats the operation.
>>>>>>>
>>>>>>> This behavior is described in TEE Client API Specification, paragraph
>>>>>>> 3.2.5. Memory References.
>>>>>>>
>>>>>>> OP-TEE represents this null memory reference as a TMEM parameter with
>>>>>>> buf_ptr = 0x0. This is the only case when we should allow a TMEM
>>>>>>> buffer without the OPTEE_MSG_ATTR_NONCONTIG flag. It is also the
>>>>>>> special case for a buffer with the OPTEE_MSG_ATTR_NONCONTIG flag.
>>>>>>>
>>>>>>> This could lead to a potential issue, because IPA 0x0 is a valid
>>>>>>> address, but OP-TEE will treat it as a special case. So, care should
>>>>>>> be taken when constructing an OP-TEE enabled guest to make sure that
>>>>>>> such a guest has no memory at IPA 0x0 and none of its memory is
>>>>>>> mapped at PA 0x0.
>>>>>>>
>>>>>>> Signed-off-by: Volodymyr Babchuk <volodymyr_babchuk@epam.com>
>>>>>>> ---
>>>>>>>
>>>>>>> Changes from v1:
>>>>>>>     - Added comment with TODO about possible PA/IPA 0x0 issue
>>>>>>>     - The same is described in the commit message
>>>>>>>     - Added check in translate_noncontig() for the NULL ptr buffer
>>>>>>>
>>>>>>> ---
>>>>>>>     xen/arch/arm/tee/optee.c | 27 ++++++++++++++++++++++++---
>>>>>>>     1 file changed, 24 insertions(+), 3 deletions(-)
>>>>>>>
>>>>>>> diff --git a/xen/arch/arm/tee/optee.c b/xen/arch/arm/tee/optee.c
>>>>>>> index 6963238056..70bfef7e5f 100644
>>>>>>> --- a/xen/arch/arm/tee/optee.c
>>>>>>> +++ b/xen/arch/arm/tee/optee.c
>>>>>>> @@ -215,6 +215,15 @@ static bool optee_probe(void)
>>>>>>>         return true;
>>>>>>>     }
>>>>>>>     +/*
>>>>>>> + * TODO: There is a potential issue with guests that either have RAM
>>>>>>> + * at IPA of 0x0 or some of theirs memory is mapped at PA 0x0. This is
>>>>>>                                   ^ their
>>>>>>
>>>>>>> + * because PA of 0x0 is considered as NULL pointer by OP-TEE. It will
>>>>>>> + * not be able to map buffer with such pointer to TA address space, or
>>>>>>> + * use such buffer for communication with the guest. We either need to
>>>>>>> + * check that guest have no such mappings or ensure that OP-TEE
>>>>>>> + * enabled guest will not be created with such mappings.
>>>>>>> + */
>>>>>>>     static int optee_domain_init(struct domain *d)
>>>>>>>     {
>>>>>>>         struct arm_smccc_res resp;
>>>>>>> @@ -725,6 +734,15 @@ static int translate_noncontig(struct optee_domain
>>>>>>> *ctx,
>>>>>>>             uint64_t next_page_data;
>>>>>>>         } *guest_data, *xen_data;
>>>>>>>     +    /*
>>>>>>> +     * Special case: buffer with buf_ptr == 0x0 is considered as NULL
>>>>>>> +     * pointer by OP-TEE. No translation is needed. This can lead to
>>>>>>> +     * an issue as IPA 0x0 is a valid address for Xen. See the comment
>>>>>>> +     * near optee_domain_init()
>>>>>>> +     */
>>>>>>> +    if ( !param->u.tmem.buf_ptr )
>>>>>>> +        return 0;
>>>>>>
>>>>>> Given that today it is not possible for this to happen, it could even be
>>>>>> an ASSERT. But I think I would just return an error, maybe -EINVAL?
>>>>>
>>>>> Hmm, looks like my comment is somewhat misleading :(
>>>>
>>>> How about the following comment:
>>>>
>>>> We don't want to translate NULL (0) as it can be used by the guest to
>>>> fetch the size of the buffer to allocate. This behavior depends on the
>>>> TA, but there is a guarantee that OP-TEE will not try to map it (see
>>>> more details on top of optee_domain_init()).
>>>>
>>>>>
>>>>> What I mean, is that param->u.tmem.buf_ptr == 0 is the normal situation.
>>>>> This is the special case, when OP-TEE treats this buffer as a NULL. So
>>>>> we are doing nothing there. Thus, "return 0".
>>>>>
>>>>> But, as Julien pointed out, we can have a machine where 0x0 is a valid
>>>>> memory address and there is a chance that some guest will use it as a
>>>>> pointer to a buffer.
>>>>>
>>>>>> Aside from this, and the small grammar issue, everything else looks fine
>>>>>> to me.
>>>>>>
>>>>>> Let's wait for Julien's reply, but if this is the only thing, I
>>>>>> could fix it on commit.
>>>>
>>>> I agree with Volodymyr, this is the normal case here. There is more work
>>>> to do to prevent MFN 0 from being mapped in the guest, but this shouldn't
>>>> be an issue today.
>>>
>>> Let's put the MFN 0 issue aside for a moment.
>>>
>>>   From the commit message I thought that if the guest wanted to pass a
>>> NULL buffer ("Null memory reference") then it would also *not* set
>>> OPTEE_MSG_ATTR_NONCONTIG, which would be handled by the "else" statement
>>> also modified by this patch. Thus, I thought that reaching
>>> translate_noncontig with buf_ptr == NULL would always be an error.
>>>
>>> But re-reading the commit message and from both your answers it is not
>>> the case: a "Null memory reference" is allowed with
>>> OPTEE_MSG_ATTR_NONCONTIG set too.
>>>
>>> Thus, I have no further comments and the improvements on the in-code
>>> comment could be done on commit.
>>
>> Good :). IIRC Paul gave a provisional RaB for this series. @Paul, now
>> that we are settled, could we get a formal one?
> 
> Sure.
> 
> Release-acked-by: Paul Durrant <paul@xen.org>

Thanks!

It is not clear to me what Stefano had in mind for the "in-code
comment", so I will leave it to him to commit the series.
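For reference, the "Null memory reference" size-query pattern described earlier in the thread can be sketched as follows. ta_get_data() is a hypothetical stand-in, not a real OP-TEE or TEE Client API call; it only models the protocol where a NULL buffer makes the TA report the required size.

```c
#include <stddef.h>
#include <string.h>

#define TA_SHORT_BUFFER (-1)

/* Hypothetical TA invocation: with a NULL buffer (or one that is too
 * small), report the required size instead of failing outright. */
static int ta_get_data(void *buf, size_t *len)
{
    static const char data[] = "example payload";

    if ( buf == NULL || *len < sizeof(data) )
    {
        *len = sizeof(data);    /* tell the client what to allocate */
        return TA_SHORT_BUFFER;
    }

    memcpy(buf, data, sizeof(data));
    *len = sizeof(data);
    return 0;
}
```

A client first calls with a NULL buffer to learn the size, allocates, then repeats the call. At the hypervisor level this is why a TMEM parameter with buf_ptr == 0x0 must be passed through untranslated rather than rejected.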

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Jul 01 10:06:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jul 2020 10:06:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jqZdB-0008Fu-4Y; Wed, 01 Jul 2020 10:06:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=w4aC=AM=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jqZd9-0008Fp-Ev
 for xen-devel@lists.xenproject.org; Wed, 01 Jul 2020 10:06:15 +0000
X-Inumbo-ID: 7ea65948-bb82-11ea-b7bb-bc764e2007e4
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7ea65948-bb82-11ea-b7bb-bc764e2007e4;
 Wed, 01 Jul 2020 10:06:14 +0000 (UTC)
Authentication-Results: esa4.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: mnfIwy5AWaf4fvbkpyhbEDNJWDx7drEi6+8QkIaPLqBI3NDQ68oxsoisRnHxLeUCNN/lgu0UOH
 f6GZzXjcPKI1vJwP2ltqTVVbGej/mXNw914Q2Zd0jf0wGjgEc9pXhsJJ0fRP5senQHAEeiADTX
 EtgkZmSkncvPSmV6ZxRuIaYi62R3g0omQD4a8OXFb5QFVc/SVB8c6x1DW18BbZTyKIM17UyTeG
 BXSDnoHmed/sNYOPxlR3vDOUyw1QWHO/0hWIaEChxNVnVM5ByB5y+NU3jauj/bFrWRyLnAdmce
 Zxk=
X-SBRS: 2.7
X-MesageID: 22191019
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,299,1589256000"; d="scan'208";a="22191019"
Date: Wed, 1 Jul 2020 12:05:52 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: =?utf-8?Q?Micha=C5=82_Leszczy=C5=84ski?= <michal.leszczynski@cert.pl>
Subject: Re: [PATCH v4 03/10] tools/libxl: add vmtrace_pt_size parameter
Message-ID: <20200701100552.GP735@Air-de-Roger>
References: <cover.1593519420.git.michal.leszczynski@cert.pl>
 <5f4f4b1afa432258daff43f2dc8119b6a441fff4.1593519420.git.michal.leszczynski@cert.pl>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <5f4f4b1afa432258daff43f2dc8119b6a441fff4.1593519420.git.michal.leszczynski@cert.pl>
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Julien Grall <julien@xen.org>, Stefano
 Stabellini <sstabellini@kernel.org>, tamas.lengyel@intel.com,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, luwei.kang@intel.com, Jan
 Beulich <jbeulich@suse.com>, Anthony PERARD <anthony.perard@citrix.com>,
 xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Tue, Jun 30, 2020 at 02:33:46PM +0200, Michał Leszczyński wrote:
> From: Michal Leszczynski <michal.leszczynski@cert.pl>
> 
> Allow specifying the size of the per-vCPU trace buffer upon
> domain creation. This is zero by default (meaning: not enabled).
> 
> Signed-off-by: Michal Leszczynski <michal.leszczynski@cert.pl>
> ---
>  docs/man/xl.cfg.5.pod.in             | 10 ++++++++++
>  tools/golang/xenlight/helpers.gen.go |  2 ++
>  tools/golang/xenlight/types.gen.go   |  1 +
>  tools/libxl/libxl.h                  |  8 ++++++++
>  tools/libxl/libxl_create.c           |  1 +
>  tools/libxl/libxl_types.idl          |  2 ++
>  tools/xl/xl_parse.c                  | 20 ++++++++++++++++++++
>  xen/common/domain.c                  | 12 ++++++++++++
>  xen/include/public/domctl.h          |  1 +
>  9 files changed, 57 insertions(+)
> 
> diff --git a/docs/man/xl.cfg.5.pod.in b/docs/man/xl.cfg.5.pod.in
> index 0532739c1f..78f434b722 100644
> --- a/docs/man/xl.cfg.5.pod.in
> +++ b/docs/man/xl.cfg.5.pod.in
> @@ -278,6 +278,16 @@ memory=8096 will report significantly less memory available for use
>  than a system with maxmem=8096 memory=8096 due to the memory overhead
>  of having to track the unused pages.
>  
> +=item B<vmtrace_pt_size=BYTES>
> +
> +Specifies the size of the processor trace buffer that would be allocated
> +for each vCPU belonging to this domain. Disabled (i.e. B<vmtrace_pt_size=0>)
> +by default. This must be set to a non-zero value in order to be able to
> +use processor tracing features with this domain.
> +
> +B<NOTE>: The size value must be between 4 kB and 4 GB and it must

I think the minimum value is 8kB, since 4kB would be order 0, which
is used to signal that the feature is disabled?

> diff --git a/tools/xl/xl_parse.c b/tools/xl/xl_parse.c
> index 61b4ef7b7e..4eba224590 100644
> --- a/tools/xl/xl_parse.c
> +++ b/tools/xl/xl_parse.c
> @@ -1861,6 +1861,26 @@ void parse_config_data(const char *config_source,
>          }
>      }
>  
> +    if (!xlu_cfg_get_long(config, "vmtrace_pt_size", &l, 1) && l) {
> +        int32_t shift = 0;

unsigned int? I don't think there's a reason for this to be a
fixed-width signed integer.

> +
> +        if (l & (l - 1))
> +        {
> +            fprintf(stderr, "ERROR: pt buffer size must be a power of 2\n");
> +            exit(1);
> +        }
> +
> +        while (l >>= 1) ++shift;
> +
> +        if (shift <= XEN_PAGE_SHIFT)
> +        {
> +            fprintf(stderr, "ERROR: too small pt buffer\n");
> +            exit(1);
> +        }
> +
> +        b_info->vmtrace_pt_order = shift - XEN_PAGE_SHIFT;
> +    }
> +
>      if (!xlu_cfg_get_list(config, "ioports", &ioports, &num_ioports, 0)) {
>          b_info->num_ioports = num_ioports;
>          b_info->ioports = calloc(num_ioports, sizeof(*b_info->ioports));
> diff --git a/xen/common/domain.c b/xen/common/domain.c
> index 0a33e0dfd6..27dcfbac8c 100644
> --- a/xen/common/domain.c
> +++ b/xen/common/domain.c
> @@ -338,6 +338,12 @@ static int sanitise_domain_config(struct xen_domctl_createdomain *config)
>          return -EINVAL;
>      }
>  
> +    if ( config->vmtrace_pt_order && !vmtrace_supported )
> +    {
> +        dprintk(XENLOG_INFO, "Processor tracing is not supported\n");
> +        return -EINVAL;
> +    }
> +
>      return arch_sanitise_domain_config(config);
>  }
>  
> @@ -443,6 +449,12 @@ struct domain *domain_create(domid_t domid,
>          d->nr_pirqs = min(d->nr_pirqs, nr_irqs);
>  
>          radix_tree_init(&d->pirq_tree);
> +
> +        if ( config->vmtrace_pt_order )
> +        {
> +            uint32_t shift_val = config->vmtrace_pt_order + PAGE_SHIFT;
> +            d->vmtrace_pt_size = (1ULL << shift_val);

I don't think the vmtrace_pt_size domain field has been introduced
yet?

Please check that each patch builds on its own, or else we would
break bisectability of the tree.

Also, I would consider just storing this directly as an order; there's
no reason to convert it back to a size.
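The size-to-order conversion under review can be sketched as a standalone helper (hypothetical name; it mirrors the xl_parse.c logic, with order 0 reserved to mean "disabled", so the minimum usable size is two pages, i.e. 8 kB with 4 kB pages):

```c
#define XEN_PAGE_SHIFT 12   /* 4 kB pages */

/* Convert a buffer size in bytes into a page order.  Returns -1 for
 * sizes that are not a power of two or not strictly larger than one
 * page (order 0 signals "feature disabled"). */
static int pt_size_to_order(unsigned long size)
{
    unsigned int shift = 0;

    if ( !size || (size & (size - 1)) )
        return -1;              /* not a power of two */

    while ( size >>= 1 )
        shift++;

    if ( shift <= XEN_PAGE_SHIFT )
        return -1;              /* too small: must exceed one page */

    return shift - XEN_PAGE_SHIFT;
}
```

Storing the order (rather than converting back to a size) keeps the domctl field small and avoids the reconstruction step questioned above.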

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Wed Jul 01 10:13:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jul 2020 10:13:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jqZkU-0000gS-UD; Wed, 01 Jul 2020 10:13:50 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=r09v=AM=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jqZkT-0000g8-Dk
 for xen-devel@lists.xenproject.org; Wed, 01 Jul 2020 10:13:49 +0000
X-Inumbo-ID: 8addaaa8-bb83-11ea-86e7-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 8addaaa8-bb83-11ea-86e7-12813bfff9fa;
 Wed, 01 Jul 2020 10:13:43 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=2WqxvxGBRjUaZC6Bkt7HtlE+3AG+RKJsnJvy/u+W1RM=; b=FmyqwjnwieeLfxrFoenfLUIYk
 c9sZd7gS45yOrYoSNfPBoKOyaSFpzWkFZx5KZHXVsVDckkpjyAFz8SSxITq69zZR+PVJwUY6JVM4d
 Tfckii/ppTJvEVukT1E/zu6eVCaTyy64WQ6VgJMDg9bjfhpy4tXtlDjWQGNYBNhRtdbL8=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jqZkM-0007OV-VP; Wed, 01 Jul 2020 10:13:43 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jqZkM-0000Xn-IA; Wed, 01 Jul 2020 10:13:42 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jqZkM-0007x6-HZ; Wed, 01 Jul 2020 10:13:42 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151504-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-coverity test] 151504: all pass - PUSHED
X-Osstest-Versions-This: xen=23ca7ec0ba620db52a646d80e22f9703a6589f66
X-Osstest-Versions-That: xen=88cfd062e8318dfeb67c7d2eb50b6cd224b0738a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 01 Jul 2020 10:13:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151504 xen-unstable-coverity real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151504/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 xen                  23ca7ec0ba620db52a646d80e22f9703a6589f66
baseline version:
 xen                  88cfd062e8318dfeb67c7d2eb50b6cd224b0738a

Last test of basis   151428  2020-06-28 09:23:57 Z    3 days
Testing same since   151504  2020-07-01 09:23:05 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Olaf Hering <olaf@aepfle.de>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 coverity-amd64                                               pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   88cfd062e8..23ca7ec0ba  23ca7ec0ba620db52a646d80e22f9703a6589f66 -> coverity-tested/smoke


From xen-devel-bounces@lists.xenproject.org Wed Jul 01 10:23:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jul 2020 10:23:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jqZtH-0001X9-Qu; Wed, 01 Jul 2020 10:22:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=LFmw=AM=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jqZtG-0001X4-Sk
 for xen-devel@lists.xenproject.org; Wed, 01 Jul 2020 10:22:54 +0000
X-Inumbo-ID: d2d87364-bb84-11ea-b7bb-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d2d87364-bb84-11ea-b7bb-bc764e2007e4;
 Wed, 01 Jul 2020 10:22:54 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 3F876AD04;
 Wed,  1 Jul 2020 10:22:53 +0000 (UTC)
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH v2 0/7] x86: compat header generation and checking adjustments
Message-ID: <bb6a96c6-b6b1-76ff-f9db-10bec0fb4ab1@suse.com>
Date: Wed, 1 Jul 2020 12:22:54 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.9.0
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Paul Durrant <paul@xen.org>,
 George Dunlap <George.Dunlap@eu.citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

As was pointed out by 0e2e54966af5 ("mm: fix public declaration of
struct xen_mem_acquire_resource"), we're not currently handling
structs that have uint64_aligned_t fields correctly. Patch 2
demonstrates that there was also an issue with XEN_GUEST_HANDLE_64().

Only the 1st patch was previously sent; its approach has since been
changed altogether. All later patches are new. For 4.14 I think at
least patch 1 should be considered.

1: x86: fix compat header generation
2: x86/mce: add compat struct checking for XEN_MC_inject_v2
3: x86/mce: bring hypercall subop compat checking in sync again
4: x86/dmop: add compat struct checking for XEN_DMOP_map_mem_type_to_ioreq_server
5: x86: generalize padding field handling
6: flask: drop dead compat translation code
7: x86: only generate compat headers actually needed

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jul 01 10:25:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jul 2020 10:25:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jqZva-0001eQ-8I; Wed, 01 Jul 2020 10:25:18 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=LFmw=AM=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jqZvZ-0001eK-0b
 for xen-devel@lists.xenproject.org; Wed, 01 Jul 2020 10:25:17 +0000
X-Inumbo-ID: 26c40d6c-bb85-11ea-bb8b-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 26c40d6c-bb85-11ea-bb8b-bc764e2007e4;
 Wed, 01 Jul 2020 10:25:14 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 10B19AD25;
 Wed,  1 Jul 2020 10:25:14 +0000 (UTC)
Subject: [PATCH v2 1/7] x86: fix compat header generation
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <bb6a96c6-b6b1-76ff-f9db-10bec0fb4ab1@suse.com>
Message-ID: <a8139d0e-f332-b877-dea8-3ce8a6869285@suse.com>
Date: Wed, 1 Jul 2020 12:25:15 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.9.0
MIME-Version: 1.0
In-Reply-To: <bb6a96c6-b6b1-76ff-f9db-10bec0fb4ab1@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Paul Durrant <paul@xen.org>,
 George Dunlap <George.Dunlap@eu.citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

As was pointed out by 0e2e54966af5 ("mm: fix public declaration of
struct xen_mem_acquire_resource"), we're currently not handling structs
with uint64_aligned_t fields correctly. #pragma pack(4) suppresses
the necessary alignment even if the type did properly survive (which
it also didn't) the process of generating the headers. Overall,
with the above mentioned change applied, there's only a latent issue
here afaict, i.e. none of our other interface structs is currently
affected.

As a result it is clear that using #pragma pack(4) is not an option.
Drop all uses from compat header generation. Make sure
{,u}int64_aligned_t actually survives, such that explicitly aligned
fields will remain aligned. Arrange for {,u}int64_t to be transformed
into a type that's 64 bits wide and 4-byte aligned, by utilizing that
in typedef-s the "aligned" attribute can be used to reduce alignment.
Additionally, for the cases where native structures get re-used,
enforce suitable alignment via typedef-s (which allow alignment to be
reduced).

This use of typedef-s necessitates changes to CHECK_*() macro
generation: Previously get-fields.sh relied on finding struct/union
keywords where compound types were used. We now need to use the
typedef-s (guaranteeing suitable alignment) instead, and hence the
script has to recognize those cases, too. (Unfortunately there are a
few special cases to be dealt with, but this is really not much
different from e.g. the pre-existing compat_domain_handle_t special
case.)

This need to use typedef-s is certainly somewhat fragile going forward,
as in similar future cases it is imperative to also use typedef-s, or
else the CHECK_*() macros won't check what they're supposed to check. I
don't currently see any means to avoid this fragility, though.

There's one change to generated code according to my observations: In
arch_compat_vcpu_op() the runstate area "area" variable would previously
have been put in a just 4-byte aligned stack slot (despite being 8 bytes
in size), whereas now it gets put in an 8-byte aligned location.

These changes also result in some curious inconsistency in struct
xen_mc - I intend to clean this up later on. Otherwise unrelated code
would also need adjustment right here.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: Different approach, addressing the latent alignment issues in v1.

--- a/xen/include/Makefile
+++ b/xen/include/Makefile
@@ -34,15 +34,6 @@ headers-$(CONFIG_XSM_FLASK) += compat/xs
 cppflags-y                := -include public/xen-compat.h -DXEN_GENERATING_COMPAT_HEADERS
 cppflags-$(CONFIG_X86)    += -m32
 
-# 8-byte types are 4-byte aligned on x86_32 ...
-ifeq ($(CONFIG_CC_IS_CLANG),y)
-prefix-$(CONFIG_X86)      := \#pragma pack(push, 4)
-suffix-$(CONFIG_X86)      := \#pragma pack(pop)
-else
-prefix-$(CONFIG_X86)      := \#pragma pack(4)
-suffix-$(CONFIG_X86)      := \#pragma pack()
-endif
-
 endif
 
 public-$(CONFIG_X86) := $(wildcard public/arch-x86/*.h public/arch-x86/*/*.h)
@@ -57,10 +48,8 @@ compat/%.h: compat/%.i Makefile $(BASEDI
 	echo "#define $$id" >>$@.new; \
 	echo "#include <xen/compat.h>" >>$@.new; \
 	$(if $(filter-out compat/arch-%.h,$@),echo "#include <$(patsubst compat/%,public/%,$@)>" >>$@.new;) \
-	$(if $(prefix-y),echo "$(prefix-y)" >>$@.new;) \
 	grep -v '^# [0-9]' $< | \
 	$(PYTHON) $(BASEDIR)/tools/compat-build-header.py | uniq >>$@.new; \
-	$(if $(suffix-y),echo "$(suffix-y)" >>$@.new;) \
 	echo "#endif /* $$id */" >>$@.new
 	mv -f $@.new $@
 
--- a/xen/include/public/arch-x86/pmu.h
+++ b/xen/include/public/arch-x86/pmu.h
@@ -105,7 +105,7 @@ struct xen_pmu_arch {
          * Processor's registers at the time of interrupt.
          * WO for hypervisor, RO for guests.
          */
-        struct xen_pmu_regs regs;
+        xen_pmu_regs_t regs;
         /* Padding for adding new registers to xen_pmu_regs in the future */
 #define XENPMU_REGS_PAD_SZ  64
         uint8_t pad[XENPMU_REGS_PAD_SZ];
@@ -132,8 +132,8 @@ struct xen_pmu_arch {
      * hypervisor into hardware during XENPMU_flush
      */
     union {
-        struct xen_pmu_amd_ctxt amd;
-        struct xen_pmu_intel_ctxt intel;
+        xen_pmu_amd_ctxt_t amd;
+        xen_pmu_intel_ctxt_t intel;
 
         /*
          * Padding for contexts (fixed parts only, does not include MSR banks
--- a/xen/include/public/arch-x86/xen-mca.h
+++ b/xen/include/public/arch-x86/xen-mca.h
@@ -112,7 +112,7 @@ struct mcinfo_common {
     uint16_t type;      /* structure type */
     uint16_t size;      /* size of this struct in bytes */
 };
-
+typedef struct mcinfo_common xen_mcinfo_common_t;
 
 #define MC_FLAG_CORRECTABLE     (1 << 0)
 #define MC_FLAG_UNCORRECTABLE   (1 << 1)
@@ -123,7 +123,7 @@ struct mcinfo_common {
 #define MC_FLAG_MCE		(1 << 6)
 /* contains global x86 mc information */
 struct mcinfo_global {
-    struct mcinfo_common common;
+    xen_mcinfo_common_t common;
 
     /* running domain at the time in error (most likely the impacted one) */
     uint16_t mc_domid;
@@ -138,7 +138,7 @@ struct mcinfo_global {
 
 /* contains bank local x86 mc information */
 struct mcinfo_bank {
-    struct mcinfo_common common;
+    xen_mcinfo_common_t common;
 
     uint16_t mc_bank; /* bank nr */
     uint16_t mc_domid; /* Usecase 5: domain referenced by mc_addr on dom0
@@ -156,11 +156,12 @@ struct mcinfo_msr {
     uint64_t reg;   /* MSR */
     uint64_t value; /* MSR value */
 };
+typedef struct mcinfo_msr xen_mcinfo_msr_t;
 
 /* contains mc information from other
  * or additional mc MSRs */
 struct mcinfo_extended {
-    struct mcinfo_common common;
+    xen_mcinfo_common_t common;
 
     /* You can fill up to five registers.
      * If you need more, then use this structure
@@ -172,7 +173,7 @@ struct mcinfo_extended {
      * and E(R)FLAGS, E(R)IP, E(R)MISC, up to 11/19 of them might be
      * useful at present. So expand this array to 32 to leave room.
      */
-    struct mcinfo_msr mc_msr[32];
+    xen_mcinfo_msr_t mc_msr[32];
 };
 
 /* Recovery Action flags. Giving recovery result information to DOM0 */
@@ -208,6 +209,7 @@ struct page_offline_action
     uint64_t mfn;
     uint64_t status;
 };
+typedef struct page_offline_action xen_page_offline_action_t;
 
 struct cpu_offline_action
 {
@@ -216,17 +218,18 @@ struct cpu_offline_action
     uint16_t mc_coreid;
     uint16_t mc_core_threadid;
 };
+typedef struct cpu_offline_action xen_cpu_offline_action_t;
 
 #define MAX_UNION_SIZE 16
 struct mcinfo_recovery
 {
-    struct mcinfo_common common;
+    xen_mcinfo_common_t common;
     uint16_t mc_bank; /* bank nr */
     uint8_t action_flags;
     uint8_t action_types;
     union {
-        struct page_offline_action page_retire;
-        struct cpu_offline_action cpu_offline;
+        xen_page_offline_action_t page_retire;
+        xen_cpu_offline_action_t cpu_offline;
         uint8_t pad[MAX_UNION_SIZE];
     } action_info;
 };
@@ -279,7 +282,7 @@ struct mcinfo_logical_cpu {
     uint32_t mc_cache_size;
     uint32_t mc_cache_alignment;
     int32_t mc_nmsrvals;
-    struct mcinfo_msr mc_msrvalues[__MC_MSR_ARRAYSIZE];
+    xen_mcinfo_msr_t mc_msrvalues[__MC_MSR_ARRAYSIZE];
 };
 typedef struct mcinfo_logical_cpu xen_mc_logical_cpu_t;
 DEFINE_XEN_GUEST_HANDLE(xen_mc_logical_cpu_t);
@@ -399,8 +402,9 @@ struct xen_mc_msrinject {
     domid_t  mcinj_domid;           /* valid only if MC_MSRINJ_F_GPADDR is
                                        present in mcinj_flags */
     uint16_t _pad0;
-    struct mcinfo_msr mcinj_msr[MC_MSRINJ_MAXMSRS];
+    xen_mcinfo_msr_t mcinj_msr[MC_MSRINJ_MAXMSRS];
 };
+typedef struct xen_mc_msrinject xen_mc_msrinject_t;
 
 /* Flags for mcinj_flags above; bits 16-31 are reserved */
 #define MC_MSRINJ_F_INTERPOSE   0x1
@@ -410,6 +414,7 @@ struct xen_mc_msrinject {
 struct xen_mc_mceinject {
     unsigned int mceinj_cpunr;      /* target processor id */
 };
+typedef struct xen_mc_mceinject xen_mc_mceinject_t;
 
 #if defined(__XEN__) || defined(__XEN_TOOLS__)
 #define XEN_MC_inject_v2        6
@@ -422,7 +427,7 @@ struct xen_mc_mceinject {
 
 struct xen_mc_inject_v2 {
     uint32_t flags;
-    struct xenctl_bitmap cpumap;
+    xenctl_bitmap_t cpumap;
 };
 #endif
 
@@ -431,10 +436,10 @@ struct xen_mc {
     uint32_t interface_version; /* XEN_MCA_INTERFACE_VERSION */
     union {
         struct xen_mc_fetch        mc_fetch;
-        struct xen_mc_notifydomain mc_notifydomain;
+        xen_mc_notifydomain_t      mc_notifydomain;
         struct xen_mc_physcpuinfo  mc_physcpuinfo;
-        struct xen_mc_msrinject    mc_msrinject;
-        struct xen_mc_mceinject    mc_mceinject;
+        xen_mc_msrinject_t         mc_msrinject;
+        xen_mc_mceinject_t         mc_mceinject;
 #if defined(__XEN__) || defined(__XEN_TOOLS__)
         struct xen_mc_inject_v2    mc_inject_v2;
 #endif
--- a/xen/include/public/argo.h
+++ b/xen/include/public/argo.h
@@ -67,8 +67,8 @@ typedef struct xen_argo_addr
 
 typedef struct xen_argo_send_addr
 {
-    struct xen_argo_addr src;
-    struct xen_argo_addr dst;
+    xen_argo_addr_t src;
+    xen_argo_addr_t dst;
 } xen_argo_send_addr_t;
 
 typedef struct xen_argo_ring
@@ -121,7 +121,7 @@ typedef struct xen_argo_unregister_ring
 
 typedef struct xen_argo_ring_data_ent
 {
-    struct xen_argo_addr ring;
+    xen_argo_addr_t ring;
     uint16_t flags;
     uint16_t pad;
     uint32_t space_required;
@@ -132,13 +132,13 @@ typedef struct xen_argo_ring_data
 {
     uint32_t nent;
     uint32_t pad;
-    struct xen_argo_ring_data_ent data[XEN_FLEX_ARRAY_DIM];
+    xen_argo_ring_data_ent_t data[XEN_FLEX_ARRAY_DIM];
 } xen_argo_ring_data_t;
 
 struct xen_argo_ring_message_header
 {
     uint32_t len;
-    struct xen_argo_addr source;
+    xen_argo_addr_t source;
     uint32_t message_type;
     uint8_t data[XEN_FLEX_ARRAY_DIM];
 };
--- a/xen/include/public/event_channel.h
+++ b/xen/include/public/event_channel.h
@@ -321,16 +321,16 @@ typedef struct evtchn_set_priority evtch
 struct evtchn_op {
     uint32_t cmd; /* enum event_channel_op */
     union {
-        struct evtchn_alloc_unbound    alloc_unbound;
-        struct evtchn_bind_interdomain bind_interdomain;
-        struct evtchn_bind_virq        bind_virq;
-        struct evtchn_bind_pirq        bind_pirq;
-        struct evtchn_bind_ipi         bind_ipi;
-        struct evtchn_close            close;
-        struct evtchn_send             send;
-        struct evtchn_status           status;
-        struct evtchn_bind_vcpu        bind_vcpu;
-        struct evtchn_unmask           unmask;
+        evtchn_alloc_unbound_t    alloc_unbound;
+        evtchn_bind_interdomain_t bind_interdomain;
+        evtchn_bind_virq_t        bind_virq;
+        evtchn_bind_pirq_t        bind_pirq;
+        evtchn_bind_ipi_t         bind_ipi;
+        evtchn_close_t            close;
+        evtchn_send_t             send;
+        evtchn_status_t           status;
+        evtchn_bind_vcpu_t        bind_vcpu;
+        evtchn_unmask_t           unmask;
     } u;
 };
 typedef struct evtchn_op evtchn_op_t;
--- a/xen/include/public/hvm/dm_op.h
+++ b/xen/include/public/hvm/dm_op.h
@@ -74,6 +74,7 @@ struct xen_dm_op_create_ioreq_server {
     /* OUT - server id */
     ioservid_t id;
 };
+typedef struct xen_dm_op_create_ioreq_server xen_dm_op_create_ioreq_server_t;
 
 /*
  * XEN_DMOP_get_ioreq_server_info: Get all the information necessary to
@@ -113,6 +114,7 @@ struct xen_dm_op_get_ioreq_server_info {
     /* OUT - buffered ioreq gfn (see block comment above)*/
     uint64_aligned_t bufioreq_gfn;
 };
+typedef struct xen_dm_op_get_ioreq_server_info xen_dm_op_get_ioreq_server_info_t;
 
 /*
  * XEN_DMOP_map_io_range_to_ioreq_server: Register an I/O range for
@@ -148,6 +150,7 @@ struct xen_dm_op_ioreq_server_range {
     /* IN - inclusive start and end of range */
     uint64_aligned_t start, end;
 };
+typedef struct xen_dm_op_ioreq_server_range xen_dm_op_ioreq_server_range_t;
 
 #define XEN_DMOP_PCI_SBDF(s,b,d,f) \
 	((((s) & 0xffff) << 16) |  \
@@ -173,6 +176,7 @@ struct xen_dm_op_set_ioreq_server_state
     uint8_t enabled;
     uint8_t pad;
 };
+typedef struct xen_dm_op_set_ioreq_server_state xen_dm_op_set_ioreq_server_state_t;
 
 /*
  * XEN_DMOP_destroy_ioreq_server: Destroy the IOREQ Server <id>.
@@ -186,6 +190,7 @@ struct xen_dm_op_destroy_ioreq_server {
     ioservid_t id;
     uint16_t pad;
 };
+typedef struct xen_dm_op_destroy_ioreq_server xen_dm_op_destroy_ioreq_server_t;
 
 /*
  * XEN_DMOP_track_dirty_vram: Track modifications to the specified pfn
@@ -203,6 +208,7 @@ struct xen_dm_op_track_dirty_vram {
     /* IN - first pfn to track */
     uint64_aligned_t first_pfn;
 };
+typedef struct xen_dm_op_track_dirty_vram xen_dm_op_track_dirty_vram_t;
 
 /*
  * XEN_DMOP_set_pci_intx_level: Set the logical level of one of a domain's
@@ -217,6 +223,7 @@ struct xen_dm_op_set_pci_intx_level {
     /* IN - Level: 0 -> deasserted, 1 -> asserted */
     uint8_t  level;
 };
+typedef struct xen_dm_op_set_pci_intx_level xen_dm_op_set_pci_intx_level_t;
 
 /*
  * XEN_DMOP_set_isa_irq_level: Set the logical level of a one of a domain's
@@ -230,6 +237,7 @@ struct xen_dm_op_set_isa_irq_level {
     /* IN - Level: 0 -> deasserted, 1 -> asserted */
     uint8_t  level;
 };
+typedef struct xen_dm_op_set_isa_irq_level xen_dm_op_set_isa_irq_level_t;
 
 /*
  * XEN_DMOP_set_pci_link_route: Map a PCI INTx line to an IRQ line.
@@ -242,6 +250,7 @@ struct xen_dm_op_set_pci_link_route {
     /* ISA IRQ (1-15) or 0 -> disable link */
     uint8_t  isa_irq;
 };
+typedef struct xen_dm_op_set_pci_link_route xen_dm_op_set_pci_link_route_t;
 
 /*
  * XEN_DMOP_modified_memory: Notify that a set of pages were modified by
@@ -265,6 +274,7 @@ struct xen_dm_op_modified_memory {
     /* IN/OUT - Must be set to 0 */
     uint32_t opaque;
 };
+typedef struct xen_dm_op_modified_memory xen_dm_op_modified_memory_t;
 
 struct xen_dm_op_modified_memory_extent {
     /* IN - number of contiguous pages modified */
@@ -294,6 +304,7 @@ struct xen_dm_op_set_mem_type {
     /* IN - first pfn in region */
     uint64_aligned_t first_pfn;
 };
+typedef struct xen_dm_op_set_mem_type xen_dm_op_set_mem_type_t;
 
 /*
  * XEN_DMOP_inject_event: Inject an event into a VCPU, which will
@@ -327,6 +338,7 @@ struct xen_dm_op_inject_event {
     /* IN - type-specific extra data (%cr2 for #PF, pending_dbg for #DB) */
     uint64_aligned_t cr2;
 };
+typedef struct xen_dm_op_inject_event xen_dm_op_inject_event_t;
 
 /*
  * XEN_DMOP_inject_msi: Inject an MSI for an emulated device.
@@ -340,6 +352,7 @@ struct xen_dm_op_inject_msi {
     /* IN - MSI address (0xfeexxxxx) */
     uint64_aligned_t addr;
 };
+typedef struct xen_dm_op_inject_msi xen_dm_op_inject_msi_t;
 
 /*
  * XEN_DMOP_map_mem_type_to_ioreq_server : map or unmap the IOREQ Server <id>
@@ -366,6 +379,7 @@ struct xen_dm_op_map_mem_type_to_ioreq_s
     uint64_t opaque;    /* IN/OUT - only used for hypercall continuation,
                            has to be set to zero by the caller */
 };
+typedef struct xen_dm_op_map_mem_type_to_ioreq_server xen_dm_op_map_mem_type_to_ioreq_server_t;
 
 /*
  * XEN_DMOP_remote_shutdown : Declare a shutdown for another domain
@@ -377,6 +391,7 @@ struct xen_dm_op_remote_shutdown {
     uint32_t reason;       /* SHUTDOWN_* => enum sched_shutdown_reason */
                            /* (Other reason values are not blocked) */
 };
+typedef struct xen_dm_op_remote_shutdown xen_dm_op_remote_shutdown_t;
 
 /*
  * XEN_DMOP_relocate_memory : Relocate GFNs for the specified guest.
@@ -395,6 +410,7 @@ struct xen_dm_op_relocate_memory {
     /* Starting GFN where GFNs should be relocated. */
     uint64_aligned_t dst_gfn;
 };
+typedef struct xen_dm_op_relocate_memory xen_dm_op_relocate_memory_t;
 
 /*
  * XEN_DMOP_pin_memory_cacheattr : Pin caching type of RAM space.
@@ -416,30 +432,30 @@ struct xen_dm_op_pin_memory_cacheattr {
     uint32_t type;          /* XEN_DMOP_MEM_CACHEATTR_* */
     uint32_t pad;
 };
+typedef struct xen_dm_op_pin_memory_cacheattr xen_dm_op_pin_memory_cacheattr_t;
 
 struct xen_dm_op {
     uint32_t op;
     uint32_t pad;
     union {
-        struct xen_dm_op_create_ioreq_server create_ioreq_server;
-        struct xen_dm_op_get_ioreq_server_info get_ioreq_server_info;
-        struct xen_dm_op_ioreq_server_range map_io_range_to_ioreq_server;
-        struct xen_dm_op_ioreq_server_range unmap_io_range_from_ioreq_server;
-        struct xen_dm_op_set_ioreq_server_state set_ioreq_server_state;
-        struct xen_dm_op_destroy_ioreq_server destroy_ioreq_server;
-        struct xen_dm_op_track_dirty_vram track_dirty_vram;
-        struct xen_dm_op_set_pci_intx_level set_pci_intx_level;
-        struct xen_dm_op_set_isa_irq_level set_isa_irq_level;
-        struct xen_dm_op_set_pci_link_route set_pci_link_route;
-        struct xen_dm_op_modified_memory modified_memory;
-        struct xen_dm_op_set_mem_type set_mem_type;
-        struct xen_dm_op_inject_event inject_event;
-        struct xen_dm_op_inject_msi inject_msi;
-        struct xen_dm_op_map_mem_type_to_ioreq_server
-                map_mem_type_to_ioreq_server;
-        struct xen_dm_op_remote_shutdown remote_shutdown;
-        struct xen_dm_op_relocate_memory relocate_memory;
-        struct xen_dm_op_pin_memory_cacheattr pin_memory_cacheattr;
+        xen_dm_op_create_ioreq_server_t create_ioreq_server;
+        xen_dm_op_get_ioreq_server_info_t get_ioreq_server_info;
+        xen_dm_op_ioreq_server_range_t map_io_range_to_ioreq_server;
+        xen_dm_op_ioreq_server_range_t unmap_io_range_from_ioreq_server;
+        xen_dm_op_set_ioreq_server_state_t set_ioreq_server_state;
+        xen_dm_op_destroy_ioreq_server_t destroy_ioreq_server;
+        xen_dm_op_track_dirty_vram_t track_dirty_vram;
+        xen_dm_op_set_pci_intx_level_t set_pci_intx_level;
+        xen_dm_op_set_isa_irq_level_t set_isa_irq_level;
+        xen_dm_op_set_pci_link_route_t set_pci_link_route;
+        xen_dm_op_modified_memory_t modified_memory;
+        xen_dm_op_set_mem_type_t set_mem_type;
+        xen_dm_op_inject_event_t inject_event;
+        xen_dm_op_inject_msi_t inject_msi;
+        xen_dm_op_map_mem_type_to_ioreq_server_t map_mem_type_to_ioreq_server;
+        xen_dm_op_remote_shutdown_t remote_shutdown;
+        xen_dm_op_relocate_memory_t relocate_memory;
+        xen_dm_op_pin_memory_cacheattr_t pin_memory_cacheattr;
     } u;
 };
 
--- a/xen/include/public/hvm/hvm_vcpu.h
+++ b/xen/include/public/hvm/hvm_vcpu.h
@@ -69,6 +69,7 @@ struct vcpu_hvm_x86_32 {
 
     uint16_t pad2[3];
 };
+typedef struct vcpu_hvm_x86_32 xen_vcpu_hvm_x86_32_t;
 
 /*
  * The layout of the _ar fields of the segment registers is the
@@ -114,6 +115,7 @@ struct vcpu_hvm_x86_64 {
      * the 32-bit structure should be used instead.
      */
 };
+typedef struct vcpu_hvm_x86_64 xen_vcpu_hvm_x86_64_t;
 
 struct vcpu_hvm_context {
 #define VCPU_HVM_MODE_32B 0  /* 32bit fields of the structure will be used. */
@@ -124,8 +126,8 @@ struct vcpu_hvm_context {
 
     /* CPU registers. */
     union {
-        struct vcpu_hvm_x86_32 x86_32;
-        struct vcpu_hvm_x86_64 x86_64;
+        xen_vcpu_hvm_x86_32_t x86_32;
+        xen_vcpu_hvm_x86_64_t x86_64;
     } cpu_regs;
 };
 typedef struct vcpu_hvm_context vcpu_hvm_context_t;
--- a/xen/include/public/hypfs.h
+++ b/xen/include/public/hypfs.h
@@ -53,9 +53,10 @@ struct xen_hypfs_direntry {
     uint32_t content_len;      /* Current length of data. */
     uint32_t max_write_len;    /* Max. length for writes (0 if read-only). */
 };
+typedef struct xen_hypfs_direntry xen_hypfs_direntry_t;
 
 struct xen_hypfs_dirlistentry {
-    struct xen_hypfs_direntry e;
+    xen_hypfs_direntry_t e;
     /* Offset in bytes to next entry (0 == this is the last entry). */
     uint16_t off_next;
     /* Zero terminated entry name, possibly with some padding for alignment. */
--- a/xen/include/public/memory.h
+++ b/xen/include/public/memory.h
@@ -604,7 +604,7 @@ struct xen_reserved_device_memory_map {
     XEN_GUEST_HANDLE(xen_reserved_device_memory_t) buffer;
     /* IN */
     union {
-        struct physdev_pci_device pci;
+        physdev_pci_device_t pci;
     } dev;
 };
 typedef struct xen_reserved_device_memory_map xen_reserved_device_memory_map_t;
--- a/xen/include/public/physdev.h
+++ b/xen/include/public/physdev.h
@@ -229,11 +229,11 @@ DEFINE_XEN_GUEST_HANDLE(physdev_manage_p
 struct physdev_op {
     uint32_t cmd;
     union {
-        struct physdev_irq_status_query      irq_status_query;
-        struct physdev_set_iopl              set_iopl;
-        struct physdev_set_iobitmap          set_iobitmap;
-        struct physdev_apic                  apic_op;
-        struct physdev_irq                   irq_op;
+        physdev_irq_status_query_t irq_status_query;
+        physdev_set_iopl_t         set_iopl;
+        physdev_set_iobitmap_t     set_iobitmap;
+        physdev_apic_t             apic_op;
+        physdev_irq_t              irq_op;
     } u;
 };
 typedef struct physdev_op physdev_op_t;
@@ -334,7 +334,7 @@ struct physdev_dbgp_op {
     uint8_t op;
     uint8_t bus;
     union {
-        struct physdev_pci_device pci;
+        physdev_pci_device_t pci;
     } u;
 };
 typedef struct physdev_dbgp_op physdev_dbgp_op_t;
--- a/xen/include/public/platform.h
+++ b/xen/include/public/platform.h
@@ -42,6 +42,7 @@ struct xenpf_settime32 {
     uint32_t nsecs;
     uint64_t system_time;
 };
+typedef struct xenpf_settime32 xenpf_settime32_t;
 #define XENPF_settime64           62
 struct xenpf_settime64 {
     /* IN variables. */
@@ -50,6 +51,7 @@ struct xenpf_settime64 {
     uint32_t mbz;
     uint64_t system_time;
 };
+typedef struct xenpf_settime64 xenpf_settime64_t;
 #if __XEN_INTERFACE_VERSION__ < 0x00040600
 #define XENPF_settime XENPF_settime32
 #define xenpf_settime xenpf_settime32
@@ -529,6 +531,7 @@ struct xenpf_cpu_hotadd
 	uint32_t acpi_id;
 	uint32_t pxm;
 };
+typedef struct xenpf_cpu_hotadd xenpf_cpu_hotadd_t;
 
 #define XENPF_mem_hotadd    59
 struct xenpf_mem_hotadd
@@ -538,6 +541,7 @@ struct xenpf_mem_hotadd
     uint32_t pxm;
     uint32_t flags;
 };
+typedef struct xenpf_mem_hotadd xenpf_mem_hotadd_t;
 
 #define XENPF_core_parking  60
 
@@ -622,29 +626,29 @@ struct xen_platform_op {
     uint32_t cmd;
     uint32_t interface_version; /* XENPF_INTERFACE_VERSION */
     union {
-        struct xenpf_settime           settime;
-        struct xenpf_settime32         settime32;
-        struct xenpf_settime64         settime64;
-        struct xenpf_add_memtype       add_memtype;
-        struct xenpf_del_memtype       del_memtype;
-        struct xenpf_read_memtype      read_memtype;
-        struct xenpf_microcode_update  microcode;
-        struct xenpf_platform_quirk    platform_quirk;
-        struct xenpf_efi_runtime_call  efi_runtime_call;
-        struct xenpf_firmware_info     firmware_info;
-        struct xenpf_enter_acpi_sleep  enter_acpi_sleep;
-        struct xenpf_change_freq       change_freq;
-        struct xenpf_getidletime       getidletime;
-        struct xenpf_set_processor_pminfo set_pminfo;
-        struct xenpf_pcpuinfo          pcpu_info;
-        struct xenpf_pcpu_version      pcpu_version;
-        struct xenpf_cpu_ol            cpu_ol;
-        struct xenpf_cpu_hotadd        cpu_add;
-        struct xenpf_mem_hotadd        mem_add;
-        struct xenpf_core_parking      core_parking;
-        struct xenpf_resource_op       resource_op;
-        struct xenpf_symdata           symdata;
-        uint8_t                        pad[128];
+        xenpf_settime_t               settime;
+        xenpf_settime32_t             settime32;
+        xenpf_settime64_t             settime64;
+        xenpf_add_memtype_t           add_memtype;
+        xenpf_del_memtype_t           del_memtype;
+        xenpf_read_memtype_t          read_memtype;
+        xenpf_microcode_update_t      microcode;
+        xenpf_platform_quirk_t        platform_quirk;
+        xenpf_efi_runtime_call_t      efi_runtime_call;
+        xenpf_firmware_info_t         firmware_info;
+        xenpf_enter_acpi_sleep_t      enter_acpi_sleep;
+        xenpf_change_freq_t           change_freq;
+        xenpf_getidletime_t           getidletime;
+        xenpf_set_processor_pminfo_t  set_pminfo;
+        xenpf_pcpuinfo_t              pcpu_info;
+        xenpf_pcpu_version_t          pcpu_version;
+        xenpf_cpu_ol_t                cpu_ol;
+        xenpf_cpu_hotadd_t            cpu_add;
+        xenpf_mem_hotadd_t            mem_add;
+        xenpf_core_parking_t          core_parking;
+        xenpf_resource_op_t           resource_op;
+        xenpf_symdata_t               symdata;
+        uint8_t                       pad[128];
     } u;
 };
 typedef struct xen_platform_op xen_platform_op_t;
--- a/xen/include/public/pmu.h
+++ b/xen/include/public/pmu.h
@@ -127,7 +127,7 @@ struct xen_pmu_data {
     uint8_t pad[6];
 
     /* Architecture-specific information */
-    struct xen_pmu_arch pmu;
+    xen_pmu_arch_t pmu;
 };
 
 #endif /* __XEN_PUBLIC_PMU_H__ */
--- a/xen/include/public/xen.h
+++ b/xen/include/public/xen.h
@@ -726,7 +726,7 @@ struct vcpu_info {
 #endif /* XEN_HAVE_PV_UPCALL_MASK */
     xen_ulong_t evtchn_pending_sel;
     struct arch_vcpu_info arch;
-    struct vcpu_time_info time;
+    vcpu_time_info_t time;
 }; /* 64 bytes (x86) */
 #ifndef __XEN__
 typedef struct vcpu_info vcpu_info_t;
@@ -1031,6 +1031,7 @@ struct xenctl_bitmap {
     XEN_GUEST_HANDLE_64(uint8) bitmap;
     uint32_t nr_bits;
 };
+typedef struct xenctl_bitmap xenctl_bitmap_t;
 #endif
 
 #endif /* defined(__XEN__) || defined(__XEN_TOOLS__) */
--- a/xen/include/public/xsm/flask_op.h
+++ b/xen/include/public/xsm/flask_op.h
@@ -33,10 +33,12 @@ struct xen_flask_load {
     XEN_GUEST_HANDLE(char) buffer;
     uint32_t size;
 };
+typedef struct xen_flask_load xen_flask_load_t;
 
 struct xen_flask_setenforce {
     uint32_t enforcing;
 };
+typedef struct xen_flask_setenforce xen_flask_setenforce_t;
 
 struct xen_flask_sid_context {
     /* IN/OUT: sid to convert to/from string */
@@ -47,6 +49,7 @@ struct xen_flask_sid_context {
     uint32_t size;
     XEN_GUEST_HANDLE(char) context;
 };
+typedef struct xen_flask_sid_context xen_flask_sid_context_t;
 
 struct xen_flask_access {
     /* IN: access request */
@@ -60,6 +63,7 @@ struct xen_flask_access {
     uint32_t audit_deny;
     uint32_t seqno;
 };
+typedef struct xen_flask_access xen_flask_access_t;
 
 struct xen_flask_transition {
     /* IN: transition SIDs and class */
@@ -69,6 +73,7 @@ struct xen_flask_transition {
     /* OUT: new SID */
     uint32_t newsid;
 };
+typedef struct xen_flask_transition xen_flask_transition_t;
 
 #if __XEN_INTERFACE_VERSION__ < 0x00040800
 struct xen_flask_userlist {
@@ -106,11 +111,13 @@ struct xen_flask_boolean {
      */
     XEN_GUEST_HANDLE(char) name;
 };
+typedef struct xen_flask_boolean xen_flask_boolean_t;
 
 struct xen_flask_setavc_threshold {
     /* IN */
     uint32_t threshold;
 };
+typedef struct xen_flask_setavc_threshold xen_flask_setavc_threshold_t;
 
 struct xen_flask_hash_stats {
     /* OUT */
@@ -119,6 +126,7 @@ struct xen_flask_hash_stats {
     uint32_t buckets_total;
     uint32_t max_chain_len;
 };
+typedef struct xen_flask_hash_stats xen_flask_hash_stats_t;
 
 struct xen_flask_cache_stats {
     /* IN */
@@ -131,6 +139,7 @@ struct xen_flask_cache_stats {
     uint32_t reclaims;
     uint32_t frees;
 };
+typedef struct xen_flask_cache_stats xen_flask_cache_stats_t;
 
 struct xen_flask_ocontext {
     /* IN */
@@ -138,6 +147,7 @@ struct xen_flask_ocontext {
     uint32_t sid;
     uint64_t low, high;
 };
+typedef struct xen_flask_ocontext xen_flask_ocontext_t;
 
 struct xen_flask_peersid {
     /* IN */
@@ -145,12 +155,14 @@ struct xen_flask_peersid {
     /* OUT */
     uint32_t sid;
 };
+typedef struct xen_flask_peersid xen_flask_peersid_t;
 
 struct xen_flask_relabel {
     /* IN */
     uint32_t domid;
     uint32_t sid;
 };
+typedef struct xen_flask_relabel xen_flask_relabel_t;
 
 struct xen_flask_devicetree_label {
     /* IN */
@@ -158,6 +170,7 @@ struct xen_flask_devicetree_label {
     uint32_t length;
     XEN_GUEST_HANDLE(char) path;
 };
+typedef struct xen_flask_devicetree_label xen_flask_devicetree_label_t;
 
 struct xen_flask_op {
     uint32_t cmd;
@@ -188,26 +201,26 @@ struct xen_flask_op {
 #define FLASK_DEVICETREE_LABEL  25
     uint32_t interface_version; /* XEN_FLASK_INTERFACE_VERSION */
     union {
-        struct xen_flask_load load;
-        struct xen_flask_setenforce enforce;
+        xen_flask_load_t load;
+        xen_flask_setenforce_t enforce;
         /* FLASK_CONTEXT_TO_SID and FLASK_SID_TO_CONTEXT */
-        struct xen_flask_sid_context sid_context;
-        struct xen_flask_access access;
+        xen_flask_sid_context_t sid_context;
+        xen_flask_access_t access;
         /* FLASK_CREATE, FLASK_RELABEL, FLASK_MEMBER */
-        struct xen_flask_transition transition;
+        xen_flask_transition_t transition;
 #if __XEN_INTERFACE_VERSION__ < 0x00040800
         struct xen_flask_userlist userlist;
 #endif
         /* FLASK_GETBOOL, FLASK_SETBOOL */
-        struct xen_flask_boolean boolean;
-        struct xen_flask_setavc_threshold setavc_threshold;
-        struct xen_flask_hash_stats hash_stats;
-        struct xen_flask_cache_stats cache_stats;
+        xen_flask_boolean_t boolean;
+        xen_flask_setavc_threshold_t setavc_threshold;
+        xen_flask_hash_stats_t hash_stats;
+        xen_flask_cache_stats_t cache_stats;
         /* FLASK_ADD_OCONTEXT, FLASK_DEL_OCONTEXT */
-        struct xen_flask_ocontext ocontext;
-        struct xen_flask_peersid peersid;
-        struct xen_flask_relabel relabel;
-        struct xen_flask_devicetree_label devicetree_label;
+        xen_flask_ocontext_t ocontext;
+        xen_flask_peersid_t peersid;
+        xen_flask_relabel_t relabel;
+        xen_flask_devicetree_label_t devicetree_label;
     } u;
 };
 typedef struct xen_flask_op xen_flask_op_t;
--- a/xen/tools/compat-build-header.py
+++ b/xen/tools/compat-build-header.py
@@ -3,7 +3,7 @@
 import re,sys
 
 pats = [
- [ r"__InClUdE__(.*)", r"#include\1\n#pragma pack(4)" ],
+ [ r"__InClUdE__(.*)", r"#include\1" ],
  [ r"__IfDeF__ (XEN_HAVE.*)", r"#ifdef \1" ],
  [ r"__ElSe__", r"#else" ],
  [ r"__EnDif__", r"#endif" ],
@@ -11,9 +11,11 @@ pats = [
  [ r"__UnDeF__", r"#undef" ],
  [ r"\"xen-compat.h\"", r"<public/xen-compat.h>" ],
  [ r"(struct|union|enum)\s+(xen_?)?(\w)", r"\1 compat_\3" ],
- [ r"@KeeP@", r"" ],
+ [ r"typedef(.*)@KeeP@(xen_?)?([\w]+)([^\w])",
+   r"typedef\1\2\3 __attribute__((__aligned__(__alignof(\1compat_\3))))\4" ],
  [ r"_t([^\w]|$)", r"_compat_t\1" ],
- [ r"(8|16|32|64)_compat_t([^\w]|$)", r"\1_t\2" ],
+ [ r"int(8|16|32|64_aligned)_compat_t([^\w]|$)", r"int\1_t\2" ],
+ [ r"(\su?int64(_compat)?)_T([^\w]|$)", r"\1_t\3" ],
  [ r"(^|[^\w])xen_?(\w*)_compat_t([^\w]|$$)", r"\1compat_\2_t\3" ],
  [ r"(^|[^\w])XEN_?", r"\1COMPAT_" ],
  [ r"(^|[^\w])Xen_?", r"\1Compat_" ],
--- a/xen/tools/compat-build-source.py
+++ b/xen/tools/compat-build-source.py
@@ -9,6 +9,7 @@ pats = [
  [ r"^\s*#\s*endif /\* (XEN_HAVE.*) \*/\s+", r"__EnDif__" ],
  [ r"^\s*#\s*define\s+([A-Z_]*_GUEST_HANDLE)", r"#define HIDE_\1" ],
  [ r"^\s*#\s*define\s+([a-z_]*_guest_handle)", r"#define hide_\1" ],
+ [ r"^\s*#\s*define\s+(u?int64)_aligned_t\s.*", r"typedef \1_T __attribute__((aligned(4))) \1_compat_T;" ],
  [ r"XEN_GUEST_HANDLE(_[0-9A-Fa-f]+)?", r"COMPAT_HANDLE" ],
 ];
 
--- a/xen/tools/get-fields.sh
+++ b/xen/tools/get-fields.sh
@@ -418,6 +418,21 @@ check_field ()
 			"}")
 				level=$(expr $level - 1) id=
 				;;
+			compat_*_t)
+				if [ $level = 2 ]
+				then
+					fields=" "
+					token="${token%_t}"
+					token="${token#compat_}"
+				fi
+				;;
+			evtchn_*_compat_t)
+				if [ $level = 2 -a $token != evtchn_port_compat_t ]
+				then
+					fields=" "
+					token="${token%_compat_t}"
+				fi
+				;;
 			[a-zA-Z]*)
 				id=$token
 				;;
@@ -464,6 +479,14 @@ build_check ()
 		"]")
 			arrlvl=$(expr $arrlvl - 1)
 			;;
+		compat_*_t)
+			if [ $level = 2 -a $token != compat_argo_port_t ]
+			then
+				fields=" "
+				token="${token%_t}"
+				token="${token#compat_}"
+			fi
+			;;
 		[a-zA-Z_]*)
 			test $level != 2 -o $arrlvl != 1 || id=$token
 			;;



From xen-devel-bounces@lists.xenproject.org Wed Jul 01 10:25:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jul 2020 10:25:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jqZw6-0001iB-LM; Wed, 01 Jul 2020 10:25:50 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=LFmw=AM=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jqZw5-0001i5-PD
 for xen-devel@lists.xenproject.org; Wed, 01 Jul 2020 10:25:49 +0000
X-Inumbo-ID: 3ae0e46e-bb85-11ea-86e8-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 3ae0e46e-bb85-11ea-86e8-12813bfff9fa;
 Wed, 01 Jul 2020 10:25:48 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id CD7BCAEF9;
 Wed,  1 Jul 2020 10:25:47 +0000 (UTC)
Subject: [PATCH v2 2/7] x86/mce: add compat struct checking for
 XEN_MC_inject_v2
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <bb6a96c6-b6b1-76ff-f9db-10bec0fb4ab1@suse.com>
Message-ID: <007679c8-84d5-2e91-e48e-68746741fb45@suse.com>
Date: Wed, 1 Jul 2020 12:25:48 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.9.0
MIME-Version: 1.0
In-Reply-To: <bb6a96c6-b6b1-76ff-f9db-10bec0fb4ab1@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Paul Durrant <paul@xen.org>,
 George Dunlap <George.Dunlap@eu.citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

84e364f2eda2 ("x86: add CMCI software injection interface") merely made
sure things would build, without any concern about things actually
working:
- despite the addition of xenctl_bitmap to xlat.lst, the resulting macro
  wasn't invoked anywhere (which would have led to recognizing that the
  structure appeared to have no fully compatible layout, despite the use
  of a 64-bit handle),
- the interface struct itself was neither added to xlat.lst (and the
  resulting macro then invoked) nor was any manual checking of
  individual fields added.

Adjust compat header generation logic to retain XEN_GUEST_HANDLE_64(),
which is intentionally laid out to be compatible between different size
guests. Invoke the missing checking (implicitly through CHECK_mc).

No change in the resulting generated code.

Fixes: 84e364f2eda2 ("x86: add CMCI software injection interface")
Signed-off-by: Jan Beulich <jbeulich@suse.com>
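
[Not part of the patch: a minimal Python sketch of the mechanism the two
hunks below adjust. The generator scripts apply ordered lists of regex
substitutions; the pattern lists here are simplified stand-ins (the real
scripts carry many more entries), illustrating how the round trip now
retains XEN_GUEST_HANDLE_64() while still mapping plain XEN_GUEST_HANDLE
to COMPAT_HANDLE.]

```python
import re

# Simplified stand-ins for the pattern lists touched by this patch.
source_pats = [
    # compat-build-source.py: no longer swallows a trailing _64
    [r"XEN_GUEST_HANDLE", r"COMPAT_HANDLE"],
]
header_pats = [
    # compat-build-header.py: restore the intentionally-compatible 64-bit handle
    [r"(^|[^\w])COMPAT_HANDLE_64\(", r"\1XEN_GUEST_HANDLE_64("],
]

def apply(pats, line):
    # Each pattern is applied in order, as in the real scripts.
    for pat, repl in pats:
        line = re.sub(pat, repl, line)
    return line

line = "    XEN_GUEST_HANDLE_64(uint8) bitmap;"
intermediate = apply(source_pats, line)   # COMPAT_HANDLE_64(...) form
final = apply(header_pats, intermediate)  # XEN_GUEST_HANDLE_64(...) retained
```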

--- a/xen/arch/x86/cpu/mcheck/mce.c
+++ b/xen/arch/x86/cpu/mcheck/mce.c
@@ -1312,10 +1312,12 @@ CHECK_FIELD_(struct, mc_fetch, fetch_id)
 CHECK_FIELD_(struct, mc_physcpuinfo, ncpus);
 # define CHECK_compat_mc_physcpuinfo struct mc_physcpuinfo
 
-#define CHECK_compat_mc_inject_v2   struct mc_inject_v2
+# define xen_ctl_bitmap              xenctl_bitmap
+
 CHECK_mc;
 # undef CHECK_compat_mc_fetch
 # undef CHECK_compat_mc_physcpuinfo
+# undef xen_ctl_bitmap
 
 # define xen_mc_info                 mc_info
 CHECK_mc_info;
--- a/xen/include/public/arch-x86/xen-mca.h
+++ b/xen/include/public/arch-x86/xen-mca.h
@@ -429,6 +429,7 @@ struct xen_mc_inject_v2 {
     uint32_t flags;
     xenctl_bitmap_t cpumap;
 };
+typedef struct xen_mc_inject_v2 xen_mc_inject_v2_t;
 #endif
 
 struct xen_mc {
@@ -441,7 +442,7 @@ struct xen_mc {
         xen_mc_msrinject_t         mc_msrinject;
         xen_mc_mceinject_t         mc_mceinject;
 #if defined(__XEN__) || defined(__XEN_TOOLS__)
-        struct xen_mc_inject_v2    mc_inject_v2;
+        xen_mc_inject_v2_t         mc_inject_v2;
 #endif
     } u;
 };
--- a/xen/include/xlat.lst
+++ b/xen/include/xlat.lst
@@ -25,6 +25,7 @@
 ?	mcinfo_recovery			arch-x86/xen-mca.h
 !	mc_fetch			arch-x86/xen-mca.h
 ?	mc_info				arch-x86/xen-mca.h
+?	mc_inject_v2			arch-x86/xen-mca.h
 ?	mc_mceinject			arch-x86/xen-mca.h
 ?	mc_msrinject			arch-x86/xen-mca.h
 ?	mc_notifydomain			arch-x86/xen-mca.h
--- a/xen/tools/compat-build-header.py
+++ b/xen/tools/compat-build-header.py
@@ -19,6 +19,7 @@ pats = [
  [ r"(^|[^\w])xen_?(\w*)_compat_t([^\w]|$$)", r"\1compat_\2_t\3" ],
  [ r"(^|[^\w])XEN_?", r"\1COMPAT_" ],
  [ r"(^|[^\w])Xen_?", r"\1Compat_" ],
+ [ r"(^|[^\w])COMPAT_HANDLE_64\(", r"\1XEN_GUEST_HANDLE_64(" ],
  [ r"(^|[^\w])long([^\w]|$$)", r"\1int\2" ]
 ];
 
--- a/xen/tools/compat-build-source.py
+++ b/xen/tools/compat-build-source.py
@@ -10,7 +10,7 @@ pats = [
  [ r"^\s*#\s*define\s+([A-Z_]*_GUEST_HANDLE)", r"#define HIDE_\1" ],
  [ r"^\s*#\s*define\s+([a-z_]*_guest_handle)", r"#define hide_\1" ],
  [ r"^\s*#\s*define\s+(u?int64)_aligned_t\s.*", r"typedef \1_T __attribute__((aligned(4))) \1_compat_T;" ],
- [ r"XEN_GUEST_HANDLE(_[0-9A-Fa-f]+)?", r"COMPAT_HANDLE" ],
+ [ r"XEN_GUEST_HANDLE", r"COMPAT_HANDLE" ],
 ];
 
 xlatf = open('xlat.lst', 'r')



From xen-devel-bounces@lists.xenproject.org Wed Jul 01 10:26:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jul 2020 10:26:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jqZxA-0001px-W4; Wed, 01 Jul 2020 10:26:56 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=LFmw=AM=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jqZx9-0001pp-8l
 for xen-devel@lists.xenproject.org; Wed, 01 Jul 2020 10:26:55 +0000
X-Inumbo-ID: 621d6e30-bb85-11ea-86e8-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 621d6e30-bb85-11ea-86e8-12813bfff9fa;
 Wed, 01 Jul 2020 10:26:54 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id A80B1AD04;
 Wed,  1 Jul 2020 10:26:53 +0000 (UTC)
Subject: [PATCH v2 3/7] x86/mce: bring hypercall subop compat checking in sync
 again
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <bb6a96c6-b6b1-76ff-f9db-10bec0fb4ab1@suse.com>
Message-ID: <5d53a2e3-716c-2213-96e5-9d37371c482c@suse.com>
Date: Wed, 1 Jul 2020 12:26:54 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.9.0
MIME-Version: 1.0
In-Reply-To: <bb6a96c6-b6b1-76ff-f9db-10bec0fb4ab1@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Paul Durrant <paul@xen.org>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Use a typedef in struct xen_mc also for the two subops "manually"
translated in the handler, just for consistency. No functional
change.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/cpu/mcheck/mce.c
+++ b/xen/arch/x86/cpu/mcheck/mce.c
@@ -1307,16 +1307,16 @@ CHECK_mcinfo_common;
 
 CHECK_FIELD_(struct, mc_fetch, flags);
 CHECK_FIELD_(struct, mc_fetch, fetch_id);
-# define CHECK_compat_mc_fetch       struct mc_fetch
+# define CHECK_mc_fetch              struct mc_fetch
 
 CHECK_FIELD_(struct, mc_physcpuinfo, ncpus);
-# define CHECK_compat_mc_physcpuinfo struct mc_physcpuinfo
+# define CHECK_mc_physcpuinfo        struct mc_physcpuinfo
 
 # define xen_ctl_bitmap              xenctl_bitmap
 
 CHECK_mc;
-# undef CHECK_compat_mc_fetch
-# undef CHECK_compat_mc_physcpuinfo
+# undef CHECK_mc_fetch
+# undef CHECK_mc_physcpuinfo
 # undef xen_ctl_bitmap
 
 # define xen_mc_info                 mc_info
--- a/xen/include/public/arch-x86/xen-mca.h
+++ b/xen/include/public/arch-x86/xen-mca.h
@@ -391,6 +391,7 @@ struct xen_mc_physcpuinfo {
     /* OUT */
     XEN_GUEST_HANDLE(xen_mc_logical_cpu_t) info;
 };
+typedef struct xen_mc_physcpuinfo xen_mc_physcpuinfo_t;
 
 #define XEN_MC_msrinject    4
 #define MC_MSRINJ_MAXMSRS       8
@@ -436,9 +437,9 @@ struct xen_mc {
     uint32_t cmd;
     uint32_t interface_version; /* XEN_MCA_INTERFACE_VERSION */
     union {
-        struct xen_mc_fetch        mc_fetch;
+        xen_mc_fetch_t             mc_fetch;
         xen_mc_notifydomain_t      mc_notifydomain;
-        struct xen_mc_physcpuinfo  mc_physcpuinfo;
+        xen_mc_physcpuinfo_t       mc_physcpuinfo;
         xen_mc_msrinject_t         mc_msrinject;
         xen_mc_mceinject_t         mc_mceinject;
 #if defined(__XEN__) || defined(__XEN_TOOLS__)



From xen-devel-bounces@lists.xenproject.org Wed Jul 01 10:27:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jul 2020 10:27:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jqZxV-0001sj-9D; Wed, 01 Jul 2020 10:27:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=LFmw=AM=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jqZxU-0001sS-He
 for xen-devel@lists.xenproject.org; Wed, 01 Jul 2020 10:27:16 +0000
X-Inumbo-ID: 6f07a1d8-bb85-11ea-8496-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6f07a1d8-bb85-11ea-8496-bc764e2007e4;
 Wed, 01 Jul 2020 10:27:16 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 54BF1AD04;
 Wed,  1 Jul 2020 10:27:15 +0000 (UTC)
Subject: [PATCH v2 4/7] x86/dmop: add compat struct checking for
 XEN_DMOP_map_mem_type_to_ioreq_server
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <bb6a96c6-b6b1-76ff-f9db-10bec0fb4ab1@suse.com>
Message-ID: <8bb00b11-7004-51c4-c679-83da922d085b@suse.com>
Date: Wed, 1 Jul 2020 12:27:16 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.9.0
MIME-Version: 1.0
In-Reply-To: <bb6a96c6-b6b1-76ff-f9db-10bec0fb4ab1@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Paul Durrant <paul@xen.org>,
 George Dunlap <George.Dunlap@eu.citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

This was forgotten when the subop was added.

Also take the opportunity to move the dm_op_relocate_memory entry in
xlat.lst to its designated place.

No change in the resulting generated code.

Fixes: ca2b511d3ff4 ("x86/ioreq server: add DMOP to map guest ram with p2m_ioreq_server to an ioreq server")
Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/hvm/dm.c
+++ b/xen/arch/x86/hvm/dm.c
@@ -730,6 +730,7 @@ CHECK_dm_op_modified_memory;
 CHECK_dm_op_set_mem_type;
 CHECK_dm_op_inject_event;
 CHECK_dm_op_inject_msi;
+CHECK_dm_op_map_mem_type_to_ioreq_server;
 CHECK_dm_op_remote_shutdown;
 CHECK_dm_op_relocate_memory;
 CHECK_dm_op_pin_memory_cacheattr;
--- a/xen/include/xlat.lst
+++ b/xen/include/xlat.lst
@@ -67,15 +67,16 @@
 ?	grant_entry_v2			grant_table.h
 ?	gnttab_swap_grant_ref		grant_table.h
 !	dm_op_buf			hvm/dm_op.h
-?	dm_op_relocate_memory		hvm/dm_op.h
 ?	dm_op_create_ioreq_server	hvm/dm_op.h
 ?	dm_op_destroy_ioreq_server	hvm/dm_op.h
 ?	dm_op_get_ioreq_server_info	hvm/dm_op.h
 ?	dm_op_inject_event		hvm/dm_op.h
 ?	dm_op_inject_msi		hvm/dm_op.h
 ?	dm_op_ioreq_server_range	hvm/dm_op.h
+?	dm_op_map_mem_type_to_ioreq_server hvm/dm_op.h
 ?	dm_op_modified_memory		hvm/dm_op.h
 ?	dm_op_pin_memory_cacheattr	hvm/dm_op.h
+?	dm_op_relocate_memory		hvm/dm_op.h
 ?	dm_op_remote_shutdown		hvm/dm_op.h
 ?	dm_op_set_ioreq_server_state	hvm/dm_op.h
 ?	dm_op_set_isa_irq_level		hvm/dm_op.h



From xen-devel-bounces@lists.xenproject.org Wed Jul 01 10:27:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jul 2020 10:27:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jqZxq-0001wd-Hy; Wed, 01 Jul 2020 10:27:38 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=LFmw=AM=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jqZxp-0001wT-KF
 for xen-devel@lists.xenproject.org; Wed, 01 Jul 2020 10:27:37 +0000
X-Inumbo-ID: 7b5abf4c-bb85-11ea-86e8-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7b5abf4c-bb85-11ea-86e8-12813bfff9fa;
 Wed, 01 Jul 2020 10:27:36 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 0A06EAD04;
 Wed,  1 Jul 2020 10:27:36 +0000 (UTC)
Subject: [PATCH v2 5/7] x86: generalize padding field handling
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <bb6a96c6-b6b1-76ff-f9db-10bec0fb4ab1@suse.com>
Message-ID: <83274416-2812-53c9-f8cb-23ebdf73782e@suse.com>
Date: Wed, 1 Jul 2020 12:27:37 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.9.0
MIME-Version: 1.0
In-Reply-To: <bb6a96c6-b6b1-76ff-f9db-10bec0fb4ab1@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Paul Durrant <paul@xen.org>,
 George Dunlap <George.Dunlap@eu.citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

The original intention was to ignore padding fields, but the pattern
matched only ones whose names started with an underscore. Also match
fields whose names are in line with the C spec by not having a leading
underscore. (Note that the leading ^ in the sed regexps was pointless
and hence gets dropped.)

This requires adjusting some vNUMA macros, to avoid triggering
"enumeration value ... not handled in switch" warnings, which - due to
-Werror - would cause the build to fail. (I have to admit that I find
these padding fields odd, when translation of the containing structure
is needed anyway.)

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
While skipping padding fields is pretty surely a reasonable thing to do
for translation macros, we may want to consider not ignoring them when
generating checking macros.
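
[Not part of the patch: the padding-name test that the get-fields.sh
hunks below adjust can be mimicked in a few lines of Python. This is a
sketch, not the actual shell code: a field counts as padding when
deleting the pad pattern from its name leaves nothing behind. (sed's s
without the g flag replaces only the first match; for the emptiness test
on plausible field names the difference does not matter.)]

```python
import re

def is_padding(field, pattern):
    # get-fields.sh ignores a field when stripping the pattern
    # from its name yields an empty string
    return re.sub(pattern, "", field) == ""

old = r"^_pad[0-9]*"   # only underscore-prefixed names matched
new = r"_?pad[0-9]*"   # also matches plain "pad", "pad1", ...

is_padding("_pad1", old)  # recognized before and after the fix
is_padding("pad1", old)   # missed before the fix: was translated
is_padding("pad1", new)   # recognized with the generalized pattern
```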

--- a/xen/common/compat/memory.c
+++ b/xen/common/compat/memory.c
@@ -354,10 +354,13 @@ int compat_memory_op(unsigned int cmd, X
                 return -EFAULT;
 
 #define XLAT_vnuma_topology_info_HNDL_vdistance_h(_d_, _s_)		\
+            case XLAT_vnuma_topology_info_vdistance_pad:                \
             guest_from_compat_handle((_d_)->vdistance.h, (_s_)->vdistance.h)
 #define XLAT_vnuma_topology_info_HNDL_vcpu_to_vnode_h(_d_, _s_)		\
+            case XLAT_vnuma_topology_info_vcpu_to_vnode_pad:            \
             guest_from_compat_handle((_d_)->vcpu_to_vnode.h, (_s_)->vcpu_to_vnode.h)
 #define XLAT_vnuma_topology_info_HNDL_vmemrange_h(_d_, _s_)		\
+            case XLAT_vnuma_topology_info_vmemrange_pad:                \
             guest_from_compat_handle((_d_)->vmemrange.h, (_s_)->vmemrange.h)
 
             XLAT_vnuma_topology_info(nat.vnuma, &cmp.vnuma);
--- a/xen/tools/get-fields.sh
+++ b/xen/tools/get-fields.sh
@@ -218,7 +218,7 @@ for line in sys.stdin.readlines():
 				fi
 				;;
 			[\,\;])
-				if [ $level = 2 -a -n "$(echo $id | $SED 's,^_pad[[:digit:]]*,,')" ]
+				if [ $level = 2 -a -n "$(echo $id | $SED 's,_\?pad[[:digit:]]*,,')" ]
 				then
 					if [ $kind = union ]
 					then
@@ -347,7 +347,7 @@ build_body ()
 			fi
 			;;
 		[\,\;])
-			if [ $level = 2 -a -n "$(echo $id | $SED 's,^_pad[[:digit:]]*,,')" ]
+			if [ $level = 2 -a -n "$(echo $id | $SED 's,_\?pad[[:digit:]]*,,')" ]
 			then
 				if [ -z "$array" -a -z "$array_type" ]
 				then
@@ -437,7 +437,7 @@ check_field ()
 				id=$token
 				;;
 			[\,\;])
-				if [ $level = 2 -a -n "$(echo $id | $SED 's,^_pad[[:digit:]]*,,')" ]
+				if [ $level = 2 -a -n "$(echo $id | $SED 's,_\?pad[[:digit:]]*,,')" ]
 				then
 					check_field $1 $2 $3.$id "$fields"
 					test "$token" != ";" || fields= id=
@@ -491,7 +491,7 @@ build_check ()
 			test $level != 2 -o $arrlvl != 1 || id=$token
 			;;
 		[\,\;])
-			if [ $level = 2 -a -n "$(echo $id | $SED 's,^_pad[[:digit:]]*,,')" ]
+			if [ $level = 2 -a -n "$(echo $id | $SED 's,_\?pad[[:digit:]]*,,')" ]
 			then
 				check_field $kind $1 $id "$fields"
 				test "$token" != ";" || fields= id=



From xen-devel-bounces@lists.xenproject.org Wed Jul 01 10:28:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jul 2020 10:28:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jqZyK-00023V-R3; Wed, 01 Jul 2020 10:28:08 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=LFmw=AM=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jqZyJ-00023G-Tn
 for xen-devel@lists.xenproject.org; Wed, 01 Jul 2020 10:28:07 +0000
X-Inumbo-ID: 8d8314e5-bb85-11ea-86e8-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 8d8314e5-bb85-11ea-86e8-12813bfff9fa;
 Wed, 01 Jul 2020 10:28:07 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 6AED6AD25;
 Wed,  1 Jul 2020 10:28:06 +0000 (UTC)
Subject: [PATCH v2 6/7] flask: drop dead compat translation code
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <bb6a96c6-b6b1-76ff-f9db-10bec0fb4ab1@suse.com>
Message-ID: <7711f68d-394e-a74f-81fa-51f8447174ce@suse.com>
Date: Wed, 1 Jul 2020 12:28:07 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.9.0
MIME-Version: 1.0
In-Reply-To: <bb6a96c6-b6b1-76ff-f9db-10bec0fb4ab1@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Paul Durrant <paul@xen.org>,
 George Dunlap <George.Dunlap@eu.citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@citrix.com>, Daniel de Graaf <dgdegra@tycho.nsa.gov>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Translation macros aren't needed at all (or else a devicetree_label
entry would have been missing), and userlist was removed quite some
time ago.

No functional change.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/include/xlat.lst
+++ b/xen/include/xlat.lst
@@ -148,14 +148,11 @@
 ?	xenoprof_init			xenoprof.h
 ?	xenoprof_passive		xenoprof.h
 ?	flask_access			xsm/flask_op.h
-!	flask_boolean			xsm/flask_op.h
 ?	flask_cache_stats		xsm/flask_op.h
 ?	flask_hash_stats		xsm/flask_op.h
-!	flask_load			xsm/flask_op.h
 ?	flask_ocontext			xsm/flask_op.h
 ?	flask_peersid			xsm/flask_op.h
 ?	flask_relabel			xsm/flask_op.h
 ?	flask_setavc_threshold		xsm/flask_op.h
 ?	flask_setenforce		xsm/flask_op.h
-!	flask_sid_context		xsm/flask_op.h
 ?	flask_transition		xsm/flask_op.h
--- a/xen/xsm/flask/flask_op.c
+++ b/xen/xsm/flask/flask_op.c
@@ -790,8 +790,6 @@ CHECK_flask_transition;
 #define xen_flask_load compat_flask_load
 #define flask_security_load compat_security_load
 
-#define xen_flask_userlist compat_flask_userlist
-
 #define xen_flask_sid_context compat_flask_sid_context
 #define flask_security_context compat_security_context
 #define flask_security_sid compat_security_sid


From xen-devel-bounces@lists.xenproject.org Wed Jul 01 10:28:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jul 2020 10:28:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jqZyf-00027m-30; Wed, 01 Jul 2020 10:28:29 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=LFmw=AM=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jqZyd-00027V-UV
 for xen-devel@lists.xenproject.org; Wed, 01 Jul 2020 10:28:27 +0000
X-Inumbo-ID: 996d8cb2-bb85-11ea-86e8-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 996d8cb2-bb85-11ea-86e8-12813bfff9fa;
 Wed, 01 Jul 2020 10:28:27 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 75294AD04;
 Wed,  1 Jul 2020 10:28:26 +0000 (UTC)
Subject: [PATCH v2 7/7] x86: only generate compat headers actually needed
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <bb6a96c6-b6b1-76ff-f9db-10bec0fb4ab1@suse.com>
Message-ID: <5892f237-cfcf-eb19-058c-bd4f45c7bc97@suse.com>
Date: Wed, 1 Jul 2020 12:28:27 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.9.0
MIME-Version: 1.0
In-Reply-To: <bb6a96c6-b6b1-76ff-f9db-10bec0fb4ab1@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Paul Durrant <paul@xen.org>,
 George Dunlap <George.Dunlap@eu.citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

As was already the case for XSM/Flask, avoid generating compat headers
when they're not going to be needed. To address resulting build issues
- move compat/hvm/dm_op.h inclusion to the only source file needing it,
- add a little bit of #ifdef-ary.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
Alternatively we could consistently drop conditionals (except for per-
arch cases perhaps).

--- a/xen/arch/x86/hvm/dm.c
+++ b/xen/arch/x86/hvm/dm.c
@@ -717,6 +717,8 @@ static int dm_op(const struct dmop_args
     return rc;
 }
 
+#include <compat/hvm/dm_op.h>
+
 CHECK_dm_op_create_ioreq_server;
 CHECK_dm_op_get_ioreq_server_info;
 CHECK_dm_op_ioreq_server_range;
--- a/xen/common/compat/domain.c
+++ b/xen/common/compat/domain.c
@@ -11,7 +11,6 @@ EMIT_FILE;
 #include <xen/guest_access.h>
 #include <xen/hypercall.h>
 #include <compat/vcpu.h>
-#include <compat/hvm/hvm_vcpu.h>
 
 #define xen_vcpu_set_periodic_timer vcpu_set_periodic_timer
 CHECK_vcpu_set_periodic_timer;
@@ -25,6 +24,10 @@ CHECK_SIZE_(struct, vcpu_info);
 CHECK_vcpu_register_vcpu_info;
 #undef xen_vcpu_register_vcpu_info
 
+#ifdef CONFIG_HVM
+
+#include <compat/hvm/hvm_vcpu.h>
+
 #define xen_vcpu_hvm_context vcpu_hvm_context
 #define xen_vcpu_hvm_x86_32 vcpu_hvm_x86_32
 #define xen_vcpu_hvm_x86_64 vcpu_hvm_x86_64
@@ -33,6 +36,8 @@ CHECK_vcpu_hvm_context;
 #undef xen_vcpu_hvm_x86_32
 #undef xen_vcpu_hvm_context
 
+#endif
+
 int compat_vcpu_op(int cmd, unsigned int vcpuid, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     struct domain *d = current->domain;
@@ -49,6 +54,7 @@ int compat_vcpu_op(int cmd, unsigned int
         if ( v->vcpu_info == &dummy_vcpu_info )
             return -EINVAL;
 
+#ifdef CONFIG_HVM
         if ( is_hvm_vcpu(v) )
         {
             struct vcpu_hvm_context ctxt;
@@ -61,6 +67,7 @@ int compat_vcpu_op(int cmd, unsigned int
             domain_unlock(d);
         }
         else
+#endif
         {
             struct compat_vcpu_guest_context *ctxt;
 
--- a/xen/include/Makefile
+++ b/xen/include/Makefile
@@ -3,32 +3,34 @@ ifneq ($(CONFIG_COMPAT),)
 compat-arch-$(CONFIG_X86) := x86_32
 
 headers-y := \
-    compat/argo.h \
-    compat/callback.h \
+    compat/arch-$(compat-arch-y).h \
     compat/elfnote.h \
     compat/event_channel.h \
     compat/features.h \
-    compat/grant_table.h \
-    compat/hypfs.h \
-    compat/kexec.h \
     compat/memory.h \
     compat/nmi.h \
     compat/physdev.h \
     compat/platform.h \
+    compat/pmu.h \
     compat/sched.h \
-    compat/trace.h \
     compat/vcpu.h \
     compat/version.h \
     compat/xen.h \
-    compat/xenoprof.h
+    compat/xlat.h
 headers-$(CONFIG_X86)     += compat/arch-x86/pmu.h
 headers-$(CONFIG_X86)     += compat/arch-x86/xen-mca.h
 headers-$(CONFIG_X86)     += compat/arch-x86/xen.h
 headers-$(CONFIG_X86)     += compat/arch-x86/xen-$(compat-arch-y).h
-headers-$(CONFIG_X86)     += compat/hvm/dm_op.h
-headers-$(CONFIG_X86)     += compat/hvm/hvm_op.h
-headers-$(CONFIG_X86)     += compat/hvm/hvm_vcpu.h
-headers-y                 += compat/arch-$(compat-arch-y).h compat/pmu.h compat/xlat.h
+headers-$(CONFIG_ARGO)    += compat/argo.h
+headers-$(CONFIG_PV)      += compat/callback.h
+headers-$(CONFIG_GRANT_TABLE) += compat/grant_table.h
+headers-$(CONFIG_HVM)     += compat/hvm/dm_op.h
+headers-$(CONFIG_HVM)     += compat/hvm/hvm_op.h
+headers-$(CONFIG_HVM)     += compat/hvm/hvm_vcpu.h
+headers-$(CONFIG_HYPFS)   += compat/hypfs.h
+headers-$(CONFIG_KEXEC)   += compat/kexec.h
+headers-$(CONFIG_TRACEBUFFER) += compat/trace.h
+headers-$(CONFIG_XENOPROF) += compat/xenoprof.h
 headers-$(CONFIG_XSM_FLASK) += compat/xsm/flask_op.h
 
 cppflags-y                := -include public/xen-compat.h -DXEN_GENERATING_COMPAT_HEADERS
--- a/xen/include/xen/hypercall.h
+++ b/xen/include/xen/hypercall.h
@@ -216,8 +216,6 @@ extern long compat_argo_op(
     unsigned long arg4);
 #endif
 
-#include <compat/hvm/dm_op.h>
-
 extern int
 compat_dm_op(
     domid_t domid,



From xen-devel-bounces@lists.xenproject.org Wed Jul 01 10:31:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jul 2020 10:31:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jqa16-0002zn-HW; Wed, 01 Jul 2020 10:31:00 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=w4aC=AM=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jqa15-0002zh-8p
 for xen-devel@lists.xenproject.org; Wed, 01 Jul 2020 10:30:59 +0000
X-Inumbo-ID: f3319edc-bb85-11ea-8496-bc764e2007e4
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f3319edc-bb85-11ea-8496-bc764e2007e4;
 Wed, 01 Jul 2020 10:30:57 +0000 (UTC)
Authentication-Results: esa1.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: EG+C/FWo608QGe69kQsgOI93i17mm/+xN8MSm1J0oeAJj3Hp2EKxsdUIyB0V0REJYcxfEd3MWK
 gACVE4tHFJd6T33ud/dcqaQ84ZtSlZnEmfCa1OEtDtVTCBFMLW6y4KIWhW9FXoIhuILzyzOzIi
 HqaLyX94amQwUbnHZGt41H73lZsgQMAj/zdZFFp3qehTZSUNFSCmIbpT8ho6R7nxj2GXtcIf3/
 Nc/BXGz9cZ0SbYEHYdaCoU4yf5VIByAoiXTK5TiwOoA5QVQdLu/7NBBPv6gkhTdvvUPeXqZ2sh
 9MA=
X-SBRS: 2.7
X-MesageID: 21691752
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,299,1589256000"; d="scan'208";a="21691752"
Date: Wed, 1 Jul 2020 12:30:50 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: =?utf-8?Q?Micha=C5=82_Leszczy=C5=84ski?= <michal.leszczynski@cert.pl>
Subject: Re: [PATCH v4 04/10] x86/vmx: implement processor tracing for VMX
Message-ID: <20200701103050.GQ735@Air-de-Roger>
References: <cover.1593519420.git.michal.leszczynski@cert.pl>
 <70df90dad7e759f4bb3dba405dc45e372a57fab7.1593519420.git.michal.leszczynski@cert.pl>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <70df90dad7e759f4bb3dba405dc45e372a57fab7.1593519420.git.michal.leszczynski@cert.pl>
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin Tian <kevin.tian@intel.com>, tamas.lengyel@intel.com,
 Jun Nakajima <jun.nakajima@intel.com>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, luwei.kang@intel.com,
 Jan Beulich <jbeulich@suse.com>, xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Tue, Jun 30, 2020 at 02:33:47PM +0200, Michał Leszczyński wrote:
> From: Michal Leszczynski <michal.leszczynski@cert.pl>
> 
> Use Intel Processor Trace feature in order to
> provision vmtrace_pt_* features.
> 
> Signed-off-by: Michal Leszczynski <michal.leszczynski@cert.pl>
> ---
>  xen/arch/x86/hvm/vmx/vmx.c         | 89 ++++++++++++++++++++++++++++++
>  xen/include/asm-x86/hvm/hvm.h      | 38 +++++++++++++
>  xen/include/asm-x86/hvm/vmx/vmcs.h |  3 +
>  xen/include/asm-x86/hvm/vmx/vmx.h  | 14 +++++
>  4 files changed, 144 insertions(+)
> 
> diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
> index ab19d9424e..db3f051b40 100644
> --- a/xen/arch/x86/hvm/vmx/vmx.c
> +++ b/xen/arch/x86/hvm/vmx/vmx.c
> @@ -508,11 +508,24 @@ static void vmx_restore_host_msrs(void)
>  
>  static void vmx_save_guest_msrs(struct vcpu *v)
>  {
> +    uint64_t rtit_ctl;
> +
>      /*
>       * We cannot cache SHADOW_GS_BASE while the VCPU runs, as it can
>       * be updated at any time via SWAPGS, which we cannot trap.
>       */
>      v->arch.hvm.vmx.shadow_gs = rdgsshadow();
> +
> +    if ( unlikely(v->arch.hvm.vmx.pt_state &&
> +                  v->arch.hvm.vmx.pt_state->active) )
> +    {

Nit: define rtit_ctl here to reduce the scope.

> +        rdmsrl(MSR_RTIT_CTL, rtit_ctl);
> +        BUG_ON(rtit_ctl & RTIT_CTL_TRACEEN);
> +
> +        rdmsrl(MSR_RTIT_STATUS, v->arch.hvm.vmx.pt_state->status);
> +        rdmsrl(MSR_RTIT_OUTPUT_MASK,
> +               v->arch.hvm.vmx.pt_state->output_mask.raw);
> +    }
>  }
>  
>  static void vmx_restore_guest_msrs(struct vcpu *v)
> @@ -524,6 +537,17 @@ static void vmx_restore_guest_msrs(struct vcpu *v)
>  
>      if ( cpu_has_msr_tsc_aux )
>          wrmsr_tsc_aux(v->arch.msrs->tsc_aux);
> +
> +    if ( unlikely(v->arch.hvm.vmx.pt_state &&
> +                  v->arch.hvm.vmx.pt_state->active) )
> +    {
> +        wrmsrl(MSR_RTIT_OUTPUT_BASE,
> +               v->arch.hvm.vmx.pt_state->output_base);
> +        wrmsrl(MSR_RTIT_OUTPUT_MASK,
> +               v->arch.hvm.vmx.pt_state->output_mask.raw);
> +        wrmsrl(MSR_RTIT_STATUS,
> +               v->arch.hvm.vmx.pt_state->status);
> +    }
>  }
>  
>  void vmx_update_cpu_exec_control(struct vcpu *v)
> @@ -2240,6 +2264,60 @@ static bool vmx_get_pending_event(struct vcpu *v, struct x86_event *info)
>      return true;
>  }
>  
> +static int vmx_init_pt(struct vcpu *v)
> +{
> +    v->arch.hvm.vmx.pt_state = xzalloc(struct pt_state);
> +
> +    if ( !v->arch.hvm.vmx.pt_state )
> +        return -EFAULT;

-ENOMEM

> +
> +    if ( !v->arch.vmtrace.pt_buf )

Again, I'm quite sure this doesn't build, since pt_buf is introduced
in patch 5.

I will try to continue reviewing, but it's quite hard when fields not
yet introduced are used in the code, as I have no idea what they are.

> +        return -EINVAL;
> +
> +    if ( !v->domain->vmtrace_pt_size )
> +	return -EINVAL;

Indentation (hard tab), and could be joined with the previous check,
since both return -EINVAL.
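For illustration only, the two checks folded into one would look roughly like this (a standalone sketch, not the real code: the Xen structures are stubbed out here, and the real fields live under v->arch.vmtrace / v->domain):

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>

/* Minimal stand-ins for the Xen structures the patch touches; the
 * real fields live in struct vcpu / struct domain (xen/sched.h). */
struct page_info { int unused; };
struct domain { unsigned long vmtrace_pt_size; };
struct vcpu { struct domain *domain; struct page_info *pt_buf; };

/* The two -EINVAL checks from vmx_init_pt() folded into one. */
static int pt_prereq_check(const struct vcpu *v)
{
    if ( !v->pt_buf || !v->domain->vmtrace_pt_size )
        return -EINVAL;

    return 0;
}
```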

> +
> +    v->arch.hvm.vmx.pt_state->output_base = page_to_maddr(v->arch.vmtrace.pt_buf);
> +    v->arch.hvm.vmx.pt_state->output_mask.raw = v->domain->vmtrace_pt_size - 1;
> +
> +    if ( vmx_add_host_load_msr(v, MSR_RTIT_CTL, 0) )
> +        return -EFAULT;
> +
> +    if ( vmx_add_guest_msr(v, MSR_RTIT_CTL,
> +                              RTIT_CTL_TRACEEN | RTIT_CTL_OS |
> +                              RTIT_CTL_USR | RTIT_CTL_BRANCH_EN) )
> +        return -EFAULT;

I think I've already pointed this out before (in v2), but please don't
drop the returned error codes from vmx_add_host_load_msr and
vmx_add_guest_msr. Please store them in a local variable and return
those if != 0.
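I.e. something of this shape (untested sketch; the vmx_add_* helpers are stubbed here purely to show the error propagation, they are not the real signatures):

```c
#include <assert.h>
#include <errno.h>

/* Stubs standing in for vmx_add_host_load_msr()/vmx_add_guest_msr();
 * in Xen both return 0 on success or a negative errno value. */
static int stub_rc;
static int add_host_load_msr(void) { return stub_rc; }
static int add_guest_msr(void)     { return stub_rc; }

/* Propagate the callee's error instead of flattening it to -EFAULT. */
static int setup_rtit_msrs(void)
{
    int rc = add_host_load_msr();

    if ( rc )
        return rc;

    rc = add_guest_msr();
    if ( rc )
        return rc;

    return 0;
}
```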

> +
> +    return 0;
> +}
> +
> +static int vmx_destroy_pt(struct vcpu* v)
> +{
> +    if ( v->arch.hvm.vmx.pt_state )
> +        xfree(v->arch.hvm.vmx.pt_state);
> +
> +    v->arch.hvm.vmx.pt_state = NULL;
> +    return 0;
> +}

I think those should be part of vmx_vcpu_{initialise/destroy}; there's
no need to introduce new hooks for it? As the allocation size will be
known at domain creation already.

> +static int vmx_control_pt(struct vcpu *v, bool_t enable)

Plain bool.

> +{
> +    if ( !v->arch.hvm.vmx.pt_state )
> +        return -EINVAL;
> +
> +    v->arch.hvm.vmx.pt_state->active = enable;
> +    return 0;
> +}
> +
> +static int vmx_get_pt_offset(struct vcpu *v, uint64_t *offset)
> +{
> +    if ( !v->arch.hvm.vmx.pt_state )
> +        return -EINVAL;
> +
> +    *offset = v->arch.hvm.vmx.pt_state->output_mask.offset;
> +    return 0;
> +}
> +
>  static struct hvm_function_table __initdata vmx_function_table = {
>      .name                 = "VMX",
>      .cpu_up_prepare       = vmx_cpu_up_prepare,
> @@ -2295,6 +2373,10 @@ static struct hvm_function_table __initdata vmx_function_table = {
>      .altp2m_vcpu_update_vmfunc_ve = vmx_vcpu_update_vmfunc_ve,
>      .altp2m_vcpu_emulate_ve = vmx_vcpu_emulate_ve,
>      .altp2m_vcpu_emulate_vmfunc = vmx_vcpu_emulate_vmfunc,
> +    .vmtrace_init_pt = vmx_init_pt,
> +    .vmtrace_destroy_pt = vmx_destroy_pt,
> +    .vmtrace_control_pt = vmx_control_pt,
> +    .vmtrace_get_pt_offset = vmx_get_pt_offset,

As pointed out above, vmtrace_init_pt and vmtrace_destroy_pt should
IMO be dropped and instead done in vmx_vcpu_{initialise/destroy}.

>      .tsc_scaling = {
>          .max_ratio = VMX_TSC_MULTIPLIER_MAX,
>      },
> @@ -3674,6 +3756,13 @@ void vmx_vmexit_handler(struct cpu_user_regs *regs)
>  
>      hvm_invalidate_regs_fields(regs);
>  
> +    if ( unlikely(v->arch.hvm.vmx.pt_state &&
> +                  v->arch.hvm.vmx.pt_state->active) )
> +    {
> +        rdmsrl(MSR_RTIT_OUTPUT_MASK,
> +               v->arch.hvm.vmx.pt_state->output_mask.raw);
> +    }
> +
>      if ( paging_mode_hap(v->domain) )
>      {
>          /*
> diff --git a/xen/include/asm-x86/hvm/hvm.h b/xen/include/asm-x86/hvm/hvm.h
> index 1eb377dd82..8f194889e5 100644
> --- a/xen/include/asm-x86/hvm/hvm.h
> +++ b/xen/include/asm-x86/hvm/hvm.h
> @@ -214,6 +214,12 @@ struct hvm_function_table {
>      bool_t (*altp2m_vcpu_emulate_ve)(struct vcpu *v);
>      int (*altp2m_vcpu_emulate_vmfunc)(const struct cpu_user_regs *regs);
>  
> +    /* vmtrace */
> +    int (*vmtrace_init_pt)(struct vcpu *v);
> +    int (*vmtrace_destroy_pt)(struct vcpu *v);
> +    int (*vmtrace_control_pt)(struct vcpu *v, bool_t enable);
> +    int (*vmtrace_get_pt_offset)(struct vcpu *v, uint64_t *offset);
> +
>      /*
>       * Parameters and callbacks for hardware-assisted TSC scaling,
>       * which are valid only when the hardware feature is available.
> @@ -655,6 +661,38 @@ static inline bool altp2m_vcpu_emulate_ve(struct vcpu *v)
>      return false;
>  }
>  
> +static inline int vmtrace_init_pt(struct vcpu *v)
> +{
> +    if ( hvm_funcs.vmtrace_init_pt )
> +        return hvm_funcs.vmtrace_init_pt(v);
> +
> +    return -EOPNOTSUPP;
> +}
> +
> +static inline int vmtrace_destroy_pt(struct vcpu *v)
> +{
> +    if ( hvm_funcs.vmtrace_destroy_pt )
> +        return hvm_funcs.vmtrace_destroy_pt(v);
> +
> +    return -EOPNOTSUPP;
> +}
> +
> +static inline int vmtrace_control_pt(struct vcpu *v, bool_t enable)
> +{
> +    if ( hvm_funcs.vmtrace_control_pt )
> +        return hvm_funcs.vmtrace_control_pt(v, enable);
> +
> +    return -EOPNOTSUPP;
> +}
> +
> +static inline int vmtrace_get_pt_offset(struct vcpu *v, uint64_t *offset)
> +{
> +    if ( hvm_funcs.vmtrace_get_pt_offset )
> +        return hvm_funcs.vmtrace_get_pt_offset(v, offset);
> +
> +    return -EOPNOTSUPP;
> +}
> +
>  /*
>   * This must be defined as a macro instead of an inline function,
>   * because it uses 'struct vcpu' and 'struct domain' which have
> diff --git a/xen/include/asm-x86/hvm/vmx/vmcs.h b/xen/include/asm-x86/hvm/vmx/vmcs.h
> index 0e9a0b8de6..64c0d82614 100644
> --- a/xen/include/asm-x86/hvm/vmx/vmcs.h
> +++ b/xen/include/asm-x86/hvm/vmx/vmcs.h
> @@ -186,6 +186,9 @@ struct vmx_vcpu {
>       * pCPU and wakeup the related vCPU.
>       */
>      struct pi_blocking_vcpu pi_blocking;
> +
> +    /* State of processor trace feature */
> +    struct pt_state      *pt_state;

I think it's fine to add this here for now, but we might also consider
putting it outside of an HVM-specific structure if it's to be used by
PV guests. Since all this is HVM specific I'm fine with adding it
here.

>  };
>  
>  int vmx_create_vmcs(struct vcpu *v);
> diff --git a/xen/include/asm-x86/hvm/vmx/vmx.h b/xen/include/asm-x86/hvm/vmx/vmx.h
> index 111ccd7e61..be7213d3c0 100644
> --- a/xen/include/asm-x86/hvm/vmx/vmx.h
> +++ b/xen/include/asm-x86/hvm/vmx/vmx.h
> @@ -689,4 +689,18 @@ typedef union ldt_or_tr_instr_info {
>      };
>  } ldt_or_tr_instr_info_t;
>  
> +/* Processor Trace state per vCPU */
> +struct pt_state {

Please use ipt_state here, since this is an Intel specific structure.

> +    bool_t active;

Plain bool.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Wed Jul 01 10:38:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jul 2020 10:38:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jqa8V-0003Hl-P8; Wed, 01 Jul 2020 10:38:39 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=w4aC=AM=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jqa8U-0003Hd-0R
 for xen-devel@lists.xenproject.org; Wed, 01 Jul 2020 10:38:38 +0000
X-Inumbo-ID: 04b53f50-bb87-11ea-86eb-12813bfff9fa
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 04b53f50-bb87-11ea-86eb-12813bfff9fa;
 Wed, 01 Jul 2020 10:38:36 +0000 (UTC)
Authentication-Results: esa1.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: R2TE0un+zsVHMM+MRn8fHq2WYJYqL4pXQHDlZu6k9kDazEQHlilYmW2SlcIQbCfPbiAthqkmIN
 6JKe84TTzgUzizHFDgj+kEj0f+CvXNkofeQit5omhtd7byFhMCivICwTvX/FJ4hi7htH5Oyt7I
 ACIEfTu/C9piTRzcU1vWfyBYCUHD6Bxb7FqxBGgspn+D73h2sqvI/NjBQ3m6P/pkk8gYSVa9Ex
 /Ql//GNfuNxQI/BiT3KcW//OWjgCEZGXStfc1ESnRyAJq4uBwzEPhC0WF0O6zx5S7LM8doyQg7
 O70=
X-SBRS: 2.7
X-MesageID: 21692216
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,299,1589256000"; d="scan'208";a="21692216"
Date: Wed, 1 Jul 2020 12:38:29 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: =?utf-8?Q?Micha=C5=82_Leszczy=C5=84ski?= <michal.leszczynski@cert.pl>
Subject: Re: [PATCH v4 05/10] common/domain: allocate vmtrace_pt_buffer
Message-ID: <20200701103829.GR735@Air-de-Roger>
References: <cover.1593519420.git.michal.leszczynski@cert.pl>
 <0e02c97054da6e367f740ab8d2574e2d255553c8.1593519420.git.michal.leszczynski@cert.pl>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <0e02c97054da6e367f740ab8d2574e2d255553c8.1593519420.git.michal.leszczynski@cert.pl>
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 tamas.lengyel@intel.com, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, luwei.kang@intel.com,
 Jan Beulich <jbeulich@suse.com>, xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Tue, Jun 30, 2020 at 02:33:48PM +0200, Michał Leszczyński wrote:
> From: Michal Leszczynski <michal.leszczynski@cert.pl>
> 
> Allocate processor trace buffer for each vCPU when the domain
> is created, deallocate trace buffers on domain destruction.
> 
> Signed-off-by: Michal Leszczynski <michal.leszczynski@cert.pl>
> ---
>  xen/arch/x86/domain.c        | 11 +++++++++++
>  xen/common/domain.c          | 32 ++++++++++++++++++++++++++++++++
>  xen/include/asm-x86/domain.h |  4 ++++
>  xen/include/xen/sched.h      |  4 ++++
>  4 files changed, 51 insertions(+)
> 
> diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
> index fee6c3931a..0d79fd390c 100644
> --- a/xen/arch/x86/domain.c
> +++ b/xen/arch/x86/domain.c
> @@ -2199,6 +2199,17 @@ int domain_relinquish_resources(struct domain *d)
>                  altp2m_vcpu_disable_ve(v);
>          }
>  
> +        for_each_vcpu ( d, v )
> +        {
> +            if ( !v->arch.vmtrace.pt_buf )
> +                continue;
> +
> +            vmtrace_destroy_pt(v);
> +
> +            free_domheap_pages(v->arch.vmtrace.pt_buf,
> +                get_order_from_bytes(v->domain->vmtrace_pt_size));
> +        }
> +
>          if ( is_pv_domain(d) )
>          {
>              for_each_vcpu ( d, v )
> diff --git a/xen/common/domain.c b/xen/common/domain.c
> index 27dcfbac8c..8513659ef8 100644
> --- a/xen/common/domain.c
> +++ b/xen/common/domain.c
> @@ -137,6 +137,31 @@ static void vcpu_destroy(struct vcpu *v)
>      free_vcpu_struct(v);
>  }
>  
> +static int vmtrace_alloc_buffers(struct vcpu *v)
> +{
> +    struct page_info *pg;
> +    uint64_t size = v->domain->vmtrace_pt_size;

IMO you would be better off just storing an order here (like it's
passed from the toolstack), which would avoid the checks and conversion
to an order. Also vmtrace_pt_size could be of type unsigned int
instead of uint64_t.
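For illustration, here is roughly what the validation collapses to if an order is stored instead of a byte count (helper names are made up, and PAGE_SHIFT is hardcoded to the x86 value):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define PAGE_SHIFT 12 /* x86 */

/*
 * With an order, "at least one page" and "power of two" hold by
 * construction; only the 4GB upper bound from the spec remains.
 */
static bool vmtrace_order_ok(unsigned int order)
{
    return order <= 32 - PAGE_SHIFT; /* 2^order pages <= 4GB */
}

static uint64_t vmtrace_bytes(unsigned int order)
{
    return (uint64_t)1 << (order + PAGE_SHIFT);
}
```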

> +
> +    if ( size < PAGE_SIZE || size > GB(4) || (size & (size - 1)) )
> +    {
> +        /*
> +         * We don't accept trace buffer size smaller than single page
> +         * and the upper bound is defined as 4GB in the specification.
> +         * The buffer size must be also a power of 2.
> +         */
> +        return -EINVAL;
> +    }
> +
> +    pg = alloc_domheap_pages(v->domain, get_order_from_bytes(size),
> +                             MEMF_no_refcount);
> +
> +    if ( !pg )
> +        return -ENOMEM;
> +
> +    v->arch.vmtrace.pt_buf = pg;

You can assign to pt_buf directly IMO, no need for the pg local
variable.

> +    return 0;
> +}
> +
>  struct vcpu *vcpu_create(struct domain *d, unsigned int vcpu_id)
>  {
>      struct vcpu *v;
> @@ -162,6 +187,9 @@ struct vcpu *vcpu_create(struct domain *d, unsigned int vcpu_id)
>      v->vcpu_id = vcpu_id;
>      v->dirty_cpu = VCPU_CPU_CLEAN;
>  
> +    if ( d->vmtrace_pt_size && vmtrace_alloc_buffers(v) != 0 )
> +        return NULL;

You are leaking the allocated v here, see other error paths below in
the function.
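The fixed error path would look something like this (standalone sketch with the allocators stubbed; in the real function the cleanup would go through the existing fail_* labels):

```c
#include <assert.h>
#include <errno.h>
#include <stdlib.h>

struct vcpu { int vcpu_id; };

/* Stand-ins for alloc_vcpu_struct()/free_vcpu_struct(). */
static struct vcpu *alloc_vcpu_struct(void)
{
    return calloc(1, sizeof(struct vcpu));
}
static void free_vcpu_struct(struct vcpu *v) { free(v); }

/* Toggled by the caller to simulate a failing buffer allocation. */
static int fail_vmtrace_alloc;
static int vmtrace_alloc_buffers(struct vcpu *v)
{
    (void)v;
    return fail_vmtrace_alloc ? -ENOMEM : 0;
}

/* On failure, free v before returning NULL instead of leaking it. */
static struct vcpu *vcpu_create_sketch(void)
{
    struct vcpu *v = alloc_vcpu_struct();

    if ( !v )
        return NULL;

    if ( vmtrace_alloc_buffers(v) != 0 )
        goto fail;

    return v;

 fail:
    free_vcpu_struct(v);
    return NULL;
}
```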

> +
>      spin_lock_init(&v->virq_lock);
>  
>      tasklet_init(&v->continue_hypercall_tasklet, NULL, NULL);
> @@ -188,6 +216,9 @@ struct vcpu *vcpu_create(struct domain *d, unsigned int vcpu_id)
>      if ( arch_vcpu_create(v) != 0 )
>          goto fail_sched;
>  
> +    if ( d->vmtrace_pt_size && vmtrace_init_pt(v) != 0 )
> +        goto fail_sched;
> +
>      d->vcpu[vcpu_id] = v;
>      if ( vcpu_id != 0 )
>      {
> @@ -422,6 +453,7 @@ struct domain *domain_create(domid_t domid,
>      d->shutdown_code = SHUTDOWN_CODE_INVALID;
>  
>      spin_lock_init(&d->pbuf_lock);
> +    spin_lock_init(&d->vmtrace_lock);
>  
>      rwlock_init(&d->vnuma_rwlock);
>  
> diff --git a/xen/include/asm-x86/domain.h b/xen/include/asm-x86/domain.h
> index 6fd94c2e14..b01c107f5c 100644
> --- a/xen/include/asm-x86/domain.h
> +++ b/xen/include/asm-x86/domain.h
> @@ -627,6 +627,10 @@ struct arch_vcpu
>      struct {
>          bool next_interrupt_enabled;
>      } monitor;
> +
> +    struct {
> +        struct page_info *pt_buf;
> +    } vmtrace;
>  };
>  
>  struct guest_memory_policy
> diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
> index ac53519d7f..48f0a61bbd 100644
> --- a/xen/include/xen/sched.h
> +++ b/xen/include/xen/sched.h
> @@ -457,6 +457,10 @@ struct domain
>      unsigned    pbuf_idx;
>      spinlock_t  pbuf_lock;
>  
> +    /* Used by vmtrace features */
> +    spinlock_t  vmtrace_lock;

Does this need to be per domain or rather per-vcpu? It's hard to tell
because there's no user of it in the patch.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Wed Jul 01 10:46:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jul 2020 10:46:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jqaGL-0004EF-Jr; Wed, 01 Jul 2020 10:46:45 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=T2yc=AM=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jqaGJ-0004Dz-Pn
 for xen-devel@lists.xenproject.org; Wed, 01 Jul 2020 10:46:43 +0000
X-Inumbo-ID: 267023de-bb88-11ea-b7bb-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 267023de-bb88-11ea-b7bb-bc764e2007e4;
 Wed, 01 Jul 2020 10:46:42 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 084D7AD04;
 Wed,  1 Jul 2020 10:46:42 +0000 (UTC)
Subject: Re: [PATCH v2] xen/displif: Protocol version 2
To: Oleksandr Andrushchenko <andr2000@gmail.com>,
 xen-devel@lists.xenproject.org, ian.jackson@eu.citrix.com, wl@xen.org
References: <20200701071923.18883-1-andr2000@gmail.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <dffd127d-c5a1-4c77-baa8-f1d931145bc4@suse.com>
Date: Wed, 1 Jul 2020 12:46:41 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.9.0
MIME-Version: 1.0
In-Reply-To: <20200701071923.18883-1-andr2000@gmail.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 01.07.20 09:19, Oleksandr Andrushchenko wrote:
> From: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
> 
> 1. Add protocol version as an integer
> 
> Version string, which is in fact an integer, is hard to handle in the
> code that supports different protocol versions. To simplify that
> also add the version as an integer.
> 
> 2. Pass buffer offset with XENDISPL_OP_DBUF_CREATE
> 
> There are cases when display data buffer is created with non-zero
> offset to the data start. Handle such cases and provide that offset
> while creating a display buffer.
> 
> 3. Add XENDISPL_OP_GET_EDID command
> 
> Add an optional request for reading Extended Display Identification
> Data (EDID) structure which allows better configuration of the
> display connectors over the configuration set in XenStore.
> With this change connectors may have multiple resolutions defined
> with respect to detailed timing definitions and additional properties
> normally provided by displays.
> 
> If this request is not supported by the backend then visible area
> is defined by the relevant XenStore's "resolution" property.
> 
> If backend provides extended display identification data (EDID) with
> XENDISPL_OP_GET_EDID request then EDID values must take precedence
> over the resolutions defined in XenStore.
> 
> 4. Bump protocol version to 2.
> 
> Signed-off-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>

Reviewed-by: Juergen Gross <jgross@suse.com>


Juergen


From xen-devel-bounces@lists.xenproject.org Wed Jul 01 10:47:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jul 2020 10:47:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jqaGe-0004G5-SY; Wed, 01 Jul 2020 10:47:04 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=w4aC=AM=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jqaGe-0004Ft-0A
 for xen-devel@lists.xenproject.org; Wed, 01 Jul 2020 10:47:04 +0000
X-Inumbo-ID: 31f05314-bb88-11ea-86ed-12813bfff9fa
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 31f05314-bb88-11ea-86ed-12813bfff9fa;
 Wed, 01 Jul 2020 10:47:02 +0000 (UTC)
Authentication-Results: esa2.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: pG4jg7OvnbFWBidfhZRN6IQe++xQtUxLfnDM5g1uod1gtreVBDUy1fN30pnH06DbRHqfLDTBSQ
 bNoJh8OWc+bZptyU13fjSWqzam++ZHRhocNDMUc8c1YKWdDK/m1m6FiOOlr3saLyAe4mBhgeyR
 bDhNEQMDMGDLk6oPh/6D3m8aGiRGZF7DC76fnqeXDwxPYcjCehOSF5fw2YerpVuyEGmDp3Vvkx
 cmb1aqk6bhzh2JYqz1BA2hwRaod8Y4U/qyJBkBdqvjGdLSsMxrQtIMUnt1RQe+I5KpA5aq9K9D
 AW0=
X-SBRS: 2.7
X-MesageID: 21388112
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,299,1589256000"; d="scan'208";a="21388112"
Date: Wed, 1 Jul 2020 12:46:51 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: =?utf-8?Q?Micha=C5=82_Leszczy=C5=84ski?= <michal.leszczynski@cert.pl>
Subject: Re: [PATCH v4 06/10] memory: batch processing in acquire_resource()
Message-ID: <20200701104651.GS735@Air-de-Roger>
References: <cover.1593519420.git.michal.leszczynski@cert.pl>
 <a317b169e3710a481bb4be066d9b878f27b3e66c.1593519420.git.michal.leszczynski@cert.pl>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <a317b169e3710a481bb4be066d9b878f27b3e66c.1593519420.git.michal.leszczynski@cert.pl>
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Julien Grall <julien@xen.org>, Stefano
 Stabellini <sstabellini@kernel.org>, tamas.lengyel@intel.com,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, luwei.kang@intel.com, Jan
 Beulich <jbeulich@suse.com>, xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Tue, Jun 30, 2020 at 02:33:49PM +0200, Michał Leszczyński wrote:
> From: Michal Leszczynski <michal.leszczynski@cert.pl>
> 
> Allow to acquire large resources by allowing acquire_resource()
> to process items in batches, using hypercall continuation.

This patch should be the first of the series IMO, since it can go in
independently of the rest, as it's a general improvement to
XENMEM_acquire_resource.

> Signed-off-by: Michal Leszczynski <michal.leszczynski@cert.pl>
> ---
>  xen/common/memory.c | 32 +++++++++++++++++++++++++++++---
>  1 file changed, 29 insertions(+), 3 deletions(-)
> 
> diff --git a/xen/common/memory.c b/xen/common/memory.c
> index 714077c1e5..3ab06581a2 100644
> --- a/xen/common/memory.c
> +++ b/xen/common/memory.c
> @@ -1046,10 +1046,12 @@ static int acquire_grant_table(struct domain *d, unsigned int id,
>  }
>  
>  static int acquire_resource(
> -    XEN_GUEST_HANDLE_PARAM(xen_mem_acquire_resource_t) arg)
> +    XEN_GUEST_HANDLE_PARAM(xen_mem_acquire_resource_t) arg,
> +    unsigned long *start_extent)
>  {
>      struct domain *d, *currd = current->domain;
>      xen_mem_acquire_resource_t xmar;
> +    uint32_t total_frames;
>      /*
>       * The mfn_list and gfn_list (below) arrays are ok on stack for the
>       * moment since they are small, but if they need to grow in future
> @@ -1077,8 +1079,17 @@ static int acquire_resource(
>          return 0;
>      }
>  
> +    total_frames = xmar.nr_frames;
> +
> +    if ( *start_extent )
> +    {
> +        xmar.frame += *start_extent;
> +        xmar.nr_frames -= *start_extent;
> +        guest_handle_add_offset(xmar.frame_list, *start_extent);
> +    }
> +
>      if ( xmar.nr_frames > ARRAY_SIZE(mfn_list) )
> -        return -E2BIG;
> +        xmar.nr_frames = ARRAY_SIZE(mfn_list);
>  
>      rc = rcu_lock_remote_domain_by_id(xmar.domid, &d);
>      if ( rc )
> @@ -1135,6 +1146,14 @@ static int acquire_resource(
>          }
>      }
>  
> +    if ( !rc )
> +    {
> +        *start_extent += xmar.nr_frames;
> +
> +        if ( *start_extent != total_frames )
> +            rc = -ERESTART;
> +    }

I think you should add some kind of loop here: processing just 32
frames before preempting might be too little. You generally want to
loop doing batches of 32 entries until hypercall_preempt_check()
returns true.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Wed Jul 01 10:52:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jul 2020 10:52:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jqaLV-00058u-H7; Wed, 01 Jul 2020 10:52:05 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=r09v=AM=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jqaLU-00058a-U9
 for xen-devel@lists.xenproject.org; Wed, 01 Jul 2020 10:52:04 +0000
X-Inumbo-ID: e2635a34-bb88-11ea-86ed-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e2635a34-bb88-11ea-86ed-12813bfff9fa;
 Wed, 01 Jul 2020 10:51:57 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=6M19E7fHojNuFUmXDjBqxNxNvLv6cQ1/Wz9wr9qQhEY=; b=YSld8SmdpCbqRquzkPkZF7NnC
 uBiEINA/my6LYOtSHHBwqci7eiMLzJ/F53h0qEUWy/3clTmn0oMyRd/a34ZDKU0LidmA64yAmMcoD
 sy/z/dii/ktTcNtLezlj9p+zGn3gzc2lx+H7I3kDSZ7nbMb5yGP4iCyz+Zh0NYzzxRT+Y=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jqaLN-00088D-21; Wed, 01 Jul 2020 10:51:57 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jqaLM-0002ND-Lj; Wed, 01 Jul 2020 10:51:56 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jqaLM-0004pB-L6; Wed, 01 Jul 2020 10:51:56 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151491-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 151491: tolerable FAIL - PUSHED
X-Osstest-Failures: xen-unstable:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:allowable
 xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
X-Osstest-Versions-This: xen=23ca7ec0ba620db52a646d80e22f9703a6589f66
X-Osstest-Versions-That: xen=da53345dd5ff7d3a34e83587fd375c0b7722f46c
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 01 Jul 2020 10:51:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151491 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151491/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     18 guest-localmigrate/x10   fail REGR. vs. 151461

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 151461
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 151461
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 151461
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 151461
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 151461
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 151461
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 151461
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 151461
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop             fail like 151461
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass

version targeted for testing:
 xen                  23ca7ec0ba620db52a646d80e22f9703a6589f66
baseline version:
 xen                  da53345dd5ff7d3a34e83587fd375c0b7722f46c

Last test of basis   151461  2020-06-29 19:39:16 Z    1 days
Failing since        151473  2020-06-30 08:47:12 Z    1 days    2 attempts
Testing same since   151491  2020-06-30 22:36:56 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Olaf Hering <olaf@aepfle.de>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   da53345dd5..23ca7ec0ba  23ca7ec0ba620db52a646d80e22f9703a6589f66 -> master


From xen-devel-bounces@lists.xenproject.org Wed Jul 01 10:52:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jul 2020 10:52:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jqaLp-0005Aw-Te; Wed, 01 Jul 2020 10:52:25 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=w4aC=AM=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jqaLo-0005Am-Lr
 for xen-devel@lists.xenproject.org; Wed, 01 Jul 2020 10:52:24 +0000
X-Inumbo-ID: f1b7a509-bb88-11ea-86ed-12813bfff9fa
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f1b7a509-bb88-11ea-86ed-12813bfff9fa;
 Wed, 01 Jul 2020 10:52:23 +0000 (UTC)
Authentication-Results: esa6.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: /e81gyTLqkFFAWY9nm+5cjUP9KpjGf82wUwOjctenLDDD+S+A+VHslJCmNuHR316ru/v6UhQi/
 ZRpKvDaS1PnJd8efKGfr3wT1dCXYXqLa7MRMykvmk6+VR+tPpedxkO1gmFddANPkkBuwtuV+7w
 UhACHRE1gsYJIWIRQs7r1scAPUf7sBJ9dO7j+QjBqdBGmjpTUhtLOEYacWX4gvepSCzFefn706
 fQ3Oj0uoDR5Ul68+/oyOGx+rgdMOrrrcfRcG7U9DHLAT7SeAIIyAh8v4+/qJ2X5xDW/Eq7JPco
 AI0=
X-SBRS: 2.7
X-MesageID: 21718948
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,299,1589256000"; d="scan'208";a="21718948"
Date: Wed, 1 Jul 2020 12:52:16 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: =?utf-8?Q?Micha=C5=82_Leszczy=C5=84ski?= <michal.leszczynski@cert.pl>
Subject: Re: [PATCH v4 07/10] x86/mm: add vmtrace_buf resource type
Message-ID: <20200701105216.GT735@Air-de-Roger>
References: <cover.1593519420.git.michal.leszczynski@cert.pl>
 <2446caa5be5eca36f0b5ca47d2edcbd6f7792484.1593519420.git.michal.leszczynski@cert.pl>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <2446caa5be5eca36f0b5ca47d2edcbd6f7792484.1593519420.git.michal.leszczynski@cert.pl>
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 tamas.lengyel@intel.com, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, luwei.kang@intel.com,
 Jan Beulich <jbeulich@suse.com>, xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Tue, Jun 30, 2020 at 02:33:50PM +0200, Michał Leszczyński wrote:
> From: Michal Leszczynski <michal.leszczynski@cert.pl>
> 
> Allow to map processor trace buffer using
> acquire_resource().
> 
> Signed-off-by: Michal Leszczynski <michal.leszczynski@cert.pl>
> ---
>  xen/arch/x86/mm.c           | 25 +++++++++++++++++++++++++
>  xen/include/public/memory.h |  1 +
>  2 files changed, 26 insertions(+)
> 
> diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
> index e376fc7e8f..bb781bd90c 100644
> --- a/xen/arch/x86/mm.c
> +++ b/xen/arch/x86/mm.c
> @@ -4624,6 +4624,31 @@ int arch_acquire_resource(struct domain *d, unsigned int type,
>          }
>          break;
>      }
> +
> +    case XENMEM_resource_vmtrace_buf:
> +    {
> +        mfn_t mfn;
> +        unsigned int i;
> +        struct vcpu *v = domain_vcpu(d, id);

Missing blank line between the variable definitions and the code.

> +        rc = -EINVAL;
> +
> +        if ( !v )
> +            break;
> +
> +        if ( !v->arch.vmtrace.pt_buf )
> +            break;
> +
> +        mfn = page_to_mfn(v->arch.vmtrace.pt_buf);
> +
> +        if ( frame + nr_frames > (v->domain->vmtrace_pt_size >> PAGE_SHIFT) )
> +            break;

You can place all the checks done above in a single if.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Wed Jul 01 11:00:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jul 2020 11:00:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jqaTc-0006HR-6O; Wed, 01 Jul 2020 11:00:28 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=w4aC=AM=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jqaTb-0006HK-4h
 for xen-devel@lists.xenproject.org; Wed, 01 Jul 2020 11:00:27 +0000
X-Inumbo-ID: 10489faa-bb8a-11ea-86ee-12813bfff9fa
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 10489faa-bb8a-11ea-86ee-12813bfff9fa;
 Wed, 01 Jul 2020 11:00:26 +0000 (UTC)
Authentication-Results: esa2.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: WmC/VWMI7ryIhLYTErmqxUt0KrLTV7rpQ45yxN8Vo5LMUcRTWSVBgJWscCLrXHCNA8zLzQW4OK
 GM8V8r3JInUpqQY2F+kHS/liWiNPLc7/1I+8m6vd/W+TJuQMuy2cIDrW0msngnUncfPbBWDoSQ
 NhwGjdeOFC54FCJFqFskI4T/o+m47qZIe8Zf21J5IKmEWEfjrXoar30nzKX2k3SSqjb8+lfpRd
 R2IbMj+4IfSJLUwYZLUb+Br/LCmg/IfEUuGlk+0IchxJ3MYVtStIGU/arxMwMCrAYZsIs03XxC
 v+g=
X-SBRS: 2.7
X-MesageID: 21388763
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,300,1589256000"; d="scan'208";a="21388763"
Date: Wed, 1 Jul 2020 13:00:16 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: =?utf-8?Q?Micha=C5=82_Leszczy=C5=84ski?= <michal.leszczynski@cert.pl>
Subject: Re: [PATCH v4 08/10] x86/domctl: add XEN_DOMCTL_vmtrace_op
Message-ID: <20200701110016.GU735@Air-de-Roger>
References: <cover.1593519420.git.michal.leszczynski@cert.pl>
 <5578c50c2c1803ccd1c92d125c6b1febf1415a8a.1593519420.git.michal.leszczynski@cert.pl>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <5578c50c2c1803ccd1c92d125c6b1febf1415a8a.1593519420.git.michal.leszczynski@cert.pl>
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 tamas.lengyel@intel.com, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, luwei.kang@intel.com,
 Jan Beulich <jbeulich@suse.com>, xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Tue, Jun 30, 2020 at 02:33:51PM +0200, Michał Leszczyński wrote:
> From: Michal Leszczynski <michal.leszczynski@cert.pl>
> 
> Implement domctl to manage the runtime state of
> processor trace feature.
> 
> Signed-off-by: Michal Leszczynski <michal.leszczynski@cert.pl>
> ---
>  xen/arch/x86/domctl.c       | 48 +++++++++++++++++++++++++++++++++++++
>  xen/include/public/domctl.h | 26 ++++++++++++++++++++
>  2 files changed, 74 insertions(+)
> 
> diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
> index 6f2c69788d..a041b724d8 100644
> --- a/xen/arch/x86/domctl.c
> +++ b/xen/arch/x86/domctl.c
> @@ -322,6 +322,48 @@ void arch_get_domain_info(const struct domain *d,
>      info->arch_config.emulation_flags = d->arch.emulation_flags;
>  }
>  
> +static int do_vmtrace_op(struct domain *d, struct xen_domctl_vmtrace_op *op,
> +                         XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
> +{
> +    int rc;
> +    struct vcpu *v;
> +
> +    if ( !vmtrace_supported )
> +        return -EOPNOTSUPP;
> +
> +    if ( !is_hvm_domain(d) )
> +        return -EOPNOTSUPP;

You can join both checks.

> +
> +    if ( op->vcpu >= d->max_vcpus )
> +        return -EINVAL;
> +
> +    v = domain_vcpu(d, op->vcpu);
> +    rc = 0;

No need to init rc to zero; after the switch below it will always be
initialized.

> +
> +    switch ( op->cmd )
> +    {
> +    case XEN_DOMCTL_vmtrace_pt_enable:
> +    case XEN_DOMCTL_vmtrace_pt_disable:
> +        vcpu_pause(v);
> +        spin_lock(&d->vmtrace_lock);
> +
> +        rc = vmtrace_control_pt(v, op->cmd == XEN_DOMCTL_vmtrace_pt_enable);
> +
> +        spin_unlock(&d->vmtrace_lock);
> +        vcpu_unpause(v);
> +        break;
> +
> +    case XEN_DOMCTL_vmtrace_pt_get_offset:
> +        rc = vmtrace_get_pt_offset(v, &op->offset);

Since you don't pause the target vcpu here, I think you want to use
atomic operations to update v->arch.hvm.vmx.pt_state->output_mask.raw,
or else you could see inconsistent results if a vmexit updates it in
parallel.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Wed Jul 01 11:01:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jul 2020 11:01:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jqaUQ-0006Lq-Hm; Wed, 01 Jul 2020 11:01:18 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=rDAW=AM=citrix.com=ian.jackson@srs-us1.protection.inumbo.net>)
 id 1jqaUP-0006Ld-9s
 for xen-devel@lists.xenproject.org; Wed, 01 Jul 2020 11:01:17 +0000
X-Inumbo-ID: 2f045f40-bb8a-11ea-bb8b-bc764e2007e4
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2f045f40-bb8a-11ea-bb8b-bc764e2007e4;
 Wed, 01 Jul 2020 11:01:16 +0000 (UTC)
Authentication-Results: esa1.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: CUDjbD5JT1BLO+4XOj8GD7SIwJVKm01Z7/j6H6ADkbOtWJeZlbyap+Rww0tcwBbpgJgFTbsdDd
 zZuoc1wUrIUZmghm/zEqThVS2fhk9QB+DJqdZdzgTfIAKU3qct1jnjCG1zElodpmzxTGT8/aP0
 l7e3FYNC8iIRAVrb71vFB+FEjfOu0R50vA0BLhZ1Ja+bBa2SP8MNA9HcN/1EC++Y7oJqh60tXA
 oqfJb/Vvu90smKrYDhIho7779svaz0Z4tVK6dLybVTiLO4hlZUZ2KZKQmFonn/yAMpr69LQR2R
 f3c=
X-SBRS: 2.7
X-MesageID: 21693498
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,300,1589256000"; d="scan'208";a="21693498"
From: Ian Jackson <ian.jackson@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="iso-8859-2"
Content-Transfer-Encoding: 8bit
Message-ID: <24316.27891.433815.62003@mariner.uk.xensource.com>
Date: Wed, 1 Jul 2020 12:01:07 +0100
To: Tamas K Lengyel <tamas.k.lengyel@gmail.com>
Subject: Re: [PATCH v3 7/7] tools/proctrace: add proctrace tool
In-Reply-To: <CABfawhmpGEE0jq=vMicqdmf2nbMs-a4Y0nxBUN=JwOeA_H-YGQ@mail.gmail.com>
References: <1617453791.11443328.1592849168658.JavaMail.zimbra@cert.pl>
 <1786138246.11444015.1592849576272.JavaMail.zimbra@cert.pl>
 <20200626114824.mt2zsbwdbed5dtwj@liuwe-devbox-debian-v2>
 <24309.63267.596889.412833@mariner.uk.xensource.com>
 <CABfawhmpGEE0jq=vMicqdmf2nbMs-a4Y0nxBUN=JwOeA_H-YGQ@mail.gmail.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 =?iso-8859-2?Q?Micha=B3_Leszczy=F1ski?= <michal.leszczynski@cert.pl>,
 Tamas K Lengyel <tamas.lengyel@intel.com>, "Kang,
 Luwei" <luwei.kang@intel.com>, Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Tamas K Lengyel writes ("Re: [PATCH v3 7/7] tools/proctrace: add proctrace tool"):
> On Fri, Jun 26, 2020 at 7:26 AM Ian Jackson <ian.jackson@citrix.com> wrote:
> > Wei Liu writes ("Re: [PATCH v3 7/7] tools/proctrace: add proctrace tool"):
> > > On Mon, Jun 22, 2020 at 08:12:56PM +0200, Michał Leszczyński wrote:
> > > > Add a demonstration tool that uses xc_vmtrace_* calls in order
> > > > to manage external IPT monitoring for DomU.
> > ...
> > > > +    if (rc) {
> > > > +        fprintf(stderr, "Failed to call xc_vmtrace_pt_disable\n");
> > > > +        return 1;
> > > > +    }
> > > > +
> > >
> > > You should close fmem and xc in the exit path.
> >
> > Thanks for reviewing this.  I agree with your comments.  But I
> > disagree with this one.
> >
> > This is in main().  When the program exits, the xc handle and memory
> > mappings will go away as the kernel tears down the process.
> >
> > Leaving out this kind of cleanup in standalone command-line programs
> > is fine, I think.  It can make the code simpler - and it avoids the
> > risk of doing it wrong, which can turn errors into crashes.
> 
> Hi Ian,
> while I agree that this particular code would not be an issue,
> consider that these tools are often taken as sample codes to be reused
> in other contexts as well. As such, I think it should include the
> close bits as well and exhibit all the "best practices" we would like
> to see anyone else building tools for Xen.

Well, you're the author of this and I think you get to decide this
question (which is one of style).  If that is your view then Wei's
comment is certainly right, as far as it goes.

But looking at this program it seems to me that there is a great deal
of other stuff it allocates, one way or another, which it doesn't
free.

Is your intent that this program has this coding style?

   wombat = xc_allocate_wombat();
   if (bad(wombat)) {
     print_error("wombat");
     exit(-1);
   }

   hippo = xc_allocate_hippo();
   if (bad(hippo)) {
     print_error("hippo");
     xc_free_wombat(wombat);
     exit(-1);
   }

   zebra = xc_allocate_zebra();
   if (bad(zebra)) {
     print_error("zebra");
     xc_free_wombat(wombat);
     xc_free_hippo(hippo);
     exit(-1);
   }
   ...

I think this is an unhelpful coding style.  It inevitably leads to
leaks.  IMO if you are going to try to tear down all things, you
should do it like this:

   xc_wombat *wombat = NULL;
   xc_hippo *hippo = NULL;
   xc_zebra *zebra = NULL;

   wombat = xc_allocate_wombat();
   if (bad(wombat)) {
     print_error("wombat");
     goto exit_error;
   }

   hippo = xc_allocate_hippo();
   if (bad(hippo)) {
     print_error("hippo");
     goto exit_error;
   }

   zebra = xc_allocate_zebra();
   if (bad(zebra)) {
     print_error("zebra");
     goto exit_error;
   }

   ...
  exit_error:
   if (some(wombat)) xc_free_wombat(wombat);
   if (some(hippo)) xc_free_hippo(hippo);
   if (some(zebra)) xc_free_zebra(zebra);
   exit(-1);

or some similar approach that makes it easier to see that the code is
correct and leak-free.

But as I say I think as the author you get to decide.

Regards,
Ian.


From xen-devel-bounces@lists.xenproject.org Wed Jul 01 11:07:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jul 2020 11:07:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jqaaA-0006no-KP; Wed, 01 Jul 2020 11:07:14 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=T2yc=AM=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jqaaA-0006nj-74
 for xen-devel@lists.xenproject.org; Wed, 01 Jul 2020 11:07:14 +0000
X-Inumbo-ID: 040de5bc-bb8b-11ea-bca7-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 040de5bc-bb8b-11ea-bca7-bc764e2007e4;
 Wed, 01 Jul 2020 11:07:13 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id C3C48AC79;
 Wed,  1 Jul 2020 11:07:12 +0000 (UTC)
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org, x86@kernel.org,
 linux-kernel@vger.kernel.org, virtualization@lists.linux-foundation.org
Subject: [PATCH v2 0/4] Remove 32-bit Xen PV guest support
Date: Wed,  1 Jul 2020 13:06:46 +0200
Message-Id: <20200701110650.16172-1-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Deep Shah <sdeep@vmware.com>,
 "VMware, Inc." <pv-drivers@vmware.com>, Ingo Molnar <mingo@redhat.com>,
 Borislav Petkov <bp@alien8.de>, Andy Lutomirski <luto@kernel.org>,
 "H. Peter Anvin" <hpa@zytor.com>, Thomas Gleixner <tglx@linutronix.de>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

The long term plan has been to replace Xen PV guests by PVH. The first
victims of that plan are the 32-bit PV guests, as those are only rarely
used these days. Xen on x86 requires 64-bit hardware support anyway,
and with Grub2 officially supporting PVH since version 2.04 there is no
need to keep 32-bit PV guest support alive in the Linux kernel.
Additionally, the Meltdown mitigation is not available in a kernel
running as a 32-bit PV guest, so dropping this mode makes sense from a
security point of view, too.

Changes in V2:
- rebase to 5.8 kernel
- addressed comments to V1
- new patches 3 and 4

Juergen Gross (4):
  x86/xen: remove 32-bit Xen PV guest support
  x86/paravirt: remove 32-bit support from PARAVIRT_XXL
  x86/paravirt: cleanup paravirt macros
  x86/paravirt: use CONFIG_PARAVIRT_XXL instead of CONFIG_PARAVIRT

 arch/x86/entry/entry_32.S                   | 109 +------
 arch/x86/entry/entry_64.S                   |   4 +-
 arch/x86/entry/vdso/vdso32/vclock_gettime.c |   1 +
 arch/x86/include/asm/fixmap.h               |   2 +-
 arch/x86/include/asm/paravirt.h             | 107 +-----
 arch/x86/include/asm/paravirt_types.h       |  21 --
 arch/x86/include/asm/pgtable-3level_types.h |   5 -
 arch/x86/include/asm/proto.h                |   2 +-
 arch/x86/include/asm/required-features.h    |   2 +-
 arch/x86/include/asm/segment.h              |   6 +-
 arch/x86/kernel/cpu/common.c                |   8 -
 arch/x86/kernel/head_32.S                   |  31 --
 arch/x86/kernel/kprobes/core.c              |   1 -
 arch/x86/kernel/kprobes/opt.c               |   1 -
 arch/x86/kernel/paravirt.c                  |  18 --
 arch/x86/kernel/paravirt_patch.c            |  17 -
 arch/x86/xen/Kconfig                        |   3 +-
 arch/x86/xen/Makefile                       |   3 +-
 arch/x86/xen/apic.c                         |  17 -
 arch/x86/xen/enlighten_pv.c                 |  52 +--
 arch/x86/xen/mmu_pv.c                       | 340 +++-----------------
 arch/x86/xen/p2m.c                          |   6 +-
 arch/x86/xen/setup.c                        |  35 +-
 arch/x86/xen/smp_pv.c                       |  18 --
 arch/x86/xen/xen-asm.S                      | 182 ++++++++++-
 arch/x86/xen/xen-asm_32.S                   | 185 -----------
 arch/x86/xen/xen-asm_64.S                   | 181 -----------
 arch/x86/xen/xen-head.S                     |   6 -
 drivers/xen/Kconfig                         |   4 +-
 29 files changed, 232 insertions(+), 1135 deletions(-)
 delete mode 100644 arch/x86/xen/xen-asm_32.S
 delete mode 100644 arch/x86/xen/xen-asm_64.S

-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Wed Jul 01 11:07:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jul 2020 11:07:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jqaaE-0006o8-Tr; Wed, 01 Jul 2020 11:07:18 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=T2yc=AM=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jqaaD-0006nx-CV
 for xen-devel@lists.xenproject.org; Wed, 01 Jul 2020 11:07:17 +0000
X-Inumbo-ID: 05977038-bb8b-11ea-86ef-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 05977038-bb8b-11ea-86ef-12813bfff9fa;
 Wed, 01 Jul 2020 11:07:16 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 699ECAD71;
 Wed,  1 Jul 2020 11:07:15 +0000 (UTC)
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org, x86@kernel.org,
 virtualization@lists.linux-foundation.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 3/4] x86/paravirt: cleanup paravirt macros
Date: Wed,  1 Jul 2020 13:06:49 +0200
Message-Id: <20200701110650.16172-4-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20200701110650.16172-1-jgross@suse.com>
References: <20200701110650.16172-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>, Deep Shah <sdeep@vmware.com>, "VMware,
 Inc." <pv-drivers@vmware.com>, Ingo Molnar <mingo@redhat.com>,
 Borislav Petkov <bp@alien8.de>, "H. Peter Anvin" <hpa@zytor.com>,
 Thomas Gleixner <tglx@linutronix.de>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Some paravirt macros are no longer used, delete them.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 arch/x86/include/asm/paravirt.h | 15 ---------------
 1 file changed, 15 deletions(-)

diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
index cfe9f6e472b5..cff2fbd1edd5 100644
--- a/arch/x86/include/asm/paravirt.h
+++ b/arch/x86/include/asm/paravirt.h
@@ -609,16 +609,9 @@ bool __raw_callee_save___native_vcpu_is_preempted(long cpu);
 #endif /* SMP && PARAVIRT_SPINLOCKS */
 
 #ifdef CONFIG_X86_32
-#define PV_SAVE_REGS "pushl %ecx; pushl %edx;"
-#define PV_RESTORE_REGS "popl %edx; popl %ecx;"
-
 /* save and restore all caller-save registers, except return value */
 #define PV_SAVE_ALL_CALLER_REGS		"pushl %ecx;"
 #define PV_RESTORE_ALL_CALLER_REGS	"popl  %ecx;"
-
-#define PV_FLAGS_ARG "0"
-#define PV_EXTRA_CLOBBERS
-#define PV_VEXTRA_CLOBBERS
 #else
 /* save and restore all caller-save registers, except return value */
 #define PV_SAVE_ALL_CALLER_REGS						\
@@ -639,14 +632,6 @@ bool __raw_callee_save___native_vcpu_is_preempted(long cpu);
 	"pop %rsi;"							\
 	"pop %rdx;"							\
 	"pop %rcx;"
-
-/* We save some registers, but all of them, that's too much. We clobber all
- * caller saved registers but the argument parameter */
-#define PV_SAVE_REGS "pushq %%rdi;"
-#define PV_RESTORE_REGS "popq %%rdi;"
-#define PV_EXTRA_CLOBBERS EXTRA_CLOBBERS, "rcx" , "rdx", "rsi"
-#define PV_VEXTRA_CLOBBERS EXTRA_CLOBBERS, "rdi", "rcx" , "rdx", "rsi"
-#define PV_FLAGS_ARG "D"
 #endif
 
 /*
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Wed Jul 01 11:07:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jul 2020 11:07:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jqaaG-0006oP-7L; Wed, 01 Jul 2020 11:07:20 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=T2yc=AM=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jqaaF-0006nj-3H
 for xen-devel@lists.xenproject.org; Wed, 01 Jul 2020 11:07:19 +0000
X-Inumbo-ID: 0526e692-bb8b-11ea-b7bb-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0526e692-bb8b-11ea-b7bb-bc764e2007e4;
 Wed, 01 Jul 2020 11:07:15 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id A2612AD65;
 Wed,  1 Jul 2020 11:07:14 +0000 (UTC)
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org, x86@kernel.org,
 linux-kernel@vger.kernel.org, virtualization@lists.linux-foundation.org
Subject: [PATCH v2 2/4] x86/paravirt: remove 32-bit support from PARAVIRT_XXL
Date: Wed,  1 Jul 2020 13:06:48 +0200
Message-Id: <20200701110650.16172-3-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20200701110650.16172-1-jgross@suse.com>
References: <20200701110650.16172-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Deep Shah <sdeep@vmware.com>,
 "VMware, Inc." <pv-drivers@vmware.com>, Ingo Molnar <mingo@redhat.com>,
 Borislav Petkov <bp@alien8.de>, Andy Lutomirski <luto@kernel.org>,
 "H. Peter Anvin" <hpa@zytor.com>, Thomas Gleixner <tglx@linutronix.de>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

The last 32-bit user of stuff under CONFIG_PARAVIRT_XXL is gone.

Remove 32-bit specific parts.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 arch/x86/entry/vdso/vdso32/vclock_gettime.c |  1 +
 arch/x86/include/asm/paravirt.h             | 92 +++------------------
 arch/x86/include/asm/paravirt_types.h       | 21 -----
 arch/x86/include/asm/pgtable-3level_types.h |  5 --
 arch/x86/include/asm/segment.h              |  4 -
 arch/x86/kernel/cpu/common.c                |  8 --
 arch/x86/kernel/kprobes/core.c              |  1 -
 arch/x86/kernel/kprobes/opt.c               |  1 -
 arch/x86/kernel/paravirt.c                  | 18 ----
 arch/x86/kernel/paravirt_patch.c            | 17 ----
 arch/x86/xen/enlighten_pv.c                 |  6 --
 11 files changed, 13 insertions(+), 161 deletions(-)

diff --git a/arch/x86/entry/vdso/vdso32/vclock_gettime.c b/arch/x86/entry/vdso/vdso32/vclock_gettime.c
index 84a4a73f77f7..283ed9d00426 100644
--- a/arch/x86/entry/vdso/vdso32/vclock_gettime.c
+++ b/arch/x86/entry/vdso/vdso32/vclock_gettime.c
@@ -14,6 +14,7 @@
 #undef CONFIG_ILLEGAL_POINTER_VALUE
 #undef CONFIG_SPARSEMEM_VMEMMAP
 #undef CONFIG_NR_CPUS
+#undef CONFIG_PARAVIRT_XXL
 
 #define CONFIG_X86_32 1
 #define CONFIG_PGTABLE_LEVELS 2
diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
index 5ca5d297df75..cfe9f6e472b5 100644
--- a/arch/x86/include/asm/paravirt.h
+++ b/arch/x86/include/asm/paravirt.h
@@ -160,8 +160,6 @@ static inline void wbinvd(void)
 	PVOP_VCALL0(cpu.wbinvd);
 }
 
-#define get_kernel_rpl()  (pv_info.kernel_rpl)
-
 static inline u64 paravirt_read_msr(unsigned msr)
 {
 	return PVOP_CALL1(u64, cpu.read_msr, msr);
@@ -277,12 +275,10 @@ static inline void load_TLS(struct thread_struct *t, unsigned cpu)
 	PVOP_VCALL2(cpu.load_tls, t, cpu);
 }
 
-#ifdef CONFIG_X86_64
 static inline void load_gs_index(unsigned int gs)
 {
 	PVOP_VCALL1(cpu.load_gs_index, gs);
 }
-#endif
 
 static inline void write_ldt_entry(struct desc_struct *dt, int entry,
 				   const void *desc)
@@ -372,10 +368,7 @@ static inline pte_t __pte(pteval_t val)
 {
 	pteval_t ret;
 
-	if (sizeof(pteval_t) > sizeof(long))
-		ret = PVOP_CALLEE2(pteval_t, mmu.make_pte, val, (u64)val >> 32);
-	else
-		ret = PVOP_CALLEE1(pteval_t, mmu.make_pte, val);
+	ret = PVOP_CALLEE1(pteval_t, mmu.make_pte, val);
 
 	return (pte_t) { .pte = ret };
 }
@@ -384,11 +377,7 @@ static inline pteval_t pte_val(pte_t pte)
 {
 	pteval_t ret;
 
-	if (sizeof(pteval_t) > sizeof(long))
-		ret = PVOP_CALLEE2(pteval_t, mmu.pte_val,
-				   pte.pte, (u64)pte.pte >> 32);
-	else
-		ret = PVOP_CALLEE1(pteval_t, mmu.pte_val, pte.pte);
+	ret = PVOP_CALLEE1(pteval_t, mmu.pte_val, pte.pte);
 
 	return ret;
 }
@@ -397,10 +386,7 @@ static inline pgd_t __pgd(pgdval_t val)
 {
 	pgdval_t ret;
 
-	if (sizeof(pgdval_t) > sizeof(long))
-		ret = PVOP_CALLEE2(pgdval_t, mmu.make_pgd, val, (u64)val >> 32);
-	else
-		ret = PVOP_CALLEE1(pgdval_t, mmu.make_pgd, val);
+	ret = PVOP_CALLEE1(pgdval_t, mmu.make_pgd, val);
 
 	return (pgd_t) { ret };
 }
@@ -409,11 +395,7 @@ static inline pgdval_t pgd_val(pgd_t pgd)
 {
 	pgdval_t ret;
 
-	if (sizeof(pgdval_t) > sizeof(long))
-		ret =  PVOP_CALLEE2(pgdval_t, mmu.pgd_val,
-				    pgd.pgd, (u64)pgd.pgd >> 32);
-	else
-		ret =  PVOP_CALLEE1(pgdval_t, mmu.pgd_val, pgd.pgd);
+	ret =  PVOP_CALLEE1(pgdval_t, mmu.pgd_val, pgd.pgd);
 
 	return ret;
 }
@@ -433,51 +415,32 @@ static inline void ptep_modify_prot_commit(struct vm_area_struct *vma, unsigned
 					   pte_t *ptep, pte_t old_pte, pte_t pte)
 {
 
-	if (sizeof(pteval_t) > sizeof(long))
-		/* 5 arg words */
-		pv_ops.mmu.ptep_modify_prot_commit(vma, addr, ptep, pte);
-	else
-		PVOP_VCALL4(mmu.ptep_modify_prot_commit,
-			    vma, addr, ptep, pte.pte);
+	PVOP_VCALL4(mmu.ptep_modify_prot_commit, vma, addr, ptep, pte.pte);
 }
 
 static inline void set_pte(pte_t *ptep, pte_t pte)
 {
-	if (sizeof(pteval_t) > sizeof(long))
-		PVOP_VCALL3(mmu.set_pte, ptep, pte.pte, (u64)pte.pte >> 32);
-	else
-		PVOP_VCALL2(mmu.set_pte, ptep, pte.pte);
+	PVOP_VCALL2(mmu.set_pte, ptep, pte.pte);
 }
 
 static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
 			      pte_t *ptep, pte_t pte)
 {
-	if (sizeof(pteval_t) > sizeof(long))
-		/* 5 arg words */
-		pv_ops.mmu.set_pte_at(mm, addr, ptep, pte);
-	else
-		PVOP_VCALL4(mmu.set_pte_at, mm, addr, ptep, pte.pte);
+	PVOP_VCALL4(mmu.set_pte_at, mm, addr, ptep, pte.pte);
 }
 
 static inline void set_pmd(pmd_t *pmdp, pmd_t pmd)
 {
 	pmdval_t val = native_pmd_val(pmd);
 
-	if (sizeof(pmdval_t) > sizeof(long))
-		PVOP_VCALL3(mmu.set_pmd, pmdp, val, (u64)val >> 32);
-	else
-		PVOP_VCALL2(mmu.set_pmd, pmdp, val);
+	PVOP_VCALL2(mmu.set_pmd, pmdp, val);
 }
 
-#if CONFIG_PGTABLE_LEVELS >= 3
 static inline pmd_t __pmd(pmdval_t val)
 {
 	pmdval_t ret;
 
-	if (sizeof(pmdval_t) > sizeof(long))
-		ret = PVOP_CALLEE2(pmdval_t, mmu.make_pmd, val, (u64)val >> 32);
-	else
-		ret = PVOP_CALLEE1(pmdval_t, mmu.make_pmd, val);
+	ret = PVOP_CALLEE1(pmdval_t, mmu.make_pmd, val);
 
 	return (pmd_t) { ret };
 }
@@ -486,11 +449,7 @@ static inline pmdval_t pmd_val(pmd_t pmd)
 {
 	pmdval_t ret;
 
-	if (sizeof(pmdval_t) > sizeof(long))
-		ret =  PVOP_CALLEE2(pmdval_t, mmu.pmd_val,
-				    pmd.pmd, (u64)pmd.pmd >> 32);
-	else
-		ret =  PVOP_CALLEE1(pmdval_t, mmu.pmd_val, pmd.pmd);
+	ret =  PVOP_CALLEE1(pmdval_t, mmu.pmd_val, pmd.pmd);
 
 	return ret;
 }
@@ -499,12 +458,9 @@ static inline void set_pud(pud_t *pudp, pud_t pud)
 {
 	pudval_t val = native_pud_val(pud);
 
-	if (sizeof(pudval_t) > sizeof(long))
-		PVOP_VCALL3(mmu.set_pud, pudp, val, (u64)val >> 32);
-	else
-		PVOP_VCALL2(mmu.set_pud, pudp, val);
+	PVOP_VCALL2(mmu.set_pud, pudp, val);
 }
-#if CONFIG_PGTABLE_LEVELS >= 4
+
 static inline pud_t __pud(pudval_t val)
 {
 	pudval_t ret;
@@ -569,29 +525,6 @@ static inline void p4d_clear(p4d_t *p4dp)
 	set_p4d(p4dp, __p4d(0));
 }
 
-#endif	/* CONFIG_PGTABLE_LEVELS == 4 */
-
-#endif	/* CONFIG_PGTABLE_LEVELS >= 3 */
-
-#ifdef CONFIG_X86_PAE
-/* Special-case pte-setting operations for PAE, which can't update a
-   64-bit pte atomically */
-static inline void set_pte_atomic(pte_t *ptep, pte_t pte)
-{
-	PVOP_VCALL3(mmu.set_pte_atomic, ptep, pte.pte, pte.pte >> 32);
-}
-
-static inline void pte_clear(struct mm_struct *mm, unsigned long addr,
-			     pte_t *ptep)
-{
-	PVOP_VCALL3(mmu.pte_clear, mm, addr, ptep);
-}
-
-static inline void pmd_clear(pmd_t *pmdp)
-{
-	PVOP_VCALL1(mmu.pmd_clear, pmdp);
-}
-#else  /* !CONFIG_X86_PAE */
 static inline void set_pte_atomic(pte_t *ptep, pte_t pte)
 {
 	set_pte(ptep, pte);
@@ -607,7 +540,6 @@ static inline void pmd_clear(pmd_t *pmdp)
 {
 	set_pmd(pmdp, __pmd(0));
 }
-#endif	/* CONFIG_X86_PAE */
 
 #define  __HAVE_ARCH_START_CONTEXT_SWITCH
 static inline void arch_start_context_switch(struct task_struct *prev)
diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
index 732f62e04ddb..9d0c16315869 100644
--- a/arch/x86/include/asm/paravirt_types.h
+++ b/arch/x86/include/asm/paravirt_types.h
@@ -68,12 +68,7 @@ struct paravirt_callee_save {
 /* general info */
 struct pv_info {
 #ifdef CONFIG_PARAVIRT_XXL
-	unsigned int kernel_rpl;
-	int shared_kernel_pmd;
-
-#ifdef CONFIG_X86_64
 	u16 extra_user_64bit_cs;  /* __USER_CS if none */
-#endif
 #endif
 
 	const char *name;
@@ -126,9 +121,7 @@ struct pv_cpu_ops {
 	void (*set_ldt)(const void *desc, unsigned entries);
 	unsigned long (*store_tr)(void);
 	void (*load_tls)(struct thread_struct *t, unsigned int cpu);
-#ifdef CONFIG_X86_64
 	void (*load_gs_index)(unsigned int idx);
-#endif
 	void (*write_ldt_entry)(struct desc_struct *ldt, int entrynum,
 				const void *desc);
 	void (*write_gdt_entry)(struct desc_struct *,
@@ -263,21 +256,11 @@ struct pv_mmu_ops {
 	struct paravirt_callee_save pgd_val;
 	struct paravirt_callee_save make_pgd;
 
-#if CONFIG_PGTABLE_LEVELS >= 3
-#ifdef CONFIG_X86_PAE
-	void (*set_pte_atomic)(pte_t *ptep, pte_t pteval);
-	void (*pte_clear)(struct mm_struct *mm, unsigned long addr,
-			  pte_t *ptep);
-	void (*pmd_clear)(pmd_t *pmdp);
-
-#endif	/* CONFIG_X86_PAE */
-
 	void (*set_pud)(pud_t *pudp, pud_t pudval);
 
 	struct paravirt_callee_save pmd_val;
 	struct paravirt_callee_save make_pmd;
 
-#if CONFIG_PGTABLE_LEVELS >= 4
 	struct paravirt_callee_save pud_val;
 	struct paravirt_callee_save make_pud;
 
@@ -290,10 +273,6 @@ struct pv_mmu_ops {
 	void (*set_pgd)(pgd_t *pgdp, pgd_t pgdval);
 #endif	/* CONFIG_PGTABLE_LEVELS >= 5 */
 
-#endif	/* CONFIG_PGTABLE_LEVELS >= 4 */
-
-#endif	/* CONFIG_PGTABLE_LEVELS >= 3 */
-
 	struct pv_lazy_ops lazy_mode;
 
 	/* dom0 ops */
diff --git a/arch/x86/include/asm/pgtable-3level_types.h b/arch/x86/include/asm/pgtable-3level_types.h
index 80fbb4a9ed87..56baf43befb4 100644
--- a/arch/x86/include/asm/pgtable-3level_types.h
+++ b/arch/x86/include/asm/pgtable-3level_types.h
@@ -20,12 +20,7 @@ typedef union {
 } pte_t;
 #endif	/* !__ASSEMBLY__ */
 
-#ifdef CONFIG_PARAVIRT_XXL
-#define SHARED_KERNEL_PMD	((!static_cpu_has(X86_FEATURE_PTI) &&	\
-				 (pv_info.shared_kernel_pmd)))
-#else
 #define SHARED_KERNEL_PMD	(!static_cpu_has(X86_FEATURE_PTI))
-#endif
 
 #define ARCH_PAGE_TABLE_SYNC_MASK	(SHARED_KERNEL_PMD ? 0 : PGTBL_PMD_MODIFIED)
 
diff --git a/arch/x86/include/asm/segment.h b/arch/x86/include/asm/segment.h
index 9646c300f128..517920928989 100644
--- a/arch/x86/include/asm/segment.h
+++ b/arch/x86/include/asm/segment.h
@@ -222,10 +222,6 @@
 
 #endif
 
-#ifndef CONFIG_PARAVIRT_XXL
-# define get_kernel_rpl()		0
-#endif
-
 #define IDT_ENTRIES			256
 #define NUM_EXCEPTION_VECTORS		32
 
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index 043d93cdcaad..65cdfa433370 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -1396,15 +1396,7 @@ static void generic_identify(struct cpuinfo_x86 *c)
 	 * ESPFIX issue, we can change this.
 	 */
 #ifdef CONFIG_X86_32
-# ifdef CONFIG_PARAVIRT_XXL
-	do {
-		extern void native_iret(void);
-		if (pv_ops.cpu.iret == native_iret)
-			set_cpu_bug(c, X86_BUG_ESPFIX);
-	} while (0);
-# else
 	set_cpu_bug(c, X86_BUG_ESPFIX);
-# endif
 #endif
 }
 
diff --git a/arch/x86/kernel/kprobes/core.c b/arch/x86/kernel/kprobes/core.c
index ada39ddbc922..fa1b6f2f5222 100644
--- a/arch/x86/kernel/kprobes/core.c
+++ b/arch/x86/kernel/kprobes/core.c
@@ -780,7 +780,6 @@ __used __visible void *trampoline_handler(struct pt_regs *regs)
 	/* fixup registers */
 	regs->cs = __KERNEL_CS;
 #ifdef CONFIG_X86_32
-	regs->cs |= get_kernel_rpl();
 	regs->gs = 0;
 #endif
 	/* We use pt_regs->sp for return address holder. */
diff --git a/arch/x86/kernel/kprobes/opt.c b/arch/x86/kernel/kprobes/opt.c
index 7af4c61dde52..816f00e89d04 100644
--- a/arch/x86/kernel/kprobes/opt.c
+++ b/arch/x86/kernel/kprobes/opt.c
@@ -180,7 +180,6 @@ optimized_callback(struct optimized_kprobe *op, struct pt_regs *regs)
 		/* Save skipped registers */
 		regs->cs = __KERNEL_CS;
 #ifdef CONFIG_X86_32
-		regs->cs |= get_kernel_rpl();
 		regs->gs = 0;
 #endif
 		regs->ip = (unsigned long)op->kp.addr + INT3_INSN_SIZE;
diff --git a/arch/x86/kernel/paravirt.c b/arch/x86/kernel/paravirt.c
index 674a7d66d960..b318700c5ada 100644
--- a/arch/x86/kernel/paravirt.c
+++ b/arch/x86/kernel/paravirt.c
@@ -263,13 +263,8 @@ enum paravirt_lazy_mode paravirt_get_lazy_mode(void)
 struct pv_info pv_info = {
 	.name = "bare hardware",
 #ifdef CONFIG_PARAVIRT_XXL
-	.kernel_rpl = 0,
-	.shared_kernel_pmd = 1,	/* Only used when CONFIG_X86_PAE is set */
-
-#ifdef CONFIG_X86_64
 	.extra_user_64bit_cs = __USER_CS,
 #endif
-#endif
 };
 
 /* 64-bit pagetable entries */
@@ -305,9 +300,7 @@ struct paravirt_patch_template pv_ops = {
 	.cpu.load_idt		= native_load_idt,
 	.cpu.store_tr		= native_store_tr,
 	.cpu.load_tls		= native_load_tls,
-#ifdef CONFIG_X86_64
 	.cpu.load_gs_index	= native_load_gs_index,
-#endif
 	.cpu.write_ldt_entry	= native_write_ldt_entry,
 	.cpu.write_gdt_entry	= native_write_gdt_entry,
 	.cpu.write_idt_entry	= native_write_idt_entry,
@@ -317,9 +310,7 @@ struct paravirt_patch_template pv_ops = {
 
 	.cpu.load_sp0		= native_load_sp0,
 
-#ifdef CONFIG_X86_64
 	.cpu.usergs_sysret64	= native_usergs_sysret64,
-#endif
 	.cpu.iret		= native_iret,
 	.cpu.swapgs		= native_swapgs,
 
@@ -374,18 +365,11 @@ struct paravirt_patch_template pv_ops = {
 	.mmu.ptep_modify_prot_start	= __ptep_modify_prot_start,
 	.mmu.ptep_modify_prot_commit	= __ptep_modify_prot_commit,
 
-#if CONFIG_PGTABLE_LEVELS >= 3
-#ifdef CONFIG_X86_PAE
-	.mmu.set_pte_atomic	= native_set_pte_atomic,
-	.mmu.pte_clear		= native_pte_clear,
-	.mmu.pmd_clear		= native_pmd_clear,
-#endif
 	.mmu.set_pud		= native_set_pud,
 
 	.mmu.pmd_val		= PTE_IDENT,
 	.mmu.make_pmd		= PTE_IDENT,
 
-#if CONFIG_PGTABLE_LEVELS >= 4
 	.mmu.pud_val		= PTE_IDENT,
 	.mmu.make_pud		= PTE_IDENT,
 
@@ -397,8 +381,6 @@ struct paravirt_patch_template pv_ops = {
 
 	.mmu.set_pgd		= native_set_pgd,
 #endif /* CONFIG_PGTABLE_LEVELS >= 5 */
-#endif /* CONFIG_PGTABLE_LEVELS >= 4 */
-#endif /* CONFIG_PGTABLE_LEVELS >= 3 */
 
 	.mmu.pte_val		= PTE_IDENT,
 	.mmu.pgd_val		= PTE_IDENT,
diff --git a/arch/x86/kernel/paravirt_patch.c b/arch/x86/kernel/paravirt_patch.c
index 3eff63c090d2..ace6e334cb39 100644
--- a/arch/x86/kernel/paravirt_patch.c
+++ b/arch/x86/kernel/paravirt_patch.c
@@ -26,14 +26,10 @@ struct patch_xxl {
 	const unsigned char	mmu_read_cr3[3];
 	const unsigned char	mmu_write_cr3[3];
 	const unsigned char	irq_restore_fl[2];
-# ifdef CONFIG_X86_64
 	const unsigned char	cpu_wbinvd[2];
 	const unsigned char	cpu_usergs_sysret64[6];
 	const unsigned char	cpu_swapgs[3];
 	const unsigned char	mov64[3];
-# else
-	const unsigned char	cpu_iret[1];
-# endif
 };
 
 static const struct patch_xxl patch_data_xxl = {
@@ -42,7 +38,6 @@ static const struct patch_xxl patch_data_xxl = {
 	.irq_save_fl		= { 0x9c, 0x58 },	// pushf; pop %[re]ax
 	.mmu_read_cr2		= { 0x0f, 0x20, 0xd0 },	// mov %cr2, %[re]ax
 	.mmu_read_cr3		= { 0x0f, 0x20, 0xd8 },	// mov %cr3, %[re]ax
-# ifdef CONFIG_X86_64
 	.mmu_write_cr3		= { 0x0f, 0x22, 0xdf },	// mov %rdi, %cr3
 	.irq_restore_fl		= { 0x57, 0x9d },	// push %rdi; popfq
 	.cpu_wbinvd		= { 0x0f, 0x09 },	// wbinvd
@@ -50,19 +45,11 @@ static const struct patch_xxl patch_data_xxl = {
 				    0x48, 0x0f, 0x07 },	// swapgs; sysretq
 	.cpu_swapgs		= { 0x0f, 0x01, 0xf8 },	// swapgs
 	.mov64			= { 0x48, 0x89, 0xf8 },	// mov %rdi, %rax
-# else
-	.mmu_write_cr3		= { 0x0f, 0x22, 0xd8 },	// mov %eax, %cr3
-	.irq_restore_fl		= { 0x50, 0x9d },	// push %eax; popf
-	.cpu_iret		= { 0xcf },		// iret
-# endif
 };
 
 unsigned int paravirt_patch_ident_64(void *insn_buff, unsigned int len)
 {
-#ifdef CONFIG_X86_64
 	return PATCH(xxl, mov64, insn_buff, len);
-#endif
-	return 0;
 }
 # endif /* CONFIG_PARAVIRT_XXL */
 
@@ -98,13 +85,9 @@ unsigned int native_patch(u8 type, void *insn_buff, unsigned long addr,
 	PATCH_CASE(mmu, read_cr3, xxl, insn_buff, len);
 	PATCH_CASE(mmu, write_cr3, xxl, insn_buff, len);
 
-# ifdef CONFIG_X86_64
 	PATCH_CASE(cpu, usergs_sysret64, xxl, insn_buff, len);
 	PATCH_CASE(cpu, swapgs, xxl, insn_buff, len);
 	PATCH_CASE(cpu, wbinvd, xxl, insn_buff, len);
-# else
-	PATCH_CASE(cpu, iret, xxl, insn_buff, len);
-# endif
 #endif
 
 #ifdef CONFIG_PARAVIRT_SPINLOCKS
diff --git a/arch/x86/xen/enlighten_pv.c b/arch/x86/xen/enlighten_pv.c
index 44562d30878c..659e59140ef1 100644
--- a/arch/x86/xen/enlighten_pv.c
+++ b/arch/x86/xen/enlighten_pv.c
@@ -1002,8 +1002,6 @@ void __init xen_setup_vcpu_info_placement(void)
 }
 
 static const struct pv_info xen_info __initconst = {
-	.shared_kernel_pmd = 0,
-
 	.extra_user_64bit_cs = FLAT_USER_CS64,
 	.name = "Xen",
 };
@@ -1301,10 +1299,6 @@ asmlinkage __visible void __init xen_start_kernel(void)
 				   xen_start_info->nr_pages);
 	xen_reserve_special_pages();
 
-	/* keep using Xen gdt for now; no urgent need to change it */
-
-	pv_info.kernel_rpl = 0;
-
 	/* set the limit of our address space */
 	xen_reserve_top();
 
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Wed Jul 01 11:07:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jul 2020 11:07:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jqaaI-0006pl-Om; Wed, 01 Jul 2020 11:07:22 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=T2yc=AM=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jqaaI-0006nx-At
 for xen-devel@lists.xenproject.org; Wed, 01 Jul 2020 11:07:22 +0000
X-Inumbo-ID: 0631807e-bb8b-11ea-86ef-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0631807e-bb8b-11ea-86ef-12813bfff9fa;
 Wed, 01 Jul 2020 11:07:17 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 6CB08AD72;
 Wed,  1 Jul 2020 11:07:16 +0000 (UTC)
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org, x86@kernel.org,
 linux-kernel@vger.kernel.org
Subject: [PATCH v2 4/4] x86/paravirt: use CONFIG_PARAVIRT_XXL instead of
 CONFIG_PARAVIRT
Date: Wed,  1 Jul 2020 13:06:50 +0200
Message-Id: <20200701110650.16172-5-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20200701110650.16172-1-jgross@suse.com>
References: <20200701110650.16172-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>, Ingo Molnar <mingo@redhat.com>,
 Borislav Petkov <bp@alien8.de>, Andy Lutomirski <luto@kernel.org>,
 "H. Peter Anvin" <hpa@zytor.com>, Thomas Gleixner <tglx@linutronix.de>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

There are some code parts using CONFIG_PARAVIRT to guard Xen pvops
related functionality, where the more specific CONFIG_PARAVIRT_XXL is
the appropriate condition. Switch those to CONFIG_PARAVIRT_XXL.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 arch/x86/entry/entry_64.S                | 4 ++--
 arch/x86/include/asm/fixmap.h            | 2 +-
 arch/x86/include/asm/required-features.h | 2 +-
 3 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
index d2a00c97e53f..cb715d2b357d 100644
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -45,13 +45,13 @@
 .code64
 .section .entry.text, "ax"
 
-#ifdef CONFIG_PARAVIRT
+#ifdef CONFIG_PARAVIRT_XXL
 SYM_CODE_START(native_usergs_sysret64)
 	UNWIND_HINT_EMPTY
 	swapgs
 	sysretq
 SYM_CODE_END(native_usergs_sysret64)
-#endif /* CONFIG_PARAVIRT */
+#endif /* CONFIG_PARAVIRT_XXL */
 
 /*
  * 64-bit SYSCALL instruction entry. Up to 6 arguments in registers.
diff --git a/arch/x86/include/asm/fixmap.h b/arch/x86/include/asm/fixmap.h
index b9527a54db99..f1422ada4ffe 100644
--- a/arch/x86/include/asm/fixmap.h
+++ b/arch/x86/include/asm/fixmap.h
@@ -99,7 +99,7 @@ enum fixed_addresses {
 	FIX_PCIE_MCFG,
 #endif
 #endif
-#ifdef CONFIG_PARAVIRT
+#ifdef CONFIG_PARAVIRT_XXL
 	FIX_PARAVIRT_BOOTMAP,
 #endif
 #ifdef	CONFIG_X86_INTEL_MID
diff --git a/arch/x86/include/asm/required-features.h b/arch/x86/include/asm/required-features.h
index 6847d85400a8..3ff0d48469f2 100644
--- a/arch/x86/include/asm/required-features.h
+++ b/arch/x86/include/asm/required-features.h
@@ -54,7 +54,7 @@
 #endif
 
 #ifdef CONFIG_X86_64
-#ifdef CONFIG_PARAVIRT
+#ifdef CONFIG_PARAVIRT_XXL
 /* Paravirtualized systems may not have PSE or PGE available */
 #define NEED_PSE	0
 #define NEED_PGE	0
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Wed Jul 01 11:07:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jul 2020 11:07:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jqaaL-0006r2-4l; Wed, 01 Jul 2020 11:07:25 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=T2yc=AM=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jqaaK-0006nj-3O
 for xen-devel@lists.xenproject.org; Wed, 01 Jul 2020 11:07:24 +0000
X-Inumbo-ID: 049d6818-bb8b-11ea-bb8b-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 049d6818-bb8b-11ea-bb8b-bc764e2007e4;
 Wed, 01 Jul 2020 11:07:14 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id BE069AD41;
 Wed,  1 Jul 2020 11:07:13 +0000 (UTC)
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org, x86@kernel.org,
 linux-kernel@vger.kernel.org
Subject: [PATCH v2 1/4] x86/xen: remove 32-bit Xen PV guest support
Date: Wed,  1 Jul 2020 13:06:47 +0200
Message-Id: <20200701110650.16172-2-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20200701110650.16172-1-jgross@suse.com>
References: <20200701110650.16172-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Ingo Molnar <mingo@redhat.com>,
 Borislav Petkov <bp@alien8.de>, Andy Lutomirski <luto@kernel.org>,
 "H. Peter Anvin" <hpa@zytor.com>, Thomas Gleixner <tglx@linutronix.de>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Xen requires 64-bit machines today and, since Xen 4.14, it can be
built without 32-bit PV guest support. There is no need to carry the
burden of 32-bit PV guest support in the kernel any longer, as new
guests can be either HVM or PVH, or they can use a 64-bit kernel.

Remove the 32-bit Xen PV guest support from the kernel.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 arch/x86/entry/entry_32.S      | 109 +----------
 arch/x86/include/asm/proto.h   |   2 +-
 arch/x86/include/asm/segment.h |   2 +-
 arch/x86/kernel/head_32.S      |  31 ---
 arch/x86/xen/Kconfig           |   3 +-
 arch/x86/xen/Makefile          |   3 +-
 arch/x86/xen/apic.c            |  17 --
 arch/x86/xen/enlighten_pv.c    |  48 +----
 arch/x86/xen/mmu_pv.c          | 340 ++++-----------------------------
 arch/x86/xen/p2m.c             |   6 +-
 arch/x86/xen/setup.c           |  35 +---
 arch/x86/xen/smp_pv.c          |  18 --
 arch/x86/xen/xen-asm.S         | 182 ++++++++++++++++--
 arch/x86/xen/xen-asm_32.S      | 185 ------------------
 arch/x86/xen/xen-asm_64.S      | 181 ------------------
 arch/x86/xen/xen-head.S        |   6 -
 drivers/xen/Kconfig            |   4 +-
 17 files changed, 216 insertions(+), 956 deletions(-)
 delete mode 100644 arch/x86/xen/xen-asm_32.S
 delete mode 100644 arch/x86/xen/xen-asm_64.S

diff --git a/arch/x86/entry/entry_32.S b/arch/x86/entry/entry_32.S
index 024d7d276cd4..70efe6d072f1 100644
--- a/arch/x86/entry/entry_32.S
+++ b/arch/x86/entry/entry_32.S
@@ -449,8 +449,6 @@
 
 .macro SWITCH_TO_KERNEL_STACK
 
-	ALTERNATIVE     "", "jmp .Lend_\@", X86_FEATURE_XENPV
-
 	BUG_IF_WRONG_CR3
 
 	SWITCH_TO_KERNEL_CR3 scratch_reg=%eax
@@ -599,8 +597,6 @@
  */
 .macro SWITCH_TO_ENTRY_STACK
 
-	ALTERNATIVE     "", "jmp .Lend_\@", X86_FEATURE_XENPV
-
 	/* Bytes to copy */
 	movl	$PTREGS_SIZE, %ecx
 
@@ -872,17 +868,6 @@ SYM_ENTRY(__begin_SYSENTER_singlestep_region, SYM_L_GLOBAL, SYM_A_NONE)
  * will ignore all of the single-step traps generated in this range.
  */
 
-#ifdef CONFIG_XEN_PV
-/*
- * Xen doesn't set %esp to be precisely what the normal SYSENTER
- * entry point expects, so fix it up before using the normal path.
- */
-SYM_CODE_START(xen_sysenter_target)
-	addl	$5*4, %esp			/* remove xen-provided frame */
-	jmp	.Lsysenter_past_esp
-SYM_CODE_END(xen_sysenter_target)
-#endif
-
 /*
  * 32-bit SYSENTER entry.
  *
@@ -966,9 +951,8 @@ SYM_FUNC_START(entry_SYSENTER_32)
 
 	movl	%esp, %eax
 	call	do_fast_syscall_32
-	/* XEN PV guests always use IRET path */
-	ALTERNATIVE "testl %eax, %eax; jz .Lsyscall_32_done", \
-		    "jmp .Lsyscall_32_done", X86_FEATURE_XENPV
+	testl	%eax, %eax
+	jz	.Lsyscall_32_done
 
 	STACKLEAK_ERASE
 
@@ -1166,95 +1150,6 @@ SYM_FUNC_END(entry_INT80_32)
 #endif
 .endm
 
-#ifdef CONFIG_PARAVIRT
-SYM_CODE_START(native_iret)
-	iret
-	_ASM_EXTABLE(native_iret, asm_iret_error)
-SYM_CODE_END(native_iret)
-#endif
-
-#ifdef CONFIG_XEN_PV
-/*
- * See comment in entry_64.S for further explanation
- *
- * Note: This is not an actual IDT entry point. It's a XEN specific entry
- * point and therefore named to match the 64-bit trampoline counterpart.
- */
-SYM_FUNC_START(xen_asm_exc_xen_hypervisor_callback)
-	/*
-	 * Check to see if we got the event in the critical
-	 * region in xen_iret_direct, after we've reenabled
-	 * events and checked for pending events.  This simulates
-	 * iret instruction's behaviour where it delivers a
-	 * pending interrupt when enabling interrupts:
-	 */
-	cmpl	$xen_iret_start_crit, (%esp)
-	jb	1f
-	cmpl	$xen_iret_end_crit, (%esp)
-	jae	1f
-	call	xen_iret_crit_fixup
-1:
-	pushl	$-1				/* orig_ax = -1 => not a system call */
-	SAVE_ALL
-	ENCODE_FRAME_POINTER
-
-	mov	%esp, %eax
-	call	xen_pv_evtchn_do_upcall
-	jmp	handle_exception_return
-SYM_FUNC_END(xen_asm_exc_xen_hypervisor_callback)
-
-/*
- * Hypervisor uses this for application faults while it executes.
- * We get here for two reasons:
- *  1. Fault while reloading DS, ES, FS or GS
- *  2. Fault while executing IRET
- * Category 1 we fix up by reattempting the load, and zeroing the segment
- * register if the load fails.
- * Category 2 we fix up by jumping to do_iret_error. We cannot use the
- * normal Linux return path in this case because if we use the IRET hypercall
- * to pop the stack frame we end up in an infinite loop of failsafe callbacks.
- * We distinguish between categories by maintaining a status value in EAX.
- */
-SYM_FUNC_START(xen_failsafe_callback)
-	pushl	%eax
-	movl	$1, %eax
-1:	mov	4(%esp), %ds
-2:	mov	8(%esp), %es
-3:	mov	12(%esp), %fs
-4:	mov	16(%esp), %gs
-	/* EAX == 0 => Category 1 (Bad segment)
-	   EAX != 0 => Category 2 (Bad IRET) */
-	testl	%eax, %eax
-	popl	%eax
-	lea	16(%esp), %esp
-	jz	5f
-	jmp	asm_iret_error
-5:	pushl	$-1				/* orig_ax = -1 => not a system call */
-	SAVE_ALL
-	ENCODE_FRAME_POINTER
-	jmp	handle_exception_return
-
-.section .fixup, "ax"
-6:	xorl	%eax, %eax
-	movl	%eax, 4(%esp)
-	jmp	1b
-7:	xorl	%eax, %eax
-	movl	%eax, 8(%esp)
-	jmp	2b
-8:	xorl	%eax, %eax
-	movl	%eax, 12(%esp)
-	jmp	3b
-9:	xorl	%eax, %eax
-	movl	%eax, 16(%esp)
-	jmp	4b
-.previous
-	_ASM_EXTABLE(1b, 6b)
-	_ASM_EXTABLE(2b, 7b)
-	_ASM_EXTABLE(3b, 8b)
-	_ASM_EXTABLE(4b, 9b)
-SYM_FUNC_END(xen_failsafe_callback)
-#endif /* CONFIG_XEN_PV */
-
 SYM_CODE_START_LOCAL_NOALIGN(handle_exception)
 	/* the function address is in %gs's slot on the stack */
 	SAVE_ALL switch_stacks=1 skip_gs=1 unwind_espfix=1
diff --git a/arch/x86/include/asm/proto.h b/arch/x86/include/asm/proto.h
index 6e81788a30c1..28996fe19301 100644
--- a/arch/x86/include/asm/proto.h
+++ b/arch/x86/include/asm/proto.h
@@ -25,7 +25,7 @@ void entry_SYSENTER_compat(void);
 void __end_entry_SYSENTER_compat(void);
 void entry_SYSCALL_compat(void);
 void entry_INT80_compat(void);
-#if defined(CONFIG_X86_64) && defined(CONFIG_XEN_PV)
+#ifdef CONFIG_XEN_PV
 void xen_entry_INT80_compat(void);
 #endif
 #endif
diff --git a/arch/x86/include/asm/segment.h b/arch/x86/include/asm/segment.h
index 6669164abadc..9646c300f128 100644
--- a/arch/x86/include/asm/segment.h
+++ b/arch/x86/include/asm/segment.h
@@ -301,7 +301,7 @@ static inline void vdso_read_cpunode(unsigned *cpu, unsigned *node)
 extern const char early_idt_handler_array[NUM_EXCEPTION_VECTORS][EARLY_IDT_HANDLER_SIZE];
 extern void early_ignore_irq(void);
 
-#if defined(CONFIG_X86_64) && defined(CONFIG_XEN_PV)
+#ifdef CONFIG_XEN_PV
 extern const char xen_early_idt_handler_array[NUM_EXCEPTION_VECTORS][XEN_EARLY_IDT_HANDLER_SIZE];
 #endif
 
diff --git a/arch/x86/kernel/head_32.S b/arch/x86/kernel/head_32.S
index f66a6b90f954..7ed84c282233 100644
--- a/arch/x86/kernel/head_32.S
+++ b/arch/x86/kernel/head_32.S
@@ -134,38 +134,7 @@ SYM_CODE_START(startup_32)
 	movl %eax,pa(initial_page_table+0xffc)
 #endif
 
-#ifdef CONFIG_PARAVIRT
-	/* This is can only trip for a broken bootloader... */
-	cmpw $0x207, pa(boot_params + BP_version)
-	jb .Ldefault_entry
-
-	/* Paravirt-compatible boot parameters.  Look to see what architecture
-		we're booting under. */
-	movl pa(boot_params + BP_hardware_subarch), %eax
-	cmpl $num_subarch_entries, %eax
-	jae .Lbad_subarch
-
-	movl pa(subarch_entries)(,%eax,4), %eax
-	subl $__PAGE_OFFSET, %eax
-	jmp *%eax
-
-.Lbad_subarch:
-SYM_INNER_LABEL_ALIGN(xen_entry, SYM_L_WEAK)
-	/* Unknown implementation; there's really
-	   nothing we can do at this point. */
-	ud2a
-
-	__INITDATA
-
-subarch_entries:
-	.long .Ldefault_entry		/* normal x86/PC */
-	.long xen_entry			/* Xen hypervisor */
-	.long .Ldefault_entry		/* Moorestown MID */
-num_subarch_entries = (. - subarch_entries) / 4
-.previous
-#else
 	jmp .Ldefault_entry
-#endif /* CONFIG_PARAVIRT */
 SYM_CODE_END(startup_32)
 
 #ifdef CONFIG_HOTPLUG_CPU
diff --git a/arch/x86/xen/Kconfig b/arch/x86/xen/Kconfig
index 1aded63a95cb..218acbd5c7a0 100644
--- a/arch/x86/xen/Kconfig
+++ b/arch/x86/xen/Kconfig
@@ -19,6 +19,7 @@ config XEN_PV
 	bool "Xen PV guest support"
 	default y
 	depends on XEN
+	depends on X86_64
 	select PARAVIRT_XXL
 	select XEN_HAVE_PVMMU
 	select XEN_HAVE_VPMU
@@ -50,7 +51,7 @@ config XEN_PVHVM_SMP
 
 config XEN_512GB
 	bool "Limit Xen pv-domain memory to 512GB"
-	depends on XEN_PV && X86_64
+	depends on XEN_PV
 	default y
 	help
 	  Limit paravirtualized user domains to 512GB of RAM.
diff --git a/arch/x86/xen/Makefile b/arch/x86/xen/Makefile
index 084de77a109e..5de137d536cc 100644
--- a/arch/x86/xen/Makefile
+++ b/arch/x86/xen/Makefile
@@ -1,5 +1,5 @@
 # SPDX-License-Identifier: GPL-2.0
-OBJECT_FILES_NON_STANDARD_xen-asm_$(BITS).o := y
+OBJECT_FILES_NON_STANDARD_xen-asm.o := y
 
 ifdef CONFIG_FUNCTION_TRACER
 # Do not profile debug and lowlevel utilities
@@ -34,7 +34,6 @@ obj-$(CONFIG_XEN_PV)		+= mmu_pv.o
 obj-$(CONFIG_XEN_PV)		+= irq.o
 obj-$(CONFIG_XEN_PV)		+= multicalls.o
 obj-$(CONFIG_XEN_PV)		+= xen-asm.o
-obj-$(CONFIG_XEN_PV)		+= xen-asm_$(BITS).o
 
 obj-$(CONFIG_XEN_PVH)		+= enlighten_pvh.o
 
diff --git a/arch/x86/xen/apic.c b/arch/x86/xen/apic.c
index 5e53bfbe5823..ea6e9c54da9d 100644
--- a/arch/x86/xen/apic.c
+++ b/arch/x86/xen/apic.c
@@ -58,10 +58,6 @@ static u32 xen_apic_read(u32 reg)
 
 	if (reg == APIC_LVR)
 		return 0x14;
-#ifdef CONFIG_X86_32
-	if (reg == APIC_LDR)
-		return SET_APIC_LOGICAL_ID(1UL << smp_processor_id());
-#endif
 	if (reg != APIC_ID)
 		return 0;
 
@@ -127,14 +123,6 @@ static int xen_phys_pkg_id(int initial_apic_id, int index_msb)
 	return initial_apic_id >> index_msb;
 }
 
-#ifdef CONFIG_X86_32
-static int xen_x86_32_early_logical_apicid(int cpu)
-{
-	/* Match with APIC_LDR read. Otherwise setup_local_APIC complains. */
-	return 1 << cpu;
-}
-#endif
-
 static void xen_noop(void)
 {
 }
@@ -197,11 +185,6 @@ static struct apic xen_pv_apic = {
 	.icr_write 			= xen_apic_icr_write,
 	.wait_icr_idle 			= xen_noop,
 	.safe_wait_icr_idle 		= xen_safe_apic_wait_icr_idle,
-
-#ifdef CONFIG_X86_32
-	/* generic_processor_info and setup_local_APIC. */
-	.x86_32_early_logical_apicid	= xen_x86_32_early_logical_apicid,
-#endif
 };
 
 static void __init xen_apic_check(void)
diff --git a/arch/x86/xen/enlighten_pv.c b/arch/x86/xen/enlighten_pv.c
index acc49fa6a097..44562d30878c 100644
--- a/arch/x86/xen/enlighten_pv.c
+++ b/arch/x86/xen/enlighten_pv.c
@@ -119,14 +119,6 @@ static void __init xen_banner(void)
 	printk(KERN_INFO "Xen version: %d.%d%s%s\n",
 	       version >> 16, version & 0xffff, extra.extraversion,
 	       xen_feature(XENFEAT_mmu_pt_update_preserve_ad) ? " (preserve-AD)" : "");
-
-#ifdef CONFIG_X86_32
-	pr_warn("WARNING! WARNING! WARNING! WARNING! WARNING! WARNING! WARNING!\n"
-		"Support for running as 32-bit PV-guest under Xen will soon be removed\n"
-		"from the Linux kernel!\n"
-		"Please use either a 64-bit kernel or switch to HVM or PVH mode!\n"
-		"WARNING! WARNING! WARNING! WARNING! WARNING! WARNING! WARNING!\n");
-#endif
 }
 
 static void __init xen_pv_init_platform(void)
@@ -555,13 +547,8 @@ static void xen_load_tls(struct thread_struct *t, unsigned int cpu)
 	 * exception between the new %fs descriptor being loaded and
 	 * %fs being effectively cleared at __switch_to().
 	 */
-	if (paravirt_get_lazy_mode() == PARAVIRT_LAZY_CPU) {
-#ifdef CONFIG_X86_32
-		lazy_load_gs(0);
-#else
+	if (paravirt_get_lazy_mode() == PARAVIRT_LAZY_CPU)
 		loadsegment(fs, 0);
-#endif
-	}
 
 	xen_mc_batch();
 
@@ -572,13 +559,11 @@ static void xen_load_tls(struct thread_struct *t, unsigned int cpu)
 	xen_mc_issue(PARAVIRT_LAZY_CPU);
 }
 
-#ifdef CONFIG_X86_64
 static void xen_load_gs_index(unsigned int idx)
 {
 	if (HYPERVISOR_set_segment_base(SEGBASE_GS_USER_SEL, idx))
 		BUG();
 }
-#endif
 
 static void xen_write_ldt_entry(struct desc_struct *dt, int entrynum,
 				const void *ptr)
@@ -597,7 +582,6 @@ static void xen_write_ldt_entry(struct desc_struct *dt, int entrynum,
 	preempt_enable();
 }
 
-#ifdef CONFIG_X86_64
 struct trap_array_entry {
 	void (*orig)(void);
 	void (*xen)(void);
@@ -677,7 +661,6 @@ static bool __ref get_trap_addr(void **addr, unsigned int ist)
 
 	return true;
 }
-#endif
 
 static int cvt_gate_to_trap(int vector, const gate_desc *val,
 			    struct trap_info *info)
@@ -690,10 +673,8 @@ static int cvt_gate_to_trap(int vector, const gate_desc *val,
 	info->vector = vector;
 
 	addr = gate_offset(val);
-#ifdef CONFIG_X86_64
 	if (!get_trap_addr((void **)&addr, val->bits.ist))
 		return 0;
-#endif	/* CONFIG_X86_64 */
 	info->address = addr;
 
 	info->cs = gate_segment(val);
@@ -927,15 +908,12 @@ static u64 xen_read_msr_safe(unsigned int msr, int *err)
 static int xen_write_msr_safe(unsigned int msr, unsigned low, unsigned high)
 {
 	int ret;
-#ifdef CONFIG_X86_64
 	unsigned int which;
 	u64 base;
-#endif
 
 	ret = 0;
 
 	switch (msr) {
-#ifdef CONFIG_X86_64
 	case MSR_FS_BASE:		which = SEGBASE_FS; goto set;
 	case MSR_KERNEL_GS_BASE:	which = SEGBASE_GS_USER; goto set;
 	case MSR_GS_BASE:		which = SEGBASE_GS_KERNEL; goto set;
@@ -945,7 +923,6 @@ static int xen_write_msr_safe(unsigned int msr, unsigned low, unsigned high)
 		if (HYPERVISOR_set_segment_base(which, base) != 0)
 			ret = -EIO;
 		break;
-#endif
 
 	case MSR_STAR:
 	case MSR_CSTAR:
@@ -1027,9 +1004,7 @@ void __init xen_setup_vcpu_info_placement(void)
 static const struct pv_info xen_info __initconst = {
 	.shared_kernel_pmd = 0,
 
-#ifdef CONFIG_X86_64
 	.extra_user_64bit_cs = FLAT_USER_CS64,
-#endif
 	.name = "Xen",
 };
 
@@ -1055,18 +1030,14 @@ static const struct pv_cpu_ops xen_cpu_ops __initconst = {
 	.read_pmc = xen_read_pmc,
 
 	.iret = xen_iret,
-#ifdef CONFIG_X86_64
 	.usergs_sysret64 = xen_sysret64,
-#endif
 
 	.load_tr_desc = paravirt_nop,
 	.set_ldt = xen_set_ldt,
 	.load_gdt = xen_load_gdt,
 	.load_idt = xen_load_idt,
 	.load_tls = xen_load_tls,
-#ifdef CONFIG_X86_64
 	.load_gs_index = xen_load_gs_index,
-#endif
 
 	.alloc_ldt = xen_alloc_ldt,
 	.free_ldt = xen_free_ldt,
@@ -1332,13 +1303,8 @@ asmlinkage __visible void __init xen_start_kernel(void)
 
 	/* keep using Xen gdt for now; no urgent need to change it */
 
-#ifdef CONFIG_X86_32
-	pv_info.kernel_rpl = 1;
-	if (xen_feature(XENFEAT_supervisor_mode_kernel))
-		pv_info.kernel_rpl = 0;
-#else
 	pv_info.kernel_rpl = 0;
-#endif
+
 	/* set the limit of our address space */
 	xen_reserve_top();
 
@@ -1352,12 +1318,6 @@ asmlinkage __visible void __init xen_start_kernel(void)
 	if (rc != 0)
 		xen_raw_printk("physdev_op failed %d\n", rc);
 
-#ifdef CONFIG_X86_32
-	/* set up basic CPUID stuff */
-	cpu_detect(&new_cpu_data);
-	set_cpu_cap(&new_cpu_data, X86_FEATURE_FPU);
-	new_cpu_data.x86_capability[CPUID_1_EDX] = cpuid_edx(1);
-#endif
 
 	if (xen_start_info->mod_start) {
 	    if (xen_start_info->flags & SIF_MOD_START_PFN)
@@ -1426,12 +1386,8 @@ asmlinkage __visible void __init xen_start_kernel(void)
 	xen_efi_init(&boot_params);
 
 	/* Start the world */
-#ifdef CONFIG_X86_32
-	i386_start_kernel();
-#else
 	cr4_init_shadow(); /* 32b kernel does this in i386_start_kernel() */
 	x86_64_start_reservations((char *)__pa_symbol(&boot_params));
-#endif
 }
 
 static int xen_cpu_up_prepare_pv(unsigned int cpu)
diff --git a/arch/x86/xen/mmu_pv.c b/arch/x86/xen/mmu_pv.c
index a58d9c69807a..317aa8b78c07 100644
--- a/arch/x86/xen/mmu_pv.c
+++ b/arch/x86/xen/mmu_pv.c
@@ -86,19 +86,8 @@
 #include "mmu.h"
 #include "debugfs.h"
 
-#ifdef CONFIG_X86_32
-/*
- * Identity map, in addition to plain kernel map.  This needs to be
- * large enough to allocate page table pages to allocate the rest.
- * Each page can map 2MB.
- */
-#define LEVEL1_IDENT_ENTRIES	(PTRS_PER_PTE * 4)
-static RESERVE_BRK_ARRAY(pte_t, level1_ident_pgt, LEVEL1_IDENT_ENTRIES);
-#endif
-#ifdef CONFIG_X86_64
 /* l3 pud for userspace vsyscall mapping */
 static pud_t level3_user_vsyscall[PTRS_PER_PUD] __page_aligned_bss;
-#endif /* CONFIG_X86_64 */
 
 /*
  * Protects atomic reservation decrease/increase against concurrent increases.
@@ -439,26 +428,6 @@ static void xen_set_pud(pud_t *ptr, pud_t val)
 	xen_set_pud_hyper(ptr, val);
 }
 
-#ifdef CONFIG_X86_PAE
-static void xen_set_pte_atomic(pte_t *ptep, pte_t pte)
-{
-	trace_xen_mmu_set_pte_atomic(ptep, pte);
-	__xen_set_pte(ptep, pte);
-}
-
-static void xen_pte_clear(struct mm_struct *mm, unsigned long addr, pte_t *ptep)
-{
-	trace_xen_mmu_pte_clear(mm, addr, ptep);
-	__xen_set_pte(ptep, native_make_pte(0));
-}
-
-static void xen_pmd_clear(pmd_t *pmdp)
-{
-	trace_xen_mmu_pmd_clear(pmdp);
-	set_pmd(pmdp, __pmd(0));
-}
-#endif	/* CONFIG_X86_PAE */
-
 __visible pmd_t xen_make_pmd(pmdval_t pmd)
 {
 	pmd = pte_pfn_to_mfn(pmd);
@@ -466,7 +435,6 @@ __visible pmd_t xen_make_pmd(pmdval_t pmd)
 }
 PV_CALLEE_SAVE_REGS_THUNK(xen_make_pmd);
 
-#ifdef CONFIG_X86_64
 __visible pudval_t xen_pud_val(pud_t pud)
 {
 	return pte_mfn_to_pfn(pud.pud);
@@ -571,7 +539,6 @@ __visible p4d_t xen_make_p4d(p4dval_t p4d)
 }
 PV_CALLEE_SAVE_REGS_THUNK(xen_make_p4d);
 #endif  /* CONFIG_PGTABLE_LEVELS >= 5 */
-#endif	/* CONFIG_X86_64 */
 
 static int xen_pmd_walk(struct mm_struct *mm, pmd_t *pmd,
 		int (*func)(struct mm_struct *mm, struct page *, enum pt_level),
@@ -654,14 +621,12 @@ static int __xen_pgd_walk(struct mm_struct *mm, pgd_t *pgd,
 	limit--;
 	BUG_ON(limit >= FIXADDR_TOP);
 
-#ifdef CONFIG_X86_64
 	/*
 	 * 64-bit has a great big hole in the middle of the address
 	 * space, which contains the Xen mappings.
 	 */
 	hole_low = pgd_index(GUARD_HOLE_BASE_ADDR);
 	hole_high = pgd_index(GUARD_HOLE_END_ADDR);
-#endif
 
 	nr = pgd_index(limit) + 1;
 	for (i = 0; i < nr; i++) {
@@ -787,6 +752,8 @@ static int xen_pin_page(struct mm_struct *mm, struct page *page,
    read-only, and can be pinned. */
 static void __xen_pgd_pin(struct mm_struct *mm, pgd_t *pgd)
 {
+	pgd_t *user_pgd = xen_get_user_pgd(pgd);
+
 	trace_xen_mmu_pgd_pin(mm, pgd);
 
 	xen_mc_batch();
@@ -800,26 +767,14 @@ static void __xen_pgd_pin(struct mm_struct *mm, pgd_t *pgd)
 		xen_mc_batch();
 	}
 
-#ifdef CONFIG_X86_64
-	{
-		pgd_t *user_pgd = xen_get_user_pgd(pgd);
-
-		xen_do_pin(MMUEXT_PIN_L4_TABLE, PFN_DOWN(__pa(pgd)));
+	xen_do_pin(MMUEXT_PIN_L4_TABLE, PFN_DOWN(__pa(pgd)));
 
-		if (user_pgd) {
-			xen_pin_page(mm, virt_to_page(user_pgd), PT_PGD);
-			xen_do_pin(MMUEXT_PIN_L4_TABLE,
-				   PFN_DOWN(__pa(user_pgd)));
-		}
+	if (user_pgd) {
+		xen_pin_page(mm, virt_to_page(user_pgd), PT_PGD);
+		xen_do_pin(MMUEXT_PIN_L4_TABLE,
+			   PFN_DOWN(__pa(user_pgd)));
 	}
-#else /* CONFIG_X86_32 */
-#ifdef CONFIG_X86_PAE
-	/* Need to make sure unshared kernel PMD is pinnable */
-	xen_pin_page(mm, pgd_page(pgd[pgd_index(TASK_SIZE)]),
-		     PT_PMD);
-#endif
-	xen_do_pin(MMUEXT_PIN_L3_TABLE, PFN_DOWN(__pa(pgd)));
-#endif /* CONFIG_X86_64 */
+
 	xen_mc_issue(0);
 }
 
@@ -870,9 +825,7 @@ static int __init xen_mark_pinned(struct mm_struct *mm, struct page *page,
 static void __init xen_after_bootmem(void)
 {
 	static_branch_enable(&xen_struct_pages_ready);
-#ifdef CONFIG_X86_64
 	SetPagePinned(virt_to_page(level3_user_vsyscall));
-#endif
 	xen_pgd_walk(&init_mm, xen_mark_pinned, FIXADDR_TOP);
 }
 
@@ -919,29 +872,19 @@ static int xen_unpin_page(struct mm_struct *mm, struct page *page,
 /* Release a pagetables pages back as normal RW */
 static void __xen_pgd_unpin(struct mm_struct *mm, pgd_t *pgd)
 {
+	pgd_t *user_pgd = xen_get_user_pgd(pgd);
+
 	trace_xen_mmu_pgd_unpin(mm, pgd);
 
 	xen_mc_batch();
 
 	xen_do_pin(MMUEXT_UNPIN_TABLE, PFN_DOWN(__pa(pgd)));
 
-#ifdef CONFIG_X86_64
-	{
-		pgd_t *user_pgd = xen_get_user_pgd(pgd);
-
-		if (user_pgd) {
-			xen_do_pin(MMUEXT_UNPIN_TABLE,
-				   PFN_DOWN(__pa(user_pgd)));
-			xen_unpin_page(mm, virt_to_page(user_pgd), PT_PGD);
-		}
+	if (user_pgd) {
+		xen_do_pin(MMUEXT_UNPIN_TABLE,
+			   PFN_DOWN(__pa(user_pgd)));
+		xen_unpin_page(mm, virt_to_page(user_pgd), PT_PGD);
 	}
-#endif
-
-#ifdef CONFIG_X86_PAE
-	/* Need to make sure unshared kernel PMD is unpinned */
-	xen_unpin_page(mm, pgd_page(pgd[pgd_index(TASK_SIZE)]),
-		       PT_PMD);
-#endif
 
 	__xen_pgd_walk(mm, pgd, xen_unpin_page, USER_LIMIT);
 
@@ -1089,7 +1032,6 @@ static void __init pin_pagetable_pfn(unsigned cmd, unsigned long pfn)
 		BUG();
 }
 
-#ifdef CONFIG_X86_64
 static void __init xen_cleanhighmap(unsigned long vaddr,
 				    unsigned long vaddr_end)
 {
@@ -1273,17 +1215,15 @@ static void __init xen_pagetable_cleanhighmap(void)
 	xen_cleanhighmap(addr, roundup(addr + size, PMD_SIZE * 2));
 	xen_start_info->pt_base = (unsigned long)__va(__pa(xen_start_info->pt_base));
 }
-#endif
 
 static void __init xen_pagetable_p2m_setup(void)
 {
 	xen_vmalloc_p2m_tree();
 
-#ifdef CONFIG_X86_64
 	xen_pagetable_p2m_free();
 
 	xen_pagetable_cleanhighmap();
-#endif
+
 	/* And revector! Bye bye old array */
 	xen_start_info->mfn_list = (unsigned long)xen_p2m_addr;
 }
@@ -1420,6 +1360,8 @@ static void __xen_write_cr3(bool kernel, unsigned long cr3)
 }
 static void xen_write_cr3(unsigned long cr3)
 {
+	pgd_t *user_pgd = xen_get_user_pgd(__va(cr3));
+
 	BUG_ON(preemptible());
 
 	xen_mc_batch();  /* disables interrupts */
@@ -1430,20 +1372,14 @@ static void xen_write_cr3(unsigned long cr3)
 
 	__xen_write_cr3(true, cr3);
 
-#ifdef CONFIG_X86_64
-	{
-		pgd_t *user_pgd = xen_get_user_pgd(__va(cr3));
-		if (user_pgd)
-			__xen_write_cr3(false, __pa(user_pgd));
-		else
-			__xen_write_cr3(false, 0);
-	}
-#endif
+	if (user_pgd)
+		__xen_write_cr3(false, __pa(user_pgd));
+	else
+		__xen_write_cr3(false, 0);
 
 	xen_mc_issue(PARAVIRT_LAZY_CPU);  /* interrupts restored */
 }
 
-#ifdef CONFIG_X86_64
 /*
  * At the start of the day - when Xen launches a guest, it has already
  * built pagetables for the guest. We diligently look over them
@@ -1478,49 +1414,39 @@ static void __init xen_write_cr3_init(unsigned long cr3)
 
 	xen_mc_issue(PARAVIRT_LAZY_CPU);  /* interrupts restored */
 }
-#endif
 
 static int xen_pgd_alloc(struct mm_struct *mm)
 {
 	pgd_t *pgd = mm->pgd;
-	int ret = 0;
+	struct page *page = virt_to_page(pgd);
+	pgd_t *user_pgd;
+	int ret = -ENOMEM;
 
 	BUG_ON(PagePinned(virt_to_page(pgd)));
+	BUG_ON(page->private != 0);
 
-#ifdef CONFIG_X86_64
-	{
-		struct page *page = virt_to_page(pgd);
-		pgd_t *user_pgd;
-
-		BUG_ON(page->private != 0);
-
-		ret = -ENOMEM;
-
-		user_pgd = (pgd_t *)__get_free_page(GFP_KERNEL | __GFP_ZERO);
-		page->private = (unsigned long)user_pgd;
+	user_pgd = (pgd_t *)__get_free_page(GFP_KERNEL | __GFP_ZERO);
+	page->private = (unsigned long)user_pgd;
 
-		if (user_pgd != NULL) {
+	if (user_pgd != NULL) {
 #ifdef CONFIG_X86_VSYSCALL_EMULATION
-			user_pgd[pgd_index(VSYSCALL_ADDR)] =
-				__pgd(__pa(level3_user_vsyscall) | _PAGE_TABLE);
+		user_pgd[pgd_index(VSYSCALL_ADDR)] =
+			__pgd(__pa(level3_user_vsyscall) | _PAGE_TABLE);
 #endif
-			ret = 0;
-		}
-
-		BUG_ON(PagePinned(virt_to_page(xen_get_user_pgd(pgd))));
+		ret = 0;
 	}
-#endif
+
+	BUG_ON(PagePinned(virt_to_page(xen_get_user_pgd(pgd))));
+
 	return ret;
 }
 
 static void xen_pgd_free(struct mm_struct *mm, pgd_t *pgd)
 {
-#ifdef CONFIG_X86_64
 	pgd_t *user_pgd = xen_get_user_pgd(pgd);
 
 	if (user_pgd)
 		free_page((unsigned long)user_pgd);
-#endif
 }
 
 /*
@@ -1539,7 +1465,6 @@ static void xen_pgd_free(struct mm_struct *mm, pgd_t *pgd)
  */
 __visible pte_t xen_make_pte_init(pteval_t pte)
 {
-#ifdef CONFIG_X86_64
 	unsigned long pfn;
 
 	/*
@@ -1553,7 +1478,7 @@ __visible pte_t xen_make_pte_init(pteval_t pte)
 	    pfn >= xen_start_info->first_p2m_pfn &&
 	    pfn < xen_start_info->first_p2m_pfn + xen_start_info->nr_p2m_frames)
 		pte &= ~_PAGE_RW;
-#endif
+
 	pte = pte_pfn_to_mfn(pte);
 	return native_make_pte(pte);
 }
@@ -1561,13 +1486,6 @@ PV_CALLEE_SAVE_REGS_THUNK(xen_make_pte_init);
 
 static void __init xen_set_pte_init(pte_t *ptep, pte_t pte)
 {
-#ifdef CONFIG_X86_32
-	/* If there's an existing pte, then don't allow _PAGE_RW to be set */
-	if (pte_mfn(pte) != INVALID_P2M_ENTRY
-	    && pte_val_ma(*ptep) & _PAGE_PRESENT)
-		pte = __pte_ma(((pte_val_ma(*ptep) & _PAGE_RW) | ~_PAGE_RW) &
-			       pte_val_ma(pte));
-#endif
 	__xen_set_pte(ptep, pte);
 }
 
@@ -1702,7 +1620,6 @@ static void xen_release_pmd(unsigned long pfn)
 	xen_release_ptpage(pfn, PT_PMD);
 }
 
-#ifdef CONFIG_X86_64
 static void xen_alloc_pud(struct mm_struct *mm, unsigned long pfn)
 {
 	xen_alloc_ptpage(mm, pfn, PT_PUD);
@@ -1712,19 +1629,9 @@ static void xen_release_pud(unsigned long pfn)
 {
 	xen_release_ptpage(pfn, PT_PUD);
 }
-#endif
 
 void __init xen_reserve_top(void)
 {
-#ifdef CONFIG_X86_32
-	unsigned long top = HYPERVISOR_VIRT_START;
-	struct xen_platform_parameters pp;
-
-	if (HYPERVISOR_xen_version(XENVER_platform_parameters, &pp) == 0)
-		top = pp.virt_start;
-
-	reserve_top_address(-top);
-#endif	/* CONFIG_X86_32 */
 }
 
 /*
@@ -1733,11 +1640,7 @@ void __init xen_reserve_top(void)
  */
 static void * __init __ka(phys_addr_t paddr)
 {
-#ifdef CONFIG_X86_64
 	return (void *)(paddr + __START_KERNEL_map);
-#else
-	return __va(paddr);
-#endif
 }
 
 /* Convert a machine address to physical address */
@@ -1771,56 +1674,7 @@ static void __init set_page_prot(void *addr, pgprot_t prot)
 {
 	return set_page_prot_flags(addr, prot, UVMF_NONE);
 }
-#ifdef CONFIG_X86_32
-static void __init xen_map_identity_early(pmd_t *pmd, unsigned long max_pfn)
-{
-	unsigned pmdidx, pteidx;
-	unsigned ident_pte;
-	unsigned long pfn;
-
-	level1_ident_pgt = extend_brk(sizeof(pte_t) * LEVEL1_IDENT_ENTRIES,
-				      PAGE_SIZE);
 
-	ident_pte = 0;
-	pfn = 0;
-	for (pmdidx = 0; pmdidx < PTRS_PER_PMD && pfn < max_pfn; pmdidx++) {
-		pte_t *pte_page;
-
-		/* Reuse or allocate a page of ptes */
-		if (pmd_present(pmd[pmdidx]))
-			pte_page = m2v(pmd[pmdidx].pmd);
-		else {
-			/* Check for free pte pages */
-			if (ident_pte == LEVEL1_IDENT_ENTRIES)
-				break;
-
-			pte_page = &level1_ident_pgt[ident_pte];
-			ident_pte += PTRS_PER_PTE;
-
-			pmd[pmdidx] = __pmd(__pa(pte_page) | _PAGE_TABLE);
-		}
-
-		/* Install mappings */
-		for (pteidx = 0; pteidx < PTRS_PER_PTE; pteidx++, pfn++) {
-			pte_t pte;
-
-			if (pfn > max_pfn_mapped)
-				max_pfn_mapped = pfn;
-
-			if (!pte_none(pte_page[pteidx]))
-				continue;
-
-			pte = pfn_pte(pfn, PAGE_KERNEL_EXEC);
-			pte_page[pteidx] = pte;
-		}
-	}
-
-	for (pteidx = 0; pteidx < ident_pte; pteidx += PTRS_PER_PTE)
-		set_page_prot(&level1_ident_pgt[pteidx], PAGE_KERNEL_RO);
-
-	set_page_prot(pmd, PAGE_KERNEL_RO);
-}
-#endif
 void __init xen_setup_machphys_mapping(void)
 {
 	struct xen_machphys_mapping mapping;
@@ -1831,13 +1685,8 @@ void __init xen_setup_machphys_mapping(void)
 	} else {
 		machine_to_phys_nr = MACH2PHYS_NR_ENTRIES;
 	}
-#ifdef CONFIG_X86_32
-	WARN_ON((machine_to_phys_mapping + (machine_to_phys_nr - 1))
-		< machine_to_phys_mapping);
-#endif
 }
 
-#ifdef CONFIG_X86_64
 static void __init convert_pfn_mfn(void *v)
 {
 	pte_t *pte = v;
@@ -2168,105 +2017,6 @@ void __init xen_relocate_p2m(void)
 	xen_start_info->nr_p2m_frames = n_frames;
 }
 
-#else	/* !CONFIG_X86_64 */
-static RESERVE_BRK_ARRAY(pmd_t, initial_kernel_pmd, PTRS_PER_PMD);
-static RESERVE_BRK_ARRAY(pmd_t, swapper_kernel_pmd, PTRS_PER_PMD);
-RESERVE_BRK(fixup_kernel_pmd, PAGE_SIZE);
-RESERVE_BRK(fixup_kernel_pte, PAGE_SIZE);
-
-static void __init xen_write_cr3_init(unsigned long cr3)
-{
-	unsigned long pfn = PFN_DOWN(__pa(swapper_pg_dir));
-
-	BUG_ON(read_cr3_pa() != __pa(initial_page_table));
-	BUG_ON(cr3 != __pa(swapper_pg_dir));
-
-	/*
-	 * We are switching to swapper_pg_dir for the first time (from
-	 * initial_page_table) and therefore need to mark that page
-	 * read-only and then pin it.
-	 *
-	 * Xen disallows sharing of kernel PMDs for PAE
-	 * guests. Therefore we must copy the kernel PMD from
-	 * initial_page_table into a new kernel PMD to be used in
-	 * swapper_pg_dir.
-	 */
-	swapper_kernel_pmd =
-		extend_brk(sizeof(pmd_t) * PTRS_PER_PMD, PAGE_SIZE);
-	copy_page(swapper_kernel_pmd, initial_kernel_pmd);
-	swapper_pg_dir[KERNEL_PGD_BOUNDARY] =
-		__pgd(__pa(swapper_kernel_pmd) | _PAGE_PRESENT);
-	set_page_prot(swapper_kernel_pmd, PAGE_KERNEL_RO);
-
-	set_page_prot(swapper_pg_dir, PAGE_KERNEL_RO);
-	xen_write_cr3(cr3);
-	pin_pagetable_pfn(MMUEXT_PIN_L3_TABLE, pfn);
-
-	pin_pagetable_pfn(MMUEXT_UNPIN_TABLE,
-			  PFN_DOWN(__pa(initial_page_table)));
-	set_page_prot(initial_page_table, PAGE_KERNEL);
-	set_page_prot(initial_kernel_pmd, PAGE_KERNEL);
-
-	pv_ops.mmu.write_cr3 = &xen_write_cr3;
-}
-
-/*
- * For 32 bit domains xen_start_info->pt_base is the pgd address which might be
- * not the first page table in the page table pool.
- * Iterate through the initial page tables to find the real page table base.
- */
-static phys_addr_t __init xen_find_pt_base(pmd_t *pmd)
-{
-	phys_addr_t pt_base, paddr;
-	unsigned pmdidx;
-
-	pt_base = min(__pa(xen_start_info->pt_base), __pa(pmd));
-
-	for (pmdidx = 0; pmdidx < PTRS_PER_PMD; pmdidx++)
-		if (pmd_present(pmd[pmdidx]) && !pmd_large(pmd[pmdidx])) {
-			paddr = m2p(pmd[pmdidx].pmd);
-			pt_base = min(pt_base, paddr);
-		}
-
-	return pt_base;
-}
-
-void __init xen_setup_kernel_pagetable(pgd_t *pgd, unsigned long max_pfn)
-{
-	pmd_t *kernel_pmd;
-
-	kernel_pmd = m2v(pgd[KERNEL_PGD_BOUNDARY].pgd);
-
-	xen_pt_base = xen_find_pt_base(kernel_pmd);
-	xen_pt_size = xen_start_info->nr_pt_frames * PAGE_SIZE;
-
-	initial_kernel_pmd =
-		extend_brk(sizeof(pmd_t) * PTRS_PER_PMD, PAGE_SIZE);
-
-	max_pfn_mapped = PFN_DOWN(xen_pt_base + xen_pt_size + 512 * 1024);
-
-	copy_page(initial_kernel_pmd, kernel_pmd);
-
-	xen_map_identity_early(initial_kernel_pmd, max_pfn);
-
-	copy_page(initial_page_table, pgd);
-	initial_page_table[KERNEL_PGD_BOUNDARY] =
-		__pgd(__pa(initial_kernel_pmd) | _PAGE_PRESENT);
-
-	set_page_prot(initial_kernel_pmd, PAGE_KERNEL_RO);
-	set_page_prot(initial_page_table, PAGE_KERNEL_RO);
-	set_page_prot(empty_zero_page, PAGE_KERNEL_RO);
-
-	pin_pagetable_pfn(MMUEXT_UNPIN_TABLE, PFN_DOWN(__pa(pgd)));
-
-	pin_pagetable_pfn(MMUEXT_PIN_L3_TABLE,
-			  PFN_DOWN(__pa(initial_page_table)));
-	xen_write_cr3(__pa(initial_page_table));
-
-	memblock_reserve(xen_pt_base, xen_pt_size);
-}
-#endif	/* CONFIG_X86_64 */
-
 void __init xen_reserve_special_pages(void)
 {
 	phys_addr_t paddr;
@@ -2300,12 +2050,7 @@ static void xen_set_fixmap(unsigned idx, phys_addr_t phys, pgprot_t prot)
 
 	switch (idx) {
 	case FIX_BTMAP_END ... FIX_BTMAP_BEGIN:
-#ifdef CONFIG_X86_32
-	case FIX_WP_TEST:
-# ifdef CONFIG_HIGHMEM
-	case FIX_KMAP_BEGIN ... FIX_KMAP_END:
-# endif
-#elif defined(CONFIG_X86_VSYSCALL_EMULATION)
+#ifdef CONFIG_X86_VSYSCALL_EMULATION
 	case VSYSCALL_PAGE:
 #endif
 		/* All local page mappings */
@@ -2357,9 +2102,7 @@ static void __init xen_post_allocator_init(void)
 	pv_ops.mmu.set_pte = xen_set_pte;
 	pv_ops.mmu.set_pmd = xen_set_pmd;
 	pv_ops.mmu.set_pud = xen_set_pud;
-#ifdef CONFIG_X86_64
 	pv_ops.mmu.set_p4d = xen_set_p4d;
-#endif
 
 	/* This will work as long as patching hasn't happened yet
 	   (which it hasn't) */
@@ -2367,15 +2110,11 @@ static void __init xen_post_allocator_init(void)
 	pv_ops.mmu.alloc_pmd = xen_alloc_pmd;
 	pv_ops.mmu.release_pte = xen_release_pte;
 	pv_ops.mmu.release_pmd = xen_release_pmd;
-#ifdef CONFIG_X86_64
 	pv_ops.mmu.alloc_pud = xen_alloc_pud;
 	pv_ops.mmu.release_pud = xen_release_pud;
-#endif
 	pv_ops.mmu.make_pte = PV_CALLEE_SAVE(xen_make_pte);
 
-#ifdef CONFIG_X86_64
 	pv_ops.mmu.write_cr3 = &xen_write_cr3;
-#endif
 }
 
 static void xen_leave_lazy_mmu(void)
@@ -2420,17 +2159,11 @@ static const struct pv_mmu_ops xen_mmu_ops __initconst = {
 	.make_pte = PV_CALLEE_SAVE(xen_make_pte_init),
 	.make_pgd = PV_CALLEE_SAVE(xen_make_pgd),
 
-#ifdef CONFIG_X86_PAE
-	.set_pte_atomic = xen_set_pte_atomic,
-	.pte_clear = xen_pte_clear,
-	.pmd_clear = xen_pmd_clear,
-#endif	/* CONFIG_X86_PAE */
 	.set_pud = xen_set_pud_hyper,
 
 	.make_pmd = PV_CALLEE_SAVE(xen_make_pmd),
 	.pmd_val = PV_CALLEE_SAVE(xen_pmd_val),
 
-#ifdef CONFIG_X86_64
 	.pud_val = PV_CALLEE_SAVE(xen_pud_val),
 	.make_pud = PV_CALLEE_SAVE(xen_make_pud),
 	.set_p4d = xen_set_p4d_hyper,
@@ -2442,7 +2175,6 @@ static const struct pv_mmu_ops xen_mmu_ops __initconst = {
 	.p4d_val = PV_CALLEE_SAVE(xen_p4d_val),
 	.make_p4d = PV_CALLEE_SAVE(xen_make_p4d),
 #endif
-#endif	/* CONFIG_X86_64 */
 
 	.activate_mm = xen_activate_mm,
 	.dup_mmap = xen_dup_mmap,
diff --git a/arch/x86/xen/p2m.c b/arch/x86/xen/p2m.c
index 0acba2c712ab..be4151f42611 100644
--- a/arch/x86/xen/p2m.c
+++ b/arch/x86/xen/p2m.c
@@ -379,12 +379,8 @@ static void __init xen_rebuild_p2m_list(unsigned long *p2m)
 
 		if (type == P2M_TYPE_PFN || i < chunk) {
 			/* Use initial p2m page contents. */
-#ifdef CONFIG_X86_64
 			mfns = alloc_p2m_page();
 			copy_page(mfns, xen_p2m_addr + pfn);
-#else
-			mfns = xen_p2m_addr + pfn;
-#endif
 			ptep = populate_extra_pte((unsigned long)(p2m + pfn));
 			set_pte(ptep,
 				pfn_pte(PFN_DOWN(__pa(mfns)), PAGE_KERNEL));
@@ -467,7 +463,7 @@ EXPORT_SYMBOL_GPL(get_phys_to_machine);
  * Allocate new pmd(s). It is checked whether the old pmd is still in place.
  * If not, nothing is changed. This is okay as the only reason for allocating
  * a new pmd is to replace p2m_missing_pte or p2m_identity_pte by a individual
- * pmd. In case of PAE/x86-32 there are multiple pmds to allocate!
+ * pmd.
  */
 static pte_t *alloc_p2m_pmd(unsigned long addr, pte_t *pte_pg)
 {
diff --git a/arch/x86/xen/setup.c b/arch/x86/xen/setup.c
index 3566e37241d7..3fd1d2ff8b5d 100644
--- a/arch/x86/xen/setup.c
+++ b/arch/x86/xen/setup.c
@@ -545,13 +545,10 @@ static unsigned long __init xen_get_pages_limit(void)
 {
 	unsigned long limit;
 
-#ifdef CONFIG_X86_32
-	limit = GB(64) / PAGE_SIZE;
-#else
 	limit = MAXMEM / PAGE_SIZE;
 	if (!xen_initial_domain() && xen_512gb_limit)
 		limit = GB(512) / PAGE_SIZE;
-#endif
+
 	return limit;
 }
 
@@ -722,17 +719,8 @@ static void __init xen_reserve_xen_mfnlist(void)
 	if (!xen_is_e820_reserved(start, size))
 		return;
 
-#ifdef CONFIG_X86_32
-	/*
-	 * Relocating the p2m on 32 bit system to an arbitrary virtual address
-	 * is not supported, so just give up.
-	 */
-	xen_raw_console_write("Xen hypervisor allocated p2m list conflicts with E820 map\n");
-	BUG();
-#else
 	xen_relocate_p2m();
 	memblock_free(start, size);
-#endif
 }
 
 /**
@@ -921,20 +909,6 @@ char * __init xen_memory_setup(void)
 	return "Xen";
 }
 
-/*
- * Set the bit indicating "nosegneg" library variants should be used.
- * We only need to bother in pure 32-bit mode; compat 32-bit processes
- * can have un-truncated segments, so wrapping around is allowed.
- */
-static void __init fiddle_vdso(void)
-{
-#ifdef CONFIG_X86_32
-	u32 *mask = vdso_image_32.data +
-		vdso_image_32.sym_VDSO32_NOTE_MASK;
-	*mask |= 1 << VDSO_NOTE_NONEGSEG_BIT;
-#endif
-}
-
 static int register_callback(unsigned type, const void *func)
 {
 	struct callback_register callback = {
@@ -951,11 +925,7 @@ void xen_enable_sysenter(void)
 	int ret;
 	unsigned sysenter_feature;
 
-#ifdef CONFIG_X86_32
-	sysenter_feature = X86_FEATURE_SEP;
-#else
 	sysenter_feature = X86_FEATURE_SYSENTER32;
-#endif
 
 	if (!boot_cpu_has(sysenter_feature))
 		return;
@@ -967,7 +937,6 @@ void xen_enable_sysenter(void)
 
 void xen_enable_syscall(void)
 {
-#ifdef CONFIG_X86_64
 	int ret;
 
 	ret = register_callback(CALLBACKTYPE_syscall, xen_syscall_target);
@@ -983,7 +952,6 @@ void xen_enable_syscall(void)
 		if (ret != 0)
 			setup_clear_cpu_cap(X86_FEATURE_SYSCALL32);
 	}
-#endif /* CONFIG_X86_64 */
 }
 
 static void __init xen_pvmmu_arch_setup(void)
@@ -1024,7 +992,6 @@ void __init xen_arch_setup(void)
 	disable_cpuidle();
 	disable_cpufreq();
 	WARN_ON(xen_set_default_idle());
-	fiddle_vdso();
 #ifdef CONFIG_NUMA
 	numa_off = 1;
 #endif
diff --git a/arch/x86/xen/smp_pv.c b/arch/x86/xen/smp_pv.c
index 171aff1b11f2..9218aa6ab28e 100644
--- a/arch/x86/xen/smp_pv.c
+++ b/arch/x86/xen/smp_pv.c
@@ -212,15 +212,6 @@ static void __init xen_pv_smp_prepare_boot_cpu(void)
 		 * sure the old memory can be recycled. */
 		make_lowmem_page_readwrite(xen_initial_gdt);
 
-#ifdef CONFIG_X86_32
-	/*
-	 * Xen starts us with XEN_FLAT_RING1_DS, but linux code
-	 * expects __USER_DS
-	 */
-	loadsegment(ds, __USER_DS);
-	loadsegment(es, __USER_DS);
-#endif
-
 	xen_filter_cpu_maps();
 	xen_setup_vcpu_info_placement();
 
@@ -301,10 +292,6 @@ cpu_initialize_context(unsigned int cpu, struct task_struct *idle)
 
 	gdt = get_cpu_gdt_rw(cpu);
 
-#ifdef CONFIG_X86_32
-	ctxt->user_regs.fs = __KERNEL_PERCPU;
-	ctxt->user_regs.gs = __KERNEL_STACK_CANARY;
-#endif
 	memset(&ctxt->fpu_ctxt, 0, sizeof(ctxt->fpu_ctxt));
 
 	/*
@@ -342,12 +329,7 @@ cpu_initialize_context(unsigned int cpu, struct task_struct *idle)
 	ctxt->kernel_ss = __KERNEL_DS;
 	ctxt->kernel_sp = task_top_of_stack(idle);
 
-#ifdef CONFIG_X86_32
-	ctxt->event_callback_cs     = __KERNEL_CS;
-	ctxt->failsafe_callback_cs  = __KERNEL_CS;
-#else
 	ctxt->gs_base_kernel = per_cpu_offset(cpu);
-#endif
 	ctxt->event_callback_eip    =
 		(unsigned long)xen_asm_exc_xen_hypervisor_callback;
 	ctxt->failsafe_callback_eip =
diff --git a/arch/x86/xen/xen-asm.S b/arch/x86/xen/xen-asm.S
index 508fe204520b..aaac3ff313a9 100644
--- a/arch/x86/xen/xen-asm.S
+++ b/arch/x86/xen/xen-asm.S
@@ -6,12 +6,19 @@
  * operations here; the indirect forms are better handled in C.
  */
 
+#include <asm/errno.h>
 #include <asm/asm-offsets.h>
 #include <asm/percpu.h>
 #include <asm/processor-flags.h>
+#include <asm/segment.h>
+#include <asm/thread_info.h>
+#include <asm/asm.h>
 #include <asm/frame.h>
 #include <asm/asm.h>
 
+#include <xen/interface/xen.h>
+
+#include <linux/init.h>
 #include <linux/linkage.h>
 
 /*
@@ -76,11 +83,7 @@ SYM_FUNC_END(xen_save_fl_direct)
  */
 SYM_FUNC_START(xen_restore_fl_direct)
 	FRAME_BEGIN
-#ifdef CONFIG_X86_64
 	testw $X86_EFLAGS_IF, %di
-#else
-	testb $X86_EFLAGS_IF>>8, %ah
-#endif
 	setz PER_CPU_VAR(xen_vcpu_info) + XEN_vcpu_info_mask
 	/*
 	 * Preempt here doesn't matter because that will deal with any
@@ -104,15 +107,6 @@ SYM_FUNC_END(xen_restore_fl_direct)
  */
 SYM_FUNC_START(check_events)
 	FRAME_BEGIN
-#ifdef CONFIG_X86_32
-	push %eax
-	push %ecx
-	push %edx
-	call xen_force_evtchn_callback
-	pop %edx
-	pop %ecx
-	pop %eax
-#else
 	push %rax
 	push %rcx
 	push %rdx
@@ -132,7 +126,6 @@ SYM_FUNC_START(check_events)
 	pop %rdx
 	pop %rcx
 	pop %rax
-#endif
 	FRAME_END
 	ret
 SYM_FUNC_END(check_events)
@@ -151,3 +144,164 @@ SYM_FUNC_START(xen_read_cr2_direct)
 	FRAME_END
 	ret
 SYM_FUNC_END(xen_read_cr2_direct);
+
+.macro xen_pv_trap name
+SYM_CODE_START(xen_\name)
+	pop %rcx
+	pop %r11
+	jmp  \name
+SYM_CODE_END(xen_\name)
+_ASM_NOKPROBE(xen_\name)
+.endm
+
+xen_pv_trap asm_exc_divide_error
+xen_pv_trap asm_exc_debug
+xen_pv_trap asm_exc_xendebug
+xen_pv_trap asm_exc_int3
+xen_pv_trap asm_exc_xennmi
+xen_pv_trap asm_exc_overflow
+xen_pv_trap asm_exc_bounds
+xen_pv_trap asm_exc_invalid_op
+xen_pv_trap asm_exc_device_not_available
+xen_pv_trap asm_exc_double_fault
+xen_pv_trap asm_exc_coproc_segment_overrun
+xen_pv_trap asm_exc_invalid_tss
+xen_pv_trap asm_exc_segment_not_present
+xen_pv_trap asm_exc_stack_segment
+xen_pv_trap asm_exc_general_protection
+xen_pv_trap asm_exc_page_fault
+xen_pv_trap asm_exc_spurious_interrupt_bug
+xen_pv_trap asm_exc_coprocessor_error
+xen_pv_trap asm_exc_alignment_check
+#ifdef CONFIG_X86_MCE
+xen_pv_trap asm_exc_machine_check
+#endif /* CONFIG_X86_MCE */
+xen_pv_trap asm_exc_simd_coprocessor_error
+#ifdef CONFIG_IA32_EMULATION
+xen_pv_trap entry_INT80_compat
+#endif
+xen_pv_trap asm_exc_xen_hypervisor_callback
+
+	__INIT
+SYM_CODE_START(xen_early_idt_handler_array)
+	i = 0
+	.rept NUM_EXCEPTION_VECTORS
+	pop %rcx
+	pop %r11
+	jmp early_idt_handler_array + i*EARLY_IDT_HANDLER_SIZE
+	i = i + 1
+	.fill xen_early_idt_handler_array + i*XEN_EARLY_IDT_HANDLER_SIZE - ., 1, 0xcc
+	.endr
+SYM_CODE_END(xen_early_idt_handler_array)
+	__FINIT
+
+hypercall_iret = hypercall_page + __HYPERVISOR_iret * 32
+/*
+ * Xen64 iret frame:
+ *
+ *	ss
+ *	rsp
+ *	rflags
+ *	cs
+ *	rip		<-- standard iret frame
+ *
+ *	flags
+ *
+ *	rcx		}
+ *	r11		}<-- pushed by hypercall page
+ * rsp->rax		}
+ */
+SYM_CODE_START(xen_iret)
+	pushq $0
+	jmp hypercall_iret
+SYM_CODE_END(xen_iret)
+
+SYM_CODE_START(xen_sysret64)
+	/*
+	 * We're already on the usermode stack at this point, but
+	 * still with the kernel gs, so we can easily switch back.
+	 *
+	 * tss.sp2 is scratch space.
+	 */
+	movq %rsp, PER_CPU_VAR(cpu_tss_rw + TSS_sp2)
+	movq PER_CPU_VAR(cpu_current_top_of_stack), %rsp
+
+	pushq $__USER_DS
+	pushq PER_CPU_VAR(cpu_tss_rw + TSS_sp2)
+	pushq %r11
+	pushq $__USER_CS
+	pushq %rcx
+
+	pushq $VGCF_in_syscall
+	jmp hypercall_iret
+SYM_CODE_END(xen_sysret64)
+
+/*
+ * Xen handles syscall callbacks much like ordinary exceptions, which
+ * means we have:
+ * - kernel gs
+ * - kernel rsp
+ * - an iret-like stack frame on the stack (including rcx and r11):
+ *	ss
+ *	rsp
+ *	rflags
+ *	cs
+ *	rip
+ *	r11
+ * rsp->rcx
+ */
+
+/* Normal 64-bit system call target */
+SYM_FUNC_START(xen_syscall_target)
+	popq %rcx
+	popq %r11
+
+	/*
+	 * Neither Xen nor the kernel really knows what the old SS and
+	 * CS were.  The kernel expects __USER_DS and __USER_CS, so
+	 * report those values even though Xen will guess its own values.
+	 */
+	movq $__USER_DS, 4*8(%rsp)
+	movq $__USER_CS, 1*8(%rsp)
+
+	jmp entry_SYSCALL_64_after_hwframe
+SYM_FUNC_END(xen_syscall_target)
+
+#ifdef CONFIG_IA32_EMULATION
+
+/* 32-bit compat syscall target */
+SYM_FUNC_START(xen_syscall32_target)
+	popq %rcx
+	popq %r11
+
+	/*
+	 * Neither Xen nor the kernel really knows what the old SS and
+	 * CS were.  The kernel expects __USER32_DS and __USER32_CS, so
+	 * report those values even though Xen will guess its own values.
+	 */
+	movq $__USER32_DS, 4*8(%rsp)
+	movq $__USER32_CS, 1*8(%rsp)
+
+	jmp entry_SYSCALL_compat_after_hwframe
+SYM_FUNC_END(xen_syscall32_target)
+
+/* 32-bit compat sysenter target */
+SYM_FUNC_START(xen_sysenter_target)
+	mov 0*8(%rsp), %rcx
+	mov 1*8(%rsp), %r11
+	mov 5*8(%rsp), %rsp
+	jmp entry_SYSENTER_compat
+SYM_FUNC_END(xen_sysenter_target)
+
+#else /* !CONFIG_IA32_EMULATION */
+
+SYM_FUNC_START_ALIAS(xen_syscall32_target)
+SYM_FUNC_START(xen_sysenter_target)
+	lea 16(%rsp), %rsp	/* strip %rcx, %r11 */
+	mov $-ENOSYS, %rax
+	pushq $0
+	jmp hypercall_iret
+SYM_FUNC_END(xen_sysenter_target)
+SYM_FUNC_END_ALIAS(xen_syscall32_target)
+
+#endif	/* CONFIG_IA32_EMULATION */
diff --git a/arch/x86/xen/xen-asm_32.S b/arch/x86/xen/xen-asm_32.S
deleted file mode 100644
index 4757cec33abe..000000000000
--- a/arch/x86/xen/xen-asm_32.S
+++ /dev/null
@@ -1,185 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-/*
- * Asm versions of Xen pv-ops, suitable for direct use.
- *
- * We only bother with direct forms (ie, vcpu in pda) of the
- * operations here; the indirect forms are better handled in C.
- */
-
-#include <asm/thread_info.h>
-#include <asm/processor-flags.h>
-#include <asm/segment.h>
-#include <asm/asm.h>
-
-#include <xen/interface/xen.h>
-
-#include <linux/linkage.h>
-
-/* Pseudo-flag used for virtual NMI, which we don't implement yet */
-#define XEN_EFLAGS_NMI  0x80000000
-
-/*
- * This is run where a normal iret would be run, with the same stack setup:
- *	8: eflags
- *	4: cs
- *	esp-> 0: eip
- *
- * This attempts to make sure that any pending events are dealt with
- * on return to usermode, but there is a small window in which an
- * event can happen just before entering usermode.  If the nested
- * interrupt ends up setting one of the TIF_WORK_MASK pending work
- * flags, they will not be tested again before returning to
- * usermode. This means that a process can end up with pending work,
- * which will be unprocessed until the process enters and leaves the
- * kernel again, which could be an unbounded amount of time.  This
- * means that a pending signal or reschedule event could be
- * indefinitely delayed.
- *
- * The fix is to notice a nested interrupt in the critical window, and
- * if one occurs, then fold the nested interrupt into the current
- * interrupt stack frame, and re-process it iteratively rather than
- * recursively.  This means that it will exit via the normal path, and
- * all pending work will be dealt with appropriately.
- *
- * Because the nested interrupt handler needs to deal with the current
- * stack state in whatever form its in, we keep things simple by only
- * using a single register which is pushed/popped on the stack.
- */
-
-.macro POP_FS
-1:
-	popw %fs
-.pushsection .fixup, "ax"
-2:	movw $0, (%esp)
-	jmp 1b
-.popsection
-	_ASM_EXTABLE(1b,2b)
-.endm
-
-SYM_CODE_START(xen_iret)
-	/* test eflags for special cases */
-	testl $(X86_EFLAGS_VM | XEN_EFLAGS_NMI), 8(%esp)
-	jnz hyper_iret
-
-	push %eax
-	ESP_OFFSET=4	# bytes pushed onto stack
-
-	/* Store vcpu_info pointer for easy access */
-#ifdef CONFIG_SMP
-	pushw %fs
-	movl $(__KERNEL_PERCPU), %eax
-	movl %eax, %fs
-	movl %fs:xen_vcpu, %eax
-	POP_FS
-#else
-	movl %ss:xen_vcpu, %eax
-#endif
-
-	/* check IF state we're restoring */
-	testb $X86_EFLAGS_IF>>8, 8+1+ESP_OFFSET(%esp)
-
-	/*
-	 * Maybe enable events.  Once this happens we could get a
-	 * recursive event, so the critical region starts immediately
-	 * afterwards.  However, if that happens we don't end up
-	 * resuming the code, so we don't have to be worried about
-	 * being preempted to another CPU.
-	 */
-	setz %ss:XEN_vcpu_info_mask(%eax)
-xen_iret_start_crit:
-
-	/* check for unmasked and pending */
-	cmpw $0x0001, %ss:XEN_vcpu_info_pending(%eax)
-
-	/*
-	 * If there's something pending, mask events again so we can
-	 * jump back into exc_xen_hypervisor_callback. Otherwise do not
-	 * touch XEN_vcpu_info_mask.
-	 */
-	jne 1f
-	movb $1, %ss:XEN_vcpu_info_mask(%eax)
-
-1:	popl %eax
-
-	/*
-	 * From this point on the registers are restored and the stack
-	 * updated, so we don't need to worry about it if we're
-	 * preempted
-	 */
-iret_restore_end:
-
-	/*
-	 * Jump to hypervisor_callback after fixing up the stack.
-	 * Events are masked, so jumping out of the critical region is
-	 * OK.
-	 */
-	je xen_asm_exc_xen_hypervisor_callback
-
-1:	iret
-xen_iret_end_crit:
-	_ASM_EXTABLE(1b, asm_iret_error)
-
-hyper_iret:
-	/* put this out of line since its very rarely used */
-	jmp hypercall_page + __HYPERVISOR_iret * 32
-SYM_CODE_END(xen_iret)
-
-	.globl xen_iret_start_crit, xen_iret_end_crit
-
-/*
- * This is called by xen_asm_exc_xen_hypervisor_callback in entry_32.S when it sees
- * that the EIP at the time of interrupt was between
- * xen_iret_start_crit and xen_iret_end_crit.
- *
- * The stack format at this point is:
- *	----------------
- *	 ss		: (ss/esp may be present if we came from usermode)
- *	 esp		:
- *	 eflags		}  outer exception info
- *	 cs		}
- *	 eip		}
- *	----------------
- *	 eax		:  outer eax if it hasn't been restored
- *	----------------
- *	 eflags		}
- *	 cs		}  nested exception info
- *	 eip		}
- *	 return address	: (into xen_asm_exc_xen_hypervisor_callback)
- *
- * In order to deliver the nested exception properly, we need to discard the
- * nested exception frame such that when we handle the exception, we do it
- * in the context of the outer exception rather than starting a new one.
- *
- * The only caveat is that if the outer eax hasn't been restored yet (i.e.
- * it's still on stack), we need to restore its value here.
-*/
-.pushsection .noinstr.text, "ax"
-SYM_CODE_START(xen_iret_crit_fixup)
-	/*
-	 * Paranoia: Make sure we're really coming from kernel space.
-	 * One could imagine a case where userspace jumps into the
-	 * critical range address, but just before the CPU delivers a
-	 * PF, it decides to deliver an interrupt instead.  Unlikely?
-	 * Definitely.  Easy to avoid?  Yes.
-	 */
-	testb $2, 2*4(%esp)		/* nested CS */
-	jnz 2f
-
-	/*
-	 * If eip is before iret_restore_end then stack
-	 * hasn't been restored yet.
-	 */
-	cmpl $iret_restore_end, 1*4(%esp)
-	jae 1f
-
-	movl 4*4(%esp), %eax		/* load outer EAX */
-	ret $4*4			/* discard nested EIP, CS, and EFLAGS as
-					 * well as the just restored EAX */
-
-1:
-	ret $3*4			/* discard nested EIP, CS, and EFLAGS */
-
-2:
-	ret
-SYM_CODE_END(xen_iret_crit_fixup)
-.popsection
diff --git a/arch/x86/xen/xen-asm_64.S b/arch/x86/xen/xen-asm_64.S
deleted file mode 100644
index 5d252aaeade8..000000000000
--- a/arch/x86/xen/xen-asm_64.S
+++ /dev/null
@@ -1,181 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-/*
- * Asm versions of Xen pv-ops, suitable for direct use.
- *
- * We only bother with direct forms (ie, vcpu in pda) of the
- * operations here; the indirect forms are better handled in C.
- */
-
-#include <asm/errno.h>
-#include <asm/percpu.h>
-#include <asm/processor-flags.h>
-#include <asm/segment.h>
-#include <asm/asm-offsets.h>
-#include <asm/thread_info.h>
-#include <asm/asm.h>
-
-#include <xen/interface/xen.h>
-
-#include <linux/init.h>
-#include <linux/linkage.h>
-
-.macro xen_pv_trap name
-SYM_CODE_START(xen_\name)
-	pop %rcx
-	pop %r11
-	jmp  \name
-SYM_CODE_END(xen_\name)
-_ASM_NOKPROBE(xen_\name)
-.endm
-
-xen_pv_trap asm_exc_divide_error
-xen_pv_trap asm_exc_debug
-xen_pv_trap asm_exc_xendebug
-xen_pv_trap asm_exc_int3
-xen_pv_trap asm_exc_xennmi
-xen_pv_trap asm_exc_overflow
-xen_pv_trap asm_exc_bounds
-xen_pv_trap asm_exc_invalid_op
-xen_pv_trap asm_exc_device_not_available
-xen_pv_trap asm_exc_double_fault
-xen_pv_trap asm_exc_coproc_segment_overrun
-xen_pv_trap asm_exc_invalid_tss
-xen_pv_trap asm_exc_segment_not_present
-xen_pv_trap asm_exc_stack_segment
-xen_pv_trap asm_exc_general_protection
-xen_pv_trap asm_exc_page_fault
-xen_pv_trap asm_exc_spurious_interrupt_bug
-xen_pv_trap asm_exc_coprocessor_error
-xen_pv_trap asm_exc_alignment_check
-#ifdef CONFIG_X86_MCE
-xen_pv_trap asm_exc_machine_check
-#endif /* CONFIG_X86_MCE */
-xen_pv_trap asm_exc_simd_coprocessor_error
-#ifdef CONFIG_IA32_EMULATION
-xen_pv_trap entry_INT80_compat
-#endif
-xen_pv_trap asm_exc_xen_hypervisor_callback
-
-	__INIT
-SYM_CODE_START(xen_early_idt_handler_array)
-	i = 0
-	.rept NUM_EXCEPTION_VECTORS
-	pop %rcx
-	pop %r11
-	jmp early_idt_handler_array + i*EARLY_IDT_HANDLER_SIZE
-	i = i + 1
-	.fill xen_early_idt_handler_array + i*XEN_EARLY_IDT_HANDLER_SIZE - ., 1, 0xcc
-	.endr
-SYM_CODE_END(xen_early_idt_handler_array)
-	__FINIT
-
-hypercall_iret = hypercall_page + __HYPERVISOR_iret * 32
-/*
- * Xen64 iret frame:
- *
- *	ss
- *	rsp
- *	rflags
- *	cs
- *	rip		<-- standard iret frame
- *
- *	flags
- *
- *	rcx		}
- *	r11		}<-- pushed by hypercall page
- * rsp->rax		}
- */
-SYM_CODE_START(xen_iret)
-	pushq $0
-	jmp hypercall_iret
-SYM_CODE_END(xen_iret)
-
-SYM_CODE_START(xen_sysret64)
-	/*
-	 * We're already on the usermode stack at this point, but
-	 * still with the kernel gs, so we can easily switch back.
-	 *
-	 * tss.sp2 is scratch space.
-	 */
-	movq %rsp, PER_CPU_VAR(cpu_tss_rw + TSS_sp2)
-	movq PER_CPU_VAR(cpu_current_top_of_stack), %rsp
-
-	pushq $__USER_DS
-	pushq PER_CPU_VAR(cpu_tss_rw + TSS_sp2)
-	pushq %r11
-	pushq $__USER_CS
-	pushq %rcx
-
-	pushq $VGCF_in_syscall
-	jmp hypercall_iret
-SYM_CODE_END(xen_sysret64)
-
-/*
- * Xen handles syscall callbacks much like ordinary exceptions, which
- * means we have:
- * - kernel gs
- * - kernel rsp
- * - an iret-like stack frame on the stack (including rcx and r11):
- *	ss
- *	rsp
- *	rflags
- *	cs
- *	rip
- *	r11
- * rsp->rcx
- */
-
-/* Normal 64-bit system call target */
-SYM_FUNC_START(xen_syscall_target)
-	popq %rcx
-	popq %r11
-
-	/*
-	 * Neither Xen nor the kernel really knows what the old SS and
-	 * CS were.  The kernel expects __USER_DS and __USER_CS, so
-	 * report those values even though Xen will guess its own values.
-	 */
-	movq $__USER_DS, 4*8(%rsp)
-	movq $__USER_CS, 1*8(%rsp)
-
-	jmp entry_SYSCALL_64_after_hwframe
-SYM_FUNC_END(xen_syscall_target)
-
-#ifdef CONFIG_IA32_EMULATION
-
-/* 32-bit compat syscall target */
-SYM_FUNC_START(xen_syscall32_target)
-	popq %rcx
-	popq %r11
-
-	/*
-	 * Neither Xen nor the kernel really knows what the old SS and
-	 * CS were.  The kernel expects __USER32_DS and __USER32_CS, so
-	 * report those values even though Xen will guess its own values.
-	 */
-	movq $__USER32_DS, 4*8(%rsp)
-	movq $__USER32_CS, 1*8(%rsp)
-
-	jmp entry_SYSCALL_compat_after_hwframe
-SYM_FUNC_END(xen_syscall32_target)
-
-/* 32-bit compat sysenter target */
-SYM_FUNC_START(xen_sysenter_target)
-	mov 0*8(%rsp), %rcx
-	mov 1*8(%rsp), %r11
-	mov 5*8(%rsp), %rsp
-	jmp entry_SYSENTER_compat
-SYM_FUNC_END(xen_sysenter_target)
-
-#else /* !CONFIG_IA32_EMULATION */
-
-SYM_FUNC_START_ALIAS(xen_syscall32_target)
-SYM_FUNC_START(xen_sysenter_target)
-	lea 16(%rsp), %rsp	/* strip %rcx, %r11 */
-	mov $-ENOSYS, %rax
-	pushq $0
-	jmp hypercall_iret
-SYM_FUNC_END(xen_sysenter_target)
-SYM_FUNC_END_ALIAS(xen_syscall32_target)
-
-#endif	/* CONFIG_IA32_EMULATION */
diff --git a/arch/x86/xen/xen-head.S b/arch/x86/xen/xen-head.S
index 1ba601df3a37..2d7c8f34f56c 100644
--- a/arch/x86/xen/xen-head.S
+++ b/arch/x86/xen/xen-head.S
@@ -35,13 +35,8 @@ SYM_CODE_START(startup_xen)
 	rep __ASM_SIZE(stos)
 
 	mov %_ASM_SI, xen_start_info
-#ifdef CONFIG_X86_64
 	mov initial_stack(%rip), %rsp
-#else
-	mov initial_stack, %esp
-#endif
 
-#ifdef CONFIG_X86_64
 	/* Set up %gs.
 	 *
 	 * The base of %gs always points to fixed_percpu_data.  If the
@@ -53,7 +48,6 @@ SYM_CODE_START(startup_xen)
 	movq	$INIT_PER_CPU_VAR(fixed_percpu_data),%rax
 	cdq
 	wrmsr
-#endif
 
 	call xen_start_kernel
 SYM_CODE_END(startup_xen)
diff --git a/drivers/xen/Kconfig b/drivers/xen/Kconfig
index 727f11eb46b2..46e7fd099904 100644
--- a/drivers/xen/Kconfig
+++ b/drivers/xen/Kconfig
@@ -52,9 +52,7 @@ config XEN_BALLOON_MEMORY_HOTPLUG
 
 config XEN_BALLOON_MEMORY_HOTPLUG_LIMIT
 	int "Hotplugged memory limit (in GiB) for a PV guest"
-	default 512 if X86_64
-	default 4 if X86_32
-	range 0 64 if X86_32
+	default 512
 	depends on XEN_HAVE_PVMMU
 	depends on XEN_BALLOON_MEMORY_HOTPLUG
 	help
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Wed Jul 01 11:59:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jul 2020 11:59:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jqbOK-00038T-Qb; Wed, 01 Jul 2020 11:59:04 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Xe6U=AM=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jqbOJ-00038O-S6
 for xen-devel@lists.xenproject.org; Wed, 01 Jul 2020 11:59:03 +0000
X-Inumbo-ID: 40edcd88-bb92-11ea-86f7-12813bfff9fa
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 40edcd88-bb92-11ea-86f7-12813bfff9fa;
 Wed, 01 Jul 2020 11:59:02 +0000 (UTC)
Authentication-Results: esa3.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: weTeaknDV0sEqpvRdxIpQaRfhHXiDbIXWdTcnTH88GBzTgyz3JX8JaXUOk17WdXMqXxpz5NspV
 /pZWJz1PsVF1K3GJVKdZajwIQ0EWZ0hv29TC0n7QTrGIHrJaM2FAlT2/fDbxBUkGQmSw3WFDpL
 J7ptZ1N6CfavGbWnd2LjEYelswXS1D+UieFeqzKhwMDuoVQvzGzLM6OTYUgLo3yWGOZo+2a+bw
 trDDD055jEneNdtP0jDlA0p0zScAeWbA+ulnAvnUDvSF2ECOmlviBXIRlK2bEZkdOSIlv9qwhe
 thI=
X-SBRS: 2.7
X-MesageID: 21379321
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,300,1589256000"; d="scan'208";a="21379321"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Subject: [PATCH for-4.14] x86/spec-ctrl: Protect against CALL/JMP
 straight-line speculation
Date: Wed, 1 Jul 2020 12:58:42 +0100
Message-ID: <20200701115842.18583-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Paul Durrant <paul@xen.org>,
 Wei Liu <wl@xen.org>, Jan Beulich <JBeulich@suse.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Some x86 CPUs speculatively execute beyond indirect CALL/JMP instructions.

With CONFIG_INDIRECT_THUNK / Retpolines, indirect CALL/JMP instructions are
converted to direct CALL/JMPs to __x86_indirect_thunk_REG(), leaving just a
handful of indirect JMPs implementing those stubs.

There is no architectural execution beyond an indirect JMP, so use INT3 as
recommended by vendors to halt speculative execution.  This is shorter than
LFENCE (which would also work fine), but also shows up in logs if we
unexpectedly execute them.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Wei Liu <wl@xen.org>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Paul Durrant <paul@xen.org>

This wants backporting to all releases, possibly even into the security trees,
and should therefore be considered for 4.14 at this point.
---
 xen/arch/x86/indirect-thunk.S | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/xen/arch/x86/indirect-thunk.S b/xen/arch/x86/indirect-thunk.S
index 3c17f75c23..7392aee127 100644
--- a/xen/arch/x86/indirect-thunk.S
+++ b/xen/arch/x86/indirect-thunk.S
@@ -24,10 +24,12 @@
 .macro IND_THUNK_LFENCE reg:req
         lfence
         jmp *%\reg
+        int3 /* Halt straight-line speculation */
 .endm
 
 .macro IND_THUNK_JMP reg:req
         jmp *%\reg
+        int3 /* Halt straight-line speculation */
 .endm
 
 /*
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Wed Jul 01 12:08:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jul 2020 12:08:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jqbWq-00049f-Jj; Wed, 01 Jul 2020 12:07:52 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WQY8=AM=epam.com=oleksandr_andrushchenko@srs-us1.protection.inumbo.net>)
 id 1jqbWp-000482-5L
 for xen-devel@lists.xenproject.org; Wed, 01 Jul 2020 12:07:51 +0000
X-Inumbo-ID: 7a17a524-bb93-11ea-b7bb-bc764e2007e4
Received: from EUR04-HE1-obe.outbound.protection.outlook.com (unknown
 [40.107.7.48]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7a17a524-bb93-11ea-b7bb-bc764e2007e4;
 Wed, 01 Jul 2020 12:07:47 +0000 (UTC)
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Kk16XI+ghPhCiQkBBMS/XiXuQjLLelf2CkrmHTYrnEo7H6EUCtlhabsxqlVUQwJyBbHhXDuyQVpJs3GJxZrUDU+uSg4f8Taazwp01idOhoNcWxmX7rYHBnU0e/MajlJuG57zkiyYmUY5YWSk1SHJmFTvGmmf3viIJHA0Wr90G2ZrVFi4+3AFY+u8gRpPSVu1MYADCj29RBfk+hHsUBN/a1XUX4xT7+WlSYxSoOiKmmyv+FbzWtrAm/rfDza3sFzPRSQaojdX/2cT45l7MAbDv6hLj4kRyvE24sGKACLPEf1r4V6A9UqZ/frvNMYKTx2P7pkoIhyXvoBdRwEMnplZcg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; 
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=5BgXuofRnRUjDAtDAEIx7EjssbYAkRtZRDtvRze2+00=;
 b=gRye+K4KqQRoMmgGvANMHzReRpGujyMUfdO1c+jQbdNVC38JYB8X6mDaX2MGrpJW6qkRlWA+Bn9ydwvlPEkzIgXy8nYyNnC1rHudYGv6jI3Q0msBj9PIwpUdDWUD80oaOordNSgDNgnHgaUFvGc0EaKOTdUYp+vxpuMDs5PJM70bbmCRAIEHdmRjeW/Nn2VovtZ2bKIbmKzyZKICrpTlXxDs56AryRJ2MPOF9T+yPw1S3qSjF0AeINcyU3ykH9O5prETNVNyAScAw+RZBMPtlZslWcY9qkx+ZFvBjHLtJIO/TaHS3z2deT69Qj1M6rwzarkK73OideBVt38zuniaYA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=epam.com; dmarc=pass action=none header.from=epam.com;
 dkim=pass header.d=epam.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=epam.com; s=selector1; 
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=5BgXuofRnRUjDAtDAEIx7EjssbYAkRtZRDtvRze2+00=;
 b=Y/iarCbxIj/iAWfa5J4HY+VG79m27MGa8rnkt2XUG230UhU1zF8JOtFKACQPIbpJlMb/YvurA7qJrS6h5bciFfU9vcugrWKIFtanA7uwsQPbw/0KSJex3kYtmk1JP1hwfdlCQyqR3QKD5mVfa+SHoMRKz5nNu4AEzwf8mCySvqagsOHrhch5Q9gJ95sXEuJx3c4MwvQlJ33hl1TP6agchiLz/NyjEZOS5ZSGvUS764yHvBPkeQq18OYKRl9GrRuP4Z+n3YCAIlp9pYSFvoeObgH0XUZqhPpoGuRa8AHBKRHj22IvllAYe3IVD3HJoF4LqMyit6woBSMC5BkT8VMWRQ==
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com (2603:10a6:20b:153::17)
 by AM0PR03MB4148.eurprd03.prod.outlook.com (2603:10a6:208:c7::10)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3131.24; Wed, 1 Jul
 2020 12:07:46 +0000
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::21e5:6d27:5ba0:f508]) by AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::21e5:6d27:5ba0:f508%7]) with mapi id 15.20.3131.029; Wed, 1 Jul 2020
 12:07:46 +0000
From: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
To: =?utf-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>, Oleksandr
 Andrushchenko <andr2000@gmail.com>, "xen-devel@lists.xenproject.org"
 <xen-devel@lists.xenproject.org>, "ian.jackson@eu.citrix.com"
 <ian.jackson@eu.citrix.com>, "wl@xen.org" <wl@xen.org>
Subject: Re: [PATCH v2] xen/displif: Protocol version 2
Thread-Topic: [PATCH v2] xen/displif: Protocol version 2
Thread-Index: AQHWT3f2pfc1d0tAak69A/eTMH1j5qjyit6AgAAWp4A=
Date: Wed, 1 Jul 2020 12:07:46 +0000
Message-ID: <b5a6e034-4d52-d6b2-7c14-3c44c4a19cc3@epam.com>
References: <20200701071923.18883-1-andr2000@gmail.com>
 <dffd127d-c5a1-4c77-baa8-f1d931145bc4@suse.com>
In-Reply-To: <dffd127d-c5a1-4c77-baa8-f1d931145bc4@suse.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
authentication-results: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=epam.com;
x-originating-ip: [185.199.97.5]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 16e11f91-0095-4161-c96f-08d81db75de3
x-ms-traffictypediagnostic: AM0PR03MB4148:
x-microsoft-antispam-prvs: <AM0PR03MB414890DDB4BD977086E4E45EE76C0@AM0PR03MB4148.eurprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:8882;
x-forefront-prvs: 04519BA941
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info: lN3ZWaG/J9hazq2j0S3E/Hrjn7fIstK5pCnPaJ/L1xmrGGq0HVHzA4Epnuyfliu+Bw7kcngnDePRnxQrUGos3j1cHrW3A/WWwTo5QZx3Dv7mXA8PwNqK4Q23oI+fh2j/JwbHoyQePQL8i6r/qChcVFzGY0/k/PgTjqNyYaLoVehV+kOOumEo2tbFY4JkjN0u/HNIFlGJ7SWNZIiM8iT26OzWwEyK2ZEYro4zMgGuKuTfIRMYaez52MByzVKvmPoio/ZItE0P2UKXte28tFhpuSp3reV0YPPGETbdBmA4/fH0hPWWrh+xTu/1Rj/yGKca
x-forefront-antispam-report: CIP:255.255.255.255; CTRY:; LANG:en; SCL:1; SRV:;
 IPV:NLI; SFV:NSPM; H:AM0PR03MB6324.eurprd03.prod.outlook.com; PTR:; CAT:NONE;
 SFTY:;
 SFS:(4636009)(136003)(39860400002)(376002)(396003)(366004)(346002)(5660300002)(31696002)(8936002)(71200400001)(110136005)(26005)(478600001)(66946007)(8676002)(76116006)(66446008)(36756003)(64756008)(186003)(66476007)(66556008)(91956017)(6506007)(53546011)(316002)(31686004)(86362001)(2616005)(66574015)(6512007)(2906002)(83380400001)(6486002);
 DIR:OUT; SFP:1101; 
x-ms-exchange-antispam-messagedata: bgDLKx4MiH2V4pce61Y3B0jGJ12J66VLhaeopqr0RZHmgGrkCraJosEmB531Fc6QT1PzYqO5zg9cLQahMD5qXk4CS0C0COmBSa1z4rJIo1tKSxKHqeIttPsOKh0SXTKfNV7mco1I8OjbxjavexHNZUveL8H3iYbGLL+Ra7cplkm+Cs1ED6+zIpEjmnrc4IycCeSbuBosQDPNliVUj8Xhv5snslu6cISxIrsYmKyfedPQy4B+zuTdHO9+diIyf7e4GT0VuAFwTssM2oC+wsyDEhnYaqzkjihQ4bGZ2hdrh8puuikkH4+sa/4bxG+6rk+2lsIaKYkPnR2z38sS+0T9XEUKMVtgOIuMXXy87ApgY36k1Ni49qniXHhxIqmzrbka/J7MlHS+euRWxdso5QYmZOK+mN4D8g2RAWGOhPLO43MEqWtRHuh+e56xnxZHLVEvoki07bypps+R9xs2BBzwlgiGlDy8gdwREb2pHuhwvbs=
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="utf-8"
Content-ID: <3FC33300C47B714FA2EE8FB7A5313F55@eurprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: epam.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: AM0PR03MB6324.eurprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 16e11f91-0095-4161-c96f-08d81db75de3
X-MS-Exchange-CrossTenant-originalarrivaltime: 01 Jul 2020 12:07:46.3050 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b41b72d0-4e9f-4c26-8a69-f949f367c91d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: knorLwFEQjRjBE7g3ILmGciPcJ2duzx5uvY/xnhDDM0MVIJY4eo/655VC/95bsPn3rAHTYqX8/Ljq0S7rffJwCbdHsZKgWTdkpRgDckDirZ94zNqf/is3jWMrCnfMr9O
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR03MB4148
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 7/1/20 1:46 PM, Jürgen Groß wrote:
> On 01.07.20 09:19, Oleksandr Andrushchenko wrote:
>> From: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
>>
>> 1. Add protocol version as an integer
>>
>> Version string, which is in fact an integer, is hard to handle in the
>> code that supports different protocol versions. To simplify that
>> also add the version as an integer.
>>
>> 2. Pass buffer offset with XENDISPL_OP_DBUF_CREATE
>>
>> There are cases when display data buffer is created with non-zero
>> offset to the data start. Handle such cases and provide that offset
>> while creating a display buffer.
>>
>> 3. Add XENDISPL_OP_GET_EDID command
>>
>> Add an optional request for reading Extended Display Identification
>> Data (EDID) structure which allows better configuration of the
>> display connectors over the configuration set in XenStore.
>> With this change connectors may have multiple resolutions defined
>> with respect to detailed timing definitions and additional properties
>> normally provided by displays.
>>
>> If this request is not supported by the backend then visible area
>> is defined by the relevant XenStore's "resolution" property.
>>
>> If backend provides extended display identification data (EDID) with
>> XENDISPL_OP_GET_EDID request then EDID values must take precedence
>> over the resolutions defined in XenStore.
>>
>> 4. Bump protocol version to 2.
>>
>> Signed-off-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
>
> Reviewed-by: Juergen Gross <jgross@suse.com>

Thank you, do you want me to prepare the same for the kernel so
you have it at hand when the time comes?

>
>
> Juergen


From xen-devel-bounces@lists.xenproject.org Wed Jul 01 12:16:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jul 2020 12:16:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jqbfV-00059M-V7; Wed, 01 Jul 2020 12:16:49 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=T2yc=AM=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jqbfU-00058k-JN
 for xen-devel@lists.xenproject.org; Wed, 01 Jul 2020 12:16:48 +0000
X-Inumbo-ID: b95029ae-bb94-11ea-bb8b-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b95029ae-bb94-11ea-bb8b-bc764e2007e4;
 Wed, 01 Jul 2020 12:16:43 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 6FE32ADE2;
 Wed,  1 Jul 2020 12:16:42 +0000 (UTC)
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org,
	linux-kernel@vger.kernel.org
Subject: [PATCH v2 2/2] xen/xenbus: let xenbus_map_ring_valloc() return errno
 values only
Date: Wed,  1 Jul 2020 14:16:38 +0200
Message-Id: <20200701121638.19840-3-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20200701121638.19840-1-jgross@suse.com>
References: <20200701121638.19840-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Today xenbus_map_ring_valloc() can return either a negative errno
value (-ENOMEM or -EINVAL) or a grant status value. This is a mess as
e.g. -ENOMEM and GNTST_eagain have the same numeric value.

Fix that by turning all grant mapping errors into -ENOENT. This is
no problem as all callers of xenbus_map_ring_valloc() only use the
return value to print an error message, and in case of mapping errors
the grant status value has already been printed by __xenbus_map_ring()
before.

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
---
 drivers/xen/xenbus/xenbus_client.c | 22 ++++++----------------
 1 file changed, 6 insertions(+), 16 deletions(-)

diff --git a/drivers/xen/xenbus/xenbus_client.c b/drivers/xen/xenbus/xenbus_client.c
index 9f8372079ecf..4f168b46fbca 100644
--- a/drivers/xen/xenbus/xenbus_client.c
+++ b/drivers/xen/xenbus/xenbus_client.c
@@ -456,8 +456,7 @@ EXPORT_SYMBOL_GPL(xenbus_free_evtchn);
  * Map @nr_grefs pages of memory into this domain from another
  * domain's grant table.  xenbus_map_ring_valloc allocates @nr_grefs
  * pages of virtual address space, maps the pages to that address, and
- * sets *vaddr to that address.  Returns 0 on success, and GNTST_*
- * (see xen/include/interface/grant_table.h) or -ENOMEM / -EINVAL on
+ * sets *vaddr to that address.  Returns 0 on success, and -errno on
  * error. If an error is returned, device will switch to
  * XenbusStateClosing and the error message will be saved in XenStore.
  */
@@ -477,18 +476,11 @@ int xenbus_map_ring_valloc(struct xenbus_device *dev, grant_ref_t *gnt_refs,
 		return -ENOMEM;
 
 	info->node = kzalloc(sizeof(*info->node), GFP_KERNEL);
-	if (!info->node) {
+	if (!info->node)
 		err = -ENOMEM;
-		goto out;
-	}
-
-	err = ring_ops->map(dev, info, gnt_refs, nr_grefs, vaddr);
-
-	/* Some hypervisors are buggy and can return 1. */
-	if (err > 0)
-		err = GNTST_general_error;
+	else
+		err = ring_ops->map(dev, info, gnt_refs, nr_grefs, vaddr);
 
- out:
 	kfree(info->node);
 	kfree(info);
 	return err;
@@ -507,7 +499,6 @@ static int __xenbus_map_ring(struct xenbus_device *dev,
 			     bool *leaked)
 {
 	int i, j;
-	int err = GNTST_okay;
 
 	if (nr_grefs > XENBUS_MAX_RING_GRANTS)
 		return -EINVAL;
@@ -522,7 +513,6 @@ static int __xenbus_map_ring(struct xenbus_device *dev,
 
 	for (i = 0; i < nr_grefs; i++) {
 		if (info->map[i].status != GNTST_okay) {
-			err = info->map[i].status;
 			xenbus_dev_fatal(dev, info->map[i].status,
 					 "mapping in shared page %d from domain %d",
 					 gnt_refs[i], dev->otherend_id);
@@ -531,7 +521,7 @@ static int __xenbus_map_ring(struct xenbus_device *dev,
 			handles[i] = info->map[i].handle;
 	}
 
-	return GNTST_okay;
+	return 0;
 
  fail:
 	for (i = j = 0; i < nr_grefs; i++) {
@@ -554,7 +544,7 @@ static int __xenbus_map_ring(struct xenbus_device *dev,
 		}
 	}
 
-	return err;
+	return -ENOENT;
 }
 
 /**
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Wed Jul 01 12:16:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jul 2020 12:16:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jqbfP-00058g-Fv; Wed, 01 Jul 2020 12:16:43 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=T2yc=AM=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jqbfO-00058b-DD
 for xen-devel@lists.xenproject.org; Wed, 01 Jul 2020 12:16:42 +0000
X-Inumbo-ID: b8857786-bb94-11ea-8701-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b8857786-bb94-11ea-8701-12813bfff9fa;
 Wed, 01 Jul 2020 12:16:41 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 0E133ADE0;
 Wed,  1 Jul 2020 12:16:41 +0000 (UTC)
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org,
 clang-built-linux@googlegroups.com
Subject: [PATCH v2 0/2] xen/xenbus: some cleanups
Date: Wed,  1 Jul 2020 14:16:36 +0200
Message-Id: <20200701121638.19840-1-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Avoid allocating large amount of data on the stack in
xenbus_map_ring_valloc() and some related return value cleanups.

Juergen Gross (2):
  xen/xenbus: avoid large structs and arrays on the stack
  xen/xenbus: let xenbus_map_ring_valloc() return errno values only

 drivers/xen/xenbus/xenbus_client.c | 167 ++++++++++++++---------------
 1 file changed, 81 insertions(+), 86 deletions(-)

-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Wed Jul 01 12:16:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jul 2020 12:16:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jqbfQ-00058r-NO; Wed, 01 Jul 2020 12:16:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=T2yc=AM=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jqbfP-00058k-Kn
 for xen-devel@lists.xenproject.org; Wed, 01 Jul 2020 12:16:43 +0000
X-Inumbo-ID: b8f40f48-bb94-11ea-8496-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b8f40f48-bb94-11ea-8496-bc764e2007e4;
 Wed, 01 Jul 2020 12:16:42 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id CA5A5ADE1;
 Wed,  1 Jul 2020 12:16:41 +0000 (UTC)
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org,
 clang-built-linux@googlegroups.com
Subject: [PATCH v2 1/2] xen/xenbus: avoid large structs and arrays on the stack
Date: Wed,  1 Jul 2020 14:16:37 +0200
Message-Id: <20200701121638.19840-2-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20200701121638.19840-1-jgross@suse.com>
References: <20200701121638.19840-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Arnd Bergmann <arnd@arndb.de>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

xenbus_map_ring_valloc() and its sub-functions are putting quite large
structs and arrays on the stack. This is problematic at runtime, but
might also result in build failures (e.g. with clang due to the option
-Werror,-Wframe-larger-than=... used).

Fix that by moving most of the data from the stack into a dynamically
allocated struct. Performance is no issue here, as
xenbus_map_ring_valloc() is used only when adding a new PV device to
a backend driver.

While at it move some duplicated code from pv/hvm specific mapping
functions to the single caller.

Reported-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Juergen Gross <jgross@suse.com>
---
V2:
- shorten internal function names (Boris Ostrovsky)
---
 drivers/xen/xenbus/xenbus_client.c | 161 +++++++++++++++--------------
 1 file changed, 83 insertions(+), 78 deletions(-)

diff --git a/drivers/xen/xenbus/xenbus_client.c b/drivers/xen/xenbus/xenbus_client.c
index 040d2a43e8e3..9f8372079ecf 100644
--- a/drivers/xen/xenbus/xenbus_client.c
+++ b/drivers/xen/xenbus/xenbus_client.c
@@ -69,11 +69,27 @@ struct xenbus_map_node {
 	unsigned int   nr_handles;
 };
 
+struct map_ring_valloc {
+	struct xenbus_map_node *node;
+
+	/* Why do we need two arrays? See comment of __xenbus_map_ring */
+	union {
+		unsigned long addrs[XENBUS_MAX_RING_GRANTS];
+		pte_t *ptes[XENBUS_MAX_RING_GRANTS];
+	};
+	phys_addr_t phys_addrs[XENBUS_MAX_RING_GRANTS];
+
+	struct gnttab_map_grant_ref map[XENBUS_MAX_RING_GRANTS];
+	struct gnttab_unmap_grant_ref unmap[XENBUS_MAX_RING_GRANTS];
+
+	unsigned int idx;	/* HVM only. */
+};
+
 static DEFINE_SPINLOCK(xenbus_valloc_lock);
 static LIST_HEAD(xenbus_valloc_pages);
 
 struct xenbus_ring_ops {
-	int (*map)(struct xenbus_device *dev,
+	int (*map)(struct xenbus_device *dev, struct map_ring_valloc *info,
 		   grant_ref_t *gnt_refs, unsigned int nr_grefs,
 		   void **vaddr);
 	int (*unmap)(struct xenbus_device *dev, void *vaddr);
@@ -449,12 +465,32 @@ int xenbus_map_ring_valloc(struct xenbus_device *dev, grant_ref_t *gnt_refs,
 			   unsigned int nr_grefs, void **vaddr)
 {
 	int err;
+	struct map_ring_valloc *info;
+
+	*vaddr = NULL;
+
+	if (nr_grefs > XENBUS_MAX_RING_GRANTS)
+		return -EINVAL;
+
+	info = kzalloc(sizeof(*info), GFP_KERNEL);
+	if (!info)
+		return -ENOMEM;
+
+	info->node = kzalloc(sizeof(*info->node), GFP_KERNEL);
+	if (!info->node) {
+		err = -ENOMEM;
+		goto out;
+	}
+
+	err = ring_ops->map(dev, info, gnt_refs, nr_grefs, vaddr);
 
-	err = ring_ops->map(dev, gnt_refs, nr_grefs, vaddr);
 	/* Some hypervisors are buggy and can return 1. */
 	if (err > 0)
 		err = GNTST_general_error;
 
+ out:
+	kfree(info->node);
+	kfree(info);
 	return err;
 }
 EXPORT_SYMBOL_GPL(xenbus_map_ring_valloc);
@@ -466,12 +502,10 @@ static int __xenbus_map_ring(struct xenbus_device *dev,
 			     grant_ref_t *gnt_refs,
 			     unsigned int nr_grefs,
 			     grant_handle_t *handles,
-			     phys_addr_t *addrs,
+			     struct map_ring_valloc *info,
 			     unsigned int flags,
 			     bool *leaked)
 {
-	struct gnttab_map_grant_ref map[XENBUS_MAX_RING_GRANTS];
-	struct gnttab_unmap_grant_ref unmap[XENBUS_MAX_RING_GRANTS];
 	int i, j;
 	int err = GNTST_okay;
 
@@ -479,23 +513,22 @@ static int __xenbus_map_ring(struct xenbus_device *dev,
 		return -EINVAL;
 
 	for (i = 0; i < nr_grefs; i++) {
-		memset(&map[i], 0, sizeof(map[i]));
-		gnttab_set_map_op(&map[i], addrs[i], flags, gnt_refs[i],
-				  dev->otherend_id);
+		gnttab_set_map_op(&info->map[i], info->phys_addrs[i], flags,
+				  gnt_refs[i], dev->otherend_id);
 		handles[i] = INVALID_GRANT_HANDLE;
 	}
 
-	gnttab_batch_map(map, i);
+	gnttab_batch_map(info->map, i);
 
 	for (i = 0; i < nr_grefs; i++) {
-		if (map[i].status != GNTST_okay) {
-			err = map[i].status;
-			xenbus_dev_fatal(dev, map[i].status,
+		if (info->map[i].status != GNTST_okay) {
+			err = info->map[i].status;
+			xenbus_dev_fatal(dev, info->map[i].status,
 					 "mapping in shared page %d from domain %d",
 					 gnt_refs[i], dev->otherend_id);
 			goto fail;
 		} else
-			handles[i] = map[i].handle;
+			handles[i] = info->map[i].handle;
 	}
 
 	return GNTST_okay;
@@ -503,19 +536,19 @@ static int __xenbus_map_ring(struct xenbus_device *dev,
  fail:
 	for (i = j = 0; i < nr_grefs; i++) {
 		if (handles[i] != INVALID_GRANT_HANDLE) {
-			memset(&unmap[j], 0, sizeof(unmap[j]));
-			gnttab_set_unmap_op(&unmap[j], (phys_addr_t)addrs[i],
+			gnttab_set_unmap_op(&info->unmap[j],
+					    info->phys_addrs[i],
 					    GNTMAP_host_map, handles[i]);
 			j++;
 		}
 	}
 
-	if (HYPERVISOR_grant_table_op(GNTTABOP_unmap_grant_ref, unmap, j))
+	if (HYPERVISOR_grant_table_op(GNTTABOP_unmap_grant_ref, info->unmap, j))
 		BUG();
 
 	*leaked = false;
 	for (i = 0; i < j; i++) {
-		if (unmap[i].status != GNTST_okay) {
+		if (info->unmap[i].status != GNTST_okay) {
 			*leaked = true;
 			break;
 		}
@@ -566,21 +599,12 @@ static int xenbus_unmap_ring(struct xenbus_device *dev, grant_handle_t *handles,
 	return err;
 }
 
-struct map_ring_valloc_hvm
-{
-	unsigned int idx;
-
-	/* Why do we need two arrays? See comment of __xenbus_map_ring */
-	phys_addr_t phys_addrs[XENBUS_MAX_RING_GRANTS];
-	unsigned long addrs[XENBUS_MAX_RING_GRANTS];
-};
-
 static void xenbus_map_ring_setup_grant_hvm(unsigned long gfn,
 					    unsigned int goffset,
 					    unsigned int len,
 					    void *data)
 {
-	struct map_ring_valloc_hvm *info = data;
+	struct map_ring_valloc *info = data;
 	unsigned long vaddr = (unsigned long)gfn_to_virt(gfn);
 
 	info->phys_addrs[info->idx] = vaddr;
@@ -589,39 +613,28 @@ static void xenbus_map_ring_setup_grant_hvm(unsigned long gfn,
 	info->idx++;
 }
 
-static int xenbus_map_ring_valloc_hvm(struct xenbus_device *dev,
-				      grant_ref_t *gnt_ref,
-				      unsigned int nr_grefs,
-				      void **vaddr)
+static int xenbus_map_ring_hvm(struct xenbus_device *dev,
+			       struct map_ring_valloc *info,
+			       grant_ref_t *gnt_ref,
+			       unsigned int nr_grefs,
+			       void **vaddr)
 {
-	struct xenbus_map_node *node;
+	struct xenbus_map_node *node = info->node;
 	int err;
 	void *addr;
 	bool leaked = false;
-	struct map_ring_valloc_hvm info = {
-		.idx = 0,
-	};
 	unsigned int nr_pages = XENBUS_PAGES(nr_grefs);
 
-	if (nr_grefs > XENBUS_MAX_RING_GRANTS)
-		return -EINVAL;
-
-	*vaddr = NULL;
-
-	node = kzalloc(sizeof(*node), GFP_KERNEL);
-	if (!node)
-		return -ENOMEM;
-
 	err = alloc_xenballooned_pages(nr_pages, node->hvm.pages);
 	if (err)
 		goto out_err;
 
 	gnttab_foreach_grant(node->hvm.pages, nr_grefs,
 			     xenbus_map_ring_setup_grant_hvm,
-			     &info);
+			     info);
 
 	err = __xenbus_map_ring(dev, gnt_ref, nr_grefs, node->handles,
-				info.phys_addrs, GNTMAP_host_map, &leaked);
+				info, GNTMAP_host_map, &leaked);
 	node->nr_handles = nr_grefs;
 
 	if (err)
@@ -641,11 +654,13 @@ static int xenbus_map_ring_valloc_hvm(struct xenbus_device *dev,
 	spin_unlock(&xenbus_valloc_lock);
 
 	*vaddr = addr;
+	info->node = NULL;
+
 	return 0;
 
  out_xenbus_unmap_ring:
 	if (!leaked)
-		xenbus_unmap_ring(dev, node->handles, nr_grefs, info.addrs);
+		xenbus_unmap_ring(dev, node->handles, nr_grefs, info->addrs);
 	else
 		pr_alert("leaking %p size %u page(s)",
 			 addr, nr_pages);
@@ -653,7 +668,6 @@ static int xenbus_map_ring_valloc_hvm(struct xenbus_device *dev,
 	if (!leaked)
 		free_xenballooned_pages(nr_pages, node->hvm.pages);
  out_err:
-	kfree(node);
 	return err;
 }
 
@@ -676,40 +690,30 @@ int xenbus_unmap_ring_vfree(struct xenbus_device *dev, void *vaddr)
 EXPORT_SYMBOL_GPL(xenbus_unmap_ring_vfree);
 
 #ifdef CONFIG_XEN_PV
-static int xenbus_map_ring_valloc_pv(struct xenbus_device *dev,
-				     grant_ref_t *gnt_refs,
-				     unsigned int nr_grefs,
-				     void **vaddr)
+static int xenbus_map_ring_pv(struct xenbus_device *dev,
+			      struct map_ring_valloc *info,
+			      grant_ref_t *gnt_refs,
+			      unsigned int nr_grefs,
+			      void **vaddr)
 {
-	struct xenbus_map_node *node;
+	struct xenbus_map_node *node = info->node;
 	struct vm_struct *area;
-	pte_t *ptes[XENBUS_MAX_RING_GRANTS];
-	phys_addr_t phys_addrs[XENBUS_MAX_RING_GRANTS];
 	int err = GNTST_okay;
 	int i;
 	bool leaked;
 
-	*vaddr = NULL;
-
-	if (nr_grefs > XENBUS_MAX_RING_GRANTS)
-		return -EINVAL;
-
-	node = kzalloc(sizeof(*node), GFP_KERNEL);
-	if (!node)
-		return -ENOMEM;
-
-	area = alloc_vm_area(XEN_PAGE_SIZE * nr_grefs, ptes);
+	area = alloc_vm_area(XEN_PAGE_SIZE * nr_grefs, info->ptes);
 	if (!area) {
 		kfree(node);
 		return -ENOMEM;
 	}
 
 	for (i = 0; i < nr_grefs; i++)
-		phys_addrs[i] = arbitrary_virt_to_machine(ptes[i]).maddr;
+		info->phys_addrs[i] =
+			arbitrary_virt_to_machine(info->ptes[i]).maddr;
 
 	err = __xenbus_map_ring(dev, gnt_refs, nr_grefs, node->handles,
-				phys_addrs,
-				GNTMAP_host_map | GNTMAP_contains_pte,
+				info, GNTMAP_host_map | GNTMAP_contains_pte,
 				&leaked);
 	if (err)
 		goto failed;
@@ -722,6 +726,8 @@ static int xenbus_map_ring_valloc_pv(struct xenbus_device *dev,
 	spin_unlock(&xenbus_valloc_lock);
 
 	*vaddr = area->addr;
+	info->node = NULL;
+
 	return 0;
 
 failed:
@@ -730,11 +736,10 @@ static int xenbus_map_ring_valloc_pv(struct xenbus_device *dev,
 	else
 		pr_alert("leaking VM area %p size %u page(s)", area, nr_grefs);
 
-	kfree(node);
 	return err;
 }
 
-static int xenbus_unmap_ring_vfree_pv(struct xenbus_device *dev, void *vaddr)
+static int xenbus_unmap_ring_pv(struct xenbus_device *dev, void *vaddr)
 {
 	struct xenbus_map_node *node;
 	struct gnttab_unmap_grant_ref unmap[XENBUS_MAX_RING_GRANTS];
@@ -798,12 +803,12 @@ static int xenbus_unmap_ring_vfree_pv(struct xenbus_device *dev, void *vaddr)
 }
 
 static const struct xenbus_ring_ops ring_ops_pv = {
-	.map = xenbus_map_ring_valloc_pv,
-	.unmap = xenbus_unmap_ring_vfree_pv,
+	.map = xenbus_map_ring_pv,
+	.unmap = xenbus_unmap_ring_pv,
 };
 #endif
 
-struct unmap_ring_vfree_hvm
+struct unmap_ring_hvm
 {
 	unsigned int idx;
 	unsigned long addrs[XENBUS_MAX_RING_GRANTS];
@@ -814,19 +819,19 @@ static void xenbus_unmap_ring_setup_grant_hvm(unsigned long gfn,
 					      unsigned int len,
 					      void *data)
 {
-	struct unmap_ring_vfree_hvm *info = data;
+	struct unmap_ring_hvm *info = data;
 
 	info->addrs[info->idx] = (unsigned long)gfn_to_virt(gfn);
 
 	info->idx++;
 }
 
-static int xenbus_unmap_ring_vfree_hvm(struct xenbus_device *dev, void *vaddr)
+static int xenbus_unmap_ring_hvm(struct xenbus_device *dev, void *vaddr)
 {
 	int rv;
 	struct xenbus_map_node *node;
 	void *addr;
-	struct unmap_ring_vfree_hvm info = {
+	struct unmap_ring_hvm info = {
 		.idx = 0,
 	};
 	unsigned int nr_pages;
@@ -887,8 +892,8 @@ enum xenbus_state xenbus_read_driver_state(const char *path)
 EXPORT_SYMBOL_GPL(xenbus_read_driver_state);
 
 static const struct xenbus_ring_ops ring_ops_hvm = {
-	.map = xenbus_map_ring_valloc_hvm,
-	.unmap = xenbus_unmap_ring_vfree_hvm,
+	.map = xenbus_map_ring_hvm,
+	.unmap = xenbus_unmap_ring_hvm,
 };
 
 void __init xenbus_ring_ops_init(void)
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Wed Jul 01 12:26:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jul 2020 12:26:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jqboJ-0006C7-Tn; Wed, 01 Jul 2020 12:25:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DLP9=AM=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1jqboI-0006C2-FT
 for xen-devel@lists.xenproject.org; Wed, 01 Jul 2020 12:25:54 +0000
X-Inumbo-ID: 00dcd226-bb96-11ea-8496-bc764e2007e4
Received: from mail-lf1-x144.google.com (unknown [2a00:1450:4864:20::144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 00dcd226-bb96-11ea-8496-bc764e2007e4;
 Wed, 01 Jul 2020 12:25:53 +0000 (UTC)
Received: by mail-lf1-x144.google.com with SMTP id g2so13462658lfb.0
 for <xen-devel@lists.xenproject.org>; Wed, 01 Jul 2020 05:25:52 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc:content-transfer-encoding;
 bh=0m3xHATUmn66K+zGcSIW2tRSRee6EDqqrd59PD+igzs=;
 b=PjKAczU26QU5qQh8txybYrwbsgNlTcFxXVNvV2s7uVD130rbZTYW8kUMsm3AYPexro
 3NNQxjCo96sHuf3Mm/ni1Qas99BmC0esd4qmZ9gMixJFArpMnL8cmOz8KORq+i5ollhu
 rZ3Q8ho5jXxsM4pTgZOiLTTiC1ZqFgnoPvo0USzkmdawVDsgRF1eSDZzsXm6mbmc6nqM
 5O/1pMB4scjw7EZrwYANpLiH2+9mK0ZaRSbMZBarl/H49l8BrD/rGQdiK34xVI5lltDk
 OVr50Ii5jaFKD+T2vmmaIDY2ZuJUl40XrEeSPAe8RW7aEq1OUW9dIyeSyytCUNfntIWd
 OUlQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc:content-transfer-encoding;
 bh=0m3xHATUmn66K+zGcSIW2tRSRee6EDqqrd59PD+igzs=;
 b=LJiNdAo6FRt3FuCPvFCJf1yYAownlrSwux7I/y/YLpL3zzfh34fl2NQ6FDNHBsqo1K
 BVGLskSQO8V3phVNHhz845tnvZwiCd7IUiT2ZMX0uS4zP8qJwRwszMiw1KS/QSWPnH5s
 dK0zKl6UMEqCr0zXPiocFDg5qOq1Edua9K447dHpZ4UhEAVosPkSgHcCfBxIdq/byglV
 5jf49srsKDi9EIx4I36+f0yssGHjVqxp9T5iJ/3Y1LrUjqzm+vwpAZI7k4nRY8kfkWIl
 PdWiy08TPrmEwu4x9ZfSvdUSWA6oB0UkgN0l1gd//oyJt2ggQmdHCiS8aBe1oFdCfEg5
 eohg==
X-Gm-Message-State: AOAM5318l63J5/DgHtx7YPZAtnRAZrqsTKKLEtaP6AI5OXSCRYTzyYc2
 cH0eR0YVpVUd77wolRIfQjdvRs+E3IbYjmPPww0=
X-Google-Smtp-Source: ABdhPJwBH7qgS8+hUi+f8eycQWrSoL8wQ6PEdPQEmNHFNy2vttrvb/j/eXLelv3zW3gKBhGI7T9tHpZJvBVEz1XJvJc=
X-Received: by 2002:a05:6512:691:: with SMTP id
 t17mr1091976lfe.44.1593606351678; 
 Wed, 01 Jul 2020 05:25:51 -0700 (PDT)
MIME-Version: 1.0
References: <20200624121841.17971-1-paul@xen.org>
 <20200624121841.17971-3-paul@xen.org>
 <33e594dd-dbfa-7c57-1cf5-0852e8fc8e1d@redhat.com>
 <000701d64ef5$6568f660$303ae320$@xen.org>
 <9e591254-d215-d5af-38d2-fd5b65f84a43@redhat.com>
 <000801d64f75$c604f570$520ee050$@xen.org>
In-Reply-To: <000801d64f75$c604f570$520ee050$@xen.org>
From: Jason Andryuk <jandryuk@gmail.com>
Date: Wed, 1 Jul 2020 08:25:40 -0400
Message-ID: <CAKf6xpvNTVqK263pdSARyoWnzP8g9SRoSqvhnLLwyYadjR1ChQ@mail.gmail.com>
Subject: Re: [PATCH 2/2] xen: cleanup unrealized flash devices
To: Paul Durrant <paul@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Eduardo Habkost <ehabkost@redhat.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Paul Durrant <pdurrant@amazon.com>,
 QEMU <qemu-devel@nongnu.org>, Paolo Bonzini <pbonzini@redhat.com>,
 xen-devel <xen-devel@lists.xenproject.org>,
 =?UTF-8?Q?Philippe_Mathieu=2DDaud=C3=A9?= <philmd@redhat.com>,
 Richard Henderson <rth@twiddle.net>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Wed, Jul 1, 2020 at 3:03 AM Paul Durrant <xadimgnik@gmail.com> wrote:
>
> > -----Original Message-----
> > From: Philippe Mathieu-Daudé <philmd@redhat.com>
> > Sent: 30 June 2020 18:27
> > To: paul@xen.org; xen-devel@lists.xenproject.org; qemu-devel@nongnu.org
> > Cc: 'Eduardo Habkost' <ehabkost@redhat.com>; 'Michael S. Tsirkin' <mst@redhat.com>; 'Paul Durrant'
> > <pdurrant@amazon.com>; 'Jason Andryuk' <jandryuk@gmail.com>; 'Paolo Bonzini' <pbonzini@redhat.com>;
> > 'Richard Henderson' <rth@twiddle.net>
> > Subject: Re: [PATCH 2/2] xen: cleanup unrealized flash devices
> >
> > On 6/30/20 5:44 PM, Paul Durrant wrote:
> > >> -----Original Message-----
> > >> From: Philippe Mathieu-Daudé <philmd@redhat.com>
> > >> Sent: 30 June 2020 16:26
> > >> To: Paul Durrant <paul@xen.org>; xen-devel@lists.xenproject.org; qemu-devel@nongnu.org
> > >> Cc: Eduardo Habkost <ehabkost@redhat.com>; Michael S. Tsirkin <mst@redhat.com>; Paul Durrant
> > >> <pdurrant@amazon.com>; Jason Andryuk <jandryuk@gmail.com>; Paolo Bonzini <pbonzini@redhat.com>;
> > >> Richard Henderson <rth@twiddle.net>
> > >> Subject: Re: [PATCH 2/2] xen: cleanup unrealized flash devices
> > >>
> > >> On 6/24/20 2:18 PM, Paul Durrant wrote:
> > >>> From: Paul Durrant <pdurrant@amazon.com>
> > >>>
> > >>> The generic pc_machine_initfn() calls pc_system_flash_create() which creates
> > >>> 'system.flash0' and 'system.flash1' devices. These devices are then realized
> > >>> by pc_system_flash_map() which is called from pc_system_firmware_init() which
> > >>> itself is called via pc_memory_init(). The latter however is not called when
> > >>> xen_enable() is true and hence the following assertion fails:
> > >>>
> > >>> qemu-system-i386: hw/core/qdev.c:439: qdev_assert_realized_properly:
> > >>> Assertion `dev->realized' failed
> > >>>
> > >>> These flash devices are unneeded when using Xen so this patch avoids the
> > >>> assertion by simply removing them using pc_system_flash_cleanup_unused().
> > >>>
> > >>> Reported-by: Jason Andryuk <jandryuk@gmail.com>
> > >>> Fixes: ebc29e1beab0 ("pc: Support firmware configuration with -blockdev")
> > >>> Signed-off-by: Paul Durrant <pdurrant@amazon.com>
> > >>> Tested-by: Jason Andryuk <jandryuk@gmail.com>
> > >>> ---
> > >>> Cc: Paolo Bonzini <pbonzini@redhat.com>
> > >>> Cc: Richard Henderson <rth@twiddle.net>
> > >>> Cc: Eduardo Habkost <ehabkost@redhat.com>
> > >>> Cc: "Michael S. Tsirkin" <mst@redhat.com>
> > >>> Cc: Marcel Apfelbaum <marcel.apfelbaum@gmail.com>
> > >>> ---
> > >>>  hw/i386/pc_piix.c    | 9 ++++++---
> > >>>  hw/i386/pc_sysfw.c   | 2 +-
> > >>>  include/hw/i386/pc.h | 1 +
> > >>>  3 files changed, 8 insertions(+), 4 deletions(-)
> > >>>
> > >>> diff --git a/hw/i386/pc_piix.c b/hw/i386/pc_piix.c
> > >>> index 1497d0e4ae..977d40afb8 100644
> > >>> --- a/hw/i386/pc_piix.c
> > >>> +++ b/hw/i386/pc_piix.c
> > >>> @@ -186,9 +186,12 @@ static void pc_init1(MachineState *machine,
> > >>>      if (!xen_enabled()) {
> > >>>          pc_memory_init(pcms, system_memory,
> > >>>                         rom_memory, &ram_memory);
> > >>> -    } else if (machine->kernel_filename != NULL) {
> > >>> -        /* For xen HVM direct kernel boot, load linux here */
> > >>> -        xen_load_linux(pcms);
> > >>> +    } else {
> > >>> +        pc_system_flash_cleanup_unused(pcms);
> > >>
> > >> TIL pc_system_flash_cleanup_unused().
> > >>
> > >> What about restricting at the source?
> > >>
> > >
> > > And leave the devices in place? They are not relevant for Xen, so why not clean up?
> >
> > No, I meant to not create them in the first place, instead of
> > create+destroy.
> >
> > Anyway what you did works, so I don't have any problem.
>
> IIUC Jason originally tried restricting creation but encountered a problem because xen_enabled() would always return false at that point, because machine creation occurs before accelerators are initialized.

Correct.  Quoting my previous email:
"""
Removing the call to pc_system_flash_create() from pc_machine_initfn()
lets QEMU startup and run a Xen HVM again.  xen_enabled() doesn't work
there since accelerators have not been initialized yet, I guess?
"""

If you want to remove the creation in the first place, then I have two
questions.  Why does pc_system_flash_create()/pc_pflash_create() get
called so early creating the pflash devices?  Why aren't they just
created as needed in pc_system_flash_map()?

Regards,
Jason

>   Paul
>
> >
> > Reviewed-by: Philippe Mathieu-Daud=C3=A9 <philmd@redhat.com>
> >
> > >
> > >   Paul
> > >
> > >> -- >8 --
> > >> --- a/hw/i386/pc.c
> > >> +++ b/hw/i386/pc.c
> > >> @@ -1004,24 +1004,26 @@ void pc_memory_init(PCMachineState *pcms,
> > >>                                      &machine->device_memory->mr);
> > >>      }
> > >>
> > >> -    /* Initialize PC system firmware */
> > >> -    pc_system_firmware_init(pcms, rom_memory);
> > >> -
> > >> -    option_rom_mr = g_malloc(sizeof(*option_rom_mr));
> > >> -    memory_region_init_ram(option_rom_mr, NULL, "pc.rom", PC_ROM_SIZE,
> > >> -                           &error_fatal);
> > >> -    if (pcmc->pci_enabled) {
> > >> -        memory_region_set_readonly(option_rom_mr, true);
> > >> -    }
> > >> -    memory_region_add_subregion_overlap(rom_memory,
> > >> -                                        PC_ROM_MIN_VGA,
> > >> -                                        option_rom_mr,
> > >> -                                        1);
> > >> -
> > >>      fw_cfg = fw_cfg_arch_create(machine,
> > >>                                  x86ms->boot_cpus, x86ms->apic_id_limit);
> > >>
> > >> -    rom_set_fw(fw_cfg);
> > >> +    /* Initialize PC system firmware */
> > >> +    if (!xen_enabled()) {
> > >> +        pc_system_firmware_init(pcms, rom_memory);
> > >> +
> > >> +        option_rom_mr = g_malloc(sizeof(*option_rom_mr));
> > >> +        memory_region_init_ram(option_rom_mr, NULL, "pc.rom", PC_ROM_SIZE,
> > >> +                               &error_fatal);
> > >> +        if (pcmc->pci_enabled) {
> > >> +            memory_region_set_readonly(option_rom_mr, true);
> > >> +        }
> > >> +        memory_region_add_subregion_overlap(rom_memory,
> > >> +                                            PC_ROM_MIN_VGA,
> > >> +                                            option_rom_mr,
> > >> +                                            1);
> > >> +
> > >> +        rom_set_fw(fw_cfg);
> > >> +    }
> > >>
> > >>      if (pcmc->has_reserved_memory && machine->device_memory->base) {
> > >>          uint64_t *val = g_malloc(sizeof(*val));
> > >> ---
> > >>
> > >>> +        if (machine->kernel_filename != NULL) {
> > >>> +            /* For xen HVM direct kernel boot, load linux here */
> > >>> +            xen_load_linux(pcms);
> > >>> +        }
> > >>>      }
> > >>>
> > >>>      gsi_state = pc_gsi_create(&x86ms->gsi, pcmc->pci_enabled);
> > >>> diff --git a/hw/i386/pc_sysfw.c b/hw/i386/pc_sysfw.c
> > >>> index ec2a3b3e7e..0ff47a4b59 100644
> > >>> --- a/hw/i386/pc_sysfw.c
> > >>> +++ b/hw/i386/pc_sysfw.c
> > >>> @@ -108,7 +108,7 @@ void pc_system_flash_create(PCMachineState *pcms)
> > >>>      }
> > >>>  }
> > >>>
> > >>> -static void pc_system_flash_cleanup_unused(PCMachineState *pcms)
> > >>> +void pc_system_flash_cleanup_unused(PCMachineState *pcms)
> > >>>  {
> > >>>      char *prop_name;
> > >>>      int i;
> > >>> diff --git a/include/hw/i386/pc.h b/include/hw/i386/pc.h
> > >>> index e6135c34d6..497f2b7ab7 100644
> > >>> --- a/include/hw/i386/pc.h
> > >>> +++ b/include/hw/i386/pc.h
> > >>> @@ -187,6 +187,7 @@ int cmos_get_fd_drive_type(FloppyDriveType fd0);
> > >>>
> > >>>  /* pc_sysfw.c */
> > >>>  void pc_system_flash_create(PCMachineState *pcms);
> > >>> +void pc_system_flash_cleanup_unused(PCMachineState *pcms);
> > >>>  void pc_system_firmware_init(PCMachineState *pcms, MemoryRegion *rom_memory);
> > >>>
> > >>>  /* acpi-build.c */
> > >>>
> > >
> > >
>
>


From xen-devel-bounces@lists.xenproject.org Wed Jul 01 12:26:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jul 2020 12:26:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jqbpC-0006GD-BH; Wed, 01 Jul 2020 12:26:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=LFmw=AM=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jqbpB-0006G7-FI
 for xen-devel@lists.xenproject.org; Wed, 01 Jul 2020 12:26:49 +0000
X-Inumbo-ID: 226b6db2-bb96-11ea-bca7-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 226b6db2-bb96-11ea-bca7-bc764e2007e4;
 Wed, 01 Jul 2020 12:26:49 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 3B361ADE2;
 Wed,  1 Jul 2020 12:26:48 +0000 (UTC)
Subject: Re: [PATCH for-4.14] x86/spec-ctrl: Protect against CALL/JMP
 straight-line speculation
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <20200701115842.18583-1-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <41b49d79-e0fa-161a-bb27-a9a2ccf361f5@suse.com>
Date: Wed, 1 Jul 2020 14:26:49 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.9.0
MIME-Version: 1.0
In-Reply-To: <20200701115842.18583-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Paul Durrant <paul@xen.org>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 01.07.2020 13:58, Andrew Cooper wrote:
> Some x86 CPUs speculatively execute beyond indirect CALL/JMP instructions.
> 
> With CONFIG_INDIRECT_THUNK / Retpolines, indirect CALL/JMP instructions are
> converted to direct CALL/JMP's to __x86_indirect_thunk_REG(), leaving just a
> handful of indirect JMPs implementing those stubs.
> 
> There is no architectural execution beyond an indirect JMP, so use INT3 as
> recommended by vendors to halt speculative execution.  This is shorter than
> LFENCE (which would also work fine), but also shows up in logs if we do
> unexpectedly execute them.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>
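[For reference, the stubs discussed above are the per-register indirect-branch thunks; with the INT3 hardening the patch describes, each one looks roughly like the following sketch (x86-64 AT&T syntax; illustrative only, not Xen's exact source):

```
/* Indirect CALL/JMP instructions are rewritten into direct
 * CALL/JMPs targeting a stub of this shape, one per register. */
__x86_indirect_thunk_rax:
        jmp     *%rax   /* architectural execution never passes this JMP */
        int3            /* halts straight-line speculation; raises #BP
                           (and hence shows up in logs) if ever executed */
```

INT3 is a single byte, versus three for LFENCE, and its #BP trap is what makes any unexpected execution of the byte visible.]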


From xen-devel-bounces@lists.xenproject.org Wed Jul 01 12:40:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jul 2020 12:40:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jqc2U-00083l-1L; Wed, 01 Jul 2020 12:40:34 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=6/Ob=AM=redhat.com=philmd@srs-us1.protection.inumbo.net>)
 id 1jqc2S-00083g-H3
 for xen-devel@lists.xenproject.org; Wed, 01 Jul 2020 12:40:32 +0000
X-Inumbo-ID: 0ce7dd02-bb98-11ea-bb8b-bc764e2007e4
Received: from us-smtp-delivery-1.mimecast.com (unknown [207.211.31.81])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 0ce7dd02-bb98-11ea-bb8b-bc764e2007e4;
 Wed, 01 Jul 2020 12:40:31 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
 s=mimecast20190719; t=1593607231;
 h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
 content-transfer-encoding:content-transfer-encoding:
 in-reply-to:in-reply-to:references:references:autocrypt:autocrypt;
 bh=yAWp/9Pq6AKPKidFbwHNBmMZv7ojh/uPH80e3Rw7R88=;
 b=a4CKwk9sEdFq4aX3IvmCIrtFxRMDgN35kb/4T6KrxsrWktCGlvddwoVvyTO5sKd2W0p3ug
 D2M0XvrExBi3iwIZS2t9xx/RDMh5JwmVUGckT7nxpN+tnbYoggW6kztSAK7tn/ZWxlj1vL
 y+v9IkByhHLcGQYlv6ZUycESaXoJxoA=
Received: from mail-ed1-f70.google.com (mail-ed1-f70.google.com
 [209.85.208.70]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-123-BSKv4bbiNaSjusmN2iXEDQ-1; Wed, 01 Jul 2020 08:40:29 -0400
X-MC-Unique: BSKv4bbiNaSjusmN2iXEDQ-1
Received: by mail-ed1-f70.google.com with SMTP id y66so12782222ede.19
 for <xen-devel@lists.xenproject.org>; Wed, 01 Jul 2020 05:40:29 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:subject:to:cc:references:from:autocrypt
 :message-id:date:user-agent:mime-version:in-reply-to
 :content-language:content-transfer-encoding;
 bh=yAWp/9Pq6AKPKidFbwHNBmMZv7ojh/uPH80e3Rw7R88=;
 b=ZbnhvNKpSlGUNz7hEMh2AJbFVct3x2JMpYtBjYcEBQPNHSyHZ4qP1uerIrsG7JB+LI
 LbXxdN8NUrObAB3xfvIWdX7kICP+X1xjIwJa1ctpnsKIXI+hRv08QLiF6Bd60KpZZiOJ
 MF+UtNU1tVW1mKAZX48KnxJ+39HRyYT8T2CfNB33WgF+UsYuR/+CycjwBTw2ee0tSap5
 HSze4NRmL1B4fxt/CVyzCfG7t2qVJuuHJjDnRFviEOat/t48W827NHI1PhNmTxVjv8Da
 Z0lfHESpc7rbZ6irn/q4MNP6DeIpkWBojPtyC5efnPyXBs9BVeDeS2Qg5hcn/jiVL+0p
 ZACg==
X-Gm-Message-State: AOAM530KlJkPvuYMlU4IFVTIHq36/N1vVE7M3bLqy0Ms8UV4kuiuLCD1
 zsPRo0BmzLcotQSKXXAl9lMrZ3G15MAC/LT2/9gwnANmq2EDOKcLthAKE0hJSezfSYqNivxPVnf
 WcaMAQfWEVS7L6RoDT/OJiWoL0es=
X-Received: by 2002:a05:6402:318d:: with SMTP id
 di13mr17601382edb.172.1593607228109; 
 Wed, 01 Jul 2020 05:40:28 -0700 (PDT)
X-Google-Smtp-Source: ABdhPJwR/R0bOhxLg9lSHXY7+zJlUud1tTy2nsI1gQaaTDTIDmQttKWzv48lJk2M2DcbP6ZPbAJOtA==
X-Received: by 2002:a05:6402:318d:: with SMTP id
 di13mr17601369edb.172.1593607227890; 
 Wed, 01 Jul 2020 05:40:27 -0700 (PDT)
Received: from [192.168.1.40] (1.red-83-51-162.dynamicip.rima-tde.net.
 [83.51.162.1])
 by smtp.gmail.com with ESMTPSA id i7sm5975097eds.91.2020.07.01.05.40.26
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 01 Jul 2020 05:40:27 -0700 (PDT)
Subject: Re: [PATCH 2/2] xen: cleanup unrealized flash devices
To: Jason Andryuk <jandryuk@gmail.com>, Paul Durrant <paul@xen.org>
References: <20200624121841.17971-1-paul@xen.org>
 <20200624121841.17971-3-paul@xen.org>
 <33e594dd-dbfa-7c57-1cf5-0852e8fc8e1d@redhat.com>
 <000701d64ef5$6568f660$303ae320$@xen.org>
 <9e591254-d215-d5af-38d2-fd5b65f84a43@redhat.com>
 <000801d64f75$c604f570$520ee050$@xen.org>
 <CAKf6xpvNTVqK263pdSARyoWnzP8g9SRoSqvhnLLwyYadjR1ChQ@mail.gmail.com>
From: =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@redhat.com>
Autocrypt: addr=philmd@redhat.com; keydata=
 mQINBDXML8YBEADXCtUkDBKQvNsQA7sDpw6YLE/1tKHwm24A1au9Hfy/OFmkpzo+MD+dYc+7
 bvnqWAeGweq2SDq8zbzFZ1gJBd6+e5v1a/UrTxvwBk51yEkadrpRbi+r2bDpTJwXc/uEtYAB
 GvsTZMtiQVA4kRID1KCdgLa3zztPLCj5H1VZhqZsiGvXa/nMIlhvacRXdbgllPPJ72cLUkXf
 z1Zu4AkEKpccZaJspmLWGSzGu6UTZ7UfVeR2Hcc2KI9oZB1qthmZ1+PZyGZ/Dy+z+zklC0xl
 XIpQPmnfy9+/1hj1LzJ+pe3HzEodtlVA+rdttSvA6nmHKIt8Ul6b/h1DFTmUT1lN1WbAGxmg
 CH1O26cz5nTrzdjoqC/b8PpZiT0kO5MKKgiu5S4PRIxW2+RA4H9nq7nztNZ1Y39bDpzwE5Sp
 bDHzd5owmLxMLZAINtCtQuRbSOcMjZlg4zohA9TQP9krGIk+qTR+H4CV22sWldSkVtsoTaA2
 qNeSJhfHQY0TyQvFbqRsSNIe2gTDzzEQ8itsmdHHE/yzhcCVvlUzXhAT6pIN0OT+cdsTTfif
 MIcDboys92auTuJ7U+4jWF1+WUaJ8gDL69ThAsu7mGDBbm80P3vvUZ4fQM14NkxOnuGRrJxO
 qjWNJ2ZUxgyHAh5TCxMLKWZoL5hpnvx3dF3Ti9HW2dsUUWICSQARAQABtDJQaGlsaXBwZSBN
 YXRoaWV1LURhdWTDqSAoUGhpbCkgPHBoaWxtZEByZWRoYXQuY29tPokCVQQTAQgAPwIbDwYL
 CQgHAwIGFQgCCQoLBBYCAwECHgECF4AWIQSJweePYB7obIZ0lcuio/1u3q3A3gUCXsfWwAUJ
 KtymWgAKCRCio/1u3q3A3ircD/9Vjh3aFNJ3uF3hddeoFg1H038wZr/xi8/rX27M1Vj2j9VH
 0B8Olp4KUQw/hyO6kUxqkoojmzRpmzvlpZ0cUiZJo2bQIWnvScyHxFCv33kHe+YEIqoJlaQc
 JfKYlbCoubz+02E2A6bFD9+BvCY0LBbEj5POwyKGiDMjHKCGuzSuDRbCn0Mz4kCa7nFMF5Jv
 piC+JemRdiBd6102ThqgIsyGEBXuf1sy0QIVyXgaqr9O2b/0VoXpQId7yY7OJuYYxs7kQoXI
 6WzSMpmuXGkmfxOgbc/L6YbzB0JOriX0iRClxu4dEUg8Bs2pNnr6huY2Ft+qb41RzCJvvMyu
 gS32LfN0bTZ6Qm2A8ayMtUQgnwZDSO23OKgQWZVglGliY3ezHZ6lVwC24Vjkmq/2yBSLakZE
 6DZUjZzCW1nvtRK05ebyK6tofRsx8xB8pL/kcBb9nCuh70aLR+5cmE41X4O+MVJbwfP5s/RW
 9BFSL3qgXuXso/3XuWTQjJJGgKhB6xXjMmb1J4q/h5IuVV4juv1Fem9sfmyrh+Wi5V1IzKI7
 RPJ3KVb937eBgSENk53P0gUorwzUcO+ASEo3Z1cBKkJSPigDbeEjVfXQMzNt0oDRzpQqH2vp
 apo2jHnidWt8BsckuWZpxcZ9+/9obQ55DyVQHGiTN39hkETy3Emdnz1JVHTU0Q==
Message-ID: <07cc67e9-aeaa-1947-43db-c00716bead01@redhat.com>
Date: Wed, 1 Jul 2020 14:40:26 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.5.0
MIME-Version: 1.0
In-Reply-To: <CAKf6xpvNTVqK263pdSARyoWnzP8g9SRoSqvhnLLwyYadjR1ChQ@mail.gmail.com>
Content-Language: en-US
Authentication-Results: relay.mimecast.com;
 auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=philmd@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Eduardo Habkost <ehabkost@redhat.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Paul Durrant <pdurrant@amazon.com>,
 QEMU <qemu-devel@nongnu.org>, Paolo Bonzini <pbonzini@redhat.com>,
 xen-devel <xen-devel@lists.xenproject.org>,
 Richard Henderson <rth@twiddle.net>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 7/1/20 2:25 PM, Jason Andryuk wrote:
> On Wed, Jul 1, 2020 at 3:03 AM Paul Durrant <xadimgnik@gmail.com> wrote:
>>
>>> -----Original Message-----
>>> From: Philippe Mathieu-Daudé <philmd@redhat.com>
>>> Sent: 30 June 2020 18:27
>>> To: paul@xen.org; xen-devel@lists.xenproject.org; qemu-devel@nongnu.org
>>> Cc: 'Eduardo Habkost' <ehabkost@redhat.com>; 'Michael S. Tsirkin' <mst@redhat.com>; 'Paul Durrant'
>>> <pdurrant@amazon.com>; 'Jason Andryuk' <jandryuk@gmail.com>; 'Paolo Bonzini' <pbonzini@redhat.com>;
>>> 'Richard Henderson' <rth@twiddle.net>
>>> Subject: Re: [PATCH 2/2] xen: cleanup unrealized flash devices
>>>
>>> On 6/30/20 5:44 PM, Paul Durrant wrote:
>>>>> -----Original Message-----
>>>>> From: Philippe Mathieu-Daudé <philmd@redhat.com>
>>>>> Sent: 30 June 2020 16:26
>>>>> To: Paul Durrant <paul@xen.org>; xen-devel@lists.xenproject.org; qemu-devel@nongnu.org
>>>>> Cc: Eduardo Habkost <ehabkost@redhat.com>; Michael S. Tsirkin <mst@redhat.com>; Paul Durrant
>>>>> <pdurrant@amazon.com>; Jason Andryuk <jandryuk@gmail.com>; Paolo Bonzini <pbonzini@redhat.com>;
>>>>> Richard Henderson <rth@twiddle.net>
>>>>> Subject: Re: [PATCH 2/2] xen: cleanup unrealized flash devices
>>>>>
>>>>> On 6/24/20 2:18 PM, Paul Durrant wrote:
>>>>>> From: Paul Durrant <pdurrant@amazon.com>
>>>>>>
>>>>>> The generic pc_machine_initfn() calls pc_system_flash_create() which creates
>>>>>> 'system.flash0' and 'system.flash1' devices. These devices are then realized
>>>>>> by pc_system_flash_map() which is called from pc_system_firmware_init() which
>>>>>> itself is called via pc_memory_init(). The latter however is not called when
>>>>>> xen_enable() is true and hence the following assertion fails:
>>>>>>
>>>>>> qemu-system-i386: hw/core/qdev.c:439: qdev_assert_realized_properly:
>>>>>> Assertion `dev->realized' failed
>>>>>>
>>>>>> These flash devices are unneeded when using Xen so this patch avoids the
>>>>>> assertion by simply removing them using pc_system_flash_cleanup_unused().
>>>>>>
>>>>>> Reported-by: Jason Andryuk <jandryuk@gmail.com>
>>>>>> Fixes: ebc29e1beab0 ("pc: Support firmware configuration with -blockdev")
>>>>>> Signed-off-by: Paul Durrant <pdurrant@amazon.com>
>>>>>> Tested-by: Jason Andryuk <jandryuk@gmail.com>
>>>>>> ---
>>>>>> Cc: Paolo Bonzini <pbonzini@redhat.com>
>>>>>> Cc: Richard Henderson <rth@twiddle.net>
>>>>>> Cc: Eduardo Habkost <ehabkost@redhat.com>
>>>>>> Cc: "Michael S. Tsirkin" <mst@redhat.com>
>>>>>> Cc: Marcel Apfelbaum <marcel.apfelbaum@gmail.com>
>>>>>> ---
>>>>>>  hw/i386/pc_piix.c    | 9 ++++++---
>>>>>>  hw/i386/pc_sysfw.c   | 2 +-
>>>>>>  include/hw/i386/pc.h | 1 +
>>>>>>  3 files changed, 8 insertions(+), 4 deletions(-)
>>>>>>
>>>>>> diff --git a/hw/i386/pc_piix.c b/hw/i386/pc_piix.c
>>>>>> index 1497d0e4ae..977d40afb8 100644
>>>>>> --- a/hw/i386/pc_piix.c
>>>>>> +++ b/hw/i386/pc_piix.c
>>>>>> @@ -186,9 +186,12 @@ static void pc_init1(MachineState *machine,
>>>>>>      if (!xen_enabled()) {
>>>>>>          pc_memory_init(pcms, system_memory,
>>>>>>                         rom_memory, &ram_memory);
>>>>>> -    } else if (machine->kernel_filename != NULL) {
>>>>>> -        /* For xen HVM direct kernel boot, load linux here */
>>>>>> -        xen_load_linux(pcms);
>>>>>> +    } else {
>>>>>> +        pc_system_flash_cleanup_unused(pcms);
>>>>>
>>>>> TIL pc_system_flash_cleanup_unused().
>>>>>
>>>>> What about restricting at the source?
>>>>>
>>>>
>>>> And leave the devices in place? They are not relevant for Xen, so why not clean up?
>>>
>>> No, I meant to not create them in the first place, instead of
>>> create+destroy.
>>>
>>> Anyway what you did works, so I don't have any problem.
>>
>> IIUC Jason originally tried restricting creation but encountered a problem because xen_enabled() would always return false at that point, because machine creation occurs before accelerators are initialized.
> 
> Correct.  Quoting my previous email:
> """
> Removing the call to pc_system_flash_create() from pc_machine_initfn()
> lets QEMU startup and run a Xen HVM again.  xen_enabled() doesn't work
> there since accelerators have not been initialized yet, I guess?

Ah, I missed that. You pointed at the bug here :)

I think pc_system_flash_create() shouldn't be called in init() but
realize().

> """
> 
> If you want to remove the creation in the first place, then I have two
> questions.  Why does pc_system_flash_create()/pc_pflash_create() get
> called so early creating the pflash devices?  Why aren't they just
> created as needed in pc_system_flash_map()?
> 
> Regards,
> Jason
> 
>>   Paul
>>
>>>
>>> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
>>>
>>>>
>>>>   Paul
>>>>
>>>>> -- >8 --
>>>>> --- a/hw/i386/pc.c
>>>>> +++ b/hw/i386/pc.c
>>>>> @@ -1004,24 +1004,26 @@ void pc_memory_init(PCMachineState *pcms,
>>>>>                                      &machine->device_memory->mr);
>>>>>      }
>>>>>
>>>>> -    /* Initialize PC system firmware */
>>>>> -    pc_system_firmware_init(pcms, rom_memory);
>>>>> -
>>>>> -    option_rom_mr = g_malloc(sizeof(*option_rom_mr));
>>>>> -    memory_region_init_ram(option_rom_mr, NULL, "pc.rom", PC_ROM_SIZE,
>>>>> -                           &error_fatal);
>>>>> -    if (pcmc->pci_enabled) {
>>>>> -        memory_region_set_readonly(option_rom_mr, true);
>>>>> -    }
>>>>> -    memory_region_add_subregion_overlap(rom_memory,
>>>>> -                                        PC_ROM_MIN_VGA,
>>>>> -                                        option_rom_mr,
>>>>> -                                        1);
>>>>> -
>>>>>      fw_cfg = fw_cfg_arch_create(machine,
>>>>>                                  x86ms->boot_cpus, x86ms->apic_id_limit);
>>>>>
>>>>> -    rom_set_fw(fw_cfg);
>>>>> +    /* Initialize PC system firmware */
>>>>> +    if (!xen_enabled()) {
>>>>> +        pc_system_firmware_init(pcms, rom_memory);
>>>>> +
>>>>> +        option_rom_mr = g_malloc(sizeof(*option_rom_mr));
>>>>> +        memory_region_init_ram(option_rom_mr, NULL, "pc.rom", PC_ROM_SIZE,
>>>>> +                               &error_fatal);
>>>>> +        if (pcmc->pci_enabled) {
>>>>> +            memory_region_set_readonly(option_rom_mr, true);
>>>>> +        }
>>>>> +        memory_region_add_subregion_overlap(rom_memory,
>>>>> +                                            PC_ROM_MIN_VGA,
>>>>> +                                            option_rom_mr,
>>>>> +                                            1);
>>>>> +
>>>>> +        rom_set_fw(fw_cfg);
>>>>> +    }
>>>>>
>>>>>      if (pcmc->has_reserved_memory && machine->device_memory->base) {
>>>>>          uint64_t *val = g_malloc(sizeof(*val));
>>>>> ---
>>>>>
>>>>>> +        if (machine->kernel_filename != NULL) {
>>>>>> +            /* For xen HVM direct kernel boot, load linux here */
>>>>>> +            xen_load_linux(pcms);
>>>>>> +        }
>>>>>>      }
>>>>>>
>>>>>>      gsi_state = pc_gsi_create(&x86ms->gsi, pcmc->pci_enabled);
>>>>>> diff --git a/hw/i386/pc_sysfw.c b/hw/i386/pc_sysfw.c
>>>>>> index ec2a3b3e7e..0ff47a4b59 100644
>>>>>> --- a/hw/i386/pc_sysfw.c
>>>>>> +++ b/hw/i386/pc_sysfw.c
>>>>>> @@ -108,7 +108,7 @@ void pc_system_flash_create(PCMachineState *pcms)
>>>>>>      }
>>>>>>  }
>>>>>>
>>>>>> -static void pc_system_flash_cleanup_unused(PCMachineState *pcms)
>>>>>> +void pc_system_flash_cleanup_unused(PCMachineState *pcms)
>>>>>>  {
>>>>>>      char *prop_name;
>>>>>>      int i;
>>>>>> diff --git a/include/hw/i386/pc.h b/include/hw/i386/pc.h
>>>>>> index e6135c34d6..497f2b7ab7 100644
>>>>>> --- a/include/hw/i386/pc.h
>>>>>> +++ b/include/hw/i386/pc.h
>>>>>> @@ -187,6 +187,7 @@ int cmos_get_fd_drive_type(FloppyDriveType fd0);
>>>>>>
>>>>>>  /* pc_sysfw.c */
>>>>>>  void pc_system_flash_create(PCMachineState *pcms);
>>>>>> +void pc_system_flash_cleanup_unused(PCMachineState *pcms);
>>>>>>  void pc_system_firmware_init(PCMachineState *pcms, MemoryRegion *rom_memory);
>>>>>>
>>>>>>  /* acpi-build.c */
>>>>>>
>>>>
>>>>
>>
>>
> 



From xen-devel-bounces@lists.xenproject.org Wed Jul 01 12:55:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jul 2020 12:55:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jqcGl-0000px-Ou; Wed, 01 Jul 2020 12:55:19 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=6/Ob=AM=redhat.com=philmd@srs-us1.protection.inumbo.net>)
 id 1jqcGk-0000pr-8c
 for xen-devel@lists.xenproject.org; Wed, 01 Jul 2020 12:55:18 +0000
X-Inumbo-ID: 1cf62a4e-bb9a-11ea-870f-12813bfff9fa
Received: from us-smtp-1.mimecast.com (unknown [205.139.110.120])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 1cf62a4e-bb9a-11ea-870f-12813bfff9fa;
 Wed, 01 Jul 2020 12:55:17 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
 s=mimecast20190719; t=1593608117;
 h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
 content-transfer-encoding:content-transfer-encoding:
 in-reply-to:in-reply-to:references:references:autocrypt:autocrypt;
 bh=rb7sOS8DHtRYUMEyNZNrLkoz4ICYvjE6JzCI6gwE2JU=;
 b=Fbpww934Sgs2ga1tuaCjZ7VecdsL6C90J0vHccxxlgECOGQ1jSg4e/aTx5JFcNopp1duM1
 bE6QGXrRzVB10h3E+mjHqHcdd5Exf62WLwmQiZvKJ8TX14XZEVddRBOid26MyX/++UaqCL
 Yh4dnfLKsYE/mhzG32iiCBf6D4EwA6k=
Received: from mail-ej1-f71.google.com (mail-ej1-f71.google.com
 [209.85.218.71]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-264-aAfnM7aaOoOVwm7iVIVPUg-1; Wed, 01 Jul 2020 08:55:13 -0400
X-MC-Unique: aAfnM7aaOoOVwm7iVIVPUg-1
Received: by mail-ej1-f71.google.com with SMTP id b14so15090616ejv.14
 for <xen-devel@lists.xenproject.org>; Wed, 01 Jul 2020 05:55:12 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:subject:from:to:cc:references:autocrypt
 :message-id:date:user-agent:mime-version:in-reply-to
 :content-language:content-transfer-encoding;
 bh=rb7sOS8DHtRYUMEyNZNrLkoz4ICYvjE6JzCI6gwE2JU=;
 b=cgJFDN+KpokuVDIIyJmSbMqnOrhfnei0buxoC89zFq2I10jWws71rUlqBOE8C3wPnS
 xaATgvTSYngEDAYdq3yTloCpcbnWeWIPVeFHpKQXbYmEFurY+fZMuDwL+U+6AWXtTfHC
 GsZlewQhhH0Y3rpngKDr7396hsZMjhTp91dQq96wceWuUy2SOR9FgzF26te8Gi2a4E06
 uB7TW35cTANhFrS/u9732whbmoNJI21BXNJT8INIr+NP1zWJfs3Lgt67fugWyJoX+unc
 wOeRtV0/t2rcT8jLcCPgfXRKfatc68FhqLlW19jY/oOjfKl0Z8aihfmsooaEwjbdaEH8
 C84w==
X-Gm-Message-State: AOAM533IVxkhLqu9VMWwBIdt0h7tLp02Tnpk6okdDrKRMlYgkeUDY7GW
 jbHthKkwD7Mxo6rj4Oag5i6vgRSdngCiusNm+HAbj9daqyKT8ajOzSCbdcrUrKVfaZsQUJSeDq7
 Kpo1cgzKCTrvVSrjMn6xiPPAdVtI=
X-Received: by 2002:a17:907:10d4:: with SMTP id
 rv20mr7245241ejb.413.1593608111914; 
 Wed, 01 Jul 2020 05:55:11 -0700 (PDT)
X-Google-Smtp-Source: ABdhPJxrc1UwT7LYqI3an2+MrXCSqjgz2hXWdCORsWQzGgQw9DUznzUkPYFx9mGHc/f9Mjb2DeNfiw==
X-Received: by 2002:a17:907:10d4:: with SMTP id
 rv20mr7245220ejb.413.1593608111658; 
 Wed, 01 Jul 2020 05:55:11 -0700 (PDT)
Received: from [192.168.1.40] (1.red-83-51-162.dynamicip.rima-tde.net.
 [83.51.162.1])
 by smtp.gmail.com with ESMTPSA id l12sm7170357edj.6.2020.07.01.05.55.10
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 01 Jul 2020 05:55:11 -0700 (PDT)
Subject: Re: [PATCH 2/2] xen: cleanup unrealized flash devices
From: =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@redhat.com>
To: Jason Andryuk <jandryuk@gmail.com>, Paul Durrant <paul@xen.org>
References: <20200624121841.17971-1-paul@xen.org>
 <20200624121841.17971-3-paul@xen.org>
 <33e594dd-dbfa-7c57-1cf5-0852e8fc8e1d@redhat.com>
 <000701d64ef5$6568f660$303ae320$@xen.org>
 <9e591254-d215-d5af-38d2-fd5b65f84a43@redhat.com>
 <000801d64f75$c604f570$520ee050$@xen.org>
 <CAKf6xpvNTVqK263pdSARyoWnzP8g9SRoSqvhnLLwyYadjR1ChQ@mail.gmail.com>
 <07cc67e9-aeaa-1947-43db-c00716bead01@redhat.com>
Autocrypt: addr=philmd@redhat.com; keydata=
 mQINBDXML8YBEADXCtUkDBKQvNsQA7sDpw6YLE/1tKHwm24A1au9Hfy/OFmkpzo+MD+dYc+7
 bvnqWAeGweq2SDq8zbzFZ1gJBd6+e5v1a/UrTxvwBk51yEkadrpRbi+r2bDpTJwXc/uEtYAB
 GvsTZMtiQVA4kRID1KCdgLa3zztPLCj5H1VZhqZsiGvXa/nMIlhvacRXdbgllPPJ72cLUkXf
 z1Zu4AkEKpccZaJspmLWGSzGu6UTZ7UfVeR2Hcc2KI9oZB1qthmZ1+PZyGZ/Dy+z+zklC0xl
 XIpQPmnfy9+/1hj1LzJ+pe3HzEodtlVA+rdttSvA6nmHKIt8Ul6b/h1DFTmUT1lN1WbAGxmg
 CH1O26cz5nTrzdjoqC/b8PpZiT0kO5MKKgiu5S4PRIxW2+RA4H9nq7nztNZ1Y39bDpzwE5Sp
 bDHzd5owmLxMLZAINtCtQuRbSOcMjZlg4zohA9TQP9krGIk+qTR+H4CV22sWldSkVtsoTaA2
 qNeSJhfHQY0TyQvFbqRsSNIe2gTDzzEQ8itsmdHHE/yzhcCVvlUzXhAT6pIN0OT+cdsTTfif
 MIcDboys92auTuJ7U+4jWF1+WUaJ8gDL69ThAsu7mGDBbm80P3vvUZ4fQM14NkxOnuGRrJxO
 qjWNJ2ZUxgyHAh5TCxMLKWZoL5hpnvx3dF3Ti9HW2dsUUWICSQARAQABtDJQaGlsaXBwZSBN
 YXRoaWV1LURhdWTDqSAoUGhpbCkgPHBoaWxtZEByZWRoYXQuY29tPokCVQQTAQgAPwIbDwYL
 CQgHAwIGFQgCCQoLBBYCAwECHgECF4AWIQSJweePYB7obIZ0lcuio/1u3q3A3gUCXsfWwAUJ
 KtymWgAKCRCio/1u3q3A3ircD/9Vjh3aFNJ3uF3hddeoFg1H038wZr/xi8/rX27M1Vj2j9VH
 0B8Olp4KUQw/hyO6kUxqkoojmzRpmzvlpZ0cUiZJo2bQIWnvScyHxFCv33kHe+YEIqoJlaQc
 JfKYlbCoubz+02E2A6bFD9+BvCY0LBbEj5POwyKGiDMjHKCGuzSuDRbCn0Mz4kCa7nFMF5Jv
 piC+JemRdiBd6102ThqgIsyGEBXuf1sy0QIVyXgaqr9O2b/0VoXpQId7yY7OJuYYxs7kQoXI
 6WzSMpmuXGkmfxOgbc/L6YbzB0JOriX0iRClxu4dEUg8Bs2pNnr6huY2Ft+qb41RzCJvvMyu
 gS32LfN0bTZ6Qm2A8ayMtUQgnwZDSO23OKgQWZVglGliY3ezHZ6lVwC24Vjkmq/2yBSLakZE
 6DZUjZzCW1nvtRK05ebyK6tofRsx8xB8pL/kcBb9nCuh70aLR+5cmE41X4O+MVJbwfP5s/RW
 9BFSL3qgXuXso/3XuWTQjJJGgKhB6xXjMmb1J4q/h5IuVV4juv1Fem9sfmyrh+Wi5V1IzKI7
 RPJ3KVb937eBgSENk53P0gUorwzUcO+ASEo3Z1cBKkJSPigDbeEjVfXQMzNt0oDRzpQqH2vp
 apo2jHnidWt8BsckuWZpxcZ9+/9obQ55DyVQHGiTN39hkETy3Emdnz1JVHTU0Q==
Message-ID: <5c00f6a5-3f86-258e-999f-956eef825d14@redhat.com>
Date: Wed, 1 Jul 2020 14:55:09 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.5.0
MIME-Version: 1.0
In-Reply-To: <07cc67e9-aeaa-1947-43db-c00716bead01@redhat.com>
Content-Language: en-US
Authentication-Results: relay.mimecast.com;
 auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=philmd@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Eduardo Habkost <ehabkost@redhat.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Paul Durrant <pdurrant@amazon.com>,
 QEMU <qemu-devel@nongnu.org>, Paolo Bonzini <pbonzini@redhat.com>,
 xen-devel <xen-devel@lists.xenproject.org>,
 Richard Henderson <rth@twiddle.net>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 7/1/20 2:40 PM, Philippe Mathieu-Daudé wrote:
> On 7/1/20 2:25 PM, Jason Andryuk wrote:
>> On Wed, Jul 1, 2020 at 3:03 AM Paul Durrant <xadimgnik@gmail.com> wrote:
>>>
>>>> -----Original Message-----
>>>> From: Philippe Mathieu-Daudé <philmd@redhat.com>
>>>> Sent: 30 June 2020 18:27
>>>> To: paul@xen.org; xen-devel@lists.xenproject.org; qemu-devel@nongnu.org
>>>> Cc: 'Eduardo Habkost' <ehabkost@redhat.com>; 'Michael S. Tsirkin' <mst@redhat.com>; 'Paul Durrant'
>>>> <pdurrant@amazon.com>; 'Jason Andryuk' <jandryuk@gmail.com>; 'Paolo Bonzini' <pbonzini@redhat.com>;
>>>> 'Richard Henderson' <rth@twiddle.net>
>>>> Subject: Re: [PATCH 2/2] xen: cleanup unrealized flash devices
>>>>
>>>> On 6/30/20 5:44 PM, Paul Durrant wrote:
>>>>>> -----Original Message-----
>>>>>> From: Philippe Mathieu-Daudé <philmd@redhat.com>
>>>>>> Sent: 30 June 2020 16:26
>>>>>> To: Paul Durrant <paul@xen.org>; xen-devel@lists.xenproject.org; qemu-devel@nongnu.org
>>>>>> Cc: Eduardo Habkost <ehabkost@redhat.com>; Michael S. Tsirkin <mst@redhat.com>; Paul Durrant
>>>>>> <pdurrant@amazon.com>; Jason Andryuk <jandryuk@gmail.com>; Paolo Bonzini <pbonzini@redhat.com>;
>>>>>> Richard Henderson <rth@twiddle.net>
>>>>>> Subject: Re: [PATCH 2/2] xen: cleanup unrealized flash devices
>>>>>>
>>>>>> On 6/24/20 2:18 PM, Paul Durrant wrote:
>>>>>>> From: Paul Durrant <pdurrant@amazon.com>
>>>>>>>
>>>>>>> The generic pc_machine_initfn() calls pc_system_flash_create() which creates
>>>>>>> 'system.flash0' and 'system.flash1' devices. These devices are then realized
>>>>>>> by pc_system_flash_map() which is called from pc_system_firmware_init() which
>>>>>>> itself is called via pc_memory_init(). The latter however is not called when
>>>>>>> xen_enabled() is true and hence the following assertion fails:
>>>>>>>
>>>>>>> qemu-system-i386: hw/core/qdev.c:439: qdev_assert_realized_properly:
>>>>>>> Assertion `dev->realized' failed
>>>>>>>
>>>>>>> These flash devices are unneeded when using Xen so this patch avoids the
>>>>>>> assertion by simply removing them using pc_system_flash_cleanup_unused().
>>>>>>>
>>>>>>> Reported-by: Jason Andryuk <jandryuk@gmail.com>
>>>>>>> Fixes: ebc29e1beab0 ("pc: Support firmware configuration with -blockdev")
>>>>>>> Signed-off-by: Paul Durrant <pdurrant@amazon.com>
>>>>>>> Tested-by: Jason Andryuk <jandryuk@gmail.com>
>>>>>>> ---
>>>>>>> Cc: Paolo Bonzini <pbonzini@redhat.com>
>>>>>>> Cc: Richard Henderson <rth@twiddle.net>
>>>>>>> Cc: Eduardo Habkost <ehabkost@redhat.com>
>>>>>>> Cc: "Michael S. Tsirkin" <mst@redhat.com>
>>>>>>> Cc: Marcel Apfelbaum <marcel.apfelbaum@gmail.com>
>>>>>>> ---
>>>>>>>  hw/i386/pc_piix.c    | 9 ++++++---
>>>>>>>  hw/i386/pc_sysfw.c   | 2 +-
>>>>>>>  include/hw/i386/pc.h | 1 +
>>>>>>>  3 files changed, 8 insertions(+), 4 deletions(-)
>>>>>>>
>>>>>>> diff --git a/hw/i386/pc_piix.c b/hw/i386/pc_piix.c
>>>>>>> index 1497d0e4ae..977d40afb8 100644
>>>>>>> --- a/hw/i386/pc_piix.c
>>>>>>> +++ b/hw/i386/pc_piix.c
>>>>>>> @@ -186,9 +186,12 @@ static void pc_init1(MachineState *machine,
>>>>>>>      if (!xen_enabled()) {
>>>>>>>          pc_memory_init(pcms, system_memory,
>>>>>>>                         rom_memory, &ram_memory);
>>>>>>> -    } else if (machine->kernel_filename != NULL) {
>>>>>>> -        /* For xen HVM direct kernel boot, load linux here */
>>>>>>> -        xen_load_linux(pcms);
>>>>>>> +    } else {
>>>>>>> +        pc_system_flash_cleanup_unused(pcms);
>>>>>>
>>>>>> TIL pc_system_flash_cleanup_unused().
>>>>>>
>>>>>> What about restricting at the source?
>>>>>>
>>>>>
>>>>> And leave the devices in place? They are not relevant for Xen, so why not clean up?
>>>>
>>>> No, I meant to not create them in the first place, instead of
>>>> create+destroy.
>>>>
>>>> Anyway what you did works, so I don't have any problem.
>>>
>>> IIUC Jason originally tried restricting creation but encountered a problem because xen_enabled() would always return false at that point, because machine creation occurs before accelerators are initialized.
>>
>> Correct.  Quoting my previous email:
>> """
>> Removing the call to pc_system_flash_create() from pc_machine_initfn()
>> lets QEMU startup and run a Xen HVM again.  xen_enabled() doesn't work
>> there since accelerators have not been initialized yet, I guess?
> 
> Ah, I missed that. You pointed at the bug here :)
> 
> I think pc_system_flash_create() shouldn't be called in init() but
> realize().

Hmm this is a MachineClass, not qdev, so no realize().

In softmmu/vl.c we have:

4152     configure_accelerators(argv[0]);
....
4327     if (!xen_enabled()) { // so xen_enabled() is working
4328         /* On 32-bit hosts, QEMU is limited by virtual address space */
4329         if (ram_size > (2047 << 20) && HOST_LONG_BITS == 32) {
4330             error_report("at most 2047 MB RAM can be simulated");
4331             exit(1);
4332         }
4333     }
....
4348     machine_run_board_init(current_machine);

which calls in hw/core/machine.c:

1089 void machine_run_board_init(MachineState *machine)
1090 {
1091     MachineClass *machine_class = MACHINE_GET_CLASS(machine);
....
1138     machine_class->init(machine);
1139 }

         -> pc_machine_class_init()

This should come after:

         -> pc_machine_initfn()

            -> pc_system_flash_create(pcms)
> 
>> """
>>
>> If you want to remove the creation in the first place, then I have two
>> questions.  Why does pc_system_flash_create()/pc_pflash_create() get
>> called so early creating the pflash devices?  Why aren't they just
>> created as needed in pc_system_flash_map()?
>>
>> Regards,
>> Jason
>>
>>>   Paul
>>>
>>>>
>>>> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
>>>>
>>>>>
>>>>>   Paul
>>>>>
>>>>>> -- >8 --
>>>>>> --- a/hw/i386/pc.c
>>>>>> +++ b/hw/i386/pc.c
>>>>>> @@ -1004,24 +1004,26 @@ void pc_memory_init(PCMachineState *pcms,
>>>>>>                                      &machine->device_memory->mr);
>>>>>>      }
>>>>>>
>>>>>> -    /* Initialize PC system firmware */
>>>>>> -    pc_system_firmware_init(pcms, rom_memory);
>>>>>> -
>>>>>> -    option_rom_mr = g_malloc(sizeof(*option_rom_mr));
>>>>>> -    memory_region_init_ram(option_rom_mr, NULL, "pc.rom", PC_ROM_SIZE,
>>>>>> -                           &error_fatal);
>>>>>> -    if (pcmc->pci_enabled) {
>>>>>> -        memory_region_set_readonly(option_rom_mr, true);
>>>>>> -    }
>>>>>> -    memory_region_add_subregion_overlap(rom_memory,
>>>>>> -                                        PC_ROM_MIN_VGA,
>>>>>> -                                        option_rom_mr,
>>>>>> -                                        1);
>>>>>> -
>>>>>>      fw_cfg = fw_cfg_arch_create(machine,
>>>>>>                                  x86ms->boot_cpus, x86ms->apic_id_limit);
>>>>>>
>>>>>> -    rom_set_fw(fw_cfg);
>>>>>> +    /* Initialize PC system firmware */
>>>>>> +    if (!xen_enabled()) {
>>>>>> +        pc_system_firmware_init(pcms, rom_memory);
>>>>>> +
>>>>>> +        option_rom_mr = g_malloc(sizeof(*option_rom_mr));
>>>>>> +        memory_region_init_ram(option_rom_mr, NULL, "pc.rom", PC_ROM_SIZE,
>>>>>> +                               &error_fatal);
>>>>>> +        if (pcmc->pci_enabled) {
>>>>>> +            memory_region_set_readonly(option_rom_mr, true);
>>>>>> +        }
>>>>>> +        memory_region_add_subregion_overlap(rom_memory,
>>>>>> +                                            PC_ROM_MIN_VGA,
>>>>>> +                                            option_rom_mr,
>>>>>> +                                            1);
>>>>>> +
>>>>>> +        rom_set_fw(fw_cfg);
>>>>>> +    }
>>>>>>
>>>>>>      if (pcmc->has_reserved_memory && machine->device_memory->base) {
>>>>>>          uint64_t *val = g_malloc(sizeof(*val));
>>>>>> ---
>>>>>>
>>>>>>> +        if (machine->kernel_filename != NULL) {
>>>>>>> +            /* For xen HVM direct kernel boot, load linux here */
>>>>>>> +            xen_load_linux(pcms);
>>>>>>> +        }
>>>>>>>      }
>>>>>>>
>>>>>>>      gsi_state = pc_gsi_create(&x86ms->gsi, pcmc->pci_enabled);
>>>>>>> diff --git a/hw/i386/pc_sysfw.c b/hw/i386/pc_sysfw.c
>>>>>>> index ec2a3b3e7e..0ff47a4b59 100644
>>>>>>> --- a/hw/i386/pc_sysfw.c
>>>>>>> +++ b/hw/i386/pc_sysfw.c
>>>>>>> @@ -108,7 +108,7 @@ void pc_system_flash_create(PCMachineState *pcms)
>>>>>>>      }
>>>>>>>  }
>>>>>>>
>>>>>>> -static void pc_system_flash_cleanup_unused(PCMachineState *pcms)
>>>>>>> +void pc_system_flash_cleanup_unused(PCMachineState *pcms)
>>>>>>>  {
>>>>>>>      char *prop_name;
>>>>>>>      int i;
>>>>>>> diff --git a/include/hw/i386/pc.h b/include/hw/i386/pc.h
>>>>>>> index e6135c34d6..497f2b7ab7 100644
>>>>>>> --- a/include/hw/i386/pc.h
>>>>>>> +++ b/include/hw/i386/pc.h
>>>>>>> @@ -187,6 +187,7 @@ int cmos_get_fd_drive_type(FloppyDriveType fd0);
>>>>>>>
>>>>>>>  /* pc_sysfw.c */
>>>>>>>  void pc_system_flash_create(PCMachineState *pcms);
>>>>>>> +void pc_system_flash_cleanup_unused(PCMachineState *pcms);
>>>>>>>  void pc_system_firmware_init(PCMachineState *pcms, MemoryRegion *rom_memory);
>>>>>>>
>>>>>>>  /* acpi-build.c */
>>>>>>>
>>>>>
>>>>>
>>>
>>>
>>
> 



From xen-devel-bounces@lists.xenproject.org Wed Jul 01 13:23:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jul 2020 13:23:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jqchx-0003NO-7c; Wed, 01 Jul 2020 13:23:25 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3SYL=AM=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1jqchv-0003NJ-T1
 for xen-devel@lists.xenproject.org; Wed, 01 Jul 2020 13:23:24 +0000
X-Inumbo-ID: 096fd264-bb9e-11ea-b7bb-bc764e2007e4
Received: from mail-wm1-x331.google.com (unknown [2a00:1450:4864:20::331])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 096fd264-bb9e-11ea-b7bb-bc764e2007e4;
 Wed, 01 Jul 2020 13:23:23 +0000 (UTC)
Received: by mail-wm1-x331.google.com with SMTP id f139so23227066wmf.5
 for <xen-devel@lists.xenproject.org>; Wed, 01 Jul 2020 06:23:23 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
 :mime-version:content-transfer-encoding:thread-index
 :content-language;
 bh=jf2wtkoCW3xmuHm5TPxfRgukLk2yxsCQD6cMw9FXid4=;
 b=JRZTfk0UBEXbxiHvzkprt7OnwOCmnqK0YSOplNzWeMPVRnneceqLBOWcNyMjqu0kxV
 L+GLOYlHYQsrrlko3ph/fstgBqOwm3LqpCCgy/ZF3YDCm6Wn6h/WsRCuJl6ED7I7uaZ/
 g47MTBzW9xDJiYIfZF/QtFl+Rd0QeDZgEfONFSqhn/hMlv/lnKykQ3I6DQcqt4L8P1FY
 pV5m/I/4WSy66Jugy2brHSwE6rBiQtG3s0ts81FIe+Pf/flS2mzFh1DvPrVds1vcRnMJ
 3hPLvrLVl38VFqE/1+r0UBDL30Fcndt7tTNMWjT3fDJu9DdByrKhNqNSH43bv0ZaMxsj
 dHxg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
 :subject:date:message-id:mime-version:content-transfer-encoding
 :thread-index:content-language;
 bh=jf2wtkoCW3xmuHm5TPxfRgukLk2yxsCQD6cMw9FXid4=;
 b=Y2ZCI7Xpgvp62dKbhv0voaMQ0qKJlLzoM1LoGjo3IeDVIxte3QWwkLidpqqfDaTliv
 usfm8jcpaloTtJ8+yc3kyLA3lRabuGfURgVUYNvnfuKGbH+RRWUMs1+R6MFMVWNjvfhr
 tvQaTQ8fDwV36t4kvrnnWWhkWRGeUfxCXRadVR8iL1yMxEKcrVFSfFS/6IGZI32a1MUm
 Z3uOTe7qFqHd52NS/N/GAfzShA/EqUg9G6aXq+JF9vOHcoTPZbMv91QpROhPEBKMDKTw
 BF8SIjwfgFT5gSkAnFuqgDkKXyLC8oZRfZYkLlhPRWLyoU8A34nzhd/0uDMKCB/lkSw3
 FP6A==
X-Gm-Message-State: AOAM5305/tIbswx89KWIeo6A0ly1yZ4glYu6M5gbdPVWF38VeAvnHvn+
 o6x40W6td1qQqGiiZyXDQ7M=
X-Google-Smtp-Source: ABdhPJw4NwuvaTFUJ06O9gdmNc5bF9TTa83gE/UxLK90XtyKmstGli/mWH+gg7QwhvIx0xOcvwoU4g==
X-Received: by 2002:a1c:2d83:: with SMTP id
 t125mr27363966wmt.187.1593609802266; 
 Wed, 01 Jul 2020 06:23:22 -0700 (PDT)
Received: from CBGR90WXYV0 (54-240-197-238.amazon.com. [54.240.197.238])
 by smtp.gmail.com with ESMTPSA id h14sm7676165wrt.36.2020.07.01.06.23.19
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Wed, 01 Jul 2020 06:23:21 -0700 (PDT)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
To: "'Jan Beulich'" <jbeulich@suse.com>,
 "'Andrew Cooper'" <andrew.cooper3@citrix.com>
References: <20200701115842.18583-1-andrew.cooper3@citrix.com>
 <41b49d79-e0fa-161a-bb27-a9a2ccf361f5@suse.com>
In-Reply-To: <41b49d79-e0fa-161a-bb27-a9a2ccf361f5@suse.com>
Subject: RE: [PATCH for-4.14] x86/spec-ctrl: Protect against CALL/JMP
 straight-line speculation
Date: Wed, 1 Jul 2020 14:23:18 +0100
Message-ID: <001201d64faa$ca8a6370$5f9f2a50$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="utf-8"
Content-Transfer-Encoding: quoted-printable
X-Mailer: Microsoft Outlook 16.0
Thread-Index: AQEAcbj1SdXayzh+G/7yKy5sunlvLgKYQmRNqomv8RA=
Content-Language: en-gb
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Reply-To: paul@xen.org
Cc: 'Xen-devel' <xen-devel@lists.xenproject.org>, 'Wei Liu' <wl@xen.org>,
 =?utf-8?Q?'Roger_Pau_Monn=C3=A9'?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Sent: 01 July 2020 13:27
> To: Andrew Cooper <andrew.cooper3@citrix.com>
> Cc: Xen-devel <xen-devel@lists.xenproject.org>; Wei Liu <wl@xen.org>; Roger Pau Monné
> <roger.pau@citrix.com>; Paul Durrant <paul@xen.org>
> Subject: Re: [PATCH for-4.14] x86/spec-ctrl: Protect against CALL/JMP straight-line speculation
>
> On 01.07.2020 13:58, Andrew Cooper wrote:
> > Some x86 CPUs speculatively execute beyond indirect CALL/JMP instructions.
> >
> > With CONFIG_INDIRECT_THUNK / Retpolines, indirect CALL/JMP instructions are
> > converted to direct CALL/JMP's to __x86_indirect_thunk_REG(), leaving just a
> > handful of indirect JMPs implementing those stubs.
> >
> > There is no architectural execution beyond an indirect JMP, so use INT3 as
> > recommended by vendors to halt speculative execution.  This is shorter than
> > LFENCE (which would also work fine), but also shows up in logs if we do
> > unexpectedly execute them.
> >
> > Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>
> Reviewed-by: Jan Beulich <jbeulich@suse.com>

Release-acked-by: Paul Durrant <paul@xen.org>
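For reference, the change amounts to placing an INT3 immediately after each indirect JMP in the thunk stubs, along these lines (an illustrative sketch, not the exact Xen assembly):

```asm
/* Thunk stub, simplified.  The indirect JMP never falls through
 * architecturally; the INT3 exists only to stop the CPU speculating
 * straight past the JMP, and traps loudly if ever really executed. */
__x86_indirect_thunk_rax:
        jmp  *%rax
        int3
```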



From xen-devel-bounces@lists.xenproject.org Wed Jul 01 13:35:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jul 2020 13:35:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jqctP-0004HR-Cb; Wed, 01 Jul 2020 13:35:15 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tcdi=AM=casper.srs.infradead.org=batv+501e1de201b53739768b+6156+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1jqctN-0004HM-G7
 for xen-devel@lists.xenproject.org; Wed, 01 Jul 2020 13:35:13 +0000
X-Inumbo-ID: af66e1ac-bb9f-11ea-b7bb-bc764e2007e4
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id af66e1ac-bb9f-11ea-b7bb-bc764e2007e4;
 Wed, 01 Jul 2020 13:35:11 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=infradead.org; s=casper.20170209; h=In-Reply-To:Content-Type:MIME-Version:
 References:Message-ID:Subject:Cc:To:From:Date:Sender:Reply-To:
 Content-Transfer-Encoding:Content-ID:Content-Description;
 bh=6o0bqYA9UibQkqr/MmLZIfw1Phn9yQ8zQ1ln3u9Cm/E=; b=hlKDnjFXs/Y5WjrWNNgSQ+2Cvd
 RvsmfMbhXkV1KC+OK+5x7CGESEtA1Qk4HZeAu6Yaz/UH9zEAUbNa9FVjAhV4Duiv/yAyXRv3/1ni3
 CSmwa08RJtybgECUGXsj6t5Ikz9WU8jXnEjZhSsiO4Cce5bMLn+ryrsvUEQAfHt3Y/kj2M4b29Xj0
 2lORXLltD3a0Za4R4fVC/+lv0bptUhSUEHkFmEDfSV3phIa7QD/g3Q2Zle+t27uSL2Zcfg1ARrlES
 xPbJ0Leq9BQuYJ3vQmrkn81G/oqYWCgAUIa51MwnCk73e/oMjUpHEK7/k+Ajj3vJC3XBgczD+ReMg
 1fAQgQbQ==;
Received: from hch by casper.infradead.org with local (Exim 4.92.3 #3 (Red Hat
 Linux)) id 1jqct6-0006Qi-2K; Wed, 01 Jul 2020 13:34:56 +0000
Date: Wed, 1 Jul 2020 14:34:56 +0100
From: Christoph Hellwig <hch@infradead.org>
To: Stefano Stabellini <sstabellini@kernel.org>
Subject: Re: [PATCH] xen: introduce xen_vring_use_dma
Message-ID: <20200701133456.GA23888@infradead.org>
References: <20200624091732.23944-1-peng.fan@nxp.com>
 <20200624050355-mutt-send-email-mst@kernel.org>
 <alpine.DEB.2.21.2006241047010.8121@sstabellini-ThinkPad-T480s>
 <20200624163940-mutt-send-email-mst@kernel.org>
 <alpine.DEB.2.21.2006241351430.8121@sstabellini-ThinkPad-T480s>
 <20200624181026-mutt-send-email-mst@kernel.org>
 <alpine.DEB.2.21.2006251014230.8121@sstabellini-ThinkPad-T480s>
 <20200626110629-mutt-send-email-mst@kernel.org>
 <alpine.DEB.2.21.2006291621300.8121@sstabellini-ThinkPad-T480s>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <alpine.DEB.2.21.2006291621300.8121@sstabellini-ThinkPad-T480s>
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by
 casper.infradead.org. See http://www.infradead.org/rpr.html
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: jgross@suse.com, Peng Fan <peng.fan@nxp.com>, x86@kernel.org,
 konrad.wilk@oracle.com, jasowang@redhat.com,
 "Michael S. Tsirkin" <mst@redhat.com>, linux-kernel@vger.kernel.org,
 virtualization@lists.linux-foundation.org, iommu@lists.linux-foundation.org,
 linux-imx@nxp.com, xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com,
 linux-arm-kernel@lists.infradead.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Mon, Jun 29, 2020 at 04:46:09PM -0700, Stefano Stabellini wrote:
> > I could imagine some future Xen hosts setting a flag somewhere in the
> > platform capability saying "no xen specific flag, rely on
> > "VIRTIO_F_ACCESS_PLATFORM". Then you set that accordingly in QEMU.
> > How about that?
> 
> Yes, that would be fine and there is no problem implementing something
> like that when we get virtio support in Xen. Today there are still no
> virtio interfaces provided by Xen to ARM guests (no virtio-block/net,
> etc.)
> 
> In fact, in both cases we are discussing virtio is *not* provided by
> Xen; it is a firmware interface to something entirely different:
> 
> 1) virtio is used to talk to a remote AMP processor (RPMesg)
> 2) virtio is used to talk to a secure-world firmware/OS (Trusty)
>
> VIRTIO_F_ACCESS_PLATFORM is not set by Xen in these cases but by RPMesg
> and by Trusty respectively. I don't know if Trusty should or should not
> set VIRTIO_F_ACCESS_PLATFORM, but I think Linux should still work
> without issues.
> 

Any virtio implementation that is not in control of the memory map
(aka not the hypervisor) absolutely must set VIRTIO_F_ACCESS_PLATFORM,
else it is completely broken.

> The xen_domain() check in Linux makes it so that vring_use_dma_api
> returns the opposite value on native Linux compared to Linux as Xen/ARM
> DomU by "accident". By "accident" because there is no architectural
> reason why Linux Xen/ARM DomU should behave differently compared to
> native Linux in this regard.
> 
> I hope that now it is clearer why I think the if (xen_domain()) check
> needs to be improved anyway, even if we fix generic dma_ops with virtio
> interfaces missing VIRTIO_F_ACCESS_PLATFORM.

IMHO that Xen quirk should never have been added in this form..


From xen-devel-bounces@lists.xenproject.org Wed Jul 01 13:59:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jul 2020 13:59:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jqdGh-000611-EV; Wed, 01 Jul 2020 13:59:19 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=r09v=AM=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jqdGh-00060w-4t
 for xen-devel@lists.xenproject.org; Wed, 01 Jul 2020 13:59:19 +0000
X-Inumbo-ID: 0df51e02-bba3-11ea-871d-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0df51e02-bba3-11ea-871d-12813bfff9fa;
 Wed, 01 Jul 2020 13:59:17 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=cjS1OdWOCfwHOCXLFZm8ns85U8esMmISNWI59Ftxt44=; b=JrqMv27JJWaZ7ukcxo2lnmowh
 H/ZcLWY7BabFUtCkpiD4Gj0yK+2JbpODL2L9BBkpow489AANIDYrHKRkf3ezLnAAWS4kQEYUY9cK6
 j5m6BtRFuG25IFnkTI9Nd/1B6tDJOEr3zUz1NFpoBUYeEU4oSLFgPXECaCz/UeEh7rOV0=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jqdGf-0003Bo-3S; Wed, 01 Jul 2020 13:59:17 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jqdGe-0001vz-NN; Wed, 01 Jul 2020 13:59:16 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jqdGe-0006Uv-MR; Wed, 01 Jul 2020 13:59:16 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151496-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 151496: tolerable all pass - PUSHED
X-Osstest-Failures: libvirt:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 libvirt:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 libvirt:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 libvirt:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 libvirt:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 libvirt:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 libvirt:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 libvirt:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 libvirt:test-arm64-arm64-libvirt-qcow2:migrate-support-check:fail:nonblocking
 libvirt:test-arm64-arm64-libvirt-qcow2:saverestore-support-check:fail:nonblocking
 libvirt:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 libvirt:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 libvirt:test-arm64-arm64-libvirt:migrate-support-check:fail:nonblocking
 libvirt:test-arm64-arm64-libvirt:saverestore-support-check:fail:nonblocking
 libvirt:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This: libvirt=e7998ebeaf15e4e8825be0dd97aa1316f194f00d
X-Osstest-Versions-That: libvirt=4268e187531eb370bc6fbac4496018bb7fef6716
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 01 Jul 2020 13:59:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151496 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151496/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 151469
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 151469
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-qcow2 12 migrate-support-check        fail never pass
 test-arm64-arm64-libvirt-qcow2 13 saverestore-support-check    fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass

version targeted for testing:
 libvirt              e7998ebeaf15e4e8825be0dd97aa1316f194f00d
baseline version:
 libvirt              4268e187531eb370bc6fbac4496018bb7fef6716

Last test of basis   151469  2020-06-30 04:19:06 Z    1 days
Testing same since   151496  2020-07-01 04:23:43 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Daniel P. Berrangé <berrange@redhat.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Weblate <noreply@weblate.org>
  Yuri Chornoivan <yurchor@ukr.net>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-libvirt                                     pass    
 test-arm64-arm64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-arm64-arm64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/libvirt.git
   4268e18753..e7998ebeaf  e7998ebeaf15e4e8825be0dd97aa1316f194f00d -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Wed Jul 01 15:00:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jul 2020 15:00:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jqeDb-0003Bi-SM; Wed, 01 Jul 2020 15:00:11 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DLP9=AM=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1jqeDa-0003Bd-HW
 for xen-devel@lists.xenproject.org; Wed, 01 Jul 2020 15:00:10 +0000
X-Inumbo-ID: 8e0a1a90-bbab-11ea-bca7-bc764e2007e4
Received: from mail-lj1-x241.google.com (unknown [2a00:1450:4864:20::241])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8e0a1a90-bbab-11ea-bca7-bc764e2007e4;
 Wed, 01 Jul 2020 15:00:09 +0000 (UTC)
Received: by mail-lj1-x241.google.com with SMTP id t25so22826974lji.12
 for <xen-devel@lists.xenproject.org>; Wed, 01 Jul 2020 08:00:09 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc:content-transfer-encoding;
 bh=yc+eL/IAj3nyAMuFv8crrvZ6GHVlDHU/rqByUyGib/E=;
 b=OndfuD5dzixU+MzAMVK8Yj8EEwL2yPyBwDT+uPo/gRYAU2InINZMBbtCuYDiGk3rWB
 pxLCtBiGJ+Vb+/9+gkKx4T8xRkLcggmR96lmJsS/azur8SXDNSyC5ly1X+HuLk5hTzaE
 XYt04Pz1g+WwshfUyDiRsEfLTLp3U7ZoM9t/ZJnD4nt19UYMzUFjvYgH4x918UbrysWt
 AfGVnDJMcwUvOCclU+8s/VK5ssj/iPgGDQtT1HjM4jyPE1TGUHOOmHM2S5gW7go5xVYb
 ZsBuDmT5n9h6RPSZcGAVR8TQsrlBUWj6Ui37KflR6diWlc1G7Th/v7t4Wf0+nQIkjJpc
 XWwg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc:content-transfer-encoding;
 bh=yc+eL/IAj3nyAMuFv8crrvZ6GHVlDHU/rqByUyGib/E=;
 b=iA9kMOwfs4RdVov7sTiluEpv7oAUYpEHUYh8u5vk/RV0PRpxKFCnM/JcRHysIMaNGu
 xF534UWkX+OC4gZuFvJxCl2hLEHDWyaEUz5+kbXgNzG7av9LhnfcueYjN/oTBRXOcowc
 E8AM8hR4TOI+qycqZyI1wbnt0SCGRHm6pgfFvXE9KRxwtYN8oeA5AEVSk0d6zVSH0rkU
 jPUagy8aAXaCV0tFyaTn2c164mHZgi8HVqd5lfTA8CHkj4J642Y/+t1fhlsL8r0RKZQH
 3micgdK2SuxzkHvVvetyab0deSGPG8jZI6gKOZrdAQXXejA7skvqQ93W89SCMoe6iL7V
 0zFw==
X-Gm-Message-State: AOAM5319OME3hx7aETOQVs5wGwGMf6oabzqhl3QDJsTjIpOxYWRMtmH/
 APmmAIQss2rdzIwEDxjns5umpF6S5fINj/OUZ4A=
X-Google-Smtp-Source: ABdhPJzcrloggdF14PhUb1enOgtTkLl6rNS/f0POjqGOcm0QyU99xMr65irW/PqhXVMC+WtSHbmRzyrHzmFUtxdLZXA=
X-Received: by 2002:a2e:978b:: with SMTP id y11mr12896639lji.399.1593615608018; 
 Wed, 01 Jul 2020 08:00:08 -0700 (PDT)
MIME-Version: 1.0
References: <20200624121841.17971-1-paul@xen.org>
 <20200624121841.17971-3-paul@xen.org>
 <33e594dd-dbfa-7c57-1cf5-0852e8fc8e1d@redhat.com>
 <000701d64ef5$6568f660$303ae320$@xen.org>
 <9e591254-d215-d5af-38d2-fd5b65f84a43@redhat.com>
 <000801d64f75$c604f570$520ee050$@xen.org>
 <CAKf6xpvNTVqK263pdSARyoWnzP8g9SRoSqvhnLLwyYadjR1ChQ@mail.gmail.com>
 <07cc67e9-aeaa-1947-43db-c00716bead01@redhat.com>
 <5c00f6a5-3f86-258e-999f-956eef825d14@redhat.com>
In-Reply-To: <5c00f6a5-3f86-258e-999f-956eef825d14@redhat.com>
From: Jason Andryuk <jandryuk@gmail.com>
Date: Wed, 1 Jul 2020 10:59:56 -0400
Message-ID: <CAKf6xpuiQBhvChwfikaLd4tvKVUn3oo88wbsMVw7P11ehV-EYQ@mail.gmail.com>
Subject: Re: [PATCH 2/2] xen: cleanup unrealized flash devices
To: =?UTF-8?Q?Philippe_Mathieu=2DDaud=C3=A9?= <philmd@redhat.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Eduardo Habkost <ehabkost@redhat.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Paul Durrant <pdurrant@amazon.com>,
 Paul Durrant <paul@xen.org>, QEMU <qemu-devel@nongnu.org>,
 Paolo Bonzini <pbonzini@redhat.com>,
 xen-devel <xen-devel@lists.xenproject.org>,
 Richard Henderson <rth@twiddle.net>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Wed, Jul 1, 2020 at 8:55 AM Philippe Mathieu-Daudé <philmd@redhat.com> wrote:
>
> On 7/1/20 2:40 PM, Philippe Mathieu-Daudé wrote:
> > On 7/1/20 2:25 PM, Jason Andryuk wrote:
> >> On Wed, Jul 1, 2020 at 3:03 AM Paul Durrant <xadimgnik@gmail.com> wrote:
> >>>
> >>>> -----Original Message-----
> >>>> From: Philippe Mathieu-Daudé <philmd@redhat.com>
> >>>> Sent: 30 June 2020 18:27
> >>>> To: paul@xen.org; xen-devel@lists.xenproject.org; qemu-devel@nongnu.org
> >>>> Cc: 'Eduardo Habkost' <ehabkost@redhat.com>; 'Michael S. Tsirkin' <mst@redhat.com>; 'Paul Durrant'
> >>>> <pdurrant@amazon.com>; 'Jason Andryuk' <jandryuk@gmail.com>; 'Paolo Bonzini' <pbonzini@redhat.com>;
> >>>> 'Richard Henderson' <rth@twiddle.net>
> >>>> Subject: Re: [PATCH 2/2] xen: cleanup unrealized flash devices
> >>>>
> >>>> On 6/30/20 5:44 PM, Paul Durrant wrote:
> >>>>>> -----Original Message-----
> >>>>>> From: Philippe Mathieu-Daudé <philmd@redhat.com>
> >>>>>> Sent: 30 June 2020 16:26
> >>>>>> To: Paul Durrant <paul@xen.org>; xen-devel@lists.xenproject.org; qemu-devel@nongnu.org
> >>>>>> Cc: Eduardo Habkost <ehabkost@redhat.com>; Michael S. Tsirkin <mst@redhat.com>; Paul Durrant
> >>>>>> <pdurrant@amazon.com>; Jason Andryuk <jandryuk@gmail.com>; Paolo Bonzini <pbonzini@redhat.com>;
> >>>>>> Richard Henderson <rth@twiddle.net>
> >>>>>> Subject: Re: [PATCH 2/2] xen: cleanup unrealized flash devices
> >>>>>>
> >>>>>> On 6/24/20 2:18 PM, Paul Durrant wrote:
> >>>>>>> From: Paul Durrant <pdurrant@amazon.com>
> >>>>>>>
> >>>>>>> The generic pc_machine_initfn() calls pc_system_flash_create() which creates
> >>>>>>> 'system.flash0' and 'system.flash1' devices. These devices are then realized
> >>>>>>> by pc_system_flash_map() which is called from pc_system_firmware_init() which
> >>>>>>> itself is called via pc_memory_init(). The latter however is not called when
> >>>>>>> xen_enable() is true and hence the following assertion fails:
> >>>>>>>
> >>>>>>> qemu-system-i386: hw/core/qdev.c:439: qdev_assert_realized_properly:
> >>>>>>> Assertion `dev->realized' failed
> >>>>>>>
> >>>>>>> These flash devices are unneeded when using Xen so this patch avoids the
> >>>>>>> assertion by simply removing them using pc_system_flash_cleanup_unused().
> >>>>>>>
> >>>>>>> Reported-by: Jason Andryuk <jandryuk@gmail.com>
> >>>>>>> Fixes: ebc29e1beab0 ("pc: Support firmware configuration with -blockdev")
> >>>>>>> Signed-off-by: Paul Durrant <pdurrant@amazon.com>
> >>>>>>> Tested-by: Jason Andryuk <jandryuk@gmail.com>
> >>>>>>> ---
> >>>>>>> Cc: Paolo Bonzini <pbonzini@redhat.com>
> >>>>>>> Cc: Richard Henderson <rth@twiddle.net>
> >>>>>>> Cc: Eduardo Habkost <ehabkost@redhat.com>
> >>>>>>> Cc: "Michael S. Tsirkin" <mst@redhat.com>
> >>>>>>> Cc: Marcel Apfelbaum <marcel.apfelbaum@gmail.com>
> >>>>>>> ---
> >>>>>>>  hw/i386/pc_piix.c    | 9 ++++++---
> >>>>>>>  hw/i386/pc_sysfw.c   | 2 +-
> >>>>>>>  include/hw/i386/pc.h | 1 +
> >>>>>>>  3 files changed, 8 insertions(+), 4 deletions(-)
> >>>>>>>
> >>>>>>> diff --git a/hw/i386/pc_piix.c b/hw/i386/pc_piix.c
> >>>>>>> index 1497d0e4ae..977d40afb8 100644
> >>>>>>> --- a/hw/i386/pc_piix.c
> >>>>>>> +++ b/hw/i386/pc_piix.c
> >>>>>>> @@ -186,9 +186,12 @@ static void pc_init1(MachineState *machine,
> >>>>>>>      if (!xen_enabled()) {
> >>>>>>>          pc_memory_init(pcms, system_memory,
> >>>>>>>                         rom_memory, &ram_memory);
> >>>>>>> -    } else if (machine->kernel_filename != NULL) {
> >>>>>>> -        /* For xen HVM direct kernel boot, load linux here */
> >>>>>>> -        xen_load_linux(pcms);
> >>>>>>> +    } else {
> >>>>>>> +        pc_system_flash_cleanup_unused(pcms);
> >>>>>>
> >>>>>> TIL pc_system_flash_cleanup_unused().
> >>>>>>
> >>>>>> What about restricting at the source?
> >>>>>>
> >>>>>
> >>>>> And leave the devices in place? They are not relevant for Xen, so why not clean up?
> >>>>
> >>>> No, I meant to not create them in the first place, instead of
> >>>> create+destroy.
> >>>>
> >>>> Anyway what you did works, so I don't have any problem.
> >>>
> >>> IIUC Jason originally tried restricting creation but encountered a problem because xen_enabled() would always return false at that point, because machine creation occurs before accelerators are initialized.
> >>
> >> Correct.  Quoting my previous email:
> >> """
> >> Removing the call to pc_system_flash_create() from pc_machine_initfn()
> >> lets QEMU startup and run a Xen HVM again.  xen_enabled() doesn't work
> >> there since accelerators have not been initialized yes, I guess?
> >
> > Ah, I missed that. You pointed at the bug here :)
> >
> > I think pc_system_flash_create() shouldn't be called in init() but
> > realize().
>
> Hmm this is a MachineClass, not qdev, so no realize().
>
> In softmmu/vl.c we have:
>
> 4152     configure_accelerators(argv[0]);
> ....
> 4327     if (!xen_enabled()) { // so xen_enable() is working
> 4328         /* On 32-bit hosts, QEMU is limited by virtual address space */
> 4329         if (ram_size > (2047 << 20) && HOST_LONG_BITS == 32) {
> 4330             error_report("at most 2047 MB RAM can be simulated");
> 4331             exit(1);
> 4332         }
> 4333     }
> ....
> 4348     machine_run_board_init(current_machine);
>
> which calls in hw/core/machine.c:
>
> 1089 void machine_run_board_init(MachineState *machine)
> 1090 {
> 1091     MachineClass *machine_class = MACHINE_GET_CLASS(machine);
> ....
> 1138     machine_class->init(machine);
> 1139 }
>
>          -> pc_machine_class_init()
>
> This should come after:
>
>          -> pc_machine_initfn()
>
>             -> pc_system_flash_create(pcms)

Sorry, I'm not following the flow you want.

The xen_enabled() call in vl.c may always fail.  Or at least in most
common Xen usage.  Xen HVMs are started with machine xenfv and don't
specify an accelerator.  You need to process the xenfv default machine
opts to process "accel=xen".  Actually, I don't know how the
accelerator initialization works, but xen_accel_class_init() needs to
be called to set `ac->allowed = &xen_allowed`.  xen_allowed is the
return value of xen_enabled() and the accelerator initialization must
toggle it.

Regards,
Jason


From xen-devel-bounces@lists.xenproject.org Wed Jul 01 15:11:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jul 2020 15:11:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jqeO6-000450-W7; Wed, 01 Jul 2020 15:11:02 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=6/Ob=AM=redhat.com=philmd@srs-us1.protection.inumbo.net>)
 id 1jqeO5-00044v-ML
 for xen-devel@lists.xenproject.org; Wed, 01 Jul 2020 15:11:02 +0000
X-Inumbo-ID: 12034596-bbad-11ea-8496-bc764e2007e4
Received: from us-smtp-1.mimecast.com (unknown [205.139.110.61])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 12034596-bbad-11ea-8496-bc764e2007e4;
 Wed, 01 Jul 2020 15:10:59 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
 s=mimecast20190719; t=1593616259;
 h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
 content-transfer-encoding:content-transfer-encoding:
 in-reply-to:in-reply-to:references:references:autocrypt:autocrypt;
 bh=3OzkLmjp2/6OT377j2dxBv/5JQQ05QCE0dQNnMjasKY=;
 b=CqOUbAQXEYG+JILftKBus9lVMeLedXctE0dHgAigSx0L0QNB7ZxolZcHydu8DwWrxhTpSp
 qu0Jm2XJI13gBX5NcjzHduOuR+4A1FOJJAupvrY+QcepCRgi7073fHnj8/5TUYpr8tYFI6
 2hLqKHEabcofxg+rBMCZsVshY4V6KIQ=
Received: from mail-ed1-f70.google.com (mail-ed1-f70.google.com
 [209.85.208.70]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-4-kl_brxg7OkSYFHsnilCUKw-1; Wed, 01 Jul 2020 11:10:57 -0400
X-MC-Unique: kl_brxg7OkSYFHsnilCUKw-1
Received: by mail-ed1-f70.google.com with SMTP id h5so21102340edl.7
 for <xen-devel@lists.xenproject.org>; Wed, 01 Jul 2020 08:10:57 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:subject:to:cc:references:from:autocrypt
 :message-id:date:user-agent:mime-version:in-reply-to
 :content-language:content-transfer-encoding;
 bh=3OzkLmjp2/6OT377j2dxBv/5JQQ05QCE0dQNnMjasKY=;
 b=gRwfj+NzByR1EJi465bZ0g3vj57tQd8io+jagn/UsVsQULNPZtd97RgSfuIiuRGmrB
 X97NcpHBD1YvwfRc/E9+OoTQTOFMcVpd9DNmiVweb9lQprFPnzbsAqBqUWuIhAKVSxdl
 0LKXbW13qdsmvP9fetjJYIQRwx/8aLkXWAosShpRlhIqGRhAF+6Ob0rT0A6lItzvumw3
 K3r2rBQ3RHOMuO7gOWsNuh3fH34G/PGeDDdMikHlBFO2QbmCBUgIwN6F72cUmfUoMhvi
 IkWkFGbljqqST7pdsLmsG8BawawcmZ3EHxqAtxOtGXEal9sfs3e1AurB3iYsHiMKg6Gz
 VgSg==
X-Gm-Message-State: AOAM531k86W2xjiRh1j6oifEjHaIjWfbMdaD/nQy+/Ot3MlOA0kXIGEI
 LGIEyHTwYiRHMd+myRdghFkDpLQ7yT02DYTGZ+gOR7dt++3MwxR/ZH/oxVw4U2OFfYK4ovrII/C
 JDmh8kvPppEnPt+PzPWh4IcpzSnI=
X-Received: by 2002:a17:906:35d2:: with SMTP id
 p18mr24456300ejb.393.1593616256219; 
 Wed, 01 Jul 2020 08:10:56 -0700 (PDT)
X-Google-Smtp-Source: ABdhPJxRsyWL+FnQ+2J4ypqMaTDn8KOMxKU/MhqjZ2pwt5D4D1Yx8kgojUgOqxbplfComQUP53vs5g==
X-Received: by 2002:a17:906:35d2:: with SMTP id
 p18mr24456276ejb.393.1593616256004; 
 Wed, 01 Jul 2020 08:10:56 -0700 (PDT)
Received: from [192.168.1.37] (1.red-83-51-162.dynamicip.rima-tde.net.
 [83.51.162.1])
 by smtp.gmail.com with ESMTPSA id c10sm97656edt.22.2020.07.01.08.10.54
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 01 Jul 2020 08:10:55 -0700 (PDT)
Subject: Re: [PATCH 2/2] xen: cleanup unrealized flash devices
To: Jason Andryuk <jandryuk@gmail.com>
References: <20200624121841.17971-1-paul@xen.org>
 <20200624121841.17971-3-paul@xen.org>
 <33e594dd-dbfa-7c57-1cf5-0852e8fc8e1d@redhat.com>
 <000701d64ef5$6568f660$303ae320$@xen.org>
 <9e591254-d215-d5af-38d2-fd5b65f84a43@redhat.com>
 <000801d64f75$c604f570$520ee050$@xen.org>
 <CAKf6xpvNTVqK263pdSARyoWnzP8g9SRoSqvhnLLwyYadjR1ChQ@mail.gmail.com>
 <07cc67e9-aeaa-1947-43db-c00716bead01@redhat.com>
 <5c00f6a5-3f86-258e-999f-956eef825d14@redhat.com>
 <CAKf6xpuiQBhvChwfikaLd4tvKVUn3oo88wbsMVw7P11ehV-EYQ@mail.gmail.com>
From: =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@redhat.com>
Autocrypt: addr=philmd@redhat.com; keydata=
 mQINBDXML8YBEADXCtUkDBKQvNsQA7sDpw6YLE/1tKHwm24A1au9Hfy/OFmkpzo+MD+dYc+7
 bvnqWAeGweq2SDq8zbzFZ1gJBd6+e5v1a/UrTxvwBk51yEkadrpRbi+r2bDpTJwXc/uEtYAB
 GvsTZMtiQVA4kRID1KCdgLa3zztPLCj5H1VZhqZsiGvXa/nMIlhvacRXdbgllPPJ72cLUkXf
 z1Zu4AkEKpccZaJspmLWGSzGu6UTZ7UfVeR2Hcc2KI9oZB1qthmZ1+PZyGZ/Dy+z+zklC0xl
 XIpQPmnfy9+/1hj1LzJ+pe3HzEodtlVA+rdttSvA6nmHKIt8Ul6b/h1DFTmUT1lN1WbAGxmg
 CH1O26cz5nTrzdjoqC/b8PpZiT0kO5MKKgiu5S4PRIxW2+RA4H9nq7nztNZ1Y39bDpzwE5Sp
 bDHzd5owmLxMLZAINtCtQuRbSOcMjZlg4zohA9TQP9krGIk+qTR+H4CV22sWldSkVtsoTaA2
 qNeSJhfHQY0TyQvFbqRsSNIe2gTDzzEQ8itsmdHHE/yzhcCVvlUzXhAT6pIN0OT+cdsTTfif
 MIcDboys92auTuJ7U+4jWF1+WUaJ8gDL69ThAsu7mGDBbm80P3vvUZ4fQM14NkxOnuGRrJxO
 qjWNJ2ZUxgyHAh5TCxMLKWZoL5hpnvx3dF3Ti9HW2dsUUWICSQARAQABtDJQaGlsaXBwZSBN
 YXRoaWV1LURhdWTDqSAoUGhpbCkgPHBoaWxtZEByZWRoYXQuY29tPokCVQQTAQgAPwIbDwYL
 CQgHAwIGFQgCCQoLBBYCAwECHgECF4AWIQSJweePYB7obIZ0lcuio/1u3q3A3gUCXsfWwAUJ
 KtymWgAKCRCio/1u3q3A3ircD/9Vjh3aFNJ3uF3hddeoFg1H038wZr/xi8/rX27M1Vj2j9VH
 0B8Olp4KUQw/hyO6kUxqkoojmzRpmzvlpZ0cUiZJo2bQIWnvScyHxFCv33kHe+YEIqoJlaQc
 JfKYlbCoubz+02E2A6bFD9+BvCY0LBbEj5POwyKGiDMjHKCGuzSuDRbCn0Mz4kCa7nFMF5Jv
 piC+JemRdiBd6102ThqgIsyGEBXuf1sy0QIVyXgaqr9O2b/0VoXpQId7yY7OJuYYxs7kQoXI
 6WzSMpmuXGkmfxOgbc/L6YbzB0JOriX0iRClxu4dEUg8Bs2pNnr6huY2Ft+qb41RzCJvvMyu
 gS32LfN0bTZ6Qm2A8ayMtUQgnwZDSO23OKgQWZVglGliY3ezHZ6lVwC24Vjkmq/2yBSLakZE
 6DZUjZzCW1nvtRK05ebyK6tofRsx8xB8pL/kcBb9nCuh70aLR+5cmE41X4O+MVJbwfP5s/RW
 9BFSL3qgXuXso/3XuWTQjJJGgKhB6xXjMmb1J4q/h5IuVV4juv1Fem9sfmyrh+Wi5V1IzKI7
 RPJ3KVb937eBgSENk53P0gUorwzUcO+ASEo3Z1cBKkJSPigDbeEjVfXQMzNt0oDRzpQqH2vp
 apo2jHnidWt8BsckuWZpxcZ9+/9obQ55DyVQHGiTN39hkETy3Emdnz1JVHTU0Q==
Message-ID: <7cec82ad-208c-04d9-4ad7-8656cc914516@redhat.com>
Date: Wed, 1 Jul 2020 17:10:54 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.5.0
MIME-Version: 1.0
In-Reply-To: <CAKf6xpuiQBhvChwfikaLd4tvKVUn3oo88wbsMVw7P11ehV-EYQ@mail.gmail.com>
Content-Language: en-US
Authentication-Results: relay.mimecast.com;
 auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=philmd@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Eduardo Habkost <ehabkost@redhat.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Paul Durrant <pdurrant@amazon.com>,
 Paul Durrant <paul@xen.org>, QEMU <qemu-devel@nongnu.org>,
 Paolo Bonzini <pbonzini@redhat.com>,
 xen-devel <xen-devel@lists.xenproject.org>,
 Richard Henderson <rth@twiddle.net>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 7/1/20 4:59 PM, Jason Andryuk wrote:
> On Wed, Jul 1, 2020 at 8:55 AM Philippe Mathieu-Daudé <philmd@redhat.com> wrote:
>> On 7/1/20 2:40 PM, Philippe Mathieu-Daudé wrote:
>>> On 7/1/20 2:25 PM, Jason Andryuk wrote:
>>>> On Wed, Jul 1, 2020 at 3:03 AM Paul Durrant <xadimgnik@gmail.com> wrote:
>>>>>
>>>>>> -----Original Message-----
>>>>>> From: Philippe Mathieu-Daudé <philmd@redhat.com>
>>>>>> Sent: 30 June 2020 18:27
>>>>>> To: paul@xen.org; xen-devel@lists.xenproject.org; qemu-devel@nongnu.org
>>>>>> Cc: 'Eduardo Habkost' <ehabkost@redhat.com>; 'Michael S. Tsirkin' <mst@redhat.com>; 'Paul Durrant'
>>>>>> <pdurrant@amazon.com>; 'Jason Andryuk' <jandryuk@gmail.com>; 'Paolo Bonzini' <pbonzini@redhat.com>;
>>>>>> 'Richard Henderson' <rth@twiddle.net>
>>>>>> Subject: Re: [PATCH 2/2] xen: cleanup unrealized flash devices
>>>>>>
>>>>>> On 6/30/20 5:44 PM, Paul Durrant wrote:
>>>>>>>> -----Original Message-----
>>>>>>>> From: Philippe Mathieu-Daudé <philmd@redhat.com>
>>>>>>>> Sent: 30 June 2020 16:26
>>>>>>>> To: Paul Durrant <paul@xen.org>; xen-devel@lists.xenproject.org; qemu-devel@nongnu.org
>>>>>>>> Cc: Eduardo Habkost <ehabkost@redhat.com>; Michael S. Tsirkin <mst@redhat.com>; Paul Durrant
>>>>>>>> <pdurrant@amazon.com>; Jason Andryuk <jandryuk@gmail.com>; Paolo Bonzini <pbonzini@redhat.com>;
>>>>>>>> Richard Henderson <rth@twiddle.net>
>>>>>>>> Subject: Re: [PATCH 2/2] xen: cleanup unrealized flash devices
>>>>>>>>
>>>>>>>> On 6/24/20 2:18 PM, Paul Durrant wrote:
>>>>>>>>> From: Paul Durrant <pdurrant@amazon.com>
>>>>>>>>>
>>>>>>>>> The generic pc_machine_initfn() calls pc_system_flash_create() which creates
>>>>>>>>> 'system.flash0' and 'system.flash1' devices. These devices are then realized
>>>>>>>>> by pc_system_flash_map() which is called from pc_system_firmware_init() which
>>>>>>>>> itself is called via pc_memory_init(). The latter however is not called when
>>>>>>>>> xen_enable() is true and hence the following assertion fails:
>>>>>>>>>
>>>>>>>>> qemu-system-i386: hw/core/qdev.c:439: qdev_assert_realized_properly:
>>>>>>>>> Assertion `dev->realized' failed
>>>>>>>>>
>>>>>>>>> These flash devices are unneeded when using Xen so this patch avoids the
>>>>>>>>> assertion by simply removing them using pc_system_flash_cleanup_unused().
>>>>>>>>>
>>>>>>>>> Reported-by: Jason Andryuk <jandryuk@gmail.com>
>>>>>>>>> Fixes: ebc29e1beab0 ("pc: Support firmware configuration with -blockdev")
>>>>>>>>> Signed-off-by: Paul Durrant <pdurrant@amazon.com>
>>>>>>>>> Tested-by: Jason Andryuk <jandryuk@gmail.com>
>>>>>>>>> ---
>>>>>>>>> Cc: Paolo Bonzini <pbonzini@redhat.com>
>>>>>>>>> Cc: Richard Henderson <rth@twiddle.net>
>>>>>>>>> Cc: Eduardo Habkost <ehabkost@redhat.com>
>>>>>>>>> Cc: "Michael S. Tsirkin" <mst@redhat.com>
>>>>>>>>> Cc: Marcel Apfelbaum <marcel.apfelbaum@gmail.com>
>>>>>>>>> ---
>>>>>>>>>  hw/i386/pc_piix.c    | 9 ++++++---
>>>>>>>>>  hw/i386/pc_sysfw.c   | 2 +-
>>>>>>>>>  include/hw/i386/pc.h | 1 +
>>>>>>>>>  3 files changed, 8 insertions(+), 4 deletions(-)
>>>>>>>>>
>>>>>>>>> diff --git a/hw/i386/pc_piix.c b/hw/i386/pc_piix.c
>>>>>>>>> index 1497d0e4ae..977d40afb8 100644
>>>>>>>>> --- a/hw/i386/pc_piix.c
>>>>>>>>> +++ b/hw/i386/pc_piix.c
>>>>>>>>> @@ -186,9 +186,12 @@ static void pc_init1(MachineState *machine,
>>>>>>>>>      if (!xen_enabled()) {
>>>>>>>>>          pc_memory_init(pcms, system_memory,
>>>>>>>>>                         rom_memory, &ram_memory);
>>>>>>>>> -    } else if (machine->kernel_filename != NULL) {
>>>>>>>>> -        /* For xen HVM direct kernel boot, load linux here */
>>>>>>>>> -        xen_load_linux(pcms);
>>>>>>>>> +    } else {
>>>>>>>>> +        pc_system_flash_cleanup_unused(pcms);
>>>>>>>>
>>>>>>>> TIL pc_system_flash_cleanup_unused().
>>>>>>>>
>>>>>>>> What about restricting at the source?
>>>>>>>>
>>>>>>>
>>>>>>> And leave the devices in place? They are not relevant for Xen, so why not clean up?
>>>>>>
>>>>>> No, I meant to not create them in the first place, instead of
>>>>>> create+destroy.
>>>>>>
>>>>>> Anyway what you did works, so I don't have any problem.
>>>>>
>>>>> IIUC Jason originally tried restricting creation but encountered a problem because xen_enabled() would always return false at that point, because machine creation occurs before accelerators are initialized.
>>>>
>>>> Correct.  Quoting my previous email:
>>>> """
>>>> Removing the call to pc_system_flash_create() from pc_machine_initfn()
>>>> lets QEMU startup and run a Xen HVM again.  xen_enabled() doesn't work
>>>> there since accelerators have not been initialized yes, I guess?
>>>
>>> Ah, I missed that. You pointed at the bug here :)
>>>
>>> I think pc_system_flash_create() shouldn't be called in init() but
>>> realize().
>>
>> Hmm this is a MachineClass, not qdev, so no realize().
>>
>> In softmmu/vl.c we have:
>>
>> 4152     configure_accelerators(argv[0]);
>> ....
>> 4327     if (!xen_enabled()) { // so xen_enable() is working
>> 4328         /* On 32-bit hosts, QEMU is limited by virtual address space */
>> 4329         if (ram_size > (2047 << 20) && HOST_LONG_BITS == 32) {
>> 4330             error_report("at most 2047 MB RAM can be simulated");
>> 4331             exit(1);
>> 4332         }
>> 4333     }
>> ....
>> 4348     machine_run_board_init(current_machine);
>>
>> which calls in hw/core/machine.c:
>>
>> 1089 void machine_run_board_init(MachineState *machine)
>> 1090 {
>> 1091     MachineClass *machine_class = MACHINE_GET_CLASS(machine);
>> ....
>> 1138     machine_class->init(machine);
>> 1139 }
>>
>>          -> pc_machine_class_init()
>>
>> This should come after:
>>
>>          -> pc_machine_initfn()
>>
>>             -> pc_system_flash_create(pcms)
> 
> Sorry, I'm not following the flow you want.

Well, I was simply trying to understand what was wrong, and
what we should fix so you don't have to create flash devices
then destroy them when using Xen.

I already said Paul's patch is OK and gave my R-b to it.

> 
> The xen_enabled() call in vl.c may always fail.  Or at least in most
> common Xen usage.  Xen HVMs are started with machine xenfv and don't
> specify an accelerator.  You need to process the xenfv default machine
> opts to process "accel=xen".  Actually, I don't know how the
> accelerator initialization works, but xen_accel_class_init() needs to
> be called to set `ac->allowed = &xen_allowed`.  xen_allowed is the
> return value of xen_enabled() and the accelerator initialization must
> toggle it.

Since you are happy with how this currently works, I'm also fine with it;
I won't investigate further.

Regards,

Phil.



From xen-devel-bounces@lists.xenproject.org Wed Jul 01 15:12:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jul 2020 15:12:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jqePR-0004AN-EC; Wed, 01 Jul 2020 15:12:25 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Rv6a=AM=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jqePP-0004AB-JX
 for xen-devel@lists.xenproject.org; Wed, 01 Jul 2020 15:12:23 +0000
X-Inumbo-ID: 4303754e-bbad-11ea-872a-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4303754e-bbad-11ea-872a-12813bfff9fa;
 Wed, 01 Jul 2020 15:12:21 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=yn7M2OIusXG4MVM86Xccd+b1UVizo0B7lyhglVXG+xk=; b=aaQxxsUfn0VwWhg15KZz+d4LIQ
 pY7ghl04oSmPQWx455rmCdUBFH0PS8R45nb78JCxC/pJ1hJW+tCSMq84RV2Ul3Eqjr0e79EbPympi
 UkVzw8h55Ju72K/e0oaOWwcA5kx4TeK5gJv166Wk1ySmCfLBQ/eGnvrCGWMB1datBzBI=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jqePI-0004jZ-L6; Wed, 01 Jul 2020 15:12:16 +0000
Received: from [54.239.6.179] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jqePI-0007cU-9H; Wed, 01 Jul 2020 15:12:16 +0000
Subject: Re: [PATCH v4 02/10] x86/vmx: add IPT cpu feature
To: =?UTF-8?Q?Micha=c5=82_Leszczy=c5=84ski?= <michal.leszczynski@cert.pl>,
 xen-devel@lists.xenproject.org
References: <cover.1593519420.git.michal.leszczynski@cert.pl>
 <7302dbfcd07dfaad9e50bb772673e588fcc4de67.1593519420.git.michal.leszczynski@cert.pl>
From: Julien Grall <julien@xen.org>
Message-ID: <85416128-a334-4640-7504-0865f715b3a2@xen.org>
Date: Wed, 1 Jul 2020 16:12:12 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:68.0)
 Gecko/20100101 Thunderbird/68.9.0
MIME-Version: 1.0
In-Reply-To: <7302dbfcd07dfaad9e50bb772673e588fcc4de67.1593519420.git.michal.leszczynski@cert.pl>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin Tian <kevin.tian@intel.com>,
 Stefano Stabellini <sstabellini@kernel.org>, tamas.lengyel@intel.com,
 Jun Nakajima <jun.nakajima@intel.com>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 luwei.kang@intel.com, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 30/06/2020 13:33, Michał Leszczyński wrote:
> @@ -305,7 +311,6 @@ static int vmx_init_vmcs_config(void)
>                  SECONDARY_EXEC_ENABLE_VIRT_EXCEPTIONS |
>                  SECONDARY_EXEC_XSAVES |
>                  SECONDARY_EXEC_TSC_SCALING);
> -        rdmsrl(MSR_IA32_VMX_MISC, _vmx_misc_cap);
>           if ( _vmx_misc_cap & VMX_MISC_VMWRITE_ALL )
>               opt |= SECONDARY_EXEC_ENABLE_VMCS_SHADOWING;
>           if ( opt_vpid_enabled )
> diff --git a/xen/common/domain.c b/xen/common/domain.c
> index 7cc9526139..0a33e0dfd6 100644
> --- a/xen/common/domain.c
> +++ b/xen/common/domain.c
> @@ -82,6 +82,8 @@ struct vcpu *idle_vcpu[NR_CPUS] __read_mostly;
>   
>   vcpu_info_t dummy_vcpu_info;
>   
> +bool_t vmtrace_supported;

All the code looks x86-specific. So may I ask why this was implemented 
in common code?

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Jul 01 15:36:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jul 2020 15:36:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jqemP-0005uw-Fs; Wed, 01 Jul 2020 15:36:09 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Rv6a=AM=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jqemO-0005ur-IF
 for xen-devel@lists.xenproject.org; Wed, 01 Jul 2020 15:36:08 +0000
X-Inumbo-ID: 93c49bc2-bbb0-11ea-bb8b-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 93c49bc2-bbb0-11ea-bb8b-bc764e2007e4;
 Wed, 01 Jul 2020 15:36:05 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=fdKPh9+uJZZ56Z33Me4yP5f8EtRE35rWQhI3cWDExAw=; b=N/q88qtGYkqnw+K1FwugODuRFs
 1nB6Fw5zxbgpyLILgcBHj3hHcKRzxqODAz9gHzs32KifBGdlmKHJugl/Xfgv23iWB9jwQVsQBYMZm
 FiYX99NU5/IRPnFsb8xOdCpo4JDi1GKoX0TfQjAz7CMkwRnXD4Zbk90OV5qbAl+ulndA=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jqemF-0005Ay-G9; Wed, 01 Jul 2020 15:35:59 +0000
Received: from [54.239.6.177] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jqemF-0000JG-6Q; Wed, 01 Jul 2020 15:35:59 +0000
Subject: Re: [PATCH v4 05/10] common/domain: allocate vmtrace_pt_buffer
To: =?UTF-8?Q?Micha=c5=82_Leszczy=c5=84ski?= <michal.leszczynski@cert.pl>,
 xen-devel@lists.xenproject.org
References: <cover.1593519420.git.michal.leszczynski@cert.pl>
 <0e02c97054da6e367f740ab8d2574e2d255553c8.1593519420.git.michal.leszczynski@cert.pl>
From: Julien Grall <julien@xen.org>
Message-ID: <3c710ce8-c561-fd73-3be8-a92456588db9@xen.org>
Date: Wed, 1 Jul 2020 16:35:56 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:68.0)
 Gecko/20100101 Thunderbird/68.9.0
MIME-Version: 1.0
In-Reply-To: <0e02c97054da6e367f740ab8d2574e2d255553c8.1593519420.git.michal.leszczynski@cert.pl>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, tamas.lengyel@intel.com,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 luwei.kang@intel.com, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi,

On 30/06/2020 13:33, Michał Leszczyński wrote:
> +static int vmtrace_alloc_buffers(struct vcpu *v)
> +{
> +    struct page_info *pg;
> +    uint64_t size = v->domain->vmtrace_pt_size;
> +
> +    if ( size < PAGE_SIZE || size > GB(4) || (size & (size - 1)) )
> +    {
> +        /*
> +         * We don't accept trace buffer size smaller than single page
> +         * and the upper bound is defined as 4GB in the specification.

This is common code, so what specification are you talking about?

I am guessing this is an Intel one, but I don't think Intel should 
dictate the common code implementation.

> +         * The buffer size must be also a power of 2.
> +         */
> +        return -EINVAL;
> +    }
> +
> +    pg = alloc_domheap_pages(v->domain, get_order_from_bytes(size),
> +                             MEMF_no_refcount);
> +
> +    if ( !pg )
> +        return -ENOMEM;
> +
> +    v->arch.vmtrace.pt_buf = pg;

v->arch.vmtrace.pt_buf is not defined on Arm. Please make sure common 
code builds on all architectures.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Jul 01 15:42:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jul 2020 15:42:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jqest-0006mO-9i; Wed, 01 Jul 2020 15:42:51 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=iA1B=AM=gmail.com=brgerst@srs-us1.protection.inumbo.net>)
 id 1jqess-0006mJ-D4
 for xen-devel@lists.xenproject.org; Wed, 01 Jul 2020 15:42:50 +0000
X-Inumbo-ID: 84a69a7c-bbb1-11ea-bb8b-bc764e2007e4
Received: from mail-io1-xd43.google.com (unknown [2607:f8b0:4864:20::d43])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 84a69a7c-bbb1-11ea-bb8b-bc764e2007e4;
 Wed, 01 Jul 2020 15:42:49 +0000 (UTC)
Received: by mail-io1-xd43.google.com with SMTP id o5so25431521iow.8
 for <xen-devel@lists.xenproject.org>; Wed, 01 Jul 2020 08:42:49 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc; bh=hvie/6BG7o1swX84/UM5fLpFCqjSlLJ7YxyaK37mWfQ=;
 b=Bq1vY63RSI+AVQWedtGjs4OUW2V0itIttXR/Ci9bU+UKtI2BWdpUf4sDz96BGrSGyR
 V/UHlS3aV5SiS1fho1aDTwbVxTp9t9zupgusW9YlBOKSFD1s5HwzZCEnfkzKUo4houln
 Cb9GusQWPx+4H9uO3kYPoNxDNhXwNZRfOArcBfdPzeZujuNW7QbqNM+E5s1nPvPswtiE
 yalC+9Rq/obRStT18qIsbrsHaiLJfmIufmx2kDpKPpibWV64gwBlaNJtvv+rkyGKxq/D
 QNlSZpiBWDiBf25Mu9KGz+xPLsEN9KPmrpg9RuyfbJ9gKv5/mawluJJ9fmVMi2Zym+aS
 HSIQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc;
 bh=hvie/6BG7o1swX84/UM5fLpFCqjSlLJ7YxyaK37mWfQ=;
 b=p6fLT44+iaL5fIdVJHgY3FAfi3r7wGTnBQgOHWk7JN/thD+OwLjxDIpiiiQgEg1GmZ
 cXKtSSY57XIpt7ilbMqIhtCH4NpYhtTj2la1oeExD+M0UkAq7AGrORuJs4yVaXenmIm0
 8BXEq3LOcrYWH97uSOQxMt9g0h6CumVwZE+ewq/nAtoZkc+8DWxImoQvszhNgPPIEgvK
 RJoG6w+OZVMhKmPye8TJ7jN/LGB34jG3MMzkxY0fkmtwAeIa0ejorgxntHtzhdlaPvw1
 dd7+M2Mwad87rOJG2jGYD4+PUWfcrX9Zhtd1ur1ijrOsJdhUIVVGE4/gJxa+h+YRy9ra
 cwQg==
X-Gm-Message-State: AOAM533ioP/JKaiyzXK77eVsLW6D7OjRqGvJeHb10e7VsKc08zsG52De
 AYZqayNcWOOBXdCmCWS6qynCKD1bLvXj36lzcw==
X-Google-Smtp-Source: ABdhPJwY58186upJGjF+++Cg40KulQrFPTck9+8exmcfwwDU6enLRHtvLKw+YqEw0n/2SyAqb90qlC0X5+uiFSQ+OaU=
X-Received: by 2002:a5e:d90c:: with SMTP id n12mr2832816iop.144.1593618168297; 
 Wed, 01 Jul 2020 08:42:48 -0700 (PDT)
MIME-Version: 1.0
References: <cover.1593191971.git.luto@kernel.org>
 <947880c41ade688ff4836f665d0c9fcaa9bd1201.1593191971.git.luto@kernel.org>
In-Reply-To: <947880c41ade688ff4836f665d0c9fcaa9bd1201.1593191971.git.luto@kernel.org>
From: Brian Gerst <brgerst@gmail.com>
Date: Wed, 1 Jul 2020 11:42:37 -0400
Message-ID: <CAMzpN2iW4XD1Gsgq0ZeeH2eewLO+9Mk6eyk0LnbF-kP3v=smLg@mail.gmail.com>
Subject: Re: [PATCH 3/6] x86/entry/64/compat: Fix Xen PV SYSENTER frame setup
To: Andy Lutomirski <luto@kernel.org>
Content-Type: text/plain; charset="UTF-8"
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 the arch/x86 maintainers <x86@kernel.org>,
 Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
 xen-devel <xen-devel@lists.xenproject.org>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Fri, Jun 26, 2020 at 1:30 PM Andy Lutomirski <luto@kernel.org> wrote:
>
> The SYSENTER frame setup was nonsense.  It worked by accident
> because the normal code into which the Xen asm jumped
> (entry_SYSENTER_32/compat) threw away SP without touching the stack.
> entry_SYSENTER_compat was recently modified such that it relied on
> having a valid stack pointer, so now the Xen asm needs to invoke it
> with a valid stack.
>
> Fix it up like SYSCALL: use the Xen-provided frame and skip the bare
> metal prologue.
>
> Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
> Cc: Juergen Gross <jgross@suse.com>
> Cc: Stefano Stabellini <sstabellini@kernel.org>
> Cc: xen-devel@lists.xenproject.org
> Fixes: 1c3e5d3f60e2 ("x86/entry: Make entry_64_compat.S objtool clean")
> Signed-off-by: Andy Lutomirski <luto@kernel.org>
> ---
>  arch/x86/entry/entry_64_compat.S |  1 +
>  arch/x86/xen/xen-asm_64.S        | 20 ++++++++++++++++----
>  2 files changed, 17 insertions(+), 4 deletions(-)
>
> diff --git a/arch/x86/entry/entry_64_compat.S b/arch/x86/entry/entry_64_compat.S
> index 7b9d8150f652..381a6de7de9c 100644
> --- a/arch/x86/entry/entry_64_compat.S
> +++ b/arch/x86/entry/entry_64_compat.S
> @@ -79,6 +79,7 @@ SYM_CODE_START(entry_SYSENTER_compat)
>         pushfq                          /* pt_regs->flags (except IF = 0) */
>         pushq   $__USER32_CS            /* pt_regs->cs */
>         pushq   $0                      /* pt_regs->ip = 0 (placeholder) */
> +SYM_INNER_LABEL(entry_SYSENTER_compat_after_hwframe, SYM_L_GLOBAL)

This skips over the section that truncates the syscall number to
32 bits.  The comments present some doubt that it is actually
necessary, but the Xen path shouldn't differ from native.  That code
should be moved after this new label.

--
Brian Gerst


From xen-devel-bounces@lists.xenproject.org Wed Jul 01 16:07:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jul 2020 16:07:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jqfFz-0000e1-AB; Wed, 01 Jul 2020 16:06:43 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Xe6U=AM=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jqfFx-0000dw-Sy
 for xen-devel@lists.xenproject.org; Wed, 01 Jul 2020 16:06:41 +0000
X-Inumbo-ID: d869d8f6-bbb4-11ea-8733-12813bfff9fa
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d869d8f6-bbb4-11ea-8733-12813bfff9fa;
 Wed, 01 Jul 2020 16:06:39 +0000 (UTC)
Authentication-Results: esa1.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: XYdSQsJ1TLytDbKUdkoLb8wITnvE7AAdptQsFMitKD4MMAIQJw/X2Aul1vubGwu6HqLpCFZU1e
 bY57isVcdQLvQBgQ/YTmnsB4xT4vlj3ITCdXaVI+ZY95CAAmoEEJfIm7yEXZzPmU0jClzM/FZX
 pUrvRtmG8cpVo+0OcWdAtYaZSMXXZfIEiGk3bR3eeZrKajrNz6Uhu+f4rO4Crzrusa3m25DPTY
 P21cHQcrZppGJBlL6OR38Sv3xahODqLdKng5yEe9+FNVmEGpCi9lN9ZNAiSoIciEdtvJ5Vz8iQ
 xsw=
X-SBRS: 2.7
X-MesageID: 21724774
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,300,1589256000"; d="scan'208";a="21724774"
Subject: Re: [PATCH v4 02/10] x86/vmx: add IPT cpu feature
To: Julien Grall <julien@xen.org>, =?UTF-8?Q?Micha=c5=82_Leszczy=c5=84ski?=
 <michal.leszczynski@cert.pl>, <xen-devel@lists.xenproject.org>
References: <cover.1593519420.git.michal.leszczynski@cert.pl>
 <7302dbfcd07dfaad9e50bb772673e588fcc4de67.1593519420.git.michal.leszczynski@cert.pl>
 <85416128-a334-4640-7504-0865f715b3a2@xen.org>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <48c59780-bedb-ff08-723c-be14a9b73e6b@citrix.com>
Date: Wed, 1 Jul 2020 17:06:25 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <85416128-a334-4640-7504-0865f715b3a2@xen.org>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin Tian <kevin.tian@intel.com>,
 Stefano Stabellini <sstabellini@kernel.org>, tamas.lengyel@intel.com,
 Jun Nakajima <jun.nakajima@intel.com>, Wei Liu <wl@xen.org>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 luwei.kang@intel.com, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 01/07/2020 16:12, Julien Grall wrote:
> On 30/06/2020 13:33, Michał Leszczyński wrote:
>> @@ -305,7 +311,6 @@ static int vmx_init_vmcs_config(void)
>>                  SECONDARY_EXEC_ENABLE_VIRT_EXCEPTIONS |
>>                  SECONDARY_EXEC_XSAVES |
>>                  SECONDARY_EXEC_TSC_SCALING);
>> -        rdmsrl(MSR_IA32_VMX_MISC, _vmx_misc_cap);
>>           if ( _vmx_misc_cap & VMX_MISC_VMWRITE_ALL )
>>               opt |= SECONDARY_EXEC_ENABLE_VMCS_SHADOWING;
>>           if ( opt_vpid_enabled )
>> diff --git a/xen/common/domain.c b/xen/common/domain.c
>> index 7cc9526139..0a33e0dfd6 100644
>> --- a/xen/common/domain.c
>> +++ b/xen/common/domain.c
>> @@ -82,6 +82,8 @@ struct vcpu *idle_vcpu[NR_CPUS] __read_mostly;
>>     vcpu_info_t dummy_vcpu_info;
>>   +bool_t vmtrace_supported;
>
> All the code looks x86-specific. So may I ask why this was implemented
> in common code?

There were some questions directed specifically at the ARM maintainers
about CoreSight, which have gone unanswered.

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed Jul 01 16:11:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jul 2020 16:11:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jqfKK-0001SN-U0; Wed, 01 Jul 2020 16:11:12 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=w4aC=AM=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jqfKJ-0001S1-O2
 for xen-devel@lists.xenproject.org; Wed, 01 Jul 2020 16:11:11 +0000
X-Inumbo-ID: 7a4300c6-bbb5-11ea-8734-12813bfff9fa
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7a4300c6-bbb5-11ea-8734-12813bfff9fa;
 Wed, 01 Jul 2020 16:11:11 +0000 (UTC)
Authentication-Results: esa2.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: B1RYPuBoJVN8t7cQ6qP2Qhx8loHOKBLS+msp4YB+uSmkkftFZOpWxvTXG2IAFgqvlKPo/SJQWm
 o5eH1rSKqB3LeF8rE9m7n3z63K06GFWJAGBdMzbEzjoYQJya48DE/wM7Q3qBxmYuDronu3/Pp9
 pqISW00EHB6278iYIs4cLZGNq+wevp2rIZ5kNL24AJFvbBe7c9S0xmwSbcU7D8cNO2aTX1Fuan
 P4ETXU1dCbh15G2n2op8LRaJLeiKniGLTUq4YwI6wL/fGYhU5QQ8sjYjGayFXgQPChdW/3an59
 2k8=
X-SBRS: 2.7
X-MesageID: 21420140
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,300,1589256000"; d="scan'208";a="21420140"
Date: Wed, 1 Jul 2020 18:10:57 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Subject: Re: [PATCH v2 1/7] x86: fix compat header generation
Message-ID: <20200701161057.GV735@Air-de-Roger>
References: <bb6a96c6-b6b1-76ff-f9db-10bec0fb4ab1@suse.com>
 <a8139d0e-f332-b877-dea8-3ce8a6869285@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
In-Reply-To: <a8139d0e-f332-b877-dea8-3ce8a6869285@suse.com>
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Paul Durrant <paul@xen.org>,
 George Dunlap <George.Dunlap@eu.citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Wed, Jul 01, 2020 at 12:25:15PM +0200, Jan Beulich wrote:
> As was pointed out by 0e2e54966af5 ("mm: fix public declaration of
> struct xen_mem_acquire_resource"), we're not currently handling structs
> correctly that have uint64_aligned_t fields. #pragma pack(4) suppresses
> the necessary alignment even if the type did properly survive (which
> it also didn't) in the process of generating the headers. Overall,
> with the above mentioned change applied, there's only a latent issue
> here afaict, i.e. no other of our interface structs is currently
> affected.
> 
> As a result it is clear that using #pragma pack(4) is not an option.
> Drop all uses from compat header generation. Make sure
> {,u}int64_aligned_t actually survives, such that explicitly aligned
> fields will remain aligned. Arrange for {,u}int64_t to be transformed
> into a type that's 64 bits wide and 4-byte aligned, by utilizing that
> in typedef-s the "aligned" attribute can be used to reduce alignment.
> Additionally, for the cases where native structures get re-used,
> enforce suitable alignment via typedef-s (which allow alignment to be
> reduced).
> 
> This use of typedef-s makes necessary changes to CHECK_*() macro
> generation: Previously get-fields.sh relied on finding struct/union
> keywords when other compound types were used. We now need to use the
> typedef-s (guaranteeing suitable alignment) now, and hence the script

Extra "now" before the comma, I think.

> has to recognize those cases, too. (Unfortunately there are a few
> special cases to be dealt with, but this is really not much different
> from e.g. the pre-existing compat_domain_handle_t special case.)
> 
> This need to use typedef-s is certainly somewhat fragile going forward,
> as in similar future cases it is imperative to also use typedef-s, or
> else the CHECK_*() macros won't check what they're supposed to check. I
> don't currently see any means to avoid this fragility, though.
> 
> There's one change to generated code according to my observations: In
> arch_compat_vcpu_op() the runstate area "area" variable would previously
> have been put in a just 4-byte aligned stack slot (despite being 8 bytes
> in size), whereas now it gets put in an 8-byte aligned location.
> 
> There also results some curious inconsistency in struct xen_mc from
> these changes - I intend to clean this up later on. Otherwise unrelated
> code would also need adjustment right here.

Oh, so that's the reason fields in xen_mc are not all switched to use
their typedef equivalents, I guess?

> --- a/xen/tools/get-fields.sh
> +++ b/xen/tools/get-fields.sh
> @@ -418,6 +418,21 @@ check_field ()
>  			"}")
>  				level=$(expr $level - 1) id=
>  				;;
> +			compat_*_t)
> +				if [ $level = 2 ]
> +				then
> +					fields=" "
> +					token="${token%_t}"
> +					token="${token#compat_}"
> +				fi
> +				;;
> +			evtchn_*_compat_t)
> +				if [ $level = 2 -a $token != evtchn_port_compat_t ]
> +				then
> +					fields=" "
> +					token="${token%_compat_t}"
> +				fi
> +				;;

Likely related to the above, but I assume we might want to add a check
here to assert no struct fields are used?

I assume this is not added here in order to prevent exploding due to
the xen_mc issues.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Wed Jul 01 16:17:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jul 2020 16:17:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jqfQk-0001e2-Jh; Wed, 01 Jul 2020 16:17:50 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=LFmw=AM=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jqfQk-0001dx-5i
 for xen-devel@lists.xenproject.org; Wed, 01 Jul 2020 16:17:50 +0000
X-Inumbo-ID: 67654db4-bbb6-11ea-8738-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 67654db4-bbb6-11ea-8738-12813bfff9fa;
 Wed, 01 Jul 2020 16:17:48 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id CD669ACC3;
 Wed,  1 Jul 2020 16:17:47 +0000 (UTC)
Subject: Re: [PATCH v2 1/7] x86: fix compat header generation
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <bb6a96c6-b6b1-76ff-f9db-10bec0fb4ab1@suse.com>
 <a8139d0e-f332-b877-dea8-3ce8a6869285@suse.com>
 <20200701161057.GV735@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <76441369-a9aa-3d2d-9a65-c4a5fd20f6f1@suse.com>
Date: Wed, 1 Jul 2020 18:17:47 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.9.0
MIME-Version: 1.0
In-Reply-To: <20200701161057.GV735@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Paul Durrant <paul@xen.org>,
 George Dunlap <George.Dunlap@eu.citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 01.07.2020 18:10, Roger Pau Monné wrote:
> On Wed, Jul 01, 2020 at 12:25:15PM +0200, Jan Beulich wrote:
>> As was pointed out by 0e2e54966af5 ("mm: fix public declaration of
>> struct xen_mem_acquire_resource"), we're not currently handling structs
>> correctly that have uint64_aligned_t fields. #pragma pack(4) suppresses
>> the necessary alignment even if the type did properly survive (which
>> it also didn't) in the process of generating the headers. Overall,
>> with the above mentioned change applied, there's only a latent issue
>> here afaict, i.e. no other of our interface structs is currently
>> affected.
>>
>> As a result it is clear that using #pragma pack(4) is not an option.
>> Drop all uses from compat header generation. Make sure
>> {,u}int64_aligned_t actually survives, such that explicitly aligned
>> fields will remain aligned. Arrange for {,u}int64_t to be transformed
>> into a type that's 64 bits wide and 4-byte aligned, by utilizing that
>> in typedef-s the "aligned" attribute can be used to reduce alignment.
>> Additionally, for the cases where native structures get re-used,
>> enforce suitable alignment via typedef-s (which allow alignment to be
>> reduced).
>>
>> This use of typedef-s makes necessary changes to CHECK_*() macro
>> generation: Previously get-fields.sh relied on finding struct/union
>> keywords when other compound types were used. We now need to use the
>> typedef-s (guaranteeing suitable alignment) now, and hence the script
> 
> Extra "now" before the comma, I think.
> 
>> has to recognize those cases, too. (Unfortunately there are a few
>> special cases to be dealt with, but this is really not much different
>> from e.g. the pre-existing compat_domain_handle_t special case.)
>>
>> This need to use typedef-s is certainly somewhat fragile going forward,
>> as in similar future cases it is imperative to also use typedef-s, or
>> else the CHECK_*() macros won't check what they're supposed to check. I
>> don't currently see any means to avoid this fragility, though.
>>
>> There's one change to generated code according to my observations: In
>> arch_compat_vcpu_op() the runstate area "area" variable would previously
>> have been put in a just 4-byte aligned stack slot (despite being 8 bytes
>> in size), whereas now it gets put in an 8-byte aligned location.
>>
>> There also results some curious inconsistency in struct xen_mc from
>> these changes - I intend to clean this up later on. Otherwise unrelated
>> code would also need adjustment right here.
> 
> Oh, so that's the reason fields in xen_mc are not all switched to use
> their typedef equivalents, I guess?

Yes - see patches later in the series, which take care of the anomaly.

>> --- a/xen/tools/get-fields.sh
>> +++ b/xen/tools/get-fields.sh
>> @@ -418,6 +418,21 @@ check_field ()
>>  			"}")
>>  				level=$(expr $level - 1) id=
>>  				;;
>> +			compat_*_t)
>> +				if [ $level = 2 ]
>> +				then
>> +					fields=" "
>> +					token="${token%_t}"
>> +					token="${token#compat_}"
>> +				fi
>> +				;;
>> +			evtchn_*_compat_t)
>> +				if [ $level = 2 -a $token != evtchn_port_compat_t ]
>> +				then
>> +					fields=" "
>> +					token="${token%_compat_t}"
>> +				fi
>> +				;;
> 
> Likely related to the above, but I assume we might want to add a check
> here to assert no struct fields are used?

I think we could, but have you found similar assertions
elsewhere? There being any fields would, aiui, indicate a syntax
violation (or else $level can't be 2), and I'd rather leave
catching these to the compiler.
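
For reference, the POSIX parameter expansions used in the two new cases of the hunk above can be exercised on their own; the token values here are made up for illustration:

```shell
#!/bin/sh
# compat_*_t case: strip the trailing _t, then the leading compat_.
token="compat_vcpu_runstate_info_t"
token="${token%_t}"        # -> compat_vcpu_runstate_info
token="${token#compat_}"   # -> vcpu_runstate_info
echo "$token"

# evtchn_*_compat_t case: only the _compat_t suffix is stripped.
etoken="evtchn_expand_array_compat_t"
etoken="${etoken%_compat_t}"  # -> evtchn_expand_array
echo "$etoken"
```

`%pattern` removes the shortest matching suffix and `#pattern` the shortest matching prefix, which is why the two cases need different stripping orders.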

> I assume this is not added here in order to prevent exploding due to
> the xen_mc issues.

I don't think it would, as it continues handling struct/union
just fine. (We may want to drop this support, to enforce the
use of only typedef-s, but I'm not sure _that_ wouldn't
explode.)

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jul 01 16:18:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jul 2020 16:18:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jqfR1-0001gq-WE; Wed, 01 Jul 2020 16:18:08 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Rv6a=AM=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jqfR0-0001gZ-Rj
 for xen-devel@lists.xenproject.org; Wed, 01 Jul 2020 16:18:06 +0000
X-Inumbo-ID: 72259240-bbb6-11ea-bb8b-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 72259240-bbb6-11ea-bb8b-bc764e2007e4;
 Wed, 01 Jul 2020 16:18:06 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=tGcLBPxqIUkrZiJIKx9/dwoobGuMd2YvL+03LZJ/0oQ=; b=JmAYMD18ljQ3t0+trXLv7LAABV
 Rabo8k5AFuq3iO6bEcSFBxEGyWowskNMIeZ+mndLUOo15giIH1f0qN/0hNgui7KHaoo+VULMBjden
 +zPXC77vkcszVy9dawmJAKMJghoH9IST6/WzQGz2YpUDuN5/0ThnU94fNUAIhMy/uEg4=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jqfQu-0006UL-Lw; Wed, 01 Jul 2020 16:18:00 +0000
Received: from [54.239.6.178] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jqfQu-0002tH-Bs; Wed, 01 Jul 2020 16:18:00 +0000
Subject: Re: [PATCH v4 02/10] x86/vmx: add IPT cpu feature
To: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Micha=c5=82_Leszczy=c5=84ski?= <michal.leszczynski@cert.pl>,
 xen-devel@lists.xenproject.org
References: <cover.1593519420.git.michal.leszczynski@cert.pl>
 <7302dbfcd07dfaad9e50bb772673e588fcc4de67.1593519420.git.michal.leszczynski@cert.pl>
 <85416128-a334-4640-7504-0865f715b3a2@xen.org>
 <48c59780-bedb-ff08-723c-be14a9b73e6b@citrix.com>
From: Julien Grall <julien@xen.org>
Message-ID: <f2aa4cf9-0689-82c0-cb6c-55d55ecbd5c1@xen.org>
Date: Wed, 1 Jul 2020 17:17:56 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:68.0)
 Gecko/20100101 Thunderbird/68.9.0
MIME-Version: 1.0
In-Reply-To: <48c59780-bedb-ff08-723c-be14a9b73e6b@citrix.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin Tian <kevin.tian@intel.com>,
 Stefano Stabellini <sstabellini@kernel.org>, tamas.lengyel@intel.com,
 Jun Nakajima <jun.nakajima@intel.com>, Wei Liu <wl@xen.org>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 luwei.kang@intel.com, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>



On 01/07/2020 17:06, Andrew Cooper wrote:
> On 01/07/2020 16:12, Julien Grall wrote:
>> On 30/06/2020 13:33, Michał Leszczyński wrote:
>>> @@ -305,7 +311,6 @@ static int vmx_init_vmcs_config(void)
>>>                   SECONDARY_EXEC_ENABLE_VIRT_EXCEPTIONS |
>>>                   SECONDARY_EXEC_XSAVES |
>>>                   SECONDARY_EXEC_TSC_SCALING);
>>> -        rdmsrl(MSR_IA32_VMX_MISC, _vmx_misc_cap);
>>>            if ( _vmx_misc_cap & VMX_MISC_VMWRITE_ALL )
>>>                opt |= SECONDARY_EXEC_ENABLE_VMCS_SHADOWING;
>>>            if ( opt_vpid_enabled )
>>> diff --git a/xen/common/domain.c b/xen/common/domain.c
>>> index 7cc9526139..0a33e0dfd6 100644
>>> --- a/xen/common/domain.c
>>> +++ b/xen/common/domain.c
>>> @@ -82,6 +82,8 @@ struct vcpu *idle_vcpu[NR_CPUS] __read_mostly;
>>>      vcpu_info_t dummy_vcpu_info;
>>>    +bool_t vmtrace_supported;
>>
>> All the code looks x86 specific. So may I ask why this was implemented
>> in common code?
> 
> There were some questions directed specifically at the ARM maintainers
> about CoreSight, which have gone unanswered.

I can only find one question related to the size. Is there any other?

I don't know what the interface will look like, given that AFAICT the 
buffer may be embedded in the HW. We would need to investigate how to 
differentiate between two domUs in this case without impacting the 
performance in the common code.

So I think it is a little premature to implement this in common code, 
always compiled in for Arm. It would be best if this stayed in x86 code.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Jul 01 16:19:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jul 2020 16:19:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jqfRu-0001nv-AU; Wed, 01 Jul 2020 16:19:02 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Rv6a=AM=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jqfRt-0001nm-26
 for xen-devel@lists.xenproject.org; Wed, 01 Jul 2020 16:19:01 +0000
X-Inumbo-ID: 91419279-bbb6-11ea-8738-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 91419279-bbb6-11ea-8738-12813bfff9fa;
 Wed, 01 Jul 2020 16:18:59 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:References:Cc:To:From:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=MpkcReJIcRYvzjdVBZCtaSIuIjPMNFRSm0tOUODVcFI=; b=dajzQCIrwIwWlrrOFzNWY1kf7c
 aJHh5mfv05tTvW1enJ+JWXj7+gceRwG2y9Tx3rKiXeGGFRBYq5Pl2xBZmWfe1PcyPv6ViTuvT80mi
 Wb6ZV7FPaPFyAQrm4jqcPpQvT9KbG3NXQB54iol6rJqOrPNGeI69nUNnoazjaGgdUAc0=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jqfRn-0006VS-PK; Wed, 01 Jul 2020 16:18:55 +0000
Received: from [54.239.6.177] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jqfRn-0002ve-Hc; Wed, 01 Jul 2020 16:18:55 +0000
Subject: Re: [PATCH v4 02/10] x86/vmx: add IPT cpu feature
From: Julien Grall <julien@xen.org>
To: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Micha=c5=82_Leszczy=c5=84ski?= <michal.leszczynski@cert.pl>,
 xen-devel@lists.xenproject.org
References: <cover.1593519420.git.michal.leszczynski@cert.pl>
 <7302dbfcd07dfaad9e50bb772673e588fcc4de67.1593519420.git.michal.leszczynski@cert.pl>
 <85416128-a334-4640-7504-0865f715b3a2@xen.org>
 <48c59780-bedb-ff08-723c-be14a9b73e6b@citrix.com>
 <f2aa4cf9-0689-82c0-cb6c-55d55ecbd5c1@xen.org>
Message-ID: <a9a33ba1-b121-5e6f-b74c-7d2a60c84b13@xen.org>
Date: Wed, 1 Jul 2020 17:18:52 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:68.0)
 Gecko/20100101 Thunderbird/68.9.0
MIME-Version: 1.0
In-Reply-To: <f2aa4cf9-0689-82c0-cb6c-55d55ecbd5c1@xen.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin Tian <kevin.tian@intel.com>,
 Stefano Stabellini <sstabellini@kernel.org>, tamas.lengyel@intel.com,
 Jun Nakajima <jun.nakajima@intel.com>, Wei Liu <wl@xen.org>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 luwei.kang@intel.com, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>



On 01/07/2020 17:17, Julien Grall wrote:
> 
> 
> On 01/07/2020 17:06, Andrew Cooper wrote:
>> On 01/07/2020 16:12, Julien Grall wrote:
>>> On 30/06/2020 13:33, Michał Leszczyński wrote:
>>>> @@ -305,7 +311,6 @@ static int vmx_init_vmcs_config(void)
>>>>                   SECONDARY_EXEC_ENABLE_VIRT_EXCEPTIONS |
>>>>                   SECONDARY_EXEC_XSAVES |
>>>>                   SECONDARY_EXEC_TSC_SCALING);
>>>> -        rdmsrl(MSR_IA32_VMX_MISC, _vmx_misc_cap);
>>>>            if ( _vmx_misc_cap & VMX_MISC_VMWRITE_ALL )
>>>>                opt |= SECONDARY_EXEC_ENABLE_VMCS_SHADOWING;
>>>>            if ( opt_vpid_enabled )
>>>> diff --git a/xen/common/domain.c b/xen/common/domain.c
>>>> index 7cc9526139..0a33e0dfd6 100644
>>>> --- a/xen/common/domain.c
>>>> +++ b/xen/common/domain.c
>>>> @@ -82,6 +82,8 @@ struct vcpu *idle_vcpu[NR_CPUS] __read_mostly;
>>>>      vcpu_info_t dummy_vcpu_info;
>>>>    +bool_t vmtrace_supported;
>>>
>>> All the code looks x86 specific. So may I ask why this was implemented
>>> in common code?
>>
>> There were some questions directed specifically at the ARM maintainers
>> about CoreSight, which have gone unanswered.
> 
> I can only find one question related to the size. Is there any other?
> 
> I don't know what the interface will look like, given that AFAICT the 
> buffer may be embedded in the HW. We would need to investigate how to 
> differentiate between two domUs in this case without impacting the 
> performance in the common code.

s/in the common code/during the context switch/

> So I think it is a little premature to implement this in common code, 
> always compiled in for Arm. It would be best if this stayed in x86 code.
> 
> Cheers,
> 

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Jul 01 16:34:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jul 2020 16:34:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jqfgX-0003Tk-Lk; Wed, 01 Jul 2020 16:34:09 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=iA1B=AM=gmail.com=brgerst@srs-us1.protection.inumbo.net>)
 id 1jqfgW-0003Tf-5Y
 for xen-devel@lists.xenproject.org; Wed, 01 Jul 2020 16:34:08 +0000
X-Inumbo-ID: af31e90c-bbb8-11ea-bb8b-bc764e2007e4
Received: from mail-io1-xd41.google.com (unknown [2607:f8b0:4864:20::d41])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id af31e90c-bbb8-11ea-bb8b-bc764e2007e4;
 Wed, 01 Jul 2020 16:34:07 +0000 (UTC)
Received: by mail-io1-xd41.google.com with SMTP id i4so25654137iov.11
 for <xen-devel@lists.xenproject.org>; Wed, 01 Jul 2020 09:34:07 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc; bh=MoybjELZfC8f+lXSCtoUVME2X9MJfIpV+bdz3fRbLPs=;
 b=UIQesNKyPGHb0LsgKEkEqKC04ivhJDSLff+nAYrcTcn1rxfacGjyvsw8a6MHdkh2OU
 dysTn5Deu/gEZ6sPLxswsTnoXY8MNBef2h7eQigzrb8i+UA/fScu+ZhchmeakIiuK9y5
 lWUtPL1yADHz4EeusxKEafPF/8hJSssq923JNp/E/Yybr3deh+6cYHVsW2lzeTghXLxX
 ZgGXH5e5fXYPugjAkgWJ89Ps8b7S5pw8vRtOtrJAGP85BydIP5Dfzu/BNSxI6BNFh6FS
 ooCn2W2QvCgGubrpoe1aQ4cWXJ8lpknkPZs0MDAdOLmDkjGjoTOM3guUedfZID0WQTLx
 JHOg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc;
 bh=MoybjELZfC8f+lXSCtoUVME2X9MJfIpV+bdz3fRbLPs=;
 b=CAkoEid6UIpuu9WdYSNEkyR0fFyTDTjDMjG63wtL+Iv4T95Bzr9Ye17Rh8awoa1s1i
 2c0WcAhowd5LH7wmZiX2IvIV0MMsowOoCQlBXv79JLKyKRSQEtRE38Ej9vrjSsJybIqA
 rj2MVlQKz1ClFizaoupn3VyyoyRVUBefWEb4wAnve7Ohuriet8/fTil2lw19mu4RAJCG
 wuNeoRPc6CqZK1Om80pNZbhzYFUhYA4xlEpDP5rgbVF4RWVODxQp/kK3FGRwqMn9Fmsj
 Yew6ieU8gu2gw60ZH5qaj87qeIYHi8yCdsiTlsCeDnyfyv9/qUhjarLM1anZL8K51HFV
 5JXw==
X-Gm-Message-State: AOAM530gRV+jXe3Fq62LiArUXzh+QQxrs0MfU8qUfYxwtxr8mNjoy+Fx
 FLDWBHY5PkjiDPP0PcOcdxRmsxOqJJeLWH6bLg==
X-Google-Smtp-Source: ABdhPJyvkzjjp3Dg16h9W16pWaYpiojKXdtxSoSo0JgxzCryuV4rnIzZSZrN/Fkw4QQZ2XiCYSCYxFItVNiYZ1CA3jg=
X-Received: by 2002:a02:3501:: with SMTP id k1mr28948520jaa.133.1593621247338; 
 Wed, 01 Jul 2020 09:34:07 -0700 (PDT)
MIME-Version: 1.0
References: <20200701110650.16172-1-jgross@suse.com>
 <20200701110650.16172-2-jgross@suse.com>
In-Reply-To: <20200701110650.16172-2-jgross@suse.com>
From: Brian Gerst <brgerst@gmail.com>
Date: Wed, 1 Jul 2020 12:33:56 -0400
Message-ID: <CAMzpN2iuwv=05vpxeP6eyVqEH9_093gDtDV3QAXYQ2QrucznBQ@mail.gmail.com>
Subject: Re: [PATCH v2 1/4] x86/xen: remove 32-bit Xen PV guest support
To: Juergen Gross <jgross@suse.com>
Content-Type: text/plain; charset="UTF-8"
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 the arch/x86 maintainers <x86@kernel.org>,
 Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
 Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
 Andy Lutomirski <luto@kernel.org>, "H. Peter Anvin" <hpa@zytor.com>,
 xen-devel <xen-devel@lists.xenproject.org>,
 Thomas Gleixner <tglx@linutronix.de>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Wed, Jul 1, 2020 at 7:08 AM Juergen Gross <jgross@suse.com> wrote:
>
> Xen requires 64-bit machines today, and since Xen 4.14 it can be
> built without 32-bit PV guest support. There is no need to carry the
> burden of 32-bit PV guest support in the kernel any longer, as new
> guests can be either HVM or PVH, or they can use a 64-bit kernel.
>
> Remove the 32-bit Xen PV support from the kernel.

If you send a v3, it would be better to split the move of the 64-bit
code into xen-asm.S out into a separate patch.

--
Brian Gerst


From xen-devel-bounces@lists.xenproject.org Wed Jul 01 17:26:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jul 2020 17:26:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jqgV5-0007eR-JD; Wed, 01 Jul 2020 17:26:23 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Xe6U=AM=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jqgV4-0007dZ-R0
 for xen-devel@lists.xenproject.org; Wed, 01 Jul 2020 17:26:22 +0000
X-Inumbo-ID: fb16a39d-bbbf-11ea-874c-12813bfff9fa
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id fb16a39d-bbbf-11ea-874c-12813bfff9fa;
 Wed, 01 Jul 2020 17:26:21 +0000 (UTC)
Authentication-Results: esa5.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: 7x2vJkvdfqzPxkWHWzUVXJZqP+KkxFI2dqA03j03LrwtWNdSlG5NGzq0ZzOefKaDy3ReGrEjV+
 FHj2KszmViTo1Tpuk+0liUsCpHf9HvoGUfjyiBGb70huTnSutly0JkRzx79ZJFiCnKgQfjBQ2l
 aOPvhRSiDEfNLzHFkXrzOxJXZs8R5MNMYz2o+OOtlLQiZStV18GE0qqvmPR8zrL8a0/v8p1bsG
 L2vzRYoL75HurC0a+4c6ebwVlKkVyyaoi7wLBJ/b4bwouIQpET+W+SaQJGdMlCcgqvG+VIaXLl
 o/w=
X-SBRS: 2.7
X-MesageID: 21628612
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,301,1589256000"; d="scan'208";a="21628612"
Subject: Re: [PATCH v4 02/10] x86/vmx: add IPT cpu feature
To: Julien Grall <julien@xen.org>, =?UTF-8?Q?Micha=c5=82_Leszczy=c5=84ski?=
 <michal.leszczynski@cert.pl>, <xen-devel@lists.xenproject.org>
References: <cover.1593519420.git.michal.leszczynski@cert.pl>
 <7302dbfcd07dfaad9e50bb772673e588fcc4de67.1593519420.git.michal.leszczynski@cert.pl>
 <85416128-a334-4640-7504-0865f715b3a2@xen.org>
 <48c59780-bedb-ff08-723c-be14a9b73e6b@citrix.com>
 <f2aa4cf9-0689-82c0-cb6c-55d55ecbd5c1@xen.org>
 <a9a33ba1-b121-5e6f-b74c-7d2a60c84b13@xen.org>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <a7187837-495f-56a5-a8d0-635a53ac9234@citrix.com>
Date: Wed, 1 Jul 2020 18:26:15 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <a9a33ba1-b121-5e6f-b74c-7d2a60c84b13@xen.org>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin Tian <kevin.tian@intel.com>,
 Stefano Stabellini <sstabellini@kernel.org>, tamas.lengyel@intel.com,
 Jun Nakajima <jun.nakajima@intel.com>, Wei Liu <wl@xen.org>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 luwei.kang@intel.com, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 01/07/2020 17:18, Julien Grall wrote:
>
>
> On 01/07/2020 17:17, Julien Grall wrote:
>>
>>
>> On 01/07/2020 17:06, Andrew Cooper wrote:
>>> On 01/07/2020 16:12, Julien Grall wrote:
>>>> On 30/06/2020 13:33, Michał Leszczyński wrote:
>>>>> @@ -305,7 +311,6 @@ static int vmx_init_vmcs_config(void)
>>>>>                   SECONDARY_EXEC_ENABLE_VIRT_EXCEPTIONS |
>>>>>                   SECONDARY_EXEC_XSAVES |
>>>>>                   SECONDARY_EXEC_TSC_SCALING);
>>>>> -        rdmsrl(MSR_IA32_VMX_MISC, _vmx_misc_cap);
>>>>>            if ( _vmx_misc_cap & VMX_MISC_VMWRITE_ALL )
>>>>>                opt |= SECONDARY_EXEC_ENABLE_VMCS_SHADOWING;
>>>>>            if ( opt_vpid_enabled )
>>>>> diff --git a/xen/common/domain.c b/xen/common/domain.c
>>>>> index 7cc9526139..0a33e0dfd6 100644
>>>>> --- a/xen/common/domain.c
>>>>> +++ b/xen/common/domain.c
>>>>> @@ -82,6 +82,8 @@ struct vcpu *idle_vcpu[NR_CPUS] __read_mostly;
>>>>>      vcpu_info_t dummy_vcpu_info;
>>>>>    +bool_t vmtrace_supported;
>>>>
>>>> All the code looks x86 specific. So may I ask why this was implemented
>>>> in common code?
>>>
>>> There were some questions directed specifically at the ARM maintainers
>>> about CoreSight, which have gone unanswered.
>>
>> I can only find one question related to the size. Is there any other?
>>
>> I don't know what the interface will look like, given that AFAICT the
>> buffer may be embedded in the HW. We would need to investigate how to
>> differentiate between two domUs in this case without impacting the
>> performance in the common code.
>
> s/in the common code/during the context switch/
>
>> So I think it is a little premature to implement this in common code,
>> always compiled in for Arm. It would be best if this stayed in x86
>> code.

I've just checked with a colleague.  CoreSight can dump to a memory
buffer - there's even a decode library for the packet stream
https://github.com/Linaro/OpenCSD, although ultimately it is platform
specific as to whether the feature is supported.

Furthermore, the choice isn't "x86 vs ARM", now that RISC-V support is
on-list, and Power9 is floating on the horizon.

For the sake of what is literally just one byte in common code, I stand
by my original suggestion of this being a common interface.  It is not
something which should be x86 specific.

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed Jul 01 17:35:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jul 2020 17:35:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jqgdN-00005x-Ge; Wed, 01 Jul 2020 17:34:57 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=lceB=AM=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1jqgdM-00005s-8c
 for xen-devel@lists.xenproject.org; Wed, 01 Jul 2020 17:34:56 +0000
X-Inumbo-ID: 2d5a53fc-bbc1-11ea-874c-12813bfff9fa
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2d5a53fc-bbc1-11ea-874c-12813bfff9fa;
 Wed, 01 Jul 2020 17:34:55 +0000 (UTC)
Received: from localhost (c-67-164-102-47.hsd1.ca.comcast.net [67.164.102.47])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256
 bits)) (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 5CDCB20781;
 Wed,  1 Jul 2020 17:34:54 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
 s=default; t=1593624894;
 bh=7gRGt637RJr7xqh8m9IFhKHH4colOY8f4NtES4RwwhY=;
 h=Date:From:To:cc:Subject:In-Reply-To:References:From;
 b=s4JTo+sHZ5kueHTuTEAo2QGy2uhqmeAHdidNDEG2j3pV3VxRg5Xv6wKav2TR+6t4b
 XUiS73u1pvO97HpBjUcfEuJ+XpBhQ0M0mFiLtwwqKmYcxdiCP4RGhschpBHp6VSQJP
 m93SVv9K90jHvPW8nUrDpWPn4XcEaTHFUj4sKDZM=
Date: Wed, 1 Jul 2020 10:34:53 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Christoph Hellwig <hch@infradead.org>
Subject: Re: [PATCH] xen: introduce xen_vring_use_dma
In-Reply-To: <20200701133456.GA23888@infradead.org>
Message-ID: <alpine.DEB.2.21.2007011020320.8121@sstabellini-ThinkPad-T480s>
References: <20200624091732.23944-1-peng.fan@nxp.com>
 <20200624050355-mutt-send-email-mst@kernel.org>
 <alpine.DEB.2.21.2006241047010.8121@sstabellini-ThinkPad-T480s>
 <20200624163940-mutt-send-email-mst@kernel.org>
 <alpine.DEB.2.21.2006241351430.8121@sstabellini-ThinkPad-T480s>
 <20200624181026-mutt-send-email-mst@kernel.org>
 <alpine.DEB.2.21.2006251014230.8121@sstabellini-ThinkPad-T480s>
 <20200626110629-mutt-send-email-mst@kernel.org>
 <alpine.DEB.2.21.2006291621300.8121@sstabellini-ThinkPad-T480s>
 <20200701133456.GA23888@infradead.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: jgross@suse.com, Peng Fan <peng.fan@nxp.com>,
 Stefano Stabellini <sstabellini@kernel.org>, konrad.wilk@oracle.com,
 jasowang@redhat.com, x86@kernel.org, linux-kernel@vger.kernel.org,
 virtualization@lists.linux-foundation.org, iommu@lists.linux-foundation.org,
 "Michael S. Tsirkin" <mst@redhat.com>, linux-imx@nxp.com,
 xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com,
 linux-arm-kernel@lists.infradead.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Wed, 1 Jul 2020, Christoph Hellwig wrote:
> On Mon, Jun 29, 2020 at 04:46:09PM -0700, Stefano Stabellini wrote:
> > > I could imagine some future Xen hosts setting a flag somewhere in the
> > > platform capability saying "no xen specific flag, rely on
> > > "VIRTIO_F_ACCESS_PLATFORM". Then you set that accordingly in QEMU.
> > > How about that?
> > 
> > Yes, that would be fine and there is no problem implementing something
> > like that when we get virtio support in Xen. Today there are still no
> > virtio interfaces provided by Xen to ARM guests (no virtio-block/net,
> > etc.)
> > 
> > In fact, in both cases we are discussing virtio is *not* provided by
> > Xen; it is a firmware interface to something entirely different:
> > 
> > 1) virtio is used to talk to a remote AMP processor (RPMesg)
> > 2) virtio is used to talk to a secure-world firmware/OS (Trusty)
> >
> > VIRTIO_F_ACCESS_PLATFORM is not set by Xen in these cases but by RPMesg
> > and by Trusty respectively. I don't know if Trusty should or should not
> > set VIRTIO_F_ACCESS_PLATFORM, but I think Linux should still work
> > without issues.
> > 
> 
> Any virtio implementation that is not in control of the memory map
> (aka not the hypervisor) absolutely must set VIRTIO_F_ACCESS_PLATFORM,
> else it is completely broken.

Lots of broken virtio implementations out there it would seem :-(


> > The xen_domain() check in Linux makes it so that vring_use_dma_api
> > returns the opposite value on native Linux compared to Linux as Xen/ARM
> > DomU by "accident". By "accident" because there is no architectural
> > reason why Linux Xen/ARM DomU should behave differently compared to
> > native Linux in this regard.
> > 
> > I hope that now it is clearer why I think the if (xen_domain()) check
> > needs to be improved anyway, even if we fix generic dma_ops with virtio
> > interfaces missing VIRTIO_F_ACCESS_PLATFORM.
> 
> IMHO that Xen quirk should never have been added in this form..

Would you be in favor of a more flexible check along the lines of the
one proposed in the patch that started this thread:

    if (xen_vring_use_dma())
            return true;


xen_vring_use_dma would be implemented so that it returns true when
xen_swiotlb is required and false otherwise.


From xen-devel-bounces@lists.xenproject.org Wed Jul 01 17:52:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jul 2020 17:52:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jqguA-0001jE-1o; Wed, 01 Jul 2020 17:52:18 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Xe6U=AM=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jqgu8-0001j6-Kh
 for xen-devel@lists.xenproject.org; Wed, 01 Jul 2020 17:52:16 +0000
X-Inumbo-ID: 99202092-bbc3-11ea-8751-12813bfff9fa
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 99202092-bbc3-11ea-8751-12813bfff9fa;
 Wed, 01 Jul 2020 17:52:15 +0000 (UTC)
Authentication-Results: esa1.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: p9HUq16D4AAzzYFj4pTIdFyoJSzWL1QVqfLKaG8z+PeddPoDV8F/ejsa5l5V8M/od7FVpUskxB
 184RsqVyciNFukZnSXfLLW3enB7gsYfNOfOk5E+uEGJ6ymTnrEqu6obHhaGItX7023jtH6y9yn
 zBmFLRKLgtgGpl2qSQT/reW+I5JUSgDrBrJP3O4pAUUVF2O7lxr66C7QHThRwy+USnFd9GgyFF
 qyrtZnrdd4tIjqs0NLvV9Mn/HJR6AS/H/8XDOBByZuzcnJaEWgOeq/H2gGKL8u/a+/zf4Lc5ZX
 r30=
X-SBRS: 2.7
X-MesageID: 21734709
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,301,1589256000"; d="scan'208";a="21734709"
Subject: Re: [PATCH v4 01/10] x86/vmx: add Intel PT MSR definitions
To: =?UTF-8?Q?Micha=c5=82_Leszczy=c5=84ski?= <michal.leszczynski@cert.pl>,
 <xen-devel@lists.xenproject.org>
References: <cover.1593519420.git.michal.leszczynski@cert.pl>
 <2ff9ecee8367e814a29b17a34203bda0e3c48d74.1593519420.git.michal.leszczynski@cert.pl>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <46b8c096-91ae-a3d7-1c53-d54616f38388@citrix.com>
Date: Wed, 1 Jul 2020 18:52:10 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <2ff9ecee8367e814a29b17a34203bda0e3c48d74.1593519420.git.michal.leszczynski@cert.pl>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: tamas.lengyel@intel.com, luwei.kang@intel.com, Wei Liu <wl@xen.org>,
 Jan Beulich <jbeulich@suse.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 30/06/2020 13:33, Michał Leszczyński wrote:
> From: Michal Leszczynski <michal.leszczynski@cert.pl>
>
> Define constants related to Intel Processor Trace features.
>
> Signed-off-by: Michal Leszczynski <michal.leszczynski@cert.pl>

Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

I wanted to have a play with the series, and have ended up having to do
the rebase anyway.

As we're in code freeze for 4.14, I've started x86-next in its usual
location
(https://xenbits.xen.org/gitweb/?p=people/andrewcoop/xen.git;a=shortlog;h=refs/heads/x86-next)
and will commit this (and any other accumulated patches) once 4.15 opens.

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed Jul 01 18:02:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jul 2020 18:02:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jqh3z-0002gU-1G; Wed, 01 Jul 2020 18:02:27 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Rv6a=AM=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jqh3x-0002gP-8S
 for xen-devel@lists.xenproject.org; Wed, 01 Jul 2020 18:02:25 +0000
X-Inumbo-ID: 03f016ce-bbc5-11ea-8754-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 03f016ce-bbc5-11ea-8754-12813bfff9fa;
 Wed, 01 Jul 2020 18:02:23 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=KOsEzqe3M8e03ZIhwdL4fxQRO4Qm1M6Arx3fSDX1f8U=; b=L3fCGwRMitW1a9V4c01bLmMfBf
 qO9rQsnJh65ROBqoe/sqxhEOERQaLxDuJcnPhC61tSfGQyBXjc7E4fCzN96253NaxcxT6kpUrZmnJ
 TrrxeikO7s3XLrQYVCb6QnS4ZnSxj4adC4imVsd+48myT36SvQBfqu/vDPmMpxRALgV8=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jqh3q-0008UN-Ep; Wed, 01 Jul 2020 18:02:18 +0000
Received: from [54.239.6.178] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jqh3q-000409-5D; Wed, 01 Jul 2020 18:02:18 +0000
Subject: Re: [PATCH v4 02/10] x86/vmx: add IPT cpu feature
To: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Micha=c5=82_Leszczy=c5=84ski?= <michal.leszczynski@cert.pl>,
 xen-devel@lists.xenproject.org
References: <cover.1593519420.git.michal.leszczynski@cert.pl>
 <7302dbfcd07dfaad9e50bb772673e588fcc4de67.1593519420.git.michal.leszczynski@cert.pl>
 <85416128-a334-4640-7504-0865f715b3a2@xen.org>
 <48c59780-bedb-ff08-723c-be14a9b73e6b@citrix.com>
 <f2aa4cf9-0689-82c0-cb6c-55d55ecbd5c1@xen.org>
 <a9a33ba1-b121-5e6f-b74c-7d2a60c84b13@xen.org>
 <a7187837-495f-56a5-a8d0-635a53ac9234@citrix.com>
From: Julien Grall <julien@xen.org>
Message-ID: <95154add-164a-5450-28e1-f24611e1642f@xen.org>
Date: Wed, 1 Jul 2020 19:02:15 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:68.0)
 Gecko/20100101 Thunderbird/68.9.0
MIME-Version: 1.0
In-Reply-To: <a7187837-495f-56a5-a8d0-635a53ac9234@citrix.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin Tian <kevin.tian@intel.com>,
 Stefano Stabellini <sstabellini@kernel.org>, tamas.lengyel@intel.com,
 Jun Nakajima <jun.nakajima@intel.com>, Wei Liu <wl@xen.org>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 luwei.kang@intel.com, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi,

On 01/07/2020 18:26, Andrew Cooper wrote:
> On 01/07/2020 17:18, Julien Grall wrote:
>>
>>
>> On 01/07/2020 17:17, Julien Grall wrote:
>>>
>>>
>>> On 01/07/2020 17:06, Andrew Cooper wrote:
>>>> On 01/07/2020 16:12, Julien Grall wrote:
>>>>> On 30/06/2020 13:33, Michał Leszczyński wrote:
>>>>>> @@ -305,7 +311,6 @@ static int vmx_init_vmcs_config(void)
>>>>>>                    SECONDARY_EXEC_ENABLE_VIRT_EXCEPTIONS |
>>>>>>                    SECONDARY_EXEC_XSAVES |
>>>>>>                    SECONDARY_EXEC_TSC_SCALING);
>>>>>> -        rdmsrl(MSR_IA32_VMX_MISC, _vmx_misc_cap);
>>>>>>             if ( _vmx_misc_cap & VMX_MISC_VMWRITE_ALL )
>>>>>>                 opt |= SECONDARY_EXEC_ENABLE_VMCS_SHADOWING;
>>>>>>             if ( opt_vpid_enabled )
>>>>>> diff --git a/xen/common/domain.c b/xen/common/domain.c
>>>>>> index 7cc9526139..0a33e0dfd6 100644
>>>>>> --- a/xen/common/domain.c
>>>>>> +++ b/xen/common/domain.c
>>>>>> @@ -82,6 +82,8 @@ struct vcpu *idle_vcpu[NR_CPUS] __read_mostly;
>>>>>>       vcpu_info_t dummy_vcpu_info;
>>>>>>     +bool_t vmtrace_supported;
>>>>>
>>>>> All the code looks x86 specific. So may I ask why this was implemented
>>>>> in common code?
>>>>
>>>> There were some questions directed specifically at the ARM maintainers
>>>> about CoreSight, which have gone unanswered.
>>>
>>> I can only find one question related to the size. Is there any other?
>>>
>>> I don't know how the interface will look like given that AFAICT the
>>> buffer may be embedded in the HW. We would need to investigate how to
>>> differentiate between two domUs in this case without impacting the
>>> performance in the common code.
>>
>> s/in the common code/during the context switch/
>>
>>> So I think it is a little premature to implement this in common code
>>> and always compiled in for Arm. It would be best if this stay in x86
>>> code.
> 
> I've just checked with a colleague.  CoreSight can dump to a memory
> buffer - there's even a decode library for the packet stream
> https://github.com/Linaro/OpenCSD, although ultimately it is platform
> specific as to whether the feature is supported.
> 
> Furthermore, the choice isn't "x86 vs ARM", now that RISCv support is
> on-list, and Power9 is floating on the horizon.
> 
> For the sake of what is literally just one byte in common code, I stand
> by my original suggestion of this being a common interface.  It is not
> something which should be x86 specific.

This argument can also be used against putting it in common code. What I 
am most concerned about is that we are trying to guess what the 
interface will look like for another architecture. Your suggested 
interface may work, but it may also end up being a complete mess.

So I think we should wait for a new architecture to use vmtrace before 
moving this to common code. It would not be a massive effort to move 
that bit into common code if needed.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Jul 01 18:06:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jul 2020 18:06:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jqh8C-0002rC-M7; Wed, 01 Jul 2020 18:06:48 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Xe6U=AM=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jqh8B-0002r7-CO
 for xen-devel@lists.xenproject.org; Wed, 01 Jul 2020 18:06:47 +0000
X-Inumbo-ID: a033023a-bbc5-11ea-8754-12813bfff9fa
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a033023a-bbc5-11ea-8754-12813bfff9fa;
 Wed, 01 Jul 2020 18:06:46 +0000 (UTC)
Authentication-Results: esa6.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: xngJl0r63LTvvNtZrRePOINLd+XIylNDqq6BJ4bK+BeWpGakLjH78TUcrHRRFJcTBWG1L8rIfl
 P+NZ5gGXbPwVvXI8CaSPRNRvCCCQJQFg5vnj0k3JeaXFpXwGRY6/Ygk5HuW12OWRB+rZqnZcT8
 jKktNYf2Dkr0BrEQMdHr43eZA7SvsAsdfr1YmRdCvzm8JgpTmGk/8ugollQNuyJOgIAJD0PzxF
 mjMKRpYu9NR06HRQMXWiTK+GicbkyE0/4IC0/OoNOxBxVgsDlEI/rRBGoRY6eLmsjf1Z+goC69
 zS4=
X-SBRS: 2.7
X-MesageID: 21760884
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,301,1589256000"; d="scan'208";a="21760884"
Subject: Re: [PATCH v4 02/10] x86/vmx: add IPT cpu feature
To: Julien Grall <julien@xen.org>, =?UTF-8?Q?Micha=c5=82_Leszczy=c5=84ski?=
 <michal.leszczynski@cert.pl>, <xen-devel@lists.xenproject.org>
References: <cover.1593519420.git.michal.leszczynski@cert.pl>
 <7302dbfcd07dfaad9e50bb772673e588fcc4de67.1593519420.git.michal.leszczynski@cert.pl>
 <85416128-a334-4640-7504-0865f715b3a2@xen.org>
 <48c59780-bedb-ff08-723c-be14a9b73e6b@citrix.com>
 <f2aa4cf9-0689-82c0-cb6c-55d55ecbd5c1@xen.org>
 <a9a33ba1-b121-5e6f-b74c-7d2a60c84b13@xen.org>
 <a7187837-495f-56a5-a8d0-635a53ac9234@citrix.com>
 <95154add-164a-5450-28e1-f24611e1642f@xen.org>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <df0aa9b4-d7f7-f909-e833-3f2f3040a2dc@citrix.com>
Date: Wed, 1 Jul 2020 19:06:40 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <95154add-164a-5450-28e1-f24611e1642f@xen.org>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin Tian <kevin.tian@intel.com>,
 Stefano Stabellini <sstabellini@kernel.org>, tamas.lengyel@intel.com,
 Jun Nakajima <jun.nakajima@intel.com>, Wei Liu <wl@xen.org>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 luwei.kang@intel.com, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 01/07/2020 19:02, Julien Grall wrote:
> Hi,
>
> On 01/07/2020 18:26, Andrew Cooper wrote:
>> On 01/07/2020 17:18, Julien Grall wrote:
>>>
>>>
>>> On 01/07/2020 17:17, Julien Grall wrote:
>>>>
>>>>
>>>> On 01/07/2020 17:06, Andrew Cooper wrote:
>>>>> On 01/07/2020 16:12, Julien Grall wrote:
>>>>>> On 30/06/2020 13:33, Michał Leszczyński wrote:
>>>>>>> @@ -305,7 +311,6 @@ static int vmx_init_vmcs_config(void)
>>>>>>>                    SECONDARY_EXEC_ENABLE_VIRT_EXCEPTIONS |
>>>>>>>                    SECONDARY_EXEC_XSAVES |
>>>>>>>                    SECONDARY_EXEC_TSC_SCALING);
>>>>>>> -        rdmsrl(MSR_IA32_VMX_MISC, _vmx_misc_cap);
>>>>>>>             if ( _vmx_misc_cap & VMX_MISC_VMWRITE_ALL )
>>>>>>>                 opt |= SECONDARY_EXEC_ENABLE_VMCS_SHADOWING;
>>>>>>>             if ( opt_vpid_enabled )
>>>>>>> diff --git a/xen/common/domain.c b/xen/common/domain.c
>>>>>>> index 7cc9526139..0a33e0dfd6 100644
>>>>>>> --- a/xen/common/domain.c
>>>>>>> +++ b/xen/common/domain.c
>>>>>>> @@ -82,6 +82,8 @@ struct vcpu *idle_vcpu[NR_CPUS] __read_mostly;
>>>>>>>       vcpu_info_t dummy_vcpu_info;
>>>>>>>     +bool_t vmtrace_supported;
>>>>>>
>>>>>> All the code looks x86 specific. So may I ask why this was
>>>>>> implemented
>>>>>> in common code?
>>>>>
>>>>> There were some questions directed specifically at the ARM
>>>>> maintainers
>>>>> about CoreSight, which have gone unanswered.
>>>>
>>>> I can only find one question related to the size. Is there any other?
>>>>
>>>> I don't know how the interface will look like given that AFAICT the
>>>> buffer may be embedded in the HW. We would need to investigate how to
>>>> differentiate between two domUs in this case without impacting the
>>>> performance in the common code.
>>>
>>> s/in the common code/during the context switch/
>>>
>>>> So I think it is a little premature to implement this in common code
>>>> and always compiled in for Arm. It would be best if this stay in x86
>>>> code.
>>
>> I've just checked with a colleague.  CoreSight can dump to a memory
>> buffer - there's even a decode library for the packet stream
>> https://github.com/Linaro/OpenCSD, although ultimately it is platform
>> specific as to whether the feature is supported.
>>
>> Furthermore, the choice isn't "x86 vs ARM", now that RISCv support is
>> on-list, and Power9 is floating on the horizon.
>>
>> For the sake of what is literally just one byte in common code, I stand
>> by my original suggestion of this being a common interface.  It is not
>> something which should be x86 specific.
>
> This argument can also be used against putting it in common code. What
> I am most concerned about is that we are trying to guess what the
> interface will look like for another architecture. Your suggested
> interface may work, but it may also end up being a complete mess.
>
> So I think we should wait for a new architecture to use vmtrace
> before moving this to common code. It would not be a massive effort
> to move that bit into common code if needed.

I suggest you read the series.

The only thing in common code is the bit of the interface saying "I'd
like buffers this big please".

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed Jul 01 18:09:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jul 2020 18:09:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jqhB4-0002zQ-4j; Wed, 01 Jul 2020 18:09:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Rv6a=AM=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jqhB2-0002zF-PO
 for xen-devel@lists.xenproject.org; Wed, 01 Jul 2020 18:09:44 +0000
X-Inumbo-ID: 0a5fee16-bbc6-11ea-bb8b-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0a5fee16-bbc6-11ea-bb8b-bc764e2007e4;
 Wed, 01 Jul 2020 18:09:44 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=48oe2L4isXyCk4WjRbb9FLGIEr5/k9xk7hkDEs7B+p0=; b=Z3YUVSmA7u5J4+MWnZXV0Cn8ki
 SPMOLcv1N3TZsolj8oPHvzzhWWo+vFA5w3zRgESpeckOAbm+k1HsmcKsNIgShNZwLjO+eKWb6A9fL
 pZshcMW1del8gwtRysT4z//iq80FiCo/Ci1giXo+S5duGbNCdNR3L2m/Kj1DlPsCYSmc=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jqhAx-0000BA-8k; Wed, 01 Jul 2020 18:09:39 +0000
Received: from [54.239.6.177] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jqhAw-0004Nj-Tu; Wed, 01 Jul 2020 18:09:39 +0000
Subject: Re: [PATCH v4 02/10] x86/vmx: add IPT cpu feature
To: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Micha=c5=82_Leszczy=c5=84ski?= <michal.leszczynski@cert.pl>,
 xen-devel@lists.xenproject.org
References: <cover.1593519420.git.michal.leszczynski@cert.pl>
 <7302dbfcd07dfaad9e50bb772673e588fcc4de67.1593519420.git.michal.leszczynski@cert.pl>
 <85416128-a334-4640-7504-0865f715b3a2@xen.org>
 <48c59780-bedb-ff08-723c-be14a9b73e6b@citrix.com>
 <f2aa4cf9-0689-82c0-cb6c-55d55ecbd5c1@xen.org>
 <a9a33ba1-b121-5e6f-b74c-7d2a60c84b13@xen.org>
 <a7187837-495f-56a5-a8d0-635a53ac9234@citrix.com>
 <95154add-164a-5450-28e1-f24611e1642f@xen.org>
 <df0aa9b4-d7f7-f909-e833-3f2f3040a2dc@citrix.com>
From: Julien Grall <julien@xen.org>
Message-ID: <de298379-43c3-648f-aade-9efc7f761970@xen.org>
Date: Wed, 1 Jul 2020 19:09:35 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:68.0)
 Gecko/20100101 Thunderbird/68.9.0
MIME-Version: 1.0
In-Reply-To: <df0aa9b4-d7f7-f909-e833-3f2f3040a2dc@citrix.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin Tian <kevin.tian@intel.com>,
 Stefano Stabellini <sstabellini@kernel.org>, tamas.lengyel@intel.com,
 Jun Nakajima <jun.nakajima@intel.com>, Wei Liu <wl@xen.org>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 luwei.kang@intel.com, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>



On 01/07/2020 19:06, Andrew Cooper wrote:
> On 01/07/2020 19:02, Julien Grall wrote:
>> Hi,
>>
>> On 01/07/2020 18:26, Andrew Cooper wrote:
>>> On 01/07/2020 17:18, Julien Grall wrote:
>>>>
>>>>
>>>> On 01/07/2020 17:17, Julien Grall wrote:
>>>>>
>>>>>
>>>>> On 01/07/2020 17:06, Andrew Cooper wrote:
>>>>>> On 01/07/2020 16:12, Julien Grall wrote:
>>>>>>> On 30/06/2020 13:33, Michał Leszczyński wrote:
>>>>>>>> @@ -305,7 +311,6 @@ static int vmx_init_vmcs_config(void)
>>>>>>>>                     SECONDARY_EXEC_ENABLE_VIRT_EXCEPTIONS |
>>>>>>>>                     SECONDARY_EXEC_XSAVES |
>>>>>>>>                     SECONDARY_EXEC_TSC_SCALING);
>>>>>>>> -        rdmsrl(MSR_IA32_VMX_MISC, _vmx_misc_cap);
>>>>>>>>              if ( _vmx_misc_cap & VMX_MISC_VMWRITE_ALL )
>>>>>>>>                  opt |= SECONDARY_EXEC_ENABLE_VMCS_SHADOWING;
>>>>>>>>              if ( opt_vpid_enabled )
>>>>>>>> diff --git a/xen/common/domain.c b/xen/common/domain.c
>>>>>>>> index 7cc9526139..0a33e0dfd6 100644
>>>>>>>> --- a/xen/common/domain.c
>>>>>>>> +++ b/xen/common/domain.c
>>>>>>>> @@ -82,6 +82,8 @@ struct vcpu *idle_vcpu[NR_CPUS] __read_mostly;
>>>>>>>>        vcpu_info_t dummy_vcpu_info;
>>>>>>>>      +bool_t vmtrace_supported;
>>>>>>>
>>>>>>> All the code looks x86 specific. So may I ask why this was
>>>>>>> implemented
>>>>>>> in common code?
>>>>>>
>>>>>> There were some questions directed specifically at the ARM
>>>>>> maintainers
>>>>>> about CoreSight, which have gone unanswered.
>>>>>
>>>>> I can only find one question related to the size. Is there any other?
>>>>>
>>>>> I don't know how the interface will look like given that AFAICT the
>>>>> buffer may be embedded in the HW. We would need to investigate how to
>>>>> differentiate between two domUs in this case without impacting the
>>>>> performance in the common code.
>>>>
>>>> s/in the common code/during the context switch/
>>>>
>>>>> So I think it is a little premature to implement this in common code
>>>>> and always compiled in for Arm. It would be best if this stay in x86
>>>>> code.
>>>
>>> I've just checked with a colleague.  CoreSight can dump to a memory
>>> buffer - there's even a decode library for the packet stream
>>> https://github.com/Linaro/OpenCSD, although ultimately it is platform
>>> specific as to whether the feature is supported.
>>>
>>> Furthermore, the choice isn't "x86 vs ARM", now that RISCv support is
>>> on-list, and Power9 is floating on the horizon.
>>>
>>> For the sake of what is literally just one byte in common code, I stand
>>> by my original suggestion of this being a common interface.  It is not
>>> something which should be x86 specific.
>>
>> This argument can also be used against putting it in common code. What
>> I am most concerned about is that we are trying to guess what the
>> interface will look like for another architecture. Your suggested
>> interface may work, but it may also end up being a complete mess.
>>
>> So I think we should wait for a new architecture to use vmtrace
>> before moving this to common code. It would not be a massive effort
>> to move that bit into common code if needed.
> 
> I suggest you read the series.

I already went through the series and ...

> 
> The only thing in common code is the bit of the interface saying "I'd
> like buffers this big please".

... I stand by my point. There is no need to have this code in common 
code until someone else needs it. This code could easily be implemented 
in arch_domain_create().

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Jul 01 18:28:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jul 2020 18:28:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jqhSN-0004cO-Nl; Wed, 01 Jul 2020 18:27:39 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=r09v=AM=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jqhSM-0004cJ-Nq
 for xen-devel@lists.xenproject.org; Wed, 01 Jul 2020 18:27:38 +0000
X-Inumbo-ID: 89e1e08e-bbc8-11ea-b7bb-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 89e1e08e-bbc8-11ea-b7bb-bc764e2007e4;
 Wed, 01 Jul 2020 18:27:37 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=qlUyGgi/TqYJdvLmXjDwORUOfw5vyGdBNaaRyrsiHzc=; b=Mk1qXi9CnZ7pzcIlz+6OgHbbF
 XMOEqRnoLSUm89UDbjLO5Ij/Fn2eYTnZQ2PTelrwFNFqNrPHcdBLbYxVdbRb/NLjPxG78mdmalLJr
 19Holn2R4JYaFIMDPLzcuV9hl7EvoEOhu3dzQex3OgIkb6+FutWNKIfW9pDsp+hdj14cs=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jqhSK-0000VY-Li; Wed, 01 Jul 2020 18:27:36 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jqhSK-0006lj-8s; Wed, 01 Jul 2020 18:27:36 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jqhSK-000747-7t; Wed, 01 Jul 2020 18:27:36 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151494-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 151494: regressions - FAIL
X-Osstest-Failures: linux-linus:test-arm64-arm64-libvirt-xsm:guest-start/debian.repeat:fail:regression
 linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:guest-saverestore:fail:heisenbug
 linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
X-Osstest-Versions-This: linux=7c30b859a947535f2213277e827d7ac7dcff9c84
X-Osstest-Versions-That: linux=1b5044021070efa3259f3e9548dc35d1eb6aa844
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 01 Jul 2020 18:27:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151494 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151494/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-libvirt-xsm 16 guest-start/debian.repeat fail REGR. vs. 151214

Tests which are failing intermittently (not blocking):
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 guest-saverestore fail pass in 151480

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 151214
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 151214
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 151214
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 151214
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 151214
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 151214
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 151214
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 151214
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass

version targeted for testing:
 linux                7c30b859a947535f2213277e827d7ac7dcff9c84
baseline version:
 linux                1b5044021070efa3259f3e9548dc35d1eb6aa844

Last test of basis   151214  2020-06-18 02:27:46 Z   13 days
Failing since        151236  2020-06-19 19:10:35 Z   11 days   15 attempts
Testing same since   151467  2020-06-30 02:29:41 Z    1 days    3 attempts

------------------------------------------------------------
478 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 22766 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Jul 01 18:40:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jul 2020 18:40:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jqheG-0005Yk-1S; Wed, 01 Jul 2020 18:39:56 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=8lOf=AM=kernel.org=luto@srs-us1.protection.inumbo.net>)
 id 1jqheF-0005Yf-6J
 for xen-devel@lists.xenproject.org; Wed, 01 Jul 2020 18:39:55 +0000
X-Inumbo-ID: 415c60bc-bbca-11ea-875d-12813bfff9fa
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 415c60bc-bbca-11ea-875d-12813bfff9fa;
 Wed, 01 Jul 2020 18:39:54 +0000 (UTC)
Received: from mail-wr1-f51.google.com (mail-wr1-f51.google.com
 [209.85.221.51])
 (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 9FEE0208C7
 for <xen-devel@lists.xenproject.org>; Wed,  1 Jul 2020 18:39:53 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
 s=default; t=1593628793;
 bh=BzgN9CdkNj8KsNJpO6Dp3dgk25B6SZxEFrbbFo9+dlk=;
 h=References:In-Reply-To:From:Date:Subject:To:Cc:From;
 b=EekpatgCfXAY5PVIXJOXKjXre1Wyt5kLnk69xj79naKsliUjFBna4Am1jKxDJ1RUB
 YwgGo3BT2S3Rx2ybuQVvOdWWxVpkyGDwgeTCGh2QZeVh0VyaK5E3Db8ddZOCtW/e4V
 jqfSmiN8gcKG4aikDeyjscuKc7uS8UTF14Y94hyM=
Received: by mail-wr1-f51.google.com with SMTP id z15so13904765wrl.8
 for <xen-devel@lists.xenproject.org>; Wed, 01 Jul 2020 11:39:53 -0700 (PDT)
X-Gm-Message-State: AOAM530IQzHyoF1amt2qVa00BdyyzlBu6JWNtYNh5Pi21kRglHvKux42
 1sblMrkSrXt+AWtMR9X7cA6nChgsOfWVWZzNlQcHMA==
X-Google-Smtp-Source: ABdhPJxL9r2ocTMGAnClzir8rROoW17dgBhZ2sKyXqCKyH9ofutXZA7VisI5aNG7X5iBtvuS5HHHhEDpobjJAmSLQgA=
X-Received: by 2002:a5d:458a:: with SMTP id p10mr27429972wrq.184.1593628792155; 
 Wed, 01 Jul 2020 11:39:52 -0700 (PDT)
MIME-Version: 1.0
References: <cover.1593191971.git.luto@kernel.org>
 <947880c41ade688ff4836f665d0c9fcaa9bd1201.1593191971.git.luto@kernel.org>
 <CAMzpN2iW4XD1Gsgq0ZeeH2eewLO+9Mk6eyk0LnbF-kP3v=smLg@mail.gmail.com>
In-Reply-To: <CAMzpN2iW4XD1Gsgq0ZeeH2eewLO+9Mk6eyk0LnbF-kP3v=smLg@mail.gmail.com>
From: Andy Lutomirski <luto@kernel.org>
Date: Wed, 1 Jul 2020 11:39:40 -0700
X-Gmail-Original-Message-ID: <CALCETrVy-Q4K04wmEPe5VeU=at2BL4b-bSFkoSU-BPbTaTB2Yg@mail.gmail.com>
Message-ID: <CALCETrVy-Q4K04wmEPe5VeU=at2BL4b-bSFkoSU-BPbTaTB2Yg@mail.gmail.com>
Subject: Re: [PATCH 3/6] x86/entry/64/compat: Fix Xen PV SYSENTER frame setup
To: Brian Gerst <brgerst@gmail.com>
Content-Type: text/plain; charset="UTF-8"
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 the arch/x86 maintainers <x86@kernel.org>,
 Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
 Andy Lutomirski <luto@kernel.org>, xen-devel <xen-devel@lists.xenproject.org>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Wed, Jul 1, 2020 at 8:42 AM Brian Gerst <brgerst@gmail.com> wrote:
>
> On Fri, Jun 26, 2020 at 1:30 PM Andy Lutomirski <luto@kernel.org> wrote:
> >
> > The SYSENTER frame setup was nonsense.  It worked by accident
> > because the normal code into which the Xen asm jumped
> > (entry_SYSENTER_32/compat) threw away SP without touching the stack.
> > entry_SYSENTER_compat was recently modified such that it relied on
> > having a valid stack pointer, so now the Xen asm needs to invoke it
> > with a valid stack.
> >
> > Fix it up like SYSCALL: use the Xen-provided frame and skip the bare
> > metal prologue.
> >
> > Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
> > Cc: Juergen Gross <jgross@suse.com>
> > Cc: Stefano Stabellini <sstabellini@kernel.org>
> > Cc: xen-devel@lists.xenproject.org
> > Fixes: 1c3e5d3f60e2 ("x86/entry: Make entry_64_compat.S objtool clean")
> > Signed-off-by: Andy Lutomirski <luto@kernel.org>
> > ---
> >  arch/x86/entry/entry_64_compat.S |  1 +
> >  arch/x86/xen/xen-asm_64.S        | 20 ++++++++++++++++----
> >  2 files changed, 17 insertions(+), 4 deletions(-)
> >
> > diff --git a/arch/x86/entry/entry_64_compat.S b/arch/x86/entry/entry_64_compat.S
> > index 7b9d8150f652..381a6de7de9c 100644
> > --- a/arch/x86/entry/entry_64_compat.S
> > +++ b/arch/x86/entry/entry_64_compat.S
> > @@ -79,6 +79,7 @@ SYM_CODE_START(entry_SYSENTER_compat)
> >         pushfq                          /* pt_regs->flags (except IF = 0) */
> >         pushq   $__USER32_CS            /* pt_regs->cs */
> >         pushq   $0                      /* pt_regs->ip = 0 (placeholder) */
> > +SYM_INNER_LABEL(entry_SYSENTER_compat_after_hwframe, SYM_L_GLOBAL)
>
> This skips over the section that truncates the syscall number to
> 32-bits.  The comments present some doubt that it is actually
> necessary, but the Xen path shouldn't differ from native.  That code
> should be moved after this new label.

Whoops.  I thought I caught that myself, but apparently not.  I'll fix it.

>
> --
> Brian Gerst


From xen-devel-bounces@lists.xenproject.org Wed Jul 01 20:03:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jul 2020 20:03:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jqix6-0004MO-7o; Wed, 01 Jul 2020 20:03:28 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=r09v=AM=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jqix5-0004MJ-37
 for xen-devel@lists.xenproject.org; Wed, 01 Jul 2020 20:03:27 +0000
X-Inumbo-ID: ec610674-bbd5-11ea-b7bb-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ec610674-bbd5-11ea-b7bb-bc764e2007e4;
 Wed, 01 Jul 2020 20:03:25 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=Kj1n6YsjtwZzPlmOcKEmjyGI9s5fMpUrVj5jfaZOHcg=; b=6szkxEazrNnYWeEGCAA+z4e3X
 j5EK9GJtazmwXDb6Rz3g8qo9X7FUrPqx7a6/ae/u2w84kUMTThGrv+n0pZkF9cTnXBv/hdreYn0XG
 YPLyhgoBFwRjzgnO7nFHcCgzJZYqFpkafxMuLqAdAE5WBoN+KpyUDo3O234E/KYZSyslc=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jqix3-0002JA-Ae; Wed, 01 Jul 2020 20:03:25 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jqix3-00048o-2A; Wed, 01 Jul 2020 20:03:25 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jqix3-0004Gb-1P; Wed, 01 Jul 2020 20:03:25 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151511-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 151511: tolerable all pass - PUSHED
X-Osstest-Failures: xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=3b7dab93f2401b08c673244c9ae0f92e08bd03ba
X-Osstest-Versions-That: xen=23ca7ec0ba620db52a646d80e22f9703a6589f66
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 01 Jul 2020 20:03:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151511 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151511/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  3b7dab93f2401b08c673244c9ae0f92e08bd03ba
baseline version:
 xen                  23ca7ec0ba620db52a646d80e22f9703a6589f66

Last test of basis   151476  2020-06-30 11:00:51 Z    1 days
Testing same since   151511  2020-07-01 17:01:00 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   23ca7ec0ba..3b7dab93f2  3b7dab93f2401b08c673244c9ae0f92e08bd03ba -> smoke


From xen-devel-bounces@lists.xenproject.org Wed Jul 01 20:47:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jul 2020 20:47:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jqjdx-0007gK-Jp; Wed, 01 Jul 2020 20:47:45 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=E/dH=AM=redhat.com=mst@srs-us1.protection.inumbo.net>)
 id 1jqjdv-0007gF-QM
 for xen-devel@lists.xenproject.org; Wed, 01 Jul 2020 20:47:43 +0000
X-Inumbo-ID: 1998bf1f-bbdc-11ea-877f-12813bfff9fa
Received: from us-smtp-1.mimecast.com (unknown [205.139.110.120])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 1998bf1f-bbdc-11ea-877f-12813bfff9fa;
 Wed, 01 Jul 2020 20:47:38 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
 s=mimecast20190719; t=1593636458;
 h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
 in-reply-to:in-reply-to:references:references;
 bh=MbiyraTJ4trv3FWb4qCBJhr5R70RrnDBw9rQbrUZ784=;
 b=VfFIRYhyN7UAlCmU4ycy3YjrdUnZaix3K83yEmCJ4yw+0MKL3BwuvHujojAeGw/KHJUYyy
 Y6SsKzUGNYq4BV0q2xfZCxCjGX4LJA3yWuprFF0sVrh4rQU/MTjyXc6+cmBsvv5uUMLYRf
 309S+3va0D9/K/zDx6wiFR3gKOdZ6+w=
Received: from mail-wm1-f72.google.com (mail-wm1-f72.google.com
 [209.85.128.72]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-178-pzifr3PYPOaCxAU1JTkHBA-1; Wed, 01 Jul 2020 16:47:36 -0400
X-MC-Unique: pzifr3PYPOaCxAU1JTkHBA-1
Received: by mail-wm1-f72.google.com with SMTP id g6so19422509wmk.4
 for <xen-devel@lists.xenproject.org>; Wed, 01 Jul 2020 13:47:36 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:date:from:to:cc:subject:message-id:references
 :mime-version:content-disposition:in-reply-to;
 bh=MbiyraTJ4trv3FWb4qCBJhr5R70RrnDBw9rQbrUZ784=;
 b=T9LXlIa4WkX/hNvS0JTUDTVhQ4Nuc5ygJvRgicYzdxV67DOIf8Ji3HodbouPDyoK7U
 8ClqaP1WvvlE16QJBDaP3RXfKhNQYDKGKmK/GmdbD3gwLPuwU2CO4RJgGfOYF674IcBx
 YT8Px/md2ChNNIiCYehDlh2meoAfkD3qtddjVbiUdscLs8dBRN9bVWRoVxCNO39borUH
 tUGqBusPnrb/gvewWec1N6+7deeW0V0sojEd0PhKzHG95K2ptPS4d7olq3PEi0a6nmGK
 +vooptsiIizrk7X4uIb85kN6rYXdIonPpztChNiIDFIekrXL4pnEQRCqoxOqr6xiz7gC
 gTlg==
X-Gm-Message-State: AOAM533wwWHQnNmAjierE0nmdqzKhKdnFdhz9NsBHPplt+XxtZlcaVn7
 NdTxvNQTGC2d2s1zhPjKg3mvG7BOD9hVzx9zHmpKL7fuCv6YNWMakUEfExigk/vrig8QThHjNIw
 EmyMVksT8/8VY76RxV9/emUKDZ84=
X-Received: by 2002:adf:ed47:: with SMTP id u7mr30433368wro.201.1593636455515; 
 Wed, 01 Jul 2020 13:47:35 -0700 (PDT)
X-Google-Smtp-Source: ABdhPJzi4/QeFIvFvIhW8MFrahcm+NHf1Nr8du8L4WlMn+iG6s23576AuHveqgE2W9yZnAvkgrNyJg==
X-Received: by 2002:adf:ed47:: with SMTP id u7mr30433344wro.201.1593636455219; 
 Wed, 01 Jul 2020 13:47:35 -0700 (PDT)
Received: from redhat.com (bzq-79-182-31-92.red.bezeqint.net. [79.182.31.92])
 by smtp.gmail.com with ESMTPSA id
 h2sm8337653wrw.62.2020.07.01.13.47.31
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 01 Jul 2020 13:47:32 -0700 (PDT)
Date: Wed, 1 Jul 2020 16:47:29 -0400
From: "Michael S. Tsirkin" <mst@redhat.com>
To: Stefano Stabellini <sstabellini@kernel.org>
Subject: Re: [PATCH] xen: introduce xen_vring_use_dma
Message-ID: <20200701164501-mutt-send-email-mst@kernel.org>
References: <20200624050355-mutt-send-email-mst@kernel.org>
 <alpine.DEB.2.21.2006241047010.8121@sstabellini-ThinkPad-T480s>
 <20200624163940-mutt-send-email-mst@kernel.org>
 <alpine.DEB.2.21.2006241351430.8121@sstabellini-ThinkPad-T480s>
 <20200624181026-mutt-send-email-mst@kernel.org>
 <alpine.DEB.2.21.2006251014230.8121@sstabellini-ThinkPad-T480s>
 <20200626110629-mutt-send-email-mst@kernel.org>
 <alpine.DEB.2.21.2006291621300.8121@sstabellini-ThinkPad-T480s>
 <20200701133456.GA23888@infradead.org>
 <alpine.DEB.2.21.2007011020320.8121@sstabellini-ThinkPad-T480s>
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2007011020320.8121@sstabellini-ThinkPad-T480s>
Authentication-Results: relay.mimecast.com;
 auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=mst@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: jgross@suse.com, Peng Fan <peng.fan@nxp.com>, konrad.wilk@oracle.com,
 jasowang@redhat.com, x86@kernel.org, linux-kernel@vger.kernel.org,
 virtualization@lists.linux-foundation.org,
 Christoph Hellwig <hch@infradead.org>, iommu@lists.linux-foundation.org,
 linux-imx@nxp.com, xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com,
 linux-arm-kernel@lists.infradead.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Wed, Jul 01, 2020 at 10:34:53AM -0700, Stefano Stabellini wrote:
> On Wed, 1 Jul 2020, Christoph Hellwig wrote:
> > On Mon, Jun 29, 2020 at 04:46:09PM -0700, Stefano Stabellini wrote:
> > > > I could imagine some future Xen hosts setting a flag somewhere in the
> > > > platform capability saying "no xen specific flag, rely on
> > > > "VIRTIO_F_ACCESS_PLATFORM". Then you set that accordingly in QEMU.
> > > > How about that?
> > > 
> > > Yes, that would be fine and there is no problem implementing something
> > > like that when we get virtio support in Xen. Today there are still no
> > > virtio interfaces provided by Xen to ARM guests (no virtio-block/net,
> > > etc.)
> > > 
> > > In fact, in both cases we are discussing virtio is *not* provided by
> > > Xen; it is a firmware interface to something entirely different:
> > > 
> > > 1) virtio is used to talk to a remote AMP processor (RPMesg)
> > > 2) virtio is used to talk to a secure-world firmware/OS (Trusty)
> > >
> > > VIRTIO_F_ACCESS_PLATFORM is not set by Xen in these cases but by RPMesg
> > > and by Trusty respectively. I don't know if Trusty should or should not
> > > set VIRTIO_F_ACCESS_PLATFORM, but I think Linux should still work
> > > without issues.
> > > 
> > 
> > Any virtio implementation that is not in control of the memory map
> > (aka not the hypervisor) absolutely must set VIRTIO_F_ACCESS_PLATFORM,
> > else it is completely broken.
> 
> Lots of broken virtio implementations out there it would seem :-(

Not really: most virtio implementations are in full control of
memory, being part of the hypervisor.

> 
> > > The xen_domain() check in Linux makes it so that vring_use_dma_api
> > > returns the opposite value on native Linux compared to Linux as Xen/ARM
> > > DomU by "accident". By "accident" because there is no architectural
> > > reason why Linux Xen/ARM DomU should behave differently compared to
> > > native Linux in this regard.
> > > 
> > > I hope that now it is clearer why I think the if (xen_domain()) check
> > > needs to be improved anyway, even if we fix generic dma_ops with virtio
> > > interfaces missing VIRTIO_F_ACCESS_PLATFORM.
> > 
> > IMHO that Xen quirk should never have been added in this form..
> 
> Would you be in favor of a more flexible check along the lines of the
> one proposed in the patch that started this thread:
> 
>     if (xen_vring_use_dma())
>             return true;
> 
> 
> xen_vring_use_dma would be implemented so that it returns true when
> xen_swiotlb is required and false otherwise.

I'll need to think about it. Sounds reasonable on the surface ...

-- 
MST



From xen-devel-bounces@lists.xenproject.org Wed Jul 01 21:03:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jul 2020 21:03:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jqjsy-0000vO-1l; Wed, 01 Jul 2020 21:03:16 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=r09v=AM=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jqjsw-0000v4-VP
 for xen-devel@lists.xenproject.org; Wed, 01 Jul 2020 21:03:15 +0000
X-Inumbo-ID: 412823ec-bbde-11ea-8786-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 412823ec-bbde-11ea-8786-12813bfff9fa;
 Wed, 01 Jul 2020 21:03:04 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=awlZpy2hQb4uT+P9TsUbt5ULJdxLkCKKGsTTTVUqmDg=; b=ZMEMVTWn0vIlFh+yzx7kzvVUe
 78XzD1NgDNBYMVjtFQNu+Boowpeym0n1MrD8tLdM3oVasTQzzGURLMY0l/U73uIxSCs02odprfRqx
 wKqPSkZHrs6sEUEBllV//XFLSwvNVkMtsXx5MTkI1PQQB31Su2BLhxxS5Th1rkgIhFxqc=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jqjsl-0003RN-Et; Wed, 01 Jul 2020 21:03:03 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jqjsk-0006Uv-VT; Wed, 01 Jul 2020 21:03:03 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jqjsk-000428-Uk; Wed, 01 Jul 2020 21:03:02 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151503-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 151503: regressions - FAIL
X-Osstest-Failures: linux-5.4:test-amd64-amd64-examine:memdisk-try-append:fail:regression
 linux-5.4:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 linux-5.4:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 linux-5.4:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 linux-5.4:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
X-Osstest-Versions-This: linux=e75220890bf6b37c5f7b1dbd81d8292ed6d96643
X-Osstest-Versions-That: linux=4e9688ad3d36e8f73c73e435f53da5ae1cd91a70
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 01 Jul 2020 21:03:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151503 linux-5.4 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151503/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-examine      4 memdisk-try-append       fail REGR. vs. 151339

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop             fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop              fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop             fail never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail   never pass
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass

version targeted for testing:
 linux                e75220890bf6b37c5f7b1dbd81d8292ed6d96643
baseline version:
 linux                4e9688ad3d36e8f73c73e435f53da5ae1cd91a70

Last test of basis   151339  2020-06-24 16:09:27 Z    7 days
Testing same since   151503  2020-07-01 09:18:19 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Plattner <aplattner@nvidia.com>
  Aditya Pakki <pakki001@umn.edu>
  Al Cooper <alcooperx@gmail.com>
  Al Viro <viro@zeniv.linux.org.uk>
  Alan Stern <stern@rowland.harvard.edu>
  Alex Deucher <alexander.deucher@amd.com>
  Alexander Lobakin <alobakin@marvell.com>
  Alexei Starovoitov <ast@kernel.org>
  Andrew Morton <akpm@linux-foundation.org>
  Anna Schumaker <Anna.Schumaker@Netapp.com>
  Anton Eidelman <anton@lightbitslabs.com>
  Ard Biesheuvel <ardb@kernel.org>
  Ariel Elior <ariel.elior@marvell.com>
  Bernard Zhao <bernard@vivo.com>
  Borislav Petkov <bp@suse.de>
  Charles Keepax <ckeepax@opensource.cirrus.com>
  Chihhao Chen <chihhao.chen@mediatek.com>
  Christian Brauner <christian.brauner@ubuntu.com>
  Christoph Hellwig <hch@lst.de>
  Chuck Lever <chuck.lever@oracle.com>
  Chuhong Yuan <hslester96@gmail.com>
  Claudiu Manoil <claudiu.manoil@nxp.com>
  Corey Minyard <cminyard@mvista.com>
  Dan Carpenter <dan.carpenter@oracle.com>
  Daniel Borkmann <daniel@iogearbox.net>
  Daniel Gomez <dagmcr@gmail.com>
  Daniel Wagner <dwagner@suse.de>
  Darrick J. Wong <darrick.wong@oracle.com>
  David Christensen <drc@linux.vnet.ibm.com>
  David Howells <dhowells@redhat.com>
  David S. Miller <davem@davemloft.net>
  David Sterba <dsterba@suse.com>
  Denis Efremov <efremov@linux.com>
  Denis Kirjanov <denis.kirjanov@suse.com>
  Denis Kirjanov <kda@linux-powerpc.org>
  Dennis Dalessandro <dennis.dalessandro@intel.com>
  Dinghao Liu <dinghao.liu@zju.edu.cn>
  Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
  Doug Berger <opendmb@gmail.com>
  Drew Fustini <drew@beagleboard.org>
  Eddie James <eajames@linux.ibm.com>
  Eric Dumazet <edumazet@google.com>
  Fabian Vogt <fvogt@suse.de>
  Fan Guo <guofan5@huawei.com>
  Felipe Balbi <balbi@kernel.org>
  Felix Kuehling <Felix.Kuehling@amd.com>
  Fenghua Yu <fenghua.yu@intel.com>
  Filipe Manana <fdmanana@suse.com>
  Florian Fainelli <f.fainelli@gmail.com>
  Gao Xiang <hsiangkao@redhat.com>
  Gaurav Singh <gaurav1086@gmail.com>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  guodeqing <geffrey.guo@huawei.com>
  Hangbin Liu <liuhangbin@gmail.com>
  Heiko Carstens <heiko.carstens@de.ibm.com>
  Herbert Xu <herbert@gondor.apana.org.au>
  Huaisheng Ye <yehs1@lenovo.com>
  Huy Nguyen <huyn@mellanox.com>
  Igor Russkikh <irusskikh@marvell.com>
  Ilya Ponetayev <i.ponetaev@ndmsystems.com>
  Ingo Molnar <mingo@kernel.org>
  Jan Kara <jack@suse.cz>
  Jason A. Donenfeld <Jason@zx2c4.com>
  Jason Gunthorpe <jgg@mellanox.com>
  Jason Gunthorpe <jgg@nvidia.com>
  Jens Axboe <axboe@kernel.dk>
  Jeremy Kerr <jk@ozlabs.org>
  Jesper Dangaard Brouer <brouer@redhat.com>
  Jiping Ma <jiping.ma2@windriver.com>
  Joakim Tjernlund <joakim.tjernlund@infinera.com>
  Joel Stanley <joel@jms.id.au>
  Joerg Roedel <jroedel@suse.de>
  John Fastabend <john.fastabend@gmail.com>
  John Stultz <john.stultz@linaro.org>
  Jozsef Kadlecsik <kadlec@netfilter.org>
  Julian Wiedmann <jwi@linux.ibm.com>
  Junxiao Bi <junxiao.bi@oracle.com>
  Juri Lelli <juri.lelli@redhat.com>
  Kai-Heng Feng <kai.heng.feng@canonical.com>
  Kees Cook <keescook@chromium.org>
  Lalithambika Krishnakumar <lalithambika.krishnakumar@intel.com>
  Laurence Tratt <laurie@tratt.net>
  Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com>
  Leon Romanovsky <leonro@mellanox.com>
  Li Jun <jun.li@nxp.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Linus Walleij <linus.walleij@linaro.org>
  Longfang Liu <liulongfang@huawei.com>
  Lorenzo Bianconi <lorenzo@kernel.org>
  Lu Baolu <baolu.lu@linux.intel.com>
  Luis Chamberlain <mcgrof@kernel.org>
  Macpaul Lin <macpaul.lin@mediatek.com>
  Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
  Mans Rullgard <mans@mansr.com>
  Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
  Marek Vasut <marex@denx.de>
  Mark Brown <broonie@kernel.org>
  Mark Rutland <mark.rutland@arm.com>
  Mark Zhang <markz@mellanox.com>
  Martin K. Petersen <martin.petersen@oracle.com>
  Martin Schwidefsky <schwidefsky@de.ibm.com>
  Masahiro Yamada <masahiroy@kernel.org>
  Masami Hiramatsu <mhiramat@kernel.org>
  Mathias Nyman <mathias.nyman@linux.intel.com>
  Matt Fleming <matt@codeblueprint.co.uk>
  Matthew Hagan <mnhagan88@gmail.com>
  Michal Hocko <mhocko@suse.com>
  Michal Kalderon <michal.kalderon@marvell.com>
  Mike Snitzer <snitzer@redhat.com>
  Mikulas Patocka <mpatocka@redhat.com>
  Minas Harutyunyan <hminas@synopsys.com>
  Minas Harutyunyan <Minas.Harutyunyan@synopsys.com>
  Muchun Song <songmuchun@bytedance.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Nathan Huckleberry <nhuck@google.com>
  Navid Emamdoost <navid.emamdoost@gmail.com>
  Neal Cardwell <ncardwell@google.com>
  Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
  Olga Kornievskaia <kolga@netapp.com>
  Olga Kornievskaia <olga.kornievskaia@gmail.com>
  Oliver Neukum <oneukum@suse.com>
  Pablo Neira Ayuso <pablo@netfilter.org>
  Palmer Dabbelt <palmerdabbelt@google.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Peter Chen <peter.chen@nxp.com>
  Peter Zijlstra (Intel) <peterz@infradead.org>
  Qiushi Wu <wu000273@umn.edu>
  Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
  Reinette Chatre <reinette.chatre@intel.com>
  Ren Xudong <renxudong1@huawei.com>
  Robert Nelson <robertcnelson@gmail.com>
  Robin Gong <yibin.gong@nxp.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Roman Gushchin <guro@fb.com>
  Russell King <rmk+kernel@armlinux.org.uk>
  Sabrina Dubroca <sd@queasysnail.net>
  Sagi Grimberg <sagi@grimberg.me>
  Sami Tolvanen <samitolvanen@google.com>
  Sasha Levin <sashal@kernel.org>
  Sean Christopherson <sean.j.christopherson@intel.com>
  SeongJae Park <sjpark@amazon.de>
  Shawn Guo <shawnguo@kernel.org>
  Shay Drory <shayd@mellanox.com>
  Shengjiu Wang <shengjiu.wang@nxp.com>
  Soheil Hassas Yeganeh <soheil@google.com>
  Srinivas Kandagatla <srinivas.kandagatla@linaro.org>
  Stanislav Fomichev <sdf@google.com>
  Steffen Klassert <steffen.klassert@secunet.com>
  Steffen Maier <maier@linux.ibm.com>
  Steve French <stfrench@microsoft.com>
  Steven Rostedt (VMware) <rostedt@goodmis.org>
  Sven Auhagen <sven.auhagen@voleatech.de>
  Sven Schnelle <svens@linux.ibm.com>
  Taehee Yoo <ap420073@gmail.com>
  Takashi Iwai <tiwai@suse.de>
  Tang Bin <tangbin@cmss.chinamobile.com>
  Tariq Toukan <tariqt@mellanox.com>
  Thierry Reding <treding@nvidia.com>
  Thomas Falcon <tlfalcon@linux.ibm.com>
  Thomas Gleixner <tglx@linutronix.de>
  Thomas Martitz <t.martitz@avm.de>
  Todd Kjos <tkjos@google.com>
  Toke Høiland-Jørgensen <toke@redhat.com>
  Tom Seewald <tseewald@gmail.com>
  Tomasz Meresiński <tomasz@meresinski.eu>
  Tony Lindgren <tony@atomide.com>
  Trond Myklebust <trond.myklebust@hammerspace.com>
  Valentin Longchamp <valentin@longchamp.me>
  Vasily Averin <vvs@virtuozzo.com>
  Vasily Gorbik <gor@linux.ibm.com>
  Vidya Sagar <vidyas@nvidia.com>
  Vincent Chen <vincent.chen@sifive.com>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Waiman Long <longman@redhat.com>
  Wang Hai <wanghai38@huawei.com>
  Weiping Zhang <zhangweiping@didiglobal.com>
  Wenhui Sheng <Wenhui.Sheng@amd.com>
  Will Deacon <will@kernel.org>
  Willem de Bruijn <willemb@google.com>
  Wolfram Sang <wsa@kernel.org>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xin Tan <tanxin.ctf@gmail.com>
  Xiyu Yang <xiyuyang19@fudan.edu.cn>
  Yang Yingliang <yangyingliang@huawei.com>
  Yash Shah <yash.shah@sifive.com>
  Ye Bin <yebin10@huawei.com>
  Yick W. Tse <y_w_tse@yahoo.com.hk>
  Yonghong Song <yhs@fb.com>
  Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
  yu kuai <yukuai3@huawei.com>
  Zekun Shen <bruceshenzk@gmail.com>
  Zhang Shengju <zhangshengju@cmss.chinamobile.com>
  Zhang Xiaoxu <zhangxiaoxu5@huawei.com>
  Zheng Bin <zhengbin13@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 5469 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Jul 01 21:11:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jul 2020 21:11:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jqk0Q-0001mn-0W; Wed, 01 Jul 2020 21:10:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=r09v=AM=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jqk0P-0001mh-A4
 for xen-devel@lists.xenproject.org; Wed, 01 Jul 2020 21:10:57 +0000
X-Inumbo-ID: 59e43e42-bbdf-11ea-b7bb-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 59e43e42-bbdf-11ea-b7bb-bc764e2007e4;
 Wed, 01 Jul 2020 21:10:54 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=To4B3kJKr3Tc3XZfS9dqfF/cit2tAWNCsEeL1TwINAM=; b=LFo6xWRy6DKuTL9CA8EHNekbg
 7IVm+jSYg/HYOEdokiqfFfeaC8q2paBhPmCt9abpgc7Wk0EEfMgP1+GK4R2XoFn8OP4WWAc8NVJ8F
 SWe663R5pjYlSdsG3VCpAo0y4aOhCcGDgwkkKdjruHasfMjq+/GGNjRGWh6MErwYijcqk=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jqk0M-0003aE-HY; Wed, 01 Jul 2020 21:10:54 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jqk0L-0006jV-Uz; Wed, 01 Jul 2020 21:10:54 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jqk0L-0005j5-U7; Wed, 01 Jul 2020 21:10:53 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151500-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 151500: regressions - FAIL
X-Osstest-Failures: qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-amd64-libvirt-vhd:debian-di-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-libvirt-xsm:guest-start:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qcow2:debian-di-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:redhat-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-libvirt-pair:guest-start/debian:fail:regression
 qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-freebsd10-i386:guest-start:fail:regression
 qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:redhat-install:fail:regression
 qemu-mainline:test-amd64-amd64-libvirt-xsm:guest-start:fail:regression
 qemu-mainline:test-amd64-amd64-libvirt:guest-start:fail:regression
 qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-start:fail:regression
 qemu-mainline:test-amd64-i386-libvirt:guest-start:fail:regression
 qemu-mainline:test-amd64-amd64-libvirt-pair:guest-start/debian:fail:regression
 qemu-mainline:test-armhf-armhf-xl-vhd:debian-di-install:fail:regression
 qemu-mainline:test-arm64-arm64-libvirt-xsm:guest-start:fail:regression
 qemu-mainline:test-armhf-armhf-libvirt-raw:debian-di-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:windows-install:fail:regression
 qemu-mainline:test-armhf-armhf-libvirt:guest-start:fail:regression
 qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: qemuu=fc1bff958998910ec8d25db86cd2f53ff125f7ab
X-Osstest-Versions-That: qemuu=9e3903136d9acde2fb2dd9e967ba928050a6cb4a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 01 Jul 2020 21:10:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151500 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151500/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-ovmf-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-win7-amd64 10 windows-install  fail REGR. vs. 151065
 test-amd64-amd64-libvirt-vhd 10 debian-di-install        fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-win7-amd64 10 windows-install   fail REGR. vs. 151065
 test-amd64-amd64-qemuu-nested-intel 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-libvirt-xsm  12 guest-start              fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-ovmf-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-qemuu-nested-amd 10 debian-hvm-install  fail REGR. vs. 151065
 test-amd64-amd64-xl-qcow2    10 debian-di-install        fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-qemuu-rhel6hvm-amd 10 redhat-install     fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-ws16-amd64 10 windows-install   fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-debianhvm-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-libvirt-pair 21 guest-start/debian       fail REGR. vs. 151065
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-freebsd10-i386 11 guest-start            fail REGR. vs. 151065
 test-amd64-i386-qemuu-rhel6hvm-intel 10 redhat-install   fail REGR. vs. 151065
 test-amd64-amd64-libvirt-xsm 12 guest-start              fail REGR. vs. 151065
 test-amd64-amd64-libvirt     12 guest-start              fail REGR. vs. 151065
 test-amd64-i386-freebsd10-amd64 11 guest-start           fail REGR. vs. 151065
 test-amd64-i386-libvirt      12 guest-start              fail REGR. vs. 151065
 test-amd64-amd64-libvirt-pair 21 guest-start/debian      fail REGR. vs. 151065
 test-armhf-armhf-xl-vhd      10 debian-di-install        fail REGR. vs. 151065
 test-arm64-arm64-libvirt-xsm 12 guest-start              fail REGR. vs. 151065
 test-armhf-armhf-libvirt-raw 10 debian-di-install        fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-ws16-amd64 10 windows-install  fail REGR. vs. 151065
 test-armhf-armhf-libvirt     12 guest-start              fail REGR. vs. 151065

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                fc1bff958998910ec8d25db86cd2f53ff125f7ab
baseline version:
 qemuu                9e3903136d9acde2fb2dd9e967ba928050a6cb4a

Last test of basis   151065  2020-06-12 22:27:51 Z   18 days
Failing since        151101  2020-06-14 08:32:51 Z   17 days   19 attempts
Testing same since   151471  2020-06-30 05:19:07 Z    1 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ahmed Karaman <ahmedkhaledkaraman@gmail.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Bulekov <alxndr@bu.edu>
  Alexey Krasikov <alex-krasikov@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Allan Peramaki <aperamak@pp1.inet.fi>
  Andrew Jones <drjones@redhat.com>
  Ani Sinha <ani.sinha@nutanix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Ard Biesheuvel <ardb@kernel.org>
  Artyom Tarasenko <atar4qemu@gmail.com>
  Aurelien Jarno <aurelien@aurel32.net>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bin Meng <bin.meng@windriver.com>
  Cathy Zhang <cathy.zhang@intel.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Claudio Fontana <cfontana@suse.de>
  Cleber Rosa <crosa@redhat.com>
  Colin Xu <colin.xu@intel.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniele Buono <dbuono@linux.vnet.ibm.com>
  David Carlier <devnexen@gmail.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Derek Su <dereksu@qnap.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Emilio G. Cota <cota@braap.org>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Evgeny Yakovlev <eyakovlev@virtuozzo.com>
  fangying <fangying1@huawei.com>
  Farhan Ali <alifm@linux.ibm.com>
  Finn Thain <fthain@telegraphics.com.au>
  Geoffrey McRae <geoff@hostfission.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Greg Kurz <groug@kaod.org>
  Guenter Roeck <linux@roeck-us.net>
  Gustavo Romero <gromero@linux.ibm.com>
  Helge Deller <deller@gmx.de>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Ian Jiang <ianjiang.ict@gmail.com>
  Igor Mammedov <imammedo@redhat.com>
  Janne Grunau <j@jannau.net>
  Jason Wang <jasowang@redhat.com>
  Jay Zhou <jianjay.zhou@huawei.com>
  Jean-Christophe Dubois <jcd@tribudubois.net>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jingqi Liu <jingqi.liu@intel.com>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Joseph Myers <joseph@codesourcery.com>
  Julio Faracco <jcfaracco@gmail.com>
  Kevin Wolf <kwolf@redhat.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Leonid Bloch <lb.workbox@gmail.com>
  Leonid Bloch <lbloch@janustech.com>
  Li Qiang <liq3ea@gmail.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Like Xu <like.xu@linux.intel.com>
  Lingfeng Yang <lfy@google.com>
  Liran Alon <liran.alon@oracle.com>
  Lukas Straub <lukasstraub2@web.de>
  Maciej S. Szmigiero <maciej.szmigiero@oracle.com>
  Magnus Damm <magnus.damm@gmail.com>
  Mao Zhongyi <maozhongyi@cmss.chinamobile.com>
  Marcelo Tosatti <mtosatti@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Masahiro Yamada <masahiroy@kernel.org>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Michael S. Tsirkin <mst@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Raphael Norwitz <raphael.norwitz@nutanix.com>
  Richard Henderson <richard.henderson@linaro.org>
  Richard W.M. Jones <rjones@redhat.com>
  Robert Foley <robert.foley@linaro.org>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Roman Kagan <rkagan@virtuozzo.com>
  Roman Kagan <rvkagan@yandex-team.ru>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Sebastian Rasmussen <sebras@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Tao Xu <tao3.xu@intel.com>
  Thomas Huth <thuth@redhat.com>
  Tong Ho <tong.ho@xilinx.com>
  Volker Rümelin <vr_qemu@t-online.de>
  WangBowen <bowen.wang@intel.com>
  Wei Huang <wei.huang2@amd.com>
  Wei Wang <wei.w.wang@intel.com>
  Ying Fang <fangying1@huawei.com>
  Yoshinori Sato <ysato@users.sourceforge.jp>
  Yuri Benditovich <yuri.benditovich@daynix.com>
  Zhang Chen <chen.zhang@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 14425 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Jul 01 21:23:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jul 2020 21:23:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jqkCm-0002iT-8A; Wed, 01 Jul 2020 21:23:44 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=E/dH=AM=redhat.com=mst@srs-us1.protection.inumbo.net>)
 id 1jqkCl-0002iO-3i
 for xen-devel@lists.xenproject.org; Wed, 01 Jul 2020 21:23:43 +0000
X-Inumbo-ID: 2335f852-bbe1-11ea-8786-12813bfff9fa
Received: from us-smtp-1.mimecast.com (unknown [205.139.110.61])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 2335f852-bbe1-11ea-8786-12813bfff9fa;
 Wed, 01 Jul 2020 21:23:42 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
 s=mimecast20190719; t=1593638622;
 h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
 in-reply-to:in-reply-to:references:references;
 bh=m2r4biBmgD4usQWexXh9vBu72j/PA8sqQ6Ual1zjZVg=;
 b=Frob1QcE+rnIvaVsB8k1AiEVRY+qE9u3ThkorDAkHfQ0mG0lz2JmkYSDt5C651cZAoWaWA
 kQh8g2gRehTiVbt6jrEqSXH88vaBVuHDyCvWJj64CWz8ZMcr9xbN/neZ27wO5pwYZ9SNd3
 0UidxTkhxPfsL0eVn0tAHVmGSAhEVLk=
Received: from mail-wr1-f69.google.com (mail-wr1-f69.google.com
 [209.85.221.69]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-160-03hKAwVoNrmUTk_-qTHYyQ-1; Wed, 01 Jul 2020 17:23:40 -0400
X-MC-Unique: 03hKAwVoNrmUTk_-qTHYyQ-1
Received: by mail-wr1-f69.google.com with SMTP id i14so22362529wru.17
 for <xen-devel@lists.xenproject.org>; Wed, 01 Jul 2020 14:23:40 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:date:from:to:cc:subject:message-id:references
 :mime-version:content-disposition:in-reply-to;
 bh=m2r4biBmgD4usQWexXh9vBu72j/PA8sqQ6Ual1zjZVg=;
 b=Kig+qeBSZdBFbf0mZPn2ShswRAC8SwhOfduQL4bvthvOTBdUqUke7zpF1N+4ZiYQ1/
 3LwstOFjWGbS2ZM19PwfprI2gVV7TIPQhZtHAvrd2QtGdaESbL1aitwZdqahf53rgRCf
 ZuKO/9a6P9bcMGHOok34Zo0uKIEwo9i/eUHP8/B4Y+FdATE/e/w5HQtwbCUkuNJrsyme
 jM/Mh7dT8E7wzt2DxHVk4VUvHfJZ40RFeSKmd3rK/W4Vea9deFzekddh80bZji0+AhNn
 soRJeOR8q7rxCFFsvhIpm7gpJ+dUdCiIuFNvSwRWTrpP8wPnnWguqzT2ZrVZwvuAIEmJ
 BzFw==
X-Gm-Message-State: AOAM533+Uc53ROksm61AkgHwQzoOYoZ0lb7lyVimuLRQoUR3QVNQWqTt
 7qGrWMmk9lGzu98bI/CNIdHHHqFw8DAyl/XhRtM/hItTQidBUvOs3LAz5/nlnZl3eZvngas5Gm3
 oKm7uvZDy1KRjRaus2V2pYJBAgok=
X-Received: by 2002:a5d:6b8c:: with SMTP id n12mr28640728wrx.352.1593638617973; 
 Wed, 01 Jul 2020 14:23:37 -0700 (PDT)
X-Google-Smtp-Source: ABdhPJzQ5EOdK3/I7t3hzH2nXxlZ8gKRo3bNG63qK7yosafmw6Hvgwi6pb2IZMrXml3ZBqz0x5VNnw==
X-Received: by 2002:a5d:6b8c:: with SMTP id n12mr28640715wrx.352.1593638617762; 
 Wed, 01 Jul 2020 14:23:37 -0700 (PDT)
Received: from redhat.com (bzq-79-182-31-92.red.bezeqint.net. [79.182.31.92])
 by smtp.gmail.com with ESMTPSA id
 d63sm8905146wmc.22.2020.07.01.14.23.33
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 01 Jul 2020 14:23:35 -0700 (PDT)
Date: Wed, 1 Jul 2020 17:23:32 -0400
From: "Michael S. Tsirkin" <mst@redhat.com>
To: Stefano Stabellini <sstabellini@kernel.org>
Subject: Re: [PATCH] xen: introduce xen_vring_use_dma
Message-ID: <20200701172219-mutt-send-email-mst@kernel.org>
References: <20200624050355-mutt-send-email-mst@kernel.org>
 <alpine.DEB.2.21.2006241047010.8121@sstabellini-ThinkPad-T480s>
 <20200624163940-mutt-send-email-mst@kernel.org>
 <alpine.DEB.2.21.2006241351430.8121@sstabellini-ThinkPad-T480s>
 <20200624181026-mutt-send-email-mst@kernel.org>
 <alpine.DEB.2.21.2006251014230.8121@sstabellini-ThinkPad-T480s>
 <20200626110629-mutt-send-email-mst@kernel.org>
 <alpine.DEB.2.21.2006291621300.8121@sstabellini-ThinkPad-T480s>
 <20200701133456.GA23888@infradead.org>
 <alpine.DEB.2.21.2007011020320.8121@sstabellini-ThinkPad-T480s>
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2007011020320.8121@sstabellini-ThinkPad-T480s>
Authentication-Results: relay.mimecast.com;
 auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=mst@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: jgross@suse.com, Peng Fan <peng.fan@nxp.com>, konrad.wilk@oracle.com,
 jasowang@redhat.com, x86@kernel.org, linux-kernel@vger.kernel.org,
 virtualization@lists.linux-foundation.org,
 Christoph Hellwig <hch@infradead.org>, iommu@lists.linux-foundation.org,
 linux-imx@nxp.com, xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com,
 linux-arm-kernel@lists.infradead.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Wed, Jul 01, 2020 at 10:34:53AM -0700, Stefano Stabellini wrote:
> Would you be in favor of a more flexible check along the lines of the
> one proposed in the patch that started this thread:
> 
>     if (xen_vring_use_dma())
>             return true;
> 
> 
> xen_vring_use_dma would be implemented so that it returns true when
> xen_swiotlb is required and false otherwise.

Just to stress - with a patch like this virtio can *still* use the DMA API
if PLATFORM_ACCESS is set. So if the DMA API is broken on some platforms,
as you seem to be saying, you guys should fix it before doing something
like this.

-- 
MST



From xen-devel-bounces@lists.xenproject.org Wed Jul 01 21:43:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jul 2020 21:43:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jqkVi-0004Pw-0x; Wed, 01 Jul 2020 21:43:18 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Xe6U=AM=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jqkVg-0004Pr-UW
 for xen-devel@lists.xenproject.org; Wed, 01 Jul 2020 21:43:16 +0000
X-Inumbo-ID: de9b5ba8-bbe3-11ea-bb8b-bc764e2007e4
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id de9b5ba8-bbe3-11ea-bb8b-bc764e2007e4;
 Wed, 01 Jul 2020 21:43:16 +0000 (UTC)
Authentication-Results: esa3.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: xvQpnHoUKgxp7zPk6oTWRD8nXqjtMm8m3ejkiJXVQ5jCQmiYJLmjpvOptimV8ebvooDOn0KEdV
 UT8S6vdflilJ9d3ueBWmJJsVsI5J5RAcgBwvtmj0i7afgtJkCFEikz6IL9t9TDVh1T/+RQ6dIu
 POWHUBSA3PvJaa9PFZwdxatVp0sAGEEPT5znYJhxPavPFNTI1biKUsmoZryvVuJtjhQx1rZuoq
 c07MFEdH+HinR1C0FmVnf7B/2qqDUiqkWsUmiUY69GBm4zDDc0ic6DxeUTq7d0HVGHp9HbP8sy
 l5A=
X-SBRS: 2.7
X-MesageID: 21431194
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,301,1589256000"; d="scan'208";a="21431194"
Subject: Re: [PATCH v4 02/10] x86/vmx: add IPT cpu feature
To: =?UTF-8?Q?Micha=c5=82_Leszczy=c5=84ski?= <michal.leszczynski@cert.pl>,
 <xen-devel@lists.xenproject.org>
References: <cover.1593519420.git.michal.leszczynski@cert.pl>
 <7302dbfcd07dfaad9e50bb772673e588fcc4de67.1593519420.git.michal.leszczynski@cert.pl>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <f935f7f0-30e4-4ba2-588f-a8368a7b93b1@citrix.com>
Date: Wed, 1 Jul 2020 22:42:55 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <7302dbfcd07dfaad9e50bb772673e588fcc4de67.1593519420.git.michal.leszczynski@cert.pl>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Julien Grall <julien@xen.org>, Kevin Tian <kevin.tian@intel.com>, Stefano
 Stabellini <sstabellini@kernel.org>, tamas.lengyel@intel.com,
 Jun Nakajima <jun.nakajima@intel.com>, Wei Liu <wl@xen.org>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 luwei.kang@intel.com, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 30/06/2020 13:33, Michał Leszczyński wrote:
> diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
> index ca94c2bedc..b73d824357 100644
> --- a/xen/arch/x86/hvm/vmx/vmcs.c
> +++ b/xen/arch/x86/hvm/vmx/vmcs.c
> @@ -291,6 +291,12 @@ static int vmx_init_vmcs_config(void)
>          _vmx_cpu_based_exec_control &=
>              ~(CPU_BASED_CR8_LOAD_EXITING | CPU_BASED_CR8_STORE_EXITING);
>  
> +    rdmsrl(MSR_IA32_VMX_MISC, _vmx_misc_cap);
> +
> +    /* Check whether IPT is supported in VMX operation. */
> +    vmtrace_supported = cpu_has_ipt &&
> +                        (_vmx_misc_cap & VMX_MISC_PT_SUPPORTED);

There is a subtle corner case here.  vmx_init_vmcs_config() is called on
all CPUs, and is supposed to level things down safely if we find any
asymmetry.

If instead you go with something like this:

diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
index b73d824357..6960109183 100644
--- a/xen/arch/x86/hvm/vmx/vmcs.c
+++ b/xen/arch/x86/hvm/vmx/vmcs.c
@@ -294,8 +294,8 @@ static int vmx_init_vmcs_config(void)
     rdmsrl(MSR_IA32_VMX_MISC, _vmx_misc_cap);
 
     /* Check whether IPT is supported in VMX operation. */
-    vmtrace_supported = cpu_has_ipt &&
-                        (_vmx_misc_cap & VMX_MISC_PT_SUPPORTED);
+    if ( !(_vmx_misc_cap & VMX_MISC_PT_SUPPORTED) )
+        vmtrace_supported = false;
 
     if ( _vmx_cpu_based_exec_control &
CPU_BASED_ACTIVATE_SECONDARY_CONTROLS )
     {
diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
index c9b6af826d..9d7822e006 100644
--- a/xen/arch/x86/setup.c
+++ b/xen/arch/x86/setup.c
@@ -1092,6 +1092,9 @@ void __init noreturn __start_xen(unsigned long mbi_p)
 #endif
     }
 
+    /* Set a default for VMTrace before HVM setup occurs. */
+    vmtrace_supported = cpu_has_ipt;
+
     /* Sanitise the raw E820 map to produce a final clean version. */
     max_page = raw_max_page = init_e820(memmap_type, &e820_raw);
 

Then vmtrace_supported will also end up with the correct value in the
Broadwell case (IPT present, but unusable in VMX operation) and in the
no-VT-x case as well.
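To illustrate the asymmetry concern with a minimal, self-contained sketch
(not Xen code; the capability array and function names are hypothetical):
an unconditional assignment lets the last CPU probed win, whereas a
level-down only ever clears the flag, so a single unsupporting CPU
disables the feature system-wide.

```c
#include <stdbool.h>

/*
 * Level-down: start from the boot-time default (true) and only ever
 * clear the flag when a CPU lacks support, mirroring the suggested
 * vmx_init_vmcs_config() change above.
 */
static bool level_down(const bool *cpu_caps, int ncpus)
{
    bool supported = true;

    for ( int i = 0; i < ncpus; i++ )
        if ( !cpu_caps[i] )
            supported = false;

    return supported;
}

/* Plain assignment: whatever the last CPU probed reports wins. */
static bool last_assignment(const bool *cpu_caps, int ncpus)
{
    bool supported = false;

    for ( int i = 0; i < ncpus; i++ )
        supported = cpu_caps[i];

    return supported;
}
```

With an asymmetric set such as { true, false, true }, level_down()
correctly yields false, while last_assignment() yields true because the
final CPU re-enables the feature.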


> diff --git a/xen/common/domain.c b/xen/common/domain.c
> index 7cc9526139..0a33e0dfd6 100644
> --- a/xen/common/domain.c
> +++ b/xen/common/domain.c
> @@ -82,6 +82,8 @@ struct vcpu *idle_vcpu[NR_CPUS] __read_mostly;
>  
>  vcpu_info_t dummy_vcpu_info;
>  
> +bool_t vmtrace_supported;

bool please.  We're in the process of converting over to C99 bools, and
objection was taken to a tree-wide cleanup.
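One concrete reason for preferring C99 bool, sketched under the
assumption that the legacy typedef behaves like a plain char: _Bool
normalises any nonzero value to 1 on assignment, while a char-backed
flag can hold other values and then fail direct comparisons against 1
or true.

```c
#include <stdbool.h>

typedef char bool_t_like;  /* hypothetical stand-in for a pre-C99 bool typedef */

/* C99 _Bool: assignment normalises any nonzero value to exactly 1. */
static int stored_as_bool(int v)
{
    bool b = v;
    return b;
}

/* char-backed flag: the raw value is preserved as-is. */
static int stored_as_char(int v)
{
    bool_t_like b = v;
    return b;
}
```

So `stored_as_bool(2)` returns 1, but `stored_as_char(2)` returns 2,
which would compare unequal to true in a `== true` test.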

> +
>  static void __domain_finalise_shutdown(struct domain *d)
>  {
>      struct vcpu *v;
> diff --git a/xen/include/asm-x86/cpufeature.h b/xen/include/asm-x86/cpufeature.h
> index f790d5c1f8..8d7955dd87 100644
> --- a/xen/include/asm-x86/cpufeature.h
> +++ b/xen/include/asm-x86/cpufeature.h
> @@ -104,6 +104,7 @@
>  #define cpu_has_clwb            boot_cpu_has(X86_FEATURE_CLWB)
>  #define cpu_has_avx512er        boot_cpu_has(X86_FEATURE_AVX512ER)
>  #define cpu_has_avx512cd        boot_cpu_has(X86_FEATURE_AVX512CD)
> +#define cpu_has_ipt             boot_cpu_has(X86_FEATURE_IPT)
>  #define cpu_has_sha             boot_cpu_has(X86_FEATURE_SHA)
>  #define cpu_has_avx512bw        boot_cpu_has(X86_FEATURE_AVX512BW)
>  #define cpu_has_avx512vl        boot_cpu_has(X86_FEATURE_AVX512VL)
> diff --git a/xen/include/asm-x86/hvm/vmx/vmcs.h b/xen/include/asm-x86/hvm/vmx/vmcs.h
> index 906810592f..0e9a0b8de6 100644
> --- a/xen/include/asm-x86/hvm/vmx/vmcs.h
> +++ b/xen/include/asm-x86/hvm/vmx/vmcs.h
> @@ -283,6 +283,7 @@ extern u32 vmx_secondary_exec_control;
>  #define VMX_VPID_INVVPID_SINGLE_CONTEXT_RETAINING_GLOBAL 0x80000000000ULL
>  extern u64 vmx_ept_vpid_cap;
>  
> +#define VMX_MISC_PT_SUPPORTED                   0x00004000

VMX_MISC_PROC_TRACE, and ...

>  #define VMX_MISC_CR3_TARGET                     0x01ff0000
>  #define VMX_MISC_VMWRITE_ALL                    0x20000000
>  
> diff --git a/xen/include/public/arch-x86/cpufeatureset.h b/xen/include/public/arch-x86/cpufeatureset.h
> index 5ca35d9d97..0d3f15f628 100644
> --- a/xen/include/public/arch-x86/cpufeatureset.h
> +++ b/xen/include/public/arch-x86/cpufeatureset.h
> @@ -217,6 +217,7 @@ XEN_CPUFEATURE(SMAP,          5*32+20) /*S  Supervisor Mode Access Prevention */
>  XEN_CPUFEATURE(AVX512_IFMA,   5*32+21) /*A  AVX-512 Integer Fused Multiply Add */
>  XEN_CPUFEATURE(CLFLUSHOPT,    5*32+23) /*A  CLFLUSHOPT instruction */
>  XEN_CPUFEATURE(CLWB,          5*32+24) /*A  CLWB instruction */
> +XEN_CPUFEATURE(IPT,           5*32+25) /*   Intel Processor Trace */

.. any chance we can spell this out as PROC_TRACE?  The "Intel" part
won't be true if any of the other vendors choose to implement this
interface to the spec.

Otherwise, LGTM.

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed Jul 01 22:52:30 2020
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Subject: [qemu-mainline bisection] complete test-amd64-i386-libvirt-xsm
Message-Id: <E1jqlaG-0000C5-6V@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 01 Jul 2020 22:52:04 +0000

branch xen-unstable
xenbranch xen-unstable
job test-amd64-i386-libvirt-xsm
testid guest-start

Tree: libvirt git://xenbits.xen.org/libvirt.git
Tree: libvirt_keycodemapdb https://gitlab.com/keycodemap/keycodemapdb.git
Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://git.qemu.org/qemu.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  qemuu git://git.qemu.org/qemu.git
  Bug introduced:  81cb05732efb36971901c515b007869cc1d3a532
  Bug not present: d6b78ac8ecf94f56dbfbecc23fb4365d8772a41a
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/151514/


  commit 81cb05732efb36971901c515b007869cc1d3a532
  Author: Markus Armbruster <armbru@redhat.com>
  Date:   Tue Jun 9 14:23:37 2020 +0200
  
      qdev: Assert devices are plugged into a bus that can take them
      
      This would have caught some of the bugs I just fixed.
      
      Signed-off-by: Markus Armbruster <armbru@redhat.com>
      Reviewed-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
      Message-Id: <20200609122339.937862-23-armbru@redhat.com>


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/qemu-mainline/test-amd64-i386-libvirt-xsm.guest-start.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/qemu-mainline/test-amd64-i386-libvirt-xsm.guest-start --summary-out=tmp/151514.bisection-summary --basis-template=151065 --blessings=real,real-bisect qemu-mainline test-amd64-i386-libvirt-xsm guest-start
Searching for failure / basis pass:
 151500 fail [host=huxelrebe0] / 151149 [host=huxelrebe1] 151101 [host=albana0] 151065 [host=albana1] 151047 [host=fiano0] 150970 [host=fiano1] 150930 [host=elbling1] 150916 [host=chardonnay0] 150895 [host=elbling0] 150831 [host=pinot0] 150694 [host=rimava1] 150631 [host=debina1] 150608 [host=pinot1] 150593 [host=italia0] 150585 [host=chardonnay1] 150532 [host=debina0] 150492 [host=fiano0] 150457 ok.
Failure / basis pass flights: 151500 / 150457
(tree with no url: minios)
(tree in basispass but not in latest: libvirt_gnulib)
Tree: libvirt git://xenbits.xen.org/libvirt.git
Tree: libvirt_keycodemapdb https://gitlab.com/keycodemap/keycodemapdb.git
Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://git.qemu.org/qemu.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git
Latest 4268e187531eb370bc6fbac4496018bb7fef6716 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 00217f1919270007d7a911f89b32e39b9dcaa907 3c659044118e34603161457db9934a34f816d78b fc1bff958998910ec8d25db86cd2f53ff125f7ab 88ab0c15525ced2eefe39220742efe4769089ad8 da53345dd5ff7d3a34e83587fd375c0b7722f46c
Basis pass a1cd25b919509be2645dbe6f952d5263e0d4e4e5 317d3eeb963a515e15a63fa356d8ebcda7041a51 c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 568eee7cf319fa95183c8d3b5e8dcf6e078ab8b3 3c659044118e34603161457db9934a34f816d78b a20ab81d22300cca80325c284f21eefee99aa740 2e3de6253422112ae43e608661ba94ea6b345694 9f3e9139fa6c3d620eb08dff927518fc88200b8d
Generating revisions with ./adhoc-revtuple-generator  git://xenbits.xen.org/libvirt.git#a1cd25b919509be2645dbe6f952d5263e0d4e4e5-4268e187531eb370bc6fbac4496018bb7fef6716 https://gitlab.com/keycodemap/keycodemapdb.git#317d3eeb963a515e15a63fa356d8ebcda7041a51-27acf0ef828bf719b2053ba398b195829413dbdd git://xenbits.xen.org/linux-pvops.git#c3038e718a19fc596f7b1baba0f83d5146dc7784-c3038e718a19fc596f7b1baba0f83d5146dc7784 git://xenbits.xen.org/osstest/linux-firmware.git#c530a75c1e6a472b0eb9558310b518f0\
 dfcd8860-c530a75c1e6a472b0eb9558310b518f0dfcd8860 git://xenbits.xen.org/osstest/ovmf.git#568eee7cf319fa95183c8d3b5e8dcf6e078ab8b3-00217f1919270007d7a911f89b32e39b9dcaa907 git://xenbits.xen.org/qemu-xen-traditional.git#3c659044118e34603161457db9934a34f816d78b-3c659044118e34603161457db9934a34f816d78b git://git.qemu.org/qemu.git#a20ab81d22300cca80325c284f21eefee99aa740-fc1bff958998910ec8d25db86cd2f53ff125f7ab git://xenbits.xen.org/osstest/seabios.git#2e3de6253422112ae43e608661ba94ea6b345694-88ab0c1\
 5525ced2eefe39220742efe4769089ad8 git://xenbits.xen.org/xen.git#9f3e9139fa6c3d620eb08dff927518fc88200b8d-da53345dd5ff7d3a34e83587fd375c0b7722f46c
Auto packing the repository in background for optimum performance.
See "git help gc" for manual housekeeping.
error: The last gc run reported the following. Please correct the root cause
and remove gc.log.
Automatic cleanup will not be performed until the file is removed.

warning: There are too many unreachable loose objects; run 'git prune' to remove them.

Auto packing the repository in background for optimum performance.
See "git help gc" for manual housekeeping.
error: The last gc run reported the following. Please correct the root cause
and remove gc.log.
Automatic cleanup will not be performed until the file is removed.

warning: There are too many unreachable loose objects; run 'git prune' to remove them.

Use of uninitialized value $parents in array dereference at ./adhoc-revtuple-generator line 465.
Use of uninitialized value in concatenation (.) or string at ./adhoc-revtuple-generator line 465.
Use of uninitialized value $parents in array dereference at ./adhoc-revtuple-generator line 465.
Use of uninitialized value in concatenation (.) or string at ./adhoc-revtuple-generator line 465.
Use of uninitialized value $parents in array dereference at ./adhoc-revtuple-generator line 465.
Use of uninitialized value in concatenation (.) or string at ./adhoc-revtuple-generator line 465.
Use of uninitialized value $parents in array dereference at ./adhoc-revtuple-generator line 465.
Use of uninitialized value in concatenation (.) or string at ./adhoc-revtuple-generator line 465.
Loaded 65572 nodes in revision graph
Searching for test results:
 150457 pass a1cd25b919509be2645dbe6f952d5263e0d4e4e5 317d3eeb963a515e15a63fa356d8ebcda7041a51 c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 568eee7cf319fa95183c8d3b5e8dcf6e078ab8b3 3c659044118e34603161457db9934a34f816d78b a20ab81d22300cca80325c284f21eefee99aa740 2e3de6253422112ae43e608661ba94ea6b345694 9f3e9139fa6c3d620eb08dff927518fc88200b8d
 150492 [host=fiano0]
 150532 [host=debina0]
 150585 [host=chardonnay1]
 150593 [host=italia0]
 150631 [host=debina1]
 150608 [host=pinot1]
 150694 [host=rimava1]
 150831 [host=pinot0]
 150909 []
 150930 [host=elbling1]
 150916 [host=chardonnay0]
 150895 [host=elbling0]
 150899 []
 150970 [host=fiano1]
 151047 [host=fiano0]
 151101 [host=albana0]
 151065 [host=albana1]
 151149 [host=huxelrebe1]
 151221 fail irrelevant
 151175 fail irrelevant
 151241 fail irrelevant
 151286 fail irrelevant
 151269 fail irrelevant
 151328 fail irrelevant
 151304 fail irrelevant
 151377 fail irrelevant
 151353 fail c5815b31976f3982d18c7f6c1367ab6e403eb7eb 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1a992030522622c42aa063788b3276789c56c1e1 3c659044118e34603161457db9934a34f816d78b d4b78317b7cf8c0c635b70086503813f79ff21ec 2e3de6253422112ae43e608661ba94ea6b345694 fde76f895d0aa817a1207d844d793239c6639bc6
 151399 fail irrelevant
 151414 fail irrelevant
 151462 pass f45735786a3d9bee622f80eab75131b0da485798 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 6ff7c838d09224dd4e4c9b5b93152d8db1b19740 3c659044118e34603161457db9934a34f816d78b 49ee11555262a256afec592dfed7c5902d5eefd2 2e3de6253422112ae43e608661ba94ea6b345694 726c78d14dfe6ec76f5e4c7756821a91f0a04b34
 151443 pass a1cd25b919509be2645dbe6f952d5263e0d4e4e5 317d3eeb963a515e15a63fa356d8ebcda7041a51 c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 568eee7cf319fa95183c8d3b5e8dcf6e078ab8b3 3c659044118e34603161457db9934a34f816d78b a20ab81d22300cca80325c284f21eefee99aa740 2e3de6253422112ae43e608661ba94ea6b345694 9f3e9139fa6c3d620eb08dff927518fc88200b8d
 151432 pass a1cd25b919509be2645dbe6f952d5263e0d4e4e5 317d3eeb963a515e15a63fa356d8ebcda7041a51 c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 568eee7cf319fa95183c8d3b5e8dcf6e078ab8b3 3c659044118e34603161457db9934a34f816d78b a20ab81d22300cca80325c284f21eefee99aa740 2e3de6253422112ae43e608661ba94ea6b345694 9f3e9139fa6c3d620eb08dff927518fc88200b8d
 151446 fail irrelevant
 151433 fail irrelevant
 151455 pass 9170b0ee6f867d2be1165e83c80910b0e0ac952d 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 e1d24410da356731da70b3334f86343e11e207d2 3c659044118e34603161457db9934a34f816d78b 470dd165d152ff7ceac61c7b71c2b89220b3aad7 2e3de6253422112ae43e608661ba94ea6b345694 6a49b9a7920c82015381740905582b666160d955
 151434 fail 36b1e8669d85f5dbde4e40a6625df9a78085c2a0 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1a992030522622c42aa063788b3276789c56c1e1 3c659044118e34603161457db9934a34f816d78b 61fee7f45955cd0bf9b79be9fa9c7ebabb5e6a85 2e3de6253422112ae43e608661ba94ea6b345694 fde76f895d0aa817a1207d844d793239c6639bc6
 151463 pass f45735786a3d9bee622f80eab75131b0da485798 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8035edbe12f0f2a58e8fa9b06d05c8ee1c69ffae 3c659044118e34603161457db9934a34f816d78b 5d2f557b47dfbf8f23277a5bdd8473d4607c681a 2e3de6253422112ae43e608661ba94ea6b345694 51ca66c37371b10b378513af126646de22eddb17
 151436 fail f57a8cd3df0167d72b87fdd868a287608a741b73 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 322969adf1fb3d6cfbd613f35121315715aff2ed 3c659044118e34603161457db9934a34f816d78b bae31bfa48b9caecee25da3d5333901a126a06b4 2e3de6253422112ae43e608661ba94ea6b345694 fde76f895d0aa817a1207d844d793239c6639bc6
 151447 pass cf9e7726b38bc93a2728638d435199297d2b3aaa 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 3c659044118e34603161457db9934a34f816d78b 53550e81e2cafe7c03a39526b95cd21b5194d9b1 2e3de6253422112ae43e608661ba94ea6b345694 3625b04991b4d6affadd99d377ab84bac48dfff4
 151438 fail ea3320048897f5279bc49cb49d26f8099706a834 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 239b50a863704f7960525799eda82de061c7c458 3c659044118e34603161457db9934a34f816d78b eefe34ea4b82c2b47abe28af4cc7247d51553626 2e3de6253422112ae43e608661ba94ea6b345694 25636ed707cf1211ce846c7ec58f8643e435d7a7
 151449 pass cf9e7726b38bc93a2728638d435199297d2b3aaa 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 a2433243fbe471c250d7eddc2c7da325d91265fd 3c659044118e34603161457db9934a34f816d78b 250bc43a406f7d46e319abe87c19548d4f027828 2e3de6253422112ae43e608661ba94ea6b345694 3625b04991b4d6affadd99d377ab84bac48dfff4
 151439 fail 07e1a18accee37a2850f3825c85cb29b1599b1e0 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 239b50a863704f7960525799eda82de061c7c458 3c659044118e34603161457db9934a34f816d78b 3f429a3400822141651486193d6af625eeab05a5 2e3de6253422112ae43e608661ba94ea6b345694 71ca0e0ad000e690899936327eb09709ab182ade
 151468 pass a1cd25b919509be2645dbe6f952d5263e0d4e4e5 317d3eeb963a515e15a63fa356d8ebcda7041a51 c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 568eee7cf319fa95183c8d3b5e8dcf6e078ab8b3 3c659044118e34603161457db9934a34f816d78b a20ab81d22300cca80325c284f21eefee99aa740 2e3de6253422112ae43e608661ba94ea6b345694 9f3e9139fa6c3d620eb08dff927518fc88200b8d
 151440 fail 6f28865223292a816f1bfde589901a00156cf8a1 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 58ae92a993687d913aa6dd00ef3497a1bc5f6c40 3c659044118e34603161457db9934a34f816d78b 54cdfe511219b8051046be55a6e156c4f08ff7ff 2e3de6253422112ae43e608661ba94ea6b345694 71ca0e0ad000e690899936327eb09709ab182ade
 151456 pass 63d08bec0b2dace2fcefffb23a1fa5b14c473d67 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 567bc4b4ae8a975791382dd30ac413bc0d3ce88c 3c659044118e34603161457db9934a34f816d78b eea8f5df4ecc607d64f091b8d916fcc11a697541 2e3de6253422112ae43e608661ba94ea6b345694 2995d0afdf2d3fb44d07eada088db3613741db1e
 151435 fail irrelevant
 151442 fail 3a58613b0cf6a29960b909e6fd7420639ff794bd 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 a2433243fbe471c250d7eddc2c7da325d91265fd 3c659044118e34603161457db9934a34f816d78b 6675a653d2e57ab09c32c0ea7b44a1d6c40a7f58 2e3de6253422112ae43e608661ba94ea6b345694 3625b04991b4d6affadd99d377ab84bac48dfff4
 151464 blocked bc85c34ea91c46588423fa24e56e09ca5aab31dd 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8035edbe12f0f2a58e8fa9b06d05c8ee1c69ffae 3c659044118e34603161457db9934a34f816d78b 7d2410cea154bf915fb30179ebda3b17ac36e70e 2e3de6253422112ae43e608661ba94ea6b345694 780aba2779b834f19b2a6f0dcdea0e7e0b5e1622
 151450 fail 1eabe312ea4fa80922443ad73a950857c1f87786 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 3c659044118e34603161457db9934a34f816d78b 9fc7fc4d3909817555ce0af6bcb69dff1606140d 2e3de6253422112ae43e608661ba94ea6b345694 1251402caf8685f45d9d580f01583370f7e2d272
 151488 blocked 0137bf0dab2738d5443e2f407239856e2aa25bb3 317d3eeb963a515e15a63fa356d8ebcda7041a51 c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 568eee7cf319fa95183c8d3b5e8dcf6e078ab8b3 3c659044118e34603161457db9934a34f816d78b a20ab81d22300cca80325c284f21eefee99aa740 2e3de6253422112ae43e608661ba94ea6b345694 9f3e9139fa6c3d620eb08dff927518fc88200b8d
 151484 blocked 6f60d2a8503ce8c624bce6b53bf7b68476f5056f 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 568eee7cf319fa95183c8d3b5e8dcf6e078ab8b3 3c659044118e34603161457db9934a34f816d78b b8bee16e94df0fcd03bdad9969c30894418b0e6e 2e3de6253422112ae43e608661ba94ea6b345694 fced27b002c73c47c6c24ece2fe32b78157ad6b6
 151458 pass b934d5f42f29764277bc6f0f1cae19ada6f85e74 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 6ff7c838d09224dd4e4c9b5b93152d8db1b19740 3c659044118e34603161457db9934a34f816d78b 86e8c353f705f14f2f2fd7a6195cefa431aa24d9 2e3de6253422112ae43e608661ba94ea6b345694 058023b343d4e366864831db46e9b438e9e3a178
 151453 pass 63d08bec0b2dace2fcefffb23a1fa5b14c473d67 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3ee4f6cb360a877d171f2f9bb76b0d46d2cfa985 3c659044118e34603161457db9934a34f816d78b 9f1f264edbdf5516d6f208497310b3eedbc7b74c 2e3de6253422112ae43e608661ba94ea6b345694 2995d0afdf2d3fb44d07eada088db3613741db1e
 151474 blocked a5a297f387fee9e9aa4cbc2df6788c330fd33ad1 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ca407c7246bf405da6d9b1b9d93e5e7f17b4b1f9 3c659044118e34603161457db9934a34f816d78b 71b04329c4f7d5824a289ca5225e1883a278cf3b 2e3de6253422112ae43e608661ba94ea6b345694 e181db8ba4e0797b8f9b55996adfa71ffb5b4081
 151460 pass 63d08bec0b2dace2fcefffb23a1fa5b14c473d67 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 567bc4b4ae8a975791382dd30ac413bc0d3ce88c 3c659044118e34603161457db9934a34f816d78b 3575b0aea983ad57804c9af739ed8ff7bc168393 2e3de6253422112ae43e608661ba94ea6b345694 b91825f628c9a62cf2a3a0d972ea81484a8b7fce
 151482 blocked 6f60d2a8503ce8c624bce6b53bf7b68476f5056f 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 568eee7cf319fa95183c8d3b5e8dcf6e078ab8b3 3c659044118e34603161457db9934a34f816d78b cf2d1203dcfc2bf964453d83a2302231ce77f2dc 2e3de6253422112ae43e608661ba94ea6b345694 3351acaee706b8e238b031a456bf181f97f167c3
 151470 fail d482cf6bef484e697f1dbb99f2504e7d67b149e7 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 0f01cec52f4794777feb067e4fa0bfcedfdc124e 3c659044118e34603161457db9934a34f816d78b e7651153a8801dad6805d450ea8bef9b46c1adf5 88ab0c15525ced2eefe39220742efe4769089ad8 88cfd062e8318dfeb67c7d2eb50b6cd224b0738a
 151459 fail d482cf6bef484e697f1dbb99f2504e7d67b149e7 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 0f01cec52f4794777feb067e4fa0bfcedfdc124e 3c659044118e34603161457db9934a34f816d78b e7651153a8801dad6805d450ea8bef9b46c1adf5 88ab0c15525ced2eefe39220742efe4769089ad8 88cfd062e8318dfeb67c7d2eb50b6cd224b0738a
 151466 blocked 611e03127fcc84c7cd64b1da30140ca3b8fa1269 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 bb78cfbec07eda45118b630a09b0af549b43a135 3c659044118e34603161457db9934a34f816d78b fe0fe4735e798578097758781166cc221319b93d 2e3de6253422112ae43e608661ba94ea6b345694 d9f58cd54fe2f05e1f05e2fe254684bd1840de8e
 151477 blocked ab55a8a0871207de5fe194f55cbbcecae7a3cfe9 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 568eee7cf319fa95183c8d3b5e8dcf6e078ab8b3 3c659044118e34603161457db9934a34f816d78b 773861274ad75a62c7ecf70ecc8e4ba31ed62190 2e3de6253422112ae43e608661ba94ea6b345694 ad33a573c009d72466432b41ba0591c64e819c19
 151472 blocked a5a297f387fee9e9aa4cbc2df6788c330fd33ad1 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ca407c7246bf405da6d9b1b9d93e5e7f17b4b1f9 3c659044118e34603161457db9934a34f816d78b 250b1da35d579f42319af234f36207902ca4baa4 2e3de6253422112ae43e608661ba94ea6b345694 dde6174ada5280cd9a6396e3b12606360a0d29a3
 151471 fail irrelevant
 151493 pass 8a4f331e8cb662d787a310df07fefd080879abde 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 3c659044118e34603161457db9934a34f816d78b 210d18674a34bb43bd05cdd68d24fd03e161ff3d 2e3de6253422112ae43e608661ba94ea6b345694 1251402caf8685f45d9d580f01583370f7e2d272
 151475 blocked ab55a8a0871207de5fe194f55cbbcecae7a3cfe9 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 568eee7cf319fa95183c8d3b5e8dcf6e078ab8b3 3c659044118e34603161457db9934a34f816d78b d127de3baa64d1cabc8e1994e658688abb577ba9 2e3de6253422112ae43e608661ba94ea6b345694 ad33a573c009d72466432b41ba0591c64e819c19
 151481 blocked 6f60d2a8503ce8c624bce6b53bf7b68476f5056f 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 568eee7cf319fa95183c8d3b5e8dcf6e078ab8b3 3c659044118e34603161457db9934a34f816d78b b8bee16e94df0fcd03bdad9969c30894418b0e6e 2e3de6253422112ae43e608661ba94ea6b345694 3351acaee706b8e238b031a456bf181f97f167c3
 151478 blocked f6c79ca2af3607eb1cbbb7208c194f7cbf7a6abd 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 568eee7cf319fa95183c8d3b5e8dcf6e078ab8b3 3c659044118e34603161457db9934a34f816d78b 4ec2a1f53e8aaa22924614b64dde97321126943e 2e3de6253422112ae43e608661ba94ea6b345694 ad33a573c009d72466432b41ba0591c64e819c19
 151479 blocked 6f60d2a8503ce8c624bce6b53bf7b68476f5056f 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 568eee7cf319fa95183c8d3b5e8dcf6e078ab8b3 3c659044118e34603161457db9934a34f816d78b cf2d1203dcfc2bf964453d83a2302231ce77f2dc 2e3de6253422112ae43e608661ba94ea6b345694 422ec8fcf34cf961e81fbccd7d236fa2c1e678a8
 151486 blocked 4ccc69707e9e4a16d66c1bc7b5de55bc3943e3dd 317d3eeb963a515e15a63fa356d8ebcda7041a51 c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 568eee7cf319fa95183c8d3b5e8dcf6e078ab8b3 3c659044118e34603161457db9934a34f816d78b a20ab81d22300cca80325c284f21eefee99aa740 2e3de6253422112ae43e608661ba94ea6b345694 9f3e9139fa6c3d620eb08dff927518fc88200b8d
 151483 blocked 6297560761adf660497ab0053af18bab159f6b2f 317d3eeb963a515e15a63fa356d8ebcda7041a51 c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 568eee7cf319fa95183c8d3b5e8dcf6e078ab8b3 3c659044118e34603161457db9934a34f816d78b a20ab81d22300cca80325c284f21eefee99aa740 2e3de6253422112ae43e608661ba94ea6b345694 9f3e9139fa6c3d620eb08dff927518fc88200b8d
 151487 blocked d901fd6092414417ee59a4567d2c62f853a62d5c 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 568eee7cf319fa95183c8d3b5e8dcf6e078ab8b3 3c659044118e34603161457db9934a34f816d78b a20ab81d22300cca80325c284f21eefee99aa740 2e3de6253422112ae43e608661ba94ea6b345694 9f3e9139fa6c3d620eb08dff927518fc88200b8d
 151485 fail 4268e187531eb370bc6fbac4496018bb7fef6716 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 00217f1919270007d7a911f89b32e39b9dcaa907 3c659044118e34603161457db9934a34f816d78b fc1bff958998910ec8d25db86cd2f53ff125f7ab 88ab0c15525ced2eefe39220742efe4769089ad8 da53345dd5ff7d3a34e83587fd375c0b7722f46c
 151489 pass 21597d3caad8c94996de05e5d426178966a17860 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 6ff7c838d09224dd4e4c9b5b93152d8db1b19740 3c659044118e34603161457db9934a34f816d78b 49ee11555262a256afec592dfed7c5902d5eefd2 2e3de6253422112ae43e608661ba94ea6b345694 16c36d27f2644737c34d4a0fc1de525d0ee185ad
 151492 pass 63d08bec0b2dace2fcefffb23a1fa5b14c473d67 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 a4cfb842fca9693a330cb5435284c1ee8bfbbace 3c659044118e34603161457db9934a34f816d78b 23374a84c5f08e20ec2506a6322330d51f9134c5 2e3de6253422112ae43e608661ba94ea6b345694 2995d0afdf2d3fb44d07eada088db3613741db1e
 151495 fail 257aba2dafee0fec97f3f0a2d06fb82587aaf1a0 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 3c659044118e34603161457db9934a34f816d78b 3e80f6902c13f6edb6675c0f33edcbbf0163ec32 2e3de6253422112ae43e608661ba94ea6b345694 1251402caf8685f45d9d580f01583370f7e2d272
 151498 pass 8a4f331e8cb662d787a310df07fefd080879abde 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 3c659044118e34603161457db9934a34f816d78b 589b1be07c060e583d9f758ff0cb10e0f1ff242f 2e3de6253422112ae43e608661ba94ea6b345694 1251402caf8685f45d9d580f01583370f7e2d272
 151501 fail 8a4f331e8cb662d787a310df07fefd080879abde 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 3c659044118e34603161457db9934a34f816d78b da9630c57ee386f8beb571ba6bb4a98d546c42ca 2e3de6253422112ae43e608661ba94ea6b345694 1251402caf8685f45d9d580f01583370f7e2d272
 151505 fail 8a4f331e8cb662d787a310df07fefd080879abde 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 3c659044118e34603161457db9934a34f816d78b 007d1dbf72536ec1b847a944832e4de1546af2ac 2e3de6253422112ae43e608661ba94ea6b345694 1251402caf8685f45d9d580f01583370f7e2d272
 151507 pass 8a4f331e8cb662d787a310df07fefd080879abde 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 3c659044118e34603161457db9934a34f816d78b d6b78ac8ecf94f56dbfbecc23fb4365d8772a41a 2e3de6253422112ae43e608661ba94ea6b345694 1251402caf8685f45d9d580f01583370f7e2d272
 151508 fail 8a4f331e8cb662d787a310df07fefd080879abde 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 3c659044118e34603161457db9934a34f816d78b 81cb05732efb36971901c515b007869cc1d3a532 2e3de6253422112ae43e608661ba94ea6b345694 1251402caf8685f45d9d580f01583370f7e2d272
 151509 pass 8a4f331e8cb662d787a310df07fefd080879abde 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 3c659044118e34603161457db9934a34f816d78b d6b78ac8ecf94f56dbfbecc23fb4365d8772a41a 2e3de6253422112ae43e608661ba94ea6b345694 1251402caf8685f45d9d580f01583370f7e2d272
 151510 fail 8a4f331e8cb662d787a310df07fefd080879abde 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 3c659044118e34603161457db9934a34f816d78b 81cb05732efb36971901c515b007869cc1d3a532 2e3de6253422112ae43e608661ba94ea6b345694 1251402caf8685f45d9d580f01583370f7e2d272
 151500 fail 4268e187531eb370bc6fbac4496018bb7fef6716 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 00217f1919270007d7a911f89b32e39b9dcaa907 3c659044118e34603161457db9934a34f816d78b fc1bff958998910ec8d25db86cd2f53ff125f7ab 88ab0c15525ced2eefe39220742efe4769089ad8 da53345dd5ff7d3a34e83587fd375c0b7722f46c
 151512 pass 8a4f331e8cb662d787a310df07fefd080879abde 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 3c659044118e34603161457db9934a34f816d78b d6b78ac8ecf94f56dbfbecc23fb4365d8772a41a 2e3de6253422112ae43e608661ba94ea6b345694 1251402caf8685f45d9d580f01583370f7e2d272
 151514 fail 8a4f331e8cb662d787a310df07fefd080879abde 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 3c659044118e34603161457db9934a34f816d78b 81cb05732efb36971901c515b007869cc1d3a532 2e3de6253422112ae43e608661ba94ea6b345694 1251402caf8685f45d9d580f01583370f7e2d272
Searching for interesting versions
 Result found: flight 150457 (pass), for basis pass
 Result found: flight 151459 (fail), for basis failure (at ancestor ~22)
 Repro found: flight 151468 (pass), for basis pass
 Repro found: flight 151485 (fail), for basis failure
 0 revisions at 8a4f331e8cb662d787a310df07fefd080879abde 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 3c659044118e34603161457db9934a34f816d78b d6b78ac8ecf94f56dbfbecc23fb4365d8772a41a 2e3de6253422112ae43e608661ba94ea6b345694 1251402caf8685f45d9d580f01583370f7e2d272
No revisions left to test, checking graph state.
 Result found: flight 151507 (pass), for last pass
 Result found: flight 151508 (fail), for first failure
 Repro found: flight 151509 (pass), for last pass
 Repro found: flight 151510 (fail), for first failure
 Repro found: flight 151512 (pass), for last pass
 Repro found: flight 151514 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  qemuu git://git.qemu.org/qemu.git
  Bug introduced:  81cb05732efb36971901c515b007869cc1d3a532
  Bug not present: d6b78ac8ecf94f56dbfbecc23fb4365d8772a41a
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/151514/


  commit 81cb05732efb36971901c515b007869cc1d3a532
  Author: Markus Armbruster <armbru@redhat.com>
  Date:   Tue Jun 9 14:23:37 2020 +0200
  
      qdev: Assert devices are plugged into a bus that can take them
      
      This would have caught some of the bugs I just fixed.
      
      Signed-off-by: Markus Armbruster <armbru@redhat.com>
      Reviewed-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
      Message-Id: <20200609122339.937862-23-armbru@redhat.com>

dot: graph is too large for cairo-renderer bitmaps. Scaling by 0.231324 to fit
pnmtopng: 164 colors found
Revision graph left in /home/logs/results/bisect/qemu-mainline/test-amd64-i386-libvirt-xsm.guest-start.{dot,ps,png,html,svg}.
----------------------------------------
151514: tolerable FAIL

flight 151514 qemu-mainline real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/151514/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 test-amd64-i386-libvirt-xsm  12 guest-start             fail baseline untested


jobs:
 build-i386-libvirt                                           pass    
 test-amd64-i386-libvirt-xsm                                  fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Wed Jul 01 23:25:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 01 Jul 2020 23:25:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jqm6B-0004CY-R0; Wed, 01 Jul 2020 23:25:03 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=r09v=AM=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jqm6A-0004CT-RP
 for xen-devel@lists.xenproject.org; Wed, 01 Jul 2020 23:25:02 +0000
X-Inumbo-ID: 15607fca-bbf2-11ea-8793-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 15607fca-bbf2-11ea-8793-12813bfff9fa;
 Wed, 01 Jul 2020 23:25:00 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=e96kX2bS/U9XnKBGQl+L2deaIkCfVS8yh+/49ORlU+k=; b=u/Ye1PzoJHf9RaLhPJlJDJwTt
 zZpJxQ0RqFLpxrJZE7LpnggQBfRlVOUL18LGDM4fCFZDa9dloloRKAB+TQAsVrAnEpx4OGOq71h/P
 +O+VezwWDvcB1Lfd+1bcT4nNHslYuJU9wgsl3YhJX5LGwG7VisOutQKQhdYs5LyP872mQ=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jqm68-00060P-06; Wed, 01 Jul 2020 23:25:00 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jqm67-0004I7-Lc; Wed, 01 Jul 2020 23:24:59 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jqm67-0004Lo-L5; Wed, 01 Jul 2020 23:24:59 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151515-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 151515: tolerable all pass - PUSHED
X-Osstest-Failures: xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=0dbed3ad3366734fd23ee3fd1f9989c8c96b6052
X-Osstest-Versions-That: xen=3b7dab93f2401b08c673244c9ae0f92e08bd03ba
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 01 Jul 2020 23:24:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151515 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151515/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  0dbed3ad3366734fd23ee3fd1f9989c8c96b6052
baseline version:
 xen                  3b7dab93f2401b08c673244c9ae0f92e08bd03ba

Last test of basis   151511  2020-07-01 17:01:00 Z    0 days
Testing same since   151515  2020-07-01 21:04:44 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Stefano Stabellini <stefano.stabellini@xilinx.com>
  Volodymyr Babchuk <volodymyr_babchuk@epam.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   3b7dab93f2..0dbed3ad33  0dbed3ad3366734fd23ee3fd1f9989c8c96b6052 -> smoke


From xen-devel-bounces@lists.xenproject.org Thu Jul 02 01:42:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jul 2020 01:42:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jqoEd-0002TW-Sp; Thu, 02 Jul 2020 01:41:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=5R4B=AN=qq.com=jinchen1227@srs-us1.protection.inumbo.net>)
 id 1jqoEb-0002TR-R1
 for xen-devel@lists.xenproject.org; Thu, 02 Jul 2020 01:41:54 +0000
X-Inumbo-ID: 2bc2212a-bc05-11ea-bb8b-bc764e2007e4
Received: from smtpbgau1.qq.com (unknown [54.206.16.166])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2bc2212a-bc05-11ea-bb8b-bc764e2007e4;
 Thu, 02 Jul 2020 01:41:41 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=qq.com; s=s201512;
 t=1593654094; bh=k8KB57vn6CvpptSOhh7aUD6mLR0zOLEgcadtesUNcc8=;
 h=From:To:Subject:Mime-Version:Date:Message-ID;
 b=hYENp+RagolU4bRfhU3xi7/vo3M1HrpABO6XJ/8vFzCqQkomOXPV++0tLJfPX/V6I
 UzvOCJ8DAxAarZR8ghQe05JdGogWAULqWt4ga7VGPQDgNTWl4X/BUG4qq6gQfdzbDx
 9WGfGJ3iqxebL8LPNX1jrmAVZDv37WVr31YJhrB0=
X-QQ-FEAT: UcX0e83sQ0gij4JRpWjVZqwlck7d/ymes1KTTpuoOStCztPdktMMp5+IxozJT
 1tmxwTOzRNZzhcY2kCX9fdmOFnTcUEvnR8QUuX8TVTtWGRveHc4p0qk1spI+rmp96VoboRw
 6EvFKXDFL/zPCWr8rHuM/IobdXVwAcuKKWzr0GD73hoF7mKMRwXZhT7bF0wRRrRfrp/eW+A
 GDR/VsEaGrPcp5khkoFvd3WbUCtYWd6DdTD6abvgOorQr1h5s2hbBKXQPa8m5qWqz8n27Rv
 GY9EyGI6DZBYJmVvEqzcUvynA=
X-QQ-SSF: 00000000000000F000000000000000Z
X-QQ-XMAILINFO: N3bN8o5bCt8q7DOeuxGy9o+mNNdUOwFQSXTZZW3HhmNCtY/4hs5k/veBRR+aaA
 n1J87spyVllJbf68x2RY8jSR4uKamfSqf/qM9iApBy90hED/DMAaHlBc56Hue+R+j/h+6hledgbd/
 p7h3Rm2lqNvejYM9mBItmKk3ZIP1a1HyypS7m4mOl+E6Jz9azOnubrlKA6eEFfZoU56z2DeQvZkHI
 JxZaWNekE7m/TrPq3q+LulQjq9pXfFayfkVHAYFSilrAQOwiDoEIZBLTbzIRCGqi9+ePJ5nTLBPUD
 bE78wtuOgHqhZTvr7KIWAocHiC8F8UF6Z3AyouteTOKDT45yCASQQcHnaEWFOajjISV7G3Pdv6xdj
 29PSFPBs2GoQgBmLhCmVQtPFwCXdsEy2UZl6tKBXelYg/0MP3SX7nsaO6Cy3VuBa7VcXpKOuqgu92
 5mxB/4aXx2HttOhmUpPg+Cnv6dtg1Fd19bwSsRqfwGYD0vTt980yJX8S/BU4TaOic3sjp1z/sz7C0
 9X6336nUeOY1nmEJ8yoyMQ00npdNPFg2hDSiMl6ppZw6QaUG1zf4w4RZCA+lCGltm2gVUEQUtiH0j
 pYj/VlHxfaudMhFgxw6dI/Gr9XQUG42c1hy1gXPhg72P6eI3OtG6VF9x3134O1ffbutvHkwyz1E1U
 /AKQ5GVjQ5ZTjyRJipJgqqzWNqlmfNzCuPUhE8
X-HAS-ATTACH: no
X-QQ-BUSINESS-ORIGIN: 2
X-Originating-IP: 120.201.105.119
X-QQ-STYLE: 
X-QQ-mid: webmail712t1593654093t9775718
From: "=?gb18030?B?amluY2hlbg==?=" <jinchen1227@qq.com>
To: "=?gb18030?B?eGVuLWRldmVs?=" <xen-devel@lists.xenproject.org>
Subject: [Xen ARM64]  Save coredump log when xen/dom0 crash on ARM64?
Mime-Version: 1.0
Content-Type: multipart/alternative;
 boundary="----=_NextPart_5EFD3B4D_128C68C8_5A57067A"
Content-Transfer-Encoding: 8Bit
Date: Thu, 2 Jul 2020 09:41:33 +0800
X-Priority: 3
Message-ID: <tencent_F424A8312298D36ED25612607EF4BC341B0A@qq.com>
X-QQ-MIME: TCMime 1.0 by Tencent
X-Mailer: QQMail 2.x
X-QQ-Mailer: QQMail 2.x
X-QQ-SENDSIZE: 520
Received: from qq.com (unknown [127.0.0.1]) by smtp.qq.com (ESMTP) with SMTP
 id ; Thu, 02 Jul 2020 09:41:34 +0800 (CST)
Feedback-ID: webmail:qq.com:bgforeign:bgforeign12
X-QQ-Bgrelay: 1
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

This is a multi-part message in MIME format.

------=_NextPart_5EFD3B4D_128C68C8_5A57067A
Content-Type: text/plain;
	charset="gb18030"
Content-Transfer-Encoding: 8bit

Hello xen experts:

   Is there any way to save the xen and dom0 core dump logs when xen or dom0 crashes on an ARM64 platform?
   I find that kdump seems unable to run on the ARM64 platform.
   Are there any patches or another way to achieve this goal?
   Thank you very much!

------=_NextPart_5EFD3B4D_128C68C8_5A57067A--





From xen-devel-bounces@lists.xenproject.org Thu Jul 02 03:55:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jul 2020 03:55:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jqqJt-0004qg-0n; Thu, 02 Jul 2020 03:55:29 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wviA=AN=redhat.com=armbru@srs-us1.protection.inumbo.net>)
 id 1jqqJs-0004qb-Cb
 for xen-devel@lists.xenproject.org; Thu, 02 Jul 2020 03:55:28 +0000
X-Inumbo-ID: dd30e074-bc17-11ea-bca7-bc764e2007e4
Received: from us-smtp-delivery-1.mimecast.com (unknown [205.139.110.61])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id dd30e074-bc17-11ea-bca7-bc764e2007e4;
 Thu, 02 Jul 2020 03:55:27 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
 s=mimecast20190719; t=1593662126;
 h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
 content-transfer-encoding:content-transfer-encoding:
 in-reply-to:in-reply-to:references:references;
 bh=kmn5R5crZf3r748cF/7g0FhkHnoY7WFTkWnSXgnl0xM=;
 b=XHFkZAkpi/JEZGizroWtqzFmPFp4ZuAgpAWZDMRlktbQK1UXKQCHsiqLdWOQpA00LeOcSp
 XXbRUiOIagluDGUo+SOJ0Z0dzdKCoJA/q8GO/ZaIIlo0QaF2YNqUnT5VNV5H7GH6hfEC6y
 LNH6xgQhMmK4N6k2X8+yzkVJ+hn2/4c=
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-512-vQ6EYvkyNXOqhjKkymAqsw-1; Wed, 01 Jul 2020 23:55:24 -0400
X-MC-Unique: vQ6EYvkyNXOqhjKkymAqsw-1
Received: from smtp.corp.redhat.com (int-mx03.intmail.prod.int.phx2.redhat.com
 [10.5.11.13])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 995158015F7;
 Thu,  2 Jul 2020 03:55:22 +0000 (UTC)
Received: from blackfin.pond.sub.org (ovpn-112-143.ams2.redhat.com
 [10.36.112.143])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id 35CC479243;
 Thu,  2 Jul 2020 03:55:19 +0000 (UTC)
Received: by blackfin.pond.sub.org (Postfix, from userid 1000)
 id 9937511384A6; Thu,  2 Jul 2020 05:55:17 +0200 (CEST)
From: Markus Armbruster <armbru@redhat.com>
To: Jason Andryuk <jandryuk@gmail.com>
Subject: Re: [PATCH 2/2] xen: cleanup unrealized flash devices
References: <20200624121841.17971-1-paul@xen.org>
 <20200624121841.17971-3-paul@xen.org>
 <33e594dd-dbfa-7c57-1cf5-0852e8fc8e1d@redhat.com>
 <000701d64ef5$6568f660$303ae320$@xen.org>
 <9e591254-d215-d5af-38d2-fd5b65f84a43@redhat.com>
 <000801d64f75$c604f570$520ee050$@xen.org>
 <CAKf6xpvNTVqK263pdSARyoWnzP8g9SRoSqvhnLLwyYadjR1ChQ@mail.gmail.com>
Date: Thu, 02 Jul 2020 05:55:17 +0200
In-Reply-To: <CAKf6xpvNTVqK263pdSARyoWnzP8g9SRoSqvhnLLwyYadjR1ChQ@mail.gmail.com>
 (Jason Andryuk's message of "Wed, 1 Jul 2020 08:25:40 -0400")
Message-ID: <87o8oy8tay.fsf@dusky.pond.sub.org>
User-Agent: Gnus/5.13 (Gnus v5.13) Emacs/26.3 (gnu/linux)
MIME-Version: 1.0
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.13
Authentication-Results: relay.mimecast.com;
 auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=armbru@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Eduardo Habkost <ehabkost@redhat.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Paul Durrant <pdurrant@amazon.com>,
 Paul Durrant <paul@xen.org>, QEMU <qemu-devel@nongnu.org>,
 xen-devel <xen-devel@lists.xenproject.org>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Philippe =?utf-8?Q?Mathieu-Daud=C3=A9?= <philmd@redhat.com>,
 Richard Henderson <rth@twiddle.net>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Jason Andryuk <jandryuk@gmail.com> writes:

> On Wed, Jul 1, 2020 at 3:03 AM Paul Durrant <xadimgnik@gmail.com> wrote:
>>
>> > -----Original Message-----
>> > From: Philippe Mathieu-Daudé <philmd@redhat.com>
>> > Sent: 30 June 2020 18:27
>> > To: paul@xen.org; xen-devel@lists.xenproject.org; qemu-devel@nongnu.org
>> > Cc: 'Eduardo Habkost' <ehabkost@redhat.com>; 'Michael S. Tsirkin' <mst@redhat.com>; 'Paul Durrant'
>> > <pdurrant@amazon.com>; 'Jason Andryuk' <jandryuk@gmail.com>; 'Paolo Bonzini' <pbonzini@redhat.com>;
>> > 'Richard Henderson' <rth@twiddle.net>
>> > Subject: Re: [PATCH 2/2] xen: cleanup unrealized flash devices
>> >
>> > On 6/30/20 5:44 PM, Paul Durrant wrote:
>> > >> -----Original Message-----
>> > >> From: Philippe Mathieu-Daudé <philmd@redhat.com>
>> > >> Sent: 30 June 2020 16:26
>> > >> To: Paul Durrant <paul@xen.org>; xen-devel@lists.xenproject.org; qemu-devel@nongnu.org
>> > >> Cc: Eduardo Habkost <ehabkost@redhat.com>; Michael S. Tsirkin <mst@redhat.com>; Paul Durrant
>> > >> <pdurrant@amazon.com>; Jason Andryuk <jandryuk@gmail.com>; Paolo Bonzini <pbonzini@redhat.com>;
>> > >> Richard Henderson <rth@twiddle.net>
>> > >> Subject: Re: [PATCH 2/2] xen: cleanup unrealized flash devices
>> > >>
>> > >> On 6/24/20 2:18 PM, Paul Durrant wrote:
>> > >>> From: Paul Durrant <pdurrant@amazon.com>
>> > >>>
>> > >>> The generic pc_machine_initfn() calls pc_system_flash_create() which creates
>> > >>> 'system.flash0' and 'system.flash1' devices. These devices are then realized
>> > >>> by pc_system_flash_map() which is called from pc_system_firmware_init() which
>> > >>> itself is called via pc_memory_init(). The latter however is not called when
>> > >>> xen_enable() is true and hence the following assertion fails:
>> > >>>
>> > >>> qemu-system-i386: hw/core/qdev.c:439: qdev_assert_realized_properly:
>> > >>> Assertion `dev->realized' failed
>> > >>>
>> > >>> These flash devices are unneeded when using Xen so this patch avoids the
>> > >>> assertion by simply removing them using pc_system_flash_cleanup_unused().
>> > >>>
>> > >>> Reported-by: Jason Andryuk <jandryuk@gmail.com>
>> > >>> Fixes: ebc29e1beab0 ("pc: Support firmware configuration with -blockdev")
>> > >>> Signed-off-by: Paul Durrant <pdurrant@amazon.com>
>> > >>> Tested-by: Jason Andryuk <jandryuk@gmail.com>
>> > >>> ---
>> > >>> Cc: Paolo Bonzini <pbonzini@redhat.com>
>> > >>> Cc: Richard Henderson <rth@twiddle.net>
>> > >>> Cc: Eduardo Habkost <ehabkost@redhat.com>
>> > >>> Cc: "Michael S. Tsirkin" <mst@redhat.com>
>> > >>> Cc: Marcel Apfelbaum <marcel.apfelbaum@gmail.com>
>> > >>> ---
>> > >>>  hw/i386/pc_piix.c    | 9 ++++++---
>> > >>>  hw/i386/pc_sysfw.c   | 2 +-
>> > >>>  include/hw/i386/pc.h | 1 +
>> > >>>  3 files changed, 8 insertions(+), 4 deletions(-)
>> > >>>
>> > >>> diff --git a/hw/i386/pc_piix.c b/hw/i386/pc_piix.c
>> > >>> index 1497d0e4ae..977d40afb8 100644
>> > >>> --- a/hw/i386/pc_piix.c
>> > >>> +++ b/hw/i386/pc_piix.c
>> > >>> @@ -186,9 +186,12 @@ static void pc_init1(MachineState *machine,
>> > >>>      if (!xen_enabled()) {
>> > >>>          pc_memory_init(pcms, system_memory,
>> > >>>                         rom_memory, &ram_memory);
>> > >>> -    } else if (machine->kernel_filename != NULL) {
>> > >>> -        /* For xen HVM direct kernel boot, load linux here */
>> > >>> -        xen_load_linux(pcms);
>> > >>> +    } else {
>> > >>> +        pc_system_flash_cleanup_unused(pcms);
>> > >>
>> > >> TIL pc_system_flash_cleanup_unused().
>> > >>
>> > >> What about restricting at the source?
>> > >>
>> > >
>> > > And leave the devices in place? They are not relevant for Xen, so why not clean up?
>> >
>> > No, I meant to not create them in the first place, instead of
>> > create+destroy.
>> >
>> > Anyway what you did works, so I don't have any problem.
>>
>> IIUC Jason originally tried restricting creation but encountered a problem because xen_enabled() would always return false at that point, because machine creation occurs before accelerators are initialized.
>
> Correct.  Quoting my previous email:
> """
> Removing the call to pc_system_flash_create() from pc_machine_initfn()
> lets QEMU startup and run a Xen HVM again.  xen_enabled() doesn't work
> there since accelerators have not been initialized yes, I guess?
> """
>
> If you want to remove the creation in the first place, then I have two
> questions.  Why does pc_system_flash_create()/pc_pflash_create() get
> called so early creating the pflash devices?  Why aren't they just
> created as needed in pc_system_flash_map()?

commit ebc29e1beab02646702c8cb9a1d29b68f72ad503

    pc: Support firmware configuration with -blockdev

    [...]

    Properties need to be created in .instance_init() methods.  For PC
    machines, that's pc_machine_initfn().  To make alias properties work,
    we need to create the onboard flash devices there, too.  [...]

For context, read the entire commit message.  If you have questions
then, don't hesitate to ask them here.



From xen-devel-bounces@lists.xenproject.org Thu Jul 02 04:44:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jul 2020 04:44:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jqr5Q-0000hU-Vi; Thu, 02 Jul 2020 04:44:36 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CtVa=AN=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jqr5P-0000h4-AB
 for xen-devel@lists.xenproject.org; Thu, 02 Jul 2020 04:44:35 +0000
X-Inumbo-ID: b63adda6-bc1e-11ea-87c1-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b63adda6-bc1e-11ea-87c1-12813bfff9fa;
 Thu, 02 Jul 2020 04:44:28 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=Gl3jCGS8s73Jjr2QmQ66EFx6eq/saDyqnSmA27DSBEw=; b=iqIFxCxHT1Foa0dayzNvFFWH/
 etHZzCAMuxuF+gaf9mfjEGAZYScrW9utIq+g0G3NuF25E3f1h2uc80PN/fKf/tEU25EgRdWc6R/W/
 rOwCYN2BbO1Ba45L00nhovJqLXQr/6lYIX2pXRp/xE6RJz/0c5HHqpQ8Yi7lkR13dp2nM=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jqr5H-0008I8-Fh; Thu, 02 Jul 2020 04:44:27 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jqr5H-0004or-0Z; Thu, 02 Jul 2020 04:44:27 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jqr5G-0003gN-Vl; Thu, 02 Jul 2020 04:44:26 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151506-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 151506: tolerable FAIL
X-Osstest-Failures: xen-unstable:test-amd64-amd64-qemuu-nested-amd:xen-boot/l1:fail:heisenbug
 xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=23ca7ec0ba620db52a646d80e22f9703a6589f66
X-Osstest-Versions-That: xen=23ca7ec0ba620db52a646d80e22f9703a6589f66
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 02 Jul 2020 04:44:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151506 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151506/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-qemuu-nested-amd 14 xen-boot/l1           fail pass in 151491

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2 fail in 151491 never pass
 test-amd64-amd64-xl-rtds     18 guest-localmigrate/x10       fail  like 151491
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 151491
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 151491
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 151491
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 151491
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 151491
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 151491
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 151491
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 151491
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop             fail like 151491
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  23ca7ec0ba620db52a646d80e22f9703a6589f66
baseline version:
 xen                  23ca7ec0ba620db52a646d80e22f9703a6589f66

Last test of basis   151506  2020-07-01 10:55:16 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Thu Jul 02 07:20:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jul 2020 07:20:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jqtVw-0005o2-9j; Thu, 02 Jul 2020 07:20:08 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=DrmV=AN=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jqtVv-0005nx-6U
 for xen-devel@lists.xenproject.org; Thu, 02 Jul 2020 07:20:07 +0000
X-Inumbo-ID: 73e857f6-bc34-11ea-bb8b-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 73e857f6-bc34-11ea-bb8b-bc764e2007e4;
 Thu, 02 Jul 2020 07:20:06 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 731B3AF37;
 Thu,  2 Jul 2020 07:20:05 +0000 (UTC)
Subject: Re: [PATCH v2] xen/displif: Protocol version 2
To: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>,
 Oleksandr Andrushchenko <andr2000@gmail.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 "ian.jackson@eu.citrix.com" <ian.jackson@eu.citrix.com>,
 "wl@xen.org" <wl@xen.org>
References: <20200701071923.18883-1-andr2000@gmail.com>
 <dffd127d-c5a1-4c77-baa8-f1d931145bc4@suse.com>
 <b5a6e034-4d52-d6b2-7c14-3c44c4a19cc3@epam.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <e442e4d9-fe79-7f65-c196-2a0a35923492@suse.com>
Date: Thu, 2 Jul 2020 09:20:04 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.9.0
MIME-Version: 1.0
In-Reply-To: <b5a6e034-4d52-d6b2-7c14-3c44c4a19cc3@epam.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 01.07.20 14:07, Oleksandr Andrushchenko wrote:
> On 7/1/20 1:46 PM, Jürgen Groß wrote:
>> On 01.07.20 09:19, Oleksandr Andrushchenko wrote:
>>> From: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
>>>
>>> 1. Add protocol version as an integer
>>>
>>> The version string, which is in fact an integer, is hard to handle in
>>> code that supports different protocol versions. To simplify that,
>>> also add the version as an integer.
>>>
>>> 2. Pass buffer offset with XENDISPL_OP_DBUF_CREATE
>>>
>>> There are cases when a display data buffer is created with a non-zero
>>> offset to the data start. Handle such cases by providing that offset
>>> while creating a display buffer.
>>>
>>> 3. Add XENDISPL_OP_GET_EDID command
>>>
>>> Add an optional request for reading the Extended Display Identification
>>> Data (EDID) structure, which allows better configuration of the
>>> display connectors than the configuration set in XenStore.
>>> With this change, connectors may have multiple resolutions defined,
>>> along with detailed timing definitions and additional properties
>>> normally provided by displays.
>>>
>>> If this request is not supported by the backend, then the visible area
>>> is defined by the relevant XenStore "resolution" property.
>>>
>>> If the backend provides extended display identification data (EDID) via
>>> the XENDISPL_OP_GET_EDID request, then the EDID values must take
>>> precedence over the resolutions defined in XenStore.
>>>
>>> 4. Bump protocol version to 2.
>>>
>>> Signed-off-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
>>
>> Reviewed-by: Juergen Gross <jgross@suse.com>
> 
> Thank you. Do you want me to prepare the same for the kernel so you
> have it at hand when the time comes?

It should be added to the kernel only when really needed (i.e. when a
user of the new functionality shows up).


Juergen


From xen-devel-bounces@lists.xenproject.org Thu Jul 02 07:34:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jul 2020 07:34:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jqtjZ-0006l9-FO; Thu, 02 Jul 2020 07:34:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Mrbb=AN=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1jqtjY-0006l4-5a
 for xen-devel@lists.xenproject.org; Thu, 02 Jul 2020 07:34:12 +0000
X-Inumbo-ID: 6bd62b54-bc36-11ea-8496-bc764e2007e4
Received: from mail-wm1-x331.google.com (unknown [2a00:1450:4864:20::331])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6bd62b54-bc36-11ea-8496-bc764e2007e4;
 Thu, 02 Jul 2020 07:34:11 +0000 (UTC)
Received: by mail-wm1-x331.google.com with SMTP id j18so25582524wmi.3
 for <xen-devel@lists.xenproject.org>; Thu, 02 Jul 2020 00:34:11 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
 :mime-version:content-transfer-encoding:thread-index
 :content-language;
 bh=fmKoyTMn8HBzb24wwHIteVJ+aDTuBQpLq3NL4nd+YQQ=;
 b=FNLSrx46jlGA6SN0/j37srDQcKnoQL/AGbCnGGmgARnXvAbdDhjxGalG4F/TzduLzh
 xGY0JGtl6iuaWwYhpJeK5TM5k7kqpS8A7Hg+IfY6YS5DYrxlej9fKmK0RnksIg7pObre
 +Nq64+kfeseXQLOzN2DPnEvk/C+51ZVTI88mGA4BtbfjdYsSqyRL5ehlR/ZZ7DBbHwM/
 Ag4g7XCgxSBNeF0creAUmo3IOrpL59Bb2iXqyEuCUZLTJ/OXIMPmo9t3hxACfb6Dj1AY
 wgRuHwVT4E778qD4pub7YIWiRfGakHtBrU12INVPxTjXZ79n09PdZIRCBx3rAVrPu6wq
 2vVQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
 :subject:date:message-id:mime-version:content-transfer-encoding
 :thread-index:content-language;
 bh=fmKoyTMn8HBzb24wwHIteVJ+aDTuBQpLq3NL4nd+YQQ=;
 b=rkJBXi3wN9kdgmcVc0IgF8LUpxQOgppP7v5gBItuOR22knkW5hxxfV4Je61NLPwhLw
 AJ6R06ZxyLtPObD9nN5zK0xRjy3RRZixerdMNtU2NsdRfD4P/5H5F5l9Ikfg2NY5A7Y/
 /FVOI9oYWbTK2IJfAPcYwEr+aA9wngZUQ5vdFoISdvmFgTWrURdN8F4DXN9S+IxTZTlB
 gjLhfGgHuAFgPPfWgr8w0Nbd0iNAqJUlOMmd19mhh2k6oKiNXTnX3ZEitVCL4YB7P9QZ
 x8SgxBJCklQKyGp5PtYkRn9mRXmnOSLMjXO2PRFMlAroI8xNldFjtvX5jSa80tnXeJtE
 HYNg==
X-Gm-Message-State: AOAM532X6zBLxWdWiw2hI310RzE38aD/iP32/RDc4I5yookSq71gjMEA
 B/0jfrbttwgtJ4GJGU3zyXA=
X-Google-Smtp-Source: ABdhPJxoXmg3mcGxGIhx6YoRqxZf1kaJORFa0jvENrv2yt2xlIHmQSnbdMc34nx+AlaZzyBBxYgVvA==
X-Received: by 2002:a1c:ab84:: with SMTP id u126mr24876397wme.43.1593675250850; 
 Thu, 02 Jul 2020 00:34:10 -0700 (PDT)
Received: from CBGR90WXYV0 (54-240-197-239.amazon.com. [54.240.197.239])
 by smtp.gmail.com with ESMTPSA id x1sm9821978wrp.10.2020.07.02.00.34.09
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Thu, 02 Jul 2020 00:34:10 -0700 (PDT)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
To: "'Jan Beulich'" <jbeulich@suse.com>,
	<xen-devel@lists.xenproject.org>
References: <bb6a96c6-b6b1-76ff-f9db-10bec0fb4ab1@suse.com>
In-Reply-To: <bb6a96c6-b6b1-76ff-f9db-10bec0fb4ab1@suse.com>
Subject: RE: [PATCH v2 0/7] x86: compat header generation and checking
 adjustments
Date: Thu, 2 Jul 2020 08:34:09 +0100
Message-ID: <001d01d65043$2d137890$873a69b0$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="utf-8"
Content-Transfer-Encoding: 8bit
X-Mailer: Microsoft Outlook 16.0
Thread-Index: AQLrhzkctRDO+pIv8AQp+zhKytw4/KbJdpvg
Content-Language: en-gb
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Reply-To: paul@xen.org
Cc: 'Stefano Stabellini' <sstabellini@kernel.org>,
 'Julien Grall' <julien@xen.org>, 'Wei Liu' <wl@xen.org>,
 'George Dunlap' <George.Dunlap@eu.citrix.com>,
 'Andrew Cooper' <andrew.cooper3@citrix.com>,
 'Ian Jackson' <ian.jackson@citrix.com>,
 =?utf-8?Q?'Roger_Pau_Monn=C3=A9'?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Sent: 01 July 2020 11:23
> To: xen-devel@lists.xenproject.org
> Cc: Andrew Cooper <andrew.cooper3@citrix.com>; George Dunlap <George.Dunlap@eu.citrix.com>; Ian
> Jackson <ian.jackson@citrix.com>; Julien Grall <julien@xen.org>; Wei Liu <wl@xen.org>; Stefano
> Stabellini <sstabellini@kernel.org>; Roger Pau Monné <roger.pau@citrix.com>; Paul Durrant
> <paul@xen.org>
> Subject: [PATCH v2 0/7] x86: compat header generation and checking adjustments
>
> As was pointed out by 0e2e54966af5 ("mm: fix public declaration of
> struct xen_mem_acquire_resource"), we're not currently handling structs
> correctly that have uint64_aligned_t fields. Patch 2 demonstrates that
> there was also an issue with XEN_GUEST_HANDLE_64().
>
> Only the 1st patch was previously sent, but the approach chosen has
> been changed altogether. All later patches are new. For 4.14 I think
> at least patch 1 should be considered.

It's now quite a large patch. Since xen_mem_acquire_resouce() has been
fixed, patch #1 (as you say in the comment there) is addressing a latent
issue and so I'd prefer not to take what is now quite a large patch
into 4.14.

  Paul

>
> 1: x86: fix compat header generation
> 2: x86/mce: add compat struct checking for XEN_MC_inject_v2
> 3: x86/mce: bring hypercall subop compat checking in sync again
> 4: x86/dmop: add compat struct checking for XEN_DMOP_map_mem_type_to_ioreq_server
> 5: x86: generalize padding field handling
> 6: flask: drop dead compat translation code
> 7: x86: only generate compat headers actually needed
>
> Jan



From xen-devel-bounces@lists.xenproject.org Thu Jul 02 07:42:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jul 2020 07:42:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jqtrO-0007c6-8w; Thu, 02 Jul 2020 07:42:18 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=bBDp=AN=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jqtrM-0007c1-Tc
 for xen-devel@lists.xenproject.org; Thu, 02 Jul 2020 07:42:16 +0000
X-Inumbo-ID: 8cb73cea-bc37-11ea-bb8b-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8cb73cea-bc37-11ea-bb8b-bc764e2007e4;
 Thu, 02 Jul 2020 07:42:16 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 7F82BB19E;
 Thu,  2 Jul 2020 07:42:15 +0000 (UTC)
Subject: Re: [PATCH v2 0/7] x86: compat header generation and checking
 adjustments
To: paul@xen.org
References: <bb6a96c6-b6b1-76ff-f9db-10bec0fb4ab1@suse.com>
 <001d01d65043$2d137890$873a69b0$@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <5b9de197-16f1-cb22-d109-a40dc8917549@suse.com>
Date: Thu, 2 Jul 2020 09:42:12 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.9.0
MIME-Version: 1.0
In-Reply-To: <001d01d65043$2d137890$873a69b0$@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: 'Stefano Stabellini' <sstabellini@kernel.org>,
 'Julien Grall' <julien@xen.org>, 'Wei Liu' <wl@xen.org>,
 'George Dunlap' <George.Dunlap@eu.citrix.com>,
 'Andrew Cooper' <andrew.cooper3@citrix.com>,
 'Ian Jackson' <ian.jackson@citrix.com>, xen-devel@lists.xenproject.org,
 =?UTF-8?B?J1JvZ2VyIFBhdSBNb25uw6kn?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 02.07.2020 09:34, Paul Durrant wrote:
>> -----Original Message-----
>> From: Jan Beulich <jbeulich@suse.com>
>> Sent: 01 July 2020 11:23
>> To: xen-devel@lists.xenproject.org
>> Cc: Andrew Cooper <andrew.cooper3@citrix.com>; George Dunlap <George.Dunlap@eu.citrix.com>; Ian
>> Jackson <ian.jackson@citrix.com>; Julien Grall <julien@xen.org>; Wei Liu <wl@xen.org>; Stefano
>> Stabellini <sstabellini@kernel.org>; Roger Pau Monné <roger.pau@citrix.com>; Paul Durrant
>> <paul@xen.org>
>> Subject: [PATCH v2 0/7] x86: compat header generation and checking adjustments
>>
>> As was pointed out by 0e2e54966af5 ("mm: fix public declaration of
>> struct xen_mem_acquire_resource"), we're not currently handling structs
>> correctly that have uint64_aligned_t fields. Patch 2 demonstrates that
>> there was also an issue with XEN_GUEST_HANDLE_64().
>>
>> Only the 1st patch was previously sent, but the approach chosen has
>> been changed altogether. All later patches are new. For 4.14 I think
>> at least patch 1 should be considered.
> 
> It's now quite a large patch.

Most parts being entirely mechanical, though. But still ...

> Since xen_mem_acquire_resouce() has been fixed, patch #1 (as you say
> in the comment there) is addressing a latent issue and so I’d prefer
> not to take what is now quite a large patch into 4.14.

... fair enough.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jul 02 07:44:55 2020
Subject: Ping: [PATCH] build: tweak variable exporting for make 3.82
To: Paul Durrant <paul@xen.org>
References: <0677fe2a-9ea1-7b3c-e212-4a2478537459@suse.com>
 <20200629163027.GA2030@perard.uk.xensource.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <50b573f7-a0eb-fda8-d88b-d9786faf541e@suse.com>
Date: Thu, 2 Jul 2020 09:44:51 +0200
In-Reply-To: <20200629163027.GA2030@perard.uk.xensource.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, George Dunlap <George.Dunlap@eu.citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@citrix.com>,
 Anthony PERARD <anthony.perard@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>

On 29.06.2020 18:30, Anthony PERARD wrote:
> On Fri, Jun 26, 2020 at 05:02:30PM +0200, Jan Beulich wrote:
>> While I've been running into an issue here only because of an additional
>> local change I'm carrying, to be able to override just the compiler in
>> $(XEN_ROOT)/.config (rather than the whole tool chain), in
>> config/StdGNU.mk:
>>
>> ifeq ($(filter-out default undefined,$(origin CC)),)
>>
>> I'd nevertheless like to propose correcting the underlying issue:
>> Exporting an unset variable changes its origin from "undefined" to
>> "file". This comes into effect because of our adding of -rR to
>> MAKEFLAGS, which make 3.82 wrongly applies also upon re-invoking itself
>> after having updated auto.conf{,.cmd}.
>>
>> Move the export statement past $(XEN_ROOT)/config/$(XEN_OS).mk inclusion
>> such that the variables already have their designated values at that
>> point, while retaining their initial origin up to the point they get
>> defined.
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>
>> --- a/xen/Makefile
>> +++ b/xen/Makefile
>> @@ -17,8 +17,6 @@ export XEN_BUILD_HOST	?= $(shell hostnam
>>  PYTHON_INTERPRETER	:= $(word 1,$(shell which python3 python python2 2>/dev/null) python)
>>  export PYTHON		?= $(PYTHON_INTERPRETER)
>>  
>> -export CC CXX LD
>> -
>>  export BASEDIR := $(CURDIR)
>>  export XEN_ROOT := $(BASEDIR)/..
>>  
>> @@ -42,6 +40,8 @@ export TARGET_ARCH     := $(shell echo $
>>  # Allow someone to change their config file
>>  export KCONFIG_CONFIG ?= .config
>>  
>> +export CC CXX LD
>> +
>>  .PHONY: default
>>  default: build
> 
> That patch is fine, and it is probably better to export a variable once
> it has a value rather than before the variable is set.
> 
> Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>

Paul - thoughts either way as to 4.14? If not to go in now, I
definitely intend to backport it. (And in fact I'm meanwhile
considering filing a make bug for this behavior, unless it has
changed in later versions.)

Thanks, Jan
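
As an illustration of the $(origin) check quoted from config/StdGNU.mk in the
commit message, the guard can be exercised with a throwaway makefile. This is
only a sketch under stated assumptions: the temp-file driver and the gcc
fallback are made up for the demo, and GNU make is assumed to be installed.

```shell
#!/bin/sh
# Illustrative only: exercise the $(origin)-based guard that
# config/StdGNU.mk uses for CC, via a throwaway makefile.
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
# Fall back to gcc only when the user did not set CC explicitly,
# i.e. while its origin is still "default" or "undefined".
ifeq ($(filter-out default undefined,$(origin CC)),)
CC := gcc
endif
$(info CC=$(CC) origin=$(origin CC))
all: ;
EOF
out_default=$(env -u CC make -s -f "$tmp" all)  # guard fires
out_env=$(CC=clang make -s -f "$tmp" all)       # origin "environment": guard skipped
echo "$out_default"
echo "$out_env"
rm -f "$tmp"
```

The first invocation shows the fallback taking effect (the origin becomes
"file" after the assignment); the second shows a user-supplied CC surviving the
guard. An exported-but-unset variable whose origin flips from "undefined" to
"file", as make 3.82 did on re-invocation, would wrongly suppress the fallback
in just the same way.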


From xen-devel-bounces@lists.xenproject.org Thu Jul 02 07:56:08 2020
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
To: "'Jan Beulich'" <jbeulich@suse.com>
References: <0677fe2a-9ea1-7b3c-e212-4a2478537459@suse.com>
 <20200629163027.GA2030@perard.uk.xensource.com>
 <50b573f7-a0eb-fda8-d88b-d9786faf541e@suse.com>
In-Reply-To: <50b573f7-a0eb-fda8-d88b-d9786faf541e@suse.com>
Subject: RE: Ping: [PATCH] build: tweak variable exporting for make 3.82
Date: Thu, 2 Jul 2020 08:55:56 +0100
Message-ID: <001e01d65046$384224c0$a8c66e40$@xen.org>
Reply-To: paul@xen.org
Cc: 'Stefano Stabellini' <sstabellini@kernel.org>,
 'Julien Grall' <julien@xen.org>, 'Wei Liu' <wl@xen.org>,
 'George Dunlap' <George.Dunlap@eu.citrix.com>,
 'Andrew Cooper' <andrew.cooper3@citrix.com>,
 'Ian Jackson' <ian.jackson@citrix.com>,
 'Anthony PERARD' <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org

> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Sent: 02 July 2020 08:45
> To: Paul Durrant <paul@xen.org>
> Cc: Anthony PERARD <anthony.perard@citrix.com>; xen-devel@lists.xenproject.org; Andrew Cooper
> <andrew.cooper3@citrix.com>; George Dunlap <George.Dunlap@eu.citrix.com>; Ian Jackson
> <ian.jackson@citrix.com>; Julien Grall <julien@xen.org>; Wei Liu <wl@xen.org>; Stefano Stabellini
> <sstabellini@kernel.org>
> Subject: Ping: [PATCH] build: tweak variable exporting for make 3.82
> 
> On 29.06.2020 18:30, Anthony PERARD wrote:
> > On Fri, Jun 26, 2020 at 05:02:30PM +0200, Jan Beulich wrote:
> >> While I've been running into an issue here only because of an additional
> >> local change I'm carrying, to be able to override just the compiler in
> >> $(XEN_ROOT)/.config (rather than the whole tool chain), in
> >> config/StdGNU.mk:
> >>
> >> ifeq ($(filter-out default undefined,$(origin CC)),)
> >>
> >> I'd nevertheless like to propose correcting the underlying issue:
> >> Exporting an unset variable changes its origin from "undefined" to
> >> "file". This comes into effect because of our adding of -rR to
> >> MAKEFLAGS, which make 3.82 wrongly applies also upon re-invoking itself
> >> after having updated auto.conf{,.cmd}.
> >>
> >> Move the export statement past $(XEN_ROOT)/config/$(XEN_OS).mk inclusion
> >> such that the variables already have their designated values at that
> >> point, while retaining their initial origin up to the point they get
> >> defined.
> >>
> >> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> >>
> >> --- a/xen/Makefile
> >> +++ b/xen/Makefile
> >> @@ -17,8 +17,6 @@ export XEN_BUILD_HOST	?= $(shell hostnam
> >>  PYTHON_INTERPRETER	:= $(word 1,$(shell which python3 python python2 2>/dev/null) python)
> >>  export PYTHON		?= $(PYTHON_INTERPRETER)
> >>
> >> -export CC CXX LD
> >> -
> >>  export BASEDIR := $(CURDIR)
> >>  export XEN_ROOT := $(BASEDIR)/..
> >>
> >> @@ -42,6 +40,8 @@ export TARGET_ARCH     := $(shell echo $
> >>  # Allow someone to change their config file
> >>  export KCONFIG_CONFIG ?= .config
> >>
> >> +export CC CXX LD
> >> +
> >>  .PHONY: default
> >>  default: build
> >
> > That patch is fine, and it is probably better to export a variable once
> > it has a value rather than before the variable is set.
> >
> > Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>
> 
> Paul - thoughts either way as to 4.14? If not to go in now, I
> definitely intend to backport it. (And in fact I'm meanwhile
> considering filing a make bug for this behavior, unless it has
> changed in later versions.)
> 

I agree with Anthony's statement so I'm happy for this to go in 4.14.

Release-acked-by: Paul Durrant <paul@xen.org>

From xen-devel-bounces@lists.xenproject.org Thu Jul 02 07:59:44 2020
From: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
To: Jürgen Groß <jgross@suse.com>, Oleksandr
 Andrushchenko <andr2000@gmail.com>, "xen-devel@lists.xenproject.org"
 <xen-devel@lists.xenproject.org>, "ian.jackson@eu.citrix.com"
 <ian.jackson@eu.citrix.com>, "wl@xen.org" <wl@xen.org>
Subject: Re: [PATCH v2] xen/displif: Protocol version 2
Date: Thu, 2 Jul 2020 07:59:37 +0000
Message-ID: <f50ec904-8cb2-2bd6-c3ba-35e8c44bd607@epam.com>
References: <20200701071923.18883-1-andr2000@gmail.com>
 <dffd127d-c5a1-4c77-baa8-f1d931145bc4@suse.com>
 <b5a6e034-4d52-d6b2-7c14-3c44c4a19cc3@epam.com>
 <e442e4d9-fe79-7f65-c196-2a0a35923492@suse.com>
In-Reply-To: <e442e4d9-fe79-7f65-c196-2a0a35923492@suse.com>

On 7/2/20 10:20 AM, Jürgen Groß wrote:
> On 01.07.20 14:07, Oleksandr Andrushchenko wrote:
>> On 7/1/20 1:46 PM, Jürgen Groß wrote:
>>> On 01.07.20 09:19, Oleksandr Andrushchenko wrote:
>>>> From: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
>>>>
>>>> 1. Add protocol version as an integer
>>>>
>>>> Version string, which is in fact an integer, is hard to handle in the
>>>> code that supports different protocol versions. To simplify that
>>>> also add the version as an integer.
>>>>
>>>> 2. Pass buffer offset with XENDISPL_OP_DBUF_CREATE
>>>>
>>>> There are cases when display data buffer is created with non-zero
>>>> offset to the data start. Handle such cases and provide that offset
>>>> while creating a display buffer.
>>>>
>>>> 3. Add XENDISPL_OP_GET_EDID command
>>>>
>>>> Add an optional request for reading Extended Display Identification
>>>> Data (EDID) structure which allows better configuration of the
>>>> display connectors over the configuration set in XenStore.
>>>> With this change connectors may have multiple resolutions defined
>>>> with respect to detailed timing definitions and additional properties
>>>> normally provided by displays.
>>>>
>>>> If this request is not supported by the backend then visible area
>>>> is defined by the relevant XenStore's "resolution" property.
>>>>
>>>> If backend provides extended display identification data (EDID) with
>>>> XENDISPL_OP_GET_EDID request then EDID values must take precedence
>>>> over the resolutions defined in XenStore.
>>>>
>>>> 4. Bump protocol version to 2.
>>>>
>>>> Signed-off-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
>>>
>>> Reviewed-by: Juergen Gross <jgross@suse.com>
>>
>> Thank you, do you want me to prepare the same for the kernel so
>>
>> you have it at hand when the time comes?
>
> It should be added to the kernel only when really needed (i.e. a user of
> the new functionality is showing up).

We have a patch for that which adds EDID to the existing PV DRM frontend,

so while upstreaming those changes I will also include changes to the protocol

in the kernel series: for that we need the header in Xen tree first, right?

>
>
> Juergen

Thank you,

Oleksandr


From xen-devel-bounces@lists.xenproject.org Thu Jul 02 08:10:39 2020
Date: Thu, 2 Jul 2020 10:10:20 +0200
From: Roger Pau Monné <roger.pau@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH v4 02/10] x86/vmx: add IPT cpu feature
Message-ID: <20200702081020.GW735@Air-de-Roger>
References: <cover.1593519420.git.michal.leszczynski@cert.pl>
 <7302dbfcd07dfaad9e50bb772673e588fcc4de67.1593519420.git.michal.leszczynski@cert.pl>
 <f935f7f0-30e4-4ba2-588f-a8368a7b93b1@citrix.com>
In-Reply-To: <f935f7f0-30e4-4ba2-588f-a8368a7b93b1@citrix.com>
Cc: Julien Grall <julien@xen.org>, Kevin Tian <kevin.tian@intel.com>, Stefano
 Stabellini <sstabellini@kernel.org>, tamas.lengyel@intel.com,
 Jan Beulich <jbeulich@suse.com>, Wei Liu <wl@xen.org>,
 Michał Leszczyński <michal.leszczynski@cert.pl>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, luwei.kang@intel.com,
 Jun Nakajima <jun.nakajima@intel.com>, xen-devel@lists.xenproject.org

On Wed, Jul 01, 2020 at 10:42:55PM +0100, Andrew Cooper wrote:
> On 30/06/2020 13:33, Michał Leszczyński wrote:
> > diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
> > index ca94c2bedc..b73d824357 100644
> > --- a/xen/arch/x86/hvm/vmx/vmcs.c
> > +++ b/xen/arch/x86/hvm/vmx/vmcs.c
> > @@ -291,6 +291,12 @@ static int vmx_init_vmcs_config(void)
> >          _vmx_cpu_based_exec_control &=
> >              ~(CPU_BASED_CR8_LOAD_EXITING | CPU_BASED_CR8_STORE_EXITING);
> >  
> > +    rdmsrl(MSR_IA32_VMX_MISC, _vmx_misc_cap);
> > +
> > +    /* Check whether IPT is supported in VMX operation. */
> > +    vmtrace_supported = cpu_has_ipt &&
> > +                        (_vmx_misc_cap & VMX_MISC_PT_SUPPORTED);
> 
> There is a subtle corner case here.  vmx_init_vmcs_config() is called on
> all CPUs, and is supposed to level things down safely if we find any
> asymmetry.
> 
> If instead you go with something like this:
> 
> diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
> index b73d824357..6960109183 100644
> --- a/xen/arch/x86/hvm/vmx/vmcs.c
> +++ b/xen/arch/x86/hvm/vmx/vmcs.c
> @@ -294,8 +294,8 @@ static int vmx_init_vmcs_config(void)
>      rdmsrl(MSR_IA32_VMX_MISC, _vmx_misc_cap);
>  
>      /* Check whether IPT is supported in VMX operation. */
> -    vmtrace_supported = cpu_has_ipt &&
> -                        (_vmx_misc_cap & VMX_MISC_PT_SUPPORTED);
> +    if ( !(_vmx_misc_cap & VMX_MISC_PT_SUPPORTED) )
> +        vmtrace_supported = false;

This is also used during hotplug, so I'm not sure it's safe to turn
vmtrace_supported off at runtime, when VMs might already be using
it. IMO it would be easier to just set it on the BSP, and then refuse
to bring up any AP that doesn't have the feature. TBH I don't think we
are likely to find any system with such a configuration, but this seems
more robust than changing vmtrace_supported at runtime.
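
The "set on the BSP, refuse non-matching APs" levelling suggested here could
be sketched roughly as follows. This is illustrative only: the function name,
its calling convention, and the error handling are made up rather than taken
from Xen; the VMX_MISC bit value is an assumption based on Intel's
documentation of IA32_VMX_MISC.

```c
/* Sketch of "set on BSP, refuse non-matching APs" feature levelling.
 * Names are illustrative, not Xen's actual code. */
#include <stdbool.h>
#include <stdint.h>

#define VMX_MISC_PT_SUPPORTED (1ULL << 14) /* assumed IA32_VMX_MISC bit */

static bool vmtrace_supported;

/* Return 0 on success, -1 if this AP must not be brought online. */
static int vmtrace_check_cpu(bool is_bsp, uint64_t misc_cap)
{
    bool has_pt = misc_cap & VMX_MISC_PT_SUPPORTED;

    if ( is_bsp )
    {
        /* The BSP fixes the system-wide value exactly once at boot. */
        vmtrace_supported = has_pt;
        return 0;
    }

    /* An AP may offer more than we advertise, but never less. */
    return (vmtrace_supported && !has_pt) ? -1 : 0;
}
```

This keeps vmtrace_supported immutable after boot: a hotplugged CPU lacking
the feature fails onlining instead of silently disabling tracing that guests
may already depend on.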

Roger.


From xen-devel-bounces@lists.xenproject.org Thu Jul 02 08:19:00 2020
Date: Thu, 2 Jul 2020 10:18:27 +0200
From: Peter Zijlstra <peterz@infradead.org>
To: Juergen Gross <jgross@suse.com>
Subject: Re: [PATCH v2 0/4] Remove 32-bit Xen PV guest support
Message-ID: <20200702081827.GA4781@hirez.programming.kicks-ass.net>
References: <20200701110650.16172-1-jgross@suse.com>
In-Reply-To: <20200701110650.16172-1-jgross@suse.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Deep Shah <sdeep@vmware.com>,
 "VMware, Inc." <pv-drivers@vmware.com>, x86@kernel.org,
 linux-kernel@vger.kernel.org, virtualization@lists.linux-foundation.org,
 Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
 Andy Lutomirski <luto@kernel.org>, "H. Peter Anvin" <hpa@zytor.com>,
 xen-devel@lists.xenproject.org, Thomas Gleixner <tglx@linutronix.de>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Wed, Jul 01, 2020 at 01:06:46PM +0200, Juergen Gross wrote:
> The long-term plan has been to replace Xen PV guests by PVH. The first
> victims of that plan are 32-bit PV guests, as those are only rarely
> used these days. Xen on x86 requires 64-bit support, and with Grub2
> now officially supporting PVH since version 2.04 there is no need to
> keep 32-bit PV guest support alive in the Linux kernel. Additionally,
> Meltdown mitigation is not available in a kernel running as a 32-bit
> PV guest, so dropping this mode makes sense from a security point of
> view, too.

Hooray!!! Much thanks!


From xen-devel-bounces@lists.xenproject.org Thu Jul 02 08:29:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jul 2020 08:29:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jquas-0003Xd-Gu; Thu, 02 Jul 2020 08:29:18 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=bBDp=AN=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jquar-0003XY-4g
 for xen-devel@lists.xenproject.org; Thu, 02 Jul 2020 08:29:17 +0000
X-Inumbo-ID: 1d808780-bc3e-11ea-bca7-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1d808780-bc3e-11ea-bca7-bc764e2007e4;
 Thu, 02 Jul 2020 08:29:16 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 6EF7FADC1;
 Thu,  2 Jul 2020 08:29:15 +0000 (UTC)
Subject: Re: [PATCH v4 02/10] x86/vmx: add IPT cpu feature
To: Julien Grall <julien@xen.org>
References: <cover.1593519420.git.michal.leszczynski@cert.pl>
 <7302dbfcd07dfaad9e50bb772673e588fcc4de67.1593519420.git.michal.leszczynski@cert.pl>
 <85416128-a334-4640-7504-0865f715b3a2@xen.org>
 <48c59780-bedb-ff08-723c-be14a9b73e6b@citrix.com>
 <f2aa4cf9-0689-82c0-cb6c-55d55ecbd5c1@xen.org>
 <a9a33ba1-b121-5e6f-b74c-7d2a60c84b13@xen.org>
 <a7187837-495f-56a5-a8d0-635a53ac9234@citrix.com>
 <95154add-164a-5450-28e1-f24611e1642f@xen.org>
 <df0aa9b4-d7f7-f909-e833-3f2f3040a2dc@citrix.com>
 <de298379-43c3-648f-aade-9efc7f761970@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <8df16863-2207-6747-cf17-f88124927ddb@suse.com>
Date: Thu, 2 Jul 2020 10:29:16 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.9.0
MIME-Version: 1.0
In-Reply-To: <de298379-43c3-648f-aade-9efc7f761970@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin Tian <kevin.tian@intel.com>,
 Stefano Stabellini <sstabellini@kernel.org>, tamas.lengyel@intel.com,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Micha=c5=82_Leszczy=c5=84ski?= <michal.leszczynski@cert.pl>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Jun Nakajima <jun.nakajima@intel.com>, xen-devel@lists.xenproject.org,
 luwei.kang@intel.com, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 01.07.2020 20:09, Julien Grall wrote:
> On 01/07/2020 19:06, Andrew Cooper wrote:
>> On 01/07/2020 19:02, Julien Grall wrote:
>>> On 01/07/2020 18:26, Andrew Cooper wrote:
>>>> For the sake of what is literally just one byte in common code, I
>>>> stand by my original suggestion of this being a common interface.  It
>>>> is not something which should be x86-specific.
>>>
>>> This argument can also be used against putting it in common code. What
>>> I am most concerned about is that we are trying to guess how the
>>> interface will look for another architecture. Your suggested interface
>>> may work, but it may also end up being a complete mess.
>>>
>>> So I think we want to wait for a new architecture to use vmtrace
>>> before moving it to common code. It would not be a massive effort
>>> to move that bit into common code later if needed.
>>
>> I suggest you read the series.
> 
> Already went through the series and ...
> 
>>
>> The only thing in common code is the bit of the interface saying "I'd
>> like buffers this big please".
> 
> ... I stand by my point. There is no need to have this code in common
> code until someone else needs it. This code can easily be implemented in
> arch_domain_create().

I'm with Andrew here, fwiw, as long as the little bit of code that
is actually put in common/ or include/xen/ doesn't imply arbitrary
restrictions on acceptable values. For example, if there is proof
that for all architectures of interest, currently or in the not too
distant future, an order value is fine (as opposed to a size), then
an order field would be fine to live in common code, imo. Otherwise
it would need to be a size field, with per-arch enforcement of any
further restrictions (like needing to be a power of 2).

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jul 02 08:34:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jul 2020 08:34:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jqufw-0004Kv-4Y; Thu, 02 Jul 2020 08:34:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=bBDp=AN=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jqufu-0004Kq-Po
 for xen-devel@lists.xenproject.org; Thu, 02 Jul 2020 08:34:30 +0000
X-Inumbo-ID: d8a2368a-bc3e-11ea-b7bb-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d8a2368a-bc3e-11ea-b7bb-bc764e2007e4;
 Thu, 02 Jul 2020 08:34:30 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 4C4C3B1DB;
 Thu,  2 Jul 2020 08:34:29 +0000 (UTC)
Subject: Re: [PATCH v4 02/10] x86/vmx: add IPT cpu feature
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>
References: <cover.1593519420.git.michal.leszczynski@cert.pl>
 <7302dbfcd07dfaad9e50bb772673e588fcc4de67.1593519420.git.michal.leszczynski@cert.pl>
 <f935f7f0-30e4-4ba2-588f-a8368a7b93b1@citrix.com>
 <20200702081020.GW735@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <5bb2fb6a-c4f4-7d88-9e07-7922d4235338@suse.com>
Date: Thu, 2 Jul 2020 10:34:30 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.9.0
MIME-Version: 1.0
In-Reply-To: <20200702081020.GW735@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Julien Grall <julien@xen.org>, Kevin Tian <kevin.tian@intel.com>,
 Stefano Stabellini <sstabellini@kernel.org>, tamas.lengyel@intel.com,
 Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Micha=c5=82_Leszczy=c5=84ski?= <michal.leszczynski@cert.pl>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, luwei.kang@intel.com,
 Jun Nakajima <jun.nakajima@intel.com>, xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 02.07.2020 10:10, Roger Pau Monné wrote:
> On Wed, Jul 01, 2020 at 10:42:55PM +0100, Andrew Cooper wrote:
>> On 30/06/2020 13:33, Michał Leszczyński wrote:
>>> diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
>>> index ca94c2bedc..b73d824357 100644
>>> --- a/xen/arch/x86/hvm/vmx/vmcs.c
>>> +++ b/xen/arch/x86/hvm/vmx/vmcs.c
>>> @@ -291,6 +291,12 @@ static int vmx_init_vmcs_config(void)
>>>          _vmx_cpu_based_exec_control &=
>>>              ~(CPU_BASED_CR8_LOAD_EXITING | CPU_BASED_CR8_STORE_EXITING);
>>>  
>>> +    rdmsrl(MSR_IA32_VMX_MISC, _vmx_misc_cap);
>>> +
>>> +    /* Check whether IPT is supported in VMX operation. */
>>> +    vmtrace_supported = cpu_has_ipt &&
>>> +                        (_vmx_misc_cap & VMX_MISC_PT_SUPPORTED);
>>
>> There is a subtle corner case here.  vmx_init_vmcs_config() is called on
>> all CPUs, and is supposed to level things down safely if we find any
>> asymmetry.
>>
>> If instead you go with something like this:
>>
>> diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
>> index b73d824357..6960109183 100644
>> --- a/xen/arch/x86/hvm/vmx/vmcs.c
>> +++ b/xen/arch/x86/hvm/vmx/vmcs.c
>> @@ -294,8 +294,8 @@ static int vmx_init_vmcs_config(void)
>>      rdmsrl(MSR_IA32_VMX_MISC, _vmx_misc_cap);
>>  
>>      /* Check whether IPT is supported in VMX operation. */
>> -    vmtrace_supported = cpu_has_ipt &&
>> -                        (_vmx_misc_cap & VMX_MISC_PT_SUPPORTED);
>> +    if ( !(_vmx_misc_cap & VMX_MISC_PT_SUPPORTED) )
>> +        vmtrace_supported = false;
> 
> This is also used during hotplug, so I'm not sure it's safe to turn
> vmtrace_supported off at runtime, when VMs might already be using it.
> IMO it would be easier to just set it on the BSP, and then refuse
> to bring up any AP that doesn't have the feature.

+1

IOW I also don't think that "vmx_init_vmcs_config() ... is supposed to
level things down safely". Instead I think the expectation is for
CPU onlining to fail if a CPU lacks features compared to the BSP. As
Roger implies, doing what you suggest may be fine during boot, but
past that point only at times when we know there's no user of a given
feature, and when discarding the feature flag won't lead to other
inconsistencies (which may very well mean "never").

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jul 02 08:42:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jul 2020 08:42:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jqunP-0005BR-UH; Thu, 02 Jul 2020 08:42:15 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=DrmV=AN=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jqunP-0005BM-7z
 for xen-devel@lists.xenproject.org; Thu, 02 Jul 2020 08:42:15 +0000
X-Inumbo-ID: ec634ca9-bc3f-11ea-87e7-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ec634ca9-bc3f-11ea-87e7-12813bfff9fa;
 Thu, 02 Jul 2020 08:42:12 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 1F618B1CD;
 Thu,  2 Jul 2020 08:42:12 +0000 (UTC)
Subject: Re: [PATCH v2] xen/displif: Protocol version 2
To: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>,
 Oleksandr Andrushchenko <andr2000@gmail.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 "ian.jackson@eu.citrix.com" <ian.jackson@eu.citrix.com>,
 "wl@xen.org" <wl@xen.org>
References: <20200701071923.18883-1-andr2000@gmail.com>
 <dffd127d-c5a1-4c77-baa8-f1d931145bc4@suse.com>
 <b5a6e034-4d52-d6b2-7c14-3c44c4a19cc3@epam.com>
 <e442e4d9-fe79-7f65-c196-2a0a35923492@suse.com>
 <f50ec904-8cb2-2bd6-c3ba-35e8c44bd607@epam.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <be21be56-ea1b-e558-6905-a6cb3e5e4849@suse.com>
Date: Thu, 2 Jul 2020 10:42:11 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.9.0
MIME-Version: 1.0
In-Reply-To: <f50ec904-8cb2-2bd6-c3ba-35e8c44bd607@epam.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 02.07.20 09:59, Oleksandr Andrushchenko wrote:
> 
> On 7/2/20 10:20 AM, Jürgen Groß wrote:
>> On 01.07.20 14:07, Oleksandr Andrushchenko wrote:
>>> On 7/1/20 1:46 PM, Jürgen Groß wrote:
>>>> On 01.07.20 09:19, Oleksandr Andrushchenko wrote:
>>>>> From: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
>>>>>
>>>>> 1. Add protocol version as an integer
>>>>>
>>>>> Version string, which is in fact an integer, is hard to handle in the
>>>>> code that supports different protocol versions. To simplify that
>>>>> also add the version as an integer.
>>>>>
>>>>> 2. Pass buffer offset with XENDISPL_OP_DBUF_CREATE
>>>>>
>>>>> There are cases when display data buffer is created with non-zero
>>>>> offset to the data start. Handle such cases and provide that offset
>>>>> while creating a display buffer.
>>>>>
>>>>> 3. Add XENDISPL_OP_GET_EDID command
>>>>>
>>>>> Add an optional request for reading Extended Display Identification
>>>>> Data (EDID) structure which allows better configuration of the
>>>>> display connectors over the configuration set in XenStore.
>>>>> With this change connectors may have multiple resolutions defined
>>>>> with respect to detailed timing definitions and additional properties
>>>>> normally provided by displays.
>>>>>
>>>>> If this request is not supported by the backend then visible area
>>>>> is defined by the relevant XenStore's "resolution" property.
>>>>>
>>>>> If backend provides extended display identification data (EDID) with
>>>>> XENDISPL_OP_GET_EDID request then EDID values must take precedence
>>>>> over the resolutions defined in XenStore.
>>>>>
>>>>> 4. Bump protocol version to 2.
>>>>>
>>>>> Signed-off-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
>>>>
>>>> Reviewed-by: Juergen Gross <jgross@suse.com>
>>>
>>> Thank you, do you want me to prepare the same for the kernel so
>>>
>>> you have it at hand when the time comes?
>>
>> It should be added to the kernel only when really needed (i.e. a user of
>> the new functionality is showing up).
> 
> We have a patch for that which adds EDID to the existing PV DRM frontend,
> so while upstreaming those changes I will also include changes to the
> protocol in the kernel series: for that we need the header in the Xen
> tree first, right?

Yes.


Juergen


From xen-devel-bounces@lists.xenproject.org Thu Jul 02 08:42:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jul 2020 08:42:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jqunf-0005Cf-69; Thu, 02 Jul 2020 08:42:31 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=gpFn=AN=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jqund-0005CU-Os
 for xen-devel@lists.xenproject.org; Thu, 02 Jul 2020 08:42:29 +0000
X-Inumbo-ID: f5b3a0be-bc3f-11ea-87e7-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f5b3a0be-bc3f-11ea-87e7-12813bfff9fa;
 Thu, 02 Jul 2020 08:42:28 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=yfT+oahe/46NccK0dDGyKo1XTGX8UPZpGTv+Gg35fdw=; b=rBUepNvWQ7zM5sUErrKVC24Fuc
 /idCYDhp2PRy3hcL8fYSai/QmFL06Fx5KTM8ppZyJC4vmIdTTwiRY3kJ94/7Ih+or8SkOzw6jBuCt
 ZQKH70N/UDUnq5WKbVwp7sxqmKvDUyNXgqPFgnjfW4CeFi4VCAKxrt1yknSIhsFATeKI=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jqunX-00058v-Cq; Thu, 02 Jul 2020 08:42:23 +0000
Received: from 54-240-197-238.amazon.com ([54.240.197.238]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jqunX-0003KB-3v; Thu, 02 Jul 2020 08:42:23 +0000
Subject: Re: [PATCH v4 02/10] x86/vmx: add IPT cpu feature
To: Jan Beulich <jbeulich@suse.com>
References: <cover.1593519420.git.michal.leszczynski@cert.pl>
 <7302dbfcd07dfaad9e50bb772673e588fcc4de67.1593519420.git.michal.leszczynski@cert.pl>
 <85416128-a334-4640-7504-0865f715b3a2@xen.org>
 <48c59780-bedb-ff08-723c-be14a9b73e6b@citrix.com>
 <f2aa4cf9-0689-82c0-cb6c-55d55ecbd5c1@xen.org>
 <a9a33ba1-b121-5e6f-b74c-7d2a60c84b13@xen.org>
 <a7187837-495f-56a5-a8d0-635a53ac9234@citrix.com>
 <95154add-164a-5450-28e1-f24611e1642f@xen.org>
 <df0aa9b4-d7f7-f909-e833-3f2f3040a2dc@citrix.com>
 <de298379-43c3-648f-aade-9efc7f761970@xen.org>
 <8df16863-2207-6747-cf17-f88124927ddb@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <cf41855b-9e5e-13f2-9ab0-04b98f8b3cdd@xen.org>
Date: Thu, 2 Jul 2020 09:42:19 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:68.0)
 Gecko/20100101 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <8df16863-2207-6747-cf17-f88124927ddb@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin Tian <kevin.tian@intel.com>,
 Stefano Stabellini <sstabellini@kernel.org>, tamas.lengyel@intel.com,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Micha=c5=82_Leszczy=c5=84ski?= <michal.leszczynski@cert.pl>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Jun Nakajima <jun.nakajima@intel.com>, xen-devel@lists.xenproject.org,
 luwei.kang@intel.com, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi Jan,

On 02/07/2020 09:29, Jan Beulich wrote:
> On 01.07.2020 20:09, Julien Grall wrote:
>> On 01/07/2020 19:06, Andrew Cooper wrote:
>>> On 01/07/2020 19:02, Julien Grall wrote:
>>>> On 01/07/2020 18:26, Andrew Cooper wrote:
>>>>> For the sake of what is literally just one byte in common code, I
>>>>> stand by my original suggestion of this being a common interface.  It
>>>>> is not something which should be x86-specific.
>>>>
>>>> This argument can also be used against putting it in common code. What
>>>> I am most concerned about is that we are trying to guess how the
>>>> interface will look for another architecture. Your suggested interface
>>>> may work, but it may also end up being a complete mess.
>>>>
>>>> So I think we want to wait for a new architecture to use vmtrace
>>>> before moving it to common code. It would not be a massive effort
>>>> to move that bit into common code later if needed.
>>>
>>> I suggest you read the series.
>>
>> Already went through the series and ...
>>
>>>
>>> The only thing in common code is the bit of the interface saying "I'd
>>> like buffers this big please".
>>
>> ... I stand by my point. There is no need to have this code in common
>> code until someone else needs it. This code can easily be implemented in
>> arch_domain_create().
> 
> I'm with Andrew here, fwiw, as long as the little bit of code that
> is actually put in common/ or include/xen/ doesn't imply arbitrary
> restrictions on acceptable values.
Well, yes, the code is simple. However, the code as it is wouldn't be
usable on other architectures without additional work (aside from the
arch-specific code). For instance, there is no way to map the buffer
outside of Xen, as that is all x86-specific.

If you want the allocation to be in common code, then the
infrastructure to map/unmap the buffer should also be in common code.
Otherwise, there is no point in allocating it in common code.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Jul 02 08:50:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jul 2020 08:50:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jquvU-00067l-1U; Thu, 02 Jul 2020 08:50:36 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=bBDp=AN=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jquvT-00067g-5X
 for xen-devel@lists.xenproject.org; Thu, 02 Jul 2020 08:50:35 +0000
X-Inumbo-ID: 178fd8d2-bc41-11ea-b7bb-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 178fd8d2-bc41-11ea-b7bb-bc764e2007e4;
 Thu, 02 Jul 2020 08:50:34 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id ED590B658;
 Thu,  2 Jul 2020 08:50:33 +0000 (UTC)
Subject: Re: [PATCH v4 02/10] x86/vmx: add IPT cpu feature
To: Julien Grall <julien@xen.org>
References: <cover.1593519420.git.michal.leszczynski@cert.pl>
 <7302dbfcd07dfaad9e50bb772673e588fcc4de67.1593519420.git.michal.leszczynski@cert.pl>
 <85416128-a334-4640-7504-0865f715b3a2@xen.org>
 <48c59780-bedb-ff08-723c-be14a9b73e6b@citrix.com>
 <f2aa4cf9-0689-82c0-cb6c-55d55ecbd5c1@xen.org>
 <a9a33ba1-b121-5e6f-b74c-7d2a60c84b13@xen.org>
 <a7187837-495f-56a5-a8d0-635a53ac9234@citrix.com>
 <95154add-164a-5450-28e1-f24611e1642f@xen.org>
 <df0aa9b4-d7f7-f909-e833-3f2f3040a2dc@citrix.com>
 <de298379-43c3-648f-aade-9efc7f761970@xen.org>
 <8df16863-2207-6747-cf17-f88124927ddb@suse.com>
 <cf41855b-9e5e-13f2-9ab0-04b98f8b3cdd@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <75066926-9fe4-1e51-707c-c77c4e6d63ae@suse.com>
Date: Thu, 2 Jul 2020 10:50:35 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.9.0
MIME-Version: 1.0
In-Reply-To: <cf41855b-9e5e-13f2-9ab0-04b98f8b3cdd@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin Tian <kevin.tian@intel.com>,
 Stefano Stabellini <sstabellini@kernel.org>, tamas.lengyel@intel.com,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Micha=c5=82_Leszczy=c5=84ski?= <michal.leszczynski@cert.pl>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Jun Nakajima <jun.nakajima@intel.com>, xen-devel@lists.xenproject.org,
 luwei.kang@intel.com, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 02.07.2020 10:42, Julien Grall wrote:
> On 02/07/2020 09:29, Jan Beulich wrote:
>> I'm with Andrew here, fwiw, as long as the little bit of code that
>> is actually put in common/ or include/xen/ doesn't imply arbitrary
>> restrictions on acceptable values.
> Well, yes, the code is simple. However, the code as it is wouldn't be
> usable on other architectures without additional work (aside from the
> arch-specific code). For instance, there is no way to map the buffer
> outside of Xen, as that is all x86-specific.
> 
> If you want the allocation to be in common code, then the
> infrastructure to map/unmap the buffer should also be in common code.
> Otherwise, there is no point in allocating it in common code.

I don't think I agree here - I see nothing wrong with the exposing of
the memory being arch-specific while the allocation is generic. This
is no different from, within just x86, the allocation logic being
common to PV and HVM while the exposing differs between the two.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jul 02 08:55:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jul 2020 08:55:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jquzl-0006Hj-Mf; Thu, 02 Jul 2020 08:55:01 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=gpFn=AN=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jquzk-0006He-MN
 for xen-devel@lists.xenproject.org; Thu, 02 Jul 2020 08:55:00 +0000
X-Inumbo-ID: b5f8bc14-bc41-11ea-bb8b-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b5f8bc14-bc41-11ea-bb8b-bc764e2007e4;
 Thu, 02 Jul 2020 08:55:00 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=Y+hK0IzQPWUcTCbqph1dqrlzmbOv8FUI0IwplOzU5dE=; b=xHqkDZHLeg18xPLBX21Hm851DY
 Cmj70XxrBNQ3bhwU/Ib38V8gulQRsp1zIGtTKKxxvMd9/DbrmP+UP3jQppzYP0AvPEJZZ+Up6ldNc
 XA3H2YeRvs3uBs3UZlUKeDT0RwAfRwtyWmw2joUE9EecEt8w2GNGzswFqqjiPi6Javhg=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jquzf-0005N9-DF; Thu, 02 Jul 2020 08:54:55 +0000
Received: from 54-240-197-238.amazon.com ([54.240.197.238]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jquzf-0003nk-4I; Thu, 02 Jul 2020 08:54:55 +0000
Subject: Re: [PATCH v4 02/10] x86/vmx: add IPT cpu feature
To: Jan Beulich <jbeulich@suse.com>
References: <cover.1593519420.git.michal.leszczynski@cert.pl>
 <7302dbfcd07dfaad9e50bb772673e588fcc4de67.1593519420.git.michal.leszczynski@cert.pl>
 <85416128-a334-4640-7504-0865f715b3a2@xen.org>
 <48c59780-bedb-ff08-723c-be14a9b73e6b@citrix.com>
 <f2aa4cf9-0689-82c0-cb6c-55d55ecbd5c1@xen.org>
 <a9a33ba1-b121-5e6f-b74c-7d2a60c84b13@xen.org>
 <a7187837-495f-56a5-a8d0-635a53ac9234@citrix.com>
 <95154add-164a-5450-28e1-f24611e1642f@xen.org>
 <df0aa9b4-d7f7-f909-e833-3f2f3040a2dc@citrix.com>
 <de298379-43c3-648f-aade-9efc7f761970@xen.org>
 <8df16863-2207-6747-cf17-f88124927ddb@suse.com>
 <cf41855b-9e5e-13f2-9ab0-04b98f8b3cdd@xen.org>
 <75066926-9fe4-1e51-707c-c77c4e6d63ae@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <3fa0c3e7-9243-b1bb-d6ad-a3bd21437782@xen.org>
Date: Thu, 2 Jul 2020 09:54:52 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:68.0)
 Gecko/20100101 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <75066926-9fe4-1e51-707c-c77c4e6d63ae@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin Tian <kevin.tian@intel.com>,
 Stefano Stabellini <sstabellini@kernel.org>, tamas.lengyel@intel.com,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Micha=c5=82_Leszczy=c5=84ski?= <michal.leszczynski@cert.pl>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Jun Nakajima <jun.nakajima@intel.com>, xen-devel@lists.xenproject.org,
 luwei.kang@intel.com, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>



On 02/07/2020 09:50, Jan Beulich wrote:
> On 02.07.2020 10:42, Julien Grall wrote:
>> On 02/07/2020 09:29, Jan Beulich wrote:
>>> I'm with Andrew here, fwiw, as long as the little bit of code that
>>> is actually put in common/ or include/xen/ doesn't imply arbitrary
>>> restrictions on acceptable values.
>> Well yes, the code is simple. However, the code as it is wouldn't be
>> usable on other architectures without additional work (aside from
>> arch-specific code). For instance, there is no way to map the buffer
>> outside of Xen, as it is all x86-specific.
>>
>> If you want the allocation to be in the common code, then the
>> infrastructure to map/unmap the buffer should also be in common code.
>> Otherwise, there is no point in allocating it in common.
> 
> I don't think I agree here - I see nothing wrong with exposure of
> the memory being arch-specific when allocation is generic. This
> is no different from, on x86 alone, allocation logic being common
> to PV and HVM while exposure differs between the two.

Are you suggesting that the way it would be exposed may be different for 
other architectures?

If so, this is one more reason not to impose a way of allocating the 
buffer in the common code until another arch adds support for vmtrace.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Jul 02 09:01:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jul 2020 09:01:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jqv5X-00078Y-Co; Thu, 02 Jul 2020 09:00:59 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gJZC=AN=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jqv5V-00078T-JH
 for xen-devel@lists.xenproject.org; Thu, 02 Jul 2020 09:00:57 +0000
X-Inumbo-ID: 89a459ba-bc42-11ea-87ea-12813bfff9fa
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 89a459ba-bc42-11ea-87ea-12813bfff9fa;
 Thu, 02 Jul 2020 09:00:55 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1593680456;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:content-transfer-encoding:in-reply-to;
 bh=u/OOQ7QEogZlO8NIoN1DS/EdDXM56LmhZLHwVl9cVBs=;
 b=WEOpWT3hpCjyXVQpTvqDXEIW0L6dwK43ZCwTat2N/uF0S5okjkigKXHO
 wIxoKtI6v8yWqnjnKbPRC6dAP3NwgHxwS5xCCMKnwRgjzJ8xb4kKWsjHq
 YwDQDjZ0SFmw2824MqgBex8grkXJk1f2JFgdyiRmhXxwQYENfu+Y8tdD3 4=;
Authentication-Results: esa3.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: by4ASU/LdoqsYGCBXHu/F+KIfFHDaFiWCruCDUIDIlUofFeGWPhMXON5a4xiAJ1I+vf/gqk5zf
 en91MWrvUWecEk06jfJWCQEbgseK5xR6d8RvEWEsNE0ZqmnSUS3hdDx3KdSOJ/8jT3K/6fZHRO
 fflFNGDfBOwOpyDmTEWEHs2ecY6BYERXYtqArhEgnMICXqiWcxNm1yXEYxYyZx6jxpUpaNrm0u
 9O8FRCKrr8hGC2CQz9QubPvKZWAG4fckiod3lam302H6PBH2dgUi1A0zgoSs+7THZ4QOPSE6BO
 r6o=
X-SBRS: 2.7
X-MesageID: 21461513
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,303,1589256000"; d="scan'208";a="21461513"
Date: Thu, 2 Jul 2020 11:00:47 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: =?utf-8?Q?Micha=C5=82_Leszczy=C5=84ski?= <michal.leszczynski@cert.pl>
Subject: Re: [PATCH v4 03/10] tools/libxl: add vmtrace_pt_size parameter
Message-ID: <20200702090047.GX735@Air-de-Roger>
References: <cover.1593519420.git.michal.leszczynski@cert.pl>
 <5f4f4b1afa432258daff43f2dc8119b6a441fff4.1593519420.git.michal.leszczynski@cert.pl>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <5f4f4b1afa432258daff43f2dc8119b6a441fff4.1593519420.git.michal.leszczynski@cert.pl>
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Julien Grall <julien@xen.org>, Stefano
 Stabellini <sstabellini@kernel.org>, tamas.lengyel@intel.com,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, luwei.kang@intel.com, Jan
 Beulich <jbeulich@suse.com>, Anthony PERARD <anthony.perard@citrix.com>,
 xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Tue, Jun 30, 2020 at 02:33:46PM +0200, Michał Leszczyński wrote:
> diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
> index 59bdc28c89..7b8289d436 100644
> --- a/xen/include/public/domctl.h
> +++ b/xen/include/public/domctl.h
> @@ -92,6 +92,7 @@ struct xen_domctl_createdomain {
>      uint32_t max_evtchn_port;
>      int32_t max_grant_frames;
>      int32_t max_maptrack_frames;
> +    uint8_t vmtrace_pt_order;

I've been thinking about this, and even though this is a domctl (so
not a stable interface) we might want to consider using a size (or a
number of pages) here rather than an order. IPT also supports ToPA
mode (a kind of linked list of buffers), which would allow sizes not
rounded to an order boundary to be used, since only each item in the
linked list needs to be rounded to an order boundary - so you could,
for example, use three 4K pages in ToPA mode AFAICT.

Roger.


From xen-devel-bounces@lists.xenproject.org Thu Jul 02 09:18:35 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jul 2020 09:18:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jqvMO-00088b-TP; Thu, 02 Jul 2020 09:18:24 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=bBDp=AN=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jqvMN-00088W-JK
 for xen-devel@lists.xenproject.org; Thu, 02 Jul 2020 09:18:23 +0000
X-Inumbo-ID: f9554592-bc44-11ea-87ed-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f9554592-bc44-11ea-87ed-12813bfff9fa;
 Thu, 02 Jul 2020 09:18:22 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 39C86B5DB;
 Thu,  2 Jul 2020 09:18:21 +0000 (UTC)
Subject: Re: [PATCH v4 02/10] x86/vmx: add IPT cpu feature
To: Julien Grall <julien@xen.org>
References: <cover.1593519420.git.michal.leszczynski@cert.pl>
 <7302dbfcd07dfaad9e50bb772673e588fcc4de67.1593519420.git.michal.leszczynski@cert.pl>
 <85416128-a334-4640-7504-0865f715b3a2@xen.org>
 <48c59780-bedb-ff08-723c-be14a9b73e6b@citrix.com>
 <f2aa4cf9-0689-82c0-cb6c-55d55ecbd5c1@xen.org>
 <a9a33ba1-b121-5e6f-b74c-7d2a60c84b13@xen.org>
 <a7187837-495f-56a5-a8d0-635a53ac9234@citrix.com>
 <95154add-164a-5450-28e1-f24611e1642f@xen.org>
 <df0aa9b4-d7f7-f909-e833-3f2f3040a2dc@citrix.com>
 <de298379-43c3-648f-aade-9efc7f761970@xen.org>
 <8df16863-2207-6747-cf17-f88124927ddb@suse.com>
 <cf41855b-9e5e-13f2-9ab0-04b98f8b3cdd@xen.org>
 <75066926-9fe4-1e51-707c-c77c4e6d63ae@suse.com>
 <3fa0c3e7-9243-b1bb-d6ad-a3bd21437782@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <0e02a9b5-ba7a-43a2-3369-a4410f216ddb@suse.com>
Date: Thu, 2 Jul 2020 11:18:22 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.9.0
MIME-Version: 1.0
In-Reply-To: <3fa0c3e7-9243-b1bb-d6ad-a3bd21437782@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin Tian <kevin.tian@intel.com>,
 Stefano Stabellini <sstabellini@kernel.org>, tamas.lengyel@intel.com,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Micha=c5=82_Leszczy=c5=84ski?= <michal.leszczynski@cert.pl>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Jun Nakajima <jun.nakajima@intel.com>, xen-devel@lists.xenproject.org,
 luwei.kang@intel.com, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 02.07.2020 10:54, Julien Grall wrote:
> 
> 
> On 02/07/2020 09:50, Jan Beulich wrote:
>> On 02.07.2020 10:42, Julien Grall wrote:
>>> On 02/07/2020 09:29, Jan Beulich wrote:
>>>> I'm with Andrew here, fwiw, as long as the little bit of code that
>>>> is actually put in common/ or include/xen/ doesn't imply arbitrary
>>>> restrictions on acceptable values.
>>> Well yes, the code is simple. However, the code as it is wouldn't be
>>> usable on other architectures without additional work (aside from
>>> arch-specific code). For instance, there is no way to map the buffer
>>> outside of Xen, as it is all x86-specific.
>>>
>>> If you want the allocation to be in the common code, then the
>>> infrastructure to map/unmap the buffer should also be in common code.
>>> Otherwise, there is no point in allocating it in common.
>>
>> I don't think I agree here - I see nothing wrong with exposure of
>> the memory being arch-specific when allocation is generic. This
>> is no different from, on x86 alone, allocation logic being common
>> to PV and HVM while exposure differs between the two.
> 
> Are you suggesting that the way it would be exposed may be different for 
> other architectures?

Why not? To take a possibly extreme example - consider an arch
where (for bare metal) the buffer is specified to appear at a
fixed range of addresses. This would then want to be the same in
the virtualized case as well. There'd be no point in using any
common logic to map the buffer at a guest-requested address.
Instead it would simply appear at the arch-mandated one, without
the guest needing to take any action.

> If so, this is one more reason not to impose a way of allocating the
> buffer in the common code until another arch adds support for vmtrace.

I'm still not seeing why allocation and exposure need to be done
at the same place.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jul 02 09:57:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jul 2020 09:57:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jqvxz-0002zq-B0; Thu, 02 Jul 2020 09:57:15 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=gpFn=AN=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jqvxx-0002zi-Qy
 for xen-devel@lists.xenproject.org; Thu, 02 Jul 2020 09:57:13 +0000
X-Inumbo-ID: 6723a104-bc4a-11ea-bca7-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6723a104-bc4a-11ea-bca7-bc764e2007e4;
 Thu, 02 Jul 2020 09:57:13 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=1cUWloNbnqqp/ee/lmLi98KIuvnU6uDG752B/uPOq+8=; b=feyerl8EXeOAC/zSBT3+FNhU7x
 QWAA9kCYUXBj1K85EXss9jksRtJgma1HRm52DB97jMiNwt31mhfJDlxPpqNMgbwRmP5yJ5Xyjdyhk
 ucroEcZcT+DX7OfCFEuUatRSKgi6WE7S8UiNmSyedBFYg3b6fcV9yoInjm4IYKyrsr1k=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jqvxr-0006WA-Ny; Thu, 02 Jul 2020 09:57:07 +0000
Received: from 54-240-197-238.amazon.com ([54.240.197.238]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jqvxr-0007ZV-Ej; Thu, 02 Jul 2020 09:57:07 +0000
Subject: Re: [PATCH v4 02/10] x86/vmx: add IPT cpu feature
To: Jan Beulich <jbeulich@suse.com>
References: <cover.1593519420.git.michal.leszczynski@cert.pl>
 <7302dbfcd07dfaad9e50bb772673e588fcc4de67.1593519420.git.michal.leszczynski@cert.pl>
 <85416128-a334-4640-7504-0865f715b3a2@xen.org>
 <48c59780-bedb-ff08-723c-be14a9b73e6b@citrix.com>
 <f2aa4cf9-0689-82c0-cb6c-55d55ecbd5c1@xen.org>
 <a9a33ba1-b121-5e6f-b74c-7d2a60c84b13@xen.org>
 <a7187837-495f-56a5-a8d0-635a53ac9234@citrix.com>
 <95154add-164a-5450-28e1-f24611e1642f@xen.org>
 <df0aa9b4-d7f7-f909-e833-3f2f3040a2dc@citrix.com>
 <de298379-43c3-648f-aade-9efc7f761970@xen.org>
 <8df16863-2207-6747-cf17-f88124927ddb@suse.com>
 <cf41855b-9e5e-13f2-9ab0-04b98f8b3cdd@xen.org>
 <75066926-9fe4-1e51-707c-c77c4e6d63ae@suse.com>
 <3fa0c3e7-9243-b1bb-d6ad-a3bd21437782@xen.org>
 <0e02a9b5-ba7a-43a2-3369-a4410f216ddb@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <9a3f4d58-e5ad-c7a1-6c5f-42aa92101ca1@xen.org>
Date: Thu, 2 Jul 2020 10:57:04 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:68.0)
 Gecko/20100101 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <0e02a9b5-ba7a-43a2-3369-a4410f216ddb@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin Tian <kevin.tian@intel.com>,
 Stefano Stabellini <sstabellini@kernel.org>, tamas.lengyel@intel.com,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Micha=c5=82_Leszczy=c5=84ski?= <michal.leszczynski@cert.pl>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Jun Nakajima <jun.nakajima@intel.com>, xen-devel@lists.xenproject.org,
 luwei.kang@intel.com, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi,

On 02/07/2020 10:18, Jan Beulich wrote:
> On 02.07.2020 10:54, Julien Grall wrote:
>>
>>
>> On 02/07/2020 09:50, Jan Beulich wrote:
>>> On 02.07.2020 10:42, Julien Grall wrote:
>>>> On 02/07/2020 09:29, Jan Beulich wrote:
>>>>> I'm with Andrew here, fwiw, as long as the little bit of code that
>>>>> is actually put in common/ or include/xen/ doesn't imply arbitrary
>>>>> restrictions on acceptable values.
>>>> Well yes, the code is simple. However, the code as it is wouldn't be
>>>> usable on other architectures without additional work (aside from
>>>> arch-specific code). For instance, there is no way to map the buffer
>>>> outside of Xen, as it is all x86-specific.
>>>>
>>>> If you want the allocation to be in the common code, then the
>>>> infrastructure to map/unmap the buffer should also be in common code.
>>>> Otherwise, there is no point in allocating it in common.
>>>
>>> I don't think I agree here - I see nothing wrong with exposure of
>>> the memory being arch-specific when allocation is generic. This
>>> is no different from, on x86 alone, allocation logic being common
>>> to PV and HVM while exposure differs between the two.
>>
>> Are you suggesting that the way it would be exposed may be different for
>> other architectures?
> 
> Why not? To take a possibly extreme example - consider an arch
> where (for bare metal) the buffer is specified to appear at a
> fixed range of addresses.

I am probably missing something here... The current goal is that the 
buffer will be mapped in dom0. Most likely the way to map it will be 
the acquire hypercall (unless you invent a brand new one...).

For a guest, you could possibly reserve a fixed range and then map it 
when creating the vCPU in Xen. But then, you would likely want a fixed 
size... So why would you bother asking the user to define the size?

Another way to do it would be for the toolstack to do the mapping. At 
which point, you still need a hypercall to do the mapping (probably 
the acquire hypercall).

> 
>> If so, this is one more reason not to impose a way of allocating the
>> buffer in the common code until another arch adds support for vmtrace.
> 
> I'm still not seeing why allocation and exposure need to be done
> at the same place.

If I were going to add support for CoreSight on Arm, then the acquire 
hypercall is likely going to be the way to go for mapping the resource 
in the tools. At which point this will need to be common.

I am still not entirely happy to define the interface yet, but this 
would still be better than trying to make the allocation common code 
while leaving the mapping aside. After all, this is a "little bit" 
more code.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Jul 02 10:25:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jul 2020 10:25:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jqwOs-0005ge-KV; Thu, 02 Jul 2020 10:25:02 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WaoH=AN=citrix.com=anthony.perard@srs-us1.protection.inumbo.net>)
 id 1jqwOr-0005gZ-NG
 for xen-devel@lists.xenproject.org; Thu, 02 Jul 2020 10:25:01 +0000
X-Inumbo-ID: 4804b00c-bc4e-11ea-87f7-12813bfff9fa
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4804b00c-bc4e-11ea-87f7-12813bfff9fa;
 Thu, 02 Jul 2020 10:24:59 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1593685499;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:content-transfer-encoding:in-reply-to;
 bh=G0Aqljya8yX75ne1DuCuJKaJD8hBtukZv46lpc2ZS+I=;
 b=IQIi5QNGuawiu6sLimTgT2syzohUNvkJ6+HhVbIMZ7ymmlxou6WEnLS2
 vfwas3axN+ujAiHtBvURnseT/6sy3nc/8LmZgl8YWOEB3cYWTVGI+oBb3
 1A9/Vh5eRKyFuymI8cX/kL712sO+fltDxhkFbEEaKWiLOOXFstXkV5buE 8=;
Authentication-Results: esa1.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: C1t6oVau5SRudcj+lU3bEdnJyf/GtsyoZmseUbLLjUYqW9nfxM0GBgIaw1OkoQgT+MIb81x8Y0
 7qB7rTlsA6TPUWeZd6S1NdhdUljh4slxKmZht0wLQ71w13IQGawnSsQ2Flkqem1ZU/zyCM0tWs
 M7HNFHPHwgjc7h+x3TxRLUex+G98+uoB2zMjUtXqgNwvLawCHJQwFmLmQD7n+q0n86bmo0a52g
 Bruw7UvayaKJrURTg96E1OHT8KjIGlO6Rkktqi35IEFbITTlY8HUARVgmyoHltwCs/1WYYbqMM
 kQs=
X-SBRS: 2.7
X-MesageID: 21787128
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,304,1589256000"; d="scan'208";a="21787128"
Date: Thu, 2 Jul 2020 11:24:33 +0100
From: Anthony PERARD <anthony.perard@citrix.com>
To: =?utf-8?Q?Micha=C5=82_Leszczy=C5=84ski?= <michal.leszczynski@cert.pl>
Subject: Re: [PATCH v4 03/10] tools/libxl: add vmtrace_pt_size parameter
Message-ID: <20200702102433.GE2030@perard.uk.xensource.com>
References: <cover.1593519420.git.michal.leszczynski@cert.pl>
 <5f4f4b1afa432258daff43f2dc8119b6a441fff4.1593519420.git.michal.leszczynski@cert.pl>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <5f4f4b1afa432258daff43f2dc8119b6a441fff4.1593519420.git.michal.leszczynski@cert.pl>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 tamas.lengyel@intel.com, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, luwei.kang@intel.com,
 Jan Beulich <jbeulich@suse.com>, xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi Michał,

On Tue, Jun 30, 2020 at 02:33:46PM +0200, Michał Leszczyński wrote:
> From: Michal Leszczynski <michal.leszczynski@cert.pl>
> 
> Allow to specify the size of per-vCPU trace buffer upon
> domain creation. This is zero by default (meaning: not enabled).
> 
> Signed-off-by: Michal Leszczynski <michal.leszczynski@cert.pl>
> ---
> 
> diff --git a/docs/man/xl.cfg.5.pod.in b/docs/man/xl.cfg.5.pod.in
> index 0532739c1f..78f434b722 100644
> --- a/docs/man/xl.cfg.5.pod.in
> +++ b/docs/man/xl.cfg.5.pod.in
> @@ -278,6 +278,16 @@ memory=8096 will report significantly less memory available for use
>  than a system with maxmem=8096 memory=8096 due to the memory overhead
>  of having to track the unused pages.
>  
> +=item B<vmtrace_pt_size=BYTES>

I don't much like this new configuration name. To me, "pt" sounds like
passthrough, as in PCI passthrough. But it seems to stand for "processor
trace" (or tracing), doesn't it? If so, then we have "trace" twice in
the name, and I don't think this configuration is about tracing the
processor tracing feature. (Also, I don't think we need to state "vm" in
the name either, as every configuration option should be about a VM.)

How about a name that is easier to understand without having to know all
the possible abbreviations? Maybe "processor_trace_buffer_size" or
similar?

> +
> +
> +Specifies the size of processor trace buffer that would be allocated
> +for each vCPU belonging to this domain. Disabled (i.e. B<vmtrace_pt_size=0>)
> +by default. This must be set to a non-zero value in order to be able to
> +use processor tracing features with this domain.
> +
> +B<NOTE>: The size value must be between 4 kB and 4 GB and it must
> +be also a power of 2.

Maybe the configuration variable could take KBYTES (kilobytes)
instead of just BYTES, since the minimum is 4 kB?

Also, that item seems to be in the "Memory Allocation" section, but I
don't think that's a good place, as the other options there are for the
size of guest RAM. I don't know which section would be better, but maybe
"Other Options" would be OK.

>  =back
>  
>  =head3 Guest Virtual NUMA Configuration
> diff --git a/tools/xl/xl_parse.c b/tools/xl/xl_parse.c
> index 61b4ef7b7e..4eba224590 100644
> --- a/tools/xl/xl_parse.c
> +++ b/tools/xl/xl_parse.c
> @@ -1861,6 +1861,26 @@ void parse_config_data(const char *config_source,
>          }
>      }
>  
> +    if (!xlu_cfg_get_long(config, "vmtrace_pt_size", &l, 1) && l) {
> +        int32_t shift = 0;
> +
> +        if (l & (l - 1))
> +        {
> +            fprintf(stderr, "ERROR: pt buffer size must be a power of 2\n");

It would be better to state the option name in the error message.

> +            exit(1);
> +        }
> +
> +        while (l >>= 1) ++shift;
> +
> +        if (shift <= XEN_PAGE_SHIFT)
> +        {
> +            fprintf(stderr, "ERROR: too small pt buffer\n");
> +            exit(1);
> +        }
> +
> +        b_info->vmtrace_pt_order = shift - XEN_PAGE_SHIFT;
> +    }
> +
>      if (!xlu_cfg_get_list(config, "ioports", &ioports, &num_ioports, 0)) {
>          b_info->num_ioports = num_ioports;
>          b_info->ioports = calloc(num_ioports, sizeof(*b_info->ioports));
> diff --git a/xen/common/domain.c b/xen/common/domain.c
> index 0a33e0dfd6..27dcfbac8c 100644
> --- a/xen/common/domain.c
> +++ b/xen/common/domain.c
> diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
> index 59bdc28c89..7b8289d436 100644
> --- a/xen/include/public/domctl.h
> +++ b/xen/include/public/domctl.h

I don't think it's wise to modify the toolstack, the hypervisor, and the
hypercall ABI in the same patch. Can you change these last two files in a
separate patch?

Thank you,

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Thu Jul 02 12:31:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jul 2020 12:31:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jqyMb-0008UU-0z; Thu, 02 Jul 2020 12:30:49 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CtVa=AN=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jqyMa-0008UA-8h
 for xen-devel@lists.xenproject.org; Thu, 02 Jul 2020 12:30:48 +0000
X-Inumbo-ID: d6adfcf8-bc5f-11ea-881e-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d6adfcf8-bc5f-11ea-881e-12813bfff9fa;
 Thu, 02 Jul 2020 12:30:39 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=L4T71TMAnQvtUQoKYSofagDRrzmocHJk6NC5usjnTvw=; b=5RsxF3973ZuHBWPh+HtO/lTrU
 yfLLv+KDx9xKvL1lJSnS1Q1d0do/Upjj53Ldt1pOSsnLqvqf72BJrixuXm7VGrlEXOSq9uXHFrs71
 qjbTfiG8++aWOBEjr65DhWRQDB9EOXpJf95RFEylWQAaI3vz4yTnC1pNVf3F9XVGqVlRc=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jqyMR-0000yc-GF; Thu, 02 Jul 2020 12:30:39 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jqyMR-0001G9-5w; Thu, 02 Jul 2020 12:30:39 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jqyMR-0003Eb-5G; Thu, 02 Jul 2020 12:30:39 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151516-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 151516: tolerable FAIL - PUSHED
X-Osstest-Failures: linux-5.4:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 linux-5.4:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 linux-5.4:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 linux-5.4:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
X-Osstest-Versions-This: linux=e75220890bf6b37c5f7b1dbd81d8292ed6d96643
X-Osstest-Versions-That: linux=4e9688ad3d36e8f73c73e435f53da5ae1cd91a70
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 02 Jul 2020 12:30:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151516 linux-5.4 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151516/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop             fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop              fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop             fail never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail   never pass
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass

version targeted for testing:
 linux                e75220890bf6b37c5f7b1dbd81d8292ed6d96643
baseline version:
 linux                4e9688ad3d36e8f73c73e435f53da5ae1cd91a70

Last test of basis   151339  2020-06-24 16:09:27 Z    7 days
Testing same since   151503  2020-07-01 09:18:19 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Plattner <aplattner@nvidia.com>
  Aditya Pakki <pakki001@umn.edu>
  Al Cooper <alcooperx@gmail.com>
  Al Viro <viro@zeniv.linux.org.uk>
  Alan Stern <stern@rowland.harvard.edu>
  Alex Deucher <alexander.deucher@amd.com>
  Alexander Lobakin <alobakin@marvell.com>
  Alexei Starovoitov <ast@kernel.org>
  Andrew Morton <akpm@linux-foundation.org>
  Anna Schumaker <Anna.Schumaker@Netapp.com>
  Anton Eidelman <anton@lightbitslabs.com>
  Ard Biesheuvel <ardb@kernel.org>
  Ariel Elior <ariel.elior@marvell.com>
  Bernard Zhao <bernard@vivo.com>
  Borislav Petkov <bp@suse.de>
  Charles Keepax <ckeepax@opensource.cirrus.com>
  Chihhao Chen <chihhao.chen@mediatek.com>
  Christian Brauner <christian.brauner@ubuntu.com>
  Christoph Hellwig <hch@lst.de>
  Chuck Lever <chuck.lever@oracle.com>
  Chuhong Yuan <hslester96@gmail.com>
  Claudiu Manoil <claudiu.manoil@nxp.com>
  Corey Minyard <cminyard@mvista.com>
  Dan Carpenter <dan.carpenter@oracle.com>
  Daniel Borkmann <daniel@iogearbox.net>
  Daniel Gomez <dagmcr@gmail.com>
  Daniel Wagner <dwagner@suse.de>
  Darrick J. Wong <darrick.wong@oracle.com>
  David Christensen <drc@linux.vnet.ibm.com>
  David Howells <dhowells@redhat.com>
  David S. Miller <davem@davemloft.net>
  David Sterba <dsterba@suse.com>
  Denis Efremov <efremov@linux.com>
  Denis Kirjanov <denis.kirjanov@suse.com>
  Denis Kirjanov <kda@linux-powerpc.org>
  Dennis Dalessandro <dennis.dalessandro@intel.com>
  Dinghao Liu <dinghao.liu@zju.edu.cn>
  Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
  Doug Berger <opendmb@gmail.com>
  Drew Fustini <drew@beagleboard.org>
  Eddie James <eajames@linux.ibm.com>
  Eric Dumazet <edumazet@google.com>
  Fabian Vogt <fvogt@suse.de>
  Fan Guo <guofan5@huawei.com>
  Felipe Balbi <balbi@kernel.org>
  Felix Kuehling <Felix.Kuehling@amd.com>
  Fenghua Yu <fenghua.yu@intel.com>
  Filipe Manana <fdmanana@suse.com>
  Florian Fainelli <f.fainelli@gmail.com>
  Gao Xiang <hsiangkao@redhat.com>
  Gaurav Singh <gaurav1086@gmail.com>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  guodeqing <geffrey.guo@huawei.com>
  Hangbin Liu <liuhangbin@gmail.com>
  Heiko Carstens <heiko.carstens@de.ibm.com>
  Herbert Xu <herbert@gondor.apana.org.au>
  Huaisheng Ye <yehs1@lenovo.com>
  Huy Nguyen <huyn@mellanox.com>
  Igor Russkikh <irusskikh@marvell.com>
  Ilya Ponetayev <i.ponetaev@ndmsystems.com>
  Ingo Molnar <mingo@kernel.org>
  Jan Kara <jack@suse.cz>
  Jason A. Donenfeld <Jason@zx2c4.com>
  Jason Gunthorpe <jgg@mellanox.com>
  Jason Gunthorpe <jgg@nvidia.com>
  Jens Axboe <axboe@kernel.dk>
  Jeremy Kerr <jk@ozlabs.org>
  Jesper Dangaard Brouer <brouer@redhat.com>
  Jiping Ma <jiping.ma2@windriver.com>
  Joakim Tjernlund <joakim.tjernlund@infinera.com>
  Joel Stanley <joel@jms.id.au>
  Joerg Roedel <jroedel@suse.de>
  John Fastabend <john.fastabend@gmail.com>
  John Stultz <john.stultz@linaro.org>
  Jozsef Kadlecsik <kadlec@netfilter.org>
  Julian Wiedmann <jwi@linux.ibm.com>
  Junxiao Bi <junxiao.bi@oracle.com>
  Juri Lelli <juri.lelli@redhat.com>
  Kai-Heng Feng <kai.heng.feng@canonical.com>
  Kees Cook <keescook@chromium.org>
  Lalithambika Krishnakumar <lalithambika.krishnakumar@intel.com>
  Laurence Tratt <laurie@tratt.net>
  Laurent Pinchart <laurent.pinchart+renesas@ideasonboard.com>
  Leon Romanovsky <leonro@mellanox.com>
  Li Jun <jun.li@nxp.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Linus Walleij <linus.walleij@linaro.org>
  Longfang Liu <liulongfang@huawei.com>
  Lorenzo Bianconi <lorenzo@kernel.org>
  Lu Baolu <baolu.lu@linux.intel.com>
  Luis Chamberlain <mcgrof@kernel.org>
  Macpaul Lin <macpaul.lin@mediatek.com>
  Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
  Mans Rullgard <mans@mansr.com>
  Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
  Marek Vasut <marex@denx.de>
  Mark Brown <broonie@kernel.org>
  Mark Rutland <mark.rutland@arm.com>
  Mark Zhang <markz@mellanox.com>
  Martin K. Petersen <martin.petersen@oracle.com>
  Martin Schwidefsky <schwidefsky@de.ibm.com>
  Masahiro Yamada <masahiroy@kernel.org>
  Masami Hiramatsu <mhiramat@kernel.org>
  Mathias Nyman <mathias.nyman@linux.intel.com>
  Matt Fleming <matt@codeblueprint.co.uk>
  Matthew Hagan <mnhagan88@gmail.com>
  Michal Hocko <mhocko@suse.com>
  Michal Kalderon <michal.kalderon@marvell.com>
  Mike Snitzer <snitzer@redhat.com>
  Mikulas Patocka <mpatocka@redhat.com>
  Minas Harutyunyan <hminas@synopsys.com>
  Minas Harutyunyan <Minas.Harutyunyan@synopsys.com>
  Muchun Song <songmuchun@bytedance.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Nathan Huckleberry <nhuck@google.com>
  Navid Emamdoost <navid.emamdoost@gmail.com>
  Neal Cardwell <ncardwell@google.com>
  Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
  Olga Kornievskaia <kolga@netapp.com>
  Olga Kornievskaia <olga.kornievskaia@gmail.com>
  Oliver Neukum <oneukum@suse.com>
  Pablo Neira Ayuso <pablo@netfilter.org>
  Palmer Dabbelt <palmerdabbelt@google.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Peter Chen <peter.chen@nxp.com>
  Peter Zijlstra (Intel) <peterz@infradead.org>
  Qiushi Wu <wu000273@umn.edu>
  Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
  Reinette Chatre <reinette.chatre@intel.com>
  Ren Xudong <renxudong1@huawei.com>
  Robert Nelson <robertcnelson@gmail.com>
  Robin Gong <yibin.gong@nxp.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Roman Gushchin <guro@fb.com>
  Russell King <rmk+kernel@armlinux.org.uk>
  Sabrina Dubroca <sd@queasysnail.net>
  Sagi Grimberg <sagi@grimberg.me>
  Sami Tolvanen <samitolvanen@google.com>
  Sasha Levin <sashal@kernel.org>
  Sean Christopherson <sean.j.christopherson@intel.com>
  SeongJae Park <sjpark@amazon.de>
  Shawn Guo <shawnguo@kernel.org>
  Shay Drory <shayd@mellanox.com>
  Shengjiu Wang <shengjiu.wang@nxp.com>
  Soheil Hassas Yeganeh <soheil@google.com>
  Srinivas Kandagatla <srinivas.kandagatla@linaro.org>
  Stanislav Fomichev <sdf@google.com>
  Steffen Klassert <steffen.klassert@secunet.com>
  Steffen Maier <maier@linux.ibm.com>
  Steve French <stfrench@microsoft.com>
  Steven Rostedt (VMware) <rostedt@goodmis.org>
  Sven Auhagen <sven.auhagen@voleatech.de>
  Sven Schnelle <svens@linux.ibm.com>
  Taehee Yoo <ap420073@gmail.com>
  Takashi Iwai <tiwai@suse.de>
  Tang Bin <tangbin@cmss.chinamobile.com>
  Tariq Toukan <tariqt@mellanox.com>
  Thierry Reding <treding@nvidia.com>
  Thomas Falcon <tlfalcon@linux.ibm.com>
  Thomas Gleixner <tglx@linutronix.de>
  Thomas Martitz <t.martitz@avm.de>
  Todd Kjos <tkjos@google.com>
  Toke Høiland-Jørgensen <toke@redhat.com>
  Tom Seewald <tseewald@gmail.com>
  Tomasz Meresiński <tomasz@meresinski.eu>
  Tony Lindgren <tony@atomide.com>
  Trond Myklebust <trond.myklebust@hammerspace.com>
  Valentin Longchamp <valentin@longchamp.me>
  Vasily Averin <vvs@virtuozzo.com>
  Vasily Gorbik <gor@linux.ibm.com>
  Vidya Sagar <vidyas@nvidia.com>
  Vincent Chen <vincent.chen@sifive.com>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Waiman Long <longman@redhat.com>
  Wang Hai <wanghai38@huawei.com>
  Weiping Zhang <zhangweiping@didiglobal.com>
  Wenhui Sheng <Wenhui.Sheng@amd.com>
  Will Deacon <will@kernel.org>
  Willem de Bruijn <willemb@google.com>
  Wolfram Sang <wsa@kernel.org>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xin Tan <tanxin.ctf@gmail.com>
  Xiyu Yang <xiyuyang19@fudan.edu.cn>
  Yang Yingliang <yangyingliang@huawei.com>
  Yash Shah <yash.shah@sifive.com>
  Ye Bin <yebin10@huawei.com>
  Yick W. Tse <y_w_tse@yahoo.com.hk>
  Yonghong Song <yhs@fb.com>
  Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
  yu kuai <yukuai3@huawei.com>
  Zekun Shen <bruceshenzk@gmail.com>
  Zhang Shengju <zhangshengju@cmss.chinamobile.com>
  Zhang Xiaoxu <zhangxiaoxu5@huawei.com>
  Zheng Bin <zhengbin13@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

hint: The 'hooks/update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-receive' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
To xenbits.xen.org:/home/xen/git/linux-pvops.git
   4e9688ad3d36..e75220890bf6  e75220890bf6b37c5f7b1dbd81d8292ed6d96643 -> tested/linux-5.4


From xen-devel-bounces@lists.xenproject.org Thu Jul 02 12:55:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jul 2020 12:55:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jqyk3-0001mo-0Q; Thu, 02 Jul 2020 12:55:03 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=AhfR=AN=linutronix.de=tglx@srs-us1.protection.inumbo.net>)
 id 1jqyk1-0001mj-JQ
 for xen-devel@lists.xenproject.org; Thu, 02 Jul 2020 12:55:01 +0000
X-Inumbo-ID: 3c7c397a-bc63-11ea-8823-12813bfff9fa
Received: from galois.linutronix.de (unknown [193.142.43.55])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 3c7c397a-bc63-11ea-8823-12813bfff9fa;
 Thu, 02 Jul 2020 12:54:59 +0000 (UTC)
From: Thomas Gleixner <tglx@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
 s=2020; t=1593694497;
 h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
 in-reply-to:in-reply-to:references:references;
 bh=hNvxqRrDBbz/jOt3So+sU7T7gZSGCFNrBBBNp3CZ8ak=;
 b=uFjYLcy3ORbpRJ1mNlx3oLoa/xhPpbmog/Rzoeck+yzRlpR+1qwqbSbNRNkICxTp+NeiZ4
 1/5ZodefKs3pCG7Dr1zT1PtRHOmyPIarZKu5JrBCFec29TFCTKV4lBvvX1zyGqDxQVdNtl
 7wZqz88O8OsL8c2ovz5KpEjSb9EmltKRcWqqYLCOA8Jx3IFGtHz5Xm9YkjG4K/yAPIpbFL
 nOuvXvfvXPZc2nVgqCg03DT4kNoEgAUypGBlx0yupqVkaseT4QedEj3mFGCU+u6uFTNjwQ
 QVhMC61GkmeCe60cHGtCwBAgYeCWmooqDxCAB9PFpGNbfZm6wHtyKqtva+msPQ==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
 s=2020e; t=1593694497;
 h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
 in-reply-to:in-reply-to:references:references;
 bh=hNvxqRrDBbz/jOt3So+sU7T7gZSGCFNrBBBNp3CZ8ak=;
 b=J83+q3jnPFCHhvohg6pU47S27J9wegJwqnwUYOrKe1OOWO7wE7bVIoC8hRvQ+vI0ktFeHt
 GchKFMnOfP9AxrCw==
To: Andy Lutomirski <luto@kernel.org>, Brian Gerst <brgerst@gmail.com>
Subject: Re: [PATCH 3/6] x86/entry/64/compat: Fix Xen PV SYSENTER frame setup
In-Reply-To: <CALCETrVy-Q4K04wmEPe5VeU=at2BL4b-bSFkoSU-BPbTaTB2Yg@mail.gmail.com>
References: <cover.1593191971.git.luto@kernel.org>
 <947880c41ade688ff4836f665d0c9fcaa9bd1201.1593191971.git.luto@kernel.org>
 <CAMzpN2iW4XD1Gsgq0ZeeH2eewLO+9Mk6eyk0LnbF-kP3v=smLg@mail.gmail.com>
 <CALCETrVy-Q4K04wmEPe5VeU=at2BL4b-bSFkoSU-BPbTaTB2Yg@mail.gmail.com>
Date: Thu, 02 Jul 2020 14:54:57 +0200
Message-ID: <87k0zm9ivy.fsf@nanos.tec.linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 the arch/x86 maintainers <x86@kernel.org>,
 Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
 Andy Lutomirski <luto@kernel.org>, xen-devel <xen-devel@lists.xenproject.org>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Andy Lutomirski <luto@kernel.org> writes:
> On Wed, Jul 1, 2020 at 8:42 AM Brian Gerst <brgerst@gmail.com> wrote:
> > On Fri, Jun 26, 2020 at 1:30 PM Andy Lutomirski <luto@kernel.org> wrote:
>> >
>> > The SYSENTER frame setup was nonsense.  It worked by accident
>> > because the normal code into which the Xen asm jumped
>> > (entry_SYSENTER_32/compat) threw away SP without touching the stack.
>> > entry_SYSENTER_compat was recently modified such that it relied on
>> > having a valid stack pointer, so now the Xen asm needs to invoke it
>> > with a valid stack.
>> >
>> > Fix it up like SYSCALL: use the Xen-provided frame and skip the bare
>> > metal prologue.
>> >
>> > Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
>> > Cc: Juergen Gross <jgross@suse.com>
>> > Cc: Stefano Stabellini <sstabellini@kernel.org>
>> > Cc: xen-devel@lists.xenproject.org
>> > Fixes: 1c3e5d3f60e2 ("x86/entry: Make entry_64_compat.S objtool clean")
>> > Signed-off-by: Andy Lutomirski <luto@kernel.org>
>> > ---
>> >  arch/x86/entry/entry_64_compat.S |  1 +
>> >  arch/x86/xen/xen-asm_64.S        | 20 ++++++++++++++++----
>> >  2 files changed, 17 insertions(+), 4 deletions(-)
>> >
>> > diff --git a/arch/x86/entry/entry_64_compat.S b/arch/x86/entry/entry_64_compat.S
>> > index 7b9d8150f652..381a6de7de9c 100644
>> > --- a/arch/x86/entry/entry_64_compat.S
>> > +++ b/arch/x86/entry/entry_64_compat.S
>> > @@ -79,6 +79,7 @@ SYM_CODE_START(entry_SYSENTER_compat)
>> >         pushfq                          /* pt_regs->flags (except IF = 0) */
>> >         pushq   $__USER32_CS            /* pt_regs->cs */
>> >         pushq   $0                      /* pt_regs->ip = 0 (placeholder) */
>> > +SYM_INNER_LABEL(entry_SYSENTER_compat_after_hwframe, SYM_L_GLOBAL)
>>
>> This skips over the section that truncates the syscall number to
>> 32-bits.  The comments present some doubt that it is actually
>> necessary, but the Xen path shouldn't differ from native.  That code
>> should be moved after this new label.
>
> Whoops.  I thought I caught that myself, but apparently not.  I'll fix it.

Darn. I already applied that lot. Can you please send a delta fix?

Thanks,

        tglx
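
[Archive annotation] The frame-setup issue discussed above can be sketched in a small model. This is not the kernel code and all helper names here are hypothetical; it only illustrates the shape of the fix: the native compat SYSENTER prologue pushes the five-word IRET "hardware frame" (ss, sp, flags, cs, ip) itself and also truncates the syscall number in %rax to 32 bits, while the Xen PV path arrives with that frame already built by Xen, so it jumps past the pushes — but, per Brian's review comment, it must not jump past the truncation.

```python
def truncate32(x):
    """Model of `movl %eax, %eax`: keep only the low 32 bits."""
    return x & 0xFFFFFFFF

def native_sysenter(rax, user_ss, user_sp, rflags, user_cs):
    """Bare-metal prologue: build the hardware frame, then truncate rax."""
    stack = []
    # Hardware-frame pushes done by entry_SYSENTER_compat itself
    # (IF is cleared in the saved flags; ip is a placeholder 0):
    stack += [user_ss, user_sp, rflags & ~0x200, user_cs, 0]
    # Compat syscall numbers are 32-bit; truncate before using rax:
    rax = truncate32(rax)
    stack.append(rax)  # stands in for the rest of the pt_regs setup
    return stack

def xen_sysenter(rax, xen_frame):
    """Xen PV path: the five-word frame was already built by Xen, so we
    enter past the pushes (at the _after_hwframe label) -- but the
    32-bit truncation still has to happen on this path."""
    stack = list(xen_frame)
    rax = truncate32(rax)
    stack.append(rax)
    return stack
```

With the truncation kept on both paths, a Xen entry handed the same five-word frame produces the same stack contents as the native prologue.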


From xen-devel-bounces@lists.xenproject.org Thu Jul 02 13:00:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jul 2020 13:00:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jqypD-0002e0-O0; Thu, 02 Jul 2020 13:00:23 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6eVn=AN=citrix.com=george.dunlap@srs-us1.protection.inumbo.net>)
 id 1jqypC-0002dv-Lx
 for xen-devel@lists.xenproject.org; Thu, 02 Jul 2020 13:00:22 +0000
X-Inumbo-ID: fc61239a-bc63-11ea-8826-12813bfff9fa
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id fc61239a-bc63-11ea-8826-12813bfff9fa;
 Thu, 02 Jul 2020 13:00:21 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1593694821;
 h=from:to:cc:subject:date:message-id:content-id:
 content-transfer-encoding:mime-version;
 bh=JpdqUWmLtIxp95F8LhvgPzgrsk18CcQn9P14QlRU8w4=;
 b=CYGxmhN9ilKxeHdgvQTwsH7TUmmMeNpjSLZ/KMpbM9a5gTJZ+pufgopJ
 IUWIXjnjy6Z2PIdn3ckP5NLonTVMQzhr5PAvsuJytBXfEVRwQyG0g6n4l
 MJaQsaJ/g0mxTRMbry5HCbHg5zcpI8ZmPBezdhKnBV7S/hGHiytutfaDR s=;
Authentication-Results: esa6.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: qx072hEsQjapmb+M6zyM+dhQT53LCju5GCPkso4fuP9CPCtTNAxSQ9TVW+NfY+5AlQlaHVBVz3
 hHbehskfstKAb2l+kSpoSqgZCHPQ/LIyVAyZJ5Qp6BicBSxiTwNnsPAkZtbZG9twq3gLq+0MEh
 1kZwsKCn+1n3uLF6JhyDnZuX5mONXbSlabzDAIDmGz5CAPwW+NBTloZdYTZt7M3CmdJOr31pqz
 dKWw/DgDQeguYYE6yuGItvUUU+ok3emcvErD0SGJabMy4PI3v3HqMWlv9+u5l2Kz+MdfyBWhSW
 xts=
X-SBRS: 2.7
X-MesageID: 21819042
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,304,1589256000"; d="scan'208";a="21819042"
From: George Dunlap <George.Dunlap@citrix.com>
To: Tamas K Lengyel <tamas.k.lengyel@gmail.com>, "intel-xen@intel.com"
 <intel-xen@intel.com>, "daniel.kiper@oracle.com" <daniel.kiper@oracle.com>,
 Roger Pau Monne <roger.pau@citrix.com>, Sergey Dyasli
 <sergey.dyasli@citrix.com>, Christopher Clark
 <christopher.w.clark@gmail.com>, Rich Persaud <persaur@gmail.com>, "Kevin
 Pearson" <kevin.pearson@ortmanconsulting.com>, Juergen Gross
 <jgross@suse.com>, Paul Durrant <pdurrant@amazon.com>, "Ji, John"
 <john.ji@intel.com>, "Natarajan, Janakarajan" <jnataraj@amd.com>,
 "edgar.iglesias@xilinx.com" <edgar.iglesias@xilinx.com>,
 "robin.randhawa@arm.com" <robin.randhawa@arm.com>, Artem Mygaiev
 <Artem_Mygaiev@epam.com>, Matt Spencer <Matt.Spencer@arm.com>,
 "anastassios.nanos@onapp.com" <anastassios.nanos@onapp.com>, "Stewart
 Hildebrand" <Stewart.Hildebrand@dornerworks.com>, Volodymyr Babchuk
 <volodymyr_babchuk@epam.com>, "mirela.simonovic@aggios.com"
 <mirela.simonovic@aggios.com>, Jarvis Roach <Jarvis.Roach@dornerworks.com>,
 Jeff Kubascik <Jeff.Kubascik@dornerworks.com>, Stefano Stabellini
 <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Ian Jackson
 <Ian.Jackson@citrix.com>, Rian Quinn <rianquinn@gmail.com>, "Daniel P. Smith"
 <dpsmith@apertussolutions.com>,
 Doug Goldstein <cardoe@cardoe.com>, George Dunlap <George.Dunlap@citrix.com>,
 "David Woodhouse" <dwmw@amazon.co.uk>, Amit Shah <amit@infradead.org>,
 Varad Gautam <varadgautam@gmail.com>, Brian Woods <brian.woods@xilinx.com>, Robert Townley
 <rob.townley@gmail.com>, Bobby Eshleman <bobby.eshleman@gmail.com>, "Olivier
 Lambert" <olivier.lambert@vates.fr>, Andrew Cooper
 <Andrew.Cooper3@citrix.com>, Wei Liu <wl@xen.org>
Subject: Community Call CANCELLED this month
Thread-Topic: Community Call CANCELLED this month
Thread-Index: AQHWUHC3euMoSyGYyUylOaYjFN5BqA==
Date: Thu, 2 Jul 2020 13:00:10 +0000
Message-ID: <AA2A5E63-6791-4B81-9571-4FA6C4A2DAD3@citrix.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-mailer: Apple Mail (2.3608.80.23.2.2)
x-ms-exchange-messagesentrepresentingtype: 1
x-ms-exchange-transport-fromentityheader: Hosted
Content-Type: text/plain; charset="utf-8"
Content-ID: <8F3A688F48244F4E85E97012D80ACB75@citrix.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hey all,

Rather than holding the community call this month, I think people should submit design sessions for topics they want to discuss:

https://design-sessions.xenproject.org

We’ll pick up again in August.

 -George


From xen-devel-bounces@lists.xenproject.org Thu Jul 02 13:10:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jul 2020 13:10:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jqyyl-0003Vm-N7; Thu, 02 Jul 2020 13:10:15 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CtVa=AN=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jqyyj-0003VG-Qe
 for xen-devel@lists.xenproject.org; Thu, 02 Jul 2020 13:10:13 +0000
X-Inumbo-ID: 5d1ebd2c-bc65-11ea-bb8b-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5d1ebd2c-bc65-11ea-bb8b-bc764e2007e4;
 Thu, 02 Jul 2020 13:10:12 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=w2+OM7CFoGdh5FxR5EtokRKi6OHg9JuIKhu8kEKAZ8Q=; b=J4RZXaZyO3uYbOtfrFBCkF4CF
 2Vq+sMt/2PQuuHtM4EcrPITMp2JbbTXEORYZhcv8XnKwuDV40oiAECKNY/CmRZH1gacDPJsWO2dce
 NkEVvwui7cp/+9bzp9Wj99NC7y4tihzuWAGlQpmM+0YBP7Tp2rLxMwvJx9IwUDe12HuWE=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jqyyi-0001gZ-8J; Thu, 02 Jul 2020 13:10:12 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jqyyh-0002Nu-T7; Thu, 02 Jul 2020 13:10:11 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jqyyh-0006Ur-SQ; Thu, 02 Jul 2020 13:10:11 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151535-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 151535: tolerable all pass - PUSHED
X-Osstest-Failures: xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=be63d9d47f571a60d70f8fb630c03871312d9655
X-Osstest-Versions-That: xen=0dbed3ad3366734fd23ee3fd1f9989c8c96b6052
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 02 Jul 2020 13:10:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151535 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151535/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  be63d9d47f571a60d70f8fb630c03871312d9655
baseline version:
 xen                  0dbed3ad3366734fd23ee3fd1f9989c8c96b6052

Last test of basis   151515  2020-07-01 21:04:44 Z    0 days
Testing same since   151535  2020-07-02 10:00:52 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Bertrand Marquis <bertrand.marquis@arm.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   0dbed3ad33..be63d9d47f  be63d9d47f571a60d70f8fb630c03871312d9655 -> smoke


From xen-devel-bounces@lists.xenproject.org Thu Jul 02 13:28:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jul 2020 13:28:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jqzFk-0004Uz-CI; Thu, 02 Jul 2020 13:27:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CtVa=AN=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jqzFj-0004Uc-HV
 for xen-devel@lists.xenproject.org; Thu, 02 Jul 2020 13:27:47 +0000
X-Inumbo-ID: cdc585ea-bc67-11ea-b7bb-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id cdc585ea-bc67-11ea-b7bb-bc764e2007e4;
 Thu, 02 Jul 2020 13:27:40 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=lPlJBuymZpsCdYRCIjEPdlM+CNV1jPK3HVXp+8Pydbw=; b=Dnx30GxqDxN6UrW2dDx4fFDGB
 1xLexACsLAL9Qp4pOn9z9heiqCi/5pnfSrdMuUgS47+5aCtBTLdmsSWx/F2yQPXZjqLQXYtqo/W3l
 cibUnrLvPXaknlHmUyM7d1auXPJQ3r5De+jdAwfKeriKzTn7NB65mNoyTIrE524G1h/os=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jqzFc-00020X-5z; Thu, 02 Jul 2020 13:27:40 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jqzFb-0002zN-Tq; Thu, 02 Jul 2020 13:27:39 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jqzFb-0007aI-T8; Thu, 02 Jul 2020 13:27:39 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151513-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 151513: regressions - FAIL
X-Osstest-Failures: linux-linus:test-arm64-arm64-libvirt-xsm:guest-start/debian.repeat:fail:regression
 linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:guest-saverestore:fail:heisenbug
 linux-linus:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:heisenbug
 linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
X-Osstest-Versions-This: linux=7c30b859a947535f2213277e827d7ac7dcff9c84
X-Osstest-Versions-That: linux=1b5044021070efa3259f3e9548dc35d1eb6aa844
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 02 Jul 2020 13:27:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151513 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151513/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-libvirt-xsm 16 guest-start/debian.repeat fail REGR. vs. 151214

Tests which are failing intermittently (not blocking):
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 guest-saverestore fail in 151494 pass in 151513
 test-armhf-armhf-xl-rtds     16 guest-start/debian.repeat  fail pass in 151494

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 151214
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 151214
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 151214
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 151214
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 151214
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 151214
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 151214
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 151214
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass

version targeted for testing:
 linux                7c30b859a947535f2213277e827d7ac7dcff9c84
baseline version:
 linux                1b5044021070efa3259f3e9548dc35d1eb6aa844

Last test of basis   151214  2020-06-18 02:27:46 Z   14 days
Failing since        151236  2020-06-19 19:10:35 Z   12 days   16 attempts
Testing same since   151467  2020-06-30 02:29:41 Z    2 days    4 attempts

------------------------------------------------------------
478 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 22766 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Jul 02 13:31:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jul 2020 13:31:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jqzJU-0005J7-0a; Thu, 02 Jul 2020 13:31:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=bBDp=AN=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jqzJS-0005J1-PQ
 for xen-devel@lists.xenproject.org; Thu, 02 Jul 2020 13:31:38 +0000
X-Inumbo-ID: 5aef1706-bc68-11ea-bb8b-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5aef1706-bc68-11ea-bb8b-bc764e2007e4;
 Thu, 02 Jul 2020 13:31:38 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 65476BC67;
 Thu,  2 Jul 2020 13:31:37 +0000 (UTC)
Subject: Re: [PATCH v4 02/10] x86/vmx: add IPT cpu feature
To: Julien Grall <julien@xen.org>
References: <cover.1593519420.git.michal.leszczynski@cert.pl>
 <7302dbfcd07dfaad9e50bb772673e588fcc4de67.1593519420.git.michal.leszczynski@cert.pl>
 <85416128-a334-4640-7504-0865f715b3a2@xen.org>
 <48c59780-bedb-ff08-723c-be14a9b73e6b@citrix.com>
 <f2aa4cf9-0689-82c0-cb6c-55d55ecbd5c1@xen.org>
 <a9a33ba1-b121-5e6f-b74c-7d2a60c84b13@xen.org>
 <a7187837-495f-56a5-a8d0-635a53ac9234@citrix.com>
 <95154add-164a-5450-28e1-f24611e1642f@xen.org>
 <df0aa9b4-d7f7-f909-e833-3f2f3040a2dc@citrix.com>
 <de298379-43c3-648f-aade-9efc7f761970@xen.org>
 <8df16863-2207-6747-cf17-f88124927ddb@suse.com>
 <cf41855b-9e5e-13f2-9ab0-04b98f8b3cdd@xen.org>
 <75066926-9fe4-1e51-707c-c77c4e6d63ae@suse.com>
 <3fa0c3e7-9243-b1bb-d6ad-a3bd21437782@xen.org>
 <0e02a9b5-ba7a-43a2-3369-a4410f216ddb@suse.com>
 <9a3f4d58-e5ad-c7a1-6c5f-42aa92101ca1@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <d0165fc3-fb05-2e49-eff3-e45a674b00e1@suse.com>
Date: Thu, 2 Jul 2020 15:30:05 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.9.0
MIME-Version: 1.0
In-Reply-To: <9a3f4d58-e5ad-c7a1-6c5f-42aa92101ca1@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin Tian <kevin.tian@intel.com>,
 Stefano Stabellini <sstabellini@kernel.org>, tamas.lengyel@intel.com,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Micha=c5=82_Leszczy=c5=84ski?= <michal.leszczynski@cert.pl>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Jun Nakajima <jun.nakajima@intel.com>, xen-devel@lists.xenproject.org,
 luwei.kang@intel.com, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 02.07.2020 11:57, Julien Grall wrote:
> Hi,
> 
> On 02/07/2020 10:18, Jan Beulich wrote:
>> On 02.07.2020 10:54, Julien Grall wrote:
>>>
>>>
>>> On 02/07/2020 09:50, Jan Beulich wrote:
>>>> On 02.07.2020 10:42, Julien Grall wrote:
>>>>> On 02/07/2020 09:29, Jan Beulich wrote:
>>>>>> I'm with Andrew here, fwiw, as long as the little bit of code that
>>>>>> is actually put in common/ or include/xen/ doesn't imply arbitrary
>>>>>> restrictions on acceptable values.
>>>>> Well yes, the code is simple. However, the code as it is wouldn't be
>>>>> usable on other architectures without additional work (aside from
>>>>> arch-specific code). For instance, there is no way to map the buffer
>>>>> outside of Xen, as it is all x86-specific.
>>>>>
>>>>> If you want the allocation to be in the common code, then the
>>>>> infrastructure to map/unmap the buffer should also be in common code.
>>>>> Otherwise, there is no point in allocating it in common.
>>>>
>>>> I don't think I agree here - I see nothing wrong with exposing of
>>>> the memory being arch specific, when allocation is generic. This
>>>> is no different from, in just x86, allocation logic being common
>>>> to PV and HVM, but exposing being different for both.
>>>
>>> Are you suggesting that the way it would be exposed may be different for
>>> other architectures?
>>
>> Why not? To take a possibly extreme example - consider an arch
>> where (for bare metal) the buffer is specified to appear at a
>> fixed range of addresses.
> 
> I am probably missing something here... The current goal is for the buffer
> to be mapped into dom0. Most likely the way to map it will be using the
> acquire hypercall (unless you invent a brand new one...).
> 
> For a guest, you could possibly reserve a fixed range and then map it
> when creating the vCPU in Xen. But then, you will likely want a fixed
> size... So why would you bother asking the user to define the size?

Because there may be the option to only populate part of the fixed
range?

> Another way to do it would be for the toolstack to do the mapping. At
> which point, you still need a hypercall to do the mapping (probably the
> acquire hypercall).

There may not be any mapping to do in such a contrived, fixed-range
environment. This scenario was specifically to demonstrate that the
way the mapping gets done may be arch-specific (here: a no-op)
despite the allocation not being so.

Jan
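The split Jan is arguing for can be sketched in standalone C. This is a hypothetical illustration, not Xen's actual code: the names `vmtrace_alloc_buffer` and `arch_expose_buffer` are made up, and the point is only that size validation and allocation can be identical on every architecture while the "exposure" step is an arch hook that may legitimately degenerate to a no-op (the fixed-range case).

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

struct trace_buf {
    void *va;
    size_t size;
};

/* Arch hook: how the buffer becomes visible outside the hypervisor.
 * On one arch this might insert mappings into the guest or dom0; on a
 * contrived fixed-range arch it can legitimately be a no-op. */
static int arch_expose_buffer(struct trace_buf *buf)
{
    (void)buf;
    return 0;               /* no-op "mapping" for the fixed-range case */
}

/* Common code: size validation and allocation, identical everywhere. */
static int vmtrace_alloc_buffer(struct trace_buf *buf, size_t size)
{
    if (size == 0 || (size & (size - 1)))   /* require a power of two */
        return -1;
    buf->va = calloc(1, size);
    if (buf->va == NULL)
        return -1;
    buf->size = size;
    return arch_expose_buffer(buf);
}
```

Under this split, replacing `arch_expose_buffer()` is all another architecture would need to do; the common allocation path stays untouched.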


From xen-devel-bounces@lists.xenproject.org Thu Jul 02 14:14:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jul 2020 14:14:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jqzzB-0000Eu-DK; Thu, 02 Jul 2020 14:14:45 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=gpFn=AN=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jqzzA-0000En-8K
 for xen-devel@lists.xenproject.org; Thu, 02 Jul 2020 14:14:44 +0000
X-Inumbo-ID: 5f50f91c-bc6e-11ea-8834-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 5f50f91c-bc6e-11ea-8834-12813bfff9fa;
 Thu, 02 Jul 2020 14:14:42 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=Y97D6coPVxi0PX9tBidIhqe+3o0C8naVU4+vdJ3TuTk=; b=GfH2bsKKBG46/Cmq2blhMBWcF1
 lg4cqT8zMUvr+JYdW8uIzZDoZQMcqvi2KM7U9eSg0uWAwe/pCvFgKfVvhJnZgXSPAtlNHhekwBFoe
 0iPXwvC7nBn8r/emE/1pMMYFwjFjZ80Sx14LoxPXbowQPoGxG1xvmwwkPRLrZUV5nlOc=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jqzz4-0002wc-Eg; Thu, 02 Jul 2020 14:14:38 +0000
Received: from 54-240-197-238.amazon.com ([54.240.197.238]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jqzz4-0004hi-6H; Thu, 02 Jul 2020 14:14:38 +0000
Subject: Re: [PATCH v4 02/10] x86/vmx: add IPT cpu feature
To: Jan Beulich <jbeulich@suse.com>
References: <cover.1593519420.git.michal.leszczynski@cert.pl>
 <7302dbfcd07dfaad9e50bb772673e588fcc4de67.1593519420.git.michal.leszczynski@cert.pl>
 <85416128-a334-4640-7504-0865f715b3a2@xen.org>
 <48c59780-bedb-ff08-723c-be14a9b73e6b@citrix.com>
 <f2aa4cf9-0689-82c0-cb6c-55d55ecbd5c1@xen.org>
 <a9a33ba1-b121-5e6f-b74c-7d2a60c84b13@xen.org>
 <a7187837-495f-56a5-a8d0-635a53ac9234@citrix.com>
 <95154add-164a-5450-28e1-f24611e1642f@xen.org>
 <df0aa9b4-d7f7-f909-e833-3f2f3040a2dc@citrix.com>
 <de298379-43c3-648f-aade-9efc7f761970@xen.org>
 <8df16863-2207-6747-cf17-f88124927ddb@suse.com>
 <cf41855b-9e5e-13f2-9ab0-04b98f8b3cdd@xen.org>
 <75066926-9fe4-1e51-707c-c77c4e6d63ae@suse.com>
 <3fa0c3e7-9243-b1bb-d6ad-a3bd21437782@xen.org>
 <0e02a9b5-ba7a-43a2-3369-a4410f216ddb@suse.com>
 <9a3f4d58-e5ad-c7a1-6c5f-42aa92101ca1@xen.org>
 <d0165fc3-fb05-2e49-eff3-e45a674b00e1@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <7f915146-6566-e5a7-14d2-cb2319838562@xen.org>
Date: Thu, 2 Jul 2020 15:14:35 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:68.0)
 Gecko/20100101 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <d0165fc3-fb05-2e49-eff3-e45a674b00e1@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin Tian <kevin.tian@intel.com>,
 Stefano Stabellini <sstabellini@kernel.org>, tamas.lengyel@intel.com,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Micha=c5=82_Leszczy=c5=84ski?= <michal.leszczynski@cert.pl>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Jun Nakajima <jun.nakajima@intel.com>, xen-devel@lists.xenproject.org,
 luwei.kang@intel.com, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi,

On 02/07/2020 14:30, Jan Beulich wrote:
> On 02.07.2020 11:57, Julien Grall wrote:
>> Hi,
>>
>> On 02/07/2020 10:18, Jan Beulich wrote:
>>> On 02.07.2020 10:54, Julien Grall wrote:
>>>>
>>>>
>>>> On 02/07/2020 09:50, Jan Beulich wrote:
>>>>> On 02.07.2020 10:42, Julien Grall wrote:
>>>>>> On 02/07/2020 09:29, Jan Beulich wrote:
>>>>>>> I'm with Andrew here, fwiw, as long as the little bit of code that
>>>>>>> is actually put in common/ or include/xen/ doesn't imply arbitrary
>>>>>>> restrictions on acceptable values.
>>>>>> Well yes, the code is simple. However, the code as it is wouldn't be
>>>>>> usable on other architectures without additional work (aside from
>>>>>> arch-specific code). For instance, there is no way to map the buffer
>>>>>> outside of Xen, as it is all x86-specific.
>>>>>>
>>>>>> If you want the allocation to be in the common code, then the
>>>>>> infrastructure to map/unmap the buffer should also be in common code.
>>>>>> Otherwise, there is no point in allocating it in common.
>>>>>
>>>>> I don't think I agree here - I see nothing wrong with exposing of
>>>>> the memory being arch specific, when allocation is generic. This
>>>>> is no different from, in just x86, allocation logic being common
>>>>> to PV and HVM, but exposing being different for both.
>>>>
>>>> Are you suggesting that the way it would be exposed may be different for
>>>> other architectures?
>>>
>>> Why not? To take a possibly extreme example - consider an arch
>>> where (for bare metal) the buffer is specified to appear at a
>>> fixed range of addresses.
>>
>> I am probably missing something here... The current goal is for the buffer
>> to be mapped into dom0. Most likely the way to map it will be using the
>> acquire hypercall (unless you invent a brand new one...).
>>
>> For a guest, you could possibly reserve a fixed range and then map it
>> when creating the vCPU in Xen. But then, you will likely want a fixed
>> size... So why would you bother asking the user to define the size?
> 
> Because there may be the option to only populate part of the fixed
> range?

It was yet another extreme case ;).

> 
>> Another way to do it would be for the toolstack to do the mapping. At
>> which point, you still need a hypercall to do the mapping (probably the
>> acquire hypercall).
> 
> There may not be any mapping to do in such a contrived, fixed-range
> environment. This scenario was specifically to demonstrate that the
> way the mapping gets done may be arch-specific (here: a no-op)
> despite the allocation not being so.
You are arguing about extreme cases, which I don't think is really helpful 
here. Yes, if you want to map at a fixed address in a guest you may not 
need the acquire hypercall. But in most of the other cases (such as for 
the tools) you will need it.

So what's the problem with requesting to have the acquire hypercall 
implemented in common code?

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Jul 02 14:17:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jul 2020 14:17:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jr01T-0000Lg-R0; Thu, 02 Jul 2020 14:17:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=bBDp=AN=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jr01S-0000Lb-PO
 for xen-devel@lists.xenproject.org; Thu, 02 Jul 2020 14:17:06 +0000
X-Inumbo-ID: b4da886c-bc6e-11ea-bca7-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b4da886c-bc6e-11ea-bca7-bc764e2007e4;
 Thu, 02 Jul 2020 14:17:06 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 36C43AD9F;
 Thu,  2 Jul 2020 14:17:05 +0000 (UTC)
Subject: Re: [PATCH v4 02/10] x86/vmx: add IPT cpu feature
To: Julien Grall <julien@xen.org>
References: <cover.1593519420.git.michal.leszczynski@cert.pl>
 <7302dbfcd07dfaad9e50bb772673e588fcc4de67.1593519420.git.michal.leszczynski@cert.pl>
 <85416128-a334-4640-7504-0865f715b3a2@xen.org>
 <48c59780-bedb-ff08-723c-be14a9b73e6b@citrix.com>
 <f2aa4cf9-0689-82c0-cb6c-55d55ecbd5c1@xen.org>
 <a9a33ba1-b121-5e6f-b74c-7d2a60c84b13@xen.org>
 <a7187837-495f-56a5-a8d0-635a53ac9234@citrix.com>
 <95154add-164a-5450-28e1-f24611e1642f@xen.org>
 <df0aa9b4-d7f7-f909-e833-3f2f3040a2dc@citrix.com>
 <de298379-43c3-648f-aade-9efc7f761970@xen.org>
 <8df16863-2207-6747-cf17-f88124927ddb@suse.com>
 <cf41855b-9e5e-13f2-9ab0-04b98f8b3cdd@xen.org>
 <75066926-9fe4-1e51-707c-c77c4e6d63ae@suse.com>
 <3fa0c3e7-9243-b1bb-d6ad-a3bd21437782@xen.org>
 <0e02a9b5-ba7a-43a2-3369-a4410f216ddb@suse.com>
 <9a3f4d58-e5ad-c7a1-6c5f-42aa92101ca1@xen.org>
 <d0165fc3-fb05-2e49-eff3-e45a674b00e1@suse.com>
 <7f915146-6566-e5a7-14d2-cb2319838562@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <7ac383c2-0264-cc75-a85b-13c1fdfb0bd6@suse.com>
Date: Thu, 2 Jul 2020 16:17:06 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.9.0
MIME-Version: 1.0
In-Reply-To: <7f915146-6566-e5a7-14d2-cb2319838562@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin Tian <kevin.tian@intel.com>,
 Stefano Stabellini <sstabellini@kernel.org>, tamas.lengyel@intel.com,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Micha=c5=82_Leszczy=c5=84ski?= <michal.leszczynski@cert.pl>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Jun Nakajima <jun.nakajima@intel.com>, xen-devel@lists.xenproject.org,
 luwei.kang@intel.com, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 02.07.2020 16:14, Julien Grall wrote:
> Hi,
> 
> On 02/07/2020 14:30, Jan Beulich wrote:
>> On 02.07.2020 11:57, Julien Grall wrote:
>>> Hi,
>>>
>>> On 02/07/2020 10:18, Jan Beulich wrote:
>>>> On 02.07.2020 10:54, Julien Grall wrote:
>>>>>
>>>>>
>>>>> On 02/07/2020 09:50, Jan Beulich wrote:
>>>>>> On 02.07.2020 10:42, Julien Grall wrote:
>>>>>>> On 02/07/2020 09:29, Jan Beulich wrote:
>>>>>>>> I'm with Andrew here, fwiw, as long as the little bit of code that
>>>>>>>> is actually put in common/ or include/xen/ doesn't imply arbitrary
>>>>>>>> restrictions on acceptable values.
>>>>>>> Well yes, the code is simple. However, the code as it is wouldn't be
>>>>>>> usable on other architectures without additional work (aside from
>>>>>>> arch-specific code). For instance, there is no way to map the buffer
>>>>>>> outside of Xen, as it is all x86-specific.
>>>>>>>
>>>>>>> If you want the allocation to be in the common code, then the
>>>>>>> infrastructure to map/unmap the buffer should also be in common code.
>>>>>>> Otherwise, there is no point in allocating it in common.
>>>>>>
>>>>>> I don't think I agree here - I see nothing wrong with exposing of
>>>>>> the memory being arch specific, when allocation is generic. This
>>>>>> is no different from, in just x86, allocation logic being common
>>>>>> to PV and HVM, but exposing being different for both.
>>>>>
>>>>> Are you suggesting that the way it would be exposed may be different for
>>>>> other architectures?
>>>>
>>>> Why not? To take a possibly extreme example - consider an arch
>>>> where (for bare metal) the buffer is specified to appear at a
>>>> fixed range of addresses.
>>>
>>> I am probably missing something here... The current goal is for the buffer
>>> to be mapped into dom0. Most likely the way to map it will be using the
>>> acquire hypercall (unless you invent a brand new one...).
>>>
>>> For a guest, you could possibly reserve a fixed range and then map it
>>> when creating the vCPU in Xen. But then, you will likely want a fixed
>>> size... So why would you bother asking the user to define the size?
>>
>> Because there may be the option to only populate part of the fixed
>> range?
> 
> It was yet another extreme case ;).

Yes, sure - just to demonstrate my point.

>>> Another way to do it would be for the toolstack to do the mapping. At
>>> which point, you still need a hypercall to do the mapping (probably the
>>> acquire hypercall).
>>
>> There may not be any mapping to do in such a contrived, fixed-range
>> environment. This scenario was specifically to demonstrate that the
>> way the mapping gets done may be arch-specific (here: a no-op)
>> despite the allocation not being so.
> You are arguing about extreme cases, which I don't think is really helpful
> here. Yes, if you want to map at a fixed address in a guest you may not
> need the acquire hypercall. But in most of the other cases (such as for
> the tools) you will need it.
> 
> So what's the problem with requesting to have the acquire hypercall
> implemented in common code?

Didn't we start out by you asking that there be as little common code
as possible for the time being? I have no issue with putting the
acquire implementation there ...

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jul 02 14:22:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jul 2020 14:22:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jr06l-0001AO-Et; Thu, 02 Jul 2020 14:22:35 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Ftxy=AN=canonical.com=colin.king@srs-us1.protection.inumbo.net>)
 id 1jr06k-0001AJ-Dw
 for xen-devel@lists.xenproject.org; Thu, 02 Jul 2020 14:22:34 +0000
X-Inumbo-ID: 7786a83c-bc6f-11ea-8496-bc764e2007e4
Received: from youngberry.canonical.com (unknown [91.189.89.112])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7786a83c-bc6f-11ea-8496-bc764e2007e4;
 Thu, 02 Jul 2020 14:22:32 +0000 (UTC)
Received: from 1.general.cking.uk.vpn ([10.172.193.212] helo=localhost)
 by youngberry.canonical.com with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.86_2)
 (envelope-from <colin.king@canonical.com>)
 id 1jr06Z-0000Sf-K0; Thu, 02 Jul 2020 14:22:23 +0000
From: Colin King <colin.king@canonical.com>
To: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 "David S . Miller" <davem@davemloft.net>, Jakub Kicinski <kuba@kernel.org>,
 xen-devel@lists.xenproject.org, netdev@vger.kernel.org
Subject: [PATCH][next] xen-netfront: remove redundant assignment to variable
 'act'
Date: Thu,  2 Jul 2020 15:22:23 +0100
Message-Id: <20200702142223.48178-1-colin.king@canonical.com>
X-Mailer: git-send-email 2.27.0
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: kernel-janitors@vger.kernel.org, linux-kernel@vger.kernel.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Colin Ian King <colin.king@canonical.com>

The variable 'act' is being initialized with a value that is never
read, and it is being updated later with a new value. The
initialization is therefore redundant and can be removed.

Addresses-Coverity: ("Unused value")
Signed-off-by: Colin Ian King <colin.king@canonical.com>
---
 drivers/net/xen-netfront.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
index 468f3f6f1425..860a0cce346d 100644
--- a/drivers/net/xen-netfront.c
+++ b/drivers/net/xen-netfront.c
@@ -856,7 +856,7 @@ static u32 xennet_run_xdp(struct netfront_queue *queue, struct page *pdata,
 {
 	struct xdp_frame *xdpf;
 	u32 len = rx->status;
-	u32 act = XDP_PASS;
+	u32 act;
 	int err;
 
 	xdp->data_hard_start = page_address(pdata);
-- 
2.27.0
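The class of issue being fixed here is a dead store: the initializer is unconditionally overwritten before any read, so static analyzers flag it as an unused value. A minimal standalone sketch of the same shape (the names echo the driver, but the logic is illustrative only and not xen-netfront's actual code):

```c
#include <assert.h>

/* Stand-ins for the XDP verdict values; not the kernel's definitions. */
enum { XDP_PASS = 1, XDP_DROP = 2 };

static unsigned int run_xdp(int verdict_from_prog)
{
    unsigned int act;   /* no "= XDP_PASS": the initial value was never read */

    /* ... setup work that does not read 'act' ... */

    /* First real write; any initializer above would have been dead. */
    act = verdict_from_prog ? XDP_PASS : XDP_DROP;
    return act;
}
```

Behaviour is identical with or without the initializer; removing it merely silences the analyzer and documents that the first assignment below is the one that matters.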



From xen-devel-bounces@lists.xenproject.org Thu Jul 02 14:25:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jul 2020 14:25:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jr09y-0001JJ-0O; Thu, 02 Jul 2020 14:25:54 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WaoH=AN=citrix.com=anthony.perard@srs-us1.protection.inumbo.net>)
 id 1jr09w-0001JD-5s
 for xen-devel@lists.xenproject.org; Thu, 02 Jul 2020 14:25:52 +0000
X-Inumbo-ID: edbdd9da-bc6f-11ea-b7bb-bc764e2007e4
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id edbdd9da-bc6f-11ea-b7bb-bc764e2007e4;
 Thu, 02 Jul 2020 14:25:50 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1593699950;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:content-transfer-encoding:in-reply-to;
 bh=P8qqWwaT3JO4DtYZfTWWgYAYHCkV32cVmUHPjU8krbs=;
 b=gDndjqABMnKA9vjodQpAg8grWdUjV2QcfHkIaR1hGhsEAMG7qc5wR7mE
 antLuNKCQ7n6PZMdibvyN6BAX/y4iEwkJC3/bQD6nTaTEMyGW8BYMvWjq
 QTl70BVmDdcJ63WW+8E0KsegfKJ6a7gGvZIzFZi1Q6WLdNuAtfQ0G3sS2 c=;
Authentication-Results: esa4.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: Dck9YdOiChKAkoJYdjmrWBMAMyc5NUpu70ZCutMjgsdLlU9Xl3KAOxhOlMAIYPaypMCDiFfy9F
 Gp6onPQUk1pU6kN5zrj/2oFnwiHuIqsS7AljGoE6WmAFppou6rqa5+z25T3n8uXqQlTPNgsOd2
 obIBuVpSL97mRDcINEiyB8y5MCgOgOdKCxLGC4yiOMaFka5dF8pSm8kCp9waMaMbiH3f+aSsOE
 IQD5hrBme7rwuYe9j+Z79Xjm0wVA+Lr0YieBRjYeqAjZ3qaHWfk98MkYNJ6LMmBWvEgLW6897R
 USs=
X-SBRS: 2.7
X-MesageID: 22307503
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,304,1589256000"; d="scan'208";a="22307503"
Date: Thu, 2 Jul 2020 15:25:44 +0100
From: Anthony PERARD <anthony.perard@citrix.com>
To: Paul Durrant <paul.durrant@citrix.com>
Subject: Re: [PATCH v5] xen: fix build without pci passthrough
Message-ID: <20200702142544.GA2157@perard.uk.xensource.com>
References: <20200604183141.32044-1-pbonzini@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="iso-8859-1"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20200604183141.32044-1-pbonzini@redhat.com>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org,
 Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Thu, Jun 04, 2020 at 02:31:41PM -0400, Paolo Bonzini wrote:
> From: Anthony PERARD <anthony.perard@citrix.com>
> 
> Xen PCI passthrough support may not be available and thus the global
> variable "has_igd_gfx_passthru" might be compiled out. Common code
> should not access it in that case.
> 
> Unfortunately, we can't use CONFIG_XEN_PCI_PASSTHROUGH directly in
> xen-common.c, so this patch instead moves access to the
> has_igd_gfx_passthru variable behind functions, and those functions
> are also implemented as stubs. The stubs will be used when QEMU is
> built without passthrough support.
> 
> Now, when one wants to enable igd-passthru via the -machine
> property, they will get an error message if QEMU is built without
> passthrough support.
> 
> Fixes: 46472d82322d0 ('xen: convert "-machine igd-passthru" to an accelerator property')
> Reported-by: Roger Pau Monné <roger.pau@citrix.com>
> Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
> Message-Id: <20200603160442.3151170-1-anthony.perard@citrix.com>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

Hi Paul,

Can I backport this patch to qemu-xen-4.14? It allows building QEMU
without PCI passthrough support, which seems to be important for FreeBSD,
as PT with QEMU is only available on Linux.

(There's a fix to the patch that I would backport as well. "xen:
Actually fix build without passthrough")

Thanks.

> ---
>  accel/xen/xen-all.c  |  4 ++--
>  hw/Makefile.objs     |  2 +-
>  hw/i386/pc_piix.c    |  2 +-
>  hw/xen/Makefile.objs |  3 ++-
>  hw/xen/xen_pt.c      | 12 +++++++++++-
>  hw/xen/xen_pt.h      |  6 ++++--
>  hw/xen/xen_pt_stub.c | 22 ++++++++++++++++++++++
>  7 files changed, 43 insertions(+), 8 deletions(-)
>  create mode 100644 hw/xen/xen_pt_stub.c
> 
> diff --git a/accel/xen/xen-all.c b/accel/xen/xen-all.c
> index f3edc65ec9..0c24d4b191 100644
> --- a/accel/xen/xen-all.c
> +++ b/accel/xen/xen-all.c
> @@ -137,12 +137,12 @@ static void xen_change_state_handler(void *opaque, int running,
>  
>  static bool xen_get_igd_gfx_passthru(Object *obj, Error **errp)
>  {
> -    return has_igd_gfx_passthru;
> +    return xen_igd_gfx_pt_enabled();
>  }
>  
>  static void xen_set_igd_gfx_passthru(Object *obj, bool value, Error **errp)
>  {
> -    has_igd_gfx_passthru = value;
> +    xen_igd_gfx_pt_set(value, errp);
>  }
>  
>  static void xen_setup_post(MachineState *ms, AccelState *accel)
> diff --git a/hw/Makefile.objs b/hw/Makefile.objs
> index 660e2b4373..4cbe5e4e57 100644
> --- a/hw/Makefile.objs
> +++ b/hw/Makefile.objs
> @@ -35,7 +35,7 @@ devices-dirs-y += usb/
>  devices-dirs-$(CONFIG_VFIO) += vfio/
>  devices-dirs-y += virtio/
>  devices-dirs-y += watchdog/
> -devices-dirs-y += xen/
> +devices-dirs-$(CONFIG_XEN) += xen/
>  devices-dirs-$(CONFIG_MEM_DEVICE) += mem/
>  devices-dirs-$(CONFIG_NUBUS) += nubus/
>  devices-dirs-y += semihosting/
> diff --git a/hw/i386/pc_piix.c b/hw/i386/pc_piix.c
> index eea964e72b..054d3aa9f7 100644
> --- a/hw/i386/pc_piix.c
> +++ b/hw/i386/pc_piix.c
> @@ -377,7 +377,7 @@ static void pc_init_isa(MachineState *machine)
>  #ifdef CONFIG_XEN
>  static void pc_xen_hvm_init_pci(MachineState *machine)
>  {
> -    const char *pci_type = has_igd_gfx_passthru ?
> +    const char *pci_type = xen_igd_gfx_pt_enabled() ?
>                  TYPE_IGD_PASSTHROUGH_I440FX_PCI_DEVICE : TYPE_I440FX_PCI_DEVICE;
>  
>      pc_init1(machine,
> diff --git a/hw/xen/Makefile.objs b/hw/xen/Makefile.objs
> index 340b2c5096..3fc715e595 100644
> --- a/hw/xen/Makefile.objs
> +++ b/hw/xen/Makefile.objs
> @@ -1,6 +1,7 @@
>  # xen backend driver support
> -common-obj-$(CONFIG_XEN) += xen-legacy-backend.o xen_devconfig.o xen_pvdev.o xen-bus.o xen-bus-helper.o xen-backend.o
> +common-obj-y += xen-legacy-backend.o xen_devconfig.o xen_pvdev.o xen-bus.o xen-bus-helper.o xen-backend.o
>  
>  obj-$(CONFIG_XEN_PCI_PASSTHROUGH) += xen-host-pci-device.o
>  obj-$(CONFIG_XEN_PCI_PASSTHROUGH) += xen_pt.o xen_pt_config_init.o xen_pt_graphics.o xen_pt_msi.o
>  obj-$(CONFIG_XEN_PCI_PASSTHROUGH) += xen_pt_load_rom.o
> +obj-$(call $(lnot, $(CONFIG_XEN_PCI_PASSTHROUGH))) += xen_pt_stub.o
> diff --git a/hw/xen/xen_pt.c b/hw/xen/xen_pt.c
> index 81d5ad8da7..ab84443d5e 100644
> --- a/hw/xen/xen_pt.c
> +++ b/hw/xen/xen_pt.c
> @@ -65,7 +65,17 @@
>  #include "qemu/range.h"
>  #include "exec/address-spaces.h"
>  
> -bool has_igd_gfx_passthru;
> +static bool has_igd_gfx_passthru;
> +
> +bool xen_igd_gfx_pt_enabled(void)
> +{
> +    return has_igd_gfx_passthru;
> +}
> +
> +void xen_igd_gfx_pt_set(bool value, Error **errp)
> +{
> +    has_igd_gfx_passthru = value;
> +}
>  
>  #define XEN_PT_NR_IRQS (256)
>  static uint8_t xen_pt_mapped_machine_irq[XEN_PT_NR_IRQS] = {0};
> diff --git a/hw/xen/xen_pt.h b/hw/xen/xen_pt.h
> index 179775db7b..6e9cec95f3 100644
> --- a/hw/xen/xen_pt.h
> +++ b/hw/xen/xen_pt.h
> @@ -5,6 +5,9 @@
>  #include "hw/pci/pci.h"
>  #include "xen-host-pci-device.h"
>  
> +bool xen_igd_gfx_pt_enabled(void);
> +void xen_igd_gfx_pt_set(bool value, Error **errp);
> +
>  void xen_pt_log(const PCIDevice *d, const char *f, ...) GCC_FMT_ATTR(2, 3);
>  
>  #define XEN_PT_ERR(d, _f, _a...) xen_pt_log(d, "%s: Error: "_f, __func__, ##_a)
> @@ -322,10 +325,9 @@ extern void *pci_assign_dev_load_option_rom(PCIDevice *dev,
>                                              unsigned int domain,
>                                              unsigned int bus, unsigned int slot,
>                                              unsigned int function);
> -extern bool has_igd_gfx_passthru;
>  static inline bool is_igd_vga_passthrough(XenHostPCIDevice *dev)
>  {
> -    return (has_igd_gfx_passthru
> +    return (xen_igd_gfx_pt_enabled()
>              && ((dev->class_code >> 0x8) == PCI_CLASS_DISPLAY_VGA));
>  }
>  int xen_pt_register_vga_regions(XenHostPCIDevice *dev);
> diff --git a/hw/xen/xen_pt_stub.c b/hw/xen/xen_pt_stub.c
> new file mode 100644
> index 0000000000..2d8cac8d54
> --- /dev/null
> +++ b/hw/xen/xen_pt_stub.c
> @@ -0,0 +1,22 @@
> +/*
> + * Copyright (C) 2020       Citrix Systems UK Ltd.
> + *
> + * This work is licensed under the terms of the GNU GPL, version 2 or later.
> + * See the COPYING file in the top-level directory.
> + */
> +
> +#include "qemu/osdep.h"
> +#include "hw/xen/xen_pt.h"
> +#include "qapi/error.h"
> +
> +bool xen_igd_gfx_pt_enabled(void)
> +{
> +    return false;
> +}
> +
> +void xen_igd_gfx_pt_set(bool value, Error **errp)
> +{
> +    if (value) {
> +        error_setg(errp, "Xen PCI passthrough support not built in");
> +    }
> +}
> -- 
> 2.26.2
> 
> 

-- 
Anthony PERARD
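The arrangement being backported is the usual accessor-plus-stub pattern for a compile-time optional feature: common code calls the accessors unconditionally, and the build links either the real implementation or a stub that rejects enabling the feature. A standalone sketch under stated assumptions — `CONFIG_FEATURE` stands in for CONFIG_XEN_PCI_PASSTHROUGH, and none of these names are QEMU's actual API. Since `CONFIG_FEATURE` is not defined here, the stub branch is what compiles:

```c
#include <assert.h>
#include <stdbool.h>

#ifdef CONFIG_FEATURE
/* Real implementation: the flag is a private static, reachable only
 * through the accessors (mirroring the now-static has_igd_gfx_passthru). */
static bool feature_enabled_flag;

bool feature_enabled(void) { return feature_enabled_flag; }

int feature_set(bool value)
{
    feature_enabled_flag = value;
    return 0;
}
#else
/* Stub implementation: the feature is always off, and trying to turn it
 * on is reported as an error (as xen_igd_gfx_pt_set() does via errp). */
bool feature_enabled(void) { return false; }

int feature_set(bool value)
{
    return value ? -1 : 0;  /* enabling a compiled-out feature fails */
}
#endif
```

Because callers only ever see the two accessor functions, no `#ifdef` leaks into common code, which is exactly what made the original direct variable access a build problem.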


From xen-devel-bounces@lists.xenproject.org Thu Jul 02 14:31:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jul 2020 14:31:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jr0FM-00028c-PM; Thu, 02 Jul 2020 14:31:28 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=gpFn=AN=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jr0FL-00028X-RI
 for xen-devel@lists.xenproject.org; Thu, 02 Jul 2020 14:31:27 +0000
X-Inumbo-ID: b57ccc1a-bc70-11ea-883d-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b57ccc1a-bc70-11ea-883d-12813bfff9fa;
 Thu, 02 Jul 2020 14:31:25 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=YR4Xopsj3V/FSwV7yfqdS6ETcxQG7fnxAzc4VfHKV/M=; b=gHG9tkVCn5lGFm+cFWQ2fbq4et
 W9mS+2LYQ4broDtWtzvtLl1Uuxl6BVe3kV1MIj5bLrtF/oFQHyX0D80jE9sT8ybba1f7XfGYIIgWT
 SUUtkRigUFgy71S+fwsCpFErDZj8aN9q8oW6qHq6rW0D0SYW6KyJiQX2VINPufNxmLGY=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jr0FF-0003GU-54; Thu, 02 Jul 2020 14:31:21 +0000
Received: from 54-240-197-238.amazon.com ([54.240.197.238]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jr0FE-0005Jt-Rl; Thu, 02 Jul 2020 14:31:20 +0000
Subject: Re: [PATCH v4 02/10] x86/vmx: add IPT cpu feature
To: Jan Beulich <jbeulich@suse.com>
References: <cover.1593519420.git.michal.leszczynski@cert.pl>
 <7302dbfcd07dfaad9e50bb772673e588fcc4de67.1593519420.git.michal.leszczynski@cert.pl>
 <85416128-a334-4640-7504-0865f715b3a2@xen.org>
 <48c59780-bedb-ff08-723c-be14a9b73e6b@citrix.com>
 <f2aa4cf9-0689-82c0-cb6c-55d55ecbd5c1@xen.org>
 <a9a33ba1-b121-5e6f-b74c-7d2a60c84b13@xen.org>
 <a7187837-495f-56a5-a8d0-635a53ac9234@citrix.com>
 <95154add-164a-5450-28e1-f24611e1642f@xen.org>
 <df0aa9b4-d7f7-f909-e833-3f2f3040a2dc@citrix.com>
 <de298379-43c3-648f-aade-9efc7f761970@xen.org>
 <8df16863-2207-6747-cf17-f88124927ddb@suse.com>
 <cf41855b-9e5e-13f2-9ab0-04b98f8b3cdd@xen.org>
 <75066926-9fe4-1e51-707c-c77c4e6d63ae@suse.com>
 <3fa0c3e7-9243-b1bb-d6ad-a3bd21437782@xen.org>
 <0e02a9b5-ba7a-43a2-3369-a4410f216ddb@suse.com>
 <9a3f4d58-e5ad-c7a1-6c5f-42aa92101ca1@xen.org>
 <d0165fc3-fb05-2e49-eff3-e45a674b00e1@suse.com>
 <7f915146-6566-e5a7-14d2-cb2319838562@xen.org>
 <7ac383c2-0264-cc75-a85b-13c1fdfb0bd6@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <dadeeedd-a9e1-d5f4-4754-8da3f065fd44@xen.org>
Date: Thu, 2 Jul 2020 15:31:18 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:68.0)
 Gecko/20100101 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <7ac383c2-0264-cc75-a85b-13c1fdfb0bd6@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin Tian <kevin.tian@intel.com>,
 Stefano Stabellini <sstabellini@kernel.org>, tamas.lengyel@intel.com,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Micha=c5=82_Leszczy=c5=84ski?= <michal.leszczynski@cert.pl>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Jun Nakajima <jun.nakajima@intel.com>, xen-devel@lists.xenproject.org,
 luwei.kang@intel.com, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>



On 02/07/2020 15:17, Jan Beulich wrote:
> On 02.07.2020 16:14, Julien Grall wrote:
>> On 02/07/2020 14:30, Jan Beulich wrote:
>>> On 02.07.2020 11:57, Julien Grall wrote:
>>>> On 02/07/2020 10:18, Jan Beulich wrote:
>>>>> On 02.07.2020 10:54, Julien Grall wrote:
>>>>>> On 02/07/2020 09:50, Jan Beulich wrote:
>>>>>>> On 02.07.2020 10:42, Julien Grall wrote:
>>>>>>>> On 02/07/2020 09:29, Jan Beulich wrote:
>>>> Another way to do it would be for the toolstack to do the mapping. At
>>>> which point, you still need a hypercall to do the mapping (probably
>>>> the acquire hypercall).
>>>
>>> There may not be any mapping to do in such a contrived, fixed-range
>>> environment. This scenario was specifically to demonstrate that the
>>> way the mapping gets done may be arch-specific (here: a no-op)
>>> despite the allocation not being so.
>> You are arguing about extreme cases, which I don't think is really
>> helpful here. Yes, if you want to map at a fixed address in a guest you
>> may not need the acquire hypercall. But in most other cases (such as
>> for the tools) you will need it.
>>
>> So what's the problem with requesting to have the acquire hypercall
>> implemented in common code?
> 
> Didn't we start out by you asking that there be as little common code
> as possible for the time being?

Well, as I said, I am not in favor of having the allocation in common
code, but if you want to keep it, then you also want to implement
map/unmap in common code ([1], [2]).

> I have no issue with putting the
> acquire implementation there ...
This was definitely not clear, given how you argued from extreme cases...

Cheers,

[1] <9a3f4d58-e5ad-c7a1-6c5f-42aa92101ca1@xen.org>
[2] <cf41855b-9e5e-13f2-9ab0-04b98f8b3cdd@xen.org>

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Jul 02 14:38:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jul 2020 14:38:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jr0MM-0002Lo-Ii; Thu, 02 Jul 2020 14:38:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CtVa=AN=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jr0ML-0002LO-4B
 for xen-devel@lists.xenproject.org; Thu, 02 Jul 2020 14:38:41 +0000
X-Inumbo-ID: b4dec546-bc71-11ea-b7bb-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b4dec546-bc71-11ea-b7bb-bc764e2007e4;
 Thu, 02 Jul 2020 14:38:34 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=KYysRHyAwfsQQI78BNdSSq7J8mQ+itkWnO7ZPFc2Nv4=; b=2KXbG2h0XmWGoXRDioizc4t+L
 q+wsBbwWvOeyVLuh4czVB/xnEXKHYbT9Yx/uN7429SSkX+HjpmHrvUT0692AQfzKQSgPpfauKWOpb
 zuPyukZbzZCTrM9FqnhyrR3jHnt9e9dRinpOmhqzhbqy1XTfFLq4Sa5CMZTRY+acbVgBo=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jr0MD-0003OC-HR; Thu, 02 Jul 2020 14:38:33 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jr0MD-0005SF-4b; Thu, 02 Jul 2020 14:38:33 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jr0MD-0003Kq-3z; Thu, 02 Jul 2020 14:38:33 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151527-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 151527: regressions - FAIL
X-Osstest-Failures: libvirt:test-amd64-amd64-libvirt-pair:guest-migrate/src_host/dst_host:fail:regression
 libvirt:test-amd64-i386-libvirt-pair:guest-migrate/src_host/dst_host:fail:regression
 libvirt:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 libvirt:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 libvirt:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 libvirt:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 libvirt:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 libvirt:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 libvirt:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 libvirt:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 libvirt:test-arm64-arm64-libvirt:migrate-support-check:fail:nonblocking
 libvirt:test-arm64-arm64-libvirt:saverestore-support-check:fail:nonblocking
 libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 libvirt:test-arm64-arm64-libvirt-qcow2:migrate-support-check:fail:nonblocking
 libvirt:test-arm64-arm64-libvirt-qcow2:saverestore-support-check:fail:nonblocking
 libvirt:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 libvirt:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 libvirt:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This: libvirt=7fa7f7eeb6e969e002845928e155914da2fc8cd0
X-Osstest-Versions-That: libvirt=e7998ebeaf15e4e8825be0dd97aa1316f194f00d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 02 Jul 2020 14:38:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151527 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151527/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-pair 22 guest-migrate/src_host/dst_host fail REGR. vs. 151496
 test-amd64-i386-libvirt-pair 22 guest-migrate/src_host/dst_host fail REGR. vs. 151496

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 151496
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 151496
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt     14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-qcow2 12 migrate-support-check        fail never pass
 test-arm64-arm64-libvirt-qcow2 13 saverestore-support-check    fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass

version targeted for testing:
 libvirt              7fa7f7eeb6e969e002845928e155914da2fc8cd0
baseline version:
 libvirt              e7998ebeaf15e4e8825be0dd97aa1316f194f00d

Last test of basis   151496  2020-07-01 04:23:43 Z    1 days
Testing same since   151527  2020-07-02 04:29:15 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-libvirt                                     pass    
 test-arm64-arm64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-i386-libvirt-pair                                 fail    
 test-arm64-arm64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 7fa7f7eeb6e969e002845928e155914da2fc8cd0
Author: Daniel P. Berrangé <berrange@redhat.com>
Date:   Wed Jul 1 17:36:51 2020 +0100

    util: add access check for hooks to fix running as non-root
    
    Since feb83c1e710b9ea8044a89346f4868d03b31b0f1 libvirtd will abort on
    startup if run as non-root
    
      2020-07-01 16:30:30.738+0000: 1647444: error : virDirOpenInternal:2869 : cannot open directory '/etc/libvirt/hooks/daemon.d': Permission denied
    
    The root cause flaw is that non-root libvirtd is using /etc/libvirt for
    its hooks. Traditionally that has been harmless though since we checked
    whether we could access the hook file and degraded gracefully. We need
    the same access check for iterating over the hook directory.
    
    Long term we should make it possible to have an unprivileged hook dir
    under $HOME.
    
    Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
    Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>

commit c3fa17cd9a158f38416a80af3e0f712bf96ebf38
Author: Michal Privoznik <mprivozn@redhat.com>
Date:   Wed Jul 1 09:47:48 2020 +0200

    virnettlshelpers: Update private key
    
    With the recent update of Fedora rawhide I've noticed
    virnettlssessiontest and virnettlscontexttest failing with:
    
      Our own certificate servercertreq-ctx.pem failed validation
      against cacertreq-ctx.pem: The certificate uses an insecure
      algorithm
    
    This is a result of Fedora's changes to support strong crypto [1].
    RSA with a 1024-bit key is viewed as legacy and thus insecure, so
    generate a new private key. Moreover, switch to EC, which is not
    only shorter but also deprecated less often than RSA. The key was
    generated using the following command:
    
      openssl genpkey --outform PEM --out privkey.pem \
      --algorithm EC --pkeyopt ec_paramgen_curve:P-384 \
      --pkeyopt ec_param_enc:named_curve
    
    1: https://fedoraproject.org/wiki/Changes/StrongCryptoSettings2
    
    Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
    Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>

commit d57f361083c5053267e6d9380c1afe2abfcae8ac
Author: Daniel Henrique Barboza <danielhb413@gmail.com>
Date:   Tue Jun 30 16:43:43 2020 -0300

    docs: Fix 'Offline migration' description
    
    'transfers inactive the definition of a domain' seems odd.
    
    Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
    Reviewed-by: Andrea Bolognani <abologna@redhat.com>


From xen-devel-bounces@lists.xenproject.org Thu Jul 02 14:49:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jul 2020 14:49:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jr0WL-0003Ef-IK; Thu, 02 Jul 2020 14:49:01 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=OOhv=AN=gmail.com=brgerst@srs-us1.protection.inumbo.net>)
 id 1jr0WK-0003Ea-AI
 for xen-devel@lists.xenproject.org; Thu, 02 Jul 2020 14:49:00 +0000
X-Inumbo-ID: 29cdc0e0-bc73-11ea-8496-bc764e2007e4
Received: from mail-io1-xd43.google.com (unknown [2607:f8b0:4864:20::d43])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 29cdc0e0-bc73-11ea-8496-bc764e2007e4;
 Thu, 02 Jul 2020 14:48:59 +0000 (UTC)
Received: by mail-io1-xd43.google.com with SMTP id y2so29170200ioy.3
 for <xen-devel@lists.xenproject.org>; Thu, 02 Jul 2020 07:48:59 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc; bh=v4VXI9uP014tYHa4Tj77JYdAu3duCFmr1FkN+vT8NsI=;
 b=FoyObNL92iyVLE061Hvbm+xq+zoeDIpNkTcIUjGaXkrsBOkS1ywIpFNY/kdIw7OPms
 EWzPxJOpa7MJzcQI1tMfv/vtfU3yNUt4ZETFGw4iwn/fsT9oFyCTnaKIEPlG/LQW7v/F
 cuJdCxfF3FjsQCpeWWNc/QExbhhNa5rnBjX3vciDAAGLevquONjrb00s+NcBsQlgv81f
 e2GGhZ5MMK992CtMz1m3IsNTvXT5qf752JhUqT9DMnFntgzsoYNJScekoqqQ9tlo3OqY
 saoknjuOJ2JQx0GcMqJd6BehD+PelFtZ31gCN2UQkckHDMAyNoGZFxYKfdmuvXR7NZJY
 oPAw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc;
 bh=v4VXI9uP014tYHa4Tj77JYdAu3duCFmr1FkN+vT8NsI=;
 b=IdExb4T6K3mpXyU8hFKRaspJBPmYLndV31/EBm4lJ9oBohKtNzlPP075XYN4617P1D
 ymdaZ7Jvvxpms/7YLqYLntOhKxSz5PjhLY9jpcKxbgEqSw0ukMsdQIWtWjgYOUaKq4vF
 Ics3MRCZ1XP42/WhGjnkNO9SsmGu7DL1vhFK+Lb4ZHOhBaRTSku+23JgJeJ9gMlARgN6
 /PdDXEMosFqW2DZ4n48fNWvEaXn2yY1JL796JQ+f7JsvtmbtZfL1gRma+6+OpRiEFxu/
 3I/jpdPQsvq/XRMcG0hWnVuD8IhyuinGjIDDbgP5m2GR3Iyqn/giMfnDvN7q6idmv1kj
 VRBg==
X-Gm-Message-State: AOAM532qtf34vJW6CkjiCmW65cmIoePXfE6S/9JuO0oBjj5ZHyYU7uvU
 SGDmZkqnGTbkNT9kZ3FtckeWvRjBJgMPibe64w==
X-Google-Smtp-Source: ABdhPJyVQXzx0lAtmUN3BtC8Tkcr13el8baae8+TSIqX/1DCH6FcNUO6wM89BF5N7w1XA1rAyO0/vpSkh2gr5Elwbuo=
X-Received: by 2002:a5d:849a:: with SMTP id t26mr7826768iom.22.1593701339267; 
 Thu, 02 Jul 2020 07:48:59 -0700 (PDT)
MIME-Version: 1.0
References: <20200701110650.16172-1-jgross@suse.com>
In-Reply-To: <20200701110650.16172-1-jgross@suse.com>
From: Brian Gerst <brgerst@gmail.com>
Date: Thu, 2 Jul 2020 10:48:48 -0400
Message-ID: <CAMzpN2hvK2T7Qje51MPjMyTggxT7_=EFnt7gAmJEa1Zq+t3LtA@mail.gmail.com>
Subject: Re: [PATCH v2 0/4] Remove 32-bit Xen PV guest support
To: Juergen Gross <jgross@suse.com>
Content-Type: text/plain; charset="UTF-8"
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Deep Shah <sdeep@vmware.com>,
 "VMware, Inc." <pv-drivers@vmware.com>,
 the arch/x86 maintainers <x86@kernel.org>,
 Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
 Linux Virtualization <virtualization@lists.linux-foundation.org>,
 Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
 Andy Lutomirski <luto@kernel.org>, "H. Peter Anvin" <hpa@zytor.com>,
 xen-devel <xen-devel@lists.xenproject.org>,
 Thomas Gleixner <tglx@linutronix.de>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Wed, Jul 1, 2020 at 7:07 AM Juergen Gross <jgross@suse.com> wrote:
>
> The long-term plan has been to replace Xen PV guests with PVH. The first
> victims of that plan are now 32-bit PV guests, as those are only rarely
> used these days. Xen on x86 requires 64-bit support, and with Grub2 now
> officially supporting PVH since version 2.04 there is no need to keep
> 32-bit PV guest support alive in the Linux kernel. Additionally, Meltdown
> mitigation is not available in a kernel running as a 32-bit PV guest, so
> dropping this mode makes sense from a security point of view, too.

One thing that you missed is removing VDSO_NOTE_NONEGSEG_BIT from
vdso32/note.S.  With that removed there is no difference from the
64-bit version.

Otherwise this series looks good to me.
--
Brian Gerst


From xen-devel-bounces@lists.xenproject.org Thu Jul 02 14:58:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jul 2020 14:58:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jr0fV-000470-EX; Thu, 02 Jul 2020 14:58:29 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Mrbb=AN=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1jr0fT-00046v-Kh
 for xen-devel@lists.xenproject.org; Thu, 02 Jul 2020 14:58:27 +0000
X-Inumbo-ID: 7b642416-bc74-11ea-bb8b-bc764e2007e4
Received: from mail-wm1-x341.google.com (unknown [2a00:1450:4864:20::341])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7b642416-bc74-11ea-bb8b-bc764e2007e4;
 Thu, 02 Jul 2020 14:58:26 +0000 (UTC)
Received: by mail-wm1-x341.google.com with SMTP id 17so28323432wmo.1
 for <xen-devel@lists.xenproject.org>; Thu, 02 Jul 2020 07:58:26 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
 :mime-version:content-transfer-encoding:thread-index
 :content-language;
 bh=T4F0H2VU/s4RezHwUWV4RCFlQyqVluB7IEU0w1AzBn8=;
 b=MIMI034ooObgfsFTG3w3Jz4ZmxsS3Tgr59tT+D0k/xKyuvXd2D9ujipi4EG8ptnr8V
 rHEbyHcn890re8tdz2OjbYs6kdXsaPC0tsDiBilrl/wd5dQ+q7+kPz0pp7Ji4pHorPNy
 FUZpdEs/ZKamM42pGo6dv//2+GLKs63UQ+TYzsCn12NL2TA1im3xRSlZ/e7zGrvEl22l
 Tyh0BWtGWyJs4PQQGt+XuwA+LkS7pVpIKOn8fj9hYPKVQqxVsOyEZHNLPCzttt+HL+PT
 rGqDiN++xKFZdIXf0CnFqQqBNeJsgVA/NeBf4dZWkz22rCLwbwzH1+StXiuzEQT+TgXF
 58ig==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
 :subject:date:message-id:mime-version:content-transfer-encoding
 :thread-index:content-language;
 bh=T4F0H2VU/s4RezHwUWV4RCFlQyqVluB7IEU0w1AzBn8=;
 b=uZvKQdpbScnS77mPsbDOzR2JiPCABbERxr7vBpAvl5+nNZAV48CGTpPThyxE260eBb
 yjeeFDqzDK9HPv5jAv8iUmIvJrevBtPZ5mCgQADqgOM/QL8KUJo9tMPcmUcFJD39hfms
 oB4qrhWQerAj6h9JaKOVJXbkBF3kYrqLQtNIV6D18SDnjhhk7SwZn9Tavl193bK5LdT5
 XsB/ZHscTUUBy/ho4xPWTHkeUEKBHpFRTQ5pnx8z1i+/oUjzs9NdcueISeVqfpHQAcsm
 rC8F1yMVwsOCnQCPaQLLbuk5IQXOGzK75Wswq3GrVZMT5pN2s+KZ1SZpebtszOawwRX8
 OoPA==
X-Gm-Message-State: AOAM530RuY/SASghjjmmJ4KywzvLzNBVWLIAl1CnBjoYgJeXXmXJmPCC
 2B1kwI20fNckY+2T2dvhaAk=
X-Google-Smtp-Source: ABdhPJwUrvl47VbP0f6r7yOTUAH3GoGj5xbmWhtNFciNa2UmNtA4YqHYfBRB/hMh2zEEq9TtXau8JA==
X-Received: by 2002:a1c:2e0e:: with SMTP id u14mr31902659wmu.55.1593701905660; 
 Thu, 02 Jul 2020 07:58:25 -0700 (PDT)
Received: from CBGR90WXYV0 (54-240-197-231.amazon.com. [54.240.197.231])
 by smtp.gmail.com with ESMTPSA id r67sm11391627wmr.9.2020.07.02.07.58.24
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Thu, 02 Jul 2020 07:58:25 -0700 (PDT)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
To: "'Anthony PERARD'" <anthony.perard@citrix.com>,
 "'Paul Durrant'" <paul.durrant@citrix.com>
References: <20200604183141.32044-1-pbonzini@redhat.com>
 <20200702142544.GA2157@perard.uk.xensource.com>
In-Reply-To: <20200702142544.GA2157@perard.uk.xensource.com>
Subject: RE: [PATCH v5] xen: fix build without pci passthrough
Date: Thu, 2 Jul 2020 15:58:24 +0100
Message-ID: <002301d65081$3c7e4600$b57ad200$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
X-Mailer: Microsoft Outlook 16.0
Thread-Index: AQIUWz2X2P52aIvTbn1t59d871MHUAHbluYKqGlue5A=
Content-Language: en-gb
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Reply-To: paul@xen.org
Cc: xen-devel@lists.xenproject.org,
 =?iso-8859-1?Q?'Roger_Pau_Monn=E9'?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> -----Original Message-----
> From: Xen-devel <xen-devel-bounces@lists.xenproject.org> On Behalf Of Anthony PERARD
> Sent: 02 July 2020 15:26
> To: Paul Durrant <paul.durrant@citrix.com>

Emails to this address are probably going to /dev/null by now :-)

> Cc: xen-devel@lists.xenproject.org; Roger Pau Monné <roger.pau@citrix.com>
> Subject: Re: [PATCH v5] xen: fix build without pci passthrough
>
> On Thu, Jun 04, 2020 at 02:31:41PM -0400, Paolo Bonzini wrote:
> > From: Anthony PERARD <anthony.perard@citrix.com>
> >
> > Xen PCI passthrough support may not be available and thus the global
> > variable "has_igd_gfx_passthru" might be compiled out. Common code
> > should not access it in that case.
> >
> > Unfortunately, we can't use CONFIG_XEN_PCI_PASSTHROUGH directly in
> > xen-common.c, so this patch instead moves access to the
> > has_igd_gfx_passthru variable behind functions, and those functions
> > are also implemented as stubs. The stubs will be used when QEMU is
> > built without passthrough support.
> >
> > Now, when one wants to enable igd-passthru via the -machine
> > property, they will get an error message if QEMU is built without
> > passthrough support.
> >
> > Fixes: 46472d82322d0 ('xen: convert "-machine igd-passthru" to an accelerator property')
> > Reported-by: Roger Pau Monné <roger.pau@citrix.com>
> > Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
> > Message-Id: <20200603160442.3151170-1-anthony.perard@citrix.com>
> > Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
>
> Hi Paul,
>
> Can I backport this patch to qemu-xen-4.14? It allows building QEMU
> without pci passthrough support, which seems to be important for
> FreeBSD as PT with QEMU is only available on Linux.
>
> (There's a fix to the patch that I would backport as well: "xen:
> Actually fix build without passthrough")
>
> Thanks.

I have no objection to making this fix for 4.14.

Cheers,

  Paul

>=20
> > ---
> >  accel/xen/xen-all.c  |  4 ++--
> >  hw/Makefile.objs     |  2 +-
> >  hw/i386/pc_piix.c    |  2 +-
> >  hw/xen/Makefile.objs |  3 ++-
> >  hw/xen/xen_pt.c      | 12 +++++++++++-
> >  hw/xen/xen_pt.h      |  6 ++++--
> >  hw/xen/xen_pt_stub.c | 22 ++++++++++++++++++++++
> >  7 files changed, 43 insertions(+), 8 deletions(-)
> >  create mode 100644 hw/xen/xen_pt_stub.c
> >
> > diff --git a/accel/xen/xen-all.c b/accel/xen/xen-all.c
> > index f3edc65ec9..0c24d4b191 100644
> > --- a/accel/xen/xen-all.c
> > +++ b/accel/xen/xen-all.c
> > @@ -137,12 +137,12 @@ static void xen_change_state_handler(void =
*opaque, int running,
> >
> >  static bool xen_get_igd_gfx_passthru(Object *obj, Error **errp)
> >  {
> > -    return has_igd_gfx_passthru;
> > +    return xen_igd_gfx_pt_enabled();
> >  }
> >
> >  static void xen_set_igd_gfx_passthru(Object *obj, bool value, Error =
**errp)
> >  {
> > -    has_igd_gfx_passthru =3D value;
> > +    xen_igd_gfx_pt_set(value, errp);
> >  }
> >
> >  static void xen_setup_post(MachineState *ms, AccelState *accel)
> > diff --git a/hw/Makefile.objs b/hw/Makefile.objs
> > index 660e2b4373..4cbe5e4e57 100644
> > --- a/hw/Makefile.objs
> > +++ b/hw/Makefile.objs
> > @@ -35,7 +35,7 @@ devices-dirs-y +=3D usb/
> >  devices-dirs-$(CONFIG_VFIO) +=3D vfio/
> >  devices-dirs-y +=3D virtio/
> >  devices-dirs-y +=3D watchdog/
> > -devices-dirs-y +=3D xen/
> > +devices-dirs-$(CONFIG_XEN) +=3D xen/
> >  devices-dirs-$(CONFIG_MEM_DEVICE) +=3D mem/
> >  devices-dirs-$(CONFIG_NUBUS) +=3D nubus/
> >  devices-dirs-y +=3D semihosting/
> > diff --git a/hw/i386/pc_piix.c b/hw/i386/pc_piix.c
> > index eea964e72b..054d3aa9f7 100644
> > --- a/hw/i386/pc_piix.c
> > +++ b/hw/i386/pc_piix.c
> > @@ -377,7 +377,7 @@ static void pc_init_isa(MachineState *machine)
> >  #ifdef CONFIG_XEN
> >  static void pc_xen_hvm_init_pci(MachineState *machine)
> >  {
> > -    const char *pci_type =3D has_igd_gfx_passthru ?
> > +    const char *pci_type =3D xen_igd_gfx_pt_enabled() ?
> >                  TYPE_IGD_PASSTHROUGH_I440FX_PCI_DEVICE : =
TYPE_I440FX_PCI_DEVICE;
> >
> >      pc_init1(machine,
> > diff --git a/hw/xen/Makefile.objs b/hw/xen/Makefile.objs
> > index 340b2c5096..3fc715e595 100644
> > --- a/hw/xen/Makefile.objs
> > +++ b/hw/xen/Makefile.objs
> > @@ -1,6 +1,7 @@
> >  # xen backend driver support
> > -common-obj-$(CONFIG_XEN) += xen-legacy-backend.o xen_devconfig.o xen_pvdev.o xen-bus.o xen-bus-helper.o xen-backend.o
> > +common-obj-y += xen-legacy-backend.o xen_devconfig.o xen_pvdev.o xen-bus.o xen-bus-helper.o xen-backend.o
> >
> >  obj-$(CONFIG_XEN_PCI_PASSTHROUGH) += xen-host-pci-device.o
> >  obj-$(CONFIG_XEN_PCI_PASSTHROUGH) += xen_pt.o xen_pt_config_init.o xen_pt_graphics.o xen_pt_msi.o
> >  obj-$(CONFIG_XEN_PCI_PASSTHROUGH) += xen_pt_load_rom.o
> > +obj-$(call $(lnot, $(CONFIG_XEN_PCI_PASSTHROUGH))) += xen_pt_stub.o
> > diff --git a/hw/xen/xen_pt.c b/hw/xen/xen_pt.c
> > index 81d5ad8da7..ab84443d5e 100644
> > --- a/hw/xen/xen_pt.c
> > +++ b/hw/xen/xen_pt.c
> > @@ -65,7 +65,17 @@
> >  #include "qemu/range.h"
> >  #include "exec/address-spaces.h"
> >
> > -bool has_igd_gfx_passthru;
> > +static bool has_igd_gfx_passthru;
> > +
> > +bool xen_igd_gfx_pt_enabled(void)
> > +{
> > +    return has_igd_gfx_passthru;
> > +}
> > +
> > +void xen_igd_gfx_pt_set(bool value, Error **errp)
> > +{
> > +    has_igd_gfx_passthru = value;
> > +}
> >
> >  #define XEN_PT_NR_IRQS (256)
> >  static uint8_t xen_pt_mapped_machine_irq[XEN_PT_NR_IRQS] = {0};
> > diff --git a/hw/xen/xen_pt.h b/hw/xen/xen_pt.h
> > index 179775db7b..6e9cec95f3 100644
> > --- a/hw/xen/xen_pt.h
> > +++ b/hw/xen/xen_pt.h
> > @@ -5,6 +5,9 @@
> >  #include "hw/pci/pci.h"
> >  #include "xen-host-pci-device.h"
> >
> > +bool xen_igd_gfx_pt_enabled(void);
> > +void xen_igd_gfx_pt_set(bool value, Error **errp);
> > +
> >  void xen_pt_log(const PCIDevice *d, const char *f, ...) GCC_FMT_ATTR(2, 3);
> >
> >  #define XEN_PT_ERR(d, _f, _a...) xen_pt_log(d, "%s: Error: "_f, __func__, ##_a)
> > @@ -322,10 +325,9 @@ extern void *pci_assign_dev_load_option_rom(PCIDevice *dev,
> >                                              unsigned int domain,
> >                                              unsigned int bus, unsigned int slot,
> >                                              unsigned int function);
> > -extern bool has_igd_gfx_passthru;
> >  static inline bool is_igd_vga_passthrough(XenHostPCIDevice *dev)
> >  {
> > -    return (has_igd_gfx_passthru
> > +    return (xen_igd_gfx_pt_enabled()
> >              && ((dev->class_code >> 0x8) == PCI_CLASS_DISPLAY_VGA));
> >  }
> >  int xen_pt_register_vga_regions(XenHostPCIDevice *dev);
> > diff --git a/hw/xen/xen_pt_stub.c b/hw/xen/xen_pt_stub.c
> > new file mode 100644
> > index 0000000000..2d8cac8d54
> > --- /dev/null
> > +++ b/hw/xen/xen_pt_stub.c
> > @@ -0,0 +1,22 @@
> > +/*
> > + * Copyright (C) 2020       Citrix Systems UK Ltd.
> > + *
> > + * This work is licensed under the terms of the GNU GPL, version 2 or later.
> > + * See the COPYING file in the top-level directory.
> > + */
> > +
> > +#include "qemu/osdep.h"
> > +#include "hw/xen/xen_pt.h"
> > +#include "qapi/error.h"
> > +
> > +bool xen_igd_gfx_pt_enabled(void)
> > +{
> > +    return false;
> > +}
> > +
> > +void xen_igd_gfx_pt_set(bool value, Error **errp)
> > +{
> > +    if (value) {
> > +        error_setg(errp, "Xen PCI passthrough support not built in");
> > +    }
> > +}
> > --
> > 2.26.2
> >
> >
>
> --
> Anthony PERARD




From xen-devel-bounces@lists.xenproject.org Thu Jul 02 15:11:45 2020
Subject: Re: [PATCH v4 10/10] tools/proctrace: add proctrace tool
To: Michał Leszczyński <michal.leszczynski@cert.pl>,
 <xen-devel@lists.xenproject.org>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <241285fc-f8be-575f-8b2a-f5aa44b77d47@citrix.com>
Date: Thu, 2 Jul 2020 16:10:57 +0100
In-Reply-To: <0ab003238e4e666d3847024b8917dbc11c40fecb.1593519420.git.michal.leszczynski@cert.pl>
Cc: luwei.kang@intel.com, tamas.lengyel@intel.com,
 Ian Jackson <ian.jackson@eu.citrix.com>, Wei Liu <wl@xen.org>

On 30/06/2020 13:33, Michał Leszczyński wrote:
> diff --git a/tools/proctrace/COPYING b/tools/proctrace/COPYING
> new file mode 100644
> index 0000000000..c0a841112c
> --- /dev/null
> +++ b/tools/proctrace/COPYING

The top-level COPYING file is GPL2.  There shouldn't be any need to
include a second copy here.

> diff --git a/tools/proctrace/Makefile b/tools/proctrace/Makefile
> new file mode 100644
> index 0000000000..2983c477fe
> --- /dev/null
> +++ b/tools/proctrace/Makefile
> @@ -0,0 +1,48 @@
> +# Copyright (C) CERT Polska - NASK PIB
> +# Author: Michał Leszczyński <michal.leszczynski@cert.pl>
> +#
> +# This program is free software; you can redistribute it and/or modify
> +# it under the terms of the GNU General Public License as published by
> +# the Free Software Foundation; under version 2 of the License.
> +#
> +# This program is distributed in the hope that it will be useful,
> +# but WITHOUT ANY WARRANTY; without even the implied warranty of
> +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> +# GNU General Public License for more details.
> +
> +XEN_ROOT=$(CURDIR)/../..
> +include $(XEN_ROOT)/tools/Rules.mk
> +
> +CFLAGS  += -Werror
> +CFLAGS  += $(CFLAGS_libxenevtchn)
> +CFLAGS  += $(CFLAGS_libxenctrl)
> +LDLIBS  += $(LDLIBS_libxenctrl)
> +LDLIBS  += $(LDLIBS_libxenevtchn)
> +LDLIBS  += $(LDLIBS_libxenforeignmemory)
> +
> +.PHONY: all
> +all: build
> +
> +.PHONY: build
> +build: proctrace
> +
> +.PHONY: install
> +install: build
> +	$(INSTALL_DIR) $(DESTDIR)$(sbindir)
> +	$(INSTALL_PROG) proctrace $(DESTDIR)$(sbindir)/proctrace
> +
> +.PHONY: uninstall
> +uninstall:
> +	rm -f $(DESTDIR)$(sbindir)/proctrace
> +
> +.PHONY: clean
> +clean:
> +	$(RM) -f $(DEPS_RM)

You need to remove proctrace as well, for `make clean` to have the
intended semantics.
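
A sketch of a clean rule with the intended semantics (assuming proctrace and its object files are the only build products, which is not confirmed by the Makefile excerpt):

```make
.PHONY: clean
clean:
	$(RM) -f proctrace *.o $(DEPS_RM)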

> +
> +.PHONY: distclean
> +distclean: clean
> +
> +iptlive: iptlive.o Makefile
> +	$(CC) $(LDFLAGS) $< -o $@ $(LDLIBS) $(APPEND_LDFLAGS)

This rule looks to be totally unused?

> +#include <stdlib.h>
> +#include <stdio.h>
> +#include <sys/mman.h>
> +#include <signal.h>
> +
> +#include <xenctrl.h>
> +#include <xen/xen.h>
> +#include <xenforeignmemory.h>
> +
> +#define BUF_SIZE (16384 * XC_PAGE_SIZE)

This hardcodes the size of the buffer, which is configurable per VM.
Mapping the buffer fails when it is smaller than this.

It appears there is still an outstanding bug from the acquire_resource work
which never got fixed.  The guest_handle_is_null(xmar.frame_list) path
in Xen is supposed to report the size of the resource, not the size of
Xen's local buffer, so userspace can ask "how large is this resource".

I'll try and find some time to fix this and arrange for backports, but
the current behaviour is nonsense, and problematic for new users.
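
The intended "query then map" pattern can be illustrated with a self-contained mock; the struct, field, and function names below are simplified stand-ins of my own, not the real Xen interface:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Simplified stand-in for an acquire_resource request; the layout is
 * illustrative, not the real xen_mem_acquire_resource structure. */
struct mock_acquire_resource {
    uint64_t *frame_list;   /* NULL => caller asks "how large is this?" */
    uint32_t nr_frames;     /* in: frames requested; out: resource size */
};

#define RESOURCE_TOTAL_FRAMES 16384u  /* hypothetical per-VM buffer size */

/* Mock of the hypervisor side with the *intended* semantics: a NULL
 * frame list reports the size of the resource itself, not the size of
 * any internal scratch buffer. */
static int mock_acquire(struct mock_acquire_resource *op)
{
    if (op->frame_list == NULL) {
        op->nr_frames = RESOURCE_TOTAL_FRAMES;
        return 0;
    }
    /* A real mapping request must fit within the resource. */
    return op->nr_frames <= RESOURCE_TOTAL_FRAMES ? 0 : -1;
}
```

With these semantics, userspace would first call with a NULL frame list to learn the resource size, allocate a frame list of that length, and then map it, instead of hardcoding BUF_SIZE.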

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu Jul 02 15:45:47 2020
Subject: Re: [PATCH v2 0/4] Remove 32-bit Xen PV guest support
To: Brian Gerst <brgerst@gmail.com>
From: Jürgen Groß <jgross@suse.com>
Message-ID: <e277e875-c159-4281-e9b7-08c91882d1fb@suse.com>
Date: Thu, 2 Jul 2020 17:45:23 +0200
In-Reply-To: <CAMzpN2hvK2T7Qje51MPjMyTggxT7_=EFnt7gAmJEa1Zq+t3LtA@mail.gmail.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Deep Shah <sdeep@vmware.com>,
 "VMware, Inc." <pv-drivers@vmware.com>,
 the arch/x86 maintainers <x86@kernel.org>,
 Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
 Linux Virtualization <virtualization@lists.linux-foundation.org>,
 Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
 Andy Lutomirski <luto@kernel.org>, "H. Peter Anvin" <hpa@zytor.com>,
 xen-devel <xen-devel@lists.xenproject.org>,
 Thomas Gleixner <tglx@linutronix.de>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>

On 02.07.20 16:48, Brian Gerst wrote:
> On Wed, Jul 1, 2020 at 7:07 AM Juergen Gross <jgross@suse.com> wrote:
>>
>> The long-term plan has been to replace Xen PV guests by PVH. The first
>> victims of that plan are now 32-bit PV guests, as those are only rarely
>> used these days. Xen on x86 requires 64-bit support, and with Grub2
>> officially supporting PVH since version 2.04 there is no need to keep
>> 32-bit PV guest support alive in the Linux kernel. Additionally,
>> Meltdown mitigation is not available in a kernel running as a 32-bit PV
>> guest, so dropping this mode makes sense from a security point of view,
>> too.
> 
> One thing that you missed is removing VDSO_NOTE_NONEGSEG_BIT from
> vdso32/note.S.  With that removed there is no difference from the
> 64-bit version.

Oh, this means we can probably remove arch/x86/xen/vdso.h completely.

> 
> Otherwise this series looks good to me.

Thanks,


Juergen


From xen-devel-bounces@lists.xenproject.org Thu Jul 02 16:24:18 2020
Date: Thu, 2 Jul 2020 18:23:28 +0200 (CEST)
From: Michał Leszczyński <michal.leszczynski@cert.pl>
To: Roger Pau Monné <roger.pau@citrix.com>
Message-ID: <1505813895.18300396.1593707008144.JavaMail.zimbra@cert.pl>
In-Reply-To: <20200702090047.GX735@Air-de-Roger>
Subject: Re: [PATCH v4 03/10] tools/libxl: add vmtrace_pt_size parameter
Cc: Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 tamas lengyel <tamas.lengyel@intel.com>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, luwei kang <luwei.kang@intel.com>,
 Jan Beulich <jbeulich@suse.com>, Anthony PERARD <anthony.perard@citrix.com>,
 xen-devel@lists.xenproject.org

----- On 2 Jul 2020 at 11:00, Roger Pau Monné roger.pau@citrix.com wrote:

> On Tue, Jun 30, 2020 at 02:33:46PM +0200, Michał Leszczyński wrote:
>> diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
>> index 59bdc28c89..7b8289d436 100644
>> --- a/xen/include/public/domctl.h
>> +++ b/xen/include/public/domctl.h
>> @@ -92,6 +92,7 @@ struct xen_domctl_createdomain {
>>      uint32_t max_evtchn_port;
>>      int32_t max_grant_frames;
>>      int32_t max_maptrack_frames;
>> +    uint8_t vmtrace_pt_order;
>
> I've been thinking about this, and even though this is a domctl (so
> not a stable interface) we might want to consider using a size (or a
> number of pages) here rather than an order. IPT also supports
> TOPA mode (kind of a linked list of buffers) that would allow for
> sizes not rounded to order boundaries to be used, since then only each
> item in the linked list needs to be rounded to an order boundary, so
> you could for example use three 4K pages in TOPA mode AFAICT.
>
> Roger.

In previous versions it was "size", but it was requested to change it
to "order" in order to shrink the variable from uint64_t to
uint8_t, because there is limited space in the xen_domctl_createdomain
structure.
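
For reference, converting between the two encodings is straightforward; a minimal sketch, assuming 4K pages (the helper names are illustrative, not part of the proposed interface):

```c
#include <assert.h>
#include <stdint.h>

#define PT_PAGE_SHIFT 12  /* assuming 4K pages */

/* Smallest order whose page count covers nr_pages (rounds up). */
static uint8_t pages_to_order(uint64_t nr_pages)
{
    uint8_t order = 0;

    while ((UINT64_C(1) << order) < nr_pages)
        order++;
    return order;
}

/* Buffer size in bytes described by a given order. */
static uint64_t order_to_bytes(uint8_t order)
{
    return UINT64_C(1) << (order + PT_PAGE_SHIFT);
}
```

A size that is not a power-of-two number of pages cannot be represented exactly by an order, which is the limitation ToPA mode would lift.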

How should I proceed?

Best regards,
Micha=C5=82 Leszczy=C5=84ski
CERT Polska


From xen-devel-bounces@lists.xenproject.org Thu Jul 02 17:12:47 2020
Subject: Re: [Xen ARM64] Save coredump log when xen/dom0 crash on ARM64?
To: jinchen <jinchen1227@qq.com>, xen-devel <xen-devel@lists.xenproject.org>
From: Julien Grall <julien@xen.org>
Message-ID: <94415ba8-53de-c294-36f6-0290bfb0bc83@xen.org>
Date: Thu, 2 Jul 2020 18:12:24 +0100
In-Reply-To: <tencent_F424A8312298D36ED25612607EF4BC341B0A@qq.com>

Hello,

On 02/07/2020 02:41, jinchen wrote:
> Hello xen experts:
> 
> Is there any way to save xen and dom0 core dump log when xen or dom0 
> crash on ARM64 platform?

Usually the full crash stack trace (Xen and Dom0) should be output on the
Xen console.

> I find that kdump seems unable to run on the ARM64 platform?

We don't have support for kdump/kexec on Arm in Xen yet.

> Are there any patches or other way to achieve this goal?

I am not aware of any patches, but they would be welcomed.

As for other ways, it depends on what exactly you expect. Do you need more
than the stack trace?

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Jul 02 17:35:02 2020
Subject: Re: Kexec and libxenctrl.so
To: Wei Liu <wl@xen.org>, Julien Grall <jgrall@amazon.com>
From: Julien Grall <julien@xen.org>
Message-ID: <aa5ad259-5848-e8c4-61e8-6649bb65ece5@xen.org>
Date: Thu, 2 Jul 2020 18:34:48 +0100
In-Reply-To: <20200626110812.hxeoomagamkdceu7@liuwe-devbox-debian-v2>
Cc: "paul@xen.org" <paul@xen.org>,
 "andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>,
 daniel.kiper@oracle.com,
 "ian.jackson@eu.citrix.com" <ian.jackson@eu.citrix.com>,
 Anthony Perard <anthony.perard@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>

Hi Wei,

On 26/06/2020 12:08, Wei Liu wrote:
> On Thu, Jun 11, 2020 at 03:57:37PM +0100, Julien Grall wrote:
>> Hi all,
>>
>> kexec-tools has an option to dynamically load libxenctrl.so (not .so.4.x)
>> (see [1]).
>>
>> Given that the library has never been considered stable, this is probably
>> a disaster waiting to happen.
>>
>> Looking at the tree, kexec uses the following libxc functions:
>>     - xc_kexec_get_range()
>>     - xc_kexec_load()
>>     - xc_kexec_unload()
>>     - xc_kexec_status()
>>     - xc_kexec_exec()
>>     - xc_version()
>>     - xc_interface_open()
>>     - xc_interface_close()
>>     - xc_get_max_cpus()
>>     - xc_get_machine_memory_map()
>>
>> I think it is uncontroversial that we want a new stable library for all the
>> xc_kexec_* functions (maybe libxenexec)?
> 
> That sounds fine to me.
> 
> Looking at the list of functions, all the xc_kexec_* ones are probably
> already rather stable.

That's my understanding as well.

Although, we may want to rethink some of the hypercalls (such as
KEXEC_cmd_kexec_get_range) in the future, as they have different layouts
between 32-bit and 64-bit. Thankfully this wasn't exposed outside of
libxc, so it shouldn't be an issue to have a stable library.

> 
> For xc_interface_open / close, they are perhaps used only to obtain an
> xc handle such that it can be used to make hypercalls. Your new kexec
> library is going to expose its own handle with a xencall handle wrapped
> inside, so you can do away with an xc handle.

I already have a PoC for the new library. I had to tweak the list of
helpers a bit, as some used hypercall arguments directly. Below is the
proposed interface:

/* Callers who don't care don't need to #include <xentoollog.h> */
struct xentoollog_logger;

typedef struct xenkexec_handle xenkexec_handle;

typedef struct xenkexec_segments xenkexec_segments;

xenkexec_handle *xenkexec_open(struct xentoollog_logger *logger,
                                unsigned int open_flags);
int xenkexec_close(xenkexec_handle *khdl);

int xenkexec_exec(xenkexec_handle *khdl, int type);
int xenkexec_get_range(xenkexec_handle *khdl, int range, int nr,
                        uint64_t *size, uint64_t *start);
int xenkexec_load(xenkexec_handle *khdl, uint8_t type, uint16_t arch,
                   uint64_t entry_maddr, uint32_t nr_segments,
                   xenkexec_segments *segments);
int xenkexec_unload(xenkexec_handle *khdl, int type);
int xenkexec_status(xenkexec_handle *khdl, int type);

xenkexec_segments *xenkexec_allocate_segments(xenkexec_handle *khdl,
                                               unsigned int nr);
void xenkexec_free_segments(xenkexec_handle *khdl, xenkexec_segments *segs);

int xenkexec_update_segment(xenkexec_handle *khdl, xenkexec_segments *segs,
                             unsigned int idx, void *buffer, size_t buffer_size,
                             uint64_t dest_maddr, size_t dest_size);


> 
>>
>> However I am not entirely sure where to put the others.
>>
>> I am thinking to introduce libxensysctl for xc_get_max_cpus() as it is a
>> XEN_SYSCTL. We could possibly include xc_get_machine_memory_map() (despite
>> it is a XENMEM_).
>>
> 
> Introducing an libxensysctl before we stabilise sysctl interface seems
> wrong to me. We can bury the call inside libxenkexec itself for the time
> being.

That would work for me.

> 
>> For xc_version(), I am thinking to extend libxentoolcore to also include
>> "stable xen API".
>>
> 
> If you can do without an xc handle, do you still need to call
> xc_version?

Looking at kexec, xc_version() is used by crashdump to determine which 
architecture is used by Xen (in this case 32-bit x86 vs 64-bit x86).

This was introduced by commit:

commit cdbc9b011fe43407908632d842e3a39e495e48d9
Author: Ian Campbell <ian.campbell@xensource.com>
Date:   Fri Mar 16 10:10:24 2007 +0000

     Set crash dump ELF header e_machine field based on underlying 
hypervisor architecture.

     This is necessary when running Xen with a 64 bit hypervisor and 32 bit
     domain 0 since the CPU crash notes will be 64 bit.

     Detecting the hypervisor archiecture requires libxenctrl and 
therefore this
     support is optional and disabled by default.

     Signed-off-by: Ian Campbell <ian.campbell@xensource.com>
     Acked-by: Magnus Damm <magnus@valinux.co.jp>
     Signed-off-by: Simon Horman <horms@verge.net.au>

As we dropped support for 32-bit Xen quite a long time ago, we may be
able to remove the call to xc_version().
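
For reference, the architecture detection amounts to parsing the capabilities string (e.g. "xen-3.0-x86_64 xen-3.0-x86_32p ...") that XENVER_capabilities returns. A hedged sketch, with the helper name being mine and the constants matching <elf.h>:

```c
#include <string.h>

/* ELF e_machine values, as defined in <elf.h>. */
#define EM_386    3
#define EM_X86_64 62

/* Pick the crash-notes ELF machine type from a Xen capabilities
 * string. A 64-bit hypervisor advertises "x86_64" (alongside 32-bit
 * guest ABIs such as "x86_32p"), so check for it first. */
static int e_machine_from_caps(const char *caps)
{
    if (strstr(caps, "x86_64"))
        return EM_X86_64;
    if (strstr(caps, "x86_32"))
        return EM_386;
    return 0; /* EM_NONE: unknown architecture */
}
```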

Cheers,

-- 
Julien Grall


Received: from EX13MTAUEA002.ant.amazon.com (10.43.61.77) by
 EX13D08UEB002.ant.amazon.com (10.43.60.107) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Thu, 2 Jul 2020 18:21:25 +0000
Received: from dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com
 (172.22.96.68) by mail-relay.amazon.com (10.43.61.169) with Microsoft SMTP
 Server id 15.0.1497.2 via Frontend Transport; Thu, 2 Jul 2020 18:21:25 +0000
Received: by dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com (Postfix,
 from userid 4335130)
 id 896F240844; Thu,  2 Jul 2020 18:21:24 +0000 (UTC)
Date: Thu, 2 Jul 2020 18:21:24 +0000
From: Anchal Agarwal <anchalag@amazon.com>
To: <tglx@linutronix.de>, <mingo@redhat.com>, <bp@alien8.de>, <hpa@zytor.com>, 
 <x86@kernel.org>, <boris.ostrovsky@oracle.com>, <jgross@suse.com>,
 <linux-pm@vger.kernel.org>, <linux-mm@kvack.org>, <kamatam@amazon.com>,
 <sstabellini@kernel.org>, <konrad.wilk@oracle.com>, <roger.pau@citrix.com>,
 <axboe@kernel.dk>, <davem@davemloft.net>, <rjw@rjwysocki.net>,
 <len.brown@intel.com>, <pavel@ucw.cz>, <peterz@infradead.org>,
 <eduval@amazon.com>, <sblbir@amazon.com>, <anchalag@amazon.com>,
 <xen-devel@lists.xenproject.org>, <vkuznets@redhat.com>,
 <netdev@vger.kernel.org>, <linux-kernel@vger.kernel.org>,
 <dwmw@amazon.co.uk>, <benh@kernel.crashing.org>
Subject: [PATCH v2 00/11] Fix PM hibernation in Xen guests
Message-ID: <cover.1593665947.git.anchalag@amazon.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
User-Agent: Mutt/1.5.21 (2010-09-15)
Precedence: Bulk
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hello,
This series fixes PM hibernation for HVM guests running on the Xen
hypervisor. A running guest can now be hibernated and resumed successfully
at a later time. The fixes for PM hibernation are added to the block and
network device drivers, i.e. xen-blkfront and xen-netfront. Any other driver
that needs to add S4 support, if it does not have it already, can follow the
same method of introducing freeze/thaw/restore callbacks.
The patches have been tested against the upstream kernel and Xen 4.11. Large
scale testing has also been done on Xen based Amazon EC2 instances. All of
this testing involved running a memory-exhausting workload in the background.

Guest hibernation does not involve any support from the hypervisor, so the
guest has complete control over its state. Infrastructure restrictions on
saving guest state can be overcome by guest-initiated hibernation.

These patches were sent out as an RFC before and all the feedback has been
incorporated. The last revision, v1, can be found here:

[v1]: https://lkml.org/lkml/2020/5/19/1312
All comments and feedback from v1 have been incorporated in the v2 series.
Any comments/suggestions are welcome.

Known issues:
1. KASLR causes intermittent hibernation failures. The VM fails to resume and
has to be restarted. I will investigate this issue separately; it shouldn't
be a blocker for this patch series.
2. During hibernation, I sometimes observed that freezing of tasks fails due
to busy XFS workqueues [xfs-cil/xfs-sync]. This is also intermittent, maybe 1
out of 200 runs, and hibernation is aborted in this case. Re-trying
hibernation may work. This is a known issue with hibernation and some
filesystems like XFS; it has been discussed by the community for years
without an effective resolution at this point.

Testing How to:
---------------
1. Set up the Xen hypervisor on a physical machine [I used Ubuntu 16.04 +
upstream Xen 4.11].
2. Bring up an HVM guest with a kernel compiled with the hibernation patches
[I used Ubuntu 18.04 netboot bionic images and also Amazon Linux on-prem images].
3. Create a swap file with size >= RAM size.
4. Update the grub parameters and reboot.
5. Trigger PM hibernation from within the VM.

Example:
Set up a file-backed swap space. Swap file size >= total memory on the system:
sudo dd if=/dev/zero of=/swap bs=$(( 1024 * 1024 )) count=4096 # 4096MiB
sudo chmod 600 /swap
sudo mkswap /swap
sudo swapon /swap

Update the resume device/resume offset in grub if using a swap file:
resume=/dev/xvda1 resume_offset=200704 no_console_suspend=1

Execute:
--------
sudo pm-hibernate
OR
echo disk > /sys/power/state && echo reboot > /sys/power/disk

Compute resume offset code:

#!/usr/bin/env python3
import sys
import array
import fcntl

# Query the filesystem for the physical block of the swap file's first
# extent via the FIBMAP ioctl; this value is the resume_offset.
FIBMAP = 0x01

f = open(sys.argv[1], 'r')
buf = array.array('L', [0])
ret = fcntl.ioctl(f.fileno(), FIBMAP, buf)
print(buf[0])


Aleksei Besogonov (1):
  PM / hibernate: update the resume offset on SNAPSHOT_SET_SWAP_AREA

Anchal Agarwal (4):
  x86/xen: Introduce new function to map HYPERVISOR_shared_info on
    Resume
  x86/xen: save and restore steal clock during PM hibernation
  xen: Introduce wrapper for save/restore sched clock offset
  xen: Update sched clock offset to avoid system instability in
    hibernation

Munehisa Kamata (5):
  xen/manage: keep track of the on-going suspend mode
  xenbus: add freeze/thaw/restore callbacks support
  x86/xen: add system core suspend and resume callbacks
  xen-blkfront: add callbacks for PM suspend and hibernation
  xen-netfront: add callbacks for PM suspend and hibernation

Thomas Gleixner (1):
  genirq: Shutdown irq chips in suspend/resume during hibernation

 arch/x86/xen/enlighten_hvm.c      |   7 ++
 arch/x86/xen/suspend.c            |  53 +++++++++++++
 arch/x86/xen/time.c               |  15 +++-
 arch/x86/xen/xen-ops.h            |   3 +
 drivers/block/xen-blkfront.c      | 122 +++++++++++++++++++++++++++++-
 drivers/net/xen-netfront.c        |  98 +++++++++++++++++++++++-
 drivers/xen/events/events_base.c  |   1 +
 drivers/xen/manage.c              |  60 +++++++++++++++
 drivers/xen/xenbus/xenbus_probe.c |  96 +++++++++++++++++++----
 include/linux/irq.h               |   2 +
 include/xen/xen-ops.h             |   3 +
 include/xen/xenbus.h              |   3 +
 kernel/irq/chip.c                 |   2 +-
 kernel/irq/internals.h            |   1 +
 kernel/irq/pm.c                   |  31 +++++---
 kernel/power/user.c               |   6 +-
 16 files changed, 470 insertions(+), 33 deletions(-)

-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Jul 02 18:22:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jul 2020 18:22:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jr3qN-000573-Il; Thu, 02 Jul 2020 18:21:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PV0h=AN=amazon.com=prvs=445caddfd=anchalag@srs-us1.protection.inumbo.net>)
 id 1jr3qL-00056y-S2
 for xen-devel@lists.xenproject.org; Thu, 02 Jul 2020 18:21:53 +0000
X-Inumbo-ID: e70d7dae-bc90-11ea-b7bb-bc764e2007e4
Received: from smtp-fw-33001.amazon.com (unknown [207.171.190.10])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e70d7dae-bc90-11ea-b7bb-bc764e2007e4;
 Thu, 02 Jul 2020 18:21:53 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=amazon.com; i=@amazon.com; q=dns/txt; s=amazon201209;
 t=1593714113; x=1625250113;
 h=date:from:to:subject:message-id:references:mime-version:
 in-reply-to; bh=uoroRGLFE2Atyc2JGE05NLscaVRKKsCZnOJvewnD1q4=;
 b=cfr5zgdq7Ft7MAHGkskXajztclg+fGj0slKdLNLluKkmOzUYYOX4IKrl
 bw1lfhRIMBG25w5OTvBTvS251rg6SB4FWCEtW+LMqYDN0X/LRX8VQBbIN
 9427OFi+ocY2nfg0RE0Au/6mkvxWbLXX5/mU41YNN32zRF6jgU642GuiK A=;
IronPort-SDR: 2zEetQQdV5iJipJrTQmkyojCF1liNQ2rvC5FD4pMA/MiHYFXS2EHBC9fKUXHHRnbe8spyb4jXz
 T4wH3OzVpZxQ==
X-IronPort-AV: E=Sophos;i="5.75,305,1589241600"; d="scan'208";a="55693041"
Received: from sea32-co-svc-lb4-vlan3.sea.corp.amazon.com (HELO
 email-inbound-relay-2a-119b4f96.us-west-2.amazon.com) ([10.47.23.38])
 by smtp-border-fw-out-33001.sea14.amazon.com with ESMTP;
 02 Jul 2020 18:21:50 +0000
Received: from EX13MTAUWA001.ant.amazon.com
 (pdx4-ws-svc-p6-lb7-vlan2.pdx.amazon.com [10.170.41.162])
 by email-inbound-relay-2a-119b4f96.us-west-2.amazon.com (Postfix) with ESMTPS
 id BA4BA1A102E; Thu,  2 Jul 2020 18:21:48 +0000 (UTC)
Received: from EX13D10UWA003.ant.amazon.com (10.43.160.248) by
 EX13MTAUWA001.ant.amazon.com (10.43.160.58) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Thu, 2 Jul 2020 18:21:43 +0000
Received: from EX13MTAUWA001.ant.amazon.com (10.43.160.58) by
 EX13D10UWA003.ant.amazon.com (10.43.160.248) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Thu, 2 Jul 2020 18:21:36 +0000
Received: from dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com
 (172.22.96.68) by mail-relay.amazon.com (10.43.160.118) with Microsoft SMTP
 Server id 15.0.1497.2 via Frontend Transport; Thu, 2 Jul 2020 18:21:36 +0000
Received: by dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com (Postfix,
 from userid 4335130)
 id 8A9B940844; Thu,  2 Jul 2020 18:21:36 +0000 (UTC)
Date: Thu, 2 Jul 2020 18:21:36 +0000
From: Anchal Agarwal <anchalag@amazon.com>
To: <tglx@linutronix.de>, <mingo@redhat.com>, <bp@alien8.de>, <hpa@zytor.com>, 
 <x86@kernel.org>, <boris.ostrovsky@oracle.com>, <jgross@suse.com>,
 <linux-pm@vger.kernel.org>, <linux-mm@kvack.org>, <kamatam@amazon.com>,
 <sstabellini@kernel.org>, <konrad.wilk@oracle.com>, <roger.pau@citrix.com>,
 <axboe@kernel.dk>, <davem@davemloft.net>, <rjw@rjwysocki.net>,
 <len.brown@intel.com>, <pavel@ucw.cz>, <peterz@infradead.org>,
 <eduval@amazon.com>, <sblbir@amazon.com>, <anchalag@amazon.com>,
 <xen-devel@lists.xenproject.org>, <vkuznets@redhat.com>,
 <netdev@vger.kernel.org>, <linux-kernel@vger.kernel.org>,
 <dwmw@amazon.co.uk>, <benh@kernel.crashing.org>
Subject: [PATCH v2 01/11] xen/manage: keep track of the on-going suspend mode
Message-ID: <20200702182136.GA3511@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
References: <cover.1593665947.git.anchalag@amazon.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <cover.1593665947.git.anchalag@amazon.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Precedence: Bulk
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Munehisa Kamata <kamatam@amazon.com>

Guest hibernation is different from Xen suspend/resume/live migration.
Xen save/restore does not use pm_ops as is needed by guest hibernation.
Hibernation in the guest follows the ACPI path and is guest initiated; the
hibernation image is saved within the guest, whereas the other modes are
Xen toolstack assisted and image creation/storage is under the control of
the hypervisor/host machine.
To differentiate between Xen suspend and PM hibernation, keep track
of the on-going suspend mode, mainly by using a new PM notifier.
Introduce simple functions which report the on-going suspend mode
so that other Xen-related code can behave differently according to the
current suspend mode.
Since Xen suspend doesn't have a corresponding PM event, its main logic
is modified to acquire pm_mutex and set the current mode.

Although acquiring pm_mutex is still the right thing to do, we may
see a deadlock if PM hibernation is interrupted by Xen suspend.
PM hibernation depends on the xenwatch thread to process xenbus state
transactions, but the thread will sleep waiting for pm_mutex, which is
already held by the PM hibernation context in that scenario. The Xen
shutdown code may need some changes to avoid the issue.

[Anchal Agarwal: Changelog]:
 RFC v1->v2: Code refactoring
 v1->v2:     Remove unused functions for PM SUSPEND/PM hibernation

Signed-off-by: Anchal Agarwal <anchalag@amazon.com>
Signed-off-by: Munehisa Kamata <kamatam@amazon.com>
---
 drivers/xen/manage.c  | 60 +++++++++++++++++++++++++++++++++++++++++++
 include/xen/xen-ops.h |  1 +
 2 files changed, 61 insertions(+)

diff --git a/drivers/xen/manage.c b/drivers/xen/manage.c
index cd046684e0d1..69833fd6cfd1 100644
--- a/drivers/xen/manage.c
+++ b/drivers/xen/manage.c
@@ -14,6 +14,7 @@
 #include <linux/freezer.h>
 #include <linux/syscore_ops.h>
 #include <linux/export.h>
+#include <linux/suspend.h>
 
 #include <xen/xen.h>
 #include <xen/xenbus.h>
@@ -40,6 +41,20 @@ enum shutdown_state {
 /* Ignore multiple shutdown requests. */
 static enum shutdown_state shutting_down = SHUTDOWN_INVALID;
 
+enum suspend_modes {
+	NO_SUSPEND = 0,
+	XEN_SUSPEND,
+	PM_HIBERNATION,
+};
+
+/* Protected by pm_mutex */
+static enum suspend_modes suspend_mode = NO_SUSPEND;
+
+bool xen_is_xen_suspend(void)
+{
+	return suspend_mode == XEN_SUSPEND;
+}
+
 struct suspend_info {
 	int cancelled;
 };
@@ -99,6 +114,10 @@ static void do_suspend(void)
 	int err;
 	struct suspend_info si;
 
+	lock_system_sleep();
+
+	suspend_mode = XEN_SUSPEND;
+
 	shutting_down = SHUTDOWN_SUSPEND;
 
 	err = freeze_processes();
@@ -162,6 +181,10 @@ static void do_suspend(void)
 	thaw_processes();
 out:
 	shutting_down = SHUTDOWN_INVALID;
+
+	suspend_mode = NO_SUSPEND;
+
+	unlock_system_sleep();
 }
 #endif	/* CONFIG_HIBERNATE_CALLBACKS */
 
@@ -387,3 +410,40 @@ int xen_setup_shutdown_event(void)
 EXPORT_SYMBOL_GPL(xen_setup_shutdown_event);
 
 subsys_initcall(xen_setup_shutdown_event);
+
+static int xen_pm_notifier(struct notifier_block *notifier,
+			unsigned long pm_event, void *unused)
+{
+	switch (pm_event) {
+	case PM_SUSPEND_PREPARE:
+	case PM_HIBERNATION_PREPARE:
+	case PM_RESTORE_PREPARE:
+		suspend_mode = PM_HIBERNATION;
+		break;
+	case PM_POST_SUSPEND:
+	case PM_POST_RESTORE:
+	case PM_POST_HIBERNATION:
+		/* Set back to the default */
+		suspend_mode = NO_SUSPEND;
+		break;
+	default:
+		pr_warn("Receive unknown PM event 0x%lx\n", pm_event);
+		return -EINVAL;
+	}
+
+	return 0;
+};
+
+static struct notifier_block xen_pm_notifier_block = {
+	.notifier_call = xen_pm_notifier
+};
+
+static int xen_setup_pm_notifier(void)
+{
+	if (!xen_hvm_domain())
+		return -ENODEV;
+
+	return register_pm_notifier(&xen_pm_notifier_block);
+}
+
+subsys_initcall(xen_setup_pm_notifier);
diff --git a/include/xen/xen-ops.h b/include/xen/xen-ops.h
index 39a5580f8feb..2521d6a306cd 100644
--- a/include/xen/xen-ops.h
+++ b/include/xen/xen-ops.h
@@ -40,6 +40,7 @@ u64 xen_steal_clock(int cpu);
 
 int xen_setup_shutdown_event(void);
 
+bool xen_is_xen_suspend(void);
 extern unsigned long *xen_contiguous_bitmap;
 
 #if defined(CONFIG_XEN_PV) || defined(CONFIG_ARM) || defined(CONFIG_ARM64)
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Jul 02 18:22:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jul 2020 18:22:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jr3qw-0005DE-8u; Thu, 02 Jul 2020 18:22:30 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PV0h=AN=amazon.com=prvs=445caddfd=anchalag@srs-us1.protection.inumbo.net>)
 id 1jr3qu-0005Cv-S3
 for xen-devel@lists.xenproject.org; Thu, 02 Jul 2020 18:22:28 +0000
X-Inumbo-ID: fb2adb9d-bc90-11ea-8887-12813bfff9fa
Received: from smtp-fw-9102.amazon.com (unknown [207.171.184.29])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id fb2adb9d-bc90-11ea-8887-12813bfff9fa;
 Thu, 02 Jul 2020 18:22:28 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=amazon.com; i=@amazon.com; q=dns/txt; s=amazon201209;
 t=1593714148; x=1625250148;
 h=date:from:to:subject:message-id:references:mime-version:
 in-reply-to; bh=L0wmgxX+Ovp8pu3YnIUM23eFCfp/nVBG2avgRDi4ITE=;
 b=AYpFS4qf//QSf1DXaidpas5RaihxjrIv9aA3HVGT/6cvwO7Cqvzf0WDM
 KHGrE03l3CSOcVCMt8LLo2b3Vkpus5fGJjmvBfHBgHz+vwAYepQyb7wKK
 ekiYo9ootHWH/4CBY9Zmu8VbRsxEnZTIMeo2Re+BZu00KiSwLR4lpqXNo s=;
IronPort-SDR: phoIWWm5EGmTpHtUjCgKWFQfuW7oI7r/z0P4Vuu21Tm6bBK93XjGosJHiCJSK1i+PGpOrR2b63
 /Mq6kJcMc4Tg==
X-IronPort-AV: E=Sophos;i="5.75,305,1589241600"; d="scan'208";a="56964556"
Received: from sea32-co-svc-lb4-vlan3.sea.corp.amazon.com (HELO
 email-inbound-relay-1e-17c49630.us-east-1.amazon.com) ([10.47.23.38])
 by smtp-border-fw-out-9102.sea19.amazon.com with ESMTP;
 02 Jul 2020 18:22:26 +0000
Received: from EX13MTAUEB002.ant.amazon.com
 (iad55-ws-svc-p15-lb9-vlan3.iad.amazon.com [10.40.159.166])
 by email-inbound-relay-1e-17c49630.us-east-1.amazon.com (Postfix) with ESMTPS
 id C81B2A188E; Thu,  2 Jul 2020 18:22:18 +0000 (UTC)
Received: from EX13D08UEB002.ant.amazon.com (10.43.60.107) by
 EX13MTAUEB002.ant.amazon.com (10.43.60.12) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Thu, 2 Jul 2020 18:21:52 +0000
Received: from EX13MTAUEA002.ant.amazon.com (10.43.61.77) by
 EX13D08UEB002.ant.amazon.com (10.43.60.107) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Thu, 2 Jul 2020 18:21:52 +0000
Received: from dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com
 (172.22.96.68) by mail-relay.amazon.com (10.43.61.169) with Microsoft SMTP
 Server id 15.0.1497.2 via Frontend Transport; Thu, 2 Jul 2020 18:21:52 +0000
Received: by dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com (Postfix,
 from userid 4335130)
 id 05A9F40844; Thu,  2 Jul 2020 18:21:52 +0000 (UTC)
Date: Thu, 2 Jul 2020 18:21:52 +0000
From: Anchal Agarwal <anchalag@amazon.com>
To: <tglx@linutronix.de>, <mingo@redhat.com>, <bp@alien8.de>, <hpa@zytor.com>, 
 <x86@kernel.org>, <boris.ostrovsky@oracle.com>, <jgross@suse.com>,
 <linux-pm@vger.kernel.org>, <linux-mm@kvack.org>, <kamatam@amazon.com>,
 <sstabellini@kernel.org>, <konrad.wilk@oracle.com>, <roger.pau@citrix.com>,
 <axboe@kernel.dk>, <davem@davemloft.net>, <rjw@rjwysocki.net>,
 <len.brown@intel.com>, <pavel@ucw.cz>, <peterz@infradead.org>,
 <eduval@amazon.com>, <sblbir@amazon.com>, <anchalag@amazon.com>,
 <xen-devel@lists.xenproject.org>, <vkuznets@redhat.com>,
 <netdev@vger.kernel.org>, <linux-kernel@vger.kernel.org>,
 <dwmw@amazon.co.uk>, <benh@kernel.crashing.org>
Subject: [PATCH v2 03/11] x86/xen: Introduce new function to map
 HYPERVISOR_shared_info on Resume
Message-ID: <3601db44e7c543016ca67327393d9ae37019e408.1593665947.git.anchalag@amazon.com>
References: <cover.1593665947.git.anchalag@amazon.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <cover.1593665947.git.anchalag@amazon.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Precedence: Bulk
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Introduce a small function which re-uses the shared page's physical
address allocated during guest initialization in reserve_shared_info(),
rather than allocating a new page during the resume flow.
It maps shared_info_page by calling xen_hvm_init_shared_info().

Changelog:
v1->v2: Remove extra check for shared_info_pfn to be NULL

Signed-off-by: Anchal Agarwal <anchalag@amazon.com>
---
 arch/x86/xen/enlighten_hvm.c | 6 ++++++
 arch/x86/xen/xen-ops.h       | 1 +
 2 files changed, 7 insertions(+)

diff --git a/arch/x86/xen/enlighten_hvm.c b/arch/x86/xen/enlighten_hvm.c
index 3e89b0067ff0..d91099928746 100644
--- a/arch/x86/xen/enlighten_hvm.c
+++ b/arch/x86/xen/enlighten_hvm.c
@@ -28,6 +28,12 @@
 
 static unsigned long shared_info_pfn;
 
+void xen_hvm_map_shared_info(void)
+{
+	xen_hvm_init_shared_info();
+	HYPERVISOR_shared_info = __va(PFN_PHYS(shared_info_pfn));
+}
+
 void xen_hvm_init_shared_info(void)
 {
 	struct xen_add_to_physmap xatp;
diff --git a/arch/x86/xen/xen-ops.h b/arch/x86/xen/xen-ops.h
index 53b224fd6177..41e9e9120f2d 100644
--- a/arch/x86/xen/xen-ops.h
+++ b/arch/x86/xen/xen-ops.h
@@ -54,6 +54,7 @@ void xen_enable_sysenter(void);
 void xen_enable_syscall(void);
 void xen_vcpu_restore(void);
 
+void xen_hvm_map_shared_info(void);
 void xen_hvm_init_shared_info(void);
 void xen_unplug_emulated_devices(void);
 
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Jul 02 18:22:35 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jul 2020 18:22:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jr3r1-0005Es-Gt; Thu, 02 Jul 2020 18:22:35 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PV0h=AN=amazon.com=prvs=445caddfd=anchalag@srs-us1.protection.inumbo.net>)
 id 1jr3qz-0005Cv-QR
 for xen-devel@lists.xenproject.org; Thu, 02 Jul 2020 18:22:33 +0000
X-Inumbo-ID: fcc79bac-bc90-11ea-8887-12813bfff9fa
Received: from smtp-fw-33001.amazon.com (unknown [207.171.190.10])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id fcc79bac-bc90-11ea-8887-12813bfff9fa;
 Thu, 02 Jul 2020 18:22:29 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=amazon.com; i=@amazon.com; q=dns/txt; s=amazon201209;
 t=1593714150; x=1625250150;
 h=date:from:to:subject:message-id:references:mime-version:
 in-reply-to; bh=OCpSQw7b/4F/s9Kck7b/oFt4msmldkjBxvkPzpRtc18=;
 b=T1shpIRhhZv0QyTkTEcdlrYXncywj0xuufCwh4wLYGcToE6BTxNl8OLE
 Adgu/ugBCx2gL7JkPXAroAUGNNVrLM6k6xhtwyYM+LIgSHzePbHKzk7d7
 +V6wCwFW6MX3lr6CLcoFk/w2PPzp80n0z8dZnppmVuWEJ74xuqbVfKvP9 c=;
IronPort-SDR: vkg/Sg8L6s4rdxfrTdTR2arayeY3g64ZdS81u9rg8mgIf0I0him3TpE5PXpud66wIReuCHA0QC
 aagUD9PVAMEA==
X-IronPort-AV: E=Sophos;i="5.75,305,1589241600"; d="scan'208";a="55693165"
Received: from sea32-co-svc-lb4-vlan3.sea.corp.amazon.com (HELO
 email-inbound-relay-2c-579b7f5b.us-west-2.amazon.com) ([10.47.23.38])
 by smtp-border-fw-out-33001.sea14.amazon.com with ESMTP;
 02 Jul 2020 18:22:29 +0000
Received: from EX13MTAUEE002.ant.amazon.com
 (pdx4-ws-svc-p6-lb7-vlan3.pdx.amazon.com [10.170.41.166])
 by email-inbound-relay-2c-579b7f5b.us-west-2.amazon.com (Postfix) with ESMTPS
 id 1F1CDA18CF; Thu,  2 Jul 2020 18:22:27 +0000 (UTC)
Received: from EX13D08UEE003.ant.amazon.com (10.43.62.118) by
 EX13MTAUEE002.ant.amazon.com (10.43.62.24) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Thu, 2 Jul 2020 18:22:05 +0000
Received: from EX13MTAUEE002.ant.amazon.com (10.43.62.24) by
 EX13D08UEE003.ant.amazon.com (10.43.62.118) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Thu, 2 Jul 2020 18:22:05 +0000
Received: from dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com
 (172.22.96.68) by mail-relay.amazon.com (10.43.62.224) with Microsoft SMTP
 Server id 15.0.1497.2 via Frontend Transport; Thu, 2 Jul 2020 18:22:05 +0000
Received: by dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com (Postfix,
 from userid 4335130)
 id 0A86C40844; Thu,  2 Jul 2020 18:22:05 +0000 (UTC)
Date: Thu, 2 Jul 2020 18:22:05 +0000
From: Anchal Agarwal <anchalag@amazon.com>
To: <tglx@linutronix.de>, <mingo@redhat.com>, <bp@alien8.de>, <hpa@zytor.com>, 
 <x86@kernel.org>, <boris.ostrovsky@oracle.com>, <jgross@suse.com>,
 <linux-pm@vger.kernel.org>, <linux-mm@kvack.org>, <kamatam@amazon.com>,
 <sstabellini@kernel.org>, <konrad.wilk@oracle.com>, <roger.pau@citrix.com>,
 <axboe@kernel.dk>, <davem@davemloft.net>, <rjw@rjwysocki.net>,
 <len.brown@intel.com>, <pavel@ucw.cz>, <peterz@infradead.org>,
 <eduval@amazon.com>, <sblbir@amazon.com>, <anchalag@amazon.com>,
 <xen-devel@lists.xenproject.org>, <vkuznets@redhat.com>,
 <netdev@vger.kernel.org>, <linux-kernel@vger.kernel.org>,
 <dwmw@amazon.co.uk>, <benh@kernel.crashing.org>
Subject: [PATCH v2 04/11] x86/xen: add system core suspend and resume callbacks
Message-ID: <20200702182205.GA3531@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
References: <cover.1593665947.git.anchalag@amazon.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <cover.1593665947.git.anchalag@amazon.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Precedence: Bulk
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Munehisa Kamata <kamatam@amazon.com>

Add Xen PVHVM specific system core callbacks for PM
hibernation support. The callbacks suspend and resume
Xen primitives like shared_info, pvclock and the grant table.
These syscore_ops are specifically for domU hibernation.
xen_suspend() calls syscore_suspend() during a Xen suspend
operation; however, during Xen suspend the lock_system_sleep()
lock is taken, so the system cannot trigger hibernation at the
same time. These system core callbacks will therefore only be
called from the hibernation context.

[Anchal Agarwal: Changelog]:
v1->v2: Edit commit message
        Fixed syscore_suspend() to call gnttab_suspend
        Removed suspend mode check in syscore_suspend()/
        syscore_resume()
Signed-off-by: Agarwal Anchal <anchalag@amazon.com>
Signed-off-by: Munehisa Kamata <kamatam@amazon.com>
---
 arch/x86/xen/enlighten_hvm.c |  1 +
 arch/x86/xen/suspend.c       | 47 ++++++++++++++++++++++++++++++++++++
 include/xen/xen-ops.h        |  2 ++
 3 files changed, 50 insertions(+)

diff --git a/arch/x86/xen/enlighten_hvm.c b/arch/x86/xen/enlighten_hvm.c
index d91099928746..bd6bf6eb2052 100644
--- a/arch/x86/xen/enlighten_hvm.c
+++ b/arch/x86/xen/enlighten_hvm.c
@@ -215,6 +215,7 @@ static void __init xen_hvm_guest_init(void)
 	if (xen_feature(XENFEAT_hvm_callback_vector))
 		xen_have_vector_callback = 1;
 
+	xen_setup_syscore_ops();
 	xen_hvm_smp_init();
 	WARN_ON(xen_cpuhp_setup(xen_cpu_up_prepare_hvm, xen_cpu_dead_hvm));
 	xen_unplug_emulated_devices();
diff --git a/arch/x86/xen/suspend.c b/arch/x86/xen/suspend.c
index 1d83152c761b..e8c924e93fc5 100644
--- a/arch/x86/xen/suspend.c
+++ b/arch/x86/xen/suspend.c
@@ -2,17 +2,22 @@
 #include <linux/types.h>
 #include <linux/tick.h>
 #include <linux/percpu-defs.h>
+#include <linux/syscore_ops.h>
+#include <linux/kernel_stat.h>
 
 #include <xen/xen.h>
 #include <xen/interface/xen.h>
+#include <xen/interface/memory.h>
 #include <xen/grant_table.h>
 #include <xen/events.h>
+#include <xen/xen-ops.h>
 
 #include <asm/cpufeatures.h>
 #include <asm/msr-index.h>
 #include <asm/xen/hypercall.h>
 #include <asm/xen/page.h>
 #include <asm/fixmap.h>
+#include <asm/pvclock.h>
 
 #include "xen-ops.h"
 #include "mmu.h"
@@ -82,3 +87,45 @@ void xen_arch_suspend(void)
 
 	on_each_cpu(xen_vcpu_notify_suspend, NULL, 1);
 }
+
+static int xen_syscore_suspend(void)
+{
+	struct xen_remove_from_physmap xrfp;
+	int ret;
+
+	gnttab_suspend();
+
+	xrfp.domid = DOMID_SELF;
+	xrfp.gpfn = __pa(HYPERVISOR_shared_info) >> PAGE_SHIFT;
+
+	ret = HYPERVISOR_memory_op(XENMEM_remove_from_physmap, &xrfp);
+	if (!ret)
+		HYPERVISOR_shared_info = &xen_dummy_shared_info;
+
+	return ret;
+}
+
+static void xen_syscore_resume(void)
+{
+	/* No need to setup vcpu_info as it's already moved off */
+	xen_hvm_map_shared_info();
+
+	pvclock_resume();
+
+	gnttab_resume();
+}
+
+/*
+ * These callbacks will be called with interrupts disabled and when having only
+ * one CPU online.
+ */
+static struct syscore_ops xen_hvm_syscore_ops = {
+	.suspend = xen_syscore_suspend,
+	.resume = xen_syscore_resume
+};
+
+void __init xen_setup_syscore_ops(void)
+{
+	if (xen_hvm_domain())
+		register_syscore_ops(&xen_hvm_syscore_ops);
+}
diff --git a/include/xen/xen-ops.h b/include/xen/xen-ops.h
index 2521d6a306cd..9fa8a4082d68 100644
--- a/include/xen/xen-ops.h
+++ b/include/xen/xen-ops.h
@@ -41,6 +41,8 @@ u64 xen_steal_clock(int cpu);
 int xen_setup_shutdown_event(void);
 
 bool xen_is_xen_suspend(void);
+void xen_setup_syscore_ops(void);
+
 extern unsigned long *xen_contiguous_bitmap;
 
 #if defined(CONFIG_XEN_PV) || defined(CONFIG_ARM) || defined(CONFIG_ARM64)
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Jul 02 18:22:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jul 2020 18:22:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jr3rC-0005IW-Pb; Thu, 02 Jul 2020 18:22:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PV0h=AN=amazon.com=prvs=445caddfd=anchalag@srs-us1.protection.inumbo.net>)
 id 1jr3rA-0005I2-Vz
 for xen-devel@lists.xenproject.org; Thu, 02 Jul 2020 18:22:45 +0000
X-Inumbo-ID: 05db32a8-bc91-11ea-bb8b-bc764e2007e4
Received: from smtp-fw-6001.amazon.com (unknown [52.95.48.154])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 05db32a8-bc91-11ea-bb8b-bc764e2007e4;
 Thu, 02 Jul 2020 18:22:44 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=amazon.com; i=@amazon.com; q=dns/txt; s=amazon201209;
 t=1593714165; x=1625250165;
 h=date:from:to:subject:message-id:references:mime-version:
 in-reply-to; bh=6WYhS9fhcjlT02XaftcCbDh+6PPkxfzixjjo5vFo6xY=;
 b=lNWLO/Z15mnPjoTbYcBU7rtwPxvziCpLO4bQHU2RKBfECOyVnudfeRCU
 N663oWBDuc1CJsTVTug7SiToXyR5/V5djI/kyU70BTFiiVqD57gQOicJf
 Frw+MXRob8aCr8q6QYD1iUs/odSItdjpQ9jurql/yxlkE3Ng34rQxJoZa w=;
IronPort-SDR: 6WL2v6GoZKshQ2w15CQNvN73wC/gCz8xAZ14lB2bb634i0tTkR52VltZYVndpTfW348uMqq0Op
 8xCq/KYQWdeg==
X-IronPort-AV: E=Sophos;i="5.75,305,1589241600"; d="scan'208";a="41164983"
Received: from iad12-co-svc-p1-lb1-vlan3.amazon.com (HELO
 email-inbound-relay-1e-97fdccfd.us-east-1.amazon.com) ([10.43.8.6])
 by smtp-border-fw-out-6001.iad6.amazon.com with ESMTP;
 02 Jul 2020 18:22:44 +0000
Received: from EX13MTAUEB002.ant.amazon.com
 (iad55-ws-svc-p15-lb9-vlan3.iad.amazon.com [10.40.159.166])
 by email-inbound-relay-1e-97fdccfd.us-east-1.amazon.com (Postfix) with ESMTPS
 id 4673DA181E; Thu,  2 Jul 2020 18:22:42 +0000 (UTC)
Received: from EX13D08UEB002.ant.amazon.com (10.43.60.107) by
 EX13MTAUEB002.ant.amazon.com (10.43.60.12) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Thu, 2 Jul 2020 18:22:17 +0000
Received: from EX13MTAUEA002.ant.amazon.com (10.43.61.77) by
 EX13D08UEB002.ant.amazon.com (10.43.60.107) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Thu, 2 Jul 2020 18:22:17 +0000
Received: from dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com
 (172.22.96.68) by mail-relay.amazon.com (10.43.61.169) with Microsoft SMTP
 Server id 15.0.1497.2 via Frontend Transport; Thu, 2 Jul 2020 18:22:17 +0000
Received: by dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com (Postfix,
 from userid 4335130)
 id 2738540844; Thu,  2 Jul 2020 18:22:17 +0000 (UTC)
Date: Thu, 2 Jul 2020 18:22:17 +0000
From: Anchal Agarwal <anchalag@amazon.com>
To: <tglx@linutronix.de>, <mingo@redhat.com>, <bp@alien8.de>, <hpa@zytor.com>, 
 <x86@kernel.org>, <boris.ostrovsky@oracle.com>, <jgross@suse.com>,
 <linux-pm@vger.kernel.org>, <linux-mm@kvack.org>, <kamatam@amazon.com>,
 <sstabellini@kernel.org>, <konrad.wilk@oracle.com>, <roger.pau@citrix.com>,
 <axboe@kernel.dk>, <davem@davemloft.net>, <rjw@rjwysocki.net>,
 <len.brown@intel.com>, <pavel@ucw.cz>, <peterz@infradead.org>,
 <eduval@amazon.com>, <sblbir@amazon.com>, <anchalag@amazon.com>,
 <xen-devel@lists.xenproject.org>, <vkuznets@redhat.com>,
 <netdev@vger.kernel.org>, <linux-kernel@vger.kernel.org>,
 <dwmw@amazon.co.uk>, <benh@kernel.crashing.org>
Subject: [PATCH v2 05/11] genirq: Shutdown irq chips in suspend/resume during
 hibernation
Message-ID: <20200702182217.GA3577@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
References: <cover.1593665947.git.anchalag@amazon.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <cover.1593665947.git.anchalag@amazon.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Precedence: Bulk
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Thomas Gleixner <tglx@linutronix.de>

Many legacy device drivers do not implement power management (PM)
functions which means that interrupts requested by these drivers stay
in active state when the kernel is hibernated.

This does not matter on bare metal and on most hypervisors because the
interrupt is restored on resume without any noticeable side effects as
it stays connected to the same physical or virtual interrupt line.

The XEN interrupt mechanism is different as it maintains a mapping
between the Linux interrupt number and a XEN event channel. If the
interrupt stays active on hibernation this mapping is preserved but
there is unfortunately no guarantee that on resume the same event
channels are reassigned to these devices. This can result in event
channel conflicts which prevent the affected devices from being
restored correctly.

One way to solve this would be to add the necessary power management
functions to all affected legacy device drivers, but that's a
questionable effort which does not provide any benefits on non-XEN
environments.

The least intrusive and most efficient solution is to provide a
mechanism which allows the core interrupt code to tear down these
interrupts on hibernation and bring them back up again on resume. This
allows the XEN event channel mechanism to assign an arbitrary event
channel on resume without affecting the functionality of these
devices.

Fortunately all these device interrupts are handled by a dedicated XEN
interrupt chip so the chip can be marked that all interrupts connected
to it are handled this way. This is pretty much in line with the other
interrupt chip specific quirks, e.g. IRQCHIP_MASK_ON_SUSPEND.

Add a new quirk flag, IRQCHIP_SHUTDOWN_ON_SUSPEND, and add support for
it in the core interrupt suspend/resume paths.

Changelog:
RFCv2->RFCv3: Incorporated tglx@'s patch to work with xen code
v1->v2: Corrected the author's name to tglx@

Signed-off-by: Anchal Agarwal <anchalag@amazon.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 drivers/xen/events/events_base.c |  1 +
 include/linux/irq.h              |  2 ++
 kernel/irq/chip.c                |  2 +-
 kernel/irq/internals.h           |  1 +
 kernel/irq/pm.c                  | 31 ++++++++++++++++++++++---------
 5 files changed, 27 insertions(+), 10 deletions(-)

diff --git a/drivers/xen/events/events_base.c b/drivers/xen/events/events_base.c
index 140c7bf33a98..958dea2a4916 100644
--- a/drivers/xen/events/events_base.c
+++ b/drivers/xen/events/events_base.c
@@ -1611,6 +1611,7 @@ static struct irq_chip xen_pirq_chip __read_mostly = {
 	.irq_set_affinity	= set_affinity_irq,
 
 	.irq_retrigger		= retrigger_dynirq,
+	.flags                  = IRQCHIP_SHUTDOWN_ON_SUSPEND,
 };
 
 static struct irq_chip xen_percpu_chip __read_mostly = {
diff --git a/include/linux/irq.h b/include/linux/irq.h
index 8d5bc2c237d7..94cb8c994d06 100644
--- a/include/linux/irq.h
+++ b/include/linux/irq.h
@@ -542,6 +542,7 @@ struct irq_chip {
  * IRQCHIP_EOI_THREADED:	Chip requires eoi() on unmask in threaded mode
  * IRQCHIP_SUPPORTS_LEVEL_MSI	Chip can provide two doorbells for Level MSIs
  * IRQCHIP_SUPPORTS_NMI:	Chip can deliver NMIs, only for root irqchips
+ * IRQCHIP_SHUTDOWN_ON_SUSPEND: Shut down non-wake irqs in the suspend path
  */
 enum {
 	IRQCHIP_SET_TYPE_MASKED		= (1 <<  0),
@@ -553,6 +554,7 @@ enum {
 	IRQCHIP_EOI_THREADED		= (1 <<  6),
 	IRQCHIP_SUPPORTS_LEVEL_MSI	= (1 <<  7),
 	IRQCHIP_SUPPORTS_NMI		= (1 <<  8),
+	IRQCHIP_SHUTDOWN_ON_SUSPEND     = (1 <<  9),
 };
 
 #include <linux/irqdesc.h>
diff --git a/kernel/irq/chip.c b/kernel/irq/chip.c
index 41e7e37a0928..fd59489ff14b 100644
--- a/kernel/irq/chip.c
+++ b/kernel/irq/chip.c
@@ -233,7 +233,7 @@ __irq_startup_managed(struct irq_desc *desc, struct cpumask *aff, bool force)
 }
 #endif
 
-static int __irq_startup(struct irq_desc *desc)
+int __irq_startup(struct irq_desc *desc)
 {
 	struct irq_data *d = irq_desc_get_irq_data(desc);
 	int ret = 0;
diff --git a/kernel/irq/internals.h b/kernel/irq/internals.h
index 7db284b10ac9..b6fca5eacff7 100644
--- a/kernel/irq/internals.h
+++ b/kernel/irq/internals.h
@@ -80,6 +80,7 @@ extern void __enable_irq(struct irq_desc *desc);
 extern int irq_activate(struct irq_desc *desc);
 extern int irq_activate_and_startup(struct irq_desc *desc, bool resend);
 extern int irq_startup(struct irq_desc *desc, bool resend, bool force);
+extern int __irq_startup(struct irq_desc *desc);
 
 extern void irq_shutdown(struct irq_desc *desc);
 extern void irq_shutdown_and_deactivate(struct irq_desc *desc);
diff --git a/kernel/irq/pm.c b/kernel/irq/pm.c
index 8f557fa1f4fe..dc48a25f1756 100644
--- a/kernel/irq/pm.c
+++ b/kernel/irq/pm.c
@@ -85,16 +85,25 @@ static bool suspend_device_irq(struct irq_desc *desc)
 	}
 
 	desc->istate |= IRQS_SUSPENDED;
-	__disable_irq(desc);
-
 	/*
-	 * Hardware which has no wakeup source configuration facility
-	 * requires that the non wakeup interrupts are masked at the
-	 * chip level. The chip implementation indicates that with
-	 * IRQCHIP_MASK_ON_SUSPEND.
+	 * Some irq chips (e.g. XEN PIRQ) require a full shutdown on suspend,
+	 * as some legacy drivers (e.g. floppy) do nothing during the
+	 * suspend path.
 	 */
-	if (irq_desc_get_chip(desc)->flags & IRQCHIP_MASK_ON_SUSPEND)
-		mask_irq(desc);
+	if (irq_desc_get_chip(desc)->flags & IRQCHIP_SHUTDOWN_ON_SUSPEND) {
+		irq_shutdown(desc);
+	} else {
+		__disable_irq(desc);
+
+	       /*
+		* Hardware which has no wakeup source configuration facility
+		* requires that the non wakeup interrupts are masked at the
+		* chip level. The chip implementation indicates that with
+		* IRQCHIP_MASK_ON_SUSPEND.
+		*/
+		if (irq_desc_get_chip(desc)->flags & IRQCHIP_MASK_ON_SUSPEND)
+			mask_irq(desc);
+	}
 	return true;
 }
 
@@ -152,7 +161,11 @@ static void resume_irq(struct irq_desc *desc)
 	irq_state_set_masked(desc);
 resume:
 	desc->istate &= ~IRQS_SUSPENDED;
-	__enable_irq(desc);
+
+	if (irq_desc_get_chip(desc)->flags & IRQCHIP_SHUTDOWN_ON_SUSPEND)
+		__irq_startup(desc);
+	else
+		__enable_irq(desc);
 }
 
 static void resume_irqs(bool want_early)
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Jul 02 18:22:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jul 2020 18:22:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jr3rI-0005L0-5I; Thu, 02 Jul 2020 18:22:52 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PV0h=AN=amazon.com=prvs=445caddfd=anchalag@srs-us1.protection.inumbo.net>)
 id 1jr3rH-0005Kj-Dl
 for xen-devel@lists.xenproject.org; Thu, 02 Jul 2020 18:22:51 +0000
X-Inumbo-ID: 09425372-bc91-11ea-8887-12813bfff9fa
Received: from smtp-fw-9101.amazon.com (unknown [207.171.184.25])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 09425372-bc91-11ea-8887-12813bfff9fa;
 Thu, 02 Jul 2020 18:22:50 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=amazon.com; i=@amazon.com; q=dns/txt; s=amazon201209;
 t=1593714170; x=1625250170;
 h=date:from:to:subject:message-id:references:mime-version:
 in-reply-to; bh=06XpXsbm4nGxufy5odK5fTjKd20Tk+xFUtGFnfbSPzI=;
 b=VX/bvOfbKagtaL40wqHSFwR8yT7u2TXTQd6kmhKrJsSoSha4CWsNHzhj
 oEJPA73A3HWv+QmpHEeqRciLRdVsnXo6YktcfuKG/oFMbBAs3mX5UHVtP
 ufcwDNz1U4ct/G0XXzJCDnwl0NcHxuemYqHWPlUs7IDyzzeF4Ej6X5hjs k=;
IronPort-SDR: 0gwnU9r137fbenaSVqPItFyNKfUucpkmlvUo+1SwnrFmuaEm55bBT1oVwuTzRuYtHWnEFiJ/b7
 VGVubhW9JJLw==
X-IronPort-AV: E=Sophos;i="5.75,305,1589241600"; d="scan'208";a="48737187"
Received: from sea32-co-svc-lb4-vlan3.sea.corp.amazon.com (HELO
 email-inbound-relay-2a-90c42d1d.us-west-2.amazon.com) ([10.47.23.38])
 by smtp-border-fw-out-9101.sea19.amazon.com with ESMTP;
 02 Jul 2020 18:22:47 +0000
Received: from EX13MTAUWB001.ant.amazon.com
 (pdx4-ws-svc-p6-lb7-vlan3.pdx.amazon.com [10.170.41.166])
 by email-inbound-relay-2a-90c42d1d.us-west-2.amazon.com (Postfix) with ESMTPS
 id EE3B7A20BE; Thu,  2 Jul 2020 18:22:45 +0000 (UTC)
Received: from EX13D07UWB001.ant.amazon.com (10.43.161.238) by
 EX13MTAUWB001.ant.amazon.com (10.43.161.249) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Thu, 2 Jul 2020 18:22:40 +0000
Received: from EX13MTAUWB001.ant.amazon.com (10.43.161.207) by
 EX13D07UWB001.ant.amazon.com (10.43.161.238) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Thu, 2 Jul 2020 18:22:40 +0000
Received: from dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com
 (172.22.96.68) by mail-relay.amazon.com (10.43.161.249) with Microsoft SMTP
 Server id 15.0.1497.2 via Frontend Transport; Thu, 2 Jul 2020 18:22:40 +0000
Received: by dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com (Postfix,
 from userid 4335130)
 id 4A9E540844; Thu,  2 Jul 2020 18:22:40 +0000 (UTC)
Date: Thu, 2 Jul 2020 18:22:40 +0000
From: Anchal Agarwal <anchalag@amazon.com>
To: <tglx@linutronix.de>, <mingo@redhat.com>, <bp@alien8.de>, <hpa@zytor.com>, 
 <x86@kernel.org>, <boris.ostrovsky@oracle.com>, <jgross@suse.com>,
 <linux-pm@vger.kernel.org>, <linux-mm@kvack.org>, <kamatam@amazon.com>,
 <sstabellini@kernel.org>, <konrad.wilk@oracle.com>, <roger.pau@citrix.com>,
 <axboe@kernel.dk>, <davem@davemloft.net>, <rjw@rjwysocki.net>,
 <len.brown@intel.com>, <pavel@ucw.cz>, <peterz@infradead.org>,
 <eduval@amazon.com>, <sblbir@amazon.com>, <anchalag@amazon.com>,
 <xen-devel@lists.xenproject.org>, <vkuznets@redhat.com>,
 <netdev@vger.kernel.org>, <linux-kernel@vger.kernel.org>,
 <dwmw@amazon.co.uk>, <benh@kernel.crashing.org>
Subject: [PATCH v2 07/11] xen-netfront: add callbacks for PM suspend and
 hibernation
Message-ID: <20200702182240.GA3596@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
References: <cover.1593665947.git.anchalag@amazon.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <cover.1593665947.git.anchalag@amazon.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Precedence: Bulk
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Munehisa Kamata <kamatam@amazon.com>

Add freeze, thaw and restore callbacks for PM suspend and hibernation
support. The freeze handler simply disconnects the frontend from the
backend and frees the resources associated with the queues after
disabling the net_device from the system. The restore handler just
changes the frontend state and lets the xenbus handler re-allocate
the resources and re-connect to the backend. This can be performed
transparently to the rest of the system. The handlers are used for
both PM suspend and hibernation so that we can keep the existing
suspend/resume callbacks for Xen suspend without modification.
Freezing netfront devices is normally expected to finish within a few
hundred milliseconds, but in rare cases it can take more than 5
seconds and hit the hard-coded timeout, depending on the backend
state, which may be congested and/or have a complex configuration.
While this is a rare case, a longer default timeout seems more
reasonable here to avoid hitting it. Also, make it configurable via a
module parameter so that we can cover broader setups than those we
currently know of.

[Anchal Agarwal: Changelog]:
RFCv1->RFCv2: Variable name fix and checkpatch.pl fixes]

Signed-off-by: Anchal Agarwal <anchalag@amazon.com>
Signed-off-by: Munehisa Kamata <kamatam@amazon.com>
---
 drivers/net/xen-netfront.c | 98 +++++++++++++++++++++++++++++++++++++-
 1 file changed, 97 insertions(+), 1 deletion(-)

diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
index 482c6c8b0fb7..65edcdd6e05f 100644
--- a/drivers/net/xen-netfront.c
+++ b/drivers/net/xen-netfront.c
@@ -43,6 +43,7 @@
 #include <linux/moduleparam.h>
 #include <linux/mm.h>
 #include <linux/slab.h>
+#include <linux/completion.h>
 #include <net/ip.h>
 
 #include <xen/xen.h>
@@ -56,6 +57,12 @@
 #include <xen/interface/memory.h>
 #include <xen/interface/grant_table.h>
 
+enum netif_freeze_state {
+	NETIF_FREEZE_STATE_UNFROZEN,
+	NETIF_FREEZE_STATE_FREEZING,
+	NETIF_FREEZE_STATE_FROZEN,
+};
+
 /* Module parameters */
 #define MAX_QUEUES_DEFAULT 8
 static unsigned int xennet_max_queues;
@@ -63,6 +70,12 @@ module_param_named(max_queues, xennet_max_queues, uint, 0644);
 MODULE_PARM_DESC(max_queues,
 		 "Maximum number of queues per virtual interface");
 
+static unsigned int netfront_freeze_timeout_secs = 10;
+module_param_named(freeze_timeout_secs,
+		   netfront_freeze_timeout_secs, uint, 0644);
+MODULE_PARM_DESC(freeze_timeout_secs,
		 "Timeout in seconds when freezing a netfront device");
+
 static const struct ethtool_ops xennet_ethtool_ops;
 
 struct netfront_cb {
@@ -160,6 +173,10 @@ struct netfront_info {
 	struct netfront_stats __percpu *tx_stats;
 
 	atomic_t rx_gso_checksum_fixup;
+
+	int freeze_state;
+
+	struct completion wait_backend_disconnected;
 };
 
 struct netfront_rx_info {
@@ -721,6 +738,21 @@ static int xennet_close(struct net_device *dev)
 	return 0;
 }
 
+static int xennet_disable_interrupts(struct net_device *dev)
+{
+	struct netfront_info *np = netdev_priv(dev);
+	unsigned int num_queues = dev->real_num_tx_queues;
+	unsigned int queue_index;
+	struct netfront_queue *queue;
+
+	for (queue_index = 0; queue_index < num_queues; ++queue_index) {
+		queue = &np->queues[queue_index];
+		disable_irq(queue->tx_irq);
+		disable_irq(queue->rx_irq);
+	}
+	return 0;
+}
+
 static void xennet_move_rx_slot(struct netfront_queue *queue, struct sk_buff *skb,
 				grant_ref_t ref)
 {
@@ -1301,6 +1333,8 @@ static struct net_device *xennet_create_dev(struct xenbus_device *dev)
 
 	np->queues = NULL;
 
+	init_completion(&np->wait_backend_disconnected);
+
 	err = -ENOMEM;
 	np->rx_stats = netdev_alloc_pcpu_stats(struct netfront_stats);
 	if (np->rx_stats == NULL)
@@ -1794,6 +1828,50 @@ static int xennet_create_queues(struct netfront_info *info,
 	return 0;
 }
 
+static int netfront_freeze(struct xenbus_device *dev)
+{
+	struct netfront_info *info = dev_get_drvdata(&dev->dev);
+	unsigned long timeout = netfront_freeze_timeout_secs * HZ;
+	int err = 0;
+
+	xennet_disable_interrupts(info->netdev);
+
+	netif_device_detach(info->netdev);
+
+	info->freeze_state = NETIF_FREEZE_STATE_FREEZING;
+
+	/* Kick the backend to disconnect */
+	xenbus_switch_state(dev, XenbusStateClosing);
+
+	/* We don't want to move forward before the frontend is disconnected
+	 * from the backend cleanly.
+	 */
+	timeout = wait_for_completion_timeout(&info->wait_backend_disconnected,
+					      timeout);
+	if (!timeout) {
+		err = -EBUSY;
+		xenbus_dev_error(dev, err, "Freezing timed out; "
+				 "the device may be left in an inconsistent state");
+		return err;
+	}
+
+	/* Tear down queues */
+	xennet_disconnect_backend(info);
+	xennet_destroy_queues(info);
+
+	info->freeze_state = NETIF_FREEZE_STATE_FROZEN;
+
+	return err;
+}
+
+static int netfront_restore(struct xenbus_device *dev)
+{
+	/* Kick the backend to re-connect */
+	xenbus_switch_state(dev, XenbusStateInitialising);
+
+	return 0;
+}
+
 /* Common code used when first setting up, and when resuming. */
 static int talk_to_netback(struct xenbus_device *dev,
 			   struct netfront_info *info)
@@ -1999,6 +2077,8 @@ static int xennet_connect(struct net_device *dev)
 		spin_unlock_bh(&queue->rx_lock);
 	}
 
+	np->freeze_state = NETIF_FREEZE_STATE_UNFROZEN;
+
 	return 0;
 }
 
@@ -2036,10 +2116,23 @@ static void netback_changed(struct xenbus_device *dev,
 		break;
 
 	case XenbusStateClosed:
-		if (dev->state == XenbusStateClosed)
+		if (dev->state == XenbusStateClosed) {
+			/* dpm context is waiting for the backend */
+			if (np->freeze_state == NETIF_FREEZE_STATE_FREEZING)
+				complete(&np->wait_backend_disconnected);
 			break;
+		}
+
 		/* Fall through - Missed the backend's CLOSING state. */
 	case XenbusStateClosing:
+	       /* We may see an unexpected Closed or Closing from the backend.
+		* Just ignore it so as not to prevent the frontend from being
+		* re-connected in the case of PM suspend or hibernation.
+		*/
+		if (np->freeze_state == NETIF_FREEZE_STATE_FROZEN &&
+		    dev->state == XenbusStateInitialising) {
+			break;
+		}
 		xenbus_frontend_closed(dev);
 		break;
 	}
@@ -2186,6 +2279,9 @@ static struct xenbus_driver netfront_driver = {
 	.probe = netfront_probe,
 	.remove = xennet_remove,
 	.resume = netfront_resume,
+	.freeze = netfront_freeze,
+	.thaw	= netfront_restore,
+	.restore = netfront_restore,
 	.otherend_changed = netback_changed,
 };
 
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Jul 02 18:22:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jul 2020 18:22:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jr3rP-0005Oh-Ej; Thu, 02 Jul 2020 18:22:59 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PV0h=AN=amazon.com=prvs=445caddfd=anchalag@srs-us1.protection.inumbo.net>)
 id 1jr3rN-0005Lf-Ow
 for xen-devel@lists.xenproject.org; Thu, 02 Jul 2020 18:22:57 +0000
X-Inumbo-ID: 0d5f7e8a-bc91-11ea-b7bb-bc764e2007e4
Received: from smtp-fw-9102.amazon.com (unknown [207.171.184.29])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0d5f7e8a-bc91-11ea-b7bb-bc764e2007e4;
 Thu, 02 Jul 2020 18:22:57 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=amazon.com; i=@amazon.com; q=dns/txt; s=amazon201209;
 t=1593714177; x=1625250177;
 h=date:from:to:subject:message-id:references:mime-version:
 in-reply-to; bh=RVQKYtcRWwNWuZmUzS3zKOFxRzkR229MYD6OqjQdblg=;
 b=NdsRZ2snflyOIiQBbCjZKak10ZUwltWSELv0tuqOotP7kx2aMnxD4Jy7
 CJzocvYTg6PMw3OWcQkAqQRIje778szTn3joTxV04qhael74DvVAjIYOd
 GSeHpsvLapQOYzov7dAkDzJY8s9TX3daU2RxWQg7VpDrDAzFnnzF5ugzW k=;
IronPort-SDR: CD+xWM+1+cIX2syWOyjPqbem2nqHCuYhZtRY9gc6NlNk+D4GhT+VRWopNGaFDnmYCsyAzUKHdD
 NlhAuEexncvg==
X-IronPort-AV: E=Sophos;i="5.75,305,1589241600"; d="scan'208";a="56964654"
Received: from sea32-co-svc-lb4-vlan3.sea.corp.amazon.com (HELO
 email-inbound-relay-1a-807d4a99.us-east-1.amazon.com) ([10.47.23.38])
 by smtp-border-fw-out-9102.sea19.amazon.com with ESMTP;
 02 Jul 2020 18:22:54 +0000
Received: from EX13MTAUEB002.ant.amazon.com
 (iad55-ws-svc-p15-lb9-vlan2.iad.amazon.com [10.40.159.162])
 by email-inbound-relay-1a-807d4a99.us-east-1.amazon.com (Postfix) with ESMTPS
 id 67EEAA2423; Thu,  2 Jul 2020 18:22:47 +0000 (UTC)
Received: from EX13D08UEB002.ant.amazon.com (10.43.60.107) by
 EX13MTAUEB002.ant.amazon.com (10.43.60.12) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Thu, 2 Jul 2020 18:22:29 +0000
Received: from EX13MTAUEB002.ant.amazon.com (10.43.60.12) by
 EX13D08UEB002.ant.amazon.com (10.43.60.107) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Thu, 2 Jul 2020 18:22:29 +0000
Received: from dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com
 (172.22.96.68) by mail-relay.amazon.com (10.43.60.234) with Microsoft SMTP
 Server id 15.0.1497.2 via Frontend Transport; Thu, 2 Jul 2020 18:22:28 +0000
Received: by dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com (Postfix,
 from userid 4335130)
 id B17DF40844; Thu,  2 Jul 2020 18:22:28 +0000 (UTC)
Date: Thu, 2 Jul 2020 18:22:28 +0000
From: Anchal Agarwal <anchalag@amazon.com>
To: <tglx@linutronix.de>, <mingo@redhat.com>, <bp@alien8.de>, <hpa@zytor.com>, 
 <x86@kernel.org>, <boris.ostrovsky@oracle.com>, <jgross@suse.com>,
 <linux-pm@vger.kernel.org>, <linux-mm@kvack.org>, <kamatam@amazon.com>,
 <sstabellini@kernel.org>, <konrad.wilk@oracle.com>, <roger.pau@citrix.com>,
 <axboe@kernel.dk>, <davem@davemloft.net>, <rjw@rjwysocki.net>,
 <len.brown@intel.com>, <pavel@ucw.cz>, <peterz@infradead.org>,
 <eduval@amazon.com>, <sblbir@amazon.com>, <anchalag@amazon.com>,
 <xen-devel@lists.xenproject.org>, <vkuznets@redhat.com>,
 <netdev@vger.kernel.org>, <linux-kernel@vger.kernel.org>,
 <dwmw@amazon.co.uk>, <benh@kernel.crashing.org>
Subject: [PATCH v2 06/11] xen-blkfront: add callbacks for PM suspend and
 hibernation
Message-ID: <20200702182228.GA3586@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
References: <cover.1593665947.git.anchalag@amazon.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <cover.1593665947.git.anchalag@amazon.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Precedence: Bulk
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Munehisa Kamata <kamatam@amazon.com>

S4 power transition states are quite different from xen
suspend/resume. The former is visible to the guest, and frontend
drivers should be aware of the state transitions and be able to take
appropriate actions when needed. In the transition to S4 we need to
make sure that at least all the in-flight blkif requests get
completed, since they probably contain bits of the guest's memory
image and that's not going to get saved any other way. Hence,
re-issuing in-flight requests as in the case of xen resume will not
work here. This is in contrast to xen-suspend, where we need to
freeze with as little processing as possible to avoid dirtying RAM
late in the migration cycle, and we know that in-flight data can wait.

Add freeze, thaw and restore callbacks for PM suspend and hibernation
support. All frontend drivers that need to use
PM_HIBERNATION/PM_SUSPEND events need to implement these
xenbus_driver callbacks. The freeze handler stops the block-layer
queue and disconnects the frontend from the backend while freeing
ring_info and the associated resources. Before disconnecting from the
backend, we need to prevent any new IO from being queued and wait for
existing IO to complete. Freezing/unfreezing the queues guarantees
that there are no requests in use on the shared ring. However, for
sanity we should check the state of the ring before disconnecting to
make sure that there are no outstanding requests to be processed on
the ring. The restore handler re-allocates ring_info, unquiesces and
unfreezes the queue, and re-connects to the backend, so that the rest
of the kernel can continue to use the block device transparently.

Note: For older backends, if a backend doesn't have commit
12ea729645ace ("xen/blkback: unmap all persistent grants when
frontend gets disconnected"), the frontend may see a massive number
of grant table warnings when freeing resources:
[   36.852659] deferring g.e. 0xf9 (pfn 0xffffffffffffffff)
[   36.855089] xen:grant_table: WARNING:e.g. 0x112 still in use!

In this case, persistent grants would need to be disabled.

[Anchal Agarwal: Changelog]:
RFC v1->v2: Removed timeout per request before disconnect during
	    blkfront freeze.
	    Added queue freeze/quiesce to the blkfront_freeze
	    Code cleanup
RFC v2->v3: None
RFC v3->v1: Code cleanup, Refactoring
    v1->v2: * remove err variable in blkfront_freeze
            * BugFix: error handling if rings are still busy
              after queue freeze/quiesce and returning the driver to
              connected state
            * add TODO if blkback fails to disconnect on freeze
            * Code formatting

Signed-off-by: Anchal Agarwal <anchalag@amazon.com>
Signed-off-by: Munehisa Kamata <kamatam@amazon.com>
---
 drivers/block/xen-blkfront.c | 122 +++++++++++++++++++++++++++++++++--
 1 file changed, 118 insertions(+), 4 deletions(-)

diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
index 3b889ea950c2..9e3ed1b9f509 100644
--- a/drivers/block/xen-blkfront.c
+++ b/drivers/block/xen-blkfront.c
@@ -48,6 +48,8 @@
 #include <linux/list.h>
 #include <linux/workqueue.h>
 #include <linux/sched/mm.h>
+#include <linux/completion.h>
+#include <linux/delay.h>
 
 #include <xen/xen.h>
 #include <xen/xenbus.h>
@@ -80,6 +82,8 @@ enum blkif_state {
 	BLKIF_STATE_DISCONNECTED,
 	BLKIF_STATE_CONNECTED,
 	BLKIF_STATE_SUSPENDED,
+	BLKIF_STATE_FREEZING,
+	BLKIF_STATE_FROZEN,
 };
 
 struct grant {
@@ -219,6 +223,7 @@ struct blkfront_info
 	struct list_head requests;
 	struct bio_list bio_list;
 	struct list_head info_list;
+	struct completion wait_backend_disconnected;
 };
 
 static unsigned int nr_minors;
@@ -1005,6 +1010,7 @@ static int xlvbd_init_blk_queue(struct gendisk *gd, u16 sector_size,
 	info->sector_size = sector_size;
 	info->physical_sector_size = physical_sector_size;
 	blkif_set_queue_limits(info);
+	init_completion(&info->wait_backend_disconnected);
 
 	return 0;
 }
@@ -1353,6 +1359,8 @@ static void blkif_free(struct blkfront_info *info, int suspend)
 	unsigned int i;
 	struct blkfront_ring_info *rinfo;
 
+	if (info->connected == BLKIF_STATE_FREEZING)
+		goto free_rings;
 	/* Prevent new requests being issued until we fix things up. */
 	info->connected = suspend ?
 		BLKIF_STATE_SUSPENDED : BLKIF_STATE_DISCONNECTED;
@@ -1360,6 +1368,7 @@ static void blkif_free(struct blkfront_info *info, int suspend)
 	if (info->rq)
 		blk_mq_stop_hw_queues(info->rq);
 
+free_rings:
 	for_each_rinfo(info, rinfo, i)
 		blkif_free_ring(rinfo);
 
@@ -1563,8 +1572,10 @@ static irqreturn_t blkif_interrupt(int irq, void *dev_id)
 	struct blkfront_ring_info *rinfo = (struct blkfront_ring_info *)dev_id;
 	struct blkfront_info *info = rinfo->dev_info;
 
-	if (unlikely(info->connected != BLKIF_STATE_CONNECTED))
+	if (unlikely(info->connected != BLKIF_STATE_CONNECTED &&
+			info->connected != BLKIF_STATE_FREEZING)) {
 		return IRQ_HANDLED;
+	}
 
 	spin_lock_irqsave(&rinfo->ring_lock, flags);
  again:
@@ -2026,6 +2037,7 @@ static int blkif_recover(struct blkfront_info *info)
 	struct bio *bio;
 	unsigned int segs;
 	struct blkfront_ring_info *rinfo;
+	bool frozen = info->connected == BLKIF_STATE_FROZEN;
 
 	blkfront_gather_backend_features(info);
 	/* Reset limits changed by blk_mq_update_nr_hw_queues(). */
@@ -2048,6 +2060,9 @@ static int blkif_recover(struct blkfront_info *info)
 		kick_pending_request_queues(rinfo);
 	}
 
+	if (frozen)
+		return 0;
+
 	list_for_each_entry_safe(req, n, &info->requests, queuelist) {
 		/* Requeue pending requests (flush or discard) */
 		list_del_init(&req->queuelist);
@@ -2364,6 +2379,7 @@ static void blkfront_connect(struct blkfront_info *info)
 
 		return;
 	case BLKIF_STATE_SUSPENDED:
+	case BLKIF_STATE_FROZEN:
 		/*
 		 * If we are recovering from suspension, we need to wait
 		 * for the backend to announce it's features before
@@ -2481,12 +2497,37 @@ static void blkback_changed(struct xenbus_device *dev,
 		break;
 
 	case XenbusStateClosed:
-		if (dev->state == XenbusStateClosed)
+		if (dev->state == XenbusStateClosed) {
+			if (info->connected == BLKIF_STATE_FREEZING) {
+				blkif_free(info, 0);
+				info->connected = BLKIF_STATE_FROZEN;
+				complete(&info->wait_backend_disconnected);
+			}
 			break;
+		}
+		/*
+		 * Receiving the backend's Closed state again while thawing
+		 * or restoring would cause the thaw or restore to fail.
+		 * During blkfront_restore the backend is still in the
+		 * Closed state, and we can observe it here while the
+		 * frontend's dev->state is XenbusStateInitialized.
+		 * Ignore such an unexpected transition regardless of the
+		 * backend's state.
+		 */
+		if (info->connected == BLKIF_STATE_FROZEN) {
+			dev_dbg(&dev->dev, "Thawing/Restoring, ignore the backend's Closed state: %s",
+				dev->nodename);
+			break;
+		}
+
 		/* fall through */
 	case XenbusStateClosing:
-		if (info)
-			blkfront_closing(info);
+		if (info) {
+			if (info->connected == BLKIF_STATE_FREEZING)
+				xenbus_frontend_closed(dev);
+			else
+				blkfront_closing(info);
+		}
 		break;
 	}
 }
@@ -2630,6 +2671,76 @@ static void blkif_release(struct gendisk *disk, fmode_t mode)
 	mutex_unlock(&blkfront_mutex);
 }
 
+static int blkfront_freeze(struct xenbus_device *dev)
+{
+	unsigned int i;
+	struct blkfront_info *info = dev_get_drvdata(&dev->dev);
+	struct blkfront_ring_info *rinfo;
+	/* A reasonable timeout, as also used in xenbus_dev_shutdown() */
+	unsigned int timeout = 5 * HZ;
+	unsigned long flags;
+
+	info->connected = BLKIF_STATE_FREEZING;
+
+	blk_mq_freeze_queue(info->rq);
+	blk_mq_quiesce_queue(info->rq);
+
+	for_each_rinfo(info, rinfo, i) {
+		/* No more gnttab callback work. */
+		gnttab_cancel_free_callback(&rinfo->callback);
+		/* Flush gnttab callback work. Must be done with no locks held. */
+		flush_work(&rinfo->work);
+	}
+
+	for_each_rinfo(info, rinfo, i) {
+		spin_lock_irqsave(&rinfo->ring_lock, flags);
+		if (RING_FULL(&rinfo->ring) ||
+			RING_HAS_UNCONSUMED_RESPONSES(&rinfo->ring)) {
+			spin_unlock_irqrestore(&rinfo->ring_lock, flags);
+			xenbus_dev_error(dev, -EBUSY, "Hibernation failed: the ring is still busy");
+			info->connected = BLKIF_STATE_CONNECTED;
+			blk_mq_unquiesce_queue(info->rq);
+			blk_mq_unfreeze_queue(info->rq);
+			return -EBUSY;
+		}
+		spin_unlock_irqrestore(&rinfo->ring_lock, flags);
+	}
+	/* Kick the backend to disconnect */
+	xenbus_switch_state(dev, XenbusStateClosing);
+
+	/*
+	 * We don't want to move forward before the frontend is cleanly
+	 * disconnected from the backend.
+	 * TODO: Handle a timeout by falling back to the normal disconnect
+	 * path, i.e. just wait for the backend to close before
+	 * reconnecting, and bring the system back to its original state
+	 * by failing hibernation gracefully.
+	 */
+	timeout = wait_for_completion_timeout(&info->wait_backend_disconnected,
+						timeout);
+	if (!timeout) {
+		xenbus_dev_error(dev, -EBUSY, "Freezing timed out; "
+			"the device may be left in an inconsistent state");
+		return -EBUSY;
+	}
+
+	return 0;
+}
+
+static int blkfront_restore(struct xenbus_device *dev)
+{
+	struct blkfront_info *info = dev_get_drvdata(&dev->dev);
+	int err;
+
+	err = talk_to_blkback(dev, info);
+	if (!err) {
+		blk_mq_update_nr_hw_queues(&info->tag_set, info->nr_rings);
+		blk_mq_unquiesce_queue(info->rq);
+		blk_mq_unfreeze_queue(info->rq);
+	}
+	return err;
+}
+
 static const struct block_device_operations xlvbd_block_fops =
 {
 	.owner = THIS_MODULE,
@@ -2653,6 +2764,9 @@ static struct xenbus_driver blkfront_driver = {
 	.resume = blkfront_resume,
 	.otherend_changed = blkback_changed,
 	.is_ready = blkfront_is_ready,
+	.freeze = blkfront_freeze,
+	.thaw = blkfront_restore,
+	.restore = blkfront_restore
 };
 
 static void purge_persistent_grants(struct blkfront_info *info)
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Jul 02 18:23:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jul 2020 18:23:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jr3ro-0005Zm-On; Thu, 02 Jul 2020 18:23:24 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PV0h=AN=amazon.com=prvs=445caddfd=anchalag@srs-us1.protection.inumbo.net>)
 id 1jr3rm-0005ZE-VZ
 for xen-devel@lists.xenproject.org; Thu, 02 Jul 2020 18:23:22 +0000
X-Inumbo-ID: 1caaf9c8-bc91-11ea-bca7-bc764e2007e4
Received: from smtp-fw-6002.amazon.com (unknown [52.95.49.90])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1caaf9c8-bc91-11ea-bca7-bc764e2007e4;
 Thu, 02 Jul 2020 18:23:22 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=amazon.com; i=@amazon.com; q=dns/txt; s=amazon201209;
 t=1593714202; x=1625250202;
 h=date:from:to:subject:message-id:references:mime-version:
 in-reply-to; bh=I7turOllMrL/CNEMBFGxOPTAV9RKwwcOD5Qzsp0RMCI=;
 b=vCQNhmqEzpWBiFjhA17o+7y6jSRGt0hV8h93YfSdFmwmTL7uGwTB3li2
 WqRsJOVaS5mmHL5FBKt/3PAjm2H7e/eklFheXYrlJwWuB1POXO7BXTWcE
 SUDV3yj1DX3w216jPDUzXevn+HCtxqoXZ6Tce287X5Fyln5TibTNer3eL E=;
IronPort-SDR: gtZOu0dMiB3ehEJQSAxthJy8slH3FlYoulPxQjoZIUWyHzAH9qA9TfOTryU3wg6sLJrulv+kzq
 yxHeXV2LgO7A==
X-IronPort-AV: E=Sophos;i="5.75,305,1589241600"; d="scan'208";a="39736466"
Received: from iad12-co-svc-p1-lb1-vlan3.amazon.com (HELO
 email-inbound-relay-2c-4e7c8266.us-west-2.amazon.com) ([10.43.8.6])
 by smtp-border-fw-out-6002.iad6.amazon.com with ESMTP;
 02 Jul 2020 18:23:16 +0000
Received: from EX13MTAUEE002.ant.amazon.com
 (pdx4-ws-svc-p6-lb7-vlan2.pdx.amazon.com [10.170.41.162])
 by email-inbound-relay-2c-4e7c8266.us-west-2.amazon.com (Postfix) with ESMTPS
 id A0D51A2540; Thu,  2 Jul 2020 18:23:14 +0000 (UTC)
Received: from EX13D08UEE002.ant.amazon.com (10.43.62.92) by
 EX13MTAUEE002.ant.amazon.com (10.43.62.24) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Thu, 2 Jul 2020 18:22:51 +0000
Received: from EX13MTAUEE002.ant.amazon.com (10.43.62.24) by
 EX13D08UEE002.ant.amazon.com (10.43.62.92) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Thu, 2 Jul 2020 18:22:51 +0000
Received: from dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com
 (172.22.96.68) by mail-relay.amazon.com (10.43.62.224) with Microsoft SMTP
 Server id 15.0.1497.2 via Frontend Transport; Thu, 2 Jul 2020 18:22:51 +0000
Received: by dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com (Postfix,
 from userid 4335130)
 id 651F840844; Thu,  2 Jul 2020 18:22:51 +0000 (UTC)
Date: Thu, 2 Jul 2020 18:22:51 +0000
From: Anchal Agarwal <anchalag@amazon.com>
To: <tglx@linutronix.de>, <mingo@redhat.com>, <bp@alien8.de>, <hpa@zytor.com>, 
 <x86@kernel.org>, <boris.ostrovsky@oracle.com>, <jgross@suse.com>,
 <linux-pm@vger.kernel.org>, <linux-mm@kvack.org>, <kamatam@amazon.com>,
 <sstabellini@kernel.org>, <konrad.wilk@oracle.com>, <roger.pau@citrix.com>,
 <axboe@kernel.dk>, <davem@davemloft.net>, <rjw@rjwysocki.net>,
 <len.brown@intel.com>, <pavel@ucw.cz>, <peterz@infradead.org>,
 <eduval@amazon.com>, <sblbir@amazon.com>, <anchalag@amazon.com>,
 <xen-devel@lists.xenproject.org>, <vkuznets@redhat.com>,
 <netdev@vger.kernel.org>, <linux-kernel@vger.kernel.org>,
 <dwmw@amazon.co.uk>, <benh@kernel.crashing.org>
Subject: [PATCH v2 08/11] x86/xen: save and restore steal clock during PM
 hibernation
Message-ID: <20200702182251.GA3606@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
References: <cover.1593665947.git.anchalag@amazon.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <cover.1593665947.git.anchalag@amazon.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Precedence: Bulk
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Save/restore steal times in syscore suspend/resume during PM
hibernation. Commit 5e25f5db6abb9 ("xen/time: do not decrease steal
time after live migration on xen") fixed Xen guest steal time handling
during live migration; a similar issue is seen during PM hibernation.

Currently, the steal time accounting code in the scheduler expects the
steal clock callback to provide a monotonically increasing value. If
the accounting code receives a smaller value than the previous one, it
uses a negative delta to calculate the steal time, resulting in
incorrectly updated idle and steal time accounting. This breaks
userspace tools that read /proc/stat, for example:

top - 08:05:35 up  2:12,  3 users,  load average: 0.00, 0.07, 0.23
Tasks:  80 total,   1 running,  79 sleeping,   0 stopped,   0 zombie
Cpu(s):  0.0%us,  0.0%sy,  0.0%ni,30100.0%id,  0.0%wa,  0.0%hi, 0.0%si,-1253874204672.0%st

This can actually happen when a Xen PVHVM guest is restored from
hibernation: such a restored guest is just a fresh domain from Xen's
perspective, so the time information in the runstate info starts over
from scratch.
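
The monotonicity requirement above can be illustrated with a small
user-space model (a sketch only; the model_* names are illustrative,
and xen_manage_runstate_time() in the actual patches plays the role of
the save/accumulate step):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Model: after a restore, the hypervisor's raw runstate counter starts
 * over from zero, so the guest must add back the steal time it had
 * accumulated before suspend to keep the steal clock monotonic.
 */
static uint64_t saved_steal_ns;	/* accumulated before suspend */

/* At suspend: remember the accumulated steal time. */
void model_save_runstate(uint64_t raw_steal_ns)
{
	saved_steal_ns = raw_steal_ns;
}

/* After resume: the raw counter restarted, so add the saved offset. */
uint64_t model_steal_clock(uint64_t raw_steal_ns)
{
	return saved_steal_ns + raw_steal_ns;
}
```

Without the saved offset, the first post-resume reading (raw value 0)
would be smaller than the last pre-suspend one, producing the negative
delta described above.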

Changelog:
v1->v2: Dropped the patches that introduced new function calls for
        saving/restoring the sched clock offset; instead reuse the
        existing ones that are already used during live migration.

Signed-off-by: Anchal Agarwal <anchalag@amazon.com>
---
 arch/x86/xen/suspend.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/arch/x86/xen/suspend.c b/arch/x86/xen/suspend.c
index e8c924e93fc5..10cd14326472 100644
--- a/arch/x86/xen/suspend.c
+++ b/arch/x86/xen/suspend.c
@@ -94,10 +94,9 @@ static int xen_syscore_suspend(void)
 	int ret;
 
 	gnttab_suspend();
-
+	xen_manage_runstate_time(-1);
 	xrfp.domid = DOMID_SELF;
 	xrfp.gpfn = __pa(HYPERVISOR_shared_info) >> PAGE_SHIFT;
-
 	ret = HYPERVISOR_memory_op(XENMEM_remove_from_physmap, &xrfp);
 	if (!ret)
 		HYPERVISOR_shared_info = &xen_dummy_shared_info;
@@ -111,7 +110,7 @@ static void xen_syscore_resume(void)
 	xen_hvm_map_shared_info();
 
 	pvclock_resume();
-
+	xen_manage_runstate_time(0);
 	gnttab_resume();
 }
 
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Jul 02 18:23:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jul 2020 18:23:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jr3rt-0005by-5Y; Thu, 02 Jul 2020 18:23:29 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PV0h=AN=amazon.com=prvs=445caddfd=anchalag@srs-us1.protection.inumbo.net>)
 id 1jr3rs-0005bi-OH
 for xen-devel@lists.xenproject.org; Thu, 02 Jul 2020 18:23:28 +0000
X-Inumbo-ID: 1ff4c03c-bc91-11ea-8887-12813bfff9fa
Received: from smtp-fw-6002.amazon.com (unknown [52.95.49.90])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1ff4c03c-bc91-11ea-8887-12813bfff9fa;
 Thu, 02 Jul 2020 18:23:28 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=amazon.com; i=@amazon.com; q=dns/txt; s=amazon201209;
 t=1593714208; x=1625250208;
 h=date:from:to:subject:message-id:references:mime-version:
 in-reply-to; bh=H8un5fMu7aydEquICqxLlc2CreWzZQt/df8398cfX/c=;
 b=EuA4j1Ay4fUzhF4NQf9kdvekUqHC2IfUmjhWyFdGQlbU2lauRr+EJ/n2
 KABRzPs83MGZNiCxxkRZPPkPJSukvqM/8Vb5yJ158icsqMR7iUEs+XSgN
 CQVB3R2qlf/4G/PBYIWAcQwa+FfuGd7am6XGebcC+LqV02Kmz/7nPf+Pj 0=;
IronPort-SDR: +nUVv9fjNo7jApSiBsxQulNLtThEWcq31zBbfQPBu6bZu2P9FKOrUT141gBiLH5vsliubVIDL4
 w1fvvS1czfgA==
X-IronPort-AV: E=Sophos;i="5.75,305,1589241600"; d="scan'208";a="39736489"
Received: from iad12-co-svc-p1-lb1-vlan3.amazon.com (HELO
 email-inbound-relay-2c-c6afef2e.us-west-2.amazon.com) ([10.43.8.6])
 by smtp-border-fw-out-6002.iad6.amazon.com with ESMTP;
 02 Jul 2020 18:23:26 +0000
Received: from EX13MTAUWA001.ant.amazon.com
 (pdx4-ws-svc-p6-lb7-vlan2.pdx.amazon.com [10.170.41.162])
 by email-inbound-relay-2c-c6afef2e.us-west-2.amazon.com (Postfix) with ESMTPS
 id 93042A2655; Thu,  2 Jul 2020 18:23:24 +0000 (UTC)
Received: from EX13D10UWA001.ant.amazon.com (10.43.160.216) by
 EX13MTAUWA001.ant.amazon.com (10.43.160.118) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Thu, 2 Jul 2020 18:23:12 +0000
Received: from EX13MTAUWA001.ant.amazon.com (10.43.160.58) by
 EX13D10UWA001.ant.amazon.com (10.43.160.216) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Thu, 2 Jul 2020 18:23:12 +0000
Received: from dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com
 (172.22.96.68) by mail-relay.amazon.com (10.43.160.118) with Microsoft SMTP
 Server id 15.0.1497.2 via Frontend Transport; Thu, 2 Jul 2020 18:23:12 +0000
Received: by dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com (Postfix,
 from userid 4335130)
 id BB32E40844; Thu,  2 Jul 2020 18:23:12 +0000 (UTC)
Date: Thu, 2 Jul 2020 18:23:12 +0000
From: Anchal Agarwal <anchalag@amazon.com>
To: <tglx@linutronix.de>, <mingo@redhat.com>, <bp@alien8.de>, <hpa@zytor.com>, 
 <x86@kernel.org>, <boris.ostrovsky@oracle.com>, <jgross@suse.com>,
 <linux-pm@vger.kernel.org>, <linux-mm@kvack.org>, <kamatam@amazon.com>,
 <sstabellini@kernel.org>, <konrad.wilk@oracle.com>, <roger.pau@citrix.com>,
 <axboe@kernel.dk>, <davem@davemloft.net>, <rjw@rjwysocki.net>,
 <len.brown@intel.com>, <pavel@ucw.cz>, <peterz@infradead.org>,
 <eduval@amazon.com>, <sblbir@amazon.com>, <anchalag@amazon.com>,
 <xen-devel@lists.xenproject.org>, <vkuznets@redhat.com>,
 <netdev@vger.kernel.org>, <linux-kernel@vger.kernel.org>,
 <dwmw@amazon.co.uk>, <benh@kernel.crashing.org>
Subject: [PATCH v2 09/11] xen: Introduce wrapper for save/restore sched clock
 offset
Message-ID: <20200702182312.GA3699@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
References: <cover.1593665947.git.anchalag@amazon.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <cover.1593665947.git.anchalag@amazon.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Precedence: Bulk
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Introduce wrappers for saving and restoring xen_sched_clock_offset, to
be used by the PM hibernation code to avoid system instability during
resume.

Signed-off-by: Anchal Agarwal <anchalag@amazon.com>
---
 arch/x86/xen/time.c    | 15 +++++++++++++--
 arch/x86/xen/xen-ops.h |  2 ++
 2 files changed, 15 insertions(+), 2 deletions(-)

diff --git a/arch/x86/xen/time.c b/arch/x86/xen/time.c
index c8897aad13cd..676950eb0cb5 100644
--- a/arch/x86/xen/time.c
+++ b/arch/x86/xen/time.c
@@ -386,12 +386,23 @@ static const struct pv_time_ops xen_time_ops __initconst = {
 static struct pvclock_vsyscall_time_info *xen_clock __read_mostly;
 static u64 xen_clock_value_saved;
 
+/* This is needed to maintain a monotonic clock value during PM hibernation */
+void xen_save_sched_clock_offset(void)
+{
+	xen_clock_value_saved = xen_clocksource_read() - xen_sched_clock_offset;
+}
+
+void xen_restore_sched_clock_offset(void)
+{
+	xen_sched_clock_offset = xen_clocksource_read() - xen_clock_value_saved;
+}
+
 void xen_save_time_memory_area(void)
 {
 	struct vcpu_register_time_memory_area t;
 	int ret;
 
-	xen_clock_value_saved = xen_clocksource_read() - xen_sched_clock_offset;
+	xen_save_sched_clock_offset();
 
 	if (!xen_clock)
 		return;
@@ -434,7 +445,7 @@ void xen_restore_time_memory_area(void)
 out:
 	/* Need pvclock_resume() before using xen_clocksource_read(). */
 	pvclock_resume();
-	xen_sched_clock_offset = xen_clocksource_read() - xen_clock_value_saved;
+	xen_restore_sched_clock_offset();
 }
 
 static void xen_setup_vsyscall_time_info(void)
diff --git a/arch/x86/xen/xen-ops.h b/arch/x86/xen/xen-ops.h
index 41e9e9120f2d..f4b78b19493b 100644
--- a/arch/x86/xen/xen-ops.h
+++ b/arch/x86/xen/xen-ops.h
@@ -70,6 +70,8 @@ void xen_save_time_memory_area(void);
 void xen_restore_time_memory_area(void);
 void xen_init_time_ops(void);
 void xen_hvm_init_time_ops(void);
+void xen_save_sched_clock_offset(void);
+void xen_restore_sched_clock_offset(void);
 
 irqreturn_t xen_debug_interrupt(int irq, void *dev_id);
 
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Jul 02 18:23:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jul 2020 18:23:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jr3s3-0005hX-FE; Thu, 02 Jul 2020 18:23:39 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PV0h=AN=amazon.com=prvs=445caddfd=anchalag@srs-us1.protection.inumbo.net>)
 id 1jr3s2-0005h0-AO
 for xen-devel@lists.xenproject.org; Thu, 02 Jul 2020 18:23:38 +0000
X-Inumbo-ID: 2504ba33-bc91-11ea-8887-12813bfff9fa
Received: from smtp-fw-33001.amazon.com (unknown [207.171.190.10])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2504ba33-bc91-11ea-8887-12813bfff9fa;
 Thu, 02 Jul 2020 18:23:37 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=amazon.com; i=@amazon.com; q=dns/txt; s=amazon201209;
 t=1593714218; x=1625250218;
 h=date:from:to:subject:message-id:references:mime-version:
 in-reply-to; bh=42TBP3y4PhPFoPiNAsOF6sq0ISwy0yvaTgprwGJN3gs=;
 b=gqArI3DQHtn1L2MkSfq/YBuf/AiCtMbE6j8vOQleOTW18+umcSBh8b1X
 fWQhO2254GD1Yx+wItnv1LzeF5PkT5/7Pz/dUbzITx+cf6WwmgfyY7uEq
 pSpHMVd4YThvuOvnqu4Z1YCdOUZ6bg6zufvGvQt8bnFf/5zW5ht3pAjbX A=;
IronPort-SDR: LmEWe+lBjje6IkAwKMPzWi2nEZ4tgqKNfXE84BMIprWb5gWec64tizT5QYDEjeSJ/XdZT9iLjF
 RRHpr+GD/3WQ==
X-IronPort-AV: E=Sophos;i="5.75,305,1589241600"; d="scan'208";a="55693384"
Received: from sea32-co-svc-lb4-vlan3.sea.corp.amazon.com (HELO
 email-inbound-relay-2a-d0be17ee.us-west-2.amazon.com) ([10.47.23.38])
 by smtp-border-fw-out-33001.sea14.amazon.com with ESMTP;
 02 Jul 2020 18:23:37 +0000
Received: from EX13MTAUWB001.ant.amazon.com
 (pdx4-ws-svc-p6-lb7-vlan3.pdx.amazon.com [10.170.41.166])
 by email-inbound-relay-2a-d0be17ee.us-west-2.amazon.com (Postfix) with ESMTPS
 id 7E718A255A; Thu,  2 Jul 2020 18:23:35 +0000 (UTC)
Received: from EX13D10UWB004.ant.amazon.com (10.43.161.121) by
 EX13MTAUWB001.ant.amazon.com (10.43.161.249) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Thu, 2 Jul 2020 18:23:28 +0000
Received: from EX13MTAUWB001.ant.amazon.com (10.43.161.207) by
 EX13D10UWB004.ant.amazon.com (10.43.161.121) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Thu, 2 Jul 2020 18:23:28 +0000
Received: from dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com
 (172.22.96.68) by mail-relay.amazon.com (10.43.161.249) with Microsoft SMTP
 Server id 15.0.1497.2 via Frontend Transport; Thu, 2 Jul 2020 18:23:28 +0000
Received: by dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com (Postfix,
 from userid 4335130)
 id 8E2C940844; Thu,  2 Jul 2020 18:23:28 +0000 (UTC)
Date: Thu, 2 Jul 2020 18:23:28 +0000
From: Anchal Agarwal <anchalag@amazon.com>
To: <tglx@linutronix.de>, <mingo@redhat.com>, <bp@alien8.de>, <hpa@zytor.com>, 
 <x86@kernel.org>, <boris.ostrovsky@oracle.com>, <jgross@suse.com>,
 <linux-pm@vger.kernel.org>, <linux-mm@kvack.org>, <kamatam@amazon.com>,
 <sstabellini@kernel.org>, <konrad.wilk@oracle.com>, <roger.pau@citrix.com>,
 <axboe@kernel.dk>, <davem@davemloft.net>, <rjw@rjwysocki.net>,
 <len.brown@intel.com>, <pavel@ucw.cz>, <peterz@infradead.org>,
 <eduval@amazon.com>, <sblbir@amazon.com>, <anchalag@amazon.com>,
 <xen-devel@lists.xenproject.org>, <vkuznets@redhat.com>,
 <netdev@vger.kernel.org>, <linux-kernel@vger.kernel.org>,
 <dwmw@amazon.co.uk>, <benh@kernel.crashing.org>
Subject: [PATCH v2 10/11] xen: Update sched clock offset to avoid system
 instability in hibernation
Message-ID: <20200702182328.GA3751@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
References: <cover.1593665947.git.anchalag@amazon.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <cover.1593665947.git.anchalag@amazon.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Precedence: Bulk
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Save/restore xen_sched_clock_offset in syscore suspend/resume during PM
hibernation. Commit 867cefb4cb1012 ("xen: Fix x86 sched_clock()
interface for xen") fixed Xen guest time handling during live
migration; a similar issue is seen during PM hibernation when the
system runs a CPU-intensive workload.

After resume, pvclock resets its value to 0; however,
xen_sched_clock_offset is never updated. Since the offset is stale,
the system no longer sees a monotonic clock value, so the scheduler
thinks that the CPU-hogging tasks need more time on the CPU, and the
system freezes during resume from hibernation.
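
The save/restore pair reduces to simple offset arithmetic; a minimal
user-space model (a sketch, not the kernel code; the model_* names are
illustrative):

```c
#include <assert.h>
#include <stdint.h>

/*
 * sched_clock() is modeled as clocksource_read() - offset. Saving
 * "read - offset" at suspend and recomputing "offset = read - saved"
 * at resume makes the clock continue from its pre-hibernation value
 * even though pvclock resets the clocksource after resume.
 */
static uint64_t offset;
static uint64_t clock_value_saved;

uint64_t model_sched_clock(uint64_t clocksource_now)
{
	return clocksource_now - offset;
}

/* At suspend: remember the current sched clock value. */
void model_save_offset(uint64_t clocksource_now)
{
	clock_value_saved = clocksource_now - offset;
}

/* At resume: pick the offset so the sched clock resumes seamlessly. */
void model_restore_offset(uint64_t clocksource_now)
{
	offset = clocksource_now - clock_value_saved;
}
```

Unsigned wrap-around makes the subtraction well defined even when the
clocksource restarts below the saved value.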

Signed-off-by: Anchal Agarwal <anchalag@amazon.com>
---
 arch/x86/xen/suspend.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/arch/x86/xen/suspend.c b/arch/x86/xen/suspend.c
index 10cd14326472..4d8b1d2390b9 100644
--- a/arch/x86/xen/suspend.c
+++ b/arch/x86/xen/suspend.c
@@ -95,6 +95,7 @@ static int xen_syscore_suspend(void)
 
 	gnttab_suspend();
 	xen_manage_runstate_time(-1);
+	xen_save_sched_clock_offset();
 	xrfp.domid = DOMID_SELF;
 	xrfp.gpfn = __pa(HYPERVISOR_shared_info) >> PAGE_SHIFT;
 	ret = HYPERVISOR_memory_op(XENMEM_remove_from_physmap, &xrfp);
@@ -110,6 +111,12 @@ static void xen_syscore_resume(void)
 	xen_hvm_map_shared_info();
 
 	pvclock_resume();
+	/*
+	 * Restore xen_sched_clock_offset during resume to maintain a
+	 * monotonic clock value.
+	 */
+	xen_restore_sched_clock_offset();
+
 	xen_manage_runstate_time(0);
 	gnttab_resume();
 }
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Jul 02 18:24:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jul 2020 18:24:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jr3sT-0005sM-Op; Thu, 02 Jul 2020 18:24:05 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PV0h=AN=amazon.com=prvs=445caddfd=anchalag@srs-us1.protection.inumbo.net>)
 id 1jr3sS-0005rq-Cv
 for xen-devel@lists.xenproject.org; Thu, 02 Jul 2020 18:24:04 +0000
X-Inumbo-ID: 34e09ed1-bc91-11ea-8887-12813bfff9fa
Received: from smtp-fw-2101.amazon.com (unknown [72.21.196.25])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 34e09ed1-bc91-11ea-8887-12813bfff9fa;
 Thu, 02 Jul 2020 18:24:03 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=amazon.com; i=@amazon.com; q=dns/txt; s=amazon201209;
 t=1593714244; x=1625250244;
 h=date:from:to:subject:message-id:references:mime-version:
 in-reply-to; bh=8xWBYHNSqyc0rRCT8ma4wuwSS0bbWLxvstPp4v8IEIQ=;
 b=dKwhQogBiG38kRTV9cIxd2hom5FJS8v1BFqS6UlOfR57ptUdWzxE/iiV
 mTxaYJTeEdY9x/aImJn/3xuLNCyKdhwBR3jGW7U/JorU0bhdpGqCosQEg
 Ncys7NAKZfTYovy7dr2XjxoPh7UQpVSQhu/var/BACXMorIbV8yq9HCm8 8=;
IronPort-SDR: sDAToUmcQVKVwVu9eMz0u6grcT4hSIwmjgpl1XIY45YPsfIMhKSq5/2gBLD4pbIvjcoFAEP58A
 Z+Jgig/wdQNw==
X-IronPort-AV: E=Sophos;i="5.75,305,1589241600"; d="scan'208";a="39714105"
Received: from iad12-co-svc-p1-lb1-vlan2.amazon.com (HELO
 email-inbound-relay-1a-715bee71.us-east-1.amazon.com) ([10.43.8.2])
 by smtp-border-fw-out-2101.iad2.amazon.com with ESMTP;
 02 Jul 2020 18:24:03 +0000
Received: from EX13MTAUWC001.ant.amazon.com
 (iad55-ws-svc-p15-lb9-vlan2.iad.amazon.com [10.40.159.162])
 by email-inbound-relay-1a-715bee71.us-east-1.amazon.com (Postfix) with ESMTPS
 id 23592A25CA; Thu,  2 Jul 2020 18:23:55 +0000 (UTC)
Received: from EX13D05UWC001.ant.amazon.com (10.43.162.82) by
 EX13MTAUWC001.ant.amazon.com (10.43.162.135) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Thu, 2 Jul 2020 18:23:37 +0000
Received: from EX13MTAUWC001.ant.amazon.com (10.43.162.135) by
 EX13D05UWC001.ant.amazon.com (10.43.162.82) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Thu, 2 Jul 2020 18:23:37 +0000
Received: from dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com
 (172.22.96.68) by mail-relay.amazon.com (10.43.162.232) with Microsoft SMTP
 Server id 15.0.1497.2 via Frontend Transport; Thu, 2 Jul 2020 18:23:37 +0000
Received: by dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com (Postfix,
 from userid 4335130)
 id 5F90C40844; Thu,  2 Jul 2020 18:23:37 +0000 (UTC)
Date: Thu, 2 Jul 2020 18:23:37 +0000
From: Anchal Agarwal <anchalag@amazon.com>
To: <tglx@linutronix.de>, <mingo@redhat.com>, <bp@alien8.de>, <hpa@zytor.com>, 
 <x86@kernel.org>, <boris.ostrovsky@oracle.com>, <jgross@suse.com>,
 <linux-pm@vger.kernel.org>, <linux-mm@kvack.org>, <kamatam@amazon.com>,
 <sstabellini@kernel.org>, <konrad.wilk@oracle.com>, <roger.pau@citrix.com>,
 <axboe@kernel.dk>, <davem@davemloft.net>, <rjw@rjwysocki.net>,
 <len.brown@intel.com>, <pavel@ucw.cz>, <peterz@infradead.org>,
 <eduval@amazon.com>, <sblbir@amazon.com>, <anchalag@amazon.com>,
 <xen-devel@lists.xenproject.org>, <vkuznets@redhat.com>,
 <netdev@vger.kernel.org>, <linux-kernel@vger.kernel.org>,
 <dwmw@amazon.co.uk>, <benh@kernel.crashing.org>
Subject: [PATCH v2 11/11] PM / hibernate: update the resume offset on
 SNAPSHOT_SET_SWAP_AREA
Message-ID: <20200702182337.GA3762@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
References: <cover.1593665947.git.anchalag@amazon.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <cover.1593665947.git.anchalag@amazon.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Precedence: Bulk
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Aleksei Besogonov <cyberax@amazon.com>

The SNAPSHOT_SET_SWAP_AREA ioctl is supposed to be used to set the
hibernation offset on a running kernel, to enable hibernating to a
swap file. However, it doesn't actually update the swsusp_resume_block
variable. As a result, hibernation fails at the last step (after all
the data has been written out), in the validation of the swap
signature in mark_swapfiles().

Before this patch, the command line processing was the only place where
swsusp_resume_block was set.
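
The fix itself is small: when swap_type_of() succeeds, also record the
device and offset in the variables that mark_swapfiles() later checks.
A stand-alone model of the fixed control flow (the stub and model_*
names are illustrative, not the kernel code):

```c
#include <assert.h>

static unsigned int model_resume_device;
static unsigned long long model_resume_block;

/* Stub standing in for swap_type_of(); pretend it found swap type 0. */
static int stub_swap_type_of(unsigned int dev, unsigned long long off)
{
	(void)dev;
	(void)off;
	return 0;
}

int model_set_swap_area(unsigned int swdev, unsigned long long offset)
{
	int swap = stub_swap_type_of(swdev, offset);

	if (swap < 0)
		return -1;	/* -ENODEV in the kernel */
	/* The fix: remember where the resume data lives. */
	model_resume_device = swdev;
	model_resume_block = offset;
	return 0;
}
```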

[Anchal Agarwal: Changelog: resolved a patch conflict, as the code was
split out into snapshot_set_swap_area()]

Signed-off-by: Aleksei Besogonov <cyberax@amazon.com>
Signed-off-by: Munehisa Kamata <kamatam@amazon.com>
Signed-off-by: Anchal Agarwal <anchalag@amazon.com>
---
 kernel/power/user.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/kernel/power/user.c b/kernel/power/user.c
index d5eedc2baa2a..e1209cefc103 100644
--- a/kernel/power/user.c
+++ b/kernel/power/user.c
@@ -242,8 +242,12 @@ static int snapshot_set_swap_area(struct snapshot_data *data,
 		return -EINVAL;
 	}
 	data->swap = swap_type_of(swdev, offset, &bdev);
-	if (data->swap < 0)
+	if (data->swap < 0) {
 		return -ENODEV;
+	} else {
+		swsusp_resume_device = swdev;
+		swsusp_resume_block = offset;
+	}
 
 	data->bd_inode = bdev->bd_inode;
 	bdput(bdev);
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Jul 02 18:26:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jul 2020 18:26:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jr3uM-0006D3-5J; Thu, 02 Jul 2020 18:26:02 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PV0h=AN=amazon.com=prvs=445caddfd=anchalag@srs-us1.protection.inumbo.net>)
 id 1jr3uK-0006Ct-Cd
 for xen-devel@lists.xenproject.org; Thu, 02 Jul 2020 18:26:00 +0000
X-Inumbo-ID: 79f99b48-bc91-11ea-b7bb-bc764e2007e4
Received: from smtp-fw-33001.amazon.com (unknown [207.171.190.10])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 79f99b48-bc91-11ea-b7bb-bc764e2007e4;
 Thu, 02 Jul 2020 18:25:59 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=amazon.com; i=@amazon.com; q=dns/txt; s=amazon201209;
 t=1593714360; x=1625250360;
 h=date:from:to:subject:message-id:references:mime-version:
 in-reply-to; bh=jLJQ9NuVygNNRcAPap0ICuY1A6zLkRPc2aHxxOZMLNc=;
 b=Bn04Gr3BvCal+J+zvGCbb48FgninblZOYSr/Qt1aYgQ5Nx5T/oQTfegb
 bK8rj3X+OpNHAPDrR1p7VC0yAegLkvWTtoEO5LIoWb+SMrs0Mxfw0oB5n
 tC5WGmyGyWE/Xk8fQhhzMauX7TQaiTlzzB8NRtUk/HPDwbOxjovpOsZQ4 w=;
IronPort-SDR: 9K0UaovTj6pXWTvQUqHP8tf1CyImeNnAp5dgeU8Ek2iag7xWzdGj816eXt/y4qO4q0g3EatLKK
 YauVToN5kJOg==
X-IronPort-AV: E=Sophos;i="5.75,305,1589241600"; d="scan'208";a="55693820"
Received: from sea32-co-svc-lb4-vlan3.sea.corp.amazon.com (HELO
 email-inbound-relay-1a-16acd5e0.us-east-1.amazon.com) ([10.47.23.38])
 by smtp-border-fw-out-33001.sea14.amazon.com with ESMTP;
 02 Jul 2020 18:25:57 +0000
Received: from EX13MTAUEB002.ant.amazon.com
 (iad55-ws-svc-p15-lb9-vlan2.iad.amazon.com [10.40.159.162])
 by email-inbound-relay-1a-16acd5e0.us-east-1.amazon.com (Postfix) with ESMTPS
 id 2B3EAA1FEE; Thu,  2 Jul 2020 18:25:49 +0000 (UTC)
Received: from EX13D08UEB002.ant.amazon.com (10.43.60.107) by
 EX13MTAUEB002.ant.amazon.com (10.43.60.12) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Thu, 2 Jul 2020 18:25:30 +0000
Received: from EX13MTAUEB002.ant.amazon.com (10.43.60.12) by
 EX13D08UEB002.ant.amazon.com (10.43.60.107) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Thu, 2 Jul 2020 18:25:30 +0000
Received: from dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com
 (172.22.96.68) by mail-relay.amazon.com (10.43.60.234) with Microsoft SMTP
 Server id 15.0.1497.2 via Frontend Transport; Thu, 2 Jul 2020 18:25:29 +0000
Received: by dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com (Postfix,
 from userid 4335130)
 id E9DAC40844; Thu,  2 Jul 2020 18:25:29 +0000 (UTC)
Date: Thu, 2 Jul 2020 18:25:29 +0000
From: Anchal Agarwal <anchalag@amazon.com>
To: <tglx@linutronix.de>, <mingo@redhat.com>, <bp@alien8.de>, <hpa@zytor.com>, 
 <x86@kernel.org>, <boris.ostrovsky@oracle.com>, <jgross@suse.com>,
 <linux-pm@vger.kernel.org>, <linux-mm@kvack.org>, <kamatam@amazon.com>,
 <sstabellini@kernel.org>, <konrad.wilk@oracle.com>, <roger.pau@citrix.com>,
 <axboe@kernel.dk>, <davem@davemloft.net>, <rjw@rjwysocki.net>,
 <len.brown@intel.com>, <pavel@ucw.cz>, <peterz@infradead.org>,
 <eduval@amazon.com>, <sblbir@amazon.com>, <anchalag@amazon.com>,
 <xen-devel@lists.xenproject.org>, <vkuznets@redhat.com>,
 <netdev@vger.kernel.org>, <linux-kernel@vger.kernel.org>,
 <dwmw@amazon.co.uk>, <benh@kernel.crashing.org>
Subject: [PATCH v2 02/11] xenbus: add freeze/thaw/restore callbacks support
Message-ID: <20200702182529.GA3908@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
References: <cover.1593665947.git.anchalag@amazon.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <cover.1593665947.git.anchalag@amazon.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Precedence: Bulk
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Munehisa Kamata <kamatam@amazon.com>

Since commit b3e96c0c7562 ("xen: use freeze/restore/thaw PM events for
suspend/resume/chkpt"), xenbus uses PMSG_FREEZE, PMSG_THAW and
PMSG_RESTORE events for Xen suspend. However, they're actually assigned
to xenbus_dev_suspend(), xenbus_dev_cancel() and xenbus_dev_resume()
respectively, and only suspend and resume callbacks are supported at
the driver level. To support PM suspend and PM hibernation, modify the
bus-level PM callbacks to invoke not only the device driver's
suspend/resume but also freeze/thaw/restore.

Note that we'll use the freeze/restore callbacks even for PM suspend,
whereas suspend/resume callbacks are normally used in that case, because
the existing xenbus device drivers already have suspend/resume callbacks
specifically designed for Xen suspend. This allows the device drivers to
keep their existing callbacks without modification.

[Anchal Agarwal: Changelog]:
RFC v1->v2: Refactored the callbacks code
    v1->v2: Use dev_warn instead of pr_warn, naming/initialization
            conventions
Signed-off-by: Agarwal Anchal <anchalag@amazon.com>
Signed-off-by: Munehisa Kamata <kamatam@amazon.com>
---
 drivers/xen/xenbus/xenbus_probe.c | 96 ++++++++++++++++++++++++++-----
 include/xen/xenbus.h              |  3 +
 2 files changed, 84 insertions(+), 15 deletions(-)

diff --git a/drivers/xen/xenbus/xenbus_probe.c b/drivers/xen/xenbus/xenbus_probe.c
index 38725d97d909..715919aacd28 100644
--- a/drivers/xen/xenbus/xenbus_probe.c
+++ b/drivers/xen/xenbus/xenbus_probe.c
@@ -50,6 +50,7 @@
 #include <linux/io.h>
 #include <linux/slab.h>
 #include <linux/module.h>
+#include <linux/suspend.h>
 
 #include <asm/page.h>
 #include <asm/xen/hypervisor.h>
@@ -599,16 +600,33 @@ int xenbus_dev_suspend(struct device *dev)
 	struct xenbus_driver *drv;
 	struct xenbus_device *xdev
 		= container_of(dev, struct xenbus_device, dev);
+	bool xen_suspend = xen_is_xen_suspend();
 
 	DPRINTK("%s", xdev->nodename);
 
 	if (dev->driver == NULL)
 		return 0;
 	drv = to_xenbus_driver(dev->driver);
-	if (drv->suspend)
-		err = drv->suspend(xdev);
-	if (err)
-		dev_warn(dev, "suspend failed: %i\n", err);
+	if (xen_suspend) {
+		if (drv->suspend)
+			err = drv->suspend(xdev);
+	} else {
+		if (drv->freeze) {
+			err = drv->freeze(xdev);
+			if (!err) {
+				free_otherend_watch(xdev);
+				free_otherend_details(xdev);
+				return 0;
+			}
+		}
+	}
+
+	if (err) {
+		dev_warn(&xdev->dev, "%s %s failed: %d\n", xen_suspend ?
+				"suspend" : "freeze", xdev->nodename, err);
+		return err;
+	}
+
 	return 0;
 }
 EXPORT_SYMBOL_GPL(xenbus_dev_suspend);
@@ -619,6 +637,7 @@ int xenbus_dev_resume(struct device *dev)
 	struct xenbus_driver *drv;
 	struct xenbus_device *xdev
 		= container_of(dev, struct xenbus_device, dev);
+	bool xen_suspend = xen_is_xen_suspend();
 
 	DPRINTK("%s", xdev->nodename);
 
@@ -627,23 +646,34 @@ int xenbus_dev_resume(struct device *dev)
 	drv = to_xenbus_driver(dev->driver);
 	err = talk_to_otherend(xdev);
 	if (err) {
-		dev_warn(dev, "resume (talk_to_otherend) failed: %i\n", err);
+		dev_warn(&xdev->dev, "%s (talk_to_otherend) %s failed: %d\n",
+				xen_suspend ? "resume" : "restore",
+				xdev->nodename, err);
 		return err;
 	}
 
-	xdev->state = XenbusStateInitialising;
+	if (xen_suspend) {
+		xdev->state = XenbusStateInitialising;
+		if (drv->resume)
+			err = drv->resume(xdev);
+	} else {
+		if (drv->restore)
+			err = drv->restore(xdev);
+	}
 
-	if (drv->resume) {
-		err = drv->resume(xdev);
-		if (err) {
-			dev_warn(dev, "resume failed: %i\n", err);
-			return err;
-		}
+	if (err) {
+		dev_warn(&xdev->dev, "%s %s failed: %d\n",
+				xen_suspend ? "resume" : "restore",
+				xdev->nodename, err);
+		return err;
 	}
 
 	err = watch_otherend(xdev);
 	if (err) {
-		dev_warn(dev, "resume (watch_otherend) failed: %d\n", err);
+		dev_warn(&xdev->dev, "%s (watch_otherend) %s failed: %d.\n",
+				xen_suspend ? "resume" : "restore",
+				xdev->nodename, err);
+
 		return err;
 	}
 
@@ -653,8 +683,44 @@ EXPORT_SYMBOL_GPL(xenbus_dev_resume);
 
 int xenbus_dev_cancel(struct device *dev)
 {
-	/* Do nothing */
-	DPRINTK("cancel");
+	int err;
+	struct xenbus_driver *drv;
+	struct xenbus_device *xendev = to_xenbus_device(dev);
+	bool xen_suspend = xen_is_xen_suspend();
+
+	if (xen_suspend) {
+		/* Do nothing */
+		DPRINTK("cancel");
+		return 0;
+	}
+
+	DPRINTK("%s", xendev->nodename);
+
+	if (dev->driver == NULL)
+		return 0;
+	drv = to_xenbus_driver(dev->driver);
+	err = talk_to_otherend(xendev);
+	if (err) {
+		dev_warn(&xendev->dev, "thaw (talk_to_otherend) %s failed: %d.\n",
+			xendev->nodename, err);
+		return err;
+	}
+
+	if (drv->thaw) {
+		err = drv->thaw(xendev);
+		if (err) {
+			dev_warn(&xendev->dev, "thaw %s failed: %d\n", xendev->nodename, err);
+			return err;
+		}
+	}
+
+	err = watch_otherend(xendev);
+	if (err) {
+		dev_warn(&xendev->dev, "thaw (watch_otherend) %s failed: %d.\n",
+			xendev->nodename, err);
+		return err;
+	}
+
 	return 0;
 }
 EXPORT_SYMBOL_GPL(xenbus_dev_cancel);
diff --git a/include/xen/xenbus.h b/include/xen/xenbus.h
index 5a8315e6d8a6..8da964763255 100644
--- a/include/xen/xenbus.h
+++ b/include/xen/xenbus.h
@@ -104,6 +104,9 @@ struct xenbus_driver {
 	int (*remove)(struct xenbus_device *dev);
 	int (*suspend)(struct xenbus_device *dev);
 	int (*resume)(struct xenbus_device *dev);
+	int (*freeze)(struct xenbus_device *dev);
+	int (*thaw)(struct xenbus_device *dev);
+	int (*restore)(struct xenbus_device *dev);
 	int (*uevent)(struct xenbus_device *, struct kobj_uevent_env *);
 	struct device_driver driver;
 	int (*read_otherend_details)(struct xenbus_device *dev);
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Jul 02 18:38:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jul 2020 18:38:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jr46E-0007DG-9D; Thu, 02 Jul 2020 18:38:18 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=rv/I=AN=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1jr46C-0007DB-5Q
 for xen-devel@lists.xenproject.org; Thu, 02 Jul 2020 18:38:17 +0000
X-Inumbo-ID: 301a96c4-bc93-11ea-bb8b-bc764e2007e4
Received: from mo4-p00-ob.smtp.rzone.de (unknown [81.169.146.163])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 301a96c4-bc93-11ea-bb8b-bc764e2007e4;
 Thu, 02 Jul 2020 18:38:14 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1593715094;
 s=strato-dkim-0002; d=aepfle.de;
 h=In-Reply-To:References:Message-ID:Subject:Cc:To:From:Date:
 X-RZG-CLASS-ID:X-RZG-AUTH:From:Subject:Sender;
 bh=kLvpg1soij3DNc2umoAvjn9s26IXsI1NG+9wQq0SEnU=;
 b=TZHVSU8ujsYqSaDp5YUibxI0yo5qWe2SwQh6m6PNIH8KFXesE9xEJ4Cf53v30EMdQZ
 fbMlnucF6GACbKjBNqVq+AqKJIEYNl3tdOSr6Hgl3lNDyaYkKbfHQHSL+z9BY9Ee5BQA
 xpgZ1QeryfbE8sK+uVQ01pA+WE3uaS5YyUic3vFk1bBwhApIfLoMIPohypd9yvM7AkHS
 ErOqJ3albVvFBbDKOHbCMqyT19q2Jl57mFJ0D0xRkk2hB+UbkqYoKvDvOeJsUEICeVJ+
 HYsq+RuEmHUvjXm5GYVXKqwApCC/E3NksYw7r9GQ9WgjvIvzDiQMuxraUQKSN6ocDEa9
 tjrw==
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QXkBR9MXjAuzBW/OdlBZQ4AHSS329Fjw=="
X-RZG-CLASS-ID: mo00
Received: from aepfle.de by smtp.strato.de (RZmta 46.10.5 DYNA|AUTH)
 with ESMTPSA id m032cfw62IcAX2S
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Thu, 2 Jul 2020 20:38:10 +0200 (CEST)
Date: Thu, 2 Jul 2020 20:38:06 +0200
From: Olaf Hering <olaf@aepfle.de>
To: Michael Young <m.a.young@durham.ac.uk>
Subject: Re: Build problems in kdd.c with xen-4.14.0-rc4
Message-ID: <20200702183806.GA28738@aepfle.de>
References: <alpine.LFD.2.22.394.2006302259370.2894@austen3.home>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature"; boundary="TB36FDmn/VVEgNH/"
Content-Disposition: inline
In-Reply-To: <alpine.LFD.2.22.394.2006302259370.2894@austen3.home>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, Tim Deegan <tim@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>


--TB36FDmn/VVEgNH/
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable

On Tue, Jun 30, Michael Young wrote:

> I get the following errors when trying to build xen-4.14.0-rc4

This happens to work for me.

Olaf

---
 tools/debugger/kdd/kdd.c | 8 ++++----
 1 file changed, 3 insertions(+), 3 deletions(-)

--- a/tools/debugger/kdd/kdd.c
+++ b/tools/debugger/kdd/kdd.c
@@ -742,25 +742,25 @@ static void kdd_tx(kdd_state *s)
     int i;
 
     /* Fix up the checksum before we send */
     for (i = 0; i < s->txp.h.len; i++)
         sum += s->txp.payload[i];
     s->txp.h.sum = sum;
 
     kdd_log_pkt(s, "TX", &s->txp);
 
     len = s->txp.h.len + sizeof (kdd_hdr);
     if (s->txp.h.dir == KDD_DIR_PKT)
         /* Append the mysterious 0xaa byte to each packet */
-        s->txb[len++] = 0xaa;
+        s->txp.payload[len++] = 0xaa;
 
     (void) blocking_write(s->fd, s->txb, len);
 }
 
 
 /* Send an acknowledgement to the client */
 static void kdd_send_ack(kdd_state *s, uint32_t id, uint16_t type)
 {
     s->txp.h.dir = KDD_DIR_ACK;
     s->txp.h.type = type;
     s->txp.h.len = 0;
     s->txp.h.id = id;
@@ -775,25 +775,25 @@ static void kdd_send_cmd(kdd_state *s, uint32_t subtype, size_t extra)
     s->txp.h.type = KDD_PKT_CMD;
     s->txp.h.len = sizeof (kdd_cmd) + extra;
     s->txp.h.id = (s->next_id ^= 1);
     s->txp.h.sum = 0;
     s->txp.cmd.subtype = subtype;
     kdd_tx(s);
 }
 
 /* Cause the client to print a string */
 static void kdd_send_string(kdd_state *s, char *fmt, ...)
 {
     uint32_t len = 0xffff - sizeof (kdd_msg);
-    char *buf = (char *) s->txb + sizeof (kdd_hdr) + sizeof (kdd_msg);
+    char *buf = (char *) &s->txp + sizeof (kdd_hdr) + sizeof (kdd_msg);
     va_list ap;
 
     va_start(ap, fmt);
     len = vsnprintf(buf, len, fmt, ap);
     va_end(ap);
 
     s->txp.h.dir = KDD_DIR_PKT;
     s->txp.h.type = KDD_PKT_MSG;
     s->txp.h.len = sizeof (kdd_msg) + len;
     s->txp.h.id = (s->next_id ^= 1);
     s->txp.h.sum = 0;
     s->txp.msg.subtype = KDD_MSG_PRINT;
@@ -807,25 +807,25 @@ static void kdd_break(kdd_state *s)
 {
     uint16_t ilen;
     KDD_LOG(s, "Break\n");
 
     if (s->running)
         kdd_halt(s->guest);
     s->running = 0;
 
     {
         unsigned int i;
         /* XXX debug pattern */
         for (i = 0; i < 0x100 ; i++)
-            s->txb[sizeof (kdd_hdr) + i] = i;
+            s->txp.payload[sizeof (kdd_hdr) + i] = i;
     }
 
     /* Send a state-change message to the client so it knows we've stopped */
     s->txp.h.dir = KDD_DIR_PKT;
     s->txp.h.type = KDD_PKT_STC;
     s->txp.h.len = sizeof (kdd_stc);
     s->txp.h.id = (s->next_id ^= 1);
     s->txp.stc.subtype = KDD_STC_STOP;
     s->txp.stc.stop.cpu = s->cpuid;
     s->txp.stc.stop.ncpus = kdd_count_cpus(s->guest);
     s->txp.stc.stop.kthread = 0; /* Let the debugger figure it out */
     s->txp.stc.stop.status = KDD_STC_STATUS_BREAKPOINT;

--TB36FDmn/VVEgNH/
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEE97o7Um30LT3B+5b/86SN7mm1DoAFAl7+KY4ACgkQ86SN7mm1
DoBuFA/+IZZ+sENG0nz+pSvf5pYjGZyLMHgqqTGKOn9VxoX2HeJZg8Mc2A2NMDYE
ImX/uXMXbPKZI02eFMB3hFCQORjW7BztEEVzbVQZ9ZDw5hG/N+l82tMUyJM/HfW3
YJrRRx6RMCDecwLAXVdhI0+0xgpET7mLXCLlLiFMByy2pKE0Dpqjka7b3I+BLNTk
3/juc+OnCH+SikYa4yx2KY88Bd6k0guiEjSEvH63BhkuTVkXfh+VHo3L41K5fQ2F
pbp37Gah6XUUIZ4/s+h40sZbDaBjmzsmF6kI72mrWT/6D2xlv8Kbk9A3p3EJtaCG
RBl90byqh6a5+7kdcLAT60bQW1lwObimQd1QxGLkvRovxAVGXoE2gSgJl8U/v9eK
jhLQB/tlwHEdvliMvtjOdj6mB82qDxQ60H9MlIvs2fpQrbNNrG6uYbGJ9HVHriOc
RYNOdL/f2s/KKAVwkk/aqPrigzayeCTopR4o97/b8gcm7CaQyD/gQ2w1CBIqwJ/+
GApwnQRN1a4VPSx0X1/jKUU3jYYYyKa04DCuQcnPvPXk+lCcZ+YcZA211iXSrmfg
AkbDHu4s0Pq1gTZ3asYLKP4tEQHl+nrT4z+xd6GBq45/MbzCcqukBj5Ru/pBVFkV
i07b5hmk/4Gd1zjLOYbX/deldyoHykw5w2p0KvBdhwiro2kZTn4=
=onR7
-----END PGP SIGNATURE-----

--TB36FDmn/VVEgNH/--


From xen-devel-bounces@lists.xenproject.org Thu Jul 02 19:01:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jul 2020 19:01:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jr4SM-0001Ad-Bx; Thu, 02 Jul 2020 19:01:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CtVa=AN=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jr4SK-0001AY-LS
 for xen-devel@lists.xenproject.org; Thu, 02 Jul 2020 19:01:08 +0000
X-Inumbo-ID: 61e33776-bc96-11ea-bca7-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 61e33776-bc96-11ea-bca7-bc764e2007e4;
 Thu, 02 Jul 2020 19:01:06 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=a6SthLSceR/mVZYQtSDWieyKhRwYRujgrI2OR57wAHc=; b=vOswhNFwky2an4hxdBj2gkmky
 Q+YO/X/J1I2ZeLgVVaoDM+1rxd60V0mrRIr3gw5AOwKjmV+0UJTo/rqbpnSuXQ5LvLo56GuIxjQHo
 svTMWHmp0G0cG5BU1welRWpkoQ1OpsH7uL1dlbkQIuSeUY7xO25/OAhDzunSHvpPXyArY=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jr4SH-0000R0-M9; Thu, 02 Jul 2020 19:01:05 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jr4SH-0001W5-Dq; Thu, 02 Jul 2020 19:01:05 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jr4SH-0006Ne-Cs; Thu, 02 Jul 2020 19:01:05 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151518-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 151518: regressions - FAIL
X-Osstest-Failures: qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-amd64-libvirt-vhd:debian-di-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-libvirt-xsm:guest-start:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qcow2:debian-di-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:redhat-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-libvirt-pair:guest-start/debian:fail:regression
 qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-freebsd10-i386:guest-start:fail:regression
 qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:redhat-install:fail:regression
 qemu-mainline:test-amd64-amd64-libvirt-xsm:guest-start:fail:regression
 qemu-mainline:test-amd64-amd64-libvirt:guest-start:fail:regression
 qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-start:fail:regression
 qemu-mainline:test-amd64-i386-libvirt:guest-start:fail:regression
 qemu-mainline:test-amd64-amd64-libvirt-pair:guest-start/debian:fail:regression
 qemu-mainline:test-armhf-armhf-xl-vhd:debian-di-install:fail:regression
 qemu-mainline:test-arm64-arm64-libvirt-xsm:guest-start:fail:regression
 qemu-mainline:test-armhf-armhf-libvirt-raw:debian-di-install:fail:regression
 qemu-mainline:test-armhf-armhf-libvirt:guest-start:fail:regression
 qemu-mainline:test-armhf-armhf-xl-rtds:guest-stop:fail:heisenbug
 qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: qemuu=fc1bff958998910ec8d25db86cd2f53ff125f7ab
X-Osstest-Versions-That: qemuu=9e3903136d9acde2fb2dd9e967ba928050a6cb4a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 02 Jul 2020 19:01:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151518 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151518/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-ovmf-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-win7-amd64 10 windows-install  fail REGR. vs. 151065
 test-amd64-amd64-libvirt-vhd 10 debian-di-install        fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-win7-amd64 10 windows-install   fail REGR. vs. 151065
 test-amd64-amd64-qemuu-nested-intel 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-libvirt-xsm  12 guest-start              fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-ovmf-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-ws16-amd64 10 windows-install  fail REGR. vs. 151065
 test-amd64-amd64-qemuu-nested-amd 10 debian-hvm-install  fail REGR. vs. 151065
 test-amd64-amd64-xl-qcow2    10 debian-di-install        fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-qemuu-rhel6hvm-amd 10 redhat-install     fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-ws16-amd64 10 windows-install   fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-debianhvm-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-libvirt-pair 21 guest-start/debian       fail REGR. vs. 151065
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-freebsd10-i386 11 guest-start            fail REGR. vs. 151065
 test-amd64-i386-qemuu-rhel6hvm-intel 10 redhat-install   fail REGR. vs. 151065
 test-amd64-amd64-libvirt-xsm 12 guest-start              fail REGR. vs. 151065
 test-amd64-amd64-libvirt     12 guest-start              fail REGR. vs. 151065
 test-amd64-i386-freebsd10-amd64 11 guest-start           fail REGR. vs. 151065
 test-amd64-i386-libvirt      12 guest-start              fail REGR. vs. 151065
 test-amd64-amd64-libvirt-pair 21 guest-start/debian      fail REGR. vs. 151065
 test-armhf-armhf-xl-vhd      10 debian-di-install        fail REGR. vs. 151065
 test-arm64-arm64-libvirt-xsm 12 guest-start              fail REGR. vs. 151065
 test-armhf-armhf-libvirt-raw 10 debian-di-install        fail REGR. vs. 151065
 test-armhf-armhf-libvirt     12 guest-start              fail REGR. vs. 151065

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl-rtds     15 guest-stop                 fail pass in 151500

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                fc1bff958998910ec8d25db86cd2f53ff125f7ab
baseline version:
 qemuu                9e3903136d9acde2fb2dd9e967ba928050a6cb4a

Last test of basis   151065  2020-06-12 22:27:51 Z   19 days
Failing since        151101  2020-06-14 08:32:51 Z   18 days   20 attempts
Testing same since   151471  2020-06-30 05:19:07 Z    2 days    4 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ahmed Karaman <ahmedkhaledkaraman@gmail.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Bulekov <alxndr@bu.edu>
  Alexey Krasikov <alex-krasikov@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Allan Peramaki <aperamak@pp1.inet.fi>
  Andrew Jones <drjones@redhat.com>
  Ani Sinha <ani.sinha@nutanix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Ard Biesheuvel <ardb@kernel.org>
  Artyom Tarasenko <atar4qemu@gmail.com>
  Aurelien Jarno <aurelien@aurel32.net>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bin Meng <bin.meng@windriver.com>
  Cathy Zhang <cathy.zhang@intel.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Claudio Fontana <cfontana@suse.de>
  Cleber Rosa <crosa@redhat.com>
  Colin Xu <colin.xu@intel.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniele Buono <dbuono@linux.vnet.ibm.com>
  David Carlier <devnexen@gmail.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Derek Su <dereksu@qnap.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Emilio G. Cota <cota@braap.org>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Evgeny Yakovlev <eyakovlev@virtuozzo.com>
  fangying <fangying1@huawei.com>
  Farhan Ali <alifm@linux.ibm.com>
  Finn Thain <fthain@telegraphics.com.au>
  Geoffrey McRae <geoff@hostfission.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Greg Kurz <groug@kaod.org>
  Guenter Roeck <linux@roeck-us.net>
  Gustavo Romero <gromero@linux.ibm.com>
  Helge Deller <deller@gmx.de>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Ian Jiang <ianjiang.ict@gmail.com>
  Igor Mammedov <imammedo@redhat.com>
  Janne Grunau <j@jannau.net>
  Jason Wang <jasowang@redhat.com>
  Jay Zhou <jianjay.zhou@huawei.com>
  Jean-Christophe Dubois <jcd@tribudubois.net>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jingqi Liu <jingqi.liu@intel.com>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Joseph Myers <joseph@codesourcery.com>
  Julio Faracco <jcfaracco@gmail.com>
  Kevin Wolf <kwolf@redhat.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Leonid Bloch <lb.workbox@gmail.com>
  Leonid Bloch <lbloch@janustech.com>
  Li Qiang <liq3ea@gmail.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Like Xu <like.xu@linux.intel.com>
  Lingfeng Yang <lfy@google.com>
  Liran Alon <liran.alon@oracle.com>
  Lukas Straub <lukasstraub2@web.de>
  Maciej S. Szmigiero <maciej.szmigiero@oracle.com>
  Magnus Damm <magnus.damm@gmail.com>
  Mao Zhongyi <maozhongyi@cmss.chinamobile.com>
  Marcelo Tosatti <mtosatti@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Masahiro Yamada <masahiroy@kernel.org>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Michael S. Tsirkin <mst@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Raphael Norwitz <raphael.norwitz@nutanix.com>
  Richard Henderson <richard.henderson@linaro.org>
  Richard W.M. Jones <rjones@redhat.com>
  Robert Foley <robert.foley@linaro.org>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Roman Kagan <rkagan@virtuozzo.com>
  Roman Kagan <rvkagan@yandex-team.ru>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Sebastian Rasmussen <sebras@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Tao Xu <tao3.xu@intel.com>
  Thomas Huth <thuth@redhat.com>
  Tong Ho <tong.ho@xilinx.com>
  Volker Rümelin <vr_qemu@t-online.de>
  WangBowen <bowen.wang@intel.com>
  Wei Huang <wei.huang2@amd.com>
  Wei Wang <wei.w.wang@intel.com>
  Ying Fang <fangying1@huawei.com>
  Yoshinori Sato <ysato@users.sourceforge.jp>
  Yuri Benditovich <yuri.benditovich@daynix.com>
  Zhang Chen <chen.zhang@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 14425 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Jul 02 19:41:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jul 2020 19:41:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jr55J-0004R0-Gj; Thu, 02 Jul 2020 19:41:25 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CtVa=AN=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jr55H-0004QH-En
 for xen-devel@lists.xenproject.org; Thu, 02 Jul 2020 19:41:23 +0000
X-Inumbo-ID: 01b0a4e6-bc9c-11ea-88a2-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 01b0a4e6-bc9c-11ea-88a2-12813bfff9fa;
 Thu, 02 Jul 2020 19:41:21 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=5aF90RyIGZCSTCNX/ltVo3zjsa6gH1DiSNDfKO6WF00=; b=Hp518mBfkIFb7CWFH2udzBaUX
 oUvb37YEk/0ow1EIvlWJlgHamQXMjiWz1QhaxZBF2Qo9vVK7mlzhoRqQ12rBeDIMWJsUn3lqw2d0g
 gZ7G4s/UeA+JsCnfGy2XCb8bzXAiFdznggOWjSVDvmtWsIpcXEQ1BmX3bWTYVG7vqSZEs=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jr55F-00018G-C2; Thu, 02 Jul 2020 19:41:21 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jr55E-0005Gr-VY; Thu, 02 Jul 2020 19:41:21 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jr55E-0006Ve-UM; Thu, 02 Jul 2020 19:41:20 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151532-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 151532: all pass - PUSHED
X-Osstest-Versions-This: ovmf=c8edb70945099fd35a0997d3f3db105efc144e13
X-Osstest-Versions-That: ovmf=00217f1919270007d7a911f89b32e39b9dcaa907
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 02 Jul 2020 19:41:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151532 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151532/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 c8edb70945099fd35a0997d3f3db105efc144e13
baseline version:
 ovmf                 00217f1919270007d7a911f89b32e39b9dcaa907

Last test of basis   151465  2020-06-30 01:43:31 Z    2 days
Testing same since   151532  2020-07-02 07:45:27 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Irene Park <ipark@nvidia.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   00217f1919..c8edb70945  c8edb70945099fd35a0997d3f3db105efc144e13 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Thu Jul 02 20:29:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jul 2020 20:29:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jr5pW-0007sU-2Y; Thu, 02 Jul 2020 20:29:10 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=c/Sk=AN=cert.pl=michall@srs-us1.protection.inumbo.net>)
 id 1jr5pV-0007sP-1I
 for xen-devel@lists.xenproject.org; Thu, 02 Jul 2020 20:29:09 +0000
X-Inumbo-ID: ad2751c0-bca2-11ea-88b5-12813bfff9fa
Received: from bagnar.nask.net.pl (unknown [195.187.242.196])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ad2751c0-bca2-11ea-88b5-12813bfff9fa;
 Thu, 02 Jul 2020 20:29:07 +0000 (UTC)
Received: from bagnar.nask.net.pl (unknown [172.16.9.10])
 by bagnar.nask.net.pl (Postfix) with ESMTP id E6FD1A37DD;
 Thu,  2 Jul 2020 22:29:05 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by bagnar.nask.net.pl (Postfix) with ESMTP id D13B2A3664;
 Thu,  2 Jul 2020 22:29:04 +0200 (CEST)
Received: from bagnar.nask.net.pl ([127.0.0.1])
 by localhost (bagnar.nask.net.pl [127.0.0.1]) (amavisd-new, port 10032)
 with ESMTP id WcG16VrIrpmI; Thu,  2 Jul 2020 22:29:04 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by bagnar.nask.net.pl (Postfix) with ESMTP id 396F2A37DD;
 Thu,  2 Jul 2020 22:29:04 +0200 (CEST)
X-Virus-Scanned: amavisd-new at bagnar.nask.net.pl
Received: from bagnar.nask.net.pl ([127.0.0.1])
 by localhost (bagnar.nask.net.pl [127.0.0.1]) (amavisd-new, port 10026)
 with ESMTP id ZXkfElhBcH6q; Thu,  2 Jul 2020 22:29:04 +0200 (CEST)
Received: from belindir.nask.net.pl (belindir-ext.nask.net.pl
 [195.187.242.210])
 by bagnar.nask.net.pl (Postfix) with ESMTP id 08F30A3664;
 Thu,  2 Jul 2020 22:29:04 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by belindir.nask.net.pl (Postfix) with ESMTP id E297E22E3F;
 Thu,  2 Jul 2020 22:28:33 +0200 (CEST)
Received: from belindir.nask.net.pl ([127.0.0.1])
 by localhost (belindir.nask.net.pl [127.0.0.1]) (amavisd-new, port 10032)
 with ESMTP id iVaWd6boHJqx; Thu,  2 Jul 2020 22:28:28 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by belindir.nask.net.pl (Postfix) with ESMTP id 5DA6422E40;
 Thu,  2 Jul 2020 22:28:28 +0200 (CEST)
X-Virus-Scanned: amavisd-new at belindir.nask.net.pl
Received: from belindir.nask.net.pl ([127.0.0.1])
 by localhost (belindir.nask.net.pl [127.0.0.1]) (amavisd-new, port 10026)
 with ESMTP id mx456RbkkZ2G; Thu,  2 Jul 2020 22:28:28 +0200 (CEST)
Received: from belindir.nask.net.pl (belindir.nask.net.pl [172.16.10.10])
 by belindir.nask.net.pl (Postfix) with ESMTP id 3236022E3F;
 Thu,  2 Jul 2020 22:28:28 +0200 (CEST)
Date: Thu, 2 Jul 2020 22:28:28 +0200 (CEST)
From: =?utf-8?Q?Micha=C5=82_Leszczy=C5=84ski?= <michal.leszczynski@cert.pl>
To: Julien Grall <julien@xen.org>
Message-ID: <187614050.18497785.1593721708078.JavaMail.zimbra@cert.pl>
In-Reply-To: <dadeeedd-a9e1-d5f4-4754-8da3f065fd44@xen.org>
References: <cover.1593519420.git.michal.leszczynski@cert.pl>
 <3fa0c3e7-9243-b1bb-d6ad-a3bd21437782@xen.org>
 <0e02a9b5-ba7a-43a2-3369-a4410f216ddb@suse.com>
 <9a3f4d58-e5ad-c7a1-6c5f-42aa92101ca1@xen.org>
 <d0165fc3-fb05-2e49-eff3-e45a674b00e1@suse.com>
 <7f915146-6566-e5a7-14d2-cb2319838562@xen.org>
 <7ac383c2-0264-cc75-a85b-13c1fdfb0bd6@suse.com>
 <dadeeedd-a9e1-d5f4-4754-8da3f065fd44@xen.org>
Subject: Re: [PATCH v4 02/10] x86/vmx: add IPT cpu feature
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
X-Originating-IP: [172.16.10.10]
X-Mailer: Zimbra 8.6.0_GA_1194 (ZimbraWebClient - GC83 (Win)/8.6.0_GA_1194)
Thread-Topic: x86/vmx: add IPT cpu feature
Thread-Index: ayJ3lBVi1JgJBZhMzn8HOkHk4Vrvhw==
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin Tian <kevin.tian@intel.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 tamas lengyel <tamas.lengyel@intel.com>, Jan Beulich <jbeulich@suse.com>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Jun Nakajima <jun.nakajima@intel.com>, xen-devel@lists.xenproject.org,
 luwei kang <luwei.kang@intel.com>,
 Roger Pau =?utf-8?Q?Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

----- On 2 Jul 2020, at 16:31, Julien Grall julien@xen.org wrote:

> On 02/07/2020 15:17, Jan Beulich wrote:
>> On 02.07.2020 16:14, Julien Grall wrote:
>>> On 02/07/2020 14:30, Jan Beulich wrote:
>>>> On 02.07.2020 11:57, Julien Grall wrote:
>>>>> On 02/07/2020 10:18, Jan Beulich wrote:
>>>>>> On 02.07.2020 10:54, Julien Grall wrote:
>>>>>>> On 02/07/2020 09:50, Jan Beulich wrote:
>>>>>>>> On 02.07.2020 10:42, Julien Grall wrote:
>>>>>>>>> On 02/07/2020 09:29, Jan Beulich wrote:
>>>>> Another way to do it, would be the toolstack to do the mapping. At which
>>>>> point, you still need an hypercall to do the mapping (probably the
>>>>> hypercall acquire).
>>>>
>>>> There may not be any mapping to do in such a contrived, fixed-range
>>>> environment. This scenario was specifically to demonstrate that the
>>>> way the mapping gets done may be arch-specific (here: a no-op)
>>>> despite the allocation not being so.
>>> You are arguing on extreme cases which I don't think is really helpful
>>> here. Yes if you want to map at a fixed address in a guest you may not
>>> need the acquire hypercall. But in most of the other cases (see has for
>>> the tools) you will need it.
>>>
>>> So what's the problem with requesting to have the acquire hypercall
>>> implemented in common code?
>>
>> Didn't we start out by you asking that there be as little common code
>> as possible for the time being?
>
> Well as I said I am not in favor of having the allocation in common
> code, but if you want to keep it then you also want to implement
> map/unmap in the common code ([1], [2]).
>
>> I have no issue with putting the
>> acquire implementation there ...
> This was definitely not clear given how you argued with extreme cases...
>
> Cheers,
>
> [1] <9a3f4d58-e5ad-c7a1-6c5f-42aa92101ca1@xen.org>
> [2] <cf41855b-9e5e-13f2-9ab0-04b98f8b3cdd@xen.org>
>
> --
> Julien Grall


Guys,

could you express your final decision on this topic?

While I understand the discussion and the arguments you've raised,
I would like to know what particular elements should be moved where.

So are we going the abstract way, or the non-abstract, x86-only way?

Best regards,
Michał Leszczyński
CERT Polska


From xen-devel-bounces@lists.xenproject.org Thu Jul 02 20:30:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jul 2020 20:30:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jr5qp-0000As-Hh; Thu, 02 Jul 2020 20:30:31 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=c/Sk=AN=cert.pl=michall@srs-us1.protection.inumbo.net>)
 id 1jr5qo-0000An-29
 for xen-devel@lists.xenproject.org; Thu, 02 Jul 2020 20:30:30 +0000
X-Inumbo-ID: dd5b3a46-bca2-11ea-88b5-12813bfff9fa
Received: from bagnar.nask.net.pl (unknown [195.187.242.196])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id dd5b3a46-bca2-11ea-88b5-12813bfff9fa;
 Thu, 02 Jul 2020 20:30:27 +0000 (UTC)
Received: from bagnar.nask.net.pl (unknown [172.16.9.10])
 by bagnar.nask.net.pl (Postfix) with ESMTP id DE845A38B9;
 Thu,  2 Jul 2020 22:30:26 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by bagnar.nask.net.pl (Postfix) with ESMTP id C9683A384F;
 Thu,  2 Jul 2020 22:30:25 +0200 (CEST)
Received: from bagnar.nask.net.pl ([127.0.0.1])
 by localhost (bagnar.nask.net.pl [127.0.0.1]) (amavisd-new, port 10032)
 with ESMTP id yyo64ZndYqLo; Thu,  2 Jul 2020 22:30:25 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by bagnar.nask.net.pl (Postfix) with ESMTP id 4D65CA38B9;
 Thu,  2 Jul 2020 22:30:25 +0200 (CEST)
X-Virus-Scanned: amavisd-new at bagnar.nask.net.pl
Received: from bagnar.nask.net.pl ([127.0.0.1])
 by localhost (bagnar.nask.net.pl [127.0.0.1]) (amavisd-new, port 10026)
 with ESMTP id r8MvnRebaXFk; Thu,  2 Jul 2020 22:30:25 +0200 (CEST)
Received: from belindir.nask.net.pl (belindir-ext.nask.net.pl
 [195.187.242.210])
 by bagnar.nask.net.pl (Postfix) with ESMTP id 2299BA384F;
 Thu,  2 Jul 2020 22:30:25 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by belindir.nask.net.pl (Postfix) with ESMTP id 0F13B22E49;
 Thu,  2 Jul 2020 22:29:55 +0200 (CEST)
Received: from belindir.nask.net.pl ([127.0.0.1])
 by localhost (belindir.nask.net.pl [127.0.0.1]) (amavisd-new, port 10032)
 with ESMTP id eSO8T3zbrb_o; Thu,  2 Jul 2020 22:29:49 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by belindir.nask.net.pl (Postfix) with ESMTP id 89DCC22E4A;
 Thu,  2 Jul 2020 22:29:49 +0200 (CEST)
X-Virus-Scanned: amavisd-new at belindir.nask.net.pl
Received: from belindir.nask.net.pl ([127.0.0.1])
 by localhost (belindir.nask.net.pl [127.0.0.1]) (amavisd-new, port 10026)
 with ESMTP id PdsDc5d_FJp8; Thu,  2 Jul 2020 22:29:49 +0200 (CEST)
Received: from belindir.nask.net.pl (belindir.nask.net.pl [172.16.10.10])
 by belindir.nask.net.pl (Postfix) with ESMTP id 6865F22E49;
 Thu,  2 Jul 2020 22:29:49 +0200 (CEST)
Date: Thu, 2 Jul 2020 22:29:49 +0200 (CEST)
From: =?utf-8?Q?Micha=C5=82_Leszczy=C5=84ski?= <michal.leszczynski@cert.pl>
To: Jan Beulich <jbeulich@suse.com>
Message-ID: <1378030024.18497933.1593721789389.JavaMail.zimbra@cert.pl>
In-Reply-To: <5bb2fb6a-c4f4-7d88-9e07-7922d4235338@suse.com>
References: <cover.1593519420.git.michal.leszczynski@cert.pl>
 <7302dbfcd07dfaad9e50bb772673e588fcc4de67.1593519420.git.michal.leszczynski@cert.pl>
 <f935f7f0-30e4-4ba2-588f-a8368a7b93b1@citrix.com>
 <20200702081020.GW735@Air-de-Roger>
 <5bb2fb6a-c4f4-7d88-9e07-7922d4235338@suse.com>
Subject: Re: [PATCH v4 02/10] x86/vmx: add IPT cpu feature
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
X-Originating-IP: [172.16.10.10]
X-Mailer: Zimbra 8.6.0_GA_1194 (ZimbraWebClient - GC83 (Win)/8.6.0_GA_1194)
Thread-Topic: x86/vmx: add IPT cpu feature
Thread-Index: WUyFXnCTdsSnNUgEtk6y69dO/UcNzQ==
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Julien Grall <julien@xen.org>, Kevin Tian <kevin.tian@intel.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 tamas lengyel <tamas.lengyel@intel.com>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Jun Nakajima <jun.nakajima@intel.com>, xen-devel@lists.xenproject.org,
 luwei kang <luwei.kang@intel.com>,
 Roger Pau =?utf-8?Q?Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

----- On 2 Jul 2020, at 10:34, Jan Beulich jbeulich@suse.com wrote:

> On 02.07.2020 10:10, Roger Pau Monné wrote:
>> On Wed, Jul 01, 2020 at 10:42:55PM +0100, Andrew Cooper wrote:
>>> On 30/06/2020 13:33, Michał Leszczyński wrote:
>>>> diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
>>>> index ca94c2bedc..b73d824357 100644
>>>> --- a/xen/arch/x86/hvm/vmx/vmcs.c
>>>> +++ b/xen/arch/x86/hvm/vmx/vmcs.c
>>>> @@ -291,6 +291,12 @@ static int vmx_init_vmcs_config(void)
>>>>          _vmx_cpu_based_exec_control &=
>>>>              ~(CPU_BASED_CR8_LOAD_EXITING | CPU_BASED_CR8_STORE_EXITING);
>>>>
>>>> +    rdmsrl(MSR_IA32_VMX_MISC, _vmx_misc_cap);
>>>> +
>>>> +    /* Check whether IPT is supported in VMX operation. */
>>>> +    vmtrace_supported = cpu_has_ipt &&
>>>> +                        (_vmx_misc_cap & VMX_MISC_PT_SUPPORTED);
>>>
>>> There is a subtle corner case here.  vmx_init_vmcs_config() is called on
>>> all CPUs, and is supposed to level things down safely if we find any
>>> asymmetry.
>>>
>>> If instead you go with something like this:
>>>
>>> diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
>>> index b73d824357..6960109183 100644
>>> --- a/xen/arch/x86/hvm/vmx/vmcs.c
>>> +++ b/xen/arch/x86/hvm/vmx/vmcs.c
>>> @@ -294,8 +294,8 @@ static int vmx_init_vmcs_config(void)
>>>      rdmsrl(MSR_IA32_VMX_MISC, _vmx_misc_cap);
>>>
>>>      /* Check whether IPT is supported in VMX operation. */
>>> -    vmtrace_supported = cpu_has_ipt &&
>>> -                        (_vmx_misc_cap & VMX_MISC_PT_SUPPORTED);
>>> +    if ( !(_vmx_misc_cap & VMX_MISC_PT_SUPPORTED) )
>>> +        vmtrace_supported = false;
>>
>> This is also used during hotplug, so I'm not sure it's safe to turn
>> vmtrace_supported off during runtime, where VMs might be already using
>> it. IMO it would be easier to just set it on the BSP, and then refuse
>> to bring up any AP that doesn't have the feature.
>
> +1
>
> IOW I also don't think that "vmx_init_vmcs_config() ... is supposed to
> level things down safely". Instead I think the expectation is for
> CPU onlining to fail if a CPU lacks features compared to the BSP. As
> can be implied from what Roger says, doing like what you suggest may
> be fine during boot, but past that only at times where we know there's
> no user of a certain feature, and where discarding the feature flag
> won't lead to other inconsistencies (which may very well mean "never").
>
> Jan


OK, I will modify it in the way Roger suggested for the previous patch
version: CPU onlining will fail if there is an inconsistency.

Best regards,
Michał Leszczyński
CERT Polska


From xen-devel-bounces@lists.xenproject.org Thu Jul 02 21:03:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jul 2020 21:03:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jr6MB-0002iV-8l; Thu, 02 Jul 2020 21:02:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WabN=AN=oracle.com=boris.ostrovsky@srs-us1.protection.inumbo.net>)
 id 1jr6MA-0002iP-1X
 for xen-devel@lists.xenproject.org; Thu, 02 Jul 2020 21:02:54 +0000
X-Inumbo-ID: 64fb331c-bca7-11ea-bb8b-bc764e2007e4
Received: from userp2120.oracle.com (unknown [156.151.31.85])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 64fb331c-bca7-11ea-bb8b-bc764e2007e4;
 Thu, 02 Jul 2020 21:02:53 +0000 (UTC)
Received: from pps.filterd (userp2120.oracle.com [127.0.0.1])
 by userp2120.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 062KktL2183409;
 Thu, 2 Jul 2020 21:02:48 GMT
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com;
 h=subject : to : cc :
 references : from : message-id : date : mime-version : in-reply-to :
 content-type : content-transfer-encoding; s=corp-2020-01-29;
 bh=YiWks5ex821tPWXOK4J5iyPZscj0Ms1G+WO1UwXxeWo=;
 b=HV4SxcyKJ5QMnsrtkXPtkqL1uFB4p4iNEG3t/8gTX4iXpKsqwVdxHJ5vlUISWSb6aQ8L
 OIEsPTYEkp9wdWDfNorx2fcdSzkfN/Sb2/3yUH5JHNMOzrcM3WAzk2ZsBPnIbuhRM5q1
 A8qNujxk/bz0EygVq49sVtp8HEtqz/oXlPrCHbw4PmRO+s/6N4+gnZlWeJCuF2V421Wl
 WUjyfpdZuSwwX/hxENZZdXh5/pp84l5Z3P9CeySNXFdg8Wl8VdkgHL+jSYJA0J8hacDD
 IlAhK2KSGBq+ybP/rIrkusWzr2I1vfp1TP/HpxsCKe9dSrRUVxUFRXhohH/kj008vm79 ZA== 
Received: from aserp3030.oracle.com (aserp3030.oracle.com [141.146.126.71])
 by userp2120.oracle.com with ESMTP id 31wxrnjn86-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=FAIL);
 Thu, 02 Jul 2020 21:02:48 +0000
Received: from pps.filterd (aserp3030.oracle.com [127.0.0.1])
 by aserp3030.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 062Kmut9122943;
 Thu, 2 Jul 2020 21:02:47 GMT
Received: from userv0121.oracle.com (userv0121.oracle.com [156.151.31.72])
 by aserp3030.oracle.com with ESMTP id 31y52nac7f-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Thu, 02 Jul 2020 21:02:47 +0000
Received: from abhmp0002.oracle.com (abhmp0002.oracle.com [141.146.116.8])
 by userv0121.oracle.com (8.14.4/8.13.8) with ESMTP id 062L2hLw032250;
 Thu, 2 Jul 2020 21:02:44 GMT
Received: from [10.39.209.60] (/10.39.209.60)
 by default (Oracle Beehive Gateway v4.0)
 with ESMTP ; Thu, 02 Jul 2020 21:02:43 +0000
Subject: Re: [PATCH v2 1/2] xen/xenbus: avoid large structs and arrays on the
 stack
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org,
 linux-kernel@vger.kernel.org, clang-built-linux@googlegroups.com
References: <20200701121638.19840-1-jgross@suse.com>
 <20200701121638.19840-2-jgross@suse.com>
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Autocrypt: addr=boris.ostrovsky@oracle.com; keydata=
 xsFNBFH8CgsBEAC0KiOi9siOvlXatK2xX99e/J3OvApoYWjieVQ9232Eb7GzCWrItCzP8FUV
 PQg8rMsSd0OzIvvjbEAvaWLlbs8wa3MtVLysHY/DfqRK9Zvr/RgrsYC6ukOB7igy2PGqZd+M
 MDnSmVzik0sPvB6xPV7QyFsykEgpnHbvdZAUy/vyys8xgT0PVYR5hyvhyf6VIfGuvqIsvJw5
 C8+P71CHI+U/IhsKrLrsiYHpAhQkw+Zvyeml6XSi5w4LXDbF+3oholKYCkPwxmGdK8MUIdkM
 d7iYdKqiP4W6FKQou/lC3jvOceGupEoDV9botSWEIIlKdtm6C4GfL45RD8V4B9iy24JHPlom
 woVWc0xBZboQguhauQqrBFooHO3roEeM1pxXjLUbDtH4t3SAI3gt4dpSyT3EvzhyNQVVIxj2
 FXnIChrYxR6S0ijSqUKO0cAduenhBrpYbz9qFcB/GyxD+ZWY7OgQKHUZMWapx5bHGQ8bUZz2
 SfjZwK+GETGhfkvNMf6zXbZkDq4kKB/ywaKvVPodS1Poa44+B9sxbUp1jMfFtlOJ3AYB0WDS
 Op3d7F2ry20CIf1Ifh0nIxkQPkTX7aX5rI92oZeu5u038dHUu/dO2EcuCjl1eDMGm5PLHDSP
 0QUw5xzk1Y8MG1JQ56PtqReO33inBXG63yTIikJmUXFTw6lLJwARAQABzTNCb3JpcyBPc3Ry
 b3Zza3kgKFdvcmspIDxib3Jpcy5vc3Ryb3Zza3lAb3JhY2xlLmNvbT7CwXgEEwECACIFAlH8
 CgsCGwMGCwkIBwMCBhUIAgkKCwQWAgMBAh4BAheAAAoJEIredpCGysGyasEP/j5xApopUf4g
 9Fl3UxZuBx+oduuw3JHqgbGZ2siA3EA4bKwtKq8eT7ekpApn4c0HA8TWTDtgZtLSV5IdH+9z
 JimBDrhLkDI3Zsx2CafL4pMJvpUavhc5mEU8myp4dWCuIylHiWG65agvUeFZYK4P33fGqoaS
 VGx3tsQIAr7MsQxilMfRiTEoYH0WWthhE0YVQzV6kx4wj4yLGYPPBtFqnrapKKC8yFTpgjaK
 jImqWhU9CSUAXdNEs/oKVR1XlkDpMCFDl88vKAuJwugnixjbPFTVPyoC7+4Bm/FnL3iwlJVE
 qIGQRspt09r+datFzPqSbp5Fo/9m4JSvgtPp2X2+gIGgLPWp2ft1NXHHVWP19sPgEsEJXSr9
 tskM8ScxEkqAUuDs6+x/ISX8wa5Pvmo65drN+JWA8EqKOHQG6LUsUdJolFM2i4Z0k40BnFU/
 kjTARjrXW94LwokVy4x+ZYgImrnKWeKac6fMfMwH2aKpCQLlVxdO4qvJkv92SzZz4538az1T
 m+3ekJAimou89cXwXHCFb5WqJcyjDfdQF857vTn1z4qu7udYCuuV/4xDEhslUq1+GcNDjAhB
 nNYPzD+SvhWEsrjuXv+fDONdJtmLUpKs4Jtak3smGGhZsqpcNv8nQzUGDQZjuCSmDqW8vn2o
 hWwveNeRTkxh+2x1Qb3GT46uzsFNBFH8CgsBEADGC/yx5ctcLQlB9hbq7KNqCDyZNoYu1HAB
 Hal3MuxPfoGKObEktawQPQaSTB5vNlDxKihezLnlT/PKjcXC2R1OjSDinlu5XNGc6mnky03q
 yymUPyiMtWhBBftezTRxWRslPaFWlg/h/Y1iDuOcklhpr7K1h1jRPCrf1yIoxbIpDbffnuyz
 kuto4AahRvBU4Js4sU7f/btU+h+e0AcLVzIhTVPIz7PM+Gk2LNzZ3/on4dnEc/qd+ZZFlOQ4
 KDN/hPqlwA/YJsKzAPX51L6Vv344pqTm6Z0f9M7YALB/11FO2nBB7zw7HAUYqJeHutCwxm7i
 BDNt0g9fhviNcJzagqJ1R7aPjtjBoYvKkbwNu5sWDpQ4idnsnck4YT6ctzN4I+6lfkU8zMzC
 gM2R4qqUXmxFIS4Bee+gnJi0Pc3KcBYBZsDK44FtM//5Cp9DrxRQOh19kNHBlxkmEb8kL/pw
 XIDcEq8MXzPBbxwHKJ3QRWRe5jPNpf8HCjnZz0XyJV0/4M1JvOua7IZftOttQ6KnM4m6WNIZ
 2ydg7dBhDa6iv1oKdL7wdp/rCulVWn8R7+3cRK95SnWiJ0qKDlMbIN8oGMhHdin8cSRYdmHK
 kTnvSGJNlkis5a+048o0C6jI3LozQYD/W9wq7MvgChgVQw1iEOB4u/3FXDEGulRVko6xCBU4
 SQARAQABwsFfBBgBAgAJBQJR/AoLAhsMAAoJEIredpCGysGyfvMQAIywR6jTqix6/fL0Ip8G
 jpt3uk//QNxGJE3ZkUNLX6N786vnEJvc1beCu6EwqD1ezG9fJKMl7F3SEgpYaiKEcHfoKGdh
 30B3Hsq44vOoxR6zxw2B/giADjhmWTP5tWQ9548N4VhIZMYQMQCkdqaueSL+8asp8tBNP+TJ
 PAIIANYvJaD8xA7sYUXGTzOXDh2THWSvmEWWmzok8er/u6ZKdS1YmZkUy8cfzrll/9hiGCTj
 u3qcaOM6i/m4hqtvsI1cOORMVwjJF4+IkC5ZBoeRs/xW5zIBdSUoC8L+OCyj5JETWTt40+lu
 qoqAF/AEGsNZTrwHJYu9rbHH260C0KYCNqmxDdcROUqIzJdzDKOrDmebkEVnxVeLJBIhYZUd
 t3Iq9hdjpU50TA6sQ3mZxzBdfRgg+vaj2DsJqI5Xla9QGKD+xNT6v14cZuIMZzO7w0DoojM4
 ByrabFsOQxGvE0w9Dch2BDSI2Xyk1zjPKxG1VNBQVx3flH37QDWpL2zlJikW29Ws86PHdthh
 Fm5PY8YtX576DchSP6qJC57/eAAe/9ztZdVAdesQwGb9hZHJc75B+VNm4xrh/PJO6c1THqdQ
 19WVJ+7rDx3PhVncGlbAOiiiE3NOFPJ1OQYxPKtpBUukAlOTnkKE6QcA4zckFepUkfmBV1wM
 Jg6OxFYd01z+a+oL
Message-ID: <80b8927f-d654-44f3-a860-fb3e395652d6@oracle.com>
Date: Thu, 2 Jul 2020 17:02:37 -0400
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <20200701121638.19840-2-jgross@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-US
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9670
 signatures=668680
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 spamscore=0
 phishscore=0 mlxscore=0
 adultscore=0 suspectscore=0 bulkscore=0 malwarescore=0 mlxlogscore=999
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2004280000
 definitions=main-2007020138
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9670
 signatures=668680
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 mlxscore=0
 mlxlogscore=999
 priorityscore=1501 impostorscore=0 bulkscore=0 clxscore=1011
 malwarescore=0 phishscore=0 adultscore=0 cotscore=-2147483648
 lowpriorityscore=0 suspectscore=0 spamscore=0 classifier=spam adjust=0
 reason=mlx scancount=1 engine=8.12.0-2004280000
 definitions=main-2007020138
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Arnd Bergmann <arnd@arndb.de>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 7/1/20 8:16 AM, Juergen Gross wrote:
> xenbus_map_ring_valloc() and its sub-functions are putting quite large
> structs and arrays on the stack. This is problematic at runtime, but
> might also result in build failures (e.g. with clang due to the option
> -Werror,-Wframe-larger-than=... used).
>
> Fix that by moving most of the data from the stack into a dynamically
> allocated struct. Performance is no issue here, as
> xenbus_map_ring_valloc() is used only when adding a new PV device to
> a backend driver.
>
> While at it, move some duplicated code from the PV/HVM-specific mapping
> functions to their single caller.
>
> Reported-by: Arnd Bergmann <arnd@arndb.de>
> Signed-off-by: Juergen Gross <jgross@suse.com>
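
The refactoring pattern described above (replacing large on-stack structs and
arrays with a single dynamic allocation) can be sketched roughly as follows.
All names and sizes here are invented for illustration; they do not match the
actual xenbus types or functions:

```c
#include <stdlib.h>

/* Formerly several large on-stack arrays; now grouped into one struct
 * so a single heap allocation replaces >4 KiB of stack usage. */
struct map_ring_data {
    unsigned long handles[16];
    unsigned char buf[4096];
};

int map_ring(unsigned int nr_refs)
{
    /* One allocation on the slow path (device setup), instead of
     * blowing past -Wframe-larger-than= limits on the stack. */
    struct map_ring_data *info = calloc(1, sizeof(*info));

    if (!info)
        return -1;          /* would be -ENOMEM in kernel code */

    /* ... perform the mapping using info->handles / info->buf ... */
    (void)nr_refs;

    free(info);
    return 0;
}
```

Since this path only runs when a new PV device is added to a backend, the
extra allocation has no measurable performance cost.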


Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>




From xen-devel-bounces@lists.xenproject.org Thu Jul 02 21:28:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jul 2020 21:28:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jr6kx-0004TD-AR; Thu, 02 Jul 2020 21:28:31 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CtVa=AN=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jr6kv-0004T8-NZ
 for xen-devel@lists.xenproject.org; Thu, 02 Jul 2020 21:28:29 +0000
X-Inumbo-ID: f7823c96-bcaa-11ea-8496-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f7823c96-bcaa-11ea-8496-bc764e2007e4;
 Thu, 02 Jul 2020 21:28:27 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=6+Q9Lu43/YWOEKz82MLjNF+qt6/iSS3/U6hHzx+e48c=; b=1opqkO1K0IPCxWFSx3XRjn93z
 ghdqjkMEVE8ttez4Sep6+pwGYk24A8kazFEX+4Cw38wGgyOYvgpPO7bXei12glrAjqzh7I+Bisc6r
 4KfU5wdDeV1XsJ32FQ15k0+mPJIWt9ukJv2U0uRQnKogmljgfssS889FMwpFN37ChsHPQ=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jr6ks-0003Aw-OO; Thu, 02 Jul 2020 21:28:26 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jr6ks-0001qD-EI; Thu, 02 Jul 2020 21:28:26 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jr6ks-0004Br-Di; Thu, 02 Jul 2020 21:28:26 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151528-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 151528: regressions - FAIL
X-Osstest-Failures: xen-unstable:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:regression
 xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
X-Osstest-Versions-This: xen=0dbed3ad3366734fd23ee3fd1f9989c8c96b6052
X-Osstest-Versions-That: xen=23ca7ec0ba620db52a646d80e22f9703a6589f66
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 02 Jul 2020 21:28:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151528 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151528/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 151506

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 151506
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 151506
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 151506
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 151506
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 151506
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 151506
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 151506
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 151506
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop             fail like 151506
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass

version targeted for testing:
 xen                  0dbed3ad3366734fd23ee3fd1f9989c8c96b6052
baseline version:
 xen                  23ca7ec0ba620db52a646d80e22f9703a6589f66

Last test of basis   151506  2020-07-01 10:55:16 Z    1 days
Testing same since   151528  2020-07-02 04:45:56 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Stefano Stabellini <stefano.stabellini@xilinx.com>
  Volodymyr Babchuk <volodymyr_babchuk@epam.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 0dbed3ad3366734fd23ee3fd1f9989c8c96b6052
Author: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Date:   Fri Jun 19 22:34:01 2020 +0000

    optee: allow plain TMEM buffers with NULL address
    
    Trusted Applications use a popular approach to determine the required
    size of a buffer: the client provides a memory reference with a NULL
    buffer pointer, a so-called "null memory reference". The TA updates
    the reference with the required size and returns it back to the
    client. The client then allocates a buffer of the needed size and
    repeats the operation.
    
    This behavior is described in the TEE Client API Specification,
    paragraph 3.2.5 ("Memory References").
    
    OP-TEE represents this null memory reference as a TMEM parameter with
    buf_ptr = 0x0. This is the only case when we should allow a TMEM
    buffer without the OPTEE_MSG_ATTR_NONCONTIG flag. It is also a
    special case for a buffer with the OPTEE_MSG_ATTR_NONCONTIG flag.
    
    This could lead to a potential issue, because IPA 0x0 is a valid
    address, but OP-TEE will treat it as a special case. So, care should
    be taken when constructing an OP-TEE enabled guest to make sure that
    such a guest has no memory at IPA 0x0 and none of its memory is
    mapped at PA 0x0.
    
    Signed-off-by: Volodymyr Babchuk <volodymyr_babchuk@epam.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
    Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
    Release-acked-by: Paul Durrant <paul@xen.org>
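
The two-call size-query protocol described in this commit message can be
sketched as below. The function names are invented for the sketch and are not
part of the OP-TEE client API; a real TA invocation goes through
OPTEE_MSG/TEEC calls rather than a plain function:

```c
#include <stddef.h>
#include <stdlib.h>
#include <string.h>

static const char report[] = "example-report";

/* Stand-in for a Trusted Application invocation.  A NULL buf is the
 * "null memory reference": the TA only writes back the required size. */
int ta_get_report(void *buf, size_t *len)
{
    if (buf == NULL || *len < sizeof(report)) {
        *len = sizeof(report);   /* tell the client how much to allocate */
        return buf ? -1 : 0;     /* a short non-NULL buffer is an error */
    }
    memcpy(buf, report, sizeof(report));
    return 0;
}

/* Client side: query the size with a NULL reference, allocate,
 * then repeat the operation with a real buffer. */
size_t query_and_fetch(char **out)
{
    size_t len = 0;

    ta_get_report(NULL, &len);   /* first call: size only */
    *out = malloc(len);
    ta_get_report(*out, &len);   /* second call: real data */
    return len;
}
```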

commit 5b13eb1d978e9732fe2c9826b60885b687a5c4fc
Author: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Date:   Fri Jun 19 22:33:59 2020 +0000

    optee: immediately free buffers that are released by OP-TEE
    
    Normal World can share a buffer with OP-TEE for two reasons:
    1. A client application wants to exchange data with a TA
    2. OP-TEE asks for a shared buffer for its internal needs
    
    The second case was handled more strictly than necessary:
    
    1. In an RPC request, OP-TEE asks for a buffer
    2. NW allocates a buffer and provides it via the RPC response
    3. Xen pins the pages and translates the data
    4. Xen provides the buffer to OP-TEE
    5. OP-TEE uses it
    6. OP-TEE sends a request to free the buffer
    7. NW frees the buffer and sends the RPC response
    8. Xen unpins the pages and forgets about the buffer
    
    The problem is that Xen should forget about the buffer between stages
    6 and 7. I.e. the right flow should be like this:
    
    6. OP-TEE sends a request to free the buffer
    7. Xen unpins the pages and forgets about the buffer
    8. NW frees the buffer and sends the RPC response
    
    This is because OP-TEE internally frees the buffer before sending the
    "free SHM buffer" request, so we have no reason to hold a reference
    to this buffer anymore. Moreover, on multiprocessor systems the NW
    has time to reuse the buffer cookie for another buffer; Xen
    complained about this and denied the new buffer registration. I have
    seen this issue while running tests on an iMX SoC.
    
    So, this patch basically corrects that behavior by freeing the buffer
    earlier, when handling RPC return from OP-TEE.
    
    Signed-off-by: Volodymyr Babchuk <volodymyr_babchuk@epam.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
    Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
    Release-acked-by: Paul Durrant <paul@xen.org>
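
The cookie-reuse race this commit fixes can be modeled with a toy cookie
table: the fix is to release the cookie when handling the RPC return (the
free request), so a new registration with the same cookie succeeds even
before the Normal World sends the RPC response. All names and the table
layout are invented for the sketch; they do not match Xen's OP-TEE mediator:

```c
#define MAX_BUFS 4

/* Toy table of buffer cookies that Xen is tracking; 0 = free slot. */
static unsigned long cookies[MAX_BUFS];

/* Register a shared buffer; deny reuse of a cookie still in the table. */
int register_buf(unsigned long cookie)
{
    int i;

    for (i = 0; i < MAX_BUFS; i++)
        if (cookies[i] == cookie)
            return -1;              /* cookie already in use: denied */
    for (i = 0; i < MAX_BUFS; i++)
        if (cookies[i] == 0) {
            cookies[i] = cookie;
            return 0;
        }
    return -1;                      /* table full */
}

/* Handle the "free SHM buffer" RPC return from OP-TEE: drop the cookie
 * immediately, before NW has sent the RPC response, so NW may reuse it. */
void handle_rpc_free(unsigned long cookie)
{
    int i;

    for (i = 0; i < MAX_BUFS; i++)
        if (cookies[i] == cookie)
            cookies[i] = 0;
}
```

With the old ordering (release only after the RPC response), the second
registration of a recycled cookie would be denied, which is the failure the
commit describes.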

commit 3b7dab93f2401b08c673244c9ae0f92e08bd03ba
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Wed Jul 1 12:39:59 2020 +0100

    x86/spec-ctrl: Protect against CALL/JMP straight-line speculation
    
    Some x86 CPUs speculatively execute beyond indirect CALL/JMP instructions.
    
    With CONFIG_INDIRECT_THUNK / Retpolines, indirect CALL/JMP instructions are
    converted to direct CALL/JMPs to __x86_indirect_thunk_REG(), leaving just a
    handful of indirect JMPs implementing those stubs.
    
    There is no architectural execution beyond an indirect JMP, so use INT3 as
    recommended by vendors to halt speculative execution.  This is shorter than
    LFENCE (which would also work fine), but also shows up in logs if we do
    unexpectedly execute them.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Release-acked-by: Paul Durrant <paul@xen.org>
(qemu changes not included)
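The mitigation can be pictured with a pseudo-assembly sketch (illustrative only, not copied from the Xen sources): each indirect-branch thunk gains a trailing INT3, so any speculative straight-line execution past the JMP hits a trap rather than whatever bytes happen to follow.

```
/* Pseudo-assembly sketch of a thunk stub.  Architectural execution
 * never passes the indirect JMP, so the INT3 is only ever reached
 * speculatively -- or by an unexpected architectural jump, which then
 * shows up in logs as a trap. */
__x86_indirect_thunk_rax:
        jmp  *%rax
        int3
```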


From xen-devel-bounces@lists.xenproject.org Thu Jul 02 21:37:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jul 2020 21:37:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jr6u1-0005MU-Ed; Thu, 02 Jul 2020 21:37:53 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=yVFs=AN=davemloft.net=davem@srs-us1.protection.inumbo.net>)
 id 1jr6u0-0005MP-M0
 for xen-devel@lists.xenproject.org; Thu, 02 Jul 2020 21:37:52 +0000
X-Inumbo-ID: 4652d91a-bcac-11ea-bca7-bc764e2007e4
Received: from shards.monkeyblade.net (unknown [2620:137:e000::1:9])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4652d91a-bcac-11ea-bca7-bc764e2007e4;
 Thu, 02 Jul 2020 21:37:49 +0000 (UTC)
Received: from localhost (unknown [IPv6:2601:601:9f00:477::3d5])
 (using TLSv1 with cipher AES256-SHA (256/256 bits))
 (Client did not present a certificate)
 (Authenticated sender: davem-davemloft)
 by shards.monkeyblade.net (Postfix) with ESMTPSA id E685B12845373;
 Thu,  2 Jul 2020 14:37:47 -0700 (PDT)
Date: Thu, 02 Jul 2020 14:37:47 -0700 (PDT)
Message-Id: <20200702.143747.827041018046186172.davem@davemloft.net>
To: colin.king@canonical.com
Subject: Re: [PATCH][next] xen-netfront: remove redundant assignment to
 variable 'act'
From: David Miller <davem@davemloft.net>
In-Reply-To: <20200702142223.48178-1-colin.king@canonical.com>
References: <20200702142223.48178-1-colin.king@canonical.com>
X-Mailer: Mew version 6.8 on Emacs 26.3
Mime-Version: 1.0
Content-Type: Text/Plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
X-Greylist: Sender succeeded SMTP AUTH, not delayed by milter-greylist-4.5.12
 (shards.monkeyblade.net [149.20.54.216]);
 Thu, 02 Jul 2020 14:37:48 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: jgross@suse.com, sstabellini@kernel.org, netdev@vger.kernel.org,
 kernel-janitors@vger.kernel.org, linux-kernel@vger.kernel.org,
 xen-devel@lists.xenproject.org, kuba@kernel.org, boris.ostrovsky@oracle.com
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Colin King <colin.king@canonical.com>
Date: Thu,  2 Jul 2020 15:22:23 +0100

> From: Colin Ian King <colin.king@canonical.com>
> 
> The variable act is initialized with a value that is never read,
> and it is updated later with a new value. The initialization is
> redundant and can be removed.
> 
> Addresses-Coverity: ("Unused value")
> Signed-off-by: Colin Ian King <colin.king@canonical.com>

Applied, thank you.


From xen-devel-bounces@lists.xenproject.org Thu Jul 02 23:00:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jul 2020 23:00:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jr8BI-0003P7-Rs; Thu, 02 Jul 2020 22:59:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WabN=AN=oracle.com=boris.ostrovsky@srs-us1.protection.inumbo.net>)
 id 1jr8BH-0003P2-HF
 for xen-devel@lists.xenproject.org; Thu, 02 Jul 2020 22:59:47 +0000
X-Inumbo-ID: b86e8958-bcb7-11ea-bb8b-bc764e2007e4
Received: from userp2130.oracle.com (unknown [156.151.31.86])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b86e8958-bcb7-11ea-bb8b-bc764e2007e4;
 Thu, 02 Jul 2020 22:59:45 +0000 (UTC)
Received: from pps.filterd (userp2130.oracle.com [127.0.0.1])
 by userp2130.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 062MvTp8075029;
 Thu, 2 Jul 2020 22:59:30 GMT
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com;
 h=subject : to : cc :
 references : from : message-id : date : mime-version : in-reply-to :
 content-type : content-transfer-encoding; s=corp-2020-01-29;
 bh=rja40bkZ+uABEn1zz77pxq4CDtieylOgbo/CCZbZS0U=;
 b=kndgUCLi8TtT2/WeHmyYfw9qwBaJMg3c18lZZKETy+NXS0p5fTh/sbkr+4z902nhZl+0
 K0m+qJFpAVL6igeiwoMBiAIGH2++AXdcKbUnPxpTzZGmZA9fedbrIOs9TqUOzTcR6aI8
 zxnWs4wcImV7hB90a0i2eAm3aZnF+7El9T02kBCK5IgQNWVs7vMIPG57NyaIhLQ0fuEl
 4zaEnZDOEqtZiSon3n1ddFj5toNYmkczbPVlXqnT+6ue47O3sCWk3GFMBHGYEf5vhIQu
 aozrGcaGV4J0oX1KvW5y71AvRNfLX1JVYjQYn47BTWBHqJ9P+V5Utnu+y+mFKjn2djMo Iw== 
Received: from userp3020.oracle.com (userp3020.oracle.com [156.151.31.79])
 by userp2130.oracle.com with ESMTP id 31ywrc17n7-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=FAIL);
 Thu, 02 Jul 2020 22:59:30 +0000
Received: from pps.filterd (userp3020.oracle.com [127.0.0.1])
 by userp3020.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 062Mwgxi160912;
 Thu, 2 Jul 2020 22:59:30 GMT
Received: from userv0121.oracle.com (userv0121.oracle.com [156.151.31.72])
 by userp3020.oracle.com with ESMTP id 31xfvwbfh9-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Thu, 02 Jul 2020 22:59:30 +0000
Received: from abhmp0016.oracle.com (abhmp0016.oracle.com [141.146.116.22])
 by userv0121.oracle.com (8.14.4/8.13.8) with ESMTP id 062MxRse016460;
 Thu, 2 Jul 2020 22:59:27 GMT
Received: from [10.39.209.60] (/10.39.209.60)
 by default (Oracle Beehive Gateway v4.0)
 with ESMTP ; Thu, 02 Jul 2020 22:59:27 +0000
Subject: Re: [PATCH v2 1/4] x86/xen: remove 32-bit Xen PV guest support
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org,
 x86@kernel.org, linux-kernel@vger.kernel.org
References: <20200701110650.16172-1-jgross@suse.com>
 <20200701110650.16172-2-jgross@suse.com>
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Autocrypt: addr=boris.ostrovsky@oracle.com; keydata=
 xsFNBFH8CgsBEAC0KiOi9siOvlXatK2xX99e/J3OvApoYWjieVQ9232Eb7GzCWrItCzP8FUV
 PQg8rMsSd0OzIvvjbEAvaWLlbs8wa3MtVLysHY/DfqRK9Zvr/RgrsYC6ukOB7igy2PGqZd+M
 MDnSmVzik0sPvB6xPV7QyFsykEgpnHbvdZAUy/vyys8xgT0PVYR5hyvhyf6VIfGuvqIsvJw5
 C8+P71CHI+U/IhsKrLrsiYHpAhQkw+Zvyeml6XSi5w4LXDbF+3oholKYCkPwxmGdK8MUIdkM
 d7iYdKqiP4W6FKQou/lC3jvOceGupEoDV9botSWEIIlKdtm6C4GfL45RD8V4B9iy24JHPlom
 woVWc0xBZboQguhauQqrBFooHO3roEeM1pxXjLUbDtH4t3SAI3gt4dpSyT3EvzhyNQVVIxj2
 FXnIChrYxR6S0ijSqUKO0cAduenhBrpYbz9qFcB/GyxD+ZWY7OgQKHUZMWapx5bHGQ8bUZz2
 SfjZwK+GETGhfkvNMf6zXbZkDq4kKB/ywaKvVPodS1Poa44+B9sxbUp1jMfFtlOJ3AYB0WDS
 Op3d7F2ry20CIf1Ifh0nIxkQPkTX7aX5rI92oZeu5u038dHUu/dO2EcuCjl1eDMGm5PLHDSP
 0QUw5xzk1Y8MG1JQ56PtqReO33inBXG63yTIikJmUXFTw6lLJwARAQABzTNCb3JpcyBPc3Ry
 b3Zza3kgKFdvcmspIDxib3Jpcy5vc3Ryb3Zza3lAb3JhY2xlLmNvbT7CwXgEEwECACIFAlH8
 CgsCGwMGCwkIBwMCBhUIAgkKCwQWAgMBAh4BAheAAAoJEIredpCGysGyasEP/j5xApopUf4g
 9Fl3UxZuBx+oduuw3JHqgbGZ2siA3EA4bKwtKq8eT7ekpApn4c0HA8TWTDtgZtLSV5IdH+9z
 JimBDrhLkDI3Zsx2CafL4pMJvpUavhc5mEU8myp4dWCuIylHiWG65agvUeFZYK4P33fGqoaS
 VGx3tsQIAr7MsQxilMfRiTEoYH0WWthhE0YVQzV6kx4wj4yLGYPPBtFqnrapKKC8yFTpgjaK
 jImqWhU9CSUAXdNEs/oKVR1XlkDpMCFDl88vKAuJwugnixjbPFTVPyoC7+4Bm/FnL3iwlJVE
 qIGQRspt09r+datFzPqSbp5Fo/9m4JSvgtPp2X2+gIGgLPWp2ft1NXHHVWP19sPgEsEJXSr9
 tskM8ScxEkqAUuDs6+x/ISX8wa5Pvmo65drN+JWA8EqKOHQG6LUsUdJolFM2i4Z0k40BnFU/
 kjTARjrXW94LwokVy4x+ZYgImrnKWeKac6fMfMwH2aKpCQLlVxdO4qvJkv92SzZz4538az1T
 m+3ekJAimou89cXwXHCFb5WqJcyjDfdQF857vTn1z4qu7udYCuuV/4xDEhslUq1+GcNDjAhB
 nNYPzD+SvhWEsrjuXv+fDONdJtmLUpKs4Jtak3smGGhZsqpcNv8nQzUGDQZjuCSmDqW8vn2o
 hWwveNeRTkxh+2x1Qb3GT46uzsFNBFH8CgsBEADGC/yx5ctcLQlB9hbq7KNqCDyZNoYu1HAB
 Hal3MuxPfoGKObEktawQPQaSTB5vNlDxKihezLnlT/PKjcXC2R1OjSDinlu5XNGc6mnky03q
 yymUPyiMtWhBBftezTRxWRslPaFWlg/h/Y1iDuOcklhpr7K1h1jRPCrf1yIoxbIpDbffnuyz
 kuto4AahRvBU4Js4sU7f/btU+h+e0AcLVzIhTVPIz7PM+Gk2LNzZ3/on4dnEc/qd+ZZFlOQ4
 KDN/hPqlwA/YJsKzAPX51L6Vv344pqTm6Z0f9M7YALB/11FO2nBB7zw7HAUYqJeHutCwxm7i
 BDNt0g9fhviNcJzagqJ1R7aPjtjBoYvKkbwNu5sWDpQ4idnsnck4YT6ctzN4I+6lfkU8zMzC
 gM2R4qqUXmxFIS4Bee+gnJi0Pc3KcBYBZsDK44FtM//5Cp9DrxRQOh19kNHBlxkmEb8kL/pw
 XIDcEq8MXzPBbxwHKJ3QRWRe5jPNpf8HCjnZz0XyJV0/4M1JvOua7IZftOttQ6KnM4m6WNIZ
 2ydg7dBhDa6iv1oKdL7wdp/rCulVWn8R7+3cRK95SnWiJ0qKDlMbIN8oGMhHdin8cSRYdmHK
 kTnvSGJNlkis5a+048o0C6jI3LozQYD/W9wq7MvgChgVQw1iEOB4u/3FXDEGulRVko6xCBU4
 SQARAQABwsFfBBgBAgAJBQJR/AoLAhsMAAoJEIredpCGysGyfvMQAIywR6jTqix6/fL0Ip8G
 jpt3uk//QNxGJE3ZkUNLX6N786vnEJvc1beCu6EwqD1ezG9fJKMl7F3SEgpYaiKEcHfoKGdh
 30B3Hsq44vOoxR6zxw2B/giADjhmWTP5tWQ9548N4VhIZMYQMQCkdqaueSL+8asp8tBNP+TJ
 PAIIANYvJaD8xA7sYUXGTzOXDh2THWSvmEWWmzok8er/u6ZKdS1YmZkUy8cfzrll/9hiGCTj
 u3qcaOM6i/m4hqtvsI1cOORMVwjJF4+IkC5ZBoeRs/xW5zIBdSUoC8L+OCyj5JETWTt40+lu
 qoqAF/AEGsNZTrwHJYu9rbHH260C0KYCNqmxDdcROUqIzJdzDKOrDmebkEVnxVeLJBIhYZUd
 t3Iq9hdjpU50TA6sQ3mZxzBdfRgg+vaj2DsJqI5Xla9QGKD+xNT6v14cZuIMZzO7w0DoojM4
 ByrabFsOQxGvE0w9Dch2BDSI2Xyk1zjPKxG1VNBQVx3flH37QDWpL2zlJikW29Ws86PHdthh
 Fm5PY8YtX576DchSP6qJC57/eAAe/9ztZdVAdesQwGb9hZHJc75B+VNm4xrh/PJO6c1THqdQ
 19WVJ+7rDx3PhVncGlbAOiiiE3NOFPJ1OQYxPKtpBUukAlOTnkKE6QcA4zckFepUkfmBV1wM
 Jg6OxFYd01z+a+oL
Message-ID: <6d0b517a-6c53-61d3-117b-40e33e013037@oracle.com>
Date: Thu, 2 Jul 2020 18:59:21 -0400
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <20200701110650.16172-2-jgross@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
Content-Language: en-US
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9670
 signatures=668680
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 suspectscore=0
 adultscore=0 spamscore=0
 phishscore=0 malwarescore=0 mlxlogscore=999 bulkscore=0 mlxscore=0
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2004280000
 definitions=main-2007020153
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9670
 signatures=668680
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 bulkscore=0
 spamscore=0 mlxlogscore=999
 clxscore=1011 cotscore=-2147483648 priorityscore=1501 lowpriorityscore=0
 malwarescore=0 mlxscore=0 adultscore=0 suspectscore=0 impostorscore=0
 phishscore=0 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2004280000 definitions=main-2007020153
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Ingo Molnar <mingo@redhat.com>,
 Borislav Petkov <bp@alien8.de>, Andy Lutomirski <luto@kernel.org>,
 "H. Peter Anvin" <hpa@zytor.com>, Thomas Gleixner <tglx@linutronix.de>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 7/1/20 7:06 AM, Juergen Gross wrote:
> Xen requires 64-bit machines today, and since Xen 4.14 it can be
> built without 32-bit PV guest support. There is no need to carry the
> burden of 32-bit PV guest support in the kernel any longer, as new
> guests can be either HVM or PVH, or they can use a 64-bit kernel.
>
> Remove the 32-bit Xen PV support from the kernel.
>
> Signed-off-by: Juergen Gross <jgross@suse.com>
> ---
>  arch/x86/entry/entry_32.S      | 109 +----------
>  arch/x86/include/asm/proto.h   |   2 +-
>  arch/x86/include/asm/segment.h |   2 +-
>  arch/x86/kernel/head_32.S      |  31 ---
>  arch/x86/xen/Kconfig           |   3 +-
>  arch/x86/xen/Makefile          |   3 +-
>  arch/x86/xen/apic.c            |  17 --
>  arch/x86/xen/enlighten_pv.c    |  48 +----


Should we drop the PageHighMem() test in set_aliased_prot()?


(And there are a few other places where it is used, in mmu_pv.c)



> @@ -555,13 +547,8 @@ static void xen_load_tls(struct thread_struct *t, unsigned int cpu)
>  	 * exception between the new %fs descriptor being loaded and
>  	 * %fs being effectively cleared at __switch_to().
>  	 */
> -	if (paravirt_get_lazy_mode() == PARAVIRT_LAZY_CPU) {
> -#ifdef CONFIG_X86_32
> -		lazy_load_gs(0);
> -#else


I think this also needs an adjustment to the preceding comment.


>
> -#ifdef CONFIG_X86_PAE
> -static void xen_set_pte_atomic(pte_t *ptep, pte_t pte)
> -{
> -	trace_xen_mmu_set_pte_atomic(ptep, pte);
> -	__xen_set_pte(ptep, pte);


Probably not for this series but I wonder whether __xen_set_pte() should
continue to use hypercall now that we are 64-bit only.


> @@ -654,14 +621,12 @@ static int __xen_pgd_walk(struct mm_struct *mm, pgd_t *pgd,


The comment above should be updated.


-boris



From xen-devel-bounces@lists.xenproject.org Thu Jul 02 23:24:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jul 2020 23:24:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jr8Yw-0005nb-SX; Thu, 02 Jul 2020 23:24:14 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=XMTP=AN=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jr8Yv-0005nV-Ku
 for xen-devel@lists.xenproject.org; Thu, 02 Jul 2020 23:24:13 +0000
X-Inumbo-ID: 2329bb8e-bcbb-11ea-bca7-bc764e2007e4
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2329bb8e-bcbb-11ea-bca7-bc764e2007e4;
 Thu, 02 Jul 2020 23:24:12 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1593732252;
 h=subject:to:cc:references:from:message-id:date:
 mime-version:in-reply-to:content-transfer-encoding;
 bh=DD/40Qz8qZ78c4jky0R8eJwe8SV5DGFlqxwDiyq60pw=;
 b=dSkUvgeHtKGv8ckYivVY5J7d7/bFXIX60d6R2n8P14Cg/5guSF2fcJIN
 ayhG79qAW3QIAAHZ46z3oHU0wdFYBLZQwEuIFeeu7/nNwGgFDU1nvzrlu
 PiXF94l4++psFputDy+FfWtXcocL4FwJMNRallnLRMYV+uJ5qr5v939pE o=;
Authentication-Results: esa4.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: EHdLL4zhLW9bpSAxAYvRa35rsjy9utSWkFNwQ8GKYJ+LgLEnAwpznR4lbdlktDiJaHK1o4M5lZ
 S9HVefjH7PkelGMqcFH6RJL0JqHfpf9IcxUSI8VAQovcbJluYYMeF1D8rqexJXsykDNol2W6J9
 zkUhu+rXVjkd5mxogyU8jPVFs0w0NNcM6Mrh4UdjXGLcHhpaa7BnEGf9ra8xsTjZKldT9NTuNF
 qTi/2UdOord7K58h1mQXpcGPw8obNGNY7+e1hE0eYiOjt5TnUJpkspi3E+mRNaXVAAtuQJ4Dr7
 fRM=
X-SBRS: 2.7
X-MesageID: 22346397
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,305,1589256000"; d="scan'208";a="22346397"
Subject: Re: [PATCH v2 1/4] x86/xen: remove 32-bit Xen PV guest support
To: Boris Ostrovsky <boris.ostrovsky@oracle.com>, Juergen Gross
 <jgross@suse.com>, <xen-devel@lists.xenproject.org>, <x86@kernel.org>,
 <linux-kernel@vger.kernel.org>
References: <20200701110650.16172-1-jgross@suse.com>
 <20200701110650.16172-2-jgross@suse.com>
 <6d0b517a-6c53-61d3-117b-40e33e013037@oracle.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <9f8cc440-82f0-d6d8-945d-19c48f69a6b0@citrix.com>
Date: Fri, 3 Jul 2020 00:24:06 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <6d0b517a-6c53-61d3-117b-40e33e013037@oracle.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Ingo Molnar <mingo@redhat.com>,
 Borislav Petkov <bp@alien8.de>, Andy Lutomirski <luto@kernel.org>,
 "H. Peter Anvin" <hpa@zytor.com>, Thomas Gleixner <tglx@linutronix.de>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 02/07/2020 23:59, Boris Ostrovsky wrote:
> On 7/1/20 7:06 AM, Juergen Gross wrote:
>>  
>> -#ifdef CONFIG_X86_PAE
>> -static void xen_set_pte_atomic(pte_t *ptep, pte_t pte)
>> -{
>> -	trace_xen_mmu_set_pte_atomic(ptep, pte);
>> -	__xen_set_pte(ptep, pte);
>
> Probably not for this series but I wonder whether __xen_set_pte() should
> continue to use hypercall now that we are 64-bit only.

The hypercall path is a SYSCALL (and SYSRET out).

The "writeable" PTE path is a #PF, followed by an x86 instruction
emulation, which then reaches the same logic as the hypercall path (and
an IRET out).

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu Jul 02 23:27:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 02 Jul 2020 23:27:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jr8cH-0005va-C9; Thu, 02 Jul 2020 23:27:41 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WabN=AN=oracle.com=boris.ostrovsky@srs-us1.protection.inumbo.net>)
 id 1jr8cG-0005vV-Ju
 for xen-devel@lists.xenproject.org; Thu, 02 Jul 2020 23:27:40 +0000
X-Inumbo-ID: 9e98cf76-bcbb-11ea-bca7-bc764e2007e4
Received: from userp2130.oracle.com (unknown [156.151.31.86])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9e98cf76-bcbb-11ea-bca7-bc764e2007e4;
 Thu, 02 Jul 2020 23:27:39 +0000 (UTC)
Received: from pps.filterd (userp2130.oracle.com [127.0.0.1])
 by userp2130.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 062NMkUp119523;
 Thu, 2 Jul 2020 23:27:32 GMT
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com;
 h=subject : to : cc :
 references : from : message-id : date : mime-version : in-reply-to :
 content-type : content-transfer-encoding; s=corp-2020-01-29;
 bh=i/tsRFX8JHpMl5wiN39lBbymh9q/GHG0u/XCh+bRodU=;
 b=hnLKPKeT5NMY6L9PBMyt+A2/H197eXGoiG12wr2K0KQ0cicPM4DRtPocxdOzwJ5UzI7h
 cO3vONz4RYHkIAnB7xdIFIcnaCnxp8eJnOsYKGOZnsb9YkY4gbq3dqw/MdZVmWPuW6qa
 FmnHtoL68Wvq6R3AoiqOooz1P0LgJ49hZvdGvD97EIo4wavOLljDp38J+bd6wLBvfQKH
 EXMZNDlBtIvBRK/9mGowkfHYdxrHMVILH1huC3gNzslZ9ucsq4tH3w/pAp2S8SH3t8Pz
 xpTQbwIDSy2AmW1ouvIwh1SuYpIBPJPPJ6ThEZHkkWlrMUm9Hdyx5nCRJRChJjd4OdT6 mw== 
Received: from userp3030.oracle.com (userp3030.oracle.com [156.151.31.80])
 by userp2130.oracle.com with ESMTP id 31ywrc19xg-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=FAIL);
 Thu, 02 Jul 2020 23:27:31 +0000
Received: from pps.filterd (userp3030.oracle.com [127.0.0.1])
 by userp3030.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 062NHsaE170427;
 Thu, 2 Jul 2020 23:27:31 GMT
Received: from userv0122.oracle.com (userv0122.oracle.com [156.151.31.75])
 by userp3030.oracle.com with ESMTP id 31xg21pvwr-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Thu, 02 Jul 2020 23:27:31 +0000
Received: from abhmp0020.oracle.com (abhmp0020.oracle.com [141.146.116.26])
 by userv0122.oracle.com (8.14.4/8.14.4) with ESMTP id 062NRTTf031587;
 Thu, 2 Jul 2020 23:27:29 GMT
Received: from [192.168.29.236] (/73.249.50.119)
 by default (Oracle Beehive Gateway v4.0)
 with ESMTP ; Thu, 02 Jul 2020 23:27:29 +0000
Subject: Re: [PATCH v2 1/4] x86/xen: remove 32-bit Xen PV guest support
To: Andrew Cooper <andrew.cooper3@citrix.com>, Juergen Gross
 <jgross@suse.com>, xen-devel@lists.xenproject.org,
 x86@kernel.org, linux-kernel@vger.kernel.org
References: <20200701110650.16172-1-jgross@suse.com>
 <20200701110650.16172-2-jgross@suse.com>
 <6d0b517a-6c53-61d3-117b-40e33e013037@oracle.com>
 <9f8cc440-82f0-d6d8-945d-19c48f69a6b0@citrix.com>
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Message-ID: <30b9ee7e-e590-d55e-6eb4-1623521642fd@oracle.com>
Date: Thu, 2 Jul 2020 19:27:26 -0400
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <9f8cc440-82f0-d6d8-945d-19c48f69a6b0@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-US
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9670
 signatures=668680
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 bulkscore=0
 spamscore=0 phishscore=0
 malwarescore=0 mlxlogscore=999 adultscore=0 mlxscore=0 suspectscore=0
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2004280000
 definitions=main-2007020156
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9670
 signatures=668680
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 bulkscore=0
 spamscore=0 mlxlogscore=999
 clxscore=1015 cotscore=-2147483648 priorityscore=1501 lowpriorityscore=0
 malwarescore=0 mlxscore=0 adultscore=0 suspectscore=0 impostorscore=0
 phishscore=0 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2004280000 definitions=main-2007020156
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Ingo Molnar <mingo@redhat.com>,
 Borislav Petkov <bp@alien8.de>, Andy Lutomirski <luto@kernel.org>,
 "H. Peter Anvin" <hpa@zytor.com>, Thomas Gleixner <tglx@linutronix.de>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 7/2/20 7:24 PM, Andrew Cooper wrote:
> On 02/07/2020 23:59, Boris Ostrovsky wrote:
>> On 7/1/20 7:06 AM, Juergen Gross wrote:
>>>  
>>> -#ifdef CONFIG_X86_PAE
>>> -static void xen_set_pte_atomic(pte_t *ptep, pte_t pte)
>>> -{
>>> -	trace_xen_mmu_set_pte_atomic(ptep, pte);
>>> -	__xen_set_pte(ptep, pte);
>> Probably not for this series but I wonder whether __xen_set_pte() should
>> continue to use hypercall now that we are 64-bit only.
> The hypercall path is a SYSCALL (and SYSRET out).
>
> The "writeable" PTE path is a #PF, followed by an x86 instruction
> emulation, which then reaches the same logic as the hypercall path (and
> an IRET out).


Then we should at least update the comment.


-boris



From xen-devel-bounces@lists.xenproject.org Fri Jul 03 04:07:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jul 2020 04:07:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jrCyk-00052O-9L; Fri, 03 Jul 2020 04:07:10 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=XYi7=AO=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jrCyi-00052J-SI
 for xen-devel@lists.xenproject.org; Fri, 03 Jul 2020 04:07:08 +0000
X-Inumbo-ID: a7d7c7b4-bce2-11ea-8916-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a7d7c7b4-bce2-11ea-8916-12813bfff9fa;
 Fri, 03 Jul 2020 04:07:05 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=RmkkA+RVE91I1CLBU6xLtThdRqKKqINNcQJpDxJ/3y0=; b=62JjkOQuqdsxl7BPp+SAX91fP
 N4JE+gCAzqm1zgJYR/D2Q7Ow1xiIy5l2+winiID3/U0SY7eY+DI3L5wYaDsXkh5KK+y+1oAo2EcoA
 HZD0hk4ofZvTaOxWkEw/oOiWjAUYuw14A7EbrJMOtFTVguv0+zOK28O9weJ94mqVXiV+Q=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jrCye-0004my-Kt; Fri, 03 Jul 2020 04:07:04 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jrCye-0002Pb-7P; Fri, 03 Jul 2020 04:07:04 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jrCye-00055n-6S; Fri, 03 Jul 2020 04:07:04 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151544-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-upstream-unstable test] 151544: tolerable FAIL - PUSHED
X-Osstest-Failures: qemu-upstream-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 qemu-upstream-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 qemu-upstream-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 qemu-upstream-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 qemu-upstream-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-upstream-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-upstream-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 qemu-upstream-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 qemu-upstream-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 qemu-upstream-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 qemu-upstream-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-upstream-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 qemu-upstream-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 qemu-upstream-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 qemu-upstream-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-upstream-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-upstream-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 qemu-upstream-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 qemu-upstream-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 qemu-upstream-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 qemu-upstream-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-upstream-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-upstream-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 qemu-upstream-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 qemu-upstream-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 qemu-upstream-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 qemu-upstream-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-upstream-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-upstream-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 qemu-upstream-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 qemu-upstream-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 qemu-upstream-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 qemu-upstream-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 qemu-upstream-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 qemu-upstream-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 qemu-upstream-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 qemu-upstream-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-upstream-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-upstream-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 qemu-upstream-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 qemu-upstream-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 qemu-upstream-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 qemu-upstream-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 qemu-upstream-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 qemu-upstream-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 qemu-upstream-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 qemu-upstream-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
X-Osstest-Versions-This: qemuu=ea6d3cd1ed79d824e605a70c3626bc437c386260
X-Osstest-Versions-That: qemuu=410cc30fdc590417ae730d635bbc70257adf6750
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 03 Jul 2020 04:07:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151544 qemu-upstream-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151544/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 149875
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 149875
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 149875
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 149875
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop             fail never pass

version targeted for testing:
 qemuu                ea6d3cd1ed79d824e605a70c3626bc437c386260
baseline version:
 qemuu                410cc30fdc590417ae730d635bbc70257adf6750

Last test of basis   149875  2020-04-29 12:38:30 Z   64 days
Testing same since   151544  2020-07-02 15:39:12 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Anthony PERARD <anthony.perard@citrix.com>
  Paolo Bonzini <pbonzini@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/qemu-xen.git
   410cc30fdc..ea6d3cd1ed  ea6d3cd1ed79d824e605a70c3626bc437c386260 -> master


From xen-devel-bounces@lists.xenproject.org Fri Jul 03 04:31:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jul 2020 04:31:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jrDM4-0007Rl-G5; Fri, 03 Jul 2020 04:31:16 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=XYi7=AO=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jrDM2-0007RL-Fo
 for xen-devel@lists.xenproject.org; Fri, 03 Jul 2020 04:31:14 +0000
X-Inumbo-ID: 03193c18-bce6-11ea-891a-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 03193c18-bce6-11ea-891a-12813bfff9fa;
 Fri, 03 Jul 2020 04:31:07 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=Vc9Opi9+UUGZhDBB6TTYJqIYx7keQc7tybotaPUrkO0=; b=bohMyxFPy/MZ5xyz0ru/WszR4
 /tWq49R8mi9ms9hpud/ea0ZfPHcO8nO/7xr6EYyu+4cEuriRXh21oDvPxZgPVC/MciSPWr3Tb4sjn
 StxCn4r66OP6hdokWBa9PmAgTbOCohM8fSCkS6HafWYn8IePCpKIfTXksw52LoavPpl/o=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jrDLu-0005MA-F9; Fri, 03 Jul 2020 04:31:06 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jrDLu-0003AX-3M; Fri, 03 Jul 2020 04:31:06 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jrDLu-00021t-2P; Fri, 03 Jul 2020 04:31:06 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151540-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 151540: regressions - FAIL
X-Osstest-Failures: linux-linus:test-arm64-arm64-libvirt-xsm:guest-start/debian.repeat:fail:regression
 linux-linus:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:allowable
 linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
X-Osstest-Versions-This: linux=cd77006e01b3198c75fb7819b3d0ff89709539bb
X-Osstest-Versions-That: linux=1b5044021070efa3259f3e9548dc35d1eb6aa844
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 03 Jul 2020 04:31:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151540 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151540/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-libvirt-xsm 16 guest-start/debian.repeat fail REGR. vs. 151214

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds    16 guest-start/debian.repeat fail REGR. vs. 151214

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 151214
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 151214
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 151214
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 151214
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 151214
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 151214
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 151214
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 151214
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass

version targeted for testing:
 linux                cd77006e01b3198c75fb7819b3d0ff89709539bb
baseline version:
 linux                1b5044021070efa3259f3e9548dc35d1eb6aa844

Last test of basis   151214  2020-06-18 02:27:46 Z   15 days
Failing since        151236  2020-06-19 19:10:35 Z   13 days   17 attempts
Testing same since   151540  2020-07-02 13:31:21 Z    0 days    1 attempts

------------------------------------------------------------
488 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 23012 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Jul 03 05:08:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jul 2020 05:08:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jrDw7-00029o-Kq; Fri, 03 Jul 2020 05:08:31 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=XYi7=AO=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jrDw6-00029j-Dj
 for xen-devel@lists.xenproject.org; Fri, 03 Jul 2020 05:08:30 +0000
X-Inumbo-ID: 3bb67cf2-bceb-11ea-b7bb-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3bb67cf2-bceb-11ea-b7bb-bc764e2007e4;
 Fri, 03 Jul 2020 05:08:29 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=bZ+U/6nF6RWIuETCBLIc6DE1qhEWRf/LIsjgTkq5Kqs=; b=YPNZ4o8V5bXjKroT8rHdfhq1q
 T/piFx1x/HukLleojvP2xIBudL2qJxjnGwPV6he9fEUlz9MHrDz7aCS3G4QIlyQZUEREqJrqMk3Zr
 XM088kwyk8IHzn5/jjV2KC5OqayMKW5YHllr9zHsRRO0TuqBi0PAaCdyy0krGi6XlG0m0=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jrDw4-0006Lc-VA; Fri, 03 Jul 2020 05:08:28 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jrDw4-0004Ng-A2; Fri, 03 Jul 2020 05:08:28 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jrDw4-0003gj-9S; Fri, 03 Jul 2020 05:08:28 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151550-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 151550: all pass - PUSHED
X-Osstest-Versions-This: ovmf=0622a7b1b203ad4ab1675533e958792fc1afc12b
X-Osstest-Versions-That: ovmf=c8edb70945099fd35a0997d3f3db105efc144e13
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 03 Jul 2020 05:08:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151550 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151550/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 0622a7b1b203ad4ab1675533e958792fc1afc12b
baseline version:
 ovmf                 c8edb70945099fd35a0997d3f3db105efc144e13

Last test of basis   151532  2020-07-02 07:45:27 Z    0 days
Testing same since   151550  2020-07-02 20:09:20 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Pierre Gondois <pierre.gondois@arm.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   c8edb70945..0622a7b1b2  0622a7b1b203ad4ab1675533e958792fc1afc12b -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Fri Jul 03 05:15:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jul 2020 05:15:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jrE2u-0002z2-Dl; Fri, 03 Jul 2020 05:15:32 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=CbuU=AO=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jrE2s-0002yx-PU
 for xen-devel@lists.xenproject.org; Fri, 03 Jul 2020 05:15:30 +0000
X-Inumbo-ID: 3617d24a-bcec-11ea-891d-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 3617d24a-bcec-11ea-891d-12813bfff9fa;
 Fri, 03 Jul 2020 05:15:30 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 33B3FB03E;
 Fri,  3 Jul 2020 05:15:29 +0000 (UTC)
Subject: Re: [PATCH v2 1/4] x86/xen: remove 32-bit Xen PV guest support
To: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org, x86@kernel.org, linux-kernel@vger.kernel.org
References: <20200701110650.16172-1-jgross@suse.com>
 <20200701110650.16172-2-jgross@suse.com>
 <6d0b517a-6c53-61d3-117b-40e33e013037@oracle.com>
From: Jürgen Groß <jgross@suse.com>
Message-ID: <96159aed-9fdb-9fcb-a1b1-7c6c2c47e6a1@suse.com>
Date: Fri, 3 Jul 2020 07:15:27 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.9.0
MIME-Version: 1.0
In-Reply-To: <6d0b517a-6c53-61d3-117b-40e33e013037@oracle.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Ingo Molnar <mingo@redhat.com>,
 Borislav Petkov <bp@alien8.de>, Andy Lutomirski <luto@kernel.org>,
 "H. Peter Anvin" <hpa@zytor.com>, Thomas Gleixner <tglx@linutronix.de>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 03.07.20 00:59, Boris Ostrovsky wrote:
> On 7/1/20 7:06 AM, Juergen Gross wrote:
>> Xen requires 64-bit machines today, and since Xen 4.14 it can be
>> built without 32-bit PV guest support. There is no need to carry the
>> burden of 32-bit PV guest support in the kernel any longer, as new
>> guests can be either HVM or PVH, or they can use a 64-bit kernel.
>>
>> Remove the 32-bit Xen PV support from the kernel.
>>
>> Signed-off-by: Juergen Gross <jgross@suse.com>
>> ---
>>   arch/x86/entry/entry_32.S      | 109 +----------
>>   arch/x86/include/asm/proto.h   |   2 +-
>>   arch/x86/include/asm/segment.h |   2 +-
>>   arch/x86/kernel/head_32.S      |  31 ---
>>   arch/x86/xen/Kconfig           |   3 +-
>>   arch/x86/xen/Makefile          |   3 +-
>>   arch/x86/xen/apic.c            |  17 --
>>   arch/x86/xen/enlighten_pv.c    |  48 +----
> 
> 
> Should we drop PageHighMem() test in set_aliased_prot()?
> 
> 
> (And there are a few other places where it is used, in mmu_pv.c)

Yes, will drop those.

> 
> 
> 
>> @@ -555,13 +547,8 @@ static void xen_load_tls(struct thread_struct *t, unsigned int cpu)
>>   	 * exception between the new %fs descriptor being loaded and
>>   	 * %fs being effectively cleared at __switch_to().
>>   	 */
>> -	if (paravirt_get_lazy_mode() == PARAVIRT_LAZY_CPU) {
>> -#ifdef CONFIG_X86_32
>> -		lazy_load_gs(0);
>> -#else
> 
> 
> I think this also needs an adjustment to the preceding comment.

Yes.

> 
> 
>>   
>> -#ifdef CONFIG_X86_PAE
>> -static void xen_set_pte_atomic(pte_t *ptep, pte_t pte)
>> -{
>> -	trace_xen_mmu_set_pte_atomic(ptep, pte);
>> -	__xen_set_pte(ptep, pte);
> 
> 
> Probably not for this series, but I wonder whether __xen_set_pte() should
> continue to use a hypercall now that we are 64-bit only.

As Andrew already wrote, the hypercall will be cheaper.

I'll adjust the comment, though.

> 
> 
>> @@ -654,14 +621,12 @@ static int __xen_pgd_walk(struct mm_struct *mm, pgd_t *pgd,
> 
> 
> Comment above should be updated.

Yes.


Juergen


From xen-devel-bounces@lists.xenproject.org Fri Jul 03 06:03:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jul 2020 06:03:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jrEmt-00071k-4e; Fri, 03 Jul 2020 06:03:03 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=VO50=AO=qq.com=jinchen1227@srs-us1.protection.inumbo.net>)
 id 1jrEmn-00071a-GM
 for xen-devel@lists.xenproject.org; Fri, 03 Jul 2020 06:03:02 +0000
X-Inumbo-ID: d284b566-bcf2-11ea-8923-12813bfff9fa
Received: from smtpbgeu1.qq.com (unknown [52.59.177.22])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d284b566-bcf2-11ea-8923-12813bfff9fa;
 Fri, 03 Jul 2020 06:02:51 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=qq.com; s=s201512;
 t=1593756164; bh=uSL9O6FYZvNNmDKGVHwcm66t1VAgLHRwlBt0Em1xOZA=;
 h=From:To:Subject:Mime-Version:Date:Message-ID;
 b=aP/s8xaXjOxRJzD4DHx+dcN8wWrk7tC5tpGqIsYT+An468NBl6g1rP7Dkk4dnUgg2
 /vCZxZ90OqhvfN/uSDXWdFFHzWSwRIT2VOu3kWYSK9CdR/e3TS52az9mieJLHEcU82
 buUxgjIntM16T1SqeWjIC75+1U707eTxqANKDjcU=
X-HAS-ATTACH: no
X-QQ-BUSINESS-ORIGIN: 2
X-Originating-IP: 120.201.105.119
X-QQ-STYLE: 
X-QQ-mid: webmail712t1593756164t4706451
From: "jinchen" <jinchen1227@qq.com>
To: "Julien Grall" <julien@xen.org>,
 "xen-devel" <xen-devel@lists.xenproject.org>
Subject: Re: [Xen ARM64] Save coredump log when xen/dom0 crash on ARM64?
Mime-Version: 1.0
Content-Type: multipart/alternative;
 boundary="----=_NextPart_5EFECA03_10E771F8_7BF8E463"
Content-Transfer-Encoding: 8Bit
Date: Fri, 3 Jul 2020 14:02:43 +0800
X-Priority: 3
Message-ID: <tencent_C1F76837DF25C430969ABF6E4A557260AA0A@qq.com>
X-QQ-MIME: TCMime 1.0 by Tencent
X-Mailer: QQMail 2.x
X-QQ-Mailer: QQMail 2.x
X-QQ-SENDSIZE: 520
Received: from qq.com (unknown [127.0.0.1]) by smtp.qq.com (ESMTP) with SMTP
 id ; Fri, 03 Jul 2020 14:02:44 +0800 (CST)
Feedback-ID: webmail:qq.com:bgforeign:bgforeign11
X-QQ-Bgrelay: 1
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Thank you for your reply!

On 02/07/2020 02:41, jinchen wrote:
>> Hello xen experts:
>>
>> Is there any way to save xen and dom0 core dump log when xen or dom0
>> crash on ARM64 platform?

> Usually all the crash stack trace (Xen and Dom0) should be output on the
> Xen Console.

But what if I don't connect a debug serial and want to check the dump error after a reboot?

>> I find that the kdump seems can't run on ARM64 platform?

> We don't have support for kdump/kexec on Arm in Xen yet.

kdump seems to be the appropriate way to do this, but it doesn't support Xen on arm64.

>> Are there any patches or other way to achieve this goal?

> I am not aware of any patches, but they would be welcomed.

> For other way, it depends what exactly you expect. Do you need more than
> the stack trace?

The stack trace will be OK; if other information (like memory/vcpu/domain, etc.) can be saved as well, that would be better.





From xen-devel-bounces@lists.xenproject.org Fri Jul 03 07:58:35 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jul 2020 07:58:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jrGaJ-0007Wv-OB; Fri, 03 Jul 2020 07:58:11 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Kpvw=AO=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jrGaI-0007Wq-Ke
 for xen-devel@lists.xenproject.org; Fri, 03 Jul 2020 07:58:10 +0000
X-Inumbo-ID: efdd61e8-bd02-11ea-bca7-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id efdd61e8-bd02-11ea-bca7-bc764e2007e4;
 Fri, 03 Jul 2020 07:58:10 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=y831c3SyaYAviC2e5WYtZjpRaHUueP+RBPpzoR37b8I=; b=WDzZ2nhG96liyYN8lPXOKggYBH
 LGwv7KLXO8TN7b3HAHwMm7rzCqFHs9UUFlDR7g1BA1yrzHMcbPEZ3+8are/MLK5gNx4mDD2pmMTrC
 Txk4e/fSTlK1PeZzUgbMweCxrBGl64FFob0VJYwZSZ5ifb53utXixLggvElw3uHlCcbU=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jrGaC-00012e-RV; Fri, 03 Jul 2020 07:58:04 +0000
Received: from 54-240-197-234.amazon.com ([54.240.197.234]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jrGaC-0005im-Ij; Fri, 03 Jul 2020 07:58:04 +0000
Subject: Re: [PATCH v4 02/10] x86/vmx: add IPT cpu feature
To: Michał Leszczyński <michal.leszczynski@cert.pl>
References: <cover.1593519420.git.michal.leszczynski@cert.pl>
 <3fa0c3e7-9243-b1bb-d6ad-a3bd21437782@xen.org>
 <0e02a9b5-ba7a-43a2-3369-a4410f216ddb@suse.com>
 <9a3f4d58-e5ad-c7a1-6c5f-42aa92101ca1@xen.org>
 <d0165fc3-fb05-2e49-eff3-e45a674b00e1@suse.com>
 <7f915146-6566-e5a7-14d2-cb2319838562@xen.org>
 <7ac383c2-0264-cc75-a85b-13c1fdfb0bd6@suse.com>
 <dadeeedd-a9e1-d5f4-4754-8da3f065fd44@xen.org>
 <187614050.18497785.1593721708078.JavaMail.zimbra@cert.pl>
From: Julien Grall <julien@xen.org>
Message-ID: <2e01fca9-efcd-7d09-355f-29245bbc8721@xen.org>
Date: Fri, 3 Jul 2020 08:58:01 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:68.0)
 Gecko/20100101 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <187614050.18497785.1593721708078.JavaMail.zimbra@cert.pl>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin Tian <kevin.tian@intel.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Tamas Lengyel <tamas.lengyel@intel.com>, Jan Beulich <jbeulich@suse.com>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Jun Nakajima <jun.nakajima@intel.com>, xen-devel@lists.xenproject.org,
 Luwei Kang <luwei.kang@intel.com>,
 Roger Pau Monné <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi,

On 02/07/2020 21:28, Michał Leszczyński wrote:
> ----- On 2 Jul 2020, at 16:31, Julien Grall julien@xen.org wrote:
> 
>> On 02/07/2020 15:17, Jan Beulich wrote:
>>> On 02.07.2020 16:14, Julien Grall wrote:
>>>> On 02/07/2020 14:30, Jan Beulich wrote:
>>>>> On 02.07.2020 11:57, Julien Grall wrote:
>>>>>> On 02/07/2020 10:18, Jan Beulich wrote:
>>>>>>> On 02.07.2020 10:54, Julien Grall wrote:
>>>>>>>> On 02/07/2020 09:50, Jan Beulich wrote:
>>>>>>>>> On 02.07.2020 10:42, Julien Grall wrote:
>>>>>>>>>> On 02/07/2020 09:29, Jan Beulich wrote:
>>>>>> Another way to do it would be for the toolstack to do the mapping. At which
>>>>>> point, you still need a hypercall to do the mapping (probably the
>>>>>> acquire hypercall).
>>>>>
>>>>> There may not be any mapping to do in such a contrived, fixed-range
>>>>> environment. This scenario was specifically to demonstrate that the
>>>>> way the mapping gets done may be arch-specific (here: a no-op)
>>>>> despite the allocation not being so.
>>>> You are arguing about extreme cases, which I don't think is really helpful
>>>> here. Yes, if you want to map at a fixed address in a guest you may not
>>>> need the acquire hypercall. But in most of the other cases (see has for
>>>> the tools) you will need it.
>>>>
>>>> So what's the problem with requesting to have the acquire hypercall
>>>> implemented in common code?
>>>
>>> Didn't we start out by you asking that there be as little common code
>>> as possible for the time being?
>>
>> Well as I said I am not in favor of having the allocation in common
>> code, but if you want to keep it then you also want to implement
>> map/unmap in the common code ([1], [2]).
>>
>>> I have no issue with putting the
>>> acquire implementation there ...
>> This was definitely not clear given how you argued with extreme cases...
>>
>> Cheers,
>>
>> [1] <9a3f4d58-e5ad-c7a1-6c5f-42aa92101ca1@xen.org>
>> [2] <cf41855b-9e5e-13f2-9ab0-04b98f8b3cdd@xen.org>
>>
>> --
>> Julien Grall
> 
> 
> Guys,
> 
> could you express your final decision on this topic?

Can you move the acquire implementation from x86 to common code?

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Jul 03 09:08:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jul 2020 09:08:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jrHgU-0005Iz-Ih; Fri, 03 Jul 2020 09:08:38 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=rI2g=AO=virtuozzo.com=vsementsov@srs-us1.protection.inumbo.net>)
 id 1jrHgT-0005Ii-H7
 for xen-devel@lists.xenproject.org; Fri, 03 Jul 2020 09:08:37 +0000
X-Inumbo-ID: c4b43b68-bd0c-11ea-8952-12813bfff9fa
Received: from EUR01-VE1-obe.outbound.protection.outlook.com (unknown
 [40.107.14.123]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c4b43b68-bd0c-11ea-8952-12813bfff9fa;
 Fri, 03 Jul 2020 09:08:32 +0000 (UTC)
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=PG75k+Ti7TRubwqEaO8gSY94vwbW7S+bDjZ537tvQvfdICuTL7ozIvER5SmyIsp6xbkn/Zppr4SG1sy/2wjtZkwUAFbhyvxHwf3oENnkXUyTVrum7TSu47AR9dSteFqqTU+9a0hF0fso0mwOpkNlP2ZTZ/T46TapmK+OZ0g7fpLwq9V6s7ndgUVR59H+3FsaXUuKWtQ842tX3/VFpPfWarLDaA2/jPHieCPogQRCVDH5q5LQye2YX0dEWYDLVZroB/ftRcLOO6DI3dC6ARE9WcxJTzf47fHSLzU2s9ugUC7VpcvcFNPJ3AgxNXEDSoxc7TpnjPetLY+ufEJnBZ/IAg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; 
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=xfHSVlIo44tg6W+pIWSY0xjOo9Pxh+Mei6mpQ0bwGBI=;
 b=hfPRJtx8r8PeEjR/UnHIt4VrpuTkD5v+YQ6J1bYNuxkJhRj/nWuCwzMDf56F/n2Fm/qenz2eren1svMqpug8wPdn3uPcZpwCdeqvn3aQc5VKtM3QYurZ5hTDb6G8rcvd38DKWSvwjaxsZI2Rk0c1a8GgPxzIhNZP3F4snAkFRjTD6qBlUXlwuBpJeR/UtgW9O69eMZPlYmSOVrc6mgT2c9g8goshsxtXZHcagwTMQJU/TdwEWwdFiyBKLT60mt5j5Lkzme4XQ+js+OfgFd5xnRluRGYfShD9QjeDXDyAPuGiOvbgOTq7lTOzOWelOjbf4+2BtALDTwcfCoIlOFvMUA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=virtuozzo.com; dmarc=pass action=none
 header.from=virtuozzo.com; dkim=pass header.d=virtuozzo.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=virtuozzo.com;
 s=selector2;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=xfHSVlIo44tg6W+pIWSY0xjOo9Pxh+Mei6mpQ0bwGBI=;
 b=I/zt7nZIKaKbYh5fzHlsIrXpH/QWCYg8H75OfNqP/KwfjJw3Yp7BvjYAcH1vW0rGfY+u2q+xroNqJqFHjqMy4rek6QZY9OU3JMWlET8kKTX+hS9zeeWt41pcqCCsOJs8thnk/251sTIEGCqMHd9O/c7BUMcSUYFeGKWdRFLqF9k=
Authentication-Results: nongnu.org; dkim=none (message not signed)
 header.d=none;nongnu.org; dmarc=none action=none header.from=virtuozzo.com;
Received: from AM7PR08MB5494.eurprd08.prod.outlook.com (2603:10a6:20b:dc::15)
 by AM7PR08MB5448.eurprd08.prod.outlook.com (2603:10a6:20b:106::10)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3153.28; Fri, 3 Jul
 2020 09:08:31 +0000
Received: from AM7PR08MB5494.eurprd08.prod.outlook.com
 ([fe80::a408:2f0f:bc6c:d312]) by AM7PR08MB5494.eurprd08.prod.outlook.com
 ([fe80::a408:2f0f:bc6c:d312%4]) with mapi id 15.20.3131.028; Fri, 3 Jul 2020
 09:08:31 +0000
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
To: qemu-devel@nongnu.org
Subject: [PATCH v11 1/8] error: auto propagated local_err
Date: Fri,  3 Jul 2020 12:08:09 +0300
Message-Id: <20200703090816.3295-2-vsementsov@virtuozzo.com>
X-Mailer: git-send-email 2.21.0
In-Reply-To: <20200703090816.3295-1-vsementsov@virtuozzo.com>
References: <20200703090816.3295-1-vsementsov@virtuozzo.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: AM0PR10CA0022.EURPRD10.PROD.OUTLOOK.COM
 (2603:10a6:208:17c::32) To AM7PR08MB5494.eurprd08.prod.outlook.com
 (2603:10a6:20b:dc::15)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
Received: from localhost.localdomain (185.215.60.15) by
 AM0PR10CA0022.EURPRD10.PROD.OUTLOOK.COM (2603:10a6:208:17c::32) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3153.23 via Frontend
 Transport; Fri, 3 Jul 2020 09:08:29 +0000
X-Mailer: git-send-email 2.21.0
X-Originating-IP: [185.215.60.15]
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 223dc08a-cc50-47bf-3d53-08d81f30a7d8
X-MS-TrafficTypeDiagnostic: AM7PR08MB5448:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <AM7PR08MB544893CD56056C56E2F94BF3C16A0@AM7PR08MB5448.eurprd08.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:10000;
X-Forefront-PRVS: 045315E1EE
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: 1jbAEV1m1poFTYrkwpLSbN/kErl6G+bZeWohFVMoDtJNGWXm+Fmt8INoJVrgbeNzuGwWqhP1f+VL+NLOqdG/MIwyr5n1gRElbKtMMHFc0QnGXNl7RhfVKv1J30ODZzSa057Rm4CDUh9uFSbfCODxQBTDr14ILdf2J3AylZFy/74YjGHUf0+ydR5UKH3b0qh5yke4Ay76gA1vPxH5PcobrkFu6A+iSpuiLP5xeJtrRnG17eBOkFYEvR+HTPLrVCPOLjEZ1xR/tQqf+UeEGuxNeapz/cOGYbzfNzF1+nGrT445yXAjEEEGHRgIyeuc8XUDywLbKr8unsXxoxHWstIaKA==
X-Forefront-Antispam-Report: CIP:255.255.255.255; CTRY:; LANG:en; SCL:1; SRV:;
 IPV:NLI; SFV:NSPM; H:AM7PR08MB5494.eurprd08.prod.outlook.com; PTR:; CAT:NONE;
 SFTY:;
 SFS:(4636009)(39840400004)(366004)(376002)(136003)(396003)(346002)(66946007)(66476007)(66556008)(2906002)(86362001)(1076003)(30864003)(8676002)(83380400001)(6512007)(5660300002)(8936002)(6666004)(186003)(36756003)(4326008)(6506007)(16526019)(6486002)(6916009)(7416002)(26005)(478600001)(956004)(52116002)(316002)(54906003)(2616005);
 DIR:OUT; SFP:1102; 
X-OriginatorOrg: virtuozzo.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 223dc08a-cc50-47bf-3d53-08d81f30a7d8
X-MS-Exchange-CrossTenant-AuthSource: AM7PR08MB5494.eurprd08.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 03 Jul 2020 09:08:30.9515 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 0bc7f26d-0264-416e-a6fc-8352af79c58f
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM7PR08MB5448
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin Wolf <kwolf@redhat.com>, vsementsov@virtuozzo.com,
 Laszlo Ersek <lersek@redhat.com>, qemu-block@nongnu.org,
 Paul Durrant <paul@xen.org>,
 =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>,
 Christian Schoenebeck <qemu_oss@crudebyte.com>, armbru@redhat.com,
 groug@kaod.org, Stefano Stabellini <sstabellini@kernel.org>,
 Gerd Hoffmann <kraxel@redhat.com>, Stefan Hajnoczi <stefanha@redhat.com>,
 Anthony Perard <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org,
 Max Reitz <mreitz@redhat.com>, eblake@redhat.com,
 Michael Roth <mdroth@linux.vnet.ibm.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Introduce a new ERRP_AUTO_PROPAGATE macro, to be used at the start of
functions with an errp OUT parameter.

It has three goals:

1. Fix an issue with error_fatal and error_prepend/error_append_hint: the
user can't see this additional information, because exit() happens in
error_setg before the information is added. [Reported by Greg Kurz]

2. Fix an issue with error_abort and error_propagate: when we wrap
error_abort with local_err+error_propagate, the resulting coredump will
refer to error_propagate and not to the place where the error happened.
(The macro itself doesn't fix the issue, but it allows us to [3.] drop
the local_err+error_propagate pattern, which will definitely fix the
issue.) [Reported by Kevin Wolf]

3. Drop the local_err+error_propagate pattern, which is used to work around
void functions with an errp parameter when the caller wants to know the
resulting status. (Note: these functions could instead simply be updated to
return an int error code.)

To achieve these goals, later patches will add invocations
of this macro at the start of functions that either use
error_prepend/error_append_hint (solving 1) or use
local_err+error_propagate to check errors, switching those
functions to use *errp instead (solving 2 and 3).

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Paul Durrant <paul@xen.org>
Reviewed-by: Greg Kurz <groug@kaod.org>
Reviewed-by: Eric Blake <eblake@redhat.com>
---

Cc: Eric Blake <eblake@redhat.com>
Cc: Kevin Wolf <kwolf@redhat.com>
Cc: Max Reitz <mreitz@redhat.com>
Cc: Greg Kurz <groug@kaod.org>
Cc: Christian Schoenebeck <qemu_oss@crudebyte.com>
Cc: Stefan Hajnoczi <stefanha@redhat.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Anthony Perard <anthony.perard@citrix.com>
Cc: Paul Durrant <paul@xen.org>
Cc: "Philippe Mathieu-Daudé" <philmd@redhat.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Markus Armbruster <armbru@redhat.com>
Cc: Michael Roth <mdroth@linux.vnet.ibm.com>
Cc: qemu-devel@nongnu.org
Cc: qemu-block@nongnu.org
Cc: xen-devel@lists.xenproject.org

 include/qapi/error.h | 205 ++++++++++++++++++++++++++++++++++++-------
 1 file changed, 172 insertions(+), 33 deletions(-)

diff --git a/include/qapi/error.h b/include/qapi/error.h
index 5ceb3ace06..b54aedbfd7 100644
--- a/include/qapi/error.h
+++ b/include/qapi/error.h
@@ -39,7 +39,7 @@
  *   • pointer-valued functions return non-null / null pointer, and
  *   • integer-valued functions return non-negative / negative.
  *
- * How to:
+ * = Deal with Error object =
  *
  * Create an error:
  *     error_setg(errp, "situation normal, all fouled up");
@@ -73,28 +73,91 @@
  * reporting it (primarily useful in testsuites):
  *     error_free_or_abort(&err);
  *
- * Pass an existing error to the caller:
- *     error_propagate(errp, err);
- * where Error **errp is a parameter, by convention the last one.
+ * = Deal with Error ** function parameter =
  *
- * Pass an existing error to the caller with the message modified:
- *     error_propagate_prepend(errp, err);
+ * A function may use the error system to return errors. In this case, the
+ * function defines an Error **errp parameter, by convention the last one (with
+ * exceptions for functions using ... or va_list).
  *
- * Avoid
- *     error_propagate(errp, err);
- *     error_prepend(errp, "Could not frobnicate '%s': ", name);
- * because this fails to prepend when @errp is &error_fatal.
+ * The caller may then pass in the following errp values:
+ *
+ * 1. &error_abort
+ *    Any error will result in abort().
+ * 2. &error_fatal
+ *    Any error will result in exit() with a non-zero status.
+ * 3. NULL
+ *    No error reporting through the errp parameter.
+ * 4. The address of a NULL-initialized Error *err
+ *    Any error will populate errp with an error object.
  *
- * Create a new error and pass it to the caller:
+ * The following rules then implement the correct semantics desired by the
+ * caller.
+ *
+ * Create a new error to pass to the caller:
  *     error_setg(errp, "situation normal, all fouled up");
  *
- * Call a function and receive an error from it:
+ * Call another errp-based function:
+ *     f(..., errp);
+ *
+ * == Checking the success of a subcall ==
+ *
+ * If a function returns a value indicating an error in addition to setting
+ * errp (which is recommended), then you don't need any additional code, just
+ * do:
+ *
+ *     int ret = f(..., errp);
+ *     if (ret < 0) {
+ *         ... handle error ...
+ *         return ret;
+ *     }
+ *
+ * If a function returns nothing (not recommended for new code), the only way
+ * to check success is by consulting errp; doing this safely requires the use
+ * of the ERRP_AUTO_PROPAGATE macro, like this:
+ *
+ *     int our_func(..., Error **errp) {
+ *         ERRP_AUTO_PROPAGATE();
+ *         ...
+ *         subcall(..., errp);
+ *         if (*errp) {
+ *             ...
+ *             return -EINVAL;
+ *         }
+ *         ...
+ *     }
+ *
+ * ERRP_AUTO_PROPAGATE takes care of wrapping the original errp as needed, so
+ * that the rest of the function can directly use errp (including
+ * dereferencing), and any errors will then be propagated to the original
+ * errp when leaving the function.
+ *
+ * In some cases, we need to check the result of a subcall, but do not want
+ * to propagate the Error object to our caller. In such cases we don't need
+ * ERRP_AUTO_PROPAGATE, just a local Error object:
+ *
+ * Receive an error without passing it on:
  *     Error *err = NULL;
- *     foo(arg, &err);
+ *     subcall(arg, &err);
  *     if (err) {
  *         handle the error...
+ *         error_free(err);
  *     }
  *
+ * Note that older code that did not use ERRP_AUTO_PROPAGATE would instead need
+ * a local Error * variable and the use of error_propagate() to properly handle
+ * all possible caller values of errp. Now this is DEPRECATED* (see below).
+ *
+ * Note that any function that wants to modify an error object, such as by
+ * calling error_append_hint or error_prepend, must use ERRP_AUTO_PROPAGATE,
+ * so that a caller's use of &error_fatal can see the additional information.
+ *
+ * In rare cases, we need to pass an existing Error object to the caller by hand:
+ *     error_propagate(errp, err);
+ *
+ * Pass an existing error to the caller with the message modified:
+ *     error_propagate_prepend(errp, err);
+ *
+ *
  * Call a function ignoring errors:
  *     foo(arg, NULL);
  *
@@ -104,26 +167,6 @@
  * Call a function treating errors as fatal:
  *     foo(arg, &error_fatal);
  *
- * Receive an error and pass it on to the caller:
- *     Error *err = NULL;
- *     foo(arg, &err);
- *     if (err) {
- *         handle the error...
- *         error_propagate(errp, err);
- *     }
- * where Error **errp is a parameter, by convention the last one.
- *
- * Do *not* "optimize" this to
- *     foo(arg, errp);
- *     if (*errp) { // WRONG!
- *         handle the error...
- *     }
- * because errp may be NULL!
- *
- * But when all you do with the error is pass it on, please use
- *     foo(arg, errp);
- * for readability.
- *
  * Receive and accumulate multiple errors (first one wins):
  *     Error *err = NULL, *local_err = NULL;
  *     foo(arg, &err);
@@ -151,6 +194,61 @@
  *         error_setg(&err, ...); // WRONG!
  *     }
  * because this may pass a non-null err to error_setg().
+ *
+ * DEPRECATED*
+ *
+ * The following pattern of receiving, checking, and then forwarding an error
+ * to the caller by hand is now deprecated:
+ *
+ *     Error *err = NULL;
+ *     foo(arg, &err);
+ *     if (err) {
+ *         handle the error...
+ *         error_propagate(errp, err);
+ *     }
+ *
+ * Instead, use the ERRP_AUTO_PROPAGATE macro.
+ *
+ * The old pattern is deprecated for two reasons:
+ *
+ * 1. Issue with error_abort and error_propagate: when we wrap error_abort
+ * with local_err+error_propagate, the resulting coredump will refer to
+ * error_propagate and not to the place where the error happened.
+ *
+ * 2. A lot of extra code repeating the same pattern
+ *
+ * How to update old code to use ERRP_AUTO_PROPAGATE?
+ *
+ * All you need is to add an ERRP_AUTO_PROPAGATE() invocation at the start of
+ * the function; then you may safely dereference errp to check for errors,
+ * without any additional local Error variables or calls to error_propagate().
+ *
+ * Example:
+ *
+ * old code
+ *
+ *     void fn(..., Error **errp) {
+ *         Error *err = NULL;
+ *         foo(arg, &err);
+ *         if (err) {
+ *             handle the error...
+ *             error_propagate(errp, err);
+ *             return;
+ *         }
+ *         ...
+ *     }
+ *
+ * updated code
+ *
+ *     void fn(..., Error **errp) {
+ *         ERRP_AUTO_PROPAGATE();
+ *         foo(arg, errp);
+ *         if (*errp) {
+ *             handle the error...
+ *             return;
+ *         }
+ *         ...
+ *     }
  */
 
 #ifndef ERROR_H
@@ -359,6 +457,47 @@ void error_set_internal(Error **errp,
                         ErrorClass err_class, const char *fmt, ...)
     GCC_FMT_ATTR(6, 7);
 
+typedef struct ErrorPropagator {
+    Error *local_err;
+    Error **errp;
+} ErrorPropagator;
+
+static inline void error_propagator_cleanup(ErrorPropagator *prop)
+{
+    error_propagate(prop->errp, prop->local_err);
+}
+
+G_DEFINE_AUTO_CLEANUP_CLEAR_FUNC(ErrorPropagator, error_propagator_cleanup);
+
+/*
+ * ERRP_AUTO_PROPAGATE
+ *
+ * This macro exists to assist with proper error handling in a function which
+ * uses an Error **errp parameter.  It must be used as the first line of a
+ * function which modifies an error (with error_prepend, error_append_hint, or
+ * similar) or which wants to dereference *errp.  It is still safe (but
+ * useless) to use in other functions.
+ *
+ * If errp is NULL or points to error_fatal, it is rewritten to point to a
+ * local Error object, which will be automatically propagated to the original
+ * errp on function exit (see error_propagator_cleanup).
+ *
+ * After invocation of this macro it is always safe to dereference errp
+ * (as it's not NULL anymore) and to add information with error_prepend or
+ * error_append_hint (as, if it was error_fatal, we swapped it for a local
+ * error to be propagated on cleanup).
+ *
+ * Note: we don't wrap the error_abort case, as we want the resulting coredump
+ * to point to the place where the error happened, not to error_propagate.
+ */
+#define ERRP_AUTO_PROPAGATE() \
+    g_auto(ErrorPropagator) _auto_errp_prop = {.errp = errp}; \
+    do { \
+        if (!errp || errp == &error_fatal) { \
+            errp = &_auto_errp_prop.local_err; \
+        } \
+    } while (0)
+
 /*
  * Special error destination to abort on error.
  * See error_setg() and error_propagate() for details.
-- 
2.21.0



From xen-devel-bounces@lists.xenproject.org Fri Jul 03 09:08:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jul 2020 09:08:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jrHge-0005Kl-7x; Fri, 03 Jul 2020 09:08:48 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=rI2g=AO=virtuozzo.com=vsementsov@srs-us1.protection.inumbo.net>)
 id 1jrHgd-0005Ii-HV
 for xen-devel@lists.xenproject.org; Fri, 03 Jul 2020 09:08:47 +0000
X-Inumbo-ID: c8680668-bd0c-11ea-8952-12813bfff9fa
Received: from EUR01-VE1-obe.outbound.protection.outlook.com (unknown
 [40.107.14.113]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c8680668-bd0c-11ea-8952-12813bfff9fa;
 Fri, 03 Jul 2020 09:08:39 +0000 (UTC)
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=LEXBFWXolDSfynR9N5L4t8RaSCh6a4M4GlO0Uq/4ssKuJMmEuGikukMKvCoq+zS1rj6nhHnZdsAUmYltRV9toOKrn/GixbjwWNOZRJvyQr64eeNDfCNG/WbbbnAcC+YRYYrexU0IjIpLRO/7GMQnQU0hPXl17Mub1nxQ5M1zlkBH7OqRyh5JHEy2Gi0KmWBhO42d/PWVM7BBqWD0fYY7otSWRfjdAza6pfZeYrCSGGYOrSIfyqVY8pe2kIRQyKIb0H4bbtA+QTZYLAm0ndkIucn3q1brnLyg8ECHv/HvtI6iIRwaENIR4O4CcifDn/ipXA4vj5FTXCUP8CPiwna91g==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; 
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=b3kGYHvlNVVBkZs5SrhRbcEeydQhpoNgVqChlyjDEQc=;
 b=jzuBFyVTi6nAIGkq+o5nFSLO2CkadTeb23dX+gArX2ry3gAfQ6QdZF0xmkppoWIjg/iDPhXXyQGTdpbnP2bwoU7Vg+sOBW4pE9MIuvIQu2yclWZ+nGm2Tez8gfzLoELJU4EA4u0edSAVpUrJag/0v9EnR6tMn9ybDH4nI8RUKYnw/btuDgW/ONqdWso9iH6l3krl5l73Je1y35P/f2VNVFwmzRu0jNrIdAtg/UWIQMc90stZupvbFAgyG9yDE1xXcXMNkhQ+4t94yZjthOEyVkGcDDsHS1g/juI/qkjGbkZOw7pJ/DSWBSybLVBJRB3Hsz8GynEQFhurZDKi3rjYyg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=virtuozzo.com; dmarc=pass action=none
 header.from=virtuozzo.com; dkim=pass header.d=virtuozzo.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=virtuozzo.com;
 s=selector2;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=b3kGYHvlNVVBkZs5SrhRbcEeydQhpoNgVqChlyjDEQc=;
 b=MmX8+GFG+7VmKxkygdLlcYAFXKe+BzwVrKp77UJZvRyQ3fix8dQJ4ZkE2sv+N0APEoQAnYP1ic6fd0SY8iXwMLUqQZMxCgZbZ0Uf+5cK0ExoyPk6lH/M9CDocqI1jUD91Hv4ySUvIQt0+5A+6bF469Z+NCa34yknY+9//7wRxrk=
Authentication-Results: nongnu.org; dkim=none (message not signed)
 header.d=none;nongnu.org; dmarc=none action=none header.from=virtuozzo.com;
Received: from AM7PR08MB5494.eurprd08.prod.outlook.com (2603:10a6:20b:dc::15)
 by AM7PR08MB5448.eurprd08.prod.outlook.com (2603:10a6:20b:106::10)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3153.28; Fri, 3 Jul
 2020 09:08:37 +0000
Received: from AM7PR08MB5494.eurprd08.prod.outlook.com
 ([fe80::a408:2f0f:bc6c:d312]) by AM7PR08MB5494.eurprd08.prod.outlook.com
 ([fe80::a408:2f0f:bc6c:d312%4]) with mapi id 15.20.3131.028; Fri, 3 Jul 2020
 09:08:37 +0000
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
To: qemu-devel@nongnu.org
Subject: [PATCH v11 8/8] xen: introduce ERRP_AUTO_PROPAGATE
Date: Fri,  3 Jul 2020 12:08:16 +0300
Message-Id: <20200703090816.3295-9-vsementsov@virtuozzo.com>
X-Mailer: git-send-email 2.21.0
In-Reply-To: <20200703090816.3295-1-vsementsov@virtuozzo.com>
References: <20200703090816.3295-1-vsementsov@virtuozzo.com>
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-ClientProxiedBy: AM0PR10CA0022.EURPRD10.PROD.OUTLOOK.COM
 (2603:10a6:208:17c::32) To AM7PR08MB5494.eurprd08.prod.outlook.com
 (2603:10a6:20b:dc::15)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
Received: from localhost.localdomain (185.215.60.15) by
 AM0PR10CA0022.EURPRD10.PROD.OUTLOOK.COM (2603:10a6:208:17c::32) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3153.23 via Frontend
 Transport; Fri, 3 Jul 2020 09:08:36 +0000
X-Mailer: git-send-email 2.21.0
X-Originating-IP: [185.215.60.15]
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: e43f4e21-3220-4e9b-4abb-08d81f30abbd
X-MS-TrafficTypeDiagnostic: AM7PR08MB5448:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <AM7PR08MB54480D6AF6D6246E0CF7C03BC16A0@AM7PR08MB5448.eurprd08.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:111;
X-Forefront-PRVS: 045315E1EE
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Forefront-Antispam-Report: CIP:255.255.255.255; CTRY:; LANG:en; SCL:1; SRV:;
 IPV:NLI; SFV:NSPM; H:AM7PR08MB5494.eurprd08.prod.outlook.com; PTR:; CAT:NONE;
 SFTY:;
 SFS:(4636009)(39840400004)(366004)(376002)(136003)(396003)(346002)(69590400007)(66946007)(66476007)(66556008)(2906002)(86362001)(1076003)(30864003)(8676002)(83380400001)(6512007)(5660300002)(8936002)(6666004)(186003)(36756003)(4326008)(6506007)(16526019)(6486002)(6916009)(7416002)(26005)(478600001)(956004)(52116002)(316002)(54906003)(2616005);
 DIR:OUT; SFP:1102; 
X-OriginatorOrg: virtuozzo.com
X-MS-Exchange-CrossTenant-Network-Message-Id: e43f4e21-3220-4e9b-4abb-08d81f30abbd
X-MS-Exchange-CrossTenant-AuthSource: AM7PR08MB5494.eurprd08.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 03 Jul 2020 09:08:37.5166 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 0bc7f26d-0264-416e-a6fc-8352af79c58f
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM7PR08MB5448
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin Wolf <kwolf@redhat.com>, Stefano Stabellini <sstabellini@kernel.org>,
 qemu-block@nongnu.org, Paul Durrant <paul@xen.org>, armbru@redhat.com,
 groug@kaod.org, vsementsov@virtuozzo.com,
 Stefan Hajnoczi <stefanha@redhat.com>,
 Anthony Perard <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org,
 Max Reitz <mreitz@redhat.com>, eblake@redhat.com
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

If we want to add information to errp (with error_prepend() or
error_append_hint()), we must use the ERRP_AUTO_PROPAGATE macro.
Otherwise, this information will not be added when errp == &error_fatal
(the program exits prior to the error_append_hint() or
error_prepend() call).  Fix such cases.

If we want to check for errors after a call to an errp-based function,
we previously had to introduce local_err and then propagate it to errp.
Instead, use the ERRP_AUTO_PROPAGATE macro; the benefits are:
1. No need for an explicit error_propagate() call
2. No need for an explicit local_err variable: use errp directly
3. ERRP_AUTO_PROPAGATE leaves errp as is unless it's NULL or
   &error_fatal, which means we don't break error_abort
   (we'll abort at error_setg, not at error_propagate)

This commit was generated by the command

    sed -n '/^X86 Xen CPUs$/,/^$/{s/^F: //p}' MAINTAINERS | \
    xargs git ls-files | grep '\.[hc]$' | \
    xargs spatch \
        --sp-file scripts/coccinelle/auto-propagated-errp.cocci \
        --macro-file scripts/cocci-macro-file.h \
        --in-place --no-show-diff --max-width 80
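
As a sanity check, the sed stage of that pipeline just extracts the F: file
patterns of the named MAINTAINERS section. Here it is run against a small
stand-in fragment (the file path and entries below are invented for
illustration, not the real MAINTAINERS content):

```shell
# Show what the sed stage feeds to xargs/spatch, using a fake fragment.
cat > /tmp/MAINTAINERS.sample <<'EOF'
X86 Xen CPUs
M: Someone <someone@example.org>
F: hw/xen/
F: hw/block/xen*

Other section
F: other/path
EOF

# Print only the F: patterns of the "X86 Xen CPUs" section
sed -n '/^X86 Xen CPUs$/,/^$/{s/^F: //p}' /tmp/MAINTAINERS.sample
```

In the real command, the resulting list is piped through git ls-files,
filtered to .c/.h files, and handed to spatch with the
auto-propagated-errp.cocci script.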

Reported-by: Kevin Wolf <kwolf@redhat.com>
Reported-by: Greg Kurz <groug@kaod.org>
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
 hw/block/dataplane/xen-block.c |  17 +++---
 hw/block/xen-block.c           | 102 ++++++++++++++-------------------
 hw/pci-host/xen_igd_pt.c       |   7 +--
 hw/xen/xen-backend.c           |   7 +--
 hw/xen/xen-bus.c               |  92 +++++++++++++----------------
 hw/xen/xen-host-pci-device.c   |  27 +++++----
 hw/xen/xen_pt.c                |  25 ++++----
 hw/xen/xen_pt_config_init.c    |  17 +++---
 8 files changed, 128 insertions(+), 166 deletions(-)

diff --git a/hw/block/dataplane/xen-block.c b/hw/block/dataplane/xen-block.c
index 5f8f15778b..1a077cc05f 100644
--- a/hw/block/dataplane/xen-block.c
+++ b/hw/block/dataplane/xen-block.c
@@ -723,8 +723,8 @@ void xen_block_dataplane_start(XenBlockDataPlane *dataplane,
                                unsigned int protocol,
                                Error **errp)
 {
+    ERRP_AUTO_PROPAGATE();
     XenDevice *xendev = dataplane->xendev;
-    Error *local_err = NULL;
     unsigned int ring_size;
     unsigned int i;
 
@@ -760,9 +760,8 @@ void xen_block_dataplane_start(XenBlockDataPlane *dataplane,
     }
 
     xen_device_set_max_grant_refs(xendev, dataplane->nr_ring_ref,
-                                  &local_err);
-    if (local_err) {
-        error_propagate(errp, local_err);
+                                  errp);
+    if (*errp) {
         goto stop;
     }
 
@@ -770,9 +769,8 @@ void xen_block_dataplane_start(XenBlockDataPlane *dataplane,
                                               dataplane->ring_ref,
                                               dataplane->nr_ring_ref,
                                               PROT_READ | PROT_WRITE,
-                                              &local_err);
-    if (local_err) {
-        error_propagate(errp, local_err);
+                                              errp);
+    if (*errp) {
         goto stop;
     }
 
@@ -805,9 +803,8 @@ void xen_block_dataplane_start(XenBlockDataPlane *dataplane,
     dataplane->event_channel =
         xen_device_bind_event_channel(xendev, event_channel,
                                       xen_block_dataplane_event, dataplane,
-                                      &local_err);
-    if (local_err) {
-        error_propagate(errp, local_err);
+                                      errp);
+    if (*errp) {
         goto stop;
     }
 
diff --git a/hw/block/xen-block.c b/hw/block/xen-block.c
index a775fba7c0..623ae5b8e0 100644
--- a/hw/block/xen-block.c
+++ b/hw/block/xen-block.c
@@ -195,6 +195,7 @@ static const BlockDevOps xen_block_dev_ops = {
 
 static void xen_block_realize(XenDevice *xendev, Error **errp)
 {
+    ERRP_AUTO_PROPAGATE();
     XenBlockDevice *blockdev = XEN_BLOCK_DEVICE(xendev);
     XenBlockDeviceClass *blockdev_class =
         XEN_BLOCK_DEVICE_GET_CLASS(xendev);
@@ -202,7 +203,6 @@ static void xen_block_realize(XenDevice *xendev, Error **errp)
     XenBlockVdev *vdev = &blockdev->props.vdev;
     BlockConf *conf = &blockdev->props.conf;
     BlockBackend *blk = conf->blk;
-    Error *local_err = NULL;
 
     if (vdev->type == XEN_BLOCK_VDEV_TYPE_INVALID) {
         error_setg(errp, "vdev property not set");
@@ -212,9 +212,8 @@ static void xen_block_realize(XenDevice *xendev, Error **errp)
     trace_xen_block_realize(type, vdev->disk, vdev->partition);
 
     if (blockdev_class->realize) {
-        blockdev_class->realize(blockdev, &local_err);
-        if (local_err) {
-            error_propagate(errp, local_err);
+        blockdev_class->realize(blockdev, errp);
+        if (*errp) {
             return;
         }
     }
@@ -280,8 +279,8 @@ static void xen_block_frontend_changed(XenDevice *xendev,
                                        enum xenbus_state frontend_state,
                                        Error **errp)
 {
+    ERRP_AUTO_PROPAGATE();
     enum xenbus_state backend_state = xen_device_backend_get_state(xendev);
-    Error *local_err = NULL;
 
     switch (frontend_state) {
     case XenbusStateInitialised:
@@ -290,15 +289,13 @@ static void xen_block_frontend_changed(XenDevice *xendev,
             break;
         }
 
-        xen_block_disconnect(xendev, &local_err);
-        if (local_err) {
-            error_propagate(errp, local_err);
+        xen_block_disconnect(xendev, errp);
+        if (*errp) {
             break;
         }
 
-        xen_block_connect(xendev, &local_err);
-        if (local_err) {
-            error_propagate(errp, local_err);
+        xen_block_connect(xendev, errp);
+        if (*errp) {
             break;
         }
 
@@ -311,9 +308,8 @@ static void xen_block_frontend_changed(XenDevice *xendev,
 
     case XenbusStateClosed:
     case XenbusStateUnknown:
-        xen_block_disconnect(xendev, &local_err);
-        if (local_err) {
-            error_propagate(errp, local_err);
+        xen_block_disconnect(xendev, errp);
+        if (*errp) {
             break;
         }
 
@@ -665,9 +661,9 @@ static void xen_block_blockdev_del(const char *node_name, Error **errp)
 static char *xen_block_blockdev_add(const char *id, QDict *qdict,
                                     Error **errp)
 {
+    ERRP_AUTO_PROPAGATE();
     const char *driver = qdict_get_try_str(qdict, "driver");
     BlockdevOptions *options = NULL;
-    Error *local_err = NULL;
     char *node_name;
     Visitor *v;
 
@@ -688,10 +684,9 @@ static char *xen_block_blockdev_add(const char *id, QDict *qdict,
         goto fail;
     }
 
-    qmp_blockdev_add(options, &local_err);
+    qmp_blockdev_add(options, errp);
 
-    if (local_err) {
-        error_propagate(errp, local_err);
+    if (*errp) {
         goto fail;
     }
 
@@ -710,14 +705,12 @@ fail:
 
 static void xen_block_drive_destroy(XenBlockDrive *drive, Error **errp)
 {
+    ERRP_AUTO_PROPAGATE();
     char *node_name = drive->node_name;
 
     if (node_name) {
-        Error *local_err = NULL;
-
-        xen_block_blockdev_del(node_name, &local_err);
-        if (local_err) {
-            error_propagate(errp, local_err);
+        xen_block_blockdev_del(node_name, errp);
+        if (*errp) {
             return;
         }
         g_free(node_name);
@@ -731,6 +724,7 @@ static XenBlockDrive *xen_block_drive_create(const char *id,
                                              const char *device_type,
                                              QDict *opts, Error **errp)
 {
+    ERRP_AUTO_PROPAGATE();
     const char *params = qdict_get_try_str(opts, "params");
     const char *mode = qdict_get_try_str(opts, "mode");
     const char *direct_io_safe = qdict_get_try_str(opts, "direct-io-safe");
@@ -738,7 +732,6 @@ static XenBlockDrive *xen_block_drive_create(const char *id,
     char *driver = NULL;
     char *filename = NULL;
     XenBlockDrive *drive = NULL;
-    Error *local_err = NULL;
     QDict *file_layer;
     QDict *driver_layer;
 
@@ -817,13 +810,12 @@ static XenBlockDrive *xen_block_drive_create(const char *id,
 
     g_assert(!drive->node_name);
     drive->node_name = xen_block_blockdev_add(drive->id, driver_layer,
-                                              &local_err);
+                                              errp);
 
     qobject_unref(driver_layer);
 
 done:
-    if (local_err) {
-        error_propagate(errp, local_err);
+    if (*errp) {
         xen_block_drive_destroy(drive, NULL);
         return NULL;
     }
@@ -848,8 +840,8 @@ static void xen_block_iothread_destroy(XenBlockIOThread *iothread,
 static XenBlockIOThread *xen_block_iothread_create(const char *id,
                                                    Error **errp)
 {
+    ERRP_AUTO_PROPAGATE();
     XenBlockIOThread *iothread = g_new(XenBlockIOThread, 1);
-    Error *local_err = NULL;
     QDict *opts;
     QObject *ret_data = NULL;
 
@@ -858,13 +850,11 @@ static XenBlockIOThread *xen_block_iothread_create(const char *id,
     opts = qdict_new();
     qdict_put_str(opts, "qom-type", TYPE_IOTHREAD);
     qdict_put_str(opts, "id", id);
-    qmp_object_add(opts, &ret_data, &local_err);
+    qmp_object_add(opts, &ret_data, errp);
     qobject_unref(opts);
     qobject_unref(ret_data);
 
-    if (local_err) {
-        error_propagate(errp, local_err);
-
+    if (*errp) {
         g_free(iothread->id);
         g_free(iothread);
         return NULL;
@@ -876,6 +866,7 @@ static XenBlockIOThread *xen_block_iothread_create(const char *id,
 static void xen_block_device_create(XenBackendInstance *backend,
                                     QDict *opts, Error **errp)
 {
+    ERRP_AUTO_PROPAGATE();
     XenBus *xenbus = xen_backend_get_bus(backend);
     const char *name = xen_backend_get_name(backend);
     unsigned long number;
@@ -883,7 +874,6 @@ static void xen_block_device_create(XenBackendInstance *backend,
     XenBlockDrive *drive = NULL;
     XenBlockIOThread *iothread = NULL;
     XenDevice *xendev = NULL;
-    Error *local_err = NULL;
     const char *type;
     XenBlockDevice *blockdev;
 
@@ -915,16 +905,15 @@ static void xen_block_device_create(XenBackendInstance *backend,
         goto fail;
     }
 
-    drive = xen_block_drive_create(vdev, device_type, opts, &local_err);
+    drive = xen_block_drive_create(vdev, device_type, opts, errp);
     if (!drive) {
-        error_propagate_prepend(errp, local_err, "failed to create drive: ");
+        error_prepend(errp, "failed to create drive: ");
         goto fail;
     }
 
-    iothread = xen_block_iothread_create(vdev, &local_err);
-    if (local_err) {
-        error_propagate_prepend(errp, local_err,
-                                "failed to create iothread: ");
+    iothread = xen_block_iothread_create(vdev, errp);
+    if (*errp) {
+        error_prepend(errp, "failed to create iothread: ");
         goto fail;
     }
 
@@ -932,32 +921,29 @@ static void xen_block_device_create(XenBackendInstance *backend,
     blockdev = XEN_BLOCK_DEVICE(xendev);
 
     if (!object_property_set_str(OBJECT(xendev), "vdev", vdev,
-                                 &local_err)) {
-        error_propagate_prepend(errp, local_err, "failed to set 'vdev': ");
+                                 errp)) {
+        error_prepend(errp, "failed to set 'vdev': ");
         goto fail;
     }
 
     if (!object_property_set_str(OBJECT(xendev), "drive",
                                  xen_block_drive_get_node_name(drive),
-                                 &local_err)) {
-        error_propagate_prepend(errp, local_err, "failed to set 'drive': ");
+                                 errp)) {
+        error_prepend(errp, "failed to set 'drive': ");
         goto fail;
     }
 
     if (!object_property_set_str(OBJECT(xendev), "iothread", iothread->id,
-                                 &local_err)) {
-        error_propagate_prepend(errp, local_err,
-                                "failed to set 'iothread': ");
+                                 errp)) {
+        error_prepend(errp, "failed to set 'iothread': ");
         goto fail;
     }
 
     blockdev->iothread = iothread;
     blockdev->drive = drive;
 
-    if (!qdev_realize_and_unref(DEVICE(xendev), BUS(xenbus), &local_err)) {
-        error_propagate_prepend(errp, local_err,
-                                "realization of device %s failed: ",
-                                type);
+    if (!qdev_realize_and_unref(DEVICE(xendev), BUS(xenbus), errp)) {
+        error_prepend(errp, "realization of device %s failed: ", type);
         goto fail;
     }
 
@@ -981,31 +967,29 @@ fail:
 static void xen_block_device_destroy(XenBackendInstance *backend,
                                      Error **errp)
 {
+    ERRP_AUTO_PROPAGATE();
     XenDevice *xendev = xen_backend_get_device(backend);
     XenBlockDevice *blockdev = XEN_BLOCK_DEVICE(xendev);
     XenBlockVdev *vdev = &blockdev->props.vdev;
     XenBlockDrive *drive = blockdev->drive;
     XenBlockIOThread *iothread = blockdev->iothread;
-    Error *local_err = NULL;
 
     trace_xen_block_device_destroy(vdev->number);
 
     object_unparent(OBJECT(xendev));
 
     if (iothread) {
-        xen_block_iothread_destroy(iothread, &local_err);
-        if (local_err) {
-            error_propagate_prepend(errp, local_err,
-                                    "failed to destroy iothread: ");
+        xen_block_iothread_destroy(iothread, errp);
+        if (*errp) {
+            error_prepend(errp, "failed to destroy iothread: ");
             return;
         }
     }
 
     if (drive) {
-        xen_block_drive_destroy(drive, &local_err);
-        if (local_err) {
-            error_propagate_prepend(errp, local_err,
-                                    "failed to destroy drive: ");
+        xen_block_drive_destroy(drive, errp);
+        if (*errp) {
+            error_prepend(errp, "failed to destroy drive: ");
             return;
         }
     }
diff --git a/hw/pci-host/xen_igd_pt.c b/hw/pci-host/xen_igd_pt.c
index efcc9347ff..29ade9ca25 100644
--- a/hw/pci-host/xen_igd_pt.c
+++ b/hw/pci-host/xen_igd_pt.c
@@ -79,17 +79,16 @@ static void host_pci_config_read(int pos, int len, uint32_t *val, Error **errp)
 
 static void igd_pt_i440fx_realize(PCIDevice *pci_dev, Error **errp)
 {
+    ERRP_AUTO_PROPAGATE();
     uint32_t val = 0;
     size_t i;
     int pos, len;
-    Error *local_err = NULL;
 
     for (i = 0; i < ARRAY_SIZE(igd_host_bridge_infos); i++) {
         pos = igd_host_bridge_infos[i].offset;
         len = igd_host_bridge_infos[i].len;
-        host_pci_config_read(pos, len, &val, &local_err);
-        if (local_err) {
-            error_propagate(errp, local_err);
+        host_pci_config_read(pos, len, &val, errp);
+        if (*errp) {
             return;
         }
         pci_default_write_config(pci_dev, pos, val, len);
diff --git a/hw/xen/xen-backend.c b/hw/xen/xen-backend.c
index da065f81b7..1cc0694053 100644
--- a/hw/xen/xen-backend.c
+++ b/hw/xen/xen-backend.c
@@ -98,9 +98,9 @@ static void xen_backend_list_remove(XenBackendInstance *backend)
 void xen_backend_device_create(XenBus *xenbus, const char *type,
                                const char *name, QDict *opts, Error **errp)
 {
+    ERRP_AUTO_PROPAGATE();
     const XenBackendImpl *impl = xen_backend_table_lookup(type);
     XenBackendInstance *backend;
-    Error *local_error = NULL;
 
     if (!impl) {
         return;
@@ -110,9 +110,8 @@ void xen_backend_device_create(XenBus *xenbus, const char *type,
     backend->xenbus = xenbus;
     backend->name = g_strdup(name);
 
-    impl->create(backend, opts, &local_error);
-    if (local_error) {
-        error_propagate(errp, local_error);
+    impl->create(backend, opts, errp);
+    if (*errp) {
         g_free(backend->name);
         g_free(backend);
         return;
diff --git a/hw/xen/xen-bus.c b/hw/xen/xen-bus.c
index c4e2162ae9..2ea5144ef0 100644
--- a/hw/xen/xen-bus.c
+++ b/hw/xen/xen-bus.c
@@ -53,9 +53,9 @@ static char *xen_device_get_frontend_path(XenDevice *xendev)
 
 static void xen_device_unplug(XenDevice *xendev, Error **errp)
 {
+    ERRP_AUTO_PROPAGATE();
     XenBus *xenbus = XEN_BUS(qdev_get_parent_bus(DEVICE(xendev)));
     const char *type = object_get_typename(OBJECT(xendev));
-    Error *local_err = NULL;
     xs_transaction_t tid;
 
     trace_xen_device_unplug(type, xendev->name);
@@ -69,14 +69,14 @@ again:
     }
 
     xs_node_printf(xenbus->xsh, tid, xendev->backend_path, "online",
-                   &local_err, "%u", 0);
-    if (local_err) {
+                   errp, "%u", 0);
+    if (*errp) {
         goto abort;
     }
 
     xs_node_printf(xenbus->xsh, tid, xendev->backend_path, "state",
-                   &local_err, "%u", XenbusStateClosing);
-    if (local_err) {
+                   errp, "%u", XenbusStateClosing);
+    if (*errp) {
         goto abort;
     }
 
@@ -96,7 +96,6 @@ abort:
      * from ending the transaction.
      */
     xs_transaction_end(xenbus->xsh, tid, true);
-    error_propagate(errp, local_err);
 }
 
 static void xen_bus_print_dev(Monitor *mon, DeviceState *dev, int indent)
@@ -205,15 +204,13 @@ static XenWatch *watch_list_add(XenWatchList *watch_list, const char *node,
                                 const char *key, XenWatchHandler handler,
                                 void *opaque, Error **errp)
 {
+    ERRP_AUTO_PROPAGATE();
     XenWatch *watch = new_watch(node, key, handler, opaque);
-    Error *local_err = NULL;
 
     notifier_list_add(&watch_list->notifiers, &watch->notifier);
 
-    xs_node_watch(watch_list->xsh, node, key, watch->token, &local_err);
-    if (local_err) {
-        error_propagate(errp, local_err);
-
+    xs_node_watch(watch_list->xsh, node, key, watch->token, errp);
+    if (*errp) {
         notifier_remove(&watch->notifier);
         free_watch(watch);
 
@@ -255,11 +252,11 @@ static void xen_bus_backend_create(XenBus *xenbus, const char *type,
                                    const char *name, char *path,
                                    Error **errp)
 {
+    ERRP_AUTO_PROPAGATE();
     xs_transaction_t tid;
     char **key;
     QDict *opts;
     unsigned int i, n;
-    Error *local_err = NULL;
 
     trace_xen_bus_backend_create(type, path);
 
@@ -314,13 +311,11 @@ again:
         return;
     }
 
-    xen_backend_device_create(xenbus, type, name, opts, &local_err);
+    xen_backend_device_create(xenbus, type, name, opts, errp);
     qobject_unref(opts);
 
-    if (local_err) {
-        error_propagate_prepend(errp, local_err,
-                                "failed to create '%s' device '%s': ",
-                                type, name);
+    if (*errp) {
+        error_prepend(errp, "failed to create '%s' device '%s': ", type, name);
     }
 }
 
@@ -692,9 +687,9 @@ static void xen_device_remove_watch(XenDevice *xendev, XenWatch *watch,
 
 static void xen_device_backend_create(XenDevice *xendev, Error **errp)
 {
+    ERRP_AUTO_PROPAGATE();
     XenBus *xenbus = XEN_BUS(qdev_get_parent_bus(DEVICE(xendev)));
     struct xs_permissions perms[2];
-    Error *local_err = NULL;
 
     xendev->backend_path = xen_device_get_backend_path(xendev);
 
@@ -706,30 +701,27 @@ static void xen_device_backend_create(XenDevice *xendev, Error **errp)
     g_assert(xenbus->xsh);
 
     xs_node_create(xenbus->xsh, XBT_NULL, xendev->backend_path, perms,
-                   ARRAY_SIZE(perms), &local_err);
-    if (local_err) {
-        error_propagate_prepend(errp, local_err,
-                                "failed to create backend: ");
+                   ARRAY_SIZE(perms), errp);
+    if (*errp) {
+        error_prepend(errp, "failed to create backend: ");
         return;
     }
 
     xendev->backend_state_watch =
         xen_device_add_watch(xendev, xendev->backend_path,
                              "state", xen_device_backend_changed,
-                             &local_err);
-    if (local_err) {
-        error_propagate_prepend(errp, local_err,
-                                "failed to watch backend state: ");
+                             errp);
+    if (*errp) {
+        error_prepend(errp, "failed to watch backend state: ");
         return;
     }
 
     xendev->backend_online_watch =
         xen_device_add_watch(xendev, xendev->backend_path,
                              "online", xen_device_backend_changed,
-                             &local_err);
-    if (local_err) {
-        error_propagate_prepend(errp, local_err,
-                                "failed to watch backend online: ");
+                             errp);
+    if (*errp) {
+        error_prepend(errp, "failed to watch backend online: ");
         return;
     }
 }
@@ -866,9 +858,9 @@ static bool xen_device_frontend_exists(XenDevice *xendev)
 
 static void xen_device_frontend_create(XenDevice *xendev, Error **errp)
 {
+    ERRP_AUTO_PROPAGATE();
     XenBus *xenbus = XEN_BUS(qdev_get_parent_bus(DEVICE(xendev)));
     struct xs_permissions perms[2];
-    Error *local_err = NULL;
 
     xendev->frontend_path = xen_device_get_frontend_path(xendev);
 
@@ -885,20 +877,18 @@ static void xen_device_frontend_create(XenDevice *xendev, Error **errp)
         g_assert(xenbus->xsh);
 
         xs_node_create(xenbus->xsh, XBT_NULL, xendev->frontend_path, perms,
-                       ARRAY_SIZE(perms), &local_err);
-        if (local_err) {
-            error_propagate_prepend(errp, local_err,
-                                    "failed to create frontend: ");
+                       ARRAY_SIZE(perms), errp);
+        if (*errp) {
+            error_prepend(errp, "failed to create frontend: ");
             return;
         }
     }
 
     xendev->frontend_state_watch =
         xen_device_add_watch(xendev, xendev->frontend_path, "state",
-                             xen_device_frontend_changed, &local_err);
-    if (local_err) {
-        error_propagate_prepend(errp, local_err,
-                                "failed to watch frontend state: ");
+                             xen_device_frontend_changed, errp);
+    if (*errp) {
+        error_prepend(errp, "failed to watch frontend state: ");
     }
 }
 
@@ -1247,11 +1237,11 @@ static void xen_device_exit(Notifier *n, void *data)
 
 static void xen_device_realize(DeviceState *dev, Error **errp)
 {
+    ERRP_AUTO_PROPAGATE();
     XenDevice *xendev = XEN_DEVICE(dev);
     XenDeviceClass *xendev_class = XEN_DEVICE_GET_CLASS(xendev);
     XenBus *xenbus = XEN_BUS(qdev_get_parent_bus(DEVICE(xendev)));
     const char *type = object_get_typename(OBJECT(xendev));
-    Error *local_err = NULL;
 
     if (xendev->frontend_id == DOMID_INVALID) {
         xendev->frontend_id = xen_domid;
@@ -1267,10 +1257,9 @@ static void xen_device_realize(DeviceState *dev, Error **errp)
         goto unrealize;
     }
 
-    xendev->name = xendev_class->get_name(xendev, &local_err);
-    if (local_err) {
-        error_propagate_prepend(errp, local_err,
-                                "failed to get device name: ");
+    xendev->name = xendev_class->get_name(xendev, errp);
+    if (*errp) {
+        error_prepend(errp, "failed to get device name: ");
         goto unrealize;
     }
 
@@ -1293,22 +1282,19 @@ static void xen_device_realize(DeviceState *dev, Error **errp)
     xendev->feature_grant_copy =
         (xengnttab_grant_copy(xendev->xgth, 0, NULL) == 0);
 
-    xen_device_backend_create(xendev, &local_err);
-    if (local_err) {
-        error_propagate(errp, local_err);
+    xen_device_backend_create(xendev, errp);
+    if (*errp) {
         goto unrealize;
     }
 
-    xen_device_frontend_create(xendev, &local_err);
-    if (local_err) {
-        error_propagate(errp, local_err);
+    xen_device_frontend_create(xendev, errp);
+    if (*errp) {
         goto unrealize;
     }
 
     if (xendev_class->realize) {
-        xendev_class->realize(xendev, &local_err);
-        if (local_err) {
-            error_propagate(errp, local_err);
+        xendev_class->realize(xendev, errp);
+        if (*errp) {
             goto unrealize;
         }
     }
diff --git a/hw/xen/xen-host-pci-device.c b/hw/xen/xen-host-pci-device.c
index 1b44dcafaf..02379c341c 100644
--- a/hw/xen/xen-host-pci-device.c
+++ b/hw/xen/xen-host-pci-device.c
@@ -333,8 +333,8 @@ void xen_host_pci_device_get(XenHostPCIDevice *d, uint16_t domain,
                              uint8_t bus, uint8_t dev, uint8_t func,
                              Error **errp)
 {
+    ERRP_AUTO_PROPAGATE();
     unsigned int v;
-    Error *err = NULL;
 
     d->config_fd = -1;
     d->domain = domain;
@@ -342,36 +342,36 @@ void xen_host_pci_device_get(XenHostPCIDevice *d, uint16_t domain,
     d->dev = dev;
     d->func = func;
 
-    xen_host_pci_config_open(d, &err);
-    if (err) {
+    xen_host_pci_config_open(d, errp);
+    if (*errp) {
         goto error;
     }
 
-    xen_host_pci_get_resource(d, &err);
-    if (err) {
+    xen_host_pci_get_resource(d, errp);
+    if (*errp) {
         goto error;
     }
 
-    xen_host_pci_get_hex_value(d, "vendor", &v, &err);
-    if (err) {
+    xen_host_pci_get_hex_value(d, "vendor", &v, errp);
+    if (*errp) {
         goto error;
     }
     d->vendor_id = v;
 
-    xen_host_pci_get_hex_value(d, "device", &v, &err);
-    if (err) {
+    xen_host_pci_get_hex_value(d, "device", &v, errp);
+    if (*errp) {
         goto error;
     }
     d->device_id = v;
 
-    xen_host_pci_get_dec_value(d, "irq", &v, &err);
-    if (err) {
+    xen_host_pci_get_dec_value(d, "irq", &v, errp);
+    if (*errp) {
         goto error;
     }
     d->irq = v;
 
-    xen_host_pci_get_hex_value(d, "class", &v, &err);
-    if (err) {
+    xen_host_pci_get_hex_value(d, "class", &v, errp);
+    if (*errp) {
         goto error;
     }
     d->class_code = v;
@@ -381,7 +381,6 @@ void xen_host_pci_device_get(XenHostPCIDevice *d, uint16_t domain,
     return;
 
 error:
-    error_propagate(errp, err);
 
     if (d->config_fd >= 0) {
         close(d->config_fd);
diff --git a/hw/xen/xen_pt.c b/hw/xen/xen_pt.c
index ab84443d5e..baa25eb91a 100644
--- a/hw/xen/xen_pt.c
+++ b/hw/xen/xen_pt.c
@@ -777,12 +777,12 @@ static void xen_pt_destroy(PCIDevice *d) {
 
 static void xen_pt_realize(PCIDevice *d, Error **errp)
 {
+    ERRP_AUTO_PROPAGATE();
     XenPCIPassthroughState *s = XEN_PT_DEVICE(d);
     int i, rc = 0;
     uint8_t machine_irq = 0, scratch;
     uint16_t cmd = 0;
     int pirq = XEN_PT_UNASSIGNED_PIRQ;
-    Error *err = NULL;
 
     /* register real device */
     XEN_PT_LOG(d, "Assigning real physical device %02x:%02x.%d"
@@ -793,10 +793,9 @@ static void xen_pt_realize(PCIDevice *d, Error **errp)
     xen_host_pci_device_get(&s->real_device,
                             s->hostaddr.domain, s->hostaddr.bus,
                             s->hostaddr.slot, s->hostaddr.function,
-                            &err);
-    if (err) {
-        error_append_hint(&err, "Failed to \"open\" the real pci device");
-        error_propagate(errp, err);
+                            errp);
+    if (*errp) {
+        error_append_hint(errp, "Failed to \"open\" the real pci device");
         return;
     }
 
@@ -823,11 +822,10 @@ static void xen_pt_realize(PCIDevice *d, Error **errp)
             return;
         }
 
-        xen_pt_setup_vga(s, &s->real_device, &err);
-        if (err) {
-            error_append_hint(&err, "Setup VGA BIOS of passthrough"
-                    " GFX failed");
-            error_propagate(errp, err);
+        xen_pt_setup_vga(s, &s->real_device, errp);
+        if (*errp) {
+            error_append_hint(errp, "Setup VGA BIOS of passthrough"
+                              " GFX failed");
             xen_host_pci_device_put(&s->real_device);
             return;
         }
@@ -840,10 +838,9 @@ static void xen_pt_realize(PCIDevice *d, Error **errp)
     xen_pt_register_regions(s, &cmd);
 
     /* reinitialize each config register to be emulated */
-    xen_pt_config_init(s, &err);
-    if (err) {
-        error_append_hint(&err, "PCI Config space initialisation failed");
-        error_propagate(errp, err);
+    xen_pt_config_init(s, errp);
+    if (*errp) {
+        error_append_hint(errp, "PCI Config space initialisation failed");
         rc = -1;
         goto err_out;
     }
diff --git a/hw/xen/xen_pt_config_init.c b/hw/xen/xen_pt_config_init.c
index d0d7c720a6..af3fbd1bfb 100644
--- a/hw/xen/xen_pt_config_init.c
+++ b/hw/xen/xen_pt_config_init.c
@@ -2008,8 +2008,8 @@ static void xen_pt_config_reg_init(XenPCIPassthroughState *s,
 
 void xen_pt_config_init(XenPCIPassthroughState *s, Error **errp)
 {
+    ERRP_AUTO_PROPAGATE();
     int i, rc;
-    Error *err = NULL;
 
     QLIST_INIT(&s->reg_grps);
 
@@ -2067,13 +2067,14 @@ void xen_pt_config_init(XenPCIPassthroughState *s, Error **errp)
 
                 /* initialize capability register */
                 for (j = 0; regs->size != 0; j++, regs++) {
-                    xen_pt_config_reg_init(s, reg_grp_entry, regs, &err);
-                    if (err) {
-                        error_append_hint(&err, "Failed to init register %d"
-                                " offsets 0x%x in grp_type = 0x%x (%d/%zu)", j,
-                                regs->offset, xen_pt_emu_reg_grps[i].grp_type,
-                                i, ARRAY_SIZE(xen_pt_emu_reg_grps));
-                        error_propagate(errp, err);
+                    xen_pt_config_reg_init(s, reg_grp_entry, regs, errp);
+                    if (*errp) {
+                        error_append_hint(errp, "Failed to init register %d"
+                                          " offsets 0x%x in grp_type = 0x%x (%d/%zu)",
+                                          j,
+                                          regs->offset,
+                                          xen_pt_emu_reg_grps[i].grp_type,
+                                          i, ARRAY_SIZE(xen_pt_emu_reg_grps));
                         xen_pt_config_delete(s);
                         return;
                     }
-- 
2.21.0



From xen-devel-bounces@lists.xenproject.org Fri Jul 03 09:08:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jul 2020 09:08:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jrHgZ-0005Jc-Uq; Fri, 03 Jul 2020 09:08:43 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=rI2g=AO=virtuozzo.com=vsementsov@srs-us1.protection.inumbo.net>)
 id 1jrHgY-0005Ii-HI
 for xen-devel@lists.xenproject.org; Fri, 03 Jul 2020 09:08:42 +0000
X-Inumbo-ID: c4b43b69-bd0c-11ea-8952-12813bfff9fa
Received: from EUR01-VE1-obe.outbound.protection.outlook.com (unknown
 [40.107.14.123]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c4b43b69-bd0c-11ea-8952-12813bfff9fa;
 Fri, 03 Jul 2020 09:08:33 +0000 (UTC)
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Aee0vzM9qW6oADLGgvgy1bQl4F3f4D+nUPDWAKKGadm5xSQ8DvnFDlvzpSvzgjtOAfdYN2HBuXD49P4kKi1vDllWp/Wj+dNa25YVHCEKp0Y+g4KOsHXQz9Ca9DqltRa42xVJZb++tAMIl13juWdCN4in+2KuHCv3RxrbRUwLrFjel3i5T9dAgaoJyS/YSVPTULEaNIIVW9ya0iK1PfwbZQm+JRkgONMhKtW1GvXItHSjs1HcKXGAF1HfQISDXNwZXdX1dVsIDCLqjqMxt3ODc1QwuGYjk2W6cViWwAwGLTZag6Nsfnv6HFldgOgtVDOZt+onEJGnFJQa8Dc05h4mZg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; 
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=K2Xnopx5wQ6IFxisXOfZQRZb+CUoqixGaOMagDkzs8k=;
 b=JqQpHZPZh/u2hiYquILKKIstpLXUl0S0QHhg2DTjtKpJqKNLqYfQcQVc/5O3JHP4FE5kjf9UzjsQX01JL6w6uXDtPykOWaSK2W/aOjK1MsSZ6JVDTh2sKidti6Yx2IBIqxYzZ/TI5OIkPAdQ95O9IvODFIexSmiUfFttslAoUuR6BdbRRQdqvSFyQKE4ggmgut1/69dQ18MPe8/b5KDBt8g37YSRnSIpRY356K1n/89NnOR2+ba8Iff9QkDzi76fjrik1W/7lN+gNpj9LfmZDsMqgMJJkAI8adITZZOf+NCf6NluZ1N3P04ZgBPywdjSkp/xh68QM4JCbLwqzcphOA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=virtuozzo.com; dmarc=pass action=none
 header.from=virtuozzo.com; dkim=pass header.d=virtuozzo.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=virtuozzo.com;
 s=selector2;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=K2Xnopx5wQ6IFxisXOfZQRZb+CUoqixGaOMagDkzs8k=;
 b=md+O+2arOUa9XoG/Yt+SyHQ94kNuuCjf3ktcvmtk1UwttBoFiX1MR52DmxiGm5ztJyVoDnHnp5hkMYr8ihdn9hn5tLX7bU/C78na6yG7kNVxHvZ7wOV7siSh2CbOpKQzhxIsGqt1vS75xU30EXGc4HhGNjAYRV5mRdbKrfvyfnA=
Authentication-Results: nongnu.org; dkim=none (message not signed)
 header.d=none;nongnu.org; dmarc=none action=none header.from=virtuozzo.com;
Received: from AM7PR08MB5494.eurprd08.prod.outlook.com (2603:10a6:20b:dc::15)
 by AM7PR08MB5448.eurprd08.prod.outlook.com (2603:10a6:20b:106::10)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3153.28; Fri, 3 Jul
 2020 09:08:32 +0000
Received: from AM7PR08MB5494.eurprd08.prod.outlook.com
 ([fe80::a408:2f0f:bc6c:d312]) by AM7PR08MB5494.eurprd08.prod.outlook.com
 ([fe80::a408:2f0f:bc6c:d312%4]) with mapi id 15.20.3131.028; Fri, 3 Jul 2020
 09:08:32 +0000
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
To: qemu-devel@nongnu.org
Subject: [PATCH v11 2/8] scripts: Coccinelle script to use
 ERRP_AUTO_PROPAGATE()
Date: Fri,  3 Jul 2020 12:08:10 +0300
Message-Id: <20200703090816.3295-3-vsementsov@virtuozzo.com>
X-Mailer: git-send-email 2.21.0
In-Reply-To: <20200703090816.3295-1-vsementsov@virtuozzo.com>
References: <20200703090816.3295-1-vsementsov@virtuozzo.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: AM0PR10CA0022.EURPRD10.PROD.OUTLOOK.COM
 (2603:10a6:208:17c::32) To AM7PR08MB5494.eurprd08.prod.outlook.com
 (2603:10a6:20b:dc::15)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
Received: from localhost.localdomain (185.215.60.15) by
 AM0PR10CA0022.EURPRD10.PROD.OUTLOOK.COM (2603:10a6:208:17c::32) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3153.23 via Frontend
 Transport; Fri, 3 Jul 2020 09:08:31 +0000
X-Mailer: git-send-email 2.21.0
X-Originating-IP: [185.215.60.15]
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: e38ec337-f427-4449-3adb-08d81f30a8a1
X-MS-TrafficTypeDiagnostic: AM7PR08MB5448:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <AM7PR08MB5448A7243A56B1F8E4E6C4A9C16A0@AM7PR08MB5448.eurprd08.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:10000;
X-Forefront-PRVS: 045315E1EE
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: h5sY4DE22is+KkVW2xhGJgYCnSmbwjfRe54bVqenzzNLI/nvcYjnK32MjFGEld073ZqJpYt+0lBP4U/7e68ohd4pM5rxjeioAgSlMsEeg6P6Gr4kR05jds/wgG2tCK80ZTNbk0JYjCpQUr497CUBdLsl0+8rB3avOt/hjRNjrvU5CWsEGmxC/xZkIwXIKW92OqECyogUCQ++hG3wMv9QF/EI+DtIEB7qyuBJcnlty0l/6AyzBt1oBGuLXVbfDAUM2euDJzLhSW0aFxNzW2FdcDTL7JxjSLRxmKhCJ/XscBA8ShfxuykYXJsKc7j3X8VfMNUzdbW0mPuTmdLF/7huWEFZqzkUZSKhzrA2/uZ/Oc6q+qPn845/Fxq58xya8FHDNwAITBvbYg57ovtfWzUrFy8H4dZvqYAGTdRIuJ9RdJHxKx6sd9h/qAPdi19jl1SIQCjxQ7eS7nHzODZzu/s7nQ==
X-Forefront-Antispam-Report: CIP:255.255.255.255; CTRY:; LANG:en; SCL:1; SRV:;
 IPV:NLI; SFV:NSPM; H:AM7PR08MB5494.eurprd08.prod.outlook.com; PTR:; CAT:NONE;
 SFTY:;
 SFS:(4636009)(39840400004)(366004)(376002)(136003)(396003)(346002)(66946007)(66574015)(66476007)(66556008)(2906002)(86362001)(1076003)(30864003)(8676002)(83380400001)(6512007)(5660300002)(8936002)(6666004)(186003)(36756003)(4326008)(6506007)(16526019)(6486002)(6916009)(7416002)(26005)(478600001)(956004)(52116002)(316002)(54906003)(2616005)(2004002);
 DIR:OUT; SFP:1102; 
X-MS-Exchange-AntiSpam-MessageData: RxI44ijiYMVR+1jW9160MkdbBj1H/rZCJSXVctIdK7dfez5j+6HHG71kztX4gQNOQpE+xeYPzFEj3ppq02ie2rMGH9RpHYNHvEpGC92hzwNXHDpOT6XUrls9H0McfBGubxUCi1KmzrVR8jibnHqkgTmpyzKWexZFhnXzlZW1d55ld4967E2UZxnNWlBgPDGuC9mIJ0qqyxJn2WoadpJjH/H8ZimcD+ZOL0bmi2SWUUoAG2/pIe3dvslu7REM/+UoWVPIKrUv0fJidSWtpHzp1NaLnEXLlI5QQ8m7LGXgfvTlD7aGjBUHdjU6p9UA2n3sh5J+gwnxjW0tdglHYtOwMC1RYYHKp+D0Tiv/GvchbeYVOXJSqPTFZZZZ7Mx+6qZ/zqkGpmgdzJS0meXnHoIe2cS6geGDoyEbADFLfl0KCITsmdL3ZEm0PxwWoBLe949oRiHExMYPd/X9AN++nNAQatFXADJ2Iu9Fe/H3Qo7tkKM=
X-OriginatorOrg: virtuozzo.com
X-MS-Exchange-CrossTenant-Network-Message-Id: e38ec337-f427-4449-3adb-08d81f30a8a1
X-MS-Exchange-CrossTenant-AuthSource: AM7PR08MB5494.eurprd08.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 03 Jul 2020 09:08:32.2737 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 0bc7f26d-0264-416e-a6fc-8352af79c58f
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: mjQssyVwna8Ik+GmhHwP+KSo6lgTqmMFdjhS+XRJHNQmSpEZwtIMRm5umulFRcf4QMb5UTvZ1y+Wj9hJfPN/queTfuMcgOYx54c84P4L/FI=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM7PR08MB5448
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin Wolf <kwolf@redhat.com>, vsementsov@virtuozzo.com,
 Laszlo Ersek <lersek@redhat.com>, qemu-block@nongnu.org,
 Paul Durrant <paul@xen.org>,
 =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>,
 Christian Schoenebeck <qemu_oss@crudebyte.com>, armbru@redhat.com,
 Max Reitz <mreitz@redhat.com>, groug@kaod.org,
 Stefano Stabellini <sstabellini@kernel.org>, Gerd Hoffmann <kraxel@redhat.com>,
 Stefan Hajnoczi <stefanha@redhat.com>,
 Anthony Perard <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org,
 eblake@redhat.com, Michael Roth <mdroth@linux.vnet.ibm.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

The script adds ERRP_AUTO_PROPAGATE() macro invocations where
appropriate and makes the corresponding changes to the code (see
include/qapi/error.h for details).

Usage example:
spatch --sp-file scripts/coccinelle/auto-propagated-errp.cocci \
 --macro-file scripts/cocci-macro-file.h --in-place --no-show-diff \
 --max-width 80 FILES...

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
---

Cc: Eric Blake <eblake@redhat.com>
Cc: Kevin Wolf <kwolf@redhat.com>
Cc: Max Reitz <mreitz@redhat.com>
Cc: Greg Kurz <groug@kaod.org>
Cc: Christian Schoenebeck <qemu_oss@crudebyte.com>
Cc: Stefan Hajnoczi <stefanha@redhat.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Anthony Perard <anthony.perard@citrix.com>
Cc: Paul Durrant <paul@xen.org>
Cc: "Philippe Mathieu-Daudé" <philmd@redhat.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Markus Armbruster <armbru@redhat.com>
Cc: Michael Roth <mdroth@linux.vnet.ibm.com>
Cc: qemu-devel@nongnu.org
Cc: qemu-block@nongnu.org
Cc: xen-devel@lists.xenproject.org

 scripts/coccinelle/auto-propagated-errp.cocci | 337 ++++++++++++++++++
 include/qapi/error.h                          |   3 +
 MAINTAINERS                                   |   1 +
 3 files changed, 341 insertions(+)
 create mode 100644 scripts/coccinelle/auto-propagated-errp.cocci

diff --git a/scripts/coccinelle/auto-propagated-errp.cocci b/scripts/coccinelle/auto-propagated-errp.cocci
new file mode 100644
index 0000000000..c29f695adf
--- /dev/null
+++ b/scripts/coccinelle/auto-propagated-errp.cocci
@@ -0,0 +1,337 @@
+// Use ERRP_AUTO_PROPAGATE (see include/qapi/error.h)
+//
+// Copyright (c) 2020 Virtuozzo International GmbH.
+//
+// This program is free software; you can redistribute it and/or
+// modify it under the terms of the GNU General Public License as
+// published by the Free Software Foundation; either version 2 of the
+// License, or (at your option) any later version.
+//
+// This program is distributed in the hope that it will be useful,
+// but WITHOUT ANY WARRANTY; without even the implied warranty of
+// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+// GNU General Public License for more details.
+//
+// You should have received a copy of the GNU General Public License
+// along with this program.  If not, see
+// <http://www.gnu.org/licenses/>.
+//
+// Usage example:
+// spatch --sp-file scripts/coccinelle/auto-propagated-errp.cocci \
+//  --macro-file scripts/cocci-macro-file.h --in-place \
+//  --no-show-diff --max-width 80 FILES...
+//
+// Note: --max-width 80 is needed because Coccinelle's default is less
+// than 80, and without this parameter Coccinelle may reindent lines
+// that fit in 80 characters but exceed its default, which in turn
+// produces extra patch hunks for no reason.
+
+// Switch unusual Error ** parameter names to errp
+// (this is necessary to use ERRP_AUTO_PROPAGATE).
+//
+// Disable optional_qualifier to skip functions with
+// "Error *const *errp" parameter.
+//
+// Skip functions with an "assert(_errp && *_errp)" statement, because
+// that signals unusual semantics, and the parameter name may well
+// serve a purpose (like in nbd_iter_channel_error()).
+//
+// Skip util/error.c to not touch, for example, error_propagate() and
+// error_propagate_prepend().
+@ depends on !(file in "util/error.c") disable optional_qualifier@
+identifier fn;
+identifier _errp != errp;
+@@
+
+ fn(...,
+-   Error **_errp
++   Error **errp
+    ,...)
+ {
+(
+     ... when != assert(_errp && *_errp)
+&
+     <...
+-    _errp
++    errp
+     ...>
+)
+ }
+
+// Add invocation of ERRP_AUTO_PROPAGATE to errp-functions where
+// necessary
+//
+// Note that without "when any" the final "..." does not match
+// anything matched by a previous pattern, i.e. the rule would not
+// match a double error_prepend in control flow like in
+// vfio_set_irq_signaling().
+//
+// Note that "exists" means we want to apply the rule even if it does
+// not match on all possible control flows (otherwise, it would not
+// match the standard pattern where error_propagate() is in an if branch).
+@ disable optional_qualifier exists@
+identifier fn, local_err;
+symbol errp;
+@@
+
+ fn(..., Error **errp, ...)
+ {
++   ERRP_AUTO_PROPAGATE();
+    ...  when != ERRP_AUTO_PROPAGATE();
+(
+(
+    error_append_hint(errp, ...);
+|
+    error_prepend(errp, ...);
+|
+    error_vprepend(errp, ...);
+)
+    ... when any
+|
+    Error *local_err = NULL;
+    ...
+(
+    error_propagate_prepend(errp, local_err, ...);
+|
+    error_propagate(errp, local_err);
+)
+    ...
+)
+ }
+
+// Warn when several Error * definitions are in the control flow.
+// This rule is not chained to rule1 and is less restrictive, so that
+// it warns about more functions (even those we are not going to convert).
+//
+// Note that even with at most one Error * definition in each control
+// flow, we may still have several (in total) Error * definitions in
+// the function. This case deserves attention too, but I don't see a
+// simple way to match it with Coccinelle.
+@check1 disable optional_qualifier exists@
+identifier fn, _errp, local_err, local_err2;
+position p1, p2;
+@@
+
+ fn(..., Error **_errp, ...)
+ {
+     ...
+     Error *local_err = NULL;@p1
+     ... when any
+     Error *local_err2 = NULL;@p2
+     ... when any
+ }
+
+@ script:python @
+fn << check1.fn;
+p1 << check1.p1;
+p2 << check1.p2;
+@@
+
+print('Warning: function {} has several definitions of '
+      'Error * local variable: at {}:{} and then at {}:{}'.format(
+          fn, p1[0].file, p1[0].line, p2[0].file, p2[0].line))
+
+// Warn when several propagations are in the control flow.
+@check2 disable optional_qualifier exists@
+identifier fn, _errp;
+position p1, p2;
+@@
+
+ fn(..., Error **_errp, ...)
+ {
+     ...
+(
+     error_propagate_prepend(_errp, ...);@p1
+|
+     error_propagate(_errp, ...);@p1
+)
+     ...
+(
+     error_propagate_prepend(_errp, ...);@p2
+|
+     error_propagate(_errp, ...);@p2
+)
+     ... when any
+ }
+
+@ script:python @
+fn << check2.fn;
+p1 << check2.p1;
+p2 << check2.p2;
+@@
+
+print('Warning: function {} propagates to errp several times in '
+      'one control flow: at {}:{} and then at {}:{}'.format(
+          fn, p1[0].file, p1[0].line, p2[0].file, p2[0].line))
+
+// Match functions that propagate a local error to errp.
+// We want to refer to these functions in several following rules, but
+// I don't know a proper way to inherit a function itself, not just
+// its name (so as not to match other functions with the same name in
+// the following rules). The workaround: rename the errp parameter in
+// the function header and match it in the following rules. Rename it
+// back after all transformations.
+//
+// The common case is a single definition of local_err with at most one
+// error_propagate_prepend() or error_propagate() on each control-flow
+// path. We want to examine functions with multiple definitions or
+// propagations manually. Rules check1 and check2 emit warnings to
+// guide us to them.
+//
+// Note that we match not only this "common case", but any function
+// that has the "common case" on at least one control-flow path.
+@rule1 disable optional_qualifier exists@
+identifier fn, local_err;
+symbol errp;
+@@
+
+ fn(..., Error **
+-    errp
++    ____
+    , ...)
+ {
+     ...
+     Error *local_err = NULL;
+     ...
+(
+     error_propagate_prepend(errp, local_err, ...);
+|
+     error_propagate(errp, local_err);
+)
+     ...
+ }
+
+// Convert the special case with goto separately.
+// I tried merging this into the following rule the obvious way, but
+// it made Coccinelle hang on block.c.
+//
+// An interesting detail: if we don't do it here, and instead try to
+// fix up the "out: }" leftovers after all other transformations (with
+// the same rule, just without the error_propagate() call), Coccinelle
+// fails to match that "out: }".
+@ disable optional_qualifier@
+identifier rule1.fn, rule1.local_err, out;
+symbol errp;
+@@
+
+ fn(..., Error ** ____, ...)
+ {
+     <...
+-    goto out;
++    return;
+     ...>
+- out:
+-    error_propagate(errp, local_err);
+ }
+
+// Convert most of the local_err related stuff.
+//
+// Note that we inherit the rule1.fn and rule1.local_err names, not
+// the objects themselves. We may match something unrelated to the
+// pattern matched by rule1. For example, local_err may be defined
+// with the same name in different blocks inside one function, where
+// one block follows the propagation pattern and another doesn't.
+//
+// Note also that errp-cleaning functions
+//   error_free_errp
+//   error_report_errp
+//   error_reportf_errp
+//   warn_report_errp
+//   warn_reportf_errp
+// are not yet implemented. They must call the corresponding
+// Error-freeing function and then set *errp to NULL, to avoid further
+// propagation to the original errp (with ERRP_AUTO_PROPAGATE in use).
+// For example, error_free_errp may look like this:
+//
+//    void error_free_errp(Error **errp)
+//    {
+//        error_free(*errp);
+//        *errp = NULL;
+//    }
+@ disable optional_qualifier exists@
+identifier rule1.fn, rule1.local_err;
+expression list args;
+symbol errp;
+@@
+
+ fn(..., Error ** ____, ...)
+ {
+     <...
+(
+-    Error *local_err = NULL;
+|
+
+// Convert error clearing functions
+(
+-    error_free(local_err);
++    error_free_errp(errp);
+|
+-    error_report_err(local_err);
++    error_report_errp(errp);
+|
+-    error_reportf_err(local_err, args);
++    error_reportf_errp(errp, args);
+|
+-    warn_report_err(local_err);
++    warn_report_errp(errp);
+|
+-    warn_reportf_err(local_err, args);
++    warn_reportf_errp(errp, args);
+)
+?-    local_err = NULL;
+
+|
+-    error_propagate_prepend(errp, local_err, args);
++    error_prepend(errp, args);
+|
+-    error_propagate(errp, local_err);
+|
+-    &local_err
++    errp
+)
+     ...>
+ }
+
+// Convert remaining local_err usage, for example, different kinds of
+// error checking in if conditionals. We can't merge this into the
+// previous rule, as that conflicts with other substitutions in it (at
+// least with "- local_err = NULL").
+@ disable optional_qualifier@
+identifier rule1.fn, rule1.local_err;
+symbol errp;
+@@
+
+ fn(..., Error ** ____, ...)
+ {
+     <...
+-    local_err
++    *errp
+     ...>
+ }
+
+// Always use the same pattern for checking errors
+@ disable optional_qualifier@
+identifier rule1.fn;
+symbol errp;
+@@
+
+ fn(..., Error ** ____, ...)
+ {
+     <...
+-    *errp != NULL
++    *errp
+     ...>
+ }
+
+// Revert the temporary ____ identifier.
+@ disable optional_qualifier@
+identifier rule1.fn;
+@@
+
+ fn(..., Error **
+-   ____
++   errp
+    , ...)
+ {
+     ...
+ }
diff --git a/include/qapi/error.h b/include/qapi/error.h
index b54aedbfd7..514cd1f5ae 100644
--- a/include/qapi/error.h
+++ b/include/qapi/error.h
@@ -249,6 +249,9 @@
  *         }
  *         ...
  *     }
+ *
+ * For mass-conversion use script
+ *   scripts/coccinelle/auto-propagated-errp.cocci
  */
 
 #ifndef ERROR_H
diff --git a/MAINTAINERS b/MAINTAINERS
index dec252f38b..65ce440217 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -2165,6 +2165,7 @@ F: scripts/coccinelle/error-use-after-free.cocci
 F: scripts/coccinelle/error_propagate_null.cocci
 F: scripts/coccinelle/remove_local_err.cocci
 F: scripts/coccinelle/use-error_fatal.cocci
+F: scripts/coccinelle/auto-propagated-errp.cocci
 
 GDB stub
 M: Alex Bennée <alex.bennee@linaro.org>
-- 
2.21.0



From xen-devel-bounces@lists.xenproject.org Fri Jul 03 09:08:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jul 2020 09:08:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jrHgQ-0005In-Ad; Fri, 03 Jul 2020 09:08:34 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=rI2g=AO=virtuozzo.com=vsementsov@srs-us1.protection.inumbo.net>)
 id 1jrHgO-0005Ii-M2
 for xen-devel@lists.xenproject.org; Fri, 03 Jul 2020 09:08:33 +0000
X-Inumbo-ID: c384f2be-bd0c-11ea-8952-12813bfff9fa
Received: from EUR01-VE1-obe.outbound.protection.outlook.com (unknown
 [40.107.14.123]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c384f2be-bd0c-11ea-8952-12813bfff9fa;
 Fri, 03 Jul 2020 09:08:31 +0000 (UTC)
Received: from AM7PR08MB5494.eurprd08.prod.outlook.com (2603:10a6:20b:dc::15)
 by AM7PR08MB5448.eurprd08.prod.outlook.com (2603:10a6:20b:106::10)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3153.28; Fri, 3 Jul
 2020 09:08:29 +0000
Received: from AM7PR08MB5494.eurprd08.prod.outlook.com
 ([fe80::a408:2f0f:bc6c:d312]) by AM7PR08MB5494.eurprd08.prod.outlook.com
 ([fe80::a408:2f0f:bc6c:d312%4]) with mapi id 15.20.3131.028; Fri, 3 Jul 2020
 09:08:29 +0000
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
To: qemu-devel@nongnu.org
Subject: [PATCH v11 0/8] error: auto propagated local_err part I
Date: Fri,  3 Jul 2020 12:08:08 +0300
Message-Id: <20200703090816.3295-1-vsementsov@virtuozzo.com>
X-Mailer: git-send-email 2.21.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: AM0PR10CA0022.EURPRD10.PROD.OUTLOOK.COM
 (2603:10a6:208:17c::32) To AM7PR08MB5494.eurprd08.prod.outlook.com
 (2603:10a6:20b:dc::15)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
Received: from localhost.localdomain (185.215.60.15) by
 AM0PR10CA0022.EURPRD10.PROD.OUTLOOK.COM (2603:10a6:208:17c::32) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3153.23 via Frontend
 Transport; Fri, 3 Jul 2020 09:08:28 +0000
X-Originating-IP: [185.215.60.15]
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin Wolf <kwolf@redhat.com>, vsementsov@virtuozzo.com,
 Laszlo Ersek <lersek@redhat.com>, qemu-block@nongnu.org,
 Paul Durrant <paul@xen.org>,
 =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>,
 Christian Schoenebeck <qemu_oss@crudebyte.com>, armbru@redhat.com,
 Max Reitz <mreitz@redhat.com>, groug@kaod.org,
 Stefano Stabellini <sstabellini@kernel.org>, Gerd Hoffmann <kraxel@redhat.com>,
 Stefan Hajnoczi <stefanha@redhat.com>,
 Anthony Perard <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org,
 eblake@redhat.com, Michael Roth <mdroth@linux.vnet.ibm.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Based-on: <20200702155000.3455325-1-armbru@redhat.com>

v11: (based-on "[PATCH v2 00/44] Less clumsy error checking")
01: minor rebase of documentation, keep r-bs
02: - minor comment tweaks [Markus]
    - use explicit file name in MAINTAINERS instead of pattern
    - add Markus's r-b
03,07,08: rebase changes, drop r-bs


v11 is available at
 https://src.openvz.org/scm/~vsementsov/qemu.git #tag up-auto-local-err-partI-v11
v10 is available at
 https://src.openvz.org/scm/~vsementsov/qemu.git #tag up-auto-local-err-partI-v10

In this series there is no commit-per-subsystem script; each generated
commit is generated separately.

Still, the generating commands are very similar, and look like:

    sed -n '/^<Subsystem name>$/,/^$/{s/^F: //p}' MAINTAINERS | \
    xargs git ls-files | grep '\.[hc]$' | \
    xargs spatch \
        --sp-file scripts/coccinelle/auto-propagated-errp.cocci \
        --macro-file scripts/cocci-macro-file.h \
        --in-place --no-show-diff --max-width 80

Note that in each generated commit, the generation command is the only
text indented by 8 spaces in 'git log -1' output, so to regenerate all
commits (for example, after a rebase, or after a change to the
Coccinelle script), you may use the following command:

git rebase -x "sh -c \"git show --pretty= --name-only | xargs git checkout HEAD^ -- ; git reset; git log -1 | grep '^        ' | sh\"" HEAD~6

This starts an automated interactive rebase over the generated patches,
which stops whenever a regenerated patch differs
(you may then do git commit --amend to apply the updated changes).

Note:
  git show --pretty= --name-only   - lists files, changed in HEAD
  git log -1 | grep '^        ' | sh   - rerun generation command of HEAD


To check that the changed .c files still compile:
git rebase -x "sh -c \"git show --pretty= --name-only | sed -n 's/\.c$/.o/p' | xargs make -j9\"" HEAD~6

Vladimir Sementsov-Ogievskiy (8):
  error: auto propagated local_err
  scripts: Coccinelle script to use ERRP_AUTO_PROPAGATE()
  SD (Secure Card): introduce ERRP_AUTO_PROPAGATE
  pflash: introduce ERRP_AUTO_PROPAGATE
  fw_cfg: introduce ERRP_AUTO_PROPAGATE
  virtio-9p: introduce ERRP_AUTO_PROPAGATE
  nbd: introduce ERRP_AUTO_PROPAGATE
  xen: introduce ERRP_AUTO_PROPAGATE
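For readers unfamiliar with the conversion these patches perform, here is a
minimal before/after sketch. Error and the helpers are stubbed (simplified
semantics) so the shapes compile standalone; in QEMU they come from
qapi/error.h, and ERRP_AUTO_PROPAGATE() itself is elided:

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

/* Simplified stand-ins for the qapi/error.h API. */
typedef struct Error { const char *msg; } Error;

static void error_setg(Error **errp, const char *msg)
{
    if (errp && !*errp) {
        *errp = malloc(sizeof(Error));
        (*errp)->msg = msg;
    }
}

static void error_propagate(Error **dst_errp, Error *local_err)
{
    if (dst_errp && !*dst_errp) {
        *dst_errp = local_err;
    } else {
        free(local_err);
    }
}

/* Before: the local_err + error_propagate() dance. */
static void op_before(Error **errp)
{
    Error *local_err = NULL;
    error_setg(&local_err, "op failed");
    if (local_err) {
        error_propagate(errp, local_err);
        return;
    }
}

/* After conversion: with ERRP_AUTO_PROPAGATE() at the top (omitted in
 * this stub), the function may write through errp and check *errp
 * directly. */
static void op_after(Error **errp)
{
    error_setg(errp, "op failed");
    if (*errp) {
        return;
    }
}
```

Both shapes report the same error to the caller; the converted form simply
avoids the intermediate variable and the explicit propagation call.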

 scripts/coccinelle/auto-propagated-errp.cocci | 337 ++++++++++++++++++
 include/block/nbd.h                           |   1 +
 include/qapi/error.h                          | 208 +++++++++--
 block/nbd.c                                   |   7 +-
 hw/9pfs/9p-local.c                            |  12 +-
 hw/9pfs/9p.c                                  |   1 +
 hw/block/dataplane/xen-block.c                |  17 +-
 hw/block/pflash_cfi01.c                       |   7 +-
 hw/block/pflash_cfi02.c                       |   7 +-
 hw/block/xen-block.c                          | 102 +++---
 hw/nvram/fw_cfg.c                             |  14 +-
 hw/pci-host/xen_igd_pt.c                      |   7 +-
 hw/sd/sdhci-pci.c                             |   7 +-
 hw/sd/sdhci.c                                 |  21 +-
 hw/sd/ssi-sd.c                                |  10 +-
 hw/xen/xen-backend.c                          |   7 +-
 hw/xen/xen-bus.c                              |  92 ++---
 hw/xen/xen-host-pci-device.c                  |  27 +-
 hw/xen/xen_pt.c                               |  25 +-
 hw/xen/xen_pt_config_init.c                   |  17 +-
 nbd/client.c                                  |   5 +
 nbd/server.c                                  |   5 +
 MAINTAINERS                                   |   1 +
 23 files changed, 690 insertions(+), 247 deletions(-)
 create mode 100644 scripts/coccinelle/auto-propagated-errp.cocci

Cc: Eric Blake <eblake@redhat.com>
Cc: Kevin Wolf <kwolf@redhat.com>
Cc: Max Reitz <mreitz@redhat.com>
Cc: Greg Kurz <groug@kaod.org>
Cc: Christian Schoenebeck <qemu_oss@crudebyte.com>
Cc: Stefan Hajnoczi <stefanha@redhat.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Anthony Perard <anthony.perard@citrix.com>
Cc: Paul Durrant <paul@xen.org>
Cc: "Philippe Mathieu-Daudé" <philmd@redhat.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Markus Armbruster <armbru@redhat.com>
Cc: Michael Roth <mdroth@linux.vnet.ibm.com>
Cc: qemu-devel@nongnu.org
Cc: qemu-block@nongnu.org
Cc: xen-devel@lists.xenproject.org

-- 
2.21.0



From xen-devel-bounces@lists.xenproject.org Fri Jul 03 09:41:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jul 2020 09:41:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jrICY-0000Gh-2P; Fri, 03 Jul 2020 09:41:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=bw0N=AO=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1jrICW-0000Fy-HZ
 for xen-devel@lists.xenproject.org; Fri, 03 Jul 2020 09:41:44 +0000
X-Inumbo-ID: 673d4cf4-bd11-11ea-8496-bc764e2007e4
Received: from mail-ej1-x633.google.com (unknown [2a00:1450:4864:20::633])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 673d4cf4-bd11-11ea-8496-bc764e2007e4;
 Fri, 03 Jul 2020 09:41:43 +0000 (UTC)
Received: by mail-ej1-x633.google.com with SMTP id p20so33481400ejd.13
 for <xen-devel@lists.xenproject.org>; Fri, 03 Jul 2020 02:41:43 -0700 (PDT)
X-Received: by 2002:a17:906:7a46:: with SMTP id
 i6mr30218342ejo.475.1593769302913; 
 Fri, 03 Jul 2020 02:41:42 -0700 (PDT)
Received: from CBGR90WXYV0 (54-240-197-235.amazon.com. [54.240.197.235])
 by smtp.gmail.com with ESMTPSA id s23sm9316419ejz.53.2020.07.03.02.41.41
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Fri, 03 Jul 2020 02:41:42 -0700 (PDT)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
To: "'Michael Young'" <m.a.young@durham.ac.uk>,
 <xen-devel@lists.xenproject.org>
References: <alpine.LFD.2.22.394.2006302259370.2894@austen3.home>
In-Reply-To: <alpine.LFD.2.22.394.2006302259370.2894@austen3.home>
Subject: RE: Build problems in kdd.c with xen-4.14.0-rc4 
Date: Fri, 3 Jul 2020 10:41:41 +0100
Message-ID: <004601d6511e$28673710$7935a530$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="us-ascii"
Content-Transfer-Encoding: 7bit
X-Mailer: Microsoft Outlook 16.0
Thread-Index: AQJDTGpxOkTL6sslt6TCRsF6oIyh1qgbo0Cg
Content-Language: en-gb
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Reply-To: paul@xen.org
Cc: 'Tim Deegan' <tim@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> -----Original Message-----
> From: Xen-devel <xen-devel-bounces@lists.xenproject.org> On Behalf Of Michael Young
> Sent: 30 June 2020 23:22
> To: xen-devel@lists.xenproject.org
> Cc: Tim Deegan <tim@xen.org>
> Subject: Build problems in kdd.c with xen-4.14.0-rc4
> 
> I get the following errors when trying to build xen-4.14.0-rc4
> 
> kdd.c: In function 'kdd_tx':
> kdd.c:754:15: error: array subscript 16 is above array bounds of 'uint8_t[16]' {aka 'unsigned
> char[16]'} [-Werror=array-bounds]
>    754 |         s->txb[len++] = 0xaa;
>        |         ~~~~~~^~~~~~~
> kdd.c:82:17: note: while referencing 'txb'
>     82 |         uint8_t txb[sizeof (kdd_hdr)];           /* Marshalling area for tx */
>        |                 ^~~
> kdd.c: In function 'kdd_break':
> kdd.c:819:19: error: array subscript 16 is above array bounds of 'uint8_t[16]' {aka 'unsigned
> char[16]'} [-Werror=array-bounds]
>    819 |             s->txb[sizeof (kdd_hdr) + i] = i;
>        |             ~~~~~~^~~~~~~~~~~~~~~~~~~~~~
> kdd.c:82:17: note: while referencing 'txb'
>     82 |         uint8_t txb[sizeof (kdd_hdr)];           /* Marshalling area for tx */
>        |                 ^~~
> In file included from /usr/include/stdio.h:867,
>                   from kdd.c:36:
> In function 'vsnprintf',
>      inlined from 'kdd_send_string' at kdd.c:791:11:
> /usr/include/bits/stdio2.h:80:10: error: '__builtin___vsnprintf_chk' specified bound 65519 exceeds
> destination size 0 [-Werror=stringop-overflow=]
>     80 |   return __builtin___vsnprintf_chk (__s, __n, __USE_FORTIFY_LEVEL - 1,
>        |          ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>     81 |         __bos (__s), __fmt, __ap);
>        |         ~~~~~~~~~~~~~~~~~~~~~~~~~
> cc1: all warnings being treated as errors
> make[4]: *** [/builddir/build/BUILD/xen-4.14.0-rc4/tools/debugger/kdd/../../../tools/Rules.mk:216:
> kdd.o] Error 1
> 
> The first two array-bounds errors seem to be a result of the
> 
> kdd: stop using [0] arrays to access packet contents
> 
> patch at
> http://xenbits.xenproject.org/gitweb/?p=xen.git;a=commit;h=3471cafbdda35eacf04670881dd2aee2558b4f08
> 
> which reduced the size of txb from
> sizeof (kdd_hdr) + 65536
> to
> sizeof (kdd_hdr)
> which means the code now tries to write beyond the end of txb in both
> cases.
> 
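The quoted size change can be condensed into a small compilable sketch.
kdd_hdr is stubbed here as a 16-byte struct to match the compiler's
"uint8_t[16]" diagnostic; the names are illustrative, not the real kdd.c
definitions:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Stub: the diagnostics in the report show sizeof (kdd_hdr) == 16. */
typedef struct { uint8_t bytes[16]; } kdd_hdr;

/* Marshalling area as sized after the offending patch vs. before it. */
typedef struct {
    uint8_t txb_new[sizeof(kdd_hdr)];         /* after the patch  */
    uint8_t txb_old[sizeof(kdd_hdr) + 65536]; /* before the patch */
} kdd_state;

/* The first payload byte lands at index sizeof (kdd_hdr), so it is in
 * bounds only if the buffer extends past the header. */
static int payload_in_bounds(size_t buf_len)
{
    return sizeof(kdd_hdr) < buf_len;
}
```

This matches GCC's -Warray-bounds complaint: any store of the form
s->txb[sizeof (kdd_hdr) + i] is past the end of the shrunken 16-byte array.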

Sorry not to get back to you sooner. Which compiler are you using?

  Paul

>  	Michael Young




From xen-devel-bounces@lists.xenproject.org Fri Jul 03 09:44:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jul 2020 09:44:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jrIFY-0000PD-HC; Fri, 03 Jul 2020 09:44:52 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gjf6=AO=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jrIFY-0000P8-48
 for xen-devel@lists.xenproject.org; Fri, 03 Jul 2020 09:44:52 +0000
X-Inumbo-ID: d643c984-bd11-11ea-8958-12813bfff9fa
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d643c984-bd11-11ea-8958-12813bfff9fa;
 Fri, 03 Jul 2020 09:44:50 +0000 (UTC)
Date: Fri, 3 Jul 2020 11:44:38 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: =?utf-8?Q?Micha=C5=82_Leszczy=C5=84ski?= <michal.leszczynski@cert.pl>
Subject: Re: [PATCH v4 03/10] tools/libxl: add vmtrace_pt_size parameter
Message-ID: <20200703094438.GY735@Air-de-Roger>
References: <cover.1593519420.git.michal.leszczynski@cert.pl>
 <5f4f4b1afa432258daff43f2dc8119b6a441fff4.1593519420.git.michal.leszczynski@cert.pl>
 <20200702090047.GX735@Air-de-Roger>
 <1505813895.18300396.1593707008144.JavaMail.zimbra@cert.pl>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <1505813895.18300396.1593707008144.JavaMail.zimbra@cert.pl>
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Julien Grall <julien@xen.org>, Stefano
 Stabellini <sstabellini@kernel.org>, tamas lengyel <tamas.lengyel@intel.com>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, luwei kang <luwei.kang@intel.com>,
 Jan Beulich <jbeulich@suse.com>, Anthony PERARD <anthony.perard@citrix.com>,
 xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Thu, Jul 02, 2020 at 06:23:28PM +0200, Michał Leszczyński wrote:
> ----- On 2 Jul 2020 at 11:00, Roger Pau Monné roger.pau@citrix.com wrote:
> 
> > On Tue, Jun 30, 2020 at 02:33:46PM +0200, Michał Leszczyński wrote:
> >> diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
> >> index 59bdc28c89..7b8289d436 100644
> >> --- a/xen/include/public/domctl.h
> >> +++ b/xen/include/public/domctl.h
> >> @@ -92,6 +92,7 @@ struct xen_domctl_createdomain {
> >>      uint32_t max_evtchn_port;
> >>      int32_t max_grant_frames;
> >>      int32_t max_maptrack_frames;
> >> +    uint8_t vmtrace_pt_order;
> > 
> > I've been thinking about this, and even though this is a domctl (so
> > not a stable interface) we might want to consider using a size (or a
> > number of pages) here rather than an order. IPT also supports
> > TOPA mode (kind of a linked list of buffers) that would allow for
> > sizes not rounded to order boundaries to be used, since then only each
> > item in the linked list needs to be rounded to an order boundary, so
> > you could for example use three 4K pages in TOPA mode AFAICT.
> > 
> > Roger.
> 
> In previous versions it was "size", but I was asked to change it to
> "order" so as to shrink the variable from uint64_t to uint8_t,
> because space in the xen_domctl_createdomain structure is limited.

It's likely I'm missing something here, but I wasn't aware that
xen_domctl_createdomain had any constraints regarding its size. It's
currently 48 bytes, which seems fairly small.

There might be constraints on struct domain (the hypervisor-internal
domain tracking structure), but I think you are already using a size
field there IIRC.

> 
> How should I proceed?

This is an unstable interface, so we could always change it. It seems
like we might want to use a size parameter at some point to take
advantage of non-physically-contiguous buffers, but if there are other
blockers that prevent such a field from being wider ATM I'm fine with
it.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Fri Jul 03 09:49:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jul 2020 09:49:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jrIJh-0000Yr-30; Fri, 03 Jul 2020 09:49:09 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=XiPz=AO=durham.ac.uk=m.a.young@srs-us1.protection.inumbo.net>)
 id 1jrIJg-0000Ym-2B
 for xen-devel@lists.xenproject.org; Fri, 03 Jul 2020 09:49:08 +0000
X-Inumbo-ID: 6e6a5d40-bd12-11ea-bb8b-bc764e2007e4
Received: from GBR01-LO2-obe.outbound.protection.outlook.com (unknown
 [40.107.10.95]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6e6a5d40-bd12-11ea-bb8b-bc764e2007e4;
 Fri, 03 Jul 2020 09:49:05 +0000 (UTC)
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=k5P7Qy0IogidzzN85fhC8FatoqN/w8TznaE4y6KRM0OixtM9Ipk8MpnJOqnvHdJJ1HzO6OBeo+vBESd3n2saiR9NpM1i6brJGgfIk1Vrrvn5TOm653PhXVlpmVHL9LcKZ4NwS65wGBovZtcbEih7RHD6z/O3PCXvXE7n3r39y3r2DM9QxoO6FKbvi3lu+BDa9q/dz9T6wirgN8yVmcibIgSIjJ+27V4ffL8R+pOLhs1TPg37TWOBUZdKhX2POAtIRNiBd0iJkkxcYe1lHaVk7HHqa9651Z3pXDQqAs1gNpUlxY0cGJ7z84yeM3386guKyXUOAAiOpd1817OjQRS6bw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; 
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=aAW/Crn4QUVtArQFAkNzARmHWlCle50TRjQ41/NK3Tk=;
 b=OUZgek7CeB61ulyKTT/4w9UXj8rWE9yC2jrxfpPtPSSICuktuT9mzN3uSbI4U79RBuiPTl1CidPuGzZ5Hffnq6BHD9lqb2z6Ti03dTkXApL7yfgJ9Cu7eOqtkknQ4yiCxcx6M25vYCPMlaD+/avarfRo4wcQt62A+BCuNppopQosF0DQXSUk5o+ATmAdp4ldU3A5tjzWGVAObNnFeC/nVHM+tdz6NKVEGxPF52/4B0q/deNli1CjwCYGFHQpBd4H2IwuDP4giPEPBS+jG/tKgHjuzpkIlBF2iR0TYTpWWvAa5w2s5E45ZdUhB9gomIMeq0gVbAbW6aFv0/V3NvaKQA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=durham.ac.uk; dmarc=pass action=none header.from=durham.ac.uk;
 dkim=pass header.d=durham.ac.uk; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=durhamuniversity.onmicrosoft.com;
 s=selector2-durhamuniversity-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=aAW/Crn4QUVtArQFAkNzARmHWlCle50TRjQ41/NK3Tk=;
 b=L0qmlccPW3z+vD2LgN6DO6ZlBK+d2NuoJ1HfDY7UsoV8rGcAnoRhQBn4CskVJ7RNKh6YU0zL6QYao9ml2ygsGnf1MMLrWxL38zPIoFfHOuCr2gUwXiXuG7HYc01rAfyUxtV3q6bvGLEZR2jAS+IMnF77xs9oIhtxTnQJ+0/261Y=
Authentication-Results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=durham.ac.uk;
Received: from CWLP265MB1634.GBRP265.PROD.OUTLOOK.COM (2603:10a6:401:32::19)
 by CWXP265MB0870.GBRP265.PROD.OUTLOOK.COM (2603:10a6:401:46::14) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3153.20; Fri, 3 Jul
 2020 09:49:03 +0000
Received: from CWLP265MB1634.GBRP265.PROD.OUTLOOK.COM
 ([fe80::d4cb:ad6a:b891:13d8]) by CWLP265MB1634.GBRP265.PROD.OUTLOOK.COM
 ([fe80::d4cb:ad6a:b891:13d8%6]) with mapi id 15.20.3153.027; Fri, 3 Jul 2020
 09:49:03 +0000
Date: Fri, 3 Jul 2020 10:48:57 +0100 (BST)
From: Michael Young <m.a.young@durham.ac.uk>
X-X-Sender: michael@austen3.home
To: paul@xen.org
Subject: RE: Build problems in kdd.c with xen-4.14.0-rc4 
In-Reply-To: <004601d6511e$28673710$7935a530$@xen.org>
Message-ID: <alpine.LFD.2.22.394.2007031044330.1956@austen3.home>
References: <alpine.LFD.2.22.394.2006302259370.2894@austen3.home>
 <004601d6511e$28673710$7935a530$@xen.org>
Content-Type: text/plain; charset=US-ASCII; format=flowed
X-ClientProxiedBy: LO2P265CA0236.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:b::32) To CWLP265MB1634.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:401:32::19)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
Received: from broadband.bt.com (2a00:23c4:921a:2100:e6ed:102c:ecbf:28ae) by
 LO2P265CA0236.GBRP265.PROD.OUTLOOK.COM (2603:10a6:600:b::32) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3153.21 via Frontend Transport; Fri, 3 Jul 2020 09:49:02 +0000
X-X-Sender: michael@austen3.home
X-Originating-IP: [2a00:23c4:921a:2100:e6ed:102c:ecbf:28ae]
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: c0e50c2f-ef4d-494f-65de-08d81f36519b
X-MS-TrafficTypeDiagnostic: CWXP265MB0870:
X-Microsoft-Antispam-PRVS: <CWXP265MB0870106082A889A150B5F2EE876A0@CWXP265MB0870.GBRP265.PROD.OUTLOOK.COM>
X-MS-Oob-TLC-OOBClassifiers: OLM:126;
X-Forefront-PRVS: 045315E1EE
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: tmAyIsLlrBtnsIQpzU9ubT2rsoC9toP9MkJDMwpEp8JKU7uWGiSxnGFK/G9/bAQg1uj7sKDOZTY/C6Hoivdk6QJS9MlWje1rBDqXvBTxOJP7h1ytAZrmjyVkmUwJwiGd04AIwnfNMRsZ77DQokLSwm5yx0WJGNW+FVHwrMg8oz/wzQj4Tb4vUhKgWZ+o4ykC6UmaP/7hpJm8HE2p+2Y2yu69VbdXfJ7GVhjSVivan58cAg0rmYArjYgYKw0xQMoM88vfn6hzP3mvg31hBmVvjjAV2Ltc6tCYlGa2IZSMDv8wO1BAaQKg3D6KVPzurVSBc0PHziLIqbVk06geVc6zYFoHXQS57MKaTpshpLPP3UfmqblDn2EVor5e4Vh12+K7Ss5M5WQMgP9ci8l264G4DQ==
X-Forefront-Antispam-Report: CIP:255.255.255.255; CTRY:; LANG:en; SCL:1; SRV:;
 IPV:NLI; SFV:NSPM; H:CWLP265MB1634.GBRP265.PROD.OUTLOOK.COM; PTR:; CAT:NONE;
 SFTY:;
 SFS:(4636009)(39860400002)(136003)(376002)(396003)(366004)(346002)(52116002)(86362001)(53546011)(6506007)(786003)(316002)(9686003)(6512007)(2906002)(16526019)(36756003)(186003)(4743002)(478600001)(966005)(6666004)(6486002)(6916009)(8936002)(66946007)(5660300002)(83380400001)(8676002)(4326008)(66476007)(66556008);
 DIR:OUT; SFP:1102; 
X-MS-Exchange-AntiSpam-MessageData: 428TjF7Lu7hB4zXBkm2PySSj8fW0nj6ZhjMOTgaY/pjwP+KtLCGaNzuzx1Rj9WKv1N4E7z51xh3ohwE8MicI9gxEPMQKqdp4e4NX42qCF+Pd94C6Udwt5kyiiH8fPG7XyDBAWhznbS3WUGRWYtb8b4d/L2qDAG18rYaKijDpq5kB9woH1qZ82UC1nR/GVPVnW5btUKyXCDyzkmjWkz5xSWa8h4moVVxQms4V4saY2bVbtR9/WuNBQ0FY8lWjxs+F5/qhMFPsPVlsIt1RLnZPCfw7et39U62NhkYs2aSz36oUXQ1hoTwFWpC2mZ1con+L92xtLh1WhWvCB/dv2hdvZTq0M9BoAT96QUs9D13XFWoyvVJ9AEXs/pvgPE2nkpuSXuj808zmM8plscTh3gYUx2Xoz54jQ31l2oA45inOKGt/FbP4sJPMZ4rqhJ2X6u1OjnnjsUMo2O0okILOxjQPSZWBT3dGTL4+yoeIN+N82xy3QffGFUqtY2nyNiFSPrtXzoLWWrSFfyZ77Dz2+dnVDElPsltE+Xfuw76dmbTouvM=
X-OriginatorOrg: durham.ac.uk
X-MS-Exchange-CrossTenant-Network-Message-Id: c0e50c2f-ef4d-494f-65de-08d81f36519b
X-MS-Exchange-CrossTenant-AuthSource: CWLP265MB1634.GBRP265.PROD.OUTLOOK.COM
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 03 Jul 2020 09:49:03.3117 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 7250d88b-4b68-4529-be44-d59a2d8a6f94
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 0fWXS8w050i4/pHhu7nU8hkQQN7Le0Gi/IRKVpdpEE/JjXaTWJUsodHAwyR29vDQc0hdSY46EieOSSIq0Fny4Q==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: CWXP265MB0870
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, 'Tim Deegan' <tim@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Fri, 3 Jul 2020, Paul Durrant wrote:

>> -----Original Message-----
>> From: Xen-devel <xen-devel-bounces@lists.xenproject.org> On Behalf Of Michael Young
>> Sent: 30 June 2020 23:22
>> To: xen-devel@lists.xenproject.org
>> Cc: Tim Deegan <tim@xen.org>
>> Subject: Build problems in kdd.c with xen-4.14.0-rc4
>>
>> I get the following errors when trying to build xen-4.14.0-rc4
>>
>> kdd.c: In function 'kdd_tx':
>> kdd.c:754:15: error: array subscript 16 is above array bounds of 'uint8_t[16]' {aka 'unsigned
>> char[16]'} [-Werror=array-bounds]
>>    754 |         s->txb[len++] = 0xaa;
>>        |         ~~~~~~^~~~~~~
>> kdd.c:82:17: note: while referencing 'txb'
>>     82 |         uint8_t txb[sizeof (kdd_hdr)];           /* Marshalling area for tx */
>>        |                 ^~~
>> kdd.c: In function 'kdd_break':
>> kdd.c:819:19: error: array subscript 16 is above array bounds of 'uint8_t[16]' {aka 'unsigned
>> char[16]'} [-Werror=array-bounds]
>>    819 |             s->txb[sizeof (kdd_hdr) + i] = i;
>>        |             ~~~~~~^~~~~~~~~~~~~~~~~~~~~~
>> kdd.c:82:17: note: while referencing 'txb'
>>     82 |         uint8_t txb[sizeof (kdd_hdr)];           /* Marshalling area for tx */
>>        |                 ^~~
>> In file included from /usr/include/stdio.h:867,
>>                   from kdd.c:36:
>> In function 'vsnprintf',
>>      inlined from 'kdd_send_string' at kdd.c:791:11:
>> /usr/include/bits/stdio2.h:80:10: error: '__builtin___vsnprintf_chk' specified bound 65519 exceeds
>> destination size 0 [-Werror=stringop-overflow=]
>>     80 |   return __builtin___vsnprintf_chk (__s, __n, __USE_FORTIFY_LEVEL - 1,
>>        |          ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>>     81 |         __bos (__s), __fmt, __ap);
>>        |         ~~~~~~~~~~~~~~~~~~~~~~~~~
>> cc1: all warnings being treated as errors
>> make[4]: *** [/builddir/build/BUILD/xen-4.14.0-rc4/tools/debugger/kdd/../../../tools/Rules.mk:216:
>> kdd.o] Error 1
>>
>> The first two array-bounds errors seem to be a result of the
>>
>> kdd: stop using [0] arrays to access packet contents
>>
>> patch at
>> http://xenbits.xenproject.org/gitweb/?p=xen.git;a=commit;h=3471cafbdda35eacf04670881dd2aee2558b4f08
>>
>> which reduced the size of txb from
>> sizeof (kdd_hdr) + 65536
>> to
>> sizeof (kdd_hdr)
>> which means the code now tries to write beyond the end of txb in both
>> cases.
>>
>
> Sorry not to get back to you sooner. Which compiler are you using?
>
>  Paul

This was with gcc-10.1.1-1.fc32.x86_64
Full build logs are (at the moment) at 
https://download.copr.fedorainfracloud.org/results/myoung/xentest/fedora-32-x86_64/01515056-xen/

 	Michael Young


From xen-devel-bounces@lists.xenproject.org Fri Jul 03 09:56:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jul 2020 09:56:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jrIQv-0001Ol-SW; Fri, 03 Jul 2020 09:56:37 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5Z/2=AO=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jrIQv-0001Og-1e
 for xen-devel@lists.xenproject.org; Fri, 03 Jul 2020 09:56:37 +0000
X-Inumbo-ID: 7b510e18-bd13-11ea-bb8b-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7b510e18-bd13-11ea-bb8b-bc764e2007e4;
 Fri, 03 Jul 2020 09:56:36 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 9023BB1FA;
 Fri,  3 Jul 2020 09:56:35 +0000 (UTC)
Subject: Re: [PATCH v4 03/10] tools/libxl: add vmtrace_pt_size parameter
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 =?UTF-8?Q?Micha=c5=82_Leszczy=c5=84ski?= <michal.leszczynski@cert.pl>
References: <cover.1593519420.git.michal.leszczynski@cert.pl>
 <5f4f4b1afa432258daff43f2dc8119b6a441fff4.1593519420.git.michal.leszczynski@cert.pl>
 <20200702090047.GX735@Air-de-Roger>
 <1505813895.18300396.1593707008144.JavaMail.zimbra@cert.pl>
 <20200703094438.GY735@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <b5335c2e-da13-28de-002b-e93dd68a0a11@suse.com>
Date: Fri, 3 Jul 2020 11:56:38 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.9.0
MIME-Version: 1.0
In-Reply-To: <20200703094438.GY735@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 tamas lengyel <tamas.lengyel@intel.com>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, luwei kang <luwei.kang@intel.com>,
 Anthony PERARD <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 03.07.2020 11:44, Roger Pau Monné wrote:
> On Thu, Jul 02, 2020 at 06:23:28PM +0200, Michał Leszczyński wrote:
>> ----- On 2 Jul 2020, at 11:00, Roger Pau Monné roger.pau@citrix.com wrote:
>>
>>> On Tue, Jun 30, 2020 at 02:33:46PM +0200, Michał Leszczyński wrote:
>>>> diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
>>>> index 59bdc28c89..7b8289d436 100644
>>>> --- a/xen/include/public/domctl.h
>>>> +++ b/xen/include/public/domctl.h
>>>> @@ -92,6 +92,7 @@ struct xen_domctl_createdomain {
>>>>      uint32_t max_evtchn_port;
>>>>      int32_t max_grant_frames;
>>>>      int32_t max_maptrack_frames;
>>>> +    uint8_t vmtrace_pt_order;
>>>
>>> I've been thinking about this, and even though this is a domctl (so
>>> not a stable interface) we might want to consider using a size (or a
>>> number of pages) here rather than an order. IPT also supports
>>> TOPA mode (kind of a linked list of buffers) that would allow for
>>> sizes not rounded to order boundaries to be used, since then only each
>>> item in the linked list needs to be rounded to an order boundary, so
>>> you could for example use three 4K pages in TOPA mode AFAICT.
>>>
>>> Roger.
>>
>> In previous versions it was "size" but it was requested to change it
>> to "order" in order to shrink the variable size from uint64_t to
>> uint8_t, because there is limited space for xen_domctl_createdomain
>> structure.
> 
> It's likely I'm missing something here, but I wasn't aware
> xen_domctl_createdomain had any constraints regarding its size. It's
> currently 48 bytes, which seems fairly small.

Additionally I would guess a uint32_t could do here, if the value
passed was "number of pages" rather than "number of bytes"?

Jan


From xen-devel-bounces@lists.xenproject.org Fri Jul 03 10:07:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jul 2020 10:07:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jrIb3-0002Ly-TU; Fri, 03 Jul 2020 10:07:05 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=XYi7=AO=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jrIb2-0002Lt-L1
 for xen-devel@lists.xenproject.org; Fri, 03 Jul 2020 10:07:04 +0000
X-Inumbo-ID: f0798660-bd14-11ea-b7bb-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f0798660-bd14-11ea-b7bb-bc764e2007e4;
 Fri, 03 Jul 2020 10:07:02 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=gfQ8yxO7gYLZ+9RUkTnNRZ9hqxXe3Ep7ewIQS9wSat4=; b=dFtHIekhrtYIN3jchhuaGPiCT
 bUMUrfAL7ogZQlHTe7Uz06nrxwWCt9a6yPxjvLG3EA10MvvSqnDGCBRsH+xbFJzmlIqvoiDc/GHNk
 ZOlorDB1tNkGxpzPbXunKe7yrGbXZjZwGPk84Tbxh5X1FSXHizIJezgOz9ogBQ9J1d9vs=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jrIaz-00041W-M9; Fri, 03 Jul 2020 10:07:01 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jrIaz-0000m0-B2; Fri, 03 Jul 2020 10:07:01 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jrIaz-0008Fb-AJ; Fri, 03 Jul 2020 10:07:01 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151547-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 151547: regressions - FAIL
X-Osstest-Failures: qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-amd64-libvirt-vhd:debian-di-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-libvirt-xsm:guest-start:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qcow2:debian-di-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:redhat-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-libvirt-pair:guest-start/debian:fail:regression
 qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-freebsd10-i386:guest-start:fail:regression
 qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:redhat-install:fail:regression
 qemu-mainline:test-amd64-amd64-libvirt-xsm:guest-start:fail:regression
 qemu-mainline:test-amd64-amd64-libvirt:guest-start:fail:regression
 qemu-mainline:test-amd64-i386-libvirt:guest-start:fail:regression
 qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-start:fail:regression
 qemu-mainline:test-amd64-amd64-libvirt-pair:guest-start/debian:fail:regression
 qemu-mainline:test-armhf-armhf-xl-vhd:debian-di-install:fail:regression
 qemu-mainline:test-arm64-arm64-libvirt-xsm:guest-start:fail:regression
 qemu-mainline:test-armhf-armhf-libvirt-raw:debian-di-install:fail:regression
 qemu-mainline:test-armhf-armhf-libvirt:guest-start:fail:regression
 qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: qemuu=64f0ad8ad8e13257e7c912df470d46784b55c3fd
X-Osstest-Versions-That: qemuu=9e3903136d9acde2fb2dd9e967ba928050a6cb4a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 03 Jul 2020 10:07:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151547 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151547/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-ovmf-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-win7-amd64 10 windows-install  fail REGR. vs. 151065
 test-amd64-amd64-libvirt-vhd 10 debian-di-install        fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-win7-amd64 10 windows-install   fail REGR. vs. 151065
 test-amd64-amd64-qemuu-nested-intel 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-libvirt-xsm  12 guest-start              fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-ovmf-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-ws16-amd64 10 windows-install  fail REGR. vs. 151065
 test-amd64-amd64-qemuu-nested-amd 10 debian-hvm-install  fail REGR. vs. 151065
 test-amd64-amd64-xl-qcow2    10 debian-di-install        fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-qemuu-rhel6hvm-amd 10 redhat-install     fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-ws16-amd64 10 windows-install   fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-debianhvm-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-libvirt-pair 21 guest-start/debian       fail REGR. vs. 151065
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-freebsd10-i386 11 guest-start            fail REGR. vs. 151065
 test-amd64-i386-qemuu-rhel6hvm-intel 10 redhat-install   fail REGR. vs. 151065
 test-amd64-amd64-libvirt-xsm 12 guest-start              fail REGR. vs. 151065
 test-amd64-amd64-libvirt     12 guest-start              fail REGR. vs. 151065
 test-amd64-i386-libvirt      12 guest-start              fail REGR. vs. 151065
 test-amd64-i386-freebsd10-amd64 11 guest-start           fail REGR. vs. 151065
 test-amd64-amd64-libvirt-pair 21 guest-start/debian      fail REGR. vs. 151065
 test-armhf-armhf-xl-vhd      10 debian-di-install        fail REGR. vs. 151065
 test-arm64-arm64-libvirt-xsm 12 guest-start              fail REGR. vs. 151065
 test-armhf-armhf-libvirt-raw 10 debian-di-install        fail REGR. vs. 151065
 test-armhf-armhf-libvirt     12 guest-start              fail REGR. vs. 151065

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                64f0ad8ad8e13257e7c912df470d46784b55c3fd
baseline version:
 qemuu                9e3903136d9acde2fb2dd9e967ba928050a6cb4a

Last test of basis   151065  2020-06-12 22:27:51 Z   20 days
Failing since        151101  2020-06-14 08:32:51 Z   19 days   21 attempts
Testing same since   151547  2020-07-02 19:02:45 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ahmed Karaman <ahmedkhaledkaraman@gmail.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Bulekov <alxndr@bu.edu>
  Alexey Krasikov <alex-krasikov@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Allan Peramaki <aperamak@pp1.inet.fi>
  Andrew Jones <drjones@redhat.com>
  Ani Sinha <ani.sinha@nutanix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Ard Biesheuvel <ardb@kernel.org>
  Artyom Tarasenko <atar4qemu@gmail.com>
  Aurelien Jarno <aurelien@aurel32.net>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bin Meng <bin.meng@windriver.com>
  Cathy Zhang <cathy.zhang@intel.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Claudio Fontana <cfontana@suse.de>
  Cleber Rosa <crosa@redhat.com>
  Colin Xu <colin.xu@intel.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniele Buono <dbuono@linux.vnet.ibm.com>
  David Carlier <devnexen@gmail.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Derek Su <dereksu@qnap.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Emilio G. Cota <cota@braap.org>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Evgeny Yakovlev <eyakovlev@virtuozzo.com>
  fangying <fangying1@huawei.com>
  Farhan Ali <alifm@linux.ibm.com>
  Finn Thain <fthain@telegraphics.com.au>
  Geoffrey McRae <geoff@hostfission.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Greg Kurz <groug@kaod.org>
  Guenter Roeck <linux@roeck-us.net>
  Gustavo Romero <gromero@linux.ibm.com>
  Helge Deller <deller@gmx.de>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Ian Jiang <ianjiang.ict@gmail.com>
  Igor Mammedov <imammedo@redhat.com>
  Janne Grunau <j@jannau.net>
  Jason Wang <jasowang@redhat.com>
  Jay Zhou <jianjay.zhou@huawei.com>
  Jean-Christophe Dubois <jcd@tribudubois.net>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jingqi Liu <jingqi.liu@intel.com>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Joseph Myers <joseph@codesourcery.com>
  Julio Faracco <jcfaracco@gmail.com>
  Kevin Wolf <kwolf@redhat.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Leonid Bloch <lb.workbox@gmail.com>
  Leonid Bloch <lbloch@janustech.com>
  Li Qiang <liq3ea@gmail.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  lichun <lichun@ruijie.com.cn>
  Like Xu <like.xu@linux.intel.com>
  Lingfeng Yang <lfy@google.com>
  Liran Alon <liran.alon@oracle.com>
  Lukas Straub <lukasstraub2@web.de>
  Maciej S. Szmigiero <maciej.szmigiero@oracle.com>
  Magnus Damm <magnus.damm@gmail.com>
  Mao Zhongyi <maozhongyi@cmss.chinamobile.com>
  Marcelo Tosatti <mtosatti@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Masahiro Yamada <masahiroy@kernel.org>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Michael S. Tsirkin <mst@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Raphael Norwitz <raphael.norwitz@nutanix.com>
  Richard Henderson <richard.henderson@linaro.org>
  Richard W.M. Jones <rjones@redhat.com>
  Robert Foley <robert.foley@linaro.org>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Roman Kagan <rkagan@virtuozzo.com>
  Roman Kagan <rvkagan@yandex-team.ru>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Sebastian Rasmussen <sebras@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Tao Xu <tao3.xu@intel.com>
  Thomas Huth <thuth@redhat.com>
  Tong Ho <tong.ho@xilinx.com>
  Volker Rümelin <vr_qemu@t-online.de>
  WangBowen <bowen.wang@intel.com>
  Wei Huang <wei.huang2@amd.com>
  Wei Wang <wei.w.wang@intel.com>
  Ying Fang <fangying1@huawei.com>
  Yoshinori Sato <ysato@users.sourceforge.jp>
  Yuri Benditovich <yuri.benditovich@daynix.com>
  Zhang Chen <chen.zhang@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 15325 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Jul 03 10:11:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jul 2020 10:11:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jrIfJ-0003A1-MQ; Fri, 03 Jul 2020 10:11:29 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gjf6=AO=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jrIfI-00039w-HT
 for xen-devel@lists.xenproject.org; Fri, 03 Jul 2020 10:11:28 +0000
X-Inumbo-ID: 8eb000de-bd15-11ea-8962-12813bfff9fa
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 8eb000de-bd15-11ea-8962-12813bfff9fa;
 Fri, 03 Jul 2020 10:11:27 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1593771087;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:content-transfer-encoding:in-reply-to;
 bh=X+P6sIYROQUrJMGeiNvi5GHRePLeRgpPx8AihGlO/N0=;
 b=Orqw9sWysC3YU1bxdmfREMkE2FrJdQRMW8qXi/4JKm/j7jbMP9Mr8F4l
 LtJt5cim7qrRkxnNTIGrEPdfHpkYhxCZbvQo4qF7dQVc6Ff+CWx9lVerI
 MDdap+h9hnBoNOZDfkqv/V4dQI9TtBMr1wBylSfxogxh2x9mCYckNU8tL c=;
Authentication-Results: esa5.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: PhBJq5feS6AtDWsamf3SJO8BhZ0iN6oEh1Ay5Mg4n1CsvgriISyVgh52qEwwuscFsQRv71wDsS
 e9g7iYh1GNokYYX86h5xUzBvCpHNzl4iuVJd7yBhn08XGQqAfgFH8aTLLZeQsX6J1YSEnEhjc3
 8kMlJZrDV8vAynN0BpkY98IUxuoJ4kd1Et2EKu+53I34u7imP3VxF+X78No9oA3kk+z5XCu3Pz
 5Q56v2t2Ls55mtvUAvU/1SlOuYx7V2vE8s2xXKDaw4g2RieWWzrhDLHImCe68GjhsyFzaQDxiI
 xPU=
X-SBRS: 2.7
X-MesageID: 21762672
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,307,1589256000"; d="scan'208";a="21762672"
Date: Fri, 3 Jul 2020 12:11:20 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Subject: Re: [PATCH v4 03/10] tools/libxl: add vmtrace_pt_size parameter
Message-ID: <20200703101120.GZ735@Air-de-Roger>
References: <cover.1593519420.git.michal.leszczynski@cert.pl>
 <5f4f4b1afa432258daff43f2dc8119b6a441fff4.1593519420.git.michal.leszczynski@cert.pl>
 <20200702090047.GX735@Air-de-Roger>
 <1505813895.18300396.1593707008144.JavaMail.zimbra@cert.pl>
 <20200703094438.GY735@Air-de-Roger>
 <b5335c2e-da13-28de-002b-e93dd68a0a11@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <b5335c2e-da13-28de-002b-e93dd68a0a11@suse.com>
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Julien Grall <julien@xen.org>, Stefano
 Stabellini <sstabellini@kernel.org>, tamas lengyel <tamas.lengyel@intel.com>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 =?utf-8?Q?Micha=C5=82_Leszczy=C5=84ski?= <michal.leszczynski@cert.pl>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, luwei kang <luwei.kang@intel.com>,
 Anthony PERARD <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Fri, Jul 03, 2020 at 11:56:38AM +0200, Jan Beulich wrote:
> On 03.07.2020 11:44, Roger Pau Monné wrote:
> > On Thu, Jul 02, 2020 at 06:23:28PM +0200, Michał Leszczyński wrote:
> >> ----- On Jul 2, 2020, at 11:00, Roger Pau Monné roger.pau@citrix.com wrote:
> >>
> >>> On Tue, Jun 30, 2020 at 02:33:46PM +0200, Michał Leszczyński wrote:
> >>>> diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
> >>>> index 59bdc28c89..7b8289d436 100644
> >>>> --- a/xen/include/public/domctl.h
> >>>> +++ b/xen/include/public/domctl.h
> >>>> @@ -92,6 +92,7 @@ struct xen_domctl_createdomain {
> >>>>      uint32_t max_evtchn_port;
> >>>>      int32_t max_grant_frames;
> >>>>      int32_t max_maptrack_frames;
> >>>> +    uint8_t vmtrace_pt_order;
> >>>
> >>> I've been thinking about this, and even though this is a domctl (so
> >>> not a stable interface) we might want to consider using a size (or a
> >>> number of pages) here rather than an order. IPT also supports
> >>> TOPA mode (kind of a linked list of buffers) that would allow for
> >>> sizes not rounded to order boundaries to be used, since then only each
> >>> item in the linked list needs to be rounded to an order boundary, so
> >>> you could for example use three 4K pages in TOPA mode AFAICT.
> >>>
> >>> Roger.
> >>
> >> In previous versions it was "size" but it was requested to change it
> >> to "order" in order to shrink the variable size from uint64_t to
> >> uint8_t, because there is limited space for xen_domctl_createdomain
> >> structure.
> > 
> > It's likely I'm missing something here, but I wasn't aware
> > xen_domctl_createdomain had any constraints regarding its size. It's
> > currently 48 bytes, which seems fairly small.
> 
> Additionally I would guess a uint32_t could do here, if the value
> passed was "number of pages" rather than "number of bytes"?

That could work. I'm not sure whether it needs to state that those will
be 4K pages, though, since Arm can have a different minimum page size
IIRC (or perhaps that's already the assumption for all number-of-frames
fields). vmtrace_nr_frames seems fine to me.

Roger.


From xen-devel-bounces@lists.xenproject.org Fri Jul 03 10:22:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jul 2020 10:22:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jrIqE-00043K-Pl; Fri, 03 Jul 2020 10:22:46 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OFwp=AO=oracle.com=boris.ostrovsky@srs-us1.protection.inumbo.net>)
 id 1jrIqD-00042l-Hh
 for xen-devel@lists.xenproject.org; Fri, 03 Jul 2020 10:22:45 +0000
X-Inumbo-ID: 1fe2b762-bd17-11ea-8962-12813bfff9fa
Received: from userp2130.oracle.com (unknown [156.151.31.86])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1fe2b762-bd17-11ea-8962-12813bfff9fa;
 Fri, 03 Jul 2020 10:22:40 +0000 (UTC)
Received: from pps.filterd (userp2130.oracle.com [127.0.0.1])
 by userp2130.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 063A1gOr191558;
 Fri, 3 Jul 2020 10:22:38 GMT
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com;
 h=subject : to : cc :
 references : from : message-id : date : mime-version : in-reply-to :
 content-type : content-transfer-encoding; s=corp-2020-01-29;
 bh=VRQkPgod6GGE++rwFxMBA+cnzNu5RnUyQ53+t1KYc4o=;
 b=ULkl9YeOFywnaJBig5yRrD95j3Z68udeszV/JhhqcRiG+qeag4eJ3f+wrOzz40jx7sdi
 V4YbDg1AGCvY/KXVyBUHbzLbcgkQINeEqttQoeZo6/0GHL4nVZY3rGstWgdyC3AEcTDq
 RbLbqKtU6uZsPHz2C++ReaZgPmAtPCEWbagKdfiNcDjo8pGI/8ViZh6P8D6UFSPm4CRz
 BupF4HVAhs1lv7+31q0T7SQbgzfYdJhb7dLm03wEmUFqmEr5bSafgTL3vQpVp+KhIhjR
 DOso+psHEVEMiaKfsBMMNAdl3KB08WEkJ1szYS3qmaPYMWrkBaK0MP9zdBsJ4EpRHqla Ww== 
Received: from userp3020.oracle.com (userp3020.oracle.com [156.151.31.79])
 by userp2130.oracle.com with ESMTP id 31ywrc3ckc-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=FAIL);
 Fri, 03 Jul 2020 10:22:38 +0000
Received: from pps.filterd (userp3020.oracle.com [127.0.0.1])
 by userp3020.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 063AIifQ027112;
 Fri, 3 Jul 2020 10:22:37 GMT
Received: from userv0122.oracle.com (userv0122.oracle.com [156.151.31.75])
 by userp3020.oracle.com with ESMTP id 31xfvx4r65-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Fri, 03 Jul 2020 10:22:37 +0000
Received: from abhmp0001.oracle.com (abhmp0001.oracle.com [141.146.116.7])
 by userv0122.oracle.com (8.14.4/8.14.4) with ESMTP id 063AMZqu031832;
 Fri, 3 Jul 2020 10:22:36 GMT
Received: from [10.39.218.81] (/10.39.218.81)
 by default (Oracle Beehive Gateway v4.0)
 with ESMTP ; Fri, 03 Jul 2020 10:22:35 +0000
Subject: Re: [PATCH v2 0/2] xen/xenbus: some cleanups
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org,
 linux-kernel@vger.kernel.org, clang-built-linux@googlegroups.com
References: <20200701121638.19840-1-jgross@suse.com>
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Autocrypt: addr=boris.ostrovsky@oracle.com; keydata=
 xsFNBFH8CgsBEAC0KiOi9siOvlXatK2xX99e/J3OvApoYWjieVQ9232Eb7GzCWrItCzP8FUV
 PQg8rMsSd0OzIvvjbEAvaWLlbs8wa3MtVLysHY/DfqRK9Zvr/RgrsYC6ukOB7igy2PGqZd+M
 MDnSmVzik0sPvB6xPV7QyFsykEgpnHbvdZAUy/vyys8xgT0PVYR5hyvhyf6VIfGuvqIsvJw5
 C8+P71CHI+U/IhsKrLrsiYHpAhQkw+Zvyeml6XSi5w4LXDbF+3oholKYCkPwxmGdK8MUIdkM
 d7iYdKqiP4W6FKQou/lC3jvOceGupEoDV9botSWEIIlKdtm6C4GfL45RD8V4B9iy24JHPlom
 woVWc0xBZboQguhauQqrBFooHO3roEeM1pxXjLUbDtH4t3SAI3gt4dpSyT3EvzhyNQVVIxj2
 FXnIChrYxR6S0ijSqUKO0cAduenhBrpYbz9qFcB/GyxD+ZWY7OgQKHUZMWapx5bHGQ8bUZz2
 SfjZwK+GETGhfkvNMf6zXbZkDq4kKB/ywaKvVPodS1Poa44+B9sxbUp1jMfFtlOJ3AYB0WDS
 Op3d7F2ry20CIf1Ifh0nIxkQPkTX7aX5rI92oZeu5u038dHUu/dO2EcuCjl1eDMGm5PLHDSP
 0QUw5xzk1Y8MG1JQ56PtqReO33inBXG63yTIikJmUXFTw6lLJwARAQABzTNCb3JpcyBPc3Ry
 b3Zza3kgKFdvcmspIDxib3Jpcy5vc3Ryb3Zza3lAb3JhY2xlLmNvbT7CwXgEEwECACIFAlH8
 CgsCGwMGCwkIBwMCBhUIAgkKCwQWAgMBAh4BAheAAAoJEIredpCGysGyasEP/j5xApopUf4g
 9Fl3UxZuBx+oduuw3JHqgbGZ2siA3EA4bKwtKq8eT7ekpApn4c0HA8TWTDtgZtLSV5IdH+9z
 JimBDrhLkDI3Zsx2CafL4pMJvpUavhc5mEU8myp4dWCuIylHiWG65agvUeFZYK4P33fGqoaS
 VGx3tsQIAr7MsQxilMfRiTEoYH0WWthhE0YVQzV6kx4wj4yLGYPPBtFqnrapKKC8yFTpgjaK
 jImqWhU9CSUAXdNEs/oKVR1XlkDpMCFDl88vKAuJwugnixjbPFTVPyoC7+4Bm/FnL3iwlJVE
 qIGQRspt09r+datFzPqSbp5Fo/9m4JSvgtPp2X2+gIGgLPWp2ft1NXHHVWP19sPgEsEJXSr9
 tskM8ScxEkqAUuDs6+x/ISX8wa5Pvmo65drN+JWA8EqKOHQG6LUsUdJolFM2i4Z0k40BnFU/
 kjTARjrXW94LwokVy4x+ZYgImrnKWeKac6fMfMwH2aKpCQLlVxdO4qvJkv92SzZz4538az1T
 m+3ekJAimou89cXwXHCFb5WqJcyjDfdQF857vTn1z4qu7udYCuuV/4xDEhslUq1+GcNDjAhB
 nNYPzD+SvhWEsrjuXv+fDONdJtmLUpKs4Jtak3smGGhZsqpcNv8nQzUGDQZjuCSmDqW8vn2o
 hWwveNeRTkxh+2x1Qb3GT46uzsFNBFH8CgsBEADGC/yx5ctcLQlB9hbq7KNqCDyZNoYu1HAB
 Hal3MuxPfoGKObEktawQPQaSTB5vNlDxKihezLnlT/PKjcXC2R1OjSDinlu5XNGc6mnky03q
 yymUPyiMtWhBBftezTRxWRslPaFWlg/h/Y1iDuOcklhpr7K1h1jRPCrf1yIoxbIpDbffnuyz
 kuto4AahRvBU4Js4sU7f/btU+h+e0AcLVzIhTVPIz7PM+Gk2LNzZ3/on4dnEc/qd+ZZFlOQ4
 KDN/hPqlwA/YJsKzAPX51L6Vv344pqTm6Z0f9M7YALB/11FO2nBB7zw7HAUYqJeHutCwxm7i
 BDNt0g9fhviNcJzagqJ1R7aPjtjBoYvKkbwNu5sWDpQ4idnsnck4YT6ctzN4I+6lfkU8zMzC
 gM2R4qqUXmxFIS4Bee+gnJi0Pc3KcBYBZsDK44FtM//5Cp9DrxRQOh19kNHBlxkmEb8kL/pw
 XIDcEq8MXzPBbxwHKJ3QRWRe5jPNpf8HCjnZz0XyJV0/4M1JvOua7IZftOttQ6KnM4m6WNIZ
 2ydg7dBhDa6iv1oKdL7wdp/rCulVWn8R7+3cRK95SnWiJ0qKDlMbIN8oGMhHdin8cSRYdmHK
 kTnvSGJNlkis5a+048o0C6jI3LozQYD/W9wq7MvgChgVQw1iEOB4u/3FXDEGulRVko6xCBU4
 SQARAQABwsFfBBgBAgAJBQJR/AoLAhsMAAoJEIredpCGysGyfvMQAIywR6jTqix6/fL0Ip8G
 jpt3uk//QNxGJE3ZkUNLX6N786vnEJvc1beCu6EwqD1ezG9fJKMl7F3SEgpYaiKEcHfoKGdh
 30B3Hsq44vOoxR6zxw2B/giADjhmWTP5tWQ9548N4VhIZMYQMQCkdqaueSL+8asp8tBNP+TJ
 PAIIANYvJaD8xA7sYUXGTzOXDh2THWSvmEWWmzok8er/u6ZKdS1YmZkUy8cfzrll/9hiGCTj
 u3qcaOM6i/m4hqtvsI1cOORMVwjJF4+IkC5ZBoeRs/xW5zIBdSUoC8L+OCyj5JETWTt40+lu
 qoqAF/AEGsNZTrwHJYu9rbHH260C0KYCNqmxDdcROUqIzJdzDKOrDmebkEVnxVeLJBIhYZUd
 t3Iq9hdjpU50TA6sQ3mZxzBdfRgg+vaj2DsJqI5Xla9QGKD+xNT6v14cZuIMZzO7w0DoojM4
 ByrabFsOQxGvE0w9Dch2BDSI2Xyk1zjPKxG1VNBQVx3flH37QDWpL2zlJikW29Ws86PHdthh
 Fm5PY8YtX576DchSP6qJC57/eAAe/9ztZdVAdesQwGb9hZHJc75B+VNm4xrh/PJO6c1THqdQ
 19WVJ+7rDx3PhVncGlbAOiiiE3NOFPJ1OQYxPKtpBUukAlOTnkKE6QcA4zckFepUkfmBV1wM
 Jg6OxFYd01z+a+oL
Message-ID: <c940406e-cfb5-f536-2eee-278f3520c702@oracle.com>
Date: Fri, 3 Jul 2020 06:22:33 -0400
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <20200701121638.19840-1-jgross@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-US
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9670
 signatures=668680
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 suspectscore=0
 adultscore=0 spamscore=0
 phishscore=0 malwarescore=0 mlxlogscore=999 bulkscore=0 mlxscore=0
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2004280000
 definitions=main-2007030072
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9670
 signatures=668680
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 bulkscore=0
 spamscore=0 mlxlogscore=999
 clxscore=1015 cotscore=-2147483648 priorityscore=1501 lowpriorityscore=0
 malwarescore=0 mlxscore=0 adultscore=0 suspectscore=0 impostorscore=0
 phishscore=0 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2004280000 definitions=main-2007030071
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 7/1/20 8:16 AM, Juergen Gross wrote:
> Avoid allocating large amounts of data on the stack in
> xenbus_map_ring_valloc(), plus some related return value cleanups.
>
> Juergen Gross (2):
>   xen/xenbus: avoid large structs and arrays on the stack
>   xen/xenbus: let xenbus_map_ring_valloc() return errno values only
>
>  drivers/xen/xenbus/xenbus_client.c | 167 ++++++++++++++---------------
>  1 file changed, 81 insertions(+), 86 deletions(-)
>


Applied to for-linus-5.8b.


-boris



From xen-devel-bounces@lists.xenproject.org Fri Jul 03 10:34:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jul 2020 10:34:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jrJ1c-0004yE-SA; Fri, 03 Jul 2020 10:34:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=bw0N=AO=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1jrJ1b-0004y9-71
 for xen-devel@lists.xenproject.org; Fri, 03 Jul 2020 10:34:31 +0000
X-Inumbo-ID: c6bb7dd4-bd18-11ea-bca7-bc764e2007e4
Received: from mail-wr1-x436.google.com (unknown [2a00:1450:4864:20::436])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c6bb7dd4-bd18-11ea-bca7-bc764e2007e4;
 Fri, 03 Jul 2020 10:34:30 +0000 (UTC)
Received: by mail-wr1-x436.google.com with SMTP id f7so29109378wrw.1
 for <xen-devel@lists.xenproject.org>; Fri, 03 Jul 2020 03:34:30 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
 :mime-version:content-transfer-encoding:thread-index
 :content-language;
 bh=rIYyG9fTWCUmdeGoBgnk41cUwWAC9W1MLe1N4MjMJwU=;
 b=q07rg2AJH0YG1jBUsr4iBewNGX9yewxALXbz6vAAkaN7g2+ntVrLc9y6N+tiRcY1lR
 TbbW5aAY6jTbU0vWoJ/9OVZHD0laZZs1exioKIEHHLvAPlnP7zRoyG2mi4pYzUvWHUpp
 G0cEUwEoE4ybJbKBD7o7Dgi0z4vQoOeHifU+N72BgA8w7KR2djqw+GB74dJC8sWUbEvL
 eP188/hDKQc8gx+0d/M1ZOKraE5CW8FfUE6fzeOGwQcH/LcpOWZTo0JIaKn09+yZvNN1
 CGUnVXrI7aAnovTpC48sBKzPg5bhU33wWj8oydX3MPBVcEdoDEwtrOClQs1gD5+Zwy2e
 g0ZQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
 :subject:date:message-id:mime-version:content-transfer-encoding
 :thread-index:content-language;
 bh=rIYyG9fTWCUmdeGoBgnk41cUwWAC9W1MLe1N4MjMJwU=;
 b=k+d2YfrUFDSauDdzafucrj7TXrJ1KYxGFTySoQZYr50Cv5oyB8IE5+ThKugrfvWhss
 LUDHBmhoYHrPZfcolCwvRzcOt5pw6VcYjPv5z2niYS/Mrzi7vK24PDQTDHQLTP+XptPt
 KFWuyeINWhi+SBMZgOaGyZHAGkWC9dc7e3FoOn1HeXvPbiSQofI5HlKNU1p6wL3pVqx7
 B6eVJpMcMhAYnbBPoR9SoplQbbeaRkZWIc22dr4ID/RgV6YDFTf+BL8ZDRAIS+ms1ZBe
 HIPMqa5oIc6VJ1ftXK6TTRcT1cgyJl4kNHjVkxMDBkWlFC/1DmaEWBItD6pkPgk9NOsK
 fGCA==
X-Gm-Message-State: AOAM530vExctEn+wuzeSnJSHg1YdZUgBa08BNQIUurfu7o+oBFZ39cYA
 9p54HjPPoK7q/iJBFPy6JPo=
X-Google-Smtp-Source: ABdhPJy7b4aYqgAYm0HnXDsTLnGxF/Vf1bbX98WZU8rAoyIR5RKYEskmROzKKrjI3ZiHLIL+kGfWLQ==
X-Received: by 2002:adf:db09:: with SMTP id s9mr35197367wri.256.1593772469554; 
 Fri, 03 Jul 2020 03:34:29 -0700 (PDT)
Received: from CBGR90WXYV0 (54-240-197-235.amazon.com. [54.240.197.235])
 by smtp.gmail.com with ESMTPSA id y7sm13427934wrt.11.2020.07.03.03.34.28
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Fri, 03 Jul 2020 03:34:29 -0700 (PDT)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
To: "'Michael Young'" <m.a.young@durham.ac.uk>
References: <alpine.LFD.2.22.394.2006302259370.2894@austen3.home>
 <004601d6511e$28673710$7935a530$@xen.org>
 <alpine.LFD.2.22.394.2007031044330.1956@austen3.home>
In-Reply-To: <alpine.LFD.2.22.394.2007031044330.1956@austen3.home>
Subject: RE: Build problems in kdd.c with xen-4.14.0-rc4 
Date: Fri, 3 Jul 2020 11:34:28 +0100
Message-ID: <004701d65125$87f4de60$97de9b20$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="us-ascii"
Content-Transfer-Encoding: 7bit
X-Mailer: Microsoft Outlook 16.0
Thread-Index: AQJDTGpxOkTL6sslt6TCRsF6oIyh1gHgYS1gAmIf6man+Z3lQA==
Content-Language: en-gb
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Reply-To: paul@xen.org
Cc: xen-devel@lists.xenproject.org, 'Tim Deegan' <tim@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> -----Original Message-----
> From: Michael Young <m.a.young@durham.ac.uk>
> Sent: 03 July 2020 10:49
> To: paul@xen.org
> Cc: xen-devel@lists.xenproject.org; 'Tim Deegan' <tim@xen.org>
> Subject: RE: Build problems in kdd.c with xen-4.14.0-rc4
> 
> On Fri, 3 Jul 2020, Paul Durrant wrote:
> 
> >> -----Original Message-----
> >> From: Xen-devel <xen-devel-bounces@lists.xenproject.org> On Behalf Of Michael Young
> >> Sent: 30 June 2020 23:22
> >> To: xen-devel@lists.xenproject.org
> >> Cc: Tim Deegan <tim@xen.org>
> >> Subject: Build problems in kdd.c with xen-4.14.0-rc4
> >>
> >> I get the following errors when trying to build xen-4.14.0-rc4
> >>
> >> kdd.c: In function 'kdd_tx':
> >> kdd.c:754:15: error: array subscript 16 is above array bounds of 'uint8_t[16]' {aka 'unsigned
> >> char[16]'} [-Werror=array-bounds]
> >>    754 |         s->txb[len++] = 0xaa;
> >>        |         ~~~~~~^~~~~~~
> >> kdd.c:82:17: note: while referencing 'txb'
> >>     82 |         uint8_t txb[sizeof (kdd_hdr)];           /* Marshalling area for tx */
> >>        |                 ^~~
> >> kdd.c: In function 'kdd_break':
> >> kdd.c:819:19: error: array subscript 16 is above array bounds of 'uint8_t[16]' {aka 'unsigned
> >> char[16]'} [-Werror=array-bounds]
> >>    819 |             s->txb[sizeof (kdd_hdr) + i] = i;
> >>        |             ~~~~~~^~~~~~~~~~~~~~~~~~~~~~
> >> kdd.c:82:17: note: while referencing 'txb'
> >>     82 |         uint8_t txb[sizeof (kdd_hdr)];           /* Marshalling area for tx */
> >>        |                 ^~~
> >> In file included from /usr/include/stdio.h:867,
> >>                   from kdd.c:36:
> >> In function 'vsnprintf',
> >>      inlined from 'kdd_send_string' at kdd.c:791:11:
> >> /usr/include/bits/stdio2.h:80:10: error: '__builtin___vsnprintf_chk' specified bound 65519 exceeds
> >> destination size 0 [-Werror=stringop-overflow=]
> >>     80 |   return __builtin___vsnprintf_chk (__s, __n, __USE_FORTIFY_LEVEL - 1,
> >>        |          ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> >>     81 |         __bos (__s), __fmt, __ap);
> >>        |         ~~~~~~~~~~~~~~~~~~~~~~~~~
> >> cc1: all warnings being treated as errors
> >> make[4]: *** [/builddir/build/BUILD/xen-4.14.0-rc4/tools/debugger/kdd/../../../tools/Rules.mk:216:
> >> kdd.o] Error 1
> >>
> >> The first two array-bounds errors seem to be a result of the
> >>
> >> kdd: stop using [0] arrays to access packet contents
> >>
> >> patch at
> >> http://xenbits.xenproject.org/gitweb/?p=xen.git;a=commit;h=3471cafbdda35eacf04670881dd2aee2558b4f08
> >>
> >> which reduced the size of txb from
> >> sizeof (kdd_hdr) + 65536
> >> to
> >> sizeof (kdd_hdr)
> >> which means the code now tries to write beyond the end of txb in both
> >> cases.
> >>
> >
> > Sorry not to get back to you sooner. Which compiler are you using?
> >
> >  Paul
> 
> This was with gcc-10.1.1-1.fc32.x86_64
> Full build logs are (at the moment) at
> https://download.copr.fedorainfracloud.org/results/myoung/xentest/fedora-32-x86_64/01515056-xen/
> 

Ok, I have an older compiler. Does this patch fix it for you?

---8<---
diff --git a/tools/debugger/kdd/kdd.c b/tools/debugger/kdd/kdd.c
index 866532f0c7..a7d0976ea4 100644
--- a/tools/debugger/kdd/kdd.c
+++ b/tools/debugger/kdd/kdd.c
@@ -79,11 +79,11 @@ typedef struct {
 /* State of the debugger stub */
 typedef struct {
     union {
-        uint8_t txb[sizeof (kdd_hdr)];           /* Marshalling area for tx */
+        uint8_t txb[sizeof (kdd_pkt)];           /* Marshalling area for tx */
         kdd_pkt txp;                 /* Also readable as a packet structure */
     };
     union {
-        uint8_t rxb[sizeof (kdd_hdr)];           /* Marshalling area for rx */
+        uint8_t rxb[sizeof (kdd_pkt)];           /* Marshalling area for rx */
         kdd_pkt rxp;                 /* Also readable as a packet structure */
     };
     unsigned int cur;       /* Offset into rx where we'll put the next byte */
---8<---

  Paul

>  	Michael Young



From xen-devel-bounces@lists.xenproject.org Fri Jul 03 10:35:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jul 2020 10:35:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jrJ2w-00053V-6C; Fri, 03 Jul 2020 10:35:54 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Kpvw=AO=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jrJ2u-00053P-70
 for xen-devel@lists.xenproject.org; Fri, 03 Jul 2020 10:35:52 +0000
X-Inumbo-ID: f7534dfa-bd18-11ea-bca7-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f7534dfa-bd18-11ea-bca7-bc764e2007e4;
 Fri, 03 Jul 2020 10:35:51 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=jLYvpQIZR3neQsEKuEJQwRzz1OvwxeZioF5niqJcRX8=; b=KqJwXi3+saYywAbDfjGwv+uY9/
 VcVtHvqYFcONtKUcXqDwlMSfwQqYv8aUtlTEZAyLZXhP7qxfrGiu6oBZ/Pq6H+Izcpv9U+Rzr9/bY
 wDsX2+AzX0MCLGnAcZZa8KwkimouJ6uLfKon08ahCCC4xGzHVOLhEQNOcg/Rw6zkqQ5g=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jrJ2m-0004XD-MG; Fri, 03 Jul 2020 10:35:44 +0000
Received: from 54-240-197-234.amazon.com ([54.240.197.234]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jrJ2m-0005U7-Cd; Fri, 03 Jul 2020 10:35:44 +0000
Subject: Re: [PATCH v4 06/10] memory: batch processing in acquire_resource()
To: =?UTF-8?Q?Micha=c5=82_Leszczy=c5=84ski?= <michal.leszczynski@cert.pl>,
 xen-devel@lists.xenproject.org
References: <cover.1593519420.git.michal.leszczynski@cert.pl>
 <a317b169e3710a481bb4be066d9b878f27b3e66c.1593519420.git.michal.leszczynski@cert.pl>
From: Julien Grall <julien@xen.org>
Message-ID: <5be6cb58-82d0-0a78-a9b2-5c078b5d3587@xen.org>
Date: Fri, 3 Jul 2020 11:35:41 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:68.0)
 Gecko/20100101 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <a317b169e3710a481bb4be066d9b878f27b3e66c.1593519420.git.michal.leszczynski@cert.pl>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, tamas.lengyel@intel.com,
 Wei Liu <wl@xen.org>, "paul@xen.org" <paul@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 luwei.kang@intel.com
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

(+ Paul as the author of XENMEM_acquire_resource)

Hi,

On 30/06/2020 13:33, Michał Leszczyński wrote:
> From: Michal Leszczynski <michal.leszczynski@cert.pl>
> 
> Allow to acquire large resources by allowing acquire_resource()
> to process items in batches, using hypercall continuation.
> 
> Signed-off-by: Michal Leszczynski <michal.leszczynski@cert.pl>
> ---
>   xen/common/memory.c | 32 +++++++++++++++++++++++++++++---
>   1 file changed, 29 insertions(+), 3 deletions(-)
> 
> diff --git a/xen/common/memory.c b/xen/common/memory.c
> index 714077c1e5..3ab06581a2 100644
> --- a/xen/common/memory.c
> +++ b/xen/common/memory.c
> @@ -1046,10 +1046,12 @@ static int acquire_grant_table(struct domain *d, unsigned int id,
>   }
>   
>   static int acquire_resource(
> -    XEN_GUEST_HANDLE_PARAM(xen_mem_acquire_resource_t) arg)
> +    XEN_GUEST_HANDLE_PARAM(xen_mem_acquire_resource_t) arg,
> +    unsigned long *start_extent)
>   {
>       struct domain *d, *currd = current->domain;
>       xen_mem_acquire_resource_t xmar;
> +    uint32_t total_frames;
>       /*
>        * The mfn_list and gfn_list (below) arrays are ok on stack for the
>        * moment since they are small, but if they need to grow in future
> @@ -1077,8 +1079,17 @@ static int acquire_resource(
>           return 0;
>       }
>   
> +    total_frames = xmar.nr_frames;

On 32-bit, the start_extent would be 26 bits wide, which is not enough to
cover all of xmar.nr_frames. Therefore, you will want to check that it is
possible to encode a continuation. Something like:

/* Is the size too large for us to encode a continuation? */
if ( unlikely(xmar.nr_frames > (UINT_MAX >> MEMOP_EXTENT_SHIFT)) )

> +
> +    if ( *start_extent )
> +    {
> +        xmar.frame += *start_extent;
> +        xmar.nr_frames -= *start_extent;

As start_extent is exposed to the guest, you want to check that it is not
bigger than xmar.nr_frames.

> +        guest_handle_add_offset(xmar.frame_list, *start_extent);
> +    }
> +
>       if ( xmar.nr_frames > ARRAY_SIZE(mfn_list) )
> -        return -E2BIG;
> +        xmar.nr_frames = ARRAY_SIZE(mfn_list);

The documentation of the hypercall suggests that if you pass NULL, then
it will return the maximum value for nr_frames supported by the
implementation. So technically a domain cannot use more than
ARRAY_SIZE(mfn_list).

However, your new addition conflicts with the documentation. Can you
clarify how a domain will know that it can use more than
ARRAY_SIZE(mfn_list)?

>   
>       rc = rcu_lock_remote_domain_by_id(xmar.domid, &d);
>       if ( rc )
> @@ -1135,6 +1146,14 @@ static int acquire_resource(
>           }
>       }
>   
> +    if ( !rc )
> +    {
> +        *start_extent += xmar.nr_frames;
> +
> +        if ( *start_extent != total_frames )
> +            rc = -ERESTART;
> +    }
> +
>    out:
>       rcu_unlock_domain(d);
>   
> @@ -1600,7 +1619,14 @@ long do_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
>   
>       case XENMEM_acquire_resource:
>           rc = acquire_resource(
> -            guest_handle_cast(arg, xen_mem_acquire_resource_t));
> +            guest_handle_cast(arg, xen_mem_acquire_resource_t),
> +            &start_extent);

Hmmm... it looks like we forgot to check that start_extent is always 0 
when the hypercall was added.

As this is exposed to the guest, it technically means that there is no
guarantee that start_extent will always be 0.

However, in practice, this was likely the intention and should be the 
case. So it may just be enough to mention the potential breakage in the 
commit message.

@All: what do you think?

> +
> +        if ( rc == -ERESTART )
> +            return hypercall_create_continuation(
> +                __HYPERVISOR_memory_op, "lh",
> +                op | (start_extent << MEMOP_EXTENT_SHIFT), arg);
> +
>           break;
>   
>       default:
> 

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Jul 03 10:52:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jul 2020 10:52:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jrJJK-0006gs-Mk; Fri, 03 Jul 2020 10:52:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=bw0N=AO=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1jrJJJ-0006gn-R9
 for xen-devel@lists.xenproject.org; Fri, 03 Jul 2020 10:52:49 +0000
X-Inumbo-ID: 557656a0-bd1b-11ea-bb8b-bc764e2007e4
Received: from mail-wr1-x441.google.com (unknown [2a00:1450:4864:20::441])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 557656a0-bd1b-11ea-bb8b-bc764e2007e4;
 Fri, 03 Jul 2020 10:52:48 +0000 (UTC)
Received: by mail-wr1-x441.google.com with SMTP id o11so32185459wrv.9
 for <xen-devel@lists.xenproject.org>; Fri, 03 Jul 2020 03:52:48 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
 :mime-version:content-transfer-encoding:thread-index
 :content-language;
 bh=YRp1E5N2b9fcgKZLzvgbn7oU+J1uNarEF7+xE3/CSD4=;
 b=BsF77UDO6I/IVj8ZsItBsDuMQHJ/EQDAoXYaysu9iIarSxMYy/zLzR1JXrcUzbxmVu
 6JHQ1FkAXaup77oD6zbxmHAk9DuWfhCGDGQ9SRbJ+mUrSFggS/AT6Zzal86WLkd7VC9s
 4CeYaOWPDa8hQOsS8P6dec1keqvNr8d52ii69LosdpaXXwAjbxZFrVMw0KjUb1te7J7I
 2QTqd8Zbvw86T58pbMuF9bzAxoY2QEcjviQz99fJQrdvaRFBSEM8fIt+B2jrQOK3yYtX
 cRo6lMbqtr5Pt2On+ngI1amSmeRWau9qlsugbaDeKRLa1XuPN8pQLw2WqAwMyVMOvAys
 CiYw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
 :subject:date:message-id:mime-version:content-transfer-encoding
 :thread-index:content-language;
 bh=YRp1E5N2b9fcgKZLzvgbn7oU+J1uNarEF7+xE3/CSD4=;
 b=lscodplzqjwh4ylNRcHmTJSOi6yq/fCjyT1XeXss+UM0+iC56FmyjADiPabRAPXPNg
 wX3HdHWcUlbml93vCvWsaZpk3iGivPhjmAZ6v8wfEf1jPhiIElIVpRAhbScNLt28Xlrv
 d0lTrZVgNrDanO/S/O2uOS5YkrBDQgOpa8IhVmgVdgE8W6uNtpFeYzmNK9NH3IXikku0
 kBT/SjufxMCUl2bP37j1cSgvFA7TLV8XXpGgeGVtYkGadRgHDmt7mHw1sPgocgoy2LKN
 dkMhHyKPZdKtfDVualmGAhhObUuAyZpuiP8ZtAfIlw7EXAsdaw3AkrvztVqPeZ33N8wc
 bonQ==
X-Gm-Message-State: AOAM532CySGrPrL12nSmvhxVEg0o/BLanliru0IQ9pm/L1IrTTVofRRP
 gdEvyMUqvpKz6uhE+Cseb/o=
X-Google-Smtp-Source: ABdhPJy7s1D6ykSnCB7ioFDMgy/inGyzKLNuiGTpgGyhXK4x4Y/a7NCVu20vU3AOosuasyx2K2Kd+w==
X-Received: by 2002:adf:e850:: with SMTP id d16mr37667320wrn.426.1593773567952; 
 Fri, 03 Jul 2020 03:52:47 -0700 (PDT)
Received: from CBGR90WXYV0 (54-240-197-235.amazon.com. [54.240.197.235])
 by smtp.gmail.com with ESMTPSA id b18sm4540781wrs.46.2020.07.03.03.52.46
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Fri, 03 Jul 2020 03:52:47 -0700 (PDT)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
To: "'Julien Grall'" <julien@xen.org>,
 =?utf-8?Q?'Micha=C5=82_Leszczy=C5=84ski'?= <michal.leszczynski@cert.pl>,
 <xen-devel@lists.xenproject.org>
References: <cover.1593519420.git.michal.leszczynski@cert.pl>
 <a317b169e3710a481bb4be066d9b878f27b3e66c.1593519420.git.michal.leszczynski@cert.pl>
 <5be6cb58-82d0-0a78-a9b2-5c078b5d3587@xen.org>
In-Reply-To: <5be6cb58-82d0-0a78-a9b2-5c078b5d3587@xen.org>
Subject: RE: [PATCH v4 06/10] memory: batch processing in acquire_resource()
Date: Fri, 3 Jul 2020 11:52:46 +0100
Message-ID: <004901d65128$16a6f330$43f4d990$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="utf-8"
Content-Transfer-Encoding: quoted-printable
X-Mailer: Microsoft Outlook 16.0
Thread-Index: AQFzyG1KjeOu8uNiQeFGxRZBsBXsYgGYFxRdAaoW8QOpoKwZQA==
Content-Language: en-gb
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Reply-To: paul@xen.org
Cc: 'Stefano Stabellini' <sstabellini@kernel.org>, tamas.lengyel@intel.com,
 'Wei Liu' <wl@xen.org>, 'Andrew Cooper' <andrew.cooper3@citrix.com>,
 'Ian Jackson' <ian.jackson@eu.citrix.com>,
 'George Dunlap' <george.dunlap@citrix.com>, 'Jan Beulich' <jbeulich@suse.com>,
 luwei.kang@intel.com
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> -----Original Message-----
> From: Julien Grall <julien@xen.org>
> Sent: 03 July 2020 11:36
> To: Michał Leszczyński <michal.leszczynski@cert.pl>; xen-devel@lists.xenproject.org
> Cc: luwei.kang@intel.com; tamas.lengyel@intel.com; Andrew Cooper <andrew.cooper3@citrix.com>; George
> Dunlap <george.dunlap@citrix.com>; Ian Jackson <ian.jackson@eu.citrix.com>; Jan Beulich
> <jbeulich@suse.com>; Stefano Stabellini <sstabellini@kernel.org>; Wei Liu <wl@xen.org>; paul@xen.org
> Subject: Re: [PATCH v4 06/10] memory: batch processing in acquire_resource()
> 
> (+ Paul as the author of XENMEM_acquire_resource)
> 
> Hi,
> 
> On 30/06/2020 13:33, Michał Leszczyński wrote:
> > From: Michal Leszczynski <michal.leszczynski@cert.pl>
> >
> > Allow to acquire large resources by allowing acquire_resource()
> > to process items in batches, using hypercall continuation.
> >
> > Signed-off-by: Michal Leszczynski <michal.leszczynski@cert.pl>
> > ---
> >   xen/common/memory.c | 32 +++++++++++++++++++++++++++++---
> >   1 file changed, 29 insertions(+), 3 deletions(-)
> >
> > diff --git a/xen/common/memory.c b/xen/common/memory.c
> > index 714077c1e5..3ab06581a2 100644
> > --- a/xen/common/memory.c
> > +++ b/xen/common/memory.c
> > @@ -1046,10 +1046,12 @@ static int acquire_grant_table(struct domain *d, unsigned int id,
> >   }
> >
> >   static int acquire_resource(
> > -    XEN_GUEST_HANDLE_PARAM(xen_mem_acquire_resource_t) arg)
> > +    XEN_GUEST_HANDLE_PARAM(xen_mem_acquire_resource_t) arg,
> > +    unsigned long *start_extent)
> >   {
> >       struct domain *d, *currd = current->domain;
> >       xen_mem_acquire_resource_t xmar;
> > +    uint32_t total_frames;
> >       /*
> >        * The mfn_list and gfn_list (below) arrays are ok on stack for the
> >        * moment since they are small, but if they need to grow in future
> > @@ -1077,8 +1079,17 @@ static int acquire_resource(
> >           return 0;
> >       }
> >
> > +    total_frames = xmar.nr_frames;
> 
> On 32-bit, the start_extent would be 26 bits wide, which is not enough to
> cover all of xmar.nr_frames. Therefore, you will want to check that it is
> possible to encode a continuation. Something like:
> 
> /* Is the size too large for us to encode a continuation? */
> if ( unlikely(xmar.nr_frames > (UINT_MAX >> MEMOP_EXTENT_SHIFT)) )
> 
> > +
> > +    if ( *start_extent )
> > +    {
> > +        xmar.frame += *start_extent;
> > +        xmar.nr_frames -= *start_extent;
> 
> As start_extent is exposed to the guest, you want to check that it is not
> bigger than xmar.nr_frames.
> 
> > +        guest_handle_add_offset(xmar.frame_list, *start_extent);
> > +    }
> > +
> >       if ( xmar.nr_frames > ARRAY_SIZE(mfn_list) )
> > -        return -E2BIG;
> > +        xmar.nr_frames = ARRAY_SIZE(mfn_list);
> 
> The documentation of the hypercall suggests that if you pass NULL, then
> it will return the maximum value for nr_frames supported by the
> implementation. So technically a domain cannot use more than
> ARRAY_SIZE(mfn_list).
> 
> However, your new addition conflicts with the documentation. Can you
> clarify how a domain will know that it can use more than
> ARRAY_SIZE(mfn_list)?

The domain should not need to know. It should be told the maximum number of frames of the type it wants. If we have to carve that up into batches inside Xen then the caller should not need to care, right?

> 
> >
> >       rc = rcu_lock_remote_domain_by_id(xmar.domid, &d);
> >       if ( rc )
> > @@ -1135,6 +1146,14 @@ static int acquire_resource(
> >           }
> >       }
> >
> > +    if ( !rc )
> > +    {
> > +        *start_extent += xmar.nr_frames;
> > +
> > +        if ( *start_extent != total_frames )
> > +            rc = -ERESTART;
> > +    }
> > +
> >    out:
> >       rcu_unlock_domain(d);
> >
> > @@ -1600,7 +1619,14 @@ long do_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
> >
> >       case XENMEM_acquire_resource:
> >           rc = acquire_resource(
> > -            guest_handle_cast(arg, xen_mem_acquire_resource_t));
> > +            guest_handle_cast(arg, xen_mem_acquire_resource_t),
> > +            &start_extent);
> 
> Hmmm... it looks like we forgot to check that start_extent is always 0
> when the hypercall was added.
> 
> As this is exposed to the guest, it technically means that there is no
> guarantee that start_extent will always be 0.
> 

I don't follow. A start extent != 0 means you are in a continuation. How can you check for 0 without breaking continuations?

  Paul

> However, in practice, this was likely the intention and should be the
> case. So it may just be enough to mention the potential breakage in the
> commit message.
> 
> @All: what do you think?
> 
> > +
> > +        if ( rc == -ERESTART )
> > +            return hypercall_create_continuation(
> > +                __HYPERVISOR_memory_op, "lh",
> > +                op | (start_extent << MEMOP_EXTENT_SHIFT), arg);
> > +
> >           break;
> >
> >       default:
> >
> 
> Cheers,
> 
> --
> Julien Grall



From xen-devel-bounces@lists.xenproject.org Fri Jul 03 11:18:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jul 2020 11:18:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jrJhj-0000Do-Q9; Fri, 03 Jul 2020 11:18:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Kpvw=AO=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jrJhj-0000Dj-4A
 for xen-devel@lists.xenproject.org; Fri, 03 Jul 2020 11:18:03 +0000
X-Inumbo-ID: db9e941a-bd1e-11ea-bca7-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id db9e941a-bd1e-11ea-bca7-bc764e2007e4;
 Fri, 03 Jul 2020 11:18:02 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=1c1xJrwzerN6Sa2lta3v/PwEKe/38t9Yg7tP9UloAQA=; b=V77AVq8BNJ2rJgdcoMb+otNEx1
 0DlYTdCyzZQLXx+D4fVOBB8H4QiL+SHcZAGsaaqWzXCgcrNcA7jrOqepBLobnPTeqhGFQrYoYKSNJ
 gQnhZ5G/iLYp57ayAoMHF33KRy9vbB/sHLfzDIC1JW0m62iI2MpwXxsi1oBptXaVVCe4=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jrJhb-0005Jl-88; Fri, 03 Jul 2020 11:17:55 +0000
Received: from 54-240-197-226.amazon.com ([54.240.197.226]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jrJha-0007hz-ME; Fri, 03 Jul 2020 11:17:54 +0000
Subject: Re: [PATCH v4 06/10] memory: batch processing in acquire_resource()
To: paul@xen.org, =?UTF-8?B?J01pY2hhxYIgTGVzemN6ecWEc2tpJw==?=
 <michal.leszczynski@cert.pl>, xen-devel@lists.xenproject.org
References: <cover.1593519420.git.michal.leszczynski@cert.pl>
 <a317b169e3710a481bb4be066d9b878f27b3e66c.1593519420.git.michal.leszczynski@cert.pl>
 <5be6cb58-82d0-0a78-a9b2-5c078b5d3587@xen.org>
 <004901d65128$16a6f330$43f4d990$@xen.org>
From: Julien Grall <julien@xen.org>
Message-ID: <481e8ee7-561a-10d6-4358-7b07a8911ce8@xen.org>
Date: Fri, 3 Jul 2020 12:17:52 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:68.0)
 Gecko/20100101 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <004901d65128$16a6f330$43f4d990$@xen.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: 'Stefano Stabellini' <sstabellini@kernel.org>, tamas.lengyel@intel.com,
 'Wei Liu' <wl@xen.org>, 'Andrew Cooper' <andrew.cooper3@citrix.com>,
 'Ian Jackson' <ian.jackson@eu.citrix.com>,
 'George Dunlap' <george.dunlap@citrix.com>, 'Jan Beulich' <jbeulich@suse.com>,
 luwei.kang@intel.com
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi,

On 03/07/2020 11:52, Paul Durrant wrote:
>> -----Original Message-----
>> From: Julien Grall <julien@xen.org>
>> Sent: 03 July 2020 11:36
>> To: Michał Leszczyński <michal.leszczynski@cert.pl>; xen-devel@lists.xenproject.org
>> Cc: luwei.kang@intel.com; tamas.lengyel@intel.com; Andrew Cooper <andrew.cooper3@citrix.com>; George
>> Dunlap <george.dunlap@citrix.com>; Ian Jackson <ian.jackson@eu.citrix.com>; Jan Beulich
>> <jbeulich@suse.com>; Stefano Stabellini <sstabellini@kernel.org>; Wei Liu <wl@xen.org>; paul@xen.org
>> Subject: Re: [PATCH v4 06/10] memory: batch processing in acquire_resource()
>>
>> (+ Paul as the author of XENMEM_acquire_resource)
>>
>> Hi,
>>
>> On 30/06/2020 13:33, Michał Leszczyński wrote:
>>> From: Michal Leszczynski <michal.leszczynski@cert.pl>
>>>
>>> Allow to acquire large resources by allowing acquire_resource()
>>> to process items in batches, using hypercall continuation.
>>>
>>> Signed-off-by: Michal Leszczynski <michal.leszczynski@cert.pl>
>>> ---
>>>    xen/common/memory.c | 32 +++++++++++++++++++++++++++++---
>>>    1 file changed, 29 insertions(+), 3 deletions(-)
>>>
>>> diff --git a/xen/common/memory.c b/xen/common/memory.c
>>> index 714077c1e5..3ab06581a2 100644
>>> --- a/xen/common/memory.c
>>> +++ b/xen/common/memory.c
>>> @@ -1046,10 +1046,12 @@ static int acquire_grant_table(struct domain *d, unsigned int id,
>>>    }
>>>
>>>    static int acquire_resource(
>>> -    XEN_GUEST_HANDLE_PARAM(xen_mem_acquire_resource_t) arg)
>>> +    XEN_GUEST_HANDLE_PARAM(xen_mem_acquire_resource_t) arg,
>>> +    unsigned long *start_extent)
>>>    {
>>>        struct domain *d, *currd = current->domain;
>>>        xen_mem_acquire_resource_t xmar;
>>> +    uint32_t total_frames;
>>>        /*
>>>         * The mfn_list and gfn_list (below) arrays are ok on stack for the
>>>         * moment since they are small, but if they need to grow in future
>>> @@ -1077,8 +1079,17 @@ static int acquire_resource(
>>>            return 0;
>>>        }
>>>
>>> +    total_frames = xmar.nr_frames;
>>
>> On 32-bit, the start_extent would be 26 bits wide, which is not enough to
>> cover all of xmar.nr_frames. Therefore, you will want to check that it is
>> possible to encode a continuation. Something like:
>>
>> /* Is the size too large for us to encode a continuation? */
>> if ( unlikely(xmar.nr_frames > (UINT_MAX >> MEMOP_EXTENT_SHIFT)) )
>>
>>> +
>>> +    if ( *start_extent )
>>> +    {
>>> +        xmar.frame += *start_extent;
>>> +        xmar.nr_frames -= *start_extent;
>>
>> As start_extent is exposed to the guest, you want to check that it is not
>> bigger than xmar.nr_frames.
>>
>>> +        guest_handle_add_offset(xmar.frame_list, *start_extent);
>>> +    }
>>> +
>>>        if ( xmar.nr_frames > ARRAY_SIZE(mfn_list) )
>>> -        return -E2BIG;
>>> +        xmar.nr_frames = ARRAY_SIZE(mfn_list);
>>
>> The documentation of the hypercall suggests that if you pass NULL, then
>> it will return the maximum value for nr_frames supported by the
>> implementation. So technically a domain cannot use more than
>> ARRAY_SIZE(mfn_list).
>>
>> However, your new addition conflicts with the documentation. Can you
>> clarify how a domain will know that it can use more than
>> ARRAY_SIZE(mfn_list)?
> 
> The domain should not need to know. It should be told the maximum number of frames of the type it wants. If we have to carve that up into batches inside Xen then the caller should not need to care, right?

In the current implementation, we tell the guest how many frames it can 
request in a batch. This number may be much smaller than the maximum 
number of frames of the type.

Furthermore, this value is not tied to the xmar.type. Therefore, it is 
valid for a guest to call this hypercall only once at boot to figure out 
the maximum batch.

So while the change you suggest looks like a good idea, I don't think it 
is possible to do that with the current hypercall.

> 
>>
>>>
>>>        rc = rcu_lock_remote_domain_by_id(xmar.domid, &d);
>>>        if ( rc )
>>> @@ -1135,6 +1146,14 @@ static int acquire_resource(
>>>            }
>>>        }
>>>
>>> +    if ( !rc )
>>> +    {
>>> +        *start_extent += xmar.nr_frames;
>>> +
>>> +        if ( *start_extent != total_frames )
>>> +            rc = -ERESTART;
>>> +    }
>>> +
>>>     out:
>>>        rcu_unlock_domain(d);
>>>
>>> @@ -1600,7 +1619,14 @@ long do_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
>>>
>>>        case XENMEM_acquire_resource:
>>>            rc = acquire_resource(
>>> -            guest_handle_cast(arg, xen_mem_acquire_resource_t));
>>> +            guest_handle_cast(arg, xen_mem_acquire_resource_t),
>>> +            &start_extent);
>>
>> Hmmm... it looks like we forgot to check that start_extent is always 0
>> when the hypercall was added.
>>
>> As this is exposed to the guest, it technically means that there is no
>> guarantee that start_extent will always be 0.
>>
> 
> I don't follow. A start extent != 0 means you are in a continuation. How can you check for 0 without breaking continuations?

I think you misunderstood my point. My point is that we never checked that 
start_extent was 0. So a guest could validly pass a non-zero value as 
start_extent and not break on older Xen releases.

Once this patch is merged, such a guest would behave differently. Or 
did I miss any check/documentation for the start_extent value?

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Jul 03 11:23:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jul 2020 11:23:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jrJmV-00012O-D5; Fri, 03 Jul 2020 11:22:59 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5Z/2=AO=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jrJmU-00012H-J6
 for xen-devel@lists.xenproject.org; Fri, 03 Jul 2020 11:22:58 +0000
X-Inumbo-ID: 8b5735c4-bd1f-11ea-bca7-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8b5735c4-bd1f-11ea-bca7-bc764e2007e4;
 Fri, 03 Jul 2020 11:22:57 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 8B748AE65;
 Fri,  3 Jul 2020 11:22:56 +0000 (UTC)
Subject: Re: [PATCH v4 06/10] memory: batch processing in acquire_resource()
To: Julien Grall <julien@xen.org>
References: <cover.1593519420.git.michal.leszczynski@cert.pl>
 <a317b169e3710a481bb4be066d9b878f27b3e66c.1593519420.git.michal.leszczynski@cert.pl>
 <5be6cb58-82d0-0a78-a9b2-5c078b5d3587@xen.org>
 <004901d65128$16a6f330$43f4d990$@xen.org>
 <481e8ee7-561a-10d6-4358-7b07a8911ce8@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <d45edef1-5b15-fdd4-b030-1ffe5c77057d@suse.com>
Date: Fri, 3 Jul 2020 13:22:53 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <481e8ee7-561a-10d6-4358-7b07a8911ce8@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: 'Stefano Stabellini' <sstabellini@kernel.org>, tamas.lengyel@intel.com,
 'Wei Liu' <wl@xen.org>, paul@xen.org,
 'Andrew Cooper' <andrew.cooper3@citrix.com>,
 =?UTF-8?B?J01pY2hhxYIgTGVzemN6ecWEc2tpJw==?= <michal.leszczynski@cert.pl>,
 'Ian Jackson' <ian.jackson@eu.citrix.com>,
 'George Dunlap' <george.dunlap@citrix.com>, luwei.kang@intel.com,
 xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 03.07.2020 13:17, Julien Grall wrote:
> Hi,
> 
> On 03/07/2020 11:52, Paul Durrant wrote:
>>> -----Original Message-----
>>> From: Julien Grall <julien@xen.org>
>>> Sent: 03 July 2020 11:36
>>> To: Michał Leszczyński <michal.leszczynski@cert.pl>; xen-devel@lists.xenproject.org
>>> Cc: luwei.kang@intel.com; tamas.lengyel@intel.com; Andrew Cooper <andrew.cooper3@citrix.com>; George
>>> Dunlap <george.dunlap@citrix.com>; Ian Jackson <ian.jackson@eu.citrix.com>; Jan Beulich
>>> <jbeulich@suse.com>; Stefano Stabellini <sstabellini@kernel.org>; Wei Liu <wl@xen.org>; paul@xen.org
>>> Subject: Re: [PATCH v4 06/10] memory: batch processing in acquire_resource()
>>>
>>> (+ Paul as the author of XENMEM_acquire_resource)
>>>
>>> Hi,
>>>
>>> On 30/06/2020 13:33, Michał Leszczyński wrote:
>>>> From: Michal Leszczynski <michal.leszczynski@cert.pl>
>>>>
>>>> Allow to acquire large resources by allowing acquire_resource()
>>>> to process items in batches, using hypercall continuation.
>>>>
>>>> Signed-off-by: Michal Leszczynski <michal.leszczynski@cert.pl>
>>>> ---
>>>>    xen/common/memory.c | 32 +++++++++++++++++++++++++++++---
>>>>    1 file changed, 29 insertions(+), 3 deletions(-)
>>>>
>>>> diff --git a/xen/common/memory.c b/xen/common/memory.c
>>>> index 714077c1e5..3ab06581a2 100644
>>>> --- a/xen/common/memory.c
>>>> +++ b/xen/common/memory.c
>>>> @@ -1046,10 +1046,12 @@ static int acquire_grant_table(struct domain *d, unsigned int id,
>>>>    }
>>>>
>>>>    static int acquire_resource(
>>>> -    XEN_GUEST_HANDLE_PARAM(xen_mem_acquire_resource_t) arg)
>>>> +    XEN_GUEST_HANDLE_PARAM(xen_mem_acquire_resource_t) arg,
>>>> +    unsigned long *start_extent)
>>>>    {
>>>>        struct domain *d, *currd = current->domain;
>>>>        xen_mem_acquire_resource_t xmar;
>>>> +    uint32_t total_frames;
>>>>        /*
>>>>         * The mfn_list and gfn_list (below) arrays are ok on stack for the
>>>>         * moment since they are small, but if they need to grow in future
>>>> @@ -1077,8 +1079,17 @@ static int acquire_resource(
>>>>            return 0;
>>>>        }
>>>>
>>>> +    total_frames = xmar.nr_frames;
>>>
>>> On 32-bit, the start_extent would be 26 bits wide, which is not enough to
>>> cover all of xmar.nr_frames. Therefore, you want to check that it is
>>> possible to encode a continuation. Something like:
>>>
>>> /* Is the size too large for us to encode a continuation? */
>>> if ( unlikely(xmar.nr_frames > (UINT_MAX >> MEMOP_EXTENT_SHIFT)) )
>>>
>>>> +
>>>> +    if ( *start_extent )
>>>> +    {
>>>> +        xmar.frame += *start_extent;
>>>> +        xmar.nr_frames -= *start_extent;
>>>
>>> As start_extent is exposed to the guest, you want to check if it is not
>>> bigger than xmar.nr_frames.
>>>
>>>> +        guest_handle_add_offset(xmar.frame_list, *start_extent);
>>>> +    }
>>>> +
>>>>        if ( xmar.nr_frames > ARRAY_SIZE(mfn_list) )
>>>> -        return -E2BIG;
>>>> +        xmar.nr_frames = ARRAY_SIZE(mfn_list);
>>>
>>> The documentation of the hypercall suggests that if you pass NULL, then
>>> it will return the maximum value for nr_frames supported by the
>>> implementation. So technically a domain cannot use more than
>>> ARRAY_SIZE(mfn_list).
>>>
>>> However, your new addition conflicts with the documentation. Can you
>>> clarify how a domain will know that it can use more than
>>> ARRAY_SIZE(mfn_list)?
>>
>> The domain should not need to know. It should be told the maximum number of frames of the type it wants. If we have to carve that up into batches inside Xen then the caller should not need to care, right?
> 
> In the current implementation, we tell the guest how many frames it can 
> request in a batch. This number may be much smaller than the maximum 
> number of frames of the type.
> 
> Furthermore this value is not tied to xmar.type. Therefore, it is 
> valid for a guest to call this hypercall only once at boot to figure out 
> the maximum batch.
> 
> So while the change you suggest looks like a good idea, I don't think it is 
> possible to do that with the current hypercall.

Doesn't the limit simply change to UINT_MAX >> MEMOP_EXTENT_SHIFT,
which then is what should be reported?

>>>> @@ -1135,6 +1146,14 @@ static int acquire_resource(
>>>>            }
>>>>        }
>>>>
>>>> +    if ( !rc )
>>>> +    {
>>>> +        *start_extent += xmar.nr_frames;
>>>> +
>>>> +        if ( *start_extent != total_frames )
>>>> +            rc = -ERESTART;
>>>> +    }
>>>> +
>>>>     out:
>>>>        rcu_unlock_domain(d);
>>>>
>>>> @@ -1600,7 +1619,14 @@ long do_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
>>>>
>>>>        case XENMEM_acquire_resource:
>>>>            rc = acquire_resource(
>>>> -            guest_handle_cast(arg, xen_mem_acquire_resource_t));
>>>> +            guest_handle_cast(arg, xen_mem_acquire_resource_t),
>>>> +            &start_extent);
>>>
>>> Hmmm... it looks like we forgot to check that start_extent is always 0
>>> when the hypercall was added.
>>>
>>> As this is exposed to the guest, it technically means that there is no
>>> guarantee that start_extent will always be 0.
>>>
>>
>> I don't follow. A start extent != 0 means you are in a continuation. How can you check for 0 without breaking continuations?
> 
> I think you misunderstood my point. My point is that we never checked that 
> start_extent was 0. So a guest could validly pass a non-zero value for 
> start_extent and not break on older Xen releases.
> 
> When this patch is merged, such a guest would behave differently. Or 
> did I miss any check/documentation for the start_extent value?

I think we may have done the same in the past already when enabling
sub-ops for use of continuations. A guest specifying a non-zero
start_extent itself is effectively a request for an undefined sub-op.
With, as a result, undefined behavior.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Jul 03 11:36:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jul 2020 11:36:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jrJzm-0001xg-Jm; Fri, 03 Jul 2020 11:36:42 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Kpvw=AO=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jrJzk-0001xb-N3
 for xen-devel@lists.xenproject.org; Fri, 03 Jul 2020 11:36:40 +0000
X-Inumbo-ID: 756734a6-bd21-11ea-897e-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 756734a6-bd21-11ea-897e-12813bfff9fa;
 Fri, 03 Jul 2020 11:36:39 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=zjrIqnqhC+azfSMglhiBjnS4BnavmmFMmaNMMdzsvvs=; b=YPQ5KDQ69BsXGGfPxTETlCWK1L
 0YnMvQIxceEXRJHPRCi9zXjZx7l9cBFlSQnaC+jsZJxrKg1fni1D0OB8U3ch/r7RPOJFxXORNJTkd
 ZT2jqIX5VBn/YeXgUBwDPMLWDwUvah99FF6vU6C/SZ6iWV20CQUTStUL3otHfZ41j5bU=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jrJze-0005f1-CU; Fri, 03 Jul 2020 11:36:34 +0000
Received: from 54-240-197-226.amazon.com ([54.240.197.226]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jrJzd-00006o-3S; Fri, 03 Jul 2020 11:36:33 +0000
Subject: Re: [PATCH v4 06/10] memory: batch processing in acquire_resource()
To: Jan Beulich <jbeulich@suse.com>
References: <cover.1593519420.git.michal.leszczynski@cert.pl>
 <a317b169e3710a481bb4be066d9b878f27b3e66c.1593519420.git.michal.leszczynski@cert.pl>
 <5be6cb58-82d0-0a78-a9b2-5c078b5d3587@xen.org>
 <004901d65128$16a6f330$43f4d990$@xen.org>
 <481e8ee7-561a-10d6-4358-7b07a8911ce8@xen.org>
 <d45edef1-5b15-fdd4-b030-1ffe5c77057d@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <cec1bfc7-694c-ae40-3fcd-ed0829295893@xen.org>
Date: Fri, 3 Jul 2020 12:36:29 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:68.0)
 Gecko/20100101 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <d45edef1-5b15-fdd4-b030-1ffe5c77057d@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: 'Stefano Stabellini' <sstabellini@kernel.org>, tamas.lengyel@intel.com,
 'Wei Liu' <wl@xen.org>, paul@xen.org,
 'Andrew Cooper' <andrew.cooper3@citrix.com>,
 =?UTF-8?B?J01pY2hhxYIgTGVzemN6ecWEc2tpJw==?= <michal.leszczynski@cert.pl>,
 'Ian Jackson' <ian.jackson@eu.citrix.com>,
 'George Dunlap' <george.dunlap@citrix.com>, luwei.kang@intel.com,
 xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi,

On 03/07/2020 12:22, Jan Beulich wrote:
> On 03.07.2020 13:17, Julien Grall wrote:
>> On 03/07/2020 11:52, Paul Durrant wrote:
>>>> -----Original Message-----
>>>> From: Julien Grall <julien@xen.org>
>>>> Sent: 03 July 2020 11:36
>>>> To: Michał Leszczyński <michal.leszczynski@cert.pl>; xen-devel@lists.xenproject.org
>>>> Cc: luwei.kang@intel.com; tamas.lengyel@intel.com; Andrew Cooper <andrew.cooper3@citrix.com>; George
>>>> Dunlap <george.dunlap@citrix.com>; Ian Jackson <ian.jackson@eu.citrix.com>; Jan Beulich
>>>> <jbeulich@suse.com>; Stefano Stabellini <sstabellini@kernel.org>; Wei Liu <wl@xen.org>; paul@xen.org
>>>> Subject: Re: [PATCH v4 06/10] memory: batch processing in acquire_resource()
>>>>
>>>> (+ Paul as the author of XENMEM_acquire_resource)
>>>>
>>>> Hi,
>>>>
>>>> On 30/06/2020 13:33, Michał Leszczyński wrote:
>>>>> From: Michal Leszczynski <michal.leszczynski@cert.pl>
>>>>>
>>>>> Allow to acquire large resources by allowing acquire_resource()
>>>>> to process items in batches, using hypercall continuation.
>>>>>
>>>>> Signed-off-by: Michal Leszczynski <michal.leszczynski@cert.pl>
>>>>> ---
>>>>>     xen/common/memory.c | 32 +++++++++++++++++++++++++++++---
>>>>>     1 file changed, 29 insertions(+), 3 deletions(-)
>>>>>
>>>>> diff --git a/xen/common/memory.c b/xen/common/memory.c
>>>>> index 714077c1e5..3ab06581a2 100644
>>>>> --- a/xen/common/memory.c
>>>>> +++ b/xen/common/memory.c
>>>>> @@ -1046,10 +1046,12 @@ static int acquire_grant_table(struct domain *d, unsigned int id,
>>>>>     }
>>>>>
>>>>>     static int acquire_resource(
>>>>> -    XEN_GUEST_HANDLE_PARAM(xen_mem_acquire_resource_t) arg)
>>>>> +    XEN_GUEST_HANDLE_PARAM(xen_mem_acquire_resource_t) arg,
>>>>> +    unsigned long *start_extent)
>>>>>     {
>>>>>         struct domain *d, *currd = current->domain;
>>>>>         xen_mem_acquire_resource_t xmar;
>>>>> +    uint32_t total_frames;
>>>>>         /*
>>>>>          * The mfn_list and gfn_list (below) arrays are ok on stack for the
>>>>>          * moment since they are small, but if they need to grow in future
>>>>> @@ -1077,8 +1079,17 @@ static int acquire_resource(
>>>>>             return 0;
>>>>>         }
>>>>>
>>>>> +    total_frames = xmar.nr_frames;
>>>>
>>>> On 32-bit, the start_extent would be 26 bits wide, which is not enough to
>>>> cover all of xmar.nr_frames. Therefore, you want to check that it is
>>>> possible to encode a continuation. Something like:
>>>>
>>>> /* Is the size too large for us to encode a continuation? */
>>>> if ( unlikely(xmar.nr_frames > (UINT_MAX >> MEMOP_EXTENT_SHIFT)) )
>>>>
>>>>> +
>>>>> +    if ( *start_extent )
>>>>> +    {
>>>>> +        xmar.frame += *start_extent;
>>>>> +        xmar.nr_frames -= *start_extent;
>>>>
>>>> As start_extent is exposed to the guest, you want to check if it is not
>>>> bigger than xmar.nr_frames.
>>>>
>>>>> +        guest_handle_add_offset(xmar.frame_list, *start_extent);
>>>>> +    }
>>>>> +
>>>>>         if ( xmar.nr_frames > ARRAY_SIZE(mfn_list) )
>>>>> -        return -E2BIG;
>>>>> +        xmar.nr_frames = ARRAY_SIZE(mfn_list);
>>>>
>>>> The documentation of the hypercall suggests that if you pass NULL, then
>>>> it will return the maximum value for nr_frames supported by the
>>>> implementation. So technically a domain cannot use more than
>>>> ARRAY_SIZE(mfn_list).
>>>>
>>>> However, your new addition conflicts with the documentation. Can you
>>>> clarify how a domain will know that it can use more than
>>>> ARRAY_SIZE(mfn_list)?
>>>
>>> The domain should not need to know. It should be told the maximum number of frames of the type it wants. If we have to carve that up into batches inside Xen then the caller should not need to care, right?
>>
>> In the current implementation, we tell the guest how many frames it can
>> request in a batch. This number may be much smaller than the maximum
>> number of frames of the type.
>>
>> Furthermore this value is not tied to xmar.type. Therefore, it is
>> valid for a guest to call this hypercall only once at boot to figure out
>> the maximum batch.
>>
>> So while the change you suggest looks like a good idea, I don't think it is
>> possible to do that with the current hypercall.
> 
> Doesn't the limit simply change to UINT_MAX >> MEMOP_EXTENT_SHIFT,
> which then is what should be reported?

Hmmm... Can you remind me whether we support migration to an older release?

But it may still not be a concern, as this can only be used by Dom0 or a 
PV domain targeting another domain.

> 
>>>>> @@ -1135,6 +1146,14 @@ static int acquire_resource(
>>>>>             }
>>>>>         }
>>>>>
>>>>> +    if ( !rc )
>>>>> +    {
>>>>> +        *start_extent += xmar.nr_frames;
>>>>> +
>>>>> +        if ( *start_extent != total_frames )
>>>>> +            rc = -ERESTART;
>>>>> +    }
>>>>> +
>>>>>      out:
>>>>>         rcu_unlock_domain(d);
>>>>>
>>>>> @@ -1600,7 +1619,14 @@ long do_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
>>>>>
>>>>>         case XENMEM_acquire_resource:
>>>>>             rc = acquire_resource(
>>>>> -            guest_handle_cast(arg, xen_mem_acquire_resource_t));
>>>>> +            guest_handle_cast(arg, xen_mem_acquire_resource_t),
>>>>> +            &start_extent);
>>>>
>>>> Hmmm... it looks like we forgot to check that start_extent is always 0
>>>> when the hypercall was added.
>>>>
>>>> As this is exposed to the guest, it technically means that there is no
>>>> guarantee that start_extent will always be 0.
>>>>
>>>
>>> I don't follow. A start extent != 0 means you are in a continuation. How can you check for 0 without breaking continuations?
>>
>> I think you misunderstood my point. My point is that we never checked that
>> start_extent was 0. So a guest could validly pass a non-zero value for
>> start_extent and not break on older Xen releases.
>>
>> When this patch is merged, such a guest would behave differently. Or
>> did I miss any check/documentation for the start_extent value?
> 
> I think we may have done the same in the past already when enabling
> sub-ops for use of continuations. A guest specifying a non-zero
> start_extent itself is effectively a request for an undefined sub-op.
> With, as a result, undefined behavior.
Ok. So just mentioning the change in the commit message should be fine then.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Jul 03 11:40:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jul 2020 11:40:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jrK34-0002kM-4S; Fri, 03 Jul 2020 11:40:06 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=bw0N=AO=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1jrK33-0002cF-2T
 for xen-devel@lists.xenproject.org; Fri, 03 Jul 2020 11:40:05 +0000
X-Inumbo-ID: ef762cde-bd21-11ea-8496-bc764e2007e4
Received: from mail-wr1-x441.google.com (unknown [2a00:1450:4864:20::441])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ef762cde-bd21-11ea-8496-bc764e2007e4;
 Fri, 03 Jul 2020 11:40:04 +0000 (UTC)
Received: by mail-wr1-x441.google.com with SMTP id r12so32248965wrj.13
 for <xen-devel@lists.xenproject.org>; Fri, 03 Jul 2020 04:40:04 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
 :mime-version:content-transfer-encoding:thread-index
 :content-language;
 bh=ljcIZ6Vuoj5BaNfMxfLHzi0gzvCqFiRu4i4B6UXaF44=;
 b=khNDcJIvTe5mlrs8ldRfBcIoXNzNB9pG9mpWl1pBhlyY4vf3HZMAy9rrF1btR9X40N
 S0o7c3jNQ6wzLKq0fEqrJIpEGYVRY6e4CcQuIeZBdfXb569SVOMyV/yLO0RoPiYnIsU4
 2vtAJIZ6O+5ym1fEBzeGAIFnlq5T43FcX4sYTDgnjXPkPNfBEjdrlWVPnFBNwMWNWuDC
 6H8vSDriNggnuSzDDrq9qPxrIED1lPve3lE4V+xzxMafYcDZzv9dm8uvHpF470YQKFhn
 c1UHjKyQRsJKxDibTW1PsDcVvbIQZ0FS60DmdODMdqtakw0ZF5A1tk+Bf3V9KMY6v0uQ
 rIjg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
 :subject:date:message-id:mime-version:content-transfer-encoding
 :thread-index:content-language;
 bh=ljcIZ6Vuoj5BaNfMxfLHzi0gzvCqFiRu4i4B6UXaF44=;
 b=fRTVmYlCIqNfOJGNDEKhCMlhg0bEHG0CRgYgQdrUnCZt2mZ0t/GwXGoHbkpXYsGf0J
 fiKJ92hwI3D6ShNdIeXPVA8tfrJ1QhhEJ1juUUCsWy4BbdLuHUBanqvHj+G6QO3+XyU6
 FGw+MP9fVg9hoEGgvNtWXEFLBk8q9rD9dRhGEC9MRGxpPJbmbV9b1oZncauNEnCpUJ/c
 yy5FtTH5Ae7ZXg74NDGwrlVCKYxH6ErwhxXzTcgItZun8PtU0u+1pzyhitheCGv/OReq
 G1ZF6ivuvtTo07mtmuz/ljwztUzVeCr6TIuNQtGtVWj7SRz52c1QLOA2I2qDyAjEdYyT
 yY5Q==
X-Gm-Message-State: AOAM530Gc7v7/TpFP/xHsZxsb1LA0DHx+ynRev5g1CkQ3pgnkuUoUlwI
 8AWzYoXehe4GCh5fWLsBG7M=
X-Google-Smtp-Source: ABdhPJwr9/tovbfxVLzPHiQ876c1htNcIIcr9lKS4drt4oK8MKec9gW77p5JnK73PDcFFQOCclI7zw==
X-Received: by 2002:adf:e482:: with SMTP id i2mr35793062wrm.75.1593776403390; 
 Fri, 03 Jul 2020 04:40:03 -0700 (PDT)
Received: from CBGR90WXYV0 (54-240-197-235.amazon.com. [54.240.197.235])
 by smtp.gmail.com with ESMTPSA id q7sm14638109wrs.27.2020.07.03.04.40.02
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Fri, 03 Jul 2020 04:40:02 -0700 (PDT)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
To: "'Julien Grall'" <julien@xen.org>,
 =?utf-8?Q?'Micha=C5=82_Leszczy=C5=84ski'?= <michal.leszczynski@cert.pl>,
 <xen-devel@lists.xenproject.org>
References: <cover.1593519420.git.michal.leszczynski@cert.pl>
 <a317b169e3710a481bb4be066d9b878f27b3e66c.1593519420.git.michal.leszczynski@cert.pl>
 <5be6cb58-82d0-0a78-a9b2-5c078b5d3587@xen.org>
 <004901d65128$16a6f330$43f4d990$@xen.org>
 <481e8ee7-561a-10d6-4358-7b07a8911ce8@xen.org>
In-Reply-To: <481e8ee7-561a-10d6-4358-7b07a8911ce8@xen.org>
Subject: RE: [PATCH v4 06/10] memory: batch processing in acquire_resource()
Date: Fri, 3 Jul 2020 12:40:01 +0100
Message-ID: <004a01d6512e$b0b5fab0$1221f010$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="utf-8"
Content-Transfer-Encoding: quoted-printable
X-Mailer: Microsoft Outlook 16.0
Thread-Index: AQFzyG1KjeOu8uNiQeFGxRZBsBXsYgGYFxRdAaoW8QMB0iLMpgFYu/uhqYdjH6A=
Content-Language: en-gb
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Reply-To: paul@xen.org
Cc: 'Stefano Stabellini' <sstabellini@kernel.org>, tamas.lengyel@intel.com,
 'Wei Liu' <wl@xen.org>, 'Andrew Cooper' <andrew.cooper3@citrix.com>,
 'Ian Jackson' <ian.jackson@eu.citrix.com>,
 'George Dunlap' <george.dunlap@citrix.com>, 'Jan Beulich' <jbeulich@suse.com>,
 luwei.kang@intel.com
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> -----Original Message-----
> From: Julien Grall <julien@xen.org>
> Sent: 03 July 2020 12:18
> To: paul@xen.org; 'Michał Leszczyński' <michal.leszczynski@cert.pl>; xen-devel@lists.xenproject.org
> Cc: luwei.kang@intel.com; tamas.lengyel@intel.com; 'Andrew Cooper' <andrew.cooper3@citrix.com>;
> 'George Dunlap' <george.dunlap@citrix.com>; 'Ian Jackson' <ian.jackson@eu.citrix.com>; 'Jan Beulich'
> <jbeulich@suse.com>; 'Stefano Stabellini' <sstabellini@kernel.org>; 'Wei Liu' <wl@xen.org>
> Subject: Re: [PATCH v4 06/10] memory: batch processing in acquire_resource()
> 
> Hi,
> 
> On 03/07/2020 11:52, Paul Durrant wrote:
> >> -----Original Message-----
> >> From: Julien Grall <julien@xen.org>
> >> Sent: 03 July 2020 11:36
> >> To: Michał Leszczyński <michal.leszczynski@cert.pl>; xen-devel@lists.xenproject.org
> >> Cc: luwei.kang@intel.com; tamas.lengyel@intel.com; Andrew Cooper <andrew.cooper3@citrix.com>;
> George
> >> Dunlap <george.dunlap@citrix.com>; Ian Jackson <ian.jackson@eu.citrix.com>; Jan Beulich
> >> <jbeulich@suse.com>; Stefano Stabellini <sstabellini@kernel.org>; Wei Liu <wl@xen.org>;
> paul@xen.org
> >> Subject: Re: [PATCH v4 06/10] memory: batch processing in acquire_resource()
> >>
> >> (+ Paul as the author of XENMEM_acquire_resource)
> >>
> >> Hi,
> >>
> >> On 30/06/2020 13:33, Michał Leszczyński wrote:
> >>> From: Michal Leszczynski <michal.leszczynski@cert.pl>
> >>>
> >>> Allow to acquire large resources by allowing acquire_resource()
> >>> to process items in batches, using hypercall continuation.
> >>>
> >>> Signed-off-by: Michal Leszczynski <michal.leszczynski@cert.pl>
> >>> ---
> >>>    xen/common/memory.c | 32 +++++++++++++++++++++++++++++---
> >>>    1 file changed, 29 insertions(+), 3 deletions(-)
> >>>
> >>> diff --git a/xen/common/memory.c b/xen/common/memory.c
> >>> index 714077c1e5..3ab06581a2 100644
> >>> --- a/xen/common/memory.c
> >>> +++ b/xen/common/memory.c
> >>> @@ -1046,10 +1046,12 @@ static int acquire_grant_table(struct domain *d, unsigned int id,
> >>>    }
> >>>
> >>>    static int acquire_resource(
> >>> -    XEN_GUEST_HANDLE_PARAM(xen_mem_acquire_resource_t) arg)
> >>> +    XEN_GUEST_HANDLE_PARAM(xen_mem_acquire_resource_t) arg,
> >>> +    unsigned long *start_extent)
> >>>    {
> >>>        struct domain *d, *currd = current->domain;
> >>>        xen_mem_acquire_resource_t xmar;
> >>> +    uint32_t total_frames;
> >>>        /*
> >>>         * The mfn_list and gfn_list (below) arrays are ok on stack for the
> >>>         * moment since they are small, but if they need to grow in future
> >>> @@ -1077,8 +1079,17 @@ static int acquire_resource(
> >>>            return 0;
> >>>        }
> >>>
> >>> +    total_frames = xmar.nr_frames;
> >>
> >> On 32-bit, the start_extent would be 26 bits wide, which is not enough to
> >> cover all of xmar.nr_frames. Therefore, you want to check that it is
> >> possible to encode a continuation. Something like:
> >>
> >> /* Is the size too large for us to encode a continuation? */
> >> if ( unlikely(xmar.nr_frames > (UINT_MAX >> MEMOP_EXTENT_SHIFT)) )
> >>
> >>> +
> >>> +    if ( *start_extent )
> >>> +    {
> >>> +        xmar.frame += *start_extent;
> >>> +        xmar.nr_frames -= *start_extent;
> >>
> >> As start_extent is exposed to the guest, you want to check if it is not
> >> bigger than xmar.nr_frames.
> >>
> >>> +        guest_handle_add_offset(xmar.frame_list, *start_extent);
> >>> +    }
> >>> +
> >>>        if ( xmar.nr_frames > ARRAY_SIZE(mfn_list) )
> >>> -        return -E2BIG;
> >>> +        xmar.nr_frames = ARRAY_SIZE(mfn_list);
> >>
> >> The documentation of the hypercall suggests that if you pass NULL, then
> >> it will return the maximum value for nr_frames supported by the
> >> implementation. So technically a domain cannot use more than
> >> ARRAY_SIZE(mfn_list).
> >>
> >> However, your new addition conflicts with the documentation. Can you
> >> clarify how a domain will know that it can use more than
> >> ARRAY_SIZE(mfn_list)?
> >
> > The domain should not need to know. It should be told the maximum number of frames of the type it
> wants. If we have to carve that up into batches inside Xen then the caller should not need to care,
> right?
> 
> In the current implementation, we tell the guest how many frames it can
> request in a batch. This number may be much smaller than the maximum
> number of frames of the type.
> 
> Furthermore this value is not tied to xmar.type. Therefore, it is
> valid for a guest to call this hypercall only once at boot to figure out
> the maximum batch.
> 
> So while the change you suggest looks like a good idea, I don't think it is
> possible to do that with the current hypercall.
> 

Oh, I was clearly misremembering what the semantic was; I thought it was the implementation max for the given type, but indeed we do just return the array size, so we expect the caller to know the individual resource type limitations.

So, as Jan says, passing back UINT_MAX >> MEMOP_EXTENT_SHIFT seems to be what we need.

  Paul



From xen-devel-bounces@lists.xenproject.org Fri Jul 03 12:51:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jul 2020 12:51:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jrL9i-0008SD-0l; Fri, 03 Jul 2020 12:51:02 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5Z/2=AO=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jrL9h-0008S8-11
 for xen-devel@lists.xenproject.org; Fri, 03 Jul 2020 12:51:01 +0000
X-Inumbo-ID: d4e7e0d8-bd2b-11ea-89a0-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d4e7e0d8-bd2b-11ea-89a0-12813bfff9fa;
 Fri, 03 Jul 2020 12:50:54 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id E7D37ACA0;
 Fri,  3 Jul 2020 12:50:53 +0000 (UTC)
Subject: Re: [PATCH v4 06/10] memory: batch processing in acquire_resource()
To: Julien Grall <julien@xen.org>
References: <cover.1593519420.git.michal.leszczynski@cert.pl>
 <a317b169e3710a481bb4be066d9b878f27b3e66c.1593519420.git.michal.leszczynski@cert.pl>
 <5be6cb58-82d0-0a78-a9b2-5c078b5d3587@xen.org>
 <004901d65128$16a6f330$43f4d990$@xen.org>
 <481e8ee7-561a-10d6-4358-7b07a8911ce8@xen.org>
 <d45edef1-5b15-fdd4-b030-1ffe5c77057d@suse.com>
 <cec1bfc7-694c-ae40-3fcd-ed0829295893@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <4fbc0a79-052e-0596-ca31-ec4902dddc85@suse.com>
Date: Fri, 3 Jul 2020 14:50:56 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <cec1bfc7-694c-ae40-3fcd-ed0829295893@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: 'Stefano Stabellini' <sstabellini@kernel.org>, tamas.lengyel@intel.com,
 'Wei Liu' <wl@xen.org>, paul@xen.org,
 'Andrew Cooper' <andrew.cooper3@citrix.com>,
 =?UTF-8?B?J01pY2hhxYIgTGVzemN6ecWEc2tpJw==?= <michal.leszczynski@cert.pl>,
 'Ian Jackson' <ian.jackson@eu.citrix.com>,
 'George Dunlap' <george.dunlap@citrix.com>, luwei.kang@intel.com,
 xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 03.07.2020 13:36, Julien Grall wrote:
> On 03/07/2020 12:22, Jan Beulich wrote:
>> On 03.07.2020 13:17, Julien Grall wrote:
>>> In the current implementation, we tell the guest how many frames it can
>>> request in a batch. This number may be much smaller than the maximum
>>> number of frames of the type.
>>>
>>> Furthermore this value is not tied to xmar.type. Therefore, it is
>>> valid for a guest to call this hypercall only once at boot to figure out
>>> the maximum batch.
>>>
>>> So while the change you suggest looks like a good idea, I don't think it is
>>> possible to do that with the current hypercall.
>>
>> Doesn't the limit simply change to UINT_MAX >> MEMOP_EXTENT_SHIFT,
>> which then is what should be reported?
> 
> Hmmm... Can you remind me whether we support migration to an older release?

I'm pretty sure we say "N -> N+1 only" somewhere, but this "somewhere"
clearly isn't SUPPORT.md.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Jul 03 13:23:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jul 2020 13:23:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jrLew-0002Yh-MF; Fri, 03 Jul 2020 13:23:18 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=bw0N=AO=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1jrLew-0002Yc-2O
 for xen-devel@lists.xenproject.org; Fri, 03 Jul 2020 13:23:18 +0000
X-Inumbo-ID: 5ae0c430-bd30-11ea-8496-bc764e2007e4
Received: from mail-wr1-x429.google.com (unknown [2a00:1450:4864:20::429])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5ae0c430-bd30-11ea-8496-bc764e2007e4;
 Fri, 03 Jul 2020 13:23:17 +0000 (UTC)
Received: by mail-wr1-x429.google.com with SMTP id r12so32602911wrj.13
 for <xen-devel@lists.xenproject.org>; Fri, 03 Jul 2020 06:23:17 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
 :mime-version:content-transfer-encoding:thread-index
 :content-language;
 bh=R34phvw8FAUR+j0p3tZaKBuCtRVikcNPvGj4M9UukwY=;
 b=H2gCvH7IQfFnkWGMU8Ip4GsTCXDs2TWlJmoJBKNYLOyhRe0UWzGUyjbdKqnp2fIz1G
 AUDaRaVmBca+xn/g3RhTwft+XatzyRCb2FXULb8rPSiP7o0LFZzq6ptWQqFF6KxixIPJ
 Oj512l4H8RVZ80G+YVxrmYooNmeRwKi26XqftfgzBCxQUvUUzuvzKQGg/+2n5SDcuFO8
 5o8AQcI3+/du07Sm7VD57frVZNKopxqJozCwrZScVRvHRVIVguLRL7G4NB0B+0XpjPmh
 tKrnoqi7YARmnGz8aH0QFfB/p14u33AQfRMV6ZtZhg80753vlMWEkukaY8ZyqWokc/Es
 lFoA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
 :subject:date:message-id:mime-version:content-transfer-encoding
 :thread-index:content-language;
 bh=R34phvw8FAUR+j0p3tZaKBuCtRVikcNPvGj4M9UukwY=;
 b=QgHMbb+rtRa4+p4dTXeq+c/Dtkwrnmn6fV3kql+koc0u7Pd28Uk0BBdrcJ7aeHyJHf
 Hm7aXRwAox7vOO6cmOsZ2Tt6B/BvHfNc3qGDLa/odJzarE+IK1l19GrfFYZW1r4vl2PA
 JBygSEocXi0sQaPQEcD5XdyXH/VOpGYAWPTz0UIY6SMePZXXueprk5KpCcOstfmfHvgY
 MIq1UOljPJ/n48nu3GRWXiUuOjEl7xy+9dFpvGHIi0TvD4lVHqyBJUS2uGXOEHq42Ny8
 IYo195Ua4C8CmAcpuDy/AHxJ/K+r0xUfTmT0SiDko4LkZSOdaToHzhqWkKxfLjyEdtHe
 FdLw==
X-Gm-Message-State: AOAM531/F42JQhnX37ByM0QweEUJrgHr5N4XwZyqlz80bMeIKM/ABjZk
 AbLcaRIvB3ZyU6gYjpVmEYsILAqeGy0=
X-Google-Smtp-Source: ABdhPJxwi3bAGa7QuBXtqqnhRW0933s/w4ms1+YzZzc469QEya5qFktYTxqbH4mUC7C+vT0WTBRBQg==
X-Received: by 2002:a5d:5270:: with SMTP id l16mr36553841wrc.122.1593782596488; 
 Fri, 03 Jul 2020 06:23:16 -0700 (PDT)
Received: from CBGR90WXYV0 (54-240-197-235.amazon.com. [54.240.197.235])
 by smtp.gmail.com with ESMTPSA id o205sm14495680wme.24.2020.07.03.06.23.15
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Fri, 03 Jul 2020 06:23:15 -0700 (PDT)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
To: "'Olaf Hering'" <olaf@aepfle.de>,
 "'Michael Young'" <m.a.young@durham.ac.uk>
References: <alpine.LFD.2.22.394.2006302259370.2894@austen3.home>
 <20200702183806.GA28738@aepfle.de>
In-Reply-To: <20200702183806.GA28738@aepfle.de>
Subject: RE: Build problems in kdd.c with xen-4.14.0-rc4
Date: Fri, 3 Jul 2020 14:23:14 +0100
Message-ID: <005701d6513d$1bea4080$53bec180$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="utf-8"
Content-Transfer-Encoding: 7bit
X-Mailer: Microsoft Outlook 16.0
Thread-Index: AQJDTGpxOkTL6sslt6TCRsF6oIyh1gJDRGtIqAnGWyA=
Content-Language: en-gb
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Reply-To: paul@xen.org
Cc: xen-devel@lists.xenproject.org, 'Tim Deegan' <tim@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> -----Original Message-----
> From: Xen-devel <xen-devel-bounces@lists.xenproject.org> On Behalf Of Olaf Hering
> Sent: 02 July 2020 19:38
> To: Michael Young <m.a.young@durham.ac.uk>
> Cc: xen-devel@lists.xenproject.org; Tim Deegan <tim@xen.org>
> Subject: Re: Build problems in kdd.c with xen-4.14.0-rc4
> 
> On Tue, Jun 30, Michael Young wrote:
> 
> > I get the following errors when trying to build xen-4.14.0-rc4
> 
> This happens to work for me.
> 
> Olaf
> 
> ---
>  tools/debugger/kdd/kdd.c | 8 ++++----
>  1 file changed, 3 insertions(+), 3 deletions(-)
> 
> --- a/tools/debugger/kdd/kdd.c
> +++ b/tools/debugger/kdd/kdd.c
> @@ -742,25 +742,25 @@ static void kdd_tx(kdd_state *s)
>      int i;
> 
>      /* Fix up the checksum before we send */
>      for (i = 0; i < s->txp.h.len; i++)
>          sum += s->txp.payload[i];
>      s->txp.h.sum = sum;
> 
>      kdd_log_pkt(s, "TX", &s->txp);
> 
>      len = s->txp.h.len + sizeof (kdd_hdr);
>      if (s->txp.h.dir == KDD_DIR_PKT)
>          /* Append the mysterious 0xaa byte to each packet */
> -        s->txb[len++] = 0xaa;
> +        s->txp.payload[len++] = 0xaa;

That doesn't look quite right. I think you need [len++ - sizeof(kdd_hdr)] there.

> 
>      (void) blocking_write(s->fd, s->txb, len);
>  }
> 
> 
>  /* Send an acknowledgement to the client */
>  static void kdd_send_ack(kdd_state *s, uint32_t id, uint16_t type)
>  {
>      s->txp.h.dir = KDD_DIR_ACK;
>      s->txp.h.type = type;
>      s->txp.h.len = 0;
>      s->txp.h.id = id;
> @@ -775,25 +775,25 @@ static void kdd_send_cmd(kdd_state *s, uint32_t subtype, size_t extra)
>      s->txp.h.type = KDD_PKT_CMD;
>      s->txp.h.len = sizeof (kdd_cmd) + extra;
>      s->txp.h.id = (s->next_id ^= 1);
>      s->txp.h.sum = 0;
>      s->txp.cmd.subtype = subtype;
>      kdd_tx(s);
>  }
> 
>  /* Cause the client to print a string */
>  static void kdd_send_string(kdd_state *s, char *fmt, ...)
>  {
>      uint32_t len = 0xffff - sizeof (kdd_msg);
> -    char *buf = (char *) s->txb + sizeof (kdd_hdr) + sizeof (kdd_msg);
> +    char *buf = (char *) &s->txp + sizeof (kdd_hdr) + sizeof (kdd_msg);
>      va_list ap;
> 
>      va_start(ap, fmt);
>      len = vsnprintf(buf, len, fmt, ap);
>      va_end(ap);
> 
>      s->txp.h.dir = KDD_DIR_PKT;
>      s->txp.h.type = KDD_PKT_MSG;
>      s->txp.h.len = sizeof (kdd_msg) + len;
>      s->txp.h.id = (s->next_id ^= 1);
>      s->txp.h.sum = 0;
>      s->txp.msg.subtype = KDD_MSG_PRINT;
> @@ -807,25 +807,25 @@ static void kdd_break(kdd_state *s)
>  {
>      uint16_t ilen;
>      KDD_LOG(s, "Break\n");
> 
>      if (s->running)
>          kdd_halt(s->guest);
>      s->running = 0;
> 
>      {
>          unsigned int i;
>          /* XXX debug pattern */
>          for (i = 0; i < 0x100 ; i++)
> -            s->txb[sizeof (kdd_hdr) + i] = i;
> +            s->txp.payload[sizeof (kdd_hdr) + i] = i;

Again, drop the sizeof(kdd_hdr) here I think.

  Paul

>      }
> 
>      /* Send a state-change message to the client so it knows we've stopped */
>      s->txp.h.dir = KDD_DIR_PKT;
>      s->txp.h.type = KDD_PKT_STC;
>      s->txp.h.len = sizeof (kdd_stc);
>      s->txp.h.id = (s->next_id ^= 1);
>      s->txp.stc.subtype = KDD_STC_STOP;
>      s->txp.stc.stop.cpu = s->cpuid;
>      s->txp.stc.stop.ncpus = kdd_count_cpus(s->guest);
>      s->txp.stc.stop.kthread = 0; /* Let the debugger figure it out */
>      s->txp.stc.stop.status = KDD_STC_STATUS_BREAKPOINT;



From xen-devel-bounces@lists.xenproject.org Fri Jul 03 13:26:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jul 2020 13:26:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jrLiV-0002gn-6v; Fri, 03 Jul 2020 13:26:59 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Alrl=AO=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1jrLiT-0002gi-Vn
 for xen-devel@lists.xenproject.org; Fri, 03 Jul 2020 13:26:58 +0000
X-Inumbo-ID: dd3cc244-bd30-11ea-8496-bc764e2007e4
Received: from mo4-p00-ob.smtp.rzone.de (unknown [81.169.146.218])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id dd3cc244-bd30-11ea-8496-bc764e2007e4;
 Fri, 03 Jul 2020 13:26:56 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1593782815;
 s=strato-dkim-0002; d=aepfle.de;
 h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:
 X-RZG-CLASS-ID:X-RZG-AUTH:From:Subject:Sender;
 bh=wfEc9paGMDvith0n2mgtEgJbtiRuWou13v4+WORTg2U=;
 b=thuc4UIEmXFBzMLJYPGmk+F0shMrAHcWDOekD3LRcUiFp0H/057wd30w6ZkP3SEGze
 lbpnj+CX3rORENvXgsGA2kgCOj9mrOHoGvd8Px/Gub2f33PY2PnqzuLEhpgcMj6JslTs
 qSYgTPkH2xtn0PdVN2wIKVoSjQrtkN+J4lbOjmlketu2BuUyaHfp9iLgSuMbPDjS5joA
 whIcgGVyLH1bUOAmc7Iep7haoA3mS44YfVkD7t1ADnHgcjF4yanJI4ZPcq1JI1RohfsA
 9D1zDGDm3mSthqAuLXRwusbll7amKxgBY5pnngSEyJ9PGojqg+4CxFO8bE70o27pKrt1
 5ywQ==
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QLpd5ylWvMDX3y/OuD5rXVisF1UB6FaE3sBj87wDNX2bCLA8cjrnV86YYhB3Vq"
X-RZG-CLASS-ID: mo00
Received: from sender by smtp.strato.de (RZmta 46.10.5 AUTH)
 with ESMTPSA id m032cfw63DQsae1
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Fri, 3 Jul 2020 15:26:54 +0200 (CEST)
Date: Fri, 3 Jul 2020 15:26:47 +0200
From: Olaf Hering <olaf@aepfle.de>
To: Paul Durrant <xadimgnik@gmail.com>
Subject: Re: Build problems in kdd.c with xen-4.14.0-rc4
Message-ID: <20200703152647.2dacd821.olaf@aepfle.de>
In-Reply-To: <005701d6513d$1bea4080$53bec180$@xen.org>
References: <alpine.LFD.2.22.394.2006302259370.2894@austen3.home>
 <20200702183806.GA28738@aepfle.de>
 <005701d6513d$1bea4080$53bec180$@xen.org>
X-Mailer: Claws Mail 2020.06.03 (GTK+ 2.24.32; x86_64-suse-linux-gnu)
MIME-Version: 1.0
Content-Type: multipart/signed; boundary="Sig_/gbwj2tSC7u.1gE0liArWo13";
 protocol="application/pgp-signature"; micalg=pgp-sha256
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, 'Michael Young' <m.a.young@durham.ac.uk>,
 'Tim Deegan' <tim@xen.org>, paul@xen.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

--Sig_/gbwj2tSC7u.1gE0liArWo13
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable

Am Fri, 3 Jul 2020 14:23:14 +0100
schrieb Paul Durrant <xadimgnik@gmail.com>:

> That doesn't look quite right.

That might be true. I do not debug windows, and it makes gcc happy...

Olaf

--Sig_/gbwj2tSC7u.1gE0liArWo13
Content-Type: application/pgp-signature
Content-Description: Digitale Signatur von OpenPGP

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEE97o7Um30LT3B+5b/86SN7mm1DoAFAl7/MhcACgkQ86SN7mm1
DoD3CA/+N4mxC3nF+yUz4WghDaZEhABqoLLks61qA29DTTudCL7hcyoAaWQ90V1w
sy8yfQycTMeUGvRQTGGtduxWyxyM41Y3xFBXJ48Y3cb/8PIkxYUKAfjDkXmdz2av
B0PqGpGFz4YLoDnKOONlkpRV/07oB2rHi2f5d1c8oCP0gV4YBBa2pHUsMTPw1UR9
eF/F4ZanlphaEQ4iTGSF8PXR8F5mY8uyBoVvc3RynMRkAu0cpRPoVZ3GpZxTQGcx
z8MeUvG8bjm3lqwY2wF/tIM7m6NNTVhmqOH3WQQHzdR11w3Wv+si9H1kG1USvynj
BeEiMfWVQk9kBj/c4llK4Rtb4fATiWSMgV/ynfXHUC6iBT321wwzsPCIbFtG7DjL
hbIuCVNpUTzrfFQkV/0o+NAUYbQ/hG2GU+jl3rq0+miF29raGU8TQwMhoNWbw2Eq
IkZoEkeNG7L+GXZpJ8MtZPYeO59qkF1h80qD//wU6NpFbYAdfVYsE5SbIzEwdISO
7rLm8vfVbWGhy4qgmuJeJsCXwHV5+XcBgsPAydYgdGOmOdKj7ibgiQQqBi+xA2Sg
htsV3yrGrpJczKmQBsa+12kXxB/1GXau9SLaL/7avDe/1esT8BfbMZs8U34G0Irf
p2OCU0bb6SbFuLMEGIVY+yJXv1gGYIYN4JLrEz9TPzkA8OcteBc=
=qI0Y
-----END PGP SIGNATURE-----

--Sig_/gbwj2tSC7u.1gE0liArWo13--


From xen-devel-bounces@lists.xenproject.org Fri Jul 03 13:56:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jul 2020 13:56:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jrMAl-00057R-Ik; Fri, 03 Jul 2020 13:56:11 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3ntU=AO=citrix.com=anthony.perard@srs-us1.protection.inumbo.net>)
 id 1jrMAk-00057M-Ex
 for xen-devel@lists.xenproject.org; Fri, 03 Jul 2020 13:56:10 +0000
X-Inumbo-ID: f2528e1c-bd34-11ea-bca7-bc764e2007e4
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f2528e1c-bd34-11ea-bca7-bc764e2007e4;
 Fri, 03 Jul 2020 13:56:09 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1593784569;
 h=from:to:cc:subject:date:message-id:mime-version:
 content-transfer-encoding;
 bh=t2d9pyErofHDyiYmtc/NX4G6Sq0sIspmqxfCWwBaScA=;
 b=Wv6aHsWZpfcbkO3tSe2OcJ87I3H2j1jlNJdBvQzQ12DLa1WcV/kWmzFU
 IH4PLNzqyhnymPyiTNHWqQ7es/FCV49UsbL3bfpRw8lNTERm5+MCDbyih
 LpknOPsE39bzhOj+ThS81xa+SmV8DTYOHSh9PAM5JIdMu/BCWB2cf8MSd E=;
Authentication-Results: esa5.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: 2+zsXMfHhmfKNOlFvz9le2HA36lsPy6pIuqFOuRnuzsaAwMgs7yZ2OXtn/h4O7jTz8TCIkO2+G
 ArwZA4U1aqtlNlJ/VoUJPPW6y0cozf1c0DPgpd/OE2Jjgs6TwXheM5y9H/LcmrBWEMYM57vEjS
 QitnBZmoFIiZZGMCgrHV5HG4H4mm79GwQHe71tq0MiCfCyAFkT2syWaBRiEyyPBVZ7JfW92YLf
 g/1l+wujEZ4ewK3UyxvxF/gdPvcmJcYuz8EpK/TgDVOaJKDmUpv5+CTp4tgePA6aGi4qTllieW
 jfw=
X-SBRS: 2.7
X-MesageID: 21773974
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,308,1589256000"; d="scan'208";a="21773974"
From: Anthony PERARD <anthony.perard@citrix.com>
To: Paul Durrant <paul@xen.org>, <xen-devel@lists.xenproject.org>
Subject: [XEN PATCH for-4.14] Config: Update QEMU
Date: Fri, 3 Jul 2020 14:55:33 +0100
Message-ID: <20200703135533.336625-1-anthony.perard@citrix.com>
X-Mailer: git-send-email 2.27.0
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Anthony PERARD <anthony.perard@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Backport 2 commits to fix building QEMU without PCI passthrough
support.

Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
---
 Config.mk | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/Config.mk b/Config.mk
index f7d10b7c4cc6..478928c178b7 100644
--- a/Config.mk
+++ b/Config.mk
@@ -245,7 +245,7 @@ SEABIOS_UPSTREAM_URL ?= git://xenbits.xen.org/seabios.git
 MINIOS_UPSTREAM_URL ?= git://xenbits.xen.org/mini-os.git
 endif
 OVMF_UPSTREAM_REVISION ?= 20d2e5a125e34fc8501026613a71549b2a1a3e54
-QEMU_UPSTREAM_REVISION ?= 410cc30fdc590417ae730d635bbc70257adf6750
+QEMU_UPSTREAM_REVISION ?= ea6d3cd1ed79d824e605a70c3626bc437c386260
 MINIOS_UPSTREAM_REVISION ?= f57858b7e8ef8dd48394dd08cec2bef3c9fb92f5
 
 SEABIOS_UPSTREAM_REVISION ?= rel-1.13.0
-- 
Anthony PERARD



From xen-devel-bounces@lists.xenproject.org Fri Jul 03 14:12:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jul 2020 14:12:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jrMQF-0006n4-1y; Fri, 03 Jul 2020 14:12:11 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=XYi7=AO=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jrMQD-0006mz-PK
 for xen-devel@lists.xenproject.org; Fri, 03 Jul 2020 14:12:09 +0000
X-Inumbo-ID: 2dbe2ad6-bd37-11ea-89b4-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2dbe2ad6-bd37-11ea-89b4-12813bfff9fa;
 Fri, 03 Jul 2020 14:12:07 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=zvkK19cGa3/H9rrnJxhbkDaGwVpyp4UyG+xVh37lrqs=; b=RPismKQJWINbfFB17fBAM48kO
 qJF8hwqSPr4eu9LrLgs3s8xCAXSd7jKkm0/lwTYBTpWWXh4fWZvefGT8OZz8BCqMvN7qpIU/JFR/Y
 wLHAQP73VgOo8EW1PDo3kQmgR3oMUPT/N4Z3SSRu0o5dAhmLcxgaPQmf3DpLszyrmjirc=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jrMQB-0000B8-B9; Fri, 03 Jul 2020 14:12:07 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jrMQB-0003az-2H; Fri, 03 Jul 2020 14:12:07 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jrMQB-0001bm-1k; Fri, 03 Jul 2020 14:12:07 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151564-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 151564: regressions - FAIL
X-Osstest-Failures: libvirt:test-amd64-amd64-libvirt-pair:guest-migrate/src_host/dst_host:fail:regression
 libvirt:test-amd64-i386-libvirt-pair:guest-migrate/src_host/dst_host:fail:regression
 libvirt:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 libvirt:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 libvirt:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 libvirt:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 libvirt:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 libvirt:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 libvirt:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 libvirt:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 libvirt:test-arm64-arm64-libvirt:migrate-support-check:fail:nonblocking
 libvirt:test-arm64-arm64-libvirt:saverestore-support-check:fail:nonblocking
 libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 libvirt:test-arm64-arm64-libvirt-qcow2:migrate-support-check:fail:nonblocking
 libvirt:test-arm64-arm64-libvirt-qcow2:saverestore-support-check:fail:nonblocking
 libvirt:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 libvirt:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 libvirt:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This: libvirt=d1d888a69f505922140bec292b8d208b3571f084
X-Osstest-Versions-That: libvirt=e7998ebeaf15e4e8825be0dd97aa1316f194f00d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 03 Jul 2020 14:12:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151564 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151564/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-pair 22 guest-migrate/src_host/dst_host fail REGR. vs. 151496
 test-amd64-i386-libvirt-pair 22 guest-migrate/src_host/dst_host fail REGR. vs. 151496

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 151496
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 151496
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt     14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-qcow2 12 migrate-support-check        fail never pass
 test-arm64-arm64-libvirt-qcow2 13 saverestore-support-check    fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass

version targeted for testing:
 libvirt              d1d888a69f505922140bec292b8d208b3571f084
baseline version:
 libvirt              e7998ebeaf15e4e8825be0dd97aa1316f194f00d

Last test of basis   151496  2020-07-01 04:23:43 Z    2 days
Failing since        151527  2020-07-02 04:29:15 Z    1 days    2 attempts
Testing same since   151564  2020-07-03 04:18:54 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrea Bolognani <abologna@redhat.com>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-libvirt                                     pass    
 test-arm64-arm64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-i386-libvirt-pair                                 fail    
 test-arm64-arm64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit d1d888a69f505922140bec292b8d208b3571f084
Author: Andrea Bolognani <abologna@redhat.com>
Date:   Thu Jul 2 14:41:18 2020 +0200

    NEWS: Update for libvirt 6.5.0
    
    Signed-off-by: Andrea Bolognani <abologna@redhat.com>
    Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>

commit 7fa7f7eeb6e969e002845928e155914da2fc8cd0
Author: Daniel P. Berrangé <berrange@redhat.com>
Date:   Wed Jul 1 17:36:51 2020 +0100

    util: add access check for hooks to fix running as non-root
    
    Since feb83c1e710b9ea8044a89346f4868d03b31b0f1 libvirtd will abort on
    startup if run as non-root
    
      2020-07-01 16:30:30.738+0000: 1647444: error : virDirOpenInternal:2869 : cannot open directory '/etc/libvirt/hooks/daemon.d': Permission denied
    
    The root cause flaw is that non-root libvirtd is using /etc/libvirt for
    its hooks. Traditionally that has been harmless though since we checked
    whether we could access the hook file and degraded gracefully. We need
    the same access check for iterating over the hook directory.
    
    Long term we should make it possible to have an unprivileged hook dir
    under $HOME.
    
    Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
    Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>

commit c3fa17cd9a158f38416a80af3e0f712bf96ebf38
Author: Michal Privoznik <mprivozn@redhat.com>
Date:   Wed Jul 1 09:47:48 2020 +0200

    virnettlshelpers: Update private key
    
    With the recent update of Fedora rawhide I've noticed
    virnettlssessiontest and virnettlscontexttest failing with:
    
      Our own certificate servercertreq-ctx.pem failed validation
      against cacertreq-ctx.pem: The certificate uses an insecure
      algorithm
    
    This is result of Fedora changes to support strong crypto [1]. RSA
    with 1024 bit key is viewed as legacy and thus insecure. Generate
    a new private key then. Moreover, switch to EC which is not only
    shorter but also not deprecated that often as RSA. Generated
    using the following command:
    
      openssl genpkey --outform PEM --out privkey.pem \
      --algorithm EC --pkeyopt ec_paramgen_curve:P-384 \
      --pkeyopt ec_param_enc:named_curve
    
    1: https://fedoraproject.org/wiki/Changes/StrongCryptoSettings2
    
    Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
    Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>

commit d57f361083c5053267e6d9380c1afe2abfcae8ac
Author: Daniel Henrique Barboza <danielhb413@gmail.com>
Date:   Tue Jun 30 16:43:43 2020 -0300

    docs: Fix 'Offline migration' description
    
    'transfers inactive the definition of a domain' seems odd.
    
    Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
    Reviewed-by: Andrea Bolognani <abologna@redhat.com>


From xen-devel-bounces@lists.xenproject.org Fri Jul 03 14:37:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jul 2020 14:37:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jrMov-00005r-CB; Fri, 03 Jul 2020 14:37:41 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=wATx=AO=cert.pl=hubert.jasudowicz@srs-us1.protection.inumbo.net>)
 id 1jrMou-00005m-Ek
 for xen-devel@lists.xenproject.org; Fri, 03 Jul 2020 14:37:40 +0000
X-Inumbo-ID: be51d374-bd3a-11ea-bb8b-bc764e2007e4
Received: from bagnar.nask.net.pl (unknown [195.187.242.196])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id be51d374-bd3a-11ea-bb8b-bc764e2007e4;
 Fri, 03 Jul 2020 14:37:39 +0000 (UTC)
Received: from bagnar.nask.net.pl (unknown [172.16.9.10])
 by bagnar.nask.net.pl (Postfix) with ESMTP id 3B953A3A57;
 Fri,  3 Jul 2020 16:37:38 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by bagnar.nask.net.pl (Postfix) with ESMTP id 31076A36FD;
 Fri,  3 Jul 2020 16:37:37 +0200 (CEST)
Received: from bagnar.nask.net.pl ([127.0.0.1])
 by localhost (bagnar.nask.net.pl [127.0.0.1]) (amavisd-new, port 10032)
 with ESMTP id TpPj6Tq88Wfj; Fri,  3 Jul 2020 16:37:36 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by bagnar.nask.net.pl (Postfix) with ESMTP id 8D3A7A3A57;
 Fri,  3 Jul 2020 16:37:36 +0200 (CEST)
X-Virus-Scanned: amavisd-new at bagnar.nask.net.pl
Received: from bagnar.nask.net.pl ([127.0.0.1])
 by localhost (bagnar.nask.net.pl [127.0.0.1]) (amavisd-new, port 10026)
 with ESMTP id I_QcTcf2D3yr; Fri,  3 Jul 2020 16:37:36 +0200 (CEST)
Received: from belindir.nask.net.pl (belindir-ext.nask.net.pl
 [195.187.242.210])
 by bagnar.nask.net.pl (Postfix) with ESMTP id 68826A36FD;
 Fri,  3 Jul 2020 16:37:36 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by belindir.nask.net.pl (Postfix) with ESMTP id 59215223D3;
 Fri,  3 Jul 2020 16:37:06 +0200 (CEST)
Received: from belindir.nask.net.pl ([127.0.0.1])
 by localhost (belindir.nask.net.pl [127.0.0.1]) (amavisd-new, port 10032)
 with ESMTP id StbmaSZGds1P; Fri,  3 Jul 2020 16:37:00 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by belindir.nask.net.pl (Postfix) with ESMTP id BDACC22DE6;
 Fri,  3 Jul 2020 16:37:00 +0200 (CEST)
X-Virus-Scanned: amavisd-new at belindir.nask.net.pl
Received: from belindir.nask.net.pl ([127.0.0.1])
 by localhost (belindir.nask.net.pl [127.0.0.1]) (amavisd-new, port 10026)
 with ESMTP id zvDmxyBjKjLy; Fri,  3 Jul 2020 16:37:00 +0200 (CEST)
Received: from [192.168.70.4] (unknown [195.187.238.48])
 by belindir.nask.net.pl (Postfix) with ESMTPSA id 2CCE022AC3;
 Fri,  3 Jul 2020 16:37:00 +0200 (CEST)
From: Hubert Jasudowicz <hubert.jasudowicz@cert.pl>
Subject: Re: [PATCH] x86/cpuid: Expose number of vCPUs in CPUID.1.EBX
To: Andrew Cooper <andrew.cooper3@citrix.com>, xen-devel@lists.xenproject.org
References: <f9c2583332d83fe76c3d98e215c76b7b111650e3.1592496443.git.hubert.jasudowicz@cert.pl>
 <bc49dfbd-ffc0-3548-1e46-22b808442679@citrix.com>
 <8174d110-be3b-5735-9085-f35f7f0318ab@cert.pl>
 <03c4c8e1-5924-9b85-6e1b-023ae24745f3@citrix.com>
Message-ID: <eb4b392b-84c9-d98c-5fe6-423175cd8f18@cert.pl>
Date: Fri, 3 Jul 2020 16:36:59 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <03c4c8e1-5924-9b85-6e1b-023ae24745f3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: quoted-printable
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Wei Liu <wl@xen.org>, Jan Beulich <jbeulich@suse.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 6/30/20 10:49 PM, Andrew Cooper wrote:
> On 19/06/2020 15:19, Hubert Jasudowicz wrote:
>> On 6/18/20 6:51 PM, Andrew Cooper wrote:
>>> On 18/06/2020 17:22, Hubert Jasudowicz wrote:
>>>> When running under KVM (or presumably other hypervisors) we enable
>>>> the CPUID.1.EDX.HTT flag, thus indicating validity of CPUID.1.EBX[23=
:16]
>>>> - maximum number of logical processors which the guest reads as 0.
>>>>
>>>> Although this method of topology detection is considered legacy,
>>>> Windows falls back to it when CPUID.0BH.EBX is 0.
>>>>
>>>> CPUID.1.EBX[23:16] being equal to 0, triggers memory corruption in
>>>> ntoskrnl.exe as Windows assumes that number of logical processors wo=
uld
>>>> be at least 1. Memory corruption manifests itself while mapping
>>>> framebuffer for early graphical subsystem, causing BSOD.
>>>>
>>>> This patch fixes running nested Windows (tested on 7 and 10) with KV=
M as
>>>> L0 hypervisor, by setting the value to maximum number of vCPUs in do=
main.
>>>>
>>>> Signed-off-by: Hubert Jasudowicz <hubert.jasudowicz@cert.pl>
>>> I'm afraid fixing guest topology is more complicated than just this.  On
>>> its own, I'm not sure if this is safe for VMs migrating in.
>>>
>>> While I agree that Xen's logic is definitely broken, I suspect the
>>> conditions for the BSOD are more complicated than this, because Windows
>>> does work fine when there is no KVM in the setup described.
>>>
>>> ~Andrew
>>>
>> After some more testing, I've managed to boot Windows by explicitly
>> configuring the guest with cpuid="host,htt=0". If I understand correctly,
>> the default behavior is to enable HTT for the guest and basically pass
>> through the value of CPUID.1.EBX[23:16] without any sanity checks.
>>
>> The reason this works in other setups is that the non-zero value returned
>> by real hardware leaks into the guest. In my setup, what Xen sees is:
>> CPUID.1h == EAX: 000806ea EBX: 00000800 ECX: fffab223 EDX: 0f8bfbff
>>
>> In terms of VM migration, this seems already broken because the guest
>> might read different values depending on what the underlying hardware
>> reports. The patch would at least provide some consistency between hosts.
>> Another solution would be not to enable the HTT bit by default.
>
> Apologies for the delay replying.  (I've been attempting to finish the
> reply for more than a week now, but am just far too busy).
>

No worries. I understand that it's always too much code to review and
too few maintainers. ;)

>
> Xen's behaviour is definitely buggy.  I'm not trying to defend the mess
> it is currently in.
>
> The problem started (AFAICT) with c/s ca2eee92df44 in Xen 3.4 (yup -
> you're reading that right), which is still reverted in XenServer because
> it broke migration across that changeset.  (We also have other topology
> extensions which are broken in different ways, and I'm still attempting
> to unbreak upstream Xen enough to fix it properly).
>
> That changeset attempted to expose hyperthreads, but keep them somewhat
> hidden by blindly asserting that APIC_ID shall now be vcpu_id * 2.
>
> Starting with 4.14-rc3, the logic patched above can now distinguish
> between a clean boot, and a migration in from a pre-4.14 version of Xen,
> where the CPUID settings need re-inventing out of thin air.
>
>
> Anyway - to this problem specifically.
>
> It seems KVM is giving us HTT=0 and NC=0.  The botched logic above has
> clearly not been run on a pre-HTT processor, and it trips up properly
> under KVM's way of doing things.
>
> How is the rest of the topology expressed?  Do we get one socket per
> vcpu then, or is this example a single vcpu VM?

The default way of exposing topology when specifying -smp [cpu number] on the
QEMU command line is 1 socket, 1 core, 1 thread for each vCPU.

I've fiddled with the switches and when I configured QEMU with
-smp cores=2,sockets=2,threads=2, Xen sees the leaf as:
CPUID.1h == EAX: 806ea EBX: 40800 ECX: fffa3223 EDX: 1f8bfbff

so, as you can see, the HTT bit is now on, and thus EBX[23:16] makes sense,
being equal to the number of threads * number of cores for this socket.

This also makes Windows boot without overriding cpuid policy.

> I'm wondering if the option least likely to break migration under the
> current scheme would be to have Xen invent a nonzero number there in the
> HVM policy alongside setting HTT.

This would probably fix the issue and not break anything (hopefully). However,
I don't really understand the rationale behind setting the HTT bit on by
default, other than it looking "weird" to the guest that it has multiple
sockets, each with a single core. Can you elaborate on that?

Hubert Jasudowicz


From xen-devel-bounces@lists.xenproject.org Fri Jul 03 14:50:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jul 2020 14:50:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jrN1K-0001gG-Ir; Fri, 03 Jul 2020 14:50:30 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5Z/2=AO=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jrN1I-0001gB-MC
 for xen-devel@lists.xenproject.org; Fri, 03 Jul 2020 14:50:28 +0000
X-Inumbo-ID: 8890a8bc-bd3c-11ea-89d2-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 8890a8bc-bd3c-11ea-89d2-12813bfff9fa;
 Fri, 03 Jul 2020 14:50:28 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 570D4AC24;
 Fri,  3 Jul 2020 14:50:27 +0000 (UTC)
Subject: Re: vPT rework (and timer mode)
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <20200701090210.GN735@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <f89a158a-416e-1939-597a-075ff97f2b02@suse.com>
Date: Fri, 3 Jul 2020 16:50:30 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20200701090210.GN735@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 01.07.2020 11:02, Roger Pau Monné wrote:
> It's my understanding that the purpose of pt_update_irq and
> pt_intr_post is to attempt to implement the "delay for missed ticks"
> mode, where Xen will accumulate timer interrupts if they cannot be
> injected. As shown by the patch above, this is all broken when the
> timer is added to a vCPU (pt->vcpu) different than the actual target
> vCPU where the interrupt gets delivered (note this can also be a list
> of vCPUs if routed from the IO-APIC using Fixed mode).
> 
> I'm at a loss as to how to fix this so that virtual timers work properly
> and we also keep the "delay for missed ticks" mode without doing a
> massive rework and somehow keeping track of where injected interrupts
> originated, which seems an overly complicated solution.
> 
> My proposal hence would be to completely remove the timer_mode, and
> just treat virtual timer interrupts as other interrupts, ie: they will
> be injected from the callback (pt_timer_fn) and the vCPU(s) would be
> kicked. Whether interrupts would get lost (ie: injected when a
> previous one is still pending) depends on the contention on the
> system. I'm not aware of any current OS that uses timer interrupts as
> a way to track time. I think current OSes know the differences between
> a timer counter and an event timer, and will use them appropriately.

Fundamentally - why not, all the more since this promises to be a
simplification. The question we need to answer up front is whether
we're happy to possibly break old OSes (presumably ones no one
ought to be using anymore these days, as their support life
cycles have long since ended).

Jan


From xen-devel-bounces@lists.xenproject.org Fri Jul 03 15:03:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jul 2020 15:03:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jrNDI-0002bV-PD; Fri, 03 Jul 2020 15:02:52 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=x5eZ=AO=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jrNDH-0002bQ-LL
 for xen-devel@lists.xenproject.org; Fri, 03 Jul 2020 15:02:51 +0000
X-Inumbo-ID: 42d8e88c-bd3e-11ea-b7bb-bc764e2007e4
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 42d8e88c-bd3e-11ea-b7bb-bc764e2007e4;
 Fri, 03 Jul 2020 15:02:50 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1593788570;
 h=subject:to:cc:references:from:message-id:date:
 mime-version:in-reply-to:content-transfer-encoding;
 bh=fDuNXfjpdF6Jk5EWJfB857d+zYqdvN5+NOL4vz2VmOo=;
 b=fU+77xgni0sMfyPL9GIenNu8In5M+h2tBHu0j1P6VeIQiXvrE/ZoZFJQ
 KkmepWMN9JaEg8rAdjHoZeO3DQuCAy9NdtJX2XJd6Izgw8a9VQbGc8TYG
 E/mR2aR3Y+EDHFMmvzkwV7xYVhLqilbmAgfTkCT3/Zsj+pG1mCehw9985 U=;
Authentication-Results: esa4.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: vG574Rd6OoB1Efr/Jdl5cuCLYxoJKIA8GGEAAq3HtmQOdGKHvYVZf6SFQp1xxaNGXuWsI6Xa8v
 U8AUk/4KIbgsuKNLCkngh46JVx5/vPBZe50coLMX46gugaemkIOpLvxutnE40Sty1t3zHZSIcB
 TelkoHqTnySRT3lJdSENuvNMPKdzxIVXmdVaMRtfdrwEVWl8NqcbcjJ1wVQqoFApEp5kx8MkLX
 E6nt9BQFKrPBy+VoU7WFP0UZZMP1yV8pdPvO7i0HhUxEVXgw2a6pgG1h7ocVXXkLIBdayxXcqM
 /uU=
X-SBRS: 2.7
X-MesageID: 22387314
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,308,1589256000"; d="scan'208";a="22387314"
Subject: Re: vPT rework (and timer mode)
To: Jan Beulich <jbeulich@suse.com>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>
References: <20200701090210.GN735@Air-de-Roger>
 <f89a158a-416e-1939-597a-075ff97f2b02@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <af13fa01-db36-784d-dfaf-b9905defc7fd@citrix.com>
Date: Fri, 3 Jul 2020 16:02:44 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <f89a158a-416e-1939-597a-075ff97f2b02@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, Wei Liu <wl@xen.org>,
 Paul Durrant <paul@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 03/07/2020 15:50, Jan Beulich wrote:
> On 01.07.2020 11:02, Roger Pau Monné wrote:
>> It's my understanding that the purpose of pt_update_irq and
>> pt_intr_post is to attempt to implement the "delay for missed ticks"
>> mode, where Xen will accumulate timer interrupts if they cannot be
>> injected. As shown by the patch above, this is all broken when the
>> timer is added to a vCPU (pt->vcpu) different than the actual target
>> vCPU where the interrupt gets delivered (note this can also be a list
>> of vCPUs if routed from the IO-APIC using Fixed mode).
>>
>> I'm at a loss as to how to fix this so that virtual timers work properly
>> and we also keep the "delay for missed ticks" mode without doing a
>> massive rework and somehow keeping track of where injected interrupts
>> originated, which seems an overly complicated solution.
>>
>> My proposal hence would be to completely remove the timer_mode, and
>> just treat virtual timer interrupts as other interrupts, ie: they will
>> be injected from the callback (pt_timer_fn) and the vCPU(s) would be
>> kicked. Whether interrupts would get lost (ie: injected when a
>> previous one is still pending) depends on the contention on the
>> system. I'm not aware of any current OS that uses timer interrupts as
>> a way to track time. I think current OSes know the differences between
>> a timer counter and an event timer, and will use them appropriately.
> Fundamentally - why not, all the more since this promises to be a
> simplification. The question we need to answer up front is whether
> we're happy to possibly break old OSes (presumably ones no one
> ought to be using anymore these days, as their support life
> cycles have long since ended).

The various timer modes were all for compatibility, and IIRC, mostly for
Windows XP and older, which told time by counting the number of timer
interrupts.

Paul - you might remember better than me?

It's possibly worth noting that issues in this area cause triple faults in
OVMF (it seems to enable interrupts in its timer handler), and break
in-guest kexec (because our timer-targeting logic doesn't work in a way
remotely close to real hardware when the kexec kernel is booting on a
non-zero vCPU).

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri Jul 03 15:12:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jul 2020 15:12:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jrNMz-0003To-QQ; Fri, 03 Jul 2020 15:12:53 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=XYi7=AO=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jrNMy-0003Tj-Tv
 for xen-devel@lists.xenproject.org; Fri, 03 Jul 2020 15:12:52 +0000
X-Inumbo-ID: a433015c-bd3f-11ea-89dd-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a433015c-bd3f-11ea-89dd-12813bfff9fa;
 Fri, 03 Jul 2020 15:12:42 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=G6kmMXHHYp/e8FfyleS/gSKtYRqyyh0F2t/O0OY+m5Y=; b=NtGzQG9OX8B/lAr7orW9Jy3jy
 Q9Zri9jhlQcENyz1/Vo4B7VGsEZqYd134E3Ht+1Qrv94D2B2lZb4qI5Eaf99JVMSALEGXnp9NgYpg
 zq2u1/aVazwId5G472ohn2P5rBgIB+VX/oSlevB+xKW8D6Bh7Mt1StoTo+m6rFgfixLCc=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jrNMn-0001HG-Qu; Fri, 03 Jul 2020 15:12:41 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jrNMn-00067d-Jn; Fri, 03 Jul 2020 15:12:41 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jrNMn-0006rA-Ix; Fri, 03 Jul 2020 15:12:41 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151554-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 151554: regressions - FAIL
X-Osstest-Failures: xen-unstable:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:regression
 xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
X-Osstest-Versions-This: xen=be63d9d47f571a60d70f8fb630c03871312d9655
X-Osstest-Versions-That: xen=23ca7ec0ba620db52a646d80e22f9703a6589f66
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 03 Jul 2020 15:12:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151554 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151554/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 151506

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 151506
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 151506
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 151506
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 151506
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 151506
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 151506
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 151506
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 151506
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop             fail like 151506
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass

version targeted for testing:
 xen                  be63d9d47f571a60d70f8fb630c03871312d9655
baseline version:
 xen                  23ca7ec0ba620db52a646d80e22f9703a6589f66

Last test of basis   151506  2020-07-01 10:55:16 Z    2 days
Failing since        151528  2020-07-02 04:45:56 Z    1 days    2 attempts
Testing same since   151554  2020-07-02 21:40:14 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Bertrand Marquis <bertrand.marquis@arm.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Stefano Stabellini <stefano.stabellini@xilinx.com>
  Volodymyr Babchuk <volodymyr_babchuk@epam.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit be63d9d47f571a60d70f8fb630c03871312d9655
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Jul 2 11:11:40 2020 +0200

    build: tweak variable exporting for make 3.82
    
    While I've been running into an issue here only because of an additional
    local change I'm carrying, to be able to override just the compiler in
    $(XEN_ROOT)/.config (rather than the whole tool chain) via this check in
    config/StdGNU.mk:
    
    ifeq ($(filter-out default undefined,$(origin CC)),)
    
    I'd nevertheless like to correct the underlying issue:
    Exporting an unset variable changes its origin from "undefined" to
    "file". This comes into effect because of our adding of -rR to
    MAKEFLAGS, which make 3.82 wrongly applies also upon re-invoking itself
    after having updated auto.conf{,.cmd}.
    
    Move the export statement past $(XEN_ROOT)/config/$(XEN_OS).mk inclusion
    (which happens through $(XEN_ROOT)/Config.mk) such that the variables
    already have their designated values at that point, while retaining
    their initial origin up to the point they get defined.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Tested-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>
    Release-acked-by: Paul Durrant <paul@xen.org>

commit 5b718d24e88ceb2c28010c647836929b85b22b5d
Author: Roger Pau Monné <roger.pau@citrix.com>
Date:   Thu Jul 2 11:05:53 2020 +0200

    x86/tlb: fix assisted flush usage
    
    Commit e9aca9470ed86 introduced a regression when avoiding sending
    IPIs for certain flush operations. Xen's page fault handler
    (spurious_page_fault) relies on blocking interrupts in order to
    prevent handling TLB flush IPIs, thus preventing other CPUs from
    removing page table pages. Switching to assisted flushing avoids such
    IPIs, and thus can result in pages belonging to the page tables being
    removed (and possibly re-used) while __page_fault_type is being
    executed.
    
    Force some of the TLB flushes to use IPIs, thus avoiding the assisted
    TLB flush. Those selected flushes are the page type change (when
    switching from a page table type to a different one, ie: a page that
    has been removed as a page table) and page allocation. This sadly has
    a negative performance impact on the pvshim, as fewer assisted flushes
    can be used. Note the flush in grant-table code is also switched to
    use an IPI even when not strictly needed. This is done so that a
    common arch_flush_tlb_mask can be introduced and always used in common
    code.
    
    Introduce a new flag (FLUSH_FORCE_IPI) and helper to force a TLB flush
    using an IPI (x86 only). Note that the flag is only meaningfully defined
    when the hypervisor supports PV or shadow paging mode, as otherwise
    hardware assisted paging domains are in charge of their page tables and
    won't share page tables with Xen, thus not influencing the result of
    page walks performed by the spurious fault handler.
    
    Just passing this new flag when calling flush_area_mask prevents the
    usage of the assisted flush without any other side effects.
    
    Note the flag is not defined on Arm.
    
    Fixes: e9aca9470ed86 ('x86/tlb: use Xen L0 assisted TLB flush when available')
    Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Julien Grall <jgrall@amazon.com>
    Release-acked-by: Paul Durrant <paul@xen.org>

commit 0dbed3ad3366734fd23ee3fd1f9989c8c96b6052
Author: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Date:   Fri Jun 19 22:34:01 2020 +0000

    optee: allow plain TMEM buffers with NULL address
    
    Trusted Applications use a popular approach to determine the required
    size of a buffer: the client provides a memory reference with a NULL
    buffer pointer. This is the so-called "null memory reference". The TA
    updates the reference with the required size and returns it to the
    client. The client then allocates a buffer of the needed size and
    repeats the operation.
    
    This behavior is described in TEE Client API Specification, paragraph
    3.2.5. Memory References.
    
    OP-TEE represents this null memory reference as a TMEM parameter with
    buf_ptr = 0x0. This is the only case in which we should allow a TMEM
    buffer without the OPTEE_MSG_ATTR_NONCONTIG flag. It is also the
    special case for a buffer with the OPTEE_MSG_ATTR_NONCONTIG flag.
    
    This could lead to a potential issue, because IPA 0x0 is a valid
    address, but OP-TEE will treat it as a special case. So, care should
    be taken when constructing an OP-TEE enabled guest to make sure that
    such a guest has no memory at IPA 0x0 and none of its memory is
    mapped at PA 0x0.
    
    Signed-off-by: Volodymyr Babchuk <volodymyr_babchuk@epam.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
    Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
    Release-acked-by: Paul Durrant <paul@xen.org>

commit 5b13eb1d978e9732fe2c9826b60885b687a5c4fc
Author: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Date:   Fri Jun 19 22:33:59 2020 +0000

    optee: immediately free buffers that are released by OP-TEE
    
    Normal World can share a buffer with OP-TEE for two reasons:
    1. A client application wants to exchange data with a TA
    2. OP-TEE asks for a shared buffer for its internal needs
    
    The second case was handled more strictly than necessary:
    
    1. In RPC request OP-TEE asks for buffer
    2. NW allocates buffer and provides it via RPC response
    3. Xen pins pages and translates data
    4. Xen provides buffer to OP-TEE
    5. OP-TEE uses it
    6. OP-TEE sends request to free the buffer
    7. NW frees the buffer and sends the RPC response
    8. Xen unpins pages and forgets about the buffer
    
    The problem is that Xen should forget about the buffer between stages
    6 and 7. I.e. the right flow should be like this:
    
    6. OP-TEE sends request to free the buffer
    7. Xen unpins pages and forgets about the buffer
    8. NW frees the buffer and sends the RPC response
    
    This is because OP-TEE internally frees the buffer before sending the
    "free SHM buffer" request. So we have no reason to hold a reference to
    this buffer anymore. Moreover, on multiprocessor systems NW has time
    to reuse the buffer cookie for another buffer. Xen complained about
    this and denied the new buffer registration. I have seen this issue
    while running tests on an i.MX SoC.
    
    So, this patch basically corrects that behavior by freeing the buffer
    earlier, when handling the RPC return from OP-TEE.
    
    Signed-off-by: Volodymyr Babchuk <volodymyr_babchuk@epam.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
    Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
    Release-acked-by: Paul Durrant <paul@xen.org>

commit 3b7dab93f2401b08c673244c9ae0f92e08bd03ba
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Wed Jul 1 12:39:59 2020 +0100

    x86/spec-ctrl: Protect against CALL/JMP straight-line speculation
    
    Some x86 CPUs speculatively execute beyond indirect CALL/JMP instructions.
    
    With CONFIG_INDIRECT_THUNK / Retpolines, indirect CALL/JMP instructions are
    converted to direct CALL/JMP's to __x86_indirect_thunk_REG(), leaving just a
    handful of indirect JMPs implementing those stubs.
    
    There is no architectural execution beyond an indirect JMP, so use INT3 as
    recommended by vendors to halt speculative execution.  This is shorter than
    LFENCE (which would also work fine), and also shows up in logs if we do
    unexpectedly execute them.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Release-acked-by: Paul Durrant <paul@xen.org>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Fri Jul 03 16:45:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jul 2020 16:45:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jrOoD-0002qH-03; Fri, 03 Jul 2020 16:45:05 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=XYi7=AO=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jrOoC-0002qC-DC
 for xen-devel@lists.xenproject.org; Fri, 03 Jul 2020 16:45:04 +0000
X-Inumbo-ID: 89a6ced9-bd4c-11ea-89fb-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 89a6ced9-bd4c-11ea-89fb-12813bfff9fa;
 Fri, 03 Jul 2020 16:45:01 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=c58/bStT32waMtpPye2oON5AG+pdsLQMX0JoHkPS4ao=; b=xb+kpYnoEjTWb0RzAv+9HOgwY
 AvYGQrocMhN9E/Ffwt0wa3ZV4sH6MPkaKECDQW1jEF02jI9Gxkh1H2AVlzwGYAK8WsU7WLzBSBJE2
 zMU9GtWRsewsaGY+brJHrT2s/Dcp4wok+hmNol+2z4AEqDkpZNJlIvsKZ9s7NnogIZ6Wk=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jrOo8-0003UF-Vd; Fri, 03 Jul 2020 16:45:01 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jrOo8-0002Lj-LN; Fri, 03 Jul 2020 16:45:00 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jrOo8-0005gw-HN; Fri, 03 Jul 2020 16:45:00 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151570-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 151570: all pass - PUSHED
X-Osstest-Versions-This: ovmf=f43a14e3cff3fa45c30ff152c4172204a4458341
X-Osstest-Versions-That: ovmf=0622a7b1b203ad4ab1675533e958792fc1afc12b
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 03 Jul 2020 16:45:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151570 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151570/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 f43a14e3cff3fa45c30ff152c4172204a4458341
baseline version:
 ovmf                 0622a7b1b203ad4ab1675533e958792fc1afc12b

Last test of basis   151550  2020-07-02 20:09:20 Z    0 days
Testing same since   151570  2020-07-03 05:15:25 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Leif Lindholm <leif@nuviainc.com>
  Oleksiy Yakovlev <oleksiyy@ami.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   0622a7b1b2..f43a14e3cf  f43a14e3cff3fa45c30ff152c4172204a4458341 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Fri Jul 03 17:03:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jul 2020 17:03:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jrP6A-0004Vy-LO; Fri, 03 Jul 2020 17:03:38 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=XYi7=AO=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jrP68-0004Vd-Vp
 for xen-devel@lists.xenproject.org; Fri, 03 Jul 2020 17:03:37 +0000
X-Inumbo-ID: 1dfe834e-bd4f-11ea-bb8b-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1dfe834e-bd4f-11ea-bb8b-bc764e2007e4;
 Fri, 03 Jul 2020 17:03:29 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Message-Id:Subject:To:Sender:
 Reply-To:Cc:MIME-Version:Content-Type:Content-Transfer-Encoding:Content-ID:
 Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc
 :Resent-Message-ID:In-Reply-To:References:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=hlIpXpYWoDxOmxkG9/0wpTPoZwuOL0H08cu9b9CKrRM=; b=q/qq+7ysy3PQWoLOOrvHyo4e9U
 wlATTr4WdIL7pAXKwBOU16qZqzo63l4aMMV+Mev8a90Rqsq+3KK4a7ITFpon/ZWOM5gj0PGoe31Nn
 7SXN8tGNxREztnOlJaHV8vlDWgwPPV4TIrj4CvrpbLX2Qw0L+wO8WR2ngaJTKXVXCEUw=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jrP60-0003qf-Nb; Fri, 03 Jul 2020 17:03:28 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jrP60-0003Ht-FH; Fri, 03 Jul 2020 17:03:28 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jrP60-0004Pp-Ea; Fri, 03 Jul 2020 17:03:28 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Subject: [libvirt bisection] complete test-amd64-amd64-libvirt-pair
Message-Id: <E1jrP60-0004Pp-Ea@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 03 Jul 2020 17:03:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

branch xen-unstable
xenbranch xen-unstable
job test-amd64-amd64-libvirt-pair
testid guest-migrate/src_host/dst_host

Tree: libvirt git://libvirt.org/libvirt.git
Tree: libvirt_keycodemapdb https://gitlab.com/keycodemap/keycodemapdb.git
Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  libvirt git://libvirt.org/libvirt.git
  Bug introduced:  7fa7f7eeb6e969e002845928e155914da2fc8cd0
  Bug not present: c3fa17cd9a158f38416a80af3e0f712bf96ebf38
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/151585/


  commit 7fa7f7eeb6e969e002845928e155914da2fc8cd0
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Wed Jul 1 17:36:51 2020 +0100
  
      util: add access check for hooks to fix running as non-root
      
      Since feb83c1e710b9ea8044a89346f4868d03b31b0f1 libvirtd will abort on
      startup if run as non-root
      
        2020-07-01 16:30:30.738+0000: 1647444: error : virDirOpenInternal:2869 : cannot open directory '/etc/libvirt/hooks/daemon.d': Permission denied
      
      The root cause flaw is that non-root libvirtd is using /etc/libvirt for
      its hooks. Traditionally that has been harmless though since we checked
      whether we could access the hook file and degraded gracefully. We need
      the same access check for iterating over the hook directory.
      
      Long term we should make it possible to have an unprivileged hook dir
      under $HOME.
      
      Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/libvirt/test-amd64-amd64-libvirt-pair.guest-migrate--src_host--dst_host.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/libvirt/test-amd64-amd64-libvirt-pair.guest-migrate--src_host--dst_host --summary-out=tmp/151585.bisection-summary --basis-template=151496 --blessings=real,real-bisect libvirt test-amd64-amd64-libvirt-pair guest-migrate/src_host/dst_host
Searching for failure / basis pass:
 151564 fail [dst_host=pinot1,src_host=pinot0] / 151496 [dst_host=albana0,src_host=albana1] 151469 [dst_host=fiano0,src_host=fiano1] 151417 [dst_host=albana1,src_host=albana0] 151396 [dst_host=pinot0,src_host=pinot1] 151370 [dst_host=huxelrebe0,src_host=huxelrebe1] 151352 [dst_host=huxelrebe1,src_host=huxelrebe0] 151330 [dst_host=godello0,src_host=godello1] 151308 [dst_host=godello1,src_host=godello0] 151251 [dst_host=elbling1,src_host=elbling0] 151229 [dst_host=fiano1,src_host=fiano0] 151197 ok.
Failure / basis pass flights: 151564 / 151197
(tree with no url: minios)
Tree: libvirt git://libvirt.org/libvirt.git
Tree: libvirt_keycodemapdb https://gitlab.com/keycodemap/keycodemapdb.git
Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git
Latest d1d888a69f505922140bec292b8d208b3571f084 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c8edb70945099fd35a0997d3f3db105efc144e13 3c659044118e34603161457db9934a34f816d78b ea6d3cd1ed79d824e605a70c3626bc437c386260 88ab0c15525ced2eefe39220742efe4769089ad8 23ca7ec0ba620db52a646d80e22f9703a6589f66
Basis pass 6f28865223292a816f1bfde589901a00156cf8a1 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 3c659044118e34603161457db9934a34f816d78b 410cc30fdc590417ae730d635bbc70257adf6750 2e3de6253422112ae43e608661ba94ea6b345694 b91825f628c9a62cf2a3a0d972ea81484a8b7fce
Generating revisions with ./adhoc-revtuple-generator  git://libvirt.org/libvirt.git#6f28865223292a816f1bfde589901a00156cf8a1-d1d888a69f505922140bec292b8d208b3571f084 https://gitlab.com/keycodemap/keycodemapdb.git#27acf0ef828bf719b2053ba398b195829413dbdd-27acf0ef828bf719b2053ba398b195829413dbdd git://xenbits.xen.org/linux-pvops.git#c3038e718a19fc596f7b1baba0f83d5146dc7784-c3038e718a19fc596f7b1baba0f83d5146dc7784 git://xenbits.xen.org/osstest/linux-firmware.git#c530a75c1e6a472b0eb9558310b518f0dfcd8860-c530a75c1e6a472b0eb9558310b518f0dfcd8860 git://xenbits.xen.org/osstest/ovmf.git#8927e2777786a43cddfaa328b0f4c41a09c629c9-c8edb70945099fd35a0997d3f3db105efc144e13 git://xenbits.xen.org/qemu-xen-traditional.git#3c659044118e34603161457db9934a34f816d78b-3c659044118e34603161457db9934a34f816d78b git://xenbits.xen.org/qemu-xen.git#410cc30fdc590417ae730d635bbc70257adf6750-ea6d3cd1ed79d824e605a70c3626bc437c386260 git://xenbits.xen.org/osstest/seabios.git#2e3de6253422112ae43e608661ba94ea6b345694-88ab0c15525ced2eefe39220742efe4769089ad8 git://xenbits.xen.org/xen.git#b91825f628c9a62cf2a3a0d972ea81484a8b7fce-23ca7ec0ba620db52a646d80e22f9703a6589f66
From git://cache:9419/git://libvirt.org/libvirt
   c203b8fee1..201f8d1876  master     -> origin/master
Auto packing the repository in background for optimum performance.
See "git help gc" for manual housekeeping.
error: The last gc run reported the following. Please correct the root cause
and remove gc.log.
Automatic cleanup will not be performed until the file is removed.

warning: There are too many unreachable loose objects; run 'git prune' to remove them.

From git://cache:9419/git://xenbits.xen.org/osstest/ovmf
   0622a7b1b2..f43a14e3cf  xen-tested-master -> origin/xen-tested-master
Auto packing the repository in background for optimum performance.
See "git help gc" for manual housekeeping.
error: The last gc run reported the following. Please correct the root cause
and remove gc.log.
Automatic cleanup will not be performed until the file is removed.

warning: There are too many unreachable loose objects; run 'git prune' to remove them.

Use of uninitialized value $parents in array dereference at ./adhoc-revtuple-generator line 465.
Use of uninitialized value in concatenation (.) or string at ./adhoc-revtuple-generator line 465.
Loaded 17578 nodes in revision graph
Searching for test results:
 151197 pass 6f28865223292a816f1bfde589901a00156cf8a1 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 3c659044118e34603161457db9934a34f816d78b 410cc30fdc590417ae730d635bbc70257adf6750 2e3de6253422112ae43e608661ba94ea6b345694 b91825f628c9a62cf2a3a0d972ea81484a8b7fce
 151251 [dst_host=elbling1,src_host=elbling0]
 151229 [dst_host=fiano1,src_host=fiano0]
 151330 [dst_host=godello0,src_host=godello1]
 151308 [dst_host=godello1,src_host=godello0]
 151352 [dst_host=huxelrebe1,src_host=huxelrebe0]
 151370 [dst_host=huxelrebe0,src_host=huxelrebe1]
 151417 [dst_host=albana1,src_host=albana0]
 151396 [dst_host=pinot0,src_host=pinot1]
 151469 [dst_host=fiano0,src_host=fiano1]
 151496 [dst_host=albana0,src_host=albana1]
 151576 fail 7fa7f7eeb6e969e002845928e155914da2fc8cd0 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 00217f1919270007d7a911f89b32e39b9dcaa907 3c659044118e34603161457db9934a34f816d78b 410cc30fdc590417ae730d635bbc70257adf6750 88ab0c15525ced2eefe39220742efe4769089ad8 23ca7ec0ba620db52a646d80e22f9703a6589f66
 151557 pass d66181c84e8fc8471476ce607f7ad321392350c3 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 a4a2258a1fec66665481b0bd929b049921cb07a0 3c659044118e34603161457db9934a34f816d78b 410cc30fdc590417ae730d635bbc70257adf6750 d11c75185276ded944f2ea0277532b7fee849bbc 20b65c15a38d98f31f212925a3e5a733dce5b477
 151542 pass 6f28865223292a816f1bfde589901a00156cf8a1 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 3c659044118e34603161457db9934a34f816d78b 410cc30fdc590417ae730d635bbc70257adf6750 2e3de6253422112ae43e608661ba94ea6b345694 b91825f628c9a62cf2a3a0d972ea81484a8b7fce
 151567 pass bcc007d1b766dc59c75dad610ca75b92bd43f7d2 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 00217f1919270007d7a911f89b32e39b9dcaa907 3c659044118e34603161457db9934a34f816d78b 410cc30fdc590417ae730d635bbc70257adf6750 88ab0c15525ced2eefe39220742efe4769089ad8 23ca7ec0ba620db52a646d80e22f9703a6589f66
 151559 pass f6f745297d884453ef4ed65643d267069f778517 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 a4a2258a1fec66665481b0bd929b049921cb07a0 3c659044118e34603161457db9934a34f816d78b 410cc30fdc590417ae730d635bbc70257adf6750 d11c75185276ded944f2ea0277532b7fee849bbc 88cfd062e8318dfeb67c7d2eb50b6cd224b0738a
 151549 fail 7fa7f7eeb6e969e002845928e155914da2fc8cd0 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 00217f1919270007d7a911f89b32e39b9dcaa907 3c659044118e34603161457db9934a34f816d78b 410cc30fdc590417ae730d635bbc70257adf6750 88ab0c15525ced2eefe39220742efe4769089ad8 23ca7ec0ba620db52a646d80e22f9703a6589f66
 151564 fail d1d888a69f505922140bec292b8d208b3571f084 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c8edb70945099fd35a0997d3f3db105efc144e13 3c659044118e34603161457db9934a34f816d78b ea6d3cd1ed79d824e605a70c3626bc437c386260 88ab0c15525ced2eefe39220742efe4769089ad8 23ca7ec0ba620db52a646d80e22f9703a6589f66
 151585 fail 7fa7f7eeb6e969e002845928e155914da2fc8cd0 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 00217f1919270007d7a911f89b32e39b9dcaa907 3c659044118e34603161457db9934a34f816d78b 410cc30fdc590417ae730d635bbc70257adf6750 88ab0c15525ced2eefe39220742efe4769089ad8 23ca7ec0ba620db52a646d80e22f9703a6589f66
 151527 fail 7fa7f7eeb6e969e002845928e155914da2fc8cd0 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 00217f1919270007d7a911f89b32e39b9dcaa907 3c659044118e34603161457db9934a34f816d78b 410cc30fdc590417ae730d635bbc70257adf6750 88ab0c15525ced2eefe39220742efe4769089ad8 23ca7ec0ba620db52a646d80e22f9703a6589f66
 151561 pass 207a5009ea8308286f6e248ac5519b072c252555 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 0f01cec52f4794777feb067e4fa0bfcedfdc124e 3c659044118e34603161457db9934a34f816d78b 410cc30fdc590417ae730d635bbc70257adf6750 88ab0c15525ced2eefe39220742efe4769089ad8 da53345dd5ff7d3a34e83587fd375c0b7722f46c
 151571 pass d57f361083c5053267e6d9380c1afe2abfcae8ac 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 00217f1919270007d7a911f89b32e39b9dcaa907 3c659044118e34603161457db9934a34f816d78b 410cc30fdc590417ae730d635bbc70257adf6750 88ab0c15525ced2eefe39220742efe4769089ad8 23ca7ec0ba620db52a646d80e22f9703a6589f66
 151552 pass aef2c5ea6f04e765170565a77a0fdc5fd6a3ea47 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1a992030522622c42aa063788b3276789c56c1e1 3c659044118e34603161457db9934a34f816d78b 410cc30fdc590417ae730d635bbc70257adf6750 2e3de6253422112ae43e608661ba94ea6b345694 fde76f895d0aa817a1207d844d793239c6639bc6
 151583 pass c3fa17cd9a158f38416a80af3e0f712bf96ebf38 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 00217f1919270007d7a911f89b32e39b9dcaa907 3c659044118e34603161457db9934a34f816d78b 410cc30fdc590417ae730d635bbc70257adf6750 88ab0c15525ced2eefe39220742efe4769089ad8 23ca7ec0ba620db52a646d80e22f9703a6589f66
 151563 pass 4268e187531eb370bc6fbac4496018bb7fef6716 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 db77d8f7ee9490138d853c4fb06e7a1e14a49148 3c659044118e34603161457db9934a34f816d78b 410cc30fdc590417ae730d635bbc70257adf6750 88ab0c15525ced2eefe39220742efe4769089ad8 0e2e54966af556f4047c1048855c4a071028a32d
 151579 pass c3fa17cd9a158f38416a80af3e0f712bf96ebf38 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 00217f1919270007d7a911f89b32e39b9dcaa907 3c659044118e34603161457db9934a34f816d78b 410cc30fdc590417ae730d635bbc70257adf6750 88ab0c15525ced2eefe39220742efe4769089ad8 23ca7ec0ba620db52a646d80e22f9703a6589f66
 151574 pass c3fa17cd9a158f38416a80af3e0f712bf96ebf38 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 00217f1919270007d7a911f89b32e39b9dcaa907 3c659044118e34603161457db9934a34f816d78b 410cc30fdc590417ae730d635bbc70257adf6750 88ab0c15525ced2eefe39220742efe4769089ad8 23ca7ec0ba620db52a646d80e22f9703a6589f66
 151581 fail 7fa7f7eeb6e969e002845928e155914da2fc8cd0 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 00217f1919270007d7a911f89b32e39b9dcaa907 3c659044118e34603161457db9934a34f816d78b 410cc30fdc590417ae730d635bbc70257adf6750 88ab0c15525ced2eefe39220742efe4769089ad8 23ca7ec0ba620db52a646d80e22f9703a6589f66
Searching for interesting versions
 Result found: flight 151197 (pass), for basis pass
 For basis failure, parent search stopping at c3fa17cd9a158f38416a80af3e0f712bf96ebf38 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 00217f1919270007d7a911f89b32e39b9dcaa907 3c659044118e34603161457db9934a34f816d78b 410cc30fdc590417ae730d635bbc70257adf6750 88ab0c15525ced2eefe39220742efe4769089ad8 23ca7ec0ba620db52a646d80e22f9703a6589f66, results HASH(0x558084978d98) HASH(0x55808497bd88) HASH(0x558084979d38) For basis fai\
 lure, parent search stopping at d57f361083c5053267e6d9380c1afe2abfcae8ac 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 00217f1919270007d7a911f89b32e39b9dcaa907 3c659044118e34603161457db9934a34f816d78b 410cc30fdc590417ae730d635bbc70257adf6750 88ab0c15525ced2eefe39220742efe4769089ad8 23ca7ec0ba620db52a646d80e22f9703a6589f66, results HASH(0x558084981b00) For basis failure, parent search stopping at bcc007d1b766dc59c75dad61\
 0ca75b92bd43f7d2 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 00217f1919270007d7a911f89b32e39b9dcaa907 3c659044118e34603161457db9934a34f816d78b 410cc30fdc590417ae730d635bbc70257adf6750 88ab0c15525ced2eefe39220742efe4769089ad8 23ca7ec0ba620db52a646d80e22f9703a6589f66, results HASH(0x55808493bbc0) For basis failure, parent search stopping at 4268e187531eb370bc6fbac4496018bb7fef6716 27acf0ef828bf719b2053ba398b195829413dbd\
 d c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 db77d8f7ee9490138d853c4fb06e7a1e14a49148 3c659044118e34603161457db9934a34f816d78b 410cc30fdc590417ae730d635bbc70257adf6750 88ab0c15525ced2eefe39220742efe4769089ad8 0e2e54966af556f4047c1048855c4a071028a32d, results HASH(0x558084973c20) For basis failure, parent search stopping at 207a5009ea8308286f6e248ac5519b072c252555 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a4\
 72b0eb9558310b518f0dfcd8860 0f01cec52f4794777feb067e4fa0bfcedfdc124e 3c659044118e34603161457db9934a34f816d78b 410cc30fdc590417ae730d635bbc70257adf6750 88ab0c15525ced2eefe39220742efe4769089ad8 da53345dd5ff7d3a34e83587fd375c0b7722f46c, results HASH(0x55808494e268) For basis failure, parent search stopping at f6f745297d884453ef4ed65643d267069f778517 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 a4a2258a1fec66665481b0bd929b\
 049921cb07a0 3c659044118e34603161457db9934a34f816d78b 410cc30fdc590417ae730d635bbc70257adf6750 d11c75185276ded944f2ea0277532b7fee849bbc 88cfd062e8318dfeb67c7d2eb50b6cd224b0738a, results HASH(0x558084944410) For basis failure, parent search stopping at d66181c84e8fc8471476ce607f7ad321392350c3 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 a4a2258a1fec66665481b0bd929b049921cb07a0 3c659044118e34603161457db9934a34f816d78b 41\
 0cc30fdc590417ae730d635bbc70257adf6750 d11c75185276ded944f2ea0277532b7fee849bbc 20b65c15a38d98f31f212925a3e5a733dce5b477, results HASH(0x558084963e60) For basis failure, parent search stopping at aef2c5ea6f04e765170565a77a0fdc5fd6a3ea47 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1a992030522622c42aa063788b3276789c56c1e1 3c659044118e34603161457db9934a34f816d78b 410cc30fdc590417ae730d635bbc70257adf6750 2e3de6253422112ae\
 43e608661ba94ea6b345694 fde76f895d0aa817a1207d844d793239c6639bc6, results HASH(0x558084977018) For basis failure, parent search stopping at 6f28865223292a816f1bfde589901a00156cf8a1 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 3c659044118e34603161457db9934a34f816d78b 410cc30fdc590417ae730d635bbc70257adf6750 2e3de6253422112ae43e608661ba94ea6b345694 b91825f628c9a62cf2a3a0d972ea8148\
 4a8b7fce, results HASH(0x55808494c0b0) HASH(0x558084963b90) Result found: flight 151527 (fail), for basis failure (at ancestor ~10071)
 Repro found: flight 151542 (pass), for basis pass
 Repro found: flight 151564 (fail), for basis failure
 0 revisions at c3fa17cd9a158f38416a80af3e0f712bf96ebf38 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 00217f1919270007d7a911f89b32e39b9dcaa907 3c659044118e34603161457db9934a34f816d78b 410cc30fdc590417ae730d635bbc70257adf6750 88ab0c15525ced2eefe39220742efe4769089ad8 23ca7ec0ba620db52a646d80e22f9703a6589f66
No revisions left to test, checking graph state.
 Result found: flight 151574 (pass), for last pass
 Result found: flight 151576 (fail), for first failure
 Repro found: flight 151579 (pass), for last pass
 Repro found: flight 151581 (fail), for first failure
 Repro found: flight 151583 (pass), for last pass
 Repro found: flight 151585 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  libvirt git://libvirt.org/libvirt.git
  Bug introduced:  7fa7f7eeb6e969e002845928e155914da2fc8cd0
  Bug not present: c3fa17cd9a158f38416a80af3e0f712bf96ebf38
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/151585/

Auto packing the repository in background for optimum performance.
See "git help gc" for manual housekeeping.
error: The last gc run reported the following. Please correct the root cause
and remove gc.log.
Automatic cleanup will not be performed until the file is removed.

warning: There are too many unreachable loose objects; run 'git prune' to remove them.


  commit 7fa7f7eeb6e969e002845928e155914da2fc8cd0
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Wed Jul 1 17:36:51 2020 +0100
  
      util: add access check for hooks to fix running as non-root
      
      Since feb83c1e710b9ea8044a89346f4868d03b31b0f1 libvirtd will abort on
      startup if run as non-root
      
        2020-07-01 16:30:30.738+0000: 1647444: error : virDirOpenInternal:2869 : cannot open directory '/etc/libvirt/hooks/daemon.d': Permission denied
      
      The root cause flaw is that non-root libvirtd is using /etc/libvirt for
      its hooks. Traditionally that has been harmless though since we checked
      whether we could access the hook file and degraded gracefully. We need
      the same access check for iterating over the hook directory.
      
      Long term we should make it possible to have an unprivileged hook dir
      under $HOME.
      
      Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>

pnmtopng: 225 colors found
Revision graph left in /home/logs/results/bisect/libvirt/test-amd64-amd64-libvirt-pair.guest-migrate--src_host--dst_host.{dot,ps,png,html,svg}.
----------------------------------------
151585: tolerable FAIL

flight 151585 libvirt real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/151585/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 test-amd64-amd64-libvirt-pair 22 guest-migrate/src_host/dst_host fail baseline untested


jobs:
 build-amd64-libvirt                                          pass    
 test-amd64-amd64-libvirt-pair                                fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Fri Jul 03 17:10:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jul 2020 17:10:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jrPD2-0005LE-De; Fri, 03 Jul 2020 17:10:44 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=0ZZ1=AO=kernel.org=luto@srs-us1.protection.inumbo.net>)
 id 1jrPD0-0005L9-Gm
 for xen-devel@lists.xenproject.org; Fri, 03 Jul 2020 17:10:42 +0000
X-Inumbo-ID: 1f892741-bd50-11ea-8a04-12813bfff9fa
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1f892741-bd50-11ea-8a04-12813bfff9fa;
 Fri, 03 Jul 2020 17:10:42 +0000 (UTC)
Received: from mail-wr1-f51.google.com (mail-wr1-f51.google.com
 [209.85.221.51])
 (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id F1E88214D8
 for <xen-devel@lists.xenproject.org>; Fri,  3 Jul 2020 17:10:40 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
 s=default; t=1593796241;
 bh=eSLxwCsrnjTN1e/eMh6nr/qbzVVZVXbOT9m3hRAwgUY=;
 h=From:Date:Subject:To:Cc:From;
 b=pHjE8pN36o+e/YKNzwG1ysSGr2KqQURQ5A7ukoqZCvAeM1u7QOU8KseOSXJXzkpAT
 PQOIWf348pJihiEl9qg2Hi497UH6KQNOrs3zlmjR9M1StahTZyL1dHxeA5VAccfvoU
 10GIlFJGhmfIeXAQX/vByyDFlLGgiXthQ7kZwzDw=
Received: by mail-wr1-f51.google.com with SMTP id z2so11214777wrp.2
 for <xen-devel@lists.xenproject.org>; Fri, 03 Jul 2020 10:10:40 -0700 (PDT)
X-Gm-Message-State: AOAM5307LjnqRWRYjFfvvAZeIQG2MRE4Hwqm/woIGelDMqF0chvzU19J
 k5/fKroPNrf3WIN4YLIamglTYFqvSSBw9xoHAq4rVQ==
X-Google-Smtp-Source: ABdhPJxAdrtZOxCWu77L4TeW9SEziwQ1Q79FxUmVvDBqK30QeKHVkoNZP7TaDcgSwTQuWVtqX6NTUmygzsDDwM6qVdA=
X-Received: by 2002:adf:8104:: with SMTP id 4mr38344164wrm.18.1593796239481;
 Fri, 03 Jul 2020 10:10:39 -0700 (PDT)
MIME-Version: 1.0
From: Andy Lutomirski <luto@kernel.org>
Date: Fri, 3 Jul 2020 10:10:28 -0700
X-Gmail-Original-Message-ID: <CALCETrVfi1Rnt5nnrHNivdxE7MqRPiLXvon4-engqo=LCKiojA@mail.gmail.com>
Message-ID: <CALCETrVfi1Rnt5nnrHNivdxE7MqRPiLXvon4-engqo=LCKiojA@mail.gmail.com>
Subject: FSGSBASE seems to be busted on Xen PV
To: xen-devel <xen-devel@lists.xenproject.org>,
 LKML <linux-kernel@vger.kernel.org>, 
 Juergen Gross <jgross@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>, 
 Jan Beulich <jbeulich@suse.com>, Boris Ostrovsky <boris.ostrovsky@oracle.com>
Content-Type: text/plain; charset="UTF-8"
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: X86 ML <x86@kernel.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi Xen folks-

I did some testing of the upcoming Linux FSGSBASE support on Xen PV,
and I found what appears to be some significant bugs in the Xen
context switching code.  These bugs are causing Linux selftest
failures, and they could easily cause random and hard-to-debug
failures of user programs that use the new instructions in a Xen PV
guest.

The bugs seem to boil down to the context switching code in Xen being
clever and trying to guess that a nonzero FS or GS means that the
segment base must match the in-memory descriptor.  This is simply not
true if CR4.FSGSBASE is set -- the bases can have any canonical value,
under the full control of the guest, and Xen has absolutely no way of
knowing whether the values are expected to be in sync with the
selectors.  (The same is true without FSGSBASE, except that guest funny
business either requires MSR accesses or some descriptor table
fiddling, and guests are perhaps less likely to care)

Having written a bunch of the corresponding Linux code, I don't think
there's any way around just independently saving and restoring the
selectors and the bases.  At least it's relatively fast with FSGSBASE
enabled.

If you can't get this fixed in upstream Xen reasonably quickly, we may
need to disable FSGSBASE in a Xen PV guest in Linux.

--Andy


From xen-devel-bounces@lists.xenproject.org Fri Jul 03 17:16:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jul 2020 17:16:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jrPIK-0005We-7d; Fri, 03 Jul 2020 17:16:12 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=x5eZ=AO=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jrPII-0005WZ-Vo
 for xen-devel@lists.xenproject.org; Fri, 03 Jul 2020 17:16:11 +0000
X-Inumbo-ID: e3172130-bd50-11ea-8a05-12813bfff9fa
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e3172130-bd50-11ea-8a05-12813bfff9fa;
 Fri, 03 Jul 2020 17:16:09 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1593796571;
 h=subject:to:cc:references:from:message-id:date:
 mime-version:in-reply-to:content-transfer-encoding;
 bh=dqeiGuK5cYVNFmkGldspUWwR9pAfBDVSylO2ZhDXxgA=;
 b=OboNHorrDLyxTT03ZvD66t3/9cUXj0SP2+xwxrHH02xnomyJir6d6i50
 HlVF0RLj556zgUO80pdX8uaYpnGkN00FjUkE4aT+abvihgfRWpA6Sn/ZU
 Sx5JF22LzD4nVUgs4oFXEbLWzxvy+LeSX99cDdkwD7YQX6BJgzbF9y1Hq 4=;
Authentication-Results: esa2.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: Wmjh8XanJ46odrm8NoGJflxLrYjP3hhL6XwFpCOGk+igvaFduGuN3NR99lPlmnofEXknmtK5yj
 mwzzdR267dX4eqgtColQf885PR+XTcgtUzCf82Mt1LA8qqfWWE254d1aH161crac2N1+PkIEN6
 wreuB3wIIqvAcUFETs2YmxbDlJ4sGPI+Yeshdyq6/EprEzutj1L7lcvj25/3tjJ+lw4TSd+04W
 xIozewpd6O9j6mWYLM7dpXhA+kAcYIDXJqv5VzNXdBnnYImLbofcLHVKwymQFkU87HH8zq1SYK
 Onk=
X-SBRS: 2.7
X-MesageID: 21581824
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,308,1589256000"; d="scan'208";a="21581824"
Subject: Re: FSGSBASE seems to be busted on Xen PV
To: Andy Lutomirski <luto@kernel.org>, xen-devel
 <xen-devel@lists.xenproject.org>, LKML <linux-kernel@vger.kernel.org>,
 Juergen Gross <jgross@suse.com>, Jan Beulich <jbeulich@suse.com>, "Boris
 Ostrovsky" <boris.ostrovsky@oracle.com>
References: <CALCETrVfi1Rnt5nnrHNivdxE7MqRPiLXvon4-engqo=LCKiojA@mail.gmail.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <e78d2ee5-66cf-2ed8-c04f-71dd92efdfe1@citrix.com>
Date: Fri, 3 Jul 2020 18:16:04 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <CALCETrVfi1Rnt5nnrHNivdxE7MqRPiLXvon4-engqo=LCKiojA@mail.gmail.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: X86 ML <x86@kernel.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 03/07/2020 18:10, Andy Lutomirski wrote:
> Hi Xen folks-
>
> I did some testing of the upcoming Linux FSGSBASE support on Xen PV,
> and I found what appears to be some significant bugs in the Xen
> context switching code.  These bugs are causing Linux selftest
> failures, and they could easily cause random and hard-to-debug
> failures of user programs that use the new instructions in a Xen PV
> guest.
>
> The bugs seem to boil down to the context switching code in Xen being
> clever and trying to guess that a nonzero FS or GS means that the
> segment base must match the in-memory descriptor.  This is simply not
> true if CR4.FSGSBASE is set -- the bases can have any canonical value,
> under the full control of the guest, and Xen has absolutely no way of
> knowing whether the values are expected to be in sync with the
> selectors.  (The same is true without FSGSBASE, except that guest funny
> business either requires MSR accesses or some descriptor table
> fiddling, and guests are perhaps less likely to care)
>
> Having written a bunch of the corresponding Linux code, I don't think
> there's any way around just independently saving and restoring the
> selectors and the bases.  At least it's relatively fast with FSGSBASE
> enabled.
>
> If you can't get this fixed in upstream Xen reasonably quickly, we may
> need to disable FSGSBASE in a Xen PV guest in Linux.

This has come up several times before, but if it's actually breaking
userspace, then Xen needs to change.

I'll see about making something which is rather more robust.

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri Jul 03 18:51:35 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jul 2020 18:51:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jrQmH-00054e-Be; Fri, 03 Jul 2020 18:51:13 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=XYi7=AO=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jrQmF-00054E-Mj
 for xen-devel@lists.xenproject.org; Fri, 03 Jul 2020 18:51:11 +0000
X-Inumbo-ID: 247ea6f4-bd5e-11ea-8a25-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 247ea6f4-bd5e-11ea-8a25-12813bfff9fa;
 Fri, 03 Jul 2020 18:51:02 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Message-Id:Subject:To:Sender:
 Reply-To:Cc:MIME-Version:Content-Type:Content-Transfer-Encoding:Content-ID:
 Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc
 :Resent-Message-ID:In-Reply-To:References:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=aVPZoKjOV4KRjVP2Dcp7wlHyH7EDJBzFQCiPX07Smz4=; b=xgFAGKhnd966jXxhLJPs8sCZLN
 vsX/9XsAIwB3Kw0IMZh/SoaXfOvmy+UnmXM7WPOZwwUDLf8IzPPY/YRaTAFMd44NoiJqAtfrXFR2U
 NVbokVtqmOFPMzv+YzgMx9M1nl0py5CSluDKUhCEniJA8mm2f3fjeMtXL+7Frwa7LMdo=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jrQm6-0005pj-3d; Fri, 03 Jul 2020 18:51:02 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jrQm5-0007QM-Kv; Fri, 03 Jul 2020 18:51:01 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jrQm5-0000FM-KM; Fri, 03 Jul 2020 18:51:01 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Subject: [qemu-mainline bisection] complete
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict
Message-Id: <E1jrQm5-0000FM-KM@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 03 Jul 2020 18:51:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

branch xen-unstable
xenbranch xen-unstable
job test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict
testid debian-hvm-install

Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://git.qemu.org/qemu.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  qemuu git://git.qemu.org/qemu.git
  Bug introduced:  81cb05732efb36971901c515b007869cc1d3a532
  Bug not present: d6b78ac8ecf94f56dbfbecc23fb4365d8772a41a
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/151591/


  commit 81cb05732efb36971901c515b007869cc1d3a532
  Author: Markus Armbruster <armbru@redhat.com>
  Date:   Tue Jun 9 14:23:37 2020 +0200
  
      qdev: Assert devices are plugged into a bus that can take them
      
      This would have caught some of the bugs I just fixed.
      
      Signed-off-by: Markus Armbruster <armbru@redhat.com>
      Reviewed-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
      Message-Id: <20200609122339.937862-23-armbru@redhat.com>


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/qemu-mainline/test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict.debian-hvm-install.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/qemu-mainline/test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict.debian-hvm-install --summary-out=tmp/151591.bisection-summary --basis-template=151065 --blessings=real,real-bisect qemu-mainline test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict debian-hvm-install
Searching for failure / basis pass:
 151547 fail [host=huxelrebe1] / 151149 [host=fiano0] 151101 [host=chardonnay1] 151065 [host=albana0] 151047 [host=godello1] 150970 [host=chardonnay0] 150930 [host=fiano1] 150916 [host=rimava1] 150909 [host=elbling1] 150899 [host=pinot1] 150895 [host=debina0] 150831 [host=huxelrebe0] 150694 [host=albana1] 150631 ok.
Failure / basis pass flights: 151547 / 150631
(tree with no url: minios)
Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://git.qemu.org/qemu.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git
Latest c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 00217f1919270007d7a911f89b32e39b9dcaa907 3c659044118e34603161457db9934a34f816d78b 64f0ad8ad8e13257e7c912df470d46784b55c3fd 88ab0c15525ced2eefe39220742efe4769089ad8 23ca7ec0ba620db52a646d80e22f9703a6589f66
Basis pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ca407c7246bf405da6d9b1b9d93e5e7f17b4b1f9 3c659044118e34603161457db9934a34f816d78b 5cc7a54c2e91d82cb6a52e4921325c511fd90712 2e3de6253422112ae43e608661ba94ea6b345694 1497e78068421d83956f8e82fb6e1bf1fc3b1199
Generating revisions with ./adhoc-revtuple-generator  git://xenbits.xen.org/linux-pvops.git#c3038e718a19fc596f7b1baba0f83d5146dc7784-c3038e718a19fc596f7b1baba0f83d5146dc7784 git://xenbits.xen.org/osstest/linux-firmware.git#c530a75c1e6a472b0eb9558310b518f0dfcd8860-c530a75c1e6a472b0eb9558310b518f0dfcd8860 git://xenbits.xen.org/osstest/ovmf.git#ca407c7246bf405da6d9b1b9d93e5e7f17b4b1f9-00217f1919270007d7a911f89b32e39b9dcaa907 git://xenbits.xen.org/qemu-xen-traditional.git#3c659044118e34603161457db99\
 34a34f816d78b-3c659044118e34603161457db9934a34f816d78b git://git.qemu.org/qemu.git#5cc7a54c2e91d82cb6a52e4921325c511fd90712-64f0ad8ad8e13257e7c912df470d46784b55c3fd git://xenbits.xen.org/osstest/seabios.git#2e3de6253422112ae43e608661ba94ea6b345694-88ab0c15525ced2eefe39220742efe4769089ad8 git://xenbits.xen.org/xen.git#1497e78068421d83956f8e82fb6e1bf1fc3b1199-23ca7ec0ba620db52a646d80e22f9703a6589f66
Use of uninitialized value $parents in array dereference at ./adhoc-revtuple-generator line 465.
Use of uninitialized value in concatenation (.) or string at ./adhoc-revtuple-generator line 465.
Use of uninitialized value $parents in array dereference at ./adhoc-revtuple-generator line 465.
Use of uninitialized value in concatenation (.) or string at ./adhoc-revtuple-generator line 465.
Use of uninitialized value $parents in array dereference at ./adhoc-revtuple-generator line 465.
Use of uninitialized value in concatenation (.) or string at ./adhoc-revtuple-generator line 465.
Use of uninitialized value $parents in array dereference at ./adhoc-revtuple-generator line 465.
Use of uninitialized value in concatenation (.) or string at ./adhoc-revtuple-generator line 465.
Loaded 55263 nodes in revision graph
Searching for test results:
 150631 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ca407c7246bf405da6d9b1b9d93e5e7f17b4b1f9 3c659044118e34603161457db9934a34f816d78b 5cc7a54c2e91d82cb6a52e4921325c511fd90712 2e3de6253422112ae43e608661ba94ea6b345694 1497e78068421d83956f8e82fb6e1bf1fc3b1199
 150694 [host=albana1]
 150831 [host=huxelrebe0]
 150909 [host=elbling1]
 150930 [host=fiano1]
 150916 [host=rimava1]
 150895 [host=debina0]
 150899 [host=pinot1]
 150970 [host=chardonnay0]
 151047 [host=godello1]
 151101 [host=chardonnay1]
 151065 [host=albana0]
 151149 [host=fiano0]
 151221 fail irrelevant
 151175 fail irrelevant
 151241 fail irrelevant
 151286 fail irrelevant
 151269 fail irrelevant
 151328 fail irrelevant
 151304 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 322969adf1fb3d6cfbd613f35121315715aff2ed 3c659044118e34603161457db9934a34f816d78b 171199f56f5f9bdf1e5d670d09ef1351d8f01bae 2e3de6253422112ae43e608661ba94ea6b345694 fde76f895d0aa817a1207d844d793239c6639bc6
 151377 fail irrelevant
 151353 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1a992030522622c42aa063788b3276789c56c1e1 3c659044118e34603161457db9934a34f816d78b d4b78317b7cf8c0c635b70086503813f79ff21ec 2e3de6253422112ae43e608661ba94ea6b345694 fde76f895d0aa817a1207d844d793239c6639bc6
 151399 fail irrelevant
 151414 fail irrelevant
 151435 fail irrelevant
 151459 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 0f01cec52f4794777feb067e4fa0bfcedfdc124e 3c659044118e34603161457db9934a34f816d78b e7651153a8801dad6805d450ea8bef9b46c1adf5 88ab0c15525ced2eefe39220742efe4769089ad8 88cfd062e8318dfeb67c7d2eb50b6cd224b0738a
 151471 fail irrelevant
 151485 fail irrelevant
 151546 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 567bc4b4ae8a975791382dd30ac413bc0d3ce88c 3c659044118e34603161457db9934a34f816d78b 3575b0aea983ad57804c9af739ed8ff7bc168393 2e3de6253422112ae43e608661ba94ea6b345694 b91825f628c9a62cf2a3a0d972ea81484a8b7fce
 151526 fail irrelevant
 151529 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 239b50a863704f7960525799eda82de061c7c458 3c659044118e34603161457db9934a34f816d78b eefe34ea4b82c2b47abe28af4cc7247d51553626 2e3de6253422112ae43e608661ba94ea6b345694 25636ed707cf1211ce846c7ec58f8643e435d7a7
 151530 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 239b50a863704f7960525799eda82de061c7c458 3c659044118e34603161457db9934a34f816d78b 3f429a3400822141651486193d6af625eeab05a5 2e3de6253422112ae43e608661ba94ea6b345694 71ca0e0ad000e690899936327eb09709ab182ade
 151531 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 58ae92a993687d913aa6dd00ef3497a1bc5f6c40 3c659044118e34603161457db9934a34f816d78b 54cdfe511219b8051046be55a6e156c4f08ff7ff 2e3de6253422112ae43e608661ba94ea6b345694 71ca0e0ad000e690899936327eb09709ab182ade
 151558 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 567bc4b4ae8a975791382dd30ac413bc0d3ce88c 3c659044118e34603161457db9934a34f816d78b 6b0eff1a4ea47c835a7d8bee88c05c47ada37495 2e3de6253422112ae43e608661ba94ea6b345694 2995d0afdf2d3fb44d07eada088db3613741db1e
 151500 fail irrelevant
 151533 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 a2433243fbe471c250d7eddc2c7da325d91265fd 3c659044118e34603161457db9934a34f816d78b 6675a653d2e57ab09c32c0ea7b44a1d6c40a7f58 2e3de6253422112ae43e608661ba94ea6b345694 3625b04991b4d6affadd99d377ab84bac48dfff4
 151548 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 6ff7c838d09224dd4e4c9b5b93152d8db1b19740 3c659044118e34603161457db9934a34f816d78b 49ee11555262a256afec592dfed7c5902d5eefd2 2e3de6253422112ae43e608661ba94ea6b345694 726c78d14dfe6ec76f5e4c7756821a91f0a04b34
 151534 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 3c659044118e34603161457db9934a34f816d78b 53550e81e2cafe7c03a39526b95cd21b5194d9b1 2e3de6253422112ae43e608661ba94ea6b345694 3625b04991b4d6affadd99d377ab84bac48dfff4
 151536 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 3c659044118e34603161457db9934a34f816d78b 250bc43a406f7d46e319abe87c19548d4f027828 2e3de6253422112ae43e608661ba94ea6b345694 3371ced37ced359167b5a71abee2062854371323
 151566 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 3c659044118e34603161457db9934a34f816d78b 157ed954e2dc8c2a4230d38058ca7f1fe50902e0 2e3de6253422112ae43e608661ba94ea6b345694 1251402caf8685f45d9d580f01583370f7e2d272
 151537 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3ee4f6cb360a877d171f2f9bb76b0d46d2cfa985 3c659044118e34603161457db9934a34f816d78b 9f1f264edbdf5516d6f208497310b3eedbc7b74c 2e3de6253422112ae43e608661ba94ea6b345694 2995d0afdf2d3fb44d07eada088db3613741db1e
 151519 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ca407c7246bf405da6d9b1b9d93e5e7f17b4b1f9 3c659044118e34603161457db9934a34f816d78b 5cc7a54c2e91d82cb6a52e4921325c511fd90712 2e3de6253422112ae43e608661ba94ea6b345694 1497e78068421d83956f8e82fb6e1bf1fc3b1199
 151591 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 3c659044118e34603161457db9934a34f816d78b 81cb05732efb36971901c515b007869cc1d3a532 2e3de6253422112ae43e608661ba94ea6b345694 1251402caf8685f45d9d580f01583370f7e2d272
 151539 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 e1d24410da356731da70b3334f86343e11e207d2 3c659044118e34603161457db9934a34f816d78b 470dd165d152ff7ceac61c7b71c2b89220b3aad7 2e3de6253422112ae43e608661ba94ea6b345694 6a49b9a7920c82015381740905582b666160d955
 151518 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 00217f1919270007d7a911f89b32e39b9dcaa907 3c659044118e34603161457db9934a34f816d78b fc1bff958998910ec8d25db86cd2f53ff125f7ab 88ab0c15525ced2eefe39220742efe4769089ad8 23ca7ec0ba620db52a646d80e22f9703a6589f66
 151560 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 3c659044118e34603161457db9934a34f816d78b da9630c57ee386f8beb571ba6bb4a98d546c42ca 2e3de6253422112ae43e608661ba94ea6b345694 1251402caf8685f45d9d580f01583370f7e2d272
 151541 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 567bc4b4ae8a975791382dd30ac413bc0d3ce88c 3c659044118e34603161457db9934a34f816d78b eea8f5df4ecc607d64f091b8d916fcc11a697541 2e3de6253422112ae43e608661ba94ea6b345694 2995d0afdf2d3fb44d07eada088db3613741db1e
 151572 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 3c659044118e34603161457db9934a34f816d78b 75a6ed875ff0a2eb6b2971ae2098ed09963d7329 2e3de6253422112ae43e608661ba94ea6b345694 1251402caf8685f45d9d580f01583370f7e2d272
 151543 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 00217f1919270007d7a911f89b32e39b9dcaa907 3c659044118e34603161457db9934a34f816d78b fc1bff958998910ec8d25db86cd2f53ff125f7ab 88ab0c15525ced2eefe39220742efe4769089ad8 23ca7ec0ba620db52a646d80e22f9703a6589f66
 151578 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 00217f1919270007d7a911f89b32e39b9dcaa907 3c659044118e34603161457db9934a34f816d78b 64f0ad8ad8e13257e7c912df470d46784b55c3fd 88ab0c15525ced2eefe39220742efe4769089ad8 23ca7ec0ba620db52a646d80e22f9703a6589f66
 151545 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 6ff7c838d09224dd4e4c9b5b93152d8db1b19740 3c659044118e34603161457db9934a34f816d78b 86e8c353f705f14f2f2fd7a6195cefa431aa24d9 2e3de6253422112ae43e608661ba94ea6b345694 058023b343d4e366864831db46e9b438e9e3a178
 151551 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8035edbe12f0f2a58e8fa9b06d05c8ee1c69ffae 3c659044118e34603161457db9934a34f816d78b 5d2f557b47dfbf8f23277a5bdd8473d4607c681a 2e3de6253422112ae43e608661ba94ea6b345694 51ca66c37371b10b378513af126646de22eddb17
 151555 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8035edbe12f0f2a58e8fa9b06d05c8ee1c69ffae 3c659044118e34603161457db9934a34f816d78b 7d2410cea154bf915fb30179ebda3b17ac36e70e 2e3de6253422112ae43e608661ba94ea6b345694 780aba2779b834f19b2a6f0dcdea0e7e0b5e1622
 151547 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 00217f1919270007d7a911f89b32e39b9dcaa907 3c659044118e34603161457db9934a34f816d78b 64f0ad8ad8e13257e7c912df470d46784b55c3fd 88ab0c15525ced2eefe39220742efe4769089ad8 23ca7ec0ba620db52a646d80e22f9703a6589f66
 151562 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 3c659044118e34603161457db9934a34f816d78b 7d3660e79830a069f1848bb4fa1cdf8f666424fb 2e3de6253422112ae43e608661ba94ea6b345694 fec6a7af5c5760b9bccd9e7c3eaf29f0401af264
 151556 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8035edbe12f0f2a58e8fa9b06d05c8ee1c69ffae 3c659044118e34603161457db9934a34f816d78b 175198ad91d8bac540159705873b4ffe4fb94eab 2e3de6253422112ae43e608661ba94ea6b345694 51ca66c37371b10b378513af126646de22eddb17
 151573 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 3c659044118e34603161457db9934a34f816d78b 007d1dbf72536ec1b847a944832e4de1546af2ac 2e3de6253422112ae43e608661ba94ea6b345694 1251402caf8685f45d9d580f01583370f7e2d272
 151580 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 3c659044118e34603161457db9934a34f816d78b d6b78ac8ecf94f56dbfbecc23fb4365d8772a41a 2e3de6253422112ae43e608661ba94ea6b345694 1251402caf8685f45d9d580f01583370f7e2d272
 151575 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ca407c7246bf405da6d9b1b9d93e5e7f17b4b1f9 3c659044118e34603161457db9934a34f816d78b 5cc7a54c2e91d82cb6a52e4921325c511fd90712 2e3de6253422112ae43e608661ba94ea6b345694 1497e78068421d83956f8e82fb6e1bf1fc3b1199
 151584 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 3c659044118e34603161457db9934a34f816d78b d6b78ac8ecf94f56dbfbecc23fb4365d8772a41a 2e3de6253422112ae43e608661ba94ea6b345694 1251402caf8685f45d9d580f01583370f7e2d272
 151582 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 3c659044118e34603161457db9934a34f816d78b 81cb05732efb36971901c515b007869cc1d3a532 2e3de6253422112ae43e608661ba94ea6b345694 1251402caf8685f45d9d580f01583370f7e2d272
 151588 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 3c659044118e34603161457db9934a34f816d78b d6b78ac8ecf94f56dbfbecc23fb4365d8772a41a 2e3de6253422112ae43e608661ba94ea6b345694 1251402caf8685f45d9d580f01583370f7e2d272
 151587 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 3c659044118e34603161457db9934a34f816d78b 81cb05732efb36971901c515b007869cc1d3a532 2e3de6253422112ae43e608661ba94ea6b345694 1251402caf8685f45d9d580f01583370f7e2d272
Searching for interesting versions
 Result found: flight 150631 (pass), for basis pass
 Result found: flight 151547 (fail), for basis failure
 Repro found: flight 151575 (pass), for basis pass
 Repro found: flight 151578 (fail), for basis failure
 0 revisions at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 3c659044118e34603161457db9934a34f816d78b d6b78ac8ecf94f56dbfbecc23fb4365d8772a41a 2e3de6253422112ae43e608661ba94ea6b345694 1251402caf8685f45d9d580f01583370f7e2d272
No revisions left to test, checking graph state.
 Result found: flight 151580 (pass), for last pass
 Result found: flight 151582 (fail), for first failure
 Repro found: flight 151584 (pass), for last pass
 Repro found: flight 151587 (fail), for first failure
 Repro found: flight 151588 (pass), for last pass
 Repro found: flight 151591 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  qemuu git://git.qemu.org/qemu.git
  Bug introduced:  81cb05732efb36971901c515b007869cc1d3a532
  Bug not present: d6b78ac8ecf94f56dbfbecc23fb4365d8772a41a
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/151591/


  commit 81cb05732efb36971901c515b007869cc1d3a532
  Author: Markus Armbruster <armbru@redhat.com>
  Date:   Tue Jun 9 14:23:37 2020 +0200
  
      qdev: Assert devices are plugged into a bus that can take them
      
      This would have caught some of the bugs I just fixed.
      
      Signed-off-by: Markus Armbruster <armbru@redhat.com>
      Reviewed-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
      Message-Id: <20200609122339.937862-23-armbru@redhat.com>

dot: graph is too large for cairo-renderer bitmaps. Scaling by 0.718386 to fit
pnmtopng: 226 colors found
Revision graph left in /home/logs/results/bisect/qemu-mainline/test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict.debian-hvm-install.{dot,ps,png,html,svg}.
----------------------------------------
151591: tolerable ALL FAIL

flight 151591 qemu-mainline real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/151591/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail baseline untested


jobs:
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Fri Jul 03 18:55:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jul 2020 18:55:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jrQpz-0005DJ-TH; Fri, 03 Jul 2020 18:55:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=V/qb=AO=xen.org=tim@srs-us1.protection.inumbo.net>)
 id 1jrQpy-0005DE-Sa
 for xen-devel@lists.xenproject.org; Fri, 03 Jul 2020 18:55:02 +0000
X-Inumbo-ID: b2f5db14-bd5e-11ea-8496-bc764e2007e4
Received: from deinos.phlegethon.org (unknown [2001:41d0:8:b1d7::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b2f5db14-bd5e-11ea-8496-bc764e2007e4;
 Fri, 03 Jul 2020 18:55:02 +0000 (UTC)
Received: from tjd by deinos.phlegethon.org with local (Exim 4.92.3 (FreeBSD))
 (envelope-from <tim@xen.org>)
 id 1jrQpt-000IZp-GB; Fri, 03 Jul 2020 18:54:57 +0000
Date: Fri, 3 Jul 2020 19:54:57 +0100
From: Tim Deegan <tim@xen.org>
To: Michael Young <m.a.young@durham.ac.uk>
Subject: Re: Build problems in kdd.c with xen-4.14.0-rc4
Message-ID: <20200703185457.GA71229@deinos.phlegethon.org>
References: <alpine.LFD.2.22.394.2006302259370.2894@austen3.home>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
In-Reply-To: <alpine.LFD.2.22.394.2006302259370.2894@austen3.home>
User-Agent: Mutt/1.11.1 (2018-12-01)
X-SA-Known-Good: Yes
X-SA-Exim-Connect-IP: <locally generated>
X-SA-Exim-Mail-From: tim@xen.org
X-SA-Exim-Scanned: No (on deinos.phlegethon.org);
 SAEximRunCond expanded to false
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi Michael,

Thanks for the report!

At 23:21 +0100 on 30 Jun (1593559296), Michael Young wrote:
> I get the following errors when trying to build xen-4.14.0-rc4
> 
> kdd.c: In function 'kdd_tx':
> kdd.c:754:15: error: array subscript 16 is above array bounds of 'uint8_t[16]' {aka 'unsigned char[16]'} [-Werror=array-bounds]
>    754 |         s->txb[len++] = 0xaa;
>        |         ~~~~~~^~~~~~~
> kdd.c:82:17: note: while referencing 'txb'
>     82 |         uint8_t txb[sizeof (kdd_hdr)];           /* Marshalling area for tx */
>        |                 ^~~
> kdd.c: In function 'kdd_break':
> kdd.c:819:19: error: array subscript 16 is above array bounds of 'uint8_t[16]' {aka 'unsigned char[16]'} [-Werror=array-bounds]

Oh dear.  The fix for the last kdd bug seems to have gone wrong
somewhere.  The patch I posted has:

-        uint8_t txb[sizeof (kdd_hdr) + 65536];   /* Marshalling area for tx */
+        uint8_t txb[sizeof (kdd_pkt)];           /* Marshalling area for tx */

but as applied in master it's:

-        uint8_t txb[sizeof (kdd_hdr) + 65536];   /* Marshalling area for tx */
+        uint8_t txb[sizeof (kdd_hdr)];           /* Marshalling area for tx */

i.e. the marshalling area is only large enough for a header and GCC
is correctly complaining about that.

Wei, it looks like you committed this patch - can you figure out what
happened to it please?

Cheers,

Tim.


From xen-devel-bounces@lists.xenproject.org Fri Jul 03 19:59:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jul 2020 19:59:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jrRpk-0001iB-0A; Fri, 03 Jul 2020 19:58:52 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=krI5=AO=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1jrRpi-0001i6-Vh
 for xen-devel@lists.xenproject.org; Fri, 03 Jul 2020 19:58:51 +0000
X-Inumbo-ID: 9ccc82f8-bd67-11ea-8496-bc764e2007e4
Received: from mail-wm1-f67.google.com (unknown [209.85.128.67])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9ccc82f8-bd67-11ea-8496-bc764e2007e4;
 Fri, 03 Jul 2020 19:58:50 +0000 (UTC)
Received: by mail-wm1-f67.google.com with SMTP id o2so35196060wmh.2
 for <xen-devel@lists.xenproject.org>; Fri, 03 Jul 2020 12:58:50 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:date:from:to:cc:subject:message-id:references
 :mime-version:content-disposition:in-reply-to:user-agent;
 bh=oqIa57g/7ALZ6VgxEdWVcdrSAKmoBsBprxucTRBdkVI=;
 b=O/DjwKLr7GUGJIXnULbijdN7UaQ6fRHmE8FFgnCtBCnmuNoAxP4WXQ3fH3ctZXiwEJ
 ZNzBNtdePZ+tB5z/f/HG8zBbKSMh2M9y9OH5pEb+C8UpJPUG+iMRgAQCFn00lTK+xCAZ
 As9JHph8y3XVSfT14W3dXhb7eU6BO/wiShRvossZg+oGGvIb3CUXa6avz5I6HcEuMqFe
 thjMzp6yP6r24YmSKp14rS1afXiw5RyCDF9r1asiF4JiGPSBlTvOXAq/7T7nXB6525vk
 RGq9zc0oaOaAK1nVpGNazYCOjVInXrIPYvTdrlgtKsJGq8PIVaVRtD5E2fF/MfIy9GWl
 CdNQ==
X-Gm-Message-State: AOAM530y+/C2qlL/DuX0ElZxOlwv7fq8kn9zBDQgmKiT14NIguZLAGf3
 SNScVfRl/w8alR1/DgfP04g=
X-Google-Smtp-Source: ABdhPJxLuHLtnfUkRo5on7fWfYTlNpM4/A4xZWQdZrDNY5jOYh8Oxcv81bmS20DYGnGeAss7lhALqg==
X-Received: by 2002:a1c:a74c:: with SMTP id q73mr37135950wme.96.1593806329510; 
 Fri, 03 Jul 2020 12:58:49 -0700 (PDT)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id w12sm15771710wrm.79.2020.07.03.12.58.48
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 03 Jul 2020 12:58:48 -0700 (PDT)
Date: Fri, 3 Jul 2020 19:58:47 +0000
From: Wei Liu <wl@xen.org>
To: Tim Deegan <tim@xen.org>
Subject: Re: Build problems in kdd.c with xen-4.14.0-rc4
Message-ID: <20200703195847.nxamgjw6a2dayyoo@liuwe-devbox-debian-v2>
References: <alpine.LFD.2.22.394.2006302259370.2894@austen3.home>
 <20200703185457.GA71229@deinos.phlegethon.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20200703185457.GA71229@deinos.phlegethon.org>
User-Agent: NeoMutt/20180716
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, Wei Liu <wl@xen.org>,
 Michael Young <m.a.young@durham.ac.uk>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Fri, Jul 03, 2020 at 07:54:57PM +0100, Tim Deegan wrote:
> Hi Michael,
> 
> Thanks for the report!
> 
> At 23:21 +0100 on 30 Jun (1593559296), Michael Young wrote:
> > I get the following errors when trying to build xen-4.14.0-rc4
> > 
> > kdd.c: In function 'kdd_tx':
> > kdd.c:754:15: error: array subscript 16 is above array bounds of 'uint8_t[16]' {aka 'unsigned char[16]'} [-Werror=array-bounds]
> >    754 |         s->txb[len++] = 0xaa;
> >        |         ~~~~~~^~~~~~~
> > kdd.c:82:17: note: while referencing 'txb'
> >     82 |         uint8_t txb[sizeof (kdd_hdr)];           /* Marshalling area for tx */
> >        |                 ^~~
> > kdd.c: In function 'kdd_break':
> > kdd.c:819:19: error: array subscript 16 is above array bounds of 'uint8_t[16]' {aka 'unsigned char[16]'} [-Werror=array-bounds]
> 
> Oh dear.  The fix for the last kdd bug seems to have gone wrong
> somewhere.  The patch I posted has:
> 
> -        uint8_t txb[sizeof (kdd_hdr) + 65536];   /* Marshalling area for tx */
> +        uint8_t txb[sizeof (kdd_pkt)];           /* Marshalling area for tx */
> 
> but as applied in master it's:
> 
> -        uint8_t txb[sizeof (kdd_hdr) + 65536];   /* Marshalling area for tx */
> +        uint8_t txb[sizeof (kdd_hdr)];           /* Marshalling area for tx */
> 
> i.e. the marshalling area is only large enough for a header and GCC
> is correctly complaining about that.
> 
> Wei, it looks like you committed this patch - can you figure out what
> happened to it please?
> 

My bad. The mail I saved did not apply cleanly so I manually recreated
your patch.

Thanks for letting me know. I will send a patch to fix the issue.

Wei.

> Cheers,
> 
> Tim.


From xen-devel-bounces@lists.xenproject.org Fri Jul 03 20:10:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jul 2020 20:10:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jrS0f-0003Jx-1O; Fri, 03 Jul 2020 20:10:09 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=krI5=AO=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1jrS0d-0003Js-MU
 for xen-devel@lists.xenproject.org; Fri, 03 Jul 2020 20:10:07 +0000
X-Inumbo-ID: 2fd856b6-bd69-11ea-8a39-12813bfff9fa
Received: from mail-wr1-f65.google.com (unknown [209.85.221.65])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2fd856b6-bd69-11ea-8a39-12813bfff9fa;
 Fri, 03 Jul 2020 20:10:06 +0000 (UTC)
Received: by mail-wr1-f65.google.com with SMTP id k6so33940400wrn.3
 for <xen-devel@lists.xenproject.org>; Fri, 03 Jul 2020 13:10:06 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:to:cc:subject:date:message-id:mime-version
 :content-transfer-encoding;
 bh=klbXbAQrxeBsYlQA+DIpNswrBH4JAUwBVstVHGzKcO4=;
 b=GMn+Tba+t++zEhgaCEt/UeiCkmknpCNbEXUA75gMWj1UUl4O8o7TzL5pAVDi5crXr6
 eA+WIA3feq5sgWh8kIlGsXegQMyoG3PTN4OaQcuHoDyER1JDfOfKgZ0g+GLemIXGo1bb
 3NTjey0UoAcDkUXZbMhQb6wvDFUB3LL8Ri83+YE72hm2v/QMXHzyAG28ApOAT+gDMm8w
 BYdpCjjHL+57iAGnE3Llozm3qFKi7MnqbHO/F5aVs2OZmNZS7AJS/PtsKBb4QNA6jtPE
 HL17jMOwQYm/qpqzttQYwT3fRbc5gT09j0rERyorf2lN30/ErVts9LNB8wEIgfMbQgWz
 XndA==
X-Gm-Message-State: AOAM531mP5bBhxz9svjeqFdg4a3WfBiCUPbM/Menx7A9hTuY/HkwYEbK
 +Xi/Nztyjv+OO/b4Syy+9FHrdZfn
X-Google-Smtp-Source: ABdhPJx1sjbvu0J1HtcM2K6vJf5RrYh9I9M52pTqf9A0vMzCu/X0uLhG6SQBSWRCNPh09MLsQj917w==
X-Received: by 2002:a5d:6342:: with SMTP id b2mr37910381wrw.262.1593807005561; 
 Fri, 03 Jul 2020 13:10:05 -0700 (PDT)
Received: from
 liuwe-devbox-debian-v2.j3c5onc20sse1dnehy4noqpfcg.zx.internal.cloudapp.net
 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id c18sm14443214wmk.18.2020.07.03.13.10.04
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 03 Jul 2020 13:10:05 -0700 (PDT)
From: Wei Liu <wl@xen.org>
To: Xen Development List <xen-devel@lists.xenproject.org>
Subject: [PATCH for-4.14] kdd: fix build again
Date: Fri,  3 Jul 2020 20:10:01 +0000
Message-Id: <20200703201001.56606-1-wl@xen.org>
X-Mailer: git-send-email 2.20.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>, Paul Durrant <paul@xen.org>,
 Tim Deegan <tim@xen.org>, Wei Liu <wl@xen.org>,
 Michael Young <m.a.young@durham.ac.uk>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Restore Tim's patch. The one that was committed was recreated by me
because git didn't accept my saved copy. I made some mistakes while
recreating that patch and here we are.

Fixes: 3471cafbdda3 ("kdd: stop using [0] arrays to access packet contents")
Reported-by: Michael Young <m.a.young@durham.ac.uk>
Signed-off-by: Wei Liu <wl@xen.org>
---
Cc: Tim Deegan <tim@xen.org>
Cc: Paul Durrant <paul@xen.org>
Cc: Michael Young <m.a.young@durham.ac.uk>
---
 tools/debugger/kdd/kdd.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/tools/debugger/kdd/kdd.c b/tools/debugger/kdd/kdd.c
index 866532f0c770..a7d0976ea4a8 100644
--- a/tools/debugger/kdd/kdd.c
+++ b/tools/debugger/kdd/kdd.c
@@ -79,11 +79,11 @@ typedef struct {
 /* State of the debugger stub */
 typedef struct {
     union {
-        uint8_t txb[sizeof (kdd_hdr)];           /* Marshalling area for tx */
+        uint8_t txb[sizeof (kdd_pkt)];           /* Marshalling area for tx */
         kdd_pkt txp;                 /* Also readable as a packet structure */
     };
     union {
-        uint8_t rxb[sizeof (kdd_hdr)];           /* Marshalling area for rx */
+        uint8_t rxb[sizeof (kdd_pkt)];           /* Marshalling area for rx */
         kdd_pkt rxp;                 /* Also readable as a packet structure */
     };
     unsigned int cur;       /* Offset into rx where we'll put the next byte */
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri Jul 03 20:10:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jul 2020 20:10:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jrS1A-0003M1-AG; Fri, 03 Jul 2020 20:10:40 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=krI5=AO=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1jrS18-0003Lu-EL
 for xen-devel@lists.xenproject.org; Fri, 03 Jul 2020 20:10:38 +0000
X-Inumbo-ID: 429dbe9e-bd69-11ea-8a39-12813bfff9fa
Received: from mail-wm1-f66.google.com (unknown [209.85.128.66])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 429dbe9e-bd69-11ea-8a39-12813bfff9fa;
 Fri, 03 Jul 2020 20:10:38 +0000 (UTC)
Received: by mail-wm1-f66.google.com with SMTP id a6so22957892wmm.0
 for <xen-devel@lists.xenproject.org>; Fri, 03 Jul 2020 13:10:37 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:date:from:to:cc:subject:message-id:references
 :mime-version:content-disposition:in-reply-to:user-agent;
 bh=nBP9r56Uca8qm75PjBZ2ceuLrm/thJOGmaJslfZnOVM=;
 b=mDY/8oJ3XIXJveJAyBwFdMJk2oYCLsTOXOHw7eIv7xoKex/yailXqrW3loYqWkyu3y
 bZLtjANd6T+m4dRxpjwXg2wwhvaZoe8hEvNq/vuSUFXNGSYtwBOY1nNH01AtY/vPaFM3
 CDNY/pcRZx+yyIJ3CsTQNuDQ6P5T94Gxd4BeUXCDYpc2HXMRDdaCnIdPfa8oyDP/7SnC
 7GMoUTvlB58I/Fi4RJqpWOnpsDBcB7nm23ayCtJhQp2GQmPoi3im/8f//UnZ9wvVrdoG
 1ELVzAMb575AT/8kUwHLhKW8uluVGPw0Uuvg8vceTIi4LlW7rvMkfJvwLME0i9bDAU1z
 Ye/A==
X-Gm-Message-State: AOAM531fNWJ3MqcR3d7adPCy1waaJ5IMgi3TxC+NlO9iSR55ta2VATPZ
 T2bN5Bh8BCN3qC3PAds5+bY=
X-Google-Smtp-Source: ABdhPJyH8LwO/TczKagR144GYqjxJ/ZfuwU1Xlj4397SNnNXnHX/OHa3eA3OTqJoqekOUX6b6m9sww==
X-Received: by 2002:a7b:c4d6:: with SMTP id g22mr39576260wmk.170.1593807037244; 
 Fri, 03 Jul 2020 13:10:37 -0700 (PDT)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id d201sm14079313wmd.34.2020.07.03.13.10.36
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 03 Jul 2020 13:10:36 -0700 (PDT)
Date: Fri, 3 Jul 2020 20:10:35 +0000
From: Wei Liu <wl@xen.org>
To: Anthony PERARD <anthony.perard@citrix.com>
Subject: Re: [XEN PATCH for-4.14] Config: Update QEMU
Message-ID: <20200703201035.pv6nyhydxyzqsuit@liuwe-devbox-debian-v2>
References: <20200703135533.336625-1-anthony.perard@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20200703135533.336625-1-anthony.perard@citrix.com>
User-Agent: NeoMutt/20180716
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org,
 Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>, Wei Liu <wl@xen.org>,
 Paul Durrant <paul@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Fri, Jul 03, 2020 at 02:55:33PM +0100, Anthony PERARD wrote:
> Backport 2 commits to fix building QEMU without PCI passthrough
> support.
> 
> Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>

FWIW:

Acked-by: Wei Liu <wl@xen.org>


From xen-devel-bounces@lists.xenproject.org Fri Jul 03 20:27:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jul 2020 20:27:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jrSHK-0004N0-Om; Fri, 03 Jul 2020 20:27:22 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=V/qb=AO=xen.org=tim@srs-us1.protection.inumbo.net>)
 id 1jrSHK-0004Mv-89
 for xen-devel@lists.xenproject.org; Fri, 03 Jul 2020 20:27:22 +0000
X-Inumbo-ID: 98bc68b4-bd6b-11ea-bb8b-bc764e2007e4
Received: from deinos.phlegethon.org (unknown [2001:41d0:8:b1d7::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 98bc68b4-bd6b-11ea-bb8b-bc764e2007e4;
 Fri, 03 Jul 2020 20:27:21 +0000 (UTC)
Received: from tjd by deinos.phlegethon.org with local (Exim 4.92.3 (FreeBSD))
 (envelope-from <tim@xen.org>)
 id 1jrSHG-000Il9-G0; Fri, 03 Jul 2020 20:27:18 +0000
Date: Fri, 3 Jul 2020 21:27:18 +0100
From: Tim Deegan <tim@xen.org>
To: Wei Liu <wl@xen.org>
Subject: Re: [PATCH for-4.14] kdd: fix build again
Message-ID: <20200703202718.GA72092@deinos.phlegethon.org>
References: <20200703201001.56606-1-wl@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
In-Reply-To: <20200703201001.56606-1-wl@xen.org>
User-Agent: Mutt/1.11.1 (2018-12-01)
X-SA-Known-Good: Yes
X-SA-Exim-Connect-IP: <locally generated>
X-SA-Exim-Mail-From: tim@xen.org
X-SA-Exim-Scanned: No (on deinos.phlegethon.org);
 SAEximRunCond expanded to false
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen Development List <xen-devel@lists.xenproject.org>,
 Paul Durrant <paul@xen.org>, Ian Jackson <ian.jackson@eu.citrix.com>,
 Michael Young <m.a.young@durham.ac.uk>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

At 20:10 +0000 on 03 Jul (1593807001), Wei Liu wrote:
> Restore Tim's patch. The one that was committed was recreated by me
> because git didn't accept my saved copy. I made some mistakes while
> recreating that patch and here we are.
> 
> Fixes: 3471cafbdda3 ("kdd: stop using [0] arrays to access packet contents")
> Reported-by: Michael Young <m.a.young@durham.ac.uk>
> Signed-off-by: Wei Liu <wl@xen.org>

Reviewed-by: Tim Deegan <tim@xen.org>

Thanks!

Tim.


From xen-devel-bounces@lists.xenproject.org Fri Jul 03 20:30:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jul 2020 20:30:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jrSKT-00058v-7R; Fri, 03 Jul 2020 20:30:37 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=XYi7=AO=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jrSKS-00058p-At
 for xen-devel@lists.xenproject.org; Fri, 03 Jul 2020 20:30:36 +0000
X-Inumbo-ID: 0c1ee0ca-bd6c-11ea-bb8b-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0c1ee0ca-bd6c-11ea-bb8b-bc764e2007e4;
 Fri, 03 Jul 2020 20:30:34 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=ak3ZfJB+oHYO1jodlhDGZPX5o+07K9rppEQzbmFOdmQ=; b=mX21O5A96wO28A7wC4FsJ03Th
 b+qusseAVvM2NUu3+GYrM1cYcSClDZNopNySW/okW1LZ2MgQDFKMyTnLIgtfQ8ssj4BDLQzCBmv1K
 zAaR8EquPLhqVXfY7KKEI1tKz4NloVlFBgdwW6y42RCiqq0c4T28YI5HIetSJrHxwQLBc=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jrSKQ-0007kH-2z; Fri, 03 Jul 2020 20:30:34 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jrSKP-0002Sd-Ae; Fri, 03 Jul 2020 20:30:33 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jrSKP-00043x-9S; Fri, 03 Jul 2020 20:30:33 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151565-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 151565: regressions - FAIL
X-Osstest-Failures: linux-linus:test-arm64-arm64-libvirt-xsm:guest-start/debian.repeat:fail:regression
 linux-linus:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:regression
 linux-linus:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:allowable
 linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
X-Osstest-Versions-This: linux=7cc2a8ea104820dd9e702202621e8fd4d9f6c8cf
X-Osstest-Versions-That: linux=1b5044021070efa3259f3e9548dc35d1eb6aa844
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 03 Jul 2020 20:30:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151565 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151565/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-libvirt-xsm 16 guest-start/debian.repeat fail REGR. vs. 151214
 test-armhf-armhf-xl-vhd     15 guest-start/debian.repeat fail REGR. vs. 151214

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds    16 guest-start/debian.repeat fail REGR. vs. 151214

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 151214
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 151214
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 151214
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 151214
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 151214
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 151214
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 151214
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 151214
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass

version targeted for testing:
 linux                7cc2a8ea104820dd9e702202621e8fd4d9f6c8cf
baseline version:
 linux                1b5044021070efa3259f3e9548dc35d1eb6aa844

Last test of basis   151214  2020-06-18 02:27:46 Z   15 days
Failing since        151236  2020-06-19 19:10:35 Z   14 days   18 attempts
Testing same since   151565  2020-07-03 04:35:37 Z    0 days    1 attempts

------------------------------------------------------------
489 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 23140 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Jul 03 22:31:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jul 2020 22:31:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jrUD6-0006Ke-EQ; Fri, 03 Jul 2020 22:31:08 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=pqYw=AO=linutronix.de=tglx@srs-us1.protection.inumbo.net>)
 id 1jrUD4-0006KZ-Db
 for xen-devel@lists.xenproject.org; Fri, 03 Jul 2020 22:31:07 +0000
X-Inumbo-ID: e044b874-bd7c-11ea-bca7-bc764e2007e4
Received: from galois.linutronix.de (unknown [2a0a:51c0:0:12e:550::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e044b874-bd7c-11ea-bca7-bc764e2007e4;
 Fri, 03 Jul 2020 22:31:03 +0000 (UTC)
From: Thomas Gleixner <tglx@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
 s=2020; t=1593815461;
 h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
 in-reply-to:in-reply-to:references:references;
 bh=e6HM5KalB3cdRfA4Hps7INNFh9eeE6yz+RhORK/ferw=;
 b=UxWfa4HS6S7NQAyS/jMl4fjxbcCv1prvRSCKbsDllp1+dzc5z4T7inYWBkJ3FhkaGfHaTZ
 vAGLgP5gEjwFC1sgShElbTjfY3RehAdz/I6E1I+msYo1HdhV615kK2pihe8pCXeOFCgX9E
 WONDkzZJl/h8MJnEGa00LKFRZ09tDewa4sRXfvLgaYEAaDe0ATeTslGCTtHWH9CNJoVLkl
 62KdtDowVoLPx/rBXkQleegbyC+nJxPWzpcyu6WdZgfHKU6pXPFzpUBtldeZBQuvTSrj0p
 EV6miUZEF3v9PMDvUvKVJnZHJUh9rLNxNz7hdtwPUXED6ouA+tE72u89gRISxw==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
 s=2020e; t=1593815461;
 h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
 in-reply-to:in-reply-to:references:references;
 bh=e6HM5KalB3cdRfA4Hps7INNFh9eeE6yz+RhORK/ferw=;
 b=bvB//3vcFyKBqGU4jVbTlAhUCfJaMSN/jkN+tpi1wI9XzIYrvfyn0Bmgf8ATogOTaNPtl2
 DSyT7KFiHbveqTBg==
To: Andrew Cooper <andrew.cooper3@citrix.com>,
 Andy Lutomirski <luto@kernel.org>, xen-devel <xen-devel@lists.xenproject.org>,
 LKML <linux-kernel@vger.kernel.org>, Juergen Gross <jgross@suse.com>,
 Jan Beulich <jbeulich@suse.com>, Boris Ostrovsky <boris.ostrovsky@oracle.com>
Subject: Re: FSGSBASE seems to be busted on Xen PV
In-Reply-To: <e78d2ee5-66cf-2ed8-c04f-71dd92efdfe1@citrix.com>
References: <CALCETrVfi1Rnt5nnrHNivdxE7MqRPiLXvon4-engqo=LCKiojA@mail.gmail.com>
 <e78d2ee5-66cf-2ed8-c04f-71dd92efdfe1@citrix.com>
Date: Sat, 04 Jul 2020 00:31:00 +0200
Message-ID: <87eepss02j.fsf@nanos.tec.linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: X86 ML <x86@kernel.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Andrew Cooper <andrew.cooper3@citrix.com> writes:
> On 03/07/2020 18:10, Andy Lutomirski wrote:
>> If you can't get this fixed in upstream Xen reasonably quickly, we may
>> need to disable FSGSBASE in a Xen PV guest in Linux.
>
> This has come up several times before, but if its actually breaking
> userspace then Xen needs to change.
>
> I'll see about making something which is rather more robust.

You mean disabling XEN PV completely? That would be indeed very robust
and allows us to get rid of lots of obscure code. Feel free to add my
Acked-by :)

Thanks,

        tglx


From xen-devel-bounces@lists.xenproject.org Fri Jul 03 23:05:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 03 Jul 2020 23:05:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jrUk2-0000QJ-6W; Fri, 03 Jul 2020 23:05:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=XYi7=AO=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jrUk1-0000QD-4q
 for xen-devel@lists.xenproject.org; Fri, 03 Jul 2020 23:05:09 +0000
X-Inumbo-ID: a261996e-bd81-11ea-8496-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a261996e-bd81-11ea-8496-bc764e2007e4;
 Fri, 03 Jul 2020 23:05:06 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=laX69xYbzig0khpPKEBDKg/3L6w7eBgUxDq6HYxCy0g=; b=r8/vYPhy0JViNSALRFbo8UqkF
 B83pzrYhM5NHZLH8bUZ3Wkgg3aTsBExbTjQ4998z3IyxCQCrLHb5nbrIQkO4dhjp/geku86kSyPcq
 oTrtYZMcl0AtFR7Pliq//rLXKJVOJ6lCuRlkTi2XFWp7hvAHLfnV9v9+0RwiQ+TilNMps=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jrUjx-00027N-M3; Fri, 03 Jul 2020 23:05:05 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jrUjx-0001WG-Bo; Fri, 03 Jul 2020 23:05:05 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jrUjx-0005zh-Ai; Fri, 03 Jul 2020 23:05:05 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151577-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 151577: regressions - FAIL
X-Osstest-Failures: qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-amd64-libvirt-vhd:debian-di-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-libvirt-xsm:guest-start:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qcow2:debian-di-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:redhat-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-libvirt-pair:guest-start/debian:fail:regression
 qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-freebsd10-i386:guest-start:fail:regression
 qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:redhat-install:fail:regression
 qemu-mainline:test-amd64-amd64-libvirt:guest-start:fail:regression
 qemu-mainline:test-amd64-amd64-libvirt-xsm:guest-start:fail:regression
 qemu-mainline:test-amd64-i386-libvirt:guest-start:fail:regression
 qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-start:fail:regression
 qemu-mainline:test-amd64-amd64-libvirt-pair:guest-start/debian:fail:regression
 qemu-mainline:test-armhf-armhf-xl-vhd:debian-di-install:fail:regression
 qemu-mainline:test-arm64-arm64-libvirt-xsm:guest-start:fail:regression
 qemu-mainline:test-armhf-armhf-libvirt-raw:debian-di-install:fail:regression
 qemu-mainline:test-armhf-armhf-libvirt:guest-start:fail:regression
 qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: qemuu=64f0ad8ad8e13257e7c912df470d46784b55c3fd
X-Osstest-Versions-That: qemuu=9e3903136d9acde2fb2dd9e967ba928050a6cb4a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 03 Jul 2020 23:05:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151577 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151577/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-ovmf-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-win7-amd64 10 windows-install  fail REGR. vs. 151065
 test-amd64-amd64-libvirt-vhd 10 debian-di-install        fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-win7-amd64 10 windows-install   fail REGR. vs. 151065
 test-amd64-amd64-qemuu-nested-intel 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-libvirt-xsm  12 guest-start              fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-ws16-amd64 10 windows-install  fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-ovmf-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-qemuu-nested-amd 10 debian-hvm-install  fail REGR. vs. 151065
 test-amd64-amd64-xl-qcow2    10 debian-di-install        fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-qemuu-rhel6hvm-amd 10 redhat-install     fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-ws16-amd64 10 windows-install   fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-debianhvm-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-libvirt-pair 21 guest-start/debian       fail REGR. vs. 151065
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-freebsd10-i386 11 guest-start            fail REGR. vs. 151065
 test-amd64-i386-qemuu-rhel6hvm-intel 10 redhat-install   fail REGR. vs. 151065
 test-amd64-amd64-libvirt     12 guest-start              fail REGR. vs. 151065
 test-amd64-amd64-libvirt-xsm 12 guest-start              fail REGR. vs. 151065
 test-amd64-i386-libvirt      12 guest-start              fail REGR. vs. 151065
 test-amd64-i386-freebsd10-amd64 11 guest-start           fail REGR. vs. 151065
 test-amd64-amd64-libvirt-pair 21 guest-start/debian      fail REGR. vs. 151065
 test-armhf-armhf-xl-vhd      10 debian-di-install        fail REGR. vs. 151065
 test-arm64-arm64-libvirt-xsm 12 guest-start              fail REGR. vs. 151065
 test-armhf-armhf-libvirt-raw 10 debian-di-install        fail REGR. vs. 151065
 test-armhf-armhf-libvirt     12 guest-start              fail REGR. vs. 151065

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                64f0ad8ad8e13257e7c912df470d46784b55c3fd
baseline version:
 qemuu                9e3903136d9acde2fb2dd9e967ba928050a6cb4a

Last test of basis   151065  2020-06-12 22:27:51 Z   21 days
Failing since        151101  2020-06-14 08:32:51 Z   19 days   22 attempts
Testing same since   151547  2020-07-02 19:02:45 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ahmed Karaman <ahmedkhaledkaraman@gmail.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Bulekov <alxndr@bu.edu>
  Alexey Krasikov <alex-krasikov@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Allan Peramaki <aperamak@pp1.inet.fi>
  Andrew Jones <drjones@redhat.com>
  Ani Sinha <ani.sinha@nutanix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Ard Biesheuvel <ardb@kernel.org>
  Artyom Tarasenko <atar4qemu@gmail.com>
  Aurelien Jarno <aurelien@aurel32.net>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bin Meng <bin.meng@windriver.com>
  Cathy Zhang <cathy.zhang@intel.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Claudio Fontana <cfontana@suse.de>
  Cleber Rosa <crosa@redhat.com>
  Colin Xu <colin.xu@intel.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniele Buono <dbuono@linux.vnet.ibm.com>
  David Carlier <devnexen@gmail.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Derek Su <dereksu@qnap.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Emilio G. Cota <cota@braap.org>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Evgeny Yakovlev <eyakovlev@virtuozzo.com>
  fangying <fangying1@huawei.com>
  Farhan Ali <alifm@linux.ibm.com>
  Finn Thain <fthain@telegraphics.com.au>
  Geoffrey McRae <geoff@hostfission.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Greg Kurz <groug@kaod.org>
  Guenter Roeck <linux@roeck-us.net>
  Gustavo Romero <gromero@linux.ibm.com>
  Helge Deller <deller@gmx.de>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Ian Jiang <ianjiang.ict@gmail.com>
  Igor Mammedov <imammedo@redhat.com>
  Janne Grunau <j@jannau.net>
  Jason Wang <jasowang@redhat.com>
  Jay Zhou <jianjay.zhou@huawei.com>
  Jean-Christophe Dubois <jcd@tribudubois.net>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jingqi Liu <jingqi.liu@intel.com>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Joseph Myers <joseph@codesourcery.com>
  Julio Faracco <jcfaracco@gmail.com>
  Kevin Wolf <kwolf@redhat.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Leonid Bloch <lb.workbox@gmail.com>
  Leonid Bloch <lbloch@janustech.com>
  Li Qiang <liq3ea@gmail.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  lichun <lichun@ruijie.com.cn>
  Like Xu <like.xu@linux.intel.com>
  Lingfeng Yang <lfy@google.com>
  Liran Alon <liran.alon@oracle.com>
  Lukas Straub <lukasstraub2@web.de>
  Maciej S. Szmigiero <maciej.szmigiero@oracle.com>
  Magnus Damm <magnus.damm@gmail.com>
  Mao Zhongyi <maozhongyi@cmss.chinamobile.com>
  Marcelo Tosatti <mtosatti@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Masahiro Yamada <masahiroy@kernel.org>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Michael S. Tsirkin <mst@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Raphael Norwitz <raphael.norwitz@nutanix.com>
  Richard Henderson <richard.henderson@linaro.org>
  Richard W.M. Jones <rjones@redhat.com>
  Robert Foley <robert.foley@linaro.org>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Roman Kagan <rkagan@virtuozzo.com>
  Roman Kagan <rvkagan@yandex-team.ru>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Sebastian Rasmussen <sebras@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Tao Xu <tao3.xu@intel.com>
  Thomas Huth <thuth@redhat.com>
  Tong Ho <tong.ho@xilinx.com>
  Volker Rümelin <vr_qemu@t-online.de>
  WangBowen <bowen.wang@intel.com>
  Wei Huang <wei.huang2@amd.com>
  Wei Wang <wei.w.wang@intel.com>
  Ying Fang <fangying1@huawei.com>
  Yoshinori Sato <ysato@users.sourceforge.jp>
  Yuri Benditovich <yuri.benditovich@daynix.com>
  Zhang Chen <chen.zhang@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 15325 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Jul 04 02:35:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 04 Jul 2020 02:35:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jrY0y-0002HY-VD; Sat, 04 Jul 2020 02:34:52 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CV4e=AP=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jrY0x-0002HB-Dv
 for xen-devel@lists.xenproject.org; Sat, 04 Jul 2020 02:34:51 +0000
X-Inumbo-ID: ec0e8f28-bd9e-11ea-8aa5-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ec0e8f28-bd9e-11ea-8aa5-12813bfff9fa;
 Sat, 04 Jul 2020 02:34:45 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=ttQRt2Pt0Ommx/+chINtvp6GF1RXe2q4hZ8VP+872jM=; b=DOvA4ea/xZ3K/eW4WB6wTN9MY
 NdjMV5DMHNX55g9VHb5Ay3l8IBgiJIVaUcbGtWJ2nZywa4/m4TX4abzA5fMQd8+bEqLIFS8/0OY5v
 wgFPzF82MW2K+1mO4bzpgRKp719wlSekHq4Qq/pDoiBp3oEoot6ZZPrRClYryz03E84bw=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jrY0q-0008HD-A6; Sat, 04 Jul 2020 02:34:44 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jrY0p-0004EX-UJ; Sat, 04 Jul 2020 02:34:44 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jrY0p-0000kk-TW; Sat, 04 Jul 2020 02:34:43 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151590-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 151590: all pass - PUSHED
X-Osstest-Versions-This: ovmf=627d1d6693b0594d257dbe1a3363a8d4bd4d8307
X-Osstest-Versions-That: ovmf=f43a14e3cff3fa45c30ff152c4172204a4458341
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 04 Jul 2020 02:34:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151590 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151590/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 627d1d6693b0594d257dbe1a3363a8d4bd4d8307
baseline version:
 ovmf                 f43a14e3cff3fa45c30ff152c4172204a4458341

Last test of basis   151570  2020-07-03 05:15:25 Z    0 days
Testing same since   151590  2020-07-03 17:14:40 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Leif Lindholm <leif@nuviainc.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   f43a14e3cf..627d1d6693  627d1d6693b0594d257dbe1a3363a8d4bd4d8307 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Sat Jul 04 03:09:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 04 Jul 2020 03:09:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jrYYD-0004qD-UR; Sat, 04 Jul 2020 03:09:13 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CV4e=AP=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jrYYC-0004ph-9U
 for xen-devel@lists.xenproject.org; Sat, 04 Jul 2020 03:09:12 +0000
X-Inumbo-ID: bbbc2c18-bda3-11ea-8aaa-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id bbbc2c18-bda3-11ea-8aaa-12813bfff9fa;
 Sat, 04 Jul 2020 03:09:11 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=ZP0JEfHbogZQdHpBFPqovNUO7Ku6TmEJxlEXcvn28O8=; b=FoYo5EU0o9osrtw4kXFJ7BzoA
 /1WPUgZuuNiLmsUT6Qv9IjBkYYCCJwz4aO75nG5T+8cFXPNLzmtcKzbOiRD8OK4V2WmtF3055rqZH
 1K0BQAc/XmTDyLCbiiGJhoBit8+yRQr2DZrCFW6K6oGZyg1jusVtqyQtIgs7Hy78Nh7+M=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jrYYA-0000Uc-Lj; Sat, 04 Jul 2020 03:09:10 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jrYYA-0006I7-FA; Sat, 04 Jul 2020 03:09:10 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jrYYA-0003K3-EN; Sat, 04 Jul 2020 03:09:10 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151586-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 151586: tolerable FAIL - PUSHED
X-Osstest-Failures: xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=be63d9d47f571a60d70f8fb630c03871312d9655
X-Osstest-Versions-That: xen=23ca7ec0ba620db52a646d80e22f9703a6589f66
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 04 Jul 2020 03:09:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151586 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151586/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 151506
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 151506
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 151506
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 151506
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 151506
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 151506
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 151506
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 151506
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop             fail like 151506
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  be63d9d47f571a60d70f8fb630c03871312d9655
baseline version:
 xen                  23ca7ec0ba620db52a646d80e22f9703a6589f66

Last test of basis   151506  2020-07-01 10:55:16 Z    2 days
Failing since        151528  2020-07-02 04:45:56 Z    1 days    3 attempts
Testing same since   151554  2020-07-02 21:40:14 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Bertrand Marquis <bertrand.marquis@arm.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Stefano Stabellini <stefano.stabellini@xilinx.com>
  Volodymyr Babchuk <volodymyr_babchuk@epam.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   23ca7ec0ba..be63d9d47f  be63d9d47f571a60d70f8fb630c03871312d9655 -> master


From xen-devel-bounces@lists.xenproject.org Sat Jul 04 06:57:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 04 Jul 2020 06:57:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jrc6j-0006uw-Sd; Sat, 04 Jul 2020 06:57:05 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ydgZ=AP=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jrc6i-0006ur-Hc
 for xen-devel@lists.xenproject.org; Sat, 04 Jul 2020 06:57:04 +0000
X-Inumbo-ID: 90afbfec-bdc3-11ea-b7bb-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 90afbfec-bdc3-11ea-b7bb-bc764e2007e4;
 Sat, 04 Jul 2020 06:57:03 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 0F0BAAB89;
 Sat,  4 Jul 2020 06:57:03 +0000 (UTC)
From: Juergen Gross <jgross@suse.com>
To: torvalds@linux-foundation.org
Subject: [GIT PULL] xen: branch for v5.8-rc4
Date: Sat,  4 Jul 2020 08:57:02 +0200
Message-Id: <20200704065702.3073-1-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com,
 linux-kernel@vger.kernel.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Linus,

Please git pull the following tag:

 git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip.git for-linus-5.8b-rc4-tag

xen: branch for v5.8-rc4

It contains one small cleanup patch for ARM and two patches for the
xenbus driver fixing latent problems (large stack allocations and bad
return code settings).

Thanks.

Juergen

 arch/arm/xen/enlighten.c           |   1 -
 drivers/xen/xenbus/xenbus_client.c | 167 ++++++++++++++++++-------------------
 2 files changed, 81 insertions(+), 87 deletions(-)

Juergen Gross (2):
      xen/xenbus: avoid large structs and arrays on the stack
      xen/xenbus: let xenbus_map_ring_valloc() return errno values only

Xiaofei Tan (1):
      arm/xen: remove the unused macro GRANT_TABLE_PHYSADDR


From xen-devel-bounces@lists.xenproject.org Sat Jul 04 07:05:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 04 Jul 2020 07:05:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jrcEV-0007op-NM; Sat, 04 Jul 2020 07:05:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=cJvU=AP=kernel.org=pr-tracker-bot@srs-us1.protection.inumbo.net>)
 id 1jrcEU-0007ok-UK
 for xen-devel@lists.xenproject.org; Sat, 04 Jul 2020 07:05:06 +0000
X-Inumbo-ID: b08e5eda-bdc4-11ea-bb8b-bc764e2007e4
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b08e5eda-bdc4-11ea-bb8b-bc764e2007e4;
 Sat, 04 Jul 2020 07:05:06 +0000 (UTC)
Subject: Re: [GIT PULL] xen: branch for v5.8-rc4
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
 s=default; t=1593846305;
 bh=mdISOtGKokzv0rP5dd5e/Ev8OPzF6jG9Dh8+t5MOrZs=;
 h=From:In-Reply-To:References:Date:To:Cc:From;
 b=P9TGSqzYJy8tqf7rK2dwN49odgya/v9o89YRQvdExfpDDJKRckKPnXFU53XudClne
 vP86XgbsHqndOBHwmltAMINm85YrQZi4aXjoP6SuY8OA8tENCpIfUOkIny1k+Ouir+
 Hfl0GYTA3qvw0OjG9SfqGUXAJN2W53ow63m3k53A=
From: pr-tracker-bot@kernel.org
In-Reply-To: <20200704065702.3073-1-jgross@suse.com>
References: <20200704065702.3073-1-jgross@suse.com>
X-PR-Tracked-List-Id: <linux-kernel.vger.kernel.org>
X-PR-Tracked-Message-Id: <20200704065702.3073-1-jgross@suse.com>
X-PR-Tracked-Remote: git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip.git
 for-linus-5.8b-rc4-tag
X-PR-Tracked-Commit-Id: 578c1bb9056263ad3c9e09746b3d6e4daf63bdb0
X-PR-Merge-Tree: torvalds/linux.git
X-PR-Merge-Refname: refs/heads/master
X-PR-Merge-Commit-Id: 35e884f89df4c48566d745dc5a97a0d058d04263
Message-Id: <159384630585.17224.3932318630515842404.pr-tracker-bot@kernel.org>
Date: Sat, 04 Jul 2020 07:05:05 +0000
To: Juergen Gross <jgross@suse.com>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com,
 torvalds@linux-foundation.org, linux-kernel@vger.kernel.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

The pull request you sent on Sat,  4 Jul 2020 08:57:02 +0200:

> git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip.git for-linus-5.8b-rc4-tag

has been merged into torvalds/linux.git:
https://git.kernel.org/torvalds/c/35e884f89df4c48566d745dc5a97a0d058d04263

Thank you!

-- 
Deet-doot-dot, I am a bot.
https://korg.wiki.kernel.org/userdoc/prtracker


From xen-devel-bounces@lists.xenproject.org Sat Jul 04 09:59:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 04 Jul 2020 09:59:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jrexI-0005CH-RQ; Sat, 04 Jul 2020 09:59:32 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CV4e=AP=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jrexH-0005CC-Bi
 for xen-devel@lists.xenproject.org; Sat, 04 Jul 2020 09:59:31 +0000
X-Inumbo-ID: 0d404aae-bddd-11ea-8ae0-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0d404aae-bddd-11ea-8ae0-12813bfff9fa;
 Sat, 04 Jul 2020 09:59:29 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=78X5S2cOIjzDTjEspiXe7Grl+IdiwH9704QkMa71YVA=; b=5iBdlgeXmv57EdcE06WadTNya
 lmGi5l2ejoPVJS5c7ts8sxGlJVXzUY+ffPHjdsAPdkViy7CZmshU5LvwV1BPIpDwZ05bmmf2VH8/U
 fJwvixSrX7o0khjUBNq/SquKFV3U+p1zD84ibA5SxHpKomoVvauOjPr4MSJTYPkGBp+Vo=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jrexE-0000bg-Nh; Sat, 04 Jul 2020 09:59:28 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jrexE-0005Sr-BM; Sat, 04 Jul 2020 09:59:28 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jrexE-00060v-82; Sat, 04 Jul 2020 09:59:28 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151594-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 151594: regressions - FAIL
X-Osstest-Failures: linux-linus:test-arm64-arm64-libvirt-xsm:guest-start/debian.repeat:fail:regression
 linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:regression
 linux-linus:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:allowable
 linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
X-Osstest-Versions-This: linux=cdd3bb54332f82295ed90cd0c09c78cd0c0ee822
X-Osstest-Versions-That: linux=1b5044021070efa3259f3e9548dc35d1eb6aa844
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 04 Jul 2020 09:59:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151594 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151594/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-libvirt-xsm 16 guest-start/debian.repeat fail REGR. vs. 151214
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 151214

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds    16 guest-start/debian.repeat fail REGR. vs. 151214

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 151214
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 151214
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 151214
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 151214
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 151214
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 151214
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 151214
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 151214
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass

version targeted for testing:
 linux                cdd3bb54332f82295ed90cd0c09c78cd0c0ee822
baseline version:
 linux                1b5044021070efa3259f3e9548dc35d1eb6aa844

Last test of basis   151214  2020-06-18 02:27:46 Z   16 days
Failing since        151236  2020-06-19 19:10:35 Z   14 days   19 attempts
Testing same since   151594  2020-07-03 20:40:58 Z    0 days    1 attempts

------------------------------------------------------------
509 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 24096 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Jul 04 12:48:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 04 Jul 2020 12:48:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jrhat-0001yu-3p; Sat, 04 Jul 2020 12:48:35 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CV4e=AP=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jrhar-0001yp-Kc
 for xen-devel@lists.xenproject.org; Sat, 04 Jul 2020 12:48:33 +0000
X-Inumbo-ID: a9dd97ba-bdf4-11ea-b7bb-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a9dd97ba-bdf4-11ea-b7bb-bc764e2007e4;
 Sat, 04 Jul 2020 12:48:31 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=Ic1ni+Iqqtpn7hyShJArHrBwgw8oqFPaz0CJukcxyIs=; b=FUM+zaHJTtxo9+g5i6GCdRubf
 l0oWMbkkAo6vi5N0GE2XpK1Dp4jud/3Zg52woCNhFFBphHA+Gh/WiW1IxjeOflEzSAhqjkrbUTadW
 LhM8zoKm9uT6Q9wE8yY5Vsu8GLlqQjrWjPLXgZ0xPPupDYzpSep+y6o6CSUsQVJNnr9D4=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jrhao-0003lo-1R; Sat, 04 Jul 2020 12:48:30 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jrhan-0005Wx-K4; Sat, 04 Jul 2020 12:48:29 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jrhan-0007YQ-JM; Sat, 04 Jul 2020 12:48:29 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151598-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 151598: regressions - FAIL
X-Osstest-Failures: qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-amd64-libvirt-vhd:debian-di-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-libvirt-xsm:guest-start:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qcow2:debian-di-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:redhat-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-i386-libvirt-pair:guest-start/debian:fail:regression
 qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-freebsd10-i386:guest-start:fail:regression
 qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:redhat-install:fail:regression
 qemu-mainline:test-amd64-amd64-libvirt-xsm:guest-start:fail:regression
 qemu-mainline:test-amd64-amd64-libvirt:guest-start:fail:regression
 qemu-mainline:test-amd64-i386-libvirt:guest-start:fail:regression
 qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-start:fail:regression
 qemu-mainline:test-amd64-amd64-libvirt-pair:guest-start/debian:fail:regression
 qemu-mainline:test-armhf-armhf-xl-vhd:debian-di-install:fail:regression
 qemu-mainline:test-arm64-arm64-libvirt-xsm:guest-start:fail:regression
 qemu-mainline:test-armhf-armhf-libvirt-raw:debian-di-install:fail:regression
 qemu-mainline:test-armhf-armhf-libvirt:guest-start:fail:regression
 qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: qemuu=5f42c3375d45108cf14f50ac8ba57c2865e75e9c
X-Osstest-Versions-That: qemuu=9e3903136d9acde2fb2dd9e967ba928050a6cb4a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 04 Jul 2020 12:48:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151598 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151598/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-ovmf-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-win7-amd64 10 windows-install  fail REGR. vs. 151065
 test-amd64-amd64-libvirt-vhd 10 debian-di-install        fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-win7-amd64 10 windows-install   fail REGR. vs. 151065
 test-amd64-amd64-qemuu-nested-intel 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-libvirt-xsm  12 guest-start              fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-ws16-amd64 10 windows-install  fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-ovmf-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-qemuu-nested-amd 10 debian-hvm-install  fail REGR. vs. 151065
 test-amd64-amd64-xl-qcow2    10 debian-di-install        fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-qemuu-rhel6hvm-amd 10 redhat-install     fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-debianhvm-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-ws16-amd64 10 windows-install   fail REGR. vs. 151065
 test-amd64-i386-libvirt-pair 21 guest-start/debian       fail REGR. vs. 151065
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-freebsd10-i386 11 guest-start            fail REGR. vs. 151065
 test-amd64-i386-qemuu-rhel6hvm-intel 10 redhat-install   fail REGR. vs. 151065
 test-amd64-amd64-libvirt-xsm 12 guest-start              fail REGR. vs. 151065
 test-amd64-amd64-libvirt     12 guest-start              fail REGR. vs. 151065
 test-amd64-i386-libvirt      12 guest-start              fail REGR. vs. 151065
 test-amd64-i386-freebsd10-amd64 11 guest-start           fail REGR. vs. 151065
 test-amd64-amd64-libvirt-pair 21 guest-start/debian      fail REGR. vs. 151065
 test-armhf-armhf-xl-vhd      10 debian-di-install        fail REGR. vs. 151065
 test-arm64-arm64-libvirt-xsm 12 guest-start              fail REGR. vs. 151065
 test-armhf-armhf-libvirt-raw 10 debian-di-install        fail REGR. vs. 151065
 test-armhf-armhf-libvirt     12 guest-start              fail REGR. vs. 151065

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                5f42c3375d45108cf14f50ac8ba57c2865e75e9c
baseline version:
 qemuu                9e3903136d9acde2fb2dd9e967ba928050a6cb4a

Last test of basis   151065  2020-06-12 22:27:51 Z   21 days
Failing since        151101  2020-06-14 08:32:51 Z   20 days   23 attempts
Testing same since   151598  2020-07-03 23:07:44 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ahmed Karaman <ahmedkhaledkaraman@gmail.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Bulekov <alxndr@bu.edu>
  Alexey Krasikov <alex-krasikov@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Allan Peramaki <aperamak@pp1.inet.fi>
  Andrew Jones <drjones@redhat.com>
  Ani Sinha <ani.sinha@nutanix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Ard Biesheuvel <ardb@kernel.org>
  Artyom Tarasenko <atar4qemu@gmail.com>
  Aurelien Jarno <aurelien@aurel32.net>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bin Meng <bin.meng@windriver.com>
  Cathy Zhang <cathy.zhang@intel.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Claudio Fontana <cfontana@suse.de>
  Cleber Rosa <crosa@redhat.com>
  Colin Xu <colin.xu@intel.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniele Buono <dbuono@linux.vnet.ibm.com>
  David Carlier <devnexen@gmail.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Derek Su <dereksu@qnap.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Emilio G. Cota <cota@braap.org>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Evgeny Yakovlev <eyakovlev@virtuozzo.com>
  fangying <fangying1@huawei.com>
  Farhan Ali <alifm@linux.ibm.com>
  Finn Thain <fthain@telegraphics.com.au>
  Geoffrey McRae <geoff@hostfission.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Greg Kurz <groug@kaod.org>
  Guenter Roeck <linux@roeck-us.net>
  Gustavo Romero <gromero@linux.ibm.com>
  Helge Deller <deller@gmx.de>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Ian Jiang <ianjiang.ict@gmail.com>
  Igor Mammedov <imammedo@redhat.com>
  Janne Grunau <j@jannau.net>
  Jason Wang <jasowang@redhat.com>
  Jay Zhou <jianjay.zhou@huawei.com>
  Jean-Christophe Dubois <jcd@tribudubois.net>
  Jessica Clarke <jrtc27@jrtc27.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jingqi Liu <jingqi.liu@intel.com>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Joseph Myers <joseph@codesourcery.com>
  Julio Faracco <jcfaracco@gmail.com>
  Kevin Wolf <kwolf@redhat.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Leonid Bloch <lb.workbox@gmail.com>
  Leonid Bloch <lbloch@janustech.com>
  Li Qiang <liq3ea@gmail.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  lichun <lichun@ruijie.com.cn>
  Like Xu <like.xu@linux.intel.com>
  Lingfeng Yang <lfy@google.com>
  Liran Alon <liran.alon@oracle.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Lukas Straub <lukasstraub2@web.de>
  Maciej S. Szmigiero <maciej.szmigiero@oracle.com>
  Magnus Damm <magnus.damm@gmail.com>
  Mao Zhongyi <maozhongyi@cmss.chinamobile.com>
  Marcelo Tosatti <mtosatti@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Masahiro Yamada <masahiroy@kernel.org>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Michael S. Tsirkin <mst@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Raphael Norwitz <raphael.norwitz@nutanix.com>
  Richard Henderson <richard.henderson@linaro.org>
  Richard W.M. Jones <rjones@redhat.com>
  Robert Foley <robert.foley@linaro.org>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Roman Kagan <rkagan@virtuozzo.com>
  Roman Kagan <rvkagan@yandex-team.ru>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Sebastian Rasmussen <sebras@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Tao Xu <tao3.xu@intel.com>
  Thomas Huth <thuth@redhat.com>
  Tong Ho <tong.ho@xilinx.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  WangBowen <bowen.wang@intel.com>
  Wei Huang <wei.huang2@amd.com>
  Wei Wang <wei.w.wang@intel.com>
  Ying Fang <fangying1@huawei.com>
  Yoshinori Sato <ysato@users.sourceforge.jp>
  Yuri Benditovich <yuri.benditovich@daynix.com>
  Zhang Chen <chen.zhang@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 16385 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Jul 04 14:50:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 04 Jul 2020 14:50:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jrjUJ-0003F0-Us; Sat, 04 Jul 2020 14:49:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hHao=AP=gmail.com=philippe.mathieu.daude@srs-us1.protection.inumbo.net>)
 id 1jrjUI-0003ES-OV
 for xen-devel@lists.xenproject.org; Sat, 04 Jul 2020 14:49:54 +0000
X-Inumbo-ID: 9cb3dfd4-be05-11ea-8496-bc764e2007e4
Received: from mail-wr1-x444.google.com (unknown [2a00:1450:4864:20::444])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9cb3dfd4-be05-11ea-8496-bc764e2007e4;
 Sat, 04 Jul 2020 14:49:50 +0000 (UTC)
Received: by mail-wr1-x444.google.com with SMTP id f18so27701613wrs.0
 for <xen-devel@lists.xenproject.org>; Sat, 04 Jul 2020 07:49:50 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=sender:from:to:cc:subject:date:message-id:in-reply-to:references
 :mime-version:content-transfer-encoding;
 bh=1LONYHTd7jeNrmu8RfEJNRKaqa+4/OTfMiA08MmDC4w=;
 b=LCSMbVvH8N0ZbcZUYZvlwl0/LtuSzor2alX6qtutU+9ODMj8+C2lynOKjaqj8fXnEI
 Up7WQjLdb8bh14eYdbzmWq0pniHymCkqwLxs7uZhzz+SQeCK8ybk5MN0Zyhj8Eee+SQj
 wRrntXxiS8ThEJ3VSHHvZJ08OGnA1vj0opel3/dZthPvXbGAIhhQVqi63UTc46zKMF8g
 WrVDYhnL59CSHsALtHCZR8uG2+fDNJPpG55w+1h1DIiM4zmD5wiPD6EKOmnxiehzG+GO
 tiIAW62Y9YFjeqMNBorgEysK6Wv7zhm124e0UVH2qXSktIEDr+5rtVe5ZIRMCgW9yetf
 3gAA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:sender:from:to:cc:subject:date:message-id
 :in-reply-to:references:mime-version:content-transfer-encoding;
 bh=1LONYHTd7jeNrmu8RfEJNRKaqa+4/OTfMiA08MmDC4w=;
 b=rvpxhDr4dbszXwU+bgnQmd5GzmoFtIfpYWbDOK8hKp/T9nX0bqBZIovWjqN6Vu1uwT
 8hAoaFi3tDWktCu5kzDTUt/aDT+E+bvdQalU9BSby6de/IARYlSaBleLx3C9OICkW+ju
 R4GrEiIqoZA3lCxAEEYgtQSOWSKqUkN6SajSVgRrwga+tluhXsQaBhYbRicJurxjpDwN
 R4cyotqNZEhc5WPY+z9hTct3owW6AXLZwNjRWmBmzT+lcXQQj7rsepuFn9O3lfX9BoLN
 V1+m5XzXd0OAFifHzFnhXd0mS+OBz7Lvh2GlOHBCAvcJtjEXir2C4MqhOzESVat0wYfv
 E1tw==
X-Gm-Message-State: AOAM53120zcCvWEDdWObFzL/2hklERl8Mzf14khsrYnYOnHnz1OKr3AR
 +SXNaHGUPr7FX+4jU/pteVc=
X-Google-Smtp-Source: ABdhPJyhL9pSuntp8VeiIjIT6N9OP48blUH4lL3NMOGJ+4E4luyAkGU/9nzb7bgsn2yWXLxxbe616w==
X-Received: by 2002:adf:f54b:: with SMTP id j11mr41454017wrp.206.1593874189885; 
 Sat, 04 Jul 2020 07:49:49 -0700 (PDT)
Received: from localhost.localdomain (1.red-83-51-162.dynamicip.rima-tde.net.
 [83.51.162.1])
 by smtp.gmail.com with ESMTPSA id r10sm17135019wrm.17.2020.07.04.07.49.48
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sat, 04 Jul 2020 07:49:49 -0700 (PDT)
From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>
To: qemu-devel@nongnu.org,
	BALATON Zoltan <balaton@eik.bme.hu>
Subject: [PATCH 01/26] hw/arm/sbsa-ref: Remove unused 'hw/usb.h' header
Date: Sat,  4 Jul 2020 16:49:18 +0200
Message-Id: <20200704144943.18292-2-f4bug@amsat.org>
X-Mailer: git-send-email 2.21.3
In-Reply-To: <20200704144943.18292-1-f4bug@amsat.org>
References: <20200704144943.18292-1-f4bug@amsat.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Peter Maydell <peter.maydell@linaro.org>,
 "Michael S. Tsirkin" <mst@redhat.com>,
 Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>,
 Jiaxun Yang <jiaxun.yang@flygoat.com>, Gerd Hoffmann <kraxel@redhat.com>,
 "Edgar E. Iglesias" <edgar.iglesias@gmail.com>,
 Huacai Chen <chenhc@lemote.com>, Stefano Stabellini <sstabellini@kernel.org>,
 xen-devel@lists.xenproject.org, Yoshinori Sato <ysato@users.sourceforge.jp>,
 Paul Durrant <paul@xen.org>, Magnus Damm <magnus.damm@gmail.com>,
 Markus Armbruster <armbru@redhat.com>,
 =?UTF-8?q?Herv=C3=A9=20Poussineau?= <hpoussin@reactos.org>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 Anthony Perard <anthony.perard@citrix.com>,
 Samuel Thibault <samuel.thibault@ens-lyon.org>,
 Leif Lindholm <leif@nuviainc.com>, Andrzej Zaborowski <balrogg@gmail.com>,
 Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
 Eduardo Habkost <ehabkost@redhat.com>,
 Alistair Francis <alistair@alistair23.me>,
 "Dr. David Alan Gilbert" <dgilbert@redhat.com>,
 Beniamino Galvani <b.galvani@gmail.com>,
 Niek Linnenbank <nieklinnenbank@gmail.com>, qemu-arm@nongnu.org,
 =?UTF-8?q?Marc-Andr=C3=A9=20Lureau?= <marcandre.lureau@redhat.com>,
 Richard Henderson <rth@twiddle.net>,
 Radoslaw Biernacki <radoslaw.biernacki@linaro.org>,
 Igor Mitsyanko <i.mitsyanko@gmail.com>,
 =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>,
 =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>,
 Paul Zimmerman <pauldzim@gmail.com>, qemu-ppc@nongnu.org,
 David Gibson <david@gibson.dropbear.id.au>,
 Paolo Bonzini <pbonzini@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

This file doesn't use anything from "hw/usb.h", so remove its
inclusion.

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
---
 hw/arm/sbsa-ref.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/hw/arm/sbsa-ref.c b/hw/arm/sbsa-ref.c
index e40c868a82..021e7c1b8b 100644
--- a/hw/arm/sbsa-ref.c
+++ b/hw/arm/sbsa-ref.c
@@ -38,7 +38,6 @@
 #include "hw/loader.h"
 #include "hw/pci-host/gpex.h"
 #include "hw/qdev-properties.h"
-#include "hw/usb.h"
 #include "hw/char/pl011.h"
 #include "net/net.h"
 
-- 
2.21.3



From xen-devel-bounces@lists.xenproject.org Sat Jul 04 14:50:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 04 Jul 2020 14:50:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jrjUT-0003tj-FX; Sat, 04 Jul 2020 14:50:05 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hHao=AP=gmail.com=philippe.mathieu.daude@srs-us1.protection.inumbo.net>)
 id 1jrjUS-0003ES-Oq
 for xen-devel@lists.xenproject.org; Sat, 04 Jul 2020 14:50:04 +0000
X-Inumbo-ID: 9f27afa2-be05-11ea-b7bb-bc764e2007e4
Received: from mail-wr1-x442.google.com (unknown [2a00:1450:4864:20::442])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9f27afa2-be05-11ea-b7bb-bc764e2007e4;
 Sat, 04 Jul 2020 14:49:54 +0000 (UTC)
Received: by mail-wr1-x442.google.com with SMTP id z15so24482792wrl.8
 for <xen-devel@lists.xenproject.org>; Sat, 04 Jul 2020 07:49:54 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=sender:from:to:cc:subject:date:message-id:in-reply-to:references
 :mime-version:content-transfer-encoding;
 bh=MuyWjc3EjPCzKMWDQ2x+ag455dTHS4+/J8s0D1IfWzs=;
 b=eJ712M4xvrpeG0WDBqIsWoodAc0KX1IcpGrySlrkuD9AOTok/P4V/I21yqX/UEczz4
 3tSXj1WMi3HUlpJkfq2a6D2xREY05qgcqqJ19Jl7cdWzw2gv+ctBk/5poKNfTIv79Q0n
 BAVVtuIe/u6EewOUx6SzWsbiKfIGzmfWG/Yq7yydaZQAcV3k6WTSGWLLZAqkjp55Pn7+
 lBXvy8gVmzLifoWV3EcPuOxQzRn/bv2cit5TEsCXcdNiE6ar8MCWmB3dx2nr0CNajAWK
 hoBQwV5aFZPYADzkJ6K25xbFFvQMnugh8FXNfFas/L7BOnEBzAH5LJDg5p/AgO1233GU
 IUJw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:sender:from:to:cc:subject:date:message-id
 :in-reply-to:references:mime-version:content-transfer-encoding;
 bh=MuyWjc3EjPCzKMWDQ2x+ag455dTHS4+/J8s0D1IfWzs=;
 b=bFdZePQLKgPiOthqM0+SNRyYusNfxCytc5NmbTJNcc7oLYEmK6trHvHKqZzEMHWlCR
 ABaib5HMnwhox9GZKSvWyPLnHWINOtYQFWeyy1iPzSzomeWq94yFB18Wm+elHRzKRklC
 C3SylrbVKwDEEV4CVnWhNU3K/1SW/1DO+mmK7he6Qcffy9bGr+Nz5BKIHlNbBfK3aR5k
 8iZVLStIyIPqFazsb9T5VCQZyKXBXdW2WXTvu6lAbMvYFY1oEdoQzQstQ5Clu9JnnDMO
 aYqLAZaWfuU+lcm/Wcxvn3sggXrLrT8uq4+GDEEzhs0/JtCptAz+woFi28/WfDu0C0Km
 nu7A==
X-Gm-Message-State: AOAM5334ZUOgIF2P+SZRYZbeGIXqgm/a1kt+itM0yR21nm6/OzQU+7lw
 45dnKrtPv96oNEL6/rUF4E8=
X-Google-Smtp-Source: ABdhPJyYcaxhb0w6bZS0TVjPWW7MJK44YHjNK2tmtt5WThpkFwrsjrZmdAUU/9drVhDWFP6nr2M8dQ==
X-Received: by 2002:adf:e9c4:: with SMTP id l4mr42871069wrn.9.1593874193982;
 Sat, 04 Jul 2020 07:49:53 -0700 (PDT)
Received: from localhost.localdomain (1.red-83-51-162.dynamicip.rima-tde.net.
 [83.51.162.1])
 by smtp.gmail.com with ESMTPSA id r10sm17135019wrm.17.2020.07.04.07.49.52
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sat, 04 Jul 2020 07:49:53 -0700 (PDT)
From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>
To: qemu-devel@nongnu.org,
	BALATON Zoltan <balaton@eik.bme.hu>
Subject: [PATCH 03/26] hw/usb: Remove unused VM_USB_HUB_SIZE definition
Date: Sat,  4 Jul 2020 16:49:20 +0200
Message-Id: <20200704144943.18292-4-f4bug@amsat.org>
X-Mailer: git-send-email 2.21.3
In-Reply-To: <20200704144943.18292-1-f4bug@amsat.org>
References: <20200704144943.18292-1-f4bug@amsat.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Peter Maydell <peter.maydell@linaro.org>,
 "Michael S. Tsirkin" <mst@redhat.com>,
 Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>,
 Jiaxun Yang <jiaxun.yang@flygoat.com>, Gerd Hoffmann <kraxel@redhat.com>,
 "Edgar E. Iglesias" <edgar.iglesias@gmail.com>,
 Huacai Chen <chenhc@lemote.com>, Stefano Stabellini <sstabellini@kernel.org>,
 xen-devel@lists.xenproject.org, Yoshinori Sato <ysato@users.sourceforge.jp>,
 Paul Durrant <paul@xen.org>, Magnus Damm <magnus.damm@gmail.com>,
 Markus Armbruster <armbru@redhat.com>,
 =?UTF-8?q?Herv=C3=A9=20Poussineau?= <hpoussin@reactos.org>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 Anthony Perard <anthony.perard@citrix.com>,
 Samuel Thibault <samuel.thibault@ens-lyon.org>,
 Leif Lindholm <leif@nuviainc.com>, Andrzej Zaborowski <balrogg@gmail.com>,
 Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
 Eduardo Habkost <ehabkost@redhat.com>,
 Alistair Francis <alistair@alistair23.me>,
 "Dr. David Alan Gilbert" <dgilbert@redhat.com>,
 Beniamino Galvani <b.galvani@gmail.com>,
 Niek Linnenbank <nieklinnenbank@gmail.com>, qemu-arm@nongnu.org,
 =?UTF-8?q?Marc-Andr=C3=A9=20Lureau?= <marcandre.lureau@redhat.com>,
 Richard Henderson <rth@twiddle.net>,
 Radoslaw Biernacki <radoslaw.biernacki@linaro.org>,
 Igor Mitsyanko <i.mitsyanko@gmail.com>,
 =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>,
 =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>,
 Paul Zimmerman <pauldzim@gmail.com>, qemu-ppc@nongnu.org,
 David Gibson <david@gibson.dropbear.id.au>,
 Paolo Bonzini <pbonzini@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Commit a5d2f7273c ("qdev/usb: make qemu aware of usb busses")
removed the last use of VM_USB_HUB_SIZE 11 years ago. Time
to drop it.

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
---
 include/hw/usb.h | 4 ----
 1 file changed, 4 deletions(-)

diff --git a/include/hw/usb.h b/include/hw/usb.h
index e29a37635b..4f04a1a879 100644
--- a/include/hw/usb.h
+++ b/include/hw/usb.h
@@ -470,10 +470,6 @@ void usb_generic_async_ctrl_complete(USBDevice *s, USBPacket *p);
 void hmp_info_usbhost(Monitor *mon, const QDict *qdict);
 bool usb_host_dev_is_scsi_storage(USBDevice *usbdev);
 
-/* usb ports of the VM */
-
-#define VM_USB_HUB_SIZE 8
-
 /* usb-bus.c */
 
 #define TYPE_USB_BUS "usb-bus"
-- 
2.21.3



From xen-devel-bounces@lists.xenproject.org Sat Jul 04 14:50:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 04 Jul 2020 14:50:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jrjUF-0003EX-J1; Sat, 04 Jul 2020 14:49:51 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hHao=AP=gmail.com=philippe.mathieu.daude@srs-us1.protection.inumbo.net>)
 id 1jrjUD-0003ES-Sv
 for xen-devel@lists.xenproject.org; Sat, 04 Jul 2020 14:49:50 +0000
X-Inumbo-ID: 9b9576f8-be05-11ea-b7bb-bc764e2007e4
Received: from mail-wm1-x336.google.com (unknown [2a00:1450:4864:20::336])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9b9576f8-be05-11ea-b7bb-bc764e2007e4;
 Sat, 04 Jul 2020 14:49:48 +0000 (UTC)
Received: by mail-wm1-x336.google.com with SMTP id j18so34714752wmi.3
 for <xen-devel@lists.xenproject.org>; Sat, 04 Jul 2020 07:49:48 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=sender:from:to:cc:subject:date:message-id:mime-version
 :content-transfer-encoding;
 bh=8Sd8UrNpAT6Q827DPYJ71GYz3qN5ijhu3T6adNwNx58=;
 b=VfCW80igCu05TRu6Kp/hRp0wwLcO25D00t+NwoIV0c8ix6mcQko2qSVKfN1H7cmL4H
 Vxhy0na0A3ZdoSWK98SpIoSITxwO50uXAKNvBj0H9USSoEJkOYekcVtX7qA8oey7Zq0C
 1gT9pIxDTb1SouD3jSFCjmRxh176pRqgb9NK+POT5ThziR0p2IFJM7o0Quc2x81CpA+g
 ifW/7fKRhTgTGZolZ71YooHseLeyWaonfI/a9QW1+3iYMgESDtL3KwrJ6MbXAJJc5tVN
 WBh2g+7GmdCH0HZbUKAJrDSTZ0YLZDtsbFuAgkRinU6RkXxKmgHpBNku6nt5HiABtxt8
 vQ8A==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:sender:from:to:cc:subject:date:message-id
 :mime-version:content-transfer-encoding;
 bh=8Sd8UrNpAT6Q827DPYJ71GYz3qN5ijhu3T6adNwNx58=;
 b=jMYCbb/2GYBb9LldY+odyXedBq6fxYycZVlpcGWrBwR5UUdaodGin/YuGv4qOt5ZgF
 eLwfC9z6t47dInGO8fYyaH0iGVGGc+RLKmFY7CtMR7gUb0GfJUCUazZNV+dtjYEgysGI
 h/IqyX2bRkrymU72AWpq4QvcqqsRwmYMwWKtnK7jEKRewEVW4iO6dl/3NCiF7BZpZ8DC
 ilKmX54ZLTPBCNMbhRbBzAi/MT7JzdqubAz5C7MYWeaM2OEgs5yX0iEWwFWrrtxl4pxz
 nknL6KWiBJAiJ206YBHUBW6UbrjDRuuzCAzsBAxawjUpudmjUe0sGsArP3ubcRQbg2VA
 5+sg==
X-Gm-Message-State: AOAM533YB1EAL02KTI8iidA5w8z/OcU091uSjOEdVTtzljmVGRjqvp3K
 9qsHAL2uSsVjesURiodUEuQ=
X-Google-Smtp-Source: ABdhPJyVXTn8bDCpfeNaz82C2B4sWvaeuwMMI0440OjK/Jj8KTAg5BtYB+Q0yPcpdr0q523zbyeuCw==
X-Received: by 2002:a1c:28c4:: with SMTP id o187mr27108566wmo.62.1593874187874; 
 Sat, 04 Jul 2020 07:49:47 -0700 (PDT)
Received: from localhost.localdomain (1.red-83-51-162.dynamicip.rima-tde.net.
 [83.51.162.1])
 by smtp.gmail.com with ESMTPSA id r10sm17135019wrm.17.2020.07.04.07.49.44
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sat, 04 Jul 2020 07:49:46 -0700 (PDT)
From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>
To: qemu-devel@nongnu.org,
	BALATON Zoltan <balaton@eik.bme.hu>
Subject: [PATCH 00/26] hw/usb: Give it love,
 reduce 'hw/usb.h' inclusion out of hw/usb/
Date: Sat,  4 Jul 2020 16:49:17 +0200
Message-Id: <20200704144943.18292-1-f4bug@amsat.org>
X-Mailer: git-send-email 2.21.3
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Peter Maydell <peter.maydell@linaro.org>,
 "Michael S. Tsirkin" <mst@redhat.com>,
 Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>,
 Jiaxun Yang <jiaxun.yang@flygoat.com>, Gerd Hoffmann <kraxel@redhat.com>,
 "Edgar E. Iglesias" <edgar.iglesias@gmail.com>,
 Huacai Chen <chenhc@lemote.com>, Stefano Stabellini <sstabellini@kernel.org>,
 xen-devel@lists.xenproject.org, Yoshinori Sato <ysato@users.sourceforge.jp>,
 Paul Durrant <paul@xen.org>, Magnus Damm <magnus.damm@gmail.com>,
 Markus Armbruster <armbru@redhat.com>,
 =?UTF-8?q?Herv=C3=A9=20Poussineau?= <hpoussin@reactos.org>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 Anthony Perard <anthony.perard@citrix.com>,
 Samuel Thibault <samuel.thibault@ens-lyon.org>,
 Leif Lindholm <leif@nuviainc.com>, Andrzej Zaborowski <balrogg@gmail.com>,
 Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
 Eduardo Habkost <ehabkost@redhat.com>,
 Alistair Francis <alistair@alistair23.me>,
 "Dr. David Alan Gilbert" <dgilbert@redhat.com>,
 Beniamino Galvani <b.galvani@gmail.com>,
 Niek Linnenbank <nieklinnenbank@gmail.com>, qemu-arm@nongnu.org,
 =?UTF-8?q?Marc-Andr=C3=A9=20Lureau?= <marcandre.lureau@redhat.com>,
 Richard Henderson <rth@twiddle.net>,
 Radoslaw Biernacki <radoslaw.biernacki@linaro.org>,
 Igor Mitsyanko <i.mitsyanko@gmail.com>,
 =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>,
 =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>,
 Paul Zimmerman <pauldzim@gmail.com>, qemu-ppc@nongnu.org,
 David Gibson <david@gibson.dropbear.id.au>,
 Paolo Bonzini <pbonzini@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi,

This is the second time I have tried to replace a magic typename
string with a constant, and Zoltan warned me this is counterproductive,
as "hw/usb.h" pulls in an insane amount of code.

Time to give the usb subsystem some love and move forward.

This series can be decomposed as follows:

 1-2:    preliminary machine cleanups (arm/ppc)
 3-13:   usb related headers cleanups
 14-15:  usb quirks cleanup
 16-18:  refactor usb_get_dev_path() to add usb_get_port_path()
 19:     let spapr use usb_get_port_path() to make USBDevice opaque
 20:     extract the public USB API (for machine/board/soc)
 21:     make the older "usb.h" internal to hw/usb/
 22-25:  use TYPENAME definitions
 26:     cover dwc2 in MAINTAINERS

Please review.

Phil.

Philippe Mathieu-Daudé (26):
  hw/arm/sbsa-ref: Remove unused 'hw/usb.h' header
  hw/ppc/sam460ex: Add missing 'hw/pci/pci.h' header
  hw/usb: Remove unused VM_USB_HUB_SIZE definition
  hw/usb: Reduce 'exec/memory.h' inclusion
  hw/usb/desc: Add missing header
  hw/usb/hcd-dwc2: Remove unnecessary includes
  hw/usb/hcd-dwc2: Restrict some headers to source
  hw/usb/hcd-dwc2: Restrict 'dwc2-regs.h' scope
  hw/usb/hcd-ehci: Remove unnecessary include
  hw/usb/hcd-ehci: Move few definitions from header to source
  hw/usb/hcd-xhci: Add missing header
  hw/usb/hcd-musb: Restrict header scope
  hw/usb/desc: Reduce some declarations scope
  hw/usb/quirks: Rename included source with '.inc.c' suffix
  hw/usb: Add new 'usb-quirks.h' local header
  hw/usb/bus: Simplify usb_get_dev_path()
  hw/usb/bus: Rename usb_get_dev_path() as usb_get_full_dev_path()
  hw/usb/bus: Add usb_get_port_path()
  hw/ppc/spapr: Use usb_get_port_path()
  hw/usb: Introduce "hw/usb/usb.h" public API
  hw/usb: Move internal API to local 'usb-internal.h' header
  hw/usb/usb-hcd: Use OHCI type definitions
  hw/usb/usb-hcd: Use EHCI type definitions
  hw/usb/usb-hcd: Use UHCI type definitions
  hw/usb/usb-hcd: Use XHCI type definitions
  MAINTAINERS: Cover dwc-hsotg (dwc2) USB host controller emulation

 hw/usb/desc.h                             | 11 +++++
 {include/hw => hw}/usb/dwc2-regs.h        |  0
 hw/usb/hcd-dwc2.h                         |  5 +-
 hw/usb/hcd-ehci.h                         | 24 +---------
 {include/hw => hw}/usb/hcd-musb.h         |  2 +
 hw/usb/hcd-ohci.h                         |  4 +-
 hw/usb/hcd-xhci.h                         |  4 +-
 include/hw/usb.h => hw/usb/usb-internal.h | 50 ++-----------------
 hw/usb/usb-quirks.h                       | 27 +++++++++++
 include/hw/usb/chipidea.h                 |  2 +-
 include/hw/usb/usb-hcd.h                  | 36 ++++++++++++++
 include/hw/usb/usb.h                      | 58 +++++++++++++++++++++++
 chardev/baum.c                            |  2 +-
 hw/arm/allwinner-a10.c                    |  2 +-
 hw/arm/allwinner-h3.c                     | 10 ++--
 hw/arm/exynos4210.c                       |  2 +-
 hw/arm/pxa2xx.c                           |  3 +-
 hw/arm/realview.c                         |  3 +-
 hw/arm/sbsa-ref.c                         |  4 +-
 hw/arm/versatilepb.c                      |  3 +-
 hw/arm/xilinx_zynq.c                      |  2 +-
 hw/display/sm501.c                        |  3 +-
 hw/i386/pc.c                              |  2 +-
 hw/i386/pc_piix.c                         |  5 +-
 hw/i386/pc_q35.c                          | 15 +++---
 hw/isa/piix4.c                            |  3 +-
 hw/mips/fuloong2e.c                       |  5 +-
 hw/ppc/mac_newworld.c                     |  5 +-
 hw/ppc/mac_oldworld.c                     |  3 +-
 hw/ppc/sam460ex.c                         |  6 ++-
 hw/ppc/spapr.c                            | 13 +++--
 hw/sh4/r2d.c                              |  2 +-
 hw/usb/bus.c                              | 40 +++++++++-------
 hw/usb/chipidea.c                         |  1 +
 hw/usb/combined-packet.c                  |  2 +-
 hw/usb/core.c                             |  2 +-
 hw/usb/desc-msos.c                        |  2 +-
 hw/usb/desc.c                             |  3 +-
 hw/usb/dev-audio.c                        |  2 +-
 hw/usb/dev-hid.c                          |  2 +-
 hw/usb/dev-hub.c                          |  2 +-
 hw/usb/dev-mtp.c                          |  2 +-
 hw/usb/dev-network.c                      |  2 +-
 hw/usb/dev-serial.c                       |  2 +-
 hw/usb/dev-smartcard-reader.c             |  2 +-
 hw/usb/dev-storage.c                      |  2 +-
 hw/usb/dev-uas.c                          |  2 +-
 hw/usb/dev-wacom.c                        |  2 +-
 hw/usb/hcd-dwc2.c                         |  8 ++--
 hw/usb/hcd-ehci-sysbus.c                  |  1 +
 hw/usb/hcd-ehci.c                         | 13 ++++-
 hw/usb/hcd-musb.c                         |  4 +-
 hw/usb/hcd-ohci-pci.c                     |  4 +-
 hw/usb/hcd-ohci.c                         |  1 -
 hw/usb/hcd-uhci.c                         | 21 ++++----
 hw/usb/hcd-xhci-nec.c                     |  3 +-
 hw/usb/hcd-xhci.c                         |  2 +-
 hw/usb/host-libusb.c                      |  2 +-
 hw/usb/host-stub.c                        |  2 +-
 hw/usb/libhw.c                            |  2 +-
 hw/usb/quirks.c                           |  5 +-
 hw/usb/{quirks.h => quirks.inc.c}         |  5 --
 hw/usb/redirect.c                         |  3 +-
 hw/usb/tusb6010.c                         |  4 +-
 hw/usb/xen-usb.c                          |  2 +-
 monitor/misc.c                            |  2 +-
 softmmu/vl.c                              |  2 +-
 MAINTAINERS                               |  7 ++-
 68 files changed, 294 insertions(+), 185 deletions(-)
 rename {include/hw => hw}/usb/dwc2-regs.h (100%)
 rename {include/hw => hw}/usb/hcd-musb.h (98%)
 rename include/hw/usb.h => hw/usb/usb-internal.h (92%)
 create mode 100644 hw/usb/usb-quirks.h
 create mode 100644 include/hw/usb/usb-hcd.h
 create mode 100644 include/hw/usb/usb.h
 rename hw/usb/{quirks.h => quirks.inc.c} (99%)

-- 
2.21.3



From xen-devel-bounces@lists.xenproject.org Sat Jul 04 14:50:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 04 Jul 2020 14:50:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jrjUO-0003FQ-78; Sat, 04 Jul 2020 14:50:00 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hHao=AP=gmail.com=philippe.mathieu.daude@srs-us1.protection.inumbo.net>)
 id 1jrjUN-0003ES-Om
 for xen-devel@lists.xenproject.org; Sat, 04 Jul 2020 14:49:59 +0000
X-Inumbo-ID: 9dfab0c0-be05-11ea-bb8b-bc764e2007e4
Received: from mail-wr1-x441.google.com (unknown [2a00:1450:4864:20::441])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9dfab0c0-be05-11ea-bb8b-bc764e2007e4;
 Sat, 04 Jul 2020 14:49:52 +0000 (UTC)
Received: by mail-wr1-x441.google.com with SMTP id s10so35718707wrw.12
 for <xen-devel@lists.xenproject.org>; Sat, 04 Jul 2020 07:49:52 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=sender:from:to:cc:subject:date:message-id:in-reply-to:references
 :mime-version:content-transfer-encoding;
 bh=TYD3rdWUPlT0NII6NR282LsOncN84HigXsBo5vZptIs=;
 b=mOBuD8AYvWgD1eKqLpIBdjXVQ5Pph/+FmMaPRcgW2zETZQyqX8+ELNIs1My9xotg49
 yl9bERPQPS4BhKGyeBRx4Qd+uQFLZYRklMZbvqAuJXZY6sEyWYouhWOjPYTFIPLsN1nA
 GHwtlofg4fOkSOvAn8jYj6oYYhDGjGPP34trt8rdNRv6PvCFLzPAjucb8srw71EOzwXj
 tqxRmbVb81GwGMeF/lodyUfzMseIwejEvsc32wNWsbTHQldJGTBhlLiCfR5UelU3S8dR
 pT9OEuZPV2fjo7gKaYA+MWXkGAIMr5hn5Xs7KOlPZHvMRGlwOazCfo5B65rotWknhZhj
 6mPQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:sender:from:to:cc:subject:date:message-id
 :in-reply-to:references:mime-version:content-transfer-encoding;
 bh=TYD3rdWUPlT0NII6NR282LsOncN84HigXsBo5vZptIs=;
 b=s/IqLK0KvY7ANIUvonY6gGOR7Vs+sqy0q7dvZtPbDGkSe6NVbFPkxoEP1iLRASnPFw
 /+Ijt8x5FIsW3JTNyb5jXB/eDiy+mW2zXTEezWDxJ1teRCaYkcJMQCxJZHijH3d+O/Qu
 beQCaqTAgwTtnf2e9ErfQ4LUO9w8o/P8lGtp90CRf8GrtfqiZw95KJ07dRGaXYO5c8zK
 R2TJ2clIUVmWDKijKsgNpw93jgdd3IV772bFrBQVeZ3cRf74vNzUhU5i1sMh/duH9LVi
 Ybug2nUgxhgmZRgHQB2AlUpOj3A2yVIUborJIpW/gItH7i4Tk9je1gRVi5Yr5bRWS9Q7
 Lz0A==
X-Gm-Message-State: AOAM532cP+/Z1eCkSsjCo/yZuGggRxJN2Ws/kUW8eUbRMegP98EfZerW
 pst6yb6K7TOBS8pkK9K97KE=
X-Google-Smtp-Source: ABdhPJwmYB47kPgnriIBv4UAA2pvunyU+tdfS4/5cW6u0UXI8kGWbm5Q/26nom3RMXKe12Z+5cK61g==
X-Received: by 2002:adf:d0d0:: with SMTP id z16mr43245870wrh.95.1593874191948; 
 Sat, 04 Jul 2020 07:49:51 -0700 (PDT)
Received: from localhost.localdomain (1.red-83-51-162.dynamicip.rima-tde.net.
 [83.51.162.1])
 by smtp.gmail.com with ESMTPSA id r10sm17135019wrm.17.2020.07.04.07.49.50
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sat, 04 Jul 2020 07:49:51 -0700 (PDT)
From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>
To: qemu-devel@nongnu.org,
	BALATON Zoltan <balaton@eik.bme.hu>
Subject: [PATCH 02/26] hw/ppc/sam460ex: Add missing 'hw/pci/pci.h' header
Date: Sat,  4 Jul 2020 16:49:19 +0200
Message-Id: <20200704144943.18292-3-f4bug@amsat.org>
X-Mailer: git-send-email 2.21.3
In-Reply-To: <20200704144943.18292-1-f4bug@amsat.org>
References: <20200704144943.18292-1-f4bug@amsat.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Peter Maydell <peter.maydell@linaro.org>,
 "Michael S. Tsirkin" <mst@redhat.com>,
 Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>,
 Jiaxun Yang <jiaxun.yang@flygoat.com>, Gerd Hoffmann <kraxel@redhat.com>,
 "Edgar E. Iglesias" <edgar.iglesias@gmail.com>,
 Huacai Chen <chenhc@lemote.com>, Stefano Stabellini <sstabellini@kernel.org>,
 xen-devel@lists.xenproject.org, Yoshinori Sato <ysato@users.sourceforge.jp>,
 Paul Durrant <paul@xen.org>, Magnus Damm <magnus.damm@gmail.com>,
 Markus Armbruster <armbru@redhat.com>,
 =?UTF-8?q?Herv=C3=A9=20Poussineau?= <hpoussin@reactos.org>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 Anthony Perard <anthony.perard@citrix.com>,
 Samuel Thibault <samuel.thibault@ens-lyon.org>,
 Leif Lindholm <leif@nuviainc.com>, Andrzej Zaborowski <balrogg@gmail.com>,
 Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
 Eduardo Habkost <ehabkost@redhat.com>,
 Alistair Francis <alistair@alistair23.me>,
 "Dr. David Alan Gilbert" <dgilbert@redhat.com>,
 Beniamino Galvani <b.galvani@gmail.com>,
 Niek Linnenbank <nieklinnenbank@gmail.com>, qemu-arm@nongnu.org,
 =?UTF-8?q?Marc-Andr=C3=A9=20Lureau?= <marcandre.lureau@redhat.com>,
 Richard Henderson <rth@twiddle.net>,
 Radoslaw Biernacki <radoslaw.biernacki@linaro.org>,
 Igor Mitsyanko <i.mitsyanko@gmail.com>,
 =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>,
 =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>,
 Paul Zimmerman <pauldzim@gmail.com>, qemu-ppc@nongnu.org,
 David Gibson <david@gibson.dropbear.id.au>,
 Paolo Bonzini <pbonzini@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

This file uses pci_create_simple() and PCI_DEVFN(), which are both
declared in "hw/pci/pci.h". That include currently reaches this file
only indirectly, through a USB header. Since we want to reduce the USB
header inclusions later, include the PCI header directly now, to avoid:

  hw/ppc/sam460ex.c:397:5: error: implicit declaration of function ‘pci_create_simple’; did you mean ‘sysbus_create_simple’? [-Werror=implicit-function-declaration]
    397 |     pci_create_simple(pci_bus, PCI_DEVFN(6, 0), "sm501");
        |     ^~~~~~~~~~~~~~~~~
        |     sysbus_create_simple
  hw/ppc/sam460ex.c:397:5: error: nested extern declaration of ‘pci_create_simple’ [-Werror=nested-externs]
  hw/ppc/sam460ex.c:397:32: error: implicit declaration of function ‘PCI_DEVFN’ [-Werror=implicit-function-declaration]
    397 |     pci_create_simple(pci_bus, PCI_DEVFN(6, 0), "sm501");
        |                                ^~~~~~~~~
  hw/ppc/sam460ex.c:397:32: error: nested extern declaration of ‘PCI_DEVFN’ [-Werror=nested-externs]

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
---
 hw/ppc/sam460ex.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/hw/ppc/sam460ex.c b/hw/ppc/sam460ex.c
index 1a106a68de..fae970b142 100644
--- a/hw/ppc/sam460ex.c
+++ b/hw/ppc/sam460ex.c
@@ -38,6 +38,7 @@
 #include "hw/usb/hcd-ehci.h"
 #include "hw/ppc/fdt.h"
 #include "hw/qdev-properties.h"
+#include "hw/pci/pci.h"
 
 #include <libfdt.h>
 
-- 
2.21.3



From xen-devel-bounces@lists.xenproject.org Sat Jul 04 14:50:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 04 Jul 2020 14:50:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jrjUY-0003w0-O0; Sat, 04 Jul 2020 14:50:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hHao=AP=gmail.com=philippe.mathieu.daude@srs-us1.protection.inumbo.net>)
 id 1jrjUX-0003ES-P6
 for xen-devel@lists.xenproject.org; Sat, 04 Jul 2020 14:50:09 +0000
X-Inumbo-ID: a0583856-be05-11ea-b7bb-bc764e2007e4
Received: from mail-wr1-x441.google.com (unknown [2a00:1450:4864:20::441])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a0583856-be05-11ea-b7bb-bc764e2007e4;
 Sat, 04 Jul 2020 14:49:56 +0000 (UTC)
Received: by mail-wr1-x441.google.com with SMTP id q5so35745347wru.6
 for <xen-devel@lists.xenproject.org>; Sat, 04 Jul 2020 07:49:56 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=sender:from:to:cc:subject:date:message-id:in-reply-to:references
 :mime-version:content-transfer-encoding;
 bh=Oswm/m1DAWqBYNXr2uTrm9A1MHSMTs1YgVZvoQJC8bI=;
 b=gQJg1+Wh58x4r1H2OG452+zx84IaV+E1T+M1C2BaKGGeGjGLijbGT/5fa47rZbH8+0
 wx3oFF6OQ4YShJqliLGLwkPIB5oJdoms+e1FUh9KOzyP0OaQJPWoQRUIivVmvk/OIni+
 pIzj1GiWmXLxj18zi7ZWsi1d3bC0/vHZhYuKuqhtZGYCTA6vwHL3QyRfwCu4wrFZiM4w
 nFj0qm+tx7V6fzx/zRQcGItYIzadZOv9ExIOm820D40RuWqtW4US5zoVgHElBrDFWXJU
 AbeBG/uIbXymxcCdns7Thwq8fUZU4bxlC2yWqkCy++/qfZTE6Jfr1HmDinGWpjPF9fWy
 Rs2A==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:sender:from:to:cc:subject:date:message-id
 :in-reply-to:references:mime-version:content-transfer-encoding;
 bh=Oswm/m1DAWqBYNXr2uTrm9A1MHSMTs1YgVZvoQJC8bI=;
 b=aGJnS5c6zOcu5hbRbQWdzrVZ8XL7rqvgyex1qGcPxeCaXhlBt/ISVyV0o+WEMcPO5z
 oWgScbSk6WOw+98tLtCCIIF4e4sq+mwdMOOs7z5xmGIA1tTxQZ22K/VQDLSXbiIbv7+p
 x5cxs2SGN+eLllA0rxQmJR/T+Qy/0dSDBFdVYk/whO+KYkjVaN0VLQRVO1U6YrM5QDER
 BHUdJmF5H7hGbSfL3u1OXt14vJO1kt1KJQofRCfGFeGeWMUTEGHqZcxZZokW0Cv35/U1
 ficfSzBy2zb3aYHCKFHFgrqVTG+jOw94oJe8gdVYlVZtXLCvG4kMxPCZttWfesGDcSRY
 fwog==
X-Gm-Message-State: AOAM533C/b666Oo3B2eG9Uhc/YbapWM4MH/7y3LYlC7lvRl9KIrHlvV6
 Q836nAAlyWHN/4wTGamz7ks=
X-Google-Smtp-Source: ABdhPJwNyiTXMTeB3cC7VryyLyGzYsyh9YLCpLFWWlt+IfdrZqIJno2o/CWRvBISeVrXF/zuyPa+Cw==
X-Received: by 2002:adf:9561:: with SMTP id 88mr11042389wrs.240.1593874195987; 
 Sat, 04 Jul 2020 07:49:55 -0700 (PDT)
Received: from localhost.localdomain (1.red-83-51-162.dynamicip.rima-tde.net.
 [83.51.162.1])
 by smtp.gmail.com with ESMTPSA id r10sm17135019wrm.17.2020.07.04.07.49.54
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sat, 04 Jul 2020 07:49:55 -0700 (PDT)
From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>
To: qemu-devel@nongnu.org,
	BALATON Zoltan <balaton@eik.bme.hu>
Subject: [PATCH 04/26] hw/usb: Reduce 'exec/memory.h' inclusion
Date: Sat,  4 Jul 2020 16:49:21 +0200
Message-Id: <20200704144943.18292-5-f4bug@amsat.org>
X-Mailer: git-send-email 2.21.3
In-Reply-To: <20200704144943.18292-1-f4bug@amsat.org>
References: <20200704144943.18292-1-f4bug@amsat.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Peter Maydell <peter.maydell@linaro.org>,
 "Michael S. Tsirkin" <mst@redhat.com>,
 Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>,
 Jiaxun Yang <jiaxun.yang@flygoat.com>, Gerd Hoffmann <kraxel@redhat.com>,
 "Edgar E. Iglesias" <edgar.iglesias@gmail.com>,
 Huacai Chen <chenhc@lemote.com>, Stefano Stabellini <sstabellini@kernel.org>,
 xen-devel@lists.xenproject.org, Yoshinori Sato <ysato@users.sourceforge.jp>,
 Paul Durrant <paul@xen.org>, Magnus Damm <magnus.damm@gmail.com>,
 Markus Armbruster <armbru@redhat.com>,
 =?UTF-8?q?Herv=C3=A9=20Poussineau?= <hpoussin@reactos.org>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 Anthony Perard <anthony.perard@citrix.com>,
 Samuel Thibault <samuel.thibault@ens-lyon.org>,
 Leif Lindholm <leif@nuviainc.com>, Andrzej Zaborowski <balrogg@gmail.com>,
 Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
 Eduardo Habkost <ehabkost@redhat.com>,
 Alistair Francis <alistair@alistair23.me>,
 "Dr. David Alan Gilbert" <dgilbert@redhat.com>,
 Beniamino Galvani <b.galvani@gmail.com>,
 Niek Linnenbank <nieklinnenbank@gmail.com>, qemu-arm@nongnu.org,
 =?UTF-8?q?Marc-Andr=C3=A9=20Lureau?= <marcandre.lureau@redhat.com>,
 Richard Henderson <rth@twiddle.net>,
 Radoslaw Biernacki <radoslaw.biernacki@linaro.org>,
 Igor Mitsyanko <i.mitsyanko@gmail.com>,
 =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>,
 =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>,
 Paul Zimmerman <pauldzim@gmail.com>, qemu-ppc@nongnu.org,
 David Gibson <david@gibson.dropbear.id.au>,
 Paolo Bonzini <pbonzini@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

"exec/memory.h" is only required by "hw/usb/hcd-musb.h",
so include it there directly.

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
---
 include/hw/usb.h          | 1 -
 include/hw/usb/hcd-musb.h | 2 ++
 2 files changed, 2 insertions(+), 1 deletion(-)

diff --git a/include/hw/usb.h b/include/hw/usb.h
index 4f04a1a879..15b2ef300a 100644
--- a/include/hw/usb.h
+++ b/include/hw/usb.h
@@ -25,7 +25,6 @@
  * THE SOFTWARE.
  */
 
-#include "exec/memory.h"
 #include "hw/qdev-core.h"
 #include "qemu/iov.h"
 #include "qemu/queue.h"
diff --git a/include/hw/usb/hcd-musb.h b/include/hw/usb/hcd-musb.h
index c874b9f292..ec3ee5c4b0 100644
--- a/include/hw/usb/hcd-musb.h
+++ b/include/hw/usb/hcd-musb.h
@@ -13,6 +13,8 @@
 #ifndef HW_USB_MUSB_H
 #define HW_USB_MUSB_H
 
+#include "exec/memory.h"
+
 enum musb_irq_source_e {
     musb_irq_suspend = 0,
     musb_irq_resume,
-- 
2.21.3



From xen-devel-bounces@lists.xenproject.org Sat Jul 04 14:50:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 04 Jul 2020 14:50:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jrjUe-0003xr-1K; Sat, 04 Jul 2020 14:50:16 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hHao=AP=gmail.com=philippe.mathieu.daude@srs-us1.protection.inumbo.net>)
 id 1jrjUc-0003ES-PD
 for xen-devel@lists.xenproject.org; Sat, 04 Jul 2020 14:50:14 +0000
X-Inumbo-ID: a1811856-be05-11ea-8496-bc764e2007e4
Received: from mail-wr1-x442.google.com (unknown [2a00:1450:4864:20::442])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a1811856-be05-11ea-8496-bc764e2007e4;
 Sat, 04 Jul 2020 14:49:58 +0000 (UTC)
Received: by mail-wr1-x442.google.com with SMTP id f2so7792457wrp.7
 for <xen-devel@lists.xenproject.org>; Sat, 04 Jul 2020 07:49:58 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=sender:from:to:cc:subject:date:message-id:in-reply-to:references
 :mime-version:content-transfer-encoding;
 bh=7pFkSbRRQbCuRBQa/wHlCeQOONUZFK/RPfYxuVN6mNY=;
 b=oe96UZf2PvTeWCGOzNPdJjjjwq8e7uWWti9W08FMinN7P58qMXJzwG4jf7BQqAlIXy
 6nHLswC2ubCB3a5BJuyeZJgNcIKrAIDo4ARzHOGDOs8dG3JjWPLZh+xe/GSdKhL1zZpq
 8BYgKvBkBIAl/g+8RWgLQjeAv3/zhyETI0aJxVBr/gw0rUcBfnedTHwEViIUPNIXNva6
 03JOGnSszEoF4cvdF3lLiUVfjtrvuQh3LEc873Fecq0vjBpXttzot5/ZB9ZEJEQG430J
 9YLB5PV5soEOhpr3WG8RhSY1eKgOIEVX6mRUL3RwbaUehHrdq/CwhlxSp1fhDKJcCsc/
 pq5A==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:sender:from:to:cc:subject:date:message-id
 :in-reply-to:references:mime-version:content-transfer-encoding;
 bh=7pFkSbRRQbCuRBQa/wHlCeQOONUZFK/RPfYxuVN6mNY=;
 b=G3iciyn7RQ8poATp+XoZi/LGasfXagXvrMp83LJoggQKPv5/2nWah3EsboJqty8Ey3
 Llbi9QyB3V9/Xuv+OXgLDbtpwJ9OAYFi2nUercF+38gEo8qLrcypuano9wCpcQw6ZskX
 TtB+YE6oGcNNzrZ7XnAOobI5XMxnbMB3XQDgoQWRyv6ic4qP6vPrx9HTHPgpsHOcNmgu
 OCJvopcsNv1Kst/fPpWPdIOrOjNE0NJVXKVakIA88yV8KuWjjpksJdg7ocvvMhJRAPJe
 JEYLUWoxeoXM7/ZFyAkaq7sM1tjQIkRgtVMrqEWN5zJa04rq4deZ3l8fnhibqQq8uFxi
 rRTA==
X-Gm-Message-State: AOAM5317t1vYaqpZiEKllene4TYWqqc/7jTOab5nUgYtIKfizhIEb/+c
 WLs0NuYB9RkclxvMy8nwYZ0=
X-Google-Smtp-Source: ABdhPJyIsvEkdmFv/CEzZPKYJCez2abF31l4aHFgnR6C3Ee2RRBQ8QQm9RtLdC3SuJbJy3WxELYEqQ==
X-Received: by 2002:adf:f08b:: with SMTP id n11mr40029476wro.312.1593874197889; 
 Sat, 04 Jul 2020 07:49:57 -0700 (PDT)
Received: from localhost.localdomain (1.red-83-51-162.dynamicip.rima-tde.net.
 [83.51.162.1])
 by smtp.gmail.com with ESMTPSA id r10sm17135019wrm.17.2020.07.04.07.49.56
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sat, 04 Jul 2020 07:49:57 -0700 (PDT)
From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>
To: qemu-devel@nongnu.org,
	BALATON Zoltan <balaton@eik.bme.hu>
Subject: [PATCH 05/26] hw/usb/desc: Add missing header
Date: Sat,  4 Jul 2020 16:49:22 +0200
Message-Id: <20200704144943.18292-6-f4bug@amsat.org>
X-Mailer: git-send-email 2.21.3
In-Reply-To: <20200704144943.18292-1-f4bug@amsat.org>
References: <20200704144943.18292-1-f4bug@amsat.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Peter Maydell <peter.maydell@linaro.org>,
 "Michael S. Tsirkin" <mst@redhat.com>,
 Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>,
 Jiaxun Yang <jiaxun.yang@flygoat.com>, Gerd Hoffmann <kraxel@redhat.com>,
 "Edgar E. Iglesias" <edgar.iglesias@gmail.com>,
 Huacai Chen <chenhc@lemote.com>, Stefano Stabellini <sstabellini@kernel.org>,
 xen-devel@lists.xenproject.org, Yoshinori Sato <ysato@users.sourceforge.jp>,
 Paul Durrant <paul@xen.org>, Magnus Damm <magnus.damm@gmail.com>,
 Markus Armbruster <armbru@redhat.com>,
 =?UTF-8?q?Herv=C3=A9=20Poussineau?= <hpoussin@reactos.org>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 Anthony Perard <anthony.perard@citrix.com>,
 Samuel Thibault <samuel.thibault@ens-lyon.org>,
 Leif Lindholm <leif@nuviainc.com>, Andrzej Zaborowski <balrogg@gmail.com>,
 Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
 Eduardo Habkost <ehabkost@redhat.com>,
 Alistair Francis <alistair@alistair23.me>,
 "Dr. David Alan Gilbert" <dgilbert@redhat.com>,
 Beniamino Galvani <b.galvani@gmail.com>,
 Niek Linnenbank <nieklinnenbank@gmail.com>, qemu-arm@nongnu.org,
 =?UTF-8?q?Marc-Andr=C3=A9=20Lureau?= <marcandre.lureau@redhat.com>,
 Richard Henderson <rth@twiddle.net>,
 Radoslaw Biernacki <radoslaw.biernacki@linaro.org>,
 Igor Mitsyanko <i.mitsyanko@gmail.com>,
 =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>,
 =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>,
 Paul Zimmerman <pauldzim@gmail.com>, qemu-ppc@nongnu.org,
 David Gibson <david@gibson.dropbear.id.au>,
 Paolo Bonzini <pbonzini@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

This header uses the USBPacket and USBDevice types, which are
forward-declared in "hw/usb.h", so include that header explicitly.

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
---
 hw/usb/desc.h | 1 +
 1 file changed, 1 insertion(+)

diff --git a/hw/usb/desc.h b/hw/usb/desc.h
index 4d81c68e0e..92594fbe29 100644
--- a/hw/usb/desc.h
+++ b/hw/usb/desc.h
@@ -2,6 +2,7 @@
 #define QEMU_HW_USB_DESC_H
 
 #include <wchar.h>
+#include "hw/usb.h"
 
 /* binary representation */
 typedef struct USBDescriptor {
-- 
2.21.3



From xen-devel-bounces@lists.xenproject.org Sat Jul 04 14:50:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 04 Jul 2020 14:50:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jrjUi-000408-FS; Sat, 04 Jul 2020 14:50:20 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hHao=AP=gmail.com=philippe.mathieu.daude@srs-us1.protection.inumbo.net>)
 id 1jrjUh-0003ES-PK
 for xen-devel@lists.xenproject.org; Sat, 04 Jul 2020 14:50:19 +0000
X-Inumbo-ID: a2b47b14-be05-11ea-bb8b-bc764e2007e4
Received: from mail-wm1-x343.google.com (unknown [2a00:1450:4864:20::343])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a2b47b14-be05-11ea-bb8b-bc764e2007e4;
 Sat, 04 Jul 2020 14:50:00 +0000 (UTC)
Received: by mail-wm1-x343.google.com with SMTP id g10so12677223wmc.1
 for <xen-devel@lists.xenproject.org>; Sat, 04 Jul 2020 07:50:00 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=sender:from:to:cc:subject:date:message-id:in-reply-to:references
 :mime-version:content-transfer-encoding;
 bh=Ci3lKGhT7CaGmfO0merQ+/KTVofaj9NyV7u1/2UAP3w=;
 b=HTr3GqMeiMrHSFVz09gNzzyGE5/NusjcYwPUPJ9+Az6tEcg4A2YjJthJc8BKS0f3xF
 F4+ebiouyz8NRclvNRqup/GDrEN4F9p2jviH4u+BYG+0fxPmcN8XcbhSAoh6SFqU4Zw0
 FH/IpcWFj2yRtZ9xJLuf96x1ST8pDEf9hbCzZCPwmk0sO+6vtZTFj/sU4oryrL3pJ5Li
 Z1W2ONSpiFn0BO3jkg80P04ozix3y5MCMrcei0bm8NOU72QICc6nO3mwQ0RF77ptyVlN
 TnLG2rz47FkHMnveffGYGtgCStlP8At2zi6ngwyWWC2wn+upypNWXZkbAwAcNzprcGts
 BnOA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:sender:from:to:cc:subject:date:message-id
 :in-reply-to:references:mime-version:content-transfer-encoding;
 bh=Ci3lKGhT7CaGmfO0merQ+/KTVofaj9NyV7u1/2UAP3w=;
 b=FH6YksVs3O4etbI8wdq8gEuWnxDOQ/P90Pii+Ro7zYi8TLhmAH66yfx1oNDdvuLXd9
 z3bg86sTl+VEIIS+BTFeZT1OyCbeohdS0Em8WDbkODDWmLJ6SG6Ffrl46/oVQTyQlzCp
 fLMKTK8eqX/Qwjl1DLKcEL2TlQi9LMlcx5tz31IY24DfSQ2hNlUsL9OUFLfE9tpFkjx7
 2BSoJGFqsVdX50ppiiaNUhW5xg6MJfvo+OFvrWFnun6YXnlnov2CdS9UNEkeaWbri/yE
 McZFyBRhXlcRVB1EO/a7/V/OMdpIhh5M2pHVVAPY6Ux9FYFii0LqBj1sc3GLCJwzpRR1
 yKEg==
X-Gm-Message-State: AOAM533ev3KBkowerYAlNWi8Qf7x3iApbUxkVyc12H8CXHubc9ZV0tpv
 O+sRgNi5mmr8hm32wvkSvmA=
X-Google-Smtp-Source: ABdhPJxDJ13C73XgkZIyjGV7LxF7eBLsgTcF0RFnIfpAnAwchRq5JuWHl5cvDUWZqd//9F+vT3r0og==
X-Received: by 2002:a1c:b386:: with SMTP id
 c128mr43397378wmf.133.1593874199940; 
 Sat, 04 Jul 2020 07:49:59 -0700 (PDT)
Received: from localhost.localdomain (1.red-83-51-162.dynamicip.rima-tde.net.
 [83.51.162.1])
 by smtp.gmail.com with ESMTPSA id r10sm17135019wrm.17.2020.07.04.07.49.58
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sat, 04 Jul 2020 07:49:59 -0700 (PDT)
From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>
To: qemu-devel@nongnu.org,
	BALATON Zoltan <balaton@eik.bme.hu>
Subject: [PATCH 06/26] hw/usb/hcd-dwc2: Remove unnecessary includes
Date: Sat,  4 Jul 2020 16:49:23 +0200
Message-Id: <20200704144943.18292-7-f4bug@amsat.org>
X-Mailer: git-send-email 2.21.3
In-Reply-To: <20200704144943.18292-1-f4bug@amsat.org>
References: <20200704144943.18292-1-f4bug@amsat.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Peter Maydell <peter.maydell@linaro.org>,
 "Michael S. Tsirkin" <mst@redhat.com>,
 Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>,
 Jiaxun Yang <jiaxun.yang@flygoat.com>, Gerd Hoffmann <kraxel@redhat.com>,
 "Edgar E. Iglesias" <edgar.iglesias@gmail.com>,
 Huacai Chen <chenhc@lemote.com>, Stefano Stabellini <sstabellini@kernel.org>,
 xen-devel@lists.xenproject.org, Yoshinori Sato <ysato@users.sourceforge.jp>,
 Paul Durrant <paul@xen.org>, Magnus Damm <magnus.damm@gmail.com>,
 Markus Armbruster <armbru@redhat.com>,
 =?UTF-8?q?Herv=C3=A9=20Poussineau?= <hpoussin@reactos.org>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 Anthony Perard <anthony.perard@citrix.com>,
 Samuel Thibault <samuel.thibault@ens-lyon.org>,
 Leif Lindholm <leif@nuviainc.com>, Andrzej Zaborowski <balrogg@gmail.com>,
 Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
 Eduardo Habkost <ehabkost@redhat.com>,
 Alistair Francis <alistair@alistair23.me>,
 "Dr. David Alan Gilbert" <dgilbert@redhat.com>,
 Beniamino Galvani <b.galvani@gmail.com>,
 Niek Linnenbank <nieklinnenbank@gmail.com>, qemu-arm@nongnu.org,
 =?UTF-8?q?Marc-Andr=C3=A9=20Lureau?= <marcandre.lureau@redhat.com>,
 Richard Henderson <rth@twiddle.net>,
 Radoslaw Biernacki <radoslaw.biernacki@linaro.org>,
 Igor Mitsyanko <i.mitsyanko@gmail.com>,
 =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>,
 =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>,
 Paul Zimmerman <pauldzim@gmail.com>, qemu-ppc@nongnu.org,
 David Gibson <david@gibson.dropbear.id.au>,
 Paolo Bonzini <pbonzini@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

"qemu/error-report.h" and "qemu/main-loop.h" are not used.
Remove them.

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
---
 hw/usb/hcd-dwc2.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/hw/usb/hcd-dwc2.c b/hw/usb/hcd-dwc2.c
index 72cbd051f3..590e75b455 100644
--- a/hw/usb/hcd-dwc2.c
+++ b/hw/usb/hcd-dwc2.c
@@ -39,8 +39,6 @@
 #include "migration/vmstate.h"
 #include "trace.h"
 #include "qemu/log.h"
-#include "qemu/error-report.h"
-#include "qemu/main-loop.h"
 #include "hw/qdev-properties.h"
 
 #define USB_HZ_FS       12000000
-- 
2.21.3



From xen-devel-bounces@lists.xenproject.org Sat Jul 04 14:50:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 04 Jul 2020 14:50:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jrjUn-00042Z-PR; Sat, 04 Jul 2020 14:50:25 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hHao=AP=gmail.com=philippe.mathieu.daude@srs-us1.protection.inumbo.net>)
 id 1jrjUm-0003ES-PI
 for xen-devel@lists.xenproject.org; Sat, 04 Jul 2020 14:50:24 +0000
X-Inumbo-ID: a3f9bdae-be05-11ea-bca7-bc764e2007e4
Received: from mail-wr1-x441.google.com (unknown [2a00:1450:4864:20::441])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a3f9bdae-be05-11ea-bca7-bc764e2007e4;
 Sat, 04 Jul 2020 14:50:02 +0000 (UTC)
Received: by mail-wr1-x441.google.com with SMTP id z2so13504664wrp.2
 for <xen-devel@lists.xenproject.org>; Sat, 04 Jul 2020 07:50:02 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=sender:from:to:cc:subject:date:message-id:in-reply-to:references
 :mime-version:content-transfer-encoding;
 bh=/SEb7i40/nbfV3CvB7T6xkytpAoG+YE52veZZ5mv5j4=;
 b=M86GQvVjqh2GOpnc8959nIEqLnIltzOWwA63YgcRTjTllNzsFeKa2AxLUplFe59nVZ
 EU5gEhxt62SSXh8q7000ETaWmRIbSKTzrgZ47MtBnXMnZm75mqtFngMn13ekv5iU3uR1
 jZH+QjvY71cexOwBR/zzvmBgICLG+ZXIDZGxG3evDVbBXwFvFbeNbh4ioXGgUGUaFauF
 lbh9Jk61RelYOfy760OtZib/qtDn/T5+sGmZ3flNc8ml4r0kTxIqEmHI5Z71CCawDYVS
 JHWip9FEybZwE41MmK/BYGDg85WJYNaGdThCFnN87eeYykN1tf2mZJKQABQuft2b5FSo
 Wb9w==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:sender:from:to:cc:subject:date:message-id
 :in-reply-to:references:mime-version:content-transfer-encoding;
 bh=/SEb7i40/nbfV3CvB7T6xkytpAoG+YE52veZZ5mv5j4=;
 b=bCE1XVYMh3IC8dFrgKf3Cj4UFCjctmDCQDfJOOItTltY0F14a3+Y8CK9FrrAhjAnaB
 9mJg2Z5ps8fNgYhDySiSB4OfOZjG5zr46zHP7TMC4Mp9vImUlBf4up/CWYKx4swvOIFA
 tuDL5OP7AA9y//uVI0qcV7+e65+woOkYhuPdMbBF3MDSL6f/qDOYuolrc5PzNeTOCtZX
 YGWxWauX30VhBFf0qr4FB1c1vZJ8SX/GY/JwCRiFFchv3dZ9LwQmroFP/QPFtfMnZZ21
 qxi5ZRrkQHpep9sWKw6YAcpd2sB/G5NLu7IwcwabnL0TbDqGx5T3BUMFsImJbxAufrS6
 g7yQ==
X-Gm-Message-State: AOAM5333hzYQfqTsKlu9JSlSrja7buGoXQhZYT5Lh/Z0lkVrT9CGIbZ5
 rTa5x8kQ9vY5b4Z4pVMfR2Q=
X-Google-Smtp-Source: ABdhPJzU2sv77zfdcGa3LX0JoxfFviQ6DXsIevpTW1tzmmUhaY0yZ73I5eZpKHBXzbQobqiSJf7dUQ==
X-Received: by 2002:adf:8b50:: with SMTP id v16mr43284603wra.188.1593874202076; 
 Sat, 04 Jul 2020 07:50:02 -0700 (PDT)
Received: from localhost.localdomain (1.red-83-51-162.dynamicip.rima-tde.net.
 [83.51.162.1])
 by smtp.gmail.com with ESMTPSA id r10sm17135019wrm.17.2020.07.04.07.50.00
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sat, 04 Jul 2020 07:50:01 -0700 (PDT)
From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>
To: qemu-devel@nongnu.org,
	BALATON Zoltan <balaton@eik.bme.hu>
Subject: [PATCH 07/26] hw/usb/hcd-dwc2: Restrict some headers to source
Date: Sat,  4 Jul 2020 16:49:24 +0200
Message-Id: <20200704144943.18292-8-f4bug@amsat.org>
X-Mailer: git-send-email 2.21.3
In-Reply-To: <20200704144943.18292-1-f4bug@amsat.org>
References: <20200704144943.18292-1-f4bug@amsat.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Peter Maydell <peter.maydell@linaro.org>,
 "Michael S. Tsirkin" <mst@redhat.com>,
 Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>,
 Jiaxun Yang <jiaxun.yang@flygoat.com>, Gerd Hoffmann <kraxel@redhat.com>,
 "Edgar E. Iglesias" <edgar.iglesias@gmail.com>,
 Huacai Chen <chenhc@lemote.com>, Stefano Stabellini <sstabellini@kernel.org>,
 xen-devel@lists.xenproject.org, Yoshinori Sato <ysato@users.sourceforge.jp>,
 Paul Durrant <paul@xen.org>, Magnus Damm <magnus.damm@gmail.com>,
 Markus Armbruster <armbru@redhat.com>,
 =?UTF-8?q?Herv=C3=A9=20Poussineau?= <hpoussin@reactos.org>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 Anthony Perard <anthony.perard@citrix.com>,
 Samuel Thibault <samuel.thibault@ens-lyon.org>,
 Leif Lindholm <leif@nuviainc.com>, Andrzej Zaborowski <balrogg@gmail.com>,
 Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
 Eduardo Habkost <ehabkost@redhat.com>,
 Alistair Francis <alistair@alistair23.me>,
 "Dr. David Alan Gilbert" <dgilbert@redhat.com>,
 Beniamino Galvani <b.galvani@gmail.com>,
 Niek Linnenbank <nieklinnenbank@gmail.com>, qemu-arm@nongnu.org,
 =?UTF-8?q?Marc-Andr=C3=A9=20Lureau?= <marcandre.lureau@redhat.com>,
 Richard Henderson <rth@twiddle.net>,
 Radoslaw Biernacki <radoslaw.biernacki@linaro.org>,
 Igor Mitsyanko <i.mitsyanko@gmail.com>,
 =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>,
 =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>,
 Paul Zimmerman <pauldzim@gmail.com>, qemu-ppc@nongnu.org,
 David Gibson <david@gibson.dropbear.id.au>,
 Paolo Bonzini <pbonzini@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

The header "hw/usb/hcd-dwc2.h" doesn't need to include "qemu/timer.h",
"sysemu/dma.h", or "hw/irq.h": the types it requires are all
forward-declared. Include them instead in the source file, which is
the only place the function declarations are needed.

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
---
 hw/usb/hcd-dwc2.h | 3 ---
 hw/usb/hcd-dwc2.c | 3 +++
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/hw/usb/hcd-dwc2.h b/hw/usb/hcd-dwc2.h
index 4ba809a07b..2adf0f53c7 100644
--- a/hw/usb/hcd-dwc2.h
+++ b/hw/usb/hcd-dwc2.h
@@ -19,11 +19,8 @@
 #ifndef HW_USB_DWC2_H
 #define HW_USB_DWC2_H
 
-#include "qemu/timer.h"
-#include "hw/irq.h"
 #include "hw/sysbus.h"
 #include "hw/usb.h"
-#include "sysemu/dma.h"
 
 #define DWC2_MMIO_SIZE      0x11000
 
diff --git a/hw/usb/hcd-dwc2.c b/hw/usb/hcd-dwc2.c
index 590e75b455..ccf05d0823 100644
--- a/hw/usb/hcd-dwc2.c
+++ b/hw/usb/hcd-dwc2.c
@@ -36,8 +36,11 @@
 #include "qapi/error.h"
 #include "hw/usb/dwc2-regs.h"
 #include "hw/usb/hcd-dwc2.h"
+#include "hw/irq.h"
+#include "sysemu/dma.h"
 #include "migration/vmstate.h"
 #include "trace.h"
+#include "qemu/timer.h"
 #include "qemu/log.h"
 #include "hw/qdev-properties.h"
 
-- 
2.21.3



From xen-devel-bounces@lists.xenproject.org Sat Jul 04 14:50:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 04 Jul 2020 14:50:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jrjUt-00045P-2t; Sat, 04 Jul 2020 14:50:31 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hHao=AP=gmail.com=philippe.mathieu.daude@srs-us1.protection.inumbo.net>)
 id 1jrjUr-0003ES-Pd
 for xen-devel@lists.xenproject.org; Sat, 04 Jul 2020 14:50:29 +0000
X-Inumbo-ID: a52e31be-be05-11ea-bca7-bc764e2007e4
Received: from mail-wr1-x444.google.com (unknown [2a00:1450:4864:20::444])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a52e31be-be05-11ea-bca7-bc764e2007e4;
 Sat, 04 Jul 2020 14:50:04 +0000 (UTC)
Received: by mail-wr1-x444.google.com with SMTP id z15so24483030wrl.8
 for <xen-devel@lists.xenproject.org>; Sat, 04 Jul 2020 07:50:04 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=sender:from:to:cc:subject:date:message-id:in-reply-to:references
 :mime-version:content-transfer-encoding;
 bh=8KnwDCFnDP2p4VznLGchEQ7MhbCDHoouF9hb3OmMvFo=;
 b=qnXK+ln5rQy/kZGC3Fh2RFx/8LLRy2FzRaODh+swFw0DtexOC034PCwojAmD2p2qEc
 +xBeYKP6Sea6ucH8fkvrVNNAXzAZNbHz5N4CojECMrqUJVqYMYiMaro3hTSh93ifaaQa
 XZwymWtZAF5r5fjYDxoQfxgBDwsrJ+DJMbXURYjcQX+7tdru4xwSDdEYfIB4ftoZAWW3
 odapVHwJEugfsQL3deagFHvaMemeZ1/ffZNaL8ml9YhUlAPZWcmqAc9Ldb9FPwIY49F5
 nCg4ELh3hzoGogZjdC9IsUHrjWxHgtGLNK9IssYFB3ec8/8vdv8bxlDIDWrs/Yk59fX7
 Perw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:sender:from:to:cc:subject:date:message-id
 :in-reply-to:references:mime-version:content-transfer-encoding;
 bh=8KnwDCFnDP2p4VznLGchEQ7MhbCDHoouF9hb3OmMvFo=;
 b=Xc/87NXteoX8tE+sTZK7x9T9bgVBFJUbD81cRZ8Rd7hXP17kJXIlcna6HY0OPqx6CX
 7eOZAzi6O1YybqC39JQuf+LUWWetbc+DqPfc9pl9G2nOTnfwmM0aO2BJ5ylZOKUqqe74
 MwwWElhMghcMCfiKAyEo6dfu2AEqQT2uwZk2KFDNpj0dc1JaXq8zBhHkZdKPjvGM2nJ8
 l4DraPTmm53Y6cagBbkJL0BMJwEaO9F8avrnx1Lstbr28zkhQRSo2+VovZhFUBto8Xli
 JWVKl6Sd8Xngnwz+jgAG11aqyQB+ifayMgF8pY5R4qmK/nRpBv6jEd2CK0V+798HyKLD
 9ogA==
X-Gm-Message-State: AOAM53250OkZhW0WUA4vwMqxG+14/Tgv2pE9SeeQnu2LtBFruf/l+nKV
 ZRnjqJt/KCSdvFBBcQWtD+Q=
X-Google-Smtp-Source: ABdhPJy5uEuHNE6Qu6FF+bwIyX64Em7+RTT3tgkuG40gaTm44oygCnXfGMDpLMWbQQ/T4oOMWH2j6A==
X-Received: by 2002:a5d:5381:: with SMTP id d1mr41607272wrv.177.1593874204051; 
 Sat, 04 Jul 2020 07:50:04 -0700 (PDT)
Received: from localhost.localdomain (1.red-83-51-162.dynamicip.rima-tde.net.
 [83.51.162.1])
 by smtp.gmail.com with ESMTPSA id r10sm17135019wrm.17.2020.07.04.07.50.02
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sat, 04 Jul 2020 07:50:03 -0700 (PDT)
From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>
To: qemu-devel@nongnu.org,
	BALATON Zoltan <balaton@eik.bme.hu>
Subject: [PATCH 08/26] hw/usb/hcd-dwc2: Restrict 'dwc2-regs.h' scope
Date: Sat,  4 Jul 2020 16:49:25 +0200
Message-Id: <20200704144943.18292-9-f4bug@amsat.org>
X-Mailer: git-send-email 2.21.3
In-Reply-To: <20200704144943.18292-1-f4bug@amsat.org>
References: <20200704144943.18292-1-f4bug@amsat.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Peter Maydell <peter.maydell@linaro.org>,
 "Michael S. Tsirkin" <mst@redhat.com>,
 Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>,
 Jiaxun Yang <jiaxun.yang@flygoat.com>, Gerd Hoffmann <kraxel@redhat.com>,
 "Edgar E. Iglesias" <edgar.iglesias@gmail.com>,
 Huacai Chen <chenhc@lemote.com>, Stefano Stabellini <sstabellini@kernel.org>,
 xen-devel@lists.xenproject.org, Yoshinori Sato <ysato@users.sourceforge.jp>,
 Paul Durrant <paul@xen.org>, Magnus Damm <magnus.damm@gmail.com>,
 Markus Armbruster <armbru@redhat.com>,
 =?UTF-8?q?Herv=C3=A9=20Poussineau?= <hpoussin@reactos.org>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 Anthony Perard <anthony.perard@citrix.com>,
 Samuel Thibault <samuel.thibault@ens-lyon.org>,
 Leif Lindholm <leif@nuviainc.com>, Andrzej Zaborowski <balrogg@gmail.com>,
 Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
 Eduardo Habkost <ehabkost@redhat.com>,
 Alistair Francis <alistair@alistair23.me>,
 "Dr. David Alan Gilbert" <dgilbert@redhat.com>,
 Beniamino Galvani <b.galvani@gmail.com>,
 Niek Linnenbank <nieklinnenbank@gmail.com>, qemu-arm@nongnu.org,
 =?UTF-8?q?Marc-Andr=C3=A9=20Lureau?= <marcandre.lureau@redhat.com>,
 Richard Henderson <rth@twiddle.net>,
 Radoslaw Biernacki <radoslaw.biernacki@linaro.org>,
 Igor Mitsyanko <i.mitsyanko@gmail.com>,
 =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>,
 =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>,
 Paul Zimmerman <pauldzim@gmail.com>, qemu-ppc@nongnu.org,
 David Gibson <david@gibson.dropbear.id.au>,
 Paolo Bonzini <pbonzini@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

We only use these register definitions in files under the
hw/usb/ directory. Keep that header local by moving it there.

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
---
 {include/hw => hw}/usb/dwc2-regs.h | 0
 hw/usb/hcd-dwc2.c                  | 2 +-
 2 files changed, 1 insertion(+), 1 deletion(-)
 rename {include/hw => hw}/usb/dwc2-regs.h (100%)

diff --git a/include/hw/usb/dwc2-regs.h b/hw/usb/dwc2-regs.h
similarity index 100%
rename from include/hw/usb/dwc2-regs.h
rename to hw/usb/dwc2-regs.h
diff --git a/hw/usb/hcd-dwc2.c b/hw/usb/hcd-dwc2.c
index ccf05d0823..252b60ef65 100644
--- a/hw/usb/hcd-dwc2.c
+++ b/hw/usb/hcd-dwc2.c
@@ -34,7 +34,6 @@
 #include "qemu/osdep.h"
 #include "qemu/units.h"
 #include "qapi/error.h"
-#include "hw/usb/dwc2-regs.h"
 #include "hw/usb/hcd-dwc2.h"
 #include "hw/irq.h"
 #include "sysemu/dma.h"
@@ -43,6 +42,7 @@
 #include "qemu/timer.h"
 #include "qemu/log.h"
 #include "hw/qdev-properties.h"
+#include "dwc2-regs.h"
 
 #define USB_HZ_FS       12000000
 #define USB_HZ_HS       96000000
-- 
2.21.3



From xen-devel-bounces@lists.xenproject.org Sat Jul 04 14:50:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 04 Jul 2020 14:50:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jrjUy-00048H-D1; Sat, 04 Jul 2020 14:50:36 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hHao=AP=gmail.com=philippe.mathieu.daude@srs-us1.protection.inumbo.net>)
 id 1jrjUw-0003ES-Pr
 for xen-devel@lists.xenproject.org; Sat, 04 Jul 2020 14:50:34 +0000
X-Inumbo-ID: a6634f6a-be05-11ea-bb8b-bc764e2007e4
Received: from mail-wr1-x433.google.com (unknown [2a00:1450:4864:20::433])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a6634f6a-be05-11ea-bb8b-bc764e2007e4;
 Sat, 04 Jul 2020 14:50:06 +0000 (UTC)
Received: by mail-wr1-x433.google.com with SMTP id f18so27702015wrs.0
 for <xen-devel@lists.xenproject.org>; Sat, 04 Jul 2020 07:50:06 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=sender:from:to:cc:subject:date:message-id:in-reply-to:references
 :mime-version:content-transfer-encoding;
 bh=6Mlo0AeYRU65Ls2fCVkmevBd3nH8ZXr33Z7OmOj+vXU=;
 b=gODFZA0NwiEg34xoLUZooBEFePqRiHvzOgk7GoIz2n1rFGd4QSl0yPFHs0kMpTwfun
 VxbOHiWGiK7hGkJcR55uhGHJOkSnTOylYJNNNeJNFX5MQ1AJWFLxthC0JxYRXwwA8UD6
 0T4W6OXB8JWENKKs9BjOmQtMx26PqxWLkNEqphVLzbN6BpKPqkGpMh3IysQmS3XaqFCH
 w/AkpcTcynoV0yqxNIqLzGxVmIkT1wEWEZ/2m+7m7ZFEkqqyiCInBV+iToCedjNeVAij
 I8W2Zmy8BwxXfoflWqBcUrD9nDoivwv8OJoVTnLun2Tiql7zQaqUo+6b6+3cW7+Kchj4
 DYZA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:sender:from:to:cc:subject:date:message-id
 :in-reply-to:references:mime-version:content-transfer-encoding;
 bh=6Mlo0AeYRU65Ls2fCVkmevBd3nH8ZXr33Z7OmOj+vXU=;
 b=pMmpzGN3Jn9yhmi/+4hWkDRbZSGJbXLJdy9AA1ygmj44la6DrbZLsx3+nPqHQfo1qk
 sdwTt1iiHw0DNxnlVP8fc6MfihBShS/xKrD0rw4bHGrhKuTc0dgNY4N1Ed0RBeD4ua/i
 7Qt1sITlAlSlZIS63/PfouCelOo9AeFK5kNOyI3cIiXwxUVC+m1fb5pqbgRI0hFgr+Io
 S85tYEIYMGO9n9cZA8T5oKH9FfWYD7zLOG671ywULlx8B81WP6uPrUnywwVMqZrJSWAh
 w6u54D9bBtacB6MPb/cl7HZdx15GSPECMENA+e3XMFFdTFRrLw6yfZLnX+0QV0yL8yes
 xRwg==
X-Gm-Message-State: AOAM531CnM3aEYxrYy+v0HIW+K42E9Si9cdTSE0VO6tsYzraPOkGjV3K
 pg/DoljnFlHJyrCuwZNHjqA=
X-Google-Smtp-Source: ABdhPJx90xoS99/KVAVNosR0acMctdxAlHMTJW7Yd8LCiWu1nVe8JG8D7h3cg324kUXYXFvYsOB19w==
X-Received: by 2002:adf:f504:: with SMTP id q4mr40505370wro.163.1593874206130; 
 Sat, 04 Jul 2020 07:50:06 -0700 (PDT)
Received: from localhost.localdomain (1.red-83-51-162.dynamicip.rima-tde.net.
 [83.51.162.1])
 by smtp.gmail.com with ESMTPSA id r10sm17135019wrm.17.2020.07.04.07.50.04
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sat, 04 Jul 2020 07:50:05 -0700 (PDT)
From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>
To: qemu-devel@nongnu.org,
	BALATON Zoltan <balaton@eik.bme.hu>
Subject: [PATCH 09/26] hw/usb/hcd-ehci: Remove unnecessary include
Date: Sat,  4 Jul 2020 16:49:26 +0200
Message-Id: <20200704144943.18292-10-f4bug@amsat.org>
X-Mailer: git-send-email 2.21.3
In-Reply-To: <20200704144943.18292-1-f4bug@amsat.org>
References: <20200704144943.18292-1-f4bug@amsat.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Peter Maydell <peter.maydell@linaro.org>,
 "Michael S. Tsirkin" <mst@redhat.com>,
 Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>,
 Jiaxun Yang <jiaxun.yang@flygoat.com>, Gerd Hoffmann <kraxel@redhat.com>,
 "Edgar E. Iglesias" <edgar.iglesias@gmail.com>,
 Huacai Chen <chenhc@lemote.com>, Stefano Stabellini <sstabellini@kernel.org>,
 xen-devel@lists.xenproject.org, Yoshinori Sato <ysato@users.sourceforge.jp>,
 Paul Durrant <paul@xen.org>, Magnus Damm <magnus.damm@gmail.com>,
 Markus Armbruster <armbru@redhat.com>,
 =?UTF-8?q?Herv=C3=A9=20Poussineau?= <hpoussin@reactos.org>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 Anthony Perard <anthony.perard@citrix.com>,
 Samuel Thibault <samuel.thibault@ens-lyon.org>,
 Leif Lindholm <leif@nuviainc.com>, Andrzej Zaborowski <balrogg@gmail.com>,
 Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
 Eduardo Habkost <ehabkost@redhat.com>,
 Alistair Francis <alistair@alistair23.me>,
 "Dr. David Alan Gilbert" <dgilbert@redhat.com>,
 Beniamino Galvani <b.galvani@gmail.com>,
 Niek Linnenbank <nieklinnenbank@gmail.com>, qemu-arm@nongnu.org,
 =?UTF-8?q?Marc-Andr=C3=A9=20Lureau?= <marcandre.lureau@redhat.com>,
 Richard Henderson <rth@twiddle.net>,
 Radoslaw Biernacki <radoslaw.biernacki@linaro.org>,
 Igor Mitsyanko <i.mitsyanko@gmail.com>,
 =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>,
 =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>,
 Paul Zimmerman <pauldzim@gmail.com>, qemu-ppc@nongnu.org,
 David Gibson <david@gibson.dropbear.id.au>,
 Paolo Bonzini <pbonzini@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

As "qemu/main-loop.h" is not used, remove it.

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
---
 hw/usb/hcd-ehci.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/hw/usb/hcd-ehci.c b/hw/usb/hcd-ehci.c
index 1495e8f7fa..256fb91e0c 100644
--- a/hw/usb/hcd-ehci.c
+++ b/hw/usb/hcd-ehci.c
@@ -34,7 +34,6 @@
 #include "migration/vmstate.h"
 #include "trace.h"
 #include "qemu/error-report.h"
-#include "qemu/main-loop.h"
 #include "sysemu/runstate.h"
 
 #define FRAME_TIMER_FREQ 1000
-- 
2.21.3



From xen-devel-bounces@lists.xenproject.org Sat Jul 04 14:56:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 04 Jul 2020 14:56:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jrjaV-0004r2-4R; Sat, 04 Jul 2020 14:56:19 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hHao=AP=gmail.com=philippe.mathieu.daude@srs-us1.protection.inumbo.net>)
 id 1jrjV6-0003ES-QP
 for xen-devel@lists.xenproject.org; Sat, 04 Jul 2020 14:50:44 +0000
X-Inumbo-ID: a8a747b8-be05-11ea-8496-bc764e2007e4
Received: from mail-wm1-x341.google.com (unknown [2a00:1450:4864:20::341])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a8a747b8-be05-11ea-8496-bc764e2007e4;
 Sat, 04 Jul 2020 14:50:10 +0000 (UTC)
Received: by mail-wm1-x341.google.com with SMTP id o2so37044820wmh.2
 for <xen-devel@lists.xenproject.org>; Sat, 04 Jul 2020 07:50:10 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=sender:from:to:cc:subject:date:message-id:in-reply-to:references
 :mime-version:content-transfer-encoding;
 bh=Qlr3O8z8K3sxKCQEA3LIB47YWANsKrmr6NiJfPzW+pA=;
 b=BjegYO4FHLxpds48YkBBFzRrjEv1/GP+e4ATsl6+otTSGUY/901sR8dfeFezH6YE/H
 gyqcHtJDTSjMR3xSfApYj8hqtz/wSqo+AhH4cF/fm2j+9m/n0GaYISJPWJ/RwXFwMv0i
 zF35Mon1ROOvzuyLQlhn2ubfKHpMR6u+d6BRgIWzYqjNEGrlhI70Re1P2IMAC6x/WVCy
 UpAhaCc/lRUYHliXs64Cz+eMjpdgUsURvkEK6XHbHXGDgqkC12TEmbUML5S9HZLBsrDI
 CfvgBiMtwGNU5/W+nM99z3Uhdqnhwsjs6JgDwP2puNTcMnyTpTn0mUq5lKsTdp+nUtSr
 CzWg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:sender:from:to:cc:subject:date:message-id
 :in-reply-to:references:mime-version:content-transfer-encoding;
 bh=Qlr3O8z8K3sxKCQEA3LIB47YWANsKrmr6NiJfPzW+pA=;
 b=ifhHzi4AVl6PBngXYnFGLKyc1CcaCjs4i2HYEZFbggnJTWoM1zOJIXZLpwA9vTpmw4
 n2XKyz0wb4R3QM4Y5tavkRbWJRhKdlCTin488UCSzdhhzTnYwmyYT1CUJHpk3VJ3Gsrw
 JbHZya1Y3YKEIWMB2tKgsYZArmor9C5KW7Y1SKowPy1/2ihKKGyZIhZSYY76e9C25VJL
 GO5fYVQy+FFrA0Y996MPkcpxOnhWRc1ixfBQAxUAATg99OHfTU/7sCBtFHulCRrd7eQO
 BzADbKlRxVJ+BAebuhTtyP7FFDTd2p2wCH+SbP8TE4JyD3/z6s0GNJySFoUu51OtjdYS
 Ix9w==
X-Gm-Message-State: AOAM531+X9+BpluPQyrIvrYSikDY9MbLfiFu20Alec20mbrxi6LSQqi9
 PAP+/bKWfPg15Hvemx4TM1s=
X-Google-Smtp-Source: ABdhPJwynHTB68DwvsWAVjiG8rSM/kHHHNQ70teJ+dR7LseKz9qmvzwpG0TxpNzjRrE6k7cH1DWiTw==
X-Received: by 2002:a1c:f007:: with SMTP id a7mr41379225wmb.103.1593874209918; 
 Sat, 04 Jul 2020 07:50:09 -0700 (PDT)
Received: from localhost.localdomain (1.red-83-51-162.dynamicip.rima-tde.net.
 [83.51.162.1])
 by smtp.gmail.com with ESMTPSA id r10sm17135019wrm.17.2020.07.04.07.50.08
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sat, 04 Jul 2020 07:50:09 -0700 (PDT)
From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>
To: qemu-devel@nongnu.org,
	BALATON Zoltan <balaton@eik.bme.hu>
Subject: [PATCH 11/26] hw/usb/hcd-xhci: Add missing header
Date: Sat,  4 Jul 2020 16:49:28 +0200
Message-Id: <20200704144943.18292-12-f4bug@amsat.org>
X-Mailer: git-send-email 2.21.3
In-Reply-To: <20200704144943.18292-1-f4bug@amsat.org>
References: <20200704144943.18292-1-f4bug@amsat.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Peter Maydell <peter.maydell@linaro.org>,
 "Michael S. Tsirkin" <mst@redhat.com>,
 Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>,
 Jiaxun Yang <jiaxun.yang@flygoat.com>, Gerd Hoffmann <kraxel@redhat.com>,
 "Edgar E. Iglesias" <edgar.iglesias@gmail.com>,
 Huacai Chen <chenhc@lemote.com>, Stefano Stabellini <sstabellini@kernel.org>,
 xen-devel@lists.xenproject.org, Yoshinori Sato <ysato@users.sourceforge.jp>,
 Paul Durrant <paul@xen.org>, Magnus Damm <magnus.damm@gmail.com>,
 Markus Armbruster <armbru@redhat.com>,
 =?UTF-8?q?Herv=C3=A9=20Poussineau?= <hpoussin@reactos.org>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 Anthony Perard <anthony.perard@citrix.com>,
 Samuel Thibault <samuel.thibault@ens-lyon.org>,
 Leif Lindholm <leif@nuviainc.com>, Andrzej Zaborowski <balrogg@gmail.com>,
 Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
 Eduardo Habkost <ehabkost@redhat.com>,
 Alistair Francis <alistair@alistair23.me>,
 "Dr. David Alan Gilbert" <dgilbert@redhat.com>,
 Beniamino Galvani <b.galvani@gmail.com>,
 Niek Linnenbank <nieklinnenbank@gmail.com>, qemu-arm@nongnu.org,
 =?UTF-8?q?Marc-Andr=C3=A9=20Lureau?= <marcandre.lureau@redhat.com>,
 Richard Henderson <rth@twiddle.net>,
 Radoslaw Biernacki <radoslaw.biernacki@linaro.org>,
 Igor Mitsyanko <i.mitsyanko@gmail.com>,
 =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>,
 =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>,
 Paul Zimmerman <pauldzim@gmail.com>, qemu-ppc@nongnu.org,
 David Gibson <david@gibson.dropbear.id.au>,
 Paolo Bonzini <pbonzini@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

This header uses the USBPort type, which is forward-declared
in "hw/usb.h", so include that header explicitly.

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
---
 hw/usb/hcd-xhci.h | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/hw/usb/hcd-xhci.h b/hw/usb/hcd-xhci.h
index 946af51fc2..8edbdc2c3e 100644
--- a/hw/usb/hcd-xhci.h
+++ b/hw/usb/hcd-xhci.h
@@ -22,6 +22,8 @@
 #ifndef HW_USB_HCD_XHCI_H
 #define HW_USB_HCD_XHCI_H
 
+#include "hw/usb.h"
+
 #define TYPE_XHCI "base-xhci"
 #define TYPE_NEC_XHCI "nec-usb-xhci"
 #define TYPE_QEMU_XHCI "qemu-xhci"
-- 
2.21.3



From xen-devel-bounces@lists.xenproject.org Sat Jul 04 14:56:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 04 Jul 2020 14:56:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jrjag-0004tT-Fz; Sat, 04 Jul 2020 14:56:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hHao=AP=gmail.com=philippe.mathieu.daude@srs-us1.protection.inumbo.net>)
 id 1jrjVB-0003ES-Qj
 for xen-devel@lists.xenproject.org; Sat, 04 Jul 2020 14:50:49 +0000
X-Inumbo-ID: a9e8a022-be05-11ea-b7bb-bc764e2007e4
Received: from mail-wm1-x342.google.com (unknown [2a00:1450:4864:20::342])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a9e8a022-be05-11ea-b7bb-bc764e2007e4;
 Sat, 04 Jul 2020 14:50:12 +0000 (UTC)
Received: by mail-wm1-x342.google.com with SMTP id 22so34725437wmg.1
 for <xen-devel@lists.xenproject.org>; Sat, 04 Jul 2020 07:50:12 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=sender:from:to:cc:subject:date:message-id:in-reply-to:references
 :mime-version:content-transfer-encoding;
 bh=LIvjoiOYGW8qxH03j7d+uMoGgATq8ow1BJ8pj4MMInc=;
 b=K8OdEPUyesQSU/S5m5qok6/IcUiWOAZO7BMajTrjGs6JPbceMxtuZf8byY8tbJEGHX
 JEhvrKsqhFGfeStk45pdmkIx1Crd26UqdfRtBkDFTF1ZcEpMz4Y9f8h/NujfAZ8zhopU
 tcgLUIVhx4SR2VngYxvbMDeifVcrfHrvzA0aQukuywYfCwQ6dqY2ovVF7OMVfTiF2YPn
 DNxSIQBEYNKYXgGw5SLUtGuykIXaIqdYUBYTZrYeQbwEa/h1m7E4cIMUq8sIDEztAFSW
 sjbJhzGGYK8GyRliwIteCXYFpst00XbTUThRJIANJG3Mz51oqbQ9X/YLHxyeR4B28a9a
 b3LA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:sender:from:to:cc:subject:date:message-id
 :in-reply-to:references:mime-version:content-transfer-encoding;
 bh=LIvjoiOYGW8qxH03j7d+uMoGgATq8ow1BJ8pj4MMInc=;
 b=GBjuYTscOYDBbDgtp/qBQxOQkuwjlqJYKsftUl+tO8MLgS+KYzRv15oOIw73qK/mYy
 slmB4Hh5VioFNRHdC3snrMKbRL4C0WyKSnqgJ8qLZg8cfS0YAES3Jon3sEby/Bf+GTlE
 uGdCBoYhQAmzcOWinwaoVGZun5wCwvY564UPh91i5m9dkcnDLA3vKWeyKEb7rtg7L88F
 4Y9BbH9udZC3EaB06pzAE4ENe7C1/QPWQhIr55KOTBgiSaiQZCA/dPpGv+N3kTpBPlYk
 hzzcmyxC+eX1HRurvvQvgyIH732rDyrMuTSELaZOGBj/aXpzqE7B1Z/LAg97808YTsjG
 Ukng==
X-Gm-Message-State: AOAM531tAx7hfuExcOghhYP0TV06YfxaCwrCJ0mtqpzOIxi8FM83dbFU
 twg0iFait2LYekxyCIzZW3Y=
X-Google-Smtp-Source: ABdhPJxJqnEAihzmKK5m10VR2mbQ1Jco3QSfcvHgpyoRQzFTU2aqsRISnZhnGEZBTtk3/AGxed2vTA==
X-Received: by 2002:a7b:c313:: with SMTP id k19mr26876880wmj.67.1593874211964; 
 Sat, 04 Jul 2020 07:50:11 -0700 (PDT)
Received: from localhost.localdomain (1.red-83-51-162.dynamicip.rima-tde.net.
 [83.51.162.1])
 by smtp.gmail.com with ESMTPSA id r10sm17135019wrm.17.2020.07.04.07.50.10
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sat, 04 Jul 2020 07:50:11 -0700 (PDT)
From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>
To: qemu-devel@nongnu.org,
	BALATON Zoltan <balaton@eik.bme.hu>
Subject: [PATCH 12/26] hw/usb/hcd-musb: Restrict header scope
Date: Sat,  4 Jul 2020 16:49:29 +0200
Message-Id: <20200704144943.18292-13-f4bug@amsat.org>
X-Mailer: git-send-email 2.21.3
In-Reply-To: <20200704144943.18292-1-f4bug@amsat.org>
References: <20200704144943.18292-1-f4bug@amsat.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Peter Maydell <peter.maydell@linaro.org>,
 "Michael S. Tsirkin" <mst@redhat.com>,
 Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>,
 Jiaxun Yang <jiaxun.yang@flygoat.com>, Gerd Hoffmann <kraxel@redhat.com>,
 "Edgar E. Iglesias" <edgar.iglesias@gmail.com>,
 Huacai Chen <chenhc@lemote.com>, Stefano Stabellini <sstabellini@kernel.org>,
 xen-devel@lists.xenproject.org, Yoshinori Sato <ysato@users.sourceforge.jp>,
 Paul Durrant <paul@xen.org>, Magnus Damm <magnus.damm@gmail.com>,
 Markus Armbruster <armbru@redhat.com>,
 =?UTF-8?q?Herv=C3=A9=20Poussineau?= <hpoussin@reactos.org>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 Anthony Perard <anthony.perard@citrix.com>,
 Samuel Thibault <samuel.thibault@ens-lyon.org>,
 Leif Lindholm <leif@nuviainc.com>, Andrzej Zaborowski <balrogg@gmail.com>,
 Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
 Eduardo Habkost <ehabkost@redhat.com>,
 Alistair Francis <alistair@alistair23.me>,
 "Dr. David Alan Gilbert" <dgilbert@redhat.com>,
 Beniamino Galvani <b.galvani@gmail.com>,
 Niek Linnenbank <nieklinnenbank@gmail.com>, qemu-arm@nongnu.org,
 =?UTF-8?q?Marc-Andr=C3=A9=20Lureau?= <marcandre.lureau@redhat.com>,
 Richard Henderson <rth@twiddle.net>,
 Radoslaw Biernacki <radoslaw.biernacki@linaro.org>,
 Igor Mitsyanko <i.mitsyanko@gmail.com>,
 =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>,
 =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>,
 Paul Zimmerman <pauldzim@gmail.com>, qemu-ppc@nongnu.org,
 David Gibson <david@gibson.dropbear.id.au>,
 Paolo Bonzini <pbonzini@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

"hcd-musb.h" is only required by USB device implementations.
As we keep these implementations in the hw/usb/ directory,
move the header there.

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
---
 {include/hw => hw}/usb/hcd-musb.h | 0
 hw/usb/hcd-musb.c                 | 2 +-
 hw/usb/tusb6010.c                 | 2 +-
 3 files changed, 2 insertions(+), 2 deletions(-)
 rename {include/hw => hw}/usb/hcd-musb.h (100%)

diff --git a/include/hw/usb/hcd-musb.h b/hw/usb/hcd-musb.h
similarity index 100%
rename from include/hw/usb/hcd-musb.h
rename to hw/usb/hcd-musb.h
diff --git a/hw/usb/hcd-musb.c b/hw/usb/hcd-musb.c
index 85f5ff5bd4..b8d8766a4a 100644
--- a/hw/usb/hcd-musb.c
+++ b/hw/usb/hcd-musb.c
@@ -23,9 +23,9 @@
 #include "qemu/osdep.h"
 #include "qemu/timer.h"
 #include "hw/usb.h"
-#include "hw/usb/hcd-musb.h"
 #include "hw/irq.h"
 #include "hw/hw.h"
+#include "hcd-musb.h"
 
 /* Common USB registers */
 #define MUSB_HDRC_FADDR		0x00	/* 8-bit */
diff --git a/hw/usb/tusb6010.c b/hw/usb/tusb6010.c
index 27eb28d3e4..9f9b81b09d 100644
--- a/hw/usb/tusb6010.c
+++ b/hw/usb/tusb6010.c
@@ -23,11 +23,11 @@
 #include "qemu/module.h"
 #include "qemu/timer.h"
 #include "hw/usb.h"
-#include "hw/usb/hcd-musb.h"
 #include "hw/arm/omap.h"
 #include "hw/hw.h"
 #include "hw/irq.h"
 #include "hw/sysbus.h"
+#include "hcd-musb.h"
 
 #define TYPE_TUSB6010 "tusb6010"
 #define TUSB(obj) OBJECT_CHECK(TUSBState, (obj), TYPE_TUSB6010)
-- 
2.21.3



From xen-devel-bounces@lists.xenproject.org Sat Jul 04 14:56:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 04 Jul 2020 14:56:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jrjag-0004tr-PA; Sat, 04 Jul 2020 14:56:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hHao=AP=gmail.com=philippe.mathieu.daude@srs-us1.protection.inumbo.net>)
 id 1jrjVf-0003ES-SR
 for xen-devel@lists.xenproject.org; Sat, 04 Jul 2020 14:51:19 +0000
X-Inumbo-ID: b13882c0-be05-11ea-8496-bc764e2007e4
Received: from mail-wm1-x344.google.com (unknown [2a00:1450:4864:20::344])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b13882c0-be05-11ea-8496-bc764e2007e4;
 Sat, 04 Jul 2020 14:50:25 +0000 (UTC)
Received: by mail-wm1-x344.google.com with SMTP id o2so37045098wmh.2
 for <xen-devel@lists.xenproject.org>; Sat, 04 Jul 2020 07:50:25 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=sender:from:to:cc:subject:date:message-id:in-reply-to:references
 :mime-version:content-transfer-encoding;
 bh=dSp90TZcMkhYVN1myg6NdGD6Nbq0gZQK2ClNXRKzwCU=;
 b=mKjsT3NYlThJ+LvUmtlSMZy5vL935vqJl78XXQCOfn95V8qATFdthcq6p9KJZvenD6
 DffdeGJvs+v064CCp2bsGhza6HQ140vH4JRai4A3w+oS3o1wIspj2Cju5MMVDflqztas
 XqxwrLwqtwKNKEg6aekNLR2FAcywQVHY/ZPzG6wsvXMQx0+h11VfEUR7kqHBPv3peOBX
 gfriguM3HSKDBd5gYz5iWvxaoezeQw/GUCx443CDTct4IA4h1LI8tVQdQU/ugz11KIv+
 BiB3KOhE8U5/19gm2x+V5J/zs6cl0vA9D1NhrqwmqhZr47fh9n3rf96aJTJhrioPyh+C
 odGg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:sender:from:to:cc:subject:date:message-id
 :in-reply-to:references:mime-version:content-transfer-encoding;
 bh=dSp90TZcMkhYVN1myg6NdGD6Nbq0gZQK2ClNXRKzwCU=;
 b=TjEoeeww3uHlTl4SqPgds9lkfPdr9lA99tKeTbKV1xd7kTKos/CR6V6QPImASTalgP
 mUVrvpoxQZ1tbAt7AfuOhsNQtYQSSMEbVyG1daSbOMrQJgcuUuborwuhlJCQUGYFFLJ+
 ee8he9ruVrtEagavUesNFujBBbFEvZM8Q00iTmyXSIPOF6j5ZZ3GBXqHuY1t8sdunFWh
 nINCntwptk8+RC+CT3KvHLSHATXYWo1F9yjZWQ1hVHEueJX5cy2dM3t012D+TlW6YgLx
 p98NDjZCKKlmx4H4IVFllIk+N8hfpPjUI1BvNrr243NdkZ2Eoz7xwg3nLglNY4wtLcCH
 qZMA==
X-Gm-Message-State: AOAM530//WZ5nEDIk2F6F5xrdTLW7GWTZoPxU4QnUOdZ9roSjflskDyE
 AA4sqFjUmTD0lrKqeq2DPrQ=
X-Google-Smtp-Source: ABdhPJzvKUoOtrkVjzZQp6DwlQyiHYdZ94BKN5ozpyFIsmy9iEQycoO+prQidokC2C86DctB5wk5ig==
X-Received: by 2002:a1c:18e:: with SMTP id 136mr10977710wmb.93.1593874224206; 
 Sat, 04 Jul 2020 07:50:24 -0700 (PDT)
Received: from localhost.localdomain (1.red-83-51-162.dynamicip.rima-tde.net.
 [83.51.162.1])
 by smtp.gmail.com with ESMTPSA id r10sm17135019wrm.17.2020.07.04.07.50.22
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sat, 04 Jul 2020 07:50:23 -0700 (PDT)
From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>
To: qemu-devel@nongnu.org,
	BALATON Zoltan <balaton@eik.bme.hu>
Subject: [PATCH 18/26] hw/usb/bus: Add usb_get_port_path()
Date: Sat,  4 Jul 2020 16:49:35 +0200
Message-Id: <20200704144943.18292-19-f4bug@amsat.org>
X-Mailer: git-send-email 2.21.3
In-Reply-To: <20200704144943.18292-1-f4bug@amsat.org>
References: <20200704144943.18292-1-f4bug@amsat.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Peter Maydell <peter.maydell@linaro.org>,
 "Michael S. Tsirkin" <mst@redhat.com>,
 Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>,
 Jiaxun Yang <jiaxun.yang@flygoat.com>, Gerd Hoffmann <kraxel@redhat.com>,
 "Edgar E. Iglesias" <edgar.iglesias@gmail.com>,
 Huacai Chen <chenhc@lemote.com>, Stefano Stabellini <sstabellini@kernel.org>,
 xen-devel@lists.xenproject.org, Yoshinori Sato <ysato@users.sourceforge.jp>,
 Paul Durrant <paul@xen.org>, Magnus Damm <magnus.damm@gmail.com>,
 Markus Armbruster <armbru@redhat.com>,
 =?UTF-8?q?Herv=C3=A9=20Poussineau?= <hpoussin@reactos.org>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 Anthony Perard <anthony.perard@citrix.com>,
 Samuel Thibault <samuel.thibault@ens-lyon.org>,
 Leif Lindholm <leif@nuviainc.com>, Andrzej Zaborowski <balrogg@gmail.com>,
 Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
 Eduardo Habkost <ehabkost@redhat.com>,
 Alistair Francis <alistair@alistair23.me>,
 "Dr. David Alan Gilbert" <dgilbert@redhat.com>,
 Beniamino Galvani <b.galvani@gmail.com>,
 Niek Linnenbank <nieklinnenbank@gmail.com>, qemu-arm@nongnu.org,
 =?UTF-8?q?Marc-Andr=C3=A9=20Lureau?= <marcandre.lureau@redhat.com>,
 Richard Henderson <rth@twiddle.net>,
 Radoslaw Biernacki <radoslaw.biernacki@linaro.org>,
 Igor Mitsyanko <i.mitsyanko@gmail.com>,
 =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>,
 =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>,
 Paul Zimmerman <pauldzim@gmail.com>, qemu-ppc@nongnu.org,
 David Gibson <david@gibson.dropbear.id.au>,
 Paolo Bonzini <pbonzini@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Refactor usb_get_full_dev_path() into a common usb_get_dev_path()
helper that takes a 'want_full_path' argument, and add
usb_get_port_path() which returns the short port path.

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
---
 include/hw/usb.h | 10 ++++++++++
 hw/usb/bus.c     | 18 +++++++++++++-----
 2 files changed, 23 insertions(+), 5 deletions(-)

diff --git a/include/hw/usb.h b/include/hw/usb.h
index 8c3bc920ff..7ea502d421 100644
--- a/include/hw/usb.h
+++ b/include/hw/usb.h
@@ -506,6 +506,16 @@ void usb_port_location(USBPort *downstream, USBPort *upstream, int portnr);
 void usb_unregister_port(USBBus *bus, USBPort *port);
 void usb_claim_port(USBDevice *dev, Error **errp);
 void usb_release_port(USBDevice *dev);
+/**
+ * usb_get_port_path:
+ * @dev: the USB device
+ *
+ * The returned data must be released with g_free()
+ * when no longer required.
+ *
+ * Returns: a dynamically allocated pathname.
+ */
+char *usb_get_port_path(USBDevice *dev);
 void usb_device_attach(USBDevice *dev, Error **errp);
 int usb_device_detach(USBDevice *dev);
 void usb_check_attach(USBDevice *dev, Error **errp);
diff --git a/hw/usb/bus.c b/hw/usb/bus.c
index fad8194bf5..518e5b94ed 100644
--- a/hw/usb/bus.c
+++ b/hw/usb/bus.c
@@ -577,12 +577,10 @@ static void usb_bus_dev_print(Monitor *mon, DeviceState *qdev, int indent)
                    dev->attached ? ", attached" : "");
 }
 
-static char *usb_get_full_dev_path(DeviceState *qdev)
+static char *usb_get_dev_path(USBDevice *dev, bool want_full_path)
 {
-    USBDevice *dev = USB_DEVICE(qdev);
-
-    if (dev->flags & (1 << USB_DEV_FLAG_FULL_PATH)) {
-        DeviceState *hcd = qdev->parent_bus->parent;
+    if (want_full_path && (dev->flags & (1 << USB_DEV_FLAG_FULL_PATH))) {
+        DeviceState *hcd = DEVICE(dev)->parent_bus->parent;
         char *id = qdev_get_dev_path(hcd);
 
         if (id) {
@@ -594,6 +592,16 @@ static char *usb_get_full_dev_path(DeviceState *qdev)
     return g_strdup(dev->port->path);
 }
 
+static char *usb_get_full_dev_path(DeviceState *qdev)
+{
+    return usb_get_dev_path(USB_DEVICE(qdev), true);
+}
+
+char *usb_get_port_path(USBDevice *dev)
+{
+    return usb_get_dev_path(dev, false);
+}
+
 static char *usb_get_fw_dev_path(DeviceState *qdev)
 {
     USBDevice *dev = USB_DEVICE(qdev);
-- 
2.21.3



From xen-devel-bounces@lists.xenproject.org Sat Jul 04 14:56:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 04 Jul 2020 14:56:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jrjah-0004uv-2i; Sat, 04 Jul 2020 14:56:31 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hHao=AP=gmail.com=philippe.mathieu.daude@srs-us1.protection.inumbo.net>)
 id 1jrjVQ-0003ES-Rb
 for xen-devel@lists.xenproject.org; Sat, 04 Jul 2020 14:51:04 +0000
X-Inumbo-ID: ad8de00c-be05-11ea-bca7-bc764e2007e4
Received: from mail-wr1-x442.google.com (unknown [2a00:1450:4864:20::442])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ad8de00c-be05-11ea-bca7-bc764e2007e4;
 Sat, 04 Jul 2020 14:50:18 +0000 (UTC)
Received: by mail-wr1-x442.google.com with SMTP id z2so13505031wrp.2
 for <xen-devel@lists.xenproject.org>; Sat, 04 Jul 2020 07:50:18 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=sender:from:to:cc:subject:date:message-id:in-reply-to:references
 :mime-version:content-transfer-encoding;
 bh=F2y1uml1RkFfNsOb+m2iyzdDwj1uXo3NnuLQTrqdA9s=;
 b=XuFU4CBE8JCcfpdI/mR5sE2jyMeyBckQOwPMTtzwJf8CjscRTofonY09HxmwJm00NE
 hgXaGKtjBGKgBEytwuJ+1LT1UMvcBupnCU5041RZtHT4vwcBWQo/M/+GghnU6sLvVTLF
 Y0OX5yURCQg13pYSivq8FFPYj7wg8wOD1j0jENHOgxaRTzOtllVlw7cjPNFn2IWH1BIq
 Sm6KuoljtLR0LbF6gX5rfDBtKX6CprkJZYeXQzCIGgZcLZdQ6vORkbiRnRe5hvsbF/uS
 OWFgKWrvnxxqFL1fM5rMPeqOvIkLQG0cFB/v/CHa/YpQquR2m9B6/D1fPzqQ7wWU0wd9
 8j+w==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:sender:from:to:cc:subject:date:message-id
 :in-reply-to:references:mime-version:content-transfer-encoding;
 bh=F2y1uml1RkFfNsOb+m2iyzdDwj1uXo3NnuLQTrqdA9s=;
 b=H/G7fkqj0E3OCviW9kEBbWZdxAr5nwASou7X2g/Hfk+J8LIjzErTqtdIR9EZ4w2sVv
 PAMOOmkMhcyJIzpJYqbsNNa1KdMY7mYkdxO8Jf22guUgkz/FprHaMOeHyHG/ovQ8QLVe
 RnUCnVPNRt6AFI9oYuBFiwWOWFlASNomhFdvxAG0GAcJQxKgQyW9PBmSwQBj4XWZj9mI
 zl1E30xKWdyn8qS9iJgM1KDv1YuGvmlKQqPBrDSrGZSRu7pAMhA0McdmHOUx64+615nT
 KQQw+N3pNDITS2CWIDXetMJD0eqm/wgC+UbrwlwDbt65k4gNZgm4nXSFyTCH1GAzgbim
 zPWw==
X-Gm-Message-State: AOAM532/JnBDwUIaUrD2E0p1JxILIccKHN+e5nbylYeSaXBGT0x9lxIO
 RnVVGipftxcsKshHnCYibSg=
X-Google-Smtp-Source: ABdhPJyq/7eRAGeHxJJ4O+Zutc0Mt3EntxvTem4i1yYIv/0yUmC2RdRBoUXQL+VySe0UhHY6Lv3gOQ==
X-Received: by 2002:a5d:6a01:: with SMTP id m1mr43778986wru.115.1593874218068; 
 Sat, 04 Jul 2020 07:50:18 -0700 (PDT)
Received: from localhost.localdomain (1.red-83-51-162.dynamicip.rima-tde.net.
 [83.51.162.1])
 by smtp.gmail.com with ESMTPSA id r10sm17135019wrm.17.2020.07.04.07.50.16
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sat, 04 Jul 2020 07:50:17 -0700 (PDT)
From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>
To: qemu-devel@nongnu.org,
	BALATON Zoltan <balaton@eik.bme.hu>
Subject: [PATCH 15/26] hw/usb: Add new 'usb-quirks.h' local header
Date: Sat,  4 Jul 2020 16:49:32 +0200
Message-Id: <20200704144943.18292-16-f4bug@amsat.org>
X-Mailer: git-send-email 2.21.3
In-Reply-To: <20200704144943.18292-1-f4bug@amsat.org>
References: <20200704144943.18292-1-f4bug@amsat.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Peter Maydell <peter.maydell@linaro.org>,
 "Michael S. Tsirkin" <mst@redhat.com>,
 Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>,
 Jiaxun Yang <jiaxun.yang@flygoat.com>, Gerd Hoffmann <kraxel@redhat.com>,
 "Edgar E. Iglesias" <edgar.iglesias@gmail.com>,
 Huacai Chen <chenhc@lemote.com>, Stefano Stabellini <sstabellini@kernel.org>,
 xen-devel@lists.xenproject.org, Yoshinori Sato <ysato@users.sourceforge.jp>,
 Paul Durrant <paul@xen.org>, Magnus Damm <magnus.damm@gmail.com>,
 Markus Armbruster <armbru@redhat.com>,
 =?UTF-8?q?Herv=C3=A9=20Poussineau?= <hpoussin@reactos.org>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 Anthony Perard <anthony.perard@citrix.com>,
 Samuel Thibault <samuel.thibault@ens-lyon.org>,
 Leif Lindholm <leif@nuviainc.com>, Andrzej Zaborowski <balrogg@gmail.com>,
 Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
 Eduardo Habkost <ehabkost@redhat.com>,
 Alistair Francis <alistair@alistair23.me>,
 "Dr. David Alan Gilbert" <dgilbert@redhat.com>,
 Beniamino Galvani <b.galvani@gmail.com>,
 Niek Linnenbank <nieklinnenbank@gmail.com>, qemu-arm@nongnu.org,
 =?UTF-8?q?Marc-Andr=C3=A9=20Lureau?= <marcandre.lureau@redhat.com>,
 Richard Henderson <rth@twiddle.net>,
 Radoslaw Biernacki <radoslaw.biernacki@linaro.org>,
 Igor Mitsyanko <i.mitsyanko@gmail.com>,
 =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>,
 =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>,
 Paul Zimmerman <pauldzim@gmail.com>, qemu-ppc@nongnu.org,
 David Gibson <david@gibson.dropbear.id.au>,
 Paolo Bonzini <pbonzini@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Only redirect.c consumes the quirks API. Reduce the big "hw/usb.h"
header by moving the quirks-related declarations into their own
header. As nothing outside of hw/usb/ requires them, keep the new
header local to hw/usb/.

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
---
 hw/usb/usb-quirks.h | 27 +++++++++++++++++++++++++++
 include/hw/usb.h    | 11 -----------
 hw/usb/quirks.c     |  1 +
 hw/usb/redirect.c   |  1 +
 4 files changed, 29 insertions(+), 11 deletions(-)
 create mode 100644 hw/usb/usb-quirks.h

diff --git a/hw/usb/usb-quirks.h b/hw/usb/usb-quirks.h
new file mode 100644
index 0000000000..542889efc4
--- /dev/null
+++ b/hw/usb/usb-quirks.h
@@ -0,0 +1,27 @@
+/*
+ * USB quirk handling
+ *
+ * Copyright (c) 2012 Red Hat, Inc.
+ *
+ * Red Hat Authors:
+ * Hans de Goede <hdegoede@redhat.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ */
+
+#ifndef HW_USB_QUIRKS_H
+#define HW_USB_QUIRKS_H
+
+/* In bulk endpoints are streaming data sources (iow behave like isoc eps) */
+#define USB_QUIRK_BUFFER_BULK_IN        0x01
+/* Bulk pkts in FTDI format, need special handling when combining packets */
+#define USB_QUIRK_IS_FTDI               0x02
+
+int usb_get_quirks(uint16_t vendor_id, uint16_t product_id,
+                   uint8_t interface_class, uint8_t interface_subclass,
+                   uint8_t interface_protocol);
+
+#endif
diff --git a/include/hw/usb.h b/include/hw/usb.h
index 18f1349bdc..8c3bc920ff 100644
--- a/include/hw/usb.h
+++ b/include/hw/usb.h
@@ -549,15 +549,4 @@ int usb_device_alloc_streams(USBDevice *dev, USBEndpoint **eps, int nr_eps,
                              int streams);
 void usb_device_free_streams(USBDevice *dev, USBEndpoint **eps, int nr_eps);
 
-/* quirks.c */
-
-/* In bulk endpoints are streaming data sources (iow behave like isoc eps) */
-#define USB_QUIRK_BUFFER_BULK_IN	0x01
-/* Bulk pkts in FTDI format, need special handling when combining packets */
-#define USB_QUIRK_IS_FTDI		0x02
-
-int usb_get_quirks(uint16_t vendor_id, uint16_t product_id,
-                   uint8_t interface_class, uint8_t interface_subclass,
-                   uint8_t interface_protocol);
-
 #endif
diff --git a/hw/usb/quirks.c b/hw/usb/quirks.c
index 655b36f2d5..b0d0f87e35 100644
--- a/hw/usb/quirks.c
+++ b/hw/usb/quirks.c
@@ -15,6 +15,7 @@
 #include "qemu/osdep.h"
 #include "quirks.inc.c"
 #include "hw/usb.h"
+#include "usb-quirks.h"
 
 static bool usb_id_match(const struct usb_device_id *ids,
                          uint16_t vendor_id, uint16_t product_id,
diff --git a/hw/usb/redirect.c b/hw/usb/redirect.c
index 417a60a2e6..4c5925a039 100644
--- a/hw/usb/redirect.c
+++ b/hw/usb/redirect.c
@@ -45,6 +45,7 @@
 #include "hw/usb.h"
 #include "migration/qemu-file-types.h"
 #include "migration/vmstate.h"
+#include "usb-quirks.h"
 
 /* ERROR is defined below. Remove any previous definition. */
 #undef ERROR
-- 
2.21.3



From xen-devel-bounces@lists.xenproject.org Sat Jul 04 14:56:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 04 Jul 2020 14:56:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jrjai-0004wQ-Ar; Sat, 04 Jul 2020 14:56:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hHao=AP=gmail.com=philippe.mathieu.daude@srs-us1.protection.inumbo.net>)
 id 1jrjVa-0003ES-S7
 for xen-devel@lists.xenproject.org; Sat, 04 Jul 2020 14:51:14 +0000
X-Inumbo-ID: aff8b4ca-be05-11ea-bca7-bc764e2007e4
Received: from mail-wr1-x442.google.com (unknown [2a00:1450:4864:20::442])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id aff8b4ca-be05-11ea-bca7-bc764e2007e4;
 Sat, 04 Jul 2020 14:50:22 +0000 (UTC)
Received: by mail-wr1-x442.google.com with SMTP id f2so7792991wrp.7
 for <xen-devel@lists.xenproject.org>; Sat, 04 Jul 2020 07:50:22 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=sender:from:to:cc:subject:date:message-id:in-reply-to:references
 :mime-version:content-transfer-encoding;
 bh=bNYiPy+H907R6GvRggaZDEAvq/yJ89RNH5vqYDziYiQ=;
 b=Zzt2Ao+qbpw+PyTf0gj3H8RK9KRIom4jWcvpaKOt4s1hW0V1FcCCbKJ+J1H1BoamQa
 X7yQWbLBOiWHDKwtYXNcSlpsadQlqloNBTBk3OoIeHNh7YjvW7yMB0amtIHW7A3uKNrF
 KIuBad86H5R6+A+garcJYO5R0cc0W1dnCsIKjrMacUlgm36Fhjt7sIWGaJjruzORr6Mw
 tCeATKmk7k/LQ9OiBic3Z6q8k1mV35zARhSfEfSLPMbQtj1CaWLLBsqEXHNhY7FCgnTB
 UPsPUgFVXy+a2LKVM7IwIcL6sBQUgKI3uk0ZNyDH/kMyxLLse6a/v9qA5iUjYWAwTnNg
 EH5w==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:sender:from:to:cc:subject:date:message-id
 :in-reply-to:references:mime-version:content-transfer-encoding;
 bh=bNYiPy+H907R6GvRggaZDEAvq/yJ89RNH5vqYDziYiQ=;
 b=nfd2Pk/6tEDbYMMHxEt7/NsMuLcrXGhJnR/JpWW6Z5ScctMIIxOFZDxtrHui9+JWzF
 nTIlq0/IOquvgABI5lLp2aW1j5vSus+nOjjR1ypuUH/lp1m0Jt9G9Dc1BJ673eSPINas
 Ixlo+k5Qa+Jcwwlvy6/RC0tWAU0rZWut9LPMw7+Xeork7lCeAt9rC0vYZBHZtWBEovwk
 g3zrZe7T/QvzJFAXtvIDdL5qnapCTBreM1QFRJ9JBXQ+7HMgw6OhUwEZOjL+chB4PVf+
 edmHFGV8Vd1wPd1k4ql2BQvX5aJb0qnHj1t/9jcjur5R5+3pWLTdJtc5U0OaKTLKnMp5
 HSMQ==
X-Gm-Message-State: AOAM531O6vL9VGk/Q0+yx3zW7UnUo4AY7M/S6h5UaJfje7VWhElZgPZs
 7lLemYFpTd1+ttDFhFC6d68=
X-Google-Smtp-Source: ABdhPJx8cXLO5EiX1MZNGYDXemombuB+cpvmJcqKanQIPyybTeiASdt/aAvkPNB3j02DjAjdPJizlA==
X-Received: by 2002:adf:e74e:: with SMTP id c14mr43510389wrn.143.1593874222155; 
 Sat, 04 Jul 2020 07:50:22 -0700 (PDT)
Received: from localhost.localdomain (1.red-83-51-162.dynamicip.rima-tde.net.
 [83.51.162.1])
 by smtp.gmail.com with ESMTPSA id r10sm17135019wrm.17.2020.07.04.07.50.20
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sat, 04 Jul 2020 07:50:21 -0700 (PDT)
From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>
To: qemu-devel@nongnu.org,
	BALATON Zoltan <balaton@eik.bme.hu>
Subject: [PATCH 17/26] hw/usb/bus: Rename usb_get_dev_path() as
 usb_get_full_dev_path()
Date: Sat,  4 Jul 2020 16:49:34 +0200
Message-Id: <20200704144943.18292-18-f4bug@amsat.org>
X-Mailer: git-send-email 2.21.3
In-Reply-To: <20200704144943.18292-1-f4bug@amsat.org>
References: <20200704144943.18292-1-f4bug@amsat.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Peter Maydell <peter.maydell@linaro.org>,
 "Michael S. Tsirkin" <mst@redhat.com>,
 Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>,
 Jiaxun Yang <jiaxun.yang@flygoat.com>, Gerd Hoffmann <kraxel@redhat.com>,
 "Edgar E. Iglesias" <edgar.iglesias@gmail.com>,
 Huacai Chen <chenhc@lemote.com>, Stefano Stabellini <sstabellini@kernel.org>,
 xen-devel@lists.xenproject.org, Yoshinori Sato <ysato@users.sourceforge.jp>,
 Paul Durrant <paul@xen.org>, Magnus Damm <magnus.damm@gmail.com>,
 Markus Armbruster <armbru@redhat.com>,
 =?UTF-8?q?Herv=C3=A9=20Poussineau?= <hpoussin@reactos.org>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 Anthony Perard <anthony.perard@citrix.com>,
 Samuel Thibault <samuel.thibault@ens-lyon.org>,
 Leif Lindholm <leif@nuviainc.com>, Andrzej Zaborowski <balrogg@gmail.com>,
 Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
 Eduardo Habkost <ehabkost@redhat.com>,
 Alistair Francis <alistair@alistair23.me>,
 "Dr. David Alan Gilbert" <dgilbert@redhat.com>,
 Beniamino Galvani <b.galvani@gmail.com>,
 Niek Linnenbank <nieklinnenbank@gmail.com>, qemu-arm@nongnu.org,
 =?UTF-8?q?Marc-Andr=C3=A9=20Lureau?= <marcandre.lureau@redhat.com>,
 Richard Henderson <rth@twiddle.net>,
 Radoslaw Biernacki <radoslaw.biernacki@linaro.org>,
 Igor Mitsyanko <i.mitsyanko@gmail.com>,
 =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>,
 =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>,
 Paul Zimmerman <pauldzim@gmail.com>, qemu-ppc@nongnu.org,
 David Gibson <david@gibson.dropbear.id.au>,
 Paolo Bonzini <pbonzini@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

If the device has USB_DEV_FLAG_FULL_PATH set, usb_get_dev_path()
returns the full port path. Rename the function accordingly.

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
---
 hw/usb/bus.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/hw/usb/bus.c b/hw/usb/bus.c
index f8901e822c..fad8194bf5 100644
--- a/hw/usb/bus.c
+++ b/hw/usb/bus.c
@@ -13,7 +13,7 @@
 
 static void usb_bus_dev_print(Monitor *mon, DeviceState *qdev, int indent);
 
-static char *usb_get_dev_path(DeviceState *dev);
+static char *usb_get_full_dev_path(DeviceState *dev);
 static char *usb_get_fw_dev_path(DeviceState *qdev);
 static void usb_qdev_unrealize(DeviceState *qdev);
 
@@ -33,7 +33,7 @@ static void usb_bus_class_init(ObjectClass *klass, void *data)
     HotplugHandlerClass *hc = HOTPLUG_HANDLER_CLASS(klass);
 
     k->print_dev = usb_bus_dev_print;
-    k->get_dev_path = usb_get_dev_path;
+    k->get_dev_path = usb_get_full_dev_path;
     k->get_fw_dev_path = usb_get_fw_dev_path;
     hc->unplug = qdev_simple_device_unplug_cb;
 }
@@ -577,7 +577,7 @@ static void usb_bus_dev_print(Monitor *mon, DeviceState *qdev, int indent)
                    dev->attached ? ", attached" : "");
 }
 
-static char *usb_get_dev_path(DeviceState *qdev)
+static char *usb_get_full_dev_path(DeviceState *qdev)
 {
     USBDevice *dev = USB_DEVICE(qdev);
 
-- 
2.21.3



From xen-devel-bounces@lists.xenproject.org Sat Jul 04 14:56:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 04 Jul 2020 14:56:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jrjai-0004wt-LE; Sat, 04 Jul 2020 14:56:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hHao=AP=gmail.com=philippe.mathieu.daude@srs-us1.protection.inumbo.net>)
 id 1jrjVu-0003ES-T0
 for xen-devel@lists.xenproject.org; Sat, 04 Jul 2020 14:51:34 +0000
X-Inumbo-ID: b54c7c68-be05-11ea-bca7-bc764e2007e4
Received: from mail-wm1-x343.google.com (unknown [2a00:1450:4864:20::343])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b54c7c68-be05-11ea-bca7-bc764e2007e4;
 Sat, 04 Jul 2020 14:50:31 +0000 (UTC)
Received: by mail-wm1-x343.google.com with SMTP id q15so34722043wmj.2
 for <xen-devel@lists.xenproject.org>; Sat, 04 Jul 2020 07:50:31 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=sender:from:to:cc:subject:date:message-id:in-reply-to:references
 :mime-version:content-transfer-encoding;
 bh=BVN2tL6FsOSYOSZ1Rnc/AWNY9pAl08p/yTnmHBA4/lE=;
 b=pCckQw2fpxMWuUrRXy5Fj3fHnP1lNxTV/c5641pZ9JVVlE/7SDn5Q+r8sSpmGeHEtX
 DQ82DdmBJfjkp7u54KKO9Ay1CQm1OqoeBDjN4toemHPaBd5JcfQL3QaYsoKMSYnMcZ86
 cZkSflzpX4ea7A2bLQiMRirFJDw5g/vdSSr9sfwJoCmDfKl2BVC60kE20D3yzbnZa1ls
 bJh1WPioULkSM6b3/MugK1q0os9mcW2PgGSAqfRiQY3jHTvkj9TCGs9ygvVJCLAyMiEa
 nPCSpr4C+N0nVyXsOtCts8cDGKX+c88yfVcUQmPePkBlul0t7jTng4E9ABkXxABaOIFb
 KvpQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:sender:from:to:cc:subject:date:message-id
 :in-reply-to:references:mime-version:content-transfer-encoding;
 bh=BVN2tL6FsOSYOSZ1Rnc/AWNY9pAl08p/yTnmHBA4/lE=;
 b=my7cNrqN+E9EYtMbSgLDkG+qhnBFTz+k0cP4b2wHNG1+2JHpqaWVya3UnTc0WLhIoR
 ZIn06Jo8tku9CBXdeT6f2F4k0AsxjG70wtXK2MZt/N4IHe1FRwaCMEZbinn/yp/WP3lo
 rFnpBfKDEiuJlPnJD4ry5tcxX4+IAVv5eRFYRod9uchjSUgSxZyOqiRuF/DCNnyGnCs5
 uTrjDIpfc8ArBLYXtbBxLSdWQKluwxRkvSd9Z/TVgP5fb287vSlKYHG7GcOE9a5/11sP
 W+hFgn56wK0BbXboFniKCa10ZIgEaJ5wuB2chGDwFX7tshV2JuCIB4MLfkYs9SL8zzVS
 n+cg==
X-Gm-Message-State: AOAM531GruCTHnBC0zIKqK2NFL4VCJQS9q0yw6YjX7ujkOTv/n5I2Fqu
 5Zgkc80tPlytl7z3G8wOrok=
X-Google-Smtp-Source: ABdhPJwaQwLIlLdyKxBkKPa8GBks7V8EDYQ/7hiL57HMX9nYvfh3kbH/YgCk6NdT+Qi4tqQqI6p6nw==
X-Received: by 2002:a7b:cc08:: with SMTP id f8mr43795677wmh.106.1593874230724; 
 Sat, 04 Jul 2020 07:50:30 -0700 (PDT)
Received: from localhost.localdomain (1.red-83-51-162.dynamicip.rima-tde.net.
 [83.51.162.1])
 by smtp.gmail.com with ESMTPSA id r10sm17135019wrm.17.2020.07.04.07.50.28
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sat, 04 Jul 2020 07:50:30 -0700 (PDT)
From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>
To: qemu-devel@nongnu.org,
	BALATON Zoltan <balaton@eik.bme.hu>
Subject: [PATCH 21/26] hw/usb: Move internal API to local 'usb-internal.h'
 header
Date: Sat,  4 Jul 2020 16:49:38 +0200
Message-Id: <20200704144943.18292-22-f4bug@amsat.org>
X-Mailer: git-send-email 2.21.3
In-Reply-To: <20200704144943.18292-1-f4bug@amsat.org>
References: <20200704144943.18292-1-f4bug@amsat.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Peter Maydell <peter.maydell@linaro.org>,
 "Michael S. Tsirkin" <mst@redhat.com>,
 Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>,
 Jiaxun Yang <jiaxun.yang@flygoat.com>, Gerd Hoffmann <kraxel@redhat.com>,
 "Edgar E. Iglesias" <edgar.iglesias@gmail.com>,
 Huacai Chen <chenhc@lemote.com>, Stefano Stabellini <sstabellini@kernel.org>,
 xen-devel@lists.xenproject.org, Yoshinori Sato <ysato@users.sourceforge.jp>,
 Paul Durrant <paul@xen.org>, Magnus Damm <magnus.damm@gmail.com>,
 Markus Armbruster <armbru@redhat.com>,
 =?UTF-8?q?Herv=C3=A9=20Poussineau?= <hpoussin@reactos.org>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 Anthony Perard <anthony.perard@citrix.com>,
 Samuel Thibault <samuel.thibault@ens-lyon.org>,
 Leif Lindholm <leif@nuviainc.com>, Andrzej Zaborowski <balrogg@gmail.com>,
 Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
 Eduardo Habkost <ehabkost@redhat.com>,
 Alistair Francis <alistair@alistair23.me>,
 "Dr. David Alan Gilbert" <dgilbert@redhat.com>,
 Beniamino Galvani <b.galvani@gmail.com>,
 Niek Linnenbank <nieklinnenbank@gmail.com>, qemu-arm@nongnu.org,
 =?UTF-8?q?Marc-Andr=C3=A9=20Lureau?= <marcandre.lureau@redhat.com>,
 Richard Henderson <rth@twiddle.net>,
 Radoslaw Biernacki <radoslaw.biernacki@linaro.org>,
 Igor Mitsyanko <i.mitsyanko@gmail.com>,
 =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>,
 =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>,
 Paul Zimmerman <pauldzim@gmail.com>, qemu-ppc@nongnu.org,
 David Gibson <david@gibson.dropbear.id.au>,
 Paolo Bonzini <pbonzini@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Only the files under hw/usb/ require access to the USB internal
API. Move include/hw/usb.h to hw/usb/usb-internal.h to reduce
its scope.

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
---
 hw/usb/desc.h                             | 2 +-
 hw/usb/hcd-dwc2.h                         | 2 +-
 hw/usb/hcd-ehci.h                         | 2 +-
 hw/usb/hcd-ohci.h                         | 2 +-
 hw/usb/hcd-xhci.h                         | 2 +-
 include/hw/usb.h => hw/usb/usb-internal.h | 7 +++----
 hw/usb/bus.c                              | 2 +-
 hw/usb/combined-packet.c                  | 2 +-
 hw/usb/core.c                             | 2 +-
 hw/usb/desc-msos.c                        | 2 +-
 hw/usb/desc.c                             | 3 +--
 hw/usb/dev-audio.c                        | 2 +-
 hw/usb/dev-hid.c                          | 2 +-
 hw/usb/dev-hub.c                          | 2 +-
 hw/usb/dev-mtp.c                          | 2 +-
 hw/usb/dev-network.c                      | 2 +-
 hw/usb/dev-serial.c                       | 2 +-
 hw/usb/dev-smartcard-reader.c             | 2 +-
 hw/usb/dev-storage.c                      | 2 +-
 hw/usb/dev-uas.c                          | 2 +-
 hw/usb/dev-wacom.c                        | 2 +-
 hw/usb/hcd-dwc2.c                         | 1 +
 hw/usb/hcd-musb.c                         | 2 +-
 hw/usb/hcd-ohci-pci.c                     | 2 +-
 hw/usb/hcd-ohci.c                         | 1 -
 hw/usb/hcd-uhci.c                         | 2 +-
 hw/usb/hcd-xhci-nec.c                     | 3 +--
 hw/usb/hcd-xhci.c                         | 2 +-
 hw/usb/host-libusb.c                      | 2 +-
 hw/usb/libhw.c                            | 2 +-
 hw/usb/quirks.c                           | 2 +-
 hw/usb/redirect.c                         | 2 +-
 hw/usb/tusb6010.c                         | 2 +-
 hw/usb/xen-usb.c                          | 2 +-
 MAINTAINERS                               | 1 -
 35 files changed, 35 insertions(+), 39 deletions(-)
 rename include/hw/usb.h => hw/usb/usb-internal.h (99%)

diff --git a/hw/usb/desc.h b/hw/usb/desc.h
index 4bf6966c4b..ee4f042602 100644
--- a/hw/usb/desc.h
+++ b/hw/usb/desc.h
@@ -2,7 +2,7 @@
 #define QEMU_HW_USB_DESC_H
 
 #include <wchar.h>
-#include "hw/usb.h"
+#include "usb-internal.h"
 
 /* binary representation */
 typedef struct USBDescriptor {
diff --git a/hw/usb/hcd-dwc2.h b/hw/usb/hcd-dwc2.h
index 2adf0f53c7..2dfb3f3bc5 100644
--- a/hw/usb/hcd-dwc2.h
+++ b/hw/usb/hcd-dwc2.h
@@ -20,7 +20,7 @@
 #define HW_USB_DWC2_H
 
 #include "hw/sysbus.h"
-#include "hw/usb.h"
+#include "usb-internal.h"
 
 #define DWC2_MMIO_SIZE      0x11000
 
diff --git a/hw/usb/hcd-ehci.h b/hw/usb/hcd-ehci.h
index 4577f5e31d..337b3ad05c 100644
--- a/hw/usb/hcd-ehci.h
+++ b/hw/usb/hcd-ehci.h
@@ -19,10 +19,10 @@
 #define HW_USB_HCD_EHCI_H
 
 #include "qemu/timer.h"
-#include "hw/usb.h"
 #include "sysemu/dma.h"
 #include "hw/pci/pci.h"
 #include "hw/sysbus.h"
+#include "usb-internal.h"
 
 #define CAPA_SIZE        0x10
 
diff --git a/hw/usb/hcd-ohci.h b/hw/usb/hcd-ohci.h
index 5c8819aedf..771927ea17 100644
--- a/hw/usb/hcd-ohci.h
+++ b/hw/usb/hcd-ohci.h
@@ -22,7 +22,7 @@
 #define HCD_OHCI_H
 
 #include "sysemu/dma.h"
-#include "hw/usb.h"
+#include "usb-internal.h"
 
 /* Number of Downstream Ports on the root hub: */
 #define OHCI_MAX_PORTS 15
diff --git a/hw/usb/hcd-xhci.h b/hw/usb/hcd-xhci.h
index 8edbdc2c3e..f9a3aaceec 100644
--- a/hw/usb/hcd-xhci.h
+++ b/hw/usb/hcd-xhci.h
@@ -22,7 +22,7 @@
 #ifndef HW_USB_HCD_XHCI_H
 #define HW_USB_HCD_XHCI_H
 
-#include "hw/usb.h"
+#include "usb-internal.h"
 
 #define TYPE_XHCI "base-xhci"
 #define TYPE_NEC_XHCI "nec-usb-xhci"
diff --git a/include/hw/usb.h b/hw/usb/usb-internal.h
similarity index 99%
rename from include/hw/usb.h
rename to hw/usb/usb-internal.h
index 2ea5186ea5..ceafb65936 100644
--- a/include/hw/usb.h
+++ b/hw/usb/usb-internal.h
@@ -1,8 +1,5 @@
-#ifndef QEMU_USB_H
-#define QEMU_USB_H
-
 /*
- * QEMU USB API
+ * QEMU USB internal API
  *
  * Copyright (c) 2005 Fabrice Bellard
  *
@@ -24,6 +21,8 @@
  * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
  * THE SOFTWARE.
  */
+#ifndef QEMU_USB_INTERNAL_H
+#define QEMU_USB_INTERNAL_H
 
 #include "hw/qdev-core.h"
 #include "hw/usb/usb.h"
diff --git a/hw/usb/bus.c b/hw/usb/bus.c
index 518e5b94ed..ba6c48e800 100644
--- a/hw/usb/bus.c
+++ b/hw/usb/bus.c
@@ -1,6 +1,5 @@
 #include "qemu/osdep.h"
 #include "hw/qdev-properties.h"
-#include "hw/usb.h"
 #include "qapi/error.h"
 #include "qemu/error-report.h"
 #include "qemu/module.h"
@@ -9,6 +8,7 @@
 #include "monitor/monitor.h"
 #include "trace.h"
 #include "qemu/cutils.h"
+#include "usb-internal.h"
 #include "desc.h"
 
 static void usb_bus_dev_print(Monitor *mon, DeviceState *qdev, int indent);
diff --git a/hw/usb/combined-packet.c b/hw/usb/combined-packet.c
index 5d57e883dc..28e19aad12 100644
--- a/hw/usb/combined-packet.c
+++ b/hw/usb/combined-packet.c
@@ -21,9 +21,9 @@
  */
 #include "qemu/osdep.h"
 #include "qemu/units.h"
-#include "hw/usb.h"
 #include "qemu/iov.h"
 #include "trace.h"
+#include "usb-internal.h"
 
 static void usb_combined_packet_add(USBCombinedPacket *combined, USBPacket *p)
 {
diff --git a/hw/usb/core.c b/hw/usb/core.c
index 5abd128b6b..6fed698d20 100644
--- a/hw/usb/core.c
+++ b/hw/usb/core.c
@@ -24,9 +24,9 @@
  * THE SOFTWARE.
  */
 #include "qemu/osdep.h"
-#include "hw/usb.h"
 #include "qemu/iov.h"
 #include "trace.h"
+#include "usb-internal.h"
 
 void usb_pick_speed(USBPort *port)
 {
diff --git a/hw/usb/desc-msos.c b/hw/usb/desc-msos.c
index 3a5ad7c8d0..79a8093f3f 100644
--- a/hw/usb/desc-msos.c
+++ b/hw/usb/desc-msos.c
@@ -1,6 +1,6 @@
 #include "qemu/osdep.h"
-#include "hw/usb.h"
 #include "desc.h"
+#include "usb-internal.h"
 
 /*
  * Microsoft OS Descriptors
diff --git a/hw/usb/desc.c b/hw/usb/desc.c
index 8b6eaea407..defb344014 100644
--- a/hw/usb/desc.c
+++ b/hw/usb/desc.c
@@ -1,8 +1,7 @@
 #include "qemu/osdep.h"
-
-#include "hw/usb.h"
 #include "desc.h"
 #include "trace.h"
+#include "usb-internal.h"
 
 /* ------------------------------------------------------------------ */
 
diff --git a/hw/usb/dev-audio.c b/hw/usb/dev-audio.c
index 1371c44f48..1e4d1051f3 100644
--- a/hw/usb/dev-audio.c
+++ b/hw/usb/dev-audio.c
@@ -32,10 +32,10 @@
 #include "qemu/osdep.h"
 #include "qemu/module.h"
 #include "hw/qdev-properties.h"
-#include "hw/usb.h"
 #include "migration/vmstate.h"
 #include "desc.h"
 #include "audio/audio.h"
+#include "usb-internal.h"
 
 static void usb_audio_reinit(USBDevice *dev, unsigned channels);
 
diff --git a/hw/usb/dev-hid.c b/hw/usb/dev-hid.c
index 89f63b698b..59b47272ba 100644
--- a/hw/usb/dev-hid.c
+++ b/hw/usb/dev-hid.c
@@ -25,7 +25,6 @@
 
 #include "qemu/osdep.h"
 #include "ui/console.h"
-#include "hw/usb.h"
 #include "migration/vmstate.h"
 #include "desc.h"
 #include "qapi/error.h"
@@ -33,6 +32,7 @@
 #include "qemu/timer.h"
 #include "hw/input/hid.h"
 #include "hw/qdev-properties.h"
+#include "usb-internal.h"
 
 /* HID interface requests */
 #define GET_REPORT   0xa101
diff --git a/hw/usb/dev-hub.c b/hw/usb/dev-hub.c
index 5f19dd9fb5..b394ae9983 100644
--- a/hw/usb/dev-hub.c
+++ b/hw/usb/dev-hub.c
@@ -27,11 +27,11 @@
 #include "qemu/timer.h"
 #include "trace.h"
 #include "hw/qdev-properties.h"
-#include "hw/usb.h"
 #include "migration/vmstate.h"
 #include "desc.h"
 #include "qemu/error-report.h"
 #include "qemu/module.h"
+#include "usb-internal.h"
 
 #define MAX_PORTS 8
 
diff --git a/hw/usb/dev-mtp.c b/hw/usb/dev-mtp.c
index 15a2243101..147e564bea 100644
--- a/hw/usb/dev-mtp.c
+++ b/hw/usb/dev-mtp.c
@@ -24,10 +24,10 @@
 #include "qemu/filemonitor.h"
 #include "trace.h"
 #include "hw/qdev-properties.h"
-#include "hw/usb.h"
 #include "migration/vmstate.h"
 #include "desc.h"
 #include "qemu/units.h"
+#include "usb-internal.h"
 
 /* ----------------------------------------------------------------------- */
 
diff --git a/hw/usb/dev-network.c b/hw/usb/dev-network.c
index c69756709b..2e06d74f69 100644
--- a/hw/usb/dev-network.c
+++ b/hw/usb/dev-network.c
@@ -26,7 +26,6 @@
 #include "qemu/osdep.h"
 #include "qapi/error.h"
 #include "hw/qdev-properties.h"
-#include "hw/usb.h"
 #include "migration/vmstate.h"
 #include "desc.h"
 #include "net/net.h"
@@ -37,6 +36,7 @@
 #include "qemu/iov.h"
 #include "qemu/module.h"
 #include "qemu/cutils.h"
+#include "usb-internal.h"
 
 /*#define TRAFFIC_DEBUG*/
 /* Thanks to NetChip Technologies for donating this product ID.
diff --git a/hw/usb/dev-serial.c b/hw/usb/dev-serial.c
index 7e50e3ba47..4d3f91a85a 100644
--- a/hw/usb/dev-serial.c
+++ b/hw/usb/dev-serial.c
@@ -14,11 +14,11 @@
 #include "qemu/error-report.h"
 #include "qemu/module.h"
 #include "hw/qdev-properties.h"
-#include "hw/usb.h"
 #include "migration/vmstate.h"
 #include "desc.h"
 #include "chardev/char-serial.h"
 #include "chardev/char-fe.h"
+#include "usb-internal.h"
 
 //#define DEBUG_Serial
 
diff --git a/hw/usb/dev-smartcard-reader.c b/hw/usb/dev-smartcard-reader.c
index fcfe216594..9602b25a10 100644
--- a/hw/usb/dev-smartcard-reader.c
+++ b/hw/usb/dev-smartcard-reader.c
@@ -41,9 +41,9 @@
 #include "qemu/error-report.h"
 #include "qemu/module.h"
 #include "hw/qdev-properties.h"
-#include "hw/usb.h"
 #include "migration/vmstate.h"
 #include "desc.h"
+#include "usb-internal.h"
 
 #include "ccid.h"
 
diff --git a/hw/usb/dev-storage.c b/hw/usb/dev-storage.c
index f5977eb72e..a58c84dffa 100644
--- a/hw/usb/dev-storage.c
+++ b/hw/usb/dev-storage.c
@@ -13,7 +13,6 @@
 #include "qemu/module.h"
 #include "qemu/option.h"
 #include "qemu/config-file.h"
-#include "hw/usb.h"
 #include "desc.h"
 #include "hw/qdev-properties.h"
 #include "hw/scsi/scsi.h"
@@ -22,6 +21,7 @@
 #include "sysemu/block-backend.h"
 #include "qapi/visitor.h"
 #include "qemu/cutils.h"
+#include "usb-internal.h"
 
 //#define DEBUG_MSD
 
diff --git a/hw/usb/dev-uas.c b/hw/usb/dev-uas.c
index a3a4d41c07..9dc39f98a2 100644
--- a/hw/usb/dev-uas.c
+++ b/hw/usb/dev-uas.c
@@ -17,12 +17,12 @@
 #include "qemu/main-loop.h"
 #include "qemu/module.h"
 
-#include "hw/usb.h"
 #include "migration/vmstate.h"
 #include "desc.h"
 #include "hw/qdev-properties.h"
 #include "hw/scsi/scsi.h"
 #include "scsi/constants.h"
+#include "usb-internal.h"
 
 /* --------------------------------------------------------------------- */
 
diff --git a/hw/usb/dev-wacom.c b/hw/usb/dev-wacom.c
index 8aba44b8bc..7c162b7f85 100644
--- a/hw/usb/dev-wacom.c
+++ b/hw/usb/dev-wacom.c
@@ -28,10 +28,10 @@
 
 #include "qemu/osdep.h"
 #include "ui/console.h"
-#include "hw/usb.h"
 #include "migration/vmstate.h"
 #include "qemu/module.h"
 #include "desc.h"
+#include "usb-internal.h"
 
 /* Interface requests */
 #define WACOM_GET_REPORT	0x2101
diff --git a/hw/usb/hcd-dwc2.c b/hw/usb/hcd-dwc2.c
index 252b60ef65..47ae18d510 100644
--- a/hw/usb/hcd-dwc2.c
+++ b/hw/usb/hcd-dwc2.c
@@ -43,6 +43,7 @@
 #include "qemu/log.h"
 #include "hw/qdev-properties.h"
 #include "dwc2-regs.h"
+#include "usb-internal.h"
 
 #define USB_HZ_FS       12000000
 #define USB_HZ_HS       96000000
diff --git a/hw/usb/hcd-musb.c b/hw/usb/hcd-musb.c
index b8d8766a4a..bc3efcce65 100644
--- a/hw/usb/hcd-musb.c
+++ b/hw/usb/hcd-musb.c
@@ -22,10 +22,10 @@
  */
 #include "qemu/osdep.h"
 #include "qemu/timer.h"
-#include "hw/usb.h"
 #include "hw/irq.h"
 #include "hw/hw.h"
 #include "hcd-musb.h"
+#include "usb-internal.h"
 
 /* Common USB registers */
 #define MUSB_HDRC_FADDR		0x00	/* 8-bit */
diff --git a/hw/usb/hcd-ohci-pci.c b/hw/usb/hcd-ohci-pci.c
index a7fb1666af..cb6bc55f59 100644
--- a/hw/usb/hcd-ohci-pci.c
+++ b/hw/usb/hcd-ohci-pci.c
@@ -21,7 +21,6 @@
 #include "qemu/osdep.h"
 #include "qapi/error.h"
 #include "qemu/timer.h"
-#include "hw/usb.h"
 #include "migration/vmstate.h"
 #include "hw/pci/pci.h"
 #include "hw/sysbus.h"
@@ -29,6 +28,7 @@
 #include "hw/qdev-properties.h"
 #include "trace.h"
 #include "hcd-ohci.h"
+#include "usb-internal.h"
 
 #define TYPE_PCI_OHCI "pci-ohci"
 #define PCI_OHCI(obj) OBJECT_CHECK(OHCIPCIState, (obj), TYPE_PCI_OHCI)
diff --git a/hw/usb/hcd-ohci.c b/hw/usb/hcd-ohci.c
index 1e6e85e86a..f4a85a8774 100644
--- a/hw/usb/hcd-ohci.c
+++ b/hw/usb/hcd-ohci.c
@@ -30,7 +30,6 @@
 #include "qapi/error.h"
 #include "qemu/module.h"
 #include "qemu/timer.h"
-#include "hw/usb.h"
 #include "migration/vmstate.h"
 #include "hw/sysbus.h"
 #include "hw/qdev-dma.h"
diff --git a/hw/usb/hcd-uhci.c b/hw/usb/hcd-uhci.c
index 37f7beb3fa..1d4dd33b6c 100644
--- a/hw/usb/hcd-uhci.c
+++ b/hw/usb/hcd-uhci.c
@@ -27,7 +27,6 @@
  */
 
 #include "qemu/osdep.h"
-#include "hw/usb.h"
 #include "hw/usb/uhci-regs.h"
 #include "migration/vmstate.h"
 #include "hw/pci/pci.h"
@@ -39,6 +38,7 @@
 #include "trace.h"
 #include "qemu/main-loop.h"
 #include "qemu/module.h"
+#include "usb-internal.h"
 
 #define FRAME_TIMER_FREQ 1000
 
diff --git a/hw/usb/hcd-xhci-nec.c b/hw/usb/hcd-xhci-nec.c
index e6a5a22b6d..24c59fa4b0 100644
--- a/hw/usb/hcd-xhci-nec.c
+++ b/hw/usb/hcd-xhci-nec.c
@@ -20,11 +20,10 @@
  */
 
 #include "qemu/osdep.h"
-#include "hw/usb.h"
 #include "qemu/module.h"
 #include "hw/pci/pci.h"
 #include "hw/qdev-properties.h"
-
+#include "usb-internal.h"
 #include "hcd-xhci.h"
 
 static Property nec_xhci_properties[] = {
diff --git a/hw/usb/hcd-xhci.c b/hw/usb/hcd-xhci.c
index b330e36fe6..a3f6b14681 100644
--- a/hw/usb/hcd-xhci.c
+++ b/hw/usb/hcd-xhci.c
@@ -23,7 +23,6 @@
 #include "qemu/timer.h"
 #include "qemu/module.h"
 #include "qemu/queue.h"
-#include "hw/usb.h"
 #include "migration/vmstate.h"
 #include "hw/pci/pci.h"
 #include "hw/qdev-properties.h"
@@ -33,6 +32,7 @@
 #include "qapi/error.h"
 
 #include "hcd-xhci.h"
+#include "usb-internal.h"
 
 //#define DEBUG_XHCI
 //#define DEBUG_DATA
diff --git a/hw/usb/host-libusb.c b/hw/usb/host-libusb.c
index ad7ed8fb0c..615655f2f5 100644
--- a/hw/usb/host-libusb.c
+++ b/hw/usb/host-libusb.c
@@ -50,7 +50,7 @@
 #include "trace.h"
 
 #include "hw/qdev-properties.h"
-#include "hw/usb.h"
+#include "usb-internal.h"
 
 /* ------------------------------------------------------------------------ */
 
diff --git a/hw/usb/libhw.c b/hw/usb/libhw.c
index 9c33a1640f..a8d7f994df 100644
--- a/hw/usb/libhw.c
+++ b/hw/usb/libhw.c
@@ -20,8 +20,8 @@
  * THE SOFTWARE.
  */
 #include "qemu/osdep.h"
-#include "hw/usb.h"
 #include "sysemu/dma.h"
+#include "usb-internal.h"
 
 int usb_packet_map(USBPacket *p, QEMUSGList *sgl)
 {
diff --git a/hw/usb/quirks.c b/hw/usb/quirks.c
index b0d0f87e35..c427d45f1e 100644
--- a/hw/usb/quirks.c
+++ b/hw/usb/quirks.c
@@ -14,7 +14,7 @@
 
 #include "qemu/osdep.h"
 #include "quirks.inc.c"
-#include "hw/usb.h"
+#include "usb-internal.h"
 #include "usb-quirks.h"
 
 static bool usb_id_match(const struct usb_device_id *ids,
diff --git a/hw/usb/redirect.c b/hw/usb/redirect.c
index 4c5925a039..a0c55de7f8 100644
--- a/hw/usb/redirect.c
+++ b/hw/usb/redirect.c
@@ -42,9 +42,9 @@
 #include <usbredirfilter.h>
 
 #include "hw/qdev-properties.h"
-#include "hw/usb.h"
 #include "migration/qemu-file-types.h"
 #include "migration/vmstate.h"
+#include "usb-internal.h"
 #include "usb-quirks.h"
 
 /* ERROR is defined below. Remove any previous definition. */
diff --git a/hw/usb/tusb6010.c b/hw/usb/tusb6010.c
index 9f9b81b09d..191df38356 100644
--- a/hw/usb/tusb6010.c
+++ b/hw/usb/tusb6010.c
@@ -22,12 +22,12 @@
 #include "qemu/osdep.h"
 #include "qemu/module.h"
 #include "qemu/timer.h"
-#include "hw/usb.h"
 #include "hw/arm/omap.h"
 #include "hw/hw.h"
 #include "hw/irq.h"
 #include "hw/sysbus.h"
 #include "hcd-musb.h"
+#include "usb-internal.h"
 
 #define TYPE_TUSB6010 "tusb6010"
 #define TUSB(obj) OBJECT_CHECK(TUSBState, (obj), TYPE_TUSB6010)
diff --git a/hw/usb/xen-usb.c b/hw/usb/xen-usb.c
index 4d266d7bb4..a6a0b466f9 100644
--- a/hw/usb/xen-usb.c
+++ b/hw/usb/xen-usb.c
@@ -27,12 +27,12 @@
 #include "qemu/main-loop.h"
 #include "qemu/option.h"
 #include "hw/sysbus.h"
-#include "hw/usb.h"
 #include "hw/xen/xen-legacy-backend.h"
 #include "monitor/qdev.h"
 #include "qapi/error.h"
 #include "qapi/qmp/qdict.h"
 #include "qapi/qmp/qstring.h"
+#include "usb-internal.h"
 
 #include "hw/xen/interface/io/usbif.h"
 
diff --git a/MAINTAINERS b/MAINTAINERS
index dec252f38b..2566566d72 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1642,7 +1642,6 @@ F: hw/usb/*
 F: tests/qtest/usb-*-test.c
 F: docs/usb2.txt
 F: docs/usb-storage.txt
-F: include/hw/usb.h
 F: include/hw/usb/
 F: default-configs/usb.mak
 
-- 
2.21.3



From xen-devel-bounces@lists.xenproject.org Sat Jul 04 14:56:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 04 Jul 2020 14:56:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jrjat-00053z-59; Sat, 04 Jul 2020 14:56:43 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hHao=AP=gmail.com=philippe.mathieu.daude@srs-us1.protection.inumbo.net>)
 id 1jrjWE-0003ES-TR
 for xen-devel@lists.xenproject.org; Sat, 04 Jul 2020 14:51:54 +0000
X-Inumbo-ID: b927570e-be05-11ea-8496-bc764e2007e4
Received: from mail-wm1-x343.google.com (unknown [2a00:1450:4864:20::343])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b927570e-be05-11ea-8496-bc764e2007e4;
 Sat, 04 Jul 2020 14:50:38 +0000 (UTC)
Received: by mail-wm1-x343.google.com with SMTP id q15so34722183wmj.2
 for <xen-devel@lists.xenproject.org>; Sat, 04 Jul 2020 07:50:38 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=sender:from:to:cc:subject:date:message-id:in-reply-to:references
 :mime-version:content-transfer-encoding;
 bh=ELz2xQrwO3as8Wxs/Kn8jW0b/kVDPOkqqjBqSxetIoY=;
 b=lEaSbT813203f5Q/sN2DPPZEN+/cTb3ulD0Tsc1K7UiiJmuIHRljgZA5E6zal6HQzd
 SJ1xW7NtSAKTID6CskQvsrLb7LQCq+tEKlaGrkyTbnWVNZv0iqk/rWKNzeml32LvPYt7
 wgIMupaZdxlJI/+kIl0Ql/RW5EIXIoUpP0EIn+bP7T+ouYR5fkNExvJnsPSciKAgerQW
 WrzFOrM8q18cFGilFsVCB9yObwLMcIHNnP0LEH05SNx5uCwKd2r1O7S/Hqo/VTdOad/w
 ZbUVITTYcFraCPIbBmWczqxqNUrrLd09DJk+rTPqpM+lJ6rRdSpbb8Vxn63LhDrjM5CZ
 kfxw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:sender:from:to:cc:subject:date:message-id
 :in-reply-to:references:mime-version:content-transfer-encoding;
 bh=ELz2xQrwO3as8Wxs/Kn8jW0b/kVDPOkqqjBqSxetIoY=;
 b=ByzWdM8Doy738hoQVijAtSlJOmuZGqOyla+9JYJL7GSr2UiM1e0hHA/mBDQFYeMo7Y
 zkRHS/m0UQ8GR3/ZbEgFVa3fubx6qlxUj46XREacrmNtrdZktPMR7HtLoFgWhv12z3lN
 2CVqLied2slRty60CXAqGPwfQJY9jM39VjPUXoROJufmQteS2zSrkkVNyQMtp8ENNVEB
 +wGy/tyXlJT479B2sNmNbv3xyYdYdfdV2j8zRb2KLGLIl63l7x0Gor/Zlewa7SMyx552
 hXeOO854XoRDXeskVo/rLXq4ltfK3zXe7SCfwxeTmVGPz8VCo3HnSano0RdSL7rvoOex
 OV2w==
X-Gm-Message-State: AOAM530iiWjrdqygYS4rqUu9H2KYblqRa67bzYGT/6/gax+Qhx6j9gyB
 4Z1SkTlFNHDfCGiEiXV22Yk=
X-Google-Smtp-Source: ABdhPJxszZnIT3VKXO6NXhl+YZ/Tp7y4xXuOBdXFlI7NLcqcetxqVubOW7Z948642pU/bk7Dmf0M6w==
X-Received: by 2002:a1c:e355:: with SMTP id a82mr42855621wmh.165.1593874237478; 
 Sat, 04 Jul 2020 07:50:37 -0700 (PDT)
Received: from localhost.localdomain (1.red-83-51-162.dynamicip.rima-tde.net.
 [83.51.162.1])
 by smtp.gmail.com with ESMTPSA id r10sm17135019wrm.17.2020.07.04.07.50.35
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sat, 04 Jul 2020 07:50:36 -0700 (PDT)
From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>
To: qemu-devel@nongnu.org,
	BALATON Zoltan <balaton@eik.bme.hu>
Subject: [PATCH 24/26] hw/usb/usb-hcd: Use UHCI type definitions
Date: Sat,  4 Jul 2020 16:49:41 +0200
Message-Id: <20200704144943.18292-25-f4bug@amsat.org>
X-Mailer: git-send-email 2.21.3
In-Reply-To: <20200704144943.18292-1-f4bug@amsat.org>
References: <20200704144943.18292-1-f4bug@amsat.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Peter Maydell <peter.maydell@linaro.org>,
 "Michael S. Tsirkin" <mst@redhat.com>,
 Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>,
 Jiaxun Yang <jiaxun.yang@flygoat.com>, Gerd Hoffmann <kraxel@redhat.com>,
 "Edgar E. Iglesias" <edgar.iglesias@gmail.com>,
 Huacai Chen <chenhc@lemote.com>, Stefano Stabellini <sstabellini@kernel.org>,
 xen-devel@lists.xenproject.org, Yoshinori Sato <ysato@users.sourceforge.jp>,
 Paul Durrant <paul@xen.org>, Magnus Damm <magnus.damm@gmail.com>,
 Markus Armbruster <armbru@redhat.com>,
 =?UTF-8?q?Herv=C3=A9=20Poussineau?= <hpoussin@reactos.org>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 Anthony Perard <anthony.perard@citrix.com>,
 Samuel Thibault <samuel.thibault@ens-lyon.org>,
 Leif Lindholm <leif@nuviainc.com>, Andrzej Zaborowski <balrogg@gmail.com>,
 Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
 Eduardo Habkost <ehabkost@redhat.com>,
 Alistair Francis <alistair@alistair23.me>,
 "Dr. David Alan Gilbert" <dgilbert@redhat.com>,
 Beniamino Galvani <b.galvani@gmail.com>,
 Niek Linnenbank <nieklinnenbank@gmail.com>, qemu-arm@nongnu.org,
 =?UTF-8?q?Marc-Andr=C3=A9=20Lureau?= <marcandre.lureau@redhat.com>,
 Richard Henderson <rth@twiddle.net>,
 Radoslaw Biernacki <radoslaw.biernacki@linaro.org>,
 Igor Mitsyanko <i.mitsyanko@gmail.com>,
 =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>,
 =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>,
 Paul Zimmerman <pauldzim@gmail.com>, qemu-ppc@nongnu.org,
 David Gibson <david@gibson.dropbear.id.au>,
 Paolo Bonzini <pbonzini@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Various machine/board/SoC models create UHCI device instances
via the generic QDEV API and don't need access to USB internals.

Simplify header inclusion by moving the QOM type names into a
lightweight header that does not pull in the other "hw/usb"
headers.

Suggested-by: BALATON Zoltan <balaton@eik.bme.hu>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
---
 include/hw/usb/usb-hcd.h |  6 ++++++
 hw/i386/pc_piix.c        |  3 ++-
 hw/i386/pc_q35.c         | 13 +++++++------
 hw/isa/piix4.c           |  3 ++-
 hw/mips/fuloong2e.c      |  5 +++--
 hw/usb/hcd-uhci.c        | 19 ++++++++++---------
 6 files changed, 30 insertions(+), 19 deletions(-)

diff --git a/include/hw/usb/usb-hcd.h b/include/hw/usb/usb-hcd.h
index 74af3a4533..c9d0a88984 100644
--- a/include/hw/usb/usb-hcd.h
+++ b/include/hw/usb/usb-hcd.h
@@ -24,4 +24,10 @@
 #define TYPE_FUSBH200_EHCI          "fusbh200-ehci-usb"
 #define TYPE_CHIPIDEA               "usb-chipidea"
 
+/* UHCI */
+#define TYPE_PIIX3_USB_UHCI         "piix3-usb-uhci"
+#define TYPE_PIIX4_USB_UHCI         "piix4-usb-uhci"
+#define TYPE_VT82C686B_USB_UHCI     "vt82c686b-usb-uhci"
+#define TYPE_ICH9_USB_UHCI(n)       "ich9-usb-uhci" #n
+
 #endif
diff --git a/hw/i386/pc_piix.c b/hw/i386/pc_piix.c
index 4d1de7cfab..0024c346c6 100644
--- a/hw/i386/pc_piix.c
+++ b/hw/i386/pc_piix.c
@@ -37,6 +37,7 @@
 #include "hw/pci/pci.h"
 #include "hw/pci/pci_ids.h"
 #include "hw/usb/usb.h"
+#include "hw/usb/usb-hcd.h"
 #include "net/net.h"
 #include "hw/ide/pci.h"
 #include "hw/irq.h"
@@ -275,7 +276,7 @@ static void pc_init1(MachineState *machine,
 #endif
 
     if (pcmc->pci_enabled && machine_usb(machine)) {
-        pci_create_simple(pci_bus, piix3_devfn + 2, "piix3-usb-uhci");
+        pci_create_simple(pci_bus, piix3_devfn + 2, TYPE_PIIX3_USB_UHCI);
     }
 
     if (pcmc->pci_enabled && x86_machine_is_acpi_enabled(X86_MACHINE(pcms))) {
diff --git a/hw/i386/pc_q35.c b/hw/i386/pc_q35.c
index b985f5bea1..a80527e6ed 100644
--- a/hw/i386/pc_q35.c
+++ b/hw/i386/pc_q35.c
@@ -51,6 +51,7 @@
 #include "hw/ide/pci.h"
 #include "hw/ide/ahci.h"
 #include "hw/usb/usb.h"
+#include "hw/usb/usb-hcd.h"
 #include "qapi/error.h"
 #include "qemu/error-report.h"
 #include "sysemu/numa.h"
@@ -68,15 +69,15 @@ struct ehci_companions {
 };
 
 static const struct ehci_companions ich9_1d[] = {
-    { .name = "ich9-usb-uhci1", .func = 0, .port = 0 },
-    { .name = "ich9-usb-uhci2", .func = 1, .port = 2 },
-    { .name = "ich9-usb-uhci3", .func = 2, .port = 4 },
+    { .name = TYPE_ICH9_USB_UHCI(1), .func = 0, .port = 0 },
+    { .name = TYPE_ICH9_USB_UHCI(2), .func = 1, .port = 2 },
+    { .name = TYPE_ICH9_USB_UHCI(3), .func = 2, .port = 4 },
 };
 
 static const struct ehci_companions ich9_1a[] = {
-    { .name = "ich9-usb-uhci4", .func = 0, .port = 0 },
-    { .name = "ich9-usb-uhci5", .func = 1, .port = 2 },
-    { .name = "ich9-usb-uhci6", .func = 2, .port = 4 },
+    { .name = TYPE_ICH9_USB_UHCI(4), .func = 0, .port = 0 },
+    { .name = TYPE_ICH9_USB_UHCI(5), .func = 1, .port = 2 },
+    { .name = TYPE_ICH9_USB_UHCI(6), .func = 2, .port = 4 },
 };
 
 static int ehci_create_ich9_with_companions(PCIBus *bus, int slot)
diff --git a/hw/isa/piix4.c b/hw/isa/piix4.c
index f634bcb2d1..e11e5fae21 100644
--- a/hw/isa/piix4.c
+++ b/hw/isa/piix4.c
@@ -29,6 +29,7 @@
 #include "hw/southbridge/piix.h"
 #include "hw/pci/pci.h"
 #include "hw/isa/isa.h"
+#include "hw/usb/usb-hcd.h"
 #include "hw/sysbus.h"
 #include "hw/intc/i8259.h"
 #include "hw/dma/i8257.h"
@@ -255,7 +256,7 @@ DeviceState *piix4_create(PCIBus *pci_bus, ISABus **isa_bus, I2CBus **smbus)
     pci = pci_create_simple(pci_bus, devfn + 1, "piix4-ide");
     pci_ide_create_devs(pci);
 
-    pci_create_simple(pci_bus, devfn + 2, "piix4-usb-uhci");
+    pci_create_simple(pci_bus, devfn + 2, TYPE_PIIX4_USB_UHCI);
     if (smbus) {
         *smbus = piix4_pm_init(pci_bus, devfn + 3, 0x1100,
                                isa_get_irq(NULL, 9), NULL, 0, NULL);
diff --git a/hw/mips/fuloong2e.c b/hw/mips/fuloong2e.c
index 8ca31e5162..b6d33dd2cd 100644
--- a/hw/mips/fuloong2e.c
+++ b/hw/mips/fuloong2e.c
@@ -33,6 +33,7 @@
 #include "hw/mips/mips.h"
 #include "hw/mips/cpudevs.h"
 #include "hw/pci/pci.h"
+#include "hw/usb/usb-hcd.h"
 #include "qemu/log.h"
 #include "hw/loader.h"
 #include "hw/ide/pci.h"
@@ -258,8 +259,8 @@ static void vt82c686b_southbridge_init(PCIBus *pci_bus, int slot, qemu_irq intc,
     dev = pci_create_simple(pci_bus, PCI_DEVFN(slot, 1), "via-ide");
     pci_ide_create_devs(dev);
 
-    pci_create_simple(pci_bus, PCI_DEVFN(slot, 2), "vt82c686b-usb-uhci");
-    pci_create_simple(pci_bus, PCI_DEVFN(slot, 3), "vt82c686b-usb-uhci");
+    pci_create_simple(pci_bus, PCI_DEVFN(slot, 2), TYPE_VT82C686B_USB_UHCI);
+    pci_create_simple(pci_bus, PCI_DEVFN(slot, 3), TYPE_VT82C686B_USB_UHCI);
 
     *i2c_bus = vt82c686b_pm_init(pci_bus, PCI_DEVFN(slot, 4), 0xeee1, NULL);
 
diff --git a/hw/usb/hcd-uhci.c b/hw/usb/hcd-uhci.c
index 1d4dd33b6c..da078dc3fa 100644
--- a/hw/usb/hcd-uhci.c
+++ b/hw/usb/hcd-uhci.c
@@ -39,6 +39,7 @@
 #include "qemu/main-loop.h"
 #include "qemu/module.h"
 #include "usb-internal.h"
+#include "hw/usb/usb-hcd.h"
 
 #define FRAME_TIMER_FREQ 1000
 
@@ -1358,21 +1359,21 @@ static void uhci_data_class_init(ObjectClass *klass, void *data)
 
 static UHCIInfo uhci_info[] = {
     {
-        .name       = "piix3-usb-uhci",
+        .name      = TYPE_PIIX3_USB_UHCI,
         .vendor_id = PCI_VENDOR_ID_INTEL,
         .device_id = PCI_DEVICE_ID_INTEL_82371SB_2,
         .revision  = 0x01,
         .irq_pin   = 3,
         .unplug    = true,
     },{
-        .name      = "piix4-usb-uhci",
+        .name      = TYPE_PIIX4_USB_UHCI,
         .vendor_id = PCI_VENDOR_ID_INTEL,
         .device_id = PCI_DEVICE_ID_INTEL_82371AB_2,
         .revision  = 0x01,
         .irq_pin   = 3,
         .unplug    = true,
     },{
-        .name      = "vt82c686b-usb-uhci",
+        .name      = TYPE_VT82C686B_USB_UHCI,
         .vendor_id = PCI_VENDOR_ID_VIA,
         .device_id = PCI_DEVICE_ID_VIA_UHCI,
         .revision  = 0x01,
@@ -1380,42 +1381,42 @@ static UHCIInfo uhci_info[] = {
         .realize   = usb_uhci_vt82c686b_realize,
         .unplug    = true,
     },{
-        .name      = "ich9-usb-uhci1", /* 00:1d.0 */
+        .name      = TYPE_ICH9_USB_UHCI(1), /* 00:1d.0 */
         .vendor_id = PCI_VENDOR_ID_INTEL,
         .device_id = PCI_DEVICE_ID_INTEL_82801I_UHCI1,
         .revision  = 0x03,
         .irq_pin   = 0,
         .unplug    = false,
     },{
-        .name      = "ich9-usb-uhci2", /* 00:1d.1 */
+        .name      = TYPE_ICH9_USB_UHCI(2), /* 00:1d.1 */
         .vendor_id = PCI_VENDOR_ID_INTEL,
         .device_id = PCI_DEVICE_ID_INTEL_82801I_UHCI2,
         .revision  = 0x03,
         .irq_pin   = 1,
         .unplug    = false,
     },{
-        .name      = "ich9-usb-uhci3", /* 00:1d.2 */
+        .name      = TYPE_ICH9_USB_UHCI(3), /* 00:1d.2 */
         .vendor_id = PCI_VENDOR_ID_INTEL,
         .device_id = PCI_DEVICE_ID_INTEL_82801I_UHCI3,
         .revision  = 0x03,
         .irq_pin   = 2,
         .unplug    = false,
     },{
-        .name      = "ich9-usb-uhci4", /* 00:1a.0 */
+        .name      = TYPE_ICH9_USB_UHCI(4), /* 00:1a.0 */
         .vendor_id = PCI_VENDOR_ID_INTEL,
         .device_id = PCI_DEVICE_ID_INTEL_82801I_UHCI4,
         .revision  = 0x03,
         .irq_pin   = 0,
         .unplug    = false,
     },{
-        .name      = "ich9-usb-uhci5", /* 00:1a.1 */
+        .name      = TYPE_ICH9_USB_UHCI(5), /* 00:1a.1 */
         .vendor_id = PCI_VENDOR_ID_INTEL,
         .device_id = PCI_DEVICE_ID_INTEL_82801I_UHCI5,
         .revision  = 0x03,
         .irq_pin   = 1,
         .unplug    = false,
     },{
-        .name      = "ich9-usb-uhci6", /* 00:1a.2 */
+        .name      = TYPE_ICH9_USB_UHCI(6), /* 00:1a.2 */
         .vendor_id = PCI_VENDOR_ID_INTEL,
         .device_id = PCI_DEVICE_ID_INTEL_82801I_UHCI6,
         .revision  = 0x03,
-- 
2.21.3



From xen-devel-bounces@lists.xenproject.org Sat Jul 04 14:56:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 04 Jul 2020 14:56:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jrjat-00054J-Dy; Sat, 04 Jul 2020 14:56:43 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hHao=AP=gmail.com=philippe.mathieu.daude@srs-us1.protection.inumbo.net>)
 id 1jrjWO-0003ES-Tr
 for xen-devel@lists.xenproject.org; Sat, 04 Jul 2020 14:52:04 +0000
X-Inumbo-ID: bb939dcc-be05-11ea-bca7-bc764e2007e4
Received: from mail-wr1-x442.google.com (unknown [2a00:1450:4864:20::442])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id bb939dcc-be05-11ea-bca7-bc764e2007e4;
 Sat, 04 Jul 2020 14:50:42 +0000 (UTC)
Received: by mail-wr1-x442.google.com with SMTP id b6so35748311wrs.11
 for <xen-devel@lists.xenproject.org>; Sat, 04 Jul 2020 07:50:42 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=sender:from:to:cc:subject:date:message-id:in-reply-to:references
 :mime-version:content-transfer-encoding;
 bh=uE7YqG+qACaG8Uud6yeGxAsfWVnV3rkTeBq37HQj1qo=;
 b=sIhGHE4QN5ORR+3/Mgnoaxt2dRH9Duka3DMpiaQTVgS1lpSJEJ8KEB4VjCSKQqSlfc
 HwpmkE15woH+abTAT5VLKiyhb0Zq+u1XvalFXiALNHpXyYLuDGhlcjr3PJjaoEf3+FY7
 4UjhgJzhzYvcqHRKWwR/aqKYvEyqZ1yrs8FGt1f9ohuKcUlxQe/dtogbIP9H2gOPTdM+
 X1mnhOjui1Vk5B+Xa4Ju6b+X93HolCAPQsIP0PCVa5noiOK0+NH5/fSK60TLZZ4znABy
 cKKta1I+QCL7DFqQUZxr5qs5kGd/t289lAxS+UdQKR8b+NnivhadaJIHkJJVNpWJ4AOo
 xV1w==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:sender:from:to:cc:subject:date:message-id
 :in-reply-to:references:mime-version:content-transfer-encoding;
 bh=uE7YqG+qACaG8Uud6yeGxAsfWVnV3rkTeBq37HQj1qo=;
 b=BliaDKB+yVVOwZmDb8XSgmOZ+8Q6LzESelP7O/Y4C+k7ABU+IwZMztWdVTwVo2hKJl
 v/lzsbrzUR9gANXd8un2oIV6ztb5IQiPIvQtOv9e3ZVg4qtW6zsJ8e9GjsNykxMB20uz
 /3WaANDkAR5EYazCqy6G5wGq3MLR+K8XMLXm/jYcyXD848xVYtvqh2mYUNb3Dglzws3O
 RiLus8U7zmNrNhpQpjQdD85gXTyl9PTjvzmLFkbyQlJBlP6lZdFD4oqDLHxCS1PvOFZN
 KrybYzzKFgu8lz2VlOf+HRGxCZNdOEa7ws/81Ys9YRzvCdT/5x4P4KBWVCIAJW91qjXi
 ZITQ==
X-Gm-Message-State: AOAM533+cHJPRdAXI+pl0TJtGOyEGypK82tb2tzr4C69lrg+AKY3tydt
 Ejf/GbpeLpYJ1q+NLMmfrkQ=
X-Google-Smtp-Source: ABdhPJw68YNz9StYiIER/Bcj0m+eoPkR5IWJU/SRvblSzDy3zWSMVqWNGeAGRScY2s+A4+/iXWgZVA==
X-Received: by 2002:adf:9561:: with SMTP id 88mr11043924wrs.240.1593874241665; 
 Sat, 04 Jul 2020 07:50:41 -0700 (PDT)
Received: from localhost.localdomain (1.red-83-51-162.dynamicip.rima-tde.net.
 [83.51.162.1])
 by smtp.gmail.com with ESMTPSA id r10sm17135019wrm.17.2020.07.04.07.50.39
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sat, 04 Jul 2020 07:50:41 -0700 (PDT)
From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>
To: qemu-devel@nongnu.org,
	BALATON Zoltan <balaton@eik.bme.hu>
Subject: [PATCH 26/26] MAINTAINERS: Cover dwc-hsotg (dwc2) USB host controller
 emulation
Date: Sat,  4 Jul 2020 16:49:43 +0200
Message-Id: <20200704144943.18292-27-f4bug@amsat.org>
X-Mailer: git-send-email 2.21.3
In-Reply-To: <20200704144943.18292-1-f4bug@amsat.org>
References: <20200704144943.18292-1-f4bug@amsat.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Peter Maydell <peter.maydell@linaro.org>,
 "Michael S. Tsirkin" <mst@redhat.com>,
 Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>,
 Jiaxun Yang <jiaxun.yang@flygoat.com>, Gerd Hoffmann <kraxel@redhat.com>,
 "Edgar E. Iglesias" <edgar.iglesias@gmail.com>,
 Huacai Chen <chenhc@lemote.com>, Stefano Stabellini <sstabellini@kernel.org>,
 xen-devel@lists.xenproject.org, Yoshinori Sato <ysato@users.sourceforge.jp>,
 Paul Durrant <paul@xen.org>, Magnus Damm <magnus.damm@gmail.com>,
 Markus Armbruster <armbru@redhat.com>,
 =?UTF-8?q?Herv=C3=A9=20Poussineau?= <hpoussin@reactos.org>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 Anthony Perard <anthony.perard@citrix.com>,
 Samuel Thibault <samuel.thibault@ens-lyon.org>,
 Leif Lindholm <leif@nuviainc.com>, Andrzej Zaborowski <balrogg@gmail.com>,
 Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
 Eduardo Habkost <ehabkost@redhat.com>,
 Alistair Francis <alistair@alistair23.me>,
 "Dr. David Alan Gilbert" <dgilbert@redhat.com>,
 Beniamino Galvani <b.galvani@gmail.com>,
 Niek Linnenbank <nieklinnenbank@gmail.com>, qemu-arm@nongnu.org,
 =?UTF-8?q?Marc-Andr=C3=A9=20Lureau?= <marcandre.lureau@redhat.com>,
 Richard Henderson <rth@twiddle.net>,
 Radoslaw Biernacki <radoslaw.biernacki@linaro.org>,
 Igor Mitsyanko <i.mitsyanko@gmail.com>,
 =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>,
 =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>,
 Paul Zimmerman <pauldzim@gmail.com>, qemu-ppc@nongnu.org,
 David Gibson <david@gibson.dropbear.id.au>,
 Paolo Bonzini <pbonzini@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Add a section for the dwc2 host controller emulation
introduced in commit 153ef1662c.

Cc: Paul Zimmerman <pauldzim@gmail.com>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
---
 MAINTAINERS | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index 2566566d72..e3f895bc6e 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1651,6 +1651,12 @@ M: Samuel Thibault <samuel.thibault@ens-lyon.org>
 S: Maintained
 F: hw/usb/dev-serial.c
 
+USB dwc-hsotg (dwc2)
+M: Gerd Hoffmann <kraxel@redhat.com>
+R: Paul Zimmerman <pauldzim@gmail.com>
+S: Maintained
+F: hw/usb/*dwc2*
+
 VFIO
 M: Alex Williamson <alex.williamson@redhat.com>
 S: Supported
-- 
2.21.3



From xen-devel-bounces@lists.xenproject.org Sat Jul 04 14:56:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 04 Jul 2020 14:56:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jrjat-00054s-PA; Sat, 04 Jul 2020 14:56:43 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hHao=AP=gmail.com=philippe.mathieu.daude@srs-us1.protection.inumbo.net>)
 id 1jrjVG-0003ES-R1
 for xen-devel@lists.xenproject.org; Sat, 04 Jul 2020 14:50:54 +0000
X-Inumbo-ID: ab2286ec-be05-11ea-8496-bc764e2007e4
Received: from mail-wm1-x342.google.com (unknown [2a00:1450:4864:20::342])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ab2286ec-be05-11ea-8496-bc764e2007e4;
 Sat, 04 Jul 2020 14:50:14 +0000 (UTC)
Received: by mail-wm1-x342.google.com with SMTP id o2so37044911wmh.2
 for <xen-devel@lists.xenproject.org>; Sat, 04 Jul 2020 07:50:14 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=sender:from:to:cc:subject:date:message-id:in-reply-to:references
 :mime-version:content-transfer-encoding;
 bh=k+8FH1LHVyWY5lxn8kaV6RhN5WT6vNL6Csn+tWtGqIU=;
 b=IB9L+DJ6fMSlRh6JnfVIyjRfs3r2+cKnkCMh6AOc0lm/+BaC+pa5gw3hPPcon8hyCC
 TQRh9Dsuy4N9iNSa+sDAo59Wb9OBUTJhWrGiQYUiHaJcD2YjCqcZFlX3q6wvN0kzv12y
 zjRWD0O+C14SgaG835wapqV4fOHwaoE+lia4hfdbd5Zyk+XJzdC2t6DGiCxQ8vTcnZgX
 354/5VYBjRfY9uD1tT+MK47AGBcB3P1p8J9RpLbDsqF2ri9rbyNs8LgFGF8k/fGIcKWy
 wmxNBkxWFAH/E/5EdZfvMLkUm0kbfAlZiV+UjR0G1MRggb5dB2nI2Req9hTae4hUmE3t
 glLw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:sender:from:to:cc:subject:date:message-id
 :in-reply-to:references:mime-version:content-transfer-encoding;
 bh=k+8FH1LHVyWY5lxn8kaV6RhN5WT6vNL6Csn+tWtGqIU=;
 b=MZIH5rHd4EJdp9pxTlRTLiGYMdF3qeb5auU+xu5I6gTmaLxMaIEjrabPIdqwTpwNOS
 n5c3ZVqsoKeeg4NjxLypxvRUUvroOna4ee6yGyTjfCqAgz60lcScih58bnyDg/uTJJWw
 +HSE9RnO57qHwLE0pwexr6X7Pu/aWNU5kxPHMKiYHmkiHRVwKjAM/aoon3dpPFFc0/5w
 VfnDhnU3feWr3NCp5TifbjFf+g7oQq8tyIKcX3d38t3Yf7Q7fpmA4+kXhbOfwbMTQX9a
 v1BTBaJ4Ba+Cuzn+eY8rfVH4jeYOtrBpMAicDkgsbWqX3h4j2LfWK1+TMJQr7HKQOQ21
 X4Fw==
X-Gm-Message-State: AOAM533F+nZPJraafRbz67xt8oPF8p9kdUiBrVwg9P83EopqRWezICiC
 USWTu41fj7QqNHwoW0Kpf6c=
X-Google-Smtp-Source: ABdhPJzNTGufYfLMeOUgBhGaylb5O5upKi84ytezyRjwfpHgpubMO2vfpa3tyGQAl3MlqZSVzS+XTQ==
X-Received: by 2002:a1c:48:: with SMTP id 69mr43033872wma.32.1593874214066;
 Sat, 04 Jul 2020 07:50:14 -0700 (PDT)
Received: from localhost.localdomain (1.red-83-51-162.dynamicip.rima-tde.net.
 [83.51.162.1])
 by smtp.gmail.com with ESMTPSA id r10sm17135019wrm.17.2020.07.04.07.50.12
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sat, 04 Jul 2020 07:50:13 -0700 (PDT)
From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>
To: qemu-devel@nongnu.org,
	BALATON Zoltan <balaton@eik.bme.hu>
Subject: [PATCH 13/26] hw/usb/desc: Reduce some declarations scope
Date: Sat,  4 Jul 2020 16:49:30 +0200
Message-Id: <20200704144943.18292-14-f4bug@amsat.org>
X-Mailer: git-send-email 2.21.3
In-Reply-To: <20200704144943.18292-1-f4bug@amsat.org>
References: <20200704144943.18292-1-f4bug@amsat.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Peter Maydell <peter.maydell@linaro.org>,
 "Michael S. Tsirkin" <mst@redhat.com>,
 Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>,
 Jiaxun Yang <jiaxun.yang@flygoat.com>, Gerd Hoffmann <kraxel@redhat.com>,
 "Edgar E. Iglesias" <edgar.iglesias@gmail.com>,
 Huacai Chen <chenhc@lemote.com>, Stefano Stabellini <sstabellini@kernel.org>,
 xen-devel@lists.xenproject.org, Yoshinori Sato <ysato@users.sourceforge.jp>,
 Paul Durrant <paul@xen.org>, Magnus Damm <magnus.damm@gmail.com>,
 Markus Armbruster <armbru@redhat.com>,
 =?UTF-8?q?Herv=C3=A9=20Poussineau?= <hpoussin@reactos.org>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 Anthony Perard <anthony.perard@citrix.com>,
 Samuel Thibault <samuel.thibault@ens-lyon.org>,
 Leif Lindholm <leif@nuviainc.com>, Andrzej Zaborowski <balrogg@gmail.com>,
 Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
 Eduardo Habkost <ehabkost@redhat.com>,
 Alistair Francis <alistair@alistair23.me>,
 "Dr. David Alan Gilbert" <dgilbert@redhat.com>,
 Beniamino Galvani <b.galvani@gmail.com>,
 Niek Linnenbank <nieklinnenbank@gmail.com>, qemu-arm@nongnu.org,
 =?UTF-8?q?Marc-Andr=C3=A9=20Lureau?= <marcandre.lureau@redhat.com>,
 Richard Henderson <rth@twiddle.net>,
 Radoslaw Biernacki <radoslaw.biernacki@linaro.org>,
 Igor Mitsyanko <i.mitsyanko@gmail.com>,
 =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>,
 =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>,
 Paul Zimmerman <pauldzim@gmail.com>, qemu-ppc@nongnu.org,
 David Gibson <david@gibson.dropbear.id.au>,
 Paolo Bonzini <pbonzini@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

USBDescString is forward-declared. Only bus.c uses the
usb_device_get_product_desc() and usb_device_get_usb_desc()
functions. Move all that to the "desc.h" header to reduce
the big "hw/usb.h" header a bit.

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
---
 hw/usb/desc.h    | 10 ++++++++++
 include/hw/usb.h | 10 ----------
 hw/usb/bus.c     |  1 +
 3 files changed, 11 insertions(+), 10 deletions(-)

diff --git a/hw/usb/desc.h b/hw/usb/desc.h
index 92594fbe29..4bf6966c4b 100644
--- a/hw/usb/desc.h
+++ b/hw/usb/desc.h
@@ -242,4 +242,14 @@ int usb_desc_get_descriptor(USBDevice *dev, USBPacket *p,
 int usb_desc_handle_control(USBDevice *dev, USBPacket *p,
         int request, int value, int index, int length, uint8_t *data);
 
+const char *usb_device_get_product_desc(USBDevice *dev);
+
+const USBDesc *usb_device_get_usb_desc(USBDevice *dev);
+
+struct USBDescString {
+    uint8_t index;
+    char *str;
+    QLIST_ENTRY(USBDescString) next;
+};
+
 #endif /* QEMU_HW_USB_DESC_H */
diff --git a/include/hw/usb.h b/include/hw/usb.h
index 15b2ef300a..18f1349bdc 100644
--- a/include/hw/usb.h
+++ b/include/hw/usb.h
@@ -192,12 +192,6 @@ typedef struct USBDescOther USBDescOther;
 typedef struct USBDescString USBDescString;
 typedef struct USBDescMSOS USBDescMSOS;
 
-struct USBDescString {
-    uint8_t index;
-    char *str;
-    QLIST_ENTRY(USBDescString) next;
-};
-
 #define USB_MAX_ENDPOINTS  15
 #define USB_MAX_INTERFACES 16
 
@@ -555,10 +549,6 @@ int usb_device_alloc_streams(USBDevice *dev, USBEndpoint **eps, int nr_eps,
                              int streams);
 void usb_device_free_streams(USBDevice *dev, USBEndpoint **eps, int nr_eps);
 
-const char *usb_device_get_product_desc(USBDevice *dev);
-
-const USBDesc *usb_device_get_usb_desc(USBDevice *dev);
-
 /* quirks.c */
 
 /* In bulk endpoints are streaming data sources (iow behave like isoc eps) */
diff --git a/hw/usb/bus.c b/hw/usb/bus.c
index 957559b18d..111c3af7c1 100644
--- a/hw/usb/bus.c
+++ b/hw/usb/bus.c
@@ -9,6 +9,7 @@
 #include "monitor/monitor.h"
 #include "trace.h"
 #include "qemu/cutils.h"
+#include "desc.h"
 
 static void usb_bus_dev_print(Monitor *mon, DeviceState *qdev, int indent);
 
-- 
2.21.3



From xen-devel-bounces@lists.xenproject.org Sat Jul 04 14:56:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 04 Jul 2020 14:56:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jrjau-00055S-2o; Sat, 04 Jul 2020 14:56:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hHao=AP=gmail.com=philippe.mathieu.daude@srs-us1.protection.inumbo.net>)
 id 1jrjVV-0003ES-S5
 for xen-devel@lists.xenproject.org; Sat, 04 Jul 2020 14:51:09 +0000
X-Inumbo-ID: aebbc98a-be05-11ea-b7bb-bc764e2007e4
Received: from mail-wm1-x344.google.com (unknown [2a00:1450:4864:20::344])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id aebbc98a-be05-11ea-b7bb-bc764e2007e4;
 Sat, 04 Jul 2020 14:50:20 +0000 (UTC)
Received: by mail-wm1-x344.google.com with SMTP id w3so24606974wmi.4
 for <xen-devel@lists.xenproject.org>; Sat, 04 Jul 2020 07:50:20 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=sender:from:to:cc:subject:date:message-id:in-reply-to:references
 :mime-version:content-transfer-encoding;
 bh=LwCcRU3ITBZcN/p8m4Tvly+Gfq57t4rX34Kk3Uzziqs=;
 b=oKjMAbLQ2Nbdo7AVZTEMo5rKzXNXebbxS2YHgA5kZDA2Z2+DxgsuBaop4Nf7YP0t9g
 825K7Sz5l6DRKW2FWV64F6Y8PFEbfBZ/8mNGGq8q8I/v/XzgiVoT/c6cLtJ4Wq4+y8s/
 ZyQu3ZX70MOw0YMf2jRU84FoSuY8lwe6rfsONGi0EvhvcQH4agyx8WdTlthVPz7oUMLd
 7gZ2JkRsngTeq9VmBw6iuSdlXJYxQF7UzAOPFbD80qjvobuW2TUKX0fMm3Fjk3Rm4noB
 rWoSeUrHYZkkLtcx+JFqgKhA77mwlMll57o/tAzzPnbS9XSESKlGwrsNVmP1tPkGSkA/
 JlFg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:sender:from:to:cc:subject:date:message-id
 :in-reply-to:references:mime-version:content-transfer-encoding;
 bh=LwCcRU3ITBZcN/p8m4Tvly+Gfq57t4rX34Kk3Uzziqs=;
 b=Eh+6h058r+uOg2DywExGmoYyVrZlZ7+afOmZn0HxjUWbUzJ9BQ3nVptpCONutwHUbr
 N6dprYgypafELQGImKgW/WpwpiQRUWFUN+m4MqKnt2ZnE1dfMELbaA4pDcxbKYlCLxlt
 7GhPlQoXw5QnLqjO3dKIym3s4On00GzF045osVhUtyf+hm2ynS5FPF4Q58fCMGro7whq
 tH3CDTTol9oZdEHyqYsryjHFw0bKY5XPtQtsmgS68QxK3NIxbu5tFsaBaW5vJXcWEEjo
 EzM9KkmMuSEmMSixrt187zqa9XM33c6YmdJzhbNhsZp6Wfi4qAskMAZzsoFepreKIqSN
 w/Nw==
X-Gm-Message-State: AOAM53249mpe4B+m6mJKPNk73OAgEYMKJwMswanzTHf4qdHrveqjNDmg
 wY6Y73EtGCOa37PkRFR6YyM=
X-Google-Smtp-Source: ABdhPJxv37a8GSHu9gIk53qyLIpPiec4lNJ044JwM140Yq2vwztlBB8+hQI1sx3Og8exNtb1rbIDuw==
X-Received: by 2002:a1c:b686:: with SMTP id
 g128mr43033346wmf.145.1593874220071; 
 Sat, 04 Jul 2020 07:50:20 -0700 (PDT)
Received: from localhost.localdomain (1.red-83-51-162.dynamicip.rima-tde.net.
 [83.51.162.1])
 by smtp.gmail.com with ESMTPSA id r10sm17135019wrm.17.2020.07.04.07.50.18
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sat, 04 Jul 2020 07:50:19 -0700 (PDT)
From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>
To: qemu-devel@nongnu.org,
	BALATON Zoltan <balaton@eik.bme.hu>
Subject: [PATCH 16/26] hw/usb/bus: Simplify usb_get_dev_path()
Date: Sat,  4 Jul 2020 16:49:33 +0200
Message-Id: <20200704144943.18292-17-f4bug@amsat.org>
X-Mailer: git-send-email 2.21.3
In-Reply-To: <20200704144943.18292-1-f4bug@amsat.org>
References: <20200704144943.18292-1-f4bug@amsat.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Peter Maydell <peter.maydell@linaro.org>,
 "Michael S. Tsirkin" <mst@redhat.com>,
 Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>,
 Jiaxun Yang <jiaxun.yang@flygoat.com>, Gerd Hoffmann <kraxel@redhat.com>,
 "Edgar E. Iglesias" <edgar.iglesias@gmail.com>,
 Huacai Chen <chenhc@lemote.com>, Stefano Stabellini <sstabellini@kernel.org>,
 xen-devel@lists.xenproject.org, Yoshinori Sato <ysato@users.sourceforge.jp>,
 Paul Durrant <paul@xen.org>, Magnus Damm <magnus.damm@gmail.com>,
 Markus Armbruster <armbru@redhat.com>,
 =?UTF-8?q?Herv=C3=A9=20Poussineau?= <hpoussin@reactos.org>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 Anthony Perard <anthony.perard@citrix.com>,
 Samuel Thibault <samuel.thibault@ens-lyon.org>,
 Leif Lindholm <leif@nuviainc.com>, Andrzej Zaborowski <balrogg@gmail.com>,
 Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
 Eduardo Habkost <ehabkost@redhat.com>,
 Alistair Francis <alistair@alistair23.me>,
 "Dr. David Alan Gilbert" <dgilbert@redhat.com>,
 Beniamino Galvani <b.galvani@gmail.com>,
 Niek Linnenbank <nieklinnenbank@gmail.com>, qemu-arm@nongnu.org,
 =?UTF-8?q?Marc-Andr=C3=A9=20Lureau?= <marcandre.lureau@redhat.com>,
 Richard Henderson <rth@twiddle.net>,
 Radoslaw Biernacki <radoslaw.biernacki@linaro.org>,
 Igor Mitsyanko <i.mitsyanko@gmail.com>,
 =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>,
 =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>,
 Paul Zimmerman <pauldzim@gmail.com>, qemu-ppc@nongnu.org,
 David Gibson <david@gibson.dropbear.id.au>,
 Paolo Bonzini <pbonzini@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Simplify usb_get_dev_path() a bit.
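The reshuffled control flow can be sketched in isolation: compose "<hcd>/<port>" only when a full path is requested and the controller actually yields one, otherwise fall back to the bare port path. A standalone approximation, where plain libc replaces g_strdup()/g_strdup_printf() and the names are illustrative rather than QEMU's:

```c
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Local strdup() to stay within strict ISO C. */
static char *xstrdup(const char *s)
{
    size_t n = strlen(s) + 1;
    char *p = malloc(n);
    if (p) {
        memcpy(p, s, n);
    }
    return p;
}

/* Mirrors the simplified usb_get_dev_path(): one nested block for the
 * fully-qualified case, a single fall-through return for everything else. */
static char *dev_path(int full_path, const char *hcd_id, const char *port)
{
    if (full_path && hcd_id != NULL) {
        size_t n = strlen(hcd_id) + 1 + strlen(port) + 1;
        char *ret = malloc(n);
        if (ret) {
            snprintf(ret, n, "%s/%s", hcd_id, port);
        }
        return ret;
    }
    return xstrdup(port);
}
```

Note the single shared fall-through return, which is what lets the patch drop the else branch and the id variable's function-wide scope.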

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
---
 hw/usb/bus.c | 19 +++++++++----------
 1 file changed, 9 insertions(+), 10 deletions(-)

diff --git a/hw/usb/bus.c b/hw/usb/bus.c
index 111c3af7c1..f8901e822c 100644
--- a/hw/usb/bus.c
+++ b/hw/usb/bus.c
@@ -580,19 +580,18 @@ static void usb_bus_dev_print(Monitor *mon, DeviceState *qdev, int indent)
 static char *usb_get_dev_path(DeviceState *qdev)
 {
     USBDevice *dev = USB_DEVICE(qdev);
-    DeviceState *hcd = qdev->parent_bus->parent;
-    char *id = NULL;
 
     if (dev->flags & (1 << USB_DEV_FLAG_FULL_PATH)) {
-        id = qdev_get_dev_path(hcd);
-    }
-    if (id) {
-        char *ret = g_strdup_printf("%s/%s", id, dev->port->path);
-        g_free(id);
-        return ret;
-    } else {
-        return g_strdup(dev->port->path);
+        DeviceState *hcd = qdev->parent_bus->parent;
+        char *id = qdev_get_dev_path(hcd);
+
+        if (id) {
+            char *ret = g_strdup_printf("%s/%s", id, dev->port->path);
+            g_free(id);
+            return ret;
+        }
     }
+    return g_strdup(dev->port->path);
 }
 
 static char *usb_get_fw_dev_path(DeviceState *qdev)
-- 
2.21.3



From xen-devel-bounces@lists.xenproject.org Sat Jul 04 14:56:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 04 Jul 2020 14:56:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jrjau-00056G-Er; Sat, 04 Jul 2020 14:56:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hHao=AP=gmail.com=philippe.mathieu.daude@srs-us1.protection.inumbo.net>)
 id 1jrjVp-0003ES-T1
 for xen-devel@lists.xenproject.org; Sat, 04 Jul 2020 14:51:29 +0000
X-Inumbo-ID: b3c731bc-be05-11ea-8496-bc764e2007e4
Received: from mail-wm1-x341.google.com (unknown [2a00:1450:4864:20::341])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b3c731bc-be05-11ea-8496-bc764e2007e4;
 Sat, 04 Jul 2020 14:50:29 +0000 (UTC)
Received: by mail-wm1-x341.google.com with SMTP id q15so34721985wmj.2
 for <xen-devel@lists.xenproject.org>; Sat, 04 Jul 2020 07:50:29 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=sender:from:to:cc:subject:date:message-id:in-reply-to:references
 :mime-version:content-transfer-encoding;
 bh=fJVcVH4doVw6Km6BSM1MV1lDhvP8r9rvGpMs+kFykbo=;
 b=K2lwgSpyWnjq/h8TiOfPsKpInIWHa5+dz+1p8VTR4XavTCAi4oZM+nXkVl8WOqLewb
 ozsFFf2OwYpYbdZ3eiEwdCdNFB8xk060HSlm2pDLSmkmpqjw2t8zgC7WWS6Rtc2pYFI6
 pVhqULi6RzwvieAFZ+T/B0GoMZz0xSebo4MS/liGeIh0HixD4qcgipHRgT3GteBbaP3O
 NfDFlBiaubRh22WagrqyVHDevz3RsDYDG2zHsljANg3QGj+roWzdwYDe+wIiXCkn/EDo
 Tg6k/KyVg3AsKT99KsfBndK/TZAZVe77NfLq7pdi2mAnr6zr2XQ1x/QBBG4elYDZy0FK
 QunQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:sender:from:to:cc:subject:date:message-id
 :in-reply-to:references:mime-version:content-transfer-encoding;
 bh=fJVcVH4doVw6Km6BSM1MV1lDhvP8r9rvGpMs+kFykbo=;
 b=WT8ixSKz5EaUKPqUvBcyTOwMcxBkrUcOsfmalOsMmUT3a0El1RGA/IrSYMMRfTuqK2
 ObV18I01To2rGO+Y/hdkcFsuz/Ysw76Fhom84NqnrI9ycTD8qVJ9q3I+O15gyo++fZsd
 onXtwTRfQyPHZNH0HpWUTnYGteP4c8YJYuSRR12JKbM0YRJNvTh32RUCL6DsVbGWRP2i
 oihQlqYUzTu6mQXUO9/hdDvXOPjzf2pxsk6r+85xRRk3gEAx9NHIVKApEQkuAO23pSN7
 J+2iirE09vK0XQMuBv3YYhf+dZliXBQ84uMlrtDPwW6FoKfB9kqXtGNIo4dNhaI+UxOj
 HJiQ==
X-Gm-Message-State: AOAM530OTNXxatzadq/X3+8y2w16swIEi/FTcl2WN2sFGILrz5UqjxQ1
 KBvlB76CL8sTqFuBw536IfQ=
X-Google-Smtp-Source: ABdhPJwBkQVEoJtyk9IO8+500AN1sxUzaC/kjPlQwjEVcQTj3r8o+ql1zItIAXJmSwL9GUWTiy5ETg==
X-Received: by 2002:a1c:5986:: with SMTP id
 n128mr28975302wmb.112.1593874228406; 
 Sat, 04 Jul 2020 07:50:28 -0700 (PDT)
Received: from localhost.localdomain (1.red-83-51-162.dynamicip.rima-tde.net.
 [83.51.162.1])
 by smtp.gmail.com with ESMTPSA id r10sm17135019wrm.17.2020.07.04.07.50.26
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sat, 04 Jul 2020 07:50:27 -0700 (PDT)
From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>
To: qemu-devel@nongnu.org,
	BALATON Zoltan <balaton@eik.bme.hu>
Subject: [PATCH 20/26] hw/usb: Introduce "hw/usb/usb.h" public API
Date: Sat,  4 Jul 2020 16:49:37 +0200
Message-Id: <20200704144943.18292-21-f4bug@amsat.org>
X-Mailer: git-send-email 2.21.3
In-Reply-To: <20200704144943.18292-1-f4bug@amsat.org>
References: <20200704144943.18292-1-f4bug@amsat.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Peter Maydell <peter.maydell@linaro.org>,
 "Michael S. Tsirkin" <mst@redhat.com>,
 Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>,
 Jiaxun Yang <jiaxun.yang@flygoat.com>, Gerd Hoffmann <kraxel@redhat.com>,
 "Edgar E. Iglesias" <edgar.iglesias@gmail.com>,
 Huacai Chen <chenhc@lemote.com>, Stefano Stabellini <sstabellini@kernel.org>,
 xen-devel@lists.xenproject.org, Yoshinori Sato <ysato@users.sourceforge.jp>,
 Paul Durrant <paul@xen.org>, Magnus Damm <magnus.damm@gmail.com>,
 Markus Armbruster <armbru@redhat.com>,
 =?UTF-8?q?Herv=C3=A9=20Poussineau?= <hpoussin@reactos.org>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 Anthony Perard <anthony.perard@citrix.com>,
 Samuel Thibault <samuel.thibault@ens-lyon.org>,
 Leif Lindholm <leif@nuviainc.com>, Andrzej Zaborowski <balrogg@gmail.com>,
 Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
 Eduardo Habkost <ehabkost@redhat.com>,
 Alistair Francis <alistair@alistair23.me>,
 "Dr. David Alan Gilbert" <dgilbert@redhat.com>,
 Beniamino Galvani <b.galvani@gmail.com>,
 Niek Linnenbank <nieklinnenbank@gmail.com>, qemu-arm@nongnu.org,
 =?UTF-8?q?Marc-Andr=C3=A9=20Lureau?= <marcandre.lureau@redhat.com>,
 Richard Henderson <rth@twiddle.net>,
 Radoslaw Biernacki <radoslaw.biernacki@linaro.org>,
 Igor Mitsyanko <i.mitsyanko@gmail.com>,
 =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>,
 =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>,
 Paul Zimmerman <pauldzim@gmail.com>, qemu-ppc@nongnu.org,
 David Gibson <david@gibson.dropbear.id.au>,
 Paolo Bonzini <pbonzini@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Only USB devices need access to the internal USB APIs.

The rest of the code base only needs to consume USB devices
through a generic API. Move the generic declarations to the new
"hw/usb/usb.h" header.
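The split works because consumers only need an incomplete (opaque) struct type plus prototypes; the full definition stays private. A single-file sketch of the pattern — with hypothetical Device names, and the two commented halves corresponding roughly to the new "hw/usb/usb.h" and the internal "hw/usb.h":

```c
#include <assert.h>
#include <stdlib.h>

/* --- "public header": opaque type and accessors, no internals exposed --- */
typedef struct Device Device;     /* incomplete type is enough for callers */
Device *device_new(int id);
int device_id(const Device *dev);

/* --- "internal header" + implementation: full definition lives here --- */
struct Device {
    int id;
};

Device *device_new(int id)
{
    Device *d = malloc(sizeof(*d));
    if (d) {
        d->id = id;
    }
    return d;
}

int device_id(const Device *dev)
{
    return dev->id;
}
```

Callers that only hold Device pointers never need the struct layout, which is exactly why chardev/baum.c, the board code, and the monitor can switch to the slimmer header.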

Reported-by: BALATON Zoltan <balaton@eik.bme.hu>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
---
 include/hw/usb.h      | 27 +-------------------
 include/hw/usb/usb.h  | 58 +++++++++++++++++++++++++++++++++++++++++++
 chardev/baum.c        |  2 +-
 hw/i386/pc.c          |  2 +-
 hw/i386/pc_piix.c     |  2 +-
 hw/i386/pc_q35.c      |  2 +-
 hw/ppc/mac_newworld.c |  2 +-
 hw/ppc/sam460ex.c     |  1 +
 hw/ppc/spapr.c        |  2 +-
 hw/sh4/r2d.c          |  2 +-
 hw/usb/host-stub.c    |  2 +-
 monitor/misc.c        |  2 +-
 softmmu/vl.c          |  2 +-
 13 files changed, 70 insertions(+), 36 deletions(-)
 create mode 100644 include/hw/usb/usb.h

diff --git a/include/hw/usb.h b/include/hw/usb.h
index 7ea502d421..2ea5186ea5 100644
--- a/include/hw/usb.h
+++ b/include/hw/usb.h
@@ -26,6 +26,7 @@
  */
 
 #include "hw/qdev-core.h"
+#include "hw/usb/usb.h"
 #include "qemu/iov.h"
 #include "qemu/queue.h"
 
@@ -176,7 +177,6 @@
 typedef struct USBBus USBBus;
 typedef struct USBBusOps USBBusOps;
 typedef struct USBPort USBPort;
-typedef struct USBDevice USBDevice;
 typedef struct USBPacket USBPacket;
 typedef struct USBCombinedPacket USBCombinedPacket;
 typedef struct USBEndpoint USBEndpoint;
@@ -256,9 +256,6 @@ struct USBDevice {
     const USBDescIface  *ifaces[USB_MAX_INTERFACES];
 };
 
-#define TYPE_USB_DEVICE "usb-device"
-#define USB_DEVICE(obj) \
-     OBJECT_CHECK(USBDevice, (obj), TYPE_USB_DEVICE)
 #define USB_DEVICE_CLASS(klass) \
      OBJECT_CLASS_CHECK(USBDeviceClass, (klass), TYPE_USB_DEVICE)
 #define USB_DEVICE_GET_CLASS(obj) \
@@ -459,15 +456,8 @@ void usb_device_reset(USBDevice *dev);
 void usb_wakeup(USBEndpoint *ep, unsigned int stream);
 void usb_generic_async_ctrl_complete(USBDevice *s, USBPacket *p);
 
-/* usb-linux.c */
-void hmp_info_usbhost(Monitor *mon, const QDict *qdict);
-bool usb_host_dev_is_scsi_storage(USBDevice *usbdev);
-
 /* usb-bus.c */
 
-#define TYPE_USB_BUS "usb-bus"
-#define USB_BUS(obj) OBJECT_CHECK(USBBus, (obj), TYPE_USB_BUS)
-
 struct USBBus {
     BusState qbus;
     USBBusOps *ops;
@@ -489,13 +479,8 @@ struct USBBusOps {
 void usb_bus_new(USBBus *bus, size_t bus_size,
                  USBBusOps *ops, DeviceState *host);
 void usb_bus_release(USBBus *bus);
-USBBus *usb_bus_find(int busnr);
 void usb_legacy_register(const char *typename, const char *usbdevice_name,
                          USBDevice *(*usbdevice_init)(const char *params));
-USBDevice *usb_new(const char *name);
-bool usb_realize_and_unref(USBDevice *dev, USBBus *bus, Error **errp);
-USBDevice *usb_create_simple(USBBus *bus, const char *name);
-USBDevice *usbdevice_create(const char *cmdline);
 void usb_register_port(USBBus *bus, USBPort *port, void *opaque, int index,
                        USBPortOps *ops, int speedmask);
 void usb_register_companion(const char *masterbus, USBPort *ports[],
@@ -506,16 +491,6 @@ void usb_port_location(USBPort *downstream, USBPort *upstream, int portnr);
 void usb_unregister_port(USBBus *bus, USBPort *port);
 void usb_claim_port(USBDevice *dev, Error **errp);
 void usb_release_port(USBDevice *dev);
-/**
- * usb_get_port_path:
- * @dev: the USB device
- *
- * The returned data must be released with g_free()
- * when no longer required.
- *
- * Returns: a dynamically allocated pathname.
- */
-char *usb_get_port_path(USBDevice *dev);
 void usb_device_attach(USBDevice *dev, Error **errp);
 int usb_device_detach(USBDevice *dev);
 void usb_check_attach(USBDevice *dev, Error **errp);
diff --git a/include/hw/usb/usb.h b/include/hw/usb/usb.h
new file mode 100644
index 0000000000..9a13b08503
--- /dev/null
+++ b/include/hw/usb/usb.h
@@ -0,0 +1,58 @@
+/*
+ * QEMU USB API
+ *
+ * Copyright (c) 2005 Fabrice Bellard
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
+ * of this software and associated documentation files (the "Software"), to deal
+ * in the Software without restriction, including without limitation the rights
+ * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ * copies of the Software, and to permit persons to whom the Software is
+ * furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+ * THE SOFTWARE.
+ */
+#ifndef QEMU_HW_USB_H
+#define QEMU_HW_USB_H
+
+typedef struct USBDevice USBDevice;
+
+#define TYPE_USB_DEVICE "usb-device"
+#define USB_DEVICE(obj) \
+     OBJECT_CHECK(USBDevice, (obj), TYPE_USB_DEVICE)
+
+typedef struct USBBus USBBus;
+
+#define TYPE_USB_BUS "usb-bus"
+#define USB_BUS(obj) OBJECT_CHECK(USBBus, (obj), TYPE_USB_BUS)
+
+USBBus *usb_bus_find(int busnr);
+USBDevice *usb_new(const char *name);
+bool usb_realize_and_unref(USBDevice *dev, USBBus *bus, Error **errp);
+USBDevice *usb_create_simple(USBBus *bus, const char *name);
+USBDevice *usbdevice_create(const char *cmdline);
+
+/**
+ * usb_get_port_path:
+ * @dev: the USB device
+ *
+ * The returned data must be released with g_free()
+ * when no longer required.
+ *
+ * Returns: a dynamically allocated pathname.
+ */
+char *usb_get_port_path(USBDevice *dev);
+
+void hmp_info_usbhost(Monitor *mon, const QDict *qdict);
+bool usb_host_dev_is_scsi_storage(USBDevice *usbdev);
+
+#endif
diff --git a/chardev/baum.c b/chardev/baum.c
index 9c95e7bc79..fc04bf2e2f 100644
--- a/chardev/baum.c
+++ b/chardev/baum.c
@@ -28,7 +28,7 @@
 #include "qemu/main-loop.h"
 #include "qemu/module.h"
 #include "qemu/timer.h"
-#include "hw/usb.h"
+#include "hw/usb/usb.h"
 #include "ui/console.h"
 #include <brlapi.h>
 #include <brlapi_constants.h>
diff --git a/hw/i386/pc.c b/hw/i386/pc.c
index 4af9679d03..a890f57ac2 100644
--- a/hw/i386/pc.c
+++ b/hw/i386/pc.c
@@ -83,7 +83,7 @@
 #include "qapi/qapi-visit-common.h"
 #include "qapi/visitor.h"
 #include "hw/core/cpu.h"
-#include "hw/usb.h"
+#include "hw/usb/usb.h"
 #include "hw/i386/intel_iommu.h"
 #include "hw/net/ne2000-isa.h"
 #include "standard-headers/asm-x86/bootparam.h"
diff --git a/hw/i386/pc_piix.c b/hw/i386/pc_piix.c
index 1d832b2878..4d1de7cfab 100644
--- a/hw/i386/pc_piix.c
+++ b/hw/i386/pc_piix.c
@@ -36,7 +36,7 @@
 #include "hw/firmware/smbios.h"
 #include "hw/pci/pci.h"
 #include "hw/pci/pci_ids.h"
-#include "hw/usb.h"
+#include "hw/usb/usb.h"
 #include "net/net.h"
 #include "hw/ide/pci.h"
 #include "hw/irq.h"
diff --git a/hw/i386/pc_q35.c b/hw/i386/pc_q35.c
index 047ea8db28..b985f5bea1 100644
--- a/hw/i386/pc_q35.c
+++ b/hw/i386/pc_q35.c
@@ -50,7 +50,7 @@
 #include "hw/firmware/smbios.h"
 #include "hw/ide/pci.h"
 #include "hw/ide/ahci.h"
-#include "hw/usb.h"
+#include "hw/usb/usb.h"
 #include "qapi/error.h"
 #include "qemu/error-report.h"
 #include "sysemu/numa.h"
diff --git a/hw/ppc/mac_newworld.c b/hw/ppc/mac_newworld.c
index 828c5992ae..7bf69f4a1f 100644
--- a/hw/ppc/mac_newworld.c
+++ b/hw/ppc/mac_newworld.c
@@ -69,7 +69,7 @@
 #include "sysemu/kvm.h"
 #include "sysemu/reset.h"
 #include "kvm_ppc.h"
-#include "hw/usb.h"
+#include "hw/usb/usb.h"
 #include "exec/address-spaces.h"
 #include "hw/sysbus.h"
 #include "trace.h"
diff --git a/hw/ppc/sam460ex.c b/hw/ppc/sam460ex.c
index fae970b142..781b45e14b 100644
--- a/hw/ppc/sam460ex.c
+++ b/hw/ppc/sam460ex.c
@@ -35,6 +35,7 @@
 #include "hw/char/serial.h"
 #include "hw/i2c/ppc4xx_i2c.h"
 #include "hw/i2c/smbus_eeprom.h"
+#include "hw/usb/usb.h"
 #include "hw/usb/hcd-ehci.h"
 #include "hw/ppc/fdt.h"
 #include "hw/qdev-properties.h"
diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
index 221d3e7a8c..0c0409077f 100644
--- a/hw/ppc/spapr.c
+++ b/hw/ppc/spapr.c
@@ -70,7 +70,7 @@
 
 #include "exec/address-spaces.h"
 #include "exec/ram_addr.h"
-#include "hw/usb.h"
+#include "hw/usb/usb.h"
 #include "qemu/config-file.h"
 #include "qemu/error-report.h"
 #include "trace.h"
diff --git a/hw/sh4/r2d.c b/hw/sh4/r2d.c
index 443820901d..a39c378855 100644
--- a/hw/sh4/r2d.c
+++ b/hw/sh4/r2d.c
@@ -40,7 +40,7 @@
 #include "hw/ide.h"
 #include "hw/irq.h"
 #include "hw/loader.h"
-#include "hw/usb.h"
+#include "hw/usb/usb.h"
 #include "hw/block/flash.h"
 #include "exec/address-spaces.h"
 
diff --git a/hw/usb/host-stub.c b/hw/usb/host-stub.c
index 538ed29684..11b754892d 100644
--- a/hw/usb/host-stub.c
+++ b/hw/usb/host-stub.c
@@ -32,7 +32,7 @@
 
 #include "qemu/osdep.h"
 #include "ui/console.h"
-#include "hw/usb.h"
+#include "hw/usb/usb.h"
 #include "monitor/monitor.h"
 
 void hmp_info_usbhost(Monitor *mon, const QDict *qdict)
diff --git a/monitor/misc.c b/monitor/misc.c
index 89bb970b00..65c0f887dd 100644
--- a/monitor/misc.c
+++ b/monitor/misc.c
@@ -26,7 +26,7 @@
 #include "monitor-internal.h"
 #include "cpu.h"
 #include "monitor/qdev.h"
-#include "hw/usb.h"
+#include "hw/usb/usb.h"
 #include "hw/pci/pci.h"
 #include "sysemu/watchdog.h"
 #include "hw/loader.h"
diff --git a/softmmu/vl.c b/softmmu/vl.c
index 3e15ee2435..25a13e913e 100644
--- a/softmmu/vl.c
+++ b/softmmu/vl.c
@@ -41,7 +41,7 @@
 #include "qemu/error-report.h"
 #include "qemu/sockets.h"
 #include "sysemu/accel.h"
-#include "hw/usb.h"
+#include "hw/usb/usb.h"
 #include "hw/isa/isa.h"
 #include "hw/scsi/scsi.h"
 #include "hw/display/vga.h"
-- 
2.21.3



From xen-devel-bounces@lists.xenproject.org Sat Jul 04 14:56:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 04 Jul 2020 14:56:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jrjau-000574-Q5; Sat, 04 Jul 2020 14:56:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hHao=AP=gmail.com=philippe.mathieu.daude@srs-us1.protection.inumbo.net>)
 id 1jrjVz-0003ES-TP
 for xen-devel@lists.xenproject.org; Sat, 04 Jul 2020 14:51:39 +0000
X-Inumbo-ID: b69fe758-be05-11ea-bb8b-bc764e2007e4
Received: from mail-wm1-x344.google.com (unknown [2a00:1450:4864:20::344])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b69fe758-be05-11ea-bb8b-bc764e2007e4;
 Sat, 04 Jul 2020 14:50:34 +0000 (UTC)
Received: by mail-wm1-x344.google.com with SMTP id l17so34722954wmj.0
 for <xen-devel@lists.xenproject.org>; Sat, 04 Jul 2020 07:50:34 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=sender:from:to:cc:subject:date:message-id:in-reply-to:references
 :mime-version:content-transfer-encoding;
 bh=KF2Q1IQk+ahap9W/Mry96hRo3DbCgSYQH2d1hiHR8iQ=;
 b=RAQhqu18T6VUpYh2sdm0xRNj5/bRNkZh4umN5tUrHagmbLZVJX5/84bq62eDrR0eRk
 z3xAlKtd4mj2nXuzVp01YdFzEidKPR5SSMCf2XRbQBhFNw6vfiX43SiZHk3XoEuT8rGw
 0XBs5xe6RlJH9WthiHtBLxalHh7MNuQOFfjflsIqW2Levmjop8KiKovb6XSMlZ/odqsx
 Eurhi045hPTaj+DgftzLYGb5/vJ2sCgq0I6n/p2TlHk0th1/H61CjpOoS1X29Ei9kln4
 PYApQ2P3+f7gIDHk/+ym0f3YFUXPcThbjrKZYGv8Wyf4ipGfh4vDG62307NToRXNR8oV
 ophA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:sender:from:to:cc:subject:date:message-id
 :in-reply-to:references:mime-version:content-transfer-encoding;
 bh=KF2Q1IQk+ahap9W/Mry96hRo3DbCgSYQH2d1hiHR8iQ=;
 b=UcePVyNul3p8fG5iyKpiPCWD3++YaXsi21UXQFdUclIfbh4BGospKuQtaftGKDKXR4
 ODi8YRTwav/iN/csMXXeiD/KMgNwKAxJce1PipzHEWpIuz4ryZgrH7Qs1Gm51LeBHEkj
 sd6dY6PFP/PSxYyPl0Wr+aWiOz22cCD1Ov78/BK6rXl2Ukva+8KTZjB3ijKb2dpNtf/u
 kDH2c3RbidXZPyftN0RISgsHw+8x2nFsva7tqXk5xIdcVk383p9/hjhAtYCGaO9EsXch
 DY+IXSmrRnL7uf80fm/yoRpHpWR8RqhkSAntATgCJ2zcgOoWScaCQNb7CgYXKixeq0E7
 WE+Q==
X-Gm-Message-State: AOAM533RsTy77Ngxd8Cox4LnWZl1yGL3UzGCycdx67vIi1K90gkbMdfF
 bXZICg6EFfQxP1maHeN57v4=
X-Google-Smtp-Source: ABdhPJx2AAfoyUVngfkp8HcQ1LKQbbMiTCGjrFSeWkveBqg3nVRqfMGkyrx6NhSTUmnzJ5QBmiC+sw==
X-Received: by 2002:a7b:c313:: with SMTP id k19mr26877648wmj.67.1593874233021; 
 Sat, 04 Jul 2020 07:50:33 -0700 (PDT)
Received: from localhost.localdomain (1.red-83-51-162.dynamicip.rima-tde.net.
 [83.51.162.1])
 by smtp.gmail.com with ESMTPSA id r10sm17135019wrm.17.2020.07.04.07.50.31
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sat, 04 Jul 2020 07:50:32 -0700 (PDT)
From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>
To: qemu-devel@nongnu.org,
	BALATON Zoltan <balaton@eik.bme.hu>
Subject: [PATCH 22/26] hw/usb/usb-hcd: Use OHCI type definitions
Date: Sat,  4 Jul 2020 16:49:39 +0200
Message-Id: <20200704144943.18292-23-f4bug@amsat.org>
X-Mailer: git-send-email 2.21.3
In-Reply-To: <20200704144943.18292-1-f4bug@amsat.org>
References: <20200704144943.18292-1-f4bug@amsat.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Peter Maydell <peter.maydell@linaro.org>,
 "Michael S. Tsirkin" <mst@redhat.com>,
 Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>,
 Jiaxun Yang <jiaxun.yang@flygoat.com>, Gerd Hoffmann <kraxel@redhat.com>,
 "Edgar E. Iglesias" <edgar.iglesias@gmail.com>,
 Huacai Chen <chenhc@lemote.com>, Stefano Stabellini <sstabellini@kernel.org>,
 xen-devel@lists.xenproject.org, Yoshinori Sato <ysato@users.sourceforge.jp>,
 Paul Durrant <paul@xen.org>, Magnus Damm <magnus.damm@gmail.com>,
 Markus Armbruster <armbru@redhat.com>,
 =?UTF-8?q?Herv=C3=A9=20Poussineau?= <hpoussin@reactos.org>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 Anthony Perard <anthony.perard@citrix.com>,
 Samuel Thibault <samuel.thibault@ens-lyon.org>,
 Leif Lindholm <leif@nuviainc.com>, Andrzej Zaborowski <balrogg@gmail.com>,
 Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
 Eduardo Habkost <ehabkost@redhat.com>,
 Alistair Francis <alistair@alistair23.me>,
 "Dr. David Alan Gilbert" <dgilbert@redhat.com>,
 Beniamino Galvani <b.galvani@gmail.com>,
 Niek Linnenbank <nieklinnenbank@gmail.com>, qemu-arm@nongnu.org,
 =?UTF-8?q?Marc-Andr=C3=A9=20Lureau?= <marcandre.lureau@redhat.com>,
 Richard Henderson <rth@twiddle.net>,
 Radoslaw Biernacki <radoslaw.biernacki@linaro.org>,
 Igor Mitsyanko <i.mitsyanko@gmail.com>,
 =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>,
 =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>,
 Paul Zimmerman <pauldzim@gmail.com>, qemu-ppc@nongnu.org,
 David Gibson <david@gibson.dropbear.id.au>,
 Paolo Bonzini <pbonzini@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Various machine/board/SoC models create OHCI device instances
via the generic QDEV API and don't need access to USB internals.

Simplify header inclusions by moving the QOM type names into a
simple header, with no need to include other "hw/usb" headers.
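The benefit is that each QOM type name lives in exactly one macro, so call sites can't drift out of sync with the string used at type registration. A small sketch — the macro values mirror the patch, while type_is_ohci() is a hypothetical stand-in for a string-keyed type lookup such as sysbus_create_simple()/pci_create_simple():

```c
#include <assert.h>
#include <string.h>

/* Values as defined in the new "hw/usb/usb-hcd.h". */
#define TYPE_SYSBUS_OHCI "sysbus-ohci"
#define TYPE_PCI_OHCI    "pci-ohci"

/* Hypothetical check mimicking a string-keyed QOM type lookup. */
static int type_is_ohci(const char *typename)
{
    return strcmp(typename, TYPE_SYSBUS_OHCI) == 0 ||
           strcmp(typename, TYPE_PCI_OHCI) == 0;
}
```

With the literal "sysbus-ohci" spelled out in only one place, a typo at any call site becomes a compile error instead of a runtime type-lookup failure.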

Suggested-by: BALATON Zoltan <balaton@eik.bme.hu>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
---
 hw/usb/hcd-ohci.h        |  2 +-
 include/hw/usb/usb-hcd.h | 16 ++++++++++++++++
 hw/arm/allwinner-a10.c   |  2 +-
 hw/arm/allwinner-h3.c    |  9 +++++----
 hw/arm/pxa2xx.c          |  3 ++-
 hw/arm/realview.c        |  3 ++-
 hw/arm/versatilepb.c     |  3 ++-
 hw/display/sm501.c       |  3 ++-
 hw/ppc/mac_newworld.c    |  3 ++-
 hw/ppc/mac_oldworld.c    |  3 ++-
 hw/ppc/sam460ex.c        |  3 ++-
 hw/ppc/spapr.c           |  3 ++-
 hw/usb/hcd-ohci-pci.c    |  2 +-
 13 files changed, 40 insertions(+), 15 deletions(-)
 create mode 100644 include/hw/usb/usb-hcd.h

diff --git a/hw/usb/hcd-ohci.h b/hw/usb/hcd-ohci.h
index 771927ea17..6949cf0dab 100644
--- a/hw/usb/hcd-ohci.h
+++ b/hw/usb/hcd-ohci.h
@@ -21,6 +21,7 @@
 #ifndef HCD_OHCI_H
 #define HCD_OHCI_H
 
+#include "hw/usb/usb-hcd.h"
 #include "sysemu/dma.h"
 #include "usb-internal.h"
 
@@ -91,7 +92,6 @@ typedef struct OHCIState {
     void (*ohci_die)(struct OHCIState *ohci);
 } OHCIState;
 
-#define TYPE_SYSBUS_OHCI "sysbus-ohci"
 #define SYSBUS_OHCI(obj) OBJECT_CHECK(OHCISysBusState, (obj), TYPE_SYSBUS_OHCI)
 
 typedef struct {
diff --git a/include/hw/usb/usb-hcd.h b/include/hw/usb/usb-hcd.h
new file mode 100644
index 0000000000..21fdfaf22d
--- /dev/null
+++ b/include/hw/usb/usb-hcd.h
@@ -0,0 +1,16 @@
+/*
+ * QEMU USB HCD types
+ *
+ * Copyright (c) 2020  Philippe Mathieu-Daudé <f4bug@amsat.org>
+ *
+ * SPDX-License-Identifier: GPL-2.0-or-later
+ */
+
+#ifndef HW_USB_HCD_TYPES_H
+#define HW_USB_HCD_TYPES_H
+
+/* OHCI */
+#define TYPE_SYSBUS_OHCI            "sysbus-ohci"
+#define TYPE_PCI_OHCI               "pci-ohci"
+
+#endif
diff --git a/hw/arm/allwinner-a10.c b/hw/arm/allwinner-a10.c
index 52e0d83760..53c24ff602 100644
--- a/hw/arm/allwinner-a10.c
+++ b/hw/arm/allwinner-a10.c
@@ -25,7 +25,7 @@
 #include "hw/misc/unimp.h"
 #include "sysemu/sysemu.h"
 #include "hw/boards.h"
-#include "hw/usb/hcd-ohci.h"
+#include "hw/usb/usb-hcd.h"
 
 #define AW_A10_MMC0_BASE        0x01c0f000
 #define AW_A10_PIC_REG_BASE     0x01c20400
diff --git a/hw/arm/allwinner-h3.c b/hw/arm/allwinner-h3.c
index 8e09468e86..d1d90ffa79 100644
--- a/hw/arm/allwinner-h3.c
+++ b/hw/arm/allwinner-h3.c
@@ -28,6 +28,7 @@
 #include "hw/sysbus.h"
 #include "hw/char/serial.h"
 #include "hw/misc/unimp.h"
+#include "hw/usb/usb-hcd.h"
 #include "hw/usb/hcd-ehci.h"
 #include "hw/loader.h"
 #include "sysemu/sysemu.h"
@@ -381,16 +382,16 @@ static void allwinner_h3_realize(DeviceState *dev, Error **errp)
                          qdev_get_gpio_in(DEVICE(&s->gic),
                                           AW_H3_GIC_SPI_EHCI3));
 
-    sysbus_create_simple("sysbus-ohci", s->memmap[AW_H3_OHCI0],
+    sysbus_create_simple(TYPE_SYSBUS_OHCI, s->memmap[AW_H3_OHCI0],
                          qdev_get_gpio_in(DEVICE(&s->gic),
                                           AW_H3_GIC_SPI_OHCI0));
-    sysbus_create_simple("sysbus-ohci", s->memmap[AW_H3_OHCI1],
+    sysbus_create_simple(TYPE_SYSBUS_OHCI, s->memmap[AW_H3_OHCI1],
                          qdev_get_gpio_in(DEVICE(&s->gic),
                                           AW_H3_GIC_SPI_OHCI1));
-    sysbus_create_simple("sysbus-ohci", s->memmap[AW_H3_OHCI2],
+    sysbus_create_simple(TYPE_SYSBUS_OHCI, s->memmap[AW_H3_OHCI2],
                          qdev_get_gpio_in(DEVICE(&s->gic),
                                           AW_H3_GIC_SPI_OHCI2));
-    sysbus_create_simple("sysbus-ohci", s->memmap[AW_H3_OHCI3],
+    sysbus_create_simple(TYPE_SYSBUS_OHCI, s->memmap[AW_H3_OHCI3],
                          qdev_get_gpio_in(DEVICE(&s->gic),
                                           AW_H3_GIC_SPI_OHCI3));
 
diff --git a/hw/arm/pxa2xx.c b/hw/arm/pxa2xx.c
index f104a33463..27196170f5 100644
--- a/hw/arm/pxa2xx.c
+++ b/hw/arm/pxa2xx.c
@@ -18,6 +18,7 @@
 #include "hw/arm/pxa.h"
 #include "sysemu/sysemu.h"
 #include "hw/char/serial.h"
+#include "hw/usb/usb-hcd.h"
 #include "hw/i2c/i2c.h"
 #include "hw/irq.h"
 #include "hw/qdev-properties.h"
@@ -2196,7 +2197,7 @@ PXA2xxState *pxa270_init(MemoryRegion *address_space,
         s->ssp[i] = (SSIBus *)qdev_get_child_bus(dev, "ssi");
     }
 
-    sysbus_create_simple("sysbus-ohci", 0x4c000000,
+    sysbus_create_simple(TYPE_SYSBUS_OHCI, 0x4c000000,
                          qdev_get_gpio_in(s->pic, PXA2XX_PIC_USBH1));
 
     s->pcmcia[0] = pxa2xx_pcmcia_init(address_space, 0x20000000);
diff --git a/hw/arm/realview.c b/hw/arm/realview.c
index b6c0a1adb9..0aa34bd4c2 100644
--- a/hw/arm/realview.c
+++ b/hw/arm/realview.c
@@ -16,6 +16,7 @@
 #include "hw/net/lan9118.h"
 #include "hw/net/smc91c111.h"
 #include "hw/pci/pci.h"
+#include "hw/usb/usb-hcd.h"
 #include "net/net.h"
 #include "sysemu/sysemu.h"
 #include "hw/boards.h"
@@ -256,7 +257,7 @@ static void realview_init(MachineState *machine,
         sysbus_connect_irq(busdev, 3, pic[51]);
         pci_bus = (PCIBus *)qdev_get_child_bus(dev, "pci");
         if (machine_usb(machine)) {
-            pci_create_simple(pci_bus, -1, "pci-ohci");
+            pci_create_simple(pci_bus, -1, TYPE_PCI_OHCI);
         }
         n = drive_get_max_bus(IF_SCSI);
         while (n >= 0) {
diff --git a/hw/arm/versatilepb.c b/hw/arm/versatilepb.c
index e596b8170f..3e6224dc96 100644
--- a/hw/arm/versatilepb.c
+++ b/hw/arm/versatilepb.c
@@ -17,6 +17,7 @@
 #include "net/net.h"
 #include "sysemu/sysemu.h"
 #include "hw/pci/pci.h"
+#include "hw/usb/usb-hcd.h"
 #include "hw/i2c/i2c.h"
 #include "hw/i2c/arm_sbcon_i2c.h"
 #include "hw/irq.h"
@@ -273,7 +274,7 @@ static void versatile_init(MachineState *machine, int board_id)
         }
     }
     if (machine_usb(machine)) {
-        pci_create_simple(pci_bus, -1, "pci-ohci");
+        pci_create_simple(pci_bus, -1, TYPE_PCI_OHCI);
     }
     n = drive_get_max_bus(IF_SCSI);
     while (n >= 0) {
diff --git a/hw/display/sm501.c b/hw/display/sm501.c
index 9cccc68c35..5f076c841f 100644
--- a/hw/display/sm501.c
+++ b/hw/display/sm501.c
@@ -33,6 +33,7 @@
 #include "hw/sysbus.h"
 #include "migration/vmstate.h"
 #include "hw/pci/pci.h"
+#include "hw/usb/usb-hcd.h"
 #include "hw/qdev-properties.h"
 #include "hw/i2c/i2c.h"
 #include "hw/display/i2c-ddc.h"
@@ -1961,7 +1962,7 @@ static void sm501_realize_sysbus(DeviceState *dev, Error **errp)
     sysbus_init_mmio(sbd, &s->state.mmio_region);
 
     /* bridge to usb host emulation module */
-    usb_dev = qdev_new("sysbus-ohci");
+    usb_dev = qdev_new(TYPE_SYSBUS_OHCI);
     qdev_prop_set_uint32(usb_dev, "num-ports", 2);
     qdev_prop_set_uint64(usb_dev, "dma-offset", s->base);
     sysbus_realize_and_unref(SYS_BUS_DEVICE(usb_dev), &error_fatal);
diff --git a/hw/ppc/mac_newworld.c b/hw/ppc/mac_newworld.c
index 7bf69f4a1f..3c32c1831b 100644
--- a/hw/ppc/mac_newworld.c
+++ b/hw/ppc/mac_newworld.c
@@ -55,6 +55,7 @@
 #include "hw/input/adb.h"
 #include "hw/ppc/mac_dbdma.h"
 #include "hw/pci/pci.h"
+#include "hw/usb/usb-hcd.h"
 #include "net/net.h"
 #include "sysemu/sysemu.h"
 #include "hw/boards.h"
@@ -411,7 +412,7 @@ static void ppc_core99_init(MachineState *machine)
     }
 
     if (machine->usb) {
-        pci_create_simple(pci_bus, -1, "pci-ohci");
+        pci_create_simple(pci_bus, -1, TYPE_PCI_OHCI);
 
         /* U3 needs to use USB for input because Linux doesn't support via-cuda
         on PPC64 */
diff --git a/hw/ppc/mac_oldworld.c b/hw/ppc/mac_oldworld.c
index f8c204ead7..a429a3e1df 100644
--- a/hw/ppc/mac_oldworld.c
+++ b/hw/ppc/mac_oldworld.c
@@ -37,6 +37,7 @@
 #include "hw/isa/isa.h"
 #include "hw/pci/pci.h"
 #include "hw/pci/pci_host.h"
+#include "hw/usb/usb-hcd.h"
 #include "hw/boards.h"
 #include "hw/nvram/fw_cfg.h"
 #include "hw/char/escc.h"
@@ -301,7 +302,7 @@ static void ppc_heathrow_init(MachineState *machine)
     qdev_realize_and_unref(dev, adb_bus, &error_fatal);
 
     if (machine_usb(machine)) {
-        pci_create_simple(pci_bus, -1, "pci-ohci");
+        pci_create_simple(pci_bus, -1, TYPE_PCI_OHCI);
     }
 
     if (graphic_depth != 15 && graphic_depth != 32 && graphic_depth != 8)
diff --git a/hw/ppc/sam460ex.c b/hw/ppc/sam460ex.c
index 781b45e14b..ac60d17a86 100644
--- a/hw/ppc/sam460ex.c
+++ b/hw/ppc/sam460ex.c
@@ -36,6 +36,7 @@
 #include "hw/i2c/ppc4xx_i2c.h"
 #include "hw/i2c/smbus_eeprom.h"
 #include "hw/usb/usb.h"
+#include "hw/usb/usb-hcd.h"
 #include "hw/usb/hcd-ehci.h"
 #include "hw/ppc/fdt.h"
 #include "hw/qdev-properties.h"
@@ -372,7 +373,7 @@ static void sam460ex_init(MachineState *machine)
 
     /* USB */
     sysbus_create_simple(TYPE_PPC4xx_EHCI, 0x4bffd0400, uic[2][29]);
-    dev = qdev_new("sysbus-ohci");
+    dev = qdev_new(TYPE_SYSBUS_OHCI);
     qdev_prop_set_string(dev, "masterbus", "usb-bus.0");
     qdev_prop_set_uint32(dev, "num-ports", 6);
     sbdev = SYS_BUS_DEVICE(dev);
diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
index 0c0409077f..db1706a66c 100644
--- a/hw/ppc/spapr.c
+++ b/hw/ppc/spapr.c
@@ -71,6 +71,7 @@
 #include "exec/address-spaces.h"
 #include "exec/ram_addr.h"
 #include "hw/usb/usb.h"
+#include "hw/usb/usb-hcd.h"
 #include "qemu/config-file.h"
 #include "qemu/error-report.h"
 #include "trace.h"
@@ -2958,7 +2959,7 @@ static void spapr_machine_init(MachineState *machine)
 
     if (machine->usb) {
         if (smc->use_ohci_by_default) {
-            pci_create_simple(phb->bus, -1, "pci-ohci");
+            pci_create_simple(phb->bus, -1, TYPE_PCI_OHCI);
         } else {
             pci_create_simple(phb->bus, -1, "nec-usb-xhci");
         }
diff --git a/hw/usb/hcd-ohci-pci.c b/hw/usb/hcd-ohci-pci.c
index cb6bc55f59..14df83ec2e 100644
--- a/hw/usb/hcd-ohci-pci.c
+++ b/hw/usb/hcd-ohci-pci.c
@@ -29,8 +29,8 @@
 #include "trace.h"
 #include "hcd-ohci.h"
 #include "usb-internal.h"
+#include "hw/usb/usb-hcd.h"
 
-#define TYPE_PCI_OHCI "pci-ohci"
 #define PCI_OHCI(obj) OBJECT_CHECK(OHCIPCIState, (obj), TYPE_PCI_OHCI)
 
 typedef struct {
-- 
2.21.3



From xen-devel-bounces@lists.xenproject.org Sat Jul 04 14:56:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 04 Jul 2020 14:56:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jrjb0-0005FP-Eo; Sat, 04 Jul 2020 14:56:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hHao=AP=gmail.com=philippe.mathieu.daude@srs-us1.protection.inumbo.net>)
 id 1jrjW9-0003ES-TV
 for xen-devel@lists.xenproject.org; Sat, 04 Jul 2020 14:51:49 +0000
X-Inumbo-ID: b7e5f6b6-be05-11ea-bb8b-bc764e2007e4
Received: from mail-wm1-x343.google.com (unknown [2a00:1450:4864:20::343])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b7e5f6b6-be05-11ea-bb8b-bc764e2007e4;
 Sat, 04 Jul 2020 14:50:36 +0000 (UTC)
Received: by mail-wm1-x343.google.com with SMTP id f18so37074799wml.3
 for <xen-devel@lists.xenproject.org>; Sat, 04 Jul 2020 07:50:36 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=sender:from:to:cc:subject:date:message-id:in-reply-to:references
 :mime-version:content-transfer-encoding;
 bh=f29/MIGKjiVCXNbXb7QXhmc4KA4eor+Mhn/A2jA2iWs=;
 b=aB1S25DGnSIIYyLepd0/DAdWudVgoTXK9tCwzyeK195D7VcOpmizNyVX6hZ+Hw2qEs
 azVFVrdxwL3VuODlB4q4QqZfMjPTxFOKhd3QaNoTlI3BCwrwaIJCdaN/9hDuogPt7S1W
 uMkSVz50tiyCDufVzR9ruwvMAD7Cy57YADj/X/TkNbbAOo6IdzPigDU6+Qt9UnoF1S0U
 2gQlMm3UZYz9RaiNodxR9aWaZAJ7LbmPd+BGW5Mh8hxVh31zMr93bGQmvMf+wmWascsB
 JSWvSnk5sXCB2K4BvDiMtrGexuQVS2ZsqDG1irobpKigzLZqPl++eLsBBaNhhanPYtWu
 xagQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:sender:from:to:cc:subject:date:message-id
 :in-reply-to:references:mime-version:content-transfer-encoding;
 bh=f29/MIGKjiVCXNbXb7QXhmc4KA4eor+Mhn/A2jA2iWs=;
 b=ZrLw5pKI1K2QyUF3y1QYnTWMGaMl3YYSs0uGnYqk7ICHLP1N/S1iClT1m+5RXxDmLE
 0Cdv7T2DUTx6yu5s6yiKUOELT+tVs6uIJGY3SMqAs8X+w8uKMoO3H06FbSvlLULmNvY2
 h7QeQkmeB4YQX0wY92+tNckdCSjmrHDXbtfcPS+ztM4pV1Ez1f9Gsp8e6mIZDVrdd+Zk
 HzU2pJ+KqUvkOGgbJec94wwLajalt/IrnpcjgLPtknWOReOvQ93VMJjlfY3kWIitoI9B
 h4Y2MoAjbjOIs8oBwLhqwSldnG7uMSwonDoxRu2SqeSY60oletBWLXIkoVFGUww2QbTS
 yopQ==
X-Gm-Message-State: AOAM532ap1oG4PoQdGme4cMYWtuN8Fgqpk3L4CSv2tCyRJMyiZ9TGqgi
 HpAUdwIDLfdoGursw4pzuSE=
X-Google-Smtp-Source: ABdhPJzXh5fqsqOLbUyvdsrfc19bAAQd0SyMWAd0LLQtMbMkVi1HDhynYNEmZjRrZ57ugrqA88FVFA==
X-Received: by 2002:a1c:2183:: with SMTP id h125mr43989795wmh.83.1593874235357; 
 Sat, 04 Jul 2020 07:50:35 -0700 (PDT)
Received: from localhost.localdomain (1.red-83-51-162.dynamicip.rima-tde.net.
 [83.51.162.1])
 by smtp.gmail.com with ESMTPSA id r10sm17135019wrm.17.2020.07.04.07.50.33
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sat, 04 Jul 2020 07:50:34 -0700 (PDT)
From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>
To: qemu-devel@nongnu.org,
	BALATON Zoltan <balaton@eik.bme.hu>
Subject: [PATCH 23/26] hw/usb/usb-hcd: Use EHCI type definitions
Date: Sat,  4 Jul 2020 16:49:40 +0200
Message-Id: <20200704144943.18292-24-f4bug@amsat.org>
X-Mailer: git-send-email 2.21.3
In-Reply-To: <20200704144943.18292-1-f4bug@amsat.org>
References: <20200704144943.18292-1-f4bug@amsat.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Peter Maydell <peter.maydell@linaro.org>,
 "Michael S. Tsirkin" <mst@redhat.com>,
 Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>,
 Jiaxun Yang <jiaxun.yang@flygoat.com>, Gerd Hoffmann <kraxel@redhat.com>,
 "Edgar E. Iglesias" <edgar.iglesias@gmail.com>,
 Huacai Chen <chenhc@lemote.com>, Stefano Stabellini <sstabellini@kernel.org>,
 xen-devel@lists.xenproject.org, Yoshinori Sato <ysato@users.sourceforge.jp>,
 Paul Durrant <paul@xen.org>, Magnus Damm <magnus.damm@gmail.com>,
 Markus Armbruster <armbru@redhat.com>,
 =?UTF-8?q?Herv=C3=A9=20Poussineau?= <hpoussin@reactos.org>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 Anthony Perard <anthony.perard@citrix.com>,
 Samuel Thibault <samuel.thibault@ens-lyon.org>,
 Leif Lindholm <leif@nuviainc.com>, Andrzej Zaborowski <balrogg@gmail.com>,
 Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
 Eduardo Habkost <ehabkost@redhat.com>,
 Alistair Francis <alistair@alistair23.me>,
 "Dr. David Alan Gilbert" <dgilbert@redhat.com>,
 Beniamino Galvani <b.galvani@gmail.com>,
 Niek Linnenbank <nieklinnenbank@gmail.com>, qemu-arm@nongnu.org,
 =?UTF-8?q?Marc-Andr=C3=A9=20Lureau?= <marcandre.lureau@redhat.com>,
 Richard Henderson <rth@twiddle.net>,
 Radoslaw Biernacki <radoslaw.biernacki@linaro.org>,
 Igor Mitsyanko <i.mitsyanko@gmail.com>,
 =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>,
 =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>,
 Paul Zimmerman <pauldzim@gmail.com>, qemu-ppc@nongnu.org,
 David Gibson <david@gibson.dropbear.id.au>,
 Paolo Bonzini <pbonzini@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Various machine/board/SoC models create EHCI device instances
with the generic QDEV API, and don't need to access USB internals.

Simplify header inclusions by moving the QOM type names into a
simple header that does not need to include other "hw/usb" headers.

Suggested-by: BALATON Zoltan <balaton@eik.bme.hu>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
---
 hw/usb/hcd-ehci.h         | 11 +----------
 include/hw/usb/chipidea.h |  2 +-
 include/hw/usb/usb-hcd.h  | 11 +++++++++++
 hw/arm/allwinner-h3.c     |  1 -
 hw/arm/exynos4210.c       |  2 +-
 hw/arm/sbsa-ref.c         |  3 ++-
 hw/arm/xilinx_zynq.c      |  2 +-
 hw/ppc/sam460ex.c         |  1 -
 hw/usb/chipidea.c         |  1 +
 hw/usb/hcd-ehci-sysbus.c  |  1 +
 10 files changed, 19 insertions(+), 16 deletions(-)

diff --git a/hw/usb/hcd-ehci.h b/hw/usb/hcd-ehci.h
index 337b3ad05c..da70767409 100644
--- a/hw/usb/hcd-ehci.h
+++ b/hw/usb/hcd-ehci.h
@@ -23,6 +23,7 @@
 #include "hw/pci/pci.h"
 #include "hw/sysbus.h"
 #include "usb-internal.h"
+#include "hw/usb/usb-hcd.h"
 
 #define CAPA_SIZE        0x10
 
@@ -316,7 +317,6 @@ void usb_ehci_realize(EHCIState *s, DeviceState *dev, Error **errp);
 void usb_ehci_unrealize(EHCIState *s, DeviceState *dev);
 void ehci_reset(void *opaque);
 
-#define TYPE_PCI_EHCI "pci-ehci-usb"
 #define PCI_EHCI(obj) OBJECT_CHECK(EHCIPCIState, (obj), TYPE_PCI_EHCI)
 
 typedef struct EHCIPCIState {
@@ -327,15 +327,6 @@ typedef struct EHCIPCIState {
     EHCIState ehci;
 } EHCIPCIState;
 
-
-#define TYPE_SYS_BUS_EHCI "sysbus-ehci-usb"
-#define TYPE_PLATFORM_EHCI "platform-ehci-usb"
-#define TYPE_EXYNOS4210_EHCI "exynos4210-ehci-usb"
-#define TYPE_AW_H3_EHCI "aw-h3-ehci-usb"
-#define TYPE_TEGRA2_EHCI "tegra2-ehci-usb"
-#define TYPE_PPC4xx_EHCI "ppc4xx-ehci-usb"
-#define TYPE_FUSBH200_EHCI "fusbh200-ehci-usb"
-
 #define SYS_BUS_EHCI(obj) \
     OBJECT_CHECK(EHCISysBusState, (obj), TYPE_SYS_BUS_EHCI)
 #define SYS_BUS_EHCI_CLASS(class) \
diff --git a/include/hw/usb/chipidea.h b/include/hw/usb/chipidea.h
index 1ec2e9dbda..28f46291de 100644
--- a/include/hw/usb/chipidea.h
+++ b/include/hw/usb/chipidea.h
@@ -2,6 +2,7 @@
 #define CHIPIDEA_H
 
 #include "hw/usb/hcd-ehci.h"
+#include "hw/usb/usb-hcd.h"
 
 typedef struct ChipideaState {
     /*< private >*/
@@ -10,7 +11,6 @@ typedef struct ChipideaState {
     MemoryRegion iomem[3];
 } ChipideaState;
 
-#define TYPE_CHIPIDEA "usb-chipidea"
 #define CHIPIDEA(obj) OBJECT_CHECK(ChipideaState, (obj), TYPE_CHIPIDEA)
 
 #endif /* CHIPIDEA_H */
diff --git a/include/hw/usb/usb-hcd.h b/include/hw/usb/usb-hcd.h
index 21fdfaf22d..74af3a4533 100644
--- a/include/hw/usb/usb-hcd.h
+++ b/include/hw/usb/usb-hcd.h
@@ -13,4 +13,15 @@
 #define TYPE_SYSBUS_OHCI            "sysbus-ohci"
 #define TYPE_PCI_OHCI               "pci-ohci"
 
+/* EHCI */
+#define TYPE_SYS_BUS_EHCI           "sysbus-ehci-usb"
+#define TYPE_PCI_EHCI               "pci-ehci-usb"
+#define TYPE_PLATFORM_EHCI          "platform-ehci-usb"
+#define TYPE_EXYNOS4210_EHCI        "exynos4210-ehci-usb"
+#define TYPE_AW_H3_EHCI             "aw-h3-ehci-usb"
+#define TYPE_TEGRA2_EHCI            "tegra2-ehci-usb"
+#define TYPE_PPC4xx_EHCI            "ppc4xx-ehci-usb"
+#define TYPE_FUSBH200_EHCI          "fusbh200-ehci-usb"
+#define TYPE_CHIPIDEA               "usb-chipidea"
+
 #endif
diff --git a/hw/arm/allwinner-h3.c b/hw/arm/allwinner-h3.c
index d1d90ffa79..8b7adddc27 100644
--- a/hw/arm/allwinner-h3.c
+++ b/hw/arm/allwinner-h3.c
@@ -29,7 +29,6 @@
 #include "hw/char/serial.h"
 #include "hw/misc/unimp.h"
 #include "hw/usb/usb-hcd.h"
-#include "hw/usb/hcd-ehci.h"
 #include "hw/loader.h"
 #include "sysemu/sysemu.h"
 #include "hw/arm/allwinner-h3.h"
diff --git a/hw/arm/exynos4210.c b/hw/arm/exynos4210.c
index fa639806ec..692fb02159 100644
--- a/hw/arm/exynos4210.c
+++ b/hw/arm/exynos4210.c
@@ -35,7 +35,7 @@
 #include "hw/qdev-properties.h"
 #include "hw/arm/exynos4210.h"
 #include "hw/sd/sdhci.h"
-#include "hw/usb/hcd-ehci.h"
+#include "hw/usb/usb-hcd.h"
 
 #define EXYNOS4210_CHIPID_ADDR         0x10000000
 
diff --git a/hw/arm/sbsa-ref.c b/hw/arm/sbsa-ref.c
index 021e7c1b8b..4e4c338ae9 100644
--- a/hw/arm/sbsa-ref.c
+++ b/hw/arm/sbsa-ref.c
@@ -38,6 +38,7 @@
 #include "hw/loader.h"
 #include "hw/pci-host/gpex.h"
 #include "hw/qdev-properties.h"
+#include "hw/usb/usb-hcd.h"
 #include "hw/char/pl011.h"
 #include "net/net.h"
 
@@ -485,7 +486,7 @@ static void create_ehci(const SBSAMachineState *sms)
     hwaddr base = sbsa_ref_memmap[SBSA_EHCI].base;
     int irq = sbsa_ref_irqmap[SBSA_EHCI];
 
-    sysbus_create_simple("platform-ehci-usb", base,
+    sysbus_create_simple(TYPE_PLATFORM_EHCI, base,
                          qdev_get_gpio_in(sms->gic, irq));
 }
 
diff --git a/hw/arm/xilinx_zynq.c b/hw/arm/xilinx_zynq.c
index ed970273f3..9ccdc03095 100644
--- a/hw/arm/xilinx_zynq.c
+++ b/hw/arm/xilinx_zynq.c
@@ -29,7 +29,7 @@
 #include "hw/loader.h"
 #include "hw/misc/zynq-xadc.h"
 #include "hw/ssi/ssi.h"
-#include "hw/usb/chipidea.h"
+#include "hw/usb/usb-hcd.h"
 #include "qemu/error-report.h"
 #include "hw/sd/sdhci.h"
 #include "hw/char/cadence_uart.h"
diff --git a/hw/ppc/sam460ex.c b/hw/ppc/sam460ex.c
index ac60d17a86..3f7cf0d1ae 100644
--- a/hw/ppc/sam460ex.c
+++ b/hw/ppc/sam460ex.c
@@ -37,7 +37,6 @@
 #include "hw/i2c/smbus_eeprom.h"
 #include "hw/usb/usb.h"
 #include "hw/usb/usb-hcd.h"
-#include "hw/usb/hcd-ehci.h"
 #include "hw/ppc/fdt.h"
 #include "hw/qdev-properties.h"
 #include "hw/pci/pci.h"
diff --git a/hw/usb/chipidea.c b/hw/usb/chipidea.c
index 3dcd22ccba..e81f63295e 100644
--- a/hw/usb/chipidea.c
+++ b/hw/usb/chipidea.c
@@ -11,6 +11,7 @@
 
 #include "qemu/osdep.h"
 #include "hw/usb/hcd-ehci.h"
+#include "hw/usb/usb-hcd.h"
 #include "hw/usb/chipidea.h"
 #include "qemu/log.h"
 #include "qemu/module.h"
diff --git a/hw/usb/hcd-ehci-sysbus.c b/hw/usb/hcd-ehci-sysbus.c
index 3730736540..b7debc1934 100644
--- a/hw/usb/hcd-ehci-sysbus.c
+++ b/hw/usb/hcd-ehci-sysbus.c
@@ -18,6 +18,7 @@
 #include "qemu/osdep.h"
 #include "hw/qdev-properties.h"
 #include "hw/usb/hcd-ehci.h"
+#include "hw/usb/usb-hcd.h"
 #include "migration/vmstate.h"
 #include "qemu/module.h"
 
-- 
2.21.3



From xen-devel-bounces@lists.xenproject.org Sat Jul 04 14:56:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 04 Jul 2020 14:56:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jrjb0-0005Fr-PG; Sat, 04 Jul 2020 14:56:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hHao=AP=gmail.com=philippe.mathieu.daude@srs-us1.protection.inumbo.net>)
 id 1jrjVk-0003ES-Sf
 for xen-devel@lists.xenproject.org; Sat, 04 Jul 2020 14:51:24 +0000
X-Inumbo-ID: b2790e2a-be05-11ea-b7bb-bc764e2007e4
Received: from mail-wm1-x341.google.com (unknown [2a00:1450:4864:20::341])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b2790e2a-be05-11ea-b7bb-bc764e2007e4;
 Sat, 04 Jul 2020 14:50:27 +0000 (UTC)
Received: by mail-wm1-x341.google.com with SMTP id g10so12677787wmc.1
 for <xen-devel@lists.xenproject.org>; Sat, 04 Jul 2020 07:50:27 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=sender:from:to:cc:subject:date:message-id:in-reply-to:references
 :mime-version:content-transfer-encoding;
 bh=lYK2LCkkm8mXjPosH/dQ8dVGr7IbLAxEMo5hKyX8gDM=;
 b=aXPSXSTXJgMKootuaitVYmZBSggQZgK5/msZtoZcg7I8jA8LHFIiATkIx5WCEZldCp
 SeaQ7H/JOLUc20KVhGcA7f0xl4oHCgEjVr98O8j746wh6K6hdXVhWWXSpum0kkLIE1Ll
 o0sWptvhIkbblNoImRzF3l09dCL3iVxJ0WW2ZeILMYy+tYCo1N5He53EBz1ZPY1YrT9P
 UVjDzzudJa9wnQi7oNvmU0XMFcjt7H4ZMvL4fKtFw3JleIBzE+YVCgrp2ICVwl2encpY
 Qq0n9vh6n8s2ZQnJVc4oUaSLVKsimpACr35vRSlpJj6uci1zi0CBBOOmhbGsI86MbzSm
 uOAg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:sender:from:to:cc:subject:date:message-id
 :in-reply-to:references:mime-version:content-transfer-encoding;
 bh=lYK2LCkkm8mXjPosH/dQ8dVGr7IbLAxEMo5hKyX8gDM=;
 b=l+rlezrdqhdtyXXt3qbbltkAj6M++atsi2n0XhuPBckOQ+peFFUo/TPhzoNvpWvy/e
 lqyLjwEHDV5XQScoH8eQsN0zkeEIvb9G9UxtMoWMG4TnPFTmWTABTbiloQdVW+NKyDZV
 2k/VPhAfj5MsX6Z+CvaoBUUT3/rb3K1cFWul/nmHnNrpHpQlvQZrv+nR9B/rT06uXcID
 WabBFUoVm7kte+GpGrDM3tieuhFhXtpfwTnYvjinSvoBGY9RcxlK56WgdY5zcEJlGr57
 d4SzA6WwXwr2Cz13nxsOEI9O2LI5PTwyIVQCWBAafhcay74Mw/UM2TS2TGzkUFljZKwF
 rBSw==
X-Gm-Message-State: AOAM533WKTiPFLIOw3kFH+MUu83ZPRUrludQAUziDzFT2locpCU1qHly
 4h6gVM0BRY4xix/gpC2b1s8=
X-Google-Smtp-Source: ABdhPJyYWdPo4hPIeG0V9hnWAf/4U3I+ZmZnVun1avHp1mFNCeVPmAKMwxtGlSjuHYbjFpIPmXDhOA==
X-Received: by 2002:a1c:f301:: with SMTP id q1mr41171773wmq.110.1593874226397; 
 Sat, 04 Jul 2020 07:50:26 -0700 (PDT)
Received: from localhost.localdomain (1.red-83-51-162.dynamicip.rima-tde.net.
 [83.51.162.1])
 by smtp.gmail.com with ESMTPSA id r10sm17135019wrm.17.2020.07.04.07.50.24
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sat, 04 Jul 2020 07:50:25 -0700 (PDT)
From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>
To: qemu-devel@nongnu.org,
	BALATON Zoltan <balaton@eik.bme.hu>
Subject: [PATCH 19/26] hw/ppc/spapr: Use usb_get_port_path()
Date: Sat,  4 Jul 2020 16:49:36 +0200
Message-Id: <20200704144943.18292-20-f4bug@amsat.org>
X-Mailer: git-send-email 2.21.3
In-Reply-To: <20200704144943.18292-1-f4bug@amsat.org>
References: <20200704144943.18292-1-f4bug@amsat.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Peter Maydell <peter.maydell@linaro.org>,
 "Michael S. Tsirkin" <mst@redhat.com>,
 Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>,
 Jiaxun Yang <jiaxun.yang@flygoat.com>, Gerd Hoffmann <kraxel@redhat.com>,
 "Edgar E. Iglesias" <edgar.iglesias@gmail.com>,
 Huacai Chen <chenhc@lemote.com>, Stefano Stabellini <sstabellini@kernel.org>,
 xen-devel@lists.xenproject.org, Yoshinori Sato <ysato@users.sourceforge.jp>,
 Paul Durrant <paul@xen.org>, Magnus Damm <magnus.damm@gmail.com>,
 Markus Armbruster <armbru@redhat.com>,
 =?UTF-8?q?Herv=C3=A9=20Poussineau?= <hpoussin@reactos.org>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 Anthony Perard <anthony.perard@citrix.com>,
 Samuel Thibault <samuel.thibault@ens-lyon.org>,
 Leif Lindholm <leif@nuviainc.com>, Andrzej Zaborowski <balrogg@gmail.com>,
 Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
 Eduardo Habkost <ehabkost@redhat.com>,
 Alistair Francis <alistair@alistair23.me>,
 "Dr. David Alan Gilbert" <dgilbert@redhat.com>,
 Beniamino Galvani <b.galvani@gmail.com>,
 Niek Linnenbank <nieklinnenbank@gmail.com>, qemu-arm@nongnu.org,
 =?UTF-8?q?Marc-Andr=C3=A9=20Lureau?= <marcandre.lureau@redhat.com>,
 Richard Henderson <rth@twiddle.net>,
 Radoslaw Biernacki <radoslaw.biernacki@linaro.org>,
 Igor Mitsyanko <i.mitsyanko@gmail.com>,
 =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>,
 =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>,
 Paul Zimmerman <pauldzim@gmail.com>, qemu-ppc@nongnu.org,
 David Gibson <david@gibson.dropbear.id.au>,
 Paolo Bonzini <pbonzini@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Avoid accessing the USBDevice internals; use the recently
added usb_get_port_path() helper instead.

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
---
 hw/ppc/spapr.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
index f6f034d039..221d3e7a8c 100644
--- a/hw/ppc/spapr.c
+++ b/hw/ppc/spapr.c
@@ -3121,7 +3121,8 @@ static char *spapr_get_fw_dev_path(FWPathProvider *p, BusState *bus,
              * We use SRP luns of the form 01000000 | (usb-port << 16) | lun
              * in the top 32 bits of the 64-bit LUN
              */
-            unsigned usb_port = atoi(usb->port->path);
+            g_autofree char *usb_port_path = usb_get_port_path(usb);
+            unsigned usb_port = atoi(usb_port_path);
             unsigned id = 0x1000000 | (usb_port << 16) | d->lun;
             return g_strdup_printf("%s@%"PRIX64, qdev_fw_name(dev),
                                    (uint64_t)id << 32);
@@ -3137,7 +3138,8 @@ static char *spapr_get_fw_dev_path(FWPathProvider *p, BusState *bus,
     if (strcmp("usb-host", qdev_fw_name(dev)) == 0) {
         USBDevice *usbdev = CAST(USBDevice, dev, TYPE_USB_DEVICE);
         if (usb_host_dev_is_scsi_storage(usbdev)) {
-            return g_strdup_printf("storage@%s/disk", usbdev->port->path);
+            g_autofree char *usb_port_path = usb_get_port_path(usbdev);
+            return g_strdup_printf("storage@%s/disk", usb_port_path);
         }
     }
 
-- 
2.21.3



From xen-devel-bounces@lists.xenproject.org Sat Jul 04 14:56:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 04 Jul 2020 14:56:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jrjb1-0005Ga-29; Sat, 04 Jul 2020 14:56:51 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hHao=AP=gmail.com=philippe.mathieu.daude@srs-us1.protection.inumbo.net>)
 id 1jrjVL-0003ES-RK
 for xen-devel@lists.xenproject.org; Sat, 04 Jul 2020 14:50:59 +0000
X-Inumbo-ID: ac624880-be05-11ea-b7bb-bc764e2007e4
Received: from mail-wm1-x343.google.com (unknown [2a00:1450:4864:20::343])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ac624880-be05-11ea-b7bb-bc764e2007e4;
 Sat, 04 Jul 2020 14:50:16 +0000 (UTC)
Received: by mail-wm1-x343.google.com with SMTP id a6so24089448wmm.0
 for <xen-devel@lists.xenproject.org>; Sat, 04 Jul 2020 07:50:16 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=sender:from:to:cc:subject:date:message-id:in-reply-to:references
 :mime-version:content-transfer-encoding;
 bh=xjjAqG2L8PiyECLv8vWl/nPN8wdvqzFPKnLDYj57EWU=;
 b=kGjxAbMk0Y+V1ZM0jG8F7zrUJeEpbEY2rUuWHyvQnUy2BC4TEa4D9G84sW8qU3Tnyi
 hXyLAakl5TAVaESh9mwcqIkW9kPRuFhF1CPBo0iqa9EzqOeUO8dcJwyom69eu1X7is1K
 GI/h4RCMamXhRltfPP+anRNrC4T3Xl/b+bRd7v5Nxqs6IBSrpiZ+NfMzkc3fyOz+zG/O
 qhZVj0cz4AUqg4attnXMbE/OWUZmlqtkxjHzNmmPiDdGXYHh9wyMp7La7dx/LG+WG1y0
 7hKot5Tx+EVBqxTjnBKGlFKxDUcrNR10Wub/RcuZ75CfRChXgIvTj2rJ1blM2eTTB1qV
 uPjQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:sender:from:to:cc:subject:date:message-id
 :in-reply-to:references:mime-version:content-transfer-encoding;
 bh=xjjAqG2L8PiyECLv8vWl/nPN8wdvqzFPKnLDYj57EWU=;
 b=BlEDOZDvk569CRF8HkryQBb3ozQUYjdhSCo6rKJ6kMBMm7L1/UNLQ3zgLsN8yRZ3T/
 kL1m6TzdYWrT82sJ/hMsknvX/xKfGwzUC+Q3eQRZNFNilV+0ROaTSX4aFbXJbaHllV1l
 R+qUPyIcoCr3Y7bKIdbkogVCZ8I7S+9ly1cg6R4ruYH0nobsJP9Pf0EZcp+/kgLdoMTV
 F8Rh5R2EIw9i+ygLyR0Q5bluiapZYBr1ldXfAPKVTmvQn8BTg7Jj0rUnv8i6gCRsrK/7
 xzox/EFaxJ7FmySljFiPfE2VZBrJCNc4Qc0xeMNl/rAJJQSSuYcdZfEkxzoI2lrv38Z7
 BNoA==
X-Gm-Message-State: AOAM530kn5302bEYPm21F+PWkMpv261nQ5npHk7/b61Qtg3qpR0zv4ei
 dX20Gxsgiu+LycCAVXrQk78=
X-Google-Smtp-Source: ABdhPJyylaQ90oaFVpyNttGYGyUpJwJRvbiuNFcOgPIhMsiwGOGXapRLVUD2ADJD/bSRm2sJT5Aq2g==
X-Received: by 2002:a1c:6788:: with SMTP id
 b130mr42706142wmc.100.1593874216169; 
 Sat, 04 Jul 2020 07:50:16 -0700 (PDT)
Received: from localhost.localdomain (1.red-83-51-162.dynamicip.rima-tde.net.
 [83.51.162.1])
 by smtp.gmail.com with ESMTPSA id r10sm17135019wrm.17.2020.07.04.07.50.14
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sat, 04 Jul 2020 07:50:15 -0700 (PDT)
From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>
To: qemu-devel@nongnu.org,
	BALATON Zoltan <balaton@eik.bme.hu>
Subject: [PATCH 14/26] hw/usb/quirks: Rename included source with '.inc.c'
 suffix
Date: Sat,  4 Jul 2020 16:49:31 +0200
Message-Id: <20200704144943.18292-15-f4bug@amsat.org>
X-Mailer: git-send-email 2.21.3
In-Reply-To: <20200704144943.18292-1-f4bug@amsat.org>
References: <20200704144943.18292-1-f4bug@amsat.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Peter Maydell <peter.maydell@linaro.org>,
 "Michael S. Tsirkin" <mst@redhat.com>,
 Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>,
 Jiaxun Yang <jiaxun.yang@flygoat.com>, Gerd Hoffmann <kraxel@redhat.com>,
 "Edgar E. Iglesias" <edgar.iglesias@gmail.com>,
 Huacai Chen <chenhc@lemote.com>, Stefano Stabellini <sstabellini@kernel.org>,
 xen-devel@lists.xenproject.org, Yoshinori Sato <ysato@users.sourceforge.jp>,
 Paul Durrant <paul@xen.org>, Magnus Damm <magnus.damm@gmail.com>,
 Markus Armbruster <armbru@redhat.com>,
 =?UTF-8?q?Herv=C3=A9=20Poussineau?= <hpoussin@reactos.org>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 Anthony Perard <anthony.perard@citrix.com>,
 Samuel Thibault <samuel.thibault@ens-lyon.org>,
 Leif Lindholm <leif@nuviainc.com>, Andrzej Zaborowski <balrogg@gmail.com>,
 Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
 Eduardo Habkost <ehabkost@redhat.com>,
 Alistair Francis <alistair@alistair23.me>,
 "Dr. David Alan Gilbert" <dgilbert@redhat.com>,
 Beniamino Galvani <b.galvani@gmail.com>,
 Niek Linnenbank <nieklinnenbank@gmail.com>, qemu-arm@nongnu.org,
 =?UTF-8?q?Marc-Andr=C3=A9=20Lureau?= <marcandre.lureau@redhat.com>,
 Richard Henderson <rth@twiddle.net>,
 Radoslaw Biernacki <radoslaw.biernacki@linaro.org>,
 Igor Mitsyanko <i.mitsyanko@gmail.com>,
 =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>,
 =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>,
 Paul Zimmerman <pauldzim@gmail.com>, qemu-ppc@nongnu.org,
 David Gibson <david@gibson.dropbear.id.au>,
 Paolo Bonzini <pbonzini@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

This file is not a header, but contains source code which is
included and compiled once. We use the '.inc.c' suffix in a few
other cases in the repository. Follow the same convention with
this file.

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
---
 hw/usb/quirks.c                   | 2 +-
 hw/usb/{quirks.h => quirks.inc.c} | 5 -----
 2 files changed, 1 insertion(+), 6 deletions(-)
 rename hw/usb/{quirks.h => quirks.inc.c} (99%)

diff --git a/hw/usb/quirks.c b/hw/usb/quirks.c
index 23ea7a23ea..655b36f2d5 100644
--- a/hw/usb/quirks.c
+++ b/hw/usb/quirks.c
@@ -13,7 +13,7 @@
  */
 
 #include "qemu/osdep.h"
-#include "quirks.h"
+#include "quirks.inc.c"
 #include "hw/usb.h"
 
 static bool usb_id_match(const struct usb_device_id *ids,
diff --git a/hw/usb/quirks.h b/hw/usb/quirks.inc.c
similarity index 99%
rename from hw/usb/quirks.h
rename to hw/usb/quirks.inc.c
index 50ef2f9c2e..004b228aba 100644
--- a/hw/usb/quirks.h
+++ b/hw/usb/quirks.inc.c
@@ -12,9 +12,6 @@
  * (at your option) any later version.
  */
 
-#ifndef HW_USB_QUIRKS_H
-#define HW_USB_QUIRKS_H
-
 /* 1 on 1 copy of linux/drivers/usb/serial/ftdi_sio_ids.h */
 #include "quirks-ftdi-ids.h"
 /* 1 on 1 copy of linux/drivers/usb/serial/pl2303.h */
@@ -915,5 +912,3 @@ static const struct usb_device_id usbredir_ftdi_serial_ids[] = {
 
 #undef USB_DEVICE
 #undef USB_DEVICE_AND_INTERFACE_INFO
-
-#endif
-- 
2.21.3



From xen-devel-bounces@lists.xenproject.org Sat Jul 04 14:56:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 04 Jul 2020 14:56:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jrjb1-0005HU-Ij; Sat, 04 Jul 2020 14:56:51 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hHao=AP=gmail.com=philippe.mathieu.daude@srs-us1.protection.inumbo.net>)
 id 1jrjV1-0003ES-QB
 for xen-devel@lists.xenproject.org; Sat, 04 Jul 2020 14:50:39 +0000
X-Inumbo-ID: a78974d2-be05-11ea-b7bb-bc764e2007e4
Received: from mail-wm1-x330.google.com (unknown [2a00:1450:4864:20::330])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a78974d2-be05-11ea-b7bb-bc764e2007e4;
 Sat, 04 Jul 2020 14:50:08 +0000 (UTC)
Received: by mail-wm1-x330.google.com with SMTP id j18so34715268wmi.3
 for <xen-devel@lists.xenproject.org>; Sat, 04 Jul 2020 07:50:08 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=sender:from:to:cc:subject:date:message-id:in-reply-to:references
 :mime-version:content-transfer-encoding;
 bh=CVXjF5m55kSlusAor5YG6lDc3XbyrrRebQzgpXEfhAM=;
 b=AP5+c+c62PoQXfTA/LtPm4Xh/2BhHY+f3Lrq+ywLKxnRs/teD8SlK4rBeR1W21QVVg
 MX5/zdYcr6MAV0tTj6y2JwWtreM0kI/3HLVXeM5Km0zk4LkxkDr+JUBRTgKPd4JDTNTQ
 qN+mtEml6SPGpVvDH0gvcL2DaAmIUkHFVWrvjkbOj7nMpqfWvSbHnLG867tnERkfPaEt
 R2NVOPlavAmVmA4nXJZ+9rKOMoryjsy3gDMvjOfRq/6T+kmZOAFOqpsdRHndZbJgqOt0
 X2QNL+czgbdip7AR/uSYj4AY75akrcM9UFux62yHvkIZB1h62OgvTkd3vwzX7RVlkGWT
 qEKg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:sender:from:to:cc:subject:date:message-id
 :in-reply-to:references:mime-version:content-transfer-encoding;
 bh=CVXjF5m55kSlusAor5YG6lDc3XbyrrRebQzgpXEfhAM=;
 b=lhRpyZUIAEbEPAHb/Nraug1SZ3LRVUHjXHNvI723mGVchXoIGW8irweJuOft/pw8TD
 DKKKCFturVMwwEbHO3DgN9ShNGtSIj/Vzb21wMUDzWEk1EfvDs/KW4JjAbUd8o/dv0F3
 fqZ2vITH6vBldGkiWW5BDLs/nXuwLZpJmbzPk8P9Q6eN7S/QUVt/fxB8uBa/Lskiba+0
 YJ6t2SNrpIgPSOgmRQ/gkcHmwamazUyWsUte5cART1KbcGgt2NClCwt/E5iLjKRL9AuQ
 QlWyAOE/TnmBUqI9cM72lf8XYJsWf67sMhO51lLrvy34OcsUeJrWa/thu2xPbFj4YEgx
 I6vQ==
X-Gm-Message-State: AOAM533jGOYSyEKbNpsL9nB3hEIOUZP7hQ5Bh8cq5kBCZPenxoB8e03F
 ysb5Dnqwmvap+rmDLyUJ8y8=
X-Google-Smtp-Source: ABdhPJymro0ZDYhdB5ObnYkjPPmmHwp/a0dtBWYIDo1/eOyMgDUzyUYBTppKmDKb9G4hobFLnzrm0w==
X-Received: by 2002:a1c:18e:: with SMTP id 136mr10977102wmb.93.1593874208054; 
 Sat, 04 Jul 2020 07:50:08 -0700 (PDT)
Received: from localhost.localdomain (1.red-83-51-162.dynamicip.rima-tde.net.
 [83.51.162.1])
 by smtp.gmail.com with ESMTPSA id r10sm17135019wrm.17.2020.07.04.07.50.06
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sat, 04 Jul 2020 07:50:07 -0700 (PDT)
From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>
To: qemu-devel@nongnu.org,
	BALATON Zoltan <balaton@eik.bme.hu>
Subject: [PATCH 10/26] hw/usb/hcd-ehci: Move a few definitions from header to
 source
Date: Sat,  4 Jul 2020 16:49:27 +0200
Message-Id: <20200704144943.18292-11-f4bug@amsat.org>
X-Mailer: git-send-email 2.21.3
In-Reply-To: <20200704144943.18292-1-f4bug@amsat.org>
References: <20200704144943.18292-1-f4bug@amsat.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Peter Maydell <peter.maydell@linaro.org>,
 "Michael S. Tsirkin" <mst@redhat.com>,
 Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>,
 Jiaxun Yang <jiaxun.yang@flygoat.com>, Gerd Hoffmann <kraxel@redhat.com>,
 "Edgar E. Iglesias" <edgar.iglesias@gmail.com>,
 Huacai Chen <chenhc@lemote.com>, Stefano Stabellini <sstabellini@kernel.org>,
 xen-devel@lists.xenproject.org, Yoshinori Sato <ysato@users.sourceforge.jp>,
 Paul Durrant <paul@xen.org>, Magnus Damm <magnus.damm@gmail.com>,
 Markus Armbruster <armbru@redhat.com>,
 =?UTF-8?q?Herv=C3=A9=20Poussineau?= <hpoussin@reactos.org>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 Anthony Perard <anthony.perard@citrix.com>,
 Samuel Thibault <samuel.thibault@ens-lyon.org>,
 Leif Lindholm <leif@nuviainc.com>, Andrzej Zaborowski <balrogg@gmail.com>,
 Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
 Eduardo Habkost <ehabkost@redhat.com>,
 Alistair Francis <alistair@alistair23.me>,
 "Dr. David Alan Gilbert" <dgilbert@redhat.com>,
 Beniamino Galvani <b.galvani@gmail.com>,
 Niek Linnenbank <nieklinnenbank@gmail.com>, qemu-arm@nongnu.org,
 =?UTF-8?q?Marc-Andr=C3=A9=20Lureau?= <marcandre.lureau@redhat.com>,
 Richard Henderson <rth@twiddle.net>,
 Radoslaw Biernacki <radoslaw.biernacki@linaro.org>,
 Igor Mitsyanko <i.mitsyanko@gmail.com>,
 =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>,
 =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>,
 Paul Zimmerman <pauldzim@gmail.com>, qemu-ppc@nongnu.org,
 David Gibson <david@gibson.dropbear.id.au>,
 Paolo Bonzini <pbonzini@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Move definitions only useful for hcd-ehci.c to this source file.

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
---
 hw/usb/hcd-ehci.h | 11 -----------
 hw/usb/hcd-ehci.c | 12 ++++++++++++
 2 files changed, 12 insertions(+), 11 deletions(-)

diff --git a/hw/usb/hcd-ehci.h b/hw/usb/hcd-ehci.h
index 57b38cfc05..4577f5e31d 100644
--- a/hw/usb/hcd-ehci.h
+++ b/hw/usb/hcd-ehci.h
@@ -24,17 +24,6 @@
 #include "hw/pci/pci.h"
 #include "hw/sysbus.h"
 
-#ifndef EHCI_DEBUG
-#define EHCI_DEBUG   0
-#endif
-
-#if EHCI_DEBUG
-#define DPRINTF printf
-#else
-#define DPRINTF(...)
-#endif
-
-#define MMIO_SIZE        0x1000
 #define CAPA_SIZE        0x10
 
 #define NB_PORTS         6        /* Max. Number of downstream ports */
diff --git a/hw/usb/hcd-ehci.c b/hw/usb/hcd-ehci.c
index 256fb91e0c..a0beee527c 100644
--- a/hw/usb/hcd-ehci.c
+++ b/hw/usb/hcd-ehci.c
@@ -36,6 +36,18 @@
 #include "qemu/error-report.h"
 #include "sysemu/runstate.h"
 
+#ifndef EHCI_DEBUG
+#define EHCI_DEBUG   0
+#endif
+
+#if EHCI_DEBUG
+#define DPRINTF printf
+#else
+#define DPRINTF(...)
+#endif
+
+#define MMIO_SIZE        0x1000
+
 #define FRAME_TIMER_FREQ 1000
 #define FRAME_TIMER_NS   (NANOSECONDS_PER_SECOND / FRAME_TIMER_FREQ)
 #define UFRAME_TIMER_NS  (FRAME_TIMER_NS / 8)
-- 
2.21.3



From xen-devel-bounces@lists.xenproject.org Sat Jul 04 14:56:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 04 Jul 2020 14:56:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jrjb1-0005IZ-Tr; Sat, 04 Jul 2020 14:56:51 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hHao=AP=gmail.com=philippe.mathieu.daude@srs-us1.protection.inumbo.net>)
 id 1jrjWJ-0003ES-Tc
 for xen-devel@lists.xenproject.org; Sat, 04 Jul 2020 14:51:59 +0000
X-Inumbo-ID: ba56aea4-be05-11ea-bca7-bc764e2007e4
Received: from mail-wm1-x341.google.com (unknown [2a00:1450:4864:20::341])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ba56aea4-be05-11ea-bca7-bc764e2007e4;
 Sat, 04 Jul 2020 14:50:40 +0000 (UTC)
Received: by mail-wm1-x341.google.com with SMTP id o2so37045535wmh.2
 for <xen-devel@lists.xenproject.org>; Sat, 04 Jul 2020 07:50:40 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=sender:from:to:cc:subject:date:message-id:in-reply-to:references
 :mime-version:content-transfer-encoding;
 bh=uFl2X6lG6vil8sEXKsbieCQlkH7+teoSNXY57sY2N5o=;
 b=pEPq0tBtl3XW/xKrGA5imilI9ZmVHCddD6JX7pQf5W26Rrv3j0GgA9nuE3yRIkzKmh
 xHXVt9A0O2Sb6wPcOQhczgtzphR+PeHVZ4Fdijl43Nrxg+v0v2Hz13FZ7aE8sOrKMSba
 lfG8A6MMb+bM+xNdOi2/2GPMz+xyBlp2BCeJ2lrXsutgPmG7wdmm6i+UpuNi8+BMpdVg
 eiQN9k6197cbsfvPjRB78ybOhM3ezQzyqGNS+gSkzZo/tgdWfPLh9B5gQMHqP+XuG+Vf
 +5B1ATevPKzMRin82a+Fpe8eN19U96Tx56oDwDoz3bikVgrPUqsQH6eHcCsfnf4o51sH
 T5uw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:sender:from:to:cc:subject:date:message-id
 :in-reply-to:references:mime-version:content-transfer-encoding;
 bh=uFl2X6lG6vil8sEXKsbieCQlkH7+teoSNXY57sY2N5o=;
 b=N0xpKeuxnVk3Dtu9VqSxa1h5OCWxxjzF5sYKtTUwOnVU8lm3EHaMIKTe5POQ818WUU
 xaUdrvj9WncibSVLwWfWtsUMBlRYPCZfiEnvDvrxXSsfhIcu1CpN3n75qOi7L3e21wCb
 Kjelh37NH9v9uOTOO4IsyVxqlD88SgU9mwZ//bZwaGJqlSt/xCsv+wWpsvARapY7eBvx
 pglUWPkKdmBSnOBsgUScxzMQS4DUlkwfCJvK5GGvOrNa9RxzfBtdBP3zLcu2f2SzFkRl
 Hi6I4W0cb9KN6PLHvnKcEu3fj7r/VcjVXwLvd1cV/cM7aP4hhFTNCnzf9EyOVwAKFe0s
 Rj1Q==
X-Gm-Message-State: AOAM5310KX1BMWMrmz3NQIaajp8PBB7TOjQpA2UbfKl1i+SmcG1/uZp+
 G8pbaips41w97JesXGetIYE=
X-Google-Smtp-Source: ABdhPJzM4tGe90D+YYy5lfrAPXqvO73Nu2kMmuLd9RH15Fay5P8lwxNw5t/4Uu8mV3OZMnQRXn0t1A==
X-Received: by 2002:a7b:c44d:: with SMTP id l13mr43119938wmi.66.1593874239587; 
 Sat, 04 Jul 2020 07:50:39 -0700 (PDT)
Received: from localhost.localdomain (1.red-83-51-162.dynamicip.rima-tde.net.
 [83.51.162.1])
 by smtp.gmail.com with ESMTPSA id r10sm17135019wrm.17.2020.07.04.07.50.37
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sat, 04 Jul 2020 07:50:39 -0700 (PDT)
From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>
To: qemu-devel@nongnu.org,
	BALATON Zoltan <balaton@eik.bme.hu>
Subject: [PATCH 25/26] hw/usb/usb-hcd: Use XHCI type definitions
Date: Sat,  4 Jul 2020 16:49:42 +0200
Message-Id: <20200704144943.18292-26-f4bug@amsat.org>
X-Mailer: git-send-email 2.21.3
In-Reply-To: <20200704144943.18292-1-f4bug@amsat.org>
References: <20200704144943.18292-1-f4bug@amsat.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Peter Maydell <peter.maydell@linaro.org>,
 "Michael S. Tsirkin" <mst@redhat.com>,
 Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>,
 Jiaxun Yang <jiaxun.yang@flygoat.com>, Gerd Hoffmann <kraxel@redhat.com>,
 "Edgar E. Iglesias" <edgar.iglesias@gmail.com>,
 Huacai Chen <chenhc@lemote.com>, Stefano Stabellini <sstabellini@kernel.org>,
 xen-devel@lists.xenproject.org, Yoshinori Sato <ysato@users.sourceforge.jp>,
 Paul Durrant <paul@xen.org>, Magnus Damm <magnus.damm@gmail.com>,
 Markus Armbruster <armbru@redhat.com>,
 =?UTF-8?q?Herv=C3=A9=20Poussineau?= <hpoussin@reactos.org>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 Anthony Perard <anthony.perard@citrix.com>,
 Samuel Thibault <samuel.thibault@ens-lyon.org>,
 Leif Lindholm <leif@nuviainc.com>, Andrzej Zaborowski <balrogg@gmail.com>,
 Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
 Eduardo Habkost <ehabkost@redhat.com>,
 Alistair Francis <alistair@alistair23.me>,
 "Dr. David Alan Gilbert" <dgilbert@redhat.com>,
 Beniamino Galvani <b.galvani@gmail.com>,
 Niek Linnenbank <nieklinnenbank@gmail.com>, qemu-arm@nongnu.org,
 =?UTF-8?q?Marc-Andr=C3=A9=20Lureau?= <marcandre.lureau@redhat.com>,
 Richard Henderson <rth@twiddle.net>,
 Radoslaw Biernacki <radoslaw.biernacki@linaro.org>,
 Igor Mitsyanko <i.mitsyanko@gmail.com>,
 =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>,
 =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>,
 Paul Zimmerman <pauldzim@gmail.com>, qemu-ppc@nongnu.org,
 David Gibson <david@gibson.dropbear.id.au>,
 Paolo Bonzini <pbonzini@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Various machine/board/soc models create XHCI device instances
with the generic QDEV API, and don't need to access USB internals.

Simplify header inclusions by moving the QOM type names into a
simple header, with no need to include other "hw/usb" headers.

Suggested-by: BALATON Zoltan <balaton@eik.bme.hu>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
---
 hw/usb/hcd-xhci.h        | 2 +-
 include/hw/usb/usb-hcd.h | 3 +++
 hw/ppc/spapr.c           | 2 +-
 3 files changed, 5 insertions(+), 2 deletions(-)

diff --git a/hw/usb/hcd-xhci.h b/hw/usb/hcd-xhci.h
index f9a3aaceec..b6c54e38a6 100644
--- a/hw/usb/hcd-xhci.h
+++ b/hw/usb/hcd-xhci.h
@@ -23,9 +23,9 @@
 #define HW_USB_HCD_XHCI_H
 
 #include "usb-internal.h"
+#include "hw/usb/usb-hcd.h"
 
 #define TYPE_XHCI "base-xhci"
-#define TYPE_NEC_XHCI "nec-usb-xhci"
 #define TYPE_QEMU_XHCI "qemu-xhci"
 
 #define XHCI(obj) \
diff --git a/include/hw/usb/usb-hcd.h b/include/hw/usb/usb-hcd.h
index c9d0a88984..56107fca62 100644
--- a/include/hw/usb/usb-hcd.h
+++ b/include/hw/usb/usb-hcd.h
@@ -30,4 +30,7 @@
 #define TYPE_VT82C686B_USB_UHCI     "vt82c686b-usb-uhci"
 #define TYPE_ICH9_USB_UHCI(n)       "ich9-usb-uhci" #n
 
+/* XHCI */
+#define TYPE_NEC_XHCI "nec-usb-xhci"
+
 #endif
diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
index db1706a66c..d8b3978f24 100644
--- a/hw/ppc/spapr.c
+++ b/hw/ppc/spapr.c
@@ -2961,7 +2961,7 @@ static void spapr_machine_init(MachineState *machine)
         if (smc->use_ohci_by_default) {
             pci_create_simple(phb->bus, -1, TYPE_PCI_OHCI);
         } else {
-            pci_create_simple(phb->bus, -1, "nec-usb-xhci");
+            pci_create_simple(phb->bus, -1, TYPE_NEC_XHCI);
         }
 
         if (spapr->has_graphics) {
-- 
2.21.3



From xen-devel-bounces@lists.xenproject.org Sat Jul 04 15:23:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 04 Jul 2020 15:23:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jrk0z-0000UG-J3; Sat, 04 Jul 2020 15:23:41 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CV4e=AP=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jrk0y-0000Tw-6C
 for xen-devel@lists.xenproject.org; Sat, 04 Jul 2020 15:23:40 +0000
X-Inumbo-ID: 5320b022-be0a-11ea-8b45-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 5320b022-be0a-11ea-8b45-12813bfff9fa;
 Sat, 04 Jul 2020 15:23:34 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=ewRza5zVzSfC89B0y/kCb/95Tu4Qlz7FFVyqF7sOWAQ=; b=oyvxr9u8Is/58kCBgqkasFHX2
 ShpG9rk2fTxBv4yvs1Pe0vMCiCPg6jqTlOZDqey28K5tkdGAq/xuL0L9CfKk9qjTAKQIB69PU6zbB
 cKV+iLORJcloytIhDoMNTLNikVY29vAdLQTdGaboFT9t5vOcYq77+nU50lIpTWGo9r/gE=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jrk0r-0006j2-Ff; Sat, 04 Jul 2020 15:23:33 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jrk0q-0005yZ-UD; Sat, 04 Jul 2020 15:23:33 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jrk0q-0001wP-Sc; Sat, 04 Jul 2020 15:23:32 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151608-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 151608: regressions - FAIL
X-Osstest-Failures: libvirt:test-amd64-amd64-libvirt-pair:guest-migrate/src_host/dst_host:fail:regression
 libvirt:test-amd64-i386-libvirt-pair:guest-migrate/src_host/dst_host:fail:regression
 libvirt:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 libvirt:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 libvirt:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 libvirt:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 libvirt:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 libvirt:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 libvirt:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 libvirt:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 libvirt:test-arm64-arm64-libvirt:migrate-support-check:fail:nonblocking
 libvirt:test-arm64-arm64-libvirt:saverestore-support-check:fail:nonblocking
 libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 libvirt:test-arm64-arm64-libvirt-qcow2:migrate-support-check:fail:nonblocking
 libvirt:test-arm64-arm64-libvirt-qcow2:saverestore-support-check:fail:nonblocking
 libvirt:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 libvirt:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 libvirt:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This: libvirt=201f8d1876136b0693505614efa3c9d113aff0bb
X-Osstest-Versions-That: libvirt=e7998ebeaf15e4e8825be0dd97aa1316f194f00d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 04 Jul 2020 15:23:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151608 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151608/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-pair 22 guest-migrate/src_host/dst_host fail REGR. vs. 151496
 test-amd64-i386-libvirt-pair 22 guest-migrate/src_host/dst_host fail REGR. vs. 151496

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 151496
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 151496
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt     14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-qcow2 12 migrate-support-check        fail never pass
 test-arm64-arm64-libvirt-qcow2 13 saverestore-support-check    fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass

version targeted for testing:
 libvirt              201f8d1876136b0693505614efa3c9d113aff0bb
baseline version:
 libvirt              e7998ebeaf15e4e8825be0dd97aa1316f194f00d

Last test of basis   151496  2020-07-01 04:23:43 Z    3 days
Failing since        151527  2020-07-02 04:29:15 Z    2 days    3 attempts
Testing same since   151608  2020-07-04 04:18:44 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrea Bolognani <abologna@redhat.com>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniel Veillard <veillard@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-libvirt                                     pass    
 test-arm64-arm64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-i386-libvirt-pair                                 fail    
 test-arm64-arm64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 201f8d1876136b0693505614efa3c9d113aff0bb
Author: Michal Privoznik <mprivozn@redhat.com>
Date:   Mon Jun 29 14:55:54 2020 +0200

    virConnectGetAllDomainStats: Document two vcpu stats
    
    When introducing vcpu.<num>.wait (v1.3.2-rc1~301) and
    vcpu.<num>.halted (v2.4.0-rc1~36) the documentation was
    not written.
    
    Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
    Reviewed-by: Erik Skultety <eskultet@redhat.com>

commit c203b8fee1ce15003934c09e811fbd2eaec9f230
Author: Andrea Bolognani <abologna@redhat.com>
Date:   Thu Jul 2 15:02:38 2020 +0200

    docs: Update CI documentation
    
    We're no longer using either Travis CI or the Jenkins-based
    CentOS CI, but we have started using Cirrus CI.
    
    Mention the libvirt-ci subproject as well, as a pointer for those
    who might want to learn more about our CI infrastructure.
    
    Signed-off-by: Andrea Bolognani <abologna@redhat.com>
    Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>

commit fb912901316dbe7d485551606373bd71d5271601
Author: Andrea Bolognani <abologna@redhat.com>
Date:   Mon Jun 29 19:00:36 2020 +0200

    cirrus: Generate jobs dynamically
    
    Instead of having static job definitions for FreeBSD and macOS,
    use a generic template for both and fill in the details that are
    actually different, such as the list of packages to install, in
    the GitLab CI job, right before calling cirrus-run.
    
    The target-specific information is provided by lcitool, so that
    keeping it up to date is just a matter of running the refresh
    script when necessary.
    
    Signed-off-by: Andrea Bolognani <abologna@redhat.com>
    Reviewed-by: Erik Skultety <eskultet@redhat.com>

commit 919ee94ca9c7fed77897fa8e3b04952e02780c0c
Author: Michal Privoznik <mprivozn@redhat.com>
Date:   Fri Jul 3 09:32:30 2020 +0200

    maint: Post-release version bump to 6.6.0
    
    Signed-off-by: Michal Privoznik <mprivozn@redhat.com>

commit d7f935f1f17a3ecf180ddb9600cbef4ba8dc20f4
Author: Daniel Veillard <veillard@redhat.com>
Date:   Fri Jul 3 08:49:25 2020 +0200

    Release of libvirt-6.5.0
    
    * NEWS.rst: updated with date of release
    
    Signed-off-by: Daniel Veillard <veillard@redhat.com>

commit d1d888a69f505922140bec292b8d208b3571f084
Author: Andrea Bolognani <abologna@redhat.com>
Date:   Thu Jul 2 14:41:18 2020 +0200

    NEWS: Update for libvirt 6.5.0
    
    Signed-off-by: Andrea Bolognani <abologna@redhat.com>
    Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>

commit 7fa7f7eeb6e969e002845928e155914da2fc8cd0
Author: Daniel P. Berrangé <berrange@redhat.com>
Date:   Wed Jul 1 17:36:51 2020 +0100

    util: add access check for hooks to fix running as non-root
    
    Since feb83c1e710b9ea8044a89346f4868d03b31b0f1 libvirtd will abort on
    startup if run as non-root
    
      2020-07-01 16:30:30.738+0000: 1647444: error : virDirOpenInternal:2869 : cannot open directory '/etc/libvirt/hooks/daemon.d': Permission denied
    
    The root cause flaw is that non-root libvirtd is using /etc/libvirt for
    its hooks. Traditionally that has been harmless though since we checked
    whether we could access the hook file and degraded gracefully. We need
    the same access check for iterating over the hook directory.
    
    Long term we should make it possible to have an unprivileged hook dir
    under $HOME.
    
    Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
    Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>

commit c3fa17cd9a158f38416a80af3e0f712bf96ebf38
Author: Michal Privoznik <mprivozn@redhat.com>
Date:   Wed Jul 1 09:47:48 2020 +0200

    virnettlshelpers: Update private key
    
    With the recent update of Fedora rawhide I've noticed
    virnettlssessiontest and virnettlscontexttest failing with:
    
      Our own certificate servercertreq-ctx.pem failed validation
      against cacertreq-ctx.pem: The certificate uses an insecure
      algorithm
    
    This is a result of Fedora's changes to support strong crypto [1].
    RSA with a 1024 bit key is viewed as legacy and thus insecure, so
    generate a new private key. Moreover, switch to EC, which is not
    only shorter but also not deprecated as often as RSA. Generated
    using the following command:
    
      openssl genpkey --outform PEM --out privkey.pem \
      --algorithm EC --pkeyopt ec_paramgen_curve:P-384 \
      --pkeyopt ec_param_enc:named_curve
    
    1: https://fedoraproject.org/wiki/Changes/StrongCryptoSettings2
    
    Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
    Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>

commit d57f361083c5053267e6d9380c1afe2abfcae8ac
Author: Daniel Henrique Barboza <danielhb413@gmail.com>
Date:   Tue Jun 30 16:43:43 2020 -0300

    docs: Fix 'Offline migration' description
    
    'transfers inactive the definition of a domain' seems odd.
    
    Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
    Reviewed-by: Andrea Bolognani <abologna@redhat.com>


From xen-devel-bounces@lists.xenproject.org Sat Jul 04 15:29:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 04 Jul 2020 15:29:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jrk6K-0000fN-7C; Sat, 04 Jul 2020 15:29:12 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=CRhY=AP=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jrk6I-0000fH-De
 for xen-devel@lists.xenproject.org; Sat, 04 Jul 2020 15:29:10 +0000
X-Inumbo-ID: 1b0cb946-be0b-11ea-bca7-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1b0cb946-be0b-11ea-bca7-bc764e2007e4;
 Sat, 04 Jul 2020 15:29:09 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=J5n0FelEMPB/ub5oFiH/6XxPMZvyiGaUigA/UxUvVVg=; b=Xdx3SxFWYwF3c4pKlwDkheaqXA
 Cvudma8yPgVinpLMpooergLe2tcxeQuhpNjJYNZ8jUiI5S45+Nv3o1TQ8KN1TG/ap5nWmFqRe9LUZ
 rwbdy2zR0+/oPuY/XGSrO16yb8FzQqPKm6YMxcJMPvaNGFcm0dy9PDvkhA1h/oTISerM=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jrk6D-0006pI-KH; Sat, 04 Jul 2020 15:29:05 +0000
Received: from 54-240-197-228.amazon.com ([54.240.197.228]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jrk6D-0005e5-9U; Sat, 04 Jul 2020 15:29:05 +0000
Subject: Re: [PATCH v4 for-4.14 2/2] pvcalls: Document correctly and
 explicitely the padding for all arches
To: xen-devel@lists.xenproject.org
References: <20200627095533.14145-1-julien@xen.org>
 <20200627095533.14145-3-julien@xen.org>
From: Julien Grall <julien@xen.org>
Message-ID: <b7f41be0-f1d2-2c3b-c79f-5d9763dfb5df@xen.org>
Date: Sat, 4 Jul 2020 16:29:02 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:68.0)
 Gecko/20100101 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20200627095533.14145-3-julien@xen.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 paul@xen.org, Andrew Cooper <andrew.cooper3@citrix.com>,
 Julien Grall <jgrall@amazon.com>, Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi,

On 27/06/2020 10:55, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> The specification of pvcalls suggests there is padding for 32-bit x86
> at the end of most the structure. However, they are not described in
> in the public header.
> 
> Because of that all the structures would be 32-bit aligned and not
> 64-bit aligned for 32-bit x86.
> 
> For all the other architectures supported (Arm and 64-bit x86), the
> structure are aligned to 64-bit because they contain uint64_t field.
> Therefore all the structures contain implicit padding.
> 
> Given the specification is authoriitative, the padding will the same for
> the all architectures. The potential breakage of compatibility is ought
> to be fine as pvcalls is still a tech preview.
> 
> As an aside, the padding sadly cannot be mandated to be 0 as they are
> already present. So it is not going to be possible to use the padding
> for extending a command in the future.
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>

It looks like most of the comments are on the commit message. So rather 
than sending the series again, here is a new version of the commit message:

"
The specification of pvcalls suggests there is padding for 32-bit x86
at the end of most of the structures. However, it is not described in
the public header.

Because of that, all the structures would have a different size between 
32-bit x86 and 64-bit x86.

For all the other architectures supported (Arm and 64-bit x86), the 
structures have the same sizes because they contain implicit padding 
thanks to the 64-bit alignment of the uint64_t field.

Given the specification is authoritative, the padding will now be the 
same for all architectures. The potential breakage of compatibility 
ought to be fine as pvcalls is still a tech preview.
"

Cheers,


-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Sat Jul 04 15:33:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 04 Jul 2020 15:33:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jrkAd-0001Sd-PO; Sat, 04 Jul 2020 15:33:39 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=CRhY=AP=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jrkAc-0001SY-MA
 for xen-devel@lists.xenproject.org; Sat, 04 Jul 2020 15:33:38 +0000
X-Inumbo-ID: bb08a6ee-be0b-11ea-b7bb-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id bb08a6ee-be0b-11ea-b7bb-bc764e2007e4;
 Sat, 04 Jul 2020 15:33:38 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:To:Subject:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=52tfAKuP79r/HTo90iVbkMWNl79hm+RYb6LyoVA5E18=; b=q6wkeMyhzSu4qtuatDcWBUZoTl
 eVMQQrPZz3renhbOBkQZtVR8tZRnhg97brXyru2hbFSQu4OLUTQtpWKvQJDXDTdXhCNOaBYZdIAHR
 SCmF0FoRnLQtmQZg0r3j1bvbeW5II1hQ4q1DJQdByfjaQBh7Z/8vFiCL340OJwVAlG2I=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jrkAb-0006vc-SP; Sat, 04 Jul 2020 15:33:37 +0000
Received: from 54-240-197-236.amazon.com ([54.240.197.236]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jrkAb-0005yJ-Mn; Sat, 04 Jul 2020 15:33:37 +0000
Subject: =?UTF-8?B?UmU6IOWbnuWkje+8miBbWGVuIEFSTTY0XSBTYXZlIGNvcmVkdW1wIGxv?=
 =?UTF-8?Q?g_when_xen/dom0_crash_on_ARM64=3f?=
To: jinchen <jinchen1227@qq.com>, xen-devel <xen-devel@lists.xenproject.org>
References: <tencent_C1F76837DF25C430969ABF6E4A557260AA0A@qq.com>
From: Julien Grall <julien@xen.org>
Message-ID: <2370c2cc-307c-831c-cfab-0a41412b3e1e@xen.org>
Date: Sat, 4 Jul 2020 16:33:36 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:68.0)
 Gecko/20100101 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <tencent_C1F76837DF25C430969ABF6E4A557260AA0A@qq.com>
Content-Type: text/plain; charset=gb18030; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi,

On 03/07/2020 07:02, jinchen wrote:
> Thank you for your reply!
> 
> On 02/07/2020 02:41, jinchen wrote:
>  >> Hello xen experts:
>  >>
>  >> Is there any way to save xen and dom0 core dump log when xen or dom0
>  >> crash on ARM64 platform?
> 
>  >Usually all the crash stack trace (Xen and Dom0) should be output on the
>  >Xen Console.
> 
> But what if I don't connect a debug serial and want to check the dump 
> error after a reboot?
If you don't have a debug serial always connected, then you will not get 
the logs.

> 
>  >> I find that kdump seems unable to run on the ARM64 platform?
> 
>  >We don't have support for kdump/kexec on Arm in Xen yet.
> 
> I find the kdump seems the appropriate way to do this, but it doesn't 
> support xen arm64.
> 
>  >> Are there any patches or other ways to achieve this goal?
> 
>  >I am not aware of any patches, but they would be welcomed.
> 
>  >For other way, it depends what exactly you expect. Do you need more than
>  >the stack trace?
> 
> The stack trace will be OK; if other information can be saved as well 
> (like memory/vcpu/domain, etc.), that would be better.

Kexec is probably the way to go. I would be happy to review the patches 
for Xen upstream if you are up for providing an implementation.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Sat Jul 04 16:08:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 04 Jul 2020 16:08:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jrki1-0004c2-Kf; Sat, 04 Jul 2020 16:08:09 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=CRhY=AP=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jrki0-0004bx-UL
 for xen-devel@lists.xenproject.org; Sat, 04 Jul 2020 16:08:09 +0000
X-Inumbo-ID: 8cf3a452-be10-11ea-8b57-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 8cf3a452-be10-11ea-8b57-12813bfff9fa;
 Sat, 04 Jul 2020 16:08:08 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=8T9HRFe0NX5TmJ8txEXvtIB0ExxdDWChxtJUzjMFJak=; b=A0IybA58BeGAUCAiY3dqiV4xjk
 hV1YUeeq3eNsp7ELsl6At7/IIF9hGqGyCyBguyV4l8RCkTP0u9BBlbjaehQxaECP14CdNObomQoO3
 dORozY4Vbedlfrf4RefWcrrCF+QZWcY7yzTPDMReYnPp83QquTooMf3XbmQFYzhC3+1g=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jrkhs-00084w-C5; Sat, 04 Jul 2020 16:08:00 +0000
Received: from 54-240-197-228.amazon.com ([54.240.197.228]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jrkhs-0007bb-2X; Sat, 04 Jul 2020 16:08:00 +0000
Subject: Re: [PATCH 1/2] xen/arm: entry: Place a speculation barrier following
 an ret instruction
To: Stefano Stabellini <sstabellini@kernel.org>
References: <20200616175913.7368-1-julien@xen.org>
 <20200616175913.7368-2-julien@xen.org>
 <alpine.DEB.2.21.2006161422240.24982@sstabellini-ThinkPad-T480s>
 <57696b4d-da83-a4d6-4d82-41a6f6c9174c@xen.org>
From: Julien Grall <julien@xen.org>
Message-ID: <5c3a2407-3e76-3a30-7f93-036706e00f73@xen.org>
Date: Sat, 4 Jul 2020 17:07:57 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:68.0)
 Gecko/20100101 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <57696b4d-da83-a4d6-4d82-41a6f6c9174c@xen.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: paul@xen.org, Andre.Przywara@arm.com, Julien Grall <jgrall@amazon.com>,
 Bertrand.Marquis@arm.com, security@xenproject.org,
 xen-devel@lists.xenproject.org, Volodymyr_Babchuk@epam.com
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 17/06/2020 17:23, Julien Grall wrote:
> Hi,
> 
> On 16/06/2020 22:24, Stefano Stabellini wrote:
>> On Tue, 16 Jun 2020, Julien Grall wrote:
>>> From: Julien Grall <jgrall@amazon.com>
>>>
>>> Some CPUs can speculate past a RET instruction and potentially perform
>>> speculative accesses to memory before processing the return.
>>>
>>> There is no known gadget available after the RET instruction today.
>>> However some of the registers (such as in check_pending_guest_serror())
>>> may contain a value provided the guest.
>>                                ^ by
>>
>>
>>> In order to harden the code, it would be better to add a speculation
>>> barrier after each RET instruction. The performance is meant to be
>>> negligeable as the speculation barrier is not meant to be archicturally
>>> executed.
>>>
>>> Note that on arm32, the ldmia instruction will act as a return from the
>>> function __context_switch(). While the whitepaper doesn't suggest it is
>>> possible to speculate after the instruction, add preventively a
>>> speculation barrier after it as well.
>>>
>>> This is part of the work to mitigate straight-line speculation.
>>>
>>> Signed-off-by: Julien Grall <jgrall@amazon.com>
>>
>> Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
>>
>> I did a compile-test on the patch too.
>>
>>
>>> ---
>>>
>>> I am still unsure whether we should preventively add a speculation 
>>> barrier after all the RET instructions in arm*/lib/. The smc call 
>>> will be taken care of in a follow-up patch.
>>
>> SMC is great to have but it seems to be overkill to do the ones under
>> lib/.
>  From my understanding, the compiler will preventively add a 
> speculation barrier after each 'ret' when the mitigations are turned 
> on. So it feels to me we want to follow the same approach.
> 
> Obviously, we can avoid them but I would like to have a justification 
> for not adding them (nothing is overkilled against speculation ;)).

I finally found some time to look at arm*/lib in more detail. Some of 
the helpers can definitely be called with guest inputs.

For instance, memchr() is called from hypfs_get_path_user() with the 3rd 
argument controlled by the guest. In both the 32-bit and 64-bit 
implementations, you will reach the end of memchr() with r2/w2 and r3/w3 
(the latter contains a character from the buffer) controlled by the 
guest.

As this is the only function in the unit, we don't know what 
instructions will sit right after the RET. So it would be safer to add a 
speculation barrier there too.
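The hardening pattern under discussion can be sketched as follows (arm64 flavour; `hypothetical_helper` is a made-up label, and the `dsb nsh; isb` pair is the generic fallback sequence for a speculation barrier on CPUs without a dedicated SB instruction — the exact macro Xen ends up using may differ):

```asm
ENTRY(hypothetical_helper)
        /* ... function body ... */
        ret
        /*
         * Speculation barrier: never architecturally executed, only
         * reached by a CPU speculating straight past the RET.
         */
        dsb     nsh
        isb
ENDPROC(hypothetical_helper)
```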

Note that hypfs is currently only accessible by Dom0. Yet, I still think 
we should try to harden any code we can :).

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Sat Jul 04 16:36:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 04 Jul 2020 16:36:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jrl9D-00074r-Qi; Sat, 04 Jul 2020 16:36:15 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ux+X=AP=redhat.com=philmd@srs-us1.protection.inumbo.net>)
 id 1jrl9C-00074m-0s
 for xen-devel@lists.xenproject.org; Sat, 04 Jul 2020 16:36:14 +0000
X-Inumbo-ID: 78b5da60-be14-11ea-bb8b-bc764e2007e4
Received: from us-smtp-1.mimecast.com (unknown [205.139.110.61])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 78b5da60-be14-11ea-bb8b-bc764e2007e4;
 Sat, 04 Jul 2020 16:36:12 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
 s=mimecast20190719; t=1593880571;
 h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
 content-transfer-encoding:content-transfer-encoding:
 in-reply-to:in-reply-to:references:references:autocrypt:autocrypt;
 bh=SqXz38wdz/FD+LnrCe+KuV6IAZbK6L/A1oUQpL8mo44=;
 b=TZNRE9jQOT2CNh25NsU2LuLCcZxOxnm5zdHngPsw+8CJBb8hGO3ju4+RZRmrIdPljbVZfO
 kB4ZP5U2qrdrqrL65k1vxOty/FIrBAWNJXs4DBPyg6dEusrjJMpNihcr9EZvI0rXgrnY1j
 yeusxfEnC5m+l5ug5dA8nBmKvd0Tc6s=
Received: from mail-wr1-f72.google.com (mail-wr1-f72.google.com
 [209.85.221.72]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-330-M6CzRJ4yPsq_ZxfDCRq3Hg-1; Sat, 04 Jul 2020 12:36:10 -0400
X-MC-Unique: M6CzRJ4yPsq_ZxfDCRq3Hg-1
Received: by mail-wr1-f72.google.com with SMTP id y16so35988371wrr.20
 for <xen-devel@lists.xenproject.org>; Sat, 04 Jul 2020 09:36:10 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:subject:to:cc:references:from:autocrypt
 :message-id:date:user-agent:mime-version:in-reply-to
 :content-language:content-transfer-encoding;
 bh=SqXz38wdz/FD+LnrCe+KuV6IAZbK6L/A1oUQpL8mo44=;
 b=REYctrPP2KVGNXeLmTEPG6tjqMTOvouGKRJVCGLPYEQUEjFE1e7VK92TK+lw9B37nC
 Tz8jHJoVb2jKs/zewszqCI/wZ6xw6sMgd+J7duziRzECq6jiCjT873gBBYQ35lJ/I5XN
 hz7f26pnzY998vvv6XrKby7m2HyOz979q38vNsppQpcgUZxf3/h6PioDB08VG04iIY6k
 8jTHY5Kcpcz3YF+Rn1dABngUOsu5DjjM2xJ3lKjuC4qDgsKFMoVtPclkO91x6wQi82T8
 vHzSo92Nw8MXA6IKv3VAE6IQTAfIV/BdC8Ju5kA9qKGJGU8YWry7j7IbNdeTIU6WsOz0
 s0mw==
X-Gm-Message-State: AOAM5326U4TqDylmTOpVM3/ljHsVkZYP+tglm5q5+qL//zX8HtC7l8ae
 IblqwEfHxRBrO6IXMdcmw1f8lE/UMzg0A/Qs7Dc1IzXSn2RzCNs487f9gh9DI1Ztq2NI7yLWX7l
 3K/zYMxOQx11sF/wZmSbe8ssZy0E=
X-Received: by 2002:a05:6000:d0:: with SMTP id
 q16mr8556941wrx.166.1593880569113; 
 Sat, 04 Jul 2020 09:36:09 -0700 (PDT)
X-Google-Smtp-Source: ABdhPJxwOJi6CsXPd0bz+EEZxRlnsYFe4x67mZFFceVXWJFwu9AHH7lmdGkjXYPqrDUEAhIfJcjFGQ==
X-Received: by 2002:a05:6000:d0:: with SMTP id
 q16mr8556917wrx.166.1593880568935; 
 Sat, 04 Jul 2020 09:36:08 -0700 (PDT)
Received: from [192.168.1.39] (1.red-83-51-162.dynamicip.rima-tde.net.
 [83.51.162.1])
 by smtp.gmail.com with ESMTPSA id r11sm16595203wmh.1.2020.07.04.09.36.07
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Sat, 04 Jul 2020 09:36:08 -0700 (PDT)
Subject: Re: [PATCH v11 8/8] xen: introduce ERRP_AUTO_PROPAGATE
To: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>,
 qemu-devel@nongnu.org
References: <20200703090816.3295-1-vsementsov@virtuozzo.com>
 <20200703090816.3295-9-vsementsov@virtuozzo.com>
From: =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@redhat.com>
Autocrypt: addr=philmd@redhat.com; keydata=
 mQINBDXML8YBEADXCtUkDBKQvNsQA7sDpw6YLE/1tKHwm24A1au9Hfy/OFmkpzo+MD+dYc+7
 bvnqWAeGweq2SDq8zbzFZ1gJBd6+e5v1a/UrTxvwBk51yEkadrpRbi+r2bDpTJwXc/uEtYAB
 GvsTZMtiQVA4kRID1KCdgLa3zztPLCj5H1VZhqZsiGvXa/nMIlhvacRXdbgllPPJ72cLUkXf
 z1Zu4AkEKpccZaJspmLWGSzGu6UTZ7UfVeR2Hcc2KI9oZB1qthmZ1+PZyGZ/Dy+z+zklC0xl
 XIpQPmnfy9+/1hj1LzJ+pe3HzEodtlVA+rdttSvA6nmHKIt8Ul6b/h1DFTmUT1lN1WbAGxmg
 CH1O26cz5nTrzdjoqC/b8PpZiT0kO5MKKgiu5S4PRIxW2+RA4H9nq7nztNZ1Y39bDpzwE5Sp
 bDHzd5owmLxMLZAINtCtQuRbSOcMjZlg4zohA9TQP9krGIk+qTR+H4CV22sWldSkVtsoTaA2
 qNeSJhfHQY0TyQvFbqRsSNIe2gTDzzEQ8itsmdHHE/yzhcCVvlUzXhAT6pIN0OT+cdsTTfif
 MIcDboys92auTuJ7U+4jWF1+WUaJ8gDL69ThAsu7mGDBbm80P3vvUZ4fQM14NkxOnuGRrJxO
 qjWNJ2ZUxgyHAh5TCxMLKWZoL5hpnvx3dF3Ti9HW2dsUUWICSQARAQABtDJQaGlsaXBwZSBN
 YXRoaWV1LURhdWTDqSAoUGhpbCkgPHBoaWxtZEByZWRoYXQuY29tPokCVQQTAQgAPwIbDwYL
 CQgHAwIGFQgCCQoLBBYCAwECHgECF4AWIQSJweePYB7obIZ0lcuio/1u3q3A3gUCXsfWwAUJ
 KtymWgAKCRCio/1u3q3A3ircD/9Vjh3aFNJ3uF3hddeoFg1H038wZr/xi8/rX27M1Vj2j9VH
 0B8Olp4KUQw/hyO6kUxqkoojmzRpmzvlpZ0cUiZJo2bQIWnvScyHxFCv33kHe+YEIqoJlaQc
 JfKYlbCoubz+02E2A6bFD9+BvCY0LBbEj5POwyKGiDMjHKCGuzSuDRbCn0Mz4kCa7nFMF5Jv
 piC+JemRdiBd6102ThqgIsyGEBXuf1sy0QIVyXgaqr9O2b/0VoXpQId7yY7OJuYYxs7kQoXI
 6WzSMpmuXGkmfxOgbc/L6YbzB0JOriX0iRClxu4dEUg8Bs2pNnr6huY2Ft+qb41RzCJvvMyu
 gS32LfN0bTZ6Qm2A8ayMtUQgnwZDSO23OKgQWZVglGliY3ezHZ6lVwC24Vjkmq/2yBSLakZE
 6DZUjZzCW1nvtRK05ebyK6tofRsx8xB8pL/kcBb9nCuh70aLR+5cmE41X4O+MVJbwfP5s/RW
 9BFSL3qgXuXso/3XuWTQjJJGgKhB6xXjMmb1J4q/h5IuVV4juv1Fem9sfmyrh+Wi5V1IzKI7
 RPJ3KVb937eBgSENk53P0gUorwzUcO+ASEo3Z1cBKkJSPigDbeEjVfXQMzNt0oDRzpQqH2vp
 apo2jHnidWt8BsckuWZpxcZ9+/9obQ55DyVQHGiTN39hkETy3Emdnz1JVHTU0Q==
Message-ID: <e2b4f10a-162c-ebb8-3232-381c4d820f9f@redhat.com>
Date: Sat, 4 Jul 2020 18:36:07 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.5.0
MIME-Version: 1.0
In-Reply-To: <20200703090816.3295-9-vsementsov@virtuozzo.com>
Content-Language: en-US
Authentication-Results: relay.mimecast.com;
 auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=philmd@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin Wolf <kwolf@redhat.com>, Stefano Stabellini <sstabellini@kernel.org>,
 qemu-block@nongnu.org, Paul Durrant <paul@xen.org>, groug@kaod.org,
 armbru@redhat.com, Stefan Hajnoczi <stefanha@redhat.com>,
 Anthony Perard <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org,
 Max Reitz <mreitz@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 7/3/20 11:08 AM, Vladimir Sementsov-Ogievskiy wrote:
> If we want to add some info to errp (by error_prepend() or
> error_append_hint()), we must use the ERRP_AUTO_PROPAGATE macro.
> Otherwise, this info will not be added when errp == &error_fatal
> (the program will exit prior to the error_append_hint() or
> error_prepend() call).  Fix such cases.
> 
> If we want to check error after errp-function call, we need to
> introduce local_err and then propagate it to errp. Instead, use
> ERRP_AUTO_PROPAGATE macro, benefits are:
> 1. No need of explicit error_propagate call
> 2. No need of explicit local_err variable: use errp directly
> 3. ERRP_AUTO_PROPAGATE leaves errp as is if it's not NULL or
>    &error_fatal, this means that we don't break error_abort
>    (we'll abort on error_set, not on error_propagate)
> 
> This commit is generated by command
> 
>     sed -n '/^X86 Xen CPUs$/,/^$/{s/^F: //p}' MAINTAINERS | \
>     xargs git ls-files | grep '\.[hc]$' | \
>     xargs spatch \
>         --sp-file scripts/coccinelle/auto-propagated-errp.cocci \
>         --macro-file scripts/cocci-macro-file.h \
>         --in-place --no-show-diff --max-width 80
> 
> Reported-by: Kevin Wolf <kwolf@redhat.com>
> Reported-by: Greg Kurz <groug@kaod.org>
> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> ---
>  hw/block/dataplane/xen-block.c |  17 +++---
>  hw/block/xen-block.c           | 102 ++++++++++++++-------------------
>  hw/pci-host/xen_igd_pt.c       |   7 +--
>  hw/xen/xen-backend.c           |   7 +--
>  hw/xen/xen-bus.c               |  92 +++++++++++++----------------
>  hw/xen/xen-host-pci-device.c   |  27 +++++----
>  hw/xen/xen_pt.c                |  25 ++++----
>  hw/xen/xen_pt_config_init.c    |  17 +++---
>  8 files changed, 128 insertions(+), 166 deletions(-)

Without the description, this patch has 800 lines of diff...
It killed me; I don't have the energy to review patch #7 of this
series after that, sorry.
Consider splitting such mechanical patches next time. Here it
could have been hw/block, hw/pci-host and hw/xen.

Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
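For readers unfamiliar with the macro, the boilerplate it removes can be modeled with a self-contained sketch. This is a simplified stand-in, not QEMU's real Error API; ERRP_AUTO_PROPAGATE is modeled only by its guarantee that `errp` is always safe to dereference:

```c
#include <stdio.h>
#include <stdlib.h>

/* Simplified stand-in for QEMU's Error type -- not the real API. */
typedef struct Error { char msg[64]; } Error;

static void error_setg(Error **errp, const char *msg)
{
    if (errp && !*errp) {
        *errp = malloc(sizeof(Error));
        snprintf((*errp)->msg, sizeof((*errp)->msg), "%s", msg);
    }
}

static void error_propagate(Error **dst, Error *src)
{
    if (dst && !*dst) {
        *dst = src;     /* hand the error over to the caller */
    } else {
        free(src);      /* caller is not interested */
    }
}

static void inner(Error **errp)
{
    error_setg(errp, "device not found");
}

/* Old style: a local_err is needed just to see whether inner() failed. */
static int outer_old(Error **errp)
{
    Error *local_err = NULL;

    inner(&local_err);
    if (local_err) {
        error_propagate(errp, local_err);
        return -1;
    }
    return 0;
}

/* New style: with ERRP_AUTO_PROPAGATE() at the top (modeled here by the
 * assumption that errp is never NULL), *errp can be checked directly. */
static int outer_new(Error **errp)
{
    inner(errp);
    if (*errp) {
        return -1;
    }
    return 0;
}
```

As the commit message notes, the real macro leaves errp as-is unless it is NULL or &error_fatal, which is why error_abort behaviour is preserved.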



From xen-devel-bounces@lists.xenproject.org Sat Jul 04 17:13:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 04 Jul 2020 17:13:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jrljQ-0001zK-Ky; Sat, 04 Jul 2020 17:13:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jWGQ=AP=eik.bme.hu=balaton@srs-us1.protection.inumbo.net>)
 id 1jrljP-0001zF-DE
 for xen-devel@lists.xenproject.org; Sat, 04 Jul 2020 17:13:39 +0000
X-Inumbo-ID: b0c3ecd0-be19-11ea-bca7-bc764e2007e4
Received: from zero.eik.bme.hu (unknown [2001:738:2001:2001::2001])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b0c3ecd0-be19-11ea-bca7-bc764e2007e4;
 Sat, 04 Jul 2020 17:13:34 +0000 (UTC)
Received: from zero.eik.bme.hu (blah.eik.bme.hu [152.66.115.182])
 by localhost (Postfix) with SMTP id 26C7A746307;
 Sat,  4 Jul 2020 19:13:33 +0200 (CEST)
Received: by zero.eik.bme.hu (Postfix, from userid 432)
 id 7963774594E; Sat,  4 Jul 2020 19:13:32 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by zero.eik.bme.hu (Postfix) with ESMTP id 76AB17456F8;
 Sat,  4 Jul 2020 19:13:32 +0200 (CEST)
Date: Sat, 4 Jul 2020 19:13:32 +0200 (CEST)
From: BALATON Zoltan <balaton@eik.bme.hu>
To: =?ISO-8859-15?Q?Philippe_Mathieu-Daud=E9?= <f4bug@amsat.org>
Subject: Re: [PATCH 22/26] hw/usb/usb-hcd: Use OHCI type definitions
In-Reply-To: <20200704144943.18292-23-f4bug@amsat.org>
Message-ID: <alpine.BSF.2.22.395.2007041909370.92265@zero.eik.bme.hu>
References: <20200704144943.18292-1-f4bug@amsat.org>
 <20200704144943.18292-23-f4bug@amsat.org>
User-Agent: Alpine 2.22 (BSF 395 2020-01-19)
MIME-Version: 1.0
Content-Type: multipart/mixed;
 BOUNDARY="3866299591-816135042-1593882634=:92265"
Content-ID: <alpine.BSF.2.22.395.2007041910480.92265@zero.eik.bme.hu>
X-Spam-Checker-Version: Sophos PMX: 6.4.8.2820816,
 Antispam-Engine: 2.7.2.2107409, Antispam-Data: 2020.7.4.170317,
 AntiVirus-Engine: 5.74.0, AntiVirus-Data: 2020.7.3.5740002
X-Spam-Flag: NO
X-Spam-Probability: 9%
X-Spam-Level: 
X-Spam-Status: No, score=9% required=50%
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Peter Maydell <peter.maydell@linaro.org>,
 "Michael S. Tsirkin" <mst@redhat.com>,
 Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>, qemu-devel@nongnu.org,
 Jiaxun Yang <jiaxun.yang@flygoat.com>, Gerd Hoffmann <kraxel@redhat.com>,
 "Edgar E. Iglesias" <edgar.iglesias@gmail.com>,
 Huacai Chen <chenhc@lemote.com>, Stefano Stabellini <sstabellini@kernel.org>,
 xen-devel@lists.xenproject.org, Yoshinori Sato <ysato@users.sourceforge.jp>,
 Paul Durrant <paul@xen.org>, Magnus Damm <magnus.damm@gmail.com>,
 Markus Armbruster <armbru@redhat.com>,
 =?ISO-8859-15?Q?Herv=E9_Poussineau?= <hpoussin@reactos.org>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 Anthony Perard <anthony.perard@citrix.com>,
 Samuel Thibault <samuel.thibault@ens-lyon.org>,
 Leif Lindholm <leif@nuviainc.com>, Andrzej Zaborowski <balrogg@gmail.com>,
 Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
 Eduardo Habkost <ehabkost@redhat.com>,
 Alistair Francis <alistair@alistair23.me>,
 "Dr. David Alan Gilbert" <dgilbert@redhat.com>,
 Beniamino Galvani <b.galvani@gmail.com>,
 Niek Linnenbank <nieklinnenbank@gmail.com>, qemu-arm@nongnu.org,
 =?ISO-8859-15?Q?Marc-Andr=E9_Lureau?= <marcandre.lureau@redhat.com>,
 Richard Henderson <rth@twiddle.net>,
 Radoslaw Biernacki <radoslaw.biernacki@linaro.org>,
 Igor Mitsyanko <i.mitsyanko@gmail.com>,
 =?ISO-8859-15?Q?Philippe_Mathieu-Daud=E9?= <philmd@redhat.com>,
 Paul Zimmerman <pauldzim@gmail.com>, qemu-ppc@nongnu.org,
 David Gibson <david@gibson.dropbear.id.au>,
 Paolo Bonzini <pbonzini@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--3866299591-816135042-1593882634=:92265
Content-Type: text/plain; CHARSET=ISO-8859-15; format=flowed
Content-Transfer-Encoding: 8BIT
Content-ID: <alpine.BSF.2.22.395.2007041910481.92265@zero.eik.bme.hu>

On Sat, 4 Jul 2020, Philippe Mathieu-Daudé wrote:
> Various machine/board/soc models create OHCI device instances
> with the generic QDEV API, and don't need to access USB internals.
>
> Simplify header inclusions by moving the QOM type names into a
> simple header, with no need to include other "hw/usb" headers.
>
> Suggested-by: BALATON Zoltan <balaton@eik.bme.hu>
> Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
> ---
> hw/usb/hcd-ohci.h        |  2 +-
> include/hw/usb/usb-hcd.h | 16 ++++++++++++++++

I wonder if we need a new header for this, or whether these could just 
go in the new public hw/usb/usb.h. Machines creating an HCD may also add 
devices (like keyboard/mouse), so they will probably need both headers 
anyway; splitting it up may not be worth it. But I don't really mind 
either way.

For sm501 and sam460ex parts:

Reviewed-by: BALATON Zoltan <balaton@eik.bme.hu>

Regards,
BALATON Zoltan

> hw/arm/allwinner-a10.c   |  2 +-
> hw/arm/allwinner-h3.c    |  9 +++++----
> hw/arm/pxa2xx.c          |  3 ++-
> hw/arm/realview.c        |  3 ++-
> hw/arm/versatilepb.c     |  3 ++-
> hw/display/sm501.c       |  3 ++-
> hw/ppc/mac_newworld.c    |  3 ++-
> hw/ppc/mac_oldworld.c    |  3 ++-
> hw/ppc/sam460ex.c        |  3 ++-
> hw/ppc/spapr.c           |  3 ++-
> hw/usb/hcd-ohci-pci.c    |  2 +-
> 13 files changed, 40 insertions(+), 15 deletions(-)
> create mode 100644 include/hw/usb/usb-hcd.h
>
> diff --git a/hw/usb/hcd-ohci.h b/hw/usb/hcd-ohci.h
> index 771927ea17..6949cf0dab 100644
> --- a/hw/usb/hcd-ohci.h
> +++ b/hw/usb/hcd-ohci.h
> @@ -21,6 +21,7 @@
> #ifndef HCD_OHCI_H
> #define HCD_OHCI_H
>
> +#include "hw/usb/usb-hcd.h"
> #include "sysemu/dma.h"
> #include "usb-internal.h"
>
> @@ -91,7 +92,6 @@ typedef struct OHCIState {
>     void (*ohci_die)(struct OHCIState *ohci);
> } OHCIState;
>
> -#define TYPE_SYSBUS_OHCI "sysbus-ohci"
> #define SYSBUS_OHCI(obj) OBJECT_CHECK(OHCISysBusState, (obj), TYPE_SYSBUS_OHCI)
>
> typedef struct {
> diff --git a/include/hw/usb/usb-hcd.h b/include/hw/usb/usb-hcd.h
> new file mode 100644
> index 0000000000..21fdfaf22d
> --- /dev/null
> +++ b/include/hw/usb/usb-hcd.h
> @@ -0,0 +1,16 @@
> +/*
> + * QEMU USB HCD types
> + *
> + * Copyright (c) 2020  Philippe Mathieu-Daudé <f4bug@amsat.org>
> + *
> + * SPDX-License-Identifier: GPL-2.0-or-later
> + */
> +
> +#ifndef HW_USB_HCD_TYPES_H
> +#define HW_USB_HCD_TYPES_H
> +
> +/* OHCI */
> +#define TYPE_SYSBUS_OHCI            "sysbus-ohci"
> +#define TYPE_PCI_OHCI               "pci-ohci"
> +
> +#endif
> diff --git a/hw/arm/allwinner-a10.c b/hw/arm/allwinner-a10.c
> index 52e0d83760..53c24ff602 100644
> --- a/hw/arm/allwinner-a10.c
> +++ b/hw/arm/allwinner-a10.c
> @@ -25,7 +25,7 @@
> #include "hw/misc/unimp.h"
> #include "sysemu/sysemu.h"
> #include "hw/boards.h"
> -#include "hw/usb/hcd-ohci.h"
> +#include "hw/usb/usb-hcd.h"
>
> #define AW_A10_MMC0_BASE        0x01c0f000
> #define AW_A10_PIC_REG_BASE     0x01c20400
> diff --git a/hw/arm/allwinner-h3.c b/hw/arm/allwinner-h3.c
> index 8e09468e86..d1d90ffa79 100644
> --- a/hw/arm/allwinner-h3.c
> +++ b/hw/arm/allwinner-h3.c
> @@ -28,6 +28,7 @@
> #include "hw/sysbus.h"
> #include "hw/char/serial.h"
> #include "hw/misc/unimp.h"
> +#include "hw/usb/usb-hcd.h"
> #include "hw/usb/hcd-ehci.h"
> #include "hw/loader.h"
> #include "sysemu/sysemu.h"
> @@ -381,16 +382,16 @@ static void allwinner_h3_realize(DeviceState *dev, Error **errp)
>                          qdev_get_gpio_in(DEVICE(&s->gic),
>                                           AW_H3_GIC_SPI_EHCI3));
>
> -    sysbus_create_simple("sysbus-ohci", s->memmap[AW_H3_OHCI0],
> +    sysbus_create_simple(TYPE_SYSBUS_OHCI, s->memmap[AW_H3_OHCI0],
>                          qdev_get_gpio_in(DEVICE(&s->gic),
>                                           AW_H3_GIC_SPI_OHCI0));
> -    sysbus_create_simple("sysbus-ohci", s->memmap[AW_H3_OHCI1],
> +    sysbus_create_simple(TYPE_SYSBUS_OHCI, s->memmap[AW_H3_OHCI1],
>                          qdev_get_gpio_in(DEVICE(&s->gic),
>                                           AW_H3_GIC_SPI_OHCI1));
> -    sysbus_create_simple("sysbus-ohci", s->memmap[AW_H3_OHCI2],
> +    sysbus_create_simple(TYPE_SYSBUS_OHCI, s->memmap[AW_H3_OHCI2],
>                          qdev_get_gpio_in(DEVICE(&s->gic),
>                                           AW_H3_GIC_SPI_OHCI2));
> -    sysbus_create_simple("sysbus-ohci", s->memmap[AW_H3_OHCI3],
> +    sysbus_create_simple(TYPE_SYSBUS_OHCI, s->memmap[AW_H3_OHCI3],
>                          qdev_get_gpio_in(DEVICE(&s->gic),
>                                           AW_H3_GIC_SPI_OHCI3));
>
> diff --git a/hw/arm/pxa2xx.c b/hw/arm/pxa2xx.c
> index f104a33463..27196170f5 100644
> --- a/hw/arm/pxa2xx.c
> +++ b/hw/arm/pxa2xx.c
> @@ -18,6 +18,7 @@
> #include "hw/arm/pxa.h"
> #include "sysemu/sysemu.h"
> #include "hw/char/serial.h"
> +#include "hw/usb/usb-hcd.h"
> #include "hw/i2c/i2c.h"
> #include "hw/irq.h"
> #include "hw/qdev-properties.h"
> @@ -2196,7 +2197,7 @@ PXA2xxState *pxa270_init(MemoryRegion *address_space,
>         s->ssp[i] = (SSIBus *)qdev_get_child_bus(dev, "ssi");
>     }
>
> -    sysbus_create_simple("sysbus-ohci", 0x4c000000,
> +    sysbus_create_simple(TYPE_SYSBUS_OHCI, 0x4c000000,
>                          qdev_get_gpio_in(s->pic, PXA2XX_PIC_USBH1));
>
>     s->pcmcia[0] = pxa2xx_pcmcia_init(address_space, 0x20000000);
> diff --git a/hw/arm/realview.c b/hw/arm/realview.c
> index b6c0a1adb9..0aa34bd4c2 100644
> --- a/hw/arm/realview.c
> +++ b/hw/arm/realview.c
> @@ -16,6 +16,7 @@
> #include "hw/net/lan9118.h"
> #include "hw/net/smc91c111.h"
> #include "hw/pci/pci.h"
> +#include "hw/usb/usb-hcd.h"
> #include "net/net.h"
> #include "sysemu/sysemu.h"
> #include "hw/boards.h"
> @@ -256,7 +257,7 @@ static void realview_init(MachineState *machine,
>         sysbus_connect_irq(busdev, 3, pic[51]);
>         pci_bus = (PCIBus *)qdev_get_child_bus(dev, "pci");
>         if (machine_usb(machine)) {
> -            pci_create_simple(pci_bus, -1, "pci-ohci");
> +            pci_create_simple(pci_bus, -1, TYPE_PCI_OHCI);
>         }
>         n = drive_get_max_bus(IF_SCSI);
>         while (n >= 0) {
> diff --git a/hw/arm/versatilepb.c b/hw/arm/versatilepb.c
> index e596b8170f..3e6224dc96 100644
> --- a/hw/arm/versatilepb.c
> +++ b/hw/arm/versatilepb.c
> @@ -17,6 +17,7 @@
> #include "net/net.h"
> #include "sysemu/sysemu.h"
> #include "hw/pci/pci.h"
> +#include "hw/usb/usb-hcd.h"
> #include "hw/i2c/i2c.h"
> #include "hw/i2c/arm_sbcon_i2c.h"
> #include "hw/irq.h"
> @@ -273,7 +274,7 @@ static void versatile_init(MachineState *machine, int board_id)
>         }
>     }
>     if (machine_usb(machine)) {
> -        pci_create_simple(pci_bus, -1, "pci-ohci");
> +        pci_create_simple(pci_bus, -1, TYPE_PCI_OHCI);
>     }
>     n = drive_get_max_bus(IF_SCSI);
>     while (n >= 0) {
> diff --git a/hw/display/sm501.c b/hw/display/sm501.c
> index 9cccc68c35..5f076c841f 100644
> --- a/hw/display/sm501.c
> +++ b/hw/display/sm501.c
> @@ -33,6 +33,7 @@
> #include "hw/sysbus.h"
> #include "migration/vmstate.h"
> #include "hw/pci/pci.h"
> +#include "hw/usb/usb-hcd.h"
> #include "hw/qdev-properties.h"
> #include "hw/i2c/i2c.h"
> #include "hw/display/i2c-ddc.h"
> @@ -1961,7 +1962,7 @@ static void sm501_realize_sysbus(DeviceState *dev, Error **errp)
>     sysbus_init_mmio(sbd, &s->state.mmio_region);
>
>     /* bridge to usb host emulation module */
> -    usb_dev = qdev_new("sysbus-ohci");
> +    usb_dev = qdev_new(TYPE_SYSBUS_OHCI);
>     qdev_prop_set_uint32(usb_dev, "num-ports", 2);
>     qdev_prop_set_uint64(usb_dev, "dma-offset", s->base);
>     sysbus_realize_and_unref(SYS_BUS_DEVICE(usb_dev), &error_fatal);
> diff --git a/hw/ppc/mac_newworld.c b/hw/ppc/mac_newworld.c
> index 7bf69f4a1f..3c32c1831b 100644
> --- a/hw/ppc/mac_newworld.c
> +++ b/hw/ppc/mac_newworld.c
> @@ -55,6 +55,7 @@
> #include "hw/input/adb.h"
> #include "hw/ppc/mac_dbdma.h"
> #include "hw/pci/pci.h"
> +#include "hw/usb/usb-hcd.h"
> #include "net/net.h"
> #include "sysemu/sysemu.h"
> #include "hw/boards.h"
> @@ -411,7 +412,7 @@ static void ppc_core99_init(MachineState *machine)
>     }
>
>     if (machine->usb) {
> -        pci_create_simple(pci_bus, -1, "pci-ohci");
> +        pci_create_simple(pci_bus, -1, TYPE_PCI_OHCI);
>
>         /* U3 needs to use USB for input because Linux doesn't support via-cuda
>         on PPC64 */
> diff --git a/hw/ppc/mac_oldworld.c b/hw/ppc/mac_oldworld.c
> index f8c204ead7..a429a3e1df 100644
> --- a/hw/ppc/mac_oldworld.c
> +++ b/hw/ppc/mac_oldworld.c
> @@ -37,6 +37,7 @@
> #include "hw/isa/isa.h"
> #include "hw/pci/pci.h"
> #include "hw/pci/pci_host.h"
> +#include "hw/usb/usb-hcd.h"
> #include "hw/boards.h"
> #include "hw/nvram/fw_cfg.h"
> #include "hw/char/escc.h"
> @@ -301,7 +302,7 @@ static void ppc_heathrow_init(MachineState *machine)
>     qdev_realize_and_unref(dev, adb_bus, &error_fatal);
>
>     if (machine_usb(machine)) {
> -        pci_create_simple(pci_bus, -1, "pci-ohci");
> +        pci_create_simple(pci_bus, -1, TYPE_PCI_OHCI);
>     }
>
>     if (graphic_depth != 15 && graphic_depth != 32 && graphic_depth != 8)
> diff --git a/hw/ppc/sam460ex.c b/hw/ppc/sam460ex.c
> index 781b45e14b..ac60d17a86 100644
> --- a/hw/ppc/sam460ex.c
> +++ b/hw/ppc/sam460ex.c
> @@ -36,6 +36,7 @@
> #include "hw/i2c/ppc4xx_i2c.h"
> #include "hw/i2c/smbus_eeprom.h"
> #include "hw/usb/usb.h"
> +#include "hw/usb/usb-hcd.h"
> #include "hw/usb/hcd-ehci.h"
> #include "hw/ppc/fdt.h"
> #include "hw/qdev-properties.h"
> @@ -372,7 +373,7 @@ static void sam460ex_init(MachineState *machine)
>
>     /* USB */
>     sysbus_create_simple(TYPE_PPC4xx_EHCI, 0x4bffd0400, uic[2][29]);
> -    dev = qdev_new("sysbus-ohci");
> +    dev = qdev_new(TYPE_SYSBUS_OHCI);
>     qdev_prop_set_string(dev, "masterbus", "usb-bus.0");
>     qdev_prop_set_uint32(dev, "num-ports", 6);
>     sbdev = SYS_BUS_DEVICE(dev);
> diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
> index 0c0409077f..db1706a66c 100644
> --- a/hw/ppc/spapr.c
> +++ b/hw/ppc/spapr.c
> @@ -71,6 +71,7 @@
> #include "exec/address-spaces.h"
> #include "exec/ram_addr.h"
> #include "hw/usb/usb.h"
> +#include "hw/usb/usb-hcd.h"
> #include "qemu/config-file.h"
> #include "qemu/error-report.h"
> #include "trace.h"
> @@ -2958,7 +2959,7 @@ static void spapr_machine_init(MachineState *machine)
>
>     if (machine->usb) {
>         if (smc->use_ohci_by_default) {
> -            pci_create_simple(phb->bus, -1, "pci-ohci");
> +            pci_create_simple(phb->bus, -1, TYPE_PCI_OHCI);
>         } else {
>             pci_create_simple(phb->bus, -1, "nec-usb-xhci");
>         }
> diff --git a/hw/usb/hcd-ohci-pci.c b/hw/usb/hcd-ohci-pci.c
> index cb6bc55f59..14df83ec2e 100644
> --- a/hw/usb/hcd-ohci-pci.c
> +++ b/hw/usb/hcd-ohci-pci.c
> @@ -29,8 +29,8 @@
> #include "trace.h"
> #include "hcd-ohci.h"
> #include "usb-internal.h"
> +#include "hw/usb/usb-hcd.h"
>
> -#define TYPE_PCI_OHCI "pci-ohci"
> #define PCI_OHCI(obj) OBJECT_CHECK(OHCIPCIState, (obj), TYPE_PCI_OHCI)
>
> typedef struct {
>
--3866299591-816135042-1593882634=:92265--


From xen-devel-bounces@lists.xenproject.org Sat Jul 04 17:15:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 04 Jul 2020 17:15:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jrllb-00025E-2v; Sat, 04 Jul 2020 17:15:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jWGQ=AP=eik.bme.hu=balaton@srs-us1.protection.inumbo.net>)
 id 1jrllZ-000256-K3
 for xen-devel@lists.xenproject.org; Sat, 04 Jul 2020 17:15:53 +0000
X-Inumbo-ID: 01961bec-be1a-11ea-bca7-bc764e2007e4
Received: from zero.eik.bme.hu (unknown [152.66.115.2])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 01961bec-be1a-11ea-bca7-bc764e2007e4;
 Sat, 04 Jul 2020 17:15:50 +0000 (UTC)
Received: from zero.eik.bme.hu (blah.eik.bme.hu [152.66.115.182])
 by localhost (Postfix) with SMTP id 035FC74632B;
 Sat,  4 Jul 2020 19:15:49 +0200 (CEST)
Received: by zero.eik.bme.hu (Postfix, from userid 432)
 id BA3BB745702; Sat,  4 Jul 2020 19:15:48 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by zero.eik.bme.hu (Postfix) with ESMTP id B74BF7456F8;
 Sat,  4 Jul 2020 19:15:48 +0200 (CEST)
Date: Sat, 4 Jul 2020 19:15:48 +0200 (CEST)
From: BALATON Zoltan <balaton@eik.bme.hu>
To: =?ISO-8859-15?Q?Philippe_Mathieu-Daud=E9?= <f4bug@amsat.org>
Subject: Re: [PATCH 23/26] hw/usb/usb-hcd: Use EHCI type definitions
In-Reply-To: <20200704144943.18292-24-f4bug@amsat.org>
Message-ID: <alpine.BSF.2.22.395.2007041914490.92265@zero.eik.bme.hu>
References: <20200704144943.18292-1-f4bug@amsat.org>
 <20200704144943.18292-24-f4bug@amsat.org>
User-Agent: Alpine 2.22 (BSF 395 2020-01-19)
MIME-Version: 1.0
Content-Type: multipart/mixed;
 boundary="3866299591-1739992078-1593882948=:92265"
X-Spam-Checker-Version: Sophos PMX: 6.4.8.2820816,
 Antispam-Engine: 2.7.2.2107409, Antispam-Data: 2020.7.4.170918,
 AntiVirus-Engine: 5.74.0, AntiVirus-Data: 2020.7.3.5740002
X-Spam-Flag: NO
X-Spam-Probability: 9%
X-Spam-Level: 
X-Spam-Status: No, score=9% required=50%
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Peter Maydell <peter.maydell@linaro.org>,
 "Michael S. Tsirkin" <mst@redhat.com>,
 Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>, qemu-devel@nongnu.org,
 Jiaxun Yang <jiaxun.yang@flygoat.com>, Gerd Hoffmann <kraxel@redhat.com>,
 "Edgar E. Iglesias" <edgar.iglesias@gmail.com>,
 Huacai Chen <chenhc@lemote.com>, Stefano Stabellini <sstabellini@kernel.org>,
 xen-devel@lists.xenproject.org, Yoshinori Sato <ysato@users.sourceforge.jp>,
 Paul Durrant <paul@xen.org>, Magnus Damm <magnus.damm@gmail.com>,
 Markus Armbruster <armbru@redhat.com>,
 =?ISO-8859-15?Q?Herv=E9_Poussineau?= <hpoussin@reactos.org>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 Anthony Perard <anthony.perard@citrix.com>,
 Samuel Thibault <samuel.thibault@ens-lyon.org>,
 Leif Lindholm <leif@nuviainc.com>, Andrzej Zaborowski <balrogg@gmail.com>,
 Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
 Eduardo Habkost <ehabkost@redhat.com>,
 Alistair Francis <alistair@alistair23.me>,
 "Dr. David Alan Gilbert" <dgilbert@redhat.com>,
 Beniamino Galvani <b.galvani@gmail.com>,
 Niek Linnenbank <nieklinnenbank@gmail.com>, qemu-arm@nongnu.org,
 =?ISO-8859-15?Q?Marc-Andr=E9_Lureau?= <marcandre.lureau@redhat.com>,
 Richard Henderson <rth@twiddle.net>,
 Radoslaw Biernacki <radoslaw.biernacki@linaro.org>,
 Igor Mitsyanko <i.mitsyanko@gmail.com>,
 =?ISO-8859-15?Q?Philippe_Mathieu-Daud=E9?= <philmd@redhat.com>,
 Paul Zimmerman <pauldzim@gmail.com>, qemu-ppc@nongnu.org,
 David Gibson <david@gibson.dropbear.id.au>,
 Paolo Bonzini <pbonzini@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--3866299591-1739992078-1593882948=:92265
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8BIT

On Sat, 4 Jul 2020, Philippe Mathieu-Daudé wrote:
> Various machine/board/soc models create EHCI device instances
> with the generic QDEV API, and don't need to access USB internals.
>
> Simplify header inclusions by moving the QOM type names into a
> simple header, with no need to include other "hw/usb" headers.
>
> Suggested-by: BALATON Zoltan <balaton@eik.bme.hu>
> Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
> ---
> hw/usb/hcd-ehci.h         | 11 +----------
> include/hw/usb/chipidea.h |  2 +-
> include/hw/usb/usb-hcd.h  | 11 +++++++++++
> hw/arm/allwinner-h3.c     |  1 -
> hw/arm/exynos4210.c       |  2 +-
> hw/arm/sbsa-ref.c         |  3 ++-
> hw/arm/xilinx_zynq.c      |  2 +-
> hw/ppc/sam460ex.c         |  1 -
> hw/usb/chipidea.c         |  1 +
> hw/usb/hcd-ehci-sysbus.c  |  1 +
> 10 files changed, 19 insertions(+), 16 deletions(-)
>
> diff --git a/hw/usb/hcd-ehci.h b/hw/usb/hcd-ehci.h
> index 337b3ad05c..da70767409 100644
> --- a/hw/usb/hcd-ehci.h
> +++ b/hw/usb/hcd-ehci.h
> @@ -23,6 +23,7 @@
> #include "hw/pci/pci.h"
> #include "hw/sysbus.h"
> #include "usb-internal.h"
> +#include "hw/usb/usb-hcd.h"
>
> #define CAPA_SIZE        0x10
>
> @@ -316,7 +317,6 @@ void usb_ehci_realize(EHCIState *s, DeviceState *dev, Error **errp);
> void usb_ehci_unrealize(EHCIState *s, DeviceState *dev);
> void ehci_reset(void *opaque);
>
> -#define TYPE_PCI_EHCI "pci-ehci-usb"
> #define PCI_EHCI(obj) OBJECT_CHECK(EHCIPCIState, (obj), TYPE_PCI_EHCI)
>
> typedef struct EHCIPCIState {
> @@ -327,15 +327,6 @@ typedef struct EHCIPCIState {
>     EHCIState ehci;
> } EHCIPCIState;
>
> -
> -#define TYPE_SYS_BUS_EHCI "sysbus-ehci-usb"
> -#define TYPE_PLATFORM_EHCI "platform-ehci-usb"
> -#define TYPE_EXYNOS4210_EHCI "exynos4210-ehci-usb"
> -#define TYPE_AW_H3_EHCI "aw-h3-ehci-usb"
> -#define TYPE_TEGRA2_EHCI "tegra2-ehci-usb"
> -#define TYPE_PPC4xx_EHCI "ppc4xx-ehci-usb"
> -#define TYPE_FUSBH200_EHCI "fusbh200-ehci-usb"
> -
> #define SYS_BUS_EHCI(obj) \
>     OBJECT_CHECK(EHCISysBusState, (obj), TYPE_SYS_BUS_EHCI)
> #define SYS_BUS_EHCI_CLASS(class) \
> diff --git a/include/hw/usb/chipidea.h b/include/hw/usb/chipidea.h
> index 1ec2e9dbda..28f46291de 100644
> --- a/include/hw/usb/chipidea.h
> +++ b/include/hw/usb/chipidea.h
> @@ -2,6 +2,7 @@
> #define CHIPIDEA_H
>
> #include "hw/usb/hcd-ehci.h"
> +#include "hw/usb/usb-hcd.h"
>
> typedef struct ChipideaState {
>     /*< private >*/
> @@ -10,7 +11,6 @@ typedef struct ChipideaState {
>     MemoryRegion iomem[3];
> } ChipideaState;
>
> -#define TYPE_CHIPIDEA "usb-chipidea"
> #define CHIPIDEA(obj) OBJECT_CHECK(ChipideaState, (obj), TYPE_CHIPIDEA)
>
> #endif /* CHIPIDEA_H */
> diff --git a/include/hw/usb/usb-hcd.h b/include/hw/usb/usb-hcd.h
> index 21fdfaf22d..74af3a4533 100644
> --- a/include/hw/usb/usb-hcd.h
> +++ b/include/hw/usb/usb-hcd.h
> @@ -13,4 +13,15 @@
> #define TYPE_SYSBUS_OHCI            "sysbus-ohci"
> #define TYPE_PCI_OHCI               "pci-ohci"
>
> +/* EHCI */
> +#define TYPE_SYS_BUS_EHCI           "sysbus-ehci-usb"
> +#define TYPE_PCI_EHCI               "pci-ehci-usb"
> +#define TYPE_PLATFORM_EHCI          "platform-ehci-usb"
> +#define TYPE_EXYNOS4210_EHCI        "exynos4210-ehci-usb"
> +#define TYPE_AW_H3_EHCI             "aw-h3-ehci-usb"
> +#define TYPE_TEGRA2_EHCI            "tegra2-ehci-usb"
> +#define TYPE_PPC4xx_EHCI            "ppc4xx-ehci-usb"
> +#define TYPE_FUSBH200_EHCI          "fusbh200-ehci-usb"
> +#define TYPE_CHIPIDEA               "usb-chipidea"
> +
> #endif
> diff --git a/hw/arm/allwinner-h3.c b/hw/arm/allwinner-h3.c
> index d1d90ffa79..8b7adddc27 100644
> --- a/hw/arm/allwinner-h3.c
> +++ b/hw/arm/allwinner-h3.c
> @@ -29,7 +29,6 @@
> #include "hw/char/serial.h"
> #include "hw/misc/unimp.h"
> #include "hw/usb/usb-hcd.h"
> -#include "hw/usb/hcd-ehci.h"
> #include "hw/loader.h"
> #include "sysemu/sysemu.h"
> #include "hw/arm/allwinner-h3.h"
> diff --git a/hw/arm/exynos4210.c b/hw/arm/exynos4210.c
> index fa639806ec..692fb02159 100644
> --- a/hw/arm/exynos4210.c
> +++ b/hw/arm/exynos4210.c
> @@ -35,7 +35,7 @@
> #include "hw/qdev-properties.h"
> #include "hw/arm/exynos4210.h"
> #include "hw/sd/sdhci.h"
> -#include "hw/usb/hcd-ehci.h"
> +#include "hw/usb/usb-hcd.h"
>
> #define EXYNOS4210_CHIPID_ADDR         0x10000000
>
> diff --git a/hw/arm/sbsa-ref.c b/hw/arm/sbsa-ref.c
> index 021e7c1b8b..4e4c338ae9 100644
> --- a/hw/arm/sbsa-ref.c
> +++ b/hw/arm/sbsa-ref.c
> @@ -38,6 +38,7 @@
> #include "hw/loader.h"
> #include "hw/pci-host/gpex.h"
> #include "hw/qdev-properties.h"
> +#include "hw/usb/usb-hcd.h"
> #include "hw/char/pl011.h"
> #include "net/net.h"
>
> @@ -485,7 +486,7 @@ static void create_ehci(const SBSAMachineState *sms)
>     hwaddr base = sbsa_ref_memmap[SBSA_EHCI].base;
>     int irq = sbsa_ref_irqmap[SBSA_EHCI];
>
> -    sysbus_create_simple("platform-ehci-usb", base,
> +    sysbus_create_simple(TYPE_PLATFORM_EHCI, base,
>                          qdev_get_gpio_in(sms->gic, irq));
> }
>
> diff --git a/hw/arm/xilinx_zynq.c b/hw/arm/xilinx_zynq.c
> index ed970273f3..9ccdc03095 100644
> --- a/hw/arm/xilinx_zynq.c
> +++ b/hw/arm/xilinx_zynq.c
> @@ -29,7 +29,7 @@
> #include "hw/loader.h"
> #include "hw/misc/zynq-xadc.h"
> #include "hw/ssi/ssi.h"
> -#include "hw/usb/chipidea.h"
> +#include "hw/usb/usb-hcd.h"
> #include "qemu/error-report.h"
> #include "hw/sd/sdhci.h"
> #include "hw/char/cadence_uart.h"
> diff --git a/hw/ppc/sam460ex.c b/hw/ppc/sam460ex.c
> index ac60d17a86..3f7cf0d1ae 100644
> --- a/hw/ppc/sam460ex.c
> +++ b/hw/ppc/sam460ex.c
> @@ -37,7 +37,6 @@
> #include "hw/i2c/smbus_eeprom.h"
> #include "hw/usb/usb.h"
> #include "hw/usb/usb-hcd.h"
> -#include "hw/usb/hcd-ehci.h"
> #include "hw/ppc/fdt.h"
> #include "hw/qdev-properties.h"
> #include "hw/pci/pci.h"
> diff --git a/hw/usb/chipidea.c b/hw/usb/chipidea.c
> index 3dcd22ccba..e81f63295e 100644
> --- a/hw/usb/chipidea.c
> +++ b/hw/usb/chipidea.c
> @@ -11,6 +11,7 @@
>
> #include "qemu/osdep.h"
> #include "hw/usb/hcd-ehci.h"
> +#include "hw/usb/usb-hcd.h"
> #include "hw/usb/chipidea.h"
> #include "qemu/log.h"
> #include "qemu/module.h"
> diff --git a/hw/usb/hcd-ehci-sysbus.c b/hw/usb/hcd-ehci-sysbus.c
> index 3730736540..b7debc1934 100644
> --- a/hw/usb/hcd-ehci-sysbus.c
> +++ b/hw/usb/hcd-ehci-sysbus.c
> @@ -18,6 +18,7 @@
> #include "qemu/osdep.h"
> #include "hw/qdev-properties.h"
> #include "hw/usb/hcd-ehci.h"
> +#include "hw/usb/usb-hcd.h"
> #include "migration/vmstate.h"
> #include "qemu/module.h"

Do these last two still need hw/usb/hcd-ehci.h? If so, do they already get 
hw/usb/usb-hcd.h via that one, so do they need to explicitly include it 
again?

Regards,
BALATON Zoltan
--3866299591-1739992078-1593882948=:92265--


From xen-devel-bounces@lists.xenproject.org Sat Jul 04 17:17:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 04 Jul 2020 17:17:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jrlnS-0002EX-J9; Sat, 04 Jul 2020 17:17:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jWGQ=AP=eik.bme.hu=balaton@srs-us1.protection.inumbo.net>)
 id 1jrlnR-0002ES-5k
 for xen-devel@lists.xenproject.org; Sat, 04 Jul 2020 17:17:49 +0000
X-Inumbo-ID: 47c5ef8e-be1a-11ea-8496-bc764e2007e4
Received: from zero.eik.bme.hu (unknown [152.66.115.2])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 47c5ef8e-be1a-11ea-8496-bc764e2007e4;
 Sat, 04 Jul 2020 17:17:47 +0000 (UTC)
Received: from zero.eik.bme.hu (blah.eik.bme.hu [152.66.115.182])
 by localhost (Postfix) with SMTP id C149F74594E;
 Sat,  4 Jul 2020 19:17:46 +0200 (CEST)
Received: by zero.eik.bme.hu (Postfix, from userid 432)
 id 7B9CB745702; Sat,  4 Jul 2020 19:17:46 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by zero.eik.bme.hu (Postfix) with ESMTP id 79A8F7456F8;
 Sat,  4 Jul 2020 19:17:46 +0200 (CEST)
Date: Sat, 4 Jul 2020 19:17:46 +0200 (CEST)
From: BALATON Zoltan <balaton@eik.bme.hu>
To: =?ISO-8859-15?Q?Philippe_Mathieu-Daud=E9?= <f4bug@amsat.org>
Subject: Re: [PATCH 24/26] hw/usb/usb-hcd: Use UHCI type definitions
In-Reply-To: <20200704144943.18292-25-f4bug@amsat.org>
Message-ID: <alpine.BSF.2.22.395.2007041916060.92265@zero.eik.bme.hu>
References: <20200704144943.18292-1-f4bug@amsat.org>
 <20200704144943.18292-25-f4bug@amsat.org>
User-Agent: Alpine 2.22 (BSF 395 2020-01-19)
MIME-Version: 1.0
Content-Type: multipart/mixed;
 boundary="3866299591-481592883-1593883066=:92265"
X-Spam-Checker-Version: Sophos PMX: 6.4.8.2820816,
 Antispam-Engine: 2.7.2.2107409, Antispam-Data: 2020.7.4.170918,
 AntiVirus-Engine: 5.74.0, AntiVirus-Data: 2020.7.3.5740002
X-Spam-Flag: NO
X-Spam-Probability: 9%
X-Spam-Level: 
X-Spam-Status: No, score=9% required=50%
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Peter Maydell <peter.maydell@linaro.org>,
 "Michael S. Tsirkin" <mst@redhat.com>,
 Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>, qemu-devel@nongnu.org,
 Jiaxun Yang <jiaxun.yang@flygoat.com>, Gerd Hoffmann <kraxel@redhat.com>,
 "Edgar E. Iglesias" <edgar.iglesias@gmail.com>,
 Huacai Chen <chenhc@lemote.com>, Stefano Stabellini <sstabellini@kernel.org>,
 xen-devel@lists.xenproject.org, Yoshinori Sato <ysato@users.sourceforge.jp>,
 Paul Durrant <paul@xen.org>, Magnus Damm <magnus.damm@gmail.com>,
 Markus Armbruster <armbru@redhat.com>,
 =?ISO-8859-15?Q?Herv=E9_Poussineau?= <hpoussin@reactos.org>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 Anthony Perard <anthony.perard@citrix.com>,
 Samuel Thibault <samuel.thibault@ens-lyon.org>,
 Leif Lindholm <leif@nuviainc.com>, Andrzej Zaborowski <balrogg@gmail.com>,
 Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
 Eduardo Habkost <ehabkost@redhat.com>,
 Alistair Francis <alistair@alistair23.me>,
 "Dr. David Alan Gilbert" <dgilbert@redhat.com>,
 Beniamino Galvani <b.galvani@gmail.com>,
 Niek Linnenbank <nieklinnenbank@gmail.com>, qemu-arm@nongnu.org,
 =?ISO-8859-15?Q?Marc-Andr=E9_Lureau?= <marcandre.lureau@redhat.com>,
 Richard Henderson <rth@twiddle.net>,
 Radoslaw Biernacki <radoslaw.biernacki@linaro.org>,
 Igor Mitsyanko <i.mitsyanko@gmail.com>,
 =?ISO-8859-15?Q?Philippe_Mathieu-Daud=E9?= <philmd@redhat.com>,
 Paul Zimmerman <pauldzim@gmail.com>, qemu-ppc@nongnu.org,
 David Gibson <david@gibson.dropbear.id.au>,
 Paolo Bonzini <pbonzini@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>


--3866299591-481592883-1593883066=:92265
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8BIT

On Sat, 4 Jul 2020, Philippe Mathieu-Daudé wrote:
> Various machine/board/soc models create UHCI device instances
> with the generic QDEV API, and don't need to access USB internals.
>
> Simplify header inclusions by moving the QOM type names into a
> simple header, with no need to include other "hw/usb" headers.
>
> Suggested-by: BALATON Zoltan <balaton@eik.bme.hu>
> Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
> ---
> include/hw/usb/usb-hcd.h |  6 ++++++
> hw/i386/pc_piix.c        |  3 ++-
> hw/i386/pc_q35.c         | 13 +++++++------
> hw/isa/piix4.c           |  3 ++-
> hw/mips/fuloong2e.c      |  5 +++--
> hw/usb/hcd-uhci.c        | 19 ++++++++++---------
> 6 files changed, 30 insertions(+), 19 deletions(-)
>
> diff --git a/include/hw/usb/usb-hcd.h b/include/hw/usb/usb-hcd.h
> index 74af3a4533..c9d0a88984 100644
> --- a/include/hw/usb/usb-hcd.h
> +++ b/include/hw/usb/usb-hcd.h
> @@ -24,4 +24,10 @@
> #define TYPE_FUSBH200_EHCI          "fusbh200-ehci-usb"
> #define TYPE_CHIPIDEA               "usb-chipidea"
>
> +/* UHCI */
> +#define TYPE_PIIX3_USB_UHCI         "piix3-usb-uhci"
> +#define TYPE_PIIX4_USB_UHCI         "piix4-usb-uhci"
> +#define TYPE_VT82C686B_USB_UHCI     "vt82c686b-usb-uhci"
> +#define TYPE_ICH9_USB_UHCI(n)       "ich9-usb-uhci" #n

What is that #n at the end? Looks like a typo. Does it break compilation?

Regards,
BALATON Zoltan

> +
> #endif
> diff --git a/hw/i386/pc_piix.c b/hw/i386/pc_piix.c
> index 4d1de7cfab..0024c346c6 100644
> --- a/hw/i386/pc_piix.c
> +++ b/hw/i386/pc_piix.c
> @@ -37,6 +37,7 @@
> #include "hw/pci/pci.h"
> #include "hw/pci/pci_ids.h"
> #include "hw/usb/usb.h"
> +#include "hw/usb/usb-hcd.h"
> #include "net/net.h"
> #include "hw/ide/pci.h"
> #include "hw/irq.h"
> @@ -275,7 +276,7 @@ static void pc_init1(MachineState *machine,
> #endif
>
>     if (pcmc->pci_enabled && machine_usb(machine)) {
> -        pci_create_simple(pci_bus, piix3_devfn + 2, "piix3-usb-uhci");
> +        pci_create_simple(pci_bus, piix3_devfn + 2, TYPE_PIIX3_USB_UHCI);
>     }
>
>     if (pcmc->pci_enabled && x86_machine_is_acpi_enabled(X86_MACHINE(pcms))) {
> diff --git a/hw/i386/pc_q35.c b/hw/i386/pc_q35.c
> index b985f5bea1..a80527e6ed 100644
> --- a/hw/i386/pc_q35.c
> +++ b/hw/i386/pc_q35.c
> @@ -51,6 +51,7 @@
> #include "hw/ide/pci.h"
> #include "hw/ide/ahci.h"
> #include "hw/usb/usb.h"
> +#include "hw/usb/usb-hcd.h"
> #include "qapi/error.h"
> #include "qemu/error-report.h"
> #include "sysemu/numa.h"
> @@ -68,15 +69,15 @@ struct ehci_companions {
> };
>
> static const struct ehci_companions ich9_1d[] = {
> -    { .name = "ich9-usb-uhci1", .func = 0, .port = 0 },
> -    { .name = "ich9-usb-uhci2", .func = 1, .port = 2 },
> -    { .name = "ich9-usb-uhci3", .func = 2, .port = 4 },
> +    { .name = TYPE_ICH9_USB_UHCI(1), .func = 0, .port = 0 },
> +    { .name = TYPE_ICH9_USB_UHCI(2), .func = 1, .port = 2 },
> +    { .name = TYPE_ICH9_USB_UHCI(3), .func = 2, .port = 4 },
> };
>
> static const struct ehci_companions ich9_1a[] = {
> -    { .name = "ich9-usb-uhci4", .func = 0, .port = 0 },
> -    { .name = "ich9-usb-uhci5", .func = 1, .port = 2 },
> -    { .name = "ich9-usb-uhci6", .func = 2, .port = 4 },
> +    { .name = TYPE_ICH9_USB_UHCI(4), .func = 0, .port = 0 },
> +    { .name = TYPE_ICH9_USB_UHCI(5), .func = 1, .port = 2 },
> +    { .name = TYPE_ICH9_USB_UHCI(6), .func = 2, .port = 4 },
> };
>
> static int ehci_create_ich9_with_companions(PCIBus *bus, int slot)
> diff --git a/hw/isa/piix4.c b/hw/isa/piix4.c
> index f634bcb2d1..e11e5fae21 100644
> --- a/hw/isa/piix4.c
> +++ b/hw/isa/piix4.c
> @@ -29,6 +29,7 @@
> #include "hw/southbridge/piix.h"
> #include "hw/pci/pci.h"
> #include "hw/isa/isa.h"
> +#include "hw/usb/usb-hcd.h"
> #include "hw/sysbus.h"
> #include "hw/intc/i8259.h"
> #include "hw/dma/i8257.h"
> @@ -255,7 +256,7 @@ DeviceState *piix4_create(PCIBus *pci_bus, ISABus **isa_bus, I2CBus **smbus)
>     pci = pci_create_simple(pci_bus, devfn + 1, "piix4-ide");
>     pci_ide_create_devs(pci);
>
> -    pci_create_simple(pci_bus, devfn + 2, "piix4-usb-uhci");
> +    pci_create_simple(pci_bus, devfn + 2, TYPE_PIIX4_USB_UHCI);
>     if (smbus) {
>         *smbus = piix4_pm_init(pci_bus, devfn + 3, 0x1100,
>                                isa_get_irq(NULL, 9), NULL, 0, NULL);
> diff --git a/hw/mips/fuloong2e.c b/hw/mips/fuloong2e.c
> index 8ca31e5162..b6d33dd2cd 100644
> --- a/hw/mips/fuloong2e.c
> +++ b/hw/mips/fuloong2e.c
> @@ -33,6 +33,7 @@
> #include "hw/mips/mips.h"
> #include "hw/mips/cpudevs.h"
> #include "hw/pci/pci.h"
> +#include "hw/usb/usb-hcd.h"
> #include "qemu/log.h"
> #include "hw/loader.h"
> #include "hw/ide/pci.h"
> @@ -258,8 +259,8 @@ static void vt82c686b_southbridge_init(PCIBus *pci_bus, int slot, qemu_irq intc,
>     dev = pci_create_simple(pci_bus, PCI_DEVFN(slot, 1), "via-ide");
>     pci_ide_create_devs(dev);
>
> -    pci_create_simple(pci_bus, PCI_DEVFN(slot, 2), "vt82c686b-usb-uhci");
> -    pci_create_simple(pci_bus, PCI_DEVFN(slot, 3), "vt82c686b-usb-uhci");
> +    pci_create_simple(pci_bus, PCI_DEVFN(slot, 2), TYPE_VT82C686B_USB_UHCI);
> +    pci_create_simple(pci_bus, PCI_DEVFN(slot, 3), TYPE_VT82C686B_USB_UHCI);
>
>     *i2c_bus = vt82c686b_pm_init(pci_bus, PCI_DEVFN(slot, 4), 0xeee1, NULL);
>
> diff --git a/hw/usb/hcd-uhci.c b/hw/usb/hcd-uhci.c
> index 1d4dd33b6c..da078dc3fa 100644
> --- a/hw/usb/hcd-uhci.c
> +++ b/hw/usb/hcd-uhci.c
> @@ -39,6 +39,7 @@
> #include "qemu/main-loop.h"
> #include "qemu/module.h"
> #include "usb-internal.h"
> +#include "hw/usb/usb-hcd.h"
>
> #define FRAME_TIMER_FREQ 1000
>
> @@ -1358,21 +1359,21 @@ static void uhci_data_class_init(ObjectClass *klass, void *data)
>
> static UHCIInfo uhci_info[] = {
>     {
> -        .name       = "piix3-usb-uhci",
> +        .name      = TYPE_PIIX3_USB_UHCI,
>         .vendor_id = PCI_VENDOR_ID_INTEL,
>         .device_id = PCI_DEVICE_ID_INTEL_82371SB_2,
>         .revision  = 0x01,
>         .irq_pin   = 3,
>         .unplug    = true,
>     },{
> -        .name      = "piix4-usb-uhci",
> +        .name      = TYPE_PIIX4_USB_UHCI,
>         .vendor_id = PCI_VENDOR_ID_INTEL,
>         .device_id = PCI_DEVICE_ID_INTEL_82371AB_2,
>         .revision  = 0x01,
>         .irq_pin   = 3,
>         .unplug    = true,
>     },{
> -        .name      = "vt82c686b-usb-uhci",
> +        .name      = TYPE_VT82C686B_USB_UHCI,
>         .vendor_id = PCI_VENDOR_ID_VIA,
>         .device_id = PCI_DEVICE_ID_VIA_UHCI,
>         .revision  = 0x01,
> @@ -1380,42 +1381,42 @@ static UHCIInfo uhci_info[] = {
>         .realize   = usb_uhci_vt82c686b_realize,
>         .unplug    = true,
>     },{
> -        .name      = "ich9-usb-uhci1", /* 00:1d.0 */
> +        .name      = TYPE_ICH9_USB_UHCI(1), /* 00:1d.0 */
>         .vendor_id = PCI_VENDOR_ID_INTEL,
>         .device_id = PCI_DEVICE_ID_INTEL_82801I_UHCI1,
>         .revision  = 0x03,
>         .irq_pin   = 0,
>         .unplug    = false,
>     },{
> -        .name      = "ich9-usb-uhci2", /* 00:1d.1 */
> +        .name      = TYPE_ICH9_USB_UHCI(2), /* 00:1d.1 */
>         .vendor_id = PCI_VENDOR_ID_INTEL,
>         .device_id = PCI_DEVICE_ID_INTEL_82801I_UHCI2,
>         .revision  = 0x03,
>         .irq_pin   = 1,
>         .unplug    = false,
>     },{
> -        .name      = "ich9-usb-uhci3", /* 00:1d.2 */
> +        .name      = TYPE_ICH9_USB_UHCI(3), /* 00:1d.2 */
>         .vendor_id = PCI_VENDOR_ID_INTEL,
>         .device_id = PCI_DEVICE_ID_INTEL_82801I_UHCI3,
>         .revision  = 0x03,
>         .irq_pin   = 2,
>         .unplug    = false,
>     },{
> -        .name      = "ich9-usb-uhci4", /* 00:1a.0 */
> +        .name      = TYPE_ICH9_USB_UHCI(4), /* 00:1a.0 */
>         .vendor_id = PCI_VENDOR_ID_INTEL,
>         .device_id = PCI_DEVICE_ID_INTEL_82801I_UHCI4,
>         .revision  = 0x03,
>         .irq_pin   = 0,
>         .unplug    = false,
>     },{
> -        .name      = "ich9-usb-uhci5", /* 00:1a.1 */
> +        .name      = TYPE_ICH9_USB_UHCI(5), /* 00:1a.1 */
>         .vendor_id = PCI_VENDOR_ID_INTEL,
>         .device_id = PCI_DEVICE_ID_INTEL_82801I_UHCI5,
>         .revision  = 0x03,
>         .irq_pin   = 1,
>         .unplug    = false,
>     },{
> -        .name      = "ich9-usb-uhci6", /* 00:1a.2 */
> +        .name      = TYPE_ICH9_USB_UHCI(6), /* 00:1a.2 */
>         .vendor_id = PCI_VENDOR_ID_INTEL,
>         .device_id = PCI_DEVICE_ID_INTEL_82801I_UHCI6,
>         .revision  = 0x03,
>
--3866299591-481592883-1593883066=:92265--


From xen-devel-bounces@lists.xenproject.org Sat Jul 04 17:20:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 04 Jul 2020 17:20:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jrlpT-0002Mp-W6; Sat, 04 Jul 2020 17:19:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jWGQ=AP=eik.bme.hu=balaton@srs-us1.protection.inumbo.net>)
 id 1jrlpT-0002Mk-CZ
 for xen-devel@lists.xenproject.org; Sat, 04 Jul 2020 17:19:55 +0000
X-Inumbo-ID: 937dab56-be1a-11ea-bb8b-bc764e2007e4
Received: from zero.eik.bme.hu (unknown [2001:738:2001:2001::2001])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 937dab56-be1a-11ea-bb8b-bc764e2007e4;
 Sat, 04 Jul 2020 17:19:54 +0000 (UTC)
Received: from zero.eik.bme.hu (blah.eik.bme.hu [152.66.115.182])
 by localhost (Postfix) with SMTP id B1BEA74632C;
 Sat,  4 Jul 2020 19:19:53 +0200 (CEST)
Received: by zero.eik.bme.hu (Postfix, from userid 432)
 id 8799B745702; Sat,  4 Jul 2020 19:19:53 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by zero.eik.bme.hu (Postfix) with ESMTP id 84B7A7456F8;
 Sat,  4 Jul 2020 19:19:53 +0200 (CEST)
Date: Sat, 4 Jul 2020 19:19:53 +0200 (CEST)
From: BALATON Zoltan <balaton@eik.bme.hu>
To: =?ISO-8859-15?Q?Philippe_Mathieu-Daud=E9?= <f4bug@amsat.org>
Subject: Re: [PATCH 25/26] hw/usb/usb-hcd: Use XHCI type definitions
In-Reply-To: <20200704144943.18292-26-f4bug@amsat.org>
Message-ID: <alpine.BSF.2.22.395.2007041918320.92265@zero.eik.bme.hu>
References: <20200704144943.18292-1-f4bug@amsat.org>
 <20200704144943.18292-26-f4bug@amsat.org>
User-Agent: Alpine 2.22 (BSF 395 2020-01-19)
MIME-Version: 1.0
Content-Type: multipart/mixed;
 boundary="3866299591-100065408-1593883193=:92265"
X-Spam-Checker-Version: Sophos PMX: 6.4.8.2820816,
 Antispam-Engine: 2.7.2.2107409, Antispam-Data: 2020.7.4.171217,
 AntiVirus-Engine: 5.74.0, AntiVirus-Data: 2020.7.3.5740002
X-Spam-Flag: NO
X-Spam-Probability: 9%
X-Spam-Level: 
X-Spam-Status: No, score=9% required=50%
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Peter Maydell <peter.maydell@linaro.org>,
 "Michael S. Tsirkin" <mst@redhat.com>,
 Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>, qemu-devel@nongnu.org,
 Jiaxun Yang <jiaxun.yang@flygoat.com>, Gerd Hoffmann <kraxel@redhat.com>,
 "Edgar E. Iglesias" <edgar.iglesias@gmail.com>,
 Huacai Chen <chenhc@lemote.com>, Stefano Stabellini <sstabellini@kernel.org>,
 xen-devel@lists.xenproject.org, Yoshinori Sato <ysato@users.sourceforge.jp>,
 Paul Durrant <paul@xen.org>, Magnus Damm <magnus.damm@gmail.com>,
 Markus Armbruster <armbru@redhat.com>,
 =?ISO-8859-15?Q?Herv=E9_Poussineau?= <hpoussin@reactos.org>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 Anthony Perard <anthony.perard@citrix.com>,
 Samuel Thibault <samuel.thibault@ens-lyon.org>,
 Leif Lindholm <leif@nuviainc.com>, Andrzej Zaborowski <balrogg@gmail.com>,
 Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
 Eduardo Habkost <ehabkost@redhat.com>,
 Alistair Francis <alistair@alistair23.me>,
 "Dr. David Alan Gilbert" <dgilbert@redhat.com>,
 Beniamino Galvani <b.galvani@gmail.com>,
 Niek Linnenbank <nieklinnenbank@gmail.com>, qemu-arm@nongnu.org,
 =?ISO-8859-15?Q?Marc-Andr=E9_Lureau?= <marcandre.lureau@redhat.com>,
 Richard Henderson <rth@twiddle.net>,
 Radoslaw Biernacki <radoslaw.biernacki@linaro.org>,
 Igor Mitsyanko <i.mitsyanko@gmail.com>,
 =?ISO-8859-15?Q?Philippe_Mathieu-Daud=E9?= <philmd@redhat.com>,
 Paul Zimmerman <pauldzim@gmail.com>, qemu-ppc@nongnu.org,
 David Gibson <david@gibson.dropbear.id.au>,
 Paolo Bonzini <pbonzini@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--3866299591-100065408-1593883193=:92265
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8BIT

On Sat, 4 Jul 2020, Philippe Mathieu-Daudé wrote:
> Various machine/board/soc models create XHCI device instances
> with the generic QDEV API, and don't need to access USB internals.
>
> Simplify header inclusions by moving the QOM type names into a
> simple header, with no need to include other "hw/usb" headers.
>
> Suggested-by: BALATON Zoltan <balaton@eik.bme.hu>
> Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
> ---
> hw/usb/hcd-xhci.h        | 2 +-
> include/hw/usb/usb-hcd.h | 3 +++
> hw/ppc/spapr.c           | 2 +-
> 3 files changed, 5 insertions(+), 2 deletions(-)
>
> diff --git a/hw/usb/hcd-xhci.h b/hw/usb/hcd-xhci.h
> index f9a3aaceec..b6c54e38a6 100644
> --- a/hw/usb/hcd-xhci.h
> +++ b/hw/usb/hcd-xhci.h
> @@ -23,9 +23,9 @@
> #define HW_USB_HCD_XHCI_H
>
> #include "usb-internal.h"
> +#include "hw/usb/usb-hcd.h"
>
> #define TYPE_XHCI "base-xhci"
> -#define TYPE_NEC_XHCI "nec-usb-xhci"
> #define TYPE_QEMU_XHCI "qemu-xhci"

Why is qemu-xhci left here? Should that be moved to the public header too? 
(Maybe no machine adds it directly, but I think that's a public type as well.)

Regards.
BALATON Zoltan

> #define XHCI(obj) \
> diff --git a/include/hw/usb/usb-hcd.h b/include/hw/usb/usb-hcd.h
> index c9d0a88984..56107fca62 100644
> --- a/include/hw/usb/usb-hcd.h
> +++ b/include/hw/usb/usb-hcd.h
> @@ -30,4 +30,7 @@
> #define TYPE_VT82C686B_USB_UHCI     "vt82c686b-usb-uhci"
> #define TYPE_ICH9_USB_UHCI(n)       "ich9-usb-uhci" #n
>
> +/* XHCI */
> +#define TYPE_NEC_XHCI "nec-usb-xhci"
> +
> #endif
> diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
> index db1706a66c..d8b3978f24 100644
> --- a/hw/ppc/spapr.c
> +++ b/hw/ppc/spapr.c
> @@ -2961,7 +2961,7 @@ static void spapr_machine_init(MachineState *machine)
>         if (smc->use_ohci_by_default) {
>             pci_create_simple(phb->bus, -1, TYPE_PCI_OHCI);
>         } else {
> -            pci_create_simple(phb->bus, -1, "nec-usb-xhci");
> +            pci_create_simple(phb->bus, -1, TYPE_NEC_XHCI);
>         }
>
>         if (spapr->has_graphics) {
>
--3866299591-100065408-1593883193=:92265--


From xen-devel-bounces@lists.xenproject.org Sat Jul 04 17:21:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 04 Jul 2020 17:21:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jrlqt-00036F-AR; Sat, 04 Jul 2020 17:21:23 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CV4e=AP=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jrlqs-00036A-FX
 for xen-devel@lists.xenproject.org; Sat, 04 Jul 2020 17:21:22 +0000
X-Inumbo-ID: c7d8129c-be1a-11ea-8b5d-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c7d8129c-be1a-11ea-8b5d-12813bfff9fa;
 Sat, 04 Jul 2020 17:21:22 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=3l0NoXNsII6sujgCCKD+9z1Wi4T1xFr2MEO+BAn5Y4Y=; b=fedtj7s8SLog3GoRKDyHx8nRe
 Wq4T17y3kYXi7pK01d+MAdFYIZNZOQ3p405wTRsD2XFIJqH4x6wPDPfD+OLKglfv2E51FGd5skcmE
 Pg2KLuRD06O+/uTB6xNDKeb9RDUiyWk7SFbImrnZYBUt3VwGYjiBG2ENORqqFvrR7qUPA=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jrlqr-0000z3-Ee; Sat, 04 Jul 2020 17:21:21 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jrlqq-0002XY-Vm; Sat, 04 Jul 2020 17:21:21 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jrlqq-0005Ao-VA; Sat, 04 Jul 2020 17:21:20 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151606-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 151606: tolerable FAIL
X-Osstest-Failures: xen-unstable:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:heisenbug
 xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=be63d9d47f571a60d70f8fb630c03871312d9655
X-Osstest-Versions-That: xen=be63d9d47f571a60d70f8fb630c03871312d9655
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 04 Jul 2020 17:21:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151606 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151606/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl-rtds     16 guest-start/debian.repeat  fail pass in 151586

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 151586
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 151586
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 151586
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 151586
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 151586
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 151586
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 151586
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 151586
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop             fail like 151586
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  be63d9d47f571a60d70f8fb630c03871312d9655
baseline version:
 xen                  be63d9d47f571a60d70f8fb630c03871312d9655

Last test of basis   151606  2020-07-04 03:11:14 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Sat Jul 04 17:23:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 04 Jul 2020 17:23:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jrltK-0003Fr-T8; Sat, 04 Jul 2020 17:23:54 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=CRhY=AP=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jrltJ-0003Fm-1j
 for xen-devel@lists.xenproject.org; Sat, 04 Jul 2020 17:23:53 +0000
X-Inumbo-ID: 2184814a-be1b-11ea-b7bb-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2184814a-be1b-11ea-b7bb-bc764e2007e4;
 Sat, 04 Jul 2020 17:23:52 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=qq732BlxhxVZGENhgbbyibnchqHycZIPp0JFK4P96DE=; b=mvydM5v8ECqmkVKai9zYftjqya
 WeGk9CzgoOp+j6LpksMTme0woYKIevJGByEOlxkg447HueU0Yfzi789coWlg4YR+t1/ZU3EaCrT/D
 3ogmeFDRGoUkyIQ4M2gnF4akAeqSmB+QtcJxUENf6UyQRv7sZg8AjCCrKYONhNZmQTQk=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jrltD-00011y-8B; Sat, 04 Jul 2020 17:23:47 +0000
Received: from 54-240-197-236.amazon.com ([54.240.197.236]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jrltD-0006Ni-06; Sat, 04 Jul 2020 17:23:47 +0000
Subject: Re: [PATCH v4 03/10] tools/libxl: add vmtrace_pt_size parameter
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Jan Beulich <jbeulich@suse.com>
References: <cover.1593519420.git.michal.leszczynski@cert.pl>
 <5f4f4b1afa432258daff43f2dc8119b6a441fff4.1593519420.git.michal.leszczynski@cert.pl>
 <20200702090047.GX735@Air-de-Roger>
 <1505813895.18300396.1593707008144.JavaMail.zimbra@cert.pl>
 <20200703094438.GY735@Air-de-Roger>
 <b5335c2e-da13-28de-002b-e93dd68a0a11@suse.com>
 <20200703101120.GZ735@Air-de-Roger>
From: Julien Grall <julien@xen.org>
Message-ID: <51ecaf40-8fb5-8454-7055-5af33a47152e@xen.org>
Date: Sat, 4 Jul 2020 18:23:44 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:68.0)
 Gecko/20100101 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20200703101120.GZ735@Air-de-Roger>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 tamas lengyel <tamas.lengyel@intel.com>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Micha=c5=82_Leszczy=c5=84ski?= <michal.leszczynski@cert.pl>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, luwei kang <luwei.kang@intel.com>,
 Anthony PERARD <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi,

On 03/07/2020 11:11, Roger Pau Monné wrote:
> On Fri, Jul 03, 2020 at 11:56:38AM +0200, Jan Beulich wrote:
>> On 03.07.2020 11:44, Roger Pau Monné wrote:
>>> On Thu, Jul 02, 2020 at 06:23:28PM +0200, Michał Leszczyński wrote:
>>>> ----- 2 lip 2020 o 11:00, Roger Pau Monné roger.pau@citrix.com napisał(a):
>>>>
>>>>> On Tue, Jun 30, 2020 at 02:33:46PM +0200, Michał Leszczyński wrote:
>>>>>> diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
>>>>>> index 59bdc28c89..7b8289d436 100644
>>>>>> --- a/xen/include/public/domctl.h
>>>>>> +++ b/xen/include/public/domctl.h
>>>>>> @@ -92,6 +92,7 @@ struct xen_domctl_createdomain {
>>>>>>       uint32_t max_evtchn_port;
>>>>>>       int32_t max_grant_frames;
>>>>>>       int32_t max_maptrack_frames;
>>>>>> +    uint8_t vmtrace_pt_order;
>>>>>
>>>>> I've been thinking about this, and even though this is a domctl (so
>>>>> not a stable interface) we might want to consider using a size (or a
>>>>> number of pages) here rather than an order. IPT also supports
>>>>> TOPA mode (kind of a linked list of buffers) that would allow for
>>>>> sizes not rounded to order boundaries to be used, since then only each
>>>>> item in the linked list needs to be rounded to an order boundary, so
>>>>> you could for example use three 4K pages in TOPA mode AFAICT.
>>>>>
>>>>> Roger.
>>>>
>>>> In previous versions it was "size" but it was requested to change it
>>>> to "order" in order to shrink the variable size from uint64_t to
>>>> uint8_t, because there is limited space in the
>>>> xen_domctl_createdomain structure.
>>>
>>> It's likely I'm missing something here, but I wasn't aware
>>> xen_domctl_createdomain had any constraints regarding its size. It's
>>> currently 48 bytes, which seems fairly small.
>>
>> Additionally I would guess a uint32_t could do here, if the value
>> passed was "number of pages" rather than "number of bytes"?
Looking at the rest of the code, the toolstack accepts a 64-bit value. 
So a 32-bit page count would lead to truncation of the buffer if it is 
bigger than 2^44 bytes.

I agree such a buffer is unlikely, yet I still think we want to harden 
the code whenever we can. So the solution is either to check for (and 
reject) truncation in libxl, or to use a 64-bit field directly in the 
domctl.

My preference is the latter.
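For the former option, the libxl-side check could look roughly like the sketch below. This is only an illustration, not the actual libxl code: the helper name and the 32-bit width of the domctl field are assumptions.

```c
#include <errno.h>
#include <stdint.h>

/*
 * Hypothetical helper: reject a 64-bit toolstack value that would be
 * silently truncated when copied into a narrower (here assumed 32-bit)
 * domctl field, instead of letting the high bits be dropped.
 */
static int vmtrace_frames_to_domctl(uint64_t nr_frames, uint32_t *out)
{
    if (nr_frames > UINT32_MAX)
        return -EINVAL;            /* would be truncated: refuse it */
    *out = (uint32_t)nr_frames;    /* safe narrowing after the check */
    return 0;
}
```

The point is simply that the failure becomes an explicit error the caller sees, rather than a silently smaller trace buffer.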

> 
> That could work, not sure if it needs to state however that those will
> be 4K pages, since Arm can have a different minimum page size IIRC?
> (or that's already the assumption for all number of frames fields)
> vmtrace_nr_frames seems fine to me.

The hypercall interface uses the same page granularity as the 
hypervisor (i.e. 4KB).

While we already support guests using 64KB page granularity, it is 
impossible to have a 64KB Arm hypervisor in the current state. You 
would either break existing guests (if you switched the hypercall ABI 
to 64KB page granularity) or render them insecure (the minimum mapping 
in the P2M would be 64KB).

DOMCTLs are not stable yet, so using a number of pages is OK. However, 
I would strongly suggest using a number of bytes for any 
xl/libxl/stable library interfaces, as this avoids confusion and is 
also more future-proof.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Sat Jul 04 17:49:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 04 Jul 2020 17:49:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jrmHl-00053J-Vk; Sat, 04 Jul 2020 17:49:09 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=CRhY=AP=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jrmHk-00053E-Jb
 for xen-devel@lists.xenproject.org; Sat, 04 Jul 2020 17:49:08 +0000
X-Inumbo-ID: a8d19afe-be1e-11ea-bb8b-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a8d19afe-be1e-11ea-bb8b-bc764e2007e4;
 Sat, 04 Jul 2020 17:49:08 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=UzsOzfJWtA+DZlEQDZ036u2bU+tFHOKDcq7PpqV4rE0=; b=s/sbkpY1XwZetchIZSoaYyUKlk
 ni90jcz6wwv6vh3evGWS6xNicuR2Lu13Sa9HHQ+dnsSNoljt2fvduLkkb0/NNAarHMIsBK1qhpUqY
 uGdHD/ZA304xRAt/BKp24hTBHdOprkrKCfuriuoWAJY63h5JTLQJC6TiOgamSHrV1feI=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jrmHd-0001T4-06; Sat, 04 Jul 2020 17:49:01 +0000
Received: from 54-240-197-228.amazon.com ([54.240.197.228]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jrmHc-0007SJ-M9; Sat, 04 Jul 2020 17:49:00 +0000
Subject: Re: [PATCH v4 03/10] tools/libxl: add vmtrace_pt_size parameter
To: =?UTF-8?Q?Micha=c5=82_Leszczy=c5=84ski?= <michal.leszczynski@cert.pl>,
 xen-devel@lists.xenproject.org
References: <cover.1593519420.git.michal.leszczynski@cert.pl>
 <5f4f4b1afa432258daff43f2dc8119b6a441fff4.1593519420.git.michal.leszczynski@cert.pl>
From: Julien Grall <julien@xen.org>
Message-ID: <d427a0da-b178-3db1-ccf7-6cdc64480e84@xen.org>
Date: Sat, 4 Jul 2020 18:48:58 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:68.0)
 Gecko/20100101 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <5f4f4b1afa432258daff43f2dc8119b6a441fff4.1593519420.git.michal.leszczynski@cert.pl>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, tamas.lengyel@intel.com,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Anthony PERARD <anthony.perard@citrix.com>, luwei.kang@intel.com
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi,

On 30/06/2020 13:33, Michał Leszczyński wrote:
> diff --git a/tools/libxl/libxl.h b/tools/libxl/libxl.h
> index 71709dc585..891e8e28d6 100644
> --- a/tools/libxl/libxl.h
> +++ b/tools/libxl/libxl.h
> @@ -438,6 +438,14 @@
>    */
>   #define LIBXL_HAVE_CREATEINFO_PASSTHROUGH 1
>   
> +/*
> + * LIBXL_HAVE_VMTRACE_PT_ORDER indicates that
> + * libxl_domain_create_info has a vmtrace_pt_order parameter, which
> + * allows to enable pre-allocation of processor tracing buffers
> + * with the given order of size.
> + */
> +#define LIBXL_HAVE_VMTRACE_PT_ORDER 1
> +
>   /*
>    * libxl ABI compatibility
>    *
> diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
> index 75862dc6ed..651d1f4c0f 100644
> --- a/tools/libxl/libxl_create.c
> +++ b/tools/libxl/libxl_create.c
> @@ -608,6 +608,7 @@ int libxl__domain_make(libxl__gc *gc, libxl_domain_config *d_config,
>               .max_evtchn_port = b_info->event_channels,
>               .max_grant_frames = b_info->max_grant_frames,
>               .max_maptrack_frames = b_info->max_maptrack_frames,
> +            .vmtrace_pt_order = b_info->vmtrace_pt_order,
>           };
>   
>           if (info->type != LIBXL_DOMAIN_TYPE_PV) {
> diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
> index 9d3f05f399..1c5dd43e4d 100644
> --- a/tools/libxl/libxl_types.idl
> +++ b/tools/libxl/libxl_types.idl
> @@ -645,6 +645,8 @@ libxl_domain_build_info = Struct("domain_build_info",[
>       # supported by x86 HVM and ARM support is planned.
>       ("altp2m", libxl_altp2m_mode),
>   
> +    ("vmtrace_pt_order", integer),

libxl can be used by external projects (such as libvirt) to implement 
their own toolstack.

While on x86 you always have the same granularity, on Arm the 
hypervisor and each guest may use different page granularities (e.g. 
4KB, 16KB, 64KB). So it is unclear which order one would have to use.

I think it would be best if the external user only specified the number 
of bytes. You can then sanity-check the value and convert it to an 
order (or number of pages) in libxl before passing it to the 
hypervisor.
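As a rough illustration of that sanity check and conversion (a sketch only: the helper name, the rejection policy for non-power-of-two sizes, and the 4KB granularity are assumptions, not what libxl actually does):

```c
#include <stdint.h>

#define PT_PAGE_SHIFT 12                   /* assumed 4KB hypervisor pages */
#define PT_PAGE_SIZE  (1ULL << PT_PAGE_SHIFT)

/*
 * Hypothetical helper: validate a byte count supplied by an external
 * libxl user and convert it to a page order.  Rejects zero,
 * non-page-aligned sizes, and sizes that are not a power-of-two
 * number of pages; returns the order on success, -1 on error.
 */
static int vmtrace_bytes_to_order(uint64_t bytes)
{
    uint64_t pages;
    int order = 0;

    if (bytes == 0 || (bytes & (PT_PAGE_SIZE - 1)))
        return -1;                         /* not page-aligned */

    pages = bytes >> PT_PAGE_SHIFT;
    if (pages & (pages - 1))
        return -1;                         /* not a power of two */

    while (pages > 1) {                    /* log2 of the page count */
        pages >>= 1;
        order++;
    }
    return order;
}
```

With this shape, the stable interface stays in bytes and only the libxl-internal conversion has to know the hypervisor's page granularity.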

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Sat Jul 04 18:07:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 04 Jul 2020 18:07:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jrmZU-0006nz-FS; Sat, 04 Jul 2020 18:07:28 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hHao=AP=gmail.com=philippe.mathieu.daude@srs-us1.protection.inumbo.net>)
 id 1jrmZT-0006nu-ST
 for xen-devel@lists.xenproject.org; Sat, 04 Jul 2020 18:07:27 +0000
X-Inumbo-ID: 37b8f670-be21-11ea-bb8b-bc764e2007e4
Received: from mail-wm1-x344.google.com (unknown [2a00:1450:4864:20::344])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 37b8f670-be21-11ea-bb8b-bc764e2007e4;
 Sat, 04 Jul 2020 18:07:27 +0000 (UTC)
Received: by mail-wm1-x344.google.com with SMTP id j18so35045066wmi.3
 for <xen-devel@lists.xenproject.org>; Sat, 04 Jul 2020 11:07:27 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=sender:subject:to:cc:references:from:message-id:date:user-agent
 :mime-version:in-reply-to:content-language:content-transfer-encoding;
 bh=mZdfCBg4VmKOePgd4ZgEELl6kuOuKz4YZpkDO2WK/mg=;
 b=F2Xn7TA1uzAW8ytlKaIyS4FH6AXRlqhnEh7Gf8jIRWpe90/Agn2zEndE5uWg9TPPh2
 Pd+k1nFdIKd+lXbzlYVKgulsRPdNpaouObDppl6bLj9NDRrdIRhocgN+39xEMxQ98cMf
 27x2Ptfr6Coh+Aigug48khlW9MFKrzmZuNnBRyDt7CNNbUZomH0+JTZEgQ/b5OKydcQx
 824lfbGdQS3XigbBcy4nRTe0/gidZZnSa6IlBmoXqY9JM760s1WPVoc9nWLewNsAzsUx
 kH6yf1LuXqBVqa54YPW0eSuyYOLMVB6E2FDl83IBeGRwUHx9Z+eo3n9GLk3WoxviLU5+
 W1DA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:sender:subject:to:cc:references:from:message-id
 :date:user-agent:mime-version:in-reply-to:content-language
 :content-transfer-encoding;
 bh=mZdfCBg4VmKOePgd4ZgEELl6kuOuKz4YZpkDO2WK/mg=;
 b=cAFzKw5CeETBpvb1gPN0EQVzoDUilC1nhhcY6PPweC2fPNS5rb7hq64vzoNId8+phI
 MRxP82C3eE/zKoOM86+KhA5wvzGZP95ebVZJJkHlrp2h+Khivd7w7p2rXSWL3ENj8zCZ
 0mkKhIjyQqrKMw0BHBOA6kLwBo+QzEL0ZiXJcmmXH8tmUYsBbUjKYZvQQcjV1cm1yXKC
 2wWRSrs++HqmWWaO4eBtppN5Y+tXTNH3W3AVsKCC+xvn/MM/0fadfIQk67dWvw/kOVWw
 pGHo51Km45ednCu2t7y+DgJEBklkqxoHX0m7NseWquKxhHiml6LmOLrcCiioku88qomY
 jFmQ==
X-Gm-Message-State: AOAM533SGIxWIv49+a8lhMp1tDQchEneYe2FoBNBtVYeSdwiL7xH7GWw
 8j9esrvt4oQ8B4lBNNt1ItQ=
X-Google-Smtp-Source: ABdhPJzZWrQ0HgNVFoE6cODEZmoDzuftK1C6O0pwc+VU4hGUi2ilcK4PyTG4adTP9B6gXrmR2BdtTw==
X-Received: by 2002:a1c:3546:: with SMTP id c67mr42349957wma.102.1593886046288; 
 Sat, 04 Jul 2020 11:07:26 -0700 (PDT)
Received: from [192.168.1.39] (1.red-83-51-162.dynamicip.rima-tde.net.
 [83.51.162.1])
 by smtp.gmail.com with ESMTPSA id h14sm18374863wrt.36.2020.07.04.11.07.23
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Sat, 04 Jul 2020 11:07:25 -0700 (PDT)
Subject: Re: [PATCH 25/26] hw/usb/usb-hcd: Use XHCI type definitions
To: BALATON Zoltan <balaton@eik.bme.hu>
References: <20200704144943.18292-1-f4bug@amsat.org>
 <20200704144943.18292-26-f4bug@amsat.org>
 <alpine.BSF.2.22.395.2007041918320.92265@zero.eik.bme.hu>
From: =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <f4bug@amsat.org>
Message-ID: <5520c5eb-b80f-4a44-8aa5-7512048482d1@amsat.org>
Date: Sat, 4 Jul 2020 20:07:23 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.5.0
MIME-Version: 1.0
In-Reply-To: <alpine.BSF.2.22.395.2007041918320.92265@zero.eik.bme.hu>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Peter Maydell <peter.maydell@linaro.org>,
 "Michael S. Tsirkin" <mst@redhat.com>,
 Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>, qemu-devel@nongnu.org,
 Jiaxun Yang <jiaxun.yang@flygoat.com>, Gerd Hoffmann <kraxel@redhat.com>,
 "Edgar E. Iglesias" <edgar.iglesias@gmail.com>,
 Huacai Chen <chenhc@lemote.com>, Stefano Stabellini <sstabellini@kernel.org>,
 xen-devel@lists.xenproject.org, Yoshinori Sato <ysato@users.sourceforge.jp>,
 Paul Durrant <paul@xen.org>, Magnus Damm <magnus.damm@gmail.com>,
 Markus Armbruster <armbru@redhat.com>,
 =?UTF-8?Q?Herv=c3=a9_Poussineau?= <hpoussin@reactos.org>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 Anthony Perard <anthony.perard@citrix.com>,
 Samuel Thibault <samuel.thibault@ens-lyon.org>,
 Leif Lindholm <leif@nuviainc.com>, Andrzej Zaborowski <balrogg@gmail.com>,
 Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
 Eduardo Habkost <ehabkost@redhat.com>,
 Alistair Francis <alistair@alistair23.me>,
 "Dr. David Alan Gilbert" <dgilbert@redhat.com>,
 Beniamino Galvani <b.galvani@gmail.com>,
 Niek Linnenbank <nieklinnenbank@gmail.com>, qemu-arm@nongnu.org,
 =?UTF-8?Q?Marc-Andr=c3=a9_Lureau?= <marcandre.lureau@redhat.com>,
 Richard Henderson <rth@twiddle.net>,
 Radoslaw Biernacki <radoslaw.biernacki@linaro.org>,
 Igor Mitsyanko <i.mitsyanko@gmail.com>,
 =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@redhat.com>,
 Paul Zimmerman <pauldzim@gmail.com>, qemu-ppc@nongnu.org,
 David Gibson <david@gibson.dropbear.id.au>,
 Paolo Bonzini <pbonzini@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 7/4/20 7:19 PM, BALATON Zoltan wrote:
> On Sat, 4 Jul 2020, Philippe Mathieu-Daudé wrote:
>> Various machine/board/soc models create XHCI device instances
>> with the generic QDEV API, and don't need to access USB internals.
>>
>> Simplify header inclusions by moving the QOM type names into a
>> simple header, with no need to include other "hw/usb" headers.
>>
>> Suggested-by: BALATON Zoltan <balaton@eik.bme.hu>
>> Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
>> ---
>> hw/usb/hcd-xhci.h        | 2 +-
>> include/hw/usb/usb-hcd.h | 3 +++
>> hw/ppc/spapr.c           | 2 +-
>> 3 files changed, 5 insertions(+), 2 deletions(-)
>>
>> diff --git a/hw/usb/hcd-xhci.h b/hw/usb/hcd-xhci.h
>> index f9a3aaceec..b6c54e38a6 100644
>> --- a/hw/usb/hcd-xhci.h
>> +++ b/hw/usb/hcd-xhci.h
>> @@ -23,9 +23,9 @@
>> #define HW_USB_HCD_XHCI_H
>>
>> #include "usb-internal.h"
>> +#include "hw/usb/usb-hcd.h"
>>
>> #define TYPE_XHCI "base-xhci"
>> -#define TYPE_NEC_XHCI "nec-usb-xhci"
>> #define TYPE_QEMU_XHCI "qemu-xhci"
> 
> Why is qemu-xhci left here? Should that be moved to public header too?
> (Maybe no machine adds it but that's a public type too I think.)

I don't know because I never used it, but I guess you are right.

> 
> Regards.
> BALATON Zoltan
> 
>> #define XHCI(obj) \
>> diff --git a/include/hw/usb/usb-hcd.h b/include/hw/usb/usb-hcd.h
>> index c9d0a88984..56107fca62 100644
>> --- a/include/hw/usb/usb-hcd.h
>> +++ b/include/hw/usb/usb-hcd.h
>> @@ -30,4 +30,7 @@
>> #define TYPE_VT82C686B_USB_UHCI     "vt82c686b-usb-uhci"
>> #define TYPE_ICH9_USB_UHCI(n)       "ich9-usb-uhci" #n
>>
>> +/* XHCI */
>> +#define TYPE_NEC_XHCI "nec-usb-xhci"
>> +
>> #endif
>> diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
>> index db1706a66c..d8b3978f24 100644
>> --- a/hw/ppc/spapr.c
>> +++ b/hw/ppc/spapr.c
>> @@ -2961,7 +2961,7 @@ static void spapr_machine_init(MachineState
>> *machine)
>>         if (smc->use_ohci_by_default) {
>>             pci_create_simple(phb->bus, -1, TYPE_PCI_OHCI);
>>         } else {
>> -            pci_create_simple(phb->bus, -1, "nec-usb-xhci");
>> +            pci_create_simple(phb->bus, -1, TYPE_NEC_XHCI);
>>         }
>>
>>         if (spapr->has_graphics) {
>>


From xen-devel-bounces@lists.xenproject.org Sat Jul 04 18:09:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 04 Jul 2020 18:09:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jrmbf-0006vo-SX; Sat, 04 Jul 2020 18:09:43 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hHao=AP=gmail.com=philippe.mathieu.daude@srs-us1.protection.inumbo.net>)
 id 1jrmbe-0006vi-Ny
 for xen-devel@lists.xenproject.org; Sat, 04 Jul 2020 18:09:42 +0000
X-Inumbo-ID: 87cf9380-be21-11ea-bca7-bc764e2007e4
Received: from mail-wm1-x344.google.com (unknown [2a00:1450:4864:20::344])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 87cf9380-be21-11ea-bca7-bc764e2007e4;
 Sat, 04 Jul 2020 18:09:41 +0000 (UTC)
Received: by mail-wm1-x344.google.com with SMTP id o8so35042700wmh.4
 for <xen-devel@lists.xenproject.org>; Sat, 04 Jul 2020 11:09:41 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=sender:subject:to:cc:references:from:message-id:date:user-agent
 :mime-version:in-reply-to:content-language:content-transfer-encoding;
 bh=26ESjHbgI4BTNqbm3B28vUYqBjitO3tpkPCaUOMj9bA=;
 b=mjQJFUI6aMztNDG3wk/I6lnSW2MW7aYTzbw8VjESugs+xxGssINBebLZzFFWc19OPr
 QSUTSmQNkVGIrwTCsL4FjoH+68hC6bI8Bu9TjoFJtfgQ+ojd/8scxej8aXQOMQQPzTV6
 4SZl9IrZoHaHos6nZzG+AuvhsMVhLKm9f7CmrQHCqW9A1VOL62ay3xmyihw246V5vwhP
 juGXylLSDJ3HYcpmN2QekRlfO3gTk4QXapNlbM3byEIyzGCWd7W1Q5QKZPPW2j0JKBn7
 W/4JYAusxRFcM2RSD/HrsAG8G9eozJBBNKpgfRsKjzLUcXXH4VVQ2nPytwZJJDydSR62
 kyYQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:sender:subject:to:cc:references:from:message-id
 :date:user-agent:mime-version:in-reply-to:content-language
 :content-transfer-encoding;
 bh=26ESjHbgI4BTNqbm3B28vUYqBjitO3tpkPCaUOMj9bA=;
 b=c9uAtNHP2yhy8APOASHR5Rl4xVOu8LY5aEQzt6eqniHQG5TFBQrRWkivNRSuk8rCCy
 WNHDfFYFm89okah+bCcM9ZCZkIU9JprCrXYrPQgy6eKrIRiTO2jk4ARF3fqBwx3NxpX8
 LOmFqQucuO0tGhAuwt2RxOsi5OaarVJpq1jlhutoJ8C+i7F/hMgy67uH19Kvew5zOW5L
 NG1qz/pDLsBvHGi0W3k09tp/dpHbMUSocsvw6q3f+F+zoKFyqZCN9B27BbW/2pZcmTXP
 lHF5s8ePkzENDY6JX/YsK6vg4peAlUHfX3gfRH5b1pDL2SlOKDXEsn0R87pjsluOd1AK
 ekyA==
X-Gm-Message-State: AOAM531KkU4nukNUWFaegmUn5Nx2lYrhetp/fkbgpl98kswmBxvY55/1
 rL2deA8SOCFNSQsY7j/mdw8=
X-Google-Smtp-Source: ABdhPJwzNYUs2VQ8iFHaSSyYEPlKjZV73AzYSO9YQqAb7SjZkDwNyzXJcWkT3bJwzOf1ZQv9R1lWpw==
X-Received: by 2002:a1c:acc2:: with SMTP id v185mr40248743wme.81.1593886180534; 
 Sat, 04 Jul 2020 11:09:40 -0700 (PDT)
Received: from [192.168.1.39] (1.red-83-51-162.dynamicip.rima-tde.net.
 [83.51.162.1])
 by smtp.gmail.com with ESMTPSA id 92sm18722180wrr.96.2020.07.04.11.09.38
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Sat, 04 Jul 2020 11:09:39 -0700 (PDT)
Subject: Re: [PATCH 22/26] hw/usb/usb-hcd: Use OHCI type definitions
To: BALATON Zoltan <balaton@eik.bme.hu>
References: <20200704144943.18292-1-f4bug@amsat.org>
 <20200704144943.18292-23-f4bug@amsat.org>
 <alpine.BSF.2.22.395.2007041909370.92265@zero.eik.bme.hu>
From: =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <f4bug@amsat.org>
Message-ID: <c966ff55-5499-b302-a5a5-2cc601c4a4f6@amsat.org>
Date: Sat, 4 Jul 2020 20:09:37 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.5.0
MIME-Version: 1.0
In-Reply-To: <alpine.BSF.2.22.395.2007041909370.92265@zero.eik.bme.hu>
Content-Type: text/plain; charset=iso-8859-15
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Peter Maydell <peter.maydell@linaro.org>,
 "Michael S. Tsirkin" <mst@redhat.com>,
 Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>, qemu-devel@nongnu.org,
 Jiaxun Yang <jiaxun.yang@flygoat.com>, Gerd Hoffmann <kraxel@redhat.com>,
 "Edgar E. Iglesias" <edgar.iglesias@gmail.com>,
 Huacai Chen <chenhc@lemote.com>, Stefano Stabellini <sstabellini@kernel.org>,
 xen-devel@lists.xenproject.org, Yoshinori Sato <ysato@users.sourceforge.jp>,
 Paul Durrant <paul@xen.org>, Magnus Damm <magnus.damm@gmail.com>,
 Markus Armbruster <armbru@redhat.com>,
 =?UTF-8?Q?Herv=c3=a9_Poussineau?= <hpoussin@reactos.org>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 Anthony Perard <anthony.perard@citrix.com>,
 Samuel Thibault <samuel.thibault@ens-lyon.org>,
 Leif Lindholm <leif@nuviainc.com>, Andrzej Zaborowski <balrogg@gmail.com>,
 Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
 Eduardo Habkost <ehabkost@redhat.com>,
 Alistair Francis <alistair@alistair23.me>,
 "Dr. David Alan Gilbert" <dgilbert@redhat.com>,
 Beniamino Galvani <b.galvani@gmail.com>,
 Niek Linnenbank <nieklinnenbank@gmail.com>, qemu-arm@nongnu.org,
 =?UTF-8?Q?Marc-Andr=c3=a9_Lureau?= <marcandre.lureau@redhat.com>,
 Richard Henderson <rth@twiddle.net>,
 Radoslaw Biernacki <radoslaw.biernacki@linaro.org>,
 Igor Mitsyanko <i.mitsyanko@gmail.com>,
 =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@redhat.com>,
 Paul Zimmerman <pauldzim@gmail.com>, qemu-ppc@nongnu.org,
 David Gibson <david@gibson.dropbear.id.au>,
 Paolo Bonzini <pbonzini@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 7/4/20 7:13 PM, BALATON Zoltan wrote:
> On Sat, 4 Jul 2020, Philippe Mathieu-Daudé wrote:
>> Various machine/board/soc models create OHCI device instances
>> with the generic QDEV API, and don't need to access USB internals.
>>
>> Simplify header inclusions by moving the QOM type names into a
>> simple header, with no need to include other "hw/usb" headers.
>>
>> Suggested-by: BALATON Zoltan <balaton@eik.bme.hu>
>> Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
>> ---
>> hw/usb/hcd-ohci.h | 2 +-
>> include/hw/usb/usb-hcd.h | 16 ++++++++++++++++
> 
> I wonder if we need a new header for this, or whether these could just go
> in the new public hw/usb/usb.h: machines creating a HCD may also add
> devices (like keyboard/mouse), so they will probably need both headers
> anyway, and splitting it up may not be worth it. But I don't really mind,
> either way.

Hmm, the rationale for this choice is: an SoC might only instantiate a
USB HCI via the sysbus/qdev API, without any use of "hw/usb/usb.h". It
is the machine / board that instantiates USB devices and plugs them
into the HCI on the SoC.

I can reword the description to make this clearer.

> 
> For sm501 and sam460ex parts:
> 
> Reviewed-by: BALATON Zoltan <balaton@eik.bme.hu>
> 
> Regards,
> BALATON Zoltan
> 
>> hw/arm/allwinner-a10.c | 2 +-
>> hw/arm/allwinner-h3.c | 9 +++++----
>> hw/arm/pxa2xx.c | 3 ++-
>> hw/arm/realview.c | 3 ++-
>> hw/arm/versatilepb.c | 3 ++-
>> hw/display/sm501.c | 3 ++-
>> hw/ppc/mac_newworld.c | 3 ++-
>> hw/ppc/mac_oldworld.c | 3 ++-
>> hw/ppc/sam460ex.c | 3 ++-
>> hw/ppc/spapr.c | 3 ++-
>> hw/usb/hcd-ohci-pci.c | 2 +-
>> 13 files changed, 40 insertions(+), 15 deletions(-)
>> create mode 100644 include/hw/usb/usb-hcd.h
>>
>> diff --git a/hw/usb/hcd-ohci.h b/hw/usb/hcd-ohci.h
>> index 771927ea17..6949cf0dab 100644
>> --- a/hw/usb/hcd-ohci.h
>> +++ b/hw/usb/hcd-ohci.h
>> @@ -21,6 +21,7 @@
>> #ifndef HCD_OHCI_H
>> #define HCD_OHCI_H
>>
>> +#include "hw/usb/usb-hcd.h"
>> #include "sysemu/dma.h"
>> #include "usb-internal.h"
>>
>> @@ -91,7 +92,6 @@ typedef struct OHCIState {
>>  void (*ohci_die)(struct OHCIState *ohci);
>> } OHCIState;
>>
>> -#define TYPE_SYSBUS_OHCI "sysbus-ohci"
>> #define SYSBUS_OHCI(obj) OBJECT_CHECK(OHCISysBusState, (obj),
>> TYPE_SYSBUS_OHCI)
>>
>> typedef struct {
>> diff --git a/include/hw/usb/usb-hcd.h b/include/hw/usb/usb-hcd.h
>> new file mode 100644
>> index 0000000000..21fdfaf22d
>> --- /dev/null
>> +++ b/include/hw/usb/usb-hcd.h
>> @@ -0,0 +1,16 @@
>> +/*
>> + * QEMU USB HCD types
>> + *
>> + * Copyright (c) 2020 Philippe Mathieu-Daudé <f4bug@amsat.org>
>> + *
>> + * SPDX-License-Identifier: GPL-2.0-or-later
>> + */
>> +
>> +#ifndef HW_USB_HCD_TYPES_H
>> +#define HW_USB_HCD_TYPES_H
>> +
>> +/* OHCI */
>> +#define TYPE_SYSBUS_OHCI "sysbus-ohci"
>> +#define TYPE_PCI_OHCI "pci-ohci"
>> +
>> +#endif
>> diff --git a/hw/arm/allwinner-a10.c b/hw/arm/allwinner-a10.c
>> index 52e0d83760..53c24ff602 100644
>> --- a/hw/arm/allwinner-a10.c
>> +++ b/hw/arm/allwinner-a10.c
>> @@ -25,7 +25,7 @@
>> #include "hw/misc/unimp.h"
>> #include "sysemu/sysemu.h"
>> #include "hw/boards.h"
>> -#include "hw/usb/hcd-ohci.h"
>> +#include "hw/usb/usb-hcd.h"
>>
>> #define AW_A10_MMC0_BASE 0x01c0f000
>> #define AW_A10_PIC_REG_BASE 0x01c20400
>> diff --git a/hw/arm/allwinner-h3.c b/hw/arm/allwinner-h3.c
>> index 8e09468e86..d1d90ffa79 100644
>> --- a/hw/arm/allwinner-h3.c
>> +++ b/hw/arm/allwinner-h3.c
>> @@ -28,6 +28,7 @@
>> #include "hw/sysbus.h"
>> #include "hw/char/serial.h"
>> #include "hw/misc/unimp.h"
>> +#include "hw/usb/usb-hcd.h"
>> #include "hw/usb/hcd-ehci.h"
>> #include "hw/loader.h"
>> #include "sysemu/sysemu.h"
>> @@ -381,16 +382,16 @@ static void allwinner_h3_realize(DeviceState
>> *dev, Error **errp)
>>  qdev_get_gpio_in(DEVICE(&s->gic),
>>  AW_H3_GIC_SPI_EHCI3));
>>
>> - sysbus_create_simple("sysbus-ohci", s->memmap[AW_H3_OHCI0],
>> + sysbus_create_simple(TYPE_SYSBUS_OHCI, s->memmap[AW_H3_OHCI0],
>>  qdev_get_gpio_in(DEVICE(&s->gic),
>>  AW_H3_GIC_SPI_OHCI0));
>> - sysbus_create_simple("sysbus-ohci", s->memmap[AW_H3_OHCI1],
>> + sysbus_create_simple(TYPE_SYSBUS_OHCI, s->memmap[AW_H3_OHCI1],
>>  qdev_get_gpio_in(DEVICE(&s->gic),
>>  AW_H3_GIC_SPI_OHCI1));
>> - sysbus_create_simple("sysbus-ohci", s->memmap[AW_H3_OHCI2],
>> + sysbus_create_simple(TYPE_SYSBUS_OHCI, s->memmap[AW_H3_OHCI2],
>>  qdev_get_gpio_in(DEVICE(&s->gic),
>>  AW_H3_GIC_SPI_OHCI2));
>> - sysbus_create_simple("sysbus-ohci", s->memmap[AW_H3_OHCI3],
>> + sysbus_create_simple(TYPE_SYSBUS_OHCI, s->memmap[AW_H3_OHCI3],
>>  qdev_get_gpio_in(DEVICE(&s->gic),
>>  AW_H3_GIC_SPI_OHCI3));
>>
>> diff --git a/hw/arm/pxa2xx.c b/hw/arm/pxa2xx.c
>> index f104a33463..27196170f5 100644
>> --- a/hw/arm/pxa2xx.c
>> +++ b/hw/arm/pxa2xx.c
>> @@ -18,6 +18,7 @@
>> #include "hw/arm/pxa.h"
>> #include "sysemu/sysemu.h"
>> #include "hw/char/serial.h"
>> +#include "hw/usb/usb-hcd.h"
>> #include "hw/i2c/i2c.h"
>> #include "hw/irq.h"
>> #include "hw/qdev-properties.h"
>> @@ -2196,7 +2197,7 @@ PXA2xxState *pxa270_init(MemoryRegion
>> *address_space,
>>  s->ssp[i] = (SSIBus *)qdev_get_child_bus(dev, "ssi");
>>  }
>>
>> - sysbus_create_simple("sysbus-ohci", 0x4c000000,
>> + sysbus_create_simple(TYPE_SYSBUS_OHCI, 0x4c000000,
>>  qdev_get_gpio_in(s->pic, PXA2XX_PIC_USBH1));
>>
>>  s->pcmcia[0] = pxa2xx_pcmcia_init(address_space, 0x20000000);
>> diff --git a/hw/arm/realview.c b/hw/arm/realview.c
>> index b6c0a1adb9..0aa34bd4c2 100644
>> --- a/hw/arm/realview.c
>> +++ b/hw/arm/realview.c
>> @@ -16,6 +16,7 @@
>> #include "hw/net/lan9118.h"
>> #include "hw/net/smc91c111.h"
>> #include "hw/pci/pci.h"
>> +#include "hw/usb/usb-hcd.h"
>> #include "net/net.h"
>> #include "sysemu/sysemu.h"
>> #include "hw/boards.h"
>> @@ -256,7 +257,7 @@ static void realview_init(MachineState *machine,
>>  sysbus_connect_irq(busdev, 3, pic[51]);
>>  pci_bus = (PCIBus *)qdev_get_child_bus(dev, "pci");
>>  if (machine_usb(machine)) {
>> - pci_create_simple(pci_bus, -1, "pci-ohci");
>> + pci_create_simple(pci_bus, -1, TYPE_PCI_OHCI);
>>  }
>>  n = drive_get_max_bus(IF_SCSI);
>>  while (n >= 0) {
>> diff --git a/hw/arm/versatilepb.c b/hw/arm/versatilepb.c
>> index e596b8170f..3e6224dc96 100644
>> --- a/hw/arm/versatilepb.c
>> +++ b/hw/arm/versatilepb.c
>> @@ -17,6 +17,7 @@
>> #include "net/net.h"
>> #include "sysemu/sysemu.h"
>> #include "hw/pci/pci.h"
>> +#include "hw/usb/usb-hcd.h"
>> #include "hw/i2c/i2c.h"
>> #include "hw/i2c/arm_sbcon_i2c.h"
>> #include "hw/irq.h"
>> @@ -273,7 +274,7 @@ static void versatile_init(MachineState *machine,
>> int board_id)
>>  }
>>  }
>>  if (machine_usb(machine)) {
>> - pci_create_simple(pci_bus, -1, "pci-ohci");
>> + pci_create_simple(pci_bus, -1, TYPE_PCI_OHCI);
>>  }
>>  n = drive_get_max_bus(IF_SCSI);
>>  while (n >= 0) {
>> diff --git a/hw/display/sm501.c b/hw/display/sm501.c
>> index 9cccc68c35..5f076c841f 100644
>> --- a/hw/display/sm501.c
>> +++ b/hw/display/sm501.c
>> @@ -33,6 +33,7 @@
>> #include "hw/sysbus.h"
>> #include "migration/vmstate.h"
>> #include "hw/pci/pci.h"
>> +#include "hw/usb/usb-hcd.h"
>> #include "hw/qdev-properties.h"
>> #include "hw/i2c/i2c.h"
>> #include "hw/display/i2c-ddc.h"
>> @@ -1961,7 +1962,7 @@ static void sm501_realize_sysbus(DeviceState
>> *dev, Error **errp)
>>  sysbus_init_mmio(sbd, &s->state.mmio_region);
>>
>>  /* bridge to usb host emulation module */
>> - usb_dev = qdev_new("sysbus-ohci");
>> + usb_dev = qdev_new(TYPE_SYSBUS_OHCI);
>>  qdev_prop_set_uint32(usb_dev, "num-ports", 2);
>>  qdev_prop_set_uint64(usb_dev, "dma-offset", s->base);
>>  sysbus_realize_and_unref(SYS_BUS_DEVICE(usb_dev), &error_fatal);
>> diff --git a/hw/ppc/mac_newworld.c b/hw/ppc/mac_newworld.c
>> index 7bf69f4a1f..3c32c1831b 100644
>> --- a/hw/ppc/mac_newworld.c
>> +++ b/hw/ppc/mac_newworld.c
>> @@ -55,6 +55,7 @@
>> #include "hw/input/adb.h"
>> #include "hw/ppc/mac_dbdma.h"
>> #include "hw/pci/pci.h"
>> +#include "hw/usb/usb-hcd.h"
>> #include "net/net.h"
>> #include "sysemu/sysemu.h"
>> #include "hw/boards.h"
>> @@ -411,7 +412,7 @@ static void ppc_core99_init(MachineState *machine)
>>  }
>>
>>  if (machine->usb) {
>> - pci_create_simple(pci_bus, -1, "pci-ohci");
>> + pci_create_simple(pci_bus, -1, TYPE_PCI_OHCI);
>>
>>  /* U3 needs to use USB for input because Linux doesn't support
>> via-cuda
>>  on PPC64 */
>> diff --git a/hw/ppc/mac_oldworld.c b/hw/ppc/mac_oldworld.c
>> index f8c204ead7..a429a3e1df 100644
>> --- a/hw/ppc/mac_oldworld.c
>> +++ b/hw/ppc/mac_oldworld.c
>> @@ -37,6 +37,7 @@
>> #include "hw/isa/isa.h"
>> #include "hw/pci/pci.h"
>> #include "hw/pci/pci_host.h"
>> +#include "hw/usb/usb-hcd.h"
>> #include "hw/boards.h"
>> #include "hw/nvram/fw_cfg.h"
>> #include "hw/char/escc.h"
>> @@ -301,7 +302,7 @@ static void ppc_heathrow_init(MachineState *machine)
>>  qdev_realize_and_unref(dev, adb_bus, &error_fatal);
>>
>>  if (machine_usb(machine)) {
>> - pci_create_simple(pci_bus, -1, "pci-ohci");
>> + pci_create_simple(pci_bus, -1, TYPE_PCI_OHCI);
>>  }
>>
>>  if (graphic_depth != 15 && graphic_depth != 32 && graphic_depth != 8)
>> diff --git a/hw/ppc/sam460ex.c b/hw/ppc/sam460ex.c
>> index 781b45e14b..ac60d17a86 100644
>> --- a/hw/ppc/sam460ex.c
>> +++ b/hw/ppc/sam460ex.c
>> @@ -36,6 +36,7 @@
>> #include "hw/i2c/ppc4xx_i2c.h"
>> #include "hw/i2c/smbus_eeprom.h"
>> #include "hw/usb/usb.h"
>> +#include "hw/usb/usb-hcd.h"
>> #include "hw/usb/hcd-ehci.h"
>> #include "hw/ppc/fdt.h"
>> #include "hw/qdev-properties.h"
>> @@ -372,7 +373,7 @@ static void sam460ex_init(MachineState *machine)
>>
>>  /* USB */
>>  sysbus_create_simple(TYPE_PPC4xx_EHCI, 0x4bffd0400, uic[2][29]);
>> - dev = qdev_new("sysbus-ohci");
>> + dev = qdev_new(TYPE_SYSBUS_OHCI);
>>  qdev_prop_set_string(dev, "masterbus", "usb-bus.0");
>>  qdev_prop_set_uint32(dev, "num-ports", 6);
>>  sbdev = SYS_BUS_DEVICE(dev);
>> diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
>> index 0c0409077f..db1706a66c 100644
>> --- a/hw/ppc/spapr.c
>> +++ b/hw/ppc/spapr.c
>> @@ -71,6 +71,7 @@
>> #include "exec/address-spaces.h"
>> #include "exec/ram_addr.h"
>> #include "hw/usb/usb.h"
>> +#include "hw/usb/usb-hcd.h"
>> #include "qemu/config-file.h"
>> #include "qemu/error-report.h"
>> #include "trace.h"
>> @@ -2958,7 +2959,7 @@ static void spapr_machine_init(MachineState
>> *machine)
>>
>>  if (machine->usb) {
>>  if (smc->use_ohci_by_default) {
>> - pci_create_simple(phb->bus, -1, "pci-ohci");
>> + pci_create_simple(phb->bus, -1, TYPE_PCI_OHCI);
>>  } else {
>>  pci_create_simple(phb->bus, -1, "nec-usb-xhci");
>>  }
>> diff --git a/hw/usb/hcd-ohci-pci.c b/hw/usb/hcd-ohci-pci.c
>> index cb6bc55f59..14df83ec2e 100644
>> --- a/hw/usb/hcd-ohci-pci.c
>> +++ b/hw/usb/hcd-ohci-pci.c
>> @@ -29,8 +29,8 @@
>> #include "trace.h"
>> #include "hcd-ohci.h"
>> #include "usb-internal.h"
>> +#include "hw/usb/usb-hcd.h"
>>
>> -#define TYPE_PCI_OHCI "pci-ohci"
>> #define PCI_OHCI(obj) OBJECT_CHECK(OHCIPCIState, (obj), TYPE_PCI_OHCI)
>>
>> typedef struct {
>>


From xen-devel-bounces@lists.xenproject.org Sat Jul 04 18:12:46 2020
Date: Sat, 4 Jul 2020 20:12:35 +0200
From: Philippe Mathieu-Daudé <f4bug@amsat.org>
To: BALATON Zoltan <balaton@eik.bme.hu>
Subject: Re: [PATCH 24/26] hw/usb/usb-hcd: Use UHCI type definitions
Message-ID: <f19dc1c9-8b72-695b-bce1-660e547e5658@amsat.org>
In-Reply-To: <alpine.BSF.2.22.395.2007041916060.92265@zero.eik.bme.hu>
References: <20200704144943.18292-1-f4bug@amsat.org>
 <20200704144943.18292-25-f4bug@amsat.org>
 <alpine.BSF.2.22.395.2007041916060.92265@zero.eik.bme.hu>
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 7/4/20 7:17 PM, BALATON Zoltan wrote:
> On Sat, 4 Jul 2020, Philippe Mathieu-Daudé wrote:
>> Various machine/board/soc models create UHCI device instances
>> with the generic QDEV API, and don't need to access USB internals.
>>
>> Simplify header inclusions by moving the QOM type names into a
>> simple header, with no need to include other "hw/usb" headers.
>>
>> Suggested-by: BALATON Zoltan <balaton@eik.bme.hu>
>> Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
>> ---
>> include/hw/usb/usb-hcd.h |  6 ++++++
>> hw/i386/pc_piix.c        |  3 ++-
>> hw/i386/pc_q35.c         | 13 +++++++------
>> hw/isa/piix4.c           |  3 ++-
>> hw/mips/fuloong2e.c      |  5 +++--
>> hw/usb/hcd-uhci.c        | 19 ++++++++++---------
>> 6 files changed, 30 insertions(+), 19 deletions(-)
>>
>> diff --git a/include/hw/usb/usb-hcd.h b/include/hw/usb/usb-hcd.h
>> index 74af3a4533..c9d0a88984 100644
>> --- a/include/hw/usb/usb-hcd.h
>> +++ b/include/hw/usb/usb-hcd.h
>> @@ -24,4 +24,10 @@
>> #define TYPE_FUSBH200_EHCI          "fusbh200-ehci-usb"
>> #define TYPE_CHIPIDEA               "usb-chipidea"
>>
>> +/* UHCI */
>> +#define TYPE_PIIX3_USB_UHCI         "piix3-usb-uhci"
>> +#define TYPE_PIIX4_USB_UHCI         "piix4-usb-uhci"
>> +#define TYPE_VT82C686B_USB_UHCI     "vt82c686b-usb-uhci"
>> +#define TYPE_ICH9_USB_UHCI(n)       "ich9-usb-uhci" #n
> 
> What is that #n at the end? Looks like a typo. Does it break compilation?

#n is the C preprocessor's stringizing operator: it expands the macro
argument 'n' into the string "n", and the compiler then concatenates the
adjacent string literals, so:

TYPE_ICH9_USB_UHCI(1) = "ich9-usb-uhci" #1
                      = "ich9-usb-uhci" "1"
                      = "ich9-usb-uhci1"

I'm pretty sure we use that elsewhere. If not, I can add a separate
definition for each of the 1 ... 6 type names.

> 
> Regards,
> BALATON Zoltan
> 
>> +
>> #endif
>> diff --git a/hw/i386/pc_piix.c b/hw/i386/pc_piix.c
>> index 4d1de7cfab..0024c346c6 100644
>> --- a/hw/i386/pc_piix.c
>> +++ b/hw/i386/pc_piix.c
>> @@ -37,6 +37,7 @@
>> #include "hw/pci/pci.h"
>> #include "hw/pci/pci_ids.h"
>> #include "hw/usb/usb.h"
>> +#include "hw/usb/usb-hcd.h"
>> #include "net/net.h"
>> #include "hw/ide/pci.h"
>> #include "hw/irq.h"
>> @@ -275,7 +276,7 @@ static void pc_init1(MachineState *machine,
>> #endif
>>
>>     if (pcmc->pci_enabled && machine_usb(machine)) {
>> -        pci_create_simple(pci_bus, piix3_devfn + 2, "piix3-usb-uhci");
>> +        pci_create_simple(pci_bus, piix3_devfn + 2,
>> TYPE_PIIX3_USB_UHCI);
>>     }
>>
>>     if (pcmc->pci_enabled &&
>> x86_machine_is_acpi_enabled(X86_MACHINE(pcms))) {
>> diff --git a/hw/i386/pc_q35.c b/hw/i386/pc_q35.c
>> index b985f5bea1..a80527e6ed 100644
>> --- a/hw/i386/pc_q35.c
>> +++ b/hw/i386/pc_q35.c
>> @@ -51,6 +51,7 @@
>> #include "hw/ide/pci.h"
>> #include "hw/ide/ahci.h"
>> #include "hw/usb/usb.h"
>> +#include "hw/usb/usb-hcd.h"
>> #include "qapi/error.h"
>> #include "qemu/error-report.h"
>> #include "sysemu/numa.h"
>> @@ -68,15 +69,15 @@ struct ehci_companions {
>> };
>>
>> static const struct ehci_companions ich9_1d[] = {
>> -    { .name = "ich9-usb-uhci1", .func = 0, .port = 0 },
>> -    { .name = "ich9-usb-uhci2", .func = 1, .port = 2 },
>> -    { .name = "ich9-usb-uhci3", .func = 2, .port = 4 },
>> +    { .name = TYPE_ICH9_USB_UHCI(1), .func = 0, .port = 0 },
>> +    { .name = TYPE_ICH9_USB_UHCI(2), .func = 1, .port = 2 },
>> +    { .name = TYPE_ICH9_USB_UHCI(3), .func = 2, .port = 4 },
>> };
>>
>> static const struct ehci_companions ich9_1a[] = {
>> -    { .name = "ich9-usb-uhci4", .func = 0, .port = 0 },
>> -    { .name = "ich9-usb-uhci5", .func = 1, .port = 2 },
>> -    { .name = "ich9-usb-uhci6", .func = 2, .port = 4 },
>> +    { .name = TYPE_ICH9_USB_UHCI(4), .func = 0, .port = 0 },
>> +    { .name = TYPE_ICH9_USB_UHCI(5), .func = 1, .port = 2 },
>> +    { .name = TYPE_ICH9_USB_UHCI(6), .func = 2, .port = 4 },
>> };
>>
>> static int ehci_create_ich9_with_companions(PCIBus *bus, int slot)
>> diff --git a/hw/isa/piix4.c b/hw/isa/piix4.c
>> index f634bcb2d1..e11e5fae21 100644
>> --- a/hw/isa/piix4.c
>> +++ b/hw/isa/piix4.c
>> @@ -29,6 +29,7 @@
>> #include "hw/southbridge/piix.h"
>> #include "hw/pci/pci.h"
>> #include "hw/isa/isa.h"
>> +#include "hw/usb/usb-hcd.h"
>> #include "hw/sysbus.h"
>> #include "hw/intc/i8259.h"
>> #include "hw/dma/i8257.h"
>> @@ -255,7 +256,7 @@ DeviceState *piix4_create(PCIBus *pci_bus, ISABus
>> **isa_bus, I2CBus **smbus)
>>     pci = pci_create_simple(pci_bus, devfn + 1, "piix4-ide");
>>     pci_ide_create_devs(pci);
>>
>> -    pci_create_simple(pci_bus, devfn + 2, "piix4-usb-uhci");
>> +    pci_create_simple(pci_bus, devfn + 2, TYPE_PIIX4_USB_UHCI);
>>     if (smbus) {
>>         *smbus = piix4_pm_init(pci_bus, devfn + 3, 0x1100,
>>                                isa_get_irq(NULL, 9), NULL, 0, NULL);
>> diff --git a/hw/mips/fuloong2e.c b/hw/mips/fuloong2e.c
>> index 8ca31e5162..b6d33dd2cd 100644
>> --- a/hw/mips/fuloong2e.c
>> +++ b/hw/mips/fuloong2e.c
>> @@ -33,6 +33,7 @@
>> #include "hw/mips/mips.h"
>> #include "hw/mips/cpudevs.h"
>> #include "hw/pci/pci.h"
>> +#include "hw/usb/usb-hcd.h"
>> #include "qemu/log.h"
>> #include "hw/loader.h"
>> #include "hw/ide/pci.h"
>> @@ -258,8 +259,8 @@ static void vt82c686b_southbridge_init(PCIBus
>> *pci_bus, int slot, qemu_irq intc,
>>     dev = pci_create_simple(pci_bus, PCI_DEVFN(slot, 1), "via-ide");
>>     pci_ide_create_devs(dev);
>>
>> -    pci_create_simple(pci_bus, PCI_DEVFN(slot, 2),
>> "vt82c686b-usb-uhci");
>> -    pci_create_simple(pci_bus, PCI_DEVFN(slot, 3),
>> "vt82c686b-usb-uhci");
>> +    pci_create_simple(pci_bus, PCI_DEVFN(slot, 2),
>> TYPE_VT82C686B_USB_UHCI);
>> +    pci_create_simple(pci_bus, PCI_DEVFN(slot, 3),
>> TYPE_VT82C686B_USB_UHCI);
>>
>>     *i2c_bus = vt82c686b_pm_init(pci_bus, PCI_DEVFN(slot, 4), 0xeee1,
>> NULL);
>>
>> diff --git a/hw/usb/hcd-uhci.c b/hw/usb/hcd-uhci.c
>> index 1d4dd33b6c..da078dc3fa 100644
>> --- a/hw/usb/hcd-uhci.c
>> +++ b/hw/usb/hcd-uhci.c
>> @@ -39,6 +39,7 @@
>> #include "qemu/main-loop.h"
>> #include "qemu/module.h"
>> #include "usb-internal.h"
>> +#include "hw/usb/usb-hcd.h"
>>
>> #define FRAME_TIMER_FREQ 1000
>>
>> @@ -1358,21 +1359,21 @@ static void uhci_data_class_init(ObjectClass
>> *klass, void *data)
>>
>> static UHCIInfo uhci_info[] = {
>>     {
>> -        .name       = "piix3-usb-uhci",
>> +        .name      = TYPE_PIIX3_USB_UHCI,
>>         .vendor_id = PCI_VENDOR_ID_INTEL,
>>         .device_id = PCI_DEVICE_ID_INTEL_82371SB_2,
>>         .revision  = 0x01,
>>         .irq_pin   = 3,
>>         .unplug    = true,
>>     },{
>> -        .name      = "piix4-usb-uhci",
>> +        .name      = TYPE_PIIX4_USB_UHCI,
>>         .vendor_id = PCI_VENDOR_ID_INTEL,
>>         .device_id = PCI_DEVICE_ID_INTEL_82371AB_2,
>>         .revision  = 0x01,
>>         .irq_pin   = 3,
>>         .unplug    = true,
>>     },{
>> -        .name      = "vt82c686b-usb-uhci",
>> +        .name      = TYPE_VT82C686B_USB_UHCI,
>>         .vendor_id = PCI_VENDOR_ID_VIA,
>>         .device_id = PCI_DEVICE_ID_VIA_UHCI,
>>         .revision  = 0x01,
>> @@ -1380,42 +1381,42 @@ static UHCIInfo uhci_info[] = {
>>         .realize   = usb_uhci_vt82c686b_realize,
>>         .unplug    = true,
>>     },{
>> -        .name      = "ich9-usb-uhci1", /* 00:1d.0 */
>> +        .name      = TYPE_ICH9_USB_UHCI(1), /* 00:1d.0 */
>>         .vendor_id = PCI_VENDOR_ID_INTEL,
>>         .device_id = PCI_DEVICE_ID_INTEL_82801I_UHCI1,
>>         .revision  = 0x03,
>>         .irq_pin   = 0,
>>         .unplug    = false,
>>     },{
>> -        .name      = "ich9-usb-uhci2", /* 00:1d.1 */
>> +        .name      = TYPE_ICH9_USB_UHCI(2), /* 00:1d.1 */
>>         .vendor_id = PCI_VENDOR_ID_INTEL,
>>         .device_id = PCI_DEVICE_ID_INTEL_82801I_UHCI2,
>>         .revision  = 0x03,
>>         .irq_pin   = 1,
>>         .unplug    = false,
>>     },{
>> -        .name      = "ich9-usb-uhci3", /* 00:1d.2 */
>> +        .name      = TYPE_ICH9_USB_UHCI(3), /* 00:1d.2 */
>>         .vendor_id = PCI_VENDOR_ID_INTEL,
>>         .device_id = PCI_DEVICE_ID_INTEL_82801I_UHCI3,
>>         .revision  = 0x03,
>>         .irq_pin   = 2,
>>         .unplug    = false,
>>     },{
>> -        .name      = "ich9-usb-uhci4", /* 00:1a.0 */
>> +        .name      = TYPE_ICH9_USB_UHCI(4), /* 00:1a.0 */
>>         .vendor_id = PCI_VENDOR_ID_INTEL,
>>         .device_id = PCI_DEVICE_ID_INTEL_82801I_UHCI4,
>>         .revision  = 0x03,
>>         .irq_pin   = 0,
>>         .unplug    = false,
>>     },{
>> -        .name      = "ich9-usb-uhci5", /* 00:1a.1 */
>> +        .name      = TYPE_ICH9_USB_UHCI(5), /* 00:1a.1 */
>>         .vendor_id = PCI_VENDOR_ID_INTEL,
>>         .device_id = PCI_DEVICE_ID_INTEL_82801I_UHCI5,
>>         .revision  = 0x03,
>>         .irq_pin   = 1,
>>         .unplug    = false,
>>     },{
>> -        .name      = "ich9-usb-uhci6", /* 00:1a.2 */
>> +        .name      = TYPE_ICH9_USB_UHCI(6), /* 00:1a.2 */
>>         .vendor_id = PCI_VENDOR_ID_INTEL,
>>         .device_id = PCI_DEVICE_ID_INTEL_82801I_UHCI6,
>>         .revision  = 0x03,
>>


From xen-devel-bounces@lists.xenproject.org Sat Jul 04 19:17:06 2020
Date: Sat, 4 Jul 2020 21:16:03 +0200 (CEST)
From: Michał Leszczyński <michal.leszczynski@cert.pl>
To: Julien Grall <julien@xen.org>
Subject: Re: [PATCH v4 02/10] x86/vmx: add IPT cpu feature
Message-ID: <271632089.19642398.1593890163210.JavaMail.zimbra@cert.pl>
In-Reply-To: <2e01fca9-efcd-7d09-355f-29245bbc8721@xen.org>
References: <cover.1593519420.git.michal.leszczynski@cert.pl>
 <9a3f4d58-e5ad-c7a1-6c5f-42aa92101ca1@xen.org>
 <d0165fc3-fb05-2e49-eff3-e45a674b00e1@suse.com>
 <7f915146-6566-e5a7-14d2-cb2319838562@xen.org>
 <7ac383c2-0264-cc75-a85b-13c1fdfb0bd6@suse.com>
 <dadeeedd-a9e1-d5f4-4754-8da3f065fd44@xen.org>
 <187614050.18497785.1593721708078.JavaMail.zimbra@cert.pl>
 <2e01fca9-efcd-7d09-355f-29245bbc8721@xen.org>
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

----- On 3 Jul 2020, at 9:58, Julien Grall julien@xen.org wrote:

> Hi,
> 
> On 02/07/2020 21:28, Michał Leszczyński wrote:
>> ----- On 2 Jul 2020, at 16:31, Julien Grall julien@xen.org wrote:
>> 
>>> On 02/07/2020 15:17, Jan Beulich wrote:
>>>> On 02.07.2020 16:14, Julien Grall wrote:
>>>>> On 02/07/2020 14:30, Jan Beulich wrote:
>>>>>> On 02.07.2020 11:57, Julien Grall wrote:
>>>>>>> On 02/07/2020 10:18, Jan Beulich wrote:
>>>>>>>> On 02.07.2020 10:54, Julien Grall wrote:
>>>>>>>>> On 02/07/2020 09:50, Jan Beulich wrote:
>>>>>>>>>> On 02.07.2020 10:42, Julien Grall wrote:
>>>>>>>>>>> On 02/07/2020 09:29, Jan Beulich wrote:
>>>>>>> Another way to do it, would be the toolstack to do the mapping. At
>>>>>>> which point, you still need an hypercall to do the mapping (probably
>>>>>>> the hypercall acquire).
>>>>>>
>>>>>> There may not be any mapping to do in such a contrived, fixed-range
>>>>>> environment. This scenario was specifically to demonstrate that the
>>>>>> way the mapping gets done may be arch-specific (here: a no-op)
>>>>>> despite the allocation not being so.
>>>>> You are arguing on extreme cases which I don't think is really helpful
>>>>> here. Yes if you want to map at a fixed address in a guest you may not
>>>>> need the acquire hypercall. But in most of the other cases (see has for
>>>>> the tools) you will need it.
>>>>>
>>>>> So what's the problem with requesting to have the acquire hypercall
>>>>> implemented in common code?
>>>>
>>>> Didn't we start out by you asking that there be as little common code
>>>> as possible for the time being?
>>>
>>> Well as I said I am not in favor of having the allocation in common
>>> code, but if you want to keep it then you also want to implement
>>> map/unmap in the common code ([1], [2]).
>>>
>>>> I have no issue with putting the
>>>> acquire implementation there ...
>>> This was definitely not clear given how you argued with extreme cases...
>>>
>>> Cheers,
>>>
>>> [1] <9a3f4d58-e5ad-c7a1-6c5f-42aa92101ca1@xen.org>
>>> [2] <cf41855b-9e5e-13f2-9ab0-04b98f8b3cdd@xen.org>
>>>
>>> --
>>> Julien Grall
>> 
>> 
>> Guys,
>> 
>> could you express your final decision on this topic?
> 
> Can you move the acquire implementation from x86 to common code?
> 
> Cheers,
> 
> --
> Julien Grall


Ok, sure. This will be done within the patch v5.

Best regards,
Michał Leszczyński
CERT Polska


From xen-devel-bounces@lists.xenproject.org Sat Jul 04 19:44:30 2020
Date: Sat, 4 Jul 2020 21:44:13 +0200 (CEST)
From: BALATON Zoltan <balaton@eik.bme.hu>
To: Philippe Mathieu-Daudé <f4bug@amsat.org>
Subject: Re: [PATCH 24/26] hw/usb/usb-hcd: Use UHCI type definitions
Message-ID: <alpine.BSF.2.22.395.2007042140380.45095@zero.eik.bme.hu>
In-Reply-To: <f19dc1c9-8b72-695b-bce1-660e547e5658@amsat.org>
References: <20200704144943.18292-1-f4bug@amsat.org>
 <20200704144943.18292-25-f4bug@amsat.org>
 <alpine.BSF.2.22.395.2007041916060.92265@zero.eik.bme.hu>
 <f19dc1c9-8b72-695b-bce1-660e547e5658@amsat.org>
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>


On Sat, 4 Jul 2020, Philippe Mathieu-Daudé wrote:
> On 7/4/20 7:17 PM, BALATON Zoltan wrote:
>> On Sat, 4 Jul 2020, Philippe Mathieu-Daudé wrote:
>>> Various machine/board/soc models create UHCI device instances
>>> with the generic QDEV API, and don't need to access USB internals.
>>>
>>> Simplify header inclusions by moving the QOM type names into a
>>> simple header, with no need to include other "hw/usb" headers.
>>>
>>> Suggested-by: BALATON Zoltan <balaton@eik.bme.hu>
>>> Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
>>> ---
>>> include/hw/usb/usb-hcd.h |  6 ++++++
>>> hw/i386/pc_piix.c        |  3 ++-
>>> hw/i386/pc_q35.c         | 13 +++++++------
>>> hw/isa/piix4.c           |  3 ++-
>>> hw/mips/fuloong2e.c      |  5 +++--
>>> hw/usb/hcd-uhci.c        | 19 ++++++++++---------
>>> 6 files changed, 30 insertions(+), 19 deletions(-)
>>>
>>> diff --git a/include/hw/usb/usb-hcd.h b/include/hw/usb/usb-hcd.h
>>> index 74af3a4533..c9d0a88984 100644
>>> --- a/include/hw/usb/usb-hcd.h
>>> +++ b/include/hw/usb/usb-hcd.h
>>> @@ -24,4 +24,10 @@
>>> #define TYPE_FUSBH200_EHCI          "fusbh200-ehci-usb"
>>> #define TYPE_CHIPIDEA               "usb-chipidea"
>>>
>>> +/* UHCI */
>>> +#define TYPE_PIIX3_USB_UHCI         "piix3-usb-uhci"
>>> +#define TYPE_PIIX4_USB_UHCI         "piix4-usb-uhci"
>>> +#define TYPE_VT82C686B_USB_UHCI     "vt82c686b-usb-uhci"
>>> +#define TYPE_ICH9_USB_UHCI(n)       "ich9-usb-uhci" #n
>>
>> What is that #n at the end? Looks like a typo. Does it break compilation?
>
> #n is a C preprocessor feature that expand the 'n' argument to a "n"
> string, so:
>
> TYPE_ICH9_USB_UHCI(1) = "ich9-usb-uhci" #1
>                      = "ich9-usb-uhci" "1"
>                      = "ich9-usb-uhci1"
>
> I'm pretty sure we use that elsewhere. If not, I can add a definition
> for each 1 ... 6 typenames.

No, it's OK, no need to list all the defines. I just did not notice the
macro argument; that's why I was wondering where it came from. This does
seem to be used elsewhere, at least here:

hw/audio/es1370.c:#define a(n) if (val & CTRL_##n) strcat (buf, " "#n)
hw/audio/es1370.c:#define a(n) if (val & SCTRL_##n) strcat (buf, " "#n)
hw/audio/es1370.c:#define b(n) if (!(val & SCTRL_##n)) strcat (buf, " "#n)

Maybe writing it without the space between the closing quote and the # is 
clearer, as then it looks more like it's part of the value.

Regards,
BALATON Zoltan

>>> +
>>> #endif
>>> diff --git a/hw/i386/pc_piix.c b/hw/i386/pc_piix.c
>>> index 4d1de7cfab..0024c346c6 100644
>>> --- a/hw/i386/pc_piix.c
>>> +++ b/hw/i386/pc_piix.c
>>> @@ -37,6 +37,7 @@
>>> #include "hw/pci/pci.h"
>>> #include "hw/pci/pci_ids.h"
>>> #include "hw/usb/usb.h"
>>> +#include "hw/usb/usb-hcd.h"
>>> #include "net/net.h"
>>> #include "hw/ide/pci.h"
>>> #include "hw/irq.h"
>>> @@ -275,7 +276,7 @@ static void pc_init1(MachineState *machine,
>>> #endif
>>>
>>>     if (pcmc->pci_enabled && machine_usb(machine)) {
>>> -        pci_create_simple(pci_bus, piix3_devfn + 2, "piix3-usb-uhci");
>>> +        pci_create_simple(pci_bus, piix3_devfn + 2,
>>> TYPE_PIIX3_USB_UHCI);
>>>     }
>>>
>>>     if (pcmc->pci_enabled &&
>>> x86_machine_is_acpi_enabled(X86_MACHINE(pcms))) {
>>> diff --git a/hw/i386/pc_q35.c b/hw/i386/pc_q35.c
>>> index b985f5bea1..a80527e6ed 100644
>>> --- a/hw/i386/pc_q35.c
>>> +++ b/hw/i386/pc_q35.c
>>> @@ -51,6 +51,7 @@
>>> #include "hw/ide/pci.h"
>>> #include "hw/ide/ahci.h"
>>> #include "hw/usb/usb.h"
>>> +#include "hw/usb/usb-hcd.h"
>>> #include "qapi/error.h"
>>> #include "qemu/error-report.h"
>>> #include "sysemu/numa.h"
>>> @@ -68,15 +69,15 @@ struct ehci_companions {
>>> };
>>>
>>> static const struct ehci_companions ich9_1d[] = {
>>> -    { .name = "ich9-usb-uhci1", .func = 0, .port = 0 },
>>> -    { .name = "ich9-usb-uhci2", .func = 1, .port = 2 },
>>> -    { .name = "ich9-usb-uhci3", .func = 2, .port = 4 },
>>> +    { .name = TYPE_ICH9_USB_UHCI(1), .func = 0, .port = 0 },
>>> +    { .name = TYPE_ICH9_USB_UHCI(2), .func = 1, .port = 2 },
>>> +    { .name = TYPE_ICH9_USB_UHCI(3), .func = 2, .port = 4 },
>>> };
>>>
>>> static const struct ehci_companions ich9_1a[] = {
>>> -    { .name = "ich9-usb-uhci4", .func = 0, .port = 0 },
>>> -    { .name = "ich9-usb-uhci5", .func = 1, .port = 2 },
>>> -    { .name = "ich9-usb-uhci6", .func = 2, .port = 4 },
>>> +    { .name = TYPE_ICH9_USB_UHCI(4), .func = 0, .port = 0 },
>>> +    { .name = TYPE_ICH9_USB_UHCI(5), .func = 1, .port = 2 },
>>> +    { .name = TYPE_ICH9_USB_UHCI(6), .func = 2, .port = 4 },
>>> };
>>>
>>> static int ehci_create_ich9_with_companions(PCIBus *bus, int slot)
>>> diff --git a/hw/isa/piix4.c b/hw/isa/piix4.c
>>> index f634bcb2d1..e11e5fae21 100644
>>> --- a/hw/isa/piix4.c
>>> +++ b/hw/isa/piix4.c
>>> @@ -29,6 +29,7 @@
>>> #include "hw/southbridge/piix.h"
>>> #include "hw/pci/pci.h"
>>> #include "hw/isa/isa.h"
>>> +#include "hw/usb/usb-hcd.h"
>>> #include "hw/sysbus.h"
>>> #include "hw/intc/i8259.h"
>>> #include "hw/dma/i8257.h"
>>> @@ -255,7 +256,7 @@ DeviceState *piix4_create(PCIBus *pci_bus, ISABus
>>> **isa_bus, I2CBus **smbus)
>>>     pci = pci_create_simple(pci_bus, devfn + 1, "piix4-ide");
>>>     pci_ide_create_devs(pci);
>>>
>>> -    pci_create_simple(pci_bus, devfn + 2, "piix4-usb-uhci");
>>> +    pci_create_simple(pci_bus, devfn + 2, TYPE_PIIX4_USB_UHCI);
>>>     if (smbus) {
>>>         *smbus = piix4_pm_init(pci_bus, devfn + 3, 0x1100,
>>>                                isa_get_irq(NULL, 9), NULL, 0, NULL);
>>> diff --git a/hw/mips/fuloong2e.c b/hw/mips/fuloong2e.c
>>> index 8ca31e5162..b6d33dd2cd 100644
>>> --- a/hw/mips/fuloong2e.c
>>> +++ b/hw/mips/fuloong2e.c
>>> @@ -33,6 +33,7 @@
>>> #include "hw/mips/mips.h"
>>> #include "hw/mips/cpudevs.h"
>>> #include "hw/pci/pci.h"
>>> +#include "hw/usb/usb-hcd.h"
>>> #include "qemu/log.h"
>>> #include "hw/loader.h"
>>> #include "hw/ide/pci.h"
>>> @@ -258,8 +259,8 @@ static void vt82c686b_southbridge_init(PCIBus
>>> *pci_bus, int slot, qemu_irq intc,
>>>     dev = pci_create_simple(pci_bus, PCI_DEVFN(slot, 1), "via-ide");
>>>     pci_ide_create_devs(dev);
>>>
>>> -    pci_create_simple(pci_bus, PCI_DEVFN(slot, 2),
>>> "vt82c686b-usb-uhci");
>>> -    pci_create_simple(pci_bus, PCI_DEVFN(slot, 3),
>>> "vt82c686b-usb-uhci");
>>> +    pci_create_simple(pci_bus, PCI_DEVFN(slot, 2),
>>> TYPE_VT82C686B_USB_UHCI);
>>> +    pci_create_simple(pci_bus, PCI_DEVFN(slot, 3),
>>> TYPE_VT82C686B_USB_UHCI);
>>>
>>>     *i2c_bus = vt82c686b_pm_init(pci_bus, PCI_DEVFN(slot, 4), 0xeee1,
>>> NULL);
>>>
>>> diff --git a/hw/usb/hcd-uhci.c b/hw/usb/hcd-uhci.c
>>> index 1d4dd33b6c..da078dc3fa 100644
>>> --- a/hw/usb/hcd-uhci.c
>>> +++ b/hw/usb/hcd-uhci.c
>>> @@ -39,6 +39,7 @@
>>> #include "qemu/main-loop.h"
>>> #include "qemu/module.h"
>>> #include "usb-internal.h"
>>> +#include "hw/usb/usb-hcd.h"
>>>
>>> #define FRAME_TIMER_FREQ 1000
>>>
>>> @@ -1358,21 +1359,21 @@ static void uhci_data_class_init(ObjectClass
>>> *klass, void *data)
>>>
>>> static UHCIInfo uhci_info[] = {
>>>     {
>>> -        .name       = "piix3-usb-uhci",
>>> +        .name      = TYPE_PIIX3_USB_UHCI,
>>>         .vendor_id = PCI_VENDOR_ID_INTEL,
>>>         .device_id = PCI_DEVICE_ID_INTEL_82371SB_2,
>>>         .revision  = 0x01,
>>>         .irq_pin   = 3,
>>>         .unplug    = true,
>>>     },{
>>> -        .name      = "piix4-usb-uhci",
>>> +        .name      = TYPE_PIIX4_USB_UHCI,
>>>         .vendor_id = PCI_VENDOR_ID_INTEL,
>>>         .device_id = PCI_DEVICE_ID_INTEL_82371AB_2,
>>>         .revision  = 0x01,
>>>         .irq_pin   = 3,
>>>         .unplug    = true,
>>>     },{
>>> -        .name      = "vt82c686b-usb-uhci",
>>> +        .name      = TYPE_VT82C686B_USB_UHCI,
>>>         .vendor_id = PCI_VENDOR_ID_VIA,
>>>         .device_id = PCI_DEVICE_ID_VIA_UHCI,
>>>         .revision  = 0x01,
>>> @@ -1380,42 +1381,42 @@ static UHCIInfo uhci_info[] = {
>>>         .realize   = usb_uhci_vt82c686b_realize,
>>>         .unplug    = true,
>>>     },{
>>> -        .name      = "ich9-usb-uhci1", /* 00:1d.0 */
>>> +        .name      = TYPE_ICH9_USB_UHCI(1), /* 00:1d.0 */
>>>         .vendor_id = PCI_VENDOR_ID_INTEL,
>>>         .device_id = PCI_DEVICE_ID_INTEL_82801I_UHCI1,
>>>         .revision  = 0x03,
>>>         .irq_pin   = 0,
>>>         .unplug    = false,
>>>     },{
>>> -        .name      = "ich9-usb-uhci2", /* 00:1d.1 */
>>> +        .name      = TYPE_ICH9_USB_UHCI(2), /* 00:1d.1 */
>>>         .vendor_id = PCI_VENDOR_ID_INTEL,
>>>         .device_id = PCI_DEVICE_ID_INTEL_82801I_UHCI2,
>>>         .revision  = 0x03,
>>>         .irq_pin   = 1,
>>>         .unplug    = false,
>>>     },{
>>> -        .name      = "ich9-usb-uhci3", /* 00:1d.2 */
>>> +        .name      = TYPE_ICH9_USB_UHCI(3), /* 00:1d.2 */
>>>         .vendor_id = PCI_VENDOR_ID_INTEL,
>>>         .device_id = PCI_DEVICE_ID_INTEL_82801I_UHCI3,
>>>         .revision  = 0x03,
>>>         .irq_pin   = 2,
>>>         .unplug    = false,
>>>     },{
>>> -        .name      = "ich9-usb-uhci4", /* 00:1a.0 */
>>> +        .name      = TYPE_ICH9_USB_UHCI(4), /* 00:1a.0 */
>>>         .vendor_id = PCI_VENDOR_ID_INTEL,
>>>         .device_id = PCI_DEVICE_ID_INTEL_82801I_UHCI4,
>>>         .revision  = 0x03,
>>>         .irq_pin   = 0,
>>>         .unplug    = false,
>>>     },{
>>> -        .name      = "ich9-usb-uhci5", /* 00:1a.1 */
>>> +        .name      = TYPE_ICH9_USB_UHCI(5), /* 00:1a.1 */
>>>         .vendor_id = PCI_VENDOR_ID_INTEL,
>>>         .device_id = PCI_DEVICE_ID_INTEL_82801I_UHCI5,
>>>         .revision  = 0x03,
>>>         .irq_pin   = 1,
>>>         .unplug    = false,
>>>     },{
>>> -        .name      = "ich9-usb-uhci6", /* 00:1a.2 */
>>> +        .name      = TYPE_ICH9_USB_UHCI(6), /* 00:1a.2 */
>>>         .vendor_id = PCI_VENDOR_ID_INTEL,
>>>         .device_id = PCI_DEVICE_ID_INTEL_82801I_UHCI6,
>>>         .revision  = 0x03,
>>>
>
>
--3866299591-28275790-1593891853=:45095--


From xen-devel-bounces@lists.xenproject.org Sat Jul 04 19:49:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 04 Jul 2020 19:49:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jro9u-0007GR-Hg; Sat, 04 Jul 2020 19:49:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hHao=AP=gmail.com=philippe.mathieu.daude@srs-us1.protection.inumbo.net>)
 id 1jro9s-0007GM-K3
 for xen-devel@lists.xenproject.org; Sat, 04 Jul 2020 19:49:08 +0000
X-Inumbo-ID: 6bcc0188-be2f-11ea-bca7-bc764e2007e4
Received: from mail-wr1-x442.google.com (unknown [2a00:1450:4864:20::442])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6bcc0188-be2f-11ea-bca7-bc764e2007e4;
 Sat, 04 Jul 2020 19:49:07 +0000 (UTC)
Received: by mail-wr1-x442.google.com with SMTP id z15so24959791wrl.8
 for <xen-devel@lists.xenproject.org>; Sat, 04 Jul 2020 12:49:07 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=sender:subject:to:cc:references:from:message-id:date:user-agent
 :mime-version:in-reply-to:content-language:content-transfer-encoding;
 bh=uMacwgNeIs+PJI8CGi9IjvobADMieWhRAND6S/GfHto=;
 b=O/9JL3MSirFJkP1c7f4CwBYsLdeVR4ukHFj5e3F/VbhTJ+2aTH5GBMSR6mVKmTUT9m
 Yen25RHReEU+1V2ioaaDyoZQLyRtvPhepxlMh56F187Ivutv7swJUMAg8f+zReYKyTqm
 ZG02BnrlqJ5V/qV8tz8BluIELDj8SYu92OPRi9BqQhJGWLEqoouiFxXWVje9/Hc8vPJr
 sw1e5nzOqrSeqNVjvdgiKZcywSin0hsWPmPAU1l16bS6bFI3uzmmctPU29lsN5ctr0ps
 aSmDFvae2pcKb1gUHwE2FiZ2qKkVyUaM4ogoDaASmfQlC522h+PhcBuCHcuQmTICfxnA
 D9Cg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:sender:subject:to:cc:references:from:message-id
 :date:user-agent:mime-version:in-reply-to:content-language
 :content-transfer-encoding;
 bh=uMacwgNeIs+PJI8CGi9IjvobADMieWhRAND6S/GfHto=;
 b=BZ0ORJwKUs3Nst9DEET42r6oFjhk8x6rIf6YWN971vVO+8G9kKTfwp/yYX4yMB2kge
 LqC+h5/MeO8CJcSskr1OlUxrRq8G2js06zH2VOlzI+4DmxF2v/X/C8/FIzJZhK7FoelZ
 3Hlvgotc7Dj0NBu/5fPvRYMzSnfgBTFKAWGMj0QfHRKbEYqRq1PLH+4IUcq41MD/p6Km
 fDyRwF7d/JCZAC5iEOpXsqEj06Qw7aP3DoDYab7jeTkzg8m7ZdECIYhCyH4G2j9/POBP
 t/PKZLZTQa0OCahRzFuLLYYuUptYBWOZNJRqxxht4FJE5PVH5Vq8pT3CuHRO0iNoIVcf
 AB5g==
X-Gm-Message-State: AOAM5333YkhIOq8C3CdteFsgcwy5tL+lTbLgxhCdfyFnzN3i6o6zcF/3
 QW0niQclaMregBf+OtZZgp0=
X-Google-Smtp-Source: ABdhPJy6stJuxGSFgGXDG4hKdhE14jfpA2BlXMmqb6Kd4GRkyABa7BFAqUywI8FdlapWYYLuG7fqlA==
X-Received: by 2002:adf:9404:: with SMTP id 4mr40799737wrq.367.1593892146468; 
 Sat, 04 Jul 2020 12:49:06 -0700 (PDT)
Received: from [192.168.1.39] (1.red-83-51-162.dynamicip.rima-tde.net.
 [83.51.162.1])
 by smtp.gmail.com with ESMTPSA id r28sm9296432wrr.20.2020.07.04.12.49.04
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Sat, 04 Jul 2020 12:49:05 -0700 (PDT)
Subject: Re: [PATCH 24/26] hw/usb/usb-hcd: Use UHCI type definitions
To: BALATON Zoltan <balaton@eik.bme.hu>
References: <20200704144943.18292-1-f4bug@amsat.org>
 <20200704144943.18292-25-f4bug@amsat.org>
 <alpine.BSF.2.22.395.2007041916060.92265@zero.eik.bme.hu>
 <f19dc1c9-8b72-695b-bce1-660e547e5658@amsat.org>
 <alpine.BSF.2.22.395.2007042140380.45095@zero.eik.bme.hu>
From: =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <f4bug@amsat.org>
Message-ID: <b3f7a188-8927-8044-4e8d-ffba848a45ce@amsat.org>
Date: Sat, 4 Jul 2020 21:49:04 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.5.0
MIME-Version: 1.0
In-Reply-To: <alpine.BSF.2.22.395.2007042140380.45095@zero.eik.bme.hu>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Peter Maydell <peter.maydell@linaro.org>,
 "Michael S. Tsirkin" <mst@redhat.com>, qemu-devel@nongnu.org,
 Jiaxun Yang <jiaxun.yang@flygoat.com>, Gerd Hoffmann <kraxel@redhat.com>,
 Huacai Chen <chenhc@lemote.com>, Eric Blake <eblake@redhat.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Yoshinori Sato <ysato@users.sourceforge.jp>, Paul Durrant <paul@xen.org>,
 Magnus Damm <magnus.damm@gmail.com>, Markus Armbruster <armbru@redhat.com>,
 =?UTF-8?Q?Herv=c3=a9_Poussineau?= <hpoussin@reactos.org>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 Anthony Perard <anthony.perard@citrix.com>,
 Samuel Thibault <samuel.thibault@ens-lyon.org>,
 Leif Lindholm <leif@nuviainc.com>, Andrzej Zaborowski <balrogg@gmail.com>,
 Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
 Eduardo Habkost <ehabkost@redhat.com>,
 Alistair Francis <alistair@alistair23.me>,
 "Dr. David Alan Gilbert" <dgilbert@redhat.com>,
 Beniamino Galvani <b.galvani@gmail.com>,
 =?UTF-8?Q?Marc-Andr=c3=a9_Lureau?= <marcandre.lureau@redhat.com>,
 Niek Linnenbank <nieklinnenbank@gmail.com>, qemu-arm@nongnu.org,
 xen-devel@lists.xenproject.org, Richard Henderson <rth@twiddle.net>,
 Radoslaw Biernacki <radoslaw.biernacki@linaro.org>,
 Igor Mitsyanko <i.mitsyanko@gmail.com>,
 =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@redhat.com>,
 Paul Zimmerman <pauldzim@gmail.com>, qemu-ppc@nongnu.org,
 David Gibson <david@gibson.dropbear.id.au>,
 Paolo Bonzini <pbonzini@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 7/4/20 9:44 PM, BALATON Zoltan wrote:
> On Sat, 4 Jul 2020, Philippe Mathieu-Daudé wrote:
>> On 7/4/20 7:17 PM, BALATON Zoltan wrote:
>>> On Sat, 4 Jul 2020, Philippe Mathieu-Daudé wrote:
>>>> Various machine/board/soc models create UHCI device instances
>>>> with the generic QDEV API, and don't need to access USB internals.
>>>>
>>>> Simplify header inclusions by moving the QOM type names into a
>>>> simple header, with no need to include other "hw/usb" headers.
>>>>
>>>> Suggested-by: BALATON Zoltan <balaton@eik.bme.hu>
>>>> Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
>>>> ---
>>>> include/hw/usb/usb-hcd.h |  6 ++++++
>>>> hw/i386/pc_piix.c        |  3 ++-
>>>> hw/i386/pc_q35.c         | 13 +++++++------
>>>> hw/isa/piix4.c           |  3 ++-
>>>> hw/mips/fuloong2e.c      |  5 +++--
>>>> hw/usb/hcd-uhci.c        | 19 ++++++++++---------
>>>> 6 files changed, 30 insertions(+), 19 deletions(-)
>>>>
>>>> diff --git a/include/hw/usb/usb-hcd.h b/include/hw/usb/usb-hcd.h
>>>> index 74af3a4533..c9d0a88984 100644
>>>> --- a/include/hw/usb/usb-hcd.h
>>>> +++ b/include/hw/usb/usb-hcd.h
>>>> @@ -24,4 +24,10 @@
>>>> #define TYPE_FUSBH200_EHCI          "fusbh200-ehci-usb"
>>>> #define TYPE_CHIPIDEA               "usb-chipidea"
>>>>
>>>> +/* UHCI */
>>>> +#define TYPE_PIIX3_USB_UHCI         "piix3-usb-uhci"
>>>> +#define TYPE_PIIX4_USB_UHCI         "piix4-usb-uhci"
>>>> +#define TYPE_VT82C686B_USB_UHCI     "vt82c686b-usb-uhci"
>>>> +#define TYPE_ICH9_USB_UHCI(n)       "ich9-usb-uhci" #n
>>>
>>> What is that #n at the end? Looks like a typo. Does it break
>>> compilation?
>>
>> #n is a C preprocessor feature that expands the 'n' argument into the "n"
>> string literal, so:
>>
>> TYPE_ICH9_USB_UHCI(1) = "ich9-usb-uhci" #1
>>                      = "ich9-usb-uhci" "1"
>>                      = "ich9-usb-uhci1"
>>
>> I'm pretty sure we use that elsewhere. If not, I can add a definition
>> for each of the 1 ... 6 type names.
> 
> No, it's OK, no need to list all the defines. I just did not notice the
> macro argument; that's why I was wondering where it came from. This seems
> to be used elsewhere, at least here:
> 
> hw/audio/es1370.c:#define a(n) if (val & CTRL_##n) strcat (buf, " "#n)
> hw/audio/es1370.c:#define a(n) if (val & SCTRL_##n) strcat (buf, " "#n)
> hw/audio/es1370.c:#define b(n) if (!(val & SCTRL_##n)) strcat (buf, " "#n)
> 
> Maybe writing it without the space between the closing quote and the # is
> clearer, as then it looks more like it's part of the value.

Ah clever indeed. Thanks!

> 
> Regards,
> BALATON Zoltan
> 
>>>> +
>>>> #endif
>>>> diff --git a/hw/i386/pc_piix.c b/hw/i386/pc_piix.c
>>>> index 4d1de7cfab..0024c346c6 100644
>>>> --- a/hw/i386/pc_piix.c
>>>> +++ b/hw/i386/pc_piix.c
>>>> @@ -37,6 +37,7 @@
>>>> #include "hw/pci/pci.h"
>>>> #include "hw/pci/pci_ids.h"
>>>> #include "hw/usb/usb.h"
>>>> +#include "hw/usb/usb-hcd.h"
>>>> #include "net/net.h"
>>>> #include "hw/ide/pci.h"
>>>> #include "hw/irq.h"
>>>> @@ -275,7 +276,7 @@ static void pc_init1(MachineState *machine,
>>>> #endif
>>>>
>>>>     if (pcmc->pci_enabled && machine_usb(machine)) {
>>>> -        pci_create_simple(pci_bus, piix3_devfn + 2, "piix3-usb-uhci");
>>>> +        pci_create_simple(pci_bus, piix3_devfn + 2,
>>>> TYPE_PIIX3_USB_UHCI);
>>>>     }
>>>>
>>>>     if (pcmc->pci_enabled &&
>>>> x86_machine_is_acpi_enabled(X86_MACHINE(pcms))) {
>>>> diff --git a/hw/i386/pc_q35.c b/hw/i386/pc_q35.c
>>>> index b985f5bea1..a80527e6ed 100644
>>>> --- a/hw/i386/pc_q35.c
>>>> +++ b/hw/i386/pc_q35.c
>>>> @@ -51,6 +51,7 @@
>>>> #include "hw/ide/pci.h"
>>>> #include "hw/ide/ahci.h"
>>>> #include "hw/usb/usb.h"
>>>> +#include "hw/usb/usb-hcd.h"
>>>> #include "qapi/error.h"
>>>> #include "qemu/error-report.h"
>>>> #include "sysemu/numa.h"
>>>> @@ -68,15 +69,15 @@ struct ehci_companions {
>>>> };
>>>>
>>>> static const struct ehci_companions ich9_1d[] = {
>>>> -    { .name = "ich9-usb-uhci1", .func = 0, .port = 0 },
>>>> -    { .name = "ich9-usb-uhci2", .func = 1, .port = 2 },
>>>> -    { .name = "ich9-usb-uhci3", .func = 2, .port = 4 },
>>>> +    { .name = TYPE_ICH9_USB_UHCI(1), .func = 0, .port = 0 },
>>>> +    { .name = TYPE_ICH9_USB_UHCI(2), .func = 1, .port = 2 },
>>>> +    { .name = TYPE_ICH9_USB_UHCI(3), .func = 2, .port = 4 },
>>>> };
>>>>
>>>> static const struct ehci_companions ich9_1a[] = {
>>>> -    { .name = "ich9-usb-uhci4", .func = 0, .port = 0 },
>>>> -    { .name = "ich9-usb-uhci5", .func = 1, .port = 2 },
>>>> -    { .name = "ich9-usb-uhci6", .func = 2, .port = 4 },
>>>> +    { .name = TYPE_ICH9_USB_UHCI(4), .func = 0, .port = 0 },
>>>> +    { .name = TYPE_ICH9_USB_UHCI(5), .func = 1, .port = 2 },
>>>> +    { .name = TYPE_ICH9_USB_UHCI(6), .func = 2, .port = 4 },
>>>> };
>>>>
>>>> static int ehci_create_ich9_with_companions(PCIBus *bus, int slot)
>>>> diff --git a/hw/isa/piix4.c b/hw/isa/piix4.c
>>>> index f634bcb2d1..e11e5fae21 100644
>>>> --- a/hw/isa/piix4.c
>>>> +++ b/hw/isa/piix4.c
>>>> @@ -29,6 +29,7 @@
>>>> #include "hw/southbridge/piix.h"
>>>> #include "hw/pci/pci.h"
>>>> #include "hw/isa/isa.h"
>>>> +#include "hw/usb/usb-hcd.h"
>>>> #include "hw/sysbus.h"
>>>> #include "hw/intc/i8259.h"
>>>> #include "hw/dma/i8257.h"
>>>> @@ -255,7 +256,7 @@ DeviceState *piix4_create(PCIBus *pci_bus, ISABus
>>>> **isa_bus, I2CBus **smbus)
>>>>     pci = pci_create_simple(pci_bus, devfn + 1, "piix4-ide");
>>>>     pci_ide_create_devs(pci);
>>>>
>>>> -    pci_create_simple(pci_bus, devfn + 2, "piix4-usb-uhci");
>>>> +    pci_create_simple(pci_bus, devfn + 2, TYPE_PIIX4_USB_UHCI);
>>>>     if (smbus) {
>>>>         *smbus = piix4_pm_init(pci_bus, devfn + 3, 0x1100,
>>>>                                isa_get_irq(NULL, 9), NULL, 0, NULL);
>>>> diff --git a/hw/mips/fuloong2e.c b/hw/mips/fuloong2e.c
>>>> index 8ca31e5162..b6d33dd2cd 100644
>>>> --- a/hw/mips/fuloong2e.c
>>>> +++ b/hw/mips/fuloong2e.c
>>>> @@ -33,6 +33,7 @@
>>>> #include "hw/mips/mips.h"
>>>> #include "hw/mips/cpudevs.h"
>>>> #include "hw/pci/pci.h"
>>>> +#include "hw/usb/usb-hcd.h"
>>>> #include "qemu/log.h"
>>>> #include "hw/loader.h"
>>>> #include "hw/ide/pci.h"
>>>> @@ -258,8 +259,8 @@ static void vt82c686b_southbridge_init(PCIBus
>>>> *pci_bus, int slot, qemu_irq intc,
>>>>     dev = pci_create_simple(pci_bus, PCI_DEVFN(slot, 1), "via-ide");
>>>>     pci_ide_create_devs(dev);
>>>>
>>>> -    pci_create_simple(pci_bus, PCI_DEVFN(slot, 2),
>>>> "vt82c686b-usb-uhci");
>>>> -    pci_create_simple(pci_bus, PCI_DEVFN(slot, 3),
>>>> "vt82c686b-usb-uhci");
>>>> +    pci_create_simple(pci_bus, PCI_DEVFN(slot, 2),
>>>> TYPE_VT82C686B_USB_UHCI);
>>>> +    pci_create_simple(pci_bus, PCI_DEVFN(slot, 3),
>>>> TYPE_VT82C686B_USB_UHCI);
>>>>
>>>>     *i2c_bus = vt82c686b_pm_init(pci_bus, PCI_DEVFN(slot, 4), 0xeee1,
>>>> NULL);
>>>>
>>>> diff --git a/hw/usb/hcd-uhci.c b/hw/usb/hcd-uhci.c
>>>> index 1d4dd33b6c..da078dc3fa 100644
>>>> --- a/hw/usb/hcd-uhci.c
>>>> +++ b/hw/usb/hcd-uhci.c
>>>> @@ -39,6 +39,7 @@
>>>> #include "qemu/main-loop.h"
>>>> #include "qemu/module.h"
>>>> #include "usb-internal.h"
>>>> +#include "hw/usb/usb-hcd.h"
>>>>
>>>> #define FRAME_TIMER_FREQ 1000
>>>>
>>>> @@ -1358,21 +1359,21 @@ static void uhci_data_class_init(ObjectClass
>>>> *klass, void *data)
>>>>
>>>> static UHCIInfo uhci_info[] = {
>>>>     {
>>>> -        .name       = "piix3-usb-uhci",
>>>> +        .name      = TYPE_PIIX3_USB_UHCI,
>>>>         .vendor_id = PCI_VENDOR_ID_INTEL,
>>>>         .device_id = PCI_DEVICE_ID_INTEL_82371SB_2,
>>>>         .revision  = 0x01,
>>>>         .irq_pin   = 3,
>>>>         .unplug    = true,
>>>>     },{
>>>> -        .name      = "piix4-usb-uhci",
>>>> +        .name      = TYPE_PIIX4_USB_UHCI,
>>>>         .vendor_id = PCI_VENDOR_ID_INTEL,
>>>>         .device_id = PCI_DEVICE_ID_INTEL_82371AB_2,
>>>>         .revision  = 0x01,
>>>>         .irq_pin   = 3,
>>>>         .unplug    = true,
>>>>     },{
>>>> -        .name      = "vt82c686b-usb-uhci",
>>>> +        .name      = TYPE_VT82C686B_USB_UHCI,
>>>>         .vendor_id = PCI_VENDOR_ID_VIA,
>>>>         .device_id = PCI_DEVICE_ID_VIA_UHCI,
>>>>         .revision  = 0x01,
>>>> @@ -1380,42 +1381,42 @@ static UHCIInfo uhci_info[] = {
>>>>         .realize   = usb_uhci_vt82c686b_realize,
>>>>         .unplug    = true,
>>>>     },{
>>>> -        .name      = "ich9-usb-uhci1", /* 00:1d.0 */
>>>> +        .name      = TYPE_ICH9_USB_UHCI(1), /* 00:1d.0 */
>>>>         .vendor_id = PCI_VENDOR_ID_INTEL,
>>>>         .device_id = PCI_DEVICE_ID_INTEL_82801I_UHCI1,
>>>>         .revision  = 0x03,
>>>>         .irq_pin   = 0,
>>>>         .unplug    = false,
>>>>     },{
>>>> -        .name      = "ich9-usb-uhci2", /* 00:1d.1 */
>>>> +        .name      = TYPE_ICH9_USB_UHCI(2), /* 00:1d.1 */
>>>>         .vendor_id = PCI_VENDOR_ID_INTEL,
>>>>         .device_id = PCI_DEVICE_ID_INTEL_82801I_UHCI2,
>>>>         .revision  = 0x03,
>>>>         .irq_pin   = 1,
>>>>         .unplug    = false,
>>>>     },{
>>>> -        .name      = "ich9-usb-uhci3", /* 00:1d.2 */
>>>> +        .name      = TYPE_ICH9_USB_UHCI(3), /* 00:1d.2 */
>>>>         .vendor_id = PCI_VENDOR_ID_INTEL,
>>>>         .device_id = PCI_DEVICE_ID_INTEL_82801I_UHCI3,
>>>>         .revision  = 0x03,
>>>>         .irq_pin   = 2,
>>>>         .unplug    = false,
>>>>     },{
>>>> -        .name      = "ich9-usb-uhci4", /* 00:1a.0 */
>>>> +        .name      = TYPE_ICH9_USB_UHCI(4), /* 00:1a.0 */
>>>>         .vendor_id = PCI_VENDOR_ID_INTEL,
>>>>         .device_id = PCI_DEVICE_ID_INTEL_82801I_UHCI4,
>>>>         .revision  = 0x03,
>>>>         .irq_pin   = 0,
>>>>         .unplug    = false,
>>>>     },{
>>>> -        .name      = "ich9-usb-uhci5", /* 00:1a.1 */
>>>> +        .name      = TYPE_ICH9_USB_UHCI(5), /* 00:1a.1 */
>>>>         .vendor_id = PCI_VENDOR_ID_INTEL,
>>>>         .device_id = PCI_DEVICE_ID_INTEL_82801I_UHCI5,
>>>>         .revision  = 0x03,
>>>>         .irq_pin   = 1,
>>>>         .unplug    = false,
>>>>     },{
>>>> -        .name      = "ich9-usb-uhci6", /* 00:1a.2 */
>>>> +        .name      = TYPE_ICH9_USB_UHCI(6), /* 00:1a.2 */
>>>>         .vendor_id = PCI_VENDOR_ID_INTEL,
>>>>         .device_id = PCI_DEVICE_ID_INTEL_82801I_UHCI6,
>>>>         .revision  = 0x03,
>>>>
>>
>>


From xen-devel-bounces@lists.xenproject.org Sat Jul 04 21:07:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 04 Jul 2020 21:07:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jrpMw-0005UG-GH; Sat, 04 Jul 2020 21:06:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CV4e=AP=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jrpMv-0005UB-48
 for xen-devel@lists.xenproject.org; Sat, 04 Jul 2020 21:06:41 +0000
X-Inumbo-ID: 408a5848-be3a-11ea-bb8b-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 408a5848-be3a-11ea-bb8b-bc764e2007e4;
 Sat, 04 Jul 2020 21:06:39 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=zaGYbzvL5sDnzjZ0qsxkLotszghtG/RCsGWRWOxMXbY=; b=gg37yXC55Lgz0mmmkHtOEW/s6
 YocghE/Vev0MvtBLUk2XFZpChRToFM6KfAcpA2EtcwUHbZBMjM5PNYF1rYN6aJPLMH6zW1420CLhe
 +gk9iXXbxN5OmR5g+TBy2ZLFAd1FHBBBxJW3JSm6V1Bl8YL/70XDK+SRpIdACHYOsK8c0=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jrpMs-0005IP-JV; Sat, 04 Jul 2020 21:06:38 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jrpMs-0002sF-Af; Sat, 04 Jul 2020 21:06:38 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jrpMs-0003FV-A1; Sat, 04 Jul 2020 21:06:38 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151617-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 151617: regressions - FAIL
X-Osstest-Failures: linux-linus:test-amd64-i386-xl-pvshim:xen-boot:fail:regression
 linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-boot:fail:regression
 linux-linus:test-arm64-arm64-libvirt-xsm:guest-start/debian.repeat:fail:regression
 linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:regression
 linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:regression
 linux-linus:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:allowable
 linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
X-Osstest-Versions-This: linux=35e884f89df4c48566d745dc5a97a0d058d04263
X-Osstest-Versions-That: linux=1b5044021070efa3259f3e9548dc35d1eb6aa844
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 04 Jul 2020 21:06:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151617 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151617/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-pvshim     7 xen-boot                 fail REGR. vs. 151214
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-boot          fail REGR. vs. 151214
 test-arm64-arm64-libvirt-xsm 16 guest-start/debian.repeat fail REGR. vs. 151214
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 151214
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 151214

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds    16 guest-start/debian.repeat fail REGR. vs. 151214

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 151214
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 151214
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 151214
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 151214
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 151214
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 151214
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 151214
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 151214
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass

version targeted for testing:
 linux                35e884f89df4c48566d745dc5a97a0d058d04263
baseline version:
 linux                1b5044021070efa3259f3e9548dc35d1eb6aa844

Last test of basis   151214  2020-06-18 02:27:46 Z   16 days
Failing since        151236  2020-06-19 19:10:35 Z   15 days   20 attempts
Testing same since   151617  2020-07-04 10:03:06 Z    0 days    1 attempts

------------------------------------------------------------
551 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 25961 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Jul 05 00:34:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 05 Jul 2020 00:34:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jrsbN-0006Sz-RI; Sun, 05 Jul 2020 00:33:49 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3tmk=AQ=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jrsbM-0006SE-JE
 for xen-devel@lists.xenproject.org; Sun, 05 Jul 2020 00:33:48 +0000
X-Inumbo-ID: 2ae22ced-be57-11ea-8b88-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2ae22ced-be57-11ea-8b88-12813bfff9fa;
 Sun, 05 Jul 2020 00:33:38 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=UkNcQDpVsNhS+dKLVTQGfvDZHMxpHEpXrzxJpnvAq44=; b=OJMnnxFikg34XBsmEPuLpgHVZ
 lABT0UqOQyHsa1HiS7coibdL5Jtxcql1cJOmEyp5PKe7FRBa/X9Cut/SGrWGJwAGz6fXgbW/yOF/Q
 HcfoAi4uO+HPJc+3nTX5lawsSYFsxnfFa8DrYyYOs7JyqUjQk9crwURCqtGYHwzM2wN5o=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jrsbB-0001CR-Iq; Sun, 05 Jul 2020 00:33:37 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jrsbB-0001k7-8Y; Sun, 05 Jul 2020 00:33:37 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jrsbB-0000vT-7P; Sun, 05 Jul 2020 00:33:37 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151622-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 151622: regressions - FAIL
X-Osstest-Failures: qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-amd64-libvirt-vhd:debian-di-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-libvirt-xsm:guest-start:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qcow2:debian-di-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:redhat-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-i386-libvirt-pair:guest-start/debian:fail:regression
 qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-freebsd10-i386:guest-start:fail:regression
 qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:redhat-install:fail:regression
 qemu-mainline:test-amd64-amd64-libvirt:guest-start:fail:regression
 qemu-mainline:test-amd64-amd64-libvirt-xsm:guest-start:fail:regression
 qemu-mainline:test-amd64-i386-libvirt:guest-start:fail:regression
 qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-start:fail:regression
 qemu-mainline:test-amd64-amd64-libvirt-pair:guest-start/debian:fail:regression
 qemu-mainline:test-armhf-armhf-xl-vhd:debian-di-install:fail:regression
 qemu-mainline:test-arm64-arm64-libvirt-xsm:guest-start:fail:regression
 qemu-mainline:test-armhf-armhf-libvirt-raw:debian-di-install:fail:regression
 qemu-mainline:test-armhf-armhf-libvirt:guest-start:fail:regression
 qemu-mainline:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
 qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: qemuu=7b7515702012219410802a168ae4aa45b72a44df
X-Osstest-Versions-That: qemuu=9e3903136d9acde2fb2dd9e967ba928050a6cb4a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 05 Jul 2020 00:33:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151622 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151622/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-ovmf-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-win7-amd64 10 windows-install  fail REGR. vs. 151065
 test-amd64-amd64-libvirt-vhd 10 debian-di-install        fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-win7-amd64 10 windows-install   fail REGR. vs. 151065
 test-amd64-amd64-qemuu-nested-intel 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-libvirt-xsm  12 guest-start              fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-ws16-amd64 10 windows-install  fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-ovmf-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-qemuu-nested-amd 10 debian-hvm-install  fail REGR. vs. 151065
 test-amd64-amd64-xl-qcow2    10 debian-di-install        fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-qemuu-rhel6hvm-amd 10 redhat-install     fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-debianhvm-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-ws16-amd64 10 windows-install   fail REGR. vs. 151065
 test-amd64-i386-libvirt-pair 21 guest-start/debian       fail REGR. vs. 151065
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-freebsd10-i386 11 guest-start            fail REGR. vs. 151065
 test-amd64-i386-qemuu-rhel6hvm-intel 10 redhat-install   fail REGR. vs. 151065
 test-amd64-amd64-libvirt     12 guest-start              fail REGR. vs. 151065
 test-amd64-amd64-libvirt-xsm 12 guest-start              fail REGR. vs. 151065
 test-amd64-i386-libvirt      12 guest-start              fail REGR. vs. 151065
 test-amd64-i386-freebsd10-amd64 11 guest-start           fail REGR. vs. 151065
 test-amd64-amd64-libvirt-pair 21 guest-start/debian      fail REGR. vs. 151065
 test-armhf-armhf-xl-vhd      10 debian-di-install        fail REGR. vs. 151065
 test-arm64-arm64-libvirt-xsm 12 guest-start              fail REGR. vs. 151065
 test-armhf-armhf-libvirt-raw 10 debian-di-install        fail REGR. vs. 151065
 test-armhf-armhf-libvirt     12 guest-start              fail REGR. vs. 151065

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-rtds     16 guest-start/debian.repeat    fail  like 151065
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                7b7515702012219410802a168ae4aa45b72a44df
baseline version:
 qemuu                9e3903136d9acde2fb2dd9e967ba928050a6cb4a

Last test of basis   151065  2020-06-12 22:27:51 Z   22 days
Failing since        151101  2020-06-14 08:32:51 Z   20 days   24 attempts
Testing same since   151622  2020-07-04 12:57:22 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ahmed Karaman <ahmedkhaledkaraman@gmail.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Bulekov <alxndr@bu.edu>
  Alexey Krasikov <alex-krasikov@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Allan Peramaki <aperamak@pp1.inet.fi>
  Andrew Jones <drjones@redhat.com>
  Ani Sinha <ani.sinha@nutanix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Ard Biesheuvel <ardb@kernel.org>
  Artyom Tarasenko <atar4qemu@gmail.com>
  Aurelien Jarno <aurelien@aurel32.net>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bin Meng <bin.meng@windriver.com>
  Cathy Zhang <cathy.zhang@intel.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Claudio Fontana <cfontana@suse.de>
  Cleber Rosa <crosa@redhat.com>
  Colin Xu <colin.xu@intel.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniele Buono <dbuono@linux.vnet.ibm.com>
  David Carlier <devnexen@gmail.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Derek Su <dereksu@qnap.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Emilio G. Cota <cota@braap.org>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Evgeny Yakovlev <eyakovlev@virtuozzo.com>
  fangying <fangying1@huawei.com>
  Farhan Ali <alifm@linux.ibm.com>
  Finn Thain <fthain@telegraphics.com.au>
  Geoffrey McRae <geoff@hostfission.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Greg Kurz <groug@kaod.org>
  Guenter Roeck <linux@roeck-us.net>
  Gustavo Romero <gromero@linux.ibm.com>
  Helge Deller <deller@gmx.de>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Ian Jiang <ianjiang.ict@gmail.com>
  Igor Mammedov <imammedo@redhat.com>
  Janne Grunau <j@jannau.net>
  Jason Wang <jasowang@redhat.com>
  Jay Zhou <jianjay.zhou@huawei.com>
  Jean-Christophe Dubois <jcd@tribudubois.net>
  Jessica Clarke <jrtc27@jrtc27.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jingqi Liu <jingqi.liu@intel.com>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Joseph Myers <joseph@codesourcery.com>
  Julio Faracco <jcfaracco@gmail.com>
  Kevin Wolf <kwolf@redhat.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Leonid Bloch <lb.workbox@gmail.com>
  Leonid Bloch <lbloch@janustech.com>
  Li Qiang <liq3ea@gmail.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  lichun <lichun@ruijie.com.cn>
  Like Xu <like.xu@linux.intel.com>
  Lingfeng Yang <lfy@google.com>
  Liran Alon <liran.alon@oracle.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Lukas Straub <lukasstraub2@web.de>
  Maciej S. Szmigiero <maciej.szmigiero@oracle.com>
  Magnus Damm <magnus.damm@gmail.com>
  Mao Zhongyi <maozhongyi@cmss.chinamobile.com>
  Marcelo Tosatti <mtosatti@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Masahiro Yamada <masahiroy@kernel.org>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Michael S. Tsirkin <mst@redhat.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Raphael Norwitz <raphael.norwitz@nutanix.com>
  Richard Henderson <richard.henderson@linaro.org>
  Richard W.M. Jones <rjones@redhat.com>
  Robert Foley <robert.foley@linaro.org>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Roman Kagan <rkagan@virtuozzo.com>
  Roman Kagan <rvkagan@yandex-team.ru>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Sebastian Rasmussen <sebras@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Tao Xu <tao3.xu@intel.com>
  Thomas Huth <thuth@redhat.com>
  Tong Ho <tong.ho@xilinx.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  WangBowen <bowen.wang@intel.com>
  Wei Huang <wei.huang2@amd.com>
  Wei Wang <wei.w.wang@intel.com>
  Ying Fang <fangying1@huawei.com>
  Yoshinori Sato <ysato@users.sourceforge.jp>
  Yuri Benditovich <yuri.benditovich@daynix.com>
  Zhang Chen <chen.zhang@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 16573 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Jul 05 04:26:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 05 Jul 2020 04:26:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jrwE8-0001RF-6Y; Sun, 05 Jul 2020 04:26:04 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=0p6Q=AP=gmail.com=pauldzim@srs-us1.protection.inumbo.net>)
 id 1jrkyq-0006Fv-7D
 for xen-devel@lists.xenproject.org; Sat, 04 Jul 2020 16:25:32 +0000
X-Inumbo-ID: fab41772-be12-11ea-8496-bc764e2007e4
Received: from mail-io1-xd42.google.com (unknown [2607:f8b0:4864:20::d42])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id fab41772-be12-11ea-8496-bc764e2007e4;
 Sat, 04 Jul 2020 16:25:31 +0000 (UTC)
Received: by mail-io1-xd42.google.com with SMTP id a12so35287479ion.13
 for <xen-devel@lists.xenproject.org>; Sat, 04 Jul 2020 09:25:31 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc; bh=siYZvVhE9VeBBvLQIX/nnAed+rLIHn4bVXcuyvDsWaw=;
 b=pjF0PMXkE9pbfyA44NA0kCnTofn8BML3M2OwLhH+Sqsxxe1sGclQdp8aRABF6oUm7p
 jl3jluIF8Iod+vAQYD9a9T9hizpvwz6WnAjoWUqHG6YxzqwkxwFtal6rAmecurpCeu63
 HodGkRhRHtWuT2Pe9TYLrfl5xbRYOk9LLYdPHTMPdK0DhgKFqlWb+OJKfZYFsG/Mgude
 iMrzKgB1Y747nHlXa24vTJmzsISKSK+ZiOvur0XiEJN8zeFh9UgKrjT9VXbPu7hZm6iB
 4IkItqkdEv9WRqjf/co0WnxmfLXBQ+uTU17T3TFxcuXcGW+OlroDBON4fZDJU7C+YvxT
 s7rA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc;
 bh=siYZvVhE9VeBBvLQIX/nnAed+rLIHn4bVXcuyvDsWaw=;
 b=dmkNKqsKoNEpHw4moA2C5JVOOyJr32Oapac2QpDDyKGH38hobx1JsaZ4dogWOCCM4P
 06C+S914WlmV9AtI8zJi/Ho6AJXmfvkh1KYQg7slvVFsXFEtKSmx0QLhvzrl9qKspre6
 4aG7x4iOwcV4AmBtzUsdEk+zYGyLfUGCISO2OHEwLSXff9cOWMb6+3bzbjGexhMYDIpB
 jTc0iz4UdxycdXrEl1pHrOogY/8be8gdglvdSWTlFhDdbMDkUkeKezDgKjHNus8oSnTB
 S74sbEwwToNcZnusvZXVDCMsSzQgd5RIicProf40wkt81n4tcHmujnaBBKFNybT9j3Qf
 5ZnQ==
X-Gm-Message-State: AOAM533NAIInM78Q4ttdThTn7kuH4uJgpU8jL+WgMoDqiicwMMvGq7Vd
 GFLYfvbCaG96eEop7B5+NxNMpklmdoLYHkzLiqY=
X-Google-Smtp-Source: ABdhPJwZwrn7vzEIJbGYJoQZG3XTTSfTQJEX4yl2ivSAb5nZSnnjPxLr+c9nvkov/DkXTccC1tmUb5GLEFGW3coDSTU=
X-Received: by 2002:a6b:8e56:: with SMTP id q83mr17709001iod.61.1593879930979; 
 Sat, 04 Jul 2020 09:25:30 -0700 (PDT)
MIME-Version: 1.0
References: <20200704144943.18292-1-f4bug@amsat.org>
 <20200704144943.18292-27-f4bug@amsat.org>
In-Reply-To: <20200704144943.18292-27-f4bug@amsat.org>
From: Paul Zimmerman <pauldzim@gmail.com>
Date: Sat, 4 Jul 2020 09:25:20 -0700
Message-ID: <CADBGO7832C0Rw+RbZBRuDAGGtwhk9RV+bHVBHe+EXxLupbqfig@mail.gmail.com>
Subject: Re: [PATCH 26/26] MAINTAINERS: Cover dwc-hsotg (dwc2) USB host
 controller emulation
To: =?UTF-8?Q?Philippe_Mathieu=2DDaud=C3=A9?= <f4bug@amsat.org>
Content-Type: multipart/alternative; boundary="00000000000019cac205a9a017dd"
X-Mailman-Approved-At: Sun, 05 Jul 2020 04:26:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Peter Maydell <peter.maydell@linaro.org>,
 "Michael S. Tsirkin" <mst@redhat.com>,
 Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>, qemu-devel@nongnu.org,
 Jiaxun Yang <jiaxun.yang@flygoat.com>, BALATON Zoltan <balaton@eik.bme.hu>,
 Gerd Hoffmann <kraxel@redhat.com>,
 "Edgar E. Iglesias" <edgar.iglesias@gmail.com>,
 Huacai Chen <chenhc@lemote.com>, Stefano Stabellini <sstabellini@kernel.org>,
 xen-devel@lists.xenproject.org, Yoshinori Sato <ysato@users.sourceforge.jp>,
 Richard Henderson <rth@twiddle.net>, Paul Durrant <paul@xen.org>,
 Magnus Damm <magnus.damm@gmail.com>, Markus Armbruster <armbru@redhat.com>,
 =?UTF-8?Q?Herv=C3=A9_Poussineau?= <hpoussin@reactos.org>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 Anthony Perard <anthony.perard@citrix.com>,
 =?UTF-8?B?TWFyYy1BbmRyw6kgTHVyZWF1?= <marcandre.lureau@redhat.com>,
 Leif Lindholm <leif@nuviainc.com>, Andrzej Zaborowski <balrogg@gmail.com>,
 Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
 Eduardo Habkost <ehabkost@redhat.com>,
 Alistair Francis <alistair@alistair23.me>,
 "Dr. David Alan Gilbert" <dgilbert@redhat.com>,
 Beniamino Galvani <b.galvani@gmail.com>,
 Niek Linnenbank <nieklinnenbank@gmail.com>, qemu-arm@nongnu.org,
 Samuel Thibault <samuel.thibault@ens-lyon.org>,
 David Gibson <david@gibson.dropbear.id.au>,
 Radoslaw Biernacki <radoslaw.biernacki@linaro.org>,
 Igor Mitsyanko <i.mitsyanko@gmail.com>,
 =?UTF-8?Q?Philippe_Mathieu=2DDaud=C3=A9?= <philmd@redhat.com>,
 qemu-ppc@nongnu.org, Paolo Bonzini <pbonzini@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

--00000000000019cac205a9a017dd
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Sat, Jul 4, 2020 at 7:50 AM Philippe Mathieu-Daudé <f4bug@amsat.org>
wrote:

> Add a section for the dwc2 host controller emulation
> introduced in commit 153ef1662c.
>
> Cc: Paul Zimmerman <pauldzim@gmail.com>
> Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
> ---
>  MAINTAINERS | 6 ++++++
>  1 file changed, 6 insertions(+)
>
> diff --git a/MAINTAINERS b/MAINTAINERS
> index 2566566d72..e3f895bc6e 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -1651,6 +1651,12 @@ M: Samuel Thibault <samuel.thibault@ens-lyon.org>
>  S: Maintained
>  F: hw/usb/dev-serial.c
>
> +USB dwc-hsotg (dwc2)
> +M: Gerd Hoffmann <kraxel@redhat.com>
> +R: Paul Zimmerman <pauldzim@gmail.com>
> +S: Maintained
> +F: hw/usb/*dwc2*
> +
>  VFIO
>  M: Alex Williamson <alex.williamson@redhat.com>
>  S: Supported


Acked-by: Paul Zimmerman <pauldzim@gmail.com>


> --
> 2.21.3
>
>


--00000000000019cac205a9a017dd--


From xen-devel-bounces@lists.xenproject.org Sun Jul 05 05:38:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 05 Jul 2020 05:38:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jrxLY-0007LR-Da; Sun, 05 Jul 2020 05:37:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=QJgw=AQ=redhat.com=pbonzini@srs-us1.protection.inumbo.net>)
 id 1jrxLX-0007LM-1N
 for xen-devel@lists.xenproject.org; Sun, 05 Jul 2020 05:37:47 +0000
X-Inumbo-ID: a68ad1b2-be81-11ea-bca7-bc764e2007e4
Received: from us-smtp-delivery-1.mimecast.com (unknown [207.211.31.81])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id a68ad1b2-be81-11ea-bca7-bc764e2007e4;
 Sun, 05 Jul 2020 05:37:44 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
 s=mimecast20190719; t=1593927463;
 h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
 content-transfer-encoding:content-transfer-encoding:
 in-reply-to:in-reply-to:references:references;
 bh=QjMyNamLLkRJJI8ub+Padxcs+ybIZnNdDYQZ7F5eZm8=;
 b=NqZ0p37e8S6v0Tj3zP807s/jgXy1lZ3ZSyxU3IHZLQsl9ZEDP976bhXgYgR14f4id+rSRg
 QTPtqCg9WZhXFpzQG8Y3HRd/C5HrmvZlkPPfhm9wMzwV7lZeSXqD2KP0CMvBVR7pH9dbWx
 ybAfrhG6AJyhe4Y8RELNgQwPiKH4Qlk=
Received: from mail-wr1-f71.google.com (mail-wr1-f71.google.com
 [209.85.221.71]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-417-FLuzGc-SPtSV8t3S2TekiA-1; Sun, 05 Jul 2020 01:37:42 -0400
X-MC-Unique: FLuzGc-SPtSV8t3S2TekiA-1
Received: by mail-wr1-f71.google.com with SMTP id z1so7876984wrn.18
 for <xen-devel@lists.xenproject.org>; Sat, 04 Jul 2020 22:37:41 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:subject:to:cc:references:from:message-id:date
 :user-agent:mime-version:in-reply-to:content-language
 :content-transfer-encoding;
 bh=QjMyNamLLkRJJI8ub+Padxcs+ybIZnNdDYQZ7F5eZm8=;
 b=l1IDz4rnXYjgMbDfC5u0jRNacWNeG8RVsxdqDSKhSd13mfrcPnXnYd2p+Tg6DGZUCy
 TDa0JV3j58f8hvHn2f9blF35PKgfRn2V1ur4CCtRrvLGEVHeX5GuMSP2pBLeOgSxWyah
 eeIzMlyT3yBB2Jya6OoCMc1HidKHp6Cj/WtYUrPWFM9D3lPwIP1OrQ+jYDgl2WeZgiE6
 8R34kPXdUD9vNgU1WfXjXc39jq2L9zG3+T2OMOVq27UvUurx5NB/8VZJjbzDNYCetz9p
 l8SafKW9p/stEu75H9hTHpng/QqE0NQmrN2i6zUF0RpkuZIGefSgds5H0xlznwVO3K2j
 zGow==
X-Gm-Message-State: AOAM533ETFmD1nfA3Xei0ZA3Kopyug8JlRZFnYsMwkcKQNpzs+NYsrQR
 +Id55YA/jZt9iF62xv46knRyX+ZU2E0815O9r93gPfZhcOqYOOejr7nMMlXJNif+wQ7TL3x2wzV
 x7omcn/L8Wur+M8ZC0ni/qWY4vO8=
X-Received: by 2002:adf:ded2:: with SMTP id i18mr43454221wrn.109.1593927461014; 
 Sat, 04 Jul 2020 22:37:41 -0700 (PDT)
X-Google-Smtp-Source: ABdhPJx3gbpxpgGybQ3VzZRMzQ46rw4ptwRvsmRqLomV48jR8zUbdkEyVOboq2NFCpnpK8+MW/QV+A==
X-Received: by 2002:adf:ded2:: with SMTP id i18mr43454210wrn.109.1593927460808; 
 Sat, 04 Jul 2020 22:37:40 -0700 (PDT)
Received: from ?IPv6:2001:b07:6468:f312:adf2:29a0:7689:d40c?
 ([2001:b07:6468:f312:adf2:29a0:7689:d40c])
 by smtp.gmail.com with ESMTPSA id w14sm18980566wrt.55.2020.07.04.22.37.39
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Sat, 04 Jul 2020 22:37:40 -0700 (PDT)
Subject: Re: [PATCH 24/26] hw/usb/usb-hcd: Use UHCI type definitions
To: BALATON Zoltan <balaton@eik.bme.hu>,
 =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <f4bug@amsat.org>
References: <20200704144943.18292-1-f4bug@amsat.org>
 <20200704144943.18292-25-f4bug@amsat.org>
 <alpine.BSF.2.22.395.2007041916060.92265@zero.eik.bme.hu>
 <f19dc1c9-8b72-695b-bce1-660e547e5658@amsat.org>
 <alpine.BSF.2.22.395.2007042140380.45095@zero.eik.bme.hu>
From: Paolo Bonzini <pbonzini@redhat.com>
Message-ID: <ba4dd94b-ec7a-b4ec-4786-c8c5dcd8127f@redhat.com>
Date: Sun, 5 Jul 2020 07:37:38 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.6.0
MIME-Version: 1.0
In-Reply-To: <alpine.BSF.2.22.395.2007042140380.45095@zero.eik.bme.hu>
Content-Language: en-US
Authentication-Results: relay.mimecast.com;
 auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=pbonzini@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Peter Maydell <peter.maydell@linaro.org>,
 "Michael S. Tsirkin" <mst@redhat.com>, qemu-devel@nongnu.org,
 Jiaxun Yang <jiaxun.yang@flygoat.com>, Gerd Hoffmann <kraxel@redhat.com>,
 Huacai Chen <chenhc@lemote.com>, Eric Blake <eblake@redhat.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Yoshinori Sato <ysato@users.sourceforge.jp>, Paul Durrant <paul@xen.org>,
 Magnus Damm <magnus.damm@gmail.com>, Markus Armbruster <armbru@redhat.com>,
 =?UTF-8?Q?Herv=c3=a9_Poussineau?= <hpoussin@reactos.org>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 Anthony Perard <anthony.perard@citrix.com>,
 Samuel Thibault <samuel.thibault@ens-lyon.org>,
 Leif Lindholm <leif@nuviainc.com>, Andrzej Zaborowski <balrogg@gmail.com>,
 Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
 Eduardo Habkost <ehabkost@redhat.com>,
 Alistair Francis <alistair@alistair23.me>,
 "Dr. David Alan Gilbert" <dgilbert@redhat.com>,
 Beniamino Galvani <b.galvani@gmail.com>,
 =?UTF-8?Q?Marc-Andr=c3=a9_Lureau?= <marcandre.lureau@redhat.com>,
 Niek Linnenbank <nieklinnenbank@gmail.com>, qemu-arm@nongnu.org,
 xen-devel@lists.xenproject.org, Richard Henderson <rth@twiddle.net>,
 Radoslaw Biernacki <radoslaw.biernacki@linaro.org>,
 Igor Mitsyanko <i.mitsyanko@gmail.com>,
 =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@redhat.com>,
 Paul Zimmerman <pauldzim@gmail.com>, qemu-ppc@nongnu.org,
 David Gibson <david@gibson.dropbear.id.au>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 04/07/20 21:44, BALATON Zoltan wrote:
> 
> No, it's OK, no need to list all the defines. I just did not notice the
> macro argument, which is why I was wondering where it comes from. This
> seems to be used elsewhere, at least here:
> 
> hw/audio/es1370.c:#define a(n) if (val & CTRL_##n) strcat (buf, " "#n)
> hw/audio/es1370.c:#define a(n) if (val & SCTRL_##n) strcat (buf, " "#n)
> hw/audio/es1370.c:#define b(n) if (!(val & SCTRL_##n)) strcat (buf, " "#n)
> 
> Maybe writing it without the space between " and #n is clearer, as then
> it looks more like it's part of the value.

I think keeping the space is better.

The reason is that CTRL_##n pastes CTRL_ and n together into a single
token, but " "#n is different:

1) First, #n turns n into a string literal, for example "1".

2) Then, because another string literal sits immediately before it, the
two are concatenated by the compiler, as in "CTRL_" "1".

Paolo



From xen-devel-bounces@lists.xenproject.org Sun Jul 05 09:26:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 05 Jul 2020 09:26:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1js0ul-0000rs-PS; Sun, 05 Jul 2020 09:26:23 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=0Cql=AQ=gmail.com=philippe.mathieu.daude@srs-us1.protection.inumbo.net>)
 id 1js0uk-0000rn-9P
 for xen-devel@lists.xenproject.org; Sun, 05 Jul 2020 09:26:22 +0000
X-Inumbo-ID: 9611fd7c-bea1-11ea-bca7-bc764e2007e4
Received: from mail-wm1-x341.google.com (unknown [2a00:1450:4864:20::341])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9611fd7c-bea1-11ea-bca7-bc764e2007e4;
 Sun, 05 Jul 2020 09:26:21 +0000 (UTC)
Received: by mail-wm1-x341.google.com with SMTP id o2so38489997wmh.2
 for <xen-devel@lists.xenproject.org>; Sun, 05 Jul 2020 02:26:20 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=sender:subject:to:cc:references:from:message-id:date:user-agent
 :mime-version:in-reply-to:content-language:content-transfer-encoding;
 bh=V6VxSUE4Bn25txtzYc2pOs5ttoLW2r+gnS/zuul7E4Q=;
 b=O+ekF3GWrxTOzHZ0BXua7IHles3NGTwEiTdtbTwPbSx5SXIYw2TK/E6b5CRrqy0mBR
 ZZWQP+ly3nzbTRQD6TEw9YxQvPZp6hZBAyAQhaf08DrKnyb/X2AUwEiEy0bmncIE6s3e
 ow9/lwSEh1PH7a7iOFpGvRcHV24T9iMoM0H+S0RAMvAyHkE+gIpIs70JuzXFEmrkNT09
 DC/TNnWaKbXa1nicfAMu09rX1cjTEw8izyQFKH/MPKE8/fIc3T9jBVtNECURNS2XWoVN
 k0uFGgXqN+Z3VhAHfMcsz9KWS+dCCBNAtklP3Mo6/p+tK0yrEED2igLLLC91GAtis/+/
 JTYg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:sender:subject:to:cc:references:from:message-id
 :date:user-agent:mime-version:in-reply-to:content-language
 :content-transfer-encoding;
 bh=V6VxSUE4Bn25txtzYc2pOs5ttoLW2r+gnS/zuul7E4Q=;
 b=C5brOw8dV3TOGECVvQrUc3Ju1Ko/9pLU0NPmHw8aqPzu2sKuuOQvochDm+Rt8dl5QN
 7XKIxuYX5V61fvE9ywGlqph3bBFEmJSXKFbJ9OZPmqa0nMxZ0BpBc0yf3ZtZxCsEHiib
 rDn8Qdd3kISkDd/nLZFY/w/l/X5QDXK86506uz/9bIJtmzbiWS4v/BOnEAOevqv5DtQw
 ce33O9uwtO1FqOFzlS5DN49m9J9QATCF43LGSsy2CFXWs6VZqjup0vWRuyNAxsr/mVKT
 fB4OUDY2SaDFP+By9Dg3rizJpp3tQwnk/8GonIAEFY9oaalzfv8ai2GfNSZmtTdtOxqC
 dlDw==
X-Gm-Message-State: AOAM530pvIGGnhWmqvjU7z3p4XlIGDmxJI970FZ724TfEzHxIl5pvmMl
 1GXzpcOeDgiePSuTAobQ9dY=
X-Google-Smtp-Source: ABdhPJwILjiR1W+EkE4Nf1XTSMb7GPG+l/lNl8jSnWMlocBShA2Uu5XgzV4UIQboIpbc4nOwhrDjQA==
X-Received: by 2002:a1c:44e:: with SMTP id 75mr44874861wme.139.1593941180006; 
 Sun, 05 Jul 2020 02:26:20 -0700 (PDT)
Received: from [192.168.1.39] (1.red-83-51-162.dynamicip.rima-tde.net.
 [83.51.162.1])
 by smtp.gmail.com with ESMTPSA id w7sm19483180wmc.32.2020.07.05.02.26.17
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Sun, 05 Jul 2020 02:26:19 -0700 (PDT)
Subject: Re: [PATCH 23/26] hw/usb/usb-hcd: Use EHCI type definitions
To: BALATON Zoltan <balaton@eik.bme.hu>
References: <20200704144943.18292-1-f4bug@amsat.org>
 <20200704144943.18292-24-f4bug@amsat.org>
 <alpine.BSF.2.22.395.2007041914490.92265@zero.eik.bme.hu>
From: =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <f4bug@amsat.org>
Message-ID: <f046b629-7b13-16d2-bb35-b4eda6fbc02b@amsat.org>
Date: Sun, 5 Jul 2020 11:26:17 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.5.0
MIME-Version: 1.0
In-Reply-To: <alpine.BSF.2.22.395.2007041914490.92265@zero.eik.bme.hu>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Peter Maydell <peter.maydell@linaro.org>,
 "Michael S. Tsirkin" <mst@redhat.com>,
 Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>, qemu-devel@nongnu.org,
 Jiaxun Yang <jiaxun.yang@flygoat.com>, Gerd Hoffmann <kraxel@redhat.com>,
 "Edgar E. Iglesias" <edgar.iglesias@gmail.com>,
 Huacai Chen <chenhc@lemote.com>, Stefano Stabellini <sstabellini@kernel.org>,
 xen-devel@lists.xenproject.org, Yoshinori Sato <ysato@users.sourceforge.jp>,
 Paul Durrant <paul@xen.org>, Magnus Damm <magnus.damm@gmail.com>,
 Markus Armbruster <armbru@redhat.com>,
 =?UTF-8?Q?Herv=c3=a9_Poussineau?= <hpoussin@reactos.org>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 Anthony Perard <anthony.perard@citrix.com>,
 Samuel Thibault <samuel.thibault@ens-lyon.org>,
 Leif Lindholm <leif@nuviainc.com>, Andrzej Zaborowski <balrogg@gmail.com>,
 Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
 Eduardo Habkost <ehabkost@redhat.com>,
 Alistair Francis <alistair@alistair23.me>,
 "Dr. David Alan Gilbert" <dgilbert@redhat.com>,
 Beniamino Galvani <b.galvani@gmail.com>,
 Niek Linnenbank <nieklinnenbank@gmail.com>, qemu-arm@nongnu.org,
 =?UTF-8?Q?Marc-Andr=c3=a9_Lureau?= <marcandre.lureau@redhat.com>,
 Richard Henderson <rth@twiddle.net>,
 Radoslaw Biernacki <radoslaw.biernacki@linaro.org>,
 Igor Mitsyanko <i.mitsyanko@gmail.com>,
 =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@redhat.com>,
 Paul Zimmerman <pauldzim@gmail.com>, qemu-ppc@nongnu.org,
 David Gibson <david@gibson.dropbear.id.au>,
 Paolo Bonzini <pbonzini@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 7/4/20 7:15 PM, BALATON Zoltan wrote:
> On Sat, 4 Jul 2020, Philippe Mathieu-Daudé wrote:
>> Various machine/board/soc models create EHCI device instances
>> with the generic QDEV API, and don't need to access USB internals.
>>
>> Simplify header inclusions by moving the QOM type names into a
>> simple header, with no need to include other "hw/usb" headers.
>>
>> Suggested-by: BALATON Zoltan <balaton@eik.bme.hu>
>> Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
>> ---
>> hw/usb/hcd-ehci.h         | 11 +----------
>> include/hw/usb/chipidea.h |  2 +-
>> include/hw/usb/usb-hcd.h  | 11 +++++++++++
>> hw/arm/allwinner-h3.c     |  1 -
>> hw/arm/exynos4210.c       |  2 +-
>> hw/arm/sbsa-ref.c         |  3 ++-
>> hw/arm/xilinx_zynq.c      |  2 +-
>> hw/ppc/sam460ex.c         |  1 -
>> hw/usb/chipidea.c         |  1 +
>> hw/usb/hcd-ehci-sysbus.c  |  1 +
>> 10 files changed, 19 insertions(+), 16 deletions(-)
>>
>> diff --git a/hw/usb/hcd-ehci.h b/hw/usb/hcd-ehci.h
>> index 337b3ad05c..da70767409 100644
>> --- a/hw/usb/hcd-ehci.h
>> +++ b/hw/usb/hcd-ehci.h
>> @@ -23,6 +23,7 @@
>> #include "hw/pci/pci.h"
>> #include "hw/sysbus.h"
>> #include "usb-internal.h"
>> +#include "hw/usb/usb-hcd.h"
>>
>> #define CAPA_SIZE        0x10
>>
>> @@ -316,7 +317,6 @@ void usb_ehci_realize(EHCIState *s, DeviceState
>> *dev, Error **errp);
>> void usb_ehci_unrealize(EHCIState *s, DeviceState *dev);
>> void ehci_reset(void *opaque);
>>
>> -#define TYPE_PCI_EHCI "pci-ehci-usb"
>> #define PCI_EHCI(obj) OBJECT_CHECK(EHCIPCIState, (obj), TYPE_PCI_EHCI)
>>
>> typedef struct EHCIPCIState {
>> @@ -327,15 +327,6 @@ typedef struct EHCIPCIState {
>>     EHCIState ehci;
>> } EHCIPCIState;
>>
>> -
>> -#define TYPE_SYS_BUS_EHCI "sysbus-ehci-usb"
>> -#define TYPE_PLATFORM_EHCI "platform-ehci-usb"
>> -#define TYPE_EXYNOS4210_EHCI "exynos4210-ehci-usb"
>> -#define TYPE_AW_H3_EHCI "aw-h3-ehci-usb"
>> -#define TYPE_TEGRA2_EHCI "tegra2-ehci-usb"
>> -#define TYPE_PPC4xx_EHCI "ppc4xx-ehci-usb"
>> -#define TYPE_FUSBH200_EHCI "fusbh200-ehci-usb"
>> -
>> #define SYS_BUS_EHCI(obj) \
>>     OBJECT_CHECK(EHCISysBusState, (obj), TYPE_SYS_BUS_EHCI)
>> #define SYS_BUS_EHCI_CLASS(class) \
>> diff --git a/include/hw/usb/chipidea.h b/include/hw/usb/chipidea.h
>> index 1ec2e9dbda..28f46291de 100644
>> --- a/include/hw/usb/chipidea.h
>> +++ b/include/hw/usb/chipidea.h
>> @@ -2,6 +2,7 @@
>> #define CHIPIDEA_H
>>
>> #include "hw/usb/hcd-ehci.h"
>> +#include "hw/usb/usb-hcd.h"
>>
>> typedef struct ChipideaState {
>>     /*< private >*/
>> @@ -10,7 +11,6 @@ typedef struct ChipideaState {
>>     MemoryRegion iomem[3];
>> } ChipideaState;
>>
>> -#define TYPE_CHIPIDEA "usb-chipidea"
>> #define CHIPIDEA(obj) OBJECT_CHECK(ChipideaState, (obj), TYPE_CHIPIDEA)
>>
>> #endif /* CHIPIDEA_H */
>> diff --git a/include/hw/usb/usb-hcd.h b/include/hw/usb/usb-hcd.h
>> index 21fdfaf22d..74af3a4533 100644
>> --- a/include/hw/usb/usb-hcd.h
>> +++ b/include/hw/usb/usb-hcd.h
>> @@ -13,4 +13,15 @@
>> #define TYPE_SYSBUS_OHCI            "sysbus-ohci"
>> #define TYPE_PCI_OHCI               "pci-ohci"
>>
>> +/* EHCI */
>> +#define TYPE_SYS_BUS_EHCI           "sysbus-ehci-usb"
>> +#define TYPE_PCI_EHCI               "pci-ehci-usb"
>> +#define TYPE_PLATFORM_EHCI          "platform-ehci-usb"
>> +#define TYPE_EXYNOS4210_EHCI        "exynos4210-ehci-usb"
>> +#define TYPE_AW_H3_EHCI             "aw-h3-ehci-usb"
>> +#define TYPE_TEGRA2_EHCI            "tegra2-ehci-usb"
>> +#define TYPE_PPC4xx_EHCI            "ppc4xx-ehci-usb"
>> +#define TYPE_FUSBH200_EHCI          "fusbh200-ehci-usb"
>> +#define TYPE_CHIPIDEA               "usb-chipidea"
>> +
>> #endif
>> diff --git a/hw/arm/allwinner-h3.c b/hw/arm/allwinner-h3.c
>> index d1d90ffa79..8b7adddc27 100644
>> --- a/hw/arm/allwinner-h3.c
>> +++ b/hw/arm/allwinner-h3.c
>> @@ -29,7 +29,6 @@
>> #include "hw/char/serial.h"
>> #include "hw/misc/unimp.h"
>> #include "hw/usb/usb-hcd.h"
>> -#include "hw/usb/hcd-ehci.h"
>> #include "hw/loader.h"
>> #include "sysemu/sysemu.h"
>> #include "hw/arm/allwinner-h3.h"
>> diff --git a/hw/arm/exynos4210.c b/hw/arm/exynos4210.c
>> index fa639806ec..692fb02159 100644
>> --- a/hw/arm/exynos4210.c
>> +++ b/hw/arm/exynos4210.c
>> @@ -35,7 +35,7 @@
>> #include "hw/qdev-properties.h"
>> #include "hw/arm/exynos4210.h"
>> #include "hw/sd/sdhci.h"
>> -#include "hw/usb/hcd-ehci.h"
>> +#include "hw/usb/usb-hcd.h"
>>
>> #define EXYNOS4210_CHIPID_ADDR         0x10000000
>>
>> diff --git a/hw/arm/sbsa-ref.c b/hw/arm/sbsa-ref.c
>> index 021e7c1b8b..4e4c338ae9 100644
>> --- a/hw/arm/sbsa-ref.c
>> +++ b/hw/arm/sbsa-ref.c
>> @@ -38,6 +38,7 @@
>> #include "hw/loader.h"
>> #include "hw/pci-host/gpex.h"
>> #include "hw/qdev-properties.h"
>> +#include "hw/usb/usb-hcd.h"
>> #include "hw/char/pl011.h"
>> #include "net/net.h"
>>
>> @@ -485,7 +486,7 @@ static void create_ehci(const SBSAMachineState *sms)
>>     hwaddr base = sbsa_ref_memmap[SBSA_EHCI].base;
>>     int irq = sbsa_ref_irqmap[SBSA_EHCI];
>>
>> -    sysbus_create_simple("platform-ehci-usb", base,
>> +    sysbus_create_simple(TYPE_PLATFORM_EHCI, base,
>>                          qdev_get_gpio_in(sms->gic, irq));
>> }
>>
>> diff --git a/hw/arm/xilinx_zynq.c b/hw/arm/xilinx_zynq.c
>> index ed970273f3..9ccdc03095 100644
>> --- a/hw/arm/xilinx_zynq.c
>> +++ b/hw/arm/xilinx_zynq.c
>> @@ -29,7 +29,7 @@
>> #include "hw/loader.h"
>> #include "hw/misc/zynq-xadc.h"
>> #include "hw/ssi/ssi.h"
>> -#include "hw/usb/chipidea.h"
>> +#include "hw/usb/usb-hcd.h"
>> #include "qemu/error-report.h"
>> #include "hw/sd/sdhci.h"
>> #include "hw/char/cadence_uart.h"
>> diff --git a/hw/ppc/sam460ex.c b/hw/ppc/sam460ex.c
>> index ac60d17a86..3f7cf0d1ae 100644
>> --- a/hw/ppc/sam460ex.c
>> +++ b/hw/ppc/sam460ex.c
>> @@ -37,7 +37,6 @@
>> #include "hw/i2c/smbus_eeprom.h"
>> #include "hw/usb/usb.h"
>> #include "hw/usb/usb-hcd.h"
>> -#include "hw/usb/hcd-ehci.h"
>> #include "hw/ppc/fdt.h"
>> #include "hw/qdev-properties.h"
>> #include "hw/pci/pci.h"
>> diff --git a/hw/usb/chipidea.c b/hw/usb/chipidea.c
>> index 3dcd22ccba..e81f63295e 100644
>> --- a/hw/usb/chipidea.c
>> +++ b/hw/usb/chipidea.c
>> @@ -11,6 +11,7 @@
>>
>> #include "qemu/osdep.h"
>> #include "hw/usb/hcd-ehci.h"
>> +#include "hw/usb/usb-hcd.h"
>> #include "hw/usb/chipidea.h"
>> #include "qemu/log.h"
>> #include "qemu/module.h"
>> diff --git a/hw/usb/hcd-ehci-sysbus.c b/hw/usb/hcd-ehci-sysbus.c
>> index 3730736540..b7debc1934 100644
>> --- a/hw/usb/hcd-ehci-sysbus.c
>> +++ b/hw/usb/hcd-ehci-sysbus.c
>> @@ -18,6 +18,7 @@
>> #include "qemu/osdep.h"
>> #include "hw/qdev-properties.h"
>> #include "hw/usb/hcd-ehci.h"
>> +#include "hw/usb/usb-hcd.h"
>> #include "migration/vmstate.h"
>> #include "qemu/module.h"
> 
> Do these last two still need hw/usb/hcd-ehci.h? If so do they get
> hw/usb/usb-hcd.h via that one so do they need to explicitely include it
> again?

chipidea.c implements an HCI, so it needs the hcd-ehci.h internals;
without that include we get:

usb/chipidea.c:96:5: error: unknown type name ‘EHCIState’
   96 |     EHCIState *ehci = &SYS_BUS_EHCI(obj)->ehci;
      |     ^~~~~~~~~
hw/usb/chipidea.c:152:5: error: unknown type name ‘SysBusEHCIClass’
  152 |     SysBusEHCIClass *sec = SYS_BUS_EHCI_CLASS(klass);
      |     ^~~~~~~~~~~~~~~
hw/usb/chipidea.c:152:28: error: implicit declaration of function ‘SYS_BUS_EHCI_CLASS’ [-Werror=implicit-function-declaration]
  152 |     SysBusEHCIClass *sec = SYS_BUS_EHCI_CLASS(klass);
      |                            ^~~~~~~~~~~~~~~~~~

Similarly with hcd-ehci-sysbus.c:

hw/usb/hcd-ehci-sysbus.c:30:50: error: ‘vmstate_ehci’ undeclared here (not in a function)
   30 |         VMSTATE_STRUCT(ehci, EHCISysBusState, 2, vmstate_ehci, EHCIState),
      |                                                  ^~~~~~~~~~~~
hw/usb/hcd-ehci-sysbus.c:45:26: error: implicit declaration of function ‘SYS_BUS_EHCI’; did you mean ‘TYPE_SYS_BUS_EHCI’? [-Werror=implicit-function-declaration]
   45 |     EHCISysBusState *i = SYS_BUS_EHCI(dev);
      |                          ^~~~~~~~~~~~
      |                          TYPE_SYS_BUS_EHCI
hw/usb/hcd-ehci-sysbus.c:48:5: error: implicit declaration of function ‘usb_ehci_realize’; did you mean ‘usb_ehci_sysbus_realize’? [-Werror=implicit-function-declaration]
   48 |     usb_ehci_realize(s, dev, errp);
      |     ^~~~~~~~~~~~~~~~
      |     usb_ehci_sysbus_realize
hw/usb/hcd-ehci-sysbus.c:74:5: error: implicit declaration of function ‘usb_ehci_init’ [-Werror=implicit-function-declaration]
   74 |     usb_ehci_init(s, DEVICE(obj));
      |     ^~~~~~~~~~~~~

The idea of "hw/usb/usb-hcd.h" is to keep USB internals opaque to the
machine/board/SoC code that uses the controllers; those callers only
want to instantiate the devices, not poke at their internals.

I'll see how to reword the description to make that clearer.

> 
> Regards,
> BALATON Zoltan


From xen-devel-bounces@lists.xenproject.org Sun Jul 05 10:32:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 05 Jul 2020 10:32:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1js1wE-0006SY-OK; Sun, 05 Jul 2020 10:31:58 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3tmk=AQ=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1js1wD-0006ST-Jo
 for xen-devel@lists.xenproject.org; Sun, 05 Jul 2020 10:31:57 +0000
X-Inumbo-ID: bf4c0563-beaa-11ea-8bc0-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id bf4c0563-beaa-11ea-8bc0-12813bfff9fa;
 Sun, 05 Jul 2020 10:31:56 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=7onaZJxbOCXjzPn8pf60GU1Y0pZhMm1Uuj3CWOwtNro=; b=OyAUK1WE+UKOjQYurhCatlUaB
 b88ppJiJsfrIvXcQcucTHIx1ys32Z5zYn7TD4YDtN+WuR+m2O3FhN2FEK/JJ2pAC0t1EInniGhZIZ
 Qu5oqd8wbHqYrJfy+PvzkYUpcEPkYch6z1XL7G6aA0Rseg4gvGNIEQQ+G3BnZTdItDDS4=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1js1wB-0006UH-Ts; Sun, 05 Jul 2020 10:31:55 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1js1wB-0003m5-II; Sun, 05 Jul 2020 10:31:55 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1js1wB-0008FR-He; Sun, 05 Jul 2020 10:31:55 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151641-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-coverity test] 151641: all pass - PUSHED
X-Osstest-Versions-This: xen=be63d9d47f571a60d70f8fb630c03871312d9655
X-Osstest-Versions-That: xen=23ca7ec0ba620db52a646d80e22f9703a6589f66
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 05 Jul 2020 10:31:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151641 xen-unstable-coverity real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151641/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 xen                  be63d9d47f571a60d70f8fb630c03871312d9655
baseline version:
 xen                  23ca7ec0ba620db52a646d80e22f9703a6589f66

Last test of basis   151504  2020-07-01 09:23:05 Z    4 days
Testing same since   151641  2020-07-05 09:18:32 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Bertrand Marquis <bertrand.marquis@arm.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Stefano Stabellini <stefano.stabellini@xilinx.com>
  Volodymyr Babchuk <volodymyr_babchuk@epam.com>

jobs:
 coverity-amd64                                               pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   23ca7ec0ba..be63d9d47f  be63d9d47f571a60d70f8fb630c03871312d9655 -> coverity-tested/smoke


From xen-devel-bounces@lists.xenproject.org Sun Jul 05 11:13:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 05 Jul 2020 11:13:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1js2Zx-0001UT-9K; Sun, 05 Jul 2020 11:13:01 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3tmk=AQ=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1js2Zw-0001UJ-Cv
 for xen-devel@lists.xenproject.org; Sun, 05 Jul 2020 11:13:00 +0000
X-Inumbo-ID: 7996e43c-beb0-11ea-8bc3-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7996e43c-beb0-11ea-8bc3-12813bfff9fa;
 Sun, 05 Jul 2020 11:12:55 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=+nsczB1MQUCM329MqPcHbdJ8fcUo2+ubkBEIfXNnWiA=; b=yf42cMYUhbYGZdv7Ss1cVHASC
 gXgR2WImphBb9UqYlu6NzYGzgij8KLM95Fgc10YZU+SCcw89jOjwhzYzm1TPUB0PzwauIDvdUhG5n
 NJVCX7/vyPyngzFwWj7HqIfILy3MlrmrOfLCTHHP5SudtZNEXagidso6ARHWR/MNQYelA=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1js2Zq-0007ED-Oh; Sun, 05 Jul 2020 11:12:54 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1js2Zq-0005gy-5n; Sun, 05 Jul 2020 11:12:54 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1js2Zq-0006on-5A; Sun, 05 Jul 2020 11:12:54 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151630-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 151630: regressions - FAIL
X-Osstest-Failures: linux-linus:test-arm64-arm64-libvirt-xsm:guest-start.2:fail:regression
 linux-linus:test-amd64-i386-xl-pvshim:xen-boot:fail:heisenbug
 linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-boot:fail:heisenbug
 linux-linus:test-arm64-arm64-libvirt-xsm:guest-start/debian.repeat:fail:heisenbug
 linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
 linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
 linux-linus:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:heisenbug
 linux-linus:test-amd64-amd64-examine:memdisk-try-append:fail:heisenbug
 linux-linus:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:heisenbug
 linux-linus:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:allowable
 linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
X-Osstest-Versions-This: linux=35e884f89df4c48566d745dc5a97a0d058d04263
X-Osstest-Versions-That: linux=1b5044021070efa3259f3e9548dc35d1eb6aa844
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 05 Jul 2020 11:12:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151630 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151630/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-libvirt-xsm 17 guest-start.2            fail REGR. vs. 151214

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-pvshim     7 xen-boot         fail in 151617 pass in 151630
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-boot  fail in 151617 pass in 151630
 test-arm64-arm64-libvirt-xsm 16 guest-start/debian.repeat fail in 151617 pass in 151630
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 10 debian-hvm-install fail in 151617 pass in 151630
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 10 debian-hvm-install fail in 151617 pass in 151630
 test-amd64-amd64-xl-rtds     18 guest-localmigrate/x10     fail pass in 151617
 test-amd64-amd64-examine      4 memdisk-try-append         fail pass in 151617
 test-armhf-armhf-xl-vhd      15 guest-start/debian.repeat  fail pass in 151617

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds    16 guest-start/debian.repeat fail REGR. vs. 151214

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 151214
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 151214
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 151214
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 151214
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 151214
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 151214
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 151214
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 151214
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass

version targeted for testing:
 linux                35e884f89df4c48566d745dc5a97a0d058d04263
baseline version:
 linux                1b5044021070efa3259f3e9548dc35d1eb6aa844

Last test of basis   151214  2020-06-18 02:27:46 Z   17 days
Failing since        151236  2020-06-19 19:10:35 Z   15 days   21 attempts
Testing same since   151617  2020-07-04 10:03:06 Z    1 days    2 attempts

------------------------------------------------------------
551 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 25961 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Jul 05 11:51:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 05 Jul 2020 11:51:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1js3AR-0004hp-70; Sun, 05 Jul 2020 11:50:43 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3tmk=AQ=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1js3AP-0004hk-Es
 for xen-devel@lists.xenproject.org; Sun, 05 Jul 2020 11:50:41 +0000
X-Inumbo-ID: bec3b53a-beb5-11ea-bb8b-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id bec3b53a-beb5-11ea-bb8b-bc764e2007e4;
 Sun, 05 Jul 2020 11:50:38 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=eE/PHpCXZ1lkt3AvRAOwZdWAH3/k1jNQzA+xV7KnMFg=; b=YOkWtfTYo3De7ksdQBXCNxWZ3
 CunrbMgggns/W0XoiB2X7o8t37ImbNJP0RRYjlkD8yVJcEkt1wlOI39tLzgU1no7oN12W7PYpR7J+
 5U/X/Wl2FnuL7qC/m+bO2Kmi4Dg4mnk3wNa9Lt7Oi0uSLq2wNRhxoJ/CXY/WI9t0iJkEk=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1js3AM-0007tK-Dt; Sun, 05 Jul 2020 11:50:38 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1js3AM-0006YT-0b; Sun, 05 Jul 2020 11:50:38 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1js3AL-0006Td-WB; Sun, 05 Jul 2020 11:50:37 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151634-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 151634: regressions - FAIL
X-Osstest-Failures: qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-amd64-libvirt-vhd:debian-di-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-libvirt-xsm:guest-start:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qcow2:debian-di-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:redhat-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-i386-libvirt-pair:guest-start/debian:fail:regression
 qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-freebsd10-i386:guest-start:fail:regression
 qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:redhat-install:fail:regression
 qemu-mainline:test-amd64-amd64-libvirt-xsm:guest-start:fail:regression
 qemu-mainline:test-amd64-amd64-libvirt:guest-start:fail:regression
 qemu-mainline:test-amd64-i386-libvirt:guest-start:fail:regression
 qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-start:fail:regression
 qemu-mainline:test-armhf-armhf-xl-vhd:debian-di-install:fail:regression
 qemu-mainline:test-amd64-amd64-libvirt-pair:guest-start/debian:fail:regression
 qemu-mainline:test-armhf-armhf-libvirt:guest-start:fail:regression
 qemu-mainline:test-arm64-arm64-libvirt-xsm:guest-start:fail:regression
 qemu-mainline:test-armhf-armhf-libvirt-raw:debian-di-install:fail:regression
 qemu-mainline:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
 qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: qemuu=eb6490f544388dd24c0d054a96dd304bc7284450
X-Osstest-Versions-That: qemuu=9e3903136d9acde2fb2dd9e967ba928050a6cb4a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 05 Jul 2020 11:50:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151634 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151634/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-ovmf-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-win7-amd64 10 windows-install  fail REGR. vs. 151065
 test-amd64-amd64-libvirt-vhd 10 debian-di-install        fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-win7-amd64 10 windows-install   fail REGR. vs. 151065
 test-amd64-amd64-qemuu-nested-intel 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-libvirt-xsm  12 guest-start              fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-ws16-amd64 10 windows-install  fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-ovmf-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-qemuu-nested-amd 10 debian-hvm-install  fail REGR. vs. 151065
 test-amd64-amd64-xl-qcow2    10 debian-di-install        fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-qemuu-rhel6hvm-amd 10 redhat-install     fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-debianhvm-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-ws16-amd64 10 windows-install   fail REGR. vs. 151065
 test-amd64-i386-libvirt-pair 21 guest-start/debian       fail REGR. vs. 151065
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-freebsd10-i386 11 guest-start            fail REGR. vs. 151065
 test-amd64-i386-qemuu-rhel6hvm-intel 10 redhat-install   fail REGR. vs. 151065
 test-amd64-amd64-libvirt-xsm 12 guest-start              fail REGR. vs. 151065
 test-amd64-amd64-libvirt     12 guest-start              fail REGR. vs. 151065
 test-amd64-i386-libvirt      12 guest-start              fail REGR. vs. 151065
 test-amd64-i386-freebsd10-amd64 11 guest-start           fail REGR. vs. 151065
 test-armhf-armhf-xl-vhd      10 debian-di-install        fail REGR. vs. 151065
 test-amd64-amd64-libvirt-pair 21 guest-start/debian      fail REGR. vs. 151065
 test-armhf-armhf-libvirt     12 guest-start              fail REGR. vs. 151065
 test-arm64-arm64-libvirt-xsm 12 guest-start              fail REGR. vs. 151065
 test-armhf-armhf-libvirt-raw 10 debian-di-install        fail REGR. vs. 151065

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-rtds     16 guest-start/debian.repeat    fail  like 151065
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                eb6490f544388dd24c0d054a96dd304bc7284450
baseline version:
 qemuu                9e3903136d9acde2fb2dd9e967ba928050a6cb4a

Last test of basis   151065  2020-06-12 22:27:51 Z   22 days
Failing since        151101  2020-06-14 08:32:51 Z   21 days   25 attempts
Testing same since   151634  2020-07-05 00:36:29 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ahmed Karaman <ahmedkhaledkaraman@gmail.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Bulekov <alxndr@bu.edu>
  Alexey Krasikov <alex-krasikov@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Allan Peramaki <aperamak@pp1.inet.fi>
  Andrew Jones <drjones@redhat.com>
  Ani Sinha <ani.sinha@nutanix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Ard Biesheuvel <ardb@kernel.org>
  Artyom Tarasenko <atar4qemu@gmail.com>
  Aurelien Jarno <aurelien@aurel32.net>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Beata Michalska <beata.michalska@linaro.org>
  Bin Meng <bin.meng@windriver.com>
  Cathy Zhang <cathy.zhang@intel.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Claudio Fontana <cfontana@suse.de>
  Cleber Rosa <crosa@redhat.com>
  Colin Xu <colin.xu@intel.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniele Buono <dbuono@linux.vnet.ibm.com>
  David Carlier <devnexen@gmail.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Derek Su <dereksu@qnap.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Emilio G. Cota <cota@braap.org>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Evgeny Yakovlev <eyakovlev@virtuozzo.com>
  fangying <fangying1@huawei.com>
  Farhan Ali <alifm@linux.ibm.com>
  Finn Thain <fthain@telegraphics.com.au>
  Geoffrey McRae <geoff@hostfission.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Greg Kurz <groug@kaod.org>
  Guenter Roeck <linux@roeck-us.net>
  Gustavo Romero <gromero@linux.ibm.com>
  Halil Pasic <pasic@linux.ibm.com>
  Helge Deller <deller@gmx.de>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Ian Jiang <ianjiang.ict@gmail.com>
  Igor Mammedov <imammedo@redhat.com>
  Janne Grunau <j@jannau.net>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Wang <jasowang@redhat.com>
  Jay Zhou <jianjay.zhou@huawei.com>
  Jean-Christophe Dubois <jcd@tribudubois.net>
  Jessica Clarke <jrtc27@jrtc27.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jingqi Liu <jingqi.liu@intel.com>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Joseph Myers <joseph@codesourcery.com>
  Julio Faracco <jcfaracco@gmail.com>
  Keqian Zhu <zhukeqian1@huawei.com>
  Kevin Wolf <kwolf@redhat.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Leonid Bloch <lb.workbox@gmail.com>
  Leonid Bloch <lbloch@janustech.com>
  Li Qiang <liq3ea@gmail.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  lichun <lichun@ruijie.com.cn>
  Like Xu <like.xu@linux.intel.com>
  Lingfeng Yang <lfy@google.com>
  Liran Alon <liran.alon@oracle.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Lukas Straub <lukasstraub2@web.de>
  Maciej S. Szmigiero <maciej.szmigiero@oracle.com>
  Magnus Damm <magnus.damm@gmail.com>
  Mao Zhongyi <maozhongyi@cmss.chinamobile.com>
  Marcelo Tosatti <mtosatti@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Masahiro Yamada <masahiroy@kernel.org>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Michael S. Tsirkin <mst@redhat.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Raphael Norwitz <raphael.norwitz@nutanix.com>
  Richard Henderson <richard.henderson@linaro.org>
  Richard W.M. Jones <rjones@redhat.com>
  Robert Foley <robert.foley@linaro.org>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Roman Kagan <rkagan@virtuozzo.com>
  Roman Kagan <rvkagan@yandex-team.ru>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Sebastian Rasmussen <sebras@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Tao Xu <tao3.xu@intel.com>
  Thomas Huth <thuth@redhat.com>
  Tong Ho <tong.ho@xilinx.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  WangBowen <bowen.wang@intel.com>
  Wei Huang <wei.huang2@amd.com>
  Wei Wang <wei.w.wang@intel.com>
  Ying Fang <fangying1@huawei.com>
  Yoshinori Sato <ysato@users.sourceforge.jp>
  Yuri Benditovich <yuri.benditovich@daynix.com>
  Zhang Chen <chen.zhang@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 17819 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Jul 05 13:00:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 05 Jul 2020 13:00:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1js4Fp-0001xq-Rf; Sun, 05 Jul 2020 13:00:21 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7P7Y=AQ=listes.aquilenet.fr=admin@srs-us1.protection.inumbo.net>)
 id 1js4Fo-0001xl-K3
 for xen-devel@lists.xenproject.org; Sun, 05 Jul 2020 13:00:20 +0000
X-Inumbo-ID: 79d2dfc8-bebf-11ea-bb8b-bc764e2007e4
Received: from hera.aquilenet.fr (unknown [185.233.100.1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 79d2dfc8-bebf-11ea-bb8b-bc764e2007e4;
 Sun, 05 Jul 2020 13:00:18 +0000 (UTC)
Received: from localhost (localhost [127.0.0.1])
 by hera.aquilenet.fr (Postfix) with ESMTP id 37469865
 for <xen-devel@lists.xenproject.org>; Sun,  5 Jul 2020 15:00:16 +0200 (CEST)
X-Virus-Scanned: Debian amavisd-new at aquilenet.fr
Received: from hera.aquilenet.fr ([127.0.0.1])
 by localhost (hera.aquilenet.fr [127.0.0.1]) (amavisd-new, port 10024)
 with ESMTP id 7lHGnZwo_YMY for <xen-devel@lists.xenproject.org>;
 Sun,  5 Jul 2020 15:00:14 +0200 (CEST)
Received: from function (lfbn-bor-1-797-11.w86-234.abo.wanadoo.fr
 [86.234.239.11])
 by hera.aquilenet.fr (Postfix) with ESMTPSA id 2912C23B6
 for <xen-devel@lists.xenproject.org>; Sun,  5 Jul 2020 15:00:11 +0200 (CEST)
Received: from samy by function with local (Exim 4.94)
 (envelope-from <admin@listes.aquilenet.fr>) id 1js4Fd-004Wmv-1O
 for xen-devel@lists.xenproject.org; Sun, 05 Jul 2020 15:00:09 +0200
Date: Sun, 5 Jul 2020 15:00:08 +0200
From: Samuel Thibault <admin@listes.aquilenet.fr>
To: xen-devel <xen-devel@lists.xenproject.org>
Subject: Block requests priorities?
Message-ID: <20200705130008.w7z33nz6ue7ly7sx@function>
Mail-Followup-To: Samuel Thibault <admin@listes.aquilenet.fr>,
 xen-devel <xen-devel@lists.xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
Organization: I am not organized
User-Agent: NeoMutt/20170609 (1.8.3)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hello,

Every month, a RAID check runs at the same time as the VMs are backing
up their data. Yes, probably not a good thing to do, but it should
still work, and yet the whole system gets overly sluggish even though
we mark all of these I/Os as idle priority.

The RAID check is happening inside dom0, with the --idle option to
deprioritize its I/Os.

The backups are happening inside domUs, with ionice -c 3 to
deprioritize their I/O as well.

But AIUI, the fact that the backup I/Os should be deprioritized is not
transmitted through the blkif protocol, so dom0 treats them just like
any other VM I/O requests. Shouldn't we add priorities to the blkif
requests?

What I guess is happening is that some large VMs are backing up a lot
of data, and even though *they* deprioritize the I/O requests inside
the domUs, the flurry of requests still piles up inside dom0 at
non-idle priority, since the idle-priority information is not
transmitted. I/O requests from other VMs then get delayed behind that
flurry. I'm also not sure that bfq has the information it needs to
establish fair queueing between the domUs correctly.

Samuel


From xen-devel-bounces@lists.xenproject.org Sun Jul 05 13:01:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 05 Jul 2020 13:01:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1js4GU-0001zO-4j; Sun, 05 Jul 2020 13:01:02 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3tmk=AQ=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1js4GT-0001zH-3v
 for xen-devel@lists.xenproject.org; Sun, 05 Jul 2020 13:01:01 +0000
X-Inumbo-ID: 9233f5a2-bebf-11ea-8496-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9233f5a2-bebf-11ea-8496-bc764e2007e4;
 Sun, 05 Jul 2020 13:00:59 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=EvwIZRda5zbZ10HnMythaaVsDV0rsVpJM0EYe+3Ja5o=; b=mL5zdp/TGFLU+1rlF7X9o1xnR
 qLpSsy534JkzagYyvNlcLu1KSgDkrY9n+96waLsSG6uMRFuadZuZFuObjWzHpF/ED+vXy17BuACZ1
 4YkFQLQd6YrpUh/vAAYO3RC1IOOYFbieGZxtWaLREn+LCxGYRGFTu4unvkM94OC9Z+WyU=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1js4GQ-0000li-HA; Sun, 05 Jul 2020 13:00:58 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1js4GP-0001cQ-Qw; Sun, 05 Jul 2020 13:00:58 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1js4GP-0000VN-Pq; Sun, 05 Jul 2020 13:00:57 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151638-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 151638: regressions - FAIL
X-Osstest-Failures: libvirt:test-amd64-amd64-libvirt-pair:guest-migrate/src_host/dst_host:fail:regression
 libvirt:test-amd64-i386-libvirt-pair:guest-migrate/src_host/dst_host:fail:regression
 libvirt:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 libvirt:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 libvirt:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 libvirt:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 libvirt:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 libvirt:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 libvirt:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 libvirt:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 libvirt:test-arm64-arm64-libvirt:migrate-support-check:fail:nonblocking
 libvirt:test-arm64-arm64-libvirt:saverestore-support-check:fail:nonblocking
 libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 libvirt:test-arm64-arm64-libvirt-qcow2:migrate-support-check:fail:nonblocking
 libvirt:test-arm64-arm64-libvirt-qcow2:saverestore-support-check:fail:nonblocking
 libvirt:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 libvirt:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 libvirt:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This: libvirt=ab9fd53823483975adb0cb7d46e03f647c7f3b57
X-Osstest-Versions-That: libvirt=e7998ebeaf15e4e8825be0dd97aa1316f194f00d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 05 Jul 2020 13:00:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151638 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151638/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-pair 22 guest-migrate/src_host/dst_host fail REGR. vs. 151496
 test-amd64-i386-libvirt-pair 22 guest-migrate/src_host/dst_host fail REGR. vs. 151496

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 151496
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 151496
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt     14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-qcow2 12 migrate-support-check        fail never pass
 test-arm64-arm64-libvirt-qcow2 13 saverestore-support-check    fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass

version targeted for testing:
 libvirt              ab9fd53823483975adb0cb7d46e03f647c7f3b57
baseline version:
 libvirt              e7998ebeaf15e4e8825be0dd97aa1316f194f00d

Last test of basis   151496  2020-07-01 04:23:43 Z    4 days
Failing since        151527  2020-07-02 04:29:15 Z    3 days    4 attempts
Testing same since   151638  2020-07-05 04:25:03 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrea Bolognani <abologna@redhat.com>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniel Veillard <veillard@redhat.com>
  Laine Stump <laine@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Yanqiu Zhang <yanqzhan@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-libvirt                                     pass    
 test-arm64-arm64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-i386-libvirt-pair                                 fail    
 test-arm64-arm64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit ab9fd53823483975adb0cb7d46e03f647c7f3b57
Author: Laine Stump <laine@redhat.com>
Date:   Wed Jun 24 13:12:56 2020 -0400

    network: use proper arg type when calling virNetDevSetOnline()
    
    The 2nd arg to this function is a bool, not an int.
    
    Signed-off-by: Laine Stump <laine@redhat.com>
    Reviewed-by: Ján Tomko <jtomko@redhat.com>

commit e95dd7aacd814c5b7d109252f0b68b2ac9cebb9b
Author: Laine Stump <laine@redhat.com>
Date:   Tue Jun 23 22:52:58 2020 -0400

    network: make networkDnsmasqXmlNsDef private to bridge_driver.c
    
    This struct isn't used anywhere else.
    
    Signed-off-by: Laine Stump <laine@redhat.com>
    Reviewed-by: Ján Tomko <jtomko@redhat.com>

commit 9ceb3cff8582119d2f907b85655e758068770e83
Author: Laine Stump <laine@redhat.com>
Date:   Fri Jun 19 17:40:17 2020 -0400

    network: fix memory leak in networkBuildDhcpDaemonCommandLine()
    
    hostsfilestr was not being freed. This will be turned into g_autofree
    in an upcoming patch converting a lot more of the same file to using
    g_auto*, but I wanted to make a separate patch for this first so the
    other patch is simpler to review (and to make backporting easier).
    
    The leak was introduced in commit 97a0aa246799c97d0a9ca9ecd6b4fd932ae4756c
    
    Signed-off-by: Laine Stump <laine@redhat.com>
    Reviewed-by: Ján Tomko <jtomko@redhat.com>

commit a726feb693ea5a1b0c90761c35641f0db8fc0619
Author: Laine Stump <laine@redhat.com>
Date:   Thu Jun 18 19:16:33 2020 -0400

    use g_autoptr for all xmlBuffers
    
    AUTOPTR_CLEANUP_FUNC is set to xmlBufferFree() in util/virxml.h (This
    is actually new - added accidentally (but fortunately harmlessly!) in
    commit 257aba2dafe. I had added it along with the hunks in this patch,
    then decided to remove it and submit separately, but missed taking out
    the hunk in virxml.h)
    
    Signed-off-by: Laine Stump <laine@redhat.com>
    Reviewed-by: Ján Tomko <jtomko@redhat.com>

commit b7a92bce070fd57844a59bf8b1c30cb4ef4f3acd
Author: Laine Stump <laine@redhat.com>
Date:   Thu Jun 18 12:49:09 2020 -0400

    conf, vmx: check for OOM after calling xmlBufferCreate()
    
    Although libvirt itself uses g_malloc0() and friends, which exit when
    there isn't enough memory, libxml2 uses standard malloc(), which just
    returns NULL on OOM - this means we must check for NULL on return from
    any libxml2 functions that allocate memory.
    
    xmlBufferCreate(), for example, might return NULL, and we don't always
    check for it. This patch adds checks where it isn't already done.
    
    (NB: Although libxml2 has a provision for changing behavior on OOM (by
    calling xmlMemSetup() to change what functions are used for
    allocating/freeing memory), we can't use that, since parts of libvirt
    code end up in libvirt.so, which is linked and called directly by
    applications that may themselves use libxml2 (and may have already set
    their own alternate malloc()), e.g. drivers like esx which live totally
    in the library rather than a separate process.)
    
    Signed-off-by: Laine Stump <laine@redhat.com>
    Reviewed-by: Ján Tomko <jtomko@redhat.com>

commit ad231189ab948f82b8f2288250df088d9718bb7c
Author: Yanqiu Zhang <yanqzhan@redhat.com>
Date:   Thu Jul 2 09:06:46 2020 +0000

    news.html: Add 3 new features
    
    Add 'virtio packed' in 6.3.0, 'virDomainGetHostnameFlags' and
    'Panic Crashloaded event' for 6.1.0.
    
    Signed-off-by: Yanqiu Zhang <yanqzhan@redhat.com>
    Reviewed-by: Andrea Bolognani <abologna@redhat.com>

commit 201f8d1876136b0693505614efa3c9d113aff0bb
Author: Michal Privoznik <mprivozn@redhat.com>
Date:   Mon Jun 29 14:55:54 2020 +0200

    virConnectGetAllDomainStats: Document two vcpu stats
    
    When introducing vcpu.<num>.wait (v1.3.2-rc1~301) and
    vcpu.<num>.halted (v2.4.0-rc1~36) the documentation was
    not written.
    
    Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
    Reviewed-by: Erik Skultety <eskultet@redhat.com>

commit c203b8fee1ce15003934c09e811fbd2eaec9f230
Author: Andrea Bolognani <abologna@redhat.com>
Date:   Thu Jul 2 15:02:38 2020 +0200

    docs: Update CI documentation
    
    We're no longer using either Travis CI or the Jenkins-based
    CentOS CI, but we have started using Cirrus CI.
    
    Mention the libvirt-ci subproject as well, as a pointer for those
    who might want to learn more about our CI infrastructure.
    
    Signed-off-by: Andrea Bolognani <abologna@redhat.com>
    Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>

commit fb912901316dbe7d485551606373bd71d5271601
Author: Andrea Bolognani <abologna@redhat.com>
Date:   Mon Jun 29 19:00:36 2020 +0200

    cirrus: Generate jobs dynamically
    
    Instead of having static job definitions for FreeBSD and macOS,
    use a generic template for both and fill in the details that are
    actually different, such as the list of packages to install, in
    the GitLab CI job, right before calling cirrus-run.
    
    The target-specific information is provided by lcitool, so that
    keeping them up to date is just a matter of running the refresh
    script when necessary.
    
    Signed-off-by: Andrea Bolognani <abologna@redhat.com>
    Reviewed-by: Erik Skultety <eskultet@redhat.com>

commit 919ee94ca9c7fed77897fa8e3b04952e02780c0c
Author: Michal Privoznik <mprivozn@redhat.com>
Date:   Fri Jul 3 09:32:30 2020 +0200

    maint: Post-release version bump to 6.6.0
    
    Signed-off-by: Michal Privoznik <mprivozn@redhat.com>

commit d7f935f1f17a3ecf180ddb9600cbef4ba8dc20f4
Author: Daniel Veillard <veillard@redhat.com>
Date:   Fri Jul 3 08:49:25 2020 +0200

    Release of libvirt-6.5.0
    
    * NEWS.rst: updated with date of release
    
    Signed-off-by: Daniel Veillard <veillard@redhat.com>

commit d1d888a69f505922140bec292b8d208b3571f084
Author: Andrea Bolognani <abologna@redhat.com>
Date:   Thu Jul 2 14:41:18 2020 +0200

    NEWS: Update for libvirt 6.5.0
    
    Signed-off-by: Andrea Bolognani <abologna@redhat.com>
    Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>

commit 7fa7f7eeb6e969e002845928e155914da2fc8cd0
Author: Daniel P. Berrangé <berrange@redhat.com>
Date:   Wed Jul 1 17:36:51 2020 +0100

    util: add access check for hooks to fix running as non-root
    
    Since feb83c1e710b9ea8044a89346f4868d03b31b0f1 libvirtd will abort on
    startup if run as non-root
    
      2020-07-01 16:30:30.738+0000: 1647444: error : virDirOpenInternal:2869 : cannot open directory '/etc/libvirt/hooks/daemon.d': Permission denied
    
    The root cause flaw is that non-root libvirtd is using /etc/libvirt for
    its hooks. Traditionally that has been harmless though since we checked
    whether we could access the hook file and degraded gracefully. We need
    the same access check for iterating over the hook directory.
    
    Long term we should make it possible to have an unprivileged hook dir
    under $HOME.
    
    Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
    Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>

commit c3fa17cd9a158f38416a80af3e0f712bf96ebf38
Author: Michal Privoznik <mprivozn@redhat.com>
Date:   Wed Jul 1 09:47:48 2020 +0200

    virnettlshelpers: Update private key
    
    With the recent update of Fedora rawhide I've noticed
    virnettlssessiontest and virnettlscontexttest failing with:
    
      Our own certificate servercertreq-ctx.pem failed validation
      against cacertreq-ctx.pem: The certificate uses an insecure
      algorithm
    
    This is result of Fedora changes to support strong crypto [1]. RSA
    with 1024 bit key is viewed as legacy and thus insecure. Generate
    a new private key then. Moreover, switch to EC which is not only
    shorter but also not deprecated as often as RSA. Generated
    using the following command:
    
      openssl genpkey --outform PEM --out privkey.pem \
      --algorithm EC --pkeyopt ec_paramgen_curve:P-384 \
      --pkeyopt ec_param_enc:named_curve
    
    1: https://fedoraproject.org/wiki/Changes/StrongCryptoSettings2
    
    Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
    Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>

commit d57f361083c5053267e6d9380c1afe2abfcae8ac
Author: Daniel Henrique Barboza <danielhb413@gmail.com>
Date:   Tue Jun 30 16:43:43 2020 -0300

    docs: Fix 'Offline migration' description
    
    'transfers inactive the definition of a domain' seems odd.
    
    Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
    Reviewed-by: Andrea Bolognani <abologna@redhat.com>


From xen-devel-bounces@lists.xenproject.org Sun Jul 05 16:19:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 05 Jul 2020 16:19:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1js7Lo-00014F-Hb; Sun, 05 Jul 2020 16:18:44 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3tmk=AQ=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1js7Ln-00014A-RW
 for xen-devel@lists.xenproject.org; Sun, 05 Jul 2020 16:18:43 +0000
X-Inumbo-ID: 30fdab22-bedb-11ea-8bf7-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 30fdab22-bedb-11ea-8bf7-12813bfff9fa;
 Sun, 05 Jul 2020 16:18:41 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=fOvN9Z6EULJnSoHU5b2tSC6tXRUY7LJRqaz+tSjOk40=; b=h1kHFd/8EBiT9kMeQQ9Kmtp91
 lJhfPkKGn0xTie3sds2C5rdJJEB2KqxAza0d2KL6qGoykbljyhbUhZm6OOrpujI/tpyDCJBK+P05Q
 ggd1XBubalcHl547pxvzLqluTtpD02pjsFVxKItolgxKZrpBkS56Lyy9LHBjuiNkFUSuA=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1js7Ll-0004s7-3i; Sun, 05 Jul 2020 16:18:41 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1js7Lk-0004Er-Pi; Sun, 05 Jul 2020 16:18:40 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1js7Lk-0001vj-P5; Sun, 05 Jul 2020 16:18:40 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151635-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 151635: tolerable FAIL
X-Osstest-Failures: xen-unstable:test-amd64-amd64-examine:memdisk-try-append:fail:heisenbug
 xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=be63d9d47f571a60d70f8fb630c03871312d9655
X-Osstest-Versions-That: xen=be63d9d47f571a60d70f8fb630c03871312d9655
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 05 Jul 2020 16:18:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151635 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151635/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-examine      4 memdisk-try-append         fail pass in 151606

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 151606
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 151606
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 151606
 test-armhf-armhf-xl-rtds     16 guest-start/debian.repeat    fail  like 151606
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 151606
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 151606
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 151606
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 151606
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 151606
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop             fail like 151606
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  be63d9d47f571a60d70f8fb630c03871312d9655
baseline version:
 xen                  be63d9d47f571a60d70f8fb630c03871312d9655

Last test of basis   151635  2020-07-05 01:51:49 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Sun Jul 05 18:56:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 05 Jul 2020 18:56:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1js9nu-0005cI-Bq; Sun, 05 Jul 2020 18:55:54 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NZHG=AQ=cert.pl=michal.leszczynski@srs-us1.protection.inumbo.net>)
 id 1js9ns-0005bb-O1
 for xen-devel@lists.xenproject.org; Sun, 05 Jul 2020 18:55:52 +0000
X-Inumbo-ID: 2245c090-bef1-11ea-8c02-12813bfff9fa
Received: from bagnar.nask.net.pl (unknown [195.187.242.196])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2245c090-bef1-11ea-8c02-12813bfff9fa;
 Sun, 05 Jul 2020 18:55:46 +0000 (UTC)
Received: from bagnar.nask.net.pl (unknown [172.16.9.10])
 by bagnar.nask.net.pl (Postfix) with ESMTP id 6676BA2026;
 Sun,  5 Jul 2020 20:55:45 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by bagnar.nask.net.pl (Postfix) with ESMTP id 58BDFA2037;
 Sun,  5 Jul 2020 20:55:44 +0200 (CEST)
X-Amavis-Alert: BAD HEADER SECTION, Duplicate header field: "References"
Received: from bagnar.nask.net.pl ([127.0.0.1])
 by localhost (bagnar.nask.net.pl [127.0.0.1]) (amavisd-new, port 10032)
 with ESMTP id 1JBZVfExoyvJ; Sun,  5 Jul 2020 20:55:43 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by bagnar.nask.net.pl (Postfix) with ESMTP id CC9E8A2022;
 Sun,  5 Jul 2020 20:55:43 +0200 (CEST)
X-Virus-Scanned: amavisd-new at bagnar.nask.net.pl
X-Amavis-Alert: BAD HEADER SECTION, Duplicate header field: "References"
Received: from bagnar.nask.net.pl ([127.0.0.1])
 by localhost (bagnar.nask.net.pl [127.0.0.1]) (amavisd-new, port 10026)
 with ESMTP id Jc7SFJCbWlUJ; Sun,  5 Jul 2020 20:55:43 +0200 (CEST)
Received: from belindir.nask.net.pl (belindir-ext.nask.net.pl
 [195.187.242.210])
 by bagnar.nask.net.pl (Postfix) with ESMTP id 9C06EA1F8E;
 Sun,  5 Jul 2020 20:55:43 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by belindir.nask.net.pl (Postfix) with ESMTP id 7F15522C30;
 Sun,  5 Jul 2020 20:55:13 +0200 (CEST)
X-Amavis-Alert: BAD HEADER SECTION, Duplicate header field: "References"
Received: from belindir.nask.net.pl ([127.0.0.1])
 by localhost (belindir.nask.net.pl [127.0.0.1]) (amavisd-new, port 10032)
 with ESMTP id zPrEudZ0Ce5r; Sun,  5 Jul 2020 20:55:08 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by belindir.nask.net.pl (Postfix) with ESMTP id F3DEC22C07;
 Sun,  5 Jul 2020 20:55:07 +0200 (CEST)
X-Quarantine-ID: <GaILWkwrL2jb>
X-Virus-Scanned: amavisd-new at belindir.nask.net.pl
X-Amavis-Alert: BAD HEADER SECTION, Duplicate header field: "References"
Received: from belindir.nask.net.pl ([127.0.0.1])
 by localhost (belindir.nask.net.pl [127.0.0.1]) (amavisd-new, port 10026)
 with ESMTP id GaILWkwrL2jb; Sun,  5 Jul 2020 20:55:07 +0200 (CEST)
Received: from mq-desktop.cert.pl (unknown [195.187.238.217])
 by belindir.nask.net.pl (Postfix) with ESMTPSA id CD5B322BF3;
 Sun,  5 Jul 2020 20:55:07 +0200 (CEST)
From: =?UTF-8?q?Micha=C5=82=20Leszczy=C5=84ski?= <michal.leszczynski@cert.pl>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v5 02/11] x86/vmx: add Intel PT MSR definitions
Date: Sun,  5 Jul 2020 20:54:55 +0200
Message-Id: <ba3de1d4cd926b16a297d90055a03fda0762c2b5.1593974333.git.michal.leszczynski@cert.pl>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1593974333.git.michal.leszczynski@cert.pl>
References: <cover.1593974333.git.michal.leszczynski@cert.pl>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: luwei.kang@intel.com, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Michal Leszczynski <michal.leszczynski@cert.pl>,
 Jan Beulich <jbeulich@suse.com>, tamas.lengyel@intel.com,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Michal Leszczynski <michal.leszczynski@cert.pl>

Define constants related to Intel Processor Trace features.

Signed-off-by: Michal Leszczynski <michal.leszczynski@cert.pl>
Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
 xen/include/asm-x86/msr-index.h | 24 ++++++++++++++++++++++++
 1 file changed, 24 insertions(+)

diff --git a/xen/include/asm-x86/msr-index.h b/xen/include/asm-x86/msr-index.h
index 0fe98af923..4fd54fb5c9 100644
--- a/xen/include/asm-x86/msr-index.h
+++ b/xen/include/asm-x86/msr-index.h
@@ -72,7 +72,31 @@
 #define MSR_RTIT_OUTPUT_BASE                0x00000560
 #define MSR_RTIT_OUTPUT_MASK                0x00000561
 #define MSR_RTIT_CTL                        0x00000570
+#define  RTIT_CTL_TRACE_EN                  (_AC(1, ULL) <<  0)
+#define  RTIT_CTL_CYC_EN                    (_AC(1, ULL) <<  1)
+#define  RTIT_CTL_OS                        (_AC(1, ULL) <<  2)
+#define  RTIT_CTL_USR                       (_AC(1, ULL) <<  3)
+#define  RTIT_CTL_PWR_EVT_EN                (_AC(1, ULL) <<  4)
+#define  RTIT_CTL_FUP_ON_PTW                (_AC(1, ULL) <<  5)
+#define  RTIT_CTL_FABRIC_EN                 (_AC(1, ULL) <<  6)
+#define  RTIT_CTL_CR3_FILTER                (_AC(1, ULL) <<  7)
+#define  RTIT_CTL_TOPA                      (_AC(1, ULL) <<  8)
+#define  RTIT_CTL_MTC_EN                    (_AC(1, ULL) <<  9)
+#define  RTIT_CTL_TSC_EN                    (_AC(1, ULL) << 10)
+#define  RTIT_CTL_DIS_RETC                  (_AC(1, ULL) << 11)
+#define  RTIT_CTL_PTW_EN                    (_AC(1, ULL) << 12)
+#define  RTIT_CTL_BRANCH_EN                 (_AC(1, ULL) << 13)
+#define  RTIT_CTL_MTC_FREQ                  (_AC(0xf, ULL) << 14)
+#define  RTIT_CTL_CYC_THRESH                (_AC(0xf, ULL) << 19)
+#define  RTIT_CTL_PSB_FREQ                  (_AC(0xf, ULL) << 24)
+#define  RTIT_CTL_ADDR(n)                   (_AC(0xf, ULL) << (32 + 4 * (n)))
 #define MSR_RTIT_STATUS                     0x00000571
+#define  RTIT_STATUS_FILTER_EN              (_AC(1, ULL) <<  0)
+#define  RTIT_STATUS_CONTEXT_EN             (_AC(1, ULL) <<  1)
+#define  RTIT_STATUS_TRIGGER_EN             (_AC(1, ULL) <<  2)
+#define  RTIT_STATUS_ERROR                  (_AC(1, ULL) <<  4)
+#define  RTIT_STATUS_STOPPED                (_AC(1, ULL) <<  5)
+#define  RTIT_STATUS_BYTECNT                (_AC(0x1ffff, ULL) << 32)
 #define MSR_RTIT_CR3_MATCH                  0x00000572
 #define MSR_RTIT_ADDR_A(n)                 (0x00000580 + (n) * 2)
 #define MSR_RTIT_ADDR_B(n)                 (0x00000581 + (n) * 2)
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Sun Jul 05 18:56:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 05 Jul 2020 18:56:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1js9nu-0005cQ-KD; Sun, 05 Jul 2020 18:55:54 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NZHG=AQ=cert.pl=michal.leszczynski@srs-us1.protection.inumbo.net>)
 id 1js9ns-0005bc-Ut
 for xen-devel@lists.xenproject.org; Sun, 05 Jul 2020 18:55:52 +0000
X-Inumbo-ID: 226f4e06-bef1-11ea-bca7-bc764e2007e4
Received: from bagnar.nask.net.pl (unknown [195.187.242.196])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 226f4e06-bef1-11ea-bca7-bc764e2007e4;
 Sun, 05 Jul 2020 18:55:46 +0000 (UTC)
Received: from bagnar.nask.net.pl (unknown [172.16.9.10])
 by bagnar.nask.net.pl (Postfix) with ESMTP id D28CBA1F8E;
 Sun,  5 Jul 2020 20:55:45 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by bagnar.nask.net.pl (Postfix) with ESMTP id BFC7FA201B;
 Sun,  5 Jul 2020 20:55:44 +0200 (CEST)
X-Amavis-Alert: BAD HEADER SECTION, Duplicate header field: "References"
Received: from bagnar.nask.net.pl ([127.0.0.1])
 by localhost (bagnar.nask.net.pl [127.0.0.1]) (amavisd-new, port 10032)
 with ESMTP id bsOg5AyN26K4; Sun,  5 Jul 2020 20:55:44 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by bagnar.nask.net.pl (Postfix) with ESMTP id 20123A202C;
 Sun,  5 Jul 2020 20:55:44 +0200 (CEST)
X-Virus-Scanned: amavisd-new at bagnar.nask.net.pl
X-Amavis-Alert: BAD HEADER SECTION, Duplicate header field: "References"
Received: from bagnar.nask.net.pl ([127.0.0.1])
 by localhost (bagnar.nask.net.pl [127.0.0.1]) (amavisd-new, port 10026)
 with ESMTP id EPRksHIMDsRv; Sun,  5 Jul 2020 20:55:44 +0200 (CEST)
Received: from belindir.nask.net.pl (belindir-ext.nask.net.pl
 [195.187.242.210])
 by bagnar.nask.net.pl (Postfix) with ESMTP id DF3ADA2026;
 Sun,  5 Jul 2020 20:55:43 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by belindir.nask.net.pl (Postfix) with ESMTP id CDF0622C31;
 Sun,  5 Jul 2020 20:55:13 +0200 (CEST)
X-Amavis-Alert: BAD HEADER SECTION, Duplicate header field: "References"
Received: from belindir.nask.net.pl ([127.0.0.1])
 by localhost (belindir.nask.net.pl [127.0.0.1]) (amavisd-new, port 10032)
 with ESMTP id 3yV87tbjOx0a; Sun,  5 Jul 2020 20:55:08 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by belindir.nask.net.pl (Postfix) with ESMTP id 134B322C0A;
 Sun,  5 Jul 2020 20:55:08 +0200 (CEST)
X-Quarantine-ID: <Yo09Q0eUjxyb>
X-Virus-Scanned: amavisd-new at belindir.nask.net.pl
X-Amavis-Alert: BAD HEADER SECTION, Duplicate header field: "References"
Received: from belindir.nask.net.pl ([127.0.0.1])
 by localhost (belindir.nask.net.pl [127.0.0.1]) (amavisd-new, port 10026)
 with ESMTP id Yo09Q0eUjxyb; Sun,  5 Jul 2020 20:55:07 +0200 (CEST)
Received: from mq-desktop.cert.pl (unknown [195.187.238.217])
 by belindir.nask.net.pl (Postfix) with ESMTPSA id D88CA22BFF;
 Sun,  5 Jul 2020 20:55:07 +0200 (CEST)
From: =?UTF-8?q?Micha=C5=82=20Leszczy=C5=84ski?= <michal.leszczynski@cert.pl>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v5 03/11] x86/vmx: add IPT cpu feature
Date: Sun,  5 Jul 2020 20:54:56 +0200
Message-Id: <4d6eac657d082efaa0e7d141b5c9a07791b31f94.1593974333.git.michal.leszczynski@cert.pl>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1593974333.git.michal.leszczynski@cert.pl>
References: <cover.1593974333.git.michal.leszczynski@cert.pl>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Julien Grall <julien@xen.org>, Kevin Tian <kevin.tian@intel.com>,
 Stefano Stabellini <sstabellini@kernel.org>, luwei.kang@intel.com,
 Jun Nakajima <jun.nakajima@intel.com>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Michal Leszczynski <michal.leszczynski@cert.pl>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 tamas.lengyel@intel.com,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Michal Leszczynski <michal.leszczynski@cert.pl>

Check whether the Intel Processor Trace feature is supported by the
current processor, and define the vmtrace_supported global variable.

Signed-off-by: Michal Leszczynski <michal.leszczynski@cert.pl>
---
 xen/arch/x86/hvm/vmx/vmcs.c                 | 15 ++++++++++++++-
 xen/common/domain.c                         |  2 ++
 xen/include/asm-x86/cpufeature.h            |  1 +
 xen/include/asm-x86/hvm/vmx/vmcs.h          |  1 +
 xen/include/public/arch-x86/cpufeatureset.h |  1 +
 xen/include/xen/domain.h                    |  2 ++
 6 files changed, 21 insertions(+), 1 deletion(-)

diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
index ca94c2bedc..3a53553f10 100644
--- a/xen/arch/x86/hvm/vmx/vmcs.c
+++ b/xen/arch/x86/hvm/vmx/vmcs.c
@@ -291,6 +291,20 @@ static int vmx_init_vmcs_config(void)
         _vmx_cpu_based_exec_control &=
             ~(CPU_BASED_CR8_LOAD_EXITING | CPU_BASED_CR8_STORE_EXITING);
 
+    rdmsrl(MSR_IA32_VMX_MISC, _vmx_misc_cap);
+
+    /* Check whether IPT is supported in VMX operation. */
+    if ( !smp_processor_id() )
+        vmtrace_supported = cpu_has_ipt &&
+                            (_vmx_misc_cap & VMX_MISC_PROC_TRACE);
+    else if ( vmtrace_supported &&
+              !(_vmx_misc_cap & VMX_MISC_PROC_TRACE) )
+    {
+        printk("VMX: IPT capabilities fatally differ between CPU%u and CPU0\n",
+               smp_processor_id());
+        return -EINVAL;
+    }
+
     if ( _vmx_cpu_based_exec_control & CPU_BASED_ACTIVATE_SECONDARY_CONTROLS )
     {
         min = 0;
@@ -305,7 +319,6 @@ static int vmx_init_vmcs_config(void)
                SECONDARY_EXEC_ENABLE_VIRT_EXCEPTIONS |
                SECONDARY_EXEC_XSAVES |
                SECONDARY_EXEC_TSC_SCALING);
-        rdmsrl(MSR_IA32_VMX_MISC, _vmx_misc_cap);
         if ( _vmx_misc_cap & VMX_MISC_VMWRITE_ALL )
             opt |= SECONDARY_EXEC_ENABLE_VMCS_SHADOWING;
         if ( opt_vpid_enabled )
diff --git a/xen/common/domain.c b/xen/common/domain.c
index 7cc9526139..a45cf023f7 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -82,6 +82,8 @@ struct vcpu *idle_vcpu[NR_CPUS] __read_mostly;
 
 vcpu_info_t dummy_vcpu_info;
 
+bool vmtrace_supported __read_mostly;
+
 static void __domain_finalise_shutdown(struct domain *d)
 {
     struct vcpu *v;
diff --git a/xen/include/asm-x86/cpufeature.h b/xen/include/asm-x86/cpufeature.h
index f790d5c1f8..555f696a26 100644
--- a/xen/include/asm-x86/cpufeature.h
+++ b/xen/include/asm-x86/cpufeature.h
@@ -104,6 +104,7 @@
 #define cpu_has_clwb            boot_cpu_has(X86_FEATURE_CLWB)
 #define cpu_has_avx512er        boot_cpu_has(X86_FEATURE_AVX512ER)
 #define cpu_has_avx512cd        boot_cpu_has(X86_FEATURE_AVX512CD)
+#define cpu_has_ipt             boot_cpu_has(X86_FEATURE_PROC_TRACE)
 #define cpu_has_sha             boot_cpu_has(X86_FEATURE_SHA)
 #define cpu_has_avx512bw        boot_cpu_has(X86_FEATURE_AVX512BW)
 #define cpu_has_avx512vl        boot_cpu_has(X86_FEATURE_AVX512VL)
diff --git a/xen/include/asm-x86/hvm/vmx/vmcs.h b/xen/include/asm-x86/hvm/vmx/vmcs.h
index 906810592f..6153ba6769 100644
--- a/xen/include/asm-x86/hvm/vmx/vmcs.h
+++ b/xen/include/asm-x86/hvm/vmx/vmcs.h
@@ -283,6 +283,7 @@ extern u32 vmx_secondary_exec_control;
 #define VMX_VPID_INVVPID_SINGLE_CONTEXT_RETAINING_GLOBAL 0x80000000000ULL
 extern u64 vmx_ept_vpid_cap;
 
+#define VMX_MISC_PROC_TRACE                     0x00004000
 #define VMX_MISC_CR3_TARGET                     0x01ff0000
 #define VMX_MISC_VMWRITE_ALL                    0x20000000
 
diff --git a/xen/include/public/arch-x86/cpufeatureset.h b/xen/include/public/arch-x86/cpufeatureset.h
index fe7492a225..2c91862f2d 100644
--- a/xen/include/public/arch-x86/cpufeatureset.h
+++ b/xen/include/public/arch-x86/cpufeatureset.h
@@ -217,6 +217,7 @@ XEN_CPUFEATURE(SMAP,          5*32+20) /*S  Supervisor Mode Access Prevention */
 XEN_CPUFEATURE(AVX512_IFMA,   5*32+21) /*A  AVX-512 Integer Fused Multiply Add */
 XEN_CPUFEATURE(CLFLUSHOPT,    5*32+23) /*A  CLFLUSHOPT instruction */
 XEN_CPUFEATURE(CLWB,          5*32+24) /*A  CLWB instruction */
+XEN_CPUFEATURE(PROC_TRACE,    5*32+25) /*   Processor Tracing feature */
 XEN_CPUFEATURE(AVX512PF,      5*32+26) /*A  AVX-512 Prefetch Instructions */
 XEN_CPUFEATURE(AVX512ER,      5*32+27) /*A  AVX-512 Exponent & Reciprocal Instrs */
 XEN_CPUFEATURE(AVX512CD,      5*32+28) /*A  AVX-512 Conflict Detection Instrs */
diff --git a/xen/include/xen/domain.h b/xen/include/xen/domain.h
index 7e51d361de..61ebc6c24d 100644
--- a/xen/include/xen/domain.h
+++ b/xen/include/xen/domain.h
@@ -130,4 +130,6 @@ struct vnuma_info {
 
 void vnuma_destroy(struct vnuma_info *vnuma);
 
+extern bool vmtrace_supported;
+
 #endif /* __XEN_DOMAIN_H__ */
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Sun Jul 05 18:56:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 05 Jul 2020 18:56:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1js9nz-0005d4-0f; Sun, 05 Jul 2020 18:55:59 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NZHG=AQ=cert.pl=michal.leszczynski@srs-us1.protection.inumbo.net>)
 id 1js9nx-0005bb-O2
 for xen-devel@lists.xenproject.org; Sun, 05 Jul 2020 18:55:57 +0000
X-Inumbo-ID: 2245c091-bef1-11ea-8c02-12813bfff9fa
Received: from bagnar.nask.net.pl (unknown [195.187.242.196])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2245c091-bef1-11ea-8c02-12813bfff9fa;
 Sun, 05 Jul 2020 18:55:46 +0000 (UTC)
Received: from bagnar.nask.net.pl (unknown [172.16.9.10])
 by bagnar.nask.net.pl (Postfix) with ESMTP id 77381A202C;
 Sun,  5 Jul 2020 20:55:45 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by bagnar.nask.net.pl (Postfix) with ESMTP id 66A44A2022;
 Sun,  5 Jul 2020 20:55:44 +0200 (CEST)
X-Amavis-Alert: BAD HEADER SECTION, Duplicate header field: "References"
Received: from bagnar.nask.net.pl ([127.0.0.1])
 by localhost (bagnar.nask.net.pl [127.0.0.1]) (amavisd-new, port 10032)
 with ESMTP id 6bSkVzviQlTy; Sun,  5 Jul 2020 20:55:43 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by bagnar.nask.net.pl (Postfix) with ESMTP id BC2BCA200D;
 Sun,  5 Jul 2020 20:55:43 +0200 (CEST)
X-Virus-Scanned: amavisd-new at bagnar.nask.net.pl
X-Amavis-Alert: BAD HEADER SECTION, Duplicate header field: "References"
Received: from bagnar.nask.net.pl ([127.0.0.1])
 by localhost (bagnar.nask.net.pl [127.0.0.1]) (amavisd-new, port 10026)
 with ESMTP id zfmRvghhShE3; Sun,  5 Jul 2020 20:55:43 +0200 (CEST)
Received: from belindir.nask.net.pl (belindir-ext.nask.net.pl
 [195.187.242.210])
 by bagnar.nask.net.pl (Postfix) with ESMTP id 82B4FA1CEA;
 Sun,  5 Jul 2020 20:55:43 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by belindir.nask.net.pl (Postfix) with ESMTP id 698AC22C2A;
 Sun,  5 Jul 2020 20:55:13 +0200 (CEST)
X-Amavis-Alert: BAD HEADER SECTION, Duplicate header field: "References"
Received: from belindir.nask.net.pl ([127.0.0.1])
 by localhost (belindir.nask.net.pl [127.0.0.1]) (amavisd-new, port 10032)
 with ESMTP id px1dtBfU7_oa; Sun,  5 Jul 2020 20:55:08 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by belindir.nask.net.pl (Postfix) with ESMTP id E6A9B22BF6;
 Sun,  5 Jul 2020 20:55:07 +0200 (CEST)
X-Quarantine-ID: <6GIZq3m1Wlkr>
X-Virus-Scanned: amavisd-new at belindir.nask.net.pl
X-Amavis-Alert: BAD HEADER SECTION, Duplicate header field: "References"
Received: from belindir.nask.net.pl ([127.0.0.1])
 by localhost (belindir.nask.net.pl [127.0.0.1]) (amavisd-new, port 10026)
 with ESMTP id 6GIZq3m1Wlkr; Sun,  5 Jul 2020 20:55:07 +0200 (CEST)
Received: from mq-desktop.cert.pl (unknown [195.187.238.217])
 by belindir.nask.net.pl (Postfix) with ESMTPSA id B921E2295A;
 Sun,  5 Jul 2020 20:55:07 +0200 (CEST)
From: =?UTF-8?q?Micha=C5=82=20Leszczy=C5=84ski?= <michal.leszczynski@cert.pl>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v5 01/11] memory: batch processing in acquire_resource()
Date: Sun,  5 Jul 2020 20:54:54 +0200
Message-Id: <02415890e4e8211513b495228c790e1d16de767f.1593974333.git.michal.leszczynski@cert.pl>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1593974333.git.michal.leszczynski@cert.pl>
References: <cover.1593974333.git.michal.leszczynski@cert.pl>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 luwei.kang@intel.com, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Michal Leszczynski <michal.leszczynski@cert.pl>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 tamas.lengyel@intel.com
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Michal Leszczynski <michal.leszczynski@cert.pl>

Allow large resources to be acquired by letting acquire_resource()
process items in batches, using hypercall continuations.

Be aware that this modifies the behaviour of an acquire_resource
call with frame_list=NULL. Previously it returned the size of the
internal array (32); with this patch it returns the maximum number
of frames that can be requested at once,
i.e. UINT_MAX >> MEMOP_EXTENT_SHIFT.

Signed-off-by: Michal Leszczynski <michal.leszczynski@cert.pl>
---
 xen/common/memory.c | 49 ++++++++++++++++++++++++++++++++++++++++-----
 1 file changed, 44 insertions(+), 5 deletions(-)

diff --git a/xen/common/memory.c b/xen/common/memory.c
index 714077c1e5..eb42f883df 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -1046,10 +1046,12 @@ static int acquire_grant_table(struct domain *d, unsigned int id,
 }
 
 static int acquire_resource(
-    XEN_GUEST_HANDLE_PARAM(xen_mem_acquire_resource_t) arg)
+    XEN_GUEST_HANDLE_PARAM(xen_mem_acquire_resource_t) arg,
+    unsigned long *start_extent)
 {
     struct domain *d, *currd = current->domain;
     xen_mem_acquire_resource_t xmar;
+    uint32_t total_frames;
     /*
      * The mfn_list and gfn_list (below) arrays are ok on stack for the
      * moment since they are small, but if they need to grow in future
@@ -1069,7 +1071,7 @@ static int acquire_resource(
         if ( xmar.nr_frames )
             return -EINVAL;
 
-        xmar.nr_frames = ARRAY_SIZE(mfn_list);
+        xmar.nr_frames = UINT_MAX >> MEMOP_EXTENT_SHIFT;
 
         if ( __copy_field_to_guest(arg, &xmar, nr_frames) )
             return -EFAULT;
@@ -1077,8 +1079,28 @@ static int acquire_resource(
         return 0;
     }
 
+    total_frames = xmar.nr_frames;
+
+    /* Is the size too large for us to encode a continuation? */
+    if ( unlikely(xmar.nr_frames > (UINT_MAX >> MEMOP_EXTENT_SHIFT)) )
+        return -EINVAL;
+
+    if ( *start_extent )
+    {
+        /*
+         * Check whether start_extent is in bounds, as this
+         * value is visible to the calling domain.
+         */
+        if ( *start_extent > xmar.nr_frames )
+            return -EINVAL;
+
+        xmar.frame += *start_extent;
+        xmar.nr_frames -= *start_extent;
+        guest_handle_add_offset(xmar.frame_list, *start_extent);
+    }
+
     if ( xmar.nr_frames > ARRAY_SIZE(mfn_list) )
-        return -E2BIG;
+        xmar.nr_frames = ARRAY_SIZE(mfn_list);
 
     rc = rcu_lock_remote_domain_by_id(xmar.domid, &d);
     if ( rc )
@@ -1135,6 +1157,14 @@ static int acquire_resource(
         }
     }
 
+    if ( !rc )
+    {
+        *start_extent += xmar.nr_frames;
+
+        if ( *start_extent != total_frames )
+            rc = -ERESTART;
+    }
+
  out:
     rcu_unlock_domain(d);
 
@@ -1599,8 +1629,17 @@ long do_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 #endif
 
     case XENMEM_acquire_resource:
-        rc = acquire_resource(
-            guest_handle_cast(arg, xen_mem_acquire_resource_t));
+        do {
+            rc = acquire_resource(
+                guest_handle_cast(arg, xen_mem_acquire_resource_t),
+                &start_extent);
+
+            if ( hypercall_preempt_check() )
+                return hypercall_create_continuation(
+                    __HYPERVISOR_memory_op, "lh",
+                    op | (start_extent << MEMOP_EXTENT_SHIFT), arg);
+        } while ( rc == -ERESTART );
+
         break;
 
     default:
-- 
2.17.1
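[Editorial note on the hunk above: the continuation mechanism packs the number
of already-processed extents into the hypercall's cmd argument via
`op | (start_extent << MEMOP_EXTENT_SHIFT)`. The sketch below illustrates that
encoding in isolation; it is not the Xen source, and MEMOP_EXTENT_SHIFT = 6 is
an assumption mirroring Xen's public memory_op ABI.]

```c
#include <assert.h>

/* Illustrative sketch of the memory_op continuation encoding: the low
 * MEMOP_EXTENT_SHIFT bits of cmd carry the sub-operation, the remaining
 * high bits carry the restart point (start_extent). */
#define MEMOP_EXTENT_SHIFT 6
#define MEMOP_CMD_MASK     ((1UL << MEMOP_EXTENT_SHIFT) - 1)

static unsigned long encode_cmd(unsigned long op, unsigned long start_extent)
{
    /* This is why the hunk rejects nr_frames > UINT_MAX >> MEMOP_EXTENT_SHIFT:
     * larger values could not be round-tripped through cmd. */
    return op | (start_extent << MEMOP_EXTENT_SHIFT);
}

static unsigned long decode_op(unsigned long cmd)
{
    return cmd & MEMOP_CMD_MASK;
}

static unsigned long decode_start_extent(unsigned long cmd)
{
    return cmd >> MEMOP_EXTENT_SHIFT;
}
```

A continuation created with op 28 and start_extent 32, for example, decodes
back to the same pair on re-entry.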



From xen-devel-bounces@lists.xenproject.org Sun Jul 05 18:56:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 05 Jul 2020 18:56:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1js9no-0005bl-S5; Sun, 05 Jul 2020 18:55:48 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NZHG=AQ=cert.pl=michal.leszczynski@srs-us1.protection.inumbo.net>)
 id 1js9nn-0005bb-Sy
 for xen-devel@lists.xenproject.org; Sun, 05 Jul 2020 18:55:47 +0000
X-Inumbo-ID: 2245a02e-bef1-11ea-8c02-12813bfff9fa
Received: from bagnar.nask.net.pl (unknown [195.187.242.196])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2245a02e-bef1-11ea-8c02-12813bfff9fa;
 Sun, 05 Jul 2020 18:55:46 +0000 (UTC)
Received: from bagnar.nask.net.pl (unknown [172.16.9.10])
 by bagnar.nask.net.pl (Postfix) with ESMTP id 7B4FCA2046;
 Sun,  5 Jul 2020 20:55:45 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by bagnar.nask.net.pl (Postfix) with ESMTP id 71C8EA203D;
 Sun,  5 Jul 2020 20:55:44 +0200 (CEST)
X-Amavis-Alert: BAD HEADER SECTION, Duplicate header field: "References"
Received: from bagnar.nask.net.pl ([127.0.0.1])
 by localhost (bagnar.nask.net.pl [127.0.0.1]) (amavisd-new, port 10032)
 with ESMTP id LpZ0XIP1sjKn; Sun,  5 Jul 2020 20:55:44 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by bagnar.nask.net.pl (Postfix) with ESMTP id EDFC8A1F8E;
 Sun,  5 Jul 2020 20:55:43 +0200 (CEST)
X-Virus-Scanned: amavisd-new at bagnar.nask.net.pl
X-Amavis-Alert: BAD HEADER SECTION, Duplicate header field: "References"
Received: from bagnar.nask.net.pl ([127.0.0.1])
 by localhost (bagnar.nask.net.pl [127.0.0.1]) (amavisd-new, port 10026)
 with ESMTP id arBBim_pQi6z; Sun,  5 Jul 2020 20:55:43 +0200 (CEST)
Received: from belindir.nask.net.pl (belindir-ext.nask.net.pl
 [195.187.242.210])
 by bagnar.nask.net.pl (Postfix) with ESMTP id B14CFA1FE9;
 Sun,  5 Jul 2020 20:55:43 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by belindir.nask.net.pl (Postfix) with ESMTP id 9D6C822C09;
 Sun,  5 Jul 2020 20:55:13 +0200 (CEST)
X-Amavis-Alert: BAD HEADER SECTION, Duplicate header field: "References"
Received: from belindir.nask.net.pl ([127.0.0.1])
 by localhost (belindir.nask.net.pl [127.0.0.1]) (amavisd-new, port 10032)
 with ESMTP id IIeWzElhpfKb; Sun,  5 Jul 2020 20:55:08 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by belindir.nask.net.pl (Postfix) with ESMTP id 27D5422C11;
 Sun,  5 Jul 2020 20:55:08 +0200 (CEST)
X-Quarantine-ID: <U7Rxn7a5967L>
X-Virus-Scanned: amavisd-new at belindir.nask.net.pl
X-Amavis-Alert: BAD HEADER SECTION, Duplicate header field: "References"
Received: from belindir.nask.net.pl ([127.0.0.1])
 by localhost (belindir.nask.net.pl [127.0.0.1]) (amavisd-new, port 10026)
 with ESMTP id U7Rxn7a5967L; Sun,  5 Jul 2020 20:55:08 +0200 (CEST)
Received: from mq-desktop.cert.pl (unknown [195.187.238.217])
 by belindir.nask.net.pl (Postfix) with ESMTPSA id EDAC622C06;
 Sun,  5 Jul 2020 20:55:07 +0200 (CEST)
From: =?UTF-8?q?Micha=C5=82=20Leszczy=C5=84ski?= <michal.leszczynski@cert.pl>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v5 04/11] common: add vmtrace_pt_size domain parameter
Date: Sun,  5 Jul 2020 20:54:57 +0200
Message-Id: <5d52b37e391a4165dc3775f77a621d34a33d22c2.1593974333.git.michal.leszczynski@cert.pl>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1593974333.git.michal.leszczynski@cert.pl>
References: <cover.1593974333.git.michal.leszczynski@cert.pl>
In-Reply-To: <cover.1593974333.git.michal.leszczynski@cert.pl>
References: <cover.1593974333.git.michal.leszczynski@cert.pl>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 luwei.kang@intel.com, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Michal Leszczynski <michal.leszczynski@cert.pl>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 tamas.lengyel@intel.com
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Michal Leszczynski <michal.leszczynski@cert.pl>

Add a vmtrace_pt_size parameter to the live domain structure and a
vmtrace_pt_order field to xen_domctl_createdomain.

Signed-off-by: Michal Leszczynski <michal.leszczynski@cert.pl>
---
 xen/common/domain.c         | 12 ++++++++++++
 xen/include/public/domctl.h |  1 +
 xen/include/xen/sched.h     |  4 ++++
 3 files changed, 17 insertions(+)

diff --git a/xen/common/domain.c b/xen/common/domain.c
index a45cf023f7..25d3359c5b 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -338,6 +338,12 @@ static int sanitise_domain_config(struct xen_domctl_createdomain *config)
         return -EINVAL;
     }
 
+    if ( config->vmtrace_pt_order && !vmtrace_supported )
+    {
+        dprintk(XENLOG_INFO, "Processor tracing is not supported\n");
+        return -EINVAL;
+    }
+
     return arch_sanitise_domain_config(config);
 }
 
@@ -443,6 +449,12 @@ struct domain *domain_create(domid_t domid,
         d->nr_pirqs = min(d->nr_pirqs, nr_irqs);
 
         radix_tree_init(&d->pirq_tree);
+
+        if ( config->vmtrace_pt_order )
+        {
+            uint32_t shift_val = config->vmtrace_pt_order + PAGE_SHIFT;
+            d->vmtrace_pt_size = (1ULL << shift_val);
+        }
     }
 
     if ( (err = arch_domain_create(d, config)) != 0 )
diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
index 59bdc28c89..7b8289d436 100644
--- a/xen/include/public/domctl.h
+++ b/xen/include/public/domctl.h
@@ -92,6 +92,7 @@ struct xen_domctl_createdomain {
     uint32_t max_evtchn_port;
     int32_t max_grant_frames;
     int32_t max_maptrack_frames;
+    uint8_t vmtrace_pt_order;
 
     struct xen_arch_domainconfig arch;
 };
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index ac53519d7f..48f0a61bbd 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -457,6 +457,10 @@ struct domain
     unsigned    pbuf_idx;
     spinlock_t  pbuf_lock;
 
+    /* Used by vmtrace features */
+    spinlock_t  vmtrace_lock;
+    uint64_t    vmtrace_pt_size;
+
     /* OProfile support. */
     struct xenoprof *xenoprof;
 
-- 
2.17.1
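[Editorial note on the domain_create hunk above: vmtrace_pt_order is an order
(power of two) of pages, and the domain stores the resulting size in bytes.
The helper below restates that conversion standalone; the function name is
illustrative and PAGE_SHIFT = 12 (4 KiB pages, as on x86) is an assumption.]

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SHIFT 12  /* assumed 4 KiB pages */

/* An order of N means a buffer of 2^N pages, i.e. 2^(N + PAGE_SHIFT)
 * bytes; order 0 means vmtrace is disabled, matching the guard in
 * domain_create which only sets vmtrace_pt_size for a non-zero order. */
static uint64_t vmtrace_order_to_size(uint8_t order)
{
    return order ? (uint64_t)1 << (order + PAGE_SHIFT) : 0;
}
```

So an order of 1 yields an 8 KiB buffer, and an order of 14 a 64 MiB one.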



From xen-devel-bounces@lists.xenproject.org Sun Jul 05 18:56:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 05 Jul 2020 18:56:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1js9np-0005br-3u; Sun, 05 Jul 2020 18:55:49 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NZHG=AQ=cert.pl=michal.leszczynski@srs-us1.protection.inumbo.net>)
 id 1js9no-0005bc-36
 for xen-devel@lists.xenproject.org; Sun, 05 Jul 2020 18:55:48 +0000
X-Inumbo-ID: 225d0dc2-bef1-11ea-bca7-bc764e2007e4
Received: from bagnar.nask.net.pl (unknown [195.187.242.196])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 225d0dc2-bef1-11ea-bca7-bc764e2007e4;
 Sun, 05 Jul 2020 18:55:46 +0000 (UTC)
Received: from bagnar.nask.net.pl (unknown [172.16.9.10])
 by bagnar.nask.net.pl (Postfix) with ESMTP id B3835A2022;
 Sun,  5 Jul 2020 20:55:45 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by bagnar.nask.net.pl (Postfix) with ESMTP id AB0C0A1F8E;
 Sun,  5 Jul 2020 20:55:44 +0200 (CEST)
Received: from bagnar.nask.net.pl ([127.0.0.1])
 by localhost (bagnar.nask.net.pl [127.0.0.1]) (amavisd-new, port 10032)
 with ESMTP id M_2M1YoxZiRc; Sun,  5 Jul 2020 20:55:44 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by bagnar.nask.net.pl (Postfix) with ESMTP id 01133A201B;
 Sun,  5 Jul 2020 20:55:43 +0200 (CEST)
X-Virus-Scanned: amavisd-new at bagnar.nask.net.pl
Received: from bagnar.nask.net.pl ([127.0.0.1])
 by localhost (bagnar.nask.net.pl [127.0.0.1]) (amavisd-new, port 10026)
 with ESMTP id y5Wiv2tvmoYa; Sun,  5 Jul 2020 20:55:43 +0200 (CEST)
Received: from belindir.nask.net.pl (belindir-ext.nask.net.pl
 [195.187.242.210])
 by bagnar.nask.net.pl (Postfix) with ESMTP id A5E75A1F91;
 Sun,  5 Jul 2020 20:55:43 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by belindir.nask.net.pl (Postfix) with ESMTP id 721A622C2B;
 Sun,  5 Jul 2020 20:55:13 +0200 (CEST)
Received: from belindir.nask.net.pl ([127.0.0.1])
 by localhost (belindir.nask.net.pl [127.0.0.1]) (amavisd-new, port 10032)
 with ESMTP id ZdhWyUy1OfMH; Sun,  5 Jul 2020 20:55:07 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by belindir.nask.net.pl (Postfix) with ESMTP id CEF9F22BF7;
 Sun,  5 Jul 2020 20:55:07 +0200 (CEST)
X-Virus-Scanned: amavisd-new at belindir.nask.net.pl
Received: from belindir.nask.net.pl ([127.0.0.1])
 by localhost (belindir.nask.net.pl [127.0.0.1]) (amavisd-new, port 10026)
 with ESMTP id BPpe4wxl5z5N; Sun,  5 Jul 2020 20:55:07 +0200 (CEST)
Received: from mq-desktop.cert.pl (unknown [195.187.238.217])
 by belindir.nask.net.pl (Postfix) with ESMTPSA id 9CA72228F7;
 Sun,  5 Jul 2020 20:55:07 +0200 (CEST)
From: =?UTF-8?q?Micha=C5=82=20Leszczy=C5=84ski?= <michal.leszczynski@cert.pl>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v5 00/11] Implement support for external IPT monitoring
Date: Sun,  5 Jul 2020 20:54:53 +0200
Message-Id: <cover.1593974333.git.michal.leszczynski@cert.pl>
X-Mailer: git-send-email 2.17.1
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Julien Grall <julien@xen.org>, Kevin Tian <kevin.tian@intel.com>,
 Stefano Stabellini <sstabellini@kernel.org>, luwei.kang@intel.com,
 Jun Nakajima <jun.nakajima@intel.com>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Anthony PERARD <anthony.perard@citrix.com>, tamas.lengyel@intel.com,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Intel Processor Trace is an architectural extension available in modern Intel
family CPUs. It allows recording a detailed trace of activity while the
processor executes code. One might use the recorded trace to reconstruct
the code flow, i.e. to find out the executed code paths, determine
branches taken, and so forth.

The abovementioned feature is described in Intel(R) 64 and IA-32 Architectures
Software Developer's Manual Volume 3C: System Programming Guide, Part 3,
Chapter 36: "Intel Processor Trace."

This patch series implements an interface that Dom0 could use in order to
enable IPT for particular vCPUs in DomU, allowing for external monitoring. Such
a feature has numerous applications like malware monitoring, fuzzing, or
performance testing.

Also, thanks to Tamas K Lengyel for a few preliminary hints before the
first version of this patch was submitted to xen-devel.

Changed since v1:
  * MSR_RTIT_CTL is managed using MSR load lists
  * other PT-related MSRs are modified only when vCPU goes out of context
  * trace buffer is now acquired as a resource
  * added vmtrace_pt_size parameter in xl.cfg, the size of trace buffer
    must be specified in the moment of domain creation
  * trace buffers are allocated on domain creation and freed on
    domain destruction
  * HVMOP_vmtrace_ipt_enable/disable is limited to enabling/disabling PT;
    these calls don't manage buffer memory anymore
  * lifted 32 MFN/GFN array limit when acquiring resources
  * minor code style changes according to review

Changed since v2:
  * trace buffer is now allocated on domain creation (in v2 it was
    allocated when hvm param was set)
  * restored 32-item limit in mfn/gfn arrays in acquire_resource
    and instead implemented hypercall continuations
  * code changes according to Jan's and Roger's review

Changed since v3:
  * vmtrace HVMOPs are now implemented as DOMCTLs
  * patches split up according to Andrew's comments
  * code changes according to v3 review on the mailing list

Changed since v4:
  * rebased to commit be63d9d4
  * fixed dependencies between patches
    (earlier patches don't reference further patches)
  * introduced preemption check in acquire_resource
  * moved buffer allocation to common code
  * split some patches according to code review
  * minor fixes according to code review

This patch series is available on GitHub:
https://github.com/icedevml/xen/tree/ipt-patch-v5


Michal Leszczynski (11):
  memory: batch processing in acquire_resource()
  x86/vmx: add Intel PT MSR definitions
  x86/vmx: add IPT cpu feature
  common: add vmtrace_pt_size domain parameter
  tools/libxl: add vmtrace_pt_size parameter
  x86/hvm: processor trace interface in HVM
  x86/vmx: implement IPT in VMX
  x86/mm: add vmtrace_buf resource type
  x86/domctl: add XEN_DOMCTL_vmtrace_op
  tools/libxc: add xc_vmtrace_* functions
  tools/proctrace: add proctrace tool

 docs/man/xl.cfg.5.pod.in                    |  11 ++
 tools/golang/xenlight/helpers.gen.go        |   2 +
 tools/golang/xenlight/types.gen.go          |   1 +
 tools/libxc/Makefile                        |   1 +
 tools/libxc/include/xenctrl.h               |  39 +++++
 tools/libxc/xc_vmtrace.c                    |  73 +++++++++
 tools/libxl/libxl.h                         |   8 +
 tools/libxl/libxl_create.c                  |   1 +
 tools/libxl/libxl_types.idl                 |   2 +
 tools/proctrace/Makefile                    |  48 ++++++
 tools/proctrace/proctrace.c                 | 163 ++++++++++++++++++++
 tools/xl/xl_parse.c                         |  22 +++
 xen/arch/x86/domain.c                       |  19 +++
 xen/arch/x86/domctl.c                       |  48 ++++++
 xen/arch/x86/hvm/vmx/vmcs.c                 |  15 +-
 xen/arch/x86/hvm/vmx/vmx.c                  | 109 +++++++++++++
 xen/common/domain.c                         |  33 ++++
 xen/common/memory.c                         |  77 ++++++++-
 xen/include/asm-x86/cpufeature.h            |   1 +
 xen/include/asm-x86/hvm/hvm.h               |  20 +++
 xen/include/asm-x86/hvm/vmx/vmcs.h          |   4 +
 xen/include/asm-x86/hvm/vmx/vmx.h           |  14 ++
 xen/include/asm-x86/msr-index.h             |  24 +++
 xen/include/public/arch-x86/cpufeatureset.h |   1 +
 xen/include/public/domctl.h                 |  27 ++++
 xen/include/public/memory.h                 |   1 +
 xen/include/xen/domain.h                    |   2 +
 xen/include/xen/sched.h                     |   8 +
 28 files changed, 768 insertions(+), 6 deletions(-)
 create mode 100644 tools/libxc/xc_vmtrace.c
 create mode 100644 tools/proctrace/Makefile
 create mode 100644 tools/proctrace/proctrace.c

-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Sun Jul 05 18:56:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 05 Jul 2020 18:56:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1js9oL-0005mX-9L; Sun, 05 Jul 2020 18:56:21 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NZHG=AQ=cert.pl=michal.leszczynski@srs-us1.protection.inumbo.net>)
 id 1js9oK-0005m2-00
 for xen-devel@lists.xenproject.org; Sun, 05 Jul 2020 18:56:20 +0000
X-Inumbo-ID: 359cef92-bef1-11ea-b7bb-bc764e2007e4
Received: from bagnar.nask.net.pl (unknown [195.187.242.196])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 359cef92-bef1-11ea-b7bb-bc764e2007e4;
 Sun, 05 Jul 2020 18:56:19 +0000 (UTC)
Received: from bagnar.nask.net.pl (unknown [172.16.9.10])
 by bagnar.nask.net.pl (Postfix) with ESMTP id 2CEB8A20CD;
 Sun,  5 Jul 2020 20:56:17 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by bagnar.nask.net.pl (Postfix) with ESMTP id 02E85A20C3;
 Sun,  5 Jul 2020 20:56:16 +0200 (CEST)
X-Amavis-Alert: BAD HEADER SECTION, Duplicate header field: "References"
Received: from bagnar.nask.net.pl ([127.0.0.1])
 by localhost (bagnar.nask.net.pl [127.0.0.1]) (amavisd-new, port 10032)
 with ESMTP id 1I7hzOnL4bmA; Sun,  5 Jul 2020 20:56:15 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by bagnar.nask.net.pl (Postfix) with ESMTP id D1F3EA209C;
 Sun,  5 Jul 2020 20:56:14 +0200 (CEST)
X-Virus-Scanned: amavisd-new at bagnar.nask.net.pl
X-Amavis-Alert: BAD HEADER SECTION, Duplicate header field: "References"
Received: from bagnar.nask.net.pl ([127.0.0.1])
 by localhost (bagnar.nask.net.pl [127.0.0.1]) (amavisd-new, port 10026)
 with ESMTP id vSDx68SufYYS; Sun,  5 Jul 2020 20:56:14 +0200 (CEST)
Received: from belindir.nask.net.pl (belindir-ext.nask.net.pl
 [195.187.242.210])
 by bagnar.nask.net.pl (Postfix) with ESMTP id 1C8C7A2037;
 Sun,  5 Jul 2020 20:56:14 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by belindir.nask.net.pl (Postfix) with ESMTP id 7365222C24;
 Sun,  5 Jul 2020 20:55:19 +0200 (CEST)
X-Amavis-Alert: BAD HEADER SECTION, Duplicate header field: "References"
Received: from belindir.nask.net.pl ([127.0.0.1])
 by localhost (belindir.nask.net.pl [127.0.0.1]) (amavisd-new, port 10032)
 with ESMTP id 2s-5pUYlNWXw; Sun,  5 Jul 2020 20:55:13 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by belindir.nask.net.pl (Postfix) with ESMTP id B371B22C22;
 Sun,  5 Jul 2020 20:55:08 +0200 (CEST)
X-Quarantine-ID: <cnal_8Z5apTk>
X-Virus-Scanned: amavisd-new at belindir.nask.net.pl
X-Amavis-Alert: BAD HEADER SECTION, Duplicate header field: "References"
Received: from belindir.nask.net.pl ([127.0.0.1])
 by localhost (belindir.nask.net.pl [127.0.0.1]) (amavisd-new, port 10026)
 with ESMTP id cnal_8Z5apTk; Sun,  5 Jul 2020 20:55:08 +0200 (CEST)
Received: from mq-desktop.cert.pl (unknown [195.187.238.217])
 by belindir.nask.net.pl (Postfix) with ESMTPSA id 8764A22C24;
 Sun,  5 Jul 2020 20:55:08 +0200 (CEST)
From: =?UTF-8?q?Micha=C5=82=20Leszczy=C5=84ski?= <michal.leszczynski@cert.pl>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v5 11/11] tools/proctrace: add proctrace tool
Date: Sun,  5 Jul 2020 20:55:04 +0200
Message-Id: <e0ac5422825ce307470256aab1652336d5179a9a.1593974333.git.michal.leszczynski@cert.pl>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1593974333.git.michal.leszczynski@cert.pl>
References: <cover.1593974333.git.michal.leszczynski@cert.pl>
MIME-Version: 1.0
In-Reply-To: <cover.1593974333.git.michal.leszczynski@cert.pl>
References: <cover.1593974333.git.michal.leszczynski@cert.pl>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: luwei.kang@intel.com, Michal Leszczynski <michal.leszczynski@cert.pl>,
 tamas.lengyel@intel.com, Ian Jackson <ian.jackson@eu.citrix.com>,
 Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Michal Leszczynski <michal.leszczynski@cert.pl>

Add a demonstration tool that uses xc_vmtrace_* calls in order
to manage external IPT monitoring for DomU.

Signed-off-by: Michal Leszczynski <michal.leszczynski@cert.pl>
---
 tools/proctrace/Makefile    |  48 +++++++++++
 tools/proctrace/proctrace.c | 163 ++++++++++++++++++++++++++++++++++++
 2 files changed, 211 insertions(+)
 create mode 100644 tools/proctrace/Makefile
 create mode 100644 tools/proctrace/proctrace.c

diff --git a/tools/proctrace/Makefile b/tools/proctrace/Makefile
new file mode 100644
index 0000000000..2983c477fe
--- /dev/null
+++ b/tools/proctrace/Makefile
@@ -0,0 +1,48 @@
+# Copyright (C) CERT Polska - NASK PIB
+# Author: Michał Leszczyński <michal.leszczynski@cert.pl>
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; under version 2 of the License.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+
+XEN_ROOT=$(CURDIR)/../..
+include $(XEN_ROOT)/tools/Rules.mk
+
+CFLAGS  += -Werror
+CFLAGS  += $(CFLAGS_libxenevtchn)
+CFLAGS  += $(CFLAGS_libxenctrl)
+LDLIBS  += $(LDLIBS_libxenctrl)
+LDLIBS  += $(LDLIBS_libxenevtchn)
+LDLIBS  += $(LDLIBS_libxenforeignmemory)
+
+.PHONY: all
+all: build
+
+.PHONY: build
+build: proctrace
+
+.PHONY: install
+install: build
+	$(INSTALL_DIR) $(DESTDIR)$(sbindir)
+	$(INSTALL_PROG) proctrace $(DESTDIR)$(sbindir)/proctrace
+
+.PHONY: uninstall
+uninstall:
+	rm -f $(DESTDIR)$(sbindir)/proctrace
+
+.PHONY: clean
+clean:
+	$(RM) -f proctrace *.o $(DEPS_RM)
+
+.PHONY: distclean
+distclean: clean
+
+proctrace: proctrace.o Makefile
+	$(CC) $(LDFLAGS) $< -o $@ $(LDLIBS) $(APPEND_LDFLAGS)
+
+-include $(DEPS_INCLUDE)
diff --git a/tools/proctrace/proctrace.c b/tools/proctrace/proctrace.c
new file mode 100644
index 0000000000..22bf91db8d
--- /dev/null
+++ b/tools/proctrace/proctrace.c
@@ -0,0 +1,163 @@
+/******************************************************************************
+ * tools/proctrace.c
+ *
+ * Demonstrative tool for collecting Intel Processor Trace data from Xen.
+ *  Could be used to externally monitor a given vCPU in given DomU.
+ *
+ * Copyright (C) 2020 by CERT Polska - NASK PIB
+ *
+ * Authors: Michał Leszczyński, michal.leszczynski@cert.pl
+ * Date:    June, 2020
+ *
+ *  This program is free software; you can redistribute it and/or modify
+ *  it under the terms of the GNU General Public License as published by
+ *  the Free Software Foundation; under version 2 of the License.
+ *
+ *  This program is distributed in the hope that it will be useful,
+ *  but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *  GNU General Public License for more details.
+ *
+ *  You should have received a copy of the GNU General Public License
+ *  along with this program; If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <stdlib.h>
+#include <stdio.h>
+#include <sys/mman.h>
+#include <signal.h>
+
+#include <xenctrl.h>
+#include <xen/xen.h>
+#include <xenforeignmemory.h>
+
+#define BUF_SIZE (16384 * XC_PAGE_SIZE)
+
+volatile int interrupted = 0;
+
+void term_handler(int signum) {
+    interrupted = 1;
+}
+
+int main(int argc, char* argv[]) {
+    xc_interface *xc;
+    uint32_t domid;
+    uint32_t vcpu_id;
+
+    int rc = -1;
+    uint8_t *buf = NULL;
+    uint64_t last_offset = 0;
+
+    xenforeignmemory_handle *fmem;
+    xenforeignmemory_resource_handle *fres;
+
+    if (signal(SIGINT, term_handler) == SIG_ERR)
+    {
+        fprintf(stderr, "Failed to register signal handler\n");
+        return 1;
+    }
+
+    if (argc != 3) {
+        fprintf(stderr, "Usage: %s <domid> <vcpu_id>\n", argv[0]);
+        fprintf(stderr, "It's recommended to redirect this "
+                        "program's output to a file,\n");
+        fprintf(stderr, "or to pipe its output to xxd or another program.\n");
+        return 1;
+    }
+
+    domid = atoi(argv[1]);
+    vcpu_id = atoi(argv[2]);
+
+    xc = xc_interface_open(0, 0, 0);
+
+    fmem = xenforeignmemory_open(0, 0);
+
+    if (!xc) {
+        fprintf(stderr, "Failed to open xc interface\n");
+        return 1;
+    }
+
+    rc = xc_vmtrace_pt_enable(xc, domid, vcpu_id);
+
+    if (rc) {
+        fprintf(stderr, "Failed to call xc_vmtrace_pt_enable\n");
+        return 1;
+    }
+
+    fres = xenforeignmemory_map_resource(
+        fmem, domid, XENMEM_resource_vmtrace_buf,
+        /* vcpu: */ vcpu_id,
+        /* frame: */ 0,
+        /* num_frames: */ BUF_SIZE >> XC_PAGE_SHIFT,
+        (void **)&buf,
+        PROT_READ, 0);
+
+    if (!buf) {
+        fprintf(stderr, "Failed to map trace buffer\n");
+        return 1;
+    }
+
+    while (!interrupted) {
+        uint64_t offset;
+        rc = xc_vmtrace_pt_get_offset(xc, domid, vcpu_id, &offset);
+
+        if (rc) {
+            fprintf(stderr, "Failed to call xc_vmtrace_pt_get_offset\n");
+            return 1;
+        }
+
+        if (offset > last_offset)
+        {
+            fwrite(buf + last_offset, offset - last_offset, 1, stdout);
+        }
+        else if (offset < last_offset)
+        {
+            /* buffer wrapped */
+            fwrite(buf + last_offset, BUF_SIZE - last_offset, 1, stdout);
+            fwrite(buf, offset, 1, stdout);
+        }
+
+        last_offset = offset;
+        usleep(1000 * 100);
+    }
+
+    rc = xenforeignmemory_unmap_resource(fmem, fres);
+
+    if (rc) {
+        fprintf(stderr, "Failed to unmap resource\n");
+        return 1;
+    }
+
+    rc = xenforeignmemory_close(fmem);
+
+    if (rc) {
+        fprintf(stderr, "Failed to close fmem\n");
+        return 1;
+    }
+
+    rc = xc_vmtrace_pt_disable(xc, domid, vcpu_id);
+
+    if (rc) {
+        fprintf(stderr, "Failed to call xc_vmtrace_pt_disable\n");
+        return 1;
+    }
+
+    rc = xc_interface_close(xc);
+
+    if (rc) {
+        fprintf(stderr, "Failed to close xc interface\n");
+        return 1;
+    }
+
+    return 0;
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
-- 
2.17.1
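[Editorial note on the proctrace patch above: its read loop treats the mapped
trace buffer as a ring, emitting one contiguous chunk when the write offset
moved forward and two chunks when it wrapped. The helper below restates that
logic; drain() is a hypothetical name, and it copies into a caller-supplied
buffer instead of writing to stdout as the tool does.]

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Copy the bytes produced since last_offset out of a circular buffer of
 * buf_size bytes into out; returns the number of bytes copied.  Mirrors
 * the offset comparison in proctrace.c's main loop. */
static size_t drain(const uint8_t *buf, size_t buf_size,
                    size_t last_offset, size_t offset, uint8_t *out)
{
    size_t n = 0;

    if (offset > last_offset) {
        /* No wrap: a single contiguous chunk. */
        n = offset - last_offset;
        memcpy(out, buf + last_offset, n);
    } else if (offset < last_offset) {
        /* Wrapped: tail of the buffer first, then the head. */
        size_t tail = buf_size - last_offset;
        memcpy(out, buf + last_offset, tail);
        memcpy(out + tail, buf, offset);
        n = tail + offset;
    }
    /* offset == last_offset: nothing new was produced. */
    return n;
}
```

Note that, as in the tool itself, an unchanged offset produces no output, so
polling faster than the producer is harmless.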



From xen-devel-bounces@lists.xenproject.org Sun Jul 05 18:56:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 05 Jul 2020 18:56:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1js9oL-0005mp-IV; Sun, 05 Jul 2020 18:56:21 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NZHG=AQ=cert.pl=michal.leszczynski@srs-us1.protection.inumbo.net>)
 id 1js9oK-0005mD-IN
 for xen-devel@lists.xenproject.org; Sun, 05 Jul 2020 18:56:20 +0000
X-Inumbo-ID: 35a11d89-bef1-11ea-8c04-12813bfff9fa
Received: from bagnar.nask.net.pl (unknown [195.187.242.196])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 35a11d89-bef1-11ea-8c04-12813bfff9fa;
 Sun, 05 Jul 2020 18:56:19 +0000 (UTC)
Received: from bagnar.nask.net.pl (unknown [172.16.9.10])
 by bagnar.nask.net.pl (Postfix) with ESMTP id 97F0EA2022;
 Sun,  5 Jul 2020 20:56:16 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by bagnar.nask.net.pl (Postfix) with ESMTP id 95673A2026;
 Sun,  5 Jul 2020 20:56:15 +0200 (CEST)
X-Amavis-Alert: BAD HEADER SECTION, Duplicate header field: "References"
Received: from bagnar.nask.net.pl ([127.0.0.1])
 by localhost (bagnar.nask.net.pl [127.0.0.1]) (amavisd-new, port 10032)
 with ESMTP id c23W13ESr3r9; Sun,  5 Jul 2020 20:56:14 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by bagnar.nask.net.pl (Postfix) with ESMTP id 7FDC7A2052;
 Sun,  5 Jul 2020 20:56:14 +0200 (CEST)
X-Virus-Scanned: amavisd-new at bagnar.nask.net.pl
X-Amavis-Alert: BAD HEADER SECTION, Duplicate header field: "References"
Received: from bagnar.nask.net.pl ([127.0.0.1])
 by localhost (bagnar.nask.net.pl [127.0.0.1]) (amavisd-new, port 10026)
 with ESMTP id isgh7PHF9bSH; Sun,  5 Jul 2020 20:56:14 +0200 (CEST)
Received: from belindir.nask.net.pl (belindir-ext.nask.net.pl
 [195.187.242.210])
 by bagnar.nask.net.pl (Postfix) with ESMTP id C27ACA2022;
 Sun,  5 Jul 2020 20:56:13 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by belindir.nask.net.pl (Postfix) with ESMTP id 29CF622C3D;
 Sun,  5 Jul 2020 20:55:19 +0200 (CEST)
X-Amavis-Alert: BAD HEADER SECTION, Duplicate header field: "References"
Received: from belindir.nask.net.pl ([127.0.0.1])
 by localhost (belindir.nask.net.pl [127.0.0.1]) (amavisd-new, port 10032)
 with ESMTP id CrPYSZmT-5fP; Sun,  5 Jul 2020 20:55:13 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by belindir.nask.net.pl (Postfix) with ESMTP id 3B5CF22C06;
 Sun,  5 Jul 2020 20:55:08 +0200 (CEST)
X-Quarantine-ID: <Rz3LuF7Dh33H>
X-Virus-Scanned: amavisd-new at belindir.nask.net.pl
X-Amavis-Alert: BAD HEADER SECTION, Duplicate header field: "References"
Received: from belindir.nask.net.pl ([127.0.0.1])
 by localhost (belindir.nask.net.pl [127.0.0.1]) (amavisd-new, port 10026)
 with ESMTP id Rz3LuF7Dh33H; Sun,  5 Jul 2020 20:55:08 +0200 (CEST)
Received: from mq-desktop.cert.pl (unknown [195.187.238.217])
 by belindir.nask.net.pl (Postfix) with ESMTPSA id 0D31A2295A;
 Sun,  5 Jul 2020 20:55:08 +0200 (CEST)
From: =?UTF-8?q?Micha=C5=82=20Leszczy=C5=84ski?= <michal.leszczynski@cert.pl>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v5 05/11] tools/libxl: add vmtrace_pt_size parameter
Date: Sun,  5 Jul 2020 20:54:58 +0200
Message-Id: <f7e3c91789a7763b997918b6ebb987be670f9ce5.1593974333.git.michal.leszczynski@cert.pl>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1593974333.git.michal.leszczynski@cert.pl>
References: <cover.1593974333.git.michal.leszczynski@cert.pl>
In-Reply-To: <cover.1593974333.git.michal.leszczynski@cert.pl>
References: <cover.1593974333.git.michal.leszczynski@cert.pl>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: luwei.kang@intel.com, Wei Liu <wl@xen.org>,
 Michal Leszczynski <michal.leszczynski@cert.pl>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Anthony PERARD <anthony.perard@citrix.com>, tamas.lengyel@intel.com
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Michal Leszczynski <michal.leszczynski@cert.pl>

Allow specifying the size of the per-vCPU trace buffer upon
domain creation. It is zero by default (meaning: tracing not enabled).

Signed-off-by: Michal Leszczynski <michal.leszczynski@cert.pl>
---
 docs/man/xl.cfg.5.pod.in             | 11 +++++++++++
 tools/golang/xenlight/helpers.gen.go |  2 ++
 tools/golang/xenlight/types.gen.go   |  1 +
 tools/libxl/libxl.h                  |  8 ++++++++
 tools/libxl/libxl_create.c           |  1 +
 tools/libxl/libxl_types.idl          |  2 ++
 tools/xl/xl_parse.c                  | 22 ++++++++++++++++++++++
 7 files changed, 47 insertions(+)

diff --git a/docs/man/xl.cfg.5.pod.in b/docs/man/xl.cfg.5.pod.in
index 0532739c1f..670759f6bd 100644
--- a/docs/man/xl.cfg.5.pod.in
+++ b/docs/man/xl.cfg.5.pod.in
@@ -278,6 +278,17 @@ memory=8096 will report significantly less memory available for use
 than a system with maxmem=8096 memory=8096 due to the memory overhead
 of having to track the unused pages.
 
+=item B<processor_trace_buffer_size=BYTES>
+
+Specifies the size of the processor trace buffer that will be allocated
+for each vCPU belonging to this domain. Disabled (i.e.
+B<processor_trace_buffer_size=0>) by default. This must be set to a
+non-zero value in order to use processor tracing features
+with this domain.
+
+B<NOTE>: The size value must be between 4 kB and 4 GB and must
+also be a power of 2.
+
 =back
 
 =head3 Guest Virtual NUMA Configuration
diff --git a/tools/golang/xenlight/helpers.gen.go b/tools/golang/xenlight/helpers.gen.go
index 152c7e8e6b..bfc37b69c8 100644
--- a/tools/golang/xenlight/helpers.gen.go
+++ b/tools/golang/xenlight/helpers.gen.go
@@ -1117,6 +1117,7 @@ return fmt.Errorf("invalid union key '%v'", x.Type)}
 x.ArchArm.GicVersion = GicVersion(xc.arch_arm.gic_version)
 x.ArchArm.Vuart = VuartType(xc.arch_arm.vuart)
 x.Altp2M = Altp2MMode(xc.altp2m)
+x.VmtracePtOrder = int(xc.vmtrace_pt_order)
 
  return nil}
 
@@ -1592,6 +1593,7 @@ return fmt.Errorf("invalid union key '%v'", x.Type)}
 xc.arch_arm.gic_version = C.libxl_gic_version(x.ArchArm.GicVersion)
 xc.arch_arm.vuart = C.libxl_vuart_type(x.ArchArm.Vuart)
 xc.altp2m = C.libxl_altp2m_mode(x.Altp2M)
+xc.vmtrace_pt_order = C.int(x.VmtracePtOrder)
 
  return nil
  }
diff --git a/tools/golang/xenlight/types.gen.go b/tools/golang/xenlight/types.gen.go
index 663c1e86b4..f9b07ac862 100644
--- a/tools/golang/xenlight/types.gen.go
+++ b/tools/golang/xenlight/types.gen.go
@@ -516,6 +516,7 @@ GicVersion GicVersion
 Vuart VuartType
 }
 Altp2M Altp2MMode
+VmtracePtOrder int
 }
 
 type domainBuildInfoTypeUnion interface {
diff --git a/tools/libxl/libxl.h b/tools/libxl/libxl.h
index 1cd6c38e83..4abb521756 100644
--- a/tools/libxl/libxl.h
+++ b/tools/libxl/libxl.h
@@ -438,6 +438,14 @@
  */
 #define LIBXL_HAVE_CREATEINFO_PASSTHROUGH 1
 
+/*
+ * LIBXL_HAVE_VMTRACE_PT_ORDER indicates that
+ * libxl_domain_create_info has a vmtrace_pt_order parameter, which
+ * allows to enable pre-allocation of processor tracing buffers
+ * with the given order of size.
+ */
+#define LIBXL_HAVE_VMTRACE_PT_ORDER 1
+
 /*
  * libxl ABI compatibility
  *
diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
index 2814818e34..82b595161a 100644
--- a/tools/libxl/libxl_create.c
+++ b/tools/libxl/libxl_create.c
@@ -608,6 +608,7 @@ int libxl__domain_make(libxl__gc *gc, libxl_domain_config *d_config,
             .max_evtchn_port = b_info->event_channels,
             .max_grant_frames = b_info->max_grant_frames,
             .max_maptrack_frames = b_info->max_maptrack_frames,
+            .vmtrace_pt_order = b_info->vmtrace_pt_order,
         };
 
         if (info->type != LIBXL_DOMAIN_TYPE_PV) {
diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
index 9d3f05f399..1c5dd43e4d 100644
--- a/tools/libxl/libxl_types.idl
+++ b/tools/libxl/libxl_types.idl
@@ -645,6 +645,8 @@ libxl_domain_build_info = Struct("domain_build_info",[
     # supported by x86 HVM and ARM support is planned.
     ("altp2m", libxl_altp2m_mode),
 
+    ("vmtrace_pt_order", integer),
+
     ], dir=DIR_IN,
        copy_deprecated_fn="libxl__domain_build_info_copy_deprecated",
 )
diff --git a/tools/xl/xl_parse.c b/tools/xl/xl_parse.c
index 61b4ef7b7e..279f7c14d3 100644
--- a/tools/xl/xl_parse.c
+++ b/tools/xl/xl_parse.c
@@ -1861,6 +1861,28 @@ void parse_config_data(const char *config_source,
         }
     }
 
+    if (!xlu_cfg_get_long(config, "processor_trace_buffer_size", &l, 1) && l) {
+        int32_t shift = 0;
+
+        if (l & (l - 1))
+        {
+            fprintf(stderr, "ERROR: processor_trace_buffer_size "
+                            "- must be a power of 2\n");
+            exit(1);
+        }
+
+        while (l >>= 1) ++shift;
+
+        if (shift <= XEN_PAGE_SHIFT)
+        {
+            fprintf(stderr, "ERROR: processor_trace_buffer_size "
+                            "- value is too small\n");
+            exit(1);
+        }
+
+        b_info->vmtrace_pt_order = shift - XEN_PAGE_SHIFT;
+    }
+
     if (!xlu_cfg_get_list(config, "ioports", &ioports, &num_ioports, 0)) {
         b_info->num_ioports = num_ioports;
         b_info->ioports = calloc(num_ioports, sizeof(*b_info->ioports));
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Sun Jul 05 18:56:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 05 Jul 2020 18:56:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1js9oQ-0005pw-TP; Sun, 05 Jul 2020 18:56:26 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NZHG=AQ=cert.pl=michal.leszczynski@srs-us1.protection.inumbo.net>)
 id 1js9oO-0005m2-V1
 for xen-devel@lists.xenproject.org; Sun, 05 Jul 2020 18:56:24 +0000
X-Inumbo-ID: 37202e88-bef1-11ea-b7bb-bc764e2007e4
Received: from bagnar.nask.net.pl (unknown [195.187.242.196])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 37202e88-bef1-11ea-b7bb-bc764e2007e4;
 Sun, 05 Jul 2020 18:56:21 +0000 (UTC)
Received: from bagnar.nask.net.pl (unknown [172.16.9.10])
 by bagnar.nask.net.pl (Postfix) with ESMTP id 5FD30A20CE;
 Sun,  5 Jul 2020 20:56:17 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by bagnar.nask.net.pl (Postfix) with ESMTP id 24D36A20BB;
 Sun,  5 Jul 2020 20:56:16 +0200 (CEST)
X-Amavis-Alert: BAD HEADER SECTION, Duplicate header field: "References"
Received: from bagnar.nask.net.pl ([127.0.0.1])
 by localhost (bagnar.nask.net.pl [127.0.0.1]) (amavisd-new, port 10032)
 with ESMTP id 4qVLccuKoIX3; Sun,  5 Jul 2020 20:56:15 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by bagnar.nask.net.pl (Postfix) with ESMTP id 324E6A20AB;
 Sun,  5 Jul 2020 20:56:15 +0200 (CEST)
X-Virus-Scanned: amavisd-new at bagnar.nask.net.pl
X-Amavis-Alert: BAD HEADER SECTION, Duplicate header field: "References"
Received: from bagnar.nask.net.pl ([127.0.0.1])
 by localhost (bagnar.nask.net.pl [127.0.0.1]) (amavisd-new, port 10026)
 with ESMTP id 4DItugEelIB2; Sun,  5 Jul 2020 20:56:15 +0200 (CEST)
Received: from belindir.nask.net.pl (belindir-ext.nask.net.pl
 [195.187.242.210])
 by bagnar.nask.net.pl (Postfix) with ESMTP id E8472A2026;
 Sun,  5 Jul 2020 20:56:13 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by belindir.nask.net.pl (Postfix) with ESMTP id 3428022C37;
 Sun,  5 Jul 2020 20:55:19 +0200 (CEST)
X-Amavis-Alert: BAD HEADER SECTION, Duplicate header field: "References"
Received: from belindir.nask.net.pl ([127.0.0.1])
 by localhost (belindir.nask.net.pl [127.0.0.1]) (amavisd-new, port 10032)
 with ESMTP id oP-w6INSIyUv; Sun,  5 Jul 2020 20:55:13 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by belindir.nask.net.pl (Postfix) with ESMTP id 8C7A122C29;
 Sun,  5 Jul 2020 20:55:08 +0200 (CEST)
X-Quarantine-ID: <wHWEk28cd5CO>
X-Virus-Scanned: amavisd-new at belindir.nask.net.pl
X-Amavis-Alert: BAD HEADER SECTION, Duplicate header field: "References"
Received: from belindir.nask.net.pl ([127.0.0.1])
 by localhost (belindir.nask.net.pl [127.0.0.1]) (amavisd-new, port 10026)
 with ESMTP id wHWEk28cd5CO; Sun,  5 Jul 2020 20:55:08 +0200 (CEST)
Received: from mq-desktop.cert.pl (unknown [195.187.238.217])
 by belindir.nask.net.pl (Postfix) with ESMTPSA id 698F522C20;
 Sun,  5 Jul 2020 20:55:08 +0200 (CEST)
From: =?UTF-8?q?Micha=C5=82=20Leszczy=C5=84ski?= <michal.leszczynski@cert.pl>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v5 09/11] x86/domctl: add XEN_DOMCTL_vmtrace_op
Date: Sun,  5 Jul 2020 20:55:02 +0200
Message-Id: <f3ec05eb4908f774683e96553ec32d68fac0d0ac.1593974333.git.michal.leszczynski@cert.pl>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1593974333.git.michal.leszczynski@cert.pl>
References: <cover.1593974333.git.michal.leszczynski@cert.pl>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 luwei.kang@intel.com, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Michal Leszczynski <michal.leszczynski@cert.pl>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 tamas.lengyel@intel.com,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Michal Leszczynski <michal.leszczynski@cert.pl>

Implement a domctl to manage the runtime state of the
processor trace feature.

Signed-off-by: Michal Leszczynski <michal.leszczynski@cert.pl>
---
 xen/arch/x86/domctl.c       | 48 +++++++++++++++++++++++++++++++++++++
 xen/include/public/domctl.h | 26 ++++++++++++++++++++
 2 files changed, 74 insertions(+)

diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
index 6f2c69788d..a041b724d8 100644
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -322,6 +322,48 @@ void arch_get_domain_info(const struct domain *d,
     info->arch_config.emulation_flags = d->arch.emulation_flags;
 }
 
+static int do_vmtrace_op(struct domain *d, struct xen_domctl_vmtrace_op *op,
+                         XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
+{
+    int rc;
+    struct vcpu *v;
+
+    if ( !vmtrace_supported )
+        return -EOPNOTSUPP;
+
+    if ( !is_hvm_domain(d) )
+        return -EOPNOTSUPP;
+
+    if ( op->vcpu >= d->max_vcpus )
+        return -EINVAL;
+
+    v = domain_vcpu(d, op->vcpu);
+    rc = 0;
+
+    switch ( op->cmd )
+    {
+    case XEN_DOMCTL_vmtrace_pt_enable:
+    case XEN_DOMCTL_vmtrace_pt_disable:
+        vcpu_pause(v);
+        spin_lock(&d->vmtrace_lock);
+
+        rc = vmtrace_control_pt(v, op->cmd == XEN_DOMCTL_vmtrace_pt_enable);
+
+        spin_unlock(&d->vmtrace_lock);
+        vcpu_unpause(v);
+        break;
+
+    case XEN_DOMCTL_vmtrace_pt_get_offset:
+        rc = vmtrace_get_pt_offset(v, &op->offset);
+        break;
+
+    default:
+        rc = -EOPNOTSUPP;
+    }
+
+    return rc;
+}
+
 #define MAX_IOPORTS 0x10000
 
 long arch_do_domctl(
@@ -337,6 +379,12 @@ long arch_do_domctl(
     switch ( domctl->cmd )
     {
 
+    case XEN_DOMCTL_vmtrace_op:
+        ret = do_vmtrace_op(d, &domctl->u.vmtrace_op, u_domctl);
+        if ( !ret )
+            copyback = true;
+        break;
+
     case XEN_DOMCTL_shadow_op:
         ret = paging_domctl(d, &domctl->u.shadow_op, u_domctl, 0);
         if ( ret == -ERESTART )
diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
index 7b8289d436..f836cb5970 100644
--- a/xen/include/public/domctl.h
+++ b/xen/include/public/domctl.h
@@ -1136,6 +1136,28 @@ struct xen_domctl_vuart_op {
                                  */
 };
 
+/* XEN_DOMCTL_vmtrace_op: Perform VM tracing related operation */
+#if defined(__XEN__) || defined(__XEN_TOOLS__)
+
+struct xen_domctl_vmtrace_op {
+    /* IN variable */
+    uint32_t cmd;
+/* Enable/disable processor tracing for the given vCPU of a domain */
+#define XEN_DOMCTL_vmtrace_pt_enable      1
+#define XEN_DOMCTL_vmtrace_pt_disable     2
+#define XEN_DOMCTL_vmtrace_pt_get_offset  3
+    domid_t domain;
+    uint32_t vcpu;
+    uint64_aligned_t size;
+
+    /* OUT variable */
+    uint64_aligned_t offset;
+};
+typedef struct xen_domctl_vmtrace_op xen_domctl_vmtrace_op_t;
+DEFINE_XEN_GUEST_HANDLE(xen_domctl_vmtrace_op_t);
+
+#endif /* defined(__XEN__) || defined(__XEN_TOOLS__) */
+
 struct xen_domctl {
     uint32_t cmd;
 #define XEN_DOMCTL_createdomain                   1
@@ -1217,6 +1239,7 @@ struct xen_domctl {
 #define XEN_DOMCTL_vuart_op                      81
 #define XEN_DOMCTL_get_cpu_policy                82
 #define XEN_DOMCTL_set_cpu_policy                83
+#define XEN_DOMCTL_vmtrace_op                    84
 #define XEN_DOMCTL_gdbsx_guestmemio            1000
 #define XEN_DOMCTL_gdbsx_pausevcpu             1001
 #define XEN_DOMCTL_gdbsx_unpausevcpu           1002
@@ -1277,6 +1300,9 @@ struct xen_domctl {
         struct xen_domctl_monitor_op        monitor_op;
         struct xen_domctl_psr_alloc         psr_alloc;
         struct xen_domctl_vuart_op          vuart_op;
+#if defined(__XEN__) || defined(__XEN_TOOLS__)
+        struct xen_domctl_vmtrace_op        vmtrace_op;
+#endif
         uint8_t                             pad[128];
     } u;
 };
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Sun Jul 05 18:56:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 05 Jul 2020 18:56:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1js9oR-0005qK-6f; Sun, 05 Jul 2020 18:56:27 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NZHG=AQ=cert.pl=michal.leszczynski@srs-us1.protection.inumbo.net>)
 id 1js9oP-0005mD-Gu
 for xen-devel@lists.xenproject.org; Sun, 05 Jul 2020 18:56:25 +0000
X-Inumbo-ID: 37249a22-bef1-11ea-8c04-12813bfff9fa
Received: from bagnar.nask.net.pl (unknown [195.187.242.196])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 37249a22-bef1-11ea-8c04-12813bfff9fa;
 Sun, 05 Jul 2020 18:56:21 +0000 (UTC)
Received: from bagnar.nask.net.pl (unknown [172.16.9.10])
 by bagnar.nask.net.pl (Postfix) with ESMTP id 3A8DBA2037;
 Sun,  5 Jul 2020 20:56:19 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by bagnar.nask.net.pl (Postfix) with ESMTP id 4FF6FA20C9;
 Sun,  5 Jul 2020 20:56:16 +0200 (CEST)
X-Amavis-Alert: BAD HEADER SECTION, Duplicate header field: "References"
Received: from bagnar.nask.net.pl ([127.0.0.1])
 by localhost (bagnar.nask.net.pl [127.0.0.1]) (amavisd-new, port 10032)
 with ESMTP id B_BcBFl6ja6D; Sun,  5 Jul 2020 20:56:15 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by bagnar.nask.net.pl (Postfix) with ESMTP id 305A0A2022;
 Sun,  5 Jul 2020 20:56:15 +0200 (CEST)
X-Virus-Scanned: amavisd-new at bagnar.nask.net.pl
X-Amavis-Alert: BAD HEADER SECTION, Duplicate header field: "References"
Received: from bagnar.nask.net.pl ([127.0.0.1])
 by localhost (bagnar.nask.net.pl [127.0.0.1]) (amavisd-new, port 10026)
 with ESMTP id k5rmcILqT4aJ; Sun,  5 Jul 2020 20:56:15 +0200 (CEST)
Received: from belindir.nask.net.pl (belindir-ext.nask.net.pl
 [195.187.242.210])
 by bagnar.nask.net.pl (Postfix) with ESMTP id 1E197A203D;
 Sun,  5 Jul 2020 20:56:14 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by belindir.nask.net.pl (Postfix) with ESMTP id 5A35022C20;
 Sun,  5 Jul 2020 20:55:19 +0200 (CEST)
X-Amavis-Alert: BAD HEADER SECTION, Duplicate header field: "References"
Received: from belindir.nask.net.pl ([127.0.0.1])
 by localhost (belindir.nask.net.pl [127.0.0.1]) (amavisd-new, port 10032)
 with ESMTP id iL-0WfZ8YBhL; Sun,  5 Jul 2020 20:55:13 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by belindir.nask.net.pl (Postfix) with ESMTP id 6ADBF22C0B;
 Sun,  5 Jul 2020 20:55:08 +0200 (CEST)
X-Quarantine-ID: <NHfE5Sz6ftCE>
X-Virus-Scanned: amavisd-new at belindir.nask.net.pl
X-Amavis-Alert: BAD HEADER SECTION, Duplicate header field: "References"
Received: from belindir.nask.net.pl ([127.0.0.1])
 by localhost (belindir.nask.net.pl [127.0.0.1]) (amavisd-new, port 10026)
 with ESMTP id NHfE5Sz6ftCE; Sun,  5 Jul 2020 20:55:08 +0200 (CEST)
Received: from mq-desktop.cert.pl (unknown [195.187.238.217])
 by belindir.nask.net.pl (Postfix) with ESMTPSA id 3CB5622C0C;
 Sun,  5 Jul 2020 20:55:08 +0200 (CEST)
From: =?UTF-8?q?Micha=C5=82=20Leszczy=C5=84ski?= <michal.leszczynski@cert.pl>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v5 07/11] x86/vmx: implement IPT in VMX
Date: Sun,  5 Jul 2020 20:55:00 +0200
Message-Id: <c7a1a2c492b6bf667d89aaa22502a026c655efe4.1593974333.git.michal.leszczynski@cert.pl>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1593974333.git.michal.leszczynski@cert.pl>
References: <cover.1593974333.git.michal.leszczynski@cert.pl>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin Tian <kevin.tian@intel.com>, luwei.kang@intel.com,
 Jun Nakajima <jun.nakajima@intel.com>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Michal Leszczynski <michal.leszczynski@cert.pl>,
 Jan Beulich <jbeulich@suse.com>, tamas.lengyel@intel.com,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Michal Leszczynski <michal.leszczynski@cert.pl>

Use the Intel Processor Trace feature to provide the vmtrace_pt_*
interface for HVM/VMX.

Signed-off-by: Michal Leszczynski <michal.leszczynski@cert.pl>
---
 xen/arch/x86/hvm/vmx/vmx.c         | 109 +++++++++++++++++++++++++++++
 xen/include/asm-x86/hvm/vmx/vmcs.h |   3 +
 xen/include/asm-x86/hvm/vmx/vmx.h  |  14 ++++
 3 files changed, 126 insertions(+)

diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index cc6d4ece22..4eded2ef84 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -428,6 +428,56 @@ static void vmx_domain_relinquish_resources(struct domain *d)
     vmx_free_vlapic_mapping(d);
 }
 
+static int vmx_init_pt(struct vcpu *v)
+{
+    int rc;
+    uint64_t size = v->domain->vmtrace_pt_size;
+
+    v->arch.hvm.vmx.ipt_state = xzalloc(struct ipt_state);
+
+    if ( !v->arch.hvm.vmx.ipt_state )
+        return -ENOMEM;
+
+    if ( !v->vmtrace.pt_buf || !size )
+        return -EINVAL;
+
+    /*
+     * We don't accept a trace buffer size smaller than a single page
+     * and the upper bound is defined as 4GB in the specification.
+     * The buffer size must also be a power of 2.
+     */
+    if ( size < PAGE_SIZE || size > GB(4) || (size & (size - 1)) )
+        return -EINVAL;
+
+    v->arch.hvm.vmx.ipt_state->output_base =
+        page_to_maddr(v->vmtrace.pt_buf);
+    v->arch.hvm.vmx.ipt_state->output_mask.raw = size - 1;
+
+    rc = vmx_add_host_load_msr(v, MSR_RTIT_CTL, 0);
+
+    if ( rc )
+        return rc;
+
+    rc = vmx_add_guest_msr(v, MSR_RTIT_CTL,
+                              RTIT_CTL_TRACE_EN | RTIT_CTL_OS |
+                              RTIT_CTL_USR | RTIT_CTL_BRANCH_EN);
+
+    if ( rc )
+        return rc;
+
+    return 0;
+}
+
+static int vmx_destroy_pt(struct vcpu *v)
+{
+    if ( v->arch.hvm.vmx.ipt_state )
+        xfree(v->arch.hvm.vmx.ipt_state);
+
+    v->arch.hvm.vmx.ipt_state = NULL;
+    return 0;
+}
+
+
 static int vmx_vcpu_initialise(struct vcpu *v)
 {
     int rc;
@@ -471,6 +521,14 @@ static int vmx_vcpu_initialise(struct vcpu *v)
 
     vmx_install_vlapic_mapping(v);
 
+    if ( v->domain->vmtrace_pt_size )
+    {
+        rc = vmx_init_pt(v);
+
+        if ( rc )
+            return rc;
+    }
+
     return 0;
 }
 
@@ -483,6 +541,7 @@ static void vmx_vcpu_destroy(struct vcpu *v)
      * prior to vmx_domain_destroy so we need to disable PML for each vcpu
      * separately here.
      */
+    vmx_destroy_pt(v);
     vmx_vcpu_disable_pml(v);
     vmx_destroy_vmcs(v);
     passive_domain_destroy(v);
@@ -513,6 +572,18 @@ static void vmx_save_guest_msrs(struct vcpu *v)
      * be updated at any time via SWAPGS, which we cannot trap.
      */
     v->arch.hvm.vmx.shadow_gs = rdgsshadow();
+
+    if ( unlikely(v->arch.hvm.vmx.ipt_state &&
+                  v->arch.hvm.vmx.ipt_state->active) )
+    {
+        uint64_t rtit_ctl;
+        rdmsrl(MSR_RTIT_CTL, rtit_ctl);
+        BUG_ON(rtit_ctl & RTIT_CTL_TRACE_EN);
+
+        rdmsrl(MSR_RTIT_STATUS, v->arch.hvm.vmx.ipt_state->status);
+        rdmsrl(MSR_RTIT_OUTPUT_MASK,
+               v->arch.hvm.vmx.ipt_state->output_mask.raw);
+    }
 }
 
 static void vmx_restore_guest_msrs(struct vcpu *v)
@@ -524,6 +595,17 @@ static void vmx_restore_guest_msrs(struct vcpu *v)
 
     if ( cpu_has_msr_tsc_aux )
         wrmsr_tsc_aux(v->arch.msrs->tsc_aux);
+
+    if ( unlikely(v->arch.hvm.vmx.ipt_state &&
+                  v->arch.hvm.vmx.ipt_state->active) )
+    {
+        wrmsrl(MSR_RTIT_OUTPUT_BASE,
+               v->arch.hvm.vmx.ipt_state->output_base);
+        wrmsrl(MSR_RTIT_OUTPUT_MASK,
+               v->arch.hvm.vmx.ipt_state->output_mask.raw);
+        wrmsrl(MSR_RTIT_STATUS,
+               v->arch.hvm.vmx.ipt_state->status);
+    }
 }
 
 void vmx_update_cpu_exec_control(struct vcpu *v)
@@ -2240,6 +2322,24 @@ static bool vmx_get_pending_event(struct vcpu *v, struct x86_event *info)
     return true;
 }
 
+static int vmx_control_pt(struct vcpu *v, bool enable)
+{
+    if ( !v->arch.hvm.vmx.ipt_state )
+        return -EINVAL;
+
+    v->arch.hvm.vmx.ipt_state->active = enable;
+    return 0;
+}
+
+static int vmx_get_pt_offset(struct vcpu *v, uint64_t *offset)
+{
+    if ( !v->arch.hvm.vmx.ipt_state )
+        return -EINVAL;
+
+    *offset = v->arch.hvm.vmx.ipt_state->output_mask.offset;
+    return 0;
+}
+
 static struct hvm_function_table __initdata vmx_function_table = {
     .name                 = "VMX",
     .cpu_up_prepare       = vmx_cpu_up_prepare,
@@ -2295,6 +2395,8 @@ static struct hvm_function_table __initdata vmx_function_table = {
     .altp2m_vcpu_update_vmfunc_ve = vmx_vcpu_update_vmfunc_ve,
     .altp2m_vcpu_emulate_ve = vmx_vcpu_emulate_ve,
     .altp2m_vcpu_emulate_vmfunc = vmx_vcpu_emulate_vmfunc,
+    .vmtrace_control_pt = vmx_control_pt,
+    .vmtrace_get_pt_offset = vmx_get_pt_offset,
     .tsc_scaling = {
         .max_ratio = VMX_TSC_MULTIPLIER_MAX,
     },
@@ -3674,6 +3776,13 @@ void vmx_vmexit_handler(struct cpu_user_regs *regs)
 
     hvm_invalidate_regs_fields(regs);
 
+    if ( unlikely(v->arch.hvm.vmx.ipt_state &&
+                  v->arch.hvm.vmx.ipt_state->active) )
+    {
+        rdmsrl(MSR_RTIT_OUTPUT_MASK,
+               v->arch.hvm.vmx.ipt_state->output_mask.raw);
+    }
+
     if ( paging_mode_hap(v->domain) )
     {
         /*
diff --git a/xen/include/asm-x86/hvm/vmx/vmcs.h b/xen/include/asm-x86/hvm/vmx/vmcs.h
index 6153ba6769..65971fa6ad 100644
--- a/xen/include/asm-x86/hvm/vmx/vmcs.h
+++ b/xen/include/asm-x86/hvm/vmx/vmcs.h
@@ -186,6 +186,9 @@ struct vmx_vcpu {
      * pCPU and wakeup the related vCPU.
      */
     struct pi_blocking_vcpu pi_blocking;
+
+    /* State of processor trace feature */
+    struct ipt_state      *ipt_state;
 };
 
 int vmx_create_vmcs(struct vcpu *v);
diff --git a/xen/include/asm-x86/hvm/vmx/vmx.h b/xen/include/asm-x86/hvm/vmx/vmx.h
index 111ccd7e61..8d7c67e43d 100644
--- a/xen/include/asm-x86/hvm/vmx/vmx.h
+++ b/xen/include/asm-x86/hvm/vmx/vmx.h
@@ -689,4 +689,18 @@ typedef union ldt_or_tr_instr_info {
     };
 } ldt_or_tr_instr_info_t;
 
+/* Processor Trace state per vCPU */
+struct ipt_state {
+    bool active;
+    uint64_t status;
+    uint64_t output_base;
+    union {
+        uint64_t raw;
+        struct {
+            uint32_t size;
+            uint32_t offset;
+        };
+    } output_mask;
+};
+
 #endif /* __ASM_X86_HVM_VMX_VMX_H__ */
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Sun Jul 05 18:56:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 05 Jul 2020 18:56:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1js9oV-0005ub-Mo; Sun, 05 Jul 2020 18:56:31 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NZHG=AQ=cert.pl=michal.leszczynski@srs-us1.protection.inumbo.net>)
 id 1js9oT-0005m2-VJ
 for xen-devel@lists.xenproject.org; Sun, 05 Jul 2020 18:56:29 +0000
X-Inumbo-ID: 37203e8c-bef1-11ea-8496-bc764e2007e4
Received: from bagnar.nask.net.pl (unknown [195.187.242.196])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 37203e8c-bef1-11ea-8496-bc764e2007e4;
 Sun, 05 Jul 2020 18:56:21 +0000 (UTC)
Received: from bagnar.nask.net.pl (unknown [172.16.9.10])
 by bagnar.nask.net.pl (Postfix) with ESMTP id 17C21A2081;
 Sun,  5 Jul 2020 20:56:17 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by bagnar.nask.net.pl (Postfix) with ESMTP id CE085A2047;
 Sun,  5 Jul 2020 20:56:15 +0200 (CEST)
X-Amavis-Alert: BAD HEADER SECTION, Duplicate header field: "References"
Received: from bagnar.nask.net.pl ([127.0.0.1])
 by localhost (bagnar.nask.net.pl [127.0.0.1]) (amavisd-new, port 10032)
 with ESMTP id lMswJV0Qiirr; Sun,  5 Jul 2020 20:56:15 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by bagnar.nask.net.pl (Postfix) with ESMTP id CBE51A207F;
 Sun,  5 Jul 2020 20:56:14 +0200 (CEST)
X-Virus-Scanned: amavisd-new at bagnar.nask.net.pl
X-Amavis-Alert: BAD HEADER SECTION, Duplicate header field: "References"
Received: from bagnar.nask.net.pl ([127.0.0.1])
 by localhost (bagnar.nask.net.pl [127.0.0.1]) (amavisd-new, port 10026)
 with ESMTP id sJXl2t2dRER2; Sun,  5 Jul 2020 20:56:14 +0200 (CEST)
Received: from belindir.nask.net.pl (belindir-ext.nask.net.pl
 [195.187.242.210])
 by bagnar.nask.net.pl (Postfix) with ESMTP id 16632A202C;
 Sun,  5 Jul 2020 20:56:14 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by belindir.nask.net.pl (Postfix) with ESMTP id 422F022C3A;
 Sun,  5 Jul 2020 20:55:19 +0200 (CEST)
X-Amavis-Alert: BAD HEADER SECTION, Duplicate header field: "References"
Received: from belindir.nask.net.pl ([127.0.0.1])
 by localhost (belindir.nask.net.pl [127.0.0.1]) (amavisd-new, port 10032)
 with ESMTP id yPJWs7im3KH7; Sun,  5 Jul 2020 20:55:13 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by belindir.nask.net.pl (Postfix) with ESMTP id 9E30D22C20;
 Sun,  5 Jul 2020 20:55:08 +0200 (CEST)
X-Quarantine-ID: <9a2BxL5qECOO>
X-Virus-Scanned: amavisd-new at belindir.nask.net.pl
X-Amavis-Alert: BAD HEADER SECTION, Duplicate header field: "References"
Received: from belindir.nask.net.pl ([127.0.0.1])
 by localhost (belindir.nask.net.pl [127.0.0.1]) (amavisd-new, port 10026)
 with ESMTP id 9a2BxL5qECOO; Sun,  5 Jul 2020 20:55:08 +0200 (CEST)
Received: from mq-desktop.cert.pl (unknown [195.187.238.217])
 by belindir.nask.net.pl (Postfix) with ESMTPSA id 7412D22C1A;
 Sun,  5 Jul 2020 20:55:08 +0200 (CEST)
From: =?UTF-8?q?Micha=C5=82=20Leszczy=C5=84ski?= <michal.leszczynski@cert.pl>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v5 10/11] tools/libxc: add xc_vmtrace_* functions
Date: Sun,  5 Jul 2020 20:55:03 +0200
Message-Id: <07343a2258d2db7dab24653edab84b825103e63d.1593974333.git.michal.leszczynski@cert.pl>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1593974333.git.michal.leszczynski@cert.pl>
References: <cover.1593974333.git.michal.leszczynski@cert.pl>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: luwei.kang@intel.com, Michal Leszczynski <michal.leszczynski@cert.pl>,
 tamas.lengyel@intel.com, Ian Jackson <ian.jackson@eu.citrix.com>,
 Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Michal Leszczynski <michal.leszczynski@cert.pl>

Add functions in libxc that use the new XEN_DOMCTL_vmtrace_op interface.

Signed-off-by: Michal Leszczynski <michal.leszczynski@cert.pl>
---
 tools/libxc/Makefile          |  1 +
 tools/libxc/include/xenctrl.h | 39 +++++++++++++++++++
 tools/libxc/xc_vmtrace.c      | 73 +++++++++++++++++++++++++++++++++++
 3 files changed, 113 insertions(+)
 create mode 100644 tools/libxc/xc_vmtrace.c

diff --git a/tools/libxc/Makefile b/tools/libxc/Makefile
index fae5969a73..605e44501d 100644
--- a/tools/libxc/Makefile
+++ b/tools/libxc/Makefile
@@ -27,6 +27,7 @@ CTRL_SRCS-y       += xc_csched2.c
 CTRL_SRCS-y       += xc_arinc653.c
 CTRL_SRCS-y       += xc_rt.c
 CTRL_SRCS-y       += xc_tbuf.c
+CTRL_SRCS-y       += xc_vmtrace.c
 CTRL_SRCS-y       += xc_pm.c
 CTRL_SRCS-y       += xc_cpu_hotplug.c
 CTRL_SRCS-y       += xc_resume.c
diff --git a/tools/libxc/include/xenctrl.h b/tools/libxc/include/xenctrl.h
index 4c89b7294c..34f27fd7d4 100644
--- a/tools/libxc/include/xenctrl.h
+++ b/tools/libxc/include/xenctrl.h
@@ -1585,6 +1585,45 @@ int xc_tbuf_set_cpu_mask(xc_interface *xch, xc_cpumap_t mask);
 
 int xc_tbuf_set_evt_mask(xc_interface *xch, uint32_t mask);
 
+/**
+ * Enable processor trace for the given vCPU in the given DomU.
+ * Allocate the trace ring buffer with the given size.
+ *
+ * @parm xch a handle to an open hypervisor interface
+ * @parm domid domain identifier
+ * @parm vcpu vcpu identifier
+ * @return 0 on success, -1 on failure
+ */
+int xc_vmtrace_pt_enable(xc_interface *xch, uint32_t domid,
+                         uint32_t vcpu);
+
+/**
+ * Disable processor trace for the given vCPU in the given DomU.
+ * Deallocate the trace ring buffer.
+ *
+ * @parm xch a handle to an open hypervisor interface
+ * @parm domid domain identifier
+ * @parm vcpu vcpu identifier
+ * @return 0 on success, -1 on failure
+ */
+int xc_vmtrace_pt_disable(xc_interface *xch, uint32_t domid,
+                          uint32_t vcpu);
+
+/**
+ * Get the current offset inside the trace ring buffer.
+ * This allows one to determine how much data has been written into
+ * the buffer. Once the buffer overflows, the offset resets to 0 and
+ * the previous data is overwritten.
+ *
+ * @parm xch a handle to an open hypervisor interface
+ * @parm domid domain identifier
+ * @parm vcpu vcpu identifier
+ * @parm offset pointer where the current offset inside the trace buffer will be written
+ * @return 0 on success, -1 on failure
+ */
+int xc_vmtrace_pt_get_offset(xc_interface *xch, uint32_t domid,
+                             uint32_t vcpu, uint64_t *offset);
+
 int xc_domctl(xc_interface *xch, struct xen_domctl *domctl);
 int xc_sysctl(xc_interface *xch, struct xen_sysctl *sysctl);
 
diff --git a/tools/libxc/xc_vmtrace.c b/tools/libxc/xc_vmtrace.c
new file mode 100644
index 0000000000..32f90a6203
--- /dev/null
+++ b/tools/libxc/xc_vmtrace.c
@@ -0,0 +1,73 @@
+/******************************************************************************
+ * xc_vmtrace.c
+ *
+ * API for manipulating hardware tracing features
+ *
+ * Copyright (c) 2020, Michal Leszczynski
+ *
+ * Copyright 2020 CERT Polska. All rights reserved.
+ * Use is subject to license terms.
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation;
+ * version 2.1 of the License.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include "xc_private.h"
+#include <xen/trace.h>
+
+int xc_vmtrace_pt_enable(
+        xc_interface *xch, uint32_t domid, uint32_t vcpu)
+{
+    DECLARE_DOMCTL;
+    int rc;
+
+    domctl.cmd = XEN_DOMCTL_vmtrace_op;
+    domctl.domain = domid;
+    domctl.u.vmtrace_op.cmd = XEN_DOMCTL_vmtrace_pt_enable;
+    domctl.u.vmtrace_op.vcpu = vcpu;
+
+    rc = do_domctl(xch, &domctl);
+    return rc;
+}
+
+int xc_vmtrace_pt_get_offset(
+        xc_interface *xch, uint32_t domid, uint32_t vcpu, uint64_t *offset)
+{
+    DECLARE_DOMCTL;
+    int rc;
+
+    domctl.cmd = XEN_DOMCTL_vmtrace_op;
+    domctl.domain = domid;
+    domctl.u.vmtrace_op.cmd = XEN_DOMCTL_vmtrace_pt_get_offset;
+    domctl.u.vmtrace_op.vcpu = vcpu;
+
+    rc = do_domctl(xch, &domctl);
+    if ( !rc )
+        *offset = domctl.u.vmtrace_op.offset;
+    return rc;
+}
+
+int xc_vmtrace_pt_disable(xc_interface *xch, uint32_t domid, uint32_t vcpu)
+{
+    DECLARE_DOMCTL;
+    int rc;
+
+    domctl.cmd = XEN_DOMCTL_vmtrace_op;
+    domctl.domain = domid;
+    domctl.u.vmtrace_op.cmd = XEN_DOMCTL_vmtrace_pt_disable;
+    domctl.u.vmtrace_op.vcpu = vcpu;
+
+    rc = do_domctl(xch, &domctl);
+    return rc;
+}
+
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Sun Jul 05 18:56:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 05 Jul 2020 18:56:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1js9ob-0005zS-1T; Sun, 05 Jul 2020 18:56:37 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NZHG=AQ=cert.pl=michal.leszczynski@srs-us1.protection.inumbo.net>)
 id 1js9oY-0005m2-VY
 for xen-devel@lists.xenproject.org; Sun, 05 Jul 2020 18:56:35 +0000
X-Inumbo-ID: 37200200-bef1-11ea-bb8b-bc764e2007e4
Received: from bagnar.nask.net.pl (unknown [195.187.242.196])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 37200200-bef1-11ea-bb8b-bc764e2007e4;
 Sun, 05 Jul 2020 18:56:21 +0000 (UTC)
Received: from bagnar.nask.net.pl (unknown [172.16.9.10])
 by bagnar.nask.net.pl (Postfix) with ESMTP id A51CAA20CC;
 Sun,  5 Jul 2020 20:56:16 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by bagnar.nask.net.pl (Postfix) with ESMTP id A272FA2037;
 Sun,  5 Jul 2020 20:56:15 +0200 (CEST)
X-Amavis-Alert: BAD HEADER SECTION, Duplicate header field: "References"
Received: from bagnar.nask.net.pl ([127.0.0.1])
 by localhost (bagnar.nask.net.pl [127.0.0.1]) (amavisd-new, port 10032)
 with ESMTP id MPZIYhkmTYT3; Sun,  5 Jul 2020 20:56:15 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by bagnar.nask.net.pl (Postfix) with ESMTP id 8DFEBA2058;
 Sun,  5 Jul 2020 20:56:14 +0200 (CEST)
X-Virus-Scanned: amavisd-new at bagnar.nask.net.pl
X-Amavis-Alert: BAD HEADER SECTION, Duplicate header field: "References"
Received: from bagnar.nask.net.pl ([127.0.0.1])
 by localhost (bagnar.nask.net.pl [127.0.0.1]) (amavisd-new, port 10026)
 with ESMTP id eFOoYfuRpihs; Sun,  5 Jul 2020 20:56:14 +0200 (CEST)
Received: from belindir.nask.net.pl (belindir-ext.nask.net.pl
 [195.187.242.210])
 by bagnar.nask.net.pl (Postfix) with ESMTP id AFA8DA201B;
 Sun,  5 Jul 2020 20:56:13 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by belindir.nask.net.pl (Postfix) with ESMTP id 1DC0522C1E;
 Sun,  5 Jul 2020 20:55:19 +0200 (CEST)
X-Amavis-Alert: BAD HEADER SECTION, Duplicate header field: "References"
Received: from belindir.nask.net.pl ([127.0.0.1])
 by localhost (belindir.nask.net.pl [127.0.0.1]) (amavisd-new, port 10032)
 with ESMTP id WWYAtzNBoRq6; Sun,  5 Jul 2020 20:55:13 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by belindir.nask.net.pl (Postfix) with ESMTP id 58FF622C12;
 Sun,  5 Jul 2020 20:55:08 +0200 (CEST)
X-Quarantine-ID: <aOEGDFEWHkIY>
X-Virus-Scanned: amavisd-new at belindir.nask.net.pl
X-Amavis-Alert: BAD HEADER SECTION, Duplicate header field: "References"
Received: from belindir.nask.net.pl ([127.0.0.1])
 by localhost (belindir.nask.net.pl [127.0.0.1]) (amavisd-new, port 10026)
 with ESMTP id aOEGDFEWHkIY; Sun,  5 Jul 2020 20:55:08 +0200 (CEST)
Received: from mq-desktop.cert.pl (unknown [195.187.238.217])
 by belindir.nask.net.pl (Postfix) with ESMTPSA id 2236B22C0B;
 Sun,  5 Jul 2020 20:55:08 +0200 (CEST)
From: =?UTF-8?q?Micha=C5=82=20Leszczy=C5=84ski?= <michal.leszczynski@cert.pl>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v5 06/11] x86/hvm: processor trace interface in HVM
Date: Sun,  5 Jul 2020 20:54:59 +0200
Message-Id: <a4833c8168e287f0caf1dc6f16ec5c054bd88b0a.1593974333.git.michal.leszczynski@cert.pl>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1593974333.git.michal.leszczynski@cert.pl>
References: <cover.1593974333.git.michal.leszczynski@cert.pl>
In-Reply-To: <cover.1593974333.git.michal.leszczynski@cert.pl>
References: <cover.1593974333.git.michal.leszczynski@cert.pl>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 luwei.kang@intel.com, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Michal Leszczynski <michal.leszczynski@cert.pl>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 tamas.lengyel@intel.com,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Michal Leszczynski <michal.leszczynski@cert.pl>

Implement the necessary changes in common and HVM code to support
processor trace features. Define the vmtrace_pt_* API and
implement trace buffer allocation/deallocation in common
code.

Signed-off-by: Michal Leszczynski <michal.leszczynski@cert.pl>
---
 xen/arch/x86/domain.c         | 19 +++++++++++++++++++
 xen/common/domain.c           | 19 +++++++++++++++++++
 xen/include/asm-x86/hvm/hvm.h | 20 ++++++++++++++++++++
 xen/include/xen/sched.h       |  4 ++++
 4 files changed, 62 insertions(+)

diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index fee6c3931a..79c9794408 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -2199,6 +2199,25 @@ int domain_relinquish_resources(struct domain *d)
                 altp2m_vcpu_disable_ve(v);
         }
 
+        for_each_vcpu ( d, v )
+        {
+            unsigned int i;
+
+            if ( !v->vmtrace.pt_buf )
+                continue;
+
+            for ( i = 0; i < (v->domain->vmtrace_pt_size >> PAGE_SHIFT); i++ )
+            {
+                struct page_info *pg = mfn_to_page(
+                    mfn_add(page_to_mfn(v->vmtrace.pt_buf), i));
+                if ( (pg->count_info & PGC_count_mask) != 1 )
+                    return -EBUSY;
+            }
+
+            free_domheap_pages(v->vmtrace.pt_buf,
+                get_order_from_bytes(v->domain->vmtrace_pt_size));
+        }
+
         if ( is_pv_domain(d) )
         {
             for_each_vcpu ( d, v )
diff --git a/xen/common/domain.c b/xen/common/domain.c
index 25d3359c5b..f480c4e033 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -137,6 +137,21 @@ static void vcpu_destroy(struct vcpu *v)
     free_vcpu_struct(v);
 }
 
+static int vmtrace_alloc_buffers(struct vcpu *v)
+{
+    struct page_info *pg;
+    uint64_t size = v->domain->vmtrace_pt_size;
+
+    pg = alloc_domheap_pages(v->domain, get_order_from_bytes(size),
+                             MEMF_no_refcount);
+
+    if ( !pg )
+        return -ENOMEM;
+
+    v->vmtrace.pt_buf = pg;
+    return 0;
+}
+
 struct vcpu *vcpu_create(struct domain *d, unsigned int vcpu_id)
 {
     struct vcpu *v;
@@ -162,6 +177,9 @@ struct vcpu *vcpu_create(struct domain *d, unsigned int vcpu_id)
     v->vcpu_id = vcpu_id;
     v->dirty_cpu = VCPU_CPU_CLEAN;
 
+    if ( d->vmtrace_pt_size && vmtrace_alloc_buffers(v) != 0 )
+        return NULL;
+
     spin_lock_init(&v->virq_lock);
 
     tasklet_init(&v->continue_hypercall_tasklet, NULL, NULL);
@@ -422,6 +440,7 @@ struct domain *domain_create(domid_t domid,
     d->shutdown_code = SHUTDOWN_CODE_INVALID;
 
     spin_lock_init(&d->pbuf_lock);
+    spin_lock_init(&d->vmtrace_lock);
 
     rwlock_init(&d->vnuma_rwlock);
 
diff --git a/xen/include/asm-x86/hvm/hvm.h b/xen/include/asm-x86/hvm/hvm.h
index 1eb377dd82..2d474a4c50 100644
--- a/xen/include/asm-x86/hvm/hvm.h
+++ b/xen/include/asm-x86/hvm/hvm.h
@@ -214,6 +214,10 @@ struct hvm_function_table {
     bool_t (*altp2m_vcpu_emulate_ve)(struct vcpu *v);
     int (*altp2m_vcpu_emulate_vmfunc)(const struct cpu_user_regs *regs);
 
+    /* vmtrace */
+    int (*vmtrace_control_pt)(struct vcpu *v, bool enable);
+    int (*vmtrace_get_pt_offset)(struct vcpu *v, uint64_t *offset);
+
     /*
      * Parameters and callbacks for hardware-assisted TSC scaling,
      * which are valid only when the hardware feature is available.
@@ -655,6 +659,22 @@ static inline bool altp2m_vcpu_emulate_ve(struct vcpu *v)
     return false;
 }
 
+static inline int vmtrace_control_pt(struct vcpu *v, bool enable)
+{
+    if ( hvm_funcs.vmtrace_control_pt )
+        return hvm_funcs.vmtrace_control_pt(v, enable);
+
+    return -EOPNOTSUPP;
+}
+
+static inline int vmtrace_get_pt_offset(struct vcpu *v, uint64_t *offset)
+{
+    if ( hvm_funcs.vmtrace_get_pt_offset )
+        return hvm_funcs.vmtrace_get_pt_offset(v, offset);
+
+    return -EOPNOTSUPP;
+}
+
 /*
  * This must be defined as a macro instead of an inline function,
  * because it uses 'struct vcpu' and 'struct domain' which have
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 48f0a61bbd..95ebab0d30 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -253,6 +253,10 @@ struct vcpu
     /* vPCI per-vCPU area, used to store data for long running operations. */
     struct vpci_vcpu vpci;
 
+    struct {
+        struct page_info *pt_buf;
+    } vmtrace;
+
     struct arch_vcpu arch;
 };
 
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Sun Jul 05 18:56:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 05 Jul 2020 18:56:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1js9of-00062x-Br; Sun, 05 Jul 2020 18:56:41 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NZHG=AQ=cert.pl=michal.leszczynski@srs-us1.protection.inumbo.net>)
 id 1js9od-0005m2-Vo
 for xen-devel@lists.xenproject.org; Sun, 05 Jul 2020 18:56:40 +0000
X-Inumbo-ID: 377be142-bef1-11ea-bca7-bc764e2007e4
Received: from bagnar.nask.net.pl (unknown [195.187.242.196])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 377be142-bef1-11ea-bca7-bc764e2007e4;
 Sun, 05 Jul 2020 18:56:22 +0000 (UTC)
Received: from bagnar.nask.net.pl (unknown [172.16.9.10])
 by bagnar.nask.net.pl (Postfix) with ESMTP id 2C467A2026;
 Sun,  5 Jul 2020 20:56:19 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by bagnar.nask.net.pl (Postfix) with ESMTP id 097FAA209C;
 Sun,  5 Jul 2020 20:56:15 +0200 (CEST)
X-Amavis-Alert: BAD HEADER SECTION, Duplicate header field: "References"
Received: from bagnar.nask.net.pl ([127.0.0.1])
 by localhost (bagnar.nask.net.pl [127.0.0.1]) (amavisd-new, port 10032)
 with ESMTP id GXBBdrOICLwb; Sun,  5 Jul 2020 20:56:15 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by bagnar.nask.net.pl (Postfix) with ESMTP id D1102A2081;
 Sun,  5 Jul 2020 20:56:14 +0200 (CEST)
X-Virus-Scanned: amavisd-new at bagnar.nask.net.pl
X-Amavis-Alert: BAD HEADER SECTION, Duplicate header field: "References"
Received: from bagnar.nask.net.pl ([127.0.0.1])
 by localhost (bagnar.nask.net.pl [127.0.0.1]) (amavisd-new, port 10026)
 with ESMTP id xK-_YHP_09Yc; Sun,  5 Jul 2020 20:56:14 +0200 (CEST)
Received: from belindir.nask.net.pl (belindir-ext.nask.net.pl
 [195.187.242.210])
 by bagnar.nask.net.pl (Postfix) with ESMTP id AE4C1A1F8E;
 Sun,  5 Jul 2020 20:56:13 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by belindir.nask.net.pl (Postfix) with ESMTP id 1240F22C13;
 Sun,  5 Jul 2020 20:55:19 +0200 (CEST)
X-Amavis-Alert: BAD HEADER SECTION, Duplicate header field: "References"
Received: from belindir.nask.net.pl ([127.0.0.1])
 by localhost (belindir.nask.net.pl [127.0.0.1]) (amavisd-new, port 10032)
 with ESMTP id KA1JC6HJHzTN; Sun,  5 Jul 2020 20:55:13 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by belindir.nask.net.pl (Postfix) with ESMTP id 8423C22C1E;
 Sun,  5 Jul 2020 20:55:08 +0200 (CEST)
X-Quarantine-ID: <p10_anPoB9eT>
X-Virus-Scanned: amavisd-new at belindir.nask.net.pl
X-Amavis-Alert: BAD HEADER SECTION, Duplicate header field: "References"
Received: from belindir.nask.net.pl ([127.0.0.1])
 by localhost (belindir.nask.net.pl [127.0.0.1]) (amavisd-new, port 10026)
 with ESMTP id p10_anPoB9eT; Sun,  5 Jul 2020 20:55:08 +0200 (CEST)
Received: from mq-desktop.cert.pl (unknown [195.187.238.217])
 by belindir.nask.net.pl (Postfix) with ESMTPSA id 538CB2295A;
 Sun,  5 Jul 2020 20:55:08 +0200 (CEST)
From: =?UTF-8?q?Micha=C5=82=20Leszczy=C5=84ski?= <michal.leszczynski@cert.pl>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v5 08/11] x86/mm: add vmtrace_buf resource type
Date: Sun,  5 Jul 2020 20:55:01 +0200
Message-Id: <a306c4811973d80c83f1cb46cdbef1aa54ac6379.1593974333.git.michal.leszczynski@cert.pl>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1593974333.git.michal.leszczynski@cert.pl>
References: <cover.1593974333.git.michal.leszczynski@cert.pl>
In-Reply-To: <cover.1593974333.git.michal.leszczynski@cert.pl>
References: <cover.1593974333.git.michal.leszczynski@cert.pl>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 luwei.kang@intel.com, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Michal Leszczynski <michal.leszczynski@cert.pl>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 tamas.lengyel@intel.com
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Michal Leszczynski <michal.leszczynski@cert.pl>

Allow mapping the processor trace buffer using
acquire_resource().

Signed-off-by: Michal Leszczynski <michal.leszczynski@cert.pl>
---
 xen/common/memory.c         | 28 ++++++++++++++++++++++++++++
 xen/include/public/memory.h |  1 +
 2 files changed, 29 insertions(+)

diff --git a/xen/common/memory.c b/xen/common/memory.c
index eb42f883df..04f4e152c0 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -1007,6 +1007,29 @@ static long xatp_permission_check(struct domain *d, unsigned int space)
     return xsm_add_to_physmap(XSM_TARGET, current->domain, d);
 }
 
+static int acquire_vmtrace_buf(struct domain *d, unsigned int id,
+                               unsigned long frame,
+                               unsigned int nr_frames,
+                               xen_pfn_t mfn_list[])
+{
+    mfn_t mfn;
+    unsigned int i;
+    struct vcpu *v = domain_vcpu(d, id);
+
+    if ( !v || !v->vmtrace.pt_buf )
+        return -EINVAL;
+
+    mfn = page_to_mfn(v->vmtrace.pt_buf);
+
+    if ( frame + nr_frames > (v->domain->vmtrace_pt_size >> PAGE_SHIFT) )
+        return -EINVAL;
+
+    for ( i = 0; i < nr_frames; i++ )
+        mfn_list[i] = mfn_x(mfn_add(mfn, frame + i));
+
+    return 0;
+}
+
 static int acquire_grant_table(struct domain *d, unsigned int id,
                                unsigned long frame,
                                unsigned int nr_frames,
@@ -1117,6 +1140,11 @@ static int acquire_resource(
                                  mfn_list);
         break;
 
+    case XENMEM_resource_vmtrace_buf:
+        rc = acquire_vmtrace_buf(d, xmar.id, xmar.frame, xmar.nr_frames,
+                                 mfn_list);
+        break;
+
     default:
         rc = arch_acquire_resource(d, xmar.type, xmar.id, xmar.frame,
                                    xmar.nr_frames, mfn_list);
diff --git a/xen/include/public/memory.h b/xen/include/public/memory.h
index 21057ed78e..f4c905a10e 100644
--- a/xen/include/public/memory.h
+++ b/xen/include/public/memory.h
@@ -625,6 +625,7 @@ struct xen_mem_acquire_resource {
 
 #define XENMEM_resource_ioreq_server 0
 #define XENMEM_resource_grant_table 1
+#define XENMEM_resource_vmtrace_buf 2
 
     /*
      * IN - a type-specific resource identifier, which must be zero
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Sun Jul 05 18:59:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 05 Jul 2020 18:59:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1js9rE-0006iL-VP; Sun, 05 Jul 2020 18:59:20 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=RgbI=AQ=cert.pl=michall@srs-us1.protection.inumbo.net>)
 id 1js9rD-0006iE-Tg
 for xen-devel@lists.xenproject.org; Sun, 05 Jul 2020 18:59:19 +0000
X-Inumbo-ID: a119caba-bef1-11ea-8496-bc764e2007e4
Received: from bagnar.nask.net.pl (unknown [195.187.242.196])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a119caba-bef1-11ea-8496-bc764e2007e4;
 Sun, 05 Jul 2020 18:59:19 +0000 (UTC)
Received: from bagnar.nask.net.pl (unknown [172.16.9.10])
 by bagnar.nask.net.pl (Postfix) with ESMTP id 6DF33A1BA5;
 Sun,  5 Jul 2020 20:59:18 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by bagnar.nask.net.pl (Postfix) with ESMTP id 6B88AA1B9C;
 Sun,  5 Jul 2020 20:59:17 +0200 (CEST)
Received: from bagnar.nask.net.pl ([127.0.0.1])
 by localhost (bagnar.nask.net.pl [127.0.0.1]) (amavisd-new, port 10032)
 with ESMTP id AkBzsEYyKSVV; Sun,  5 Jul 2020 20:59:17 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by bagnar.nask.net.pl (Postfix) with ESMTP id 0F325A1BA5;
 Sun,  5 Jul 2020 20:59:17 +0200 (CEST)
X-Virus-Scanned: amavisd-new at bagnar.nask.net.pl
Received: from bagnar.nask.net.pl ([127.0.0.1])
 by localhost (bagnar.nask.net.pl [127.0.0.1]) (amavisd-new, port 10026)
 with ESMTP id LlagYyDGD44d; Sun,  5 Jul 2020 20:59:16 +0200 (CEST)
Received: from belindir.nask.net.pl (belindir-ext.nask.net.pl
 [195.187.242.210])
 by bagnar.nask.net.pl (Postfix) with ESMTP id E4D04A1B9C;
 Sun,  5 Jul 2020 20:59:16 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by belindir.nask.net.pl (Postfix) with ESMTP id D824C22C09;
 Sun,  5 Jul 2020 20:58:46 +0200 (CEST)
Received: from belindir.nask.net.pl ([127.0.0.1])
 by localhost (belindir.nask.net.pl [127.0.0.1]) (amavisd-new, port 10032)
 with ESMTP id GxDN8sUCE3ir; Sun,  5 Jul 2020 20:58:41 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by belindir.nask.net.pl (Postfix) with ESMTP id 7777622C0D;
 Sun,  5 Jul 2020 20:58:41 +0200 (CEST)
X-Virus-Scanned: amavisd-new at belindir.nask.net.pl
Received: from belindir.nask.net.pl ([127.0.0.1])
 by localhost (belindir.nask.net.pl [127.0.0.1]) (amavisd-new, port 10026)
 with ESMTP id eRShEoNG52Mn; Sun,  5 Jul 2020 20:58:41 +0200 (CEST)
Received: from belindir.nask.net.pl (belindir.nask.net.pl [172.16.10.10])
 by belindir.nask.net.pl (Postfix) with ESMTP id 59DA122C09;
 Sun,  5 Jul 2020 20:58:41 +0200 (CEST)
Date: Sun, 5 Jul 2020 20:58:41 +0200 (CEST)
From: =?utf-8?Q?Micha=C5=82_Leszczy=C5=84ski?= <michal.leszczynski@cert.pl>
To: xen-devel@lists.xenproject.org
Message-ID: <983829150.19744505.1593975521301.JavaMail.zimbra@cert.pl>
In-Reply-To: <e0ac5422825ce307470256aab1652336d5179a9a.1593974333.git.michal.leszczynski@cert.pl>
References: <cover.1593974333.git.michal.leszczynski@cert.pl>
 <cover.1593974333.git.michal.leszczynski@cert.pl>
 <e0ac5422825ce307470256aab1652336d5179a9a.1593974333.git.michal.leszczynski@cert.pl>
Subject: Re: [PATCH v5 11/11] tools/proctrace: add proctrace tool
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
X-Originating-IP: [172.16.10.10]
X-Mailer: Zimbra 8.6.0_GA_1194 (ZimbraWebClient - GC83 (Win)/8.6.0_GA_1194)
Thread-Topic: tools/proctrace: add proctrace tool
Thread-Index: XPBWFQz1Fq+Dhs3nny1nHRh5+zAzMw==
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: luwei kang <luwei.kang@intel.com>, tamas lengyel <tamas.lengyel@intel.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

----- On 5 Jul 2020, at 20:55, Michał Leszczyński <michal.leszczynski@cert.pl> wrote:

> From: Michal Leszczynski <michal.leszczynski@cert.pl>
>
> Add a demonstration tool that uses xc_vmtrace_* calls in order
> to manage external IPT monitoring for a DomU.
>
> Signed-off-by: Michal Leszczynski <michal.leszczynski@cert.pl>
> ---
> tools/proctrace/Makefile    |  48 +++++++++++
> tools/proctrace/proctrace.c | 163 ++++++++++++++++++++++++++++++++++++
> 2 files changed, 211 insertions(+)
> create mode 100644 tools/proctrace/Makefile
> create mode 100644 tools/proctrace/proctrace.c


> diff --git a/tools/proctrace/proctrace.c b/tools/proctrace/proctrace.c
> new file mode 100644
> index 0000000000..22bf91db8d
> --- /dev/null
> +++ b/tools/proctrace/proctrace.c


> +#include <stdlib.h>
> +#include <stdio.h>
> +#include <sys/mman.h>
> +#include <signal.h>
> +
> +#include <xenctrl.h>
> +#include <xen/xen.h>
> +#include <xenforeignmemory.h>
> +
> +#define BUF_SIZE (16384 * XC_PAGE_SIZE)


I would like to discuss how we should retrieve the trace buffer size
at runtime. Should there be a hypercall for it, or some extension to
the acquire_resource logic?

Best regards,
Michał Leszczyński
CERT Polska


From xen-devel-bounces@lists.xenproject.org Sun Jul 05 19:03:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 05 Jul 2020 19:03:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1js9up-0007ZF-Fd; Sun, 05 Jul 2020 19:03:03 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=RgbI=AQ=cert.pl=michall@srs-us1.protection.inumbo.net>)
 id 1js9un-0007ZA-Sz
 for xen-devel@lists.xenproject.org; Sun, 05 Jul 2020 19:03:01 +0000
X-Inumbo-ID: 24619920-bef2-11ea-8c05-12813bfff9fa
Received: from bagnar.nask.net.pl (unknown [195.187.242.196])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 24619920-bef2-11ea-8c05-12813bfff9fa;
 Sun, 05 Jul 2020 19:02:59 +0000 (UTC)
Received: from bagnar.nask.net.pl (unknown [172.16.9.10])
 by bagnar.nask.net.pl (Postfix) with ESMTP id 704E0A20CC;
 Sun,  5 Jul 2020 21:02:58 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by bagnar.nask.net.pl (Postfix) with ESMTP id 6D318A2022;
 Sun,  5 Jul 2020 21:02:57 +0200 (CEST)
Received: from bagnar.nask.net.pl ([127.0.0.1])
 by localhost (bagnar.nask.net.pl [127.0.0.1]) (amavisd-new, port 10032)
 with ESMTP id C4JvvuLBAb7c; Sun,  5 Jul 2020 21:02:56 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by bagnar.nask.net.pl (Postfix) with ESMTP id 7FF2FA20CC;
 Sun,  5 Jul 2020 21:02:56 +0200 (CEST)
X-Virus-Scanned: amavisd-new at bagnar.nask.net.pl
Received: from bagnar.nask.net.pl ([127.0.0.1])
 by localhost (bagnar.nask.net.pl [127.0.0.1]) (amavisd-new, port 10026)
 with ESMTP id QJNZsigDHHcD; Sun,  5 Jul 2020 21:02:56 +0200 (CEST)
Received: from belindir.nask.net.pl (belindir-ext.nask.net.pl
 [195.187.242.210])
 by bagnar.nask.net.pl (Postfix) with ESMTP id 5A114A2022;
 Sun,  5 Jul 2020 21:02:56 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by belindir.nask.net.pl (Postfix) with ESMTP id 4B84822A09;
 Sun,  5 Jul 2020 21:02:26 +0200 (CEST)
Received: from belindir.nask.net.pl ([127.0.0.1])
 by localhost (belindir.nask.net.pl [127.0.0.1]) (amavisd-new, port 10032)
 with ESMTP id 0SCzO9XgUyOs; Sun,  5 Jul 2020 21:02:20 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by belindir.nask.net.pl (Postfix) with ESMTP id 8F03022C51;
 Sun,  5 Jul 2020 21:02:20 +0200 (CEST)
X-Virus-Scanned: amavisd-new at belindir.nask.net.pl
Received: from belindir.nask.net.pl ([127.0.0.1])
 by localhost (belindir.nask.net.pl [127.0.0.1]) (amavisd-new, port 10026)
 with ESMTP id 3Avyv-nzelws; Sun,  5 Jul 2020 21:02:20 +0200 (CEST)
Received: from belindir.nask.net.pl (belindir.nask.net.pl [172.16.10.10])
 by belindir.nask.net.pl (Postfix) with ESMTP id 7148722C50;
 Sun,  5 Jul 2020 21:02:20 +0200 (CEST)
Date: Sun, 5 Jul 2020 21:02:20 +0200 (CEST)
From: =?utf-8?Q?Micha=C5=82_Leszczy=C5=84ski?= <michal.leszczynski@cert.pl>
To: xen-devel@lists.xenproject.org
Message-ID: <1763045628.19744689.1593975740414.JavaMail.zimbra@cert.pl>
In-Reply-To: <f7e3c91789a7763b997918b6ebb987be670f9ce5.1593974333.git.michal.leszczynski@cert.pl>
References: <cover.1593974333.git.michal.leszczynski@cert.pl>
 <cover.1593974333.git.michal.leszczynski@cert.pl>
 <f7e3c91789a7763b997918b6ebb987be670f9ce5.1593974333.git.michal.leszczynski@cert.pl>
Subject: Re: [PATCH v5 05/11] tools/libxl: add vmtrace_pt_size parameter
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
X-Originating-IP: [172.16.10.10]
X-Mailer: Zimbra 8.6.0_GA_1194 (ZimbraWebClient - GC83 (Win)/8.6.0_GA_1194)
Thread-Topic: tools/libxl: add vmtrace_pt_size parameter
Thread-Index: r6wk3k4v+wV7+SIKpnvkyBjdP7odSg==
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: luwei kang <luwei.kang@intel.com>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Anthony PERARD <anthony.perard@citrix.com>,
 tamas lengyel <tamas.lengyel@intel.com>,
 Roger Pau =?utf-8?Q?Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

----- On 5 Jul 2020, at 20:54, Michał Leszczyński <michal.leszczynski@cert.pl> wrote:

> From: Michal Leszczynski <michal.leszczynski@cert.pl>
>
> Allow specifying the size of the per-vCPU trace buffer upon
> domain creation. This is zero by default (meaning: not enabled).
>
> Signed-off-by: Michal Leszczynski <michal.leszczynski@cert.pl>
> ---
> docs/man/xl.cfg.5.pod.in             | 11 +++++++++++
> tools/golang/xenlight/helpers.gen.go |  2 ++
> tools/golang/xenlight/types.gen.go   |  1 +
> tools/libxl/libxl.h                  |  8 ++++++++
> tools/libxl/libxl_create.c           |  1 +
> tools/libxl/libxl_types.idl          |  2 ++
> tools/xl/xl_parse.c                  | 22 ++++++++++++++++++++++
> 7 files changed, 47 insertions(+)
>
> diff --git a/docs/man/xl.cfg.5.pod.in b/docs/man/xl.cfg.5.pod.in
> index 0532739c1f..670759f6bd 100644
> --- a/docs/man/xl.cfg.5.pod.in
> +++ b/docs/man/xl.cfg.5.pod.in
> @@ -278,6 +278,17 @@ memory=8096 will report significantly less memory available
> for use
> than a system with maxmem=8096 memory=8096 due to the memory overhead
> of having to track the unused pages.
>
> +=item B<processor_trace_buffer_size=BYTES>
> +
> +Specifies the size of the processor trace buffer that will be allocated
> +for each vCPU belonging to this domain. Disabled (i.e.
> +B<processor_trace_buffer_size=0>) by default. This must be set to a
> +non-zero value in order to be able to use processor tracing features
> +with this domain.
> +
> +B<NOTE>: The size value must be between 4 kB and 4 GB and it must
> +also be a power of 2.
> +
> =back
>
> =head3 Guest Virtual NUMA Configuration
> diff --git a/tools/golang/xenlight/helpers.gen.go
> b/tools/golang/xenlight/helpers.gen.go
> index 152c7e8e6b..bfc37b69c8 100644
> --- a/tools/golang/xenlight/helpers.gen.go
> +++ b/tools/golang/xenlight/helpers.gen.go
> @@ -1117,6 +1117,7 @@ return fmt.Errorf("invalid union key '%v'", x.Type)}
> x.ArchArm.GicVersion = GicVersion(xc.arch_arm.gic_version)
> x.ArchArm.Vuart = VuartType(xc.arch_arm.vuart)
> x.Altp2M = Altp2MMode(xc.altp2m)
> +x.VmtracePtOrder = int(xc.vmtrace_pt_order)
>
>  return nil}
>
> @@ -1592,6 +1593,7 @@ return fmt.Errorf("invalid union key '%v'", x.Type)}
> xc.arch_arm.gic_version = C.libxl_gic_version(x.ArchArm.GicVersion)
> xc.arch_arm.vuart = C.libxl_vuart_type(x.ArchArm.Vuart)
> xc.altp2m = C.libxl_altp2m_mode(x.Altp2M)
> +xc.vmtrace_pt_order = C.int(x.VmtracePtOrder)
>
>  return nil
>  }
> diff --git a/tools/golang/xenlight/types.gen.go
> b/tools/golang/xenlight/types.gen.go
> index 663c1e86b4..f9b07ac862 100644
> --- a/tools/golang/xenlight/types.gen.go
> +++ b/tools/golang/xenlight/types.gen.go
> @@ -516,6 +516,7 @@ GicVersion GicVersion
> Vuart VuartType
> }
> Altp2M Altp2MMode
> +VmtracePtOrder int
> }
>
> type domainBuildInfoTypeUnion interface {
> diff --git a/tools/libxl/libxl.h b/tools/libxl/libxl.h
> index 1cd6c38e83..4abb521756 100644
> --- a/tools/libxl/libxl.h
> +++ b/tools/libxl/libxl.h
> @@ -438,6 +438,14 @@
>  */
> #define LIBXL_HAVE_CREATEINFO_PASSTHROUGH 1
>
> +/*
> + * LIBXL_HAVE_VMTRACE_PT_ORDER indicates that
> + * libxl_domain_create_info has a vmtrace_pt_order parameter, which
> + * allows enabling pre-allocation of processor tracing buffers
> + * of the given size order.
> + */
> +#define LIBXL_HAVE_VMTRACE_PT_ORDER 1
> +
> /*
>  * libxl ABI compatibility
>  *
> diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
> index 2814818e34..82b595161a 100644
> --- a/tools/libxl/libxl_create.c
> +++ b/tools/libxl/libxl_create.c
> @@ -608,6 +608,7 @@ int libxl__domain_make(libxl__gc *gc, libxl_domain_config
> *d_config,
>             .max_evtchn_port = b_info->event_channels,
>             .max_grant_frames = b_info->max_grant_frames,
>             .max_maptrack_frames = b_info->max_maptrack_frames,
> +            .vmtrace_pt_order = b_info->vmtrace_pt_order,
>         };
>
>         if (info->type != LIBXL_DOMAIN_TYPE_PV) {
> diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
> index 9d3f05f399..1c5dd43e4d 100644
> --- a/tools/libxl/libxl_types.idl
> +++ b/tools/libxl/libxl_types.idl
> @@ -645,6 +645,8 @@ libxl_domain_build_info = Struct("domain_build_info",[
>     # supported by x86 HVM and ARM support is planned.
>     ("altp2m", libxl_altp2m_mode),
>
> +    ("vmtrace_pt_order", integer),
> +
>     ], dir=DIR_IN,
>        copy_deprecated_fn="libxl__domain_build_info_copy_deprecated",
> )
> diff --git a/tools/xl/xl_parse.c b/tools/xl/xl_parse.c
> index 61b4ef7b7e..279f7c14d3 100644
> --- a/tools/xl/xl_parse.c
> +++ b/tools/xl/xl_parse.c
> @@ -1861,6 +1861,28 @@ void parse_config_data(const char *config_source,
>         }
>     }
>
> +    if (!xlu_cfg_get_long(config, "processor_trace_buffer_size", &l, 1) && l) {
> +        int32_t shift = 0;
> +
> +        if (l & (l - 1))
> +        {
> +            fprintf(stderr, "ERROR: processor_trace_buffer_size "
> +                    "- must be a power of 2\n");
> +            exit(1);
> +        }
> +
> +        while (l >>= 1) ++shift;
> +
> +        if (shift <= XEN_PAGE_SHIFT)
> +        {
> +            fprintf(stderr, "ERROR: processor_trace_buffer_size "
> +                    "- value is too small\n");
> +            exit(1);
> +        }
> +
> +        b_info->vmtrace_pt_order = shift - XEN_PAGE_SHIFT;
> +    }
> +
>     if (!xlu_cfg_get_list(config, "ioports", &ioports, &num_ioports, 0)) {
>         b_info->num_ioports = num_ioports;
>         b_info->ioports = calloc(num_ioports, sizeof(*b_info->ioports));
> --
> 2.17.1


As there were many different ideas about what the naming scheme should be
and what kinds of values should be passed where, I would like to discuss
this particular topic. Right now it is fairly confusing:

* the user sets the "processor_trace_buffer_size" option in xl.cfg
* the domain creation hypercall uses "vmtrace_pt_order" (derived from the above)
* the hypervisor side stores "vmtrace_pt_size" (converted back to bytes)
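To make the three representations concrete, the conversions could be sketched as below. The logic mirrors the xl_parse.c hunk quoted above, but XEN_PAGE_SHIFT = 12 (4 KiB pages) and the helper names are assumptions for illustration only, not the actual toolstack code:

```c
#include <stdint.h>

#define XEN_PAGE_SHIFT 12  /* assumed: 4 KiB pages, as on x86 */

/* Convert the power-of-two buffer size in bytes (the xl.cfg value) to
 * the page order passed as vmtrace_pt_order; returns -1 if the size is
 * not a power of two or not larger than a single page. */
static int size_to_order(uint64_t size)
{
    int shift = 0;

    if ( size == 0 || (size & (size - 1)) )  /* not a power of two */
        return -1;

    while ( size >>= 1 )
        ++shift;

    if ( shift <= XEN_PAGE_SHIFT )           /* too small */
        return -1;

    return shift - XEN_PAGE_SHIFT;
}

/* The hypervisor side converts back to bytes: 1 << (order + page shift),
 * which is what ends up stored as vmtrace_pt_size. */
static uint64_t order_to_size(int order)
{
    return (uint64_t)1 << (order + XEN_PAGE_SHIFT);
}
```

For example, a processor_trace_buffer_size of 64 KiB would be passed to the hypercall as vmtrace_pt_order = 4 and stored by the hypervisor as 65536 bytes again.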


Best regards,
Michał Leszczyński
CERT Polska



From xen-devel-bounces@lists.xenproject.org Sun Jul 05 19:12:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 05 Jul 2020 19:12:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jsA3b-0008Rx-DB; Sun, 05 Jul 2020 19:12:07 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=RgbI=AQ=cert.pl=michall@srs-us1.protection.inumbo.net>)
 id 1jsA3a-0008Rr-5d
 for xen-devel@lists.xenproject.org; Sun, 05 Jul 2020 19:12:06 +0000
X-Inumbo-ID: 68bce16e-bef3-11ea-8c07-12813bfff9fa
Received: from bagnar.nask.net.pl (unknown [195.187.242.196])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 68bce16e-bef3-11ea-8c07-12813bfff9fa;
 Sun, 05 Jul 2020 19:12:03 +0000 (UTC)
Received: from bagnar.nask.net.pl (unknown [172.16.9.10])
 by bagnar.nask.net.pl (Postfix) with ESMTP id B4FC7A20D0;
 Sun,  5 Jul 2020 21:12:02 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by bagnar.nask.net.pl (Postfix) with ESMTP id AFB94A1F5F;
 Sun,  5 Jul 2020 21:12:01 +0200 (CEST)
Received: from bagnar.nask.net.pl ([127.0.0.1])
 by localhost (bagnar.nask.net.pl [127.0.0.1]) (amavisd-new, port 10032)
 with ESMTP id 9YjeorauOy19; Sun,  5 Jul 2020 21:12:01 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by bagnar.nask.net.pl (Postfix) with ESMTP id 2E434A20D0;
 Sun,  5 Jul 2020 21:12:01 +0200 (CEST)
X-Virus-Scanned: amavisd-new at bagnar.nask.net.pl
Received: from bagnar.nask.net.pl ([127.0.0.1])
 by localhost (bagnar.nask.net.pl [127.0.0.1]) (amavisd-new, port 10026)
 with ESMTP id 9Sb-ECyKpntJ; Sun,  5 Jul 2020 21:12:01 +0200 (CEST)
Received: from belindir.nask.net.pl (belindir-ext.nask.net.pl
 [195.187.242.210])
 by bagnar.nask.net.pl (Postfix) with ESMTP id 0649FA1F5F;
 Sun,  5 Jul 2020 21:12:01 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by belindir.nask.net.pl (Postfix) with ESMTP id DF00D22B81;
 Sun,  5 Jul 2020 21:11:30 +0200 (CEST)
Received: from belindir.nask.net.pl ([127.0.0.1])
 by localhost (belindir.nask.net.pl [127.0.0.1]) (amavisd-new, port 10032)
 with ESMTP id qsaOUekXGdrE; Sun,  5 Jul 2020 21:11:25 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by belindir.nask.net.pl (Postfix) with ESMTP id 52C44226F9;
 Sun,  5 Jul 2020 21:11:25 +0200 (CEST)
X-Virus-Scanned: amavisd-new at belindir.nask.net.pl
Received: from belindir.nask.net.pl ([127.0.0.1])
 by localhost (belindir.nask.net.pl [127.0.0.1]) (amavisd-new, port 10026)
 with ESMTP id O5e_p3PByzIt; Sun,  5 Jul 2020 21:11:25 +0200 (CEST)
Received: from belindir.nask.net.pl (belindir.nask.net.pl [172.16.10.10])
 by belindir.nask.net.pl (Postfix) with ESMTP id 31019225FD;
 Sun,  5 Jul 2020 21:11:25 +0200 (CEST)
Date: Sun, 5 Jul 2020 21:11:25 +0200 (CEST)
From: =?utf-8?Q?Micha=C5=82_Leszczy=C5=84ski?= <michal.leszczynski@cert.pl>
To: xen-devel@lists.xenproject.org
Message-ID: <762195600.19745364.1593976285067.JavaMail.zimbra@cert.pl>
In-Reply-To: <a4833c8168e287f0caf1dc6f16ec5c054bd88b0a.1593974333.git.michal.leszczynski@cert.pl>
References: <cover.1593974333.git.michal.leszczynski@cert.pl>
 <cover.1593974333.git.michal.leszczynski@cert.pl>
 <a4833c8168e287f0caf1dc6f16ec5c054bd88b0a.1593974333.git.michal.leszczynski@cert.pl>
Subject: Re: [PATCH v5 06/11] x86/hvm: processor trace interface in HVM
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
X-Originating-IP: [172.16.10.10]
X-Mailer: Zimbra 8.6.0_GA_1194 (ZimbraWebClient - GC83 (Win)/8.6.0_GA_1194)
Thread-Topic: x86/hvm: processor trace interface in HVM
Thread-Index: mWFwsvM7AQVGQ/Hxmuqg6b8gSpO+Mg==
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 luwei kang <luwei.kang@intel.com>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 tamas lengyel <tamas.lengyel@intel.com>,
 Roger Pau =?utf-8?Q?Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

----- On 5 Jul 2020 at 20:54, Michał Leszczyński <michal.leszczynski@cert.pl> wrote:

> From: Michal Leszczynski <michal.leszczynski@cert.pl>
>
> Implement necessary changes in common code/HVM to support
> processor trace features. Define vmtrace_pt_* API and
> implement trace buffer allocation/deallocation in common
> code.
>
> Signed-off-by: Michal Leszczynski <michal.leszczynski@cert.pl>
> ---
> xen/arch/x86/domain.c         | 19 +++++++++++++++++++
> xen/common/domain.c           | 19 +++++++++++++++++++
> xen/include/asm-x86/hvm/hvm.h | 20 ++++++++++++++++++++
> xen/include/xen/sched.h       |  4 ++++
> 4 files changed, 62 insertions(+)
>
> diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
> index fee6c3931a..79c9794408 100644
> --- a/xen/arch/x86/domain.c
> +++ b/xen/arch/x86/domain.c
> @@ -2199,6 +2199,25 @@ int domain_relinquish_resources(struct domain *d)
>                 altp2m_vcpu_disable_ve(v);
>         }
>
> +        for_each_vcpu ( d, v )
> +        {
> +            unsigned int i;
> +
> +            if ( !v->vmtrace.pt_buf )
> +                continue;
> +
> +            for ( i = 0; i < (v->domain->vmtrace_pt_size >> PAGE_SHIFT); i++ )
> +            {
> +                struct page_info *pg = mfn_to_page(
> +                    mfn_add(page_to_mfn(v->vmtrace.pt_buf), i));
> +                if ( (pg->count_info & PGC_count_mask) != 1 )
> +                    return -EBUSY;
> +            }
> +
> +            free_domheap_pages(v->vmtrace.pt_buf,
> +                get_order_from_bytes(v->domain->vmtrace_pt_size));


While this works, I don't feel that returning -EBUSY from this loop is
a good solution. I would like to kindly ask for suggestions regarding
this topic.
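For context, the check performed inside that loop can be modelled in isolation as below; struct page_info, the PGC_count_mask value and the helper name are simplified stand-ins for the real Xen definitions, meant only to illustrate the predicate, not the hypervisor code:

```c
#include <stdbool.h>
#include <stdint.h>

#define PGC_count_mask 0x00ffffffu  /* stand-in for Xen's real mask */

struct page_info {                  /* simplified model, not Xen's */
    uint32_t count_info;
};

/* A trace buffer page may only be freed when its refcount is exactly 1
 * (the allocation reference itself); a higher count means the page is
 * still mapped somewhere, which is what makes the loop bail out with
 * -EBUSY instead of freeing the buffer. */
static bool pt_buf_can_free(const struct page_info *pages, unsigned int nr)
{
    for ( unsigned int i = 0; i < nr; i++ )
        if ( (pages[i].count_info & PGC_count_mask) != 1 )
            return false;

    return true;
}
```

The open question above is what to do when this predicate fails during domain_relinquish_resources, i.e. whether bailing out with -EBUSY is acceptable or some other teardown strategy is needed.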


Best regards,
Michał Leszczyński
CERT Polska


From xen-devel-bounces@lists.xenproject.org Sun Jul 05 20:37:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 05 Jul 2020 20:37:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jsBNW-0006f4-W1; Sun, 05 Jul 2020 20:36:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3tmk=AQ=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jsBNV-0006ee-Ex
 for xen-devel@lists.xenproject.org; Sun, 05 Jul 2020 20:36:45 +0000
X-Inumbo-ID: 39c0f52e-beff-11ea-8496-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 39c0f52e-beff-11ea-8496-bc764e2007e4;
 Sun, 05 Jul 2020 20:36:38 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=E9wEbFvfUx6tiOm2rKUi4CBI6BeXdOALmKQ+quWo6Fc=; b=ikUn4Le0Z0dMqmhcHZn40JdZY
 OEhMNCusXJl3n0p9JKHZs+i8lilI3azQNTfzo2rzryhrr+IMMcw9FwEA9sNXBP3BbFhYR+mzS+RJa
 TvGDxipwrPzBOiSyZYfDmvjoWyXJ9GQsCQUuGCvt3znRnZmJIvqDtWxDVwSxjnQm/jJnA=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jsBNN-0001IH-EU; Sun, 05 Jul 2020 20:36:37 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jsBNN-0003C4-7B; Sun, 05 Jul 2020 20:36:37 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jsBNN-0006HM-6R; Sun, 05 Jul 2020 20:36:37 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151643-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 151643: regressions - FAIL
X-Osstest-Failures: linux-linus:test-arm64-arm64-libvirt-xsm:guest-start.2:fail:regression
 linux-linus:test-amd64-i386-xl-pvshim:xen-boot:fail:heisenbug
 linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-boot:fail:heisenbug
 linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
 linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
 linux-linus:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:heisenbug
 linux-linus:test-amd64-amd64-examine:memdisk-try-append:fail:heisenbug
 linux-linus:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:heisenbug
 linux-linus:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:heisenbug
 linux-linus:test-arm64-arm64-libvirt-xsm:guest-start/debian.repeat:fail:heisenbug
 linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
X-Osstest-Versions-This: linux=35e884f89df4c48566d745dc5a97a0d058d04263
X-Osstest-Versions-That: linux=1b5044021070efa3259f3e9548dc35d1eb6aa844
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 05 Jul 2020 20:36:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151643 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151643/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-libvirt-xsm 17 guest-start.2  fail in 151630 REGR. vs. 151214

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-pvshim     7 xen-boot         fail in 151617 pass in 151643
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-boot  fail in 151617 pass in 151643
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 10 debian-hvm-install fail in 151617 pass in 151643
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 10 debian-hvm-install fail in 151617 pass in 151643
 test-armhf-armhf-xl-rtds 16 guest-start/debian.repeat fail in 151617 pass in 151643
 test-amd64-amd64-examine    4 memdisk-try-append fail in 151630 pass in 151643
 test-armhf-armhf-xl-vhd 15 guest-start/debian.repeat fail in 151630 pass in 151643
 test-amd64-amd64-xl-rtds     18 guest-localmigrate/x10     fail pass in 151617
 test-arm64-arm64-libvirt-xsm 16 guest-start/debian.repeat  fail pass in 151630

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 151214
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 151214
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 151214
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 151214
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 151214
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 151214
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 151214
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 151214
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass

version targeted for testing:
 linux                35e884f89df4c48566d745dc5a97a0d058d04263
baseline version:
 linux                1b5044021070efa3259f3e9548dc35d1eb6aa844

Last test of basis   151214  2020-06-18 02:27:46 Z   17 days
Failing since        151236  2020-06-19 19:10:35 Z   16 days   22 attempts
Testing same since   151617  2020-07-04 10:03:06 Z    1 days    3 attempts

------------------------------------------------------------
551 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 25961 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Jul 05 21:22:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 05 Jul 2020 21:22:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jsC5V-0002FQ-KP; Sun, 05 Jul 2020 21:22:13 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3tmk=AQ=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jsC5U-0002FK-TD
 for xen-devel@lists.xenproject.org; Sun, 05 Jul 2020 21:22:12 +0000
X-Inumbo-ID: 95af3cf0-bf05-11ea-8c18-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 95af3cf0-bf05-11ea-8c18-12813bfff9fa;
 Sun, 05 Jul 2020 21:22:09 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Message-Id:Subject:To:Sender:
 Reply-To:Cc:MIME-Version:Content-Type:Content-Transfer-Encoding:Content-ID:
 Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc
 :Resent-Message-ID:In-Reply-To:References:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=RtnWJr1o3Hx7QQXaH44bzfsgkSNs31fXb90mfiTqnMQ=; b=Owxz3WrQNa/BP2eHEDFdpbfcrT
 KHy487fTpy0lIOdhaHqLybDK3RnO8PKkdrxMgKPZyR9DHaoxSKlKNkD0idGVYFHZWiHO+FvVvTIos
 nemkafVmjLD7JziO+9iaFxohHWm2iZkUEbqbVX6A0ZKeS5Nz8ZIYGecRP16kVmxelU84=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jsC5Q-00026v-WF; Sun, 05 Jul 2020 21:22:09 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jsC5Q-0004AW-FF; Sun, 05 Jul 2020 21:22:08 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jsC5Q-0002TK-EW; Sun, 05 Jul 2020 21:22:08 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Subject: [qemu-mainline bisection] complete
 test-amd64-amd64-xl-qemuu-ovmf-amd64
Message-Id: <E1jsC5Q-0002TK-EW@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 05 Jul 2020 21:22:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

branch xen-unstable
xenbranch xen-unstable
job test-amd64-amd64-xl-qemuu-ovmf-amd64
testid debian-hvm-install

Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://git.qemu.org/qemu.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  qemuu git://git.qemu.org/qemu.git
  Bug introduced:  81cb05732efb36971901c515b007869cc1d3a532
  Bug not present: d6b78ac8ecf94f56dbfbecc23fb4365d8772a41a
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/151654/


  commit 81cb05732efb36971901c515b007869cc1d3a532
  Author: Markus Armbruster <armbru@redhat.com>
  Date:   Tue Jun 9 14:23:37 2020 +0200
  
      qdev: Assert devices are plugged into a bus that can take them
      
      This would have caught some of the bugs I just fixed.
      
      Signed-off-by: Markus Armbruster <armbru@redhat.com>
      Reviewed-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
      Message-Id: <20200609122339.937862-23-armbru@redhat.com>


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/qemu-mainline/test-amd64-amd64-xl-qemuu-ovmf-amd64.debian-hvm-install.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/qemu-mainline/test-amd64-amd64-xl-qemuu-ovmf-amd64.debian-hvm-install --summary-out=tmp/151654.bisection-summary --basis-template=151065 --blessings=real,real-bisect qemu-mainline test-amd64-amd64-xl-qemuu-ovmf-amd64 debian-hvm-install
Searching for failure / basis pass:
 151645 fail [host=huxelrebe0] / 151149 [host=chardonnay1] 151101 [host=elbling1] 151065 [host=fiano1] 151047 [host=albana0] 150970 [host=fiano0] 150930 [host=huxelrebe1] 150916 [host=pinot1] 150909 [host=chardonnay0] 150899 [host=debina0] 150895 [host=italia0] 150831 [host=albana1] 150694 [host=debina1] 150631 [host=pinot0] 150608 ok.
Failure / basis pass flights: 151645 / 150608
(tree with no url: minios)
Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://git.qemu.org/qemu.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git
Latest c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 627d1d6693b0594d257dbe1a3363a8d4bd4d8307 3c659044118e34603161457db9934a34f816d78b eb6490f544388dd24c0d054a96dd304bc7284450 88ab0c15525ced2eefe39220742efe4769089ad8 be63d9d47f571a60d70f8fb630c03871312d9655
Basis pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 568eee7cf319fa95183c8d3b5e8dcf6e078ab8b3 3c659044118e34603161457db9934a34f816d78b 6bb228190ef0b45669d285114cf8a280c55f4b39 2e3de6253422112ae43e608661ba94ea6b345694 1497e78068421d83956f8e82fb6e1bf1fc3b1199
Generating revisions with ./adhoc-revtuple-generator  git://xenbits.xen.org/linux-pvops.git#c3038e718a19fc596f7b1baba0f83d5146dc7784-c3038e718a19fc596f7b1baba0f83d5146dc7784 git://xenbits.xen.org/osstest/linux-firmware.git#c530a75c1e6a472b0eb9558310b518f0dfcd8860-c530a75c1e6a472b0eb9558310b518f0dfcd8860 git://xenbits.xen.org/osstest/ovmf.git#568eee7cf319fa95183c8d3b5e8dcf6e078ab8b3-627d1d6693b0594d257dbe1a3363a8d4bd4d8307 git://xenbits.xen.org/qemu-xen-traditional.git#3c659044118e34603161457db99\
 34a34f816d78b-3c659044118e34603161457db9934a34f816d78b git://git.qemu.org/qemu.git#6bb228190ef0b45669d285114cf8a280c55f4b39-eb6490f544388dd24c0d054a96dd304bc7284450 git://xenbits.xen.org/osstest/seabios.git#2e3de6253422112ae43e608661ba94ea6b345694-88ab0c15525ced2eefe39220742efe4769089ad8 git://xenbits.xen.org/xen.git#1497e78068421d83956f8e82fb6e1bf1fc3b1199-be63d9d47f571a60d70f8fb630c03871312d9655
Use of uninitialized value $parents in array dereference at ./adhoc-revtuple-generator line 465.
Use of uninitialized value in concatenation (.) or string at ./adhoc-revtuple-generator line 465.
Use of uninitialized value $parents in array dereference at ./adhoc-revtuple-generator line 465.
Use of uninitialized value in concatenation (.) or string at ./adhoc-revtuple-generator line 465.
Use of uninitialized value $parents in array dereference at ./adhoc-revtuple-generator line 465.
Use of uninitialized value in concatenation (.) or string at ./adhoc-revtuple-generator line 465.
Loaded 43284 nodes in revision graph
Searching for test results:
 150631 [host=pinot0]
 150608 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 568eee7cf319fa95183c8d3b5e8dcf6e078ab8b3 3c659044118e34603161457db9934a34f816d78b 6bb228190ef0b45669d285114cf8a280c55f4b39 2e3de6253422112ae43e608661ba94ea6b345694 1497e78068421d83956f8e82fb6e1bf1fc3b1199
 150694 [host=debina1]
 150831 [host=albana1]
 150909 [host=chardonnay0]
 150930 [host=huxelrebe1]
 150916 [host=pinot1]
 150895 [host=italia0]
 150899 [host=debina0]
 150970 [host=fiano0]
 151047 [host=albana0]
 151101 [host=elbling1]
 151065 [host=fiano1]
 151149 [host=chardonnay1]
 151221 fail irrelevant
 151175 fail irrelevant
 151241 fail irrelevant
 151286 fail irrelevant
 151269 fail irrelevant
 151328 fail irrelevant
 151304 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 322969adf1fb3d6cfbd613f35121315715aff2ed 3c659044118e34603161457db9934a34f816d78b 171199f56f5f9bdf1e5d670d09ef1351d8f01bae 2e3de6253422112ae43e608661ba94ea6b345694 fde76f895d0aa817a1207d844d793239c6639bc6
 151377 fail irrelevant
 151353 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1a992030522622c42aa063788b3276789c56c1e1 3c659044118e34603161457db9934a34f816d78b d4b78317b7cf8c0c635b70086503813f79ff21ec 2e3de6253422112ae43e608661ba94ea6b345694 fde76f895d0aa817a1207d844d793239c6639bc6
 151399 fail irrelevant
 151414 fail irrelevant
 151435 fail irrelevant
 151459 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 0f01cec52f4794777feb067e4fa0bfcedfdc124e 3c659044118e34603161457db9934a34f816d78b e7651153a8801dad6805d450ea8bef9b46c1adf5 88ab0c15525ced2eefe39220742efe4769089ad8 88cfd062e8318dfeb67c7d2eb50b6cd224b0738a
 151471 fail irrelevant
 151485 fail irrelevant
 151500 fail irrelevant
 151518 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 00217f1919270007d7a911f89b32e39b9dcaa907 3c659044118e34603161457db9934a34f816d78b fc1bff958998910ec8d25db86cd2f53ff125f7ab 88ab0c15525ced2eefe39220742efe4769089ad8 23ca7ec0ba620db52a646d80e22f9703a6589f66
 151547 fail irrelevant
 151598 fail irrelevant
 151597 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 239b50a863704f7960525799eda82de061c7c458 3c659044118e34603161457db9934a34f816d78b eefe34ea4b82c2b47abe28af4cc7247d51553626 2e3de6253422112ae43e608661ba94ea6b345694 25636ed707cf1211ce846c7ec58f8643e435d7a7
 151603 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 a2433243fbe471c250d7eddc2c7da325d91265fd 3c659044118e34603161457db9934a34f816d78b 6675a653d2e57ab09c32c0ea7b44a1d6c40a7f58 2e3de6253422112ae43e608661ba94ea6b345694 3625b04991b4d6affadd99d377ab84bac48dfff4
 151593 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 568eee7cf319fa95183c8d3b5e8dcf6e078ab8b3 3c659044118e34603161457db9934a34f816d78b 6bb228190ef0b45669d285114cf8a280c55f4b39 2e3de6253422112ae43e608661ba94ea6b345694 1497e78068421d83956f8e82fb6e1bf1fc3b1199
 151577 fail irrelevant
 151600 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 239b50a863704f7960525799eda82de061c7c458 3c659044118e34603161457db9934a34f816d78b 3f429a3400822141651486193d6af625eeab05a5 2e3de6253422112ae43e608661ba94ea6b345694 71ca0e0ad000e690899936327eb09709ab182ade
 151595 fail irrelevant
 151601 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 58ae92a993687d913aa6dd00ef3497a1bc5f6c40 3c659044118e34603161457db9934a34f816d78b 54cdfe511219b8051046be55a6e156c4f08ff7ff 2e3de6253422112ae43e608661ba94ea6b345694 71ca0e0ad000e690899936327eb09709ab182ade
 151604 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 3c659044118e34603161457db9934a34f816d78b 53550e81e2cafe7c03a39526b95cd21b5194d9b1 2e3de6253422112ae43e608661ba94ea6b345694 3625b04991b4d6affadd99d377ab84bac48dfff4
 151607 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 3c659044118e34603161457db9934a34f816d78b 250bc43a406f7d46e319abe87c19548d4f027828 2e3de6253422112ae43e608661ba94ea6b345694 3371ced37ced359167b5a71abee2062854371323
 151611 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 e1d24410da356731da70b3334f86343e11e207d2 3c659044118e34603161457db9934a34f816d78b 470dd165d152ff7ceac61c7b71c2b89220b3aad7 2e3de6253422112ae43e608661ba94ea6b345694 6a49b9a7920c82015381740905582b666160d955
 151609 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3ee4f6cb360a877d171f2f9bb76b0d46d2cfa985 3c659044118e34603161457db9934a34f816d78b 9f1f264edbdf5516d6f208497310b3eedbc7b74c 2e3de6253422112ae43e608661ba94ea6b345694 2995d0afdf2d3fb44d07eada088db3613741db1e
 151612 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 567bc4b4ae8a975791382dd30ac413bc0d3ce88c 3c659044118e34603161457db9934a34f816d78b eea8f5df4ecc607d64f091b8d916fcc11a697541 2e3de6253422112ae43e608661ba94ea6b345694 2995d0afdf2d3fb44d07eada088db3613741db1e
 151614 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 6ff7c838d09224dd4e4c9b5b93152d8db1b19740 3c659044118e34603161457db9934a34f816d78b 86e8c353f705f14f2f2fd7a6195cefa431aa24d9 2e3de6253422112ae43e608661ba94ea6b345694 058023b343d4e366864831db46e9b438e9e3a178
 151640 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 568eee7cf319fa95183c8d3b5e8dcf6e078ab8b3 3c659044118e34603161457db9934a34f816d78b 6bb228190ef0b45669d285114cf8a280c55f4b39 2e3de6253422112ae43e608661ba94ea6b345694 1497e78068421d83956f8e82fb6e1bf1fc3b1199
 151615 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 567bc4b4ae8a975791382dd30ac413bc0d3ce88c 3c659044118e34603161457db9934a34f816d78b 3575b0aea983ad57804c9af739ed8ff7bc168393 2e3de6253422112ae43e608661ba94ea6b345694 b91825f628c9a62cf2a3a0d972ea81484a8b7fce
 151618 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 6ff7c838d09224dd4e4c9b5b93152d8db1b19740 3c659044118e34603161457db9934a34f816d78b 49ee11555262a256afec592dfed7c5902d5eefd2 2e3de6253422112ae43e608661ba94ea6b345694 726c78d14dfe6ec76f5e4c7756821a91f0a04b34
 151642 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 627d1d6693b0594d257dbe1a3363a8d4bd4d8307 3c659044118e34603161457db9934a34f816d78b eb6490f544388dd24c0d054a96dd304bc7284450 88ab0c15525ced2eefe39220742efe4769089ad8 be63d9d47f571a60d70f8fb630c03871312d9655
 151619 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 568eee7cf319fa95183c8d3b5e8dcf6e078ab8b3 3c659044118e34603161457db9934a34f816d78b 6bb228190ef0b45669d285114cf8a280c55f4b39 2e3de6253422112ae43e608661ba94ea6b345694 1497e78068421d83956f8e82fb6e1bf1fc3b1199
 151644 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 3c659044118e34603161457db9934a34f816d78b 81cb05732efb36971901c515b007869cc1d3a532 2e3de6253422112ae43e608661ba94ea6b345694 1251402caf8685f45d9d580f01583370f7e2d272
 151621 fail irrelevant
 151646 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 3c659044118e34603161457db9934a34f816d78b 75a6ed875ff0a2eb6b2971ae2098ed09963d7329 2e3de6253422112ae43e608661ba94ea6b345694 1251402caf8685f45d9d580f01583370f7e2d272
 151624 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8035edbe12f0f2a58e8fa9b06d05c8ee1c69ffae 3c659044118e34603161457db9934a34f816d78b 5d2f557b47dfbf8f23277a5bdd8473d4607c681a 2e3de6253422112ae43e608661ba94ea6b345694 51ca66c37371b10b378513af126646de22eddb17
 151626 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8035edbe12f0f2a58e8fa9b06d05c8ee1c69ffae 3c659044118e34603161457db9934a34f816d78b 7d2410cea154bf915fb30179ebda3b17ac36e70e 2e3de6253422112ae43e608661ba94ea6b345694 780aba2779b834f19b2a6f0dcdea0e7e0b5e1622
 151647 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 3c659044118e34603161457db9934a34f816d78b 589b1be07c060e583d9f758ff0cb10e0f1ff242f 2e3de6253422112ae43e608661ba94ea6b345694 1251402caf8685f45d9d580f01583370f7e2d272
 151627 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 bb78cfbec07eda45118b630a09b0af549b43a135 3c659044118e34603161457db9934a34f816d78b fe0fe4735e798578097758781166cc221319b93d 2e3de6253422112ae43e608661ba94ea6b345694 d9f58cd54fe2f05e1f05e2fe254684bd1840de8e
 151628 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ca407c7246bf405da6d9b1b9d93e5e7f17b4b1f9 3c659044118e34603161457db9934a34f816d78b 98d59d5dd8b662ba8ec7c522faa9b88823389711 2e3de6253422112ae43e608661ba94ea6b345694 e181db8ba4e0797b8f9b55996adfa71ffb5b4081
 151648 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 3c659044118e34603161457db9934a34f816d78b d6b78ac8ecf94f56dbfbecc23fb4365d8772a41a 2e3de6253422112ae43e608661ba94ea6b345694 1251402caf8685f45d9d580f01583370f7e2d272
 151629 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 6ff7c838d09224dd4e4c9b5b93152d8db1b19740 3c659044118e34603161457db9934a34f816d78b 49ee11555262a256afec592dfed7c5902d5eefd2 2e3de6253422112ae43e608661ba94ea6b345694 1a58d8dab52f241d52fec1d992d859b9632c4739
 151622 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 627d1d6693b0594d257dbe1a3363a8d4bd4d8307 3c659044118e34603161457db9934a34f816d78b 7b7515702012219410802a168ae4aa45b72a44df 88ab0c15525ced2eefe39220742efe4769089ad8 be63d9d47f571a60d70f8fb630c03871312d9655
 151631 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9af1064995d479df92e399a682ba7e4b2fc78415 3c659044118e34603161457db9934a34f816d78b 7d3660e79830a069f1848bb4fa1cdf8f666424fb 2e3de6253422112ae43e608661ba94ea6b345694 b91825f628c9a62cf2a3a0d972ea81484a8b7fce
 151649 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 3c659044118e34603161457db9934a34f816d78b 81cb05732efb36971901c515b007869cc1d3a532 2e3de6253422112ae43e608661ba94ea6b345694 1251402caf8685f45d9d580f01583370f7e2d272
 151632 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 568eee7cf319fa95183c8d3b5e8dcf6e078ab8b3 3c659044118e34603161457db9934a34f816d78b 6bb228190ef0b45669d285114cf8a280c55f4b39 2e3de6253422112ae43e608661ba94ea6b345694 1497e78068421d83956f8e82fb6e1bf1fc3b1199
 151650 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 3c659044118e34603161457db9934a34f816d78b d6b78ac8ecf94f56dbfbecc23fb4365d8772a41a 2e3de6253422112ae43e608661ba94ea6b345694 1251402caf8685f45d9d580f01583370f7e2d272
 151633 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 627d1d6693b0594d257dbe1a3363a8d4bd4d8307 3c659044118e34603161457db9934a34f816d78b 7b7515702012219410802a168ae4aa45b72a44df 88ab0c15525ced2eefe39220742efe4769089ad8 be63d9d47f571a60d70f8fb630c03871312d9655
 151636 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 3c659044118e34603161457db9934a34f816d78b db2322469a245eb9d9aa1c98747f6d595cca8f35 2e3de6253422112ae43e608661ba94ea6b345694 1251402caf8685f45d9d580f01583370f7e2d272
 151651 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 3c659044118e34603161457db9934a34f816d78b 81cb05732efb36971901c515b007869cc1d3a532 2e3de6253422112ae43e608661ba94ea6b345694 1251402caf8685f45d9d580f01583370f7e2d272
 151637 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 3c659044118e34603161457db9934a34f816d78b 9354eaaf16fdb98651574f131ff66ad974e50bba 2e3de6253422112ae43e608661ba94ea6b345694 1251402caf8685f45d9d580f01583370f7e2d272
 151639 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 3c659044118e34603161457db9934a34f816d78b 9940b2cfbc05cdffdf6b42227a80cb1e6d2a85c2 2e3de6253422112ae43e608661ba94ea6b345694 1251402caf8685f45d9d580f01583370f7e2d272
 151634 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 627d1d6693b0594d257dbe1a3363a8d4bd4d8307 3c659044118e34603161457db9934a34f816d78b eb6490f544388dd24c0d054a96dd304bc7284450 88ab0c15525ced2eefe39220742efe4769089ad8 be63d9d47f571a60d70f8fb630c03871312d9655
 151652 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 3c659044118e34603161457db9934a34f816d78b d6b78ac8ecf94f56dbfbecc23fb4365d8772a41a 2e3de6253422112ae43e608661ba94ea6b345694 1251402caf8685f45d9d580f01583370f7e2d272
 151645 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 627d1d6693b0594d257dbe1a3363a8d4bd4d8307 3c659044118e34603161457db9934a34f816d78b eb6490f544388dd24c0d054a96dd304bc7284450 88ab0c15525ced2eefe39220742efe4769089ad8 be63d9d47f571a60d70f8fb630c03871312d9655
 151654 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 3c659044118e34603161457db9934a34f816d78b 81cb05732efb36971901c515b007869cc1d3a532 2e3de6253422112ae43e608661ba94ea6b345694 1251402caf8685f45d9d580f01583370f7e2d272
Searching for interesting versions
 Result found: flight 150608 (pass), for basis pass
 Result found: flight 151634 (fail), for basis failure
 Repro found: flight 151640 (pass), for basis pass
 Repro found: flight 151642 (fail), for basis failure
 0 revisions at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 3c659044118e34603161457db9934a34f816d78b d6b78ac8ecf94f56dbfbecc23fb4365d8772a41a 2e3de6253422112ae43e608661ba94ea6b345694 1251402caf8685f45d9d580f01583370f7e2d272
No revisions left to test, checking graph state.
 Result found: flight 151648 (pass), for last pass
 Result found: flight 151649 (fail), for first failure
 Repro found: flight 151650 (pass), for last pass
 Repro found: flight 151651 (fail), for first failure
 Repro found: flight 151652 (pass), for last pass
 Repro found: flight 151654 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  qemuu git://git.qemu.org/qemu.git
  Bug introduced:  81cb05732efb36971901c515b007869cc1d3a532
  Bug not present: d6b78ac8ecf94f56dbfbecc23fb4365d8772a41a
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/151654/


  commit 81cb05732efb36971901c515b007869cc1d3a532
  Author: Markus Armbruster <armbru@redhat.com>
  Date:   Tue Jun 9 14:23:37 2020 +0200
  
      qdev: Assert devices are plugged into a bus that can take them
      
      This would have caught some of the bugs I just fixed.
      
      Signed-off-by: Markus Armbruster <armbru@redhat.com>
      Reviewed-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
      Message-Id: <20200609122339.937862-23-armbru@redhat.com>

dot: graph is too large for cairo-renderer bitmaps. Scaling by 0.635783 to fit
pnmtopng: 206 colors found
Revision graph left in /home/logs/results/bisect/qemu-mainline/test-amd64-amd64-xl-qemuu-ovmf-amd64.debian-hvm-install.{dot,ps,png,html,svg}.
----------------------------------------
151654: tolerable ALL FAIL

flight 151654 qemu-mainline real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/151654/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 test-amd64-amd64-xl-qemuu-ovmf-amd64 10 debian-hvm-install fail baseline untested


jobs:
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Sun Jul 05 23:05:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 05 Jul 2020 23:05:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jsDh3-0001xp-U5; Sun, 05 Jul 2020 23:05:05 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3tmk=AQ=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jsDh2-0001xV-3T
 for xen-devel@lists.xenproject.org; Sun, 05 Jul 2020 23:05:04 +0000
X-Inumbo-ID: effd4c3e-bf13-11ea-8c27-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id effd4c3e-bf13-11ea-8c27-12813bfff9fa;
 Sun, 05 Jul 2020 23:04:54 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=ERLTMYji/se25E6RSaY0lGg4VwNHnWmflYNgWPkQg0A=; b=rakWhWHE2jSvz7+QzCq/oL/1O
 UBZ7OcMheJKl+WawnRwL/3UV4GQ0fmCIq/zbOScAuNyFdnF5dTXPkq6iSCyGYCdzsOes5SeUabfE+
 o8kjBxDU/hUX8mxTWl1jxvqo4o5P5r4hdJEtOODdBQdNDQFD8vNF62msFw04wWUAj+b+Y=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jsDgr-0003zO-GS; Sun, 05 Jul 2020 23:04:53 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jsDgr-0007Um-8G; Sun, 05 Jul 2020 23:04:53 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jsDgr-0004vI-7a; Sun, 05 Jul 2020 23:04:53 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151645-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 151645: regressions - FAIL
X-Osstest-Failures: qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-amd64-libvirt-vhd:debian-di-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-libvirt-xsm:guest-start:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qcow2:debian-di-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:redhat-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-i386-libvirt-pair:guest-start/debian:fail:regression
 qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-freebsd10-i386:guest-start:fail:regression
 qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:redhat-install:fail:regression
 qemu-mainline:test-amd64-amd64-libvirt-xsm:guest-start:fail:regression
 qemu-mainline:test-amd64-amd64-libvirt:guest-start:fail:regression
 qemu-mainline:test-amd64-i386-libvirt:guest-start:fail:regression
 qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-start:fail:regression
 qemu-mainline:test-armhf-armhf-xl-vhd:debian-di-install:fail:regression
 qemu-mainline:test-amd64-amd64-libvirt-pair:guest-start/debian:fail:regression
 qemu-mainline:test-armhf-armhf-libvirt:guest-start:fail:regression
 qemu-mainline:test-arm64-arm64-libvirt-xsm:guest-start:fail:regression
 qemu-mainline:test-armhf-armhf-libvirt-raw:debian-di-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: qemuu=eb6490f544388dd24c0d054a96dd304bc7284450
X-Osstest-Versions-That: qemuu=9e3903136d9acde2fb2dd9e967ba928050a6cb4a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 05 Jul 2020 23:04:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151645 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151645/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-ovmf-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-win7-amd64 10 windows-install  fail REGR. vs. 151065
 test-amd64-amd64-libvirt-vhd 10 debian-di-install        fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-win7-amd64 10 windows-install   fail REGR. vs. 151065
 test-amd64-amd64-qemuu-nested-intel 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-libvirt-xsm  12 guest-start              fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-ws16-amd64 10 windows-install  fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-ovmf-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-qemuu-nested-amd 10 debian-hvm-install  fail REGR. vs. 151065
 test-amd64-amd64-xl-qcow2    10 debian-di-install        fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-qemuu-rhel6hvm-amd 10 redhat-install     fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-debianhvm-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-ws16-amd64 10 windows-install   fail REGR. vs. 151065
 test-amd64-i386-libvirt-pair 21 guest-start/debian       fail REGR. vs. 151065
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-freebsd10-i386 11 guest-start            fail REGR. vs. 151065
 test-amd64-i386-qemuu-rhel6hvm-intel 10 redhat-install   fail REGR. vs. 151065
 test-amd64-amd64-libvirt-xsm 12 guest-start              fail REGR. vs. 151065
 test-amd64-amd64-libvirt     12 guest-start              fail REGR. vs. 151065
 test-amd64-i386-libvirt      12 guest-start              fail REGR. vs. 151065
 test-amd64-i386-freebsd10-amd64 11 guest-start           fail REGR. vs. 151065
 test-armhf-armhf-xl-vhd      10 debian-di-install        fail REGR. vs. 151065
 test-amd64-amd64-libvirt-pair 21 guest-start/debian      fail REGR. vs. 151065
 test-armhf-armhf-libvirt     12 guest-start              fail REGR. vs. 151065
 test-arm64-arm64-libvirt-xsm 12 guest-start              fail REGR. vs. 151065
 test-armhf-armhf-libvirt-raw 10 debian-di-install        fail REGR. vs. 151065

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                eb6490f544388dd24c0d054a96dd304bc7284450
baseline version:
 qemuu                9e3903136d9acde2fb2dd9e967ba928050a6cb4a

Last test of basis   151065  2020-06-12 22:27:51 Z   23 days
Failing since        151101  2020-06-14 08:32:51 Z   21 days   26 attempts
Testing same since   151634  2020-07-05 00:36:29 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ahmed Karaman <ahmedkhaledkaraman@gmail.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Bulekov <alxndr@bu.edu>
  Alexey Krasikov <alex-krasikov@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Allan Peramaki <aperamak@pp1.inet.fi>
  Andrew Jones <drjones@redhat.com>
  Ani Sinha <ani.sinha@nutanix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Ard Biesheuvel <ardb@kernel.org>
  Artyom Tarasenko <atar4qemu@gmail.com>
  Aurelien Jarno <aurelien@aurel32.net>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Beata Michalska <beata.michalska@linaro.org>
  Bin Meng <bin.meng@windriver.com>
  Cathy Zhang <cathy.zhang@intel.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Claudio Fontana <cfontana@suse.de>
  Cleber Rosa <crosa@redhat.com>
  Colin Xu <colin.xu@intel.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniele Buono <dbuono@linux.vnet.ibm.com>
  David Carlier <devnexen@gmail.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Derek Su <dereksu@qnap.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Emilio G. Cota <cota@braap.org>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Evgeny Yakovlev <eyakovlev@virtuozzo.com>
  fangying <fangying1@huawei.com>
  Farhan Ali <alifm@linux.ibm.com>
  Finn Thain <fthain@telegraphics.com.au>
  Geoffrey McRae <geoff@hostfission.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Greg Kurz <groug@kaod.org>
  Guenter Roeck <linux@roeck-us.net>
  Gustavo Romero <gromero@linux.ibm.com>
  Halil Pasic <pasic@linux.ibm.com>
  Helge Deller <deller@gmx.de>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Ian Jiang <ianjiang.ict@gmail.com>
  Igor Mammedov <imammedo@redhat.com>
  Janne Grunau <j@jannau.net>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Wang <jasowang@redhat.com>
  Jay Zhou <jianjay.zhou@huawei.com>
  Jean-Christophe Dubois <jcd@tribudubois.net>
  Jessica Clarke <jrtc27@jrtc27.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jingqi Liu <jingqi.liu@intel.com>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Joseph Myers <joseph@codesourcery.com>
  Julio Faracco <jcfaracco@gmail.com>
  Keqian Zhu <zhukeqian1@huawei.com>
  Kevin Wolf <kwolf@redhat.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Leonid Bloch <lb.workbox@gmail.com>
  Leonid Bloch <lbloch@janustech.com>
  Li Qiang <liq3ea@gmail.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  lichun <lichun@ruijie.com.cn>
  Like Xu <like.xu@linux.intel.com>
  Lingfeng Yang <lfy@google.com>
  Liran Alon <liran.alon@oracle.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Lukas Straub <lukasstraub2@web.de>
  Maciej S. Szmigiero <maciej.szmigiero@oracle.com>
  Magnus Damm <magnus.damm@gmail.com>
  Mao Zhongyi <maozhongyi@cmss.chinamobile.com>
  Marcelo Tosatti <mtosatti@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Masahiro Yamada <masahiroy@kernel.org>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Michael S. Tsirkin <mst@redhat.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Raphael Norwitz <raphael.norwitz@nutanix.com>
  Richard Henderson <richard.henderson@linaro.org>
  Richard W.M. Jones <rjones@redhat.com>
  Robert Foley <robert.foley@linaro.org>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Roman Kagan <rkagan@virtuozzo.com>
  Roman Kagan <rvkagan@yandex-team.ru>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Sebastian Rasmussen <sebras@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Tao Xu <tao3.xu@intel.com>
  Thomas Huth <thuth@redhat.com>
  Tong Ho <tong.ho@xilinx.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  WangBowen <bowen.wang@intel.com>
  Wei Huang <wei.huang2@amd.com>
  Wei Wang <wei.w.wang@intel.com>
  Ying Fang <fangying1@huawei.com>
  Yoshinori Sato <ysato@users.sourceforge.jp>
  Yuri Benditovich <yuri.benditovich@daynix.com>
  Zhang Chen <chen.zhang@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 17819 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Jul 06 02:34:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jul 2020 02:34:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jsGxE-0004Zz-IG; Mon, 06 Jul 2020 02:34:00 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=m1N9=AR=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jsGxC-0004ZV-Jk
 for xen-devel@lists.xenproject.org; Mon, 06 Jul 2020 02:33:58 +0000
X-Inumbo-ID: 20d36934-bf31-11ea-bca7-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 20d36934-bf31-11ea-bca7-bc764e2007e4;
 Mon, 06 Jul 2020 02:33:51 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=aAUdU1kY3lLUwS3FjO6VPo70C8jMJDSw+YekF0m067o=; b=YX5vMQW9T0PqMYxF9gztXd+ST
 whEP96eIRVBzOwsf3o8Gv4PLsokPbtN5vqzYFeFidiLQjjTjz3Hc+U9cnW5Rr7cta3J5Qwq365B+4
 zCoXHnSFYjvm+uWlOerEAgeYnSHDnxkyswKqJGH0NvFJsb/tsmsABaimeFZyBpHbNHMns=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jsGx4-0001oC-Ty; Mon, 06 Jul 2020 02:33:50 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jsGx4-00087k-L8; Mon, 06 Jul 2020 02:33:50 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jsGx4-000460-KQ; Mon, 06 Jul 2020 02:33:50 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151653-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 151653: regressions - FAIL
X-Osstest-Failures: linux-linus:build-i386-pvops:kernel-build:fail:regression
 linux-linus:test-arm64-arm64-libvirt-xsm:guest-start/debian.repeat:fail:regression
 linux-linus:test-amd64-amd64-xl-rtds:guest-localmigrate:fail:allowable
 linux-linus:test-amd64-i386-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-examine:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-pair:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
X-Osstest-Versions-This: linux=bb5a93aaf25261321db0c499cde7da6ee9d8b164
X-Osstest-Versions-That: linux=1b5044021070efa3259f3e9548dc35d1eb6aa844
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 06 Jul 2020 02:33:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151653 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151653/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-pvops              6 kernel-build             fail REGR. vs. 151214
 test-arm64-arm64-libvirt-xsm 16 guest-start/debian.repeat fail REGR. vs. 151214

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     16 guest-localmigrate       fail REGR. vs. 151214

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemut-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-i386-examine       1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 151214
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 151214
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 151214
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 151214
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 151214
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 151214
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass

version targeted for testing:
 linux                bb5a93aaf25261321db0c499cde7da6ee9d8b164
baseline version:
 linux                1b5044021070efa3259f3e9548dc35d1eb6aa844

Last test of basis   151214  2020-06-18 02:27:46 Z   18 days
Failing since        151236  2020-06-19 19:10:35 Z   16 days   23 attempts
Testing same since   151653  2020-07-05 20:39:33 Z    0 days    1 attempts

------------------------------------------------------------
568 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             fail    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         blocked 
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 27478 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Jul 06 04:14:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jul 2020 04:14:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jsIVz-0004nX-4G; Mon, 06 Jul 2020 04:13:59 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NqVp=AQ=gmail.com=nieklinnenbank@srs-us1.protection.inumbo.net>)
 id 1jsAkD-0003Mp-5D
 for xen-devel@lists.xenproject.org; Sun, 05 Jul 2020 19:56:09 +0000
X-Inumbo-ID: 902a64f0-bef9-11ea-bca7-bc764e2007e4
Received: from mail-il1-x141.google.com (unknown [2607:f8b0:4864:20::141])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 902a64f0-bef9-11ea-bca7-bc764e2007e4;
 Sun, 05 Jul 2020 19:56:07 +0000 (UTC)
Received: by mail-il1-x141.google.com with SMTP id t18so11275007ilh.2
 for <xen-devel@lists.xenproject.org>; Sun, 05 Jul 2020 12:56:06 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc; bh=kQ6l8az03kehxbkrP0p2iddBUqHaPLMKVPPFuTfHu9U=;
 b=kWtw4Dby/XYYuLXEnjh8R367jJFb2MdhvmUNhliOCOidwjq5kztJFVOEv4/R6vhWeC
 RY7gAm4KacL28+mwTTN5KpjoSdAo8vAjWE3civZNTdkbNc1OwRWFgI58VkcmUBPnwTeg
 VhVjkU+933Xy3DQPz37vhwqc6XpqR4/JEPHHwOiCcGP6IQ4b5r/zWA+E0QCxmgfpfKAP
 FjHH57TQKb3R/k2etDT0LHHlvwT02H37/vDecDoLgw7NpwRbhqCnns2Zh2ZdMB9ml7Cj
 epxlcJkXBVkK5YxWhnLxJGl3M8QLz5pFd//ugEQWgu7LWVPk9KIhC/8yCC1piehUBCVP
 7tQQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc;
 bh=kQ6l8az03kehxbkrP0p2iddBUqHaPLMKVPPFuTfHu9U=;
 b=rAPYWQKZ//EuG95xB4QlBsWlNdq5qDEna/ViHcwJuMMuY+v8Qmz+ATl/EVhbLlVxiS
 1U3RQ+/E13hQ3OEG0BngwEkwTLdBvQIhZHLFbO9rDs1j83n3OFCBJJaC8OKK0HZ/7PMT
 zrYbA1KBefvLKYm34tAYBieOn757ZB9wTUNgOua+TrJFLSJT0kt2c7Sv1xg6NKjlByI2
 cyD349HDhJjwJuxek2UvCuQSYEGLj5Lwa9coSaBmPUmtY6wBOAywrXHCQpK1q5kPrRE+
 jSzLPXKCTgvCkOoAcP+dMC0KYxpCAYqq0P4zor0BEPKG4l81FL+HRLAVipDCthrsWSus
 4qFQ==
X-Gm-Message-State: AOAM531RNIq2fzTvPsPPBhDpgR1hR5b3F81kxQZqWociz8ebNnz+ucrA
 oQKWEEO6gMYkZARKgTNU9/CMuugRnDw597aGy4I=
X-Google-Smtp-Source: ABdhPJx+ekJQHdSYhIZ8qbbqu+IUSLrvRvQOkodCUlCfQT65+PH7H0ICsQP//H8L6oiWIEw/oLgCAVEgTfwr7D2Bfgw=
X-Received: by 2002:a92:844b:: with SMTP id l72mr28524731ild.19.1593978966003; 
 Sun, 05 Jul 2020 12:56:06 -0700 (PDT)
MIME-Version: 1.0
References: <20200704144943.18292-1-f4bug@amsat.org>
 <20200704144943.18292-23-f4bug@amsat.org>
In-Reply-To: <20200704144943.18292-23-f4bug@amsat.org>
From: Niek Linnenbank <nieklinnenbank@gmail.com>
Date: Sun, 5 Jul 2020 21:55:57 +0200
Message-ID: <CAPan3WpZ_SCGws05S2sH9jf4MYjciE0kgpeqrDSviGTpcaj_+Q@mail.gmail.com>
Subject: Re: [PATCH 22/26] hw/usb/usb-hcd: Use OHCI type definitions
To: =?UTF-8?Q?Philippe_Mathieu=2DDaud=C3=A9?= <f4bug@amsat.org>
Content-Type: multipart/alternative; boundary="0000000000000c5dca05a9b726c6"
X-Mailman-Approved-At: Mon, 06 Jul 2020 04:13:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Peter Maydell <peter.maydell@linaro.org>,
 "Michael S. Tsirkin" <mst@redhat.com>,
 Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>,
 QEMU Developers <qemu-devel@nongnu.org>, Jiaxun Yang <jiaxun.yang@flygoat.com>,
 BALATON Zoltan <balaton@eik.bme.hu>, Gerd Hoffmann <kraxel@redhat.com>,
 "Edgar E. Iglesias" <edgar.iglesias@gmail.com>,
 Huacai Chen <chenhc@lemote.com>, Stefano Stabellini <sstabellini@kernel.org>,
 xen-devel@lists.xenproject.org, Yoshinori Sato <ysato@users.sourceforge.jp>,
 Paul Durrant <paul@xen.org>, Magnus Damm <magnus.damm@gmail.com>,
 Markus Armbruster <armbru@redhat.com>,
 =?UTF-8?Q?Herv=C3=A9_Poussineau?= <hpoussin@reactos.org>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 Anthony Perard <anthony.perard@citrix.com>,
 Samuel Thibault <samuel.thibault@ens-lyon.org>,
 Leif Lindholm <leif@nuviainc.com>, Andrzej Zaborowski <balrogg@gmail.com>,
 Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
 Eduardo Habkost <ehabkost@redhat.com>,
 Alistair Francis <alistair@alistair23.me>,
 "Dr. David Alan Gilbert" <dgilbert@redhat.com>,
 Beniamino Galvani <b.galvani@gmail.com>, qemu-arm@nongnu.org,
 =?UTF-8?B?TWFyYy1BbmRyw6kgTHVyZWF1?= <marcandre.lureau@redhat.com>,
 Richard Henderson <rth@twiddle.net>,
 Radoslaw Biernacki <radoslaw.biernacki@linaro.org>,
 Igor Mitsyanko <i.mitsyanko@gmail.com>,
 =?UTF-8?Q?Philippe_Mathieu=2DDaud=C3=A9?= <philmd@redhat.com>,
 Paul Zimmerman <pauldzim@gmail.com>, qemu-ppc@nongnu.org,
 David Gibson <david@gibson.dropbear.id.au>,
 Paolo Bonzini <pbonzini@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

--0000000000000c5dca05a9b726c6
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Sat, Jul 4, 2020, 16:50 Philippe Mathieu-Daudé <f4bug@amsat.org> wrote:

> Various machine/board/soc models create OHCI device instances
> with the generic QDEV API, and don't need to access USB internals.
>
> Simplify header inclusions by moving the QOM type names into a
> simple header, with no need to include other "hw/usb" headers.
>
> Suggested-by: BALATON Zoltan <balaton@eik.bme.hu>
> Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
>
Reviewed-by: Niek Linnenbank <nieklinnenbank@gmail.com>

---
>  hw/usb/hcd-ohci.h        |  2 +-
>  include/hw/usb/usb-hcd.h | 16 ++++++++++++++++
>  hw/arm/allwinner-a10.c   |  2 +-
>  hw/arm/allwinner-h3.c    |  9 +++++----
>  hw/arm/pxa2xx.c          |  3 ++-
>  hw/arm/realview.c        |  3 ++-
>  hw/arm/versatilepb.c     |  3 ++-
>  hw/display/sm501.c       |  3 ++-
>  hw/ppc/mac_newworld.c    |  3 ++-
>  hw/ppc/mac_oldworld.c    |  3 ++-
>  hw/ppc/sam460ex.c        |  3 ++-
>  hw/ppc/spapr.c           |  3 ++-
>  hw/usb/hcd-ohci-pci.c    |  2 +-
>  13 files changed, 40 insertions(+), 15 deletions(-)
>  create mode 100644 include/hw/usb/usb-hcd.h
>
> diff --git a/hw/usb/hcd-ohci.h b/hw/usb/hcd-ohci.h
> index 771927ea17..6949cf0dab 100644
> --- a/hw/usb/hcd-ohci.h
> +++ b/hw/usb/hcd-ohci.h
> @@ -21,6 +21,7 @@
>  #ifndef HCD_OHCI_H
>  #define HCD_OHCI_H
>
> +#include "hw/usb/usb-hcd.h"
>  #include "sysemu/dma.h"
>  #include "usb-internal.h"
>
> @@ -91,7 +92,6 @@ typedef struct OHCIState {
>      void (*ohci_die)(struct OHCIState *ohci);
>  } OHCIState;
>
> -#define TYPE_SYSBUS_OHCI "sysbus-ohci"
>  #define SYSBUS_OHCI(obj) OBJECT_CHECK(OHCISysBusState, (obj), TYPE_SYSBUS_OHCI)
>
>  typedef struct {
> diff --git a/include/hw/usb/usb-hcd.h b/include/hw/usb/usb-hcd.h
> new file mode 100644
> index 0000000000..21fdfaf22d
> --- /dev/null
> +++ b/include/hw/usb/usb-hcd.h
> @@ -0,0 +1,16 @@
> +/*
> + * QEMU USB HCD types
> + *
> + * Copyright (c) 2020  Philippe Mathieu-Daudé <f4bug@amsat.org>
> + *
> + * SPDX-License-Identifier: GPL-2.0-or-later
> + */
> +
> +#ifndef HW_USB_HCD_TYPES_H
> +#define HW_USB_HCD_TYPES_H
> +
> +/* OHCI */
> +#define TYPE_SYSBUS_OHCI            "sysbus-ohci"
> +#define TYPE_PCI_OHCI               "pci-ohci"
> +
> +#endif
> diff --git a/hw/arm/allwinner-a10.c b/hw/arm/allwinner-a10.c
> index 52e0d83760..53c24ff602 100644
> --- a/hw/arm/allwinner-a10.c
> +++ b/hw/arm/allwinner-a10.c
> @@ -25,7 +25,7 @@
>  #include "hw/misc/unimp.h"
>  #include "sysemu/sysemu.h"
>  #include "hw/boards.h"
> -#include "hw/usb/hcd-ohci.h"
> +#include "hw/usb/usb-hcd.h"
>
>  #define AW_A10_MMC0_BASE        0x01c0f000
>  #define AW_A10_PIC_REG_BASE     0x01c20400
> diff --git a/hw/arm/allwinner-h3.c b/hw/arm/allwinner-h3.c
> index 8e09468e86..d1d90ffa79 100644
> --- a/hw/arm/allwinner-h3.c
> +++ b/hw/arm/allwinner-h3.c
> @@ -28,6 +28,7 @@
>  #include "hw/sysbus.h"
>  #include "hw/char/serial.h"
>  #include "hw/misc/unimp.h"
> +#include "hw/usb/usb-hcd.h"
>  #include "hw/usb/hcd-ehci.h"
>  #include "hw/loader.h"
>  #include "sysemu/sysemu.h"
> @@ -381,16 +382,16 @@ static void allwinner_h3_realize(DeviceState *dev, Error **errp)
>                           qdev_get_gpio_in(DEVICE(&s->gic),
>                                            AW_H3_GIC_SPI_EHCI3));
>
> -    sysbus_create_simple("sysbus-ohci", s->memmap[AW_H3_OHCI0],
> +    sysbus_create_simple(TYPE_SYSBUS_OHCI, s->memmap[AW_H3_OHCI0],
>                           qdev_get_gpio_in(DEVICE(&s->gic),
>                                            AW_H3_GIC_SPI_OHCI0));
> -    sysbus_create_simple("sysbus-ohci", s->memmap[AW_H3_OHCI1],
> +    sysbus_create_simple(TYPE_SYSBUS_OHCI, s->memmap[AW_H3_OHCI1],
>                           qdev_get_gpio_in(DEVICE(&s->gic),
>                                            AW_H3_GIC_SPI_OHCI1));
> -    sysbus_create_simple("sysbus-ohci", s->memmap[AW_H3_OHCI2],
> +    sysbus_create_simple(TYPE_SYSBUS_OHCI, s->memmap[AW_H3_OHCI2],
>                           qdev_get_gpio_in(DEVICE(&s->gic),
>                                            AW_H3_GIC_SPI_OHCI2));
> -    sysbus_create_simple("sysbus-ohci", s->memmap[AW_H3_OHCI3],
> +    sysbus_create_simple(TYPE_SYSBUS_OHCI, s->memmap[AW_H3_OHCI3],
>                           qdev_get_gpio_in(DEVICE(&s->gic),
>                                            AW_H3_GIC_SPI_OHCI3));
>
> diff --git a/hw/arm/pxa2xx.c b/hw/arm/pxa2xx.c
> index f104a33463..27196170f5 100644
> --- a/hw/arm/pxa2xx.c
> +++ b/hw/arm/pxa2xx.c
> @@ -18,6 +18,7 @@
>  #include "hw/arm/pxa.h"
>  #include "sysemu/sysemu.h"
>  #include "hw/char/serial.h"
> +#include "hw/usb/usb-hcd.h"
>  #include "hw/i2c/i2c.h"
>  #include "hw/irq.h"
>  #include "hw/qdev-properties.h"
> @@ -2196,7 +2197,7 @@ PXA2xxState *pxa270_init(MemoryRegion *address_space,
>          s->ssp[i] = (SSIBus *)qdev_get_child_bus(dev, "ssi");
>      }
>
> -    sysbus_create_simple("sysbus-ohci", 0x4c000000,
> +    sysbus_create_simple(TYPE_SYSBUS_OHCI, 0x4c000000,
>                           qdev_get_gpio_in(s->pic, PXA2XX_PIC_USBH1));
>
>      s->pcmcia[0] = pxa2xx_pcmcia_init(address_space, 0x20000000);
> diff --git a/hw/arm/realview.c b/hw/arm/realview.c
> index b6c0a1adb9..0aa34bd4c2 100644
> --- a/hw/arm/realview.c
> +++ b/hw/arm/realview.c
> @@ -16,6 +16,7 @@
>  #include "hw/net/lan9118.h"
>  #include "hw/net/smc91c111.h"
>  #include "hw/pci/pci.h"
> +#include "hw/usb/usb-hcd.h"
>  #include "net/net.h"
>  #include "sysemu/sysemu.h"
>  #include "hw/boards.h"
> @@ -256,7 +257,7 @@ static void realview_init(MachineState *machine,
>          sysbus_connect_irq(busdev, 3, pic[51]);
>          pci_bus = (PCIBus *)qdev_get_child_bus(dev, "pci");
>          if (machine_usb(machine)) {
> -            pci_create_simple(pci_bus, -1, "pci-ohci");
> +            pci_create_simple(pci_bus, -1, TYPE_PCI_OHCI);
>          }
>          n = drive_get_max_bus(IF_SCSI);
>          while (n >= 0) {
> diff --git a/hw/arm/versatilepb.c b/hw/arm/versatilepb.c
> index e596b8170f..3e6224dc96 100644
> --- a/hw/arm/versatilepb.c
> +++ b/hw/arm/versatilepb.c
> @@ -17,6 +17,7 @@
>  #include "net/net.h"
>  #include "sysemu/sysemu.h"
>  #include "hw/pci/pci.h"
> +#include "hw/usb/usb-hcd.h"
>  #include "hw/i2c/i2c.h"
>  #include "hw/i2c/arm_sbcon_i2c.h"
>  #include "hw/irq.h"
> @@ -273,7 +274,7 @@ static void versatile_init(MachineState *machine, int board_id)
>          }
>      }
>      if (machine_usb(machine)) {
> -        pci_create_simple(pci_bus, -1, "pci-ohci");
> +        pci_create_simple(pci_bus, -1, TYPE_PCI_OHCI);
>      }
>      n = drive_get_max_bus(IF_SCSI);
>      while (n >= 0) {
> diff --git a/hw/display/sm501.c b/hw/display/sm501.c
> index 9cccc68c35..5f076c841f 100644
> --- a/hw/display/sm501.c
> +++ b/hw/display/sm501.c
> @@ -33,6 +33,7 @@
>  #include "hw/sysbus.h"
>  #include "migration/vmstate.h"
>  #include "hw/pci/pci.h"
> +#include "hw/usb/usb-hcd.h"
>  #include "hw/qdev-properties.h"
>  #include "hw/i2c/i2c.h"
>  #include "hw/display/i2c-ddc.h"
> @@ -1961,7 +1962,7 @@ static void sm501_realize_sysbus(DeviceState *dev, Error **errp)
>      sysbus_init_mmio(sbd, &s->state.mmio_region);
>
>      /* bridge to usb host emulation module */
>      usb_dev = qdev_new("sysbus-ohci");
> +    usb_dev =3D qdev_new(TYPE_SYSBUS_OHCI);
>      qdev_prop_set_uint32(usb_dev, "num-ports", 2);
>      qdev_prop_set_uint64(usb_dev, "dma-offset", s->base);
>      sysbus_realize_and_unref(SYS_BUS_DEVICE(usb_dev), &error_fatal);
> diff --git a/hw/ppc/mac_newworld.c b/hw/ppc/mac_newworld.c
> index 7bf69f4a1f..3c32c1831b 100644
> --- a/hw/ppc/mac_newworld.c
> +++ b/hw/ppc/mac_newworld.c
> @@ -55,6 +55,7 @@
>  #include "hw/input/adb.h"
>  #include "hw/ppc/mac_dbdma.h"
>  #include "hw/pci/pci.h"
> +#include "hw/usb/usb-hcd.h"
>  #include "net/net.h"
>  #include "sysemu/sysemu.h"
>  #include "hw/boards.h"
> @@ -411,7 +412,7 @@ static void ppc_core99_init(MachineState *machine)
>      }
>
>      if (machine->usb) {
> -        pci_create_simple(pci_bus, -1, "pci-ohci");
> +        pci_create_simple(pci_bus, -1, TYPE_PCI_OHCI);
>
>          /* U3 needs to use USB for input because Linux doesn't support via-cuda
>          on PPC64 */
> diff --git a/hw/ppc/mac_oldworld.c b/hw/ppc/mac_oldworld.c
> index f8c204ead7..a429a3e1df 100644
> --- a/hw/ppc/mac_oldworld.c
> +++ b/hw/ppc/mac_oldworld.c
> @@ -37,6 +37,7 @@
>  #include "hw/isa/isa.h"
>  #include "hw/pci/pci.h"
>  #include "hw/pci/pci_host.h"
> +#include "hw/usb/usb-hcd.h"
>  #include "hw/boards.h"
>  #include "hw/nvram/fw_cfg.h"
>  #include "hw/char/escc.h"
> @@ -301,7 +302,7 @@ static void ppc_heathrow_init(MachineState *machine)
>      qdev_realize_and_unref(dev, adb_bus, &error_fatal);
>
>      if (machine_usb(machine)) {
> -        pci_create_simple(pci_bus, -1, "pci-ohci");
> +        pci_create_simple(pci_bus, -1, TYPE_PCI_OHCI);
>      }
>
>      if (graphic_depth != 15 && graphic_depth != 32 && graphic_depth != 8)
> diff --git a/hw/ppc/sam460ex.c b/hw/ppc/sam460ex.c
> index 781b45e14b..ac60d17a86 100644
> --- a/hw/ppc/sam460ex.c
> +++ b/hw/ppc/sam460ex.c
> @@ -36,6 +36,7 @@
>  #include "hw/i2c/ppc4xx_i2c.h"
>  #include "hw/i2c/smbus_eeprom.h"
>  #include "hw/usb/usb.h"
> +#include "hw/usb/usb-hcd.h"
>  #include "hw/usb/hcd-ehci.h"
>  #include "hw/ppc/fdt.h"
>  #include "hw/qdev-properties.h"
> @@ -372,7 +373,7 @@ static void sam460ex_init(MachineState *machine)
>
>      /* USB */
>      sysbus_create_simple(TYPE_PPC4xx_EHCI, 0x4bffd0400, uic[2][29]);
> -    dev = qdev_new("sysbus-ohci");
> +    dev = qdev_new(TYPE_SYSBUS_OHCI);
>      qdev_prop_set_string(dev, "masterbus", "usb-bus.0");
>      qdev_prop_set_uint32(dev, "num-ports", 6);
>      sbdev = SYS_BUS_DEVICE(dev);
> diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
> index 0c0409077f..db1706a66c 100644
> --- a/hw/ppc/spapr.c
> +++ b/hw/ppc/spapr.c
> @@ -71,6 +71,7 @@
>  #include "exec/address-spaces.h"
>  #include "exec/ram_addr.h"
>  #include "hw/usb/usb.h"
> +#include "hw/usb/usb-hcd.h"
>  #include "qemu/config-file.h"
>  #include "qemu/error-report.h"
>  #include "trace.h"
> @@ -2958,7 +2959,7 @@ static void spapr_machine_init(MachineState *machine)
>
>      if (machine->usb) {
>          if (smc->use_ohci_by_default) {
> -            pci_create_simple(phb->bus, -1, "pci-ohci");
> +            pci_create_simple(phb->bus, -1, TYPE_PCI_OHCI);
>          } else {
>              pci_create_simple(phb->bus, -1, "nec-usb-xhci");
>          }
> diff --git a/hw/usb/hcd-ohci-pci.c b/hw/usb/hcd-ohci-pci.c
> index cb6bc55f59..14df83ec2e 100644
> --- a/hw/usb/hcd-ohci-pci.c
> +++ b/hw/usb/hcd-ohci-pci.c
> @@ -29,8 +29,8 @@
>  #include "trace.h"
>  #include "hcd-ohci.h"
>  #include "usb-internal.h"
> +#include "hw/usb/usb-hcd.h"
>
> -#define TYPE_PCI_OHCI "pci-ohci"
>  #define PCI_OHCI(obj) OBJECT_CHECK(OHCIPCIState, (obj), TYPE_PCI_OHCI)
>
>  typedef struct {
> --
> 2.21.3
>
>

+#include &quot;hw/usb/usb-hcd.h&quot;<br>
<br>
=C2=A0#define AW_A10_MMC0_BASE=C2=A0 =C2=A0 =C2=A0 =C2=A0 0x01c0f000<br>
=C2=A0#define AW_A10_PIC_REG_BASE=C2=A0 =C2=A0 =C2=A00x01c20400<br>
diff --git a/hw/arm/allwinner-h3.c b/hw/arm/allwinner-h3.c<br>
index 8e09468e86..d1d90ffa79 100644<br>
--- a/hw/arm/allwinner-h3.c<br>
+++ b/hw/arm/allwinner-h3.c<br>
@@ -28,6 +28,7 @@<br>
=C2=A0#include &quot;hw/sysbus.h&quot;<br>
=C2=A0#include &quot;hw/char/serial.h&quot;<br>
=C2=A0#include &quot;hw/misc/unimp.h&quot;<br>
+#include &quot;hw/usb/usb-hcd.h&quot;<br>
=C2=A0#include &quot;hw/usb/hcd-ehci.h&quot;<br>
=C2=A0#include &quot;hw/loader.h&quot;<br>
=C2=A0#include &quot;sysemu/sysemu.h&quot;<br>
@@ -381,16 +382,16 @@ static void allwinner_h3_realize(DeviceState *dev, Er=
ror **errp)<br>
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=
=A0 =C2=A0 =C2=A0 qdev_get_gpio_in(DEVICE(&amp;s-&gt;gic),<br>
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=
=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =
=C2=A0AW_H3_GIC_SPI_EHCI3));<br>
<br>
-=C2=A0 =C2=A0 sysbus_create_simple(&quot;sysbus-ohci&quot;, s-&gt;memmap[A=
W_H3_OHCI0],<br>
+=C2=A0 =C2=A0 sysbus_create_simple(TYPE_SYSBUS_OHCI, s-&gt;memmap[AW_H3_OH=
CI0],<br>
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=
=A0 =C2=A0 =C2=A0 qdev_get_gpio_in(DEVICE(&amp;s-&gt;gic),<br>
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=
=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =
=C2=A0AW_H3_GIC_SPI_OHCI0));<br>
-=C2=A0 =C2=A0 sysbus_create_simple(&quot;sysbus-ohci&quot;, s-&gt;memmap[A=
W_H3_OHCI1],<br>
+=C2=A0 =C2=A0 sysbus_create_simple(TYPE_SYSBUS_OHCI, s-&gt;memmap[AW_H3_OH=
CI1],<br>
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=
=A0 =C2=A0 =C2=A0 qdev_get_gpio_in(DEVICE(&amp;s-&gt;gic),<br>
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=
=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =
=C2=A0AW_H3_GIC_SPI_OHCI1));<br>
-=C2=A0 =C2=A0 sysbus_create_simple(&quot;sysbus-ohci&quot;, s-&gt;memmap[A=
W_H3_OHCI2],<br>
+=C2=A0 =C2=A0 sysbus_create_simple(TYPE_SYSBUS_OHCI, s-&gt;memmap[AW_H3_OH=
CI2],<br>
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=
=A0 =C2=A0 =C2=A0 qdev_get_gpio_in(DEVICE(&amp;s-&gt;gic),<br>
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=
=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =
=C2=A0AW_H3_GIC_SPI_OHCI2));<br>
-=C2=A0 =C2=A0 sysbus_create_simple(&quot;sysbus-ohci&quot;, s-&gt;memmap[A=
W_H3_OHCI3],<br>
+=C2=A0 =C2=A0 sysbus_create_simple(TYPE_SYSBUS_OHCI, s-&gt;memmap[AW_H3_OH=
CI3],<br>
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=
=A0 =C2=A0 =C2=A0 qdev_get_gpio_in(DEVICE(&amp;s-&gt;gic),<br>
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=
=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =
=C2=A0AW_H3_GIC_SPI_OHCI3));<br>
<br>
diff --git a/hw/arm/pxa2xx.c b/hw/arm/pxa2xx.c<br>
index f104a33463..27196170f5 100644<br>
--- a/hw/arm/pxa2xx.c<br>
+++ b/hw/arm/pxa2xx.c<br>
@@ -18,6 +18,7 @@<br>
=C2=A0#include &quot;hw/arm/pxa.h&quot;<br>
=C2=A0#include &quot;sysemu/sysemu.h&quot;<br>
=C2=A0#include &quot;hw/char/serial.h&quot;<br>
+#include &quot;hw/usb/usb-hcd.h&quot;<br>
=C2=A0#include &quot;hw/i2c/i2c.h&quot;<br>
=C2=A0#include &quot;hw/irq.h&quot;<br>
=C2=A0#include &quot;hw/qdev-properties.h&quot;<br>
@@ -2196,7 +2197,7 @@ PXA2xxState *pxa270_init(MemoryRegion *address_space,=
<br>
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0s-&gt;ssp[i] =3D (SSIBus *)qdev_get_child=
_bus(dev, &quot;ssi&quot;);<br>
=C2=A0 =C2=A0 =C2=A0}<br>
<br>
-=C2=A0 =C2=A0 sysbus_create_simple(&quot;sysbus-ohci&quot;, 0x4c000000,<br=
>
+=C2=A0 =C2=A0 sysbus_create_simple(TYPE_SYSBUS_OHCI, 0x4c000000,<br>
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=
=A0 =C2=A0 =C2=A0 qdev_get_gpio_in(s-&gt;pic, PXA2XX_PIC_USBH1));<br>
<br>
=C2=A0 =C2=A0 =C2=A0s-&gt;pcmcia[0] =3D pxa2xx_pcmcia_init(address_space, 0=
x20000000);<br>
diff --git a/hw/arm/realview.c b/hw/arm/realview.c<br>
index b6c0a1adb9..0aa34bd4c2 100644<br>
--- a/hw/arm/realview.c<br>
+++ b/hw/arm/realview.c<br>
@@ -16,6 +16,7 @@<br>
=C2=A0#include &quot;hw/net/lan9118.h&quot;<br>
=C2=A0#include &quot;hw/net/smc91c111.h&quot;<br>
=C2=A0#include &quot;hw/pci/pci.h&quot;<br>
+#include &quot;hw/usb/usb-hcd.h&quot;<br>
=C2=A0#include &quot;net/net.h&quot;<br>
=C2=A0#include &quot;sysemu/sysemu.h&quot;<br>
=C2=A0#include &quot;hw/boards.h&quot;<br>
@@ -256,7 +257,7 @@ static void realview_init(MachineState *machine,<br>
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0sysbus_connect_irq(busdev, 3, pic[51]);<b=
r>
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0pci_bus =3D (PCIBus *)qdev_get_child_bus(=
dev, &quot;pci&quot;);<br>
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0if (machine_usb(machine)) {<br>
-=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 pci_create_simple(pci_bus, -1, &=
quot;pci-ohci&quot;);<br>
+=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 pci_create_simple(pci_bus, -1, T=
YPE_PCI_OHCI);<br>
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0}<br>
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0n =3D drive_get_max_bus(IF_SCSI);<br>
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0while (n &gt;=3D 0) {<br>
diff --git a/hw/arm/versatilepb.c b/hw/arm/versatilepb.c<br>
index e596b8170f..3e6224dc96 100644<br>
--- a/hw/arm/versatilepb.c<br>
+++ b/hw/arm/versatilepb.c<br>
@@ -17,6 +17,7 @@<br>
=C2=A0#include &quot;net/net.h&quot;<br>
=C2=A0#include &quot;sysemu/sysemu.h&quot;<br>
=C2=A0#include &quot;hw/pci/pci.h&quot;<br>
+#include &quot;hw/usb/usb-hcd.h&quot;<br>
=C2=A0#include &quot;hw/i2c/i2c.h&quot;<br>
=C2=A0#include &quot;hw/i2c/arm_sbcon_i2c.h&quot;<br>
=C2=A0#include &quot;hw/irq.h&quot;<br>
@@ -273,7 +274,7 @@ static void versatile_init(MachineState *machine, int b=
oard_id)<br>
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0}<br>
=C2=A0 =C2=A0 =C2=A0}<br>
=C2=A0 =C2=A0 =C2=A0if (machine_usb(machine)) {<br>
-=C2=A0 =C2=A0 =C2=A0 =C2=A0 pci_create_simple(pci_bus, -1, &quot;pci-ohci&=
quot;);<br>
+=C2=A0 =C2=A0 =C2=A0 =C2=A0 pci_create_simple(pci_bus, -1, TYPE_PCI_OHCI);=
<br>
=C2=A0 =C2=A0 =C2=A0}<br>
=C2=A0 =C2=A0 =C2=A0n =3D drive_get_max_bus(IF_SCSI);<br>
=C2=A0 =C2=A0 =C2=A0while (n &gt;=3D 0) {<br>
diff --git a/hw/display/sm501.c b/hw/display/sm501.c<br>
index 9cccc68c35..5f076c841f 100644<br>
--- a/hw/display/sm501.c<br>
+++ b/hw/display/sm501.c<br>
@@ -33,6 +33,7 @@<br>
=C2=A0#include &quot;hw/sysbus.h&quot;<br>
=C2=A0#include &quot;migration/vmstate.h&quot;<br>
=C2=A0#include &quot;hw/pci/pci.h&quot;<br>
+#include &quot;hw/usb/usb-hcd.h&quot;<br>
=C2=A0#include &quot;hw/qdev-properties.h&quot;<br>
=C2=A0#include &quot;hw/i2c/i2c.h&quot;<br>
=C2=A0#include &quot;hw/display/i2c-ddc.h&quot;<br>
@@ -1961,7 +1962,7 @@ static void sm501_realize_sysbus(DeviceState *dev, Er=
ror **errp)<br>
=C2=A0 =C2=A0 =C2=A0sysbus_init_mmio(sbd, &amp;s-&gt;state.mmio_region);<br=
>
<br>
=C2=A0 =C2=A0 =C2=A0/* bridge to usb host emulation module */<br>
-=C2=A0 =C2=A0 usb_dev =3D qdev_new(&quot;sysbus-ohci&quot;);<br>
+=C2=A0 =C2=A0 usb_dev =3D qdev_new(TYPE_SYSBUS_OHCI);<br>
=C2=A0 =C2=A0 =C2=A0qdev_prop_set_uint32(usb_dev, &quot;num-ports&quot;, 2)=
;<br>
=C2=A0 =C2=A0 =C2=A0qdev_prop_set_uint64(usb_dev, &quot;dma-offset&quot;, s=
-&gt;base);<br>
=C2=A0 =C2=A0 =C2=A0sysbus_realize_and_unref(SYS_BUS_DEVICE(usb_dev), &amp;=
error_fatal);<br>
diff --git a/hw/ppc/mac_newworld.c b/hw/ppc/mac_newworld.c<br>
index 7bf69f4a1f..3c32c1831b 100644<br>
--- a/hw/ppc/mac_newworld.c<br>
+++ b/hw/ppc/mac_newworld.c<br>
@@ -55,6 +55,7 @@<br>
=C2=A0#include &quot;hw/input/adb.h&quot;<br>
=C2=A0#include &quot;hw/ppc/mac_dbdma.h&quot;<br>
=C2=A0#include &quot;hw/pci/pci.h&quot;<br>
+#include &quot;hw/usb/usb-hcd.h&quot;<br>
=C2=A0#include &quot;net/net.h&quot;<br>
=C2=A0#include &quot;sysemu/sysemu.h&quot;<br>
=C2=A0#include &quot;hw/boards.h&quot;<br>
@@ -411,7 +412,7 @@ static void ppc_core99_init(MachineState *machine)<br>
=C2=A0 =C2=A0 =C2=A0}<br>
<br>
=C2=A0 =C2=A0 =C2=A0if (machine-&gt;usb) {<br>
-=C2=A0 =C2=A0 =C2=A0 =C2=A0 pci_create_simple(pci_bus, -1, &quot;pci-ohci&=
quot;);<br>
+=C2=A0 =C2=A0 =C2=A0 =C2=A0 pci_create_simple(pci_bus, -1, TYPE_PCI_OHCI);=
<br>
<br>
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0/* U3 needs to use USB for input because =
Linux doesn&#39;t support via-cuda<br>
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0on PPC64 */<br>
diff --git a/hw/ppc/mac_oldworld.c b/hw/ppc/mac_oldworld.c<br>
index f8c204ead7..a429a3e1df 100644<br>
--- a/hw/ppc/mac_oldworld.c<br>
+++ b/hw/ppc/mac_oldworld.c<br>
@@ -37,6 +37,7 @@<br>
=C2=A0#include &quot;hw/isa/isa.h&quot;<br>
=C2=A0#include &quot;hw/pci/pci.h&quot;<br>
=C2=A0#include &quot;hw/pci/pci_host.h&quot;<br>
+#include &quot;hw/usb/usb-hcd.h&quot;<br>
=C2=A0#include &quot;hw/boards.h&quot;<br>
=C2=A0#include &quot;hw/nvram/fw_cfg.h&quot;<br>
=C2=A0#include &quot;hw/char/escc.h&quot;<br>
@@ -301,7 +302,7 @@ static void ppc_heathrow_init(MachineState *machine)<br=
>
=C2=A0 =C2=A0 =C2=A0qdev_realize_and_unref(dev, adb_bus, &amp;error_fatal);=
<br>
<br>
=C2=A0 =C2=A0 =C2=A0if (machine_usb(machine)) {<br>
-=C2=A0 =C2=A0 =C2=A0 =C2=A0 pci_create_simple(pci_bus, -1, &quot;pci-ohci&=
quot;);<br>
+=C2=A0 =C2=A0 =C2=A0 =C2=A0 pci_create_simple(pci_bus, -1, TYPE_PCI_OHCI);=
<br>
=C2=A0 =C2=A0 =C2=A0}<br>
<br>
=C2=A0 =C2=A0 =C2=A0if (graphic_depth !=3D 15 &amp;&amp; graphic_depth !=3D=
 32 &amp;&amp; graphic_depth !=3D 8)<br>
diff --git a/hw/ppc/sam460ex.c b/hw/ppc/sam460ex.c<br>
index 781b45e14b..ac60d17a86 100644<br>
--- a/hw/ppc/sam460ex.c<br>
+++ b/hw/ppc/sam460ex.c<br>
@@ -36,6 +36,7 @@<br>
=C2=A0#include &quot;hw/i2c/ppc4xx_i2c.h&quot;<br>
=C2=A0#include &quot;hw/i2c/smbus_eeprom.h&quot;<br>
=C2=A0#include &quot;hw/usb/usb.h&quot;<br>
+#include &quot;hw/usb/usb-hcd.h&quot;<br>
=C2=A0#include &quot;hw/usb/hcd-ehci.h&quot;<br>
=C2=A0#include &quot;hw/ppc/fdt.h&quot;<br>
=C2=A0#include &quot;hw/qdev-properties.h&quot;<br>
@@ -372,7 +373,7 @@ static void sam460ex_init(MachineState *machine)<br>
<br>
=C2=A0 =C2=A0 =C2=A0/* USB */<br>
=C2=A0 =C2=A0 =C2=A0sysbus_create_simple(TYPE_PPC4xx_EHCI, 0x4bffd0400, uic=
[2][29]);<br>
-=C2=A0 =C2=A0 dev =3D qdev_new(&quot;sysbus-ohci&quot;);<br>
+=C2=A0 =C2=A0 dev =3D qdev_new(TYPE_SYSBUS_OHCI);<br>
=C2=A0 =C2=A0 =C2=A0qdev_prop_set_string(dev, &quot;masterbus&quot;, &quot;=
usb-bus.0&quot;);<br>
=C2=A0 =C2=A0 =C2=A0qdev_prop_set_uint32(dev, &quot;num-ports&quot;, 6);<br=
>
=C2=A0 =C2=A0 =C2=A0sbdev =3D SYS_BUS_DEVICE(dev);<br>
diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c<br>
index 0c0409077f..db1706a66c 100644<br>
--- a/hw/ppc/spapr.c<br>
+++ b/hw/ppc/spapr.c<br>
@@ -71,6 +71,7 @@<br>
=C2=A0#include &quot;exec/address-spaces.h&quot;<br>
=C2=A0#include &quot;exec/ram_addr.h&quot;<br>
=C2=A0#include &quot;hw/usb/usb.h&quot;<br>
+#include &quot;hw/usb/usb-hcd.h&quot;<br>
=C2=A0#include &quot;qemu/config-file.h&quot;<br>
=C2=A0#include &quot;qemu/error-report.h&quot;<br>
=C2=A0#include &quot;trace.h&quot;<br>
@@ -2958,7 +2959,7 @@ static void spapr_machine_init(MachineState *machine)=
<br>
<br>
=C2=A0 =C2=A0 =C2=A0if (machine-&gt;usb) {<br>
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0if (smc-&gt;use_ohci_by_default) {<br>
-=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 pci_create_simple(phb-&gt;bus, -=
1, &quot;pci-ohci&quot;);<br>
+=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 pci_create_simple(phb-&gt;bus, -=
1, TYPE_PCI_OHCI);<br>
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0} else {<br>
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0pci_create_simple(phb-&gt;b=
us, -1, &quot;nec-usb-xhci&quot;);<br>
=C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0}<br>
diff --git a/hw/usb/hcd-ohci-pci.c b/hw/usb/hcd-ohci-pci.c<br>
index cb6bc55f59..14df83ec2e 100644<br>
--- a/hw/usb/hcd-ohci-pci.c<br>
+++ b/hw/usb/hcd-ohci-pci.c<br>
@@ -29,8 +29,8 @@<br>
=C2=A0#include &quot;trace.h&quot;<br>
=C2=A0#include &quot;hcd-ohci.h&quot;<br>
=C2=A0#include &quot;usb-internal.h&quot;<br>
+#include &quot;hw/usb/usb-hcd.h&quot;<br>
<br>
-#define TYPE_PCI_OHCI &quot;pci-ohci&quot;<br>
=C2=A0#define PCI_OHCI(obj) OBJECT_CHECK(OHCIPCIState, (obj), TYPE_PCI_OHCI=
)<br>
<br>
=C2=A0typedef struct {<br>
-- <br>
2.21.3<br>
<br>
</blockquote></div></div></div>



From xen-devel-bounces@lists.xenproject.org Mon Jul 06 06:00:05 2020
From: Markus Armbruster <armbru@redhat.com>
To: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Subject: Re: [PATCH v11 1/8] error: auto propagated local_err
References: <20200703090816.3295-1-vsementsov@virtuozzo.com>
 <20200703090816.3295-2-vsementsov@virtuozzo.com>
Date: Mon, 06 Jul 2020 07:59:25 +0200
In-Reply-To: <20200703090816.3295-2-vsementsov@virtuozzo.com> (Vladimir
 Sementsov-Ogievskiy's message of "Fri, 3 Jul 2020 12:08:09 +0300")
Message-ID: <87o8otgp4y.fsf@dusky.pond.sub.org>
User-Agent: Gnus/5.13 (Gnus v5.13) Emacs/26.3 (gnu/linux)
Cc: Kevin Wolf <kwolf@redhat.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Michael Roth <mdroth@linux.vnet.ibm.com>, qemu-block@nongnu.org,
 Paul Durrant <paul@xen.org>, Laszlo Ersek <lersek@redhat.com>,
 Christian Schoenebeck <qemu_oss@crudebyte.com>, qemu-devel@nongnu.org,
 groug@kaod.org, Gerd Hoffmann <kraxel@redhat.com>,
 Stefan Hajnoczi <stefanha@redhat.com>,
 Anthony Perard <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org,
 Max Reitz <mreitz@redhat.com>,
 Philippe Mathieu-Daudé <philmd@redhat.com>

Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> writes:

> Introduce a new ERRP_AUTO_PROPAGATE macro, to be used at start of
> functions with an errp OUT parameter.
>
> It has three goals:
>
> 1. Fix issue with error_fatal and error_prepend/error_append_hint: user
> can't see this additional information, because exit() happens in
> error_setg earlier than information is added. [Reported by Greg Kurz]
>
> 2. Fix issue with error_abort and error_propagate: when we wrap
> error_abort by local_err+error_propagate, the resulting coredump will
> refer to error_propagate and not to the place where error happened.
> (the macro itself doesn't fix the issue, but it allows us to [3.] drop
> the local_err+error_propagate pattern, which will definitely fix the
> issue) [Reported by Kevin Wolf]
>
> 3. Drop local_err+error_propagate pattern, which is used to workaround
> void functions with errp parameter, when caller wants to know resulting
> status. (Note: actually these functions could be merely updated to
> return int error code).
>
> To achieve these goals, later patches will add invocations
> of this macro at the start of functions with either use
> error_prepend/error_append_hint (solving 1) or which use
> local_err+error_propagate to check errors, switching those
> functions to use *errp instead (solving 2 and 3).
>
> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> Reviewed-by: Paul Durrant <paul@xen.org>
> Reviewed-by: Greg Kurz <groug@kaod.org>
> Reviewed-by: Eric Blake <eblake@redhat.com>
> ---
>
> Cc: Eric Blake <eblake@redhat.com>
> Cc: Kevin Wolf <kwolf@redhat.com>
> Cc: Max Reitz <mreitz@redhat.com>
> Cc: Greg Kurz <groug@kaod.org>
> Cc: Christian Schoenebeck <qemu_oss@crudebyte.com>
> Cc: Stefan Hajnoczi <stefanha@redhat.com>
> Cc: Stefano Stabellini <sstabellini@kernel.org>
> Cc: Anthony Perard <anthony.perard@citrix.com>
> Cc: Paul Durrant <paul@xen.org>
> Cc: "Philippe Mathieu-Daudé" <philmd@redhat.com>
> Cc: Laszlo Ersek <lersek@redhat.com>
> Cc: Gerd Hoffmann <kraxel@redhat.com>
> Cc: Markus Armbruster <armbru@redhat.com>
> Cc: Michael Roth <mdroth@linux.vnet.ibm.com>
> Cc: qemu-devel@nongnu.org
> Cc: qemu-block@nongnu.org
> Cc: xen-devel@lists.xenproject.org
>
>  include/qapi/error.h | 205 ++++++++++++++++++++++++++++++++++++-------
>  1 file changed, 172 insertions(+), 33 deletions(-)
>
> diff --git a/include/qapi/error.h b/include/qapi/error.h
> index 5ceb3ace06..b54aedbfd7 100644
> --- a/include/qapi/error.h
> +++ b/include/qapi/error.h
> @@ -39,7 +39,7 @@
>   *   • pointer-valued functions return non-null / null pointer, and
>   *   • integer-valued functions return non-negative / negative.
>   *
> - * How to:
> + * = Deal with Error object =
>   *
>   * Create an error:
>   *     error_setg(errp, "situation normal, all fouled up");
> @@ -73,28 +73,91 @@
>   * reporting it (primarily useful in testsuites):
>   *     error_free_or_abort(&err);
>   *
> - * Pass an existing error to the caller:
> - *     error_propagate(errp, err);
> - * where Error **errp is a parameter, by convention the last one.
> + * = Deal with Error ** function parameter =
>   *
> - * Pass an existing error to the caller with the message modified:
> - *     error_propagate_prepend(errp, err);
> + * A function may use the error system to return errors. In this case, the
> + * function defines an Error **errp parameter, by convention the last one (with
> + * exceptions for functions using ... or va_list).
>   *
> - * Avoid
> - *     error_propagate(errp, err);
> - *     error_prepend(errp, "Could not frobnicate '%s': ", name);
> - * because this fails to prepend when @errp is &error_fatal.
> + * The caller may then pass in the following errp values:
> + *
> + * 1. &error_abort
> + *    Any error will result in abort().
> + * 2. &error_fatal
> + *    Any error will result in exit() with a non-zero status.
> + * 3. NULL
> + *    No error reporting through errp parameter.
> + * 4. The address of a NULL-initialized Error *err
> + *    Any error will populate errp with an error object.

The rebase onto my "error: Document Error API usage rules" rendered this
this partly redundant.  I'll try my hand at a proper merge, then ask you
to check it.

Should I fail to complete this in time for the soft freeze, we can merge
the thing as is.  Comment improvements are fair game until -rc1 or so.

>   *
> - * Create a new error and pass it to the caller:
> + * The following rules then implement the correct semantics desired by the
> + * caller.
> + *
> + * Create a new error to pass to the caller:
>   *     error_setg(errp, "situation normal, all fouled up");
>   *
> - * Call a function and receive an error from it:
> + * Calling another errp-based function:
> + *     f(..., errp);
> + *
> + * == Checking success of subcall ==
> + *
> + * If a function returns a value indicating an error in addition to setting
> + * errp (which is recommended), then you don't need any additional code, just
> + * do:
> + *
> + *     int ret = f(..., errp);
> + *     if (ret < 0) {
> + *         ... handle error ...
> + *         return ret;
> + *     }
> + *
> + * If a function returns nothing (not recommended for new code), the only way

Also when a function returns something, but there is no distinct error
value.  Example: object_property_get_int().
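To make that situation concrete: when every possible return value is a legal
result, the error object is the only failure channel. Below is a minimal
self-contained model of this in plain C; the Error plumbing and the
get_prop()/checked_get() names are stand-ins invented for this sketch, not
QEMU's actual API.

```c
#include <stdlib.h>
#include <string.h>

/* Stand-in for QEMU's Error object; only the shape matters here. */
typedef struct Error { char msg[64]; } Error;

static void error_setg(Error **errp, const char *msg)
{
    if (!errp) {
        return;
    }
    *errp = calloc(1, sizeof(Error));
    strncpy((*errp)->msg, msg, sizeof((*errp)->msg) - 1);
}

/* Like object_property_get_int(): every long-ish return value is a valid
 * property value, so the caller cannot infer failure from the result. */
static long get_prop(const char *name, Error **errp)
{
    if (strcmp(name, "speed") != 0) {
        error_setg(errp, "no such property");
        return -1;              /* -1 is also a legal property value! */
    }
    return 100;
}

/* The caller must therefore consult the error object, not the result. */
static long checked_get(const char *name, int *ok)
{
    Error *err = NULL;
    long val = get_prop(name, &err);

    *ok = (err == NULL);
    free(err);
    return val;
}
```

The point of the sketch is only that `checked_get()` has no correct
implementation that inspects `val` alone.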

I shouldn't criticize comments without suggesting improvements.  But
since I'm going to mess with this text anyway to merge your work with my
prior work, I take it easy and only note what I think needs work.  I'll
then try to address all that in or on top of my merge.

> + * to check success is by consulting errp; doing this safely requires the use
> + * of the ERRP_AUTO_PROPAGATE macro, like this:

"Requires" is inaccurate.  Using a local variable with error_propagate()
also works (there's even an example right below).  We prefer
ERRP_AUTO_PROPAGATE(), because it's more readable and improves the
debugging experience.
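For illustration, here is a self-contained model of that local-variable plus
error_propagate() alternative. The Error type, error_propagate(), and the
function names are simplified stand-ins for this sketch, not QEMU's real
implementations.

```c
#include <stdlib.h>

/* Minimal model of an Error object carrying an error code. */
typedef struct Error { int code; } Error;

static void error_setg_code(Error **errp, int code)
{
    if (errp) {
        *errp = malloc(sizeof(Error));
        (*errp)->code = code;
    }
}

/* Hand a received error on to the caller's errp, or drop it if the
 * caller passed NULL (i.e. the caller ignores errors). */
static void error_propagate(Error **dst_errp, Error *local_err)
{
    if (!local_err) {
        return;
    }
    if (dst_errp) {
        *dst_errp = local_err;
    } else {
        free(local_err);
    }
}

static void subcall(Error **errp)
{
    error_setg_code(errp, 42);
}

/* The pre-ERRP_AUTO_PROPAGATE pattern the review discusses: a local
 * Error plus error_propagate() is more verbose, but behaves correctly
 * for every caller value of errp, including NULL. */
static int our_func(Error **errp)
{
    Error *local_err = NULL;

    subcall(&local_err);
    if (local_err) {
        error_propagate(errp, local_err);
        return -1;
    }
    return 0;
}
```

The same `our_func()` written with a direct `*errp` dereference would crash
for a NULL caller, which is exactly what both the wrapper macro and this
manual pattern avoid.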

> + *
> + *     int our_func(..., Error **errp) {

A function's opening brace goes on its own line.  More of the same below.

> + *         ERRP_AUTO_PROPAGATE();
> + *         ...
> + *         subcall(..., errp);
> + *         if (*errp) {
> + *             ...
> + *             return -EINVAL;
> + *         }
> + *         ...
> + *     }
> + *
> + * ERRP_AUTO_PROPAGATE takes care of wrapping the original errp as needed, so
> + * that the rest of the function can directly use errp (including
> + * dereferencing), where any errors will then be propagated on to the original
> + * errp when leaving the function.
> + *
> + * In some cases, we need to check result of subcall, but do not want to
> + * propagate the Error object to our caller. In such cases we don't need
> + * ERRP_AUTO_PROPAGATE, but just a local Error object:
> + *
> + * Receive an error and not pass it:
>   *     Error *err = NULL;
> - *     foo(arg, &err);
> + *     subcall(arg, &err);
>   *     if (err) {
>   *         handle the error...
> + *         error_free(err);
>   *     }
>   *
> + * Note that older code that did not use ERRP_AUTO_PROPAGATE would instead need
> + * a local Error * variable and the use of error_propagate() to properly handle
> + * all possible caller values of errp. Now this is DEPRECATED* (see below).

I'd prefer not to shout DEPRECATED.

> + *
> + * Note that any function that wants to modify an error object, such as by
> + * calling error_append_hint or error_prepend, must use ERRP_AUTO_PROPAGATE, in
> + * order for a caller's use of &error_fatal to see the additional information.

"Must" should be reserved for situations where failure to adhere is
categorically wrong.

While we *want* people to use ERRP_AUTO_PROPAGATE() with
error_append_hint() and error_prepend(), failure to do so need not be
wrong.
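The failure mode under discussion can be modeled in a few lines: with
&error_fatal the "exit" conceptually happens inside error_setg(), so a later
error_append_hint() has nothing to attach to. The sketch below fakes exit()
with a recorded string so the difference is observable; everything here (the
Error struct, last_report, naive()/wrapped()) is invented for illustration
and is not QEMU's real code.

```c
#include <stdlib.h>
#include <string.h>

typedef struct Error { char msg[128]; } Error;

/* Sentinel: callers pass &error_fatal to mean "report and exit now". */
static Error *error_fatal;
static char last_report[128];           /* stands in for stderr + exit() */

static void error_setg(Error **errp, const char *msg)
{
    if (errp == &error_fatal) {         /* real code would exit() here */
        strcpy(last_report, msg);
        return;
    }
    if (errp) {
        *errp = calloc(1, sizeof(Error));
        strcpy((*errp)->msg, msg);
    }
}

static void error_append_hint(Error **errp, const char *hint)
{
    if (errp && errp != &error_fatal && *errp) {
        strcat((*errp)->msg, hint);
    }
    /* with &error_fatal, the "exit" already happened: hint is lost */
}

/* Without wrapping: a &error_fatal caller never sees the hint. */
static void naive(Error **errp)
{
    error_setg(errp, "invalid quark");
    error_append_hint(errp, " (valid: up, down, ...)");
}

/* Hand-rolled version of what ERRP_AUTO_PROPAGATE() automates. */
static void wrapped(Error **errp)
{
    Error *local = NULL;
    Error **p = (errp == &error_fatal) ? &local : errp;

    error_setg(p, "invalid quark");
    error_append_hint(p, " (valid: up, down, ...)");
    if (p == &local && local) {          /* propagate on function exit */
        strcpy(last_report, local->msg); /* report now carries the hint */
        free(local);
    }
}
```

Running `naive(&error_fatal)` reports only "invalid quark", while
`wrapped(&error_fatal)` reports the message with the hint attached, which is
the behavioral difference the wrapping buys.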

Apropos error_append_hint(), the "Show errp instead of &err where &err
is actually unusual" part of my "error: Improve examples in error.h's
big comment" now feels premature to me.  E.g.

    * Create an error and add additional explanation:
  - *     error_setg(&err, "invalid quark");
  - *     error_append_hint(&err, "Valid quarks are up, down, strange, "
  + *     error_setg(errp, "invalid quark");
  + *     error_append_hint(errp, "Valid quarks are up, down, strange, "
    *                       "charm, top, bottom.\n");

is actually bad advice until ERRP_AUTO_PROPAGATE() turns it into good
advice.  I think I'll drop it from my commit, then see how I like the
comment with yours applied.

> + *
> + * In rare cases, we need to pass existing Error object to the caller by hand:
> + *     error_propagate(errp, err);

Out of curiosity: can you describe such a case?

> + *
> + * Pass an existing error to the caller with the message modified:
> + *     error_propagate_prepend(errp, err);
> + *
> + *
>   * Call a function ignoring errors:
>   *     foo(arg, NULL);
>   *
> @@ -104,26 +167,6 @@
>   * Call a function treating errors as fatal:
>   *     foo(arg, &error_fatal);
>   *
> - * Receive an error and pass it on to the caller:
> - *     Error *err = NULL;
> - *     foo(arg, &err);
> - *     if (err) {
> - *         handle the error...
> - *         error_propagate(errp, err);
> - *     }
> - * where Error **errp is a parameter, by convention the last one.
> - *
> - * Do *not* "optimize" this to
> - *     foo(arg, errp);
> - *     if (*errp) { // WRONG!
> - *         handle the error...
> - *     }
> - * because errp may be NULL!
> - *
> - * But when all you do with the error is pass it on, please use
> - *     foo(arg, errp);
> - * for readability.
> - *
>   * Receive and accumulate multiple errors (first one wins):
>   *     Error *err = NULL, *local_err = NULL;
>   *     foo(arg, &err);
> @@ -151,6 +194,61 @@
>   *         error_setg(&err, ...); // WRONG!
>   *     }
>   * because this may pass a non-null err to error_setg().
> + *
> + * DEPRECATED*
> + *
> + * The following pattern of receiving, checking, and then forwarding an error
> + * to the caller by hand is now deprecated:
> + *
> + *     Error *err = NULL;
> + *     foo(arg, &err);
> + *     if (err) {
> + *         handle the error...
> + *         error_propagate(errp, err);
> + *     }
> + *
> + * Instead, use ERRP_AUTO_PROPAGATE macro.
> + *
> + * The old pattern is deprecated because of two things:
> + *
> + * 1. Issue with error_abort and error_propagate: when we wrap error_abort by
> + * local_err+error_propagate, the resulting coredump will refer to
> + * error_propagate and not to the place where error happened.
> + *
> + * 2. A lot of extra code of the same pattern
> + *
> + * How to update old code to use ERRP_AUTO_PROPAGATE?
> + *
> + * All you need is to add ERRP_AUTO_PROPAGATE() invocation at function start,
> + * than you may safely dereference errp to check errors and do not need any
> + * additional local Error variables or calls to error_propagate().
> + *
> + * Example:
> + *
> + * old code
> + *
> + *     void fn(..., Error **errp) {
> + *         Error *err = NULL;
> + *         foo(arg, &err);
> + *         if (err) {
> + *             handle the error...
> + *             error_propagate(errp, err);
> + *             return;
> + *         }
> + *         ...
> + *     }
> + *
> + * updated code
> + *
> + *     void fn(..., Error **errp) {
> + *         ERRP_AUTO_PROPAGATE();
> + *         foo(arg, errp);
> + *         if (*errp) {
> + *             handle the error...
> + *             return;
> + *         }
> + *         ...
> + *     }

This is another spot where we need to merge your work with mine
properly.  When foo() returns a distinct error value, then checking that
is even better:

          void fn(..., Error **errp)
          {
              if (!foo(arg, errp)) {
                  handle the error...
                  return;
              }
              ...
          }

>   */
> 
>  #ifndef ERROR_H
> @@ -359,6 +457,47 @@ void error_set_internal(Error **errp,
>                          ErrorClass err_class, const char *fmt, ...)
>      GCC_FMT_ATTR(6, 7);
> 

The ERRP_AUTO_PROPAGATE stuff starts rather abruptly.  I'm afraid an
unprepared reader will get what's going on only at
G_DEFINE_AUTO_CLEANUP_CLEAR_FUNC(), or even at #define
ERRP_AUTO_PROPAGATE().

Let's move the typedef, helper function and macro invocation behind the
definition of ERRP_AUTO_PROPAGATE(), similar to how we declare each
error_FOO_internal() helper function right after the macro that needs
it.

> +typedef struct ErrorPropagator {
> +    Error *local_err;
> +    Error **errp;
> +} ErrorPropagator;
> +
> +static inline void error_propagator_cleanup(ErrorPropagator *prop)
> +{
> +    error_propagate(prop->errp, prop->local_err);
> +}
> +
> +G_DEFINE_AUTO_CLEANUP_CLEAR_FUNC(ErrorPropagator, error_propagator_cleanup);
> +
> +/*
> + * ERRP_AUTO_PROPAGATE
> + *

No other definition comment in this file repeats the name being defined.
Let's keep the comment style locally consistent.

> + * This macro exists to assist with proper error handling in a function which
> + * uses an Error **errp parameter.  It must be used as the first line of a
> + * function which modifies an error (with error_prepend, error_append_hint, or
> + * similar) or which wants to dereference *errp.  It is still safe (but
> + * useless) to use in other functions.
> + *
> + * If errp is NULL or points to error_fatal, it is rewritten to point to=
 a
> + * local Error object, which will be automatically propagated to the ori=
ginal
> + * errp on function exit (see error_propagator_cleanup).
> + *
> + * After invocation of this macro it is always safe to dereference errp
> + * (as it's not NULL anymore) and to add information by error_prepend or
> + * error_append_hint (as, if it was error_fatal, we swapped it with a
> + * local_error to be propagated on cleanup).
> + *
> + * Note: we don't wrap the error_abort case, as we want resulting coredu=
mp
> + * to point to the place where the error happened, not to error_propagat=
e.
> + */
> +#define ERRP_AUTO_PROPAGATE() \
> +    g_auto(ErrorPropagator) _auto_errp_prop = {.errp = errp}; \
> +    do { \
> +        if (!errp || errp == &error_fatal) { \
> +            errp = &_auto_errp_prop.local_err; \
> +        } \
> +    } while (0)
> +

Let's align the backslashes for consistency with nearby macros:

   #define ERRP_AUTO_PROPAGATE()                                   \
       g_auto(ErrorPropagator) _auto_errp_prop = {.errp = errp};   \
       do {                                                        \
           if (!errp || errp == &error_fatal) {                    \
               errp = &_auto_errp_prop.local_err;                  \
           }                                                       \
       } while (0)
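To make the cleanup mechanics concrete, here is a stand-alone model of the pattern.  It spells out the GCC/Clang cleanup attribute that g_auto() expands to, uses toy Error/error_setg/error_propagate stand-ins rather than QEMU's real API, and omits the error_fatal special case:

```c
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Toy stand-in for QEMU's Error object. */
typedef struct Error {
    char msg[64];
} Error;

static void error_setg(Error **errp, const char *msg)
{
    if (errp) {
        *errp = malloc(sizeof(Error));
        snprintf((*errp)->msg, sizeof((*errp)->msg), "%s", msg);
    }
}

/* Move local_err into *dst_errp, or discard it if the caller passed
 * a NULL errp and therefore ignores errors. */
static void error_propagate(Error **dst_errp, Error *local_err)
{
    if (dst_errp && local_err) {
        *dst_errp = local_err;
    } else {
        free(local_err);
    }
}

typedef struct ErrorPropagator {
    Error *local_err;
    Error **errp;
} ErrorPropagator;

/* Runs automatically when the propagator goes out of scope. */
static void error_propagator_cleanup(ErrorPropagator *prop)
{
    error_propagate(prop->errp, prop->local_err);
}

/* Simplified: the real macro also redirects errp == &error_fatal. */
#define ERRP_AUTO_PROPAGATE()                                       \
    __attribute__((cleanup(error_propagator_cleanup)))              \
    ErrorPropagator _auto_errp_prop = {.errp = errp};               \
    do {                                                            \
        if (!errp) {                                                \
            errp = &_auto_errp_prop.local_err;                      \
        }                                                           \
    } while (0)

static void fn(int arg, Error **errp)
{
    ERRP_AUTO_PROPAGATE();
    /* errp is now never NULL, so *errp may be dereferenced freely. */
    assert(errp != NULL);
    if (arg < 0) {
        error_setg(errp, "arg must be non-negative");
        return;
    }
}
```

If the caller's errp was non-NULL, the propagator's local_err stays NULL and the cleanup is a no-op; if it was NULL, the error lands in local_err and is freed on exit instead of crashing fn().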

>  /*
>   * Special error destination to abort on error.
>   * See error_setg() and error_propagate() for details.



From xen-devel-bounces@lists.xenproject.org Mon Jul 06 06:59:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jul 2020 06:59:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jsL5Y-0001dN-Fs; Mon, 06 Jul 2020 06:58:52 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=bUWB=AR=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1jsL5X-0001dI-6O
 for xen-devel@lists.xenproject.org; Mon, 06 Jul 2020 06:58:51 +0000
X-Inumbo-ID: 253b0890-bf56-11ea-bb8b-bc764e2007e4
Received: from mail-wm1-x32f.google.com (unknown [2a00:1450:4864:20::32f])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 253b0890-bf56-11ea-bb8b-bc764e2007e4;
 Mon, 06 Jul 2020 06:58:50 +0000 (UTC)
Received: by mail-wm1-x32f.google.com with SMTP id 22so38051215wmg.1
 for <xen-devel@lists.xenproject.org>; Sun, 05 Jul 2020 23:58:50 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
 :mime-version:content-transfer-encoding:thread-index
 :content-language;
 bh=9uNruDhmPyImOqqHCrEGTkb5DFGlhJcN7Yk887FFybI=;
 b=suGhUR7EpQXJGED7NSQgAt2gLpJ1pwBVYLR2gDNKouS+HtV/cseJn4WUwm/jQVeCaK
 bTlE8m/gg957YfFt2K4Rhgrn01+2kTnMKXuVjducQStVDKVKu5IVbCKJXuj517QQWz11
 SyYrwaSG4hEhMM27S6xwEqXYCWvDw4W4agyhwPwiNZn0xDY3/AlzWBknWI5WQigbHMzT
 DJp7x7NwJZ2R338CGtY98UmJ7oMIvHxeurrVBw9XIiGeRUFc6pYftMNht0XgUx4sn1I5
 j5+80tjhM/adT6/KQxcZomAT7pTHbHDWZ18h8Du+NvRnkwH4vUm9RStVX+lGP3t8aKDL
 lgVA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
 :subject:date:message-id:mime-version:content-transfer-encoding
 :thread-index:content-language;
 bh=9uNruDhmPyImOqqHCrEGTkb5DFGlhJcN7Yk887FFybI=;
 b=rMA33s0VDWaB/mU+UJnpDx3O++NcBsjUKgZgIpdp/E5JzXr2gepk4Wtj2kKwn/mR1a
 wCJSJR4yNzpSuFRgiZitO0wgFPJHsYS0zUiar92TtC4HIcxeSPsJNWB+9+tLWLurQEuL
 U/iGVauCe2ax7v6kboVnVsqwKxEzJucC0fzsN3FL7z7ThYuXWw8pk9AQJVP9WjZfxgG4
 1q4nD9rQz67V1e9/0ZgkZVnKUQpX69GIY4a6mLH/dUOVDQr4TpRIvu+/XwN0MQxAiPhD
 kVQXFzfGB0cDTgEEHGw7HnbEvfmG0UUhL0kpUbqaCePsOPYUe8w64NcQvgj2HZKfSKWH
 eZGA==
X-Gm-Message-State: AOAM530Pmn7nupD/k8bknK0H/rpHkNzDCpWN/y7M2P+QFIxPes9bnXX0
 VEkCo6a6uGhOQ05fR6uLVH4=
X-Google-Smtp-Source: ABdhPJzLjwVLtEZDdcQx3Kd3RzhaH8DDbJBU7HpWPcF0d/onwg6Itjr8nZPwyWoRUzTEGfNDOjnfag==
X-Received: by 2002:a1c:2109:: with SMTP id h9mr4789164wmh.174.1594018729744; 
 Sun, 05 Jul 2020 23:58:49 -0700 (PDT)
Received: from CBGR90WXYV0 ([2a00:23c5:5782:7500:8191:456f:379d:d246])
 by smtp.gmail.com with ESMTPSA id 30sm24064903wrm.74.2020.07.05.23.58.48
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Sun, 05 Jul 2020 23:58:49 -0700 (PDT)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
To: "'Tim Deegan'" <tim@xen.org>,
	"'Wei Liu'" <wl@xen.org>
References: <20200703201001.56606-1-wl@xen.org>
 <20200703202718.GA72092@deinos.phlegethon.org>
In-Reply-To: <20200703202718.GA72092@deinos.phlegethon.org>
Subject: RE: [PATCH for-4.14] kdd: fix build again
Date: Mon, 6 Jul 2020 07:58:50 +0100
Message-ID: <007701d65362$e7c89130$b759b390$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="us-ascii"
Content-Transfer-Encoding: 7bit
X-Mailer: Microsoft Outlook 16.0
Thread-Index: AQL1GIiwyScKkPdw+9l0F8HkPG9WSwGPLZeIprAbMaA=
Content-Language: en-gb
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Reply-To: paul@xen.org
Cc: 'Xen Development List' <xen-devel@lists.xenproject.org>,
 'Ian Jackson' <ian.jackson@eu.citrix.com>,
 'Michael Young' <m.a.young@durham.ac.uk>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> -----Original Message-----
> From: Tim Deegan <tim@xen.org>
> Sent: 03 July 2020 21:27
> To: Wei Liu <wl@xen.org>
> Cc: Xen Development List <xen-devel@lists.xenproject.org>; Michael Young <m.a.young@durham.ac.uk>;
> Paul Durrant <paul@xen.org>; Ian Jackson <ian.jackson@eu.citrix.com>
> Subject: Re: [PATCH for-4.14] kdd: fix build again
> 
> At 20:10 +0000 on 03 Jul (1593807001), Wei Liu wrote:
> > Restore Tim's patch. The one that was committed was recreated by me
> > because git didn't accept my saved copy. I made some mistakes while
> > recreating that patch and here we are.
> >
> > Fixes: 3471cafbdda3 ("kdd: stop using [0] arrays to access packet contents")
> > Reported-by: Michael Young <m.a.young@durham.ac.uk>
> > Signed-off-by: Wei Liu <wl@xen.org>
> 
> Reviewed-by: Tim Deegan <tim@xen.org>
> 
> Thanks!
> 
> Tim.

Release-acked-by: Paul Durrant <paul@xen.org>



From xen-devel-bounces@lists.xenproject.org Mon Jul 06 06:59:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jul 2020 06:59:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jsL63-0001f1-Og; Mon, 06 Jul 2020 06:59:23 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=bUWB=AR=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1jsL63-0001ev-56
 for xen-devel@lists.xenproject.org; Mon, 06 Jul 2020 06:59:23 +0000
X-Inumbo-ID: 385582ca-bf56-11ea-b7bb-bc764e2007e4
Received: from mail-wm1-x332.google.com (unknown [2a00:1450:4864:20::332])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 385582ca-bf56-11ea-b7bb-bc764e2007e4;
 Mon, 06 Jul 2020 06:59:22 +0000 (UTC)
Received: by mail-wm1-x332.google.com with SMTP id 22so38052427wmg.1
 for <xen-devel@lists.xenproject.org>; Sun, 05 Jul 2020 23:59:22 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
 :mime-version:content-transfer-encoding:thread-index
 :content-language;
 bh=GSqycnyOgHnqRbxkdykYxL0t6AV3g94XVZgZUh4b7Oo=;
 b=nGLQub6o9/Mr2h4TTPzgZwJ63jnsecYGyIq9xnWuaYK1LVkVRYZz1s6G51ZZGQMiVK
 hg5zIyEaCgss536nOHArIsumPnWlfg46CG615CMNLStkWnjLfZuuLTJqpz+n3f7TY7Sg
 sFRzuWYlNu+5SZI8jMMw7hAEGEz+PYmeoKMc7ZMDUyTzjnz/ClSo5KmpUCIAYbXzdHfr
 vAuoyJaQl3O4Za/QvsU/LFBah8Zv1X4zqPEe88xfcHPeBeJCPeLQ/YemmMVDzk/FztcB
 Q50au6e7BF2I3vgi2AV6sWNzTGOVQYV3oGD46nrptrfVoIXEceV1kDfxp/IHCJcJOotE
 IEhQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
 :subject:date:message-id:mime-version:content-transfer-encoding
 :thread-index:content-language;
 bh=GSqycnyOgHnqRbxkdykYxL0t6AV3g94XVZgZUh4b7Oo=;
 b=dbGWEePY0E3bc3D+vrIjsSEqDREuGfpFpLGxJFBBGiHUo8xgKjoiBDEdK5+/R1bwFV
 zkLWSzSGfF6hB80Wx3w3gHx71rZ1LwGo0x0u6q+pudHZP3LjxJP2GUlFAcUoju9NvMIu
 mlw1cR97wL+QWcpsoNwUFGfRwxC00UrJ4jDV02vXDiCVE0YlVTIaoK+Bu1yAc2aHznPY
 Z9UyYisg4gkYpLog9Ou2sE/hyFogjOCa1di/OXCLgbMer3WG2HMILL5KgMPRK/0XG6IQ
 HudJraR2dKpiPM1ff6hS0Dy6bVzwgC2wrrMEmwTdu5X92oYoQ3DCzU+XIpMa6Yy2N664
 9hIQ==
X-Gm-Message-State: AOAM530yfdHksyjAEwWrvsN69yVN8ZRYSWx8+IJjoYveJ9hPC1P3BunT
 i1q5AqaLz2NFYqsaDULcJK0=
X-Google-Smtp-Source: ABdhPJwzyz2pszcNWZGno9KupPDPrGUYfRu9QXYAaGGf+9Xdw9ucJWJ0j8PPt1b+SgBWqyAquHSxNg==
X-Received: by 2002:a05:600c:2249:: with SMTP id
 a9mr45794270wmm.163.1594018761880; 
 Sun, 05 Jul 2020 23:59:21 -0700 (PDT)
Received: from CBGR90WXYV0 ([2a00:23c5:5782:7500:8191:456f:379d:d246])
 by smtp.gmail.com with ESMTPSA id z25sm21289086wmk.28.2020.07.05.23.59.21
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Sun, 05 Jul 2020 23:59:21 -0700 (PDT)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
To: "'Wei Liu'" <wl@xen.org>,
	"'Anthony PERARD'" <anthony.perard@citrix.com>
References: <20200703135533.336625-1-anthony.perard@citrix.com>
 <20200703201035.pv6nyhydxyzqsuit@liuwe-devbox-debian-v2>
In-Reply-To: <20200703201035.pv6nyhydxyzqsuit@liuwe-devbox-debian-v2>
Subject: RE: [XEN PATCH for-4.14] Config: Update QEMU
Date: Mon, 6 Jul 2020 07:59:23 +0100
Message-ID: <007801d65362$fb068fe0$f113afa0$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
X-Mailer: Microsoft Outlook 16.0
Thread-Index: AQGZdywpQDnhjbXBL4Z/m/aCXPm/pgJTGOJCqWE+v1A=
Content-Language: en-gb
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Reply-To: paul@xen.org
Cc: xen-devel@lists.xenproject.org, 'Ian Jackson' <ian.jackson@eu.citrix.com>,
 =?iso-8859-1?Q?'Roger_Pau_Monn=E9'?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> -----Original Message-----
> From: Wei Liu <wl@xen.org>
> Sent: 03 July 2020 21:11
> To: Anthony PERARD <anthony.perard@citrix.com>
> Cc: Paul Durrant <paul@xen.org>; xen-devel@lists.xenproject.org; Roger Pau Monné
> <roger.pau@citrix.com>; Ian Jackson <ian.jackson@eu.citrix.com>; Wei Liu <wl@xen.org>
> Subject: Re: [XEN PATCH for-4.14] Config: Update QEMU
>
> On Fri, Jul 03, 2020 at 02:55:33PM +0100, Anthony PERARD wrote:
> > Backport 2 commits to fix building QEMU without PCI passthrough
> > support.
> >
> > Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
>
> FWIW:
>
> Acked-by: Wei Liu <wl@xen.org>

Release-acked-by: Paul Durrant <paul@xen.org>



From xen-devel-bounces@lists.xenproject.org Mon Jul 06 07:04:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jul 2020 07:04:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jsLAO-0002ZU-Ar; Mon, 06 Jul 2020 07:03:52 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=bUWB=AR=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1jsLAN-0002ZP-FC
 for xen-devel@lists.xenproject.org; Mon, 06 Jul 2020 07:03:51 +0000
X-Inumbo-ID: d8045e04-bf56-11ea-b7bb-bc764e2007e4
Received: from mail-wr1-x432.google.com (unknown [2a00:1450:4864:20::432])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d8045e04-bf56-11ea-b7bb-bc764e2007e4;
 Mon, 06 Jul 2020 07:03:50 +0000 (UTC)
Received: by mail-wr1-x432.google.com with SMTP id z15so28301457wrl.8
 for <xen-devel@lists.xenproject.org>; Mon, 06 Jul 2020 00:03:50 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
 :mime-version:content-transfer-encoding:thread-index
 :content-language;
 bh=SBKsAA7+u0dXdQjfFG41WcLWxsA+fMDFgWXbL5FWNq4=;
 b=BkGYlwL8QNt/pwe8xZtOFlgec1ug2f/5YqgqifJMTUwvlYk+sLW4sTDOOGatKXN/Cr
 1bG9vOYYfCqyjuptcEJBpzWKz6akQIgKF7PhU8/exfRUnGogogitKCCaXbDY0jVAtbqw
 TvvBwHyF7558yP4dlAeI98Ee+FExtu7wzzpLa/5PZZ0x/wN24yEIKXdTaokYIm0ArxKM
 ZbU8irq+sXmKhgWVv0hofVHT58ZlCRczC58jlmTF3vzlygD8xW6XvCo/NZbdLW0IoCyP
 +4YJ2B5dSKGO2SxViGbC7jihkOn13cp1cltq85jcHGTXN3/Dui6oMEyABsVpB7db7qZS
 FocA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
 :subject:date:message-id:mime-version:content-transfer-encoding
 :thread-index:content-language;
 bh=SBKsAA7+u0dXdQjfFG41WcLWxsA+fMDFgWXbL5FWNq4=;
 b=K/jSjLSmkdtVfSqDOhXVDnyoCMcRDxvVhewJsLxpPW2Lvs3tsZUtjaNEkQP/+mfYWj
 xgB9gKGJP5sByJvVUJ+kanc/4b7D5PoGvxPhlQLNhokitsk7x+MPUOCDUOt1/TTeZNT3
 s6Ajodxf+3NCTcIVWCzcyrCB7Z8G8HuznlN5a4bLBMjW2UvjuLO05XELiJf2E/9eI1rt
 cL87K8DQIz0uY+aabyLP0Ax8bYjFGzXjCDLllilQBQMrMkazkxGIIFousv71xB3+Ug/z
 KVEavoaSzPsnHybLU/gixUcU1YYWIjfNvhQx9GpeTwiYSlLKysCjE1v3rUSbxHY8tiRs
 bUVQ==
X-Gm-Message-State: AOAM533l4T7/l69nyG/ZUhn2BdVwuMq6j6Prof2+4tww14PA+zV3yNQn
 MktRX6CoH72UWaj9G5W374s=
X-Google-Smtp-Source: ABdhPJzOZ4Azv0pyfItyvCQIEo/vELaJ2ixVp+EAeyuIZQb9/x2pzVhR/02lYFbz3YlEDd7iKvtWaw==
X-Received: by 2002:adf:e6c8:: with SMTP id y8mr50099349wrm.40.1594019029689; 
 Mon, 06 Jul 2020 00:03:49 -0700 (PDT)
Received: from CBGR90WXYV0 ([2a00:23c5:5782:7500:8191:456f:379d:d246])
 by smtp.gmail.com with ESMTPSA id u2sm21773367wml.16.2020.07.06.00.03.48
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Mon, 06 Jul 2020 00:03:49 -0700 (PDT)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
To: "'Andrew Cooper'" <andrew.cooper3@citrix.com>,
 "'Jan Beulich'" <jbeulich@suse.com>,
 =?utf-8?Q?'Roger_Pau_Monn=C3=A9'?= <roger.pau@citrix.com>
References: <20200701090210.GN735@Air-de-Roger>
 <f89a158a-416e-1939-597a-075ff97f2b02@suse.com>
 <af13fa01-db36-784d-dfaf-b9905defc7fd@citrix.com>
In-Reply-To: <af13fa01-db36-784d-dfaf-b9905defc7fd@citrix.com>
Subject: RE: vPT rework (and timer mode)
Date: Mon, 6 Jul 2020 08:03:50 +0100
Message-ID: <007a01d65363$9ab7c1d0$d0274570$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="utf-8"
Content-Transfer-Encoding: quoted-printable
X-Mailer: Microsoft Outlook 16.0
Thread-Index: AQG+XT3UL7mtyhdM6X8x294LjZ1FmQGlQU2IApzoD8SpB/prwA==
Content-Language: en-gb
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Reply-To: paul@xen.org
Cc: xen-devel@lists.xenproject.org, 'Wei Liu' <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> -----Original Message-----
> From: Andrew Cooper <andrew.cooper3@citrix.com>
> Sent: 03 July 2020 16:03
> To: Jan Beulich <jbeulich@suse.com>; Roger Pau Monné <roger.pau@citrix.com>
> Cc: xen-devel@lists.xenproject.org; Wei Liu <wl@xen.org>; Paul Durrant <paul@xen.org>
> Subject: Re: vPT rework (and timer mode)
>
> On 03/07/2020 15:50, Jan Beulich wrote:
> > On 01.07.2020 11:02, Roger Pau Monné wrote:
> >> It's my understanding that the purpose of pt_update_irq and
> >> pt_intr_post is to attempt to implement the "delay for missed ticks"
> >> mode, where Xen will accumulate timer interrupts if they cannot be
> >> injected. As shown by the patch above, this is all broken when the
> >> timer is added to a vCPU (pt->vcpu) different than the actual target
> >> vCPU where the interrupt gets delivered (note this can also be a list
> >> of vCPUs if routed from the IO-APIC using Fixed mode).
> >>
> >> I'm at lost at how to fix this so that virtual timers work properly
> >> and we also keep the "delay for missed ticks" mode without doing a
> >> massive rework and somehow keeping track of where injected interrupts
> >> originated, which seems an overly complicated solution.
> >>
> >> My proposal hence would be to completely remove the timer_mode, and
> >> just treat virtual timer interrupts as other interrupts, ie: they will
> >> be injected from the callback (pt_timer_fn) and the vCPU(s) would be
> >> kicked. Whether interrupts would get lost (ie: injected when a
> >> previous one is still pending) depends on the contention on the
> >> system. I'm not aware of any current OS that uses timer interrupts as
> >> a way to track time. I think current OSes know the differences between
> >> a timer counter and an event timer, and will use them appropriately.
> > Fundamentally - why not, the more that this promises to be a
> > simplification. The question we need to answer up front is whether
> > we're happy to possibly break old OSes (presumably ones no-one
> > ought to be using anymore these days, due to their support life
> > cycles long having ended).
>
> The various timer modes were all compatibility, and IIRC, mostly for
> Windows XP and older which told time by counting the number of timer
> interrupts.
>
> Paul - you might remember better than me?

I think it is only quite recently that Windows has started favouring
enlightened time sources rather than counting ticks, but an admin may
still turn all the viridian enlightenments off, so just dropping ticks
will probably still cause time to drift backwards.

  Paul

>
> It's possibly worth noting that issues in this area cause triple faults in
> OVMF (it seems to enable interrupts in its timer handler), and break
> in-guest kexec (because our timer-targetting logic doesn't work in a way
> remotely close to real hardware when the kexec kernel is booting on a
> non-zero vCPU).
>
> ~Andrew



From xen-devel-bounces@lists.xenproject.org Mon Jul 06 07:28:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jul 2020 07:28:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jsLXm-0004LG-EA; Mon, 06 Jul 2020 07:28:02 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=m1N9=AR=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jsLXk-0004Kr-Pr
 for xen-devel@lists.xenproject.org; Mon, 06 Jul 2020 07:28:00 +0000
X-Inumbo-ID: 33908f2e-bf5a-11ea-8c45-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 33908f2e-bf5a-11ea-8c45-12813bfff9fa;
 Mon, 06 Jul 2020 07:27:52 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=4WLVD1Kp5w5Yh/Rs/r1wLc9EZciXFpWFDwvEP0FzpXM=; b=qETPtIp4jZJEi88+C11wqXfpJ
 LR1/+70aYIZttfhjbYuBAHnIDptM73Vx58kcAEiF68LUzjNjiZpoZFSOK/7N/c8Gv0R7t0k0r9OUV
 iPsfDhQGynJ33K7Owks3IEXB5mxOtsxSZUOs3N9mWc/eNlfGsAuNFDZc5ysidYYuun3FU=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jsLXb-0008CH-C2; Mon, 06 Jul 2020 07:27:51 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jsLXb-0004gz-3P; Mon, 06 Jul 2020 07:27:51 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jsLXb-0005Yz-2k; Mon, 06 Jul 2020 07:27:51 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151656-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 151656: regressions - FAIL
X-Osstest-Failures: qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-amd64-libvirt-vhd:debian-di-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-libvirt-xsm:guest-start:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qcow2:debian-di-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:redhat-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-i386-libvirt-pair:guest-start/debian:fail:regression
 qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-freebsd10-i386:guest-start:fail:regression
 qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:redhat-install:fail:regression
 qemu-mainline:test-amd64-amd64-libvirt:guest-start:fail:regression
 qemu-mainline:test-amd64-amd64-libvirt-xsm:guest-start:fail:regression
 qemu-mainline:test-amd64-i386-libvirt:guest-start:fail:regression
 qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-start:fail:regression
 qemu-mainline:test-armhf-armhf-xl-vhd:debian-di-install:fail:regression
 qemu-mainline:test-amd64-amd64-libvirt-pair:guest-start/debian:fail:regression
 qemu-mainline:test-armhf-armhf-libvirt:guest-start:fail:regression
 qemu-mainline:test-arm64-arm64-libvirt-xsm:guest-start:fail:regression
 qemu-mainline:test-armhf-armhf-libvirt-raw:debian-di-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: qemuu=eb6490f544388dd24c0d054a96dd304bc7284450
X-Osstest-Versions-That: qemuu=9e3903136d9acde2fb2dd9e967ba928050a6cb4a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 06 Jul 2020 07:27:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151656 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151656/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-ovmf-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-win7-amd64 10 windows-install  fail REGR. vs. 151065
 test-amd64-amd64-libvirt-vhd 10 debian-di-install        fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-win7-amd64 10 windows-install   fail REGR. vs. 151065
 test-amd64-amd64-qemuu-nested-intel 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-libvirt-xsm  12 guest-start              fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-ws16-amd64 10 windows-install  fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-ovmf-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-qemuu-nested-amd 10 debian-hvm-install  fail REGR. vs. 151065
 test-amd64-amd64-xl-qcow2    10 debian-di-install        fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-qemuu-rhel6hvm-amd 10 redhat-install     fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-debianhvm-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-ws16-amd64 10 windows-install   fail REGR. vs. 151065
 test-amd64-i386-libvirt-pair 21 guest-start/debian       fail REGR. vs. 151065
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-freebsd10-i386 11 guest-start            fail REGR. vs. 151065
 test-amd64-i386-qemuu-rhel6hvm-intel 10 redhat-install   fail REGR. vs. 151065
 test-amd64-amd64-libvirt     12 guest-start              fail REGR. vs. 151065
 test-amd64-amd64-libvirt-xsm 12 guest-start              fail REGR. vs. 151065
 test-amd64-i386-libvirt      12 guest-start              fail REGR. vs. 151065
 test-amd64-i386-freebsd10-amd64 11 guest-start           fail REGR. vs. 151065
 test-armhf-armhf-xl-vhd      10 debian-di-install        fail REGR. vs. 151065
 test-amd64-amd64-libvirt-pair 21 guest-start/debian      fail REGR. vs. 151065
 test-armhf-armhf-libvirt     12 guest-start              fail REGR. vs. 151065
 test-arm64-arm64-libvirt-xsm 12 guest-start              fail REGR. vs. 151065
 test-armhf-armhf-libvirt-raw 10 debian-di-install        fail REGR. vs. 151065

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                eb6490f544388dd24c0d054a96dd304bc7284450
baseline version:
 qemuu                9e3903136d9acde2fb2dd9e967ba928050a6cb4a

Last test of basis   151065  2020-06-12 22:27:51 Z   23 days
Failing since        151101  2020-06-14 08:32:51 Z   21 days   27 attempts
Testing same since   151634  2020-07-05 00:36:29 Z    1 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ahmed Karaman <ahmedkhaledkaraman@gmail.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Bulekov <alxndr@bu.edu>
  Alexey Krasikov <alex-krasikov@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Allan Peramaki <aperamak@pp1.inet.fi>
  Andrew Jones <drjones@redhat.com>
  Ani Sinha <ani.sinha@nutanix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Ard Biesheuvel <ardb@kernel.org>
  Artyom Tarasenko <atar4qemu@gmail.com>
  Aurelien Jarno <aurelien@aurel32.net>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Beata Michalska <beata.michalska@linaro.org>
  Bin Meng <bin.meng@windriver.com>
  Cathy Zhang <cathy.zhang@intel.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Claudio Fontana <cfontana@suse.de>
  Cleber Rosa <crosa@redhat.com>
  Colin Xu <colin.xu@intel.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniele Buono <dbuono@linux.vnet.ibm.com>
  David Carlier <devnexen@gmail.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Derek Su <dereksu@qnap.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Emilio G. Cota <cota@braap.org>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Evgeny Yakovlev <eyakovlev@virtuozzo.com>
  fangying <fangying1@huawei.com>
  Farhan Ali <alifm@linux.ibm.com>
  Finn Thain <fthain@telegraphics.com.au>
  Geoffrey McRae <geoff@hostfission.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Greg Kurz <groug@kaod.org>
  Guenter Roeck <linux@roeck-us.net>
  Gustavo Romero <gromero@linux.ibm.com>
  Halil Pasic <pasic@linux.ibm.com>
  Helge Deller <deller@gmx.de>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Ian Jiang <ianjiang.ict@gmail.com>
  Igor Mammedov <imammedo@redhat.com>
  Janne Grunau <j@jannau.net>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Wang <jasowang@redhat.com>
  Jay Zhou <jianjay.zhou@huawei.com>
  Jean-Christophe Dubois <jcd@tribudubois.net>
  Jessica Clarke <jrtc27@jrtc27.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jingqi Liu <jingqi.liu@intel.com>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Joseph Myers <joseph@codesourcery.com>
  Julio Faracco <jcfaracco@gmail.com>
  Keqian Zhu <zhukeqian1@huawei.com>
  Kevin Wolf <kwolf@redhat.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Leonid Bloch <lb.workbox@gmail.com>
  Leonid Bloch <lbloch@janustech.com>
  Li Qiang <liq3ea@gmail.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  lichun <lichun@ruijie.com.cn>
  Like Xu <like.xu@linux.intel.com>
  Lingfeng Yang <lfy@google.com>
  Liran Alon <liran.alon@oracle.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Lukas Straub <lukasstraub2@web.de>
  Maciej S. Szmigiero <maciej.szmigiero@oracle.com>
  Magnus Damm <magnus.damm@gmail.com>
  Mao Zhongyi <maozhongyi@cmss.chinamobile.com>
  Marcelo Tosatti <mtosatti@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Masahiro Yamada <masahiroy@kernel.org>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Michael S. Tsirkin <mst@redhat.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Raphael Norwitz <raphael.norwitz@nutanix.com>
  Richard Henderson <richard.henderson@linaro.org>
  Richard W.M. Jones <rjones@redhat.com>
  Robert Foley <robert.foley@linaro.org>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Roman Kagan <rkagan@virtuozzo.com>
  Roman Kagan <rvkagan@yandex-team.ru>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Sebastian Rasmussen <sebras@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Tao Xu <tao3.xu@intel.com>
  Thomas Huth <thuth@redhat.com>
  Tong Ho <tong.ho@xilinx.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  WangBowen <bowen.wang@intel.com>
  Wei Huang <wei.huang2@amd.com>
  Wei Wang <wei.w.wang@intel.com>
  Ying Fang <fangying1@huawei.com>
  Yoshinori Sato <ysato@users.sourceforge.jp>
  Yuri Benditovich <yuri.benditovich@daynix.com>
  Zhang Chen <chen.zhang@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 17819 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Jul 06 07:42:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jul 2020 07:42:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jsLlK-0005vk-Ne; Mon, 06 Jul 2020 07:42:02 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=HS9e=AR=redhat.com=armbru@srs-us1.protection.inumbo.net>)
 id 1jsLlI-0005vf-WC
 for xen-devel@lists.xenproject.org; Mon, 06 Jul 2020 07:42:01 +0000
X-Inumbo-ID: 2cdf0528-bf5c-11ea-8c46-12813bfff9fa
Received: from us-smtp-1.mimecast.com (unknown [207.211.31.120])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 2cdf0528-bf5c-11ea-8c46-12813bfff9fa;
 Mon, 06 Jul 2020 07:41:59 +0000 (UTC)
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-297-GXwXvf9pPU6o06QHFW7HIA-1; Mon, 06 Jul 2020 03:41:56 -0400
X-MC-Unique: GXwXvf9pPU6o06QHFW7HIA-1
Received: from smtp.corp.redhat.com (int-mx07.intmail.prod.int.phx2.redhat.com
 [10.5.11.22])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 6529A801E6A;
 Mon,  6 Jul 2020 07:41:54 +0000 (UTC)
Received: from blackfin.pond.sub.org (ovpn-112-143.ams2.redhat.com
 [10.36.112.143])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id 369D110013C2;
 Mon,  6 Jul 2020 07:41:51 +0000 (UTC)
Received: by blackfin.pond.sub.org (Postfix, from userid 1000)
 id 9139B1138648; Mon,  6 Jul 2020 09:41:49 +0200 (CEST)
From: Markus Armbruster <armbru@redhat.com>
To: Philippe =?utf-8?Q?Mathieu-Daud=C3=A9?= <philmd@redhat.com>
Subject: Re: [PATCH v11 8/8] xen: introduce ERRP_AUTO_PROPAGATE
References: <20200703090816.3295-1-vsementsov@virtuozzo.com>
 <20200703090816.3295-9-vsementsov@virtuozzo.com>
 <e2b4f10a-162c-ebb8-3232-381c4d820f9f@redhat.com>
Date: Mon, 06 Jul 2020 09:41:49 +0200
In-Reply-To: <e2b4f10a-162c-ebb8-3232-381c4d820f9f@redhat.com> ("Philippe
 =?utf-8?Q?Mathieu-Daud=C3=A9=22's?= message of "Sat, 4 Jul 2020 18:36:07
 +0200")
Message-ID: <878sfxgkea.fsf@dusky.pond.sub.org>
User-Agent: Gnus/5.13 (Gnus v5.13) Emacs/26.3 (gnu/linux)
MIME-Version: 1.0
X-Scanned-By: MIMEDefang 2.84 on 10.5.11.22
Authentication-Results: relay.mimecast.com;
 auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=armbru@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin Wolf <kwolf@redhat.com>,
 Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>, qemu-block@nongnu.org,
 Paul Durrant <paul@xen.org>, qemu-devel@nongnu.org, groug@kaod.org,
 Stefano Stabellini <sstabellini@kernel.org>,
 Stefan Hajnoczi <stefanha@redhat.com>,
 Anthony Perard <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org,
 Max Reitz <mreitz@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Philippe Mathieu-Daudé <philmd@redhat.com> writes:

> On 7/3/20 11:08 AM, Vladimir Sementsov-Ogievskiy wrote:
>> If we want to add some info to errp (by error_prepend() or
>> error_append_hint()), we must use the ERRP_AUTO_PROPAGATE macro.
>> Otherwise, this info will not be added when errp == &error_fatal
>> (the program will exit prior to the error_append_hint() or
>> error_prepend() call).  Fix such cases.
>>
>> If we want to check error after errp-function call, we need to
>> introduce local_err and then propagate it to errp. Instead, use
>> ERRP_AUTO_PROPAGATE macro, benefits are:
>> 1. No need of explicit error_propagate call
>> 2. No need of explicit local_err variable: use errp directly
>> 3. ERRP_AUTO_PROPAGATE leaves errp as is if it's not NULL or
>>    &error_fatal, this means that we don't break error_abort
>>    (we'll abort on error_set, not on error_propagate)
>>
>> This commit is generated by command
>>
>>     sed -n '/^X86 Xen CPUs$/,/^$/{s/^F: //p}' MAINTAINERS | \
>>     xargs git ls-files | grep '\.[hc]$' | \
>>     xargs spatch \
>>         --sp-file scripts/coccinelle/auto-propagated-errp.cocci \
>>         --macro-file scripts/cocci-macro-file.h \
>>         --in-place --no-show-diff --max-width 80
>>
>> Reported-by: Kevin Wolf <kwolf@redhat.com>
>> Reported-by: Greg Kurz <groug@kaod.org>
>> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
>> ---
>>  hw/block/dataplane/xen-block.c |  17 +++---
>>  hw/block/xen-block.c           | 102 ++++++++++++++-------------------
>>  hw/pci-host/xen_igd_pt.c       |   7 +--
>>  hw/xen/xen-backend.c           |   7 +--
>>  hw/xen/xen-bus.c               |  92 +++++++++++++----------------
>>  hw/xen/xen-host-pci-device.c   |  27 +++++----
>>  hw/xen/xen_pt.c                |  25 ++++----
>>  hw/xen/xen_pt_config_init.c    |  17 +++---
>>  8 files changed, 128 insertions(+), 166 deletions(-)
>
> Without the description, this patch has 800 lines of diff...
> It killed me, I don't have the energy to review patch #7 of this
> series after that, sorry.
> Consider splitting such mechanical patches next time. Here it
> could have been hw/block, hw/pci-host, hw/xen.

Probably my fault; I asked for less fine-grained splitting.

Finding a split of a tree-wide transformation that pleases everyone is
basically impossible.

The conversion to ERRP_AUTO_PROPAGATE() could be one patch per function,
but that would be excessive.

Vladimir chose to split along maintenance boundaries, so he can cc: the
right people on the right code.  I agree with the idea.  The difficulty
is which boundaries.  Our code is not partitioned into maintenance
domains.  Instead, we have overlapping sets.  Makes sense, because it
mirrors how we actually maintain it.

Because of that, a blind split guided by MAINTAINERS won't work well.  A
split that makes sense needs a bit of human judgement, too.

This part makes perfect sense to me from the cc: point of view: it's
Xen, the whole of Xen, and nothing but Xen.

I acknowledge that its size made it exhausting for you to review.  I
didn't expect that, probably because after having spent hours on
reviewing and improving the macro and the Coccinelle script, I know
exactly what to look for, and also consider the script trustworthy[*].

> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>

Thank you, much appreciated!


[*] I've learned not to trust Coccinelle 100%, ever.



From xen-devel-bounces@lists.xenproject.org Mon Jul 06 07:55:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jul 2020 07:55:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jsLyR-00074m-GX; Mon, 06 Jul 2020 07:55:35 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=aOOe=AR=virtuozzo.com=vsementsov@srs-us1.protection.inumbo.net>)
 id 1jsLyP-000741-G0
 for xen-devel@lists.xenproject.org; Mon, 06 Jul 2020 07:55:34 +0000
X-Inumbo-ID: 106114f2-bf5e-11ea-8c46-12813bfff9fa
Received: from EUR01-VE1-obe.outbound.protection.outlook.com (unknown
 [40.107.14.125]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 106114f2-bf5e-11ea-8c46-12813bfff9fa;
 Mon, 06 Jul 2020 07:55:31 +0000 (UTC)
Authentication-Results: redhat.com; dkim=none (message not signed)
 header.d=none;redhat.com; dmarc=none action=none header.from=virtuozzo.com;
Received: from AM7PR08MB5494.eurprd08.prod.outlook.com (2603:10a6:20b:dc::15)
 by AM6PR08MB5079.eurprd08.prod.outlook.com (2603:10a6:20b:e8::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3153.27; Mon, 6 Jul
 2020 07:55:30 +0000
Received: from AM7PR08MB5494.eurprd08.prod.outlook.com
 ([fe80::a408:2f0f:bc6c:d312]) by AM7PR08MB5494.eurprd08.prod.outlook.com
 ([fe80::a408:2f0f:bc6c:d312%4]) with mapi id 15.20.3153.029; Mon, 6 Jul 2020
 07:55:29 +0000
Subject: Re: [PATCH v11 8/8] xen: introduce ERRP_AUTO_PROPAGATE
To: =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@redhat.com>,
 qemu-devel@nongnu.org
References: <20200703090816.3295-1-vsementsov@virtuozzo.com>
 <20200703090816.3295-9-vsementsov@virtuozzo.com>
 <e2b4f10a-162c-ebb8-3232-381c4d820f9f@redhat.com>
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-ID: <be7bd5ba-8cee-3b9c-7869-95cdd37bdb5f@virtuozzo.com>
Date: Mon, 6 Jul 2020 10:55:28 +0300
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
In-Reply-To: <e2b4f10a-162c-ebb8-3232-381c4d820f9f@redhat.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: AM0PR04CA0006.eurprd04.prod.outlook.com
 (2603:10a6:208:122::19) To AM7PR08MB5494.eurprd08.prod.outlook.com
 (2603:10a6:20b:dc::15)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
Received: from [192.168.100.2] (185.215.60.58) by
 AM0PR04CA0006.eurprd04.prod.outlook.com (2603:10a6:208:122::19) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3153.20 via Frontend
 Transport; Mon, 6 Jul 2020 07:55:28 +0000
X-Originating-IP: [185.215.60.58]
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin Wolf <kwolf@redhat.com>, Stefano Stabellini <sstabellini@kernel.org>,
 qemu-block@nongnu.org, Paul Durrant <paul@xen.org>, groug@kaod.org,
 armbru@redhat.com, Stefan Hajnoczi <stefanha@redhat.com>,
 Anthony Perard <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org,
 Max Reitz <mreitz@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

04.07.2020 19:36, Philippe Mathieu-Daudé wrote:
> On 7/3/20 11:08 AM, Vladimir Sementsov-Ogievskiy wrote:
>> If we want to add some info to errp (by error_prepend() or
>> error_append_hint()), we must use the ERRP_AUTO_PROPAGATE macro.
>> Otherwise, this info will not be added when errp == &error_fatal
>> (the program will exit prior to the error_append_hint() or
>> error_prepend() call).  Fix such cases.
>>
>> If we want to check error after errp-function call, we need to
>> introduce local_err and then propagate it to errp. Instead, use
>> ERRP_AUTO_PROPAGATE macro, benefits are:
>> 1. No need of explicit error_propagate call
>> 2. No need of explicit local_err variable: use errp directly
>> 3. ERRP_AUTO_PROPAGATE leaves errp as is if it's not NULL or
>>     &error_fatal, this means that we don't break error_abort
>>     (we'll abort on error_set, not on error_propagate)
>>
>> This commit is generated by command
>>
>>      sed -n '/^X86 Xen CPUs$/,/^$/{s/^F: //p}' MAINTAINERS | \
>>      xargs git ls-files | grep '\.[hc]$' | \
>>      xargs spatch \
>>          --sp-file scripts/coccinelle/auto-propagated-errp.cocci \
>>          --macro-file scripts/cocci-macro-file.h \
>>          --in-place --no-show-diff --max-width 80
>>
>> Reported-by: Kevin Wolf <kwolf@redhat.com>
>> Reported-by: Greg Kurz <groug@kaod.org>
>> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
>> ---
>>   hw/block/dataplane/xen-block.c |  17 +++---
>>   hw/block/xen-block.c           | 102 ++++++++++++++-------------------
>>   hw/pci-host/xen_igd_pt.c       |   7 +--
>>   hw/xen/xen-backend.c           |   7 +--
>>   hw/xen/xen-bus.c               |  92 +++++++++++++----------------
>>   hw/xen/xen-host-pci-device.c   |  27 +++++----
>>   hw/xen/xen_pt.c                |  25 ++++----
>>   hw/xen/xen_pt_config_init.c    |  17 +++---
>>   8 files changed, 128 insertions(+), 166 deletions(-)
> 
> Without the description, this patch has 800 lines of diff...
> It killed me, I don't have the energy to review patch #7 of this
> series after that, sorry.

Sorry about that! I really do understand; take a look at Markus's
"[PATCH v2 00/44] Less clumsy error checking", which I'm currently trying
to review..

Still, the patch has existed in this form since
"[RFC v5 000/126] error: auto propagated local_err", where it was reviewed
by Anthony. I suggested splitting it then, but that was deemed unnecessary.
Unfortunately, I've had to drop the r-bs due to changes..

> Consider splitting such mechanical patches next time. Here it
> could have been hw/block, hw/pci-host, hw/xen.
> 
> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
> 

Thanks a lot!

-- 
Best regards,
Vladimir


From xen-devel-bounces@lists.xenproject.org Mon Jul 06 08:32:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jul 2020 08:32:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jsMXS-0002Ul-Jn; Mon, 06 Jul 2020 08:31:46 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Z/W2=AR=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jsMXQ-0002Ug-KX
 for xen-devel@lists.xenproject.org; Mon, 06 Jul 2020 08:31:44 +0000
X-Inumbo-ID: 1e4bdc3c-bf63-11ea-8c4b-12813bfff9fa
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1e4bdc3c-bf63-11ea-8c4b-12813bfff9fa;
 Mon, 06 Jul 2020 08:31:42 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1594024302;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:content-transfer-encoding:in-reply-to;
 bh=RyOGXfsJgDCHslM1KfVrkFxJMaQ9jioyueXpLCZk3NI=;
 b=CXXGeMPXbhLrCGkEEiYRAIye2HOoLYkH18IIdemCPdnfUy5NMLOYG0Gt
 yzTakVPJ7exWcjF5vPx2FpdNEooO0fUOvOC5eYjuCZXzDDJ2l4Ueik8ai
 jt+bmtTRWw5nEPY4ncwFprKNxixbfitZ1c9r5jKX+lvu1LLqa1a7eqjjh I=;
Authentication-Results: esa1.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: OGUUwOFAqzNS2aNQ4MGbr2wJBRm2ppRppaRvlBp7GnskfuqM5gPs44V8sXyReeq1dbSIWdzvhK
 XsF/IxJBH21XiuIQpTh348Z187pCKUlZOvRTNI0KrcL9RujbvtmvoYtNT9ISOGtiU2nqQNIPrh
 UTKnvXmNVThvwoZCGOjFgbZ2SpSmQ2hFLjImlYbqmPC53CWVDKGPNflvrSUHLpZdeEh8mMXs2/
 frOoB65hAFWqB5o6E7mZzXpQhQIwZQrb4ORNoKoulzTGSkjSR2o9ePGWAe1N8Lz3kLEdg/uDz0
 OXM=
X-SBRS: 2.7
X-MesageID: 21975158
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,318,1589256000"; d="scan'208";a="21975158"
Date: Mon, 6 Jul 2020 10:31:31 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: <paul@xen.org>
Subject: Re: vPT rework (and timer mode)
Message-ID: <20200706083131.GA735@Air-de-Roger>
References: <20200701090210.GN735@Air-de-Roger>
 <f89a158a-416e-1939-597a-075ff97f2b02@suse.com>
 <af13fa01-db36-784d-dfaf-b9905defc7fd@citrix.com>
 <007a01d65363$9ab7c1d0$d0274570$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <007a01d65363$9ab7c1d0$d0274570$@xen.org>
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: 'Andrew Cooper' <andrew.cooper3@citrix.com>, 'Wei Liu' <wl@xen.org>,
 'Jan Beulich' <jbeulich@suse.com>, xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Mon, Jul 06, 2020 at 08:03:50AM +0100, Paul Durrant wrote:
> > -----Original Message-----
> > From: Andrew Cooper <andrew.cooper3@citrix.com>
> > Sent: 03 July 2020 16:03
> > To: Jan Beulich <jbeulich@suse.com>; Roger Pau Monné <roger.pau@citrix.com>
> > Cc: xen-devel@lists.xenproject.org; Wei Liu <wl@xen.org>; Paul Durrant <paul@xen.org>
> > Subject: Re: vPT rework (and timer mode)
> > 
> > On 03/07/2020 15:50, Jan Beulich wrote:
> > > On 01.07.2020 11:02, Roger Pau Monné wrote:
> > >> It's my understanding that the purpose of pt_update_irq and
> > >> pt_intr_post is to attempt to implement the "delay for missed ticks"
> > >> mode, where Xen will accumulate timer interrupts if they cannot be
> > >> injected. As shown by the patch above, this is all broken when the
> > >> timer is added to a vCPU (pt->vcpu) different than the actual target
> > >> vCPU where the interrupt gets delivered (note this can also be a list
> > >> of vCPUs if routed from the IO-APIC using Fixed mode).
> > >>
> > >> I'm at lost at how to fix this so that virtual timers work properly
> > >> and we also keep the "delay for missed ticks" mode without doing a
> > >> massive rework and somehow keeping track of where injected interrupts
> > >> originated, which seems an overly complicated solution.
> > >>
> > >> My proposal hence would be to completely remove the timer_mode, and
> > >> just treat virtual timer interrupts as other interrupts, ie: they will
> > >> be injected from the callback (pt_timer_fn) and the vCPU(s) would be
> > >> kicked. Whether interrupts would get lost (ie: injected when a
> > >> previous one is still pending) depends on the contention on the
> > >> system. I'm not aware of any current OS that uses timer interrupts as
> > >> a way to track time. I think current OSes know the differences between
> > >> a timer counter and an event timer, and will use them appropriately.
> > > Fundamentally - why not, all the more so since this promises to be a
> > > simplification. The question we need to answer up front is whether
> > > we're happy to possibly break old OSes (presumably ones no-one
> > > ought to be using anymore these days, due to their support life
> > > cycles long having ended).
> > 
> > The various timer modes were all compatibility, and IIRC, mostly for
> > Windows XP and older which told time by counting the number of timer
> > interrupts.
> > 
> > Paul - you might remember better than me?
> 
> I think it is only quite recently that Windows has started favouring enlightened time sources over counting ticks. However, an admin may still turn all the viridian enlightenments off, so just dropping ticks will probably still cause time to drift backwards.

Even when not using the viridian enlightenments, shouldn't Windows rely
on emulated time counters (or the TSC) rather than counting ticks?

I guess I could give it a try with one of the emulated Windows versions
that we test on osstest.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Mon Jul 06 08:32:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jul 2020 08:32:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jsMXV-0002Ux-Rl; Mon, 06 Jul 2020 08:31:49 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=dTkW=AR=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jsMXU-0002Ug-P2
 for xen-devel@lists.xenproject.org; Mon, 06 Jul 2020 08:31:48 +0000
X-Inumbo-ID: 1f94e71e-bf63-11ea-8c4b-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1f94e71e-bf63-11ea-8c4b-12813bfff9fa;
 Mon, 06 Jul 2020 08:31:44 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id F3322AD5D;
 Mon,  6 Jul 2020 08:31:43 +0000 (UTC)
Subject: Re: [PATCH v5 06/11] x86/hvm: processor trace interface in HVM
To: =?UTF-8?Q?Micha=c5=82_Leszczy=c5=84ski?= <michal.leszczynski@cert.pl>
References: <cover.1593974333.git.michal.leszczynski@cert.pl>
 <cover.1593974333.git.michal.leszczynski@cert.pl>
 <a4833c8168e287f0caf1dc6f16ec5c054bd88b0a.1593974333.git.michal.leszczynski@cert.pl>
 <762195600.19745364.1593976285067.JavaMail.zimbra@cert.pl>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <8685426c-0b79-e967-dfce-e9d2e7d21401@suse.com>
Date: Mon, 6 Jul 2020 10:31:38 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <762195600.19745364.1593976285067.JavaMail.zimbra@cert.pl>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 tamas lengyel <tamas.lengyel@intel.com>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, luwei kang <luwei.kang@intel.com>,
 xen-devel@lists.xenproject.org,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 05.07.2020 21:11, Michał Leszczyński wrote:
> ----- On 5 Jul 2020 at 20:54, Michał Leszczyński michal.leszczynski@cert.pl wrote:
>> --- a/xen/arch/x86/domain.c
>> +++ b/xen/arch/x86/domain.c
>> @@ -2199,6 +2199,25 @@ int domain_relinquish_resources(struct domain *d)
>>                 altp2m_vcpu_disable_ve(v);
>>         }
>>
>> +        for_each_vcpu ( d, v )
>> +        {
>> +            unsigned int i;
>> +
>> +            if ( !v->vmtrace.pt_buf )
>> +                continue;
>> +
>> +            for ( i = 0; i < (v->domain->vmtrace_pt_size >> PAGE_SHIFT); i++ )
>> +            {
>> +                struct page_info *pg = mfn_to_page(
>> +                    mfn_add(page_to_mfn(v->vmtrace.pt_buf), i));
>> +                if ( (pg->count_info & PGC_count_mask) != 1 )
>> +                    return -EBUSY;
>> +            }
>> +
>> +            free_domheap_pages(v->vmtrace.pt_buf,
>> +                get_order_from_bytes(v->domain->vmtrace_pt_size));
> 
> 
> While this works, I don't feel that returning -EBUSY from this loop is a
> good solution. I would kindly ask for suggestions on this topic.

I'm sorry to ask, but with the previously given suggestions to mirror
existing code, why do you still need to play with this function? You
really shouldn't have a need to, just like e.g. the ioreq server page
handling code didn't.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jul 06 08:33:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jul 2020 08:33:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jsMYw-0002eF-6k; Mon, 06 Jul 2020 08:33:18 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=dTkW=AR=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jsMYv-0002e9-JH
 for xen-devel@lists.xenproject.org; Mon, 06 Jul 2020 08:33:17 +0000
X-Inumbo-ID: 56bcf1fa-bf63-11ea-b7bb-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 56bcf1fa-bf63-11ea-b7bb-bc764e2007e4;
 Mon, 06 Jul 2020 08:33:17 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 9FD84AD5D;
 Mon,  6 Jul 2020 08:33:16 +0000 (UTC)
Subject: Re: [PATCH v5 11/11] tools/proctrace: add proctrace tool
To: =?UTF-8?Q?Micha=c5=82_Leszczy=c5=84ski?= <michal.leszczynski@cert.pl>
References: <cover.1593974333.git.michal.leszczynski@cert.pl>
 <cover.1593974333.git.michal.leszczynski@cert.pl>
 <e0ac5422825ce307470256aab1652336d5179a9a.1593974333.git.michal.leszczynski@cert.pl>
 <983829150.19744505.1593975521301.JavaMail.zimbra@cert.pl>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <78e96f30-acf3-ad44-1488-62bf974bd83a@suse.com>
Date: Mon, 6 Jul 2020 10:33:15 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <983829150.19744505.1593975521301.JavaMail.zimbra@cert.pl>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: tamas lengyel <tamas.lengyel@intel.com>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>, luwei kang <luwei.kang@intel.com>,
 xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 05.07.2020 20:58, Michał Leszczyński wrote:
> ----- On 5 Jul 2020 at 20:55, Michał Leszczyński michal.leszczynski@cert.pl wrote:
>> --- /dev/null
>> +++ b/tools/proctrace/proctrace.c
>> +#include <stdlib.h>
>> +#include <stdio.h>
>> +#include <sys/mman.h>
>> +#include <signal.h>
>> +
>> +#include <xenctrl.h>
>> +#include <xen/xen.h>
>> +#include <xenforeignmemory.h>
>> +
>> +#define BUF_SIZE (16384 * XC_PAGE_SIZE)
> 
> I would like to discuss how we should retrieve the trace buffer size at
> runtime. Should there be a hypercall for it, or some extension to the
> acquire_resource logic?

Personally I'd prefer the latter, but the question is whether one can
be made in a backwards compatible way.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jul 06 08:42:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jul 2020 08:42:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jsMi5-0003ZE-4t; Mon, 06 Jul 2020 08:42:45 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Z/W2=AR=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jsMi4-0003Z9-8B
 for xen-devel@lists.xenproject.org; Mon, 06 Jul 2020 08:42:44 +0000
X-Inumbo-ID: a7ff5e12-bf64-11ea-bb8b-bc764e2007e4
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a7ff5e12-bf64-11ea-bb8b-bc764e2007e4;
 Mon, 06 Jul 2020 08:42:42 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1594024962;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:content-transfer-encoding:in-reply-to;
 bh=2J0WmcZk3jwW/nK+rCzs6kJlwALEZ725s3mSmlqGcMw=;
 b=gFn133RzSvKkrPbydcL7NdgfZ62cdhaN9um9IZ61KOcg/NU62ib2WUAn
 qL3BahrLB6PGxEdf31hQ269AceaFUGii0OsfBskMv+s+rJoVnIiyk0vY/
 EskJfSi03YfNFsyN0FJuUohhcrSIawc27r/qCV38FPE5h+NBjXuYBZrwb g=;
Authentication-Results: esa5.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: JC14DaW4V9aEFbDaIeCwGhKIavvvaawRBxEfABm6hD7BGJ+Uyxl1Y8CPriGJEYLsKeorhhJX/g
 7ZavBXeM1jTgVB/8pNjquSvOCURQnZpFM5KHt7AhzIeze+DaWBtjnawQJdXNZ9YgtuecrwlbzS
 3MvDGFRDWtWFd+O6coskgvF0023e016bilM96YyjgWvvs4pbKuxxl8T9PChHNaxwrCUUwHcy9S
 oU9/eRCw5cHyVumlEmShK6MmUZWcNEEL9e5sPpavxhEylPXbbbJ7ud2olYVT4KuVPTvwHlRSOg
 UsU=
X-SBRS: 2.7
X-MesageID: 21864110
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,318,1589256000"; d="scan'208";a="21864110"
Date: Mon, 6 Jul 2020 10:42:34 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: =?utf-8?Q?Micha=C5=82_Leszczy=C5=84ski?= <michal.leszczynski@cert.pl>
Subject: Re: [PATCH v5 06/11] x86/hvm: processor trace interface in HVM
Message-ID: <20200706084234.GB735@Air-de-Roger>
References: <cover.1593974333.git.michal.leszczynski@cert.pl>
 <a4833c8168e287f0caf1dc6f16ec5c054bd88b0a.1593974333.git.michal.leszczynski@cert.pl>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <a4833c8168e287f0caf1dc6f16ec5c054bd88b0a.1593974333.git.michal.leszczynski@cert.pl>
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 tamas.lengyel@intel.com, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, luwei.kang@intel.com,
 Jan Beulich <jbeulich@suse.com>, xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Sun, Jul 05, 2020 at 08:54:59PM +0200, Michał Leszczyński wrote:
> From: Michal Leszczynski <michal.leszczynski@cert.pl>
> 
> Implement necessary changes in common code/HVM to support
> processor trace features. Define vmtrace_pt_* API and
> implement trace buffer allocation/deallocation in common
> code.
> 
> Signed-off-by: Michal Leszczynski <michal.leszczynski@cert.pl>
> ---
>  xen/arch/x86/domain.c         | 19 +++++++++++++++++++
>  xen/common/domain.c           | 19 +++++++++++++++++++
>  xen/include/asm-x86/hvm/hvm.h | 20 ++++++++++++++++++++
>  xen/include/xen/sched.h       |  4 ++++
>  4 files changed, 62 insertions(+)
> 
> diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
> index fee6c3931a..79c9794408 100644
> --- a/xen/arch/x86/domain.c
> +++ b/xen/arch/x86/domain.c
> @@ -2199,6 +2199,25 @@ int domain_relinquish_resources(struct domain *d)
>                  altp2m_vcpu_disable_ve(v);
>          }
>  
> +        for_each_vcpu ( d, v )
> +        {
> +            unsigned int i;
> +
> +            if ( !v->vmtrace.pt_buf )
> +                continue;
> +
> +            for ( i = 0; i < (v->domain->vmtrace_pt_size >> PAGE_SHIFT); i++ )
> +            {
> +                struct page_info *pg = mfn_to_page(
> +                    mfn_add(page_to_mfn(v->vmtrace.pt_buf), i));
> +                if ( (pg->count_info & PGC_count_mask) != 1 )
> +                    return -EBUSY;
> +            }
> +
> +            free_domheap_pages(v->vmtrace.pt_buf,
> +                get_order_from_bytes(v->domain->vmtrace_pt_size));

This is racy as a control domain could take a reference between the
check and the freeing.

> +        }
> +
>          if ( is_pv_domain(d) )
>          {
>              for_each_vcpu ( d, v )
> diff --git a/xen/common/domain.c b/xen/common/domain.c
> index 25d3359c5b..f480c4e033 100644
> --- a/xen/common/domain.c
> +++ b/xen/common/domain.c
> @@ -137,6 +137,21 @@ static void vcpu_destroy(struct vcpu *v)
>      free_vcpu_struct(v);
>  }
>  
> +static int vmtrace_alloc_buffers(struct vcpu *v)
> +{
> +    struct page_info *pg;
> +    uint64_t size = v->domain->vmtrace_pt_size;
> +
> +    pg = alloc_domheap_pages(v->domain, get_order_from_bytes(size),
> +                             MEMF_no_refcount);
> +
> +    if ( !pg )
> +        return -ENOMEM;
> +
> +    v->vmtrace.pt_buf = pg;
> +    return 0;
> +}

I think we already agreed that you would use the same model as ioreq
servers, where a reference is taken on allocation, the pages are not
explicitly freed on domain destruction, and put_page_and_type is used
instead. Is there some reason why that model doesn't work in this
case?

If not, please see hvm_alloc_ioreq_mfn and hvm_free_ioreq_mfn.

Roger.


From xen-devel-bounces@lists.xenproject.org Mon Jul 06 08:46:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jul 2020 08:46:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jsMlq-0003je-MD; Mon, 06 Jul 2020 08:46:38 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=dTkW=AR=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jsMlp-0003jZ-2w
 for xen-devel@lists.xenproject.org; Mon, 06 Jul 2020 08:46:37 +0000
X-Inumbo-ID: 3320b464-bf65-11ea-bb8b-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3320b464-bf65-11ea-bb8b-bc764e2007e4;
 Mon, 06 Jul 2020 08:46:36 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id CAAE0B16E;
 Mon,  6 Jul 2020 08:46:35 +0000 (UTC)
Subject: Re: [PATCH v4 03/10] tools/libxl: add vmtrace_pt_size parameter
To: Julien Grall <julien@xen.org>
References: <cover.1593519420.git.michal.leszczynski@cert.pl>
 <5f4f4b1afa432258daff43f2dc8119b6a441fff4.1593519420.git.michal.leszczynski@cert.pl>
 <20200702090047.GX735@Air-de-Roger>
 <1505813895.18300396.1593707008144.JavaMail.zimbra@cert.pl>
 <20200703094438.GY735@Air-de-Roger>
 <b5335c2e-da13-28de-002b-e93dd68a0a11@suse.com>
 <20200703101120.GZ735@Air-de-Roger>
 <51ecaf40-8fb5-8454-7055-5af33a47152e@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <d9e604e9-acb7-17df-f0d1-7552dab526c7@suse.com>
Date: Mon, 6 Jul 2020 10:46:34 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <51ecaf40-8fb5-8454-7055-5af33a47152e@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 tamas lengyel <tamas.lengyel@intel.com>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Micha=c5=82_Leszczy=c5=84ski?= <michal.leszczynski@cert.pl>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, luwei kang <luwei.kang@intel.com>,
 Anthony PERARD <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 04.07.2020 19:23, Julien Grall wrote:
> Hi,
> 
> On 03/07/2020 11:11, Roger Pau Monné wrote:
>> On Fri, Jul 03, 2020 at 11:56:38AM +0200, Jan Beulich wrote:
>>> On 03.07.2020 11:44, Roger Pau Monné wrote:
>>>> On Thu, Jul 02, 2020 at 06:23:28PM +0200, Michał Leszczyński wrote:
>>>>> ----- On 2 Jul 2020 at 11:00, Roger Pau Monné roger.pau@citrix.com wrote:
>>>>>
>>>>>> On Tue, Jun 30, 2020 at 02:33:46PM +0200, Michał Leszczyński wrote:
>>>>>>> diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
>>>>>>> index 59bdc28c89..7b8289d436 100644
>>>>>>> --- a/xen/include/public/domctl.h
>>>>>>> +++ b/xen/include/public/domctl.h
>>>>>>> @@ -92,6 +92,7 @@ struct xen_domctl_createdomain {
>>>>>>>       uint32_t max_evtchn_port;
>>>>>>>       int32_t max_grant_frames;
>>>>>>>       int32_t max_maptrack_frames;
>>>>>>> +    uint8_t vmtrace_pt_order;
>>>>>>
>>>>>> I've been thinking about this, and even though this is a domctl (so
>>>>>> not a stable interface) we might want to consider using a size (or a
>>>>>> number of pages) here rather than an order. IPT also supports
>>>>>> TOPA mode (kind of a linked list of buffers) that would allow for
>>>>>> sizes not rounded to order boundaries to be used, since then only each
>>>>>> item in the linked list needs to be rounded to an order boundary, so
>>>>>> you could for example use three 4K pages in TOPA mode AFAICT.
>>>>>>
>>>>>> Roger.
>>>>>
>>>>> In previous versions it was "size", but it was requested to change it
>>>>> to "order" in order to shrink the field from uint64_t to
>>>>> uint8_t, because there is limited space in the xen_domctl_createdomain
>>>>> structure.
>>>>
>>>> It's likely I'm missing something here, but I wasn't aware
>>>> xen_domctl_createdomain had any constraints regarding its size. It's
>>>> currently 48 bytes, which seems fairly small.
>>>
>>> Additionally I would guess a uint32_t could do here, if the value
>>> passed was "number of pages" rather than "number of bytes"?
> Looking at the rest of the code, the toolstack accepts a 64-bit value,
> so this would lead to truncation of the buffer if it is bigger than 2^44
> bytes.
> 
> I agree such a buffer is unlikely, yet I still think we want to harden
> the code whenever we can. So the solution is either to check for
> truncation in libxl or to use a 64-bit field directly in the domctl.
> 
> My preference is the latter.
> 
>>
>> That could work; I'm not sure whether it needs to state, however, that
>> those will be 4K pages, since Arm can have a different minimum page size
>> IIRC (or perhaps that's already the assumption for all number-of-frames
>> fields). vmtrace_nr_frames seems fine to me.
> 
> The hypercall interface uses the same page granularity as the 
> hypervisor (i.e. 4KB).
> 
> While we already support guests using 64KB page granularity, it is 
> impossible to have a 64KB Arm hypervisor in the current state. You are 
> going to either break existing guests (if you switch to 64KB page 
> granularity for the hypercall ABI) or render them insecure (the minimum 
> mapping in the P2M would be 64KB).
> 
> DOMCTLs are not stable yet, so using a number of pages is OK. However, I 
> would strongly suggest using a number of bytes for any xl/libxl/stable 
> library interfaces, as this avoids confusion and is also more 
> future-proof.

If we can't settle on what "page size" means in the public interface
(which imo is embarrassing), then how about going with number of kb,
like other memory libxl controls do? (I guess using Mb, in line with
other config file controls, may end up being too coarse here.) This
would likely still allow for a 32-bit field to be wide enough.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jul 06 08:59:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jul 2020 08:59:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jsMxm-0004iE-19; Mon, 06 Jul 2020 08:58:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=bUWB=AR=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1jsMxk-0004i9-PN
 for xen-devel@lists.xenproject.org; Mon, 06 Jul 2020 08:58:56 +0000
X-Inumbo-ID: ec0b7134-bf66-11ea-bb8b-bc764e2007e4
Received: from mail-wr1-x436.google.com (unknown [2a00:1450:4864:20::436])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ec0b7134-bf66-11ea-bb8b-bc764e2007e4;
 Mon, 06 Jul 2020 08:58:56 +0000 (UTC)
Received: by mail-wr1-x436.google.com with SMTP id o11so39957699wrv.9
 for <xen-devel@lists.xenproject.org>; Mon, 06 Jul 2020 01:58:56 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
 :mime-version:content-transfer-encoding:thread-index
 :content-language;
 bh=CT+DD/fXzPNoLaHHSA10MB/yxeCwip5ti2swWOmLce0=;
 b=j5rMHMOsHl4Ap9tUpLJ4fvtmnJd7rVfmQ4kOSYrK89zyCVbCCIO9eit8tLrwm9V2Ya
 T7553C4aH2oTlpJ1w1d5iqfY1R86OBLuFE8mQ9AMcGEEUFrG0V2LzhuEoyxhHI9sxB/C
 xgx2ZUKjeQ3A0rFLaH6bx8QA2+jOtAKTtQeKDCjyFHaYYhHUpuzpc/ubIDd0BeUe+p7s
 4TgSZhCAh+2qK0kSmYeL1lECkazd3lGrCFjzFrxn6sKRRigEOL0g3Rx3UIicJzAcLtrk
 JcCejstrHNtgVD9TOM1Yq2Fz2roHT3eMo1LRlRQjy7o4IDai0k+8PL+Rgj/Vsm4feZ88
 coFw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
 :subject:date:message-id:mime-version:content-transfer-encoding
 :thread-index:content-language;
 bh=CT+DD/fXzPNoLaHHSA10MB/yxeCwip5ti2swWOmLce0=;
 b=TmQ9F6/tcwV3+NZ7iBI7Ycguuio1fg/rPuAqugtwTpWf+8FGSKpmM0z0PU5t1hhK7H
 b/MOqI7XJAHUpgIsS7oqvvR4pQesXKcVYzoXJQ3XY8gnKgnOBJ6vcGeEWDUu15r0cMRu
 c76KEE7P1fKVZ3e5EgfhmxlQpg9cbGDnPWMrrBVIATjyzcB2trjiBLpcK7i/Kn04mV/W
 JjpMuylJGbMTQTB0UjHahoiTl+sTUGrJtNyWfTnK4KHkBWy82PF/s0S5KVFQTNz5iG+L
 z4Ch+/o2HqfPfmOODRaZ0n1lsupiIineH/A0tqFJvwsReVocuLbcP3ckjCFYkl4aN3Hr
 3oWg==
X-Gm-Message-State: AOAM5309+9gb1U0Fsl1Wp2CoK4xn90Fe2A4ZKSawGH+6kfho3x5rGd3j
 IuLpg5LBvXNkbXdngmKxeyc=
X-Google-Smtp-Source: ABdhPJwQfwCCHY/BjH1xQMiwDEzhh0Cw7WoPu30xrPvfPmaC/+mcmD3bsxtyU1V9zws8Za6uaD0Z2A==
X-Received: by 2002:adf:e908:: with SMTP id f8mr46683602wrm.3.1594025935274;
 Mon, 06 Jul 2020 01:58:55 -0700 (PDT)
Received: from CBGR90WXYV0 (54-240-197-232.amazon.com. [54.240.197.232])
 by smtp.gmail.com with ESMTPSA id u186sm23195974wmu.10.2020.07.06.01.58.54
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Mon, 06 Jul 2020 01:58:54 -0700 (PDT)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
To: =?utf-8?Q?'Roger_Pau_Monn=C3=A9'?= <roger.pau@citrix.com>
References: <20200701090210.GN735@Air-de-Roger>
 <f89a158a-416e-1939-597a-075ff97f2b02@suse.com>
 <af13fa01-db36-784d-dfaf-b9905defc7fd@citrix.com>
 <007a01d65363$9ab7c1d0$d0274570$@xen.org> <20200706083131.GA735@Air-de-Roger>
In-Reply-To: <20200706083131.GA735@Air-de-Roger>
Subject: RE: vPT rework (and timer mode)
Date: Mon, 6 Jul 2020 09:58:53 +0100
Message-ID: <007c01d65373$ad3c4140$07b4c3c0$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="utf-8"
Content-Transfer-Encoding: quoted-printable
X-Mailer: Microsoft Outlook 16.0
Thread-Index: AQG+XT3UL7mtyhdM6X8x294LjZ1FmQGlQU2IApzoD8QCa8XKyQFu21K2qOlFFjA=
Content-Language: en-gb
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Reply-To: paul@xen.org
Cc: 'Andrew Cooper' <andrew.cooper3@citrix.com>, 'Wei Liu' <wl@xen.org>,
 'Jan Beulich' <jbeulich@suse.com>, xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> -----Original Message-----
> From: Roger Pau Monné <roger.pau@citrix.com>
> Sent: 06 July 2020 09:32
> To: paul@xen.org
> Cc: 'Andrew Cooper' <andrew.cooper3@citrix.com>; 'Jan Beulich' <jbeulich@suse.com>; xen-devel@lists.xenproject.org; 'Wei Liu' <wl@xen.org>
> Subject: Re: vPT rework (and timer mode)
>
> On Mon, Jul 06, 2020 at 08:03:50AM +0100, Paul Durrant wrote:
> > > -----Original Message-----
> > > From: Andrew Cooper <andrew.cooper3@citrix.com>
> > > Sent: 03 July 2020 16:03
> > > To: Jan Beulich <jbeulich@suse.com>; Roger Pau Monné <roger.pau@citrix.com>
> > > Cc: xen-devel@lists.xenproject.org; Wei Liu <wl@xen.org>; Paul Durrant <paul@xen.org>
> > > Subject: Re: vPT rework (and timer mode)
> > >
> > > On 03/07/2020 15:50, Jan Beulich wrote:
> > > > On 01.07.2020 11:02, Roger Pau Monné wrote:
> > > >> It's my understanding that the purpose of pt_update_irq and
> > > >> pt_intr_post is to attempt to implement the "delay for missed ticks"
> > > >> mode, where Xen will accumulate timer interrupts if they cannot be
> > > >> injected. As shown by the patch above, this is all broken when the
> > > >> timer is added to a vCPU (pt->vcpu) different than the actual target
> > > >> vCPU where the interrupt gets delivered (note this can also be a list
> > > >> of vCPUs if routed from the IO-APIC using Fixed mode).
> > > >>
> > > >> I'm at a loss at how to fix this so that virtual timers work properly
> > > >> and we also keep the "delay for missed ticks" mode without doing a
> > > >> massive rework and somehow keeping track of where injected interrupts
> > > >> originated, which seems an overly complicated solution.
> > > >>
> > > >> My proposal hence would be to completely remove the timer_mode, and
> > > >> just treat virtual timer interrupts as other interrupts, ie: they will
> > > >> be injected from the callback (pt_timer_fn) and the vCPU(s) would be
> > > >> kicked. Whether interrupts would get lost (ie: injected when a
> > > >> previous one is still pending) depends on the contention on the
> > > >> system. I'm not aware of any current OS that uses timer interrupts as
> > > >> a way to track time. I think current OSes know the differences between
> > > >> a timer counter and an event timer, and will use them appropriately.
> > > > Fundamentally - why not, the more that this promises to be a
> > > > simplification. The question we need to answer up front is whether
> > > > we're happy to possibly break old OSes (presumably ones no-one
> > > > ought to be using anymore these days, due to their support life
> > > > cycles long having ended).
> > >
> > > The various timer modes were all compatibility, and IIRC, mostly for
> > > Windows XP and older which told time by counting the number of timer
> > > interrupts.
> > >
> > > Paul - you might remember better than me?
> >
> > I think it is only quite recently that Windows has started favouring
> > enlightened time sources rather than counting ticks, but an admin may
> > still turn all the viridian enlightenments off, so just dropping ticks
> > will probably still cause time to drift backwards.
>
> Even when not using the viridian enlightenments, Windows should rely
> on emulated time counters (or the TSC) rather than counting ticks?

Microsoft implementations and "sensible" are two different things.

>
> I guess I could give it a try with one of the emulated Windows versions
> that we test on osstest.
>

Pick an old-ish version. I think osstest has a copy of Windows 7.

Cheers,

  Paul


> Thanks, Roger.



From xen-devel-bounces@lists.xenproject.org Mon Jul 06 09:48:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jul 2020 09:48:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jsNj0-0000RN-Vu; Mon, 06 Jul 2020 09:47:46 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=G1NU=AR=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jsNiy-0000RE-RT
 for xen-devel@lists.xenproject.org; Mon, 06 Jul 2020 09:47:44 +0000
X-Inumbo-ID: bd287e82-bf6d-11ea-8c54-12813bfff9fa
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id bd287e82-bf6d-11ea-8c54-12813bfff9fa;
 Mon, 06 Jul 2020 09:47:44 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1594028864;
 h=subject:to:cc:references:from:message-id:date:
 mime-version:in-reply-to:content-transfer-encoding;
 bh=sqnbfKB3XZ7n+12b9GL9SpftWYstMSgku8osYhTMpDk=;
 b=ZOT1s780nysRFzLNCBVL55Dj9OFukSljbWvnpRoLLJ/azEEvsUzcK6Ub
 Tnny0OWP0Q2X10glib3fEPd2HBJEEtSJZGCVJJnVD4wOzA5Pi/pcyG1Sx
 RbGuCuEop1PGrICNdK+ewYQ5niTuoD/P3kiBsmez0fPajPy1xQWl9jME6 0=;
Authentication-Results: esa1.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: U8tFnO2KivVWJS3ZndysidKi+4L8DKykSPX4JiHs5EVVxoHlplpbbXp9Sig1HxQFlafNbmFV6T
 Ek4pDTZUfaApp5KMiRAGLjGvOp6okf6p8RyjpLVN0/yNedB3A1kxWM8bQIfINJ0OVdcVOoVKok
 hEtL3OLrLYkvKYfxi6/OotOJy71jCs8gu+3caW6i7OJex7S7OiMdAip/m8w8d2/ur02xkFTamV
 FGOpfxtw1TZUDtNuUcq4Cpcnek/zNEmmiCvExl+acNitFrKYB4Dr1Btzgol5akdHNwH8LVlBik
 g9U=
X-SBRS: 2.7
X-MesageID: 21979292
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,318,1589256000"; d="scan'208";a="21979292"
Subject: Re: [PATCH v5 11/11] tools/proctrace: add proctrace tool
To: Jan Beulich <jbeulich@suse.com>, =?UTF-8?Q?Micha=c5=82_Leszczy=c5=84ski?=
 <michal.leszczynski@cert.pl>
References: <cover.1593974333.git.michal.leszczynski@cert.pl>
 <cover.1593974333.git.michal.leszczynski@cert.pl>
 <e0ac5422825ce307470256aab1652336d5179a9a.1593974333.git.michal.leszczynski@cert.pl>
 <983829150.19744505.1593975521301.JavaMail.zimbra@cert.pl>
 <78e96f30-acf3-ad44-1488-62bf974bd83a@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <d1948f7a-22ed-c525-d7ac-35ea98929a01@citrix.com>
Date: Mon, 6 Jul 2020 10:47:38 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <78e96f30-acf3-ad44-1488-62bf974bd83a@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, tamas lengyel <tamas.lengyel@intel.com>,
 luwei kang <luwei.kang@intel.com>, Ian Jackson <ian.jackson@eu.citrix.com>,
 Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 06/07/2020 09:33, Jan Beulich wrote:
> On 05.07.2020 20:58, Michał Leszczyński wrote:
>> ----- On 5 Jul 2020 at 20:55, Michał Leszczyński michal.leszczynski@cert.pl wrote:
>>> --- /dev/null
>>> +++ b/tools/proctrace/proctrace.c
>>> +#include <stdlib.h>
>>> +#include <stdio.h>
>>> +#include <sys/mman.h>
>>> +#include <signal.h>
>>> +
>>> +#include <xenctrl.h>
>>> +#include <xen/xen.h>
>>> +#include <xenforeignmemory.h>
>>> +
>>> +#define BUF_SIZE (16384 * XC_PAGE_SIZE)
>> I would like to discuss how we should retrieve the trace buffer size
>> at runtime. Should there be a hypercall for it, or some extension to the
>> acquire_resource logic?
> Personally I'd prefer the latter, but the question is whether one can
> be made in a backwards compatible way.

I already covered this in v4.

~Andrew


From xen-devel-bounces@lists.xenproject.org Mon Jul 06 10:09:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jul 2020 10:09:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jsO4K-0002GJ-M6; Mon, 06 Jul 2020 10:09:48 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=YXjd=AR=cert.pl=michall@srs-us1.protection.inumbo.net>)
 id 1jsO4J-0002GE-KD
 for xen-devel@lists.xenproject.org; Mon, 06 Jul 2020 10:09:47 +0000
X-Inumbo-ID: ceb9525e-bf70-11ea-8c57-12813bfff9fa
Received: from bagnar.nask.net.pl (unknown [195.187.242.196])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ceb9525e-bf70-11ea-8c57-12813bfff9fa;
 Mon, 06 Jul 2020 10:09:42 +0000 (UTC)
Received: from bagnar.nask.net.pl (unknown [172.16.9.10])
 by bagnar.nask.net.pl (Postfix) with ESMTP id A5DA3A1B9C;
 Mon,  6 Jul 2020 12:09:40 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by bagnar.nask.net.pl (Postfix) with ESMTP id 93F56A1BD5;
 Mon,  6 Jul 2020 12:09:39 +0200 (CEST)
Received: from bagnar.nask.net.pl ([127.0.0.1])
 by localhost (bagnar.nask.net.pl [127.0.0.1]) (amavisd-new, port 10032)
 with ESMTP id oYHSO-5wdzop; Mon,  6 Jul 2020 12:09:39 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by bagnar.nask.net.pl (Postfix) with ESMTP id 062E1A1D68;
 Mon,  6 Jul 2020 12:09:39 +0200 (CEST)
X-Virus-Scanned: amavisd-new at bagnar.nask.net.pl
Received: from bagnar.nask.net.pl ([127.0.0.1])
 by localhost (bagnar.nask.net.pl [127.0.0.1]) (amavisd-new, port 10026)
 with ESMTP id 82akefxhuzMI; Mon,  6 Jul 2020 12:09:38 +0200 (CEST)
Received: from belindir.nask.net.pl (belindir-ext.nask.net.pl
 [195.187.242.210])
 by bagnar.nask.net.pl (Postfix) with ESMTP id CE771A1BD5;
 Mon,  6 Jul 2020 12:09:38 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by belindir.nask.net.pl (Postfix) with ESMTP id B572A20213;
 Mon,  6 Jul 2020 12:09:08 +0200 (CEST)
Received: from belindir.nask.net.pl ([127.0.0.1])
 by localhost (belindir.nask.net.pl [127.0.0.1]) (amavisd-new, port 10032)
 with ESMTP id 7Hbydrl07Y6H; Mon,  6 Jul 2020 12:09:03 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by belindir.nask.net.pl (Postfix) with ESMTP id 2ACD82225B;
 Mon,  6 Jul 2020 12:09:03 +0200 (CEST)
X-Virus-Scanned: amavisd-new at belindir.nask.net.pl
Received: from belindir.nask.net.pl ([127.0.0.1])
 by localhost (belindir.nask.net.pl [127.0.0.1]) (amavisd-new, port 10026)
 with ESMTP id kZUPg1xi0CTH; Mon,  6 Jul 2020 12:09:03 +0200 (CEST)
Received: from belindir.nask.net.pl (belindir.nask.net.pl [172.16.10.10])
 by belindir.nask.net.pl (Postfix) with ESMTP id 0779E22000;
 Mon,  6 Jul 2020 12:09:03 +0200 (CEST)
Date: Mon, 6 Jul 2020 12:09:02 +0200 (CEST)
From: =?utf-8?Q?Micha=C5=82_Leszczy=C5=84ski?= <michal.leszczynski@cert.pl>
To: Roger Pau =?utf-8?Q?Monn=C3=A9?= <roger.pau@citrix.com>
Message-ID: <212702848.20024300.1594030142855.JavaMail.zimbra@cert.pl>
In-Reply-To: <20200706084234.GB735@Air-de-Roger>
References: <cover.1593974333.git.michal.leszczynski@cert.pl>
 <a4833c8168e287f0caf1dc6f16ec5c054bd88b0a.1593974333.git.michal.leszczynski@cert.pl>
 <20200706084234.GB735@Air-de-Roger>
Subject: Re: [PATCH v5 06/11] x86/hvm: processor trace interface in HVM
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
X-Originating-IP: [172.16.10.10]
X-Mailer: Zimbra 8.6.0_GA_1194 (ZimbraWebClient - FF78 (Linux)/8.6.0_GA_1194)
Thread-Topic: x86/hvm: processor trace interface in HVM
Thread-Index: kKPsq4Ntrkk4nefrR9JmowrKpzS5XQ==
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 tamas lengyel <tamas.lengyel@intel.com>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, luwei kang <luwei.kang@intel.com>,
 Jan Beulich <jbeulich@suse.com>, xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

----- On 6 Jul 2020 at 10:42, Roger Pau Monné roger.pau@citrix.com wrote:

> On Sun, Jul 05, 2020 at 08:54:59PM +0200, Michał Leszczyński wrote:
>> From: Michal Leszczynski <michal.leszczynski@cert.pl>
>>
>> Implement necessary changes in common code/HVM to support
>> processor trace features. Define vmtrace_pt_* API and
>> implement trace buffer allocation/deallocation in common
>> code.
>>
>> Signed-off-by: Michal Leszczynski <michal.leszczynski@cert.pl>
>> ---
>>  xen/arch/x86/domain.c         | 19 +++++++++++++++++++
>>  xen/common/domain.c           | 19 +++++++++++++++++++
>>  xen/include/asm-x86/hvm/hvm.h | 20 ++++++++++++++++++++
>>  xen/include/xen/sched.h       |  4 ++++
>>  4 files changed, 62 insertions(+)
>>
>> diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
>> index fee6c3931a..79c9794408 100644
>> --- a/xen/arch/x86/domain.c
>> +++ b/xen/arch/x86/domain.c
>> @@ -2199,6 +2199,25 @@ int domain_relinquish_resources(struct domain *d)
>>                  altp2m_vcpu_disable_ve(v);
>>          }
>>
>> +        for_each_vcpu ( d, v )
>> +        {
>> +            unsigned int i;
>> +
>> +            if ( !v->vmtrace.pt_buf )
>> +                continue;
>> +
>> +            for ( i = 0; i < (v->domain->vmtrace_pt_size >> PAGE_SHIFT); i++ )
>> +            {
>> +                struct page_info *pg = mfn_to_page(
>> +                    mfn_add(page_to_mfn(v->vmtrace.pt_buf), i));
>> +                if ( (pg->count_info & PGC_count_mask) != 1 )
>> +                    return -EBUSY;
>> +            }
>> +
>> +            free_domheap_pages(v->vmtrace.pt_buf,
>> +                get_order_from_bytes(v->domain->vmtrace_pt_size));
>
> This is racy as a control domain could take a reference between the
> check and the freeing.
>
>> +        }
>> +
>>          if ( is_pv_domain(d) )
>>          {
>>              for_each_vcpu ( d, v )
>> diff --git a/xen/common/domain.c b/xen/common/domain.c
>> index 25d3359c5b..f480c4e033 100644
>> --- a/xen/common/domain.c
>> +++ b/xen/common/domain.c
>> @@ -137,6 +137,21 @@ static void vcpu_destroy(struct vcpu *v)
>>      free_vcpu_struct(v);
>>  }
>>
>> +static int vmtrace_alloc_buffers(struct vcpu *v)
>> +{
>> +    struct page_info *pg;
>> +    uint64_t size = v->domain->vmtrace_pt_size;
>> +
>> +    pg = alloc_domheap_pages(v->domain, get_order_from_bytes(size),
>> +                             MEMF_no_refcount);
>> +
>> +    if ( !pg )
>> +        return -ENOMEM;
>> +
>> +    v->vmtrace.pt_buf = pg;
>> +    return 0;
>> +}
>
> I think we already agreed that you would use the same model as ioreq
> servers, where a reference is taken on allocation and then the pages
> are not explicitly freed on domain destruction and put_page_and_type
> is used. Is there some reason why that model doesn't work in this
> case?
>
> If not, please see hvm_alloc_ioreq_mfn and hvm_free_ioreq_mfn.
>
> Roger.


Ok, I've got it, will do. Thanks for pointing out the examples.


One thing that is still confusing to me is the meaning of the
MEMF_no_refcount flag.

In hvm_{alloc,free}_ioreq_mfn the memory is allocated
explicitly but freed just by dropping the reference, so
I guess it's automatically detected that the refcount has reached 0
and the page should be freed? If so, why is the flag named "no refcount"?


Best regards,
Michał Leszczyński
CERT Polska


From xen-devel-bounces@lists.xenproject.org Mon Jul 06 10:14:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jul 2020 10:14:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jsO8U-00032x-94; Mon, 06 Jul 2020 10:14:06 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=YXjd=AR=cert.pl=michall@srs-us1.protection.inumbo.net>)
 id 1jsO8T-00032s-5U
 for xen-devel@lists.xenproject.org; Mon, 06 Jul 2020 10:14:05 +0000
X-Inumbo-ID: 6adeea18-bf71-11ea-8c58-12813bfff9fa
Received: from bagnar.nask.net.pl (unknown [195.187.242.196])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6adeea18-bf71-11ea-8c58-12813bfff9fa;
 Mon, 06 Jul 2020 10:14:04 +0000 (UTC)
Received: from bagnar.nask.net.pl (unknown [172.16.9.10])
 by bagnar.nask.net.pl (Postfix) with ESMTP id 0522DA1A26;
 Mon,  6 Jul 2020 12:14:03 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by bagnar.nask.net.pl (Postfix) with ESMTP id E6CB3A1B27;
 Mon,  6 Jul 2020 12:14:01 +0200 (CEST)
Received: from bagnar.nask.net.pl ([127.0.0.1])
 by localhost (bagnar.nask.net.pl [127.0.0.1]) (amavisd-new, port 10032)
 with ESMTP id KcEDN4c_ZRPT; Mon,  6 Jul 2020 12:14:01 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by bagnar.nask.net.pl (Postfix) with ESMTP id 32959A1B28;
 Mon,  6 Jul 2020 12:14:01 +0200 (CEST)
X-Virus-Scanned: amavisd-new at bagnar.nask.net.pl
Received: from bagnar.nask.net.pl ([127.0.0.1])
 by localhost (bagnar.nask.net.pl [127.0.0.1]) (amavisd-new, port 10026)
 with ESMTP id NEeJNtNS_Ked; Mon,  6 Jul 2020 12:14:01 +0200 (CEST)
Received: from belindir.nask.net.pl (belindir-ext.nask.net.pl
 [195.187.242.210])
 by bagnar.nask.net.pl (Postfix) with ESMTP id 05487A1A26;
 Mon,  6 Jul 2020 12:14:01 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by belindir.nask.net.pl (Postfix) with ESMTP id 49E65214C4;
 Mon,  6 Jul 2020 12:13:27 +0200 (CEST)
Received: from belindir.nask.net.pl ([127.0.0.1])
 by localhost (belindir.nask.net.pl [127.0.0.1]) (amavisd-new, port 10032)
 with ESMTP id 0mCTzcvk2y-m; Mon,  6 Jul 2020 12:13:21 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by belindir.nask.net.pl (Postfix) with ESMTP id C6AC8218F8;
 Mon,  6 Jul 2020 12:13:21 +0200 (CEST)
X-Virus-Scanned: amavisd-new at belindir.nask.net.pl
Received: from belindir.nask.net.pl ([127.0.0.1])
 by localhost (belindir.nask.net.pl [127.0.0.1]) (amavisd-new, port 10026)
 with ESMTP id bKsfJBvLaDHp; Mon,  6 Jul 2020 12:13:21 +0200 (CEST)
Received: from belindir.nask.net.pl (belindir.nask.net.pl [172.16.10.10])
 by belindir.nask.net.pl (Postfix) with ESMTP id 9A1A3215FB;
 Mon,  6 Jul 2020 12:13:21 +0200 (CEST)
Date: Mon, 6 Jul 2020 12:13:21 +0200 (CEST)
From: =?utf-8?Q?Micha=C5=82_Leszczy=C5=84ski?= <michal.leszczynski@cert.pl>
To: xen-devel@lists.xenproject.org
Message-ID: <2097446473.20028399.1594030401534.JavaMail.zimbra@cert.pl>
In-Reply-To: <5d52b37e391a4165dc3775f77a621d34a33d22c2.1593974333.git.michal.leszczynski@cert.pl>
References: <cover.1593974333.git.michal.leszczynski@cert.pl>
 <cover.1593974333.git.michal.leszczynski@cert.pl>
 <5d52b37e391a4165dc3775f77a621d34a33d22c2.1593974333.git.michal.leszczynski@cert.pl>
Subject: Re: [PATCH v5 04/11] common: add vmtrace_pt_size domain parameter
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
X-Originating-IP: [172.16.10.10]
X-Mailer: Zimbra 8.6.0_GA_1194 (ZimbraWebClient - FF78 (Linux)/8.6.0_GA_1194)
Thread-Topic: common: add vmtrace_pt_size domain parameter
Thread-Index: X8jw5uxalIiqVyGnDoKGxt48USyScg==
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 luwei kang <luwei.kang@intel.com>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 tamas lengyel <tamas.lengyel@intel.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

----- On 5 Jul 2020 at 20:54, Michał Leszczyński michal.leszczynski@cert.pl wrote:

> From: Michal Leszczynski <michal.leszczynski@cert.pl>
>
> Add vmtrace_pt_size domain parameter in live domain and
> vmtrace_pt_order parameter in xen_domctl_createdomain.
>=20
> Signed-off-by: Michal Leszczynski <michal.leszczynski@cert.pl>
> ---
> xen/common/domain.c         | 12 ++++++++++++
> xen/include/public/domctl.h |  1 +
> xen/include/xen/sched.h     |  4 ++++
> 3 files changed, 17 insertions(+)
>
> diff --git a/xen/common/domain.c b/xen/common/domain.c
> index a45cf023f7..25d3359c5b 100644
> --- a/xen/common/domain.c
> +++ b/xen/common/domain.c
> @@ -338,6 +338,12 @@ static int sanitise_domain_config(struct xen_domctl_createdomain *config)
>         return -EINVAL;
>     }
>
> +    if ( config->vmtrace_pt_order && !vmtrace_supported )
> +    {
> +        dprintk(XENLOG_INFO, "Processor tracing is not supported\n");
> +        return -EINVAL;
> +    }
> +
>     return arch_sanitise_domain_config(config);
> }
>
> @@ -443,6 +449,12 @@ struct domain *domain_create(domid_t domid,
>         d->nr_pirqs = min(d->nr_pirqs, nr_irqs);
>
>         radix_tree_init(&d->pirq_tree);
> +
> +        if ( config->vmtrace_pt_order )
> +        {
> +            uint32_t shift_val = config->vmtrace_pt_order + PAGE_SHIFT;
> +            d->vmtrace_pt_size = (1ULL << shift_val);
> +        }
>     }
>
>     if ( (err = arch_domain_create(d, config)) != 0 )
> diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
> index 59bdc28c89..7b8289d436 100644
> --- a/xen/include/public/domctl.h
> +++ b/xen/include/public/domctl.h
> @@ -92,6 +92,7 @@ struct xen_domctl_createdomain {
>     uint32_t max_evtchn_port;
>     int32_t max_grant_frames;
>     int32_t max_maptrack_frames;
> +    uint8_t vmtrace_pt_order;
>
>     struct xen_arch_domainconfig arch;
> };
> diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
> index ac53519d7f..48f0a61bbd 100644
> --- a/xen/include/xen/sched.h
> +++ b/xen/include/xen/sched.h
> @@ -457,6 +457,10 @@ struct domain
>     unsigned    pbuf_idx;
>     spinlock_t  pbuf_lock;
>
> +    /* Used by vmtrace features */
> +    spinlock_t  vmtrace_lock;
> +    uint64_t    vmtrace_pt_size;
> +
>     /* OProfile support. */
>     struct xenoprof *xenoprof;
>=20
> --
> 2.17.1


Just a note to myself: in v4 Jan suggested that we should go with
"number of kB" instead of "number of bytes", and that the type could
be uint32_t.

I will modify it that way in the next version.


Best regards,
Michał Leszczyński
CERT Polska


From xen-devel-bounces@lists.xenproject.org Mon Jul 06 10:19:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jul 2020 10:19:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jsODI-0003EJ-TV; Mon, 06 Jul 2020 10:19:04 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=YXjd=AR=cert.pl=michall@srs-us1.protection.inumbo.net>)
 id 1jsODH-0003EE-57
 for xen-devel@lists.xenproject.org; Mon, 06 Jul 2020 10:19:03 +0000
X-Inumbo-ID: 1ca74b8c-bf72-11ea-8c58-12813bfff9fa
Received: from bagnar.nask.net.pl (unknown [195.187.242.196])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1ca74b8c-bf72-11ea-8c58-12813bfff9fa;
 Mon, 06 Jul 2020 10:19:02 +0000 (UTC)
Received: from bagnar.nask.net.pl (unknown [172.16.9.10])
 by bagnar.nask.net.pl (Postfix) with ESMTP id 45042A1AAF;
 Mon,  6 Jul 2020 12:19:01 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by bagnar.nask.net.pl (Postfix) with ESMTP id 2D22FA1A0A;
 Mon,  6 Jul 2020 12:19:00 +0200 (CEST)
Received: from bagnar.nask.net.pl ([127.0.0.1])
 by localhost (bagnar.nask.net.pl [127.0.0.1]) (amavisd-new, port 10032)
 with ESMTP id V8pAvAeiI1Td; Mon,  6 Jul 2020 12:18:59 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by bagnar.nask.net.pl (Postfix) with ESMTP id A32C0A1AAF;
 Mon,  6 Jul 2020 12:18:59 +0200 (CEST)
X-Virus-Scanned: amavisd-new at bagnar.nask.net.pl
Received: from bagnar.nask.net.pl ([127.0.0.1])
 by localhost (bagnar.nask.net.pl [127.0.0.1]) (amavisd-new, port 10026)
 with ESMTP id zRw0KEerLma7; Mon,  6 Jul 2020 12:18:59 +0200 (CEST)
Received: from belindir.nask.net.pl (belindir-ext.nask.net.pl
 [195.187.242.210])
 by bagnar.nask.net.pl (Postfix) with ESMTP id 7FA84A1A0A;
 Mon,  6 Jul 2020 12:18:59 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by belindir.nask.net.pl (Postfix) with ESMTP id 6F72A22367;
 Mon,  6 Jul 2020 12:18:29 +0200 (CEST)
Received: from belindir.nask.net.pl ([127.0.0.1])
 by localhost (belindir.nask.net.pl [127.0.0.1]) (amavisd-new, port 10032)
 with ESMTP id J27zcxqFVr_q; Mon,  6 Jul 2020 12:18:23 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by belindir.nask.net.pl (Postfix) with ESMTP id D6FD321756;
 Mon,  6 Jul 2020 12:18:23 +0200 (CEST)
X-Virus-Scanned: amavisd-new at belindir.nask.net.pl
Received: from belindir.nask.net.pl ([127.0.0.1])
 by localhost (belindir.nask.net.pl [127.0.0.1]) (amavisd-new, port 10026)
 with ESMTP id kYyNzTsQF1cw; Mon,  6 Jul 2020 12:18:23 +0200 (CEST)
Received: from belindir.nask.net.pl (belindir.nask.net.pl [172.16.10.10])
 by belindir.nask.net.pl (Postfix) with ESMTP id B872720C59;
 Mon,  6 Jul 2020 12:18:23 +0200 (CEST)
Date: Mon, 6 Jul 2020 12:18:23 +0200 (CEST)
From: =?utf-8?Q?Micha=C5=82_Leszczy=C5=84ski?= <michal.leszczynski@cert.pl>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <661559454.20033618.1594030703652.JavaMail.zimbra@cert.pl>
In-Reply-To: <d1948f7a-22ed-c525-d7ac-35ea98929a01@citrix.com>
References: <cover.1593974333.git.michal.leszczynski@cert.pl>
 <cover.1593974333.git.michal.leszczynski@cert.pl>
 <e0ac5422825ce307470256aab1652336d5179a9a.1593974333.git.michal.leszczynski@cert.pl>
 <983829150.19744505.1593975521301.JavaMail.zimbra@cert.pl>
 <78e96f30-acf3-ad44-1488-62bf974bd83a@suse.com>
 <d1948f7a-22ed-c525-d7ac-35ea98929a01@citrix.com>
Subject: Re: [PATCH v5 11/11] tools/proctrace: add proctrace tool
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
X-Originating-IP: [172.16.10.10]
X-Mailer: Zimbra 8.6.0_GA_1194 (ZimbraWebClient - FF78 (Linux)/8.6.0_GA_1194)
Thread-Topic: tools/proctrace: add proctrace tool
Thread-Index: J97tXiE2N0uxaGqHhI6BjCDQQ4aKqw==
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: tamas lengyel <tamas.lengyel@intel.com>, Wei Liu <wl@xen.org>,
 Paul Durrant <paul@xen.org>, Ian Jackson <ian.jackson@eu.citrix.com>,
 luwei kang <luwei.kang@intel.com>, Jan Beulich <jbeulich@suse.com>,
 xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

----- On 6 Jul 2020 at 11:47, Andrew Cooper andrew.cooper3@citrix.com wrote:

> On 06/07/2020 09:33, Jan Beulich wrote:
>> On 05.07.2020 20:58, Michał Leszczyński wrote:
>>> ----- On 5 Jul 2020 at 20:55, Michał Leszczyński michal.leszczynski@cert.pl
>>> wrote:
>>>> --- /dev/null
>>>> +++ b/tools/proctrace/proctrace.c
>>>> +#include <stdlib.h>
>>>> +#include <stdio.h>
>>>> +#include <sys/mman.h>
>>>> +#include <signal.h>
>>>> +
>>>> +#include <xenctrl.h>
>>>> +#include <xen/xen.h>
>>>> +#include <xenforeignmemory.h>
>>>> +
>>>> +#define BUF_SIZE (16384 * XC_PAGE_SIZE)
>>> I would like to discuss how we should retrieve the trace buffer size
>>> at runtime. Should there be a hypercall for it, or some extension to the
>>> acquire_resource logic?
>> Personally I'd prefer the latter, but the question is whether one can
>> be made in a backwards compatible way.
>
> I already covered this in v4.
>
> ~Andrew


Ok, sorry, I see:

> The guest_handle_is_null(xmar.frame_list) path
> in Xen is supposed to report the size of the resource, not the size of
> Xen's local buffer, so userspace can ask "how large is this resource".
>
> I'll try and find some time to fix this and arrange for backports, but
> the current behaviour is nonsense, and problematic for new users.

So to make it clear: should I modify the acquire_resource logic
in such a way that a NULL guest handle reports the actual
size of the resource?

If I got it right, here:

https://lists.xen.org/archives/html/xen-devel/2020-07/msg00159.html

it was suggested that it should report the constant value of
UINT_MAX >> MEMOP_EXTENT_SHIFT, and as far as I understood, the expectation
is that it would report how many frames might be requested at once,
not the size of the resource we're asking for.


Best regards,
Michał Leszczyński
CERT Polska


From xen-devel-bounces@lists.xenproject.org Mon Jul 06 10:28:33 2020
Subject: Re: [PATCH v5 08/11] x86/mm: add vmtrace_buf resource type
To: Michał Leszczyński <michal.leszczynski@cert.pl>,
 xen-devel@lists.xenproject.org
References: <cover.1593974333.git.michal.leszczynski@cert.pl>
 <a306c4811973d80c83f1cb46cdbef1aa54ac6379.1593974333.git.michal.leszczynski@cert.pl>
From: Julien Grall <julien@xen.org>
Message-ID: <2adc4481-196e-646e-2ebf-3bcbdcbf8aa9@xen.org>
Date: Mon, 6 Jul 2020 11:28:14 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:68.0)
 Gecko/20100101 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <a306c4811973d80c83f1cb46cdbef1aa54ac6379.1593974333.git.michal.leszczynski@cert.pl>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit
Cc: Stefano Stabellini <sstabellini@kernel.org>, luwei.kang@intel.com,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 tamas.lengyel@intel.com

Hi,

On 05/07/2020 19:55, Michał Leszczyński wrote:
> From: Michal Leszczynski <michal.leszczynski@cert.pl>
> 
> Allow mapping the processor trace buffer using
> acquire_resource().
> 
> Signed-off-by: Michal Leszczynski <michal.leszczynski@cert.pl>
> ---
>   xen/common/memory.c         | 28 ++++++++++++++++++++++++++++
>   xen/include/public/memory.h |  1 +
>   2 files changed, 29 insertions(+)
> 
> diff --git a/xen/common/memory.c b/xen/common/memory.c
> index eb42f883df..04f4e152c0 100644
> --- a/xen/common/memory.c
> +++ b/xen/common/memory.c
> @@ -1007,6 +1007,29 @@ static long xatp_permission_check(struct domain *d, unsigned int space)
>       return xsm_add_to_physmap(XSM_TARGET, current->domain, d);
>   }
>   
> +static int acquire_vmtrace_buf(struct domain *d, unsigned int id,
> +                               unsigned long frame,

Shouldn't this be uint64_t to avoid truncation?

> +                               unsigned int nr_frames,
> +                               xen_pfn_t mfn_list[])
> +{
> +    mfn_t mfn;
> +    unsigned int i;
> +    struct vcpu *v = domain_vcpu(d, id);
> +
> +    if ( !v || !v->vmtrace.pt_buf )
> +        return -EINVAL;
> +
> +    mfn = page_to_mfn(v->vmtrace.pt_buf);
> +
> +    if ( frame + nr_frames > (v->domain->vmtrace_pt_size >> PAGE_SHIFT) )

frame + nr_frames could possibly overflow a 64-bit value and therefore 
still pass the check.

So I would suggest to use:

(frame > (v->domain->vmtrace_pt_size >> PAGE_SHIFT)) ||
(nr_frames > ((v->domain->vmtrace_pt_size >> PAGE_SHIFT) - frame))

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Jul 06 10:31:42 2020
Subject: Re: [PATCH v5 09/11] x86/domctl: add XEN_DOMCTL_vmtrace_op
To: Michał Leszczyński <michal.leszczynski@cert.pl>,
 xen-devel@lists.xenproject.org
References: <cover.1593974333.git.michal.leszczynski@cert.pl>
 <f3ec05eb4908f774683e96553ec32d68fac0d0ac.1593974333.git.michal.leszczynski@cert.pl>
From: Julien Grall <julien@xen.org>
Message-ID: <6763525a-dca6-dfe5-b417-96e69a22d927@xen.org>
Date: Mon, 6 Jul 2020 11:31:30 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:68.0)
 Gecko/20100101 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <f3ec05eb4908f774683e96553ec32d68fac0d0ac.1593974333.git.michal.leszczynski@cert.pl>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit
Cc: Stefano Stabellini <sstabellini@kernel.org>, luwei.kang@intel.com,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 tamas.lengyel@intel.com,
 Roger Pau Monné <roger.pau@citrix.com>

Hi Michal,

On 05/07/2020 19:55, Michał Leszczyński wrote:
> +/* XEN_DOMCTL_vmtrace_op: Perform VM tracing related operation */
> +#if defined(__XEN__) || defined(__XEN_TOOLS__)
> +
> +struct xen_domctl_vmtrace_op {
> +    /* IN variable */
> +    uint32_t cmd;
> +/* Enable/disable external vmtrace for given domain */
> +#define XEN_DOMCTL_vmtrace_pt_enable      1
> +#define XEN_DOMCTL_vmtrace_pt_disable     2
> +#define XEN_DOMCTL_vmtrace_pt_get_offset  3
> +    domid_t domain;

AFAICT, there is a 16-bit implicit padding here and ...


> +    uint32_t vcpu;

... a 32-bit implicit padding here. I would suggest to make
the two explicit.

We possibly want to check they are also always zero.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Jul 06 10:32:14 2020
Date: Mon, 6 Jul 2020 12:31:33 +0200 (CEST)
From: Michał Leszczyński <michal.leszczynski@cert.pl>
To: Jan Beulich <jbeulich@suse.com>
Message-ID: <1352820634.20043216.1594031493113.JavaMail.zimbra@cert.pl>
In-Reply-To: <8685426c-0b79-e967-dfce-e9d2e7d21401@suse.com>
References: <cover.1593974333.git.michal.leszczynski@cert.pl>
 <cover.1593974333.git.michal.leszczynski@cert.pl>
 <a4833c8168e287f0caf1dc6f16ec5c054bd88b0a.1593974333.git.michal.leszczynski@cert.pl>
 <762195600.19745364.1593976285067.JavaMail.zimbra@cert.pl>
 <8685426c-0b79-e967-dfce-e9d2e7d21401@suse.com>
Subject: Re: [PATCH v5 06/11] x86/hvm: processor trace interface in HVM
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
Cc: Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 tamas lengyel <tamas.lengyel@intel.com>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, luwei kang <luwei.kang@intel.com>,
 xen-devel@lists.xenproject.org,
 Roger Pau Monné <roger.pau@citrix.com>

----- On 6 Jul 2020, at 10:31, Jan Beulich jbeulich@suse.com wrote:

> On 05.07.2020 21:11, Michał Leszczyński wrote:
>> ----- On 5 Jul 2020, at 20:54, Michał Leszczyński michal.leszczynski@cert.pl
>> wrote:
>>> --- a/xen/arch/x86/domain.c
>>> +++ b/xen/arch/x86/domain.c
>>> @@ -2199,6 +2199,25 @@ int domain_relinquish_resources(struct domain *d)
>>>                 altp2m_vcpu_disable_ve(v);
>>>         }
>>>
>>> +        for_each_vcpu ( d, v )
>>> +        {
>>> +            unsigned int i;
>>> +
>>> +            if ( !v->vmtrace.pt_buf )
>>> +                continue;
>>> +
>>> +            for ( i = 0; i < (v->domain->vmtrace_pt_size >> PAGE_SHIFT); i++ )
>>> +            {
>>> +                struct page_info *pg = mfn_to_page(
>>> +                    mfn_add(page_to_mfn(v->vmtrace.pt_buf), i));
>>> +                if ( (pg->count_info & PGC_count_mask) != 1 )
>>> +                    return -EBUSY;
>>> +            }
>>> +
>>> +            free_domheap_pages(v->vmtrace.pt_buf,
>>> +                get_order_from_bytes(v->domain->vmtrace_pt_size));
>>
>>
>> While this works, I don't feel that this is a good solution with this loop
>> returning -EBUSY here. I would like to kindly ask for suggestions regarding
>> this topic.
>
> I'm sorry to ask, but with the previously given suggestions to mirror
> existing code, why do you still need to play with this function? You
> really shouldn't have a need to, just like e.g. the ioreq server page
> handling code didn't.
>
> Jan


Ok, sorry. I think I've finally got it after Roger's latest suggestions :P

This will be fixed in the next version.


Best regards,
Michał Leszczyński
CERT Polska


From xen-devel-bounces@lists.xenproject.org Mon Jul 06 10:37:19 2020
Subject: Re: [PATCH v5 09/11] x86/domctl: add XEN_DOMCTL_vmtrace_op
To: Julien Grall <julien@xen.org>,
 Michał Leszczyński <michal.leszczynski@cert.pl>
References: <cover.1593974333.git.michal.leszczynski@cert.pl>
 <f3ec05eb4908f774683e96553ec32d68fac0d0ac.1593974333.git.michal.leszczynski@cert.pl>
 <6763525a-dca6-dfe5-b417-96e69a22d927@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <1fe71a95-1757-ca18-1d36-c3712e7b6fdf@suse.com>
Date: Mon, 6 Jul 2020 12:37:09 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <6763525a-dca6-dfe5-b417-96e69a22d927@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
Cc: Stefano Stabellini <sstabellini@kernel.org>, tamas.lengyel@intel.com,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, luwei.kang@intel.com,
 xen-devel@lists.xenproject.org,
 Roger Pau Monné <roger.pau@citrix.com>

On 06.07.2020 12:31, Julien Grall wrote:
> On 05/07/2020 19:55, Michał Leszczyński wrote:
>> +/* XEN_DOMCTL_vmtrace_op: Perform VM tracing related operation */
>> +#if defined(__XEN__) || defined(__XEN_TOOLS__)
>> +
>> +struct xen_domctl_vmtrace_op {
>> +    /* IN variable */
>> +    uint32_t cmd;
>> +/* Enable/disable external vmtrace for given domain */
>> +#define XEN_DOMCTL_vmtrace_pt_enable      1
>> +#define XEN_DOMCTL_vmtrace_pt_disable     2
>> +#define XEN_DOMCTL_vmtrace_pt_get_offset  3
>> +    domid_t domain;
> 
> AFAICT, there is a 16-bit implicit padding here and ...
> 
> 
>> +    uint32_t vcpu;
> 
> ... a 32-bit implicit padding here. I would suggest to make
> the two explicit.
> 
> We possibly want to check they are also always zero.

Not just possibly imo - we should.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jul 06 10:38:45 2020
Subject: Re: [PATCH v5 09/11] x86/domctl: add XEN_DOMCTL_vmtrace_op
To: Jan Beulich <jbeulich@suse.com>,
 Michał Leszczyński <michal.leszczynski@cert.pl>
References: <cover.1593974333.git.michal.leszczynski@cert.pl>
 <f3ec05eb4908f774683e96553ec32d68fac0d0ac.1593974333.git.michal.leszczynski@cert.pl>
 <6763525a-dca6-dfe5-b417-96e69a22d927@xen.org>
 <1fe71a95-1757-ca18-1d36-c3712e7b6fdf@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <7e6e4cd1-7244-243a-6af6-5c24ce24c10f@xen.org>
Date: Mon, 6 Jul 2020 11:38:33 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:68.0)
 Gecko/20100101 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <1fe71a95-1757-ca18-1d36-c3712e7b6fdf@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit
Cc: Stefano Stabellini <sstabellini@kernel.org>, tamas.lengyel@intel.com,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, luwei.kang@intel.com,
 xen-devel@lists.xenproject.org,
 Roger Pau Monné <roger.pau@citrix.com>



On 06/07/2020 11:37, Jan Beulich wrote:
> On 06.07.2020 12:31, Julien Grall wrote:
>> On 05/07/2020 19:55, Michał Leszczyński wrote:
>>> +/* XEN_DOMCTL_vmtrace_op: Perform VM tracing related operation */
>>> +#if defined(__XEN__) || defined(__XEN_TOOLS__)
>>> +
>>> +struct xen_domctl_vmtrace_op {
>>> +    /* IN variable */
>>> +    uint32_t cmd;
>>> +/* Enable/disable external vmtrace for given domain */
>>> +#define XEN_DOMCTL_vmtrace_pt_enable      1
>>> +#define XEN_DOMCTL_vmtrace_pt_disable     2
>>> +#define XEN_DOMCTL_vmtrace_pt_get_offset  3
>>> +    domid_t domain;
>>
>> AFAICT, there is a 16-bit implicit padding here and ...
>>
>>
>>> +    uint32_t vcpu;
>>
>> ... a 32-bit implicit padding here. I would suggest to make
>> the two explicit.
>>
>> We possibly want to check they are also always zero.
> 
> Not just possibly imo - we should.

I wasn't sure given that DOMCTL is not a stable interface.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Jul 06 10:46:14 2020
Subject: Re: [PATCH v5 04/11] common: add vmtrace_pt_size domain parameter
To: Michał Leszczyński <michal.leszczynski@cert.pl>,
 xen-devel@lists.xenproject.org
References: <cover.1593974333.git.michal.leszczynski@cert.pl>
 <5d52b37e391a4165dc3775f77a621d34a33d22c2.1593974333.git.michal.leszczynski@cert.pl>
From: Julien Grall <julien@xen.org>
Message-ID: <9d7d63af-fcfd-f52b-2139-caf4197cbae6@xen.org>
Date: Mon, 6 Jul 2020 11:45:58 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:68.0)
 Gecko/20100101 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <5d52b37e391a4165dc3775f77a621d34a33d22c2.1593974333.git.michal.leszczynski@cert.pl>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit
Cc: Stefano Stabellini <sstabellini@kernel.org>, luwei.kang@intel.com,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 tamas.lengyel@intel.com

Hi,

On 05/07/2020 19:54, Michał Leszczyński wrote:
> From: Michal Leszczynski <michal.leszczynski@cert.pl>
> 
> Add a vmtrace_pt_size parameter to live domains and a
> vmtrace_pt_order parameter to xen_domctl_createdomain.
> 
> Signed-off-by: Michal Leszczynski <michal.leszczynski@cert.pl>
> ---
>   xen/common/domain.c         | 12 ++++++++++++
>   xen/include/public/domctl.h |  1 +
>   xen/include/xen/sched.h     |  4 ++++
>   3 files changed, 17 insertions(+)
> 
> diff --git a/xen/common/domain.c b/xen/common/domain.c
> index a45cf023f7..25d3359c5b 100644
> --- a/xen/common/domain.c
> +++ b/xen/common/domain.c
> @@ -338,6 +338,12 @@ static int sanitise_domain_config(struct xen_domctl_createdomain *config)
>           return -EINVAL;
>       }
>   
> +    if ( config->vmtrace_pt_order && !vmtrace_supported )

Looking at the rest of the series, vmtrace will only be supported for
x86 HVM guests. So don't you want to return -EINVAL for PV guests here?

This could be done in a new helper arch_vmtrace_supported() or possibly 
in the existing arch_sanitise_domain_config().

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Jul 06 10:53:40 2020
Subject: Re: [PATCH v5 05/11] tools/libxl: add vmtrace_pt_size parameter
To: =?UTF-8?Q?Micha=c5=82_Leszczy=c5=84ski?= <michal.leszczynski@cert.pl>,
 xen-devel@lists.xenproject.org
References: <cover.1593974333.git.michal.leszczynski@cert.pl>
 <cover.1593974333.git.michal.leszczynski@cert.pl>
 <f7e3c91789a7763b997918b6ebb987be670f9ce5.1593974333.git.michal.leszczynski@cert.pl>
 <1763045628.19744689.1593975740414.JavaMail.zimbra@cert.pl>
From: Julien Grall <julien@xen.org>
Message-ID: <f4b72fb1-2dfb-def7-6dbd-4908ceb497aa@xen.org>
Date: Mon, 6 Jul 2020 11:53:09 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:68.0)
 Gecko/20100101 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <1763045628.19744689.1593975740414.JavaMail.zimbra@cert.pl>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: luwei kang <luwei.kang@intel.com>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Anthony PERARD <anthony.perard@citrix.com>,
 tamas lengyel <tamas.lengyel@intel.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi,

On 05/07/2020 20:02, Michał Leszczyński wrote:
> ----- 5 lip 2020 o 20:54, Michał Leszczyński michal.leszczynski@cert.pl napisał(a):
> 
>> From: Michal Leszczynski <michal.leszczynski@cert.pl>
>>
>> Allow to specify the size of per-vCPU trace buffer upon
>> domain creation. This is zero by default (meaning: not enabled).
>>
>> Signed-off-by: Michal Leszczynski <michal.leszczynski@cert.pl>
>> ---
>> docs/man/xl.cfg.5.pod.in             | 11 +++++++++++
>> tools/golang/xenlight/helpers.gen.go |  2 ++
>> tools/golang/xenlight/types.gen.go   |  1 +
>> tools/libxl/libxl.h                  |  8 ++++++++
>> tools/libxl/libxl_create.c           |  1 +
>> tools/libxl/libxl_types.idl          |  2 ++
>> tools/xl/xl_parse.c                  | 22 ++++++++++++++++++++++
>> 7 files changed, 47 insertions(+)
>>
>> diff --git a/docs/man/xl.cfg.5.pod.in b/docs/man/xl.cfg.5.pod.in
>> index 0532739c1f..670759f6bd 100644
>> --- a/docs/man/xl.cfg.5.pod.in
>> +++ b/docs/man/xl.cfg.5.pod.in
>> @@ -278,6 +278,17 @@ memory=8096 will report significantly less memory available
>> for use
>> than a system with maxmem=8096 memory=8096 due to the memory overhead
>> of having to track the unused pages.
>>
>> +=item B<processor_trace_buffer_size=BYTES>
>> +
>> +Specifies the size of processor trace buffer that would be allocated
>> +for each vCPU belonging to this domain. Disabled (i.e.
>> +B<processor_trace_buffer_size=0>) by default. This must be set to a
>> +non-zero value in order to be able to use processor tracing features
>> +with this domain.
>> +
>> +B<NOTE>: The size value must be between 4 kB and 4 GB and it must
>> +be also a power of 2.

This seems to suggest that 4 kB is allowed. But looking at the code 
below, you are forbidding the value.

[...]

> As there were many different ideas about how the naming scheme should be
> and what kinds of values should be passed where, I would like to discuss
> this particular topic. Right now we have it pretty confusing:
> 
> * user sets "processor_trace_buffer_size" option in xl.cfg
> * domain creation hypercall uses "vmtrace_pt_order" (derived from above)

You don't only use the order in the hypercall but also in the public 
interface of libxl.

> * hypervisor side stores "vmtrace_pt_size" (converted back to bytes)

My preference would be to use the size everywhere, but if one still 
prefers to use the order in the hypercall then the libxl interface 
should use the size.

See my comment in v4 for the rationale.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Jul 06 11:20:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jul 2020 11:20:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jsPAp-00018l-Op; Mon, 06 Jul 2020 11:20:35 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=dTkW=AR=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jsPAo-00018g-7T
 for xen-devel@lists.xenproject.org; Mon, 06 Jul 2020 11:20:34 +0000
X-Inumbo-ID: b49844e8-bf7a-11ea-8c67-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b49844e8-bf7a-11ea-8c67-12813bfff9fa;
 Mon, 06 Jul 2020 11:20:33 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 72878AF3D;
 Mon,  6 Jul 2020 11:20:32 +0000 (UTC)
Subject: Re: [PATCH v5 09/11] x86/domctl: add XEN_DOMCTL_vmtrace_op
To: Julien Grall <julien@xen.org>
References: <cover.1593974333.git.michal.leszczynski@cert.pl>
 <f3ec05eb4908f774683e96553ec32d68fac0d0ac.1593974333.git.michal.leszczynski@cert.pl>
 <6763525a-dca6-dfe5-b417-96e69a22d927@xen.org>
 <1fe71a95-1757-ca18-1d36-c3712e7b6fdf@suse.com>
 <7e6e4cd1-7244-243a-6af6-5c24ce24c10f@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <ff59d2b7-55a9-53dc-444c-7b4741945c05@suse.com>
Date: Mon, 6 Jul 2020 13:20:31 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <7e6e4cd1-7244-243a-6af6-5c24ce24c10f@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, tamas.lengyel@intel.com,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Micha=c5=82_Leszczy=c5=84ski?= <michal.leszczynski@cert.pl>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, luwei.kang@intel.com,
 xen-devel@lists.xenproject.org,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 06.07.2020 12:38, Julien Grall wrote:
> On 06/07/2020 11:37, Jan Beulich wrote:
>> On 06.07.2020 12:31, Julien Grall wrote:
>>> On 05/07/2020 19:55, Michał Leszczyński wrote:
>>>> +/* XEN_DOMCTL_vmtrace_op: Perform VM tracing related operation */
>>>> +#if defined(__XEN__) || defined(__XEN_TOOLS__)
>>>> +
>>>> +struct xen_domctl_vmtrace_op {
>>>> +    /* IN variable */
>>>> +    uint32_t cmd;
>>>> +/* Enable/disable external vmtrace for given domain */
>>>> +#define XEN_DOMCTL_vmtrace_pt_enable      1
>>>> +#define XEN_DOMCTL_vmtrace_pt_disable     2
>>>> +#define XEN_DOMCTL_vmtrace_pt_get_offset  3
>>>> +    domid_t domain;
>>>
>>> AFAICT, there is a 16-bit implicit padding here and ...
>>>
>>>
>>>> +    uint32_t vcpu;
>>>
>>> ... a 32-bit implicit padding here. I would suggest to make
>>> the two explicit.
>>>
>>> We possibly want to check they are also always zero.
>>
>> Not just possibly imo - we should.
> 
> I wasn't sure given that DOMCTL is not a stable interface.

True; checking padding fields allows assigning meaning to them
without bumping the domctl interface version.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jul 06 11:45:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jul 2020 11:45:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jsPYp-0002wG-Th; Mon, 06 Jul 2020 11:45:23 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=m1N9=AR=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jsPYo-0002vq-W8
 for xen-devel@lists.xenproject.org; Mon, 06 Jul 2020 11:45:23 +0000
X-Inumbo-ID: 28ff233a-bf7e-11ea-bb8b-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 28ff233a-bf7e-11ea-bb8b-bc764e2007e4;
 Mon, 06 Jul 2020 11:45:16 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=S8symd0CBpl3SvejO7Z+z45pBvHQRPJe63FSSxNOfCg=; b=zr4wsO9scm83Z79KvLvtYPNKA
 Hqg7Dr1aaSEcRIzGJx+6rSmMzmZ00rOFvI0ByffSsClItQuxXM27TnbzXRM2bjl3FEFh9ktf6kkLt
 f9J/msERkSwDhzk3mUBqpxJ3AWtChyl8MCgsvpKb3dweXcfhoK7kWUO2TNLnTq+v6c0ok=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jsPYh-00056r-Ll; Mon, 06 Jul 2020 11:45:15 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jsPYh-0001LZ-EI; Mon, 06 Jul 2020 11:45:15 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jsPYh-0003Z9-De; Mon, 06 Jul 2020 11:45:15 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151661-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 151661: tolerable FAIL
X-Osstest-Failures: xen-unstable:test-amd64-amd64-examine:memdisk-try-append:fail:heisenbug
 xen-unstable:test-amd64-i386-libvirt-xsm:guest-start/debian.repeat:fail:heisenbug
 xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=be63d9d47f571a60d70f8fb630c03871312d9655
X-Osstest-Versions-That: xen=be63d9d47f571a60d70f8fb630c03871312d9655
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 06 Jul 2020 11:45:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151661 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151661/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-examine    4 memdisk-try-append fail in 151635 pass in 151661
 test-amd64-i386-libvirt-xsm  18 guest-start/debian.repeat  fail pass in 151635

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 151635
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 151635
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 151635
 test-armhf-armhf-xl-rtds     16 guest-start/debian.repeat    fail  like 151635
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 151635
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 151635
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 151635
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 151635
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 151635
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop             fail like 151635
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  be63d9d47f571a60d70f8fb630c03871312d9655
baseline version:
 xen                  be63d9d47f571a60d70f8fb630c03871312d9655

Last test of basis   151661  2020-07-06 01:54:10 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Mon Jul 06 12:07:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jul 2020 12:07:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jsPtj-0004pt-8W; Mon, 06 Jul 2020 12:06:59 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=m1N9=AR=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jsPti-0004pZ-5J
 for xen-devel@lists.xenproject.org; Mon, 06 Jul 2020 12:06:58 +0000
X-Inumbo-ID: 2d45d634-bf81-11ea-8c77-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2d45d634-bf81-11ea-8c77-12813bfff9fa;
 Mon, 06 Jul 2020 12:06:52 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=pDzQCrEASPvAOpzXjlqgQ3wPV/D/6Ay7LxrNkx2Bu4w=; b=E8RhDSv+zYjrtsytiye/0ioPx
 6Nx7QDutbYdz8ZdpKMolvyCG6USt+A69sv7gw2ExZXc1pFrEo5y67httLDYWsN+UyQXXG9LOCnHTZ
 aDBr5TStB+wQyRkg1XhBb92BiEsufqZHblyx1jc8hfuYvga67MTOJ03gIYzK1Q076MQT4=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jsPtb-0005Wy-Ge; Mon, 06 Jul 2020 12:06:51 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jsPtb-000216-5a; Mon, 06 Jul 2020 12:06:51 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jsPtb-0006jl-53; Mon, 06 Jul 2020 12:06:51 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151671-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 151671: regressions - FAIL
X-Osstest-Failures: xen-unstable-smoke:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
 xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=158912a532fe98f448c688d3571241c9033553bd
X-Osstest-Versions-That: xen=be63d9d47f571a60d70f8fb630c03871312d9655
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 06 Jul 2020 12:06:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151671 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151671/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-xl-xsm       7 xen-boot                 fail REGR. vs. 151535

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  158912a532fe98f448c688d3571241c9033553bd
baseline version:
 xen                  be63d9d47f571a60d70f8fb630c03871312d9655

Last test of basis   151535  2020-07-02 10:00:52 Z    4 days
Testing same since   151671  2020-07-06 09:02:11 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Anthony PERARD <anthony.perard@citrix.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 158912a532fe98f448c688d3571241c9033553bd
Author: Anthony PERARD <anthony.perard@citrix.com>
Date:   Fri Jul 3 14:55:33 2020 +0100

    Config: Update QEMU
    
    Backport 2 commits to fix building QEMU without PCI passthrough
    support.
    
    Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
    Acked-by: Wei Liu <wl@xen.org>
    Release-acked-by: Paul Durrant <paul@xen.org>

commit d44cbbe0f3243afcc56e47dcfa97bbfe23e46fbb
Author: Wei Liu <wl@xen.org>
Date:   Fri Jul 3 20:10:01 2020 +0000

    kdd: fix build again
    
    Restore Tim's patch. The one that was committed was recreated by me
    because git didn't accept my saved copy. I made some mistakes while
    recreating that patch and here we are.
    
    Fixes: 3471cafbdda3 ("kdd: stop using [0] arrays to access packet contents")
    Reported-by: Michael Young <m.a.young@durham.ac.uk>
    Signed-off-by: Wei Liu <wl@xen.org>
    Reviewed-by: Tim Deegan <tim@xen.org>
    Release-acked-by: Paul Durrant <paul@xen.org>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Mon Jul 06 12:15:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jul 2020 12:15:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jsQ25-0005gj-2g; Mon, 06 Jul 2020 12:15:37 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=m1N9=AR=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jsQ22-0005gP-Vq
 for xen-devel@lists.xenproject.org; Mon, 06 Jul 2020 12:15:35 +0000
X-Inumbo-ID: 609ce846-bf82-11ea-8c78-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 609ce846-bf82-11ea-8c78-12813bfff9fa;
 Mon, 06 Jul 2020 12:15:27 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=ii84zPqummfVxa+qW1xVtgBMQc/mKyOSNtHXr1lhQYU=; b=nXap19js9/HkMB/tvDHpKDKjY
 JekIEU7F/KiLyXe576CabyK/33E48MsW5HvoDgczRy5rM96G0PujVTFs12enzOa8kmKIJZlghMOMO
 oWh7Fou9SnH/LIlaQeMKZkJrobFP/KDOgS/EuMAznYP5QYITSrUYvUQZVIStVExhNowuk=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jsQ1u-0005fu-R8; Mon, 06 Jul 2020 12:15:26 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jsQ1u-0002FC-8U; Mon, 06 Jul 2020 12:15:26 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jsQ1u-0005y7-7r; Mon, 06 Jul 2020 12:15:26 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151665-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 151665: regressions - FAIL
X-Osstest-Failures: libvirt:test-amd64-amd64-libvirt-pair:guest-migrate/src_host/dst_host:fail:regression
 libvirt:test-amd64-i386-libvirt-pair:guest-migrate/src_host/dst_host:fail:regression
 libvirt:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 libvirt:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 libvirt:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 libvirt:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 libvirt:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 libvirt:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 libvirt:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 libvirt:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 libvirt:test-arm64-arm64-libvirt:migrate-support-check:fail:nonblocking
 libvirt:test-arm64-arm64-libvirt:saverestore-support-check:fail:nonblocking
 libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 libvirt:test-arm64-arm64-libvirt-qcow2:migrate-support-check:fail:nonblocking
 libvirt:test-arm64-arm64-libvirt-qcow2:saverestore-support-check:fail:nonblocking
 libvirt:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 libvirt:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 libvirt:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This: libvirt=852ee1950aee5f31c9656b30c5fe9124f734c38c
X-Osstest-Versions-That: libvirt=e7998ebeaf15e4e8825be0dd97aa1316f194f00d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 06 Jul 2020 12:15:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151665 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151665/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-pair 22 guest-migrate/src_host/dst_host fail REGR. vs. 151496
 test-amd64-i386-libvirt-pair 22 guest-migrate/src_host/dst_host fail REGR. vs. 151496

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 151496
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 151496
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt     14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-qcow2 12 migrate-support-check        fail never pass
 test-arm64-arm64-libvirt-qcow2 13 saverestore-support-check    fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass

version targeted for testing:
 libvirt              852ee1950aee5f31c9656b30c5fe9124f734c38c
baseline version:
 libvirt              e7998ebeaf15e4e8825be0dd97aa1316f194f00d

Last test of basis   151496  2020-07-01 04:23:43 Z    5 days
Failing since        151527  2020-07-02 04:29:15 Z    4 days    5 attempts
Testing same since   151665  2020-07-06 04:18:46 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrea Bolognani <abologna@redhat.com>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniel Veillard <veillard@redhat.com>
  Laine Stump <laine@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Yanqiu Zhang <yanqzhan@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-libvirt                                     pass    
 test-arm64-arm64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-i386-libvirt-pair                                 fail    
 test-arm64-arm64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 852ee1950aee5f31c9656b30c5fe9124f734c38c
Author: Laine Stump <laine@redhat.com>
Date:   Thu Jun 18 22:33:28 2020 -0400

    util: remove OOM error log from virGetHostnameImpl()
    
    The strings allocated in virGetHostnameImpl() are all allocated via
    g_strdup(), which will exit on OOM anyway, so the call to
    virReportOOMError() is redundant, and removing it allows slight
    modification to the code, in particular the cleanup label can be
    eliminated.
    
    Signed-off-by: Laine Stump <laine@redhat.com>
    Reviewed-by: Ján Tomko <jtomko@redhat.com>

commit 59afd0b0bcbbefd65fd8c171d73db207828c8b18
Author: Laine Stump <laine@redhat.com>
Date:   Thu Jun 18 23:00:47 2020 -0400

    conf: eliminate useless error label in virDomainFeaturesDefParse()
    
    The error: label in this function just does "return -1", so replace
    all the "goto error" in the function with "return -1".
    
    Signed-off-by: Laine Stump <laine@redhat.com>
    Reviewed-by: Ján Tomko <jtomko@redhat.com>

commit ab9fd53823483975adb0cb7d46e03f647c7f3b57
Author: Laine Stump <laine@redhat.com>
Date:   Wed Jun 24 13:12:56 2020 -0400

    network: use proper arg type when calling virNetDevSetOnline()
    
    The 2nd arg to this function is a bool, not an int.
    
    Signed-off-by: Laine Stump <laine@redhat.com>
    Reviewed-by: Ján Tomko <jtomko@redhat.com>

commit e95dd7aacd814c5b7d109252f0b68b2ac9cebb9b
Author: Laine Stump <laine@redhat.com>
Date:   Tue Jun 23 22:52:58 2020 -0400

    network: make networkDnsmasqXmlNsDef private to bridge_driver.c
    
    This struct isn't used anywhere else.
    
    Signed-off-by: Laine Stump <laine@redhat.com>
    Reviewed-by: Ján Tomko <jtomko@redhat.com>

commit 9ceb3cff8582119d2f907b85655e758068770e83
Author: Laine Stump <laine@redhat.com>
Date:   Fri Jun 19 17:40:17 2020 -0400

    network: fix memory leak in networkBuildDhcpDaemonCommandLine()
    
    hostsfilestr was not being freed. This will be turned into g_autofree
    in an upcoming patch converting a lot more of the same file to using
    g_auto*, but I wanted to make a separate patch for this first so the
    other patch is simpler to review (and to make backporting easier).
    
    The leak was introduced in commit 97a0aa246799c97d0a9ca9ecd6b4fd932ae4756c
    
    Signed-off-by: Laine Stump <laine@redhat.com>
    Reviewed-by: Ján Tomko <jtomko@redhat.com>

commit a726feb693ea5a1b0c90761c35641f0db8fc0619
Author: Laine Stump <laine@redhat.com>
Date:   Thu Jun 18 19:16:33 2020 -0400

    use g_autoptr for all xmlBuffers
    
    AUTOPTR_CLEANUP_FUNC is set to xmlBufferFree() in util/virxml.h (This
    is actually new - added accidentally (but fortunately harmlessly!) in
    commit 257aba2dafe. I had added it along with the hunks in this patch,
    then decided to remove it and submit separately, but missed taking out
    the hunk in virxml.h)
    
    Signed-off-by: Laine Stump <laine@redhat.com>
    Reviewed-by: Ján Tomko <jtomko@redhat.com>

commit b7a92bce070fd57844a59bf8b1c30cb4ef4f3acd
Author: Laine Stump <laine@redhat.com>
Date:   Thu Jun 18 12:49:09 2020 -0400

    conf, vmx: check for OOM after calling xmlBufferCreate()
    
    Although libvirt itself uses g_malloc0() and friends, which exit when
    there isn't enough memory, libxml2 uses standard malloc(), which just
    returns NULL on OOM - this means we must check for NULL on return from
    any libxml2 functions that allocate memory.
    
    xmlBufferCreate(), for example, might return NULL, and we don't always
    check for it. This patch adds checks where it isn't already done.
    
    (NB: Although libxml2 has a provision for changing behavior on OOM (by
    calling xmlMemSetup() to change what functions are used to
    allocating/freeing memory), we can't use that, since parts of libvirt
    code end up in libvirt.so, which is linked and called directly by
    applications that may themselves use libxml2 (and may have already set
    their own alternate malloc()), e.g. drivers like esx which live totally
    in the library rather than a separate process.)
    
    Signed-off-by: Laine Stump <laine@redhat.com>
    Reviewed-by: Ján Tomko <jtomko@redhat.com>

commit ad231189ab948f82b8f2288250df088d9718bb7c
Author: Yanqiu Zhang <yanqzhan@redhat.com>
Date:   Thu Jul 2 09:06:46 2020 +0000

    news.html: Add 3 new features
    
    Add 'virtio packed' in 6.3.0, 'virDomainGetHostnameFlags' and
    'Panic Crashloaded event' for 6.1.0.
    
    Signed-off-by: Yanqiu Zhang <yanqzhan@redhat.com>
    Reviewed-by: Andrea Bolognani <abologna@redhat.com>

commit 201f8d1876136b0693505614efa3c9d113aff0bb
Author: Michal Privoznik <mprivozn@redhat.com>
Date:   Mon Jun 29 14:55:54 2020 +0200

    virConnectGetAllDomainStats: Document two vcpu stats
    
    When introducing vcpu.<num>.wait (v1.3.2-rc1~301) and
    vcpu.<num>.halted (v2.4.0-rc1~36) the documentation was
    not written.
    
    Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
    Reviewed-by: Erik Skultety <eskultet@redhat.com>

commit c203b8fee1ce15003934c09e811fbd2eaec9f230
Author: Andrea Bolognani <abologna@redhat.com>
Date:   Thu Jul 2 15:02:38 2020 +0200

    docs: Update CI documentation
    
    We're no longer using either Travis CI or the Jenkins-based
    CentOS CI, but we have started using Cirrus CI.
    
    Mention the libvirt-ci subproject as well, as a pointer for those
    who might want to learn more about our CI infrastructure.
    
    Signed-off-by: Andrea Bolognani <abologna@redhat.com>
    Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>

commit fb912901316dbe7d485551606373bd71d5271601
Author: Andrea Bolognani <abologna@redhat.com>
Date:   Mon Jun 29 19:00:36 2020 +0200

    cirrus: Generate jobs dynamically
    
    Instead of having static job definitions for FreeBSD and macOS,
    use a generic template for both and fill in the details that are
    actually different, such as the list of packages to install, in
    the GitLab CI job, right before calling cirrus-run.
    
    The target-specific information are provided by lcitool, so that
    keeping them up to date is just a matter of running the refresh
    script when necessary.
    
    Signed-off-by: Andrea Bolognani <abologna@redhat.com>
    Reviewed-by: Erik Skultety <eskultet@redhat.com>

commit 919ee94ca9c7fed77897fa8e3b04952e02780c0c
Author: Michal Privoznik <mprivozn@redhat.com>
Date:   Fri Jul 3 09:32:30 2020 +0200

    maint: Post-release version bump to 6.6.0
    
    Signed-off-by: Michal Privoznik <mprivozn@redhat.com>

commit d7f935f1f17a3ecf180ddb9600cbef4ba8dc20f4
Author: Daniel Veillard <veillard@redhat.com>
Date:   Fri Jul 3 08:49:25 2020 +0200

    Release of libvirt-6.5.0
    
    * NEWS.rst: updated with date of release
    
    Signed-off-by: Daniel Veillard <veillard@redhat.com>

commit d1d888a69f505922140bec292b8d208b3571f084
Author: Andrea Bolognani <abologna@redhat.com>
Date:   Thu Jul 2 14:41:18 2020 +0200

    NEWS: Update for libvirt 6.5.0
    
    Signed-off-by: Andrea Bolognani <abologna@redhat.com>
    Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>

commit 7fa7f7eeb6e969e002845928e155914da2fc8cd0
Author: Daniel P. Berrangé <berrange@redhat.com>
Date:   Wed Jul 1 17:36:51 2020 +0100

    util: add access check for hooks to fix running as non-root
    
    Since feb83c1e710b9ea8044a89346f4868d03b31b0f1 libvirtd will abort on
    startup if run as non-root
    
      2020-07-01 16:30:30.738+0000: 1647444: error : virDirOpenInternal:2869 : cannot open directory '/etc/libvirt/hooks/daemon.d': Permission denied
    
    The root cause flaw is that non-root libvirtd is using /etc/libvirt for
    its hooks. Traditionally that has been harmless though since we checked
    whether we could access the hook file and degraded gracefully. We need
    the same access check for iterating over the hook directory.
    
    Long term we should make it possible to have an unprivileged hook dir
    under $HOME.
    
    Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
    Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>

commit c3fa17cd9a158f38416a80af3e0f712bf96ebf38
Author: Michal Privoznik <mprivozn@redhat.com>
Date:   Wed Jul 1 09:47:48 2020 +0200

    virnettlshelpers: Update private key
    
    With the recent update of Fedora rawhide I've noticed
    virnettlssessiontest and virnettlscontexttest failing with:
    
      Our own certificate servercertreq-ctx.pem failed validation
      against cacertreq-ctx.pem: The certificate uses an insecure
      algorithm
    
    This is result of Fedora changes to support strong crypto [1]. RSA
    with 1024 bit key is viewed as legacy and thus insecure. Generate
    a new private key then. Moreover, switch to EC which is not only
    shorter but also not deprecated that often as RSA. Generated
    using the following command:
    
      openssl genpkey --outform PEM --out privkey.pem \
      --algorithm EC --pkeyopt ec_paramgen_curve:P-384 \
      --pkeyopt ec_param_enc:named_curve
    
    1: https://fedoraproject.org/wiki/Changes/StrongCryptoSettings2
    
    Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
    Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>

commit d57f361083c5053267e6d9380c1afe2abfcae8ac
Author: Daniel Henrique Barboza <danielhb413@gmail.com>
Date:   Tue Jun 30 16:43:43 2020 -0300

    docs: Fix 'Offline migration' description
    
    'transfers inactive the definition of a domain' seems odd.
    
    Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
    Reviewed-by: Andrea Bolognani <abologna@redhat.com>


From xen-devel-bounces@lists.xenproject.org Mon Jul 06 13:26:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jul 2020 13:26:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jsR8r-00037l-65; Mon, 06 Jul 2020 13:26:41 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=MnCI=AR=gmail.com=philippe.mathieu.daude@srs-us1.protection.inumbo.net>)
 id 1jsR8q-00036i-5w
 for xen-devel@lists.xenproject.org; Mon, 06 Jul 2020 13:26:40 +0000
X-Inumbo-ID: 4ec70d86-bf8c-11ea-bb8b-bc764e2007e4
Received: from mail-wm1-x344.google.com (unknown [2a00:1450:4864:20::344])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4ec70d86-bf8c-11ea-bb8b-bc764e2007e4;
 Mon, 06 Jul 2020 13:26:33 +0000 (UTC)
Received: by mail-wm1-x344.google.com with SMTP id j18so39340130wmi.3
 for <xen-devel@lists.xenproject.org>; Mon, 06 Jul 2020 06:26:33 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=sender:from:to:cc:subject:date:message-id:in-reply-to:references
 :mime-version:content-transfer-encoding;
 bh=am5VN7/n2uE761M3Be1VEooh2n0rBobtAPOGmP3qI4M=;
 b=PlGM7K2N9i/6GjPrNncv7DxyXCLNbt7NBGOom7aipSBKKp5prlHJhp+HFPsFuT4+is
 xseCSUB/jgvFhr3ULyxOwZdS9u0bk3Wwu9+YTHByX2wlHB71sQivtYCnoRIvFt2KY3uf
 fN4I4vvRh1d8nRqWcB8UklaPnOBI/51LHmUjaSDGrHwDIlgzALaxwhtH31OD6oSKnhZD
 r0Xi5YaZiwHwhZTwvQu2XwsVQ7cjbKYhMwVJrzc9WJii2XY3MFJNKMfCBYF/noAuJjgo
 T8mULoejBWDQVOgUE6fIPARLJ+vYQMu6ou6bP2zwbFHxoiRiXgo3x3KzgTiRV19hr3QV
 3Mmw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:sender:from:to:cc:subject:date:message-id
 :in-reply-to:references:mime-version:content-transfer-encoding;
 bh=am5VN7/n2uE761M3Be1VEooh2n0rBobtAPOGmP3qI4M=;
 b=szhSuiOwhImdXWjpaKugBgwse1x9GMYGEk4ZOrEiFyZUN7xpdSlbI+wUEyzMqSyFjD
 aVmLNDXDE99vIA0F7BEnprqzKBJ5em77XOGwZ1Ef9FcdbqzLp6cwg4pwKOSx0ddrVUP5
 6YiExWQKS9qzuAmlhqMffotK1iTtUi4Ns1v5LHcBgw2/KHGCgvxoG53h3l+PdXn0xtB/
 b5aLWq28NmWNIUTlegH9IN35VnnuDLR8l9qxJB/PKLDse6ZU1Du8tns/IariHXYl0Qg8
 2K7j/OQDIbt4NuVe1NC+2boRtAFnPtjDL9eLX4naRj3lFmQrUaSQpB1M+GmF7hMYQVM5
 Li6A==
X-Gm-Message-State: AOAM532iwGo515bzDCPkRdXZhyAN6vJBX4JBLRp/TVs6YE5kDfc+FCuR
 tRocxKTF8m5ef61xSNfif5A=
X-Google-Smtp-Source: ABdhPJx9IhLPFnhwHRB01b+AXuniNDNUxthdSjEywHNwuq58/NRF/JoFZuUsEzKmhlDAFVaeOqBIOw==
X-Received: by 2002:a7b:c0c9:: with SMTP id s9mr46471167wmh.166.1594041992205; 
 Mon, 06 Jul 2020 06:26:32 -0700 (PDT)
Received: from localhost.localdomain
 (138.red-83-57-170.dynamicip.rima-tde.net. [83.57.170.138])
 by smtp.gmail.com with ESMTPSA id w2sm24447004wrs.77.2020.07.06.06.26.30
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 06 Jul 2020 06:26:31 -0700 (PDT)
From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>
To: Markus Armbruster <armbru@redhat.com>,
	qemu-devel@nongnu.org
Subject: [RFC PATCH 2/2] block/block-backend: Let blk_attach_dev() provide
 helpful error
Date: Mon,  6 Jul 2020 15:26:26 +0200
Message-Id: <20200706132626.22133-3-f4bug@amsat.org>
X-Mailer: git-send-email 2.21.3
In-Reply-To: <20200706132626.22133-1-f4bug@amsat.org>
References: <20200706132626.22133-1-f4bug@amsat.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Fam Zheng <fam@euphon.net>, Kevin Wolf <kwolf@redhat.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 =?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
 Eduardo Habkost <ehabkost@redhat.com>, qemu-block@nongnu.org,
 Paul Durrant <paul@xen.org>, Laurent Vivier <laurent@vivier.eu>,
 Max Reitz <mreitz@redhat.com>, Paolo Bonzini <pbonzini@redhat.com>,
 Anthony Perard <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org,
 John Snow <jsnow@redhat.com>,
 =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Let blk_attach_dev() take an Error* object to return helpful
information. Adapt the callers.

  $ qemu-system-arm -M n800
  qemu-system-arm: sd_init failed: cannot attach blk 'sd0' to device 'sd-card' because it is already attached by device 'omap2-mmc'
  Drive 'sd0' is already in use because it has been automatically connected to another device (did you need 'if=none' in the drive options?)

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
---
RFC: I'm not sure the error API is correctly used in
     set_drive_helper().

 include/sysemu/block-backend.h   |  2 +-
 block/block-backend.c            | 11 ++++++++++-
 hw/block/fdc.c                   |  4 +---
 hw/block/swim.c                  |  4 +---
 hw/block/xen-block.c             |  5 +++--
 hw/core/qdev-properties-system.c | 17 ++++++++++-------
 hw/ide/qdev.c                    |  4 +---
 hw/scsi/scsi-disk.c              |  4 +---
 8 files changed, 28 insertions(+), 23 deletions(-)

diff --git a/include/sysemu/block-backend.h b/include/sysemu/block-backend.h
index 8203d7f6f9..118fbad0b4 100644
--- a/include/sysemu/block-backend.h
+++ b/include/sysemu/block-backend.h
@@ -113,7 +113,7 @@ BlockDeviceIoStatus blk_iostatus(const BlockBackend *blk);
 void blk_iostatus_disable(BlockBackend *blk);
 void blk_iostatus_reset(BlockBackend *blk);
 void blk_iostatus_set_err(BlockBackend *blk, int error);
-int blk_attach_dev(BlockBackend *blk, DeviceState *dev);
+int blk_attach_dev(BlockBackend *blk, DeviceState *dev, Error **errp);
 void blk_detach_dev(BlockBackend *blk, DeviceState *dev);
 DeviceState *blk_get_attached_dev(BlockBackend *blk);
 char *blk_get_attached_dev_id(BlockBackend *blk);
diff --git a/block/block-backend.c b/block/block-backend.c
index 23caa45e6a..29480a121d 100644
--- a/block/block-backend.c
+++ b/block/block-backend.c
@@ -882,12 +882,21 @@ void blk_get_perm(BlockBackend *blk, uint64_t *perm, uint64_t *shared_perm)
 
 /*
  * Attach device model @dev to @blk.
+ *
+ * @blk: Block backend
+ * @dev: Device to attach the block backend to
+ * @errp: pointer to a NULL-initialized Error object, set on failure
+ *
  * Return 0 on success, -EBUSY when a device model is attached already.
  */
-int blk_attach_dev(BlockBackend *blk, DeviceState *dev)
+int blk_attach_dev(BlockBackend *blk, DeviceState *dev, Error **errp)
 {
     trace_blk_attach_dev(blk_name(blk), object_get_typename(OBJECT(dev)));
     if (blk->dev) {
+        error_setg(errp, "cannot attach blk '%s' to device '%s' because it is "
+                         "already attached by device '%s'",
+                   blk_name(blk), object_get_typename(OBJECT(dev)),
+                   object_get_typename(OBJECT(blk->dev)));
         return -EBUSY;
     }
 
diff --git a/hw/block/fdc.c b/hw/block/fdc.c
index 3425d56e2a..5765b5d4f2 100644
--- a/hw/block/fdc.c
+++ b/hw/block/fdc.c
@@ -519,7 +519,6 @@ static void floppy_drive_realize(DeviceState *qdev, Error **errp)
     FloppyBus *bus = FLOPPY_BUS(qdev->parent_bus);
     FDrive *drive;
     bool read_only;
-    int ret;
 
     if (dev->unit == -1) {
         for (dev->unit = 0; dev->unit < MAX_FD; dev->unit++) {
@@ -545,8 +544,7 @@ static void floppy_drive_realize(DeviceState *qdev, Error **errp)
     if (!dev->conf.blk) {
         /* Anonymous BlockBackend for an empty drive */
         dev->conf.blk = blk_new(qemu_get_aio_context(), 0, BLK_PERM_ALL);
-        ret = blk_attach_dev(dev->conf.blk, qdev);
-        assert(ret == 0);
+        blk_attach_dev(dev->conf.blk, qdev, &error_abort);
 
         /* Don't take write permissions on an empty drive to allow attaching a
          * read-only node later */
diff --git a/hw/block/swim.c b/hw/block/swim.c
index 74f56e8f46..2f1ecd0bb2 100644
--- a/hw/block/swim.c
+++ b/hw/block/swim.c
@@ -159,7 +159,6 @@ static void swim_drive_realize(DeviceState *qdev, Error **errp)
     SWIMDrive *dev = SWIM_DRIVE(qdev);
     SWIMBus *bus = SWIM_BUS(qdev->parent_bus);
     FDrive *drive;
-    int ret;
 
     if (dev->unit == -1) {
         for (dev->unit = 0; dev->unit < SWIM_MAX_FD; dev->unit++) {
@@ -185,8 +184,7 @@ static void swim_drive_realize(DeviceState *qdev, Error **errp)
     if (!dev->conf.blk) {
         /* Anonymous BlockBackend for an empty drive */
         dev->conf.blk = blk_new(qemu_get_aio_context(), 0, BLK_PERM_ALL);
-        ret = blk_attach_dev(dev->conf.blk, qdev);
-        assert(ret == 0);
+        blk_attach_dev(dev->conf.blk, qdev, &error_abort);
     }
 
     if (!blkconf_blocksizes(&dev->conf, errp)) {
diff --git a/hw/block/xen-block.c b/hw/block/xen-block.c
index 1b7bc5de08..81650208dc 100644
--- a/hw/block/xen-block.c
+++ b/hw/block/xen-block.c
@@ -616,14 +616,15 @@ static void xen_cdrom_realize(XenBlockDevice *blockdev, Error **errp)
     blockdev->device_type = "cdrom";
 
     if (!conf->blk) {
+        Error *local_err = NULL;
         int rc;
 
         /* Set up an empty drive */
         conf->blk = blk_new(qemu_get_aio_context(), 0, BLK_PERM_ALL);
 
-        rc = blk_attach_dev(conf->blk, DEVICE(blockdev));
+        rc = blk_attach_dev(conf->blk, DEVICE(blockdev), &local_err);
         if (!rc) {
-            error_setg_errno(errp, -rc, "failed to create drive");
+            error_propagate_prepend(errp, local_err, "failed to create drive");
             return;
         }
     }
diff --git a/hw/core/qdev-properties-system.c b/hw/core/qdev-properties-system.c
index 38b0c9f09b..26dc9f1779 100644
--- a/hw/core/qdev-properties-system.c
+++ b/hw/core/qdev-properties-system.c
@@ -139,18 +139,21 @@ static void set_drive_helper(Object *obj, Visitor *v, const char *name,
                    object_get_typename(OBJECT(dev)), prop->name, str);
         goto fail;
     }
-    if (blk_attach_dev(blk, dev) < 0) {
+    if (blk_attach_dev(blk, dev, &local_err) < 0) {
         DriveInfo *dinfo = blk_legacy_dinfo(blk);
 
         if (dinfo && dinfo->type != IF_NONE) {
-            error_setg(errp, "Drive '%s' is already in use because "
-                       "it has been automatically connected to another "
-                       "device (did you need 'if=none' in the drive options?)",
-                       str);
+            error_append_hint(&local_err,
+                              "Drive '%s' is already in use because it has "
+                              "been automatically connected to another device "
+                              "(did you need 'if=none' in the drive options?)"
+                              "\n", str);
         } else {
-            error_setg(errp, "Drive '%s' is already in use by another device",
-                       str);
+            error_append_hint(&local_err,
+                              "Drive '%s' is already in use by another "
+                              "device\n", str);
         }
+        error_propagate(errp, local_err);
         goto fail;
     }
 
diff --git a/hw/ide/qdev.c b/hw/ide/qdev.c
index f68fbee93d..b36499c9ee 100644
--- a/hw/ide/qdev.c
+++ b/hw/ide/qdev.c
@@ -165,7 +165,6 @@ static void ide_dev_initfn(IDEDevice *dev, IDEDriveKind kind, Error **errp)
 {
     IDEBus *bus = DO_UPCAST(IDEBus, qbus, dev->qdev.parent_bus);
     IDEState *s = bus->ifs + dev->unit;
-    int ret;
 
     if (!dev->conf.blk) {
         if (kind != IDE_CD) {
@@ -174,8 +173,7 @@ static void ide_dev_initfn(IDEDevice *dev, IDEDriveKind kind, Error **errp)
         } else {
             /* Anonymous BlockBackend for an empty drive */
             dev->conf.blk = blk_new(qemu_get_aio_context(), 0, BLK_PERM_ALL);
-            ret = blk_attach_dev(dev->conf.blk, &dev->qdev);
-            assert(ret == 0);
+            blk_attach_dev(dev->conf.blk, &dev->qdev, &error_abort);
         }
     }
 
diff --git a/hw/scsi/scsi-disk.c b/hw/scsi/scsi-disk.c
index 8ce68a9dd6..92350642c7 100644
--- a/hw/scsi/scsi-disk.c
+++ b/hw/scsi/scsi-disk.c
@@ -2451,14 +2451,12 @@ static void scsi_cd_realize(SCSIDevice *dev, Error **errp)
 {
     SCSIDiskState *s = DO_UPCAST(SCSIDiskState, qdev, dev);
     AioContext *ctx;
-    int ret;
 
     if (!dev->conf.blk) {
         /* Anonymous BlockBackend for an empty drive. As we put it into
          * dev->conf, qdev takes care of detaching on unplug. */
         dev->conf.blk = blk_new(qemu_get_aio_context(), 0, BLK_PERM_ALL);
-        ret = blk_attach_dev(dev->conf.blk, &dev->qdev);
-        assert(ret == 0);
+        blk_attach_dev(dev->conf.blk, &dev->qdev, &error_abort);
     }
 
     ctx = blk_get_aio_context(dev->conf.blk);
-- 
2.21.3



From xen-devel-bounces@lists.xenproject.org Mon Jul 06 13:26:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jul 2020 13:26:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jsR8h-00036n-Hq; Mon, 06 Jul 2020 13:26:31 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=MnCI=AR=gmail.com=philippe.mathieu.daude@srs-us1.protection.inumbo.net>)
 id 1jsR8g-00036i-8X
 for xen-devel@lists.xenproject.org; Mon, 06 Jul 2020 13:26:30 +0000
X-Inumbo-ID: 4cbb88b4-bf8c-11ea-bb8b-bc764e2007e4
Received: from mail-wr1-x432.google.com (unknown [2a00:1450:4864:20::432])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4cbb88b4-bf8c-11ea-bb8b-bc764e2007e4;
 Mon, 06 Jul 2020 13:26:29 +0000 (UTC)
Received: by mail-wr1-x432.google.com with SMTP id f18so32847578wrs.0
 for <xen-devel@lists.xenproject.org>; Mon, 06 Jul 2020 06:26:29 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=sender:from:to:cc:subject:date:message-id:mime-version
 :content-transfer-encoding;
 bh=tSVh3xZmU5Rl/VlxY1hhYknTsE0eO5DVOcNMGKELemE=;
 b=qbnbFANxBTkqeC8ntLvePUEA/6O+OhVKoiQ3wU8s9JPQSj4pshbaGeOTNW7MAamPcu
 Su+J3Dd1u74P4moHRgtiNEk57vt/sKjMkE9cJ46oAlj8Xe4Ltx59jSHx/tSifGlkp9jT
 jbSmk+3wA+8Uyjfv74QRqOXNsvpz9jkOL8r40qFfwnHqcWwWPWUUu5MwltusREfB6Yc0
 pS6kgkug7Yp/zCYMoPi2XhqG6MiJiRGd7USoADPe+zJglJiboMUZG8IhQY4v81cPu9zV
 mQl7kmK5T2dLV1wubFA+r0qPiAve69N7SG3QCH2MZOTHvjpa/0djo5deiRXNEJBfIsQR
 1Ayg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:sender:from:to:cc:subject:date:message-id
 :mime-version:content-transfer-encoding;
 bh=tSVh3xZmU5Rl/VlxY1hhYknTsE0eO5DVOcNMGKELemE=;
 b=SObFtEo0FMMb6I6mv14zMP6QV6awC4NOuPmAVbTREIR42OBN2Gp5D49Y/3WaUn+hCj
 vmYJ5V9P6ijW2KOGfB9HshPbQBF6yhPU1IXE7vdTF2nuVkYqMp3ZDKn6pUXfhqbVIrDI
 Oft8I65wugyhja49XLFXggun/gKr39z451gvJ6FJXQmlReI6b2k/geLfwXp4qypwm2fs
 UZ+SU8Q+TTUeeSYEDO3cCdqJupMgfgMn2HYRnPpzV8SZ7JL0mAANSC4qqzHEAP8R5Aat
 j0XJHQyreTzsh14zUD0wXt8ID/PL5MbxxJu812/iPgG6ooRjELGe3hY12AOojKhttR64
 g53w==
X-Gm-Message-State: AOAM531xPPvHIUXLccHKKB9BNqDdrUVa9pKGLVeTr+8qlqluqlxVDZg5
 UwwCxeqNyDS1nV5plWd/Vmw=
X-Google-Smtp-Source: ABdhPJy3fteBFpWnCcBm76WBnCreC3Ozl3AURYlcuW7tquHy8jvwLRuhDJFqfAi9hiIEX+1alLgbyg==
X-Received: by 2002:adf:f203:: with SMTP id p3mr20519115wro.331.1594041988856; 
 Mon, 06 Jul 2020 06:26:28 -0700 (PDT)
Received: from localhost.localdomain
 (138.red-83-57-170.dynamicip.rima-tde.net. [83.57.170.138])
 by smtp.gmail.com with ESMTPSA id w2sm24447004wrs.77.2020.07.06.06.26.27
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 06 Jul 2020 06:26:28 -0700 (PDT)
From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>
To: Markus Armbruster <armbru@redhat.com>,
	qemu-devel@nongnu.org
Subject: [PATCH 0/2] block/block-backend: Let blk_attach_dev() provide helpful
 error
Date: Mon,  6 Jul 2020 15:26:24 +0200
Message-Id: <20200706132626.22133-1-f4bug@amsat.org>
X-Mailer: git-send-email 2.21.3
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Fam Zheng <fam@euphon.net>, Kevin Wolf <kwolf@redhat.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 =?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
 Eduardo Habkost <ehabkost@redhat.com>, qemu-block@nongnu.org,
 Paul Durrant <paul@xen.org>, Laurent Vivier <laurent@vivier.eu>,
 Max Reitz <mreitz@redhat.com>, Paolo Bonzini <pbonzini@redhat.com>,
 Anthony Perard <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org,
 John Snow <jsnow@redhat.com>,
 =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

A pair of patches that helped me debug an issue with a block
drive that was already attached.

Suggestions on how to use the Error API correctly (or better) are
welcome, in particular in qdev-properties-system::set_drive_helper().

Philippe Mathieu-Daudé (2):
  block/block-backend: Trace blk_attach_dev()
  block/block-backend: Let blk_attach_dev() provide helpful error

 include/sysemu/block-backend.h   |  2 +-
 block/block-backend.c            | 12 +++++++++++-
 hw/block/fdc.c                   |  4 +---
 hw/block/swim.c                  |  4 +---
 hw/block/xen-block.c             |  5 +++--
 hw/core/qdev-properties-system.c | 17 ++++++++++-------
 hw/ide/qdev.c                    |  4 +---
 hw/scsi/scsi-disk.c              |  4 +---
 block/trace-events               |  1 +
 9 files changed, 30 insertions(+), 23 deletions(-)

-- 
2.21.3



From xen-devel-bounces@lists.xenproject.org Mon Jul 06 13:26:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jul 2020 13:26:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jsR8l-00037J-U2; Mon, 06 Jul 2020 13:26:35 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=MnCI=AR=gmail.com=philippe.mathieu.daude@srs-us1.protection.inumbo.net>)
 id 1jsR8l-00036i-5j
 for xen-devel@lists.xenproject.org; Mon, 06 Jul 2020 13:26:35 +0000
X-Inumbo-ID: 4da95cd8-bf8c-11ea-8496-bc764e2007e4
Received: from mail-wm1-x332.google.com (unknown [2a00:1450:4864:20::332])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4da95cd8-bf8c-11ea-8496-bc764e2007e4;
 Mon, 06 Jul 2020 13:26:31 +0000 (UTC)
Received: by mail-wm1-x332.google.com with SMTP id o8so39335467wmh.4
 for <xen-devel@lists.xenproject.org>; Mon, 06 Jul 2020 06:26:31 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=sender:from:to:cc:subject:date:message-id:in-reply-to:references
 :mime-version:content-transfer-encoding;
 bh=3YzIzSCf4xQQ4VX+CJllu+9r7xAX8oggdf7gxj4TtyA=;
 b=OLOCiChN4U2lmokDjguLFHw3NqgvzCz+ll7+2OxnLlKwNm/dOiupl0tgi/aB+OAAyK
 I/F6mS93iP2S859lCocZLYpI9hQ85S/00BO+Hlc4l8+Zno1Q3ve55ymxlclpIRWSSEzf
 dL1ciEGs3i7GHTaDS/vcjcuqtxw0BHZi/GGbuuJ2Z39hhexlqPdC1MHrt6mmmo4K09Qx
 6ESeQafi2xa8PK2l9D4UBFncrvVNv0BYL1B3PsGbkc0GhKxJEufjJpSnNYqcmxEeMFO+
 2iEI9NBmhocTbqGetsGXVwYt+C5l1sLBzikTUOq4PzOO6dixzuba1/zNRepylUTvGcgV
 ne1w==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:sender:from:to:cc:subject:date:message-id
 :in-reply-to:references:mime-version:content-transfer-encoding;
 bh=3YzIzSCf4xQQ4VX+CJllu+9r7xAX8oggdf7gxj4TtyA=;
 b=d4exX0Z/P1fQJALxTMuznEVouY4W0phtdpkEY/7iteI1PvXYodLSpIth1UlYL1BWGK
 X88Un0SNpAdlxn2hun+/gLRdZTTRKkYVikgUEPC5/cwuDrRdM2oNf8DraGwQKMhmlYDz
 0X3k2/OLKB3s8Q8s1cSMMehXvTjiH3lIHWAzpxkg7SGfPTmUDK0z5JfG5kZgmEYQxhfj
 ifdNV1Ej0nWvYdfdcSj/LxvAlczlVj+CY29VhK0Kvp/2ot06kQAO/4iR5T//ARfAzcQ9
 MM8Uw9PwBHcHP4IeRhaB4o8tNz9jNrwY6U3nzaO8UtXHP2zj2Jity+ALIQtjoXdz/+26
 sB6w==
X-Gm-Message-State: AOAM5322jA0yLOcDhKE3FRqgwDi+xfwzPS1aZ4RZ51B9RHt9oE90c3/6
 rhrkARXYZ+jG2q2Tcv9S6lE=
X-Google-Smtp-Source: ABdhPJzzgZUaGkuExMqbsCd96/zTEdWjJCyCUJK2slIkLxntVuI+YroZIga1OuRs954c/HT+Jfs5VQ==
X-Received: by 2002:a1c:1b0d:: with SMTP id b13mr47841887wmb.169.1594041990469; 
 Mon, 06 Jul 2020 06:26:30 -0700 (PDT)
Received: from localhost.localdomain
 (138.red-83-57-170.dynamicip.rima-tde.net. [83.57.170.138])
 by smtp.gmail.com with ESMTPSA id w2sm24447004wrs.77.2020.07.06.06.26.29
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 06 Jul 2020 06:26:29 -0700 (PDT)
From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>
To: Markus Armbruster <armbru@redhat.com>,
	qemu-devel@nongnu.org
Subject: [PATCH 1/2] block/block-backend: Trace blk_attach_dev()
Date: Mon,  6 Jul 2020 15:26:25 +0200
Message-Id: <20200706132626.22133-2-f4bug@amsat.org>
X-Mailer: git-send-email 2.21.3
In-Reply-To: <20200706132626.22133-1-f4bug@amsat.org>
References: <20200706132626.22133-1-f4bug@amsat.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Fam Zheng <fam@euphon.net>, Kevin Wolf <kwolf@redhat.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 =?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
 Eduardo Habkost <ehabkost@redhat.com>, qemu-block@nongnu.org,
 Paul Durrant <paul@xen.org>, Laurent Vivier <laurent@vivier.eu>,
 Max Reitz <mreitz@redhat.com>, Paolo Bonzini <pbonzini@redhat.com>,
 Anthony Perard <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org,
 John Snow <jsnow@redhat.com>,
 =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Add a trace event to follow which devices attach block drives:

  $ qemu-system-arm -M n800 -trace blk_\*
  9513@1594040428.738162:blk_attach_dev attaching blk 'sd0' to device 'omap2-mmc'
  9513@1594040428.738189:blk_attach_dev attaching blk 'sd0' to device 'sd-card'
  qemu-system-arm: sd_init failed: blk 'sd0' already attached by device 'sd-card'

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
---
 block/block-backend.c | 1 +
 block/trace-events    | 1 +
 2 files changed, 2 insertions(+)

diff --git a/block/block-backend.c b/block/block-backend.c
index 6936b25c83..23caa45e6a 100644
--- a/block/block-backend.c
+++ b/block/block-backend.c
@@ -886,6 +886,7 @@ void blk_get_perm(BlockBackend *blk, uint64_t *perm, uint64_t *shared_perm)
  */
 int blk_attach_dev(BlockBackend *blk, DeviceState *dev)
 {
+    trace_blk_attach_dev(blk_name(blk), object_get_typename(OBJECT(dev)));
     if (blk->dev) {
         return -EBUSY;
     }
diff --git a/block/trace-events b/block/trace-events
index dbe76a7613..aa641c2034 100644
--- a/block/trace-events
+++ b/block/trace-events
@@ -9,6 +9,7 @@ blk_co_preadv(void *blk, void *bs, int64_t offset, unsigned int bytes, int flags
 blk_co_pwritev(void *blk, void *bs, int64_t offset, unsigned int bytes, int flags) "blk %p bs %p offset %"PRId64" bytes %u flags 0x%x"
 blk_root_attach(void *child, void *blk, void *bs) "child %p blk %p bs %p"
 blk_root_detach(void *child, void *blk, void *bs) "child %p blk %p bs %p"
+blk_attach_dev(const char *blk_name, const char *dev_name) "attaching blk '%s' to device '%s'"
 
 # io.c
 bdrv_co_preadv(void *bs, int64_t offset, int64_t nbytes, unsigned int flags) "bs %p offset %"PRId64" nbytes %"PRId64" flags 0x%x"
-- 
2.21.3



From xen-devel-bounces@lists.xenproject.org Mon Jul 06 14:39:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jul 2020 14:39:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jsSGx-0000lw-2D; Mon, 06 Jul 2020 14:39:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Z/W2=AR=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jsSGv-0000lr-NK
 for xen-devel@lists.xenproject.org; Mon, 06 Jul 2020 14:39:05 +0000
X-Inumbo-ID: 703f0018-bf96-11ea-bb8b-bc764e2007e4
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 703f0018-bf96-11ea-bb8b-bc764e2007e4;
 Mon, 06 Jul 2020 14:39:04 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1594046343;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:content-transfer-encoding:in-reply-to;
 bh=v0e5AF/j7e9kwAWUc2BvTLeCZE2YrBNYU2sWxNQcAfQ=;
 b=RjO3rmson9mumnCBjhH8ttwu++ImYV2lz6/JUqiAHMODIMUD7KSVP6Pu
 qgqbGt2j08p+XTvdnLItBzjr97WkHJx3PCQPBU6LpOvy5oan5AYwAh+CB
 ccol++p7297KHv4YceJGqu7q/YSwZOFEwQVJsovfF0hpPLwvccfKYACYe 4=;
Authentication-Results: esa4.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: LZ5sHd8nyg+LhK+nzup6owCB8qV6HETp99CLLAYfvquZfj+dLTu8mgBOx/W0F0pNNYwSDThEs0
 eI1mMoK6xcQDhrSXriaH9w5wvK95h2JwLM8FO8mn83AsPUDABIztSxQupvFMD+WJ5j6dO48j9R
 zgPuK3j65D5/wrrQPcZWyNDtEDBpPAf3WevB3b5bNqD3MzijO/WxBZ1IyGUR0VFdBT/DAXT+RV
 cCKA8uUIerEv/dvwCP/xdRQyJ+uh8/py88laOEGd8xCLL0mX9qs6M0B3Yp5a8ssenANbKvSN9i
 OW4=
X-SBRS: 2.7
X-MesageID: 22504755
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,320,1589256000"; d="scan'208";a="22504755"
Date: Mon, 6 Jul 2020 16:38:53 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: =?utf-8?Q?Micha=C5=82_Leszczy=C5=84ski?= <michal.leszczynski@cert.pl>
Subject: Re: [PATCH v5 06/11] x86/hvm: processor trace interface in HVM
Message-ID: <20200706143853.GA7191@Air-de-Roger>
References: <cover.1593974333.git.michal.leszczynski@cert.pl>
 <a4833c8168e287f0caf1dc6f16ec5c054bd88b0a.1593974333.git.michal.leszczynski@cert.pl>
 <20200706084234.GB735@Air-de-Roger>
 <212702848.20024300.1594030142855.JavaMail.zimbra@cert.pl>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <212702848.20024300.1594030142855.JavaMail.zimbra@cert.pl>
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 tamas lengyel <tamas.lengyel@intel.com>, Wei Liu <wl@xen.org>, Andrew
 Cooper <andrew.cooper3@citrix.com>, Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, luwei kang <luwei.kang@intel.com>,
 Jan Beulich <jbeulich@suse.com>, xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Mon, Jul 06, 2020 at 12:09:02PM +0200, Michał Leszczyński wrote:
> ----- On 6 Jul 2020, at 10:42, Roger Pau Monné roger.pau@citrix.com wrote:
> 
> > On Sun, Jul 05, 2020 at 08:54:59PM +0200, Michał Leszczyński wrote:
> >> From: Michal Leszczynski <michal.leszczynski@cert.pl>
> >> 
> >> Implement necessary changes in common code/HVM to support
> >> processor trace features. Define vmtrace_pt_* API and
> >> implement trace buffer allocation/deallocation in common
> >> code.
> >> 
> >> Signed-off-by: Michal Leszczynski <michal.leszczynski@cert.pl>
> >> ---
> >>  xen/arch/x86/domain.c         | 19 +++++++++++++++++++
> >>  xen/common/domain.c           | 19 +++++++++++++++++++
> >>  xen/include/asm-x86/hvm/hvm.h | 20 ++++++++++++++++++++
> >>  xen/include/xen/sched.h       |  4 ++++
> >>  4 files changed, 62 insertions(+)
> >> 
> >> diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
> >> index fee6c3931a..79c9794408 100644
> >> --- a/xen/arch/x86/domain.c
> >> +++ b/xen/arch/x86/domain.c
> >> @@ -2199,6 +2199,25 @@ int domain_relinquish_resources(struct domain *d)
> >>                  altp2m_vcpu_disable_ve(v);
> >>          }
> >>  
> >> +        for_each_vcpu ( d, v )
> >> +        {
> >> +            unsigned int i;
> >> +
> >> +            if ( !v->vmtrace.pt_buf )
> >> +                continue;
> >> +
> >> +            for ( i = 0; i < (v->domain->vmtrace_pt_size >> PAGE_SHIFT); i++ )
> >> +            {
> >> +                struct page_info *pg = mfn_to_page(
> >> +                    mfn_add(page_to_mfn(v->vmtrace.pt_buf), i));
> >> +                if ( (pg->count_info & PGC_count_mask) != 1 )
> >> +                    return -EBUSY;
> >> +            }
> >> +
> >> +            free_domheap_pages(v->vmtrace.pt_buf,
> >> +                get_order_from_bytes(v->domain->vmtrace_pt_size));
> > 
> > This is racy as a control domain could take a reference between the
> > check and the freeing.
> > 
> >> +        }
> >> +
> >>          if ( is_pv_domain(d) )
> >>          {
> >>              for_each_vcpu ( d, v )
> >> diff --git a/xen/common/domain.c b/xen/common/domain.c
> >> index 25d3359c5b..f480c4e033 100644
> >> --- a/xen/common/domain.c
> >> +++ b/xen/common/domain.c
> >> @@ -137,6 +137,21 @@ static void vcpu_destroy(struct vcpu *v)
> >>      free_vcpu_struct(v);
> >>  }
> >>  
> >> +static int vmtrace_alloc_buffers(struct vcpu *v)
> >> +{
> >> +    struct page_info *pg;
> >> +    uint64_t size = v->domain->vmtrace_pt_size;
> >> +
> >> +    pg = alloc_domheap_pages(v->domain, get_order_from_bytes(size),
> >> +                             MEMF_no_refcount);
> >> +
> >> +    if ( !pg )
> >> +        return -ENOMEM;
> >> +
> >> +    v->vmtrace.pt_buf = pg;
> >> +    return 0;
> >> +}
> > 
> > I think we already agreed that you would use the same model as ioreq
> > servers, where a reference is taken on allocation and then the pages
> > are not explicitly freed on domain destruction and put_page_and_type
> > is used. Is there some reason why that model doesn't work in this
> > case?
> > 
> > If not, please see hvm_alloc_ioreq_mfn and hvm_free_ioreq_mfn.
> > 
> > Roger.
> 
> 
> Ok, I've got it, will do. Thanks for pointing out the examples.
> 
> 
> One thing that is confusing to me is that I don't get what is
> the meaning of MEMF_no_refcount flag.

That flag prevents the allocation from being counted towards the amount
of memory assigned to the domain. You want it that way so that the
trace buffers are not accounted as part of the domain's assigned
memory.

You then need to take an (extra) reference to the pages (there's always
the 'allocated' reference AFAICT) so that the memory is freed only when
the last reference is dropped (either by the domain being destroyed or
by the memory being unmapped from the control domain).

> In the hvm_{alloc,free}_ioreq_mfn the memory is allocated
> explicitly but freed just by putting out the reference, so
> I guess it's automatically detected that the refcount dropped to 0
> and the page should be freed?

Yes, put_page_alloc_ref() will remove the 'allocated' flag, and the
page will then be freed when the last reference is dropped.

> If so, why the flag is named "no refcount"?

I'm not sure about the naming, but you can take references to pages
allocated with MEMF_no_refcount, and change their types. They are,
however, not accounted towards the memory usage of the domain.

Roger.


From xen-devel-bounces@lists.xenproject.org Mon Jul 06 15:06:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jul 2020 15:06:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jsShd-0003CI-BC; Mon, 06 Jul 2020 15:06:41 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=m1N9=AR=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jsShb-0003CD-Qq
 for xen-devel@lists.xenproject.org; Mon, 06 Jul 2020 15:06:39 +0000
X-Inumbo-ID: 4a0b03ca-bf9a-11ea-b7bb-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4a0b03ca-bf9a-11ea-b7bb-bc764e2007e4;
 Mon, 06 Jul 2020 15:06:37 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=Fdf0Ps44CiK97gHKrl6HwSuZBsVjmofn5lIjE1cE/34=; b=5boBDorIOBk0mAGVQVK6oGt22
 2OjEIofcnLY8dPe7OVoe3/TNRVUAnrAx8MwOMpqY7FswMwZQWFx+7nihoAZJInnnKdO64AHd/fQaV
 O+4W5Vb6YHjs8PE3Hl34NKLMsMmw8irst17csPUxC2q6SuODWfuivp8aerM1txnPlrUj4=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jsShZ-0000UU-Fv; Mon, 06 Jul 2020 15:06:37 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jsShZ-0006MH-4B; Mon, 06 Jul 2020 15:06:37 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jsShZ-0007xd-3R; Mon, 06 Jul 2020 15:06:37 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151662-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 151662: regressions - trouble:
 blocked/broken/fail/pass
X-Osstest-Failures: linux-linus:test-armhf-armhf-xl-cubietruck:<job
 status>:broken:regression
 linux-linus:test-armhf-armhf-xl-cubietruck:host-install(4):broken:regression
 linux-linus:test-arm64-arm64-libvirt-xsm:guest-start.2:fail:regression
 linux-linus:build-i386-pvops:kernel-build:fail:regression
 linux-linus:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:regression
 linux-linus:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:allowable
 linux-linus:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-examine:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-pair:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
X-Osstest-Versions-This: linux=dcb7fd82c75ee2d6e6f9d8cc71c52519ed52e258
X-Osstest-Versions-That: linux=1b5044021070efa3259f3e9548dc35d1eb6aa844
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 06 Jul 2020 15:06:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151662 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151662/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-cubietruck    <job status>                 broken
 test-armhf-armhf-xl-cubietruck  4 host-install(4)      broken REGR. vs. 151214
 test-arm64-arm64-libvirt-xsm 17 guest-start.2            fail REGR. vs. 151214
 build-i386-pvops              6 kernel-build             fail REGR. vs. 151214
 test-armhf-armhf-xl-vhd     15 guest-start/debian.repeat fail REGR. vs. 151214

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     18 guest-localmigrate/x10   fail REGR. vs. 151214

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemut-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-examine       1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 151214
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 151214
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 151214
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 151214
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 151214
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 151214
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass

version targeted for testing:
 linux                dcb7fd82c75ee2d6e6f9d8cc71c52519ed52e258
baseline version:
 linux                1b5044021070efa3259f3e9548dc35d1eb6aa844

Last test of basis   151214  2020-06-18 02:27:46 Z   18 days
Failing since        151236  2020-06-19 19:10:35 Z   16 days   24 attempts
Testing same since   151662  2020-07-06 02:35:00 Z    0 days    1 attempts

------------------------------------------------------------
568 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             fail    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         blocked 
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               broken  
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-armhf-armhf-xl-cubietruck broken
broken-step test-armhf-armhf-xl-cubietruck host-install(4)

Not pushing.

(No revision log; it would be 27484 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Jul 06 15:14:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jul 2020 15:14:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jsSpB-00044O-Ba; Mon, 06 Jul 2020 15:14:29 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=dTkW=AR=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jsSp9-00044B-HM
 for xen-devel@lists.xenproject.org; Mon, 06 Jul 2020 15:14:27 +0000
X-Inumbo-ID: 61688384-bf9b-11ea-b7bb-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 61688384-bf9b-11ea-b7bb-bc764e2007e4;
 Mon, 06 Jul 2020 15:14:26 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 553E7AAD0;
 Mon,  6 Jul 2020 15:14:26 +0000 (UTC)
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] x86emul: fix FXRSTOR test for most AMD CPUs
Message-ID: <29986a8f-47bf-43c2-98e9-e08c1c5925af@suse.com>
Date: Mon, 6 Jul 2020 17:14:24 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Paul Durrant <paul@xen.org>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

AMD CPUs that we classify as X86_BUG_FPU_PTRS don't touch the selector/
offset portion of the save image during FXSAVE unless an unmasked
exception is pending. Hence the selector zapping done between the
initial FXSAVE and the emulated FXRSTOR needs to be mirrored onto the
second FXSAVE, output of which gets fed into memcmp() to compare with
the input image.

Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/tools/tests/x86_emulator/test_x86_emulator.c
+++ b/tools/tests/x86_emulator/test_x86_emulator.c
@@ -2577,6 +2577,7 @@ int main(int argc, char **argv)
         regs.ecx = (unsigned long)(res + 0x81);
         rc = x86_emulate(&ctxt, &emulops);
         asm volatile ( "fxsave %0" : "=m" (res[0x100]) :: "memory" );
+        zap_xfpsel(&res[0x100]);
         if ( (rc != X86EMUL_OKAY) ||
              memcmp(res + 0x100, res + 0x80, 0x200) ||
              (regs.eip != (unsigned long)&instr[4]) )
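[Editorial sketch, not part of the original mail: the mechanism the patch relies on can be pictured with the layout of the 512-byte FXSAVE image. The struct and the zap helper below are illustrative assumptions modelled on the 32-bit FXSAVE format; `zap_xfpsel_sketch` is a hypothetical stand-in for the harness's `zap_xfpsel()`, showing why clearing the selector words makes two images from an erratum-affected CPU compare equal with memcmp().]

```c
#include <stdint.h>
#include <string.h>

/*
 * Illustrative layout of the first 32 bytes of a 512-byte FXSAVE image
 * (32-bit protected-mode format). On CPUs with the X86_BUG_FPU_PTRS
 * erratum, FXSAVE leaves fcs/fds untouched unless an unmasked FP
 * exception is pending, so stale selector values can differ between
 * two otherwise identical saves.
 */
struct fxsave_image {
    uint16_t fcw, fsw;          /* control / status words */
    uint8_t  ftw, rsvd0;
    uint16_t fop;
    uint32_t fip;               /* FPU instruction pointer offset */
    uint16_t fcs, rsvd1;        /* FPU instruction pointer selector */
    uint32_t fdp;               /* FPU data pointer offset */
    uint16_t fds, rsvd2;        /* FPU data pointer selector */
    uint32_t mxcsr, mxcsr_mask;
    uint8_t  rest[512 - 32];    /* ST/MM and XMM register area etc. */
};

/*
 * Hypothetical analogue of zap_xfpsel(): zero the selector words so
 * that images which differ only in stale selector state memcmp() equal.
 */
static void zap_xfpsel_sketch(struct fxsave_image *img)
{
    img->fcs = 0;
    img->fds = 0;
}
```

Under this model, applying the zap to both the pre-FXRSTOR input image and the post-emulation FXSAVE output (as the one-line patch above does for the second save) removes the only field the erratum leaves nondeterministic.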


From xen-devel-bounces@lists.xenproject.org Mon Jul 06 15:47:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jul 2020 15:47:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jsTKo-0006YJ-28; Mon, 06 Jul 2020 15:47:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=G1NU=AR=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jsTKn-0006YE-Fy
 for xen-devel@lists.xenproject.org; Mon, 06 Jul 2020 15:47:09 +0000
X-Inumbo-ID: f1c8d4f2-bf9f-11ea-b7bb-bc764e2007e4
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f1c8d4f2-bf9f-11ea-b7bb-bc764e2007e4;
 Mon, 06 Jul 2020 15:47:07 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1594050426;
 h=subject:to:cc:references:from:message-id:date:
 mime-version:in-reply-to:content-transfer-encoding;
 bh=1fFrTf8uNkEHrsl86x9jQ69OdtuBRhByr0KriIBoYN0=;
 b=JIWhfXJ7dgXKh+tfqkXqgoR5nm/hlUH7fsiZXxlWQ3U3PPkz5Ewf7t6C
 JR7B24O0o4GJTZ6+MM9wRPhcKfVdlghdv5F5gyvGX2Gdw3aZ7tSMaFvFt
 s5nV8YrXnHkAjuGKv23rFblkN8XMuWO8XN+itHtIco1ksS54uY0liMUU7 8=;
Authentication-Results: esa5.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: GuwQeNwa6kLJulgktJ427JYsSOhXXd9CvxtNTtoKCyvvUNj3AkXKV9vwK6/48h3nDg1Xxf6BA3
 ojT29Yf6LZ0Axj1m7XW0TvibqUFnwMnbQh+Nn4TszvnmFBFdrB2jRPmWo9lerhGcxIJdvNeG11
 WL32A2nyYwaIG6mVhmCkEtN/FIe3Iyz2N3NGvqiZJhc3u8Cuz+orAE9gksT7UP4g1IuTX5j+L+
 Z+hLpCsEpkuaiCLsFKqeeGtHbDn4SkXTkOaSgEzW5bRW/r6RbqbRIRfY6L9blWBIsgVzwVnFi8
 UN4=
X-SBRS: 2.7
X-MesageID: 21898130
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,320,1589256000"; d="scan'208";a="21898130"
Subject: Re: [PATCH] x86emul: fix FXRSTOR test for most AMD CPUs
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
 <xen-devel@lists.xenproject.org>
References: <29986a8f-47bf-43c2-98e9-e08c1c5925af@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <b35fc5f2-4f12-38f2-088c-cee019e8cbad@citrix.com>
Date: Mon, 6 Jul 2020 16:46:51 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <29986a8f-47bf-43c2-98e9-e08c1c5925af@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Paul Durrant <paul@xen.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 06/07/2020 16:14, Jan Beulich wrote:
> AMD CPUs that we classify as X86_BUG_FPU_PTRS don't touch the selector/
> offset portion of the save image during FXSAVE unless an unmasked
> exception is pending. Hence the selector zapping done between the
> initial FXSAVE and the emulated FXRSTOR needs to be mirrored onto the
> second FXSAVE, output of which gets fed into memcmp() to compare with
> the input image.
>
> Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
Tested-by: Andrew Cooper <andrew.cooper3@citrix.com>


From xen-devel-bounces@lists.xenproject.org Mon Jul 06 15:56:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jul 2020 15:56:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jsTTg-0007Qm-0L; Mon, 06 Jul 2020 15:56:20 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=bUWB=AR=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1jsTTe-0007Qg-GV
 for xen-devel@lists.xenproject.org; Mon, 06 Jul 2020 15:56:18 +0000
X-Inumbo-ID: 3a1885b2-bfa1-11ea-bb8b-bc764e2007e4
Received: from mail-ej1-x62d.google.com (unknown [2a00:1450:4864:20::62d])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3a1885b2-bfa1-11ea-bb8b-bc764e2007e4;
 Mon, 06 Jul 2020 15:56:17 +0000 (UTC)
Received: by mail-ej1-x62d.google.com with SMTP id dp18so43102273ejc.8
 for <xen-devel@lists.xenproject.org>; Mon, 06 Jul 2020 08:56:17 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
 :mime-version:content-transfer-encoding:thread-index
 :content-language;
 bh=lzmS/DYvRKOTwpNhxXahP1nSGfDSZFhfmUICA799hbk=;
 b=ePVJw4MA6crsb+kswF7B18G8D53C8daZXDfoVML6DDPN+BoAu8k28NRIo17Mc5hZcK
 p6fLtI/8ySfigwIkYjoWxmZwp5LBddnF4HWQHYl67RX5K8qg2jEybQmQXt+2trgN16hj
 ZYNq/xeB9BJ24UD43fTMYdZplvlmiGI/AMEoFf6EYEhfvvorXMCIx6gj9C05xwnppWv2
 Q1ViYS6tRJLxpYYeS9Q7Hjrd21AJsAn4N+jpjaqWGDmIioC/srnFrpIQ4F7UtloOQMdP
 IvPcHwPwVueKxr7JA3inxy0oX0PNqLoV9n7+f+/uPZ0YC5OEtMcaw0TnrTdyCV7geNKo
 xT7w==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
 :subject:date:message-id:mime-version:content-transfer-encoding
 :thread-index:content-language;
 bh=lzmS/DYvRKOTwpNhxXahP1nSGfDSZFhfmUICA799hbk=;
 b=HGqcsNaVTIi3e+WUi/GWl4Xv88fYX7Sle6vIUsWgWuNU8xY0fh0+0jL8VFJMR9qhQh
 7ty8mfH0nYKZ8YvrumHUZmF4Zzt437f9K0D6r15Q225tmUzYrV6IBzD/bqfOp+3eVTMt
 yKERTu0VIRMWfBoWaC7NyMsx5PeO4u+68s2kCmdTw0or8RLMMPdf6zP5+Iv5YUT+ZqEQ
 UfH6DCMqOnacV+BFQYd/Wt4kBAbdSKjHNR3PLHv8dOgBU2xXhog1OPWZRhJRaNnhamRf
 xNbPDqj592OiG4nxqO2Gor6o46s1cxxWgMlfY8YBoiww3tQlydhlPw8DCzVIGgBn/cPJ
 SarA==
X-Gm-Message-State: AOAM531dOHUybFU3zsnlX8wNp4+hvpv5dHOclBlFXPNFtv2sg49dVEQh
 iOLKT7Ndug9p+wkynSO3BRk=
X-Google-Smtp-Source: ABdhPJyjxaDCh8Thngtc2Y2F/2vQlOMQYsyUey8lzuXQA+2ioovRZcbTBSk0A+RVOpNLSETLsPgHWw==
X-Received: by 2002:a17:906:fa9b:: with SMTP id
 lt27mr41667890ejb.365.1594050976936; 
 Mon, 06 Jul 2020 08:56:16 -0700 (PDT)
Received: from CBGR90WXYV0 (54-240-197-239.amazon.com. [54.240.197.239])
 by smtp.gmail.com with ESMTPSA id n9sm20940928edr.46.2020.07.06.08.56.15
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Mon, 06 Jul 2020 08:56:16 -0700 (PDT)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
To: "'Andrew Cooper'" <andrew.cooper3@citrix.com>,
 "'Jan Beulich'" <jbeulich@suse.com>, <xen-devel@lists.xenproject.org>
References: <29986a8f-47bf-43c2-98e9-e08c1c5925af@suse.com>
 <b35fc5f2-4f12-38f2-088c-cee019e8cbad@citrix.com>
In-Reply-To: <b35fc5f2-4f12-38f2-088c-cee019e8cbad@citrix.com>
Subject: RE: [PATCH] x86emul: fix FXRSTOR test for most AMD CPUs
Date: Mon, 6 Jul 2020 16:56:15 +0100
Message-ID: <008101d653ad$fb3dd770$f1b98650$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="utf-8"
Content-Transfer-Encoding: quoted-printable
X-Mailer: Microsoft Outlook 16.0
Thread-Index: AQKG6H/6mYVSMkUlsOmjEEFKANg48gJqZY+gp4Y3qDA=
Content-Language: en-gb
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Reply-To: paul@xen.org
Cc: 'Wei Liu' <wl@xen.org>,
 =?utf-8?Q?'Roger_Pau_Monn=C3=A9'?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> -----Original Message-----
> From: Andrew Cooper <andrew.cooper3@citrix.com>
> Sent: 06 July 2020 16:47
> To: Jan Beulich <jbeulich@suse.com>; xen-devel@lists.xenproject.org
> Cc: Wei Liu <wl@xen.org>; Roger Pau Monné <roger.pau@citrix.com>; Paul Durrant <paul@xen.org>
> Subject: Re: [PATCH] x86emul: fix FXRSTOR test for most AMD CPUs
>
> On 06/07/2020 16:14, Jan Beulich wrote:
> > AMD CPUs that we classify as X86_BUG_FPU_PTRS don't touch the selector/
> > offset portion of the save image during FXSAVE unless an unmasked
> > exception is pending. Hence the selector zapping done between the
> > initial FXSAVE and the emulated FXRSTOR needs to be mirrored onto the
> > second FXSAVE, output of which gets fed into memcmp() to compare with
> > the input image.
> >
> > Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
> > Signed-off-by: Jan Beulich <jbeulich@suse.com>
>
> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Tested-by: Andrew Cooper <andrew.cooper3@citrix.com>

Release-acked-by: Paul Durrant <paul@xen.org>



From xen-devel-bounces@lists.xenproject.org Mon Jul 06 16:06:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jul 2020 16:06:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jsTdG-0000PZ-Sd; Mon, 06 Jul 2020 16:06:14 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=xIq9=AR=gmail.com=brendank310@srs-us1.protection.inumbo.net>)
 id 1jsTdE-0000PU-Sr
 for xen-devel@lists.xenproject.org; Mon, 06 Jul 2020 16:06:12 +0000
X-Inumbo-ID: 9c82b71c-bfa2-11ea-8496-bc764e2007e4
Received: from mail-vk1-xa2a.google.com (unknown [2607:f8b0:4864:20::a2a])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9c82b71c-bfa2-11ea-8496-bc764e2007e4;
 Mon, 06 Jul 2020 16:06:12 +0000 (UTC)
Received: by mail-vk1-xa2a.google.com with SMTP id h190so8700739vkh.6
 for <xen-devel@lists.xenproject.org>; Mon, 06 Jul 2020 09:06:12 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=mime-version:from:date:message-id:subject:to;
 bh=AO+/+dlfB43Jh9L3Jfj1tlqFu6lFfp/rjuSWYXRjhd4=;
 b=sTfQ+G8O8eL5Muvs+3ooGeiWX/nBtpuviDa6AnsfMzh6/F6CScF7vs6JhyIDMyMSrl
 EOuRnqsZ9Q/jxqq8oDzjRK2cd9fMnZv8uwIA2ll25AOrCkkumn+vsZ3vlDtrF57TWNh8
 pqyNyNG28Bz23smu68aSYHd12XSsHMcuF6IbZjExb/ozV02RQL3PI7OQo1fkZm960G/d
 2AG/xc9azAkAWZCXhwmRFouiUCy9yd5SP3Mg/TnSg9bt78dHAHdkkyEWFgbh6PPpnNQo
 p6ndfPs3SFq3LiQLCH5xg2tZAXyoRWUvsFaWA7PZKUK99QB64Mw/49RS4ebtah9HnMJ2
 LsiQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:from:date:message-id:subject:to;
 bh=AO+/+dlfB43Jh9L3Jfj1tlqFu6lFfp/rjuSWYXRjhd4=;
 b=D8di8gh1PM4yzLEDXwOuROZOF8OXqAsXMGWz0bXHI5zfTsQ5n3gdYyZXQuxAsmqmdj
 9E6WmdcKmltpJO1kSRq3h/pswFYY4si6rELHfvb9GBdgHxejpDSCoRJjTcf0I5UChdar
 IrE2kR0J0HyVuHZjnJC0Khbi3aoh0qDtxWgEKpzSjAWW045NBo4G/pIuTq1g92W3ydbw
 ML9M59MMTQlu+1ToAWdhpdgyZ+QMk48KxmPQ2XEmqaVgp1M4f3fzlvFf3D1WL7kYcpxN
 OkezjjllaOm8/+jU0K163ExkkBteLy4KBiwVs4GsT/ES4RN/Y6IEyq1gv83wJmp0rbIy
 wNEw==
X-Gm-Message-State: AOAM530NBU1dPubh73oUf41+2h2TgHf3nHJA/dBden+6iQsjZ1v4neIv
 UkYyh8HbzW7MyqUGxxu+OInsrfWwDJNS1F9hbUvbsf+j
X-Google-Smtp-Source: ABdhPJxcrc/rZrXkqSMPXCxldBwjEO9EvtwWrW48NeL0DreIIHwkWB8BZcvETZdlVaoLpVwh9fQIPbq1v/HzDEx7ISQ=
X-Received: by 2002:a05:6122:32f:: with SMTP id
 d15mr4513397vko.101.1594051571437; 
 Mon, 06 Jul 2020 09:06:11 -0700 (PDT)
MIME-Version: 1.0
From: Brendan Kerrigan <brendank310@gmail.com>
Date: Mon, 6 Jul 2020 12:06:00 -0400
Message-ID: <CAKPa3c1h=eOhLfVt2GaoE9zk5iyoWyvEmWy-BHCMKpjmNsdJ8A@mail.gmail.com>
Subject: Design Session Notes - "How best to upstream Bareflank's
 implementation of the Xen VMM into the Xen Project"
To: xen-devel@lists.xenproject.org
Content-Type: multipart/alternative; boundary="000000000000ab574205a9c80ddf"
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

--000000000000ab574205a9c80ddf
Content-Type: text/plain; charset="UTF-8"

# Design Session Notes - "How best to upstream Bareflank's implementation
of the Xen VMM into the Xen Project"
## Design Session Description
Assured Information Security, Inc. has been working on a new implementation
of the Xen VMM (i.e., just the root component of the Xen hypervisor) using
Bareflank/Boxy (https://github.com/Bareflank/hypervisor). The goals of this
VMM include the ability to reload the hypervisor without having to reboot,
support for a Windows Dom0 (or any Dom0 really), removal of the Xen
scheduling and power management code and instead using the scheduler and
power management logic built into the Dom0 kernel, and removal of PV
support in favor of a pure PVH/HVM implementation. Although there is still
a lot of work to do, we can demonstrate this capability today. The goal of
this design session is to discuss the design of our new approach, ways in
which we can improve it, and ultimately how best to upstream our work into
the Xen Project.

## Current Status of Xen compatibility in Bareflank
Bareflank has a compatibility layer for the Xen PVH hypercalls, with the
goals of re-using existing Xen VMs, improving performance, simplifying
scheduling and power management, and enabling non-traditional dom0s, such
as Windows.
* Prototype is up and running:
  * Removal of scheduling/power management code
  * Windows dom0
  * Hotloading the hypervisor (optional late launch, considered desirable
by gwd)
  * Doesn't share any code directly with Xen at the moment, though Xen was
referenced throughout the implementation
* No legacy PV mode; PVH only (currently), though HVM is planned down the
road
* APIC not managed by the hypervisor, but by dom0
* MSIs are mostly passed directly to dom0
* Minimal interference with power management; the operating system deals
with it
* Removal of possible schedule aliasing
* libxl toolstack runs outside of dom0

## Discussion of upstreamability (desirable by andyhhp)
* How to do it organizationally:
  * Subproject (feedback indicated this was undesirable; there would be
too much overhead)
  * Directly bring capabilities into the mainline (feedback indicated this
was more desirable)
* Potential technical requirements:
  * Add a scheduling API to allow the use of a host OS scheduler rather
than the built-in Xen scheduler (may be difficult)

## Proposed Actions:
* Kconfig options to strip out undesirable code portions
* Add the ability to do rootkit-like late loading of Xen (like
hxen/Bareflank support)
* Addition of more hypercalls to support dom0 (or more generally
non-hypervisor context) scheduling
* Stay engaged with xen-devel to align changes with potential ABI changes
that are coming down the road to support updating libxl and the hypervisor
in stages and supporting encrypted memory schemes that are incorporated in
newer architectures
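
The Kconfig idea above could look roughly like this (illustrative only:
these symbols are not in Xen's Kconfig today, and the option names and
dependencies are guesses):

```kconfig
# Hypothetical options for stripping code a Bareflank-style VMM replaces.
config HOST_SCHEDULER
	bool "Delegate vCPU scheduling to the dom0 kernel"
	help
	  Remove the built-in Xen schedulers and expose hypercalls that
	  let the dom0 kernel drive vCPU placement directly.

config HOST_POWER_MGMT
	bool "Delegate power management to the dom0 kernel"
	depends on HOST_SCHEDULER
	help
	  Leave cpufreq/cpuidle decisions entirely to dom0.
```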

--000000000000ab574205a9c80ddf--


From xen-devel-bounces@lists.xenproject.org Mon Jul 06 16:14:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jul 2020 16:14:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jsTlW-0001H1-Od; Mon, 06 Jul 2020 16:14:46 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=m1N9=AR=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jsTlV-0001Gf-Tv
 for xen-devel@lists.xenproject.org; Mon, 06 Jul 2020 16:14:45 +0000
X-Inumbo-ID: cb190a1c-bfa3-11ea-8ca5-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id cb190a1c-bfa3-11ea-8ca5-12813bfff9fa;
 Mon, 06 Jul 2020 16:14:39 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=byQLjSd0KSNAJa4Eel5OiE3klijhxuvu9u8mL7wjNmc=; b=1njze2v0q11ryM7IycmKI8+0t
 q9eYkMRgS/jr62OqG3SfGUofHnUY8b7mxN4ZhQ3eCFEKCqK6b6YUHnZJoDApiWMMW5DpWqTG3iJP0
 a2ae5mIg/IrpIrzbztGKhAW4OYUTj5LHh4oKysTwm2PjybhSRusIkhfZyTSe/Pv+wa3sY=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jsTlP-0002HV-7r; Mon, 06 Jul 2020 16:14:39 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jsTlO-0007v1-Uu; Mon, 06 Jul 2020 16:14:39 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jsTlO-0001Qx-UD; Mon, 06 Jul 2020 16:14:38 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151679-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 151679: tolerable all pass - PUSHED
X-Osstest-Failures: xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=158912a532fe98f448c688d3571241c9033553bd
X-Osstest-Versions-That: xen=be63d9d47f571a60d70f8fb630c03871312d9655
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 06 Jul 2020 16:14:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151679 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151679/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  158912a532fe98f448c688d3571241c9033553bd
baseline version:
 xen                  be63d9d47f571a60d70f8fb630c03871312d9655

Last test of basis   151535  2020-07-02 10:00:52 Z    4 days
Testing same since   151671  2020-07-06 09:02:11 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Anthony PERARD <anthony.perard@citrix.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   be63d9d47f..158912a532  158912a532fe98f448c688d3571241c9033553bd -> smoke


From xen-devel-bounces@lists.xenproject.org Mon Jul 06 16:33:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jul 2020 16:33:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jsU3i-0002vH-ER; Mon, 06 Jul 2020 16:33:34 +0000
Resent-Date: Mon, 06 Jul 2020 16:33:34 +0000
Resent-Message-Id: <E1jsU3i-0002vH-ER@lists.xenproject.org>
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jHNp=AR=patchew.org=no-reply@srs-us1.protection.inumbo.net>)
 id 1jsU3h-0002vC-Fe
 for xen-devel@lists.xenproject.org; Mon, 06 Jul 2020 16:33:33 +0000
X-Inumbo-ID: 6b5631ee-bfa6-11ea-8ca7-12813bfff9fa
Received: from sender4-of-o57.zoho.com (unknown [136.143.188.57])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6b5631ee-bfa6-11ea-8ca7-12813bfff9fa;
 Mon, 06 Jul 2020 16:33:29 +0000 (UTC)
ARC-Seal: i=1; a=rsa-sha256; t=1594053086; cv=none; 
 d=zohomail.com; s=zohoarc; 
 b=LS875Uv9d9HGFDAKLhmSjilVxySn9CTrOpq477pUqrnVNpkOGVXyYRFhizsNHtL1+fwiONbIM86CJxoNKiZBkPmhjkdWTbm53WBBo+DD6tmZkEjaaYmKmOCu/xUTUk2FASbb5XyYMPZmJu8z3Sr7MqOn1IpyWc6x+UYvIja8McI=
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com;
 s=zohoarc; t=1594053086;
 h=Content-Type:Content-Transfer-Encoding:Cc:Date:From:In-Reply-To:MIME-Version:Message-ID:Reply-To:Subject:To;
 bh=zwr/zvLfh1iynU5qvrGxTRUzOodNjSDIsaVPgddNeow=; 
 b=dMT3EGbix+DCJ2oUwtVgQfqvDAvvEi4OYCAm2wAYBxBXS65Ffdu12oTJ1spZBLo3QAxNTeMf8GvI1KfbKLuv46jhluUjQNeZMACbmoEAppU3iQRx0oXU1KZ70joHhAtvDvsVc7LXfb3eOYYlgrwoR+VX3xYNrhqhrH1bnoppinU=
ARC-Authentication-Results: i=1; mx.zohomail.com;
 spf=pass  smtp.mailfrom=no-reply@patchew.org;
 dmarc=pass header.from=<no-reply@patchew.org>
 header.from=<no-reply@patchew.org>
Received: from [172.17.0.3] (23.253.156.214 [23.253.156.214]) by
 mx.zohomail.com with SMTPS id 1594053081886705.6943671360408;
 Mon, 6 Jul 2020 09:31:21 -0700 (PDT)
Message-ID: <159405307662.7847.17757844911728214859@d1fd068a5071>
Subject: Re: [PATCH] trivial: Remove trailing whitespaces
In-Reply-To: <20200706162300.1084753-1-dinechin@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
Resent-From: 
From: no-reply@patchew.org
To: dinechin@redhat.com
Date: Mon, 6 Jul 2020 09:31:21 -0700 (PDT)
X-ZohoMailClient: External
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Reply-To: qemu-devel@nongnu.org
Cc: peter.maydell@linaro.org, dmitry.fleytman@gmail.com, mst@redhat.com,
 jasowang@redhat.com, mark.cave-ayland@ilande.co.uk, qemu-devel@nongnu.org,
 armbru@redhat.com, jcmvbkbc@gmail.com, kraxel@redhat.com,
 edgar.iglesias@gmail.com, jcd@tribudubois.net, marex@denx.de,
 sstabellini@kernel.org, qemu-block@nongnu.org, qemu-trivial@nongnu.org,
 paul@xen.org, magnus.damm@gmail.com, mdroth@linux.vnet.ibm.com,
 hpoussin@reactos.org, anthony.perard@citrix.com, marcandre.lureau@redhat.com,
 david@gibson.dropbear.id.au, philmd@redhat.com, atar4qemu@gmail.com,
 riku.voipio@iki.fi, ehabkost@redhat.com, mjt@tls.msk.ru,
 alistair@alistair23.me, pl@kamp.de, dgilbert@redhat.com, r.bolshakov@yadro.com,
 qemu-arm@nongnu.org, peter.chubb@nicta.com.au, ronniesahlberg@gmail.com,
 xen-devel@lists.xenproject.org, alex.bennee@linaro.org, rth@twiddle.net,
 kwolf@redhat.com, berrange@redhat.com, ysato@users.sourceforge.jp,
 crwulff@gmail.com, laurent@vivier.eu, mreitz@redhat.com, qemu-ppc@nongnu.org,
 pbonzini@redhat.com
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

UGF0Y2hldyBVUkw6IGh0dHBzOi8vcGF0Y2hldy5vcmcvUUVNVS8yMDIwMDcwNjE2MjMwMC4xMDg0
NzUzLTEtZGluZWNoaW5AcmVkaGF0LmNvbS8KCgoKSGksCgpUaGlzIHNlcmllcyBzZWVtcyB0byBo
YXZlIHNvbWUgY29kaW5nIHN0eWxlIHByb2JsZW1zLiBTZWUgb3V0cHV0IGJlbG93IGZvcgptb3Jl
IGluZm9ybWF0aW9uOgoKU3ViamVjdDogW1BBVENIXSB0cml2aWFsOiBSZW1vdmUgdHJhaWxpbmcg
d2hpdGVzcGFjZXMKVHlwZTogc2VyaWVzCk1lc3NhZ2UtaWQ6IDIwMjAwNzA2MTYyMzAwLjEwODQ3
NTMtMS1kaW5lY2hpbkByZWRoYXQuY29tCgo9PT0gVEVTVCBTQ1JJUFQgQkVHSU4gPT09CiMhL2Jp
bi9iYXNoCmdpdCByZXYtcGFyc2UgYmFzZSA+IC9kZXYvbnVsbCB8fCBleGl0IDAKZ2l0IGNvbmZp
ZyAtLWxvY2FsIGRpZmYucmVuYW1lbGltaXQgMApnaXQgY29uZmlnIC0tbG9jYWwgZGlmZi5yZW5h
bWVzIFRydWUKZ2l0IGNvbmZpZyAtLWxvY2FsIGRpZmYuYWxnb3JpdGhtIGhpc3RvZ3JhbQouL3Nj
cmlwdHMvY2hlY2twYXRjaC5wbCAtLW1haWxiYWNrIGJhc2UuLgo9PT0gVEVTVCBTQ1JJUFQgRU5E
ID09PQoKRnJvbSBodHRwczovL2dpdGh1Yi5jb20vcGF0Y2hldy1wcm9qZWN0L3FlbXUKICogW25l
dyB0YWddICAgICAgICAgcGF0Y2hldy8yMDIwMDcwNjE2MjMwMC4xMDg0NzUzLTEtZGluZWNoaW5A
cmVkaGF0LmNvbSAtPiBwYXRjaGV3LzIwMjAwNzA2MTYyMzAwLjEwODQ3NTMtMS1kaW5lY2hpbkBy
ZWRoYXQuY29tClN3aXRjaGVkIHRvIGEgbmV3IGJyYW5jaCAndGVzdCcKOWFmM2U5MCB0cml2aWFs
OiBSZW1vdmUgdHJhaWxpbmcgd2hpdGVzcGFjZXMKCj09PSBPVVRQVVQgQkVHSU4gPT09CldBUk5J
Tkc6IGxpbmUgb3ZlciA4MCBjaGFyYWN0ZXJzCiMxMjE6IEZJTEU6IGRpc2FzL21pY3JvYmxhemUu
YzoxMTI6CisgICBmY21wX2x0LCBmY21wX2VxLCBmY21wX2xlLCBmY21wX2d0LCBmY21wX25lLCBm
Y21wX2dlLCBmY21wX3VuLCBmbHQsIGZpbnQsIGZzcXJ0LAoKRVJST1I6IGxpbmUgb3ZlciA5MCBj
aGFyYWN0ZXJzCiMxNDg6IEZJTEU6IGRpc2FzL21pY3JvYmxhemUuYzoyODA6CisgIHVuc2lnbmVk
IGxvbmcgYml0X3NlcXVlbmNlOyAvKiBhbGwgdGhlIGZpeGVkIGJpdHMgZm9yIHRoZSBvcCBhcmUg
c2V0IGFuZCBhbGwgdGhlIHZhcmlhYmxlIGJpdHMgKHJlZyBuYW1lcywgaW1tIHZhbHMpIGFyZSBz
ZXQgdG8gMCAqLwoKRVJST1I6IHNwYWNlIHByb2hpYml0ZWQgYmV0d2VlbiBmdW5jdGlvbiBuYW1l
IGFuZCBvcGVuIHBhcmVudGhlc2lzICcoJwojMjg5OiBGSUxFOiBkaXNhcy9taWNyb2JsYXplLmM6
NzI4OgorcmVhZF9pbnNuX21pY3JvYmxhemUgKGJmZF92bWEgbWVtYWRkciwKCkVSUk9SOiBzdXNw
ZWN0IGNvZGUgaW5kZW50IGZvciBjb25kaXRpb25hbCBzdGF0ZW1lbnRzICgyLCA2KQojMjk4OiBG
SUxFOiBkaXNhcy9taWNyb2JsYXplLmM6NzM5OgorICBpZiAoc3RhdHVzICE9IDApCiAgICAgewoK
RVJST1I6IGNvZGUgaW5kZW50IHNob3VsZCBuZXZlciB1c2UgdGFicwojMzM0OiBGSUxFOiBkaXNh
cy9taWNyb2JsYXplLmM6ODU0OgorXkkgIC8qIFRoZSBub24tcGMgcmVsYXRpdmUgaW5zdHJ1Y3Rp
b25zIGFyZSByZXR1cm5zLCB3aGljaCBzaG91bGRuJ3QkCgpXQVJOSU5HOiBCbG9jayBjb21tZW50
cyB1c2UgYSBsZWFkaW5nIC8qIG9uIGEgc2VwYXJhdGUgbGluZQojMzM0OiBGSUxFOiBkaXNhcy9t
aWNyb2JsYXplLmM6ODU0OgorICAgICAgICAgLyogVGhlIG5vbi1wYyByZWxhdGl2ZSBpbnN0cnVj
dGlvbnMgYXJlIHJldHVybnMsIHdoaWNoIHNob3VsZG4ndAoKRVJST1I6IGNvZGUgaW5kZW50IHNo
b3VsZCBuZXZlciB1c2UgdGFicwojMzQzOiBGSUxFOiBkaXNhcy9taWNyb2JsYXplLmM6ODg5Ogor
XkkgICAgfSQKCldBUk5JTkc6IEJsb2NrIGNvbW1lbnRzIHVzZSBhIGxlYWRpbmcgLyogb24gYSBz
ZXBhcmF0ZSBsaW5lCiMzNjU6IEZJTEU6IGRpc2FzL25pb3MyLmM6OTk6CisvKiBUaGlzIHN0cnVj
dHVyZSBob2xkcyBpbmZvcm1hdGlvbiBmb3IgYSBwYXJ0aWN1bGFyIGluc3RydWN0aW9uLgoKRVJS
T1I6IGNvZGUgaW5kZW50IHNob3VsZCBuZXZlciB1c2UgdGFicwojMzc0OiBGSUxFOiBkaXNhcy9u
aW9zMi5jOjE1NToKKyAgY29uc3QgY2hhciAqYXJnczteSV5JLyogQSBzdHJpbmcgZGVzY3JpYmlu
ZyB0aGUgYXJndW1lbnRzIGZvciB0aGlzJAoKV0FSTklORzogQmxvY2sgY29tbWVudHMgdXNlIGEg
bGVhZGluZyAvKiBvbiBhIHNlcGFyYXRlIGxpbmUKIzM3NDogRklMRTogZGlzYXMvbmlvczIuYzox
NTU6CisgIGNvbnN0IGNoYXIgKmFyZ3M7ICAgICAgICAgICAgLyogQSBzdHJpbmcgZGVzY3JpYmlu
ZyB0aGUgYXJndW1lbnRzIGZvciB0aGlzCgpFUlJPUjogY29kZSBpbmRlbnQgc2hvdWxkIG5ldmVy
IHVzZSB0YWJzCiMzNzc6IEZJTEU6IGRpc2FzL25pb3MyLmM6MTU3OgorICBjb25zdCBjaGFyICph
cmdzX3Rlc3Q7XkkvKiBMaWtlIGFyZ3MsIGJ1dCB3aXRoIGFuIGV4dHJhIGFyZ3VtZW50IGZvciQK
CldBUk5JTkc6IEJsb2NrIGNvbW1lbnRzIHVzZSBhIGxlYWRpbmcgLyogb24gYSBzZXBhcmF0ZSBs
aW5lCiMzNzc6IEZJTEU6IGRpc2FzL25pb3MyLmM6MTU3OgorICBjb25zdCBjaGFyICphcmdzX3Rl
c3Q7ICAgICAgIC8qIExpa2UgYXJncywgYnV0IHdpdGggYW4gZXh0cmEgYXJndW1lbnQgZm9yCgpF
UlJPUjogY29kZSBpbmRlbnQgc2hvdWxkIG5ldmVyIHVzZSB0YWJzCiMzODA6IEZJTEU6IGRpc2Fz
L25pb3MyLmM6MTU5OgorICB1bnNpZ25lZCBsb25nIG51bV9hcmdzO15JLyogVGhlIG51bWJlciBv
ZiBhcmd1bWVudHMgdGhlIGluc3RydWN0aW9uJAoKV0FSTklORzogQmxvY2sgY29tbWVudHMgdXNl
IGEgbGVhZGluZyAvKiBvbiBhIHNlcGFyYXRlIGxpbmUKIzM4MDogRklMRTogZGlzYXMvbmlvczIu
YzoxNTk6CisgIHVuc2lnbmVkIGxvbmcgbnVtX2FyZ3M7ICAgICAgLyogVGhlIG51bWJlciBvZiBh
cmd1bWVudHMgdGhlIGluc3RydWN0aW9uCgpFUlJPUjogY29kZSBpbmRlbnQgc2hvdWxkIG5ldmVy
IHVzZSB0YWJzCiMzODY6IEZJTEU6IGRpc2FzL25pb3MyLmM6MTY0OgorICB1bnNpZ25lZCBsb25n
IG1hc2s7XkleSS8qIE1hc2sgZm9yIHRoZSBvcGNvZGUgZmllbGQgb2YgdGhlJAoKV0FSTklORzog
QmxvY2sgY29tbWVudHMgdXNlIGEgbGVhZGluZyAvKiBvbiBhIHNlcGFyYXRlIGxpbmUKIzM4Njog
RklMRTogZGlzYXMvbmlvczIuYzoxNjQ6CisgIHVuc2lnbmVkIGxvbmcgbWFzazsgICAgICAgICAg
LyogTWFzayBmb3IgdGhlIG9wY29kZSBmaWVsZCBvZiB0aGUKCkVSUk9SOiBjb2RlIGluZGVudCBz
aG91bGQgbmV2ZXIgdXNlIHRhYnMKIzM4OTogRklMRTogZGlzYXMvbmlvczIuYzoxNjY6CisgIHVu
c2lnbmVkIGxvbmcgcGluZm87XkleSS8qIElzIHRoaXMgYSByZWFsIGluc3RydWN0aW9uIG9yIGlu
c3RydWN0aW9uJAoKV0FSTklORzogQmxvY2sgY29tbWVudHMgdXNlIGEgbGVhZGluZyAvKiBvbiBh
IHNlcGFyYXRlIGxpbmUKIzM4OTogRklMRTogZGlzYXMvbmlvczIuYzoxNjY6CisgIHVuc2lnbmVk
IGxvbmcgcGluZm87ICAgICAgICAgLyogSXMgdGhpcyBhIHJlYWwgaW5zdHJ1Y3Rpb24gb3IgaW5z
dHJ1Y3Rpb24KCldBUk5JTkc6IEJsb2NrIGNvbW1lbnRzIHVzZSBhIGxlYWRpbmcgLyogb24gYSBz
ZXBhcmF0ZSBsaW5lCiMzOTI6IEZJTEU6IGRpc2FzL25pb3MyLmM6MTY4OgorICBlbnVtIG92ZXJm
bG93X3R5cGUgb3ZlcmZsb3dfbXNnOyAgLyogVXNlZCB0byBnZW5lcmF0ZSBpbmZvcm1hdGl2ZQoK
V0FSTklORzogQmxvY2sgY29tbWVudHMgdXNlIGEgbGVhZGluZyAvKiBvbiBhIHNlcGFyYXRlIGxp
bmUKIzM5OTogRklMRTogZGlzYXMvbmlvczIuYzoxNzI6CisvKiBUaGlzIHZhbHVlIGlzIHVzZWQg
aW4gdGhlIG5pb3MyX29wY29kZS5waW5mbyBmaWVsZCB0byBpbmRpY2F0ZSB0aGF0IHRoZQoKV0FS
TklORzogQmxvY2sgY29tbWVudHMgdXNlICogb24gc3Vic2VxdWVudCBsaW5lcwojNDAwOiBGSUxF
OiBkaXNhcy9uaW9zMi5jOjE3MzoKKy8qIFRoaXMgdmFsdWUgaXMgdXNlZCBpbiB0aGUgbmlvczJf
b3Bjb2RlLnBpbmZvIGZpZWxkIHRvIGluZGljYXRlIHRoYXQgdGhlCisgICBpbnN0cnVjdGlvbiBp
cyBhIG1hY3JvIG9yIHBzZXVkby1vcC4gIFRoaXMgcmVxdWlyZXMgc3BlY2lhbCB0cmVhdG1lbnQg
YnkKCldBUk5JTkc6IGxpbmUgb3ZlciA4MCBjaGFyYWN0ZXJzCiM1NDk6IEZJTEU6IGRpc2FzL25p
b3MyLmM6Mjg0OgorI2RlZmluZSBHRVRfSVdfQ1VTVE9NX0EoVykgKCgoVykgPj4gSVdfQ1VTVE9N
X0FfTFNCKSAmIElXX0NVU1RPTV9BX1VOU0hJRlRFRF9NQVNLKQoKV0FSTklORzogbGluZSBvdmVy
IDgwIGNoYXJhY3RlcnMKIzU1MDogRklMRTogZGlzYXMvbmlvczIuYzoyODU6CisjZGVmaW5lIFNF
VF9JV19DVVNUT01fQShWKSAoKChWKSAmIElXX0NVU1RPTV9BX1VOU0hJRlRFRF9NQVNLKSA8PCBJ
V19DVVNUT01fQV9MU0IpCgpXQVJOSU5HOiBsaW5lIG92ZXIgODAgY2hhcmFjdGVycwojNTYyOiBG
SUxFOiBkaXNhcy9uaW9zMi5jOjI5MToKKyNkZWZpbmUgR0VUX0lXX0NVU1RPTV9CKFcpICgoKFcp
ID4+IElXX0NVU1RPTV9CX0xTQikgJiBJV19DVVNUT01fQl9VTlNISUZURURfTUFTSykKCldBUk5J
Tkc6IGxpbmUgb3ZlciA4MCBjaGFyYWN0ZXJzCiM1NjM6IEZJTEU6IGRpc2FzL25pb3MyLmM6Mjky
OgorI2RlZmluZSBTRVRfSVdfQ1VTVE9NX0IoVikgKCgoVikgJiBJV19DVVNUT01fQl9VTlNISUZU
RURfTUFTSykgPDwgSVdfQ1VTVE9NX0JfTFNCKQoKV0FSTklORzogbGluZSBvdmVyIDgwIGNoYXJh
Y3RlcnMKIzU3NTogRklMRTogZGlzYXMvbmlvczIuYzoyOTg6CisjZGVmaW5lIEdFVF9JV19DVVNU
T01fQyhXKSAoKChXKSA+PiBJV19DVVNUT01fQ19MU0IpICYgSVdfQ1VTVE9NX0NfVU5TSElGVEVE
X01BU0spCgpXQVJOSU5HOiBsaW5lIG92ZXIgODAgY2hhcmFjdGVycwojNTc2OiBGSUxFOiBkaXNh
cy9uaW9zMi5jOjI5OToKKyNkZWZpbmUgU0VUX0lXX0NVU1RPTV9DKFYpICgoKFYpICYgSVdfQ1VT
VE9NX0NfVU5TSElGVEVEX01BU0spIDw8IElXX0NVU1RPTV9DX0xTQikKCldBUk5JTkc6IGxpbmUg
b3ZlciA4MCBjaGFyYWN0ZXJzCiM1ODY6IEZJTEU6IGRpc2FzL25pb3MyLmM6MzAzOgorI2RlZmlu
ZSBJV19DVVNUT01fUkVBREFfVU5TSElGVEVEX01BU0sgKDB4ZmZmZmZmZmZ1ID4+ICgzMiAtIElX
X0NVU1RPTV9SRUFEQV9TSVpFKSkKCkVSUk9SOiBsaW5lIG92ZXIgOTAgY2hhcmFjdGVycwojNTg3
OiBGSUxFOiBkaXNhcy9uaW9zMi5jOjMwNDoKKyNkZWZpbmUgSVdfQ1VTVE9NX1JFQURBX1NISUZU
RURfTUFTSyAoSVdfQ1VTVE9NX1JFQURBX1VOU0hJRlRFRF9NQVNLIDw8IElXX0NVU1RPTV9SRUFE
QV9MU0IpCgpFUlJPUjogbGluZSBvdmVyIDkwIGNoYXJhY3RlcnMKIzU4ODogRklMRTogZGlzYXMv
bmlvczIuYzozMDU6CisjZGVmaW5lIEdFVF9JV19DVVNUT01fUkVBREEoVykgKCgoVykgPj4gSVdf
Q1VTVE9NX1JFQURBX0xTQikgJiBJV19DVVNUT01fUkVBREFfVU5TSElGVEVEX01BU0spCgpFUlJP
UjogbGluZSBvdmVyIDkwIGNoYXJhY3RlcnMKIzU4OTogRklMRTogZGlzYXMvbmlvczIuYzozMDY6
CisjZGVmaW5lIFNFVF9JV19DVVNUT01fUkVBREEoVikgKCgoVikgJiBJV19DVVNUT01fUkVBREFf
VU5TSElGVEVEX01BU0spIDw8IElXX0NVU1RPTV9SRUFEQV9MU0IpCgpXQVJOSU5HOiBsaW5lIG92
ZXIgODAgY2hhcmFjdGVycwojNTk5OiBGSUxFOiBkaXNhcy9uaW9zMi5jOjMxMDoKKyNkZWZpbmUg
SVdfQ1VTVE9NX1JFQURCX1VOU0hJRlRFRF9NQVNLICgweGZmZmZmZmZmdSA+PiAoMzIgLSBJV19D
VVNUT01fUkVBREJfU0laRSkpCgpFUlJPUjogbGluZSBvdmVyIDkwIGNoYXJhY3RlcnMKIzYwMDog
RklMRTogZGlzYXMvbmlvczIuYzozMTE6CisjZGVmaW5lIElXX0NVU1RPTV9SRUFEQl9TSElGVEVE
X01BU0sgKElXX0NVU1RPTV9SRUFEQl9VTlNISUZURURfTUFTSyA8PCBJV19DVVNUT01fUkVBREJf
TFNCKQoKRVJST1I6IGxpbmUgb3ZlciA5MCBjaGFyYWN0ZXJzCiM2MDE6IEZJTEU6IGRpc2FzL25p
b3MyLmM6MzEyOgorI2RlZmluZSBHRVRfSVdfQ1VTVE9NX1JFQURCKFcpICgoKFcpID4+IElXX0NV
U1RPTV9SRUFEQl9MU0IpICYgSVdfQ1VTVE9NX1JFQURCX1VOU0hJRlRFRF9NQVNLKQoKRVJST1I6
IGxpbmUgb3ZlciA5MCBjaGFyYWN0ZXJzCiM2MDI6IEZJTEU6IGRpc2FzL25pb3MyLmM6MzEzOgor
I2RlZmluZSBTRVRfSVdfQ1VTVE9NX1JFQURCKFYpICgoKFYpICYgSVdfQ1VTVE9NX1JFQURCX1VO
U0hJRlRFRF9NQVNLKSA8PCBJV19DVVNUT01fUkVBREJfTFNCKQoKV0FSTklORzogbGluZSBvdmVy
IDgwIGNoYXJhY3RlcnMKIzYxMjogRklMRTogZGlzYXMvbmlvczIuYzozMTc6CisjZGVmaW5lIElX
X0NVU1RPTV9SRUFEQ19VTlNISUZURURfTUFTSyAoMHhmZmZmZmZmZnUgPj4gKDMyIC0gSVdfQ1VT
VE9NX1JFQURDX1NJWkUpKQoKRVJST1I6IGxpbmUgb3ZlciA5MCBjaGFyYWN0ZXJzCiM2MTM6IEZJ
TEU6IGRpc2FzL25pb3MyLmM6MzE4OgorI2RlZmluZSBJV19DVVNUT01fUkVBRENfU0hJRlRFRF9N
QVNLIChJV19DVVNUT01fUkVBRENfVU5TSElGVEVEX01BU0sgPDwgSVdfQ1VTVE9NX1JFQURDX0xT
QikKCkVSUk9SOiBsaW5lIG92ZXIgOTAgY2hhcmFjdGVycwojNjE0OiBGSUxFOiBkaXNhcy9uaW9z
Mi5jOjMxOToKKyNkZWZpbmUgR0VUX0lXX0NVU1RPTV9SRUFEQyhXKSAoKChXKSA+PiBJV19DVVNU
T01fUkVBRENfTFNCKSAmIElXX0NVU1RPTV9SRUFEQ19VTlNISUZURURfTUFTSykKCkVSUk9SOiBs
aW5lIG92ZXIgOTAgY2hhcmFjdGVycwojNjE1OiBGSUxFOiBkaXNhcy9uaW9zMi5jOjMyMDoKKyNk
ZWZpbmUgU0VUX0lXX0NVU1RPTV9SRUFEQyhWKSAoKChWKSAmIElXX0NVU1RPTV9SRUFEQ19VTlNI
SUZURURfTUFTSykgPDwgSVdfQ1VTVE9NX1JFQURDX0xTQikKCldBUk5JTkc6IGxpbmUgb3ZlciA4
MCBjaGFyYWN0ZXJzCiM2Mjc6IEZJTEU6IGRpc2FzL25pb3MyLmM6MzI2OgorI2RlZmluZSBHRVRf
SVdfQ1VTVE9NX04oVykgKCgoVykgPj4gSVdfQ1VTVE9NX05fTFNCKSAmIElXX0NVU1RPTV9OX1VO
U0hJRlRFRF9NQVNLKQoKV0FSTklORzogbGluZSBvdmVyIDgwIGNoYXJhY3RlcnMKIzYyODogRklM
RTogZGlzYXMvbmlvczIuYzozMjc6CisjZGVmaW5lIFNFVF9JV19DVVNUT01fTihWKSAoKChWKSAm
IElXX0NVU1RPTV9OX1VOU0hJRlRFRF9NQVNLKSA8PCBJV19DVVNUT01fTl9MU0IpCgpXQVJOSU5H
OiBCbG9jayBjb21tZW50cyB1c2UgYSBsZWFkaW5nIC8qIG9uIGEgc2VwYXJhdGUgbGluZQojNzE2
OiBGSUxFOiBody9hcm0vc3RlbGxhcmlzLmM6OTgyOgorICAgIC8qIFRPRE86IFJlYWwgaGFyZHdh
cmUgaGFzIGxpbWl0ZWQgc2l6ZSBGSUZPcy4gIFdlIGhhdmUgYSBmdWxsIDE2IGVudHJ5CgpXQVJO
SU5HOiBCbG9jayBjb21tZW50cyB1c2UgYSBsZWFkaW5nIC8qIG9uIGEgc2VwYXJhdGUgbGluZQoj
NzQyOiBGSUxFOiBody9jb3JlL3B0aW1lci5jOjI0OToKKyAgICAgICAgICAgICAgICAvKiBMb29r
IGF0IHJlbWFpbmluZyBiaXRzIG9mIHBlcmlvZF9mcmFjIGFuZCByb3VuZCBkaXYgdXAgaWYKCldB
Uk5JTkc6IEJsb2NrIGNvbW1lbnRzIHVzZSBhIGxlYWRpbmcgLyogb24gYSBzZXBhcmF0ZSBsaW5l
CiM3NTU6IEZJTEU6IGh3L2NyaXMvYXhpc19kZXY4OC5jOjI3MDoKKyAgICAvKiBUaGUgRVRSQVgt
RlMgaGFzIDEyOEtiIG9uIGNoaXAgcmFtLCB0aGUgZG9jcyByZWZlciB0byBpdCBhcyB0aGUKCldB
Uk5JTkc6IEJsb2NrIGNvbW1lbnRzIHVzZSBhIGxlYWRpbmcgLyogb24gYSBzZXBhcmF0ZSBsaW5l
CiM3Njg6IEZJTEU6IGh3L2NyaXMvYm9vdC5jOjc1OgorICAgIC8qIEJvb3RzIGEga2VybmVsIGVs
ZiBiaW5hcnksIG9zL2xpbnV4LTIuNi92bWxpbnV4IGZyb20gdGhlIGF4aXMKCkVSUk9SOiBkbyBu
b3QgdXNlIEM5OSAvLyBjb21tZW50cwojNzgxOiBGSUxFOiBody9kaXNwbGF5L3F4bC5jOjU0Ogor
I2RlZmluZSBQSVhFTF9TSVpFIDAuMjkzNjg3NSAvLzEyODB4MTAyNCBpcyAxNC44IiB4IDExLjki
CgpFUlJPUjogY29kZSBpbmRlbnQgc2hvdWxkIG5ldmVyIHVzZSB0YWJzCiM3OTQ6IEZJTEU6IGh3
L2RtYS9ldHJheGZzX2RtYS5jOjMyNToKK15JaWYgKCFjaGFubmVsX2VuKGN0cmwsIGMpJAoKV0FS
TklORzogbGluZSBvdmVyIDgwIGNoYXJhY3RlcnMKIzgwMDogRklMRTogaHcvZG1hL2V0cmF4ZnNf
ZG1hLmM6MzMwOgorICAgICAgICAgICAgICAgRChwcmludGYoImNvbnRpbnVlIGZhaWxlZCBjaD0l
ZCBzdGF0ZT0lZCBzdG9wcGVkPSVkIGVuPSVkIGVvbD0lZFxuIiwKCkVSUk9SOiBjb2RlIGluZGVu
dCBzaG91bGQgbmV2ZXIgdXNlIHRhYnMKIzgwMDogRklMRTogaHcvZG1hL2V0cmF4ZnNfZG1hLmM6
MzMwOgorXkleSUQocHJpbnRmKCJjb250aW51ZSBmYWlsZWQgY2g9JWQgc3RhdGU9JWQgc3RvcHBl
ZD0lZCBlbj0lZCBlb2w9JWRcbiIsJAoKRVJST1I6IGNvZGUgaW5kZW50IHNob3VsZCBuZXZlciB1
c2UgdGFicwojODA5OiBGSUxFOiBody9kbWEvZXRyYXhmc19kbWEuYzozODY6CiteSUQocHJpbnRm
KCIlczogY2hhbj0lZCBtYXNrZWRfaW50cj0leFxuIiwgX19mdW5jX18sJAoKRVJST1I6IGNvZGUg
aW5kZW50IHNob3VsZCBuZXZlciB1c2UgdGFicwojODI3OiBGSUxFOiBody9kbWEvZXRyYXhmc19k
bWEuYzo1MjA6CiteSV5JRChwcmludGYoImluIGRzY3IgZW5kIGxlbj0lZFxuIiwkCgpFUlJPUjog
Y29kZSBpbmRlbnQgc2hvdWxkIG5ldmVyIHVzZSB0YWJzCiM4MzY6IEZJTEU6IGh3L2RtYS9ldHJh
eGZzX2RtYS5jOjcxMToKK15JZm9yIChpID0gMDskCgpFUlJPUjogY29kZSBpbmRlbnQgc2hvdWxk
IG5ldmVyIHVzZSB0YWJzCiM4NDk6IEZJTEU6IGh3L2RtYS9ldHJheGZzX2RtYS5jOjczMDoKK15J
cmV0dXJuIGNoYW5uZWxfaW5fcHJvY2VzcyhjbGllbnQtPmN0cmwsIGNsaWVudC0+Y2hhbm5lbCwk
CgpFUlJPUjogY29kZSBpbmRlbnQgc2hvdWxkIG5ldmVyIHVzZSB0YWJzCiM5NzM6IEZJTEU6IGh3
L2ludGMvc2hfaW50Yy5jOjI3NjoKK15JcHJpbnRmKCJrID0gJWQsIGZpcnN0ID0gJWQsIGVudW0g
PSAlZCwgbWFzayA9IDB4JTA4eFxuIiwkCgpXQVJOSU5HOiBCbG9jayBjb21tZW50cyB1c2UgYSBs
ZWFkaW5nIC8qIG9uIGEgc2VwYXJhdGUgbGluZQojOTkxOiBGSUxFOiBody9pbnRjL3NoX2ludGMu
Yzo1MDE6CisvKiBBc3NlcnQgbGV2ZWwgPG4+IElSTCBpbnRlcnJ1cHQuCgpFUlJPUjogY29kZSBp
bmRlbnQgc2hvdWxkIG5ldmVyIHVzZSB0YWJzCiMxMTY5OiBGSUxFOiBody91c2IvaGNkLW11c2Iu
YzozNjoKKyNkZWZpbmUgTVVTQl9IRFJDX0lOVFJUWEVeSTB4MDYkCgpFUlJPUjogY29kZSBpbmRl
bnQgc2hvdWxkIG5ldmVyIHVzZSB0YWJzCiMxMTcwOiBGSUxFOiBody91c2IvaGNkLW11c2IuYzoz
NzoKKyNkZWZpbmUgTVVTQl9IRFJDX0lOVFJSWEVeSTB4MDgkCgpFUlJPUjogY29kZSBpbmRlbnQg
c2hvdWxkIG5ldmVyIHVzZSB0YWJzCiMxMTc5OiBGSUxFOiBody91c2IvaGNkLW11c2IuYzoxMTY6
CisjZGVmaW5lIE1HQ19NX1BPV0VSX0lTT1VQREFURV5JXkkweDgwJAoKRVJST1I6IGNvZGUgaW5k
ZW50IHNob3VsZCBuZXZlciB1c2UgdGFicwojMTE4ODogRklMRTogaHcvdXNiL2hjZC1tdXNiLmM6
MTMwOgorI2RlZmluZSBNR0NfTV9JTlRSX1NPRl5JXkleSTB4MDgkCgpFUlJPUjogY29kZSBpbmRl
bnQgc2hvdWxkIG5ldmVyIHVzZSB0YWJzCiMxMTk3OiBGSUxFOiBody91c2IvaGNkLW11c2IuYzox
Mzg6CisjZGVmaW5lIE1HQ19NX0RFVkNUTF9CREVWSUNFXkleSTB4ODAkCgpFUlJPUjogc3VzcGVj
dCBjb2RlIGluZGVudCBmb3IgY29uZGl0aW9uYWwgc3RhdGVtZW50cyAoNywgMTEpCiMxNTEwOiBG
SUxFOiBsaW51eC11c2VyL21tYXAuYzo0MjM6CiAgICAgICAgaWYgKG9mZnNldCArIGxlbiA+IHNi
LnN0X3NpemUpIHsKKyAgICAgICAgICAgLyogSWYgc28sIHRydW5jYXRlIHRoZSBmaWxlIG1hcCBh
dCBlb2YgYWxpZ25lZCB3aXRoCgpXQVJOSU5HOiBCbG9jayBjb21tZW50cyB1c2UgYSBsZWFkaW5n
IC8qIG9uIGEgc2VwYXJhdGUgbGluZQojMTUxMjogRklMRTogbGludXgtdXNlci9tbWFwLmM6NDI0
OgorICAgICAgICAgICAvKiBJZiBzbywgdHJ1bmNhdGUgdGhlIGZpbGUgbWFwIGF0IGVvZiBhbGln
bmVkIHdpdGgKCkVSUk9SOiBzcGFjZSBwcm9oaWJpdGVkIGJldHdlZW4gZnVuY3Rpb24gbmFtZSBh
bmQgb3BlbiBwYXJlbnRoZXNpcyAnKCcKIzE1OTM6IEZJTEU6IGxpbnV4LXVzZXIvc3lzY2FsbC5j
OjE2MjA6CisgICAgaWYgKG1zZ19jb250cm9sbGVuIDwgc2l6ZW9mIChzdHJ1Y3QgdGFyZ2V0X2Nt
c2doZHIpKQoKRVJST1I6IGJyYWNlcyB7fSBhcmUgbmVjZXNzYXJ5IGZvciBhbGwgYXJtcyBvZiB0
aGlzIHN0YXRlbWVudAojMTU5MzogRklMRTogbGludXgtdXNlci9zeXNjYWxsLmM6MTYyMDoKKyAg
ICBpZiAobXNnX2NvbnRyb2xsZW4gPCBzaXplb2YgKHN0cnVjdCB0YXJnZXRfY21zZ2hkcikpClsu
Li5dCgpFUlJPUjogc3BhY2UgcHJvaGliaXRlZCBiZXR3ZWVuIGZ1bmN0aW9uIG5hbWUgYW5kIG9w
ZW4gcGFyZW50aGVzaXMgJygnCiMxNjAyOiBGSUxFOiBsaW51eC11c2VyL3N5c2NhbGwuYzoxNzA2
OgorICAgIGlmIChtc2dfY29udHJvbGxlbiA8IHNpemVvZiAoc3RydWN0IHRhcmdldF9jbXNnaGRy
KSkKCkVSUk9SOiBicmFjZXMge30gYXJlIG5lY2Vzc2FyeSBmb3IgYWxsIGFybXMgb2YgdGhpcyBz
dGF0ZW1lbnQKIzE2MDI6IEZJTEU6IGxpbnV4LXVzZXIvc3lzY2FsbC5jOjE3MDY6CisgICAgaWYg
KG1zZ19jb250cm9sbGVuIDwgc2l6ZW9mIChzdHJ1Y3QgdGFyZ2V0X2Ntc2doZHIpKQpbLi4uXQoK
RVJST1I6IHN1c3BlY3QgY29kZSBpbmRlbnQgZm9yIGNvbmRpdGlvbmFsIHN0YXRlbWVudHMgKDQs
IDExKQojMTYxMTogRklMRTogbGludXgtdXNlci9zeXNjYWxsLmM6NTc1MzoKKyAgICBpZiAobGR0
X2luZm8uZW50cnlfbnVtYmVyIDwgVEFSR0VUX0dEVF9FTlRSWV9UTFNfTUlOIHx8ClsuLi5dCiAg
ICAgICAgICAgIHJldHVybiAtVEFSR0VUX0VJTlZBTDsKCkVSUk9SOiBkbyBub3QgdXNlIGFzc2ln
bm1lbnQgaW4gaWYgY29uZGl0aW9uCiMxNjQwOiBGSUxFOiBsaW51eC11c2VyL3N5c2NhbGwuYzox
MDg3NjoKKyAgICAgICAgaWYgKCEocCA9IGxvY2tfdXNlcl9zdHJpbmcoYXJnMikpKQoKRVJST1I6
IGJyYWNlcyB7fSBhcmUgbmVjZXNzYXJ5IGZvciBhbGwgYXJtcyBvZiB0aGlzIHN0YXRlbWVudAoj
MTY0MDogRklMRTogbGludXgtdXNlci9zeXNjYWxsLmM6MTA4NzY6CisgICAgICAgIGlmICghKHAg
PSBsb2NrX3VzZXJfc3RyaW5nKGFyZzIpKSkKWy4uLl0KCkVSUk9SOiBjb2RlIGluZGVudCBzaG91
bGQgbmV2ZXIgdXNlIHRhYnMKIzE2NTM6IEZJTEU6IGxpbnV4LXVzZXIvc3lzY2FsbF9kZWZzLmg6
MTkyNjoKK15JYWJpX3Vsb25nIF5JdGFyZ2V0X3N0X2F0aW1lX25zZWM7JAoKRVJST1I6IHNwYWNl
IHByb2hpYml0ZWQgYmV0d2VlbiBmdW5jdGlvbiBuYW1lIGFuZCBvcGVuIHBhcmVudGhlc2lzICco
JwojMTg2MDogRklMRTogdGFyZ2V0L2NyaXMvdHJhbnNsYXRlLmM6MTA3MToKK3N0YXRpYyB2b2lk
IGNyaXNfcHJlcGFyZV9jY19icmFuY2ggKERpc2FzQ29udGV4dCAqZGMsCgpFUlJPUjogZG8gbm90
IHVzZSBDOTkgLy8gY29tbWVudHMKIzIxNjM6IEZJTEU6IHRhcmdldC9pMzg2L2h2Zi94ODZfdGFz
ay5jOjQ6CisvLwoKRVJST1I6IHRyYWlsaW5nIHdoaXRlc3BhY2UKIzIzOTU6IEZJTEU6IHRhcmdl
dC9zaDQvb3BfaGVscGVyLmM6MTQ5OgorXkkkCgpFUlJPUjogY29kZSBpbmRlbnQgc2hvdWxkIG5l
dmVyIHVzZSB0YWJzCiMyMzk1OiBGSUxFOiB0YXJnZXQvc2g0L29wX2hlbHBlci5jOjE0OToKK15J
JAoKRVJST1I6IGNvZGUgaW5kZW50IHNob3VsZCBuZXZlciB1c2UgdGFicwojMjQxNDogRklMRTog
dGFyZ2V0L3h0ZW5zYS9jb3JlLWRlMjEyL2NvcmUtaXNhLmg6NjA4OgorI2RlZmluZSBYQ0hBTF9I
QVZFX01QVV5JXkleSTAkCgpFUlJPUjogY29kZSBpbmRlbnQgc2hvdWxkIG5ldmVyIHVzZSB0YWJz
CiMyNDM5OiBGSUxFOiB0YXJnZXQveHRlbnNhL2NvcmUtc2FtcGxlX2NvbnRyb2xsZXIvY29yZS1p
c2EuaDo2Mjk6CisjZGVmaW5lIFhDSEFMX0hBVkVfTVBVXkleSV5JMCQKCkVSUk9SOiBicmFjZXMg
e30gYXJlIG5lY2Vzc2FyeSBmb3IgYWxsIGFybXMgb2YgdGhpcyBzdGF0ZW1lbnQKIzI0OTE6IEZJ
TEU6IHRjZy90Y2cuYzo4OTY6CisgICAgICAgICAgICAgICAgaWYgKHMtPnBvb2xfY3VycmVudCkK
Wy4uLl0KICAgICAgICAgICAgICAgICBlbHNlClsuLi5dCgpXQVJOSU5HOiBCbG9jayBjb21tZW50
cyB1c2UgYSBsZWFkaW5nIC8qIG9uIGEgc2VwYXJhdGUgbGluZQojMjUzNzogRklMRTogdGNnL3Rj
Zy5jOjM3MTY6CisgICAgICAgICAgICAvKiBhbGxvY2F0ZSBhIG5ldyByZWdpc3RlciBtYXRjaGlu
ZyB0aGUgY29uc3RyYWludAoKRVJST1I6IGNvZGUgaW5kZW50IHNob3VsZCBuZXZlciB1c2UgdGFi
cwojMjYzMDogRklMRTogdGVzdHMvdGNnL211bHRpYXJjaC90ZXN0LW1tYXAuYzo2NjoKK15JXklw
MSA9IG1tYXAoTlVMTCwgbGVuLCBQUk9UX1JFQUQsJAoKRVJST1I6IGNvZGUgaW5kZW50IHNob3Vs
ZCBuZXZlciB1c2UgdGFicwojMjYzMzogRklMRTogdGVzdHMvdGNnL211bHRpYXJjaC90ZXN0LW1t
YXAuYzo2ODoKK15JXklwMiA9IG1tYXAoTlVMTCwgbGVuLCBQUk9UX1JFQUQsJAoKRVJST1I6IGNv
ZGUgaW5kZW50IHNob3VsZCBuZXZlciB1c2UgdGFicwojMjYzNjogRklMRTogdGVzdHMvdGNnL211
bHRpYXJjaC90ZXN0LW1tYXAuYzo3MDoKK15JXklwMyA9IG1tYXAoTlVMTCwgbGVuLCBQUk9UX1JF
QUQsJAoKRVJST1I6IGNvZGUgaW5kZW50IHNob3VsZCBuZXZlciB1c2UgdGFicwojMjYzOTogRklM
RTogdGVzdHMvdGNnL211bHRpYXJjaC90ZXN0LW1tYXAuYzo3MjoKK15JXklwNCA9IG1tYXAoTlVM
TCwgbGVuLCBQUk9UX1JFQUQsJAoKRVJST1I6IGNvZGUgaW5kZW50IHNob3VsZCBuZXZlciB1c2Ug
dGFicwojMjY0MjogRklMRTogdGVzdHMvdGNnL211bHRpYXJjaC90ZXN0LW1tYXAuYzo3NDoKK15J
XklwNSA9IG1tYXAoTlVMTCwgbGVuLCBQUk9UX1JFQUQsJAoKRVJST1I6IGNvZGUgaW5kZW50IHNo
b3VsZCBuZXZlciB1c2UgdGFicwojMjY1MTogRklMRTogdGVzdHMvdGNnL211bHRpYXJjaC90ZXN0
LW1tYXAuYzoxMjE6CiteSXAxID0gbW1hcChOVUxMLCBsZW4sIFBST1RfUkVBRCwkCgpFUlJPUjog
Y29kZSBpbmRlbnQgc2hvdWxkIG5ldmVyIHVzZSB0YWJzCiMyNjYwOiBGSUxFOiB0ZXN0cy90Y2cv
bXVsdGlhcmNoL3Rlc3QtbW1hcC5jOjE0ODoKK15JXklwMSA9IG1tYXAoTlVMTCwgcGFnZXNpemUs
IFBST1RfUkVBRCwkCgpFUlJPUjogY29kZSBpbmRlbnQgc2hvdWxkIG5ldmVyIHVzZSB0YWJzCiMy
NjY4OiBGSUxFOiB0ZXN0cy90Y2cvbXVsdGlhcmNoL3Rlc3QtbW1hcC5jOjE1NToKK15JXklwMiA9
IG1tYXAoTlVMTCwgcGFnZXNpemUsIFBST1RfUkVBRCwkCgpFUlJPUjogY29kZSBpbmRlbnQgc2hv
dWxkIG5ldmVyIHVzZSB0YWJzCiMyNjc3OiBGSUxFOiB0ZXN0cy90Y2cvbXVsdGlhcmNoL3Rlc3Qt
bW1hcC5jOjE2NToKK15JXklwMyA9IG1tYXAoTlVMTCwgbmxlbiwgUFJPVF9SRUFELCQKCkVSUk9S
OiBjb2RlIGluZGVudCBzaG91bGQgbmV2ZXIgdXNlIHRhYnMKIzI2ODM6IEZJTEU6IHRlc3RzL3Rj
Zy9tdWx0aWFyY2gvdGVzdC1tbWFwLmM6MTcwOgorXkleSWlmIChwMyA8IHAyJAoKRVJST1I6IGNv
ZGUgaW5kZW50IHNob3VsZCBuZXZlciB1c2UgdGFicwojMjY5MjogRklMRTogdGVzdHMvdGNnL211
bHRpYXJjaC90ZXN0LW1tYXAuYzoxOTQ6CiteSWFkZHIgPSBtbWFwKE5VTEwsIHBhZ2VzaXplICog
NDAsIFBST1RfUkVBRCB8IFBST1RfV1JJVEUsJAoKRVJST1I6IGNvZGUgaW5kZW50IHNob3VsZCBu
ZXZlciB1c2UgdGFicwojMjcwMTogRklMRTogdGVzdHMvdGNnL211bHRpYXJjaC90ZXN0LW1tYXAu
YzoyMDM6CiteSV5JcDEgPSBtbWFwKGFkZHIsIHBhZ2VzaXplLCBQUk9UX1JFQUQsJAoKRVJST1I6
IGNvZGUgaW5kZW50IHNob3VsZCBuZXZlciB1c2UgdGFicwojMjcwNTogRklMRTogdGVzdHMvdGNn
L211bHRpYXJjaC90ZXN0LW1tYXAuYzoyMDY6CiteSV5JLyogTWFrZSBzdXJlIHdlIGdldCBwYWdl
cyBhbGlnbmVkIHdpdGggdGhlIHBhZ2VzaXplLiQKCldBUk5JTkc6IEJsb2NrIGNvbW1lbnRzIHVz
ZSBhIGxlYWRpbmcgLyogb24gYSBzZXBhcmF0ZSBsaW5lCiMyNzA1OiBGSUxFOiB0ZXN0cy90Y2cv
bXVsdGlhcmNoL3Rlc3QtbW1hcC5jOjIwNjoKKyAgICAgICAgICAgICAgIC8qIE1ha2Ugc3VyZSB3
ZSBnZXQgcGFnZXMgYWxpZ25lZCB3aXRoIHRoZSBwYWdlc2l6ZS4KCkVSUk9SOiBjb2RlIGluZGVu
dCBzaG91bGQgbmV2ZXIgdXNlIHRhYnMKIzI3MTQ6IEZJTEU6IHRlc3RzL3RjZy9tdWx0aWFyY2gv
dGVzdC1tbWFwLmM6MjM0OgorXkleSXAxID0gbW1hcChhZGRyLCBwYWdlc2l6ZSwgUFJPVF9SRUFE
IHwgUFJPVF9XUklURSwkCgpFUlJPUjogY29kZSBpbmRlbnQgc2hvdWxkIG5ldmVyIHVzZSB0YWJz
CiMyNzE4OiBGSUxFOiB0ZXN0cy90Y2cvbXVsdGlhcmNoL3Rlc3QtbW1hcC5jOjIzNzoKK15JXkkv
KiBNYWtlIHN1cmUgd2UgZ2V0IHBhZ2VzIGFsaWduZWQgd2l0aCB0aGUgcGFnZXNpemUuJAoKV0FS
TklORzogQmxvY2sgY29tbWVudHMgdXNlIGEgbGVhZGluZyAvKiBvbiBhIHNlcGFyYXRlIGxpbmUK
IzI3MTg6IEZJTEU6IHRlc3RzL3RjZy9tdWx0aWFyY2gvdGVzdC1tbWFwLmM6MjM3OgorICAgICAg
ICAgICAgICAgLyogTWFrZSBzdXJlIHdlIGdldCBwYWdlcyBhbGlnbmVkIHdpdGggdGhlIHBhZ2Vz
aXplLgoKRVJST1I6IGNvZGUgaW5kZW50IHNob3VsZCBuZXZlciB1c2UgdGFicwojMjcyODogRklM
RTogdGVzdHMvdGNnL211bHRpYXJjaC90ZXN0LW1tYXAuYzoyNjE6CiteSV5JcDEgPSBtbWFwKE5V
TEwsIGxlbiwgUFJPVF9SRUFELCQKCkVSUk9SOiBjb2RlIGluZGVudCBzaG91bGQgbmV2ZXIgdXNl
IHRhYnMKIzI3Mjk6IEZJTEU6IHRlc3RzL3RjZy9tdWx0aWFyY2gvdGVzdC1tbWFwLmM6MjYyOgor
XkleSV5JICBNQVBfUFJJVkFURSwkCgpFUlJPUjogY29kZSBpbmRlbnQgc2hvdWxkIG5ldmVyIHVz
ZSB0YWJzCiMyNzMzOiBGSUxFOiB0ZXN0cy90Y2cvbXVsdGlhcmNoL3Rlc3QtbW1hcC5jOjI2NDoK
K15JXklwMiA9IG1tYXAoTlVMTCwgbGVuLCBQUk9UX1JFQUQsJAoKRVJST1I6IGNvZGUgaW5kZW50
IHNob3VsZCBuZXZlciB1c2UgdGFicwojMjczNDogRklMRTogdGVzdHMvdGNnL211bHRpYXJjaC90
ZXN0LW1tYXAuYzoyNjU6CiteSV5JXkkgIE1BUF9QUklWQVRFLCQKCkVSUk9SOiBjb2RlIGluZGVu
dCBzaG91bGQgbmV2ZXIgdXNlIHRhYnMKIzI3Mzg6IEZJTEU6IHRlc3RzL3RjZy9tdWx0aWFyY2gv
dGVzdC1tbWFwLmM6MjY3OgorXkleSXAzID0gbW1hcChOVUxMLCBsZW4sIFBST1RfUkVBRCwkCgpF
UlJPUjogY29kZSBpbmRlbnQgc2hvdWxkIG5ldmVyIHVzZSB0YWJzCiMyNzM5OiBGSUxFOiB0ZXN0
cy90Y2cvbXVsdGlhcmNoL3Rlc3QtbW1hcC5jOjI2ODoKK15JXkleSSAgTUFQX1BSSVZBVEUsJAoK
RVJST1I6IGNvZGUgaW5kZW50IHNob3VsZCBuZXZlciB1c2UgdGFicwojMjc1MDogRklMRTogdGVz
dHMvdGNnL211bHRpYXJjaC90ZXN0LW1tYXAuYzozMTA6CiteSV5JcDEgPSBtbWFwKE5VTEwsIHBh
Z2VzaXplLCBQUk9UX1JFQUQsJAoKRVJST1I6IGNvZGUgaW5kZW50IHNob3VsZCBuZXZlciB1c2Ug
dGFicwojMjc1MTogRklMRTogdGVzdHMvdGNnL211bHRpYXJjaC90ZXN0LW1tYXAuYzozMTE6Cite
SV5JXkkgIE1BUF9QUklWQVRFLCQKCkVSUk9SOiBjb2RlIGluZGVudCBzaG91bGQgbmV2ZXIgdXNl
IHRhYnMKIzI3NTI6IEZJTEU6IHRlc3RzL3RjZy9tdWx0aWFyY2gvdGVzdC1tbWFwLmM6MzEyOgor
XkleSV5JICB0ZXN0X2ZkLCQKCkVSUk9SOiBjb2RlIGluZGVudCBzaG91bGQgbmV2ZXIgdXNlIHRh
YnMKIzI3NjE6IEZJTEU6IHRlc3RzL3RjZy9tdWx0aWFyY2gvdGVzdC1tbWFwLmM6MzQyOgorXklh
ZGRyID0gbW1hcChOVUxMLCBwYWdlc2l6ZSAqIDQ0LCBQUk9UX1JFQUQsJAoKRVJST1I6IGNvZGUg
aW5kZW50IHNob3VsZCBuZXZlciB1c2UgdGFicwojMjc3MjogRklMRTogdGVzdHMvdGNnL211bHRp
YXJjaC90ZXN0LW1tYXAuYzozNTI6CiteSV5JcDEgPSBtbWFwKGFkZHIsIHBhZ2VzaXplLCBQUk9U
X1JFQUQsJAoKRVJST1I6IGNvZGUgaW5kZW50IHNob3VsZCBuZXZlciB1c2UgdGFicwojMjc3Mzog
RklMRTogdGVzdHMvdGNnL211bHRpYXJjaC90ZXN0LW1tYXAuYzozNTM6CiteSV5JXkkgIE1BUF9Q
UklWQVRFIHwgTUFQX0ZJWEVELCQKCkVSUk9SOiBjb2RlIGluZGVudCBzaG91bGQgbmV2ZXIgdXNl
IHRhYnMKIzI3NzQ6IEZJTEU6IHRlc3RzL3RjZy9tdWx0aWFyY2gvdGVzdC1tbWFwLmM6MzU0Ogor
XkleSV5JICB0ZXN0X2ZkLCQKCkVSUk9SOiBjb2RlIGluZGVudCBzaG91bGQgbmV2ZXIgdXNlIHRh
YnMKIzI3ODM6IEZJTEU6IHRlc3RzL3RjZy9tdWx0aWFyY2gvdGVzdC1tbWFwLmM6Mzg0OgorXklh
ZGRyID0gbW1hcChOVUxMLCBwYWdlc2l6ZSAqIDQwICogNCwgUFJPVF9SRUFELCQKCkVSUk9SOiBj
b2RlIGluZGVudCBzaG91bGQgbmV2ZXIgdXNlIHRhYnMKIzI3OTI6IEZJTEU6IHRlc3RzL3RjZy9t
dWx0aWFyY2gvdGVzdC1tbWFwLmM6MzkyOgorXkleSXAxID0gbW1hcChhZGRyLCBwYWdlc2l6ZSwg
UFJPVF9SRUFELCQKCkVSUk9SOiBjb2RlIGluZGVudCBzaG91bGQgbmV2ZXIgdXNlIHRhYnMKIzI3
OTY6IEZJTEU6IHRlc3RzL3RjZy9tdWx0aWFyY2gvdGVzdC1tbWFwLmM6Mzk1OgorXkleSXAyID0g
bW1hcChhZGRyICsgcGFnZXNpemUsIHBhZ2VzaXplLCBQUk9UX1JFQUQsJAoKRVJST1I6IGNvZGUg
aW5kZW50IHNob3VsZCBuZXZlciB1c2UgdGFicwojMjgwMDogRklMRTogdGVzdHMvdGNnL211bHRp
YXJjaC90ZXN0LW1tYXAuYzozOTg6CiteSV5JcDMgPSBtbWFwKGFkZHIgKyBwYWdlc2l6ZSAqIDIs
IHBhZ2VzaXplLCBQUk9UX1JFQUQsJAoKRVJST1I6IGNvZGUgaW5kZW50IHNob3VsZCBuZXZlciB1
c2UgdGFicwojMjgwNDogRklMRTogdGVzdHMvdGNnL211bHRpYXJjaC90ZXN0LW1tYXAuYzo0MDE6
CiteSV5JcDQgPSBtbWFwKGFkZHIgKyBwYWdlc2l6ZSAqIDMsIHBhZ2VzaXplLCBQUk9UX1JFQUQs
JAoKRVJST1I6IGNvZGUgaW5kZW50IHNob3VsZCBuZXZlciB1c2UgdGFicwojMjgwOTogRklMRTog
dGVzdHMvdGNnL211bHRpYXJjaC90ZXN0LW1tYXAuYzo0MDU6CiteSV5JLyogTWFrZSBzdXJlIHdl
IGdldCBwYWdlcyBhbGlnbmVkIHdpdGggdGhlIHBhZ2VzaXplLiQKCldBUk5JTkc6IEJsb2NrIGNv
bW1lbnRzIHVzZSBhIGxlYWRpbmcgLyogb24gYSBzZXBhcmF0ZSBsaW5lCiMyODA5OiBGSUxFOiB0
ZXN0cy90Y2cvbXVsdGlhcmNoL3Rlc3QtbW1hcC5jOjQwNToKKyAgICAgICAgICAgICAgIC8qIE1h
a2Ugc3VyZSB3ZSBnZXQgcGFnZXMgYWxpZ25lZCB3aXRoIHRoZSBwYWdlc2l6ZS4KCkVSUk9SOiBj
b2RlIGluZGVudCBzaG91bGQgbmV2ZXIgdXNlIHRhYnMKIzI4MTg6IEZJTEU6IHRlc3RzL3RjZy9t
dWx0aWFyY2gvdGVzdC1tbWFwLmM6NDgyOgorXkkvKiBBcHBlbmQgYSBmZXcgZXh0cmEgd3JpdGVz
IHRvIG1ha2UgdGhlIGZpbGUgZW5kIGF0IG5vbiQKCldBUk5JTkc6IEJsb2NrIGNvbW1lbnRzIHVz
ZSBhIGxlYWRpbmcgLyogb24gYSBzZXBhcmF0ZSBsaW5lCiMyODE4OiBGSUxFOiB0ZXN0cy90Y2cv
bXVsdGlhcmNoL3Rlc3QtbW1hcC5jOjQ4MjoKKyAgICAgICAvKiBBcHBlbmQgYSBmZXcgZXh0cmEg
d3JpdGVzIHRvIG1ha2UgdGhlIGZpbGUgZW5kIGF0IG5vbgoKdG90YWw6IDgzIGVycm9ycywgMzQg
d2FybmluZ3MsIDIzMjEgbGluZXMgY2hlY2tlZAoKQ29tbWl0IDlhZjNlOTA2YmNlMyAodHJpdmlh
bDogUmVtb3ZlIHRyYWlsaW5nIHdoaXRlc3BhY2VzKSBoYXMgc3R5bGUgcHJvYmxlbXMsIHBsZWFz
ZSByZXZpZXcuICBJZiBhbnkgb2YgdGhlc2UgZXJyb3JzCmFyZSBmYWxzZSBwb3NpdGl2ZXMgcmVw
b3J0IHRoZW0gdG8gdGhlIG1haW50YWluZXIsIHNlZQpDSEVDS1BBVENIIGluIE1BSU5UQUlORVJT
Lgo9PT0gT1VUUFVUIEVORCA9PT0KClRlc3QgY29tbWFuZCBleGl0ZWQgd2l0aCBjb2RlOiAxCgoK
VGhlIGZ1bGwgbG9nIGlzIGF2YWlsYWJsZSBhdApodHRwOi8vcGF0Y2hldy5vcmcvbG9ncy8yMDIw
MDcwNjE2MjMwMC4xMDg0NzUzLTEtZGluZWNoaW5AcmVkaGF0LmNvbS90ZXN0aW5nLmNoZWNrcGF0
Y2gvP3R5cGU9bWVzc2FnZS4KLS0tCkVtYWlsIGdlbmVyYXRlZCBhdXRvbWF0aWNhbGx5IGJ5IFBh
dGNoZXcgW2h0dHBzOi8vcGF0Y2hldy5vcmcvXS4KUGxlYXNlIHNlbmQgeW91ciBmZWVkYmFjayB0
byBwYXRjaGV3LWRldmVsQHJlZGhhdC5jb20=


From xen-devel-bounces@lists.xenproject.org Mon Jul 06 16:58:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jul 2020 16:58:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jsUR8-0004dE-Cv; Mon, 06 Jul 2020 16:57:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hiqY=AR=gmail.com=alistair23@srs-us1.protection.inumbo.net>)
 id 1jsUR7-0004d6-Bn
 for xen-devel@lists.xenproject.org; Mon, 06 Jul 2020 16:57:45 +0000
X-Inumbo-ID: cfa8b784-bfa9-11ea-bca7-bc764e2007e4
Received: from mail-io1-xd41.google.com (unknown [2607:f8b0:4864:20::d41])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id cfa8b784-bfa9-11ea-bca7-bc764e2007e4;
 Mon, 06 Jul 2020 16:57:44 +0000 (UTC)
Received: by mail-io1-xd41.google.com with SMTP id c16so40096841ioi.9
 for <xen-devel@lists.xenproject.org>; Mon, 06 Jul 2020 09:57:44 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc:content-transfer-encoding;
 bh=jbu76fBiXNYavUN7hJcbgVnTcX2cRFUs4aT7tEW/KCU=;
 b=ubHpKe3Nm/Sr5j6MorIaPojzmsRHj5+fmcltESNG2gAbWTvX1yCj86i/BloyiAH3SL
 xWhHSfxANC145Do+ytU9h3x6gUWXXpxoJbax6p8l+3hblfd3vf/ZAPYv7trfCqGSSgM7
 ULfQdyQJbfWa7bnl+6S/qg9ngCL58WTQ9YsqfvGxRjz48ApcTCue/jtHA4PiCxoPwZhs
 CTGRtaolavDk8YuMP5fi1gXkm8RHoEKlXhKjVg2jvZnxY6lwWUiXLwfdbEuPrlbjmpfN
 ZYZ+D4DfO4FNsz/Cko3Rugg6gv4hh5P11TwOjAfiM1LmFxrBE++T4JSXMs2EiDcjqq0I
 iI4A==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc:content-transfer-encoding;
 bh=jbu76fBiXNYavUN7hJcbgVnTcX2cRFUs4aT7tEW/KCU=;
 b=pJnphxkRSRRn1uVrolrPLbj8qo8tdk7wnrVH0huFD11D6/yGRpV1gYGvK6BecmcaNK
 d8zC3S+nbUx24xM1JFn29eo2WP2EtohbDoewPLrYb0Xwmf5cQTy0VqkLhtNC/dQv4Q9r
 F8x/3BNDxdQ75MiM7mEInMHA3lrSbfUi1/8qLbBdvKRBR9bFpmTS2XxE7xoWjy3B4nZ7
 dGvL+Ty2UhAYvT0c2oIuDCcxGO5QH5Q+ypdycVEWc2w+CxKh02DJeXgmDV8usLs1WkUk
 zHtuY5ccTmapYGPs9zBt2wCFe9mRAjg5GcIAdqSDy5K6uYM2fnBkmHQ515QEQzAH8kkC
 ebsw==
X-Gm-Message-State: AOAM531xR1Bv3vxrJdCh4pPSRBXngXF9hPD+aKuRv9Pl3Erw3StpdtSv
 f/MiMxf4us9trvPux3KKh+/jwmIMwcMuiJa4veo=
X-Google-Smtp-Source: ABdhPJz4Nagjt4yMPeIUiuanaZZK73HDv7ILWk0uVF4t2Q+HCp1PRj88webbTw+MS2P2VLqMmcsYgAWk06cLvYu6Q4g=
X-Received: by 2002:a02:1a06:: with SMTP id 6mr55748921jai.8.1594054664021;
 Mon, 06 Jul 2020 09:57:44 -0700 (PDT)
MIME-Version: 1.0
References: <20200704144943.18292-1-f4bug@amsat.org>
 <20200704144943.18292-9-f4bug@amsat.org>
In-Reply-To: <20200704144943.18292-9-f4bug@amsat.org>
From: Alistair Francis <alistair23@gmail.com>
Date: Mon, 6 Jul 2020 09:47:55 -0700
Message-ID: <CAKmqyKNeuFosuMbvQ80EQ2uCEXpxfii=8WZE_njt8=3UyzUMqw@mail.gmail.com>
Subject: Re: [PATCH 08/26] hw/usb/hcd-dwc2: Restrict 'dwc2-regs.h' scope
To: =?UTF-8?Q?Philippe_Mathieu=2DDaud=C3=A9?= <f4bug@amsat.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Peter Maydell <peter.maydell@linaro.org>,
 "Michael S. Tsirkin" <mst@redhat.com>,
 Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>,
 "qemu-devel@nongnu.org Developers" <qemu-devel@nongnu.org>,
 BALATON Zoltan <balaton@eik.bme.hu>, Gerd Hoffmann <kraxel@redhat.com>,
 "Edgar E. Iglesias" <edgar.iglesias@gmail.com>,
 Huacai Chen <chenhc@lemote.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Yoshinori Sato <ysato@users.sourceforge.jp>, Paul Durrant <paul@xen.org>,
 Magnus Damm <magnus.damm@gmail.com>, Markus Armbruster <armbru@redhat.com>,
 =?UTF-8?Q?Herv=C3=A9_Poussineau?= <hpoussin@reactos.org>,
 Anthony Perard <anthony.perard@citrix.com>,
 "open list:X86" <xen-devel@lists.xenproject.org>,
 Leif Lindholm <leif@nuviainc.com>,
 =?UTF-8?Q?Philippe_Mathieu=2DDaud=C3=A9?= <philmd@redhat.com>,
 Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
 Eduardo Habkost <ehabkost@redhat.com>,
 Alistair Francis <alistair@alistair23.me>,
 "Dr. David Alan Gilbert" <dgilbert@redhat.com>,
 Beniamino Galvani <b.galvani@gmail.com>,
 =?UTF-8?B?TWFyYy1BbmRyw6kgTHVyZWF1?= <marcandre.lureau@redhat.com>,
 Niek Linnenbank <nieklinnenbank@gmail.com>, qemu-arm <qemu-arm@nongnu.org>,
 Samuel Thibault <samuel.thibault@ens-lyon.org>,
 Richard Henderson <rth@twiddle.net>,
 Radoslaw Biernacki <radoslaw.biernacki@linaro.org>,
 Igor Mitsyanko <i.mitsyanko@gmail.com>, Paul Zimmerman <pauldzim@gmail.com>,
 "open list:New World" <qemu-ppc@nongnu.org>,
 David Gibson <david@gibson.dropbear.id.au>,
 Paolo Bonzini <pbonzini@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Sat, Jul 4, 2020 at 7:53 AM Philippe Mathieu-Daudé <f4bug@amsat.org> wrote:
>
> We only use these register definitions in files under the
> hw/usb/ directory. Keep that header local by moving it there.
>
> Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>

Reviewed-by: Alistair Francis <alistair.francis@wdc.com>

Alistair

> ---
>  {include/hw => hw}/usb/dwc2-regs.h | 0
>  hw/usb/hcd-dwc2.c                  | 2 +-
>  2 files changed, 1 insertion(+), 1 deletion(-)
>  rename {include/hw => hw}/usb/dwc2-regs.h (100%)
>
> diff --git a/include/hw/usb/dwc2-regs.h b/hw/usb/dwc2-regs.h
> similarity index 100%
> rename from include/hw/usb/dwc2-regs.h
> rename to hw/usb/dwc2-regs.h
> diff --git a/hw/usb/hcd-dwc2.c b/hw/usb/hcd-dwc2.c
> index ccf05d0823..252b60ef65 100644
> --- a/hw/usb/hcd-dwc2.c
> +++ b/hw/usb/hcd-dwc2.c
> @@ -34,7 +34,6 @@
>  #include "qemu/osdep.h"
>  #include "qemu/units.h"
>  #include "qapi/error.h"
> -#include "hw/usb/dwc2-regs.h"
>  #include "hw/usb/hcd-dwc2.h"
>  #include "hw/irq.h"
>  #include "sysemu/dma.h"
> @@ -43,6 +42,7 @@
>  #include "qemu/timer.h"
>  #include "qemu/log.h"
>  #include "hw/qdev-properties.h"
> +#include "dwc2-regs.h"
>
>  #define USB_HZ_FS       12000000
>  #define USB_HZ_HS       96000000
> --
> 2.21.3
>
>


From xen-devel-bounces@lists.xenproject.org Mon Jul 06 16:59:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jul 2020 16:59:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jsUSM-0004iX-OF; Mon, 06 Jul 2020 16:59:02 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hiqY=AR=gmail.com=alistair23@srs-us1.protection.inumbo.net>)
 id 1jsUSL-0004iN-68
 for xen-devel@lists.xenproject.org; Mon, 06 Jul 2020 16:59:01 +0000
X-Inumbo-ID: fd320ade-bfa9-11ea-bb8b-bc764e2007e4
Received: from mail-io1-xd42.google.com (unknown [2607:f8b0:4864:20::d42])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id fd320ade-bfa9-11ea-bb8b-bc764e2007e4;
 Mon, 06 Jul 2020 16:59:00 +0000 (UTC)
Received: by mail-io1-xd42.google.com with SMTP id q8so40103182iow.7
 for <xen-devel@lists.xenproject.org>; Mon, 06 Jul 2020 09:59:00 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc:content-transfer-encoding;
 bh=FXnnv73L5WipvrjnAQ/ZRDtg+5iLp60OiInY9J0Zsfs=;
 b=fK2rUb2MdciYU4Z+vy8DaVGveUMlo9HFcBrc2ZvwhMG/OTSUYi16woJ2geJIOeCHf+
 s/tu4UZiATsSlkJBZD2mo3pornyf9G2RKM8kkgfjb7neQY+BTo9GGZ9L3v3UwKSUzApe
 89qSSXu30fk4x/9gPjbYtny2dhJQctl8b9Lrmn4gsXAE9hmlLHVRcYlZiDw+zMWXKhiT
 WJsFHIKj5Bhm1NoNyZFlGdsqEUXmX9z2x7FAgTnWrhAmBppFMg16jLsf48EcsdQ3N9Ln
 BfooAlKZ7UjPxtmAAZt8fESX0cQByDrugScyx/bxh9+zbshTBMAtPKfqON37hfLVY3Rb
 eq2Q==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc:content-transfer-encoding;
 bh=FXnnv73L5WipvrjnAQ/ZRDtg+5iLp60OiInY9J0Zsfs=;
 b=Q3zXkE2wDlN7Rpfk0siEFSywCIvwzCSCU6fu4xHWsMA1JzfCN+t0PEC0SvI+I0dx/e
 QmSH5UR4qIdr12wpuiHUlsSLGjZPJrvOI+cfJaLYoD8KS7p5vANtAhy0wsP8x7I6nEdR
 7+bNrVgpOHEmDp1EvXdmZsom0IgWTirKkXgsDhdcH1wA644b4tgFgnQ6oBjxm+gQXh0N
 AwJmi6psy2xh8/KitTSA6jbzM2j0ar2EV1wnlcfRGt6smKIulkS4R9253qiIPnYl5tN5
 2Y5ZeGuWXhWFq4M0Of0DRAHtMazGBHFY2Jz+1XQRboS5dsiUTJkCgUFXZ3OxPu1TqpzF
 58qg==
X-Gm-Message-State: AOAM530CncLvtDHILju0S6QJg7c34wv8LNIfOHynmf+BDjBY7gKS4s1C
 H463YjyExbwVgIA37rYhyVAJIzh1ZvyOBaOXpH3weaLrAwU=
X-Google-Smtp-Source: ABdhPJxhPOyzA8S179T/GnN+DxKn3yFpd4iuIOBEQ6BWF7TH4kkJEHUUUSyuyppt2TvByxifRjL0bvuZRvWhnh0C+Tc=
X-Received: by 2002:a02:10c1:: with SMTP id 184mr52996773jay.135.1594054740444; 
 Mon, 06 Jul 2020 09:59:00 -0700 (PDT)
MIME-Version: 1.0
References: <20200704144943.18292-1-f4bug@amsat.org>
 <20200704144943.18292-2-f4bug@amsat.org>
In-Reply-To: <20200704144943.18292-2-f4bug@amsat.org>
From: Alistair Francis <alistair23@gmail.com>
Date: Mon, 6 Jul 2020 09:49:12 -0700
Message-ID: <CAKmqyKNd3qyB33TCamM_zXPFahfvdpmCirouODOy_QFotz55EQ@mail.gmail.com>
Subject: Re: [PATCH 01/26] hw/arm/sbsa-ref: Remove unused 'hw/usb.h' header
To: =?UTF-8?Q?Philippe_Mathieu=2DDaud=C3=A9?= <f4bug@amsat.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Peter Maydell <peter.maydell@linaro.org>,
 "Michael S. Tsirkin" <mst@redhat.com>,
 Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>,
 "qemu-devel@nongnu.org Developers" <qemu-devel@nongnu.org>,
 BALATON Zoltan <balaton@eik.bme.hu>, Gerd Hoffmann <kraxel@redhat.com>,
 "Edgar E. Iglesias" <edgar.iglesias@gmail.com>,
 Huacai Chen <chenhc@lemote.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Yoshinori Sato <ysato@users.sourceforge.jp>, Paul Durrant <paul@xen.org>,
 Magnus Damm <magnus.damm@gmail.com>, Markus Armbruster <armbru@redhat.com>,
 =?UTF-8?Q?Herv=C3=A9_Poussineau?= <hpoussin@reactos.org>,
 Anthony Perard <anthony.perard@citrix.com>,
 "open list:X86" <xen-devel@lists.xenproject.org>,
 Leif Lindholm <leif@nuviainc.com>,
 =?UTF-8?Q?Philippe_Mathieu=2DDaud=C3=A9?= <philmd@redhat.com>,
 Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
 Eduardo Habkost <ehabkost@redhat.com>,
 Alistair Francis <alistair@alistair23.me>,
 "Dr. David Alan Gilbert" <dgilbert@redhat.com>,
 Beniamino Galvani <b.galvani@gmail.com>,
 =?UTF-8?B?TWFyYy1BbmRyw6kgTHVyZWF1?= <marcandre.lureau@redhat.com>,
 Niek Linnenbank <nieklinnenbank@gmail.com>, qemu-arm <qemu-arm@nongnu.org>,
 Samuel Thibault <samuel.thibault@ens-lyon.org>,
 Richard Henderson <rth@twiddle.net>,
 Radoslaw Biernacki <radoslaw.biernacki@linaro.org>,
 Igor Mitsyanko <i.mitsyanko@gmail.com>, Paul Zimmerman <pauldzim@gmail.com>,
 "open list:New World" <qemu-ppc@nongnu.org>,
 David Gibson <david@gibson.dropbear.id.au>,
 Paolo Bonzini <pbonzini@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Sat, Jul 4, 2020 at 7:51 AM Philippe Mathieu-Daudé <f4bug@amsat.org> wrote:
>
> This file doesn't access anything from "hw/usb.h", remove its
> inclusion.
>
> Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>

Reviewed-by: Alistair Francis <alistair.francis@wdc.com>

Alistair

> ---
>  hw/arm/sbsa-ref.c | 1 -
>  1 file changed, 1 deletion(-)
>
> diff --git a/hw/arm/sbsa-ref.c b/hw/arm/sbsa-ref.c
> index e40c868a82..021e7c1b8b 100644
> --- a/hw/arm/sbsa-ref.c
> +++ b/hw/arm/sbsa-ref.c
> @@ -38,7 +38,6 @@
>  #include "hw/loader.h"
>  #include "hw/pci-host/gpex.h"
>  #include "hw/qdev-properties.h"
> -#include "hw/usb.h"
>  #include "hw/char/pl011.h"
>  #include "net/net.h"
>
> --
> 2.21.3
>
>


From xen-devel-bounces@lists.xenproject.org Mon Jul 06 16:59:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jul 2020 16:59:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jsUTF-0004ok-22; Mon, 06 Jul 2020 16:59:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hiqY=AR=gmail.com=alistair23@srs-us1.protection.inumbo.net>)
 id 1jsUTE-0004oM-JT
 for xen-devel@lists.xenproject.org; Mon, 06 Jul 2020 16:59:56 +0000
X-Inumbo-ID: 1e305074-bfaa-11ea-b7bb-bc764e2007e4
Received: from mail-il1-x142.google.com (unknown [2607:f8b0:4864:20::142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1e305074-bfaa-11ea-b7bb-bc764e2007e4;
 Mon, 06 Jul 2020 16:59:56 +0000 (UTC)
Received: by mail-il1-x142.google.com with SMTP id s21so18359343ilk.5
 for <xen-devel@lists.xenproject.org>; Mon, 06 Jul 2020 09:59:56 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc:content-transfer-encoding;
 bh=9/leO9q/0J5ZJH/Ka9OiHxF4rkOPGYi23nLotDw5+po=;
 b=t+L89NigImBrMi5A+L/vayp2Ek2Z1xRrRswlKbUWeD6I7FIa+voJRjlBIqb0sMzDec
 lqkQijLEMiyv2fVJODfnOo9v77QgO6cdGwJYO8vhoClltQxpALdDRs8ENXpnUTqPC8qf
 cTqrxHmxfkx4pm++FONued1iM/PgnDbs7eHuq38xcfAZlmcjn7mbBAeb9JBvYwXLMk1x
 fkmZUkSQLJ52R1fsCbmaL3rpzt43x9FH+XrveDmr0sQHoPrYO1DZaIY+DO5ZZzWlqj8M
 WZZMs8mMEJb1bdpZBZrqRgqogw2cqAXAXg5S3fLxb9M/1IpPh6xuhL4Lmki4tyc8gx5S
 /BcA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc:content-transfer-encoding;
 bh=9/leO9q/0J5ZJH/Ka9OiHxF4rkOPGYi23nLotDw5+po=;
 b=KVYMsLgSYJqWEQaASvyU6nBZAs0es6YBKMwmRphiZTdDzo4nEqoFylq66iKGNu4Qy1
 K/0j4yiJ7nypOXvs/xvfxq//kKSdeRDa3Vb2NWkCZdDt5OKigwSPdTJBLJ4Se4Ki8R5H
 RldbA/7tu1F4pwcSJ07vNmRVyVe1WdJC+ViuYX0OpDeIuq0EK5FEpttHQAhLgHVilS3T
 FntoNaG/YmQGfURi8S6nqCZs5rae06B989o38Rxb2CgTfmb5DfsEAtsmdcvXXQXtcyOw
 soPax9pzJwYAjqS+PcE0EpjZfGrxO3XHtEE5Yk9gwNQpX1YhEMpimAXsVXZBAfpcK5RM
 LlhQ==
X-Gm-Message-State: AOAM533/VFEYVM8CR5diB/oTePHBV8LkWxOEqJSO5sWdmKs0VOhp6Zyr
 3AsqGOT23AFtjRjaDpWjMIh6OAmN88VouJrTerU=
X-Google-Smtp-Source: ABdhPJw/HaHEsm0ulaS03PTv+Iez2OAy3BXjXkZdtFUms+Csyj/gKycDAQBhYI+0mtg7FDmV0+SOu7NjIzp7EDuPP3Q=
X-Received: by 2002:a92:c213:: with SMTP id j19mr31588925ilo.40.1594054795800; 
 Mon, 06 Jul 2020 09:59:55 -0700 (PDT)
MIME-Version: 1.0
References: <20200704144943.18292-1-f4bug@amsat.org>
 <20200704144943.18292-3-f4bug@amsat.org>
In-Reply-To: <20200704144943.18292-3-f4bug@amsat.org>
From: Alistair Francis <alistair23@gmail.com>
Date: Mon, 6 Jul 2020 09:50:09 -0700
Message-ID: <CAKmqyKNB3fQCBNZ29cRuj5LW14duowkP6+k+6V0fhHhZU+GtsQ@mail.gmail.com>
Subject: Re: [PATCH 02/26] hw/ppc/sam460ex: Add missing 'hw/pci/pci.h' header
To: =?UTF-8?Q?Philippe_Mathieu=2DDaud=C3=A9?= <f4bug@amsat.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Peter Maydell <peter.maydell@linaro.org>,
 "Michael S. Tsirkin" <mst@redhat.com>,
 Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>,
 "qemu-devel@nongnu.org Developers" <qemu-devel@nongnu.org>,
 BALATON Zoltan <balaton@eik.bme.hu>, Gerd Hoffmann <kraxel@redhat.com>,
 "Edgar E. Iglesias" <edgar.iglesias@gmail.com>,
 Huacai Chen <chenhc@lemote.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Yoshinori Sato <ysato@users.sourceforge.jp>, Paul Durrant <paul@xen.org>,
 Magnus Damm <magnus.damm@gmail.com>, Markus Armbruster <armbru@redhat.com>,
 =?UTF-8?Q?Herv=C3=A9_Poussineau?= <hpoussin@reactos.org>,
 Anthony Perard <anthony.perard@citrix.com>,
 "open list:X86" <xen-devel@lists.xenproject.org>,
 Leif Lindholm <leif@nuviainc.com>,
 =?UTF-8?Q?Philippe_Mathieu=2DDaud=C3=A9?= <philmd@redhat.com>,
 Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
 Eduardo Habkost <ehabkost@redhat.com>,
 Alistair Francis <alistair@alistair23.me>,
 "Dr. David Alan Gilbert" <dgilbert@redhat.com>,
 Beniamino Galvani <b.galvani@gmail.com>,
 =?UTF-8?B?TWFyYy1BbmRyw6kgTHVyZWF1?= <marcandre.lureau@redhat.com>,
 Niek Linnenbank <nieklinnenbank@gmail.com>, qemu-arm <qemu-arm@nongnu.org>,
 Samuel Thibault <samuel.thibault@ens-lyon.org>,
 Richard Henderson <rth@twiddle.net>,
 Radoslaw Biernacki <radoslaw.biernacki@linaro.org>,
 Igor Mitsyanko <i.mitsyanko@gmail.com>, Paul Zimmerman <pauldzim@gmail.com>,
 "open list:New World" <qemu-ppc@nongnu.org>,
 David Gibson <david@gibson.dropbear.id.au>,
 Paolo Bonzini <pbonzini@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Sat, Jul 4, 2020 at 7:50 AM Philippe Mathieu-Daudé <f4bug@amsat.org> wrote:
>
> This file uses pci_create_simple() and PCI_DEVFN() which are both
> declared in "hw/pci/pci.h". This include is indirectly included
> by a USB header. As we want to reduce the USB header inclusions
> later, include the PCI header now, to avoid later:
>
>   hw/ppc/sam460ex.c:397:5: error: implicit declaration of function ‘pci_create_simple’; did you mean ‘sysbus_create_simple’? [-Werror=implicit-function-declaration]
>     397 |     pci_create_simple(pci_bus, PCI_DEVFN(6, 0), "sm501");
>         |     ^~~~~~~~~~~~~~~~~
>         |     sysbus_create_simple
>   hw/ppc/sam460ex.c:397:5: error: nested extern declaration of ‘pci_create_simple’ [-Werror=nested-externs]
>   hw/ppc/sam460ex.c:397:32: error: implicit declaration of function ‘PCI_DEVFN’ [-Werror=implicit-function-declaration]
>     397 |     pci_create_simple(pci_bus, PCI_DEVFN(6, 0), "sm501");
>         |                                ^~~~~~~~~
>   hw/ppc/sam460ex.c:397:32: error: nested extern declaration of ‘PCI_DEVFN’ [-Werror=nested-externs]
>
> Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>

Reviewed-by: Alistair Francis <alistair.francis@wdc.com>

Alistair

> ---
>  hw/ppc/sam460ex.c | 1 +
>  1 file changed, 1 insertion(+)
>
> diff --git a/hw/ppc/sam460ex.c b/hw/ppc/sam460ex.c
> index 1a106a68de..fae970b142 100644
> --- a/hw/ppc/sam460ex.c
> +++ b/hw/ppc/sam460ex.c
> @@ -38,6 +38,7 @@
>  #include "hw/usb/hcd-ehci.h"
>  #include "hw/ppc/fdt.h"
>  #include "hw/qdev-properties.h"
> +#include "hw/pci/pci.h"
>
>  #include <libfdt.h>
>
> --
> 2.21.3
>
>


From xen-devel-bounces@lists.xenproject.org Mon Jul 06 17:01:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jul 2020 17:01:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jsUUV-0005ah-D3; Mon, 06 Jul 2020 17:01:15 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hiqY=AR=gmail.com=alistair23@srs-us1.protection.inumbo.net>)
 id 1jsUUU-0005ac-F2
 for xen-devel@lists.xenproject.org; Mon, 06 Jul 2020 17:01:14 +0000
X-Inumbo-ID: 4ca2e7d2-bfaa-11ea-b7bb-bc764e2007e4
Received: from mail-il1-x142.google.com (unknown [2607:f8b0:4864:20::142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4ca2e7d2-bfaa-11ea-b7bb-bc764e2007e4;
 Mon, 06 Jul 2020 17:01:14 +0000 (UTC)
Received: by mail-il1-x142.google.com with SMTP id t4so20366751iln.1
 for <xen-devel@lists.xenproject.org>; Mon, 06 Jul 2020 10:01:14 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc:content-transfer-encoding;
 bh=7ctzFV2wTSbf5LJWwfLFnbYaCCE17qTbQd0y6awEpBc=;
 b=ThO12PL7aDHTt6y75GpT9dyrfdUt5rmmp4Txh+Dd0WmdKFnGjk/r9Dm6MWbmdmKR2f
 gOl0EW560gWgAJC7KKPMPgy4RIOmmwIgVNkZadxV2u9D/dIfPdqotN2IpqAS9ycNQYHr
 hJ9dyDKYM4nPXV77ZHg3NPuffoCiRMGhmOByeH/avHgJv72C6g/CaXRvi9GUJfH3XClO
 jPo8NpRGszmVDXb43aPBUiVyM1/rwXiSMxzSVdmimKA5ZzWoLBOmJH+tqoSpPtKZ+8bh
 T2k+MkLuNEDO3YFDdNOzo/NtrgfOqqzzZgI1tkCp+0f10xp8ZpF+n8HFKOdAoTs7T00N
 VhyA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc:content-transfer-encoding;
 bh=7ctzFV2wTSbf5LJWwfLFnbYaCCE17qTbQd0y6awEpBc=;
 b=RD4kaByF5yTlOlFWVHv4NEw1EzG8Y5ZJmhWsPugITGOOlCtJjm/wFes1bnwI2muTzS
 DfVTx+4GJOxdyxpwq+IsqcCudX0H54ejepphP/bXaNAPmx2M09xs3U584isRTE17+sL2
 Z50HjvvJ2d5B/W+rlk4z6MdE66QD8mxZIPf3hHp56LVtw52SiBP9d9ceyI1FRyZEYC3o
 ndjaKDSTqHtTAElDhrF+FYVv0A+zFIK+AKCILoy67eKkyjBOnk+1UqoPc6DIuxhS+8KI
 7DoCX2Apz3G/+aevumiFLsTuCkS/3vwLJXcAYIRNCAkSTUR2DJjqT+jXyq79YRkokjU4
 JNww==
X-Gm-Message-State: AOAM530/5TYyIj/7Cwtt3LMY9RpHZodj8UI0YsplLwWDedB57iJkWdAF
 JOOKo4IjvmGJK3rfDkrm3J1A2mev5A3RN4apRE4=
X-Google-Smtp-Source: ABdhPJwXFWIOkq2ujd/ddpYJrT5qb6ZnY2NEtThTElkRWlGkJW2XtpKOBtiOhFAWAiQeMsquUUzqsZikNh2Q3JoY55o=
X-Received: by 2002:a05:6e02:d51:: with SMTP id
 h17mr31745866ilj.131.1594054872136; 
 Mon, 06 Jul 2020 10:01:12 -0700 (PDT)
MIME-Version: 1.0
References: <20200704144943.18292-1-f4bug@amsat.org>
 <20200704144943.18292-4-f4bug@amsat.org>
In-Reply-To: <20200704144943.18292-4-f4bug@amsat.org>
From: Alistair Francis <alistair23@gmail.com>
Date: Mon, 6 Jul 2020 09:51:17 -0700
Message-ID: <CAKmqyKOXnBzRC6-FQ664k-g8gQkByLEGq1MxBJ97eddL+OcH1A@mail.gmail.com>
Subject: Re: [PATCH 03/26] hw/usb: Remove unused VM_USB_HUB_SIZE definition
To: =?UTF-8?Q?Philippe_Mathieu=2DDaud=C3=A9?= <f4bug@amsat.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Peter Maydell <peter.maydell@linaro.org>,
 "Michael S. Tsirkin" <mst@redhat.com>,
 Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>,
 "qemu-devel@nongnu.org Developers" <qemu-devel@nongnu.org>,
 BALATON Zoltan <balaton@eik.bme.hu>, Gerd Hoffmann <kraxel@redhat.com>,
 "Edgar E. Iglesias" <edgar.iglesias@gmail.com>,
 Huacai Chen <chenhc@lemote.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Yoshinori Sato <ysato@users.sourceforge.jp>, Paul Durrant <paul@xen.org>,
 Magnus Damm <magnus.damm@gmail.com>, Markus Armbruster <armbru@redhat.com>,
 =?UTF-8?Q?Herv=C3=A9_Poussineau?= <hpoussin@reactos.org>,
 Anthony Perard <anthony.perard@citrix.com>,
 "open list:X86" <xen-devel@lists.xenproject.org>,
 Leif Lindholm <leif@nuviainc.com>,
 =?UTF-8?Q?Philippe_Mathieu=2DDaud=C3=A9?= <philmd@redhat.com>,
 Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
 Eduardo Habkost <ehabkost@redhat.com>,
 Alistair Francis <alistair@alistair23.me>,
 "Dr. David Alan Gilbert" <dgilbert@redhat.com>,
 Beniamino Galvani <b.galvani@gmail.com>,
 =?UTF-8?B?TWFyYy1BbmRyw6kgTHVyZWF1?= <marcandre.lureau@redhat.com>,
 Niek Linnenbank <nieklinnenbank@gmail.com>, qemu-arm <qemu-arm@nongnu.org>,
 Samuel Thibault <samuel.thibault@ens-lyon.org>,
 Richard Henderson <rth@twiddle.net>,
 Radoslaw Biernacki <radoslaw.biernacki@linaro.org>,
 Igor Mitsyanko <i.mitsyanko@gmail.com>, Paul Zimmerman <pauldzim@gmail.com>,
 "open list:New World" <qemu-ppc@nongnu.org>,
 David Gibson <david@gibson.dropbear.id.au>,
 Paolo Bonzini <pbonzini@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Sat, Jul 4, 2020 at 7:50 AM Philippe Mathieu-Daudé <f4bug@amsat.org> wrote:
>
> Commit a5d2f7273c ("qdev/usb: make qemu aware of usb busses")
> removed the last use of VM_USB_HUB_SIZE, 11 years ago. Time
> to drop it.
>
> Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>

Reviewed-by: Alistair Francis <alistair.francis@wdc.com>

Alistair

> ---
>  include/hw/usb.h | 4 ----
>  1 file changed, 4 deletions(-)
>
> diff --git a/include/hw/usb.h b/include/hw/usb.h
> index e29a37635b..4f04a1a879 100644
> --- a/include/hw/usb.h
> +++ b/include/hw/usb.h
> @@ -470,10 +470,6 @@ void usb_generic_async_ctrl_complete(USBDevice *s, USBPacket *p);
>  void hmp_info_usbhost(Monitor *mon, const QDict *qdict);
>  bool usb_host_dev_is_scsi_storage(USBDevice *usbdev);
>
> -/* usb ports of the VM */
> -
> -#define VM_USB_HUB_SIZE 8
> -
>  /* usb-bus.c */
>
>  #define TYPE_USB_BUS "usb-bus"
> --
> 2.21.3
>
>


From xen-devel-bounces@lists.xenproject.org Mon Jul 06 17:02:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jul 2020 17:02:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jsUVa-0005gk-NC; Mon, 06 Jul 2020 17:02:22 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hiqY=AR=gmail.com=alistair23@srs-us1.protection.inumbo.net>)
 id 1jsUVZ-0005gf-10
 for xen-devel@lists.xenproject.org; Mon, 06 Jul 2020 17:02:21 +0000
X-Inumbo-ID: 7447f44e-bfaa-11ea-bb8b-bc764e2007e4
Received: from mail-il1-x141.google.com (unknown [2607:f8b0:4864:20::141])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7447f44e-bfaa-11ea-bb8b-bc764e2007e4;
 Mon, 06 Jul 2020 17:02:20 +0000 (UTC)
Received: by mail-il1-x141.google.com with SMTP id o3so16132783ilo.12
 for <xen-devel@lists.xenproject.org>; Mon, 06 Jul 2020 10:02:20 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc:content-transfer-encoding;
 bh=SURriqbHZVXTTwsvXZaGt8gxbHhNclYTbkHoMvIQmpM=;
 b=MioVaGq1jCalxgWYpPdotTkyVsAMeK/bshk85fNYAhYubO4nVFHSOeCUz0kSho8p9p
 6Y83qIJELSpr9VN9VFzq+lTIUscnBNyLwqUC24IyYDjx/4cG7EfaP2YEZc6BAomYyDpg
 4WJqsPVXOVt3XY2x1ZsaQLbg24wvg5Vy/7YqNFE+T3a7KXvhOx34JOs9fSXv1n/wrIC6
 8CzhLDbRxujPAE056QBW2Z/dWImCVitmNebYDtS6btqpo7FzR6vkHFi2ue676ojVEt9W
 vkRi2J6vudadAMoruzF52cpovO1cMqBkDNoVdP47mEGcUSBlNOyPMoeOEOsRimXSU/ZU
 vZxg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc:content-transfer-encoding;
 bh=SURriqbHZVXTTwsvXZaGt8gxbHhNclYTbkHoMvIQmpM=;
 b=GatcxvTfjgBsES/13rcXjU13M3fsMLC0SYIER68jOE2cwVdKj3gZag8KwF5YdVuY8A
 yCb4dXdE60ptCKTFa3GsOdrtsuyIfiWa9mkQXLomgiic+8gXHAkIWu1AGqtAxRSmLJKC
 tOzIkUWcfkabDXlEJXov7mqEnUotdD3vNjT1D3FwdjHHnN9OYM1NRM53O824hETORlGM
 clbQa5sKH8IG9Zrx7L5y6yVoSGmDRO2xmEGY/G3UbWGaN6Yg7lnJ1c0OXua0x7X+a3Yu
 BoqpJL2yXB5RZOKlj5xPzPpaM3htj5BUZf1ul0+0T7pWGmnKt0bD5p/2BRbwCpCgirZm
 ht8g==
X-Gm-Message-State: AOAM533YSNP1NUeH2SiNAqaapGJ7FR6PfGtKugPWvUl42U4P1zVdrIzE
 lDPTkQVZUJ1iOD4k0+F0oZBQ0GtDUA3lGqXwjT/AWoQZXn8=
X-Google-Smtp-Source: ABdhPJw3irY14ekgg5HFZwY3uijUbo8j99jRr5i12obU9BK983EmEFW1tqXdt4MQ3PGmM1qTVFVUGr89GwGbxSSoFdg=
X-Received: by 2002:a92:d186:: with SMTP id z6mr33254547ilz.227.1594054940213; 
 Mon, 06 Jul 2020 10:02:20 -0700 (PDT)
MIME-Version: 1.0
References: <20200704144943.18292-1-f4bug@amsat.org>
 <20200704144943.18292-5-f4bug@amsat.org>
In-Reply-To: <20200704144943.18292-5-f4bug@amsat.org>
From: Alistair Francis <alistair23@gmail.com>
Date: Mon, 6 Jul 2020 09:52:26 -0700
Message-ID: <CAKmqyKMm2xhgxSqX5mHAkELfBnWhzqw-ruf3oATFvB8sohnw2w@mail.gmail.com>
Subject: Re: [PATCH 04/26] hw/usb: Reduce 'exec/memory.h' inclusion
To: =?UTF-8?Q?Philippe_Mathieu=2DDaud=C3=A9?= <f4bug@amsat.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Peter Maydell <peter.maydell@linaro.org>,
 "Michael S. Tsirkin" <mst@redhat.com>,
 Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>,
 "qemu-devel@nongnu.org Developers" <qemu-devel@nongnu.org>,
 BALATON Zoltan <balaton@eik.bme.hu>, Gerd Hoffmann <kraxel@redhat.com>,
 "Edgar E. Iglesias" <edgar.iglesias@gmail.com>,
 Huacai Chen <chenhc@lemote.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Yoshinori Sato <ysato@users.sourceforge.jp>, Paul Durrant <paul@xen.org>,
 Magnus Damm <magnus.damm@gmail.com>, Markus Armbruster <armbru@redhat.com>,
 =?UTF-8?Q?Herv=C3=A9_Poussineau?= <hpoussin@reactos.org>,
 Anthony Perard <anthony.perard@citrix.com>,
 "open list:X86" <xen-devel@lists.xenproject.org>,
 Leif Lindholm <leif@nuviainc.com>,
 =?UTF-8?Q?Philippe_Mathieu=2DDaud=C3=A9?= <philmd@redhat.com>,
 Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
 Eduardo Habkost <ehabkost@redhat.com>,
 Alistair Francis <alistair@alistair23.me>,
 "Dr. David Alan Gilbert" <dgilbert@redhat.com>,
 Beniamino Galvani <b.galvani@gmail.com>,
 =?UTF-8?B?TWFyYy1BbmRyw6kgTHVyZWF1?= <marcandre.lureau@redhat.com>,
 Niek Linnenbank <nieklinnenbank@gmail.com>, qemu-arm <qemu-arm@nongnu.org>,
 Samuel Thibault <samuel.thibault@ens-lyon.org>,
 Richard Henderson <rth@twiddle.net>,
 Radoslaw Biernacki <radoslaw.biernacki@linaro.org>,
 Igor Mitsyanko <i.mitsyanko@gmail.com>, Paul Zimmerman <pauldzim@gmail.com>,
 "open list:New World" <qemu-ppc@nongnu.org>,
 David Gibson <david@gibson.dropbear.id.au>,
 Paolo Bonzini <pbonzini@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Sat, Jul 4, 2020 at 7:52 AM Philippe Mathieu-Daudé <f4bug@amsat.org> wrote:
>
> "exec/memory.h" is only required by "hw/usb/hcd-musb.h".
> Include it there directly.
>
> Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>

Reviewed-by: Alistair Francis <alistair.francis@wdc.com>

Alistair

> ---
>  include/hw/usb.h          | 1 -
>  include/hw/usb/hcd-musb.h | 2 ++
>  2 files changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/include/hw/usb.h b/include/hw/usb.h
> index 4f04a1a879..15b2ef300a 100644
> --- a/include/hw/usb.h
> +++ b/include/hw/usb.h
> @@ -25,7 +25,6 @@
>   * THE SOFTWARE.
>   */
>
> -#include "exec/memory.h"
>  #include "hw/qdev-core.h"
>  #include "qemu/iov.h"
>  #include "qemu/queue.h"
> diff --git a/include/hw/usb/hcd-musb.h b/include/hw/usb/hcd-musb.h
> index c874b9f292..ec3ee5c4b0 100644
> --- a/include/hw/usb/hcd-musb.h
> +++ b/include/hw/usb/hcd-musb.h
> @@ -13,6 +13,8 @@
>  #ifndef HW_USB_MUSB_H
>  #define HW_USB_MUSB_H
>
> +#include "exec/memory.h"
> +
>  enum musb_irq_source_e {
>      musb_irq_suspend = 0,
>      musb_irq_resume,
> --
> 2.21.3
>
>


From xen-devel-bounces@lists.xenproject.org Mon Jul 06 17:03:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jul 2020 17:03:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jsUX6-0005pq-6Q; Mon, 06 Jul 2020 17:03:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hiqY=AR=gmail.com=alistair23@srs-us1.protection.inumbo.net>)
 id 1jsUX5-0005pl-12
 for xen-devel@lists.xenproject.org; Mon, 06 Jul 2020 17:03:55 +0000
X-Inumbo-ID: ac3ded40-bfaa-11ea-bb8b-bc764e2007e4
Received: from mail-io1-xd44.google.com (unknown [2607:f8b0:4864:20::d44])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ac3ded40-bfaa-11ea-bb8b-bc764e2007e4;
 Mon, 06 Jul 2020 17:03:54 +0000 (UTC)
Received: by mail-io1-xd44.google.com with SMTP id e64so35193338iof.12
 for <xen-devel@lists.xenproject.org>; Mon, 06 Jul 2020 10:03:54 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc:content-transfer-encoding;
 bh=X20uXtdA36wJdFmnL5kXE6ANkGPKNtErzQUg9fO43F0=;
 b=WKa0Lnp/4ENw4KYTm55Jp0CBKAAjECKb63/NLtirFrXqJWtjy9YARPnCGw4sN1rNjC
 xj+uAS77lFX7Pw53RhXebWVJDzr039uCYSZgLleFMvlnZrE5XzIgVJuamrvJYBrcM/gQ
 OgQk5oxwJFsQp3xnl3IN5pjJdSv5RpRJknYyuIngvcAGV+aEd3GH78GbDWbSR4txus0n
 OjkePAGPpMQG0Jj3txMJqRtyrJ1n41/fNix/mk8Dgmq6xf+CrzKD4xaDg7J5pIzH5G8L
 K1zDdXydsLx9rQ7CiCTvJ1PTLelBs97/0EipVuw22nVvFXmvZtr9fBmOY3leWsb+Ctmx
 sPXw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc:content-transfer-encoding;
 bh=X20uXtdA36wJdFmnL5kXE6ANkGPKNtErzQUg9fO43F0=;
 b=kpURspZ9NqKyfdOlAgl8w8JwC11hQtOHftLvj904LHFgqOExEoXbxJFWtgn5Qybnqz
 DFNtd6wdJun0D3p3+eHpHBFKVeEeoPOUlKCfbEWBmm+zXokMJugC0oocFqBAVeDRgLDY
 Qw+O3yhM34gYKF7Fuz7Bh1LWY6+nQqePPmuh4mYEgWy0XDKndUqdLAU2TuxQT5aGuXAb
 cPbg1MbuqVpL+9cuyKFTqd48c5FpCc3MHRB2Cuil7K5ipKX8vIbg7TOQt/umzAlSE42c
 nQLBJS2qq1c1tHCUxEd3SzTHkiNaXqEHwiN2xzEw/YAZ407fkdigNYLG52FfSkCf1Npv
 lJrQ==
X-Gm-Message-State: AOAM53208Y1QsoyWiJwKZSwOuvs4+qeLjjMZbpZjQQmcQtOQTC2N8wT5
 i3pCY7qj7DGmuIQ8vy5SMMI/N5zi7YtnybIytb4=
X-Google-Smtp-Source: ABdhPJzF5odL5exow9BLUiHvk44VSggfGQ4JIj5C1Wu24QgZVWClevs13HGgZ8PX2ApFiDe8dG6ucK7PuLDbtFgP3a8=
X-Received: by 2002:a5d:9ed0:: with SMTP id a16mr26245576ioe.176.1594055034056; 
 Mon, 06 Jul 2020 10:03:54 -0700 (PDT)
MIME-Version: 1.0
References: <20200704144943.18292-1-f4bug@amsat.org>
 <20200704144943.18292-7-f4bug@amsat.org>
In-Reply-To: <20200704144943.18292-7-f4bug@amsat.org>
From: Alistair Francis <alistair23@gmail.com>
Date: Mon, 6 Jul 2020 09:54:07 -0700
Message-ID: <CAKmqyKOexZ602pCtmO03FW5x=NzawWSzfHq3puQOgpLbdXnUbg@mail.gmail.com>
Subject: Re: [PATCH 06/26] hw/usb/hcd-dwc2: Remove unnecessary includes
To: =?UTF-8?Q?Philippe_Mathieu=2DDaud=C3=A9?= <f4bug@amsat.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Peter Maydell <peter.maydell@linaro.org>,
 "Michael S. Tsirkin" <mst@redhat.com>,
 Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>,
 "qemu-devel@nongnu.org Developers" <qemu-devel@nongnu.org>,
 BALATON Zoltan <balaton@eik.bme.hu>, Gerd Hoffmann <kraxel@redhat.com>,
 "Edgar E. Iglesias" <edgar.iglesias@gmail.com>,
 Huacai Chen <chenhc@lemote.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Yoshinori Sato <ysato@users.sourceforge.jp>, Paul Durrant <paul@xen.org>,
 Magnus Damm <magnus.damm@gmail.com>, Markus Armbruster <armbru@redhat.com>,
 =?UTF-8?Q?Herv=C3=A9_Poussineau?= <hpoussin@reactos.org>,
 Anthony Perard <anthony.perard@citrix.com>,
 "open list:X86" <xen-devel@lists.xenproject.org>,
 Leif Lindholm <leif@nuviainc.com>,
 =?UTF-8?Q?Philippe_Mathieu=2DDaud=C3=A9?= <philmd@redhat.com>,
 Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
 Eduardo Habkost <ehabkost@redhat.com>,
 Alistair Francis <alistair@alistair23.me>,
 "Dr. David Alan Gilbert" <dgilbert@redhat.com>,
 Beniamino Galvani <b.galvani@gmail.com>,
 =?UTF-8?B?TWFyYy1BbmRyw6kgTHVyZWF1?= <marcandre.lureau@redhat.com>,
 Niek Linnenbank <nieklinnenbank@gmail.com>, qemu-arm <qemu-arm@nongnu.org>,
 Samuel Thibault <samuel.thibault@ens-lyon.org>,
 Richard Henderson <rth@twiddle.net>,
 Radoslaw Biernacki <radoslaw.biernacki@linaro.org>,
 Igor Mitsyanko <i.mitsyanko@gmail.com>, Paul Zimmerman <pauldzim@gmail.com>,
 "open list:New World" <qemu-ppc@nongnu.org>,
 David Gibson <david@gibson.dropbear.id.au>,
 Paolo Bonzini <pbonzini@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Sat, Jul 4, 2020 at 7:53 AM Philippe Mathieu-Daudé <f4bug@amsat.org> wrote:
>
> "qemu/error-report.h" and "qemu/main-loop.h" are not used.
> Remove them.
>
> Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>

Reviewed-by: Alistair Francis <alistair.francis@wdc.com>

Alistair

> ---
>  hw/usb/hcd-dwc2.c | 2 --
>  1 file changed, 2 deletions(-)
>
> diff --git a/hw/usb/hcd-dwc2.c b/hw/usb/hcd-dwc2.c
> index 72cbd051f3..590e75b455 100644
> --- a/hw/usb/hcd-dwc2.c
> +++ b/hw/usb/hcd-dwc2.c
> @@ -39,8 +39,6 @@
>  #include "migration/vmstate.h"
>  #include "trace.h"
>  #include "qemu/log.h"
> -#include "qemu/error-report.h"
> -#include "qemu/main-loop.h"
>  #include "hw/qdev-properties.h"
>
>  #define USB_HZ_FS       12000000
> --
> 2.21.3
>
>


From xen-devel-bounces@lists.xenproject.org Mon Jul 06 17:04:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jul 2020 17:04:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jsUXs-0005tG-Gp; Mon, 06 Jul 2020 17:04:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hiqY=AR=gmail.com=alistair23@srs-us1.protection.inumbo.net>)
 id 1jsUXr-0005t7-Cv
 for xen-devel@lists.xenproject.org; Mon, 06 Jul 2020 17:04:43 +0000
X-Inumbo-ID: c9232696-bfaa-11ea-bca7-bc764e2007e4
Received: from mail-io1-xd43.google.com (unknown [2607:f8b0:4864:20::d43])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c9232696-bfaa-11ea-bca7-bc764e2007e4;
 Mon, 06 Jul 2020 17:04:42 +0000 (UTC)
Received: by mail-io1-xd43.google.com with SMTP id q74so16547579iod.1
 for <xen-devel@lists.xenproject.org>; Mon, 06 Jul 2020 10:04:42 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc:content-transfer-encoding;
 bh=cTCiYJ1utsgT/OEA1JPToQ9e7sbkDVzau0vl+orbX5M=;
 b=Y5emWhnvg2wA6Hsc2mNKM91+PUECbIjNCVx4XijBRlwxhG9yPaVY4nCPuN66Pn/g4N
 jALw2uCbPtXPOhmyJkKBc45MnuJg6mxtGi3kWihcL0PytiWW/5DejqGp9pu1JW3N0B0L
 YJbqT9Ic/jycTAYWFiu+Rsh/qMuTBhTldv4DWUXGneSj8xYSSWACCuJie0v5fihIfGzM
 +FHYD1FVz9yVKlUzWPPpjP3O3le2SpOqqOzlb4GBTfB83H90A+bGP6q+3xoLfe4Nv+cq
 E6XH5NYuSvgxKJUs0XSM4FaA8Wd5kSisFNa8A9sD0zbO1Mx64e82B7/5rFnqrKTujNAu
 cLYg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc:content-transfer-encoding;
 bh=cTCiYJ1utsgT/OEA1JPToQ9e7sbkDVzau0vl+orbX5M=;
 b=oF0OiIuKeETAFVLQyN4/YVE30HevZ4KjHOrDS1hBcipJgQrK/wzGXDfrT8wlL9FO4A
 vwBzQHCFd9dV6T8EgZmFFOA93j/Iiuotps8vYBLKhaWFPU6SLobXHTDWf6xYRPw+F6kq
 zY7OI2D7iMPOqvMf3qlY2Kc1oNJWXzyRbTgmmb2niy1DggAQQNIjfEsOpJ42ab+Mejpn
 k4fpQNAAyu3ksaQQD5GMt7F/0vY8Atd0JlbT41IDtba3rNJgyNeDrmCEs5XytgSb0gY/
 s3h8FzPQD+jWl0GbweqzIVXME7wDOKkT2ICrpIqf0e34+8dvNjanHFpXgkc6T4nB4vSo
 jMEQ==
X-Gm-Message-State: AOAM5304j3wtZ61yeI1DxCDuMT1F9L4MF8HedIlfKn5RK+ul65K6F6GL
 rf7elMpfDTfFprZTnS7dQaNxDi2ELW4YSYnqmXM=
X-Google-Smtp-Source: ABdhPJwtinW+rIbee8onSzHAr9mGYnsnNGs22+9NC+YbC3w6hod1nvPLY3hYBjgn60PpMlBLwo7zj8UoDSitGwru4As=
X-Received: by 2002:a5d:97d9:: with SMTP id k25mr26494073ios.42.1594055082487; 
 Mon, 06 Jul 2020 10:04:42 -0700 (PDT)
MIME-Version: 1.0
References: <20200704144943.18292-1-f4bug@amsat.org>
 <20200704144943.18292-8-f4bug@amsat.org>
In-Reply-To: <20200704144943.18292-8-f4bug@amsat.org>
From: Alistair Francis <alistair23@gmail.com>
Date: Mon, 6 Jul 2020 09:54:53 -0700
Message-ID: <CAKmqyKMk272oxLgGJHQMmZk_7+Q7N=5uPxSUdEMq2-N8DbHrLg@mail.gmail.com>
Subject: Re: [PATCH 07/26] hw/usb/hcd-dwc2: Restrict some headers to source
To: =?UTF-8?Q?Philippe_Mathieu=2DDaud=C3=A9?= <f4bug@amsat.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Peter Maydell <peter.maydell@linaro.org>,
 "Michael S. Tsirkin" <mst@redhat.com>,
 Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>,
 "qemu-devel@nongnu.org Developers" <qemu-devel@nongnu.org>,
 BALATON Zoltan <balaton@eik.bme.hu>, Gerd Hoffmann <kraxel@redhat.com>,
 "Edgar E. Iglesias" <edgar.iglesias@gmail.com>,
 Huacai Chen <chenhc@lemote.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Yoshinori Sato <ysato@users.sourceforge.jp>, Paul Durrant <paul@xen.org>,
 Magnus Damm <magnus.damm@gmail.com>, Markus Armbruster <armbru@redhat.com>,
 =?UTF-8?Q?Herv=C3=A9_Poussineau?= <hpoussin@reactos.org>,
 Anthony Perard <anthony.perard@citrix.com>,
 "open list:X86" <xen-devel@lists.xenproject.org>,
 Leif Lindholm <leif@nuviainc.com>,
 =?UTF-8?Q?Philippe_Mathieu=2DDaud=C3=A9?= <philmd@redhat.com>,
 Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
 Eduardo Habkost <ehabkost@redhat.com>,
 Alistair Francis <alistair@alistair23.me>,
 "Dr. David Alan Gilbert" <dgilbert@redhat.com>,
 Beniamino Galvani <b.galvani@gmail.com>,
 =?UTF-8?B?TWFyYy1BbmRyw6kgTHVyZWF1?= <marcandre.lureau@redhat.com>,
 Niek Linnenbank <nieklinnenbank@gmail.com>, qemu-arm <qemu-arm@nongnu.org>,
 Samuel Thibault <samuel.thibault@ens-lyon.org>,
 Richard Henderson <rth@twiddle.net>,
 Radoslaw Biernacki <radoslaw.biernacki@linaro.org>,
 Igor Mitsyanko <i.mitsyanko@gmail.com>, Paul Zimmerman <pauldzim@gmail.com>,
 "open list:New World" <qemu-ppc@nongnu.org>,
 David Gibson <david@gibson.dropbear.id.au>,
 Paolo Bonzini <pbonzini@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Sat, Jul 4, 2020 at 7:52 AM Philippe Mathieu-Daudé <f4bug@amsat.org> wrote:
>
> The header "usb/hcd-dwc2.h" doesn't need to include "qemu/timer.h",
> "sysemu/dma.h", "hw/irq.h" (the types required are forward declared).
> Include them in the source file which is the only one requiring the
> function declarations.
>
> Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>

Reviewed-by: Alistair Francis <alistair.francis@wdc.com>

Alistair

> ---
>  hw/usb/hcd-dwc2.h | 3 ---
>  hw/usb/hcd-dwc2.c | 3 +++
>  2 files changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/hw/usb/hcd-dwc2.h b/hw/usb/hcd-dwc2.h
> index 4ba809a07b..2adf0f53c7 100644
> --- a/hw/usb/hcd-dwc2.h
> +++ b/hw/usb/hcd-dwc2.h
> @@ -19,11 +19,8 @@
>  #ifndef HW_USB_DWC2_H
>  #define HW_USB_DWC2_H
>
> -#include "qemu/timer.h"
> -#include "hw/irq.h"
>  #include "hw/sysbus.h"
>  #include "hw/usb.h"
> -#include "sysemu/dma.h"
>
>  #define DWC2_MMIO_SIZE      0x11000
>
> diff --git a/hw/usb/hcd-dwc2.c b/hw/usb/hcd-dwc2.c
> index 590e75b455..ccf05d0823 100644
> --- a/hw/usb/hcd-dwc2.c
> +++ b/hw/usb/hcd-dwc2.c
> @@ -36,8 +36,11 @@
>  #include "qapi/error.h"
>  #include "hw/usb/dwc2-regs.h"
>  #include "hw/usb/hcd-dwc2.h"
> +#include "hw/irq.h"
> +#include "sysemu/dma.h"
>  #include "migration/vmstate.h"
>  #include "trace.h"
> +#include "qemu/timer.h"
>  #include "qemu/log.h"
>  #include "hw/qdev-properties.h"
>
> --
> 2.21.3
>
>


From xen-devel-bounces@lists.xenproject.org Mon Jul 06 17:05:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jul 2020 17:05:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jsUYJ-0005x4-Py; Mon, 06 Jul 2020 17:05:11 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hiqY=AR=gmail.com=alistair23@srs-us1.protection.inumbo.net>)
 id 1jsUYI-0005wn-F2
 for xen-devel@lists.xenproject.org; Mon, 06 Jul 2020 17:05:10 +0000
X-Inumbo-ID: d93ede12-bfaa-11ea-bca7-bc764e2007e4
Received: from mail-io1-xd41.google.com (unknown [2607:f8b0:4864:20::d41])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d93ede12-bfaa-11ea-bca7-bc764e2007e4;
 Mon, 06 Jul 2020 17:05:09 +0000 (UTC)
Received: by mail-io1-xd41.google.com with SMTP id q8so40125814iow.7
 for <xen-devel@lists.xenproject.org>; Mon, 06 Jul 2020 10:05:09 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc:content-transfer-encoding;
 bh=zN9P5ColTvlUfdn83p6hlM/q3HmA739CHt0AXxAEsKY=;
 b=eegBASf0DJgvBsiQJhBbGe04hBG8gQlnhR0e1R/ASVwj2VHV9vGLfcRsCN4KCZZL7R
 B83BaGMH/iF7mwFtkqNwrhcYBoLeuRgJhTM45CYuhyfzNTru1stzx/37r5EW7OhPb+Sh
 LRtrqRylWajXgEByi9cOTijxoLbDMEMhICnRVbAOOwJLu9jPbF/Te8Qvau7kveunJewl
 Db0lsTutmvoozzVPB+KFbn17OIYLS5FptsgdKB52Bz3rn1GjOosdN0j2cy5W6T6sYt0p
 QdMaqFdSvbzvnq0BKJyMWcewvVfyBSesh4A14gyVGvYTOfgkCHZ+MscaMJTp0T9VOAKa
 WAHQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc:content-transfer-encoding;
 bh=zN9P5ColTvlUfdn83p6hlM/q3HmA739CHt0AXxAEsKY=;
 b=kUaMMEh+E7vanCoBdINzGIESuMmCGm/fpfG8CwskN5ml7zRRp6gJ8fYsuKTY5cVJ29
 M4t2fshGzhQDLFfcRUHVGy9EscQC7AzEUnY983pV7G5+J5+QG59AYXjy/s+Vvtu06wSf
 hbZRwg/rfVLtLOCtIxfm1KCJLzen/spHLO/aLK3xQKN22AzZ+PJfpHx5XBQqaJmCPOTF
 Ir6RxfB1y5BpmYoS3mg8lDLVZ4yPx/E7nHOq+uAXzKLq32/pyy7VeVeFSEQzVz7vbA8i
 zZ8NFUPyBBPRMAY47yfw0g4BF5wlzEA0XNMBEc8S/lisedNUYVIpCJo6rqwJuWXCGB+F
 Hm/Q==
X-Gm-Message-State: AOAM530VLOIG6Av4EBjlEiu7nnm2te+C1fOhkdByIbnTjf9WINeA4tmc
 0v+cEJrXxfRpOaq0BWmsFA4+n2douCC2FtKBWC0=
X-Google-Smtp-Source: ABdhPJw8hZ3hkrezlrBZ0tGzIowHRnJHp8TQNjhIxHPipZ6z7o2IrpAbgPHKaESP7cYXHZCW69GaN6F/KxK5+BYwjZc=
X-Received: by 2002:a5d:9ed0:: with SMTP id a16mr26251024ioe.176.1594055109583; 
 Mon, 06 Jul 2020 10:05:09 -0700 (PDT)
MIME-Version: 1.0
References: <20200704144943.18292-1-f4bug@amsat.org>
 <20200704144943.18292-6-f4bug@amsat.org>
In-Reply-To: <20200704144943.18292-6-f4bug@amsat.org>
From: Alistair Francis <alistair23@gmail.com>
Date: Mon, 6 Jul 2020 09:55:22 -0700
Message-ID: <CAKmqyKMW3Db-bk1+MOtz461-iAy9Se4uq=2stNmgiELzVAd3NA@mail.gmail.com>
Subject: Re: [PATCH 05/26] hw/usb/desc: Add missing header
To: =?UTF-8?Q?Philippe_Mathieu=2DDaud=C3=A9?= <f4bug@amsat.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Peter Maydell <peter.maydell@linaro.org>,
 "Michael S. Tsirkin" <mst@redhat.com>,
 Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>,
 "qemu-devel@nongnu.org Developers" <qemu-devel@nongnu.org>,
 BALATON Zoltan <balaton@eik.bme.hu>, Gerd Hoffmann <kraxel@redhat.com>,
 "Edgar E. Iglesias" <edgar.iglesias@gmail.com>,
 Huacai Chen <chenhc@lemote.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Yoshinori Sato <ysato@users.sourceforge.jp>, Paul Durrant <paul@xen.org>,
 Magnus Damm <magnus.damm@gmail.com>, Markus Armbruster <armbru@redhat.com>,
 =?UTF-8?Q?Herv=C3=A9_Poussineau?= <hpoussin@reactos.org>,
 Anthony Perard <anthony.perard@citrix.com>,
 "open list:X86" <xen-devel@lists.xenproject.org>,
 Leif Lindholm <leif@nuviainc.com>,
 =?UTF-8?Q?Philippe_Mathieu=2DDaud=C3=A9?= <philmd@redhat.com>,
 Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
 Eduardo Habkost <ehabkost@redhat.com>,
 Alistair Francis <alistair@alistair23.me>,
 "Dr. David Alan Gilbert" <dgilbert@redhat.com>,
 Beniamino Galvani <b.galvani@gmail.com>,
 =?UTF-8?B?TWFyYy1BbmRyw6kgTHVyZWF1?= <marcandre.lureau@redhat.com>,
 Niek Linnenbank <nieklinnenbank@gmail.com>, qemu-arm <qemu-arm@nongnu.org>,
 Samuel Thibault <samuel.thibault@ens-lyon.org>,
 Richard Henderson <rth@twiddle.net>,
 Radoslaw Biernacki <radoslaw.biernacki@linaro.org>,
 Igor Mitsyanko <i.mitsyanko@gmail.com>, Paul Zimmerman <pauldzim@gmail.com>,
 "open list:New World" <qemu-ppc@nongnu.org>,
 David Gibson <david@gibson.dropbear.id.au>,
 Paolo Bonzini <pbonzini@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Sat, Jul 4, 2020 at 7:52 AM Philippe Mathieu-Daudé <f4bug@amsat.org> wrote:
>
> This header uses the USBPacket and USBDevice types which are
> forward declared in "hw/usb.h".
>
> Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>

Reviewed-by: Alistair Francis <alistair.francis@wdc.com>

Alistair

> ---
>  hw/usb/desc.h | 1 +
>  1 file changed, 1 insertion(+)
>
> diff --git a/hw/usb/desc.h b/hw/usb/desc.h
> index 4d81c68e0e..92594fbe29 100644
> --- a/hw/usb/desc.h
> +++ b/hw/usb/desc.h
> @@ -2,6 +2,7 @@
>  #define QEMU_HW_USB_DESC_H
>
>  #include <wchar.h>
> +#include "hw/usb.h"
>
>  /* binary representation */
>  typedef struct USBDescriptor {
> --
> 2.21.3
>
>


From xen-devel-bounces@lists.xenproject.org Mon Jul 06 17:06:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jul 2020 17:06:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jsUZS-00066N-4i; Mon, 06 Jul 2020 17:06:22 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hiqY=AR=gmail.com=alistair23@srs-us1.protection.inumbo.net>)
 id 1jsUZQ-00066D-U8
 for xen-devel@lists.xenproject.org; Mon, 06 Jul 2020 17:06:20 +0000
X-Inumbo-ID: 034ee74c-bfab-11ea-8496-bc764e2007e4
Received: from mail-il1-x144.google.com (unknown [2607:f8b0:4864:20::144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 034ee74c-bfab-11ea-8496-bc764e2007e4;
 Mon, 06 Jul 2020 17:06:20 +0000 (UTC)
Received: by mail-il1-x144.google.com with SMTP id a6so16800404ilq.13
 for <xen-devel@lists.xenproject.org>; Mon, 06 Jul 2020 10:06:20 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc:content-transfer-encoding;
 bh=B6iU2F4D68kBs4rXCTy4j6Bd4sjgaDyabwwjvi+hW5Q=;
 b=AbJ0ON1qWA3F4Bki8vxBgNFCAj9bKw6GWMHnwjy0BHevGWNnPoRZOMJ0S+D6de1dhl
 0hNekgR0AHtFq5mG7yzL9shawGAV2AmKUgYcpgmrhjZljtKHOBACxfNrtufx07HwKaMW
 qPASD8vFzMlW++21+2uuY8a4B3cYDOiH2vcQJdGo42bfb58doT5aEhXx2+lGYg2TladZ
 Ss8z+SOazk9trK61OW/m2u2TWhD4ewrvaWAJWYnZPzMWA25uCSGGgRXWR8mns1NG9zn/
 Ja/0keDaRQdPfssAe8FscZ3h1S+YYXW9ePYnlkmdrTyY7zU1cCrI9V7RI3HBRPxlyq11
 C4uA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc:content-transfer-encoding;
 bh=B6iU2F4D68kBs4rXCTy4j6Bd4sjgaDyabwwjvi+hW5Q=;
 b=AB8gjMZw/CqUlQdqTrxYtu16Fw99kXitqWf4row/MDsLgbfDvH1u/SckXHo++e3Y6K
 pTPS2T9O70MzgJUR7cergmYy9vAa+G7XFFgn+ZuWBCTu2f5ZFhBLnVQV+yvfGSYeuA5O
 qiAhI2jgP0JfXceK4zDiMTaYRIidgTFRi0EKPk3Va0t+Oa/muRig6O/cd+fh/TZFc6k1
 e/npwLiO9ef4zhGcvpXG6QXR81RvYetJqee+3K1CDVZ+KthfQ3hHf27N7UgjvbC3jPwz
 e3nh478PUXXm08e6N5WmJW1vfbqhn18X8UAGy7F8/2LyU1Ad+CaOGrUUEaQj95qqANxR
 qqrA==
X-Gm-Message-State: AOAM533sJ9h3L5cwvwuRP7HJxVVvaHpnlXKLkUvWzo6kG++ViUPWF7pU
 y+VrL8WbQQ5TOGeEbJ5xUfY2A3HZOTTyb7WjnpU=
X-Google-Smtp-Source: ABdhPJxcajQAjV3tK1pO3vVESVT/fL33Jteajm7cQ9OrNrxAE/wXmAHw5YAkhAIBWLMGaVi/cTHnOj0BeNWBXTtUiyQ=
X-Received: by 2002:a92:5f12:: with SMTP id t18mr31713705ilb.267.1594055180052; 
 Mon, 06 Jul 2020 10:06:20 -0700 (PDT)
MIME-Version: 1.0
References: <20200704144943.18292-1-f4bug@amsat.org>
 <20200704144943.18292-10-f4bug@amsat.org>
In-Reply-To: <20200704144943.18292-10-f4bug@amsat.org>
From: Alistair Francis <alistair23@gmail.com>
Date: Mon, 6 Jul 2020 09:56:33 -0700
Message-ID: <CAKmqyKMivQ6HxaB9DmJ1EgWcpC0sD1VBOC=V_09if_kkcvEwcA@mail.gmail.com>
Subject: Re: [PATCH 09/26] hw/usb/hcd-ehci: Remove unnecessary include
To: =?UTF-8?Q?Philippe_Mathieu=2DDaud=C3=A9?= <f4bug@amsat.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Peter Maydell <peter.maydell@linaro.org>,
 "Michael S. Tsirkin" <mst@redhat.com>,
 Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>,
 "qemu-devel@nongnu.org Developers" <qemu-devel@nongnu.org>,
 BALATON Zoltan <balaton@eik.bme.hu>, Gerd Hoffmann <kraxel@redhat.com>,
 "Edgar E. Iglesias" <edgar.iglesias@gmail.com>,
 Huacai Chen <chenhc@lemote.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Yoshinori Sato <ysato@users.sourceforge.jp>, Paul Durrant <paul@xen.org>,
 Magnus Damm <magnus.damm@gmail.com>, Markus Armbruster <armbru@redhat.com>,
 =?UTF-8?Q?Herv=C3=A9_Poussineau?= <hpoussin@reactos.org>,
 Anthony Perard <anthony.perard@citrix.com>,
 "open list:X86" <xen-devel@lists.xenproject.org>,
 Leif Lindholm <leif@nuviainc.com>,
 =?UTF-8?Q?Philippe_Mathieu=2DDaud=C3=A9?= <philmd@redhat.com>,
 Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
 Eduardo Habkost <ehabkost@redhat.com>,
 Alistair Francis <alistair@alistair23.me>,
 "Dr. David Alan Gilbert" <dgilbert@redhat.com>,
 Beniamino Galvani <b.galvani@gmail.com>,
 =?UTF-8?B?TWFyYy1BbmRyw6kgTHVyZWF1?= <marcandre.lureau@redhat.com>,
 Niek Linnenbank <nieklinnenbank@gmail.com>, qemu-arm <qemu-arm@nongnu.org>,
 Samuel Thibault <samuel.thibault@ens-lyon.org>,
 Richard Henderson <rth@twiddle.net>,
 Radoslaw Biernacki <radoslaw.biernacki@linaro.org>,
 Igor Mitsyanko <i.mitsyanko@gmail.com>, Paul Zimmerman <pauldzim@gmail.com>,
 "open list:New World" <qemu-ppc@nongnu.org>,
 David Gibson <david@gibson.dropbear.id.au>,
 Paolo Bonzini <pbonzini@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Sat, Jul 4, 2020 at 7:55 AM Philippe Mathieu-Daudé <f4bug@amsat.org> wrote:
>
> As "qemu/main-loop.h" is not used, remove it.
>
> Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>

Reviewed-by: Alistair Francis <alistair.francis@wdc.com>

Alistair

> ---
>  hw/usb/hcd-ehci.c | 1 -
>  1 file changed, 1 deletion(-)
>
> diff --git a/hw/usb/hcd-ehci.c b/hw/usb/hcd-ehci.c
> index 1495e8f7fa..256fb91e0c 100644
> --- a/hw/usb/hcd-ehci.c
> +++ b/hw/usb/hcd-ehci.c
> @@ -34,7 +34,6 @@
>  #include "migration/vmstate.h"
>  #include "trace.h"
>  #include "qemu/error-report.h"
> -#include "qemu/main-loop.h"
>  #include "sysemu/runstate.h"
>
>  #define FRAME_TIMER_FREQ 1000
> --
> 2.21.3
>
>


From xen-devel-bounces@lists.xenproject.org Mon Jul 06 17:07:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jul 2020 17:07:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jsUal-0006D7-Ga; Mon, 06 Jul 2020 17:07:43 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hiqY=AR=gmail.com=alistair23@srs-us1.protection.inumbo.net>)
 id 1jsUaj-0006Cx-TH
 for xen-devel@lists.xenproject.org; Mon, 06 Jul 2020 17:07:41 +0000
X-Inumbo-ID: 338f3dee-bfab-11ea-bb8b-bc764e2007e4
Received: from mail-io1-xd43.google.com (unknown [2607:f8b0:4864:20::d43])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 338f3dee-bfab-11ea-bb8b-bc764e2007e4;
 Mon, 06 Jul 2020 17:07:41 +0000 (UTC)
Received: by mail-io1-xd43.google.com with SMTP id q74so16558443iod.1
 for <xen-devel@lists.xenproject.org>; Mon, 06 Jul 2020 10:07:41 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc:content-transfer-encoding;
 bh=dBgIVLKA0exdXkprZM6w2CG1AHicJslD0tUu5CxlPMg=;
 b=iqkaVDc+Prmll7W77QI2rjsviIcN5ukUeuzuaQ0p2o0zCEfPG0UufK2FbcqQBNBIWA
 guTgtEdhEKGxejAPHA9tMZmUDN3oLd7C2F0fKRP3owIBf/3tZ11gV8WrZd1M/ig0wQ6V
 kMYCtvlGTOSJe6qG/h8lVqrP3vA+s8CU5gCDC1Wp0auG8Myfi+4wBJ7sve4nUfxtUQa8
 WHp/NFGJDmm4uIEFnPQD7R6ve34GdwcRK7lZxMbn/Dt8nvSuLcJnM1PTi7aYL2e3L8UN
 MvCrzRJl874oVwv+GY8to2rSNjmgWVf0WumbR206I4OuAAVCHdu/C1OKKjCgdQQZOSva
 47bQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc:content-transfer-encoding;
 bh=dBgIVLKA0exdXkprZM6w2CG1AHicJslD0tUu5CxlPMg=;
 b=mIha8bW/bb1IQhS1fHi5QCHsw/oxYmR4XWFSrXNHl40B3z2KLo2Ujk9CfTNoclGRNh
 4KXD6j9PM/ramTjHdPkzNw1mt2XZ6LBK6h2uhPUVT7sFqw139r2vR6fybOacMFV6XKlR
 FW/4FzOQkkoyKqv7wpThLBedbJ6+vvQLNPeIKmLsc46C7WIgdNvaLucy8OV0LP7yVhGY
 bj5OV4OZig5KI8B2QwJMOsnYospzuSbg9bKCS/8b//yjLifqmaXThmW6/xc249iAkLbL
 LNUUwV1bv8t35pnjgAMGyDRcWjmgVfy5rqXC6TrIpK7YaodktioDWlDyWZM+OPml4kPo
 CJxw==
X-Gm-Message-State: AOAM532tU1gE55liICQymgxswaoLbzsZ6lDqCe2u1PdODjwVKyTtbQHa
 GDK/NeWUQyOcgaKW5xyaJpE3BWCediMJs3CVKoM=
X-Google-Smtp-Source: ABdhPJyw0KmkpT0wW3UGWOxj20T8QI719PAjvFEtTo4fBjXY/b+McPswSyw7q1t1bQm3ES8jp++ekpGyo8FkQ/2mlOY=
X-Received: by 2002:a02:1a06:: with SMTP id 6mr55797272jai.8.1594055261111;
 Mon, 06 Jul 2020 10:07:41 -0700 (PDT)
MIME-Version: 1.0
References: <20200704144943.18292-1-f4bug@amsat.org>
 <20200704144943.18292-12-f4bug@amsat.org>
In-Reply-To: <20200704144943.18292-12-f4bug@amsat.org>
From: Alistair Francis <alistair23@gmail.com>
Date: Mon, 6 Jul 2020 09:57:54 -0700
Message-ID: <CAKmqyKMYzHasyz0-Fx5tbpzr2_369n7wxkmtSVubCOxPH1BrDQ@mail.gmail.com>
Subject: Re: [PATCH 11/26] hw/usb/hcd-xhci: Add missing header
To: =?UTF-8?Q?Philippe_Mathieu=2DDaud=C3=A9?= <f4bug@amsat.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Peter Maydell <peter.maydell@linaro.org>,
 "Michael S. Tsirkin" <mst@redhat.com>,
 Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>,
 "qemu-devel@nongnu.org Developers" <qemu-devel@nongnu.org>,
 BALATON Zoltan <balaton@eik.bme.hu>, Gerd Hoffmann <kraxel@redhat.com>,
 "Edgar E. Iglesias" <edgar.iglesias@gmail.com>,
 Huacai Chen <chenhc@lemote.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Yoshinori Sato <ysato@users.sourceforge.jp>, Paul Durrant <paul@xen.org>,
 Magnus Damm <magnus.damm@gmail.com>, Markus Armbruster <armbru@redhat.com>,
 =?UTF-8?Q?Herv=C3=A9_Poussineau?= <hpoussin@reactos.org>,
 Anthony Perard <anthony.perard@citrix.com>,
 "open list:X86" <xen-devel@lists.xenproject.org>,
 Leif Lindholm <leif@nuviainc.com>,
 =?UTF-8?Q?Philippe_Mathieu=2DDaud=C3=A9?= <philmd@redhat.com>,
 Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
 Eduardo Habkost <ehabkost@redhat.com>,
 Alistair Francis <alistair@alistair23.me>,
 "Dr. David Alan Gilbert" <dgilbert@redhat.com>,
 Beniamino Galvani <b.galvani@gmail.com>,
 =?UTF-8?B?TWFyYy1BbmRyw6kgTHVyZWF1?= <marcandre.lureau@redhat.com>,
 Niek Linnenbank <nieklinnenbank@gmail.com>, qemu-arm <qemu-arm@nongnu.org>,
 Samuel Thibault <samuel.thibault@ens-lyon.org>,
 Richard Henderson <rth@twiddle.net>,
 Radoslaw Biernacki <radoslaw.biernacki@linaro.org>,
 Igor Mitsyanko <i.mitsyanko@gmail.com>, Paul Zimmerman <pauldzim@gmail.com>,
 "open list:New World" <qemu-ppc@nongnu.org>,
 David Gibson <david@gibson.dropbear.id.au>,
 Paolo Bonzini <pbonzini@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Sat, Jul 4, 2020 at 7:54 AM Philippe Mathieu-Daudé <f4bug@amsat.org> wrote:
>
> This header uses the USBPort type which is forward declared
> by "hw/usb.h".
>
> Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>

Reviewed-by: Alistair Francis <alistair.francis@wdc.com>

Alistair

> ---
>  hw/usb/hcd-xhci.h | 2 ++
>  1 file changed, 2 insertions(+)
>
> diff --git a/hw/usb/hcd-xhci.h b/hw/usb/hcd-xhci.h
> index 946af51fc2..8edbdc2c3e 100644
> --- a/hw/usb/hcd-xhci.h
> +++ b/hw/usb/hcd-xhci.h
> @@ -22,6 +22,8 @@
>  #ifndef HW_USB_HCD_XHCI_H
>  #define HW_USB_HCD_XHCI_H
>
> +#include "hw/usb.h"
> +
>  #define TYPE_XHCI "base-xhci"
>  #define TYPE_NEC_XHCI "nec-usb-xhci"
>  #define TYPE_QEMU_XHCI "qemu-xhci"
> --
> 2.21.3
>
>


From xen-devel-bounces@lists.xenproject.org Mon Jul 06 17:08:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jul 2020 17:08:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jsUbW-0006JG-QX; Mon, 06 Jul 2020 17:08:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hiqY=AR=gmail.com=alistair23@srs-us1.protection.inumbo.net>)
 id 1jsUbV-0006J7-0k
 for xen-devel@lists.xenproject.org; Mon, 06 Jul 2020 17:08:29 +0000
X-Inumbo-ID: 4f9cddca-bfab-11ea-bb8b-bc764e2007e4
Received: from mail-il1-x144.google.com (unknown [2607:f8b0:4864:20::144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4f9cddca-bfab-11ea-bb8b-bc764e2007e4;
 Mon, 06 Jul 2020 17:08:28 +0000 (UTC)
Received: by mail-il1-x144.google.com with SMTP id r12so26334875ilh.4
 for <xen-devel@lists.xenproject.org>; Mon, 06 Jul 2020 10:08:28 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc:content-transfer-encoding;
 bh=aJ+NMNUyWOSeU76/wZH/xOla8z/F8BGLKK6NO+daT7c=;
 b=DCtHRVbfbNdZdHrkno7Pg0sSHF/TO28JTJSsk4CQyuv3rpTGwPimzgtRdeipMmeYC8
 aUYtMHdn8ZYb548p0TC5sD7ZpYCJiiGQvNkmx5Fh+Y1+2AWdNXgyTbZF5g/nSA0CaFXC
 xiRGdWCgbqjAb5Jb4EW+JexNsAlDtpzx8H6+qoa4BE4MW1vaDJzHgM4Qc/aMXC3ErDa+
 xscFDbu167g1VZgMknCoWSYpzxdRsorPrRCP5QF0sWCErssL7NXmx5bWwMfBYg4XsY6P
 iFeF9TqxPGvbOg5sN22Y7ZvG9cGuDvhz6MRZPNAzvoxaV4xjYoy11GgiUBRGvz6pcpCz
 OlkA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc:content-transfer-encoding;
 bh=aJ+NMNUyWOSeU76/wZH/xOla8z/F8BGLKK6NO+daT7c=;
 b=cio/iDavFzDTP3UQLIxEkoBIKZ/7bt2jOHiEPrIBP2UmHRG8SENCtLcbQMu2TCeB34
 AnkP7hI+MhMlzB3+owM49MW8VmWz5edOSJ8flnLRWISThNQH3Bb9boPhSAveiBAcsB6l
 XQMSk9W0omZCl4OjypAI4GH8VG5mxyQUcYzEmBMmkOrlmPwi7yCNAeWtaMVQhqSsMg9B
 6zimV2xw1HkMODoPCzgGVkxYb+6TJWUu/MamvTX4GfEbdHCnlrazZNaIxJrO6sORycaW
 NOEw9yzRnMCaPUxQeqEaQBVPrpzdm+Wmc9ooAu0SMvOVWe176RinvZ4hedxe1/A1Vbth
 aITw==
X-Gm-Message-State: AOAM530jPdt7yeDSg4PEtZg5l72/i8YP90pDqq9gni1yedP3n0+2y+R6
 LQsL8UXrT+tcnnx60ft1Q0HTz8LXkg8LOt+CxvU=
X-Google-Smtp-Source: ABdhPJztbbycqoucR5VA9L3J/eqZEps73vsP7bMZlYGOF7l5+CP/IDu5wRU4G+LaN3QoYCbBFL9miKesApB55IKBhTM=
X-Received: by 2002:a92:c213:: with SMTP id j19mr31628274ilo.40.1594055308108; 
 Mon, 06 Jul 2020 10:08:28 -0700 (PDT)
MIME-Version: 1.0
References: <20200704144943.18292-1-f4bug@amsat.org>
 <20200704144943.18292-13-f4bug@amsat.org>
In-Reply-To: <20200704144943.18292-13-f4bug@amsat.org>
From: Alistair Francis <alistair23@gmail.com>
Date: Mon, 6 Jul 2020 09:58:41 -0700
Message-ID: <CAKmqyKP1PJVHc=At4EM_60NZrdkokwOW9iwvqTHBoaYShWLUYg@mail.gmail.com>
Subject: Re: [PATCH 12/26] hw/usb/hcd-musb: Restrict header scope
To: =?UTF-8?Q?Philippe_Mathieu=2DDaud=C3=A9?= <f4bug@amsat.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Peter Maydell <peter.maydell@linaro.org>,
 "Michael S. Tsirkin" <mst@redhat.com>,
 Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>,
 "qemu-devel@nongnu.org Developers" <qemu-devel@nongnu.org>,
 BALATON Zoltan <balaton@eik.bme.hu>, Gerd Hoffmann <kraxel@redhat.com>,
 "Edgar E. Iglesias" <edgar.iglesias@gmail.com>,
 Huacai Chen <chenhc@lemote.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Yoshinori Sato <ysato@users.sourceforge.jp>, Paul Durrant <paul@xen.org>,
 Magnus Damm <magnus.damm@gmail.com>, Markus Armbruster <armbru@redhat.com>,
 =?UTF-8?Q?Herv=C3=A9_Poussineau?= <hpoussin@reactos.org>,
 Anthony Perard <anthony.perard@citrix.com>,
 "open list:X86" <xen-devel@lists.xenproject.org>,
 Leif Lindholm <leif@nuviainc.com>,
 =?UTF-8?Q?Philippe_Mathieu=2DDaud=C3=A9?= <philmd@redhat.com>,
 Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
 Eduardo Habkost <ehabkost@redhat.com>,
 Alistair Francis <alistair@alistair23.me>,
 "Dr. David Alan Gilbert" <dgilbert@redhat.com>,
 Beniamino Galvani <b.galvani@gmail.com>,
 =?UTF-8?B?TWFyYy1BbmRyw6kgTHVyZWF1?= <marcandre.lureau@redhat.com>,
 Niek Linnenbank <nieklinnenbank@gmail.com>, qemu-arm <qemu-arm@nongnu.org>,
 Samuel Thibault <samuel.thibault@ens-lyon.org>,
 Richard Henderson <rth@twiddle.net>,
 Radoslaw Biernacki <radoslaw.biernacki@linaro.org>,
 Igor Mitsyanko <i.mitsyanko@gmail.com>, Paul Zimmerman <pauldzim@gmail.com>,
 "open list:New World" <qemu-ppc@nongnu.org>,
 David Gibson <david@gibson.dropbear.id.au>,
 Paolo Bonzini <pbonzini@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Sat, Jul 4, 2020 at 7:56 AM Philippe Mathieu-Daudé <f4bug@amsat.org> wrote:
>
> "hcd-musb.h" is only required by USB device implementations.
> As we keep these implementations in the hw/usb/ directory,
> move the header there.
>
> Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>

Reviewed-by: Alistair Francis <alistair.francis@wdc.com>

Alistair

> ---
>  {include/hw => hw}/usb/hcd-musb.h | 0
>  hw/usb/hcd-musb.c                 | 2 +-
>  hw/usb/tusb6010.c                 | 2 +-
>  3 files changed, 2 insertions(+), 2 deletions(-)
>  rename {include/hw => hw}/usb/hcd-musb.h (100%)
>
> diff --git a/include/hw/usb/hcd-musb.h b/hw/usb/hcd-musb.h
> similarity index 100%
> rename from include/hw/usb/hcd-musb.h
> rename to hw/usb/hcd-musb.h
> diff --git a/hw/usb/hcd-musb.c b/hw/usb/hcd-musb.c
> index 85f5ff5bd4..b8d8766a4a 100644
> --- a/hw/usb/hcd-musb.c
> +++ b/hw/usb/hcd-musb.c
> @@ -23,9 +23,9 @@
>  #include "qemu/osdep.h"
>  #include "qemu/timer.h"
>  #include "hw/usb.h"
> -#include "hw/usb/hcd-musb.h"
>  #include "hw/irq.h"
>  #include "hw/hw.h"
> +#include "hcd-musb.h"
>
>  /* Common USB registers */
>  #define MUSB_HDRC_FADDR                0x00    /* 8-bit */
> diff --git a/hw/usb/tusb6010.c b/hw/usb/tusb6010.c
> index 27eb28d3e4..9f9b81b09d 100644
> --- a/hw/usb/tusb6010.c
> +++ b/hw/usb/tusb6010.c
> @@ -23,11 +23,11 @@
>  #include "qemu/module.h"
>  #include "qemu/timer.h"
>  #include "hw/usb.h"
> -#include "hw/usb/hcd-musb.h"
>  #include "hw/arm/omap.h"
>  #include "hw/hw.h"
>  #include "hw/irq.h"
>  #include "hw/sysbus.h"
> +#include "hcd-musb.h"
>
>  #define TYPE_TUSB6010 "tusb6010"
>  #define TUSB(obj) OBJECT_CHECK(TUSBState, (obj), TYPE_TUSB6010)
> --
> 2.21.3
>
>


From xen-devel-bounces@lists.xenproject.org Mon Jul 06 17:09:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jul 2020 17:09:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jsUcs-0006Rb-9Z; Mon, 06 Jul 2020 17:09:54 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hiqY=AR=gmail.com=alistair23@srs-us1.protection.inumbo.net>)
 id 1jsUcr-0006RW-2b
 for xen-devel@lists.xenproject.org; Mon, 06 Jul 2020 17:09:53 +0000
X-Inumbo-ID: 81a928b4-bfab-11ea-bca7-bc764e2007e4
Received: from mail-il1-x142.google.com (unknown [2607:f8b0:4864:20::142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 81a928b4-bfab-11ea-bca7-bc764e2007e4;
 Mon, 06 Jul 2020 17:09:52 +0000 (UTC)
Received: by mail-il1-x142.google.com with SMTP id i18so33555859ilk.10
 for <xen-devel@lists.xenproject.org>; Mon, 06 Jul 2020 10:09:52 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc:content-transfer-encoding;
 bh=Qjmg6keciKQ+VOMeBpL/AW2YN1J1Kn6ACCJR6VlV+qI=;
 b=OwWaKMMMP1QFjMhkzf7cgTSnKgDIc5+avykkBDKL5ShxrEZIh2B9WQ4e2zfIOytV/S
 nT2xYaD/K/Vz5l0Ackv84a76Sz0UM/YC/GgxV/iqLEOpTvzEuxbqE411kHshEjoTjLro
 D8tn/C0W0XdheqSqYAmla8lSsnywSVsKq0WHgzdFTlrA5qHkvSACL4/4KkAEjFI5VhYv
 yu4z/Z1UQT1x3Xag1s1raH6NdSUSqtqIsUOrtWYSlMNZfedghH8nGN5g6ZLmqfd1X3fq
 1n5JPWWntS7W1kO4/RKlZkXlSU9pqI4NdubmRt+GADA2vNbCuw3/HfBUFwD2kp5Wgsvr
 AJtw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc:content-transfer-encoding;
 bh=Qjmg6keciKQ+VOMeBpL/AW2YN1J1Kn6ACCJR6VlV+qI=;
 b=HKORv38rNyC60gVibfrL+CGaOMmEe+aJab55GTZ8xCW5sn1lRa0ZacOFWXPVdZBwbE
 cITf/08zVH3JCW6d1dhEYpiX1nfA3ptz0u/iGqMt1XnDot31I/UzExue8S50F+DXKNEu
 z0/Z9fgzx3fS4c3BgvXBfra7Sl/n7N10fsbqBWPJmkFtIcOhlyWwiZ7fhJJUYq1f0TrX
 VWSwP+aMwt7572bLOo6QvuUGC7SupXxISdM8NqlyPEcev2eg1mBpQvpNv0HDVRb7ddhD
 IqiThcIaIJV4LzQIaOIr3wzq/RqIBqINpyMzHas701Oba3VTJknW95KhJTvQDjwP+Ngb
 yRoQ==
X-Gm-Message-State: AOAM530Dz1e+C5kowEY4RTg79r5q/IoOgaN6qFrEPQllZ48WlRkI/Q/c
 4/pGXa30Px3C8GHhKUl97UeaO2PUGYLfKGMO6/c=
X-Google-Smtp-Source: ABdhPJyOwkiRiK+9nBFi1yzATBmoIi7mSBskR4fEEeJCsCiuqZAj+mTPEz50OG8NSilCy8YDtIqWGKoKN3C/V91lepU=
X-Received: by 2002:a92:bb84:: with SMTP id x4mr32234878ilk.177.1594055392114; 
 Mon, 06 Jul 2020 10:09:52 -0700 (PDT)
MIME-Version: 1.0
References: <20200704144943.18292-1-f4bug@amsat.org>
 <20200704144943.18292-11-f4bug@amsat.org>
In-Reply-To: <20200704144943.18292-11-f4bug@amsat.org>
From: Alistair Francis <alistair23@gmail.com>
Date: Mon, 6 Jul 2020 10:00:05 -0700
Message-ID: <CAKmqyKMUWZ9BsEeZkiK4-_MAhFpZO66MKQNhoZ3q1FT+XZie3g@mail.gmail.com>
Subject: Re: [PATCH 10/26] hw/usb/hcd-ehci: Move few definitions from header
 to source
To: =?UTF-8?Q?Philippe_Mathieu=2DDaud=C3=A9?= <f4bug@amsat.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Peter Maydell <peter.maydell@linaro.org>,
 "Michael S. Tsirkin" <mst@redhat.com>,
 Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>,
 "qemu-devel@nongnu.org Developers" <qemu-devel@nongnu.org>,
 BALATON Zoltan <balaton@eik.bme.hu>, Gerd Hoffmann <kraxel@redhat.com>,
 "Edgar E. Iglesias" <edgar.iglesias@gmail.com>,
 Huacai Chen <chenhc@lemote.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Yoshinori Sato <ysato@users.sourceforge.jp>, Paul Durrant <paul@xen.org>,
 Magnus Damm <magnus.damm@gmail.com>, Markus Armbruster <armbru@redhat.com>,
 =?UTF-8?Q?Herv=C3=A9_Poussineau?= <hpoussin@reactos.org>,
 Anthony Perard <anthony.perard@citrix.com>,
 "open list:X86" <xen-devel@lists.xenproject.org>,
 Leif Lindholm <leif@nuviainc.com>,
 =?UTF-8?Q?Philippe_Mathieu=2DDaud=C3=A9?= <philmd@redhat.com>,
 Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
 Eduardo Habkost <ehabkost@redhat.com>,
 Alistair Francis <alistair@alistair23.me>,
 "Dr. David Alan Gilbert" <dgilbert@redhat.com>,
 Beniamino Galvani <b.galvani@gmail.com>,
 =?UTF-8?B?TWFyYy1BbmRyw6kgTHVyZWF1?= <marcandre.lureau@redhat.com>,
 Niek Linnenbank <nieklinnenbank@gmail.com>, qemu-arm <qemu-arm@nongnu.org>,
 Samuel Thibault <samuel.thibault@ens-lyon.org>,
 Richard Henderson <rth@twiddle.net>,
 Radoslaw Biernacki <radoslaw.biernacki@linaro.org>,
 Igor Mitsyanko <i.mitsyanko@gmail.com>, Paul Zimmerman <pauldzim@gmail.com>,
 "open list:New World" <qemu-ppc@nongnu.org>,
 David Gibson <david@gibson.dropbear.id.au>,
 Paolo Bonzini <pbonzini@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Sat, Jul 4, 2020 at 7:53 AM Philippe Mathieu-Daudé <f4bug@amsat.org> wrote:
>
> Move definitions only useful for hcd-ehci.c to this source file.
>
> Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>

Reviewed-by: Alistair Francis <alistair.francis@wdc.com>

Alistair

> ---
>  hw/usb/hcd-ehci.h | 11 -----------
>  hw/usb/hcd-ehci.c | 12 ++++++++++++
>  2 files changed, 12 insertions(+), 11 deletions(-)
>
> diff --git a/hw/usb/hcd-ehci.h b/hw/usb/hcd-ehci.h
> index 57b38cfc05..4577f5e31d 100644
> --- a/hw/usb/hcd-ehci.h
> +++ b/hw/usb/hcd-ehci.h
> @@ -24,17 +24,6 @@
>  #include "hw/pci/pci.h"
>  #include "hw/sysbus.h"
>
> -#ifndef EHCI_DEBUG
> -#define EHCI_DEBUG   0
> -#endif
> -
> -#if EHCI_DEBUG
> -#define DPRINTF printf
> -#else
> -#define DPRINTF(...)
> -#endif
> -
> -#define MMIO_SIZE        0x1000
>  #define CAPA_SIZE        0x10
>
>  #define NB_PORTS         6        /* Max. Number of downstream ports */
> diff --git a/hw/usb/hcd-ehci.c b/hw/usb/hcd-ehci.c
> index 256fb91e0c..a0beee527c 100644
> --- a/hw/usb/hcd-ehci.c
> +++ b/hw/usb/hcd-ehci.c
> @@ -36,6 +36,18 @@
>  #include "qemu/error-report.h"
>  #include "sysemu/runstate.h"
>
> +#ifndef EHCI_DEBUG
> +#define EHCI_DEBUG   0
> +#endif
> +
> +#if EHCI_DEBUG
> +#define DPRINTF printf
> +#else
> +#define DPRINTF(...)
> +#endif
> +
> +#define MMIO_SIZE        0x1000
> +
>  #define FRAME_TIMER_FREQ 1000
>  #define FRAME_TIMER_NS   (NANOSECONDS_PER_SECOND / FRAME_TIMER_FREQ)
>  #define UFRAME_TIMER_NS  (FRAME_TIMER_NS / 8)
> --
> 2.21.3
>
>


From xen-devel-bounces@lists.xenproject.org Mon Jul 06 17:10:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jul 2020 17:10:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jsUdh-0007Ab-K1; Mon, 06 Jul 2020 17:10:45 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hiqY=AR=gmail.com=alistair23@srs-us1.protection.inumbo.net>)
 id 1jsUdg-0007A2-CC
 for xen-devel@lists.xenproject.org; Mon, 06 Jul 2020 17:10:44 +0000
X-Inumbo-ID: a04beea0-bfab-11ea-bb8b-bc764e2007e4
Received: from mail-io1-xd41.google.com (unknown [2607:f8b0:4864:20::d41])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a04beea0-bfab-11ea-bb8b-bc764e2007e4;
 Mon, 06 Jul 2020 17:10:43 +0000 (UTC)
Received: by mail-io1-xd41.google.com with SMTP id v6so26479595iob.4
 for <xen-devel@lists.xenproject.org>; Mon, 06 Jul 2020 10:10:43 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc:content-transfer-encoding;
 bh=wZPcNnfJm5MPTasltoq/pBa3iV0LcdhND5nIpKUJi/8=;
 b=k/1aoEljxB0uBEqfqH0Zn4KhFaSEHuRC7puZspsg00ZQnuYbG361wHg3SacgFJDj5D
 tsj3n5BFbxJduCK35tjRhuF2/a9NzvStraAOtyCnDhA1x3ZIDzvxsj0SeVn96GcH66cQ
 +fOPAx/5lIiJ6CUyYV8ORDaUEU0GMk/TmxFz2lsMqQ1ZPLYRTUaOZTHWIAoFD6WwT2w9
 dE0VZxLY1HVMcUMAVCCqzToMHhxxVykCFBb3WPUEtZ8T5PdUIBDhJKXxk1Spjm8NLNHs
 uMj6b4Ox5j2PchTgKufiol5/oFIvZYBrDFwPFhjtdYp+E3sHNnFizOD1mX/9Z5ZYBI8v
 cucQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc:content-transfer-encoding;
 bh=wZPcNnfJm5MPTasltoq/pBa3iV0LcdhND5nIpKUJi/8=;
 b=qqfNdsATDQxUJKXjLyENvCiB56yZ2Zp3rPcGjeXUX3lyAbsZ275RXOMNnpd7PpZ40f
 T5MfXmkCodpaJ2u2dNKBx0fHdWbB6o1al8FpLhqSCQfkZjzQDyH5CVR4YJ0Uwx97zXTz
 r33gDEH1KKcDW534euOljWbaj1YbhX8PS02yl5Bn6t7E5SdPWfkPGQ3DdNcB3ZizaEgn
 TqQNidmeoxGtORrIO7765XGaU1wheuoGyEGsRsj0lCndbV8Y+xUUomnYjXwiZ6c5jpJy
 C6q0FbGLQ1H+Wo3QzXN8Uq4ndHr00Uk2on/PHyCC3uqQOnW79wgchPnQnMt0FVCIWhOI
 OPVA==
X-Gm-Message-State: AOAM533yMP1lviNBXHyTefqAqBytvBtZ9ZQB4heT83QikKudq3NW6uOX
 +r/LNf8MchTXDUDBSahF42PTg8vj3Nlzl68572c=
X-Google-Smtp-Source: ABdhPJx4Y0GqgQR2J49zdJrF6Wimg0enF5qkXxsI0KvUd7/3ziu/gL8IcMOHx3OKEzWPz3QhdGIRQSrp8CAFUbt1qmc=
X-Received: by 2002:a02:6c4c:: with SMTP id w73mr55529406jab.26.1594055443540; 
 Mon, 06 Jul 2020 10:10:43 -0700 (PDT)
MIME-Version: 1.0
References: <20200704144943.18292-1-f4bug@amsat.org>
 <20200704144943.18292-14-f4bug@amsat.org>
In-Reply-To: <20200704144943.18292-14-f4bug@amsat.org>
From: Alistair Francis <alistair23@gmail.com>
Date: Mon, 6 Jul 2020 10:00:56 -0700
Message-ID: <CAKmqyKOcdG_Wv8yectwwHaxmryB9uBKK+GX1ZGtrq7ZCRcRsAw@mail.gmail.com>
Subject: Re: [PATCH 13/26] hw/usb/desc: Reduce some declarations scope
To: =?UTF-8?Q?Philippe_Mathieu=2DDaud=C3=A9?= <f4bug@amsat.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Peter Maydell <peter.maydell@linaro.org>,
 "Michael S. Tsirkin" <mst@redhat.com>,
 Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>,
 "qemu-devel@nongnu.org Developers" <qemu-devel@nongnu.org>,
 BALATON Zoltan <balaton@eik.bme.hu>, Gerd Hoffmann <kraxel@redhat.com>,
 "Edgar E. Iglesias" <edgar.iglesias@gmail.com>,
 Huacai Chen <chenhc@lemote.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Yoshinori Sato <ysato@users.sourceforge.jp>, Paul Durrant <paul@xen.org>,
 Magnus Damm <magnus.damm@gmail.com>, Markus Armbruster <armbru@redhat.com>,
 =?UTF-8?Q?Herv=C3=A9_Poussineau?= <hpoussin@reactos.org>,
 Anthony Perard <anthony.perard@citrix.com>,
 "open list:X86" <xen-devel@lists.xenproject.org>,
 Leif Lindholm <leif@nuviainc.com>,
 =?UTF-8?Q?Philippe_Mathieu=2DDaud=C3=A9?= <philmd@redhat.com>,
 Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
 Eduardo Habkost <ehabkost@redhat.com>,
 Alistair Francis <alistair@alistair23.me>,
 "Dr. David Alan Gilbert" <dgilbert@redhat.com>,
 Beniamino Galvani <b.galvani@gmail.com>,
 =?UTF-8?B?TWFyYy1BbmRyw6kgTHVyZWF1?= <marcandre.lureau@redhat.com>,
 Niek Linnenbank <nieklinnenbank@gmail.com>, qemu-arm <qemu-arm@nongnu.org>,
 Samuel Thibault <samuel.thibault@ens-lyon.org>,
 Richard Henderson <rth@twiddle.net>,
 Radoslaw Biernacki <radoslaw.biernacki@linaro.org>,
 Igor Mitsyanko <i.mitsyanko@gmail.com>, Paul Zimmerman <pauldzim@gmail.com>,
 "open list:New World" <qemu-ppc@nongnu.org>,
 David Gibson <david@gibson.dropbear.id.au>,
 Paolo Bonzini <pbonzini@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Sat, Jul 4, 2020 at 7:59 AM Philippe Mathieu-Daudé <f4bug@amsat.org> wrote:
>
> USBDescString is forward-declared. Only bus.c uses the
> usb_device_get_product_desc() and usb_device_get_usb_desc()
> functions. Move all that to the "desc.h" header to reduce
> the big "hw/usb.h" header a bit.
>
> Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>

Reviewed-by: Alistair Francis <alistair.francis@wdc.com>

Alistair

> ---
>  hw/usb/desc.h    | 10 ++++++++++
>  include/hw/usb.h | 10 ----------
>  hw/usb/bus.c     |  1 +
>  3 files changed, 11 insertions(+), 10 deletions(-)
>
> diff --git a/hw/usb/desc.h b/hw/usb/desc.h
> index 92594fbe29..4bf6966c4b 100644
> --- a/hw/usb/desc.h
> +++ b/hw/usb/desc.h
> @@ -242,4 +242,14 @@ int usb_desc_get_descriptor(USBDevice *dev, USBPacket *p,
>  int usb_desc_handle_control(USBDevice *dev, USBPacket *p,
>          int request, int value, int index, int length, uint8_t *data);
>
> +const char *usb_device_get_product_desc(USBDevice *dev);
> +
> +const USBDesc *usb_device_get_usb_desc(USBDevice *dev);
> +
> +struct USBDescString {
> +    uint8_t index;
> +    char *str;
> +    QLIST_ENTRY(USBDescString) next;
> +};
> +
>  #endif /* QEMU_HW_USB_DESC_H */
> diff --git a/include/hw/usb.h b/include/hw/usb.h
> index 15b2ef300a..18f1349bdc 100644
> --- a/include/hw/usb.h
> +++ b/include/hw/usb.h
> @@ -192,12 +192,6 @@ typedef struct USBDescOther USBDescOther;
>  typedef struct USBDescString USBDescString;
>  typedef struct USBDescMSOS USBDescMSOS;
>
> -struct USBDescString {
> -    uint8_t index;
> -    char *str;
> -    QLIST_ENTRY(USBDescString) next;
> -};
> -
>  #define USB_MAX_ENDPOINTS  15
>  #define USB_MAX_INTERFACES 16
>
> @@ -555,10 +549,6 @@ int usb_device_alloc_streams(USBDevice *dev, USBEndpoint **eps, int nr_eps,
>                               int streams);
>  void usb_device_free_streams(USBDevice *dev, USBEndpoint **eps, int nr_eps);
>
> -const char *usb_device_get_product_desc(USBDevice *dev);
> -
> -const USBDesc *usb_device_get_usb_desc(USBDevice *dev);
> -
>  /* quirks.c */
>
>  /* In bulk endpoints are streaming data sources (iow behave like isoc eps) */
> diff --git a/hw/usb/bus.c b/hw/usb/bus.c
> index 957559b18d..111c3af7c1 100644
> --- a/hw/usb/bus.c
> +++ b/hw/usb/bus.c
> @@ -9,6 +9,7 @@
>  #include "monitor/monitor.h"
>  #include "trace.h"
>  #include "qemu/cutils.h"
> +#include "desc.h"
>
>  static void usb_bus_dev_print(Monitor *mon, DeviceState *qdev, int indent);
>
> --
> 2.21.3
>
>


From xen-devel-bounces@lists.xenproject.org Mon Jul 06 17:12:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jul 2020 17:12:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jsUf3-0007I3-Uu; Mon, 06 Jul 2020 17:12:09 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hiqY=AR=gmail.com=alistair23@srs-us1.protection.inumbo.net>)
 id 1jsUf2-0007Hx-FA
 for xen-devel@lists.xenproject.org; Mon, 06 Jul 2020 17:12:08 +0000
X-Inumbo-ID: d2631c9c-bfab-11ea-bca7-bc764e2007e4
Received: from mail-io1-xd41.google.com (unknown [2607:f8b0:4864:20::d41])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d2631c9c-bfab-11ea-bca7-bc764e2007e4;
 Mon, 06 Jul 2020 17:12:07 +0000 (UTC)
Received: by mail-io1-xd41.google.com with SMTP id v6so26484838iob.4
 for <xen-devel@lists.xenproject.org>; Mon, 06 Jul 2020 10:12:07 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc:content-transfer-encoding;
 bh=P0d8Iq06s1RxhBs4I7BZ5yRhluIYSvpMY7wkvFtI2Xg=;
 b=Y+NfzrMhVjazx3jBHloldv5qQ1eAAl6uiOiG7IwSb4NMG6+903IYIYU0tSMwa8mrC3
 cTQQBM94OvCgQ7efQJmM6schD+rLNB+ozc5/uJgypYi55rBKWT+NxRBPTgvuL6eWEhhE
 7rUuX7OsT4URw2jWCniwP0L1yU131oOJcE/YnIXdhlnn6n4G1LcX+iI6fNOgQFNSh157
 m2EprNayyIy7Rbewu0VeNMbCXbX+6iCdEKOcr8WRTrTBw4Gh4m+fC7fmitA3XMXG66WL
 XqFVF5UEWj2WlfQevGi6yVpV9hZNiy39mq2dQqoo5OhqyH5Quk+sfoOIKSNr3g+956up
 gJjw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc:content-transfer-encoding;
 bh=P0d8Iq06s1RxhBs4I7BZ5yRhluIYSvpMY7wkvFtI2Xg=;
 b=PcHutAMYEC+hmZIJHa537NTwe2CfAUCfRa4JR7BanBZMTPGMEvdM9KSKZTVMw/J/Al
 xtcJxGuiXenhef0Z1srinl0xh7vHaJ3uuQi/v6Ay6vx7pgpRc5nc3LwlZxaulS4xc2Ni
 ljf3HMne1xf70aSrVYLk6zB2adKPFtZ5fF6yuCahaM//bSE9/tXoAd0+zw4nqWDP/Usb
 WiSPt5qgJDmKO1KZK9kAeVAngl0vo5GG88oun4Sk/WHB60lf5tDnU+TikJxN/K0xy2k2
 BcLkVHjAkzNO2rl/YFZ/jS66dYo+jltalxoX9X/Tx0jW3t3VKfjiEyPH3cVpVkGkJlX3
 hBbg==
X-Gm-Message-State: AOAM530abkzsM16oXE1ZEqY+2yPps7uU0m3oszFzn5kpR2yH0pId0epx
 nReIc7hGpnYSuOUsxfaN4+BqWgTcX3LDrjBj1ls=
X-Google-Smtp-Source: ABdhPJwQ4Y5SU2PHxNBgM6arl9mZiS4VMzkN+WtIMsTLI4VHjyaJG6nzbQPzYC2Wzjh5CnPd7DP0E6zy1bY/U6O6MAw=
X-Received: by 2002:a5d:9306:: with SMTP id l6mr26961459ion.105.1594055527579; 
 Mon, 06 Jul 2020 10:12:07 -0700 (PDT)
MIME-Version: 1.0
References: <20200704144943.18292-1-f4bug@amsat.org>
 <20200704144943.18292-15-f4bug@amsat.org>
In-Reply-To: <20200704144943.18292-15-f4bug@amsat.org>
From: Alistair Francis <alistair23@gmail.com>
Date: Mon, 6 Jul 2020 10:02:20 -0700
Message-ID: <CAKmqyKNYrB+BxS41KFWhYc2wXetJH4sJ_18kEDcaYPzu87r49A@mail.gmail.com>
Subject: Re: [PATCH 14/26] hw/usb/quirks: Rename included source with '.inc.c'
 suffix
To: =?UTF-8?Q?Philippe_Mathieu=2DDaud=C3=A9?= <f4bug@amsat.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Peter Maydell <peter.maydell@linaro.org>,
 "Michael S. Tsirkin" <mst@redhat.com>,
 Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>,
 "qemu-devel@nongnu.org Developers" <qemu-devel@nongnu.org>,
 BALATON Zoltan <balaton@eik.bme.hu>, Gerd Hoffmann <kraxel@redhat.com>,
 "Edgar E. Iglesias" <edgar.iglesias@gmail.com>,
 Huacai Chen <chenhc@lemote.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Yoshinori Sato <ysato@users.sourceforge.jp>, Paul Durrant <paul@xen.org>,
 Magnus Damm <magnus.damm@gmail.com>, Markus Armbruster <armbru@redhat.com>,
 =?UTF-8?Q?Herv=C3=A9_Poussineau?= <hpoussin@reactos.org>,
 Anthony Perard <anthony.perard@citrix.com>,
 "open list:X86" <xen-devel@lists.xenproject.org>,
 Leif Lindholm <leif@nuviainc.com>,
 =?UTF-8?Q?Philippe_Mathieu=2DDaud=C3=A9?= <philmd@redhat.com>,
 Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
 Eduardo Habkost <ehabkost@redhat.com>,
 Alistair Francis <alistair@alistair23.me>,
 "Dr. David Alan Gilbert" <dgilbert@redhat.com>,
 Beniamino Galvani <b.galvani@gmail.com>,
 =?UTF-8?B?TWFyYy1BbmRyw6kgTHVyZWF1?= <marcandre.lureau@redhat.com>,
 Niek Linnenbank <nieklinnenbank@gmail.com>, qemu-arm <qemu-arm@nongnu.org>,
 Samuel Thibault <samuel.thibault@ens-lyon.org>,
 Richard Henderson <rth@twiddle.net>,
 Radoslaw Biernacki <radoslaw.biernacki@linaro.org>,
 Igor Mitsyanko <i.mitsyanko@gmail.com>, Paul Zimmerman <pauldzim@gmail.com>,
 "open list:New World" <qemu-ppc@nongnu.org>,
 David Gibson <david@gibson.dropbear.id.au>,
 Paolo Bonzini <pbonzini@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Sat, Jul 4, 2020 at 7:58 AM Philippe Mathieu-Daudé <f4bug@amsat.org> wrote:
>
> This file is not a header, but contains source code which is
> included and compiled once. We use the '.inc.c' suffix in a few
> other cases in the repository. Follow the same convention with
> this file.
>
> Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>

Reviewed-by: Alistair Francis <alistair.francis@wdc.com>

Alistair

> ---
>  hw/usb/quirks.c                   | 2 +-
>  hw/usb/{quirks.h => quirks.inc.c} | 5 -----
>  2 files changed, 1 insertion(+), 6 deletions(-)
>  rename hw/usb/{quirks.h => quirks.inc.c} (99%)
>
> diff --git a/hw/usb/quirks.c b/hw/usb/quirks.c
> index 23ea7a23ea..655b36f2d5 100644
> --- a/hw/usb/quirks.c
> +++ b/hw/usb/quirks.c
> @@ -13,7 +13,7 @@
>   */
>
>  #include "qemu/osdep.h"
> -#include "quirks.h"
> +#include "quirks.inc.c"
>  #include "hw/usb.h"
>
>  static bool usb_id_match(const struct usb_device_id *ids,
> diff --git a/hw/usb/quirks.h b/hw/usb/quirks.inc.c
> similarity index 99%
> rename from hw/usb/quirks.h
> rename to hw/usb/quirks.inc.c
> index 50ef2f9c2e..004b228aba 100644
> --- a/hw/usb/quirks.h
> +++ b/hw/usb/quirks.inc.c
> @@ -12,9 +12,6 @@
>   * (at your option) any later version.
>   */
>
> -#ifndef HW_USB_QUIRKS_H
> -#define HW_USB_QUIRKS_H
> -
>  /* 1 on 1 copy of linux/drivers/usb/serial/ftdi_sio_ids.h */
>  #include "quirks-ftdi-ids.h"
>  /* 1 on 1 copy of linux/drivers/usb/serial/pl2303.h */
> @@ -915,5 +912,3 @@ static const struct usb_device_id usbredir_ftdi_serial_ids[] = {
>
>  #undef USB_DEVICE
>  #undef USB_DEVICE_AND_INTERFACE_INFO
> -
> -#endif
> --
> 2.21.3
>
>


From xen-devel-bounces@lists.xenproject.org Mon Jul 06 17:20:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jul 2020 17:20:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jsUnB-00089w-RO; Mon, 06 Jul 2020 17:20:33 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DFth=AR=redhat.com=dinechin@srs-us1.protection.inumbo.net>)
 id 1jsTua-000292-GM
 for xen-devel@lists.xenproject.org; Mon, 06 Jul 2020 16:24:08 +0000
X-Inumbo-ID: 1acd4054-bfa5-11ea-8ca6-12813bfff9fa
Received: from us-smtp-delivery-1.mimecast.com (unknown [207.211.31.81])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 1acd4054-bfa5-11ea-8ca6-12813bfff9fa;
 Mon, 06 Jul 2020 16:24:02 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
 s=mimecast20190719; t=1594052642;
 h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
 content-transfer-encoding:content-transfer-encoding;
 bh=ZCA2gGWF3bCK1LqpKSjzwm486Ts148V43QBc50Fd0Rw=;
 b=O0xClhx/jOvl1wYl2i3Cs4TO72eAwcrhxZHu04ZuQBGm7dUdNEzCzL1gEnpyhZXIAoNpdA
 fd18Qc0aVtKk6BU+gz1FbqJcJVKpDeefQEKjkaJCX+q+KZV2mHrSwfsjoNPOuKWEDASF/l
 BV44FjHq56JtdYMOOdZ7H6KT9pqypFI=
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-305-5WED9vpgMC6y2I-1xr4ZjA-1; Mon, 06 Jul 2020 12:23:57 -0400
X-MC-Unique: 5WED9vpgMC6y2I-1xr4ZjA-1
Received: from smtp.corp.redhat.com (int-mx04.intmail.prod.int.phx2.redhat.com
 [10.5.11.14])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 4C625BFE2;
 Mon,  6 Jul 2020 16:23:49 +0000 (UTC)
Received: from turbo.com (ovpn-114-213.ams2.redhat.com [10.36.114.213])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 05D4078124;
 Mon,  6 Jul 2020 16:23:03 +0000 (UTC)
From: Christophe de Dinechin <dinechin@redhat.com>
To: qemu-devel@nongnu.org
Subject: [PATCH] trivial: Remove trailing whitespaces
Date: Mon,  6 Jul 2020 18:23:00 +0200
Message-Id: <20200706162300.1084753-1-dinechin@redhat.com>
MIME-Version: 1.0
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.14
Authentication-Results: relay.mimecast.com;
 auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=dinechin@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 8bit
X-Mailman-Approved-At: Mon, 06 Jul 2020 17:20:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Peter Maydell <peter.maydell@linaro.org>,
 Dmitry Fleytman <dmitry.fleytman@gmail.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>,
 Michael Roth <mdroth@linux.vnet.ibm.com>, Max Filippov <jcmvbkbc@gmail.com>,
 Gerd Hoffmann <kraxel@redhat.com>,
 "Edgar E. Iglesias" <edgar.iglesias@gmail.com>, Max Reitz <mreitz@redhat.com>,
 Marek Vasut <marex@denx.de>, Stefano Stabellini <sstabellini@kernel.org>,
 qemu-block@nongnu.org, qemu-trivial@nongnu.org, Paul Durrant <paul@xen.org>,
 Magnus Damm <magnus.damm@gmail.com>, Markus Armbruster <armbru@redhat.com>,
 =?UTF-8?q?Herv=C3=A9=20Poussineau?= <hpoussin@reactos.org>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 Anthony Perard <anthony.perard@citrix.com>,
 =?UTF-8?q?Marc-Andr=C3=A9=20Lureau?= <marcandre.lureau@redhat.com>,
 Richard Henderson <rth@twiddle.net>, Andrzej Zaborowski <balrogg@gmail.com>,
 Artyom Tarasenko <atar4qemu@gmail.com>,
 Alistair Francis <alistair@alistair23.me>,
 Eduardo Habkost <ehabkost@redhat.com>, Michael Tokarev <mjt@tls.msk.ru>,
 Riku Voipio <riku.voipio@iki.fi>, Peter Lieven <pl@kamp.de>,
 "Dr. David Alan Gilbert" <dgilbert@redhat.com>,
 Roman Bolshakov <r.bolshakov@yadro.com>, qemu-arm@nongnu.org,
 Peter Chubb <peter.chubb@nicta.com.au>,
 Ronnie Sahlberg <ronniesahlberg@gmail.com>, xen-devel@lists.xenproject.org,
 =?UTF-8?q?Alex=20Benn=C3=A9e?= <alex.bennee@linaro.org>,
 David Gibson <david@gibson.dropbear.id.au>, Kevin Wolf <kwolf@redhat.com>,
 =?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
 Yoshinori Sato <ysato@users.sourceforge.jp>,
 =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>,
 Chris Wulff <crwulff@gmail.com>, Laurent Vivier <laurent@vivier.eu>,
 Jean-Christophe Dubois <jcd@tribudubois.net>, qemu-ppc@nongnu.org,
 Paolo Bonzini <pbonzini@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

There are a number of unnecessary trailing whitespace characters that
have accumulated over time in the source code. They cause stray changes
in patches if you use tools that automatically remove them.

Tested by doing a `git diff -w` after the change.

This could probably be turned into a pre-commit hook.

Signed-off-by: Christophe de Dinechin <dinechin@redhat.com>
---
 block/iscsi.c                                 |   2 +-
 disas/cris.c                                  |   2 +-
 disas/microblaze.c                            |  80 +++---
 disas/nios2.c                                 | 256 +++++++++---------
 hmp-commands.hx                               |   2 +-
 hw/alpha/typhoon.c                            |   6 +-
 hw/arm/gumstix.c                              |   6 +-
 hw/arm/omap1.c                                |   2 +-
 hw/arm/stellaris.c                            |   2 +-
 hw/char/etraxfs_ser.c                         |   2 +-
 hw/core/ptimer.c                              |   2 +-
 hw/cris/axis_dev88.c                          |   2 +-
 hw/cris/boot.c                                |   2 +-
 hw/display/qxl.c                              |   2 +-
 hw/dma/etraxfs_dma.c                          |  18 +-
 hw/dma/i82374.c                               |   2 +-
 hw/i2c/bitbang_i2c.c                          |   2 +-
 hw/input/tsc2005.c                            |   2 +-
 hw/input/tsc210x.c                            |   2 +-
 hw/intc/etraxfs_pic.c                         |   8 +-
 hw/intc/sh_intc.c                             |  10 +-
 hw/intc/xilinx_intc.c                         |   2 +-
 hw/misc/imx25_ccm.c                           |   6 +-
 hw/misc/imx31_ccm.c                           |   2 +-
 hw/net/vmxnet3.h                              |   2 +-
 hw/net/xilinx_ethlite.c                       |   2 +-
 hw/pci/pcie.c                                 |   2 +-
 hw/sd/omap_mmc.c                              |   2 +-
 hw/sh4/shix.c                                 |   2 +-
 hw/sparc64/sun4u.c                            |   2 +-
 hw/timer/etraxfs_timer.c                      |   2 +-
 hw/timer/xilinx_timer.c                       |   4 +-
 hw/usb/hcd-musb.c                             |  10 +-
 hw/usb/hcd-ohci.c                             |   6 +-
 hw/usb/hcd-uhci.c                             |   2 +-
 hw/virtio/virtio-pci.c                        |   2 +-
 include/hw/cris/etraxfs_dma.h                 |   4 +-
 include/hw/net/lance.h                        |   2 +-
 include/hw/ppc/spapr.h                        |   2 +-
 include/hw/xen/interface/io/ring.h            |  34 +--
 include/qemu/log.h                            |   2 +-
 include/qom/object.h                          |   4 +-
 linux-user/cris/cpu_loop.c                    |  16 +-
 linux-user/microblaze/cpu_loop.c              |  16 +-
 linux-user/mmap.c                             |   8 +-
 linux-user/sparc/signal.c                     |   4 +-
 linux-user/syscall.c                          |  24 +-
 linux-user/syscall_defs.h                     |   2 +-
 linux-user/uaccess.c                          |   2 +-
 os-posix.c                                    |   2 +-
 qapi/qapi-util.c                              |   2 +-
 qemu-img.c                                    |   2 +-
 qemu-options.hx                               |  26 +-
 qom/object.c                                  |   2 +-
 target/cris/translate.c                       |  28 +-
 target/cris/translate_v10.inc.c               |   6 +-
 target/i386/hvf/hvf.c                         |   4 +-
 target/i386/hvf/x86.c                         |   4 +-
 target/i386/hvf/x86_decode.c                  |  20 +-
 target/i386/hvf/x86_decode.h                  |   4 +-
 target/i386/hvf/x86_descr.c                   |   2 +-
 target/i386/hvf/x86_emu.c                     |   2 +-
 target/i386/hvf/x86_mmu.c                     |   6 +-
 target/i386/hvf/x86_task.c                    |   2 +-
 target/i386/hvf/x86hvf.c                      |  42 +--
 target/i386/translate.c                       |   8 +-
 target/microblaze/mmu.c                       |   2 +-
 target/microblaze/translate.c                 |   2 +-
 target/sh4/op_helper.c                        |   4 +-
 target/xtensa/core-de212/core-isa.h           |   6 +-
 .../xtensa/core-sample_controller/core-isa.h  |   6 +-
 target/xtensa/core-test_kc705_be/core-isa.h   |   2 +-
 tcg/sparc/tcg-target.inc.c                    |   2 +-
 tcg/tcg.c                                     |  32 +--
 tests/tcg/multiarch/test-mmap.c               |  72 ++---
 ui/curses.c                                   |   4 +-
 ui/curses_keys.h                              |   4 +-
 util/cutils.c                                 |   2 +-
 78 files changed, 440 insertions(+), 440 deletions(-)

diff --git a/block/iscsi.c b/block/iscsi.c
index a8b76979d8..884075f4e1 100644
--- a/block/iscsi.c
+++ b/block/iscsi.c
@@ -1412,7 +1412,7 @@ static void iscsi_readcapacity_sync(IscsiLun *iscsilun, Error **errp)
     struct scsi_task *task = NULL;
     struct scsi_readcapacity10 *rc10 = NULL;
     struct scsi_readcapacity16 *rc16 = NULL;
-    int retries = ISCSI_CMD_RETRIES; 
+    int retries = ISCSI_CMD_RETRIES;
 
     do {
         if (task != NULL) {
diff --git a/disas/cris.c b/disas/cris.c
index 0b0a3fb916..a2be8f1412 100644
--- a/disas/cris.c
+++ b/disas/cris.c
@@ -2569,7 +2569,7 @@ print_insn_cris_generic (bfd_vma memaddr,
   nbytes = info->buffer_length ? info->buffer_length
                                : MAX_BYTES_PER_CRIS_INSN;
   nbytes = MIN(nbytes, MAX_BYTES_PER_CRIS_INSN);
-  status = (*info->read_memory_func) (memaddr, buffer, nbytes, info);  
+  status = (*info->read_memory_func) (memaddr, buffer, nbytes, info);
 
   /* If we did not get all we asked for, then clear the rest.
      Hopefully this makes a reproducible result in case of errors.  */
diff --git a/disas/microblaze.c b/disas/microblaze.c
index 0b89b9c4fa..6de66532f5 100644
--- a/disas/microblaze.c
+++ b/disas/microblaze.c
@@ -15,15 +15,15 @@ You should have received a copy of the GNU General Public License
 along with this program; if not, see <http://www.gnu.org/licenses/>. */
 
 /*
- * Copyright (c) 2001 Xilinx, Inc.  All rights reserved. 
+ * Copyright (c) 2001 Xilinx, Inc.  All rights reserved.
  *
  * Redistribution and use in source and binary forms are permitted
  * provided that the above copyright notice and this paragraph are
  * duplicated in all such forms and that any documentation,
  * advertising materials, and other materials related to such
  * distribution and use acknowledge that the software was developed
- * by Xilinx, Inc.  The name of the Company may not be used to endorse 
- * or promote products derived from this software without specific prior 
+ * by Xilinx, Inc.  The name of the Company may not be used to endorse
+ * or promote products derived from this software without specific prior
  * written permission.
  * THIS SOFTWARE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
  * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
@@ -42,7 +42,7 @@ along with this program; if not, see <http://www.gnu.org/licenses/>. */
 /* Assembler instructions for Xilinx's microblaze processor
    Copyright (C) 1999, 2000 Free Software Foundation, Inc.
 
-   
+
 This program is free software; you can redistribute it and/or modify
 it under the terms of the GNU General Public License as published by
 the Free Software Foundation; either version 2 of the License, or
@@ -57,15 +57,15 @@ You should have received a copy of the GNU General Public License
 along with this program; if not, see <http://www.gnu.org/licenses/>.  */
 
 /*
- * Copyright (c) 2001 Xilinx, Inc.  All rights reserved. 
+ * Copyright (c) 2001 Xilinx, Inc.  All rights reserved.
  *
  * Redistribution and use in source and binary forms are permitted
  * provided that the above copyright notice and this paragraph are
  * duplicated in all such forms and that any documentation,
  * advertising materials, and other materials related to such
  * distribution and use acknowledge that the software was developed
- * by Xilinx, Inc.  The name of the Company may not be used to endorse 
- * or promote products derived from this software without specific prior 
+ * by Xilinx, Inc.  The name of the Company may not be used to endorse
+ * or promote products derived from this software without specific prior
  * written permission.
  * THIS SOFTWARE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
  * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
@@ -79,15 +79,15 @@ along with this program; if not, see <http://www.gnu.org/licenses/>.  */
 #define MICROBLAZE_OPCM
 
 /*
- * Copyright (c) 2001 Xilinx, Inc.  All rights reserved. 
+ * Copyright (c) 2001 Xilinx, Inc.  All rights reserved.
  *
  * Redistribution and use in source and binary forms are permitted
  * provided that the above copyright notice and this paragraph are
  * duplicated in all such forms and that any documentation,
  * advertising materials, and other materials related to such
  * distribution and use acknowledge that the software was developed
- * by Xilinx, Inc.  The name of the Company may not be used to endorse 
- * or promote products derived from this software without specific prior 
+ * by Xilinx, Inc.  The name of the Company may not be used to endorse
+ * or promote products derived from this software without specific prior
  * written permission.
  * THIS SOFTWARE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
  * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
@@ -108,8 +108,8 @@ enum microblaze_instr {
    imm, rtsd, rtid, rtbd, rted, bri, brid, brlid, brai, braid, bralid,
    brki, beqi, beqid, bnei, bneid, blti, bltid, blei, bleid, bgti,
    bgtid, bgei, bgeid, lbu, lhu, lw, lwx, sb, sh, sw, swx, lbui, lhui, lwi,
-   sbi, shi, swi, msrset, msrclr, tuqula, fadd, frsub, fmul, fdiv, 
-   fcmp_lt, fcmp_eq, fcmp_le, fcmp_gt, fcmp_ne, fcmp_ge, fcmp_un, flt, fint, fsqrt, 
+   sbi, shi, swi, msrset, msrclr, tuqula, fadd, frsub, fmul, fdiv,
+   fcmp_lt, fcmp_eq, fcmp_le, fcmp_gt, fcmp_ne, fcmp_ge, fcmp_un, flt, fint, fsqrt,
    tget, tcget, tnget, tncget, tput, tcput, tnput, tncput,
    eget, ecget, neget, necget, eput, ecput, neput, necput,
    teget, tecget, tneget, tnecget, teput, tecput, tneput, tnecput,
@@ -182,7 +182,7 @@ enum microblaze_instr_type {
 /* Assembler Register - Used in Delay Slot Optimization */
 #define REG_AS    18
 #define REG_ZERO  0
- 
+
 #define RD_LOW  21 /* low bit for RD */
 #define RA_LOW  16 /* low bit for RA */
 #define RB_LOW  11 /* low bit for RB */
@@ -258,7 +258,7 @@ enum microblaze_instr_type {
 #define OPCODE_MASK_H24 0xFC1F07FF /* High 6, bits 20-16 and low 11 bits */
 #define OPCODE_MASK_H124  0xFFFF07FF /* High 16, and low 11 bits */
 #define OPCODE_MASK_H1234 0xFFFFFFFF /* All 32 bits */
-#define OPCODE_MASK_H3  0xFC000600 /* High 6 bits and bits 21, 22 */  
+#define OPCODE_MASK_H3  0xFC000600 /* High 6 bits and bits 21, 22 */
 #define OPCODE_MASK_H32 0xFC00FC00 /* High 6 bits and bit 16-21 */
 #define OPCODE_MASK_H34B   0xFC0000FF /* High 6 bits and low 8 bits */
 #define OPCODE_MASK_H34C   0xFC0007E0 /* High 6 bits and bits 21-26 */
@@ -277,14 +277,14 @@ static const struct op_code_struct {
   short inst_offset_type; /* immediate vals offset from PC? (= 1 for branches) */
   short delay_slots; /* info about delay slots needed after this instr. */
   short immval_mask;
-  unsigned long bit_sequence; /* all the fixed bits for the op are set and all the variable bits (reg names, imm vals) are set to 0 */ 
+  unsigned long bit_sequence; /* all the fixed bits for the op are set and all the variable bits (reg names, imm vals) are set to 0 */
   unsigned long opcode_mask; /* which bits define the opcode */
   enum microblaze_instr instr;
   enum microblaze_instr_type instr_type;
   /* more info about output format here */
-} opcodes[MAX_OPCODES] = 
+} opcodes[MAX_OPCODES] =
 
-{ 
+{
   {"add",   INST_TYPE_RD_R1_R2, INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x00000000, OPCODE_MASK_H4, add, arithmetic_inst },
   {"rsub",  INST_TYPE_RD_R1_R2, INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x04000000, OPCODE_MASK_H4, rsub, arithmetic_inst },
   {"addc",  INST_TYPE_RD_R1_R2, INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x08000000, OPCODE_MASK_H4, addc, arithmetic_inst },
@@ -437,7 +437,7 @@ static const struct op_code_struct {
   {"tcput",  INST_TYPE_RFSL,    INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x6C00B000, OPCODE_MASK_H32, tcput,  anyware_inst },
   {"tnput",  INST_TYPE_RFSL,    INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x6C00D000, OPCODE_MASK_H32, tnput,  anyware_inst },
   {"tncput", INST_TYPE_RFSL,    INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x6C00F000, OPCODE_MASK_H32, tncput, anyware_inst },
- 
+
   {"eget",   INST_TYPE_RD_RFSL,  INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x6C000400, OPCODE_MASK_H32, eget,   anyware_inst },
   {"ecget",  INST_TYPE_RD_RFSL,  INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x6C002400, OPCODE_MASK_H32, ecget,  anyware_inst },
   {"neget",  INST_TYPE_RD_RFSL,  INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x6C004400, OPCODE_MASK_H32, neget,  anyware_inst },
@@ -446,7 +446,7 @@ static const struct op_code_struct {
   {"ecput",  INST_TYPE_R1_RFSL,  INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x6C00A400, OPCODE_MASK_H32, ecput,  anyware_inst },
   {"neput",  INST_TYPE_R1_RFSL,  INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x6C00C400, OPCODE_MASK_H32, neput,  anyware_inst },
   {"necput", INST_TYPE_R1_RFSL,  INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x6C00E400, OPCODE_MASK_H32, necput, anyware_inst },
- 
+
   {"teget",   INST_TYPE_RD_RFSL, INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x6C001400, OPCODE_MASK_H32, teget,   anyware_inst },
   {"tecget",  INST_TYPE_RD_RFSL, INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x6C003400, OPCODE_MASK_H32, tecget,  anyware_inst },
   {"tneget",  INST_TYPE_RD_RFSL, INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x6C005400, OPCODE_MASK_H32, tneget,  anyware_inst },
@@ -455,7 +455,7 @@ static const struct op_code_struct {
   {"tecput",  INST_TYPE_RFSL,    INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x6C00B400, OPCODE_MASK_H32, tecput,  anyware_inst },
   {"tneput",  INST_TYPE_RFSL,    INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x6C00D400, OPCODE_MASK_H32, tneput,  anyware_inst },
   {"tnecput", INST_TYPE_RFSL,    INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x6C00F400, OPCODE_MASK_H32, tnecput, anyware_inst },
- 
+
   {"aget",   INST_TYPE_RD_RFSL,  INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x6C000800, OPCODE_MASK_H32, aget,   anyware_inst },
   {"caget",  INST_TYPE_RD_RFSL,  INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x6C002800, OPCODE_MASK_H32, caget,  anyware_inst },
   {"naget",  INST_TYPE_RD_RFSL,  INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x6C004800, OPCODE_MASK_H32, naget,  anyware_inst },
@@ -464,7 +464,7 @@ static const struct op_code_struct {
   {"caput",  INST_TYPE_R1_RFSL,  INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x6C00A800, OPCODE_MASK_H32, caput,  anyware_inst },
   {"naput",  INST_TYPE_R1_RFSL,  INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x6C00C800, OPCODE_MASK_H32, naput,  anyware_inst },
   {"ncaput", INST_TYPE_R1_RFSL,  INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x6C00E800, OPCODE_MASK_H32, ncaput, anyware_inst },
- 
+
   {"taget",   INST_TYPE_RD_RFSL, INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x6C001800, OPCODE_MASK_H32, taget,   anyware_inst },
   {"tcaget",  INST_TYPE_RD_RFSL, INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x6C003800, OPCODE_MASK_H32, tcaget,  anyware_inst },
   {"tnaget",  INST_TYPE_RD_RFSL, INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x6C005800, OPCODE_MASK_H32, tnaget,  anyware_inst },
@@ -473,7 +473,7 @@ static const struct op_code_struct {
   {"tcaput",  INST_TYPE_RFSL,    INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x6C00B800, OPCODE_MASK_H32, tcaput,  anyware_inst },
   {"tnaput",  INST_TYPE_RFSL,    INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x6C00D800, OPCODE_MASK_H32, tnaput,  anyware_inst },
   {"tncaput", INST_TYPE_RFSL,    INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x6C00F800, OPCODE_MASK_H32, tncaput, anyware_inst },
- 
+
   {"eaget",   INST_TYPE_RD_RFSL,  INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x6C000C00, OPCODE_MASK_H32, eget,   anyware_inst },
   {"ecaget",  INST_TYPE_RD_RFSL,  INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x6C002C00, OPCODE_MASK_H32, ecget,  anyware_inst },
   {"neaget",  INST_TYPE_RD_RFSL,  INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x6C004C00, OPCODE_MASK_H32, neget,  anyware_inst },
@@ -482,7 +482,7 @@ static const struct op_code_struct {
   {"ecaput",  INST_TYPE_R1_RFSL,  INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x6C00AC00, OPCODE_MASK_H32, ecput,  anyware_inst },
   {"neaput",  INST_TYPE_R1_RFSL,  INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x6C00CC00, OPCODE_MASK_H32, neput,  anyware_inst },
   {"necaput", INST_TYPE_R1_RFSL,  INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x6C00EC00, OPCODE_MASK_H32, necput, anyware_inst },
- 
+
   {"teaget",   INST_TYPE_RD_RFSL, INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x6C001C00, OPCODE_MASK_H32, teaget,   anyware_inst },
   {"tecaget",  INST_TYPE_RD_RFSL, INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x6C003C00, OPCODE_MASK_H32, tecaget,  anyware_inst },
   {"tneaget",  INST_TYPE_RD_RFSL, INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x6C005C00, OPCODE_MASK_H32, tneaget,  anyware_inst },
@@ -491,7 +491,7 @@ static const struct op_code_struct {
   {"tecaput",  INST_TYPE_RFSL,    INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x6C00BC00, OPCODE_MASK_H32, tecaput,  anyware_inst },
   {"tneaput",  INST_TYPE_RFSL,    INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x6C00DC00, OPCODE_MASK_H32, tneaput,  anyware_inst },
   {"tnecaput", INST_TYPE_RFSL,    INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x6C00FC00, OPCODE_MASK_H32, tnecaput, anyware_inst },
- 
+
   {"getd",    INST_TYPE_RD_R2, INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x4C000000, OPCODE_MASK_H34C, getd,    anyware_inst },
   {"tgetd",   INST_TYPE_RD_R2, INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x4C000080, OPCODE_MASK_H34C, tgetd,   anyware_inst },
   {"cgetd",   INST_TYPE_RD_R2, INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x4C000100, OPCODE_MASK_H34C, cgetd,   anyware_inst },
@@ -508,7 +508,7 @@ static const struct op_code_struct {
   {"tnputd",  INST_TYPE_R2,    INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x4C000680, OPCODE_MASK_H34C, tnputd,  anyware_inst },
   {"ncputd",  INST_TYPE_R1_R2, INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x4C000700, OPCODE_MASK_H34C, ncputd,  anyware_inst },
   {"tncputd", INST_TYPE_R2,    INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x4C000780, OPCODE_MASK_H34C, tncputd, anyware_inst },
- 
+
   {"egetd",    INST_TYPE_RD_R2, INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x4C000020, OPCODE_MASK_H34C, egetd,    anyware_inst },
   {"tegetd",   INST_TYPE_RD_R2, INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x4C0000A0, OPCODE_MASK_H34C, tegetd,   anyware_inst },
   {"ecgetd",   INST_TYPE_RD_R2, INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x4C000120, OPCODE_MASK_H34C, ecgetd,   anyware_inst },
@@ -525,7 +525,7 @@ static const struct op_code_struct {
   {"tneputd",  INST_TYPE_R2,    INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x4C0006A0, OPCODE_MASK_H34C, tneputd,  anyware_inst },
   {"necputd",  INST_TYPE_R1_R2, INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x4C000720, OPCODE_MASK_H34C, necputd,  anyware_inst },
   {"tnecputd", INST_TYPE_R2,    INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x4C0007A0, OPCODE_MASK_H34C, tnecputd, anyware_inst },
- 
+
   {"agetd",    INST_TYPE_RD_R2, INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x4C000040, OPCODE_MASK_H34C, agetd,    anyware_inst },
   {"tagetd",   INST_TYPE_RD_R2, INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x4C0000C0, OPCODE_MASK_H34C, tagetd,   anyware_inst },
   {"cagetd",   INST_TYPE_RD_R2, INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x4C000140, OPCODE_MASK_H34C, cagetd,   anyware_inst },
@@ -542,7 +542,7 @@ static const struct op_code_struct {
   {"tnaputd",  INST_TYPE_R2,    INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x4C0006C0, OPCODE_MASK_H34C, tnaputd,  anyware_inst },
   {"ncaputd",  INST_TYPE_R1_R2, INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x4C000740, OPCODE_MASK_H34C, ncaputd,  anyware_inst },
   {"tncaputd", INST_TYPE_R2,    INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x4C0007C0, OPCODE_MASK_H34C, tncaputd, anyware_inst },
- 
+
   {"eagetd",    INST_TYPE_RD_R2, INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x4C000060, OPCODE_MASK_H34C, eagetd,    anyware_inst },
   {"teagetd",   INST_TYPE_RD_R2, INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x4C0000E0, OPCODE_MASK_H34C, teagetd,   anyware_inst },
   {"ecagetd",   INST_TYPE_RD_R2, INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x4C000160, OPCODE_MASK_H34C, ecagetd,   anyware_inst },
@@ -648,13 +648,13 @@ get_field_unsigned_imm (long instr)
 
 /*
   char *
-  get_field_special (instr) 
+  get_field_special (instr)
   long instr;
   {
   char tmpstr[25];
-  
+
   sprintf(tmpstr, "%s%s", register_prefix, (((instr & IMM_MASK) >> IMM_LOW) & REG_MSR_MASK) == 0 ? "pc" : "msr");
-  
+
   return(strdup(tmpstr));
   }
 */
@@ -684,7 +684,7 @@ get_field_special(long instr, const struct op_code_struct *op)
       break;
    case REG_BTR_MASK :
       strcpy(spr, "btr");
-      break;      
+      break;
    case REG_EDR_MASK :
       strcpy(spr, "edr");
       break;
@@ -719,13 +719,13 @@ get_field_special(long instr, const struct op_code_struct *op)
      }
      break;
    }
-   
+
    sprintf(tmpstr, "%s%s", register_prefix, spr);
    return(strdup(tmpstr));
 }
 
 static unsigned long
-read_insn_microblaze (bfd_vma memaddr, 
+read_insn_microblaze (bfd_vma memaddr,
 		      struct disassemble_info *info,
 		      const struct op_code_struct **opr)
 {
@@ -736,7 +736,7 @@ read_insn_microblaze (bfd_vma memaddr,
 
   status = info->read_memory_func (memaddr, ibytes, 4, info);
 
-  if (status != 0) 
+  if (status != 0)
     {
       info->memory_error_func (status, memaddr, info);
       return 0;
@@ -761,7 +761,7 @@ read_insn_microblaze (bfd_vma memaddr,
 }
 
 
-int 
+int
 print_insn_microblaze (bfd_vma memaddr, struct disassemble_info * info)
 {
   fprintf_function    fprintf_func = info->fprintf_func;
@@ -780,7 +780,7 @@ print_insn_microblaze (bfd_vma memaddr, struct disassemble_info * info)
   if (inst == 0) {
     return -1;
   }
-  
+
   if (prev_insn_vma == curr_insn_vma) {
   if (memaddr-(info->bytes_per_chunk) == prev_insn_addr) {
     prev_inst = read_insn_microblaze (prev_insn_addr, info, &pop);
@@ -806,7 +806,7 @@ print_insn_microblaze (bfd_vma memaddr, struct disassemble_info * info)
   else
     {
       fprintf_func (stream, "%s", op->name);
-      
+
       switch (op->inst_type)
 	{
   case INST_TYPE_RD_R1_R2:
@@ -851,7 +851,7 @@ print_insn_microblaze (bfd_vma memaddr, struct disassemble_info * info)
 	  break;
 	case INST_TYPE_R1_IMM:
 	  fprintf_func(stream, "\t%s, %s", get_field_r1(inst), get_field_imm(inst));
-	  /* The non-pc relative instructions are returns, which shouldn't 
+	  /* The non-pc relative instructions are returns, which shouldn't
 	     have a label printed */
 	  if (info->print_address_func && op->inst_offset_type == INST_PC_OFFSET && info->symbol_at_address_func) {
 	    if (immfound)
@@ -886,7 +886,7 @@ print_insn_microblaze (bfd_vma memaddr, struct disassemble_info * info)
 	    if (info->symbol_at_address_func(immval, info)) {
 	      fprintf_func (stream, "\t// ");
 	      info->print_address_func (immval, info);
-	    } 
+	    }
 	  }
 	  break;
         case INST_TYPE_IMM:
@@ -938,7 +938,7 @@ print_insn_microblaze (bfd_vma memaddr, struct disassemble_info * info)
 	  break;
 	}
     }
-  
+
   /* Say how many bytes we consumed? */
   return 4;
 }
diff --git a/disas/nios2.c b/disas/nios2.c
index c3e82140c7..35d9f40f3e 100644
--- a/disas/nios2.c
+++ b/disas/nios2.c
@@ -96,7 +96,7 @@ enum overflow_type
   no_overflow
 };
 
-/* This structure holds information for a particular instruction. 
+/* This structure holds information for a particular instruction.
 
    The args field is a string describing the operands.  The following
    letters can appear in the args:
@@ -152,26 +152,26 @@ enum overflow_type
 struct nios2_opcode
 {
   const char *name;		/* The name of the instruction.  */
-  const char *args;		/* A string describing the arguments for this 
+  const char *args;		/* A string describing the arguments for this
 				   instruction.  */
-  const char *args_test;	/* Like args, but with an extra argument for 
+  const char *args_test;	/* Like args, but with an extra argument for
 				   the expected opcode.  */
-  unsigned long num_args;	/* The number of arguments the instruction 
+  unsigned long num_args;	/* The number of arguments the instruction
 				   takes.  */
   unsigned size;		/* Size in bytes of the instruction.  */
   enum iw_format_type format;	/* Instruction format.  */
   unsigned long match;		/* The basic opcode for the instruction.  */
-  unsigned long mask;		/* Mask for the opcode field of the 
+  unsigned long mask;		/* Mask for the opcode field of the
 				   instruction.  */
-  unsigned long pinfo;		/* Is this a real instruction or instruction 
+  unsigned long pinfo;		/* Is this a real instruction or instruction
 				   macro?  */
-  enum overflow_type overflow_msg;  /* Used to generate informative 
+  enum overflow_type overflow_msg;  /* Used to generate informative
 				       message when fixup overflows.  */
 };
 
-/* This value is used in the nios2_opcode.pinfo field to indicate that the 
-   instruction is a macro or pseudo-op.  This requires special treatment by 
-   the assembler, and is used by the disassembler to determine whether to 
+/* This value is used in the nios2_opcode.pinfo field to indicate that the
+   instruction is a macro or pseudo-op.  This requires special treatment by
+   the assembler, and is used by the disassembler to determine whether to
    check for a nop.  */
 #define NIOS2_INSN_MACRO	0x80000000
 #define NIOS2_INSN_MACRO_MOV	0x80000001
@@ -207,124 +207,124 @@ struct nios2_reg
 #define _NIOS2R1_H_
 
 /* R1 fields.  */
-#define IW_R1_OP_LSB 0 
-#define IW_R1_OP_SIZE 6 
-#define IW_R1_OP_UNSHIFTED_MASK (0xffffffffu >> (32 - IW_R1_OP_SIZE)) 
-#define IW_R1_OP_SHIFTED_MASK (IW_R1_OP_UNSHIFTED_MASK << IW_R1_OP_LSB) 
-#define GET_IW_R1_OP(W) (((W) >> IW_R1_OP_LSB) & IW_R1_OP_UNSHIFTED_MASK) 
-#define SET_IW_R1_OP(V) (((V) & IW_R1_OP_UNSHIFTED_MASK) << IW_R1_OP_LSB) 
-
-#define IW_I_A_LSB 27 
-#define IW_I_A_SIZE 5 
-#define IW_I_A_UNSHIFTED_MASK (0xffffffffu >> (32 - IW_I_A_SIZE)) 
-#define IW_I_A_SHIFTED_MASK (IW_I_A_UNSHIFTED_MASK << IW_I_A_LSB) 
-#define GET_IW_I_A(W) (((W) >> IW_I_A_LSB) & IW_I_A_UNSHIFTED_MASK) 
-#define SET_IW_I_A(V) (((V) & IW_I_A_UNSHIFTED_MASK) << IW_I_A_LSB) 
-
-#define IW_I_B_LSB 22 
-#define IW_I_B_SIZE 5 
-#define IW_I_B_UNSHIFTED_MASK (0xffffffffu >> (32 - IW_I_B_SIZE)) 
-#define IW_I_B_SHIFTED_MASK (IW_I_B_UNSHIFTED_MASK << IW_I_B_LSB) 
-#define GET_IW_I_B(W) (((W) >> IW_I_B_LSB) & IW_I_B_UNSHIFTED_MASK) 
-#define SET_IW_I_B(V) (((V) & IW_I_B_UNSHIFTED_MASK) << IW_I_B_LSB) 
-
-#define IW_I_IMM16_LSB 6 
-#define IW_I_IMM16_SIZE 16 
-#define IW_I_IMM16_UNSHIFTED_MASK (0xffffffffu >> (32 - IW_I_IMM16_SIZE)) 
-#define IW_I_IMM16_SHIFTED_MASK (IW_I_IMM16_UNSHIFTED_MASK << IW_I_IMM16_LSB) 
-#define GET_IW_I_IMM16(W) (((W) >> IW_I_IMM16_LSB) & IW_I_IMM16_UNSHIFTED_MASK) 
-#define SET_IW_I_IMM16(V) (((V) & IW_I_IMM16_UNSHIFTED_MASK) << IW_I_IMM16_LSB) 
-
-#define IW_R_A_LSB 27 
-#define IW_R_A_SIZE 5 
-#define IW_R_A_UNSHIFTED_MASK (0xffffffffu >> (32 - IW_R_A_SIZE)) 
-#define IW_R_A_SHIFTED_MASK (IW_R_A_UNSHIFTED_MASK << IW_R_A_LSB) 
-#define GET_IW_R_A(W) (((W) >> IW_R_A_LSB) & IW_R_A_UNSHIFTED_MASK) 
-#define SET_IW_R_A(V) (((V) & IW_R_A_UNSHIFTED_MASK) << IW_R_A_LSB) 
-
-#define IW_R_B_LSB 22 
-#define IW_R_B_SIZE 5 
-#define IW_R_B_UNSHIFTED_MASK (0xffffffffu >> (32 - IW_R_B_SIZE)) 
-#define IW_R_B_SHIFTED_MASK (IW_R_B_UNSHIFTED_MASK << IW_R_B_LSB) 
-#define GET_IW_R_B(W) (((W) >> IW_R_B_LSB) & IW_R_B_UNSHIFTED_MASK) 
-#define SET_IW_R_B(V) (((V) & IW_R_B_UNSHIFTED_MASK) << IW_R_B_LSB) 
-
-#define IW_R_C_LSB 17 
-#define IW_R_C_SIZE 5 
-#define IW_R_C_UNSHIFTED_MASK (0xffffffffu >> (32 - IW_R_C_SIZE)) 
-#define IW_R_C_SHIFTED_MASK (IW_R_C_UNSHIFTED_MASK << IW_R_C_LSB) 
-#define GET_IW_R_C(W) (((W) >> IW_R_C_LSB) & IW_R_C_UNSHIFTED_MASK) 
-#define SET_IW_R_C(V) (((V) & IW_R_C_UNSHIFTED_MASK) << IW_R_C_LSB) 
-
-#define IW_R_OPX_LSB 11 
-#define IW_R_OPX_SIZE 6 
-#define IW_R_OPX_UNSHIFTED_MASK (0xffffffffu >> (32 - IW_R_OPX_SIZE)) 
-#define IW_R_OPX_SHIFTED_MASK (IW_R_OPX_UNSHIFTED_MASK << IW_R_OPX_LSB) 
-#define GET_IW_R_OPX(W) (((W) >> IW_R_OPX_LSB) & IW_R_OPX_UNSHIFTED_MASK) 
-#define SET_IW_R_OPX(V) (((V) & IW_R_OPX_UNSHIFTED_MASK) << IW_R_OPX_LSB) 
-
-#define IW_R_IMM5_LSB 6 
-#define IW_R_IMM5_SIZE 5 
-#define IW_R_IMM5_UNSHIFTED_MASK (0xffffffffu >> (32 - IW_R_IMM5_SIZE)) 
-#define IW_R_IMM5_SHIFTED_MASK (IW_R_IMM5_UNSHIFTED_MASK << IW_R_IMM5_LSB) 
-#define GET_IW_R_IMM5(W) (((W) >> IW_R_IMM5_LSB) & IW_R_IMM5_UNSHIFTED_MASK) 
-#define SET_IW_R_IMM5(V) (((V) & IW_R_IMM5_UNSHIFTED_MASK) << IW_R_IMM5_LSB) 
-
-#define IW_J_IMM26_LSB 6 
-#define IW_J_IMM26_SIZE 26 
-#define IW_J_IMM26_UNSHIFTED_MASK (0xffffffffu >> (32 - IW_J_IMM26_SIZE)) 
-#define IW_J_IMM26_SHIFTED_MASK (IW_J_IMM26_UNSHIFTED_MASK << IW_J_IMM26_LSB) 
-#define GET_IW_J_IMM26(W) (((W) >> IW_J_IMM26_LSB) & IW_J_IMM26_UNSHIFTED_MASK) 
-#define SET_IW_J_IMM26(V) (((V) & IW_J_IMM26_UNSHIFTED_MASK) << IW_J_IMM26_LSB) 
-
-#define IW_CUSTOM_A_LSB 27 
-#define IW_CUSTOM_A_SIZE 5 
-#define IW_CUSTOM_A_UNSHIFTED_MASK (0xffffffffu >> (32 - IW_CUSTOM_A_SIZE)) 
-#define IW_CUSTOM_A_SHIFTED_MASK (IW_CUSTOM_A_UNSHIFTED_MASK << IW_CUSTOM_A_LSB) 
-#define GET_IW_CUSTOM_A(W) (((W) >> IW_CUSTOM_A_LSB) & IW_CUSTOM_A_UNSHIFTED_MASK) 
-#define SET_IW_CUSTOM_A(V) (((V) & IW_CUSTOM_A_UNSHIFTED_MASK) << IW_CUSTOM_A_LSB) 
-
-#define IW_CUSTOM_B_LSB 22 
-#define IW_CUSTOM_B_SIZE 5 
-#define IW_CUSTOM_B_UNSHIFTED_MASK (0xffffffffu >> (32 - IW_CUSTOM_B_SIZE)) 
-#define IW_CUSTOM_B_SHIFTED_MASK (IW_CUSTOM_B_UNSHIFTED_MASK << IW_CUSTOM_B_LSB) 
-#define GET_IW_CUSTOM_B(W) (((W) >> IW_CUSTOM_B_LSB) & IW_CUSTOM_B_UNSHIFTED_MASK) 
-#define SET_IW_CUSTOM_B(V) (((V) & IW_CUSTOM_B_UNSHIFTED_MASK) << IW_CUSTOM_B_LSB) 
-
-#define IW_CUSTOM_C_LSB 17 
-#define IW_CUSTOM_C_SIZE 5 
-#define IW_CUSTOM_C_UNSHIFTED_MASK (0xffffffffu >> (32 - IW_CUSTOM_C_SIZE)) 
-#define IW_CUSTOM_C_SHIFTED_MASK (IW_CUSTOM_C_UNSHIFTED_MASK << IW_CUSTOM_C_LSB) 
-#define GET_IW_CUSTOM_C(W) (((W) >> IW_CUSTOM_C_LSB) & IW_CUSTOM_C_UNSHIFTED_MASK) 
-#define SET_IW_CUSTOM_C(V) (((V) & IW_CUSTOM_C_UNSHIFTED_MASK) << IW_CUSTOM_C_LSB) 
-
-#define IW_CUSTOM_READA_LSB 16 
-#define IW_CUSTOM_READA_SIZE 1 
-#define IW_CUSTOM_READA_UNSHIFTED_MASK (0xffffffffu >> (32 - IW_CUSTOM_READA_SIZE)) 
-#define IW_CUSTOM_READA_SHIFTED_MASK (IW_CUSTOM_READA_UNSHIFTED_MASK << IW_CUSTOM_READA_LSB) 
-#define GET_IW_CUSTOM_READA(W) (((W) >> IW_CUSTOM_READA_LSB) & IW_CUSTOM_READA_UNSHIFTED_MASK) 
-#define SET_IW_CUSTOM_READA(V) (((V) & IW_CUSTOM_READA_UNSHIFTED_MASK) << IW_CUSTOM_READA_LSB) 
-
-#define IW_CUSTOM_READB_LSB 15 
-#define IW_CUSTOM_READB_SIZE 1 
-#define IW_CUSTOM_READB_UNSHIFTED_MASK (0xffffffffu >> (32 - IW_CUSTOM_READB_SIZE)) 
-#define IW_CUSTOM_READB_SHIFTED_MASK (IW_CUSTOM_READB_UNSHIFTED_MASK << IW_CUSTOM_READB_LSB) 
-#define GET_IW_CUSTOM_READB(W) (((W) >> IW_CUSTOM_READB_LSB) & IW_CUSTOM_READB_UNSHIFTED_MASK) 
-#define SET_IW_CUSTOM_READB(V) (((V) & IW_CUSTOM_READB_UNSHIFTED_MASK) << IW_CUSTOM_READB_LSB) 
-
-#define IW_CUSTOM_READC_LSB 14 
-#define IW_CUSTOM_READC_SIZE 1 
-#define IW_CUSTOM_READC_UNSHIFTED_MASK (0xffffffffu >> (32 - IW_CUSTOM_READC_SIZE)) 
-#define IW_CUSTOM_READC_SHIFTED_MASK (IW_CUSTOM_READC_UNSHIFTED_MASK << IW_CUSTOM_READC_LSB) 
-#define GET_IW_CUSTOM_READC(W) (((W) >> IW_CUSTOM_READC_LSB) & IW_CUSTOM_READC_UNSHIFTED_MASK) 
-#define SET_IW_CUSTOM_READC(V) (((V) & IW_CUSTOM_READC_UNSHIFTED_MASK) << IW_CUSTOM_READC_LSB) 
-
-#define IW_CUSTOM_N_LSB 6 
-#define IW_CUSTOM_N_SIZE 8 
-#define IW_CUSTOM_N_UNSHIFTED_MASK (0xffffffffu >> (32 - IW_CUSTOM_N_SIZE)) 
-#define IW_CUSTOM_N_SHIFTED_MASK (IW_CUSTOM_N_UNSHIFTED_MASK << IW_CUSTOM_N_LSB) 
-#define GET_IW_CUSTOM_N(W) (((W) >> IW_CUSTOM_N_LSB) & IW_CUSTOM_N_UNSHIFTED_MASK) 
-#define SET_IW_CUSTOM_N(V) (((V) & IW_CUSTOM_N_UNSHIFTED_MASK) << IW_CUSTOM_N_LSB) 
+#define IW_R1_OP_LSB 0
+#define IW_R1_OP_SIZE 6
+#define IW_R1_OP_UNSHIFTED_MASK (0xffffffffu >> (32 - IW_R1_OP_SIZE))
+#define IW_R1_OP_SHIFTED_MASK (IW_R1_OP_UNSHIFTED_MASK << IW_R1_OP_LSB)
+#define GET_IW_R1_OP(W) (((W) >> IW_R1_OP_LSB) & IW_R1_OP_UNSHIFTED_MASK)
+#define SET_IW_R1_OP(V) (((V) & IW_R1_OP_UNSHIFTED_MASK) << IW_R1_OP_LSB)
+
+#define IW_I_A_LSB 27
+#define IW_I_A_SIZE 5
+#define IW_I_A_UNSHIFTED_MASK (0xffffffffu >> (32 - IW_I_A_SIZE))
+#define IW_I_A_SHIFTED_MASK (IW_I_A_UNSHIFTED_MASK << IW_I_A_LSB)
+#define GET_IW_I_A(W) (((W) >> IW_I_A_LSB) & IW_I_A_UNSHIFTED_MASK)
+#define SET_IW_I_A(V) (((V) & IW_I_A_UNSHIFTED_MASK) << IW_I_A_LSB)
+
+#define IW_I_B_LSB 22
+#define IW_I_B_SIZE 5
+#define IW_I_B_UNSHIFTED_MASK (0xffffffffu >> (32 - IW_I_B_SIZE))
+#define IW_I_B_SHIFTED_MASK (IW_I_B_UNSHIFTED_MASK << IW_I_B_LSB)
+#define GET_IW_I_B(W) (((W) >> IW_I_B_LSB) & IW_I_B_UNSHIFTED_MASK)
+#define SET_IW_I_B(V) (((V) & IW_I_B_UNSHIFTED_MASK) << IW_I_B_LSB)
+
+#define IW_I_IMM16_LSB 6
+#define IW_I_IMM16_SIZE 16
+#define IW_I_IMM16_UNSHIFTED_MASK (0xffffffffu >> (32 - IW_I_IMM16_SIZE))
+#define IW_I_IMM16_SHIFTED_MASK (IW_I_IMM16_UNSHIFTED_MASK << IW_I_IMM16_LSB)
+#define GET_IW_I_IMM16(W) (((W) >> IW_I_IMM16_LSB) & IW_I_IMM16_UNSHIFTED_MASK)
+#define SET_IW_I_IMM16(V) (((V) & IW_I_IMM16_UNSHIFTED_MASK) << IW_I_IMM16_LSB)
+
+#define IW_R_A_LSB 27
+#define IW_R_A_SIZE 5
+#define IW_R_A_UNSHIFTED_MASK (0xffffffffu >> (32 - IW_R_A_SIZE))
+#define IW_R_A_SHIFTED_MASK (IW_R_A_UNSHIFTED_MASK << IW_R_A_LSB)
+#define GET_IW_R_A(W) (((W) >> IW_R_A_LSB) & IW_R_A_UNSHIFTED_MASK)
+#define SET_IW_R_A(V) (((V) & IW_R_A_UNSHIFTED_MASK) << IW_R_A_LSB)
+
+#define IW_R_B_LSB 22
+#define IW_R_B_SIZE 5
+#define IW_R_B_UNSHIFTED_MASK (0xffffffffu >> (32 - IW_R_B_SIZE))
+#define IW_R_B_SHIFTED_MASK (IW_R_B_UNSHIFTED_MASK << IW_R_B_LSB)
+#define GET_IW_R_B(W) (((W) >> IW_R_B_LSB) & IW_R_B_UNSHIFTED_MASK)
+#define SET_IW_R_B(V) (((V) & IW_R_B_UNSHIFTED_MASK) << IW_R_B_LSB)
+
+#define IW_R_C_LSB 17
+#define IW_R_C_SIZE 5
+#define IW_R_C_UNSHIFTED_MASK (0xffffffffu >> (32 - IW_R_C_SIZE))
+#define IW_R_C_SHIFTED_MASK (IW_R_C_UNSHIFTED_MASK << IW_R_C_LSB)
+#define GET_IW_R_C(W) (((W) >> IW_R_C_LSB) & IW_R_C_UNSHIFTED_MASK)
+#define SET_IW_R_C(V) (((V) & IW_R_C_UNSHIFTED_MASK) << IW_R_C_LSB)
+
+#define IW_R_OPX_LSB 11
+#define IW_R_OPX_SIZE 6
+#define IW_R_OPX_UNSHIFTED_MASK (0xffffffffu >> (32 - IW_R_OPX_SIZE))
+#define IW_R_OPX_SHIFTED_MASK (IW_R_OPX_UNSHIFTED_MASK << IW_R_OPX_LSB)
+#define GET_IW_R_OPX(W) (((W) >> IW_R_OPX_LSB) & IW_R_OPX_UNSHIFTED_MASK)
+#define SET_IW_R_OPX(V) (((V) & IW_R_OPX_UNSHIFTED_MASK) << IW_R_OPX_LSB)
+
+#define IW_R_IMM5_LSB 6
+#define IW_R_IMM5_SIZE 5
+#define IW_R_IMM5_UNSHIFTED_MASK (0xffffffffu >> (32 - IW_R_IMM5_SIZE))
+#define IW_R_IMM5_SHIFTED_MASK (IW_R_IMM5_UNSHIFTED_MASK << IW_R_IMM5_LSB)
+#define GET_IW_R_IMM5(W) (((W) >> IW_R_IMM5_LSB) & IW_R_IMM5_UNSHIFTED_MASK)
+#define SET_IW_R_IMM5(V) (((V) & IW_R_IMM5_UNSHIFTED_MASK) << IW_R_IMM5_LSB)
+
+#define IW_J_IMM26_LSB 6
+#define IW_J_IMM26_SIZE 26
+#define IW_J_IMM26_UNSHIFTED_MASK (0xffffffffu >> (32 - IW_J_IMM26_SIZE))
+#define IW_J_IMM26_SHIFTED_MASK (IW_J_IMM26_UNSHIFTED_MASK << IW_J_IMM26_LSB)
+#define GET_IW_J_IMM26(W) (((W) >> IW_J_IMM26_LSB) & IW_J_IMM26_UNSHIFTED_MASK)
+#define SET_IW_J_IMM26(V) (((V) & IW_J_IMM26_UNSHIFTED_MASK) << IW_J_IMM26_LSB)
+
+#define IW_CUSTOM_A_LSB 27
+#define IW_CUSTOM_A_SIZE 5
+#define IW_CUSTOM_A_UNSHIFTED_MASK (0xffffffffu >> (32 - IW_CUSTOM_A_SIZE))
+#define IW_CUSTOM_A_SHIFTED_MASK (IW_CUSTOM_A_UNSHIFTED_MASK << IW_CUSTOM_A_LSB)
+#define GET_IW_CUSTOM_A(W) (((W) >> IW_CUSTOM_A_LSB) & IW_CUSTOM_A_UNSHIFTED_MASK)
+#define SET_IW_CUSTOM_A(V) (((V) & IW_CUSTOM_A_UNSHIFTED_MASK) << IW_CUSTOM_A_LSB)
+
+#define IW_CUSTOM_B_LSB 22
+#define IW_CUSTOM_B_SIZE 5
+#define IW_CUSTOM_B_UNSHIFTED_MASK (0xffffffffu >> (32 - IW_CUSTOM_B_SIZE))
+#define IW_CUSTOM_B_SHIFTED_MASK (IW_CUSTOM_B_UNSHIFTED_MASK << IW_CUSTOM_B_LSB)
+#define GET_IW_CUSTOM_B(W) (((W) >> IW_CUSTOM_B_LSB) & IW_CUSTOM_B_UNSHIFTED_MASK)
+#define SET_IW_CUSTOM_B(V) (((V) & IW_CUSTOM_B_UNSHIFTED_MASK) << IW_CUSTOM_B_LSB)
+
+#define IW_CUSTOM_C_LSB 17
+#define IW_CUSTOM_C_SIZE 5
+#define IW_CUSTOM_C_UNSHIFTED_MASK (0xffffffffu >> (32 - IW_CUSTOM_C_SIZE))
+#define IW_CUSTOM_C_SHIFTED_MASK (IW_CUSTOM_C_UNSHIFTED_MASK << IW_CUSTOM_C_LSB)
+#define GET_IW_CUSTOM_C(W) (((W) >> IW_CUSTOM_C_LSB) & IW_CUSTOM_C_UNSHIFTED_MASK)
+#define SET_IW_CUSTOM_C(V) (((V) & IW_CUSTOM_C_UNSHIFTED_MASK) << IW_CUSTOM_C_LSB)
+
+#define IW_CUSTOM_READA_LSB 16
+#define IW_CUSTOM_READA_SIZE 1
+#define IW_CUSTOM_READA_UNSHIFTED_MASK (0xffffffffu >> (32 - IW_CUSTOM_READA_SIZE))
+#define IW_CUSTOM_READA_SHIFTED_MASK (IW_CUSTOM_READA_UNSHIFTED_MASK << IW_CUSTOM_READA_LSB)
+#define GET_IW_CUSTOM_READA(W) (((W) >> IW_CUSTOM_READA_LSB) & IW_CUSTOM_READA_UNSHIFTED_MASK)
+#define SET_IW_CUSTOM_READA(V) (((V) & IW_CUSTOM_READA_UNSHIFTED_MASK) << IW_CUSTOM_READA_LSB)
+
+#define IW_CUSTOM_READB_LSB 15
+#define IW_CUSTOM_READB_SIZE 1
+#define IW_CUSTOM_READB_UNSHIFTED_MASK (0xffffffffu >> (32 - IW_CUSTOM_READB_SIZE))
+#define IW_CUSTOM_READB_SHIFTED_MASK (IW_CUSTOM_READB_UNSHIFTED_MASK << IW_CUSTOM_READB_LSB)
+#define GET_IW_CUSTOM_READB(W) (((W) >> IW_CUSTOM_READB_LSB) & IW_CUSTOM_READB_UNSHIFTED_MASK)
+#define SET_IW_CUSTOM_READB(V) (((V) & IW_CUSTOM_READB_UNSHIFTED_MASK) << IW_CUSTOM_READB_LSB)
+
+#define IW_CUSTOM_READC_LSB 14
+#define IW_CUSTOM_READC_SIZE 1
+#define IW_CUSTOM_READC_UNSHIFTED_MASK (0xffffffffu >> (32 - IW_CUSTOM_READC_SIZE))
+#define IW_CUSTOM_READC_SHIFTED_MASK (IW_CUSTOM_READC_UNSHIFTED_MASK << IW_CUSTOM_READC_LSB)
+#define GET_IW_CUSTOM_READC(W) (((W) >> IW_CUSTOM_READC_LSB) & IW_CUSTOM_READC_UNSHIFTED_MASK)
+#define SET_IW_CUSTOM_READC(V) (((V) & IW_CUSTOM_READC_UNSHIFTED_MASK) << IW_CUSTOM_READC_LSB)
+
+#define IW_CUSTOM_N_LSB 6
+#define IW_CUSTOM_N_SIZE 8
+#define IW_CUSTOM_N_UNSHIFTED_MASK (0xffffffffu >> (32 - IW_CUSTOM_N_SIZE))
+#define IW_CUSTOM_N_SHIFTED_MASK (IW_CUSTOM_N_UNSHIFTED_MASK << IW_CUSTOM_N_LSB)
+#define GET_IW_CUSTOM_N(W) (((W) >> IW_CUSTOM_N_LSB) & IW_CUSTOM_N_UNSHIFTED_MASK)
+#define SET_IW_CUSTOM_N(V) (((V) & IW_CUSTOM_N_UNSHIFTED_MASK) << IW_CUSTOM_N_LSB)
 
 /* R1 opcodes.  */
 #define R1_OP_CALL 0
diff --git a/hmp-commands.hx b/hmp-commands.hx
index 60f395c276..d548a3ab74 100644
--- a/hmp-commands.hx
+++ b/hmp-commands.hx
@@ -1120,7 +1120,7 @@ ERST
 
 SRST
 ``dump-guest-memory [-p]`` *filename* *begin* *length*
-  \ 
+  \
 ``dump-guest-memory [-z|-l|-s|-w]`` *filename*
   Dump guest memory to *protocol*. The file can be processed with crash or
   gdb. Without ``-z|-l|-s|-w``, the dump format is ELF.
diff --git a/hw/alpha/typhoon.c b/hw/alpha/typhoon.c
index 29d44dfb06..57c7cf0bd3 100644
--- a/hw/alpha/typhoon.c
+++ b/hw/alpha/typhoon.c
@@ -34,7 +34,7 @@ typedef struct TyphoonWindow {
     uint64_t wsm;
     uint64_t tba;
 } TyphoonWindow;
- 
+
 typedef struct TyphoonPchip {
     MemoryRegion region;
     MemoryRegion reg_iack;
@@ -189,7 +189,7 @@ static MemTxResult cchip_read(void *opaque, hwaddr addr,
     case 0x0780:
         /* PWR: Power Management Control.   */
         break;
-    
+
     case 0x0c00: /* CMONCTLA */
     case 0x0c40: /* CMONCTLB */
     case 0x0c80: /* CMONCNT01 */
@@ -441,7 +441,7 @@ static MemTxResult cchip_write(void *opaque, hwaddr addr,
     case 0x0780:
         /* PWR: Power Management Control.   */
         break;
-    
+
     case 0x0c00: /* CMONCTLA */
     case 0x0c40: /* CMONCTLB */
     case 0x0c80: /* CMONCNT01 */
diff --git a/hw/arm/gumstix.c b/hw/arm/gumstix.c
index 3a4bc332c4..3fdef425ab 100644
--- a/hw/arm/gumstix.c
+++ b/hw/arm/gumstix.c
@@ -10,10 +10,10 @@
  * Contributions after 2012-01-13 are licensed under the terms of the
  * GNU GPL, version 2 or (at your option) any later version.
  */
- 
-/* 
+
+/*
  * Example usage:
- * 
+ *
  * connex:
  * =======
  * create image:
diff --git a/hw/arm/omap1.c b/hw/arm/omap1.c
index 6ba0df6b6d..82e60e3b30 100644
--- a/hw/arm/omap1.c
+++ b/hw/arm/omap1.c
@@ -2914,7 +2914,7 @@ static void omap_rtc_tick(void *opaque)
 
     /*
      * Every full hour add a rough approximation of the compensation
-     * register to the 32kHz Timer (which drives the RTC) value. 
+     * register to the 32kHz Timer (which drives the RTC) value.
      */
     if (s->auto_comp && !s->current_tm.tm_sec && !s->current_tm.tm_min)
         s->tick += s->comp_reg * 1000 / 32768;
diff --git a/hw/arm/stellaris.c b/hw/arm/stellaris.c
index 97ef566c12..7089a534d4 100644
--- a/hw/arm/stellaris.c
+++ b/hw/arm/stellaris.c
@@ -979,7 +979,7 @@ static void stellaris_adc_fifo_write(stellaris_adc_state *s, int n,
 {
     int head;
 
-    /* TODO: Real hardware has limited size FIFOs.  We have a full 16 entry 
+    /* TODO: Real hardware has limited size FIFOs.  We have a full 16 entry
        FIFO fir each sequencer.  */
     head = (s->fifo[n].state >> 4) & 0xf;
     if (s->fifo[n].state & STELLARIS_ADC_FIFO_FULL) {
diff --git a/hw/char/etraxfs_ser.c b/hw/char/etraxfs_ser.c
index 947bdb649a..85f6523efe 100644
--- a/hw/char/etraxfs_ser.c
+++ b/hw/char/etraxfs_ser.c
@@ -180,7 +180,7 @@ static void serial_receive(void *opaque, const uint8_t *buf, int size)
         return;
     }
 
-    for (i = 0; i < size; i++) { 
+    for (i = 0; i < size; i++) {
         s->rx_fifo[s->rx_fifo_pos] = buf[i];
         s->rx_fifo_pos++;
         s->rx_fifo_pos &= 15;
diff --git a/hw/core/ptimer.c b/hw/core/ptimer.c
index b5a54e2536..f08c3c33a7 100644
--- a/hw/core/ptimer.c
+++ b/hw/core/ptimer.c
@@ -246,7 +246,7 @@ uint64_t ptimer_get_count(ptimer_state *s)
             } else {
                 if (shift != 0)
                     div |= (period_frac >> (32 - shift));
-                /* Look at remaining bits of period_frac and round div up if 
+                /* Look at remaining bits of period_frac and round div up if
                    necessary.  */
                 if ((uint32_t)(period_frac << shift))
                     div += 1;
diff --git a/hw/cris/axis_dev88.c b/hw/cris/axis_dev88.c
index dab7423c73..adeed30638 100644
--- a/hw/cris/axis_dev88.c
+++ b/hw/cris/axis_dev88.c
@@ -267,7 +267,7 @@ void axisdev88_init(MachineState *machine)
 
     memory_region_add_subregion(address_space_mem, 0x40000000, machine->ram);
 
-    /* The ETRAX-FS has 128Kb on chip ram, the docs refer to it as the 
+    /* The ETRAX-FS has 128Kb on chip ram, the docs refer to it as the
        internal memory.  */
     memory_region_init_ram(phys_intmem, NULL, "axisdev88.chipram",
                            INTMEM_SIZE, &error_fatal);
diff --git a/hw/cris/boot.c b/hw/cris/boot.c
index b8947bc660..06a440431a 100644
--- a/hw/cris/boot.c
+++ b/hw/cris/boot.c
@@ -72,7 +72,7 @@ void cris_load_image(CRISCPU *cpu, struct cris_load_info *li)
     int image_size;
 
     env->load_info = li;
-    /* Boots a kernel elf binary, os/linux-2.6/vmlinux from the axis 
+    /* Boots a kernel elf binary, os/linux-2.6/vmlinux from the axis
        devboard SDK.  */
     image_size = load_elf(li->image_filename, NULL,
                           translate_kernel_address, NULL,
diff --git a/hw/display/qxl.c b/hw/display/qxl.c
index d5627119ec..28caf878cd 100644
--- a/hw/display/qxl.c
+++ b/hw/display/qxl.c
@@ -51,7 +51,7 @@
 #undef ALIGN
 #define ALIGN(a, b) (((a) + ((b) - 1)) & ~((b) - 1))
 
-#define PIXEL_SIZE 0.2936875 //1280x1024 is 14.8" x 11.9" 
+#define PIXEL_SIZE 0.2936875 //1280x1024 is 14.8" x 11.9"
 
 #define QXL_MODE(_x, _y, _b, _o)                  \
     {   .x_res = _x,                              \
diff --git a/hw/dma/etraxfs_dma.c b/hw/dma/etraxfs_dma.c
index c4334e87bf..20173330a0 100644
--- a/hw/dma/etraxfs_dma.c
+++ b/hw/dma/etraxfs_dma.c
@@ -322,12 +322,12 @@ static inline void channel_start(struct fs_dma_ctrl *ctrl, int c)
 
 static void channel_continue(struct fs_dma_ctrl *ctrl, int c)
 {
-	if (!channel_en(ctrl, c) 
+	if (!channel_en(ctrl, c)
 	    || channel_stopped(ctrl, c)
 	    || ctrl->channels[c].state != RUNNING
 	    /* Only reload the current data descriptor if it has eol set.  */
 	    || !ctrl->channels[c].current_d.eol) {
-		D(printf("continue failed ch=%d state=%d stopped=%d en=%d eol=%d\n", 
+		D(printf("continue failed ch=%d state=%d stopped=%d en=%d eol=%d\n",
 			 c, ctrl->channels[c].state,
 			 channel_stopped(ctrl, c),
 			 channel_en(ctrl,c),
@@ -383,7 +383,7 @@ static void channel_update_irq(struct fs_dma_ctrl *ctrl, int c)
 		ctrl->channels[c].regs[R_INTR]
 		& ctrl->channels[c].regs[RW_INTR_MASK];
 
-	D(printf("%s: chan=%d masked_intr=%x\n", __func__, 
+	D(printf("%s: chan=%d masked_intr=%x\n", __func__,
 		 c,
 		 ctrl->channels[c].regs[R_MASKED_INTR]));
 
@@ -492,7 +492,7 @@ static int channel_out_run(struct fs_dma_ctrl *ctrl, int c)
 	return 1;
 }
 
-static int channel_in_process(struct fs_dma_ctrl *ctrl, int c, 
+static int channel_in_process(struct fs_dma_ctrl *ctrl, int c,
 			      unsigned char *buf, int buflen, int eop)
 {
 	uint32_t len;
@@ -517,7 +517,7 @@ static int channel_in_process(struct fs_dma_ctrl *ctrl, int c,
 	    || eop) {
 		uint32_t r_intr = ctrl->channels[c].regs[R_INTR];
 
-		D(printf("in dscr end len=%d\n", 
+		D(printf("in dscr end len=%d\n",
 			 ctrl->channels[c].current_d.after
 			 - ctrl->channels[c].current_d.buf));
 		ctrl->channels[c].current_d.after = saved_data_buf;
@@ -708,7 +708,7 @@ static int etraxfs_dmac_run(void *opaque)
 	int i;
 	int p = 0;
 
-	for (i = 0; 
+	for (i = 0;
 	     i < ctrl->nr_channels;
 	     i++)
 	{
@@ -724,10 +724,10 @@ static int etraxfs_dmac_run(void *opaque)
 	return p;
 }
 
-int etraxfs_dmac_input(struct etraxfs_dma_client *client, 
+int etraxfs_dmac_input(struct etraxfs_dma_client *client,
 		       void *buf, int len, int eop)
 {
-	return channel_in_process(client->ctrl, client->channel, 
+	return channel_in_process(client->ctrl, client->channel,
 				  buf, len, eop);
 }
 
@@ -739,7 +739,7 @@ void etraxfs_dmac_connect(void *opaque, int c, qemu_irq *line, int input)
 	ctrl->channels[c].input = input;
 }
 
-void etraxfs_dmac_connect_client(void *opaque, int c, 
+void etraxfs_dmac_connect_client(void *opaque, int c,
 				 struct etraxfs_dma_client *cl)
 {
 	struct fs_dma_ctrl *ctrl = opaque;
diff --git a/hw/dma/i82374.c b/hw/dma/i82374.c
index 6977d85ef8..0db27628d5 100644
--- a/hw/dma/i82374.c
+++ b/hw/dma/i82374.c
@@ -146,7 +146,7 @@ static Property i82374_properties[] = {
 static void i82374_class_init(ObjectClass *klass, void *data)
 {
     DeviceClass *dc = DEVICE_CLASS(klass);
-    
+
     dc->realize = i82374_realize;
     dc->vmsd = &vmstate_i82374;
     device_class_set_props(dc, i82374_properties);
diff --git a/hw/i2c/bitbang_i2c.c b/hw/i2c/bitbang_i2c.c
index b000952b98..425b0ed69e 100644
--- a/hw/i2c/bitbang_i2c.c
+++ b/hw/i2c/bitbang_i2c.c
@@ -95,7 +95,7 @@ int bitbang_i2c_set(bitbang_i2c_interface *i2c, int line, int level)
     case SENDING_BIT7 ... SENDING_BIT0:
         i2c->buffer = (i2c->buffer << 1) | data;
         /* will end up in WAITING_FOR_ACK */
-        i2c->state++; 
+        i2c->state++;
         return bitbang_i2c_ret(i2c, 1);
 
     case WAITING_FOR_ACK:
diff --git a/hw/input/tsc2005.c b/hw/input/tsc2005.c
index 55d61cc843..df07476c3e 100644
--- a/hw/input/tsc2005.c
+++ b/hw/input/tsc2005.c
@@ -169,7 +169,7 @@ static uint16_t tsc2005_read(TSC2005State *s, int reg)
 
     case 0xc:	/* CFR0 */
         return (s->pressure << 15) | ((!s->busy) << 14) |
-                (s->nextprecision << 13) | s->timing[0]; 
+                (s->nextprecision << 13) | s->timing[0];
     case 0xd:	/* CFR1 */
         return s->timing[1];
     case 0xe:	/* CFR2 */
diff --git a/hw/input/tsc210x.c b/hw/input/tsc210x.c
index 182d3725fc..610b3fca59 100644
--- a/hw/input/tsc210x.c
+++ b/hw/input/tsc210x.c
@@ -412,7 +412,7 @@ static uint16_t tsc2102_control_register_read(
     switch (reg) {
     case 0x00:	/* TSC ADC */
         return (s->pressure << 15) | ((!s->busy) << 14) |
-                (s->nextfunction << 10) | (s->nextprecision << 8) | s->filter; 
+                (s->nextfunction << 10) | (s->nextprecision << 8) | s->filter;
 
     case 0x01:	/* Status / Keypad Control */
         if ((s->model & 0xff00) == 0x2100)
diff --git a/hw/intc/etraxfs_pic.c b/hw/intc/etraxfs_pic.c
index 12988c7aa9..9f9377798d 100644
--- a/hw/intc/etraxfs_pic.c
+++ b/hw/intc/etraxfs_pic.c
@@ -52,15 +52,15 @@ struct etrax_pic
 };
 
 static void pic_update(struct etrax_pic *fs)
-{   
+{
     uint32_t vector = 0;
     int i;
 
     fs->regs[R_R_MASKED_VECT] = fs->regs[R_R_VECT] & fs->regs[R_RW_MASK];
 
     /* The ETRAX interrupt controller signals interrupts to the core
-       through an interrupt request wire and an irq vector bus. If 
-       multiple interrupts are simultaneously active it chooses vector 
+       through an interrupt request wire and an irq vector bus. If
+       multiple interrupts are simultaneously active it chooses vector
        0x30 and lets the sw choose the priorities.  */
     if (fs->regs[R_R_MASKED_VECT]) {
         uint32_t mv = fs->regs[R_R_MASKED_VECT];
@@ -113,7 +113,7 @@ static const MemoryRegionOps pic_ops = {
 };
 
 static void nmi_handler(void *opaque, int irq, int level)
-{   
+{
     struct etrax_pic *fs = (void *)opaque;
     uint32_t mask;
 
diff --git a/hw/intc/sh_intc.c b/hw/intc/sh_intc.c
index 72a55e32dd..4c6e4b89a1 100644
--- a/hw/intc/sh_intc.c
+++ b/hw/intc/sh_intc.c
@@ -236,7 +236,7 @@ static uint64_t sh_intc_read(void *opaque, hwaddr offset,
     printf("sh_intc_read 0x%lx\n", (unsigned long) offset);
 #endif
 
-    sh_intc_locate(desc, (unsigned long)offset, &valuep, 
+    sh_intc_locate(desc, (unsigned long)offset, &valuep,
 		   &enum_ids, &first, &width, &mode);
     return *valuep;
 }
@@ -257,7 +257,7 @@ static void sh_intc_write(void *opaque, hwaddr offset,
     printf("sh_intc_write 0x%lx 0x%08x\n", (unsigned long) offset, value);
 #endif
 
-    sh_intc_locate(desc, (unsigned long)offset, &valuep, 
+    sh_intc_locate(desc, (unsigned long)offset, &valuep,
 		   &enum_ids, &first, &width, &mode);
 
     switch (mode) {
@@ -273,7 +273,7 @@ static void sh_intc_write(void *opaque, hwaddr offset,
 	if ((*valuep & mask) == (value & mask))
             continue;
 #if 0
-	printf("k = %d, first = %d, enum = %d, mask = 0x%08x\n", 
+	printf("k = %d, first = %d, enum = %d, mask = 0x%08x\n",
 	       k, first, enum_ids[k], (unsigned int)mask);
 #endif
         sh_intc_toggle_mask(desc, enum_ids[k], value & mask, 0);
@@ -466,7 +466,7 @@ int sh_intc_init(MemoryRegion *sysmem,
     }
 
     desc->irqs = qemu_allocate_irqs(sh_intc_set_irq, desc, nr_sources);
- 
+
     memory_region_init_io(&desc->iomem, NULL, &sh_intc_ops, desc,
                           "interrupt-controller", 0x100000000ULL);
 
@@ -498,7 +498,7 @@ int sh_intc_init(MemoryRegion *sysmem,
     return 0;
 }
 
-/* Assert level <n> IRL interrupt. 
+/* Assert level <n> IRL interrupt.
    0:deassert. 1:lowest priority,... 15:highest priority. */
 void sh_intc_set_irl(void *opaque, int n, int level)
 {
diff --git a/hw/intc/xilinx_intc.c b/hw/intc/xilinx_intc.c
index 3e65e68619..dfc049de92 100644
--- a/hw/intc/xilinx_intc.c
+++ b/hw/intc/xilinx_intc.c
@@ -113,7 +113,7 @@ pic_write(void *opaque, hwaddr addr,
 
     addr >>= 2;
     D(qemu_log("%s addr=%x val=%x\n", __func__, addr * 4, value));
-    switch (addr) 
+    switch (addr)
     {
         case R_IAR:
             p->regs[R_ISR] &= ~value; /* ACK.  */
diff --git a/hw/misc/imx25_ccm.c b/hw/misc/imx25_ccm.c
index d3107e5ca2..83dd09a9bc 100644
--- a/hw/misc/imx25_ccm.c
+++ b/hw/misc/imx25_ccm.c
@@ -200,9 +200,9 @@ static void imx25_ccm_reset(DeviceState *dev)
     memset(s->reg, 0, IMX25_CCM_MAX_REG * sizeof(uint32_t));
     s->reg[IMX25_CCM_MPCTL_REG] = 0x800b2c01;
     s->reg[IMX25_CCM_UPCTL_REG] = 0x84042800;
-    /* 
+    /*
      * The value below gives:
-     * CPU = 133 MHz, AHB = 66,5 MHz, IPG = 33 MHz. 
+     * CPU = 133 MHz, AHB = 66,5 MHz, IPG = 33 MHz.
      */
     s->reg[IMX25_CCM_CCTL_REG]  = 0xd0030000;
     s->reg[IMX25_CCM_CGCR0_REG] = 0x028A0100;
@@ -219,7 +219,7 @@ static void imx25_ccm_reset(DeviceState *dev)
 
     /*
      * default boot will change the reset values to allow:
-     * CPU = 399 MHz, AHB = 133 MHz, IPG = 66,5 MHz. 
+     * CPU = 399 MHz, AHB = 133 MHz, IPG = 66,5 MHz.
      * For some reason, this doesn't work. With the value below, linux
      * detects a 88 MHz IPG CLK instead of 66,5 MHz.
     s->reg[IMX25_CCM_CCTL_REG]  = 0x20032000;
diff --git a/hw/misc/imx31_ccm.c b/hw/misc/imx31_ccm.c
index 6e246827ab..8da2757cbe 100644
--- a/hw/misc/imx31_ccm.c
+++ b/hw/misc/imx31_ccm.c
@@ -115,7 +115,7 @@ static uint32_t imx31_ccm_get_pll_ref_clk(IMXCCMState *dev)
             if (s->reg[IMX31_CCM_CCMR_REG] & CCMR_FPMF) {
                 freq *= 1024;
             }
-        } 
+        }
     } else {
         freq = CKIH_FREQ;
     }
diff --git a/hw/net/vmxnet3.h b/hw/net/vmxnet3.h
index 5b3b76ba7a..020bf70afd 100644
--- a/hw/net/vmxnet3.h
+++ b/hw/net/vmxnet3.h
@@ -246,7 +246,7 @@ struct Vmxnet3_TxDesc {
         };
         u32 val1;
     };
-    
+
     union {
         struct {
 #ifdef __BIG_ENDIAN_BITFIELD
diff --git a/hw/net/xilinx_ethlite.c b/hw/net/xilinx_ethlite.c
index 71d16fef3d..0703f9e444 100644
--- a/hw/net/xilinx_ethlite.c
+++ b/hw/net/xilinx_ethlite.c
@@ -117,7 +117,7 @@ eth_write(void *opaque, hwaddr addr,
     uint32_t value = val64;
 
     addr >>= 2;
-    switch (addr) 
+    switch (addr)
     {
         case R_TX_CTRL0:
         case R_TX_CTRL1:
diff --git a/hw/pci/pcie.c b/hw/pci/pcie.c
index 5b48bae0f6..4692d9b5a3 100644
--- a/hw/pci/pcie.c
+++ b/hw/pci/pcie.c
@@ -705,7 +705,7 @@ void pcie_cap_slot_write_config(PCIDevice *dev,
 
     hotplug_event_notify(dev);
 
-    /* 
+    /*
      * 6.7.3.2 Command Completed Events
      *
      * Software issues a command to a hot-plug capable Downstream Port by
diff --git a/hw/sd/omap_mmc.c b/hw/sd/omap_mmc.c
index 4088a8a80b..7c6f179578 100644
--- a/hw/sd/omap_mmc.c
+++ b/hw/sd/omap_mmc.c
@@ -342,7 +342,7 @@ static uint64_t omap_mmc_read(void *opaque, hwaddr offset,
         return s->arg >> 16;
 
     case 0x0c:	/* MMC_CON */
-        return (s->dw << 15) | (s->mode << 12) | (s->enable << 11) | 
+        return (s->dw << 15) | (s->mode << 12) | (s->enable << 11) |
                 (s->be << 10) | s->clkdiv;
 
     case 0x10:	/* MMC_STAT */
diff --git a/hw/sh4/shix.c b/hw/sh4/shix.c
index f410c08883..dddfa8b336 100644
--- a/hw/sh4/shix.c
+++ b/hw/sh4/shix.c
@@ -49,7 +49,7 @@ static void shix_init(MachineState *machine)
     MemoryRegion *sysmem = get_system_memory();
     MemoryRegion *rom = g_new(MemoryRegion, 1);
     MemoryRegion *sdram = g_new(MemoryRegion, 2);
-    
+
     cpu = SUPERH_CPU(cpu_create(machine->cpu_type));
 
     /* Allocate memory space */
diff --git a/hw/sparc64/sun4u.c b/hw/sparc64/sun4u.c
index 9c8655cffc..e3dd1c67a0 100644
--- a/hw/sparc64/sun4u.c
+++ b/hw/sparc64/sun4u.c
@@ -670,7 +670,7 @@ static void sun4uv_init(MemoryRegion *address_space_mem,
     s = SYS_BUS_DEVICE(nvram);
     memory_region_add_subregion(pci_address_space_io(ebus), 0x2000,
                                 sysbus_mmio_get_region(s, 0));
- 
+
     initrd_size = 0;
     initrd_addr = 0;
     kernel_size = sun4u_load_kernel(machine->kernel_filename,
diff --git a/hw/timer/etraxfs_timer.c b/hw/timer/etraxfs_timer.c
index afe3d30a8e..797f65b3f4 100644
--- a/hw/timer/etraxfs_timer.c
+++ b/hw/timer/etraxfs_timer.c
@@ -230,7 +230,7 @@ static inline void timer_watchdog_update(ETRAXTimerState *t, uint32_t value)
     if (wd_en && wd_key != new_key)
         return;
 
-    D(printf("en=%d new_key=%x oldkey=%x cmd=%d cnt=%d\n", 
+    D(printf("en=%d new_key=%x oldkey=%x cmd=%d cnt=%d\n",
          wd_en, new_key, wd_key, new_cmd, wd_cnt));
 
     if (t->wd_hits)
diff --git a/hw/timer/xilinx_timer.c b/hw/timer/xilinx_timer.c
index 0190aa47d0..0901ca7b05 100644
--- a/hw/timer/xilinx_timer.c
+++ b/hw/timer/xilinx_timer.c
@@ -166,7 +166,7 @@ timer_write(void *opaque, hwaddr addr,
              __func__, addr * 4, value, timer, addr & 3));
     /* Further decoding to address a specific timers reg.  */
     addr &= 3;
-    switch (addr) 
+    switch (addr)
     {
         case R_TCSR:
             if (value & TCSR_TINT)
@@ -179,7 +179,7 @@ timer_write(void *opaque, hwaddr addr,
                 ptimer_transaction_commit(xt->ptimer);
             }
             break;
- 
+
         default:
             if (addr < ARRAY_SIZE(xt->regs))
                 xt->regs[addr] = value;
diff --git a/hw/usb/hcd-musb.c b/hw/usb/hcd-musb.c
index 85f5ff5bd4..f64f47b34f 100644
--- a/hw/usb/hcd-musb.c
+++ b/hw/usb/hcd-musb.c
@@ -33,8 +33,8 @@
 
 #define MUSB_HDRC_INTRTX	0x02	/* 16-bit */
 #define MUSB_HDRC_INTRRX	0x04
-#define MUSB_HDRC_INTRTXE	0x06  
-#define MUSB_HDRC_INTRRXE	0x08  
+#define MUSB_HDRC_INTRTXE	0x06
+#define MUSB_HDRC_INTRRXE	0x08
 #define MUSB_HDRC_INTRUSB	0x0a	/* 8 bit */
 #define MUSB_HDRC_INTRUSBE	0x0b	/* 8 bit */
 #define MUSB_HDRC_FRAME		0x0c	/* 16-bit */
@@ -113,7 +113,7 @@
  */
 
 /* POWER */
-#define MGC_M_POWER_ISOUPDATE		0x80 
+#define MGC_M_POWER_ISOUPDATE		0x80
 #define	MGC_M_POWER_SOFTCONN		0x40
 #define	MGC_M_POWER_HSENAB		0x20
 #define	MGC_M_POWER_HSMODE		0x10
@@ -127,7 +127,7 @@
 #define MGC_M_INTR_RESUME		0x02
 #define MGC_M_INTR_RESET		0x04
 #define MGC_M_INTR_BABBLE		0x04
-#define MGC_M_INTR_SOF			0x08 
+#define MGC_M_INTR_SOF			0x08
 #define MGC_M_INTR_CONNECT		0x10
 #define MGC_M_INTR_DISCONNECT		0x20
 #define MGC_M_INTR_SESSREQ		0x40
@@ -135,7 +135,7 @@
 #define MGC_M_INTR_EP0			0x01	/* FOR EP0 INTERRUPT */
 
 /* DEVCTL */
-#define MGC_M_DEVCTL_BDEVICE		0x80   
+#define MGC_M_DEVCTL_BDEVICE		0x80
 #define MGC_M_DEVCTL_FSDEV		0x40
 #define MGC_M_DEVCTL_LSDEV		0x20
 #define MGC_M_DEVCTL_VBUS		0x18
diff --git a/hw/usb/hcd-ohci.c b/hw/usb/hcd-ohci.c
index 1e6e85e86a..a2bc7e05d6 100644
--- a/hw/usb/hcd-ohci.c
+++ b/hw/usb/hcd-ohci.c
@@ -670,7 +670,7 @@ static int ohci_service_iso_td(OHCIState *ohci, struct ohci_ed *ed,
 
     starting_frame = OHCI_BM(iso_td.flags, TD_SF);
     frame_count = OHCI_BM(iso_td.flags, TD_FC);
-    relative_frame_number = USUB(ohci->frame_number, starting_frame); 
+    relative_frame_number = USUB(ohci->frame_number, starting_frame);
 
     trace_usb_ohci_iso_td_head(
            ed->head & OHCI_DPTR_MASK, ed->tail & OHCI_DPTR_MASK,
@@ -733,8 +733,8 @@ static int ohci_service_iso_td(OHCIState *ohci, struct ohci_ed *ed,
     start_offset = iso_td.offset[relative_frame_number];
     next_offset = iso_td.offset[relative_frame_number + 1];
 
-    if (!(OHCI_BM(start_offset, TD_PSW_CC) & 0xe) || 
-        ((relative_frame_number < frame_count) && 
+    if (!(OHCI_BM(start_offset, TD_PSW_CC) & 0xe) ||
+        ((relative_frame_number < frame_count) &&
          !(OHCI_BM(next_offset, TD_PSW_CC) & 0xe))) {
         trace_usb_ohci_iso_td_bad_cc_not_accessed(start_offset, next_offset);
         return 1;
diff --git a/hw/usb/hcd-uhci.c b/hw/usb/hcd-uhci.c
index 37f7beb3fa..bebc10a723 100644
--- a/hw/usb/hcd-uhci.c
+++ b/hw/usb/hcd-uhci.c
@@ -80,7 +80,7 @@ struct UHCIPCIDeviceClass {
     UHCIInfo       info;
 };
 
-/* 
+/*
  * Pending async transaction.
  * 'packet' must be the first field because completion
  * handler does "(UHCIAsync *) pkt" cast.
diff --git a/hw/virtio/virtio-pci.c b/hw/virtio/virtio-pci.c
index 7bc8c1c056..f7d8b30fd7 100644
--- a/hw/virtio/virtio-pci.c
+++ b/hw/virtio/virtio-pci.c
@@ -836,7 +836,7 @@ static void virtio_pci_vq_vector_mask(VirtIOPCIProxy *proxy,
 
     /* If guest supports masking, keep irqfd but mask it.
      * Otherwise, clean it up now.
-     */ 
+     */
     if (vdev->use_guest_notifier_mask && k->guest_notifier_mask) {
         k->guest_notifier_mask(vdev, queue_no, true);
     } else {
diff --git a/include/hw/cris/etraxfs_dma.h b/include/hw/cris/etraxfs_dma.h
index 095d76b956..f11a5874cf 100644
--- a/include/hw/cris/etraxfs_dma.h
+++ b/include/hw/cris/etraxfs_dma.h
@@ -28,9 +28,9 @@ struct etraxfs_dma_client
 void *etraxfs_dmac_init(hwaddr base, int nr_channels);
 void etraxfs_dmac_connect(void *opaque, int channel, qemu_irq *line,
 			  int input);
-void etraxfs_dmac_connect_client(void *opaque, int c, 
+void etraxfs_dmac_connect_client(void *opaque, int c,
 				 struct etraxfs_dma_client *cl);
-int etraxfs_dmac_input(struct etraxfs_dma_client *client, 
+int etraxfs_dmac_input(struct etraxfs_dma_client *client,
 		       void *buf, int len, int eop);
 
 #endif
diff --git a/include/hw/net/lance.h b/include/hw/net/lance.h
index 0357f5f65c..6099c12d37 100644
--- a/include/hw/net/lance.h
+++ b/include/hw/net/lance.h
@@ -6,7 +6,7 @@
  *
  * This represents the Sparc32 lance (Am7990) ethernet device which is an
  * earlier register-compatible member of the AMD PC-Net II (Am79C970A) family.
- * 
+ *
  * Permission is hereby granted, free of charge, to any person obtaining a copy
  * of this software and associated documentation files (the "Software"), to deal
  * in the Software without restriction, including without limitation the rights
diff --git a/include/hw/ppc/spapr.h b/include/hw/ppc/spapr.h
index c421410e3f..fdeed5ecb6 100644
--- a/include/hw/ppc/spapr.h
+++ b/include/hw/ppc/spapr.h
@@ -131,7 +131,7 @@ struct SpaprMachineClass {
     hwaddr rma_limit;          /* clamp the RMA to this size */
 
     void (*phb_placement)(SpaprMachineState *spapr, uint32_t index,
-                          uint64_t *buid, hwaddr *pio, 
+                          uint64_t *buid, hwaddr *pio,
                           hwaddr *mmio32, hwaddr *mmio64,
                           unsigned n_dma, uint32_t *liobns, hwaddr *nv2gpa,
                           hwaddr *nv2atsd, Error **errp);
diff --git a/include/hw/xen/interface/io/ring.h b/include/hw/xen/interface/io/ring.h
index 5d048b335c..fdb2a6ecba 100644
--- a/include/hw/xen/interface/io/ring.h
+++ b/include/hw/xen/interface/io/ring.h
@@ -1,6 +1,6 @@
 /******************************************************************************
  * ring.h
- * 
+ *
  * Shared producer-consumer ring macros.
  *
  * Permission is hereby granted, free of charge, to any person obtaining a copy
@@ -61,7 +61,7 @@ typedef unsigned int RING_IDX;
 /*
  * Calculate size of a shared ring, given the total available space for the
  * ring and indexes (_sz), and the name tag of the request/response structure.
- * A ring contains as many entries as will fit, rounded down to the nearest 
+ * A ring contains as many entries as will fit, rounded down to the nearest
  * power of two (so we can mask with (size-1) to loop around).
  */
 #define __CONST_RING_SIZE(_s, _sz) \
@@ -75,7 +75,7 @@ typedef unsigned int RING_IDX;
 
 /*
  * Macros to make the correct C datatypes for a new kind of ring.
- * 
+ *
  * To make a new ring datatype, you need to have two message structures,
  * let's say request_t, and response_t already defined.
  *
@@ -85,7 +85,7 @@ typedef unsigned int RING_IDX;
  *
  * These expand out to give you a set of types, as you can see below.
  * The most important of these are:
- * 
+ *
  *     mytag_sring_t      - The shared ring.
  *     mytag_front_ring_t - The 'front' half of the ring.
  *     mytag_back_ring_t  - The 'back' half of the ring.
@@ -153,15 +153,15 @@ typedef struct __name##_back_ring __name##_back_ring_t
 
 /*
  * Macros for manipulating rings.
- * 
- * FRONT_RING_whatever works on the "front end" of a ring: here 
+ *
+ * FRONT_RING_whatever works on the "front end" of a ring: here
  * requests are pushed on to the ring and responses taken off it.
- * 
- * BACK_RING_whatever works on the "back end" of a ring: here 
+ *
+ * BACK_RING_whatever works on the "back end" of a ring: here
  * requests are taken off the ring and responses put on.
- * 
- * N.B. these macros do NO INTERLOCKS OR FLOW CONTROL. 
- * This is OK in 1-for-1 request-response situations where the 
+ *
+ * N.B. these macros do NO INTERLOCKS OR FLOW CONTROL.
+ * This is OK in 1-for-1 request-response situations where the
  * requestor (front end) never has more than RING_SIZE()-1
  * outstanding requests.
  */
@@ -263,26 +263,26 @@ typedef struct __name##_back_ring __name##_back_ring_t
 
 /*
  * Notification hold-off (req_event and rsp_event):
- * 
+ *
  * When queueing requests or responses on a shared ring, it may not always be
  * necessary to notify the remote end. For example, if requests are in flight
  * in a backend, the front may be able to queue further requests without
  * notifying the back (if the back checks for new requests when it queues
  * responses).
- * 
+ *
  * When enqueuing requests or responses:
- * 
+ *
  *  Use RING_PUSH_{REQUESTS,RESPONSES}_AND_CHECK_NOTIFY(). The second argument
  *  is a boolean return value. True indicates that the receiver requires an
  *  asynchronous notification.
- * 
+ *
  * After dequeuing requests or responses (before sleeping the connection):
- * 
+ *
  *  Use RING_FINAL_CHECK_FOR_REQUESTS() or RING_FINAL_CHECK_FOR_RESPONSES().
  *  The second argument is a boolean return value. True indicates that there
  *  are pending messages on the ring (i.e., the connection should not be put
  *  to sleep).
- * 
+ *
  *  These macros will set the req_event/rsp_event field to trigger a
  *  notification on the very next message that is enqueued. If you want to
  *  create batches of work (i.e., only receive a notification after several
diff --git a/include/qemu/log.h b/include/qemu/log.h
index f4724f7330..1a4e066160 100644
--- a/include/qemu/log.h
+++ b/include/qemu/log.h
@@ -14,7 +14,7 @@ typedef struct QemuLogFile {
 extern QemuLogFile *qemu_logfile;
 
 
-/* 
+/*
  * The new API:
  *
  */
diff --git a/include/qom/object.h b/include/qom/object.h
index 94a61ccc3f..380007b133 100644
--- a/include/qom/object.h
+++ b/include/qom/object.h
@@ -1443,12 +1443,12 @@ char *object_get_canonical_path(const Object *obj);
  *   ambiguous match
  *
  * There are two types of supported paths--absolute paths and partial paths.
- * 
+ *
  * Absolute paths are derived from the root object and can follow child<> or
  * link<> properties.  Since they can follow link<> properties, they can be
  * arbitrarily long.  Absolute paths look like absolute filenames and are
  * prefixed with a leading slash.
- * 
+ *
  * Partial paths look like relative filenames.  They do not begin with a
  * prefix.  The matching rules for partial paths are subtle but designed to make
  * specifying objects easy.  At each level of the composition tree, the partial
diff --git a/linux-user/cris/cpu_loop.c b/linux-user/cris/cpu_loop.c
index 334edddd1e..25d0861df9 100644
--- a/linux-user/cris/cpu_loop.c
+++ b/linux-user/cris/cpu_loop.c
@@ -27,7 +27,7 @@ void cpu_loop(CPUCRISState *env)
     CPUState *cs = env_cpu(env);
     int trapnr, ret;
     target_siginfo_t info;
-    
+
     while (1) {
         cpu_exec_start(cs);
         trapnr = cpu_exec(cs);
@@ -49,13 +49,13 @@ void cpu_loop(CPUCRISState *env)
           /* just indicate that signals should be handled asap */
           break;
         case EXCP_BREAK:
-            ret = do_syscall(env, 
-                             env->regs[9], 
-                             env->regs[10], 
-                             env->regs[11], 
-                             env->regs[12], 
-                             env->regs[13], 
-                             env->pregs[7], 
+            ret = do_syscall(env,
+                             env->regs[9],
+                             env->regs[10],
+                             env->regs[11],
+                             env->regs[12],
+                             env->regs[13],
+                             env->pregs[7],
                              env->pregs[11],
                              0, 0);
             if (ret == -TARGET_ERESTARTSYS) {
diff --git a/linux-user/microblaze/cpu_loop.c b/linux-user/microblaze/cpu_loop.c
index 3e0a7f730b..990dda26c3 100644
--- a/linux-user/microblaze/cpu_loop.c
+++ b/linux-user/microblaze/cpu_loop.c
@@ -27,7 +27,7 @@ void cpu_loop(CPUMBState *env)
     CPUState *cs = env_cpu(env);
     int trapnr, ret;
     target_siginfo_t info;
-    
+
     while (1) {
         cpu_exec_start(cs);
         trapnr = cpu_exec(cs);
@@ -52,13 +52,13 @@ void cpu_loop(CPUMBState *env)
             /* Return address is 4 bytes after the call.  */
             env->regs[14] += 4;
             env->sregs[SR_PC] = env->regs[14];
-            ret = do_syscall(env, 
-                             env->regs[12], 
-                             env->regs[5], 
-                             env->regs[6], 
-                             env->regs[7], 
-                             env->regs[8], 
-                             env->regs[9], 
+            ret = do_syscall(env,
+                             env->regs[12],
+                             env->regs[5],
+                             env->regs[6],
+                             env->regs[7],
+                             env->regs[8],
+                             env->regs[9],
                              env->regs[10],
                              0, 0);
             if (ret == -TARGET_ERESTARTSYS) {
diff --git a/linux-user/mmap.c b/linux-user/mmap.c
index 0019447892..e48056f6ad 100644
--- a/linux-user/mmap.c
+++ b/linux-user/mmap.c
@@ -401,12 +401,12 @@ abi_long target_mmap(abi_ulong start, abi_ulong len, int prot,
     }
 
     /* When mapping files into a memory area larger than the file, accesses
-       to pages beyond the file size will cause a SIGBUS. 
+       to pages beyond the file size will cause a SIGBUS.
 
        For example, if mmaping a file of 100 bytes on a host with 4K pages
        emulating a target with 8K pages, the target expects to be able to
        access the first 8K. But the host will trap us on any access beyond
-       4K.  
+       4K.
 
        When emulating a target with a larger page-size than the hosts, we
        may need to truncate file maps at EOF and add extra anonymous pages
@@ -421,7 +421,7 @@ abi_long target_mmap(abi_ulong start, abi_ulong len, int prot,
 
        /* Are we trying to create a map beyond EOF?.  */
        if (offset + len > sb.st_size) {
-           /* If so, truncate the file map at eof aligned with 
+           /* If so, truncate the file map at eof aligned with
               the hosts real pagesize. Additional anonymous maps
               will be created beyond EOF.  */
            len = REAL_HOST_PAGE_ALIGN(sb.st_size - offset);
@@ -496,7 +496,7 @@ abi_long target_mmap(abi_ulong start, abi_ulong len, int prot,
             }
             goto the_end;
         }
-        
+
         /* handle the start of the mapping */
         if (start > real_start) {
             if (real_end == real_start + qemu_host_page_size) {
diff --git a/linux-user/sparc/signal.c b/linux-user/sparc/signal.c
index d796f50f66..53efb61c70 100644
--- a/linux-user/sparc/signal.c
+++ b/linux-user/sparc/signal.c
@@ -104,7 +104,7 @@ struct target_rt_signal_frame {
     qemu_siginfo_fpu_t  fpu_state;
 };
 
-static inline abi_ulong get_sigframe(struct target_sigaction *sa, 
+static inline abi_ulong get_sigframe(struct target_sigaction *sa,
                                      CPUSPARCState *env,
                                      unsigned long framesize)
 {
@@ -506,7 +506,7 @@ void sparc64_get_context(CPUSPARCState *env)
     if (!lock_user_struct(VERIFY_WRITE, ucp, ucp_addr, 0)) {
         goto do_sigsegv;
     }
-    
+
     mcp = &ucp->tuc_mcontext;
     grp = &mcp->mc_gregs;
 
diff --git a/linux-user/syscall.c b/linux-user/syscall.c
index 97de9fb5c9..10d91a9781 100644
--- a/linux-user/syscall.c
+++ b/linux-user/syscall.c
@@ -1104,7 +1104,7 @@ static inline rlim_t target_to_host_rlim(abi_ulong target_rlim)
 {
     abi_ulong target_rlim_swap;
     rlim_t result;
-    
+
     target_rlim_swap = tswapal(target_rlim);
     if (target_rlim_swap == TARGET_RLIM_INFINITY)
         return RLIM_INFINITY;
@@ -1112,7 +1112,7 @@ static inline rlim_t target_to_host_rlim(abi_ulong target_rlim)
     result = target_rlim_swap;
     if (target_rlim_swap != (rlim_t)result)
         return RLIM_INFINITY;
-    
+
     return result;
 }
 #endif
@@ -1122,13 +1122,13 @@ static inline abi_ulong host_to_target_rlim(rlim_t rlim)
 {
     abi_ulong target_rlim_swap;
     abi_ulong result;
-    
+
     if (rlim == RLIM_INFINITY || rlim != (abi_long)rlim)
         target_rlim_swap = TARGET_RLIM_INFINITY;
     else
         target_rlim_swap = rlim;
     result = tswapal(target_rlim_swap);
-    
+
     return result;
 }
 #endif
@@ -1615,9 +1615,9 @@ static inline abi_long target_to_host_cmsg(struct msghdr *msgh,
     abi_ulong target_cmsg_addr;
     struct target_cmsghdr *target_cmsg, *target_cmsg_start;
     socklen_t space = 0;
-    
+
     msg_controllen = tswapal(target_msgh->msg_controllen);
-    if (msg_controllen < sizeof (struct target_cmsghdr)) 
+    if (msg_controllen < sizeof (struct target_cmsghdr))
         goto the_end;
     target_cmsg_addr = tswapal(target_msgh->msg_control);
     target_cmsg = lock_user(VERIFY_READ, target_cmsg_addr, msg_controllen, 1);
@@ -1703,7 +1703,7 @@ static inline abi_long host_to_target_cmsg(struct target_msghdr *target_msgh,
     socklen_t space = 0;
 
     msg_controllen = tswapal(target_msgh->msg_controllen);
-    if (msg_controllen < sizeof (struct target_cmsghdr)) 
+    if (msg_controllen < sizeof (struct target_cmsghdr))
         goto the_end;
     target_cmsg_addr = tswapal(target_msgh->msg_control);
     target_cmsg = lock_user(VERIFY_WRITE, target_cmsg_addr, msg_controllen, 0);
@@ -5750,7 +5750,7 @@ abi_long do_set_thread_area(CPUX86State *env, abi_ulong ptr)
     }
     unlock_user_struct(target_ldt_info, ptr, 1);
 
-    if (ldt_info.entry_number < TARGET_GDT_ENTRY_TLS_MIN || 
+    if (ldt_info.entry_number < TARGET_GDT_ENTRY_TLS_MIN ||
         ldt_info.entry_number > TARGET_GDT_ENTRY_TLS_MAX)
            return -TARGET_EINVAL;
     seg_32bit = ldt_info.flags & 1;
@@ -5828,7 +5828,7 @@ static abi_long do_get_thread_area(CPUX86State *env, abi_ulong ptr)
     lp = (uint32_t *)(gdt_table + idx);
     entry_1 = tswap32(lp[0]);
     entry_2 = tswap32(lp[1]);
-    
+
     read_exec_only = ((entry_2 >> 9) & 1) ^ 1;
     contents = (entry_2 >> 10) & 3;
     seg_not_present = ((entry_2 >> 15) & 1) ^ 1;
@@ -5844,8 +5844,8 @@ static abi_long do_get_thread_area(CPUX86State *env, abi_ulong ptr)
         (read_exec_only << 3) | (limit_in_pages << 4) |
         (seg_not_present << 5) | (useable << 6) | (lm << 7);
     limit = (entry_1 & 0xffff) | (entry_2  & 0xf0000);
-    base_addr = (entry_1 >> 16) | 
-        (entry_2 & 0xff000000) | 
+    base_addr = (entry_1 >> 16) |
+        (entry_2 & 0xff000000) |
         ((entry_2 & 0xff) << 16);
     target_ldt_info->base_addr = tswapal(base_addr);
     target_ldt_info->limit = tswap32(limit);
@@ -10873,7 +10873,7 @@ static abi_long do_syscall1(void *cpu_env, int num, abi_long arg1,
         return get_errno(fchown(arg1, low2highuid(arg2), low2highgid(arg3)));
 #if defined(TARGET_NR_fchownat)
     case TARGET_NR_fchownat:
-        if (!(p = lock_user_string(arg2))) 
+        if (!(p = lock_user_string(arg2)))
             return -TARGET_EFAULT;
         ret = get_errno(fchownat(arg1, p, low2highuid(arg3),
                                  low2highgid(arg4), arg5));
diff --git a/linux-user/syscall_defs.h b/linux-user/syscall_defs.h
index 152ec637cb..752ea5ee83 100644
--- a/linux-user/syscall_defs.h
+++ b/linux-user/syscall_defs.h
@@ -1923,7 +1923,7 @@ struct target_stat {
 	abi_long	st_blocks;	/* Number 512-byte blocks allocated. */
 
 	abi_ulong	target_st_atime;
-	abi_ulong 	target_st_atime_nsec; 
+	abi_ulong 	target_st_atime_nsec;
 	abi_ulong	target_st_mtime;
 	abi_ulong	target_st_mtime_nsec;
 	abi_ulong	target_st_ctime;
diff --git a/linux-user/uaccess.c b/linux-user/uaccess.c
index e215ecc2a6..91e2067933 100644
--- a/linux-user/uaccess.c
+++ b/linux-user/uaccess.c
@@ -55,7 +55,7 @@ abi_long target_strlen(abi_ulong guest_addr1)
         unlock_user(ptr, guest_addr, 0);
         guest_addr += len;
         /* we don't allow wrapping or integer overflow */
-        if (guest_addr == 0 || 
+        if (guest_addr == 0 ||
             (guest_addr - guest_addr1) > 0x7fffffff)
             return -TARGET_EFAULT;
         if (len != max_len)
diff --git a/os-posix.c b/os-posix.c
index 3cd52e1e70..fa6dfae168 100644
--- a/os-posix.c
+++ b/os-posix.c
@@ -316,7 +316,7 @@ void os_setup_post(void)
 
         close(fd);
 
-        do {        
+        do {
             len = write(daemon_pipe, &status, 1);
         } while (len < 0 && errno == EINTR);
         if (len != 1) {
diff --git a/qapi/qapi-util.c b/qapi/qapi-util.c
index 29a6c98b53..48045c3ccc 100644
--- a/qapi/qapi-util.c
+++ b/qapi/qapi-util.c
@@ -4,7 +4,7 @@
  * Authors:
  *  Hu Tao       <hutao@cn.fujitsu.com>
  *  Peter Lieven <pl@kamp.de>
- * 
+ *
  * This work is licensed under the terms of the GNU LGPL, version 2.1 or later.
  * See the COPYING.LIB file in the top-level directory.
  *
diff --git a/qemu-img.c b/qemu-img.c
index bdb9f6aa46..72dfa096b1 100644
--- a/qemu-img.c
+++ b/qemu-img.c
@@ -248,7 +248,7 @@ static bool qemu_img_object_print_help(const char *type, QemuOpts *opts)
  * an odd number of ',' (or else a separating ',' following it gets
  * escaped), or be empty (or else a separating ',' preceding it can
  * escape a separating ',' following it).
- * 
+ *
  */
 static bool is_valid_option_list(const char *optarg)
 {
diff --git a/qemu-options.hx b/qemu-options.hx
index 196f468786..2f728fde47 100644
--- a/qemu-options.hx
+++ b/qemu-options.hx
@@ -192,15 +192,15 @@ DEF("numa", HAS_ARG, QEMU_OPTION_numa,
     QEMU_ARCH_ALL)
 SRST
 ``-numa node[,mem=size][,cpus=firstcpu[-lastcpu]][,nodeid=node][,initiator=initiator]``
-  \ 
+  \
 ``-numa node[,memdev=id][,cpus=firstcpu[-lastcpu]][,nodeid=node][,initiator=initiator]``
   \
 ``-numa dist,src=source,dst=destination,val=distance``
-  \ 
+  \
 ``-numa cpu,node-id=node[,socket-id=x][,core-id=y][,thread-id=z]``
-  \ 
+  \
 ``-numa hmat-lb,initiator=node,target=node,hierarchy=hierarchy,data-type=tpye[,latency=lat][,bandwidth=bw]``
-  \ 
+  \
 ``-numa hmat-cache,node-id=node,size=size,level=level[,associativity=str][,policy=str][,line=size]``
     Define a NUMA node and assign RAM and VCPUs to it. Set the NUMA
     distance from a source node to a destination node. Set the ACPI
@@ -395,7 +395,7 @@ DEF("global", HAS_ARG, QEMU_OPTION_global,
     QEMU_ARCH_ALL)
 SRST
 ``-global driver.prop=value``
-  \ 
+  \
 ``-global driver=driver,property=property,value=value``
     Set default value of driver's property prop to value, e.g.:
 
@@ -926,9 +926,9 @@ SRST
 ``-hda file``
   \
 ``-hdb file``
-  \ 
+  \
 ``-hdc file``
-  \ 
+  \
 ``-hdd file``
     Use file as hard disk 0, 1, 2 or 3 image (see
     :ref:`disk_005fimages`).
@@ -1416,7 +1416,7 @@ DEF("fsdev", HAS_ARG, QEMU_OPTION_fsdev,
 
 SRST
 ``-fsdev local,id=id,path=path,security_model=security_model [,writeout=writeout][,readonly][,fmode=fmode][,dmode=dmode] [,throttling.option=value[,throttling.option=value[,...]]]``
-  \ 
+  \
 ``-fsdev proxy,id=id,socket=socket[,writeout=writeout][,readonly]``
   \
 ``-fsdev proxy,id=id,sock_fd=sock_fd[,writeout=writeout][,readonly]``
@@ -1537,9 +1537,9 @@ DEF("virtfs", HAS_ARG, QEMU_OPTION_virtfs,
 
 SRST
 ``-virtfs local,path=path,mount_tag=mount_tag ,security_model=security_model[,writeout=writeout][,readonly] [,fmode=fmode][,dmode=dmode][,multidevs=multidevs]``
-  \ 
+  \
 ``-virtfs proxy,socket=socket,mount_tag=mount_tag [,writeout=writeout][,readonly]``
-  \ 
+  \
 ``-virtfs proxy,sock_fd=sock_fd,mount_tag=mount_tag [,writeout=writeout][,readonly]``
   \
 ``-virtfs synth,mount_tag=mount_tag``
@@ -3674,7 +3674,7 @@ DEF("overcommit", HAS_ARG, QEMU_OPTION_overcommit,
     QEMU_ARCH_ALL)
 SRST
 ``-overcommit mem-lock=on|off``
-  \ 
+  \
 ``-overcommit cpu-pm=on|off``
     Run qemu with hints about host resource overcommit. The default is
     to assume that host overcommits all resources.
@@ -4045,7 +4045,7 @@ DEF("incoming", HAS_ARG, QEMU_OPTION_incoming, \
     QEMU_ARCH_ALL)
 SRST
 ``-incoming tcp:[host]:port[,to=maxport][,ipv4][,ipv6]``
-  \ 
+  \
 ``-incoming rdma:host:port[,ipv4][,ipv6]``
     Prepare for incoming migration, listen on a given tcp port.
 
@@ -4753,7 +4753,7 @@ SRST
                [...]
 
     ``-object secret,id=id,data=string,format=raw|base64[,keyid=secretid,iv=string]``
-      \ 
+      \
     ``-object secret,id=id,file=filename,format=raw|base64[,keyid=secretid,iv=string]``
         Defines a secret to store a password, encryption key, or some
         other sensitive data. The sensitive data can either be passed
diff --git a/qom/object.c b/qom/object.c
index 6ece96bc2b..30630d789f 100644
--- a/qom/object.c
+++ b/qom/object.c
@@ -1020,7 +1020,7 @@ static void object_class_foreach_tramp(gpointer key, gpointer value,
         return;
     }
 
-    if (data->implements_type && 
+    if (data->implements_type &&
         !object_class_dynamic_cast(k, data->implements_type)) {
         return;
     }
diff --git a/target/cris/translate.c b/target/cris/translate.c
index aaa46b5bca..df979594c3 100644
--- a/target/cris/translate.c
+++ b/target/cris/translate.c
@@ -369,7 +369,7 @@ static inline void t_gen_addx_carry(DisasContext *dc, TCGv d)
     if (dc->flagx_known) {
         if (dc->flags_x) {
             TCGv c;
-            
+
             c = tcg_temp_new();
             t_gen_mov_TN_preg(c, PR_CCS);
             /* C flag is already at bit 0.  */
@@ -402,7 +402,7 @@ static inline void t_gen_subx_carry(DisasContext *dc, TCGv d)
     if (dc->flagx_known) {
         if (dc->flags_x) {
             TCGv c;
-            
+
             c = tcg_temp_new();
             t_gen_mov_TN_preg(c, PR_CCS);
             /* C flag is already at bit 0.  */
@@ -688,7 +688,7 @@ static inline void cris_update_cc_x(DisasContext *dc)
 }
 
 /* Update cc prior to executing ALU op. Needs source operands untouched.  */
-static void cris_pre_alu_update_cc(DisasContext *dc, int op, 
+static void cris_pre_alu_update_cc(DisasContext *dc, int op,
                    TCGv dst, TCGv src, int size)
 {
     if (dc->update_cc) {
@@ -718,7 +718,7 @@ static inline void cris_update_result(DisasContext *dc, TCGv res)
 }
 
 /* Returns one if the write back stage should execute.  */
-static void cris_alu_op_exec(DisasContext *dc, int op, 
+static void cris_alu_op_exec(DisasContext *dc, int op,
                    TCGv dst, TCGv a, TCGv b, int size)
 {
     /* Emit the ALU insns.  */
@@ -1068,7 +1068,7 @@ static void cris_store_direct_jmp(DisasContext *dc)
     }
 }
 
-static void cris_prepare_cc_branch (DisasContext *dc, 
+static void cris_prepare_cc_branch (DisasContext *dc,
                     int offset, int cond)
 {
     /* This helps us re-schedule the micro-code to insns in delay-slots
@@ -1108,7 +1108,7 @@ static void gen_load64(DisasContext *dc, TCGv_i64 dst, TCGv addr)
     tcg_gen_qemu_ld_i64(dst, addr, mem_index, MO_TEQ);
 }
 
-static void gen_load(DisasContext *dc, TCGv dst, TCGv addr, 
+static void gen_load(DisasContext *dc, TCGv dst, TCGv addr,
              unsigned int size, int sign)
 {
     int mem_index = cpu_mmu_index(&dc->cpu->env, false);
@@ -3047,27 +3047,27 @@ static unsigned int crisv32_decoder(CPUCRISState *env, DisasContext *dc)
  * to give SW a hint that the exception actually hit on the dslot.
  *
  * CRIS expects all PC addresses to be 16-bit aligned. The lsb is ignored by
- * the core and any jmp to an odd addresses will mask off that lsb. It is 
+ * the core and any jmp to an odd addresses will mask off that lsb. It is
  * simply there to let sw know there was an exception on a dslot.
  *
  * When the software returns from an exception, the branch will re-execute.
  * On QEMU care needs to be taken when a branch+delayslot sequence is broken
  * and the branch and delayslot don't share pages.
  *
- * The TB contaning the branch insn will set up env->btarget and evaluate 
- * env->btaken. When the translation loop exits we will note that the branch 
+ * The TB contaning the branch insn will set up env->btarget and evaluate
+ * env->btaken. When the translation loop exits we will note that the branch
  * sequence is broken and let env->dslot be the size of the branch insn (those
  * vary in length).
  *
  * The TB contaning the delayslot will have the PC of its real insn (i.e no lsb
- * set). It will also expect to have env->dslot setup with the size of the 
- * delay slot so that env->pc - env->dslot point to the branch insn. This TB 
- * will execute the dslot and take the branch, either to btarget or just one 
+ * set). It will also expect to have env->dslot setup with the size of the
+ * delay slot so that env->pc - env->dslot point to the branch insn. This TB
+ * will execute the dslot and take the branch, either to btarget or just one
  * insn ahead.
  *
- * When exceptions occur, we check for env->dslot in do_interrupt to detect 
+ * When exceptions occur, we check for env->dslot in do_interrupt to detect
  * broken branch sequences and setup $erp accordingly (i.e let it point to the
- * branch and set lsb). Then env->dslot gets cleared so that the exception 
+ * branch and set lsb). Then env->dslot gets cleared so that the exception
  * handler can enter. When returning from exceptions (jump $erp) the lsb gets
  * masked off and we will reexecute the branch insn.
  *
diff --git a/target/cris/translate_v10.inc.c b/target/cris/translate_v10.inc.c
index ae34a0d1a3..ad4e213847 100644
--- a/target/cris/translate_v10.inc.c
+++ b/target/cris/translate_v10.inc.c
@@ -299,7 +299,7 @@ static unsigned int dec10_quick_imm(DisasContext *dc)
 
             op = CC_OP_LSL;
             if (imm & (1 << 5)) {
-                op = CC_OP_LSR; 
+                op = CC_OP_LSR;
             }
             imm &= 0x1f;
             cris_cc_mask(dc, CC_MASK_NZVC);
@@ -335,7 +335,7 @@ static unsigned int dec10_quick_imm(DisasContext *dc)
             LOG_DIS("b%s %d\n", cc_name(dc->cond), imm);
 
             cris_cc_mask(dc, 0);
-            cris_prepare_cc_branch(dc, imm, dc->cond); 
+            cris_prepare_cc_branch(dc, imm, dc->cond);
             break;
 
         default:
@@ -487,7 +487,7 @@ static void dec10_reg_mov_pr(DisasContext *dc)
         return;
     }
     if (dc->dst == PR_CCS) {
-        cris_evaluate_flags(dc); 
+        cris_evaluate_flags(dc);
     }
     cris_alu(dc, CC_OP_MOVE, cpu_R[dc->src],
                  cpu_R[dc->src], cpu_PR[dc->dst], preg_sizes_v10[dc->dst]);
diff --git a/target/i386/hvf/hvf.c b/target/i386/hvf/hvf.c
index be016b951a..d3f836a0f4 100644
--- a/target/i386/hvf/hvf.c
+++ b/target/i386/hvf/hvf.c
@@ -967,13 +967,13 @@ static int hvf_accel_init(MachineState *ms)
     assert_hvf_ok(ret);
 
     s = g_new0(HVFState, 1);
- 
+
     s->num_slots = 32;
     for (x = 0; x < s->num_slots; ++x) {
         s->slots[x].size = 0;
         s->slots[x].slot_id = x;
     }
-  
+
     hvf_state = s;
     cpu_interrupt_handler = hvf_handle_interrupt;
     memory_listener_register(&hvf_memory_listener, &address_space_memory);
diff --git a/target/i386/hvf/x86.c b/target/i386/hvf/x86.c
index fdb11c8db9..6fd6e541d8 100644
--- a/target/i386/hvf/x86.c
+++ b/target/i386/hvf/x86.c
@@ -83,7 +83,7 @@ bool x86_write_segment_descriptor(struct CPUState *cpu,
 {
     target_ulong base;
     uint32_t limit;
-    
+
     if (GDT_SEL == sel.ti) {
         base  = rvmcs(cpu->hvf_fd, VMCS_GUEST_GDTR_BASE);
         limit = rvmcs(cpu->hvf_fd, VMCS_GUEST_GDTR_LIMIT);
@@ -91,7 +91,7 @@ bool x86_write_segment_descriptor(struct CPUState *cpu,
         base  = rvmcs(cpu->hvf_fd, VMCS_GUEST_LDTR_BASE);
         limit = rvmcs(cpu->hvf_fd, VMCS_GUEST_LDTR_LIMIT);
     }
-    
+
     if (sel.index * 8 >= limit) {
         printf("%s: gdt limit\n", __func__);
         return false;
diff --git a/target/i386/hvf/x86_decode.c b/target/i386/hvf/x86_decode.c
index 34c5e3006c..8c576febd2 100644
--- a/target/i386/hvf/x86_decode.c
+++ b/target/i386/hvf/x86_decode.c
@@ -63,7 +63,7 @@ static inline uint64_t decode_bytes(CPUX86State *env, struct x86_decode *decode,
                                     int size)
 {
     target_ulong val = 0;
-    
+
     switch (size) {
     case 1:
     case 2:
@@ -77,7 +77,7 @@ static inline uint64_t decode_bytes(CPUX86State *env, struct x86_decode *decode,
     target_ulong va  = linear_rip(env_cpu(env), env->eip) + decode->len;
     vmx_read_mem(env_cpu(env), &val, va, size);
     decode->len += size;
-    
+
     return val;
 }
 
@@ -210,7 +210,7 @@ static void decode_imm_0(CPUX86State *env, struct x86_decode *decode,
 static void decode_pushseg(CPUX86State *env, struct x86_decode *decode)
 {
     uint8_t op = (decode->opcode_len > 1) ? decode->opcode[1] : decode->opcode[0];
-    
+
     decode->op[0].type = X86_VAR_REG;
     switch (op) {
     case 0xe:
@@ -237,7 +237,7 @@ static void decode_pushseg(CPUX86State *env, struct x86_decode *decode)
 static void decode_popseg(CPUX86State *env, struct x86_decode *decode)
 {
     uint8_t op = (decode->opcode_len > 1) ? decode->opcode[1] : decode->opcode[0];
-    
+
     decode->op[0].type = X86_VAR_REG;
     switch (op) {
     case 0xf:
@@ -461,14 +461,14 @@ struct decode_x87_tbl _decode_tbl3[256];
 static void decode_x87_ins(CPUX86State *env, struct x86_decode *decode)
 {
     struct decode_x87_tbl *decoder;
-    
+
     decode->is_fpu = true;
     int mode = decode->modrm.mod == 3 ? 1 : 0;
     int index = ((decode->opcode[0] & 0xf) << 4) | (mode << 3) |
                  decode->modrm.reg;
 
     decoder = &_decode_tbl3[index];
-    
+
     decode->cmd = decoder->cmd;
     if (decoder->operand_size) {
         decode->operand_size = decoder->operand_size;
@@ -476,7 +476,7 @@ static void decode_x87_ins(CPUX86State *env, struct x86_decode *decode)
     decode->flags_mask = decoder->flags_mask;
     decode->fpop_stack = decoder->pop;
     decode->frev = decoder->rev;
-    
+
     if (decoder->decode_op1) {
         decoder->decode_op1(env, decode, &decode->op[0]);
     }
@@ -2002,7 +2002,7 @@ static inline void decode_displacement(CPUX86State *env, struct x86_decode *deco
     int addressing_size = decode->addressing_size;
     int mod = decode->modrm.mod;
     int rm = decode->modrm.rm;
-    
+
     decode->displacement_size = 0;
     switch (addressing_size) {
     case 2:
@@ -2115,7 +2115,7 @@ uint32_t decode_instruction(CPUX86State *env, struct x86_decode *decode)
 void init_decoder()
 {
     int i;
-    
+
     for (i = 0; i < ARRAY_SIZE(_decode_tbl1); i++) {
         memcpy(&_decode_tbl1[i], &invl_inst, sizeof(invl_inst));
     }
@@ -2124,7 +2124,7 @@ void init_decoder()
     }
     for (i = 0; i < ARRAY_SIZE(_decode_tbl3); i++) {
         memcpy(&_decode_tbl3[i], &invl_inst_x87, sizeof(invl_inst_x87));
-    
+
     }
     for (i = 0; i < ARRAY_SIZE(_1op_inst); i++) {
         _decode_tbl1[_1op_inst[i].opcode] = _1op_inst[i];
diff --git a/target/i386/hvf/x86_decode.h b/target/i386/hvf/x86_decode.h
index ef7960113f..c7879c9ea7 100644
--- a/target/i386/hvf/x86_decode.h
+++ b/target/i386/hvf/x86_decode.h
@@ -43,7 +43,7 @@ typedef enum x86_prefix {
 
 enum x86_decode_cmd {
     X86_DECODE_CMD_INVL = 0,
-    
+
     X86_DECODE_CMD_PUSH,
     X86_DECODE_CMD_PUSH_SEG,
     X86_DECODE_CMD_POP,
@@ -174,7 +174,7 @@ enum x86_decode_cmd {
     X86_DECODE_CMD_CMPXCHG8B,
     X86_DECODE_CMD_CMPXCHG,
     X86_DECODE_CMD_POPCNT,
-    
+
     X86_DECODE_CMD_FNINIT,
     X86_DECODE_CMD_FLD,
     X86_DECODE_CMD_FLDxx,
diff --git a/target/i386/hvf/x86_descr.c b/target/i386/hvf/x86_descr.c
index 8c05c34f33..fd6a63754d 100644
--- a/target/i386/hvf/x86_descr.c
+++ b/target/i386/hvf/x86_descr.c
@@ -112,7 +112,7 @@ void vmx_segment_to_x86_descriptor(struct CPUState *cpu, struct vmx_segment *vmx
 {
     x86_set_segment_limit(desc, vmx_desc->limit);
     x86_set_segment_base(desc, vmx_desc->base);
-    
+
     desc->type = vmx_desc->ar & 15;
     desc->s = (vmx_desc->ar >> 4) & 1;
     desc->dpl = (vmx_desc->ar >> 5) & 3;
diff --git a/target/i386/hvf/x86_emu.c b/target/i386/hvf/x86_emu.c
index d3e289ed87..edc7f74903 100644
--- a/target/i386/hvf/x86_emu.c
+++ b/target/i386/hvf/x86_emu.c
@@ -131,7 +131,7 @@ void write_reg(CPUX86State *env, int reg, target_ulong val, int size)
 target_ulong read_val_from_reg(target_ulong reg_ptr, int size)
 {
     target_ulong val;
-    
+
     switch (size) {
     case 1:
         val = *(uint8_t *)reg_ptr;
diff --git a/target/i386/hvf/x86_mmu.c b/target/i386/hvf/x86_mmu.c
index 65d4603dbf..168c47fa34 100644
--- a/target/i386/hvf/x86_mmu.c
+++ b/target/i386/hvf/x86_mmu.c
@@ -143,7 +143,7 @@ static bool test_pt_entry(struct CPUState *cpu, struct gpt_translation *pt,
     if (pae && pt->exec_access && !pte_exec_access(pte)) {
         return false;
     }
-    
+
 exit:
     /* TODO: check reserved bits */
     return true;
@@ -175,7 +175,7 @@ static bool walk_gpt(struct CPUState *cpu, target_ulong addr, int err_code,
     bool is_large = false;
     target_ulong cr3 = rvmcs(cpu->hvf_fd, VMCS_GUEST_CR3);
     uint64_t page_mask = pae ? PAE_PTE_PAGE_MASK : LEGACY_PTE_PAGE_MASK;
-    
+
     memset(pt, 0, sizeof(*pt));
     top_level = gpt_top_level(cpu, pae);
 
@@ -184,7 +184,7 @@ static bool walk_gpt(struct CPUState *cpu, target_ulong addr, int err_code,
     pt->user_access = (err_code & MMU_PAGE_US);
     pt->write_access = (err_code & MMU_PAGE_WT);
     pt->exec_access = (err_code & MMU_PAGE_NX);
-    
+
     for (level = top_level; level > 0; level--) {
         get_pt_entry(cpu, pt, level, pae);
 
diff --git a/target/i386/hvf/x86_task.c b/target/i386/hvf/x86_task.c
index 6f04478b3a..9748220381 100644
--- a/target/i386/hvf/x86_task.c
+++ b/target/i386/hvf/x86_task.c
@@ -1,7 +1,7 @@
 // This software is licensed under the terms of the GNU General Public
 // License version 2, as published by the Free Software Foundation, and
 // may be copied, distributed, and modified under those terms.
-// 
+//
 // This program is distributed in the hope that it will be useful,
 // but WITHOUT ANY WARRANTY; without even the implied warranty of
 // MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
diff --git a/target/i386/hvf/x86hvf.c b/target/i386/hvf/x86hvf.c
index 5cbcb32ab6..fd33ab4efc 100644
--- a/target/i386/hvf/x86hvf.c
+++ b/target/i386/hvf/x86hvf.c
@@ -88,7 +88,7 @@ void hvf_put_segments(CPUState *cpu_state)
 {
     CPUX86State *env = &X86_CPU(cpu_state)->env;
     struct vmx_segment seg;
-    
+
     wvmcs(cpu_state->hvf_fd, VMCS_GUEST_IDTR_LIMIT, env->idt.limit);
     wvmcs(cpu_state->hvf_fd, VMCS_GUEST_IDTR_BASE, env->idt.base);
 
@@ -105,7 +105,7 @@ void hvf_put_segments(CPUState *cpu_state)
 
     hvf_set_segment(cpu_state, &seg, &env->segs[R_CS], false);
     vmx_write_segment_descriptor(cpu_state, &seg, R_CS);
-    
+
     hvf_set_segment(cpu_state, &seg, &env->segs[R_DS], false);
     vmx_write_segment_descriptor(cpu_state, &seg, R_DS);
 
@@ -126,10 +126,10 @@ void hvf_put_segments(CPUState *cpu_state)
 
     hvf_set_segment(cpu_state, &seg, &env->ldt, false);
     vmx_write_segment_descriptor(cpu_state, &seg, R_LDTR);
-    
+
     hv_vcpu_flush(cpu_state->hvf_fd);
 }
-    
+
 void hvf_put_msrs(CPUState *cpu_state)
 {
     CPUX86State *env = &X86_CPU(cpu_state)->env;
@@ -178,7 +178,7 @@ void hvf_get_segments(CPUState *cpu_state)
 
     vmx_read_segment_descriptor(cpu_state, &seg, R_CS);
     hvf_get_segment(&env->segs[R_CS], &seg);
-    
+
     vmx_read_segment_descriptor(cpu_state, &seg, R_DS);
     hvf_get_segment(&env->segs[R_DS], &seg);
 
@@ -209,7 +209,7 @@ void hvf_get_segments(CPUState *cpu_state)
     env->cr[2] = 0;
     env->cr[3] = rvmcs(cpu_state->hvf_fd, VMCS_GUEST_CR3);
     env->cr[4] = rvmcs(cpu_state->hvf_fd, VMCS_GUEST_CR4);
-    
+
     env->efer = rvmcs(cpu_state->hvf_fd, VMCS_GUEST_IA32_EFER);
 }
 
@@ -217,10 +217,10 @@ void hvf_get_msrs(CPUState *cpu_state)
 {
     CPUX86State *env = &X86_CPU(cpu_state)->env;
     uint64_t tmp;
-    
+
     hv_vcpu_read_msr(cpu_state->hvf_fd, MSR_IA32_SYSENTER_CS, &tmp);
     env->sysenter_cs = tmp;
-    
+
     hv_vcpu_read_msr(cpu_state->hvf_fd, MSR_IA32_SYSENTER_ESP, &tmp);
     env->sysenter_esp = tmp;
 
@@ -237,7 +237,7 @@ void hvf_get_msrs(CPUState *cpu_state)
 #endif
 
     hv_vcpu_read_msr(cpu_state->hvf_fd, MSR_IA32_APICBASE, &tmp);
-    
+
     env->tsc = rdtscp() + rvmcs(cpu_state->hvf_fd, VMCS_TSC_OFFSET);
 }
 
@@ -264,15 +264,15 @@ int hvf_put_registers(CPUState *cpu_state)
     wreg(cpu_state->hvf_fd, HV_X86_R15, env->regs[15]);
     wreg(cpu_state->hvf_fd, HV_X86_RFLAGS, env->eflags);
     wreg(cpu_state->hvf_fd, HV_X86_RIP, env->eip);
-   
+
     wreg(cpu_state->hvf_fd, HV_X86_XCR0, env->xcr0);
-    
+
     hvf_put_xsave(cpu_state);
-    
+
     hvf_put_segments(cpu_state);
-    
+
     hvf_put_msrs(cpu_state);
-    
+
     wreg(cpu_state->hvf_fd, HV_X86_DR0, env->dr[0]);
     wreg(cpu_state->hvf_fd, HV_X86_DR1, env->dr[1]);
     wreg(cpu_state->hvf_fd, HV_X86_DR2, env->dr[2]);
@@ -281,7 +281,7 @@ int hvf_put_registers(CPUState *cpu_state)
     wreg(cpu_state->hvf_fd, HV_X86_DR5, env->dr[5]);
     wreg(cpu_state->hvf_fd, HV_X86_DR6, env->dr[6]);
     wreg(cpu_state->hvf_fd, HV_X86_DR7, env->dr[7]);
-    
+
     return 0;
 }
 
@@ -306,16 +306,16 @@ int hvf_get_registers(CPUState *cpu_state)
     env->regs[13] = rreg(cpu_state->hvf_fd, HV_X86_R13);
     env->regs[14] = rreg(cpu_state->hvf_fd, HV_X86_R14);
     env->regs[15] = rreg(cpu_state->hvf_fd, HV_X86_R15);
-    
+
     env->eflags = rreg(cpu_state->hvf_fd, HV_X86_RFLAGS);
     env->eip = rreg(cpu_state->hvf_fd, HV_X86_RIP);
-   
+
     hvf_get_xsave(cpu_state);
     env->xcr0 = rreg(cpu_state->hvf_fd, HV_X86_XCR0);
-    
+
     hvf_get_segments(cpu_state);
     hvf_get_msrs(cpu_state);
-    
+
     env->dr[0] = rreg(cpu_state->hvf_fd, HV_X86_DR0);
     env->dr[1] = rreg(cpu_state->hvf_fd, HV_X86_DR1);
     env->dr[2] = rreg(cpu_state->hvf_fd, HV_X86_DR2);
@@ -324,7 +324,7 @@ int hvf_get_registers(CPUState *cpu_state)
     env->dr[5] = rreg(cpu_state->hvf_fd, HV_X86_DR5);
     env->dr[6] = rreg(cpu_state->hvf_fd, HV_X86_DR6);
     env->dr[7] = rreg(cpu_state->hvf_fd, HV_X86_DR7);
-    
+
     x86_update_hflags(env);
     return 0;
 }
@@ -388,7 +388,7 @@ bool hvf_inject_interrupts(CPUState *cpu_state)
                 intr_type == VMCS_INTR_T_SWEXCEPTION) {
                 wvmcs(cpu_state->hvf_fd, VMCS_ENTRY_INST_LENGTH, env->ins_len);
             }
-            
+
             if (env->has_error_code) {
                 wvmcs(cpu_state->hvf_fd, VMCS_ENTRY_EXCEPTION_ERROR,
                       env->error_code);
diff --git a/target/i386/translate.c b/target/i386/translate.c
index 5e5dbb41b0..d824cfcfe7 100644
--- a/target/i386/translate.c
+++ b/target/i386/translate.c
@@ -1623,7 +1623,7 @@ static void gen_rot_rm_T1(DisasContext *s, MemOp ot, int op1, int is_right)
     tcg_temp_free_i32(t0);
     tcg_temp_free_i32(t1);
 
-    /* The CC_OP value is no longer predictable.  */ 
+    /* The CC_OP value is no longer predictable.  */
     set_cc_op(s, CC_OP_DYNAMIC);
 }
 
@@ -1716,7 +1716,7 @@ static void gen_rotc_rm_T1(DisasContext *s, MemOp ot, int op1,
         gen_op_ld_v(s, ot, s->T0, s->A0);
     else
         gen_op_mov_v_reg(s, ot, s->T0, op1);
-    
+
     if (is_right) {
         switch (ot) {
         case MO_8:
@@ -5353,7 +5353,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
                 set_cc_op(s, CC_OP_EFLAGS);
                 break;
             }
-#endif        
+#endif
             if (!(s->cpuid_features & CPUID_CX8)) {
                 goto illegal_op;
             }
@@ -6398,7 +6398,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
     case 0x6d:
         ot = mo_b_d32(b, dflag);
         tcg_gen_ext16u_tl(s->T0, cpu_regs[R_EDX]);
-        gen_check_io(s, ot, pc_start - s->cs_base, 
+        gen_check_io(s, ot, pc_start - s->cs_base,
                      SVM_IOIO_TYPE_MASK | svm_is_rep(prefixes) | 4);
         if (prefixes & (PREFIX_REPZ | PREFIX_REPNZ)) {
             gen_repz_ins(s, ot, pc_start - s->cs_base, s->pc - s->cs_base);
diff --git a/target/microblaze/mmu.c b/target/microblaze/mmu.c
index 6763421ba2..5487696089 100644
--- a/target/microblaze/mmu.c
+++ b/target/microblaze/mmu.c
@@ -53,7 +53,7 @@ static void mmu_flush_idx(CPUMBState *env, unsigned int idx)
     }
 }
 
-static void mmu_change_pid(CPUMBState *env, unsigned int newpid) 
+static void mmu_change_pid(CPUMBState *env, unsigned int newpid)
 {
     struct microblaze_mmu *mmu = &env->mmu;
     unsigned int i;
diff --git a/target/microblaze/translate.c b/target/microblaze/translate.c
index f6ff2591c3..1925a93eb2 100644
--- a/target/microblaze/translate.c
+++ b/target/microblaze/translate.c
@@ -663,7 +663,7 @@ static void dec_div(DisasContext *dc)
 {
     unsigned int u;
 
-    u = dc->imm & 2; 
+    u = dc->imm & 2;
     LOG_DIS("div\n");
 
     if (trap_illegal(dc, !dc->cpu->cfg.use_div)) {
diff --git a/target/sh4/op_helper.c b/target/sh4/op_helper.c
index 14c3db0f48..fa4f5aee4f 100644
--- a/target/sh4/op_helper.c
+++ b/target/sh4/op_helper.c
@@ -133,7 +133,7 @@ void helper_discard_movcal_backup(CPUSH4State *env)
 	env->movcal_backup = current = next;
 	if (current == NULL)
 	    env->movcal_backup_tail = &(env->movcal_backup);
-    } 
+    }
 }
 
 void helper_ocbi(CPUSH4State *env, uint32_t address)
@@ -146,7 +146,7 @@ void helper_ocbi(CPUSH4State *env, uint32_t address)
 	{
 	    memory_content *next = (*current)->next;
             cpu_stl_data(env, a, (*current)->value);
-	    
+
 	    if (next == NULL)
 	    {
 		env->movcal_backup_tail = current;
diff --git a/target/xtensa/core-de212/core-isa.h b/target/xtensa/core-de212/core-isa.h
index 90ac329230..4528dd7942 100644
--- a/target/xtensa/core-de212/core-isa.h
+++ b/target/xtensa/core-de212/core-isa.h
@@ -1,4 +1,4 @@
-/* 
+/*
  * xtensa/config/core-isa.h -- HAL definitions that are dependent on Xtensa
  *				processor CORE configuration
  *
@@ -605,12 +605,12 @@
 /*----------------------------------------------------------------------
 				MPU
   ----------------------------------------------------------------------*/
-#define XCHAL_HAVE_MPU			0 
+#define XCHAL_HAVE_MPU			0
 #define XCHAL_MPU_ENTRIES		0
 
 #define XCHAL_MPU_ALIGN_REQ		1	/* MPU requires alignment of entries to background map */
 #define XCHAL_MPU_BACKGROUND_ENTRIES	0	/* number of entries in background map */
- 
+
 #define XCHAL_MPU_ALIGN_BITS		0
 #define XCHAL_MPU_ALIGN			0
 
diff --git a/target/xtensa/core-sample_controller/core-isa.h b/target/xtensa/core-sample_controller/core-isa.h
index d53dca8665..de5a5f3ba2 100644
--- a/target/xtensa/core-sample_controller/core-isa.h
+++ b/target/xtensa/core-sample_controller/core-isa.h
@@ -1,4 +1,4 @@
-/* 
+/*
  * xtensa/config/core-isa.h -- HAL definitions that are dependent on Xtensa
  *				processor CORE configuration
  *
@@ -626,13 +626,13 @@
 /*----------------------------------------------------------------------
 				MPU
   ----------------------------------------------------------------------*/
-#define XCHAL_HAVE_MPU			0 
+#define XCHAL_HAVE_MPU			0
 #define XCHAL_MPU_ENTRIES		0
 
 #define XCHAL_MPU_ALIGN_REQ		1	/* MPU requires alignment of entries to background map */
 #define XCHAL_MPU_BACKGROUND_ENTRIES	0	/* number of entries in bg map*/
 #define XCHAL_MPU_BG_CACHEADRDIS	0	/* default CACHEADRDIS for bg */
- 
+
 #define XCHAL_MPU_ALIGN_BITS		0
 #define XCHAL_MPU_ALIGN			0
 
diff --git a/target/xtensa/core-test_kc705_be/core-isa.h b/target/xtensa/core-test_kc705_be/core-isa.h
index 408fed871d..382e3f187d 100644
--- a/target/xtensa/core-test_kc705_be/core-isa.h
+++ b/target/xtensa/core-test_kc705_be/core-isa.h
@@ -1,4 +1,4 @@
-/* 
+/*
  * xtensa/config/core-isa.h -- HAL definitions that are dependent on Xtensa
  *				processor CORE configuration
  *
diff --git a/tcg/sparc/tcg-target.inc.c b/tcg/sparc/tcg-target.inc.c
index 65fddb310d..d856000c16 100644
--- a/tcg/sparc/tcg-target.inc.c
+++ b/tcg/sparc/tcg-target.inc.c
@@ -988,7 +988,7 @@ static void build_trampolines(TCGContext *s)
             /* Skip the oi argument.  */
             ra += 1;
         }
-                
+
         /* Set the retaddr operand.  */
         if (ra >= TCG_REG_O6) {
             tcg_out_st(s, TCG_TYPE_PTR, TCG_REG_O7, TCG_REG_CALL_STACK,
diff --git a/tcg/tcg.c b/tcg/tcg.c
index 1362bc6101..45d15fe837 100644
--- a/tcg/tcg.c
+++ b/tcg/tcg.c
@@ -872,7 +872,7 @@ void *tcg_malloc_internal(TCGContext *s, int size)
 {
     TCGPool *p;
     int pool_size;
-    
+
     if (size > TCG_POOL_CHUNK_SIZE) {
         /* big malloc: insert a new pool (XXX: could optimize) */
         p = g_malloc(sizeof(TCGPool) + size);
@@ -893,7 +893,7 @@ void *tcg_malloc_internal(TCGContext *s, int size)
                 p = g_malloc(sizeof(TCGPool) + pool_size);
                 p->size = pool_size;
                 p->next = NULL;
-                if (s->pool_current) 
+                if (s->pool_current)
                     s->pool_current->next = p;
                 else
                     s->pool_first = p;
@@ -3093,8 +3093,8 @@ static void dump_regs(TCGContext *s)
 
     for(i = 0; i < TCG_TARGET_NB_REGS; i++) {
         if (s->reg_to_temp[i] != NULL) {
-            printf("%s: %s\n", 
-                   tcg_target_reg_names[i], 
+            printf("%s: %s\n",
+                   tcg_target_reg_names[i],
                    tcg_get_arg_str_ptr(s, buf, sizeof(buf), s->reg_to_temp[i]));
         }
     }
@@ -3111,7 +3111,7 @@ static void check_regs(TCGContext *s)
         ts = s->reg_to_temp[reg];
         if (ts != NULL) {
             if (ts->val_type != TEMP_VAL_REG || ts->reg != reg) {
-                printf("Inconsistency for register %s:\n", 
+                printf("Inconsistency for register %s:\n",
                        tcg_target_reg_names[reg]);
                 goto fail;
             }
@@ -3648,14 +3648,14 @@ static void tcg_reg_alloc_op(TCGContext *s, const TCGOp *op)
     nb_iargs = def->nb_iargs;
 
     /* copy constants */
-    memcpy(new_args + nb_oargs + nb_iargs, 
+    memcpy(new_args + nb_oargs + nb_iargs,
            op->args + nb_oargs + nb_iargs,
            sizeof(TCGArg) * def->nb_cargs);
 
     i_allocated_regs = s->reserved_regs;
     o_allocated_regs = s->reserved_regs;
 
-    /* satisfy input constraints */ 
+    /* satisfy input constraints */
     for (k = 0; k < nb_iargs; k++) {
         TCGRegSet i_preferred_regs, o_preferred_regs;
 
@@ -3713,7 +3713,7 @@ static void tcg_reg_alloc_op(TCGContext *s, const TCGOp *op)
             /* nothing to do : the constraint is satisfied */
         } else {
         allocate_in_reg:
-            /* allocate a new register matching the constraint 
+            /* allocate a new register matching the constraint
                and move the temporary register into it */
             temp_load(s, ts, tcg_target_available_regs[ts->type],
                       i_allocated_regs, 0);
@@ -3733,7 +3733,7 @@ static void tcg_reg_alloc_op(TCGContext *s, const TCGOp *op)
         const_args[i] = 0;
         tcg_regset_set_reg(i_allocated_regs, reg);
     }
-    
+
     /* mark dead temporaries and free the associated registers */
     for (i = nb_oargs; i < nb_oargs + nb_iargs; i++) {
         if (IS_DEAD_ARG(i)) {
@@ -3745,7 +3745,7 @@ static void tcg_reg_alloc_op(TCGContext *s, const TCGOp *op)
         tcg_reg_alloc_bb_end(s, i_allocated_regs);
     } else {
         if (def->flags & TCG_OPF_CALL_CLOBBER) {
-            /* XXX: permit generic clobber register list ? */ 
+            /* XXX: permit generic clobber register list ? */
             for (i = 0; i < TCG_TARGET_NB_REGS; i++) {
                 if (tcg_regset_test_reg(tcg_target_call_clobber_regs, i)) {
                     tcg_reg_free(s, i, i_allocated_regs);
@@ -3757,7 +3757,7 @@ static void tcg_reg_alloc_op(TCGContext *s, const TCGOp *op)
                an exception. */
             sync_globals(s, i_allocated_regs);
         }
-        
+
         /* satisfy the output constraints */
         for(k = 0; k < nb_oargs; k++) {
             i = def->sorted_args[k];
@@ -3849,7 +3849,7 @@ static void tcg_reg_alloc_call(TCGContext *s, TCGOp *op)
 
     /* assign stack slots first */
     call_stack_size = (nb_iargs - nb_regs) * sizeof(tcg_target_long);
-    call_stack_size = (call_stack_size + TCG_TARGET_STACK_ALIGN - 1) & 
+    call_stack_size = (call_stack_size + TCG_TARGET_STACK_ALIGN - 1) &
         ~(TCG_TARGET_STACK_ALIGN - 1);
     allocate_args = (call_stack_size > TCG_STATIC_CALL_ARGS_SIZE);
     if (allocate_args) {
@@ -3874,7 +3874,7 @@ static void tcg_reg_alloc_call(TCGContext *s, TCGOp *op)
         stack_offset += sizeof(tcg_target_long);
 #endif
     }
-    
+
     /* assign input registers */
     allocated_regs = s->reserved_regs;
     for (i = 0; i < nb_regs; i++) {
@@ -3907,14 +3907,14 @@ static void tcg_reg_alloc_call(TCGContext *s, TCGOp *op)
             tcg_regset_set_reg(allocated_regs, reg);
         }
     }
-    
+
     /* mark dead temporaries and free the associated registers */
     for (i = nb_oargs; i < nb_iargs + nb_oargs; i++) {
         if (IS_DEAD_ARG(i)) {
             temp_dead(s, arg_temp(op->args[i]));
         }
     }
-    
+
     /* clobber call registers */
     for (i = 0; i < TCG_TARGET_NB_REGS; i++) {
         if (tcg_regset_test_reg(tcg_target_call_clobber_regs, i)) {
@@ -4317,7 +4317,7 @@ void tcg_dump_info(void)
                 (double)s->code_out_len / tb_div_count);
     qemu_printf("avg search data/TB  %0.1f\n",
                 (double)s->search_out_len / tb_div_count);
-    
+
     qemu_printf("cycles/op           %0.1f\n",
                 s->op_count ? (double)tot / s->op_count : 0);
     qemu_printf("cycles/in byte      %0.1f\n",
diff --git a/tests/tcg/multiarch/test-mmap.c b/tests/tcg/multiarch/test-mmap.c
index 11d0e777b1..0009e62855 100644
--- a/tests/tcg/multiarch/test-mmap.c
+++ b/tests/tcg/multiarch/test-mmap.c
@@ -17,7 +17,7 @@
  * but WITHOUT ANY WARRANTY; without even the implied warranty of
  * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
  * GNU General Public License for more details.
- * 
+ *
  * You should have received a copy of the GNU General Public License
  * along with this program; if not, see <http://www.gnu.org/licenses/>.
  */
@@ -63,15 +63,15 @@ void check_aligned_anonymous_unfixed_mmaps(void)
 		size_t len;
 
 		len = pagesize + (pagesize * i & 7);
-		p1 = mmap(NULL, len, PROT_READ, 
+		p1 = mmap(NULL, len, PROT_READ,
 			  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
-		p2 = mmap(NULL, len, PROT_READ, 
+		p2 = mmap(NULL, len, PROT_READ,
 			  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
-		p3 = mmap(NULL, len, PROT_READ, 
+		p3 = mmap(NULL, len, PROT_READ,
 			  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
-		p4 = mmap(NULL, len, PROT_READ, 
+		p4 = mmap(NULL, len, PROT_READ,
 			  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
-		p5 = mmap(NULL, len, PROT_READ, 
+		p5 = mmap(NULL, len, PROT_READ,
 			  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
 
 		/* Make sure we get pages aligned with the pagesize. The
@@ -118,7 +118,7 @@ void check_large_anonymous_unfixed_mmap(void)
 	fprintf(stdout, "%s", __func__);
 
 	len = 0x02000000;
-	p1 = mmap(NULL, len, PROT_READ, 
+	p1 = mmap(NULL, len, PROT_READ,
 		  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
 
 	/* Make sure we get pages aligned with the pagesize. The
@@ -145,14 +145,14 @@ void check_aligned_anonymous_unfixed_colliding_mmaps(void)
 	for (i = 0; i < 0x2fff; i++)
 	{
 		int nlen;
-		p1 = mmap(NULL, pagesize, PROT_READ, 
+		p1 = mmap(NULL, pagesize, PROT_READ,
 			  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
 		fail_unless (p1 != MAP_FAILED);
 		p = (uintptr_t) p1;
 		fail_unless ((p & pagemask) == 0);
 		memcpy (dummybuf, p1, pagesize);
 
-		p2 = mmap(NULL, pagesize, PROT_READ, 
+		p2 = mmap(NULL, pagesize, PROT_READ,
 			  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
 		fail_unless (p2 != MAP_FAILED);
 		p = (uintptr_t) p2;
@@ -162,12 +162,12 @@ void check_aligned_anonymous_unfixed_colliding_mmaps(void)
 
 		munmap (p1, pagesize);
 		nlen = pagesize * 8;
-		p3 = mmap(NULL, nlen, PROT_READ, 
+		p3 = mmap(NULL, nlen, PROT_READ,
 			  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
 		fail_unless (p3 != MAP_FAILED);
 
 		/* Check if the mmaped areas collide.  */
-		if (p3 < p2 
+		if (p3 < p2
 		    && (p3 + nlen) > p2)
 			fail_unless (0);
 
@@ -191,7 +191,7 @@ void check_aligned_anonymous_fixed_mmaps(void)
 	int i;
 
 	/* Find a suitable address to start with.  */
-	addr = mmap(NULL, pagesize * 40, PROT_READ | PROT_WRITE, 
+	addr = mmap(NULL, pagesize * 40, PROT_READ | PROT_WRITE,
 		    MAP_PRIVATE | MAP_ANONYMOUS,
 		    -1, 0);
 	fprintf(stdout, "%s addr=%p", __func__, addr);
@@ -200,10 +200,10 @@ void check_aligned_anonymous_fixed_mmaps(void)
 	for (i = 0; i < 40; i++)
 	{
 		/* Create submaps within our unfixed map.  */
-		p1 = mmap(addr, pagesize, PROT_READ, 
+		p1 = mmap(addr, pagesize, PROT_READ,
 			  MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED,
 			  -1, 0);
-		/* Make sure we get pages aligned with the pagesize. 
+		/* Make sure we get pages aligned with the pagesize.
 		   The target expects this.  */
 		p = (uintptr_t) p1;
 		fail_unless (p1 == addr);
@@ -231,10 +231,10 @@ void check_aligned_anonymous_fixed_mmaps_collide_with_host(void)
 	for (i = 0; i < 20; i++)
 	{
 		/* Create submaps within our unfixed map.  */
-		p1 = mmap(addr, pagesize, PROT_READ | PROT_WRITE, 
+		p1 = mmap(addr, pagesize, PROT_READ | PROT_WRITE,
 			  MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED,
 			  -1, 0);
-		/* Make sure we get pages aligned with the pagesize. 
+		/* Make sure we get pages aligned with the pagesize.
 		   The target expects this.  */
 		p = (uintptr_t) p1;
 		fail_unless (p1 == addr);
@@ -258,14 +258,14 @@ void check_file_unfixed_mmaps(void)
 		size_t len;
 
 		len = pagesize;
-		p1 = mmap(NULL, len, PROT_READ, 
-			  MAP_PRIVATE, 
+		p1 = mmap(NULL, len, PROT_READ,
+			  MAP_PRIVATE,
 			  test_fd, 0);
-		p2 = mmap(NULL, len, PROT_READ, 
-			  MAP_PRIVATE, 
+		p2 = mmap(NULL, len, PROT_READ,
+			  MAP_PRIVATE,
 			  test_fd, pagesize);
-		p3 = mmap(NULL, len, PROT_READ, 
-			  MAP_PRIVATE, 
+		p3 = mmap(NULL, len, PROT_READ,
+			  MAP_PRIVATE,
 			  test_fd, pagesize * 2);
 
 		fail_unless (p1 != MAP_FAILED);
@@ -307,9 +307,9 @@ void check_file_unfixed_eof_mmaps(void)
 	fprintf(stdout, "%s", __func__);
 	for (i = 0; i < 0x10; i++)
 	{
-		p1 = mmap(NULL, pagesize, PROT_READ, 
-			  MAP_PRIVATE, 
-			  test_fd, 
+		p1 = mmap(NULL, pagesize, PROT_READ,
+			  MAP_PRIVATE,
+			  test_fd,
 			  (test_fsize - sizeof *p1) & ~pagemask);
 
 		fail_unless (p1 != MAP_FAILED);
@@ -339,7 +339,7 @@ void check_file_fixed_eof_mmaps(void)
 	int i;
 
 	/* Find a suitable address to start with.  */
-	addr = mmap(NULL, pagesize * 44, PROT_READ, 
+	addr = mmap(NULL, pagesize * 44, PROT_READ,
 		    MAP_PRIVATE | MAP_ANONYMOUS,
 		    -1, 0);
 
@@ -349,9 +349,9 @@ void check_file_fixed_eof_mmaps(void)
 	for (i = 0; i < 0x10; i++)
 	{
 		/* Create submaps within our unfixed map.  */
-		p1 = mmap(addr, pagesize, PROT_READ, 
-			  MAP_PRIVATE | MAP_FIXED, 
-			  test_fd, 
+		p1 = mmap(addr, pagesize, PROT_READ,
+			  MAP_PRIVATE | MAP_FIXED,
+			  test_fd,
 			  (test_fsize - sizeof *p1) & ~pagemask);
 
 		fail_unless (p1 != MAP_FAILED);
@@ -381,7 +381,7 @@ void check_file_fixed_mmaps(void)
 	int i;
 
 	/* Find a suitable address to start with.  */
-	addr = mmap(NULL, pagesize * 40 * 4, PROT_READ, 
+	addr = mmap(NULL, pagesize * 40 * 4, PROT_READ,
 		    MAP_PRIVATE | MAP_ANONYMOUS,
 		    -1, 0);
 	fprintf(stdout, "%s addr=%p", __func__, (void *)addr);
@@ -389,20 +389,20 @@ void check_file_fixed_mmaps(void)
 
 	for (i = 0; i < 40; i++)
 	{
-		p1 = mmap(addr, pagesize, PROT_READ, 
+		p1 = mmap(addr, pagesize, PROT_READ,
 			  MAP_PRIVATE | MAP_FIXED,
 			  test_fd, 0);
-		p2 = mmap(addr + pagesize, pagesize, PROT_READ, 
+		p2 = mmap(addr + pagesize, pagesize, PROT_READ,
 			  MAP_PRIVATE | MAP_FIXED,
 			  test_fd, pagesize);
-		p3 = mmap(addr + pagesize * 2, pagesize, PROT_READ, 
+		p3 = mmap(addr + pagesize * 2, pagesize, PROT_READ,
 			  MAP_PRIVATE | MAP_FIXED,
 			  test_fd, pagesize * 2);
-		p4 = mmap(addr + pagesize * 3, pagesize, PROT_READ, 
+		p4 = mmap(addr + pagesize * 3, pagesize, PROT_READ,
 			  MAP_PRIVATE | MAP_FIXED,
 			  test_fd, pagesize * 3);
 
-		/* Make sure we get pages aligned with the pagesize. 
+		/* Make sure we get pages aligned with the pagesize.
 		   The target expects this.  */
 		fail_unless (p1 == (void *)addr);
 		fail_unless (p2 == (void *)addr + pagesize);
@@ -479,7 +479,7 @@ int main(int argc, char **argv)
         checked_write(test_fd, &i, sizeof i);
     }
 
-	/* Append a few extra writes to make the file end at non 
+	/* Append a few extra writes to make the file end at non
 	   page boundary.  */
     checked_write(test_fd, &i, sizeof i); i++;
     checked_write(test_fd, &i, sizeof i); i++;
diff --git a/ui/curses.c b/ui/curses.c
index a59b23a9cf..ba362eb927 100644
--- a/ui/curses.c
+++ b/ui/curses.c
@@ -1,8 +1,8 @@
 /*
  * QEMU curses/ncurses display driver
- * 
+ *
  * Copyright (c) 2005 Andrzej Zaborowski  <balrog@zabor.org>
- * 
+ *
  * Permission is hereby granted, free of charge, to any person obtaining a copy
  * of this software and associated documentation files (the "Software"), to deal
  * in the Software without restriction, including without limitation the rights
diff --git a/ui/curses_keys.h b/ui/curses_keys.h
index 71e04acdc7..8b62258756 100644
--- a/ui/curses_keys.h
+++ b/ui/curses_keys.h
@@ -1,8 +1,8 @@
 /*
  * Keycode and keysyms conversion tables for curses
- * 
+ *
  * Copyright (c) 2005 Andrzej Zaborowski  <balrog@zabor.org>
- * 
+ *
  * Permission is hereby granted, free of charge, to any person obtaining a copy
  * of this software and associated documentation files (the "Software"), to deal
  * in the Software without restriction, including without limitation the rights
diff --git a/util/cutils.c b/util/cutils.c
index 36ce712271..ce4f756bd9 100644
--- a/util/cutils.c
+++ b/util/cutils.c
@@ -142,7 +142,7 @@ time_t mktimegm(struct tm *tm)
         m += 12;
         y--;
     }
-    t = 86400ULL * (d + (153 * m - 457) / 5 + 365 * y + y / 4 - y / 100 + 
+    t = 86400ULL * (d + (153 * m - 457) / 5 + 365 * y + y / 4 - y / 100 +
                  y / 400 - 719469);
     t += 3600 * tm->tm_hour + 60 * tm->tm_min + tm->tm_sec;
     return t;
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Mon Jul 06 17:32:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jul 2020 17:32:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jsUyI-0000dJ-6M; Mon, 06 Jul 2020 17:32:02 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=m1N9=AR=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jsUyG-0000ct-Q8
 for xen-devel@lists.xenproject.org; Mon, 06 Jul 2020 17:32:00 +0000
X-Inumbo-ID: 94748e54-bfae-11ea-8cb2-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 94748e54-bfae-11ea-8cb2-12813bfff9fa;
 Mon, 06 Jul 2020 17:31:52 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=MKff4t+ItNuQADKRYhdRb7IepcoSbXsDmPkJWArUicA=; b=ELQxUO59/zJdV7FWEY2oUY0iS
 oPaQad1w8Act+E+ZDH8o13GAEJQlK7aFuqe2m8bqMQb02jlzbQwMgDlg5iHM47BRdJ4eZIkuhGNL7
 001k/tMNcfh0QYwS52SV+t+XgYb/MZXZ9hcvRlKhtj6qnwHICohMMwO44hBf3zEGKZ3cs=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jsUy8-0003js-04; Mon, 06 Jul 2020 17:31:52 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jsUy7-0001qi-O1; Mon, 06 Jul 2020 17:31:51 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jsUy7-0003lt-NN; Mon, 06 Jul 2020 17:31:51 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151669-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 151669: regressions - FAIL
X-Osstest-Failures: qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-amd64-libvirt-vhd:debian-di-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-libvirt-xsm:guest-start:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qcow2:debian-di-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:redhat-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-i386-libvirt-pair:guest-start/debian:fail:regression
 qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-freebsd10-i386:guest-start:fail:regression
 qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:redhat-install:fail:regression
 qemu-mainline:test-amd64-amd64-libvirt-xsm:guest-start:fail:regression
 qemu-mainline:test-amd64-amd64-libvirt:guest-start:fail:regression
 qemu-mainline:test-amd64-i386-libvirt:guest-start:fail:regression
 qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-start:fail:regression
 qemu-mainline:test-armhf-armhf-xl-vhd:debian-di-install:fail:regression
 qemu-mainline:test-amd64-amd64-libvirt-pair:guest-start/debian:fail:regression
 qemu-mainline:test-armhf-armhf-libvirt:guest-start:fail:regression
 qemu-mainline:test-arm64-arm64-libvirt-xsm:guest-start:fail:regression
 qemu-mainline:test-armhf-armhf-libvirt-raw:debian-di-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: qemuu=eb6490f544388dd24c0d054a96dd304bc7284450
X-Osstest-Versions-That: qemuu=9e3903136d9acde2fb2dd9e967ba928050a6cb4a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 06 Jul 2020 17:31:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151669 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151669/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-ovmf-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-win7-amd64 10 windows-install  fail REGR. vs. 151065
 test-amd64-amd64-libvirt-vhd 10 debian-di-install        fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-win7-amd64 10 windows-install   fail REGR. vs. 151065
 test-amd64-amd64-qemuu-nested-intel 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-libvirt-xsm  12 guest-start              fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-ws16-amd64 10 windows-install  fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-ovmf-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-qemuu-nested-amd 10 debian-hvm-install  fail REGR. vs. 151065
 test-amd64-amd64-xl-qcow2    10 debian-di-install        fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-qemuu-rhel6hvm-amd 10 redhat-install     fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-debianhvm-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-ws16-amd64 10 windows-install   fail REGR. vs. 151065
 test-amd64-i386-libvirt-pair 21 guest-start/debian       fail REGR. vs. 151065
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-freebsd10-i386 11 guest-start            fail REGR. vs. 151065
 test-amd64-i386-qemuu-rhel6hvm-intel 10 redhat-install   fail REGR. vs. 151065
 test-amd64-amd64-libvirt-xsm 12 guest-start              fail REGR. vs. 151065
 test-amd64-amd64-libvirt     12 guest-start              fail REGR. vs. 151065
 test-amd64-i386-libvirt      12 guest-start              fail REGR. vs. 151065
 test-amd64-i386-freebsd10-amd64 11 guest-start           fail REGR. vs. 151065
 test-armhf-armhf-xl-vhd      10 debian-di-install        fail REGR. vs. 151065
 test-amd64-amd64-libvirt-pair 21 guest-start/debian      fail REGR. vs. 151065
 test-armhf-armhf-libvirt     12 guest-start              fail REGR. vs. 151065
 test-arm64-arm64-libvirt-xsm 12 guest-start              fail REGR. vs. 151065
 test-armhf-armhf-libvirt-raw 10 debian-di-install        fail REGR. vs. 151065

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                eb6490f544388dd24c0d054a96dd304bc7284450
baseline version:
 qemuu                9e3903136d9acde2fb2dd9e967ba928050a6cb4a

Last test of basis   151065  2020-06-12 22:27:51 Z   23 days
Failing since        151101  2020-06-14 08:32:51 Z   22 days   28 attempts
Testing same since   151634  2020-07-05 00:36:29 Z    1 days    4 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ahmed Karaman <ahmedkhaledkaraman@gmail.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Bulekov <alxndr@bu.edu>
  Alexey Krasikov <alex-krasikov@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Allan Peramaki <aperamak@pp1.inet.fi>
  Andrew Jones <drjones@redhat.com>
  Ani Sinha <ani.sinha@nutanix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Ard Biesheuvel <ardb@kernel.org>
  Artyom Tarasenko <atar4qemu@gmail.com>
  Aurelien Jarno <aurelien@aurel32.net>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Beata Michalska <beata.michalska@linaro.org>
  Bin Meng <bin.meng@windriver.com>
  Cathy Zhang <cathy.zhang@intel.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Claudio Fontana <cfontana@suse.de>
  Cleber Rosa <crosa@redhat.com>
  Colin Xu <colin.xu@intel.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniele Buono <dbuono@linux.vnet.ibm.com>
  David Carlier <devnexen@gmail.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Derek Su <dereksu@qnap.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Emilio G. Cota <cota@braap.org>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Evgeny Yakovlev <eyakovlev@virtuozzo.com>
  fangying <fangying1@huawei.com>
  Farhan Ali <alifm@linux.ibm.com>
  Finn Thain <fthain@telegraphics.com.au>
  Geoffrey McRae <geoff@hostfission.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Greg Kurz <groug@kaod.org>
  Guenter Roeck <linux@roeck-us.net>
  Gustavo Romero <gromero@linux.ibm.com>
  Halil Pasic <pasic@linux.ibm.com>
  Helge Deller <deller@gmx.de>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Ian Jiang <ianjiang.ict@gmail.com>
  Igor Mammedov <imammedo@redhat.com>
  Janne Grunau <j@jannau.net>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Wang <jasowang@redhat.com>
  Jay Zhou <jianjay.zhou@huawei.com>
  Jean-Christophe Dubois <jcd@tribudubois.net>
  Jessica Clarke <jrtc27@jrtc27.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jingqi Liu <jingqi.liu@intel.com>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Joseph Myers <joseph@codesourcery.com>
  Julio Faracco <jcfaracco@gmail.com>
  Keqian Zhu <zhukeqian1@huawei.com>
  Kevin Wolf <kwolf@redhat.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Leonid Bloch <lb.workbox@gmail.com>
  Leonid Bloch <lbloch@janustech.com>
  Li Qiang <liq3ea@gmail.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  lichun <lichun@ruijie.com.cn>
  Like Xu <like.xu@linux.intel.com>
  Lingfeng Yang <lfy@google.com>
  Liran Alon <liran.alon@oracle.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Lukas Straub <lukasstraub2@web.de>
  Maciej S. Szmigiero <maciej.szmigiero@oracle.com>
  Magnus Damm <magnus.damm@gmail.com>
  Mao Zhongyi <maozhongyi@cmss.chinamobile.com>
  Marcelo Tosatti <mtosatti@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Masahiro Yamada <masahiroy@kernel.org>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Michael S. Tsirkin <mst@redhat.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Raphael Norwitz <raphael.norwitz@nutanix.com>
  Richard Henderson <richard.henderson@linaro.org>
  Richard W.M. Jones <rjones@redhat.com>
  Robert Foley <robert.foley@linaro.org>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Roman Kagan <rkagan@virtuozzo.com>
  Roman Kagan <rvkagan@yandex-team.ru>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Sebastian Rasmussen <sebras@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Tao Xu <tao3.xu@intel.com>
  Thomas Huth <thuth@redhat.com>
  Tong Ho <tong.ho@xilinx.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  WangBowen <bowen.wang@intel.com>
  Wei Huang <wei.huang2@amd.com>
  Wei Wang <wei.w.wang@intel.com>
  Ying Fang <fangying1@huawei.com>
  Yoshinori Sato <ysato@users.sourceforge.jp>
  Yuri Benditovich <yuri.benditovich@daynix.com>
  Zhang Chen <chen.zhang@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 17819 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Jul 06 17:55:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jul 2020 17:55:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jsVKh-0002LU-6U; Mon, 06 Jul 2020 17:55:11 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hiqY=AR=gmail.com=alistair23@srs-us1.protection.inumbo.net>)
 id 1jsVKf-0002LO-Ax
 for xen-devel@lists.xenproject.org; Mon, 06 Jul 2020 17:55:09 +0000
X-Inumbo-ID: d4879182-bfb1-11ea-8496-bc764e2007e4
Received: from mail-il1-x142.google.com (unknown [2607:f8b0:4864:20::142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d4879182-bfb1-11ea-8496-bc764e2007e4;
 Mon, 06 Jul 2020 17:55:08 +0000 (UTC)
Received: by mail-il1-x142.google.com with SMTP id t4so20519178iln.1
 for <xen-devel@lists.xenproject.org>; Mon, 06 Jul 2020 10:55:08 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc:content-transfer-encoding;
 bh=ZmQuG2AsZpbcelM7uL4uy958NUT2KgddZYhFWXtMNSA=;
 b=sw3EFcVvNcwfVu/7C/5MKee4SfHxGIkTeN62Bpx/IWWA6WEjBoEsPgTFuyYd6ZY1Gr
 Qwm/uOcoKwEe7yPYLDyX6FKsDnnjIKYbDfhXcIVJoaIWO51pxpebtQvXo1r77bVQsIV2
 snUKierNhFsvQkA+r4nhTOEPGaPvgQucQE7v1hXTNwuO6Bflgp9C4akbtRkZFD2Bn56i
 Va703Rt1blJoaKzcsdzSBGIjNvCPgsv8s+J8CuUiWFaVS09XY83K4ePMKOa2uI/x5fBF
 ilYijGKucyyL6AqRX03DPpw8PttqRUK+wW2UNv+aMLPfL/FGkxT/vFWlOeK6Jh5ql4cc
 xxFA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc:content-transfer-encoding;
 bh=ZmQuG2AsZpbcelM7uL4uy958NUT2KgddZYhFWXtMNSA=;
 b=Vhrw77KXdKbU0Dlm/8dKZAIXfd8YAke3HxZCXe4/HyY23dwVDur5McxKQfEhBtls/5
 hzorqKNeG3eHI2IO3yMOdwFvqRZG96cipAikenaqvw8PBxzSBSZism41X2Ezk/P2cpol
 yFXgrd3+dO2oHFzaMknZFk+T9IRl5iX10y33m318Y1g1wZuM4CHfH6dz+p6jyfmiReiU
 oT+CumscLi3v+55BitDdXyg4o5/AwffrO1pDEanYTgWrzwA3RaCCIunQ+iYJ0ZCM4Sxx
 KKQzNAW5PnADCggrzEIwtlDVHwHWpnxqs1tO8wjzwAh/l1hil8H4ZSrn1t9brqcrkcRA
 rPDQ==
X-Gm-Message-State: AOAM531pFd05vIKQpHWonJEPwitsCzOslbRzMbeCHBfp+fsBtqtFyOQe
 hXBcOIlTbgwZAWyigNHA+SoA8JaX5Zy6V+GZ/30=
X-Google-Smtp-Source: ABdhPJzISbzqletUc82Eswi6ouHI51/TKv7NeZldaWm+9iDTc1RF7eyNcAo0E5IEVoJI+14cfVcFe8k3uljOlhawiqg=
X-Received: by 2002:a92:d186:: with SMTP id z6mr33477001ilz.227.1594058108088; 
 Mon, 06 Jul 2020 10:55:08 -0700 (PDT)
MIME-Version: 1.0
References: <20200704144943.18292-1-f4bug@amsat.org>
 <20200704144943.18292-16-f4bug@amsat.org>
In-Reply-To: <20200704144943.18292-16-f4bug@amsat.org>
From: Alistair Francis <alistair23@gmail.com>
Date: Mon, 6 Jul 2020 10:45:21 -0700
Message-ID: <CAKmqyKMk1b28i9xh_w1tp2hUcbxWNPUxWthy9VbRbnMtkrVpcQ@mail.gmail.com>
Subject: Re: [PATCH 15/26] hw/usb: Add new 'usb-quirks.h' local header
To: =?UTF-8?Q?Philippe_Mathieu=2DDaud=C3=A9?= <f4bug@amsat.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Peter Maydell <peter.maydell@linaro.org>,
 "Michael S. Tsirkin" <mst@redhat.com>,
 Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>,
 "qemu-devel@nongnu.org Developers" <qemu-devel@nongnu.org>,
 BALATON Zoltan <balaton@eik.bme.hu>, Gerd Hoffmann <kraxel@redhat.com>,
 "Edgar E. Iglesias" <edgar.iglesias@gmail.com>,
 Huacai Chen <chenhc@lemote.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Yoshinori Sato <ysato@users.sourceforge.jp>, Paul Durrant <paul@xen.org>,
 Magnus Damm <magnus.damm@gmail.com>, Markus Armbruster <armbru@redhat.com>,
 =?UTF-8?Q?Herv=C3=A9_Poussineau?= <hpoussin@reactos.org>,
 Anthony Perard <anthony.perard@citrix.com>,
 "open list:X86" <xen-devel@lists.xenproject.org>,
 Leif Lindholm <leif@nuviainc.com>,
 =?UTF-8?Q?Philippe_Mathieu=2DDaud=C3=A9?= <philmd@redhat.com>,
 Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
 Eduardo Habkost <ehabkost@redhat.com>,
 Alistair Francis <alistair@alistair23.me>,
 "Dr. David Alan Gilbert" <dgilbert@redhat.com>,
 Beniamino Galvani <b.galvani@gmail.com>,
 =?UTF-8?B?TWFyYy1BbmRyw6kgTHVyZWF1?= <marcandre.lureau@redhat.com>,
 Niek Linnenbank <nieklinnenbank@gmail.com>, qemu-arm <qemu-arm@nongnu.org>,
 Samuel Thibault <samuel.thibault@ens-lyon.org>,
 Richard Henderson <rth@twiddle.net>,
 Radoslaw Biernacki <radoslaw.biernacki@linaro.org>,
 Igor Mitsyanko <i.mitsyanko@gmail.com>, Paul Zimmerman <pauldzim@gmail.com>,
 "open list:New World" <qemu-ppc@nongnu.org>,
 David Gibson <david@gibson.dropbear.id.au>,
 Paolo Bonzini <pbonzini@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Sat, Jul 4, 2020 at 7:56 AM Philippe Mathieu-Daudé <f4bug@amsat.org> wrote:
>
> Only redirect.c consumes the quirks API. Reduce the big "hw/usb.h"
> header by moving the quirks related declaration into their own
> header. As nothing out of hw/usb/ requires it, keep it local.
>
> Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>

Reviewed-by: Alistair Francis <alistair.francis@wdc.com>

Alistair

> ---
>  hw/usb/usb-quirks.h | 27 +++++++++++++++++++++++++++
>  include/hw/usb.h    | 11 -----------
>  hw/usb/quirks.c     |  1 +
>  hw/usb/redirect.c   |  1 +
>  4 files changed, 29 insertions(+), 11 deletions(-)
>  create mode 100644 hw/usb/usb-quirks.h
>
> diff --git a/hw/usb/usb-quirks.h b/hw/usb/usb-quirks.h
> new file mode 100644
> index 0000000000..542889efc4
> --- /dev/null
> +++ b/hw/usb/usb-quirks.h
> @@ -0,0 +1,27 @@
> +/*
> + * USB quirk handling
> + *
> + * Copyright (c) 2012 Red Hat, Inc.
> + *
> + * Red Hat Authors:
> + * Hans de Goede <hdegoede@redhat.com>
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License as published by
> + * the Free Software Foundation; either version 2 of the License, or
> + * (at your option) any later version.
> + */
> +
> +#ifndef HW_USB_QUIRKS_H
> +#define HW_USB_QUIRKS_H
> +
> +/* In bulk endpoints are streaming data sources (iow behave like isoc eps) */
> +#define USB_QUIRK_BUFFER_BULK_IN        0x01
> +/* Bulk pkts in FTDI format, need special handling when combining packets */
> +#define USB_QUIRK_IS_FTDI               0x02
> +
> +int usb_get_quirks(uint16_t vendor_id, uint16_t product_id,
> +                   uint8_t interface_class, uint8_t interface_subclass,
> +                   uint8_t interface_protocol);
> +
> +#endif
> diff --git a/include/hw/usb.h b/include/hw/usb.h
> index 18f1349bdc..8c3bc920ff 100644
> --- a/include/hw/usb.h
> +++ b/include/hw/usb.h
> @@ -549,15 +549,4 @@ int usb_device_alloc_streams(USBDevice *dev, USBEndpoint **eps, int nr_eps,
>                               int streams);
>  void usb_device_free_streams(USBDevice *dev, USBEndpoint **eps, int nr_eps);
>
> -/* quirks.c */
> -
> -/* In bulk endpoints are streaming data sources (iow behave like isoc eps) */
> -#define USB_QUIRK_BUFFER_BULK_IN       0x01
> -/* Bulk pkts in FTDI format, need special handling when combining packets */
> -#define USB_QUIRK_IS_FTDI              0x02
> -
> -int usb_get_quirks(uint16_t vendor_id, uint16_t product_id,
> -                   uint8_t interface_class, uint8_t interface_subclass,
> -                   uint8_t interface_protocol);
> -
>  #endif
> diff --git a/hw/usb/quirks.c b/hw/usb/quirks.c
> index 655b36f2d5..b0d0f87e35 100644
> --- a/hw/usb/quirks.c
> +++ b/hw/usb/quirks.c
> @@ -15,6 +15,7 @@
>  #include "qemu/osdep.h"
>  #include "quirks.inc.c"
>  #include "hw/usb.h"
> +#include "usb-quirks.h"
>
>  static bool usb_id_match(const struct usb_device_id *ids,
>                           uint16_t vendor_id, uint16_t product_id,
> diff --git a/hw/usb/redirect.c b/hw/usb/redirect.c
> index 417a60a2e6..4c5925a039 100644
> --- a/hw/usb/redirect.c
> +++ b/hw/usb/redirect.c
> @@ -45,6 +45,7 @@
>  #include "hw/usb.h"
>  #include "migration/qemu-file-types.h"
>  #include "migration/vmstate.h"
> +#include "usb-quirks.h"
>
>  /* ERROR is defined below. Remove any previous definition. */
>  #undef ERROR
> --
> 2.21.3
>
>


From xen-devel-bounces@lists.xenproject.org Mon Jul 06 17:55:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jul 2020 17:55:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jsVLT-0002PO-Kk; Mon, 06 Jul 2020 17:55:59 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hiqY=AR=gmail.com=alistair23@srs-us1.protection.inumbo.net>)
 id 1jsVLS-0002PH-25
 for xen-devel@lists.xenproject.org; Mon, 06 Jul 2020 17:55:58 +0000
X-Inumbo-ID: f1c96b12-bfb1-11ea-8496-bc764e2007e4
Received: from mail-io1-xd43.google.com (unknown [2607:f8b0:4864:20::d43])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f1c96b12-bfb1-11ea-8496-bc764e2007e4;
 Mon, 06 Jul 2020 17:55:57 +0000 (UTC)
Received: by mail-io1-xd43.google.com with SMTP id a12so40324641ion.13
 for <xen-devel@lists.xenproject.org>; Mon, 06 Jul 2020 10:55:57 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc:content-transfer-encoding;
 bh=mSCfOyGk5eR0zmUJmXHZ59WLz9pikUBhIKrsmcLRwKY=;
 b=j8eXVMxaY8jrHLiJej23Zer/m3spONPlFZ4KX79ZeAzpTiTZv8QpRQThnjOB9bjZEz
 9XqRX+ZacKvhEpEkSiAFCBRMsBRSLouiILP7wtr57JI+XSK4o9JxmLpHI2oc0Q86y5LI
 gKZaeo5iEWUhOrVi4VXiejkfsAXXi1Jok0jj0Lf06WbI9xEOKdvw0l2w9Ddex+SQocdw
 XLAtjlWIGAf3HjvdQdebloODZ+hOYtihnQpNOqqLzXX50FzndxNMA7kTZPk8LHDy3Eqy
 yQzhSTGA0MNqrwk/qGgrvgHt1PhK5+f5Ta5eL+9/+kmDpG8FN1o8jhVPg/ep/hzRREdv
 u/Eg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc:content-transfer-encoding;
 bh=mSCfOyGk5eR0zmUJmXHZ59WLz9pikUBhIKrsmcLRwKY=;
 b=mFunRm7m/ZT9RI4mFBmkdjBh0b49qMtxKc2udn96BwsCbZLhF4ayMCexDkEWROnpS7
 uPO4mTILP0gU9yuWy+6aH6xxrEVBKeY5GMkwvG0ytwdy8JDUWd5SENgA9nUxBL5jvJ1+
 Pq7z+ne4Bw3GxU6rCeiHTvEmz1080mU+Pm6Fqr/4WRVPP6g0PpyxDSTMfPAXRVNZJYul
 M7SZb32wJq5OlBncGOGMcpsTTkYNTtCYInltQPa/5wBtRwpgLP1UymiTyxH69zkb7TzV
 va4fHjbNs+k56kSDcsS8Eqa89wKboUycczNqv39LhyMbS0a9B4T7IYqnJ5BS12pHY9aD
 E+QA==
X-Gm-Message-State: AOAM533cCi5aiIPj6xNXP0275vtrcQqxHy9lcxmxN52EQqimyBwEYNak
 B7bSnupgiMqW8VKByUeUrZDYEDBVfO/7virSXnE=
X-Google-Smtp-Source: ABdhPJwgU07aL+RYrZdIeE1eEIZgLeMLC2jE22itCYBokk5fp9t9EJA/ZYEPpHOeXw6oU4TodkfjqRVYChow5WL9Imc=
X-Received: by 2002:a5d:9ed0:: with SMTP id a16mr26452978ioe.176.1594058157255; 
 Mon, 06 Jul 2020 10:55:57 -0700 (PDT)
MIME-Version: 1.0
References: <20200704144943.18292-1-f4bug@amsat.org>
 <20200704144943.18292-17-f4bug@amsat.org>
In-Reply-To: <20200704144943.18292-17-f4bug@amsat.org>
From: Alistair Francis <alistair23@gmail.com>
Date: Mon, 6 Jul 2020 10:46:09 -0700
Message-ID: <CAKmqyKNj_iiadDJEYme-HWxSNm5y7cE=ESRtxxXd4XvToGsRHw@mail.gmail.com>
Subject: Re: [PATCH 16/26] hw/usb/bus: Simplify usb_get_dev_path()
To: =?UTF-8?Q?Philippe_Mathieu=2DDaud=C3=A9?= <f4bug@amsat.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Peter Maydell <peter.maydell@linaro.org>,
 "Michael S. Tsirkin" <mst@redhat.com>,
 Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>,
 "qemu-devel@nongnu.org Developers" <qemu-devel@nongnu.org>,
 BALATON Zoltan <balaton@eik.bme.hu>, Gerd Hoffmann <kraxel@redhat.com>,
 "Edgar E. Iglesias" <edgar.iglesias@gmail.com>,
 Huacai Chen <chenhc@lemote.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Yoshinori Sato <ysato@users.sourceforge.jp>, Paul Durrant <paul@xen.org>,
 Magnus Damm <magnus.damm@gmail.com>, Markus Armbruster <armbru@redhat.com>,
 =?UTF-8?Q?Herv=C3=A9_Poussineau?= <hpoussin@reactos.org>,
 Anthony Perard <anthony.perard@citrix.com>,
 "open list:X86" <xen-devel@lists.xenproject.org>,
 Leif Lindholm <leif@nuviainc.com>,
 =?UTF-8?Q?Philippe_Mathieu=2DDaud=C3=A9?= <philmd@redhat.com>,
 Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
 Eduardo Habkost <ehabkost@redhat.com>,
 Alistair Francis <alistair@alistair23.me>,
 "Dr. David Alan Gilbert" <dgilbert@redhat.com>,
 Beniamino Galvani <b.galvani@gmail.com>,
 =?UTF-8?B?TWFyYy1BbmRyw6kgTHVyZWF1?= <marcandre.lureau@redhat.com>,
 Niek Linnenbank <nieklinnenbank@gmail.com>, qemu-arm <qemu-arm@nongnu.org>,
 Samuel Thibault <samuel.thibault@ens-lyon.org>,
 Richard Henderson <rth@twiddle.net>,
 Radoslaw Biernacki <radoslaw.biernacki@linaro.org>,
 Igor Mitsyanko <i.mitsyanko@gmail.com>, Paul Zimmerman <pauldzim@gmail.com>,
 "open list:New World" <qemu-ppc@nongnu.org>,
 David Gibson <david@gibson.dropbear.id.au>,
 Paolo Bonzini <pbonzini@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Sat, Jul 4, 2020 at 8:00 AM Philippe Mathieu-Daudé <f4bug@amsat.org> wrote:
>
> Simplify usb_get_dev_path() a bit.
>
> Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>

Reviewed-by: Alistair Francis <alistair.francis@wdc.com>

Alistair

> ---
>  hw/usb/bus.c | 19 +++++++++----------
>  1 file changed, 9 insertions(+), 10 deletions(-)
>
> diff --git a/hw/usb/bus.c b/hw/usb/bus.c
> index 111c3af7c1..f8901e822c 100644
> --- a/hw/usb/bus.c
> +++ b/hw/usb/bus.c
> @@ -580,19 +580,18 @@ static void usb_bus_dev_print(Monitor *mon, DeviceState *qdev, int indent)
>  static char *usb_get_dev_path(DeviceState *qdev)
>  {
>      USBDevice *dev = USB_DEVICE(qdev);
> -    DeviceState *hcd = qdev->parent_bus->parent;
> -    char *id = NULL;
>
>      if (dev->flags & (1 << USB_DEV_FLAG_FULL_PATH)) {
> -        id = qdev_get_dev_path(hcd);
> -    }
> -    if (id) {
> -        char *ret = g_strdup_printf("%s/%s", id, dev->port->path);
> -        g_free(id);
> -        return ret;
> -    } else {
> -        return g_strdup(dev->port->path);
> +        DeviceState *hcd = qdev->parent_bus->parent;
> +        char *id = qdev_get_dev_path(hcd);
> +
> +        if (id) {
> +            char *ret = g_strdup_printf("%s/%s", id, dev->port->path);
> +            g_free(id);
> +            return ret;
> +        }
>      }
> +    return g_strdup(dev->port->path);
>  }
>
>  static char *usb_get_fw_dev_path(DeviceState *qdev)
> --
> 2.21.3
>
>


From xen-devel-bounces@lists.xenproject.org Mon Jul 06 17:56:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jul 2020 17:56:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jsVM6-0002Tr-Uv; Mon, 06 Jul 2020 17:56:38 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hiqY=AR=gmail.com=alistair23@srs-us1.protection.inumbo.net>)
 id 1jsVM5-0002Tj-W8
 for xen-devel@lists.xenproject.org; Mon, 06 Jul 2020 17:56:38 +0000
X-Inumbo-ID: 0992a7cc-bfb2-11ea-8496-bc764e2007e4
Received: from mail-io1-xd41.google.com (unknown [2607:f8b0:4864:20::d41])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0992a7cc-bfb2-11ea-8496-bc764e2007e4;
 Mon, 06 Jul 2020 17:56:37 +0000 (UTC)
Received: by mail-io1-xd41.google.com with SMTP id a12so40326864ion.13
 for <xen-devel@lists.xenproject.org>; Mon, 06 Jul 2020 10:56:37 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc:content-transfer-encoding;
 bh=/6i6SzUyP91tXZAmPR05xH5j33bfBUucUM0URzPP61A=;
 b=boamkXe9J4H35VxNzJVu2sKkuqZ5atZU6A1A5XhvxgFGrzuqn6g5Q7BEAyZTMpUh8n
 XRbR8Zv3k0pfntdT7a07EFE6ygQpqlJVA3b21L/ZK7KeNdytEnvSULDnEbDknQCWxcgX
 8bbfye39D1BUQ1BAcIRGiuZaAl7MvRJXkFgyzJlB0nYJGuCrTzE1/V8GyGEUUtzXB8lM
 +2k1yGzCz+XgJBwBT+e0m0Q+bF9LqSXCkqqxeQqvPuvyliNqKuOjs+oMEntue55kn1mj
 5tVcq8YkluSFwe7RLw7lEjbqMwpHKUCokr6yZ39bWVzbROv/rIppb5UyQ6xsvhdvU64B
 V1ZQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc:content-transfer-encoding;
 bh=/6i6SzUyP91tXZAmPR05xH5j33bfBUucUM0URzPP61A=;
 b=o4Beuq+n4ZTxkEQKFCQloMa4mOI/kYQv8W+DlJ+u1PgWrk3xZ9wvDmY+GfoQxCnbWD
 +kJGglqgeNLAFO21Xdd+XuOoivM3b2hrBhhbAWMCRPZ2FqczwFF7yiNkGlDtQZzhEkBk
 HhpRWz805Y/7SwYrz+o4bzgSx8Myfzr6nl+zM3A9UWJ6n+Gvrbl6xEk8NN0X3PvrUoRU
 6Q+w3RsrjzA6WnOe+p2MU6WyuG8wkKfrY95coSQxXXco0ertth836k6LIfn0CIeGZH2X
 L3aJF9B7w7k65QlYdi4Whc7KlmzqidouAZmz5RBaIkKkvy+beY81ibKnSLwJUEf0YoEW
 LZPQ==
X-Gm-Message-State: AOAM53341+W3/d5xiPi/p1ENO0mfGvRRjzhPez1Gl/xLqWHDPj3MZMMo
 ozmoh28fO41BbsTKqFQTL2qSR+weK66FM9m+GSA=
X-Google-Smtp-Source: ABdhPJyWgOX4whXvLnmyTPkPv87oz9JfwIp4De0q7D/H/TDK5ZXEnOeDquw+Fo5nyel8hcaNjKzEOm1ikS3FuMmOoSw=
X-Received: by 2002:a5d:9306:: with SMTP id l6mr27140837ion.105.1594058197170; 
 Mon, 06 Jul 2020 10:56:37 -0700 (PDT)
MIME-Version: 1.0
References: <20200704144943.18292-1-f4bug@amsat.org>
 <20200704144943.18292-18-f4bug@amsat.org>
In-Reply-To: <20200704144943.18292-18-f4bug@amsat.org>
From: Alistair Francis <alistair23@gmail.com>
Date: Mon, 6 Jul 2020 10:46:50 -0700
Message-ID: <CAKmqyKMk==4rbi4iqEuH1aYcUNE+zTbBst5gKp8NkePz6OmDNg@mail.gmail.com>
Subject: Re: [PATCH 17/26] hw/usb/bus: Rename usb_get_dev_path() as
 usb_get_full_dev_path()
To: =?UTF-8?Q?Philippe_Mathieu=2DDaud=C3=A9?= <f4bug@amsat.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Peter Maydell <peter.maydell@linaro.org>,
 "Michael S. Tsirkin" <mst@redhat.com>,
 Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>,
 "qemu-devel@nongnu.org Developers" <qemu-devel@nongnu.org>,
 BALATON Zoltan <balaton@eik.bme.hu>, Gerd Hoffmann <kraxel@redhat.com>,
 "Edgar E. Iglesias" <edgar.iglesias@gmail.com>,
 Huacai Chen <chenhc@lemote.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Yoshinori Sato <ysato@users.sourceforge.jp>, Paul Durrant <paul@xen.org>,
 Magnus Damm <magnus.damm@gmail.com>, Markus Armbruster <armbru@redhat.com>,
 =?UTF-8?Q?Herv=C3=A9_Poussineau?= <hpoussin@reactos.org>,
 Anthony Perard <anthony.perard@citrix.com>,
 "open list:X86" <xen-devel@lists.xenproject.org>,
 Leif Lindholm <leif@nuviainc.com>,
 =?UTF-8?Q?Philippe_Mathieu=2DDaud=C3=A9?= <philmd@redhat.com>,
 Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
 Eduardo Habkost <ehabkost@redhat.com>,
 Alistair Francis <alistair@alistair23.me>,
 "Dr. David Alan Gilbert" <dgilbert@redhat.com>,
 Beniamino Galvani <b.galvani@gmail.com>,
 =?UTF-8?B?TWFyYy1BbmRyw6kgTHVyZWF1?= <marcandre.lureau@redhat.com>,
 Niek Linnenbank <nieklinnenbank@gmail.com>, qemu-arm <qemu-arm@nongnu.org>,
 Samuel Thibault <samuel.thibault@ens-lyon.org>,
 Richard Henderson <rth@twiddle.net>,
 Radoslaw Biernacki <radoslaw.biernacki@linaro.org>,
 Igor Mitsyanko <i.mitsyanko@gmail.com>, Paul Zimmerman <pauldzim@gmail.com>,
 "open list:New World" <qemu-ppc@nongnu.org>,
 David Gibson <david@gibson.dropbear.id.au>,
 Paolo Bonzini <pbonzini@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Sat, Jul 4, 2020 at 7:58 AM Philippe Mathieu-Daudé <f4bug@amsat.org> wrote:
>
> If the device has USB_DEV_FLAG_FULL_PATH set, usb_get_dev_path()
> returns the full port path. Rename the function accordingly.
>
> Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>

Reviewed-by: Alistair Francis <alistair.francis@wdc.com>

Alistair

> ---
>  hw/usb/bus.c | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/hw/usb/bus.c b/hw/usb/bus.c
> index f8901e822c..fad8194bf5 100644
> --- a/hw/usb/bus.c
> +++ b/hw/usb/bus.c
> @@ -13,7 +13,7 @@
>
>  static void usb_bus_dev_print(Monitor *mon, DeviceState *qdev, int indent);
>
> -static char *usb_get_dev_path(DeviceState *dev);
> +static char *usb_get_full_dev_path(DeviceState *dev);
>  static char *usb_get_fw_dev_path(DeviceState *qdev);
>  static void usb_qdev_unrealize(DeviceState *qdev);
>
> @@ -33,7 +33,7 @@ static void usb_bus_class_init(ObjectClass *klass, void *data)
>      HotplugHandlerClass *hc = HOTPLUG_HANDLER_CLASS(klass);
>
>      k->print_dev = usb_bus_dev_print;
> -    k->get_dev_path = usb_get_dev_path;
> +    k->get_dev_path = usb_get_full_dev_path;
>      k->get_fw_dev_path = usb_get_fw_dev_path;
>      hc->unplug = qdev_simple_device_unplug_cb;
>  }
> @@ -577,7 +577,7 @@ static void usb_bus_dev_print(Monitor *mon, DeviceState *qdev, int indent)
>                     dev->attached ? ", attached" : "");
>  }
>
> -static char *usb_get_dev_path(DeviceState *qdev)
> +static char *usb_get_full_dev_path(DeviceState *qdev)
>  {
>      USBDevice *dev =3D USB_DEVICE(qdev);
>
> --
> 2.21.3
>
>


From xen-devel-bounces@lists.xenproject.org Mon Jul 06 18:08:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jul 2020 18:08:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jsVX7-0003WF-2W; Mon, 06 Jul 2020 18:08:01 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=g8At=AR=gmail.com=jrdr.linux@srs-us1.protection.inumbo.net>)
 id 1jsVX5-0003W5-Df
 for xen-devel@lists.xenproject.org; Mon, 06 Jul 2020 18:07:59 +0000
X-Inumbo-ID: 9f89cc96-bfb3-11ea-8496-bc764e2007e4
Received: from mail-pj1-x1030.google.com (unknown [2607:f8b0:4864:20::1030])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9f89cc96-bfb3-11ea-8496-bc764e2007e4;
 Mon, 06 Jul 2020 18:07:58 +0000 (UTC)
Received: by mail-pj1-x1030.google.com with SMTP id k5so7782703pjg.3
 for <xen-devel@lists.xenproject.org>; Mon, 06 Jul 2020 11:07:58 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:to:cc:subject:date:message-id;
 bh=TtTy8zAxVVVpMKoPB5sObxEUTYAxQ4ESDdw3dI7IOfA=;
 b=oaJIJFgUBmb7qZXeWwHQwIJgUXFmUxHmjzR/a7ly4+R/L5J6R2D0uaBXx6qjgig7XL
 IcTfx+7t20+5Z9InaIZohd+NXxgQroM7drjaj2oeyhSmyGQyPuwslXxB08Sr1k/TFIrt
 3vjcEts6w/HrO1mJ2/Wm0DA4nXEfBdTSDzWhRgorpzUGd6iUTNwuHKtQy8qEvPBNDmEi
 I8CN9f/N0NnZkCGRw9NQPNyVWTqZX2D6NQEQh6NkVROkIEj8xEOFG1mjUJ9J9t2fTGpK
 Vs/AuIQZd8mOC96Bsi0mfi8wSoU4KUR+ozHTEGiBwRV2YYf7jrNpWFfYb83DxP4DcZJi
 NMNQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:to:cc:subject:date:message-id;
 bh=TtTy8zAxVVVpMKoPB5sObxEUTYAxQ4ESDdw3dI7IOfA=;
 b=nrWuFizRdoPAtvhnYXA+oEbZSrWPXmLCG258cCPoeGEArvVCwHzAyuEe1p69JSObD6
 X3MrBuZnRbL/oGH8dwHlAtm00fDQAruwXmybq+DOou6nNj+x1X76aznstgyyKwh17wVe
 +25adqC5+G7YIQLu56FQFpqtRRAi53+CxQYhf3F75EdzBWStB5T+q+tWe6fYS6I26i7Y
 JV9KFmzArQ2VknNuUaNk9MreIKCD/6TemJtLWya1npycWX5a4ZAD57AX13v4/eyC0+ab
 WT+YPjgJvBzzLP7cM55VnllBY4VyLwLaaq5552TlUtT6kVdcFDjakOB8JMfS1YJcDOIZ
 u0Gw==
X-Gm-Message-State: AOAM531+AXkJifOtOzDLuxt6N/zkUvtfPJyIDUbkezevVpOJUrGbpKb1
 8tAbEsAdR/jneZzYLeV4jsM=
X-Google-Smtp-Source: ABdhPJzylX3osTAYA+Y6HJUZkzAC+UoJPBxpVk5NUh8tucD8BuQqoVsDbrqW5FAUg6C5IOlbZXlbQg==
X-Received: by 2002:a17:902:c206:: with SMTP id
 6mr19932000pll.30.1594058877935; 
 Mon, 06 Jul 2020 11:07:57 -0700 (PDT)
Received: from jordon-HP-15-Notebook-PC.domain.name ([122.171.43.125])
 by smtp.gmail.com with ESMTPSA id 199sm20425544pgc.79.2020.07.06.11.07.54
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Mon, 06 Jul 2020 11:07:57 -0700 (PDT)
From: Souptick Joarder <jrdr.linux@gmail.com>
To: boris.ostrovsky@oracle.com,
	jgross@suse.com,
	sstabellini@kernel.org
Subject: [PATCH v2 0/3] Few bug fixes and Convert to pin_user_pages*()
Date: Mon,  6 Jul 2020 23:46:09 +0530
Message-Id: <1594059372-15563-1-git-send-email-jrdr.linux@gmail.com>
X-Mailer: git-send-email 1.9.1
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, Paul Durrant <xadimgnik@gmail.com>,
 linux-kernel@vger.kernel.org, John Hubbard <jhubbard@nvidia.com>,
 Souptick Joarder <jrdr.linux@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

This series contains a few cleanups, some minor bug fixes, and a
conversion of get_user_pages*() to pin_user_pages*().

I have compile-tested this, but I am unable to run-time test it,
so any testing help is much appreciated.

v2:
	Addressed a few review comments and a compile issue.
	Patch [1/2] from v1 was split into two patches in v2.

Cc: John Hubbard <jhubbard@nvidia.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Paul Durrant <xadimgnik@gmail.com>

Souptick Joarder (3):
  xen/privcmd: Corrected error handling path
  xen/privcmd: Mark pages as dirty
  xen/privcmd: Convert get_user_pages*() to pin_user_pages*()

 drivers/xen/privcmd.c | 32 ++++++++++++++------------------
 1 file changed, 14 insertions(+), 18 deletions(-)

-- 
1.9.1
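The error-handling fix the series describes for lock_pages()/unlock_pages() (report the error code and the number of pages actually pinned as separate values, so cleanup only walks the valid entries) can be sketched as a small user-space model. The function names mirror the privcmd helpers, but this is an illustrative analogue with toy types, not the kernel code; `pin_one` and the array-of-flags representation are invented for the example.

```c
#include <stddef.h>

/* Toy stand-in for pinning one page; slots >= 3 fail. */
static int pin_one(size_t slot) { return slot < 3 ? 0 : -1; }

/*
 * Try to pin nr entries into pages[]. On failure, return the error
 * code, but also report via *pinned how many entries were actually
 * pinned, so the caller's cleanup can stop there instead of
 * validating all nr slots.
 */
int lock_pages(int *pages, size_t nr, size_t *pinned)
{
    for (size_t i = 0; i < nr; i++) {
        if (pin_one(i) < 0) {
            *pinned = i;   /* only pages[0..i-1] are valid */
            return -12;    /* errno-style failure (e.g. -ENOMEM) */
        }
        pages[i] = 1;      /* mark slot as pinned */
    }
    *pinned = nr;
    return 0;
}

/* Unpin exactly the entries that were pinned, no more. */
void unlock_pages(int *pages, size_t pinned)
{
    for (size_t i = 0; i < pinned; i++)
        pages[i] = 0;
}
```

The key point is that the count of successfully pinned pages travels separately from the negative error code, instead of being folded into one return value.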



From xen-devel-bounces@lists.xenproject.org Mon Jul 06 18:08:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jul 2020 18:08:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jsVXB-0003WV-Ag; Mon, 06 Jul 2020 18:08:05 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=g8At=AR=gmail.com=jrdr.linux@srs-us1.protection.inumbo.net>)
 id 1jsVXA-0003W5-9P
 for xen-devel@lists.xenproject.org; Mon, 06 Jul 2020 18:08:04 +0000
X-Inumbo-ID: a24d1442-bfb3-11ea-b7bb-bc764e2007e4
Received: from mail-pj1-x1042.google.com (unknown [2607:f8b0:4864:20::1042])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a24d1442-bfb3-11ea-b7bb-bc764e2007e4;
 Mon, 06 Jul 2020 18:08:03 +0000 (UTC)
Received: by mail-pj1-x1042.google.com with SMTP id cv18so42353pjb.1
 for <xen-devel@lists.xenproject.org>; Mon, 06 Jul 2020 11:08:03 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:to:cc:subject:date:message-id:in-reply-to:references;
 bh=mBXFHkH5TzG6LqeVLyEATGFp3OINlEiEkwTG6DKP5gc=;
 b=FREEGuCxU2Zl2/sBJH058EpPMS4GiGXU/dTLgSmQEKbeNdyxgdzwv6AAMNmM25OGkN
 wHBZkctV9M8/clpVivrB7YRq+baoOE5bJDhR17Lw9dkYo23TMg+x1b8gyTP83jcBRsUn
 3SNWA8P3JttgJfmavqXK/qcf7fBcuz/PfiCLo7HOo3VIdWpaL4z8nf/hygxZQJjiflk+
 qPQPmXDwdqqBEwSL55lx+ZvGU5tvKjsYuSs334PaBtvBgBObRn//cpQIispeu/2WLEAh
 iKi3Ceg4sZjdH2QCP/1xYDV+sGiKJrKHFlkCfJNO/GbDEUNaHfr68A7APrHx0fubgBoX
 ag7A==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
 :references;
 bh=mBXFHkH5TzG6LqeVLyEATGFp3OINlEiEkwTG6DKP5gc=;
 b=f4LpsQACeisrZZfqzLbIU+/3bIejRcnLwqdJ7C8D+I2hFZ2ZHTam+BvQNGXDHrMRK5
 J3qDLVTB9FXanWpECDjSWOvy+zbjg6nBfJibZXxB+F9B1jnNTnaclYIBTbDQnRvruR67
 c0Q0GjNKqveZVoW4z0wpbRbRlchve+96RSHp4QYuZNvFph3vST4Jdpo1mRqb4Ww9QyBS
 XqNqmv/E74jF8ZU7FlnOiufJxcNZllk5wcyoQEh/rkjWqCIsg/WkmQs/OlcrOtH/B2Is
 Uv1kdqwiSPWcemKTyv3FP6l0YYIcqu//9HNu0pXmmazt8VV6ItuIHPHJpgvuOAaNqNqc
 8Bdw==
X-Gm-Message-State: AOAM530g1UgAiR9Dg90jMWBcG5f1Jp0wppbkB+TYvwCOMh0BbCeYA/mP
 sOh+SjIFuzpicnppQoDezwU=
X-Google-Smtp-Source: ABdhPJzfrg4kfMMYzI0FM6dp3y6GDVRu7C/vmGzXICthTgrs/634fOSnQFK49BpoDjYsRBctgws2Eg==
X-Received: by 2002:a17:90a:aa83:: with SMTP id l3mr399709pjq.73.1594058882791; 
 Mon, 06 Jul 2020 11:08:02 -0700 (PDT)
Received: from jordon-HP-15-Notebook-PC.domain.name ([122.171.43.125])
 by smtp.gmail.com with ESMTPSA id 199sm20425544pgc.79.2020.07.06.11.07.59
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Mon, 06 Jul 2020 11:08:02 -0700 (PDT)
From: Souptick Joarder <jrdr.linux@gmail.com>
To: boris.ostrovsky@oracle.com,
	jgross@suse.com,
	sstabellini@kernel.org
Subject: [PATCH v2 1/3] xen/privcmd: Corrected error handling path
Date: Mon,  6 Jul 2020 23:46:10 +0530
Message-Id: <1594059372-15563-2-git-send-email-jrdr.linux@gmail.com>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1594059372-15563-1-git-send-email-jrdr.linux@gmail.com>
References: <1594059372-15563-1-git-send-email-jrdr.linux@gmail.com>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, Paul Durrant <xadimgnik@gmail.com>,
 linux-kernel@vger.kernel.org, John Hubbard <jhubbard@nvidia.com>,
 Souptick Joarder <jrdr.linux@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Previously, if lock_pages() ended up partially mapping pages, it
returned -ERRNO, which forced unlock_pages() to walk every pages[i]
up to *nr_pages* to check whether it was valid. This can be avoided
by returning the number of partially mapped pages through a separate
parameter, alongside -ERRNO, when lock_pages() fails.

With this fix, unlock_pages() no longer needs to validate pages[i] up
to *nr_pages* in the error path, and a few condition checks can be
dropped.
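The new contract can be illustrated with a minimal userspace sketch
(hypothetical names standing in for get_user_pages_fast() and struct
page, not the kernel API): on partial failure the pinned count is
reported through an out-parameter, so cleanup walks only the pinned
prefix and needs no NULL checks.

```c
#include <assert.h>
#include <string.h>

/* Stand-in for get_user_pages_fast(): "pins" at most 'avail' of the
 * 'want' slots and returns the number actually pinned. */
static int fake_pin(int *slots, unsigned int want, unsigned int avail)
{
	unsigned int n = want < avail ? want : avail;

	for (unsigned int i = 0; i < n; i++)
		slots[i] = 1;		/* mark slot as pinned */
	return (int)n;
}

/* Analog of the reworked lock_pages(): on error, the number of slots
 * already pinned is handed back through *pinned. */
static int lock_slots(int *slots, unsigned int nr, unsigned int avail,
		      unsigned int *pinned)
{
	int count = fake_pin(slots, nr, avail);

	if (count < 0)
		return count;
	*pinned += (unsigned int)count;
	if ((unsigned int)count < nr)
		return -1;		/* partial pin: report error */
	return 0;
}

/* Analog of the simplified unlock_pages(): every entry in [0, nr) is
 * known to be pinned, so no validity checks are needed. */
static void unlock_slots(int *slots, unsigned int nr)
{
	for (unsigned int i = 0; i < nr; i++)
		slots[i] = 0;		/* unpin */
}
```

On failure the caller cleans up only what was pinned by passing the
reported count to unlock_slots(), mirroring the `nr_pages = pinned`
assignment in privcmd_ioctl_dm_op() below.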

Signed-off-by: Souptick Joarder <jrdr.linux@gmail.com>
Cc: John Hubbard <jhubbard@nvidia.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Paul Durrant <xadimgnik@gmail.com>
---
 drivers/xen/privcmd.c | 31 +++++++++++++++----------------
 1 file changed, 15 insertions(+), 16 deletions(-)

diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
index a250d11..33677ea 100644
--- a/drivers/xen/privcmd.c
+++ b/drivers/xen/privcmd.c
@@ -580,13 +580,13 @@ static long privcmd_ioctl_mmap_batch(
 
 static int lock_pages(
 	struct privcmd_dm_op_buf kbufs[], unsigned int num,
-	struct page *pages[], unsigned int nr_pages)
+	struct page *pages[], unsigned int nr_pages, unsigned int *pinned)
 {
 	unsigned int i;
+	int page_count = 0;
 
 	for (i = 0; i < num; i++) {
 		unsigned int requested;
-		int pinned;
 
 		requested = DIV_ROUND_UP(
 			offset_in_page(kbufs[i].uptr) + kbufs[i].size,
@@ -594,14 +594,15 @@ static int lock_pages(
 		if (requested > nr_pages)
 			return -ENOSPC;
 
-		pinned = get_user_pages_fast(
+		page_count = get_user_pages_fast(
 			(unsigned long) kbufs[i].uptr,
 			requested, FOLL_WRITE, pages);
-		if (pinned < 0)
-			return pinned;
+		if (page_count < 0)
+			return page_count;
 
-		nr_pages -= pinned;
-		pages += pinned;
+		*pinned += page_count;
+		nr_pages -= page_count;
+		pages += page_count;
 	}
 
 	return 0;
@@ -611,13 +612,8 @@ static void unlock_pages(struct page *pages[], unsigned int nr_pages)
 {
 	unsigned int i;
 
-	if (!pages)
-		return;
-
-	for (i = 0; i < nr_pages; i++) {
-		if (pages[i])
-			put_page(pages[i]);
-	}
+	for (i = 0; i < nr_pages; i++)
+		put_page(pages[i]);
 }
 
 static long privcmd_ioctl_dm_op(struct file *file, void __user *udata)
@@ -630,6 +626,7 @@ static long privcmd_ioctl_dm_op(struct file *file, void __user *udata)
 	struct xen_dm_op_buf *xbufs = NULL;
 	unsigned int i;
 	long rc;
+	unsigned int pinned = 0;
 
 	if (copy_from_user(&kdata, udata, sizeof(kdata)))
 		return -EFAULT;
@@ -683,9 +680,11 @@ static long privcmd_ioctl_dm_op(struct file *file, void __user *udata)
 		goto out;
 	}
 
-	rc = lock_pages(kbufs, kdata.num, pages, nr_pages);
-	if (rc)
+	rc = lock_pages(kbufs, kdata.num, pages, nr_pages, &pinned);
+	if (rc < 0) {
+		nr_pages = pinned;
 		goto out;
+	}
 
 	for (i = 0; i < kdata.num; i++) {
 		set_xen_guest_handle(xbufs[i].h, kbufs[i].uptr);
-- 
1.9.1



From xen-devel-bounces@lists.xenproject.org Mon Jul 06 18:08:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jul 2020 18:08:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jsVXH-0003Xr-JF; Mon, 06 Jul 2020 18:08:11 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=g8At=AR=gmail.com=jrdr.linux@srs-us1.protection.inumbo.net>)
 id 1jsVXG-0003Xb-CN
 for xen-devel@lists.xenproject.org; Mon, 06 Jul 2020 18:08:10 +0000
X-Inumbo-ID: a628ce30-bfb3-11ea-b7bb-bc764e2007e4
Received: from mail-pl1-x642.google.com (unknown [2607:f8b0:4864:20::642])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a628ce30-bfb3-11ea-b7bb-bc764e2007e4;
 Mon, 06 Jul 2020 18:08:09 +0000 (UTC)
Received: by mail-pl1-x642.google.com with SMTP id s14so15633068plq.6
 for <xen-devel@lists.xenproject.org>; Mon, 06 Jul 2020 11:08:09 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:to:cc:subject:date:message-id:in-reply-to:references;
 bh=lQCZlvushcOD69EcXhMHyrU/TPLV5UJPHlxcyr/r1Xg=;
 b=PYmDnfr2KReJ6L3ka/pFQ5TCVzJeDABsscKdIo2hVK+g6RynNJZXKJA8nKzjJjOnq0
 L2VdcsAO3SDozGKprBlSPKVlFyWx7ATEpTpHPmuQpqv+2Av+4+UtEMqP4nkiI/TBFFud
 YrT89cUVzUAujSFRvMPiwbghNFNc5iir/1aAB3zwSpXHvBizJQ+zLqhEevq9ZGMXUgDo
 /5JbKtwsVXV5OZ84G6tunl9EnJlXINSomZ+kuBkCV+2h8io0Gx75UW8PzdiOGLFGAV3E
 N4+IHa22YZq1UWZL12+aqy8rW4h9Ghp2VSD7l8bky+RMlBK5Ez4iWmccWaS138Ltdd4F
 J8rg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
 :references;
 bh=lQCZlvushcOD69EcXhMHyrU/TPLV5UJPHlxcyr/r1Xg=;
 b=QHbOTwC8wqOc7gZVjYyaUuAzoWzjXe6lJYUQ99rBmLw2Y7G70s1c5Rgi5cRYwnVKya
 vWH7/m7R+igWW7KY8ecN8C+4La3Axw5tn3wmBGEYHfQz0sredKnyXeYi38hGEAekqmBV
 VykuPFT55f2+e/ndta6gdwuiaS8NeusZOsvVm9mltFbRDrUUaacHuY/1tDVyduD/4ZFJ
 SBx3/Gi7n64x128BIe6NrM28IOQ17EFc/LgJMqEpeJF9/Yl8BMmKvKQNS0Ib7m8M8y3m
 /pmYCdUZwSP3Di0KLP+wokLGVyuUCsG9sMWjZ7EnWjyjTXNA1lYamQCX+JS6InaYL9C0
 2hxA==
X-Gm-Message-State: AOAM530PQXu8HFLjiICec4rfS/DrD/lf0zyXiPrulVhtovpMFj8Zcfra
 HSeT+xZA4Bn1zDPXsvnOExs=
X-Google-Smtp-Source: ABdhPJxXyvj4hRWrdgEtWV/PcGf7R0fJvjrqlaQ3A42V+efb3QSuM3VXtxd7bpryewDUbBJN29AM8A==
X-Received: by 2002:a17:90a:ff92:: with SMTP id
 hf18mr461075pjb.10.1594058889231; 
 Mon, 06 Jul 2020 11:08:09 -0700 (PDT)
Received: from jordon-HP-15-Notebook-PC.domain.name ([122.171.43.125])
 by smtp.gmail.com with ESMTPSA id 199sm20425544pgc.79.2020.07.06.11.08.06
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Mon, 06 Jul 2020 11:08:08 -0700 (PDT)
From: Souptick Joarder <jrdr.linux@gmail.com>
To: boris.ostrovsky@oracle.com,
	jgross@suse.com,
	sstabellini@kernel.org
Subject: [PATCH v2 2/3] xen/privcmd: Mark pages as dirty
Date: Mon,  6 Jul 2020 23:46:11 +0530
Message-Id: <1594059372-15563-3-git-send-email-jrdr.linux@gmail.com>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1594059372-15563-1-git-send-email-jrdr.linux@gmail.com>
References: <1594059372-15563-1-git-send-email-jrdr.linux@gmail.com>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, Paul Durrant <xadimgnik@gmail.com>,
 linux-kernel@vger.kernel.org, John Hubbard <jhubbard@nvidia.com>,
 Souptick Joarder <jrdr.linux@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Pages need to be marked as dirty before being unpinned in
unlock_pages(); this was previously overlooked and is fixed here.
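The ordering matters: once the last reference is dropped, the page may
be reclaimed and its state can no longer be updated, so the write must
be recorded first. A minimal userspace analog of that ordering
(hypothetical toy types, not the kernel API):

```c
#include <assert.h>

/* Toy page: a reference count plus a dirty flag. Once refs hits zero
 * the "page" is gone and its state can no longer be changed. */
struct toy_page {
	int refs;
	int dirty;
};

static void toy_put(struct toy_page *p)
{
	p->refs--;
}

/* Analog of the fixed unlock_pages() loop body: record the write
 * *before* dropping the reference. */
static void toy_release(struct toy_page *p)
{
	if (!p->dirty)
		p->dirty = 1;	/* set_page_dirty_lock() analog */
	toy_put(p);		/* put_page() analog */
}
```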

Signed-off-by: Souptick Joarder <jrdr.linux@gmail.com>
Suggested-by: John Hubbard <jhubbard@nvidia.com>
Cc: John Hubbard <jhubbard@nvidia.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Paul Durrant <xadimgnik@gmail.com>
---
 drivers/xen/privcmd.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
index 33677ea..f6c1543 100644
--- a/drivers/xen/privcmd.c
+++ b/drivers/xen/privcmd.c
@@ -612,8 +612,11 @@ static void unlock_pages(struct page *pages[], unsigned int nr_pages)
 {
 	unsigned int i;
 
-	for (i = 0; i < nr_pages; i++)
+	for (i = 0; i < nr_pages; i++) {
+		if (!PageDirty(pages[i]))
+			set_page_dirty_lock(pages[i]);
 		put_page(pages[i]);
+	}
 }
 
 static long privcmd_ioctl_dm_op(struct file *file, void __user *udata)
-- 
1.9.1



From xen-devel-bounces@lists.xenproject.org Mon Jul 06 18:08:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jul 2020 18:08:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jsVXN-0003Zp-Sj; Mon, 06 Jul 2020 18:08:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=g8At=AR=gmail.com=jrdr.linux@srs-us1.protection.inumbo.net>)
 id 1jsVXN-0003Zb-6o
 for xen-devel@lists.xenproject.org; Mon, 06 Jul 2020 18:08:17 +0000
X-Inumbo-ID: aa2eb68e-bfb3-11ea-b7bb-bc764e2007e4
Received: from mail-pg1-x544.google.com (unknown [2607:f8b0:4864:20::544])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id aa2eb68e-bfb3-11ea-b7bb-bc764e2007e4;
 Mon, 06 Jul 2020 18:08:16 +0000 (UTC)
Received: by mail-pg1-x544.google.com with SMTP id p3so18743013pgh.3
 for <xen-devel@lists.xenproject.org>; Mon, 06 Jul 2020 11:08:16 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:to:cc:subject:date:message-id:in-reply-to:references;
 bh=ljWc7ncVT9pbFQys6lUrJ2oby+afUmM7eFZjxIhMARg=;
 b=jEACz1RudmH76IjHbdFlmoPGUPJKFxQCq2v9MrfJJA8KB3hgvmPKoJ6dh7OSUm81N/
 tl3W9rYACLLYLqhNvYdQec8GTl2ejOttzQ58EI02HItGcnKiiC5/2aB5g66OCIZC/E1u
 /mQOQRtDqM/vypg2TSJ4IrsIMarYUK8d/G66KFBDESnUPDZbWua4S7CX2i+MFnWN6+gm
 TAl9dt7TTOtSu6v1DhRGv8HYoLhUJhndqmVklEGZWSNyfwLROdQQHvcrbP7ORlKCEii6
 54YlItX98evWB9sVh7JTXi0t4Ki9jcwLstQgwmyBLwwyw2nX7hSrmTmI1pu9DpWmC1n2
 Kljg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
 :references;
 bh=ljWc7ncVT9pbFQys6lUrJ2oby+afUmM7eFZjxIhMARg=;
 b=saViBWNFaphorBvdoiHFUe3Ac16LdrNg+0eCsKxgWYqjK77cfHe4UkD2Oz3hfnEagJ
 0IOF/B3vpeoLM+J8PbUXC97chCG4v39z1gswj6EGLxSBM6W2F5eLCJ2HozwYDa8vg/xe
 Ol8a1psvY+84/QINX/B4+tpS/ZC7u7q7b2G3XEelvL+tbk1cUukyyRG16Tw7uWxXas7P
 dP/1FchQw+0RJeFJEyWAXtAfng8g0ya+Y0aAIXqorYKfebaaZ+FIwhqfOr8aMpMym1lJ
 ZXiGQsspn5ixfoueKlPMbwurwYUOb3BUJIkmeSVQIF/ZplkgpT9HHkixnNi9AWGfJgV7
 QX/Q==
X-Gm-Message-State: AOAM532b+0docCwtZ3yfCmv92gfgShWV7bnxxib0gnIXomslAc2qxPf1
 c3QdlEyMbVzuGFvI9kcXSGI=
X-Google-Smtp-Source: ABdhPJxIo6PQngVuGXtLZOqfdIq2vAywCi8VVu0pPrqJTL0U3sjfe2jLu94RvE7iZvdk15XpRbmE0w==
X-Received: by 2002:a65:5682:: with SMTP id v2mr41106431pgs.231.1594058895951; 
 Mon, 06 Jul 2020 11:08:15 -0700 (PDT)
Received: from jordon-HP-15-Notebook-PC.domain.name ([122.171.43.125])
 by smtp.gmail.com with ESMTPSA id 199sm20425544pgc.79.2020.07.06.11.08.13
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Mon, 06 Jul 2020 11:08:15 -0700 (PDT)
From: Souptick Joarder <jrdr.linux@gmail.com>
To: boris.ostrovsky@oracle.com,
	jgross@suse.com,
	sstabellini@kernel.org
Subject: [PATCH v2 3/3] xen/privcmd: Convert get_user_pages*() to
 pin_user_pages*()
Date: Mon,  6 Jul 2020 23:46:12 +0530
Message-Id: <1594059372-15563-4-git-send-email-jrdr.linux@gmail.com>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1594059372-15563-1-git-send-email-jrdr.linux@gmail.com>
References: <1594059372-15563-1-git-send-email-jrdr.linux@gmail.com>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, Paul Durrant <xadimgnik@gmail.com>,
 linux-kernel@vger.kernel.org, John Hubbard <jhubbard@nvidia.com>,
 Souptick Joarder <jrdr.linux@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

In 2019, we introduced pin_user_pages*(), and get_user_pages*()
callers are now being converted to the new API as appropriate. See
[1] and [2] for more information. This is case 5 as described in
document [1].

[1] Documentation/core-api/pin_user_pages.rst

[2] "Explicit pinning of user-space pages":
        https://lwn.net/Articles/807108/
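The conversion pairs pin_user_pages_fast() with
unpin_user_pages_dirty_lock(), which dirties and unpins in a single
call, replacing the open-coded loop from the previous patch. A
userspace analog of such a consolidated helper (hypothetical names):

```c
#include <assert.h>

/* Toy page with a pin count and a dirty flag. */
struct toy_page {
	int pins;
	int dirty;
};

/* Analog of unpin_user_pages_dirty_lock(): optionally mark each page
 * dirty, then drop its pin, in one helper so callers cannot get the
 * dirty-before-unpin ordering wrong. */
static void toy_unpin_all_dirty(struct toy_page *pages, unsigned int nr,
				int make_dirty)
{
	for (unsigned int i = 0; i < nr; i++) {
		if (make_dirty)
			pages[i].dirty = 1;
		pages[i].pins--;
	}
}
```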

Signed-off-by: Souptick Joarder <jrdr.linux@gmail.com>
Cc: John Hubbard <jhubbard@nvidia.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Paul Durrant <xadimgnik@gmail.com>
---
 drivers/xen/privcmd.c | 10 ++--------
 1 file changed, 2 insertions(+), 8 deletions(-)

diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
index f6c1543..5c5cd24 100644
--- a/drivers/xen/privcmd.c
+++ b/drivers/xen/privcmd.c
@@ -594,7 +594,7 @@ static int lock_pages(
 		if (requested > nr_pages)
 			return -ENOSPC;
 
-		page_count = get_user_pages_fast(
+		page_count = pin_user_pages_fast(
 			(unsigned long) kbufs[i].uptr,
 			requested, FOLL_WRITE, pages);
 		if (page_count < 0)
@@ -610,13 +610,7 @@ static int lock_pages(
 
 static void unlock_pages(struct page *pages[], unsigned int nr_pages)
 {
-	unsigned int i;
-
-	for (i = 0; i < nr_pages; i++) {
-		if (!PageDirty(pages[i]))
-			set_page_dirty_lock(pages[i]);
-		put_page(pages[i]);
-	}
+	unpin_user_pages_dirty_lock(pages, nr_pages, true);
 }
 
 static long privcmd_ioctl_dm_op(struct file *file, void __user *udata)
-- 
1.9.1



From xen-devel-bounces@lists.xenproject.org Mon Jul 06 21:28:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jul 2020 21:28:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jsYf3-0002sf-Jg; Mon, 06 Jul 2020 21:28:25 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=m1N9=AR=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jsYf2-0002sX-U7
 for xen-devel@lists.xenproject.org; Mon, 06 Jul 2020 21:28:24 +0000
X-Inumbo-ID: 9f6c09b0-bfcf-11ea-8cda-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 9f6c09b0-bfcf-11ea-8cda-12813bfff9fa;
 Mon, 06 Jul 2020 21:28:24 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=Jr/+wR2lEJ2qGvLsM5D7+FUf4xZufsnm31lfbKy8ZtY=; b=12vmOFiAgXcI2WXUMOguhcpEd
 2yBIdTPooAmks82wKHxF/vfGbMkfleBMbVslbVu1Jh2vSZlGJaewat0I2VzNzVmqeYhlIQMYzuXND
 9YRbRjgND/dnOkAR8f9ZX/bn8thFUuR0c+ewBgnqBwNu6gFwcZEqVmACKLzeaxlAhlwHM=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jsYf1-0008Ai-Md; Mon, 06 Jul 2020 21:28:23 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jsYf1-0003wO-3T; Mon, 06 Jul 2020 21:28:23 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jsYf1-0007aj-2t; Mon, 06 Jul 2020 21:28:23 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151687-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 151687: tolerable all pass - PUSHED
X-Osstest-Failures: xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=f97f99c8d88ebc108f6adc3ba74e87d53ba57c70
X-Osstest-Versions-That: xen=158912a532fe98f448c688d3571241c9033553bd
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 06 Jul 2020 21:28:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151687 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151687/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  f97f99c8d88ebc108f6adc3ba74e87d53ba57c70
baseline version:
 xen                  158912a532fe98f448c688d3571241c9033553bd

Last test of basis   151679  2020-07-06 14:01:18 Z    0 days
Testing same since   151687  2020-07-06 19:01:28 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   158912a532..f97f99c8d8  f97f99c8d88ebc108f6adc3ba74e87d53ba57c70 -> smoke


From xen-devel-bounces@lists.xenproject.org Mon Jul 06 22:42:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jul 2020 22:42:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jsZns-0000kp-1I; Mon, 06 Jul 2020 22:41:36 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=m1N9=AR=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jsZnq-0000kk-GQ
 for xen-devel@lists.xenproject.org; Mon, 06 Jul 2020 22:41:34 +0000
X-Inumbo-ID: d7b37fec-bfd9-11ea-8ce7-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d7b37fec-bfd9-11ea-8ce7-12813bfff9fa;
 Mon, 06 Jul 2020 22:41:33 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=KdjR3Flb1tfNAI2yhGCOo/Ump3PEPTimg1rvJtI4Bmg=; b=GURMSx7DStASLGfvL1llPJlTB
 pMpz2y9n8QoI5SzYG+G9Dt6ezY2JT7U/UEPR8sz3Ex8o2t0a4hofB9pd9FTfZJoc80+tE3u7jCp7h
 y1/bNpVjx6NxoPmHvl5yoa+i8HKIDH0iLdv+NAkKtwOWBjenLdO5Ye3EaRriN+AUaZdhA=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jsZnp-000151-3Q; Mon, 06 Jul 2020 22:41:33 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jsZno-0000AZ-Ek; Mon, 06 Jul 2020 22:41:32 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jsZno-0006Vh-Dm; Mon, 06 Jul 2020 22:41:32 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151681-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 151681: regressions - FAIL
X-Osstest-Failures: linux-linus:test-armhf-armhf-xl-cubietruck:<job
 status>:broken:regression
 linux-linus:build-i386-pvops:kernel-build:fail:regression
 linux-linus:test-arm64-arm64-libvirt-xsm:guest-start.2:fail:regression
 linux-linus:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:regression
 linux-linus:test-armhf-armhf-xl-cubietruck:host-install(4):broken:heisenbug
 linux-linus:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:heisenbug
 linux-linus:test-arm64-arm64-libvirt-xsm:guest-start/debian.repeat:fail:heisenbug
 linux-linus:test-amd64-amd64-examine:memdisk-try-append:fail:heisenbug
 linux-linus:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:heisenbug
 linux-linus:test-armhf-armhf-xl-vhd:guest-start:fail:heisenbug
 linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-pair:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-examine:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
X-Osstest-Versions-This: linux=dcb7fd82c75ee2d6e6f9d8cc71c52519ed52e258
X-Osstest-Versions-That: linux=1b5044021070efa3259f3e9548dc35d1eb6aa844
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 06 Jul 2020 22:41:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151681 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151681/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-cubietruck    <job status>                broken in 151662
 build-i386-pvops              6 kernel-build             fail REGR. vs. 151214
 test-arm64-arm64-libvirt-xsm 17 guest-start.2  fail in 151662 REGR. vs. 151214
 test-armhf-armhf-xl-vhd 15 guest-start/debian.repeat fail in 151662 REGR. vs. 151214

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl-cubietruck 4 host-install(4) broken in 151662 pass in 151681
 test-amd64-amd64-xl-rtds 18 guest-localmigrate/x10 fail in 151662 pass in 151681
 test-arm64-arm64-libvirt-xsm 16 guest-start/debian.repeat  fail pass in 151662
 test-amd64-amd64-examine      4 memdisk-try-append         fail pass in 151662
 test-armhf-armhf-xl-rtds     16 guest-start/debian.repeat  fail pass in 151662
 test-armhf-armhf-xl-vhd      11 guest-start                fail pass in 151662

Tests which did not succeed, but are not blocking:
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-examine       1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd     12 migrate-support-check fail in 151662 never pass
 test-armhf-armhf-xl-vhd 13 saverestore-support-check fail in 151662 never pass
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 151214
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 151214
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 151214
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 151214
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 151214
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 151214
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass

version targeted for testing:
 linux                dcb7fd82c75ee2d6e6f9d8cc71c52519ed52e258
baseline version:
 linux                1b5044021070efa3259f3e9548dc35d1eb6aa844

Last test of basis   151214  2020-06-18 02:27:46 Z   18 days
Failing since        151236  2020-06-19 19:10:35 Z   17 days   25 attempts
Testing same since   151662  2020-07-06 02:35:00 Z    0 days    2 attempts

------------------------------------------------------------
568 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             fail    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         blocked 
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-armhf-armhf-xl-cubietruck broken

Not pushing.

(No revision log; it would be 27484 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Jul 06 23:28:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jul 2020 23:28:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jsaWy-00047A-Jc; Mon, 06 Jul 2020 23:28:12 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=8e5v=AR=gmail.com=pauldzim@srs-us1.protection.inumbo.net>)
 id 1jsaWw-000475-I2
 for xen-devel@lists.xenproject.org; Mon, 06 Jul 2020 23:28:10 +0000
X-Inumbo-ID: 5a0e5bbe-bfe0-11ea-bb8b-bc764e2007e4
Received: from mail-io1-xd41.google.com (unknown [2607:f8b0:4864:20::d41])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5a0e5bbe-bfe0-11ea-bb8b-bc764e2007e4;
 Mon, 06 Jul 2020 23:28:09 +0000 (UTC)
Received: by mail-io1-xd41.google.com with SMTP id y2so41226742ioy.3
 for <xen-devel@lists.xenproject.org>; Mon, 06 Jul 2020 16:28:09 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc; bh=o2au+JJF3iFRx8/wMwMY9hqc8NDSgxZJ6W8bmJa6B3w=;
 b=UV175Ip46nlRSvIuHO9aabEhDAhS8PyIj2iRmpR6xqQ+kARmhfpHBtcbb4upNUIVED
 w5COe3pxivzOL5jcL2J6NewBUCoVXUl+Yk7+OvfUBSS+NnrArY64Q0rk8TgKL+7nngM/
 93LBL/adIxwmHtjZwApU05ehx7cWUU90hu9tzxmD1gF9QVNRPi7ljqae5aSh1pybr1qu
 HS9EBrcs8sD4WJ2XiM68eZqI8djubBpP1SsWGRTaNd///6tg//TxTkt6Hhexrze39Dcu
 qHO+nTI/RS9987CX3EVmGPh1Ealpci/8M576p/Lx0DKzavHAfVxg9CsNV43GFb/la/sX
 QYIg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc;
 bh=o2au+JJF3iFRx8/wMwMY9hqc8NDSgxZJ6W8bmJa6B3w=;
 b=IBuuBv0wUpehKhFeDSYd967rWumMOit2kZRMdA2UlslcgttkEeJDSIYPHVpI9UiPKL
 2rKJVsn8844HQwCHqw3LXC+BMYEnyjbcaAGF1nkx7sEqwL6K0PMG0NXf7GtC+fm/pJ6n
 GF9NFGGFsk9CkTL3NxjKugOTR0pND+NbieJwMqpqLCHBIEJn/a69m95GFMxIIiPxeKJ4
 m9chhcK/VNUmbqj7N4Vrsgf139nY5COVAbeVOMyJOWZrZAdozg0UhA6fUKiGZxknHYG4
 77YWkYm6FFUxpq7dmR7EcbDdtKWK37mvunkZzGyUKtFJHMv5Gr7jgVJQW/QfHyoreTfQ
 yu9Q==
X-Gm-Message-State: AOAM530SgFnyLmXcE48zIv/Ct5SVf96Cl8QC7kD6ycoeEBQUT77RRTTA
 SxfHp8y+8pN8KnbO5g2v77Rg7KGhhI9a0i0hEIo=
X-Google-Smtp-Source: ABdhPJxiX7qsyPhPsM9b6aXZ6jrmsQJOHwK5JNaERwfHbIHo+HluncpsULlPeV6Fxuk7XqAu85zdwY+jX06UBRQ27X8=
X-Received: by 2002:a02:ac8e:: with SMTP id x14mr55270700jan.57.1594078088926; 
 Mon, 06 Jul 2020 16:28:08 -0700 (PDT)
MIME-Version: 1.0
References: <20200704144943.18292-1-f4bug@amsat.org>
 <20200704144943.18292-8-f4bug@amsat.org>
In-Reply-To: <20200704144943.18292-8-f4bug@amsat.org>
From: Paul Zimmerman <pauldzim@gmail.com>
Date: Mon, 6 Jul 2020 16:27:42 -0700
Message-ID: <CADBGO79yDBVagNRfvKLG7LwVUX_4DK7v6DA5p1CEHboP3wOH7Q@mail.gmail.com>
Subject: Re: [PATCH 07/26] hw/usb/hcd-dwc2: Restrict some headers to source
To: =?UTF-8?Q?Philippe_Mathieu=2DDaud=C3=A9?= <f4bug@amsat.org>
Content-Type: multipart/alternative; boundary="0000000000003c1f1505a9ce3ab0"
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Peter Maydell <peter.maydell@linaro.org>,
 "Michael S. Tsirkin" <mst@redhat.com>,
 Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>,
 QEMU Developers <qemu-devel@nongnu.org>, Jiaxun Yang <jiaxun.yang@flygoat.com>,
 BALATON Zoltan <balaton@eik.bme.hu>, Gerd Hoffmann <kraxel@redhat.com>,
 "Edgar E. Iglesias" <edgar.iglesias@gmail.com>,
 Huacai Chen <chenhc@lemote.com>, Stefano Stabellini <sstabellini@kernel.org>,
 xen-devel@lists.xenproject.org, Yoshinori Sato <ysato@users.sourceforge.jp>,
 Paul Durrant <paul@xen.org>, Magnus Damm <magnus.damm@gmail.com>,
 Markus Armbruster <armbru@redhat.com>,
 =?UTF-8?Q?Herv=C3=A9_Poussineau?= <hpoussin@reactos.org>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 Anthony Perard <anthony.perard@citrix.com>,
 Samuel Thibault <samuel.thibault@ens-lyon.org>,
 Leif Lindholm <leif@nuviainc.com>, Andrzej Zaborowski <balrogg@gmail.com>,
 Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
 Eduardo Habkost <ehabkost@redhat.com>,
 Alistair Francis <alistair@alistair23.me>,
 "Dr. David Alan Gilbert" <dgilbert@redhat.com>,
 Beniamino Galvani <b.galvani@gmail.com>,
 Niek Linnenbank <nieklinnenbank@gmail.com>, qemu-arm@nongnu.org,
 =?UTF-8?B?TWFyYy1BbmRyw6kgTHVyZWF1?= <marcandre.lureau@redhat.com>,
 Richard Henderson <rth@twiddle.net>,
 Radoslaw Biernacki <radoslaw.biernacki@linaro.org>,
 Igor Mitsyanko <i.mitsyanko@gmail.com>,
 =?UTF-8?Q?Philippe_Mathieu=2DDaud=C3=A9?= <philmd@redhat.com>,
 qemu-ppc@nongnu.org, David Gibson <david@gibson.dropbear.id.au>,
 Paolo Bonzini <pbonzini@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

--0000000000003c1f1505a9ce3ab0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Sat, Jul 4, 2020 at 7:50 AM Philippe Mathieu-Daudé <f4bug@amsat.org> wrote:

> The header "usb/hcd-dwc2.h" doesn't need to include "qemu/timer.h",
> "sysemu/dma.h", "hw/irq.h" (the types required are forward declared).
> Include them in the source file which is the only one requiring the
> function declarations.
>
> Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
> ---
>  hw/usb/hcd-dwc2.h | 3 ---
>  hw/usb/hcd-dwc2.c | 3 +++
>  2 files changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/hw/usb/hcd-dwc2.h b/hw/usb/hcd-dwc2.h
> index 4ba809a07b..2adf0f53c7 100644
> --- a/hw/usb/hcd-dwc2.h
> +++ b/hw/usb/hcd-dwc2.h
> @@ -19,11 +19,8 @@
>  #ifndef HW_USB_DWC2_H
>  #define HW_USB_DWC2_H
>
> -#include "qemu/timer.h"
> -#include "hw/irq.h"
>  #include "hw/sysbus.h"
>  #include "hw/usb.h"
> -#include "sysemu/dma.h"
>
>  #define DWC2_MMIO_SIZE      0x11000
>
> diff --git a/hw/usb/hcd-dwc2.c b/hw/usb/hcd-dwc2.c
> index 590e75b455..ccf05d0823 100644
> --- a/hw/usb/hcd-dwc2.c
> +++ b/hw/usb/hcd-dwc2.c
> @@ -36,8 +36,11 @@
>  #include "qapi/error.h"
>  #include "hw/usb/dwc2-regs.h"
>  #include "hw/usb/hcd-dwc2.h"
> +#include "hw/irq.h"
> +#include "sysemu/dma.h"
>  #include "migration/vmstate.h"
>  #include "trace.h"
> +#include "qemu/timer.h"
>  #include "qemu/log.h"
>  #include "hw/qdev-properties.h"
>
> --
> 2.21.3
>
>
Reviewed-by: Paul Zimmerman <pauldzim@gmail.com>

--0000000000003c1f1505a9ce3ab0--


From xen-devel-bounces@lists.xenproject.org Mon Jul 06 23:29:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jul 2020 23:29:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jsaXp-0004A7-TQ; Mon, 06 Jul 2020 23:29:05 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=8e5v=AR=gmail.com=pauldzim@srs-us1.protection.inumbo.net>)
 id 1jsaXo-0004A2-Lj
 for xen-devel@lists.xenproject.org; Mon, 06 Jul 2020 23:29:04 +0000
X-Inumbo-ID: 7aa5a5d0-bfe0-11ea-bca7-bc764e2007e4
Received: from mail-il1-x142.google.com (unknown [2607:f8b0:4864:20::142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7aa5a5d0-bfe0-11ea-bca7-bc764e2007e4;
 Mon, 06 Jul 2020 23:29:04 +0000 (UTC)
Received: by mail-il1-x142.google.com with SMTP id x9so34464267ila.3
 for <xen-devel@lists.xenproject.org>; Mon, 06 Jul 2020 16:29:04 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc; bh=EqpTLnds0cQjQhPTtDXOEbWvKqfHq/uTNO2RA9Yz7QI=;
 b=UgeotncesG4zQVHNWYePkXPakT+0WIklkaH3zTOngjzTCBnpbdZIBJTHbfi0UmEFS+
 Vcm5+RldKaAmXqg95SgC2MIyumI1qeN9VPKNXGncr1QeMzipgKo4DSt+g3fZ8VHzMSef
 /64emOo0HYBwwnK3UZvcG7ZP73fb9yjItIWt4jFbJG9YOYm8UaCknaRZcVr2FyAL2pFr
 0UeJvx1+inWoRFPAziZTAvU3SCT7J1FIW5pB2PApAWcHLv6eK5KGkNJwOnXx0rqFxs+h
 JqwOFDCq92Wzf+D0PGoqUxr9dFtcFKHAbMbdUPdddOWgcCZRQ8wbcef5+FD8tg64+JhH
 BNpA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc;
 bh=EqpTLnds0cQjQhPTtDXOEbWvKqfHq/uTNO2RA9Yz7QI=;
 b=ZR+DJlmcoxD+f77JpbouzXdW562bYEQ4QKhy7v406I8MU8yGn83F8WKLTXxZTivUdO
 NVtlVf96zfo33CaS/We51QhJN4JH7pIzKhR139vjYUmWNh1r/TaAS/TGanA80uJ34PkC
 G2wa0z10XxeMjRBTZ26oTnbs99ekxAcGKLOS5oADdex1E470KMS3Wnhi4DL1LewuyZpO
 fItWk1khbDW2hwdHejKKD46KzOX9xXBCu7Wo0RiG4ui7sHkvlLD3QzUfJ1VlhZYIBcYG
 4DRYH9zgTvo3KCIQir3LjSIreZx/mWCH+yWZGGPPSdxZW4wdTLYQURebYi0HV9jNS6Ia
 Twuw==
X-Gm-Message-State: AOAM53296UN3581ACndV6EsM8p1QfHP8DWVeNSkYCyigy2Z4NgIFXOpe
 RL+BMuSzlkHGlCwn4FqrXQM2O9BSFfTOcau99z0=
X-Google-Smtp-Source: ABdhPJw3JUW4GbL95AweZL605ZuGn+6rCK3ngUlA6e50fJsVbbkH5ckFabJvO/a6lXcUvM+d3wfLRaCN/a9ELGQNiVI=
X-Received: by 2002:a92:58d1:: with SMTP id z78mr32232484ilf.276.1594078143637; 
 Mon, 06 Jul 2020 16:29:03 -0700 (PDT)
MIME-Version: 1.0
References: <20200704144943.18292-1-f4bug@amsat.org>
 <20200704144943.18292-7-f4bug@amsat.org>
In-Reply-To: <20200704144943.18292-7-f4bug@amsat.org>
From: Paul Zimmerman <pauldzim@gmail.com>
Date: Mon, 6 Jul 2020 16:28:37 -0700
Message-ID: <CADBGO78wa9Rth0=cszD6ZNo_y5ZtLQRyjvZLr-D45tuoEe_A8g@mail.gmail.com>
Subject: Re: [PATCH 06/26] hw/usb/hcd-dwc2: Remove unnecessary includes
To: =?UTF-8?Q?Philippe_Mathieu=2DDaud=C3=A9?= <f4bug@amsat.org>
Content-Type: multipart/alternative; boundary="0000000000007eeff905a9ce3d8d"
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Peter Maydell <peter.maydell@linaro.org>,
 "Michael S. Tsirkin" <mst@redhat.com>,
 Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>,
 QEMU Developers <qemu-devel@nongnu.org>, Jiaxun Yang <jiaxun.yang@flygoat.com>,
 BALATON Zoltan <balaton@eik.bme.hu>, Gerd Hoffmann <kraxel@redhat.com>,
 "Edgar E. Iglesias" <edgar.iglesias@gmail.com>,
 Huacai Chen <chenhc@lemote.com>, Stefano Stabellini <sstabellini@kernel.org>,
 xen-devel@lists.xenproject.org, Yoshinori Sato <ysato@users.sourceforge.jp>,
 Paul Durrant <paul@xen.org>, Magnus Damm <magnus.damm@gmail.com>,
 Markus Armbruster <armbru@redhat.com>,
 =?UTF-8?Q?Herv=C3=A9_Poussineau?= <hpoussin@reactos.org>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 Anthony Perard <anthony.perard@citrix.com>,
 Samuel Thibault <samuel.thibault@ens-lyon.org>,
 Leif Lindholm <leif@nuviainc.com>, Andrzej Zaborowski <balrogg@gmail.com>,
 Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
 Eduardo Habkost <ehabkost@redhat.com>,
 Alistair Francis <alistair@alistair23.me>,
 "Dr. David Alan Gilbert" <dgilbert@redhat.com>,
 Beniamino Galvani <b.galvani@gmail.com>,
 Niek Linnenbank <nieklinnenbank@gmail.com>, qemu-arm@nongnu.org,
 =?UTF-8?B?TWFyYy1BbmRyw6kgTHVyZWF1?= <marcandre.lureau@redhat.com>,
 Richard Henderson <rth@twiddle.net>,
 Radoslaw Biernacki <radoslaw.biernacki@linaro.org>,
 Igor Mitsyanko <i.mitsyanko@gmail.com>,
 =?UTF-8?Q?Philippe_Mathieu=2DDaud=C3=A9?= <philmd@redhat.com>,
 qemu-ppc@nongnu.org, David Gibson <david@gibson.dropbear.id.au>,
 Paolo Bonzini <pbonzini@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

--0000000000007eeff905a9ce3d8d
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Sat, Jul 4, 2020 at 7:50 AM Philippe Mathieu-Daudé <f4bug@amsat.org> wrote:

> "qemu/error-report.h" and "qemu/main-loop.h" are not used.
> Remove them.
>
> Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
> ---
>  hw/usb/hcd-dwc2.c | 2 --
>  1 file changed, 2 deletions(-)
>
> diff --git a/hw/usb/hcd-dwc2.c b/hw/usb/hcd-dwc2.c
> index 72cbd051f3..590e75b455 100644
> --- a/hw/usb/hcd-dwc2.c
> +++ b/hw/usb/hcd-dwc2.c
> @@ -39,8 +39,6 @@
>  #include "migration/vmstate.h"
>  #include "trace.h"
>  #include "qemu/log.h"
> -#include "qemu/error-report.h"
> -#include "qemu/main-loop.h"
>  #include "hw/qdev-properties.h"
>
>  #define USB_HZ_FS       12000000
> --
> 2.21.3
>
>
Reviewed-by: Paul Zimmerman <pauldzim@gmail.com>

--0000000000007eeff905a9ce3d8d--


From xen-devel-bounces@lists.xenproject.org Mon Jul 06 23:29:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 06 Jul 2020 23:29:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jsaYc-0004FN-6r; Mon, 06 Jul 2020 23:29:54 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=8e5v=AR=gmail.com=pauldzim@srs-us1.protection.inumbo.net>)
 id 1jsaYa-0004FG-Pa
 for xen-devel@lists.xenproject.org; Mon, 06 Jul 2020 23:29:52 +0000
X-Inumbo-ID: 9755f432-bfe0-11ea-8496-bc764e2007e4
Received: from mail-io1-xd44.google.com (unknown [2607:f8b0:4864:20::d44])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9755f432-bfe0-11ea-8496-bc764e2007e4;
 Mon, 06 Jul 2020 23:29:52 +0000 (UTC)
Received: by mail-io1-xd44.google.com with SMTP id e64so36301257iof.12
 for <xen-devel@lists.xenproject.org>; Mon, 06 Jul 2020 16:29:52 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc; bh=guo4Cz0saHkzfASqoW+5Vfcn07JMdPw0vLJcHYngdVg=;
 b=Mgg1+R1wgP3yxWPoxwWzVSxY2DBC9uea+PFILvRzMeodl7CDSW6RFuIaULNGjbl5Wm
 e+hCc1T3Xll8LfByGqK3TY3w+7eAb71d7Ae0R0447AX36sCNNoaUbvfaW3MjOyeRnMve
 s+nQuLbys2Cq6ZKBNVYoyxUZYEAcwfG4E1nUQkp22Dcn+sEZ6SgV+248afpNz2u0S6YB
 DeqGJkSGAO5axe7LmsN1L9ov27T4ZifdZ/+d5Qwvyv7PifaQb7LkaWs8K+/MJCfwnF2U
 OM/aWXU/Cp7UwxJaN304YU1X6zGqzj9Ed2BvtxVv3622J/wOVdGJLZ5TIYdbStGq2jRR
 0dnw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc;
 bh=guo4Cz0saHkzfASqoW+5Vfcn07JMdPw0vLJcHYngdVg=;
 b=heynhlQCBo97UrKL70S9EhL092g0zjme+SJyQiiT5LCTlnNCON/NoKV0DwXzJcW9hR
 +i/3t4MBFPLmZ5qPnF7shDBa1eUyPn+6eVakvnQBdFxkkgKh102AUQl8QG/LSwXxAcBD
 kROYFdgTpjr7iiN/VwbmT9HnE/4oj+EI3p4Q5nirtPF6hT9qxloaaZb48WLm9B9xuGwm
 Guhc7Fc4+Ioq5bKVD/kg4qIeQ+uFfQ6AXf8O1bqKjKL3Pf7x+g4ANI3JY3keKQg8/tgt
 teDsz7dHcMNYlgdBEmALZMmGZuvNGk2DnpJPzOOMOlpcKPok4taVLpghzZpGTEhpZeaS
 bitA==
X-Gm-Message-State: AOAM533G7ZhQIp1XYpewnSoUOLf26AdEHqLGaoWpBl3VWhmA0JC4Po+N
 iFWRMQVMLpjrfMQZHb5TBmxUQkkgdVlQO16eFIs=
X-Google-Smtp-Source: ABdhPJxccJUFvrRtMWh3+owZpgS6w9Fdz4b7CnGGaHawyuF5NA7yGHARPV7RXQDl8sKlO2zqMYBeBAVye8DBVl2kztM=
X-Received: by 2002:a5e:dc03:: with SMTP id b3mr26345208iok.97.1594078191753; 
 Mon, 06 Jul 2020 16:29:51 -0700 (PDT)
MIME-Version: 1.0
References: <20200704144943.18292-1-f4bug@amsat.org>
 <20200704144943.18292-9-f4bug@amsat.org>
In-Reply-To: <20200704144943.18292-9-f4bug@amsat.org>
From: Paul Zimmerman <pauldzim@gmail.com>
Date: Mon, 6 Jul 2020 16:29:25 -0700
Message-ID: <CADBGO7__svJLvtHjyrn_BhqTnWxJWLbv0i0oK4rjmFLiFb82Aw@mail.gmail.com>
Subject: Re: [PATCH 08/26] hw/usb/hcd-dwc2: Restrict 'dwc2-regs.h' scope
To: =?UTF-8?Q?Philippe_Mathieu=2DDaud=C3=A9?= <f4bug@amsat.org>
Content-Type: multipart/alternative; boundary="0000000000005d234a05a9ce40b4"
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Peter Maydell <peter.maydell@linaro.org>,
 "Michael S. Tsirkin" <mst@redhat.com>,
 Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>,
 QEMU Developers <qemu-devel@nongnu.org>, Jiaxun Yang <jiaxun.yang@flygoat.com>,
 BALATON Zoltan <balaton@eik.bme.hu>, Gerd Hoffmann <kraxel@redhat.com>,
 "Edgar E. Iglesias" <edgar.iglesias@gmail.com>,
 Huacai Chen <chenhc@lemote.com>, Stefano Stabellini <sstabellini@kernel.org>,
 xen-devel@lists.xenproject.org, Yoshinori Sato <ysato@users.sourceforge.jp>,
 Paul Durrant <paul@xen.org>, Magnus Damm <magnus.damm@gmail.com>,
 Markus Armbruster <armbru@redhat.com>,
 =?UTF-8?Q?Herv=C3=A9_Poussineau?= <hpoussin@reactos.org>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 Anthony Perard <anthony.perard@citrix.com>,
 Samuel Thibault <samuel.thibault@ens-lyon.org>,
 Leif Lindholm <leif@nuviainc.com>, Andrzej Zaborowski <balrogg@gmail.com>,
 Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
 Eduardo Habkost <ehabkost@redhat.com>,
 Alistair Francis <alistair@alistair23.me>,
 "Dr. David Alan Gilbert" <dgilbert@redhat.com>,
 Beniamino Galvani <b.galvani@gmail.com>,
 Niek Linnenbank <nieklinnenbank@gmail.com>, qemu-arm@nongnu.org,
 =?UTF-8?B?TWFyYy1BbmRyw6kgTHVyZWF1?= <marcandre.lureau@redhat.com>,
 Richard Henderson <rth@twiddle.net>,
 Radoslaw Biernacki <radoslaw.biernacki@linaro.org>,
 Igor Mitsyanko <i.mitsyanko@gmail.com>,
 =?UTF-8?Q?Philippe_Mathieu=2DDaud=C3=A9?= <philmd@redhat.com>,
 qemu-ppc@nongnu.org, David Gibson <david@gibson.dropbear.id.au>,
 Paolo Bonzini <pbonzini@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

--0000000000005d234a05a9ce40b4
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Sat, Jul 4, 2020 at 7:50 AM Philippe Mathieu-Daudé <f4bug@amsat.org>
wrote:

> We only use these register definitions in files under the
> hw/usb/ directory. Keep that header local by moving it there.
>
> Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
> ---
>  {include/hw => hw}/usb/dwc2-regs.h | 0
>  hw/usb/hcd-dwc2.c                  | 2 +-
>  2 files changed, 1 insertion(+), 1 deletion(-)
>  rename {include/hw => hw}/usb/dwc2-regs.h (100%)
>
> diff --git a/include/hw/usb/dwc2-regs.h b/hw/usb/dwc2-regs.h
> similarity index 100%
> rename from include/hw/usb/dwc2-regs.h
> rename to hw/usb/dwc2-regs.h
> diff --git a/hw/usb/hcd-dwc2.c b/hw/usb/hcd-dwc2.c
> index ccf05d0823..252b60ef65 100644
> --- a/hw/usb/hcd-dwc2.c
> +++ b/hw/usb/hcd-dwc2.c
> @@ -34,7 +34,6 @@
>  #include "qemu/osdep.h"
>  #include "qemu/units.h"
>  #include "qapi/error.h"
> -#include "hw/usb/dwc2-regs.h"
>  #include "hw/usb/hcd-dwc2.h"
>  #include "hw/irq.h"
>  #include "sysemu/dma.h"
> @@ -43,6 +42,7 @@
>  #include "qemu/timer.h"
>  #include "qemu/log.h"
>  #include "hw/qdev-properties.h"
> +#include "dwc2-regs.h"
>
>  #define USB_HZ_FS       12000000
>  #define USB_HZ_HS       96000000
> --
> 2.21.3
>
>
Reviewed-by: Paul Zimmerman <pauldzim@gmail.com>

--0000000000005d234a05a9ce40b4--


From xen-devel-bounces@lists.xenproject.org Tue Jul 07 00:17:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jul 2020 00:17:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jsbIm-0000Up-TP; Tue, 07 Jul 2020 00:17:36 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=pYfC=AS=gmail.com=alistair23@srs-us1.protection.inumbo.net>)
 id 1jsbIm-0000Uf-An
 for xen-devel@lists.xenproject.org; Tue, 07 Jul 2020 00:17:36 +0000
X-Inumbo-ID: 422cbb56-bfe7-11ea-b7bb-bc764e2007e4
Received: from mail-il1-x142.google.com (unknown [2607:f8b0:4864:20::142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 422cbb56-bfe7-11ea-b7bb-bc764e2007e4;
 Tue, 07 Jul 2020 00:17:35 +0000 (UTC)
Received: by mail-il1-x142.google.com with SMTP id x9so34538872ila.3
 for <xen-devel@lists.xenproject.org>; Mon, 06 Jul 2020 17:17:35 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc:content-transfer-encoding;
 bh=oT6h3FPPrO2frJk8eR6JrALI5YXxXiAsbZ1+5VELi6s=;
 b=IGZ7nFq1jPrJASsShupXRZ987k64O5ZF1peALAk0W9yQa/epc+jZ7vJemnzFRm7qQ0
 XwUCGVF4g7dXUL32MNSRP5OIl5XaEN5kqPkXvDFiw/reJhMLCjqVVM3JcyDTXftTKGCv
 l6VDo3EsePWKMENABCDslc18HsS205LYm9FU8ogayVgsnxjJILxEIynzpk6/Y4tTZIXY
 wybT1mS6TNxf/py8eH90SCc/1SxghIT0cqTDZQfDr7lkBtsbfeh9HyTS8S44H1WgDtnY
 H3IkdCsGUXjn2fTUA3Rjd0Coqk2SxNNZqZqW15ZC3DY2I0HXIZiiII8qF6hnubRW5de6
 Pf2A==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc:content-transfer-encoding;
 bh=oT6h3FPPrO2frJk8eR6JrALI5YXxXiAsbZ1+5VELi6s=;
 b=bgfk/68IbpdlShsfG+3ikDMDXAGxZWZIBTNQfxOzLElBzQajgyGAm739rVss/yCekh
 Hvs8K69Cjbn5UwjlDMDWoNFfVlJZWOHkIuPSxzhI6+QXypbzXLG62mjyjXFZntdhxdQ8
 KHee/0nBF5+q74D5c+Q4HRrekcfNa2s4facZSMoFEqzr2+rCA8AFadSrmggWohovKgvn
 Uz027CkDosuL2DeX7qnkddFzTG0jLjIaFTbhOnn76i9rNUhxY/9IjuQcrvvCcbbZHVqT
 AhKPoJrQXnnKD+6IkjgYJDwUrnm37mlj0ELxTcvSXJ4HNx5H5FDtXocfyUPA5c3hclAF
 DdZA==
X-Gm-Message-State: AOAM530pa26r+oD5z7XlUuERgIMsobEJME97OGuFRRNTugV/Bnqc4T70
 9HNvp2Xm3ui1BEgYGOOu5qnqrdcdZfSq8LpyVTM=
X-Google-Smtp-Source: ABdhPJzycshpPdNG1lvs8VoNh6fTd7DN0RtstWqgoBIV2gMFAMvggj4/NTvj4Gb9y16ym2XuomQHcm42Kq1Ixg4TWS4=
X-Received: by 2002:a92:bb84:: with SMTP id x4mr34079638ilk.177.1594081055408; 
 Mon, 06 Jul 2020 17:17:35 -0700 (PDT)
MIME-Version: 1.0
References: <20200704144943.18292-1-f4bug@amsat.org>
 <20200704144943.18292-20-f4bug@amsat.org>
In-Reply-To: <20200704144943.18292-20-f4bug@amsat.org>
From: Alistair Francis <alistair23@gmail.com>
Date: Mon, 6 Jul 2020 17:07:47 -0700
Message-ID: <CAKmqyKNW1V_vOPw4AdBP5BpD2ueOT1NFz4hUON82VMyErLzgyw@mail.gmail.com>
Subject: Re: [PATCH 19/26] hw/ppc/spapr: Use usb_get_port_path()
To: =?UTF-8?Q?Philippe_Mathieu=2DDaud=C3=A9?= <f4bug@amsat.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Peter Maydell <peter.maydell@linaro.org>,
 "Michael S. Tsirkin" <mst@redhat.com>,
 Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>,
 "qemu-devel@nongnu.org Developers" <qemu-devel@nongnu.org>,
 BALATON Zoltan <balaton@eik.bme.hu>, Gerd Hoffmann <kraxel@redhat.com>,
 "Edgar E. Iglesias" <edgar.iglesias@gmail.com>,
 Huacai Chen <chenhc@lemote.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Yoshinori Sato <ysato@users.sourceforge.jp>, Paul Durrant <paul@xen.org>,
 Magnus Damm <magnus.damm@gmail.com>, Markus Armbruster <armbru@redhat.com>,
 =?UTF-8?Q?Herv=C3=A9_Poussineau?= <hpoussin@reactos.org>,
 Anthony Perard <anthony.perard@citrix.com>,
 "open list:X86" <xen-devel@lists.xenproject.org>,
 Leif Lindholm <leif@nuviainc.com>,
 =?UTF-8?Q?Philippe_Mathieu=2DDaud=C3=A9?= <philmd@redhat.com>,
 Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
 Eduardo Habkost <ehabkost@redhat.com>,
 Alistair Francis <alistair@alistair23.me>,
 "Dr. David Alan Gilbert" <dgilbert@redhat.com>,
 Beniamino Galvani <b.galvani@gmail.com>,
 =?UTF-8?B?TWFyYy1BbmRyw6kgTHVyZWF1?= <marcandre.lureau@redhat.com>,
 Niek Linnenbank <nieklinnenbank@gmail.com>, qemu-arm <qemu-arm@nongnu.org>,
 Samuel Thibault <samuel.thibault@ens-lyon.org>,
 Richard Henderson <rth@twiddle.net>,
 Radoslaw Biernacki <radoslaw.biernacki@linaro.org>,
 Igor Mitsyanko <i.mitsyanko@gmail.com>, Paul Zimmerman <pauldzim@gmail.com>,
 "open list:New World" <qemu-ppc@nongnu.org>,
 David Gibson <david@gibson.dropbear.id.au>,
 Paolo Bonzini <pbonzini@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Sat, Jul 4, 2020 at 7:59 AM Philippe Mathieu-Daudé <f4bug@amsat.org> wrote:
>
> Avoid accessing the USBDevice internals; use the
> recently added usb_get_port_path() helper instead.
>
> Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>

Reviewed-by: Alistair Francis <alistair.francis@wdc.com>

Alistair

> ---
>  hw/ppc/spapr.c | 6 ++++--
>  1 file changed, 4 insertions(+), 2 deletions(-)
>
> diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
> index f6f034d039..221d3e7a8c 100644
> --- a/hw/ppc/spapr.c
> +++ b/hw/ppc/spapr.c
> @@ -3121,7 +3121,8 @@ static char *spapr_get_fw_dev_path(FWPathProvider *p, BusState *bus,
>               * We use SRP luns of the form 01000000 | (usb-port << 16) | lun
>               * in the top 32 bits of the 64-bit LUN
>               */
> -            unsigned usb_port = atoi(usb->port->path);
> +            g_autofree char *usb_port_path = usb_get_port_path(usb);
> +            unsigned usb_port = atoi(usb_port_path);
>              unsigned id = 0x1000000 | (usb_port << 16) | d->lun;
>              return g_strdup_printf("%s@%"PRIX64, qdev_fw_name(dev),
>                                     (uint64_t)id << 32);
> @@ -3137,7 +3138,8 @@ static char *spapr_get_fw_dev_path(FWPathProvider *p, BusState *bus,
>      if (strcmp("usb-host", qdev_fw_name(dev)) == 0) {
>          USBDevice *usbdev = CAST(USBDevice, dev, TYPE_USB_DEVICE);
>          if (usb_host_dev_is_scsi_storage(usbdev)) {
> -            return g_strdup_printf("storage@%s/disk", usbdev->port->path);
> +            g_autofree char *usb_port_path = usb_get_port_path(usbdev);
> +            return g_strdup_printf("storage@%s/disk", usb_port_path);
>          }
>      }
>
> --
> 2.21.3
>
>


From xen-devel-bounces@lists.xenproject.org Tue Jul 07 00:17:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jul 2020 00:17:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jsbIX-0000Tt-Gl; Tue, 07 Jul 2020 00:17:21 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=pYfC=AS=gmail.com=alistair23@srs-us1.protection.inumbo.net>)
 id 1jsbIV-0000To-Mo
 for xen-devel@lists.xenproject.org; Tue, 07 Jul 2020 00:17:19 +0000
X-Inumbo-ID: 381ea14c-bfe7-11ea-bb8b-bc764e2007e4
Received: from mail-io1-xd42.google.com (unknown [2607:f8b0:4864:20::d42])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 381ea14c-bfe7-11ea-bb8b-bc764e2007e4;
 Tue, 07 Jul 2020 00:17:18 +0000 (UTC)
Received: by mail-io1-xd42.google.com with SMTP id e64so36388730iof.12
 for <xen-devel@lists.xenproject.org>; Mon, 06 Jul 2020 17:17:18 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc:content-transfer-encoding;
 bh=xZHsjKeHdF8oSJPzmzNmydGOMCfVdbd//sjNZ5Rs0FA=;
 b=KqyIFaGBNvxmnVxltXTlM8plCPYeRrgISiKJM/tD9Kbd27KFI1CC7U0Ah/UKNvGvBg
 udFg++6flfZ4WUK+dzYiQSHta158kukJeZPAWj8r78SZcxpO46xXXxk1U51fbtaaN8pW
 UUCcU4MWbVB6kfLXhL8LBTDGNTCBtLlclVpwbmfb8fc3Pybi9c1feC4krWSyTbkz5e1Y
 y4Z6EjmYKfDfUXGci6vsVY6FDN+h/uZ7WFoe4Z/Mng26z5b9+VGMf5VATYlQYQgKYJbG
 M2f7j3PxAAcsq2PA3Cmtzp0bdi4S1My+kFFeorzw6Q9DCqpqjbysqVEwUzznWFoV5qpK
 AgtA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc:content-transfer-encoding;
 bh=xZHsjKeHdF8oSJPzmzNmydGOMCfVdbd//sjNZ5Rs0FA=;
 b=jCoeHnOJQVSFl6HqDSa4bWGzsLdl5sWmmlUZB9LlPq3Bw/Eqk1Fxl2NzRjKLCnniHA
 e10DN3+vWNTqiRHAOO9J1t3BsD+QiAeYmGs8+tiXY9tHQ9K8rMXmNZOwmr6w6liKDEdi
 rlXzVqQ5HIpVT6Hdbumg3dCffMesYzKcgZMltjo+6/p1bp5L4J0RcYvMNKcNB6exK7cs
 W5RRMWmL5MwwiKrqJmW4SRkEUrlQMU8lE2eXP7KcsT0bCuu3xofd5RXJAagEPhZtJ+5h
 6ohsP2H9MypsxZJ/tshpdsSsiMV8ehwAS999C29owNHzgBqEx3fnozeWxIADBGvFeInP
 71oQ==
X-Gm-Message-State: AOAM5310WzDEnTbSXq/oQ7Gtzy/qWQfw1oznJF/0MLHg3bYLdZRqxsAo
 nKwuioRg3rE01+KQqSjOWfNYha/7cqQJd8+GKB8=
X-Google-Smtp-Source: ABdhPJwKg5wBZ5JjpthevN5ot/tUHiyPrdatBIYezb12DomQ74k4H3YD9KYALnZc1QOGl4kgMIw8ZxCjVKiXCiSB2KI=
X-Received: by 2002:a5d:97d9:: with SMTP id k25mr28308672ios.42.1594081038532; 
 Mon, 06 Jul 2020 17:17:18 -0700 (PDT)
MIME-Version: 1.0
References: <20200704144943.18292-1-f4bug@amsat.org>
 <20200704144943.18292-19-f4bug@amsat.org>
In-Reply-To: <20200704144943.18292-19-f4bug@amsat.org>
From: Alistair Francis <alistair23@gmail.com>
Date: Mon, 6 Jul 2020 17:07:31 -0700
Message-ID: <CAKmqyKPSpKQXAOH7A55+zJXTa_1+eDNHLbwToLFdXx0r1n8Lpw@mail.gmail.com>
Subject: Re: [PATCH 18/26] hw/usb/bus: Add usb_get_port_path()
To: =?UTF-8?Q?Philippe_Mathieu=2DDaud=C3=A9?= <f4bug@amsat.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Peter Maydell <peter.maydell@linaro.org>,
 "Michael S. Tsirkin" <mst@redhat.com>,
 Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>,
 "qemu-devel@nongnu.org Developers" <qemu-devel@nongnu.org>,
 BALATON Zoltan <balaton@eik.bme.hu>, Gerd Hoffmann <kraxel@redhat.com>,
 "Edgar E. Iglesias" <edgar.iglesias@gmail.com>,
 Huacai Chen <chenhc@lemote.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Yoshinori Sato <ysato@users.sourceforge.jp>, Paul Durrant <paul@xen.org>,
 Magnus Damm <magnus.damm@gmail.com>, Markus Armbruster <armbru@redhat.com>,
 =?UTF-8?Q?Herv=C3=A9_Poussineau?= <hpoussin@reactos.org>,
 Anthony Perard <anthony.perard@citrix.com>,
 "open list:X86" <xen-devel@lists.xenproject.org>,
 Leif Lindholm <leif@nuviainc.com>,
 =?UTF-8?Q?Philippe_Mathieu=2DDaud=C3=A9?= <philmd@redhat.com>,
 Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
 Eduardo Habkost <ehabkost@redhat.com>,
 Alistair Francis <alistair@alistair23.me>,
 "Dr. David Alan Gilbert" <dgilbert@redhat.com>,
 Beniamino Galvani <b.galvani@gmail.com>,
 =?UTF-8?B?TWFyYy1BbmRyw6kgTHVyZWF1?= <marcandre.lureau@redhat.com>,
 Niek Linnenbank <nieklinnenbank@gmail.com>, qemu-arm <qemu-arm@nongnu.org>,
 Samuel Thibault <samuel.thibault@ens-lyon.org>,
 Richard Henderson <rth@twiddle.net>,
 Radoslaw Biernacki <radoslaw.biernacki@linaro.org>,
 Igor Mitsyanko <i.mitsyanko@gmail.com>, Paul Zimmerman <pauldzim@gmail.com>,
 "open list:New World" <qemu-ppc@nongnu.org>,
 David Gibson <david@gibson.dropbear.id.au>,
 Paolo Bonzini <pbonzini@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Sat, Jul 4, 2020 at 8:00 AM Philippe Mathieu-Daudé <f4bug@amsat.org> wrote:
>
> Refactor usb_get_full_dev_path() to take a 'want_full_path'
> argument, and add usb_get_port_path() which returns a short
> path.
>
> Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>

Reviewed-by: Alistair Francis <alistair.francis@wdc.com>

Alistair

> ---
>  include/hw/usb.h | 10 ++++++++++
>  hw/usb/bus.c     | 18 +++++++++++++-----
>  2 files changed, 23 insertions(+), 5 deletions(-)
>
> diff --git a/include/hw/usb.h b/include/hw/usb.h
> index 8c3bc920ff..7ea502d421 100644
> --- a/include/hw/usb.h
> +++ b/include/hw/usb.h
> @@ -506,6 +506,16 @@ void usb_port_location(USBPort *downstream, USBPort *upstream, int portnr);
>  void usb_unregister_port(USBBus *bus, USBPort *port);
>  void usb_claim_port(USBDevice *dev, Error **errp);
>  void usb_release_port(USBDevice *dev);
> +/**
> + * usb_get_port_path:
> + * @dev: the USB device
> + *
> + * The returned data must be released with g_free()
> + * when no longer required.
> + *
> + * Returns: a dynamically allocated pathname.
> + */
> +char *usb_get_port_path(USBDevice *dev);
>  void usb_device_attach(USBDevice *dev, Error **errp);
>  int usb_device_detach(USBDevice *dev);
>  void usb_check_attach(USBDevice *dev, Error **errp);
> diff --git a/hw/usb/bus.c b/hw/usb/bus.c
> index fad8194bf5..518e5b94ed 100644
> --- a/hw/usb/bus.c
> +++ b/hw/usb/bus.c
> @@ -577,12 +577,10 @@ static void usb_bus_dev_print(Monitor *mon, DeviceState *qdev, int indent)
>                     dev->attached ? ", attached" : "");
>  }
>
> -static char *usb_get_full_dev_path(DeviceState *qdev)
> +static char *usb_get_dev_path(USBDevice *dev, bool want_full_path)
>  {
> -    USBDevice *dev = USB_DEVICE(qdev);
> -
> -    if (dev->flags & (1 << USB_DEV_FLAG_FULL_PATH)) {
> -        DeviceState *hcd = qdev->parent_bus->parent;
> +    if (want_full_path && (dev->flags & (1 << USB_DEV_FLAG_FULL_PATH))) {
> +        DeviceState *hcd = DEVICE(dev)->parent_bus->parent;
>          char *id = qdev_get_dev_path(hcd);
>
>          if (id) {
> @@ -594,6 +592,16 @@ static char *usb_get_full_dev_path(DeviceState *qdev)
>      return g_strdup(dev->port->path);
>  }
>
> +static char *usb_get_full_dev_path(DeviceState *qdev)
> +{
> +    return usb_get_dev_path(USB_DEVICE(qdev), true);
> +}
> +
> +char *usb_get_port_path(USBDevice *dev)
> +{
> +    return usb_get_dev_path(dev, false);
> +}
> +
>  static char *usb_get_fw_dev_path(DeviceState *qdev)
>  {
>      USBDevice *dev = USB_DEVICE(qdev);
> --
> 2.21.3
>
>


From xen-devel-bounces@lists.xenproject.org Tue Jul 07 03:10:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jul 2020 03:10:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jsdzX-0007Iz-B0; Tue, 07 Jul 2020 03:09:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=F44P=AS=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jsdzV-0007IZ-PD
 for xen-devel@lists.xenproject.org; Tue, 07 Jul 2020 03:09:53 +0000
X-Inumbo-ID: 50113176-bfff-11ea-b7bb-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 50113176-bfff-11ea-b7bb-bc764e2007e4;
 Tue, 07 Jul 2020 03:09:47 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=++peogvODNUPisBwM/soMfRflQgraZ5uZCE4L8a9aoY=; b=OAMH+YCpMI7Nxts7cF4sxkGuq
 niTU7nBk9GYfSuQw57Q68JtSOuZUVV4QNPJJ6lGt5QHpSbovydAFfAhfwk3x0HqR410JweP8pnM4w
 4xcmEGp6NftQFFMzsgxz86emiVgLEOuZo7trwoUhYjAJBnzxepFzjE6h9GY4G8OJoNWms=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jsdzO-0008L9-EP; Tue, 07 Jul 2020 03:09:46 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jsdzN-0003yo-Uq; Tue, 07 Jul 2020 03:09:46 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jsdzN-0004V9-Tj; Tue, 07 Jul 2020 03:09:45 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151684-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 151684: regressions - FAIL
X-Osstest-Failures: xen-unstable:test-armhf-armhf-xl-vhd:guest-start:fail:regression
 xen-unstable:test-armhf-armhf-xl-rtds:guest-start.2:fail:allowable
 xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=158912a532fe98f448c688d3571241c9033553bd
X-Osstest-Versions-That: xen=be63d9d47f571a60d70f8fb630c03871312d9655
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 07 Jul 2020 03:09:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151684 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151684/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-vhd      11 guest-start              fail REGR. vs. 151661

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds     17 guest-start.2            fail REGR. vs. 151586

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 151661
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 151661
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 151661
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 151661
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 151661
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 151661
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 151661
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 151661
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop             fail like 151661
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  158912a532fe98f448c688d3571241c9033553bd
baseline version:
 xen                  be63d9d47f571a60d70f8fb630c03871312d9655

Last test of basis   151661  2020-07-06 01:54:10 Z    1 days
Testing same since   151684  2020-07-06 16:36:21 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Anthony PERARD <anthony.perard@citrix.com>
  Wei Liu <wl@xen.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 158912a532fe98f448c688d3571241c9033553bd
Author: Anthony PERARD <anthony.perard@citrix.com>
Date:   Fri Jul 3 14:55:33 2020 +0100

    Config: Update QEMU
    
    Backport 2 commits to fix building QEMU without PCI passthrough
    support.
    
    Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
    Acked-by: Wei Liu <wl@xen.org>
    Release-acked-by: Paul Durrant <paul@xen.org>

commit d44cbbe0f3243afcc56e47dcfa97bbfe23e46fbb
Author: Wei Liu <wl@xen.org>
Date:   Fri Jul 3 20:10:01 2020 +0000

    kdd: fix build again
    
    Restore Tim's patch. The one that was committed was recreated by me
    because git didn't accept my saved copy. I made some mistakes while
    recreating that patch and here we are.
    
    Fixes: 3471cafbdda3 ("kdd: stop using [0] arrays to access packet contents")
    Reported-by: Michael Young <m.a.young@durham.ac.uk>
    Signed-off-by: Wei Liu <wl@xen.org>
    Reviewed-by: Tim Deegan <tim@xen.org>
    Release-acked-by: Paul Durrant <paul@xen.org>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Tue Jul 07 07:30:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jul 2020 07:30:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jsi3L-00043y-6H; Tue, 07 Jul 2020 07:30:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ow9A=AS=xen.org=paul@srs-us1.protection.inumbo.net>)
 id 1jsi3K-0003NW-9k
 for xen-devel@lists.xenproject.org; Tue, 07 Jul 2020 07:30:06 +0000
X-Inumbo-ID: aa5d927c-c023-11ea-bb8b-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id aa5d927c-c023-11ea-bb8b-bc764e2007e4;
 Tue, 07 Jul 2020 07:30:00 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Reply-To:Message-Id:Date:Subject:To:From:Sender:Cc:
 MIME-Version:Content-Type:Content-Transfer-Encoding:Content-ID:
 Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc
 :Resent-Message-ID:In-Reply-To:References:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=uQa8o5nnzDBqujXVcC9m3ummV9P5idnqgq0tOBtVJTg=; b=tvV1+5gy5/dKtW1Xqh8Eh7ksGo
 6dFT0UIknz0l7iidUo2HYTkiVea8KvDcM1QtY1ssTiRAE3jj/LVQuvCcF4hJSXZ0iLsZECsra+CRv
 Ss3O9jQMPWGlmJ8ik8eNkWynKBxZyCneVW6gK7DZcWe313qTh+QKv0Zm/ns/DIORUMGI=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1jsi3E-00059h-6k; Tue, 07 Jul 2020 07:30:00 +0000
Received: from 54-240-197-227.amazon.com ([54.240.197.227]
 helo=CBG-R90WXYV0.amazon.com) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1jsi3D-0001hR-RF; Tue, 07 Jul 2020 07:30:00 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org, xen-users@lists.xenproject.org,
 xen-announce@lists.xenproject.org
Subject: Xen 4.14 RC5
Date: Tue,  7 Jul 2020 08:29:58 +0100
Message-Id: <20200707072958.1035-1-paul@xen.org>
X-Mailer: git-send-email 2.17.1
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Reply-To: xen-devel@lists.xenproject.org, paul@xen.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi all,

Xen 4.14 RC5 is tagged. You can check that out from xen.git:

git://xenbits.xen.org/xen.git 4.14.0-rc5

For your convenience there is also a tarball at:
https://downloads.xenproject.org/release/xen/4.14.0-rc5/xen-4.14.0-rc5.tar.gz

And the signature is at:
https://downloads.xenproject.org/release/xen/4.14.0-rc5/xen-4.14.0-rc5.tar.gz.sig
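
The pointers above can be fetched and checked roughly as follows (a sketch, not official release instructions; it assumes wget and gpg are installed and that the Xen release signing key has already been imported — the actual download and verify commands are left commented out so nothing touches the network by accident):

```shell
# Derive the tarball and detached-signature URLs from the release tag.
REL=4.14.0-rc5
BASE=https://downloads.xenproject.org/release/xen/${REL}
TARBALL=xen-${REL}.tar.gz
echo "${BASE}/${TARBALL}"
# wget "${BASE}/${TARBALL}" "${BASE}/${TARBALL}.sig"
# gpg --verify "${TARBALL}.sig" "${TARBALL}"
```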

Please send bug reports and test reports to xen-devel@lists.xenproject.org.
When sending bug reports, please CC relevant maintainers and me (paul@xen.org).

As a reminder, there will be a Xen Test Day. Please see the test day schedule at
https://wiki.xenproject.org/wiki/Xen_Project_Test_Days and test instructions at
https://wiki.xenproject.org/wiki/Xen_4.14_RC_test_instructions.

  Paul Durrant



From xen-devel-bounces@lists.xenproject.org Tue Jul 07 08:04:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jul 2020 08:04:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jsiaW-0007Q9-KS; Tue, 07 Jul 2020 08:04:24 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+gHK=AS=in.bosch.com=manikandan.chockalingam@srs-us1.protection.inumbo.net>)
 id 1jsiUn-00067F-Eh
 for xen-devel@lists.xen.org; Tue, 07 Jul 2020 07:58:30 +0000
X-Inumbo-ID: a21cc480-c027-11ea-8496-bc764e2007e4
Received: from de-out1.bosch-org.com (unknown [139.15.230.186])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a21cc480-c027-11ea-8496-bc764e2007e4;
 Tue, 07 Jul 2020 07:58:24 +0000 (UTC)
Received: from si0vm1948.rbesz01.com
 (lb41g3-ha-dmz-psi-sl1-mailout.fe.ssn.bosch.com [139.15.230.188])
 by si0vms0216.rbdmz01.com (Postfix) with ESMTPS id 4B1FDb4rJhz1XLm4N
 for <xen-devel@lists.xen.org>; Tue,  7 Jul 2020 09:58:23 +0200 (CEST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=in.bosch.com;
 s=key1-intmail; t=1594108703;
 bh=0Aq2UJ8+ktaMfvmWJVmzAZ3rAYwGxLrQeHs0DB/fKE4=; l=10;
 h=From:Subject:From:Reply-To:Sender;
 b=Ly6rHJvxd36O/kc0qOZMMv6on8fXkVTbKGVDSQzYvph6vOU1ubKg86eNP04nS+QSE
 8ryJEmTB5hPH/XQlKVClXk1LFXnBXYAprZYxvc5x4OJyyuqu8h2gssxIatRE4cI816
 NnXZuv/Nor6CeZv7DFRN/WQOhbY15pyrauoxfCp8=
Received: from si0vm2083.rbesz01.com (unknown [10.58.172.176])
 by si0vm1948.rbesz01.com (Postfix) with ESMTPS id 4B1FDb4LxjzGkN
 for <xen-devel@lists.xen.org>; Tue,  7 Jul 2020 09:58:23 +0200 (CEST)
X-AuditID: 0a3aad17-4b9ff7000000186c-8c-5f042b1f32b3
Received: from si0vm1949.rbesz01.com ( [10.58.173.29])
 (using TLS with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (Client did not present a certificate)
 by si0vm2083.rbesz01.com (SMG Outbound) with SMTP id 6D.B8.06252.F1B240F5;
 Tue,  7 Jul 2020 09:58:23 +0200 (CEST)
Received: from FE-MBX2013.de.bosch.com (fe-mbx2013.de.bosch.com [10.3.231.19])
 by si0vm1949.rbesz01.com (Postfix) with ESMTPS id 4B1FDb37WKz6CjZP1
 for <xen-devel@lists.xen.org>; Tue,  7 Jul 2020 09:58:23 +0200 (CEST)
Received: from SGPMBX2022.APAC.bosch.com (10.187.83.37) by
 FE-MBX2013.de.bosch.com (10.3.231.19) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.1.1979.3; Tue, 7 Jul 2020 09:58:22 +0200
Received: from SGPMBX2022.APAC.bosch.com (10.187.83.37) by
 SGPMBX2022.APAC.bosch.com (10.187.83.37) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.1.1979.3; Tue, 7 Jul 2020 15:58:21 +0800
Received: from SGPMBX2022.APAC.bosch.com ([fe80::2d4d:b176:b210:896]) by
 SGPMBX2022.APAC.bosch.com ([fe80::2d4d:b176:b210:896%6]) with mapi id
 15.01.1979.003; Tue, 7 Jul 2020 15:58:21 +0800
From: "Manikandan Chockalingam (RBEI/ECF3)"
 <Manikandan.Chockalingam@in.bosch.com>
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: [BUG] Xen build for RCAR failing
Thread-Topic: [BUG] Xen build for RCAR failing
Thread-Index: AdZUKc5JeR7gPpESR52uLkZK1kYwOw==
Date: Tue, 7 Jul 2020 07:58:20 +0000
Message-ID: <1b60ed1cd7834ed5957a2b4870602073@in.bosch.com>
Accept-Language: en-US, en-SG
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.187.56.215]
Content-Type: multipart/alternative;
 boundary="_000_1b60ed1cd7834ed5957a2b4870602073inboschcom_"
MIME-Version: 1.0
X-Brightmail-Tracker: H4sIAAAAAAAAA+NgFmpikeLIzCtJLcpLzFFi42LhslorqyuvzRJvsHmyvMWSj4tZHBg9ju7+
 zRTAGMVlk5Kak1mWWqRvl8CV8f/QN+aC9R1MFYfX/WBpYNzayNTFyMkhIWAise7uLLYuRi4O
 IYEZTBL9XzqYIJw7jBL3596DyrxnlJg6YwpU5hOjxJzP56EyBxkl/vR8ABvGJhAisW/vDXYQ
 W0TAXGLrki2MILawgKbE3nvvGSHiehJbF/azwtiH+veB9bIIqEj8bXvPDGLzClhLfO6bxAZi
 MwrISiy6OYkFxGYWEJe49WQ+1OECEkv2nGeGsEUlXj7+xwphK0psf7yBCaI+SWLHur8sEDMF
 JU7OfMIygVFkFpJRs5CUzUJSNouRAyiuKbF+lz5EiaLElO6H7BC2hkTrnLnsyOILGNlXMYoW
 ZxqU5RoZWBjrFSWlFlcZGOol5+duYoREk/gOxv8dH/QOMTJxMB5ilOBgVhLh7dVmjBfiTUms
 rEotyo8vKs1JLT7EKM3BoiTOq8KzMU5IID2xJDU7NbUgtQgmy8TBKdXA5HDg992pHU6OUcoS
 3qnsnzdzHah28Nfg2RLp9dr1y74F+ie/GFyZeWGLvF2oYPO0+BP93DzLBPeu+n7mcdrWaxKt
 HxiOnz5mNOmOq+FdbY2pu81n86mEf+GenxbsGqb5LNis1sld/MPRNsn6KRMXlUwwcVl/42yH
 W7f6fo6nUyITfBOva7he5izb+XSr72m/EwphxrOu/3m2o3alnPD0gjcNr1Trpx31ak2r5Ixg
 nyO2T17qVLfBHSYDK0H+Wa5BF/O3/eMX6TSeohrjo5qo6/Zoz/nsE9P2vgjT+D+/tjHPWcnd
 74amp/+d6pT9n0N3fX7t8n6r3PoVs64Vxh8XihEVm/D//8Qlv1WPvi2+qsRSnJFoqMVcVJwI
 AGdPJG0VAwAA
X-Mailman-Approved-At: Tue, 07 Jul 2020 08:04:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

--_000_1b60ed1cd7834ed5957a2b4870602073inboschcom_
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit

Hello Team,

I am trying to build the xen hypervisor for RCAR, following the steps at
https://wiki.xenproject.org/wiki/Xen_ARM_with_Virtualization_Extensions/Salvator-X.

But I am facing the following issues:

1. SRC_URI="http://v3.sk/~lkundrak/dev86/archive/Dev86src-${PV}.tar.gz" in
   recipes-extended/dev86/dev86_0.16.20.bb is not accessible.

   Modification done: SRC_URI=https://src.fedoraproject.org/lookaside/extras/dev86/Dev86src-0.16.20.tar.gz/567cf460d132f9d8775dd95f9208e49a/Dev86src-${PV}.tar.gz

2. LIC_FILES_CHKSUM changed in recipes-extended/xen/xen.inc.

3. QA Issue: xen: Files/directories were installed but not shipped in any package:

   /usr/bin/vchan-socket-proxy
   /usr/sbin/xenmon
   /usr/sbin/xenhypfs
   /usr/lib/libxenfsimage.so.4.14.0
   /usr/lib/libxenhypfs.so.1
   /usr/lib/libxenfsimage.so
   /usr/lib/libxenhypfs.so.1.0
   /usr/lib/libxenfsimage.so.4.14
   /usr/lib/libxenhypfs.so
   /usr/lib/pkgconfig
   /usr/lib/xen/bin/depriv-fd-checker
   /usr/lib/pkgconfig/xenlight.pc
   /usr/lib/pkgconfig/xenguest.pc
   /usr/lib/pkgconfig/xenhypfs.pc
   /usr/lib/pkgconfig/xlutil.pc
   /usr/lib/pkgconfig/xentoolcore.pc
   /usr/lib/pkgconfig/xentoollog.pc
   /usr/lib/pkgconfig/xenstore.pc
   /usr/lib/pkgconfig/xencall.pc
   /usr/lib/pkgconfig/xencontrol.pc
   /usr/lib/pkgconfig/xendevicemodel.pc
   /usr/lib/pkgconfig/xenstat.pc
   /usr/lib/pkgconfig/xengnttab.pc
   /usr/lib/pkgconfig/xenevtchn.pc
   /usr/lib/pkgconfig/xenvchan.pc
   /usr/lib/pkgconfig/xenforeignmemory.pc
   /usr/lib/xenfsimage/fat/fsimage.so
   /usr/lib/xenfsimage/iso9660/fsimage.so
   /usr/lib/xenfsimage/reiserfs/fsimage.so
   /usr/lib/xenfsimage/ufs/fsimage.so
   /usr/lib/xenfsimage/ext2fs-lib/fsimage.so
   /usr/lib/xenfsimage/zfs/fsimage.so

   Please set FILES such that these items are packaged. Alternatively, if they
   are unneeded, avoid installing them or delete them within do_install.

   xen: 32 installed and not shipped files. [installed-vs-shipped]

ERROR: xen-unstable+gitAUTOINC+be63d9d47f-r0 do_package: Fatal QA errors found, failing task.
ERROR: xen-unstable+gitAUTOINC+be63d9d47f-r0 do_package: Function failed: do_package
ERROR: Logfile of failure stored in: /home/manikandan/yocto_2.19/build/build/tmp/work/aarch64-poky-linux/xen/unstable+gitAUTOINC+be63d9d47f-r0/temp/log.do_package.17889
ERROR: Task 13 (/home/manikandan/yocto_2.19/build/meta-virtualization/recipes-extended/xen/xen_git.bb, do_package) failed with exit code '1'

Can you please let me know whether any updates need to be made to the build
steps mentioned in the link? If not, can you please let me know how I can
overcome these issues and build a hypervisor image for RCAR?

Thanks in advance. My build configuration is as follows.

Build Configuration:
BB_VERSION        = "1.30.0"
BUILD_SYS         = "x86_64-linux"
NATIVELSBSTRING   = "universal"
TARGET_SYS        = "aarch64-poky-linux"
MACHINE           = "salvator-x"
DISTRO            = "poky"
DISTRO_VERSION    = "2.1.2"
TUNE_FEATURES     = "aarch64 cortexa57-cortexa53"
TARGET_FPU        = ""
SOC_FAMILY        = "rcar-gen3:r8a7795"
meta
meta-poky
meta-yocto-bsp    = "tmp:cca8dd15c8096626052f6d8d25ff1e9a606104a3"
meta-rcar-gen3    = "tmp:95cb48ba09bc7e55fd549817e3e26723409e68d5"
meta-linaro-toolchain
meta-optee        = "tmp:2f51d38048599d9878f149d6d15539fb97603f8f"
meta-oe           = "tmp:55c8a76da5dc099a7bc3838495c672140cedb78e"
meta-virtualization = "morty:6249631f59ad6ee3dc93762de49fc4b443d99abc"
meta-selinux      = "jethro:4c75d9cbcf1d75043c7c5ab315aa383d9b227510"
meta-networking
meta-python       = "tmp:55c8a76da5dc099a7bc3838495c672140cedb78e"
meta-rcar-gen3-xen = "master:60699c631d541aeeaebaeec9a087efed9385ee42"

Best regards

Chockalingam Manikandan

ES-CM Core fn,ADIT (RBEI/ECF3)
Robert Bosch GmbH | Postfach 10 60 50 | 70049 Stuttgart | GERMANY | www.bosch.com
Tel. +91 80 6136-4452 | Fax +91 80 6617-0711 | Manikandan.Chockalingam@in.bosch.com

Registered Office: Stuttgart, Registration Court: Amtsgericht Stuttgart, HRB 14000;
Chairman of the Supervisory Board: Franz Fehrenbach; Managing Directors: Dr. Volkmar Denner,
Prof. Dr. Stefan Asenkerschbaumer, Dr. Michael Bolle, Dr. Christian Fischer, Dr. Stefan Hartung,
Dr. Markus Heyn, Harald Kröger, Christoph Kübel, Rolf Najork, Uwe Raschke, Peter Tyroller
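
[Editorial note on the installed-vs-shipped error: one possible workaround is a bbappend that adds the unshipped paths to FILES. This is a hypothetical sketch only, not a confirmed fix from the meta-virtualization maintainers; the package split and file list are inferred from the QA error in the report, and the pre-3.x override syntax (FILES_${PN}) matches the Yocto releases in use here.]

```bitbake
# Hypothetical xen_git.bbappend sketch: ship the files do_package reports
# as installed but not shipped. Runtime libraries and tools go to the main
# package; unversioned .so symlinks and pkg-config files go to ${PN}-dev.
FILES_${PN} += " \
    ${bindir}/vchan-socket-proxy \
    ${sbindir}/xenmon \
    ${sbindir}/xenhypfs \
    ${libdir}/libxenfsimage.so.* \
    ${libdir}/libxenhypfs.so.* \
    ${libdir}/xenfsimage \
    ${libdir}/xen/bin/depriv-fd-checker \
"
FILES_${PN}-dev += " \
    ${libdir}/libxenfsimage.so \
    ${libdir}/libxenhypfs.so \
    ${libdir}/pkgconfig \
"
```

Whether these artifacts should instead be dropped in do_install is a separate question for the recipe maintainers.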

--_000_1b60ed1cd7834ed5957a2b4870602073inboschcom_
Content-Type: text/html; charset="utf-8"
Content-Transfer-Encoding: base64

PGh0bWwgeG1sbnM6dj0idXJuOnNjaGVtYXMtbWljcm9zb2Z0LWNvbTp2bWwiIHhtbG5zOm89InVy
bjpzY2hlbWFzLW1pY3Jvc29mdC1jb206b2ZmaWNlOm9mZmljZSIgeG1sbnM6dz0idXJuOnNjaGVt
YXMtbWljcm9zb2Z0LWNvbTpvZmZpY2U6d29yZCIgeG1sbnM6bT0iaHR0cDovL3NjaGVtYXMubWlj
cm9zb2Z0LmNvbS9vZmZpY2UvMjAwNC8xMi9vbW1sIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcv
VFIvUkVDLWh0bWw0MCI+DQo8aGVhZD4NCjxtZXRhIGh0dHAtZXF1aXY9IkNvbnRlbnQtVHlwZSIg
Y29udGVudD0idGV4dC9odG1sOyBjaGFyc2V0PXV0Zi04Ij4NCjxtZXRhIG5hbWU9IkdlbmVyYXRv
ciIgY29udGVudD0iTWljcm9zb2Z0IFdvcmQgMTUgKGZpbHRlcmVkIG1lZGl1bSkiPg0KPHN0eWxl
PjwhLS0NCi8qIEZvbnQgRGVmaW5pdGlvbnMgKi8NCkBmb250LWZhY2UNCgl7Zm9udC1mYW1pbHk6
IkNhbWJyaWEgTWF0aCI7DQoJcGFub3NlLTE6MiA0IDUgMyA1IDQgNiAzIDIgNDt9DQpAZm9udC1m
YWNlDQoJe2ZvbnQtZmFtaWx5OkNhbGlicmk7DQoJcGFub3NlLTE6MiAxNSA1IDIgMiAyIDQgMyAy
IDQ7fQ0KLyogU3R5bGUgRGVmaW5pdGlvbnMgKi8NCnAuTXNvTm9ybWFsLCBsaS5Nc29Ob3JtYWws
IGRpdi5Nc29Ob3JtYWwNCgl7bWFyZ2luOjBpbjsNCgltYXJnaW4tYm90dG9tOi4wMDAxcHQ7DQoJ
Zm9udC1zaXplOjExLjBwdDsNCglmb250LWZhbWlseToiQ2FsaWJyaSIsc2Fucy1zZXJpZjt9DQph
OmxpbmssIHNwYW4uTXNvSHlwZXJsaW5rDQoJe21zby1zdHlsZS1wcmlvcml0eTo5OTsNCgljb2xv
cjojMDU2M0MxOw0KCXRleHQtZGVjb3JhdGlvbjp1bmRlcmxpbmU7fQ0KYTp2aXNpdGVkLCBzcGFu
Lk1zb0h5cGVybGlua0ZvbGxvd2VkDQoJe21zby1zdHlsZS1wcmlvcml0eTo5OTsNCgljb2xvcjoj
OTU0RjcyOw0KCXRleHQtZGVjb3JhdGlvbjp1bmRlcmxpbmU7fQ0KcC5Nc29MaXN0UGFyYWdyYXBo
LCBsaS5Nc29MaXN0UGFyYWdyYXBoLCBkaXYuTXNvTGlzdFBhcmFncmFwaA0KCXttc28tc3R5bGUt
cHJpb3JpdHk6MzQ7DQoJbWFyZ2luLXRvcDowaW47DQoJbWFyZ2luLXJpZ2h0OjBpbjsNCgltYXJn
aW4tYm90dG9tOjBpbjsNCgltYXJnaW4tbGVmdDouNWluOw0KCW1hcmdpbi1ib3R0b206LjAwMDFw
dDsNCglmb250LXNpemU6MTEuMHB0Ow0KCWZvbnQtZmFtaWx5OiJDYWxpYnJpIixzYW5zLXNlcmlm
O30NCnNwYW4uRW1haWxTdHlsZTE3DQoJe21zby1zdHlsZS10eXBlOnBlcnNvbmFsLWNvbXBvc2U7
DQoJZm9udC1mYW1pbHk6IkFyaWFsIixzYW5zLXNlcmlmOw0KCWNvbG9yOndpbmRvd3RleHQ7DQoJ
Zm9udC13ZWlnaHQ6bm9ybWFsOw0KCWZvbnQtc3R5bGU6bm9ybWFsO30NCi5Nc29DaHBEZWZhdWx0
DQoJe21zby1zdHlsZS10eXBlOmV4cG9ydC1vbmx5Ow0KCWZvbnQtZmFtaWx5OiJDYWxpYnJpIixz
YW5zLXNlcmlmO30NCkBwYWdlIFdvcmRTZWN0aW9uMQ0KCXtzaXplOjguNWluIDExLjBpbjsNCglt
YXJnaW46MS4waW4gMS4waW4gMS4waW4gMS4waW47fQ0KZGl2LldvcmRTZWN0aW9uMQ0KCXtwYWdl
OldvcmRTZWN0aW9uMTt9DQovKiBMaXN0IERlZmluaXRpb25zICovDQpAbGlzdCBsMA0KCXttc28t
bGlzdC1pZDoxMDUyMTk2MjQwOw0KCW1zby1saXN0LXR5cGU6aHlicmlkOw0KCW1zby1saXN0LXRl
bXBsYXRlLWlkczoyMTMzNzU1NTU0IDY3Njk4NzAzIDY3Njk4NzEzIDY3Njk4NzE1IDY3Njk4NzAz
IDY3Njk4NzEzIDY3Njk4NzE1IDY3Njk4NzAzIDY3Njk4NzEzIDY3Njk4NzE1O30NCkBsaXN0IGww
OmxldmVsMQ0KCXttc28tbGV2ZWwtdGFiLXN0b3A6bm9uZTsNCgltc28tbGV2ZWwtbnVtYmVyLXBv
c2l0aW9uOmxlZnQ7DQoJdGV4dC1pbmRlbnQ6LS4yNWluO30NCkBsaXN0IGwwOmxldmVsMg0KCXtt
c28tbGV2ZWwtbnVtYmVyLWZvcm1hdDphbHBoYS1sb3dlcjsNCgltc28tbGV2ZWwtdGFiLXN0b3A6
bm9uZTsNCgltc28tbGV2ZWwtbnVtYmVyLXBvc2l0aW9uOmxlZnQ7DQoJdGV4dC1pbmRlbnQ6LS4y
NWluO30NCkBsaXN0IGwwOmxldmVsMw0KCXttc28tbGV2ZWwtbnVtYmVyLWZvcm1hdDpyb21hbi1s
b3dlcjsNCgltc28tbGV2ZWwtdGFiLXN0b3A6bm9uZTsNCgltc28tbGV2ZWwtbnVtYmVyLXBvc2l0
aW9uOnJpZ2h0Ow0KCXRleHQtaW5kZW50Oi05LjBwdDt9DQpAbGlzdCBsMDpsZXZlbDQNCgl7bXNv
LWxldmVsLXRhYi1zdG9wOm5vbmU7DQoJbXNvLWxldmVsLW51bWJlci1wb3NpdGlvbjpsZWZ0Ow0K
CXRleHQtaW5kZW50Oi0uMjVpbjt9DQpAbGlzdCBsMDpsZXZlbDUNCgl7bXNvLWxldmVsLW51bWJl
ci1mb3JtYXQ6YWxwaGEtbG93ZXI7DQoJbXNvLWxldmVsLXRhYi1zdG9wOm5vbmU7DQoJbXNvLWxl
dmVsLW51bWJlci1wb3NpdGlvbjpsZWZ0Ow0KCXRleHQtaW5kZW50Oi0uMjVpbjt9DQpAbGlzdCBs
MDpsZXZlbDYNCgl7bXNvLWxldmVsLW51bWJlci1mb3JtYXQ6cm9tYW4tbG93ZXI7DQoJbXNvLWxl
dmVsLXRhYi1zdG9wOm5vbmU7DQoJbXNvLWxldmVsLW51bWJlci1wb3NpdGlvbjpyaWdodDsNCgl0
ZXh0LWluZGVudDotOS4wcHQ7fQ0KQGxpc3QgbDA6bGV2ZWw3DQoJe21zby1sZXZlbC10YWItc3Rv
cDpub25lOw0KCW1zby1sZXZlbC1udW1iZXItcG9zaXRpb246bGVmdDsNCgl0ZXh0LWluZGVudDot
LjI1aW47fQ0KQGxpc3QgbDA6bGV2ZWw4DQoJe21zby1sZXZlbC1udW1iZXItZm9ybWF0OmFscGhh
LWxvd2VyOw0KCW1zby1sZXZlbC10YWItc3RvcDpub25lOw0KCW1zby1sZXZlbC1udW1iZXItcG9z
aXRpb246bGVmdDsNCgl0ZXh0LWluZGVudDotLjI1aW47fQ0KQGxpc3QgbDA6bGV2ZWw5DQoJe21z
by1sZXZlbC1udW1iZXItZm9ybWF0OnJvbWFuLWxvd2VyOw0KCW1zby1sZXZlbC10YWItc3RvcDpu
b25lOw0KCW1zby1sZXZlbC1udW1iZXItcG9zaXRpb246cmlnaHQ7DQoJdGV4dC1pbmRlbnQ6LTku
MHB0O30NCm9sDQoJe21hcmdpbi1ib3R0b206MGluO30NCnVsDQoJe21hcmdpbi1ib3R0b206MGlu
O30NCi0tPjwvc3R5bGU+PCEtLVtpZiBndGUgbXNvIDldPjx4bWw+DQo8bzpzaGFwZWRlZmF1bHRz
IHY6ZXh0PSJlZGl0IiBzcGlkbWF4PSIxMDI2IiAvPg0KPC94bWw+PCFbZW5kaWZdLS0+PCEtLVtp
ZiBndGUgbXNvIDldPjx4bWw+DQo8bzpzaGFwZWxheW91dCB2OmV4dD0iZWRpdCI+DQo8bzppZG1h
cCB2OmV4dD0iZWRpdCIgZGF0YT0iMSIgLz4NCjwvbzpzaGFwZWxheW91dD48L3htbD48IVtlbmRp
Zl0tLT4NCjwvaGVhZD4NCjxib2R5IGxhbmc9IkVOLVVTIiBsaW5rPSIjMDU2M0MxIiB2bGluaz0i
Izk1NEY3MiI+DQo8ZGl2IGNsYXNzPSJXb3JkU2VjdGlvbjEiPg0KPHAgY2xhc3M9Ik1zb05vcm1h
bCI+PHNwYW4gc3R5bGU9ImZvbnQtc2l6ZToxMC4wcHQ7Zm9udC1mYW1pbHk6JnF1b3Q7QXJpYWwm
cXVvdDssc2Fucy1zZXJpZiI+SGVsbG8gVGVhbSw8bzpwPjwvbzpwPjwvc3Bhbj48L3A+DQo8cCBj
bGFzcz0iTXNvTm9ybWFsIj48c3BhbiBzdHlsZT0iZm9udC1zaXplOjEwLjBwdDtmb250LWZhbWls
eTomcXVvdDtBcmlhbCZxdW90OyxzYW5zLXNlcmlmIj48bzpwPiZuYnNwOzwvbzpwPjwvc3Bhbj48
L3A+DQo8cCBjbGFzcz0iTXNvTm9ybWFsIj48c3BhbiBzdHlsZT0iZm9udC1zaXplOjEwLjBwdDtm
b250LWZhbWlseTomcXVvdDtBcmlhbCZxdW90OyxzYW5zLXNlcmlmIj5JIGFtIHRyeWluZyB0byBi
dWlsZCB4ZW4gaHlwZXJ2aXNvciBmb3IgUkNBUiBhbmQgZm9sbG93aW5nIHRoZQ0KPC9zcGFuPjxz
cGFuIHN0eWxlPSJmb250LXNpemU6MTAuMHB0O2ZvbnQtZmFtaWx5OiZxdW90O0FyaWFsJnF1b3Q7
LHNhbnMtc2VyaWYiPjxhIGhyZWY9Imh0dHBzOi8vd2lraS54ZW5wcm9qZWN0Lm9yZy93aWtpL1hl
bl9BUk1fd2l0aF9WaXJ0dWFsaXphdGlvbl9FeHRlbnNpb25zL1NhbHZhdG9yLVgiPmh0dHBzOi8v
d2lraS54ZW5wcm9qZWN0Lm9yZy93aWtpL1hlbl9BUk1fd2l0aF9WaXJ0dWFsaXphdGlvbl9FeHRl
bnNpb25zL1NhbHZhdG9yLVg8L2E+IHN0ZXBzLjxvOnA+PC9vOnA+PC9zcGFuPjwvcD4NCjxwIGNs
YXNzPSJNc29Ob3JtYWwiPjxzcGFuIHN0eWxlPSJmb250LXNpemU6MTAuMHB0O2ZvbnQtZmFtaWx5
OiZxdW90O0FyaWFsJnF1b3Q7LHNhbnMtc2VyaWYiPjxvOnA+Jm5ic3A7PC9vOnA+PC9zcGFuPjwv
cD4NCjxwIGNsYXNzPSJNc29Ob3JtYWwiPjxzcGFuIHN0eWxlPSJmb250LXNpemU6MTAuMHB0O2Zv
bnQtZmFtaWx5OiZxdW90O0FyaWFsJnF1b3Q7LHNhbnMtc2VyaWYiPkJ1dCBhbSBmYWNpbmcgdGhl
IGZvbGxvd2luZyBpc3N1ZXM8bzpwPjwvbzpwPjwvc3Bhbj48L3A+DQo8cCBjbGFzcz0iTXNvTGlz
dFBhcmFncmFwaCIgc3R5bGU9InRleHQtaW5kZW50Oi0uMjVpbjttc28tbGlzdDpsMCBsZXZlbDEg
bGZvMSI+PCFbaWYgIXN1cHBvcnRMaXN0c10+PHNwYW4gc3R5bGU9ImZvbnQtc2l6ZToxMC4wcHQ7
Zm9udC1mYW1pbHk6JnF1b3Q7QXJpYWwmcXVvdDssc2Fucy1zZXJpZiI+PHNwYW4gc3R5bGU9Im1z
by1saXN0Oklnbm9yZSI+MS48c3BhbiBzdHlsZT0iZm9udDo3LjBwdCAmcXVvdDtUaW1lcyBOZXcg
Um9tYW4mcXVvdDsiPiZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwOw0KPC9zcGFuPjwvc3Bh
bj48L3NwYW4+PCFbZW5kaWZdPjxzcGFuIHN0eWxlPSJmb250LXNpemU6MTAuMHB0O2ZvbnQtZmFt
aWx5OiZxdW90O0FyaWFsJnF1b3Q7LHNhbnMtc2VyaWYiPlNSQ19VUkk9JnF1b3Q7aHR0cDovL3Yz
LnNrL35sa3VuZHJhay9kZXY4Ni9hcmNoaXZlL0Rldjg2c3JjLSR7UFZ9LnRhci5neiBpbiByZWNp
cGVzLWV4dGVuZGVkL2Rldjg2L2Rldjg2XzAuMTYuMjAuYmIgaXMgbm90IGFjY2VzaWJsZTxvOnA+
PC9vOnA+PC9zcGFuPjwvcD4NCjxwIGNsYXNzPSJNc29MaXN0UGFyYWdyYXBoIj48Yj48c3BhbiBz
dHlsZT0iZm9udC1zaXplOjEwLjBwdDtmb250LWZhbWlseTomcXVvdDtBcmlhbCZxdW90OyxzYW5z
LXNlcmlmIj5Nb2RpZmljYXRpb24gZG9uZTo8L3NwYW4+PC9iPjxzcGFuIHN0eWxlPSJmb250LXNp
emU6MTAuMHB0O2ZvbnQtZmFtaWx5OiZxdW90O0FyaWFsJnF1b3Q7LHNhbnMtc2VyaWYiPiBTUkNf
VVJJPTxhIGhyZWY9Imh0dHBzOi8vc3JjLmZlZG9yYXByb2plY3Qub3JnL2xvb2thc2lkZS9leHRy
YXMvZGV2ODYvRGV2ODZzcmMtMC4xNi4yMC50YXIuZ3ovNTY3Y2Y0NjBkMTMyZjlkODc3NWRkOTVm
OTIwOGU0OWEvRGV2ODZzcmMtJCU3YlBWJTdkLnRhci5neiI+aHR0cHM6Ly9zcmMuZmVkb3JhcHJv
amVjdC5vcmcvbG9va2FzaWRlL2V4dHJhcy9kZXY4Ni9EZXY4NnNyYy0wLjE2LjIwLnRhci5nei81
NjdjZjQ2MGQxMzJmOWQ4Nzc1ZGQ5NWY5MjA4ZTQ5YS9EZXY4NnNyYy0ke1BWfS50YXIuZ3o8L2E+
PG86cD48L286cD48L3NwYW4+PC9wPg0KPHAgY2xhc3M9Ik1zb0xpc3RQYXJhZ3JhcGgiIHN0eWxl
PSJ0ZXh0LWluZGVudDotLjI1aW47bXNvLWxpc3Q6bDAgbGV2ZWwxIGxmbzEiPjwhW2lmICFzdXBw
b3J0TGlzdHNdPjxzcGFuIHN0eWxlPSJmb250LXNpemU6MTAuMHB0O2ZvbnQtZmFtaWx5OiZxdW90
O0FyaWFsJnF1b3Q7LHNhbnMtc2VyaWYiPjxzcGFuIHN0eWxlPSJtc28tbGlzdDpJZ25vcmUiPjIu
PHNwYW4gc3R5bGU9ImZvbnQ6Ny4wcHQgJnF1b3Q7VGltZXMgTmV3IFJvbWFuJnF1b3Q7Ij4mbmJz
cDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsNCjwvc3Bhbj48L3NwYW4+PC9zcGFuPjwhW2VuZGlm
XT48c3BhbiBzdHlsZT0iZm9udC1zaXplOjEwLjBwdDtmb250LWZhbWlseTomcXVvdDtBcmlhbCZx
dW90OyxzYW5zLXNlcmlmIj5MSUNfRklMRVNfQ0hLU1VNIGNoYW5nZWQgaW4gcmVjaXBlcy1leHRl
bmRlZC94ZW4veGVuLmluYzxvOnA+PC9vOnA+PC9zcGFuPjwvcD4NCjxwIGNsYXNzPSJNc29MaXN0
UGFyYWdyYXBoIiBzdHlsZT0idGV4dC1pbmRlbnQ6LS4yNWluO21zby1saXN0OmwwIGxldmVsMSBs
Zm8xIj48IVtpZiAhc3VwcG9ydExpc3RzXT48c3BhbiBzdHlsZT0iZm9udC1zaXplOjEwLjBwdDtm
b250LWZhbWlseTomcXVvdDtBcmlhbCZxdW90OyxzYW5zLXNlcmlmIj48c3BhbiBzdHlsZT0ibXNv
LWxpc3Q6SWdub3JlIj4zLjxzcGFuIHN0eWxlPSJmb250OjcuMHB0ICZxdW90O1RpbWVzIE5ldyBS
b21hbiZxdW90OyI+Jm5ic3A7Jm5ic3A7Jm5ic3A7Jm5ic3A7Jm5ic3A7DQo8L3NwYW4+PC9zcGFu
Pjwvc3Bhbj48IVtlbmRpZl0+PHNwYW4gc3R5bGU9ImZvbnQtc2l6ZToxMC4wcHQ7Zm9udC1mYW1p
bHk6JnF1b3Q7QXJpYWwmcXVvdDssc2Fucy1zZXJpZiI+UUEgSXNzdWU6IHhlbjogRmlsZXMvZGly
ZWN0b3JpZXMgd2VyZSBpbnN0YWxsZWQgYnV0IG5vdCBzaGlwcGVkIGluIGFueSBwYWNrYWdlOjxv
OnA+PC9vOnA+PC9zcGFuPjwvcD4NCjxwIGNsYXNzPSJNc29MaXN0UGFyYWdyYXBoIj48c3BhbiBz
dHlsZT0iZm9udC1zaXplOjkuMHB0O2ZvbnQtZmFtaWx5OiZxdW90O0FyaWFsJnF1b3Q7LHNhbnMt
c2VyaWYiPi91c3IvYmluL3ZjaGFuLXNvY2tldC1wcm94eTxvOnA+PC9vOnA+PC9zcGFuPjwvcD4N
CjxwIGNsYXNzPSJNc29MaXN0UGFyYWdyYXBoIj48c3BhbiBzdHlsZT0iZm9udC1zaXplOjkuMHB0
O2ZvbnQtZmFtaWx5OiZxdW90O0FyaWFsJnF1b3Q7LHNhbnMtc2VyaWYiPiZuYnNwOyAvdXNyL3Ni
aW4veGVubW9uPG86cD48L286cD48L3NwYW4+PC9wPg0KPHAgY2xhc3M9Ik1zb0xpc3RQYXJhZ3Jh
cGgiPjxzcGFuIHN0eWxlPSJmb250LXNpemU6OS4wcHQ7Zm9udC1mYW1pbHk6JnF1b3Q7QXJpYWwm
cXVvdDssc2Fucy1zZXJpZiI+Jm5ic3A7IC91c3Ivc2Jpbi94ZW5oeXBmczxvOnA+PC9vOnA+PC9z
cGFuPjwvcD4NCjxwIGNsYXNzPSJNc29MaXN0UGFyYWdyYXBoIj48c3BhbiBzdHlsZT0iZm9udC1z
aXplOjkuMHB0O2ZvbnQtZmFtaWx5OiZxdW90O0FyaWFsJnF1b3Q7LHNhbnMtc2VyaWYiPiZuYnNw
OyAvdXNyL2xpYi9saWJ4ZW5mc2ltYWdlLnNvLjQuMTQuMDxvOnA+PC9vOnA+PC9zcGFuPjwvcD4N
CjxwIGNsYXNzPSJNc29MaXN0UGFyYWdyYXBoIj48c3BhbiBzdHlsZT0iZm9udC1zaXplOjkuMHB0
O2ZvbnQtZmFtaWx5OiZxdW90O0FyaWFsJnF1b3Q7LHNhbnMtc2VyaWYiPiZuYnNwOyAvdXNyL2xp
Yi9saWJ4ZW5oeXBmcy5zby4xPG86cD48L286cD48L3NwYW4+PC9wPg0KPHAgY2xhc3M9Ik1zb0xp
c3RQYXJhZ3JhcGgiPjxzcGFuIHN0eWxlPSJmb250LXNpemU6OS4wcHQ7Zm9udC1mYW1pbHk6JnF1
b3Q7QXJpYWwmcXVvdDssc2Fucy1zZXJpZiI+Jm5ic3A7IC91c3IvbGliL2xpYnhlbmZzaW1hZ2Uu
c288bzpwPjwvbzpwPjwvc3Bhbj48L3A+DQo8cCBjbGFzcz0iTXNvTGlzdFBhcmFncmFwaCI+PHNw
YW4gc3R5bGU9ImZvbnQtc2l6ZTo5LjBwdDtmb250LWZhbWlseTomcXVvdDtBcmlhbCZxdW90Oyxz
YW5zLXNlcmlmIj4mbmJzcDsgL3Vzci9saWIvbGlieGVuaHlwZnMuc28uMS4wPG86cD48L286cD48
L3NwYW4+PC9wPg0KPHAgY2xhc3M9Ik1zb0xpc3RQYXJhZ3JhcGgiPjxzcGFuIHN0eWxlPSJmb250
LXNpemU6OS4wcHQ7Zm9udC1mYW1pbHk6JnF1b3Q7QXJpYWwmcXVvdDssc2Fucy1zZXJpZiI+Jm5i
c3A7IC91c3IvbGliL2xpYnhlbmZzaW1hZ2Uuc28uNC4xNDxvOnA+PC9vOnA+PC9zcGFuPjwvcD4N
CjxwIGNsYXNzPSJNc29MaXN0UGFyYWdyYXBoIj48c3BhbiBzdHlsZT0iZm9udC1zaXplOjkuMHB0
O2ZvbnQtZmFtaWx5OiZxdW90O0FyaWFsJnF1b3Q7LHNhbnMtc2VyaWYiPiZuYnNwOyAvdXNyL2xp
Yi9saWJ4ZW5oeXBmcy5zbzxvOnA+PC9vOnA+PC9zcGFuPjwvcD4NCjxwIGNsYXNzPSJNc29MaXN0
UGFyYWdyYXBoIj48c3BhbiBzdHlsZT0iZm9udC1zaXplOjkuMHB0O2ZvbnQtZmFtaWx5OiZxdW90
O0FyaWFsJnF1b3Q7LHNhbnMtc2VyaWYiPiZuYnNwOyAvdXNyL2xpYi9wa2djb25maWc8bzpwPjwv
bzpwPjwvc3Bhbj48L3A+DQo8cCBjbGFzcz0iTXNvTGlzdFBhcmFncmFwaCI+PHNwYW4gc3R5bGU9
ImZvbnQtc2l6ZTo5LjBwdDtmb250LWZhbWlseTomcXVvdDtBcmlhbCZxdW90OyxzYW5zLXNlcmlm
Ij4mbmJzcDsgL3Vzci9saWIveGVuL2Jpbi9kZXByaXYtZmQtY2hlY2tlcjxvOnA+PC9vOnA+PC9z
cGFuPjwvcD4NCjxwIGNsYXNzPSJNc29MaXN0UGFyYWdyYXBoIj48c3BhbiBzdHlsZT0iZm9udC1z
aXplOjkuMHB0O2ZvbnQtZmFtaWx5OiZxdW90O0FyaWFsJnF1b3Q7LHNhbnMtc2VyaWYiPiZuYnNw
OyAvdXNyL2xpYi9wa2djb25maWcveGVubGlnaHQucGM8bzpwPjwvbzpwPjwvc3Bhbj48L3A+DQo8
cCBjbGFzcz0iTXNvTGlzdFBhcmFncmFwaCI+PHNwYW4gc3R5bGU9ImZvbnQtc2l6ZTo5LjBwdDtm
b250LWZhbWlseTomcXVvdDtBcmlhbCZxdW90OyxzYW5zLXNlcmlmIj4mbmJzcDsgL3Vzci9saWIv
cGtnY29uZmlnL3hlbmd1ZXN0LnBjPG86cD48L286cD48L3NwYW4+PC9wPg0KPHAgY2xhc3M9Ik1z
b0xpc3RQYXJhZ3JhcGgiPjxzcGFuIHN0eWxlPSJmb250LXNpemU6OS4wcHQ7Zm9udC1mYW1pbHk6
JnF1b3Q7QXJpYWwmcXVvdDssc2Fucy1zZXJpZiI+Jm5ic3A7IC91c3IvbGliL3BrZ2NvbmZpZy94
ZW5oeXBmcy5wYzxvOnA+PC9vOnA+PC9zcGFuPjwvcD4NCjxwIGNsYXNzPSJNc29MaXN0UGFyYWdy
YXBoIj48c3BhbiBzdHlsZT0iZm9udC1zaXplOjkuMHB0O2ZvbnQtZmFtaWx5OiZxdW90O0FyaWFs
JnF1b3Q7LHNhbnMtc2VyaWYiPiZuYnNwOyAvdXNyL2xpYi9wa2djb25maWcveGx1dGlsLnBjPG86
cD48L286cD48L3NwYW4+PC9wPg0KPHAgY2xhc3M9Ik1zb0xpc3RQYXJhZ3JhcGgiPjxzcGFuIHN0
eWxlPSJmb250LXNpemU6OS4wcHQ7Zm9udC1mYW1pbHk6JnF1b3Q7QXJpYWwmcXVvdDssc2Fucy1z
ZXJpZiI+Jm5ic3A7IC91c3IvbGliL3BrZ2NvbmZpZy94ZW50b29sY29yZS5wYzxvOnA+PC9vOnA+
PC9zcGFuPjwvcD4NCjxwIGNsYXNzPSJNc29MaXN0UGFyYWdyYXBoIj48c3BhbiBzdHlsZT0iZm9u
dC1zaXplOjkuMHB0O2ZvbnQtZmFtaWx5OiZxdW90O0FyaWFsJnF1b3Q7LHNhbnMtc2VyaWYiPiZu
YnNwOyAvdXNyL2xpYi9wa2djb25maWcveGVudG9vbGxvZy5wYzxvOnA+PC9vOnA+PC9zcGFuPjwv
cD4NCjxwIGNsYXNzPSJNc29MaXN0UGFyYWdyYXBoIj48c3BhbiBzdHlsZT0iZm9udC1zaXplOjku
MHB0O2ZvbnQtZmFtaWx5OiZxdW90O0FyaWFsJnF1b3Q7LHNhbnMtc2VyaWYiPiZuYnNwOyAvdXNy
L2xpYi9wa2djb25maWcveGVuc3RvcmUucGM8bzpwPjwvbzpwPjwvc3Bhbj48L3A+DQo8cCBjbGFz
cz0iTXNvTGlzdFBhcmFncmFwaCI+PHNwYW4gc3R5bGU9ImZvbnQtc2l6ZTo5LjBwdDtmb250LWZh
bWlseTomcXVvdDtBcmlhbCZxdW90OyxzYW5zLXNlcmlmIj4mbmJzcDsgL3Vzci9saWIvcGtnY29u
ZmlnL3hlbmNhbGwucGM8bzpwPjwvbzpwPjwvc3Bhbj48L3A+DQo8cCBjbGFzcz0iTXNvTGlzdFBh
cmFncmFwaCI+PHNwYW4gc3R5bGU9ImZvbnQtc2l6ZTo5LjBwdDtmb250LWZhbWlseTomcXVvdDtB
cmlhbCZxdW90OyxzYW5zLXNlcmlmIj4mbmJzcDsgL3Vzci9saWIvcGtnY29uZmlnL3hlbmNvbnRy
b2wucGM8bzpwPjwvbzpwPjwvc3Bhbj48L3A+DQo8cCBjbGFzcz0iTXNvTGlzdFBhcmFncmFwaCI+
PHNwYW4gc3R5bGU9ImZvbnQtc2l6ZTo5LjBwdDtmb250LWZhbWlseTomcXVvdDtBcmlhbCZxdW90
OyxzYW5zLXNlcmlmIj4mbmJzcDsgL3Vzci9saWIvcGtnY29uZmlnL3hlbmRldmljZW1vZGVsLnBj
PG86cD48L286cD48L3NwYW4+PC9wPg0KPHAgY2xhc3M9Ik1zb0xpc3RQYXJhZ3JhcGgiPjxzcGFu
IHN0eWxlPSJmb250LXNpemU6OS4wcHQ7Zm9udC1mYW1pbHk6JnF1b3Q7QXJpYWwmcXVvdDssc2Fu
cy1zZXJpZiI+Jm5ic3A7IC91c3IvbGliL3BrZ2NvbmZpZy94ZW5zdGF0LnBjPG86cD48L286cD48
L3NwYW4+PC9wPg0KPHAgY2xhc3M9Ik1zb0xpc3RQYXJhZ3JhcGgiPjxzcGFuIHN0eWxlPSJmb250
LXNpemU6OS4wcHQ7Zm9udC1mYW1pbHk6JnF1b3Q7QXJpYWwmcXVvdDssc2Fucy1zZXJpZiI+Jm5i
c3A7IC91c3IvbGliL3BrZ2NvbmZpZy94ZW5nbnR0YWIucGM8bzpwPjwvbzpwPjwvc3Bhbj48L3A+
DQo8cCBjbGFzcz0iTXNvTGlzdFBhcmFncmFwaCI+PHNwYW4gc3R5bGU9ImZvbnQtc2l6ZTo5LjBw
dDtmb250LWZhbWlseTomcXVvdDtBcmlhbCZxdW90OyxzYW5zLXNlcmlmIj4mbmJzcDsgL3Vzci9s
aWIvcGtnY29uZmlnL3hlbmV2dGNobi5wYzxvOnA+PC9vOnA+PC9zcGFuPjwvcD4NCjxwIGNsYXNz
PSJNc29MaXN0UGFyYWdyYXBoIj48c3BhbiBzdHlsZT0iZm9udC1zaXplOjkuMHB0O2ZvbnQtZmFt
aWx5OiZxdW90O0FyaWFsJnF1b3Q7LHNhbnMtc2VyaWYiPiZuYnNwOyAvdXNyL2xpYi9wa2djb25m
aWcveGVudmNoYW4ucGM8bzpwPjwvbzpwPjwvc3Bhbj48L3A+DQo8cCBjbGFzcz0iTXNvTGlzdFBh
cmFncmFwaCI+PHNwYW4gc3R5bGU9ImZvbnQtc2l6ZTo5LjBwdDtmb250LWZhbWlseTomcXVvdDtB
cmlhbCZxdW90OyxzYW5zLXNlcmlmIj4mbmJzcDsgL3Vzci9saWIvcGtnY29uZmlnL3hlbmZvcmVp
Z25tZW1vcnkucGM8bzpwPjwvbzpwPjwvc3Bhbj48L3A+DQo8cCBjbGFzcz0iTXNvTGlzdFBhcmFn
cmFwaCI+PHNwYW4gc3R5bGU9ImZvbnQtc2l6ZTo5LjBwdDtmb250LWZhbWlseTomcXVvdDtBcmlh
bCZxdW90OyxzYW5zLXNlcmlmIj4mbmJzcDsgL3Vzci9saWIveGVuZnNpbWFnZS9mYXQvZnNpbWFn
ZS5zbzxvOnA+PC9vOnA+PC9zcGFuPjwvcD4NCjxwIGNsYXNzPSJNc29MaXN0UGFyYWdyYXBoIj48
c3BhbiBzdHlsZT0iZm9udC1zaXplOjkuMHB0O2ZvbnQtZmFtaWx5OiZxdW90O0FyaWFsJnF1b3Q7
LHNhbnMtc2VyaWYiPiZuYnNwOyAvdXNyL2xpYi94ZW5mc2ltYWdlL2lzbzk2NjAvZnNpbWFnZS5z
bzxvOnA+PC9vOnA+PC9zcGFuPjwvcD4NCjxwIGNsYXNzPSJNc29MaXN0UGFyYWdyYXBoIj48c3Bh
biBzdHlsZT0iZm9udC1zaXplOjkuMHB0O2ZvbnQtZmFtaWx5OiZxdW90O0FyaWFsJnF1b3Q7LHNh
bnMtc2VyaWYiPiZuYnNwOyAvdXNyL2xpYi94ZW5mc2ltYWdlL3JlaXNlcmZzL2ZzaW1hZ2Uuc288
bzpwPjwvbzpwPjwvc3Bhbj48L3A+DQo8cCBjbGFzcz0iTXNvTGlzdFBhcmFncmFwaCI+PHNwYW4g
c3R5bGU9ImZvbnQtc2l6ZTo5LjBwdDtmb250LWZhbWlseTomcXVvdDtBcmlhbCZxdW90OyxzYW5z
LXNlcmlmIj4mbmJzcDsgL3Vzci9saWIveGVuZnNpbWFnZS91ZnMvZnNpbWFnZS5zbzxvOnA+PC9v
OnA+PC9zcGFuPjwvcD4NCjxwIGNsYXNzPSJNc29MaXN0UGFyYWdyYXBoIj48c3BhbiBzdHlsZT0i
Zm9udC1zaXplOjkuMHB0O2ZvbnQtZmFtaWx5OiZxdW90O0FyaWFsJnF1b3Q7LHNhbnMtc2VyaWYi
PiZuYnNwOyAvdXNyL2xpYi94ZW5mc2ltYWdlL2V4dDJmcy1saWIvZnNpbWFnZS5zbzxvOnA+PC9v
OnA+PC9zcGFuPjwvcD4NCjxwIGNsYXNzPSJNc29MaXN0UGFyYWdyYXBoIj48c3BhbiBzdHlsZT0i
Zm9udC1zaXplOjkuMHB0O2ZvbnQtZmFtaWx5OiZxdW90O0FyaWFsJnF1b3Q7LHNhbnMtc2VyaWYi
PiZuYnNwOyAvdXNyL2xpYi94ZW5mc2ltYWdlL3pmcy9mc2ltYWdlLnNvPG86cD48L286cD48L3Nw
YW4+PC9wPg0KPHAgY2xhc3M9Ik1zb0xpc3RQYXJhZ3JhcGgiPjxzcGFuIHN0eWxlPSJmb250LXNp
emU6OS4wcHQ7Zm9udC1mYW1pbHk6JnF1b3Q7QXJpYWwmcXVvdDssc2Fucy1zZXJpZiI+UGxlYXNl
IHNldCBGSUxFUyBzdWNoIHRoYXQgdGhlc2UgaXRlbXMgYXJlIHBhY2thZ2VkLiBBbHRlcm5hdGl2
ZWx5IGlmIHRoZXkgYXJlIHVubmVlZGVkLCBhdm9pZCBpbnN0YWxsaW5nIHRoZW0gb3IgZGVsZXRl
IHRoZW0gd2l0aGluIGRvX2luc3RhbGwuPG86cD48L286cD48L3NwYW4+PC9wPg0KPHAgY2xhc3M9
Ik1zb0xpc3RQYXJhZ3JhcGgiPjxzcGFuIHN0eWxlPSJmb250LXNpemU6OS4wcHQ7Zm9udC1mYW1p
bHk6JnF1b3Q7QXJpYWwmcXVvdDssc2Fucy1zZXJpZiI+eGVuOiAzMiBpbnN0YWxsZWQgYW5kIG5v
dCBzaGlwcGVkIGZpbGVzLiBbaW5zdGFsbGVkLXZzLXNoaXBwZWRdPG86cD48L286cD48L3NwYW4+
PC9wPg0KPHAgY2xhc3M9Ik1zb0xpc3RQYXJhZ3JhcGgiPjxzcGFuIHN0eWxlPSJmb250LXNpemU6
OS4wcHQ7Zm9udC1mYW1pbHk6JnF1b3Q7QXJpYWwmcXVvdDssc2Fucy1zZXJpZiI+RVJST1I6IHhl
bi11bnN0YWJsZSYjNDM7Z2l0QVVUT0lOQyYjNDM7YmU2M2Q5ZDQ3Zi1yMCBkb19wYWNrYWdlOiBG
YXRhbCBRQSBlcnJvcnMgZm91bmQsIGZhaWxpbmcgdGFzay48bzpwPjwvbzpwPjwvc3Bhbj48L3A+
DQo8cCBjbGFzcz0iTXNvTGlzdFBhcmFncmFwaCI+PHNwYW4gc3R5bGU9ImZvbnQtc2l6ZTo5LjBw
dDtmb250LWZhbWlseTomcXVvdDtBcmlhbCZxdW90OyxzYW5zLXNlcmlmIj5FUlJPUjogeGVuLXVu
c3RhYmxlJiM0MztnaXRBVVRPSU5DJiM0MztiZTYzZDlkNDdmLXIwIGRvX3BhY2thZ2U6IEZ1bmN0
aW9uIGZhaWxlZDogZG9fcGFja2FnZTxvOnA+PC9vOnA+PC9zcGFuPjwvcD4NCjxwIGNsYXNzPSJN
c29MaXN0UGFyYWdyYXBoIj48c3BhbiBzdHlsZT0iZm9udC1zaXplOjkuMHB0O2ZvbnQtZmFtaWx5
OiZxdW90O0FyaWFsJnF1b3Q7LHNhbnMtc2VyaWYiPkVSUk9SOiBMb2dmaWxlIG9mIGZhaWx1cmUg
c3RvcmVkIGluOiAvaG9tZS9tYW5pa2FuZGFuL3lvY3RvXzIuMTkvYnVpbGQvYnVpbGQvdG1wL3dv
cmsvYWFyY2g2NC1wb2t5LWxpbnV4L3hlbi91bnN0YWJsZSYjNDM7Z2l0QVVUT0lOQyYjNDM7YmU2
M2Q5ZDQ3Zi1yMC90ZW1wL2xvZy5kb19wYWNrYWdlLjE3ODg5PG86cD48L286cD48L3NwYW4+PC9w
Pg0KPHAgY2xhc3M9Ik1zb0xpc3RQYXJhZ3JhcGgiPjxzcGFuIHN0eWxlPSJmb250LXNpemU6OS4w
cHQ7Zm9udC1mYW1pbHk6JnF1b3Q7QXJpYWwmcXVvdDssc2Fucy1zZXJpZiI+RVJST1I6IFRhc2sg
MTMgKC9ob21lL21hbmlrYW5kYW4veW9jdG9fMi4xOS9idWlsZC9tZXRhLXZpcnR1YWxpemF0aW9u
L3JlY2lwZXMtZXh0ZW5kZWQveGVuL3hlbl9naXQuYmIsIGRvX3BhY2thZ2UpIGZhaWxlZCB3aXRo
IGV4aXQgY29kZSAnMSc8bzpwPjwvbzpwPjwvc3Bhbj48L3A+DQo8cCBjbGFzcz0iTXNvTm9ybWFs
Ij48c3BhbiBzdHlsZT0iZm9udC1zaXplOjEwLjBwdDtmb250LWZhbWlseTomcXVvdDtBcmlhbCZx
dW90OyxzYW5zLXNlcmlmIj48bzpwPiZuYnNwOzwvbzpwPjwvc3Bhbj48L3A+DQo8cCBjbGFzcz0i
TXNvTm9ybWFsIj48c3BhbiBzdHlsZT0iZm9udC1zaXplOjEwLjBwdDtmb250LWZhbWlseTomcXVv
dDtBcmlhbCZxdW90OyxzYW5zLXNlcmlmIj5DYW4geW91IHBsZWFzZSBsZXQgbWUga25vdyBpcyB0
aGVyZSBhbnkgdXBkYXRlIG5lZWRzIHRvIGJlIGRvbmUgaW4gdGhlIGJ1aWxkIHN0ZXBzIG1lbnRp
b25lZCBpbiB0aGUgbGluaz8gSWYgbm90LCBjYW4geW91IHBsZWFzZSBsZXQgbWUga25vdyBob3cg
SSBjYW4gb3ZlcmNvbWUgdGhpcyBpc3N1ZXMgYW5kDQogYnVpbGQgYSBoeXBlcnZpc29yIGltYWdl
IGZvciBSQ0FSPzxvOnA+PC9vOnA+PC9zcGFuPjwvcD4NCjxwIGNsYXNzPSJNc29Ob3JtYWwiPjxz
cGFuIHN0eWxlPSJmb250LXNpemU6MTAuMHB0O2ZvbnQtZmFtaWx5OiZxdW90O0FyaWFsJnF1b3Q7
LHNhbnMtc2VyaWYiPjxvOnA+Jm5ic3A7PC9vOnA+PC9zcGFuPjwvcD4NCjxwIGNsYXNzPSJNc29O
b3JtYWwiPjxzcGFuIHN0eWxlPSJmb250LXNpemU6MTAuMHB0O2ZvbnQtZmFtaWx5OiZxdW90O0Fy
aWFsJnF1b3Q7LHNhbnMtc2VyaWYiPlRoYW5rcyBpbiBBZHZhbmNlLiBNeSBidWlsZCBjb25maWd1
cmF0aW9uJm5ic3A7IGlzIGFzIGZvbGxvd3MuPG86cD48L286cD48L3NwYW4+PC9wPg0KPHAgY2xh
c3M9Ik1zb05vcm1hbCI+PHNwYW4gc3R5bGU9ImZvbnQtc2l6ZToxMC4wcHQ7Zm9udC1mYW1pbHk6
JnF1b3Q7QXJpYWwmcXVvdDssc2Fucy1zZXJpZiI+PG86cD4mbmJzcDs8L286cD48L3NwYW4+PC9w
Pg0KPHAgY2xhc3M9Ik1zb05vcm1hbCI+PHNwYW4gc3R5bGU9ImZvbnQtc2l6ZToxMC4wcHQ7Zm9u
dC1mYW1pbHk6JnF1b3Q7QXJpYWwmcXVvdDssc2Fucy1zZXJpZiI+QnVpbGQgQ29uZmlndXJhdGlv
bjo8bzpwPjwvbzpwPjwvc3Bhbj48L3A+DQo8cCBjbGFzcz0iTXNvTm9ybWFsIj48c3BhbiBzdHls
ZT0iZm9udC1zaXplOjEwLjBwdDtmb250LWZhbWlseTomcXVvdDtBcmlhbCZxdW90OyxzYW5zLXNl
cmlmIj5CQl9WRVJTSU9OJm5ic3A7Jm5ic3A7Jm5ic3A7Jm5ic3A7Jm5ic3A7Jm5ic3A7Jm5ic3A7
ID0gJnF1b3Q7MS4zMC4wJnF1b3Q7PG86cD48L286cD48L3NwYW4+PC9wPg0KPHAgY2xhc3M9Ik1z
b05vcm1hbCI+PHNwYW4gc3R5bGU9ImZvbnQtc2l6ZToxMC4wcHQ7Zm9udC1mYW1pbHk6JnF1b3Q7
QXJpYWwmcXVvdDssc2Fucy1zZXJpZiI+QlVJTERfU1lTJm5ic3A7Jm5ic3A7Jm5ic3A7Jm5ic3A7
Jm5ic3A7Jm5ic3A7Jm5ic3A7Jm5ic3A7ID0gJnF1b3Q7eDg2XzY0LWxpbnV4JnF1b3Q7PG86cD48
L286cD48L3NwYW4+PC9wPg0KPHAgY2xhc3M9Ik1zb05vcm1hbCI+PHNwYW4gc3R5bGU9ImZvbnQt
c2l6ZToxMC4wcHQ7Zm9udC1mYW1pbHk6JnF1b3Q7QXJpYWwmcXVvdDssc2Fucy1zZXJpZiI+TkFU
SVZFTFNCU1RSSU5HJm5ic3A7Jm5ic3A7ID0gJnF1b3Q7dW5pdmVyc2FsJnF1b3Q7PG86cD48L286
cD48L3NwYW4+PC9wPg0KPHAgY2xhc3M9Ik1zb05vcm1hbCI+PHNwYW4gc3R5bGU9ImZvbnQtc2l6
ZToxMC4wcHQ7Zm9udC1mYW1pbHk6JnF1b3Q7QXJpYWwmcXVvdDssc2Fucy1zZXJpZiI+VEFSR0VU
X1NZUyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwOyA9ICZxdW90O2Fh
cmNoNjQtcG9reS1saW51eCZxdW90OzxvOnA+PC9vOnA+PC9zcGFuPjwvcD4NCjxwIGNsYXNzPSJN
c29Ob3JtYWwiPjxzcGFuIHN0eWxlPSJmb250LXNpemU6MTAuMHB0O2ZvbnQtZmFtaWx5OiZxdW90
O0FyaWFsJnF1b3Q7LHNhbnMtc2VyaWYiPk1BQ0hJTkUmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsm
bmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsgPSAmcXVvdDtzYWx2YXRvci14JnF1
b3Q7PG86cD48L286cD48L3NwYW4+PC9wPg0KPHAgY2xhc3M9Ik1zb05vcm1hbCI+PHNwYW4gc3R5
bGU9ImZvbnQtc2l6ZToxMC4wcHQ7Zm9udC1mYW1pbHk6JnF1b3Q7QXJpYWwmcXVvdDssc2Fucy1z
ZXJpZiI+RElTVFJPJm5ic3A7Jm5ic3A7Jm5ic3A7Jm5ic3A7ICZuYnNwOyZuYnNwOyZuYnNwOyZu
YnNwOyZuYnNwOyZuYnNwOyZuYnNwOz0gJnF1b3Q7cG9reSZxdW90OzxvOnA+PC9vOnA+PC9zcGFu
PjwvcD4NCjxwIGNsYXNzPSJNc29Ob3JtYWwiPjxzcGFuIHN0eWxlPSJmb250LXNpemU6MTAuMHB0
O2ZvbnQtZmFtaWx5OiZxdW90O0FyaWFsJnF1b3Q7LHNhbnMtc2VyaWYiPkRJU1RST19WRVJTSU9O
Jm5ic3A7Jm5ic3A7Jm5ic3A7ID0gJnF1b3Q7Mi4xLjImcXVvdDs8bzpwPjwvbzpwPjwvc3Bhbj48
L3A+DQo8cCBjbGFzcz0iTXNvTm9ybWFsIj48c3BhbiBzdHlsZT0iZm9udC1zaXplOjEwLjBwdDtm
b250LWZhbWlseTomcXVvdDtBcmlhbCZxdW90OyxzYW5zLXNlcmlmIj5UVU5FX0ZFQVRVUkVTJm5i
c3A7Jm5ic3A7Jm5ic3A7Jm5ic3A7ID0gJnF1b3Q7YWFyY2g2NCBjb3J0ZXhhNTctY29ydGV4YTUz
JnF1b3Q7PG86cD48L286cD48L3NwYW4+PC9wPg0KPHAgY2xhc3M9Ik1zb05vcm1hbCI+PHNwYW4g
c3R5bGU9ImZvbnQtc2l6ZToxMC4wcHQ7Zm9udC1mYW1pbHk6JnF1b3Q7QXJpYWwmcXVvdDssc2Fu
cy1zZXJpZiI+VEFSR0VUX0ZQVSZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwOyZu
YnNwOyA9ICZxdW90OyZxdW90OzxvOnA+PC9vOnA+PC9zcGFuPjwvcD4NCjxwIGNsYXNzPSJNc29O
b3JtYWwiPjxzcGFuIHN0eWxlPSJmb250LXNpemU6MTAuMHB0O2ZvbnQtZmFtaWx5OiZxdW90O0Fy
aWFsJnF1b3Q7LHNhbnMtc2VyaWYiPlNPQ19GQU1JTFkmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsm
bmJzcDsmbmJzcDsmbmJzcDsgPSAmcXVvdDtyY2FyLWdlbjM6cjhhNzc5NSZxdW90OzxvOnA+PC9v
OnA+PC9zcGFuPjwvcD4NCjxwIGNsYXNzPSJNc29Ob3JtYWwiPjxzcGFuIHN0eWxlPSJmb250LXNp
emU6MTAuMHB0O2ZvbnQtZmFtaWx5OiZxdW90O0FyaWFsJnF1b3Q7LHNhbnMtc2VyaWYiPm1ldGEm
bmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJz
cDsmbmJzcDsmbmJzcDsmbmJzcDsNCjxvOnA+PC9vOnA+PC9zcGFuPjwvcD4NCjxwIGNsYXNzPSJN
c29Ob3JtYWwiPjxzcGFuIHN0eWxlPSJmb250LXNpemU6MTAuMHB0O2ZvbnQtZmFtaWx5OiZxdW90
O0FyaWFsJnF1b3Q7LHNhbnMtc2VyaWYiPm1ldGEtcG9reSZuYnNwOyZuYnNwOyZuYnNwOyZuYnNw
OyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwOw0KPG86cD48L286cD48L3NwYW4+PC9wPg0KPHAgY2xh
c3M9Ik1zb05vcm1hbCI+PHNwYW4gc3R5bGU9ImZvbnQtc2l6ZToxMC4wcHQ7Zm9udC1mYW1pbHk6
JnF1b3Q7QXJpYWwmcXVvdDssc2Fucy1zZXJpZiI+bWV0YS15b2N0by1ic3AmbmJzcDsmbmJzcDsm
bmJzcDsgPSAmcXVvdDt0bXA6Y2NhOGRkMTVjODA5NjYyNjA1MmY2ZDhkMjVmZjFlOWE2MDYxMDRh
MyZxdW90OzxvOnA+PC9vOnA+PC9zcGFuPjwvcD4NCjxwIGNsYXNzPSJNc29Ob3JtYWwiPjxzcGFu
IHN0eWxlPSJmb250LXNpemU6MTAuMHB0O2ZvbnQtZmFtaWx5OiZxdW90O0FyaWFsJnF1b3Q7LHNh
bnMtc2VyaWYiPm1ldGEtcmNhci1nZW4zJm5ic3A7Jm5ic3A7Jm5ic3A7ID0gJnF1b3Q7dG1wOjk1
Y2I0OGJhMDliYzdlNTVmZDU0OTgxN2UzZTI2NzIzNDA5ZTY4ZDUmcXVvdDs8bzpwPjwvbzpwPjwv
c3Bhbj48L3A+DQo8cCBjbGFzcz0iTXNvTm9ybWFsIj48c3BhbiBzdHlsZT0iZm9udC1zaXplOjEw
LjBwdDtmb250LWZhbWlseTomcXVvdDtBcmlhbCZxdW90OyxzYW5zLXNlcmlmIj5tZXRhLWxpbmFy
by10b29sY2hhaW4NCjxvOnA+PC9vOnA+PC9zcGFuPjwvcD4NCjxwIGNsYXNzPSJNc29Ob3JtYWwi
PjxzcGFuIHN0eWxlPSJmb250LXNpemU6MTAuMHB0O2ZvbnQtZmFtaWx5OiZxdW90O0FyaWFsJnF1
b3Q7LHNhbnMtc2VyaWYiPm1ldGEtb3B0ZWUmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsm
bmJzcDsmbmJzcDsgPSAmcXVvdDt0bXA6MmY1MWQzODA0ODU5OWQ5ODc4ZjE0OWQ2ZDE1NTM5ZmI5
NzYwM2Y4ZiZxdW90OzxvOnA+PC9vOnA+PC9zcGFuPjwvcD4NCjxwIGNsYXNzPSJNc29Ob3JtYWwi
PjxzcGFuIHN0eWxlPSJmb250LXNpemU6MTAuMHB0O2ZvbnQtZmFtaWx5OiZxdW90O0FyaWFsJnF1
b3Q7LHNhbnMtc2VyaWYiPm1ldGEtb2UmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJz
cDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsgPSAmcXVvdDt0bXA6NTVjOGE3NmRhNWRjMDk5YTdi
YzM4Mzg0OTVjNjcyMTQwY2VkYjc4ZSZxdW90OzxvOnA+PC9vOnA+PC9zcGFuPjwvcD4NCjxwIGNs
YXNzPSJNc29Ob3JtYWwiPjxzcGFuIHN0eWxlPSJmb250LXNpemU6MTAuMHB0O2ZvbnQtZmFtaWx5
OiZxdW90O0FyaWFsJnF1b3Q7LHNhbnMtc2VyaWYiPm1ldGEtdmlydHVhbGl6YXRpb24gPSAmcXVv
dDttb3J0eTo2MjQ5NjMxZjU5YWQ2ZWUzZGM5Mzc2MmRlNDlmYzRiNDQzZDk5YWJjJnF1b3Q7PG86
cD48L286cD48L3NwYW4+PC9wPg0KPHAgY2xhc3M9Ik1zb05vcm1hbCI+PHNwYW4gc3R5bGU9ImZv
bnQtc2l6ZToxMC4wcHQ7Zm9udC1mYW1pbHk6JnF1b3Q7QXJpYWwmcXVvdDssc2Fucy1zZXJpZiI+
bWV0YS1zZWxpbnV4Jm5ic3A7Jm5ic3A7Jm5ic3A7Jm5ic3A7Jm5ic3A7ID0gJnF1b3Q7amV0aHJv
OjRjNzVkOWNiY2YxZDc1MDQzYzdjNWFiMzE1YWEzODNkOWIyMjc1MTAmcXVvdDs8bzpwPjwvbzpw
Pjwvc3Bhbj48L3A+DQo8cCBjbGFzcz0iTXNvTm9ybWFsIj48c3BhbiBzdHlsZT0iZm9udC1zaXpl
OjEwLjBwdDtmb250LWZhbWlseTomcXVvdDtBcmlhbCZxdW90OyxzYW5zLXNlcmlmIj5tZXRhLW5l
dHdvcmtpbmcmbmJzcDsmbmJzcDsNCjxvOnA+PC9vOnA+PC9zcGFuPjwvcD4NCjxwIGNsYXNzPSJN
c29Ob3JtYWwiPjxzcGFuIHN0eWxlPSJmb250LXNpemU6MTAuMHB0O2ZvbnQtZmFtaWx5OiZxdW90
O0FyaWFsJnF1b3Q7LHNhbnMtc2VyaWYiPm1ldGEtcHl0aG9uJm5ic3A7Jm5ic3A7Jm5ic3A7Jm5i
c3A7Jm5ic3A7Jm5ic3A7ID0gJnF1b3Q7dG1wOjU1YzhhNzZkYTVkYzA5OWE3YmMzODM4NDk1YzY3
MjE0MGNlZGI3OGUmcXVvdDs8bzpwPjwvbzpwPjwvc3Bhbj48L3A+DQo8cCBjbGFzcz0iTXNvTm9y
bWFsIj48c3BhbiBzdHlsZT0iZm9udC1zaXplOjEwLjBwdDtmb250LWZhbWlseTomcXVvdDtBcmlh
bCZxdW90OyxzYW5zLXNlcmlmIj5tZXRhLXJjYXItZ2VuMy14ZW4gPSAmcXVvdDttYXN0ZXI6NjA2
OTljNjMxZDU0MWFlZWFlYmFlZWM5YTA4N2VmZWQ5Mzg1ZWU0MiZxdW90OzxvOnA+PC9vOnA+PC9z
cGFuPjwvcD4NCjxwIGNsYXNzPSJNc29Ob3JtYWwiPjxzcGFuIHN0eWxlPSJmb250LXNpemU6MTAu
MHB0O2ZvbnQtZmFtaWx5OiZxdW90O0FyaWFsJnF1b3Q7LHNhbnMtc2VyaWYiPjxvOnA+Jm5ic3A7
PC9vOnA+PC9zcGFuPjwvcD4NCjxwIGNsYXNzPSJNc29Ob3JtYWwiPjxzcGFuIHN0eWxlPSJmb250
LXNpemU6MTAuMHB0O2ZvbnQtZmFtaWx5OiZxdW90O0FyaWFsJnF1b3Q7LHNhbnMtc2VyaWYiPjxv
OnA+Jm5ic3A7PC9vOnA+PC9zcGFuPjwvcD4NCjxwIGNsYXNzPSJNc29Ob3JtYWwiPjxzcGFuIHN0
eWxlPSJmb250LXNpemU6MTAuMHB0O2ZvbnQtZmFtaWx5OiZxdW90O0FyaWFsJnF1b3Q7LHNhbnMt
c2VyaWY7Y29sb3I6YmxhY2siPk1pdCBmcmV1bmRsaWNoZW4gR3LDvMOfZW4gLyBCZXN0IHJlZ2Fy
ZHM8YnI+DQo8YnI+DQo8Yj5DaG9ja2FsaW5nYW0gTWFuaWthbmRhbjwvYj48L3NwYW4+PHNwYW4g
c3R5bGU9ImZvbnQtc2l6ZToxMi4wcHQ7Zm9udC1mYW1pbHk6JnF1b3Q7VGltZXMgTmV3IFJvbWFu
JnF1b3Q7LHNlcmlmIj4NCjwvc3Bhbj48c3BhbiBzdHlsZT0iZm9udC1zaXplOjEwLjBwdDtmb250
LWZhbWlseTomcXVvdDtBcmlhbCZxdW90OyxzYW5zLXNlcmlmO2NvbG9yOmJsYWNrIj48YnI+DQo8
YnI+DQpFUy1DTSBDb3JlIGZuLEFESVQgKFJCRUkvRUNGMyk8YnI+DQpSb2JlcnQgQm9zY2ggR21i
SCB8IFBvc3RmYWNoIDEwIDYwIDUwIHwgNzAwNDkgU3R1dHRnYXJ0IHwgPHNwYW4gc3R5bGU9InRl
eHQtdHJhbnNmb3JtOnVwcGVyY2FzZSI+DQpHRVJNQU5ZIHwgPC9zcGFuPjxhIGhyZWY9Ind3dy5i
b3NjaC5jb20iPjxzcGFuIHN0eWxlPSJjb2xvcjpibHVlIj53d3cuYm9zY2guY29tPC9zcGFuPjwv
YT48YnI+DQpUZWwuICYjNDM7OTEgODAgNjEzNi00NDUyIHwgRmF4ICYjNDM7OTEgODAgNjYxNy0w
NzExIHwgPGEgaHJlZj0ibWFpbHRvOk1hbmlrYW5kYW4uQ2hvY2thbGluZ2FtQGluLmJvc2NoLmNv
bSI+DQo8c3BhbiBzdHlsZT0iY29sb3I6Ymx1ZSI+TWFuaWthbmRhbi5DaG9ja2FsaW5nYW1AaW4u
Ym9zY2guY29tPC9zcGFuPjwvYT48L3NwYW4+PHNwYW4gc3R5bGU9ImZvbnQtc2l6ZTo4LjBwdDtm
b250LWZhbWlseTomcXVvdDtBcmlhbCZxdW90OyxzYW5zLXNlcmlmO2NvbG9yOmJsYWNrIj48YnI+
DQo8YnI+DQpSZWdpc3RlcmVkIE9mZmljZTogU3R1dHRnYXJ0LCBSZWdpc3RyYXRpb24gQ291cnQ6
IEFtdHNnZXJpY2h0IFN0dXR0Z2FydCwgSFJCIDE0MDAwOzxicj4NCkNoYWlybWFuIG9mIHRoZSBT
dXBlcnZpc29yeSBCb2FyZDogRnJhbnogRmVocmVuYmFjaDsgTWFuYWdpbmcgRGlyZWN0b3JzOiBE
ci4gVm9sa21hciBEZW5uZXIsDQo8YnI+DQpQcm9mLiBEci4gU3RlZmFuIEFzZW5rZXJzY2hiYXVt
ZXIsIERyLiBNaWNoYWVsIEJvbGxlLCBEci4gQ2hyaXN0aWFuIEZpc2NoZXIsIERyLiBTdGVmYW4g
SGFydHVuZyw8YnI+DQpEci4gTWFya3VzIEhleW4sIEhhcmFsZCBLcsO2Z2VyLCBDaHJpc3RvcGgg
S8O8YmVsLCBSb2xmIE5ham9yaywgVXdlIFJhc2Noa2UsIFBldGVyIFR5cm9sbGVyDQo8L3NwYW4+
PHNwYW4gc3R5bGU9ImZvbnQtc2l6ZToxMi4wcHQ7Zm9udC1mYW1pbHk6JnF1b3Q7VGltZXMgTmV3
IFJvbWFuJnF1b3Q7LHNlcmlmIj48YnI+DQrigIs8L3NwYW4+PG86cD48L286cD48L3A+DQo8L2Rp
dj4NCjwvYm9keT4NCjwvaHRtbD4NCg==

--_000_1b60ed1cd7834ed5957a2b4870602073inboschcom_--


From xen-devel-bounces@lists.xenproject.org Tue Jul 07 08:12:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jul 2020 08:12:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jsii7-0008Hd-Ih; Tue, 07 Jul 2020 08:12:15 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=VMuD=AS=epam.com=oleksandr_andrushchenko@srs-us1.protection.inumbo.net>)
 id 1jsii5-0008HY-PP
 for xen-devel@lists.xen.org; Tue, 07 Jul 2020 08:12:13 +0000
X-Inumbo-ID: 8f6de7fe-c029-11ea-8496-bc764e2007e4
Received: from EUR03-DB5-obe.outbound.protection.outlook.com (unknown
 [40.107.4.50]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8f6de7fe-c029-11ea-8496-bc764e2007e4;
 Tue, 07 Jul 2020 08:12:12 +0000 (UTC)
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Wdse/6Au+t6KJKzFeyHDzhDcBUa3EcoEwwS9dXu8+b+iaxDOq1dsQoSSf5Vk05iX/yhK9JVBHivwQiP4SzeKbAVFOmet9DIXdgMqpj/GnNa9Q8zARQgCC/bcteJg8Td18lY9oxuodIaRACcR8NmzkUN6J4kA06QnfXNGmRqtipMavt9IslluTJnSMdwXC7R9qwg3aDGOx3dFbUJB6vhnwR3vLNQDL1miskvCpBis1vdp9r1y7z++wgtEwXO5i8lpsvgMd59mYP0kOgFUZMEnJO/HhbRYn6gzXd6Gk2lc4ERPFPfxr1m22qECPozX6hBuRZ1HRHkulq8zOM+IEyJhSw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; 
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=IuCSkKk+ldTmqyFOdGjcrn7E6I4yOU4d/kDBxiYg7+M=;
 b=IXT5fslaCFCwXLC2J1rm1uSdZkGPX5yfDVIHkvWLjHlvg3bvjuG5bt3DVwyZUaCm9cSmyctQPYXufpDncQeQ8IxA5eiVwC3ctEqFRkWnXAcxtKl25yoxVDTIPK+zYykuYLveQd2Fz7m0v6rZIK0WOdBeAh5zEJEOvhXC8o+Y+6NMUAESzXiY2KJvo3YjSbNbjhKiE1JyfDX1VrzWjT0qyVyw+F+u086vdChpoPOODVtXaDy/o+KiPUQG94qhTwPciyaJMHX/ORNA479zNdnBDerGlrwPf6wskLMbLrL0ntWWxCWNgdnWGlAMk2KIBn+1Wtl8UPg9SHZ13rvI9zyy+w==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=epam.com; dmarc=pass action=none header.from=epam.com;
 dkim=pass header.d=epam.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=epam.com; s=selector1; 
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=IuCSkKk+ldTmqyFOdGjcrn7E6I4yOU4d/kDBxiYg7+M=;
 b=SPQ+dHscOhItxiBd+8YjFiiCmo9MIxe2i/ezQfh0Ms8z+b6cwpgzwwTTmV906yqcuj6QohdpeJKGVJyP5ZUAHAj8LuEF5qSChl8kzc3oD5Bk23BOnMoDCM3yPJNmuZJmQau8XeN5fWv4FZ1edtFwEvtGmHLRdEw3B23gu4DZgmRhaUhP9B/iMgvRAN8fkLgaIUirct4mYH0caydNendZq4i4mq5+MUN9vQLg0SpPyBq6B9a9NVJQLtVu8Ugj8yKpFBYmFAW4GA0xQXEvtDXTlJLGgDifLTaoobqW8SDFMUINqEb1UdXLpmax7NFlk++e1fMlkJvudyrLrp10riRDew==
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com (2603:10a6:20b:153::17)
 by AM0PR03MB4308.eurprd03.prod.outlook.com (2603:10a6:208:ce::25)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3153.29; Tue, 7 Jul
 2020 08:12:11 +0000
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::21e5:6d27:5ba0:f508]) by AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::21e5:6d27:5ba0:f508%9]) with mapi id 15.20.3174.020; Tue, 7 Jul 2020
 08:12:11 +0000
From: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
To: "Manikandan Chockalingam (RBEI/ECF3)"
 <Manikandan.Chockalingam@in.bosch.com>, "xen-devel@lists.xen.org"
 <xen-devel@lists.xen.org>
Subject: Re: [BUG] Xen build for RCAR failing
Thread-Topic: [BUG] Xen build for RCAR failing
Thread-Index: AQHWVDZQf0pjwKxk/0KTFStJ+XMxEQ==
Date: Tue, 7 Jul 2020 08:12:10 +0000
Message-ID: <48b1ea69-f5c1-4ea2-455c-50bab72bc1da@epam.com>
References: <1b60ed1cd7834ed5957a2b4870602073@in.bosch.com>
In-Reply-To: <1b60ed1cd7834ed5957a2b4870602073@in.bosch.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
authentication-results: in.bosch.com; dkim=none (message not signed)
 header.d=none;in.bosch.com; dmarc=none action=none header.from=epam.com;
x-originating-ip: [185.199.97.5]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 6318028d-8239-47c7-b736-08d8224d7300
x-ms-traffictypediagnostic: AM0PR03MB4308:
x-microsoft-antispam-prvs: <AM0PR03MB430862B593EFEBAF0BD37C0CE7660@AM0PR03MB4308.eurprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:2399;
x-forefront-prvs: 0457F11EAF
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info: D1w/wy/E8HGCK0X0TcbxLCG/6J55DOsnmxLpz+646QkZilRNCc2lja1X3thsam17C4jPzIfw1QM45FgLTJ+oTlSJVoNZN0ASsvAZtDxeemzLHscecwggDpcm2Lep+zTqFEoKivGXdb9XCHC5j+SAFSKwIylPDI5QaXpmpNU9EPA2A/QssJTni8Q+2jhbjajw5UNC+b2TdtBqb/5VK4rUEcsuKgHqF6PrMCcSnujRHzddZEgPihs3RolYiHLR+wOgMoK5js1Jx8335NUp9yhdcwNZDe3OOlLDRYyBeSSfXOi5V3w9lD9wsJLNpgUmGkbFk4VsXTqYjFOMj50XtoJz7BiI9zffVa4Vc0o+ZOAouTzG+v2+RESkv2tzEOPysCnWg2L7p0IixruDa3uv5eIoSA==
x-forefront-antispam-report: CIP:255.255.255.255; CTRY:; LANG:en; SCL:1; SRV:;
 IPV:NLI; SFV:NSPM; H:AM0PR03MB6324.eurprd03.prod.outlook.com; PTR:; CAT:NONE;
 SFTY:;
 SFS:(4636009)(136003)(396003)(39860400002)(376002)(366004)(346002)(2906002)(966005)(26005)(71200400001)(186003)(8936002)(36756003)(8676002)(66446008)(64756008)(66556008)(31686004)(110136005)(91956017)(76116006)(66946007)(316002)(66476007)(6506007)(86362001)(31696002)(6486002)(53546011)(5660300002)(478600001)(6512007)(2616005)(4744005);
 DIR:OUT; SFP:1101; 
x-ms-exchange-antispam-messagedata: FANMJz84mkLyL0fYW7cLy3TxMoA5VxqqEihFgnxXtZo5swo8I1hWc6cSwTdhaUR77uTZo7+FCGJ8PYr/LZ3l5ufGf/yTcLGmv52EGjn0oAo59X3CjTuOjn71kU2jpxAWBw9gB/ByQUb3OFFKQw/wKpgMoRmR4ehX108b7lxgnsag5A3+gS6mMTNh0CmW2UvZonemegXPMCyUbDUWCnBWG0Jl5vieeS2b87CR8p3IPEtZ8xnR+EWsKUWry75r3O6Lo61ZF0J1vojnA163h7GLVA73h4tnxa4y+s4TywJiH4fBWWEohGSqZh+58nKGUgl1l/FF6FQ+C1j0uedhU1a7ykDuTWftpu/bswzWCGhhTnLYvVEOGp79q4Q4VefPWIe8FNH8hsHxm97XNSs61f52iJ4INiu8uK5l3yW4vah4tPP83FZtc/DIesjaWdwdS4Qe2oPSWVWE6UMDddIc3IE9xm6ItizFmnTEawJ58McqQ9U=
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="utf-8"
Content-ID: <D58609778F27B346B8C6F934138BFC5D@eurprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: epam.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: AM0PR03MB6324.eurprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 6318028d-8239-47c7-b736-08d8224d7300
X-MS-Exchange-CrossTenant-originalarrivaltime: 07 Jul 2020 08:12:10.8774 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b41b72d0-4e9f-4c26-8a69-f949f367c91d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: JzR/49QpKheqcDolNHF+271WT7GwDDibNWRhNJwB1IYBoCKfyZ8MZeF2kOOfTVXqFRFC4B0fSe2M+hA8Zg8W+219zc0EJaH+n8t1dPCTZ6r8WPWWlglkr1XxR/9ZtUtq
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR03MB4308
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 7/7/20 10:58 AM, Manikandan Chockalingam (RBEI/ECF3) wrote:
>
> Hello Team,
>
> I am trying to build the Xen hypervisor for R-Car, following the steps at
> https://wiki.xenproject.org/wiki/Xen_ARM_with_Virtualization_Extensions/Salvator-X
>
> But I am facing the following issue:
>
> 1. SRC_URI="http://v3.sk/~lkundrak/dev86/archive/Dev86src-${PV}.tar.gz" in
> recipes-extended/dev86/dev86_0.16.20.bb is not accessible
>
> *Modification done:* SRC_URI=https://src.fedoraproject.org/lookaside/extras/dev86/Dev86src-0.16.20.tar.gz/567cf460d132f9d8775dd95f9208e49a/Dev86src-${PV}.tar.gz
>
You can try what we use [1]. The issue you are facing is Yocto-related rather than R-Car-specific, IMO.

[1] https://github.com/xen-troops/meta-xt-prod-devel/blob/master/recipes-domd/domd-image-weston/files/meta-xt-prod-extra/recipes-extended/dev86/dev86_%25.bbappend


From xen-devel-bounces@lists.xenproject.org Tue Jul 07 08:44:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jul 2020 08:44:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jsjDU-0002SZ-Bx; Tue, 07 Jul 2020 08:44:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=9WyG=AS=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jsjDT-0002ST-EU
 for xen-devel@lists.xenproject.org; Tue, 07 Jul 2020 08:44:39 +0000
X-Inumbo-ID: 17ad336e-c02e-11ea-bb8b-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 17ad336e-c02e-11ea-bb8b-bc764e2007e4;
 Tue, 07 Jul 2020 08:44:38 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=l9/Jhhg5WTpTp/kynMobTgRdsKp87/0UUnKvHBfgjJA=; b=gpSuY7iCL5I5pQdbtEnbmzIRBs
 0Qo/42cFT7RvA0/KxGr9hl8ZkqOYo6x5hMxJrThy3ldMFWoqk4UmyxiX7+gKqErZCF1+ShmzoLL1N
 reCwsG3fXMiYDAul0j/8+hhELVQzmwf2Wg5PnMeWxlle/g2Ra+o6EiErxxo2S73W45f0=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jsjDN-00077S-GS; Tue, 07 Jul 2020 08:44:33 +0000
Received: from 54-240-197-234.amazon.com ([54.240.197.234]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jsjDN-0006Li-35; Tue, 07 Jul 2020 08:44:33 +0000
Subject: Re: [PATCH v4 03/10] tools/libxl: add vmtrace_pt_size parameter
To: Jan Beulich <jbeulich@suse.com>
References: <cover.1593519420.git.michal.leszczynski@cert.pl>
 <5f4f4b1afa432258daff43f2dc8119b6a441fff4.1593519420.git.michal.leszczynski@cert.pl>
 <20200702090047.GX735@Air-de-Roger>
 <1505813895.18300396.1593707008144.JavaMail.zimbra@cert.pl>
 <20200703094438.GY735@Air-de-Roger>
 <b5335c2e-da13-28de-002b-e93dd68a0a11@suse.com>
 <20200703101120.GZ735@Air-de-Roger>
 <51ecaf40-8fb5-8454-7055-5af33a47152e@xen.org>
 <d9e604e9-acb7-17df-f0d1-7552dab526c7@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <88892784-0ed6-2594-bef8-fd0ae46c2b17@xen.org>
Date: Tue, 7 Jul 2020 09:44:30 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:68.0)
 Gecko/20100101 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <d9e604e9-acb7-17df-f0d1-7552dab526c7@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 tamas lengyel <tamas.lengyel@intel.com>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Micha=c5=82_Leszczy=c5=84ski?= <michal.leszczynski@cert.pl>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, luwei kang <luwei.kang@intel.com>,
 Anthony PERARD <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi,

On 06/07/2020 09:46, Jan Beulich wrote:
> On 04.07.2020 19:23, Julien Grall wrote:
>> Hi,
>>
>> On 03/07/2020 11:11, Roger Pau Monné wrote:
>>> On Fri, Jul 03, 2020 at 11:56:38AM +0200, Jan Beulich wrote:
>>>> On 03.07.2020 11:44, Roger Pau Monné wrote:
>>>>> On Thu, Jul 02, 2020 at 06:23:28PM +0200, Michał Leszczyński wrote:
>>>>>> ----- 2 lip 2020 o 11:00, Roger Pau Monné roger.pau@citrix.com napisał(a):
>>>>>>
>>>>>>> On Tue, Jun 30, 2020 at 02:33:46PM +0200, Michał Leszczyński wrote:
>>>>>>>> diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
>>>>>>>> index 59bdc28c89..7b8289d436 100644
>>>>>>>> --- a/xen/include/public/domctl.h
>>>>>>>> +++ b/xen/include/public/domctl.h
>>>>>>>> @@ -92,6 +92,7 @@ struct xen_domctl_createdomain {
>>>>>>>>        uint32_t max_evtchn_port;
>>>>>>>>        int32_t max_grant_frames;
>>>>>>>>        int32_t max_maptrack_frames;
>>>>>>>> +    uint8_t vmtrace_pt_order;
>>>>>>>
>>>>>>> I've been thinking about this, and even though this is a domctl (so
>>>>>>> not a stable interface) we might want to consider using a size (or a
>>>>>>> number of pages) here rather than an order. IPT also supports
>>>>>>> TOPA mode (kind of a linked list of buffers) that would allow for
>>>>>>> sizes not rounded to order boundaries to be used, since then only each
>>>>>>> item in the linked list needs to be rounded to an order boundary, so
>>>>>>> you could for example use three 4K pages in TOPA mode AFAICT.
>>>>>>>
>>>>>>> Roger.
>>>>>>
>>>>>> In previous versions it was "size" but it was requested to change it
>>>>>> to "order" in order to shrink the variable size from uint64_t to
>>>>>> uint8_t, because there is limited space for xen_domctl_createdomain
>>>>>> structure.
>>>>>
>>>>> It's likely I'm missing something here, but I wasn't aware
>>>>> xen_domctl_createdomain had any constraints regarding its size. It's
>>>>> currently 48 bytes, which seems fairly small.
>>>>
>>>> Additionally I would guess a uint32_t could do here, if the value
>>>> passed was "number of pages" rather than "number of bytes"?
>> Looking at the rest of the code, the toolstack accepts a 64-bit value,
>> so this would lead to truncation of the buffer size if it is bigger
>> than 2^44 bytes.
>>
>> I agree such a buffer is unlikely, yet I still think we want to harden
>> the code whenever we can. So the solution is to either check for
>> truncation in libxl or directly use a 64-bit field in the domctl.
>>
>> My preference is the latter.
>>
>>>
>>> That could work, not sure if it needs to state however that those will
>>> be 4K pages, since Arm can have a different minimum page size IIRC?
>>> (or that's already the assumption for all number of frames fields)
>>> vmtrace_nr_frames seems fine to me.
>>
>> The hypercalls interface is using the same page granularity as the
>> hypervisor (i.e 4KB).
>>
>> While we already support guests using 64KB page granularity, it is
>> impossible to have a 64KB Arm hypervisor in the current state. You are
>> going to either break existing guests (if you switch to 64KB page
>> granularity for the hypercall ABI) or render them insecure (the minimum
>> mapping in the P2M would be 64KB).
>>
>> DOMCTLs are not stable yet, so using a number of pages is OK. However,
>> I would strongly suggest using a number of bytes for any xl/libxl/stable
>> library interfaces, as this avoids confusion and is also more
>> future-proof.
> 
> If we can't settle on what "page size" means in the public interface
> (which imo is embarrassing), then how about going with number of kb,
> like other memory libxl controls do? (I guess using Mb, in line with
> other config file controls, may end up being too coarse here.) This
> would likely still allow for a 32-bit field to be wide enough.

A 32-bit field would definitely not be able to cover a full address 
space. So would you mind explaining what upper bound you expect here?

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Jul 07 08:51:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jul 2020 08:51:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jsjKG-0003IA-4U; Tue, 07 Jul 2020 08:51:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=F44P=AS=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jsjKF-0003Hq-2u
 for xen-devel@lists.xenproject.org; Tue, 07 Jul 2020 08:51:39 +0000
X-Inumbo-ID: 0de5b5c6-c02f-11ea-bb8b-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0de5b5c6-c02f-11ea-bb8b-bc764e2007e4;
 Tue, 07 Jul 2020 08:51:31 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=5Drgvb1gIv3JgKsOaEtJ6ZyyZTCm0Ly0K4vngJmc3lE=; b=wxb4YCsX5xqhA//sqVSLdj+44
 /u/j/e+/zYWif6suG00LqAlH+NxeDUbMIbzHbKtiWwBFLyZ1Pjg0kiw7Gfo8CpRSzmfBE+qZKbkEZ
 bxJ4pg8zpYIlfDHGGPgmf6AkHQySr0ZY8Z/a4K0Ztb2mgABMI9BxE+d9RJYwPhWtExF0s=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jsjK7-0007G5-CT; Tue, 07 Jul 2020 08:51:31 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jsjK7-0001Qg-0g; Tue, 07 Jul 2020 08:51:31 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jsjK6-0006ua-WF; Tue, 07 Jul 2020 08:51:30 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151685-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 151685: regressions - FAIL
X-Osstest-Failures: qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-amd64-libvirt-vhd:debian-di-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-libvirt-xsm:guest-start:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qcow2:debian-di-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:redhat-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-i386-libvirt-pair:guest-start/debian:fail:regression
 qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-freebsd10-i386:guest-start:fail:regression
 qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:redhat-install:fail:regression
 qemu-mainline:test-amd64-amd64-libvirt:guest-start:fail:regression
 qemu-mainline:test-amd64-amd64-libvirt-xsm:guest-start:fail:regression
 qemu-mainline:test-amd64-i386-libvirt:guest-start:fail:regression
 qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-start:fail:regression
 qemu-mainline:test-armhf-armhf-xl-vhd:debian-di-install:fail:regression
 qemu-mainline:test-amd64-amd64-libvirt-pair:guest-start/debian:fail:regression
 qemu-mainline:test-armhf-armhf-libvirt:guest-start:fail:regression
 qemu-mainline:test-arm64-arm64-libvirt-xsm:guest-start:fail:regression
 qemu-mainline:test-armhf-armhf-libvirt-raw:debian-di-install:fail:regression
 qemu-mainline:test-armhf-armhf-xl-rtds:guest-start:fail:heisenbug
 qemu-mainline:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:heisenbug
 qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: qemuu=eb6490f544388dd24c0d054a96dd304bc7284450
X-Osstest-Versions-That: qemuu=9e3903136d9acde2fb2dd9e967ba928050a6cb4a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 07 Jul 2020 08:51:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151685 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151685/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-ovmf-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-win7-amd64 10 windows-install  fail REGR. vs. 151065
 test-amd64-amd64-libvirt-vhd 10 debian-di-install        fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-win7-amd64 10 windows-install   fail REGR. vs. 151065
 test-amd64-amd64-qemuu-nested-intel 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-libvirt-xsm  12 guest-start              fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-ws16-amd64 10 windows-install  fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-ovmf-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-qemuu-nested-amd 10 debian-hvm-install  fail REGR. vs. 151065
 test-amd64-amd64-xl-qcow2    10 debian-di-install        fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-qemuu-rhel6hvm-amd 10 redhat-install     fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-debianhvm-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-ws16-amd64 10 windows-install   fail REGR. vs. 151065
 test-amd64-i386-libvirt-pair 21 guest-start/debian       fail REGR. vs. 151065
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-freebsd10-i386 11 guest-start            fail REGR. vs. 151065
 test-amd64-i386-qemuu-rhel6hvm-intel 10 redhat-install   fail REGR. vs. 151065
 test-amd64-amd64-libvirt     12 guest-start              fail REGR. vs. 151065
 test-amd64-amd64-libvirt-xsm 12 guest-start              fail REGR. vs. 151065
 test-amd64-i386-libvirt      12 guest-start              fail REGR. vs. 151065
 test-amd64-i386-freebsd10-amd64 11 guest-start           fail REGR. vs. 151065
 test-armhf-armhf-xl-vhd      10 debian-di-install        fail REGR. vs. 151065
 test-amd64-amd64-libvirt-pair 21 guest-start/debian      fail REGR. vs. 151065
 test-armhf-armhf-libvirt     12 guest-start              fail REGR. vs. 151065
 test-arm64-arm64-libvirt-xsm 12 guest-start              fail REGR. vs. 151065
 test-armhf-armhf-libvirt-raw 10 debian-di-install        fail REGR. vs. 151065

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl-rtds     12 guest-start                fail pass in 151669
 test-amd64-amd64-xl-rtds     18 guest-localmigrate/x10     fail pass in 151669

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-rtds    13 migrate-support-check fail in 151669 never pass
 test-armhf-armhf-xl-rtds 14 saverestore-support-check fail in 151669 never pass
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                eb6490f544388dd24c0d054a96dd304bc7284450
baseline version:
 qemuu                9e3903136d9acde2fb2dd9e967ba928050a6cb4a

Last test of basis   151065  2020-06-12 22:27:51 Z   24 days
Failing since        151101  2020-06-14 08:32:51 Z   23 days   29 attempts
Testing same since   151634  2020-07-05 00:36:29 Z    2 days    5 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ahmed Karaman <ahmedkhaledkaraman@gmail.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Bulekov <alxndr@bu.edu>
  Alexey Krasikov <alex-krasikov@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Allan Peramaki <aperamak@pp1.inet.fi>
  Andrew Jones <drjones@redhat.com>
  Ani Sinha <ani.sinha@nutanix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Ard Biesheuvel <ardb@kernel.org>
  Artyom Tarasenko <atar4qemu@gmail.com>
  Aurelien Jarno <aurelien@aurel32.net>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Beata Michalska <beata.michalska@linaro.org>
  Bin Meng <bin.meng@windriver.com>
  Cathy Zhang <cathy.zhang@intel.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Claudio Fontana <cfontana@suse.de>
  Cleber Rosa <crosa@redhat.com>
  Colin Xu <colin.xu@intel.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniele Buono <dbuono@linux.vnet.ibm.com>
  David Carlier <devnexen@gmail.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Derek Su <dereksu@qnap.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Emilio G. Cota <cota@braap.org>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Evgeny Yakovlev <eyakovlev@virtuozzo.com>
  fangying <fangying1@huawei.com>
  Farhan Ali <alifm@linux.ibm.com>
  Finn Thain <fthain@telegraphics.com.au>
  Geoffrey McRae <geoff@hostfission.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Greg Kurz <groug@kaod.org>
  Guenter Roeck <linux@roeck-us.net>
  Gustavo Romero <gromero@linux.ibm.com>
  Halil Pasic <pasic@linux.ibm.com>
  Helge Deller <deller@gmx.de>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Ian Jiang <ianjiang.ict@gmail.com>
  Igor Mammedov <imammedo@redhat.com>
  Janne Grunau <j@jannau.net>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Wang <jasowang@redhat.com>
  Jay Zhou <jianjay.zhou@huawei.com>
  Jean-Christophe Dubois <jcd@tribudubois.net>
  Jessica Clarke <jrtc27@jrtc27.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jingqi Liu <jingqi.liu@intel.com>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Joseph Myers <joseph@codesourcery.com>
  Julio Faracco <jcfaracco@gmail.com>
  Keqian Zhu <zhukeqian1@huawei.com>
  Kevin Wolf <kwolf@redhat.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Leonid Bloch <lb.workbox@gmail.com>
  Leonid Bloch <lbloch@janustech.com>
  Li Qiang <liq3ea@gmail.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  lichun <lichun@ruijie.com.cn>
  Like Xu <like.xu@linux.intel.com>
  Lingfeng Yang <lfy@google.com>
  Liran Alon <liran.alon@oracle.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Lukas Straub <lukasstraub2@web.de>
  Maciej S. Szmigiero <maciej.szmigiero@oracle.com>
  Magnus Damm <magnus.damm@gmail.com>
  Mao Zhongyi <maozhongyi@cmss.chinamobile.com>
  Marcelo Tosatti <mtosatti@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Masahiro Yamada <masahiroy@kernel.org>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Michael S. Tsirkin <mst@redhat.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Raphael Norwitz <raphael.norwitz@nutanix.com>
  Richard Henderson <richard.henderson@linaro.org>
  Richard W.M. Jones <rjones@redhat.com>
  Robert Foley <robert.foley@linaro.org>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Roman Kagan <rkagan@virtuozzo.com>
  Roman Kagan <rvkagan@yandex-team.ru>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Sebastian Rasmussen <sebras@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Tao Xu <tao3.xu@intel.com>
  Thomas Huth <thuth@redhat.com>
  Tong Ho <tong.ho@xilinx.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  WangBowen <bowen.wang@intel.com>
  Wei Huang <wei.huang2@amd.com>
  Wei Wang <wei.w.wang@intel.com>
  Ying Fang <fangying1@huawei.com>
  Yoshinori Sato <ysato@users.sourceforge.jp>
  Yuri Benditovich <yuri.benditovich@daynix.com>
  Zhang Chen <chen.zhang@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 17819 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Jul 07 08:57:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jul 2020 08:57:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jsjPl-0003UM-Tn; Tue, 07 Jul 2020 08:57:21 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YS/2=AS=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1jsjPk-0003UH-EQ
 for xen-devel@lists.xen.org; Tue, 07 Jul 2020 08:57:20 +0000
X-Inumbo-ID: dca8dd0c-c02f-11ea-8d34-12813bfff9fa
Received: from EUR05-AM6-obe.outbound.protection.outlook.com (unknown
 [40.107.22.46]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id dca8dd0c-c02f-11ea-8d34-12813bfff9fa;
 Tue, 07 Jul 2020 08:57:19 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com; 
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=/sKcIoIro8jAD71R2JkQzhnw1ZF/34x5Ir5puXLGjCQ=;
 b=vWir7w8QXznbzqvoyJukaSLfeTbCbzH5PGvn7XybeaZdu46ZsE9XdrZv5O8Lilm0rs0UmBbiJGJgvn/6LqXcVqO5pM/ZroQVY+omvqyuxR3nYuFkcfvBht0+mEOA6oHa4MTPIg8WO963wZ/fL+1m1ZvtfG7OM+Lr1gTN5Zp+Bx8=
Received: from AM6P195CA0063.EURP195.PROD.OUTLOOK.COM (2603:10a6:209:87::40)
 by VE1PR08MB4717.eurprd08.prod.outlook.com (2603:10a6:802:ad::20) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3153.28; Tue, 7 Jul
 2020 08:57:17 +0000
Received: from AM5EUR03FT021.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:209:87:cafe::26) by AM6P195CA0063.outlook.office365.com
 (2603:10a6:209:87::40) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3153.21 via Frontend
 Transport; Tue, 7 Jul 2020 08:57:16 +0000
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xen.org; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;lists.xen.org; dmarc=bestguesspass action=none
 header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT021.mail.protection.outlook.com (10.152.16.105) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3153.24 via Frontend Transport; Tue, 7 Jul 2020 08:57:16 +0000
Received: ("Tessian outbound f7489b7e84a7:v62");
 Tue, 07 Jul 2020 08:57:15 +0000
X-CheckRecipientChecked: true
X-CR-MTA-CID: 70cb4ee3251b1d0c
X-CR-MTA-TID: 64aa7808
Received: from 11ffd43a1040.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 C9A3401A-1131-49B7-925F-E9C0EFB42405.1; 
 Tue, 07 Jul 2020 08:57:10 +0000
Received: from EUR03-DB5-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 11ffd43a1040.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Tue, 07 Jul 2020 08:57:10 +0000
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=OX68sRS29WL05KYMF9+60Syr3IGMiIe55Ng4ZqFt71qQRz3WmLT2NBYtPaEXki8hDKNDXbAj5AHUZ34S9DMCZnfUVFIqNiVUtQHMZIVIq1v5Ixa6PhMxMrS8m00q2cLwxhwosnqyPiVf2Gsp+K0faTarW6lwVqeS436M1so0GshVjSexOZHtuL3/MHZeKzNG0JNhFX4+GVNWaDxVCfALFMIow9S8ECygBA1/B4KSsUITmi0FFSqVu2yvFUkJmrzC/VOQRSCnVBWhVgeGJZTYhwVLFwTYsQzeXh4TXEEeNp3DMDfL2C+gb4lTcBnv+bqHFxmLzqOjcNeZB0bH3FWcaQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; 
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=/sKcIoIro8jAD71R2JkQzhnw1ZF/34x5Ir5puXLGjCQ=;
 b=ZWV3IYG0jG3x1BuqMsMZJvIaPh/0ZK26B17Q6XdxomKiYp0r8DVw3DpeePwWoq3+pJSUI9JhSHELwRX7roRqWBykMkcdpCIauWuC52ixNLXSboNKHdEA6zoX801WzkVm2+XHHXYFxicF9u8v+aP0cAb/T1xDVjRuw4BEtQ96+0/nb1tH5MzEa7MHttRwAVlYVM0PM+fVSFW/TnxwNGGX2TbD50qNQBA0uNPKnEJLwckWvV4rWZOXoEXCjqz/h0W7Cx8XiOBo7iSMwkwZdzZTM14XsD4r5DEJ9o79hGtauKrYA9IqY4pKBat599DukC/zgWRxPC1Q/B1xYKrKT04xtw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com; 
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=/sKcIoIro8jAD71R2JkQzhnw1ZF/34x5Ir5puXLGjCQ=;
 b=vWir7w8QXznbzqvoyJukaSLfeTbCbzH5PGvn7XybeaZdu46ZsE9XdrZv5O8Lilm0rs0UmBbiJGJgvn/6LqXcVqO5pM/ZroQVY+omvqyuxR3nYuFkcfvBht0+mEOA6oHa4MTPIg8WO963wZ/fL+1m1ZvtfG7OM+Lr1gTN5Zp+Bx8=
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DBBPR08MB4821.eurprd08.prod.outlook.com (2603:10a6:10:d5::22) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3153.24; Tue, 7 Jul
 2020 08:57:09 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::4001:43ad:d113:46a8]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::4001:43ad:d113:46a8%5]) with mapi id 15.20.3153.029; Tue, 7 Jul 2020
 08:57:09 +0000
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: "Manikandan Chockalingam (RBEI/ECF3)"
 <Manikandan.Chockalingam@in.bosch.com>
Subject: Re: [BUG] Xen build for RCAR failing
Thread-Topic: [BUG] Xen build for RCAR failing
Thread-Index: AdZUKc5JeR7gPpESR52uLkZK1kYwOwAEsnEA
Date: Tue, 7 Jul 2020 08:57:09 +0000
Message-ID: <1D0E7281-95D7-482E-BF6D-EE5B1FEE4876@arm.com>
References: <1b60ed1cd7834ed5957a2b4870602073@in.bosch.com>
In-Reply-To: <1b60ed1cd7834ed5957a2b4870602073@in.bosch.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
Authentication-Results-Original: in.bosch.com; dkim=none (message not signed)
 header.d=none; in.bosch.com;
 dmarc=none action=none header.from=arm.com; 
x-originating-ip: [82.24.250.194]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 4873d8e8-c3e2-40a6-20dc-08d82253bf57
x-ms-traffictypediagnostic: DBBPR08MB4821:|VE1PR08MB4717:
X-Microsoft-Antispam-PRVS: <VE1PR08MB4717BA89AA82BA858A67D7599D660@VE1PR08MB4717.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:8273;OLM:8273;
x-forefront-prvs: 0457F11EAF
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original: 4/yZjpNpOhhZexbSnTRTtuLjulDJBER+lvam6IgYL1rEZxoQyZQyf/TWHtIa73zEUzdJyv+2oDlK9Xc++5kC4q9sF3YbIUrto25pzyViurw8LcqCCqYgIrIOEnBFklMf83ggGvqc8j1/RK5b2tPPTwRoZV+U1jjpmy/TxxWgtiv7QjtydYtaf/tZk3Ot3p0QTWI1vY1ECSPkqS3iTEFPxIvuY7N/iB1BN2A2Rv/fcyCJ0uDsI/o3rAgsNnSSZk4u+6fZfUvzAyNyW/nE1FU7MaXMTIwzldUXnTpt3yIJ3RyTxczQgNlJBg25CLO45HSK4nipVURpORuCuB0FP/gRqfOhq8iPHWjK1dScBjmjorl8sWtFSQYyARWOrgiW9RuWVXtuLGLDdteMmbtKVyTTAQ==
X-Forefront-Antispam-Report-Untrusted: CIP:255.255.255.255; CTRY:; LANG:en;
 SCL:1; SRV:; IPV:NLI; SFV:NSPM; H:DB7PR08MB3689.eurprd08.prod.outlook.com;
 PTR:; CAT:NONE; SFTY:;
 SFS:(4636009)(396003)(376002)(346002)(136003)(39860400002)(366004)(83380400001)(71200400001)(2616005)(478600001)(54906003)(316002)(33656002)(8676002)(6486002)(966005)(4326008)(64756008)(66556008)(66476007)(66446008)(66946007)(8936002)(6506007)(36756003)(186003)(6512007)(5660300002)(86362001)(26005)(76116006)(2906002)(91956017)(53546011)(6916009);
 DIR:OUT; SFP:1101; 
x-ms-exchange-antispam-messagedata: ZayKnGFHRjxhuk/S28EXwMT5Fzz+T/dsX2mvRpv/1WLCFbsiXChzt7ZOOe6ZZc17gg9yvREnCDS/EasENpIt4niD96sLSPiZoMNLClAvth7AqPpsZrsMJ8d4EdAVGf0Vrq5sgUE9CQCpv86nbo6wnITnx5INq+hFMh7rqEWzsuGRT3FSz1pRwoJLTh5wQJrUFmMj2XYzuvBB2oA9WWaAtwsfmkCQIRM+kql4kyBloQ8lw2U3VjBdUmDqkhfNJUGi8ypFHYUch7zYIEJtlDI4hJtTDmydyq8DIfQwXm85cnekI1GVWZ6kTeEFFKf2mMJKSDL6jtWvb451EuZ0V/0G3zEwUiudwXJd9ymzJv+JIbsKciHM2llC1Bc+zrklrR5qJfSTzFhLI2EWt5fKHRPiaUqVnQLMdDuifdF82Ln62PZ3X1mledrwWF11F8DIglRNpxgwFvgMLPay4h1IfmMJagJxA2cxYPfm+sUN+nCp86g=
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="us-ascii"
Content-ID: <B31A79F50FB5534181755F7D6D684453@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBBPR08MB4821
Original-Authentication-Results: in.bosch.com; dkim=none (message not signed)
 header.d=none; in.bosch.com;
 dmarc=none action=none header.from=arm.com; 
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped: AM5EUR03FT021.eop-EUR03.prod.protection.outlook.com
X-Forefront-Antispam-Report: CIP:63.35.35.123; CTRY:IE; LANG:en; SCL:1; SRV:;
 IPV:CAL; SFV:NSPM; H:64aa7808-outbound-1.mta.getcheckrecipient.com;
 PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com; CAT:NONE; SFTY:;
 SFS:(4636009)(346002)(376002)(39860400002)(136003)(396003)(46966005)(478600001)(8936002)(186003)(26005)(356005)(966005)(81166007)(6512007)(82310400002)(47076004)(86362001)(6506007)(82740400003)(6862004)(53546011)(6486002)(36756003)(54906003)(2906002)(5660300002)(83380400001)(8676002)(316002)(36906005)(70206006)(70586007)(33656002)(4326008)(2616005)(336012);
 DIR:OUT; SFP:1101; 
X-MS-Office365-Filtering-Correlation-Id-Prvs: 13c5bc66-201b-4f90-6962-08d82253bb46
X-Forefront-PRVS: 0457F11EAF
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: 6fItP/KM3IOwE3s3k7zvbJBbFhkFTYOaaf3xIGbYwnf0q1wLHGjN0OScowZcQ5wAAMajbsFUmYPTZKfdpKMbJnpZdV/b2ryLR0vN6DQhgzhY1Bg50MVMabfEdth9aFy4tL7qHOtE2PAkeCEeFPIWwh0oRLQQlcB6TnXRf4mGdZMxfARy+Q6+e1T7rMT+1B4+pS10kT3OeeW2/9pmakDSLMWoBf8XpjmeE6i7533OG5Armji1Pc3ioUh752A2DIdNg6E6BgIfAMB3tfbe3oorc2/bsn4mnmv7BnqqpF7hQO+8FIkeiFgH0/z2oQmkV01uDVv+w/0IH34HnMCu6zBrmIzLwZIxsG6036jOSrpp1fT4IoRIKX7B7Kz+mM845JMIo3tvMucr1JTMpahQf7D0wx5LZ2MrZyhg71AVsNdxU+9w4HdcYsXfmUj0S0KwAIFkheAyEDVhLF+ECxcLtXnDEZDrJZBXhNJ7LtTxYsRSuvw=
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 07 Jul 2020 08:57:16.0143 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 4873d8e8-c3e2-40a6-20dc-08d82253bf57
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d; Ip=[63.35.35.123];
 Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource: AM5EUR03FT021.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VE1PR08MB4717
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: nd <nd@arm.com>, "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>



> On 7 Jul 2020, at 08:58, Manikandan Chockalingam (RBEI/ECF3) <Manikandan.Chockalingam@in.bosch.com> wrote:
>
> Hello Team,
>
> I am trying to build the Xen hypervisor for RCAR, following the steps at https://wiki.xenproject.org/wiki/Xen_ARM_with_Virtualization_Extensions/Salvator-X.
>
> But I am facing the following issues:
> 1.      SRC_URI="http://v3.sk/~lkundrak/dev86/archive/Dev86src-${PV}.tar.gz in recipes-extended/dev86/dev86_0.16.20.bb is not accessible
> Modification done: SRC_URI=https://src.fedoraproject.org/lookaside/extras/dev86/Dev86src-0.16.20.tar.gz/567cf460d132f9d8775dd95f9208e49a/Dev86src-${PV}.tar.gz
> 2.      LIC_FILES_CHKSUM changed in recipes-extended/xen/xen.inc
> 3.      QA Issue: xen: Files/directories were installed but not shipped in any package:
> /usr/bin/vchan-socket-proxy
>   /usr/sbin/xenmon
>   /usr/sbin/xenhypfs
>   /usr/lib/libxenfsimage.so.4.14.0
>   /usr/lib/libxenhypfs.so.1
>   /usr/lib/libxenfsimage.so
>   /usr/lib/libxenhypfs.so.1.0
>   /usr/lib/libxenfsimage.so.4.14
>   /usr/lib/libxenhypfs.so
>   /usr/lib/pkgconfig
>   /usr/lib/xen/bin/depriv-fd-checker
>   /usr/lib/pkgconfig/xenlight.pc
>   /usr/lib/pkgconfig/xenguest.pc
>   /usr/lib/pkgconfig/xenhypfs.pc
>   /usr/lib/pkgconfig/xlutil.pc
>   /usr/lib/pkgconfig/xentoolcore.pc
>   /usr/lib/pkgconfig/xentoollog.pc
>   /usr/lib/pkgconfig/xenstore.pc
>   /usr/lib/pkgconfig/xencall.pc
>   /usr/lib/pkgconfig/xencontrol.pc
>   /usr/lib/pkgconfig/xendevicemodel.pc
>   /usr/lib/pkgconfig/xenstat.pc
>   /usr/lib/pkgconfig/xengnttab.pc
>   /usr/lib/pkgconfig/xenevtchn.pc
>   /usr/lib/pkgconfig/xenvchan.pc
>   /usr/lib/pkgconfig/xenforeignmemory.pc
>   /usr/lib/xenfsimage/fat/fsimage.so
>   /usr/lib/xenfsimage/iso9660/fsimage.so
>   /usr/lib/xenfsimage/reiserfs/fsimage.so
>   /usr/lib/xenfsimage/ufs/fsimage.so
>   /usr/lib/xenfsimage/ext2fs-lib/fsimage.so
>   /usr/lib/xenfsimage/zfs/fsimage.so
> Please set FILES such that these items are packaged. Alternatively if they are unneeded, avoid installing them or delete them within do_install.
> xen: 32 installed and not shipped files. [installed-vs-shipped]
> ERROR: xen-unstable+gitAUTOINC+be63d9d47f-r0 do_package: Fatal QA errors found, failing task.
> ERROR: xen-unstable+gitAUTOINC+be63d9d47f-r0 do_package: Function failed: do_package
> ERROR: Logfile of failure stored in: /home/manikandan/yocto_2.19/build/build/tmp/work/aarch64-poky-linux/xen/unstable+gitAUTOINC+be63d9d47f-r0/temp/log.do_package.17889
> ERROR: Task 13 (/home/manikandan/yocto_2.19/build/meta-virtualization/recipes-extended/xen/xen_git.bb, do_package) failed with exit code '1'

The configuration on that page uses a rather old release of Yocto.
I would suggest switching to dunfell, which uses Xen 4.12.

Current Yocto master is not building at the moment.
Christopher Clark has done some work on it here [1] in meta-virtualization,
but this is not merged yet.

If you try to build Xen master by modifying a recipe, you will hit issues:
newly added components such as hypfs create the errors you see when Yocto
checks that everything installed was also shipped.

Bertrand

[1] https://lists.yoctoproject.org/g/meta-virtualization/message/5495
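As a stopgap until the meta-virtualization fix lands, an installed-vs-shipped QA error of this kind can usually be silenced by packaging the new files locally. Below is a minimal, untested sketch of a `xen_git.bbappend`; the file list is taken from the error output above, but the exact split between packages is an assumption, not the upstream fix:

```bitbake
# xen_git.bbappend -- hypothetical local workaround, not the merged solution.
# Ship the Xen 4.14 artifacts (hypfs, fsimage, vchan-socket-proxy) that the
# older recipe does not yet know about.
FILES_${PN} += " \
    ${bindir}/vchan-socket-proxy \
    ${sbindir}/xenmon \
    ${sbindir}/xenhypfs \
    ${libdir}/libxenfsimage.so.* \
    ${libdir}/libxenhypfs.so.* \
    ${libdir}/xen/bin/depriv-fd-checker \
    ${libdir}/xenfsimage \
"
# Unversioned .so symlinks and pkgconfig files conventionally go to ${PN}-dev:
FILES_${PN}-dev += " \
    ${libdir}/libxenfsimage.so \
    ${libdir}/libxenhypfs.so \
    ${libdir}/pkgconfig \
"
```

Alternatively, as the QA message itself suggests, deleting the unneeded files in a `do_install_append()` also satisfies the check.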



From xen-devel-bounces@lists.xenproject.org Tue Jul 07 09:11:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jul 2020 09:11:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jsjcy-00059e-AS; Tue, 07 Jul 2020 09:11:00 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=kqME=AS=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jsjcx-00059Z-93
 for xen-devel@lists.xenproject.org; Tue, 07 Jul 2020 09:10:59 +0000
X-Inumbo-ID: c4b13404-c031-11ea-bb8b-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c4b13404-c031-11ea-bb8b-bc764e2007e4;
 Tue, 07 Jul 2020 09:10:58 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 662EBAF87;
 Tue,  7 Jul 2020 09:10:57 +0000 (UTC)
Subject: Re: [PATCH v4 03/10] tools/libxl: add vmtrace_pt_size parameter
To: Julien Grall <julien@xen.org>
References: <cover.1593519420.git.michal.leszczynski@cert.pl>
 <5f4f4b1afa432258daff43f2dc8119b6a441fff4.1593519420.git.michal.leszczynski@cert.pl>
 <20200702090047.GX735@Air-de-Roger>
 <1505813895.18300396.1593707008144.JavaMail.zimbra@cert.pl>
 <20200703094438.GY735@Air-de-Roger>
 <b5335c2e-da13-28de-002b-e93dd68a0a11@suse.com>
 <20200703101120.GZ735@Air-de-Roger>
 <51ecaf40-8fb5-8454-7055-5af33a47152e@xen.org>
 <d9e604e9-acb7-17df-f0d1-7552dab526c7@suse.com>
 <88892784-0ed6-2594-bef8-fd0ae46c2b17@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <a13451d6-d6b5-6d86-aeb0-8985db730866@suse.com>
Date: Tue, 7 Jul 2020 11:10:56 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <88892784-0ed6-2594-bef8-fd0ae46c2b17@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 tamas lengyel <tamas.lengyel@intel.com>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Micha=c5=82_Leszczy=c5=84ski?= <michal.leszczynski@cert.pl>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, luwei kang <luwei.kang@intel.com>,
 Anthony PERARD <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 07.07.2020 10:44, Julien Grall wrote:
> Hi,
> 
> On 06/07/2020 09:46, Jan Beulich wrote:
>> On 04.07.2020 19:23, Julien Grall wrote:
>>> Hi,
>>>
>>> On 03/07/2020 11:11, Roger Pau Monné wrote:
>>>> On Fri, Jul 03, 2020 at 11:56:38AM +0200, Jan Beulich wrote:
>>>>> On 03.07.2020 11:44, Roger Pau Monné wrote:
>>>>>> On Thu, Jul 02, 2020 at 06:23:28PM +0200, Michał Leszczyński wrote:
>>>>>>> ----- 2 lip 2020 o 11:00, Roger Pau Monné roger.pau@citrix.com napisał(a):
>>>>>>>
>>>>>>>> On Tue, Jun 30, 2020 at 02:33:46PM +0200, Michał Leszczyński wrote:
>>>>>>>>> diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
>>>>>>>>> index 59bdc28c89..7b8289d436 100644
>>>>>>>>> --- a/xen/include/public/domctl.h
>>>>>>>>> +++ b/xen/include/public/domctl.h
>>>>>>>>> @@ -92,6 +92,7 @@ struct xen_domctl_createdomain {
>>>>>>>>>        uint32_t max_evtchn_port;
>>>>>>>>>        int32_t max_grant_frames;
>>>>>>>>>        int32_t max_maptrack_frames;
>>>>>>>>> +    uint8_t vmtrace_pt_order;
>>>>>>>>
>>>>>>>> I've been thinking about this, and even though this is a domctl (so
>>>>>>>> not a stable interface) we might want to consider using a size (or a
>>>>>>>> number of pages) here rather than an order. IPT also supports
>>>>>>>> TOPA mode (kind of a linked list of buffers) that would allow for
>>>>>>>> sizes not rounded to order boundaries to be used, since then only each
>>>>>>>> item in the linked list needs to be rounded to an order boundary, so
>>>>>>>> you could for example use three 4K pages in TOPA mode AFAICT.
>>>>>>>>
>>>>>>>> Roger.
>>>>>>>
>>>>>>> In previous versions it was "size" but it was requested to change it
>>>>>>> to "order" in order to shrink the variable size from uint64_t to
>>>>>>> uint8_t, because there is limited space for xen_domctl_createdomain
>>>>>>> structure.
>>>>>>
>>>>>> It's likely I'm missing something here, but I wasn't aware
>>>>>> xen_domctl_createdomain had any constraints regarding its size. It's
>>>>>> currently 48 bytes, which seems fairly small.
>>>>>
>>>>> Additionally I would guess a uint32_t could do here, if the value
>>>>> passed was "number of pages" rather than "number of bytes"?
>>> Looking at the rest of the code, the toolstack accepts a 64-bit value.
>>> So this would lead to truncation of the buffer if it is bigger than 2^44
>>> bytes.
>>>
>>> I agree such a buffer is unlikely, yet I still think we want to harden the
>>> code whenever we can. So the solution is either to check for truncation in
>>> libxl or to use 64-bit directly in the domctl.
>>>
>>> My preference is the latter.
>>>
>>>>
>>>> That could work, not sure if it needs to state however that those will
>>>> be 4K pages, since Arm can have a different minimum page size IIRC?
>>>> (or that's already the assumption for all number of frames fields)
>>>> vmtrace_nr_frames seems fine to me.
>>>
>>> The hypercalls interface is using the same page granularity as the
>>> hypervisor (i.e 4KB).
>>>
>>> While we already support guests using 64KB page granularity, it is
>>> impossible to have a 64KB Arm hypervisor in the current state. You would
>>> either break existing guests (if you switch to 64KB page granularity for
>>> the hypercall ABI) or render them insecure (the minimum mapping in the
>>> P2M would be 64KB).
>>>
>>> DOMCTLs are not stable yet, so using a number of pages is OK. However, I
>>> would strongly suggest using a number of bytes for any xl/libxl/stable
>>> library interfaces, as this avoids confusion and is also more
>>> future-proof.
>>
>> If we can't settle on what "page size" means in the public interface
>> (which imo is embarrassing), then how about going with number of kb,
>> like other memory libxl controls do? (I guess using Mb, in line with
>> other config file controls, may end up being too coarse here.) This
>> would likely still allow for a 32-bit field to be wide enough.
> 
> A 32-bit field would definitely not be able to cover a full address 
> space. So would you mind explaining what upper bound you expect here?

Do you foresee a need for buffer sizes of 4Tb and up?

Jan


From xen-devel-bounces@lists.xenproject.org Tue Jul 07 09:16:35 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jul 2020 09:16:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jsjiI-0005Kv-VY; Tue, 07 Jul 2020 09:16:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=9WyG=AS=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jsjiH-0005Kq-GV
 for xen-devel@lists.xenproject.org; Tue, 07 Jul 2020 09:16:29 +0000
X-Inumbo-ID: 89ef5c1e-c032-11ea-8496-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 89ef5c1e-c032-11ea-8496-bc764e2007e4;
 Tue, 07 Jul 2020 09:16:28 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=qSabd6FXwgWNjWONeiNBfL6eI+4HbpsI+k1Q3590rBg=; b=STEvShSs5p5Vv6hOszNa5HcwWo
 RMb5A925cnUOgaCJQTwqybMR2YiGxznwPRC4eGnn2QknX1QW0vYA+LmCb/z5iOenP1zVxMFJFIror
 5rLy2+mNSGrPJnPSr/3FdVewyRLIb4tJFgmfApNF6wSosvNNFq/o0MR7WiwSYOHy8NVE=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jsjiC-0007kA-0I; Tue, 07 Jul 2020 09:16:24 +0000
Received: from 54-240-197-234.amazon.com ([54.240.197.234]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jsjiB-00083V-Iw; Tue, 07 Jul 2020 09:16:23 +0000
Subject: Re: [PATCH v4 03/10] tools/libxl: add vmtrace_pt_size parameter
To: Jan Beulich <jbeulich@suse.com>
References: <cover.1593519420.git.michal.leszczynski@cert.pl>
 <5f4f4b1afa432258daff43f2dc8119b6a441fff4.1593519420.git.michal.leszczynski@cert.pl>
 <20200702090047.GX735@Air-de-Roger>
 <1505813895.18300396.1593707008144.JavaMail.zimbra@cert.pl>
 <20200703094438.GY735@Air-de-Roger>
 <b5335c2e-da13-28de-002b-e93dd68a0a11@suse.com>
 <20200703101120.GZ735@Air-de-Roger>
 <51ecaf40-8fb5-8454-7055-5af33a47152e@xen.org>
 <d9e604e9-acb7-17df-f0d1-7552dab526c7@suse.com>
 <88892784-0ed6-2594-bef8-fd0ae46c2b17@xen.org>
 <a13451d6-d6b5-6d86-aeb0-8985db730866@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <ab992813-4584-f8e0-b90a-7a587c396bae@xen.org>
Date: Tue, 7 Jul 2020 10:16:20 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:68.0)
 Gecko/20100101 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <a13451d6-d6b5-6d86-aeb0-8985db730866@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 tamas lengyel <tamas.lengyel@intel.com>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Micha=c5=82_Leszczy=c5=84ski?= <michal.leszczynski@cert.pl>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, luwei kang <luwei.kang@intel.com>,
 Anthony PERARD <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>



On 07/07/2020 10:10, Jan Beulich wrote:
> On 07.07.2020 10:44, Julien Grall wrote:
>> Hi,
>>
>> On 06/07/2020 09:46, Jan Beulich wrote:
>>> On 04.07.2020 19:23, Julien Grall wrote:
>>>> Hi,
>>>>
>>>> On 03/07/2020 11:11, Roger Pau Monné wrote:
>>>>> On Fri, Jul 03, 2020 at 11:56:38AM +0200, Jan Beulich wrote:
>>>>>> On 03.07.2020 11:44, Roger Pau Monné wrote:
>>>>>>> On Thu, Jul 02, 2020 at 06:23:28PM +0200, Michał Leszczyński wrote:
>>>>>>>> ----- 2 lip 2020 o 11:00, Roger Pau Monné roger.pau@citrix.com napisał(a):
>>>>>>>>
>>>>>>>>> On Tue, Jun 30, 2020 at 02:33:46PM +0200, Michał Leszczyński wrote:
>>>>>>>>>> diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
>>>>>>>>>> index 59bdc28c89..7b8289d436 100644
>>>>>>>>>> --- a/xen/include/public/domctl.h
>>>>>>>>>> +++ b/xen/include/public/domctl.h
>>>>>>>>>> @@ -92,6 +92,7 @@ struct xen_domctl_createdomain {
>>>>>>>>>>         uint32_t max_evtchn_port;
>>>>>>>>>>         int32_t max_grant_frames;
>>>>>>>>>>         int32_t max_maptrack_frames;
>>>>>>>>>> +    uint8_t vmtrace_pt_order;
>>>>>>>>>
>>>>>>>>> I've been thinking about this, and even though this is a domctl (so
>>>>>>>>> not a stable interface) we might want to consider using a size (or a
>>>>>>>>> number of pages) here rather than an order. IPT also supports
>>>>>>>>> TOPA mode (kind of a linked list of buffers) that would allow for
>>>>>>>>> sizes not rounded to order boundaries to be used, since then only each
>>>>>>>>> item in the linked list needs to be rounded to an order boundary, so
>>>>>>>>> you could for example use three 4K pages in TOPA mode AFAICT.
>>>>>>>>>
>>>>>>>>> Roger.
>>>>>>>>
>>>>>>>> In previous versions it was "size", but it was requested to change it
>>>>>>>> to "order" in order to shrink the variable from uint64_t to
>>>>>>>> uint8_t, because there is limited space in the xen_domctl_createdomain
>>>>>>>> structure.
>>>>>>>
>>>>>>> It's likely I'm missing something here, but I wasn't aware
>>>>>>> xen_domctl_createdomain had any constraints regarding its size. It's
>>>>>>> currently 48 bytes, which seems fairly small.
>>>>>>
>>>>>> Additionally I would guess a uint32_t could do here, if the value
>>>>>> passed was "number of pages" rather than "number of bytes"?
>>>> Looking at the rest of the code, the toolstack accepts a 64-bit value.
>>>> So this would lead to truncation of the buffer if it is bigger than 2^44
>>>> bytes.
>>>>
>>>> I agree such a buffer is unlikely, yet I still think we want to harden the
>>>> code whenever we can. So the solution is either to check for truncation
>>>> in libxl or to use 64-bit directly in the domctl.
>>>>
>>>> My preference is the latter.
>>>>
>>>>>
>>>>> That could work, not sure if it needs to state however that those will
>>>>> be 4K pages, since Arm can have a different minimum page size IIRC?
>>>>> (or that's already the assumption for all number of frames fields)
>>>>> vmtrace_nr_frames seems fine to me.
>>>>
>>>> The hypercalls interface is using the same page granularity as the
>>>> hypervisor (i.e 4KB).
>>>>
>>>> While we already support guests using 64KB page granularity, it is
>>>> impossible to have a 64KB Arm hypervisor in the current state. You would
>>>> either break existing guests (if you switch to 64KB page
>>>> granularity for the hypercall ABI) or render them insecure (the minimum
>>>> mapping in the P2M would be 64KB).
>>>>
>>>> DOMCTLs are not stable yet, so using a number of pages is OK. However, I
>>>> would strongly suggest using a number of bytes for any xl/libxl/stable
>>>> library interfaces, as this avoids confusion and is also more
>>>> future-proof.
>>>
>>> If we can't settle on what "page size" means in the public interface
>>> (which imo is embarrassing), then how about going with number of kb,
>>> like other memory libxl controls do? (I guess using Mb, in line with
>>> other config file controls, may end up being too coarse here.) This
>>> would likely still allow for a 32-bit field to be wide enough.
>>
>> A 32-bit field would definitely not be able to cover a full address
>> space. So would you mind explaining what upper bound you expect here?
> 
> Do you foresee a need for buffer sizes of 4TB and up?

Not that I am aware of. However, I think the question was worth asking 
given that "wide enough" can mean anything.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Jul 07 09:22:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jul 2020 09:22:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jsjo6-0006A2-Lt; Tue, 07 Jul 2020 09:22:30 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=F44P=AS=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jsjo5-00069x-9P
 for xen-devel@lists.xenproject.org; Tue, 07 Jul 2020 09:22:29 +0000
X-Inumbo-ID: 5ee32c48-c033-11ea-8d3a-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 5ee32c48-c033-11ea-8d3a-12813bfff9fa;
 Tue, 07 Jul 2020 09:22:25 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=uztWroc4VY1vjISWw9QvvJfu9s3qMzRJkrl7AnFAuK4=; b=jq7A0PGaBZHoqgYT2EicxSY3/
 YpuZgLIGel2rXk62VwN5v1KnfUpyaChX6KT+cbZ5NfD+weKJLis84+Mp/rHi6T/5rizuXWDqBIRZ6
 +aHTi2PJFVk1Uhz/B8P3PnbdpJ3Ku3cMfC1dGG2JKIo4LTkAKz4UvNludd/yt5Y0qQAw0=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jsjo1-0007qm-4M; Tue, 07 Jul 2020 09:22:25 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jsjo0-0003kq-RP; Tue, 07 Jul 2020 09:22:24 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jsjo0-0000gb-Qk; Tue, 07 Jul 2020 09:22:24 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151690-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 151690: regressions - FAIL
X-Osstest-Failures: linux-linus:test-arm64-arm64-libvirt-xsm:guest-start/debian.repeat:fail:regression
 linux-linus:build-i386-pvops:kernel-build:fail:regression
 linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:regression
 linux-linus:test-armhf-armhf-xl-rtds:guest-start.2:fail:allowable
 linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-pair:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-examine:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
X-Osstest-Versions-This: linux=bfe91da29bfad9941d5d703d45e29f0812a20724
X-Osstest-Versions-That: linux=1b5044021070efa3259f3e9548dc35d1eb6aa844
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 07 Jul 2020 09:22:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151690 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151690/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-libvirt-xsm 16 guest-start/debian.repeat fail REGR. vs. 151214
 build-i386-pvops              6 kernel-build             fail REGR. vs. 151214
 test-armhf-armhf-xl-arndale   7 xen-boot                 fail REGR. vs. 151214

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds     17 guest-start.2            fail REGR. vs. 151214

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-examine       1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 151214
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 151214
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 151214
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 151214
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 151214
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 151214
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass

version targeted for testing:
 linux                bfe91da29bfad9941d5d703d45e29f0812a20724
baseline version:
 linux                1b5044021070efa3259f3e9548dc35d1eb6aa844

Last test of basis   151214  2020-06-18 02:27:46 Z   19 days
Failing since        151236  2020-06-19 19:10:35 Z   17 days   26 attempts
Testing same since   151690  2020-07-06 23:10:49 Z    0 days    1 attempts

------------------------------------------------------------
577 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             fail    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         blocked 
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 28056 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Jul 07 09:25:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jul 2020 09:25:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jsjqn-0006J5-9J; Tue, 07 Jul 2020 09:25:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+gHK=AS=in.bosch.com=manikandan.chockalingam@srs-us1.protection.inumbo.net>)
 id 1jsjql-0006J0-FN
 for xen-devel@lists.xen.org; Tue, 07 Jul 2020 09:25:15 +0000
X-Inumbo-ID: c1b511ec-c033-11ea-bca7-bc764e2007e4
Received: from de-out1.bosch-org.com (unknown [139.15.230.186])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c1b511ec-c033-11ea-bca7-bc764e2007e4;
 Tue, 07 Jul 2020 09:25:12 +0000 (UTC)
Received: from fe0vm1649.rbesz01.com
 (lb41g3-ha-dmz-psi-sl1-mailout.fe.ssn.bosch.com [139.15.230.188])
 by fe0vms0187.rbdmz01.com (Postfix) with ESMTPS id 4B1H8k5ssjz1XLDRG;
 Tue,  7 Jul 2020 11:25:10 +0200 (CEST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=in.bosch.com;
 s=key1-intmail; t=1594113910;
 bh=A4qWJiI7Ti4LOjdFz+6O+vb2tWItS3Ce/gtvptC5txA=; l=10;
 h=From:Subject:From:Reply-To:Sender;
 b=nWIFqBgX97OYYZONIEVtYxo4hfMyhhvUfGl1WpYxGghdUc44zTScp3kdENeQcYBgN
 ksSG8kxhURGuXG9bWEjYxPm7SLjMADpa81kE4q21DF0a77ojUen27o/iwcmCK5vUcm
 CNPib0wvGF8P+9X4NEgcryNGLusva9YACLKCnF1M=
Received: from si0vm4642.rbesz01.com (unknown [10.58.172.176])
 by fe0vm1649.rbesz01.com (Postfix) with ESMTPS id 4B1H8k5XYGz3Kf;
 Tue,  7 Jul 2020 11:25:10 +0200 (CEST)
X-AuditID: 0a3aad12-235ff700000028b1-8e-5f043f762674
Received: from si0vm1950.rbesz01.com ( [10.58.173.29])
 (using TLS with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (Client did not present a certificate)
 by si0vm4642.rbesz01.com (SMG Outbound) with SMTP id CA.3C.10417.67F340F5;
 Tue,  7 Jul 2020 11:25:10 +0200 (CEST)
Received: from FE-MBX2060.de.bosch.com (fe-mbx2060.de.bosch.com [10.3.231.165])
 by si0vm1950.rbesz01.com (Postfix) with ESMTPS id 4B1H8k4cQLzW7P;
 Tue,  7 Jul 2020 11:25:10 +0200 (CEST)
Received: from SGPMBX2022.APAC.bosch.com (10.187.83.37) by
 FE-MBX2060.de.bosch.com (10.3.231.165) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.1.1979.3; Tue, 7 Jul 2020 11:25:10 +0200
Received: from SGPMBX2022.APAC.bosch.com (10.187.83.37) by
 SGPMBX2022.APAC.bosch.com (10.187.83.37) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.1.1979.3; Tue, 7 Jul 2020 17:25:08 +0800
Received: from SGPMBX2022.APAC.bosch.com ([fe80::2d4d:b176:b210:896]) by
 SGPMBX2022.APAC.bosch.com ([fe80::2d4d:b176:b210:896%6]) with mapi id
 15.01.1979.003; Tue, 7 Jul 2020 17:25:08 +0800
From: "Manikandan Chockalingam (RBEI/ECF3)"
 <Manikandan.Chockalingam@in.bosch.com>
To: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>,
 "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: RE: [BUG] Xen build for RCAR failing
Thread-Topic: [BUG] Xen build for RCAR failing
Thread-Index: AdZUKc5JeR7gPpESR52uLkZK1kYwO///kucA//9vm+A=
Date: Tue, 7 Jul 2020 09:25:08 +0000
Message-ID: <898ca246818146f2bfce961d6ce9d5c1@in.bosch.com>
References: <1b60ed1cd7834ed5957a2b4870602073@in.bosch.com>
 <48b1ea69-f5c1-4ea2-455c-50bab72bc1da@epam.com>
In-Reply-To: <48b1ea69-f5c1-4ea2-455c-50bab72bc1da@epam.com>
Accept-Language: en-US, en-SG
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.187.56.214]
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hello Oleksandr Andrushchenko,

Thanks for your quick response. I am using the yocto version[yocto_2.19] mentioned in the link. Still I face the issue.

Mit freundlichen Grüßen / Best regards

 Chockalingam Manikandan

ES-CM Core fn,ADIT (RBEI/ECF3)
Robert Bosch GmbH | Postfach 10 60 50 | 70049 Stuttgart | GERMANY | www.bosch.com
Tel. +91 80 6136-4452 | Fax +91 80 6617-0711 | Manikandan.Chockalingam@in.bosch.com

Registered Office: Stuttgart, Registration Court: Amtsgericht Stuttgart, HRB 14000;
Chairman of the Supervisory Board: Franz Fehrenbach; Managing Directors: Dr. Volkmar Denner,
Prof. Dr. Stefan Asenkerschbaumer, Dr. Michael Bolle, Dr. Christian Fischer, Dr. Stefan Hartung,
Dr. Markus Heyn, Harald Kröger, Christoph Kübel, Rolf Najork, Uwe Raschke, Peter Tyroller

-----Original Message-----
From: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
Sent: Tuesday, July 7, 2020 1:42 PM
To: Manikandan Chockalingam (RBEI/ECF3) <Manikandan.Chockalingam@in.bosch.com>; xen-devel@lists.xen.org
Subject: Re: [BUG] Xen build for RCAR failing


On 7/7/20 10:58 AM, Manikandan Chockalingam (RBEI/ECF3) wrote:
>
> Hello Team,
>
> I am trying to build xen hypervisor for RCAR and following the https://wiki.xenproject.org/wiki/Xen_ARM_with_Virtualization_Extensions/Salvator-X steps.
>
> But am facing the following issues
>
> 1.SRC_URI="http://v3.sk/~lkundrak/dev86/archive/Dev86src-${PV}.tar.gz in recipes-extended/dev86/dev86_0.16.20.bb is not accesible
>
> *Modification done:*SRC_URI=https://src.fedoraproject.org/lookaside/extras/dev86/Dev86src-0.16.20.tar.gz/567cf460d132f9d8775dd95f9208e49a/Dev86src-${PV}.tar.gz <https://src.fedoraproject.org/lookaside/extras/dev86/Dev86src-0.16.20.tar.gz/567cf460d132f9d8775dd95f9208e49a/Dev86src-$%7bPV%7d.tar.gz>
>
You can try what we use [1]. And the issue you are facing is rather Yocto related, not R-Car specific, IMO

[1] https://github.com/xen-troops/meta-xt-prod-devel/blob/master/recipes-domd/domd-image-weston/files/meta-xt-prod-extra/recipes-extended/dev86/dev86_%25.bbappend
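
[Editorial note: the SRC_URI change discussed in this message amounts to a recipe override. A minimal sketch of such a dev86 .bbappend — the mirror URL and checksum path are the ones quoted in the mail, while the file name merely follows the layout of the [1] link and is otherwise an assumption, not a verified recipe:]

```
# Hypothetical recipes-extended/dev86/dev86_%.bbappend, pointing the fetch at
# the Fedora lookaside mirror quoted in the mail instead of the dead v3.sk URL.
SRC_URI = "https://src.fedoraproject.org/lookaside/extras/dev86/Dev86src-0.16.20.tar.gz/567cf460d132f9d8775dd95f9208e49a/Dev86src-${PV}.tar.gz"
```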


From xen-devel-bounces@lists.xenproject.org Tue Jul 07 09:25:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jul 2020 09:25:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jsjrS-0006Mp-J3; Tue, 07 Jul 2020 09:25:58 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=V8UY=AS=ozlabs.org=dgibson@srs-us1.protection.inumbo.net>)
 id 1jsjrR-0006Mh-B7
 for xen-devel@lists.xenproject.org; Tue, 07 Jul 2020 09:25:58 +0000
X-Inumbo-ID: d5099a4c-c033-11ea-8d3a-12813bfff9fa
Received: from ozlabs.org (unknown [203.11.71.1])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d5099a4c-c033-11ea-8d3a-12813bfff9fa;
 Tue, 07 Jul 2020 09:25:49 +0000 (UTC)
Received: by ozlabs.org (Postfix, from userid 1007)
 id 4B1H9F4Tmzz9sSJ; Tue,  7 Jul 2020 19:25:37 +1000 (AEST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple;
 d=gibson.dropbear.id.au; s=201602; t=1594113937;
 bh=9tmB0oo0rInMJmC0KSMdYmwrfmZzqQ2wS/Mgir375Ws=;
 h=Date:From:To:Cc:Subject:References:In-Reply-To:From;
 b=Gc4tCkDzOS1c6B9lVduRGQZey++bexGdj7WDDS2L5hTOtbUqSdMrYiSOSrSmuFTYU
 L18hUUUdlO0eFGWW82Dx+mjVbxouEcoVJ3ZMTwf2easzclCwoB6qApIQNqAOySpNO+
 Z+eQcGlKb9irrMEkGgB7Toc7KDagmsTGkupVkOCc=
Date: Tue, 7 Jul 2020 18:22:09 +1000
From: David Gibson <david@gibson.dropbear.id.au>
To: Christophe de Dinechin <dinechin@redhat.com>
Subject: Re: [PATCH] trivial: Remove trailing whitespaces
Message-ID: <20200707082209.GC18595@umbus.fritz.box>
References: <20200706162300.1084753-1-dinechin@redhat.com>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature"; boundary="FsscpQKzF/jJk6ya"
Content-Disposition: inline
In-Reply-To: <20200706162300.1084753-1-dinechin@redhat.com>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Peter Maydell <peter.maydell@linaro.org>,
 Dmitry Fleytman <dmitry.fleytman@gmail.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>, qemu-devel@nongnu.org,
 Michael Roth <mdroth@linux.vnet.ibm.com>, Max Filippov <jcmvbkbc@gmail.com>,
 Gerd Hoffmann <kraxel@redhat.com>,
 "Edgar E. Iglesias" <edgar.iglesias@gmail.com>, Max Reitz <mreitz@redhat.com>,
 Marek Vasut <marex@denx.de>, Stefano Stabellini <sstabellini@kernel.org>,
 qemu-block@nongnu.org, qemu-trivial@nongnu.org, Paul Durrant <paul@xen.org>,
 Magnus Damm <magnus.damm@gmail.com>, Markus Armbruster <armbru@redhat.com>,
 =?iso-8859-1?Q?Herv=E9?= Poussineau <hpoussin@reactos.org>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 Anthony Perard <anthony.perard@citrix.com>,
 =?iso-8859-1?Q?Marc-Andr=E9?= Lureau <marcandre.lureau@redhat.com>,
 Andrzej Zaborowski <balrogg@gmail.com>, Artyom Tarasenko <atar4qemu@gmail.com>,
 Alistair Francis <alistair@alistair23.me>,
 Eduardo Habkost <ehabkost@redhat.com>, Michael Tokarev <mjt@tls.msk.ru>,
 Riku Voipio <riku.voipio@iki.fi>, Peter Lieven <pl@kamp.de>,
 "Dr. David Alan Gilbert" <dgilbert@redhat.com>,
 Roman Bolshakov <r.bolshakov@yadro.com>, qemu-arm@nongnu.org,
 Peter Chubb <peter.chubb@nicta.com.au>,
 Ronnie Sahlberg <ronniesahlberg@gmail.com>, xen-devel@lists.xenproject.org,
 Alex =?iso-8859-1?Q?Benn=E9e?= <alex.bennee@linaro.org>,
 Richard Henderson <rth@twiddle.net>, Kevin Wolf <kwolf@redhat.com>,
 Daniel =?iso-8859-1?Q?P=2E_Berrang=E9?= <berrange@redhat.com>,
 Yoshinori Sato <ysato@users.sourceforge.jp>,
 Philippe =?iso-8859-1?Q?Mathieu-Daud=E9?= <philmd@redhat.com>,
 Chris Wulff <crwulff@gmail.com>, Laurent Vivier <laurent@vivier.eu>,
 Jean-Christophe Dubois <jcd@tribudubois.net>, qemu-ppc@nongnu.org,
 Paolo Bonzini <pbonzini@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>


--FsscpQKzF/jJk6ya
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable

On Mon, Jul 06, 2020 at 06:23:00PM +0200, Christophe de Dinechin wrote:
> There are a number of unnecessary trailing whitespaces that have
> accumulated over time in the source code. They cause stray changes
> in patches if you use tools that automatically remove them.
>
> Tested by doing a `git diff -w` after the change.
>
> This could probably be turned into a pre-commit hook.
>
> Signed-off-by: Christophe de Dinechin <dinechin@redhat.com>

ppc parts

Acked-by: David Gibson <david@gibson.dropbear.id.au>
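
[Editorial note: the pre-commit hook Christophe floats above could be built around a check like the following — a hypothetical sketch, not part of the patch, that reports lines ending in the same class of whitespace this patch removes:]

```python
# Sketch of the "pre-commit hook" idea from the mail: find lines that end
# in spaces or tabs, so a hook can reject (or strip) them before commit.
import re

TRAILING_WS = re.compile(r"[ \t]+$")

def trailing_ws_lines(text):
    """Return 1-based numbers of lines ending in spaces or tabs."""
    return [n for n, line in enumerate(text.splitlines(), 1)
            if TRAILING_WS.search(line)]

if __name__ == "__main__":
    sample = "int retries = ISCSI_CMD_RETRIES; \n\ndo {\n"
    print(trailing_ws_lines(sample))  # prints [1]
```

In a real hook this would run over `git diff --cached` output; `git diff --cached --check` gives much the same result with no code at all.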

> ---
>  block/iscsi.c                                 |   2 +-
>  disas/cris.c                                  |   2 +-
>  disas/microblaze.c                            |  80 +++---
>  disas/nios2.c                                 | 256 +++++++++---------
>  hmp-commands.hx                               |   2 +-
>  hw/alpha/typhoon.c                            |   6 +-
>  hw/arm/gumstix.c                              |   6 +-
>  hw/arm/omap1.c                                |   2 +-
>  hw/arm/stellaris.c                            |   2 +-
>  hw/char/etraxfs_ser.c                         |   2 +-
>  hw/core/ptimer.c                              |   2 +-
>  hw/cris/axis_dev88.c                          |   2 +-
>  hw/cris/boot.c                                |   2 +-
>  hw/display/qxl.c                              |   2 +-
>  hw/dma/etraxfs_dma.c                          |  18 +-
>  hw/dma/i82374.c                               |   2 +-
>  hw/i2c/bitbang_i2c.c                          |   2 +-
>  hw/input/tsc2005.c                            |   2 +-
>  hw/input/tsc210x.c                            |   2 +-
>  hw/intc/etraxfs_pic.c                         |   8 +-
>  hw/intc/sh_intc.c                             |  10 +-
>  hw/intc/xilinx_intc.c                         |   2 +-
>  hw/misc/imx25_ccm.c                           |   6 +-
>  hw/misc/imx31_ccm.c                           |   2 +-
>  hw/net/vmxnet3.h                              |   2 +-
>  hw/net/xilinx_ethlite.c                       |   2 +-
>  hw/pci/pcie.c                                 |   2 +-
>  hw/sd/omap_mmc.c                              |   2 +-
>  hw/sh4/shix.c                                 |   2 +-
>  hw/sparc64/sun4u.c                            |   2 +-
>  hw/timer/etraxfs_timer.c                      |   2 +-
>  hw/timer/xilinx_timer.c                       |   4 +-
>  hw/usb/hcd-musb.c                             |  10 +-
>  hw/usb/hcd-ohci.c                             |   6 +-
>  hw/usb/hcd-uhci.c                             |   2 +-
>  hw/virtio/virtio-pci.c                        |   2 +-
>  include/hw/cris/etraxfs_dma.h                 |   4 +-
>  include/hw/net/lance.h                        |   2 +-
>  include/hw/ppc/spapr.h                        |   2 +-
>  include/hw/xen/interface/io/ring.h            |  34 +--
>  include/qemu/log.h                            |   2 +-
>  include/qom/object.h                          |   4 +-
>  linux-user/cris/cpu_loop.c                    |  16 +-
>  linux-user/microblaze/cpu_loop.c              |  16 +-
>  linux-user/mmap.c                             |   8 +-
>  linux-user/sparc/signal.c                     |   4 +-
>  linux-user/syscall.c                          |  24 +-
>  linux-user/syscall_defs.h                     |   2 +-
>  linux-user/uaccess.c                          |   2 +-
>  os-posix.c                                    |   2 +-
>  qapi/qapi-util.c                              |   2 +-
>  qemu-img.c                                    |   2 +-
>  qemu-options.hx                               |  26 +-
>  qom/object.c                                  |   2 +-
>  target/cris/translate.c                       |  28 +-
>  target/cris/translate_v10.inc.c               |   6 +-
>  target/i386/hvf/hvf.c                         |   4 +-
>  target/i386/hvf/x86.c                         |   4 +-
>  target/i386/hvf/x86_decode.c                  |  20 +-
>  target/i386/hvf/x86_decode.h                  |   4 +-
>  target/i386/hvf/x86_descr.c                   |   2 +-
>  target/i386/hvf/x86_emu.c                     |   2 +-
>  target/i386/hvf/x86_mmu.c                     |   6 +-
>  target/i386/hvf/x86_task.c                    |   2 +-
>  target/i386/hvf/x86hvf.c                      |  42 +--
>  target/i386/translate.c                       |   8 +-
>  target/microblaze/mmu.c                       |   2 +-
>  target/microblaze/translate.c                 |   2 +-
>  target/sh4/op_helper.c                        |   4 +-
>  target/xtensa/core-de212/core-isa.h           |   6 +-
>  .../xtensa/core-sample_controller/core-isa.h  |   6 +-
>  target/xtensa/core-test_kc705_be/core-isa.h   |   2 +-
>  tcg/sparc/tcg-target.inc.c                    |   2 +-
>  tcg/tcg.c                                     |  32 +--
>  tests/tcg/multiarch/test-mmap.c               |  72 ++---
>  ui/curses.c                                   |   4 +-
>  ui/curses_keys.h                              |   4 +-
>  util/cutils.c                                 |   2 +-
>  78 files changed, 440 insertions(+), 440 deletions(-)
>
> diff --git a/block/iscsi.c b/block/iscsi.c
> index a8b76979d8..884075f4e1 100644
> --- a/block/iscsi.c
> +++ b/block/iscsi.c
> @@ -1412,7 +1412,7 @@ static void iscsi_readcapacity_sync(IscsiLun *iscsilun, Error **errp)
>      struct scsi_task *task = NULL;
>      struct scsi_readcapacity10 *rc10 = NULL;
>      struct scsi_readcapacity16 *rc16 = NULL;
> -    int retries = ISCSI_CMD_RETRIES; 
> +    int retries = ISCSI_CMD_RETRIES;
>  
>      do {
>          if (task != NULL) {
> diff --git a/disas/cris.c b/disas/cris.c
> index 0b0a3fb916..a2be8f1412 100644
> --- a/disas/cris.c
> +++ b/disas/cris.c
> @@ -2569,7 +2569,7 @@ print_insn_cris_generic (bfd_vma memaddr,
>    nbytes = info->buffer_length ? info->buffer_length
>                                 : MAX_BYTES_PER_CRIS_INSN;
>    nbytes = MIN(nbytes, MAX_BYTES_PER_CRIS_INSN);
> -  status = (*info->read_memory_func) (memaddr, buffer, nbytes, info);  
> +  status = (*info->read_memory_func) (memaddr, buffer, nbytes, info);
>  
>    /* If we did not get all we asked for, then clear the rest.
>       Hopefully this makes a reproducible result in case of errors.  */
> diff --git a/disas/microblaze.c b/disas/microblaze.c
> index 0b89b9c4fa..6de66532f5 100644
> --- a/disas/microblaze.c
> +++ b/disas/microblaze.c
> @@ -15,15 +15,15 @@ You should have received a copy of the GNU General Public License
>  along with this program; if not, see <http://www.gnu.org/licenses/>. */
>  
>  /*
> - * Copyright (c) 2001 Xilinx, Inc.  All rights reserved. 
> + * Copyright (c) 2001 Xilinx, Inc.  All rights reserved.
>   *
>   * Redistribution and use in source and binary forms are permitted
>   * provided that the above copyright notice and this paragraph are
>   * duplicated in all such forms and that any documentation,
>   * advertising materials, and other materials related to such
>   * distribution and use acknowledge that the software was developed
> - * by Xilinx, Inc.  The name of the Company may not be used to endorse 
> - * or promote products derived from this software without specific prior 
> + * by Xilinx, Inc.  The name of the Company may not be used to endorse
> + * or promote products derived from this software without specific prior
>   * written permission.
>   * THIS SOFTWARE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
>   * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
> @@ -42,7 +42,7 @@ along with this program; if not, see <http://www.gnu.org/licenses/>. */
>  /* Assembler instructions for Xilinx's microblaze processor
>     Copyright (C) 1999, 2000 Free Software Foundation, Inc.
>  
> -   
> +
>  This program is free software; you can redistribute it and/or modify
>  it under the terms of the GNU General Public License as published by
>  the Free Software Foundation; either version 2 of the License, or
> @@ -57,15 +57,15 @@ You should have received a copy of the GNU General Public License
>  along with this program; if not, see <http://www.gnu.org/licenses/>.  */
>  
>  /*
> - * Copyright (c) 2001 Xilinx, Inc.  All rights reserved. 
> + * Copyright (c) 2001 Xilinx, Inc.  All rights reserved.
>   *
>   * Redistribution and use in source and binary forms are permitted
>   * provided that the above copyright notice and this paragraph are
>   * duplicated in all such forms and that any documentation,
>   * advertising materials, and other materials related to such
>   * distribution and use acknowledge that the software was developed
> - * by Xilinx, Inc.  The name of the Company may not be used to endorse 
> - * or promote products derived from this software without specific prior 
> + * by Xilinx, Inc.  The name of the Company may not be used to endorse
> + * or promote products derived from this software without specific prior
>   * written permission.
>   * THIS SOFTWARE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
>   * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
> @@ -79,15 +79,15 @@ along with this program; if not, see <http://www.gnu.org/licenses/>.  */
>  #define MICROBLAZE_OPCM
>  
>  /*
> - * Copyright (c) 2001 Xilinx, Inc.  All rights reserved. 
> + * Copyright (c) 2001 Xilinx, Inc.  All rights reserved.
>   *
>   * Redistribution and use in source and binary forms are permitted
>   * provided that the above copyright notice and this paragraph are
>   * duplicated in all such forms and that any documentation,
>   * advertising materials, and other materials related to such
>   * distribution and use acknowledge that the software was developed
> - * by Xilinx, Inc.  The name of the Company may not be used to endorse 
> - * or promote products derived from this software without specific prior 
> + * by Xilinx, Inc.  The name of the Company may not be used to endorse
> + * or promote products derived from this software without specific prior
>   * written permission.
>   * THIS SOFTWARE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
>   * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
> @@ -108,8 +108,8 @@ enum microblaze_instr {
>     imm, rtsd, rtid, rtbd, rted, bri, brid, brlid, brai, braid, bralid,
>     brki, beqi, beqid, bnei, bneid, blti, bltid, blei, bleid, bgti,
>     bgtid, bgei, bgeid, lbu, lhu, lw, lwx, sb, sh, sw, swx, lbui, lhui, lwi,
> -   sbi, shi, swi, msrset, msrclr, tuqula, fadd, frsub, fmul, fdiv, 
> -   fcmp_lt, fcmp_eq, fcmp_le, fcmp_gt, fcmp_ne, fcmp_ge, fcmp_un, flt, fint, fsqrt, 
> +   sbi, shi, swi, msrset, msrclr, tuqula, fadd, frsub, fmul, fdiv,
> +   fcmp_lt, fcmp_eq, fcmp_le, fcmp_gt, fcmp_ne, fcmp_ge, fcmp_un, flt, fint, fsqrt,
>     tget, tcget, tnget, tncget, tput, tcput, tnput, tncput,
>     eget, ecget, neget, necget, eput, ecput, neput, necput,
>     teget, tecget, tneget, tnecget, teput, tecput, tneput, tnecput,
> @@ -182,7 +182,7 @@ enum microblaze_instr_type {
>  /* Assembler Register - Used in Delay Slot Optimization */
>  #define REG_AS    18
>  #define REG_ZERO  0
> - 
> +
>  #define RD_LOW  21 /* low bit for RD */
>  #define RA_LOW  16 /* low bit for RA */
>  #define RB_LOW  11 /* low bit for RB */
> @@ -258,7 +258,7 @@ enum microblaze_instr_type {
>  #define OPCODE_MASK_H24 0xFC1F07FF /* High 6, bits 20-16 and low 11 bits */
>  #define OPCODE_MASK_H124  0xFFFF07FF /* High 16, and low 11 bits */
>  #define OPCODE_MASK_H1234 0xFFFFFFFF /* All 32 bits */
> -#define OPCODE_MASK_H3  0xFC000600 /* High 6 bits and bits 21, 22 */  
> +#define OPCODE_MASK_H3  0xFC000600 /* High 6 bits and bits 21, 22 */
>  #define OPCODE_MASK_H32 0xFC00FC00 /* High 6 bits and bit 16-21 */
>  #define OPCODE_MASK_H34B   0xFC0000FF /* High 6 bits and low 8 bits */
>  #define OPCODE_MASK_H34C   0xFC0007E0 /* High 6 bits and bits 21-26 */
> @@ -277,14 +277,14 @@ static const struct op_code_struct {
>    short inst_offset_type; /* immediate vals offset from PC? (= 1 for branches) */
>    short delay_slots; /* info about delay slots needed after this instr. */
>    short immval_mask;
> -  unsigned long bit_sequence; /* all the fixed bits for the op are set and all the variable bits (reg names, imm vals) are set to 0 */ 
> +  unsigned long bit_sequence; /* all the fixed bits for the op are set and all the variable bits (reg names, imm vals) are set to 0 */
>    unsigned long opcode_mask; /* which bits define the opcode */
>    enum microblaze_instr instr;
>    enum microblaze_instr_type instr_type;
>    /* more info about output format here */
> -} opcodes[MAX_OPCODES] = 
> +} opcodes[MAX_OPCODES] =
>  
> -{ 
> +{
>    {"add",   INST_TYPE_RD_R1_R2, INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x00000000, OPCODE_MASK_H4, add, arithmetic_inst },
>    {"rsub",  INST_TYPE_RD_R1_R2, INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x04000000, OPCODE_MASK_H4, rsub, arithmetic_inst },
>    {"addc",  INST_TYPE_RD_R1_R2, INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x08000000, OPCODE_MASK_H4, addc, arithmetic_inst },
> @@ -437,7 +437,7 @@ static const struct op_code_struct {
>    {"tcput",  INST_TYPE_RFSL,    INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x6C00B000, OPCODE_MASK_H32, tcput,  anyware_inst },
>    {"tnput",  INST_TYPE_RFSL,    INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x6C00D000, OPCODE_MASK_H32, tnput,  anyware_inst },
>    {"tncput", INST_TYPE_RFSL,    INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x6C00F000, OPCODE_MASK_H32, tncput, anyware_inst },
> - 
> +
>    {"eget",   INST_TYPE_RD_RFSL,  INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x6C000400, OPCODE_MASK_H32, eget,   anyware_inst },
>    {"ecget",  INST_TYPE_RD_RFSL,  INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x6C002400, OPCODE_MASK_H32, ecget,  anyware_inst },
>    {"neget",  INST_TYPE_RD_RFSL,  INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x6C004400, OPCODE_MASK_H32, neget,  anyware_inst },
> @@ -446,7 +446,7 @@ static const struct op_code_struct {
>    {"ecput",  INST_TYPE_R1_RFSL,  INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x6C00A400, OPCODE_MASK_H32, ecput,  anyware_inst },
>    {"neput",  INST_TYPE_R1_RFSL,  INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x6C00C400, OPCODE_MASK_H32, neput,  anyware_inst },
>    {"necput", INST_TYPE_R1_RFSL,  INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x6C00E400, OPCODE_MASK_H32, necput, anyware_inst },
> - 
> +
>    {"teget",   INST_TYPE_RD_RFSL, INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x6C001400, OPCODE_MASK_H32, teget,   anyware_inst },
>    {"tecget",  INST_TYPE_RD_RFSL, INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x6C003400, OPCODE_MASK_H32, tecget,  anyware_inst },
>    {"tneget",  INST_TYPE_RD_RFSL, INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x6C005400, OPCODE_MASK_H32, tneget,  anyware_inst },
> @@ -455,7 +455,7 @@ static const struct op_code_struct {
>    {"tecput",  INST_TYPE_RFSL,    INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x6C00B400, OPCODE_MASK_H32, tecput,  anyware_inst },
>    {"tneput",  INST_TYPE_RFSL,    INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x6C00D400, OPCODE_MASK_H32, tneput,  anyware_inst },
>    {"tnecput", INST_TYPE_RFSL,    INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x6C00F400, OPCODE_MASK_H32, tnecput, anyware_inst },
> - 
> +
>    {"aget",   INST_TYPE_RD_RFSL,  INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x6C000800, OPCODE_MASK_H32, aget,   anyware_inst },
>    {"caget",  INST_TYPE_RD_RFSL,  INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x6C002800, OPCODE_MASK_H32, caget,  anyware_inst },
>    {"naget",  INST_TYPE_RD_RFSL,  INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x6C004800, OPCODE_MASK_H32, naget,  anyware_inst },
> @@ -464,7 +464,7 @@ static const struct op_code_struct {
>    {"caput",  INST_TYPE_R1_RFSL,  INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x6C00A800, OPCODE_MASK_H32, caput,  anyware_inst },
>    {"naput",  INST_TYPE_R1_RFSL,  INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x6C00C800, OPCODE_MASK_H32, naput,  anyware_inst },
>    {"ncaput", INST_TYPE_R1_RFSL,  INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x6C00E800, OPCODE_MASK_H32, ncaput, anyware_inst },
> - 
> +
>    {"taget",   INST_TYPE_RD_RFSL, INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x6C001800, OPCODE_MASK_H32, taget,   anyware_inst },
>    {"tcaget",  INST_TYPE_RD_RFSL, INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x6C003800, OPCODE_MASK_H32, tcaget,  anyware_inst },
>    {"tnaget",  INST_TYPE_RD_RFSL, INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x6C005800, OPCODE_MASK_H32, tnaget,  anyware_inst },
> @@ -473,7 +473,7 @@ static const struct op_code_struct {
>    {"tcaput",  INST_TYPE_RFSL,    INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x6C00B800, OPCODE_MASK_H32, tcaput,  anyware_inst },
>    {"tnaput",  INST_TYPE_RFSL,    INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x6C00D800, OPCODE_MASK_H32, tnaput,  anyware_inst },
>    {"tncaput", INST_TYPE_RFSL,    INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x6C00F800, OPCODE_MASK_H32, tncaput, anyware_inst },
> - 
> +
>    {"eaget",   INST_TYPE_RD_RFSL,  INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x6C000C00, OPCODE_MASK_H32, eget,   anyware_inst },
>    {"ecaget",  INST_TYPE_RD_RFSL,  INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x6C002C00, OPCODE_MASK_H32, ecget,  anyware_inst },
>    {"neaget",  INST_TYPE_RD_RFSL,  INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x6C004C00, OPCODE_MASK_H32, neget,  anyware_inst },
> @@ -482,7 +482,7 @@ static const struct op_code_struct {
>    {"ecaput",  INST_TYPE_R1_RFSL,  INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x6C00AC00, OPCODE_MASK_H32, ecput,  anyware_inst },
>    {"neaput",  INST_TYPE_R1_RFSL,  INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x6C00CC00, OPCODE_MASK_H32, neput,  anyware_inst },
>    {"necaput", INST_TYPE_R1_RFSL,  INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x6C00EC00, OPCODE_MASK_H32, necput, anyware_inst },
> - 
> +
>    {"teaget",   INST_TYPE_RD_RFSL, INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x6C001C00, OPCODE_MASK_H32, teaget,   anyware_inst },
>    {"tecaget",  INST_TYPE_RD_RFSL, INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x6C003C00, OPCODE_MASK_H32, tecaget,  anyware_inst },
>    {"tneaget",  INST_TYPE_RD_RFSL, INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x6C005C00, OPCODE_MASK_H32, tneaget,  anyware_inst },
> @@ -491,7 +491,7 @@ static const struct op_code_struct {
>    {"tecaput",  INST_TYPE_RFSL,    INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x6C00BC00, OPCODE_MASK_H32, tecaput,  anyware_inst },
>    {"tneaput",  INST_TYPE_RFSL,    INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x6C00DC00, OPCODE_MASK_H32, tneaput,  anyware_inst },
>    {"tnecaput", INST_TYPE_RFSL,    INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x6C00FC00, OPCODE_MASK_H32, tnecaput, anyware_inst },
> - 
> +
>    {"getd",    INST_TYPE_RD_R2, INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x4C000000, OPCODE_MASK_H34C, getd,    anyware_inst },
>    {"tgetd",   INST_TYPE_RD_R2, INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x4C000080, OPCODE_MASK_H34C, tgetd,   anyware_inst },
>    {"cgetd",   INST_TYPE_RD_R2, INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x4C000100, OPCODE_MASK_H34C, cgetd,   anyware_inst },
> @@ -508,7 +508,7 @@ static const struct op_code_struct {
>    {"tnputd",  INST_TYPE_R2,    INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x4C000680, OPCODE_MASK_H34C, tnputd,  anyware_inst },
>    {"ncputd",  INST_TYPE_R1_R2, INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x4C000700, OPCODE_MASK_H34C, ncputd,  anyware_inst },
>    {"tncputd", INST_TYPE_R2,    INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x4C000780, OPCODE_MASK_H34C, tncputd, anyware_inst },
> - 
> +
>    {"egetd",    INST_TYPE_RD_R2, INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x4C000020, OPCODE_MASK_H34C, egetd,    anyware_inst },
>    {"tegetd",   INST_TYPE_RD_R2, INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x4C0000A0, OPCODE_MASK_H34C, tegetd,   anyware_inst },
>    {"ecgetd",   INST_TYPE_RD_R2, INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x4C000120, OPCODE_MASK_H34C, ecgetd,   anyware_inst },
> @@ -525,7 +525,7 @@ static const struct op_code_struct {
>    {"tneputd",  INST_TYPE_R2,    INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x4C0006A0, OPCODE_MASK_H34C, tneputd,  anyware_inst },
>    {"necputd",  INST_TYPE_R1_R2, INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x4C000720, OPCODE_MASK_H34C, necputd,  anyware_inst },
>    {"tnecputd", INST_TYPE_R2,    INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x4C0007A0, OPCODE_MASK_H34C, tnecputd, anyware_inst },
> - 
> +
>    {"agetd",    INST_TYPE_RD_R2, INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x4C000040, OPCODE_MASK_H34C, agetd,    anyware_inst },
>    {"tagetd",   INST_TYPE_RD_R2, INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x4C0000C0, OPCODE_MASK_H34C, tagetd,   anyware_inst },
>    {"cagetd",   INST_TYPE_RD_R2, INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x4C000140, OPCODE_MASK_H34C, cagetd,   anyware_inst },
> @@ -542,7 +542,7 @@ static const struct op_code_struct {
>    {"tnaputd",  INST_TYPE_R2,    INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x4C0006C0, OPCODE_MASK_H34C, tnaputd,  anyware_inst },
>    {"ncaputd",  INST_TYPE_R1_R2, INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x4C000740, OPCODE_MASK_H34C, ncaputd,  anyware_inst },
>    {"tncaputd", INST_TYPE_R2,    INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x4C0007C0, OPCODE_MASK_H34C, tncaputd, anyware_inst },
> - 
> +
>    {"eagetd",    INST_TYPE_RD_R2, INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x4C000060, OPCODE_MASK_H34C, eagetd,    anyware_inst },
>    {"teagetd",   INST_TYPE_RD_R2, INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x4C0000E0, OPCODE_MASK_H34C, teagetd,   anyware_inst },
>    {"ecagetd",   INST_TYPE_RD_R2, INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x4C000160, OPCODE_MASK_H34C, ecagetd,   anyware_inst },
> @@ -648,13 +648,13 @@ get_field_unsigned_imm (long instr)
>  
>  /*
>    char *
> -  get_field_special (instr) 
> +  get_field_special (instr)
>    long instr;
>    {
>    char tmpstr[25];
> -  
> +
>    sprintf(tmpstr, "%s%s", register_prefix, (((instr & IMM_MASK) >> IMM_LOW) & REG_MSR_MASK) == 0 ? "pc" : "msr");
> -  
> +
>    return(strdup(tmpstr));
>    }
>  */
> @@ -684,7 +684,7 @@ get_field_special(long instr, const struct op_code_struct *op)
>        break;
>     case REG_BTR_MASK :
>        strcpy(spr, "btr");
> -      break;      
> +      break;
>     case REG_EDR_MASK :
>        strcpy(spr, "edr");
>        break;
> @@ -719,13 +719,13 @@ get_field_special(long instr, const struct op_code_struct *op)
>       }
>       break;
>     }
> -   
> +
>     sprintf(tmpstr, "%s%s", register_prefix, spr);
>     return(strdup(tmpstr));
>  }
>  
>  static unsigned long
> -read_insn_microblaze (bfd_vma memaddr, 
> +read_insn_microblaze (bfd_vma memaddr,
>  		      struct disassemble_info *info,
>  		      const struct op_code_struct **opr)
>  {
> @@ -736,7 +736,7 @@ read_insn_microblaze (bfd_vma memaddr,
>  
>    status = info->read_memory_func (memaddr, ibytes, 4, info);
>  
> -  if (status != 0) 
> +  if (status != 0)
>      {
>        info->memory_error_func (status, memaddr, info);
>        return 0;
> @@ -761,7 +761,7 @@ read_insn_microblaze (bfd_vma memaddr,
>  }
>  
>  
> -int 
> +int
>  print_insn_microblaze (bfd_vma memaddr, struct disassemble_info * info)
>  {
>    fprintf_function    fprintf_func = info->fprintf_func;
> @@ -780,7 +780,7 @@ print_insn_microblaze (bfd_vma memaddr, struct disassemble_info * info)
>    if (inst == 0) {
>      return -1;
>    }
> -  
> +
>    if (prev_insn_vma == curr_insn_vma) {
>    if (memaddr-(info->bytes_per_chunk) == prev_insn_addr) {
>      prev_inst = read_insn_microblaze (prev_insn_addr, info, &pop);
> @@ -806,7 +806,7 @@ print_insn_microblaze (bfd_vma memaddr, struct disass=
emble_info * info)
>    else
>      {
>        fprintf_func (stream, "%s", op->name);
> -     =20
> +
>        switch (op->inst_type)
>  	{
>    case INST_TYPE_RD_R1_R2:
> @@ -851,7 +851,7 @@ print_insn_microblaze (bfd_vma memaddr, struct disass=
emble_info * info)
>  	  break;
>  	case INST_TYPE_R1_IMM:
>  	  fprintf_func(stream, "\t%s, %s", get_field_r1(inst), get_field_imm(in=
st));
> -	  /* The non-pc relative instructions are returns, which shouldn't=20
> +	  /* The non-pc relative instructions are returns, which shouldn't
>  	     have a label printed */
>  	  if (info->print_address_func && op->inst_offset_type =3D=3D INST_PC_O=
FFSET && info->symbol_at_address_func) {
>  	    if (immfound)
> @@ -886,7 +886,7 @@ print_insn_microblaze (bfd_vma memaddr, struct disass=
emble_info * info)
>  	    if (info->symbol_at_address_func(immval, info)) {
>  	      fprintf_func (stream, "\t// ");
>  	      info->print_address_func (immval, info);
> -	    }=20
> +	    }
>  	  }
>  	  break;
>          case INST_TYPE_IMM:
> @@ -938,7 +938,7 @@ print_insn_microblaze (bfd_vma memaddr, struct disass=
emble_info * info)
>  	  break;
>  	}
>      }
> - =20
> +
>    /* Say how many bytes we consumed? */
>    return 4;
>  }
> diff --git a/disas/nios2.c b/disas/nios2.c
> index c3e82140c7..35d9f40f3e 100644
> --- a/disas/nios2.c
> +++ b/disas/nios2.c
> @@ -96,7 +96,7 @@ enum overflow_type
>    no_overflow
>  };
> =20
> -/* This structure holds information for a particular instruction.=20
> +/* This structure holds information for a particular instruction.
> =20
>     The args field is a string describing the operands.  The following
>     letters can appear in the args:
> @@ -152,26 +152,26 @@ enum overflow_type
>  struct nios2_opcode
>  {
>    const char *name;		/* The name of the instruction.  */
> -  const char *args;		/* A string describing the arguments for this
> +  const char *args;		/* A string describing the arguments for this
>  				   instruction.  */
> -  const char *args_test;	/* Like args, but with an extra argument for
> +  const char *args_test;	/* Like args, but with an extra argument for
>  				   the expected opcode.  */
> -  unsigned long num_args;	/* The number of arguments the instruction
> +  unsigned long num_args;	/* The number of arguments the instruction
>  				   takes.  */
>    unsigned size;		/* Size in bytes of the instruction.  */
>    enum iw_format_type format;	/* Instruction format.  */
>    unsigned long match;		/* The basic opcode for the instruction.  */
> -  unsigned long mask;		/* Mask for the opcode field of the
> +  unsigned long mask;		/* Mask for the opcode field of the
>  				   instruction.  */
> -  unsigned long pinfo;		/* Is this a real instruction or instruction
> +  unsigned long pinfo;		/* Is this a real instruction or instruction
>  				   macro?  */
> -  enum overflow_type overflow_msg;  /* Used to generate informative
> +  enum overflow_type overflow_msg;  /* Used to generate informative
>  				       message when fixup overflows.  */
>  };
>
> -/* This value is used in the nios2_opcode.pinfo field to indicate that the
> -   instruction is a macro or pseudo-op.  This requires special treatment by
> -   the assembler, and is used by the disassembler to determine whether to
> +/* This value is used in the nios2_opcode.pinfo field to indicate that the
> +   instruction is a macro or pseudo-op.  This requires special treatment by
> +   the assembler, and is used by the disassembler to determine whether to
>     check for a nop.  */
>  #define NIOS2_INSN_MACRO	0x80000000
>  #define NIOS2_INSN_MACRO_MOV	0x80000001
> @@ -207,124 +207,124 @@ struct nios2_reg
>  #define _NIOS2R1_H_
>
>  /* R1 fields.  */
> -#define IW_R1_OP_LSB 0
> -#define IW_R1_OP_SIZE 6
> -#define IW_R1_OP_UNSHIFTED_MASK (0xffffffffu >> (32 - IW_R1_OP_SIZE))
> -#define IW_R1_OP_SHIFTED_MASK (IW_R1_OP_UNSHIFTED_MASK << IW_R1_OP_LSB)
> -#define GET_IW_R1_OP(W) (((W) >> IW_R1_OP_LSB) & IW_R1_OP_UNSHIFTED_MASK)
> -#define SET_IW_R1_OP(V) (((V) & IW_R1_OP_UNSHIFTED_MASK) << IW_R1_OP_LSB)
> -
> -#define IW_I_A_LSB 27
> -#define IW_I_A_SIZE 5
> -#define IW_I_A_UNSHIFTED_MASK (0xffffffffu >> (32 - IW_I_A_SIZE))
> -#define IW_I_A_SHIFTED_MASK (IW_I_A_UNSHIFTED_MASK << IW_I_A_LSB)
> -#define GET_IW_I_A(W) (((W) >> IW_I_A_LSB) & IW_I_A_UNSHIFTED_MASK)
> -#define SET_IW_I_A(V) (((V) & IW_I_A_UNSHIFTED_MASK) << IW_I_A_LSB)
> -
> -#define IW_I_B_LSB 22
> -#define IW_I_B_SIZE 5
> -#define IW_I_B_UNSHIFTED_MASK (0xffffffffu >> (32 - IW_I_B_SIZE))
> -#define IW_I_B_SHIFTED_MASK (IW_I_B_UNSHIFTED_MASK << IW_I_B_LSB)
> -#define GET_IW_I_B(W) (((W) >> IW_I_B_LSB) & IW_I_B_UNSHIFTED_MASK)
> -#define SET_IW_I_B(V) (((V) & IW_I_B_UNSHIFTED_MASK) << IW_I_B_LSB)
> -
> -#define IW_I_IMM16_LSB 6
> -#define IW_I_IMM16_SIZE 16
> -#define IW_I_IMM16_UNSHIFTED_MASK (0xffffffffu >> (32 - IW_I_IMM16_SIZE))
> -#define IW_I_IMM16_SHIFTED_MASK (IW_I_IMM16_UNSHIFTED_MASK << IW_I_IMM16_LSB)
> -#define GET_IW_I_IMM16(W) (((W) >> IW_I_IMM16_LSB) & IW_I_IMM16_UNSHIFTED_MASK)
> -#define SET_IW_I_IMM16(V) (((V) & IW_I_IMM16_UNSHIFTED_MASK) << IW_I_IMM16_LSB)
> -
> -#define IW_R_A_LSB 27
> -#define IW_R_A_SIZE 5
> -#define IW_R_A_UNSHIFTED_MASK (0xffffffffu >> (32 - IW_R_A_SIZE))
> -#define IW_R_A_SHIFTED_MASK (IW_R_A_UNSHIFTED_MASK << IW_R_A_LSB)
> -#define GET_IW_R_A(W) (((W) >> IW_R_A_LSB) & IW_R_A_UNSHIFTED_MASK)
> -#define SET_IW_R_A(V) (((V) & IW_R_A_UNSHIFTED_MASK) << IW_R_A_LSB)
> -
> -#define IW_R_B_LSB 22
> -#define IW_R_B_SIZE 5
> -#define IW_R_B_UNSHIFTED_MASK (0xffffffffu >> (32 - IW_R_B_SIZE))
> -#define IW_R_B_SHIFTED_MASK (IW_R_B_UNSHIFTED_MASK << IW_R_B_LSB)
> -#define GET_IW_R_B(W) (((W) >> IW_R_B_LSB) & IW_R_B_UNSHIFTED_MASK)
> -#define SET_IW_R_B(V) (((V) & IW_R_B_UNSHIFTED_MASK) << IW_R_B_LSB)
> -
> -#define IW_R_C_LSB 17
> -#define IW_R_C_SIZE 5
> -#define IW_R_C_UNSHIFTED_MASK (0xffffffffu >> (32 - IW_R_C_SIZE))
> -#define IW_R_C_SHIFTED_MASK (IW_R_C_UNSHIFTED_MASK << IW_R_C_LSB)
> -#define GET_IW_R_C(W) (((W) >> IW_R_C_LSB) & IW_R_C_UNSHIFTED_MASK)
> -#define SET_IW_R_C(V) (((V) & IW_R_C_UNSHIFTED_MASK) << IW_R_C_LSB)
> -
> -#define IW_R_OPX_LSB 11
> -#define IW_R_OPX_SIZE 6
> -#define IW_R_OPX_UNSHIFTED_MASK (0xffffffffu >> (32 - IW_R_OPX_SIZE))
> -#define IW_R_OPX_SHIFTED_MASK (IW_R_OPX_UNSHIFTED_MASK << IW_R_OPX_LSB)
> -#define GET_IW_R_OPX(W) (((W) >> IW_R_OPX_LSB) & IW_R_OPX_UNSHIFTED_MASK)
> -#define SET_IW_R_OPX(V) (((V) & IW_R_OPX_UNSHIFTED_MASK) << IW_R_OPX_LSB)
> -
> -#define IW_R_IMM5_LSB 6
> -#define IW_R_IMM5_SIZE 5
> -#define IW_R_IMM5_UNSHIFTED_MASK (0xffffffffu >> (32 - IW_R_IMM5_SIZE))
> -#define IW_R_IMM5_SHIFTED_MASK (IW_R_IMM5_UNSHIFTED_MASK << IW_R_IMM5_LSB)
> -#define GET_IW_R_IMM5(W) (((W) >> IW_R_IMM5_LSB) & IW_R_IMM5_UNSHIFTED_MASK)
> -#define SET_IW_R_IMM5(V) (((V) & IW_R_IMM5_UNSHIFTED_MASK) << IW_R_IMM5_LSB)
> -
> -#define IW_J_IMM26_LSB 6
> -#define IW_J_IMM26_SIZE 26
> -#define IW_J_IMM26_UNSHIFTED_MASK (0xffffffffu >> (32 - IW_J_IMM26_SIZE))
> -#define IW_J_IMM26_SHIFTED_MASK (IW_J_IMM26_UNSHIFTED_MASK << IW_J_IMM26_LSB)
> -#define GET_IW_J_IMM26(W) (((W) >> IW_J_IMM26_LSB) & IW_J_IMM26_UNSHIFTED_MASK)
> -#define SET_IW_J_IMM26(V) (((V) & IW_J_IMM26_UNSHIFTED_MASK) << IW_J_IMM26_LSB)
> -
> -#define IW_CUSTOM_A_LSB 27
> -#define IW_CUSTOM_A_SIZE 5
> -#define IW_CUSTOM_A_UNSHIFTED_MASK (0xffffffffu >> (32 - IW_CUSTOM_A_SIZE))
> -#define IW_CUSTOM_A_SHIFTED_MASK (IW_CUSTOM_A_UNSHIFTED_MASK << IW_CUSTOM_A_LSB)
> -#define GET_IW_CUSTOM_A(W) (((W) >> IW_CUSTOM_A_LSB) & IW_CUSTOM_A_UNSHIFTED_MASK)
> -#define SET_IW_CUSTOM_A(V) (((V) & IW_CUSTOM_A_UNSHIFTED_MASK) << IW_CUSTOM_A_LSB)
> -
> -#define IW_CUSTOM_B_LSB 22
> -#define IW_CUSTOM_B_SIZE 5
> -#define IW_CUSTOM_B_UNSHIFTED_MASK (0xffffffffu >> (32 - IW_CUSTOM_B_SIZE))
> -#define IW_CUSTOM_B_SHIFTED_MASK (IW_CUSTOM_B_UNSHIFTED_MASK << IW_CUSTOM_B_LSB)
> -#define GET_IW_CUSTOM_B(W) (((W) >> IW_CUSTOM_B_LSB) & IW_CUSTOM_B_UNSHIFTED_MASK)
> -#define SET_IW_CUSTOM_B(V) (((V) & IW_CUSTOM_B_UNSHIFTED_MASK) << IW_CUSTOM_B_LSB)
> -
> -#define IW_CUSTOM_C_LSB 17
> -#define IW_CUSTOM_C_SIZE 5
> -#define IW_CUSTOM_C_UNSHIFTED_MASK (0xffffffffu >> (32 - IW_CUSTOM_C_SIZE))
> -#define IW_CUSTOM_C_SHIFTED_MASK (IW_CUSTOM_C_UNSHIFTED_MASK << IW_CUSTOM_C_LSB)
> -#define GET_IW_CUSTOM_C(W) (((W) >> IW_CUSTOM_C_LSB) & IW_CUSTOM_C_UNSHIFTED_MASK)
> -#define SET_IW_CUSTOM_C(V) (((V) & IW_CUSTOM_C_UNSHIFTED_MASK) << IW_CUSTOM_C_LSB)
> -
> -#define IW_CUSTOM_READA_LSB 16
> -#define IW_CUSTOM_READA_SIZE 1
> -#define IW_CUSTOM_READA_UNSHIFTED_MASK (0xffffffffu >> (32 - IW_CUSTOM_READA_SIZE))
> -#define IW_CUSTOM_READA_SHIFTED_MASK (IW_CUSTOM_READA_UNSHIFTED_MASK << IW_CUSTOM_READA_LSB)
> -#define GET_IW_CUSTOM_READA(W) (((W) >> IW_CUSTOM_READA_LSB) & IW_CUSTOM_READA_UNSHIFTED_MASK)
> -#define SET_IW_CUSTOM_READA(V) (((V) & IW_CUSTOM_READA_UNSHIFTED_MASK) << IW_CUSTOM_READA_LSB)
> -
> -#define IW_CUSTOM_READB_LSB 15
> -#define IW_CUSTOM_READB_SIZE 1
> -#define IW_CUSTOM_READB_UNSHIFTED_MASK (0xffffffffu >> (32 - IW_CUSTOM_READB_SIZE))
> -#define IW_CUSTOM_READB_SHIFTED_MASK (IW_CUSTOM_READB_UNSHIFTED_MASK << IW_CUSTOM_READB_LSB)
> -#define GET_IW_CUSTOM_READB(W) (((W) >> IW_CUSTOM_READB_LSB) & IW_CUSTOM_READB_UNSHIFTED_MASK)
> -#define SET_IW_CUSTOM_READB(V) (((V) & IW_CUSTOM_READB_UNSHIFTED_MASK) << IW_CUSTOM_READB_LSB)
> -
> -#define IW_CUSTOM_READC_LSB 14
> -#define IW_CUSTOM_READC_SIZE 1
> -#define IW_CUSTOM_READC_UNSHIFTED_MASK (0xffffffffu >> (32 - IW_CUSTOM_READC_SIZE))
> -#define IW_CUSTOM_READC_SHIFTED_MASK (IW_CUSTOM_READC_UNSHIFTED_MASK << IW_CUSTOM_READC_LSB)
> -#define GET_IW_CUSTOM_READC(W) (((W) >> IW_CUSTOM_READC_LSB) & IW_CUSTOM_READC_UNSHIFTED_MASK)
> -#define SET_IW_CUSTOM_READC(V) (((V) & IW_CUSTOM_READC_UNSHIFTED_MASK) << IW_CUSTOM_READC_LSB)
> -
> -#define IW_CUSTOM_N_LSB 6
> -#define IW_CUSTOM_N_SIZE 8
> -#define IW_CUSTOM_N_UNSHIFTED_MASK (0xffffffffu >> (32 - IW_CUSTOM_N_SIZE))
> -#define IW_CUSTOM_N_SHIFTED_MASK (IW_CUSTOM_N_UNSHIFTED_MASK << IW_CUSTOM_N_LSB)
> -#define GET_IW_CUSTOM_N(W) (((W) >> IW_CUSTOM_N_LSB) & IW_CUSTOM_N_UNSHIFTED_MASK)
> -#define SET_IW_CUSTOM_N(V) (((V) & IW_CUSTOM_N_UNSHIFTED_MASK) << IW_CUSTOM_N_LSB)
> +#define IW_R1_OP_LSB 0
> +#define IW_R1_OP_SIZE 6
> +#define IW_R1_OP_UNSHIFTED_MASK (0xffffffffu >> (32 - IW_R1_OP_SIZE))
> +#define IW_R1_OP_SHIFTED_MASK (IW_R1_OP_UNSHIFTED_MASK << IW_R1_OP_LSB)
> +#define GET_IW_R1_OP(W) (((W) >> IW_R1_OP_LSB) & IW_R1_OP_UNSHIFTED_MASK)
> +#define SET_IW_R1_OP(V) (((V) & IW_R1_OP_UNSHIFTED_MASK) << IW_R1_OP_LSB)
> +
> +#define IW_I_A_LSB 27
> +#define IW_I_A_SIZE 5
> +#define IW_I_A_UNSHIFTED_MASK (0xffffffffu >> (32 - IW_I_A_SIZE))
> +#define IW_I_A_SHIFTED_MASK (IW_I_A_UNSHIFTED_MASK << IW_I_A_LSB)
> +#define GET_IW_I_A(W) (((W) >> IW_I_A_LSB) & IW_I_A_UNSHIFTED_MASK)
> +#define SET_IW_I_A(V) (((V) & IW_I_A_UNSHIFTED_MASK) << IW_I_A_LSB)
> +
> +#define IW_I_B_LSB 22
> +#define IW_I_B_SIZE 5
> +#define IW_I_B_UNSHIFTED_MASK (0xffffffffu >> (32 - IW_I_B_SIZE))
> +#define IW_I_B_SHIFTED_MASK (IW_I_B_UNSHIFTED_MASK << IW_I_B_LSB)
> +#define GET_IW_I_B(W) (((W) >> IW_I_B_LSB) & IW_I_B_UNSHIFTED_MASK)
> +#define SET_IW_I_B(V) (((V) & IW_I_B_UNSHIFTED_MASK) << IW_I_B_LSB)
> +
> +#define IW_I_IMM16_LSB 6
> +#define IW_I_IMM16_SIZE 16
> +#define IW_I_IMM16_UNSHIFTED_MASK (0xffffffffu >> (32 - IW_I_IMM16_SIZE))
> +#define IW_I_IMM16_SHIFTED_MASK (IW_I_IMM16_UNSHIFTED_MASK << IW_I_IMM16_LSB)
> +#define GET_IW_I_IMM16(W) (((W) >> IW_I_IMM16_LSB) & IW_I_IMM16_UNSHIFTED_MASK)
> +#define SET_IW_I_IMM16(V) (((V) & IW_I_IMM16_UNSHIFTED_MASK) << IW_I_IMM16_LSB)
> +
> +#define IW_R_A_LSB 27
> +#define IW_R_A_SIZE 5
> +#define IW_R_A_UNSHIFTED_MASK (0xffffffffu >> (32 - IW_R_A_SIZE))
> +#define IW_R_A_SHIFTED_MASK (IW_R_A_UNSHIFTED_MASK << IW_R_A_LSB)
> +#define GET_IW_R_A(W) (((W) >> IW_R_A_LSB) & IW_R_A_UNSHIFTED_MASK)
> +#define SET_IW_R_A(V) (((V) & IW_R_A_UNSHIFTED_MASK) << IW_R_A_LSB)
> +
> +#define IW_R_B_LSB 22
> +#define IW_R_B_SIZE 5
> +#define IW_R_B_UNSHIFTED_MASK (0xffffffffu >> (32 - IW_R_B_SIZE))
> +#define IW_R_B_SHIFTED_MASK (IW_R_B_UNSHIFTED_MASK << IW_R_B_LSB)
> +#define GET_IW_R_B(W) (((W) >> IW_R_B_LSB) & IW_R_B_UNSHIFTED_MASK)
> +#define SET_IW_R_B(V) (((V) & IW_R_B_UNSHIFTED_MASK) << IW_R_B_LSB)
> +
> +#define IW_R_C_LSB 17
> +#define IW_R_C_SIZE 5
> +#define IW_R_C_UNSHIFTED_MASK (0xffffffffu >> (32 - IW_R_C_SIZE))
> +#define IW_R_C_SHIFTED_MASK (IW_R_C_UNSHIFTED_MASK << IW_R_C_LSB)
> +#define GET_IW_R_C(W) (((W) >> IW_R_C_LSB) & IW_R_C_UNSHIFTED_MASK)
> +#define SET_IW_R_C(V) (((V) & IW_R_C_UNSHIFTED_MASK) << IW_R_C_LSB)
> +
> +#define IW_R_OPX_LSB 11
> +#define IW_R_OPX_SIZE 6
> +#define IW_R_OPX_UNSHIFTED_MASK (0xffffffffu >> (32 - IW_R_OPX_SIZE))
> +#define IW_R_OPX_SHIFTED_MASK (IW_R_OPX_UNSHIFTED_MASK << IW_R_OPX_LSB)
> +#define GET_IW_R_OPX(W) (((W) >> IW_R_OPX_LSB) & IW_R_OPX_UNSHIFTED_MASK)
> +#define SET_IW_R_OPX(V) (((V) & IW_R_OPX_UNSHIFTED_MASK) << IW_R_OPX_LSB)
> +
> +#define IW_R_IMM5_LSB 6
> +#define IW_R_IMM5_SIZE 5
> +#define IW_R_IMM5_UNSHIFTED_MASK (0xffffffffu >> (32 - IW_R_IMM5_SIZE))
> +#define IW_R_IMM5_SHIFTED_MASK (IW_R_IMM5_UNSHIFTED_MASK << IW_R_IMM5_LSB)
> +#define GET_IW_R_IMM5(W) (((W) >> IW_R_IMM5_LSB) & IW_R_IMM5_UNSHIFTED_MASK)
> +#define SET_IW_R_IMM5(V) (((V) & IW_R_IMM5_UNSHIFTED_MASK) << IW_R_IMM5_LSB)
> +
> +#define IW_J_IMM26_LSB 6
> +#define IW_J_IMM26_SIZE 26
> +#define IW_J_IMM26_UNSHIFTED_MASK (0xffffffffu >> (32 - IW_J_IMM26_SIZE))
> +#define IW_J_IMM26_SHIFTED_MASK (IW_J_IMM26_UNSHIFTED_MASK << IW_J_IMM26_LSB)
> +#define GET_IW_J_IMM26(W) (((W) >> IW_J_IMM26_LSB) & IW_J_IMM26_UNSHIFTED_MASK)
> +#define SET_IW_J_IMM26(V) (((V) & IW_J_IMM26_UNSHIFTED_MASK) << IW_J_IMM26_LSB)
> +
> +#define IW_CUSTOM_A_LSB 27
> +#define IW_CUSTOM_A_SIZE 5
> +#define IW_CUSTOM_A_UNSHIFTED_MASK (0xffffffffu >> (32 - IW_CUSTOM_A_SIZE))
> +#define IW_CUSTOM_A_SHIFTED_MASK (IW_CUSTOM_A_UNSHIFTED_MASK << IW_CUSTOM_A_LSB)
> +#define GET_IW_CUSTOM_A(W) (((W) >> IW_CUSTOM_A_LSB) & IW_CUSTOM_A_UNSHIFTED_MASK)
> +#define SET_IW_CUSTOM_A(V) (((V) & IW_CUSTOM_A_UNSHIFTED_MASK) << IW_CUSTOM_A_LSB)
> +
> +#define IW_CUSTOM_B_LSB 22
> +#define IW_CUSTOM_B_SIZE 5
> +#define IW_CUSTOM_B_UNSHIFTED_MASK (0xffffffffu >> (32 - IW_CUSTOM_B_SIZE))
> +#define IW_CUSTOM_B_SHIFTED_MASK (IW_CUSTOM_B_UNSHIFTED_MASK << IW_CUSTOM_B_LSB)
> +#define GET_IW_CUSTOM_B(W) (((W) >> IW_CUSTOM_B_LSB) & IW_CUSTOM_B_UNSHIFTED_MASK)
> +#define SET_IW_CUSTOM_B(V) (((V) & IW_CUSTOM_B_UNSHIFTED_MASK) << IW_CUSTOM_B_LSB)
> +
> +#define IW_CUSTOM_C_LSB 17
> +#define IW_CUSTOM_C_SIZE 5
> +#define IW_CUSTOM_C_UNSHIFTED_MASK (0xffffffffu >> (32 - IW_CUSTOM_C_SIZE))
> +#define IW_CUSTOM_C_SHIFTED_MASK (IW_CUSTOM_C_UNSHIFTED_MASK << IW_CUSTOM_C_LSB)
> +#define GET_IW_CUSTOM_C(W) (((W) >> IW_CUSTOM_C_LSB) & IW_CUSTOM_C_UNSHIFTED_MASK)
> +#define SET_IW_CUSTOM_C(V) (((V) & IW_CUSTOM_C_UNSHIFTED_MASK) << IW_CUSTOM_C_LSB)
> +
> +#define IW_CUSTOM_READA_LSB 16
> +#define IW_CUSTOM_READA_SIZE 1
> +#define IW_CUSTOM_READA_UNSHIFTED_MASK (0xffffffffu >> (32 - IW_CUSTOM_READA_SIZE))
> +#define IW_CUSTOM_READA_SHIFTED_MASK (IW_CUSTOM_READA_UNSHIFTED_MASK << IW_CUSTOM_READA_LSB)
> +#define GET_IW_CUSTOM_READA(W) (((W) >> IW_CUSTOM_READA_LSB) & IW_CUSTOM_READA_UNSHIFTED_MASK)
> +#define SET_IW_CUSTOM_READA(V) (((V) & IW_CUSTOM_READA_UNSHIFTED_MASK) << IW_CUSTOM_READA_LSB)
> +
> +#define IW_CUSTOM_READB_LSB 15
> +#define IW_CUSTOM_READB_SIZE 1
> +#define IW_CUSTOM_READB_UNSHIFTED_MASK (0xffffffffu >> (32 - IW_CUSTOM_READB_SIZE))
> +#define IW_CUSTOM_READB_SHIFTED_MASK (IW_CUSTOM_READB_UNSHIFTED_MASK << IW_CUSTOM_READB_LSB)
> +#define GET_IW_CUSTOM_READB(W) (((W) >> IW_CUSTOM_READB_LSB) & IW_CUSTOM_READB_UNSHIFTED_MASK)
> +#define SET_IW_CUSTOM_READB(V) (((V) & IW_CUSTOM_READB_UNSHIFTED_MASK) << IW_CUSTOM_READB_LSB)
> +
> +#define IW_CUSTOM_READC_LSB 14
> +#define IW_CUSTOM_READC_SIZE 1
> +#define IW_CUSTOM_READC_UNSHIFTED_MASK (0xffffffffu >> (32 - IW_CUSTOM_READC_SIZE))
> +#define IW_CUSTOM_READC_SHIFTED_MASK (IW_CUSTOM_READC_UNSHIFTED_MASK << IW_CUSTOM_READC_LSB)
> +#define GET_IW_CUSTOM_READC(W) (((W) >> IW_CUSTOM_READC_LSB) & IW_CUSTOM_READC_UNSHIFTED_MASK)
> +#define SET_IW_CUSTOM_READC(V) (((V) & IW_CUSTOM_READC_UNSHIFTED_MASK) << IW_CUSTOM_READC_LSB)
> +
> +#define IW_CUSTOM_N_LSB 6
> +#define IW_CUSTOM_N_SIZE 8
> +#define IW_CUSTOM_N_UNSHIFTED_MASK (0xffffffffu >> (32 - IW_CUSTOM_N_SIZE))
> +#define IW_CUSTOM_N_SHIFTED_MASK (IW_CUSTOM_N_UNSHIFTED_MASK << IW_CUSTOM_N_LSB)
> +#define GET_IW_CUSTOM_N(W) (((W) >> IW_CUSTOM_N_LSB) & IW_CUSTOM_N_UNSHIFTED_MASK)
> +#define SET_IW_CUSTOM_N(V) (((V) & IW_CUSTOM_N_UNSHIFTED_MASK) << IW_CUSTOM_N_LSB)
>
>  /* R1 opcodes.  */
>  #define R1_OP_CALL 0
> diff --git a/hmp-commands.hx b/hmp-commands.hx
> index 60f395c276..d548a3ab74 100644
> --- a/hmp-commands.hx
> +++ b/hmp-commands.hx
> @@ -1120,7 +1120,7 @@ ERST
>
>  SRST
>  ``dump-guest-memory [-p]`` *filename* *begin* *length*
> -  \
> +  \
>  ``dump-guest-memory [-z|-l|-s|-w]`` *filename*
>    Dump guest memory to *protocol*. The file can be processed with crash or
>    gdb. Without ``-z|-l|-s|-w``, the dump format is ELF.
> diff --git a/hw/alpha/typhoon.c b/hw/alpha/typhoon.c
> index 29d44dfb06..57c7cf0bd3 100644
> --- a/hw/alpha/typhoon.c
> +++ b/hw/alpha/typhoon.c
> @@ -34,7 +34,7 @@ typedef struct TyphoonWindow {
>      uint64_t wsm;
>      uint64_t tba;
>  } TyphoonWindow;
> -
> +
>  typedef struct TyphoonPchip {
>      MemoryRegion region;
>      MemoryRegion reg_iack;
> @@ -189,7 +189,7 @@ static MemTxResult cchip_read(void *opaque, hwaddr addr,
>      case 0x0780:
>          /* PWR: Power Management Control.   */
>          break;
> -
> +
>      case 0x0c00: /* CMONCTLA */
>      case 0x0c40: /* CMONCTLB */
>      case 0x0c80: /* CMONCNT01 */
> @@ -441,7 +441,7 @@ static MemTxResult cchip_write(void *opaque, hwaddr addr,
>      case 0x0780:
>          /* PWR: Power Management Control.   */
>          break;
> -
> +
>      case 0x0c00: /* CMONCTLA */
>      case 0x0c40: /* CMONCTLB */
>      case 0x0c80: /* CMONCNT01 */
> diff --git a/hw/arm/gumstix.c b/hw/arm/gumstix.c
> index 3a4bc332c4..3fdef425ab 100644
> --- a/hw/arm/gumstix.c
> +++ b/hw/arm/gumstix.c
> @@ -10,10 +10,10 @@
>   * Contributions after 2012-01-13 are licensed under the terms of the
>   * GNU GPL, version 2 or (at your option) any later version.
>   */
> -
> -/*
> +
> +/*
>   * Example usage:
> - *
> + *
>   * connex:
>   * =======
>   * create image:
> diff --git a/hw/arm/omap1.c b/hw/arm/omap1.c
> index 6ba0df6b6d..82e60e3b30 100644
> --- a/hw/arm/omap1.c
> +++ b/hw/arm/omap1.c
> @@ -2914,7 +2914,7 @@ static void omap_rtc_tick(void *opaque)
>
>      /*
>       * Every full hour add a rough approximation of the compensation
> -     * register to the 32kHz Timer (which drives the RTC) value.
> +     * register to the 32kHz Timer (which drives the RTC) value.
>       */
>      if (s->auto_comp && !s->current_tm.tm_sec && !s->current_tm.tm_min)
>          s->tick += s->comp_reg * 1000 / 32768;
> diff --git a/hw/arm/stellaris.c b/hw/arm/stellaris.c
> index 97ef566c12..7089a534d4 100644
> --- a/hw/arm/stellaris.c
> +++ b/hw/arm/stellaris.c
> @@ -979,7 +979,7 @@ static void stellaris_adc_fifo_write(stellaris_adc_state *s, int n,
>  {
>      int head;
>
> -    /* TODO: Real hardware has limited size FIFOs.  We have a full 16 entry
> +    /* TODO: Real hardware has limited size FIFOs.  We have a full 16 entry
>         FIFO fir each sequencer.  */
>      head = (s->fifo[n].state >> 4) & 0xf;
>      if (s->fifo[n].state & STELLARIS_ADC_FIFO_FULL) {
> diff --git a/hw/char/etraxfs_ser.c b/hw/char/etraxfs_ser.c
> index 947bdb649a..85f6523efe 100644
> --- a/hw/char/etraxfs_ser.c
> +++ b/hw/char/etraxfs_ser.c
> @@ -180,7 +180,7 @@ static void serial_receive(void *opaque, const uint8_t *buf, int size)
>          return;
>      }
>
> -    for (i = 0; i < size; i++) {
> +    for (i = 0; i < size; i++) {
>          s->rx_fifo[s->rx_fifo_pos] = buf[i];
>          s->rx_fifo_pos++;
>          s->rx_fifo_pos &= 15;
> diff --git a/hw/core/ptimer.c b/hw/core/ptimer.c
> index b5a54e2536..f08c3c33a7 100644
> --- a/hw/core/ptimer.c
> +++ b/hw/core/ptimer.c
> @@ -246,7 +246,7 @@ uint64_t ptimer_get_count(ptimer_state *s)
>              } else {
>                  if (shift != 0)
>                      div |= (period_frac >> (32 - shift));
> -                /* Look at remaining bits of period_frac and round div up if
> +                /* Look at remaining bits of period_frac and round div up if
>                     necessary.  */
>                  if ((uint32_t)(period_frac << shift))
>                      div += 1;
> diff --git a/hw/cris/axis_dev88.c b/hw/cris/axis_dev88.c
> index dab7423c73..adeed30638 100644
> --- a/hw/cris/axis_dev88.c
> +++ b/hw/cris/axis_dev88.c
> @@ -267,7 +267,7 @@ void axisdev88_init(MachineState *machine)
>
>      memory_region_add_subregion(address_space_mem, 0x40000000, machine->ram);
>
> -    /* The ETRAX-FS has 128Kb on chip ram, the docs refer to it as the
> +    /* The ETRAX-FS has 128Kb on chip ram, the docs refer to it as the
>         internal memory.  */
>      memory_region_init_ram(phys_intmem, NULL, "axisdev88.chipram",
>                             INTMEM_SIZE, &error_fatal);
> diff --git a/hw/cris/boot.c b/hw/cris/boot.c
> index b8947bc660..06a440431a 100644
> --- a/hw/cris/boot.c
> +++ b/hw/cris/boot.c
> @@ -72,7 +72,7 @@ void cris_load_image(CRISCPU *cpu, struct cris_load_info *li)
>      int image_size;
>
>      env->load_info = li;
> -    /* Boots a kernel elf binary, os/linux-2.6/vmlinux from the axis
> +    /* Boots a kernel elf binary, os/linux-2.6/vmlinux from the axis
>         devboard SDK.  */
>      image_size = load_elf(li->image_filename, NULL,
>                            translate_kernel_address, NULL,
> diff --git a/hw/display/qxl.c b/hw/display/qxl.c
> index d5627119ec..28caf878cd 100644
> --- a/hw/display/qxl.c
> +++ b/hw/display/qxl.c
> @@ -51,7 +51,7 @@
>  #undef ALIGN
>  #define ALIGN(a, b) (((a) + ((b) - 1)) & ~((b) - 1))
>
> -#define PIXEL_SIZE 0.2936875 //1280x1024 is 14.8" x 11.9"
> +#define PIXEL_SIZE 0.2936875 //1280x1024 is 14.8" x 11.9"
>
>  #define QXL_MODE(_x, _y, _b, _o)                  \
>      {   .x_res = _x,                              \
> diff --git a/hw/dma/etraxfs_dma.c b/hw/dma/etraxfs_dma.c
> index c4334e87bf..20173330a0 100644
> --- a/hw/dma/etraxfs_dma.c
> +++ b/hw/dma/etraxfs_dma.c
> @@ -322,12 +322,12 @@ static inline void channel_start(struct fs_dma_ctrl *ctrl, int c)
>
>  static void channel_continue(struct fs_dma_ctrl *ctrl, int c)
>  {
> -	if (!channel_en(ctrl, c)
> +	if (!channel_en(ctrl, c)
>  	    || channel_stopped(ctrl, c)
>  	    || ctrl->channels[c].state != RUNNING
>  	    /* Only reload the current data descriptor if it has eol set.  */
>  	    || !ctrl->channels[c].current_d.eol) {
> -		D(printf("continue failed ch=%d state=%d stopped=%d en=%d eol=%d\n",
> +		D(printf("continue failed ch=%d state=%d stopped=%d en=%d eol=%d\n",
>  			 c, ctrl->channels[c].state,
>  			 channel_stopped(ctrl, c),
>  			 channel_en(ctrl,c),
> @@ -383,7 +383,7 @@ static void channel_update_irq(struct fs_dma_ctrl *ctrl, int c)
>  		ctrl->channels[c].regs[R_INTR]
>  		& ctrl->channels[c].regs[RW_INTR_MASK];
>
> -	D(printf("%s: chan=%d masked_intr=%x\n", __func__,
> +	D(printf("%s: chan=%d masked_intr=%x\n", __func__,
>  		 c,
>  		 ctrl->channels[c].regs[R_MASKED_INTR]));
>
> @@ -492,7 +492,7 @@ static int channel_out_run(struct fs_dma_ctrl *ctrl, int c)
>  	return 1;
>  }
>
> -static int channel_in_process(struct fs_dma_ctrl *ctrl, int c,
> +static int channel_in_process(struct fs_dma_ctrl *ctrl, int c,
>  			      unsigned char *buf, int buflen, int eop)
>  {
>  	uint32_t len;
> @@ -517,7 +517,7 @@ static int channel_in_process(struct fs_dma_ctrl *ctrl, int c,
>  	    || eop) {
>  		uint32_t r_intr = ctrl->channels[c].regs[R_INTR];
>
> -		D(printf("in dscr end len=%d\n",
> +		D(printf("in dscr end len=%d\n",
>  			 ctrl->channels[c].current_d.after
>  			 - ctrl->channels[c].current_d.buf));
>  		ctrl->channels[c].current_d.after = saved_data_buf;
> @@ -708,7 +708,7 @@ static int etraxfs_dmac_run(void *opaque)
>  	int i;
>  	int p = 0;
>
> -	for (i = 0;
> +	for (i = 0;
>  	     i < ctrl->nr_channels;
>  	     i++)
>  	{
> @@ -724,10 +724,10 @@ static int etraxfs_dmac_run(void *opaque)
>  	return p;
>  }
>
> -int etraxfs_dmac_input(struct etraxfs_dma_client *client,
> +int etraxfs_dmac_input(struct etraxfs_dma_client *client,
>  		       void *buf, int len, int eop)
>  {
> -	return channel_in_process(client->ctrl, client->channel,
> +	return channel_in_process(client->ctrl, client->channel,
>  				  buf, len, eop);
>  }
>
> @@ -739,7 +739,7 @@ void etraxfs_dmac_connect(void *opaque, int c, qemu_irq *line, int input)
>  	ctrl->channels[c].input = input;
>  }
>
> -void etraxfs_dmac_connect_client(void *opaque, int c,
> +void etraxfs_dmac_connect_client(void *opaque, int c,
>  				 struct etraxfs_dma_client *cl)
>  {
>  	struct fs_dma_ctrl *ctrl = opaque;
> diff --git a/hw/dma/i82374.c b/hw/dma/i82374.c
> index 6977d85ef8..0db27628d5 100644
> --- a/hw/dma/i82374.c
> +++ b/hw/dma/i82374.c
> @@ -146,7 +146,7 @@ static Property i82374_properties[] = {
>  static void i82374_class_init(ObjectClass *klass, void *data)
>  {
>      DeviceClass *dc = DEVICE_CLASS(klass);
> -
> +
>      dc->realize = i82374_realize;
>      dc->vmsd = &vmstate_i82374;
>      device_class_set_props(dc, i82374_properties);
> diff --git a/hw/i2c/bitbang_i2c.c b/hw/i2c/bitbang_i2c.c
> index b000952b98..425b0ed69e 100644
> --- a/hw/i2c/bitbang_i2c.c
> +++ b/hw/i2c/bitbang_i2c.c
> @@ -95,7 +95,7 @@ int bitbang_i2c_set(bitbang_i2c_interface *i2c, int line, int level)
>      case SENDING_BIT7 ... SENDING_BIT0:
>          i2c->buffer = (i2c->buffer << 1) | data;
>          /* will end up in WAITING_FOR_ACK */
> -        i2c->state++;
> +        i2c->state++;
>          return bitbang_i2c_ret(i2c, 1);
>
>      case WAITING_FOR_ACK:
> diff --git a/hw/input/tsc2005.c b/hw/input/tsc2005.c
> index 55d61cc843..df07476c3e 100644
> --- a/hw/input/tsc2005.c
> +++ b/hw/input/tsc2005.c
> @@ -169,7 +169,7 @@ static uint16_t tsc2005_read(TSC2005State *s, int reg)
>
>      case 0xc:	/* CFR0 */
>          return (s->pressure << 15) | ((!s->busy) << 14) |
> -                (s->nextprecision << 13) | s->timing[0];
> +                (s->nextprecision << 13) | s->timing[0];
>      case 0xd:	/* CFR1 */
>          return s->timing[1];
>      case 0xe:	/* CFR2 */
> diff --git a/hw/input/tsc210x.c b/hw/input/tsc210x.c
> index 182d3725fc..610b3fca59 100644
> --- a/hw/input/tsc210x.c
> +++ b/hw/input/tsc210x.c
> @@ -412,7 +412,7 @@ static uint16_t tsc2102_control_register_read(
>      switch (reg) {
>      case 0x00:	/* TSC ADC */
>          return (s->pressure << 15) | ((!s->busy) << 14) |
> -                (s->nextfunction << 10) | (s->nextprecision << 8) | s->filter;
> +                (s->nextfunction << 10) | (s->nextprecision << 8) | s->filter;
>
>      case 0x01:	/* Status / Keypad Control */
>          if ((s->model & 0xff00) == 0x2100)
> diff --git a/hw/intc/etraxfs_pic.c b/hw/intc/etraxfs_pic.c
> index 12988c7aa9..9f9377798d 100644
> --- a/hw/intc/etraxfs_pic.c
> +++ b/hw/intc/etraxfs_pic.c
> @@ -52,15 +52,15 @@ struct etrax_pic
>  };
> =20
>  static void pic_update(struct etrax_pic *fs)
> -{  =20
> +{
>      uint32_t vector =3D 0;
>      int i;
> =20
>      fs->regs[R_R_MASKED_VECT] =3D fs->regs[R_R_VECT] & fs->regs[R_RW_MAS=
K];
> =20
>      /* The ETRAX interrupt controller signals interrupts to the core
> -       through an interrupt request wire and an irq vector bus. If=20
> -       multiple interrupts are simultaneously active it chooses vector=
=20
> +       through an interrupt request wire and an irq vector bus. If
> +       multiple interrupts are simultaneously active it chooses vector
>         0x30 and lets the sw choose the priorities.  */
>      if (fs->regs[R_R_MASKED_VECT]) {
>          uint32_t mv =3D fs->regs[R_R_MASKED_VECT];
> @@ -113,7 +113,7 @@ static const MemoryRegionOps pic_ops =3D {
>  };
> =20
>  static void nmi_handler(void *opaque, int irq, int level)
> -{  =20
> +{
>      struct etrax_pic *fs =3D (void *)opaque;
>      uint32_t mask;
> =20
> diff --git a/hw/intc/sh_intc.c b/hw/intc/sh_intc.c
> index 72a55e32dd..4c6e4b89a1 100644
> --- a/hw/intc/sh_intc.c
> +++ b/hw/intc/sh_intc.c
> @@ -236,7 +236,7 @@ static uint64_t sh_intc_read(void *opaque, hwaddr offset,
>      printf("sh_intc_read 0x%lx\n", (unsigned long) offset);
>  #endif
> 
> -    sh_intc_locate(desc, (unsigned long)offset, &valuep, 
> +    sh_intc_locate(desc, (unsigned long)offset, &valuep,
>  		   &enum_ids, &first, &width, &mode);
>      return *valuep;
>  }
> @@ -257,7 +257,7 @@ static void sh_intc_write(void *opaque, hwaddr offset,
>      printf("sh_intc_write 0x%lx 0x%08x\n", (unsigned long) offset, value);
>  #endif
> 
> -    sh_intc_locate(desc, (unsigned long)offset, &valuep, 
> +    sh_intc_locate(desc, (unsigned long)offset, &valuep,
>  		   &enum_ids, &first, &width, &mode);
> 
>      switch (mode) {
> @@ -273,7 +273,7 @@ static void sh_intc_write(void *opaque, hwaddr offset,
>  	if ((*valuep & mask) == (value & mask))
>              continue;
>  #if 0
> -	printf("k = %d, first = %d, enum = %d, mask = 0x%08x\n", 
> +	printf("k = %d, first = %d, enum = %d, mask = 0x%08x\n",
>  	       k, first, enum_ids[k], (unsigned int)mask);
>  #endif
>          sh_intc_toggle_mask(desc, enum_ids[k], value & mask, 0);
> @@ -466,7 +466,7 @@ int sh_intc_init(MemoryRegion *sysmem,
>      }
> 
>      desc->irqs = qemu_allocate_irqs(sh_intc_set_irq, desc, nr_sources);
> - 
> +
>      memory_region_init_io(&desc->iomem, NULL, &sh_intc_ops, desc,
>                            "interrupt-controller", 0x100000000ULL);
> 
> @@ -498,7 +498,7 @@ int sh_intc_init(MemoryRegion *sysmem,
>      return 0;
>  }
> 
> -/* Assert level <n> IRL interrupt. 
> +/* Assert level <n> IRL interrupt.
>     0:deassert. 1:lowest priority,... 15:highest priority. */
>  void sh_intc_set_irl(void *opaque, int n, int level)
>  {
> diff --git a/hw/intc/xilinx_intc.c b/hw/intc/xilinx_intc.c
> index 3e65e68619..dfc049de92 100644
> --- a/hw/intc/xilinx_intc.c
> +++ b/hw/intc/xilinx_intc.c
> @@ -113,7 +113,7 @@ pic_write(void *opaque, hwaddr addr,
> 
>      addr >>= 2;
>      D(qemu_log("%s addr=%x val=%x\n", __func__, addr * 4, value));
> -    switch (addr) 
> +    switch (addr)
>      {
>          case R_IAR:
>              p->regs[R_ISR] &= ~value; /* ACK.  */
> diff --git a/hw/misc/imx25_ccm.c b/hw/misc/imx25_ccm.c
> index d3107e5ca2..83dd09a9bc 100644
> --- a/hw/misc/imx25_ccm.c
> +++ b/hw/misc/imx25_ccm.c
> @@ -200,9 +200,9 @@ static void imx25_ccm_reset(DeviceState *dev)
>      memset(s->reg, 0, IMX25_CCM_MAX_REG * sizeof(uint32_t));
>      s->reg[IMX25_CCM_MPCTL_REG] = 0x800b2c01;
>      s->reg[IMX25_CCM_UPCTL_REG] = 0x84042800;
> -    /* 
> +    /*
>       * The value below gives:
> -     * CPU = 133 MHz, AHB = 66,5 MHz, IPG = 33 MHz. 
> +     * CPU = 133 MHz, AHB = 66,5 MHz, IPG = 33 MHz.
>       */
>      s->reg[IMX25_CCM_CCTL_REG]  = 0xd0030000;
>      s->reg[IMX25_CCM_CGCR0_REG] = 0x028A0100;
> @@ -219,7 +219,7 @@ static void imx25_ccm_reset(DeviceState *dev)
> 
>      /*
>       * default boot will change the reset values to allow:
> -     * CPU = 399 MHz, AHB = 133 MHz, IPG = 66,5 MHz. 
> +     * CPU = 399 MHz, AHB = 133 MHz, IPG = 66,5 MHz.
>       * For some reason, this doesn't work. With the value below, linux
>       * detects a 88 MHz IPG CLK instead of 66,5 MHz.
>      s->reg[IMX25_CCM_CCTL_REG]  = 0x20032000;
> diff --git a/hw/misc/imx31_ccm.c b/hw/misc/imx31_ccm.c
> index 6e246827ab..8da2757cbe 100644
> --- a/hw/misc/imx31_ccm.c
> +++ b/hw/misc/imx31_ccm.c
> @@ -115,7 +115,7 @@ static uint32_t imx31_ccm_get_pll_ref_clk(IMXCCMState *dev)
>              if (s->reg[IMX31_CCM_CCMR_REG] & CCMR_FPMF) {
>                  freq *= 1024;
>              }
> -        } 
> +        }
>      } else {
>          freq = CKIH_FREQ;
>      }
> diff --git a/hw/net/vmxnet3.h b/hw/net/vmxnet3.h
> index 5b3b76ba7a..020bf70afd 100644
> --- a/hw/net/vmxnet3.h
> +++ b/hw/net/vmxnet3.h
> @@ -246,7 +246,7 @@ struct Vmxnet3_TxDesc {
>          };
>          u32 val1;
>      };
> -    
> +
>      union {
>          struct {
>  #ifdef __BIG_ENDIAN_BITFIELD
> diff --git a/hw/net/xilinx_ethlite.c b/hw/net/xilinx_ethlite.c
> index 71d16fef3d..0703f9e444 100644
> --- a/hw/net/xilinx_ethlite.c
> +++ b/hw/net/xilinx_ethlite.c
> @@ -117,7 +117,7 @@ eth_write(void *opaque, hwaddr addr,
>      uint32_t value = val64;
> 
>      addr >>= 2;
> -    switch (addr) 
> +    switch (addr)
>      {
>          case R_TX_CTRL0:
>          case R_TX_CTRL1:
> diff --git a/hw/pci/pcie.c b/hw/pci/pcie.c
> index 5b48bae0f6..4692d9b5a3 100644
> --- a/hw/pci/pcie.c
> +++ b/hw/pci/pcie.c
> @@ -705,7 +705,7 @@ void pcie_cap_slot_write_config(PCIDevice *dev,
> 
>      hotplug_event_notify(dev);
> 
> -    /* 
> +    /*
>       * 6.7.3.2 Command Completed Events
>       *
>       * Software issues a command to a hot-plug capable Downstream Port by
> diff --git a/hw/sd/omap_mmc.c b/hw/sd/omap_mmc.c
> index 4088a8a80b..7c6f179578 100644
> --- a/hw/sd/omap_mmc.c
> +++ b/hw/sd/omap_mmc.c
> @@ -342,7 +342,7 @@ static uint64_t omap_mmc_read(void *opaque, hwaddr offset,
>          return s->arg >> 16;
> 
>      case 0x0c:	/* MMC_CON */
> -        return (s->dw << 15) | (s->mode << 12) | (s->enable << 11) | 
> +        return (s->dw << 15) | (s->mode << 12) | (s->enable << 11) |
>                  (s->be << 10) | s->clkdiv;
> 
>      case 0x10:	/* MMC_STAT */
> diff --git a/hw/sh4/shix.c b/hw/sh4/shix.c
> index f410c08883..dddfa8b336 100644
> --- a/hw/sh4/shix.c
> +++ b/hw/sh4/shix.c
> @@ -49,7 +49,7 @@ static void shix_init(MachineState *machine)
>      MemoryRegion *sysmem = get_system_memory();
>      MemoryRegion *rom = g_new(MemoryRegion, 1);
>      MemoryRegion *sdram = g_new(MemoryRegion, 2);
> -    
> +
>      cpu = SUPERH_CPU(cpu_create(machine->cpu_type));
> 
>      /* Allocate memory space */
> diff --git a/hw/sparc64/sun4u.c b/hw/sparc64/sun4u.c
> index 9c8655cffc..e3dd1c67a0 100644
> --- a/hw/sparc64/sun4u.c
> +++ b/hw/sparc64/sun4u.c
> @@ -670,7 +670,7 @@ static void sun4uv_init(MemoryRegion *address_space_mem,
>      s = SYS_BUS_DEVICE(nvram);
>      memory_region_add_subregion(pci_address_space_io(ebus), 0x2000,
>                                  sysbus_mmio_get_region(s, 0));
> - 
> +
>      initrd_size = 0;
>      initrd_addr = 0;
>      kernel_size = sun4u_load_kernel(machine->kernel_filename,
> diff --git a/hw/timer/etraxfs_timer.c b/hw/timer/etraxfs_timer.c
> index afe3d30a8e..797f65b3f4 100644
> --- a/hw/timer/etraxfs_timer.c
> +++ b/hw/timer/etraxfs_timer.c
> @@ -230,7 +230,7 @@ static inline void timer_watchdog_update(ETRAXTimerState *t, uint32_t value)
>      if (wd_en && wd_key != new_key)
>          return;
> 
> -    D(printf("en=%d new_key=%x oldkey=%x cmd=%d cnt=%d\n", 
> +    D(printf("en=%d new_key=%x oldkey=%x cmd=%d cnt=%d\n",
>          wd_en, new_key, wd_key, new_cmd, wd_cnt));
> 
>      if (t->wd_hits)
> diff --git a/hw/timer/xilinx_timer.c b/hw/timer/xilinx_timer.c
> index 0190aa47d0..0901ca7b05 100644
> --- a/hw/timer/xilinx_timer.c
> +++ b/hw/timer/xilinx_timer.c
> @@ -166,7 +166,7 @@ timer_write(void *opaque, hwaddr addr,
>               __func__, addr * 4, value, timer, addr & 3));
>      /* Further decoding to address a specific timers reg.  */
>      addr &= 3;
> -    switch (addr) 
> +    switch (addr)
>      {
>          case R_TCSR:
>              if (value & TCSR_TINT)
> @@ -179,7 +179,7 @@ timer_write(void *opaque, hwaddr addr,
>                  ptimer_transaction_commit(xt->ptimer);
>              }
>              break;
> - 
> +
>          default:
>              if (addr < ARRAY_SIZE(xt->regs))
>                  xt->regs[addr] = value;
> diff --git a/hw/usb/hcd-musb.c b/hw/usb/hcd-musb.c
> index 85f5ff5bd4..f64f47b34f 100644
> --- a/hw/usb/hcd-musb.c
> +++ b/hw/usb/hcd-musb.c
> @@ -33,8 +33,8 @@
> 
>  #define MUSB_HDRC_INTRTX	0x02	/* 16-bit */
>  #define MUSB_HDRC_INTRRX	0x04
> -#define MUSB_HDRC_INTRTXE	0x06  
> -#define MUSB_HDRC_INTRRXE	0x08  
> +#define MUSB_HDRC_INTRTXE	0x06
> +#define MUSB_HDRC_INTRRXE	0x08
>  #define MUSB_HDRC_INTRUSB	0x0a	/* 8 bit */
>  #define MUSB_HDRC_INTRUSBE	0x0b	/* 8 bit */
>  #define MUSB_HDRC_FRAME		0x0c	/* 16-bit */
> @@ -113,7 +113,7 @@
>   */
> 
>  /* POWER */
> -#define MGC_M_POWER_ISOUPDATE		0x80 
> +#define MGC_M_POWER_ISOUPDATE		0x80
>  #define	MGC_M_POWER_SOFTCONN		0x40
>  #define	MGC_M_POWER_HSENAB		0x20
>  #define	MGC_M_POWER_HSMODE		0x10
> @@ -127,7 +127,7 @@
>  #define MGC_M_INTR_RESUME		0x02
>  #define MGC_M_INTR_RESET		0x04
>  #define MGC_M_INTR_BABBLE		0x04
> -#define MGC_M_INTR_SOF			0x08 
> +#define MGC_M_INTR_SOF			0x08
>  #define MGC_M_INTR_CONNECT		0x10
>  #define MGC_M_INTR_DISCONNECT		0x20
>  #define MGC_M_INTR_SESSREQ		0x40
> @@ -135,7 +135,7 @@
>  #define MGC_M_INTR_EP0			0x01	/* FOR EP0 INTERRUPT */
> 
>  /* DEVCTL */
> -#define MGC_M_DEVCTL_BDEVICE		0x80   
> +#define MGC_M_DEVCTL_BDEVICE		0x80
>  #define MGC_M_DEVCTL_FSDEV		0x40
>  #define MGC_M_DEVCTL_LSDEV		0x20
>  #define MGC_M_DEVCTL_VBUS		0x18
> diff --git a/hw/usb/hcd-ohci.c b/hw/usb/hcd-ohci.c
> index 1e6e85e86a..a2bc7e05d6 100644
> --- a/hw/usb/hcd-ohci.c
> +++ b/hw/usb/hcd-ohci.c
> @@ -670,7 +670,7 @@ static int ohci_service_iso_td(OHCIState *ohci, struct ohci_ed *ed,
> 
>      starting_frame = OHCI_BM(iso_td.flags, TD_SF);
>      frame_count = OHCI_BM(iso_td.flags, TD_FC);
> -    relative_frame_number = USUB(ohci->frame_number, starting_frame); 
> +    relative_frame_number = USUB(ohci->frame_number, starting_frame);
> 
>      trace_usb_ohci_iso_td_head(
>             ed->head & OHCI_DPTR_MASK, ed->tail & OHCI_DPTR_MASK,
> @@ -733,8 +733,8 @@ static int ohci_service_iso_td(OHCIState *ohci, struct ohci_ed *ed,
>      start_offset = iso_td.offset[relative_frame_number];
>      next_offset = iso_td.offset[relative_frame_number + 1];
> 
> -    if (!(OHCI_BM(start_offset, TD_PSW_CC) & 0xe) || 
> -        ((relative_frame_number < frame_count) && 
> +    if (!(OHCI_BM(start_offset, TD_PSW_CC) & 0xe) ||
> +        ((relative_frame_number < frame_count) &&
>           !(OHCI_BM(next_offset, TD_PSW_CC) & 0xe))) {
>          trace_usb_ohci_iso_td_bad_cc_not_accessed(start_offset, next_offset);
>          return 1;
> diff --git a/hw/usb/hcd-uhci.c b/hw/usb/hcd-uhci.c
> index 37f7beb3fa..bebc10a723 100644
> --- a/hw/usb/hcd-uhci.c
> +++ b/hw/usb/hcd-uhci.c
> @@ -80,7 +80,7 @@ struct UHCIPCIDeviceClass {
>      UHCIInfo       info;
>  };
> 
> -/* 
> +/*
>   * Pending async transaction.
>   * 'packet' must be the first field because completion
>   * handler does "(UHCIAsync *) pkt" cast.
> diff --git a/hw/virtio/virtio-pci.c b/hw/virtio/virtio-pci.c
> index 7bc8c1c056..f7d8b30fd7 100644
> --- a/hw/virtio/virtio-pci.c
> +++ b/hw/virtio/virtio-pci.c
> @@ -836,7 +836,7 @@ static void virtio_pci_vq_vector_mask(VirtIOPCIProxy *proxy,
> 
>      /* If guest supports masking, keep irqfd but mask it.
>       * Otherwise, clean it up now.
> -     */ 
> +     */
>      if (vdev->use_guest_notifier_mask && k->guest_notifier_mask) {
>          k->guest_notifier_mask(vdev, queue_no, true);
>      } else {
> diff --git a/include/hw/cris/etraxfs_dma.h b/include/hw/cris/etraxfs_dma.h
> index 095d76b956..f11a5874cf 100644
> --- a/include/hw/cris/etraxfs_dma.h
> +++ b/include/hw/cris/etraxfs_dma.h
> @@ -28,9 +28,9 @@ struct etraxfs_dma_client
>  void *etraxfs_dmac_init(hwaddr base, int nr_channels);
>  void etraxfs_dmac_connect(void *opaque, int channel, qemu_irq *line,
>  			  int input);
> -void etraxfs_dmac_connect_client(void *opaque, int c, 
> +void etraxfs_dmac_connect_client(void *opaque, int c,
>  				 struct etraxfs_dma_client *cl);
> -int etraxfs_dmac_input(struct etraxfs_dma_client *client, 
> +int etraxfs_dmac_input(struct etraxfs_dma_client *client,
>  		       void *buf, int len, int eop);
> 
>  #endif
> diff --git a/include/hw/net/lance.h b/include/hw/net/lance.h
> index 0357f5f65c..6099c12d37 100644
> --- a/include/hw/net/lance.h
> +++ b/include/hw/net/lance.h
> @@ -6,7 +6,7 @@
>   *
>   * This represents the Sparc32 lance (Am7990) ethernet device which is an
>   * earlier register-compatible member of the AMD PC-Net II (Am79C970A) family.
> - * 
> + *
>   * Permission is hereby granted, free of charge, to any person obtaining a copy
>   * of this software and associated documentation files (the "Software"), to deal
>   * in the Software without restriction, including without limitation the rights
> diff --git a/include/hw/ppc/spapr.h b/include/hw/ppc/spapr.h
> index c421410e3f..fdeed5ecb6 100644
> --- a/include/hw/ppc/spapr.h
> +++ b/include/hw/ppc/spapr.h
> @@ -131,7 +131,7 @@ struct SpaprMachineClass {
>      hwaddr rma_limit;          /* clamp the RMA to this size */
> 
>      void (*phb_placement)(SpaprMachineState *spapr, uint32_t index,
> -                          uint64_t *buid, hwaddr *pio, 
> +                          uint64_t *buid, hwaddr *pio,
>                            hwaddr *mmio32, hwaddr *mmio64,
>                            unsigned n_dma, uint32_t *liobns, hwaddr *nv2gpa,
>                            hwaddr *nv2atsd, Error **errp);
> diff --git a/include/hw/xen/interface/io/ring.h b/include/hw/xen/interface/io/ring.h
> index 5d048b335c..fdb2a6ecba 100644
> --- a/include/hw/xen/interface/io/ring.h
> +++ b/include/hw/xen/interface/io/ring.h
> @@ -1,6 +1,6 @@
>  /******************************************************************************
>   * ring.h
> - * 
> + *
>   * Shared producer-consumer ring macros.
>   *
>   * Permission is hereby granted, free of charge, to any person obtaining a copy
> @@ -61,7 +61,7 @@ typedef unsigned int RING_IDX;
>  /*
>   * Calculate size of a shared ring, given the total available space for the
>   * ring and indexes (_sz), and the name tag of the request/response structure.
> - * A ring contains as many entries as will fit, rounded down to the nearest 
> + * A ring contains as many entries as will fit, rounded down to the nearest
>   * power of two (so we can mask with (size-1) to loop around).
>   */
>  #define __CONST_RING_SIZE(_s, _sz) \
> @@ -75,7 +75,7 @@ typedef unsigned int RING_IDX;
> 
>  /*
>   * Macros to make the correct C datatypes for a new kind of ring.
> - * 
> + *
>   * To make a new ring datatype, you need to have two message structures,
>   * let's say request_t, and response_t already defined.
>   *
> @@ -85,7 +85,7 @@ typedef unsigned int RING_IDX;
>   *
>   * These expand out to give you a set of types, as you can see below.
>   * The most important of these are:
> - * 
> + *
>   *     mytag_sring_t      - The shared ring.
>   *     mytag_front_ring_t - The 'front' half of the ring.
>   *     mytag_back_ring_t  - The 'back' half of the ring.
> @@ -153,15 +153,15 @@ typedef struct __name##_back_ring __name##_back_ring_t
> 
>  /*
>   * Macros for manipulating rings.
> - * 
> - * FRONT_RING_whatever works on the "front end" of a ring: here 
> + *
> + * FRONT_RING_whatever works on the "front end" of a ring: here
>   * requests are pushed on to the ring and responses taken off it.
> - * 
> - * BACK_RING_whatever works on the "back end" of a ring: here 
> + *
> + * BACK_RING_whatever works on the "back end" of a ring: here
>   * requests are taken off the ring and responses put on.
> - * 
> - * N.B. these macros do NO INTERLOCKS OR FLOW CONTROL. 
> - * This is OK in 1-for-1 request-response situations where the 
> + *
> + * N.B. these macros do NO INTERLOCKS OR FLOW CONTROL.
> + * This is OK in 1-for-1 request-response situations where the
>   * requestor (front end) never has more than RING_SIZE()-1
>   * outstanding requests.
>   */
> @@ -263,26 +263,26 @@ typedef struct __name##_back_ring __name##_back_ring_t
> 
>  /*
>   * Notification hold-off (req_event and rsp_event):
> - * 
> + *
>   * When queueing requests or responses on a shared ring, it may not always be
>   * necessary to notify the remote end. For example, if requests are in flight
>   * in a backend, the front may be able to queue further requests without
>   * notifying the back (if the back checks for new requests when it queues
>   * responses).
> - * 
> + *
>   * When enqueuing requests or responses:
> - * 
> + *
>   *  Use RING_PUSH_{REQUESTS,RESPONSES}_AND_CHECK_NOTIFY(). The second argument
>   *  is a boolean return value. True indicates that the receiver requires an
>   *  asynchronous notification.
> - * 
> + *
>   * After dequeuing requests or responses (before sleeping the connection):
> - * 
> + *
>   *  Use RING_FINAL_CHECK_FOR_REQUESTS() or RING_FINAL_CHECK_FOR_RESPONSES().
>   *  The second argument is a boolean return value. True indicates that there
>   *  are pending messages on the ring (i.e., the connection should not be put
>   *  to sleep).
> - * 
> + *
>   *  These macros will set the req_event/rsp_event field to trigger a
>   *  notification on the very next message that is enqueued. If you want to
>   *  create batches of work (i.e., only receive a notification after several
> diff --git a/include/qemu/log.h b/include/qemu/log.h
> index f4724f7330..1a4e066160 100644
> --- a/include/qemu/log.h
> +++ b/include/qemu/log.h
> @@ -14,7 +14,7 @@ typedef struct QemuLogFile {
>  extern QemuLogFile *qemu_logfile;
> 
> 
> -/* 
> +/*
>   * The new API:
>   *
>   */
> diff --git a/include/qom/object.h b/include/qom/object.h
> index 94a61ccc3f..380007b133 100644
> --- a/include/qom/object.h
> +++ b/include/qom/object.h
> @@ -1443,12 +1443,12 @@ char *object_get_canonical_path(const Object *obj);
>   *   ambiguous match
>   *
>   * There are two types of supported paths--absolute paths and partial paths.
> - * 
> + *
>   * Absolute paths are derived from the root object and can follow child<> or
>   * link<> properties.  Since they can follow link<> properties, they can be
>   * arbitrarily long.  Absolute paths look like absolute filenames and are
>   * prefixed with a leading slash.
> - * 
> + *
>   * Partial paths look like relative filenames.  They do not begin with a
>   * prefix.  The matching rules for partial paths are subtle but designed to make
>   * specifying objects easy.  At each level of the composition tree, the partial
> diff --git a/linux-user/cris/cpu_loop.c b/linux-user/cris/cpu_loop.c
> index 334edddd1e..25d0861df9 100644
> --- a/linux-user/cris/cpu_loop.c
> +++ b/linux-user/cris/cpu_loop.c
> @@ -27,7 +27,7 @@ void cpu_loop(CPUCRISState *env)
>      CPUState *cs = env_cpu(env);
>      int trapnr, ret;
>      target_siginfo_t info;
> -    
> +
>      while (1) {
>          cpu_exec_start(cs);
>          trapnr = cpu_exec(cs);
> @@ -49,13 +49,13 @@ void cpu_loop(CPUCRISState *env)
>            /* just indicate that signals should be handled asap */
>            break;
>          case EXCP_BREAK:
> -            ret = do_syscall(env, 
> -                             env->regs[9], 
> -                             env->regs[10], 
> -                             env->regs[11], 
> -                             env->regs[12], 
> -                             env->regs[13], 
> -                             env->pregs[7], 
> +            ret = do_syscall(env,
> +                             env->regs[9],
> +                             env->regs[10],
> +                             env->regs[11],
> +                             env->regs[12],
> +                             env->regs[13],
> +                             env->pregs[7],
>                               env->pregs[11],
>                               0, 0);
>              if (ret == -TARGET_ERESTARTSYS) {
> diff --git a/linux-user/microblaze/cpu_loop.c b/linux-user/microblaze/cpu_loop.c
> index 3e0a7f730b..990dda26c3 100644
> --- a/linux-user/microblaze/cpu_loop.c
> +++ b/linux-user/microblaze/cpu_loop.c
> @@ -27,7 +27,7 @@ void cpu_loop(CPUMBState *env)
>      CPUState *cs = env_cpu(env);
>      int trapnr, ret;
>      target_siginfo_t info;
> -    
> +
>      while (1) {
>          cpu_exec_start(cs);
>          trapnr = cpu_exec(cs);
> @@ -52,13 +52,13 @@ void cpu_loop(CPUMBState *env)
>              /* Return address is 4 bytes after the call.  */
>              env->regs[14] += 4;
>              env->sregs[SR_PC] = env->regs[14];
> -            ret = do_syscall(env, 
> -                             env->regs[12], 
> -                             env->regs[5], 
> -                             env->regs[6], 
> -                             env->regs[7], 
> -                             env->regs[8], 
> -                             env->regs[9], 
> +            ret = do_syscall(env,
> +                             env->regs[12],
> +                             env->regs[5],
> +                             env->regs[6],
> +                             env->regs[7],
> +                             env->regs[8],
> +                             env->regs[9],
>                               env->regs[10],
>                               0, 0);
>              if (ret == -TARGET_ERESTARTSYS) {
> diff --git a/linux-user/mmap.c b/linux-user/mmap.c
> index 0019447892..e48056f6ad 100644
> --- a/linux-user/mmap.c
> +++ b/linux-user/mmap.c
> @@ -401,12 +401,12 @@ abi_long target_mmap(abi_ulong start, abi_ulong len, int prot,
>      }
> 
>      /* When mapping files into a memory area larger than the file, accesses
> -       to pages beyond the file size will cause a SIGBUS. 
> +       to pages beyond the file size will cause a SIGBUS.
> 
>         For example, if mmaping a file of 100 bytes on a host with 4K pages
>         emulating a target with 8K pages, the target expects to be able to
>         access the first 8K. But the host will trap us on any access beyond
> -       4K.  
> +       4K.
> 
>         When emulating a target with a larger page-size than the hosts, we
>         may need to truncate file maps at EOF and add extra anonymous pages
> @@ -421,7 +421,7 @@ abi_long target_mmap(abi_ulong start, abi_ulong len, int prot,
> 
>         /* Are we trying to create a map beyond EOF?.  */
>         if (offset + len > sb.st_size) {
> -           /* If so, truncate the file map at eof aligned with 
> +           /* If so, truncate the file map at eof aligned with
>                the hosts real pagesize. Additional anonymous maps
>                will be created beyond EOF.  */
>             len = REAL_HOST_PAGE_ALIGN(sb.st_size - offset);
> @@ -496,7 +496,7 @@ abi_long target_mmap(abi_ulong start, abi_ulong len, int prot,
>              }
>              goto the_end;
>          }
> -        
> +
>          /* handle the start of the mapping */
>          if (start > real_start) {
>              if (real_end == real_start + qemu_host_page_size) {
> diff --git a/linux-user/sparc/signal.c b/linux-user/sparc/signal.c
> index d796f50f66..53efb61c70 100644
> --- a/linux-user/sparc/signal.c
> +++ b/linux-user/sparc/signal.c
> @@ -104,7 +104,7 @@ struct target_rt_signal_frame {
>      qemu_siginfo_fpu_t  fpu_state;
>  };
> 
> -static inline abi_ulong get_sigframe(struct target_sigaction *sa, 
> +static inline abi_ulong get_sigframe(struct target_sigaction *sa,
>                                       CPUSPARCState *env,
>                                       unsigned long framesize)
>  {
> @@ -506,7 +506,7 @@ void sparc64_get_context(CPUSPARCState *env)
>      if (!lock_user_struct(VERIFY_WRITE, ucp, ucp_addr, 0)) {
>          goto do_sigsegv;
>      }
> -    
> +
>      mcp = &ucp->tuc_mcontext;
>      grp = &mcp->mc_gregs;
> 
> diff --git a/linux-user/syscall.c b/linux-user/syscall.c
> index 97de9fb5c9..10d91a9781 100644
> --- a/linux-user/syscall.c
> +++ b/linux-user/syscall.c
> @@ -1104,7 +1104,7 @@ static inline rlim_t target_to_host_rlim(abi_ulong target_rlim)
>  {
>      abi_ulong target_rlim_swap;
>      rlim_t result;
> -    
> +
>      target_rlim_swap = tswapal(target_rlim);
>      if (target_rlim_swap == TARGET_RLIM_INFINITY)
>          return RLIM_INFINITY;
> @@ -1112,7 +1112,7 @@ static inline rlim_t target_to_host_rlim(abi_ulong target_rlim)
>      result = target_rlim_swap;
>      if (target_rlim_swap != (rlim_t)result)
>          return RLIM_INFINITY;
> -    
> +
>      return result;
>  }
>  #endif
> @@ -1122,13 +1122,13 @@ static inline abi_ulong host_to_target_rlim(rlim_t rlim)
>  {
>      abi_ulong target_rlim_swap;
>      abi_ulong result;
> -    
> +
>      if (rlim == RLIM_INFINITY || rlim != (abi_long)rlim)
>          target_rlim_swap = TARGET_RLIM_INFINITY;
>      else
>          target_rlim_swap = rlim;
>      result = tswapal(target_rlim_swap);
> -    
> +
>      return result;
>  }
>  #endif
> @@ -1615,9 +1615,9 @@ static inline abi_long target_to_host_cmsg(struct msghdr *msgh,
>      abi_ulong target_cmsg_addr;
>      struct target_cmsghdr *target_cmsg, *target_cmsg_start;
>      socklen_t space = 0;
> -    
> +
>      msg_controllen = tswapal(target_msgh->msg_controllen);
> -    if (msg_controllen < sizeof (struct target_cmsghdr)) 
> +    if (msg_controllen < sizeof (struct target_cmsghdr))
>          goto the_end;
>      target_cmsg_addr = tswapal(target_msgh->msg_control);
>      target_cmsg = lock_user(VERIFY_READ, target_cmsg_addr, msg_controllen, 1);
> @@ -1703,7 +1703,7 @@ static inline abi_long host_to_target_cmsg(struct target_msghdr *target_msgh,
>      socklen_t space = 0;
> 
>      msg_controllen = tswapal(target_msgh->msg_controllen);
> -    if (msg_controllen < sizeof (struct target_cmsghdr)) 
> +    if (msg_controllen < sizeof (struct target_cmsghdr))
>          goto the_end;
>      target_cmsg_addr = tswapal(target_msgh->msg_control);
>      target_cmsg = lock_user(VERIFY_WRITE, target_cmsg_addr, msg_controllen, 0);
> @@ -5750,7 +5750,7 @@ abi_long do_set_thread_area(CPUX86State *env, abi_ulong ptr)
>      }
>      unlock_user_struct(target_ldt_info, ptr, 1);
> 
> -    if (ldt_info.entry_number < TARGET_GDT_ENTRY_TLS_MIN || 
> +    if (ldt_info.entry_number < TARGET_GDT_ENTRY_TLS_MIN ||
>          ldt_info.entry_number > TARGET_GDT_ENTRY_TLS_MAX)
>             return -TARGET_EINVAL;
>      seg_32bit = ldt_info.flags & 1;
> @@ -5828,7 +5828,7 @@ static abi_long do_get_thread_area(CPUX86State *env, abi_ulong ptr)
>      lp = (uint32_t *)(gdt_table + idx);
>      entry_1 = tswap32(lp[0]);
>      entry_2 = tswap32(lp[1]);
> -    
> +
>      read_exec_only = ((entry_2 >> 9) & 1) ^ 1;
>      contents = (entry_2 >> 10) & 3;
>      seg_not_present = ((entry_2 >> 15) & 1) ^ 1;
> @@ -5844,8 +5844,8 @@ static abi_long do_get_thread_area(CPUX86State *env, abi_ulong ptr)
>          (read_exec_only << 3) | (limit_in_pages << 4) |
>          (seg_not_present << 5) | (useable << 6) | (lm << 7);
>      limit = (entry_1 & 0xffff) | (entry_2  & 0xf0000);
> -    base_addr = (entry_1 >> 16) | 
> -        (entry_2 & 0xff000000) | 
> +    base_addr = (entry_1 >> 16) |
> +        (entry_2 & 0xff000000) |
>          ((entry_2 & 0xff) << 16);
>      target_ldt_info->base_addr = tswapal(base_addr);
>      target_ldt_info->limit = tswap32(limit);
> @@ -10873,7 +10873,7 @@ static abi_long do_syscall1(void *cpu_env, int num, abi_long arg1,
>          return get_errno(fchown(arg1, low2highuid(arg2), low2highgid(arg3)));
>  #if defined(TARGET_NR_fchownat)
>      case TARGET_NR_fchownat:
> -        if (!(p = lock_user_string(arg2))) 
> +        if (!(p = lock_user_string(arg2)))
>              return -TARGET_EFAULT;
>          ret = get_errno(fchownat(arg1, p, low2highuid(arg3),
>                                   low2highgid(arg4), arg5));
> diff --git a/linux-user/syscall_defs.h b/linux-user/syscall_defs.h
> index 152ec637cb..752ea5ee83 100644
> --- a/linux-user/syscall_defs.h
> +++ b/linux-user/syscall_defs.h
> @@ -1923,7 +1923,7 @@ struct target_stat {
>  	abi_long	st_blocks;	/* Number 512-byte blocks allocated. */
> 
>  	abi_ulong	target_st_atime;
> -	abi_ulong 	target_st_atime_nsec; 
> +	abi_ulong 	target_st_atime_nsec;
>  	abi_ulong	target_st_mtime;
>  	abi_ulong	target_st_mtime_nsec;
>  	abi_ulong	target_st_ctime;
> diff --git a/linux-user/uaccess.c b/linux-user/uaccess.c
> index e215ecc2a6..91e2067933 100644
> --- a/linux-user/uaccess.c
> +++ b/linux-user/uaccess.c
> @@ -55,7 +55,7 @@ abi_long target_strlen(abi_ulong guest_addr1)
>          unlock_user(ptr, guest_addr, 0);
>          guest_addr += len;
>          /* we don't allow wrapping or integer overflow */
> -        if (guest_addr == 0 || 
> +        if (guest_addr == 0 ||
>              (guest_addr - guest_addr1) > 0x7fffffff)
>              return -TARGET_EFAULT;
>          if (len != max_len)
> diff --git a/os-posix.c b/os-posix.c
> index 3cd52e1e70..fa6dfae168 100644
> --- a/os-posix.c
> +++ b/os-posix.c
> @@ -316,7 +316,7 @@ void os_setup_post(void)
> 
>          close(fd);
> 
> -        do {        
> +        do {
>              len = write(daemon_pipe, &status, 1);
>          } while (len < 0 && errno == EINTR);
>          if (len != 1) {
> diff --git a/qapi/qapi-util.c b/qapi/qapi-util.c
> index 29a6c98b53..48045c3ccc 100644
> --- a/qapi/qapi-util.c
> +++ b/qapi/qapi-util.c
> @@ -4,7 +4,7 @@
>   * Authors:
>   *  Hu Tao       <hutao@cn.fujitsu.com>
>   *  Peter Lieven <pl@kamp.de>
> - * 
> + *
>   * This work is licensed under the terms of the GNU LGPL, version 2.1 or later.
>   * See the COPYING.LIB file in the top-level directory.
>   *
> diff --git a/qemu-img.c b/qemu-img.c
> index bdb9f6aa46..72dfa096b1 100644
> --- a/qemu-img.c
> +++ b/qemu-img.c
> @@ -248,7 +248,7 @@ static bool qemu_img_object_print_help(const char *type, QemuOpts *opts)
>   * an odd number of ',' (or else a separating ',' following it gets
>   * escaped), or be empty (or else a separating ',' preceding it can
>   * escape a separating ',' following it).
> - * 
> + *
>   */
>  static bool is_valid_option_list(const char *optarg)
>  {
> diff --git a/qemu-options.hx b/qemu-options.hx
> index 196f468786..2f728fde47 100644
> --- a/qemu-options.hx
> +++ b/qemu-options.hx
> @@ -192,15 +192,15 @@ DEF("numa", HAS_ARG, QEMU_OPTION_numa,
>      QEMU_ARCH_ALL)
>  SRST
>  ``-numa node[,mem=3Dsize][,cpus=3Dfirstcpu[-lastcpu]][,nodeid=3Dnode][,i=
nitiator=3Dinitiator]``
> -  \=20
> +  \
>  ``-numa node[,memdev=3Did][,cpus=3Dfirstcpu[-lastcpu]][,nodeid=3Dnode][,=
initiator=3Dinitiator]``
>    \
>  ``-numa dist,src=3Dsource,dst=3Ddestination,val=3Ddistance``
> -  \=20
> +  \
>  ``-numa cpu,node-id=3Dnode[,socket-id=3Dx][,core-id=3Dy][,thread-id=3Dz]=
``
> -  \=20
> +  \
>  ``-numa hmat-lb,initiator=3Dnode,target=3Dnode,hierarchy=3Dhierarchy,dat=
a-type=3Dtpye[,latency=3Dlat][,bandwidth=3Dbw]``
> -  \=20
> +  \
>  ``-numa hmat-cache,node-id=3Dnode,size=3Dsize,level=3Dlevel[,associativi=
ty=3Dstr][,policy=3Dstr][,line=3Dsize]``
>      Define a NUMA node and assign RAM and VCPUs to it. Set the NUMA
>      distance from a source node to a destination node. Set the ACPI
> @@ -395,7 +395,7 @@ DEF("global", HAS_ARG, QEMU_OPTION_global,
>      QEMU_ARCH_ALL)
>  SRST
>  ``-global driver.prop=3Dvalue``
> -  \=20
> +  \
>  ``-global driver=3Ddriver,property=3Dproperty,value=3Dvalue``
>      Set default value of driver's property prop to value, e.g.:
> =20
> @@ -926,9 +926,9 @@ SRST
>  ``-hda file``
>    \
>  ``-hdb file``
> -  \=20
> +  \
>  ``-hdc file``
> -  \=20
> +  \
>  ``-hdd file``
>      Use file as hard disk 0, 1, 2 or 3 image (see
>      :ref:`disk_005fimages`).
> @@ -1416,7 +1416,7 @@ DEF("fsdev", HAS_ARG, QEMU_OPTION_fsdev,
> 
>  SRST
>  ``-fsdev local,id=id,path=path,security_model=security_model [,writeout=writeout][,readonly][,fmode=fmode][,dmode=dmode] [,throttling.option=value[,throttling.option=value[,...]]]``
> -  \ 
> +  \
>  ``-fsdev proxy,id=id,socket=socket[,writeout=writeout][,readonly]``
>    \
>  ``-fsdev proxy,id=id,sock_fd=sock_fd[,writeout=writeout][,readonly]``
> @@ -1537,9 +1537,9 @@ DEF("virtfs", HAS_ARG, QEMU_OPTION_virtfs,
> 
>  SRST
>  ``-virtfs local,path=path,mount_tag=mount_tag ,security_model=security_model[,writeout=writeout][,readonly] [,fmode=fmode][,dmode=dmode][,multidevs=multidevs]``
> -  \ 
> +  \
>  ``-virtfs proxy,socket=socket,mount_tag=mount_tag [,writeout=writeout][,readonly]``
> -  \ 
> +  \
>  ``-virtfs proxy,sock_fd=sock_fd,mount_tag=mount_tag [,writeout=writeout][,readonly]``
>    \
>  ``-virtfs synth,mount_tag=mount_tag``
> @@ -3674,7 +3674,7 @@ DEF("overcommit", HAS_ARG, QEMU_OPTION_overcommit,
>      QEMU_ARCH_ALL)
>  SRST
>  ``-overcommit mem-lock=on|off``
> -  \ 
> +  \
>  ``-overcommit cpu-pm=on|off``
>      Run qemu with hints about host resource overcommit. The default is
>      to assume that host overcommits all resources.
> @@ -4045,7 +4045,7 @@ DEF("incoming", HAS_ARG, QEMU_OPTION_incoming, \
>      QEMU_ARCH_ALL)
>  SRST
>  ``-incoming tcp:[host]:port[,to=maxport][,ipv4][,ipv6]``
> -  \ 
> +  \
>  ``-incoming rdma:host:port[,ipv4][,ipv6]``
>      Prepare for incoming migration, listen on a given tcp port.
> 
> @@ -4753,7 +4753,7 @@ SRST
>                 [...]
> 
>      ``-object secret,id=id,data=string,format=raw|base64[,keyid=secretid,iv=string]``
> -      \ 
> +      \
>      ``-object secret,id=id,file=filename,format=raw|base64[,keyid=secretid,iv=string]``
>          Defines a secret to store a password, encryption key, or some
>          other sensitive data. The sensitive data can either be passed
> diff --git a/qom/object.c b/qom/object.c
> index 6ece96bc2b..30630d789f 100644
> --- a/qom/object.c
> +++ b/qom/object.c
> @@ -1020,7 +1020,7 @@ static void object_class_foreach_tramp(gpointer key=
, gpointer value,
>          return;
>      }
> 
> -    if (data->implements_type && 
> +    if (data->implements_type &&
>          !object_class_dynamic_cast(k, data->implements_type)) {
>          return;
>      }
> diff --git a/target/cris/translate.c b/target/cris/translate.c
> index aaa46b5bca..df979594c3 100644
> --- a/target/cris/translate.c
> +++ b/target/cris/translate.c
> @@ -369,7 +369,7 @@ static inline void t_gen_addx_carry(DisasContext *dc, TCGv d)
>      if (dc->flagx_known) {
>          if (dc->flags_x) {
>              TCGv c;
> -            
> +
>              c = tcg_temp_new();
>              t_gen_mov_TN_preg(c, PR_CCS);
>              /* C flag is already at bit 0.  */
> @@ -402,7 +402,7 @@ static inline void t_gen_subx_carry(DisasContext *dc, TCGv d)
>      if (dc->flagx_known) {
>          if (dc->flags_x) {
>              TCGv c;
> -            
> +
>              c = tcg_temp_new();
>              t_gen_mov_TN_preg(c, PR_CCS);
>              /* C flag is already at bit 0.  */
> @@ -688,7 +688,7 @@ static inline void cris_update_cc_x(DisasContext *dc)
>  }
> 
>  /* Update cc prior to executing ALU op. Needs source operands untouched.  */
> -static void cris_pre_alu_update_cc(DisasContext *dc, int op, 
> +static void cris_pre_alu_update_cc(DisasContext *dc, int op,
>                     TCGv dst, TCGv src, int size)
>  {
>      if (dc->update_cc) {
> @@ -718,7 +718,7 @@ static inline void cris_update_result(DisasContext *dc, TCGv res)
>  }
> 
>  /* Returns one if the write back stage should execute.  */
> -static void cris_alu_op_exec(DisasContext *dc, int op, 
> +static void cris_alu_op_exec(DisasContext *dc, int op,
>                     TCGv dst, TCGv a, TCGv b, int size)
>  {
>      /* Emit the ALU insns.  */
> @@ -1068,7 +1068,7 @@ static void cris_store_direct_jmp(DisasContext *dc)
>      }
>  }
> 
> -static void cris_prepare_cc_branch (DisasContext *dc, 
> +static void cris_prepare_cc_branch (DisasContext *dc,
>                      int offset, int cond)
>  {
>      /* This helps us re-schedule the micro-code to insns in delay-slots
> @@ -1108,7 +1108,7 @@ static void gen_load64(DisasContext *dc, TCGv_i64 dst, TCGv addr)
>      tcg_gen_qemu_ld_i64(dst, addr, mem_index, MO_TEQ);
>  }
> 
> -static void gen_load(DisasContext *dc, TCGv dst, TCGv addr, 
> +static void gen_load(DisasContext *dc, TCGv dst, TCGv addr,
>               unsigned int size, int sign)
>  {
>      int mem_index = cpu_mmu_index(&dc->cpu->env, false);
> @@ -3047,27 +3047,27 @@ static unsigned int crisv32_decoder(CPUCRISState *env, DisasContext *dc)
>   * to give SW a hint that the exception actually hit on the dslot.
>   *
>   * CRIS expects all PC addresses to be 16-bit aligned. The lsb is ignored by
> - * the core and any jmp to an odd addresses will mask off that lsb. It is 
> + * the core and any jmp to an odd addresses will mask off that lsb. It is
>   * simply there to let sw know there was an exception on a dslot.
>   *
>   * When the software returns from an exception, the branch will re-execute.
>   * On QEMU care needs to be taken when a branch+delayslot sequence is broken
>   * and the branch and delayslot don't share pages.
>   *
> - * The TB contaning the branch insn will set up env->btarget and evaluate 
> - * env->btaken. When the translation loop exits we will note that the branch 
> + * The TB contaning the branch insn will set up env->btarget and evaluate
> + * env->btaken. When the translation loop exits we will note that the branch
>   * sequence is broken and let env->dslot be the size of the branch insn (those
>   * vary in length).
>   *
>   * The TB contaning the delayslot will have the PC of its real insn (i.e no lsb
> - * set). It will also expect to have env->dslot setup with the size of the 
> - * delay slot so that env->pc - env->dslot point to the branch insn. This TB 
> - * will execute the dslot and take the branch, either to btarget or just one 
> + * set). It will also expect to have env->dslot setup with the size of the
> + * delay slot so that env->pc - env->dslot point to the branch insn. This TB
> + * will execute the dslot and take the branch, either to btarget or just one
>   * insn ahead.
>   *
> - * When exceptions occur, we check for env->dslot in do_interrupt to detect 
> + * When exceptions occur, we check for env->dslot in do_interrupt to detect
>   * broken branch sequences and setup $erp accordingly (i.e let it point to the
> - * branch and set lsb). Then env->dslot gets cleared so that the exception 
> + * branch and set lsb). Then env->dslot gets cleared so that the exception
>   * handler can enter. When returning from exceptions (jump $erp) the lsb gets
>   * masked off and we will reexecute the branch insn.
>   *
> diff --git a/target/cris/translate_v10.inc.c b/target/cris/translate_v10.inc.c
> index ae34a0d1a3..ad4e213847 100644
> --- a/target/cris/translate_v10.inc.c
> +++ b/target/cris/translate_v10.inc.c
> @@ -299,7 +299,7 @@ static unsigned int dec10_quick_imm(DisasContext *dc)
> 
>              op = CC_OP_LSL;
>              if (imm & (1 << 5)) {
> -                op = CC_OP_LSR; 
> +                op = CC_OP_LSR;
>              }
>              imm &= 0x1f;
>              cris_cc_mask(dc, CC_MASK_NZVC);
> @@ -335,7 +335,7 @@ static unsigned int dec10_quick_imm(DisasContext *dc)
>              LOG_DIS("b%s %d\n", cc_name(dc->cond), imm);
> 
>              cris_cc_mask(dc, 0);
> -            cris_prepare_cc_branch(dc, imm, dc->cond); 
> +            cris_prepare_cc_branch(dc, imm, dc->cond);
>              break;
> 
>          default:
> @@ -487,7 +487,7 @@ static void dec10_reg_mov_pr(DisasContext *dc)
>          return;
>      }
>      if (dc->dst == PR_CCS) {
> -        cris_evaluate_flags(dc); 
> +        cris_evaluate_flags(dc);
>      }
>      cris_alu(dc, CC_OP_MOVE, cpu_R[dc->src],
>                   cpu_R[dc->src], cpu_PR[dc->dst], preg_sizes_v10[dc->dst]);
> diff --git a/target/i386/hvf/hvf.c b/target/i386/hvf/hvf.c
> index be016b951a..d3f836a0f4 100644
> --- a/target/i386/hvf/hvf.c
> +++ b/target/i386/hvf/hvf.c
> @@ -967,13 +967,13 @@ static int hvf_accel_init(MachineState *ms)
>      assert_hvf_ok(ret);
> 
>      s = g_new0(HVFState, 1);
> - 
> +
>      s->num_slots = 32;
>      for (x = 0; x < s->num_slots; ++x) {
>          s->slots[x].size = 0;
>          s->slots[x].slot_id = x;
>      }
> -  
> +
>      hvf_state = s;
>      cpu_interrupt_handler = hvf_handle_interrupt;
>      memory_listener_register(&hvf_memory_listener, &address_space_memory);
> diff --git a/target/i386/hvf/x86.c b/target/i386/hvf/x86.c
> index fdb11c8db9..6fd6e541d8 100644
> --- a/target/i386/hvf/x86.c
> +++ b/target/i386/hvf/x86.c
> @@ -83,7 +83,7 @@ bool x86_write_segment_descriptor(struct CPUState *cpu,
>  {
>      target_ulong base;
>      uint32_t limit;
> -    
> +
>      if (GDT_SEL == sel.ti) {
>          base  = rvmcs(cpu->hvf_fd, VMCS_GUEST_GDTR_BASE);
>          limit = rvmcs(cpu->hvf_fd, VMCS_GUEST_GDTR_LIMIT);
> @@ -91,7 +91,7 @@ bool x86_write_segment_descriptor(struct CPUState *cpu,
>          base  = rvmcs(cpu->hvf_fd, VMCS_GUEST_LDTR_BASE);
>          limit = rvmcs(cpu->hvf_fd, VMCS_GUEST_LDTR_LIMIT);
>      }
> -    
> +
>      if (sel.index * 8 >= limit) {
>          printf("%s: gdt limit\n", __func__);
>          return false;
> diff --git a/target/i386/hvf/x86_decode.c b/target/i386/hvf/x86_decode.c
> index 34c5e3006c..8c576febd2 100644
> --- a/target/i386/hvf/x86_decode.c
> +++ b/target/i386/hvf/x86_decode.c
> @@ -63,7 +63,7 @@ static inline uint64_t decode_bytes(CPUX86State *env, struct x86_decode *decode,
>                                      int size)
>  {
>      target_ulong val = 0;
> -    
> +
>      switch (size) {
>      case 1:
>      case 2:
> @@ -77,7 +77,7 @@ static inline uint64_t decode_bytes(CPUX86State *env, struct x86_decode *decode,
>      target_ulong va  = linear_rip(env_cpu(env), env->eip) + decode->len;
>      vmx_read_mem(env_cpu(env), &val, va, size);
>      decode->len += size;
> -    
> +
>      return val;
>  }
> 
> @@ -210,7 +210,7 @@ static void decode_imm_0(CPUX86State *env, struct x86_decode *decode,
>  static void decode_pushseg(CPUX86State *env, struct x86_decode *decode)
>  {
>      uint8_t op = (decode->opcode_len > 1) ? decode->opcode[1] : decode->opcode[0];
> -    
> +
>      decode->op[0].type = X86_VAR_REG;
>      switch (op) {
>      case 0xe:
> @@ -237,7 +237,7 @@ static void decode_pushseg(CPUX86State *env, struct x86_decode *decode)
>  static void decode_popseg(CPUX86State *env, struct x86_decode *decode)
>  {
>      uint8_t op = (decode->opcode_len > 1) ? decode->opcode[1] : decode->opcode[0];
> -    
> +
>      decode->op[0].type = X86_VAR_REG;
>      switch (op) {
>      case 0xf:
> @@ -461,14 +461,14 @@ struct decode_x87_tbl _decode_tbl3[256];
>  static void decode_x87_ins(CPUX86State *env, struct x86_decode *decode)
>  {
>      struct decode_x87_tbl *decoder;
> -    
> +
>      decode->is_fpu = true;
>      int mode = decode->modrm.mod == 3 ? 1 : 0;
>      int index = ((decode->opcode[0] & 0xf) << 4) | (mode << 3) |
>                   decode->modrm.reg;
> 
>      decoder = &_decode_tbl3[index];
> -    
> +
>      decode->cmd = decoder->cmd;
>      if (decoder->operand_size) {
>          decode->operand_size = decoder->operand_size;
> @@ -476,7 +476,7 @@ static void decode_x87_ins(CPUX86State *env, struct x86_decode *decode)
>      decode->flags_mask = decoder->flags_mask;
>      decode->fpop_stack = decoder->pop;
>      decode->frev = decoder->rev;
> -    
> +
>      if (decoder->decode_op1) {
>          decoder->decode_op1(env, decode, &decode->op[0]);
>      }
> @@ -2002,7 +2002,7 @@ static inline void decode_displacement(CPUX86State *env, struct x86_decode *deco
>      int addressing_size = decode->addressing_size;
>      int mod = decode->modrm.mod;
>      int rm = decode->modrm.rm;
> -    
> +
>      decode->displacement_size = 0;
>      switch (addressing_size) {
>      case 2:
> @@ -2115,7 +2115,7 @@ uint32_t decode_instruction(CPUX86State *env, struct x86_decode *decode)
>  void init_decoder()
>  {
>      int i;
> -    
> +
>      for (i = 0; i < ARRAY_SIZE(_decode_tbl1); i++) {
>          memcpy(&_decode_tbl1[i], &invl_inst, sizeof(invl_inst));
>      }
> @@ -2124,7 +2124,7 @@ void init_decoder()
>      }
>      for (i = 0; i < ARRAY_SIZE(_decode_tbl3); i++) {
>          memcpy(&_decode_tbl3[i], &invl_inst_x87, sizeof(invl_inst_x87));
> -    
> +
>      }
>      for (i = 0; i < ARRAY_SIZE(_1op_inst); i++) {
>          _decode_tbl1[_1op_inst[i].opcode] = _1op_inst[i];
> diff --git a/target/i386/hvf/x86_decode.h b/target/i386/hvf/x86_decode.h
> index ef7960113f..c7879c9ea7 100644
> --- a/target/i386/hvf/x86_decode.h
> +++ b/target/i386/hvf/x86_decode.h
> @@ -43,7 +43,7 @@ typedef enum x86_prefix {
> 
>  enum x86_decode_cmd {
>      X86_DECODE_CMD_INVL = 0,
> -    
> +
>      X86_DECODE_CMD_PUSH,
>      X86_DECODE_CMD_PUSH_SEG,
>      X86_DECODE_CMD_POP,
> @@ -174,7 +174,7 @@ enum x86_decode_cmd {
>      X86_DECODE_CMD_CMPXCHG8B,
>      X86_DECODE_CMD_CMPXCHG,
>      X86_DECODE_CMD_POPCNT,
> -    
> +
>      X86_DECODE_CMD_FNINIT,
>      X86_DECODE_CMD_FLD,
>      X86_DECODE_CMD_FLDxx,
> diff --git a/target/i386/hvf/x86_descr.c b/target/i386/hvf/x86_descr.c
> index 8c05c34f33..fd6a63754d 100644
> --- a/target/i386/hvf/x86_descr.c
> +++ b/target/i386/hvf/x86_descr.c
> @@ -112,7 +112,7 @@ void vmx_segment_to_x86_descriptor(struct CPUState *cpu, struct vmx_segment *vmx
>  {
>      x86_set_segment_limit(desc, vmx_desc->limit);
>      x86_set_segment_base(desc, vmx_desc->base);
> -    
> +
>      desc->type = vmx_desc->ar & 15;
>      desc->s = (vmx_desc->ar >> 4) & 1;
>      desc->dpl = (vmx_desc->ar >> 5) & 3;
> diff --git a/target/i386/hvf/x86_emu.c b/target/i386/hvf/x86_emu.c
> index d3e289ed87..edc7f74903 100644
> --- a/target/i386/hvf/x86_emu.c
> +++ b/target/i386/hvf/x86_emu.c
> @@ -131,7 +131,7 @@ void write_reg(CPUX86State *env, int reg, target_ulong val, int size)
>  target_ulong read_val_from_reg(target_ulong reg_ptr, int size)
>  {
>      target_ulong val;
> -    
> +
>      switch (size) {
>      case 1:
>          val = *(uint8_t *)reg_ptr;
> diff --git a/target/i386/hvf/x86_mmu.c b/target/i386/hvf/x86_mmu.c
> index 65d4603dbf..168c47fa34 100644
> --- a/target/i386/hvf/x86_mmu.c
> +++ b/target/i386/hvf/x86_mmu.c
> @@ -143,7 +143,7 @@ static bool test_pt_entry(struct CPUState *cpu, struct gpt_translation *pt,
>      if (pae && pt->exec_access && !pte_exec_access(pte)) {
>          return false;
>      }
> -    
> +
>  exit:
>      /* TODO: check reserved bits */
>      return true;
> @@ -175,7 +175,7 @@ static bool walk_gpt(struct CPUState *cpu, target_ulong addr, int err_code,
>      bool is_large = false;
>      target_ulong cr3 = rvmcs(cpu->hvf_fd, VMCS_GUEST_CR3);
>      uint64_t page_mask = pae ? PAE_PTE_PAGE_MASK : LEGACY_PTE_PAGE_MASK;
> -    
> +
>      memset(pt, 0, sizeof(*pt));
>      top_level = gpt_top_level(cpu, pae);
> 
> @@ -184,7 +184,7 @@ static bool walk_gpt(struct CPUState *cpu, target_ulong addr, int err_code,
>      pt->user_access = (err_code & MMU_PAGE_US);
>      pt->write_access = (err_code & MMU_PAGE_WT);
>      pt->exec_access = (err_code & MMU_PAGE_NX);
> -    
> +
>      for (level = top_level; level > 0; level--) {
>          get_pt_entry(cpu, pt, level, pae);
> 
> diff --git a/target/i386/hvf/x86_task.c b/target/i386/hvf/x86_task.c
> index 6f04478b3a..9748220381 100644
> --- a/target/i386/hvf/x86_task.c
> +++ b/target/i386/hvf/x86_task.c
> @@ -1,7 +1,7 @@
>  // This software is licensed under the terms of the GNU General Public
>  // License version 2, as published by the Free Software Foundation, and
>  // may be copied, distributed, and modified under those terms.
> -// 
> +//
>  // This program is distributed in the hope that it will be useful,
>  // but WITHOUT ANY WARRANTY; without even the implied warranty of
>  // MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> diff --git a/target/i386/hvf/x86hvf.c b/target/i386/hvf/x86hvf.c
> index 5cbcb32ab6..fd33ab4efc 100644
> --- a/target/i386/hvf/x86hvf.c
> +++ b/target/i386/hvf/x86hvf.c
> @@ -88,7 +88,7 @@ void hvf_put_segments(CPUState *cpu_state)
>  {
>      CPUX86State *env = &X86_CPU(cpu_state)->env;
>      struct vmx_segment seg;
> -    
> +
>      wvmcs(cpu_state->hvf_fd, VMCS_GUEST_IDTR_LIMIT, env->idt.limit);
>      wvmcs(cpu_state->hvf_fd, VMCS_GUEST_IDTR_BASE, env->idt.base);
> 
> @@ -105,7 +105,7 @@ void hvf_put_segments(CPUState *cpu_state)
> 
>      hvf_set_segment(cpu_state, &seg, &env->segs[R_CS], false);
>      vmx_write_segment_descriptor(cpu_state, &seg, R_CS);
> -    
> +
>      hvf_set_segment(cpu_state, &seg, &env->segs[R_DS], false);
>      vmx_write_segment_descriptor(cpu_state, &seg, R_DS);
> =20
> @@ -126,10 +126,10 @@ void hvf_put_segments(CPUState *cpu_state)
> 
>      hvf_set_segment(cpu_state, &seg, &env->ldt, false);
>      vmx_write_segment_descriptor(cpu_state, &seg, R_LDTR);
> -    
> +
>      hv_vcpu_flush(cpu_state->hvf_fd);
>  }
> -    
> +
>  void hvf_put_msrs(CPUState *cpu_state)
>  {
>      CPUX86State *env = &X86_CPU(cpu_state)->env;
> 
> =20
>      vmx_read_segment_descriptor(cpu_state, &seg, R_CS);
>      hvf_get_segment(&env->segs[R_CS], &seg);
> -    
> +
>      vmx_read_segment_descriptor(cpu_state, &seg, R_DS);
>      hvf_get_segment(&env->segs[R_DS], &seg);
> 
> @@ -209,7 +209,7 @@ void hvf_get_segments(CPUState *cpu_state)
>      env->cr[2] = 0;
>      env->cr[3] = rvmcs(cpu_state->hvf_fd, VMCS_GUEST_CR3);
>      env->cr[4] = rvmcs(cpu_state->hvf_fd, VMCS_GUEST_CR4);
> -    
> +
>      env->efer = rvmcs(cpu_state->hvf_fd, VMCS_GUEST_IA32_EFER);
>  }
> 
> @@ -217,10 +217,10 @@ void hvf_get_msrs(CPUState *cpu_state)
>  {
>      CPUX86State *env = &X86_CPU(cpu_state)->env;
>      uint64_t tmp;
> -    
> +
>      hv_vcpu_read_msr(cpu_state->hvf_fd, MSR_IA32_SYSENTER_CS, &tmp);
>      env->sysenter_cs = tmp;
> -    
> +
>      hv_vcpu_read_msr(cpu_state->hvf_fd, MSR_IA32_SYSENTER_ESP, &tmp);
>      env->sysenter_esp = tmp;
> 
> @@ -237,7 +237,7 @@ void hvf_get_msrs(CPUState *cpu_state)
>  #endif
> 
>      hv_vcpu_read_msr(cpu_state->hvf_fd, MSR_IA32_APICBASE, &tmp);
> -    
> +
>      env->tsc = rdtscp() + rvmcs(cpu_state->hvf_fd, VMCS_TSC_OFFSET);
>  }
> 
> @@ -264,15 +264,15 @@ int hvf_put_registers(CPUState *cpu_state)
>      wreg(cpu_state->hvf_fd, HV_X86_R15, env->regs[15]);
>      wreg(cpu_state->hvf_fd, HV_X86_RFLAGS, env->eflags);
>      wreg(cpu_state->hvf_fd, HV_X86_RIP, env->eip);
> -   
> +
>      wreg(cpu_state->hvf_fd, HV_X86_XCR0, env->xcr0);
> -    
> +
>      hvf_put_xsave(cpu_state);
> -    
> +
>      hvf_put_segments(cpu_state);
> -    
> +
>      hvf_put_msrs(cpu_state);
> -    
> +
>      wreg(cpu_state->hvf_fd, HV_X86_DR0, env->dr[0]);
>      wreg(cpu_state->hvf_fd, HV_X86_DR1, env->dr[1]);
>      wreg(cpu_state->hvf_fd, HV_X86_DR2, env->dr[2]);
> @@ -281,7 +281,7 @@ int hvf_put_registers(CPUState *cpu_state)
>      wreg(cpu_state->hvf_fd, HV_X86_DR5, env->dr[5]);
>      wreg(cpu_state->hvf_fd, HV_X86_DR6, env->dr[6]);
>      wreg(cpu_state->hvf_fd, HV_X86_DR7, env->dr[7]);
> -    
> +
>      return 0;
>  }
> 
> @@ -306,16 +306,16 @@ int hvf_get_registers(CPUState *cpu_state)
>      env->regs[13] = rreg(cpu_state->hvf_fd, HV_X86_R13);
>      env->regs[14] = rreg(cpu_state->hvf_fd, HV_X86_R14);
>      env->regs[15] = rreg(cpu_state->hvf_fd, HV_X86_R15);
> -    
> +
>      env->eflags = rreg(cpu_state->hvf_fd, HV_X86_RFLAGS);
>      env->eip = rreg(cpu_state->hvf_fd, HV_X86_RIP);
> -   
> +
>      hvf_get_xsave(cpu_state);
>      env->xcr0 = rreg(cpu_state->hvf_fd, HV_X86_XCR0);
> -    
> +
>      hvf_get_segments(cpu_state);
>      hvf_get_msrs(cpu_state);
> -    
> +
>      env->dr[0] = rreg(cpu_state->hvf_fd, HV_X86_DR0);
>      env->dr[1] = rreg(cpu_state->hvf_fd, HV_X86_DR1);
>      env->dr[2] = rreg(cpu_state->hvf_fd, HV_X86_DR2);
> @@ -324,7 +324,7 @@ int hvf_get_registers(CPUState *cpu_state)
>      env->dr[5] =3D rreg(cpu_state->hvf_fd, HV_X86_DR5);
>      env->dr[6] =3D rreg(cpu_state->hvf_fd, HV_X86_DR6);
>      env->dr[7] =3D rreg(cpu_state->hvf_fd, HV_X86_DR7);
> -    
> +
>      x86_update_hflags(env);
>      return 0;
>  }
> @@ -388,7 +388,7 @@ bool hvf_inject_interrupts(CPUState *cpu_state)
>                  intr_type == VMCS_INTR_T_SWEXCEPTION) {
>                  wvmcs(cpu_state->hvf_fd, VMCS_ENTRY_INST_LENGTH, env->ins_len);
>              }
> -            
> +
>              if (env->has_error_code) {
>                  wvmcs(cpu_state->hvf_fd, VMCS_ENTRY_EXCEPTION_ERROR,
>                        env->error_code);
> diff --git a/target/i386/translate.c b/target/i386/translate.c
> index 5e5dbb41b0..d824cfcfe7 100644
> --- a/target/i386/translate.c
> +++ b/target/i386/translate.c
> @@ -1623,7 +1623,7 @@ static void gen_rot_rm_T1(DisasContext *s, MemOp ot, int op1, int is_right)
>      tcg_temp_free_i32(t0);
>      tcg_temp_free_i32(t1);
> 
> -    /* The CC_OP value is no longer predictable.  */ 
> +    /* The CC_OP value is no longer predictable.  */
>      set_cc_op(s, CC_OP_DYNAMIC);
>  }
> 
> @@ -1716,7 +1716,7 @@ static void gen_rotc_rm_T1(DisasContext *s, MemOp ot, int op1,
>          gen_op_ld_v(s, ot, s->T0, s->A0);
>      else
>          gen_op_mov_v_reg(s, ot, s->T0, op1);
> -    
> +
>      if (is_right) {
>          switch (ot) {
>          case MO_8:
> @@ -5353,7 +5353,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
>                  set_cc_op(s, CC_OP_EFLAGS);
>                  break;
>              }
> -#endif        
> +#endif
>              if (!(s->cpuid_features & CPUID_CX8)) {
>                  goto illegal_op;
>              }
> @@ -6398,7 +6398,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
>      case 0x6d:
>          ot = mo_b_d32(b, dflag);
>          tcg_gen_ext16u_tl(s->T0, cpu_regs[R_EDX]);
> -        gen_check_io(s, ot, pc_start - s->cs_base, 
> +        gen_check_io(s, ot, pc_start - s->cs_base,
>                       SVM_IOIO_TYPE_MASK | svm_is_rep(prefixes) | 4);
>          if (prefixes & (PREFIX_REPZ | PREFIX_REPNZ)) {
>              gen_repz_ins(s, ot, pc_start - s->cs_base, s->pc - s->cs_base);
> diff --git a/target/microblaze/mmu.c b/target/microblaze/mmu.c
> index 6763421ba2..5487696089 100644
> --- a/target/microblaze/mmu.c
> +++ b/target/microblaze/mmu.c
> @@ -53,7 +53,7 @@ static void mmu_flush_idx(CPUMBState *env, unsigned int idx)
>      }
>  }
> 
> -static void mmu_change_pid(CPUMBState *env, unsigned int newpid) 
> +static void mmu_change_pid(CPUMBState *env, unsigned int newpid)
>  {
>      struct microblaze_mmu *mmu = &env->mmu;
>      unsigned int i;
> diff --git a/target/microblaze/translate.c b/target/microblaze/translate.c
> index f6ff2591c3..1925a93eb2 100644
> --- a/target/microblaze/translate.c
> +++ b/target/microblaze/translate.c
> @@ -663,7 +663,7 @@ static void dec_div(DisasContext *dc)
>  {
>      unsigned int u;
> 
> -    u = dc->imm & 2; 
> +    u = dc->imm & 2;
>      LOG_DIS("div\n");
> 
>      if (trap_illegal(dc, !dc->cpu->cfg.use_div)) {
> diff --git a/target/sh4/op_helper.c b/target/sh4/op_helper.c
> index 14c3db0f48..fa4f5aee4f 100644
> --- a/target/sh4/op_helper.c
> +++ b/target/sh4/op_helper.c
> @@ -133,7 +133,7 @@ void helper_discard_movcal_backup(CPUSH4State *env)
>  	env->movcal_backup = current = next;
>  	if (current == NULL)
>  	    env->movcal_backup_tail = &(env->movcal_backup);
> -    } 
> +    }
>  }
> 
>  void helper_ocbi(CPUSH4State *env, uint32_t address)
> @@ -146,7 +146,7 @@ void helper_ocbi(CPUSH4State *env, uint32_t address)
>  	{
>  	    memory_content *next = (*current)->next;
>              cpu_stl_data(env, a, (*current)->value);
> -	    
> +	
>  	    if (next == NULL)
>  	    {
>  		env->movcal_backup_tail =3D current;
> diff --git a/target/xtensa/core-de212/core-isa.h b/target/xtensa/core-de212/core-isa.h
> index 90ac329230..4528dd7942 100644
> --- a/target/xtensa/core-de212/core-isa.h
> +++ b/target/xtensa/core-de212/core-isa.h
> @@ -1,4 +1,4 @@
> -/* 
> +/*
>   * xtensa/config/core-isa.h -- HAL definitions that are dependent on Xtensa
>   *				processor CORE configuration
>   *
> @@ -605,12 +605,12 @@
>  /*----------------------------------------------------------------------
>  				MPU
>    ----------------------------------------------------------------------*/
> -#define XCHAL_HAVE_MPU			0 
> +#define XCHAL_HAVE_MPU			0
>  #define XCHAL_MPU_ENTRIES		0
> 
>  #define XCHAL_MPU_ALIGN_REQ		1	/* MPU requires alignment of entries to background map */
>  #define XCHAL_MPU_BACKGROUND_ENTRIES	0	/* number of entries in background map */
> - 
> +
>  #define XCHAL_MPU_ALIGN_BITS		0
>  #define XCHAL_MPU_ALIGN			0
> 
> diff --git a/target/xtensa/core-sample_controller/core-isa.h b/target/xtensa/core-sample_controller/core-isa.h
> index d53dca8665..de5a5f3ba2 100644
> --- a/target/xtensa/core-sample_controller/core-isa.h
> +++ b/target/xtensa/core-sample_controller/core-isa.h
> @@ -1,4 +1,4 @@
> -/* 
> +/*
>   * xtensa/config/core-isa.h -- HAL definitions that are dependent on Xtensa
>   *				processor CORE configuration
>   *
> @@ -626,13 +626,13 @@
>  /*----------------------------------------------------------------------
>  				MPU
>    ----------------------------------------------------------------------*/
> -#define XCHAL_HAVE_MPU			0 
> +#define XCHAL_HAVE_MPU			0
>  #define XCHAL_MPU_ENTRIES		0
> 
>  #define XCHAL_MPU_ALIGN_REQ		1	/* MPU requires alignment of entries to background map */
>  #define XCHAL_MPU_BACKGROUND_ENTRIES	0	/* number of entries in bg map*/
>  #define XCHAL_MPU_BG_CACHEADRDIS	0	/* default CACHEADRDIS for bg */
> - 
> +
>  #define XCHAL_MPU_ALIGN_BITS		0
>  #define XCHAL_MPU_ALIGN			0
> 
> diff --git a/target/xtensa/core-test_kc705_be/core-isa.h b/target/xtensa/core-test_kc705_be/core-isa.h
> index 408fed871d..382e3f187d 100644
> --- a/target/xtensa/core-test_kc705_be/core-isa.h
> +++ b/target/xtensa/core-test_kc705_be/core-isa.h
> @@ -1,4 +1,4 @@
> -/* 
> +/*
>   * xtensa/config/core-isa.h -- HAL definitions that are dependent on Xtensa
>   *				processor CORE configuration
>   *				processor CORE configuration
>   *
> diff --git a/tcg/sparc/tcg-target.inc.c b/tcg/sparc/tcg-target.inc.c
> index 65fddb310d..d856000c16 100644
> --- a/tcg/sparc/tcg-target.inc.c
> +++ b/tcg/sparc/tcg-target.inc.c
> @@ -988,7 +988,7 @@ static void build_trampolines(TCGContext *s)
>              /* Skip the oi argument.  */
>              ra += 1;
>          }
> -                
> +
>          /* Set the retaddr operand.  */
>          if (ra >= TCG_REG_O6) {
>              tcg_out_st(s, TCG_TYPE_PTR, TCG_REG_O7, TCG_REG_CALL_STACK,
> diff --git a/tcg/tcg.c b/tcg/tcg.c
> index 1362bc6101..45d15fe837 100644
> --- a/tcg/tcg.c
> +++ b/tcg/tcg.c
> @@ -872,7 +872,7 @@ void *tcg_malloc_internal(TCGContext *s, int size)
>  {
>      TCGPool *p;
>      int pool_size;
> -    
> +
>      if (size > TCG_POOL_CHUNK_SIZE) {
>          /* big malloc: insert a new pool (XXX: could optimize) */
>          p = g_malloc(sizeof(TCGPool) + size);
> @@ -893,7 +893,7 @@ void *tcg_malloc_internal(TCGContext *s, int size)
>                  p = g_malloc(sizeof(TCGPool) + pool_size);
>                  p->size = pool_size;
>                  p->next = NULL;
> -                if (s->pool_current) 
> +                if (s->pool_current)
>                      s->pool_current->next = p;
>                  else
>                      s->pool_first = p;
> @@ -3093,8 +3093,8 @@ static void dump_regs(TCGContext *s)
> 
>      for(i = 0; i < TCG_TARGET_NB_REGS; i++) {
>          if (s->reg_to_temp[i] != NULL) {
> -            printf("%s: %s\n", 
> -                   tcg_target_reg_names[i], 
> +            printf("%s: %s\n",
> +                   tcg_target_reg_names[i],
>                     tcg_get_arg_str_ptr(s, buf, sizeof(buf), s->reg_to_temp[i]));
>          }
>      }
> @@ -3111,7 +3111,7 @@ static void check_regs(TCGContext *s)
>          ts = s->reg_to_temp[reg];
>          if (ts != NULL) {
>              if (ts->val_type != TEMP_VAL_REG || ts->reg != reg) {
> -                printf("Inconsistency for register %s:\n", 
> +                printf("Inconsistency for register %s:\n",
>                         tcg_target_reg_names[reg]);
>                  goto fail;
>              }
> @@ -3648,14 +3648,14 @@ static void tcg_reg_alloc_op(TCGContext *s, const TCGOp *op)
>      nb_iargs = def->nb_iargs;
> 
>      /* copy constants */
> -    memcpy(new_args + nb_oargs + nb_iargs, 
> +    memcpy(new_args + nb_oargs + nb_iargs,
>             op->args + nb_oargs + nb_iargs,
>             sizeof(TCGArg) * def->nb_cargs);
> 
>      i_allocated_regs = s->reserved_regs;
>      o_allocated_regs = s->reserved_regs;
> 
> -    /* satisfy input constraints */ 
> +    /* satisfy input constraints */
>      for (k = 0; k < nb_iargs; k++) {
>          TCGRegSet i_preferred_regs, o_preferred_regs;
> 
> @@ -3713,7 +3713,7 @@ static void tcg_reg_alloc_op(TCGContext *s, const TCGOp *op)
>              /* nothing to do : the constraint is satisfied */
>          } else {
>          allocate_in_reg:
> -            /* allocate a new register matching the constraint 
> +            /* allocate a new register matching the constraint
>                 and move the temporary register into it */
>              temp_load(s, ts, tcg_target_available_regs[ts->type],
>                        i_allocated_regs, 0);
> @@ -3733,7 +3733,7 @@ static void tcg_reg_alloc_op(TCGContext *s, const TCGOp *op)
>          const_args[i] = 0;
>          tcg_regset_set_reg(i_allocated_regs, reg);
>      }
> -    
> +
>      /* mark dead temporaries and free the associated registers */
>      for (i = nb_oargs; i < nb_oargs + nb_iargs; i++) {
>          if (IS_DEAD_ARG(i)) {
> @@ -3745,7 +3745,7 @@ static void tcg_reg_alloc_op(TCGContext *s, const TCGOp *op)
>          tcg_reg_alloc_bb_end(s, i_allocated_regs);
>      } else {
>          if (def->flags & TCG_OPF_CALL_CLOBBER) {
> -            /* XXX: permit generic clobber register list ? */ 
> +            /* XXX: permit generic clobber register list ? */
>              for (i = 0; i < TCG_TARGET_NB_REGS; i++) {
>                  if (tcg_regset_test_reg(tcg_target_call_clobber_regs, i)) {
>                      tcg_reg_free(s, i, i_allocated_regs);
> @@ -3757,7 +3757,7 @@ static void tcg_reg_alloc_op(TCGContext *s, const TCGOp *op)
>                 an exception. */
>              sync_globals(s, i_allocated_regs);
>          }
> -        
> +
>          /* satisfy the output constraints */
>          for(k = 0; k < nb_oargs; k++) {
>              i = def->sorted_args[k];
> @@ -3849,7 +3849,7 @@ static void tcg_reg_alloc_call(TCGContext *s, TCGOp *op)
> 
>      /* assign stack slots first */
>      call_stack_size = (nb_iargs - nb_regs) * sizeof(tcg_target_long);
> -    call_stack_size = (call_stack_size + TCG_TARGET_STACK_ALIGN - 1) & 
> +    call_stack_size = (call_stack_size + TCG_TARGET_STACK_ALIGN - 1) &
>          ~(TCG_TARGET_STACK_ALIGN - 1);
>      allocate_args = (call_stack_size > TCG_STATIC_CALL_ARGS_SIZE);
>      if (allocate_args) {
> @@ -3874,7 +3874,7 @@ static void tcg_reg_alloc_call(TCGContext *s, TCGOp *op)
>          stack_offset += sizeof(tcg_target_long);
>  #endif
>      }
> -    
> +
>      /* assign input registers */
>      allocated_regs = s->reserved_regs;
>      for (i = 0; i < nb_regs; i++) {
> @@ -3907,14 +3907,14 @@ static void tcg_reg_alloc_call(TCGContext *s, TCGOp *op)
>              tcg_regset_set_reg(allocated_regs, reg);
>          }
>      }
> -    
> +
>      /* mark dead temporaries and free the associated registers */
>      for (i = nb_oargs; i < nb_iargs + nb_oargs; i++) {
>          if (IS_DEAD_ARG(i)) {
>              temp_dead(s, arg_temp(op->args[i]));
>          }
>      }
> -    
> +
>      /* clobber call registers */
>      for (i = 0; i < TCG_TARGET_NB_REGS; i++) {
>          if (tcg_regset_test_reg(tcg_target_call_clobber_regs, i)) {
> @@ -4317,7 +4317,7 @@ void tcg_dump_info(void)
>                  (double)s->code_out_len / tb_div_count);
>      qemu_printf("avg search data/TB  %0.1f\n",
>                  (double)s->search_out_len / tb_div_count);
> -   =20
> +
>      qemu_printf("cycles/op           %0.1f\n",
>                  s->op_count ? (double)tot / s->op_count : 0);
>      qemu_printf("cycles/in byte      %0.1f\n",
> diff --git a/tests/tcg/multiarch/test-mmap.c b/tests/tcg/multiarch/test-mmap.c
> index 11d0e777b1..0009e62855 100644
> --- a/tests/tcg/multiarch/test-mmap.c
> +++ b/tests/tcg/multiarch/test-mmap.c
> @@ -17,7 +17,7 @@
>   * but WITHOUT ANY WARRANTY; without even the implied warranty of
>   * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
>   * GNU General Public License for more details.
> - * 
> + *
>   * You should have received a copy of the GNU General Public License
>   * along with this program; if not, see <http://www.gnu.org/licenses/>.
>   */
> @@ -63,15 +63,15 @@ void check_aligned_anonymous_unfixed_mmaps(void)
>  		size_t len;
> 
>  		len = pagesize + (pagesize * i & 7);
> -		p1 = mmap(NULL, len, PROT_READ, 
> +		p1 = mmap(NULL, len, PROT_READ,
>  			  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
> -		p2 = mmap(NULL, len, PROT_READ, 
> +		p2 = mmap(NULL, len, PROT_READ,
>  			  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
> -		p3 = mmap(NULL, len, PROT_READ, 
> +		p3 = mmap(NULL, len, PROT_READ,
>  			  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
> -		p4 = mmap(NULL, len, PROT_READ, 
> +		p4 = mmap(NULL, len, PROT_READ,
>  			  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
> -		p5 = mmap(NULL, len, PROT_READ, 
> +		p5 = mmap(NULL, len, PROT_READ,
>  			  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
> 
>  		/* Make sure we get pages aligned with the pagesize. The
> @@ -118,7 +118,7 @@ void check_large_anonymous_unfixed_mmap(void)
>  	fprintf(stdout, "%s", __func__);
> 
>  	len = 0x02000000;
> -	p1 = mmap(NULL, len, PROT_READ, 
> +	p1 = mmap(NULL, len, PROT_READ,
>  		  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
> 
>  	/* Make sure we get pages aligned with the pagesize. The
> @@ -145,14 +145,14 @@ void check_aligned_anonymous_unfixed_colliding_mmaps(void)
>  	for (i = 0; i < 0x2fff; i++)
>  	{
>  		int nlen;
> -		p1 = mmap(NULL, pagesize, PROT_READ, 
> +		p1 = mmap(NULL, pagesize, PROT_READ,
>  			  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
>  		fail_unless (p1 != MAP_FAILED);
>  		p = (uintptr_t) p1;
>  		fail_unless ((p & pagemask) == 0);
>  		memcpy (dummybuf, p1, pagesize);
> 
> -		p2 = mmap(NULL, pagesize, PROT_READ, 
> +		p2 = mmap(NULL, pagesize, PROT_READ,
>  			  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
>  		fail_unless (p2 != MAP_FAILED);
>  		p = (uintptr_t) p2;
> @@ -162,12 +162,12 @@ void check_aligned_anonymous_unfixed_colliding_mmaps(void)
> 
>  		munmap (p1, pagesize);
>  		nlen = pagesize * 8;
> -		p3 = mmap(NULL, nlen, PROT_READ, 
> +		p3 = mmap(NULL, nlen, PROT_READ,
>  			  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
>  		fail_unless (p3 != MAP_FAILED);
> 
>  		/* Check if the mmaped areas collide.  */
> -		if (p3 < p2 
> +		if (p3 < p2
>  		    && (p3 + nlen) > p2)
>  			fail_unless (0);
> 
> @@ -191,7 +191,7 @@ void check_aligned_anonymous_fixed_mmaps(void)
>  	int i;
> 
>  	/* Find a suitable address to start with.  */
> -	addr = mmap(NULL, pagesize * 40, PROT_READ | PROT_WRITE, 
> +	addr = mmap(NULL, pagesize * 40, PROT_READ | PROT_WRITE,
>  		    MAP_PRIVATE | MAP_ANONYMOUS,
>  		    -1, 0);
>  	fprintf(stdout, "%s addr=%p", __func__, addr);
> @@ -200,10 +200,10 @@ void check_aligned_anonymous_fixed_mmaps(void)
>  	for (i = 0; i < 40; i++)
>  	{
>  		/* Create submaps within our unfixed map.  */
> -		p1 = mmap(addr, pagesize, PROT_READ, 
> +		p1 = mmap(addr, pagesize, PROT_READ,
>  			  MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED,
>  			  -1, 0);
> -		/* Make sure we get pages aligned with the pagesize. 
> +		/* Make sure we get pages aligned with the pagesize.
>  		   The target expects this.  */
>  		p = (uintptr_t) p1;
>  		fail_unless (p1 == addr);
> @@ -231,10 +231,10 @@ void check_aligned_anonymous_fixed_mmaps_collide_with_host(void)
>  	for (i = 0; i < 20; i++)
>  	{
>  		/* Create submaps within our unfixed map.  */
> -		p1 = mmap(addr, pagesize, PROT_READ | PROT_WRITE, 
> +		p1 = mmap(addr, pagesize, PROT_READ | PROT_WRITE,
>  			  MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED,
>  			  -1, 0);
> -		/* Make sure we get pages aligned with the pagesize. 
> +		/* Make sure we get pages aligned with the pagesize.
>  		   The target expects this.  */
>  		p = (uintptr_t) p1;
>  		fail_unless (p1 == addr);
> @@ -258,14 +258,14 @@ void check_file_unfixed_mmaps(void)
>  		size_t len;
> 
>  		len = pagesize;
> -		p1 = mmap(NULL, len, PROT_READ, 
> -			  MAP_PRIVATE, 
> +		p1 = mmap(NULL, len, PROT_READ,
> +			  MAP_PRIVATE,
>  			  test_fd, 0);
> -		p2 = mmap(NULL, len, PROT_READ, 
> -			  MAP_PRIVATE, 
> +		p2 = mmap(NULL, len, PROT_READ,
> +			  MAP_PRIVATE,
>  			  test_fd, pagesize);
> -		p3 = mmap(NULL, len, PROT_READ, 
> -			  MAP_PRIVATE, 
> +		p3 = mmap(NULL, len, PROT_READ,
> +			  MAP_PRIVATE,
>  			  test_fd, pagesize * 2);
> 
>  		fail_unless (p1 != MAP_FAILED);
> @@ -307,9 +307,9 @@ void check_file_unfixed_eof_mmaps(void)
>  	fprintf(stdout, "%s", __func__);
>  	for (i = 0; i < 0x10; i++)
>  	{
> -		p1 = mmap(NULL, pagesize, PROT_READ, 
> -			  MAP_PRIVATE, 
> -			  test_fd, 
> +		p1 = mmap(NULL, pagesize, PROT_READ,
> +			  MAP_PRIVATE,
> +			  test_fd,
>  			  (test_fsize - sizeof *p1) & ~pagemask);
> 
>  		fail_unless (p1 != MAP_FAILED);
> @@ -339,7 +339,7 @@ void check_file_fixed_eof_mmaps(void)
>  	int i;
> 
>  	/* Find a suitable address to start with.  */
> -	addr = mmap(NULL, pagesize * 44, PROT_READ, 
> +	addr = mmap(NULL, pagesize * 44, PROT_READ,
>  		    MAP_PRIVATE | MAP_ANONYMOUS,
>  		    -1, 0);
> 
> @@ -349,9 +349,9 @@ void check_file_fixed_eof_mmaps(void)
>  	for (i = 0; i < 0x10; i++)
>  	{
>  		/* Create submaps within our unfixed map.  */
> -		p1 = mmap(addr, pagesize, PROT_READ, 
> -			  MAP_PRIVATE | MAP_FIXED, 
> -			  test_fd, 
> +		p1 = mmap(addr, pagesize, PROT_READ,
> +			  MAP_PRIVATE | MAP_FIXED,
> +			  test_fd,
>  			  (test_fsize - sizeof *p1) & ~pagemask);
> 
>  		fail_unless (p1 != MAP_FAILED);
> @@ -381,7 +381,7 @@ void check_file_fixed_mmaps(void)
>  	int i;
> 
>  	/* Find a suitable address to start with.  */
> -	addr = mmap(NULL, pagesize * 40 * 4, PROT_READ, 
> +	addr = mmap(NULL, pagesize * 40 * 4, PROT_READ,
>  		    MAP_PRIVATE | MAP_ANONYMOUS,
>  		    -1, 0);
>  	fprintf(stdout, "%s addr=%p", __func__, (void *)addr);
> @@ -389,20 +389,20 @@ void check_file_fixed_mmaps(void)
> 
>  	for (i = 0; i < 40; i++)
>  	{
> -		p1 = mmap(addr, pagesize, PROT_READ, 
> +		p1 = mmap(addr, pagesize, PROT_READ,
>  			  MAP_PRIVATE | MAP_FIXED,
>  			  test_fd, 0);
> -		p2 = mmap(addr + pagesize, pagesize, PROT_READ, 
> +		p2 = mmap(addr + pagesize, pagesize, PROT_READ,
>  			  MAP_PRIVATE | MAP_FIXED,
>  			  test_fd, pagesize);
> -		p3 = mmap(addr + pagesize * 2, pagesize, PROT_READ, 
> +		p3 = mmap(addr + pagesize * 2, pagesize, PROT_READ,
>  			  MAP_PRIVATE | MAP_FIXED,
>  			  test_fd, pagesize * 2);
> -		p4 = mmap(addr + pagesize * 3, pagesize, PROT_READ, 
> +		p4 = mmap(addr + pagesize * 3, pagesize, PROT_READ,
>  			  MAP_PRIVATE | MAP_FIXED,
>  			  test_fd, pagesize * 3);
> 
> -		/* Make sure we get pages aligned with the pagesize. 
> +		/* Make sure we get pages aligned with the pagesize.
>  		   The target expects this.  */
>  		fail_unless (p1 == (void *)addr);
>  		fail_unless (p2 == (void *)addr + pagesize);
> @@ -479,7 +479,7 @@ int main(int argc, char **argv)
>          checked_write(test_fd, &i, sizeof i);
>      }
> 
> -	/* Append a few extra writes to make the file end at non 
> +	/* Append a few extra writes to make the file end at non
>  	   page boundary.  */
>      checked_write(test_fd, &i, sizeof i); i++;
>      checked_write(test_fd, &i, sizeof i); i++;
> diff --git a/ui/curses.c b/ui/curses.c
> index a59b23a9cf..ba362eb927 100644
> --- a/ui/curses.c
> +++ b/ui/curses.c
> @@ -1,8 +1,8 @@
>  /*
>   * QEMU curses/ncurses display driver
> - * 
> + *
>   * Copyright (c) 2005 Andrzej Zaborowski  <balrog@zabor.org>
> - * 
> + *
>   * Permission is hereby granted, free of charge, to any person obtaining a copy
>   * of this software and associated documentation files (the "Software"), to deal
>   * in the Software without restriction, including without limitation the rights
> diff --git a/ui/curses_keys.h b/ui/curses_keys.h
> index 71e04acdc7..8b62258756 100644
> --- a/ui/curses_keys.h
> +++ b/ui/curses_keys.h
> @@ -1,8 +1,8 @@
>  /*
>   * Keycode and keysyms conversion tables for curses
> - * 
> + *
>   * Copyright (c) 2005 Andrzej Zaborowski  <balrog@zabor.org>
> - * 
> + *
>   * Permission is hereby granted, free of charge, to any person obtaining a copy
>   * of this software and associated documentation files (the "Software"), to deal
>   * in the Software without restriction, including without limitation the rights
> diff --git a/util/cutils.c b/util/cutils.c
> index 36ce712271..ce4f756bd9 100644
> --- a/util/cutils.c
> +++ b/util/cutils.c
> @@ -142,7 +142,7 @@ time_t mktimegm(struct tm *tm)
>          m += 12;
>          y--;
>      }
> -    t = 86400ULL * (d + (153 * m - 457) / 5 + 365 * y + y / 4 - y / 100 + 
> +    t = 86400ULL * (d + (153 * m - 457) / 5 + 365 * y + y / 4 - y / 100 +
>                   y / 400 - 719469);
>      t += 3600 * tm->tm_hour + 60 * tm->tm_min + tm->tm_sec;
>      return t;
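As an aside for reviewers, the Gregorian day-count arithmetic in the mktimegm hunk above is easy to sanity-check on its own; a minimal sketch in Python (the formula is transcribed from the diff, and the function name simply mirrors the C one):

```python
import calendar

def mktimegm(year, mon, mday, hour=0, minute=0, sec=0):
    """Transcription of the arithmetic in util/cutils.c:mktimegm."""
    y, m, d = year, mon, mday
    # January/February count as months 13/14 of the previous year,
    # so the leap day lands at the end of the shifted "year".
    if m < 3:
        m += 12
        y -= 1
    # (153*m - 457)/5 accumulates the 31/30-day month-length pattern
    # starting from March; 719469 shifts the Gregorian day count so
    # that 1970-01-01 maps to day 0. All terms are positive for sane
    # inputs, so Python's // matches C's truncating division here.
    t = 86400 * (d + (153 * m - 457) // 5 + 365 * y
                 + y // 4 - y // 100 + y // 400 - 719469)
    return t + 3600 * hour + 60 * minute + sec

# Agrees with the standard library for dates in the Unix epoch range:
assert mktimegm(2020, 7, 7) == calendar.timegm((2020, 7, 7, 0, 0, 0))
```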

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson

--FsscpQKzF/jJk6ya
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEEdfRlhq5hpmzETofcbDjKyiDZs5IFAl8EMLEACgkQbDjKyiDZ
s5Kf1xAA1en6tQVJPNGgcUEkR4Pd+eCvDk8Q38S0McpvlnN7HgXGRvfJUQi91gLh
+gGdu9X8fpwrDO8WRQLIgCWzNAqyDGPKf1Z/W4O72iZFxMoMyMsnAIA9e1rKyHTx
7zqpcCfu2h+ZqS9aMkSfyOMo0H56Zp9DyVVKvpmy3w3/L2M32kTYayCCXJdetZ8i
7n7Racb4C4B7XHsOgjECOkyIlPrSyZfY4B7SajEhNBR7Lut3+84HR8Af7VShv88L
CJynHfZV5GM6KQ1+jKA7w/hpSBcmk6VSpA0DzxbRIgnb1EFqRmNQSxkCPDqYrDej
5nesFgidH8bpWWU2t1kuCS7hzN5dabOSOF6caPqpwrdAO68QhB34QcrK36fXqrWP
b4lqMwwQ1oqDtWLD1vE/rBOVe1n/So5UgPn1SyX6SlVkt4yxpocg3kffDEVMKjV2
MKNktOr8xOKvCaltdEFvBnZeYuCMcQhkn7uAKRpAfZr6fb+i6UoE+GP0+uacXCCr
WnUlakE9ADEO976m1GxtkwcB9xoVbcssfOfrEHpjqOzpnv1CWqpvHRMlVnAxp+OD
1gmr85Rkup+uQawxW5dwDOgjCsBm3WK5JqH3DgbPydeXZeuITJY99nl4LnB2SltH
O7QUmeUz7kELdrayohj1+5hOWPUhh/tjWU7nti5kPfdYMp6zce4=
=Glb/
-----END PGP SIGNATURE-----

--FsscpQKzF/jJk6ya--


From xen-devel-bounces@lists.xenproject.org Tue Jul 07 09:28:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jul 2020 09:28:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jsjuD-0006Z0-8g; Tue, 07 Jul 2020 09:28:49 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+gHK=AS=in.bosch.com=manikandan.chockalingam@srs-us1.protection.inumbo.net>)
 id 1jsjuB-0006Yt-U4
 for xen-devel@lists.xen.org; Tue, 07 Jul 2020 09:28:47 +0000
X-Inumbo-ID: 41784854-c034-11ea-b7bb-bc764e2007e4
Received: from de-out1.bosch-org.com (unknown [139.15.230.186])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 41784854-c034-11ea-b7bb-bc764e2007e4;
 Tue, 07 Jul 2020 09:28:46 +0000 (UTC)
Received: from si0vm1947.rbesz01.com
 (lb41g3-ha-dmz-psi-sl1-mailout.fe.ssn.bosch.com [139.15.230.188])
 by si0vms0217.rbdmz01.com (Postfix) with ESMTPS id 4B1HDs3Wzrz4f3m1c;
 Tue,  7 Jul 2020 11:28:45 +0200 (CEST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=in.bosch.com;
 s=key1-intmail; t=1594114125;
 bh=0flT9FbF1bnILmJA1WrUz6hc6gUFrxwO0XIL8WgUsdU=; l=10;
 h=From:Subject:From:Reply-To:Sender;
 b=NO77WDT8ctv9ACX0J9utXSQIZyT7icNwMUiclPVvijEBkeZgIMzTuqBWdcAmsxTuB
 asdY4wiU9/lmqgByPj1EUUHEFxaTd7Vp4ded03zD+qno8v2bn2lZrTwGuRMvwmSkT9
 8kbhJHEnAumDnPEMBT6eMMGnHgjujDvxytGBUVDE=
Received: from si0vm2082.rbesz01.com (unknown [10.58.172.176])
 by si0vm1947.rbesz01.com (Postfix) with ESMTPS id 4B1HDs389zz6CjTFn;
 Tue,  7 Jul 2020 11:28:45 +0200 (CEST)
X-AuditID: 0a3aad16-845ff700000077c5-2a-5f04404db267
Received: from fe0vm1651.rbesz01.com ( [10.58.173.29])
 (using TLS with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (Client did not present a certificate)
 by si0vm2082.rbesz01.com (SMG Outbound) with SMTP id A6.ED.30661.D40440F5;
 Tue,  7 Jul 2020 11:28:45 +0200 (CEST)
Received: from FE-MBX2044.de.bosch.com (fe-mbx2044.de.bosch.com [10.3.231.54])
 by fe0vm1651.rbesz01.com (Postfix) with ESMTPS id 4B1HDs2Mq0zvlC;
 Tue,  7 Jul 2020 11:28:45 +0200 (CEST)
Received: from SGPMBX2022.APAC.bosch.com (10.187.83.37) by
 FE-MBX2044.de.bosch.com (10.3.231.54) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.1.1979.3; Tue, 7 Jul 2020 11:28:44 +0200
Received: from SGPMBX2022.APAC.bosch.com (10.187.83.37) by
 SGPMBX2022.APAC.bosch.com (10.187.83.37) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.1.1979.3; Tue, 7 Jul 2020 17:28:42 +0800
Received: from SGPMBX2022.APAC.bosch.com ([fe80::2d4d:b176:b210:896]) by
 SGPMBX2022.APAC.bosch.com ([fe80::2d4d:b176:b210:896%6]) with mapi id
 15.01.1979.003; Tue, 7 Jul 2020 17:28:42 +0800
From: "Manikandan Chockalingam (RBEI/ECF3)"
 <Manikandan.Chockalingam@in.bosch.com>
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
Subject: RE: [BUG] Xen build for RCAR failing
Thread-Topic: [BUG] Xen build for RCAR failing
Thread-Index: AdZUKc5JeR7gPpESR52uLkZK1kYwOwAEsnEAAAD8OlA=
Date: Tue, 7 Jul 2020 09:28:42 +0000
Message-ID: <ab84437081a346d6bf0f73581382c74e@in.bosch.com>
References: <1b60ed1cd7834ed5957a2b4870602073@in.bosch.com>
 <1D0E7281-95D7-482E-BF6D-EE5B1FEE4876@arm.com>
In-Reply-To: <1D0E7281-95D7-482E-BF6D-EE5B1FEE4876@arm.com>
Accept-Language: en-US, en-SG
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.187.56.214]
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: nd <nd@arm.com>, "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hello Bertrand,

Thanks for your quick response. I tried switching to the dunfell branch, and the build gives a parse error, as below.

bitbake core-image-weston
ERROR: ParseError in /home/manikandan/yocto_2.19/build/meta-virtualization/classes/: not a BitBake file

Are any additional changes required here?

Mit freundlichen Grüßen / Best regards

 Chockalingam Manikandan

ES-CM Core fn,ADIT (RBEI/ECF3)
Robert Bosch GmbH | Postfach 10 60 50 | 70049 Stuttgart | GERMANY | www.bosch.com
Tel. +91 80 6136-4452 | Fax +91 80 6617-0711 | Manikandan.Chockalingam@in.bosch.com

Registered Office: Stuttgart, Registration Court: Amtsgericht Stuttgart, HRB 14000;
Chairman of the Supervisory Board: Franz Fehrenbach; Managing Directors: Dr. Volkmar Denner,
Prof. Dr. Stefan Asenkerschbaumer, Dr. Michael Bolle, Dr. Christian Fischer, Dr. Stefan Hartung,
Dr. Markus Heyn, Harald Kröger, Christoph Kübel, Rolf Najork, Uwe Raschke, Peter Tyroller


-----Original Message-----
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
Sent: Tuesday, July 7, 2020 2:27 PM
To: Manikandan Chockalingam (RBEI/ECF3) <Manikandan.Chockalingam@in.bosch.com>
Cc: xen-devel@lists.xen.org; nd <nd@arm.com>
Subject: Re: [BUG] Xen build for RCAR failing



> On 7 Jul 2020, at 08:58, Manikandan Chockalingam (RBEI/ECF3) <Manikandan.Chockalingam@in.bosch.com> wrote:
> 
> Hello Team,
> 
> I am trying to build the Xen hypervisor for RCAR, following the https://wiki.xenproject.org/wiki/Xen_ARM_with_Virtualization_Extensions/Salvator-X steps.
> 
> But I am facing the following issues:
> 1.      SRC_URI="http://v3.sk/~lkundrak/dev86/archive/Dev86src-${PV}.tar.gz in recipes-extended/dev86/dev86_0.16.20.bb is not accessible
> Modification done: SRC_URI=https://src.fedoraproject.org/lookaside/extras/dev86/Dev86src-0.16.20.tar.gz/567cf460d132f9d8775dd95f9208e49a/Dev86src-${PV}.tar.gz
> 2.      LIC_FILES_CHKSUM changed in recipes-extended/xen/xen.inc
> 3.      QA Issue: xen: Files/directories were installed but not shipped in any package:
>   /usr/bin/vchan-socket-proxy
>   /usr/sbin/xenmon
>   /usr/sbin/xenhypfs
>   /usr/lib/libxenfsimage.so.4.14.0
>   /usr/lib/libxenhypfs.so.1
>   /usr/lib/libxenfsimage.so
>   /usr/lib/libxenhypfs.so.1.0
>   /usr/lib/libxenfsimage.so.4.14
>   /usr/lib/libxenhypfs.so
>   /usr/lib/pkgconfig
>   /usr/lib/xen/bin/depriv-fd-checker
>   /usr/lib/pkgconfig/xenlight.pc
>   /usr/lib/pkgconfig/xenguest.pc
>   /usr/lib/pkgconfig/xenhypfs.pc
>   /usr/lib/pkgconfig/xlutil.pc
>   /usr/lib/pkgconfig/xentoolcore.pc
>   /usr/lib/pkgconfig/xentoollog.pc
>   /usr/lib/pkgconfig/xenstore.pc
>   /usr/lib/pkgconfig/xencall.pc
>   /usr/lib/pkgconfig/xencontrol.pc
>   /usr/lib/pkgconfig/xendevicemodel.pc
>   /usr/lib/pkgconfig/xenstat.pc
>   /usr/lib/pkgconfig/xengnttab.pc
>   /usr/lib/pkgconfig/xenevtchn.pc
>   /usr/lib/pkgconfig/xenvchan.pc
>   /usr/lib/pkgconfig/xenforeignmemory.pc
>   /usr/lib/xenfsimage/fat/fsimage.so
>   /usr/lib/xenfsimage/iso9660/fsimage.so
>   /usr/lib/xenfsimage/reiserfs/fsimage.so
>   /usr/lib/xenfsimage/ufs/fsimage.so
>   /usr/lib/xenfsimage/ext2fs-lib/fsimage.so
>   /usr/lib/xenfsimage/zfs/fsimage.so
> Please set FILES such that these items are packaged. Alternatively if they are unneeded, avoid installing them or delete them within do_install.
> xen: 32 installed and not shipped files. [installed-vs-shipped]
> ERROR: xen-unstable+gitAUTOINC+be63d9d47f-r0 do_package: Fatal QA errors found, failing task.
> ERROR: xen-unstable+gitAUTOINC+be63d9d47f-r0 do_package: Function failed: do_package
> ERROR: Logfile of failure stored in: /home/manikandan/yocto_2.19/build/build/tmp/work/aarch64-poky-linux/xen/unstable+gitAUTOINC+be63d9d47f-r0/temp/log.do_package.17889
> ERROR: Task 13 (/home/manikandan/yocto_2.19/build/meta-virtualization/recipes-extended/xen/xen_git.bb, do_package) failed with exit code '1'

The configuration from that page uses a rather old release of Yocto.
I would suggest switching to dunfell, which uses Xen 4.12.

The current master of Yocto is not building at the moment.
Christopher Clark has done some work on it here [1] in meta-virtualization, but this is not merged yet.

If you are trying to build Xen master by modifying a recipe, you will run into problems: components such as hypfs have been added, and they cause the errors you see when Yocto checks that everything installed was also shipped.

Bertrand

[1] https://lists.yoctoproject.org/g/meta-virtualization/message/5495
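For anyone hitting the installed-vs-shipped QA failure above before an updated recipe lands, the new files can be declared from a bbappend; this is only a rough sketch — the paths come from the error log, while the bbappend name and the packaging split are assumptions, not the merged fix:

```
# xen_git.bbappend (hypothetical) -- ship the artifacts added in newer Xen
FILES_${PN} += " \
    /usr/bin/vchan-socket-proxy \
    /usr/sbin/xenmon \
    /usr/sbin/xenhypfs \
    /usr/lib/libxenfsimage.so.* \
    /usr/lib/libxenhypfs.so.* \
    /usr/lib/xenfsimage \
    /usr/lib/xen/bin/depriv-fd-checker \
"
# Unversioned .so symlinks and pkg-config data conventionally belong
# in the -dev package:
FILES_${PN}-dev += " \
    /usr/lib/libxenfsimage.so \
    /usr/lib/libxenhypfs.so \
    /usr/lib/pkgconfig \
"
```

Equivalently, files that are genuinely unneeded can be deleted in a `do_install_append()`, as the QA message itself suggests.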



From xen-devel-bounces@lists.xenproject.org Tue Jul 07 09:32:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jul 2020 09:32:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jsjy2-0007Mu-QV; Tue, 07 Jul 2020 09:32:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YS/2=AS=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1jsjy1-0007Mm-3O
 for xen-devel@lists.xen.org; Tue, 07 Jul 2020 09:32:45 +0000
X-Inumbo-ID: cf03f6e6-c034-11ea-bb8b-bc764e2007e4
Received: from EUR02-HE1-obe.outbound.protection.outlook.com (unknown
 [40.107.1.87]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id cf03f6e6-c034-11ea-bb8b-bc764e2007e4;
 Tue, 07 Jul 2020 09:32:44 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com; 
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=NLX/AhMbkhWFC2C3uVjyTAejozWL1ODt5MuHslyzkZA=;
 b=C9ySjCriTnZfQNYzmr9QtxOxRmSMS26vS7v3fmflEGHZII6m70/Yo5HirL29p3fGjrO+cpbqT5vdJo1LKLMI0T434kvapL1IwzDyBee1Z9mMUiDl7hHSe//WOBxqWeBY/PNJ5z5J2hXjqJUaZvlbvy/4g9kGDlxuCrzElwzXCf4=
Received: from PR3P189CA0007.EURP189.PROD.OUTLOOK.COM (2603:10a6:102:52::12)
 by AM6PR08MB3767.eurprd08.prod.outlook.com (2603:10a6:20b:84::28) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3153.21; Tue, 7 Jul
 2020 09:32:42 +0000
Received: from VE1EUR03FT021.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:102:52:cafe::2d) by PR3P189CA0007.outlook.office365.com
 (2603:10a6:102:52::12) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3153.23 via Frontend
 Transport; Tue, 7 Jul 2020 09:32:42 +0000
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xen.org; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;lists.xen.org; dmarc=bestguesspass action=none
 header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT021.mail.protection.outlook.com (10.152.18.117) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3153.24 via Frontend Transport; Tue, 7 Jul 2020 09:32:41 +0000
Received: ("Tessian outbound 4e683f4039d5:v62");
 Tue, 07 Jul 2020 09:32:41 +0000
X-CheckRecipientChecked: true
X-CR-MTA-CID: 82f1e06f8c9aead7
X-CR-MTA-TID: 64aa7808
Received: from 66edd43d0094.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 0270E046-079D-4288-B40E-A1AA269BDB94.1; 
 Tue, 07 Jul 2020 09:32:36 +0000
Received: from EUR01-HE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 66edd43d0094.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Tue, 07 Jul 2020 09:32:36 +0000
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com; 
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=NLX/AhMbkhWFC2C3uVjyTAejozWL1ODt5MuHslyzkZA=;
 b=C9ySjCriTnZfQNYzmr9QtxOxRmSMS26vS7v3fmflEGHZII6m70/Yo5HirL29p3fGjrO+cpbqT5vdJo1LKLMI0T434kvapL1IwzDyBee1Z9mMUiDl7hHSe//WOBxqWeBY/PNJ5z5J2hXjqJUaZvlbvy/4g9kGDlxuCrzElwzXCf4=
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DB6PR0802MB2166.eurprd08.prod.outlook.com (2603:10a6:4:85::12) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3153.28; Tue, 7 Jul
 2020 09:32:34 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::4001:43ad:d113:46a8]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::4001:43ad:d113:46a8%5]) with mapi id 15.20.3153.029; Tue, 7 Jul 2020
 09:32:34 +0000
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: "Manikandan Chockalingam (RBEI/ECF3)"
 <Manikandan.Chockalingam@in.bosch.com>
Subject: Re: [BUG] Xen build for RCAR failing
Thread-Topic: [BUG] Xen build for RCAR failing
Thread-Index: AdZUKc5JeR7gPpESR52uLkZK1kYwOwAEsnEAAAD8OlAAAEBtgA==
Date: Tue, 7 Jul 2020 09:32:34 +0000
Message-ID: <D84A5DA7-683C-480B-8837-C51D560FC2E1@arm.com>
References: <1b60ed1cd7834ed5957a2b4870602073@in.bosch.com>
 <1D0E7281-95D7-482E-BF6D-EE5B1FEE4876@arm.com>
 <ab84437081a346d6bf0f73581382c74e@in.bosch.com>
In-Reply-To: <ab84437081a346d6bf0f73581382c74e@in.bosch.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
Authentication-Results-Original: in.bosch.com; dkim=none (message not signed)
 header.d=none; in.bosch.com;
 dmarc=none action=none header.from=arm.com; 
x-originating-ip: [82.24.250.194]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 8eb80588-e6ac-4420-efe3-08d82258b254
x-ms-traffictypediagnostic: DB6PR0802MB2166:|AM6PR08MB3767:
X-Microsoft-Antispam-PRVS: <AM6PR08MB37677AE749CF8538A091FC2C9D660@AM6PR08MB3767.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:4941;OLM:4941;
x-forefront-prvs: 0457F11EAF
X-MS-Exchange-SenderADCheck: 1
Content-Type: text/plain; charset="us-ascii"
Content-ID: <357DED9AE18EFB40A90A0DA46882C0F3@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB6PR0802MB2166
Original-Authentication-Results: in.bosch.com; dkim=none (message not signed)
 header.d=none; in.bosch.com;
 dmarc=none action=none header.from=arm.com; 
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped: VE1EUR03FT021.eop-EUR03.prod.protection.outlook.com
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 07 Jul 2020 09:32:41.6083 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 8eb80588-e6ac-4420-efe3-08d82258b254
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d; Ip=[63.35.35.123];
 Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource: VE1EUR03FT021.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM6PR08MB3767
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: nd <nd@arm.com>, "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi,

> On 7 Jul 2020, at 10:28, Manikandan Chockalingam (RBEI/ECF3) <Manikandan.Chockalingam@in.bosch.com> wrote:
> 
> Hello Bertrand,
> 
> Thanks for your quick response. I tried switching to the dunfell branch, and the build gives a parse error as below.
> 
> bitbake core-image-weston
> ERROR: ParseError in /home/manikandan/yocto_2.19/build/meta-virtualization/classes/: not a BitBake file
> 
> Are there any additional changes required here?

I do not see this on my side when building with dunfell.
You might need to restart from a fresh checkout and build (you need the dunfell branch on all layers).

Bertrand



From xen-devel-bounces@lists.xenproject.org Tue Jul 07 09:35:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jul 2020 09:35:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jsk0X-0007Uk-8l; Tue, 07 Jul 2020 09:35:21 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=E+9W=AS=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jsk0W-0007Uf-Er
 for xen-devel@lists.xenproject.org; Tue, 07 Jul 2020 09:35:20 +0000
X-Inumbo-ID: 2c261174-c035-11ea-8496-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2c261174-c035-11ea-8496-bc764e2007e4;
 Tue, 07 Jul 2020 09:35:20 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 7046AAC52;
 Tue,  7 Jul 2020 09:35:19 +0000 (UTC)
Subject: Re: [PATCH v2 1/3] xen/privcmd: Corrected error handling path
To: Souptick Joarder <jrdr.linux@gmail.com>, boris.ostrovsky@oracle.com,
 sstabellini@kernel.org
References: <1594059372-15563-1-git-send-email-jrdr.linux@gmail.com>
 <1594059372-15563-2-git-send-email-jrdr.linux@gmail.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <4bafb184-6f07-2582-3d0f-86fb53dd30dc@suse.com>
Date: Tue, 7 Jul 2020 11:35:18 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.9.0
MIME-Version: 1.0
In-Reply-To: <1594059372-15563-2-git-send-email-jrdr.linux@gmail.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org,
 Paul Durrant <xadimgnik@gmail.com>, John Hubbard <jhubbard@nvidia.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 06.07.20 20:16, Souptick Joarder wrote:
> Previously, if lock_pages() ended up partially mapping pages, it
> returned -ERRNO, due to which unlock_pages() had to go through each
> pages[i] up to *nr_pages* to validate them. This can be avoided by
> passing the number of partially mapped pages and -ERRNO separately
> when returning from lock_pages() on error.
> 
> With this fix, unlock_pages() no longer needs to validate pages[i] up
> to *nr_pages* in the error scenario, and a few condition checks can
> be dropped.
> 
> Signed-off-by: Souptick Joarder <jrdr.linux@gmail.com>
> Cc: John Hubbard <jhubbard@nvidia.com>
> Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
> Cc: Paul Durrant <xadimgnik@gmail.com>
> ---
>   drivers/xen/privcmd.c | 31 +++++++++++++++----------------
>   1 file changed, 15 insertions(+), 16 deletions(-)
> 
> diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
> index a250d11..33677ea 100644
> --- a/drivers/xen/privcmd.c
> +++ b/drivers/xen/privcmd.c
> @@ -580,13 +580,13 @@ static long privcmd_ioctl_mmap_batch(
>   
>   static int lock_pages(
>   	struct privcmd_dm_op_buf kbufs[], unsigned int num,
> -	struct page *pages[], unsigned int nr_pages)
> +	struct page *pages[], unsigned int nr_pages, unsigned int *pinned)
>   {
>   	unsigned int i;
> +	int page_count = 0;

Initial value shouldn't be needed, and ...

>   
>   	for (i = 0; i < num; i++) {
>   		unsigned int requested;
> -		int pinned;

... you could move the declaration here.

With that done you can add my

Reviewed-by: Juergen Gross <jgross@suse.com>


Juergen


From xen-devel-bounces@lists.xenproject.org Tue Jul 07 09:35:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jul 2020 09:35:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jsk0c-0007VL-Gg; Tue, 07 Jul 2020 09:35:26 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=kqME=AS=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jsk0b-0007Uf-Dm
 for xen-devel@lists.xenproject.org; Tue, 07 Jul 2020 09:35:25 +0000
X-Inumbo-ID: 2ef7305e-c035-11ea-bb8b-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2ef7305e-c035-11ea-bb8b-bc764e2007e4;
 Tue, 07 Jul 2020 09:35:24 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 4DDC3AFB8;
 Tue,  7 Jul 2020 09:35:24 +0000 (UTC)
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] x86emul: avoid assembler warning about .type not taking
 effect in test harness
Message-ID: <4a0f9e7d-53f1-b77f-e8a9-a75483884a6f@suse.com>
Date: Tue, 7 Jul 2020 11:35:23 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Paul Durrant <paul@xen.org>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

gcc 9.3 started to re-order top level blocks by default when optimizing.
This re-ordering results in all our .type directives getting emitted to
the assembly file first, followed by gcc's. The assembler warns about
attempts to change the type of a symbol when it was already set (and
when there's no intervening setting to "notype").

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/tools/tests/x86_emulator/Makefile
+++ b/tools/tests/x86_emulator/Makefile
@@ -295,4 +295,9 @@ x86-emulate.o cpuid.o test_x86_emulator.
 x86-emulate.o: x86_emulate/x86_emulate.c
 x86-emulate.o: HOSTCFLAGS += -D__XEN_TOOLS__
 
+# In order for our custom .type assembler directives to reliably land after
+# gcc's, we need to keep it from re-ordering top-level constructs.
+$(call cc-option-add,HOSTCFLAGS-toplevel,HOSTCC,-fno-toplevel-reorder)
+test_x86_emulator.o: HOSTCFLAGS += $(HOSTCFLAGS-toplevel)
+
 test_x86_emulator.o: $(addsuffix .h,$(TESTCASES)) $(addsuffix -opmask.h,$(OPMASK))


From xen-devel-bounces@lists.xenproject.org Tue Jul 07 09:38:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jul 2020 09:38:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jsk42-0007kQ-1I; Tue, 07 Jul 2020 09:38:58 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=E+9W=AS=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jsk40-0007kL-LA
 for xen-devel@lists.xenproject.org; Tue, 07 Jul 2020 09:38:56 +0000
X-Inumbo-ID: acfe9118-c035-11ea-8d3d-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id acfe9118-c035-11ea-8d3d-12813bfff9fa;
 Tue, 07 Jul 2020 09:38:56 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id B7BE1AB3D;
 Tue,  7 Jul 2020 09:38:55 +0000 (UTC)
Subject: Re: [PATCH v2 2/3] xen/privcmd: Mark pages as dirty
To: Souptick Joarder <jrdr.linux@gmail.com>, boris.ostrovsky@oracle.com,
 sstabellini@kernel.org
References: <1594059372-15563-1-git-send-email-jrdr.linux@gmail.com>
 <1594059372-15563-3-git-send-email-jrdr.linux@gmail.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <8fdd8c77-27dd-2847-7929-b5d3098b1b45@suse.com>
Date: Tue, 7 Jul 2020 11:38:53 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.9.0
MIME-Version: 1.0
In-Reply-To: <1594059372-15563-3-git-send-email-jrdr.linux@gmail.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org,
 Paul Durrant <xadimgnik@gmail.com>, John Hubbard <jhubbard@nvidia.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 06.07.20 20:16, Souptick Joarder wrote:
> Pages need to be marked as dirty before being unpinned in
> unlock_pages(), which was an oversight. This is fixed now.
> 
> Signed-off-by: Souptick Joarder <jrdr.linux@gmail.com>
> Suggested-by: John Hubbard <jhubbard@nvidia.com>
> Cc: John Hubbard <jhubbard@nvidia.com>
> Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
> Cc: Paul Durrant <xadimgnik@gmail.com>
> ---
>   drivers/xen/privcmd.c | 5 ++++-
>   1 file changed, 4 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
> index 33677ea..f6c1543 100644
> --- a/drivers/xen/privcmd.c
> +++ b/drivers/xen/privcmd.c
> @@ -612,8 +612,11 @@ static void unlock_pages(struct page *pages[], unsigned int nr_pages)
>   {
>   	unsigned int i;
>   
> -	for (i = 0; i < nr_pages; i++)
> +	for (i = 0; i < nr_pages; i++) {
> +		if (!PageDirty(pages[i]))
> +			set_page_dirty_lock(pages[i]);

With put_page() directly following, I think you should be able to use
set_page_dirty() instead, as there is obviously still a reference to
the page.

>   		put_page(pages[i]);
> +	}
>   }
>   
>   static long privcmd_ioctl_dm_op(struct file *file, void __user *udata)
> 

Juergen


From xen-devel-bounces@lists.xenproject.org Tue Jul 07 09:56:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jul 2020 09:56:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jskKW-0000yC-LX; Tue, 07 Jul 2020 09:56:00 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ryuI=AS=vivier.eu=laurent@srs-us1.protection.inumbo.net>)
 id 1jskKU-0000y7-RP
 for xen-devel@lists.xenproject.org; Tue, 07 Jul 2020 09:55:58 +0000
X-Inumbo-ID: 0a0ba290-c038-11ea-bca7-bc764e2007e4
Received: from mout.kundenserver.de (unknown [212.227.17.24])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0a0ba290-c038-11ea-bca7-bc764e2007e4;
 Tue, 07 Jul 2020 09:55:51 +0000 (UTC)
Received: from [192.168.100.1] ([82.252.135.106]) by mrelayeu.kundenserver.de
 (mreue108 [213.165.67.119]) with ESMTPSA (Nemesis) id
 1MsZ3N-1kmdxS2RYp-00u2nJ; Tue, 07 Jul 2020 11:55:23 +0200
Subject: Re: [PATCH] trivial: Remove trailing whitespaces
To: Christophe de Dinechin <dinechin@redhat.com>, qemu-devel@nongnu.org
References: <20200706162300.1084753-1-dinechin@redhat.com>
From: Laurent Vivier <laurent@vivier.eu>
Message-ID: <162d20a6-a29d-ccbc-7c72-73269833546b@vivier.eu>
Date: Tue, 7 Jul 2020 11:55:11 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.9.0
MIME-Version: 1.0
In-Reply-To: <20200706162300.1084753-1-dinechin@redhat.com>
Content-Type: text/plain; charset=utf-8
Content-Language: fr
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Peter Maydell <peter.maydell@linaro.org>,
 Dmitry Fleytman <dmitry.fleytman@gmail.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>,
 Michael Roth <mdroth@linux.vnet.ibm.com>, Max Filippov <jcmvbkbc@gmail.com>,
 Gerd Hoffmann <kraxel@redhat.com>,
 "Edgar E. Iglesias" <edgar.iglesias@gmail.com>, Max Reitz <mreitz@redhat.com>,
 Marek Vasut <marex@denx.de>, Stefano Stabellini <sstabellini@kernel.org>,
 qemu-block@nongnu.org, qemu-trivial@nongnu.org, Paul Durrant <paul@xen.org>,
 Magnus Damm <magnus.damm@gmail.com>, Markus Armbruster <armbru@redhat.com>,
 =?UTF-8?Q?Herv=c3=a9_Poussineau?= <hpoussin@reactos.org>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 Anthony Perard <anthony.perard@citrix.com>,
 =?UTF-8?Q?Marc-Andr=c3=a9_Lureau?= <marcandre.lureau@redhat.com>,
 Richard Henderson <rth@twiddle.net>, Andrzej Zaborowski <balrogg@gmail.com>,
 Artyom Tarasenko <atar4qemu@gmail.com>,
 Alistair Francis <alistair@alistair23.me>,
 Eduardo Habkost <ehabkost@redhat.com>, Michael Tokarev <mjt@tls.msk.ru>,
 Riku Voipio <riku.voipio@iki.fi>, Peter Lieven <pl@kamp.de>,
 "Dr. David Alan Gilbert" <dgilbert@redhat.com>,
 Roman Bolshakov <r.bolshakov@yadro.com>, qemu-arm@nongnu.org,
 Peter Chubb <peter.chubb@nicta.com.au>,
 Ronnie Sahlberg <ronniesahlberg@gmail.com>, xen-devel@lists.xenproject.org,
 =?UTF-8?Q?Alex_Benn=c3=a9e?= <alex.bennee@linaro.org>,
 David Gibson <david@gibson.dropbear.id.au>, Kevin Wolf <kwolf@redhat.com>,
 =?UTF-8?Q?Daniel_P=2e_Berrang=c3=a9?= <berrange@redhat.com>,
 Yoshinori Sato <ysato@users.sourceforge.jp>,
 =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@redhat.com>,
 Chris Wulff <crwulff@gmail.com>, Jean-Christophe Dubois <jcd@tribudubois.net>,
 qemu-ppc@nongnu.org, Paolo Bonzini <pbonzini@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 06/07/2020 at 18:23, Christophe de Dinechin wrote:
> A fair amount of unnecessary trailing whitespace has accumulated
> over time in the source code. It causes stray changes in patches if
> you use tools that automatically remove it.
> 
> Tested by doing a `git diff -w` after the change.
> 
> This could probably be turned into a pre-commit hook.
> 
> Signed-off-by: Christophe de Dinechin <dinechin@redhat.com>
> ---
>  block/iscsi.c                                 |   2 +-
>  disas/cris.c                                  |   2 +-
>  disas/microblaze.c                            |  80 +++---
>  disas/nios2.c                                 | 256 +++++++++---------
>  hmp-commands.hx                               |   2 +-
>  hw/alpha/typhoon.c                            |   6 +-
>  hw/arm/gumstix.c                              |   6 +-
>  hw/arm/omap1.c                                |   2 +-
>  hw/arm/stellaris.c                            |   2 +-
>  hw/char/etraxfs_ser.c                         |   2 +-
>  hw/core/ptimer.c                              |   2 +-
>  hw/cris/axis_dev88.c                          |   2 +-
>  hw/cris/boot.c                                |   2 +-
>  hw/display/qxl.c                              |   2 +-
>  hw/dma/etraxfs_dma.c                          |  18 +-
>  hw/dma/i82374.c                               |   2 +-
>  hw/i2c/bitbang_i2c.c                          |   2 +-
>  hw/input/tsc2005.c                            |   2 +-
>  hw/input/tsc210x.c                            |   2 +-
>  hw/intc/etraxfs_pic.c                         |   8 +-
>  hw/intc/sh_intc.c                             |  10 +-
>  hw/intc/xilinx_intc.c                         |   2 +-
>  hw/misc/imx25_ccm.c                           |   6 +-
>  hw/misc/imx31_ccm.c                           |   2 +-
>  hw/net/vmxnet3.h                              |   2 +-
>  hw/net/xilinx_ethlite.c                       |   2 +-
>  hw/pci/pcie.c                                 |   2 +-
>  hw/sd/omap_mmc.c                              |   2 +-
>  hw/sh4/shix.c                                 |   2 +-
>  hw/sparc64/sun4u.c                            |   2 +-
>  hw/timer/etraxfs_timer.c                      |   2 +-
>  hw/timer/xilinx_timer.c                       |   4 +-
>  hw/usb/hcd-musb.c                             |  10 +-
>  hw/usb/hcd-ohci.c                             |   6 +-
>  hw/usb/hcd-uhci.c                             |   2 +-
>  hw/virtio/virtio-pci.c                        |   2 +-
>  include/hw/cris/etraxfs_dma.h                 |   4 +-
>  include/hw/net/lance.h                        |   2 +-
>  include/hw/ppc/spapr.h                        |   2 +-
>  include/hw/xen/interface/io/ring.h            |  34 +--
>  include/qemu/log.h                            |   2 +-
>  include/qom/object.h                          |   4 +-
>  linux-user/cris/cpu_loop.c                    |  16 +-
>  linux-user/microblaze/cpu_loop.c              |  16 +-
>  linux-user/mmap.c                             |   8 +-
>  linux-user/sparc/signal.c                     |   4 +-
>  linux-user/syscall.c                          |  24 +-
>  linux-user/syscall_defs.h                     |   2 +-
>  linux-user/uaccess.c                          |   2 +-
>  os-posix.c                                    |   2 +-
>  qapi/qapi-util.c                              |   2 +-
>  qemu-img.c                                    |   2 +-
>  qemu-options.hx                               |  26 +-
>  qom/object.c                                  |   2 +-
>  target/cris/translate.c                       |  28 +-
>  target/cris/translate_v10.inc.c               |   6 +-
>  target/i386/hvf/hvf.c                         |   4 +-
>  target/i386/hvf/x86.c                         |   4 +-
>  target/i386/hvf/x86_decode.c                  |  20 +-
>  target/i386/hvf/x86_decode.h                  |   4 +-
>  target/i386/hvf/x86_descr.c                   |   2 +-
>  target/i386/hvf/x86_emu.c                     |   2 +-
>  target/i386/hvf/x86_mmu.c                     |   6 +-
>  target/i386/hvf/x86_task.c                    |   2 +-
>  target/i386/hvf/x86hvf.c                      |  42 +--
>  target/i386/translate.c                       |   8 +-
>  target/microblaze/mmu.c                       |   2 +-
>  target/microblaze/translate.c                 |   2 +-
>  target/sh4/op_helper.c                        |   4 +-
>  target/xtensa/core-de212/core-isa.h           |   6 +-
>  .../xtensa/core-sample_controller/core-isa.h  |   6 +-
>  target/xtensa/core-test_kc705_be/core-isa.h   |   2 +-
>  tcg/sparc/tcg-target.inc.c                    |   2 +-
>  tcg/tcg.c                                     |  32 +--
>  tests/tcg/multiarch/test-mmap.c               |  72 ++---
>  ui/curses.c                                   |   4 +-
>  ui/curses_keys.h                              |   4 +-
>  util/cutils.c                                 |   2 +-
>  78 files changed, 440 insertions(+), 440 deletions(-)
> 
> diff --git a/block/iscsi.c b/block/iscsi.c
> index a8b76979d8..884075f4e1 100644
> --- a/block/iscsi.c
> +++ b/block/iscsi.c
> @@ -1412,7 +1412,7 @@ static void iscsi_readcapacity_sync(IscsiLun *iscsilun, Error **errp)
>      struct scsi_task *task = NULL;
>      struct scsi_readcapacity10 *rc10 = NULL;
>      struct scsi_readcapacity16 *rc16 = NULL;
> -    int retries = ISCSI_CMD_RETRIES; 
> +    int retries = ISCSI_CMD_RETRIES;
>  
>      do {
>          if (task != NULL) {
> diff --git a/disas/cris.c b/disas/cris.c
> index 0b0a3fb916..a2be8f1412 100644
> --- a/disas/cris.c
> +++ b/disas/cris.c
> @@ -2569,7 +2569,7 @@ print_insn_cris_generic (bfd_vma memaddr,
>    nbytes = info->buffer_length ? info->buffer_length
>                                 : MAX_BYTES_PER_CRIS_INSN;
>    nbytes = MIN(nbytes, MAX_BYTES_PER_CRIS_INSN);
> -  status = (*info->read_memory_func) (memaddr, buffer, nbytes, info);  
> +  status = (*info->read_memory_func) (memaddr, buffer, nbytes, info);
>  
>    /* If we did not get all we asked for, then clear the rest.
>       Hopefully this makes a reproducible result in case of errors.  */
> diff --git a/disas/microblaze.c b/disas/microblaze.c
> index 0b89b9c4fa..6de66532f5 100644
> --- a/disas/microblaze.c
> +++ b/disas/microblaze.c
> @@ -15,15 +15,15 @@ You should have received a copy of the GNU General Public License
>  along with this program; if not, see <http://www.gnu.org/licenses/>. */
>  
>  /*
> - * Copyright (c) 2001 Xilinx, Inc.  All rights reserved. 
> + * Copyright (c) 2001 Xilinx, Inc.  All rights reserved.
>   *
>   * Redistribution and use in source and binary forms are permitted
>   * provided that the above copyright notice and this paragraph are
>   * duplicated in all such forms and that any documentation,
>   * advertising materials, and other materials related to such
>   * distribution and use acknowledge that the software was developed
> - * by Xilinx, Inc.  The name of the Company may not be used to endorse 
> - * or promote products derived from this software without specific prior 
> + * by Xilinx, Inc.  The name of the Company may not be used to endorse
> + * or promote products derived from this software without specific prior
>   * written permission.
>   * THIS SOFTWARE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
>   * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
> @@ -42,7 +42,7 @@ along with this program; if not, see <http://www.gnu.org/licenses/>. */
>  /* Assembler instructions for Xilinx's microblaze processor
>     Copyright (C) 1999, 2000 Free Software Foundation, Inc.
>  
> -   
> +
>  This program is free software; you can redistribute it and/or modify
>  it under the terms of the GNU General Public License as published by
>  the Free Software Foundation; either version 2 of the License, or
> @@ -57,15 +57,15 @@ You should have received a copy of the GNU General Public License
>  along with this program; if not, see <http://www.gnu.org/licenses/>.  */
>  
>  /*
> - * Copyright (c) 2001 Xilinx, Inc.  All rights reserved. 
> + * Copyright (c) 2001 Xilinx, Inc.  All rights reserved.
>   *
>   * Redistribution and use in source and binary forms are permitted
>   * provided that the above copyright notice and this paragraph are
>   * duplicated in all such forms and that any documentation,
>   * advertising materials, and other materials related to such
>   * distribution and use acknowledge that the software was developed
> - * by Xilinx, Inc.  The name of the Company may not be used to endorse 
> - * or promote products derived from this software without specific prior 
> + * by Xilinx, Inc.  The name of the Company may not be used to endorse
> + * or promote products derived from this software without specific prior
>   * written permission.
>   * THIS SOFTWARE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
>   * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
> @@ -79,15 +79,15 @@ along with this program; if not, see <http://www.gnu.org/licenses/>.  */
>  #define MICROBLAZE_OPCM
>  
>  /*
> - * Copyright (c) 2001 Xilinx, Inc.  All rights reserved. 
> + * Copyright (c) 2001 Xilinx, Inc.  All rights reserved.
>   *
>   * Redistribution and use in source and binary forms are permitted
>   * provided that the above copyright notice and this paragraph are
>   * duplicated in all such forms and that any documentation,
>   * advertising materials, and other materials related to such
>   * distribution and use acknowledge that the software was developed
> - * by Xilinx, Inc.  The name of the Company may not be used to endorse 
> - * or promote products derived from this software without specific prior 
> + * by Xilinx, Inc.  The name of the Company may not be used to endorse
> + * or promote products derived from this software without specific prior
>   * written permission.
>   * THIS SOFTWARE IS PROVIDED ``AS IS'' AND WITHOUT ANY EXPRESS OR
>   * IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
> @@ -108,8 +108,8 @@ enum microblaze_instr {
>     imm, rtsd, rtid, rtbd, rted, bri, brid, brlid, brai, braid, bralid,
>     brki, beqi, beqid, bnei, bneid, blti, bltid, blei, bleid, bgti,
>     bgtid, bgei, bgeid, lbu, lhu, lw, lwx, sb, sh, sw, swx, lbui, lhui, lwi,
> -   sbi, shi, swi, msrset, msrclr, tuqula, fadd, frsub, fmul, fdiv, 
> -   fcmp_lt, fcmp_eq, fcmp_le, fcmp_gt, fcmp_ne, fcmp_ge, fcmp_un, flt, fint, fsqrt, 
> +   sbi, shi, swi, msrset, msrclr, tuqula, fadd, frsub, fmul, fdiv,
> +   fcmp_lt, fcmp_eq, fcmp_le, fcmp_gt, fcmp_ne, fcmp_ge, fcmp_un, flt, fint, fsqrt,
>     tget, tcget, tnget, tncget, tput, tcput, tnput, tncput,
>     eget, ecget, neget, necget, eput, ecput, neput, necput,
>     teget, tecget, tneget, tnecget, teput, tecput, tneput, tnecput,
> @@ -182,7 +182,7 @@ enum microblaze_instr_type {
>  /* Assembler Register - Used in Delay Slot Optimization */
>  #define REG_AS    18
>  #define REG_ZERO  0
> - 
> +
>  #define RD_LOW  21 /* low bit for RD */
>  #define RA_LOW  16 /* low bit for RA */
>  #define RB_LOW  11 /* low bit for RB */
> @@ -258,7 +258,7 @@ enum microblaze_instr_type {
>  #define OPCODE_MASK_H24 0xFC1F07FF /* High 6, bits 20-16 and low 11 bits */
>  #define OPCODE_MASK_H124  0xFFFF07FF /* High 16, and low 11 bits */
>  #define OPCODE_MASK_H1234 0xFFFFFFFF /* All 32 bits */
> -#define OPCODE_MASK_H3  0xFC000600 /* High 6 bits and bits 21, 22 */  
> +#define OPCODE_MASK_H3  0xFC000600 /* High 6 bits and bits 21, 22 */
>  #define OPCODE_MASK_H32 0xFC00FC00 /* High 6 bits and bit 16-21 */
>  #define OPCODE_MASK_H34B   0xFC0000FF /* High 6 bits and low 8 bits */
>  #define OPCODE_MASK_H34C   0xFC0007E0 /* High 6 bits and bits 21-26 */
> @@ -277,14 +277,14 @@ static const struct op_code_struct {
>    short inst_offset_type; /* immediate vals offset from PC? (= 1 for branches) */
>    short delay_slots; /* info about delay slots needed after this instr. */
>    short immval_mask;
> -  unsigned long bit_sequence; /* all the fixed bits for the op are set and all the variable bits (reg names, imm vals) are set to 0 */ 
> +  unsigned long bit_sequence; /* all the fixed bits for the op are set and all the variable bits (reg names, imm vals) are set to 0 */
>    unsigned long opcode_mask; /* which bits define the opcode */
>    enum microblaze_instr instr;
>    enum microblaze_instr_type instr_type;
>    /* more info about output format here */
> -} opcodes[MAX_OPCODES] = 
> +} opcodes[MAX_OPCODES] =
>  
> -{ 
> +{
>    {"add",   INST_TYPE_RD_R1_R2, INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x00000000, OPCODE_MASK_H4, add, arithmetic_inst },
>    {"rsub",  INST_TYPE_RD_R1_R2, INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x04000000, OPCODE_MASK_H4, rsub, arithmetic_inst },
>    {"addc",  INST_TYPE_RD_R1_R2, INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x08000000, OPCODE_MASK_H4, addc, arithmetic_inst },
> @@ -437,7 +437,7 @@ static const struct op_code_struct {
>    {"tcput",  INST_TYPE_RFSL,    INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x6C00B000, OPCODE_MASK_H32, tcput,  anyware_inst },
>    {"tnput",  INST_TYPE_RFSL,    INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x6C00D000, OPCODE_MASK_H32, tnput,  anyware_inst },
>    {"tncput", INST_TYPE_RFSL,    INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x6C00F000, OPCODE_MASK_H32, tncput, anyware_inst },
> - 
> +
>    {"eget",   INST_TYPE_RD_RFSL,  INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x6C000400, OPCODE_MASK_H32, eget,   anyware_inst },
>    {"ecget",  INST_TYPE_RD_RFSL,  INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x6C002400, OPCODE_MASK_H32, ecget,  anyware_inst },
>    {"neget",  INST_TYPE_RD_RFSL,  INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x6C004400, OPCODE_MASK_H32, neget,  anyware_inst },
> @@ -446,7 +446,7 @@ static const struct op_code_struct {
>    {"ecput",  INST_TYPE_R1_RFSL,  INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x6C00A400, OPCODE_MASK_H32, ecput,  anyware_inst },
>    {"neput",  INST_TYPE_R1_RFSL,  INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x6C00C400, OPCODE_MASK_H32, neput,  anyware_inst },
>    {"necput", INST_TYPE_R1_RFSL,  INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x6C00E400, OPCODE_MASK_H32, necput, anyware_inst },
> - 
> +
>    {"teget",   INST_TYPE_RD_RFSL, INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x6C001400, OPCODE_MASK_H32, teget,   anyware_inst },
>    {"tecget",  INST_TYPE_RD_RFSL, INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x6C003400, OPCODE_MASK_H32, tecget,  anyware_inst },
>    {"tneget",  INST_TYPE_RD_RFSL, INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x6C005400, OPCODE_MASK_H32, tneget,  anyware_inst },
> @@ -455,7 +455,7 @@ static const struct op_code_struct {
>    {"tecput",  INST_TYPE_RFSL,    INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x6C00B400, OPCODE_MASK_H32, tecput,  anyware_inst },
>    {"tneput",  INST_TYPE_RFSL,    INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x6C00D400, OPCODE_MASK_H32, tneput,  anyware_inst },
>    {"tnecput", INST_TYPE_RFSL,    INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x6C00F400, OPCODE_MASK_H32, tnecput, anyware_inst },
> - 
> +
>    {"aget",   INST_TYPE_RD_RFSL,  INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x6C000800, OPCODE_MASK_H32, aget,   anyware_inst },
>    {"caget",  INST_TYPE_RD_RFSL,  INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x6C002800, OPCODE_MASK_H32, caget,  anyware_inst },
>    {"naget",  INST_TYPE_RD_RFSL,  INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x6C004800, OPCODE_MASK_H32, naget,  anyware_inst },
> @@ -464,7 +464,7 @@ static const struct op_code_struct {
>    {"caput",  INST_TYPE_R1_RFSL,  INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x6C00A800, OPCODE_MASK_H32, caput,  anyware_inst },
>    {"naput",  INST_TYPE_R1_RFSL,  INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x6C00C800, OPCODE_MASK_H32, naput,  anyware_inst },
>    {"ncaput", INST_TYPE_R1_RFSL,  INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x6C00E800, OPCODE_MASK_H32, ncaput, anyware_inst },
> - 
> +
>    {"taget",   INST_TYPE_RD_RFSL, INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x6C001800, OPCODE_MASK_H32, taget,   anyware_inst },
>    {"tcaget",  INST_TYPE_RD_RFSL, INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x6C003800, OPCODE_MASK_H32, tcaget,  anyware_inst },
>    {"tnaget",  INST_TYPE_RD_RFSL, INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x6C005800, OPCODE_MASK_H32, tnaget,  anyware_inst },
> @@ -473,7 +473,7 @@ static const struct op_code_struct {
>    {"tcaput",  INST_TYPE_RFSL,    INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x6C00B800, OPCODE_MASK_H32, tcaput,  anyware_inst },
>    {"tnaput",  INST_TYPE_RFSL,    INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x6C00D800, OPCODE_MASK_H32, tnaput,  anyware_inst },
>    {"tncaput", INST_TYPE_RFSL,    INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x6C00F800, OPCODE_MASK_H32, tncaput, anyware_inst },
> - 
> +
>    {"eaget",   INST_TYPE_RD_RFSL,  INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x6C000C00, OPCODE_MASK_H32, eget,   anyware_inst },
>    {"ecaget",  INST_TYPE_RD_RFSL,  INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x6C002C00, OPCODE_MASK_H32, ecget,  anyware_inst },
>    {"neaget",  INST_TYPE_RD_RFSL,  INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x6C004C00, OPCODE_MASK_H32, neget,  anyware_inst },
> @@ -482,7 +482,7 @@ static const struct op_code_struct {
>    {"ecaput",  INST_TYPE_R1_RFSL,  INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x6C00AC00, OPCODE_MASK_H32, ecput,  anyware_inst },
>    {"neaput",  INST_TYPE_R1_RFSL,  INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x6C00CC00, OPCODE_MASK_H32, neput,  anyware_inst },
>    {"necaput", INST_TYPE_R1_RFSL,  INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x6C00EC00, OPCODE_MASK_H32, necput, anyware_inst },
> - 
> +
>    {"teaget",   INST_TYPE_RD_RFSL, INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x6C001C00, OPCODE_MASK_H32, teaget,   anyware_inst },
>    {"tecaget",  INST_TYPE_RD_RFSL, INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x6C003C00, OPCODE_MASK_H32, tecaget,  anyware_inst },
>    {"tneaget",  INST_TYPE_RD_RFSL, INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x6C005C00, OPCODE_MASK_H32, tneaget,  anyware_inst },
> @@ -491,7 +491,7 @@ static const struct op_code_struct {
>    {"tecaput",  INST_TYPE_RFSL,    INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x6C00BC00, OPCODE_MASK_H32, tecaput,  anyware_inst },
>    {"tneaput",  INST_TYPE_RFSL,    INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x6C00DC00, OPCODE_MASK_H32, tneaput,  anyware_inst },
>    {"tnecaput", INST_TYPE_RFSL,    INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x6C00FC00, OPCODE_MASK_H32, tnecaput, anyware_inst },
> - 
> +
>    {"getd",    INST_TYPE_RD_R2, INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x4C000000, OPCODE_MASK_H34C, getd,    anyware_inst },
>    {"tgetd",   INST_TYPE_RD_R2, INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x4C000080, OPCODE_MASK_H34C, tgetd,   anyware_inst },
>    {"cgetd",   INST_TYPE_RD_R2, INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x4C000100, OPCODE_MASK_H34C, cgetd,   anyware_inst },
> @@ -508,7 +508,7 @@ static const struct op_code_struct {
>    {"tnputd",  INST_TYPE_R2,    INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x4C000680, OPCODE_MASK_H34C, tnputd,  anyware_inst },
>    {"ncputd",  INST_TYPE_R1_R2, INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x4C000700, OPCODE_MASK_H34C, ncputd,  anyware_inst },
>    {"tncputd", INST_TYPE_R2,    INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x4C000780, OPCODE_MASK_H34C, tncputd, anyware_inst },
> - 
> +
>    {"egetd",    INST_TYPE_RD_R2, INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x4C000020, OPCODE_MASK_H34C, egetd,    anyware_inst },
>    {"tegetd",   INST_TYPE_RD_R2, INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x4C0000A0, OPCODE_MASK_H34C, tegetd,   anyware_inst },
>    {"ecgetd",   INST_TYPE_RD_R2, INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x4C000120, OPCODE_MASK_H34C, ecgetd,   anyware_inst },
> @@ -525,7 +525,7 @@ static const struct op_code_struct {
>    {"tneputd",  INST_TYPE_R2,    INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x4C0006A0, OPCODE_MASK_H34C, tneputd,  anyware_inst },
>    {"necputd",  INST_TYPE_R1_R2, INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x4C000720, OPCODE_MASK_H34C, necputd,  anyware_inst },
>    {"tnecputd", INST_TYPE_R2,    INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x4C0007A0, OPCODE_MASK_H34C, tnecputd, anyware_inst },
> - 
> +
>    {"agetd",    INST_TYPE_RD_R2, INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x4C000040, OPCODE_MASK_H34C, agetd,    anyware_inst },
>    {"tagetd",   INST_TYPE_RD_R2, INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x4C0000C0, OPCODE_MASK_H34C, tagetd,   anyware_inst },
>    {"cagetd",   INST_TYPE_RD_R2, INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x4C000140, OPCODE_MASK_H34C, cagetd,   anyware_inst },
> @@ -542,7 +542,7 @@ static const struct op_code_struct {
>    {"tnaputd",  INST_TYPE_R2,    INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x4C0006C0, OPCODE_MASK_H34C, tnaputd,  anyware_inst },
>    {"ncaputd",  INST_TYPE_R1_R2, INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x4C000740, OPCODE_MASK_H34C, ncaputd,  anyware_inst },
>    {"tncaputd", INST_TYPE_R2,    INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x4C0007C0, OPCODE_MASK_H34C, tncaputd, anyware_inst },
> - 
> +
>    {"eagetd",    INST_TYPE_RD_R2, INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x4C000060, OPCODE_MASK_H34C, eagetd,    anyware_inst },
>    {"teagetd",   INST_TYPE_RD_R2, INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x4C0000E0, OPCODE_MASK_H34C, teagetd,   anyware_inst },
>    {"ecagetd",   INST_TYPE_RD_R2, INST_NO_OFFSET, NO_DELAY_SLOT, IMMVAL_MASK_NON_SPECIAL, 0x4C000160, OPCODE_MASK_H34C, ecagetd,   anyware_inst },
> @@ -648,13 +648,13 @@ get_field_unsigned_imm (long instr)
>  
>  /*
>    char *
> -  get_field_special (instr) 
> +  get_field_special (instr)
>    long instr;
>    {
>    char tmpstr[25];
> -  
> +
>    sprintf(tmpstr, "%s%s", register_prefix, (((instr & IMM_MASK) >> IMM_LOW) & REG_MSR_MASK) == 0 ? "pc" : "msr");
> -  
> +
>    return(strdup(tmpstr));
>    }
>  */
> @@ -684,7 +684,7 @@ get_field_special(long instr, const struct op_code_struct *op)
>        break;
>     case REG_BTR_MASK :
>        strcpy(spr, "btr");
> -      break;      
> +      break;
>     case REG_EDR_MASK :
>        strcpy(spr, "edr");
>        break;
> @@ -719,13 +719,13 @@ get_field_special(long instr, const struct op_code_struct *op)
>       }
>       break;
>     }
> -   
> +
>     sprintf(tmpstr, "%s%s", register_prefix, spr);
>     return(strdup(tmpstr));
>  }
>  
>  static unsigned long
> -read_insn_microblaze (bfd_vma memaddr, 
> +read_insn_microblaze (bfd_vma memaddr,
>  		      struct disassemble_info *info,
>  		      const struct op_code_struct **opr)
>  {
> @@ -736,7 +736,7 @@ read_insn_microblaze (bfd_vma memaddr,
>  
>    status = info->read_memory_func (memaddr, ibytes, 4, info);
>  
> -  if (status != 0) 
> +  if (status != 0)
>      {
>        info->memory_error_func (status, memaddr, info);
>        return 0;
> @@ -761,7 +761,7 @@ read_insn_microblaze (bfd_vma memaddr,
>  }
>  
>  
> -int 
> +int
>  print_insn_microblaze (bfd_vma memaddr, struct disassemble_info * info)
>  {
>    fprintf_function    fprintf_func = info->fprintf_func;
> @@ -780,7 +780,7 @@ print_insn_microblaze (bfd_vma memaddr, struct disassemble_info * info)
>    if (inst == 0) {
>      return -1;
>    }
> -  
> +
>    if (prev_insn_vma == curr_insn_vma) {
>    if (memaddr-(info->bytes_per_chunk) == prev_insn_addr) {
>      prev_inst = read_insn_microblaze (prev_insn_addr, info, &pop);
> @@ -806,7 +806,7 @@ print_insn_microblaze (bfd_vma memaddr, struct disassemble_info * info)
>    else
>      {
>        fprintf_func (stream, "%s", op->name);
> -      
> +
>        switch (op->inst_type)
>  	{
>    case INST_TYPE_RD_R1_R2:
> @@ -851,7 +851,7 @@ print_insn_microblaze (bfd_vma memaddr, struct disassemble_info * info)
>  	  break;
>  	case INST_TYPE_R1_IMM:
>  	  fprintf_func(stream, "\t%s, %s", get_field_r1(inst), get_field_imm(inst));
> -	  /* The non-pc relative instructions are returns, which shouldn't 
> +	  /* The non-pc relative instructions are returns, which shouldn't
>  	     have a label printed */
>  	  if (info->print_address_func && op->inst_offset_type == INST_PC_OFFSET && info->symbol_at_address_func) {
>  	    if (immfound)
> @@ -886,7 +886,7 @@ print_insn_microblaze (bfd_vma memaddr, struct disassemble_info * info)
>  	    if (info->symbol_at_address_func(immval, info)) {
>  	      fprintf_func (stream, "\t// ");
>  	      info->print_address_func (immval, info);
> -	    } 
> +	    }
>  	  }
>  	  break;
>          case INST_TYPE_IMM:
> @@ -938,7 +938,7 @@ print_insn_microblaze (bfd_vma memaddr, struct disassemble_info * info)
>  	  break;
>  	}
>      }
> -  
> +
>    /* Say how many bytes we consumed? */
>    return 4;
>  }
> diff --git a/disas/nios2.c b/disas/nios2.c
> index c3e82140c7..35d9f40f3e 100644
> --- a/disas/nios2.c
> +++ b/disas/nios2.c
> @@ -96,7 +96,7 @@ enum overflow_type
>    no_overflow
>  };
>  
> -/* This structure holds information for a particular instruction. 
> +/* This structure holds information for a particular instruction.
>  
>     The args field is a string describing the operands.  The following
>     letters can appear in the args:
> @@ -152,26 +152,26 @@ enum overflow_type
>  struct nios2_opcode
>  {
>    const char *name;		/* The name of the instruction.  */
> -  const char *args;		/* A string describing the arguments for this 
> +  const char *args;		/* A string describing the arguments for this
>  				   instruction.  */
> -  const char *args_test;	/* Like args, but with an extra argument for 
> +  const char *args_test;	/* Like args, but with an extra argument for
>  				   the expected opcode.  */
> -  unsigned long num_args;	/* The number of arguments the instruction 
> +  unsigned long num_args;	/* The number of arguments the instruction
>  				   takes.  */
>    unsigned size;		/* Size in bytes of the instruction.  */
>    enum iw_format_type format;	/* Instruction format.  */
>    unsigned long match;		/* The basic opcode for the instruction.  */
> -  unsigned long mask;		/* Mask for the opcode field of the 
> +  unsigned long mask;		/* Mask for the opcode field of the
>  				   instruction.  */
> -  unsigned long pinfo;		/* Is this a real instruction or instruction 
> +  unsigned long pinfo;		/* Is this a real instruction or instruction
>  				   macro?  */
> -  enum overflow_type overflow_msg;  /* Used to generate informative 
> +  enum overflow_type overflow_msg;  /* Used to generate informative
>  				       message when fixup overflows.  */
>  };
>  
> -/* This value is used in the nios2_opcode.pinfo field to indicate that the 
> -   instruction is a macro or pseudo-op.  This requires special treatment by 
> -   the assembler, and is used by the disassembler to determine whether to 
> +/* This value is used in the nios2_opcode.pinfo field to indicate that the
> +   instruction is a macro or pseudo-op.  This requires special treatment by
> +   the assembler, and is used by the disassembler to determine whether to
>     check for a nop.  */
>  #define NIOS2_INSN_MACRO	0x80000000
>  #define NIOS2_INSN_MACRO_MOV	0x80000001
> @@ -207,124 +207,124 @@ struct nios2_reg
>  #define _NIOS2R1_H_
>  
>  /* R1 fields.  */
> -#define IW_R1_OP_LSB 0 
> -#define IW_R1_OP_SIZE 6 
> -#define IW_R1_OP_UNSHIFTED_MASK (0xffffffffu >> (32 - IW_R1_OP_SIZE)) 
> -#define IW_R1_OP_SHIFTED_MASK (IW_R1_OP_UNSHIFTED_MASK << IW_R1_OP_LSB) 
> -#define GET_IW_R1_OP(W) (((W) >> IW_R1_OP_LSB) & IW_R1_OP_UNSHIFTED_MASK) 
> -#define SET_IW_R1_OP(V) (((V) & IW_R1_OP_UNSHIFTED_MASK) << IW_R1_OP_LSB) 
> -
> -#define IW_I_A_LSB 27 
> -#define IW_I_A_SIZE 5 
> -#define IW_I_A_UNSHIFTED_MASK (0xffffffffu >> (32 - IW_I_A_SIZE)) 
> -#define IW_I_A_SHIFTED_MASK (IW_I_A_UNSHIFTED_MASK << IW_I_A_LSB) 
> -#define GET_IW_I_A(W) (((W) >> IW_I_A_LSB) & IW_I_A_UNSHIFTED_MASK) 
> -#define SET_IW_I_A(V) (((V) & IW_I_A_UNSHIFTED_MASK) << IW_I_A_LSB) 
> -
> -#define IW_I_B_LSB 22 
> -#define IW_I_B_SIZE 5 
> -#define IW_I_B_UNSHIFTED_MASK (0xffffffffu >> (32 - IW_I_B_SIZE)) 
> -#define IW_I_B_SHIFTED_MASK (IW_I_B_UNSHIFTED_MASK << IW_I_B_LSB) 
> -#define GET_IW_I_B(W) (((W) >> IW_I_B_LSB) & IW_I_B_UNSHIFTED_MASK) 
> -#define SET_IW_I_B(V) (((V) & IW_I_B_UNSHIFTED_MASK) << IW_I_B_LSB) 
> -
> -#define IW_I_IMM16_LSB 6 
> -#define IW_I_IMM16_SIZE 16 
> -#define IW_I_IMM16_UNSHIFTED_MASK (0xffffffffu >> (32 - IW_I_IMM16_SIZE)) 
> -#define IW_I_IMM16_SHIFTED_MASK (IW_I_IMM16_UNSHIFTED_MASK << IW_I_IMM16_LSB) 
> -#define GET_IW_I_IMM16(W) (((W) >> IW_I_IMM16_LSB) & IW_I_IMM16_UNSHIFTED_MASK) 
> -#define SET_IW_I_IMM16(V) (((V) & IW_I_IMM16_UNSHIFTED_MASK) << IW_I_IMM16_LSB) 
> -
> -#define IW_R_A_LSB 27 
> -#define IW_R_A_SIZE 5 
> -#define IW_R_A_UNSHIFTED_MASK (0xffffffffu >> (32 - IW_R_A_SIZE)) 
> -#define IW_R_A_SHIFTED_MASK (IW_R_A_UNSHIFTED_MASK << IW_R_A_LSB) 
> -#define GET_IW_R_A(W) (((W) >> IW_R_A_LSB) & IW_R_A_UNSHIFTED_MASK) 
> -#define SET_IW_R_A(V) (((V) & IW_R_A_UNSHIFTED_MASK) << IW_R_A_LSB) 
> -
> -#define IW_R_B_LSB 22 
> -#define IW_R_B_SIZE 5 
> -#define IW_R_B_UNSHIFTED_MASK (0xffffffffu >> (32 - IW_R_B_SIZE)) 
> -#define IW_R_B_SHIFTED_MASK (IW_R_B_UNSHIFTED_MASK << IW_R_B_LSB) 
> -#define GET_IW_R_B(W) (((W) >> IW_R_B_LSB) & IW_R_B_UNSHIFTED_MASK) 
> -#define SET_IW_R_B(V) (((V) & IW_R_B_UNSHIFTED_MASK) << IW_R_B_LSB) 
> -
> -#define IW_R_C_LSB 17 
> -#define IW_R_C_SIZE 5 
> -#define IW_R_C_UNSHIFTED_MASK (0xffffffffu >> (32 - IW_R_C_SIZE)) 
> -#define IW_R_C_SHIFTED_MASK (IW_R_C_UNSHIFTED_MASK << IW_R_C_LSB) 
> -#define GET_IW_R_C(W) (((W) >> IW_R_C_LSB) & IW_R_C_UNSHIFTED_MASK) 
> -#define SET_IW_R_C(V) (((V) & IW_R_C_UNSHIFTED_MASK) << IW_R_C_LSB) 
> -
> -#define IW_R_OPX_LSB 11 
> -#define IW_R_OPX_SIZE 6 
> -#define IW_R_OPX_UNSHIFTED_MASK (0xffffffffu >> (32 - IW_R_OPX_SIZE)) 
> -#define IW_R_OPX_SHIFTED_MASK (IW_R_OPX_UNSHIFTED_MASK << IW_R_OPX_LSB) 
> -#define GET_IW_R_OPX(W) (((W) >> IW_R_OPX_LSB) & IW_R_OPX_UNSHIFTED_MASK) 
> -#define SET_IW_R_OPX(V) (((V) & IW_R_OPX_UNSHIFTED_MASK) << IW_R_OPX_LSB) 
> -
> -#define IW_R_IMM5_LSB 6 
> -#define IW_R_IMM5_SIZE 5 
> -#define IW_R_IMM5_UNSHIFTED_MASK (0xffffffffu >> (32 - IW_R_IMM5_SIZE)) 
> -#define IW_R_IMM5_SHIFTED_MASK (IW_R_IMM5_UNSHIFTED_MASK << IW_R_IMM5_LSB) 
> -#define GET_IW_R_IMM5(W) (((W) >> IW_R_IMM5_LSB) & IW_R_IMM5_UNSHIFTED_MASK) 
> -#define SET_IW_R_IMM5(V) (((V) & IW_R_IMM5_UNSHIFTED_MASK) << IW_R_IMM5_LSB) 
> -
> -#define IW_J_IMM26_LSB 6 
> -#define IW_J_IMM26_SIZE 26 
> -#define IW_J_IMM26_UNSHIFTED_MASK (0xffffffffu >> (32 - IW_J_IMM26_SIZE)) 
> -#define IW_J_IMM26_SHIFTED_MASK (IW_J_IMM26_UNSHIFTED_MASK << IW_J_IMM26_LSB) 
> -#define GET_IW_J_IMM26(W) (((W) >> IW_J_IMM26_LSB) & IW_J_IMM26_UNSHIFTED_MASK) 
> -#define SET_IW_J_IMM26(V) (((V) & IW_J_IMM26_UNSHIFTED_MASK) << IW_J_IMM26_LSB) 
> -
> -#define IW_CUSTOM_A_LSB 27 
> -#define IW_CUSTOM_A_SIZE 5 
> -#define IW_CUSTOM_A_UNSHIFTED_MASK (0xffffffffu >> (32 - IW_CUSTOM_A_SIZE)) 
> -#define IW_CUSTOM_A_SHIFTED_MASK (IW_CUSTOM_A_UNSHIFTED_MASK << IW_CUSTOM_A_LSB) 
> -#define GET_IW_CUSTOM_A(W) (((W) >> IW_CUSTOM_A_LSB) & IW_CUSTOM_A_UNSHIFTED_MASK) 
> -#define SET_IW_CUSTOM_A(V) (((V) & IW_CUSTOM_A_UNSHIFTED_MASK) << IW_CUSTOM_A_LSB) 
> -
> -#define IW_CUSTOM_B_LSB 22 
> -#define IW_CUSTOM_B_SIZE 5 
> -#define IW_CUSTOM_B_UNSHIFTED_MASK (0xffffffffu >> (32 - IW_CUSTOM_B_SIZE)) 
> -#define IW_CUSTOM_B_SHIFTED_MASK (IW_CUSTOM_B_UNSHIFTED_MASK << IW_CUSTOM_B_LSB) 
> -#define GET_IW_CUSTOM_B(W) (((W) >> IW_CUSTOM_B_LSB) & IW_CUSTOM_B_UNSHIFTED_MASK) 
> -#define SET_IW_CUSTOM_B(V) (((V) & IW_CUSTOM_B_UNSHIFTED_MASK) << IW_CUSTOM_B_LSB) 
> -
> -#define IW_CUSTOM_C_LSB 17 
> -#define IW_CUSTOM_C_SIZE 5 
> -#define IW_CUSTOM_C_UNSHIFTED_MASK (0xffffffffu >> (32 - IW_CUSTOM_C_SIZE)) 
> -#define IW_CUSTOM_C_SHIFTED_MASK (IW_CUSTOM_C_UNSHIFTED_MASK << IW_CUSTOM_C_LSB) 
> -#define GET_IW_CUSTOM_C(W) (((W) >> IW_CUSTOM_C_LSB) & IW_CUSTOM_C_UNSHIFTED_MASK) 
> -#define SET_IW_CUSTOM_C(V) (((V) & IW_CUSTOM_C_UNSHIFTED_MASK) << IW_CUSTOM_C_LSB) 
> -
> -#define IW_CUSTOM_READA_LSB 16 
> -#define IW_CUSTOM_READA_SIZE 1 
> -#define IW_CUSTOM_READA_UNSHIFTED_MASK (0xffffffffu >> (32 - IW_CUSTOM_READA_SIZE)) 
> -#define IW_CUSTOM_READA_SHIFTED_MASK (IW_CUSTOM_READA_UNSHIFTED_MASK << IW_CUSTOM_READA_LSB) 
> -#define GET_IW_CUSTOM_READA(W) (((W) >> IW_CUSTOM_READA_LSB) & IW_CUSTOM_READA_UNSHIFTED_MASK) 
> -#define SET_IW_CUSTOM_READA(V) (((V) & IW_CUSTOM_READA_UNSHIFTED_MASK) << IW_CUSTOM_READA_LSB) 
> -
> -#define IW_CUSTOM_READB_LSB 15 
> -#define IW_CUSTOM_READB_SIZE 1 
> -#define IW_CUSTOM_READB_UNSHIFTED_MASK (0xffffffffu >> (32 - IW_CUSTOM_READB_SIZE)) 
> -#define IW_CUSTOM_READB_SHIFTED_MASK (IW_CUSTOM_READB_UNSHIFTED_MASK << IW_CUSTOM_READB_LSB) 
> -#define GET_IW_CUSTOM_READB(W) (((W) >> IW_CUSTOM_READB_LSB) & IW_CUSTOM_READB_UNSHIFTED_MASK) 
> -#define SET_IW_CUSTOM_READB(V) (((V) & IW_CUSTOM_READB_UNSHIFTED_MASK) << IW_CUSTOM_READB_LSB) 
> -
> -#define IW_CUSTOM_READC_LSB 14 
> -#define IW_CUSTOM_READC_SIZE 1 
> -#define IW_CUSTOM_READC_UNSHIFTED_MASK (0xffffffffu >> (32 - IW_CUSTOM_READC_SIZE)) 
> -#define IW_CUSTOM_READC_SHIFTED_MASK (IW_CUSTOM_READC_UNSHIFTED_MASK << IW_CUSTOM_READC_LSB) 
> -#define GET_IW_CUSTOM_READC(W) (((W) >> IW_CUSTOM_READC_LSB) & IW_CUSTOM_READC_UNSHIFTED_MASK) 
> -#define SET_IW_CUSTOM_READC(V) (((V) & IW_CUSTOM_READC_UNSHIFTED_MASK) << IW_CUSTOM_READC_LSB) 
> -
> -#define IW_CUSTOM_N_LSB 6 
> -#define IW_CUSTOM_N_SIZE 8 
> -#define IW_CUSTOM_N_UNSHIFTED_MASK (0xffffffffu >> (32 - IW_CUSTOM_N_SIZE)) 
> -#define IW_CUSTOM_N_SHIFTED_MASK (IW_CUSTOM_N_UNSHIFTED_MASK << IW_CUSTOM_N_LSB) 
> -#define GET_IW_CUSTOM_N(W) (((W) >> IW_CUSTOM_N_LSB) & IW_CUSTOM_N_UNSHIFTED_MASK) 
> -#define SET_IW_CUSTOM_N(V) (((V) & IW_CUSTOM_N_UNSHIFTED_MASK) << IW_CUSTOM_N_LSB) 
> +#define IW_R1_OP_LSB 0
> +#define IW_R1_OP_SIZE 6
> +#define IW_R1_OP_UNSHIFTED_MASK (0xffffffffu >> (32 - IW_R1_OP_SIZE))
> +#define IW_R1_OP_SHIFTED_MASK (IW_R1_OP_UNSHIFTED_MASK << IW_R1_OP_LSB)
> +#define GET_IW_R1_OP(W) (((W) >> IW_R1_OP_LSB) & IW_R1_OP_UNSHIFTED_MASK)
> +#define SET_IW_R1_OP(V) (((V) & IW_R1_OP_UNSHIFTED_MASK) << IW_R1_OP_LSB)
> +
> +#define IW_I_A_LSB 27
> +#define IW_I_A_SIZE 5
> +#define IW_I_A_UNSHIFTED_MASK (0xffffffffu >> (32 - IW_I_A_SIZE))
> +#define IW_I_A_SHIFTED_MASK (IW_I_A_UNSHIFTED_MASK << IW_I_A_LSB)
> +#define GET_IW_I_A(W) (((W) >> IW_I_A_LSB) & IW_I_A_UNSHIFTED_MASK)
> +#define SET_IW_I_A(V) (((V) & IW_I_A_UNSHIFTED_MASK) << IW_I_A_LSB)
> +
> +#define IW_I_B_LSB 22
> +#define IW_I_B_SIZE 5
> +#define IW_I_B_UNSHIFTED_MASK (0xffffffffu >> (32 - IW_I_B_SIZE))
> +#define IW_I_B_SHIFTED_MASK (IW_I_B_UNSHIFTED_MASK << IW_I_B_LSB)
> +#define GET_IW_I_B(W) (((W) >> IW_I_B_LSB) & IW_I_B_UNSHIFTED_MASK)
> +#define SET_IW_I_B(V) (((V) & IW_I_B_UNSHIFTED_MASK) << IW_I_B_LSB)
> +
> +#define IW_I_IMM16_LSB 6
> +#define IW_I_IMM16_SIZE 16
> +#define IW_I_IMM16_UNSHIFTED_MASK (0xffffffffu >> (32 - IW_I_IMM16_SIZE))
> +#define IW_I_IMM16_SHIFTED_MASK (IW_I_IMM16_UNSHIFTED_MASK << IW_I_IMM16_LSB)
> +#define GET_IW_I_IMM16(W) (((W) >> IW_I_IMM16_LSB) & IW_I_IMM16_UNSHIFTED_MASK)
> +#define SET_IW_I_IMM16(V) (((V) & IW_I_IMM16_UNSHIFTED_MASK) << IW_I_IMM16_LSB)
> +
> +#define IW_R_A_LSB 27
> +#define IW_R_A_SIZE 5
> +#define IW_R_A_UNSHIFTED_MASK (0xffffffffu >> (32 - IW_R_A_SIZE))
> +#define IW_R_A_SHIFTED_MASK (IW_R_A_UNSHIFTED_MASK << IW_R_A_LSB)
> +#define GET_IW_R_A(W) (((W) >> IW_R_A_LSB) & IW_R_A_UNSHIFTED_MASK)
> +#define SET_IW_R_A(V) (((V) & IW_R_A_UNSHIFTED_MASK) << IW_R_A_LSB)
> +
> +#define IW_R_B_LSB 22
> +#define IW_R_B_SIZE 5
> +#define IW_R_B_UNSHIFTED_MASK (0xffffffffu >> (32 - IW_R_B_SIZE))
> +#define IW_R_B_SHIFTED_MASK (IW_R_B_UNSHIFTED_MASK << IW_R_B_LSB)
> +#define GET_IW_R_B(W) (((W) >> IW_R_B_LSB) & IW_R_B_UNSHIFTED_MASK)
> +#define SET_IW_R_B(V) (((V) & IW_R_B_UNSHIFTED_MASK) << IW_R_B_LSB)
> +
> +#define IW_R_C_LSB 17
> +#define IW_R_C_SIZE 5
> +#define IW_R_C_UNSHIFTED_MASK (0xffffffffu >> (32 - IW_R_C_SIZE))
> +#define IW_R_C_SHIFTED_MASK (IW_R_C_UNSHIFTED_MASK << IW_R_C_LSB)
> +#define GET_IW_R_C(W) (((W) >> IW_R_C_LSB) & IW_R_C_UNSHIFTED_MASK)
> +#define SET_IW_R_C(V) (((V) & IW_R_C_UNSHIFTED_MASK) << IW_R_C_LSB)
> +
> +#define IW_R_OPX_LSB 11
> +#define IW_R_OPX_SIZE 6
> +#define IW_R_OPX_UNSHIFTED_MASK (0xffffffffu >> (32 - IW_R_OPX_SIZE))
> +#define IW_R_OPX_SHIFTED_MASK (IW_R_OPX_UNSHIFTED_MASK << IW_R_OPX_LSB)
> +#define GET_IW_R_OPX(W) (((W) >> IW_R_OPX_LSB) & IW_R_OPX_UNSHIFTED_MASK)
> +#define SET_IW_R_OPX(V) (((V) & IW_R_OPX_UNSHIFTED_MASK) << IW_R_OPX_LSB)
> +
> +#define IW_R_IMM5_LSB 6
> +#define IW_R_IMM5_SIZE 5
> +#define IW_R_IMM5_UNSHIFTED_MASK (0xffffffffu >> (32 - IW_R_IMM5_SIZE))
> +#define IW_R_IMM5_SHIFTED_MASK (IW_R_IMM5_UNSHIFTED_MASK << IW_R_IMM5_LSB)
> +#define GET_IW_R_IMM5(W) (((W) >> IW_R_IMM5_LSB) & IW_R_IMM5_UNSHIFTED_MASK)
> +#define SET_IW_R_IMM5(V) (((V) & IW_R_IMM5_UNSHIFTED_MASK) << IW_R_IMM5_LSB)
> +
> +#define IW_J_IMM26_LSB 6
> +#define IW_J_IMM26_SIZE 26
> +#define IW_J_IMM26_UNSHIFTED_MASK (0xffffffffu >> (32 - IW_J_IMM26_SIZE))
> +#define IW_J_IMM26_SHIFTED_MASK (IW_J_IMM26_UNSHIFTED_MASK << IW_J_IMM26_LSB)
> +#define GET_IW_J_IMM26(W) (((W) >> IW_J_IMM26_LSB) & IW_J_IMM26_UNSHIFTED_MASK)
> +#define SET_IW_J_IMM26(V) (((V) & IW_J_IMM26_UNSHIFTED_MASK) << IW_J_IMM26_LSB)
> +
> +#define IW_CUSTOM_A_LSB 27
> +#define IW_CUSTOM_A_SIZE 5
> +#define IW_CUSTOM_A_UNSHIFTED_MASK (0xffffffffu >> (32 - IW_CUSTOM_A_SIZE))
> +#define IW_CUSTOM_A_SHIFTED_MASK (IW_CUSTOM_A_UNSHIFTED_MASK << IW_CUSTOM_A_LSB)
> +#define GET_IW_CUSTOM_A(W) (((W) >> IW_CUSTOM_A_LSB) & IW_CUSTOM_A_UNSHIFTED_MASK)
> +#define SET_IW_CUSTOM_A(V) (((V) & IW_CUSTOM_A_UNSHIFTED_MASK) << IW_CUSTOM_A_LSB)
> +
> +#define IW_CUSTOM_B_LSB 22
> +#define IW_CUSTOM_B_SIZE 5
> +#define IW_CUSTOM_B_UNSHIFTED_MASK (0xffffffffu >> (32 - IW_CUSTOM_B_SIZE))
> +#define IW_CUSTOM_B_SHIFTED_MASK (IW_CUSTOM_B_UNSHIFTED_MASK << IW_CUSTOM_B_LSB)
> +#define GET_IW_CUSTOM_B(W) (((W) >> IW_CUSTOM_B_LSB) & IW_CUSTOM_B_UNSHIFTED_MASK)
> +#define SET_IW_CUSTOM_B(V) (((V) & IW_CUSTOM_B_UNSHIFTED_MASK) << IW_CUSTOM_B_LSB)
> +
> +#define IW_CUSTOM_C_LSB 17
> +#define IW_CUSTOM_C_SIZE 5
> +#define IW_CUSTOM_C_UNSHIFTED_MASK (0xffffffffu >> (32 - IW_CUSTOM_C_SIZE))
> +#define IW_CUSTOM_C_SHIFTED_MASK (IW_CUSTOM_C_UNSHIFTED_MASK << IW_CUSTOM_C_LSB)
> +#define GET_IW_CUSTOM_C(W) (((W) >> IW_CUSTOM_C_LSB) & IW_CUSTOM_C_UNSHIFTED_MASK)
> +#define SET_IW_CUSTOM_C(V) (((V) & IW_CUSTOM_C_UNSHIFTED_MASK) << IW_CUSTOM_C_LSB)
> +
> +#define IW_CUSTOM_READA_LSB 16
> +#define IW_CUSTOM_READA_SIZE 1
> +#define IW_CUSTOM_READA_UNSHIFTED_MASK (0xffffffffu >> (32 - IW_CUSTOM_READA_SIZE))
> +#define IW_CUSTOM_READA_SHIFTED_MASK (IW_CUSTOM_READA_UNSHIFTED_MASK << IW_CUSTOM_READA_LSB)
> +#define GET_IW_CUSTOM_READA(W) (((W) >> IW_CUSTOM_READA_LSB) & IW_CUSTOM_READA_UNSHIFTED_MASK)
> +#define SET_IW_CUSTOM_READA(V) (((V) & IW_CUSTOM_READA_UNSHIFTED_MASK) << IW_CUSTOM_READA_LSB)
> +
> +#define IW_CUSTOM_READB_LSB 15
> +#define IW_CUSTOM_READB_SIZE 1
> +#define IW_CUSTOM_READB_UNSHIFTED_MASK (0xffffffffu >> (32 - IW_CUSTOM_READB_SIZE))
> +#define IW_CUSTOM_READB_SHIFTED_MASK (IW_CUSTOM_READB_UNSHIFTED_MASK << IW_CUSTOM_READB_LSB)
> +#define GET_IW_CUSTOM_READB(W) (((W) >> IW_CUSTOM_READB_LSB) & IW_CUSTOM_READB_UNSHIFTED_MASK)
> +#define SET_IW_CUSTOM_READB(V) (((V) & IW_CUSTOM_READB_UNSHIFTED_MASK) << IW_CUSTOM_READB_LSB)
> +
> +#define IW_CUSTOM_READC_LSB 14
> +#define IW_CUSTOM_READC_SIZE 1
> +#define IW_CUSTOM_READC_UNSHIFTED_MASK (0xffffffffu >> (32 - IW_CUSTOM_READC_SIZE))
> +#define IW_CUSTOM_READC_SHIFTED_MASK (IW_CUSTOM_READC_UNSHIFTED_MASK << IW_CUSTOM_READC_LSB)
> +#define GET_IW_CUSTOM_READC(W) (((W) >> IW_CUSTOM_READC_LSB) & IW_CUSTOM_READC_UNSHIFTED_MASK)
> +#define SET_IW_CUSTOM_READC(V) (((V) & IW_CUSTOM_READC_UNSHIFTED_MASK) << IW_CUSTOM_READC_LSB)
> +
> +#define IW_CUSTOM_N_LSB 6
> +#define IW_CUSTOM_N_SIZE 8
> +#define IW_CUSTOM_N_UNSHIFTED_MASK (0xffffffffu >> (32 - IW_CUSTOM_N_SIZE))
> +#define IW_CUSTOM_N_SHIFTED_MASK (IW_CUSTOM_N_UNSHIFTED_MASK << IW_CUSTOM_N_LSB)
> +#define GET_IW_CUSTOM_N(W) (((W) >> IW_CUSTOM_N_LSB) & IW_CUSTOM_N_UNSHIFTED_MASK)
> +#define SET_IW_CUSTOM_N(V) (((V) & IW_CUSTOM_N_UNSHIFTED_MASK) << IW_CUSTOM_N_LSB)
>  
>  /* R1 opcodes.  */
>  #define R1_OP_CALL 0
> diff --git a/hmp-commands.hx b/hmp-commands.hx
> index 60f395c276..d548a3ab74 100644
> --- a/hmp-commands.hx
> +++ b/hmp-commands.hx
> @@ -1120,7 +1120,7 @@ ERST
>  
>  SRST
>  ``dump-guest-memory [-p]`` *filename* *begin* *length*
> -  \ 
> +  \
>  ``dump-guest-memory [-z|-l|-s|-w]`` *filename*
>    Dump guest memory to *protocol*. The file can be processed with crash or
>    gdb. Without ``-z|-l|-s|-w``, the dump format is ELF.
> diff --git a/hw/alpha/typhoon.c b/hw/alpha/typhoon.c
> index 29d44dfb06..57c7cf0bd3 100644
> --- a/hw/alpha/typhoon.c
> +++ b/hw/alpha/typhoon.c
> @@ -34,7 +34,7 @@ typedef struct TyphoonWindow {
>      uint64_t wsm;
>      uint64_t tba;
>  } TyphoonWindow;
> - 
> +
>  typedef struct TyphoonPchip {
>      MemoryRegion region;
>      MemoryRegion reg_iack;
> @@ -189,7 +189,7 @@ static MemTxResult cchip_read(void *opaque, hwaddr addr,
>      case 0x0780:
>          /* PWR: Power Management Control.   */
>          break;
> -    
> +
>      case 0x0c00: /* CMONCTLA */
>      case 0x0c40: /* CMONCTLB */
>      case 0x0c80: /* CMONCNT01 */
> @@ -441,7 +441,7 @@ static MemTxResult cchip_write(void *opaque, hwaddr addr,
>      case 0x0780:
>          /* PWR: Power Management Control.   */
>          break;
> -    
> +
>      case 0x0c00: /* CMONCTLA */
>      case 0x0c40: /* CMONCTLB */
>      case 0x0c80: /* CMONCNT01 */
> diff --git a/hw/arm/gumstix.c b/hw/arm/gumstix.c
> index 3a4bc332c4..3fdef425ab 100644
> --- a/hw/arm/gumstix.c
> +++ b/hw/arm/gumstix.c
> @@ -10,10 +10,10 @@
>   * Contributions after 2012-01-13 are licensed under the terms of the
>   * GNU GPL, version 2 or (at your option) any later version.
>   */
> - 
> -/* 
> +
> +/*
>   * Example usage:
> - * 
> + *
>   * connex:
>   * =======
>   * create image:
> diff --git a/hw/arm/omap1.c b/hw/arm/omap1.c
> index 6ba0df6b6d..82e60e3b30 100644
> --- a/hw/arm/omap1.c
> +++ b/hw/arm/omap1.c
> @@ -2914,7 +2914,7 @@ static void omap_rtc_tick(void *opaque)
>  
>      /*
>       * Every full hour add a rough approximation of the compensation
> -     * register to the 32kHz Timer (which drives the RTC) value. 
> +     * register to the 32kHz Timer (which drives the RTC) value.
>       */
>      if (s->auto_comp && !s->current_tm.tm_sec && !s->current_tm.tm_min)
>          s->tick += s->comp_reg * 1000 / 32768;
> diff --git a/hw/arm/stellaris.c b/hw/arm/stellaris.c
> index 97ef566c12..7089a534d4 100644
> --- a/hw/arm/stellaris.c
> +++ b/hw/arm/stellaris.c
> @@ -979,7 +979,7 @@ static void stellaris_adc_fifo_write(stellaris_adc_state *s, int n,
>  {
>      int head;
>  
> -    /* TODO: Real hardware has limited size FIFOs.  We have a full 16 entry 
> +    /* TODO: Real hardware has limited size FIFOs.  We have a full 16 entry
>         FIFO fir each sequencer.  */
>      head = (s->fifo[n].state >> 4) & 0xf;
>      if (s->fifo[n].state & STELLARIS_ADC_FIFO_FULL) {
> diff --git a/hw/char/etraxfs_ser.c b/hw/char/etraxfs_ser.c
> index 947bdb649a..85f6523efe 100644
> --- a/hw/char/etraxfs_ser.c
> +++ b/hw/char/etraxfs_ser.c
> @@ -180,7 +180,7 @@ static void serial_receive(void *opaque, const uint8_t *buf, int size)
>          return;
>      }
>  
> -    for (i = 0; i < size; i++) { 
> +    for (i = 0; i < size; i++) {
>          s->rx_fifo[s->rx_fifo_pos] = buf[i];
>          s->rx_fifo_pos++;
>          s->rx_fifo_pos &= 15;
> diff --git a/hw/core/ptimer.c b/hw/core/ptimer.c
> index b5a54e2536..f08c3c33a7 100644
> --- a/hw/core/ptimer.c
> +++ b/hw/core/ptimer.c
> @@ -246,7 +246,7 @@ uint64_t ptimer_get_count(ptimer_state *s)
>              } else {
>                  if (shift != 0)
>                      div |= (period_frac >> (32 - shift));
> -                /* Look at remaining bits of period_frac and round div up if 
> +                /* Look at remaining bits of period_frac and round div up if
>                     necessary.  */
>                  if ((uint32_t)(period_frac << shift))
>                      div += 1;
> diff --git a/hw/cris/axis_dev88.c b/hw/cris/axis_dev88.c
> index dab7423c73..adeed30638 100644
> --- a/hw/cris/axis_dev88.c
> +++ b/hw/cris/axis_dev88.c
> @@ -267,7 +267,7 @@ void axisdev88_init(MachineState *machine)
>  
>      memory_region_add_subregion(address_space_mem, 0x40000000, machine->ram);
>  
> -    /* The ETRAX-FS has 128Kb on chip ram, the docs refer to it as the 
> +    /* The ETRAX-FS has 128Kb on chip ram, the docs refer to it as the
>         internal memory.  */
>      memory_region_init_ram(phys_intmem, NULL, "axisdev88.chipram",
>                             INTMEM_SIZE, &error_fatal);
> diff --git a/hw/cris/boot.c b/hw/cris/boot.c
> index b8947bc660..06a440431a 100644
> --- a/hw/cris/boot.c
> +++ b/hw/cris/boot.c
> @@ -72,7 +72,7 @@ void cris_load_image(CRISCPU *cpu, struct cris_load_info *li)
>      int image_size;
>  
>      env->load_info = li;
> -    /* Boots a kernel elf binary, os/linux-2.6/vmlinux from the axis 
> +    /* Boots a kernel elf binary, os/linux-2.6/vmlinux from the axis
>         devboard SDK.  */
>      image_size = load_elf(li->image_filename, NULL,
>                            translate_kernel_address, NULL,
> diff --git a/hw/display/qxl.c b/hw/display/qxl.c
> index d5627119ec..28caf878cd 100644
> --- a/hw/display/qxl.c
> +++ b/hw/display/qxl.c
> @@ -51,7 +51,7 @@
>  #undef ALIGN
>  #define ALIGN(a, b) (((a) + ((b) - 1)) & ~((b) - 1))
>  
> -#define PIXEL_SIZE 0.2936875 //1280x1024 is 14.8" x 11.9" 
> +#define PIXEL_SIZE 0.2936875 //1280x1024 is 14.8" x 11.9"
>  
>  #define QXL_MODE(_x, _y, _b, _o)                  \
>      {   .x_res = _x,                              \
> diff --git a/hw/dma/etraxfs_dma.c b/hw/dma/etraxfs_dma.c
> index c4334e87bf..20173330a0 100644
> --- a/hw/dma/etraxfs_dma.c
> +++ b/hw/dma/etraxfs_dma.c
> @@ -322,12 +322,12 @@ static inline void channel_start(struct fs_dma_ctrl *ctrl, int c)
>  
>  static void channel_continue(struct fs_dma_ctrl *ctrl, int c)
>  {
> -	if (!channel_en(ctrl, c) 
> +	if (!channel_en(ctrl, c)
>  	    || channel_stopped(ctrl, c)
>  	    || ctrl->channels[c].state != RUNNING
>  	    /* Only reload the current data descriptor if it has eol set.  */
>  	    || !ctrl->channels[c].current_d.eol) {
> -		D(printf("continue failed ch=%d state=%d stopped=%d en=%d eol=%d\n", 
> +		D(printf("continue failed ch=%d state=%d stopped=%d en=%d eol=%d\n",
>  			 c, ctrl->channels[c].state,
>  			 channel_stopped(ctrl, c),
>  			 channel_en(ctrl,c),
> @@ -383,7 +383,7 @@ static void channel_update_irq(struct fs_dma_ctrl *ctrl, int c)
>  		ctrl->channels[c].regs[R_INTR]
>  		& ctrl->channels[c].regs[RW_INTR_MASK];
>  
> -	D(printf("%s: chan=%d masked_intr=%x\n", __func__, 
> +	D(printf("%s: chan=%d masked_intr=%x\n", __func__,
>  		 c,
>  		 ctrl->channels[c].regs[R_MASKED_INTR]));
>  
> @@ -492,7 +492,7 @@ static int channel_out_run(struct fs_dma_ctrl *ctrl, int c)
>  	return 1;
>  }
>  
> -static int channel_in_process(struct fs_dma_ctrl *ctrl, int c, 
> +static int channel_in_process(struct fs_dma_ctrl *ctrl, int c,
>  			      unsigned char *buf, int buflen, int eop)
>  {
>  	uint32_t len;
> @@ -517,7 +517,7 @@ static int channel_in_process(struct fs_dma_ctrl *ctrl, int c,
>  	    || eop) {
>  		uint32_t r_intr = ctrl->channels[c].regs[R_INTR];
>  
> -		D(printf("in dscr end len=%d\n", 
> +		D(printf("in dscr end len=%d\n",
>  			 ctrl->channels[c].current_d.after
>  			 - ctrl->channels[c].current_d.buf));
>  		ctrl->channels[c].current_d.after = saved_data_buf;
> @@ -708,7 +708,7 @@ static int etraxfs_dmac_run(void *opaque)
>  	int i;
>  	int p = 0;
>  
> -	for (i = 0; 
> +	for (i = 0;
>  	     i < ctrl->nr_channels;
>  	     i++)
>  	{
> @@ -724,10 +724,10 @@ static int etraxfs_dmac_run(void *opaque)
>  	return p;
>  }
>  
> -int etraxfs_dmac_input(struct etraxfs_dma_client *client, 
> +int etraxfs_dmac_input(struct etraxfs_dma_client *client,
>  		       void *buf, int len, int eop)
>  {
> -	return channel_in_process(client->ctrl, client->channel, 
> +	return channel_in_process(client->ctrl, client->channel,
>  				  buf, len, eop);
>  }
>  
> @@ -739,7 +739,7 @@ void etraxfs_dmac_connect(void *opaque, int c, qemu_irq *line, int input)
>  	ctrl->channels[c].input = input;
>  }
>  
> -void etraxfs_dmac_connect_client(void *opaque, int c, 
> +void etraxfs_dmac_connect_client(void *opaque, int c,
>  				 struct etraxfs_dma_client *cl)
>  {
>  	struct fs_dma_ctrl *ctrl = opaque;
> diff --git a/hw/dma/i82374.c b/hw/dma/i82374.c
> index 6977d85ef8..0db27628d5 100644
> --- a/hw/dma/i82374.c
> +++ b/hw/dma/i82374.c
> @@ -146,7 +146,7 @@ static Property i82374_properties[] = {
>  static void i82374_class_init(ObjectClass *klass, void *data)
>  {
>      DeviceClass *dc = DEVICE_CLASS(klass);
> -    
> +
>      dc->realize = i82374_realize;
>      dc->vmsd = &vmstate_i82374;
>      device_class_set_props(dc, i82374_properties);
> diff --git a/hw/i2c/bitbang_i2c.c b/hw/i2c/bitbang_i2c.c
> index b000952b98..425b0ed69e 100644
> --- a/hw/i2c/bitbang_i2c.c
> +++ b/hw/i2c/bitbang_i2c.c
> @@ -95,7 +95,7 @@ int bitbang_i2c_set(bitbang_i2c_interface *i2c, int line, int level)
>      case SENDING_BIT7 ... SENDING_BIT0:
>          i2c->buffer = (i2c->buffer << 1) | data;
>          /* will end up in WAITING_FOR_ACK */
> -        i2c->state++; 
> +        i2c->state++;
>          return bitbang_i2c_ret(i2c, 1);
>  
>      case WAITING_FOR_ACK:
> diff --git a/hw/input/tsc2005.c b/hw/input/tsc2005.c
> index 55d61cc843..df07476c3e 100644
> --- a/hw/input/tsc2005.c
> +++ b/hw/input/tsc2005.c
> @@ -169,7 +169,7 @@ static uint16_t tsc2005_read(TSC2005State *s, int reg)
>  
>      case 0xc:	/* CFR0 */
>          return (s->pressure << 15) | ((!s->busy) << 14) |
> -                (s->nextprecision << 13) | s->timing[0]; 
> +                (s->nextprecision << 13) | s->timing[0];
>      case 0xd:	/* CFR1 */
>          return s->timing[1];
>      case 0xe:	/* CFR2 */
> diff --git a/hw/input/tsc210x.c b/hw/input/tsc210x.c
> index 182d3725fc..610b3fca59 100644
> --- a/hw/input/tsc210x.c
> +++ b/hw/input/tsc210x.c
> @@ -412,7 +412,7 @@ static uint16_t tsc2102_control_register_read(
>      switch (reg) {
>      case 0x00:	/* TSC ADC */
>          return (s->pressure << 15) | ((!s->busy) << 14) |
> -                (s->nextfunction << 10) | (s->nextprecision << 8) | s->filter; 
> +                (s->nextfunction << 10) | (s->nextprecision << 8) | s->filter;
>  
>      case 0x01:	/* Status / Keypad Control */
>          if ((s->model & 0xff00) == 0x2100)
> diff --git a/hw/intc/etraxfs_pic.c b/hw/intc/etraxfs_pic.c
> index 12988c7aa9..9f9377798d 100644
> --- a/hw/intc/etraxfs_pic.c
> +++ b/hw/intc/etraxfs_pic.c
> @@ -52,15 +52,15 @@ struct etrax_pic
>  };
>  
>  static void pic_update(struct etrax_pic *fs)
> -{   
> +{
>      uint32_t vector = 0;
>      int i;
>  
>      fs->regs[R_R_MASKED_VECT] = fs->regs[R_R_VECT] & fs->regs[R_RW_MASK];
>  
>      /* The ETRAX interrupt controller signals interrupts to the core
> -       through an interrupt request wire and an irq vector bus. If 
> -       multiple interrupts are simultaneously active it chooses vector 
> +       through an interrupt request wire and an irq vector bus. If
> +       multiple interrupts are simultaneously active it chooses vector
>         0x30 and lets the sw choose the priorities.  */
>      if (fs->regs[R_R_MASKED_VECT]) {
>          uint32_t mv = fs->regs[R_R_MASKED_VECT];
> @@ -113,7 +113,7 @@ static const MemoryRegionOps pic_ops = {
>  };
>  
>  static void nmi_handler(void *opaque, int irq, int level)
> -{   
> +{
>      struct etrax_pic *fs = (void *)opaque;
>      uint32_t mask;
>  
> diff --git a/hw/intc/sh_intc.c b/hw/intc/sh_intc.c
> index 72a55e32dd..4c6e4b89a1 100644
> --- a/hw/intc/sh_intc.c
> +++ b/hw/intc/sh_intc.c
> @@ -236,7 +236,7 @@ static uint64_t sh_intc_read(void *opaque, hwaddr offset,
>      printf("sh_intc_read 0x%lx\n", (unsigned long) offset);
>  #endif
>  
> -    sh_intc_locate(desc, (unsigned long)offset, &valuep, 
> +    sh_intc_locate(desc, (unsigned long)offset, &valuep,
>  		   &enum_ids, &first, &width, &mode);
>      return *valuep;
>  }
> @@ -257,7 +257,7 @@ static void sh_intc_write(void *opaque, hwaddr offset,
>      printf("sh_intc_write 0x%lx 0x%08x\n", (unsigned long) offset, value);
>  #endif
>  
> -    sh_intc_locate(desc, (unsigned long)offset, &valuep, 
> +    sh_intc_locate(desc, (unsigned long)offset, &valuep,
>  		   &enum_ids, &first, &width, &mode);
>  
>      switch (mode) {
> @@ -273,7 +273,7 @@ static void sh_intc_write(void *opaque, hwaddr offset,
>  	if ((*valuep & mask) == (value & mask))
>              continue;
>  #if 0
> -	printf("k = %d, first = %d, enum = %d, mask = 0x%08x\n", 
> +	printf("k = %d, first = %d, enum = %d, mask = 0x%08x\n",
>  	       k, first, enum_ids[k], (unsigned int)mask);
>  #endif
>          sh_intc_toggle_mask(desc, enum_ids[k], value & mask, 0);
> @@ -466,7 +466,7 @@ int sh_intc_init(MemoryRegion *sysmem,
>      }
>  
>      desc->irqs = qemu_allocate_irqs(sh_intc_set_irq, desc, nr_sources);
> - 
> +
>      memory_region_init_io(&desc->iomem, NULL, &sh_intc_ops, desc,
>                            "interrupt-controller", 0x100000000ULL);
>  
> @@ -498,7 +498,7 @@ int sh_intc_init(MemoryRegion *sysmem,
>      return 0;
>  }
>  
> -/* Assert level <n> IRL interrupt. 
> +/* Assert level <n> IRL interrupt.
>     0:deassert. 1:lowest priority,... 15:highest priority. */
>  void sh_intc_set_irl(void *opaque, int n, int level)
>  {
> diff --git a/hw/intc/xilinx_intc.c b/hw/intc/xilinx_intc.c
> index 3e65e68619..dfc049de92 100644
> --- a/hw/intc/xilinx_intc.c
> +++ b/hw/intc/xilinx_intc.c
> @@ -113,7 +113,7 @@ pic_write(void *opaque, hwaddr addr,
>  
>      addr >>= 2;
>      D(qemu_log("%s addr=%x val=%x\n", __func__, addr * 4, value));
> -    switch (addr) 
> +    switch (addr)
>      {
>          case R_IAR:
>              p->regs[R_ISR] &= ~value; /* ACK.  */
> diff --git a/hw/misc/imx25_ccm.c b/hw/misc/imx25_ccm.c
> index d3107e5ca2..83dd09a9bc 100644
> --- a/hw/misc/imx25_ccm.c
> +++ b/hw/misc/imx25_ccm.c
> @@ -200,9 +200,9 @@ static void imx25_ccm_reset(DeviceState *dev)
>      memset(s->reg, 0, IMX25_CCM_MAX_REG * sizeof(uint32_t));
>      s->reg[IMX25_CCM_MPCTL_REG] = 0x800b2c01;
>      s->reg[IMX25_CCM_UPCTL_REG] = 0x84042800;
> -    /* 
> +    /*
>       * The value below gives:
> -     * CPU = 133 MHz, AHB = 66,5 MHz, IPG = 33 MHz. 
> +     * CPU = 133 MHz, AHB = 66,5 MHz, IPG = 33 MHz.
>       */
>      s->reg[IMX25_CCM_CCTL_REG]  = 0xd0030000;
>      s->reg[IMX25_CCM_CGCR0_REG] = 0x028A0100;
> @@ -219,7 +219,7 @@ static void imx25_ccm_reset(DeviceState *dev)
>  
>      /*
>       * default boot will change the reset values to allow:
> -     * CPU = 399 MHz, AHB = 133 MHz, IPG = 66,5 MHz. 
> +     * CPU = 399 MHz, AHB = 133 MHz, IPG = 66,5 MHz.
>       * For some reason, this doesn't work. With the value below, linux
>       * detects a 88 MHz IPG CLK instead of 66,5 MHz.
>      s->reg[IMX25_CCM_CCTL_REG]  = 0x20032000;
> diff --git a/hw/misc/imx31_ccm.c b/hw/misc/imx31_ccm.c
> index 6e246827ab..8da2757cbe 100644
> --- a/hw/misc/imx31_ccm.c
> +++ b/hw/misc/imx31_ccm.c
> @@ -115,7 +115,7 @@ static uint32_t imx31_ccm_get_pll_ref_clk(IMXCCMState *dev)
>              if (s->reg[IMX31_CCM_CCMR_REG] & CCMR_FPMF) {
>                  freq *= 1024;
>              }
> -        } 
> +        }
>      } else {
>          freq = CKIH_FREQ;
>      }
> diff --git a/hw/net/vmxnet3.h b/hw/net/vmxnet3.h
> index 5b3b76ba7a..020bf70afd 100644
> --- a/hw/net/vmxnet3.h
> +++ b/hw/net/vmxnet3.h
> @@ -246,7 +246,7 @@ struct Vmxnet3_TxDesc {
>          };
>          u32 val1;
>      };
> -    
> +
>      union {
>          struct {
>  #ifdef __BIG_ENDIAN_BITFIELD
> diff --git a/hw/net/xilinx_ethlite.c b/hw/net/xilinx_ethlite.c
> index 71d16fef3d..0703f9e444 100644
> --- a/hw/net/xilinx_ethlite.c
> +++ b/hw/net/xilinx_ethlite.c
> @@ -117,7 +117,7 @@ eth_write(void *opaque, hwaddr addr,
>      uint32_t value = val64;
>  
>      addr >>= 2;
> -    switch (addr) 
> +    switch (addr)
>      {
>          case R_TX_CTRL0:
>          case R_TX_CTRL1:
> diff --git a/hw/pci/pcie.c b/hw/pci/pcie.c
> index 5b48bae0f6..4692d9b5a3 100644
> --- a/hw/pci/pcie.c
> +++ b/hw/pci/pcie.c
> @@ -705,7 +705,7 @@ void pcie_cap_slot_write_config(PCIDevice *dev,
>  
>      hotplug_event_notify(dev);
>  
> -    /* 
> +    /*
>       * 6.7.3.2 Command Completed Events
>       *
>       * Software issues a command to a hot-plug capable Downstream Port by
> diff --git a/hw/sd/omap_mmc.c b/hw/sd/omap_mmc.c
> index 4088a8a80b..7c6f179578 100644
> --- a/hw/sd/omap_mmc.c
> +++ b/hw/sd/omap_mmc.c
> @@ -342,7 +342,7 @@ static uint64_t omap_mmc_read(void *opaque, hwaddr offset,
>          return s->arg >> 16;
>  
>      case 0x0c:	/* MMC_CON */
> -        return (s->dw << 15) | (s->mode << 12) | (s->enable << 11) | 
> +        return (s->dw << 15) | (s->mode << 12) | (s->enable << 11) |
>                  (s->be << 10) | s->clkdiv;
>  
>      case 0x10:	/* MMC_STAT */
> diff --git a/hw/sh4/shix.c b/hw/sh4/shix.c
> index f410c08883..dddfa8b336 100644
> --- a/hw/sh4/shix.c
> +++ b/hw/sh4/shix.c
> @@ -49,7 +49,7 @@ static void shix_init(MachineState *machine)
>      MemoryRegion *sysmem = get_system_memory();
>      MemoryRegion *rom = g_new(MemoryRegion, 1);
>      MemoryRegion *sdram = g_new(MemoryRegion, 2);
> -    
> +
>      cpu = SUPERH_CPU(cpu_create(machine->cpu_type));
>  
>      /* Allocate memory space */
> diff --git a/hw/sparc64/sun4u.c b/hw/sparc64/sun4u.c
> index 9c8655cffc..e3dd1c67a0 100644
> --- a/hw/sparc64/sun4u.c
> +++ b/hw/sparc64/sun4u.c
> @@ -670,7 +670,7 @@ static void sun4uv_init(MemoryRegion *address_space_mem,
>      s = SYS_BUS_DEVICE(nvram);
>      memory_region_add_subregion(pci_address_space_io(ebus), 0x2000,
>                                  sysbus_mmio_get_region(s, 0));
> - 
> +
>      initrd_size = 0;
>      initrd_addr = 0;
>      kernel_size = sun4u_load_kernel(machine->kernel_filename,
> diff --git a/hw/timer/etraxfs_timer.c b/hw/timer/etraxfs_timer.c
> index afe3d30a8e..797f65b3f4 100644
> --- a/hw/timer/etraxfs_timer.c
> +++ b/hw/timer/etraxfs_timer.c
> @@ -230,7 +230,7 @@ static inline void timer_watchdog_update(ETRAXTimerState *t, uint32_t value)
>      if (wd_en && wd_key != new_key)
>          return;
>  
> -    D(printf("en=%d new_key=%x oldkey=%x cmd=%d cnt=%d\n", 
> +    D(printf("en=%d new_key=%x oldkey=%x cmd=%d cnt=%d\n",
>           wd_en, new_key, wd_key, new_cmd, wd_cnt));
>  
>      if (t->wd_hits)
> diff --git a/hw/timer/xilinx_timer.c b/hw/timer/xilinx_timer.c
> index 0190aa47d0..0901ca7b05 100644
> --- a/hw/timer/xilinx_timer.c
> +++ b/hw/timer/xilinx_timer.c
> @@ -166,7 +166,7 @@ timer_write(void *opaque, hwaddr addr,
>               __func__, addr * 4, value, timer, addr & 3));
>      /* Further decoding to address a specific timers reg.  */
>      addr &= 3;
> -    switch (addr) 
> +    switch (addr)
>      {
>          case R_TCSR:
>              if (value & TCSR_TINT)
> @@ -179,7 +179,7 @@ timer_write(void *opaque, hwaddr addr,
>                  ptimer_transaction_commit(xt->ptimer);
>              }
>              break;
> - 
> +
>          default:
>              if (addr < ARRAY_SIZE(xt->regs))
>                  xt->regs[addr] = value;
> diff --git a/hw/usb/hcd-musb.c b/hw/usb/hcd-musb.c
> index 85f5ff5bd4..f64f47b34f 100644
> --- a/hw/usb/hcd-musb.c
> +++ b/hw/usb/hcd-musb.c
> @@ -33,8 +33,8 @@
>  
>  #define MUSB_HDRC_INTRTX	0x02	/* 16-bit */
>  #define MUSB_HDRC_INTRRX	0x04
> -#define MUSB_HDRC_INTRTXE	0x06  
> -#define MUSB_HDRC_INTRRXE	0x08  
> +#define MUSB_HDRC_INTRTXE	0x06
> +#define MUSB_HDRC_INTRRXE	0x08
>  #define MUSB_HDRC_INTRUSB	0x0a	/* 8 bit */
>  #define MUSB_HDRC_INTRUSBE	0x0b	/* 8 bit */
>  #define MUSB_HDRC_FRAME		0x0c	/* 16-bit */
> @@ -113,7 +113,7 @@
>   */
>  
>  /* POWER */
> -#define MGC_M_POWER_ISOUPDATE		0x80 
> +#define MGC_M_POWER_ISOUPDATE		0x80
>  #define	MGC_M_POWER_SOFTCONN		0x40
>  #define	MGC_M_POWER_HSENAB		0x20
>  #define	MGC_M_POWER_HSMODE		0x10
> @@ -127,7 +127,7 @@
>  #define MGC_M_INTR_RESUME		0x02
>  #define MGC_M_INTR_RESET		0x04
>  #define MGC_M_INTR_BABBLE		0x04
> -#define MGC_M_INTR_SOF			0x08 
> +#define MGC_M_INTR_SOF			0x08
>  #define MGC_M_INTR_CONNECT		0x10
>  #define MGC_M_INTR_DISCONNECT		0x20
>  #define MGC_M_INTR_SESSREQ		0x40
> @@ -135,7 +135,7 @@
>  #define MGC_M_INTR_EP0			0x01	/* FOR EP0 INTERRUPT */
>  
>  /* DEVCTL */
> -#define MGC_M_DEVCTL_BDEVICE		0x80   
> +#define MGC_M_DEVCTL_BDEVICE		0x80
>  #define MGC_M_DEVCTL_FSDEV		0x40
>  #define MGC_M_DEVCTL_LSDEV		0x20
>  #define MGC_M_DEVCTL_VBUS		0x18
> diff --git a/hw/usb/hcd-ohci.c b/hw/usb/hcd-ohci.c
> index 1e6e85e86a..a2bc7e05d6 100644
> --- a/hw/usb/hcd-ohci.c
> +++ b/hw/usb/hcd-ohci.c
> @@ -670,7 +670,7 @@ static int ohci_service_iso_td(OHCIState *ohci, struct ohci_ed *ed,
>  
>      starting_frame = OHCI_BM(iso_td.flags, TD_SF);
>      frame_count = OHCI_BM(iso_td.flags, TD_FC);
> -    relative_frame_number = USUB(ohci->frame_number, starting_frame); 
> +    relative_frame_number = USUB(ohci->frame_number, starting_frame);
>  
>      trace_usb_ohci_iso_td_head(
>             ed->head & OHCI_DPTR_MASK, ed->tail & OHCI_DPTR_MASK,
> @@ -733,8 +733,8 @@ static int ohci_service_iso_td(OHCIState *ohci, struct ohci_ed *ed,
>      start_offset = iso_td.offset[relative_frame_number];
>      next_offset = iso_td.offset[relative_frame_number + 1];
>  
> -    if (!(OHCI_BM(start_offset, TD_PSW_CC) & 0xe) || 
> -        ((relative_frame_number < frame_count) && 
> +    if (!(OHCI_BM(start_offset, TD_PSW_CC) & 0xe) ||
> +        ((relative_frame_number < frame_count) &&
>           !(OHCI_BM(next_offset, TD_PSW_CC) & 0xe))) {
>          trace_usb_ohci_iso_td_bad_cc_not_accessed(start_offset, next_offset);
>          return 1;
> diff --git a/hw/usb/hcd-uhci.c b/hw/usb/hcd-uhci.c
> index 37f7beb3fa..bebc10a723 100644
> --- a/hw/usb/hcd-uhci.c
> +++ b/hw/usb/hcd-uhci.c
> @@ -80,7 +80,7 @@ struct UHCIPCIDeviceClass {
>      UHCIInfo       info;
>  };
>  
> -/* 
> +/*
>   * Pending async transaction.
>   * 'packet' must be the first field because completion
>   * handler does "(UHCIAsync *) pkt" cast.
> diff --git a/hw/virtio/virtio-pci.c b/hw/virtio/virtio-pci.c
> index 7bc8c1c056..f7d8b30fd7 100644
> --- a/hw/virtio/virtio-pci.c
> +++ b/hw/virtio/virtio-pci.c
> @@ -836,7 +836,7 @@ static void virtio_pci_vq_vector_mask(VirtIOPCIProxy *proxy,
>  
>      /* If guest supports masking, keep irqfd but mask it.
>       * Otherwise, clean it up now.
> -     */ 
> +     */
>      if (vdev->use_guest_notifier_mask && k->guest_notifier_mask) {
>          k->guest_notifier_mask(vdev, queue_no, true);
>      } else {
> diff --git a/include/hw/cris/etraxfs_dma.h b/include/hw/cris/etraxfs_dma.h
> index 095d76b956..f11a5874cf 100644
> --- a/include/hw/cris/etraxfs_dma.h
> +++ b/include/hw/cris/etraxfs_dma.h
> @@ -28,9 +28,9 @@ struct etraxfs_dma_client
>  void *etraxfs_dmac_init(hwaddr base, int nr_channels);
>  void etraxfs_dmac_connect(void *opaque, int channel, qemu_irq *line,
>  			  int input);
> -void etraxfs_dmac_connect_client(void *opaque, int c, 
> +void etraxfs_dmac_connect_client(void *opaque, int c,
>  				 struct etraxfs_dma_client *cl);
> -int etraxfs_dmac_input(struct etraxfs_dma_client *client, 
> +int etraxfs_dmac_input(struct etraxfs_dma_client *client,
>  		       void *buf, int len, int eop);
>  
>  #endif
> diff --git a/include/hw/net/lance.h b/include/hw/net/lance.h
> index 0357f5f65c..6099c12d37 100644
> --- a/include/hw/net/lance.h
> +++ b/include/hw/net/lance.h
> @@ -6,7 +6,7 @@
>   *
>   * This represents the Sparc32 lance (Am7990) ethernet device which is an
>   * earlier register-compatible member of the AMD PC-Net II (Am79C970A) family.
> - * 
> + *
>   * Permission is hereby granted, free of charge, to any person obtaining a copy
>   * of this software and associated documentation files (the "Software"), to deal
>   * in the Software without restriction, including without limitation the rights
> diff --git a/include/hw/ppc/spapr.h b/include/hw/ppc/spapr.h
> index c421410e3f..fdeed5ecb6 100644
> --- a/include/hw/ppc/spapr.h
> +++ b/include/hw/ppc/spapr.h
> @@ -131,7 +131,7 @@ struct SpaprMachineClass {
>      hwaddr rma_limit;          /* clamp the RMA to this size */
>  
>      void (*phb_placement)(SpaprMachineState *spapr, uint32_t index,
> -                          uint64_t *buid, hwaddr *pio, 
> +                          uint64_t *buid, hwaddr *pio,
>                            hwaddr *mmio32, hwaddr *mmio64,
>                            unsigned n_dma, uint32_t *liobns, hwaddr *nv2gpa,
>                            hwaddr *nv2atsd, Error **errp);
> diff --git a/include/hw/xen/interface/io/ring.h b/include/hw/xen/interface/io/ring.h
> index 5d048b335c..fdb2a6ecba 100644
> --- a/include/hw/xen/interface/io/ring.h
> +++ b/include/hw/xen/interface/io/ring.h
> @@ -1,6 +1,6 @@
>  /******************************************************************************
>   * ring.h
> - * 
> + *
>   * Shared producer-consumer ring macros.
>   *
>   * Permission is hereby granted, free of charge, to any person obtaining a copy
> @@ -61,7 +61,7 @@ typedef unsigned int RING_IDX;
>  /*
>   * Calculate size of a shared ring, given the total available space for the
>   * ring and indexes (_sz), and the name tag of the request/response structure.
> - * A ring contains as many entries as will fit, rounded down to the nearest 
> + * A ring contains as many entries as will fit, rounded down to the nearest
>   * power of two (so we can mask with (size-1) to loop around).
>   */
>  #define __CONST_RING_SIZE(_s, _sz) \
> @@ -75,7 +75,7 @@ typedef unsigned int RING_IDX;
>  
>  /*
>   * Macros to make the correct C datatypes for a new kind of ring.
> - * 
> + *
>   * To make a new ring datatype, you need to have two message structures,
>   * let's say request_t, and response_t already defined.
>   *
> @@ -85,7 +85,7 @@ typedef unsigned int RING_IDX;
>   *
>   * These expand out to give you a set of types, as you can see below.
>   * The most important of these are:
> - * 
> + *
>   *     mytag_sring_t      - The shared ring.
>   *     mytag_front_ring_t - The 'front' half of the ring.
>   *     mytag_back_ring_t  - The 'back' half of the ring.
> @@ -153,15 +153,15 @@ typedef struct __name##_back_ring __name##_back_ring_t
>  
>  /*
>   * Macros for manipulating rings.
> - * 
> - * FRONT_RING_whatever works on the "front end" of a ring: here 
> + *
> + * FRONT_RING_whatever works on the "front end" of a ring: here
>   * requests are pushed on to the ring and responses taken off it.
> - * 
> - * BACK_RING_whatever works on the "back end" of a ring: here 
> + *
> + * BACK_RING_whatever works on the "back end" of a ring: here
>   * requests are taken off the ring and responses put on.
> - * 
> - * N.B. these macros do NO INTERLOCKS OR FLOW CONTROL. 
> - * This is OK in 1-for-1 request-response situations where the 
> + *
> + * N.B. these macros do NO INTERLOCKS OR FLOW CONTROL.
> + * This is OK in 1-for-1 request-response situations where the
>   * requestor (front end) never has more than RING_SIZE()-1
>   * outstanding requests.
>   */
> @@ -263,26 +263,26 @@ typedef struct __name##_back_ring __name##_back_ring_t
>  
>  /*
>   * Notification hold-off (req_event and rsp_event):
> - * 
> + *
>   * When queueing requests or responses on a shared ring, it may not always be
>   * necessary to notify the remote end. For example, if requests are in flight
>   * in a backend, the front may be able to queue further requests without
>   * notifying the back (if the back checks for new requests when it queues
>   * responses).
> - * 
> + *
>   * When enqueuing requests or responses:
> - * 
> + *
>   *  Use RING_PUSH_{REQUESTS,RESPONSES}_AND_CHECK_NOTIFY(). The second argument
>   *  is a boolean return value. True indicates that the receiver requires an
>   *  asynchronous notification.
> - * 
> + *
>   * After dequeuing requests or responses (before sleeping the connection):
> - * 
> + *
>   *  Use RING_FINAL_CHECK_FOR_REQUESTS() or RING_FINAL_CHECK_FOR_RESPONSES().
>   *  The second argument is a boolean return value. True indicates that there
>   *  are pending messages on the ring (i.e., the connection should not be put
>   *  to sleep).
> - * 
> + *
>   *  These macros will set the req_event/rsp_event field to trigger a
>   *  notification on the very next message that is enqueued. If you want to
>   *  create batches of work (i.e., only receive a notification after several
> diff --git a/include/qemu/log.h b/include/qemu/log.h
> index f4724f7330..1a4e066160 100644
> --- a/include/qemu/log.h
> +++ b/include/qemu/log.h
> @@ -14,7 +14,7 @@ typedef struct QemuLogFile {
>  extern QemuLogFile *qemu_logfile;
>  
>  
> -/* 
> +/*
>   * The new API:
>   *
>   */
> diff --git a/include/qom/object.h b/include/qom/object.h
> index 94a61ccc3f..380007b133 100644
> --- a/include/qom/object.h
> +++ b/include/qom/object.h
> @@ -1443,12 +1443,12 @@ char *object_get_canonical_path(const Object *obj);
>   *   ambiguous match
>   *
>   * There are two types of supported paths--absolute paths and partial paths.
> - * 
> + *
>   * Absolute paths are derived from the root object and can follow child<> or
>   * link<> properties.  Since they can follow link<> properties, they can be
>   * arbitrarily long.  Absolute paths look like absolute filenames and are
>   * prefixed with a leading slash.
> - * 
> + *
>   * Partial paths look like relative filenames.  They do not begin with a
>   * prefix.  The matching rules for partial paths are subtle but designed to make
>   * specifying objects easy.  At each level of the composition tree, the partial
> diff --git a/linux-user/cris/cpu_loop.c b/linux-user/cris/cpu_loop.c
> index 334edddd1e..25d0861df9 100644
> --- a/linux-user/cris/cpu_loop.c
> +++ b/linux-user/cris/cpu_loop.c
> @@ -27,7 +27,7 @@ void cpu_loop(CPUCRISState *env)
>      CPUState *cs = env_cpu(env);
>      int trapnr, ret;
>      target_siginfo_t info;
> -    
> +
>      while (1) {
>          cpu_exec_start(cs);
>          trapnr = cpu_exec(cs);
> @@ -49,13 +49,13 @@ void cpu_loop(CPUCRISState *env)
>            /* just indicate that signals should be handled asap */
>            break;
>          case EXCP_BREAK:
> -            ret = do_syscall(env, 
> -                             env->regs[9], 
> -                             env->regs[10], 
> -                             env->regs[11], 
> -                             env->regs[12], 
> -                             env->regs[13], 
> -                             env->pregs[7], 
> +            ret = do_syscall(env,
> +                             env->regs[9],
> +                             env->regs[10],
> +                             env->regs[11],
> +                             env->regs[12],
> +                             env->regs[13],
> +                             env->pregs[7],
>                               env->pregs[11],
>                               0, 0);
>              if (ret == -TARGET_ERESTARTSYS) {
> diff --git a/linux-user/microblaze/cpu_loop.c b/linux-user/microblaze/cpu_loop.c
> index 3e0a7f730b..990dda26c3 100644
> --- a/linux-user/microblaze/cpu_loop.c
> +++ b/linux-user/microblaze/cpu_loop.c
> @@ -27,7 +27,7 @@ void cpu_loop(CPUMBState *env)
>      CPUState *cs = env_cpu(env);
>      int trapnr, ret;
>      target_siginfo_t info;
> -    
> +
>      while (1) {
>          cpu_exec_start(cs);
>          trapnr = cpu_exec(cs);
> @@ -52,13 +52,13 @@ void cpu_loop(CPUMBState *env)
>              /* Return address is 4 bytes after the call.  */
>              env->regs[14] += 4;
>              env->sregs[SR_PC] = env->regs[14];
> -            ret = do_syscall(env, 
> -                             env->regs[12], 
> -                             env->regs[5], 
> -                             env->regs[6], 
> -                             env->regs[7], 
> -                             env->regs[8], 
> -                             env->regs[9], 
> +            ret = do_syscall(env,
> +                             env->regs[12],
> +                             env->regs[5],
> +                             env->regs[6],
> +                             env->regs[7],
> +                             env->regs[8],
> +                             env->regs[9],
>                               env->regs[10],
>                               0, 0);
>              if (ret == -TARGET_ERESTARTSYS) {
> diff --git a/linux-user/mmap.c b/linux-user/mmap.c
> index 0019447892..e48056f6ad 100644
> --- a/linux-user/mmap.c
> +++ b/linux-user/mmap.c
> @@ -401,12 +401,12 @@ abi_long target_mmap(abi_ulong start, abi_ulong len, int prot,
>      }
>  
>      /* When mapping files into a memory area larger than the file, accesses
> -       to pages beyond the file size will cause a SIGBUS. 
> +       to pages beyond the file size will cause a SIGBUS.
>  
>         For example, if mmaping a file of 100 bytes on a host with 4K pages
>         emulating a target with 8K pages, the target expects to be able to
>         access the first 8K. But the host will trap us on any access beyond
> -       4K.  
> +       4K.
>  
>         When emulating a target with a larger page-size than the hosts, we
>         may need to truncate file maps at EOF and add extra anonymous pages
> @@ -421,7 +421,7 @@ abi_long target_mmap(abi_ulong start, abi_ulong len, int prot,
>  
>         /* Are we trying to create a map beyond EOF?.  */
>         if (offset + len > sb.st_size) {
> -           /* If so, truncate the file map at eof aligned with 
> +           /* If so, truncate the file map at eof aligned with
>                the hosts real pagesize. Additional anonymous maps
>                will be created beyond EOF.  */
>             len = REAL_HOST_PAGE_ALIGN(sb.st_size - offset);
> @@ -496,7 +496,7 @@ abi_long target_mmap(abi_ulong start, abi_ulong len, int prot,
>              }
>              goto the_end;
>          }
> -        
> +
>          /* handle the start of the mapping */
>          if (start > real_start) {
>              if (real_end == real_start + qemu_host_page_size) {
> diff --git a/linux-user/sparc/signal.c b/linux-user/sparc/signal.c
> index d796f50f66..53efb61c70 100644
> --- a/linux-user/sparc/signal.c
> +++ b/linux-user/sparc/signal.c
> @@ -104,7 +104,7 @@ struct target_rt_signal_frame {
>      qemu_siginfo_fpu_t  fpu_state;
>  };
>  
> -static inline abi_ulong get_sigframe(struct target_sigaction *sa, 
> +static inline abi_ulong get_sigframe(struct target_sigaction *sa,
>                                       CPUSPARCState *env,
>                                       unsigned long framesize)
>  {
> @@ -506,7 +506,7 @@ void sparc64_get_context(CPUSPARCState *env)
>      if (!lock_user_struct(VERIFY_WRITE, ucp, ucp_addr, 0)) {
>          goto do_sigsegv;
>      }
> -    
> +
>      mcp = &ucp->tuc_mcontext;
>      grp = &mcp->mc_gregs;
>  
> diff --git a/linux-user/syscall.c b/linux-user/syscall.c
> index 97de9fb5c9..10d91a9781 100644
> --- a/linux-user/syscall.c
> +++ b/linux-user/syscall.c
> @@ -1104,7 +1104,7 @@ static inline rlim_t target_to_host_rlim(abi_ulong target_rlim)
>  {
>      abi_ulong target_rlim_swap;
>      rlim_t result;
> -    
> +
>      target_rlim_swap = tswapal(target_rlim);
>      if (target_rlim_swap == TARGET_RLIM_INFINITY)
>          return RLIM_INFINITY;
> @@ -1112,7 +1112,7 @@ static inline rlim_t target_to_host_rlim(abi_ulong target_rlim)
>      result = target_rlim_swap;
>      if (target_rlim_swap != (rlim_t)result)
>          return RLIM_INFINITY;
> -    
> +
>      return result;
>  }
>  #endif
> @@ -1122,13 +1122,13 @@ static inline abi_ulong host_to_target_rlim(rlim_t rlim)
>  {
>      abi_ulong target_rlim_swap;
>      abi_ulong result;
> -    
> +
>      if (rlim == RLIM_INFINITY || rlim != (abi_long)rlim)
>          target_rlim_swap = TARGET_RLIM_INFINITY;
>      else
>          target_rlim_swap = rlim;
>      result = tswapal(target_rlim_swap);
> -    
> +
>      return result;
>  }
>  #endif
> @@ -1615,9 +1615,9 @@ static inline abi_long target_to_host_cmsg(struct msghdr *msgh,
>      abi_ulong target_cmsg_addr;
>      struct target_cmsghdr *target_cmsg, *target_cmsg_start;
>      socklen_t space = 0;
> -    
> +
>      msg_controllen = tswapal(target_msgh->msg_controllen);
> -    if (msg_controllen < sizeof (struct target_cmsghdr)) 
> +    if (msg_controllen < sizeof (struct target_cmsghdr))
>          goto the_end;
>      target_cmsg_addr = tswapal(target_msgh->msg_control);
>      target_cmsg = lock_user(VERIFY_READ, target_cmsg_addr, msg_controllen, 1);
> @@ -1703,7 +1703,7 @@ static inline abi_long host_to_target_cmsg(struct target_msghdr *target_msgh,
>      socklen_t space = 0;
>  
>      msg_controllen = tswapal(target_msgh->msg_controllen);
> -    if (msg_controllen < sizeof (struct target_cmsghdr)) 
> +    if (msg_controllen < sizeof (struct target_cmsghdr))
>          goto the_end;
>      target_cmsg_addr = tswapal(target_msgh->msg_control);
>      target_cmsg = lock_user(VERIFY_WRITE, target_cmsg_addr, msg_controllen, 0);
> @@ -5750,7 +5750,7 @@ abi_long do_set_thread_area(CPUX86State *env, abi_ulong ptr)
>      }
>      unlock_user_struct(target_ldt_info, ptr, 1);
>  
> -    if (ldt_info.entry_number < TARGET_GDT_ENTRY_TLS_MIN || 
> +    if (ldt_info.entry_number < TARGET_GDT_ENTRY_TLS_MIN ||
>          ldt_info.entry_number > TARGET_GDT_ENTRY_TLS_MAX)
>             return -TARGET_EINVAL;
>      seg_32bit = ldt_info.flags & 1;
> @@ -5828,7 +5828,7 @@ static abi_long do_get_thread_area(CPUX86State *env, abi_ulong ptr)
>      lp = (uint32_t *)(gdt_table + idx);
>      entry_1 = tswap32(lp[0]);
>      entry_2 = tswap32(lp[1]);
> -    
> +
>      read_exec_only = ((entry_2 >> 9) & 1) ^ 1;
>      contents = (entry_2 >> 10) & 3;
>      seg_not_present = ((entry_2 >> 15) & 1) ^ 1;
> @@ -5844,8 +5844,8 @@ static abi_long do_get_thread_area(CPUX86State *env, abi_ulong ptr)
>          (read_exec_only << 3) | (limit_in_pages << 4) |
>          (seg_not_present << 5) | (useable << 6) | (lm << 7);
>      limit = (entry_1 & 0xffff) | (entry_2  & 0xf0000);
> -    base_addr = (entry_1 >> 16) | 
> -        (entry_2 & 0xff000000) | 
> +    base_addr = (entry_1 >> 16) |
> +        (entry_2 & 0xff000000) |
>          ((entry_2 & 0xff) << 16);
>      target_ldt_info->base_addr = tswapal(base_addr);
>      target_ldt_info->limit = tswap32(limit);
> @@ -10873,7 +10873,7 @@ static abi_long do_syscall1(void *cpu_env, int num, abi_long arg1,
>          return get_errno(fchown(arg1, low2highuid(arg2), low2highgid(arg3)));
>  #if defined(TARGET_NR_fchownat)
>      case TARGET_NR_fchownat:
> -        if (!(p = lock_user_string(arg2))) 
> +        if (!(p = lock_user_string(arg2)))
>              return -TARGET_EFAULT;
>          ret = get_errno(fchownat(arg1, p, low2highuid(arg3),
>                                   low2highgid(arg4), arg5));
> diff --git a/linux-user/syscall_defs.h b/linux-user/syscall_defs.h
> index 152ec637cb..752ea5ee83 100644
> --- a/linux-user/syscall_defs.h
> +++ b/linux-user/syscall_defs.h
> @@ -1923,7 +1923,7 @@ struct target_stat {
>  	abi_long	st_blocks;	/* Number 512-byte blocks allocated. */
>  
>  	abi_ulong	target_st_atime;
> -	abi_ulong 	target_st_atime_nsec; 
> +	abi_ulong 	target_st_atime_nsec;
>  	abi_ulong	target_st_mtime;
>  	abi_ulong	target_st_mtime_nsec;
>  	abi_ulong	target_st_ctime;
> diff --git a/linux-user/uaccess.c b/linux-user/uaccess.c
> index e215ecc2a6..91e2067933 100644
> --- a/linux-user/uaccess.c
> +++ b/linux-user/uaccess.c
> @@ -55,7 +55,7 @@ abi_long target_strlen(abi_ulong guest_addr1)
>          unlock_user(ptr, guest_addr, 0);
>          guest_addr += len;
>          /* we don't allow wrapping or integer overflow */
> -        if (guest_addr == 0 || 
> +        if (guest_addr == 0 ||
>              (guest_addr - guest_addr1) > 0x7fffffff)
>              return -TARGET_EFAULT;
>          if (len != max_len)
> diff --git a/os-posix.c b/os-posix.c
> index 3cd52e1e70..fa6dfae168 100644
> --- a/os-posix.c
> +++ b/os-posix.c
> @@ -316,7 +316,7 @@ void os_setup_post(void)
>  
>          close(fd);
>  
> -        do {        
> +        do {
>              len = write(daemon_pipe, &status, 1);
>          } while (len < 0 && errno == EINTR);
>          if (len != 1) {
> diff --git a/qapi/qapi-util.c b/qapi/qapi-util.c
> index 29a6c98b53..48045c3ccc 100644
> --- a/qapi/qapi-util.c
> +++ b/qapi/qapi-util.c
> @@ -4,7 +4,7 @@
>   * Authors:
>   *  Hu Tao       <hutao@cn.fujitsu.com>
>   *  Peter Lieven <pl@kamp.de>
> - * 
> + *
>   * This work is licensed under the terms of the GNU LGPL, version 2.1 or later.
>   * See the COPYING.LIB file in the top-level directory.
>   *
> diff --git a/qemu-img.c b/qemu-img.c
> index bdb9f6aa46..72dfa096b1 100644
> --- a/qemu-img.c
> +++ b/qemu-img.c
> @@ -248,7 +248,7 @@ static bool qemu_img_object_print_help(const char *type, QemuOpts *opts)
>   * an odd number of ',' (or else a separating ',' following it gets
>   * escaped), or be empty (or else a separating ',' preceding it can
>   * escape a separating ',' following it).
> - * 
> + *
>   */
>  static bool is_valid_option_list(const char *optarg)
>  {
> diff --git a/qemu-options.hx b/qemu-options.hx
> index 196f468786..2f728fde47 100644
> --- a/qemu-options.hx
> +++ b/qemu-options.hx
> @@ -192,15 +192,15 @@ DEF("numa", HAS_ARG, QEMU_OPTION_numa,
>      QEMU_ARCH_ALL)
>  SRST
>  ``-numa node[,mem=size][,cpus=firstcpu[-lastcpu]][,nodeid=node][,initiator=initiator]``
> -  \ 
> +  \
>  ``-numa node[,memdev=id][,cpus=firstcpu[-lastcpu]][,nodeid=node][,initiator=initiator]``
>    \
>  ``-numa dist,src=source,dst=destination,val=distance``
> -  \ 
> +  \
>  ``-numa cpu,node-id=node[,socket-id=x][,core-id=y][,thread-id=z]``
> -  \ 
> +  \
>  ``-numa hmat-lb,initiator=node,target=node,hierarchy=hierarchy,data-type=tpye[,latency=lat][,bandwidth=bw]``
> -  \ 
> +  \
>  ``-numa hmat-cache,node-id=node,size=size,level=level[,associativity=str][,policy=str][,line=size]``
>      Define a NUMA node and assign RAM and VCPUs to it. Set the NUMA
>      distance from a source node to a destination node. Set the ACPI
> @@ -395,7 +395,7 @@ DEF("global", HAS_ARG, QEMU_OPTION_global,
>      QEMU_ARCH_ALL)
>  SRST
>  ``-global driver.prop=value``
> -  \ 
> +  \
>  ``-global driver=driver,property=property,value=value``
>      Set default value of driver's property prop to value, e.g.:
>  
> @@ -926,9 +926,9 @@ SRST
>  ``-hda file``
>    \
>  ``-hdb file``
> -  \ 
> +  \
>  ``-hdc file``
> -  \ 
> +  \
>  ``-hdd file``
>      Use file as hard disk 0, 1, 2 or 3 image (see
>      :ref:`disk_005fimages`).
> @@ -1416,7 +1416,7 @@ DEF("fsdev", HAS_ARG, QEMU_OPTION_fsdev,
>  
>  SRST
>  ``-fsdev local,id=id,path=path,security_model=security_model [,writeout=writeout][,readonly][,fmode=fmode][,dmode=dmode] [,throttling.option=value[,throttling.option=value[,...]]]``
> -  \ 
> +  \
>  ``-fsdev proxy,id=id,socket=socket[,writeout=writeout][,readonly]``
>    \
>  ``-fsdev proxy,id=id,sock_fd=sock_fd[,writeout=writeout][,readonly]``
> @@ -1537,9 +1537,9 @@ DEF("virtfs", HAS_ARG, QEMU_OPTION_virtfs,
>  
>  SRST
>  ``-virtfs local,path=path,mount_tag=mount_tag ,security_model=security_model[,writeout=writeout][,readonly] [,fmode=fmode][,dmode=dmode][,multidevs=multidevs]``
> -  \ 
> +  \
>  ``-virtfs proxy,socket=socket,mount_tag=mount_tag [,writeout=writeout][,readonly]``
> -  \ 
> +  \
>  ``-virtfs proxy,sock_fd=sock_fd,mount_tag=mount_tag [,writeout=writeout][,readonly]``
>    \
>  ``-virtfs synth,mount_tag=mount_tag``
> @@ -3674,7 +3674,7 @@ DEF("overcommit", HAS_ARG, QEMU_OPTION_overcommit,
>      QEMU_ARCH_ALL)
>  SRST
>  ``-overcommit mem-lock=on|off``
> -  \ 
> +  \
>  ``-overcommit cpu-pm=on|off``
>      Run qemu with hints about host resource overcommit. The default is
>      to assume that host overcommits all resources.
> @@ -4045,7 +4045,7 @@ DEF("incoming", HAS_ARG, QEMU_OPTION_incoming, \
>      QEMU_ARCH_ALL)
>  SRST
>  ``-incoming tcp:[host]:port[,to=maxport][,ipv4][,ipv6]``
> -  \ 
> +  \
>  ``-incoming rdma:host:port[,ipv4][,ipv6]``
>      Prepare for incoming migration, listen on a given tcp port.
>  
> @@ -4753,7 +4753,7 @@ SRST
>                 [...]
>  
>      ``-object secret,id=id,data=string,format=raw|base64[,keyid=secretid,iv=string]``
> -      \ 
> +      \
>      ``-object secret,id=id,file=filename,format=raw|base64[,keyid=secretid,iv=string]``
>          Defines a secret to store a password, encryption key, or some
>          other sensitive data. The sensitive data can either be passed
> diff --git a/qom/object.c b/qom/object.c
> index 6ece96bc2b..30630d789f 100644
> --- a/qom/object.c
> +++ b/qom/object.c
> @@ -1020,7 +1020,7 @@ static void object_class_foreach_tramp(gpointer key, gpointer value,
>          return;
>      }
>  
> -    if (data->implements_type && 
> +    if (data->implements_type &&
>          !object_class_dynamic_cast(k, data->implements_type)) {
>          return;
>      }
> diff --git a/target/cris/translate.c b/target/cris/translate.c
> index aaa46b5bca..df979594c3 100644
> --- a/target/cris/translate.c
> +++ b/target/cris/translate.c
> @@ -369,7 +369,7 @@ static inline void t_gen_addx_carry(DisasContext *dc, TCGv d)
>      if (dc->flagx_known) {
>          if (dc->flags_x) {
>              TCGv c;
> -            
> +
>              c = tcg_temp_new();
>              t_gen_mov_TN_preg(c, PR_CCS);
>              /* C flag is already at bit 0.  */
> @@ -402,7 +402,7 @@ static inline void t_gen_subx_carry(DisasContext *dc, TCGv d)
>      if (dc->flagx_known) {
>          if (dc->flags_x) {
>              TCGv c;
> -            
> +
>              c = tcg_temp_new();
>              t_gen_mov_TN_preg(c, PR_CCS);
>              /* C flag is already at bit 0.  */
> @@ -688,7 +688,7 @@ static inline void cris_update_cc_x(DisasContext *dc)
>  }
>  
>  /* Update cc prior to executing ALU op. Needs source operands untouched.  */
> -static void cris_pre_alu_update_cc(DisasContext *dc, int op, 
> +static void cris_pre_alu_update_cc(DisasContext *dc, int op,
>                     TCGv dst, TCGv src, int size)
>  {
>      if (dc->update_cc) {
> @@ -718,7 +718,7 @@ static inline void cris_update_result(DisasContext *dc, TCGv res)
>  }
>  
>  /* Returns one if the write back stage should execute.  */
> -static void cris_alu_op_exec(DisasContext *dc, int op, 
> +static void cris_alu_op_exec(DisasContext *dc, int op,
>                     TCGv dst, TCGv a, TCGv b, int size)
>  {
>      /* Emit the ALU insns.  */
> @@ -1068,7 +1068,7 @@ static void cris_store_direct_jmp(DisasContext *dc)
>      }
>  }
>  
> -static void cris_prepare_cc_branch (DisasContext *dc, 
> +static void cris_prepare_cc_branch (DisasContext *dc,
>                      int offset, int cond)
>  {
>      /* This helps us re-schedule the micro-code to insns in delay-slots
> @@ -1108,7 +1108,7 @@ static void gen_load64(DisasContext *dc, TCGv_i64 dst, TCGv addr)
>      tcg_gen_qemu_ld_i64(dst, addr, mem_index, MO_TEQ);
>  }
>  
> -static void gen_load(DisasContext *dc, TCGv dst, TCGv addr, 
> +static void gen_load(DisasContext *dc, TCGv dst, TCGv addr,
>               unsigned int size, int sign)
>  {
>      int mem_index = cpu_mmu_index(&dc->cpu->env, false);
> @@ -3047,27 +3047,27 @@ static unsigned int crisv32_decoder(CPUCRISState *env, DisasContext *dc)
>   * to give SW a hint that the exception actually hit on the dslot.
>   *
>   * CRIS expects all PC addresses to be 16-bit aligned. The lsb is ignored by
> - * the core and any jmp to an odd addresses will mask off that lsb. It is 
> + * the core and any jmp to an odd addresses will mask off that lsb. It is
>   * simply there to let sw know there was an exception on a dslot.
>   *
>   * When the software returns from an exception, the branch will re-execute.
>   * On QEMU care needs to be taken when a branch+delayslot sequence is broken
>   * and the branch and delayslot don't share pages.
>   *
> - * The TB contaning the branch insn will set up env->btarget and evaluate 
> - * env->btaken. When the translation loop exits we will note that the branch 
> + * The TB contaning the branch insn will set up env->btarget and evaluate
> + * env->btaken. When the translation loop exits we will note that the branch
>   * sequence is broken and let env->dslot be the size of the branch insn (those
>   * vary in length).
>   *
>   * The TB contaning the delayslot will have the PC of its real insn (i.e no lsb
> - * set). It will also expect to have env->dslot setup with the size of the 
> - * delay slot so that env->pc - env->dslot point to the branch insn. This TB 
> - * will execute the dslot and take the branch, either to btarget or just one 
> + * set). It will also expect to have env->dslot setup with the size of the
> + * delay slot so that env->pc - env->dslot point to the branch insn. This TB
> + * will execute the dslot and take the branch, either to btarget or just one
>   * insn ahead.
>   *
> - * When exceptions occur, we check for env->dslot in do_interrupt to detect 
> + * When exceptions occur, we check for env->dslot in do_interrupt to detect
>   * broken branch sequences and setup $erp accordingly (i.e let it point to the
> - * branch and set lsb). Then env->dslot gets cleared so that the exception 
> + * branch and set lsb). Then env->dslot gets cleared so that the exception
>   * handler can enter. When returning from exceptions (jump $erp) the lsb gets
>   * masked off and we will reexecute the branch insn.
>   *
> diff --git a/target/cris/translate_v10.inc.c b/target/cris/translate_v10.inc.c
> index ae34a0d1a3..ad4e213847 100644
> --- a/target/cris/translate_v10.inc.c
> +++ b/target/cris/translate_v10.inc.c
> @@ -299,7 +299,7 @@ static unsigned int dec10_quick_imm(DisasContext *dc)
>  
>              op = CC_OP_LSL;
>              if (imm & (1 << 5)) {
> -                op = CC_OP_LSR; 
> +                op = CC_OP_LSR;
>              }
>              imm &= 0x1f;
>              cris_cc_mask(dc, CC_MASK_NZVC);
> @@ -335,7 +335,7 @@ static unsigned int dec10_quick_imm(DisasContext *dc)
>              LOG_DIS("b%s %d\n", cc_name(dc->cond), imm);
>  
>              cris_cc_mask(dc, 0);
> -            cris_prepare_cc_branch(dc, imm, dc->cond); 
> +            cris_prepare_cc_branch(dc, imm, dc->cond);
>              break;
>  
>          default:
> @@ -487,7 +487,7 @@ static void dec10_reg_mov_pr(DisasContext *dc)
>          return;
>      }
>      if (dc->dst == PR_CCS) {
> -        cris_evaluate_flags(dc); 
> +        cris_evaluate_flags(dc);
>      }
>      cris_alu(dc, CC_OP_MOVE, cpu_R[dc->src],
>                   cpu_R[dc->src], cpu_PR[dc->dst], preg_sizes_v10[dc->dst]);
> diff --git a/target/i386/hvf/hvf.c b/target/i386/hvf/hvf.c
> index be016b951a..d3f836a0f4 100644
> --- a/target/i386/hvf/hvf.c
> +++ b/target/i386/hvf/hvf.c
> @@ -967,13 +967,13 @@ static int hvf_accel_init(MachineState *ms)
>      assert_hvf_ok(ret);
>  
>      s = g_new0(HVFState, 1);
> - 
> +
>      s->num_slots = 32;
>      for (x = 0; x < s->num_slots; ++x) {
>          s->slots[x].size = 0;
>          s->slots[x].slot_id = x;
>      }
> -  
> +
>      hvf_state = s;
>      cpu_interrupt_handler = hvf_handle_interrupt;
>      memory_listener_register(&hvf_memory_listener, &address_space_memory);
> diff --git a/target/i386/hvf/x86.c b/target/i386/hvf/x86.c
> index fdb11c8db9..6fd6e541d8 100644
> --- a/target/i386/hvf/x86.c
> +++ b/target/i386/hvf/x86.c
> @@ -83,7 +83,7 @@ bool x86_write_segment_descriptor(struct CPUState *cpu,
>  {
>      target_ulong base;
>      uint32_t limit;
> -    
> +
>      if (GDT_SEL == sel.ti) {
>          base  = rvmcs(cpu->hvf_fd, VMCS_GUEST_GDTR_BASE);
>          limit = rvmcs(cpu->hvf_fd, VMCS_GUEST_GDTR_LIMIT);
> @@ -91,7 +91,7 @@ bool x86_write_segment_descriptor(struct CPUState *cpu,
>          base  = rvmcs(cpu->hvf_fd, VMCS_GUEST_LDTR_BASE);
>          limit = rvmcs(cpu->hvf_fd, VMCS_GUEST_LDTR_LIMIT);
>      }
> -    
> +
>      if (sel.index * 8 >= limit) {
>          printf("%s: gdt limit\n", __func__);
>          return false;
> diff --git a/target/i386/hvf/x86_decode.c b/target/i386/hvf/x86_decode.c
> index 34c5e3006c..8c576febd2 100644
> --- a/target/i386/hvf/x86_decode.c
> +++ b/target/i386/hvf/x86_decode.c
> @@ -63,7 +63,7 @@ static inline uint64_t decode_bytes(CPUX86State *env, struct x86_decode *decode,
>                                      int size)
>  {
>      target_ulong val = 0;
> -    
> +
>      switch (size) {
>      case 1:
>      case 2:
> @@ -77,7 +77,7 @@ static inline uint64_t decode_bytes(CPUX86State *env, struct x86_decode *decode,
>      target_ulong va  = linear_rip(env_cpu(env), env->eip) + decode->len;
>      vmx_read_mem(env_cpu(env), &val, va, size);
>      decode->len += size;
> -    
> +
>      return val;
>  }
>  
> @@ -210,7 +210,7 @@ static void decode_imm_0(CPUX86State *env, struct x86_decode *decode,
>  static void decode_pushseg(CPUX86State *env, struct x86_decode *decode)
>  {
>      uint8_t op = (decode->opcode_len > 1) ? decode->opcode[1] : decode->opcode[0];
> -    
> +
>      decode->op[0].type = X86_VAR_REG;
>      switch (op) {
>      case 0xe:
> @@ -237,7 +237,7 @@ static void decode_pushseg(CPUX86State *env, struct x86_decode *decode)
>  static void decode_popseg(CPUX86State *env, struct x86_decode *decode)
>  {
>      uint8_t op = (decode->opcode_len > 1) ? decode->opcode[1] : decode->opcode[0];
> -    
> +
>      decode->op[0].type = X86_VAR_REG;
>      switch (op) {
>      case 0xf:
> @@ -461,14 +461,14 @@ struct decode_x87_tbl _decode_tbl3[256];
>  static void decode_x87_ins(CPUX86State *env, struct x86_decode *decode)
>  {
>      struct decode_x87_tbl *decoder;
> -    
> +
>      decode->is_fpu = true;
>      int mode = decode->modrm.mod == 3 ? 1 : 0;
>      int index = ((decode->opcode[0] & 0xf) << 4) | (mode << 3) |
>                   decode->modrm.reg;
>  
>      decoder = &_decode_tbl3[index];
> -    
> +
>      decode->cmd = decoder->cmd;
>      if (decoder->operand_size) {
>          decode->operand_size = decoder->operand_size;
> @@ -476,7 +476,7 @@ static void decode_x87_ins(CPUX86State *env, struct x86_decode *decode)
>      decode->flags_mask = decoder->flags_mask;
>      decode->fpop_stack = decoder->pop;
>      decode->frev = decoder->rev;
> -    
> +
>      if (decoder->decode_op1) {
>          decoder->decode_op1(env, decode, &decode->op[0]);
>      }
> @@ -2002,7 +2002,7 @@ static inline void decode_displacement(CPUX86State *env, struct x86_decode *deco
>      int addressing_size = decode->addressing_size;
>      int mod = decode->modrm.mod;
>      int rm = decode->modrm.rm;
> -    
> +
>      decode->displacement_size = 0;
>      switch (addressing_size) {
>      case 2:
> @@ -2115,7 +2115,7 @@ uint32_t decode_instruction(CPUX86State *env, struct x86_decode *decode)
>  void init_decoder()
>  {
>      int i;
> -    
> +
>      for (i = 0; i < ARRAY_SIZE(_decode_tbl1); i++) {
>          memcpy(&_decode_tbl1[i], &invl_inst, sizeof(invl_inst));
>      }
> @@ -2124,7 +2124,7 @@ void init_decoder()
>      }
>      for (i = 0; i < ARRAY_SIZE(_decode_tbl3); i++) {
>          memcpy(&_decode_tbl3[i], &invl_inst_x87, sizeof(invl_inst_x87));
> -    
> +
>      }
>      for (i = 0; i < ARRAY_SIZE(_1op_inst); i++) {
>          _decode_tbl1[_1op_inst[i].opcode] = _1op_inst[i];
> diff --git a/target/i386/hvf/x86_decode.h b/target/i386/hvf/x86_decode.h
> index ef7960113f..c7879c9ea7 100644
> --- a/target/i386/hvf/x86_decode.h
> +++ b/target/i386/hvf/x86_decode.h
> @@ -43,7 +43,7 @@ typedef enum x86_prefix {
>  
>  enum x86_decode_cmd {
>      X86_DECODE_CMD_INVL = 0,
> -    
> +
>      X86_DECODE_CMD_PUSH,
>      X86_DECODE_CMD_PUSH_SEG,
>      X86_DECODE_CMD_POP,
> @@ -174,7 +174,7 @@ enum x86_decode_cmd {
>      X86_DECODE_CMD_CMPXCHG8B,
>      X86_DECODE_CMD_CMPXCHG,
>      X86_DECODE_CMD_POPCNT,
> -    
> +
>      X86_DECODE_CMD_FNINIT,
>      X86_DECODE_CMD_FLD,
>      X86_DECODE_CMD_FLDxx,
> diff --git a/target/i386/hvf/x86_descr.c b/target/i386/hvf/x86_descr.c
> index 8c05c34f33..fd6a63754d 100644
> --- a/target/i386/hvf/x86_descr.c
> +++ b/target/i386/hvf/x86_descr.c
> @@ -112,7 +112,7 @@ void vmx_segment_to_x86_descriptor(struct CPUState *cpu, struct vmx_segment *vmx
>  {
>      x86_set_segment_limit(desc, vmx_desc->limit);
>      x86_set_segment_base(desc, vmx_desc->base);
> -    
> +
>      desc->type = vmx_desc->ar & 15;
>      desc->s = (vmx_desc->ar >> 4) & 1;
>      desc->dpl = (vmx_desc->ar >> 5) & 3;
> diff --git a/target/i386/hvf/x86_emu.c b/target/i386/hvf/x86_emu.c
> index d3e289ed87..edc7f74903 100644
> --- a/target/i386/hvf/x86_emu.c
> +++ b/target/i386/hvf/x86_emu.c
> @@ -131,7 +131,7 @@ void write_reg(CPUX86State *env, int reg, target_ulong val, int size)
>  target_ulong read_val_from_reg(target_ulong reg_ptr, int size)
>  {
>      target_ulong val;
> -    
> +
>      switch (size) {
>      case 1:
>          val = *(uint8_t *)reg_ptr;
> diff --git a/target/i386/hvf/x86_mmu.c b/target/i386/hvf/x86_mmu.c
> index 65d4603dbf..168c47fa34 100644
> --- a/target/i386/hvf/x86_mmu.c
> +++ b/target/i386/hvf/x86_mmu.c
> @@ -143,7 +143,7 @@ static bool test_pt_entry(struct CPUState *cpu, struct gpt_translation *pt,
>      if (pae && pt->exec_access && !pte_exec_access(pte)) {
>          return false;
>      }
> -    
> +
>  exit:
>      /* TODO: check reserved bits */
>      return true;
> @@ -175,7 +175,7 @@ static bool walk_gpt(struct CPUState *cpu, target_ulong addr, int err_code,
>      bool is_large = false;
>      target_ulong cr3 = rvmcs(cpu->hvf_fd, VMCS_GUEST_CR3);
>      uint64_t page_mask = pae ? PAE_PTE_PAGE_MASK : LEGACY_PTE_PAGE_MASK;
> -    
> +
>      memset(pt, 0, sizeof(*pt));
>      top_level = gpt_top_level(cpu, pae);
>  
> @@ -184,7 +184,7 @@ static bool walk_gpt(struct CPUState *cpu, target_ulong addr, int err_code,
>      pt->user_access = (err_code & MMU_PAGE_US);
>      pt->write_access = (err_code & MMU_PAGE_WT);
>      pt->exec_access = (err_code & MMU_PAGE_NX);
> -    
> +
>      for (level = top_level; level > 0; level--) {
>          get_pt_entry(cpu, pt, level, pae);
>  
> diff --git a/target/i386/hvf/x86_task.c b/target/i386/hvf/x86_task.c
> index 6f04478b3a..9748220381 100644
> --- a/target/i386/hvf/x86_task.c
> +++ b/target/i386/hvf/x86_task.c
> @@ -1,7 +1,7 @@
>  // This software is licensed under the terms of the GNU General Public
>  // License version 2, as published by the Free Software Foundation, and
>  // may be copied, distributed, and modified under those terms.
> -// 
> +//
>  // This program is distributed in the hope that it will be useful,
>  // but WITHOUT ANY WARRANTY; without even the implied warranty of
>  // MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> diff --git a/target/i386/hvf/x86hvf.c b/target/i386/hvf/x86hvf.c
> index 5cbcb32ab6..fd33ab4efc 100644
> --- a/target/i386/hvf/x86hvf.c
> +++ b/target/i386/hvf/x86hvf.c
> @@ -88,7 +88,7 @@ void hvf_put_segments(CPUState *cpu_state)
>  {
>      CPUX86State *env = &X86_CPU(cpu_state)->env;
>      struct vmx_segment seg;
> -    
> +
>      wvmcs(cpu_state->hvf_fd, VMCS_GUEST_IDTR_LIMIT, env->idt.limit);
>      wvmcs(cpu_state->hvf_fd, VMCS_GUEST_IDTR_BASE, env->idt.base);
>  
> @@ -105,7 +105,7 @@ void hvf_put_segments(CPUState *cpu_state)
>  
>      hvf_set_segment(cpu_state, &seg, &env->segs[R_CS], false);
>      vmx_write_segment_descriptor(cpu_state, &seg, R_CS);
> -    
> +
>      hvf_set_segment(cpu_state, &seg, &env->segs[R_DS], false);
>      vmx_write_segment_descriptor(cpu_state, &seg, R_DS);
>  
> @@ -126,10 +126,10 @@ void hvf_put_segments(CPUState *cpu_state)
>  
>      hvf_set_segment(cpu_state, &seg, &env->ldt, false);
>      vmx_write_segment_descriptor(cpu_state, &seg, R_LDTR);
> -    
> +
>      hv_vcpu_flush(cpu_state->hvf_fd);
>  }
> -    
> +
>  void hvf_put_msrs(CPUState *cpu_state)
>  {
>      CPUX86State *env = &X86_CPU(cpu_state)->env;
> @@ -178,7 +178,7 @@ void hvf_get_segments(CPUState *cpu_state)
>  
>      vmx_read_segment_descriptor(cpu_state, &seg, R_CS);
>      hvf_get_segment(&env->segs[R_CS], &seg);
> -    
> +
>      vmx_read_segment_descriptor(cpu_state, &seg, R_DS);
>      hvf_get_segment(&env->segs[R_DS], &seg);
>  
> @@ -209,7 +209,7 @@ void hvf_get_segments(CPUState *cpu_state)
>      env->cr[2] = 0;
>      env->cr[3] = rvmcs(cpu_state->hvf_fd, VMCS_GUEST_CR3);
>      env->cr[4] = rvmcs(cpu_state->hvf_fd, VMCS_GUEST_CR4);
> -    
> +
>      env->efer = rvmcs(cpu_state->hvf_fd, VMCS_GUEST_IA32_EFER);
>  }
>  
> @@ -217,10 +217,10 @@ void hvf_get_msrs(CPUState *cpu_state)
>  {
>      CPUX86State *env = &X86_CPU(cpu_state)->env;
>      uint64_t tmp;
> -    
> +
>      hv_vcpu_read_msr(cpu_state->hvf_fd, MSR_IA32_SYSENTER_CS, &tmp);
>      env->sysenter_cs = tmp;
> -    
> +
>      hv_vcpu_read_msr(cpu_state->hvf_fd, MSR_IA32_SYSENTER_ESP, &tmp);
>      env->sysenter_esp = tmp;
>  
> @@ -237,7 +237,7 @@ void hvf_get_msrs(CPUState *cpu_state)
>  #endif
>  
>      hv_vcpu_read_msr(cpu_state->hvf_fd, MSR_IA32_APICBASE, &tmp);
> -    
> +
>      env->tsc = rdtscp() + rvmcs(cpu_state->hvf_fd, VMCS_TSC_OFFSET);
>  }
>  
> @@ -264,15 +264,15 @@ int hvf_put_registers(CPUState *cpu_state)
>      wreg(cpu_state->hvf_fd, HV_X86_R15, env->regs[15]);
>      wreg(cpu_state->hvf_fd, HV_X86_RFLAGS, env->eflags);
>      wreg(cpu_state->hvf_fd, HV_X86_RIP, env->eip);
> -   
> +
>      wreg(cpu_state->hvf_fd, HV_X86_XCR0, env->xcr0);
> -    
> +
>      hvf_put_xsave(cpu_state);
> -    
> +
>      hvf_put_segments(cpu_state);
> -    
> +
>      hvf_put_msrs(cpu_state);
> -    
> +
>      wreg(cpu_state->hvf_fd, HV_X86_DR0, env->dr[0]);
>      wreg(cpu_state->hvf_fd, HV_X86_DR1, env->dr[1]);
>      wreg(cpu_state->hvf_fd, HV_X86_DR2, env->dr[2]);
> @@ -281,7 +281,7 @@ int hvf_put_registers(CPUState *cpu_state)
>      wreg(cpu_state->hvf_fd, HV_X86_DR5, env->dr[5]);
>      wreg(cpu_state->hvf_fd, HV_X86_DR6, env->dr[6]);
>      wreg(cpu_state->hvf_fd, HV_X86_DR7, env->dr[7]);
> -    
> +
>      return 0;
>  }
>  
> @@ -306,16 +306,16 @@ int hvf_get_registers(CPUState *cpu_state)
>      env->regs[13] = rreg(cpu_state->hvf_fd, HV_X86_R13);
>      env->regs[14] = rreg(cpu_state->hvf_fd, HV_X86_R14);
>      env->regs[15] = rreg(cpu_state->hvf_fd, HV_X86_R15);
> -    
> +
>      env->eflags = rreg(cpu_state->hvf_fd, HV_X86_RFLAGS);
>      env->eip = rreg(cpu_state->hvf_fd, HV_X86_RIP);
> -   
> +
>      hvf_get_xsave(cpu_state);
>      env->xcr0 = rreg(cpu_state->hvf_fd, HV_X86_XCR0);
> -    
> +
>      hvf_get_segments(cpu_state);
>      hvf_get_msrs(cpu_state);
> -    
> +
>      env->dr[0] = rreg(cpu_state->hvf_fd, HV_X86_DR0);
>      env->dr[1] = rreg(cpu_state->hvf_fd, HV_X86_DR1);
>      env->dr[2] = rreg(cpu_state->hvf_fd, HV_X86_DR2);
> @@ -324,7 +324,7 @@ int hvf_get_registers(CPUState *cpu_state)
>      env->dr[5] = rreg(cpu_state->hvf_fd, HV_X86_DR5);
>      env->dr[6] = rreg(cpu_state->hvf_fd, HV_X86_DR6);
>      env->dr[7] = rreg(cpu_state->hvf_fd, HV_X86_DR7);
> -    
> +
>      x86_update_hflags(env);
>      return 0;
>  }
> @@ -388,7 +388,7 @@ bool hvf_inject_interrupts(CPUState *cpu_state)
>                  intr_type == VMCS_INTR_T_SWEXCEPTION) {
>                  wvmcs(cpu_state->hvf_fd, VMCS_ENTRY_INST_LENGTH, env->ins_len);
>              }
> -            
> +
>              if (env->has_error_code) {
>                  wvmcs(cpu_state->hvf_fd, VMCS_ENTRY_EXCEPTION_ERROR,
>                        env->error_code);
> diff --git a/target/i386/translate.c b/target/i386/translate.c
> index 5e5dbb41b0..d824cfcfe7 100644
> --- a/target/i386/translate.c
> +++ b/target/i386/translate.c
> @@ -1623,7 +1623,7 @@ static void gen_rot_rm_T1(DisasContext *s, MemOp ot, int op1, int is_right)
>      tcg_temp_free_i32(t0);
>      tcg_temp_free_i32(t1);
>  
> -    /* The CC_OP value is no longer predictable.  */ 
> +    /* The CC_OP value is no longer predictable.  */
>      set_cc_op(s, CC_OP_DYNAMIC);
>  }
>  
> @@ -1716,7 +1716,7 @@ static void gen_rotc_rm_T1(DisasContext *s, MemOp ot, int op1,
>          gen_op_ld_v(s, ot, s->T0, s->A0);
>      else
>          gen_op_mov_v_reg(s, ot, s->T0, op1);
> -    
> +
>      if (is_right) {
>          switch (ot) {
>          case MO_8:
> @@ -5353,7 +5353,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
>                  set_cc_op(s, CC_OP_EFLAGS);
>                  break;
>              }
> -#endif        
> +#endif
>              if (!(s->cpuid_features & CPUID_CX8)) {
>                  goto illegal_op;
>              }
> @@ -6398,7 +6398,7 @@ static target_ulong disas_insn(DisasContext *s, CPUState *cpu)
>      case 0x6d:
>          ot = mo_b_d32(b, dflag);
>          tcg_gen_ext16u_tl(s->T0, cpu_regs[R_EDX]);
> -        gen_check_io(s, ot, pc_start - s->cs_base, 
> +        gen_check_io(s, ot, pc_start - s->cs_base,
>                       SVM_IOIO_TYPE_MASK | svm_is_rep(prefixes) | 4);
>          if (prefixes & (PREFIX_REPZ | PREFIX_REPNZ)) {
>              gen_repz_ins(s, ot, pc_start - s->cs_base, s->pc - s->cs_base);
> diff --git a/target/microblaze/mmu.c b/target/microblaze/mmu.c
> index 6763421ba2..5487696089 100644
> --- a/target/microblaze/mmu.c
> +++ b/target/microblaze/mmu.c
> @@ -53,7 +53,7 @@ static void mmu_flush_idx(CPUMBState *env, unsigned int idx)
>      }
>  }
>  
> -static void mmu_change_pid(CPUMBState *env, unsigned int newpid) 
> +static void mmu_change_pid(CPUMBState *env, unsigned int newpid)
>  {
>      struct microblaze_mmu *mmu = &env->mmu;
>      unsigned int i;
> diff --git a/target/microblaze/translate.c b/target/microblaze/translate.c
> index f6ff2591c3..1925a93eb2 100644
> --- a/target/microblaze/translate.c
> +++ b/target/microblaze/translate.c
> @@ -663,7 +663,7 @@ static void dec_div(DisasContext *dc)
>  {
>      unsigned int u;
>  
> -    u = dc->imm & 2; 
> +    u = dc->imm & 2;
>      LOG_DIS("div\n");
>  
>      if (trap_illegal(dc, !dc->cpu->cfg.use_div)) {
> diff --git a/target/sh4/op_helper.c b/target/sh4/op_helper.c
> index 14c3db0f48..fa4f5aee4f 100644
> --- a/target/sh4/op_helper.c
> +++ b/target/sh4/op_helper.c
> @@ -133,7 +133,7 @@ void helper_discard_movcal_backup(CPUSH4State *env)
>  	env->movcal_backup = current = next;
>  	if (current == NULL)
>  	    env->movcal_backup_tail = &(env->movcal_backup);
> -    } 
> +    }
>  }
>  
>  void helper_ocbi(CPUSH4State *env, uint32_t address)
> @@ -146,7 +146,7 @@ void helper_ocbi(CPUSH4State *env, uint32_t address)
>  	{
>  	    memory_content *next = (*current)->next;
>              cpu_stl_data(env, a, (*current)->value);
> -	    
> +	
>  	    if (next == NULL)
>  	    {
>  		env->movcal_backup_tail = current;
> diff --git a/target/xtensa/core-de212/core-isa.h b/target/xtensa/core-de212/core-isa.h
> index 90ac329230..4528dd7942 100644
> --- a/target/xtensa/core-de212/core-isa.h
> +++ b/target/xtensa/core-de212/core-isa.h
> @@ -1,4 +1,4 @@
> -/* 
> +/*
>   * xtensa/config/core-isa.h -- HAL definitions that are dependent on Xtensa
>   *				processor CORE configuration
>   *
> @@ -605,12 +605,12 @@
>  /*----------------------------------------------------------------------
>  				MPU
>    ----------------------------------------------------------------------*/
> -#define XCHAL_HAVE_MPU			0 
> +#define XCHAL_HAVE_MPU			0
>  #define XCHAL_MPU_ENTRIES		0
>  
>  #define XCHAL_MPU_ALIGN_REQ		1	/* MPU requires alignment of entries to background map */
>  #define XCHAL_MPU_BACKGROUND_ENTRIES	0	/* number of entries in background map */
> - 
> +
>  #define XCHAL_MPU_ALIGN_BITS		0
>  #define XCHAL_MPU_ALIGN			0
>  
> diff --git a/target/xtensa/core-sample_controller/core-isa.h b/target/xtensa/core-sample_controller/core-isa.h
> index d53dca8665..de5a5f3ba2 100644
> --- a/target/xtensa/core-sample_controller/core-isa.h
> +++ b/target/xtensa/core-sample_controller/core-isa.h
> @@ -1,4 +1,4 @@
> -/* 
> +/*
>   * xtensa/config/core-isa.h -- HAL definitions that are dependent on Xtensa
>   *				processor CORE configuration
>   *
> @@ -626,13 +626,13 @@
>  /*----------------------------------------------------------------------
>  				MPU
>    ----------------------------------------------------------------------*/
> -#define XCHAL_HAVE_MPU			0 
> +#define XCHAL_HAVE_MPU			0
>  #define XCHAL_MPU_ENTRIES		0
>  
>  #define XCHAL_MPU_ALIGN_REQ		1	/* MPU requires alignment of entries to background map */
>  #define XCHAL_MPU_BACKGROUND_ENTRIES	0	/* number of entries in bg map*/
>  #define XCHAL_MPU_BG_CACHEADRDIS	0	/* default CACHEADRDIS for bg */
> - 
> +
>  #define XCHAL_MPU_ALIGN_BITS		0
>  #define XCHAL_MPU_ALIGN			0
>  
> diff --git a/target/xtensa/core-test_kc705_be/core-isa.h b/target/xtensa/core-test_kc705_be/core-isa.h
> index 408fed871d..382e3f187d 100644
> --- a/target/xtensa/core-test_kc705_be/core-isa.h
> +++ b/target/xtensa/core-test_kc705_be/core-isa.h
> @@ -1,4 +1,4 @@
> -/* 
> +/*
>   * xtensa/config/core-isa.h -- HAL definitions that are dependent on Xtensa
>   *				processor CORE configuration
>   *
> diff --git a/tcg/sparc/tcg-target.inc.c b/tcg/sparc/tcg-target.inc.c
> index 65fddb310d..d856000c16 100644
> --- a/tcg/sparc/tcg-target.inc.c
> +++ b/tcg/sparc/tcg-target.inc.c
> @@ -988,7 +988,7 @@ static void build_trampolines(TCGContext *s)
>              /* Skip the oi argument.  */
>              ra += 1;
>          }
> -                
> +
>          /* Set the retaddr operand.  */
>          if (ra >= TCG_REG_O6) {
>              tcg_out_st(s, TCG_TYPE_PTR, TCG_REG_O7, TCG_REG_CALL_STACK,
> diff --git a/tcg/tcg.c b/tcg/tcg.c
> index 1362bc6101..45d15fe837 100644
> --- a/tcg/tcg.c
> +++ b/tcg/tcg.c
> @@ -872,7 +872,7 @@ void *tcg_malloc_internal(TCGContext *s, int size)
>  {
>      TCGPool *p;
>      int pool_size;
> -    
> +
>      if (size > TCG_POOL_CHUNK_SIZE) {
>          /* big malloc: insert a new pool (XXX: could optimize) */
>          p = g_malloc(sizeof(TCGPool) + size);
> @@ -893,7 +893,7 @@ void *tcg_malloc_internal(TCGContext *s, int size)
>                  p = g_malloc(sizeof(TCGPool) + pool_size);
>                  p->size = pool_size;
>                  p->next = NULL;
> -                if (s->pool_current) 
> +                if (s->pool_current)
>                      s->pool_current->next = p;
>                  else
>                      s->pool_first = p;
> @@ -3093,8 +3093,8 @@ static void dump_regs(TCGContext *s)
>  
>      for(i = 0; i < TCG_TARGET_NB_REGS; i++) {
>          if (s->reg_to_temp[i] != NULL) {
> -            printf("%s: %s\n", 
> -                   tcg_target_reg_names[i], 
> +            printf("%s: %s\n",
> +                   tcg_target_reg_names[i],
>                     tcg_get_arg_str_ptr(s, buf, sizeof(buf), s->reg_to_temp[i]));
>          }
>      }
> @@ -3111,7 +3111,7 @@ static void check_regs(TCGContext *s)
>          ts = s->reg_to_temp[reg];
>          if (ts != NULL) {
>              if (ts->val_type != TEMP_VAL_REG || ts->reg != reg) {
> -                printf("Inconsistency for register %s:\n", 
> +                printf("Inconsistency for register %s:\n",
>                         tcg_target_reg_names[reg]);
>                  goto fail;
>              }
> @@ -3648,14 +3648,14 @@ static void tcg_reg_alloc_op(TCGContext *s, const TCGOp *op)
>      nb_iargs = def->nb_iargs;
>  
>      /* copy constants */
> -    memcpy(new_args + nb_oargs + nb_iargs, 
> +    memcpy(new_args + nb_oargs + nb_iargs,
>             op->args + nb_oargs + nb_iargs,
>             sizeof(TCGArg) * def->nb_cargs);
>  
>      i_allocated_regs = s->reserved_regs;
>      o_allocated_regs = s->reserved_regs;
>  
> -    /* satisfy input constraints */ 
> +    /* satisfy input constraints */
>      for (k = 0; k < nb_iargs; k++) {
>          TCGRegSet i_preferred_regs, o_preferred_regs;
>  
> @@ -3713,7 +3713,7 @@ static void tcg_reg_alloc_op(TCGContext *s, const TCGOp *op)
>              /* nothing to do : the constraint is satisfied */
>          } else {
>          allocate_in_reg:
> -            /* allocate a new register matching the constraint 
> +            /* allocate a new register matching the constraint
>                 and move the temporary register into it */
>              temp_load(s, ts, tcg_target_available_regs[ts->type],
>                        i_allocated_regs, 0);
> @@ -3733,7 +3733,7 @@ static void tcg_reg_alloc_op(TCGContext *s, const TCGOp *op)
>          const_args[i] = 0;
>          tcg_regset_set_reg(i_allocated_regs, reg);
>      }
> -    
> +
>      /* mark dead temporaries and free the associated registers */
>      for (i = nb_oargs; i < nb_oargs + nb_iargs; i++) {
>          if (IS_DEAD_ARG(i)) {
> @@ -3745,7 +3745,7 @@ static void tcg_reg_alloc_op(TCGContext *s, const TCGOp *op)
>          tcg_reg_alloc_bb_end(s, i_allocated_regs);
>      } else {
>          if (def->flags & TCG_OPF_CALL_CLOBBER) {
> -            /* XXX: permit generic clobber register list ? */ 
> +            /* XXX: permit generic clobber register list ? */
>              for (i = 0; i < TCG_TARGET_NB_REGS; i++) {
>                  if (tcg_regset_test_reg(tcg_target_call_clobber_regs, i)) {
>                      tcg_reg_free(s, i, i_allocated_regs);
> @@ -3757,7 +3757,7 @@ static void tcg_reg_alloc_op(TCGContext *s, const TCGOp *op)
>                 an exception. */
>              sync_globals(s, i_allocated_regs);
>          }
> -        
> +
>          /* satisfy the output constraints */
>          for(k = 0; k < nb_oargs; k++) {
>              i = def->sorted_args[k];
> @@ -3849,7 +3849,7 @@ static void tcg_reg_alloc_call(TCGContext *s, TCGOp *op)
>  
>      /* assign stack slots first */
>      call_stack_size = (nb_iargs - nb_regs) * sizeof(tcg_target_long);
> -    call_stack_size = (call_stack_size + TCG_TARGET_STACK_ALIGN - 1) & 
> +    call_stack_size = (call_stack_size + TCG_TARGET_STACK_ALIGN - 1) &
>          ~(TCG_TARGET_STACK_ALIGN - 1);
>      allocate_args = (call_stack_size > TCG_STATIC_CALL_ARGS_SIZE);
>      if (allocate_args) {
> @@ -3874,7 +3874,7 @@ static void tcg_reg_alloc_call(TCGContext *s, TCGOp *op)
>          stack_offset += sizeof(tcg_target_long);
>  #endif
>      }
> -    
> +
>      /* assign input registers */
>      allocated_regs = s->reserved_regs;
>      for (i = 0; i < nb_regs; i++) {
> @@ -3907,14 +3907,14 @@ static void tcg_reg_alloc_call(TCGContext *s, TCGOp *op)
>              tcg_regset_set_reg(allocated_regs, reg);
>          }
>      }
> -    
> +
>      /* mark dead temporaries and free the associated registers */
>      for (i = nb_oargs; i < nb_iargs + nb_oargs; i++) {
>          if (IS_DEAD_ARG(i)) {
>              temp_dead(s, arg_temp(op->args[i]));
>          }
>      }
> -    
> +
>      /* clobber call registers */
>      for (i = 0; i < TCG_TARGET_NB_REGS; i++) {
>          if (tcg_regset_test_reg(tcg_target_call_clobber_regs, i)) {
> @@ -4317,7 +4317,7 @@ void tcg_dump_info(void)
>                  (double)s->code_out_len / tb_div_count);
>      qemu_printf("avg search data/TB  %0.1f\n",
>                  (double)s->search_out_len / tb_div_count);
> -    
> +
>      qemu_printf("cycles/op           %0.1f\n",
>                  s->op_count ? (double)tot / s->op_count : 0);
>      qemu_printf("cycles/in byte      %0.1f\n",
> diff --git a/tests/tcg/multiarch/test-mmap.c b/tests/tcg/multiarch/test-mmap.c
> index 11d0e777b1..0009e62855 100644
> --- a/tests/tcg/multiarch/test-mmap.c
> +++ b/tests/tcg/multiarch/test-mmap.c
> @@ -17,7 +17,7 @@
>   * but WITHOUT ANY WARRANTY; without even the implied warranty of
>   * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
>   * GNU General Public License for more details.
> - * 
> + *
>   * You should have received a copy of the GNU General Public License
>   * along with this program; if not, see <http://www.gnu.org/licenses/>.
>   */
> @@ -63,15 +63,15 @@ void check_aligned_anonymous_unfixed_mmaps(void)
>  		size_t len;
>  
>  		len = pagesize + (pagesize * i & 7);
> -		p1 = mmap(NULL, len, PROT_READ, 
> +		p1 = mmap(NULL, len, PROT_READ,
>  			  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
> -		p2 = mmap(NULL, len, PROT_READ, 
> +		p2 = mmap(NULL, len, PROT_READ,
>  			  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
> -		p3 = mmap(NULL, len, PROT_READ, 
> +		p3 = mmap(NULL, len, PROT_READ,
>  			  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
> -		p4 = mmap(NULL, len, PROT_READ, 
> +		p4 = mmap(NULL, len, PROT_READ,
>  			  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
> -		p5 = mmap(NULL, len, PROT_READ, 
> +		p5 = mmap(NULL, len, PROT_READ,
>  			  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
>  
>  		/* Make sure we get pages aligned with the pagesize. The
> @@ -118,7 +118,7 @@ void check_large_anonymous_unfixed_mmap(void)
>  	fprintf(stdout, "%s", __func__);
>  
>  	len = 0x02000000;
> -	p1 = mmap(NULL, len, PROT_READ, 
> +	p1 = mmap(NULL, len, PROT_READ,
>  		  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
>  
>  	/* Make sure we get pages aligned with the pagesize. The
> @@ -145,14 +145,14 @@ void check_aligned_anonymous_unfixed_colliding_mmaps(void)
>  	for (i = 0; i < 0x2fff; i++)
>  	{
>  		int nlen;
> -		p1 = mmap(NULL, pagesize, PROT_READ, 
> +		p1 = mmap(NULL, pagesize, PROT_READ,
>  			  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
>  		fail_unless (p1 != MAP_FAILED);
>  		p = (uintptr_t) p1;
>  		fail_unless ((p & pagemask) == 0);
>  		memcpy (dummybuf, p1, pagesize);
>  
> -		p2 = mmap(NULL, pagesize, PROT_READ, 
> +		p2 = mmap(NULL, pagesize, PROT_READ,
>  			  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
>  		fail_unless (p2 != MAP_FAILED);
>  		p = (uintptr_t) p2;
> @@ -162,12 +162,12 @@ void check_aligned_anonymous_unfixed_colliding_mmaps(void)
>  
>  		munmap (p1, pagesize);
>  		nlen = pagesize * 8;
> -		p3 = mmap(NULL, nlen, PROT_READ, 
> +		p3 = mmap(NULL, nlen, PROT_READ,
>  			  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
>  		fail_unless (p3 != MAP_FAILED);
>  
>  		/* Check if the mmaped areas collide.  */
> -		if (p3 < p2 
> +		if (p3 < p2
>  		    && (p3 + nlen) > p2)
>  			fail_unless (0);
>  
> @@ -191,7 +191,7 @@ void check_aligned_anonymous_fixed_mmaps(void)
>  	int i;
>  
>  	/* Find a suitable address to start with.  */
> -	addr = mmap(NULL, pagesize * 40, PROT_READ | PROT_WRITE, 
> +	addr = mmap(NULL, pagesize * 40, PROT_READ | PROT_WRITE,
>  		    MAP_PRIVATE | MAP_ANONYMOUS,
>  		    -1, 0);
>  	fprintf(stdout, "%s addr=%p", __func__, addr);
> @@ -200,10 +200,10 @@ void check_aligned_anonymous_fixed_mmaps(void)
>  	for (i = 0; i < 40; i++)
>  	{
>  		/* Create submaps within our unfixed map.  */
> -		p1 = mmap(addr, pagesize, PROT_READ, 
> +		p1 = mmap(addr, pagesize, PROT_READ,
>  			  MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED,
>  			  -1, 0);
> -		/* Make sure we get pages aligned with the pagesize. 
> +		/* Make sure we get pages aligned with the pagesize.
>  		   The target expects this.  */
>  		p = (uintptr_t) p1;
>  		fail_unless (p1 == addr);
> @@ -231,10 +231,10 @@ void check_aligned_anonymous_fixed_mmaps_collide_with_host(void)
>  	for (i = 0; i < 20; i++)
>  	{
>  		/* Create submaps within our unfixed map.  */
> -		p1 = mmap(addr, pagesize, PROT_READ | PROT_WRITE, 
> +		p1 = mmap(addr, pagesize, PROT_READ | PROT_WRITE,
>  			  MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED,
>  			  -1, 0);
> -		/* Make sure we get pages aligned with the pagesize. 
> +		/* Make sure we get pages aligned with the pagesize.
>  		   The target expects this.  */
>  		p = (uintptr_t) p1;
>  		fail_unless (p1 == addr);
> @@ -258,14 +258,14 @@ void check_file_unfixed_mmaps(void)
>  		size_t len;
>  
>  		len = pagesize;
> -		p1 = mmap(NULL, len, PROT_READ, 
> -			  MAP_PRIVATE, 
> +		p1 = mmap(NULL, len, PROT_READ,
> +			  MAP_PRIVATE,
>  			  test_fd, 0);
> -		p2 = mmap(NULL, len, PROT_READ, 
> -			  MAP_PRIVATE, 
> +		p2 = mmap(NULL, len, PROT_READ,
> +			  MAP_PRIVATE,
>  			  test_fd, pagesize);
> -		p3 = mmap(NULL, len, PROT_READ, 
> -			  MAP_PRIVATE, 
> +		p3 = mmap(NULL, len, PROT_READ,
> +			  MAP_PRIVATE,
>  			  test_fd, pagesize * 2);
>  
>  		fail_unless (p1 != MAP_FAILED);
> @@ -307,9 +307,9 @@ void check_file_unfixed_eof_mmaps(void)
>  	fprintf(stdout, "%s", __func__);
>  	for (i = 0; i < 0x10; i++)
>  	{
> -		p1 = mmap(NULL, pagesize, PROT_READ, 
> -			  MAP_PRIVATE, 
> -			  test_fd, 
> +		p1 = mmap(NULL, pagesize, PROT_READ,
> +			  MAP_PRIVATE,
> +			  test_fd,
>  			  (test_fsize - sizeof *p1) & ~pagemask);
>  
>  		fail_unless (p1 != MAP_FAILED);
> @@ -339,7 +339,7 @@ void check_file_fixed_eof_mmaps(void)
>  	int i;
>  
>  	/* Find a suitable address to start with.  */
> -	addr = mmap(NULL, pagesize * 44, PROT_READ, 
> +	addr = mmap(NULL, pagesize * 44, PROT_READ,
>  		    MAP_PRIVATE | MAP_ANONYMOUS,
>  		    -1, 0);
>  
> @@ -349,9 +349,9 @@ void check_file_fixed_eof_mmaps(void)
>  	for (i = 0; i < 0x10; i++)
>  	{
>  		/* Create submaps within our unfixed map.  */
> -		p1 = mmap(addr, pagesize, PROT_READ, 
> -			  MAP_PRIVATE | MAP_FIXED, 
> -			  test_fd, 
> +		p1 = mmap(addr, pagesize, PROT_READ,
> +			  MAP_PRIVATE | MAP_FIXED,
> +			  test_fd,
>  			  (test_fsize - sizeof *p1) & ~pagemask);
>  
>  		fail_unless (p1 != MAP_FAILED);
> @@ -381,7 +381,7 @@ void check_file_fixed_mmaps(void)
>  	int i;
>  
>  	/* Find a suitable address to start with.  */
> -	addr = mmap(NULL, pagesize * 40 * 4, PROT_READ, 
> +	addr = mmap(NULL, pagesize * 40 * 4, PROT_READ,
>  		    MAP_PRIVATE | MAP_ANONYMOUS,
>  		    -1, 0);
>  	fprintf(stdout, "%s addr=%p", __func__, (void *)addr);
> @@ -389,20 +389,20 @@ void check_file_fixed_mmaps(void)
>  
>  	for (i = 0; i < 40; i++)
>  	{
> -		p1 = mmap(addr, pagesize, PROT_READ, 
> +		p1 = mmap(addr, pagesize, PROT_READ,
>  			  MAP_PRIVATE | MAP_FIXED,
>  			  test_fd, 0);
> -		p2 = mmap(addr + pagesize, pagesize, PROT_READ, 
> +		p2 = mmap(addr + pagesize, pagesize, PROT_READ,
>  			  MAP_PRIVATE | MAP_FIXED,
>  			  test_fd, pagesize);
> -		p3 = mmap(addr + pagesize * 2, pagesize, PROT_READ, 
> +		p3 = mmap(addr + pagesize * 2, pagesize, PROT_READ,
>  			  MAP_PRIVATE | MAP_FIXED,
>  			  test_fd, pagesize * 2);
> -		p4 = mmap(addr + pagesize * 3, pagesize, PROT_READ, 
> +		p4 = mmap(addr + pagesize * 3, pagesize, PROT_READ,
>  			  MAP_PRIVATE | MAP_FIXED,
>  			  test_fd, pagesize * 3);
>  
> -		/* Make sure we get pages aligned with the pagesize. 
> +		/* Make sure we get pages aligned with the pagesize.
>  		   The target expects this.  */
>  		fail_unless (p1 == (void *)addr);
>  		fail_unless (p2 == (void *)addr + pagesize);
> @@ -479,7 +479,7 @@ int main(int argc, char **argv)
>          checked_write(test_fd, &i, sizeof i);
>      }
>  
> -	/* Append a few extra writes to make the file end at non 
> +	/* Append a few extra writes to make the file end at non
>  	   page boundary.  */
>      checked_write(test_fd, &i, sizeof i); i++;
>      checked_write(test_fd, &i, sizeof i); i++;
> diff --git a/ui/curses.c b/ui/curses.c
> index a59b23a9cf..ba362eb927 100644
> --- a/ui/curses.c
> +++ b/ui/curses.c
> @@ -1,8 +1,8 @@
>  /*
>   * QEMU curses/ncurses display driver
> - * 
> + *
>   * Copyright (c) 2005 Andrzej Zaborowski  <balrog@zabor.org>
> - * 
> + *
>   * Permission is hereby granted, free of charge, to any person obtaining a copy
>   * of this software and associated documentation files (the "Software"), to deal
>   * in the Software without restriction, including without limitation the rights
> diff --git a/ui/curses_keys.h b/ui/curses_keys.h
> index 71e04acdc7..8b62258756 100644
> --- a/ui/curses_keys.h
> +++ b/ui/curses_keys.h
> @@ -1,8 +1,8 @@
>  /*
>   * Keycode and keysyms conversion tables for curses
> - * 
> + *
>   * Copyright (c) 2005 Andrzej Zaborowski  <balrog@zabor.org>
> - * 
> + *
>   * Permission is hereby granted, free of charge, to any person obtaining a copy
>   * of this software and associated documentation files (the "Software"), to deal
>   * in the Software without restriction, including without limitation the rights
> diff --git a/util/cutils.c b/util/cutils.c
> index 36ce712271..ce4f756bd9 100644
> --- a/util/cutils.c
> +++ b/util/cutils.c
> @@ -142,7 +142,7 @@ time_t mktimegm(struct tm *tm)
>          m += 12;
>          y--;
>      }
> -    t = 86400ULL * (d + (153 * m - 457) / 5 + 365 * y + y / 4 - y / 100 + 
> +    t = 86400ULL * (d + (153 * m - 457) / 5 + 365 * y + y / 4 - y / 100 +
>                   y / 400 - 719469);
>      t += 3600 * tm->tm_hour + 60 * tm->tm_min + tm->tm_sec;
>      return t;
> 

For the linux-user part:

Acked-by: Laurent Vivier <laurent@vivier.eu>


From xen-devel-bounces@lists.xenproject.org Tue Jul 07 10:19:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jul 2020 10:19:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jskgb-0002qB-VE; Tue, 07 Jul 2020 10:18:49 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+gHK=AS=in.bosch.com=manikandan.chockalingam@srs-us1.protection.inumbo.net>)
 id 1jskga-0002q6-So
 for xen-devel@lists.xen.org; Tue, 07 Jul 2020 10:18:48 +0000
X-Inumbo-ID: 3da8f758-c03b-11ea-8d45-12813bfff9fa
Received: from de-out1.bosch-org.com (unknown [139.15.230.186])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 3da8f758-c03b-11ea-8d45-12813bfff9fa;
 Tue, 07 Jul 2020 10:18:46 +0000 (UTC)
Received: from si0vm1948.rbesz01.com
 (lb41g3-ha-dmz-psi-sl1-mailout.fe.ssn.bosch.com [139.15.230.188])
 by fe0vms0187.rbdmz01.com (Postfix) with ESMTPS id 4B1JLY2tqfz1XLDQx;
 Tue,  7 Jul 2020 12:18:45 +0200 (CEST)
Received: from si0vm2083.rbesz01.com (unknown [10.58.172.176])
 by si0vm1948.rbesz01.com (Postfix) with ESMTPS id 4B1JLY2WT5z1hw;
 Tue,  7 Jul 2020 12:18:45 +0200 (CEST)
X-AuditID: 0a3aad17-4b9ff7000000186c-e8-5f044c052dbe
Received: from si0vm1949.rbesz01.com ( [10.58.173.29])
 (using TLS with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (Client did not present a certificate)
 by si0vm2083.rbesz01.com (SMG Outbound) with SMTP id 83.F6.06252.50C440F5;
 Tue,  7 Jul 2020 12:18:45 +0200 (CEST)
Received: from FE-MBX2057.de.bosch.com (fe-mbx2057.de.bosch.com [10.3.231.162])
 by si0vm1949.rbesz01.com (Postfix) with ESMTPS id 4B1JLY1p7zz6CjZP2;
 Tue,  7 Jul 2020 12:18:45 +0200 (CEST)
Received: from SGPMBX2024.APAC.bosch.com (10.187.83.44) by
 FE-MBX2057.de.bosch.com (10.3.231.162) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.1.1979.3; Tue, 7 Jul 2020 12:18:44 +0200
Received: from SGPMBX2022.APAC.bosch.com (10.187.83.37) by
 SGPMBX2024.APAC.bosch.com (10.187.83.44) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.1.1979.3; Tue, 7 Jul 2020 18:18:42 +0800
Received: from SGPMBX2022.APAC.bosch.com ([fe80::2d4d:b176:b210:896]) by
 SGPMBX2022.APAC.bosch.com ([fe80::2d4d:b176:b210:896%6]) with mapi id
 15.01.1979.003; Tue, 7 Jul 2020 18:18:42 +0800
From: "Manikandan Chockalingam (RBEI/ECF3)"
 <Manikandan.Chockalingam@in.bosch.com>
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
Subject: RE: [BUG] Xen build for RCAR failing
Thread-Topic: [BUG] Xen build for RCAR failing
Thread-Index: AdZUKc5JeR7gPpESR52uLkZK1kYwOwAEsnEAAAD8OlAAAEBtgAABZtcg
Date: Tue, 7 Jul 2020 10:18:42 +0000
Message-ID: <139024a891324455a13a3d468908798d@in.bosch.com>
References: <1b60ed1cd7834ed5957a2b4870602073@in.bosch.com>
 <1D0E7281-95D7-482E-BF6D-EE5B1FEE4876@arm.com>
 <ab84437081a346d6bf0f73581382c74e@in.bosch.com>
 <D84A5DA7-683C-480B-8837-C51D560FC2E1@arm.com>
In-Reply-To: <D84A5DA7-683C-480B-8837-C51D560FC2E1@arm.com>
Accept-Language: en-US, en-SG
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.187.56.214]
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: nd <nd@arm.com>, "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hello Bertrand,

Thank you. I will try a fresh build with the dunfell branch. All layers in the sense [poky, meta-openembedded, meta-linaro, meta-rensas, meta-virtualisation, meta-selinux, xen-troops], right?

Also, can I use the same proprietary drivers which I used for yocto2.19 [R-Car_Gen3_Series_Evaluation_Software_Package_for_Linux-20170427.zip] for this branch?

Mit freundlichen Grüßen / Best regards

 Chockalingam Manikandan

ES-CM Core fn,ADIT (RBEI/ECF3)
Robert Bosch GmbH | Postfach 10 60 50 | 70049 Stuttgart | GERMANY | www.bosch.com
Tel. +91 80 6136-4452 | Fax +91 80 6617-0711 | Manikandan.Chockalingam@in.bosch.com

Registered Office: Stuttgart, Registration Court: Amtsgericht Stuttgart, HRB 14000;
Chairman of the Supervisory Board: Franz Fehrenbach; Managing Directors: Dr. Volkmar Denner,
Prof. Dr. Stefan Asenkerschbaumer, Dr. Michael Bolle, Dr. Christian Fischer, Dr. Stefan Hartung,
Dr. Markus Heyn, Harald Kröger, Christoph Kübel, Rolf Najork, Uwe Raschke, Peter Tyroller


-----Original Message-----
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
Sent: Tuesday, July 7, 2020 3:03 PM
To: Manikandan Chockalingam (RBEI/ECF3) <Manikandan.Chockalingam@in.bosch.com>
Cc: xen-devel@lists.xen.org; nd <nd@arm.com>
Subject: Re: [BUG] Xen build for RCAR failing

Hi,

> On 7 Jul 2020, at 10:28, Manikandan Chockalingam (RBEI/ECF3) <Manikandan.Chockalingam@in.bosch.com> wrote:
> 
> Hello Bertrand,
> 
> Thanks for your quick response. I tried switching to the dunfell branch and the build gives a parse error as below.
>=20
> bitbake core-image-weston
> ERROR: ParseError in /home/manikandan/yocto_2.19/build/meta-virtualization/classes/: not a BitBake file
>=20
> Is there any additional changes required here?

I do not have this on my side when building using dunfell.
You might need to restart from a fresh build and checkout (you need the dunfell branch on all layers).

Bertrand



From xen-devel-bounces@lists.xenproject.org Tue Jul 07 10:26:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jul 2020 10:26:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jsko1-0003h2-Pf; Tue, 07 Jul 2020 10:26:29 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1dji=AS=redhat.com=berrange@srs-us1.protection.inumbo.net>)
 id 1jsknz-0003gx-4P
 for xen-devel@lists.xenproject.org; Tue, 07 Jul 2020 10:26:27 +0000
X-Inumbo-ID: 50244436-c03c-11ea-8d4a-12813bfff9fa
Received: from us-smtp-1.mimecast.com (unknown [207.211.31.120])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 50244436-c03c-11ea-8d4a-12813bfff9fa;
 Tue, 07 Jul 2020 10:26:26 +0000 (UTC)
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-147-OgGMpmGzP4aC_p9nKwLp_w-1; Tue, 07 Jul 2020 06:25:55 -0400
X-MC-Unique: OgGMpmGzP4aC_p9nKwLp_w-1
Received: from smtp.corp.redhat.com (int-mx04.intmail.prod.int.phx2.redhat.com
 [10.5.11.14])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 742B2879511;
 Tue,  7 Jul 2020 10:25:50 +0000 (UTC)
Received: from redhat.com (unknown [10.36.110.57])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id DEF315D9F3;
 Tue,  7 Jul 2020 10:25:13 +0000 (UTC)
Date: Tue, 7 Jul 2020 11:25:10 +0100
From: Daniel =?utf-8?B?UC4gQmVycmFuZ8Op?= <berrange@redhat.com>
To: Christophe de Dinechin <dinechin@redhat.com>
Subject: Re: [PATCH] trivial: Remove trailing whitespaces
Message-ID: <20200707102510.GF2649462@redhat.com>
References: <20200706162300.1084753-1-dinechin@redhat.com>
MIME-Version: 1.0
In-Reply-To: <20200706162300.1084753-1-dinechin@redhat.com>
User-Agent: Mutt/1.14.3 (2020-06-14)
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.14
Authentication-Results: relay.mimecast.com;
 auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=berrange@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Reply-To: Daniel =?utf-8?B?UC4gQmVycmFuZ8Op?= <berrange@redhat.com>
Cc: Peter Maydell <peter.maydell@linaro.org>,
 Dmitry Fleytman <dmitry.fleytman@gmail.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>, qemu-devel@nongnu.org,
 Michael Roth <mdroth@linux.vnet.ibm.com>, Max Filippov <jcmvbkbc@gmail.com>,
 Gerd Hoffmann <kraxel@redhat.com>,
 "Edgar E. Iglesias" <edgar.iglesias@gmail.com>, Max Reitz <mreitz@redhat.com>,
 Marek Vasut <marex@denx.de>, Stefano Stabellini <sstabellini@kernel.org>,
 qemu-block@nongnu.org, qemu-trivial@nongnu.org, Paul Durrant <paul@xen.org>,
 Magnus Damm <magnus.damm@gmail.com>, Markus Armbruster <armbru@redhat.com>,
 =?utf-8?B?SGVydsOp?= Poussineau <hpoussin@reactos.org>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 Anthony Perard <anthony.perard@citrix.com>,
 =?utf-8?Q?Marc-Andr=C3=A9?= Lureau <marcandre.lureau@redhat.com>,
 Richard Henderson <rth@twiddle.net>, Andrzej Zaborowski <balrogg@gmail.com>,
 Artyom Tarasenko <atar4qemu@gmail.com>,
 Alistair Francis <alistair@alistair23.me>,
 Eduardo Habkost <ehabkost@redhat.com>, Michael Tokarev <mjt@tls.msk.ru>,
 Riku Voipio <riku.voipio@iki.fi>, Peter Lieven <pl@kamp.de>,
 "Dr. David Alan Gilbert" <dgilbert@redhat.com>,
 Roman Bolshakov <r.bolshakov@yadro.com>, qemu-arm@nongnu.org,
 Peter Chubb <peter.chubb@nicta.com.au>,
 Ronnie Sahlberg <ronniesahlberg@gmail.com>, xen-devel@lists.xenproject.org,
 Alex =?utf-8?Q?Benn=C3=A9e?= <alex.bennee@linaro.org>,
 David Gibson <david@gibson.dropbear.id.au>, Kevin Wolf <kwolf@redhat.com>,
 Yoshinori Sato <ysato@users.sourceforge.jp>,
 Philippe =?utf-8?Q?Mathieu-Daud=C3=A9?= <philmd@redhat.com>,
 Chris Wulff <crwulff@gmail.com>, Laurent Vivier <laurent@vivier.eu>,
 Jean-Christophe Dubois <jcd@tribudubois.net>, qemu-ppc@nongnu.org,
 Paolo Bonzini <pbonzini@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Mon, Jul 06, 2020 at 06:23:00PM +0200, Christophe de Dinechin wrote:
> A fair amount of unnecessary trailing whitespace has accumulated over
> time in the source code. It causes stray changes in patches if you use
> tools that automatically remove it.
> 
> Tested by doing a `git diff -w` after the change.
> 
> This could probably be turned into a pre-commit hook.

scripts/checkpatch.pl ought to be made to check it.
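As a sketch of such a check (the helper name and pattern below are
illustrative, not from checkpatch; inside a repository, `git diff --check`
already reports this class of error):

```shell
# Hypothetical pre-commit helper: report (and fail on) trailing blanks.
# check_trailing_ws FILE -> exit 0 if FILE is clean, non-zero otherwise,
# printing the offending line numbers via grep -n.
check_trailing_ws() {
    ! grep -nE '[[:blank:]]+$' "$1"
}
```

A pre-commit hook could run this over the staged files, or simply invoke
`git diff --cached --check` and abort the commit on a non-zero exit.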

> 
> Signed-off-by: Christophe de Dinechin <dinechin@redhat.com>
> ---
>  block/iscsi.c                                 |   2 +-
>  disas/cris.c                                  |   2 +-
>  disas/microblaze.c                            |  80 +++---
>  disas/nios2.c                                 | 256 +++++++++---------
>  hmp-commands.hx                               |   2 +-
>  hw/alpha/typhoon.c                            |   6 +-
>  hw/arm/gumstix.c                              |   6 +-
>  hw/arm/omap1.c                                |   2 +-
>  hw/arm/stellaris.c                            |   2 +-
>  hw/char/etraxfs_ser.c                         |   2 +-
>  hw/core/ptimer.c                              |   2 +-
>  hw/cris/axis_dev88.c                          |   2 +-
>  hw/cris/boot.c                                |   2 +-
>  hw/display/qxl.c                              |   2 +-
>  hw/dma/etraxfs_dma.c                          |  18 +-
>  hw/dma/i82374.c                               |   2 +-
>  hw/i2c/bitbang_i2c.c                          |   2 +-
>  hw/input/tsc2005.c                            |   2 +-
>  hw/input/tsc210x.c                            |   2 +-
>  hw/intc/etraxfs_pic.c                         |   8 +-
>  hw/intc/sh_intc.c                             |  10 +-
>  hw/intc/xilinx_intc.c                         |   2 +-
>  hw/misc/imx25_ccm.c                           |   6 +-
>  hw/misc/imx31_ccm.c                           |   2 +-
>  hw/net/vmxnet3.h                              |   2 +-
>  hw/net/xilinx_ethlite.c                       |   2 +-
>  hw/pci/pcie.c                                 |   2 +-
>  hw/sd/omap_mmc.c                              |   2 +-
>  hw/sh4/shix.c                                 |   2 +-
>  hw/sparc64/sun4u.c                            |   2 +-
>  hw/timer/etraxfs_timer.c                      |   2 +-
>  hw/timer/xilinx_timer.c                       |   4 +-
>  hw/usb/hcd-musb.c                             |  10 +-
>  hw/usb/hcd-ohci.c                             |   6 +-
>  hw/usb/hcd-uhci.c                             |   2 +-
>  hw/virtio/virtio-pci.c                        |   2 +-
>  include/hw/cris/etraxfs_dma.h                 |   4 +-
>  include/hw/net/lance.h                        |   2 +-
>  include/hw/ppc/spapr.h                        |   2 +-
>  include/hw/xen/interface/io/ring.h            |  34 +--
>  include/qemu/log.h                            |   2 +-
>  include/qom/object.h                          |   4 +-
>  linux-user/cris/cpu_loop.c                    |  16 +-
>  linux-user/microblaze/cpu_loop.c              |  16 +-
>  linux-user/mmap.c                             |   8 +-
>  linux-user/sparc/signal.c                     |   4 +-
>  linux-user/syscall.c                          |  24 +-
>  linux-user/syscall_defs.h                     |   2 +-
>  linux-user/uaccess.c                          |   2 +-
>  os-posix.c                                    |   2 +-
>  qapi/qapi-util.c                              |   2 +-
>  qemu-img.c                                    |   2 +-
>  qemu-options.hx                               |  26 +-
>  qom/object.c                                  |   2 +-
>  target/cris/translate.c                       |  28 +-
>  target/cris/translate_v10.inc.c               |   6 +-
>  target/i386/hvf/hvf.c                         |   4 +-
>  target/i386/hvf/x86.c                         |   4 +-
>  target/i386/hvf/x86_decode.c                  |  20 +-
>  target/i386/hvf/x86_decode.h                  |   4 +-
>  target/i386/hvf/x86_descr.c                   |   2 +-
>  target/i386/hvf/x86_emu.c                     |   2 +-
>  target/i386/hvf/x86_mmu.c                     |   6 +-
>  target/i386/hvf/x86_task.c                    |   2 +-
>  target/i386/hvf/x86hvf.c                      |  42 +--
>  target/i386/translate.c                       |   8 +-
>  target/microblaze/mmu.c                       |   2 +-
>  target/microblaze/translate.c                 |   2 +-
>  target/sh4/op_helper.c                        |   4 +-
>  target/xtensa/core-de212/core-isa.h           |   6 +-
>  .../xtensa/core-sample_controller/core-isa.h  |   6 +-
>  target/xtensa/core-test_kc705_be/core-isa.h   |   2 +-
>  tcg/sparc/tcg-target.inc.c                    |   2 +-
>  tcg/tcg.c                                     |  32 +--
>  tests/tcg/multiarch/test-mmap.c               |  72 ++---
>  ui/curses.c                                   |   4 +-
>  ui/curses_keys.h                              |   4 +-
>  util/cutils.c                                 |   2 +-
>  78 files changed, 440 insertions(+), 440 deletions(-)

The cleanup is a good idea; however, I think it is probably better to
split the patch up, approximately by subsystem. That will make it much
easier to cherry-pick for people doing backports.


Regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|



From xen-devel-bounces@lists.xenproject.org Tue Jul 07 11:18:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jul 2020 11:18:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jslbs-00082r-Bu; Tue, 07 Jul 2020 11:18:00 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Yzy/=AS=cert.pl=michall@srs-us1.protection.inumbo.net>)
 id 1jslbr-00082m-Kq
 for xen-devel@lists.xenproject.org; Tue, 07 Jul 2020 11:17:59 +0000
X-Inumbo-ID: 82e0634e-c043-11ea-bb8b-bc764e2007e4
Received: from bagnar.nask.net.pl (unknown [195.187.242.196])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 82e0634e-c043-11ea-bb8b-bc764e2007e4;
 Tue, 07 Jul 2020 11:17:58 +0000 (UTC)
Received: from bagnar.nask.net.pl (unknown [172.16.9.10])
 by bagnar.nask.net.pl (Postfix) with ESMTP id 6C4ADA2322;
 Tue,  7 Jul 2020 13:17:57 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by bagnar.nask.net.pl (Postfix) with ESMTP id 22AB6A2319;
 Tue,  7 Jul 2020 13:17:56 +0200 (CEST)
Received: from bagnar.nask.net.pl ([127.0.0.1])
 by localhost (bagnar.nask.net.pl [127.0.0.1]) (amavisd-new, port 10032)
 with ESMTP id Xn2Whme0m547; Tue,  7 Jul 2020 13:17:55 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by bagnar.nask.net.pl (Postfix) with ESMTP id 8624AA231D;
 Tue,  7 Jul 2020 13:17:55 +0200 (CEST)
X-Virus-Scanned: amavisd-new at bagnar.nask.net.pl
Received: from bagnar.nask.net.pl ([127.0.0.1])
 by localhost (bagnar.nask.net.pl [127.0.0.1]) (amavisd-new, port 10026)
 with ESMTP id Wm18HX-qFkK6; Tue,  7 Jul 2020 13:17:55 +0200 (CEST)
Received: from belindir.nask.net.pl (belindir-ext.nask.net.pl
 [195.187.242.210])
 by bagnar.nask.net.pl (Postfix) with ESMTP id 58282A2319;
 Tue,  7 Jul 2020 13:17:55 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by belindir.nask.net.pl (Postfix) with ESMTP id 3D75020C25;
 Tue,  7 Jul 2020 13:17:25 +0200 (CEST)
Received: from belindir.nask.net.pl ([127.0.0.1])
 by localhost (belindir.nask.net.pl [127.0.0.1]) (amavisd-new, port 10032)
 with ESMTP id Maul6dzMYoBj; Tue,  7 Jul 2020 13:17:19 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by belindir.nask.net.pl (Postfix) with ESMTP id 8F3E921C93;
 Tue,  7 Jul 2020 13:17:19 +0200 (CEST)
X-Virus-Scanned: amavisd-new at belindir.nask.net.pl
Received: from belindir.nask.net.pl ([127.0.0.1])
 by localhost (belindir.nask.net.pl [127.0.0.1]) (amavisd-new, port 10026)
 with ESMTP id 1MsecBsfTsD1; Tue,  7 Jul 2020 13:17:19 +0200 (CEST)
Received: from belindir.nask.net.pl (belindir.nask.net.pl [172.16.10.10])
 by belindir.nask.net.pl (Postfix) with ESMTP id 6AC9A21B5D;
 Tue,  7 Jul 2020 13:17:19 +0200 (CEST)
Date: Tue, 7 Jul 2020 13:17:19 +0200 (CEST)
From: =?utf-8?Q?Micha=C5=82_Leszczy=C5=84ski?= <michal.leszczynski@cert.pl>
To: Julien Grall <julien@xen.org>
Message-ID: <1580655090.20712847.1594120639229.JavaMail.zimbra@cert.pl>
In-Reply-To: <ab992813-4584-f8e0-b90a-7a587c396bae@xen.org>
References: <cover.1593519420.git.michal.leszczynski@cert.pl>
 <b5335c2e-da13-28de-002b-e93dd68a0a11@suse.com>
 <20200703101120.GZ735@Air-de-Roger>
 <51ecaf40-8fb5-8454-7055-5af33a47152e@xen.org>
 <d9e604e9-acb7-17df-f0d1-7552dab526c7@suse.com>
 <88892784-0ed6-2594-bef8-fd0ae46c2b17@xen.org>
 <a13451d6-d6b5-6d86-aeb0-8985db730866@suse.com>
 <ab992813-4584-f8e0-b90a-7a587c396bae@xen.org>
Subject: Re: [PATCH v4 03/10] tools/libxl: add vmtrace_pt_size parameter
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
X-Originating-IP: [172.16.10.10]
X-Mailer: Zimbra 8.6.0_GA_1194 (ZimbraWebClient - FF78 (Linux)/8.6.0_GA_1194)
Thread-Topic: tools/libxl: add vmtrace_pt_size parameter
Thread-Index: 9asBT9S/k/EEKcYiYjEmMl9lAxGMew==
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 tamas lengyel <tamas.lengyel@intel.com>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, luwei kang <luwei.kang@intel.com>,
 Jan Beulich <jbeulich@suse.com>, Anthony PERARD <anthony.perard@citrix.com>,
 xen-devel@lists.xenproject.org,
 Roger Pau =?utf-8?Q?Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

----- On 7 Jul 2020, at 11:16, Julien Grall julien@xen.org wrote:

> On 07/07/2020 10:10, Jan Beulich wrote:
>> On 07.07.2020 10:44, Julien Grall wrote:
>>> Hi,
>>>
>>> On 06/07/2020 09:46, Jan Beulich wrote:
>>>> On 04.07.2020 19:23, Julien Grall wrote:
>>>>> Hi,
>>>>>
>>>>>> On 03/07/2020 11:11, Roger Pau Monné wrote:
>>>>>> On Fri, Jul 03, 2020 at 11:56:38AM +0200, Jan Beulich wrote:
>>>>>>> On 03.07.2020 11:44, Roger Pau Monné wrote:
>>>>>>>> On Thu, Jul 02, 2020 at 06:23:28PM +0200, Michał Leszczyński wrote:
>>>>>>>>> In previous versions it was "size" but it was requested to change it
>>>>>>>>> to "order" in order to shrink the variable size from uint64_t to
>>>>>>>>> uint8_t, because there is limited space in the xen_domctl_createdomain
>>>>>>>>> structure.
>>>>>>>>
>>>>>>>> It's likely I'm missing something here, but I wasn't aware
>>>>>>>> xen_domctl_createdomain had any constraints regarding its size. It's
>>>>>>>> currently 48 bytes, which seems fairly small.
>>>>>>>
>>>>>>> Additionally I would guess a uint32_t could do here, if the value
>>>>>>> passed was "number of pages" rather than "number of bytes"?
>>>>> Looking at the rest of the code, the toolstack accepts a 64-bit value.
>>>>> So this would lead to truncation of the buffer if it is bigger than
>>>>> 2^44 bytes.
>>>>>
>>>>> I agree such a buffer is unlikely, yet I still think we want to harden
>>>>> the code whenever we can. So the solution is either to check for
>>>>> truncation in libxl or to directly use 64-bit in the domctl.
>>>>>
>>>>> My preference is the latter.
>>>>>
>>>>>>
>>>>>> That could work; not sure if it needs to state, however, that those
>>>>>> will be 4K pages, since Arm can have a different minimum page size IIRC?
>>>>>> (or that's already the assumption for all number of frames fields)
>>>>>> vmtrace_nr_frames seems fine to me.
>>>>>
>>>>> The hypercall interface is using the same page granularity as the
>>>>> hypervisor (i.e. 4KB).
>>>>>
>>>>> While we already support guests using 64KB page granularity, it is
>>>>> impossible to have a 64KB Arm hypervisor in the current state. You are
>>>>> going to either break existing guests (if you switch to 64KB page
>>>>> granularity for the hypercall ABI) or render them insecure (the minimum
>>>>> mapping in the P2M would be 64KB).
>>>>>
>>>>> DOMCTLs are not stable yet, so using a number of pages is OK. However, I
>>>>> would strongly suggest using a number of bytes for any xl/libxl/stable
>>>>> library interfaces, as this avoids confusion and is also more
>>>>> futureproof.
>>>>
>>>> If we can't settle on what "page size" means in the public interface
>>>> (which imo is embarrassing), then how about going with number of kb,
>>>> like other memory libxl controls do? (I guess using Mb, in line with
>>>> other config file controls, may end up being too coarse here.) This
>>>> would likely still allow for a 32-bit field to be wide enough.
>>>
>>> A 32-bit field would definitely not be able to cover a full address
>>> space. So would you mind explaining what upper bound you expect here?
>>
>> Do you foresee a need for buffer sizes of 4Tb and up?
>
> Not that I am aware of... However, I think the question was worth it given
> that "wide enough" can mean anything.
>
> Cheers,
>
> --
> Julien Grall


So would it be OK to use uint32_t everywhere and to store the trace buffer
size as a number of kB? I think this is the most straightforward option.

I would also stick with the name "processor_trace_buf_size"
everywhere, in the hypervisor, the ABI, and the toolstack alike, with
comments noting that the size is in kB.


Best regards,
Michał Leszczyński
CERT Polska


From xen-devel-bounces@lists.xenproject.org Tue Jul 07 11:21:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jul 2020 11:21:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jslfL-0000QA-V7; Tue, 07 Jul 2020 11:21:35 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=kqME=AS=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jslfK-0000Q5-U2
 for xen-devel@lists.xenproject.org; Tue, 07 Jul 2020 11:21:34 +0000
X-Inumbo-ID: 0384d05c-c044-11ea-8d51-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0384d05c-c044-11ea-8d51-12813bfff9fa;
 Tue, 07 Jul 2020 11:21:34 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id C28C5B163;
 Tue,  7 Jul 2020 11:21:33 +0000 (UTC)
Subject: Re: [PATCH v4 03/10] tools/libxl: add vmtrace_pt_size parameter
To: =?UTF-8?Q?Micha=c5=82_Leszczy=c5=84ski?= <michal.leszczynski@cert.pl>
References: <cover.1593519420.git.michal.leszczynski@cert.pl>
 <b5335c2e-da13-28de-002b-e93dd68a0a11@suse.com>
 <20200703101120.GZ735@Air-de-Roger>
 <51ecaf40-8fb5-8454-7055-5af33a47152e@xen.org>
 <d9e604e9-acb7-17df-f0d1-7552dab526c7@suse.com>
 <88892784-0ed6-2594-bef8-fd0ae46c2b17@xen.org>
 <a13451d6-d6b5-6d86-aeb0-8985db730866@suse.com>
 <ab992813-4584-f8e0-b90a-7a587c396bae@xen.org>
 <1580655090.20712847.1594120639229.JavaMail.zimbra@cert.pl>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <3644946e-a3fa-bef0-4c55-a42988b12368@suse.com>
Date: Tue, 7 Jul 2020 13:21:32 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <1580655090.20712847.1594120639229.JavaMail.zimbra@cert.pl>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 tamas lengyel <tamas.lengyel@intel.com>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, luwei kang <luwei.kang@intel.com>,
 Anthony PERARD <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 07.07.2020 13:17, Michał Leszczyński wrote:
> So would it be OK to use uint32_t everywhere and to store the trace buffer
> size as number of kB? I think this is the most straightforward option.
> 
> I would also stick with the name "processor_trace_buf_size"
> everywhere, both in the hypervisor, ABI and the toolstack, with the
> respective comments that the size is in kB.

Perhaps even more clearly "processor_trace_buf_kb" then?

Jan


From xen-devel-bounces@lists.xenproject.org Tue Jul 07 11:31:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jul 2020 11:31:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jsloW-0001Iz-Rq; Tue, 07 Jul 2020 11:31:04 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=aoUK=AS=gmail.com=jrdr.linux@srs-us1.protection.inumbo.net>)
 id 1jsloV-0001Iu-8I
 for xen-devel@lists.xenproject.org; Tue, 07 Jul 2020 11:31:03 +0000
X-Inumbo-ID: 5618d2cc-c045-11ea-b7bb-bc764e2007e4
Received: from mail-lj1-x243.google.com (unknown [2a00:1450:4864:20::243])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5618d2cc-c045-11ea-b7bb-bc764e2007e4;
 Tue, 07 Jul 2020 11:31:02 +0000 (UTC)
Received: by mail-lj1-x243.google.com with SMTP id z24so24574037ljn.8
 for <xen-devel@lists.xenproject.org>; Tue, 07 Jul 2020 04:31:02 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc:content-transfer-encoding;
 bh=oU+zHS1dWLePXpv1roLop07qC5/+PHNWookUwJLe/zU=;
 b=HzqoVfv5RmJSiJec48Z3gyXEpw3X1xzcpa7yGskC4YP4FKDLYWnNwGMKx7bPp5Di7M
 tAPv4yoi7/blcmwpByoACwe0S74SDsfhbuSrSJbAMsKtYruaS6ia/LBd+13co+pkezkk
 7ED8pj13E7YmJr85IrN0MVKjFyHLI0G0TJ73UB9v7WnD2zN0X7ICOOtfjGUQ0J71cnjP
 iYWwsuoH7p9DXqBStWZvc8pwrz62bilvgWSXrlKpRzl7c1aLuLFkvFYxt82/FhaGmybv
 fSUho90IdiLmF9K8hKcl976CmHjAZzritDhMnDBdK/WDiwPviUgXcEseAddmE8iDGBHE
 TFJw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc:content-transfer-encoding;
 bh=oU+zHS1dWLePXpv1roLop07qC5/+PHNWookUwJLe/zU=;
 b=Pqu3HB2Qoo9dw++5QKqwoDGybe+vP8SinWbxEb4qYxmeL9A4es3btmTP5NyCkfDFsG
 SL3edwn6oUxlOR28sn78gij0qWXHW+SWZiYQGR0d8DysBMhO04RRpRonWdBGj7nvIqFI
 ffDrLUYG9XPvUdl/zb53pOlsIe6lsJNexRNk1FZ+e1RjOhWJyhu4KXMhvAV7znmyC2+O
 Rvq4GyR7AuayJyqLgNch2PsBHXVIR8KAnlCMT1fO2u1uoIRzu4dEFzYHOsOldyEvxglG
 3Oe3DNHyl1RvOj1t/eDk5oWyp+tusPAoC5LbWF7BEmNhhYIXi/FI1pb7dzF3N3Ahp6Hb
 aWUQ==
X-Gm-Message-State: AOAM531nh8vW2wR8hVSZ6uxG3Znc9m6EISpwWN+vY8zQ4o9/2FtoEjBE
 0R/x1V/JfnHAJmGEG8xqRFQmciBgEMJandIoUrY=
X-Google-Smtp-Source: ABdhPJwnwnMnJ0iADB+yNPtPOSoRhfTA8saUU2vq4vN7LszeEgrx/ZXEdYoZDO9/LdNXumlwQmvPr/VyPXzdTGQtYqA=
X-Received: by 2002:a2e:5d8:: with SMTP id 207mr28871025ljf.257.1594121461257; 
 Tue, 07 Jul 2020 04:31:01 -0700 (PDT)
MIME-Version: 1.0
References: <1594059372-15563-1-git-send-email-jrdr.linux@gmail.com>
 <1594059372-15563-3-git-send-email-jrdr.linux@gmail.com>
 <8fdd8c77-27dd-2847-7929-b5d3098b1b45@suse.com>
In-Reply-To: <8fdd8c77-27dd-2847-7929-b5d3098b1b45@suse.com>
From: Souptick Joarder <jrdr.linux@gmail.com>
Date: Tue, 7 Jul 2020 17:00:49 +0530
Message-ID: <CAFqt6zZRx3oDO+p2e6EiDig9fzKirME-t6fanzDRh6e7gWx+nA@mail.gmail.com>
Subject: Re: [PATCH v2 2/3] xen/privcmd: Mark pages as dirty
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: sstabellini@kernel.org, John Hubbard <jhubbard@nvidia.com>,
 linux-kernel@vger.kernel.org, Paul Durrant <xadimgnik@gmail.com>,
 xen-devel@lists.xenproject.org, Boris Ostrovsky <boris.ostrovsky@oracle.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Tue, Jul 7, 2020 at 3:08 PM Jürgen Groß <jgross@suse.com> wrote:
>
> On 06.07.20 20:16, Souptick Joarder wrote:
> > Pages need to be marked as dirty before being unpinned in
> > unlock_pages(), which was an oversight. This is fixed now.
> >
> > Signed-off-by: Souptick Joarder <jrdr.linux@gmail.com>
> > Suggested-by: John Hubbard <jhubbard@nvidia.com>
> > Cc: John Hubbard <jhubbard@nvidia.com>
> > Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
> > Cc: Paul Durrant <xadimgnik@gmail.com>
> > ---
> >   drivers/xen/privcmd.c | 5 ++++-
> >   1 file changed, 4 insertions(+), 1 deletion(-)
> >
> > diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
> > index 33677ea..f6c1543 100644
> > --- a/drivers/xen/privcmd.c
> > +++ b/drivers/xen/privcmd.c
> > @@ -612,8 +612,11 @@ static void unlock_pages(struct page *pages[], unsigned int nr_pages)
> >   {
> >       unsigned int i;
> >
> > -     for (i = 0; i < nr_pages; i++)
> > +     for (i = 0; i < nr_pages; i++) {
> > +             if (!PageDirty(pages[i]))
> > +                     set_page_dirty_lock(pages[i]);
>
> With put_page() directly following I think you should be able to use
> set_page_dirty() instead, as there is obviously a reference to the page
> existing.

Patch [3/3] will convert the above code to use unpin_user_pages_dirty_lock(),
which internally does the same check. So I thought to keep the linux-stable
and linux-next code in sync. John had a similar concern [1] and later agreed
to keep this check.

Shall I keep this check, or not?

[1] https://lore.kernel.org/xen-devel/a750e5e5-fd5d-663b-c5fd-261d7c939ba7@nvidia.com/

>
> >               put_page(pages[i]);
> > +     }
> >   }
> >
> >   static long privcmd_ioctl_dm_op(struct file *file, void __user *udata)
> >
>
> Juergen


From xen-devel-bounces@lists.xenproject.org Tue Jul 07 11:36:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jul 2020 11:36:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jsltK-0001Tw-JZ; Tue, 07 Jul 2020 11:36:02 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Yzy/=AS=cert.pl=michall@srs-us1.protection.inumbo.net>)
 id 1jsltK-0001Tr-36
 for xen-devel@lists.xenproject.org; Tue, 07 Jul 2020 11:36:02 +0000
X-Inumbo-ID: 07d95c98-c046-11ea-8d53-12813bfff9fa
Received: from bagnar.nask.net.pl (unknown [195.187.242.196])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 07d95c98-c046-11ea-8d53-12813bfff9fa;
 Tue, 07 Jul 2020 11:36:00 +0000 (UTC)
Received: from bagnar.nask.net.pl (unknown [172.16.9.10])
 by bagnar.nask.net.pl (Postfix) with ESMTP id 95A6AA2338;
 Tue,  7 Jul 2020 13:35:59 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by bagnar.nask.net.pl (Postfix) with ESMTP id 7F3E6A232E;
 Tue,  7 Jul 2020 13:35:58 +0200 (CEST)
Received: from bagnar.nask.net.pl ([127.0.0.1])
 by localhost (bagnar.nask.net.pl [127.0.0.1]) (amavisd-new, port 10032)
 with ESMTP id 0FsJ15pnF5hk; Tue,  7 Jul 2020 13:35:58 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by bagnar.nask.net.pl (Postfix) with ESMTP id 11E70A233A;
 Tue,  7 Jul 2020 13:35:58 +0200 (CEST)
X-Virus-Scanned: amavisd-new at bagnar.nask.net.pl
Received: from bagnar.nask.net.pl ([127.0.0.1])
 by localhost (bagnar.nask.net.pl [127.0.0.1]) (amavisd-new, port 10026)
 with ESMTP id F_vW2uMNc4Ie; Tue,  7 Jul 2020 13:35:57 +0200 (CEST)
Received: from belindir.nask.net.pl (belindir-ext.nask.net.pl
 [195.187.242.210])
 by bagnar.nask.net.pl (Postfix) with ESMTP id DA026A232E;
 Tue,  7 Jul 2020 13:35:57 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by belindir.nask.net.pl (Postfix) with ESMTP id C363E20B30;
 Tue,  7 Jul 2020 13:35:27 +0200 (CEST)
Received: from belindir.nask.net.pl ([127.0.0.1])
 by localhost (belindir.nask.net.pl [127.0.0.1]) (amavisd-new, port 10032)
 with ESMTP id PXgkSWN0EHas; Tue,  7 Jul 2020 13:35:22 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by belindir.nask.net.pl (Postfix) with ESMTP id 6488C215B3;
 Tue,  7 Jul 2020 13:35:22 +0200 (CEST)
X-Virus-Scanned: amavisd-new at belindir.nask.net.pl
Received: from belindir.nask.net.pl ([127.0.0.1])
 by localhost (belindir.nask.net.pl [127.0.0.1]) (amavisd-new, port 10026)
 with ESMTP id 1m7ljDy6gyKa; Tue,  7 Jul 2020 13:35:22 +0200 (CEST)
Received: from belindir.nask.net.pl (belindir.nask.net.pl [172.16.10.10])
 by belindir.nask.net.pl (Postfix) with ESMTP id 416C2214C4;
 Tue,  7 Jul 2020 13:35:22 +0200 (CEST)
Date: Tue, 7 Jul 2020 13:35:22 +0200 (CEST)
From: =?utf-8?Q?Micha=C5=82_Leszczy=C5=84ski?= <michal.leszczynski@cert.pl>
To: Jan Beulich <jbeulich@suse.com>
Message-ID: <668304207.20725370.1594121722137.JavaMail.zimbra@cert.pl>
In-Reply-To: <3644946e-a3fa-bef0-4c55-a42988b12368@suse.com>
References: <cover.1593519420.git.michal.leszczynski@cert.pl>
 <51ecaf40-8fb5-8454-7055-5af33a47152e@xen.org>
 <d9e604e9-acb7-17df-f0d1-7552dab526c7@suse.com>
 <88892784-0ed6-2594-bef8-fd0ae46c2b17@xen.org>
 <a13451d6-d6b5-6d86-aeb0-8985db730866@suse.com>
 <ab992813-4584-f8e0-b90a-7a587c396bae@xen.org>
 <1580655090.20712847.1594120639229.JavaMail.zimbra@cert.pl>
 <3644946e-a3fa-bef0-4c55-a42988b12368@suse.com>
Subject: Re: [PATCH v4 03/10] tools/libxl: add vmtrace_pt_size parameter
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
X-Originating-IP: [172.16.10.10]
X-Mailer: Zimbra 8.6.0_GA_1194 (ZimbraWebClient - FF78 (Linux)/8.6.0_GA_1194)
Thread-Topic: tools/libxl: add vmtrace_pt_size parameter
Thread-Index: /FX9IfO24J25T2HL91s/1VhYoIURdw==
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 tamas lengyel <tamas.lengyel@intel.com>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, luwei kang <luwei.kang@intel.com>,
 Anthony PERARD <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org,
 Roger Pau =?utf-8?Q?Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

----- On 7 Jul 2020, at 13:21, Jan Beulich jbeulich@suse.com wrote:

> On 07.07.2020 13:17, Michał Leszczyński wrote:
>> So would it be OK to use uint32_t everywhere and to store the trace buffer
>> size as a number of kB? I think this is the most straightforward option.
>>
>> I would also stick with the name "processor_trace_buf_size"
>> everywhere, in the hypervisor, the ABI, and the toolstack alike, with
>> comments noting that the size is in kB.
>
> Perhaps even more clearly "processor_trace_buf_kb" then?
>
> Jan


Ok.

Best regards,
Michał Leszczyński
CERT Polska


From xen-devel-bounces@lists.xenproject.org Tue Jul 07 11:40:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jul 2020 11:40:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jslxX-0002If-5H; Tue, 07 Jul 2020 11:40:23 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=aoUK=AS=gmail.com=jrdr.linux@srs-us1.protection.inumbo.net>)
 id 1jslxV-0002Ia-Sf
 for xen-devel@lists.xenproject.org; Tue, 07 Jul 2020 11:40:21 +0000
X-Inumbo-ID: a30a3232-c046-11ea-bb8b-bc764e2007e4
Received: from mail-lj1-x244.google.com (unknown [2a00:1450:4864:20::244])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a30a3232-c046-11ea-bb8b-bc764e2007e4;
 Tue, 07 Jul 2020 11:40:21 +0000 (UTC)
Received: by mail-lj1-x244.google.com with SMTP id j11so757868ljo.7
 for <xen-devel@lists.xenproject.org>; Tue, 07 Jul 2020 04:40:21 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc:content-transfer-encoding;
 bh=NlD5Coj6tItyyuvkckmXcC1NzYj94/V/Y7EED+OVWYo=;
 b=ur/ID3tfSVMUmuYc56bPe6NPM1LH2sYj4bu4bhvgZAmORagv/bvqXunhRxwyqeod6L
 SOm/cGGjOtiumCNKjq0K3oDIZWwDwqm95Dj83bqeIZD2Rtyf6YOKwx6sGcjHePyu7aUj
 G1nKcmBmSiqLu18Io1vVZcf9ny8a+5q1hBkQjzbmra5VbcqKI8Q2Tq7ljTYr3gjOs0IY
 bEFhHK2Swntv/N5IkEbC5vyIlSwff0Svb4JNTFaxeweXV1er6tXcsdYxwJ9zkC85J2wh
 ViXR4KEJnfkEY8clR9ou0fiEewKDwfvy4M/9i+ZMwrzhbHf7z6Ls3fe8LzrJ5jnMHQy4
 +m2Q==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc:content-transfer-encoding;
 bh=NlD5Coj6tItyyuvkckmXcC1NzYj94/V/Y7EED+OVWYo=;
 b=aKHZeLDm3tbsTWXr3qq9/gy3j83Xw3iBH+/NDQH78BfBVCbkiy+S2SYZyVNmm+VBl/
 QsmJxjlCcxyshENaTOP3ie9T2BtgIHAZKob6ycpf3abAi4ng00LwiflomDRe7I425VvU
 GYZVKqcRc74yhkFCdDEe9lXrvF4KNiuwIgkufpxjBfZmWcfiFXNirwbwz+N6+6qLq0dY
 7MVeZKrNv6X76l17AXuaFm57vkQJdZwkzcZE8FKZI1ixmCjOYYK1V8WyaNxqtYS66LjT
 rFKzYDa6GjcDa41twz3QP71L1jof6pNkqXOF94yOUCjKYrWt4rWDBgHKV5JAPxNPr5kP
 ubvg==
X-Gm-Message-State: AOAM533xGWUqYKn+4hAsvm3D1kFwvzd5cKK1tfzMNfi8b8+zdllWtSsP
 CxptigRwM6ciL0/da6P6GR0+M9MPcB1al2elSmQ=
X-Google-Smtp-Source: ABdhPJzhH1+0xPd3Q4buxdjJqgwE4vDYVmSliZxyPTqVkoGiSg5XII9WREdm6ITrWaZ34HU1/VmklFDcrGbcJ8rh5yw=
X-Received: by 2002:a2e:9746:: with SMTP id f6mr22572230ljj.68.1594122019940; 
 Tue, 07 Jul 2020 04:40:19 -0700 (PDT)
MIME-Version: 1.0
References: <1594059372-15563-1-git-send-email-jrdr.linux@gmail.com>
 <1594059372-15563-2-git-send-email-jrdr.linux@gmail.com>
 <4bafb184-6f07-2582-3d0f-86fb53dd30dc@suse.com>
In-Reply-To: <4bafb184-6f07-2582-3d0f-86fb53dd30dc@suse.com>
From: Souptick Joarder <jrdr.linux@gmail.com>
Date: Tue, 7 Jul 2020 17:10:08 +0530
Message-ID: <CAFqt6zaWbEiozfkEuMvusxig15buuS1vjJaj4Q5okxNsRz_1vw@mail.gmail.com>
Subject: Re: [PATCH v2 1/3] xen/privcmd: Corrected error handling path
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: sstabellini@kernel.org, John Hubbard <jhubbard@nvidia.com>,
 linux-kernel@vger.kernel.org, Paul Durrant <xadimgnik@gmail.com>,
 xen-devel@lists.xenproject.org, Boris Ostrovsky <boris.ostrovsky@oracle.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Tue, Jul 7, 2020 at 3:05 PM Jürgen Groß <jgross@suse.com> wrote:
>
> On 06.07.20 20:16, Souptick Joarder wrote:
> > Previously, if lock_pages() ended up partially mapping pages, it
> > returned -ERRNO, due to which unlock_pages() had to go through each
> > pages[i] up to *nr_pages* to validate them. This can be avoided by
> > passing the correct number of partially mapped pages and -ERRNO
> > separately while returning from lock_pages() on error.
> >
> > With this fix unlock_pages() doesn't need to validate pages[i] up to
> > *nr_pages* in the error scenario, and a few condition checks can be
> > dropped.
> >
> > Signed-off-by: Souptick Joarder <jrdr.linux@gmail.com>
> > Cc: John Hubbard <jhubbard@nvidia.com>
> > Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
> > Cc: Paul Durrant <xadimgnik@gmail.com>
> > ---
> >   drivers/xen/privcmd.c | 31 +++++++++++++++----------------
> >   1 file changed, 15 insertions(+), 16 deletions(-)
> >
> > diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
> > index a250d11..33677ea 100644
> > --- a/drivers/xen/privcmd.c
> > +++ b/drivers/xen/privcmd.c
> > @@ -580,13 +580,13 @@ static long privcmd_ioctl_mmap_batch(
> >
> >   static int lock_pages(
> >       struct privcmd_dm_op_buf kbufs[], unsigned int num,
> > -     struct page *pages[], unsigned int nr_pages)
> > +     struct page *pages[], unsigned int nr_pages, unsigned int *pinned)
> >   {
> >       unsigned int i;
> > +     int page_count = 0;
>
> Initial value shouldn't be needed, and ...
>
> >
> >       for (i = 0; i < num; i++) {
> >               unsigned int requested;
> > -             int pinned;
>
> ... you could move the declaration here.
>
> With that done you can add my
>
> Reviewed-by: Juergen Gross <jgross@suse.com>

Ok. But does it make any difference other than limiting the scope?

>
>
> Juergen


From xen-devel-bounces@lists.xenproject.org Tue Jul 07 11:43:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jul 2020 11:43:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jsm0M-0002R8-JN; Tue, 07 Jul 2020 11:43:18 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=E+9W=AS=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jsm0L-0002Qc-IQ
 for xen-devel@lists.xenproject.org; Tue, 07 Jul 2020 11:43:17 +0000
X-Inumbo-ID: 0b47cf3a-c047-11ea-8d53-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0b47cf3a-c047-11ea-8d53-12813bfff9fa;
 Tue, 07 Jul 2020 11:43:15 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 53655AE35;
 Tue,  7 Jul 2020 11:43:15 +0000 (UTC)
Subject: Re: [PATCH v2 2/3] xen/privcmd: Mark pages as dirty
To: Souptick Joarder <jrdr.linux@gmail.com>
References: <1594059372-15563-1-git-send-email-jrdr.linux@gmail.com>
 <1594059372-15563-3-git-send-email-jrdr.linux@gmail.com>
 <8fdd8c77-27dd-2847-7929-b5d3098b1b45@suse.com>
 <CAFqt6zZRx3oDO+p2e6EiDig9fzKirME-t6fanzDRh6e7gWx+nA@mail.gmail.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <4abc0dd2-655c-16fa-dfc3-95904196c81f@suse.com>
Date: Tue, 7 Jul 2020 13:43:12 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.9.0
MIME-Version: 1.0
In-Reply-To: <CAFqt6zZRx3oDO+p2e6EiDig9fzKirME-t6fanzDRh6e7gWx+nA@mail.gmail.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: sstabellini@kernel.org, John Hubbard <jhubbard@nvidia.com>,
 linux-kernel@vger.kernel.org, Paul Durrant <xadimgnik@gmail.com>,
 xen-devel@lists.xenproject.org, Boris Ostrovsky <boris.ostrovsky@oracle.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 07.07.20 13:30, Souptick Joarder wrote:
> On Tue, Jul 7, 2020 at 3:08 PM Jürgen Groß <jgross@suse.com> wrote:
>>
>> On 06.07.20 20:16, Souptick Joarder wrote:
>>> Pages need to be marked as dirty before being unpinned in
>>> unlock_pages(), which was an oversight. This is fixed now.
>>>
>>> Signed-off-by: Souptick Joarder <jrdr.linux@gmail.com>
>>> Suggested-by: John Hubbard <jhubbard@nvidia.com>
>>> Cc: John Hubbard <jhubbard@nvidia.com>
>>> Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
>>> Cc: Paul Durrant <xadimgnik@gmail.com>
>>> ---
>>>    drivers/xen/privcmd.c | 5 ++++-
>>>    1 file changed, 4 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
>>> index 33677ea..f6c1543 100644
>>> --- a/drivers/xen/privcmd.c
>>> +++ b/drivers/xen/privcmd.c
>>> @@ -612,8 +612,11 @@ static void unlock_pages(struct page *pages[], unsigned int nr_pages)
>>>    {
>>>        unsigned int i;
>>>
>>> -     for (i = 0; i < nr_pages; i++)
>>> +     for (i = 0; i < nr_pages; i++) {
>>> +             if (!PageDirty(pages[i]))
>>> +                     set_page_dirty_lock(pages[i]);
>>
>> With put_page() directly following I think you should be able to use
>> set_page_dirty() instead, as there is obviously a reference to the page
>> existing.
> 
> Patch [3/3] will convert above codes to use unpin_user_pages_dirty_lock()
> which internally do the same check. So I thought to keep linux-stable and
> linux-next code in sync. John had a similar concern [1] and later agreed to keep
> this check.
> 
> Shall I keep this check ?  No ?
> 
> [1] https://lore.kernel.org/xen-devel/a750e5e5-fd5d-663b-c5fd-261d7c939ba7@nvidia.com/

I wasn't referring to checking PageDirty(), but to the use of
set_page_dirty_lock().

Looking at the comment just before the implementation of
set_page_dirty_lock() suggests that it is fine to use set_page_dirty()
instead (so not calling lock_page()).

Only the transition from get_user_pages_fast() to pin_user_pages_fast()
requires to use the locked version IMO.


Juergen


From xen-devel-bounces@lists.xenproject.org Tue Jul 07 11:45:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jul 2020 11:45:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jsm2k-0002Y0-0c; Tue, 07 Jul 2020 11:45:46 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=E+9W=AS=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jsm2i-0002Xv-DD
 for xen-devel@lists.xenproject.org; Tue, 07 Jul 2020 11:45:44 +0000
X-Inumbo-ID: 6398cae0-c047-11ea-8d53-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6398cae0-c047-11ea-8d53-12813bfff9fa;
 Tue, 07 Jul 2020 11:45:43 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 85156AE35;
 Tue,  7 Jul 2020 11:45:43 +0000 (UTC)
Subject: Re: [PATCH v2 1/3] xen/privcmd: Corrected error handling path
To: Souptick Joarder <jrdr.linux@gmail.com>
References: <1594059372-15563-1-git-send-email-jrdr.linux@gmail.com>
 <1594059372-15563-2-git-send-email-jrdr.linux@gmail.com>
 <4bafb184-6f07-2582-3d0f-86fb53dd30dc@suse.com>
 <CAFqt6zaWbEiozfkEuMvusxig15buuS1vjJaj4Q5okxNsRz_1vw@mail.gmail.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <7208d7fe-8822-8e9b-e531-05238ece0b02@suse.com>
Date: Tue, 7 Jul 2020 13:45:42 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.9.0
MIME-Version: 1.0
In-Reply-To: <CAFqt6zaWbEiozfkEuMvusxig15buuS1vjJaj4Q5okxNsRz_1vw@mail.gmail.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: sstabellini@kernel.org, John Hubbard <jhubbard@nvidia.com>,
 linux-kernel@vger.kernel.org, Paul Durrant <xadimgnik@gmail.com>,
 xen-devel@lists.xenproject.org, Boris Ostrovsky <boris.ostrovsky@oracle.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 07.07.20 13:40, Souptick Joarder wrote:
> On Tue, Jul 7, 2020 at 3:05 PM Jürgen Groß <jgross@suse.com> wrote:
>>
>> On 06.07.20 20:16, Souptick Joarder wrote:
>>> Previously, if lock_pages() ended up partially mapping pages, it
>>> returned -ERRNO, due to which unlock_pages() had to go through each
>>> pages[i] up to *nr_pages* to validate them. This can be avoided by
>>> passing the correct number of partially mapped pages and -ERRNO
>>> separately while returning from lock_pages() on error.
>>>
>>> With this fix unlock_pages() doesn't need to validate pages[i] up to
>>> *nr_pages* in the error scenario, and a few condition checks can be
>>> dropped.
>>>
>>> Signed-off-by: Souptick Joarder <jrdr.linux@gmail.com>
>>> Cc: John Hubbard <jhubbard@nvidia.com>
>>> Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
>>> Cc: Paul Durrant <xadimgnik@gmail.com>
>>> ---
>>>    drivers/xen/privcmd.c | 31 +++++++++++++++----------------
>>>    1 file changed, 15 insertions(+), 16 deletions(-)
>>>
>>> diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
>>> index a250d11..33677ea 100644
>>> --- a/drivers/xen/privcmd.c
>>> +++ b/drivers/xen/privcmd.c
>>> @@ -580,13 +580,13 @@ static long privcmd_ioctl_mmap_batch(
>>>
>>>    static int lock_pages(
>>>        struct privcmd_dm_op_buf kbufs[], unsigned int num,
>>> -     struct page *pages[], unsigned int nr_pages)
>>> +     struct page *pages[], unsigned int nr_pages, unsigned int *pinned)
>>>    {
>>>        unsigned int i;
>>> +     int page_count = 0;
>>
>> Initial value shouldn't be needed, and ...
>>
>>>
>>>        for (i = 0; i < num; i++) {
>>>                unsigned int requested;
>>> -             int pinned;
>>
>> ... you could move the declaration here.
>>
>> With that done you can add my
>>
>> Reviewed-by: Juergen Gross <jgross@suse.com>
> 
> Ok. But does it make any difference other than limiting the scope?

Dropping the initializer surely does, and in the end page_count just
replaces the former pinned variable, so why would we want to widen the
scope with this patch?
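
To make the error-handling shape under review concrete, here is a small
userspace sketch (mock_pin(), lock_pages_sketch() and the specific error
codes are illustrative stand-ins, not the kernel's privcmd code): on
failure, the count of successfully pinned pages is still reported
through *pinned, so the caller's cleanup only ever touches pages that
were actually pinned.

```c
#include <assert.h>
#include <errno.h>

/* Toy pinning primitive: pins up to 'requested' pages but runs out after
 * a global 'budget' pages in total, modelling partial success. */
static int budget;

static int mock_pin(unsigned int requested)
{
    if (budget <= 0)
        return -ENOMEM;
    if ((int)requested > budget)
        requested = (unsigned int)budget;
    budget -= (int)requested;
    return (int)requested;
}

/* The pattern under review: on error, report how many pages were pinned
 * so far via *pinned, instead of forcing the caller to validate every
 * slot up to nr_pages.  page_count is declared inside the loop with no
 * initializer, per the review comment. */
static int lock_pages_sketch(const unsigned int requested[], unsigned int num,
                             unsigned int *pinned)
{
    *pinned = 0;
    for (unsigned int i = 0; i < num; i++) {
        int page_count;

        page_count = mock_pin(requested[i]);
        if (page_count < 0)
            return page_count;        /* *pinned already holds the tally */
        *pinned += (unsigned int)page_count;
        if ((unsigned int)page_count < requested[i])
            return -ENOSPC;           /* a partial pin is also an error */
    }
    return 0;
}
```

With this shape, the unlock path can simply walk the first *pinned*
entries and never needs to check slots beyond them.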


Juergen


From xen-devel-bounces@lists.xenproject.org Tue Jul 07 11:48:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jul 2020 11:48:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jsm5R-0002iQ-Fp; Tue, 07 Jul 2020 11:48:33 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YS/2=AS=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1jsm5Q-0002iL-Lt
 for xen-devel@lists.xen.org; Tue, 07 Jul 2020 11:48:32 +0000
X-Inumbo-ID: c754309d-c047-11ea-8d53-12813bfff9fa
Received: from EUR04-HE1-obe.outbound.protection.outlook.com (unknown
 [40.107.7.84]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c754309d-c047-11ea-8d53-12813bfff9fa;
 Tue, 07 Jul 2020 11:48:31 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com; 
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=v5Tifxque/dl15ra/s1SFPZ8EEDTwfsSXLPNjxlgRWU=;
 b=ys85qp5AbsapNo1epSUfuqMFdbUWuicoDO1ekM5OIeaOK9/EUErSHKgsJsXq7ZFPYNAIkng4HbR1UGlVyE+8sCXm440pi/tJ58AqTFW/tKZdF2KEzkG0FJDMOW86NDh2RgcuaAjFikJf2KYsdppXEvnMe7cvw4oBrOIu0+IxDn0=
Received: from DB6PR07CA0067.eurprd07.prod.outlook.com (2603:10a6:6:2a::29) by
 DB8PR08MB5372.eurprd08.prod.outlook.com (2603:10a6:10:f9::17) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3153.21; Tue, 7 Jul 2020 11:48:29 +0000
Received: from DB5EUR03FT062.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:6:2a:cafe::bf) by DB6PR07CA0067.outlook.office365.com
 (2603:10a6:6:2a::29) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3174.9 via Frontend
 Transport; Tue, 7 Jul 2020 11:48:29 +0000
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xen.org; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;lists.xen.org; dmarc=bestguesspass action=none
 header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DB5EUR03FT062.mail.protection.outlook.com (10.152.20.197) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3153.24 via Frontend Transport; Tue, 7 Jul 2020 11:48:29 +0000
Received: ("Tessian outbound b8ad5ab47c8c:v62");
 Tue, 07 Jul 2020 11:48:29 +0000
X-CheckRecipientChecked: true
X-CR-MTA-CID: dfbf81eb248924bc
X-CR-MTA-TID: 64aa7808
Received: from eab0f25cf421.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 85E6F30D-9A9C-4550-A07E-CD97DCBA7AC9.1; 
 Tue, 07 Jul 2020 11:48:24 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id eab0f25cf421.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Tue, 07 Jul 2020 11:48:24 +0000
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=auF1fNc2ganQgJLiwXcTI6/D0u2rcgDfyTXjR9whUjGZxcauPvhRQLXVtQ2heUSAhktL6mUMTLxO9l2gt4t8vVhQHqXlRK6x83X72lXjm7uUaXIXSwqE35+MASKt9nlDd2F30f2sEtKufyFN3ZYSazWT4WcbPcrXqW3S51j06e54kEY+tDUjbEsdlNfIoUvzTIXq8LFW1mgc+Pno7X5c7yyV+d3qzYUiKU7CNUGi1XKBhut+el2YIDi0mKqIGu83Go0dc7bVJabWAHobE8EvhzgsseTD6P07283NFNj+oWv/YMoZrMVSa/3GbAWrlnzscXGa+8QeNW604/b/k5JPMw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; 
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=v5Tifxque/dl15ra/s1SFPZ8EEDTwfsSXLPNjxlgRWU=;
 b=j/0SoL+fvOcwO7NWTSchTcB3+xjBpu3mkM0NDGVg9p+sP+FRJ8dU8dYPiyc+Of54Q1EFSzeevEAQDGGB5J39yVaIVRpUIO4LagSqLHSObSzIOnRQ9bONUeCSGubuKKA1GtE7VWVxbt4Cysw4v3mnMyvaKQ0JrrJqV38DKobi0f+3zibQATtSZ1HPAfVVUNrqjhi9VqKunIU0JqHLpQzwO6PJ5bd6XRTXQtA7Ak0sjFn9J0gAFq/DTR+y7oTlEqX9pwbfLwEmZhqBAri5sZ+lxzFSoK8xmA9fhuNGq4kGsdetJmS71TsqMo70Of2ZFq6IoEEvgHIUlJEWLgfTgAb4+Q==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com; 
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=v5Tifxque/dl15ra/s1SFPZ8EEDTwfsSXLPNjxlgRWU=;
 b=ys85qp5AbsapNo1epSUfuqMFdbUWuicoDO1ekM5OIeaOK9/EUErSHKgsJsXq7ZFPYNAIkng4HbR1UGlVyE+8sCXm440pi/tJ58AqTFW/tKZdF2KEzkG0FJDMOW86NDh2RgcuaAjFikJf2KYsdppXEvnMe7cvw4oBrOIu0+IxDn0=
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DB8PR08MB4140.eurprd08.prod.outlook.com (2603:10a6:10:a8::11) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3153.28; Tue, 7 Jul
 2020 11:48:23 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::4001:43ad:d113:46a8]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::4001:43ad:d113:46a8%5]) with mapi id 15.20.3153.029; Tue, 7 Jul 2020
 11:48:23 +0000
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: "Manikandan Chockalingam (RBEI/ECF3)"
 <Manikandan.Chockalingam@in.bosch.com>
Subject: Re: [BUG] Xen build for RCAR failing
Thread-Topic: [BUG] Xen build for RCAR failing
Thread-Index: AdZUKc5JeR7gPpESR52uLkZK1kYwOwAEsnEAAAD8OlAAAEBtgAABZtcgAANXdAA=
Date: Tue, 7 Jul 2020 11:48:23 +0000
Message-ID: <C3BCAA62-51EF-49DD-B978-6657BC6D5A21@arm.com>
References: <1b60ed1cd7834ed5957a2b4870602073@in.bosch.com>
 <1D0E7281-95D7-482E-BF6D-EE5B1FEE4876@arm.com>
 <ab84437081a346d6bf0f73581382c74e@in.bosch.com>
 <D84A5DA7-683C-480B-8837-C51D560FC2E1@arm.com>
 <139024a891324455a13a3d468908798d@in.bosch.com>
In-Reply-To: <139024a891324455a13a3d468908798d@in.bosch.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
Authentication-Results-Original: in.bosch.com; dkim=none (message not signed)
 header.d=none; in.bosch.com;
 dmarc=none action=none header.from=arm.com; 
x-originating-ip: [82.24.250.194]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 9d0dc605-7636-4cb9-d3db-08d8226baacd
x-ms-traffictypediagnostic: DB8PR08MB4140:|DB8PR08MB5372:
X-Microsoft-Antispam-PRVS: <DB8PR08MB537297FD0714E56914D16CFF9D660@DB8PR08MB5372.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:8882;OLM:8882;
x-forefront-prvs: 0457F11EAF
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original: Obor/VWC7Jyvj+6y7IZdrk7QmRpCyvpR0r9pGueAxUhwxkbtUxVsTBoI7+Llx59+cCix863p9IR9lfY+l5W+V2yZUbRfG9PJEuu/OKrKeJowGDIflgPWewYmwD2UiFxyQ3CEKs6Lf/f/DneRzUKnM7wZT9oBKw6t8b91Ua0Np8X9akb11V/Dpwmfs3qSFTlUJ7SxbXEFLzWG9Ci2JoJjbIj9VWwf17Dk7mq8CLvIaqbQgjncvocLB1QCxZYb4FhIX7sDxJnqsicUGq2t5dkWmSndg3fXX6F9QB+sNx5JpaPz52S+qXbvuagH4a0zQaulbeJy/tdEFpiFvea9tpdBJg==
X-Forefront-Antispam-Report-Untrusted: CIP:255.255.255.255; CTRY:; LANG:en;
 SCL:1; SRV:; IPV:NLI; SFV:NSPM; H:DB7PR08MB3689.eurprd08.prod.outlook.com;
 PTR:; CAT:NONE; SFTY:;
 SFS:(4636009)(366004)(346002)(376002)(39860400002)(136003)(396003)(478600001)(186003)(8936002)(26005)(71200400001)(6916009)(86362001)(6512007)(6506007)(53546011)(6486002)(2906002)(5660300002)(54906003)(36756003)(8676002)(316002)(66946007)(66446008)(66476007)(76116006)(4744005)(64756008)(91956017)(66556008)(33656002)(2616005)(4326008);
 DIR:OUT; SFP:1101; 
x-ms-exchange-antispam-messagedata: 5bWMzyXS/tGe1oI2WKF8cnMJ8miPufqgK/EkikyymyGIf8ZtaROchGOk/432V9gx3Vj/vMOCugHF67Qnij0Feicf6LM53zP8nI3Y3ghKibHSniDdYSoomXwqA9ufHTr1aiJVAzpLs5cCNA7FvNLDZcKQRHyOR9yqJRYz1YE8MW3i82sIgfC2n2cyp2O8wWrsjVSkFv93ZZ13bJ6pKSYI+LVfbiqX+Hwo/kkBUKQEUcoOXeaeoD5KdnYry5fhBrkKKMZSRVoZCFyP7Lgy64mmHQ0NjY5yHt1WHxg95OeCF/KagYX2l9pXjPYyltxxHXRnrxFVsQLvN/xRRh01uOarxcb0cHjaP4D8ENvN92WUfg9fILCsh/2SikirI3mC66NvugxS4h7yV/1ydYUKbcwRaTfoKznr3XXU0qD9ViApRlDEF0XMX4h72oRAbXIfQsiDDNnoCVmnuaNEdvpi38tcxp1vQQNSiCZjZofC6UGmVbP+jAYm6QEGDH9V9YwkwNLL
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="us-ascii"
Content-ID: <FAE6ECC876282042B9FD1F6C9C429B46@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB8PR08MB4140
Original-Authentication-Results: in.bosch.com; dkim=none (message not signed)
 header.d=none; in.bosch.com;
 dmarc=none action=none header.from=arm.com; 
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped: DB5EUR03FT062.eop-EUR03.prod.protection.outlook.com
X-Forefront-Antispam-Report: CIP:63.35.35.123; CTRY:IE; LANG:en; SCL:1; SRV:;
 IPV:CAL; SFV:NSPM; H:64aa7808-outbound-1.mta.getcheckrecipient.com;
 PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com; CAT:NONE; SFTY:;
 SFS:(4636009)(136003)(376002)(346002)(396003)(39860400002)(46966005)(47076004)(186003)(2906002)(5660300002)(6512007)(336012)(2616005)(6486002)(54906003)(8676002)(53546011)(82310400002)(70206006)(33656002)(4744005)(26005)(36756003)(81166007)(6862004)(86362001)(4326008)(70586007)(8936002)(356005)(6506007)(478600001)(82740400003)(316002);
 DIR:OUT; SFP:1101; 
X-MS-Office365-Filtering-Correlation-Id-Prvs: dfe98605-a2c0-462c-1d12-08d8226ba70b
X-Forefront-PRVS: 0457F11EAF
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: 0aVKCTjpNZN5brBo7FtcWZs7uHZd5b1kcdkEdOpfOXr1Z0VHXz7PvBHGnebZLprPQwgfNK+nzfWoyw4BdKfVsOAPV5Dbgil4UQBvH/bTUZVSDMm13UyTRy+uJd+f1xgM+Qr8MnBVTYeeYlG02j2hBBmk8pmrBqjpGE4TYAybCBTAuoSFJKVAp1xXwflMvGnWKLFjskgidVI4fPGwspbEUhSjM7rltPv7R8ORnaJHvPbW7ccDtU+TtG4Ao+bdy0oBmHn1tGWYB6xa22SwUyW4AXq9T3UjHe4sJ8q0BzDKtQY/nTmKG0bWB2ANt2w2/07VHqEDeCDV9F7jvva+/7t7LnWM8U4MZ8v0bX801IX+6ckktnRmpfavkGafyCg3T9i9EuAzbGlnXcRdnSrCdd6C5g==
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 07 Jul 2020 11:48:29.5316 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 9d0dc605-7636-4cb9-d3db-08d8226baacd
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d; Ip=[63.35.35.123];
 Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource: DB5EUR03FT062.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB8PR08MB5372
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: nd <nd@arm.com>, "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>



> On 7 Jul 2020, at 11:18, Manikandan Chockalingam (RBEI/ECF3) <Manikandan.Chockalingam@in.bosch.com> wrote:
>
> Hello Bertrand,
>
> Thank you. I will try a fresh build with the dunfell branch. All layers in the sense [poky, meta-openembedded, meta-linaro, meta-renesas, meta-virtualisation, meta-selinux, xen-troops], right?

right

>
> Also, can I use the same proprietary drivers which I used for yocto2.19 [R-Car_Gen3_Series_Evaluation_Software_Package_for_Linux-20170427.zip] for this branch?

I have no idea what is in that, but I would guess it will probably not work that easily.
You might need to get in contact with Renesas for more up-to-date instructions on how to build that.

Bertrand



From xen-devel-bounces@lists.xenproject.org Tue Jul 07 11:49:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jul 2020 11:49:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jsm61-0002mZ-TT; Tue, 07 Jul 2020 11:49:09 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=E+9W=AS=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jsm61-0002mO-6k
 for xen-devel@lists.xenproject.org; Tue, 07 Jul 2020 11:49:09 +0000
X-Inumbo-ID: dd9ad950-c047-11ea-bb8b-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id dd9ad950-c047-11ea-bb8b-bc764e2007e4;
 Tue, 07 Jul 2020 11:49:08 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 2F4C5AE35;
 Tue,  7 Jul 2020 11:49:08 +0000 (UTC)
Subject: Re: [PATCH v2 3/3] xen/privcmd: Convert get_user_pages*() to
 pin_user_pages*()
To: Souptick Joarder <jrdr.linux@gmail.com>, boris.ostrovsky@oracle.com,
 sstabellini@kernel.org
References: <1594059372-15563-1-git-send-email-jrdr.linux@gmail.com>
 <1594059372-15563-4-git-send-email-jrdr.linux@gmail.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <07e11e4b-5a32-f213-46a0-0c2a8bd44b56@suse.com>
Date: Tue, 7 Jul 2020 13:49:07 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.9.0
MIME-Version: 1.0
In-Reply-To: <1594059372-15563-4-git-send-email-jrdr.linux@gmail.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org,
 Paul Durrant <xadimgnik@gmail.com>, John Hubbard <jhubbard@nvidia.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 06.07.20 20:16, Souptick Joarder wrote:
> In 2019, we introduced pin_user_pages*(), and now we are converting
> get_user_pages*() to the new API as appropriate. [1] and [2] can be
> consulted for more information. This is case 5 as per document [1].
> 
> [1] Documentation/core-api/pin_user_pages.rst
> 
> [2] "Explicit pinning of user-space pages":
>          https://lwn.net/Articles/807108/
> 
> Signed-off-by: Souptick Joarder <jrdr.linux@gmail.com>
> Cc: John Hubbard <jhubbard@nvidia.com>
> Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
> Cc: Paul Durrant <xadimgnik@gmail.com>

Reviewed-by: Juergen Gross <jgross@suse.com>


Juergen
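
For readers following the series, the conversion pattern being reviewed
can be modelled in userspace roughly as follows (struct toy_page and the
helpers are illustrative stand-ins for the kernel's
pin_user_pages_fast()/unpin_user_pages_dirty_lock(), not the real API):

```c
#include <assert.h>
#include <stdbool.h>

/* Toy stand-in for struct page: tracks the pin count and dirty state. */
struct toy_page {
    int pin_count;
    bool dirty;
};

/* Analogue of pin_user_pages_fast(): take a pin on each page. */
static void toy_pin_pages(struct toy_page *pages[], unsigned int n)
{
    for (unsigned int i = 0; i < n; i++)
        pages[i]->pin_count++;
}

/* Analogue of unpin_user_pages_dirty_lock(): mark each page dirty
 * (if not already) before dropping the pin, replacing the former
 * open-coded set_page_dirty_lock() + put_page() sequence. */
static void toy_unpin_pages_dirty(struct toy_page *pages[], unsigned int n)
{
    for (unsigned int i = 0; i < n; i++) {
        if (!pages[i]->dirty)
            pages[i]->dirty = true;
        pages[i]->pin_count--;
    }
}
```

The point of the conversion is that pinning and the dirty-then-unpin
sequence become one matched pair of calls, so the PageDirty check and
the ordering no longer need to be open-coded at each call site.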


From xen-devel-bounces@lists.xenproject.org Tue Jul 07 12:19:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jul 2020 12:19:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jsmYl-0005Tn-FW; Tue, 07 Jul 2020 12:18:51 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1g3R=AS=xenbits.xen.org=iwj@srs-us1.protection.inumbo.net>)
 id 1jsmYk-0005Sp-9n
 for xen-devel@lists.xen.org; Tue, 07 Jul 2020 12:18:50 +0000
X-Inumbo-ID: fd2469cc-c04b-11ea-bca7-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id fd2469cc-c04b-11ea-bca7-bc764e2007e4;
 Tue, 07 Jul 2020 12:18:39 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Date:Message-Id:Subject:CC:From:To:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Sender:Reply-To:Content-ID:
 Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc
 :Resent-Message-ID:In-Reply-To:References:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=mFmAfu+l9Qnx7Y9fQVSM/XKGmQ57MHrxuyQCgm25OCQ=; b=qFrRGnrO5DPb5kurUU6le/eUUD
 CRXilbVw+GyIW2LF4yoHF83eJv2Lo6OS1FSAteKRby6ngUBfP4WTQTQXEktq/crTkHrQadF43OC/W
 jnoB7/SzpRDfk438u0t5y97zmw9NIc8cVs3vSjL3cKLFhTa2gwzhQ1Z/cOKGfBuF064Q=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenbits.xen.org>)
 id 1jsmYT-0002mn-TH; Tue, 07 Jul 2020 12:18:33 +0000
Received: from iwj by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <iwj@xenbits.xen.org>)
 id 1jsmYT-0000Wl-Q7; Tue, 07 Jul 2020 12:18:33 +0000
Content-Type: multipart/mixed; boundary="=separator"; charset="utf-8"
Content-Transfer-Encoding: binary
MIME-Version: 1.0
X-Mailer: MIME-tools 5.509 (Entity 5.509)
To: xen-announce@lists.xen.org, xen-devel@lists.xen.org,
 xen-users@lists.xen.org, oss-security@lists.openwall.com
From: Xen.org security team <security@xen.org>
Subject: Xen Security Advisory 317 v3 (CVE-2020-15566) - Incorrect error
 handling in event channel port allocation
Message-Id: <E1jsmYT-0000Wl-Q7@xenbits.xenproject.org>
Date: Tue, 07 Jul 2020 12:18:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "Xen.org security team" <security-team-members@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

--=separator
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

            Xen Security Advisory CVE-2020-15566 / XSA-317
                               version 3

       Incorrect error handling in event channel port allocation

UPDATES IN VERSION 3
====================

Public release.

ISSUE DESCRIPTION
=================

The allocation of an event channel port may fail for multiple reasons:
    1) Port is already in use
    2) The memory allocation failed
    3) The port we try to allocate is higher than what is supported by
       the ABI (e.g. 2L or FIFO) used by the guest, or than the limit
       set by an administrator ('max_event_channels' in the xl cfg).

Due to the missing error checks, only 1) will be considered an error.
All the other cases will appear to provide a "valid" port and will
result in a crash when the event channel is accessed.
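The corrected allocation loop (as in the attached patch) propagates any
error other than -EBUSY instead of treating it as success.  The sketch
below models that pattern with a hypothetical stand-in allocator
(fake_allocate_port, MAX_PORTS and the fail_at parameter are
illustrative, not Xen code):

```c
#include <errno.h>

#define MAX_PORTS 4

/* Hypothetical stand-in for Xen's evtchn_allocate_port(): returns 0
 * on success, -EBUSY if the port is already in use, or another
 * negative errno (here -ENOMEM) when allocation fails. */
static int fake_allocate_port(int port, const int *taken, int fail_at)
{
    if ( port == fail_at )
        return -ENOMEM;          /* simulate an out-of-memory error */
    return taken[port] ? -EBUSY : 0;
}

/* Corrected search loop, mirroring the XSA-317 fix: only -EBUSY
 * means "try the next port"; any other error is propagated instead
 * of being mistaken for a successful allocation. */
static int get_free_port(const int *taken, int fail_at)
{
    for ( int port = 0; port < MAX_PORTS; port++ )
    {
        int rc = fake_allocate_port(port, taken, fail_at);

        if ( rc == 0 )
            return port;         /* allocation succeeded */
        else if ( rc != -EBUSY )
            return rc;           /* genuine error: report it */
    }

    return -ENOSPC;              /* every port is in use */
}
```

The vulnerable code only checked for -EBUSY, so an -ENOMEM or
out-of-range failure fell through and was returned as a usable port.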

IMPACT
======

When the administrator has configured a guest to allow more than 1023
event channels, that guest may be able to crash the host.

When Xen is out-of-memory, allocation of new event channels will
result in crashing the host rather than reporting an error.

VULNERABLE SYSTEMS
==================

Xen versions 4.10 and later are affected.  (The special Xen 4.8
"Comet" branch for XSA-254 contains changes similar to those which led
to this vulnerability; so it is likely to be affected, but - like
mainline Xen 4.8 - that branch is no longer security-supported.)

Older Xen versions are unaffected.

All architectures are affected.

The default configuration, when guests are created with xl/libxl, is
not vulnerable, because of the default event channel limit (see
Mitigation, below).

MITIGATION
==========

The problem can be avoided by reducing the number of event channels
available to the guest to no more than 1023.  For example, set
"max_event_channels=1023" in the xl domain configuration, or delete
any existing setting (since 1023 is the default for xl/libxl).
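As a minimal illustration, the relevant line in an xl domain
configuration file (the guest name is hypothetical) would be:

```
# fragment of an xl domain configuration file;
# 1023 is also the xl/libxl default when the setting is absent
name = "guest1"
max_event_channels = 1023
```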

For ARM systems, any limit of no more than 4095 is safe.

For 64-bit x86 PV guests, any limit of no more than 4095 is likewise safe
if the host configuration prevents the guest administrator from
substituting and running a 32-bit kernel (and thereby putting the
guest into 32-bit PV mode).

CREDITS
=======

This issue was discovered by Amazon.

RESOLUTION
==========

Applying the attached patch resolves this issue.

Note that patches for released versions are generally prepared to
apply to the stable branches, and may not apply cleanly to the most
recent release tarball.  Downstreams are encouraged to update to the
tip of the stable branch before applying these patches.

xsa317.patch           Xen 4.10 - xen-unstable

$ sha256sum xsa317*
11e77dd8644cee40cee609d02e27d70655f3999005cae8c24fb2801980ebb4f2  xsa317.meta
17908035e2da07f6070fa8de345db68c96ed9bd78f8b114e43ba0194c1be3f15  xsa317.patch
$

DEPLOYMENT DURING EMBARGO
=========================

Deployment of the *patch* described above (or others which are
substantially similar) is permitted during the embargo, even on
public-facing systems with untrusted guest users and administrators.

But: Distribution of updated software is prohibited (except to other
members of the predisclosure list).


And: deployment of the event channel limit reduction mitigation is NOT
permitted (except where all the affected systems and VMs are
administered and used only by organisations which are members of the
Xen Project Security Issues Predisclosure List).  Specifically,
deployment on public cloud systems is NOT permitted.

This is because such a change can be visible to the guest, so it would
leak the preconditions for the vulnerability and might lead to its
rediscovery.

Deployment of this, or similar mitigations, is permitted only AFTER
the embargo ends.


Predisclosure list members who wish to deploy significantly different
patches and/or mitigations, please contact the Xen Project Security
Team.

(Note: this during-embargo deployment notice is retained in
post-embargo publicly released Xen Project advisories, even though it
is then no longer applicable.  This is to enable the community to have
oversight of the Xen Project Security Team's decisionmaking.)

For more information about permissible uses of embargoed information,
consult the Xen Project community's agreed Security Policy:
  http://www.xenproject.org/security-policy.html
-----BEGIN PGP SIGNATURE-----

iQFABAEBCAAqFiEEI+MiLBRfRHX6gGCng/4UyVfoK9kFAl8EZ/gMHHBncEB4ZW4u
b3JnAAoJEIP+FMlX6CvZQUwIAK8W8bZ0xml2bzAu4vsXi8QqhDX4VrpkgADYZS+M
BD8hpllQ+O/CiM5ZMECj7zaWYTt7+VrGrqK4jtf2REBs/sOWcO+k7KdEury4XCKf
jIG4CzCBHC46RVEKftiqQNTX2ebVBDwoj+1fGeIvm7OhcZ7f6KdhYPHvE2bU8D45
ghr2jw33HZHoG7IsPQvJn8u6wqd6l+7h0BxhgzO5U8pI+w3ZXRM4XAno+ERzs8LO
N5ffv8UeaMIpkHoYEdsKOK/ItjhoCASoWTFvbE90u7f2WbimFnBG3oCPEVPt89kv
Y/o0+0jBk+WjXbPChMmMu5WuQuKVFDelMXLLE6mjfhGAvnI=
=vEgE
-----END PGP SIGNATURE-----

--=separator
Content-Type: application/octet-stream; name="xsa317.meta"
Content-Disposition: attachment; filename="xsa317.meta"
Content-Transfer-Encoding: base64

ewogICJYU0EiOiAzMTcsCiAgIlN1cHBvcnRlZFZlcnNpb25zIjogWwogICAg
Im1hc3RlciIsCiAgICAiNC4xMyIsCiAgICAiNC4xMiIsCiAgICAiNC4xMSIs
CiAgICAiNC4xMCIKICBdLAogICJUcmVlcyI6IFsKICAgICJ4ZW4iCiAgXSwK
ICAiUmVjaXBlcyI6IHsKICAgICI0LjEwIjogewogICAgICAiUmVjaXBlcyI6
IHsKICAgICAgICAieGVuIjogewogICAgICAgICAgIlN0YWJsZVJlZiI6ICJm
ZDZlNDllY2FlMDM4NDA2MTBmZGM2YTQxNmE2Mzg1OTBjMGI2NTM1IiwKICAg
ICAgICAgICJQcmVyZXFzIjogW10sCiAgICAgICAgICAiUGF0Y2hlcyI6IFtd
CiAgICAgICAgfQogICAgICB9CiAgICB9LAogICAgIjQuMTEiOiB7CiAgICAg
ICJSZWNpcGVzIjogewogICAgICAgICJ4ZW4iOiB7CiAgICAgICAgICAiU3Rh
YmxlUmVmIjogIjJiNzc3Mjk4ODhmYjg1MWFiOTZlN2Y3N2JjODU0MTIyNjI2
YjQ4NjEiLAogICAgICAgICAgIlByZXJlcXMiOiBbXSwKICAgICAgICAgICJQ
YXRjaGVzIjogW10KICAgICAgICB9CiAgICAgIH0KICAgIH0sCiAgICAiNC4x
MiI6IHsKICAgICAgIlJlY2lwZXMiOiB7CiAgICAgICAgInhlbiI6IHsKICAg
ICAgICAgICJTdGFibGVSZWYiOiAiMDUwZmU0OGRjOTgxZTA0ODhkZTFmNmM2
YzA3ZDgxMTBmM2I3NTIzYiIsCiAgICAgICAgICAiUHJlcmVxcyI6IFtdLAog
ICAgICAgICAgIlBhdGNoZXMiOiBbXQogICAgICAgIH0KICAgICAgfQogICAg
fSwKICAgICI0LjEzIjogewogICAgICAiUmVjaXBlcyI6IHsKICAgICAgICAi
eGVuIjogewogICAgICAgICAgIlN0YWJsZVJlZiI6ICI5ZjdlOGJhYzRjYTI3
OWIzYmZjY2I1ZjM3MzBmYjJlNTM5OGM5NWFiIiwKICAgICAgICAgICJQcmVy
ZXFzIjogW10sCiAgICAgICAgICAiUGF0Y2hlcyI6IFtdCiAgICAgICAgfQog
ICAgICB9CiAgICB9LAogICAgIjQuOSI6IHsKICAgICAgIlJlY2lwZXMiOiB7
CiAgICAgICAgInhlbiI6IHsKICAgICAgICAgICJTdGFibGVSZWYiOiAiNmU0
NzdjMmVhNGQ1YzI2YTdhN2IyZjg1MDE2NmFhNzllZGM1MjI1YyIsCiAgICAg
ICAgICAiUHJlcmVxcyI6IFtdLAogICAgICAgICAgIlBhdGNoZXMiOiBbXQog
ICAgICAgIH0KICAgICAgfQogICAgfSwKICAgICJtYXN0ZXIiOiB7CiAgICAg
ICJSZWNpcGVzIjogewogICAgICAgICJ4ZW4iOiB7CiAgICAgICAgICAiU3Rh
YmxlUmVmIjogImU0ZDIyMDcxNjViMzc5ZWMxM2M4YjUxMjkzNmY2Mzk4MmFm
NjJkMTMiLAogICAgICAgICAgIlByZXJlcXMiOiBbXSwKICAgICAgICAgICJQ
YXRjaGVzIjogW10KICAgICAgICB9CiAgICAgIH0KICAgIH0KICB9Cn0K

--=separator
Content-Type: application/octet-stream; name="xsa317.patch"
Content-Disposition: attachment; filename="xsa317.patch"
Content-Transfer-Encoding: base64

RnJvbSBhZWI0NmU5MmY5MTVmMTlhNjFkNWE4YTFmNGI2OTY3OTNmNjRlNmZi
IE1vbiBTZXAgMTcgMDA6MDA6MDAgMjAwMQpGcm9tOiBKdWxpZW4gR3JhbGwg
PGpncmFsbEBhbWF6b24uY29tPgpEYXRlOiBUaHUsIDE5IE1hciAyMDIwIDEz
OjE3OjMxICswMDAwClN1YmplY3Q6IFtQQVRDSF0geGVuL2NvbW1vbjogZXZl
bnRfY2hhbm5lbDogRG9uJ3QgaWdub3JlIGVycm9yIGluCiBnZXRfZnJlZV9w
b3J0KCkKCkN1cnJlbnRseSwgZ2V0X2ZyZWVfcG9ydCgpIGlzIGFzc3VtaW5n
IHRoYXQgdGhlIHBvcnQgaGFzIGJlZW4gYWxsb2NhdGVkCndoZW4gZXZ0Y2hu
X2FsbG9jYXRlX3BvcnQoKSBpcyBub3QgcmV0dXJuIC1FQlVTWS4KCkhvd2V2
ZXIsIHRoZSBmdW5jdGlvbiBtYXkgcmV0dXJuIGFuIGVycm9yIHdoZW46CiAg
ICAtIFdlIGV4aGF1c3RlZCBhbGwgdGhlIGV2ZW50IGNoYW5uZWxzLiBUaGlz
IGNhbiBoYXBwZW4gaWYgdGhlIGxpbWl0CiAgICBjb25maWd1cmVkIGJ5IHRo
ZSBhZG1pbmlzdHJhdG9yIGZvciB0aGUgZ3Vlc3QgKCdtYXhfZXZlbnRfY2hh
bm5lbHMnCiAgICBpbiB4bCBjZmcpIGlzIGhpZ2hlciB0aGFuIHRoZSBBQkkg
dXNlZCBieSB0aGUgZ3Vlc3QuIEZvciBpbnN0YW5jZSwKICAgIGlmIHRoZSBn
dWVzdCBpcyB1c2luZyAyTCwgdGhlIGxpbWl0IHNob3VsZCBub3QgYmUgaGln
aGVyIHRoYW4gNDA5NS4KICAgIC0gV2UgY2Fubm90IGFsbG9jYXRlIG1lbW9y
eSAoZS5nIFhlbiBoYXMgbm90IG1vcmUgbWVtb3J5KS4KClVzZXJzIG9mIGdl
dF9mcmVlX3BvcnQoKSAoc3VjaCBhcyBFVlRDSE5PUF9hbGxvY191bmJvdW5k
KSB3aWxsIHZhbGlkbHkKYXNzdW1pbmcgdGhlIHBvcnQgd2FzIHZhbGlkIGFu
ZCB3aWxsIG5leHQgY2FsbCBldnRjaG5fZnJvbV9wb3J0KCkuIFRoaXMKd2ls
bCByZXN1bHQgdG8gYSBjcmFzaCBhcyB0aGUgbWVtb3J5IGJhY2tpbmcgdGhl
IGV2ZW50IGNoYW5uZWwgc3RydWN0dXJlCmlzIG5vdCBwcmVzZW50LgoKRml4
ZXM6IDM2OGFlOWEwNWZlICgieGVuL3B2c2hpbTogZm9yd2FyZCBldnRjaG4g
b3BzIGJldHdlZW4gTDAgWGVuIGFuZCBMMiBEb21VIikKU2lnbmVkLW9mZi1i
eTogSnVsaWVuIEdyYWxsIDxqZ3JhbGxAYW1hem9uLmNvbT4KUmV2aWV3ZWQt
Ynk6IEphbiBCZXVsaWNoIDxqYmV1bGljaEBzdXNlLmNvbT4KLS0tCiB4ZW4v
Y29tbW9uL2V2ZW50X2NoYW5uZWwuYyB8IDggKysrKy0tLS0KIDEgZmlsZSBj
aGFuZ2VkLCA0IGluc2VydGlvbnMoKyksIDQgZGVsZXRpb25zKC0pCgpkaWZm
IC0tZ2l0IGEveGVuL2NvbW1vbi9ldmVudF9jaGFubmVsLmMgYi94ZW4vY29t
bW9uL2V2ZW50X2NoYW5uZWwuYwppbmRleCBlODZlMmJmYWIwLi5hOGQxODJi
NTg0IDEwMDY0NAotLS0gYS94ZW4vY29tbW9uL2V2ZW50X2NoYW5uZWwuYwor
KysgYi94ZW4vY29tbW9uL2V2ZW50X2NoYW5uZWwuYwpAQCAtMTk1LDEwICsx
OTUsMTAgQEAgc3RhdGljIGludCBnZXRfZnJlZV9wb3J0KHN0cnVjdCBkb21h
aW4gKmQpCiAgICAgewogICAgICAgICBpbnQgcmMgPSBldnRjaG5fYWxsb2Nh
dGVfcG9ydChkLCBwb3J0KTsKIAotICAgICAgICBpZiAoIHJjID09IC1FQlVT
WSApCi0gICAgICAgICAgICBjb250aW51ZTsKLQotICAgICAgICByZXR1cm4g
cG9ydDsKKyAgICAgICAgaWYgKCByYyA9PSAwICkKKyAgICAgICAgICAgIHJl
dHVybiBwb3J0OworICAgICAgICBlbHNlIGlmICggcmMgIT0gLUVCVVNZICkK
KyAgICAgICAgICAgIHJldHVybiByYzsKICAgICB9CiAKICAgICByZXR1cm4g
LUVOT1NQQzsKLS0gCjIuMTcuMQoK

--=separator--


From xen-devel-bounces@lists.xenproject.org Tue Jul 07 12:19:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jul 2020 12:19:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jsmYm-0005UF-Nu; Tue, 07 Jul 2020 12:18:52 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1g3R=AS=xenbits.xen.org=iwj@srs-us1.protection.inumbo.net>)
 id 1jsmYl-0005Sv-Qd
 for xen-devel@lists.xen.org; Tue, 07 Jul 2020 12:18:51 +0000
X-Inumbo-ID: ff1edf8c-c04b-11ea-8d5d-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ff1edf8c-c04b-11ea-8d5d-12813bfff9fa;
 Tue, 07 Jul 2020 12:18:42 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Date:Message-Id:Subject:CC:From:To:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Sender:Reply-To:Content-ID:
 Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc
 :Resent-Message-ID:In-Reply-To:References:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=pka224yeKHvOnBWMQf/upTLDc1Wie4UNUJLCWEs/KC4=; b=HQHGlhpB4pJrFNJPJts4F2QL3R
 BQ8USz9N48wvmAx4C9Hsi8zpMVPfGYxWgbuTHqr25bH3KPLI9XeapPHLF+eP1JJPUrwYLtnhlOtCs
 CS7wTaNjlsJK/5yRIieZVFv3NcoeRZ8I/aREgxO68K+iVFFpoOs0R5EKNcaapHhbYUw0=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenbits.xen.org>)
 id 1jsmYX-0002mz-D6; Tue, 07 Jul 2020 12:18:37 +0000
Received: from iwj by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <iwj@xenbits.xen.org>)
 id 1jsmYX-0000Xl-BV; Tue, 07 Jul 2020 12:18:37 +0000
Content-Type: multipart/mixed; boundary="=separator"; charset="utf-8"
Content-Transfer-Encoding: binary
MIME-Version: 1.0
X-Mailer: MIME-tools 5.509 (Entity 5.509)
To: xen-announce@lists.xen.org, xen-devel@lists.xen.org,
 xen-users@lists.xen.org, oss-security@lists.openwall.com
From: Xen.org security team <security@xen.org>
Subject: Xen Security Advisory 319 v3 (CVE-2020-15563) - inverted code
 paths in x86 dirty VRAM tracking
Message-Id: <E1jsmYX-0000Xl-BV@xenbits.xenproject.org>
Date: Tue, 07 Jul 2020 12:18:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "Xen.org security team" <security-team-members@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

--=separator
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

            Xen Security Advisory CVE-2020-15563 / XSA-319
                               version 3

            inverted code paths in x86 dirty VRAM tracking

UPDATES IN VERSION 3
====================

Public release.

ISSUE DESCRIPTION
=================

An inverted conditional in x86 HVM guests' dirty video RAM tracking
code allows such guests to make Xen de-reference a pointer guaranteed
to point at unmapped space.
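The attached patch restores the original sense of the test.  The sketch
below is an illustrative model of the corrected logic, not Xen code
(frame_is_dirty, page_dirty and the INVALID_MFN definition here are
hypothetical): an invalid frame must be reported dirty without touching
per-page state, and page data may only be dereferenced for valid frames.

```c
#include <stdint.h>

#define INVALID_MFN UINT64_MAX   /* illustrative sentinel for "no frame" */

/* Corrected check: the vulnerable code had this conditional inverted,
 * looking up page state exactly when mfn was INVALID_MFN, i.e. when
 * the pointer it derived was guaranteed to be unmapped. */
static int frame_is_dirty(uint64_t mfn, const int *page_dirty)
{
    if ( mfn == INVALID_MFN )
        return 1;                /* no backing page: report as dirty */

    return page_dirty[mfn];      /* safe: mfn indexes a real page */
}
```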

IMPACT
======

A malicious or buggy HVM guest may cause the hypervisor to crash,
resulting in Denial of Service (DoS) affecting the entire host.

VULNERABLE SYSTEMS
==================

Xen versions from 4.8 onwards are affected.  Xen versions 4.7 and
earlier are not affected.

Only x86 systems are affected.  Arm systems are not affected.

Only x86 HVM guests using shadow paging can leverage the vulnerability.
In addition there needs to be an entity actively monitoring a guest's
video frame buffer (typically for display purposes) in order for such a
guest to be able to leverage the vulnerability.  x86 PV guests, as well
as x86 HVM guests using hardware-assisted paging (HAP), cannot leverage
the vulnerability.

MITIGATION
==========

Running only PV guests will avoid the vulnerability.

For HVM guests explicitly configured to use shadow paging (e.g. via the
`hap=0' xl domain configuration file parameter), changing to HAP (e.g.
by setting `hap=1') will avoid exposing the vulnerability to those
guests.  HAP is the default (in upstream Xen), where the hardware
supports it; so this mitigation is only applicable if HAP has been
disabled by configuration.

CREDITS
=======

This issue was discovered by Jan Beulich of SUSE.

RESOLUTION
==========

Applying the attached patch resolves this issue.

Note that patches for released versions are generally prepared to
apply to the stable branches, and may not apply cleanly to the most
recent release tarball.  Downstreams are encouraged to update to the
tip of the stable branch before applying these patches.

xsa319.patch           xen-unstable, 4.13 - 4.9

$ sha256sum xsa319*
1fe0dc2e274776b8e1275f85129280f280f94ca4eabe6a8166113283dad93ed8  xsa319.meta
c145f394f8ac7d8838c376a97e1850c4125c12e478fc66ebe025ae397b27e6ea  xsa319.patch
$

DEPLOYMENT DURING EMBARGO
=========================

Deployment of the patch described above (or others which are
substantially similar) is permitted during the embargo, even on
public-facing systems with untrusted guest users and administrators.

HOWEVER deployment of the "use HAP mode" mitigation described above is
NOT permitted (except where all the affected systems and VMs are
administered and used only by organisations which are members of the Xen
Project Security Issues Predisclosure List).  Specifically, deployment
on public cloud systems is NOT permitted.

This is because in that case the configuration change can be observed
by guests, which could lead to the rediscovery of the vulnerability.

But: Distribution of updated software is prohibited (except to other
members of the predisclosure list).

Predisclosure list members who wish to deploy significantly different
patches and/or mitigations, please contact the Xen Project Security
Team.


(Note: this during-embargo deployment notice is retained in
post-embargo publicly released Xen Project advisories, even though it
is then no longer applicable.  This is to enable the community to have
oversight of the Xen Project Security Team's decisionmaking.)

For more information about permissible uses of embargoed information,
consult the Xen Project community's agreed Security Policy:
  http://www.xenproject.org/security-policy.html
-----BEGIN PGP SIGNATURE-----

iQFABAEBCAAqFiEEI+MiLBRfRHX6gGCng/4UyVfoK9kFAl8EZ/sMHHBncEB4ZW4u
b3JnAAoJEIP+FMlX6CvZ75YH/jX/sAs0icOgBtHkwVZHg318OBExxt9x+ehk/pxb
i+1ZlS/IrJ8eJdHJYq8HYvAlxmtmFP1I0t+C9vmwbP4QMcR++RmKgdJI4+/sqCsB
AMEnK+cVJSbHxD7y7eW2CPuU3h0cKx0H24JgtzA2ONse7dVz7RN+oa97D5IKryTL
cBW8WroMn2InbKMCUy/5zj89NLAlbSuWSVZzQidDwzTITukzhZZ7Xw0+Q2yh1nkK
S4kcmz7Bzzd5Mc1gFr1Eh1FxfmVVl5RxwDE//3a5VbmfPVo/f0kMOIWjXVd1R1dj
x78SPrPojOAZbb8+f1LYqHmqzCgzvpa4EFbsOnsB7CBmP2Q=
=bDFh
-----END PGP SIGNATURE-----

--=separator
Content-Type: application/octet-stream; name="xsa319.meta"
Content-Disposition: attachment; filename="xsa319.meta"
Content-Transfer-Encoding: base64

ewogICJYU0EiOiAzMTksCiAgIlN1cHBvcnRlZFZlcnNpb25zIjogWwogICAg
Im1hc3RlciIsCiAgICAiNC4xMyIsCiAgICAiNC4xMiIsCiAgICAiNC4xMSIs
CiAgICAiNC4xMCIsCiAgICAiNC45IgogIF0sCiAgIlRyZWVzIjogWwogICAg
InhlbiIKICBdLAogICJSZWNpcGVzIjogewogICAgIjQuMTAiOiB7CiAgICAg
ICJSZWNpcGVzIjogewogICAgICAgICJ4ZW4iOiB7CiAgICAgICAgICAiU3Rh
YmxlUmVmIjogImZkNmU0OWVjYWUwMzg0MDYxMGZkYzZhNDE2YTYzODU5MGMw
YjY1MzUiLAogICAgICAgICAgIlByZXJlcXMiOiBbCiAgICAgICAgICAgIDMx
NwogICAgICAgICAgXSwKICAgICAgICAgICJQYXRjaGVzIjogWwogICAgICAg
ICAgICAieHNhMzE5LnBhdGNoIgogICAgICAgICAgXQogICAgICAgIH0KICAg
ICAgfQogICAgfSwKICAgICI0LjExIjogewogICAgICAiUmVjaXBlcyI6IHsK
ICAgICAgICAieGVuIjogewogICAgICAgICAgIlN0YWJsZVJlZiI6ICIyYjc3
NzI5ODg4ZmI4NTFhYjk2ZTdmNzdiYzg1NDEyMjYyNmI0ODYxIiwKICAgICAg
ICAgICJQcmVyZXFzIjogWwogICAgICAgICAgICAzMTcKICAgICAgICAgIF0s
CiAgICAgICAgICAiUGF0Y2hlcyI6IFsKICAgICAgICAgICAgInhzYTMxOS5w
YXRjaCIKICAgICAgICAgIF0KICAgICAgICB9CiAgICAgIH0KICAgIH0sCiAg
ICAiNC4xMiI6IHsKICAgICAgIlJlY2lwZXMiOiB7CiAgICAgICAgInhlbiI6
IHsKICAgICAgICAgICJTdGFibGVSZWYiOiAiMDUwZmU0OGRjOTgxZTA0ODhk
ZTFmNmM2YzA3ZDgxMTBmM2I3NTIzYiIsCiAgICAgICAgICAiUHJlcmVxcyI6
IFsKICAgICAgICAgICAgMzE3CiAgICAgICAgICBdLAogICAgICAgICAgIlBh
dGNoZXMiOiBbCiAgICAgICAgICAgICJ4c2EzMTkucGF0Y2giCiAgICAgICAg
ICBdCiAgICAgICAgfQogICAgICB9CiAgICB9LAogICAgIjQuMTMiOiB7CiAg
ICAgICJSZWNpcGVzIjogewogICAgICAgICJ4ZW4iOiB7CiAgICAgICAgICAi
U3RhYmxlUmVmIjogIjlmN2U4YmFjNGNhMjc5YjNiZmNjYjVmMzczMGZiMmU1
Mzk4Yzk1YWIiLAogICAgICAgICAgIlByZXJlcXMiOiBbCiAgICAgICAgICAg
IDMxNwogICAgICAgICAgXSwKICAgICAgICAgICJQYXRjaGVzIjogWwogICAg
ICAgICAgICAieHNhMzE5LnBhdGNoIgogICAgICAgICAgXQogICAgICAgIH0K
ICAgICAgfQogICAgfSwKICAgICI0LjkiOiB7CiAgICAgICJSZWNpcGVzIjog
ewogICAgICAgICJ4ZW4iOiB7CiAgICAgICAgICAiU3RhYmxlUmVmIjogIjZl
NDc3YzJlYTRkNWMyNmE3YTdiMmY4NTAxNjZhYTc5ZWRjNTIyNWMiLAogICAg
ICAgICAgIlByZXJlcXMiOiBbXSwKICAgICAgICAgICJQYXRjaGVzIjogWwog
ICAgICAgICAgICAieHNhMzE5LnBhdGNoIgogICAgICAgICAgXQogICAgICAg
IH0KICAgICAgfQogICAgfSwKICAgICJtYXN0ZXIiOiB7CiAgICAgICJSZWNp
cGVzIjogewogICAgICAgICJ4ZW4iOiB7CiAgICAgICAgICAiU3RhYmxlUmVm
IjogImU0ZDIyMDcxNjViMzc5ZWMxM2M4YjUxMjkzNmY2Mzk4MmFmNjJkMTMi
LAogICAgICAgICAgIlByZXJlcXMiOiBbCiAgICAgICAgICAgIDMxNwogICAg
ICAgICAgXSwKICAgICAgICAgICJQYXRjaGVzIjogWwogICAgICAgICAgICAi
eHNhMzE5LnBhdGNoIgogICAgICAgICAgXQogICAgICAgIH0KICAgICAgfQog
ICAgfQogIH0KfQ==

--=separator
Content-Type: application/octet-stream; name="xsa319.patch"
Content-Disposition: attachment; filename="xsa319.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiB4ODYvc2hhZG93OiBjb3JyZWN0IGFuIGludmVydGVkIGNvbmRpdGlvbmFs
IGluIGRpcnR5IFZSQU0gdHJhY2tpbmcKClRoaXMgb3JpZ2luYWxseSB3YXMg
Im1mbl94KG1mbikgPT0gSU5WQUxJRF9NRk4iLiBNYWtlIGl0IGxpa2UgdGhp
cwphZ2FpbiwgdGFraW5nIHRoZSBvcHBvcnR1bml0eSB0byBhbHNvIGRyb3Ag
dGhlIHVubmVjZXNzYXJ5IG5lYXJieQpicmFjZXMuCgpUaGlzIGlzIFhTQS0z
MTkuCgpGaXhlczogMjQ2YTVhMzM3N2MyICgieGVuOiBVc2UgYSB0eXBlc2Fm
ZSB0byBkZWZpbmUgSU5WQUxJRF9NRk4iKQpTaWduZWQtb2ZmLWJ5OiBKYW4g
QmV1bGljaCA8amJldWxpY2hAc3VzZS5jb20+ClJldmlld2VkLWJ5OiBBbmRy
ZXcgQ29vcGVyIDxhbmRyZXcuY29vcGVyM0BjaXRyaXguY29tPgoKLS0tIGEv
eGVuL2FyY2gveDg2L21tL3NoYWRvdy9jb21tb24uYworKysgYi94ZW4vYXJj
aC94ODYvbW0vc2hhZG93L2NvbW1vbi5jCkBAIC0zMjUyLDEwICszMjUyLDgg
QEAgaW50IHNoYWRvd190cmFja19kaXJ0eV92cmFtKHN0cnVjdCBkb21haQog
ICAgICAgICAgICAgaW50IGRpcnR5ID0gMDsKICAgICAgICAgICAgIHBhZGRy
X3Qgc2wxbWEgPSBkaXJ0eV92cmFtLT5zbDFtYVtpXTsKIAotICAgICAgICAg
ICAgaWYgKCAhbWZuX2VxKG1mbiwgSU5WQUxJRF9NRk4pICkKLSAgICAgICAg
ICAgIHsKKyAgICAgICAgICAgIGlmICggbWZuX2VxKG1mbiwgSU5WQUxJRF9N
Rk4pICkKICAgICAgICAgICAgICAgICBkaXJ0eSA9IDE7Ci0gICAgICAgICAg
ICB9CiAgICAgICAgICAgICBlbHNlCiAgICAgICAgICAgICB7CiAgICAgICAg
ICAgICAgICAgcGFnZSA9IG1mbl90b19wYWdlKG1mbik7Cg==

--=separator--


From xen-devel-bounces@lists.xenproject.org Tue Jul 07 12:22:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jul 2020 12:22:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jsmbv-0006lA-PD; Tue, 07 Jul 2020 12:22:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1g3R=AS=xenbits.xen.org=iwj@srs-us1.protection.inumbo.net>)
 id 1jsmbu-0006jB-Do
 for xen-devel@lists.xen.org; Tue, 07 Jul 2020 12:22:06 +0000
X-Inumbo-ID: 71f3babe-c04c-11ea-bb8b-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 71f3babe-c04c-11ea-bb8b-bc764e2007e4;
 Tue, 07 Jul 2020 12:21:55 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Date:Message-Id:Subject:CC:From:To:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Sender:Reply-To:Content-ID:
 Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc
 :Resent-Message-ID:In-Reply-To:References:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=+jb059afZWg7ECHFvnGYbIcRZH6cC5fsjzA8zhBrxqQ=; b=kh2EL+TxbfwSawpcEjdqwVgmfs
 X1HkRdkXQVAzhyfbtPZ9FIoUkdn7qaATSomj8UCNVuREowPibLmJsiXcvyZkd2WxexueoolwoJgwq
 8DlQrORTiohRl3A741rGmYwuDAH54EiWf6401v1kotfOoYUmSKKT1QzWMdWgdts1Nfek=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenbits.xen.org>)
 id 1jsmbd-0002rk-6d; Tue, 07 Jul 2020 12:21:49 +0000
Received: from iwj by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <iwj@xenbits.xen.org>)
 id 1jsmbd-0002Fv-4w; Tue, 07 Jul 2020 12:21:49 +0000
Content-Type: multipart/mixed; boundary="=separator"; charset="utf-8"
Content-Transfer-Encoding: binary
MIME-Version: 1.0
X-Mailer: MIME-tools 5.509 (Entity 5.509)
To: xen-announce@lists.xen.org, xen-devel@lists.xen.org,
 xen-users@lists.xen.org, oss-security@lists.openwall.com
From: Xen.org security team <security@xen.org>
Subject: Xen Security Advisory 321 v3 (CVE-2020-15565) - insufficient
 cache write-back under VT-d
Message-Id: <E1jsmbd-0002Fv-4w@xenbits.xenproject.org>
Date: Tue, 07 Jul 2020 12:21:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "Xen.org security team" <security-team-members@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

--=separator
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

            Xen Security Advisory CVE-2020-15565 / XSA-321
                               version 3

                 insufficient cache write-back under VT-d

UPDATES IN VERSION 3
====================

Public release.

ISSUE DESCRIPTION
=================

When page tables are shared between the IOMMU and the CPU, changes to
them require flushing of both TLBs.  Furthermore, IOMMUs may be
non-coherent, and hence, prior to flushing the IOMMU TLBs, the CPU
caches also need to be written back to memory after changes were made.
Such writing back of cached data was missing in particular when
splitting large page mappings into smaller granularity ones.

IMPACT
======

A malicious guest may be able to retain read/write DMA access to
frames returned to Xen's free pool, which may later be reused for
another purpose.  Host crashes (leading to a Denial of Service) and
privilege escalation cannot be ruled out.

VULNERABLE SYSTEMS
==================

Xen versions from at least 3.2 onwards are affected.

Only x86 Intel systems are affected.  x86 AMD as well as Arm systems are
not affected.

Only x86 HVM guests using hardware assisted paging (HAP), having a
passed through PCI device assigned, and having page table sharing
enabled can leverage the vulnerability.  Note that page table
sharing will be enabled (by default) only if Xen considers IOMMU and
CPU large page size support compatible.

MITIGATION
==========

Suppressing the use of page table sharing will avoid the vulnerability
(command line option "iommu=no-sharept").  Note however that, as of Xen
version 4.13, there is also a corresponding per-guest control (the
"passthrough=" libxl guest config file option).  If any guests have been
created with an explicit setting there, that setting may conflict with
the addition of the "iommu=no-sharept" Xen command line option.

Suppressing the use of large HAP pages will avoid the vulnerability
(command line options "hap_2mb=no hap_1gb=no").

Not passing through PCI devices to HVM guests will avoid the
vulnerability.
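For example, the first two mitigations can be combined on the Xen boot
line; the GRUB fragment below is illustrative only (the file path and
boot loader setup differ per system):

```
# illustrative GRUB entry fragment adding the XSA-321 mitigations
# to the Xen command line
multiboot2 /boot/xen.gz iommu=no-sharept hap_2mb=no hap_1gb=no
```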

CREDITS
=======

This issue was discovered by Roger Pau Monné of Citrix.

RESOLUTION
==========

Applying the appropriate set of attached patches resolves this issue.

Note that, contrary to what the numbering implies, the patches here are
intended to be applied on top of XSA-328's.

Note also that patches for released versions are generally prepared to
apply to the stable branches, and may not apply cleanly to the most
recent release tarball.  Downstreams are encouraged to update to the
tip of the stable branch before applying these patches.

xsa321/xsa321-?.patch        xen-unstable
xsa321/xsa321-4.13-?.patch   Xen 4.13.x
xsa321/xsa321-4.12-?.patch   Xen 4.12.x
xsa321/xsa321-4.11-?.patch   Xen 4.11.x
xsa321/xsa321-4.10-?.patch   Xen 4.10.x
xsa321/xsa321-4.9-?.patch    Xen 4.9.x

$ sha256sum xsa321* xsa321*/*
f0824c6b6e5de723301223927dbad916e0e5fbeb70f30a7e2467a04094dd840b  xsa321.meta
35ed3be5e66da0580de8fb14ee7e6c073ac60e08e022c35ef194a714698641ad  xsa321/xsa321-1.patch
b2bbb4cf397b7b532dcab120a4d678938c50ca0df6ff2724a416ac8567bd667b  xsa321/xsa321-2.patch
87d2e0446ee3fb013c8f307e71c0ddeae8122d6beee3e5d2871aa429d8d19daa  xsa321/xsa321-3.patch
38d7e715d4ed751a9ce503b61cacaf2d06c91b2eab4be95cbc3a9ae4d2a05efb  xsa321/xsa321-4.9-1.patch
e4d5238233c883ea62491f852e543550bce9d74d7239a866f5e117df46838abc  xsa321/xsa321-4.9-2.patch
d9140aee60c848e2e07a59741bab1fde4669f2627923e5d3f08b8f2971f589c4  xsa321/xsa321-4.9-3.patch
be8e320f64185bb29c52c0c1472d9d9aa1319768076ff70e691d4b40f7938a27  xsa321/xsa321-4.9-4.patch
7d83cb2d7de293f8534fa4eae1c56979984d01d8842ac06cfcb645191f27e51f  xsa321/xsa321-4.9-5.patch
99c7cf186f0fea47ef516e3d477a5f5068adaad44624b406694b9ff33268e05b  xsa321/xsa321-4.9-6.patch
9731286e9af9d83c5bf191aa5a6be0dfa34c79bca15660cd9b9e1c8e930cf974  xsa321/xsa321-4.9-7.patch
360765e859866c466dc1c9c6893dd800407d8f09b0b6f2b07fa403c290c4f0c6  xsa321/xsa321-4.10-1.patch
e4d5238233c883ea62491f852e543550bce9d74d7239a866f5e117df46838abc  xsa321/xsa321-4.10-2.patch
74b5c19a469cc7252a296cb19288f1ab53a411530d06dd364a0e3292c6aa273f  xsa321/xsa321-4.10-3.patch
be8e320f64185bb29c52c0c1472d9d9aa1319768076ff70e691d4b40f7938a27  xsa321/xsa321-4.10-4.patch
7d83cb2d7de293f8534fa4eae1c56979984d01d8842ac06cfcb645191f27e51f  xsa321/xsa321-4.10-5.patch
99c7cf186f0fea47ef516e3d477a5f5068adaad44624b406694b9ff33268e05b  xsa321/xsa321-4.10-6.patch
fb3122d23ae7381d798721fe92c622ea2d37baac369fe89b0707030315dfc896  xsa321/xsa321-4.10-7.patch
360765e859866c466dc1c9c6893dd800407d8f09b0b6f2b07fa403c290c4f0c6  xsa321/xsa321-4.11-1.patch
02e2fda4b467f10a7f38cb2a095b9da04289d9e8489db88bf542d6527b823a23  xsa321/xsa321-4.11-2.patch
04c9bc347f8d3cbb8aecede370189bba2ed47be560d1871b91eb01b962a578cc  xsa321/xsa321-4.11-3.patch
be8e320f64185bb29c52c0c1472d9d9aa1319768076ff70e691d4b40f7938a27  xsa321/xsa321-4.11-4.patch
c1b143b43b59244d5dc755f6a99de70ac39e803a7204296bb47300b9ffe26e59  xsa321/xsa321-4.11-5.patch
38456ff553416e48f2f5438c2a5a163b20929e8a58dbe811942d0d47aacfc9ea  xsa321/xsa321-4.11-6.patch
d3b6df41682e6b88898545590bee8242c00b4593773ba8070ce57a0473094189  xsa321/xsa321-4.11-7.patch
c6d00d7a988002687be9a19a2d631c3562d8ec9f02ae24efc23eb0039f9e0ddb  xsa321/xsa321-4.12-1.patch
64dd3aa18be3ccb17ab6d813df16e2025adabbe38127f2f00175a6a481651d86  xsa321/xsa321-4.12-2.patch
935346f3d0f2759699b0ccb8002abfb0dc173ec3ed616fb9042ad86751445757  xsa321/xsa321-4.12-3.patch
be8e320f64185bb29c52c0c1472d9d9aa1319768076ff70e691d4b40f7938a27  xsa321/xsa321-4.12-4.patch
c1b143b43b59244d5dc755f6a99de70ac39e803a7204296bb47300b9ffe26e59  xsa321/xsa321-4.12-5.patch
0da20aeb89e18490d60649dbfdb9c374e5861032da784a7724216c329f2cc5f0  xsa321/xsa321-4.12-6.patch
4d1954600eeca7e2cb9143ea8e32969731071f991a9a88a245c18e860c57c22c  xsa321/xsa321-4.12-7.patch
946053a8bba53d87b4164acaf3343e30689d91b505b6355d873c016166d87103  xsa321/xsa321-4.13-1.patch
f09e8cbf0cce17647d47f38137792517c8b108c3b54f57793d03578b0d5ccf99  xsa321/xsa321-4.13-2.patch
bd50ad52d23c6fc12b69ecaaf41073833cbe9b1d66a9f4e148df078e30dd45d4  xsa321/xsa321-4.13-3.patch
b181511962ce397302be8b7d5a130abe0995b3fda68b96f1afa95ae64f62dd09  xsa321/xsa321-4.13-4.patch
3286fc184fb377c1ce94344d1dbae3b78e95b0ae766eabb80b2fc612e59ffb69  xsa321/xsa321-4.13-5.patch
03a193197d176109dc586f4d6a76aebe32a4aa147e88c79d57582cf0a186c4ef  xsa321/xsa321-4.13-6.patch
ef7f9ac74313d2dabfb258b2519b2144e4feed3c85b5f705c4b1b7ba31ec316a  xsa321/xsa321-4.13-7.patch
e6d4b77063d4cd7a7242ac54b150ce42ce684ecbf46c7eaff5715976f272f4bc  xsa321/xsa321-4.patch
920771be10110a3eef8e4b8644145794d274042092f3aa14e04fa94fc1e78e8a  xsa321/xsa321-5.patch
b10c5583e01f1c26862806562f30e393960b0bbdd7cf7fca6640f4daa88fe017  xsa321/xsa321-6.patch
18da003fb05b7aebe868ff9f1c77063b8a51be3b07ab0c9fc4821bf46ca86eeb  xsa321/xsa321-7.patch
$

DEPLOYMENT DURING EMBARGO
=========================

Deployment of the patches and/or mitigations described above (or
others which are substantially similar) is permitted during the
embargo, even on public-facing systems with untrusted guest users and
administrators.

But: Distribution of updated software is prohibited (except to other
members of the predisclosure list).

Predisclosure list members who wish to deploy significantly different
patches and/or mitigations should contact the Xen Project Security
Team.

(Note: this during-embargo deployment notice is retained in
post-embargo publicly released Xen Project advisories, even though it
is then no longer applicable.  This is to enable the community to have
oversight of the Xen Project Security Team's decision-making.)

For more information about permissible uses of embargoed information,
consult the Xen Project community's agreed Security Policy:
  http://www.xenproject.org/security-policy.html
-----BEGIN PGP SIGNATURE-----

iQFABAEBCAAqFiEEI+MiLBRfRHX6gGCng/4UyVfoK9kFAl8EaM8MHHBncEB4ZW4u
b3JnAAoJEIP+FMlX6CvZ35IH/iNi7HaBQrIqks4MB/0odUAIYyUEVsI4eAavChkX
oKO+IQ7sDOyjKG+VHWgMxtnZhcQk9A+qHMnfCjL7igp0HMonT5C1r38x/+Nf203+
V/mQ0h/Vj1Fz7qSk0mtX2j2zkAS7hEFnOQcT5TIkxAt5ZO3wSbPEwmt9UqR7VON9
rXFX6WyAqDhO7Hw2lngPXc2VGoORHqybII4XZGb24TO7q9U4vFhBR0ZVgWKBo1pt
82gl2h2jQn8IA0Rrack+ucfsoD9D+E3AQYtipZVd9PI/SJNsZHvHJdaPxBf2CUqO
Jb1e5MMXRG9Htpe0GPu8Y0TSUAUCoHqBsJTE1wkn4hun5SQ=
=/CNm
-----END PGP SIGNATURE-----

--=separator
Content-Type: application/octet-stream; name="xsa321.meta"
Content-Disposition: attachment; filename="xsa321.meta"
Content-Transfer-Encoding: base64

ewogICJYU0EiOiAzMjEsCiAgIlN1cHBvcnRlZFZlcnNpb25zIjogWwogICAg
Im1hc3RlciIsCiAgICAiNC4xMyIsCiAgICAiNC4xMiIsCiAgICAiNC4xMSIs
CiAgICAiNC4xMCIsCiAgICAiNC45IgogIF0sCiAgIlRyZWVzIjogWwogICAg
InhlbiIKICBdLAogICJSZWNpcGVzIjogewogICAgIjQuMTAiOiB7CiAgICAg
ICJSZWNpcGVzIjogewogICAgICAgICJ4ZW4iOiB7CiAgICAgICAgICAiU3Rh
YmxlUmVmIjogImZkNmU0OWVjYWUwMzg0MDYxMGZkYzZhNDE2YTYzODU5MGMw
YjY1MzUiLAogICAgICAgICAgIlByZXJlcXMiOiBbCiAgICAgICAgICAgIDMx
NywKICAgICAgICAgICAgMzE5LAogICAgICAgICAgICAzMjgKICAgICAgICAg
IF0sCiAgICAgICAgICAiUGF0Y2hlcyI6IFsKICAgICAgICAgICAgInhzYTMy
MS94c2EzMjEtNC4xMC0xLnBhdGNoIiwKICAgICAgICAgICAgInhzYTMyMS94
c2EzMjEtNC4xMC0yLnBhdGNoIiwKICAgICAgICAgICAgInhzYTMyMS94c2Ez
MjEtNC4xMC0zLnBhdGNoIiwKICAgICAgICAgICAgInhzYTMyMS94c2EzMjEt
NC4xMC00LnBhdGNoIiwKICAgICAgICAgICAgInhzYTMyMS94c2EzMjEtNC4x
MC01LnBhdGNoIiwKICAgICAgICAgICAgInhzYTMyMS94c2EzMjEtNC4xMC02
LnBhdGNoIiwKICAgICAgICAgICAgInhzYTMyMS94c2EzMjEtNC4xMC03LnBh
dGNoIgogICAgICAgICAgXQogICAgICAgIH0KICAgICAgfQogICAgfSwKICAg
ICI0LjExIjogewogICAgICAiUmVjaXBlcyI6IHsKICAgICAgICAieGVuIjog
ewogICAgICAgICAgIlN0YWJsZVJlZiI6ICIyYjc3NzI5ODg4ZmI4NTFhYjk2
ZTdmNzdiYzg1NDEyMjYyNmI0ODYxIiwKICAgICAgICAgICJQcmVyZXFzIjog
WwogICAgICAgICAgICAzMTcsCiAgICAgICAgICAgIDMxOSwKICAgICAgICAg
ICAgMzI4CiAgICAgICAgICBdLAogICAgICAgICAgIlBhdGNoZXMiOiBbCiAg
ICAgICAgICAgICJ4c2EzMjEveHNhMzIxLTQuMTEtMS5wYXRjaCIsCiAgICAg
ICAgICAgICJ4c2EzMjEveHNhMzIxLTQuMTEtMi5wYXRjaCIsCiAgICAgICAg
ICAgICJ4c2EzMjEveHNhMzIxLTQuMTEtMy5wYXRjaCIsCiAgICAgICAgICAg
ICJ4c2EzMjEveHNhMzIxLTQuMTEtNC5wYXRjaCIsCiAgICAgICAgICAgICJ4
c2EzMjEveHNhMzIxLTQuMTEtNS5wYXRjaCIsCiAgICAgICAgICAgICJ4c2Ez
MjEveHNhMzIxLTQuMTEtNi5wYXRjaCIsCiAgICAgICAgICAgICJ4c2EzMjEv
eHNhMzIxLTQuMTEtNy5wYXRjaCIKICAgICAgICAgIF0KICAgICAgICB9CiAg
ICAgIH0KICAgIH0sCiAgICAiNC4xMiI6IHsKICAgICAgIlJlY2lwZXMiOiB7
CiAgICAgICAgInhlbiI6IHsKICAgICAgICAgICJTdGFibGVSZWYiOiAiMDUw
ZmU0OGRjOTgxZTA0ODhkZTFmNmM2YzA3ZDgxMTBmM2I3NTIzYiIsCiAgICAg
ICAgICAiUHJlcmVxcyI6IFsKICAgICAgICAgICAgMzE3LAogICAgICAgICAg
ICAzMTksCiAgICAgICAgICAgIDMyOAogICAgICAgICAgXSwKICAgICAgICAg
ICJQYXRjaGVzIjogWwogICAgICAgICAgICAieHNhMzIxL3hzYTMyMS00LjEy
LTEucGF0Y2giLAogICAgICAgICAgICAieHNhMzIxL3hzYTMyMS00LjEyLTIu
cGF0Y2giLAogICAgICAgICAgICAieHNhMzIxL3hzYTMyMS00LjEyLTMucGF0
Y2giLAogICAgICAgICAgICAieHNhMzIxL3hzYTMyMS00LjEyLTQucGF0Y2gi
LAogICAgICAgICAgICAieHNhMzIxL3hzYTMyMS00LjEyLTUucGF0Y2giLAog
ICAgICAgICAgICAieHNhMzIxL3hzYTMyMS00LjEyLTYucGF0Y2giLAogICAg
ICAgICAgICAieHNhMzIxL3hzYTMyMS00LjEyLTcucGF0Y2giCiAgICAgICAg
ICBdCiAgICAgICAgfQogICAgICB9CiAgICB9LAogICAgIjQuMTMiOiB7CiAg
ICAgICJSZWNpcGVzIjogewogICAgICAgICJ4ZW4iOiB7CiAgICAgICAgICAi
U3RhYmxlUmVmIjogIjlmN2U4YmFjNGNhMjc5YjNiZmNjYjVmMzczMGZiMmU1
Mzk4Yzk1YWIiLAogICAgICAgICAgIlByZXJlcXMiOiBbCiAgICAgICAgICAg
IDMxNywKICAgICAgICAgICAgMzE5LAogICAgICAgICAgICAzMjgKICAgICAg
ICAgIF0sCiAgICAgICAgICAiUGF0Y2hlcyI6IFsKICAgICAgICAgICAgInhz
YTMyMS94c2EzMjEtNC4xMy0xLnBhdGNoIiwKICAgICAgICAgICAgInhzYTMy
MS94c2EzMjEtNC4xMy0yLnBhdGNoIiwKICAgICAgICAgICAgInhzYTMyMS94
c2EzMjEtNC4xMy0zLnBhdGNoIiwKICAgICAgICAgICAgInhzYTMyMS94c2Ez
MjEtNC4xMy00LnBhdGNoIiwKICAgICAgICAgICAgInhzYTMyMS94c2EzMjEt
NC4xMy01LnBhdGNoIiwKICAgICAgICAgICAgInhzYTMyMS94c2EzMjEtNC4x
My02LnBhdGNoIiwKICAgICAgICAgICAgInhzYTMyMS94c2EzMjEtNC4xMy03
LnBhdGNoIgogICAgICAgICAgXQogICAgICAgIH0KICAgICAgfQogICAgfSwK
ICAgICI0LjkiOiB7CiAgICAgICJSZWNpcGVzIjogewogICAgICAgICJ4ZW4i
OiB7CiAgICAgICAgICAiU3RhYmxlUmVmIjogIjZlNDc3YzJlYTRkNWMyNmE3
YTdiMmY4NTAxNjZhYTc5ZWRjNTIyNWMiLAogICAgICAgICAgIlByZXJlcXMi
OiBbCiAgICAgICAgICAgIDMxOSwKICAgICAgICAgICAgMzI4CiAgICAgICAg
ICBdLAogICAgICAgICAgIlBhdGNoZXMiOiBbCiAgICAgICAgICAgICJ4c2Ez
MjEveHNhMzIxLTQuOS0xLnBhdGNoIiwKICAgICAgICAgICAgInhzYTMyMS94
c2EzMjEtNC45LTIucGF0Y2giLAogICAgICAgICAgICAieHNhMzIxL3hzYTMy
MS00LjktMy5wYXRjaCIsCiAgICAgICAgICAgICJ4c2EzMjEveHNhMzIxLTQu
OS00LnBhdGNoIiwKICAgICAgICAgICAgInhzYTMyMS94c2EzMjEtNC45LTUu
cGF0Y2giLAogICAgICAgICAgICAieHNhMzIxL3hzYTMyMS00LjktNi5wYXRj
aCIsCiAgICAgICAgICAgICJ4c2EzMjEveHNhMzIxLTQuOS03LnBhdGNoIgog
ICAgICAgICAgXQogICAgICAgIH0KICAgICAgfQogICAgfSwKICAgICJtYXN0
ZXIiOiB7CiAgICAgICJSZWNpcGVzIjogewogICAgICAgICJ4ZW4iOiB7CiAg
ICAgICAgICAiU3RhYmxlUmVmIjogImU0ZDIyMDcxNjViMzc5ZWMxM2M4YjUx
MjkzNmY2Mzk4MmFmNjJkMTMiLAogICAgICAgICAgIlByZXJlcXMiOiBbCiAg
ICAgICAgICAgIDMxNywKICAgICAgICAgICAgMzE5LAogICAgICAgICAgICAz
MjgKICAgICAgICAgIF0sCiAgICAgICAgICAiUGF0Y2hlcyI6IFsKICAgICAg
ICAgICAgInhzYTMyMS94c2EzMjEtMS5wYXRjaCIsCiAgICAgICAgICAgICJ4
c2EzMjEveHNhMzIxLTIucGF0Y2giLAogICAgICAgICAgICAieHNhMzIxL3hz
YTMyMS0zLnBhdGNoIgogICAgICAgICAgXQogICAgICAgIH0KICAgICAgfQog
ICAgfQogIH0KfQ==

--=separator
Content-Type: application/octet-stream; name="xsa321/xsa321-1.patch"
Content-Disposition: attachment; filename="xsa321/xsa321-1.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiBbUEFUQ0ggdjUgMS85XSB2dGQ6IGltcHJvdmUgSU9NTVUgVExCIGZsdXNo
CgpEbyBub3QgbGltaXQgUFNJIGZsdXNoZXMgdG8gb3JkZXIgMCBwYWdlcywg
aW4gb3JkZXIgdG8gYXZvaWQgZG9pbmcgYQpmdWxsIFRMQiBmbHVzaCBpZiB0
aGUgcGFzc2VkIGluIHBhZ2UgaGFzIGFuIG9yZGVyIGdyZWF0ZXIgdGhhbiAw
IGFuZAppcyBhbGlnbmVkLiBTaG91bGQgaW5jcmVhc2UgdGhlIHBlcmZvcm1h
bmNlIG9mIElPTU1VIFRMQiBmbHVzaGVzIHdoZW4KZGVhbGluZyB3aXRoIHBh
Z2Ugb3JkZXJzIGdyZWF0ZXIgdGhhbiAwLgoKVGhpcyBpcyBwYXJ0IG9mIFhT
QS0zMjEuCgpTaWduZWQtb2ZmLWJ5OiBKYW4gQmV1bGljaCA8amJldWxpY2hA
c3VzZS5jb20+ClJldmlld2VkLWJ5OiBSb2dlciBQYXUgTW9ubsOpIDxyb2dl
ci5wYXVAY2l0cml4LmNvbT4KLS0tCiB4ZW4vZHJpdmVycy9wYXNzdGhyb3Vn
aC92dGQvaW9tbXUuYyB8IDUgKysrLS0KIDEgZmlsZSBjaGFuZ2VkLCAzIGlu
c2VydGlvbnMoKyksIDIgZGVsZXRpb25zKC0pCgpkaWZmIC0tZ2l0IGEveGVu
L2RyaXZlcnMvcGFzc3Rocm91Z2gvdnRkL2lvbW11LmMgYi94ZW4vZHJpdmVy
cy9wYXNzdGhyb3VnaC92dGQvaW9tbXUuYwppbmRleCAyMDhiMzNjMGU0Li5k
Y2M5YjdhMzVlIDEwMDY0NAotLS0gYS94ZW4vZHJpdmVycy9wYXNzdGhyb3Vn
aC92dGQvaW9tbXUuYworKysgYi94ZW4vZHJpdmVycy9wYXNzdGhyb3VnaC92
dGQvaW9tbXUuYwpAQCAtNTc2LDEzICs1NzYsMTQgQEAgc3RhdGljIGludCBf
X211c3RfY2hlY2sgaW9tbXVfZmx1c2hfaW90bGIoc3RydWN0IGRvbWFpbiAq
ZCwgZGZuX3QgZGZuLAogICAgICAgICBpZiAoIGlvbW11X2RvbWlkID09IC0x
ICkKICAgICAgICAgICAgIGNvbnRpbnVlOwogCi0gICAgICAgIGlmICggcGFn
ZV9jb3VudCAhPSAxIHx8IGRmbl9lcShkZm4sIElOVkFMSURfREZOKSApCisg
ICAgICAgIGlmICggIXBhZ2VfY291bnQgfHwgKHBhZ2VfY291bnQgJiAocGFn
ZV9jb3VudCAtIDEpKSB8fAorICAgICAgICAgICAgIGRmbl9lcShkZm4sIElO
VkFMSURfREZOKSB8fCAhSVNfQUxJR05FRChkZm5feChkZm4pLCBwYWdlX2Nv
dW50KSApCiAgICAgICAgICAgICByYyA9IGlvbW11X2ZsdXNoX2lvdGxiX2Rz
aShpb21tdSwgaW9tbXVfZG9taWQsCiAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAwLCBmbHVzaF9kZXZfaW90bGIpOwogICAgICAg
ICBlbHNlCiAgICAgICAgICAgICByYyA9IGlvbW11X2ZsdXNoX2lvdGxiX3Bz
aShpb21tdSwgaW9tbXVfZG9taWQsCiAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICBkZm5fdG9fZGFkZHIoZGZuKSwKLSAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIFBBR0VfT1JERVJfNEss
CisgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBnZXRf
b3JkZXJfZnJvbV9wYWdlcyhwYWdlX2NvdW50KSwKICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICFkbWFfb2xkX3B0ZV9wcmVzZW50
LAogICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgZmx1
c2hfZGV2X2lvdGxiKTsKIAotLSAKMi4yNi4yCgo=

--=separator
Content-Type: application/octet-stream; name="xsa321/xsa321-2.patch"
Content-Disposition: attachment; filename="xsa321/xsa321-2.patch"
Content-Transfer-Encoding: base64

RnJvbTogUm9nZXIgUGF1IE1vbm7DqSA8cm9nZXIucGF1QGNpdHJpeC5jb20+
ClN1YmplY3Q6IFtQQVRDSCB2NSA0LzldIHZ0ZDogcHJ1bmUgKGFuZCByZW5h
bWUpIGNhY2hlIGZsdXNoIGZ1bmN0aW9ucwoKUmVuYW1lIF9faW9tbXVfZmx1
c2hfY2FjaGUgdG8gaW9tbXVfc3luY19jYWNoZSBhbmQgcmVtb3ZlCmlvbW11
X2ZsdXNoX2NhY2hlX3BhZ2UuIEFsc28gcmVtb3ZlIHRoZSBpb21tdV9mbHVz
aF9jYWNoZV9lbnRyeQp3cmFwcGVyIGFuZCBqdXN0IHVzZSBpb21tdV9zeW5j
X2NhY2hlIGluc3RlYWQuIE5vdGUgdGhlIF9lbnRyeSBzdWZmaXgKd2FzIG1l
YW5pbmdsZXNzIGFzIHRoZSB3cmFwcGVyIHdhcyBhbHJlYWR5IHRha2luZyBh
IHNpemUgcGFyYW1ldGVyIGluCmJ5dGVzLiBXaGlsZSB0aGVyZSBhbHNvIGNv
bnN0aWZ5IHRoZSBhZGRyIHBhcmFtZXRlci4KCk5vIGZ1bmN0aW9uYWwgY2hh
bmdlIGludGVuZGVkLgoKVGhpcyBpcyBwYXJ0IG9mIFhTQS0zMjEuCgpTaWdu
ZWQtb2ZmLWJ5OiBSb2dlciBQYXUgTW9ubsOpIDxyb2dlci5wYXVAY2l0cml4
LmNvbT4KUmV2aWV3ZWQtYnk6IEphbiBCZXVsaWNoIDxqYmV1bGljaEBzdXNl
LmNvbT4KLS0tCkNoYW5nZXMgc2luY2UgdjM6CiAtIENvbnN0aWZ5IGFkZHIg
cGFyYW1ldGVyLgoKQ2hhbmdlcyBzaW5jZSB2MjoKIC0gTmV3IGluIHRoaXMg
dmVyc2lvbi4KLS0tCiB4ZW4vZHJpdmVycy9wYXNzdGhyb3VnaC92dGQvZXh0
ZXJuLmggICB8ICAzICstLQogeGVuL2RyaXZlcnMvcGFzc3Rocm91Z2gvdnRk
L2ludHJlbWFwLmMgfCAgNiArKy0tLQogeGVuL2RyaXZlcnMvcGFzc3Rocm91
Z2gvdnRkL2lvbW11LmMgICAgfCAzMyArKysrKysrKysrLS0tLS0tLS0tLS0t
LS0tLQogMyBmaWxlcyBjaGFuZ2VkLCAxNiBpbnNlcnRpb25zKCspLCAyNiBk
ZWxldGlvbnMoLSkKCmRpZmYgLS1naXQgYS94ZW4vZHJpdmVycy9wYXNzdGhy
b3VnaC92dGQvZXh0ZXJuLmggYi94ZW4vZHJpdmVycy9wYXNzdGhyb3VnaC92
dGQvZXh0ZXJuLmgKaW5kZXggOTk0ZDM2MGU5MC4uNTJiNWUxYzYwZCAxMDA2
NDQKLS0tIGEveGVuL2RyaXZlcnMvcGFzc3Rocm91Z2gvdnRkL2V4dGVybi5o
CisrKyBiL3hlbi9kcml2ZXJzL3Bhc3N0aHJvdWdoL3Z0ZC9leHRlcm4uaApA
QCAtNDMsOCArNDMsNyBAQCB2b2lkIGRpc2FibGVfcWludmFsKHN0cnVjdCB2
dGRfaW9tbXUgKmlvbW11KTsKIGludCBlbmFibGVfaW50cmVtYXAoc3RydWN0
IHZ0ZF9pb21tdSAqaW9tbXUsIGludCBlaW0pOwogdm9pZCBkaXNhYmxlX2lu
dHJlbWFwKHN0cnVjdCB2dGRfaW9tbXUgKmlvbW11KTsKIAotdm9pZCBpb21t
dV9mbHVzaF9jYWNoZV9lbnRyeSh2b2lkICphZGRyLCB1bnNpZ25lZCBpbnQg
c2l6ZSk7Ci12b2lkIGlvbW11X2ZsdXNoX2NhY2hlX3BhZ2Uodm9pZCAqYWRk
ciwgdW5zaWduZWQgbG9uZyBucGFnZXMpOwordm9pZCBpb21tdV9zeW5jX2Nh
Y2hlKGNvbnN0IHZvaWQgKmFkZHIsIHVuc2lnbmVkIGludCBzaXplKTsKIGlu
dCBpb21tdV9hbGxvYyhzdHJ1Y3QgYWNwaV9kcmhkX3VuaXQgKmRyaGQpOwog
dm9pZCBpb21tdV9mcmVlKHN0cnVjdCBhY3BpX2RyaGRfdW5pdCAqZHJoZCk7
CiAKZGlmZiAtLWdpdCBhL3hlbi9kcml2ZXJzL3Bhc3N0aHJvdWdoL3Z0ZC9p
bnRyZW1hcC5jIGIveGVuL2RyaXZlcnMvcGFzc3Rocm91Z2gvdnRkL2ludHJl
bWFwLmMKaW5kZXggYmY4NDYxOTVjNC4uYTJmMDJjMWJlYSAxMDA2NDQKLS0t
IGEveGVuL2RyaXZlcnMvcGFzc3Rocm91Z2gvdnRkL2ludHJlbWFwLmMKKysr
IGIveGVuL2RyaXZlcnMvcGFzc3Rocm91Z2gvdnRkL2ludHJlbWFwLmMKQEAg
LTIzMCw3ICsyMzAsNyBAQCBzdGF0aWMgdm9pZCBmcmVlX3JlbWFwX2VudHJ5
KHN0cnVjdCB2dGRfaW9tbXUgKmlvbW11LCBpbnQgaW5kZXgpCiAgICAgICAg
ICAgICAgICAgICAgICBpcmVtYXBfZW50cmllcywgaXJlbWFwX2VudHJ5KTsK
IAogICAgIHVwZGF0ZV9pcnRlKGlvbW11LCBpcmVtYXBfZW50cnksICZuZXdf
aXJlLCBmYWxzZSk7Ci0gICAgaW9tbXVfZmx1c2hfY2FjaGVfZW50cnkoaXJl
bWFwX2VudHJ5LCBzaXplb2YoKmlyZW1hcF9lbnRyeSkpOworICAgIGlvbW11
X3N5bmNfY2FjaGUoaXJlbWFwX2VudHJ5LCBzaXplb2YoKmlyZW1hcF9lbnRy
eSkpOwogICAgIGlvbW11X2ZsdXNoX2llY19pbmRleChpb21tdSwgMCwgaW5k
ZXgpOwogCiAgICAgdW5tYXBfdnRkX2RvbWFpbl9wYWdlKGlyZW1hcF9lbnRy
aWVzKTsKQEAgLTQwNiw3ICs0MDYsNyBAQCBzdGF0aWMgaW50IGlvYXBpY19y
dGVfdG9fcmVtYXBfZW50cnkoc3RydWN0IHZ0ZF9pb21tdSAqaW9tbXUsCiAg
ICAgfQogCiAgICAgdXBkYXRlX2lydGUoaW9tbXUsIGlyZW1hcF9lbnRyeSwg
Jm5ld19pcmUsICFpbml0KTsKLSAgICBpb21tdV9mbHVzaF9jYWNoZV9lbnRy
eShpcmVtYXBfZW50cnksIHNpemVvZigqaXJlbWFwX2VudHJ5KSk7CisgICAg
aW9tbXVfc3luY19jYWNoZShpcmVtYXBfZW50cnksIHNpemVvZigqaXJlbWFw
X2VudHJ5KSk7CiAgICAgaW9tbXVfZmx1c2hfaWVjX2luZGV4KGlvbW11LCAw
LCBpbmRleCk7CiAKICAgICB1bm1hcF92dGRfZG9tYWluX3BhZ2UoaXJlbWFw
X2VudHJpZXMpOwpAQCAtNjk1LDcgKzY5NSw3IEBAIHN0YXRpYyBpbnQgbXNp
X21zZ190b19yZW1hcF9lbnRyeSgKICAgICB1cGRhdGVfaXJ0ZShpb21tdSwg
aXJlbWFwX2VudHJ5LCAmbmV3X2lyZSwgbXNpX2Rlc2MtPmlydGVfaW5pdGlh
bGl6ZWQpOwogICAgIG1zaV9kZXNjLT5pcnRlX2luaXRpYWxpemVkID0gdHJ1
ZTsKIAotICAgIGlvbW11X2ZsdXNoX2NhY2hlX2VudHJ5KGlyZW1hcF9lbnRy
eSwgc2l6ZW9mKCppcmVtYXBfZW50cnkpKTsKKyAgICBpb21tdV9zeW5jX2Nh
Y2hlKGlyZW1hcF9lbnRyeSwgc2l6ZW9mKCppcmVtYXBfZW50cnkpKTsKICAg
ICBpb21tdV9mbHVzaF9pZWNfaW5kZXgoaW9tbXUsIDAsIGluZGV4KTsKIAog
ICAgIHVubWFwX3Z0ZF9kb21haW5fcGFnZShpcmVtYXBfZW50cmllcyk7CmRp
ZmYgLS1naXQgYS94ZW4vZHJpdmVycy9wYXNzdGhyb3VnaC92dGQvaW9tbXUu
YyBiL3hlbi9kcml2ZXJzL3Bhc3N0aHJvdWdoL3Z0ZC9pb21tdS5jCmluZGV4
IGRjYzliN2EzNWUuLjU1ZWIxNDAwMzMgMTAwNjQ0Ci0tLSBhL3hlbi9kcml2
ZXJzL3Bhc3N0aHJvdWdoL3Z0ZC9pb21tdS5jCisrKyBiL3hlbi9kcml2ZXJz
L3Bhc3N0aHJvdWdoL3Z0ZC9pb21tdS5jCkBAIC0xNDYsNyArMTQ2LDggQEAg
c3RhdGljIGludCBjb250ZXh0X2dldF9kb21haW5faWQoc3RydWN0IGNvbnRl
eHRfZW50cnkgKmNvbnRleHQsCiB9CiAKIHN0YXRpYyBpbnQgaW9tbXVzX2lu
Y29oZXJlbnQ7Ci1zdGF0aWMgdm9pZCBfX2lvbW11X2ZsdXNoX2NhY2hlKHZv
aWQgKmFkZHIsIHVuc2lnbmVkIGludCBzaXplKQorCit2b2lkIGlvbW11X3N5
bmNfY2FjaGUoY29uc3Qgdm9pZCAqYWRkciwgdW5zaWduZWQgaW50IHNpemUp
CiB7CiAgICAgaW50IGk7CiAgICAgc3RhdGljIHVuc2lnbmVkIGludCBjbGZs
dXNoX3NpemUgPSAwOwpAQCAtMTYxLDE2ICsxNjIsNiBAQCBzdGF0aWMgdm9p
ZCBfX2lvbW11X2ZsdXNoX2NhY2hlKHZvaWQgKmFkZHIsIHVuc2lnbmVkIGlu
dCBzaXplKQogICAgICAgICBjYWNoZWxpbmVfZmx1c2goKGNoYXIgKilhZGRy
ICsgaSk7CiB9CiAKLXZvaWQgaW9tbXVfZmx1c2hfY2FjaGVfZW50cnkodm9p
ZCAqYWRkciwgdW5zaWduZWQgaW50IHNpemUpCi17Ci0gICAgX19pb21tdV9m
bHVzaF9jYWNoZShhZGRyLCBzaXplKTsKLX0KLQotdm9pZCBpb21tdV9mbHVz
aF9jYWNoZV9wYWdlKHZvaWQgKmFkZHIsIHVuc2lnbmVkIGxvbmcgbnBhZ2Vz
KQotewotICAgIF9faW9tbXVfZmx1c2hfY2FjaGUoYWRkciwgUEFHRV9TSVpF
ICogbnBhZ2VzKTsKLX0KLQogLyogQWxsb2NhdGUgcGFnZSB0YWJsZSwgcmV0
dXJuIGl0cyBtYWNoaW5lIGFkZHJlc3MgKi8KIHVpbnQ2NF90IGFsbG9jX3Bn
dGFibGVfbWFkZHIodW5zaWduZWQgbG9uZyBucGFnZXMsIG5vZGVpZF90IG5v
ZGUpCiB7CkBAIC0xODksNyArMTgwLDcgQEAgdWludDY0X3QgYWxsb2NfcGd0
YWJsZV9tYWRkcih1bnNpZ25lZCBsb25nIG5wYWdlcywgbm9kZWlkX3Qgbm9k
ZSkKICAgICAgICAgdmFkZHIgPSBfX21hcF9kb21haW5fcGFnZShjdXJfcGcp
OwogICAgICAgICBtZW1zZXQodmFkZHIsIDAsIFBBR0VfU0laRSk7CiAKLSAg
ICAgICAgaW9tbXVfZmx1c2hfY2FjaGVfcGFnZSh2YWRkciwgMSk7CisgICAg
ICAgIGlvbW11X3N5bmNfY2FjaGUodmFkZHIsIFBBR0VfU0laRSk7CiAgICAg
ICAgIHVubWFwX2RvbWFpbl9wYWdlKHZhZGRyKTsKICAgICAgICAgY3VyX3Bn
Kys7CiAgICAgfQpAQCAtMjIyLDcgKzIxMyw3IEBAIHN0YXRpYyB1NjQgYnVz
X3RvX2NvbnRleHRfbWFkZHIoc3RydWN0IHZ0ZF9pb21tdSAqaW9tbXUsIHU4
IGJ1cykKICAgICAgICAgfQogICAgICAgICBzZXRfcm9vdF92YWx1ZSgqcm9v
dCwgbWFkZHIpOwogICAgICAgICBzZXRfcm9vdF9wcmVzZW50KCpyb290KTsK
LSAgICAgICAgaW9tbXVfZmx1c2hfY2FjaGVfZW50cnkocm9vdCwgc2l6ZW9m
KHN0cnVjdCByb290X2VudHJ5KSk7CisgICAgICAgIGlvbW11X3N5bmNfY2Fj
aGUocm9vdCwgc2l6ZW9mKHN0cnVjdCByb290X2VudHJ5KSk7CiAgICAgfQog
ICAgIG1hZGRyID0gKHU2NCkgZ2V0X2NvbnRleHRfYWRkcigqcm9vdCk7CiAg
ICAgdW5tYXBfdnRkX2RvbWFpbl9wYWdlKHJvb3RfZW50cmllcyk7CkBAIC0y
NjksNyArMjYwLDcgQEAgc3RhdGljIHU2NCBhZGRyX3RvX2RtYV9wYWdlX21h
ZGRyKHN0cnVjdCBkb21haW4gKmRvbWFpbiwgdTY0IGFkZHIsIGludCBhbGxv
YykKICAgICAgICAgICAgICAqLwogICAgICAgICAgICAgZG1hX3NldF9wdGVf
cmVhZGFibGUoKnB0ZSk7CiAgICAgICAgICAgICBkbWFfc2V0X3B0ZV93cml0
YWJsZSgqcHRlKTsKLSAgICAgICAgICAgIGlvbW11X2ZsdXNoX2NhY2hlX2Vu
dHJ5KHB0ZSwgc2l6ZW9mKHN0cnVjdCBkbWFfcHRlKSk7CisgICAgICAgICAg
ICBpb21tdV9zeW5jX2NhY2hlKHB0ZSwgc2l6ZW9mKHN0cnVjdCBkbWFfcHRl
KSk7CiAgICAgICAgIH0KIAogICAgICAgICBpZiAoIGxldmVsID09IDIgKQpA
QCAtNjQ1LDcgKzYzNiw3IEBAIHN0YXRpYyB2b2lkIGRtYV9wdGVfY2xlYXJf
b25lKHN0cnVjdCBkb21haW4gKmRvbWFpbiwgdWludDY0X3QgYWRkciwKICAg
ICAqZmx1c2hfZmxhZ3MgfD0gSU9NTVVfRkxVU0hGX21vZGlmaWVkOwogCiAg
ICAgc3Bpbl91bmxvY2soJmhkLT5hcmNoLm1hcHBpbmdfbG9jayk7Ci0gICAg
aW9tbXVfZmx1c2hfY2FjaGVfZW50cnkocHRlLCBzaXplb2Yoc3RydWN0IGRt
YV9wdGUpKTsKKyAgICBpb21tdV9zeW5jX2NhY2hlKHB0ZSwgc2l6ZW9mKHN0
cnVjdCBkbWFfcHRlKSk7CiAKICAgICB1bm1hcF92dGRfZG9tYWluX3BhZ2Uo
cGFnZSk7CiB9CkBAIC02ODIsNyArNjczLDcgQEAgc3RhdGljIHZvaWQgaW9t
bXVfZnJlZV9wYWdlX3RhYmxlKHN0cnVjdCBwYWdlX2luZm8gKnBnKQogICAg
ICAgICAgICAgaW9tbXVfZnJlZV9wYWdldGFibGUoZG1hX3B0ZV9hZGRyKCpw
dGUpLCBuZXh0X2xldmVsKTsKIAogICAgICAgICBkbWFfY2xlYXJfcHRlKCpw
dGUpOwotICAgICAgICBpb21tdV9mbHVzaF9jYWNoZV9lbnRyeShwdGUsIHNp
emVvZihzdHJ1Y3QgZG1hX3B0ZSkpOworICAgICAgICBpb21tdV9zeW5jX2Nh
Y2hlKHB0ZSwgc2l6ZW9mKHN0cnVjdCBkbWFfcHRlKSk7CiAgICAgfQogCiAg
ICAgdW5tYXBfdnRkX2RvbWFpbl9wYWdlKHB0X3ZhZGRyKTsKQEAgLTE0MDEs
NyArMTM5Miw3IEBAIGludCBkb21haW5fY29udGV4dF9tYXBwaW5nX29uZSgK
ICAgICBjb250ZXh0X3NldF9hZGRyZXNzX3dpZHRoKCpjb250ZXh0LCBhZ2F3
KTsKICAgICBjb250ZXh0X3NldF9mYXVsdF9lbmFibGUoKmNvbnRleHQpOwog
ICAgIGNvbnRleHRfc2V0X3ByZXNlbnQoKmNvbnRleHQpOwotICAgIGlvbW11
X2ZsdXNoX2NhY2hlX2VudHJ5KGNvbnRleHQsIHNpemVvZihzdHJ1Y3QgY29u
dGV4dF9lbnRyeSkpOworICAgIGlvbW11X3N5bmNfY2FjaGUoY29udGV4dCwg
c2l6ZW9mKHN0cnVjdCBjb250ZXh0X2VudHJ5KSk7CiAgICAgc3Bpbl91bmxv
Y2soJmlvbW11LT5sb2NrKTsKIAogICAgIC8qIENvbnRleHQgZW50cnkgd2Fz
IHByZXZpb3VzbHkgbm9uLXByZXNlbnQgKHdpdGggZG9taWQgMCkuICovCkBA
IC0xNTY1LDcgKzE1NTYsNyBAQCBpbnQgZG9tYWluX2NvbnRleHRfdW5tYXBf
b25lKAogCiAgICAgY29udGV4dF9jbGVhcl9wcmVzZW50KCpjb250ZXh0KTsK
ICAgICBjb250ZXh0X2NsZWFyX2VudHJ5KCpjb250ZXh0KTsKLSAgICBpb21t
dV9mbHVzaF9jYWNoZV9lbnRyeShjb250ZXh0LCBzaXplb2Yoc3RydWN0IGNv
bnRleHRfZW50cnkpKTsKKyAgICBpb21tdV9zeW5jX2NhY2hlKGNvbnRleHQs
IHNpemVvZihzdHJ1Y3QgY29udGV4dF9lbnRyeSkpOwogCiAgICAgaW9tbXVf
ZG9taWQ9IGRvbWFpbl9pb21tdV9kb21pZChkb21haW4sIGlvbW11KTsKICAg
ICBpZiAoIGlvbW11X2RvbWlkID09IC0xICkKQEAgLTE3OTIsNyArMTc4Myw3
IEBAIHN0YXRpYyBpbnQgX19tdXN0X2NoZWNrIGludGVsX2lvbW11X21hcF9w
YWdlKHN0cnVjdCBkb21haW4gKmQsIGRmbl90IGRmbiwKIAogICAgICpwdGUg
PSBuZXc7CiAKLSAgICBpb21tdV9mbHVzaF9jYWNoZV9lbnRyeShwdGUsIHNp
emVvZihzdHJ1Y3QgZG1hX3B0ZSkpOworICAgIGlvbW11X3N5bmNfY2FjaGUo
cHRlLCBzaXplb2Yoc3RydWN0IGRtYV9wdGUpKTsKICAgICBzcGluX3VubG9j
aygmaGQtPmFyY2gubWFwcGluZ19sb2NrKTsKICAgICB1bm1hcF92dGRfZG9t
YWluX3BhZ2UocGFnZSk7CiAKQEAgLTE4NjksNyArMTg2MCw3IEBAIGludCBp
b21tdV9wdGVfZmx1c2goc3RydWN0IGRvbWFpbiAqZCwgdWludDY0X3QgZGZu
LCB1aW50NjRfdCAqcHRlLAogICAgIGludCBpb21tdV9kb21pZDsKICAgICBp
bnQgcmMgPSAwOwogCi0gICAgaW9tbXVfZmx1c2hfY2FjaGVfZW50cnkocHRl
LCBzaXplb2Yoc3RydWN0IGRtYV9wdGUpKTsKKyAgICBpb21tdV9zeW5jX2Nh
Y2hlKHB0ZSwgc2l6ZW9mKHN0cnVjdCBkbWFfcHRlKSk7CiAKICAgICBmb3Jf
ZWFjaF9kcmhkX3VuaXQgKCBkcmhkICkKICAgICB7CkBAIC0yNzM5LDcgKzI3
MzAsNyBAQCBzdGF0aWMgaW50IF9faW5pdCBpbnRlbF9pb21tdV9xdWFyYW50
aW5lX2luaXQoc3RydWN0IGRvbWFpbiAqZCkKICAgICAgICAgICAgIGRtYV9z
ZXRfcHRlX2FkZHIoKnB0ZSwgbWFkZHIpOwogICAgICAgICAgICAgZG1hX3Nl
dF9wdGVfcmVhZGFibGUoKnB0ZSk7CiAgICAgICAgIH0KLSAgICAgICAgaW9t
bXVfZmx1c2hfY2FjaGVfcGFnZShwYXJlbnQsIDEpOworICAgICAgICBpb21t
dV9zeW5jX2NhY2hlKHBhcmVudCwgUEFHRV9TSVpFKTsKIAogICAgICAgICB1
bm1hcF92dGRfZG9tYWluX3BhZ2UocGFyZW50KTsKICAgICAgICAgcGFyZW50
ID0gbWFwX3Z0ZF9kb21haW5fcGFnZShtYWRkcik7Ci0tIAoyLjI2LjIKCg==

--=separator
Content-Type: application/octet-stream; name="xsa321/xsa321-3.patch"
Content-Disposition: attachment; filename="xsa321/xsa321-3.patch"
Content-Transfer-Encoding: base64

RnJvbTogUm9nZXIgUGF1IE1vbm7DqSA8cm9nZXIucGF1QGNpdHJpeC5jb20+
ClN1YmplY3Q6IFtQQVRDSCB2NSA1LzldIHg4Ni9pb21tdTogaW50cm9kdWNl
IGEgY2FjaGUgc3luYyBob29rCgpUaGUgaG9vayBpcyBvbmx5IGltcGxlbWVu
dGVkIGZvciBWVC1kIGFuZCBpdCB1c2VzIHRoZSBhbHJlYWR5IGV4aXN0aW5n
CmlvbW11X3N5bmNfY2FjaGUgZnVuY3Rpb24gcHJlc2VudCBpbiBWVC1kIGNv
ZGUuIFRoZSBuZXcgaG9vayBpcwphZGRlZCBzbyB0aGF0IHRoZSBjYWNoZSBj
YW4gYmUgZmx1c2hlZCBieSBjb2RlIG91dHNpZGUgb2YgVlQtZCB3aGVuCnVz
aW5nIHNoYXJlZCBwYWdlIHRhYmxlcy4KCk5vdGUgdGhhdCBhbGxvY19wZ3Rh
YmxlX21hZGRyIG11c3QgdXNlIHRoZSBub3cgbG9jYWxseSBkZWZpbmVkCnN5
bmNfY2FjaGUgZnVuY3Rpb24sIGJlY2F1c2UgSU9NTVUgb3BzIGFyZSBub3Qg
eWV0IHNldHVwIHRoZSBmaXJzdAp0aW1lIHRoZSBmdW5jdGlvbiBnZXRzIGNh
bGxlZCBkdXJpbmcgSU9NTVUgaW5pdGlhbGl6YXRpb24uCgpObyBmdW5jdGlv
bmFsIGNoYW5nZSBpbnRlbmRlZC4KClRoaXMgaXMgcGFydCBvZiBYU0EtMzIx
LgoKU2lnbmVkLW9mZi1ieTogUm9nZXIgUGF1IE1vbm7DqSA8cm9nZXIucGF1
QGNpdHJpeC5jb20+ClJldmlld2VkLWJ5OiBKYW4gQmV1bGljaCA8amJldWxp
Y2hAc3VzZS5jb20+Ci0tLQpDaGFuZ2VzIHNpbmNlIHYzOgogLSBVc2UgYSBt
YWNybyBpbnN0ZWFkIG9mIGEgZnVuY3Rpb24uCgpDaGFuZ2VzIHNpbmNlIHYy
OgogLSBSZW5hbWUgdG8gaW9tbXVfc3luY19jYWNoZS4KIC0gTW92ZSB0byBp
b21tdS5jIGluIG9yZGVyIHRvIHVzZSB0aGUgYWx0ZXJuYXRpdmUgY2FsbCBw
YXRjaGluZy4KLS0tCiB4ZW4vZHJpdmVycy9wYXNzdGhyb3VnaC92dGQvZXh0
ZXJuLmggfCAxIC0KIHhlbi9kcml2ZXJzL3Bhc3N0aHJvdWdoL3Z0ZC9pb21t
dS5jICB8IDUgKysrLS0KIHhlbi9pbmNsdWRlL2FzbS14ODYvaW9tbXUuaCAg
ICAgICAgICB8IDcgKysrKysrKwogeGVuL2luY2x1ZGUveGVuL2lvbW11Lmgg
ICAgICAgICAgICAgIHwgMSArCiA0IGZpbGVzIGNoYW5nZWQsIDExIGluc2Vy
dGlvbnMoKyksIDMgZGVsZXRpb25zKC0pCgpkaWZmIC0tZ2l0IGEveGVuL2Ry
aXZlcnMvcGFzc3Rocm91Z2gvdnRkL2V4dGVybi5oIGIveGVuL2RyaXZlcnMv
cGFzc3Rocm91Z2gvdnRkL2V4dGVybi5oCmluZGV4IDUyYjVlMWM2MGQuLmYx
NTk0N2FmMWYgMTAwNjQ0Ci0tLSBhL3hlbi9kcml2ZXJzL3Bhc3N0aHJvdWdo
L3Z0ZC9leHRlcm4uaAorKysgYi94ZW4vZHJpdmVycy9wYXNzdGhyb3VnaC92
dGQvZXh0ZXJuLmgKQEAgLTQzLDcgKzQzLDYgQEAgdm9pZCBkaXNhYmxlX3Fp
bnZhbChzdHJ1Y3QgdnRkX2lvbW11ICppb21tdSk7CiBpbnQgZW5hYmxlX2lu
dHJlbWFwKHN0cnVjdCB2dGRfaW9tbXUgKmlvbW11LCBpbnQgZWltKTsKIHZv
aWQgZGlzYWJsZV9pbnRyZW1hcChzdHJ1Y3QgdnRkX2lvbW11ICppb21tdSk7
CiAKLXZvaWQgaW9tbXVfc3luY19jYWNoZShjb25zdCB2b2lkICphZGRyLCB1
bnNpZ25lZCBpbnQgc2l6ZSk7CiBpbnQgaW9tbXVfYWxsb2Moc3RydWN0IGFj
cGlfZHJoZF91bml0ICpkcmhkKTsKIHZvaWQgaW9tbXVfZnJlZShzdHJ1Y3Qg
YWNwaV9kcmhkX3VuaXQgKmRyaGQpOwogCmRpZmYgLS1naXQgYS94ZW4vZHJp
dmVycy9wYXNzdGhyb3VnaC92dGQvaW9tbXUuYyBiL3hlbi9kcml2ZXJzL3Bh
c3N0aHJvdWdoL3Z0ZC9pb21tdS5jCmluZGV4IDU1ZWIxNDAwMzMuLjkzYmNk
NzJmODQgMTAwNjQ0Ci0tLSBhL3hlbi9kcml2ZXJzL3Bhc3N0aHJvdWdoL3Z0
ZC9pb21tdS5jCisrKyBiL3hlbi9kcml2ZXJzL3Bhc3N0aHJvdWdoL3Z0ZC9p
b21tdS5jCkBAIC0xNDcsNyArMTQ3LDcgQEAgc3RhdGljIGludCBjb250ZXh0
X2dldF9kb21haW5faWQoc3RydWN0IGNvbnRleHRfZW50cnkgKmNvbnRleHQs
CiAKIHN0YXRpYyBpbnQgaW9tbXVzX2luY29oZXJlbnQ7CiAKLXZvaWQgaW9t
bXVfc3luY19jYWNoZShjb25zdCB2b2lkICphZGRyLCB1bnNpZ25lZCBpbnQg
c2l6ZSkKK3N0YXRpYyB2b2lkIHN5bmNfY2FjaGUoY29uc3Qgdm9pZCAqYWRk
ciwgdW5zaWduZWQgaW50IHNpemUpCiB7CiAgICAgaW50IGk7CiAgICAgc3Rh
dGljIHVuc2lnbmVkIGludCBjbGZsdXNoX3NpemUgPSAwOwpAQCAtMTgwLDcg
KzE4MCw3IEBAIHVpbnQ2NF90IGFsbG9jX3BndGFibGVfbWFkZHIodW5zaWdu
ZWQgbG9uZyBucGFnZXMsIG5vZGVpZF90IG5vZGUpCiAgICAgICAgIHZhZGRy
ID0gX19tYXBfZG9tYWluX3BhZ2UoY3VyX3BnKTsKICAgICAgICAgbWVtc2V0
KHZhZGRyLCAwLCBQQUdFX1NJWkUpOwogCi0gICAgICAgIGlvbW11X3N5bmNf
Y2FjaGUodmFkZHIsIFBBR0VfU0laRSk7CisgICAgICAgIHN5bmNfY2FjaGUo
dmFkZHIsIFBBR0VfU0laRSk7CiAgICAgICAgIHVubWFwX2RvbWFpbl9wYWdl
KHZhZGRyKTsKICAgICAgICAgY3VyX3BnKys7CiAgICAgfQpAQCAtMjc3OCw2
ICsyNzc4LDcgQEAgY29uc3Qgc3RydWN0IGlvbW11X29wcyBfX2luaXRjb25z
dHJlbCBpbnRlbF9pb21tdV9vcHMgPSB7CiAgICAgLmlvdGxiX2ZsdXNoX2Fs
bCA9IGlvbW11X2ZsdXNoX2lvdGxiX2FsbCwKICAgICAuZ2V0X3Jlc2VydmVk
X2RldmljZV9tZW1vcnkgPSBpbnRlbF9pb21tdV9nZXRfcmVzZXJ2ZWRfZGV2
aWNlX21lbW9yeSwKICAgICAuZHVtcF9wMm1fdGFibGUgPSB2dGRfZHVtcF9w
Mm1fdGFibGUsCisgICAgLnN5bmNfY2FjaGUgPSBzeW5jX2NhY2hlLAogfTsK
IAogY29uc3Qgc3RydWN0IGlvbW11X2luaXRfb3BzIF9faW5pdGNvbnN0cmVs
IGludGVsX2lvbW11X2luaXRfb3BzID0gewpkaWZmIC0tZ2l0IGEveGVuL2lu
Y2x1ZGUvYXNtLXg4Ni9pb21tdS5oIGIveGVuL2luY2x1ZGUvYXNtLXg4Ni9p
b21tdS5oCmluZGV4IDg1NzQxZjdjOTYuLjg2NGUwMjUwNzggMTAwNjQ0Ci0t
LSBhL3hlbi9pbmNsdWRlL2FzbS14ODYvaW9tbXUuaAorKysgYi94ZW4vaW5j
bHVkZS9hc20teDg2L2lvbW11LmgKQEAgLTEyMSw2ICsxMjEsMTMgQEAgZXh0
ZXJuIGJvb2wgdW50cnVzdGVkX21zaTsKIGludCBwaV91cGRhdGVfaXJ0ZShj
b25zdCBzdHJ1Y3QgcGlfZGVzYyAqcGlfZGVzYywgY29uc3Qgc3RydWN0IHBp
cnEgKnBpcnEsCiAgICAgICAgICAgICAgICAgICAgY29uc3QgdWludDhfdCBn
dmVjKTsKIAorI2RlZmluZSBpb21tdV9zeW5jX2NhY2hlKGFkZHIsIHNpemUp
ICh7ICAgICAgICAgICAgICAgICBcCisgICAgY29uc3Qgc3RydWN0IGlvbW11
X29wcyAqb3BzID0gaW9tbXVfZ2V0X29wcygpOyAgICAgIFwKKyAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgXAorICAgIGlmICggb3BzLT5zeW5jX2NhY2hlICkgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICBcCisgICAgICAgIGlvbW11X3ZjYWxsKG9wcywg
c3luY19jYWNoZSwgYWRkciwgc2l6ZSk7ICAgICAgIFwKK30pCisKICNlbmRp
ZiAvKiAhX19BUkNIX1g4Nl9JT01NVV9IX18gKi8KIC8qCiAgKiBMb2NhbCB2
YXJpYWJsZXM6CmRpZmYgLS1naXQgYS94ZW4vaW5jbHVkZS94ZW4vaW9tbXUu
aCBiL3hlbi9pbmNsdWRlL3hlbi9pb21tdS5oCmluZGV4IDYyNjRkM2QwN2Yu
LjMyNzI4NzQ5NTggMTAwNjQ0Ci0tLSBhL3hlbi9pbmNsdWRlL3hlbi9pb21t
dS5oCisrKyBiL3hlbi9pbmNsdWRlL3hlbi9pb21tdS5oCkBAIC0yNzUsNiAr
Mjc1LDcgQEAgc3RydWN0IGlvbW11X29wcyB7CiAgICAgaW50ICgqc2V0dXBf
aHBldF9tc2kpKHN0cnVjdCBtc2lfZGVzYyAqKTsKIAogICAgIGludCAoKmFk
anVzdF9pcnFfYWZmaW5pdGllcykodm9pZCk7CisgICAgdm9pZCAoKnN5bmNf
Y2FjaGUpKGNvbnN0IHZvaWQgKmFkZHIsIHVuc2lnbmVkIGludCBzaXplKTsK
ICNlbmRpZiAvKiBDT05GSUdfWDg2ICovCiAKICAgICBpbnQgX19tdXN0X2No
ZWNrICgqc3VzcGVuZCkodm9pZCk7Ci0tIAoyLjI2LjIKCg==

--=separator
Content-Type: application/octet-stream; name="xsa321/xsa321-4.9-1.patch"
Content-Disposition: attachment; filename="xsa321/xsa321-4.9-1.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiB2dGQ6IGltcHJvdmUgSU9NTVUgVExCIGZsdXNoCgpEbyBub3QgbGltaXQg
UFNJIGZsdXNoZXMgdG8gb3JkZXIgMCBwYWdlcywgaW4gb3JkZXIgdG8gYXZv
aWQgZG9pbmcgYQpmdWxsIFRMQiBmbHVzaCBpZiB0aGUgcGFzc2VkIGluIHBh
Z2UgaGFzIGFuIG9yZGVyIGdyZWF0ZXIgdGhhbiAwIGFuZAppcyBhbGlnbmVk
LiBTaG91bGQgaW5jcmVhc2UgdGhlIHBlcmZvcm1hbmNlIG9mIElPTU1VIFRM
QiBmbHVzaGVzIHdoZW4KZGVhbGluZyB3aXRoIHBhZ2Ugb3JkZXJzIGdyZWF0
ZXIgdGhhbiAwLgoKVGhpcyBpcyBwYXJ0IG9mIFhTQS0zMjEuCgpTaWduZWQt
b2ZmLWJ5OiBKYW4gQmV1bGljaCA8amJldWxpY2hAc3VzZS5jb20+ClJldmll
d2VkLWJ5OiBSb2dlciBQYXUgTW9ubsOpIDxyb2dlci5wYXVAY2l0cml4LmNv
bT4KCi0tLSBhL3hlbi9kcml2ZXJzL3Bhc3N0aHJvdWdoL3Z0ZC9pb21tdS5j
CisrKyBiL3hlbi9kcml2ZXJzL3Bhc3N0aHJvdWdoL3Z0ZC9pb21tdS5jCkBA
IC02MTIsMTMgKzYxMiwxNCBAQCBzdGF0aWMgaW50IF9fbXVzdF9jaGVjayBp
b21tdV9mbHVzaF9pb3RsCiAgICAgICAgIGlmICggaW9tbXVfZG9taWQgPT0g
LTEgKQogICAgICAgICAgICAgY29udGludWU7CiAKLSAgICAgICAgaWYgKCBw
YWdlX2NvdW50ICE9IDEgfHwgZ2ZuID09IGdmbl94KElOVkFMSURfR0ZOKSAp
CisgICAgICAgIGlmICggIXBhZ2VfY291bnQgfHwgKHBhZ2VfY291bnQgJiAo
cGFnZV9jb3VudCAtIDEpKSB8fAorICAgICAgICAgICAgIGdmbiA9PSBnZm5f
eChJTlZBTElEX0dGTikgfHwgIUlTX0FMSUdORUQoZ2ZuLCBwYWdlX2NvdW50
KSApCiAgICAgICAgICAgICByYyA9IGlvbW11X2ZsdXNoX2lvdGxiX2RzaShp
b21tdSwgaW9tbXVfZG9taWQsCiAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAwLCBmbHVzaF9kZXZfaW90bGIpOwogICAgICAgICBl
bHNlCiAgICAgICAgICAgICByYyA9IGlvbW11X2ZsdXNoX2lvdGxiX3BzaShp
b21tdSwgaW9tbXVfZG9taWQsCiAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAocGFkZHJfdClnZm4gPDwgUEFHRV9TSElGVF80SywK
LSAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIFBBR0Vf
T1JERVJfNEssCisgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICBnZXRfb3JkZXJfZnJvbV9wYWdlcyhwYWdlX2NvdW50KSwKICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICFkbWFfb2xkX3B0
ZV9wcmVzZW50LAogICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgZmx1c2hfZGV2X2lvdGxiKTsKIAo=

--=separator
Content-Type: application/octet-stream; name="xsa321/xsa321-4.9-2.patch"
Content-Disposition: attachment; filename="xsa321/xsa321-4.9-2.patch"
Content-Transfer-Encoding: base64

RnJvbTogPHNlY3VyaXR5QHhlbnByb2plY3Qub3JnPgpTdWJqZWN0OiB2dGQ6
IHBydW5lIChhbmQgcmVuYW1lKSBjYWNoZSBmbHVzaCBmdW5jdGlvbnMKClJl
bmFtZSBfX2lvbW11X2ZsdXNoX2NhY2hlIHRvIGlvbW11X3N5bmNfY2FjaGUg
YW5kIHJlbW92ZQppb21tdV9mbHVzaF9jYWNoZV9wYWdlLiBBbHNvIHJlbW92
ZSB0aGUgaW9tbXVfZmx1c2hfY2FjaGVfZW50cnkKd3JhcHBlciBhbmQganVz
dCB1c2UgaW9tbXVfc3luY19jYWNoZSBpbnN0ZWFkLiBOb3RlIHRoZSBfZW50
cnkgc3VmZml4CndhcyBtZWFuaW5nbGVzcyBhcyB0aGUgd3JhcHBlciB3YXMg
YWxyZWFkeSB0YWtpbmcgYSBzaXplIHBhcmFtZXRlciBpbgpieXRlcy4gV2hp
bGUgdGhlcmUgYWxzbyBjb25zdGlmeSB0aGUgYWRkciBwYXJhbWV0ZXIuCgpO
byBmdW5jdGlvbmFsIGNoYW5nZSBpbnRlbmRlZC4KClRoaXMgaXMgcGFydCBv
ZiBYU0EtMzIxLgoKUmV2aWV3ZWQtYnk6IEphbiBCZXVsaWNoIDxqYmV1bGlj
aEBzdXNlLmNvbT4KCi0tLSBhL3hlbi9kcml2ZXJzL3Bhc3N0aHJvdWdoL3Z0
ZC9leHRlcm4uaAorKysgYi94ZW4vZHJpdmVycy9wYXNzdGhyb3VnaC92dGQv
ZXh0ZXJuLmgKQEAgLTM3LDggKzM3LDcgQEAgdm9pZCBkaXNhYmxlX3FpbnZh
bChzdHJ1Y3QgaW9tbXUgKmlvbW11KQogaW50IGVuYWJsZV9pbnRyZW1hcChz
dHJ1Y3QgaW9tbXUgKmlvbW11LCBpbnQgZWltKTsKIHZvaWQgZGlzYWJsZV9p
bnRyZW1hcChzdHJ1Y3QgaW9tbXUgKmlvbW11KTsKIAotdm9pZCBpb21tdV9m
bHVzaF9jYWNoZV9lbnRyeSh2b2lkICphZGRyLCB1bnNpZ25lZCBpbnQgc2l6
ZSk7Ci12b2lkIGlvbW11X2ZsdXNoX2NhY2hlX3BhZ2Uodm9pZCAqYWRkciwg
dW5zaWduZWQgbG9uZyBucGFnZXMpOwordm9pZCBpb21tdV9zeW5jX2NhY2hl
KGNvbnN0IHZvaWQgKmFkZHIsIHVuc2lnbmVkIGludCBzaXplKTsKIGludCBp
b21tdV9hbGxvYyhzdHJ1Y3QgYWNwaV9kcmhkX3VuaXQgKmRyaGQpOwogdm9p
ZCBpb21tdV9mcmVlKHN0cnVjdCBhY3BpX2RyaGRfdW5pdCAqZHJoZCk7CiAK
LS0tIGEveGVuL2RyaXZlcnMvcGFzc3Rocm91Z2gvdnRkL2ludHJlbWFwLmMK
KysrIGIveGVuL2RyaXZlcnMvcGFzc3Rocm91Z2gvdnRkL2ludHJlbWFwLmMK
QEAgLTIzMSw3ICsyMzEsNyBAQCBzdGF0aWMgdm9pZCBmcmVlX3JlbWFwX2Vu
dHJ5KHN0cnVjdCBpb21tCiAgICAgICAgICAgICAgICAgICAgICBpcmVtYXBf
ZW50cmllcywgaXJlbWFwX2VudHJ5KTsKIAogICAgIHVwZGF0ZV9pcnRlKGlv
bW11LCBpcmVtYXBfZW50cnksICZuZXdfaXJlLCBmYWxzZSk7Ci0gICAgaW9t
bXVfZmx1c2hfY2FjaGVfZW50cnkoaXJlbWFwX2VudHJ5LCBzaXplb2YoKmly
ZW1hcF9lbnRyeSkpOworICAgIGlvbW11X3N5bmNfY2FjaGUoaXJlbWFwX2Vu
dHJ5LCBzaXplb2YoKmlyZW1hcF9lbnRyeSkpOwogICAgIGlvbW11X2ZsdXNo
X2llY19pbmRleChpb21tdSwgMCwgaW5kZXgpOwogCiAgICAgdW5tYXBfdnRk
X2RvbWFpbl9wYWdlKGlyZW1hcF9lbnRyaWVzKTsKQEAgLTQwMyw3ICs0MDMs
NyBAQCBzdGF0aWMgaW50IGlvYXBpY19ydGVfdG9fcmVtYXBfZW50cnkoc3Ry
CiAgICAgfQogCiAgICAgdXBkYXRlX2lydGUoaW9tbXUsIGlyZW1hcF9lbnRy
eSwgJm5ld19pcmUsICFpbml0KTsKLSAgICBpb21tdV9mbHVzaF9jYWNoZV9l
bnRyeShpcmVtYXBfZW50cnksIHNpemVvZigqaXJlbWFwX2VudHJ5KSk7Cisg
ICAgaW9tbXVfc3luY19jYWNoZShpcmVtYXBfZW50cnksIHNpemVvZigqaXJl
bWFwX2VudHJ5KSk7CiAgICAgaW9tbXVfZmx1c2hfaWVjX2luZGV4KGlvbW11
LCAwLCBpbmRleCk7CiAKICAgICB1bm1hcF92dGRfZG9tYWluX3BhZ2UoaXJl
bWFwX2VudHJpZXMpOwpAQCAtNjk0LDcgKzY5NCw3IEBAIHN0YXRpYyBpbnQg
bXNpX21zZ190b19yZW1hcF9lbnRyeSgKICAgICB1cGRhdGVfaXJ0ZShpb21t
dSwgaXJlbWFwX2VudHJ5LCAmbmV3X2lyZSwgbXNpX2Rlc2MtPmlydGVfaW5p
dGlhbGl6ZWQpOwogICAgIG1zaV9kZXNjLT5pcnRlX2luaXRpYWxpemVkID0g
dHJ1ZTsKIAotICAgIGlvbW11X2ZsdXNoX2NhY2hlX2VudHJ5KGlyZW1hcF9l
bnRyeSwgc2l6ZW9mKCppcmVtYXBfZW50cnkpKTsKKyAgICBpb21tdV9zeW5j
X2NhY2hlKGlyZW1hcF9lbnRyeSwgc2l6ZW9mKCppcmVtYXBfZW50cnkpKTsK
ICAgICBpb21tdV9mbHVzaF9pZWNfaW5kZXgoaW9tbXUsIDAsIGluZGV4KTsK
IAogICAgIHVubWFwX3Z0ZF9kb21haW5fcGFnZShpcmVtYXBfZW50cmllcyk7
Ci0tLSBhL3hlbi9kcml2ZXJzL3Bhc3N0aHJvdWdoL3Z0ZC9pb21tdS5jCisr
KyBiL3hlbi9kcml2ZXJzL3Bhc3N0aHJvdWdoL3Z0ZC9pb21tdS5jCkBAIC0x
NTgsNyArMTU4LDggQEAgc3RhdGljIHZvaWQgX19pbml0IGZyZWVfaW50ZWxf
aW9tbXUoc3RydQogfQogCiBzdGF0aWMgaW50IGlvbW11c19pbmNvaGVyZW50
Owotc3RhdGljIHZvaWQgX19pb21tdV9mbHVzaF9jYWNoZSh2b2lkICphZGRy
LCB1bnNpZ25lZCBpbnQgc2l6ZSkKKwordm9pZCBpb21tdV9zeW5jX2NhY2hl
KGNvbnN0IHZvaWQgKmFkZHIsIHVuc2lnbmVkIGludCBzaXplKQogewogICAg
IGludCBpOwogICAgIHN0YXRpYyB1bnNpZ25lZCBpbnQgY2xmbHVzaF9zaXpl
ID0gMDsKQEAgLTE3MywxNiArMTc0LDYgQEAgc3RhdGljIHZvaWQgX19pb21t
dV9mbHVzaF9jYWNoZSh2b2lkICphZAogICAgICAgICBjYWNoZWxpbmVfZmx1
c2goKGNoYXIgKilhZGRyICsgaSk7CiB9CiAKLXZvaWQgaW9tbXVfZmx1c2hf
Y2FjaGVfZW50cnkodm9pZCAqYWRkciwgdW5zaWduZWQgaW50IHNpemUpCi17
Ci0gICAgX19pb21tdV9mbHVzaF9jYWNoZShhZGRyLCBzaXplKTsKLX0KLQot
dm9pZCBpb21tdV9mbHVzaF9jYWNoZV9wYWdlKHZvaWQgKmFkZHIsIHVuc2ln
bmVkIGxvbmcgbnBhZ2VzKQotewotICAgIF9faW9tbXVfZmx1c2hfY2FjaGUo
YWRkciwgUEFHRV9TSVpFICogbnBhZ2VzKTsKLX0KLQogLyogQWxsb2NhdGUg
cGFnZSB0YWJsZSwgcmV0dXJuIGl0cyBtYWNoaW5lIGFkZHJlc3MgKi8KIHU2
NCBhbGxvY19wZ3RhYmxlX21hZGRyKHN0cnVjdCBhY3BpX2RyaGRfdW5pdCAq
ZHJoZCwgdW5zaWduZWQgbG9uZyBucGFnZXMpCiB7CkBAIC0yMDcsNyArMTk4
LDcgQEAgdTY0IGFsbG9jX3BndGFibGVfbWFkZHIoc3RydWN0IGFjcGlfZHJo
ZAogICAgICAgICB2YWRkciA9IF9fbWFwX2RvbWFpbl9wYWdlKGN1cl9wZyk7
CiAgICAgICAgIG1lbXNldCh2YWRkciwgMCwgUEFHRV9TSVpFKTsKIAotICAg
ICAgICBpb21tdV9mbHVzaF9jYWNoZV9wYWdlKHZhZGRyLCAxKTsKKyAgICAg
ICAgaW9tbXVfc3luY19jYWNoZSh2YWRkciwgUEFHRV9TSVpFKTsKICAgICAg
ICAgdW5tYXBfZG9tYWluX3BhZ2UodmFkZHIpOwogICAgICAgICBjdXJfcGcr
KzsKICAgICB9CkBAIC0yNDIsNyArMjMzLDcgQEAgc3RhdGljIHU2NCBidXNf
dG9fY29udGV4dF9tYWRkcihzdHJ1Y3QgaQogICAgICAgICB9CiAgICAgICAg
IHNldF9yb290X3ZhbHVlKCpyb290LCBtYWRkcik7CiAgICAgICAgIHNldF9y
b290X3ByZXNlbnQoKnJvb3QpOwotICAgICAgICBpb21tdV9mbHVzaF9jYWNo
ZV9lbnRyeShyb290LCBzaXplb2Yoc3RydWN0IHJvb3RfZW50cnkpKTsKKyAg
ICAgICAgaW9tbXVfc3luY19jYWNoZShyb290LCBzaXplb2Yoc3RydWN0IHJv
b3RfZW50cnkpKTsKICAgICB9CiAgICAgbWFkZHIgPSAodTY0KSBnZXRfY29u
dGV4dF9hZGRyKCpyb290KTsKICAgICB1bm1hcF92dGRfZG9tYWluX3BhZ2Uo
cm9vdF9lbnRyaWVzKTsKQEAgLTMwMCw3ICsyOTEsNyBAQCBzdGF0aWMgdTY0
IGFkZHJfdG9fZG1hX3BhZ2VfbWFkZHIoc3RydWN0CiAgICAgICAgICAgICAg
Ki8KICAgICAgICAgICAgIGRtYV9zZXRfcHRlX3JlYWRhYmxlKCpwdGUpOwog
ICAgICAgICAgICAgZG1hX3NldF9wdGVfd3JpdGFibGUoKnB0ZSk7Ci0gICAg
ICAgICAgICBpb21tdV9mbHVzaF9jYWNoZV9lbnRyeShwdGUsIHNpemVvZihz
dHJ1Y3QgZG1hX3B0ZSkpOworICAgICAgICAgICAgaW9tbXVfc3luY19jYWNo
ZShwdGUsIHNpemVvZihzdHJ1Y3QgZG1hX3B0ZSkpOwogICAgICAgICB9CiAK
ICAgICAgICAgaWYgKCBsZXZlbCA9PSAyICkKQEAgLTY3NCw3ICs2NjUsNyBA
QCBzdGF0aWMgaW50IF9fbXVzdF9jaGVjayBkbWFfcHRlX2NsZWFyX29uCiAK
ICAgICBkbWFfY2xlYXJfcHRlKCpwdGUpOwogICAgIHNwaW5fdW5sb2NrKCZo
ZC0+YXJjaC5tYXBwaW5nX2xvY2spOwotICAgIGlvbW11X2ZsdXNoX2NhY2hl
X2VudHJ5KHB0ZSwgc2l6ZW9mKHN0cnVjdCBkbWFfcHRlKSk7CisgICAgaW9t
bXVfc3luY19jYWNoZShwdGUsIHNpemVvZihzdHJ1Y3QgZG1hX3B0ZSkpOwog
CiAgICAgaWYgKCAhdGhpc19jcHUoaW9tbXVfZG9udF9mbHVzaF9pb3RsYikg
KQogICAgICAgICByYyA9IGlvbW11X2ZsdXNoX2lvdGxiX3BhZ2VzKGRvbWFp
biwgYWRkciA+PiBQQUdFX1NISUZUXzRLLCAxKTsKQEAgLTcxNiw3ICs3MDcs
NyBAQCBzdGF0aWMgdm9pZCBpb21tdV9mcmVlX3BhZ2VfdGFibGUoc3RydWN0
CiAgICAgICAgICAgICBpb21tdV9mcmVlX3BhZ2V0YWJsZShkbWFfcHRlX2Fk
ZHIoKnB0ZSksIG5leHRfbGV2ZWwpOwogCiAgICAgICAgIGRtYV9jbGVhcl9w
dGUoKnB0ZSk7Ci0gICAgICAgIGlvbW11X2ZsdXNoX2NhY2hlX2VudHJ5KHB0
ZSwgc2l6ZW9mKHN0cnVjdCBkbWFfcHRlKSk7CisgICAgICAgIGlvbW11X3N5
bmNfY2FjaGUocHRlLCBzaXplb2Yoc3RydWN0IGRtYV9wdGUpKTsKICAgICB9
CiAKICAgICB1bm1hcF92dGRfZG9tYWluX3BhZ2UocHRfdmFkZHIpOwpAQCAt
MTQ0Nyw3ICsxNDM4LDcgQEAgaW50IGRvbWFpbl9jb250ZXh0X21hcHBpbmdf
b25lKAogICAgIGNvbnRleHRfc2V0X2FkZHJlc3Nfd2lkdGgoKmNvbnRleHQs
IGFnYXcpOwogICAgIGNvbnRleHRfc2V0X2ZhdWx0X2VuYWJsZSgqY29udGV4
dCk7CiAgICAgY29udGV4dF9zZXRfcHJlc2VudCgqY29udGV4dCk7Ci0gICAg
aW9tbXVfZmx1c2hfY2FjaGVfZW50cnkoY29udGV4dCwgc2l6ZW9mKHN0cnVj
dCBjb250ZXh0X2VudHJ5KSk7CisgICAgaW9tbXVfc3luY19jYWNoZShjb250
ZXh0LCBzaXplb2Yoc3RydWN0IGNvbnRleHRfZW50cnkpKTsKICAgICBzcGlu
X3VubG9jaygmaW9tbXUtPmxvY2spOwogCiAgICAgLyogQ29udGV4dCBlbnRy
eSB3YXMgcHJldmlvdXNseSBub24tcHJlc2VudCAod2l0aCBkb21pZCAwKS4g
Ki8KQEAgLTE1OTQsNyArMTU4NSw3IEBAIGludCBkb21haW5fY29udGV4dF91
bm1hcF9vbmUoCiAKICAgICBjb250ZXh0X2NsZWFyX3ByZXNlbnQoKmNvbnRl
eHQpOwogICAgIGNvbnRleHRfY2xlYXJfZW50cnkoKmNvbnRleHQpOwotICAg
IGlvbW11X2ZsdXNoX2NhY2hlX2VudHJ5KGNvbnRleHQsIHNpemVvZihzdHJ1
Y3QgY29udGV4dF9lbnRyeSkpOworICAgIGlvbW11X3N5bmNfY2FjaGUoY29u
dGV4dCwgc2l6ZW9mKHN0cnVjdCBjb250ZXh0X2VudHJ5KSk7CiAKICAgICBp
b21tdV9kb21pZD0gZG9tYWluX2lvbW11X2RvbWlkKGRvbWFpbiwgaW9tbXUp
OwogICAgIGlmICggaW9tbXVfZG9taWQgPT0gLTEgKQpAQCAtMTgyNCw3ICsx
ODE1LDcgQEAgc3RhdGljIGludCBfX211c3RfY2hlY2sgaW50ZWxfaW9tbXVf
bWFwXwogCiAgICAgKnB0ZSA9IG5ldzsKIAotICAgIGlvbW11X2ZsdXNoX2Nh
Y2hlX2VudHJ5KHB0ZSwgc2l6ZW9mKHN0cnVjdCBkbWFfcHRlKSk7CisgICAg
aW9tbXVfc3luY19jYWNoZShwdGUsIHNpemVvZihzdHJ1Y3QgZG1hX3B0ZSkp
OwogICAgIHNwaW5fdW5sb2NrKCZoZC0+YXJjaC5tYXBwaW5nX2xvY2spOwog
ICAgIHVubWFwX3Z0ZF9kb21haW5fcGFnZShwYWdlKTsKIApAQCAtMTg1OCw3
ICsxODQ5LDcgQEAgaW50IGlvbW11X3B0ZV9mbHVzaChzdHJ1Y3QgZG9tYWlu
ICpkLCB1NgogICAgIGludCBpb21tdV9kb21pZDsKICAgICBpbnQgcmMgPSAw
OwogCi0gICAgaW9tbXVfZmx1c2hfY2FjaGVfZW50cnkocHRlLCBzaXplb2Yo
c3RydWN0IGRtYV9wdGUpKTsKKyAgICBpb21tdV9zeW5jX2NhY2hlKHB0ZSwg
c2l6ZW9mKHN0cnVjdCBkbWFfcHRlKSk7CiAKICAgICBmb3JfZWFjaF9kcmhk
X3VuaXQgKCBkcmhkICkKICAgICB7Cg==

--=separator
Content-Type: application/octet-stream; name="xsa321/xsa321-4.9-3.patch"
Content-Disposition: attachment; filename="xsa321/xsa321-4.9-3.patch"
Content-Transfer-Encoding: base64

RnJvbTogPHNlY3VyaXR5QHhlbnByb2plY3Qub3JnPgpTdWJqZWN0OiB4ODYv
aW9tbXU6IGludHJvZHVjZSBhIGNhY2hlIHN5bmMgaG9vawoKVGhlIGhvb2sg
aXMgb25seSBpbXBsZW1lbnRlZCBmb3IgVlQtZCBhbmQgaXQgdXNlcyB0aGUg
YWxyZWFkeSBleGlzdGluZwppb21tdV9zeW5jX2NhY2hlIGZ1bmN0aW9uIHBy
ZXNlbnQgaW4gVlQtZCBjb2RlLiBUaGUgbmV3IGhvb2sgaXMKYWRkZWQgc28g
dGhhdCB0aGUgY2FjaGUgY2FuIGJlIGZsdXNoZWQgYnkgY29kZSBvdXRzaWRl
IG9mIFZULWQgd2hlbgp1c2luZyBzaGFyZWQgcGFnZSB0YWJsZXMuCgpOb3Rl
IHRoYXQgYWxsb2NfcGd0YWJsZV9tYWRkciBtdXN0IHVzZSB0aGUgbm93IGxv
Y2FsbHkgZGVmaW5lZApzeW5jX2NhY2hlIGZ1bmN0aW9uLCBiZWNhdXNlIElP
TU1VIG9wcyBhcmUgbm90IHlldCBzZXR1cCB0aGUgZmlyc3QKdGltZSB0aGUg
ZnVuY3Rpb24gZ2V0cyBjYWxsZWQgZHVyaW5nIElPTU1VIGluaXRpYWxpemF0
aW9uLgoKTm8gZnVuY3Rpb25hbCBjaGFuZ2UgaW50ZW5kZWQuCgpUaGlzIGlz
IHBhcnQgb2YgWFNBLTMyMS4KClJldmlld2VkLWJ5OiBKYW4gQmV1bGljaCA8
amJldWxpY2hAc3VzZS5jb20+CgotLS0gYS94ZW4vZHJpdmVycy9wYXNzdGhy
b3VnaC92dGQvZXh0ZXJuLmgKKysrIGIveGVuL2RyaXZlcnMvcGFzc3Rocm91
Z2gvdnRkL2V4dGVybi5oCkBAIC0zNyw3ICszNyw2IEBAIHZvaWQgZGlzYWJs
ZV9xaW52YWwoc3RydWN0IGlvbW11ICppb21tdSkKIGludCBlbmFibGVfaW50
cmVtYXAoc3RydWN0IGlvbW11ICppb21tdSwgaW50IGVpbSk7CiB2b2lkIGRp
c2FibGVfaW50cmVtYXAoc3RydWN0IGlvbW11ICppb21tdSk7CiAKLXZvaWQg
aW9tbXVfc3luY19jYWNoZShjb25zdCB2b2lkICphZGRyLCB1bnNpZ25lZCBp
bnQgc2l6ZSk7CiBpbnQgaW9tbXVfYWxsb2Moc3RydWN0IGFjcGlfZHJoZF91
bml0ICpkcmhkKTsKIHZvaWQgaW9tbXVfZnJlZShzdHJ1Y3QgYWNwaV9kcmhk
X3VuaXQgKmRyaGQpOwogCi0tLSBhL3hlbi9kcml2ZXJzL3Bhc3N0aHJvdWdo
L3Z0ZC9pb21tdS5jCisrKyBiL3hlbi9kcml2ZXJzL3Bhc3N0aHJvdWdoL3Z0
ZC9pb21tdS5jCkBAIC0xNTksNyArMTU5LDcgQEAgc3RhdGljIHZvaWQgX19p
bml0IGZyZWVfaW50ZWxfaW9tbXUoc3RydQogCiBzdGF0aWMgaW50IGlvbW11
c19pbmNvaGVyZW50OwogCi12b2lkIGlvbW11X3N5bmNfY2FjaGUoY29uc3Qg
dm9pZCAqYWRkciwgdW5zaWduZWQgaW50IHNpemUpCitzdGF0aWMgdm9pZCBz
eW5jX2NhY2hlKGNvbnN0IHZvaWQgKmFkZHIsIHVuc2lnbmVkIGludCBzaXpl
KQogewogICAgIGludCBpOwogICAgIHN0YXRpYyB1bnNpZ25lZCBpbnQgY2xm
bHVzaF9zaXplID0gMDsKQEAgLTE5OCw3ICsxOTgsNyBAQCB1NjQgYWxsb2Nf
cGd0YWJsZV9tYWRkcihzdHJ1Y3QgYWNwaV9kcmhkCiAgICAgICAgIHZhZGRy
ID0gX19tYXBfZG9tYWluX3BhZ2UoY3VyX3BnKTsKICAgICAgICAgbWVtc2V0
KHZhZGRyLCAwLCBQQUdFX1NJWkUpOwogCi0gICAgICAgIGlvbW11X3N5bmNf
Y2FjaGUodmFkZHIsIFBBR0VfU0laRSk7CisgICAgICAgIHN5bmNfY2FjaGUo
dmFkZHIsIFBBR0VfU0laRSk7CiAgICAgICAgIHVubWFwX2RvbWFpbl9wYWdl
KHZhZGRyKTsKICAgICAgICAgY3VyX3BnKys7CiAgICAgfQpAQCAtMjY5Niw2
ICsyNjk2LDcgQEAgY29uc3Qgc3RydWN0IGlvbW11X29wcyBpbnRlbF9pb21t
dV9vcHMgPQogICAgIC5pb3RsYl9mbHVzaF9hbGwgPSBpb21tdV9mbHVzaF9p
b3RsYl9hbGwsCiAgICAgLmdldF9yZXNlcnZlZF9kZXZpY2VfbWVtb3J5ID0g
aW50ZWxfaW9tbXVfZ2V0X3Jlc2VydmVkX2RldmljZV9tZW1vcnksCiAgICAg
LmR1bXBfcDJtX3RhYmxlID0gdnRkX2R1bXBfcDJtX3RhYmxlLAorICAgIC5z
eW5jX2NhY2hlID0gc3luY19jYWNoZSwKIH07CiAKIC8qCi0tLSBhL3hlbi9p
bmNsdWRlL2FzbS14ODYvaW9tbXUuaAorKysgYi94ZW4vaW5jbHVkZS9hc20t
eDg2L2lvbW11LmgKQEAgLTk4LDYgKzk4LDEzIEBAIGV4dGVybiBib29sIHVu
dHJ1c3RlZF9tc2k7CiBpbnQgcGlfdXBkYXRlX2lydGUoY29uc3Qgc3RydWN0
IHBpX2Rlc2MgKnBpX2Rlc2MsIGNvbnN0IHN0cnVjdCBwaXJxICpwaXJxLAog
ICAgICAgICAgICAgICAgICAgIGNvbnN0IHVpbnQ4X3QgZ3ZlYyk7CiAKKyNk
ZWZpbmUgaW9tbXVfc3luY19jYWNoZShhZGRyLCBzaXplKSAoeyAgICAgICAg
ICAgICAgICAgXAorICAgIGNvbnN0IHN0cnVjdCBpb21tdV9vcHMgKm9wcyA9
IGlvbW11X2dldF9vcHMoKTsgICAgICBcCisgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIFwKKyAgICBp
ZiAoIG9wcy0+c3luY19jYWNoZSApICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgXAorICAgICAgICBvcHMtPnN5bmNfY2FjaGUoYWRkciwgc2l6ZSk7
ICAgICAgICAgICAgICAgICAgICBcCit9KQorCiAjZW5kaWYgLyogIV9fQVJD
SF9YODZfSU9NTVVfSF9fICovCiAvKgogICogTG9jYWwgdmFyaWFibGVzOgot
LS0gYS94ZW4vaW5jbHVkZS94ZW4vaW9tbXUuaAorKysgYi94ZW4vaW5jbHVk
ZS94ZW4vaW9tbXUuaApAQCAtMTc2LDYgKzE3Niw3IEBAIHN0cnVjdCBpb21t
dV9vcHMgewogICAgIHZvaWQgKCp1cGRhdGVfaXJlX2Zyb21fYXBpYykodW5z
aWduZWQgaW50IGFwaWMsIHVuc2lnbmVkIGludCByZWcsIHVuc2lnbmVkIGlu
dCB2YWx1ZSk7CiAgICAgdW5zaWduZWQgaW50ICgqcmVhZF9hcGljX2Zyb21f
aXJlKSh1bnNpZ25lZCBpbnQgYXBpYywgdW5zaWduZWQgaW50IHJlZyk7CiAg
ICAgaW50ICgqc2V0dXBfaHBldF9tc2kpKHN0cnVjdCBtc2lfZGVzYyAqKTsK
KyAgICB2b2lkICgqc3luY19jYWNoZSkoY29uc3Qgdm9pZCAqYWRkciwgdW5z
aWduZWQgaW50IHNpemUpOwogI2VuZGlmIC8qIENPTkZJR19YODYgKi8KICAg
ICBpbnQgX19tdXN0X2NoZWNrICgqc3VzcGVuZCkodm9pZCk7CiAgICAgdm9p
ZCAoKnJlc3VtZSkodm9pZCk7Cg==

--=separator
Content-Type: application/octet-stream; name="xsa321/xsa321-4.9-4.patch"
Content-Disposition: attachment; filename="xsa321/xsa321-4.9-4.patch"
Content-Transfer-Encoding: base64

RnJvbTogPHNlY3VyaXR5QHhlbnByb2plY3Qub3JnPgpTdWJqZWN0OiB2dGQ6
IGRvbid0IGFzc3VtZSBhZGRyZXNzZXMgYXJlIGFsaWduZWQgaW4gc3luY19j
YWNoZQoKQ3VycmVudCBjb2RlIGluIHN5bmNfY2FjaGUgYXNzdW1lIHRoYXQg
dGhlIGFkZHJlc3MgcGFzc2VkIGluIGlzCmFsaWduZWQgdG8gYSBjYWNoZSBs
aW5lIHNpemUuIEZpeCB0aGUgY29kZSB0byBzdXBwb3J0IHBhc3NpbmcgaW4K
YXJiaXRyYXJ5IGFkZHJlc3NlcyBub3QgbmVjZXNzYXJpbHkgYWxpZ25lZCB0
byBhIGNhY2hlIGxpbmUgc2l6ZS4KClRoaXMgaXMgcGFydCBvZiBYU0EtMzIx
LgoKUmV2aWV3ZWQtYnk6IEphbiBCZXVsaWNoIDxqYmV1bGljaEBzdXNlLmNv
bT4KCi0tLSBhL3hlbi9kcml2ZXJzL3Bhc3N0aHJvdWdoL3Z0ZC9pb21tdS5j
CisrKyBiL3hlbi9kcml2ZXJzL3Bhc3N0aHJvdWdoL3Z0ZC9pb21tdS5jCkBA
IC0xNjEsOCArMTYxLDggQEAgc3RhdGljIGludCBpb21tdXNfaW5jb2hlcmVu
dDsKIAogc3RhdGljIHZvaWQgc3luY19jYWNoZShjb25zdCB2b2lkICphZGRy
LCB1bnNpZ25lZCBpbnQgc2l6ZSkKIHsKLSAgICBpbnQgaTsKLSAgICBzdGF0
aWMgdW5zaWduZWQgaW50IGNsZmx1c2hfc2l6ZSA9IDA7CisgICAgc3RhdGlj
IHVuc2lnbmVkIGxvbmcgY2xmbHVzaF9zaXplID0gMDsKKyAgICBjb25zdCB2
b2lkICplbmQgPSBhZGRyICsgc2l6ZTsKIAogICAgIGlmICggIWlvbW11c19p
bmNvaGVyZW50ICkKICAgICAgICAgcmV0dXJuOwpAQCAtMTcwLDggKzE3MCw5
IEBAIHN0YXRpYyB2b2lkIHN5bmNfY2FjaGUoY29uc3Qgdm9pZCAqYWRkciwK
ICAgICBpZiAoIGNsZmx1c2hfc2l6ZSA9PSAwICkKICAgICAgICAgY2xmbHVz
aF9zaXplID0gZ2V0X2NhY2hlX2xpbmVfc2l6ZSgpOwogCi0gICAgZm9yICgg
aSA9IDA7IGkgPCBzaXplOyBpICs9IGNsZmx1c2hfc2l6ZSApCi0gICAgICAg
IGNhY2hlbGluZV9mbHVzaCgoY2hhciAqKWFkZHIgKyBpKTsKKyAgICBhZGRy
IC09ICh1bnNpZ25lZCBsb25nKWFkZHIgJiAoY2xmbHVzaF9zaXplIC0gMSk7
CisgICAgZm9yICggOyBhZGRyIDwgZW5kOyBhZGRyICs9IGNsZmx1c2hfc2l6
ZSApCisgICAgICAgIGNhY2hlbGluZV9mbHVzaCgoY2hhciAqKWFkZHIpOwog
fQogCiAvKiBBbGxvY2F0ZSBwYWdlIHRhYmxlLCByZXR1cm4gaXRzIG1hY2hp
bmUgYWRkcmVzcyAqLwo=

--=separator
Content-Type: application/octet-stream; name="xsa321/xsa321-4.9-5.patch"
Content-Disposition: attachment; filename="xsa321/xsa321-4.9-5.patch"
Content-Transfer-Encoding: base64

RnJvbTogPHNlY3VyaXR5QHhlbnByb2plY3Qub3JnPgpTdWJqZWN0OiB4ODYv
YWx0ZXJuYXRpdmU6IGludHJvZHVjZSBhbHRlcm5hdGl2ZV8yCgpJdCdzIGJh
c2VkIG9uIGFsdGVybmF0aXZlX2lvXzIgd2l0aG91dCBpbnB1dHMgb3Igb3V0
cHV0cyBidXQgd2l0aCBhbgphZGRlZCBtZW1vcnkgY2xvYmJlci4KClRoaXMg
aXMgcGFydCBvZiBYU0EtMzIxLgoKQWNrZWQtYnk6IEphbiBCZXVsaWNoIDxq
YmV1bGljaEBzdXNlLmNvbT4KCi0tLSBhL3hlbi9pbmNsdWRlL2FzbS14ODYv
YWx0ZXJuYXRpdmUuaAorKysgYi94ZW4vaW5jbHVkZS9hc20teDg2L2FsdGVy
bmF0aXZlLmgKQEAgLTg1LDYgKzg1LDExIEBAIGV4dGVybiB2b2lkIGFsdGVy
bmF0aXZlX2luc3RydWN0aW9ucyh2b2kKICNkZWZpbmUgYWx0ZXJuYXRpdmUo
b2xkaW5zdHIsIG5ld2luc3RyLCBmZWF0dXJlKSAgICAgICAgICAgICAgICAg
ICAgICAgIFwKICAgICAgICAgYXNtIHZvbGF0aWxlIChBTFRFUk5BVElWRShv
bGRpbnN0ciwgbmV3aW5zdHIsIGZlYXR1cmUpIDogOiA6ICJtZW1vcnkiKQog
CisjZGVmaW5lIGFsdGVybmF0aXZlXzIob2xkaW5zdHIsIG5ld2luc3RyMSwg
ZmVhdHVyZTEsIG5ld2luc3RyMiwgZmVhdHVyZTIpIFwKKwlhc20gdm9sYXRp
bGUgKEFMVEVSTkFUSVZFXzIob2xkaW5zdHIsIG5ld2luc3RyMSwgZmVhdHVy
ZTEsCVwKKwkJCQkgICAgbmV3aW5zdHIyLCBmZWF0dXJlMikJCVwKKwkJICAg
ICAgOiA6IDogIm1lbW9yeSIpCisKIC8qCiAgKiBBbHRlcm5hdGl2ZSBpbmxp
bmUgYXNzZW1ibHkgd2l0aCBpbnB1dC4KICAqCg==

--=separator
Content-Type: application/octet-stream; name="xsa321/xsa321-4.9-6.patch"
Content-Disposition: attachment; filename="xsa321/xsa321-4.9-6.patch"
Content-Transfer-Encoding: base64

RnJvbTogPHNlY3VyaXR5QHhlbnByb2plY3Qub3JnPgpTdWJqZWN0OiB2dGQ6
IG9wdGltaXplIENQVSBjYWNoZSBzeW5jCgpTb21lIFZULWQgSU9NTVVzIGFy
ZSBub24tY29oZXJlbnQsIHdoaWNoIHJlcXVpcmVzIGEgY2FjaGUgd3JpdGUg
YmFjawppbiBvcmRlciBmb3IgdGhlIGNoYW5nZXMgbWFkZSBieSB0aGUgQ1BV
IHRvIGJlIHZpc2libGUgdG8gdGhlIElPTU1VLgpUaGlzIGNhY2hlIHdyaXRl
IGJhY2sgd2FzIHVuY29uZGl0aW9uYWxseSBkb25lIHVzaW5nIGNsZmx1c2gs
IGJ1dCB0aGVyZSBhcmUKb3RoZXIgbW9yZSBlZmZpY2llbnQgaW5zdHJ1Y3Rp
b25zIHRvIGRvIHNvLCBoZW5jZSBpbXBsZW1lbnQgc3VwcG9ydApmb3IgdGhl
bSB1c2luZyB0aGUgYWx0ZXJuYXRpdmUgZnJhbWV3b3JrLgoKVGhpcyBpcyBw
YXJ0IG9mIFhTQS0zMjEuCgpSZXZpZXdlZC1ieTogSmFuIEJldWxpY2ggPGpi
ZXVsaWNoQHN1c2UuY29tPgoKLS0tIGEveGVuL2RyaXZlcnMvcGFzc3Rocm91
Z2gvdnRkL2V4dGVybi5oCisrKyBiL3hlbi9kcml2ZXJzL3Bhc3N0aHJvdWdo
L3Z0ZC9leHRlcm4uaApAQCAtNjMsNyArNjMsNiBAQCBpbnQgX19tdXN0X2No
ZWNrIHFpbnZhbF9kZXZpY2VfaW90bGJfc3luCiAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICB1MTYgZGlkLCB1MTYgc2l6ZSwg
dTY0IGFkZHIpOwogCiB1bnNpZ25lZCBpbnQgZ2V0X2NhY2hlX2xpbmVfc2l6
ZSh2b2lkKTsKLXZvaWQgY2FjaGVsaW5lX2ZsdXNoKGNoYXIgKik7CiB2b2lk
IGZsdXNoX2FsbF9jYWNoZSh2b2lkKTsKIAogdTY0IGFsbG9jX3BndGFibGVf
bWFkZHIoc3RydWN0IGFjcGlfZHJoZF91bml0ICpkcmhkLCB1bnNpZ25lZCBs
b25nIG5wYWdlcyk7Ci0tLSBhL3hlbi9kcml2ZXJzL3Bhc3N0aHJvdWdoL3Z0
ZC9pb21tdS5jCisrKyBiL3hlbi9kcml2ZXJzL3Bhc3N0aHJvdWdoL3Z0ZC9p
b21tdS5jCkBAIC0zMSw2ICszMSw3IEBACiAjaW5jbHVkZSA8eGVuL3BjaV9y
ZWdzLmg+CiAjaW5jbHVkZSA8eGVuL2tleWhhbmRsZXIuaD4KICNpbmNsdWRl
IDxhc20vbXNpLmg+CisjaW5jbHVkZSA8YXNtL25vcHMuaD4KICNpbmNsdWRl
IDxhc20vaXJxLmg+CiAjaW5jbHVkZSA8YXNtL2h2bS92bXgvdm14Lmg+CiAj
aW5jbHVkZSA8YXNtL3AybS5oPgpAQCAtMTcyLDcgKzE3Myw0MiBAQCBzdGF0
aWMgdm9pZCBzeW5jX2NhY2hlKGNvbnN0IHZvaWQgKmFkZHIsCiAKICAgICBh
ZGRyIC09ICh1bnNpZ25lZCBsb25nKWFkZHIgJiAoY2xmbHVzaF9zaXplIC0g
MSk7CiAgICAgZm9yICggOyBhZGRyIDwgZW5kOyBhZGRyICs9IGNsZmx1c2hf
c2l6ZSApCi0gICAgICAgIGNhY2hlbGluZV9mbHVzaCgoY2hhciAqKWFkZHIp
OworLyoKKyAqIFRoZSBhcmd1bWVudHMgdG8gYSBtYWNybyBtdXN0IG5vdCBp
bmNsdWRlIHByZXByb2Nlc3NvciBkaXJlY3RpdmVzLiBEb2luZyBzbworICog
cmVzdWx0cyBpbiB1bmRlZmluZWQgYmVoYXZpb3IsIHNvIHdlIGhhdmUgdG8g
Y3JlYXRlIHNvbWUgZGVmaW5lcyBoZXJlIGluCisgKiBvcmRlciB0byBhdm9p
ZCBpdC4KKyAqLworI2lmIGRlZmluZWQoSEFWRV9BU19DTFdCKQorIyBkZWZp
bmUgQ0xXQl9FTkNPRElORyAiY2x3YiAlW3BdIgorI2VsaWYgZGVmaW5lZChI
QVZFX0FTX1hTQVZFT1BUKQorIyBkZWZpbmUgQ0xXQl9FTkNPRElORyAiZGF0
YTE2IHhzYXZlb3B0ICVbcF0iIC8qIGNsd2IgKi8KKyNlbHNlCisjIGRlZmlu
ZSBDTFdCX0VOQ09ESU5HICIuYnl0ZSAweDY2LCAweDBmLCAweGFlLCAweDMw
IiAvKiBjbHdiICglJXJheCkgKi8KKyNlbmRpZgorCisjZGVmaW5lIEJBU0Vf
SU5QVVQoYWRkcikgW3BdICJtIiAoKihjb25zdCBjaGFyICopKGFkZHIpKQor
I2lmIGRlZmluZWQoSEFWRV9BU19DTFdCKSB8fCBkZWZpbmVkKEhBVkVfQVNf
WFNBVkVPUFQpCisjIGRlZmluZSBJTlBVVCBCQVNFX0lOUFVUCisjZWxzZQor
IyBkZWZpbmUgSU5QVVQoYWRkcikgImEiIChhZGRyKSwgQkFTRV9JTlBVVChh
ZGRyKQorI2VuZGlmCisgICAgICAgIC8qCisgICAgICAgICAqIE5vdGUgcmVn
YXJkaW5nIHRoZSB1c2Ugb2YgTk9QX0RTX1BSRUZJWDogaXQncyBmYXN0ZXIg
dG8gZG8gYSBjbGZsdXNoCisgICAgICAgICAqICsgcHJlZml4IHRoYW4gYSBj
bGZsdXNoICsgbm9wLCBhbmQgaGVuY2UgdGhlIHByZWZpeCBpcyBhZGRlZCBp
bnN0ZWFkCisgICAgICAgICAqIG9mIGxldHRpbmcgdGhlIGFsdGVybmF0aXZl
IGZyYW1ld29yayBmaWxsIHRoZSBnYXAgYnkgYXBwZW5kaW5nIG5vcHMuCisg
ICAgICAgICAqLworICAgICAgICBhbHRlcm5hdGl2ZV9pb18yKCIuYnl0ZSAi
IF9fc3RyaW5naWZ5KE5PUF9EU19QUkVGSVgpICI7IGNsZmx1c2ggJVtwXSIs
CisgICAgICAgICAgICAgICAgICAgICAgICAgImRhdGExNiBjbGZsdXNoICVb
cF0iLCAvKiBjbGZsdXNob3B0ICovCisgICAgICAgICAgICAgICAgICAgICAg
ICAgWDg2X0ZFQVRVUkVfQ0xGTFVTSE9QVCwKKyAgICAgICAgICAgICAgICAg
ICAgICAgICBDTFdCX0VOQ09ESU5HLAorICAgICAgICAgICAgICAgICAgICAg
ICAgIFg4Nl9GRUFUVVJFX0NMV0IsIC8qIG5vIG91dHB1dHMgKi8sCisgICAg
ICAgICAgICAgICAgICAgICAgICAgSU5QVVQoYWRkcikpOworI3VuZGVmIElO
UFVUCisjdW5kZWYgQkFTRV9JTlBVVAorI3VuZGVmIENMV0JfRU5DT0RJTkcK
KworICAgIGFsdGVybmF0aXZlXzIoQVNNX05PUDMsICJzZmVuY2UiLCBYODZf
RkVBVFVSRV9DTEZMVVNIT1BULAorICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICJzZmVuY2UiLCBYODZfRkVBVFVSRV9DTFdCKTsKIH0KIAogLyogQWxs
b2NhdGUgcGFnZSB0YWJsZSwgcmV0dXJuIGl0cyBtYWNoaW5lIGFkZHJlc3Mg
Ki8KLS0tIGEveGVuL2RyaXZlcnMvcGFzc3Rocm91Z2gvdnRkL3g4Ni92dGQu
YworKysgYi94ZW4vZHJpdmVycy9wYXNzdGhyb3VnaC92dGQveDg2L3Z0ZC5j
CkBAIC01MywxMSArNTMsNiBAQCB1bnNpZ25lZCBpbnQgZ2V0X2NhY2hlX2xp
bmVfc2l6ZSh2b2lkKQogICAgIHJldHVybiAoKGNwdWlkX2VieCgxKSA+PiA4
KSAmIDB4ZmYpICogODsKIH0KIAotdm9pZCBjYWNoZWxpbmVfZmx1c2goY2hh
ciAqIGFkZHIpCi17Ci0gICAgY2xmbHVzaChhZGRyKTsKLX0KLQogdm9pZCBm
bHVzaF9hbGxfY2FjaGUoKQogewogICAgIHdiaW52ZCgpOwo=

--=separator
Content-Type: application/octet-stream; name="xsa321/xsa321-4.9-7.patch"
Content-Disposition: attachment; filename="xsa321/xsa321-4.9-7.patch"
Content-Transfer-Encoding: base64

RnJvbTogPHNlY3VyaXR5QHhlbnByb2plY3Qub3JnPgpTdWJqZWN0OiB4ODYv
ZXB0OiBmbHVzaCBjYWNoZSB3aGVuIG1vZGlmeWluZyBQVEVzIGFuZCBzaGFy
aW5nIHBhZ2UgdGFibGVzCgpNb2RpZmljYXRpb25zIG1hZGUgdG8gdGhlIHBh
Z2UgdGFibGVzIGJ5IEVQVCBjb2RlIG5lZWQgdG8gYmUgd3JpdHRlbgp0byBt
ZW1vcnkgd2hlbiB0aGUgcGFnZSB0YWJsZXMgYXJlIHNoYXJlZCB3aXRoIHRo
ZSBJT01NVSwgYXMgSW50ZWwKSU9NTVVzIGNhbiBiZSBub24tY29oZXJlbnQg
YW5kIHRodXMgcmVxdWlyZSBjaGFuZ2VzIHRvIGJlIHdyaXR0ZW4gdG8KbWVt
b3J5IGluIG9yZGVyIHRvIGJlIHZpc2libGUgdG8gdGhlIElPTU1VLgoKSW4g
b3JkZXIgdG8gYWNoaWV2ZSB0aGlzIG1ha2Ugc3VyZSBkYXRhIGlzIHdyaXR0
ZW4gYmFjayB0byBtZW1vcnkKYWZ0ZXIgd3JpdGluZyBhbiBFUFQgZW50cnkg
d2hlbiB0aGUgcmVjYWxjIGJpdCBpcyBub3Qgc2V0IGluCmF0b21pY193cml0
ZV9lcHRfZW50cnkuIElmIHN1Y2ggYml0IGlzIHNldCwgdGhlIGVudHJ5IHdp
bGwgYmUKYWRqdXN0ZWQgYW5kIGF0b21pY193cml0ZV9lcHRfZW50cnkgd2ls
bCBiZSBjYWxsZWQgYSBzZWNvbmQgdGltZQp3aXRob3V0IHRoZSByZWNhbGMg
Yml0IHNldC4gTm90ZSB0aGF0IHdoZW4gc3BsaXR0aW5nIGEgc3VwZXIgcGFn
ZSB0aGUKbmV3IHRhYmxlcyByZXN1bHRpbmcgb2YgdGhlIHNwbGl0IHNob3Vs
ZCBhbHNvIGJlIHdyaXR0ZW4gYmFjay4KCkZhaWx1cmUgdG8gZG8gc28gY2Fu
IGFsbG93IGRldmljZXMgYmVoaW5kIHRoZSBJT01NVSBhY2Nlc3MgdG8gdGhl
CnN0YWxlIHN1cGVyIHBhZ2UsIG9yIGNhdXNlIGNvaGVyZW5jeSBpc3N1ZXMg
YXMgY2hhbmdlcyBtYWRlIGJ5IHRoZQpwcm9jZXNzb3IgdG8gdGhlIHBhZ2Ug
dGFibGVzIGFyZSBub3QgdmlzaWJsZSB0byB0aGUgSU9NTVUuCgpUaGlzIGFs
bG93cyB0byByZW1vdmUgdGhlIFZULWQgc3BlY2lmaWMgaW9tbXVfcHRlX2Zs
dXNoIGhlbHBlciwgc2luY2UKdGhlIGNhY2hlIHdyaXRlIGJhY2sgaXMgbm93
IHBlcmZvcm1lZCBieSBhdG9taWNfd3JpdGVfZXB0X2VudHJ5LCBhbmQKaGVu
Y2UgaW9tbXVfaW90bGJfZmx1c2ggY2FuIGJlIHVzZWQgdG8gZmx1c2ggdGhl
IElPTU1VIFRMQi4gVGhlIG5ld2x5CnVzZWQgbWV0aG9kIChpb21tdV9pb3Rs
Yl9mbHVzaCkgY2FuIHJlc3VsdCBpbiBsZXNzIGZsdXNoZXMsIHNpbmNlIGl0
Cm1pZ2h0IHNvbWV0aW1lcyBiZSBjYWxsZWQgcmlnaHRseSB3aXRoIDAgZmxh
Z3MsIGluIHdoaWNoIGNhc2UgaXQKYmVjb21lcyBhIG5vLW9wLgoKVGhpcyBp
cyBwYXJ0IG9mIFhTQS0zMjEuCgpSZXZpZXdlZC1ieTogSmFuIEJldWxpY2gg
PGpiZXVsaWNoQHN1c2UuY29tPgoKLS0tIGEveGVuL2FyY2gveDg2L21tL3Ay
bS1lcHQuYworKysgYi94ZW4vYXJjaC94ODYvbW0vcDJtLWVwdC5jCkBAIC05
MCw2ICs5MCwxOSBAQCBzdGF0aWMgaW50IGF0b21pY193cml0ZV9lcHRfZW50
cnkoZXB0X2VuCiAKICAgICB3cml0ZV9hdG9taWMoJmVudHJ5cHRyLT5lcHRl
LCBuZXcuZXB0ZSk7CiAKKyAgICAvKgorICAgICAqIFRoZSByZWNhbGMgZmll
bGQgb24gdGhlIEVQVCBpcyB1c2VkIHRvIHNpZ25hbCBlaXRoZXIgdGhhdCBh
CisgICAgICogcmVjYWxjdWxhdGlvbiBvZiB0aGUgRU1UIGZpZWxkIGlzIHJl
cXVpcmVkICh3aGljaCBkb2Vzbid0IGVmZmVjdCB0aGUKKyAgICAgKiBJT01N
VSksIG9yIGEgdHlwZSBjaGFuZ2UuIFR5cGUgY2hhbmdlcyBjYW4gb25seSBi
ZSBiZXR3ZWVuIHJhbV9ydywKKyAgICAgKiBsb2dkaXJ0eSBhbmQgaW9yZXFf
c2VydmVyOiBjaGFuZ2VzIHRvL2Zyb20gbG9nZGlydHkgd29uJ3Qgd29yayB3
ZWxsIHdpdGgKKyAgICAgKiBhbiBJT01NVSBhbnl3YXksIGFzIElPTU1VICNQ
RnMgYXJlIG5vdCBzeW5jaHJvbm91cyBhbmQgd2lsbCBsZWFkIHRvCisgICAg
ICogYWJvcnRzLCBhbmQgY2hhbmdlcyB0by9mcm9tIGlvcmVxX3NlcnZlciBh
cmUgYWxyZWFkeSBmdWxseSBmbHVzaGVkCisgICAgICogYmVmb3JlIHJldHVy
bmluZyB0byBndWVzdCBjb250ZXh0IChzZWUKKyAgICAgKiBYRU5fRE1PUF9t
YXBfbWVtX3R5cGVfdG9faW9yZXFfc2VydmVyKS4KKyAgICAgKi8KKyAgICBp
ZiAoICFuZXcucmVjYWxjICYmIGlvbW11X2hhcF9wdF9zaGFyZSApCisgICAg
ICAgIGlvbW11X3N5bmNfY2FjaGUoZW50cnlwdHIsIHNpemVvZigqZW50cnlw
dHIpKTsKKwogICAgIGlmICggdW5saWtlbHkob2xkbWZuICE9IG1mbl94KElO
VkFMSURfTUZOKSkgKQogICAgICAgICBwdXRfcGFnZShtZm5fdG9fcGFnZShv
bGRtZm4pKTsKIApAQCAtMzE5LDYgKzMzMiw5IEBAIHN0YXRpYyBib29sX3Qg
ZXB0X3NwbGl0X3N1cGVyX3BhZ2Uoc3RydWMKICAgICAgICAgICAgIGJyZWFr
OwogICAgIH0KIAorICAgIGlmICggaW9tbXVfaGFwX3B0X3NoYXJlICkKKyAg
ICAgICAgaW9tbXVfc3luY19jYWNoZSh0YWJsZSwgRVBUX1BBR0VUQUJMRV9F
TlRSSUVTICogc2l6ZW9mKGVwdF9lbnRyeV90KSk7CisKICAgICB1bm1hcF9k
b21haW5fcGFnZSh0YWJsZSk7CiAKICAgICAvKiBFdmVuIGZhaWxlZCB3ZSBz
aG91bGQgaW5zdGFsbCB0aGUgbmV3bHkgYWxsb2NhdGVkIGVwdCBwYWdlLiAq
LwpAQCAtMzc4LDYgKzM5NCw5IEBAIHN0YXRpYyBpbnQgZXB0X25leHRfbGV2
ZWwoc3RydWN0IHAybV9kb20KICAgICAgICAgaWYgKCAhbmV4dCApCiAgICAg
ICAgICAgICByZXR1cm4gR1VFU1RfVEFCTEVfTUFQX0ZBSUxFRDsKIAorICAg
ICAgICBpZiAoIGlvbW11X2hhcF9wdF9zaGFyZSApCisgICAgICAgICAgICBp
b21tdV9zeW5jX2NhY2hlKG5leHQsIEVQVF9QQUdFVEFCTEVfRU5UUklFUyAq
IHNpemVvZihlcHRfZW50cnlfdCkpOworCiAgICAgICAgIHJjID0gYXRvbWlj
X3dyaXRlX2VwdF9lbnRyeShlcHRfZW50cnksIGUsIG5leHRfbGV2ZWwpOwog
ICAgICAgICBBU1NFUlQocmMgPT0gMCk7CiAgICAgfQpAQCAtODczLDcgKzg5
Miw3IEBAIG91dDoKICAgICAgICAgIG5lZWRfbW9kaWZ5X3Z0ZF90YWJsZSAp
CiAgICAgewogICAgICAgICBpZiAoIGlvbW11X2hhcF9wdF9zaGFyZSApCi0g
ICAgICAgICAgICByYyA9IGlvbW11X3B0ZV9mbHVzaChkLCBnZm4sICZlcHRf
ZW50cnktPmVwdGUsIG9yZGVyLCB2dGRfcHRlX3ByZXNlbnQpOworICAgICAg
ICAgICAgcmMgPSBpb21tdV9mbHVzaF9pb3RsYihkLCBnZm4sIHZ0ZF9wdGVf
cHJlc2VudCwgMXUgPDwgb3JkZXIpOwogICAgICAgICBlbHNlCiAgICAgICAg
IHsKICAgICAgICAgICAgIGlmICggaW9tbXVfZmxhZ3MgKQotLS0gYS94ZW4v
ZHJpdmVycy9wYXNzdGhyb3VnaC92dGQvaW9tbXUuYworKysgYi94ZW4vZHJp
dmVycy9wYXNzdGhyb3VnaC92dGQvaW9tbXUuYwpAQCAtNjEyLDEwICs2MTIs
OCBAQCBzdGF0aWMgaW50IF9fbXVzdF9jaGVjayBpb21tdV9mbHVzaF9hbGwo
CiAgICAgcmV0dXJuIHJjOwogfQogCi1zdGF0aWMgaW50IF9fbXVzdF9jaGVj
ayBpb21tdV9mbHVzaF9pb3RsYihzdHJ1Y3QgZG9tYWluICpkLAotICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgdW5zaWduZWQg
bG9uZyBnZm4sCi0gICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICBib29sX3QgZG1hX29sZF9wdGVfcHJlc2VudCwKLSAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIHVuc2lnbmVkIGlu
dCBwYWdlX2NvdW50KQoraW50IGlvbW11X2ZsdXNoX2lvdGxiKHN0cnVjdCBk
b21haW4gKmQsIHVuc2lnbmVkIGxvbmcgZ2ZuLAorICAgICAgICAgICAgICAg
ICAgICAgIGJvb2wgZG1hX29sZF9wdGVfcHJlc2VudCwgdW5zaWduZWQgaW50
IHBhZ2VfY291bnQpCiB7CiAgICAgc3RydWN0IGRvbWFpbl9pb21tdSAqaGQg
PSBkb21faW9tbXUoZCk7CiAgICAgc3RydWN0IGFjcGlfZHJoZF91bml0ICpk
cmhkOwpAQCAtMTg3Niw1MyArMTg3NCw2IEBAIHN0YXRpYyBpbnQgX19tdXN0
X2NoZWNrIGludGVsX2lvbW11X3VubWEKICAgICByZXR1cm4gZG1hX3B0ZV9j
bGVhcl9vbmUoZCwgKHBhZGRyX3QpZ2ZuIDw8IFBBR0VfU0hJRlRfNEspOwog
fQogCi1pbnQgaW9tbXVfcHRlX2ZsdXNoKHN0cnVjdCBkb21haW4gKmQsIHU2
NCBnZm4sIHU2NCAqcHRlLAotICAgICAgICAgICAgICAgICAgICBpbnQgb3Jk
ZXIsIGludCBwcmVzZW50KQotewotICAgIHN0cnVjdCBhY3BpX2RyaGRfdW5p
dCAqZHJoZDsKLSAgICBzdHJ1Y3QgaW9tbXUgKmlvbW11ID0gTlVMTDsKLSAg
ICBzdHJ1Y3QgZG9tYWluX2lvbW11ICpoZCA9IGRvbV9pb21tdShkKTsKLSAg
ICBib29sX3QgZmx1c2hfZGV2X2lvdGxiOwotICAgIGludCBpb21tdV9kb21p
ZDsKLSAgICBpbnQgcmMgPSAwOwotCi0gICAgaW9tbXVfc3luY19jYWNoZShw
dGUsIHNpemVvZihzdHJ1Y3QgZG1hX3B0ZSkpOwotCi0gICAgZm9yX2VhY2hf
ZHJoZF91bml0ICggZHJoZCApCi0gICAgewotICAgICAgICBpb21tdSA9IGRy
aGQtPmlvbW11OwotICAgICAgICBpZiAoICF0ZXN0X2JpdChpb21tdS0+aW5k
ZXgsICZoZC0+YXJjaC5pb21tdV9iaXRtYXApICkKLSAgICAgICAgICAgIGNv
bnRpbnVlOwotCi0gICAgICAgIGZsdXNoX2Rldl9pb3RsYiA9ICEhZmluZF9h
dHNfZGV2X2RyaGQoaW9tbXUpOwotICAgICAgICBpb21tdV9kb21pZD0gZG9t
YWluX2lvbW11X2RvbWlkKGQsIGlvbW11KTsKLSAgICAgICAgaWYgKCBpb21t
dV9kb21pZCA9PSAtMSApCi0gICAgICAgICAgICBjb250aW51ZTsKLQotICAg
ICAgICByYyA9IGlvbW11X2ZsdXNoX2lvdGxiX3BzaShpb21tdSwgaW9tbXVf
ZG9taWQsCi0gICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIChw
YWRkcl90KWdmbiA8PCBQQUdFX1NISUZUXzRLLAotICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICBvcmRlciwgIXByZXNlbnQsIGZsdXNoX2Rl
dl9pb3RsYik7Ci0gICAgICAgIGlmICggcmMgPiAwICkKLSAgICAgICAgewot
ICAgICAgICAgICAgaW9tbXVfZmx1c2hfd3JpdGVfYnVmZmVyKGlvbW11KTsK
LSAgICAgICAgICAgIHJjID0gMDsKLSAgICAgICAgfQotICAgIH0KLQotICAg
IGlmICggdW5saWtlbHkocmMpICkKLSAgICB7Ci0gICAgICAgIGlmICggIWQt
PmlzX3NodXR0aW5nX2Rvd24gJiYgcHJpbnRrX3JhdGVsaW1pdCgpICkKLSAg
ICAgICAgICAgIHByaW50ayhYRU5MT0dfRVJSIFZURFBSRUZJWAotICAgICAg
ICAgICAgICAgICAgICIgZCVkOiBJT01NVSBwYWdlcyBmbHVzaCBmYWlsZWQ6
ICVkXG4iLAotICAgICAgICAgICAgICAgICAgIGQtPmRvbWFpbl9pZCwgcmMp
OwotCi0gICAgICAgIGlmICggIWlzX2hhcmR3YXJlX2RvbWFpbihkKSApCi0g
ICAgICAgICAgICBkb21haW5fY3Jhc2goZCk7Ci0gICAgfQotCi0gICAgcmV0
dXJuIHJjOwotfQotCiBzdGF0aWMgaW50IF9faW5pdCB2dGRfZXB0X3BhZ2Vf
Y29tcGF0aWJsZShzdHJ1Y3QgaW9tbXUgKmlvbW11KQogewogICAgIHU2NCBl
cHRfY2FwLCB2dGRfY2FwID0gaW9tbXUtPmNhcDsKLS0tIGEveGVuL2luY2x1
ZGUvYXNtLXg4Ni9pb21tdS5oCisrKyBiL3hlbi9pbmNsdWRlL2FzbS14ODYv
aW9tbXUuaApAQCAtODcsOCArODcsOSBAQCBpbnQgaW9tbXVfc2V0dXBfaHBl
dF9tc2koc3RydWN0IG1zaV9kZXNjCiAKIC8qIFdoaWxlIFZULWQgc3BlY2lm
aWMsIHRoaXMgbXVzdCBnZXQgZGVjbGFyZWQgaW4gYSBnZW5lcmljIGhlYWRl
ci4gKi8KIGludCBhZGp1c3RfdnRkX2lycV9hZmZpbml0aWVzKHZvaWQpOwot
aW50IF9fbXVzdF9jaGVjayBpb21tdV9wdGVfZmx1c2goc3RydWN0IGRvbWFp
biAqZCwgdTY0IGdmbiwgdTY0ICpwdGUsCi0gICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICBpbnQgb3JkZXIsIGludCBwcmVzZW50KTsKK2ludCBf
X211c3RfY2hlY2sgaW9tbXVfZmx1c2hfaW90bGIoc3RydWN0IGRvbWFpbiAq
ZCwgdW5zaWduZWQgbG9uZyBnZm4sCisgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgIGJvb2wgZG1hX29sZF9wdGVfcHJlc2VudCwKKyAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgdW5zaWduZWQgaW50IHBh
Z2VfY291bnQpOwogYm9vbF90IGlvbW11X3N1cHBvcnRzX2VpbSh2b2lkKTsK
IGludCBpb21tdV9lbmFibGVfeDJhcGljX0lSKHZvaWQpOwogdm9pZCBpb21t
dV9kaXNhYmxlX3gyYXBpY19JUih2b2lkKTsK

--=separator
Content-Type: application/octet-stream; name="xsa321/xsa321-4.10-1.patch"
Content-Disposition: attachment; filename="xsa321/xsa321-4.10-1.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiB2dGQ6IGltcHJvdmUgSU9NTVUgVExCIGZsdXNoCgpEbyBub3QgbGltaXQg
UFNJIGZsdXNoZXMgdG8gb3JkZXIgMCBwYWdlcywgaW4gb3JkZXIgdG8gYXZv
aWQgZG9pbmcgYQpmdWxsIFRMQiBmbHVzaCBpZiB0aGUgcGFzc2VkIGluIHBh
Z2UgaGFzIGFuIG9yZGVyIGdyZWF0ZXIgdGhhbiAwIGFuZAppcyBhbGlnbmVk
LiBTaG91bGQgaW5jcmVhc2UgdGhlIHBlcmZvcm1hbmNlIG9mIElPTU1VIFRM
QiBmbHVzaGVzIHdoZW4KZGVhbGluZyB3aXRoIHBhZ2Ugb3JkZXJzIGdyZWF0
ZXIgdGhhbiAwLgoKVGhpcyBpcyBwYXJ0IG9mIFhTQS0zMjEuCgpTaWduZWQt
b2ZmLWJ5OiBKYW4gQmV1bGljaCA8amJldWxpY2hAc3VzZS5jb20+CgotLS0g
YS94ZW4vZHJpdmVycy9wYXNzdGhyb3VnaC92dGQvaW9tbXUuYworKysgYi94
ZW4vZHJpdmVycy9wYXNzdGhyb3VnaC92dGQvaW9tbXUuYwpAQCAtNjEyLDEz
ICs2MTIsMTQgQEAgc3RhdGljIGludCBfX211c3RfY2hlY2sgaW9tbXVfZmx1
c2hfaW90bAogICAgICAgICBpZiAoIGlvbW11X2RvbWlkID09IC0xICkKICAg
ICAgICAgICAgIGNvbnRpbnVlOwogCi0gICAgICAgIGlmICggcGFnZV9jb3Vu
dCAhPSAxIHx8IGdmbiA9PSBnZm5feChJTlZBTElEX0dGTikgKQorICAgICAg
ICBpZiAoICFwYWdlX2NvdW50IHx8IChwYWdlX2NvdW50ICYgKHBhZ2VfY291
bnQgLSAxKSkgfHwKKyAgICAgICAgICAgICBnZm4gPT0gZ2ZuX3goSU5WQUxJ
RF9HRk4pIHx8ICFJU19BTElHTkVEKGdmbiwgcGFnZV9jb3VudCkgKQogICAg
ICAgICAgICAgcmMgPSBpb21tdV9mbHVzaF9pb3RsYl9kc2koaW9tbXUsIGlv
bW11X2RvbWlkLAogICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgMCwgZmx1c2hfZGV2X2lvdGxiKTsKICAgICAgICAgZWxzZQogICAg
ICAgICAgICAgcmMgPSBpb21tdV9mbHVzaF9pb3RsYl9wc2koaW9tbXUsIGlv
bW11X2RvbWlkLAogICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgKHBhZGRyX3QpZ2ZuIDw8IFBBR0VfU0hJRlRfNEssCi0gICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBQQUdFX09SREVSXzRL
LAorICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgZ2V0
X29yZGVyX2Zyb21fcGFnZXMocGFnZV9jb3VudCksCiAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAhZG1hX29sZF9wdGVfcHJlc2Vu
dCwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIGZs
dXNoX2Rldl9pb3RsYik7CiAK

--=separator
Content-Type: application/octet-stream; name="xsa321/xsa321-4.10-2.patch"
Content-Disposition: attachment; filename="xsa321/xsa321-4.10-2.patch"
Content-Transfer-Encoding: base64

RnJvbTogPHNlY3VyaXR5QHhlbnByb2plY3Qub3JnPgpTdWJqZWN0OiB2dGQ6
IHBydW5lIChhbmQgcmVuYW1lKSBjYWNoZSBmbHVzaCBmdW5jdGlvbnMKClJl
bmFtZSBfX2lvbW11X2ZsdXNoX2NhY2hlIHRvIGlvbW11X3N5bmNfY2FjaGUg
YW5kIHJlbW92ZQppb21tdV9mbHVzaF9jYWNoZV9wYWdlLiBBbHNvIHJlbW92
ZSB0aGUgaW9tbXVfZmx1c2hfY2FjaGVfZW50cnkKd3JhcHBlciBhbmQganVz
dCB1c2UgaW9tbXVfc3luY19jYWNoZSBpbnN0ZWFkLiBOb3RlIHRoZSBfZW50
cnkgc3VmZml4CndhcyBtZWFuaW5nbGVzcyBhcyB0aGUgd3JhcHBlciB3YXMg
YWxyZWFkeSB0YWtpbmcgYSBzaXplIHBhcmFtZXRlciBpbgpieXRlcy4gV2hp
bGUgdGhlcmUgYWxzbyBjb25zdGlmeSB0aGUgYWRkciBwYXJhbWV0ZXIuCgpO
byBmdW5jdGlvbmFsIGNoYW5nZSBpbnRlbmRlZC4KClRoaXMgaXMgcGFydCBv
ZiBYU0EtMzIxLgoKUmV2aWV3ZWQtYnk6IEphbiBCZXVsaWNoIDxqYmV1bGlj
aEBzdXNlLmNvbT4KCi0tLSBhL3hlbi9kcml2ZXJzL3Bhc3N0aHJvdWdoL3Z0
ZC9leHRlcm4uaAorKysgYi94ZW4vZHJpdmVycy9wYXNzdGhyb3VnaC92dGQv
ZXh0ZXJuLmgKQEAgLTM3LDggKzM3LDcgQEAgdm9pZCBkaXNhYmxlX3FpbnZh
bChzdHJ1Y3QgaW9tbXUgKmlvbW11KQogaW50IGVuYWJsZV9pbnRyZW1hcChz
dHJ1Y3QgaW9tbXUgKmlvbW11LCBpbnQgZWltKTsKIHZvaWQgZGlzYWJsZV9p
bnRyZW1hcChzdHJ1Y3QgaW9tbXUgKmlvbW11KTsKIAotdm9pZCBpb21tdV9m
bHVzaF9jYWNoZV9lbnRyeSh2b2lkICphZGRyLCB1bnNpZ25lZCBpbnQgc2l6
ZSk7Ci12b2lkIGlvbW11X2ZsdXNoX2NhY2hlX3BhZ2Uodm9pZCAqYWRkciwg
dW5zaWduZWQgbG9uZyBucGFnZXMpOwordm9pZCBpb21tdV9zeW5jX2NhY2hl
KGNvbnN0IHZvaWQgKmFkZHIsIHVuc2lnbmVkIGludCBzaXplKTsKIGludCBp
b21tdV9hbGxvYyhzdHJ1Y3QgYWNwaV9kcmhkX3VuaXQgKmRyaGQpOwogdm9p
ZCBpb21tdV9mcmVlKHN0cnVjdCBhY3BpX2RyaGRfdW5pdCAqZHJoZCk7CiAK
LS0tIGEveGVuL2RyaXZlcnMvcGFzc3Rocm91Z2gvdnRkL2ludHJlbWFwLmMK
KysrIGIveGVuL2RyaXZlcnMvcGFzc3Rocm91Z2gvdnRkL2ludHJlbWFwLmMK
QEAgLTIzMSw3ICsyMzEsNyBAQCBzdGF0aWMgdm9pZCBmcmVlX3JlbWFwX2Vu
dHJ5KHN0cnVjdCBpb21tCiAgICAgICAgICAgICAgICAgICAgICBpcmVtYXBf
ZW50cmllcywgaXJlbWFwX2VudHJ5KTsKIAogICAgIHVwZGF0ZV9pcnRlKGlv
bW11LCBpcmVtYXBfZW50cnksICZuZXdfaXJlLCBmYWxzZSk7Ci0gICAgaW9t
bXVfZmx1c2hfY2FjaGVfZW50cnkoaXJlbWFwX2VudHJ5LCBzaXplb2YoKmly
ZW1hcF9lbnRyeSkpOworICAgIGlvbW11X3N5bmNfY2FjaGUoaXJlbWFwX2Vu
dHJ5LCBzaXplb2YoKmlyZW1hcF9lbnRyeSkpOwogICAgIGlvbW11X2ZsdXNo
X2llY19pbmRleChpb21tdSwgMCwgaW5kZXgpOwogCiAgICAgdW5tYXBfdnRk
X2RvbWFpbl9wYWdlKGlyZW1hcF9lbnRyaWVzKTsKQEAgLTQwMyw3ICs0MDMs
NyBAQCBzdGF0aWMgaW50IGlvYXBpY19ydGVfdG9fcmVtYXBfZW50cnkoc3Ry
CiAgICAgfQogCiAgICAgdXBkYXRlX2lydGUoaW9tbXUsIGlyZW1hcF9lbnRy
eSwgJm5ld19pcmUsICFpbml0KTsKLSAgICBpb21tdV9mbHVzaF9jYWNoZV9l
bnRyeShpcmVtYXBfZW50cnksIHNpemVvZigqaXJlbWFwX2VudHJ5KSk7Cisg
ICAgaW9tbXVfc3luY19jYWNoZShpcmVtYXBfZW50cnksIHNpemVvZigqaXJl
bWFwX2VudHJ5KSk7CiAgICAgaW9tbXVfZmx1c2hfaWVjX2luZGV4KGlvbW11
LCAwLCBpbmRleCk7CiAKICAgICB1bm1hcF92dGRfZG9tYWluX3BhZ2UoaXJl
bWFwX2VudHJpZXMpOwpAQCAtNjk0LDcgKzY5NCw3IEBAIHN0YXRpYyBpbnQg
bXNpX21zZ190b19yZW1hcF9lbnRyeSgKICAgICB1cGRhdGVfaXJ0ZShpb21t
dSwgaXJlbWFwX2VudHJ5LCAmbmV3X2lyZSwgbXNpX2Rlc2MtPmlydGVfaW5p
dGlhbGl6ZWQpOwogICAgIG1zaV9kZXNjLT5pcnRlX2luaXRpYWxpemVkID0g
dHJ1ZTsKIAotICAgIGlvbW11X2ZsdXNoX2NhY2hlX2VudHJ5KGlyZW1hcF9l
bnRyeSwgc2l6ZW9mKCppcmVtYXBfZW50cnkpKTsKKyAgICBpb21tdV9zeW5j
X2NhY2hlKGlyZW1hcF9lbnRyeSwgc2l6ZW9mKCppcmVtYXBfZW50cnkpKTsK
ICAgICBpb21tdV9mbHVzaF9pZWNfaW5kZXgoaW9tbXUsIDAsIGluZGV4KTsK
IAogICAgIHVubWFwX3Z0ZF9kb21haW5fcGFnZShpcmVtYXBfZW50cmllcyk7
Ci0tLSBhL3hlbi9kcml2ZXJzL3Bhc3N0aHJvdWdoL3Z0ZC9pb21tdS5jCisr
KyBiL3hlbi9kcml2ZXJzL3Bhc3N0aHJvdWdoL3Z0ZC9pb21tdS5jCkBAIC0x
NTgsNyArMTU4LDggQEAgc3RhdGljIHZvaWQgX19pbml0IGZyZWVfaW50ZWxf
aW9tbXUoc3RydQogfQogCiBzdGF0aWMgaW50IGlvbW11c19pbmNvaGVyZW50
Owotc3RhdGljIHZvaWQgX19pb21tdV9mbHVzaF9jYWNoZSh2b2lkICphZGRy
LCB1bnNpZ25lZCBpbnQgc2l6ZSkKKwordm9pZCBpb21tdV9zeW5jX2NhY2hl
KGNvbnN0IHZvaWQgKmFkZHIsIHVuc2lnbmVkIGludCBzaXplKQogewogICAg
IGludCBpOwogICAgIHN0YXRpYyB1bnNpZ25lZCBpbnQgY2xmbHVzaF9zaXpl
ID0gMDsKQEAgLTE3MywxNiArMTc0LDYgQEAgc3RhdGljIHZvaWQgX19pb21t
dV9mbHVzaF9jYWNoZSh2b2lkICphZAogICAgICAgICBjYWNoZWxpbmVfZmx1
c2goKGNoYXIgKilhZGRyICsgaSk7CiB9CiAKLXZvaWQgaW9tbXVfZmx1c2hf
Y2FjaGVfZW50cnkodm9pZCAqYWRkciwgdW5zaWduZWQgaW50IHNpemUpCi17
Ci0gICAgX19pb21tdV9mbHVzaF9jYWNoZShhZGRyLCBzaXplKTsKLX0KLQot
dm9pZCBpb21tdV9mbHVzaF9jYWNoZV9wYWdlKHZvaWQgKmFkZHIsIHVuc2ln
bmVkIGxvbmcgbnBhZ2VzKQotewotICAgIF9faW9tbXVfZmx1c2hfY2FjaGUo
YWRkciwgUEFHRV9TSVpFICogbnBhZ2VzKTsKLX0KLQogLyogQWxsb2NhdGUg
cGFnZSB0YWJsZSwgcmV0dXJuIGl0cyBtYWNoaW5lIGFkZHJlc3MgKi8KIHU2
NCBhbGxvY19wZ3RhYmxlX21hZGRyKHN0cnVjdCBhY3BpX2RyaGRfdW5pdCAq
ZHJoZCwgdW5zaWduZWQgbG9uZyBucGFnZXMpCiB7CkBAIC0yMDcsNyArMTk4
LDcgQEAgdTY0IGFsbG9jX3BndGFibGVfbWFkZHIoc3RydWN0IGFjcGlfZHJo
ZAogICAgICAgICB2YWRkciA9IF9fbWFwX2RvbWFpbl9wYWdlKGN1cl9wZyk7
CiAgICAgICAgIG1lbXNldCh2YWRkciwgMCwgUEFHRV9TSVpFKTsKIAotICAg
ICAgICBpb21tdV9mbHVzaF9jYWNoZV9wYWdlKHZhZGRyLCAxKTsKKyAgICAg
ICAgaW9tbXVfc3luY19jYWNoZSh2YWRkciwgUEFHRV9TSVpFKTsKICAgICAg
ICAgdW5tYXBfZG9tYWluX3BhZ2UodmFkZHIpOwogICAgICAgICBjdXJfcGcr
KzsKICAgICB9CkBAIC0yNDIsNyArMjMzLDcgQEAgc3RhdGljIHU2NCBidXNf
dG9fY29udGV4dF9tYWRkcihzdHJ1Y3QgaQogICAgICAgICB9CiAgICAgICAg
IHNldF9yb290X3ZhbHVlKCpyb290LCBtYWRkcik7CiAgICAgICAgIHNldF9y
b290X3ByZXNlbnQoKnJvb3QpOwotICAgICAgICBpb21tdV9mbHVzaF9jYWNo
ZV9lbnRyeShyb290LCBzaXplb2Yoc3RydWN0IHJvb3RfZW50cnkpKTsKKyAg
ICAgICAgaW9tbXVfc3luY19jYWNoZShyb290LCBzaXplb2Yoc3RydWN0IHJv
b3RfZW50cnkpKTsKICAgICB9CiAgICAgbWFkZHIgPSAodTY0KSBnZXRfY29u
dGV4dF9hZGRyKCpyb290KTsKICAgICB1bm1hcF92dGRfZG9tYWluX3BhZ2Uo
cm9vdF9lbnRyaWVzKTsKQEAgLTMwMCw3ICsyOTEsNyBAQCBzdGF0aWMgdTY0
IGFkZHJfdG9fZG1hX3BhZ2VfbWFkZHIoc3RydWN0CiAgICAgICAgICAgICAg
Ki8KICAgICAgICAgICAgIGRtYV9zZXRfcHRlX3JlYWRhYmxlKCpwdGUpOwog
ICAgICAgICAgICAgZG1hX3NldF9wdGVfd3JpdGFibGUoKnB0ZSk7Ci0gICAg
ICAgICAgICBpb21tdV9mbHVzaF9jYWNoZV9lbnRyeShwdGUsIHNpemVvZihz
dHJ1Y3QgZG1hX3B0ZSkpOworICAgICAgICAgICAgaW9tbXVfc3luY19jYWNo
ZShwdGUsIHNpemVvZihzdHJ1Y3QgZG1hX3B0ZSkpOwogICAgICAgICB9CiAK
ICAgICAgICAgaWYgKCBsZXZlbCA9PSAyICkKQEAgLTY3NCw3ICs2NjUsNyBA
QCBzdGF0aWMgaW50IF9fbXVzdF9jaGVjayBkbWFfcHRlX2NsZWFyX29uCiAK
ICAgICBkbWFfY2xlYXJfcHRlKCpwdGUpOwogICAgIHNwaW5fdW5sb2NrKCZo
ZC0+YXJjaC5tYXBwaW5nX2xvY2spOwotICAgIGlvbW11X2ZsdXNoX2NhY2hl
X2VudHJ5KHB0ZSwgc2l6ZW9mKHN0cnVjdCBkbWFfcHRlKSk7CisgICAgaW9t
bXVfc3luY19jYWNoZShwdGUsIHNpemVvZihzdHJ1Y3QgZG1hX3B0ZSkpOwog
CiAgICAgaWYgKCAhdGhpc19jcHUoaW9tbXVfZG9udF9mbHVzaF9pb3RsYikg
KQogICAgICAgICByYyA9IGlvbW11X2ZsdXNoX2lvdGxiX3BhZ2VzKGRvbWFp
biwgYWRkciA+PiBQQUdFX1NISUZUXzRLLCAxKTsKQEAgLTcxNiw3ICs3MDcs
NyBAQCBzdGF0aWMgdm9pZCBpb21tdV9mcmVlX3BhZ2VfdGFibGUoc3RydWN0
CiAgICAgICAgICAgICBpb21tdV9mcmVlX3BhZ2V0YWJsZShkbWFfcHRlX2Fk
ZHIoKnB0ZSksIG5leHRfbGV2ZWwpOwogCiAgICAgICAgIGRtYV9jbGVhcl9w
dGUoKnB0ZSk7Ci0gICAgICAgIGlvbW11X2ZsdXNoX2NhY2hlX2VudHJ5KHB0
ZSwgc2l6ZW9mKHN0cnVjdCBkbWFfcHRlKSk7CisgICAgICAgIGlvbW11X3N5
bmNfY2FjaGUocHRlLCBzaXplb2Yoc3RydWN0IGRtYV9wdGUpKTsKICAgICB9
CiAKICAgICB1bm1hcF92dGRfZG9tYWluX3BhZ2UocHRfdmFkZHIpOwpAQCAt
MTQ0Nyw3ICsxNDM4LDcgQEAgaW50IGRvbWFpbl9jb250ZXh0X21hcHBpbmdf
b25lKAogICAgIGNvbnRleHRfc2V0X2FkZHJlc3Nfd2lkdGgoKmNvbnRleHQs
IGFnYXcpOwogICAgIGNvbnRleHRfc2V0X2ZhdWx0X2VuYWJsZSgqY29udGV4
dCk7CiAgICAgY29udGV4dF9zZXRfcHJlc2VudCgqY29udGV4dCk7Ci0gICAg
aW9tbXVfZmx1c2hfY2FjaGVfZW50cnkoY29udGV4dCwgc2l6ZW9mKHN0cnVj
dCBjb250ZXh0X2VudHJ5KSk7CisgICAgaW9tbXVfc3luY19jYWNoZShjb250
ZXh0LCBzaXplb2Yoc3RydWN0IGNvbnRleHRfZW50cnkpKTsKICAgICBzcGlu
X3VubG9jaygmaW9tbXUtPmxvY2spOwogCiAgICAgLyogQ29udGV4dCBlbnRy
eSB3YXMgcHJldmlvdXNseSBub24tcHJlc2VudCAod2l0aCBkb21pZCAwKS4g
Ki8KQEAgLTE1OTQsNyArMTU4NSw3IEBAIGludCBkb21haW5fY29udGV4dF91
bm1hcF9vbmUoCiAKICAgICBjb250ZXh0X2NsZWFyX3ByZXNlbnQoKmNvbnRl
eHQpOwogICAgIGNvbnRleHRfY2xlYXJfZW50cnkoKmNvbnRleHQpOwotICAg
IGlvbW11X2ZsdXNoX2NhY2hlX2VudHJ5KGNvbnRleHQsIHNpemVvZihzdHJ1
Y3QgY29udGV4dF9lbnRyeSkpOworICAgIGlvbW11X3N5bmNfY2FjaGUoY29u
dGV4dCwgc2l6ZW9mKHN0cnVjdCBjb250ZXh0X2VudHJ5KSk7CiAKICAgICBp
b21tdV9kb21pZD0gZG9tYWluX2lvbW11X2RvbWlkKGRvbWFpbiwgaW9tbXUp
OwogICAgIGlmICggaW9tbXVfZG9taWQgPT0gLTEgKQpAQCAtMTgyNCw3ICsx
ODE1LDcgQEAgc3RhdGljIGludCBfX211c3RfY2hlY2sgaW50ZWxfaW9tbXVf
bWFwXwogCiAgICAgKnB0ZSA9IG5ldzsKIAotICAgIGlvbW11X2ZsdXNoX2Nh
Y2hlX2VudHJ5KHB0ZSwgc2l6ZW9mKHN0cnVjdCBkbWFfcHRlKSk7CisgICAg
aW9tbXVfc3luY19jYWNoZShwdGUsIHNpemVvZihzdHJ1Y3QgZG1hX3B0ZSkp
OwogICAgIHNwaW5fdW5sb2NrKCZoZC0+YXJjaC5tYXBwaW5nX2xvY2spOwog
ICAgIHVubWFwX3Z0ZF9kb21haW5fcGFnZShwYWdlKTsKIApAQCAtMTg1OCw3
ICsxODQ5LDcgQEAgaW50IGlvbW11X3B0ZV9mbHVzaChzdHJ1Y3QgZG9tYWlu
ICpkLCB1NgogICAgIGludCBpb21tdV9kb21pZDsKICAgICBpbnQgcmMgPSAw
OwogCi0gICAgaW9tbXVfZmx1c2hfY2FjaGVfZW50cnkocHRlLCBzaXplb2Yo
c3RydWN0IGRtYV9wdGUpKTsKKyAgICBpb21tdV9zeW5jX2NhY2hlKHB0ZSwg
c2l6ZW9mKHN0cnVjdCBkbWFfcHRlKSk7CiAKICAgICBmb3JfZWFjaF9kcmhk
X3VuaXQgKCBkcmhkICkKICAgICB7Cg==

--=separator
Content-Type: application/octet-stream; name="xsa321/xsa321-4.10-3.patch"
Content-Disposition: attachment; filename="xsa321/xsa321-4.10-3.patch"
Content-Transfer-Encoding: base64

RnJvbTogPHNlY3VyaXR5QHhlbnByb2plY3Qub3JnPgpTdWJqZWN0OiB4ODYv
aW9tbXU6IGludHJvZHVjZSBhIGNhY2hlIHN5bmMgaG9vawoKVGhlIGhvb2sg
aXMgb25seSBpbXBsZW1lbnRlZCBmb3IgVlQtZCBhbmQgaXQgdXNlcyB0aGUg
YWxyZWFkeSBleGlzdGluZwppb21tdV9zeW5jX2NhY2hlIGZ1bmN0aW9uIHBy
ZXNlbnQgaW4gVlQtZCBjb2RlLiBUaGUgbmV3IGhvb2sgaXMKYWRkZWQgc28g
dGhhdCB0aGUgY2FjaGUgY2FuIGJlIGZsdXNoZWQgYnkgY29kZSBvdXRzaWRl
IG9mIFZULWQgd2hlbgp1c2luZyBzaGFyZWQgcGFnZSB0YWJsZXMuCgpOb3Rl
IHRoYXQgYWxsb2NfcGd0YWJsZV9tYWRkciBtdXN0IHVzZSB0aGUgbm93IGxv
Y2FsbHkgZGVmaW5lZApzeW5jX2NhY2hlIGZ1bmN0aW9uLCBiZWNhdXNlIElP
TU1VIG9wcyBhcmUgbm90IHlldCBzZXR1cCB0aGUgZmlyc3QKdGltZSB0aGUg
ZnVuY3Rpb24gZ2V0cyBjYWxsZWQgZHVyaW5nIElPTU1VIGluaXRpYWxpemF0
aW9uLgoKTm8gZnVuY3Rpb25hbCBjaGFuZ2UgaW50ZW5kZWQuCgpUaGlzIGlz
IHBhcnQgb2YgWFNBLTMyMS4KClJldmlld2VkLWJ5OiBKYW4gQmV1bGljaCA8
amJldWxpY2hAc3VzZS5jb20+CgotLS0gYS94ZW4vZHJpdmVycy9wYXNzdGhy
b3VnaC92dGQvZXh0ZXJuLmgKKysrIGIveGVuL2RyaXZlcnMvcGFzc3Rocm91
Z2gvdnRkL2V4dGVybi5oCkBAIC0zNyw3ICszNyw2IEBAIHZvaWQgZGlzYWJs
ZV9xaW52YWwoc3RydWN0IGlvbW11ICppb21tdSkKIGludCBlbmFibGVfaW50
cmVtYXAoc3RydWN0IGlvbW11ICppb21tdSwgaW50IGVpbSk7CiB2b2lkIGRp
c2FibGVfaW50cmVtYXAoc3RydWN0IGlvbW11ICppb21tdSk7CiAKLXZvaWQg
aW9tbXVfc3luY19jYWNoZShjb25zdCB2b2lkICphZGRyLCB1bnNpZ25lZCBp
bnQgc2l6ZSk7CiBpbnQgaW9tbXVfYWxsb2Moc3RydWN0IGFjcGlfZHJoZF91
bml0ICpkcmhkKTsKIHZvaWQgaW9tbXVfZnJlZShzdHJ1Y3QgYWNwaV9kcmhk
X3VuaXQgKmRyaGQpOwogCi0tLSBhL3hlbi9kcml2ZXJzL3Bhc3N0aHJvdWdo
L3Z0ZC9pb21tdS5jCisrKyBiL3hlbi9kcml2ZXJzL3Bhc3N0aHJvdWdoL3Z0
ZC9pb21tdS5jCkBAIC0xNTksNyArMTU5LDcgQEAgc3RhdGljIHZvaWQgX19p
bml0IGZyZWVfaW50ZWxfaW9tbXUoc3RydQogCiBzdGF0aWMgaW50IGlvbW11
c19pbmNvaGVyZW50OwogCi12b2lkIGlvbW11X3N5bmNfY2FjaGUoY29uc3Qg
dm9pZCAqYWRkciwgdW5zaWduZWQgaW50IHNpemUpCitzdGF0aWMgdm9pZCBz
eW5jX2NhY2hlKGNvbnN0IHZvaWQgKmFkZHIsIHVuc2lnbmVkIGludCBzaXpl
KQogewogICAgIGludCBpOwogICAgIHN0YXRpYyB1bnNpZ25lZCBpbnQgY2xm
bHVzaF9zaXplID0gMDsKQEAgLTE5OCw3ICsxOTgsNyBAQCB1NjQgYWxsb2Nf
cGd0YWJsZV9tYWRkcihzdHJ1Y3QgYWNwaV9kcmhkCiAgICAgICAgIHZhZGRy
ID0gX19tYXBfZG9tYWluX3BhZ2UoY3VyX3BnKTsKICAgICAgICAgbWVtc2V0
KHZhZGRyLCAwLCBQQUdFX1NJWkUpOwogCi0gICAgICAgIGlvbW11X3N5bmNf
Y2FjaGUodmFkZHIsIFBBR0VfU0laRSk7CisgICAgICAgIHN5bmNfY2FjaGUo
dmFkZHIsIFBBR0VfU0laRSk7CiAgICAgICAgIHVubWFwX2RvbWFpbl9wYWdl
KHZhZGRyKTsKICAgICAgICAgY3VyX3BnKys7CiAgICAgfQpAQCAtMjY5Niw2
ICsyNjk2LDcgQEAgY29uc3Qgc3RydWN0IGlvbW11X29wcyBpbnRlbF9pb21t
dV9vcHMgPQogICAgIC5pb3RsYl9mbHVzaF9hbGwgPSBpb21tdV9mbHVzaF9p
b3RsYl9hbGwsCiAgICAgLmdldF9yZXNlcnZlZF9kZXZpY2VfbWVtb3J5ID0g
aW50ZWxfaW9tbXVfZ2V0X3Jlc2VydmVkX2RldmljZV9tZW1vcnksCiAgICAg
LmR1bXBfcDJtX3RhYmxlID0gdnRkX2R1bXBfcDJtX3RhYmxlLAorICAgIC5z
eW5jX2NhY2hlID0gc3luY19jYWNoZSwKIH07CiAKIC8qCi0tLSBhL3hlbi9p
bmNsdWRlL2FzbS14ODYvaW9tbXUuaAorKysgYi94ZW4vaW5jbHVkZS9hc20t
eDg2L2lvbW11LmgKQEAgLTk4LDYgKzk4LDEzIEBAIGV4dGVybiBib29sIHVu
dHJ1c3RlZF9tc2k7CiBpbnQgcGlfdXBkYXRlX2lydGUoY29uc3Qgc3RydWN0
IHBpX2Rlc2MgKnBpX2Rlc2MsIGNvbnN0IHN0cnVjdCBwaXJxICpwaXJxLAog
ICAgICAgICAgICAgICAgICAgIGNvbnN0IHVpbnQ4X3QgZ3ZlYyk7CiAKKyNk
ZWZpbmUgaW9tbXVfc3luY19jYWNoZShhZGRyLCBzaXplKSAoeyAgICAgICAg
ICAgICAgICAgXAorICAgIGNvbnN0IHN0cnVjdCBpb21tdV9vcHMgKm9wcyA9
IGlvbW11X2dldF9vcHMoKTsgICAgICBcCisgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIFwKKyAgICBp
ZiAoIG9wcy0+c3luY19jYWNoZSApICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgXAorICAgICAgICBvcHMtPnN5bmNfY2FjaGUoYWRkciwgc2l6ZSk7
ICAgICAgICAgICAgICAgICAgICBcCit9KQorCiAjZW5kaWYgLyogIV9fQVJD
SF9YODZfSU9NTVVfSF9fICovCiAvKgogICogTG9jYWwgdmFyaWFibGVzOgot
LS0gYS94ZW4vaW5jbHVkZS94ZW4vaW9tbXUuaAorKysgYi94ZW4vaW5jbHVk
ZS94ZW4vaW9tbXUuaApAQCAtMTYwLDYgKzE2MCw3IEBAIHN0cnVjdCBpb21t
dV9vcHMgewogICAgIHZvaWQgKCp1cGRhdGVfaXJlX2Zyb21fYXBpYykodW5z
aWduZWQgaW50IGFwaWMsIHVuc2lnbmVkIGludCByZWcsIHVuc2lnbmVkIGlu
dCB2YWx1ZSk7CiAgICAgdW5zaWduZWQgaW50ICgqcmVhZF9hcGljX2Zyb21f
aXJlKSh1bnNpZ25lZCBpbnQgYXBpYywgdW5zaWduZWQgaW50IHJlZyk7CiAg
ICAgaW50ICgqc2V0dXBfaHBldF9tc2kpKHN0cnVjdCBtc2lfZGVzYyAqKTsK
KyAgICB2b2lkICgqc3luY19jYWNoZSkoY29uc3Qgdm9pZCAqYWRkciwgdW5z
aWduZWQgaW50IHNpemUpOwogI2VuZGlmIC8qIENPTkZJR19YODYgKi8KICAg
ICBpbnQgX19tdXN0X2NoZWNrICgqc3VzcGVuZCkodm9pZCk7CiAgICAgdm9p
ZCAoKnJlc3VtZSkodm9pZCk7Cg==

--=separator
Content-Type: application/octet-stream; name="xsa321/xsa321-4.10-4.patch"
Content-Disposition: attachment; filename="xsa321/xsa321-4.10-4.patch"
Content-Transfer-Encoding: base64

RnJvbTogPHNlY3VyaXR5QHhlbnByb2plY3Qub3JnPgpTdWJqZWN0OiB2dGQ6
IGRvbid0IGFzc3VtZSBhZGRyZXNzZXMgYXJlIGFsaWduZWQgaW4gc3luY19j
YWNoZQoKQ3VycmVudCBjb2RlIGluIHN5bmNfY2FjaGUgYXNzdW1lIHRoYXQg
dGhlIGFkZHJlc3MgcGFzc2VkIGluIGlzCmFsaWduZWQgdG8gYSBjYWNoZSBs
aW5lIHNpemUuIEZpeCB0aGUgY29kZSB0byBzdXBwb3J0IHBhc3NpbmcgaW4K
YXJiaXRyYXJ5IGFkZHJlc3NlcyBub3QgbmVjZXNzYXJpbHkgYWxpZ25lZCB0
byBhIGNhY2hlIGxpbmUgc2l6ZS4KClRoaXMgaXMgcGFydCBvZiBYU0EtMzIx
LgoKUmV2aWV3ZWQtYnk6IEphbiBCZXVsaWNoIDxqYmV1bGljaEBzdXNlLmNv
bT4KCi0tLSBhL3hlbi9kcml2ZXJzL3Bhc3N0aHJvdWdoL3Z0ZC9pb21tdS5j
CisrKyBiL3hlbi9kcml2ZXJzL3Bhc3N0aHJvdWdoL3Z0ZC9pb21tdS5jCkBA
IC0xNjEsOCArMTYxLDggQEAgc3RhdGljIGludCBpb21tdXNfaW5jb2hlcmVu
dDsKIAogc3RhdGljIHZvaWQgc3luY19jYWNoZShjb25zdCB2b2lkICphZGRy
LCB1bnNpZ25lZCBpbnQgc2l6ZSkKIHsKLSAgICBpbnQgaTsKLSAgICBzdGF0
aWMgdW5zaWduZWQgaW50IGNsZmx1c2hfc2l6ZSA9IDA7CisgICAgc3RhdGlj
IHVuc2lnbmVkIGxvbmcgY2xmbHVzaF9zaXplID0gMDsKKyAgICBjb25zdCB2
b2lkICplbmQgPSBhZGRyICsgc2l6ZTsKIAogICAgIGlmICggIWlvbW11c19p
bmNvaGVyZW50ICkKICAgICAgICAgcmV0dXJuOwpAQCAtMTcwLDggKzE3MCw5
IEBAIHN0YXRpYyB2b2lkIHN5bmNfY2FjaGUoY29uc3Qgdm9pZCAqYWRkciwK
ICAgICBpZiAoIGNsZmx1c2hfc2l6ZSA9PSAwICkKICAgICAgICAgY2xmbHVz
aF9zaXplID0gZ2V0X2NhY2hlX2xpbmVfc2l6ZSgpOwogCi0gICAgZm9yICgg
aSA9IDA7IGkgPCBzaXplOyBpICs9IGNsZmx1c2hfc2l6ZSApCi0gICAgICAg
IGNhY2hlbGluZV9mbHVzaCgoY2hhciAqKWFkZHIgKyBpKTsKKyAgICBhZGRy
IC09ICh1bnNpZ25lZCBsb25nKWFkZHIgJiAoY2xmbHVzaF9zaXplIC0gMSk7
CisgICAgZm9yICggOyBhZGRyIDwgZW5kOyBhZGRyICs9IGNsZmx1c2hfc2l6
ZSApCisgICAgICAgIGNhY2hlbGluZV9mbHVzaCgoY2hhciAqKWFkZHIpOwog
fQogCiAvKiBBbGxvY2F0ZSBwYWdlIHRhYmxlLCByZXR1cm4gaXRzIG1hY2hp
bmUgYWRkcmVzcyAqLwo=

--=separator
Content-Type: application/octet-stream; name="xsa321/xsa321-4.10-5.patch"
Content-Disposition: attachment; filename="xsa321/xsa321-4.10-5.patch"
Content-Transfer-Encoding: base64

RnJvbTogPHNlY3VyaXR5QHhlbnByb2plY3Qub3JnPgpTdWJqZWN0OiB4ODYv
YWx0ZXJuYXRpdmU6IGludHJvZHVjZSBhbHRlcm5hdGl2ZV8yCgpJdCdzIGJh
c2VkIG9uIGFsdGVybmF0aXZlX2lvXzIgd2l0aG91dCBpbnB1dHMgb3Igb3V0
cHV0cyBidXQgd2l0aCBhbgphZGRlZCBtZW1vcnkgY2xvYmJlci4KClRoaXMg
aXMgcGFydCBvZiBYU0EtMzIxLgoKQWNrZWQtYnk6IEphbiBCZXVsaWNoIDxq
YmV1bGljaEBzdXNlLmNvbT4KCi0tLSBhL3hlbi9pbmNsdWRlL2FzbS14ODYv
YWx0ZXJuYXRpdmUuaAorKysgYi94ZW4vaW5jbHVkZS9hc20teDg2L2FsdGVy
bmF0aXZlLmgKQEAgLTg1LDYgKzg1LDExIEBAIGV4dGVybiB2b2lkIGFsdGVy
bmF0aXZlX2luc3RydWN0aW9ucyh2b2kKICNkZWZpbmUgYWx0ZXJuYXRpdmUo
b2xkaW5zdHIsIG5ld2luc3RyLCBmZWF0dXJlKSAgICAgICAgICAgICAgICAg
ICAgICAgIFwKICAgICAgICAgYXNtIHZvbGF0aWxlIChBTFRFUk5BVElWRShv
bGRpbnN0ciwgbmV3aW5zdHIsIGZlYXR1cmUpIDogOiA6ICJtZW1vcnkiKQog
CisjZGVmaW5lIGFsdGVybmF0aXZlXzIob2xkaW5zdHIsIG5ld2luc3RyMSwg
ZmVhdHVyZTEsIG5ld2luc3RyMiwgZmVhdHVyZTIpIFwKKwlhc20gdm9sYXRp
bGUgKEFMVEVSTkFUSVZFXzIob2xkaW5zdHIsIG5ld2luc3RyMSwgZmVhdHVy
ZTEsCVwKKwkJCQkgICAgbmV3aW5zdHIyLCBmZWF0dXJlMikJCVwKKwkJICAg
ICAgOiA6IDogIm1lbW9yeSIpCisKIC8qCiAgKiBBbHRlcm5hdGl2ZSBpbmxp
bmUgYXNzZW1ibHkgd2l0aCBpbnB1dC4KICAqCg==

--=separator
Content-Type: application/octet-stream; name="xsa321/xsa321-4.10-6.patch"
Content-Disposition: attachment; filename="xsa321/xsa321-4.10-6.patch"
Content-Transfer-Encoding: base64

RnJvbTogPHNlY3VyaXR5QHhlbnByb2plY3Qub3JnPgpTdWJqZWN0OiB2dGQ6
IG9wdGltaXplIENQVSBjYWNoZSBzeW5jCgpTb21lIFZULWQgSU9NTVVzIGFy
ZSBub24tY29oZXJlbnQsIHdoaWNoIHJlcXVpcmVzIGEgY2FjaGUgd3JpdGUg
YmFjawppbiBvcmRlciBmb3IgdGhlIGNoYW5nZXMgbWFkZSBieSB0aGUgQ1BV
IHRvIGJlIHZpc2libGUgdG8gdGhlIElPTU1VLgpUaGlzIGNhY2hlIHdyaXRl
IGJhY2sgd2FzIHVuY29uZGl0aW9uYWxseSBkb25lIHVzaW5nIGNsZmx1c2gs
IGJ1dCB0aGVyZSBhcmUKb3RoZXIgbW9yZSBlZmZpY2llbnQgaW5zdHJ1Y3Rp
b25zIHRvIGRvIHNvLCBoZW5jZSBpbXBsZW1lbnQgc3VwcG9ydApmb3IgdGhl
bSB1c2luZyB0aGUgYWx0ZXJuYXRpdmUgZnJhbWV3b3JrLgoKVGhpcyBpcyBw
YXJ0IG9mIFhTQS0zMjEuCgpSZXZpZXdlZC1ieTogSmFuIEJldWxpY2ggPGpi
ZXVsaWNoQHN1c2UuY29tPgoKLS0tIGEveGVuL2RyaXZlcnMvcGFzc3Rocm91
Z2gvdnRkL2V4dGVybi5oCisrKyBiL3hlbi9kcml2ZXJzL3Bhc3N0aHJvdWdo
L3Z0ZC9leHRlcm4uaApAQCAtNjMsNyArNjMsNiBAQCBpbnQgX19tdXN0X2No
ZWNrIHFpbnZhbF9kZXZpY2VfaW90bGJfc3luCiAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICB1MTYgZGlkLCB1MTYgc2l6ZSwg
dTY0IGFkZHIpOwogCiB1bnNpZ25lZCBpbnQgZ2V0X2NhY2hlX2xpbmVfc2l6
ZSh2b2lkKTsKLXZvaWQgY2FjaGVsaW5lX2ZsdXNoKGNoYXIgKik7CiB2b2lk
IGZsdXNoX2FsbF9jYWNoZSh2b2lkKTsKIAogdTY0IGFsbG9jX3BndGFibGVf
bWFkZHIoc3RydWN0IGFjcGlfZHJoZF91bml0ICpkcmhkLCB1bnNpZ25lZCBs
b25nIG5wYWdlcyk7Ci0tLSBhL3hlbi9kcml2ZXJzL3Bhc3N0aHJvdWdoL3Z0
ZC9pb21tdS5jCisrKyBiL3hlbi9kcml2ZXJzL3Bhc3N0aHJvdWdoL3Z0ZC9p
b21tdS5jCkBAIC0zMSw2ICszMSw3IEBACiAjaW5jbHVkZSA8eGVuL3BjaV9y
ZWdzLmg+CiAjaW5jbHVkZSA8eGVuL2tleWhhbmRsZXIuaD4KICNpbmNsdWRl
IDxhc20vbXNpLmg+CisjaW5jbHVkZSA8YXNtL25vcHMuaD4KICNpbmNsdWRl
IDxhc20vaXJxLmg+CiAjaW5jbHVkZSA8YXNtL2h2bS92bXgvdm14Lmg+CiAj
aW5jbHVkZSA8YXNtL3AybS5oPgpAQCAtMTcyLDcgKzE3Myw0MiBAQCBzdGF0
aWMgdm9pZCBzeW5jX2NhY2hlKGNvbnN0IHZvaWQgKmFkZHIsCiAKICAgICBh
ZGRyIC09ICh1bnNpZ25lZCBsb25nKWFkZHIgJiAoY2xmbHVzaF9zaXplIC0g
MSk7CiAgICAgZm9yICggOyBhZGRyIDwgZW5kOyBhZGRyICs9IGNsZmx1c2hf
c2l6ZSApCi0gICAgICAgIGNhY2hlbGluZV9mbHVzaCgoY2hhciAqKWFkZHIp
OworLyoKKyAqIFRoZSBhcmd1bWVudHMgdG8gYSBtYWNybyBtdXN0IG5vdCBp
bmNsdWRlIHByZXByb2Nlc3NvciBkaXJlY3RpdmVzLiBEb2luZyBzbworICog
cmVzdWx0cyBpbiB1bmRlZmluZWQgYmVoYXZpb3IsIHNvIHdlIGhhdmUgdG8g
Y3JlYXRlIHNvbWUgZGVmaW5lcyBoZXJlIGluCisgKiBvcmRlciB0byBhdm9p
ZCBpdC4KKyAqLworI2lmIGRlZmluZWQoSEFWRV9BU19DTFdCKQorIyBkZWZp
bmUgQ0xXQl9FTkNPRElORyAiY2x3YiAlW3BdIgorI2VsaWYgZGVmaW5lZChI
QVZFX0FTX1hTQVZFT1BUKQorIyBkZWZpbmUgQ0xXQl9FTkNPRElORyAiZGF0
YTE2IHhzYXZlb3B0ICVbcF0iIC8qIGNsd2IgKi8KKyNlbHNlCisjIGRlZmlu
ZSBDTFdCX0VOQ09ESU5HICIuYnl0ZSAweDY2LCAweDBmLCAweGFlLCAweDMw
IiAvKiBjbHdiICglJXJheCkgKi8KKyNlbmRpZgorCisjZGVmaW5lIEJBU0Vf
SU5QVVQoYWRkcikgW3BdICJtIiAoKihjb25zdCBjaGFyICopKGFkZHIpKQor
I2lmIGRlZmluZWQoSEFWRV9BU19DTFdCKSB8fCBkZWZpbmVkKEhBVkVfQVNf
WFNBVkVPUFQpCisjIGRlZmluZSBJTlBVVCBCQVNFX0lOUFVUCisjZWxzZQor
IyBkZWZpbmUgSU5QVVQoYWRkcikgImEiIChhZGRyKSwgQkFTRV9JTlBVVChh
ZGRyKQorI2VuZGlmCisgICAgICAgIC8qCisgICAgICAgICAqIE5vdGUgcmVn
YXJkaW5nIHRoZSB1c2Ugb2YgTk9QX0RTX1BSRUZJWDogaXQncyBmYXN0ZXIg
dG8gZG8gYSBjbGZsdXNoCisgICAgICAgICAqICsgcHJlZml4IHRoYW4gYSBj
bGZsdXNoICsgbm9wLCBhbmQgaGVuY2UgdGhlIHByZWZpeCBpcyBhZGRlZCBp
bnN0ZWFkCisgICAgICAgICAqIG9mIGxldHRpbmcgdGhlIGFsdGVybmF0aXZl
IGZyYW1ld29yayBmaWxsIHRoZSBnYXAgYnkgYXBwZW5kaW5nIG5vcHMuCisg
ICAgICAgICAqLworICAgICAgICBhbHRlcm5hdGl2ZV9pb18yKCIuYnl0ZSAi
IF9fc3RyaW5naWZ5KE5PUF9EU19QUkVGSVgpICI7IGNsZmx1c2ggJVtwXSIs
CisgICAgICAgICAgICAgICAgICAgICAgICAgImRhdGExNiBjbGZsdXNoICVb
cF0iLCAvKiBjbGZsdXNob3B0ICovCisgICAgICAgICAgICAgICAgICAgICAg
ICAgWDg2X0ZFQVRVUkVfQ0xGTFVTSE9QVCwKKyAgICAgICAgICAgICAgICAg
ICAgICAgICBDTFdCX0VOQ09ESU5HLAorICAgICAgICAgICAgICAgICAgICAg
ICAgIFg4Nl9GRUFUVVJFX0NMV0IsIC8qIG5vIG91dHB1dHMgKi8sCisgICAg
ICAgICAgICAgICAgICAgICAgICAgSU5QVVQoYWRkcikpOworI3VuZGVmIElO
UFVUCisjdW5kZWYgQkFTRV9JTlBVVAorI3VuZGVmIENMV0JfRU5DT0RJTkcK
KworICAgIGFsdGVybmF0aXZlXzIoQVNNX05PUDMsICJzZmVuY2UiLCBYODZf
RkVBVFVSRV9DTEZMVVNIT1BULAorICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICJzZmVuY2UiLCBYODZfRkVBVFVSRV9DTFdCKTsKIH0KIAogLyogQWxs
b2NhdGUgcGFnZSB0YWJsZSwgcmV0dXJuIGl0cyBtYWNoaW5lIGFkZHJlc3Mg
Ki8KLS0tIGEveGVuL2RyaXZlcnMvcGFzc3Rocm91Z2gvdnRkL3g4Ni92dGQu
YworKysgYi94ZW4vZHJpdmVycy9wYXNzdGhyb3VnaC92dGQveDg2L3Z0ZC5j
CkBAIC01MywxMSArNTMsNiBAQCB1bnNpZ25lZCBpbnQgZ2V0X2NhY2hlX2xp
bmVfc2l6ZSh2b2lkKQogICAgIHJldHVybiAoKGNwdWlkX2VieCgxKSA+PiA4
KSAmIDB4ZmYpICogODsKIH0KIAotdm9pZCBjYWNoZWxpbmVfZmx1c2goY2hh
ciAqIGFkZHIpCi17Ci0gICAgY2xmbHVzaChhZGRyKTsKLX0KLQogdm9pZCBm
bHVzaF9hbGxfY2FjaGUoKQogewogICAgIHdiaW52ZCgpOwo=

--=separator
Content-Type: application/octet-stream; name="xsa321/xsa321-4.10-7.patch"
Content-Disposition: attachment; filename="xsa321/xsa321-4.10-7.patch"
Content-Transfer-Encoding: base64

RnJvbTogPHNlY3VyaXR5QHhlbnByb2plY3Qub3JnPgpTdWJqZWN0OiB4ODYv
ZXB0OiBmbHVzaCBjYWNoZSB3aGVuIG1vZGlmeWluZyBQVEVzIGFuZCBzaGFy
aW5nIHBhZ2UgdGFibGVzCgpNb2RpZmljYXRpb25zIG1hZGUgdG8gdGhlIHBh
Z2UgdGFibGVzIGJ5IEVQVCBjb2RlIG5lZWQgdG8gYmUgd3JpdHRlbgp0byBt
ZW1vcnkgd2hlbiB0aGUgcGFnZSB0YWJsZXMgYXJlIHNoYXJlZCB3aXRoIHRo
ZSBJT01NVSwgYXMgSW50ZWwKSU9NTVVzIGNhbiBiZSBub24tY29oZXJlbnQg
YW5kIHRodXMgcmVxdWlyZSBjaGFuZ2VzIHRvIGJlIHdyaXR0ZW4gdG8KbWVt
b3J5IGluIG9yZGVyIHRvIGJlIHZpc2libGUgdG8gdGhlIElPTU1VLgoKSW4g
b3JkZXIgdG8gYWNoaWV2ZSB0aGlzIG1ha2Ugc3VyZSBkYXRhIGlzIHdyaXR0
ZW4gYmFjayB0byBtZW1vcnkKYWZ0ZXIgd3JpdGluZyBhbiBFUFQgZW50cnkg
d2hlbiB0aGUgcmVjYWxjIGJpdCBpcyBub3Qgc2V0IGluCmF0b21pY193cml0
ZV9lcHRfZW50cnkuIElmIHN1Y2ggYml0IGlzIHNldCwgdGhlIGVudHJ5IHdp
bGwgYmUKYWRqdXN0ZWQgYW5kIGF0b21pY193cml0ZV9lcHRfZW50cnkgd2ls
bCBiZSBjYWxsZWQgYSBzZWNvbmQgdGltZQp3aXRob3V0IHRoZSByZWNhbGMg
Yml0IHNldC4gTm90ZSB0aGF0IHdoZW4gc3BsaXR0aW5nIGEgc3VwZXIgcGFn
ZSB0aGUKbmV3IHRhYmxlcyByZXN1bHRpbmcgb2YgdGhlIHNwbGl0IHNob3Vs
ZCBhbHNvIGJlIHdyaXR0ZW4gYmFjay4KCkZhaWx1cmUgdG8gZG8gc28gY2Fu
IGFsbG93IGRldmljZXMgYmVoaW5kIHRoZSBJT01NVSBhY2Nlc3MgdG8gdGhl
CnN0YWxlIHN1cGVyIHBhZ2UsIG9yIGNhdXNlIGNvaGVyZW5jeSBpc3N1ZXMg
YXMgY2hhbmdlcyBtYWRlIGJ5IHRoZQpwcm9jZXNzb3IgdG8gdGhlIHBhZ2Ug
dGFibGVzIGFyZSBub3QgdmlzaWJsZSB0byB0aGUgSU9NTVUuCgpUaGlzIGFs
bG93cyB0byByZW1vdmUgdGhlIFZULWQgc3BlY2lmaWMgaW9tbXVfcHRlX2Zs
dXNoIGhlbHBlciwgc2luY2UKdGhlIGNhY2hlIHdyaXRlIGJhY2sgaXMgbm93
IHBlcmZvcm1lZCBieSBhdG9taWNfd3JpdGVfZXB0X2VudHJ5LCBhbmQKaGVu
Y2UgaW9tbXVfaW90bGJfZmx1c2ggY2FuIGJlIHVzZWQgdG8gZmx1c2ggdGhl
IElPTU1VIFRMQi4gVGhlIG5ld2x5CnVzZWQgbWV0aG9kIChpb21tdV9pb3Rs
Yl9mbHVzaCkgY2FuIHJlc3VsdCBpbiBsZXNzIGZsdXNoZXMsIHNpbmNlIGl0
Cm1pZ2h0IHNvbWV0aW1lcyBiZSBjYWxsZWQgcmlnaHRseSB3aXRoIDAgZmxh
Z3MsIGluIHdoaWNoIGNhc2UgaXQKYmVjb21lcyBhIG5vLW9wLgoKVGhpcyBp
cyBwYXJ0IG9mIFhTQS0zMjEuCgpSZXZpZXdlZC1ieTogSmFuIEJldWxpY2gg
PGpiZXVsaWNoQHN1c2UuY29tPgoKLS0tIGEveGVuL2FyY2gveDg2L21tL3Ay
bS1lcHQuYworKysgYi94ZW4vYXJjaC94ODYvbW0vcDJtLWVwdC5jCkBAIC05
MCw2ICs5MCwxOSBAQCBzdGF0aWMgaW50IGF0b21pY193cml0ZV9lcHRfZW50
cnkoZXB0X2VuCiAKICAgICB3cml0ZV9hdG9taWMoJmVudHJ5cHRyLT5lcHRl
LCBuZXcuZXB0ZSk7CiAKKyAgICAvKgorICAgICAqIFRoZSByZWNhbGMgZmll
bGQgb24gdGhlIEVQVCBpcyB1c2VkIHRvIHNpZ25hbCBlaXRoZXIgdGhhdCBh
CisgICAgICogcmVjYWxjdWxhdGlvbiBvZiB0aGUgRU1UIGZpZWxkIGlzIHJl
cXVpcmVkICh3aGljaCBkb2Vzbid0IGVmZmVjdCB0aGUKKyAgICAgKiBJT01N
VSksIG9yIGEgdHlwZSBjaGFuZ2UuIFR5cGUgY2hhbmdlcyBjYW4gb25seSBi
ZSBiZXR3ZWVuIHJhbV9ydywKKyAgICAgKiBsb2dkaXJ0eSBhbmQgaW9yZXFf
c2VydmVyOiBjaGFuZ2VzIHRvL2Zyb20gbG9nZGlydHkgd29uJ3Qgd29yayB3
ZWxsIHdpdGgKKyAgICAgKiBhbiBJT01NVSBhbnl3YXksIGFzIElPTU1VICNQ
RnMgYXJlIG5vdCBzeW5jaHJvbm91cyBhbmQgd2lsbCBsZWFkIHRvCisgICAg
ICogYWJvcnRzLCBhbmQgY2hhbmdlcyB0by9mcm9tIGlvcmVxX3NlcnZlciBh
cmUgYWxyZWFkeSBmdWxseSBmbHVzaGVkCisgICAgICogYmVmb3JlIHJldHVy
bmluZyB0byBndWVzdCBjb250ZXh0IChzZWUKKyAgICAgKiBYRU5fRE1PUF9t
YXBfbWVtX3R5cGVfdG9faW9yZXFfc2VydmVyKS4KKyAgICAgKi8KKyAgICBp
ZiAoICFuZXcucmVjYWxjICYmIGlvbW11X2hhcF9wdF9zaGFyZSApCisgICAg
ICAgIGlvbW11X3N5bmNfY2FjaGUoZW50cnlwdHIsIHNpemVvZigqZW50cnlw
dHIpKTsKKwogICAgIGlmICggdW5saWtlbHkob2xkbWZuICE9IG1mbl94KElO
VkFMSURfTUZOKSkgKQogICAgICAgICBwdXRfcGFnZShtZm5fdG9fcGFnZShv
bGRtZm4pKTsKIApAQCAtMzE5LDYgKzMzMiw5IEBAIHN0YXRpYyBib29sX3Qg
ZXB0X3NwbGl0X3N1cGVyX3BhZ2Uoc3RydWMKICAgICAgICAgICAgIGJyZWFr
OwogICAgIH0KIAorICAgIGlmICggaW9tbXVfaGFwX3B0X3NoYXJlICkKKyAg
ICAgICAgaW9tbXVfc3luY19jYWNoZSh0YWJsZSwgRVBUX1BBR0VUQUJMRV9F
TlRSSUVTICogc2l6ZW9mKGVwdF9lbnRyeV90KSk7CisKICAgICB1bm1hcF9k
b21haW5fcGFnZSh0YWJsZSk7CiAKICAgICAvKiBFdmVuIGZhaWxlZCB3ZSBz
aG91bGQgaW5zdGFsbCB0aGUgbmV3bHkgYWxsb2NhdGVkIGVwdCBwYWdlLiAq
LwpAQCAtMzc4LDYgKzM5NCw5IEBAIHN0YXRpYyBpbnQgZXB0X25leHRfbGV2
ZWwoc3RydWN0IHAybV9kb20KICAgICAgICAgaWYgKCAhbmV4dCApCiAgICAg
ICAgICAgICByZXR1cm4gR1VFU1RfVEFCTEVfTUFQX0ZBSUxFRDsKIAorICAg
ICAgICBpZiAoIGlvbW11X2hhcF9wdF9zaGFyZSApCisgICAgICAgICAgICBp
b21tdV9zeW5jX2NhY2hlKG5leHQsIEVQVF9QQUdFVEFCTEVfRU5UUklFUyAq
IHNpemVvZihlcHRfZW50cnlfdCkpOworCiAgICAgICAgIHJjID0gYXRvbWlj
X3dyaXRlX2VwdF9lbnRyeShlcHRfZW50cnksIGUsIG5leHRfbGV2ZWwpOwog
ICAgICAgICBBU1NFUlQocmMgPT0gMCk7CiAgICAgfQpAQCAtODc0LDcgKzg5
Myw3IEBAIG91dDoKICAgICAgICAgIG5lZWRfbW9kaWZ5X3Z0ZF90YWJsZSAp
CiAgICAgewogICAgICAgICBpZiAoIGlvbW11X2hhcF9wdF9zaGFyZSApCi0g
ICAgICAgICAgICByYyA9IGlvbW11X3B0ZV9mbHVzaChkLCBnZm4sICZlcHRf
ZW50cnktPmVwdGUsIG9yZGVyLCB2dGRfcHRlX3ByZXNlbnQpOworICAgICAg
ICAgICAgcmMgPSBpb21tdV9mbHVzaF9pb3RsYihkLCBnZm4sIHZ0ZF9wdGVf
cHJlc2VudCwgMXUgPDwgb3JkZXIpOwogICAgICAgICBlbHNlCiAgICAgICAg
IHsKICAgICAgICAgICAgIGlmICggaW9tbXVfZmxhZ3MgKQotLS0gYS94ZW4v
ZHJpdmVycy9wYXNzdGhyb3VnaC92dGQvaW9tbXUuYworKysgYi94ZW4vZHJp
dmVycy9wYXNzdGhyb3VnaC92dGQvaW9tbXUuYwpAQCAtNjEyLDEwICs2MTIs
OCBAQCBzdGF0aWMgaW50IF9fbXVzdF9jaGVjayBpb21tdV9mbHVzaF9hbGwo
CiAgICAgcmV0dXJuIHJjOwogfQogCi1zdGF0aWMgaW50IF9fbXVzdF9jaGVj
ayBpb21tdV9mbHVzaF9pb3RsYihzdHJ1Y3QgZG9tYWluICpkLAotICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgdW5zaWduZWQg
bG9uZyBnZm4sCi0gICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICBib29sX3QgZG1hX29sZF9wdGVfcHJlc2VudCwKLSAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIHVuc2lnbmVkIGlu
dCBwYWdlX2NvdW50KQoraW50IGlvbW11X2ZsdXNoX2lvdGxiKHN0cnVjdCBk
b21haW4gKmQsIHVuc2lnbmVkIGxvbmcgZ2ZuLAorICAgICAgICAgICAgICAg
ICAgICAgIGJvb2wgZG1hX29sZF9wdGVfcHJlc2VudCwgdW5zaWduZWQgaW50
IHBhZ2VfY291bnQpCiB7CiAgICAgc3RydWN0IGRvbWFpbl9pb21tdSAqaGQg
PSBkb21faW9tbXUoZCk7CiAgICAgc3RydWN0IGFjcGlfZHJoZF91bml0ICpk
cmhkOwpAQCAtMTg3Niw1MyArMTg3NCw2IEBAIHN0YXRpYyBpbnQgX19tdXN0
X2NoZWNrIGludGVsX2lvbW11X3VubWEKICAgICByZXR1cm4gZG1hX3B0ZV9j
bGVhcl9vbmUoZCwgKHBhZGRyX3QpZ2ZuIDw8IFBBR0VfU0hJRlRfNEspOwog
fQogCi1pbnQgaW9tbXVfcHRlX2ZsdXNoKHN0cnVjdCBkb21haW4gKmQsIHU2
NCBnZm4sIHU2NCAqcHRlLAotICAgICAgICAgICAgICAgICAgICBpbnQgb3Jk
ZXIsIGludCBwcmVzZW50KQotewotICAgIHN0cnVjdCBhY3BpX2RyaGRfdW5p
dCAqZHJoZDsKLSAgICBzdHJ1Y3QgaW9tbXUgKmlvbW11ID0gTlVMTDsKLSAg
ICBzdHJ1Y3QgZG9tYWluX2lvbW11ICpoZCA9IGRvbV9pb21tdShkKTsKLSAg
ICBib29sX3QgZmx1c2hfZGV2X2lvdGxiOwotICAgIGludCBpb21tdV9kb21p
ZDsKLSAgICBpbnQgcmMgPSAwOwotCi0gICAgaW9tbXVfc3luY19jYWNoZShw
dGUsIHNpemVvZihzdHJ1Y3QgZG1hX3B0ZSkpOwotCi0gICAgZm9yX2VhY2hf
ZHJoZF91bml0ICggZHJoZCApCi0gICAgewotICAgICAgICBpb21tdSA9IGRy
aGQtPmlvbW11OwotICAgICAgICBpZiAoICF0ZXN0X2JpdChpb21tdS0+aW5k
ZXgsICZoZC0+YXJjaC5pb21tdV9iaXRtYXApICkKLSAgICAgICAgICAgIGNv
bnRpbnVlOwotCi0gICAgICAgIGZsdXNoX2Rldl9pb3RsYiA9ICEhZmluZF9h
dHNfZGV2X2RyaGQoaW9tbXUpOwotICAgICAgICBpb21tdV9kb21pZD0gZG9t
YWluX2lvbW11X2RvbWlkKGQsIGlvbW11KTsKLSAgICAgICAgaWYgKCBpb21t
dV9kb21pZCA9PSAtMSApCi0gICAgICAgICAgICBjb250aW51ZTsKLQotICAg
ICAgICByYyA9IGlvbW11X2ZsdXNoX2lvdGxiX3BzaShpb21tdSwgaW9tbXVf
ZG9taWQsCi0gICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIChw
YWRkcl90KWdmbiA8PCBQQUdFX1NISUZUXzRLLAotICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICBvcmRlciwgIXByZXNlbnQsIGZsdXNoX2Rl
dl9pb3RsYik7Ci0gICAgICAgIGlmICggcmMgPiAwICkKLSAgICAgICAgewot
ICAgICAgICAgICAgaW9tbXVfZmx1c2hfd3JpdGVfYnVmZmVyKGlvbW11KTsK
LSAgICAgICAgICAgIHJjID0gMDsKLSAgICAgICAgfQotICAgIH0KLQotICAg
IGlmICggdW5saWtlbHkocmMpICkKLSAgICB7Ci0gICAgICAgIGlmICggIWQt
PmlzX3NodXR0aW5nX2Rvd24gJiYgcHJpbnRrX3JhdGVsaW1pdCgpICkKLSAg
ICAgICAgICAgIHByaW50ayhYRU5MT0dfRVJSIFZURFBSRUZJWAotICAgICAg
ICAgICAgICAgICAgICIgZCVkOiBJT01NVSBwYWdlcyBmbHVzaCBmYWlsZWQ6
ICVkXG4iLAotICAgICAgICAgICAgICAgICAgIGQtPmRvbWFpbl9pZCwgcmMp
OwotCi0gICAgICAgIGlmICggIWlzX2hhcmR3YXJlX2RvbWFpbihkKSApCi0g
ICAgICAgICAgICBkb21haW5fY3Jhc2goZCk7Ci0gICAgfQotCi0gICAgcmV0
dXJuIHJjOwotfQotCiBzdGF0aWMgaW50IF9faW5pdCB2dGRfZXB0X3BhZ2Vf
Y29tcGF0aWJsZShzdHJ1Y3QgaW9tbXUgKmlvbW11KQogewogICAgIHU2NCBl
cHRfY2FwLCB2dGRfY2FwID0gaW9tbXUtPmNhcDsKLS0tIGEveGVuL2luY2x1
ZGUvYXNtLXg4Ni9pb21tdS5oCisrKyBiL3hlbi9pbmNsdWRlL2FzbS14ODYv
aW9tbXUuaApAQCAtODcsOCArODcsOSBAQCBpbnQgaW9tbXVfc2V0dXBfaHBl
dF9tc2koc3RydWN0IG1zaV9kZXNjCiAKIC8qIFdoaWxlIFZULWQgc3BlY2lm
aWMsIHRoaXMgbXVzdCBnZXQgZGVjbGFyZWQgaW4gYSBnZW5lcmljIGhlYWRl
ci4gKi8KIGludCBhZGp1c3RfdnRkX2lycV9hZmZpbml0aWVzKHZvaWQpOwot
aW50IF9fbXVzdF9jaGVjayBpb21tdV9wdGVfZmx1c2goc3RydWN0IGRvbWFp
biAqZCwgdTY0IGdmbiwgdTY0ICpwdGUsCi0gICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICBpbnQgb3JkZXIsIGludCBwcmVzZW50KTsKK2ludCBf
X211c3RfY2hlY2sgaW9tbXVfZmx1c2hfaW90bGIoc3RydWN0IGRvbWFpbiAq
ZCwgdW5zaWduZWQgbG9uZyBnZm4sCisgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgIGJvb2wgZG1hX29sZF9wdGVfcHJlc2VudCwKKyAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgdW5zaWduZWQgaW50IHBh
Z2VfY291bnQpOwogYm9vbF90IGlvbW11X3N1cHBvcnRzX2VpbSh2b2lkKTsK
IGludCBpb21tdV9lbmFibGVfeDJhcGljX0lSKHZvaWQpOwogdm9pZCBpb21t
dV9kaXNhYmxlX3gyYXBpY19JUih2b2lkKTsK

--=separator
Content-Type: application/octet-stream; name="xsa321/xsa321-4.11-1.patch"
Content-Disposition: attachment; filename="xsa321/xsa321-4.11-1.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiB2dGQ6IGltcHJvdmUgSU9NTVUgVExCIGZsdXNoCgpEbyBub3QgbGltaXQg
UFNJIGZsdXNoZXMgdG8gb3JkZXIgMCBwYWdlcywgaW4gb3JkZXIgdG8gYXZv
aWQgZG9pbmcgYQpmdWxsIFRMQiBmbHVzaCBpZiB0aGUgcGFzc2VkIGluIHBh
Z2UgaGFzIGFuIG9yZGVyIGdyZWF0ZXIgdGhhbiAwIGFuZAppcyBhbGlnbmVk
LiBTaG91bGQgaW5jcmVhc2UgdGhlIHBlcmZvcm1hbmNlIG9mIElPTU1VIFRM
QiBmbHVzaGVzIHdoZW4KZGVhbGluZyB3aXRoIHBhZ2Ugb3JkZXJzIGdyZWF0
ZXIgdGhhbiAwLgoKVGhpcyBpcyBwYXJ0IG9mIFhTQS0zMjEuCgpTaWduZWQt
b2ZmLWJ5OiBKYW4gQmV1bGljaCA8amJldWxpY2hAc3VzZS5jb20+CgotLS0g
YS94ZW4vZHJpdmVycy9wYXNzdGhyb3VnaC92dGQvaW9tbXUuYworKysgYi94
ZW4vZHJpdmVycy9wYXNzdGhyb3VnaC92dGQvaW9tbXUuYwpAQCAtNjEyLDEz
ICs2MTIsMTQgQEAgc3RhdGljIGludCBfX211c3RfY2hlY2sgaW9tbXVfZmx1
c2hfaW90bAogICAgICAgICBpZiAoIGlvbW11X2RvbWlkID09IC0xICkKICAg
ICAgICAgICAgIGNvbnRpbnVlOwogCi0gICAgICAgIGlmICggcGFnZV9jb3Vu
dCAhPSAxIHx8IGdmbiA9PSBnZm5feChJTlZBTElEX0dGTikgKQorICAgICAg
ICBpZiAoICFwYWdlX2NvdW50IHx8IChwYWdlX2NvdW50ICYgKHBhZ2VfY291
bnQgLSAxKSkgfHwKKyAgICAgICAgICAgICBnZm4gPT0gZ2ZuX3goSU5WQUxJ
RF9HRk4pIHx8ICFJU19BTElHTkVEKGdmbiwgcGFnZV9jb3VudCkgKQogICAg
ICAgICAgICAgcmMgPSBpb21tdV9mbHVzaF9pb3RsYl9kc2koaW9tbXUsIGlv
bW11X2RvbWlkLAogICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgMCwgZmx1c2hfZGV2X2lvdGxiKTsKICAgICAgICAgZWxzZQogICAg
ICAgICAgICAgcmMgPSBpb21tdV9mbHVzaF9pb3RsYl9wc2koaW9tbXUsIGlv
bW11X2RvbWlkLAogICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgKHBhZGRyX3QpZ2ZuIDw8IFBBR0VfU0hJRlRfNEssCi0gICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBQQUdFX09SREVSXzRL
LAorICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgZ2V0
X29yZGVyX2Zyb21fcGFnZXMocGFnZV9jb3VudCksCiAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAhZG1hX29sZF9wdGVfcHJlc2Vu
dCwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIGZs
dXNoX2Rldl9pb3RsYik7CiAK

--=separator
Content-Type: application/octet-stream; name="xsa321/xsa321-4.11-2.patch"
Content-Disposition: attachment; filename="xsa321/xsa321-4.11-2.patch"
Content-Transfer-Encoding: base64

RnJvbTogPHNlY3VyaXR5QHhlbnByb2plY3Qub3JnPgpTdWJqZWN0OiB2dGQ6
IHBydW5lIChhbmQgcmVuYW1lKSBjYWNoZSBmbHVzaCBmdW5jdGlvbnMKClJl
bmFtZSBfX2lvbW11X2ZsdXNoX2NhY2hlIHRvIGlvbW11X3N5bmNfY2FjaGUg
YW5kIHJlbW92ZQppb21tdV9mbHVzaF9jYWNoZV9wYWdlLiBBbHNvIHJlbW92
ZSB0aGUgaW9tbXVfZmx1c2hfY2FjaGVfZW50cnkKd3JhcHBlciBhbmQganVz
dCB1c2UgaW9tbXVfc3luY19jYWNoZSBpbnN0ZWFkLiBOb3RlIHRoZSBfZW50
cnkgc3VmZml4CndhcyBtZWFuaW5nbGVzcyBhcyB0aGUgd3JhcHBlciB3YXMg
YWxyZWFkeSB0YWtpbmcgYSBzaXplIHBhcmFtZXRlciBpbgpieXRlcy4gV2hp
bGUgdGhlcmUgYWxzbyBjb25zdGlmeSB0aGUgYWRkciBwYXJhbWV0ZXIuCgpO
byBmdW5jdGlvbmFsIGNoYW5nZSBpbnRlbmRlZC4KClRoaXMgaXMgcGFydCBv
ZiBYU0EtMzIxLgoKUmV2aWV3ZWQtYnk6IEphbiBCZXVsaWNoIDxqYmV1bGlj
aEBzdXNlLmNvbT4KCi0tLSBhL3hlbi9kcml2ZXJzL3Bhc3N0aHJvdWdoL3Z0
ZC9leHRlcm4uaAorKysgYi94ZW4vZHJpdmVycy9wYXNzdGhyb3VnaC92dGQv
ZXh0ZXJuLmgKQEAgLTM3LDggKzM3LDcgQEAgdm9pZCBkaXNhYmxlX3FpbnZh
bChzdHJ1Y3QgaW9tbXUgKmlvbW11KQogaW50IGVuYWJsZV9pbnRyZW1hcChz
dHJ1Y3QgaW9tbXUgKmlvbW11LCBpbnQgZWltKTsKIHZvaWQgZGlzYWJsZV9p
bnRyZW1hcChzdHJ1Y3QgaW9tbXUgKmlvbW11KTsKIAotdm9pZCBpb21tdV9m
bHVzaF9jYWNoZV9lbnRyeSh2b2lkICphZGRyLCB1bnNpZ25lZCBpbnQgc2l6
ZSk7Ci12b2lkIGlvbW11X2ZsdXNoX2NhY2hlX3BhZ2Uodm9pZCAqYWRkciwg
dW5zaWduZWQgbG9uZyBucGFnZXMpOwordm9pZCBpb21tdV9zeW5jX2NhY2hl
KGNvbnN0IHZvaWQgKmFkZHIsIHVuc2lnbmVkIGludCBzaXplKTsKIGludCBp
b21tdV9hbGxvYyhzdHJ1Y3QgYWNwaV9kcmhkX3VuaXQgKmRyaGQpOwogdm9p
ZCBpb21tdV9mcmVlKHN0cnVjdCBhY3BpX2RyaGRfdW5pdCAqZHJoZCk7CiAK
LS0tIGEveGVuL2RyaXZlcnMvcGFzc3Rocm91Z2gvdnRkL2ludHJlbWFwLmMK
KysrIGIveGVuL2RyaXZlcnMvcGFzc3Rocm91Z2gvdnRkL2ludHJlbWFwLmMK
QEAgLTIzMSw3ICsyMzEsNyBAQCBzdGF0aWMgdm9pZCBmcmVlX3JlbWFwX2Vu
dHJ5KHN0cnVjdCBpb21tCiAgICAgICAgICAgICAgICAgICAgICBpcmVtYXBf
ZW50cmllcywgaXJlbWFwX2VudHJ5KTsKIAogICAgIHVwZGF0ZV9pcnRlKGlv
bW11LCBpcmVtYXBfZW50cnksICZuZXdfaXJlLCBmYWxzZSk7Ci0gICAgaW9t
bXVfZmx1c2hfY2FjaGVfZW50cnkoaXJlbWFwX2VudHJ5LCBzaXplb2YoKmly
ZW1hcF9lbnRyeSkpOworICAgIGlvbW11X3N5bmNfY2FjaGUoaXJlbWFwX2Vu
dHJ5LCBzaXplb2YoKmlyZW1hcF9lbnRyeSkpOwogICAgIGlvbW11X2ZsdXNo
X2llY19pbmRleChpb21tdSwgMCwgaW5kZXgpOwogCiAgICAgdW5tYXBfdnRk
X2RvbWFpbl9wYWdlKGlyZW1hcF9lbnRyaWVzKTsKQEAgLTQwMyw3ICs0MDMs
NyBAQCBzdGF0aWMgaW50IGlvYXBpY19ydGVfdG9fcmVtYXBfZW50cnkoc3Ry
CiAgICAgfQogCiAgICAgdXBkYXRlX2lydGUoaW9tbXUsIGlyZW1hcF9lbnRy
eSwgJm5ld19pcmUsICFpbml0KTsKLSAgICBpb21tdV9mbHVzaF9jYWNoZV9l
bnRyeShpcmVtYXBfZW50cnksIHNpemVvZigqaXJlbWFwX2VudHJ5KSk7Cisg
ICAgaW9tbXVfc3luY19jYWNoZShpcmVtYXBfZW50cnksIHNpemVvZigqaXJl
bWFwX2VudHJ5KSk7CiAgICAgaW9tbXVfZmx1c2hfaWVjX2luZGV4KGlvbW11
LCAwLCBpbmRleCk7CiAKICAgICB1bm1hcF92dGRfZG9tYWluX3BhZ2UoaXJl
bWFwX2VudHJpZXMpOwpAQCAtNjk0LDcgKzY5NCw3IEBAIHN0YXRpYyBpbnQg
bXNpX21zZ190b19yZW1hcF9lbnRyeSgKICAgICB1cGRhdGVfaXJ0ZShpb21t
dSwgaXJlbWFwX2VudHJ5LCAmbmV3X2lyZSwgbXNpX2Rlc2MtPmlydGVfaW5p
dGlhbGl6ZWQpOwogICAgIG1zaV9kZXNjLT5pcnRlX2luaXRpYWxpemVkID0g
dHJ1ZTsKIAotICAgIGlvbW11X2ZsdXNoX2NhY2hlX2VudHJ5KGlyZW1hcF9l
bnRyeSwgc2l6ZW9mKCppcmVtYXBfZW50cnkpKTsKKyAgICBpb21tdV9zeW5j
X2NhY2hlKGlyZW1hcF9lbnRyeSwgc2l6ZW9mKCppcmVtYXBfZW50cnkpKTsK
ICAgICBpb21tdV9mbHVzaF9pZWNfaW5kZXgoaW9tbXUsIDAsIGluZGV4KTsK
IAogICAgIHVubWFwX3Z0ZF9kb21haW5fcGFnZShpcmVtYXBfZW50cmllcyk7
Ci0tLSBhL3hlbi9kcml2ZXJzL3Bhc3N0aHJvdWdoL3Z0ZC9pb21tdS5jCisr
KyBiL3hlbi9kcml2ZXJzL3Bhc3N0aHJvdWdoL3Z0ZC9pb21tdS5jCkBAIC0x
NTgsNyArMTU4LDggQEAgc3RhdGljIHZvaWQgX19pbml0IGZyZWVfaW50ZWxf
aW9tbXUoc3RydQogfQogCiBzdGF0aWMgaW50IGlvbW11c19pbmNvaGVyZW50
Owotc3RhdGljIHZvaWQgX19pb21tdV9mbHVzaF9jYWNoZSh2b2lkICphZGRy
LCB1bnNpZ25lZCBpbnQgc2l6ZSkKKwordm9pZCBpb21tdV9zeW5jX2NhY2hl
KGNvbnN0IHZvaWQgKmFkZHIsIHVuc2lnbmVkIGludCBzaXplKQogewogICAg
IGludCBpOwogICAgIHN0YXRpYyB1bnNpZ25lZCBpbnQgY2xmbHVzaF9zaXpl
ID0gMDsKQEAgLTE3MywxNiArMTc0LDYgQEAgc3RhdGljIHZvaWQgX19pb21t
dV9mbHVzaF9jYWNoZSh2b2lkICphZAogICAgICAgICBjYWNoZWxpbmVfZmx1
c2goKGNoYXIgKilhZGRyICsgaSk7CiB9CiAKLXZvaWQgaW9tbXVfZmx1c2hf
Y2FjaGVfZW50cnkodm9pZCAqYWRkciwgdW5zaWduZWQgaW50IHNpemUpCi17
Ci0gICAgX19pb21tdV9mbHVzaF9jYWNoZShhZGRyLCBzaXplKTsKLX0KLQot
dm9pZCBpb21tdV9mbHVzaF9jYWNoZV9wYWdlKHZvaWQgKmFkZHIsIHVuc2ln
bmVkIGxvbmcgbnBhZ2VzKQotewotICAgIF9faW9tbXVfZmx1c2hfY2FjaGUo
YWRkciwgUEFHRV9TSVpFICogbnBhZ2VzKTsKLX0KLQogLyogQWxsb2NhdGUg
cGFnZSB0YWJsZSwgcmV0dXJuIGl0cyBtYWNoaW5lIGFkZHJlc3MgKi8KIHU2
NCBhbGxvY19wZ3RhYmxlX21hZGRyKHN0cnVjdCBhY3BpX2RyaGRfdW5pdCAq
ZHJoZCwgdW5zaWduZWQgbG9uZyBucGFnZXMpCiB7CkBAIC0yMDcsNyArMTk4
LDcgQEAgdTY0IGFsbG9jX3BndGFibGVfbWFkZHIoc3RydWN0IGFjcGlfZHJo
ZAogICAgICAgICB2YWRkciA9IF9fbWFwX2RvbWFpbl9wYWdlKGN1cl9wZyk7
CiAgICAgICAgIG1lbXNldCh2YWRkciwgMCwgUEFHRV9TSVpFKTsKIAotICAg
ICAgICBpb21tdV9mbHVzaF9jYWNoZV9wYWdlKHZhZGRyLCAxKTsKKyAgICAg
ICAgaW9tbXVfc3luY19jYWNoZSh2YWRkciwgUEFHRV9TSVpFKTsKICAgICAg
ICAgdW5tYXBfZG9tYWluX3BhZ2UodmFkZHIpOwogICAgICAgICBjdXJfcGcr
KzsKICAgICB9CkBAIC0yNDIsNyArMjMzLDcgQEAgc3RhdGljIHU2NCBidXNf
dG9fY29udGV4dF9tYWRkcihzdHJ1Y3QgaQogICAgICAgICB9CiAgICAgICAg
IHNldF9yb290X3ZhbHVlKCpyb290LCBtYWRkcik7CiAgICAgICAgIHNldF9y
b290X3ByZXNlbnQoKnJvb3QpOwotICAgICAgICBpb21tdV9mbHVzaF9jYWNo
ZV9lbnRyeShyb290LCBzaXplb2Yoc3RydWN0IHJvb3RfZW50cnkpKTsKKyAg
ICAgICAgaW9tbXVfc3luY19jYWNoZShyb290LCBzaXplb2Yoc3RydWN0IHJv
b3RfZW50cnkpKTsKICAgICB9CiAgICAgbWFkZHIgPSAodTY0KSBnZXRfY29u
dGV4dF9hZGRyKCpyb290KTsKICAgICB1bm1hcF92dGRfZG9tYWluX3BhZ2Uo
cm9vdF9lbnRyaWVzKTsKQEAgLTMwMCw3ICsyOTEsNyBAQCBzdGF0aWMgdTY0
IGFkZHJfdG9fZG1hX3BhZ2VfbWFkZHIoc3RydWN0CiAgICAgICAgICAgICAg
Ki8KICAgICAgICAgICAgIGRtYV9zZXRfcHRlX3JlYWRhYmxlKCpwdGUpOwog
ICAgICAgICAgICAgZG1hX3NldF9wdGVfd3JpdGFibGUoKnB0ZSk7Ci0gICAg
ICAgICAgICBpb21tdV9mbHVzaF9jYWNoZV9lbnRyeShwdGUsIHNpemVvZihz
dHJ1Y3QgZG1hX3B0ZSkpOworICAgICAgICAgICAgaW9tbXVfc3luY19jYWNo
ZShwdGUsIHNpemVvZihzdHJ1Y3QgZG1hX3B0ZSkpOwogICAgICAgICB9CiAK
ICAgICAgICAgaWYgKCBsZXZlbCA9PSAyICkKQEAgLTY3NCw3ICs2NjUsNyBA
QCBzdGF0aWMgaW50IF9fbXVzdF9jaGVjayBkbWFfcHRlX2NsZWFyX29uCiAK
ICAgICBkbWFfY2xlYXJfcHRlKCpwdGUpOwogICAgIHNwaW5fdW5sb2NrKCZo
ZC0+YXJjaC5tYXBwaW5nX2xvY2spOwotICAgIGlvbW11X2ZsdXNoX2NhY2hl
X2VudHJ5KHB0ZSwgc2l6ZW9mKHN0cnVjdCBkbWFfcHRlKSk7CisgICAgaW9t
bXVfc3luY19jYWNoZShwdGUsIHNpemVvZihzdHJ1Y3QgZG1hX3B0ZSkpOwog
CiAgICAgaWYgKCAhdGhpc19jcHUoaW9tbXVfZG9udF9mbHVzaF9pb3RsYikg
KQogICAgICAgICByYyA9IGlvbW11X2ZsdXNoX2lvdGxiX3BhZ2VzKGRvbWFp
biwgYWRkciA+PiBQQUdFX1NISUZUXzRLLCAxKTsKQEAgLTcxNiw3ICs3MDcs
NyBAQCBzdGF0aWMgdm9pZCBpb21tdV9mcmVlX3BhZ2VfdGFibGUoc3RydWN0
CiAgICAgICAgICAgICBpb21tdV9mcmVlX3BhZ2V0YWJsZShkbWFfcHRlX2Fk
ZHIoKnB0ZSksIG5leHRfbGV2ZWwpOwogCiAgICAgICAgIGRtYV9jbGVhcl9w
dGUoKnB0ZSk7Ci0gICAgICAgIGlvbW11X2ZsdXNoX2NhY2hlX2VudHJ5KHB0
ZSwgc2l6ZW9mKHN0cnVjdCBkbWFfcHRlKSk7CisgICAgICAgIGlvbW11X3N5
bmNfY2FjaGUocHRlLCBzaXplb2Yoc3RydWN0IGRtYV9wdGUpKTsKICAgICB9
CiAKICAgICB1bm1hcF92dGRfZG9tYWluX3BhZ2UocHRfdmFkZHIpOwpAQCAt
MTQ0OSw3ICsxNDQwLDcgQEAgaW50IGRvbWFpbl9jb250ZXh0X21hcHBpbmdf
b25lKAogICAgIGNvbnRleHRfc2V0X2FkZHJlc3Nfd2lkdGgoKmNvbnRleHQs
IGFnYXcpOwogICAgIGNvbnRleHRfc2V0X2ZhdWx0X2VuYWJsZSgqY29udGV4
dCk7CiAgICAgY29udGV4dF9zZXRfcHJlc2VudCgqY29udGV4dCk7Ci0gICAg
aW9tbXVfZmx1c2hfY2FjaGVfZW50cnkoY29udGV4dCwgc2l6ZW9mKHN0cnVj
dCBjb250ZXh0X2VudHJ5KSk7CisgICAgaW9tbXVfc3luY19jYWNoZShjb250
ZXh0LCBzaXplb2Yoc3RydWN0IGNvbnRleHRfZW50cnkpKTsKICAgICBzcGlu
X3VubG9jaygmaW9tbXUtPmxvY2spOwogCiAgICAgLyogQ29udGV4dCBlbnRy
eSB3YXMgcHJldmlvdXNseSBub24tcHJlc2VudCAod2l0aCBkb21pZCAwKS4g
Ki8KQEAgLTE2MDIsNyArMTU5Myw3IEBAIGludCBkb21haW5fY29udGV4dF91
bm1hcF9vbmUoCiAKICAgICBjb250ZXh0X2NsZWFyX3ByZXNlbnQoKmNvbnRl
eHQpOwogICAgIGNvbnRleHRfY2xlYXJfZW50cnkoKmNvbnRleHQpOwotICAg
IGlvbW11X2ZsdXNoX2NhY2hlX2VudHJ5KGNvbnRleHQsIHNpemVvZihzdHJ1
Y3QgY29udGV4dF9lbnRyeSkpOworICAgIGlvbW11X3N5bmNfY2FjaGUoY29u
dGV4dCwgc2l6ZW9mKHN0cnVjdCBjb250ZXh0X2VudHJ5KSk7CiAKICAgICBp
b21tdV9kb21pZD0gZG9tYWluX2lvbW11X2RvbWlkKGRvbWFpbiwgaW9tbXUp
OwogICAgIGlmICggaW9tbXVfZG9taWQgPT0gLTEgKQpAQCAtMTgyOCw3ICsx
ODE5LDcgQEAgc3RhdGljIGludCBfX211c3RfY2hlY2sgaW50ZWxfaW9tbXVf
bWFwXwogCiAgICAgKnB0ZSA9IG5ldzsKIAotICAgIGlvbW11X2ZsdXNoX2Nh
Y2hlX2VudHJ5KHB0ZSwgc2l6ZW9mKHN0cnVjdCBkbWFfcHRlKSk7CisgICAg
aW9tbXVfc3luY19jYWNoZShwdGUsIHNpemVvZihzdHJ1Y3QgZG1hX3B0ZSkp
OwogICAgIHNwaW5fdW5sb2NrKCZoZC0+YXJjaC5tYXBwaW5nX2xvY2spOwog
ICAgIHVubWFwX3Z0ZF9kb21haW5fcGFnZShwYWdlKTsKIApAQCAtMTg2Miw3
ICsxODUzLDcgQEAgaW50IGlvbW11X3B0ZV9mbHVzaChzdHJ1Y3QgZG9tYWlu
ICpkLCB1NgogICAgIGludCBpb21tdV9kb21pZDsKICAgICBpbnQgcmMgPSAw
OwogCi0gICAgaW9tbXVfZmx1c2hfY2FjaGVfZW50cnkocHRlLCBzaXplb2Yo
c3RydWN0IGRtYV9wdGUpKTsKKyAgICBpb21tdV9zeW5jX2NhY2hlKHB0ZSwg
c2l6ZW9mKHN0cnVjdCBkbWFfcHRlKSk7CiAKICAgICBmb3JfZWFjaF9kcmhk
X3VuaXQgKCBkcmhkICkKICAgICB7CkBAIC0yNzI1LDcgKzI3MTYsNyBAQCBz
dGF0aWMgaW50IF9faW5pdCBpbnRlbF9pb21tdV9xdWFyYW50aW5lCiAgICAg
ICAgICAgICBkbWFfc2V0X3B0ZV9hZGRyKCpwdGUsIG1hZGRyKTsKICAgICAg
ICAgICAgIGRtYV9zZXRfcHRlX3JlYWRhYmxlKCpwdGUpOwogICAgICAgICB9
Ci0gICAgICAgIGlvbW11X2ZsdXNoX2NhY2hlX3BhZ2UocGFyZW50LCAxKTsK
KyAgICAgICAgaW9tbXVfc3luY19jYWNoZShwYXJlbnQsIFBBR0VfU0laRSk7
CiAKICAgICAgICAgdW5tYXBfdnRkX2RvbWFpbl9wYWdlKHBhcmVudCk7CiAg
ICAgICAgIHBhcmVudCA9IG1hcF92dGRfZG9tYWluX3BhZ2UobWFkZHIpOwo=

--=separator
Content-Type: application/octet-stream; name="xsa321/xsa321-4.11-3.patch"
Content-Disposition: attachment; filename="xsa321/xsa321-4.11-3.patch"
Content-Transfer-Encoding: base64

RnJvbTogPHNlY3VyaXR5QHhlbnByb2plY3Qub3JnPgpTdWJqZWN0OiB4ODYv
aW9tbXU6IGludHJvZHVjZSBhIGNhY2hlIHN5bmMgaG9vawoKVGhlIGhvb2sg
aXMgb25seSBpbXBsZW1lbnRlZCBmb3IgVlQtZCBhbmQgaXQgdXNlcyB0aGUg
YWxyZWFkeSBleGlzdGluZwppb21tdV9zeW5jX2NhY2hlIGZ1bmN0aW9uIHBy
ZXNlbnQgaW4gVlQtZCBjb2RlLiBUaGUgbmV3IGhvb2sgaXMKYWRkZWQgc28g
dGhhdCB0aGUgY2FjaGUgY2FuIGJlIGZsdXNoZWQgYnkgY29kZSBvdXRzaWRl
IG9mIFZULWQgd2hlbgp1c2luZyBzaGFyZWQgcGFnZSB0YWJsZXMuCgpOb3Rl
IHRoYXQgYWxsb2NfcGd0YWJsZV9tYWRkciBtdXN0IHVzZSB0aGUgbm93IGxv
Y2FsbHkgZGVmaW5lZApzeW5jX2NhY2hlIGZ1bmN0aW9uLCBiZWNhdXNlIElP
TU1VIG9wcyBhcmUgbm90IHlldCBzZXR1cCB0aGUgZmlyc3QKdGltZSB0aGUg
ZnVuY3Rpb24gZ2V0cyBjYWxsZWQgZHVyaW5nIElPTU1VIGluaXRpYWxpemF0
aW9uLgoKTm8gZnVuY3Rpb25hbCBjaGFuZ2UgaW50ZW5kZWQuCgpUaGlzIGlz
IHBhcnQgb2YgWFNBLTMyMS4KClJldmlld2VkLWJ5OiBKYW4gQmV1bGljaCA8
amJldWxpY2hAc3VzZS5jb20+CgotLS0gYS94ZW4vZHJpdmVycy9wYXNzdGhy
b3VnaC92dGQvZXh0ZXJuLmgKKysrIGIveGVuL2RyaXZlcnMvcGFzc3Rocm91
Z2gvdnRkL2V4dGVybi5oCkBAIC0zNyw3ICszNyw2IEBAIHZvaWQgZGlzYWJs
ZV9xaW52YWwoc3RydWN0IGlvbW11ICppb21tdSkKIGludCBlbmFibGVfaW50
cmVtYXAoc3RydWN0IGlvbW11ICppb21tdSwgaW50IGVpbSk7CiB2b2lkIGRp
c2FibGVfaW50cmVtYXAoc3RydWN0IGlvbW11ICppb21tdSk7CiAKLXZvaWQg
aW9tbXVfc3luY19jYWNoZShjb25zdCB2b2lkICphZGRyLCB1bnNpZ25lZCBp
bnQgc2l6ZSk7CiBpbnQgaW9tbXVfYWxsb2Moc3RydWN0IGFjcGlfZHJoZF91
bml0ICpkcmhkKTsKIHZvaWQgaW9tbXVfZnJlZShzdHJ1Y3QgYWNwaV9kcmhk
X3VuaXQgKmRyaGQpOwogCi0tLSBhL3hlbi9kcml2ZXJzL3Bhc3N0aHJvdWdo
L3Z0ZC9pb21tdS5jCisrKyBiL3hlbi9kcml2ZXJzL3Bhc3N0aHJvdWdoL3Z0
ZC9pb21tdS5jCkBAIC0xNTksNyArMTU5LDcgQEAgc3RhdGljIHZvaWQgX19p
bml0IGZyZWVfaW50ZWxfaW9tbXUoc3RydQogCiBzdGF0aWMgaW50IGlvbW11
c19pbmNvaGVyZW50OwogCi12b2lkIGlvbW11X3N5bmNfY2FjaGUoY29uc3Qg
dm9pZCAqYWRkciwgdW5zaWduZWQgaW50IHNpemUpCitzdGF0aWMgdm9pZCBz
eW5jX2NhY2hlKGNvbnN0IHZvaWQgKmFkZHIsIHVuc2lnbmVkIGludCBzaXpl
KQogewogICAgIGludCBpOwogICAgIHN0YXRpYyB1bnNpZ25lZCBpbnQgY2xm
bHVzaF9zaXplID0gMDsKQEAgLTE5OCw3ICsxOTgsNyBAQCB1NjQgYWxsb2Nf
cGd0YWJsZV9tYWRkcihzdHJ1Y3QgYWNwaV9kcmhkCiAgICAgICAgIHZhZGRy
ID0gX19tYXBfZG9tYWluX3BhZ2UoY3VyX3BnKTsKICAgICAgICAgbWVtc2V0
KHZhZGRyLCAwLCBQQUdFX1NJWkUpOwogCi0gICAgICAgIGlvbW11X3N5bmNf
Y2FjaGUodmFkZHIsIFBBR0VfU0laRSk7CisgICAgICAgIHN5bmNfY2FjaGUo
dmFkZHIsIFBBR0VfU0laRSk7CiAgICAgICAgIHVubWFwX2RvbWFpbl9wYWdl
KHZhZGRyKTsKICAgICAgICAgY3VyX3BnKys7CiAgICAgfQpAQCAtMjc2MCw2
ICsyNzYwLDcgQEAgY29uc3Qgc3RydWN0IGlvbW11X29wcyBpbnRlbF9pb21t
dV9vcHMgPQogICAgIC5pb3RsYl9mbHVzaF9hbGwgPSBpb21tdV9mbHVzaF9p
b3RsYl9hbGwsCiAgICAgLmdldF9yZXNlcnZlZF9kZXZpY2VfbWVtb3J5ID0g
aW50ZWxfaW9tbXVfZ2V0X3Jlc2VydmVkX2RldmljZV9tZW1vcnksCiAgICAg
LmR1bXBfcDJtX3RhYmxlID0gdnRkX2R1bXBfcDJtX3RhYmxlLAorICAgIC5z
eW5jX2NhY2hlID0gc3luY19jYWNoZSwKIH07CiAKIC8qCi0tLSBhL3hlbi9p
bmNsdWRlL2FzbS14ODYvaW9tbXUuaAorKysgYi94ZW4vaW5jbHVkZS9hc20t
eDg2L2lvbW11LmgKQEAgLTk4LDYgKzk4LDEzIEBAIGV4dGVybiBib29sIHVu
dHJ1c3RlZF9tc2k7CiBpbnQgcGlfdXBkYXRlX2lydGUoY29uc3Qgc3RydWN0
IHBpX2Rlc2MgKnBpX2Rlc2MsIGNvbnN0IHN0cnVjdCBwaXJxICpwaXJxLAog
ICAgICAgICAgICAgICAgICAgIGNvbnN0IHVpbnQ4X3QgZ3ZlYyk7CiAKKyNk
ZWZpbmUgaW9tbXVfc3luY19jYWNoZShhZGRyLCBzaXplKSAoeyAgICAgICAg
ICAgICAgICAgXAorICAgIGNvbnN0IHN0cnVjdCBpb21tdV9vcHMgKm9wcyA9
IGlvbW11X2dldF9vcHMoKTsgICAgICBcCisgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIFwKKyAgICBp
ZiAoIG9wcy0+c3luY19jYWNoZSApICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgXAorICAgICAgICBvcHMtPnN5bmNfY2FjaGUoYWRkciwgc2l6ZSk7
ICAgICAgICAgICAgICAgICAgICBcCit9KQorCiAjZW5kaWYgLyogIV9fQVJD
SF9YODZfSU9NTVVfSF9fICovCiAvKgogICogTG9jYWwgdmFyaWFibGVzOgot
LS0gYS94ZW4vaW5jbHVkZS94ZW4vaW9tbXUuaAorKysgYi94ZW4vaW5jbHVk
ZS94ZW4vaW9tbXUuaApAQCAtMTYxLDYgKzE2MSw3IEBAIHN0cnVjdCBpb21t
dV9vcHMgewogICAgIHZvaWQgKCp1cGRhdGVfaXJlX2Zyb21fYXBpYykodW5z
aWduZWQgaW50IGFwaWMsIHVuc2lnbmVkIGludCByZWcsIHVuc2lnbmVkIGlu
dCB2YWx1ZSk7CiAgICAgdW5zaWduZWQgaW50ICgqcmVhZF9hcGljX2Zyb21f
aXJlKSh1bnNpZ25lZCBpbnQgYXBpYywgdW5zaWduZWQgaW50IHJlZyk7CiAg
ICAgaW50ICgqc2V0dXBfaHBldF9tc2kpKHN0cnVjdCBtc2lfZGVzYyAqKTsK
KyAgICB2b2lkICgqc3luY19jYWNoZSkoY29uc3Qgdm9pZCAqYWRkciwgdW5z
aWduZWQgaW50IHNpemUpOwogI2VuZGlmIC8qIENPTkZJR19YODYgKi8KICAg
ICBpbnQgX19tdXN0X2NoZWNrICgqc3VzcGVuZCkodm9pZCk7CiAgICAgdm9p
ZCAoKnJlc3VtZSkodm9pZCk7Cg==

--=separator
Content-Type: application/octet-stream; name="xsa321/xsa321-4.11-4.patch"
Content-Disposition: attachment; filename="xsa321/xsa321-4.11-4.patch"
Content-Transfer-Encoding: base64

RnJvbTogPHNlY3VyaXR5QHhlbnByb2plY3Qub3JnPgpTdWJqZWN0OiB2dGQ6
IGRvbid0IGFzc3VtZSBhZGRyZXNzZXMgYXJlIGFsaWduZWQgaW4gc3luY19j
YWNoZQoKQ3VycmVudCBjb2RlIGluIHN5bmNfY2FjaGUgYXNzdW1lIHRoYXQg
dGhlIGFkZHJlc3MgcGFzc2VkIGluIGlzCmFsaWduZWQgdG8gYSBjYWNoZSBs
aW5lIHNpemUuIEZpeCB0aGUgY29kZSB0byBzdXBwb3J0IHBhc3NpbmcgaW4K
YXJiaXRyYXJ5IGFkZHJlc3NlcyBub3QgbmVjZXNzYXJpbHkgYWxpZ25lZCB0
byBhIGNhY2hlIGxpbmUgc2l6ZS4KClRoaXMgaXMgcGFydCBvZiBYU0EtMzIx
LgoKUmV2aWV3ZWQtYnk6IEphbiBCZXVsaWNoIDxqYmV1bGljaEBzdXNlLmNv
bT4KCi0tLSBhL3hlbi9kcml2ZXJzL3Bhc3N0aHJvdWdoL3Z0ZC9pb21tdS5j
CisrKyBiL3hlbi9kcml2ZXJzL3Bhc3N0aHJvdWdoL3Z0ZC9pb21tdS5jCkBA
IC0xNjEsOCArMTYxLDggQEAgc3RhdGljIGludCBpb21tdXNfaW5jb2hlcmVu
dDsKIAogc3RhdGljIHZvaWQgc3luY19jYWNoZShjb25zdCB2b2lkICphZGRy
LCB1bnNpZ25lZCBpbnQgc2l6ZSkKIHsKLSAgICBpbnQgaTsKLSAgICBzdGF0
aWMgdW5zaWduZWQgaW50IGNsZmx1c2hfc2l6ZSA9IDA7CisgICAgc3RhdGlj
IHVuc2lnbmVkIGxvbmcgY2xmbHVzaF9zaXplID0gMDsKKyAgICBjb25zdCB2
b2lkICplbmQgPSBhZGRyICsgc2l6ZTsKIAogICAgIGlmICggIWlvbW11c19p
bmNvaGVyZW50ICkKICAgICAgICAgcmV0dXJuOwpAQCAtMTcwLDggKzE3MCw5
IEBAIHN0YXRpYyB2b2lkIHN5bmNfY2FjaGUoY29uc3Qgdm9pZCAqYWRkciwK
ICAgICBpZiAoIGNsZmx1c2hfc2l6ZSA9PSAwICkKICAgICAgICAgY2xmbHVz
aF9zaXplID0gZ2V0X2NhY2hlX2xpbmVfc2l6ZSgpOwogCi0gICAgZm9yICgg
aSA9IDA7IGkgPCBzaXplOyBpICs9IGNsZmx1c2hfc2l6ZSApCi0gICAgICAg
IGNhY2hlbGluZV9mbHVzaCgoY2hhciAqKWFkZHIgKyBpKTsKKyAgICBhZGRy
IC09ICh1bnNpZ25lZCBsb25nKWFkZHIgJiAoY2xmbHVzaF9zaXplIC0gMSk7
CisgICAgZm9yICggOyBhZGRyIDwgZW5kOyBhZGRyICs9IGNsZmx1c2hfc2l6
ZSApCisgICAgICAgIGNhY2hlbGluZV9mbHVzaCgoY2hhciAqKWFkZHIpOwog
fQogCiAvKiBBbGxvY2F0ZSBwYWdlIHRhYmxlLCByZXR1cm4gaXRzIG1hY2hp
bmUgYWRkcmVzcyAqLwo=

--=separator
Content-Type: application/octet-stream; name="xsa321/xsa321-4.11-5.patch"
Content-Disposition: attachment; filename="xsa321/xsa321-4.11-5.patch"
Content-Transfer-Encoding: base64

RnJvbTogPHNlY3VyaXR5QHhlbnByb2plY3Qub3JnPgpTdWJqZWN0OiB4ODYv
YWx0ZXJuYXRpdmU6IGludHJvZHVjZSBhbHRlcm5hdGl2ZV8yCgpJdCdzIGJh
c2VkIG9uIGFsdGVybmF0aXZlX2lvXzIgd2l0aG91dCBpbnB1dHMgb3Igb3V0
cHV0cyBidXQgd2l0aCBhbgphZGRlZCBtZW1vcnkgY2xvYmJlci4KClRoaXMg
aXMgcGFydCBvZiBYU0EtMzIxLgoKQWNrZWQtYnk6IEphbiBCZXVsaWNoIDxq
YmV1bGljaEBzdXNlLmNvbT4KCi0tLSBhL3hlbi9pbmNsdWRlL2FzbS14ODYv
YWx0ZXJuYXRpdmUuaAorKysgYi94ZW4vaW5jbHVkZS9hc20teDg2L2FsdGVy
bmF0aXZlLmgKQEAgLTExMyw2ICsxMTMsMTEgQEAgZXh0ZXJuIHZvaWQgYWx0
ZXJuYXRpdmVfaW5zdHJ1Y3Rpb25zKHZvaQogI2RlZmluZSBhbHRlcm5hdGl2
ZShvbGRpbnN0ciwgbmV3aW5zdHIsIGZlYXR1cmUpICAgICAgICAgICAgICAg
ICAgICAgICAgXAogICAgICAgICBhc20gdm9sYXRpbGUgKEFMVEVSTkFUSVZF
KG9sZGluc3RyLCBuZXdpbnN0ciwgZmVhdHVyZSkgOiA6IDogIm1lbW9yeSIp
CiAKKyNkZWZpbmUgYWx0ZXJuYXRpdmVfMihvbGRpbnN0ciwgbmV3aW5zdHIx
LCBmZWF0dXJlMSwgbmV3aW5zdHIyLCBmZWF0dXJlMikgXAorCWFzbSB2b2xh
dGlsZSAoQUxURVJOQVRJVkVfMihvbGRpbnN0ciwgbmV3aW5zdHIxLCBmZWF0
dXJlMSwJXAorCQkJCSAgICBuZXdpbnN0cjIsIGZlYXR1cmUyKQkJXAorCQkg
ICAgICA6IDogOiAibWVtb3J5IikKKwogLyoKICAqIEFsdGVybmF0aXZlIGlu
bGluZSBhc3NlbWJseSB3aXRoIGlucHV0LgogICoK

--=separator
Content-Type: application/octet-stream; name="xsa321/xsa321-4.11-6.patch"
Content-Disposition: attachment; filename="xsa321/xsa321-4.11-6.patch"
Content-Transfer-Encoding: base64

RnJvbTogPHNlY3VyaXR5QHhlbnByb2plY3Qub3JnPgpTdWJqZWN0OiB2dGQ6
IG9wdGltaXplIENQVSBjYWNoZSBzeW5jCgpTb21lIFZULWQgSU9NTVVzIGFy
ZSBub24tY29oZXJlbnQsIHdoaWNoIHJlcXVpcmVzIGEgY2FjaGUgd3JpdGUg
YmFjawppbiBvcmRlciBmb3IgdGhlIGNoYW5nZXMgbWFkZSBieSB0aGUgQ1BV
IHRvIGJlIHZpc2libGUgdG8gdGhlIElPTU1VLgpUaGlzIGNhY2hlIHdyaXRl
IGJhY2sgd2FzIHVuY29uZGl0aW9uYWxseSBkb25lIHVzaW5nIGNsZmx1c2gs
IGJ1dCB0aGVyZSBhcmUKb3RoZXIgbW9yZSBlZmZpY2llbnQgaW5zdHJ1Y3Rp
b25zIHRvIGRvIHNvLCBoZW5jZSBpbXBsZW1lbnQgc3VwcG9ydApmb3IgdGhl
bSB1c2luZyB0aGUgYWx0ZXJuYXRpdmUgZnJhbWV3b3JrLgoKVGhpcyBpcyBw
YXJ0IG9mIFhTQS0zMjEuCgpSZXZpZXdlZC1ieTogSmFuIEJldWxpY2ggPGpi
ZXVsaWNoQHN1c2UuY29tPgoKLS0tIGEveGVuL2RyaXZlcnMvcGFzc3Rocm91
Z2gvdnRkL2V4dGVybi5oCisrKyBiL3hlbi9kcml2ZXJzL3Bhc3N0aHJvdWdo
L3Z0ZC9leHRlcm4uaApAQCAtNjMsNyArNjMsNiBAQCBpbnQgX19tdXN0X2No
ZWNrIHFpbnZhbF9kZXZpY2VfaW90bGJfc3luCiAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICB1MTYgZGlkLCB1MTYgc2l6ZSwg
dTY0IGFkZHIpOwogCiB1bnNpZ25lZCBpbnQgZ2V0X2NhY2hlX2xpbmVfc2l6
ZSh2b2lkKTsKLXZvaWQgY2FjaGVsaW5lX2ZsdXNoKGNoYXIgKik7CiB2b2lk
IGZsdXNoX2FsbF9jYWNoZSh2b2lkKTsKIAogdTY0IGFsbG9jX3BndGFibGVf
bWFkZHIoc3RydWN0IGFjcGlfZHJoZF91bml0ICpkcmhkLCB1bnNpZ25lZCBs
b25nIG5wYWdlcyk7Ci0tLSBhL3hlbi9kcml2ZXJzL3Bhc3N0aHJvdWdoL3Z0
ZC9pb21tdS5jCisrKyBiL3hlbi9kcml2ZXJzL3Bhc3N0aHJvdWdoL3Z0ZC9p
b21tdS5jCkBAIC0zMSw2ICszMSw3IEBACiAjaW5jbHVkZSA8eGVuL3BjaV9y
ZWdzLmg+CiAjaW5jbHVkZSA8eGVuL2tleWhhbmRsZXIuaD4KICNpbmNsdWRl
IDxhc20vbXNpLmg+CisjaW5jbHVkZSA8YXNtL25vcHMuaD4KICNpbmNsdWRl
IDxhc20vaXJxLmg+CiAjaW5jbHVkZSA8YXNtL2h2bS92bXgvdm14Lmg+CiAj
aW5jbHVkZSA8YXNtL3AybS5oPgpAQCAtMTcyLDcgKzE3Myw0MiBAQCBzdGF0
aWMgdm9pZCBzeW5jX2NhY2hlKGNvbnN0IHZvaWQgKmFkZHIsCiAKICAgICBh
ZGRyIC09ICh1bnNpZ25lZCBsb25nKWFkZHIgJiAoY2xmbHVzaF9zaXplIC0g
MSk7CiAgICAgZm9yICggOyBhZGRyIDwgZW5kOyBhZGRyICs9IGNsZmx1c2hf
c2l6ZSApCi0gICAgICAgIGNhY2hlbGluZV9mbHVzaCgoY2hhciAqKWFkZHIp
OworLyoKKyAqIFRoZSBhcmd1bWVudHMgdG8gYSBtYWNybyBtdXN0IG5vdCBp
bmNsdWRlIHByZXByb2Nlc3NvciBkaXJlY3RpdmVzLiBEb2luZyBzbworICog
cmVzdWx0cyBpbiB1bmRlZmluZWQgYmVoYXZpb3IsIHNvIHdlIGhhdmUgdG8g
Y3JlYXRlIHNvbWUgZGVmaW5lcyBoZXJlIGluCisgKiBvcmRlciB0byBhdm9p
ZCBpdC4KKyAqLworI2lmIGRlZmluZWQoSEFWRV9BU19DTFdCKQorIyBkZWZp
bmUgQ0xXQl9FTkNPRElORyAiY2x3YiAlW3BdIgorI2VsaWYgZGVmaW5lZChI
QVZFX0FTX1hTQVZFT1BUKQorIyBkZWZpbmUgQ0xXQl9FTkNPRElORyAiZGF0
YTE2IHhzYXZlb3B0ICVbcF0iIC8qIGNsd2IgKi8KKyNlbHNlCisjIGRlZmlu
ZSBDTFdCX0VOQ09ESU5HICIuYnl0ZSAweDY2LCAweDBmLCAweGFlLCAweDMw
IiAvKiBjbHdiICglJXJheCkgKi8KKyNlbmRpZgorCisjZGVmaW5lIEJBU0Vf
SU5QVVQoYWRkcikgW3BdICJtIiAoKihjb25zdCBjaGFyICopKGFkZHIpKQor
I2lmIGRlZmluZWQoSEFWRV9BU19DTFdCKSB8fCBkZWZpbmVkKEhBVkVfQVNf
WFNBVkVPUFQpCisjIGRlZmluZSBJTlBVVCBCQVNFX0lOUFVUCisjZWxzZQor
IyBkZWZpbmUgSU5QVVQoYWRkcikgImEiIChhZGRyKSwgQkFTRV9JTlBVVChh
ZGRyKQorI2VuZGlmCisgICAgICAgIC8qCisgICAgICAgICAqIE5vdGUgcmVn
YXJkaW5nIHRoZSB1c2Ugb2YgTk9QX0RTX1BSRUZJWDogaXQncyBmYXN0ZXIg
dG8gZG8gYSBjbGZsdXNoCisgICAgICAgICAqICsgcHJlZml4IHRoYW4gYSBj
bGZsdXNoICsgbm9wLCBhbmQgaGVuY2UgdGhlIHByZWZpeCBpcyBhZGRlZCBp
bnN0ZWFkCisgICAgICAgICAqIG9mIGxldHRpbmcgdGhlIGFsdGVybmF0aXZl
IGZyYW1ld29yayBmaWxsIHRoZSBnYXAgYnkgYXBwZW5kaW5nIG5vcHMuCisg
ICAgICAgICAqLworICAgICAgICBhbHRlcm5hdGl2ZV9pb18yKCIuYnl0ZSAi
IF9fc3RyaW5naWZ5KE5PUF9EU19QUkVGSVgpICI7IGNsZmx1c2ggJVtwXSIs
CisgICAgICAgICAgICAgICAgICAgICAgICAgImRhdGExNiBjbGZsdXNoICVb
cF0iLCAvKiBjbGZsdXNob3B0ICovCisgICAgICAgICAgICAgICAgICAgICAg
ICAgWDg2X0ZFQVRVUkVfQ0xGTFVTSE9QVCwKKyAgICAgICAgICAgICAgICAg
ICAgICAgICBDTFdCX0VOQ09ESU5HLAorICAgICAgICAgICAgICAgICAgICAg
ICAgIFg4Nl9GRUFUVVJFX0NMV0IsIC8qIG5vIG91dHB1dHMgKi8sCisgICAg
ICAgICAgICAgICAgICAgICAgICAgSU5QVVQoYWRkcikpOworI3VuZGVmIElO
UFVUCisjdW5kZWYgQkFTRV9JTlBVVAorI3VuZGVmIENMV0JfRU5DT0RJTkcK
KworICAgIGFsdGVybmF0aXZlXzIoIiIsICJzZmVuY2UiLCBYODZfRkVBVFVS
RV9DTEZMVVNIT1BULAorICAgICAgICAgICAgICAgICAgICAgICJzZmVuY2Ui
LCBYODZfRkVBVFVSRV9DTFdCKTsKIH0KIAogLyogQWxsb2NhdGUgcGFnZSB0
YWJsZSwgcmV0dXJuIGl0cyBtYWNoaW5lIGFkZHJlc3MgKi8KLS0tIGEveGVu
L2RyaXZlcnMvcGFzc3Rocm91Z2gvdnRkL3g4Ni92dGQuYworKysgYi94ZW4v
ZHJpdmVycy9wYXNzdGhyb3VnaC92dGQveDg2L3Z0ZC5jCkBAIC01MywxMSAr
NTMsNiBAQCB1bnNpZ25lZCBpbnQgZ2V0X2NhY2hlX2xpbmVfc2l6ZSh2b2lk
KQogICAgIHJldHVybiAoKGNwdWlkX2VieCgxKSA+PiA4KSAmIDB4ZmYpICog
ODsKIH0KIAotdm9pZCBjYWNoZWxpbmVfZmx1c2goY2hhciAqIGFkZHIpCi17
Ci0gICAgY2xmbHVzaChhZGRyKTsKLX0KLQogdm9pZCBmbHVzaF9hbGxfY2Fj
aGUoKQogewogICAgIHdiaW52ZCgpOwo=

--=separator
Content-Type: application/octet-stream; name="xsa321/xsa321-4.11-7.patch"
Content-Disposition: attachment; filename="xsa321/xsa321-4.11-7.patch"
Content-Transfer-Encoding: base64

RnJvbTogPHNlY3VyaXR5QHhlbnByb2plY3Qub3JnPgpTdWJqZWN0OiB4ODYv
ZXB0OiBmbHVzaCBjYWNoZSB3aGVuIG1vZGlmeWluZyBQVEVzIGFuZCBzaGFy
aW5nIHBhZ2UgdGFibGVzCgpNb2RpZmljYXRpb25zIG1hZGUgdG8gdGhlIHBh
Z2UgdGFibGVzIGJ5IEVQVCBjb2RlIG5lZWQgdG8gYmUgd3JpdHRlbgp0byBt
ZW1vcnkgd2hlbiB0aGUgcGFnZSB0YWJsZXMgYXJlIHNoYXJlZCB3aXRoIHRo
ZSBJT01NVSwgYXMgSW50ZWwKSU9NTVVzIGNhbiBiZSBub24tY29oZXJlbnQg
YW5kIHRodXMgcmVxdWlyZSBjaGFuZ2VzIHRvIGJlIHdyaXR0ZW4gdG8KbWVt
b3J5IGluIG9yZGVyIHRvIGJlIHZpc2libGUgdG8gdGhlIElPTU1VLgoKSW4g
b3JkZXIgdG8gYWNoaWV2ZSB0aGlzIG1ha2Ugc3VyZSBkYXRhIGlzIHdyaXR0
ZW4gYmFjayB0byBtZW1vcnkKYWZ0ZXIgd3JpdGluZyBhbiBFUFQgZW50cnkg
d2hlbiB0aGUgcmVjYWxjIGJpdCBpcyBub3Qgc2V0IGluCmF0b21pY193cml0
ZV9lcHRfZW50cnkuIElmIHN1Y2ggYml0IGlzIHNldCwgdGhlIGVudHJ5IHdp
bGwgYmUKYWRqdXN0ZWQgYW5kIGF0b21pY193cml0ZV9lcHRfZW50cnkgd2ls
bCBiZSBjYWxsZWQgYSBzZWNvbmQgdGltZQp3aXRob3V0IHRoZSByZWNhbGMg
Yml0IHNldC4gTm90ZSB0aGF0IHdoZW4gc3BsaXR0aW5nIGEgc3VwZXIgcGFn
ZSB0aGUKbmV3IHRhYmxlcyByZXN1bHRpbmcgb2YgdGhlIHNwbGl0IHNob3Vs
ZCBhbHNvIGJlIHdyaXR0ZW4gYmFjay4KCkZhaWx1cmUgdG8gZG8gc28gY2Fu
IGFsbG93IGRldmljZXMgYmVoaW5kIHRoZSBJT01NVSBhY2Nlc3MgdG8gdGhl
CnN0YWxlIHN1cGVyIHBhZ2UsIG9yIGNhdXNlIGNvaGVyZW5jeSBpc3N1ZXMg
YXMgY2hhbmdlcyBtYWRlIGJ5IHRoZQpwcm9jZXNzb3IgdG8gdGhlIHBhZ2Ug
dGFibGVzIGFyZSBub3QgdmlzaWJsZSB0byB0aGUgSU9NTVUuCgpUaGlzIGFs
bG93cyB0byByZW1vdmUgdGhlIFZULWQgc3BlY2lmaWMgaW9tbXVfcHRlX2Zs
dXNoIGhlbHBlciwgc2luY2UKdGhlIGNhY2hlIHdyaXRlIGJhY2sgaXMgbm93
IHBlcmZvcm1lZCBieSBhdG9taWNfd3JpdGVfZXB0X2VudHJ5LCBhbmQKaGVu
Y2UgaW9tbXVfaW90bGJfZmx1c2ggY2FuIGJlIHVzZWQgdG8gZmx1c2ggdGhl
IElPTU1VIFRMQi4gVGhlIG5ld2x5CnVzZWQgbWV0aG9kIChpb21tdV9pb3Rs
Yl9mbHVzaCkgY2FuIHJlc3VsdCBpbiBsZXNzIGZsdXNoZXMsIHNpbmNlIGl0
Cm1pZ2h0IHNvbWV0aW1lcyBiZSBjYWxsZWQgcmlnaHRseSB3aXRoIDAgZmxh
Z3MsIGluIHdoaWNoIGNhc2UgaXQKYmVjb21lcyBhIG5vLW9wLgoKVGhpcyBp
cyBwYXJ0IG9mIFhTQS0zMjEuCgpSZXZpZXdlZC1ieTogSmFuIEJldWxpY2gg
PGpiZXVsaWNoQHN1c2UuY29tPgoKLS0tIGEveGVuL2FyY2gveDg2L21tL3Ay
bS1lcHQuYworKysgYi94ZW4vYXJjaC94ODYvbW0vcDJtLWVwdC5jCkBAIC05
MCw2ICs5MCwxOSBAQCBzdGF0aWMgaW50IGF0b21pY193cml0ZV9lcHRfZW50
cnkoZXB0X2VuCiAKICAgICB3cml0ZV9hdG9taWMoJmVudHJ5cHRyLT5lcHRl
LCBuZXcuZXB0ZSk7CiAKKyAgICAvKgorICAgICAqIFRoZSByZWNhbGMgZmll
bGQgb24gdGhlIEVQVCBpcyB1c2VkIHRvIHNpZ25hbCBlaXRoZXIgdGhhdCBh
CisgICAgICogcmVjYWxjdWxhdGlvbiBvZiB0aGUgRU1UIGZpZWxkIGlzIHJl
cXVpcmVkICh3aGljaCBkb2Vzbid0IGVmZmVjdCB0aGUKKyAgICAgKiBJT01N
VSksIG9yIGEgdHlwZSBjaGFuZ2UuIFR5cGUgY2hhbmdlcyBjYW4gb25seSBi
ZSBiZXR3ZWVuIHJhbV9ydywKKyAgICAgKiBsb2dkaXJ0eSBhbmQgaW9yZXFf
c2VydmVyOiBjaGFuZ2VzIHRvL2Zyb20gbG9nZGlydHkgd29uJ3Qgd29yayB3
ZWxsIHdpdGgKKyAgICAgKiBhbiBJT01NVSBhbnl3YXksIGFzIElPTU1VICNQ
RnMgYXJlIG5vdCBzeW5jaHJvbm91cyBhbmQgd2lsbCBsZWFkIHRvCisgICAg
ICogYWJvcnRzLCBhbmQgY2hhbmdlcyB0by9mcm9tIGlvcmVxX3NlcnZlciBh
cmUgYWxyZWFkeSBmdWxseSBmbHVzaGVkCisgICAgICogYmVmb3JlIHJldHVy
bmluZyB0byBndWVzdCBjb250ZXh0IChzZWUKKyAgICAgKiBYRU5fRE1PUF9t
YXBfbWVtX3R5cGVfdG9faW9yZXFfc2VydmVyKS4KKyAgICAgKi8KKyAgICBp
ZiAoICFuZXcucmVjYWxjICYmIGlvbW11X2hhcF9wdF9zaGFyZSApCisgICAg
ICAgIGlvbW11X3N5bmNfY2FjaGUoZW50cnlwdHIsIHNpemVvZigqZW50cnlw
dHIpKTsKKwogICAgIGlmICggdW5saWtlbHkob2xkbWZuICE9IG1mbl94KElO
VkFMSURfTUZOKSkgKQogICAgICAgICBwdXRfcGFnZShtZm5fdG9fcGFnZShf
bWZuKG9sZG1mbikpKTsKIApAQCAtMzE5LDYgKzMzMiw5IEBAIHN0YXRpYyBi
b29sX3QgZXB0X3NwbGl0X3N1cGVyX3BhZ2Uoc3RydWMKICAgICAgICAgICAg
IGJyZWFrOwogICAgIH0KIAorICAgIGlmICggaW9tbXVfaGFwX3B0X3NoYXJl
ICkKKyAgICAgICAgaW9tbXVfc3luY19jYWNoZSh0YWJsZSwgRVBUX1BBR0VU
QUJMRV9FTlRSSUVTICogc2l6ZW9mKGVwdF9lbnRyeV90KSk7CisKICAgICB1
bm1hcF9kb21haW5fcGFnZSh0YWJsZSk7CiAKICAgICAvKiBFdmVuIGZhaWxl
ZCB3ZSBzaG91bGQgaW5zdGFsbCB0aGUgbmV3bHkgYWxsb2NhdGVkIGVwdCBw
YWdlLiAqLwpAQCAtMzc4LDYgKzM5NCw5IEBAIHN0YXRpYyBpbnQgZXB0X25l
eHRfbGV2ZWwoc3RydWN0IHAybV9kb20KICAgICAgICAgaWYgKCAhbmV4dCAp
CiAgICAgICAgICAgICByZXR1cm4gR1VFU1RfVEFCTEVfTUFQX0ZBSUxFRDsK
IAorICAgICAgICBpZiAoIGlvbW11X2hhcF9wdF9zaGFyZSApCisgICAgICAg
ICAgICBpb21tdV9zeW5jX2NhY2hlKG5leHQsIEVQVF9QQUdFVEFCTEVfRU5U
UklFUyAqIHNpemVvZihlcHRfZW50cnlfdCkpOworCiAgICAgICAgIHJjID0g
YXRvbWljX3dyaXRlX2VwdF9lbnRyeShlcHRfZW50cnksIGUsIG5leHRfbGV2
ZWwpOwogICAgICAgICBBU1NFUlQocmMgPT0gMCk7CiAgICAgfQpAQCAtODc1
LDcgKzg5NCw3IEBAIG91dDoKICAgICAgICAgIG5lZWRfbW9kaWZ5X3Z0ZF90
YWJsZSApCiAgICAgewogICAgICAgICBpZiAoIGlvbW11X2hhcF9wdF9zaGFy
ZSApCi0gICAgICAgICAgICByYyA9IGlvbW11X3B0ZV9mbHVzaChkLCBnZm4s
ICZlcHRfZW50cnktPmVwdGUsIG9yZGVyLCB2dGRfcHRlX3ByZXNlbnQpOwor
ICAgICAgICAgICAgcmMgPSBpb21tdV9mbHVzaF9pb3RsYihkLCBnZm4sIHZ0
ZF9wdGVfcHJlc2VudCwgMXUgPDwgb3JkZXIpOwogICAgICAgICBlbHNlCiAg
ICAgICAgIHsKICAgICAgICAgICAgIGlmICggaW9tbXVfZmxhZ3MgKQotLS0g
YS94ZW4vZHJpdmVycy9wYXNzdGhyb3VnaC92dGQvaW9tbXUuYworKysgYi94
ZW4vZHJpdmVycy9wYXNzdGhyb3VnaC92dGQvaW9tbXUuYwpAQCAtNjEyLDEw
ICs2MTIsOCBAQCBzdGF0aWMgaW50IF9fbXVzdF9jaGVjayBpb21tdV9mbHVz
aF9hbGwoCiAgICAgcmV0dXJuIHJjOwogfQogCi1zdGF0aWMgaW50IF9fbXVz
dF9jaGVjayBpb21tdV9mbHVzaF9pb3RsYihzdHJ1Y3QgZG9tYWluICpkLAot
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgdW5z
aWduZWQgbG9uZyBnZm4sCi0gICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICBib29sX3QgZG1hX29sZF9wdGVfcHJlc2VudCwKLSAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIHVuc2ln
bmVkIGludCBwYWdlX2NvdW50KQoraW50IGlvbW11X2ZsdXNoX2lvdGxiKHN0
cnVjdCBkb21haW4gKmQsIHVuc2lnbmVkIGxvbmcgZ2ZuLAorICAgICAgICAg
ICAgICAgICAgICAgIGJvb2wgZG1hX29sZF9wdGVfcHJlc2VudCwgdW5zaWdu
ZWQgaW50IHBhZ2VfY291bnQpCiB7CiAgICAgc3RydWN0IGRvbWFpbl9pb21t
dSAqaGQgPSBkb21faW9tbXUoZCk7CiAgICAgc3RydWN0IGFjcGlfZHJoZF91
bml0ICpkcmhkOwpAQCAtMTg4MCw1MyArMTg3OCw2IEBAIHN0YXRpYyBpbnQg
X19tdXN0X2NoZWNrIGludGVsX2lvbW11X3VubWEKICAgICByZXR1cm4gZG1h
X3B0ZV9jbGVhcl9vbmUoZCwgKHBhZGRyX3QpZ2ZuIDw8IFBBR0VfU0hJRlRf
NEspOwogfQogCi1pbnQgaW9tbXVfcHRlX2ZsdXNoKHN0cnVjdCBkb21haW4g
KmQsIHU2NCBnZm4sIHU2NCAqcHRlLAotICAgICAgICAgICAgICAgICAgICBp
bnQgb3JkZXIsIGludCBwcmVzZW50KQotewotICAgIHN0cnVjdCBhY3BpX2Ry
aGRfdW5pdCAqZHJoZDsKLSAgICBzdHJ1Y3QgaW9tbXUgKmlvbW11ID0gTlVM
TDsKLSAgICBzdHJ1Y3QgZG9tYWluX2lvbW11ICpoZCA9IGRvbV9pb21tdShk
KTsKLSAgICBib29sX3QgZmx1c2hfZGV2X2lvdGxiOwotICAgIGludCBpb21t
dV9kb21pZDsKLSAgICBpbnQgcmMgPSAwOwotCi0gICAgaW9tbXVfc3luY19j
YWNoZShwdGUsIHNpemVvZihzdHJ1Y3QgZG1hX3B0ZSkpOwotCi0gICAgZm9y
X2VhY2hfZHJoZF91bml0ICggZHJoZCApCi0gICAgewotICAgICAgICBpb21t
dSA9IGRyaGQtPmlvbW11OwotICAgICAgICBpZiAoICF0ZXN0X2JpdChpb21t
dS0+aW5kZXgsICZoZC0+YXJjaC5pb21tdV9iaXRtYXApICkKLSAgICAgICAg
ICAgIGNvbnRpbnVlOwotCi0gICAgICAgIGZsdXNoX2Rldl9pb3RsYiA9ICEh
ZmluZF9hdHNfZGV2X2RyaGQoaW9tbXUpOwotICAgICAgICBpb21tdV9kb21p
ZD0gZG9tYWluX2lvbW11X2RvbWlkKGQsIGlvbW11KTsKLSAgICAgICAgaWYg
KCBpb21tdV9kb21pZCA9PSAtMSApCi0gICAgICAgICAgICBjb250aW51ZTsK
LQotICAgICAgICByYyA9IGlvbW11X2ZsdXNoX2lvdGxiX3BzaShpb21tdSwg
aW9tbXVfZG9taWQsCi0gICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgIChwYWRkcl90KWdmbiA8PCBQQUdFX1NISUZUXzRLLAotICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICBvcmRlciwgIXByZXNlbnQsIGZs
dXNoX2Rldl9pb3RsYik7Ci0gICAgICAgIGlmICggcmMgPiAwICkKLSAgICAg
ICAgewotICAgICAgICAgICAgaW9tbXVfZmx1c2hfd3JpdGVfYnVmZmVyKGlv
bW11KTsKLSAgICAgICAgICAgIHJjID0gMDsKLSAgICAgICAgfQotICAgIH0K
LQotICAgIGlmICggdW5saWtlbHkocmMpICkKLSAgICB7Ci0gICAgICAgIGlm
ICggIWQtPmlzX3NodXR0aW5nX2Rvd24gJiYgcHJpbnRrX3JhdGVsaW1pdCgp
ICkKLSAgICAgICAgICAgIHByaW50ayhYRU5MT0dfRVJSIFZURFBSRUZJWAot
ICAgICAgICAgICAgICAgICAgICIgZCVkOiBJT01NVSBwYWdlcyBmbHVzaCBm
YWlsZWQ6ICVkXG4iLAotICAgICAgICAgICAgICAgICAgIGQtPmRvbWFpbl9p
ZCwgcmMpOwotCi0gICAgICAgIGlmICggIWlzX2hhcmR3YXJlX2RvbWFpbihk
KSApCi0gICAgICAgICAgICBkb21haW5fY3Jhc2goZCk7Ci0gICAgfQotCi0g
ICAgcmV0dXJuIHJjOwotfQotCiBzdGF0aWMgaW50IF9faW5pdCB2dGRfZXB0
X3BhZ2VfY29tcGF0aWJsZShzdHJ1Y3QgaW9tbXUgKmlvbW11KQogewogICAg
IHU2NCBlcHRfY2FwLCB2dGRfY2FwID0gaW9tbXUtPmNhcDsKLS0tIGEveGVu
L2luY2x1ZGUvYXNtLXg4Ni9pb21tdS5oCisrKyBiL3hlbi9pbmNsdWRlL2Fz
bS14ODYvaW9tbXUuaApAQCAtODcsOCArODcsOSBAQCBpbnQgaW9tbXVfc2V0
dXBfaHBldF9tc2koc3RydWN0IG1zaV9kZXNjCiAKIC8qIFdoaWxlIFZULWQg
c3BlY2lmaWMsIHRoaXMgbXVzdCBnZXQgZGVjbGFyZWQgaW4gYSBnZW5lcmlj
IGhlYWRlci4gKi8KIGludCBhZGp1c3RfdnRkX2lycV9hZmZpbml0aWVzKHZv
aWQpOwotaW50IF9fbXVzdF9jaGVjayBpb21tdV9wdGVfZmx1c2goc3RydWN0
IGRvbWFpbiAqZCwgdTY0IGdmbiwgdTY0ICpwdGUsCi0gICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICBpbnQgb3JkZXIsIGludCBwcmVzZW50KTsK
K2ludCBfX211c3RfY2hlY2sgaW9tbXVfZmx1c2hfaW90bGIoc3RydWN0IGRv
bWFpbiAqZCwgdW5zaWduZWQgbG9uZyBnZm4sCisgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgIGJvb2wgZG1hX29sZF9wdGVfcHJlc2VudCwK
KyAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgdW5zaWduZWQg
aW50IHBhZ2VfY291bnQpOwogYm9vbF90IGlvbW11X3N1cHBvcnRzX2VpbSh2
b2lkKTsKIGludCBpb21tdV9lbmFibGVfeDJhcGljX0lSKHZvaWQpOwogdm9p
ZCBpb21tdV9kaXNhYmxlX3gyYXBpY19JUih2b2lkKTsK

--=separator
Content-Type: application/octet-stream; name="xsa321/xsa321-4.12-1.patch"
Content-Disposition: attachment; filename="xsa321/xsa321-4.12-1.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiB2dGQ6IGltcHJvdmUgSU9NTVUgVExCIGZsdXNoCgpEbyBub3QgbGltaXQg
UFNJIGZsdXNoZXMgdG8gb3JkZXIgMCBwYWdlcywgaW4gb3JkZXIgdG8gYXZv
aWQgZG9pbmcgYQpmdWxsIFRMQiBmbHVzaCBpZiB0aGUgcGFzc2VkIGluIHBh
Z2UgaGFzIGFuIG9yZGVyIGdyZWF0ZXIgdGhhbiAwIGFuZAppcyBhbGlnbmVk
LiBTaG91bGQgaW5jcmVhc2UgdGhlIHBlcmZvcm1hbmNlIG9mIElPTU1VIFRM
QiBmbHVzaGVzIHdoZW4KZGVhbGluZyB3aXRoIHBhZ2Ugb3JkZXJzIGdyZWF0
ZXIgdGhhbiAwLgoKVGhpcyBpcyBwYXJ0IG9mIFhTQS0zMjEuCgpTaWduZWQt
b2ZmLWJ5OiBKYW4gQmV1bGljaCA8amJldWxpY2hAc3VzZS5jb20+CgotLS0g
YS94ZW4vZHJpdmVycy9wYXNzdGhyb3VnaC92dGQvaW9tbXUuYworKysgYi94
ZW4vZHJpdmVycy9wYXNzdGhyb3VnaC92dGQvaW9tbXUuYwpAQCAtNjExLDEz
ICs2MTEsMTQgQEAgc3RhdGljIGludCBfX211c3RfY2hlY2sgaW9tbXVfZmx1
c2hfaW90bAogICAgICAgICBpZiAoIGlvbW11X2RvbWlkID09IC0xICkKICAg
ICAgICAgICAgIGNvbnRpbnVlOwogCi0gICAgICAgIGlmICggcGFnZV9jb3Vu
dCAhPSAxIHx8IGRmbl9lcShkZm4sIElOVkFMSURfREZOKSApCisgICAgICAg
IGlmICggIXBhZ2VfY291bnQgfHwgKHBhZ2VfY291bnQgJiAocGFnZV9jb3Vu
dCAtIDEpKSB8fAorICAgICAgICAgICAgIGRmbl9lcShkZm4sIElOVkFMSURf
REZOKSB8fCAhSVNfQUxJR05FRChkZm5feChkZm4pLCBwYWdlX2NvdW50KSAp
CiAgICAgICAgICAgICByYyA9IGlvbW11X2ZsdXNoX2lvdGxiX2RzaShpb21t
dSwgaW9tbXVfZG9taWQsCiAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAwLCBmbHVzaF9kZXZfaW90bGIpOwogICAgICAgICBlbHNl
CiAgICAgICAgICAgICByYyA9IGlvbW11X2ZsdXNoX2lvdGxiX3BzaShpb21t
dSwgaW9tbXVfZG9taWQsCiAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICBkZm5fdG9fZGFkZHIoZGZuKSwKLSAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgIFBBR0VfT1JERVJfNEssCisgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBnZXRfb3JkZXJf
ZnJvbV9wYWdlcyhwYWdlX2NvdW50KSwKICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICFkbWFfb2xkX3B0ZV9wcmVzZW50LAogICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgZmx1c2hfZGV2
X2lvdGxiKTsKIAo=

--=separator
Content-Type: application/octet-stream; name="xsa321/xsa321-4.12-2.patch"
Content-Disposition: attachment; filename="xsa321/xsa321-4.12-2.patch"
Content-Transfer-Encoding: base64

RnJvbTogPHNlY3VyaXR5QHhlbnByb2plY3Qub3JnPgpTdWJqZWN0OiB2dGQ6
IHBydW5lIChhbmQgcmVuYW1lKSBjYWNoZSBmbHVzaCBmdW5jdGlvbnMKClJl
bmFtZSBfX2lvbW11X2ZsdXNoX2NhY2hlIHRvIGlvbW11X3N5bmNfY2FjaGUg
YW5kIHJlbW92ZQppb21tdV9mbHVzaF9jYWNoZV9wYWdlLiBBbHNvIHJlbW92
ZSB0aGUgaW9tbXVfZmx1c2hfY2FjaGVfZW50cnkKd3JhcHBlciBhbmQganVz
dCB1c2UgaW9tbXVfc3luY19jYWNoZSBpbnN0ZWFkLiBOb3RlIHRoZSBfZW50
cnkgc3VmZml4CndhcyBtZWFuaW5nbGVzcyBhcyB0aGUgd3JhcHBlciB3YXMg
YWxyZWFkeSB0YWtpbmcgYSBzaXplIHBhcmFtZXRlciBpbgpieXRlcy4gV2hp
bGUgdGhlcmUgYWxzbyBjb25zdGlmeSB0aGUgYWRkciBwYXJhbWV0ZXIuCgpO
byBmdW5jdGlvbmFsIGNoYW5nZSBpbnRlbmRlZC4KClRoaXMgaXMgcGFydCBv
ZiBYU0EtMzIxLgoKUmV2aWV3ZWQtYnk6IEphbiBCZXVsaWNoIDxqYmV1bGlj
aEBzdXNlLmNvbT4KCi0tLSBhL3hlbi9kcml2ZXJzL3Bhc3N0aHJvdWdoL3Z0
ZC9leHRlcm4uaAorKysgYi94ZW4vZHJpdmVycy9wYXNzdGhyb3VnaC92dGQv
ZXh0ZXJuLmgKQEAgLTM4LDggKzM4LDcgQEAgdm9pZCBkaXNhYmxlX3FpbnZh
bChzdHJ1Y3QgaW9tbXUgKmlvbW11KQogaW50IGVuYWJsZV9pbnRyZW1hcChz
dHJ1Y3QgaW9tbXUgKmlvbW11LCBpbnQgZWltKTsKIHZvaWQgZGlzYWJsZV9p
bnRyZW1hcChzdHJ1Y3QgaW9tbXUgKmlvbW11KTsKIAotdm9pZCBpb21tdV9m
bHVzaF9jYWNoZV9lbnRyeSh2b2lkICphZGRyLCB1bnNpZ25lZCBpbnQgc2l6
ZSk7Ci12b2lkIGlvbW11X2ZsdXNoX2NhY2hlX3BhZ2Uodm9pZCAqYWRkciwg
dW5zaWduZWQgbG9uZyBucGFnZXMpOwordm9pZCBpb21tdV9zeW5jX2NhY2hl
KGNvbnN0IHZvaWQgKmFkZHIsIHVuc2lnbmVkIGludCBzaXplKTsKIGludCBp
b21tdV9hbGxvYyhzdHJ1Y3QgYWNwaV9kcmhkX3VuaXQgKmRyaGQpOwogdm9p
ZCBpb21tdV9mcmVlKHN0cnVjdCBhY3BpX2RyaGRfdW5pdCAqZHJoZCk7CiAK
LS0tIGEveGVuL2RyaXZlcnMvcGFzc3Rocm91Z2gvdnRkL2ludHJlbWFwLmMK
KysrIGIveGVuL2RyaXZlcnMvcGFzc3Rocm91Z2gvdnRkL2ludHJlbWFwLmMK
QEAgLTIzMSw3ICsyMzEsNyBAQCBzdGF0aWMgdm9pZCBmcmVlX3JlbWFwX2Vu
dHJ5KHN0cnVjdCBpb21tCiAgICAgICAgICAgICAgICAgICAgICBpcmVtYXBf
ZW50cmllcywgaXJlbWFwX2VudHJ5KTsKIAogICAgIHVwZGF0ZV9pcnRlKGlv
bW11LCBpcmVtYXBfZW50cnksICZuZXdfaXJlLCBmYWxzZSk7Ci0gICAgaW9t
bXVfZmx1c2hfY2FjaGVfZW50cnkoaXJlbWFwX2VudHJ5LCBzaXplb2YoKmly
ZW1hcF9lbnRyeSkpOworICAgIGlvbW11X3N5bmNfY2FjaGUoaXJlbWFwX2Vu
dHJ5LCBzaXplb2YoKmlyZW1hcF9lbnRyeSkpOwogICAgIGlvbW11X2ZsdXNo
X2llY19pbmRleChpb21tdSwgMCwgaW5kZXgpOwogCiAgICAgdW5tYXBfdnRk
X2RvbWFpbl9wYWdlKGlyZW1hcF9lbnRyaWVzKTsKQEAgLTQwMyw3ICs0MDMs
NyBAQCBzdGF0aWMgaW50IGlvYXBpY19ydGVfdG9fcmVtYXBfZW50cnkoc3Ry
CiAgICAgfQogCiAgICAgdXBkYXRlX2lydGUoaW9tbXUsIGlyZW1hcF9lbnRy
eSwgJm5ld19pcmUsICFpbml0KTsKLSAgICBpb21tdV9mbHVzaF9jYWNoZV9l
bnRyeShpcmVtYXBfZW50cnksIHNpemVvZigqaXJlbWFwX2VudHJ5KSk7Cisg
ICAgaW9tbXVfc3luY19jYWNoZShpcmVtYXBfZW50cnksIHNpemVvZigqaXJl
bWFwX2VudHJ5KSk7CiAgICAgaW9tbXVfZmx1c2hfaWVjX2luZGV4KGlvbW11
LCAwLCBpbmRleCk7CiAKICAgICB1bm1hcF92dGRfZG9tYWluX3BhZ2UoaXJl
bWFwX2VudHJpZXMpOwpAQCAtNjk0LDcgKzY5NCw3IEBAIHN0YXRpYyBpbnQg
bXNpX21zZ190b19yZW1hcF9lbnRyeSgKICAgICB1cGRhdGVfaXJ0ZShpb21t
dSwgaXJlbWFwX2VudHJ5LCAmbmV3X2lyZSwgbXNpX2Rlc2MtPmlydGVfaW5p
dGlhbGl6ZWQpOwogICAgIG1zaV9kZXNjLT5pcnRlX2luaXRpYWxpemVkID0g
dHJ1ZTsKIAotICAgIGlvbW11X2ZsdXNoX2NhY2hlX2VudHJ5KGlyZW1hcF9l
bnRyeSwgc2l6ZW9mKCppcmVtYXBfZW50cnkpKTsKKyAgICBpb21tdV9zeW5j
X2NhY2hlKGlyZW1hcF9lbnRyeSwgc2l6ZW9mKCppcmVtYXBfZW50cnkpKTsK
ICAgICBpb21tdV9mbHVzaF9pZWNfaW5kZXgoaW9tbXUsIDAsIGluZGV4KTsK
IAogICAgIHVubWFwX3Z0ZF9kb21haW5fcGFnZShpcmVtYXBfZW50cmllcyk7
Ci0tLSBhL3hlbi9kcml2ZXJzL3Bhc3N0aHJvdWdoL3Z0ZC9pb21tdS5jCisr
KyBiL3hlbi9kcml2ZXJzL3Bhc3N0aHJvdWdoL3Z0ZC9pb21tdS5jCkBAIC0x
NTgsNyArMTU4LDggQEAgc3RhdGljIHZvaWQgX19pbml0IGZyZWVfaW50ZWxf
aW9tbXUoc3RydQogfQogCiBzdGF0aWMgaW50IGlvbW11c19pbmNvaGVyZW50
Owotc3RhdGljIHZvaWQgX19pb21tdV9mbHVzaF9jYWNoZSh2b2lkICphZGRy
LCB1bnNpZ25lZCBpbnQgc2l6ZSkKKwordm9pZCBpb21tdV9zeW5jX2NhY2hl
KGNvbnN0IHZvaWQgKmFkZHIsIHVuc2lnbmVkIGludCBzaXplKQogewogICAg
IGludCBpOwogICAgIHN0YXRpYyB1bnNpZ25lZCBpbnQgY2xmbHVzaF9zaXpl
ID0gMDsKQEAgLTE3MywxNiArMTc0LDYgQEAgc3RhdGljIHZvaWQgX19pb21t
dV9mbHVzaF9jYWNoZSh2b2lkICphZAogICAgICAgICBjYWNoZWxpbmVfZmx1
c2goKGNoYXIgKilhZGRyICsgaSk7CiB9CiAKLXZvaWQgaW9tbXVfZmx1c2hf
Y2FjaGVfZW50cnkodm9pZCAqYWRkciwgdW5zaWduZWQgaW50IHNpemUpCi17
Ci0gICAgX19pb21tdV9mbHVzaF9jYWNoZShhZGRyLCBzaXplKTsKLX0KLQot
dm9pZCBpb21tdV9mbHVzaF9jYWNoZV9wYWdlKHZvaWQgKmFkZHIsIHVuc2ln
bmVkIGxvbmcgbnBhZ2VzKQotewotICAgIF9faW9tbXVfZmx1c2hfY2FjaGUo
YWRkciwgUEFHRV9TSVpFICogbnBhZ2VzKTsKLX0KLQogLyogQWxsb2NhdGUg
cGFnZSB0YWJsZSwgcmV0dXJuIGl0cyBtYWNoaW5lIGFkZHJlc3MgKi8KIHU2
NCBhbGxvY19wZ3RhYmxlX21hZGRyKHN0cnVjdCBhY3BpX2RyaGRfdW5pdCAq
ZHJoZCwgdW5zaWduZWQgbG9uZyBucGFnZXMpCiB7CkBAIC0yMDcsNyArMTk4
LDcgQEAgdTY0IGFsbG9jX3BndGFibGVfbWFkZHIoc3RydWN0IGFjcGlfZHJo
ZAogICAgICAgICB2YWRkciA9IF9fbWFwX2RvbWFpbl9wYWdlKGN1cl9wZyk7
CiAgICAgICAgIG1lbXNldCh2YWRkciwgMCwgUEFHRV9TSVpFKTsKIAotICAg
ICAgICBpb21tdV9mbHVzaF9jYWNoZV9wYWdlKHZhZGRyLCAxKTsKKyAgICAg
ICAgaW9tbXVfc3luY19jYWNoZSh2YWRkciwgUEFHRV9TSVpFKTsKICAgICAg
ICAgdW5tYXBfZG9tYWluX3BhZ2UodmFkZHIpOwogICAgICAgICBjdXJfcGcr
KzsKICAgICB9CkBAIC0yNDIsNyArMjMzLDcgQEAgc3RhdGljIHU2NCBidXNf
dG9fY29udGV4dF9tYWRkcihzdHJ1Y3QgaQogICAgICAgICB9CiAgICAgICAg
IHNldF9yb290X3ZhbHVlKCpyb290LCBtYWRkcik7CiAgICAgICAgIHNldF9y
b290X3ByZXNlbnQoKnJvb3QpOwotICAgICAgICBpb21tdV9mbHVzaF9jYWNo
ZV9lbnRyeShyb290LCBzaXplb2Yoc3RydWN0IHJvb3RfZW50cnkpKTsKKyAg
ICAgICAgaW9tbXVfc3luY19jYWNoZShyb290LCBzaXplb2Yoc3RydWN0IHJv
b3RfZW50cnkpKTsKICAgICB9CiAgICAgbWFkZHIgPSAodTY0KSBnZXRfY29u
dGV4dF9hZGRyKCpyb290KTsKICAgICB1bm1hcF92dGRfZG9tYWluX3BhZ2Uo
cm9vdF9lbnRyaWVzKTsKQEAgLTMwMCw3ICsyOTEsNyBAQCBzdGF0aWMgdTY0
IGFkZHJfdG9fZG1hX3BhZ2VfbWFkZHIoc3RydWN0CiAgICAgICAgICAgICAg
Ki8KICAgICAgICAgICAgIGRtYV9zZXRfcHRlX3JlYWRhYmxlKCpwdGUpOwog
ICAgICAgICAgICAgZG1hX3NldF9wdGVfd3JpdGFibGUoKnB0ZSk7Ci0gICAg
ICAgICAgICBpb21tdV9mbHVzaF9jYWNoZV9lbnRyeShwdGUsIHNpemVvZihz
dHJ1Y3QgZG1hX3B0ZSkpOworICAgICAgICAgICAgaW9tbXVfc3luY19jYWNo
ZShwdGUsIHNpemVvZihzdHJ1Y3QgZG1hX3B0ZSkpOwogICAgICAgICB9CiAK
ICAgICAgICAgaWYgKCBsZXZlbCA9PSAyICkKQEAgLTY4MSw3ICs2NzIsNyBA
QCBzdGF0aWMgaW50IF9fbXVzdF9jaGVjayBkbWFfcHRlX2NsZWFyX29uCiAg
ICAgKmZsdXNoX2ZsYWdzIHw9IElPTU1VX0ZMVVNIRl9tb2RpZmllZDsKIAog
ICAgIHNwaW5fdW5sb2NrKCZoZC0+YXJjaC5tYXBwaW5nX2xvY2spOwotICAg
IGlvbW11X2ZsdXNoX2NhY2hlX2VudHJ5KHB0ZSwgc2l6ZW9mKHN0cnVjdCBk
bWFfcHRlKSk7CisgICAgaW9tbXVfc3luY19jYWNoZShwdGUsIHNpemVvZihz
dHJ1Y3QgZG1hX3B0ZSkpOwogCiAgICAgdW5tYXBfdnRkX2RvbWFpbl9wYWdl
KHBhZ2UpOwogCkBAIC03MjAsNyArNzExLDcgQEAgc3RhdGljIHZvaWQgaW9t
bXVfZnJlZV9wYWdlX3RhYmxlKHN0cnVjdAogICAgICAgICAgICAgaW9tbXVf
ZnJlZV9wYWdldGFibGUoZG1hX3B0ZV9hZGRyKCpwdGUpLCBuZXh0X2xldmVs
KTsKIAogICAgICAgICBkbWFfY2xlYXJfcHRlKCpwdGUpOwotICAgICAgICBp
b21tdV9mbHVzaF9jYWNoZV9lbnRyeShwdGUsIHNpemVvZihzdHJ1Y3QgZG1h
X3B0ZSkpOworICAgICAgICBpb21tdV9zeW5jX2NhY2hlKHB0ZSwgc2l6ZW9m
KHN0cnVjdCBkbWFfcHRlKSk7CiAgICAgfQogCiAgICAgdW5tYXBfdnRkX2Rv
bWFpbl9wYWdlKHB0X3ZhZGRyKTsKQEAgLTE0NDksNyArMTQ0MCw3IEBAIGlu
dCBkb21haW5fY29udGV4dF9tYXBwaW5nX29uZSgKICAgICBjb250ZXh0X3Nl
dF9hZGRyZXNzX3dpZHRoKCpjb250ZXh0LCBhZ2F3KTsKICAgICBjb250ZXh0
X3NldF9mYXVsdF9lbmFibGUoKmNvbnRleHQpOwogICAgIGNvbnRleHRfc2V0
X3ByZXNlbnQoKmNvbnRleHQpOwotICAgIGlvbW11X2ZsdXNoX2NhY2hlX2Vu
dHJ5KGNvbnRleHQsIHNpemVvZihzdHJ1Y3QgY29udGV4dF9lbnRyeSkpOwor
ICAgIGlvbW11X3N5bmNfY2FjaGUoY29udGV4dCwgc2l6ZW9mKHN0cnVjdCBj
b250ZXh0X2VudHJ5KSk7CiAgICAgc3Bpbl91bmxvY2soJmlvbW11LT5sb2Nr
KTsKIAogICAgIC8qIENvbnRleHQgZW50cnkgd2FzIHByZXZpb3VzbHkgbm9u
LXByZXNlbnQgKHdpdGggZG9taWQgMCkuICovCkBAIC0xNjAyLDcgKzE1OTMs
NyBAQCBpbnQgZG9tYWluX2NvbnRleHRfdW5tYXBfb25lKAogCiAgICAgY29u
dGV4dF9jbGVhcl9wcmVzZW50KCpjb250ZXh0KTsKICAgICBjb250ZXh0X2Ns
ZWFyX2VudHJ5KCpjb250ZXh0KTsKLSAgICBpb21tdV9mbHVzaF9jYWNoZV9l
bnRyeShjb250ZXh0LCBzaXplb2Yoc3RydWN0IGNvbnRleHRfZW50cnkpKTsK
KyAgICBpb21tdV9zeW5jX2NhY2hlKGNvbnRleHQsIHNpemVvZihzdHJ1Y3Qg
Y29udGV4dF9lbnRyeSkpOwogCiAgICAgaW9tbXVfZG9taWQ9IGRvbWFpbl9p
b21tdV9kb21pZChkb21haW4sIGlvbW11KTsKICAgICBpZiAoIGlvbW11X2Rv
bWlkID09IC0xICkKQEAgLTE4MzcsNyArMTgyOCw3IEBAIHN0YXRpYyBpbnQg
X19tdXN0X2NoZWNrIGludGVsX2lvbW11X21hcF8KIAogICAgICpwdGUgPSBu
ZXc7CiAKLSAgICBpb21tdV9mbHVzaF9jYWNoZV9lbnRyeShwdGUsIHNpemVv
ZihzdHJ1Y3QgZG1hX3B0ZSkpOworICAgIGlvbW11X3N5bmNfY2FjaGUocHRl
LCBzaXplb2Yoc3RydWN0IGRtYV9wdGUpKTsKICAgICBzcGluX3VubG9jaygm
aGQtPmFyY2gubWFwcGluZ19sb2NrKTsKICAgICB1bm1hcF92dGRfZG9tYWlu
X3BhZ2UocGFnZSk7CiAKQEAgLTE5MTIsNyArMTkwMyw3IEBAIGludCBpb21t
dV9wdGVfZmx1c2goc3RydWN0IGRvbWFpbiAqZCwgdWkKICAgICBpbnQgaW9t
bXVfZG9taWQ7CiAgICAgaW50IHJjID0gMDsKIAotICAgIGlvbW11X2ZsdXNo
X2NhY2hlX2VudHJ5KHB0ZSwgc2l6ZW9mKHN0cnVjdCBkbWFfcHRlKSk7Cisg
ICAgaW9tbXVfc3luY19jYWNoZShwdGUsIHNpemVvZihzdHJ1Y3QgZG1hX3B0
ZSkpOwogCiAgICAgZm9yX2VhY2hfZHJoZF91bml0ICggZHJoZCApCiAgICAg
ewpAQCAtMjc3Nyw3ICsyNzY4LDcgQEAgc3RhdGljIGludCBfX2luaXQgaW50
ZWxfaW9tbXVfcXVhcmFudGluZQogICAgICAgICAgICAgZG1hX3NldF9wdGVf
YWRkcigqcHRlLCBtYWRkcik7CiAgICAgICAgICAgICBkbWFfc2V0X3B0ZV9y
ZWFkYWJsZSgqcHRlKTsKICAgICAgICAgfQotICAgICAgICBpb21tdV9mbHVz
aF9jYWNoZV9wYWdlKHBhcmVudCwgMSk7CisgICAgICAgIGlvbW11X3N5bmNf
Y2FjaGUocGFyZW50LCBQQUdFX1NJWkUpOwogCiAgICAgICAgIHVubWFwX3Z0
ZF9kb21haW5fcGFnZShwYXJlbnQpOwogICAgICAgICBwYXJlbnQgPSBtYXBf
dnRkX2RvbWFpbl9wYWdlKG1hZGRyKTsK

--=separator
Content-Type: application/octet-stream; name="xsa321/xsa321-4.12-3.patch"
Content-Disposition: attachment; filename="xsa321/xsa321-4.12-3.patch"
Content-Transfer-Encoding: base64

RnJvbTogPHNlY3VyaXR5QHhlbnByb2plY3Qub3JnPgpTdWJqZWN0OiB4ODYv
aW9tbXU6IGludHJvZHVjZSBhIGNhY2hlIHN5bmMgaG9vawoKVGhlIGhvb2sg
aXMgb25seSBpbXBsZW1lbnRlZCBmb3IgVlQtZCBhbmQgaXQgdXNlcyB0aGUg
YWxyZWFkeSBleGlzdGluZwppb21tdV9zeW5jX2NhY2hlIGZ1bmN0aW9uIHBy
ZXNlbnQgaW4gVlQtZCBjb2RlLiBUaGUgbmV3IGhvb2sgaXMKYWRkZWQgc28g
dGhhdCB0aGUgY2FjaGUgY2FuIGJlIGZsdXNoZWQgYnkgY29kZSBvdXRzaWRl
IG9mIFZULWQgd2hlbgp1c2luZyBzaGFyZWQgcGFnZSB0YWJsZXMuCgpOb3Rl
IHRoYXQgYWxsb2NfcGd0YWJsZV9tYWRkciBtdXN0IHVzZSB0aGUgbm93IGxv
Y2FsbHkgZGVmaW5lZApzeW5jX2NhY2hlIGZ1bmN0aW9uLCBiZWNhdXNlIElP
TU1VIG9wcyBhcmUgbm90IHlldCBzZXR1cCB0aGUgZmlyc3QKdGltZSB0aGUg
ZnVuY3Rpb24gZ2V0cyBjYWxsZWQgZHVyaW5nIElPTU1VIGluaXRpYWxpemF0
aW9uLgoKTm8gZnVuY3Rpb25hbCBjaGFuZ2UgaW50ZW5kZWQuCgpUaGlzIGlz
IHBhcnQgb2YgWFNBLTMyMS4KClJldmlld2VkLWJ5OiBKYW4gQmV1bGljaCA8
amJldWxpY2hAc3VzZS5jb20+CgotLS0gYS94ZW4vZHJpdmVycy9wYXNzdGhy
b3VnaC92dGQvZXh0ZXJuLmgKKysrIGIveGVuL2RyaXZlcnMvcGFzc3Rocm91
Z2gvdnRkL2V4dGVybi5oCkBAIC0zOCw3ICszOCw2IEBAIHZvaWQgZGlzYWJs
ZV9xaW52YWwoc3RydWN0IGlvbW11ICppb21tdSkKIGludCBlbmFibGVfaW50
cmVtYXAoc3RydWN0IGlvbW11ICppb21tdSwgaW50IGVpbSk7CiB2b2lkIGRp
c2FibGVfaW50cmVtYXAoc3RydWN0IGlvbW11ICppb21tdSk7CiAKLXZvaWQg
aW9tbXVfc3luY19jYWNoZShjb25zdCB2b2lkICphZGRyLCB1bnNpZ25lZCBp
bnQgc2l6ZSk7CiBpbnQgaW9tbXVfYWxsb2Moc3RydWN0IGFjcGlfZHJoZF91
bml0ICpkcmhkKTsKIHZvaWQgaW9tbXVfZnJlZShzdHJ1Y3QgYWNwaV9kcmhk
X3VuaXQgKmRyaGQpOwogCi0tLSBhL3hlbi9kcml2ZXJzL3Bhc3N0aHJvdWdo
L3Z0ZC9pb21tdS5jCisrKyBiL3hlbi9kcml2ZXJzL3Bhc3N0aHJvdWdoL3Z0
ZC9pb21tdS5jCkBAIC0xNTksNyArMTU5LDcgQEAgc3RhdGljIHZvaWQgX19p
bml0IGZyZWVfaW50ZWxfaW9tbXUoc3RydQogCiBzdGF0aWMgaW50IGlvbW11
c19pbmNvaGVyZW50OwogCi12b2lkIGlvbW11X3N5bmNfY2FjaGUoY29uc3Qg
dm9pZCAqYWRkciwgdW5zaWduZWQgaW50IHNpemUpCitzdGF0aWMgdm9pZCBz
eW5jX2NhY2hlKGNvbnN0IHZvaWQgKmFkZHIsIHVuc2lnbmVkIGludCBzaXpl
KQogewogICAgIGludCBpOwogICAgIHN0YXRpYyB1bnNpZ25lZCBpbnQgY2xm
bHVzaF9zaXplID0gMDsKQEAgLTE5OCw3ICsxOTgsNyBAQCB1NjQgYWxsb2Nf
cGd0YWJsZV9tYWRkcihzdHJ1Y3QgYWNwaV9kcmhkCiAgICAgICAgIHZhZGRy
ID0gX19tYXBfZG9tYWluX3BhZ2UoY3VyX3BnKTsKICAgICAgICAgbWVtc2V0
KHZhZGRyLCAwLCBQQUdFX1NJWkUpOwogCi0gICAgICAgIGlvbW11X3N5bmNf
Y2FjaGUodmFkZHIsIFBBR0VfU0laRSk7CisgICAgICAgIHN5bmNfY2FjaGUo
dmFkZHIsIFBBR0VfU0laRSk7CiAgICAgICAgIHVubWFwX2RvbWFpbl9wYWdl
KHZhZGRyKTsKICAgICAgICAgY3VyX3BnKys7CiAgICAgfQpAQCAtMjgxMyw2
ICsyODEzLDcgQEAgY29uc3Qgc3RydWN0IGlvbW11X29wcyBfX2luaXRjb25z
dHJlbCBpbgogICAgIC5pb3RsYl9mbHVzaF9hbGwgPSBpb21tdV9mbHVzaF9p
b3RsYl9hbGwsCiAgICAgLmdldF9yZXNlcnZlZF9kZXZpY2VfbWVtb3J5ID0g
aW50ZWxfaW9tbXVfZ2V0X3Jlc2VydmVkX2RldmljZV9tZW1vcnksCiAgICAg
LmR1bXBfcDJtX3RhYmxlID0gdnRkX2R1bXBfcDJtX3RhYmxlLAorICAgIC5z
eW5jX2NhY2hlID0gc3luY19jYWNoZSwKIH07CiAKIC8qCi0tLSBhL3hlbi9p
bmNsdWRlL2FzbS14ODYvaW9tbXUuaAorKysgYi94ZW4vaW5jbHVkZS9hc20t
eDg2L2lvbW11LmgKQEAgLTEwMSw2ICsxMDEsMTMgQEAgZXh0ZXJuIGJvb2wg
dW50cnVzdGVkX21zaTsKIGludCBwaV91cGRhdGVfaXJ0ZShjb25zdCBzdHJ1
Y3QgcGlfZGVzYyAqcGlfZGVzYywgY29uc3Qgc3RydWN0IHBpcnEgKnBpcnEs
CiAgICAgICAgICAgICAgICAgICAgY29uc3QgdWludDhfdCBndmVjKTsKIAor
I2RlZmluZSBpb21tdV9zeW5jX2NhY2hlKGFkZHIsIHNpemUpICh7ICAgICAg
ICAgICAgICAgICBcCisgICAgY29uc3Qgc3RydWN0IGlvbW11X29wcyAqb3Bz
ID0gaW9tbXVfZ2V0X29wcygpOyAgICAgIFwKKyAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgXAorICAg
IGlmICggb3BzLT5zeW5jX2NhY2hlICkgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICBcCisgICAgICAgIG9wcy0+c3luY19jYWNoZShhZGRyLCBzaXpl
KTsgICAgICAgICAgICAgICAgICAgIFwKK30pCisKICNlbmRpZiAvKiAhX19B
UkNIX1g4Nl9JT01NVV9IX18gKi8KIC8qCiAgKiBMb2NhbCB2YXJpYWJsZXM6
Ci0tLSBhL3hlbi9pbmNsdWRlL3hlbi9pb21tdS5oCisrKyBiL3hlbi9pbmNs
dWRlL3hlbi9pb21tdS5oCkBAIC0yMjEsNiArMjIxLDcgQEAgc3RydWN0IGlv
bW11X29wcyB7CiAgICAgdm9pZCAoKnVwZGF0ZV9pcmVfZnJvbV9hcGljKSh1
bnNpZ25lZCBpbnQgYXBpYywgdW5zaWduZWQgaW50IHJlZywgdW5zaWduZWQg
aW50IHZhbHVlKTsKICAgICB1bnNpZ25lZCBpbnQgKCpyZWFkX2FwaWNfZnJv
bV9pcmUpKHVuc2lnbmVkIGludCBhcGljLCB1bnNpZ25lZCBpbnQgcmVnKTsK
ICAgICBpbnQgKCpzZXR1cF9ocGV0X21zaSkoc3RydWN0IG1zaV9kZXNjICop
OworICAgIHZvaWQgKCpzeW5jX2NhY2hlKShjb25zdCB2b2lkICphZGRyLCB1
bnNpZ25lZCBpbnQgc2l6ZSk7CiAjZW5kaWYgLyogQ09ORklHX1g4NiAqLwog
ICAgIGludCBfX211c3RfY2hlY2sgKCpzdXNwZW5kKSh2b2lkKTsKICAgICB2
b2lkICgqcmVzdW1lKSh2b2lkKTsK

--=separator
Content-Type: application/octet-stream; name="xsa321/xsa321-4.12-4.patch"
Content-Disposition: attachment; filename="xsa321/xsa321-4.12-4.patch"
Content-Transfer-Encoding: base64

RnJvbTogPHNlY3VyaXR5QHhlbnByb2plY3Qub3JnPgpTdWJqZWN0OiB2dGQ6
IGRvbid0IGFzc3VtZSBhZGRyZXNzZXMgYXJlIGFsaWduZWQgaW4gc3luY19j
YWNoZQoKQ3VycmVudCBjb2RlIGluIHN5bmNfY2FjaGUgYXNzdW1lIHRoYXQg
dGhlIGFkZHJlc3MgcGFzc2VkIGluIGlzCmFsaWduZWQgdG8gYSBjYWNoZSBs
aW5lIHNpemUuIEZpeCB0aGUgY29kZSB0byBzdXBwb3J0IHBhc3NpbmcgaW4K
YXJiaXRyYXJ5IGFkZHJlc3NlcyBub3QgbmVjZXNzYXJpbHkgYWxpZ25lZCB0
byBhIGNhY2hlIGxpbmUgc2l6ZS4KClRoaXMgaXMgcGFydCBvZiBYU0EtMzIx
LgoKUmV2aWV3ZWQtYnk6IEphbiBCZXVsaWNoIDxqYmV1bGljaEBzdXNlLmNv
bT4KCi0tLSBhL3hlbi9kcml2ZXJzL3Bhc3N0aHJvdWdoL3Z0ZC9pb21tdS5j
CisrKyBiL3hlbi9kcml2ZXJzL3Bhc3N0aHJvdWdoL3Z0ZC9pb21tdS5jCkBA
IC0xNjEsOCArMTYxLDggQEAgc3RhdGljIGludCBpb21tdXNfaW5jb2hlcmVu
dDsKIAogc3RhdGljIHZvaWQgc3luY19jYWNoZShjb25zdCB2b2lkICphZGRy
LCB1bnNpZ25lZCBpbnQgc2l6ZSkKIHsKLSAgICBpbnQgaTsKLSAgICBzdGF0
aWMgdW5zaWduZWQgaW50IGNsZmx1c2hfc2l6ZSA9IDA7CisgICAgc3RhdGlj
IHVuc2lnbmVkIGxvbmcgY2xmbHVzaF9zaXplID0gMDsKKyAgICBjb25zdCB2
b2lkICplbmQgPSBhZGRyICsgc2l6ZTsKIAogICAgIGlmICggIWlvbW11c19p
bmNvaGVyZW50ICkKICAgICAgICAgcmV0dXJuOwpAQCAtMTcwLDggKzE3MCw5
IEBAIHN0YXRpYyB2b2lkIHN5bmNfY2FjaGUoY29uc3Qgdm9pZCAqYWRkciwK
ICAgICBpZiAoIGNsZmx1c2hfc2l6ZSA9PSAwICkKICAgICAgICAgY2xmbHVz
aF9zaXplID0gZ2V0X2NhY2hlX2xpbmVfc2l6ZSgpOwogCi0gICAgZm9yICgg
aSA9IDA7IGkgPCBzaXplOyBpICs9IGNsZmx1c2hfc2l6ZSApCi0gICAgICAg
IGNhY2hlbGluZV9mbHVzaCgoY2hhciAqKWFkZHIgKyBpKTsKKyAgICBhZGRy
IC09ICh1bnNpZ25lZCBsb25nKWFkZHIgJiAoY2xmbHVzaF9zaXplIC0gMSk7
CisgICAgZm9yICggOyBhZGRyIDwgZW5kOyBhZGRyICs9IGNsZmx1c2hfc2l6
ZSApCisgICAgICAgIGNhY2hlbGluZV9mbHVzaCgoY2hhciAqKWFkZHIpOwog
fQogCiAvKiBBbGxvY2F0ZSBwYWdlIHRhYmxlLCByZXR1cm4gaXRzIG1hY2hp
bmUgYWRkcmVzcyAqLwo=

--=separator
Content-Type: application/octet-stream; name="xsa321/xsa321-4.12-5.patch"
Content-Disposition: attachment; filename="xsa321/xsa321-4.12-5.patch"
Content-Transfer-Encoding: base64

RnJvbTogPHNlY3VyaXR5QHhlbnByb2plY3Qub3JnPgpTdWJqZWN0OiB4ODYv
YWx0ZXJuYXRpdmU6IGludHJvZHVjZSBhbHRlcm5hdGl2ZV8yCgpJdCdzIGJh
c2VkIG9uIGFsdGVybmF0aXZlX2lvXzIgd2l0aG91dCBpbnB1dHMgb3Igb3V0
cHV0cyBidXQgd2l0aCBhbgphZGRlZCBtZW1vcnkgY2xvYmJlci4KClRoaXMg
aXMgcGFydCBvZiBYU0EtMzIxLgoKQWNrZWQtYnk6IEphbiBCZXVsaWNoIDxq
YmV1bGljaEBzdXNlLmNvbT4KCi0tLSBhL3hlbi9pbmNsdWRlL2FzbS14ODYv
YWx0ZXJuYXRpdmUuaAorKysgYi94ZW4vaW5jbHVkZS9hc20teDg2L2FsdGVy
bmF0aXZlLmgKQEAgLTExMyw2ICsxMTMsMTEgQEAgZXh0ZXJuIHZvaWQgYWx0
ZXJuYXRpdmVfaW5zdHJ1Y3Rpb25zKHZvaQogI2RlZmluZSBhbHRlcm5hdGl2
ZShvbGRpbnN0ciwgbmV3aW5zdHIsIGZlYXR1cmUpICAgICAgICAgICAgICAg
ICAgICAgICAgXAogICAgICAgICBhc20gdm9sYXRpbGUgKEFMVEVSTkFUSVZF
KG9sZGluc3RyLCBuZXdpbnN0ciwgZmVhdHVyZSkgOiA6IDogIm1lbW9yeSIp
CiAKKyNkZWZpbmUgYWx0ZXJuYXRpdmVfMihvbGRpbnN0ciwgbmV3aW5zdHIx
LCBmZWF0dXJlMSwgbmV3aW5zdHIyLCBmZWF0dXJlMikgXAorCWFzbSB2b2xh
dGlsZSAoQUxURVJOQVRJVkVfMihvbGRpbnN0ciwgbmV3aW5zdHIxLCBmZWF0
dXJlMSwJXAorCQkJCSAgICBuZXdpbnN0cjIsIGZlYXR1cmUyKQkJXAorCQkg
ICAgICA6IDogOiAibWVtb3J5IikKKwogLyoKICAqIEFsdGVybmF0aXZlIGlu
bGluZSBhc3NlbWJseSB3aXRoIGlucHV0LgogICoK

--=separator
Content-Type: application/octet-stream; name="xsa321/xsa321-4.12-6.patch"
Content-Disposition: attachment; filename="xsa321/xsa321-4.12-6.patch"
Content-Transfer-Encoding: base64

RnJvbTogPHNlY3VyaXR5QHhlbnByb2plY3Qub3JnPgpTdWJqZWN0OiB2dGQ6
IG9wdGltaXplIENQVSBjYWNoZSBzeW5jCgpTb21lIFZULWQgSU9NTVVzIGFy
ZSBub24tY29oZXJlbnQsIHdoaWNoIHJlcXVpcmVzIGEgY2FjaGUgd3JpdGUg
YmFjawppbiBvcmRlciBmb3IgdGhlIGNoYW5nZXMgbWFkZSBieSB0aGUgQ1BV
IHRvIGJlIHZpc2libGUgdG8gdGhlIElPTU1VLgpUaGlzIGNhY2hlIHdyaXRl
IGJhY2sgd2FzIHVuY29uZGl0aW9uYWxseSBkb25lIHVzaW5nIGNsZmx1c2gs
IGJ1dCB0aGVyZSBhcmUKb3RoZXIgbW9yZSBlZmZpY2llbnQgaW5zdHJ1Y3Rp
b25zIHRvIGRvIHNvLCBoZW5jZSBpbXBsZW1lbnQgc3VwcG9ydApmb3IgdGhl
bSB1c2luZyB0aGUgYWx0ZXJuYXRpdmUgZnJhbWV3b3JrLgoKVGhpcyBpcyBw
YXJ0IG9mIFhTQS0zMjEuCgpSZXZpZXdlZC1ieTogSmFuIEJldWxpY2ggPGpi
ZXVsaWNoQHN1c2UuY29tPgoKLS0tIGEveGVuL2RyaXZlcnMvcGFzc3Rocm91
Z2gvdnRkL2V4dGVybi5oCisrKyBiL3hlbi9kcml2ZXJzL3Bhc3N0aHJvdWdo
L3Z0ZC9leHRlcm4uaApAQCAtNjQsNyArNjQsNiBAQCBpbnQgX19tdXN0X2No
ZWNrIHFpbnZhbF9kZXZpY2VfaW90bGJfc3luCiAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICB1MTYgZGlkLCB1MTYgc2l6ZSwg
dTY0IGFkZHIpOwogCiB1bnNpZ25lZCBpbnQgZ2V0X2NhY2hlX2xpbmVfc2l6
ZSh2b2lkKTsKLXZvaWQgY2FjaGVsaW5lX2ZsdXNoKGNoYXIgKik7CiB2b2lk
IGZsdXNoX2FsbF9jYWNoZSh2b2lkKTsKIAogdTY0IGFsbG9jX3BndGFibGVf
bWFkZHIoc3RydWN0IGFjcGlfZHJoZF91bml0ICpkcmhkLCB1bnNpZ25lZCBs
b25nIG5wYWdlcyk7Ci0tLSBhL3hlbi9kcml2ZXJzL3Bhc3N0aHJvdWdoL3Z0
ZC9pb21tdS5jCisrKyBiL3hlbi9kcml2ZXJzL3Bhc3N0aHJvdWdoL3Z0ZC9p
b21tdS5jCkBAIC0zMSw2ICszMSw3IEBACiAjaW5jbHVkZSA8eGVuL3BjaV9y
ZWdzLmg+CiAjaW5jbHVkZSA8eGVuL2tleWhhbmRsZXIuaD4KICNpbmNsdWRl
IDxhc20vbXNpLmg+CisjaW5jbHVkZSA8YXNtL25vcHMuaD4KICNpbmNsdWRl
IDxhc20vaXJxLmg+CiAjaW5jbHVkZSA8YXNtL2h2bS92bXgvdm14Lmg+CiAj
aW5jbHVkZSA8YXNtL3AybS5oPgpAQCAtMTcyLDcgKzE3Myw0MiBAQCBzdGF0
aWMgdm9pZCBzeW5jX2NhY2hlKGNvbnN0IHZvaWQgKmFkZHIsCiAKICAgICBh
ZGRyIC09ICh1bnNpZ25lZCBsb25nKWFkZHIgJiAoY2xmbHVzaF9zaXplIC0g
MSk7CiAgICAgZm9yICggOyBhZGRyIDwgZW5kOyBhZGRyICs9IGNsZmx1c2hf
c2l6ZSApCi0gICAgICAgIGNhY2hlbGluZV9mbHVzaCgoY2hhciAqKWFkZHIp
OworLyoKKyAqIFRoZSBhcmd1bWVudHMgdG8gYSBtYWNybyBtdXN0IG5vdCBp
bmNsdWRlIHByZXByb2Nlc3NvciBkaXJlY3RpdmVzLiBEb2luZyBzbworICog
cmVzdWx0cyBpbiB1bmRlZmluZWQgYmVoYXZpb3IsIHNvIHdlIGhhdmUgdG8g
Y3JlYXRlIHNvbWUgZGVmaW5lcyBoZXJlIGluCisgKiBvcmRlciB0byBhdm9p
ZCBpdC4KKyAqLworI2lmIGRlZmluZWQoSEFWRV9BU19DTFdCKQorIyBkZWZp
bmUgQ0xXQl9FTkNPRElORyAiY2x3YiAlW3BdIgorI2VsaWYgZGVmaW5lZChI
QVZFX0FTX1hTQVZFT1BUKQorIyBkZWZpbmUgQ0xXQl9FTkNPRElORyAiZGF0
YTE2IHhzYXZlb3B0ICVbcF0iIC8qIGNsd2IgKi8KKyNlbHNlCisjIGRlZmlu
ZSBDTFdCX0VOQ09ESU5HICIuYnl0ZSAweDY2LCAweDBmLCAweGFlLCAweDMw
IiAvKiBjbHdiICglJXJheCkgKi8KKyNlbmRpZgorCisjZGVmaW5lIEJBU0Vf
SU5QVVQoYWRkcikgW3BdICJtIiAoKihjb25zdCBjaGFyICopKGFkZHIpKQor
I2lmIGRlZmluZWQoSEFWRV9BU19DTFdCKSB8fCBkZWZpbmVkKEhBVkVfQVNf
WFNBVkVPUFQpCisjIGRlZmluZSBJTlBVVCBCQVNFX0lOUFVUCisjZWxzZQor
IyBkZWZpbmUgSU5QVVQoYWRkcikgImEiIChhZGRyKSwgQkFTRV9JTlBVVChh
ZGRyKQorI2VuZGlmCisgICAgICAgIC8qCisgICAgICAgICAqIE5vdGUgcmVn
YXJkaW5nIHRoZSB1c2Ugb2YgTk9QX0RTX1BSRUZJWDogaXQncyBmYXN0ZXIg
dG8gZG8gYSBjbGZsdXNoCisgICAgICAgICAqICsgcHJlZml4IHRoYW4gYSBj
bGZsdXNoICsgbm9wLCBhbmQgaGVuY2UgdGhlIHByZWZpeCBpcyBhZGRlZCBp
bnN0ZWFkCisgICAgICAgICAqIG9mIGxldHRpbmcgdGhlIGFsdGVybmF0aXZl
IGZyYW1ld29yayBmaWxsIHRoZSBnYXAgYnkgYXBwZW5kaW5nIG5vcHMuCisg
ICAgICAgICAqLworICAgICAgICBhbHRlcm5hdGl2ZV9pb18yKCIuYnl0ZSAi
IF9fc3RyaW5naWZ5KE5PUF9EU19QUkVGSVgpICI7IGNsZmx1c2ggJVtwXSIs
CisgICAgICAgICAgICAgICAgICAgICAgICAgImRhdGExNiBjbGZsdXNoICVb
cF0iLCAvKiBjbGZsdXNob3B0ICovCisgICAgICAgICAgICAgICAgICAgICAg
ICAgWDg2X0ZFQVRVUkVfQ0xGTFVTSE9QVCwKKyAgICAgICAgICAgICAgICAg
ICAgICAgICBDTFdCX0VOQ09ESU5HLAorICAgICAgICAgICAgICAgICAgICAg
ICAgIFg4Nl9GRUFUVVJFX0NMV0IsIC8qIG5vIG91dHB1dHMgKi8sCisgICAg
ICAgICAgICAgICAgICAgICAgICAgSU5QVVQoYWRkcikpOworI3VuZGVmIElO
UFVUCisjdW5kZWYgQkFTRV9JTlBVVAorI3VuZGVmIENMV0JfRU5DT0RJTkcK
KworICAgIGFsdGVybmF0aXZlXzIoIiIsICJzZmVuY2UiLCBYODZfRkVBVFVS
RV9DTEZMVVNIT1BULAorICAgICAgICAgICAgICAgICAgICAgICJzZmVuY2Ui
LCBYODZfRkVBVFVSRV9DTFdCKTsKIH0KIAogLyogQWxsb2NhdGUgcGFnZSB0
YWJsZSwgcmV0dXJuIGl0cyBtYWNoaW5lIGFkZHJlc3MgKi8KLS0tIGEveGVu
L2RyaXZlcnMvcGFzc3Rocm91Z2gvdnRkL3g4Ni92dGQuYworKysgYi94ZW4v
ZHJpdmVycy9wYXNzdGhyb3VnaC92dGQveDg2L3Z0ZC5jCkBAIC01MSwxMSAr
NTEsNiBAQCB1bnNpZ25lZCBpbnQgZ2V0X2NhY2hlX2xpbmVfc2l6ZSh2b2lk
KQogICAgIHJldHVybiAoKGNwdWlkX2VieCgxKSA+PiA4KSAmIDB4ZmYpICog
ODsKIH0KIAotdm9pZCBjYWNoZWxpbmVfZmx1c2goY2hhciAqIGFkZHIpCi17
Ci0gICAgY2xmbHVzaChhZGRyKTsKLX0KLQogdm9pZCBmbHVzaF9hbGxfY2Fj
aGUoKQogewogICAgIHdiaW52ZCgpOwo=

--=separator
Content-Type: application/octet-stream; name="xsa321/xsa321-4.12-7.patch"
Content-Disposition: attachment; filename="xsa321/xsa321-4.12-7.patch"
Content-Transfer-Encoding: base64

RnJvbTogPHNlY3VyaXR5QHhlbnByb2plY3Qub3JnPgpTdWJqZWN0OiB4ODYv
ZXB0OiBmbHVzaCBjYWNoZSB3aGVuIG1vZGlmeWluZyBQVEVzIGFuZCBzaGFy
aW5nIHBhZ2UgdGFibGVzCgpNb2RpZmljYXRpb25zIG1hZGUgdG8gdGhlIHBh
Z2UgdGFibGVzIGJ5IEVQVCBjb2RlIG5lZWQgdG8gYmUgd3JpdHRlbgp0byBt
ZW1vcnkgd2hlbiB0aGUgcGFnZSB0YWJsZXMgYXJlIHNoYXJlZCB3aXRoIHRo
ZSBJT01NVSwgYXMgSW50ZWwKSU9NTVVzIGNhbiBiZSBub24tY29oZXJlbnQg
YW5kIHRodXMgcmVxdWlyZSBjaGFuZ2VzIHRvIGJlIHdyaXR0ZW4gdG8KbWVt
b3J5IGluIG9yZGVyIHRvIGJlIHZpc2libGUgdG8gdGhlIElPTU1VLgoKSW4g
b3JkZXIgdG8gYWNoaWV2ZSB0aGlzIG1ha2Ugc3VyZSBkYXRhIGlzIHdyaXR0
ZW4gYmFjayB0byBtZW1vcnkKYWZ0ZXIgd3JpdGluZyBhbiBFUFQgZW50cnkg
d2hlbiB0aGUgcmVjYWxjIGJpdCBpcyBub3Qgc2V0IGluCmF0b21pY193cml0
ZV9lcHRfZW50cnkuIElmIHN1Y2ggYml0IGlzIHNldCwgdGhlIGVudHJ5IHdp
bGwgYmUKYWRqdXN0ZWQgYW5kIGF0b21pY193cml0ZV9lcHRfZW50cnkgd2ls
bCBiZSBjYWxsZWQgYSBzZWNvbmQgdGltZQp3aXRob3V0IHRoZSByZWNhbGMg
Yml0IHNldC4gTm90ZSB0aGF0IHdoZW4gc3BsaXR0aW5nIGEgc3VwZXIgcGFn
ZSB0aGUKbmV3IHRhYmxlcyByZXN1bHRpbmcgb2YgdGhlIHNwbGl0IHNob3Vs
ZCBhbHNvIGJlIHdyaXR0ZW4gYmFjay4KCkZhaWx1cmUgdG8gZG8gc28gY2Fu
IGFsbG93IGRldmljZXMgYmVoaW5kIHRoZSBJT01NVSBhY2Nlc3MgdG8gdGhl
CnN0YWxlIHN1cGVyIHBhZ2UsIG9yIGNhdXNlIGNvaGVyZW5jeSBpc3N1ZXMg
YXMgY2hhbmdlcyBtYWRlIGJ5IHRoZQpwcm9jZXNzb3IgdG8gdGhlIHBhZ2Ug
dGFibGVzIGFyZSBub3QgdmlzaWJsZSB0byB0aGUgSU9NTVUuCgpUaGlzIGFs
bG93cyB0byByZW1vdmUgdGhlIFZULWQgc3BlY2lmaWMgaW9tbXVfcHRlX2Zs
dXNoIGhlbHBlciwgc2luY2UKdGhlIGNhY2hlIHdyaXRlIGJhY2sgaXMgbm93
IHBlcmZvcm1lZCBieSBhdG9taWNfd3JpdGVfZXB0X2VudHJ5LCBhbmQKaGVu
Y2UgaW9tbXVfaW90bGJfZmx1c2ggY2FuIGJlIHVzZWQgdG8gZmx1c2ggdGhl
IElPTU1VIFRMQi4gVGhlIG5ld2x5CnVzZWQgbWV0aG9kIChpb21tdV9pb3Rs
Yl9mbHVzaCkgY2FuIHJlc3VsdCBpbiBsZXNzIGZsdXNoZXMsIHNpbmNlIGl0
Cm1pZ2h0IHNvbWV0aW1lcyBiZSBjYWxsZWQgcmlnaHRseSB3aXRoIDAgZmxh
Z3MsIGluIHdoaWNoIGNhc2UgaXQKYmVjb21lcyBhIG5vLW9wLgoKVGhpcyBp
cyBwYXJ0IG9mIFhTQS0zMjEuCgpSZXZpZXdlZC1ieTogSmFuIEJldWxpY2gg
PGpiZXVsaWNoQHN1c2UuY29tPgoKLS0tIGEveGVuL2FyY2gveDg2L21tL3Ay
bS1lcHQuYworKysgYi94ZW4vYXJjaC94ODYvbW0vcDJtLWVwdC5jCkBAIC01
OCw2ICs1OCwxOSBAQCBzdGF0aWMgaW50IGF0b21pY193cml0ZV9lcHRfZW50
cnkoc3RydWN0CiAKICAgICB3cml0ZV9hdG9taWMoJmVudHJ5cHRyLT5lcHRl
LCBuZXcuZXB0ZSk7CiAKKyAgICAvKgorICAgICAqIFRoZSByZWNhbGMgZmll
bGQgb24gdGhlIEVQVCBpcyB1c2VkIHRvIHNpZ25hbCBlaXRoZXIgdGhhdCBh
CisgICAgICogcmVjYWxjdWxhdGlvbiBvZiB0aGUgRU1UIGZpZWxkIGlzIHJl
cXVpcmVkICh3aGljaCBkb2Vzbid0IGVmZmVjdCB0aGUKKyAgICAgKiBJT01N
VSksIG9yIGEgdHlwZSBjaGFuZ2UuIFR5cGUgY2hhbmdlcyBjYW4gb25seSBi
ZSBiZXR3ZWVuIHJhbV9ydywKKyAgICAgKiBsb2dkaXJ0eSBhbmQgaW9yZXFf
c2VydmVyOiBjaGFuZ2VzIHRvL2Zyb20gbG9nZGlydHkgd29uJ3Qgd29yayB3
ZWxsIHdpdGgKKyAgICAgKiBhbiBJT01NVSBhbnl3YXksIGFzIElPTU1VICNQ
RnMgYXJlIG5vdCBzeW5jaHJvbm91cyBhbmQgd2lsbCBsZWFkIHRvCisgICAg
ICogYWJvcnRzLCBhbmQgY2hhbmdlcyB0by9mcm9tIGlvcmVxX3NlcnZlciBh
cmUgYWxyZWFkeSBmdWxseSBmbHVzaGVkCisgICAgICogYmVmb3JlIHJldHVy
bmluZyB0byBndWVzdCBjb250ZXh0IChzZWUKKyAgICAgKiBYRU5fRE1PUF9t
YXBfbWVtX3R5cGVfdG9faW9yZXFfc2VydmVyKS4KKyAgICAgKi8KKyAgICBp
ZiAoICFuZXcucmVjYWxjICYmIGlvbW11X3VzZV9oYXBfcHQocDJtLT5kb21h
aW4pICkKKyAgICAgICAgaW9tbXVfc3luY19jYWNoZShlbnRyeXB0ciwgc2l6
ZW9mKCplbnRyeXB0cikpOworCiAgICAgcmV0dXJuIDA7CiB9CiAKQEAgLTI3
OCw2ICsyOTEsOSBAQCBzdGF0aWMgYm9vbF90IGVwdF9zcGxpdF9zdXBlcl9w
YWdlKHN0cnVjCiAgICAgICAgICAgICBicmVhazsKICAgICB9CiAKKyAgICBp
ZiAoIGlvbW11X3VzZV9oYXBfcHQocDJtLT5kb21haW4pICkKKyAgICAgICAg
aW9tbXVfc3luY19jYWNoZSh0YWJsZSwgRVBUX1BBR0VUQUJMRV9FTlRSSUVT
ICogc2l6ZW9mKGVwdF9lbnRyeV90KSk7CisKICAgICB1bm1hcF9kb21haW5f
cGFnZSh0YWJsZSk7CiAKICAgICAvKiBFdmVuIGZhaWxlZCB3ZSBzaG91bGQg
aW5zdGFsbCB0aGUgbmV3bHkgYWxsb2NhdGVkIGVwdCBwYWdlLiAqLwpAQCAt
MzM3LDYgKzM1Myw5IEBAIHN0YXRpYyBpbnQgZXB0X25leHRfbGV2ZWwoc3Ry
dWN0IHAybV9kb20KICAgICAgICAgaWYgKCAhbmV4dCApCiAgICAgICAgICAg
ICByZXR1cm4gR1VFU1RfVEFCTEVfTUFQX0ZBSUxFRDsKIAorICAgICAgICBp
ZiAoIGlvbW11X3VzZV9oYXBfcHQocDJtLT5kb21haW4pICkKKyAgICAgICAg
ICAgIGlvbW11X3N5bmNfY2FjaGUobmV4dCwgRVBUX1BBR0VUQUJMRV9FTlRS
SUVTICogc2l6ZW9mKGVwdF9lbnRyeV90KSk7CisKICAgICAgICAgcmMgPSBh
dG9taWNfd3JpdGVfZXB0X2VudHJ5KHAybSwgZXB0X2VudHJ5LCBlLCBuZXh0
X2xldmVsKTsKICAgICAgICAgQVNTRVJUKHJjID09IDApOwogICAgIH0KQEAg
LTgxNSw3ICs4MzQsMTAgQEAgb3V0OgogICAgICAgICAgbmVlZF9tb2RpZnlf
dnRkX3RhYmxlICkKICAgICB7CiAgICAgICAgIGlmICggaW9tbXVfdXNlX2hh
cF9wdChkKSApCi0gICAgICAgICAgICByYyA9IGlvbW11X3B0ZV9mbHVzaChk
LCBnZm4sICZlcHRfZW50cnktPmVwdGUsIG9yZGVyLCB2dGRfcHRlX3ByZXNl
bnQpOworICAgICAgICAgICAgcmMgPSBpb21tdV9pb3RsYl9mbHVzaChkLCBf
ZGZuKGdmbiksICgxdSA8PCBvcmRlciksCisgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgIChpb21tdV9mbGFncyA/IElPTU1VX0ZMVVNIRl9h
ZGRlZCA6IDApIHwKKyAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgKHZ0ZF9wdGVfcHJlc2VudCA/IElPTU1VX0ZMVVNIRl9tb2RpZmllZAor
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgIDogMCkpOwogICAgICAgICBlbHNlIGlmICggbmVlZF9pb21tdV9w
dF9zeW5jKGQpICkKICAgICAgICAgICAgIHJjID0gaW9tbXVfZmxhZ3MgPwog
ICAgICAgICAgICAgICAgIGlvbW11X2xlZ2FjeV9tYXAoZCwgX2RmbihnZm4p
LCBtZm4sIG9yZGVyLCBpb21tdV9mbGFncykgOgotLS0gYS94ZW4vZHJpdmVy
cy9wYXNzdGhyb3VnaC92dGQvaW9tbXUuYworKysgYi94ZW4vZHJpdmVycy9w
YXNzdGhyb3VnaC92dGQvaW9tbXUuYwpAQCAtMTkzMCw1MyArMTkzMCw2IEBA
IHN0YXRpYyBpbnQgaW50ZWxfaW9tbXVfbG9va3VwX3BhZ2Uoc3RydWMKICAg
ICByZXR1cm4gMDsKIH0KIAotaW50IGlvbW11X3B0ZV9mbHVzaChzdHJ1Y3Qg
ZG9tYWluICpkLCB1aW50NjRfdCBkZm4sIHVpbnQ2NF90ICpwdGUsCi0gICAg
ICAgICAgICAgICAgICAgIGludCBvcmRlciwgaW50IHByZXNlbnQpCi17Ci0g
ICAgc3RydWN0IGFjcGlfZHJoZF91bml0ICpkcmhkOwotICAgIHN0cnVjdCBp
b21tdSAqaW9tbXUgPSBOVUxMOwotICAgIHN0cnVjdCBkb21haW5faW9tbXUg
KmhkID0gZG9tX2lvbW11KGQpOwotICAgIGJvb2xfdCBmbHVzaF9kZXZfaW90
bGI7Ci0gICAgaW50IGlvbW11X2RvbWlkOwotICAgIGludCByYyA9IDA7Ci0K
LSAgICBpb21tdV9zeW5jX2NhY2hlKHB0ZSwgc2l6ZW9mKHN0cnVjdCBkbWFf
cHRlKSk7Ci0KLSAgICBmb3JfZWFjaF9kcmhkX3VuaXQgKCBkcmhkICkKLSAg
ICB7Ci0gICAgICAgIGlvbW11ID0gZHJoZC0+aW9tbXU7Ci0gICAgICAgIGlm
ICggIXRlc3RfYml0KGlvbW11LT5pbmRleCwgJmhkLT5hcmNoLmlvbW11X2Jp
dG1hcCkgKQotICAgICAgICAgICAgY29udGludWU7Ci0KLSAgICAgICAgZmx1
c2hfZGV2X2lvdGxiID0gISFmaW5kX2F0c19kZXZfZHJoZChpb21tdSk7Ci0g
ICAgICAgIGlvbW11X2RvbWlkPSBkb21haW5faW9tbXVfZG9taWQoZCwgaW9t
bXUpOwotICAgICAgICBpZiAoIGlvbW11X2RvbWlkID09IC0xICkKLSAgICAg
ICAgICAgIGNvbnRpbnVlOwotCi0gICAgICAgIHJjID0gaW9tbXVfZmx1c2hf
aW90bGJfcHNpKGlvbW11LCBpb21tdV9kb21pZCwKLSAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgX19kZm5fdG9fZGFkZHIoZGZuKSwKLSAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgb3JkZXIsICFwcmVz
ZW50LCBmbHVzaF9kZXZfaW90bGIpOwotICAgICAgICBpZiAoIHJjID4gMCAp
Ci0gICAgICAgIHsKLSAgICAgICAgICAgIGlvbW11X2ZsdXNoX3dyaXRlX2J1
ZmZlcihpb21tdSk7Ci0gICAgICAgICAgICByYyA9IDA7Ci0gICAgICAgIH0K
LSAgICB9Ci0KLSAgICBpZiAoIHVubGlrZWx5KHJjKSApCi0gICAgewotICAg
ICAgICBpZiAoICFkLT5pc19zaHV0dGluZ19kb3duICYmIHByaW50a19yYXRl
bGltaXQoKSApCi0gICAgICAgICAgICBwcmludGsoWEVOTE9HX0VSUiBWVERQ
UkVGSVgKLSAgICAgICAgICAgICAgICAgICAiIGQlZDogSU9NTVUgcGFnZXMg
Zmx1c2ggZmFpbGVkOiAlZFxuIiwKLSAgICAgICAgICAgICAgICAgICBkLT5k
b21haW5faWQsIHJjKTsKLQotICAgICAgICBpZiAoICFpc19oYXJkd2FyZV9k
b21haW4oZCkgKQotICAgICAgICAgICAgZG9tYWluX2NyYXNoKGQpOwotICAg
IH0KLQotICAgIHJldHVybiByYzsKLX0KLQogc3RhdGljIGludCBfX2luaXQg
dnRkX2VwdF9wYWdlX2NvbXBhdGlibGUoc3RydWN0IGlvbW11ICppb21tdSkK
IHsKICAgICB1NjQgZXB0X2NhcCwgdnRkX2NhcCA9IGlvbW11LT5jYXA7Ci0t
LSBhL3hlbi9pbmNsdWRlL2FzbS14ODYvaW9tbXUuaAorKysgYi94ZW4vaW5j
bHVkZS9hc20teDg2L2lvbW11LmgKQEAgLTkwLDggKzkwLDYgQEAgaW50IGlv
bW11X3NldHVwX2hwZXRfbXNpKHN0cnVjdCBtc2lfZGVzYwogCiAvKiBXaGls
ZSBWVC1kIHNwZWNpZmljLCB0aGlzIG11c3QgZ2V0IGRlY2xhcmVkIGluIGEg
Z2VuZXJpYyBoZWFkZXIuICovCiBpbnQgYWRqdXN0X3Z0ZF9pcnFfYWZmaW5p
dGllcyh2b2lkKTsKLWludCBfX211c3RfY2hlY2sgaW9tbXVfcHRlX2ZsdXNo
KHN0cnVjdCBkb21haW4gKmQsIHU2NCBnZm4sIHU2NCAqcHRlLAotICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgaW50IG9yZGVyLCBpbnQgcHJl
c2VudCk7CiBib29sX3QgaW9tbXVfc3VwcG9ydHNfZWltKHZvaWQpOwogaW50
IGlvbW11X2VuYWJsZV94MmFwaWNfSVIodm9pZCk7CiB2b2lkIGlvbW11X2Rp
c2FibGVfeDJhcGljX0lSKHZvaWQpOwo=

--=separator
Content-Type: application/octet-stream; name="xsa321/xsa321-4.13-1.patch"
Content-Disposition: attachment; filename="xsa321/xsa321-4.13-1.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiB2dGQ6IGltcHJvdmUgSU9NTVUgVExCIGZsdXNoCgpEbyBub3QgbGltaXQg
UFNJIGZsdXNoZXMgdG8gb3JkZXIgMCBwYWdlcywgaW4gb3JkZXIgdG8gYXZv
aWQgZG9pbmcgYQpmdWxsIFRMQiBmbHVzaCBpZiB0aGUgcGFzc2VkIGluIHBh
Z2UgaGFzIGFuIG9yZGVyIGdyZWF0ZXIgdGhhbiAwIGFuZAppcyBhbGlnbmVk
LiBTaG91bGQgaW5jcmVhc2UgdGhlIHBlcmZvcm1hbmNlIG9mIElPTU1VIFRM
QiBmbHVzaGVzIHdoZW4KZGVhbGluZyB3aXRoIHBhZ2Ugb3JkZXJzIGdyZWF0
ZXIgdGhhbiAwLgoKVGhpcyBpcyBwYXJ0IG9mIFhTQS0zMjEuCgpTaWduZWQt
b2ZmLWJ5OiBKYW4gQmV1bGljaCA8amJldWxpY2hAc3VzZS5jb20+CgotLS0g
YS94ZW4vZHJpdmVycy9wYXNzdGhyb3VnaC92dGQvaW9tbXUuYworKysgYi94
ZW4vZHJpdmVycy9wYXNzdGhyb3VnaC92dGQvaW9tbXUuYwpAQCAtNTcwLDEz
ICs1NzAsMTQgQEAgc3RhdGljIGludCBfX211c3RfY2hlY2sgaW9tbXVfZmx1
c2hfaW90bAogICAgICAgICBpZiAoIGlvbW11X2RvbWlkID09IC0xICkKICAg
ICAgICAgICAgIGNvbnRpbnVlOwogCi0gICAgICAgIGlmICggcGFnZV9jb3Vu
dCAhPSAxIHx8IGRmbl9lcShkZm4sIElOVkFMSURfREZOKSApCisgICAgICAg
IGlmICggIXBhZ2VfY291bnQgfHwgKHBhZ2VfY291bnQgJiAocGFnZV9jb3Vu
dCAtIDEpKSB8fAorICAgICAgICAgICAgIGRmbl9lcShkZm4sIElOVkFMSURf
REZOKSB8fCAhSVNfQUxJR05FRChkZm5feChkZm4pLCBwYWdlX2NvdW50KSAp
CiAgICAgICAgICAgICByYyA9IGlvbW11X2ZsdXNoX2lvdGxiX2RzaShpb21t
dSwgaW9tbXVfZG9taWQsCiAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAwLCBmbHVzaF9kZXZfaW90bGIpOwogICAgICAgICBlbHNl
CiAgICAgICAgICAgICByYyA9IGlvbW11X2ZsdXNoX2lvdGxiX3BzaShpb21t
dSwgaW9tbXVfZG9taWQsCiAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICBkZm5fdG9fZGFkZHIoZGZuKSwKLSAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgIFBBR0VfT1JERVJfNEssCisgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBnZXRfb3JkZXJf
ZnJvbV9wYWdlcyhwYWdlX2NvdW50KSwKICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICFkbWFfb2xkX3B0ZV9wcmVzZW50LAogICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgZmx1c2hfZGV2
X2lvdGxiKTsKIAo=

--=separator
Content-Type: application/octet-stream; name="xsa321/xsa321-4.13-2.patch"
Content-Disposition: attachment; filename="xsa321/xsa321-4.13-2.patch"
Content-Transfer-Encoding: base64

RnJvbTogPHNlY3VyaXR5QHhlbnByb2plY3Qub3JnPgpTdWJqZWN0OiB2dGQ6
IHBydW5lIChhbmQgcmVuYW1lKSBjYWNoZSBmbHVzaCBmdW5jdGlvbnMKClJl
bmFtZSBfX2lvbW11X2ZsdXNoX2NhY2hlIHRvIGlvbW11X3N5bmNfY2FjaGUg
YW5kIHJlbW92ZQppb21tdV9mbHVzaF9jYWNoZV9wYWdlLiBBbHNvIHJlbW92
ZSB0aGUgaW9tbXVfZmx1c2hfY2FjaGVfZW50cnkKd3JhcHBlciBhbmQganVz
dCB1c2UgaW9tbXVfc3luY19jYWNoZSBpbnN0ZWFkLiBOb3RlIHRoZSBfZW50
cnkgc3VmZml4CndhcyBtZWFuaW5nbGVzcyBhcyB0aGUgd3JhcHBlciB3YXMg
YWxyZWFkeSB0YWtpbmcgYSBzaXplIHBhcmFtZXRlciBpbgpieXRlcy4gV2hp
bGUgdGhlcmUgYWxzbyBjb25zdGlmeSB0aGUgYWRkciBwYXJhbWV0ZXIuCgpO
byBmdW5jdGlvbmFsIGNoYW5nZSBpbnRlbmRlZC4KClRoaXMgaXMgcGFydCBv
ZiBYU0EtMzIxLgoKUmV2aWV3ZWQtYnk6IEphbiBCZXVsaWNoIDxqYmV1bGlj
aEBzdXNlLmNvbT4KCi0tLSBhL3hlbi9kcml2ZXJzL3Bhc3N0aHJvdWdoL3Z0
ZC9leHRlcm4uaAorKysgYi94ZW4vZHJpdmVycy9wYXNzdGhyb3VnaC92dGQv
ZXh0ZXJuLmgKQEAgLTQzLDggKzQzLDcgQEAgdm9pZCBkaXNhYmxlX3FpbnZh
bChzdHJ1Y3QgdnRkX2lvbW11ICppbwogaW50IGVuYWJsZV9pbnRyZW1hcChz
dHJ1Y3QgdnRkX2lvbW11ICppb21tdSwgaW50IGVpbSk7CiB2b2lkIGRpc2Fi
bGVfaW50cmVtYXAoc3RydWN0IHZ0ZF9pb21tdSAqaW9tbXUpOwogCi12b2lk
IGlvbW11X2ZsdXNoX2NhY2hlX2VudHJ5KHZvaWQgKmFkZHIsIHVuc2lnbmVk
IGludCBzaXplKTsKLXZvaWQgaW9tbXVfZmx1c2hfY2FjaGVfcGFnZSh2b2lk
ICphZGRyLCB1bnNpZ25lZCBsb25nIG5wYWdlcyk7Cit2b2lkIGlvbW11X3N5
bmNfY2FjaGUoY29uc3Qgdm9pZCAqYWRkciwgdW5zaWduZWQgaW50IHNpemUp
OwogaW50IGlvbW11X2FsbG9jKHN0cnVjdCBhY3BpX2RyaGRfdW5pdCAqZHJo
ZCk7CiB2b2lkIGlvbW11X2ZyZWUoc3RydWN0IGFjcGlfZHJoZF91bml0ICpk
cmhkKTsKIAotLS0gYS94ZW4vZHJpdmVycy9wYXNzdGhyb3VnaC92dGQvaW50
cmVtYXAuYworKysgYi94ZW4vZHJpdmVycy9wYXNzdGhyb3VnaC92dGQvaW50
cmVtYXAuYwpAQCAtMjMwLDcgKzIzMCw3IEBAIHN0YXRpYyB2b2lkIGZyZWVf
cmVtYXBfZW50cnkoc3RydWN0IHZ0ZF8KICAgICAgICAgICAgICAgICAgICAg
IGlyZW1hcF9lbnRyaWVzLCBpcmVtYXBfZW50cnkpOwogCiAgICAgdXBkYXRl
X2lydGUoaW9tbXUsIGlyZW1hcF9lbnRyeSwgJm5ld19pcmUsIGZhbHNlKTsK
LSAgICBpb21tdV9mbHVzaF9jYWNoZV9lbnRyeShpcmVtYXBfZW50cnksIHNp
emVvZigqaXJlbWFwX2VudHJ5KSk7CisgICAgaW9tbXVfc3luY19jYWNoZShp
cmVtYXBfZW50cnksIHNpemVvZigqaXJlbWFwX2VudHJ5KSk7CiAgICAgaW9t
bXVfZmx1c2hfaWVjX2luZGV4KGlvbW11LCAwLCBpbmRleCk7CiAKICAgICB1
bm1hcF92dGRfZG9tYWluX3BhZ2UoaXJlbWFwX2VudHJpZXMpOwpAQCAtNDA2
LDcgKzQwNiw3IEBAIHN0YXRpYyBpbnQgaW9hcGljX3J0ZV90b19yZW1hcF9l
bnRyeShzdHIKICAgICB9CiAKICAgICB1cGRhdGVfaXJ0ZShpb21tdSwgaXJl
bWFwX2VudHJ5LCAmbmV3X2lyZSwgIWluaXQpOwotICAgIGlvbW11X2ZsdXNo
X2NhY2hlX2VudHJ5KGlyZW1hcF9lbnRyeSwgc2l6ZW9mKCppcmVtYXBfZW50
cnkpKTsKKyAgICBpb21tdV9zeW5jX2NhY2hlKGlyZW1hcF9lbnRyeSwgc2l6
ZW9mKCppcmVtYXBfZW50cnkpKTsKICAgICBpb21tdV9mbHVzaF9pZWNfaW5k
ZXgoaW9tbXUsIDAsIGluZGV4KTsKIAogICAgIHVubWFwX3Z0ZF9kb21haW5f
cGFnZShpcmVtYXBfZW50cmllcyk7CkBAIC02OTUsNyArNjk1LDcgQEAgc3Rh
dGljIGludCBtc2lfbXNnX3RvX3JlbWFwX2VudHJ5KAogICAgIHVwZGF0ZV9p
cnRlKGlvbW11LCBpcmVtYXBfZW50cnksICZuZXdfaXJlLCBtc2lfZGVzYy0+
aXJ0ZV9pbml0aWFsaXplZCk7CiAgICAgbXNpX2Rlc2MtPmlydGVfaW5pdGlh
bGl6ZWQgPSB0cnVlOwogCi0gICAgaW9tbXVfZmx1c2hfY2FjaGVfZW50cnko
aXJlbWFwX2VudHJ5LCBzaXplb2YoKmlyZW1hcF9lbnRyeSkpOworICAgIGlv
bW11X3N5bmNfY2FjaGUoaXJlbWFwX2VudHJ5LCBzaXplb2YoKmlyZW1hcF9l
bnRyeSkpOwogICAgIGlvbW11X2ZsdXNoX2llY19pbmRleChpb21tdSwgMCwg
aW5kZXgpOwogCiAgICAgdW5tYXBfdnRkX2RvbWFpbl9wYWdlKGlyZW1hcF9l
bnRyaWVzKTsKLS0tIGEveGVuL2RyaXZlcnMvcGFzc3Rocm91Z2gvdnRkL2lv
bW11LmMKKysrIGIveGVuL2RyaXZlcnMvcGFzc3Rocm91Z2gvdnRkL2lvbW11
LmMKQEAgLTE0MCw3ICsxNDAsOCBAQCBzdGF0aWMgaW50IGNvbnRleHRfZ2V0
X2RvbWFpbl9pZChzdHJ1Y3QKIH0KIAogc3RhdGljIGludCBpb21tdXNfaW5j
b2hlcmVudDsKLXN0YXRpYyB2b2lkIF9faW9tbXVfZmx1c2hfY2FjaGUodm9p
ZCAqYWRkciwgdW5zaWduZWQgaW50IHNpemUpCisKK3ZvaWQgaW9tbXVfc3lu
Y19jYWNoZShjb25zdCB2b2lkICphZGRyLCB1bnNpZ25lZCBpbnQgc2l6ZSkK
IHsKICAgICBpbnQgaTsKICAgICBzdGF0aWMgdW5zaWduZWQgaW50IGNsZmx1
c2hfc2l6ZSA9IDA7CkBAIC0xNTUsMTYgKzE1Niw2IEBAIHN0YXRpYyB2b2lk
IF9faW9tbXVfZmx1c2hfY2FjaGUodm9pZCAqYWQKICAgICAgICAgY2FjaGVs
aW5lX2ZsdXNoKChjaGFyICopYWRkciArIGkpOwogfQogCi12b2lkIGlvbW11
X2ZsdXNoX2NhY2hlX2VudHJ5KHZvaWQgKmFkZHIsIHVuc2lnbmVkIGludCBz
aXplKQotewotICAgIF9faW9tbXVfZmx1c2hfY2FjaGUoYWRkciwgc2l6ZSk7
Ci19Ci0KLXZvaWQgaW9tbXVfZmx1c2hfY2FjaGVfcGFnZSh2b2lkICphZGRy
LCB1bnNpZ25lZCBsb25nIG5wYWdlcykKLXsKLSAgICBfX2lvbW11X2ZsdXNo
X2NhY2hlKGFkZHIsIFBBR0VfU0laRSAqIG5wYWdlcyk7Ci19Ci0KIC8qIEFs
bG9jYXRlIHBhZ2UgdGFibGUsIHJldHVybiBpdHMgbWFjaGluZSBhZGRyZXNz
ICovCiB1aW50NjRfdCBhbGxvY19wZ3RhYmxlX21hZGRyKHVuc2lnbmVkIGxv
bmcgbnBhZ2VzLCBub2RlaWRfdCBub2RlKQogewpAQCAtMTgzLDcgKzE3NCw3
IEBAIHVpbnQ2NF90IGFsbG9jX3BndGFibGVfbWFkZHIodW5zaWduZWQgbG8K
ICAgICAgICAgdmFkZHIgPSBfX21hcF9kb21haW5fcGFnZShjdXJfcGcpOwog
ICAgICAgICBtZW1zZXQodmFkZHIsIDAsIFBBR0VfU0laRSk7CiAKLSAgICAg
ICAgaW9tbXVfZmx1c2hfY2FjaGVfcGFnZSh2YWRkciwgMSk7CisgICAgICAg
IGlvbW11X3N5bmNfY2FjaGUodmFkZHIsIFBBR0VfU0laRSk7CiAgICAgICAg
IHVubWFwX2RvbWFpbl9wYWdlKHZhZGRyKTsKICAgICAgICAgY3VyX3BnKys7
CiAgICAgfQpAQCAtMjE2LDcgKzIwNyw3IEBAIHN0YXRpYyB1NjQgYnVzX3Rv
X2NvbnRleHRfbWFkZHIoc3RydWN0IHYKICAgICAgICAgfQogICAgICAgICBz
ZXRfcm9vdF92YWx1ZSgqcm9vdCwgbWFkZHIpOwogICAgICAgICBzZXRfcm9v
dF9wcmVzZW50KCpyb290KTsKLSAgICAgICAgaW9tbXVfZmx1c2hfY2FjaGVf
ZW50cnkocm9vdCwgc2l6ZW9mKHN0cnVjdCByb290X2VudHJ5KSk7CisgICAg
ICAgIGlvbW11X3N5bmNfY2FjaGUocm9vdCwgc2l6ZW9mKHN0cnVjdCByb290
X2VudHJ5KSk7CiAgICAgfQogICAgIG1hZGRyID0gKHU2NCkgZ2V0X2NvbnRl
eHRfYWRkcigqcm9vdCk7CiAgICAgdW5tYXBfdnRkX2RvbWFpbl9wYWdlKHJv
b3RfZW50cmllcyk7CkBAIC0yNjMsNyArMjU0LDcgQEAgc3RhdGljIHU2NCBh
ZGRyX3RvX2RtYV9wYWdlX21hZGRyKHN0cnVjdAogICAgICAgICAgICAgICov
CiAgICAgICAgICAgICBkbWFfc2V0X3B0ZV9yZWFkYWJsZSgqcHRlKTsKICAg
ICAgICAgICAgIGRtYV9zZXRfcHRlX3dyaXRhYmxlKCpwdGUpOwotICAgICAg
ICAgICAgaW9tbXVfZmx1c2hfY2FjaGVfZW50cnkocHRlLCBzaXplb2Yoc3Ry
dWN0IGRtYV9wdGUpKTsKKyAgICAgICAgICAgIGlvbW11X3N5bmNfY2FjaGUo
cHRlLCBzaXplb2Yoc3RydWN0IGRtYV9wdGUpKTsKICAgICAgICAgfQogCiAg
ICAgICAgIGlmICggbGV2ZWwgPT0gMiApCkBAIC02NDAsNyArNjMxLDcgQEAg
c3RhdGljIGludCBfX211c3RfY2hlY2sgZG1hX3B0ZV9jbGVhcl9vbgogICAg
ICpmbHVzaF9mbGFncyB8PSBJT01NVV9GTFVTSEZfbW9kaWZpZWQ7CiAKICAg
ICBzcGluX3VubG9jaygmaGQtPmFyY2gubWFwcGluZ19sb2NrKTsKLSAgICBp
b21tdV9mbHVzaF9jYWNoZV9lbnRyeShwdGUsIHNpemVvZihzdHJ1Y3QgZG1h
X3B0ZSkpOworICAgIGlvbW11X3N5bmNfY2FjaGUocHRlLCBzaXplb2Yoc3Ry
dWN0IGRtYV9wdGUpKTsKIAogICAgIHVubWFwX3Z0ZF9kb21haW5fcGFnZShw
YWdlKTsKIApAQCAtNjc5LDcgKzY3MCw3IEBAIHN0YXRpYyB2b2lkIGlvbW11
X2ZyZWVfcGFnZV90YWJsZShzdHJ1Y3QKICAgICAgICAgICAgIGlvbW11X2Zy
ZWVfcGFnZXRhYmxlKGRtYV9wdGVfYWRkcigqcHRlKSwgbmV4dF9sZXZlbCk7
CiAKICAgICAgICAgZG1hX2NsZWFyX3B0ZSgqcHRlKTsKLSAgICAgICAgaW9t
bXVfZmx1c2hfY2FjaGVfZW50cnkocHRlLCBzaXplb2Yoc3RydWN0IGRtYV9w
dGUpKTsKKyAgICAgICAgaW9tbXVfc3luY19jYWNoZShwdGUsIHNpemVvZihz
dHJ1Y3QgZG1hX3B0ZSkpOwogICAgIH0KIAogICAgIHVubWFwX3Z0ZF9kb21h
aW5fcGFnZShwdF92YWRkcik7CkBAIC0xNDAwLDcgKzEzOTEsNyBAQCBpbnQg
ZG9tYWluX2NvbnRleHRfbWFwcGluZ19vbmUoCiAgICAgY29udGV4dF9zZXRf
YWRkcmVzc193aWR0aCgqY29udGV4dCwgYWdhdyk7CiAgICAgY29udGV4dF9z
ZXRfZmF1bHRfZW5hYmxlKCpjb250ZXh0KTsKICAgICBjb250ZXh0X3NldF9w
cmVzZW50KCpjb250ZXh0KTsKLSAgICBpb21tdV9mbHVzaF9jYWNoZV9lbnRy
eShjb250ZXh0LCBzaXplb2Yoc3RydWN0IGNvbnRleHRfZW50cnkpKTsKKyAg
ICBpb21tdV9zeW5jX2NhY2hlKGNvbnRleHQsIHNpemVvZihzdHJ1Y3QgY29u
dGV4dF9lbnRyeSkpOwogICAgIHNwaW5fdW5sb2NrKCZpb21tdS0+bG9jayk7
CiAKICAgICAvKiBDb250ZXh0IGVudHJ5IHdhcyBwcmV2aW91c2x5IG5vbi1w
cmVzZW50ICh3aXRoIGRvbWlkIDApLiAqLwpAQCAtMTU2NCw3ICsxNTU1LDcg
QEAgaW50IGRvbWFpbl9jb250ZXh0X3VubWFwX29uZSgKIAogICAgIGNvbnRl
eHRfY2xlYXJfcHJlc2VudCgqY29udGV4dCk7CiAgICAgY29udGV4dF9jbGVh
cl9lbnRyeSgqY29udGV4dCk7Ci0gICAgaW9tbXVfZmx1c2hfY2FjaGVfZW50
cnkoY29udGV4dCwgc2l6ZW9mKHN0cnVjdCBjb250ZXh0X2VudHJ5KSk7Cisg
ICAgaW9tbXVfc3luY19jYWNoZShjb250ZXh0LCBzaXplb2Yoc3RydWN0IGNv
bnRleHRfZW50cnkpKTsKIAogICAgIGlvbW11X2RvbWlkPSBkb21haW5faW9t
bXVfZG9taWQoZG9tYWluLCBpb21tdSk7CiAgICAgaWYgKCBpb21tdV9kb21p
ZCA9PSAtMSApCkBAIC0xNzkxLDcgKzE3ODIsNyBAQCBzdGF0aWMgaW50IF9f
bXVzdF9jaGVjayBpbnRlbF9pb21tdV9tYXBfCiAKICAgICAqcHRlID0gbmV3
OwogCi0gICAgaW9tbXVfZmx1c2hfY2FjaGVfZW50cnkocHRlLCBzaXplb2Yo
c3RydWN0IGRtYV9wdGUpKTsKKyAgICBpb21tdV9zeW5jX2NhY2hlKHB0ZSwg
c2l6ZW9mKHN0cnVjdCBkbWFfcHRlKSk7CiAgICAgc3Bpbl91bmxvY2soJmhk
LT5hcmNoLm1hcHBpbmdfbG9jayk7CiAgICAgdW5tYXBfdnRkX2RvbWFpbl9w
YWdlKHBhZ2UpOwogCkBAIC0xODY2LDcgKzE4NTcsNyBAQCBpbnQgaW9tbXVf
cHRlX2ZsdXNoKHN0cnVjdCBkb21haW4gKmQsIHVpCiAgICAgaW50IGlvbW11
X2RvbWlkOwogICAgIGludCByYyA9IDA7CiAKLSAgICBpb21tdV9mbHVzaF9j
YWNoZV9lbnRyeShwdGUsIHNpemVvZihzdHJ1Y3QgZG1hX3B0ZSkpOworICAg
IGlvbW11X3N5bmNfY2FjaGUocHRlLCBzaXplb2Yoc3RydWN0IGRtYV9wdGUp
KTsKIAogICAgIGZvcl9lYWNoX2RyaGRfdW5pdCAoIGRyaGQgKQogICAgIHsK
QEAgLTI3MjQsNyArMjcxNSw3IEBAIHN0YXRpYyBpbnQgX19pbml0IGludGVs
X2lvbW11X3F1YXJhbnRpbmUKICAgICAgICAgICAgIGRtYV9zZXRfcHRlX2Fk
ZHIoKnB0ZSwgbWFkZHIpOwogICAgICAgICAgICAgZG1hX3NldF9wdGVfcmVh
ZGFibGUoKnB0ZSk7CiAgICAgICAgIH0KLSAgICAgICAgaW9tbXVfZmx1c2hf
Y2FjaGVfcGFnZShwYXJlbnQsIDEpOworICAgICAgICBpb21tdV9zeW5jX2Nh
Y2hlKHBhcmVudCwgUEFHRV9TSVpFKTsKIAogICAgICAgICB1bm1hcF92dGRf
ZG9tYWluX3BhZ2UocGFyZW50KTsKICAgICAgICAgcGFyZW50ID0gbWFwX3Z0
ZF9kb21haW5fcGFnZShtYWRkcik7Cg==

--=separator
Content-Type: application/octet-stream; name="xsa321/xsa321-4.13-3.patch"
Content-Disposition: attachment; filename="xsa321/xsa321-4.13-3.patch"
Content-Transfer-Encoding: base64

RnJvbTogPHNlY3VyaXR5QHhlbnByb2plY3Qub3JnPgpTdWJqZWN0OiB4ODYv
aW9tbXU6IGludHJvZHVjZSBhIGNhY2hlIHN5bmMgaG9vawoKVGhlIGhvb2sg
aXMgb25seSBpbXBsZW1lbnRlZCBmb3IgVlQtZCBhbmQgaXQgdXNlcyB0aGUg
YWxyZWFkeSBleGlzdGluZwppb21tdV9zeW5jX2NhY2hlIGZ1bmN0aW9uIHBy
ZXNlbnQgaW4gVlQtZCBjb2RlLiBUaGUgbmV3IGhvb2sgaXMKYWRkZWQgc28g
dGhhdCB0aGUgY2FjaGUgY2FuIGJlIGZsdXNoZWQgYnkgY29kZSBvdXRzaWRl
IG9mIFZULWQgd2hlbgp1c2luZyBzaGFyZWQgcGFnZSB0YWJsZXMuCgpOb3Rl
IHRoYXQgYWxsb2NfcGd0YWJsZV9tYWRkciBtdXN0IHVzZSB0aGUgbm93IGxv
Y2FsbHkgZGVmaW5lZApzeW5jX2NhY2hlIGZ1bmN0aW9uLCBiZWNhdXNlIElP
TU1VIG9wcyBhcmUgbm90IHlldCBzZXR1cCB0aGUgZmlyc3QKdGltZSB0aGUg
ZnVuY3Rpb24gZ2V0cyBjYWxsZWQgZHVyaW5nIElPTU1VIGluaXRpYWxpemF0
aW9uLgoKTm8gZnVuY3Rpb25hbCBjaGFuZ2UgaW50ZW5kZWQuCgpUaGlzIGlz
IHBhcnQgb2YgWFNBLTMyMS4KClJldmlld2VkLWJ5OiBKYW4gQmV1bGljaCA8
amJldWxpY2hAc3VzZS5jb20+CgotLS0gYS94ZW4vZHJpdmVycy9wYXNzdGhy
b3VnaC92dGQvZXh0ZXJuLmgKKysrIGIveGVuL2RyaXZlcnMvcGFzc3Rocm91
Z2gvdnRkL2V4dGVybi5oCkBAIC00Myw3ICs0Myw2IEBAIHZvaWQgZGlzYWJs
ZV9xaW52YWwoc3RydWN0IHZ0ZF9pb21tdSAqaW8KIGludCBlbmFibGVfaW50
cmVtYXAoc3RydWN0IHZ0ZF9pb21tdSAqaW9tbXUsIGludCBlaW0pOwogdm9p
ZCBkaXNhYmxlX2ludHJlbWFwKHN0cnVjdCB2dGRfaW9tbXUgKmlvbW11KTsK
IAotdm9pZCBpb21tdV9zeW5jX2NhY2hlKGNvbnN0IHZvaWQgKmFkZHIsIHVu
c2lnbmVkIGludCBzaXplKTsKIGludCBpb21tdV9hbGxvYyhzdHJ1Y3QgYWNw
aV9kcmhkX3VuaXQgKmRyaGQpOwogdm9pZCBpb21tdV9mcmVlKHN0cnVjdCBh
Y3BpX2RyaGRfdW5pdCAqZHJoZCk7CiAKLS0tIGEveGVuL2RyaXZlcnMvcGFz
c3Rocm91Z2gvdnRkL2lvbW11LmMKKysrIGIveGVuL2RyaXZlcnMvcGFzc3Ro
cm91Z2gvdnRkL2lvbW11LmMKQEAgLTE0MSw3ICsxNDEsNyBAQCBzdGF0aWMg
aW50IGNvbnRleHRfZ2V0X2RvbWFpbl9pZChzdHJ1Y3QKIAogc3RhdGljIGlu
dCBpb21tdXNfaW5jb2hlcmVudDsKIAotdm9pZCBpb21tdV9zeW5jX2NhY2hl
KGNvbnN0IHZvaWQgKmFkZHIsIHVuc2lnbmVkIGludCBzaXplKQorc3RhdGlj
IHZvaWQgc3luY19jYWNoZShjb25zdCB2b2lkICphZGRyLCB1bnNpZ25lZCBp
bnQgc2l6ZSkKIHsKICAgICBpbnQgaTsKICAgICBzdGF0aWMgdW5zaWduZWQg
aW50IGNsZmx1c2hfc2l6ZSA9IDA7CkBAIC0xNzQsNyArMTc0LDcgQEAgdWlu
dDY0X3QgYWxsb2NfcGd0YWJsZV9tYWRkcih1bnNpZ25lZCBsbwogICAgICAg
ICB2YWRkciA9IF9fbWFwX2RvbWFpbl9wYWdlKGN1cl9wZyk7CiAgICAgICAg
IG1lbXNldCh2YWRkciwgMCwgUEFHRV9TSVpFKTsKIAotICAgICAgICBpb21t
dV9zeW5jX2NhY2hlKHZhZGRyLCBQQUdFX1NJWkUpOworICAgICAgICBzeW5j
X2NhY2hlKHZhZGRyLCBQQUdFX1NJWkUpOwogICAgICAgICB1bm1hcF9kb21h
aW5fcGFnZSh2YWRkcik7CiAgICAgICAgIGN1cl9wZysrOwogICAgIH0KQEAg
LTI3NjMsNiArMjc2Myw3IEBAIGNvbnN0IHN0cnVjdCBpb21tdV9vcHMgX19p
bml0Y29uc3RyZWwgaW4KICAgICAuaW90bGJfZmx1c2hfYWxsID0gaW9tbXVf
Zmx1c2hfaW90bGJfYWxsLAogICAgIC5nZXRfcmVzZXJ2ZWRfZGV2aWNlX21l
bW9yeSA9IGludGVsX2lvbW11X2dldF9yZXNlcnZlZF9kZXZpY2VfbWVtb3J5
LAogICAgIC5kdW1wX3AybV90YWJsZSA9IHZ0ZF9kdW1wX3AybV90YWJsZSwK
KyAgICAuc3luY19jYWNoZSA9IHN5bmNfY2FjaGUsCiB9OwogCiBjb25zdCBz
dHJ1Y3QgaW9tbXVfaW5pdF9vcHMgX19pbml0Y29uc3RyZWwgaW50ZWxfaW9t
bXVfaW5pdF9vcHMgPSB7Ci0tLSBhL3hlbi9pbmNsdWRlL2FzbS14ODYvaW9t
bXUuaAorKysgYi94ZW4vaW5jbHVkZS9hc20teDg2L2lvbW11LmgKQEAgLTEy
MSw2ICsxMjEsMTMgQEAgZXh0ZXJuIGJvb2wgdW50cnVzdGVkX21zaTsKIGlu
dCBwaV91cGRhdGVfaXJ0ZShjb25zdCBzdHJ1Y3QgcGlfZGVzYyAqcGlfZGVz
YywgY29uc3Qgc3RydWN0IHBpcnEgKnBpcnEsCiAgICAgICAgICAgICAgICAg
ICAgY29uc3QgdWludDhfdCBndmVjKTsKIAorI2RlZmluZSBpb21tdV9zeW5j
X2NhY2hlKGFkZHIsIHNpemUpICh7ICAgICAgICAgICAgICAgICBcCisgICAg
Y29uc3Qgc3RydWN0IGlvbW11X29wcyAqb3BzID0gaW9tbXVfZ2V0X29wcygp
OyAgICAgIFwKKyAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgXAorICAgIGlmICggb3BzLT5zeW5jX2Nh
Y2hlICkgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBcCisgICAgICAg
IGlvbW11X3ZjYWxsKG9wcywgc3luY19jYWNoZSwgYWRkciwgc2l6ZSk7ICAg
ICAgIFwKK30pCisKICNlbmRpZiAvKiAhX19BUkNIX1g4Nl9JT01NVV9IX18g
Ki8KIC8qCiAgKiBMb2NhbCB2YXJpYWJsZXM6Ci0tLSBhL3hlbi9pbmNsdWRl
L3hlbi9pb21tdS5oCisrKyBiL3hlbi9pbmNsdWRlL3hlbi9pb21tdS5oCkBA
IC0yNTAsNiArMjUwLDcgQEAgc3RydWN0IGlvbW11X29wcyB7CiAgICAgaW50
ICgqc2V0dXBfaHBldF9tc2kpKHN0cnVjdCBtc2lfZGVzYyAqKTsKIAogICAg
IGludCAoKmFkanVzdF9pcnFfYWZmaW5pdGllcykodm9pZCk7CisgICAgdm9p
ZCAoKnN5bmNfY2FjaGUpKGNvbnN0IHZvaWQgKmFkZHIsIHVuc2lnbmVkIGlu
dCBzaXplKTsKICNlbmRpZiAvKiBDT05GSUdfWDg2ICovCiAKICAgICBpbnQg
X19tdXN0X2NoZWNrICgqc3VzcGVuZCkodm9pZCk7Cg==

--=separator
Content-Type: application/octet-stream; name="xsa321/xsa321-4.13-4.patch"
Content-Disposition: attachment; filename="xsa321/xsa321-4.13-4.patch"
Content-Transfer-Encoding: base64

RnJvbTogPHNlY3VyaXR5QHhlbnByb2plY3Qub3JnPgpTdWJqZWN0OiB2dGQ6
IGRvbid0IGFzc3VtZSBhZGRyZXNzZXMgYXJlIGFsaWduZWQgaW4gc3luY19j
YWNoZQoKQ3VycmVudCBjb2RlIGluIHN5bmNfY2FjaGUgYXNzdW1lIHRoYXQg
dGhlIGFkZHJlc3MgcGFzc2VkIGluIGlzCmFsaWduZWQgdG8gYSBjYWNoZSBs
aW5lIHNpemUuIEZpeCB0aGUgY29kZSB0byBzdXBwb3J0IHBhc3NpbmcgaW4K
YXJiaXRyYXJ5IGFkZHJlc3NlcyBub3QgbmVjZXNzYXJpbHkgYWxpZ25lZCB0
byBhIGNhY2hlIGxpbmUgc2l6ZS4KClRoaXMgaXMgcGFydCBvZiBYU0EtMzIx
LgoKUmV2aWV3ZWQtYnk6IEphbiBCZXVsaWNoIDxqYmV1bGljaEBzdXNlLmNv
bT4KCi0tLSBhL3hlbi9kcml2ZXJzL3Bhc3N0aHJvdWdoL3Z0ZC9pb21tdS5j
CisrKyBiL3hlbi9kcml2ZXJzL3Bhc3N0aHJvdWdoL3Z0ZC9pb21tdS5jCkBA
IC0xNDMsOCArMTQzLDggQEAgc3RhdGljIGludCBpb21tdXNfaW5jb2hlcmVu
dDsKIAogc3RhdGljIHZvaWQgc3luY19jYWNoZShjb25zdCB2b2lkICphZGRy
LCB1bnNpZ25lZCBpbnQgc2l6ZSkKIHsKLSAgICBpbnQgaTsKLSAgICBzdGF0
aWMgdW5zaWduZWQgaW50IGNsZmx1c2hfc2l6ZSA9IDA7CisgICAgc3RhdGlj
IHVuc2lnbmVkIGxvbmcgY2xmbHVzaF9zaXplID0gMDsKKyAgICBjb25zdCB2
b2lkICplbmQgPSBhZGRyICsgc2l6ZTsKIAogICAgIGlmICggIWlvbW11c19p
bmNvaGVyZW50ICkKICAgICAgICAgcmV0dXJuOwpAQCAtMTUyLDggKzE1Miw5
IEBAIHN0YXRpYyB2b2lkIHN5bmNfY2FjaGUoY29uc3Qgdm9pZCAqYWRkciwK
ICAgICBpZiAoIGNsZmx1c2hfc2l6ZSA9PSAwICkKICAgICAgICAgY2xmbHVz
aF9zaXplID0gZ2V0X2NhY2hlX2xpbmVfc2l6ZSgpOwogCi0gICAgZm9yICgg
aSA9IDA7IGkgPCBzaXplOyBpICs9IGNsZmx1c2hfc2l6ZSApCi0gICAgICAg
IGNhY2hlbGluZV9mbHVzaCgoY2hhciAqKWFkZHIgKyBpKTsKKyAgICBhZGRy
IC09ICh1bnNpZ25lZCBsb25nKWFkZHIgJiAoY2xmbHVzaF9zaXplIC0gMSk7
CisgICAgZm9yICggOyBhZGRyIDwgZW5kOyBhZGRyICs9IGNsZmx1c2hfc2l6
ZSApCisgICAgICAgIGNhY2hlbGluZV9mbHVzaCgoY2hhciAqKWFkZHIpOwog
fQogCiAvKiBBbGxvY2F0ZSBwYWdlIHRhYmxlLCByZXR1cm4gaXRzIG1hY2hp
bmUgYWRkcmVzcyAqLwo=

--=separator
Content-Type: application/octet-stream; name="xsa321/xsa321-4.13-5.patch"
Content-Disposition: attachment; filename="xsa321/xsa321-4.13-5.patch"
Content-Transfer-Encoding: base64

RnJvbTogPHNlY3VyaXR5QHhlbnByb2plY3Qub3JnPgpTdWJqZWN0OiB4ODYv
YWx0ZXJuYXRpdmU6IGludHJvZHVjZSBhbHRlcm5hdGl2ZV8yCgpJdCdzIGJh
c2VkIG9uIGFsdGVybmF0aXZlX2lvXzIgd2l0aG91dCBpbnB1dHMgb3Igb3V0
cHV0cyBidXQgd2l0aCBhbgphZGRlZCBtZW1vcnkgY2xvYmJlci4KClRoaXMg
aXMgcGFydCBvZiBYU0EtMzIxLgoKQWNrZWQtYnk6IEphbiBCZXVsaWNoIDxq
YmV1bGljaEBzdXNlLmNvbT4KCi0tLSBhL3hlbi9pbmNsdWRlL2FzbS14ODYv
YWx0ZXJuYXRpdmUuaAorKysgYi94ZW4vaW5jbHVkZS9hc20teDg2L2FsdGVy
bmF0aXZlLmgKQEAgLTExNCw2ICsxMTQsMTEgQEAgZXh0ZXJuIHZvaWQgYWx0
ZXJuYXRpdmVfYnJhbmNoZXModm9pZCk7CiAjZGVmaW5lIGFsdGVybmF0aXZl
KG9sZGluc3RyLCBuZXdpbnN0ciwgZmVhdHVyZSkgICAgICAgICAgICAgICAg
ICAgICAgICBcCiAgICAgICAgIGFzbSB2b2xhdGlsZSAoQUxURVJOQVRJVkUo
b2xkaW5zdHIsIG5ld2luc3RyLCBmZWF0dXJlKSA6IDogOiAibWVtb3J5IikK
IAorI2RlZmluZSBhbHRlcm5hdGl2ZV8yKG9sZGluc3RyLCBuZXdpbnN0cjEs
IGZlYXR1cmUxLCBuZXdpbnN0cjIsIGZlYXR1cmUyKSBcCisJYXNtIHZvbGF0
aWxlIChBTFRFUk5BVElWRV8yKG9sZGluc3RyLCBuZXdpbnN0cjEsIGZlYXR1
cmUxLAlcCisJCQkJICAgIG5ld2luc3RyMiwgZmVhdHVyZTIpCQlcCisJCSAg
ICAgIDogOiA6ICJtZW1vcnkiKQorCiAvKgogICogQWx0ZXJuYXRpdmUgaW5s
aW5lIGFzc2VtYmx5IHdpdGggaW5wdXQuCiAgKgo=

--=separator
Content-Type: application/octet-stream; name="xsa321/xsa321-4.13-6.patch"
Content-Disposition: attachment; filename="xsa321/xsa321-4.13-6.patch"
Content-Transfer-Encoding: base64

RnJvbTogPHNlY3VyaXR5QHhlbnByb2plY3Qub3JnPgpTdWJqZWN0OiB2dGQ6
IG9wdGltaXplIENQVSBjYWNoZSBzeW5jCgpTb21lIFZULWQgSU9NTVVzIGFy
ZSBub24tY29oZXJlbnQsIHdoaWNoIHJlcXVpcmVzIGEgY2FjaGUgd3JpdGUg
YmFjawppbiBvcmRlciBmb3IgdGhlIGNoYW5nZXMgbWFkZSBieSB0aGUgQ1BV
IHRvIGJlIHZpc2libGUgdG8gdGhlIElPTU1VLgpUaGlzIGNhY2hlIHdyaXRl
IGJhY2sgd2FzIHVuY29uZGl0aW9uYWxseSBkb25lIHVzaW5nIGNsZmx1c2gs
IGJ1dCB0aGVyZSBhcmUKb3RoZXIgbW9yZSBlZmZpY2llbnQgaW5zdHJ1Y3Rp
b25zIHRvIGRvIHNvLCBoZW5jZSBpbXBsZW1lbnQgc3VwcG9ydApmb3IgdGhl
bSB1c2luZyB0aGUgYWx0ZXJuYXRpdmUgZnJhbWV3b3JrLgoKVGhpcyBpcyBw
YXJ0IG9mIFhTQS0zMjEuCgpSZXZpZXdlZC1ieTogSmFuIEJldWxpY2ggPGpi
ZXVsaWNoQHN1c2UuY29tPgoKLS0tIGEveGVuL2RyaXZlcnMvcGFzc3Rocm91
Z2gvdnRkL2V4dGVybi5oCisrKyBiL3hlbi9kcml2ZXJzL3Bhc3N0aHJvdWdo
L3Z0ZC9leHRlcm4uaApAQCAtNjgsNyArNjgsNiBAQCBpbnQgX19tdXN0X2No
ZWNrIHFpbnZhbF9kZXZpY2VfaW90bGJfc3luCiAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICB1MTYgZGlkLCB1MTYgc2l6ZSwg
dTY0IGFkZHIpOwogCiB1bnNpZ25lZCBpbnQgZ2V0X2NhY2hlX2xpbmVfc2l6
ZSh2b2lkKTsKLXZvaWQgY2FjaGVsaW5lX2ZsdXNoKGNoYXIgKik7CiB2b2lk
IGZsdXNoX2FsbF9jYWNoZSh2b2lkKTsKIAogdWludDY0X3QgYWxsb2NfcGd0
YWJsZV9tYWRkcih1bnNpZ25lZCBsb25nIG5wYWdlcywgbm9kZWlkX3Qgbm9k
ZSk7Ci0tLSBhL3hlbi9kcml2ZXJzL3Bhc3N0aHJvdWdoL3Z0ZC9pb21tdS5j
CisrKyBiL3hlbi9kcml2ZXJzL3Bhc3N0aHJvdWdoL3Z0ZC9pb21tdS5jCkBA
IC0zMSw2ICszMSw3IEBACiAjaW5jbHVkZSA8eGVuL3BjaV9yZWdzLmg+CiAj
aW5jbHVkZSA8eGVuL2tleWhhbmRsZXIuaD4KICNpbmNsdWRlIDxhc20vbXNp
Lmg+CisjaW5jbHVkZSA8YXNtL25vcHMuaD4KICNpbmNsdWRlIDxhc20vaXJx
Lmg+CiAjaW5jbHVkZSA8YXNtL2h2bS92bXgvdm14Lmg+CiAjaW5jbHVkZSA8
YXNtL3AybS5oPgpAQCAtMTU0LDcgKzE1NSw0MiBAQCBzdGF0aWMgdm9pZCBz
eW5jX2NhY2hlKGNvbnN0IHZvaWQgKmFkZHIsCiAKICAgICBhZGRyIC09ICh1
bnNpZ25lZCBsb25nKWFkZHIgJiAoY2xmbHVzaF9zaXplIC0gMSk7CiAgICAg
Zm9yICggOyBhZGRyIDwgZW5kOyBhZGRyICs9IGNsZmx1c2hfc2l6ZSApCi0g
ICAgICAgIGNhY2hlbGluZV9mbHVzaCgoY2hhciAqKWFkZHIpOworLyoKKyAq
IFRoZSBhcmd1bWVudHMgdG8gYSBtYWNybyBtdXN0IG5vdCBpbmNsdWRlIHBy
ZXByb2Nlc3NvciBkaXJlY3RpdmVzLiBEb2luZyBzbworICogcmVzdWx0cyBp
biB1bmRlZmluZWQgYmVoYXZpb3IsIHNvIHdlIGhhdmUgdG8gY3JlYXRlIHNv
bWUgZGVmaW5lcyBoZXJlIGluCisgKiBvcmRlciB0byBhdm9pZCBpdC4KKyAq
LworI2lmIGRlZmluZWQoSEFWRV9BU19DTFdCKQorIyBkZWZpbmUgQ0xXQl9F
TkNPRElORyAiY2x3YiAlW3BdIgorI2VsaWYgZGVmaW5lZChIQVZFX0FTX1hT
QVZFT1BUKQorIyBkZWZpbmUgQ0xXQl9FTkNPRElORyAiZGF0YTE2IHhzYXZl
b3B0ICVbcF0iIC8qIGNsd2IgKi8KKyNlbHNlCisjIGRlZmluZSBDTFdCX0VO
Q09ESU5HICIuYnl0ZSAweDY2LCAweDBmLCAweGFlLCAweDMwIiAvKiBjbHdi
ICglJXJheCkgKi8KKyNlbmRpZgorCisjZGVmaW5lIEJBU0VfSU5QVVQoYWRk
cikgW3BdICJtIiAoKihjb25zdCBjaGFyICopKGFkZHIpKQorI2lmIGRlZmlu
ZWQoSEFWRV9BU19DTFdCKSB8fCBkZWZpbmVkKEhBVkVfQVNfWFNBVkVPUFQp
CisjIGRlZmluZSBJTlBVVCBCQVNFX0lOUFVUCisjZWxzZQorIyBkZWZpbmUg
SU5QVVQoYWRkcikgImEiIChhZGRyKSwgQkFTRV9JTlBVVChhZGRyKQorI2Vu
ZGlmCisgICAgICAgIC8qCisgICAgICAgICAqIE5vdGUgcmVnYXJkaW5nIHRo
ZSB1c2Ugb2YgTk9QX0RTX1BSRUZJWDogaXQncyBmYXN0ZXIgdG8gZG8gYSBj
bGZsdXNoCisgICAgICAgICAqICsgcHJlZml4IHRoYW4gYSBjbGZsdXNoICsg
bm9wLCBhbmQgaGVuY2UgdGhlIHByZWZpeCBpcyBhZGRlZCBpbnN0ZWFkCisg
ICAgICAgICAqIG9mIGxldHRpbmcgdGhlIGFsdGVybmF0aXZlIGZyYW1ld29y
ayBmaWxsIHRoZSBnYXAgYnkgYXBwZW5kaW5nIG5vcHMuCisgICAgICAgICAq
LworICAgICAgICBhbHRlcm5hdGl2ZV9pb18yKCIuYnl0ZSAiIF9fc3RyaW5n
aWZ5KE5PUF9EU19QUkVGSVgpICI7IGNsZmx1c2ggJVtwXSIsCisgICAgICAg
ICAgICAgICAgICAgICAgICAgImRhdGExNiBjbGZsdXNoICVbcF0iLCAvKiBj
bGZsdXNob3B0ICovCisgICAgICAgICAgICAgICAgICAgICAgICAgWDg2X0ZF
QVRVUkVfQ0xGTFVTSE9QVCwKKyAgICAgICAgICAgICAgICAgICAgICAgICBD
TFdCX0VOQ09ESU5HLAorICAgICAgICAgICAgICAgICAgICAgICAgIFg4Nl9G
RUFUVVJFX0NMV0IsIC8qIG5vIG91dHB1dHMgKi8sCisgICAgICAgICAgICAg
ICAgICAgICAgICAgSU5QVVQoYWRkcikpOworI3VuZGVmIElOUFVUCisjdW5k
ZWYgQkFTRV9JTlBVVAorI3VuZGVmIENMV0JfRU5DT0RJTkcKKworICAgIGFs
dGVybmF0aXZlXzIoIiIsICJzZmVuY2UiLCBYODZfRkVBVFVSRV9DTEZMVVNI
T1BULAorICAgICAgICAgICAgICAgICAgICAgICJzZmVuY2UiLCBYODZfRkVB
VFVSRV9DTFdCKTsKIH0KIAogLyogQWxsb2NhdGUgcGFnZSB0YWJsZSwgcmV0
dXJuIGl0cyBtYWNoaW5lIGFkZHJlc3MgKi8KLS0tIGEveGVuL2RyaXZlcnMv
cGFzc3Rocm91Z2gvdnRkL3g4Ni92dGQuYworKysgYi94ZW4vZHJpdmVycy9w
YXNzdGhyb3VnaC92dGQveDg2L3Z0ZC5jCkBAIC01MSwxMSArNTEsNiBAQCB1
bnNpZ25lZCBpbnQgZ2V0X2NhY2hlX2xpbmVfc2l6ZSh2b2lkKQogICAgIHJl
dHVybiAoKGNwdWlkX2VieCgxKSA+PiA4KSAmIDB4ZmYpICogODsKIH0KIAot
dm9pZCBjYWNoZWxpbmVfZmx1c2goY2hhciAqIGFkZHIpCi17Ci0gICAgY2xm
bHVzaChhZGRyKTsKLX0KLQogdm9pZCBmbHVzaF9hbGxfY2FjaGUoKQogewog
ICAgIHdiaW52ZCgpOwo=

--=separator
Content-Type: application/octet-stream; name="xsa321/xsa321-4.13-7.patch"
Content-Disposition: attachment; filename="xsa321/xsa321-4.13-7.patch"
Content-Transfer-Encoding: base64

RnJvbTogPHNlY3VyaXR5QHhlbnByb2plY3Qub3JnPgpTdWJqZWN0OiB4ODYv
ZXB0OiBmbHVzaCBjYWNoZSB3aGVuIG1vZGlmeWluZyBQVEVzIGFuZCBzaGFy
aW5nIHBhZ2UgdGFibGVzCgpNb2RpZmljYXRpb25zIG1hZGUgdG8gdGhlIHBh
Z2UgdGFibGVzIGJ5IEVQVCBjb2RlIG5lZWQgdG8gYmUgd3JpdHRlbgp0byBt
ZW1vcnkgd2hlbiB0aGUgcGFnZSB0YWJsZXMgYXJlIHNoYXJlZCB3aXRoIHRo
ZSBJT01NVSwgYXMgSW50ZWwKSU9NTVVzIGNhbiBiZSBub24tY29oZXJlbnQg
YW5kIHRodXMgcmVxdWlyZSBjaGFuZ2VzIHRvIGJlIHdyaXR0ZW4gdG8KbWVt
b3J5IGluIG9yZGVyIHRvIGJlIHZpc2libGUgdG8gdGhlIElPTU1VLgoKSW4g
b3JkZXIgdG8gYWNoaWV2ZSB0aGlzIG1ha2Ugc3VyZSBkYXRhIGlzIHdyaXR0
ZW4gYmFjayB0byBtZW1vcnkKYWZ0ZXIgd3JpdGluZyBhbiBFUFQgZW50cnkg
d2hlbiB0aGUgcmVjYWxjIGJpdCBpcyBub3Qgc2V0IGluCmF0b21pY193cml0
ZV9lcHRfZW50cnkuIElmIHN1Y2ggYml0IGlzIHNldCwgdGhlIGVudHJ5IHdp
bGwgYmUKYWRqdXN0ZWQgYW5kIGF0b21pY193cml0ZV9lcHRfZW50cnkgd2ls
bCBiZSBjYWxsZWQgYSBzZWNvbmQgdGltZQp3aXRob3V0IHRoZSByZWNhbGMg
Yml0IHNldC4gTm90ZSB0aGF0IHdoZW4gc3BsaXR0aW5nIGEgc3VwZXIgcGFn
ZSB0aGUKbmV3IHRhYmxlcyByZXN1bHRpbmcgb2YgdGhlIHNwbGl0IHNob3Vs
ZCBhbHNvIGJlIHdyaXR0ZW4gYmFjay4KCkZhaWx1cmUgdG8gZG8gc28gY2Fu
IGFsbG93IGRldmljZXMgYmVoaW5kIHRoZSBJT01NVSBhY2Nlc3MgdG8gdGhl
CnN0YWxlIHN1cGVyIHBhZ2UsIG9yIGNhdXNlIGNvaGVyZW5jeSBpc3N1ZXMg
YXMgY2hhbmdlcyBtYWRlIGJ5IHRoZQpwcm9jZXNzb3IgdG8gdGhlIHBhZ2Ug
dGFibGVzIGFyZSBub3QgdmlzaWJsZSB0byB0aGUgSU9NTVUuCgpUaGlzIGFs
bG93cyB0byByZW1vdmUgdGhlIFZULWQgc3BlY2lmaWMgaW9tbXVfcHRlX2Zs
dXNoIGhlbHBlciwgc2luY2UKdGhlIGNhY2hlIHdyaXRlIGJhY2sgaXMgbm93
IHBlcmZvcm1lZCBieSBhdG9taWNfd3JpdGVfZXB0X2VudHJ5LCBhbmQKaGVu
Y2UgaW9tbXVfaW90bGJfZmx1c2ggY2FuIGJlIHVzZWQgdG8gZmx1c2ggdGhl
IElPTU1VIFRMQi4gVGhlIG5ld2x5CnVzZWQgbWV0aG9kIChpb21tdV9pb3Rs
Yl9mbHVzaCkgY2FuIHJlc3VsdCBpbiBsZXNzIGZsdXNoZXMsIHNpbmNlIGl0
Cm1pZ2h0IHNvbWV0aW1lcyBiZSBjYWxsZWQgcmlnaHRseSB3aXRoIDAgZmxh
Z3MsIGluIHdoaWNoIGNhc2UgaXQKYmVjb21lcyBhIG5vLW9wLgoKVGhpcyBp
cyBwYXJ0IG9mIFhTQS0zMjEuCgpSZXZpZXdlZC1ieTogSmFuIEJldWxpY2gg
PGpiZXVsaWNoQHN1c2UuY29tPgoKLS0tIGEveGVuL2FyY2gveDg2L21tL3Ay
bS1lcHQuYworKysgYi94ZW4vYXJjaC94ODYvbW0vcDJtLWVwdC5jCkBAIC01
OCw2ICs1OCwxOSBAQCBzdGF0aWMgaW50IGF0b21pY193cml0ZV9lcHRfZW50
cnkoc3RydWN0CiAKICAgICB3cml0ZV9hdG9taWMoJmVudHJ5cHRyLT5lcHRl
LCBuZXcuZXB0ZSk7CiAKKyAgICAvKgorICAgICAqIFRoZSByZWNhbGMgZmll
bGQgb24gdGhlIEVQVCBpcyB1c2VkIHRvIHNpZ25hbCBlaXRoZXIgdGhhdCBh
CisgICAgICogcmVjYWxjdWxhdGlvbiBvZiB0aGUgRU1UIGZpZWxkIGlzIHJl
cXVpcmVkICh3aGljaCBkb2Vzbid0IGVmZmVjdCB0aGUKKyAgICAgKiBJT01N
VSksIG9yIGEgdHlwZSBjaGFuZ2UuIFR5cGUgY2hhbmdlcyBjYW4gb25seSBi
ZSBiZXR3ZWVuIHJhbV9ydywKKyAgICAgKiBsb2dkaXJ0eSBhbmQgaW9yZXFf
c2VydmVyOiBjaGFuZ2VzIHRvL2Zyb20gbG9nZGlydHkgd29uJ3Qgd29yayB3
ZWxsIHdpdGgKKyAgICAgKiBhbiBJT01NVSBhbnl3YXksIGFzIElPTU1VICNQ
RnMgYXJlIG5vdCBzeW5jaHJvbm91cyBhbmQgd2lsbCBsZWFkIHRvCisgICAg
ICogYWJvcnRzLCBhbmQgY2hhbmdlcyB0by9mcm9tIGlvcmVxX3NlcnZlciBh
cmUgYWxyZWFkeSBmdWxseSBmbHVzaGVkCisgICAgICogYmVmb3JlIHJldHVy
bmluZyB0byBndWVzdCBjb250ZXh0IChzZWUKKyAgICAgKiBYRU5fRE1PUF9t
YXBfbWVtX3R5cGVfdG9faW9yZXFfc2VydmVyKS4KKyAgICAgKi8KKyAgICBp
ZiAoICFuZXcucmVjYWxjICYmIGlvbW11X3VzZV9oYXBfcHQocDJtLT5kb21h
aW4pICkKKyAgICAgICAgaW9tbXVfc3luY19jYWNoZShlbnRyeXB0ciwgc2l6
ZW9mKCplbnRyeXB0cikpOworCiAgICAgcmV0dXJuIDA7CiB9CiAKQEAgLTI3
OCw2ICsyOTEsOSBAQCBzdGF0aWMgYm9vbF90IGVwdF9zcGxpdF9zdXBlcl9w
YWdlKHN0cnVjCiAgICAgICAgICAgICBicmVhazsKICAgICB9CiAKKyAgICBp
ZiAoIGlvbW11X3VzZV9oYXBfcHQocDJtLT5kb21haW4pICkKKyAgICAgICAg
aW9tbXVfc3luY19jYWNoZSh0YWJsZSwgRVBUX1BBR0VUQUJMRV9FTlRSSUVT
ICogc2l6ZW9mKGVwdF9lbnRyeV90KSk7CisKICAgICB1bm1hcF9kb21haW5f
cGFnZSh0YWJsZSk7CiAKICAgICAvKiBFdmVuIGZhaWxlZCB3ZSBzaG91bGQg
aW5zdGFsbCB0aGUgbmV3bHkgYWxsb2NhdGVkIGVwdCBwYWdlLiAqLwpAQCAt
MzM3LDYgKzM1Myw5IEBAIHN0YXRpYyBpbnQgZXB0X25leHRfbGV2ZWwoc3Ry
dWN0IHAybV9kb20KICAgICAgICAgaWYgKCAhbmV4dCApCiAgICAgICAgICAg
ICByZXR1cm4gR1VFU1RfVEFCTEVfTUFQX0ZBSUxFRDsKIAorICAgICAgICBp
ZiAoIGlvbW11X3VzZV9oYXBfcHQocDJtLT5kb21haW4pICkKKyAgICAgICAg
ICAgIGlvbW11X3N5bmNfY2FjaGUobmV4dCwgRVBUX1BBR0VUQUJMRV9FTlRS
SUVTICogc2l6ZW9mKGVwdF9lbnRyeV90KSk7CisKICAgICAgICAgcmMgPSBh
dG9taWNfd3JpdGVfZXB0X2VudHJ5KHAybSwgZXB0X2VudHJ5LCBlLCBuZXh0
X2xldmVsKTsKICAgICAgICAgQVNTRVJUKHJjID09IDApOwogICAgIH0KQEAg
LTgyMSw3ICs4NDAsMTAgQEAgb3V0OgogICAgICAgICAgbmVlZF9tb2RpZnlf
dnRkX3RhYmxlICkKICAgICB7CiAgICAgICAgIGlmICggaW9tbXVfdXNlX2hh
cF9wdChkKSApCi0gICAgICAgICAgICByYyA9IGlvbW11X3B0ZV9mbHVzaChk
LCBnZm4sICZlcHRfZW50cnktPmVwdGUsIG9yZGVyLCB2dGRfcHRlX3ByZXNl
bnQpOworICAgICAgICAgICAgcmMgPSBpb21tdV9pb3RsYl9mbHVzaChkLCBf
ZGZuKGdmbiksICgxdSA8PCBvcmRlciksCisgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgIChpb21tdV9mbGFncyA/IElPTU1VX0ZMVVNIRl9h
ZGRlZCA6IDApIHwKKyAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgKHZ0ZF9wdGVfcHJlc2VudCA/IElPTU1VX0ZMVVNIRl9tb2RpZmllZAor
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgIDogMCkpOwogICAgICAgICBlbHNlIGlmICggbmVlZF9pb21tdV9w
dF9zeW5jKGQpICkKICAgICAgICAgICAgIHJjID0gaW9tbXVfZmxhZ3MgPwog
ICAgICAgICAgICAgICAgIGlvbW11X2xlZ2FjeV9tYXAoZCwgX2RmbihnZm4p
LCBtZm4sIG9yZGVyLCBpb21tdV9mbGFncykgOgotLS0gYS94ZW4vZHJpdmVy
cy9wYXNzdGhyb3VnaC92dGQvaW9tbXUuYworKysgYi94ZW4vZHJpdmVycy9w
YXNzdGhyb3VnaC92dGQvaW9tbXUuYwpAQCAtMTg4NCw1MyArMTg4NCw2IEBA
IHN0YXRpYyBpbnQgaW50ZWxfaW9tbXVfbG9va3VwX3BhZ2Uoc3RydWMKICAg
ICByZXR1cm4gMDsKIH0KIAotaW50IGlvbW11X3B0ZV9mbHVzaChzdHJ1Y3Qg
ZG9tYWluICpkLCB1aW50NjRfdCBkZm4sIHVpbnQ2NF90ICpwdGUsCi0gICAg
ICAgICAgICAgICAgICAgIGludCBvcmRlciwgaW50IHByZXNlbnQpCi17Ci0g
ICAgc3RydWN0IGFjcGlfZHJoZF91bml0ICpkcmhkOwotICAgIHN0cnVjdCB2
dGRfaW9tbXUgKmlvbW11ID0gTlVMTDsKLSAgICBzdHJ1Y3QgZG9tYWluX2lv
bW11ICpoZCA9IGRvbV9pb21tdShkKTsKLSAgICBib29sX3QgZmx1c2hfZGV2
X2lvdGxiOwotICAgIGludCBpb21tdV9kb21pZDsKLSAgICBpbnQgcmMgPSAw
OwotCi0gICAgaW9tbXVfc3luY19jYWNoZShwdGUsIHNpemVvZihzdHJ1Y3Qg
ZG1hX3B0ZSkpOwotCi0gICAgZm9yX2VhY2hfZHJoZF91bml0ICggZHJoZCAp
Ci0gICAgewotICAgICAgICBpb21tdSA9IGRyaGQtPmlvbW11OwotICAgICAg
ICBpZiAoICF0ZXN0X2JpdChpb21tdS0+aW5kZXgsICZoZC0+YXJjaC5pb21t
dV9iaXRtYXApICkKLSAgICAgICAgICAgIGNvbnRpbnVlOwotCi0gICAgICAg
IGZsdXNoX2Rldl9pb3RsYiA9ICEhZmluZF9hdHNfZGV2X2RyaGQoaW9tbXUp
OwotICAgICAgICBpb21tdV9kb21pZD0gZG9tYWluX2lvbW11X2RvbWlkKGQs
IGlvbW11KTsKLSAgICAgICAgaWYgKCBpb21tdV9kb21pZCA9PSAtMSApCi0g
ICAgICAgICAgICBjb250aW51ZTsKLQotICAgICAgICByYyA9IGlvbW11X2Zs
dXNoX2lvdGxiX3BzaShpb21tdSwgaW9tbXVfZG9taWQsCi0gICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgIF9fZGZuX3RvX2RhZGRyKGRmbiks
Ci0gICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIG9yZGVyLCAh
cHJlc2VudCwgZmx1c2hfZGV2X2lvdGxiKTsKLSAgICAgICAgaWYgKCByYyA+
IDAgKQotICAgICAgICB7Ci0gICAgICAgICAgICBpb21tdV9mbHVzaF93cml0
ZV9idWZmZXIoaW9tbXUpOwotICAgICAgICAgICAgcmMgPSAwOwotICAgICAg
ICB9Ci0gICAgfQotCi0gICAgaWYgKCB1bmxpa2VseShyYykgKQotICAgIHsK
LSAgICAgICAgaWYgKCAhZC0+aXNfc2h1dHRpbmdfZG93biAmJiBwcmludGtf
cmF0ZWxpbWl0KCkgKQotICAgICAgICAgICAgcHJpbnRrKFhFTkxPR19FUlIg
VlREUFJFRklYCi0gICAgICAgICAgICAgICAgICAgIiBkJWQ6IElPTU1VIHBh
Z2VzIGZsdXNoIGZhaWxlZDogJWRcbiIsCi0gICAgICAgICAgICAgICAgICAg
ZC0+ZG9tYWluX2lkLCByYyk7Ci0KLSAgICAgICAgaWYgKCAhaXNfaGFyZHdh
cmVfZG9tYWluKGQpICkKLSAgICAgICAgICAgIGRvbWFpbl9jcmFzaChkKTsK
LSAgICB9Ci0KLSAgICByZXR1cm4gcmM7Ci19Ci0KIHN0YXRpYyBpbnQgX19p
bml0IHZ0ZF9lcHRfcGFnZV9jb21wYXRpYmxlKHN0cnVjdCB2dGRfaW9tbXUg
KmlvbW11KQogewogICAgIHU2NCBlcHRfY2FwLCB2dGRfY2FwID0gaW9tbXUt
PmNhcDsKLS0tIGEveGVuL2luY2x1ZGUvYXNtLXg4Ni9pb21tdS5oCisrKyBi
L3hlbi9pbmNsdWRlL2FzbS14ODYvaW9tbXUuaApAQCAtOTcsMTAgKzk3LDYg
QEAgc3RhdGljIGlubGluZSBpbnQgaW9tbXVfYWRqdXN0X2lycV9hZmZpbgog
ICAgICAgICAgICA6IDA7CiB9CiAKLS8qIFdoaWxlIFZULWQgc3BlY2lmaWMs
IHRoaXMgbXVzdCBnZXQgZGVjbGFyZWQgaW4gYSBnZW5lcmljIGhlYWRlci4g
Ki8KLWludCBfX211c3RfY2hlY2sgaW9tbXVfcHRlX2ZsdXNoKHN0cnVjdCBk
b21haW4gKmQsIHU2NCBnZm4sIHU2NCAqcHRlLAotICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgaW50IG9yZGVyLCBpbnQgcHJlc2VudCk7Ci0K
IHN0YXRpYyBpbmxpbmUgYm9vbCBpb21tdV9zdXBwb3J0c194MmFwaWModm9p
ZCkKIHsKICAgICByZXR1cm4gaW9tbXVfaW5pdF9vcHMgJiYgaW9tbXVfaW5p
dF9vcHMtPnN1cHBvcnRzX3gyYXBpYwo=

--=separator
Content-Type: application/octet-stream; name="xsa321/xsa321-4.patch"
Content-Disposition: attachment; filename="xsa321/xsa321-4.patch"
Content-Transfer-Encoding: base64

RnJvbTogUm9nZXIgUGF1IE1vbm7DqSA8cm9nZXIucGF1QGNpdHJpeC5jb20+
ClN1YmplY3Q6IFtQQVRDSCB2NSA2LzldIHZ0ZDogZG9uJ3QgYXNzdW1lIGFk
ZHJlc3NlcyBhcmUgYWxpZ25lZCBpbiBzeW5jX2NhY2hlCgpDdXJyZW50IGNv
ZGUgaW4gc3luY19jYWNoZSBhc3N1bWUgdGhhdCB0aGUgYWRkcmVzcyBwYXNz
ZWQgaW4gaXMKYWxpZ25lZCB0byBhIGNhY2hlIGxpbmUgc2l6ZS4gRml4IHRo
ZSBjb2RlIHRvIHN1cHBvcnQgcGFzc2luZyBpbgphcmJpdHJhcnkgYWRkcmVz
c2VzIG5vdCBuZWNlc3NhcmlseSBhbGlnbmVkIHRvIGEgY2FjaGUgbGluZSBz
aXplLgoKVGhpcyBpcyBwYXJ0IG9mIFhTQS0zMjEuCgpSZXBvcnRlZC1ieTog
SmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTaWduZWQtb2ZmLWJ5
OiBSb2dlciBQYXUgTW9ubsOpIDxyb2dlci5wYXVAY2l0cml4LmNvbT4KUmV2
aWV3ZWQtYnk6IEphbiBCZXVsaWNoIDxqYmV1bGljaEBzdXNlLmNvbT4KLS0t
CkNoYW5nZXMgc2luY2UgdjU6CiAtIGJ1aWxkIGZpeAoKQ2hhbmdlcyBzaW5j
ZSB2MzoKIC0gQXZvaWQgb25lIGNhc3QgYnkgdXNpbmcgYSBzdWJ0cmFjdGlv
bi4KCkNoYW5nZXMgc2luY2UgdjI6CiAtIE5ldyBpbiB0aGlzIHZlcnNpb24u
Ci0tLQogeGVuL2RyaXZlcnMvcGFzc3Rocm91Z2gvdnRkL2lvbW11LmMgfCA5
ICsrKysrLS0tLQogMSBmaWxlIGNoYW5nZWQsIDUgaW5zZXJ0aW9ucygrKSwg
NCBkZWxldGlvbnMoLSkKCi0tLSBhL3hlbi9kcml2ZXJzL3Bhc3N0aHJvdWdo
L3Z0ZC9pb21tdS5jCisrKyBiL3hlbi9kcml2ZXJzL3Bhc3N0aHJvdWdoL3Z0
ZC9pb21tdS5jCkBAIC0xNDksOCArMTQ5LDggQEAgc3RhdGljIGludCBpb21t
dXNfaW5jb2hlcmVudDsKIAogc3RhdGljIHZvaWQgc3luY19jYWNoZShjb25z
dCB2b2lkICphZGRyLCB1bnNpZ25lZCBpbnQgc2l6ZSkKIHsKLSAgICBpbnQg
aTsKLSAgICBzdGF0aWMgdW5zaWduZWQgaW50IGNsZmx1c2hfc2l6ZSA9IDA7
CisgICAgc3RhdGljIHVuc2lnbmVkIGxvbmcgY2xmbHVzaF9zaXplID0gMDsK
KyAgICBjb25zdCB2b2lkICplbmQgPSBhZGRyICsgc2l6ZTsKIAogICAgIGlm
ICggIWlvbW11c19pbmNvaGVyZW50ICkKICAgICAgICAgcmV0dXJuOwpAQCAt
MTU4LDggKzE1OCw5IEBAIHN0YXRpYyB2b2lkIHN5bmNfY2FjaGUoY29uc3Qg
dm9pZCAqYWRkciwgdW5zaWduZWQgaW50IHNpemUpCiAgICAgaWYgKCBjbGZs
dXNoX3NpemUgPT0gMCApCiAgICAgICAgIGNsZmx1c2hfc2l6ZSA9IGdldF9j
YWNoZV9saW5lX3NpemUoKTsKIAotICAgIGZvciAoIGkgPSAwOyBpIDwgc2l6
ZTsgaSArPSBjbGZsdXNoX3NpemUgKQotICAgICAgICBjYWNoZWxpbmVfZmx1
c2goKGNoYXIgKilhZGRyICsgaSk7CisgICAgYWRkciAtPSAodW5zaWduZWQg
bG9uZylhZGRyICYgKGNsZmx1c2hfc2l6ZSAtIDEpOworICAgIGZvciAoIDsg
YWRkciA8IGVuZDsgYWRkciArPSBjbGZsdXNoX3NpemUgKQorICAgICAgICBj
YWNoZWxpbmVfZmx1c2goKGNoYXIgKilhZGRyKTsKIH0KIAogLyogQWxsb2Nh
dGUgcGFnZSB0YWJsZSwgcmV0dXJuIGl0cyBtYWNoaW5lIGFkZHJlc3MgKi8K

--=separator
Content-Type: application/octet-stream; name="xsa321/xsa321-5.patch"
Content-Disposition: attachment; filename="xsa321/xsa321-5.patch"
Content-Transfer-Encoding: base64

RnJvbTogUm9nZXIgUGF1IE1vbm7DqSA8cm9nZXIucGF1QGNpdHJpeC5jb20+
ClN1YmplY3Q6IFtQQVRDSCB2NSA3LzldIHg4Ni9hbHRlcm5hdGl2ZTogaW50
cm9kdWNlIGFsdGVybmF0aXZlXzIKCkl0J3MgYmFzZWQgb24gYWx0ZXJuYXRp
dmVfaW9fMiB3aXRob3V0IGlucHV0cyBvciBvdXRwdXRzIGJ1dCB3aXRoIGFu
CmFkZGVkIG1lbW9yeSBjbG9iYmVyLgoKVGhpcyBpcyBwYXJ0IG9mIFhTQS0z
MjEuCgpTaWduZWQtb2ZmLWJ5OiBSb2dlciBQYXUgTW9ubsOpIDxyb2dlci5w
YXVAY2l0cml4LmNvbT4KQWNrZWQtYnk6IEphbiBCZXVsaWNoIDxqYmV1bGlj
aEBzdXNlLmNvbT4KLS0tCkNoYW5nZXMgc2luY2UgdjM6CiAtIFNsaWdodGx5
IHJld29yZCBjb21taXQgbWVzc2FnZS4KCkNoYW5nZXMgc2luY2UgdjI6CiAt
IFJld29yZCB0aGUgY29tbWl0IG1lc3NhZ2UgdG8gbm90ZSB0aGUgYWRkaXRp
b24gb2YgdGhlIG1lbW9yeQogICBjbG9iYmVyLgotLS0KIHhlbi9pbmNsdWRl
L2FzbS14ODYvYWx0ZXJuYXRpdmUuaCB8IDUgKysrKysKIDEgZmlsZSBjaGFu
Z2VkLCA1IGluc2VydGlvbnMoKykKCmRpZmYgLS1naXQgYS94ZW4vaW5jbHVk
ZS9hc20teDg2L2FsdGVybmF0aXZlLmggYi94ZW4vaW5jbHVkZS9hc20teDg2
L2FsdGVybmF0aXZlLmgKaW5kZXggOTJlMzU4MWJjMi4uOGU3OGNjOTFjMyAx
MDA2NDQKLS0tIGEveGVuL2luY2x1ZGUvYXNtLXg4Ni9hbHRlcm5hdGl2ZS5o
CisrKyBiL3hlbi9pbmNsdWRlL2FzbS14ODYvYWx0ZXJuYXRpdmUuaApAQCAt
MTE0LDYgKzExNCwxMSBAQCBleHRlcm4gdm9pZCBhbHRlcm5hdGl2ZV9icmFu
Y2hlcyh2b2lkKTsKICNkZWZpbmUgYWx0ZXJuYXRpdmUob2xkaW5zdHIsIG5l
d2luc3RyLCBmZWF0dXJlKSAgICAgICAgICAgICAgICAgICAgICAgIFwKICAg
ICAgICAgYXNtIHZvbGF0aWxlIChBTFRFUk5BVElWRShvbGRpbnN0ciwgbmV3
aW5zdHIsIGZlYXR1cmUpIDogOiA6ICJtZW1vcnkiKQogCisjZGVmaW5lIGFs
dGVybmF0aXZlXzIob2xkaW5zdHIsIG5ld2luc3RyMSwgZmVhdHVyZTEsIG5l
d2luc3RyMiwgZmVhdHVyZTIpIFwKKwlhc20gdm9sYXRpbGUgKEFMVEVSTkFU
SVZFXzIob2xkaW5zdHIsIG5ld2luc3RyMSwgZmVhdHVyZTEsCVwKKwkJCQkg
ICAgbmV3aW5zdHIyLCBmZWF0dXJlMikJCVwKKwkJICAgICAgOiA6IDogIm1l
bW9yeSIpCisKIC8qCiAgKiBBbHRlcm5hdGl2ZSBpbmxpbmUgYXNzZW1ibHkg
d2l0aCBpbnB1dC4KICAqCi0tIAoyLjI2LjIKCg==

--=separator
Content-Type: application/octet-stream; name="xsa321/xsa321-6.patch"
Content-Disposition: attachment; filename="xsa321/xsa321-6.patch"
Content-Transfer-Encoding: base64

RnJvbTogUm9nZXIgUGF1IE1vbm7DqSA8cm9nZXIucGF1QGNpdHJpeC5jb20+
ClN1YmplY3Q6IFtQQVRDSCB2NSA4LzldIHZ0ZDogb3B0aW1pemUgQ1BVIGNh
Y2hlIHN5bmMKClNvbWUgVlQtZCBJT01NVXMgYXJlIG5vbi1jb2hlcmVudCwg
d2hpY2ggcmVxdWlyZXMgYSBjYWNoZSB3cml0ZSBiYWNrCmluIG9yZGVyIGZv
ciB0aGUgY2hhbmdlcyBtYWRlIGJ5IHRoZSBDUFUgdG8gYmUgdmlzaWJsZSB0
byB0aGUgSU9NTVUuClRoaXMgY2FjaGUgd3JpdGUgYmFjayB3YXMgdW5jb25k
aXRpb25hbGx5IGRvbmUgdXNpbmcgY2xmbHVzaCwgYnV0IHRoZXJlIGFyZQpv
dGhlciBtb3JlIGVmZmljaWVudCBpbnN0cnVjdGlvbnMgdG8gZG8gc28sIGhl
bmNlIGltcGxlbWVudCBzdXBwb3J0CmZvciB0aGVtIHVzaW5nIHRoZSBhbHRl
cm5hdGl2ZSBmcmFtZXdvcmsuCgpUaGlzIGlzIHBhcnQgb2YgWFNBLTMyMS4K
ClNpZ25lZC1vZmYtYnk6IFJvZ2VyIFBhdSBNb25uw6kgPHJvZ2VyLnBhdUBj
aXRyaXguY29tPgpSZXZpZXdlZC1ieTogSmFuIEJldWxpY2ggPGpiZXVsaWNo
QHN1c2UuY29tPgotLS0KQ2hhbmdlcyBzaW5jZSB2NToKIC0gcmUtYmFzZSBv
dmVyIGNoYW5nZSB0byBwYXRjaCA2CiAtIGFsc28gZHJvcCBjYWNoZWxpbmVf
Zmx1c2goKSdzIGRlY2xhcmF0aW9uCgpDaGFuZ2VzIHNpbmNlIHYyOgogLSBE
byBub3QgdXNlIHByZXByb2Nlc3NvciBkaXJlY3RpdmVzIG1peGVkIGluIG1h
Y3JvIGFyZ3VtZW50czogaXQncwogICB1bmRlZmluZWQgYmVoYXZpb3IuCiAt
IFJlbW92ZSBjYWNoZWxpbmVfZmx1c2ggYXMgc3luY19jYWNoZSB3YXMgdGhl
IG9ubHkgdXNlci4KCkNoYW5nZXMgc2luY2UgdjE6CiAtIE5ldyBpbiB0aGlz
IHZlcnNpb24uCi0tLQogeGVuL2RyaXZlcnMvcGFzc3Rocm91Z2gvdnRkL2V4
dGVybi5oICB8ICAxIC0KIHhlbi9kcml2ZXJzL3Bhc3N0aHJvdWdoL3Z0ZC9p
b21tdS5jICAgfCAzOCArKysrKysrKysrKysrKysrKysrKysrKysrKy0KIHhl
bi9kcml2ZXJzL3Bhc3N0aHJvdWdoL3Z0ZC94ODYvdnRkLmMgfCAgNSAtLS0t
CiAzIGZpbGVzIGNoYW5nZWQsIDM3IGluc2VydGlvbnMoKyksIDcgZGVsZXRp
b25zKC0pCgotLS0gYS94ZW4vZHJpdmVycy9wYXNzdGhyb3VnaC92dGQvZXh0
ZXJuLmgKKysrIGIveGVuL2RyaXZlcnMvcGFzc3Rocm91Z2gvdnRkL2V4dGVy
bi5oCkBAIC02OCw3ICs2OCw2IEBAIGludCBfX211c3RfY2hlY2sgcWludmFs
X2RldmljZV9pb3RsYl9zeW5jKHN0cnVjdCB2dGRfaW9tbXUgKmlvbW11LAog
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgdTE2
IGRpZCwgdTE2IHNpemUsIHU2NCBhZGRyKTsKIAogdW5zaWduZWQgaW50IGdl
dF9jYWNoZV9saW5lX3NpemUodm9pZCk7Ci12b2lkIGNhY2hlbGluZV9mbHVz
aChjaGFyICopOwogdm9pZCBmbHVzaF9hbGxfY2FjaGUodm9pZCk7CiAKIHVp
bnQ2NF90IGFsbG9jX3BndGFibGVfbWFkZHIodW5zaWduZWQgbG9uZyBucGFn
ZXMsIG5vZGVpZF90IG5vZGUpOwotLS0gYS94ZW4vZHJpdmVycy9wYXNzdGhy
b3VnaC92dGQvaW9tbXUuYworKysgYi94ZW4vZHJpdmVycy9wYXNzdGhyb3Vn
aC92dGQvaW9tbXUuYwpAQCAtMzEsNiArMzEsNyBAQAogI2luY2x1ZGUgPHhl
bi9wY2lfcmVncy5oPgogI2luY2x1ZGUgPHhlbi9rZXloYW5kbGVyLmg+CiAj
aW5jbHVkZSA8YXNtL21zaS5oPgorI2luY2x1ZGUgPGFzbS9ub3BzLmg+CiAj
aW5jbHVkZSA8YXNtL2lycS5oPgogI2luY2x1ZGUgPGFzbS9odm0vdm14L3Zt
eC5oPgogI2luY2x1ZGUgPGFzbS9wMm0uaD4KQEAgLTE2MCw3ICsxNjEsNDIg
QEAgc3RhdGljIHZvaWQgc3luY19jYWNoZShjb25zdCB2b2lkICphZGRyLCB1
bnNpZ25lZCBpbnQgc2l6ZSkKIAogICAgIGFkZHIgLT0gKHVuc2lnbmVkIGxv
bmcpYWRkciAmIChjbGZsdXNoX3NpemUgLSAxKTsKICAgICBmb3IgKCA7IGFk
ZHIgPCBlbmQ7IGFkZHIgKz0gY2xmbHVzaF9zaXplICkKLSAgICAgICAgY2Fj
aGVsaW5lX2ZsdXNoKChjaGFyICopYWRkcik7CisvKgorICogVGhlIGFyZ3Vt
ZW50cyB0byBhIG1hY3JvIG11c3Qgbm90IGluY2x1ZGUgcHJlcHJvY2Vzc29y
IGRpcmVjdGl2ZXMuIERvaW5nIHNvCisgKiByZXN1bHRzIGluIHVuZGVmaW5l
ZCBiZWhhdmlvciwgc28gd2UgaGF2ZSB0byBjcmVhdGUgc29tZSBkZWZpbmVz
IGhlcmUgaW4KKyAqIG9yZGVyIHRvIGF2b2lkIGl0LgorICovCisjaWYgZGVm
aW5lZChIQVZFX0FTX0NMV0IpCisjIGRlZmluZSBDTFdCX0VOQ09ESU5HICJj
bHdiICVbcF0iCisjZWxpZiBkZWZpbmVkKEhBVkVfQVNfWFNBVkVPUFQpCisj
IGRlZmluZSBDTFdCX0VOQ09ESU5HICJkYXRhMTYgeHNhdmVvcHQgJVtwXSIg
LyogY2x3YiAqLworI2Vsc2UKKyMgZGVmaW5lIENMV0JfRU5DT0RJTkcgIi5i
eXRlIDB4NjYsIDB4MGYsIDB4YWUsIDB4MzAiIC8qIGNsd2IgKCUlcmF4KSAq
LworI2VuZGlmCisKKyNkZWZpbmUgQkFTRV9JTlBVVChhZGRyKSBbcF0gIm0i
ICgqKGNvbnN0IGNoYXIgKikoYWRkcikpCisjaWYgZGVmaW5lZChIQVZFX0FT
X0NMV0IpIHx8IGRlZmluZWQoSEFWRV9BU19YU0FWRU9QVCkKKyMgZGVmaW5l
IElOUFVUIEJBU0VfSU5QVVQKKyNlbHNlCisjIGRlZmluZSBJTlBVVChhZGRy
KSAiYSIgKGFkZHIpLCBCQVNFX0lOUFVUKGFkZHIpCisjZW5kaWYKKyAgICAg
ICAgLyoKKyAgICAgICAgICogTm90ZSByZWdhcmRpbmcgdGhlIHVzZSBvZiBO
T1BfRFNfUFJFRklYOiBpdCdzIGZhc3RlciB0byBkbyBhIGNsZmx1c2gKKyAg
ICAgICAgICogKyBwcmVmaXggdGhhbiBhIGNsZmx1c2ggKyBub3AsIGFuZCBo
ZW5jZSB0aGUgcHJlZml4IGlzIGFkZGVkIGluc3RlYWQKKyAgICAgICAgICog
b2YgbGV0dGluZyB0aGUgYWx0ZXJuYXRpdmUgZnJhbWV3b3JrIGZpbGwgdGhl
IGdhcCBieSBhcHBlbmRpbmcgbm9wcy4KKyAgICAgICAgICovCisgICAgICAg
IGFsdGVybmF0aXZlX2lvXzIoIi5ieXRlICIgX19zdHJpbmdpZnkoTk9QX0RT
X1BSRUZJWCkgIjsgY2xmbHVzaCAlW3BdIiwKKyAgICAgICAgICAgICAgICAg
ICAgICAgICAiZGF0YTE2IGNsZmx1c2ggJVtwXSIsIC8qIGNsZmx1c2hvcHQg
Ki8KKyAgICAgICAgICAgICAgICAgICAgICAgICBYODZfRkVBVFVSRV9DTEZM
VVNIT1BULAorICAgICAgICAgICAgICAgICAgICAgICAgIENMV0JfRU5DT0RJ
TkcsCisgICAgICAgICAgICAgICAgICAgICAgICAgWDg2X0ZFQVRVUkVfQ0xX
QiwgLyogbm8gb3V0cHV0cyAqLywKKyAgICAgICAgICAgICAgICAgICAgICAg
ICBJTlBVVChhZGRyKSk7CisjdW5kZWYgSU5QVVQKKyN1bmRlZiBCQVNFX0lO
UFVUCisjdW5kZWYgQ0xXQl9FTkNPRElORworCisgICAgYWx0ZXJuYXRpdmVf
MigiIiwgInNmZW5jZSIsIFg4Nl9GRUFUVVJFX0NMRkxVU0hPUFQsCisgICAg
ICAgICAgICAgICAgICAgICAgInNmZW5jZSIsIFg4Nl9GRUFUVVJFX0NMV0Ip
OwogfQogCiAvKiBBbGxvY2F0ZSBwYWdlIHRhYmxlLCByZXR1cm4gaXRzIG1h
Y2hpbmUgYWRkcmVzcyAqLwotLS0gYS94ZW4vZHJpdmVycy9wYXNzdGhyb3Vn
aC92dGQveDg2L3Z0ZC5jCisrKyBiL3hlbi9kcml2ZXJzL3Bhc3N0aHJvdWdo
L3Z0ZC94ODYvdnRkLmMKQEAgLTUyLDExICs1Miw2IEBAIHVuc2lnbmVkIGlu
dCBnZXRfY2FjaGVfbGluZV9zaXplKHZvaWQpCiAgICAgcmV0dXJuICgoY3B1
aWRfZWJ4KDEpID4+IDgpICYgMHhmZikgKiA4OwogfQogCi12b2lkIGNhY2hl
bGluZV9mbHVzaChjaGFyICogYWRkcikKLXsKLSAgICBjbGZsdXNoKGFkZHIp
OwotfQotCiB2b2lkIGZsdXNoX2FsbF9jYWNoZSgpCiB7CiAgICAgd2JpbnZk
KCk7Cg==

--=separator
Content-Type: application/octet-stream; name="xsa321/xsa321-7.patch"
Content-Disposition: attachment; filename="xsa321/xsa321-7.patch"
Content-Transfer-Encoding: base64

RnJvbTogUm9nZXIgUGF1IE1vbm7DqSA8cm9nZXIucGF1QGNpdHJpeC5jb20+
ClN1YmplY3Q6IFtQQVRDSCB2NSA5LzldIHg4Ni9lcHQ6IGZsdXNoIGNhY2hl
IHdoZW4gbW9kaWZ5aW5nIFBURXMgYW5kIHNoYXJpbmcgcGFnZSB0YWJsZXMK
Ck1vZGlmaWNhdGlvbnMgbWFkZSB0byB0aGUgcGFnZSB0YWJsZXMgYnkgRVBU
IGNvZGUgbmVlZCB0byBiZSB3cml0dGVuCnRvIG1lbW9yeSB3aGVuIHRoZSBw
YWdlIHRhYmxlcyBhcmUgc2hhcmVkIHdpdGggdGhlIElPTU1VLCBhcyBJbnRl
bApJT01NVXMgY2FuIGJlIG5vbi1jb2hlcmVudCBhbmQgdGh1cyByZXF1aXJl
IGNoYW5nZXMgdG8gYmUgd3JpdHRlbiB0bwptZW1vcnkgaW4gb3JkZXIgdG8g
YmUgdmlzaWJsZSB0byB0aGUgSU9NTVUuCgpJbiBvcmRlciB0byBhY2hpZXZl
IHRoaXMgbWFrZSBzdXJlIGRhdGEgaXMgd3JpdHRlbiBiYWNrIHRvIG1lbW9y
eQphZnRlciB3cml0aW5nIGFuIEVQVCBlbnRyeSB3aGVuIHRoZSByZWNhbGMg
Yml0IGlzIG5vdCBzZXQgaW4KYXRvbWljX3dyaXRlX2VwdF9lbnRyeS4gSWYg
c3VjaCBiaXQgaXMgc2V0LCB0aGUgZW50cnkgd2lsbCBiZQphZGp1c3RlZCBh
bmQgYXRvbWljX3dyaXRlX2VwdF9lbnRyeSB3aWxsIGJlIGNhbGxlZCBhIHNl
Y29uZCB0aW1lCndpdGhvdXQgdGhlIHJlY2FsYyBiaXQgc2V0LiBOb3RlIHRo
YXQgd2hlbiBzcGxpdHRpbmcgYSBzdXBlciBwYWdlIHRoZQpuZXcgdGFibGVz
IHJlc3VsdGluZyBvZiB0aGUgc3BsaXQgc2hvdWxkIGFsc28gYmUgd3JpdHRl
biBiYWNrLgoKRmFpbHVyZSB0byBkbyBzbyBjYW4gYWxsb3cgZGV2aWNlcyBi
ZWhpbmQgdGhlIElPTU1VIGFjY2VzcyB0byB0aGUKc3RhbGUgc3VwZXIgcGFn
ZSwgb3IgY2F1c2UgY29oZXJlbmN5IGlzc3VlcyBhcyBjaGFuZ2VzIG1hZGUg
YnkgdGhlCnByb2Nlc3NvciB0byB0aGUgcGFnZSB0YWJsZXMgYXJlIG5vdCB2
aXNpYmxlIHRvIHRoZSBJT01NVS4KClRoaXMgYWxsb3dzIHRvIHJlbW92ZSB0
aGUgVlQtZCBzcGVjaWZpYyBpb21tdV9wdGVfZmx1c2ggaGVscGVyLCBzaW5j
ZQp0aGUgY2FjaGUgd3JpdGUgYmFjayBpcyBub3cgcGVyZm9ybWVkIGJ5IGF0
b21pY193cml0ZV9lcHRfZW50cnksIGFuZApoZW5jZSBpb21tdV9pb3RsYl9m
bHVzaCBjYW4gYmUgdXNlZCB0byBmbHVzaCB0aGUgSU9NTVUgVExCLiBUaGUg
bmV3bHkKdXNlZCBtZXRob2QgKGlvbW11X2lvdGxiX2ZsdXNoKSBjYW4gcmVz
dWx0IGluIGxlc3MgZmx1c2hlcywgc2luY2UgaXQKbWlnaHQgc29tZXRpbWVz
IGJlIGNhbGxlZCByaWdodGx5IHdpdGggMCBmbGFncywgaW4gd2hpY2ggY2Fz
ZSBpdApiZWNvbWVzIGEgbm8tb3AuCgpUaGlzIGlzIHBhcnQgb2YgWFNBLTMy
MS4KClNpZ25lZC1vZmYtYnk6IFJvZ2VyIFBhdSBNb25uw6kgPHJvZ2VyLnBh
dUBjaXRyaXguY29tPgpSZXZpZXdlZC1ieTogSmFuIEJldWxpY2ggPGpiZXVs
aWNoQHN1c2UuY29tPgotLS0KQ2hhbmdlcyBzaW5jZSB2MzoKIC0gTm90ZSB0
aGF0IGlvbW11X2lvdGxiX2ZsdXNoIGNhbiBiZSBjYWxsZWQgd2l0aCAwIGZs
YWdzLgogLSBSZW1vdmUgdGhlIGZsdXNoIGZyb20gZXB0X3NldF9taWRkbGVf
ZW50cnksIHNob3VsZCBiZSBkb25lIGJ5IHRoZQogICBjYWxsZXJzLgoKQ2hh
bmdlcyBzaW5jZSB2MjoKIC0gV3JpdGUgYmFjayB0YWJsZSBpbiBlcHRfc2V0
X21pZGRsZV9lbnRyeSBhbHNvLgogLSBSZW1vdmUgaW9tbXVfcHRlX2ZsdXNo
IGFuZCBpbnN0ZWFkIHVzZSBpb21tdV9pb3RsYl9mbHVzaC4KCkNoYW5nZXMg
c2luY2UgdjE6CiAtIE9ubHkgZmx1c2ggY2FjaGUgd2hlbiAhcmVjYWxjLgog
LSBNb3ZlIHRoZSBmbHVzaCBpbnRvIGF0b21pY193cml0ZV9lcHRfZW50cnku
CiAtIERvIGEgc2luZ2xlIGNhbGwgdG8gaW9tbXVfZmx1c2hfY2FjaGVfZW50
cnkgdGhhdCBmbHVzaGVzIHRoZSB3aG9sZQogICBuZXcgdGFibGUuCi0tLQog
eGVuL2FyY2gveDg2L21tL3AybS1lcHQuYyAgICAgICAgICAgfCAyNCArKysr
KysrKysrKysrKy0KIHhlbi9kcml2ZXJzL3Bhc3N0aHJvdWdoL3Z0ZC9pb21t
dS5jIHwgNDcgLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0KIHhlbi9p
bmNsdWRlL2FzbS14ODYvaW9tbXUuaCAgICAgICAgIHwgIDQgLS0tCiAzIGZp
bGVzIGNoYW5nZWQsIDIzIGluc2VydGlvbnMoKyksIDUyIGRlbGV0aW9ucygt
KQoKZGlmZiAtLWdpdCBhL3hlbi9hcmNoL3g4Ni9tbS9wMm0tZXB0LmMgYi94
ZW4vYXJjaC94ODYvbW0vcDJtLWVwdC5jCmluZGV4IDg3YTE0ZjZmMjIuLmI4
MTU0YTdlY2MgMTAwNjQ0Ci0tLSBhL3hlbi9hcmNoL3g4Ni9tbS9wMm0tZXB0
LmMKKysrIGIveGVuL2FyY2gveDg2L21tL3AybS1lcHQuYwpAQCAtNTgsNiAr
NTgsMTkgQEAgc3RhdGljIGludCBhdG9taWNfd3JpdGVfZXB0X2VudHJ5KHN0
cnVjdCBwMm1fZG9tYWluICpwMm0sCiAKICAgICB3cml0ZV9hdG9taWMoJmVu
dHJ5cHRyLT5lcHRlLCBuZXcuZXB0ZSk7CiAKKyAgICAvKgorICAgICAqIFRo
ZSByZWNhbGMgZmllbGQgb24gdGhlIEVQVCBpcyB1c2VkIHRvIHNpZ25hbCBl
aXRoZXIgdGhhdCBhCisgICAgICogcmVjYWxjdWxhdGlvbiBvZiB0aGUgRU1U
IGZpZWxkIGlzIHJlcXVpcmVkICh3aGljaCBkb2Vzbid0IGVmZmVjdCB0aGUK
KyAgICAgKiBJT01NVSksIG9yIGEgdHlwZSBjaGFuZ2UuIFR5cGUgY2hhbmdl
cyBjYW4gb25seSBiZSBiZXR3ZWVuIHJhbV9ydywKKyAgICAgKiBsb2dkaXJ0
eSBhbmQgaW9yZXFfc2VydmVyOiBjaGFuZ2VzIHRvL2Zyb20gbG9nZGlydHkg
d29uJ3Qgd29yayB3ZWxsIHdpdGgKKyAgICAgKiBhbiBJT01NVSBhbnl3YXks
IGFzIElPTU1VICNQRnMgYXJlIG5vdCBzeW5jaHJvbm91cyBhbmQgd2lsbCBs
ZWFkIHRvCisgICAgICogYWJvcnRzLCBhbmQgY2hhbmdlcyB0by9mcm9tIGlv
cmVxX3NlcnZlciBhcmUgYWxyZWFkeSBmdWxseSBmbHVzaGVkCisgICAgICog
YmVmb3JlIHJldHVybmluZyB0byBndWVzdCBjb250ZXh0IChzZWUKKyAgICAg
KiBYRU5fRE1PUF9tYXBfbWVtX3R5cGVfdG9faW9yZXFfc2VydmVyKS4KKyAg
ICAgKi8KKyAgICBpZiAoICFuZXcucmVjYWxjICYmIGlvbW11X3VzZV9oYXBf
cHQocDJtLT5kb21haW4pICkKKyAgICAgICAgaW9tbXVfc3luY19jYWNoZShl
bnRyeXB0ciwgc2l6ZW9mKCplbnRyeXB0cikpOworCiAgICAgcmV0dXJuIDA7
CiB9CiAKQEAgLTI3Nyw2ICsyOTAsOSBAQCBzdGF0aWMgYm9vbF90IGVwdF9z
cGxpdF9zdXBlcl9wYWdlKHN0cnVjdCBwMm1fZG9tYWluICpwMm0sCiAgICAg
ICAgICAgICBicmVhazsKICAgICB9CiAKKyAgICBpZiAoIGlvbW11X3VzZV9o
YXBfcHQocDJtLT5kb21haW4pICkKKyAgICAgICAgaW9tbXVfc3luY19jYWNo
ZSh0YWJsZSwgRVBUX1BBR0VUQUJMRV9FTlRSSUVTICogc2l6ZW9mKGVwdF9l
bnRyeV90KSk7CisKICAgICB1bm1hcF9kb21haW5fcGFnZSh0YWJsZSk7CiAK
ICAgICAvKiBFdmVuIGZhaWxlZCB3ZSBzaG91bGQgaW5zdGFsbCB0aGUgbmV3
bHkgYWxsb2NhdGVkIGVwdCBwYWdlLiAqLwpAQCAtMzM2LDYgKzM1Miw5IEBA
IHN0YXRpYyBpbnQgZXB0X25leHRfbGV2ZWwoc3RydWN0IHAybV9kb21haW4g
KnAybSwgYm9vbF90IHJlYWRfb25seSwKICAgICAgICAgaWYgKCAhbmV4dCAp
CiAgICAgICAgICAgICByZXR1cm4gR1VFU1RfVEFCTEVfTUFQX0ZBSUxFRDsK
IAorICAgICAgICBpZiAoIGlvbW11X3VzZV9oYXBfcHQocDJtLT5kb21haW4p
ICkKKyAgICAgICAgICAgIGlvbW11X3N5bmNfY2FjaGUobmV4dCwgRVBUX1BB
R0VUQUJMRV9FTlRSSUVTICogc2l6ZW9mKGVwdF9lbnRyeV90KSk7CisKICAg
ICAgICAgcmMgPSBhdG9taWNfd3JpdGVfZXB0X2VudHJ5KHAybSwgZXB0X2Vu
dHJ5LCBlLCBuZXh0X2xldmVsKTsKICAgICAgICAgQVNTRVJUKHJjID09IDAp
OwogICAgIH0KQEAgLTgyNCw3ICs4NDMsMTAgQEAgb3V0OgogICAgICAgICAg
bmVlZF9tb2RpZnlfdnRkX3RhYmxlICkKICAgICB7CiAgICAgICAgIGlmICgg
aW9tbXVfdXNlX2hhcF9wdChkKSApCi0gICAgICAgICAgICByYyA9IGlvbW11
X3B0ZV9mbHVzaChkLCBnZm4sICZlcHRfZW50cnktPmVwdGUsIG9yZGVyLCB2
dGRfcHRlX3ByZXNlbnQpOworICAgICAgICAgICAgcmMgPSBpb21tdV9pb3Rs
Yl9mbHVzaChkLCBfZGZuKGdmbiksICgxdSA8PCBvcmRlciksCisgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgIChpb21tdV9mbGFncyA/IElP
TU1VX0ZMVVNIRl9hZGRlZCA6IDApIHwKKyAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgKHZ0ZF9wdGVfcHJlc2VudCA/IElPTU1VX0ZMVVNI
Rl9tb2RpZmllZAorICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgIDogMCkpOwogICAgICAgICBlbHNlIGlmICgg
bmVlZF9pb21tdV9wdF9zeW5jKGQpICkKICAgICAgICAgICAgIHJjID0gaW9t
bXVfZmxhZ3MgPwogICAgICAgICAgICAgICAgIGlvbW11X2xlZ2FjeV9tYXAo
ZCwgX2RmbihnZm4pLCBtZm4sIG9yZGVyLCBpb21tdV9mbGFncykgOgpkaWZm
IC0tZ2l0IGEveGVuL2RyaXZlcnMvcGFzc3Rocm91Z2gvdnRkL2lvbW11LmMg
Yi94ZW4vZHJpdmVycy9wYXNzdGhyb3VnaC92dGQvaW9tbXUuYwppbmRleCBj
ZjY1MWZiYTM0Li4yYTk5Y2QyMDhmIDEwMDY0NAotLS0gYS94ZW4vZHJpdmVy
cy9wYXNzdGhyb3VnaC92dGQvaW9tbXUuYworKysgYi94ZW4vZHJpdmVycy9w
YXNzdGhyb3VnaC92dGQvaW9tbXUuYwpAQCAtMTg4Nyw1MyArMTg4Nyw2IEBA
IHN0YXRpYyBpbnQgaW50ZWxfaW9tbXVfbG9va3VwX3BhZ2Uoc3RydWN0IGRv
bWFpbiAqZCwgZGZuX3QgZGZuLCBtZm5fdCAqbWZuLAogICAgIHJldHVybiAw
OwogfQogCi1pbnQgaW9tbXVfcHRlX2ZsdXNoKHN0cnVjdCBkb21haW4gKmQs
IHVpbnQ2NF90IGRmbiwgdWludDY0X3QgKnB0ZSwKLSAgICAgICAgICAgICAg
ICAgICAgaW50IG9yZGVyLCBpbnQgcHJlc2VudCkKLXsKLSAgICBzdHJ1Y3Qg
YWNwaV9kcmhkX3VuaXQgKmRyaGQ7Ci0gICAgc3RydWN0IHZ0ZF9pb21tdSAq
aW9tbXUgPSBOVUxMOwotICAgIHN0cnVjdCBkb21haW5faW9tbXUgKmhkID0g
ZG9tX2lvbW11KGQpOwotICAgIGJvb2xfdCBmbHVzaF9kZXZfaW90bGI7Ci0g
ICAgaW50IGlvbW11X2RvbWlkOwotICAgIGludCByYyA9IDA7Ci0KLSAgICBp
b21tdV9zeW5jX2NhY2hlKHB0ZSwgc2l6ZW9mKHN0cnVjdCBkbWFfcHRlKSk7
Ci0KLSAgICBmb3JfZWFjaF9kcmhkX3VuaXQgKCBkcmhkICkKLSAgICB7Ci0g
ICAgICAgIGlvbW11ID0gZHJoZC0+aW9tbXU7Ci0gICAgICAgIGlmICggIXRl
c3RfYml0KGlvbW11LT5pbmRleCwgJmhkLT5hcmNoLmlvbW11X2JpdG1hcCkg
KQotICAgICAgICAgICAgY29udGludWU7Ci0KLSAgICAgICAgZmx1c2hfZGV2
X2lvdGxiID0gISFmaW5kX2F0c19kZXZfZHJoZChpb21tdSk7Ci0gICAgICAg
IGlvbW11X2RvbWlkPSBkb21haW5faW9tbXVfZG9taWQoZCwgaW9tbXUpOwot
ICAgICAgICBpZiAoIGlvbW11X2RvbWlkID09IC0xICkKLSAgICAgICAgICAg
IGNvbnRpbnVlOwotCi0gICAgICAgIHJjID0gaW9tbXVfZmx1c2hfaW90bGJf
cHNpKGlvbW11LCBpb21tdV9kb21pZCwKLSAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgX19kZm5fdG9fZGFkZHIoZGZuKSwKLSAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgb3JkZXIsICFwcmVzZW50LCBm
bHVzaF9kZXZfaW90bGIpOwotICAgICAgICBpZiAoIHJjID4gMCApCi0gICAg
ICAgIHsKLSAgICAgICAgICAgIGlvbW11X2ZsdXNoX3dyaXRlX2J1ZmZlcihp
b21tdSk7Ci0gICAgICAgICAgICByYyA9IDA7Ci0gICAgICAgIH0KLSAgICB9
Ci0KLSAgICBpZiAoIHVubGlrZWx5KHJjKSApCi0gICAgewotICAgICAgICBp
ZiAoICFkLT5pc19zaHV0dGluZ19kb3duICYmIHByaW50a19yYXRlbGltaXQo
KSApCi0gICAgICAgICAgICBwcmludGsoWEVOTE9HX0VSUiBWVERQUkVGSVgK
LSAgICAgICAgICAgICAgICAgICAiIGQlZDogSU9NTVUgcGFnZXMgZmx1c2gg
ZmFpbGVkOiAlZFxuIiwKLSAgICAgICAgICAgICAgICAgICBkLT5kb21haW5f
aWQsIHJjKTsKLQotICAgICAgICBpZiAoICFpc19oYXJkd2FyZV9kb21haW4o
ZCkgKQotICAgICAgICAgICAgZG9tYWluX2NyYXNoKGQpOwotICAgIH0KLQot
ICAgIHJldHVybiByYzsKLX0KLQogc3RhdGljIGludCBfX2luaXQgdnRkX2Vw
dF9wYWdlX2NvbXBhdGlibGUoc3RydWN0IHZ0ZF9pb21tdSAqaW9tbXUpCiB7
CiAgICAgdTY0IGVwdF9jYXAsIHZ0ZF9jYXAgPSBpb21tdS0+Y2FwOwpkaWZm
IC0tZ2l0IGEveGVuL2luY2x1ZGUvYXNtLXg4Ni9pb21tdS5oIGIveGVuL2lu
Y2x1ZGUvYXNtLXg4Ni9pb21tdS5oCmluZGV4IDg2NGUwMjUwNzguLjZjOWQ1
ZTU2MzIgMTAwNjQ0Ci0tLSBhL3hlbi9pbmNsdWRlL2FzbS14ODYvaW9tbXUu
aAorKysgYi94ZW4vaW5jbHVkZS9hc20teDg2L2lvbW11LmgKQEAgLTk3LDEw
ICs5Nyw2IEBAIHN0YXRpYyBpbmxpbmUgaW50IGlvbW11X2FkanVzdF9pcnFf
YWZmaW5pdGllcyh2b2lkKQogICAgICAgICAgICA6IDA7CiB9CiAKLS8qIFdo
aWxlIFZULWQgc3BlY2lmaWMsIHRoaXMgbXVzdCBnZXQgZGVjbGFyZWQgaW4g
YSBnZW5lcmljIGhlYWRlci4gKi8KLWludCBfX211c3RfY2hlY2sgaW9tbXVf
cHRlX2ZsdXNoKHN0cnVjdCBkb21haW4gKmQsIHU2NCBnZm4sIHU2NCAqcHRl
LAotICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgaW50IG9yZGVy
LCBpbnQgcHJlc2VudCk7Ci0KIHN0YXRpYyBpbmxpbmUgYm9vbCBpb21tdV9z
dXBwb3J0c194MmFwaWModm9pZCkKIHsKICAgICByZXR1cm4gaW9tbXVfaW5p
dF9vcHMgJiYgaW9tbXVfaW5pdF9vcHMtPnN1cHBvcnRzX3gyYXBpYwotLSAK
Mi4yNi4yCgo=

--=separator--


From xen-devel-bounces@lists.xenproject.org Tue Jul 07 12:24:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jul 2020 12:24:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jsmdq-0007EZ-I1; Tue, 07 Jul 2020 12:24:06 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1g3R=AS=xenbits.xen.org=iwj@srs-us1.protection.inumbo.net>)
 id 1jsmdo-0007DT-W3
 for xen-devel@lists.xen.org; Tue, 07 Jul 2020 12:24:05 +0000
X-Inumbo-ID: bbb45f0a-c04c-11ea-bca7-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id bbb45f0a-c04c-11ea-bca7-bc764e2007e4;
 Tue, 07 Jul 2020 12:23:58 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Date:Message-Id:Subject:CC:From:To:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Sender:Reply-To:Content-ID:
 Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc
 :Resent-Message-ID:In-Reply-To:References:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=ZWlDk5b1NVC6EET/cOV6YdvAXsB2RSRQIeRlFabmc9s=; b=GgqsbibzuG1G6WuLD3aoi4YSAp
 CjXkwfoYWDV6HljfvN+b7aNhE7/UdBrnvT9OiTcCJjg2aPjZ0cCjEm/JQe8l9gr09bgN2gzOmrwrd
 3VZSPQu6H1vt2jCQYXA315Wl30kQ0N9n2at3b5UtwEefe7HW4vVYLnsDPmbDVTxw6CIc=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenbits.xen.org>)
 id 1jsmde-0002uU-Dd; Tue, 07 Jul 2020 12:23:54 +0000
Received: from iwj by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <iwj@xenbits.xen.org>)
 id 1jsmde-00040K-Bc; Tue, 07 Jul 2020 12:23:54 +0000
Content-Type: multipart/mixed; boundary="=separator"; charset="utf-8"
Content-Transfer-Encoding: binary
MIME-Version: 1.0
X-Mailer: MIME-tools 5.509 (Entity 5.509)
To: xen-announce@lists.xen.org, xen-devel@lists.xen.org,
 xen-users@lists.xen.org, oss-security@lists.openwall.com
From: Xen.org security team <security@xen.org>
Subject: Xen Security Advisory 327 v3 (CVE-2020-15564) - Missing alignment
 check in VCPUOP_register_vcpu_info
Message-Id: <E1jsmde-00040K-Bc@xenbits.xenproject.org>
Date: Tue, 07 Jul 2020 12:23:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "Xen.org security team" <security-team-members@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

--=separator
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

            Xen Security Advisory CVE-2020-15564 / XSA-327
                               version 3

         Missing alignment check in VCPUOP_register_vcpu_info

UPDATES IN VERSION 3
====================

Public release.

ISSUE DESCRIPTION
=================

The hypercall VCPUOP_register_vcpu_info is used by a guest to register
a shared region with the hypervisor. The region will be mapped into Xen address
space so it can be directly accessed.

On Arm, the region is accessed with instructions which require a specific
alignment. Unfortunately, there is no check that the address provided by
the guest will be correctly aligned.

As a result, a malicious guest could cause a hypervisor crash by passing
a misaligned address.
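The check added by the attached patch (in map_vcpu_info() in
xen/common/domain.c) can be illustrated with a minimal stand-alone
sketch.  The structure layout and function below are hypothetical,
chosen only to mirror the shape of the check, not the real Xen code:

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical stand-in for vcpu_info_t: it contains fields (such as
 * evtchn_pending_sel) that are updated with atomic operations, hence
 * the registered offset must respect the structure's natural alignment. */
struct vcpu_info_sketch {
    unsigned long evtchn_pending_sel;
    unsigned char pad[56];
};

/* Reject offsets that would overrun the frame or, as the patch adds,
 * that are not aligned to the structure's natural alignment. */
static bool offset_is_valid(size_t offset, size_t page_size)
{
    size_t align = _Alignof(struct vcpu_info_sketch);

    if ( offset > page_size - sizeof(struct vcpu_info_sketch) )
        return false;               /* does not fit in the frame */
    if ( offset & (align - 1) )
        return false;               /* misaligned: -EINVAL in the patch */
    return true;
}
```

Before the fix, only the first (size) check existed, so a guest could
pass, e.g., offset 4 and trigger a fatal data abort on Arm.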

IMPACT
======

A malicious guest administrator may cause a hypervisor crash, resulting in a
Denial of Service (DoS).

VULNERABLE SYSTEMS
==================

All Xen versions are vulnerable.

Only Arm systems are vulnerable.  x86 systems are not affected.

MITIGATION
==========

There is no mitigation.

CREDITS
=======

This issue was discovered by Julien Grall of Amazon.

RESOLUTION
==========

Applying the attached patch resolves this issue.

Note that patches for released versions are generally prepared to
apply to the stable branches, and may not apply cleanly to the most
recent release tarball.  Downstreams are encouraged to update to the
tip of the stable branch before applying these patches.

xsa327.patch           Xen 4.9 - xen-unstable

$ sha256sum xsa327*
f046eefcc1368708bd1fafc88e063d3dbc5c4cdb593d68b3b04917c6cdb7bcb5  xsa327.meta
1d057695d5b74ce2857204103e943caeaf773bc4fb9d91ea78016e01a9147ed7  xsa327.patch
$

DEPLOYMENT DURING EMBARGO
=========================

Deployment of the patch and/or mitigations described above (or
others which are substantially similar) is permitted during the
embargo, even on public-facing systems with untrusted guest users and
administrators.

But: Distribution of updated software is prohibited (except to other
members of the predisclosure list).

Predisclosure list members who wish to deploy significantly different
patches and/or mitigations, please contact the Xen Project Security
Team.

(Note: this during-embargo deployment notice is retained in
post-embargo publicly released Xen Project advisories, even though it
is then no longer applicable.  This is to enable the community to have
oversight of the Xen Project Security Team's decisionmaking.)

For more information about permissible uses of embargoed information,
consult the Xen Project community's agreed Security Policy:
  http://www.xenproject.org/security-policy.html
-----BEGIN PGP SIGNATURE-----

iQFABAEBCAAqFiEEI+MiLBRfRHX6gGCng/4UyVfoK9kFAl8EaVAMHHBncEB4ZW4u
b3JnAAoJEIP+FMlX6CvZcqIIAKpb992pMq1jFStIGPhk6HsaIhxVEGep67eJHq9d
TMaFiyBix125djY0zV8KaznmZmRpM2pNKVsIkGe1XHgtEMcWgMAYARejJLRC4UnW
xHhpunI7rJMQc1vL5ZGxAFbVYF6U/PX0rwESwQb2/Rt0eLBTAmH4m25TQiSEnrkM
3C4Dbk3puCbaeB7VGiyccK07hh6qQhEO8s1FhZTNVTaqqcNWZYqy/SbmRYHiT/in
2dK6XOiBgRhHnjsDDoXj5abSMb00KnJ9PkWu8RC2b7+BVZJUii1557T8zpDo9Fyl
CJ3YXrekd+gQSFxgwCts00BbLr2NUf3uqEtpY1EEV7UKmvQ=
=fPiG
-----END PGP SIGNATURE-----

--=separator
Content-Type: application/octet-stream; name="xsa327.meta"
Content-Disposition: attachment; filename="xsa327.meta"
Content-Transfer-Encoding: base64

ewogICJYU0EiOiAzMjcsCiAgIlN1cHBvcnRlZFZlcnNpb25zIjogWwogICAg
Im1hc3RlciIsCiAgICAiNC4xMyIsCiAgICAiNC4xMiIsCiAgICAiNC4xMSIs
CiAgICAiNC4xMCIsCiAgICAiNC45IgogIF0sCiAgIlRyZWVzIjogWwogICAg
InhlbiIKICBdLAogICJSZWNpcGVzIjogewogICAgIjQuMTAiOiB7CiAgICAg
ICJSZWNpcGVzIjogewogICAgICAgICJ4ZW4iOiB7CiAgICAgICAgICAiU3Rh
YmxlUmVmIjogImZkNmU0OWVjYWUwMzg0MDYxMGZkYzZhNDE2YTYzODU5MGMw
YjY1MzUiLAogICAgICAgICAgIlByZXJlcXMiOiBbCiAgICAgICAgICAgIDMx
NywKICAgICAgICAgICAgMzE5LAogICAgICAgICAgICAzMjgsCiAgICAgICAg
ICAgIDMyMQogICAgICAgICAgXSwKICAgICAgICAgICJQYXRjaGVzIjogWwog
ICAgICAgICAgICAieHNhMzI3LnBhdGNoIgogICAgICAgICAgXQogICAgICAg
IH0KICAgICAgfQogICAgfSwKICAgICI0LjExIjogewogICAgICAiUmVjaXBl
cyI6IHsKICAgICAgICAieGVuIjogewogICAgICAgICAgIlN0YWJsZVJlZiI6
ICIyYjc3NzI5ODg4ZmI4NTFhYjk2ZTdmNzdiYzg1NDEyMjYyNmI0ODYxIiwK
ICAgICAgICAgICJQcmVyZXFzIjogWwogICAgICAgICAgICAzMTcsCiAgICAg
ICAgICAgIDMxOSwKICAgICAgICAgICAgMzI4LAogICAgICAgICAgICAzMjEK
ICAgICAgICAgIF0sCiAgICAgICAgICAiUGF0Y2hlcyI6IFsKICAgICAgICAg
ICAgInhzYTMyNy5wYXRjaCIKICAgICAgICAgIF0KICAgICAgICB9CiAgICAg
IH0KICAgIH0sCiAgICAiNC4xMiI6IHsKICAgICAgIlJlY2lwZXMiOiB7CiAg
ICAgICAgInhlbiI6IHsKICAgICAgICAgICJTdGFibGVSZWYiOiAiMDUwZmU0
OGRjOTgxZTA0ODhkZTFmNmM2YzA3ZDgxMTBmM2I3NTIzYiIsCiAgICAgICAg
ICAiUHJlcmVxcyI6IFsKICAgICAgICAgICAgMzE3LAogICAgICAgICAgICAz
MTksCiAgICAgICAgICAgIDMyOCwKICAgICAgICAgICAgMzIxCiAgICAgICAg
ICBdLAogICAgICAgICAgIlBhdGNoZXMiOiBbCiAgICAgICAgICAgICJ4c2Ez
MjcucGF0Y2giCiAgICAgICAgICBdCiAgICAgICAgfQogICAgICB9CiAgICB9
LAogICAgIjQuMTMiOiB7CiAgICAgICJSZWNpcGVzIjogewogICAgICAgICJ4
ZW4iOiB7CiAgICAgICAgICAiU3RhYmxlUmVmIjogIjlmN2U4YmFjNGNhMjc5
YjNiZmNjYjVmMzczMGZiMmU1Mzk4Yzk1YWIiLAogICAgICAgICAgIlByZXJl
cXMiOiBbCiAgICAgICAgICAgIDMxNywKICAgICAgICAgICAgMzE5LAogICAg
ICAgICAgICAzMjgsCiAgICAgICAgICAgIDMyMQogICAgICAgICAgXSwKICAg
ICAgICAgICJQYXRjaGVzIjogWwogICAgICAgICAgICAieHNhMzI3LnBhdGNo
IgogICAgICAgICAgXQogICAgICAgIH0KICAgICAgfQogICAgfSwKICAgICI0
LjkiOiB7CiAgICAgICJSZWNpcGVzIjogewogICAgICAgICJ4ZW4iOiB7CiAg
ICAgICAgICAiU3RhYmxlUmVmIjogIjZlNDc3YzJlYTRkNWMyNmE3YTdiMmY4
NTAxNjZhYTc5ZWRjNTIyNWMiLAogICAgICAgICAgIlByZXJlcXMiOiBbCiAg
ICAgICAgICAgIDMxOSwKICAgICAgICAgICAgMzI4LAogICAgICAgICAgICAz
MjEKICAgICAgICAgIF0sCiAgICAgICAgICAiUGF0Y2hlcyI6IFsKICAgICAg
ICAgICAgInhzYTMyNy5wYXRjaCIKICAgICAgICAgIF0KICAgICAgICB9CiAg
ICAgIH0KICAgIH0sCiAgICAibWFzdGVyIjogewogICAgICAiUmVjaXBlcyI6
IHsKICAgICAgICAieGVuIjogewogICAgICAgICAgIlN0YWJsZVJlZiI6ICJl
NGQyMjA3MTY1YjM3OWVjMTNjOGI1MTI5MzZmNjM5ODJhZjYyZDEzIiwKICAg
ICAgICAgICJQcmVyZXFzIjogWwogICAgICAgICAgICAzMTcsCiAgICAgICAg
ICAgIDMxOSwKICAgICAgICAgICAgMzI4LAogICAgICAgICAgICAzMjEKICAg
ICAgICAgIF0sCiAgICAgICAgICAiUGF0Y2hlcyI6IFsKICAgICAgICAgICAg
InhzYTMyNy5wYXRjaCIKICAgICAgICAgIF0KICAgICAgICB9CiAgICAgIH0K
ICAgIH0KICB9Cn0=

--=separator
Content-Type: application/octet-stream; name="xsa327.patch"
Content-Disposition: attachment; filename="xsa327.patch"
Content-Transfer-Encoding: base64

RnJvbSAwMzAzMDBlYmJiODZjNDBjMTJkYjAzODcxNDQ3OWQ3NDYxNjdjNzY3
IE1vbiBTZXAgMTcgMDA6MDA6MDAgMjAwMQpGcm9tOiBKdWxpZW4gR3JhbGwg
PGpncmFsbEBhbWF6b24uY29tPgpEYXRlOiBUdWUsIDI2IE1heSAyMDIwIDE4
OjMxOjMzICswMTAwClN1YmplY3Q6IFtQQVRDSF0geGVuOiBDaGVjayB0aGUg
YWxpZ25tZW50IG9mIHRoZSBvZmZzZXQgcGFzZWQgdmlhCiBWQ1BVT1BfcmVn
aXN0ZXJfdmNwdV9pbmZvCgpDdXJyZW50bHkgYSBndWVzdCBpcyBhYmxlIHRv
IHJlZ2lzdGVyIGFueSBndWVzdCBwaHlzaWNhbCBhZGRyZXNzIHRvIHVzZQpm
b3IgdGhlIHZjcHVfaW5mbyBzdHJ1Y3R1cmUgYXMgbG9uZyBhcyB0aGUgc3Ry
dWN0dXJlIGNhbiBmaXRzIGluIHRoZQpyZXN0IG9mIHRoZSBmcmFtZS4KClRo
aXMgbWVhbnMgYSBndWVzdCBjYW4gcHJvdmlkZSBhbiBhZGRyZXNzIHRoYXQg
aXMgbm90IGFsaWduZWQgdG8gdGhlCm5hdHVyYWwgYWxpZ25tZW50IG9mIHRo
ZSBzdHJ1Y3R1cmUuCgpPbiBBcm0gMzItYml0LCB1bmFsaWduZWQgYWNjZXNz
IGFyZSBjb21wbGV0ZWx5IGZvcmJpZGRlbiBieSB0aGUKaHlwZXJ2aXNvci4g
VGhpcyB3aWxsIHJlc3VsdCB0byBhIGRhdGEgYWJvcnQgd2hpY2ggaXMgZmF0
YWwuCgpPbiBBcm0gNjQtYml0LCB1bmFsaWduZWQgYWNjZXNzIGFyZSBvbmx5
IGZvcmJpZGRlbiB3aGVuIHVzZWQgZm9yIGF0b21pYwphY2Nlc3MuIEFzIHRo
ZSBzdHJ1Y3R1cmUgY29udGFpbnMgZmllbGRzIChzdWNoIGFzIGV2dGNobl9w
ZW5kaW5nX3NlbGYpCnRoYXQgYXJlIHVwZGF0ZWQgdXNpbmcgYXRvbWljIG9w
ZXJhdGlvbnMsIGFueSB1bmFsaWduZWQgYWNjZXNzIHdpbGwgYmUKZmF0YWwg
YXMgd2VsbC4KCldoaWxlIHRoZSBtaXNhbGlnbm1lbnQgaXMgb25seSBmYXRh
bCBvbiBBcm0sIGEgZ2VuZXJpYyBjaGVjayBpcyBhZGRlZAphcyBhbiB4ODYg
Z3Vlc3Qgc2hvdWxkbid0IHNlbnNpYmx5IHBhc3MgYW4gdW5hbGlnbmVkIGFk
ZHJlc3MgKHRoaXMKd291bGQgcmVzdWx0IHRvIGEgc3BsaXQgbG9jaykuCgpU
aGlzIGlzIFhTQS0zMjcuCgpSZXBvcnRlZC1ieTogSnVsaWVuIEdyYWxsIDxq
Z3JhbGxAYW1hem9uLmNvbT4KU2lnbmVkLW9mZi1ieTogSnVsaWVuIEdyYWxs
IDxqZ3JhbGxAYW1hem9uLmNvbT4KUmV2aWV3ZWQtYnk6IEFuZHJldyBDb29w
ZXIgPGFuZHJldy5jb29wZXIzQGNpdHJpeC5jb20+ClJldmlld2VkLWJ5OiBT
dGVmYW5vIFN0YWJlbGxpbmkgPHNzdGFiZWxsaW5pQGtlcm5lbC5vcmc+Ci0t
LQogeGVuL2NvbW1vbi9kb21haW4uYyB8IDEwICsrKysrKysrKysKIDEgZmls
ZSBjaGFuZ2VkLCAxMCBpbnNlcnRpb25zKCspCgpkaWZmIC0tZ2l0IGEveGVu
L2NvbW1vbi9kb21haW4uYyBiL3hlbi9jb21tb24vZG9tYWluLmMKaW5kZXgg
N2NjOTUyNjEzOWE2Li5lOWJlMDVmMWQwNWYgMTAwNjQ0Ci0tLSBhL3hlbi9j
b21tb24vZG9tYWluLmMKKysrIGIveGVuL2NvbW1vbi9kb21haW4uYwpAQCAt
MTIyNywxMCArMTIyNywyMCBAQCBpbnQgbWFwX3ZjcHVfaW5mbyhzdHJ1Y3Qg
dmNwdSAqdiwgdW5zaWduZWQgbG9uZyBnZm4sIHVuc2lnbmVkIG9mZnNldCkK
ICAgICB2b2lkICptYXBwaW5nOwogICAgIHZjcHVfaW5mb190ICpuZXdfaW5m
bzsKICAgICBzdHJ1Y3QgcGFnZV9pbmZvICpwYWdlOworICAgIHVuc2lnbmVk
IGludCBhbGlnbjsKIAogICAgIGlmICggb2Zmc2V0ID4gKFBBR0VfU0laRSAt
IHNpemVvZih2Y3B1X2luZm9fdCkpICkKICAgICAgICAgcmV0dXJuIC1FSU5W
QUw7CiAKKyNpZmRlZiBDT05GSUdfQ09NUEFUCisgICAgaWYgKCBoYXNfMzJi
aXRfc2hpbmZvKGQpICkKKyAgICAgICAgYWxpZ24gPSBhbGlnbm9mKG5ld19p
bmZvLT5jb21wYXQpOworICAgIGVsc2UKKyNlbmRpZgorICAgICAgICBhbGln
biA9IGFsaWdub2YoKm5ld19pbmZvKTsKKyAgICBpZiAoIG9mZnNldCAmIChh
bGlnbiAtIDEpICkKKyAgICAgICAgcmV0dXJuIC1FSU5WQUw7CisKICAgICBp
ZiAoICFtZm5fZXEodi0+dmNwdV9pbmZvX21mbiwgSU5WQUxJRF9NRk4pICkK
ICAgICAgICAgcmV0dXJuIC1FSU5WQUw7CiAKLS0gCjIuMTcuMQoK

--=separator--


From xen-devel-bounces@lists.xenproject.org Tue Jul 07 12:24:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jul 2020 12:24:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jsme1-0007IQ-3x; Tue, 07 Jul 2020 12:24:17 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1g3R=AS=xenbits.xen.org=iwj@srs-us1.protection.inumbo.net>)
 id 1jsme0-0007FV-5c
 for xen-devel@lists.xen.org; Tue, 07 Jul 2020 12:24:16 +0000
X-Inumbo-ID: c2ace552-c04c-11ea-8d5d-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c2ace552-c04c-11ea-8d5d-12813bfff9fa;
 Tue, 07 Jul 2020 12:24:10 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Date:Message-Id:Subject:CC:From:To:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Sender:Reply-To:Content-ID:
 Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc
 :Resent-Message-ID:In-Reply-To:References:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=OfSeTYeTsKjSZwvL6r1Hs9ToUyWuj9sme0SjaVk1H1c=; b=ZBe1l2yJAbysjWHq+PAhapWEXn
 RC0YCao2qaD6diSAoiz3LaKo4oAGseJajxsgLiUab8j71GFiBA+gp9kO5MKdOwP+vvsb0EOGap0ob
 +yamxp0Rc1lTDxcqIMoCGqbRzqhVAUnPZuJXCBH/IUcnmRJXB1bIUOpfvW8ZQpk+sFtU=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenbits.xen.org>)
 id 1jsmdp-0002v0-Sz; Tue, 07 Jul 2020 12:24:05 +0000
Received: from iwj by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <iwj@xenbits.xen.org>)
 id 1jsmdp-0004Ry-Rv; Tue, 07 Jul 2020 12:24:05 +0000
Content-Type: multipart/mixed; boundary="=separator"; charset="utf-8"
Content-Transfer-Encoding: binary
MIME-Version: 1.0
X-Mailer: MIME-tools 5.509 (Entity 5.509)
To: xen-announce@lists.xen.org, xen-devel@lists.xen.org,
 xen-users@lists.xen.org, oss-security@lists.openwall.com
From: Xen.org security team <security@xen.org>
Subject: Xen Security Advisory 328 v3 (CVE-2020-15567) - non-atomic
 modification of live EPT PTE
Message-Id: <E1jsmdp-0004Ry-Rv@xenbits.xenproject.org>
Date: Tue, 07 Jul 2020 12:24:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "Xen.org security team" <security-team-members@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

--=separator
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

            Xen Security Advisory CVE-2020-15567 / XSA-328
                               version 3

                non-atomic modification of live EPT PTE

UPDATES IN VERSION 3
====================

Public release.

ISSUE DESCRIPTION
=================

When mapping guest EPT (nested paging) tables, Xen would in some
circumstances use a series of non-atomic bitfield writes.

Depending on the compiler version and optimisation flags, Xen might
expose a dangerous partially-written PTE to the hardware, which an
attacker might be able to race to exploit.
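The hazard can be illustrated with a stand-alone sketch.  The bitfield
layout below is illustrative only, not the real EPT PTE definition, and
the helper names are hypothetical:

```c
#include <stdint.h>

/* Illustrative PTE-like union: bitfields over a single 64-bit word. */
typedef union {
    struct {
        uint64_t read:1, write:1, exec:1;
        uint64_t type:3;
        uint64_t pad:6;
        uint64_t mfn:40;
        uint64_t rest:12;
    };
    uint64_t epte;
} epte_sketch_t;

/* Unsafe pattern: each bitfield assignment may compile to a separate
 * read-modify-write of the live entry, so the hardware can briefly
 * observe a partially-written PTE (e.g. R/W set but a stale mfn). */
static void set_entry_unsafe(volatile epte_sketch_t *live, uint64_t mfn)
{
    live->read = 1;
    live->write = 1;
    live->mfn = mfn;
}

/* Safe pattern: build the new entry in a local variable, then publish
 * it with a single aligned 64-bit store (Xen uses write_atomic()-style
 * helpers for this). */
static void set_entry_atomic(volatile epte_sketch_t *live, uint64_t mfn)
{
    epte_sketch_t e = { .epte = 0 };

    e.read = 1;
    e.write = 1;
    e.mfn = mfn;
    live->epte = e.epte;
}
```

Whether the unsafe pattern is actually dangerous depends on the code
the compiler emits, which is why the vulnerability's presence is
build-dependent (see below).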

IMPACT
======

A guest administrator or perhaps even unprivileged guest user might
be able to cause denial of service, data corruption, or privilege
escalation.

VULNERABLE SYSTEMS
==================

Only systems using Intel CPUs are vulnerable.  Systems using AMD CPUs,
and Arm systems, are not vulnerable.

Only systems using hardware-assisted paging ("hap", aka nested paging;
in this case Intel EPT) are vulnerable.

Only HVM and PVH guests can exploit the vulnerability.

The presence and scope of the vulnerability depends on the precise
optimisations performed by the compiler used to build Xen.  If the
compiler generates (a) a single 64-bit write, or (b) a series of
read-modify-write operations which are in the same order as the source
code, the hypervisor is not vulnerable.

For example, in one test build with gcc 8.3 with normal settings, the
compiler generated multiple (unlocked) read-modify-write operations in
source code order, which did *not* constitute a vulnerability.

We have not been able to survey compilers; consequently we cannot say
which compiler(s) might produce vulnerable code (with which code
generation options).  The code clearly relies on behaviour which the C
language does not guarantee, so we have chosen to issue this advisory.

MITIGATION
==========

Running only PV guests will avoid this vulnerability.

Switching to shadow paging (e.g. using the "hap=0" xl domain
configuration file parameter) will avoid exposing the vulnerability to
those guests.
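As a minimal illustration, the "hap=0" switch is a single line in the
guest's xl configuration file (fragment only; the rest of the
configuration is assumed to exist already):

```
# Illustrative xl domain configuration fragment: force shadow paging
# for this guest so Intel EPT is not used.
type = "hvm"
hap = 0
```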

Manual inspection of the generated assembly code might allow a
suitably qualified person to say that a particular build is not
vulnerable.

There is no less broad mitigation.

CREDITS
=======

This issue was discovered by Jan Beulich of SUSE.

For patch 1:
Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

For patch 2:
From: Roger Pau Monné <roger.pau@citrix.com>
Reported-by: Jan Beulich <jbeulich@suse.com>
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>

RESOLUTION
==========

Applying the appropriate pair of attached patches resolves this issue.

Note that patches for released versions are generally prepared to
apply to the stable branches, and may not apply cleanly to the most
recent release tarball.  Downstreams are encouraged to update to the
tip of the stable branch before applying these patches.

xsa328/xsa328-?.patch        xen-unstable
xsa328/xsa328-4.13-?.patch   Xen 4.13.x
xsa328/xsa328-4.12-?.patch   Xen 4.12.x
xsa328/xsa328-4.11-?.patch   Xen 4.11.x, Xen 4.10.x
xsa328/xsa328-4.9-?.patch    Xen 4.9.x

$ sha256sum xsa328* xsa328*/*
61ceb3d039c3ebb06f480a17593b367b01e7c1e5cc3669d77caecb704fbc7071  xsa328.meta
cae53f7e6c46fe245790036279bc50eaa10e4271790e871ad8a7d446629b2e12  xsa328/xsa328-1.patch
d61354a992869451cd7a3c92254672b5e253d1a994135cf9b4a5c784be0a07ef  xsa328/xsa328-2.patch
018412fba6f153c1d6b03fc2fa6f3ac381060efe6a8651404462028d24c830a8  xsa328/xsa328-4.9-1.patch
f3deb26e0ce27c385ab16065a0ba67b86a228afd949c0a6a78b9d48366fc2554  xsa328/xsa328-4.9-2.patch
a600ecef784485e8608cd4549f756ffa24705747a4d876147f9ba64fff118580  xsa328/xsa328-4.11-1.patch
f3deb26e0ce27c385ab16065a0ba67b86a228afd949c0a6a78b9d48366fc2554  xsa328/xsa328-4.11-2.patch
d608921359e561f9c594c9f8f7ee02432518a229ecea638d472ab91227d705ec  xsa328/xsa328-4.12-1.patch
a51162c019e7e6ed394faa7d40c932456059b7b76a784dc7886dd0a47c43da0b  xsa328/xsa328-4.12-2.patch
51a41fae885aed40839887da473e0c8ab4c4d897a121f5fac2cc3c6c0188d6d2  xsa328/xsa328-4.13-1.patch
a51162c019e7e6ed394faa7d40c932456059b7b76a784dc7886dd0a47c43da0b  xsa328/xsa328-4.13-2.patch
$

DEPLOYMENT DURING EMBARGO
=========================

Deployment of the patches and/or mitigations described above (or
others which are substantially similar) is permitted during the
embargo, even on public-facing systems with untrusted guest users and
administrators.

But: Distribution of updated software is prohibited (except to other
members of the predisclosure list).

Predisclosure list members who wish to deploy significantly different
patches and/or mitigations, please contact the Xen Project Security
Team.

(Note: this during-embargo deployment notice is retained in
post-embargo publicly released Xen Project advisories, even though it
is then no longer applicable.  This is to enable the community to have
oversight of the Xen Project Security Team's decisionmaking.)

For more information about permissible uses of embargoed information,
consult the Xen Project community's agreed Security Policy:
  http://www.xenproject.org/security-policy.html
-----BEGIN PGP SIGNATURE-----

iQFABAEBCAAqFiEEI+MiLBRfRHX6gGCng/4UyVfoK9kFAl8EaAIMHHBncEB4ZW4u
b3JnAAoJEIP+FMlX6CvZi0YH/Aqd/aStpQKD3gTEuif3YBwL9YRf9q8ZxSQqgrG/
du4lABcOE87kRqaAnsVRNe3sQ1sL995O1oiRbcQPcnKqr5q34IPqMghYGJZgpupE
qfreaA6b4Uv7XFEM8Z7NTN17t9dx9Y8aLIoD8dETbFaidtKwjBsQ8fkX7tFSmXH9
YQ0he7B8Is0pGmH6EM5mM6TxqCHz2mtWDdVL4jFuLVqrt10TnNH6S4OHJkEkJcYP
rcSgqOkM7q7tBP3yDWPvlcSGgk+cijEI3AmKREMuISEmimrBpGzrosBpdh8zqbYU
MPmRwbn+luyEEOn2Y8j81EfgJR+LR1Itct1E8CU0vS2v0Gw=
=b0L/
-----END PGP SIGNATURE-----

--=separator
Content-Type: application/octet-stream; name="xsa328.meta"
Content-Disposition: attachment; filename="xsa328.meta"
Content-Transfer-Encoding: base64

ewogICJYU0EiOiAzMjgsCiAgIlN1cHBvcnRlZFZlcnNpb25zIjogWwogICAg
Im1hc3RlciIsCiAgICAiNC4xMyIsCiAgICAiNC4xMiIsCiAgICAiNC4xMSIs
CiAgICAiNC4xMCIsCiAgICAiNC45IgogIF0sCiAgIlRyZWVzIjogWwogICAg
InhlbiIKICBdLAogICJSZWNpcGVzIjogewogICAgIjQuMTAiOiB7CiAgICAg
ICJSZWNpcGVzIjogewogICAgICAgICJ4ZW4iOiB7CiAgICAgICAgICAiU3Rh
YmxlUmVmIjogImZkNmU0OWVjYWUwMzg0MDYxMGZkYzZhNDE2YTYzODU5MGMw
YjY1MzUiLAogICAgICAgICAgIlByZXJlcXMiOiBbCiAgICAgICAgICAgIDMx
NywKICAgICAgICAgICAgMzE5CiAgICAgICAgICBdLAogICAgICAgICAgIlBh
dGNoZXMiOiBbCiAgICAgICAgICAgICJ4c2EzMjgveHNhMzI4LTQuMTEtMS5w
YXRjaCIsCiAgICAgICAgICAgICJ4c2EzMjgveHNhMzI4LTQuMTEtMi5wYXRj
aCIKICAgICAgICAgIF0KICAgICAgICB9CiAgICAgIH0KICAgIH0sCiAgICAi
NC4xMSI6IHsKICAgICAgIlJlY2lwZXMiOiB7CiAgICAgICAgInhlbiI6IHsK
ICAgICAgICAgICJTdGFibGVSZWYiOiAiMmI3NzcyOTg4OGZiODUxYWI5NmU3
Zjc3YmM4NTQxMjI2MjZiNDg2MSIsCiAgICAgICAgICAiUHJlcmVxcyI6IFsK
ICAgICAgICAgICAgMzE3LAogICAgICAgICAgICAzMTkKICAgICAgICAgIF0s
CiAgICAgICAgICAiUGF0Y2hlcyI6IFsKICAgICAgICAgICAgInhzYTMyOC94
c2EzMjgtNC4xMS0xLnBhdGNoIiwKICAgICAgICAgICAgInhzYTMyOC94c2Ez
MjgtNC4xMS0yLnBhdGNoIgogICAgICAgICAgXQogICAgICAgIH0KICAgICAg
fQogICAgfSwKICAgICI0LjEyIjogewogICAgICAiUmVjaXBlcyI6IHsKICAg
ICAgICAieGVuIjogewogICAgICAgICAgIlN0YWJsZVJlZiI6ICIwNTBmZTQ4
ZGM5ODFlMDQ4OGRlMWY2YzZjMDdkODExMGYzYjc1MjNiIiwKICAgICAgICAg
ICJQcmVyZXFzIjogWwogICAgICAgICAgICAzMTcsCiAgICAgICAgICAgIDMx
OQogICAgICAgICAgXSwKICAgICAgICAgICJQYXRjaGVzIjogWwogICAgICAg
ICAgICAieHNhMzI4L3hzYTMyOC00LjEyLTEucGF0Y2giLAogICAgICAgICAg
ICAieHNhMzI4L3hzYTMyOC00LjEyLTIucGF0Y2giCiAgICAgICAgICBdCiAg
ICAgICAgfQogICAgICB9CiAgICB9LAogICAgIjQuMTMiOiB7CiAgICAgICJS
ZWNpcGVzIjogewogICAgICAgICJ4ZW4iOiB7CiAgICAgICAgICAiU3RhYmxl
UmVmIjogIjlmN2U4YmFjNGNhMjc5YjNiZmNjYjVmMzczMGZiMmU1Mzk4Yzk1
YWIiLAogICAgICAgICAgIlByZXJlcXMiOiBbCiAgICAgICAgICAgIDMxNywK
ICAgICAgICAgICAgMzE5CiAgICAgICAgICBdLAogICAgICAgICAgIlBhdGNo
ZXMiOiBbCiAgICAgICAgICAgICJ4c2EzMjgveHNhMzI4LTQuMTMtMS5wYXRj
aCIsCiAgICAgICAgICAgICJ4c2EzMjgveHNhMzI4LTQuMTMtMi5wYXRjaCIK
ICAgICAgICAgIF0KICAgICAgICB9CiAgICAgIH0KICAgIH0sCiAgICAiNC45
IjogewogICAgICAiUmVjaXBlcyI6IHsKICAgICAgICAieGVuIjogewogICAg
ICAgICAgIlN0YWJsZVJlZiI6ICI2ZTQ3N2MyZWE0ZDVjMjZhN2E3YjJmODUw
MTY2YWE3OWVkYzUyMjVjIiwKICAgICAgICAgICJQcmVyZXFzIjogWwogICAg
ICAgICAgICAzMTkKICAgICAgICAgIF0sCiAgICAgICAgICAiUGF0Y2hlcyI6
IFsKICAgICAgICAgICAgInhzYTMyOC94c2EzMjgtNC45LTEucGF0Y2giLAog
ICAgICAgICAgICAieHNhMzI4L3hzYTMyOC00LjktMi5wYXRjaCIKICAgICAg
ICAgIF0KICAgICAgICB9CiAgICAgIH0KICAgIH0sCiAgICAibWFzdGVyIjog
ewogICAgICAiUmVjaXBlcyI6IHsKICAgICAgICAieGVuIjogewogICAgICAg
ICAgIlN0YWJsZVJlZiI6ICJlNGQyMjA3MTY1YjM3OWVjMTNjOGI1MTI5MzZm
NjM5ODJhZjYyZDEzIiwKICAgICAgICAgICJQcmVyZXFzIjogWwogICAgICAg
ICAgICAzMTcsCiAgICAgICAgICAgIDMxOQogICAgICAgICAgXSwKICAgICAg
ICAgICJQYXRjaGVzIjogWwogICAgICAgICAgICAieHNhMzI4L3hzYTMyOC0x
LnBhdGNoIiwKICAgICAgICAgICAgInhzYTMyOC94c2EzMjgtMi5wYXRjaCIK
ICAgICAgICAgIF0KICAgICAgICB9CiAgICAgIH0KICAgIH0KICB9Cn0=

--=separator
Content-Type: application/octet-stream; name="xsa328/xsa328-1.patch"
Content-Disposition: attachment; filename="xsa328/xsa328-1.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiB4ODYvRVBUOiBlcHRfc2V0X21pZGRsZV9lbnRyeSgpIHJlbGF0ZWQgYWRq
dXN0bWVudHMKCmVwdF9zcGxpdF9zdXBlcl9wYWdlKCkgd2FudHMgdG8gZnVy
dGhlciBtb2RpZnkgdGhlIG5ld2x5IGFsbG9jYXRlZAp0YWJsZSwgc28gaGF2
ZSBlcHRfc2V0X21pZGRsZV9lbnRyeSgpIHJldHVybiB0aGUgbWFwcGVkIHBv
aW50ZXIgcmF0aGVyCnRoYW4gdGVhcmluZyBpdCBkb3duIGFuZCB0aGVuIGdl
dHRpbmcgcmUtZXN0YWJsaXNoZWQgcmlnaHQgYWdhaW4uCgpTaW1pbGFybHkg
ZXB0X25leHRfbGV2ZWwoKSB3YW50cyB0byBoYW5kIGJhY2sgYSBtYXBwZWQg
cG9pbnRlciBvZgp0aGUgbmV4dCBsZXZlbCBwYWdlLCBzbyByZS11c2UgdGhl
IG9uZSBlc3RhYmxpc2hlZCBieQplcHRfc2V0X21pZGRsZV9lbnRyeSgpIGlu
IGNhc2UgdGhhdCBwYXRoIHdhcyB0YWtlbi4KClB1bGwgdGhlIHNldHRpbmcg
b2Ygc3VwcHJlc3NfdmUgYWhlYWQgb2YgaW5zZXJ0aW9uIGludG8gdGhlIGhp
Z2hlciBsZXZlbAp0YWJsZSwgYW5kIGRvbid0IGhhdmUgZXB0X3NwbGl0X3N1
cGVyX3BhZ2UoKSBzZXQgdGhlIGZpZWxkIGEgMm5kIHRpbWUuCgpUaGlzIGlz
IHBhcnQgb2YgWFNBLTMyOC4KClNpZ25lZC1vZmYtYnk6IEphbiBCZXVsaWNo
IDxqYmV1bGljaEBzdXNlLmNvbT4KLS0tCiB4ZW4vYXJjaC94ODYvbW0vcDJt
LWVwdC5jIHwgNDEgKysrKysrKysrKysrKysrKystLS0tLS0tLS0tLS0tLS0t
LS0tLS0tCiAxIGZpbGUgY2hhbmdlZCwgMTggaW5zZXJ0aW9ucygrKSwgMjMg
ZGVsZXRpb25zKC0pCgpkaWZmIC0tZ2l0IGEveGVuL2FyY2gveDg2L21tL3Ay
bS1lcHQuYyBiL3hlbi9hcmNoL3g4Ni9tbS9wMm0tZXB0LmMKaW5kZXggMjkz
ZjNlOTQxOS4uZDk5MTNhNmM5NyAxMDA2NDQKLS0tIGEveGVuL2FyY2gveDg2
L21tL3AybS1lcHQuYworKysgYi94ZW4vYXJjaC94ODYvbW0vcDJtLWVwdC5j
CkBAIC0xODYsOCArMTg2LDkgQEAgc3RhdGljIHZvaWQgZXB0X3AybV90eXBl
X3RvX2ZsYWdzKGNvbnN0IHN0cnVjdCBwMm1fZG9tYWluICpwMm0sCiAjZGVm
aW5lIEdVRVNUX1RBQkxFX1NVUEVSX1BBR0UgIDIKICNkZWZpbmUgR1VFU1Rf
VEFCTEVfUE9EX1BBR0UgICAgMwogCi0vKiBGaWxsIGluIG1pZGRsZSBsZXZl
bHMgb2YgZXB0IHRhYmxlICovCi1zdGF0aWMgaW50IGVwdF9zZXRfbWlkZGxl
X2VudHJ5KHN0cnVjdCBwMm1fZG9tYWluICpwMm0sIGVwdF9lbnRyeV90ICpl
cHRfZW50cnkpCisvKiBGaWxsIGluIG1pZGRsZSBsZXZlbCBvZiBlcHQgdGFi
bGU7IHJldHVybiBwb2ludGVyIHRvIG1hcHBlZCBuZXcgdGFibGUuICovCitz
dGF0aWMgZXB0X2VudHJ5X3QgKmVwdF9zZXRfbWlkZGxlX2VudHJ5KHN0cnVj
dCBwMm1fZG9tYWluICpwMm0sCisgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgIGVwdF9lbnRyeV90ICplcHRfZW50cnkpCiB7CiAg
ICAgbWZuX3QgbWZuOwogICAgIGVwdF9lbnRyeV90ICp0YWJsZTsKQEAgLTE5
NSw3ICsxOTYsMTIgQEAgc3RhdGljIGludCBlcHRfc2V0X21pZGRsZV9lbnRy
eShzdHJ1Y3QgcDJtX2RvbWFpbiAqcDJtLCBlcHRfZW50cnlfdCAqZXB0X2Vu
dHJ5KQogCiAgICAgbWZuID0gcDJtX2FsbG9jX3B0cChwMm0sIDApOwogICAg
IGlmICggbWZuX2VxKG1mbiwgSU5WQUxJRF9NRk4pICkKLSAgICAgICAgcmV0
dXJuIDA7CisgICAgICAgIHJldHVybiBOVUxMOworCisgICAgdGFibGUgPSBt
YXBfZG9tYWluX3BhZ2UobWZuKTsKKworICAgIGZvciAoIGkgPSAwOyBpIDwg
RVBUX1BBR0VUQUJMRV9FTlRSSUVTOyBpKysgKQorICAgICAgICB0YWJsZVtp
XS5zdXBwcmVzc192ZSA9IDE7CiAKICAgICBlcHRfZW50cnktPmVwdGUgPSAw
OwogICAgIGVwdF9lbnRyeS0+bWZuID0gbWZuX3gobWZuKTsKQEAgLTIwNywx
NCArMjEzLDcgQEAgc3RhdGljIGludCBlcHRfc2V0X21pZGRsZV9lbnRyeShz
dHJ1Y3QgcDJtX2RvbWFpbiAqcDJtLCBlcHRfZW50cnlfdCAqZXB0X2VudHJ5
KQogCiAgICAgZXB0X2VudHJ5LT5zdXBwcmVzc192ZSA9IDE7CiAKLSAgICB0
YWJsZSA9IG1hcF9kb21haW5fcGFnZShtZm4pOwotCi0gICAgZm9yICggaSA9
IDA7IGkgPCBFUFRfUEFHRVRBQkxFX0VOVFJJRVM7IGkrKyApCi0gICAgICAg
IHRhYmxlW2ldLnN1cHByZXNzX3ZlID0gMTsKLQotICAgIHVubWFwX2RvbWFp
bl9wYWdlKHRhYmxlKTsKLQotICAgIHJldHVybiAxOworICAgIHJldHVybiB0
YWJsZTsKIH0KIAogLyogZnJlZSBlcHQgc3ViIHRyZWUgYmVoaW5kIGFuIGVu
dHJ5ICovCkBAIC0yNTIsMTAgKzI1MSwxMCBAQCBzdGF0aWMgYm9vbF90IGVw
dF9zcGxpdF9zdXBlcl9wYWdlKHN0cnVjdCBwMm1fZG9tYWluICpwMm0sCiAK
ICAgICBBU1NFUlQoaXNfZXB0ZV9zdXBlcnBhZ2UoZXB0X2VudHJ5KSk7CiAK
LSAgICBpZiAoICFlcHRfc2V0X21pZGRsZV9lbnRyeShwMm0sICZuZXdfZXB0
KSApCisgICAgdGFibGUgPSBlcHRfc2V0X21pZGRsZV9lbnRyeShwMm0sICZu
ZXdfZXB0KTsKKyAgICBpZiAoICF0YWJsZSApCiAgICAgICAgIHJldHVybiAw
OwogCi0gICAgdGFibGUgPSBtYXBfZG9tYWluX3BhZ2UoX21mbihuZXdfZXB0
Lm1mbikpOwogICAgIHRydW5rID0gMVVMIDw8ICgobGV2ZWwgLSAxKSAqIEVQ
VF9UQUJMRV9PUkRFUik7CiAKICAgICBmb3IgKCBpID0gMDsgaSA8IEVQVF9Q
QUdFVEFCTEVfRU5UUklFUzsgaSsrICkKQEAgLTI2Niw3ICsyNjUsNiBAQCBz
dGF0aWMgYm9vbF90IGVwdF9zcGxpdF9zdXBlcl9wYWdlKHN0cnVjdCBwMm1f
ZG9tYWluICpwMm0sCiAgICAgICAgIGVwdGUtPnNwID0gKGxldmVsID4gMSk7
CiAgICAgICAgIGVwdGUtPm1mbiArPSBpICogdHJ1bms7CiAgICAgICAgIGVw
dGUtPnNucCA9IGlzX2lvbW11X2VuYWJsZWQocDJtLT5kb21haW4pICYmIGlv
bW11X3Nub29wOwotICAgICAgICBlcHRlLT5zdXBwcmVzc192ZSA9IDE7CiAK
ICAgICAgICAgZXB0X3AybV90eXBlX3RvX2ZsYWdzKHAybSwgZXB0ZSk7CiAK
QEAgLTMwNSw4ICszMDMsNyBAQCBzdGF0aWMgaW50IGVwdF9uZXh0X2xldmVs
KHN0cnVjdCBwMm1fZG9tYWluICpwMm0sIGJvb2xfdCByZWFkX29ubHksCiAg
ICAgICAgICAgICAgICAgICAgICAgICAgIGVwdF9lbnRyeV90ICoqdGFibGUs
IHVuc2lnbmVkIGxvbmcgKmdmbl9yZW1haW5kZXIsCiAgICAgICAgICAgICAg
ICAgICAgICAgICAgIGludCBuZXh0X2xldmVsKQogewotICAgIHVuc2lnbmVk
IGxvbmcgbWZuOwotICAgIGVwdF9lbnRyeV90ICplcHRfZW50cnksIGU7Cisg
ICAgZXB0X2VudHJ5X3QgKmVwdF9lbnRyeSwgKm5leHQgPSBOVUxMLCBlOwog
ICAgIHUzMiBzaGlmdCwgaW5kZXg7CiAKICAgICBzaGlmdCA9IG5leHRfbGV2
ZWwgKiBFUFRfVEFCTEVfT1JERVI7CkBAIC0zMzEsMTkgKzMyOCwxNyBAQCBz
dGF0aWMgaW50IGVwdF9uZXh0X2xldmVsKHN0cnVjdCBwMm1fZG9tYWluICpw
Mm0sIGJvb2xfdCByZWFkX29ubHksCiAgICAgICAgIGlmICggcmVhZF9vbmx5
ICkKICAgICAgICAgICAgIHJldHVybiBHVUVTVF9UQUJMRV9NQVBfRkFJTEVE
OwogCi0gICAgICAgIGlmICggIWVwdF9zZXRfbWlkZGxlX2VudHJ5KHAybSwg
ZXB0X2VudHJ5KSApCisgICAgICAgIG5leHQgPSBlcHRfc2V0X21pZGRsZV9l
bnRyeShwMm0sIGVwdF9lbnRyeSk7CisgICAgICAgIGlmICggIW5leHQgKQog
ICAgICAgICAgICAgcmV0dXJuIEdVRVNUX1RBQkxFX01BUF9GQUlMRUQ7Ci0g
ICAgICAgIGVsc2UKLSAgICAgICAgICAgIGUgPSBhdG9taWNfcmVhZF9lcHRf
ZW50cnkoZXB0X2VudHJ5KTsgLyogUmVmcmVzaCAqLworICAgICAgICAvKiBl
IGlzIG5vdyBzdGFsZSBhbmQgaGVuY2UgbWF5IG5vdCBiZSB1c2VkIGFueW1v
cmUgYmVsb3cuICovCiAgICAgfQotCiAgICAgLyogVGhlIG9ubHkgdGltZSBz
cCB3b3VsZCBiZSBzZXQgaGVyZSBpcyBpZiB3ZSBoYWQgaGl0IGEgc3VwZXJw
YWdlICovCi0gICAgaWYgKCBpc19lcHRlX3N1cGVycGFnZSgmZSkgKQorICAg
IGVsc2UgaWYgKCBpc19lcHRlX3N1cGVycGFnZSgmZSkgKQogICAgICAgICBy
ZXR1cm4gR1VFU1RfVEFCTEVfU1VQRVJfUEFHRTsKIAotICAgIG1mbiA9IGUu
bWZuOwogICAgIHVubWFwX2RvbWFpbl9wYWdlKCp0YWJsZSk7Ci0gICAgKnRh
YmxlID0gbWFwX2RvbWFpbl9wYWdlKF9tZm4obWZuKSk7CisgICAgKnRhYmxl
ID0gbmV4dCA/OiBtYXBfZG9tYWluX3BhZ2UoX21mbihlLm1mbikpOwogICAg
ICpnZm5fcmVtYWluZGVyICY9ICgxVUwgPDwgc2hpZnQpIC0gMTsKICAgICBy
ZXR1cm4gR1VFU1RfVEFCTEVfTk9STUFMX1BBR0U7CiB9Ci0tIAoyLjI2LjIK
Cg==

--=separator
Content-Type: application/octet-stream; name="xsa328/xsa328-2.patch"
Content-Disposition: attachment; filename="xsa328/xsa328-2.patch"
Content-Transfer-Encoding: base64

RnJvbTogPHNlY3VyaXR5QHhlbnByb2plY3Qub3JnPgpTdWJqZWN0OiB4ODYv
ZXB0OiBhdG9taWNhbGx5IG1vZGlmeSBlbnRyaWVzIGluIGVwdF9uZXh0X2xl
dmVsCgplcHRfbmV4dF9sZXZlbCB3YXMgcGFzc2luZyBhIGxpdmUgUFRFIHBv
aW50ZXIgdG8gZXB0X3NldF9taWRkbGVfZW50cnksCndoaWNoIHdhcyB0aGVu
IG1vZGlmaWVkIHdpdGhvdXQgdGFraW5nIGludG8gYWNjb3VudCB0aGF0IHRo
ZSBQVEUgY291bGQKYmUgcGFydCBvZiBhIGxpdmUgRVBUIHRhYmxlLiBUaGlz
IHdhc24ndCBhIHNlY3VyaXR5IGlzc3VlIGJlY2F1c2UgdGhlCnBhZ2VzIHJl
dHVybmVkIGJ5IHAybV9hbGxvY19wdHAgYXJlIHplcm9lZCwgc28gYWRkaW5n
IHN1Y2ggYW4gZW50cnkKYmVmb3JlIGFjdHVhbGx5IGluaXRpYWxpemluZyBp
dCBkaWRuJ3QgYWxsb3cgYSBndWVzdCB0byBhY2Nlc3MKcGh5c2ljYWwgbWVt
b3J5IGFkZHJlc3NlcyBpdCB3YXNuJ3Qgc3VwcG9zZWQgdG8gYWNjZXNzLgoK
VGhpcyBpcyBwYXJ0IG9mIFhTQS0zMjguCgpSZXZpZXdlZC1ieTogSmFuIEJl
dWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgotLS0KIHhlbi9hcmNoL3g4Ni9t
bS9wMm0tZXB0LmMgfCAxMCArKysrKysrKy0tCiAxIGZpbGUgY2hhbmdlZCwg
OCBpbnNlcnRpb25zKCspLCAyIGRlbGV0aW9ucygtKQoKZGlmZiAtLWdpdCBh
L3hlbi9hcmNoL3g4Ni9tbS9wMm0tZXB0LmMgYi94ZW4vYXJjaC94ODYvbW0v
cDJtLWVwdC5jCmluZGV4IGQ5OTEzYTZjOTcuLjg3YTE0ZjZmMjIgMTAwNjQ0
Ci0tLSBhL3hlbi9hcmNoL3g4Ni9tbS9wMm0tZXB0LmMKKysrIGIveGVuL2Fy
Y2gveDg2L21tL3AybS1lcHQuYwpAQCAtMzA2LDYgKzMwNiw4IEBAIHN0YXRp
YyBpbnQgZXB0X25leHRfbGV2ZWwoc3RydWN0IHAybV9kb21haW4gKnAybSwg
Ym9vbF90IHJlYWRfb25seSwKICAgICBlcHRfZW50cnlfdCAqZXB0X2VudHJ5
LCAqbmV4dCA9IE5VTEwsIGU7CiAgICAgdTMyIHNoaWZ0LCBpbmRleDsKIAor
ICAgIEFTU0VSVChuZXh0X2xldmVsKTsKKwogICAgIHNoaWZ0ID0gbmV4dF9s
ZXZlbCAqIEVQVF9UQUJMRV9PUkRFUjsKIAogICAgIGluZGV4ID0gKmdmbl9y
ZW1haW5kZXIgPj4gc2hpZnQ7CkBAIC0zMjIsMTYgKzMyNCwyMCBAQCBzdGF0
aWMgaW50IGVwdF9uZXh0X2xldmVsKHN0cnVjdCBwMm1fZG9tYWluICpwMm0s
IGJvb2xfdCByZWFkX29ubHksCiAKICAgICBpZiAoICFpc19lcHRlX3ByZXNl
bnQoJmUpICkKICAgICB7CisgICAgICAgIGludCByYzsKKwogICAgICAgICBp
ZiAoIGUuc2FfcDJtdCA9PSBwMm1fcG9wdWxhdGVfb25fZGVtYW5kICkKICAg
ICAgICAgICAgIHJldHVybiBHVUVTVF9UQUJMRV9QT0RfUEFHRTsKIAogICAg
ICAgICBpZiAoIHJlYWRfb25seSApCiAgICAgICAgICAgICByZXR1cm4gR1VF
U1RfVEFCTEVfTUFQX0ZBSUxFRDsKIAotICAgICAgICBuZXh0ID0gZXB0X3Nl
dF9taWRkbGVfZW50cnkocDJtLCBlcHRfZW50cnkpOworICAgICAgICBuZXh0
ID0gZXB0X3NldF9taWRkbGVfZW50cnkocDJtLCAmZSk7CiAgICAgICAgIGlm
ICggIW5leHQgKQogICAgICAgICAgICAgcmV0dXJuIEdVRVNUX1RBQkxFX01B
UF9GQUlMRUQ7Ci0gICAgICAgIC8qIGUgaXMgbm93IHN0YWxlIGFuZCBoZW5j
ZSBtYXkgbm90IGJlIHVzZWQgYW55bW9yZSBiZWxvdy4gKi8KKworICAgICAg
ICByYyA9IGF0b21pY193cml0ZV9lcHRfZW50cnkocDJtLCBlcHRfZW50cnks
IGUsIG5leHRfbGV2ZWwpOworICAgICAgICBBU1NFUlQocmMgPT0gMCk7CiAg
ICAgfQogICAgIC8qIFRoZSBvbmx5IHRpbWUgc3Agd291bGQgYmUgc2V0IGhl
cmUgaXMgaWYgd2UgaGFkIGhpdCBhIHN1cGVycGFnZSAqLwogICAgIGVsc2Ug
aWYgKCBpc19lcHRlX3N1cGVycGFnZSgmZSkgKQotLSAKMi4yNi4yCgo=

--=separator
Content-Type: application/octet-stream; name="xsa328/xsa328-4.9-1.patch"
Content-Disposition: attachment; filename="xsa328/xsa328-4.9-1.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiB4ODYvRVBUOiBlcHRfc2V0X21pZGRsZV9lbnRyeSgpIHJlbGF0ZWQgYWRq
dXN0bWVudHMKCmVwdF9zcGxpdF9zdXBlcl9wYWdlKCkgd2FudHMgdG8gZnVy
dGhlciBtb2RpZnkgdGhlIG5ld2x5IGFsbG9jYXRlZAp0YWJsZSwgc28gaGF2
ZSBlcHRfc2V0X21pZGRsZV9lbnRyeSgpIHJldHVybiB0aGUgbWFwcGVkIHBv
aW50ZXIgcmF0aGVyCnRoYW4gdGVhcmluZyBpdCBkb3duIGFuZCB0aGVuIGdl
dHRpbmcgcmUtZXN0YWJsaXNoZWQgcmlnaHQgYWdhaW4uCgpTaW1pbGFybHkg
ZXB0X25leHRfbGV2ZWwoKSB3YW50cyB0byBoYW5kIGJhY2sgYSBtYXBwZWQg
cG9pbnRlciBvZgp0aGUgbmV4dCBsZXZlbCBwYWdlLCBzbyByZS11c2UgdGhl
IG9uZSBlc3RhYmxpc2hlZCBieQplcHRfc2V0X21pZGRsZV9lbnRyeSgpIGlu
IGNhc2UgdGhhdCBwYXRoIHdhcyB0YWtlbi4KClB1bGwgdGhlIHNldHRpbmcg
b2Ygc3VwcHJlc3NfdmUgYWhlYWQgb2YgaW5zZXJ0aW9uIGludG8gdGhlIGhp
Z2hlciBsZXZlbAp0YWJsZSwgYW5kIGRvbid0IGhhdmUgZXB0X3NwbGl0X3N1
cGVyX3BhZ2UoKSBzZXQgdGhlIGZpZWxkIGEgMm5kIHRpbWUuCgpUaGlzIGlz
IHBhcnQgb2YgWFNBLTMyOC4KClNpZ25lZC1vZmYtYnk6IEphbiBCZXVsaWNo
IDxqYmV1bGljaEBzdXNlLmNvbT4KUmV2aWV3ZWQtYnk6IFJvZ2VyIFBhdSBN
b25uw6kgPHJvZ2VyLnBhdUBjaXRyaXguY29tPgoKLS0tIGEveGVuL2FyY2gv
eDg2L21tL3AybS1lcHQuYworKysgYi94ZW4vYXJjaC94ODYvbW0vcDJtLWVw
dC5jCkBAIC0yMjgsOCArMjI4LDkgQEAgc3RhdGljIHZvaWQgZXB0X3AybV90
eXBlX3RvX2ZsYWdzKHN0cnVjdAogI2RlZmluZSBHVUVTVF9UQUJMRV9TVVBF
Ul9QQUdFICAyCiAjZGVmaW5lIEdVRVNUX1RBQkxFX1BPRF9QQUdFICAgIDMK
IAotLyogRmlsbCBpbiBtaWRkbGUgbGV2ZWxzIG9mIGVwdCB0YWJsZSAqLwot
c3RhdGljIGludCBlcHRfc2V0X21pZGRsZV9lbnRyeShzdHJ1Y3QgcDJtX2Rv
bWFpbiAqcDJtLCBlcHRfZW50cnlfdCAqZXB0X2VudHJ5KQorLyogRmlsbCBp
biBtaWRkbGUgbGV2ZWwgb2YgZXB0IHRhYmxlOyByZXR1cm4gcG9pbnRlciB0
byBtYXBwZWQgbmV3IHRhYmxlLiAqLworc3RhdGljIGVwdF9lbnRyeV90ICpl
cHRfc2V0X21pZGRsZV9lbnRyeShzdHJ1Y3QgcDJtX2RvbWFpbiAqcDJtLAor
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBlcHRf
ZW50cnlfdCAqZXB0X2VudHJ5KQogewogICAgIHN0cnVjdCBwYWdlX2luZm8g
KnBnOwogICAgIGVwdF9lbnRyeV90ICp0YWJsZTsKQEAgLTIzNyw3ICsyMzgs
MTIgQEAgc3RhdGljIGludCBlcHRfc2V0X21pZGRsZV9lbnRyeShzdHJ1Y3Qg
cAogCiAgICAgcGcgPSBwMm1fYWxsb2NfcHRwKHAybSwgMCk7CiAgICAgaWYg
KCBwZyA9PSBOVUxMICkKLSAgICAgICAgcmV0dXJuIDA7CisgICAgICAgIHJl
dHVybiBOVUxMOworCisgICAgdGFibGUgPSBfX21hcF9kb21haW5fcGFnZShw
Zyk7CisKKyAgICBmb3IgKCBpID0gMDsgaSA8IEVQVF9QQUdFVEFCTEVfRU5U
UklFUzsgaSsrICkKKyAgICAgICAgdGFibGVbaV0uc3VwcHJlc3NfdmUgPSAx
OwogCiAgICAgZXB0X2VudHJ5LT5lcHRlID0gMDsKICAgICBlcHRfZW50cnkt
Pm1mbiA9IHBhZ2VfdG9fbWZuKHBnKTsKQEAgLTI0OSwxNCArMjU1LDcgQEAg
c3RhdGljIGludCBlcHRfc2V0X21pZGRsZV9lbnRyeShzdHJ1Y3QgcAogCiAg
ICAgZXB0X2VudHJ5LT5zdXBwcmVzc192ZSA9IDE7CiAKLSAgICB0YWJsZSA9
IF9fbWFwX2RvbWFpbl9wYWdlKHBnKTsKLQotICAgIGZvciAoIGkgPSAwOyBp
IDwgRVBUX1BBR0VUQUJMRV9FTlRSSUVTOyBpKysgKQotICAgICAgICB0YWJs
ZVtpXS5zdXBwcmVzc192ZSA9IDE7Ci0KLSAgICB1bm1hcF9kb21haW5fcGFn
ZSh0YWJsZSk7Ci0KLSAgICByZXR1cm4gMTsKKyAgICByZXR1cm4gdGFibGU7
CiB9CiAKIC8qIGZyZWUgZXB0IHN1YiB0cmVlIGJlaGluZCBhbiBlbnRyeSAq
LwpAQCAtMjk0LDEwICsyOTMsMTAgQEAgc3RhdGljIGJvb2xfdCBlcHRfc3Bs
aXRfc3VwZXJfcGFnZShzdHJ1YwogCiAgICAgQVNTRVJUKGlzX2VwdGVfc3Vw
ZXJwYWdlKGVwdF9lbnRyeSkpOwogCi0gICAgaWYgKCAhZXB0X3NldF9taWRk
bGVfZW50cnkocDJtLCAmbmV3X2VwdCkgKQorICAgIHRhYmxlID0gZXB0X3Nl
dF9taWRkbGVfZW50cnkocDJtLCAmbmV3X2VwdCk7CisgICAgaWYgKCAhdGFi
bGUgKQogICAgICAgICByZXR1cm4gMDsKIAotICAgIHRhYmxlID0gbWFwX2Rv
bWFpbl9wYWdlKF9tZm4obmV3X2VwdC5tZm4pKTsKICAgICB0cnVuayA9IDFV
TCA8PCAoKGxldmVsIC0gMSkgKiBFUFRfVEFCTEVfT1JERVIpOwogCiAgICAg
Zm9yICggaSA9IDA7IGkgPCBFUFRfUEFHRVRBQkxFX0VOVFJJRVM7IGkrKyAp
CkBAIC0zMDgsNyArMzA3LDYgQEAgc3RhdGljIGJvb2xfdCBlcHRfc3BsaXRf
c3VwZXJfcGFnZShzdHJ1YwogICAgICAgICBlcHRlLT5zcCA9IChsZXZlbCA+
IDEpOwogICAgICAgICBlcHRlLT5tZm4gKz0gaSAqIHRydW5rOwogICAgICAg
ICBlcHRlLT5zbnAgPSAoaW9tbXVfZW5hYmxlZCAmJiBpb21tdV9zbm9vcCk7
Ci0gICAgICAgIGVwdGUtPnN1cHByZXNzX3ZlID0gMTsKIAogICAgICAgICBl
cHRfcDJtX3R5cGVfdG9fZmxhZ3MocDJtLCBlcHRlLCBlcHRlLT5zYV9wMm10
LCBlcHRlLT5hY2Nlc3MpOwogCkBAIC0zNDcsOCArMzQ1LDcgQEAgc3RhdGlj
IGludCBlcHRfbmV4dF9sZXZlbChzdHJ1Y3QgcDJtX2RvbQogICAgICAgICAg
ICAgICAgICAgICAgICAgICBlcHRfZW50cnlfdCAqKnRhYmxlLCB1bnNpZ25l
ZCBsb25nICpnZm5fcmVtYWluZGVyLAogICAgICAgICAgICAgICAgICAgICAg
ICAgICBpbnQgbmV4dF9sZXZlbCkKIHsKLSAgICB1bnNpZ25lZCBsb25nIG1m
bjsKLSAgICBlcHRfZW50cnlfdCAqZXB0X2VudHJ5LCBlOworICAgIGVwdF9l
bnRyeV90ICplcHRfZW50cnksICpuZXh0ID0gTlVMTCwgZTsKICAgICB1MzIg
c2hpZnQsIGluZGV4OwogCiAgICAgc2hpZnQgPSBuZXh0X2xldmVsICogRVBU
X1RBQkxFX09SREVSOwpAQCAtMzczLDE5ICszNzAsMTcgQEAgc3RhdGljIGlu
dCBlcHRfbmV4dF9sZXZlbChzdHJ1Y3QgcDJtX2RvbQogICAgICAgICBpZiAo
IHJlYWRfb25seSApCiAgICAgICAgICAgICByZXR1cm4gR1VFU1RfVEFCTEVf
TUFQX0ZBSUxFRDsKIAotICAgICAgICBpZiAoICFlcHRfc2V0X21pZGRsZV9l
bnRyeShwMm0sIGVwdF9lbnRyeSkgKQorICAgICAgICBuZXh0ID0gZXB0X3Nl
dF9taWRkbGVfZW50cnkocDJtLCBlcHRfZW50cnkpOworICAgICAgICBpZiAo
ICFuZXh0ICkKICAgICAgICAgICAgIHJldHVybiBHVUVTVF9UQUJMRV9NQVBf
RkFJTEVEOwotICAgICAgICBlbHNlCi0gICAgICAgICAgICBlID0gYXRvbWlj
X3JlYWRfZXB0X2VudHJ5KGVwdF9lbnRyeSk7IC8qIFJlZnJlc2ggKi8KKyAg
ICAgICAgLyogZSBpcyBub3cgc3RhbGUgYW5kIGhlbmNlIG1heSBub3QgYmUg
dXNlZCBhbnltb3JlIGJlbG93LiAqLwogICAgIH0KLQogICAgIC8qIFRoZSBv
bmx5IHRpbWUgc3Agd291bGQgYmUgc2V0IGhlcmUgaXMgaWYgd2UgaGFkIGhp
dCBhIHN1cGVycGFnZSAqLwotICAgIGlmICggaXNfZXB0ZV9zdXBlcnBhZ2Uo
JmUpICkKKyAgICBlbHNlIGlmICggaXNfZXB0ZV9zdXBlcnBhZ2UoJmUpICkK
ICAgICAgICAgcmV0dXJuIEdVRVNUX1RBQkxFX1NVUEVSX1BBR0U7CiAKLSAg
ICBtZm4gPSBlLm1mbjsKICAgICB1bm1hcF9kb21haW5fcGFnZSgqdGFibGUp
OwotICAgICp0YWJsZSA9IG1hcF9kb21haW5fcGFnZShfbWZuKG1mbikpOwor
ICAgICp0YWJsZSA9IG5leHQgPzogbWFwX2RvbWFpbl9wYWdlKF9tZm4oZS5t
Zm4pKTsKICAgICAqZ2ZuX3JlbWFpbmRlciAmPSAoMVVMIDw8IHNoaWZ0KSAt
IDE7CiAgICAgcmV0dXJuIEdVRVNUX1RBQkxFX05PUk1BTF9QQUdFOwogfQo=

--=separator
Content-Type: application/octet-stream; name="xsa328/xsa328-4.9-2.patch"
Content-Disposition: attachment; filename="xsa328/xsa328-4.9-2.patch"
Content-Transfer-Encoding: base64

RnJvbTogPHNlY3VyaXR5QHhlbnByb2plY3Qub3JnPgpTdWJqZWN0OiB4ODYv
ZXB0OiBhdG9taWNhbGx5IG1vZGlmeSBlbnRyaWVzIGluIGVwdF9uZXh0X2xl
dmVsCgplcHRfbmV4dF9sZXZlbCB3YXMgcGFzc2luZyBhIGxpdmUgUFRFIHBv
aW50ZXIgdG8gZXB0X3NldF9taWRkbGVfZW50cnksCndoaWNoIHdhcyB0aGVu
IG1vZGlmaWVkIHdpdGhvdXQgdGFraW5nIGludG8gYWNjb3VudCB0aGF0IHRo
ZSBQVEUgY291bGQKYmUgcGFydCBvZiBhIGxpdmUgRVBUIHRhYmxlLiBUaGlz
IHdhc24ndCBhIHNlY3VyaXR5IGlzc3VlIGJlY2F1c2UgdGhlCnBhZ2VzIHJl
dHVybmVkIGJ5IHAybV9hbGxvY19wdHAgYXJlIHplcm9lZCwgc28gYWRkaW5n
IHN1Y2ggYW4gZW50cnkKYmVmb3JlIGFjdHVhbGx5IGluaXRpYWxpemluZyBp
dCBkaWRuJ3QgYWxsb3cgYSBndWVzdCB0byBhY2Nlc3MKcGh5c2ljYWwgbWVt
b3J5IGFkZHJlc3NlcyBpdCB3YXNuJ3Qgc3VwcG9zZWQgdG8gYWNjZXNzLgoK
VGhpcyBpcyBwYXJ0IG9mIFhTQS0zMjguCgpSZXZpZXdlZC1ieTogSmFuIEJl
dWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgoKLS0tIGEveGVuL2FyY2gveDg2
L21tL3AybS1lcHQuYworKysgYi94ZW4vYXJjaC94ODYvbW0vcDJtLWVwdC5j
CkBAIC0zNDgsNiArMzQ4LDggQEAgc3RhdGljIGludCBlcHRfbmV4dF9sZXZl
bChzdHJ1Y3QgcDJtX2RvbQogICAgIGVwdF9lbnRyeV90ICplcHRfZW50cnks
ICpuZXh0ID0gTlVMTCwgZTsKICAgICB1MzIgc2hpZnQsIGluZGV4OwogCisg
ICAgQVNTRVJUKG5leHRfbGV2ZWwpOworCiAgICAgc2hpZnQgPSBuZXh0X2xl
dmVsICogRVBUX1RBQkxFX09SREVSOwogCiAgICAgaW5kZXggPSAqZ2ZuX3Jl
bWFpbmRlciA+PiBzaGlmdDsKQEAgLTM2NCwxNiArMzY2LDIwIEBAIHN0YXRp
YyBpbnQgZXB0X25leHRfbGV2ZWwoc3RydWN0IHAybV9kb20KIAogICAgIGlm
ICggIWlzX2VwdGVfcHJlc2VudCgmZSkgKQogICAgIHsKKyAgICAgICAgaW50
IHJjOworCiAgICAgICAgIGlmICggZS5zYV9wMm10ID09IHAybV9wb3B1bGF0
ZV9vbl9kZW1hbmQgKQogICAgICAgICAgICAgcmV0dXJuIEdVRVNUX1RBQkxF
X1BPRF9QQUdFOwogCiAgICAgICAgIGlmICggcmVhZF9vbmx5ICkKICAgICAg
ICAgICAgIHJldHVybiBHVUVTVF9UQUJMRV9NQVBfRkFJTEVEOwogCi0gICAg
ICAgIG5leHQgPSBlcHRfc2V0X21pZGRsZV9lbnRyeShwMm0sIGVwdF9lbnRy
eSk7CisgICAgICAgIG5leHQgPSBlcHRfc2V0X21pZGRsZV9lbnRyeShwMm0s
ICZlKTsKICAgICAgICAgaWYgKCAhbmV4dCApCiAgICAgICAgICAgICByZXR1
cm4gR1VFU1RfVEFCTEVfTUFQX0ZBSUxFRDsKLSAgICAgICAgLyogZSBpcyBu
b3cgc3RhbGUgYW5kIGhlbmNlIG1heSBub3QgYmUgdXNlZCBhbnltb3JlIGJl
bG93LiAqLworCisgICAgICAgIHJjID0gYXRvbWljX3dyaXRlX2VwdF9lbnRy
eShlcHRfZW50cnksIGUsIG5leHRfbGV2ZWwpOworICAgICAgICBBU1NFUlQo
cmMgPT0gMCk7CiAgICAgfQogICAgIC8qIFRoZSBvbmx5IHRpbWUgc3Agd291
bGQgYmUgc2V0IGhlcmUgaXMgaWYgd2UgaGFkIGhpdCBhIHN1cGVycGFnZSAq
LwogICAgIGVsc2UgaWYgKCBpc19lcHRlX3N1cGVycGFnZSgmZSkgKQo=

--=separator
Content-Type: application/octet-stream; name="xsa328/xsa328-4.11-1.patch"
Content-Disposition: attachment; filename="xsa328/xsa328-4.11-1.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiB4ODYvRVBUOiBlcHRfc2V0X21pZGRsZV9lbnRyeSgpIHJlbGF0ZWQgYWRq
dXN0bWVudHMKCmVwdF9zcGxpdF9zdXBlcl9wYWdlKCkgd2FudHMgdG8gZnVy
dGhlciBtb2RpZnkgdGhlIG5ld2x5IGFsbG9jYXRlZAp0YWJsZSwgc28gaGF2
ZSBlcHRfc2V0X21pZGRsZV9lbnRyeSgpIHJldHVybiB0aGUgbWFwcGVkIHBv
aW50ZXIgcmF0aGVyCnRoYW4gdGVhcmluZyBpdCBkb3duIGFuZCB0aGVuIGdl
dHRpbmcgcmUtZXN0YWJsaXNoZWQgcmlnaHQgYWdhaW4uCgpTaW1pbGFybHkg
ZXB0X25leHRfbGV2ZWwoKSB3YW50cyB0byBoYW5kIGJhY2sgYSBtYXBwZWQg
cG9pbnRlciBvZgp0aGUgbmV4dCBsZXZlbCBwYWdlLCBzbyByZS11c2UgdGhl
IG9uZSBlc3RhYmxpc2hlZCBieQplcHRfc2V0X21pZGRsZV9lbnRyeSgpIGlu
IGNhc2UgdGhhdCBwYXRoIHdhcyB0YWtlbi4KClB1bGwgdGhlIHNldHRpbmcg
b2Ygc3VwcHJlc3NfdmUgYWhlYWQgb2YgaW5zZXJ0aW9uIGludG8gdGhlIGhp
Z2hlciBsZXZlbAp0YWJsZSwgYW5kIGRvbid0IGhhdmUgZXB0X3NwbGl0X3N1
cGVyX3BhZ2UoKSBzZXQgdGhlIGZpZWxkIGEgMm5kIHRpbWUuCgpUaGlzIGlz
IHBhcnQgb2YgWFNBLTMyOC4KClNpZ25lZC1vZmYtYnk6IEphbiBCZXVsaWNo
IDxqYmV1bGljaEBzdXNlLmNvbT4KCi0tLSBhL3hlbi9hcmNoL3g4Ni9tbS9w
Mm0tZXB0LmMKKysrIGIveGVuL2FyY2gveDg2L21tL3AybS1lcHQuYwpAQCAt
MjI4LDggKzIyOCw5IEBAIHN0YXRpYyB2b2lkIGVwdF9wMm1fdHlwZV90b19m
bGFncyhzdHJ1Y3QKICNkZWZpbmUgR1VFU1RfVEFCTEVfU1VQRVJfUEFHRSAg
MgogI2RlZmluZSBHVUVTVF9UQUJMRV9QT0RfUEFHRSAgICAzCiAKLS8qIEZp
bGwgaW4gbWlkZGxlIGxldmVscyBvZiBlcHQgdGFibGUgKi8KLXN0YXRpYyBp
bnQgZXB0X3NldF9taWRkbGVfZW50cnkoc3RydWN0IHAybV9kb21haW4gKnAy
bSwgZXB0X2VudHJ5X3QgKmVwdF9lbnRyeSkKKy8qIEZpbGwgaW4gbWlkZGxl
IGxldmVsIG9mIGVwdCB0YWJsZTsgcmV0dXJuIHBvaW50ZXIgdG8gbWFwcGVk
IG5ldyB0YWJsZS4gKi8KK3N0YXRpYyBlcHRfZW50cnlfdCAqZXB0X3NldF9t
aWRkbGVfZW50cnkoc3RydWN0IHAybV9kb21haW4gKnAybSwKKyAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgZXB0X2VudHJ5X3Qg
KmVwdF9lbnRyeSkKIHsKICAgICBtZm5fdCBtZm47CiAgICAgZXB0X2VudHJ5
X3QgKnRhYmxlOwpAQCAtMjM3LDcgKzIzOCwxMiBAQCBzdGF0aWMgaW50IGVw
dF9zZXRfbWlkZGxlX2VudHJ5KHN0cnVjdCBwCiAKICAgICBtZm4gPSBwMm1f
YWxsb2NfcHRwKHAybSwgMCk7CiAgICAgaWYgKCBtZm5fZXEobWZuLCBJTlZB
TElEX01GTikgKQotICAgICAgICByZXR1cm4gMDsKKyAgICAgICAgcmV0dXJu
IE5VTEw7CisKKyAgICB0YWJsZSA9IG1hcF9kb21haW5fcGFnZShtZm4pOwor
CisgICAgZm9yICggaSA9IDA7IGkgPCBFUFRfUEFHRVRBQkxFX0VOVFJJRVM7
IGkrKyApCisgICAgICAgIHRhYmxlW2ldLnN1cHByZXNzX3ZlID0gMTsKIAog
ICAgIGVwdF9lbnRyeS0+ZXB0ZSA9IDA7CiAgICAgZXB0X2VudHJ5LT5tZm4g
PSBtZm5feChtZm4pOwpAQCAtMjQ5LDE0ICsyNTUsNyBAQCBzdGF0aWMgaW50
IGVwdF9zZXRfbWlkZGxlX2VudHJ5KHN0cnVjdCBwCiAKICAgICBlcHRfZW50
cnktPnN1cHByZXNzX3ZlID0gMTsKIAotICAgIHRhYmxlID0gbWFwX2RvbWFp
bl9wYWdlKG1mbik7Ci0KLSAgICBmb3IgKCBpID0gMDsgaSA8IEVQVF9QQUdF
VEFCTEVfRU5UUklFUzsgaSsrICkKLSAgICAgICAgdGFibGVbaV0uc3VwcHJl
c3NfdmUgPSAxOwotCi0gICAgdW5tYXBfZG9tYWluX3BhZ2UodGFibGUpOwot
Ci0gICAgcmV0dXJuIDE7CisgICAgcmV0dXJuIHRhYmxlOwogfQogCiAvKiBm
cmVlIGVwdCBzdWIgdHJlZSBiZWhpbmQgYW4gZW50cnkgKi8KQEAgLTI5NCwx
MCArMjkzLDEwIEBAIHN0YXRpYyBib29sX3QgZXB0X3NwbGl0X3N1cGVyX3Bh
Z2Uoc3RydWMKIAogICAgIEFTU0VSVChpc19lcHRlX3N1cGVycGFnZShlcHRf
ZW50cnkpKTsKIAotICAgIGlmICggIWVwdF9zZXRfbWlkZGxlX2VudHJ5KHAy
bSwgJm5ld19lcHQpICkKKyAgICB0YWJsZSA9IGVwdF9zZXRfbWlkZGxlX2Vu
dHJ5KHAybSwgJm5ld19lcHQpOworICAgIGlmICggIXRhYmxlICkKICAgICAg
ICAgcmV0dXJuIDA7CiAKLSAgICB0YWJsZSA9IG1hcF9kb21haW5fcGFnZShf
bWZuKG5ld19lcHQubWZuKSk7CiAgICAgdHJ1bmsgPSAxVUwgPDwgKChsZXZl
bCAtIDEpICogRVBUX1RBQkxFX09SREVSKTsKIAogICAgIGZvciAoIGkgPSAw
OyBpIDwgRVBUX1BBR0VUQUJMRV9FTlRSSUVTOyBpKysgKQpAQCAtMzA4LDcg
KzMwNyw2IEBAIHN0YXRpYyBib29sX3QgZXB0X3NwbGl0X3N1cGVyX3BhZ2Uo
c3RydWMKICAgICAgICAgZXB0ZS0+c3AgPSAobGV2ZWwgPiAxKTsKICAgICAg
ICAgZXB0ZS0+bWZuICs9IGkgKiB0cnVuazsKICAgICAgICAgZXB0ZS0+c25w
ID0gKGlvbW11X2VuYWJsZWQgJiYgaW9tbXVfc25vb3ApOwotICAgICAgICBl
cHRlLT5zdXBwcmVzc192ZSA9IDE7CiAKICAgICAgICAgZXB0X3AybV90eXBl
X3RvX2ZsYWdzKHAybSwgZXB0ZSwgZXB0ZS0+c2FfcDJtdCwgZXB0ZS0+YWNj
ZXNzKTsKIApAQCAtMzQ3LDggKzM0NSw3IEBAIHN0YXRpYyBpbnQgZXB0X25l
eHRfbGV2ZWwoc3RydWN0IHAybV9kb20KICAgICAgICAgICAgICAgICAgICAg
ICAgICAgZXB0X2VudHJ5X3QgKip0YWJsZSwgdW5zaWduZWQgbG9uZyAqZ2Zu
X3JlbWFpbmRlciwKICAgICAgICAgICAgICAgICAgICAgICAgICAgaW50IG5l
eHRfbGV2ZWwpCiB7Ci0gICAgdW5zaWduZWQgbG9uZyBtZm47Ci0gICAgZXB0
X2VudHJ5X3QgKmVwdF9lbnRyeSwgZTsKKyAgICBlcHRfZW50cnlfdCAqZXB0
X2VudHJ5LCAqbmV4dCA9IE5VTEwsIGU7CiAgICAgdTMyIHNoaWZ0LCBpbmRl
eDsKIAogICAgIHNoaWZ0ID0gbmV4dF9sZXZlbCAqIEVQVF9UQUJMRV9PUkRF
UjsKQEAgLTM3MywxOSArMzcwLDE3IEBAIHN0YXRpYyBpbnQgZXB0X25leHRf
bGV2ZWwoc3RydWN0IHAybV9kb20KICAgICAgICAgaWYgKCByZWFkX29ubHkg
KQogICAgICAgICAgICAgcmV0dXJuIEdVRVNUX1RBQkxFX01BUF9GQUlMRUQ7
CiAKLSAgICAgICAgaWYgKCAhZXB0X3NldF9taWRkbGVfZW50cnkocDJtLCBl
cHRfZW50cnkpICkKKyAgICAgICAgbmV4dCA9IGVwdF9zZXRfbWlkZGxlX2Vu
dHJ5KHAybSwgZXB0X2VudHJ5KTsKKyAgICAgICAgaWYgKCAhbmV4dCApCiAg
ICAgICAgICAgICByZXR1cm4gR1VFU1RfVEFCTEVfTUFQX0ZBSUxFRDsKLSAg
ICAgICAgZWxzZQotICAgICAgICAgICAgZSA9IGF0b21pY19yZWFkX2VwdF9l
bnRyeShlcHRfZW50cnkpOyAvKiBSZWZyZXNoICovCisgICAgICAgIC8qIGUg
aXMgbm93IHN0YWxlIGFuZCBoZW5jZSBtYXkgbm90IGJlIHVzZWQgYW55bW9y
ZSBiZWxvdy4gKi8KICAgICB9Ci0KICAgICAvKiBUaGUgb25seSB0aW1lIHNw
IHdvdWxkIGJlIHNldCBoZXJlIGlzIGlmIHdlIGhhZCBoaXQgYSBzdXBlcnBh
Z2UgKi8KLSAgICBpZiAoIGlzX2VwdGVfc3VwZXJwYWdlKCZlKSApCisgICAg
ZWxzZSBpZiAoIGlzX2VwdGVfc3VwZXJwYWdlKCZlKSApCiAgICAgICAgIHJl
dHVybiBHVUVTVF9UQUJMRV9TVVBFUl9QQUdFOwogCi0gICAgbWZuID0gZS5t
Zm47CiAgICAgdW5tYXBfZG9tYWluX3BhZ2UoKnRhYmxlKTsKLSAgICAqdGFi
bGUgPSBtYXBfZG9tYWluX3BhZ2UoX21mbihtZm4pKTsKKyAgICAqdGFibGUg
PSBuZXh0ID86IG1hcF9kb21haW5fcGFnZShfbWZuKGUubWZuKSk7CiAgICAg
Kmdmbl9yZW1haW5kZXIgJj0gKDFVTCA8PCBzaGlmdCkgLSAxOwogICAgIHJl
dHVybiBHVUVTVF9UQUJMRV9OT1JNQUxfUEFHRTsKIH0K

--=separator
Content-Type: application/octet-stream; name="xsa328/xsa328-4.11-2.patch"
Content-Disposition: attachment; filename="xsa328/xsa328-4.11-2.patch"
Content-Transfer-Encoding: base64

RnJvbTogPHNlY3VyaXR5QHhlbnByb2plY3Qub3JnPgpTdWJqZWN0OiB4ODYv
ZXB0OiBhdG9taWNhbGx5IG1vZGlmeSBlbnRyaWVzIGluIGVwdF9uZXh0X2xl
dmVsCgplcHRfbmV4dF9sZXZlbCB3YXMgcGFzc2luZyBhIGxpdmUgUFRFIHBv
aW50ZXIgdG8gZXB0X3NldF9taWRkbGVfZW50cnksCndoaWNoIHdhcyB0aGVu
IG1vZGlmaWVkIHdpdGhvdXQgdGFraW5nIGludG8gYWNjb3VudCB0aGF0IHRo
ZSBQVEUgY291bGQKYmUgcGFydCBvZiBhIGxpdmUgRVBUIHRhYmxlLiBUaGlz
IHdhc24ndCBhIHNlY3VyaXR5IGlzc3VlIGJlY2F1c2UgdGhlCnBhZ2VzIHJl
dHVybmVkIGJ5IHAybV9hbGxvY19wdHAgYXJlIHplcm9lZCwgc28gYWRkaW5n
IHN1Y2ggYW4gZW50cnkKYmVmb3JlIGFjdHVhbGx5IGluaXRpYWxpemluZyBp
dCBkaWRuJ3QgYWxsb3cgYSBndWVzdCB0byBhY2Nlc3MKcGh5c2ljYWwgbWVt
b3J5IGFkZHJlc3NlcyBpdCB3YXNuJ3Qgc3VwcG9zZWQgdG8gYWNjZXNzLgoK
VGhpcyBpcyBwYXJ0IG9mIFhTQS0zMjguCgpSZXZpZXdlZC1ieTogSmFuIEJl
dWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgoKLS0tIGEveGVuL2FyY2gveDg2
L21tL3AybS1lcHQuYworKysgYi94ZW4vYXJjaC94ODYvbW0vcDJtLWVwdC5j
CkBAIC0zNDgsNiArMzQ4LDggQEAgc3RhdGljIGludCBlcHRfbmV4dF9sZXZl
bChzdHJ1Y3QgcDJtX2RvbQogICAgIGVwdF9lbnRyeV90ICplcHRfZW50cnks
ICpuZXh0ID0gTlVMTCwgZTsKICAgICB1MzIgc2hpZnQsIGluZGV4OwogCisg
ICAgQVNTRVJUKG5leHRfbGV2ZWwpOworCiAgICAgc2hpZnQgPSBuZXh0X2xl
dmVsICogRVBUX1RBQkxFX09SREVSOwogCiAgICAgaW5kZXggPSAqZ2ZuX3Jl
bWFpbmRlciA+PiBzaGlmdDsKQEAgLTM2NCwxNiArMzY2LDIwIEBAIHN0YXRp
YyBpbnQgZXB0X25leHRfbGV2ZWwoc3RydWN0IHAybV9kb20KIAogICAgIGlm
ICggIWlzX2VwdGVfcHJlc2VudCgmZSkgKQogICAgIHsKKyAgICAgICAgaW50
IHJjOworCiAgICAgICAgIGlmICggZS5zYV9wMm10ID09IHAybV9wb3B1bGF0
ZV9vbl9kZW1hbmQgKQogICAgICAgICAgICAgcmV0dXJuIEdVRVNUX1RBQkxF
X1BPRF9QQUdFOwogCiAgICAgICAgIGlmICggcmVhZF9vbmx5ICkKICAgICAg
ICAgICAgIHJldHVybiBHVUVTVF9UQUJMRV9NQVBfRkFJTEVEOwogCi0gICAg
ICAgIG5leHQgPSBlcHRfc2V0X21pZGRsZV9lbnRyeShwMm0sIGVwdF9lbnRy
eSk7CisgICAgICAgIG5leHQgPSBlcHRfc2V0X21pZGRsZV9lbnRyeShwMm0s
ICZlKTsKICAgICAgICAgaWYgKCAhbmV4dCApCiAgICAgICAgICAgICByZXR1
cm4gR1VFU1RfVEFCTEVfTUFQX0ZBSUxFRDsKLSAgICAgICAgLyogZSBpcyBu
b3cgc3RhbGUgYW5kIGhlbmNlIG1heSBub3QgYmUgdXNlZCBhbnltb3JlIGJl
bG93LiAqLworCisgICAgICAgIHJjID0gYXRvbWljX3dyaXRlX2VwdF9lbnRy
eShlcHRfZW50cnksIGUsIG5leHRfbGV2ZWwpOworICAgICAgICBBU1NFUlQo
cmMgPT0gMCk7CiAgICAgfQogICAgIC8qIFRoZSBvbmx5IHRpbWUgc3Agd291
bGQgYmUgc2V0IGhlcmUgaXMgaWYgd2UgaGFkIGhpdCBhIHN1cGVycGFnZSAq
LwogICAgIGVsc2UgaWYgKCBpc19lcHRlX3N1cGVycGFnZSgmZSkgKQo=

--=separator
Content-Type: application/octet-stream; name="xsa328/xsa328-4.12-1.patch"
Content-Disposition: attachment; filename="xsa328/xsa328-4.12-1.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiB4ODYvRVBUOiBlcHRfc2V0X21pZGRsZV9lbnRyeSgpIHJlbGF0ZWQgYWRq
dXN0bWVudHMKCmVwdF9zcGxpdF9zdXBlcl9wYWdlKCkgd2FudHMgdG8gZnVy
dGhlciBtb2RpZnkgdGhlIG5ld2x5IGFsbG9jYXRlZAp0YWJsZSwgc28gaGF2
ZSBlcHRfc2V0X21pZGRsZV9lbnRyeSgpIHJldHVybiB0aGUgbWFwcGVkIHBv
aW50ZXIgcmF0aGVyCnRoYW4gdGVhcmluZyBpdCBkb3duIGFuZCB0aGVuIGdl
dHRpbmcgcmUtZXN0YWJsaXNoZWQgcmlnaHQgYWdhaW4uCgpTaW1pbGFybHkg
ZXB0X25leHRfbGV2ZWwoKSB3YW50cyB0byBoYW5kIGJhY2sgYSBtYXBwZWQg
cG9pbnRlciBvZgp0aGUgbmV4dCBsZXZlbCBwYWdlLCBzbyByZS11c2UgdGhl
IG9uZSBlc3RhYmxpc2hlZCBieQplcHRfc2V0X21pZGRsZV9lbnRyeSgpIGlu
IGNhc2UgdGhhdCBwYXRoIHdhcyB0YWtlbi4KClB1bGwgdGhlIHNldHRpbmcg
b2Ygc3VwcHJlc3NfdmUgYWhlYWQgb2YgaW5zZXJ0aW9uIGludG8gdGhlIGhp
Z2hlciBsZXZlbAp0YWJsZSwgYW5kIGRvbid0IGhhdmUgZXB0X3NwbGl0X3N1
cGVyX3BhZ2UoKSBzZXQgdGhlIGZpZWxkIGEgMm5kIHRpbWUuCgpUaGlzIGlz
IHBhcnQgb2YgWFNBLTMyOC4KClNpZ25lZC1vZmYtYnk6IEphbiBCZXVsaWNo
IDxqYmV1bGljaEBzdXNlLmNvbT4KCi0tLSBhL3hlbi9hcmNoL3g4Ni9tbS9w
Mm0tZXB0LmMKKysrIGIveGVuL2FyY2gveDg2L21tL3AybS1lcHQuYwpAQCAt
MTg3LDggKzE4Nyw5IEBAIHN0YXRpYyB2b2lkIGVwdF9wMm1fdHlwZV90b19m
bGFncyhzdHJ1Y3QKICNkZWZpbmUgR1VFU1RfVEFCTEVfU1VQRVJfUEFHRSAg
MgogI2RlZmluZSBHVUVTVF9UQUJMRV9QT0RfUEFHRSAgICAzCiAKLS8qIEZp
bGwgaW4gbWlkZGxlIGxldmVscyBvZiBlcHQgdGFibGUgKi8KLXN0YXRpYyBp
bnQgZXB0X3NldF9taWRkbGVfZW50cnkoc3RydWN0IHAybV9kb21haW4gKnAy
bSwgZXB0X2VudHJ5X3QgKmVwdF9lbnRyeSkKKy8qIEZpbGwgaW4gbWlkZGxl
IGxldmVsIG9mIGVwdCB0YWJsZTsgcmV0dXJuIHBvaW50ZXIgdG8gbWFwcGVk
IG5ldyB0YWJsZS4gKi8KK3N0YXRpYyBlcHRfZW50cnlfdCAqZXB0X3NldF9t
aWRkbGVfZW50cnkoc3RydWN0IHAybV9kb21haW4gKnAybSwKKyAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgZXB0X2VudHJ5X3Qg
KmVwdF9lbnRyeSkKIHsKICAgICBtZm5fdCBtZm47CiAgICAgZXB0X2VudHJ5
X3QgKnRhYmxlOwpAQCAtMTk2LDcgKzE5NywxMiBAQCBzdGF0aWMgaW50IGVw
dF9zZXRfbWlkZGxlX2VudHJ5KHN0cnVjdCBwCiAKICAgICBtZm4gPSBwMm1f
YWxsb2NfcHRwKHAybSwgMCk7CiAgICAgaWYgKCBtZm5fZXEobWZuLCBJTlZB
TElEX01GTikgKQotICAgICAgICByZXR1cm4gMDsKKyAgICAgICAgcmV0dXJu
IE5VTEw7CisKKyAgICB0YWJsZSA9IG1hcF9kb21haW5fcGFnZShtZm4pOwor
CisgICAgZm9yICggaSA9IDA7IGkgPCBFUFRfUEFHRVRBQkxFX0VOVFJJRVM7
IGkrKyApCisgICAgICAgIHRhYmxlW2ldLnN1cHByZXNzX3ZlID0gMTsKIAog
ICAgIGVwdF9lbnRyeS0+ZXB0ZSA9IDA7CiAgICAgZXB0X2VudHJ5LT5tZm4g
PSBtZm5feChtZm4pOwpAQCAtMjA4LDE0ICsyMTQsNyBAQCBzdGF0aWMgaW50
IGVwdF9zZXRfbWlkZGxlX2VudHJ5KHN0cnVjdCBwCiAKICAgICBlcHRfZW50
cnktPnN1cHByZXNzX3ZlID0gMTsKIAotICAgIHRhYmxlID0gbWFwX2RvbWFp
bl9wYWdlKG1mbik7Ci0KLSAgICBmb3IgKCBpID0gMDsgaSA8IEVQVF9QQUdF
VEFCTEVfRU5UUklFUzsgaSsrICkKLSAgICAgICAgdGFibGVbaV0uc3VwcHJl
c3NfdmUgPSAxOwotCi0gICAgdW5tYXBfZG9tYWluX3BhZ2UodGFibGUpOwot
Ci0gICAgcmV0dXJuIDE7CisgICAgcmV0dXJuIHRhYmxlOwogfQogCiAvKiBm
cmVlIGVwdCBzdWIgdHJlZSBiZWhpbmQgYW4gZW50cnkgKi8KQEAgLTI1Mywx
MCArMjUyLDEwIEBAIHN0YXRpYyBib29sX3QgZXB0X3NwbGl0X3N1cGVyX3Bh
Z2Uoc3RydWMKIAogICAgIEFTU0VSVChpc19lcHRlX3N1cGVycGFnZShlcHRf
ZW50cnkpKTsKIAotICAgIGlmICggIWVwdF9zZXRfbWlkZGxlX2VudHJ5KHAy
bSwgJm5ld19lcHQpICkKKyAgICB0YWJsZSA9IGVwdF9zZXRfbWlkZGxlX2Vu
dHJ5KHAybSwgJm5ld19lcHQpOworICAgIGlmICggIXRhYmxlICkKICAgICAg
ICAgcmV0dXJuIDA7CiAKLSAgICB0YWJsZSA9IG1hcF9kb21haW5fcGFnZShf
bWZuKG5ld19lcHQubWZuKSk7CiAgICAgdHJ1bmsgPSAxVUwgPDwgKChsZXZl
bCAtIDEpICogRVBUX1RBQkxFX09SREVSKTsKIAogICAgIGZvciAoIGkgPSAw
OyBpIDwgRVBUX1BBR0VUQUJMRV9FTlRSSUVTOyBpKysgKQpAQCAtMjY3LDcg
KzI2Niw2IEBAIHN0YXRpYyBib29sX3QgZXB0X3NwbGl0X3N1cGVyX3BhZ2Uo
c3RydWMKICAgICAgICAgZXB0ZS0+c3AgPSAobGV2ZWwgPiAxKTsKICAgICAg
ICAgZXB0ZS0+bWZuICs9IGkgKiB0cnVuazsKICAgICAgICAgZXB0ZS0+c25w
ID0gKGlvbW11X2VuYWJsZWQgJiYgaW9tbXVfc25vb3ApOwotICAgICAgICBl
cHRlLT5zdXBwcmVzc192ZSA9IDE7CiAKICAgICAgICAgZXB0X3AybV90eXBl
X3RvX2ZsYWdzKHAybSwgZXB0ZSwgZXB0ZS0+c2FfcDJtdCwgZXB0ZS0+YWNj
ZXNzKTsKIApAQCAtMzA2LDggKzMwNCw3IEBAIHN0YXRpYyBpbnQgZXB0X25l
eHRfbGV2ZWwoc3RydWN0IHAybV9kb20KICAgICAgICAgICAgICAgICAgICAg
ICAgICAgZXB0X2VudHJ5X3QgKip0YWJsZSwgdW5zaWduZWQgbG9uZyAqZ2Zu
X3JlbWFpbmRlciwKICAgICAgICAgICAgICAgICAgICAgICAgICAgaW50IG5l
eHRfbGV2ZWwpCiB7Ci0gICAgdW5zaWduZWQgbG9uZyBtZm47Ci0gICAgZXB0
X2VudHJ5X3QgKmVwdF9lbnRyeSwgZTsKKyAgICBlcHRfZW50cnlfdCAqZXB0
X2VudHJ5LCAqbmV4dCA9IE5VTEwsIGU7CiAgICAgdTMyIHNoaWZ0LCBpbmRl
eDsKIAogICAgIHNoaWZ0ID0gbmV4dF9sZXZlbCAqIEVQVF9UQUJMRV9PUkRF
UjsKQEAgLTMzMiwxOSArMzI5LDE3IEBAIHN0YXRpYyBpbnQgZXB0X25leHRf
bGV2ZWwoc3RydWN0IHAybV9kb20KICAgICAgICAgaWYgKCByZWFkX29ubHkg
KQogICAgICAgICAgICAgcmV0dXJuIEdVRVNUX1RBQkxFX01BUF9GQUlMRUQ7
CiAKLSAgICAgICAgaWYgKCAhZXB0X3NldF9taWRkbGVfZW50cnkocDJtLCBl
cHRfZW50cnkpICkKKyAgICAgICAgbmV4dCA9IGVwdF9zZXRfbWlkZGxlX2Vu
dHJ5KHAybSwgZXB0X2VudHJ5KTsKKyAgICAgICAgaWYgKCAhbmV4dCApCiAg
ICAgICAgICAgICByZXR1cm4gR1VFU1RfVEFCTEVfTUFQX0ZBSUxFRDsKLSAg
ICAgICAgZWxzZQotICAgICAgICAgICAgZSA9IGF0b21pY19yZWFkX2VwdF9l
bnRyeShlcHRfZW50cnkpOyAvKiBSZWZyZXNoICovCisgICAgICAgIC8qIGUg
aXMgbm93IHN0YWxlIGFuZCBoZW5jZSBtYXkgbm90IGJlIHVzZWQgYW55bW9y
ZSBiZWxvdy4gKi8KICAgICB9Ci0KICAgICAvKiBUaGUgb25seSB0aW1lIHNw
IHdvdWxkIGJlIHNldCBoZXJlIGlzIGlmIHdlIGhhZCBoaXQgYSBzdXBlcnBh
Z2UgKi8KLSAgICBpZiAoIGlzX2VwdGVfc3VwZXJwYWdlKCZlKSApCisgICAg
ZWxzZSBpZiAoIGlzX2VwdGVfc3VwZXJwYWdlKCZlKSApCiAgICAgICAgIHJl
dHVybiBHVUVTVF9UQUJMRV9TVVBFUl9QQUdFOwogCi0gICAgbWZuID0gZS5t
Zm47CiAgICAgdW5tYXBfZG9tYWluX3BhZ2UoKnRhYmxlKTsKLSAgICAqdGFi
bGUgPSBtYXBfZG9tYWluX3BhZ2UoX21mbihtZm4pKTsKKyAgICAqdGFibGUg
PSBuZXh0ID86IG1hcF9kb21haW5fcGFnZShfbWZuKGUubWZuKSk7CiAgICAg
Kmdmbl9yZW1haW5kZXIgJj0gKDFVTCA8PCBzaGlmdCkgLSAxOwogICAgIHJl
dHVybiBHVUVTVF9UQUJMRV9OT1JNQUxfUEFHRTsKIH0K

--=separator
Content-Type: application/octet-stream; name="xsa328/xsa328-4.12-2.patch"
Content-Disposition: attachment; filename="xsa328/xsa328-4.12-2.patch"
Content-Transfer-Encoding: base64

RnJvbTogPHNlY3VyaXR5QHhlbnByb2plY3Qub3JnPgpTdWJqZWN0OiB4ODYv
ZXB0OiBhdG9taWNhbGx5IG1vZGlmeSBlbnRyaWVzIGluIGVwdF9uZXh0X2xl
dmVsCgplcHRfbmV4dF9sZXZlbCB3YXMgcGFzc2luZyBhIGxpdmUgUFRFIHBv
aW50ZXIgdG8gZXB0X3NldF9taWRkbGVfZW50cnksCndoaWNoIHdhcyB0aGVu
IG1vZGlmaWVkIHdpdGhvdXQgdGFraW5nIGludG8gYWNjb3VudCB0aGF0IHRo
ZSBQVEUgY291bGQKYmUgcGFydCBvZiBhIGxpdmUgRVBUIHRhYmxlLiBUaGlz
IHdhc24ndCBhIHNlY3VyaXR5IGlzc3VlIGJlY2F1c2UgdGhlCnBhZ2VzIHJl
dHVybmVkIGJ5IHAybV9hbGxvY19wdHAgYXJlIHplcm9lZCwgc28gYWRkaW5n
IHN1Y2ggYW4gZW50cnkKYmVmb3JlIGFjdHVhbGx5IGluaXRpYWxpemluZyBp
dCBkaWRuJ3QgYWxsb3cgYSBndWVzdCB0byBhY2Nlc3MKcGh5c2ljYWwgbWVt
b3J5IGFkZHJlc3NlcyBpdCB3YXNuJ3Qgc3VwcG9zZWQgdG8gYWNjZXNzLgoK
VGhpcyBpcyBwYXJ0IG9mIFhTQS0zMjguCgpSZXZpZXdlZC1ieTogSmFuIEJl
dWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgoKLS0tIGEveGVuL2FyY2gveDg2
L21tL3AybS1lcHQuYworKysgYi94ZW4vYXJjaC94ODYvbW0vcDJtLWVwdC5j
CkBAIC0zMDcsNiArMzA3LDggQEAgc3RhdGljIGludCBlcHRfbmV4dF9sZXZl
bChzdHJ1Y3QgcDJtX2RvbQogICAgIGVwdF9lbnRyeV90ICplcHRfZW50cnks
ICpuZXh0ID0gTlVMTCwgZTsKICAgICB1MzIgc2hpZnQsIGluZGV4OwogCisg
ICAgQVNTRVJUKG5leHRfbGV2ZWwpOworCiAgICAgc2hpZnQgPSBuZXh0X2xl
dmVsICogRVBUX1RBQkxFX09SREVSOwogCiAgICAgaW5kZXggPSAqZ2ZuX3Jl
bWFpbmRlciA+PiBzaGlmdDsKQEAgLTMyMywxNiArMzI1LDIwIEBAIHN0YXRp
YyBpbnQgZXB0X25leHRfbGV2ZWwoc3RydWN0IHAybV9kb20KIAogICAgIGlm
ICggIWlzX2VwdGVfcHJlc2VudCgmZSkgKQogICAgIHsKKyAgICAgICAgaW50
IHJjOworCiAgICAgICAgIGlmICggZS5zYV9wMm10ID09IHAybV9wb3B1bGF0
ZV9vbl9kZW1hbmQgKQogICAgICAgICAgICAgcmV0dXJuIEdVRVNUX1RBQkxF
X1BPRF9QQUdFOwogCiAgICAgICAgIGlmICggcmVhZF9vbmx5ICkKICAgICAg
ICAgICAgIHJldHVybiBHVUVTVF9UQUJMRV9NQVBfRkFJTEVEOwogCi0gICAg
ICAgIG5leHQgPSBlcHRfc2V0X21pZGRsZV9lbnRyeShwMm0sIGVwdF9lbnRy
eSk7CisgICAgICAgIG5leHQgPSBlcHRfc2V0X21pZGRsZV9lbnRyeShwMm0s
ICZlKTsKICAgICAgICAgaWYgKCAhbmV4dCApCiAgICAgICAgICAgICByZXR1
cm4gR1VFU1RfVEFCTEVfTUFQX0ZBSUxFRDsKLSAgICAgICAgLyogZSBpcyBu
b3cgc3RhbGUgYW5kIGhlbmNlIG1heSBub3QgYmUgdXNlZCBhbnltb3JlIGJl
bG93LiAqLworCisgICAgICAgIHJjID0gYXRvbWljX3dyaXRlX2VwdF9lbnRy
eShwMm0sIGVwdF9lbnRyeSwgZSwgbmV4dF9sZXZlbCk7CisgICAgICAgIEFT
U0VSVChyYyA9PSAwKTsKICAgICB9CiAgICAgLyogVGhlIG9ubHkgdGltZSBz
cCB3b3VsZCBiZSBzZXQgaGVyZSBpcyBpZiB3ZSBoYWQgaGl0IGEgc3VwZXJw
YWdlICovCiAgICAgZWxzZSBpZiAoIGlzX2VwdGVfc3VwZXJwYWdlKCZlKSAp
Cg==

--=separator
Content-Type: application/octet-stream; name="xsa328/xsa328-4.13-1.patch"
Content-Disposition: attachment; filename="xsa328/xsa328-4.13-1.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiB4ODYvRVBUOiBlcHRfc2V0X21pZGRsZV9lbnRyeSgpIHJlbGF0ZWQgYWRq
dXN0bWVudHMKCmVwdF9zcGxpdF9zdXBlcl9wYWdlKCkgd2FudHMgdG8gZnVy
dGhlciBtb2RpZnkgdGhlIG5ld2x5IGFsbG9jYXRlZAp0YWJsZSwgc28gaGF2
ZSBlcHRfc2V0X21pZGRsZV9lbnRyeSgpIHJldHVybiB0aGUgbWFwcGVkIHBv
aW50ZXIgcmF0aGVyCnRoYW4gdGVhcmluZyBpdCBkb3duIGFuZCB0aGVuIGdl
dHRpbmcgcmUtZXN0YWJsaXNoZWQgcmlnaHQgYWdhaW4uCgpTaW1pbGFybHkg
ZXB0X25leHRfbGV2ZWwoKSB3YW50cyB0byBoYW5kIGJhY2sgYSBtYXBwZWQg
cG9pbnRlciBvZgp0aGUgbmV4dCBsZXZlbCBwYWdlLCBzbyByZS11c2UgdGhl
IG9uZSBlc3RhYmxpc2hlZCBieQplcHRfc2V0X21pZGRsZV9lbnRyeSgpIGlu
IGNhc2UgdGhhdCBwYXRoIHdhcyB0YWtlbi4KClB1bGwgdGhlIHNldHRpbmcg
b2Ygc3VwcHJlc3NfdmUgYWhlYWQgb2YgaW5zZXJ0aW9uIGludG8gdGhlIGhp
Z2hlciBsZXZlbAp0YWJsZSwgYW5kIGRvbid0IGhhdmUgZXB0X3NwbGl0X3N1
cGVyX3BhZ2UoKSBzZXQgdGhlIGZpZWxkIGEgMm5kIHRpbWUuCgpUaGlzIGlz
IHBhcnQgb2YgWFNBLTMyOC4KClNpZ25lZC1vZmYtYnk6IEphbiBCZXVsaWNo
IDxqYmV1bGljaEBzdXNlLmNvbT4KCi0tLSBhL3hlbi9hcmNoL3g4Ni9tbS9w
Mm0tZXB0LmMKKysrIGIveGVuL2FyY2gveDg2L21tL3AybS1lcHQuYwpAQCAt
MTg3LDggKzE4Nyw5IEBAIHN0YXRpYyB2b2lkIGVwdF9wMm1fdHlwZV90b19m
bGFncyhzdHJ1Y3QKICNkZWZpbmUgR1VFU1RfVEFCTEVfU1VQRVJfUEFHRSAg
MgogI2RlZmluZSBHVUVTVF9UQUJMRV9QT0RfUEFHRSAgICAzCiAKLS8qIEZp
bGwgaW4gbWlkZGxlIGxldmVscyBvZiBlcHQgdGFibGUgKi8KLXN0YXRpYyBp
bnQgZXB0X3NldF9taWRkbGVfZW50cnkoc3RydWN0IHAybV9kb21haW4gKnAy
bSwgZXB0X2VudHJ5X3QgKmVwdF9lbnRyeSkKKy8qIEZpbGwgaW4gbWlkZGxl
IGxldmVsIG9mIGVwdCB0YWJsZTsgcmV0dXJuIHBvaW50ZXIgdG8gbWFwcGVk
IG5ldyB0YWJsZS4gKi8KK3N0YXRpYyBlcHRfZW50cnlfdCAqZXB0X3NldF9t
aWRkbGVfZW50cnkoc3RydWN0IHAybV9kb21haW4gKnAybSwKKyAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgZXB0X2VudHJ5X3Qg
KmVwdF9lbnRyeSkKIHsKICAgICBtZm5fdCBtZm47CiAgICAgZXB0X2VudHJ5
X3QgKnRhYmxlOwpAQCAtMTk2LDcgKzE5NywxMiBAQCBzdGF0aWMgaW50IGVw
dF9zZXRfbWlkZGxlX2VudHJ5KHN0cnVjdCBwCiAKICAgICBtZm4gPSBwMm1f
YWxsb2NfcHRwKHAybSwgMCk7CiAgICAgaWYgKCBtZm5fZXEobWZuLCBJTlZB
TElEX01GTikgKQotICAgICAgICByZXR1cm4gMDsKKyAgICAgICAgcmV0dXJu
IE5VTEw7CisKKyAgICB0YWJsZSA9IG1hcF9kb21haW5fcGFnZShtZm4pOwor
CisgICAgZm9yICggaSA9IDA7IGkgPCBFUFRfUEFHRVRBQkxFX0VOVFJJRVM7
IGkrKyApCisgICAgICAgIHRhYmxlW2ldLnN1cHByZXNzX3ZlID0gMTsKIAog
ICAgIGVwdF9lbnRyeS0+ZXB0ZSA9IDA7CiAgICAgZXB0X2VudHJ5LT5tZm4g
PSBtZm5feChtZm4pOwpAQCAtMjA4LDE0ICsyMTQsNyBAQCBzdGF0aWMgaW50
IGVwdF9zZXRfbWlkZGxlX2VudHJ5KHN0cnVjdCBwCiAKICAgICBlcHRfZW50
cnktPnN1cHByZXNzX3ZlID0gMTsKIAotICAgIHRhYmxlID0gbWFwX2RvbWFp
bl9wYWdlKG1mbik7Ci0KLSAgICBmb3IgKCBpID0gMDsgaSA8IEVQVF9QQUdF
VEFCTEVfRU5UUklFUzsgaSsrICkKLSAgICAgICAgdGFibGVbaV0uc3VwcHJl
c3NfdmUgPSAxOwotCi0gICAgdW5tYXBfZG9tYWluX3BhZ2UodGFibGUpOwot
Ci0gICAgcmV0dXJuIDE7CisgICAgcmV0dXJuIHRhYmxlOwogfQogCiAvKiBm
cmVlIGVwdCBzdWIgdHJlZSBiZWhpbmQgYW4gZW50cnkgKi8KQEAgLTI1Mywx
MCArMjUyLDEwIEBAIHN0YXRpYyBib29sX3QgZXB0X3NwbGl0X3N1cGVyX3Bh
Z2Uoc3RydWMKIAogICAgIEFTU0VSVChpc19lcHRlX3N1cGVycGFnZShlcHRf
ZW50cnkpKTsKIAotICAgIGlmICggIWVwdF9zZXRfbWlkZGxlX2VudHJ5KHAy
bSwgJm5ld19lcHQpICkKKyAgICB0YWJsZSA9IGVwdF9zZXRfbWlkZGxlX2Vu
dHJ5KHAybSwgJm5ld19lcHQpOworICAgIGlmICggIXRhYmxlICkKICAgICAg
ICAgcmV0dXJuIDA7CiAKLSAgICB0YWJsZSA9IG1hcF9kb21haW5fcGFnZShf
bWZuKG5ld19lcHQubWZuKSk7CiAgICAgdHJ1bmsgPSAxVUwgPDwgKChsZXZl
bCAtIDEpICogRVBUX1RBQkxFX09SREVSKTsKIAogICAgIGZvciAoIGkgPSAw
OyBpIDwgRVBUX1BBR0VUQUJMRV9FTlRSSUVTOyBpKysgKQpAQCAtMjY3LDcg
KzI2Niw2IEBAIHN0YXRpYyBib29sX3QgZXB0X3NwbGl0X3N1cGVyX3BhZ2Uo
c3RydWMKICAgICAgICAgZXB0ZS0+c3AgPSAobGV2ZWwgPiAxKTsKICAgICAg
ICAgZXB0ZS0+bWZuICs9IGkgKiB0cnVuazsKICAgICAgICAgZXB0ZS0+c25w
ID0gaXNfaW9tbXVfZW5hYmxlZChwMm0tPmRvbWFpbikgJiYgaW9tbXVfc25v
b3A7Ci0gICAgICAgIGVwdGUtPnN1cHByZXNzX3ZlID0gMTsKIAogICAgICAg
ICBlcHRfcDJtX3R5cGVfdG9fZmxhZ3MocDJtLCBlcHRlLCBlcHRlLT5zYV9w
Mm10LCBlcHRlLT5hY2Nlc3MpOwogCkBAIC0zMDYsOCArMzA0LDcgQEAgc3Rh
dGljIGludCBlcHRfbmV4dF9sZXZlbChzdHJ1Y3QgcDJtX2RvbQogICAgICAg
ICAgICAgICAgICAgICAgICAgICBlcHRfZW50cnlfdCAqKnRhYmxlLCB1bnNp
Z25lZCBsb25nICpnZm5fcmVtYWluZGVyLAogICAgICAgICAgICAgICAgICAg
ICAgICAgICBpbnQgbmV4dF9sZXZlbCkKIHsKLSAgICB1bnNpZ25lZCBsb25n
IG1mbjsKLSAgICBlcHRfZW50cnlfdCAqZXB0X2VudHJ5LCBlOworICAgIGVw
dF9lbnRyeV90ICplcHRfZW50cnksICpuZXh0ID0gTlVMTCwgZTsKICAgICB1
MzIgc2hpZnQsIGluZGV4OwogCiAgICAgc2hpZnQgPSBuZXh0X2xldmVsICog
RVBUX1RBQkxFX09SREVSOwpAQCAtMzMyLDE5ICszMjksMTcgQEAgc3RhdGlj
IGludCBlcHRfbmV4dF9sZXZlbChzdHJ1Y3QgcDJtX2RvbQogICAgICAgICBp
ZiAoIHJlYWRfb25seSApCiAgICAgICAgICAgICByZXR1cm4gR1VFU1RfVEFC
TEVfTUFQX0ZBSUxFRDsKIAotICAgICAgICBpZiAoICFlcHRfc2V0X21pZGRs
ZV9lbnRyeShwMm0sIGVwdF9lbnRyeSkgKQorICAgICAgICBuZXh0ID0gZXB0
X3NldF9taWRkbGVfZW50cnkocDJtLCBlcHRfZW50cnkpOworICAgICAgICBp
ZiAoICFuZXh0ICkKICAgICAgICAgICAgIHJldHVybiBHVUVTVF9UQUJMRV9N
QVBfRkFJTEVEOwotICAgICAgICBlbHNlCi0gICAgICAgICAgICBlID0gYXRv
bWljX3JlYWRfZXB0X2VudHJ5KGVwdF9lbnRyeSk7IC8qIFJlZnJlc2ggKi8K
KyAgICAgICAgLyogZSBpcyBub3cgc3RhbGUgYW5kIGhlbmNlIG1heSBub3Qg
YmUgdXNlZCBhbnltb3JlIGJlbG93LiAqLwogICAgIH0KLQogICAgIC8qIFRo
ZSBvbmx5IHRpbWUgc3Agd291bGQgYmUgc2V0IGhlcmUgaXMgaWYgd2UgaGFk
IGhpdCBhIHN1cGVycGFnZSAqLwotICAgIGlmICggaXNfZXB0ZV9zdXBlcnBh
Z2UoJmUpICkKKyAgICBlbHNlIGlmICggaXNfZXB0ZV9zdXBlcnBhZ2UoJmUp
ICkKICAgICAgICAgcmV0dXJuIEdVRVNUX1RBQkxFX1NVUEVSX1BBR0U7CiAK
LSAgICBtZm4gPSBlLm1mbjsKICAgICB1bm1hcF9kb21haW5fcGFnZSgqdGFi
bGUpOwotICAgICp0YWJsZSA9IG1hcF9kb21haW5fcGFnZShfbWZuKG1mbikp
OworICAgICp0YWJsZSA9IG5leHQgPzogbWFwX2RvbWFpbl9wYWdlKF9tZm4o
ZS5tZm4pKTsKICAgICAqZ2ZuX3JlbWFpbmRlciAmPSAoMVVMIDw8IHNoaWZ0
KSAtIDE7CiAgICAgcmV0dXJuIEdVRVNUX1RBQkxFX05PUk1BTF9QQUdFOwog
fQo=

--=separator
Content-Type: application/octet-stream; name="xsa328/xsa328-4.13-2.patch"
Content-Disposition: attachment; filename="xsa328/xsa328-4.13-2.patch"
Content-Transfer-Encoding: base64

RnJvbTogPHNlY3VyaXR5QHhlbnByb2plY3Qub3JnPgpTdWJqZWN0OiB4ODYv
ZXB0OiBhdG9taWNhbGx5IG1vZGlmeSBlbnRyaWVzIGluIGVwdF9uZXh0X2xl
dmVsCgplcHRfbmV4dF9sZXZlbCB3YXMgcGFzc2luZyBhIGxpdmUgUFRFIHBv
aW50ZXIgdG8gZXB0X3NldF9taWRkbGVfZW50cnksCndoaWNoIHdhcyB0aGVu
IG1vZGlmaWVkIHdpdGhvdXQgdGFraW5nIGludG8gYWNjb3VudCB0aGF0IHRo
ZSBQVEUgY291bGQKYmUgcGFydCBvZiBhIGxpdmUgRVBUIHRhYmxlLiBUaGlz
IHdhc24ndCBhIHNlY3VyaXR5IGlzc3VlIGJlY2F1c2UgdGhlCnBhZ2VzIHJl
dHVybmVkIGJ5IHAybV9hbGxvY19wdHAgYXJlIHplcm9lZCwgc28gYWRkaW5n
IHN1Y2ggYW4gZW50cnkKYmVmb3JlIGFjdHVhbGx5IGluaXRpYWxpemluZyBp
dCBkaWRuJ3QgYWxsb3cgYSBndWVzdCB0byBhY2Nlc3MKcGh5c2ljYWwgbWVt
b3J5IGFkZHJlc3NlcyBpdCB3YXNuJ3Qgc3VwcG9zZWQgdG8gYWNjZXNzLgoK
VGhpcyBpcyBwYXJ0IG9mIFhTQS0zMjguCgpSZXZpZXdlZC1ieTogSmFuIEJl
dWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgoKLS0tIGEveGVuL2FyY2gveDg2
L21tL3AybS1lcHQuYworKysgYi94ZW4vYXJjaC94ODYvbW0vcDJtLWVwdC5j
CkBAIC0zMDcsNiArMzA3LDggQEAgc3RhdGljIGludCBlcHRfbmV4dF9sZXZl
bChzdHJ1Y3QgcDJtX2RvbQogICAgIGVwdF9lbnRyeV90ICplcHRfZW50cnks
ICpuZXh0ID0gTlVMTCwgZTsKICAgICB1MzIgc2hpZnQsIGluZGV4OwogCisg
ICAgQVNTRVJUKG5leHRfbGV2ZWwpOworCiAgICAgc2hpZnQgPSBuZXh0X2xl
dmVsICogRVBUX1RBQkxFX09SREVSOwogCiAgICAgaW5kZXggPSAqZ2ZuX3Jl
bWFpbmRlciA+PiBzaGlmdDsKQEAgLTMyMywxNiArMzI1LDIwIEBAIHN0YXRp
YyBpbnQgZXB0X25leHRfbGV2ZWwoc3RydWN0IHAybV9kb20KIAogICAgIGlm
ICggIWlzX2VwdGVfcHJlc2VudCgmZSkgKQogICAgIHsKKyAgICAgICAgaW50
IHJjOworCiAgICAgICAgIGlmICggZS5zYV9wMm10ID09IHAybV9wb3B1bGF0
ZV9vbl9kZW1hbmQgKQogICAgICAgICAgICAgcmV0dXJuIEdVRVNUX1RBQkxF
X1BPRF9QQUdFOwogCiAgICAgICAgIGlmICggcmVhZF9vbmx5ICkKICAgICAg
ICAgICAgIHJldHVybiBHVUVTVF9UQUJMRV9NQVBfRkFJTEVEOwogCi0gICAg
ICAgIG5leHQgPSBlcHRfc2V0X21pZGRsZV9lbnRyeShwMm0sIGVwdF9lbnRy
eSk7CisgICAgICAgIG5leHQgPSBlcHRfc2V0X21pZGRsZV9lbnRyeShwMm0s
ICZlKTsKICAgICAgICAgaWYgKCAhbmV4dCApCiAgICAgICAgICAgICByZXR1
cm4gR1VFU1RfVEFCTEVfTUFQX0ZBSUxFRDsKLSAgICAgICAgLyogZSBpcyBu
b3cgc3RhbGUgYW5kIGhlbmNlIG1heSBub3QgYmUgdXNlZCBhbnltb3JlIGJl
bG93LiAqLworCisgICAgICAgIHJjID0gYXRvbWljX3dyaXRlX2VwdF9lbnRy
eShwMm0sIGVwdF9lbnRyeSwgZSwgbmV4dF9sZXZlbCk7CisgICAgICAgIEFT
U0VSVChyYyA9PSAwKTsKICAgICB9CiAgICAgLyogVGhlIG9ubHkgdGltZSBz
cCB3b3VsZCBiZSBzZXQgaGVyZSBpcyBpZiB3ZSBoYWQgaGl0IGEgc3VwZXJw
YWdlICovCiAgICAgZWxzZSBpZiAoIGlzX2VwdGVfc3VwZXJwYWdlKCZlKSAp
Cg==

--=separator--


From xen-devel-bounces@lists.xenproject.org Tue Jul 07 12:54:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jul 2020 12:54:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jsn7A-0003AU-TK; Tue, 07 Jul 2020 12:54:24 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=F44P=AS=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jsn79-0003AP-RL
 for xen-devel@lists.xenproject.org; Tue, 07 Jul 2020 12:54:23 +0000
X-Inumbo-ID: f855265c-c050-11ea-8d65-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f855265c-c050-11ea-8d65-12813bfff9fa;
 Tue, 07 Jul 2020 12:54:18 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=+SmuHk3uT8hrmI1zcq3oe/8MttKUSLYMhTbnC7kszCg=; b=2rh6BbiCQ1Zf7MId2BcBc5i07
 LgMFuCPjQg0F/0/F+HDFWqL9xxYCUG7ju14HpCnqol5RvVuYz/nniqOb8YVbhlPGvPMnLuwWRAKrJ
 tOa+94msorxEoi0aqbEnYEue02aXguyI98HqI17/L0XKRVUhmTzyr/mBqeMfPrfJteDSA=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jsn74-0003U2-5f; Tue, 07 Jul 2020 12:54:18 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jsn72-0005qB-VW; Tue, 07 Jul 2020 12:54:17 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jsn72-0004zZ-Ux; Tue, 07 Jul 2020 12:54:16 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151696-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 151696: tolerable FAIL - PUSHED
X-Osstest-Failures: xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=f97f99c8d88ebc108f6adc3ba74e87d53ba57c70
X-Osstest-Versions-That: xen=be63d9d47f571a60d70f8fb630c03871312d9655
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 07 Jul 2020 12:54:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151696 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151696/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 151661
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 151661
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 151661
 test-armhf-armhf-xl-rtds     16 guest-start/debian.repeat    fail  like 151661
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 151661
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 151661
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 151661
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 151661
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 151661
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop             fail like 151661
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  f97f99c8d88ebc108f6adc3ba74e87d53ba57c70
baseline version:
 xen                  be63d9d47f571a60d70f8fb630c03871312d9655

Last test of basis   151661  2020-07-06 01:54:10 Z    1 days
Failing since        151684  2020-07-06 16:36:21 Z    0 days    2 attempts
Testing same since   151696  2020-07-07 03:11:56 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Wei Liu <wl@xen.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   be63d9d47f..f97f99c8d8  f97f99c8d88ebc108f6adc3ba74e87d53ba57c70 -> master


From xen-devel-bounces@lists.xenproject.org Tue Jul 07 12:57:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jul 2020 12:57:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jsn9n-0003U6-UQ; Tue, 07 Jul 2020 12:57:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=F44P=AS=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jsn9m-0003SE-U5
 for xen-devel@lists.xenproject.org; Tue, 07 Jul 2020 12:57:07 +0000
X-Inumbo-ID: 58cf91fc-c051-11ea-bb8b-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 58cf91fc-c051-11ea-bb8b-bc764e2007e4;
 Tue, 07 Jul 2020 12:57:00 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Message-Id:Subject:To:Sender:
 Reply-To:Cc:MIME-Version:Content-Type:Content-Transfer-Encoding:Content-ID:
 Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc
 :Resent-Message-ID:In-Reply-To:References:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=DGsvvD3cFzfl5LXDA6+aRnsCtQ0WzO0DQPO1kaFIrxs=; b=IsEYkJyOndyuhSX+T8vS6Ocl5R
 /h3haflXQflGfXK9DJKC4mjZbDDo/Rh0r9ENbdhiThnvvEH8PdiJ4Y0rIC/DA+BvrNCPz+zPflpw1
 ESXg9OWEhzVNnAJPtP4XvFw4sNvyLhyj8FFq+TVxgHFDK16VnBCBX5LPvJUmUar85Fhs=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jsn9f-0003Yh-Uo; Tue, 07 Jul 2020 12:57:00 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jsn9f-0005wT-Kj; Tue, 07 Jul 2020 12:56:59 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jsn9f-0007BX-K7; Tue, 07 Jul 2020 12:56:59 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Subject: [qemu-mainline bisection] complete
 test-amd64-amd64-xl-qemuu-debianhvm-amd64
Message-Id: <E1jsn9f-0007BX-K7@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 07 Jul 2020 12:56:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

branch xen-unstable
xenbranch xen-unstable
job test-amd64-amd64-xl-qemuu-debianhvm-amd64
testid debian-hvm-install

Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://git.qemu.org/qemu.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  qemuu git://git.qemu.org/qemu.git
  Bug introduced:  81cb05732efb36971901c515b007869cc1d3a532
  Bug not present: d6b78ac8ecf94f56dbfbecc23fb4365d8772a41a
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/151709/


  commit 81cb05732efb36971901c515b007869cc1d3a532
  Author: Markus Armbruster <armbru@redhat.com>
  Date:   Tue Jun 9 14:23:37 2020 +0200
  
      qdev: Assert devices are plugged into a bus that can take them
      
      This would have caught some of the bugs I just fixed.
      
      Signed-off-by: Markus Armbruster <armbru@redhat.com>
      Reviewed-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
      Message-Id: <20200609122339.937862-23-armbru@redhat.com>


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/qemu-mainline/test-amd64-amd64-xl-qemuu-debianhvm-amd64.debian-hvm-install.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/qemu-mainline/test-amd64-amd64-xl-qemuu-debianhvm-amd64.debian-hvm-install --summary-out=tmp/151709.bisection-summary --basis-template=151065 --blessings=real,real-bisect qemu-mainline test-amd64-amd64-xl-qemuu-debianhvm-amd64 debian-hvm-install
Searching for failure / basis pass:
 151685 fail [host=huxelrebe0] / 151149 [host=albana1] 151101 [host=elbling0] 151065 [host=godello1] 151047 [host=chardonnay1] 150970 [host=elbling1] 150930 [host=godello0] 150916 [host=chardonnay0] 150909 [host=fiano1] 150899 [host=fiano0] 150895 [host=debina1] 150831 [host=huxelrebe1] 150694 [host=pinot1] 150631 [host=italia0] 150608 [host=albana0] 150593 ok.
Failure / basis pass flights: 151685 / 150593
(tree with no url: minios)
Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://git.qemu.org/qemu.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git
Latest c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 627d1d6693b0594d257dbe1a3363a8d4bd4d8307 3c659044118e34603161457db9934a34f816d78b eb6490f544388dd24c0d054a96dd304bc7284450 88ab0c15525ced2eefe39220742efe4769089ad8 be63d9d47f571a60d70f8fb630c03871312d9655
Basis pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 568eee7cf319fa95183c8d3b5e8dcf6e078ab8b3 3c659044118e34603161457db9934a34f816d78b 4ec2a1f53e8aaa22924614b64dde97321126943e 2e3de6253422112ae43e608661ba94ea6b345694 1497e78068421d83956f8e82fb6e1bf1fc3b1199
Generating revisions with ./adhoc-revtuple-generator  git://xenbits.xen.org/linux-pvops.git#c3038e718a19fc596f7b1baba0f83d5146dc7784-c3038e718a19fc596f7b1baba0f83d5146dc7784 git://xenbits.xen.org/osstest/linux-firmware.git#c530a75c1e6a472b0eb9558310b518f0dfcd8860-c530a75c1e6a472b0eb9558310b518f0dfcd8860 git://xenbits.xen.org/osstest/ovmf.git#568eee7cf319fa95183c8d3b5e8dcf6e078ab8b3-627d1d6693b0594d257dbe1a3363a8d4bd4d8307 git://xenbits.xen.org/qemu-xen-traditional.git#3c659044118e34603161457db9934a34f816d78b-3c659044118e34603161457db9934a34f816d78b git://git.qemu.org/qemu.git#4ec2a1f53e8aaa22924614b64dde97321126943e-eb6490f544388dd24c0d054a96dd304bc7284450 git://xenbits.xen.org/osstest/seabios.git#2e3de6253422112ae43e608661ba94ea6b345694-88ab0c15525ced2eefe39220742efe4769089ad8 git://xenbits.xen.org/xen.git#1497e78068421d83956f8e82fb6e1bf1fc3b1199-be63d9d47f571a60d70f8fb630c03871312d9655
From git://cache:9419/git://xenbits.xen.org/xen
   be63d9d47f..f97f99c8d8  master     -> origin/master
   f97f99c8d8..3fdc211b01  staging    -> origin/staging
Use of uninitialized value $parents in array dereference at ./adhoc-revtuple-generator line 465.
Use of uninitialized value in concatenation (.) or string at ./adhoc-revtuple-generator line 465.
Use of uninitialized value $parents in array dereference at ./adhoc-revtuple-generator line 465.
Use of uninitialized value in concatenation (.) or string at ./adhoc-revtuple-generator line 465.
Use of uninitialized value $parents in array dereference at ./adhoc-revtuple-generator line 465.
Use of uninitialized value in concatenation (.) or string at ./adhoc-revtuple-generator line 465.
Loaded 43179 nodes in revision graph
Searching for test results:
 150585 [host=elbling0]
 150593 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 568eee7cf319fa95183c8d3b5e8dcf6e078ab8b3 3c659044118e34603161457db9934a34f816d78b 4ec2a1f53e8aaa22924614b64dde97321126943e 2e3de6253422112ae43e608661ba94ea6b345694 1497e78068421d83956f8e82fb6e1bf1fc3b1199
 150631 [host=italia0]
 150608 [host=albana0]
 150694 [host=pinot1]
 150831 [host=huxelrebe1]
 150909 [host=fiano1]
 150930 [host=godello0]
 150916 [host=chardonnay0]
 150895 [host=debina1]
 150899 [host=fiano0]
 150970 [host=elbling1]
 151047 [host=chardonnay1]
 151101 [host=elbling0]
 151065 [host=godello1]
 151149 [host=albana1]
 151221 fail irrelevant
 151175 fail irrelevant
 151241 fail irrelevant
 151286 fail irrelevant
 151269 fail irrelevant
 151328 fail irrelevant
 151304 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 322969adf1fb3d6cfbd613f35121315715aff2ed 3c659044118e34603161457db9934a34f816d78b 171199f56f5f9bdf1e5d670d09ef1351d8f01bae 2e3de6253422112ae43e608661ba94ea6b345694 fde76f895d0aa817a1207d844d793239c6639bc6
 151377 fail irrelevant
 151353 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1a992030522622c42aa063788b3276789c56c1e1 3c659044118e34603161457db9934a34f816d78b d4b78317b7cf8c0c635b70086503813f79ff21ec 2e3de6253422112ae43e608661ba94ea6b345694 fde76f895d0aa817a1207d844d793239c6639bc6
 151399 fail irrelevant
 151414 fail irrelevant
 151435 fail irrelevant
 151459 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 0f01cec52f4794777feb067e4fa0bfcedfdc124e 3c659044118e34603161457db9934a34f816d78b e7651153a8801dad6805d450ea8bef9b46c1adf5 88ab0c15525ced2eefe39220742efe4769089ad8 88cfd062e8318dfeb67c7d2eb50b6cd224b0738a
 151471 fail irrelevant
 151485 fail irrelevant
 151500 fail irrelevant
 151518 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 00217f1919270007d7a911f89b32e39b9dcaa907 3c659044118e34603161457db9934a34f816d78b fc1bff958998910ec8d25db86cd2f53ff125f7ab 88ab0c15525ced2eefe39220742efe4769089ad8 23ca7ec0ba620db52a646d80e22f9703a6589f66
 151547 fail irrelevant
 151598 fail irrelevant
 151577 fail irrelevant
 151683 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ca407c7246bf405da6d9b1b9d93e5e7f17b4b1f9 3c659044118e34603161457db9934a34f816d78b 250b1da35d579f42319af234f36207902ca4baa4 2e3de6253422112ae43e608661ba94ea6b345694 dde6174ada5280cd9a6396e3b12606360a0d29a3
 151659 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 239b50a863704f7960525799eda82de061c7c458 3c659044118e34603161457db9934a34f816d78b 3f429a3400822141651486193d6af625eeab05a5 2e3de6253422112ae43e608661ba94ea6b345694 71ca0e0ad000e690899936327eb09709ab182ade
 151674 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 567bc4b4ae8a975791382dd30ac413bc0d3ce88c 3c659044118e34603161457db9934a34f816d78b 3575b0aea983ad57804c9af739ed8ff7bc168393 2e3de6253422112ae43e608661ba94ea6b345694 b91825f628c9a62cf2a3a0d972ea81484a8b7fce
 151699 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 3c659044118e34603161457db9934a34f816d78b 007d1dbf72536ec1b847a944832e4de1546af2ac 2e3de6253422112ae43e608661ba94ea6b345694 1251402caf8685f45d9d580f01583370f7e2d272
 151660 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 58ae92a993687d913aa6dd00ef3497a1bc5f6c40 3c659044118e34603161457db9934a34f816d78b 54cdfe511219b8051046be55a6e156c4f08ff7ff 2e3de6253422112ae43e608661ba94ea6b345694 71ca0e0ad000e690899936327eb09709ab182ade
 151663 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 a2433243fbe471c250d7eddc2c7da325d91265fd 3c659044118e34603161457db9934a34f816d78b 6675a653d2e57ab09c32c0ea7b44a1d6c40a7f58 2e3de6253422112ae43e608661ba94ea6b345694 3625b04991b4d6affadd99d377ab84bac48dfff4
 151622 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 627d1d6693b0594d257dbe1a3363a8d4bd4d8307 3c659044118e34603161457db9934a34f816d78b 7b7515702012219410802a168ae4aa45b72a44df 88ab0c15525ced2eefe39220742efe4769089ad8 be63d9d47f571a60d70f8fb630c03871312d9655
 151693 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 3c659044118e34603161457db9934a34f816d78b 7d3660e79830a069f1848bb4fa1cdf8f666424fb 2e3de6253422112ae43e608661ba94ea6b345694 fec6a7af5c5760b9bccd9e7c3eaf29f0401af264
 151676 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 6ff7c838d09224dd4e4c9b5b93152d8db1b19740 3c659044118e34603161457db9934a34f816d78b 49ee11555262a256afec592dfed7c5902d5eefd2 2e3de6253422112ae43e608661ba94ea6b345694 726c78d14dfe6ec76f5e4c7756821a91f0a04b34
 151686 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ca407c7246bf405da6d9b1b9d93e5e7f17b4b1f9 3c659044118e34603161457db9934a34f816d78b 71b04329c4f7d5824a289ca5225e1883a278cf3b 2e3de6253422112ae43e608661ba94ea6b345694 e181db8ba4e0797b8f9b55996adfa71ffb5b4081
 151656 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 627d1d6693b0594d257dbe1a3363a8d4bd4d8307 3c659044118e34603161457db9934a34f816d78b eb6490f544388dd24c0d054a96dd304bc7284450 88ab0c15525ced2eefe39220742efe4769089ad8 be63d9d47f571a60d70f8fb630c03871312d9655
 151664 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 3c659044118e34603161457db9934a34f816d78b 53550e81e2cafe7c03a39526b95cd21b5194d9b1 2e3de6253422112ae43e608661ba94ea6b345694 3625b04991b4d6affadd99d377ab84bac48dfff4
 151634 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 627d1d6693b0594d257dbe1a3363a8d4bd4d8307 3c659044118e34603161457db9934a34f816d78b eb6490f544388dd24c0d054a96dd304bc7284450 88ab0c15525ced2eefe39220742efe4769089ad8 be63d9d47f571a60d70f8fb630c03871312d9655
 151666 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 3c659044118e34603161457db9934a34f816d78b 250bc43a406f7d46e319abe87c19548d4f027828 2e3de6253422112ae43e608661ba94ea6b345694 3371ced37ced359167b5a71abee2062854371323
 151645 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 627d1d6693b0594d257dbe1a3363a8d4bd4d8307 3c659044118e34603161457db9934a34f816d78b eb6490f544388dd24c0d054a96dd304bc7284450 88ab0c15525ced2eefe39220742efe4769089ad8 be63d9d47f571a60d70f8fb630c03871312d9655
 151655 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 568eee7cf319fa95183c8d3b5e8dcf6e078ab8b3 3c659044118e34603161457db9934a34f816d78b 4ec2a1f53e8aaa22924614b64dde97321126943e 2e3de6253422112ae43e608661ba94ea6b345694 1497e78068421d83956f8e82fb6e1bf1fc3b1199
 151668 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3ee4f6cb360a877d171f2f9bb76b0d46d2cfa985 3c659044118e34603161457db9934a34f816d78b 9f1f264edbdf5516d6f208497310b3eedbc7b74c 2e3de6253422112ae43e608661ba94ea6b345694 2995d0afdf2d3fb44d07eada088db3613741db1e
 151657 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 627d1d6693b0594d257dbe1a3363a8d4bd4d8307 3c659044118e34603161457db9934a34f816d78b eb6490f544388dd24c0d054a96dd304bc7284450 88ab0c15525ced2eefe39220742efe4769089ad8 be63d9d47f571a60d70f8fb630c03871312d9655
 151658 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 239b50a863704f7960525799eda82de061c7c458 3c659044118e34603161457db9934a34f816d78b eefe34ea4b82c2b47abe28af4cc7247d51553626 2e3de6253422112ae43e608661ba94ea6b345694 25636ed707cf1211ce846c7ec58f8643e435d7a7
 151678 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8035edbe12f0f2a58e8fa9b06d05c8ee1c69ffae 3c659044118e34603161457db9934a34f816d78b 5d2f557b47dfbf8f23277a5bdd8473d4607c681a 2e3de6253422112ae43e608661ba94ea6b345694 51ca66c37371b10b378513af126646de22eddb17
 151670 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 e1d24410da356731da70b3334f86343e11e207d2 3c659044118e34603161457db9934a34f816d78b 470dd165d152ff7ceac61c7b71c2b89220b3aad7 2e3de6253422112ae43e608661ba94ea6b345694 6a49b9a7920c82015381740905582b666160d955
 151688 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 568eee7cf319fa95183c8d3b5e8dcf6e078ab8b3 3c659044118e34603161457db9934a34f816d78b 6bb228190ef0b45669d285114cf8a280c55f4b39 2e3de6253422112ae43e608661ba94ea6b345694 ad33a573c009d72466432b41ba0591c64e819c19
 151680 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8035edbe12f0f2a58e8fa9b06d05c8ee1c69ffae 3c659044118e34603161457db9934a34f816d78b 7d2410cea154bf915fb30179ebda3b17ac36e70e 2e3de6253422112ae43e608661ba94ea6b345694 780aba2779b834f19b2a6f0dcdea0e7e0b5e1622
 151672 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 567bc4b4ae8a975791382dd30ac413bc0d3ce88c 3c659044118e34603161457db9934a34f816d78b eea8f5df4ecc607d64f091b8d916fcc11a697541 2e3de6253422112ae43e608661ba94ea6b345694 2995d0afdf2d3fb44d07eada088db3613741db1e
 151673 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 6ff7c838d09224dd4e4c9b5b93152d8db1b19740 3c659044118e34603161457db9934a34f816d78b 86e8c353f705f14f2f2fd7a6195cefa431aa24d9 2e3de6253422112ae43e608661ba94ea6b345694 058023b343d4e366864831db46e9b438e9e3a178
 151669 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 627d1d6693b0594d257dbe1a3363a8d4bd4d8307 3c659044118e34603161457db9934a34f816d78b eb6490f544388dd24c0d054a96dd304bc7284450 88ab0c15525ced2eefe39220742efe4769089ad8 be63d9d47f571a60d70f8fb630c03871312d9655
 151682 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 bb78cfbec07eda45118b630a09b0af549b43a135 3c659044118e34603161457db9934a34f816d78b fe0fe4735e798578097758781166cc221319b93d 2e3de6253422112ae43e608661ba94ea6b345694 d9f58cd54fe2f05e1f05e2fe254684bd1840de8e
 151694 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 3c659044118e34603161457db9934a34f816d78b 157ed954e2dc8c2a4230d38058ca7f1fe50902e0 2e3de6253422112ae43e608661ba94ea6b345694 1251402caf8685f45d9d580f01583370f7e2d272
 151689 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8035edbe12f0f2a58e8fa9b06d05c8ee1c69ffae 3c659044118e34603161457db9934a34f816d78b 175198ad91d8bac540159705873b4ffe4fb94eab 2e3de6253422112ae43e608661ba94ea6b345694 51ca66c37371b10b378513af126646de22eddb17
 151700 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 3c659044118e34603161457db9934a34f816d78b d6b78ac8ecf94f56dbfbecc23fb4365d8772a41a 2e3de6253422112ae43e608661ba94ea6b345694 1251402caf8685f45d9d580f01583370f7e2d272
 151691 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 567bc4b4ae8a975791382dd30ac413bc0d3ce88c 3c659044118e34603161457db9934a34f816d78b 6b0eff1a4ea47c835a7d8bee88c05c47ada37495 2e3de6253422112ae43e608661ba94ea6b345694 2995d0afdf2d3fb44d07eada088db3613741db1e
 151685 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 627d1d6693b0594d257dbe1a3363a8d4bd4d8307 3c659044118e34603161457db9934a34f816d78b eb6490f544388dd24c0d054a96dd304bc7284450 88ab0c15525ced2eefe39220742efe4769089ad8 be63d9d47f571a60d70f8fb630c03871312d9655
 151692 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 3c659044118e34603161457db9934a34f816d78b da9630c57ee386f8beb571ba6bb4a98d546c42ca 2e3de6253422112ae43e608661ba94ea6b345694 1251402caf8685f45d9d580f01583370f7e2d272
 151697 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 3c659044118e34603161457db9934a34f816d78b 75a6ed875ff0a2eb6b2971ae2098ed09963d7329 2e3de6253422112ae43e608661ba94ea6b345694 1251402caf8685f45d9d580f01583370f7e2d272
 151706 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 3c659044118e34603161457db9934a34f816d78b 81cb05732efb36971901c515b007869cc1d3a532 2e3de6253422112ae43e608661ba94ea6b345694 1251402caf8685f45d9d580f01583370f7e2d272
 151703 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 3c659044118e34603161457db9934a34f816d78b d6b78ac8ecf94f56dbfbecc23fb4365d8772a41a 2e3de6253422112ae43e608661ba94ea6b345694 1251402caf8685f45d9d580f01583370f7e2d272
 151701 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 3c659044118e34603161457db9934a34f816d78b 81cb05732efb36971901c515b007869cc1d3a532 2e3de6253422112ae43e608661ba94ea6b345694 1251402caf8685f45d9d580f01583370f7e2d272
 151709 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 3c659044118e34603161457db9934a34f816d78b 81cb05732efb36971901c515b007869cc1d3a532 2e3de6253422112ae43e608661ba94ea6b345694 1251402caf8685f45d9d580f01583370f7e2d272
 151708 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 3c659044118e34603161457db9934a34f816d78b d6b78ac8ecf94f56dbfbecc23fb4365d8772a41a 2e3de6253422112ae43e608661ba94ea6b345694 1251402caf8685f45d9d580f01583370f7e2d272
Searching for interesting versions
 Result found: flight 150593 (pass), for basis pass
 Result found: flight 151634 (fail), for basis failure
 Repro found: flight 151655 (pass), for basis pass
 Repro found: flight 151656 (fail), for basis failure
 0 revisions at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 3c659044118e34603161457db9934a34f816d78b d6b78ac8ecf94f56dbfbecc23fb4365d8772a41a 2e3de6253422112ae43e608661ba94ea6b345694 1251402caf8685f45d9d580f01583370f7e2d272
No revisions left to test, checking graph state.
 Result found: flight 151700 (pass), for last pass
 Result found: flight 151701 (fail), for first failure
 Repro found: flight 151703 (pass), for last pass
 Repro found: flight 151706 (fail), for first failure
 Repro found: flight 151708 (pass), for last pass
 Repro found: flight 151709 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  qemuu git://git.qemu.org/qemu.git
  Bug introduced:  81cb05732efb36971901c515b007869cc1d3a532
  Bug not present: d6b78ac8ecf94f56dbfbecc23fb4365d8772a41a
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/151709/


  commit 81cb05732efb36971901c515b007869cc1d3a532
  Author: Markus Armbruster <armbru@redhat.com>
  Date:   Tue Jun 9 14:23:37 2020 +0200
  
      qdev: Assert devices are plugged into a bus that can take them
      
      This would have caught some of the bugs I just fixed.
      
      Signed-off-by: Markus Armbruster <armbru@redhat.com>
      Reviewed-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
      Message-Id: <20200609122339.937862-23-armbru@redhat.com>

dot: graph is too large for cairo-renderer bitmaps. Scaling by 0.622332 to fit
pnmtopng: 223 colors found
Revision graph left in /home/logs/results/bisect/qemu-mainline/test-amd64-amd64-xl-qemuu-debianhvm-amd64.debian-hvm-install.{dot,ps,png,html,svg}.
----------------------------------------
151709: tolerable ALL FAIL

flight 151709 qemu-mainline real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/151709/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 10 debian-hvm-install fail baseline untested


jobs:
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Tue Jul 07 13:31:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jul 2020 13:31:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jsngs-0007Js-5f; Tue, 07 Jul 2020 13:31:18 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=F44P=AS=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jsngq-0007JY-85
 for xen-devel@lists.xenproject.org; Tue, 07 Jul 2020 13:31:16 +0000
X-Inumbo-ID: 1edd2018-c056-11ea-8d72-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1edd2018-c056-11ea-8d72-12813bfff9fa;
 Tue, 07 Jul 2020 13:31:10 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=YoxJ1qLa5MpV7s6uZITIPYzM7iMtYORv1Fzp2XXtG3o=; b=fp6AW0XVf4j+TvXHINzGNGsXq
 s6IWF5uTpyXmfm9zMfUPmtsZNbwTSSJTdAP2CvWTgwOLdYk+OSV5BFvRnxTil0c2tnVYjhE620WMU
 7t2igjP7kGdxev34PFFswHgCEc+2XOlwdV8FaCsntkmfpQvtNocAn+iKzojxRfBwmY5JE=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jsngk-0004GG-4U; Tue, 07 Jul 2020 13:31:10 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jsngj-00077P-SW; Tue, 07 Jul 2020 13:31:09 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jsngj-0003zg-S0; Tue, 07 Jul 2020 13:31:09 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151698-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 151698: regressions - FAIL
X-Osstest-Failures: libvirt:test-amd64-amd64-libvirt-pair:guest-migrate/src_host/dst_host:fail:regression
 libvirt:test-amd64-i386-libvirt-pair:guest-migrate/src_host/dst_host:fail:regression
 libvirt:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 libvirt:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 libvirt:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 libvirt:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 libvirt:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 libvirt:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 libvirt:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 libvirt:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 libvirt:test-arm64-arm64-libvirt-qcow2:migrate-support-check:fail:nonblocking
 libvirt:test-arm64-arm64-libvirt-qcow2:saverestore-support-check:fail:nonblocking
 libvirt:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 libvirt:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 libvirt:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 libvirt:test-arm64-arm64-libvirt:migrate-support-check:fail:nonblocking
 libvirt:test-arm64-arm64-libvirt:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: libvirt=852ee1950aee5f31c9656b30c5fe9124f734c38c
X-Osstest-Versions-That: libvirt=e7998ebeaf15e4e8825be0dd97aa1316f194f00d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 07 Jul 2020 13:31:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151698 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151698/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-pair 22 guest-migrate/src_host/dst_host fail REGR. vs. 151496
 test-amd64-i386-libvirt-pair 22 guest-migrate/src_host/dst_host fail REGR. vs. 151496

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 151496
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 151496
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-qcow2 12 migrate-support-check        fail never pass
 test-arm64-arm64-libvirt-qcow2 13 saverestore-support-check    fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt     14 saverestore-support-check    fail   never pass

version targeted for testing:
 libvirt              852ee1950aee5f31c9656b30c5fe9124f734c38c
baseline version:
 libvirt              e7998ebeaf15e4e8825be0dd97aa1316f194f00d

Last test of basis   151496  2020-07-01 04:23:43 Z    6 days
Failing since        151527  2020-07-02 04:29:15 Z    5 days    6 attempts
Testing same since   151665  2020-07-06 04:18:46 Z    1 day     2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrea Bolognani <abologna@redhat.com>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniel Veillard <veillard@redhat.com>
  Laine Stump <laine@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Yanqiu Zhang <yanqzhan@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-libvirt                                     pass    
 test-arm64-arm64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-i386-libvirt-pair                                 fail    
 test-arm64-arm64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 852ee1950aee5f31c9656b30c5fe9124f734c38c
Author: Laine Stump <laine@redhat.com>
Date:   Thu Jun 18 22:33:28 2020 -0400

    util: remove OOM error log from virGetHostnameImpl()
    
    The strings allocated in virGetHostnameImpl() are all allocated via
    g_strdup(), which will exit on OOM anyway, so the call to
    virReportOOMError() is redundant, and removing it allows slight
    modification to the code, in particular the cleanup label can be
    eliminated.
    
    Signed-off-by: Laine Stump <laine@redhat.com>
    Reviewed-by: Ján Tomko <jtomko@redhat.com>

commit 59afd0b0bcbbefd65fd8c171d73db207828c8b18
Author: Laine Stump <laine@redhat.com>
Date:   Thu Jun 18 23:00:47 2020 -0400

    conf: eliminate useless error label in virDomainFeaturesDefParse()
    
    The error: label in this function just does "return -1", so replace
    all the "goto error" in the function with "return -1".
    
    Signed-off-by: Laine Stump <laine@redhat.com>
    Reviewed-by: Ján Tomko <jtomko@redhat.com>
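
The refactor this commit describes can be sketched as follows; `parse_features` is a hypothetical stand-in for virDomainFeaturesDefParse(), not the real function:

```c
#include <string.h>

/* Before the change, every failure path did "goto error", and the
 * error: label only did "return -1". After (as below), each failure
 * returns -1 directly and the label is gone. */
static int parse_features(const char *name)
{
    if (name == NULL)
        return -1;              /* was: goto error */
    if (strlen(name) == 0)
        return -1;              /* was: goto error */
    /* ... parse the feature ... */
    return 0;
}
```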

commit ab9fd53823483975adb0cb7d46e03f647c7f3b57
Author: Laine Stump <laine@redhat.com>
Date:   Wed Jun 24 13:12:56 2020 -0400

    network: use proper arg type when calling virNetDevSetOnline()
    
    The 2nd arg to this function is a bool, not an int.
    
    Signed-off-by: Laine Stump <laine@redhat.com>
    Reviewed-by: Ján Tomko <jtomko@redhat.com>

commit e95dd7aacd814c5b7d109252f0b68b2ac9cebb9b
Author: Laine Stump <laine@redhat.com>
Date:   Tue Jun 23 22:52:58 2020 -0400

    network: make networkDnsmasqXmlNsDef private to bridge_driver.c
    
    This struct isn't used anywhere else.
    
    Signed-off-by: Laine Stump <laine@redhat.com>
    Reviewed-by: Ján Tomko <jtomko@redhat.com>

commit 9ceb3cff8582119d2f907b85655e758068770e83
Author: Laine Stump <laine@redhat.com>
Date:   Fri Jun 19 17:40:17 2020 -0400

    network: fix memory leak in networkBuildDhcpDaemonCommandLine()
    
    hostsfilestr was not being freed. This will be turned into g_autofree
    in an upcoming patch converting a lot more of the same file to using
    g_auto*, but I wanted to make a separate patch for this first so the
    other patch is simpler to review (and to make backporting easier).
    
    The leak was introduced in commit 97a0aa246799c97d0a9ca9ecd6b4fd932ae4756c
    
    Signed-off-by: Laine Stump <laine@redhat.com>
    Reviewed-by: Ján Tomko <jtomko@redhat.com>

commit a726feb693ea5a1b0c90761c35641f0db8fc0619
Author: Laine Stump <laine@redhat.com>
Date:   Thu Jun 18 19:16:33 2020 -0400

    use g_autoptr for all xmlBuffers
    
    AUTOPTR_CLEANUP_FUNC is set to xmlBufferFree() in util/virxml.h (This
    is actually new - added accidentally (but fortunately harmlessly!) in
    commit 257aba2dafe. I had added it along with the hunks in this patch,
    then decided to remove it and submit separately, but missed taking out
    the hunk in virxml.h)
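    For readers unfamiliar with g_autoptr: it frees the pointer
    automatically when it goes out of scope, via the compiler's cleanup
    attribute. A self-contained sketch of the mechanism, using a
    stand-in Buffer type in place of the real xmlBuffer/xmlBufferFree
    (the macro name below is illustrative, not GLib's):

```c
#include <stdlib.h>

typedef struct { char *data; } Buffer;   /* stand-in for xmlBuffer */

static int freed;                        /* lets us observe the cleanup */

static void buffer_free(Buffer *b)
{
    free(b);
    freed = 1;
}

/* What G_DEFINE_AUTOPTR_CLEANUP_FUNC(Buffer, buffer_free) boils down
 * to: a wrapper taking Buffer**, suitable for __attribute__((cleanup)). */
static void buffer_cleanup(Buffer **bp)
{
    if (*bp)
        buffer_free(*bp);
}

#define autoptr_Buffer __attribute__((cleanup(buffer_cleanup))) Buffer *

static void use_buffer(void)
{
    autoptr_Buffer buf = malloc(sizeof(Buffer));
    (void)buf;
}   /* buffer_cleanup() runs here; no explicit free call needed */
```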
    
    Signed-off-by: Laine Stump <laine@redhat.com>
    Reviewed-by: Ján Tomko <jtomko@redhat.com>

commit b7a92bce070fd57844a59bf8b1c30cb4ef4f3acd
Author: Laine Stump <laine@redhat.com>
Date:   Thu Jun 18 12:49:09 2020 -0400

    conf, vmx: check for OOM after calling xmlBufferCreate()
    
    Although libvirt itself uses g_malloc0() and friends, which exit when
    there isn't enough memory, libxml2 uses standard malloc(), which just
    returns NULL on OOM - this means we must check for NULL on return from
    any libxml2 functions that allocate memory.
    
    xmlBufferCreate(), for example, might return NULL, and we don't always
    check for it. This patch adds checks where it isn't already done.
    
    (NB: Although libxml2 has a provision for changing behavior on OOM (by
    calling xmlMemSetup() to change what functions are used for
    allocating/freeing memory), we can't use that, since parts of libvirt
    code end up in libvirt.so, which is linked and called directly by
    applications that may themselves use libxml2 (and may have already set
    their own alternate malloc()), e.g. drivers like esx which live totally
    in the library rather than a separate process.)
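    The shape of the added checks, sketched with a stand-in allocator
    (the real code calls libxml2's xmlBufferCreate(); the names below
    are illustrative so the example compiles on its own):

```c
#include <stdlib.h>

typedef struct { int used; } FakeXmlBuffer;  /* stand-in for xmlBuffer */

/* Like any libxml2 allocator, this can return NULL on OOM. */
static FakeXmlBuffer *fake_buffer_create(int simulate_oom)
{
    return simulate_oom ? NULL : calloc(1, sizeof(FakeXmlBuffer));
}

static int format_something(int simulate_oom)
{
    FakeXmlBuffer *buf = fake_buffer_create(simulate_oom);

    if (!buf)          /* the check that was missing in places */
        return -1;     /* report OOM instead of dereferencing NULL */

    /* ... use buf ... */
    free(buf);
    return 0;
}
```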
    
    Signed-off-by: Laine Stump <laine@redhat.com>
    Reviewed-by: Ján Tomko <jtomko@redhat.com>

commit ad231189ab948f82b8f2288250df088d9718bb7c
Author: Yanqiu Zhang <yanqzhan@redhat.com>
Date:   Thu Jul 2 09:06:46 2020 +0000

    news.html: Add 3 new features
    
    Add 'virtio packed' for 6.3.0, and 'virDomainGetHostnameFlags' and
    'Panic Crashloaded event' for 6.1.0.
    
    Signed-off-by: Yanqiu Zhang <yanqzhan@redhat.com>
    Reviewed-by: Andrea Bolognani <abologna@redhat.com>

commit 201f8d1876136b0693505614efa3c9d113aff0bb
Author: Michal Privoznik <mprivozn@redhat.com>
Date:   Mon Jun 29 14:55:54 2020 +0200

    virConnectGetAllDomainStats: Document two vcpu stats
    
    When introducing vcpu.<num>.wait (v1.3.2-rc1~301) and
    vcpu.<num>.halted (v2.4.0-rc1~36) the documentation was
    not written.
    
    Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
    Reviewed-by: Erik Skultety <eskultet@redhat.com>

commit c203b8fee1ce15003934c09e811fbd2eaec9f230
Author: Andrea Bolognani <abologna@redhat.com>
Date:   Thu Jul 2 15:02:38 2020 +0200

    docs: Update CI documentation
    
    We're no longer using either Travis CI or the Jenkins-based
    CentOS CI, but we have started using Cirrus CI.
    
    Mention the libvirt-ci subproject as well, as a pointer for those
    who might want to learn more about our CI infrastructure.
    
    Signed-off-by: Andrea Bolognani <abologna@redhat.com>
    Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>

commit fb912901316dbe7d485551606373bd71d5271601
Author: Andrea Bolognani <abologna@redhat.com>
Date:   Mon Jun 29 19:00:36 2020 +0200

    cirrus: Generate jobs dynamically
    
    Instead of having static job definitions for FreeBSD and macOS,
    use a generic template for both and fill in the details that are
    actually different, such as the list of packages to install, in
    the GitLab CI job, right before calling cirrus-run.
    
    The target-specific information is provided by lcitool, so that
    keeping them up to date is just a matter of running the refresh
    script when necessary.
    
    Signed-off-by: Andrea Bolognani <abologna@redhat.com>
    Reviewed-by: Erik Skultety <eskultet@redhat.com>

commit 919ee94ca9c7fed77897fa8e3b04952e02780c0c
Author: Michal Privoznik <mprivozn@redhat.com>
Date:   Fri Jul 3 09:32:30 2020 +0200

    maint: Post-release version bump to 6.6.0
    
    Signed-off-by: Michal Privoznik <mprivozn@redhat.com>

commit d7f935f1f17a3ecf180ddb9600cbef4ba8dc20f4
Author: Daniel Veillard <veillard@redhat.com>
Date:   Fri Jul 3 08:49:25 2020 +0200

    Release of libvirt-6.5.0
    
    * NEWS.rst: updated with date of release
    
    Signed-off-by: Daniel Veillard <veillard@redhat.com>

commit d1d888a69f505922140bec292b8d208b3571f084
Author: Andrea Bolognani <abologna@redhat.com>
Date:   Thu Jul 2 14:41:18 2020 +0200

    NEWS: Update for libvirt 6.5.0
    
    Signed-off-by: Andrea Bolognani <abologna@redhat.com>
    Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>

commit 7fa7f7eeb6e969e002845928e155914da2fc8cd0
Author: Daniel P. Berrangé <berrange@redhat.com>
Date:   Wed Jul 1 17:36:51 2020 +0100

    util: add access check for hooks to fix running as non-root
    
    Since feb83c1e710b9ea8044a89346f4868d03b31b0f1 libvirtd will abort on
    startup if run as non-root
    
      2020-07-01 16:30:30.738+0000: 1647444: error : virDirOpenInternal:2869 : cannot open directory '/etc/libvirt/hooks/daemon.d': Permission denied
    
    The root cause flaw is that non-root libvirtd is using /etc/libvirt for
    its hooks. Traditionally that has been harmless though since we checked
    whether we could access the hook file and degraded gracefully. We need
    the same access check for iterating over the hook directory.
    
    Long term we should make it possible to have an unprivileged hook dir
    under $HOME.
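The fix can be sketched like this (function names here are illustrative,
not libvirt's actual helpers): probe the hook directory with access(2)
first, and degrade gracefully when it isn't readable instead of
aborting.

```c
#include <dirent.h>
#include <unistd.h>

static int run_hook_dir(const char *path)
{
    DIR *dir;

    /* Same graceful degradation as for a single hook file: an
     * inaccessible or missing hook directory is not an error. */
    if (access(path, R_OK | X_OK) < 0)
        return 0;

    dir = opendir(path);
    if (!dir)
        return -1;     /* an unexpected failure is still fatal */

    /* ... iterate over hook scripts ... */
    closedir(dir);
    return 0;
}
```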
    
    Reviewed-by: Daniel Henrique Barboza <danielhb413@gmail.com>
    Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>

commit c3fa17cd9a158f38416a80af3e0f712bf96ebf38
Author: Michal Privoznik <mprivozn@redhat.com>
Date:   Wed Jul 1 09:47:48 2020 +0200

    virnettlshelpers: Update private key
    
    With the recent update of Fedora rawhide I've noticed
    virnettlssessiontest and virnettlscontexttest failing with:
    
      Our own certificate servercertreq-ctx.pem failed validation
      against cacertreq-ctx.pem: The certificate uses an insecure
      algorithm
    
    This is a result of Fedora's changes to support strong crypto [1].
    RSA with a 1024-bit key is viewed as legacy and thus insecure.
    Therefore, generate a new private key. Moreover, switch to EC, which
    is not only shorter but also not deprecated as often as RSA.
    Generated using the following command:
    
      openssl genpkey --outform PEM --out privkey.pem \
      --algorithm EC --pkeyopt ec_paramgen_curve:P-384 \
      --pkeyopt ec_param_enc:named_curve
    
    1: https://fedoraproject.org/wiki/Changes/StrongCryptoSettings2
    
    Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
    Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>

commit d57f361083c5053267e6d9380c1afe2abfcae8ac
Author: Daniel Henrique Barboza <danielhb413@gmail.com>
Date:   Tue Jun 30 16:43:43 2020 -0300

    docs: Fix 'Offline migration' description
    
    'transfers inactive the definition of a domain' seems odd.
    
    Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
    Reviewed-by: Andrea Bolognani <abologna@redhat.com>


From xen-devel-bounces@lists.xenproject.org Tue Jul 07 13:56:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jul 2020 13:56:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jso4n-0001Ei-MH; Tue, 07 Jul 2020 13:56:01 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=kqME=AS=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jso4m-0001Ea-Pp
 for xen-devel@lists.xenproject.org; Tue, 07 Jul 2020 13:56:00 +0000
X-Inumbo-ID: 9670a368-c059-11ea-8d75-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 9670a368-c059-11ea-8d75-12813bfff9fa;
 Tue, 07 Jul 2020 13:56:00 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id B7CFAAE2C;
 Tue,  7 Jul 2020 13:55:59 +0000 (UTC)
Subject: Re: [PATCH] x86emul: avoid assembler warning about .type not taking
 effect in test harness
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <4a0f9e7d-53f1-b77f-e8a9-a75483884a6f@suse.com>
Message-ID: <7f59e378-1996-7fd7-9c2b-e8dc36c7f992@suse.com>
Date: Tue, 7 Jul 2020 15:55:58 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <4a0f9e7d-53f1-b77f-e8a9-a75483884a6f@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu <wl@xen.org>,
 Paul Durrant <paul@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 07.07.2020 11:35, Jan Beulich wrote:
> gcc 9.3 started to re-order top level blocks by default when optimizing.
> This re-ordering results in all our .type directives to get emitted to
> the assembly file first, followed by gcc's. The assembler warns about
> attempts to change the type of a symbol when it was already set (and
> when there's no intervening setting to "notype").

Turns out this wasn't a gcc change - the problem had been there all
along; it just went through silently. It was the newer gas that I built
gcc 9.3 with that caused the issue to become visible. I've slightly
updated the description to account for this.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Jul 07 15:47:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jul 2020 15:47:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jspoW-0002GV-6l; Tue, 07 Jul 2020 15:47:20 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=F44P=AS=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jspoV-0002GQ-4G
 for xen-devel@lists.xenproject.org; Tue, 07 Jul 2020 15:47:19 +0000
X-Inumbo-ID: 22ad42b5-c069-11ea-8da1-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 22ad42b5-c069-11ea-8da1-12813bfff9fa;
 Tue, 07 Jul 2020 15:47:18 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=2zJ7n0NS19MmPMjNWvUtF9lPx7+C/LUeb4wjFeKH2jM=; b=KDoEaV3jXMudM8SnhdVnlj1F1
 3jxCuh97qm49xBo5VYLqV9FctskFDHB9gZvXv/z81cgoWiS2Arqp/ootNrGZ1RgUwNU+N/1j0NecP
 zBYBPaqiSUAXdph7AwPUImjSHjszNf83fq79JpGrVUHrPemt+Uw0brOes3WUUorU58l58=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jspoU-0006rD-48; Tue, 07 Jul 2020 15:47:18 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jspoT-0004AT-Sy; Tue, 07 Jul 2020 15:47:17 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jspoT-0007Xl-Rx; Tue, 07 Jul 2020 15:47:17 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151707-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xtf test] 151707: all pass - PUSHED
X-Osstest-Versions-This: xtf=2dd14fbcf9d03fdc300491939aeac75d3eb9e05f
X-Osstest-Versions-That: xtf=2a8859e87761a0efc119778e094f203dc2ea487a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 07 Jul 2020 15:47:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151707 xtf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151707/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 xtf                  2dd14fbcf9d03fdc300491939aeac75d3eb9e05f
baseline version:
 xtf                  2a8859e87761a0efc119778e094f203dc2ea487a

Last test of basis   150870  2020-06-05 20:09:44 Z   31 days
Testing same since   151707  2020-07-07 10:39:37 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>

jobs:
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-amd64-pvops                                            pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xtf.git
   2a8859e..2dd14fb  2dd14fbcf9d03fdc300491939aeac75d3eb9e05f -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Tue Jul 07 16:14:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jul 2020 16:14:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jsqEf-0005IP-58; Tue, 07 Jul 2020 16:14:21 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=F44P=AS=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jsqEe-0005IK-24
 for xen-devel@lists.xenproject.org; Tue, 07 Jul 2020 16:14:20 +0000
X-Inumbo-ID: e9084906-c06c-11ea-b7bb-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e9084906-c06c-11ea-b7bb-bc764e2007e4;
 Tue, 07 Jul 2020 16:14:18 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=TUvLCzlIo1+jrPDGuZ38DVR5NogCU3sLhSaDLtfe+EI=; b=aHQhTvv33PK6rIh4gz35Q6/eq
 wYkb/MX4hpwnYwnNEpvltve6cM2nTH/67zsJbKvTMWxmguM9TYYdBDpfntcC6UWgeKYVP/bGCjPQV
 0SvjblCFiQ7Gpn8ZRJJJGI2FgiMAGy4UwFSofeMMpee6StZtRxRa6BC1djUA0SzlXyLU0=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jsqEc-0007so-Gc; Tue, 07 Jul 2020 16:14:18 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jsqEc-0004zT-4p; Tue, 07 Jul 2020 16:14:18 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jsqEc-0004Ci-2u; Tue, 07 Jul 2020 16:14:18 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151711-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 151711: tolerable all pass - PUSHED
X-Osstest-Failures: xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=3fdc211b01b29f252166937238efe02d15cb5780
X-Osstest-Versions-That: xen=f97f99c8d88ebc108f6adc3ba74e87d53ba57c70
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 07 Jul 2020 16:14:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151711 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151711/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  3fdc211b01b29f252166937238efe02d15cb5780
baseline version:
 xen                  f97f99c8d88ebc108f6adc3ba74e87d53ba57c70

Last test of basis   151687  2020-07-06 19:01:28 Z    0 days
Testing same since   151711  2020-07-07 13:13:16 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   f97f99c8d8..3fdc211b01  3fdc211b01b29f252166937238efe02d15cb5780 -> smoke


From xen-devel-bounces@lists.xenproject.org Tue Jul 07 16:51:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jul 2020 16:51:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jsqo4-0000EH-Q3; Tue, 07 Jul 2020 16:50:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Gwtg=AS=redhat.com=armbru@srs-us1.protection.inumbo.net>)
 id 1jsqo3-0000Dr-5M
 for xen-devel@lists.xenproject.org; Tue, 07 Jul 2020 16:50:55 +0000
X-Inumbo-ID: 04f59fd8-c072-11ea-b7bb-bc764e2007e4
Received: from us-smtp-1.mimecast.com (unknown [207.211.31.81])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 04f59fd8-c072-11ea-b7bb-bc764e2007e4;
 Tue, 07 Jul 2020 16:50:53 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
 s=mimecast20190719; t=1594140652;
 h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
 content-transfer-encoding:content-transfer-encoding:
 in-reply-to:in-reply-to:references:references;
 bh=kTc4yRwO5a9XF+O06cElX+INNSFMrjGg1hnzHUx6ZOU=;
 b=aSBpSdGNECTHeybyoEgiQ9+/I91OHNtf7c59WTpSt6Bzl5c++iQf9oGyHQJPoWNEunc2vX
 qPSEqxOziyB8zC7Y/W4OPJsZ0QCbAnK52gqeug6MuEmSRy370spUUI20msi6EvDVBYcS2W
 d9dRNFzq5cNOhHON+TwAElysIA0XCNE=
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-205-8gbxd39nN9aA_Cw08BWDdA-1; Tue, 07 Jul 2020 12:50:50 -0400
X-MC-Unique: 8gbxd39nN9aA_Cw08BWDdA-1
Received: from smtp.corp.redhat.com (int-mx04.intmail.prod.int.phx2.redhat.com
 [10.5.11.14])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id B71BD1005510;
 Tue,  7 Jul 2020 16:50:48 +0000 (UTC)
Received: from blackfin.pond.sub.org (ovpn-112-143.ams2.redhat.com
 [10.36.112.143])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id 98A2A5D9C9;
 Tue,  7 Jul 2020 16:50:42 +0000 (UTC)
Received: by blackfin.pond.sub.org (Postfix, from userid 1000)
 id 25B3611328A3; Tue,  7 Jul 2020 18:50:37 +0200 (CEST)
From: Markus Armbruster <armbru@redhat.com>
To: qemu-devel@nongnu.org
Subject: [PATCH v12 3/8] sd: Use ERRP_AUTO_PROPAGATE()
Date: Tue,  7 Jul 2020 18:50:32 +0200
Message-Id: <20200707165037.1026246-4-armbru@redhat.com>
In-Reply-To: <20200707165037.1026246-1-armbru@redhat.com>
References: <20200707165037.1026246-1-armbru@redhat.com>
MIME-Version: 1.0
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.14
Authentication-Results: relay.mimecast.com;
 auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=armbru@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin Wolf <kwolf@redhat.com>, vsementsov@virtuozzo.com,
 Michael Roth <mdroth@linux.vnet.ibm.com>, qemu-block@nongnu.org,
 Paul Durrant <paul@xen.org>, Laszlo Ersek <lersek@redhat.com>,
 Christian Schoenebeck <qemu_oss@crudebyte.com>, groug@kaod.org,
 Max Reitz <mreitz@redhat.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Gerd Hoffmann <kraxel@redhat.com>, Stefan Hajnoczi <stefanha@redhat.com>,
 Anthony Perard <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org,
 =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>

If we want to add some info to errp (by error_prepend() or
error_append_hint()), we must use the ERRP_AUTO_PROPAGATE macro.
Otherwise, this info will not be added when errp == &error_fatal
(the program will exit prior to the error_append_hint() or
error_prepend() call).  Fix such cases.

If we want to check for an error after a call taking errp, we would
need to introduce a local_err and then propagate it to errp. Instead,
use the ERRP_AUTO_PROPAGATE macro; the benefits are:
1. No need for an explicit error_propagate() call
2. No need for an explicit local_err variable: use errp directly
3. ERRP_AUTO_PROPAGATE leaves errp as is if it's not NULL or
   &error_fatal, which means we don't break error_abort
   (we'll abort on error_set, not on error_propagate)

This commit is generated by command

    sed -n '/^SD (Secure Card)$/,/^$/{s/^F: //p}' \
        MAINTAINERS | \
    xargs git ls-files | grep '\.[hc]$' | \
    xargs spatch \
        --sp-file scripts/coccinelle/auto-propagated-errp.cocci \
        --macro-file scripts/cocci-macro-file.h \
        --in-place --no-show-diff --max-width 80
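A self-contained miniature of the pattern (this models, rather than
uses, QEMU's real Error API): with a guaranteed-local errp, the caller
tests *errp directly instead of declaring a local_err and calling
error_propagate().

```c
#include <stdio.h>
#include <stdlib.h>

/* Toy stand-ins for QEMU's Error machinery. */
typedef struct Error { char msg[128]; } Error;

static void error_setg(Error **errp, const char *msg)
{
    if (errp && !*errp) {
        *errp = calloc(1, sizeof(Error));
        snprintf((*errp)->msg, sizeof((*errp)->msg), "%s", msg);
    }
}

static void error_prepend(Error **errp, const char *prefix)
{
    if (errp && *errp) {
        char tmp[128];
        snprintf(tmp, sizeof(tmp), "%s%s", prefix, (*errp)->msg);
        snprintf((*errp)->msg, sizeof((*errp)->msg), "%s", tmp);
    }
}

static void device_init(int fail, Error **errp)
{
    if (fail)
        error_setg(errp, "init failed");
}

/* With errp guaranteed local (what ERRP_AUTO_PROPAGATE() arranges),
 * the caller checks *errp directly: no local_err, no propagate call,
 * and error_prepend() adds its prefix before the error reaches the
 * original destination. */
static int device_realize(int fail, Error **errp)
{
    device_init(fail, errp);
    if (*errp) {
        error_prepend(errp, "realize: ");
        return -1;
    }
    return 0;
}
```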

Reported-by: Kevin Wolf <kwolf@redhat.com>
Reported-by: Greg Kurz <groug@kaod.org>
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
[Commit message tweaked]
Signed-off-by: Markus Armbruster <armbru@redhat.com>
---
 hw/sd/sdhci-pci.c |  7 +++----
 hw/sd/sdhci.c     | 21 +++++++++------------
 hw/sd/ssi-sd.c    | 10 +++++-----
 3 files changed, 17 insertions(+), 21 deletions(-)

diff --git a/hw/sd/sdhci-pci.c b/hw/sd/sdhci-pci.c
index 4f5977d487..38ec572fc6 100644
--- a/hw/sd/sdhci-pci.c
+++ b/hw/sd/sdhci-pci.c
@@ -29,13 +29,12 @@ static Property sdhci_pci_properties[] = {
 
 static void sdhci_pci_realize(PCIDevice *dev, Error **errp)
 {
+    ERRP_AUTO_PROPAGATE();
     SDHCIState *s = PCI_SDHCI(dev);
-    Error *local_err = NULL;
 
     sdhci_initfn(s);
-    sdhci_common_realize(s, &local_err);
-    if (local_err) {
-        error_propagate(errp, local_err);
+    sdhci_common_realize(s, errp);
+    if (*errp) {
         return;
     }
 
diff --git a/hw/sd/sdhci.c b/hw/sd/sdhci.c
index eb2be6529e..be1928784d 100644
--- a/hw/sd/sdhci.c
+++ b/hw/sd/sdhci.c
@@ -1288,7 +1288,7 @@ static const MemoryRegionOps sdhci_mmio_ops = {
 
 static void sdhci_init_readonly_registers(SDHCIState *s, Error **errp)
 {
-    Error *local_err = NULL;
+    ERRP_AUTO_PROPAGATE();
 
     switch (s->sd_spec_version) {
     case 2 ... 3:
@@ -1299,9 +1299,8 @@ static void sdhci_init_readonly_registers(SDHCIState *s, Error **errp)
     }
     s->version = (SDHC_HCVER_VENDOR << 8) | (s->sd_spec_version - 1);
 
-    sdhci_check_capareg(s, &local_err);
-    if (local_err) {
-        error_propagate(errp, local_err);
+    sdhci_check_capareg(s, errp);
+    if (*errp) {
         return;
     }
 }
@@ -1332,11 +1331,10 @@ void sdhci_uninitfn(SDHCIState *s)
 
 void sdhci_common_realize(SDHCIState *s, Error **errp)
 {
-    Error *local_err = NULL;
+    ERRP_AUTO_PROPAGATE();
 
-    sdhci_init_readonly_registers(s, &local_err);
-    if (local_err) {
-        error_propagate(errp, local_err);
+    sdhci_init_readonly_registers(s, errp);
+    if (*errp) {
         return;
     }
     s->buf_maxsz = sdhci_get_fifolen(s);
@@ -1456,13 +1454,12 @@ static void sdhci_sysbus_finalize(Object *obj)
 
 static void sdhci_sysbus_realize(DeviceState *dev, Error **errp)
 {
+    ERRP_AUTO_PROPAGATE();
     SDHCIState *s = SYSBUS_SDHCI(dev);
     SysBusDevice *sbd = SYS_BUS_DEVICE(dev);
-    Error *local_err = NULL;
 
-    sdhci_common_realize(s, &local_err);
-    if (local_err) {
-        error_propagate(errp, local_err);
+    sdhci_common_realize(s, errp);
+    if (*errp) {
         return;
     }
 
diff --git a/hw/sd/ssi-sd.c b/hw/sd/ssi-sd.c
index 3717b2e721..adb7fa9c24 100644
--- a/hw/sd/ssi-sd.c
+++ b/hw/sd/ssi-sd.c
@@ -241,10 +241,10 @@ static const VMStateDescription vmstate_ssi_sd = {
 
 static void ssi_sd_realize(SSISlave *d, Error **errp)
 {
+    ERRP_AUTO_PROPAGATE();
     ssi_sd_state *s = SSI_SD(d);
     DeviceState *carddev;
     DriveInfo *dinfo;
-    Error *err = NULL;
 
     qbus_create_inplace(&s->sdbus, sizeof(s->sdbus), TYPE_SD_BUS,
                         DEVICE(d), "sd-bus");
@@ -255,23 +255,23 @@ static void ssi_sd_realize(SSISlave *d, Error **errp)
     carddev = qdev_new(TYPE_SD_CARD);
     if (dinfo) {
         if (!qdev_prop_set_drive_err(carddev, "drive",
-                                     blk_by_legacy_dinfo(dinfo), &err)) {
+                                     blk_by_legacy_dinfo(dinfo), errp)) {
             goto fail;
         }
     }
 
-    if (!object_property_set_bool(OBJECT(carddev), "spi", true, &err)) {
+    if (!object_property_set_bool(OBJECT(carddev), "spi", true, errp)) {
         goto fail;
     }
 
-    if (!qdev_realize_and_unref(carddev, BUS(&s->sdbus), &err)) {
+    if (!qdev_realize_and_unref(carddev, BUS(&s->sdbus), errp)) {
         goto fail;
     }
 
     return;
 
 fail:
-    error_propagate_prepend(errp, err, "failed to init SD card: ");
+    error_prepend(errp, "failed to init SD card: ");
 }
 
 static void ssi_sd_reset(DeviceState *dev)
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Tue Jul 07 16:51:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jul 2020 16:51:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jsqo3-0000E3-Hg; Tue, 07 Jul 2020 16:50:55 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Gwtg=AS=redhat.com=armbru@srs-us1.protection.inumbo.net>)
 id 1jsqo2-0000Df-Vw
 for xen-devel@lists.xenproject.org; Tue, 07 Jul 2020 16:50:55 +0000
X-Inumbo-ID: 05212e3e-c072-11ea-8dc2-12813bfff9fa
Received: from us-smtp-delivery-1.mimecast.com (unknown [207.211.31.81])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 05212e3e-c072-11ea-8dc2-12813bfff9fa;
 Tue, 07 Jul 2020 16:50:53 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
 s=mimecast20190719; t=1594140653;
 h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
 content-transfer-encoding:content-transfer-encoding:
 in-reply-to:in-reply-to:references:references;
 bh=ZwJYdyi9Qt1/sCxmydGDv1Kj4p25FpuhPaW1i/Z86t0=;
 b=DMXS20HIwIMuAu1PfTHuEIpjBPpxgnpsLqvc1fMB6kb3OQSowZl27A0LHhzi0SpCe5PJ+X
 PWHjo4h2GxaWC/QHwns4Fzvh/ixgr3muUf6QfbbDr61ePz47/Ewva0VPE+Kq33lFgxtGqK
 exbtSPiMI9f8GpZBdlDlO48ZFBrGCGw=
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-469-cFlUF-bKNRqFv4oHtYQYbA-1; Tue, 07 Jul 2020 12:50:51 -0400
X-MC-Unique: cFlUF-bKNRqFv4oHtYQYbA-1
Received: from smtp.corp.redhat.com (int-mx08.intmail.prod.int.phx2.redhat.com
 [10.5.11.23])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 7A42E107ACF2;
 Tue,  7 Jul 2020 16:50:49 +0000 (UTC)
Received: from blackfin.pond.sub.org (ovpn-112-143.ams2.redhat.com
 [10.36.112.143])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id 9C93719D61;
 Tue,  7 Jul 2020 16:50:43 +0000 (UTC)
Received: by blackfin.pond.sub.org (Postfix, from userid 1000)
 id 2960311326E4; Tue,  7 Jul 2020 18:50:37 +0200 (CEST)
From: Markus Armbruster <armbru@redhat.com>
To: qemu-devel@nongnu.org
Subject: [PATCH v12 4/8] pflash: Use ERRP_AUTO_PROPAGATE()
Date: Tue,  7 Jul 2020 18:50:33 +0200
Message-Id: <20200707165037.1026246-5-armbru@redhat.com>
In-Reply-To: <20200707165037.1026246-1-armbru@redhat.com>
References: <20200707165037.1026246-1-armbru@redhat.com>
MIME-Version: 1.0
X-Scanned-By: MIMEDefang 2.84 on 10.5.11.23
Authentication-Results: relay.mimecast.com;
 auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=armbru@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin Wolf <kwolf@redhat.com>, vsementsov@virtuozzo.com,
 Michael Roth <mdroth@linux.vnet.ibm.com>, qemu-block@nongnu.org,
 Paul Durrant <paul@xen.org>, Laszlo Ersek <lersek@redhat.com>,
 Christian Schoenebeck <qemu_oss@crudebyte.com>, groug@kaod.org,
 Max Reitz <mreitz@redhat.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Gerd Hoffmann <kraxel@redhat.com>, Stefan Hajnoczi <stefanha@redhat.com>,
 Anthony Perard <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org,
 =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>

If we want to add some info to errp (by error_prepend() or
error_append_hint()), we must use the ERRP_AUTO_PROPAGATE macro.
Otherwise, this info will not be added when errp == &error_fatal
(the program will exit prior to the error_append_hint() or
error_prepend() call).  Fix such cases.

If we want to check the error after a call to an errp-function, we
previously had to introduce a local_err and then propagate it to errp.
Instead, use the ERRP_AUTO_PROPAGATE macro; the benefits are:
1. No need for an explicit error_propagate() call
2. No need for an explicit local_err variable: use errp directly
3. ERRP_AUTO_PROPAGATE leaves errp as is if it's not NULL or
   &error_fatal, which means we don't break error_abort
   (we'll abort on error_set, not on error_propagate)
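
The macro's mechanics can be illustrated with a toy model: a local
Error * shadows the caller's errp for the duration of the function, and
a scope-exit hook propagates it back, so error_prepend() and
error_append_hint() always have a real object to modify. This is a
simplified sketch only; the Error type, the helpers, and the
cleanup-attribute trick below are stand-ins, not QEMU's actual
implementation (which also special-cases error_fatal and error_abort):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Toy stand-in for QEMU's Error object -- illustration only. */
typedef struct Error {
    char msg[256];
} Error;

static void error_setg(Error **errp, const char *msg)
{
    if (errp) {
        *errp = calloc(1, sizeof(Error));
        snprintf((*errp)->msg, sizeof((*errp)->msg), "%s", msg);
    }
}

static void error_prepend(Error **errp, const char *prefix)
{
    char buf[256];

    if (errp && *errp) {
        snprintf(buf, sizeof(buf), "%s%s", prefix, (*errp)->msg);
        snprintf((*errp)->msg, sizeof((*errp)->msg), "%s", buf);
    }
}

static void error_propagate(Error **dst_errp, Error *local_err)
{
    if (dst_errp && local_err) {
        *dst_errp = local_err;
    } else {
        free(local_err);
    }
}

/* Scope-exit propagation: the core idea behind ERRP_AUTO_PROPAGATE(). */
typedef struct ErrorPropagator {
    Error *local_err;
    Error **errp;
} ErrorPropagator;

static void error_propagator_cleanup(ErrorPropagator *prop)
{
    error_propagate(prop->errp, prop->local_err);
}

#define ERRP_AUTO_PROPAGATE()                               \
    __attribute__((cleanup(error_propagator_cleanup)))      \
    ErrorPropagator _prop = { .errp = errp };               \
    errp = &_prop.local_err

/* error_prepend() now always operates on a real local error; the
 * result is propagated to the caller's errp when the scope exits. */
bool demo_realize(Error **errp)
{
    ERRP_AUTO_PROPAGATE();
    error_setg(errp, "device not found");
    error_prepend(errp, "failed to init SD card: ");
    return false;
}
```

The real macro in include/qapi/error.h redirects errp only when it is
NULL or points at error_fatal; otherwise it leaves it alone, which is
why error_abort still aborts at the error_setg() site.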

This commit is generated by command

    sed -n '/^Parallel NOR Flash devices$/,/^$/{s/^F: //p}' \
        MAINTAINERS | \
    xargs git ls-files | grep '\.[hc]$' | \
    xargs spatch \
        --sp-file scripts/coccinelle/auto-propagated-errp.cocci \
        --macro-file scripts/cocci-macro-file.h \
        --in-place --no-show-diff --max-width 80
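
The sed invocation above pulls one section's F: file patterns out of
MAINTAINERS with a range address. A self-contained demonstration on a
made-up MAINTAINERS fragment (the file name and contents here are
hypothetical):

```shell
# Create a hypothetical MAINTAINERS excerpt: "F: <path>" lines inside
# a section that ends at the first blank line.
cat > /tmp/maintainers.demo <<'EOF'
Parallel NOR Flash devices
M: Someone <someone@example.org>
F: hw/block/pflash_cfi01.c
F: hw/block/pflash_cfi02.c

Other section
F: other/file.c
EOF

# -n suppresses default output; the /start/,/end/ range restricts the
# commands to one section; "s/^F: //p" prints only the file lines with
# the "F: " prefix stripped.
sed -n '/^Parallel NOR Flash devices$/,/^$/{s/^F: //p}' /tmp/maintainers.demo
```

This prints the two pflash paths and nothing from the other section;
piping the result through `git ls-files` and `xargs spatch` as above
then limits the conversion to the files of that one subsystem.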

Reported-by: Kevin Wolf <kwolf@redhat.com>
Reported-by: Greg Kurz <groug@kaod.org>
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
[Commit message tweaked]
Signed-off-by: Markus Armbruster <armbru@redhat.com>
---
 hw/block/pflash_cfi01.c | 7 +++----
 hw/block/pflash_cfi02.c | 7 +++----
 2 files changed, 6 insertions(+), 8 deletions(-)

diff --git a/hw/block/pflash_cfi01.c b/hw/block/pflash_cfi01.c
index cddc3a5a0c..859cfeae14 100644
--- a/hw/block/pflash_cfi01.c
+++ b/hw/block/pflash_cfi01.c
@@ -696,12 +696,12 @@ static const MemoryRegionOps pflash_cfi01_ops = {
 
 static void pflash_cfi01_realize(DeviceState *dev, Error **errp)
 {
+    ERRP_AUTO_PROPAGATE();
     PFlashCFI01 *pfl = PFLASH_CFI01(dev);
     uint64_t total_len;
     int ret;
     uint64_t blocks_per_device, sector_len_per_device, device_len;
     int num_devices;
-    Error *local_err = NULL;
 
     if (pfl->sector_len == 0) {
         error_setg(errp, "attribute \"sector-length\" not specified or zero.");
@@ -735,9 +735,8 @@ static void pflash_cfi01_realize(DeviceState *dev, Error **errp)
         &pfl->mem, OBJECT(dev),
         &pflash_cfi01_ops,
         pfl,
-        pfl->name, total_len, &local_err);
-    if (local_err) {
-        error_propagate(errp, local_err);
+        pfl->name, total_len, errp);
+    if (*errp) {
         return;
     }
 
diff --git a/hw/block/pflash_cfi02.c b/hw/block/pflash_cfi02.c
index b40ce2335a..15035ee5ef 100644
--- a/hw/block/pflash_cfi02.c
+++ b/hw/block/pflash_cfi02.c
@@ -724,9 +724,9 @@ static const MemoryRegionOps pflash_cfi02_ops = {
 
 static void pflash_cfi02_realize(DeviceState *dev, Error **errp)
 {
+    ERRP_AUTO_PROPAGATE();
     PFlashCFI02 *pfl = PFLASH_CFI02(dev);
     int ret;
-    Error *local_err = NULL;
 
     if (pfl->uniform_sector_len == 0 && pfl->sector_len[0] == 0) {
         error_setg(errp, "attribute \"sector-length\" not specified or zero.");
@@ -792,9 +792,8 @@ static void pflash_cfi02_realize(DeviceState *dev, Error **errp)
 
     memory_region_init_rom_device(&pfl->orig_mem, OBJECT(pfl),
                                   &pflash_cfi02_ops, pfl, pfl->name,
-                                  pfl->chip_len, &local_err);
-    if (local_err) {
-        error_propagate(errp, local_err);
+                                  pfl->chip_len, errp);
+    if (*errp) {
         return;
     }
 
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Tue Jul 07 16:51:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jul 2020 16:51:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jsqo9-0000F3-6S; Tue, 07 Jul 2020 16:51:01 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Gwtg=AS=redhat.com=armbru@srs-us1.protection.inumbo.net>)
 id 1jsqo8-0000Dr-22
 for xen-devel@lists.xenproject.org; Tue, 07 Jul 2020 16:51:00 +0000
X-Inumbo-ID: 05964c12-c072-11ea-8496-bc764e2007e4
Received: from us-smtp-delivery-1.mimecast.com (unknown [205.139.110.120])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 05964c12-c072-11ea-8496-bc764e2007e4;
 Tue, 07 Jul 2020 16:50:54 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
 s=mimecast20190719; t=1594140653;
 h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
 content-transfer-encoding:content-transfer-encoding:
 in-reply-to:in-reply-to:references:references;
 bh=tfYdRm061SMd4nDtfstsQpb4KLifF83vlL7jhH/NxAw=;
 b=aqalNxokzJJMU6DCOQCyRYjgpHSlmK50s1VzpXX9wESmQyGpiVezToCpdsr5mwB0InYXzg
 lYUHH+lnxohvdNYiWGg6CI2QZ5IzdAj+XuUiqyIAwP3bRxqtzY+yuSHvhd+4ZCzV1nODua
 09f+uAX4HDXOI+35FyrEQW6C5XyQYpo=
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-486-jgdjWGplOOuWAbWVG03AkQ-1; Tue, 07 Jul 2020 12:50:48 -0400
X-MC-Unique: jgdjWGplOOuWAbWVG03AkQ-1
Received: from smtp.corp.redhat.com (int-mx02.intmail.prod.int.phx2.redhat.com
 [10.5.11.12])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id D8A9191177;
 Tue,  7 Jul 2020 16:50:46 +0000 (UTC)
Received: from blackfin.pond.sub.org (ovpn-112-143.ams2.redhat.com
 [10.36.112.143])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id 9C55160E1C;
 Tue,  7 Jul 2020 16:50:40 +0000 (UTC)
Received: by blackfin.pond.sub.org (Postfix, from userid 1000)
 id 21DD71132922; Tue,  7 Jul 2020 18:50:37 +0200 (CEST)
From: Markus Armbruster <armbru@redhat.com>
To: qemu-devel@nongnu.org
Subject: [PATCH v12 2/8] scripts: Coccinelle script to use
 ERRP_AUTO_PROPAGATE()
Date: Tue,  7 Jul 2020 18:50:31 +0200
Message-Id: <20200707165037.1026246-3-armbru@redhat.com>
In-Reply-To: <20200707165037.1026246-1-armbru@redhat.com>
References: <20200707165037.1026246-1-armbru@redhat.com>
MIME-Version: 1.0
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.12
Authentication-Results: relay.mimecast.com;
 auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=armbru@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin Wolf <kwolf@redhat.com>, vsementsov@virtuozzo.com,
 Michael Roth <mdroth@linux.vnet.ibm.com>, qemu-block@nongnu.org,
 Paul Durrant <paul@xen.org>, Laszlo Ersek <lersek@redhat.com>,
 Christian Schoenebeck <qemu_oss@crudebyte.com>, groug@kaod.org,
 Max Reitz <mreitz@redhat.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Gerd Hoffmann <kraxel@redhat.com>, Stefan Hajnoczi <stefanha@redhat.com>,
 Anthony Perard <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org,
 =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>

The script adds an ERRP_AUTO_PROPAGATE() macro invocation where
appropriate and makes the corresponding changes in the code (see
include/qapi/error.h for details).

Usage example:
spatch --sp-file scripts/coccinelle/auto-propagated-errp.cocci \
 --macro-file scripts/cocci-macro-file.h --in-place --no-show-diff \
 --max-width 80 FILES...

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
---
 scripts/coccinelle/auto-propagated-errp.cocci | 337 ++++++++++++++++++
 include/qapi/error.h                          |   3 +
 MAINTAINERS                                   |   1 +
 3 files changed, 341 insertions(+)
 create mode 100644 scripts/coccinelle/auto-propagated-errp.cocci

diff --git a/scripts/coccinelle/auto-propagated-errp.cocci b/scripts/coccinelle/auto-propagated-errp.cocci
new file mode 100644
index 0000000000..c29f695adf
--- /dev/null
+++ b/scripts/coccinelle/auto-propagated-errp.cocci
@@ -0,0 +1,337 @@
+// Use ERRP_AUTO_PROPAGATE (see include/qapi/error.h)
+//
+// Copyright (c) 2020 Virtuozzo International GmbH.
+//
+// This program is free software; you can redistribute it and/or
+// modify it under the terms of the GNU General Public License as
+// published by the Free Software Foundation; either version 2 of the
+// License, or (at your option) any later version.
+//
+// This program is distributed in the hope that it will be useful,
+// but WITHOUT ANY WARRANTY; without even the implied warranty of
+// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+// GNU General Public License for more details.
+//
+// You should have received a copy of the GNU General Public License
+// along with this program.  If not, see
+// <http://www.gnu.org/licenses/>.
+//
+// Usage example:
+// spatch --sp-file scripts/coccinelle/auto-propagated-errp.cocci \
+//  --macro-file scripts/cocci-macro-file.h --in-place \
+//  --no-show-diff --max-width 80 FILES...
+//
+// Note: --max-width 80 is needed because Coccinelle's default is less
+// than 80, and without this parameter Coccinelle may reindent some
+// lines that fit into 80 characters but not into its default width,
+// which in turn produces extra patch hunks for no reason.
+
+// Switch unusual Error ** parameter names to errp
+// (this is necessary to use ERRP_AUTO_PROPAGATE).
+//
+// Disable optional_qualifier to skip functions with an
+// "Error *const *errp" parameter.
+//
+// Skip functions with an "assert(_errp && *_errp)" statement, because
+// that signals unusual semantics, and the parameter name may well
+// serve a purpose (like nbd_iter_channel_error()).
+//
+// Skip util/error.c to not touch, for example, error_propagate() and
+// error_propagate_prepend().
+@ depends on !(file in "util/error.c") disable optional_qualifier@
+identifier fn;
+identifier _errp != errp;
+@@
+
+ fn(...,
+-   Error **_errp
++   Error **errp
+    ,...)
+ {
+(
+     ... when != assert(_errp && *_errp)
+&
+     <...
+-    _errp
++    errp
+     ...>
+)
+ }
+
+// Add invocation of ERRP_AUTO_PROPAGATE to errp-functions where
+// necessary
+//
+// Note that without "when any" the final "..." does not match
+// anything matched by the previous pattern, i.e. the rule will not
+// match a double error_prepend in control flow like in
+// vfio_set_irq_signaling().
+//
+// Note, "exists" says that we want to apply the rule even if it does
+// not match on all possible control flows (otherwise, it would not
+// match the standard pattern with error_propagate() in an if branch).
+@ disable optional_qualifier exists@
+identifier fn, local_err;
+symbol errp;
+@@
+
+ fn(..., Error **errp, ...)
+ {
++   ERRP_AUTO_PROPAGATE();
+    ...  when != ERRP_AUTO_PROPAGATE();
+(
+(
+    error_append_hint(errp, ...);
+|
+    error_prepend(errp, ...);
+|
+    error_vprepend(errp, ...);
+)
+    ... when any
+|
+    Error *local_err = NULL;
+    ...
+(
+    error_propagate_prepend(errp, local_err, ...);
+|
+    error_propagate(errp, local_err);
+)
+    ...
+)
+ }
+
+// Warn when several Error * definitions are in the control flow.
+// This rule is not chained to rule1 and is less restrictive, to cover
+// more functions to warn about (even those we are not going to convert).
+//
+// Note that even with one (or zero) Error * definition in each
+// control flow, we may have several (in total) Error * definitions in
+// the function. This case deserves attention too, but I don't see a
+// simple way to match it with the help of Coccinelle.
+@check1 disable optional_qualifier exists@
+identifier fn, _errp, local_err, local_err2;
+position p1, p2;
+@@
+
+ fn(..., Error **_errp, ...)
+ {
+     ...
+     Error *local_err = NULL;@p1
+     ... when any
+     Error *local_err2 = NULL;@p2
+     ... when any
+ }
+
+@ script:python @
+fn << check1.fn;
+p1 << check1.p1;
+p2 << check1.p2;
+@@
+
+print('Warning: function {} has several definitions of '
+      'Error * local variable: at {}:{} and then at {}:{}'.format(
+          fn, p1[0].file, p1[0].line, p2[0].file, p2[0].line))
+
+// Warn when several propagations are in the control flow.
+@check2 disable optional_qualifier exists@
+identifier fn, _errp;
+position p1, p2;
+@@
+
+ fn(..., Error **_errp, ...)
+ {
+     ...
+(
+     error_propagate_prepend(_errp, ...);@p1
+|
+     error_propagate(_errp, ...);@p1
+)
+     ...
+(
+     error_propagate_prepend(_errp, ...);@p2
+|
+     error_propagate(_errp, ...);@p2
+)
+     ... when any
+ }
+
+@ script:python @
+fn << check2.fn;
+p1 << check2.p1;
+p2 << check2.p2;
+@@
+
+print('Warning: function {} propagates to errp several times in '
+      'one control flow: at {}:{} and then at {}:{}'.format(
+          fn, p1[0].file, p1[0].line, p2[0].file, p2[0].line))
+
+// Match functions with propagation of local error to errp.
+// We want to refer to these functions in several following rules, but
+// I don't know a proper way to inherit a function, not just its name
+// (so as not to match other functions with the same name in following
+// rules). The workaround is as follows: rename the errp parameter in
+// the function header and match it in following rules. Rename it back
+// after all transformations.
+//
+// The common case is a single definition of local_err with at most one
+// error_propagate_prepend() or error_propagate() on each control-flow
+// path. Functions with multiple definitions or propagations we want to
+// examine manually. Rules check1 and check2 emit warnings to guide us
+// to them.
+//
+// Note that we match not only this "common case", but any function,
+// which has the "common case" on at least one control-flow path.
+@rule1 disable optional_qualifier exists@
+identifier fn, local_err;
+symbol errp;
+@@
+
+ fn(..., Error **
+-    errp
++    ____
+    , ...)
+ {
+     ...
+     Error *local_err = NULL;
+     ...
+(
+     error_propagate_prepend(errp, local_err, ...);
+|
+     error_propagate(errp, local_err);
+)
+     ...
+ }
+
+// Convert special case with goto separately.
+// I tried merging this into the following rule the obvious way, but
+// it made Coccinelle hang on block.c
+//
+// Note an interesting thing: if we don't do it here, and try to fix
+// up "out: }" things later after all transformations (the rule would
+// be the same, just without the error_propagate() call), Coccinelle
+// fails to match this "out: }".
+@ disable optional_qualifier@
+identifier rule1.fn, rule1.local_err, out;
+symbol errp;
+@@
+
+ fn(..., Error ** ____, ...)
+ {
+     <...
+-    goto out;
++    return;
+     ...>
+- out:
+-    error_propagate(errp, local_err);
+ }
+
+// Convert most of local_err related stuff.
+//
+// Note that we inherit the rule1.fn and rule1.local_err names, not
+// the objects themselves. We may match something not related to the
+// pattern matched by rule1. For example, local_err may be defined with
+// the same name in different blocks inside one function, following
+// the propagation pattern in one block but not in another.
+//
+// Note also that errp-cleaning functions
+//   error_free_errp
+//   error_report_errp
+//   error_reportf_errp
+//   warn_report_errp
+//   warn_reportf_errp
+// are not yet implemented. They must call the corresponding Error-
+// freeing function and then set *errp to NULL, to avoid further
+// propagation to the original errp (with ERRP_AUTO_PROPAGATE in use).
+// For example, error_free_errp may look like this:
+//
+//    void error_free_errp(Error **errp)
+//    {
+//        error_free(*errp);
+//        *errp = NULL;
+//    }
+@ disable optional_qualifier exists@
+identifier rule1.fn, rule1.local_err;
+expression list args;
+symbol errp;
+@@
+
+ fn(..., Error ** ____, ...)
+ {
+     <...
+(
+-    Error *local_err = NULL;
+|
+
+// Convert error clearing functions
+(
+-    error_free(local_err);
++    error_free_errp(errp);
+|
+-    error_report_err(local_err);
++    error_report_errp(errp);
+|
+-    error_reportf_err(local_err, args);
++    error_reportf_errp(errp, args);
+|
+-    warn_report_err(local_err);
++    warn_report_errp(errp);
+|
+-    warn_reportf_err(local_err, args);
++    warn_reportf_errp(errp, args);
+)
+?-    local_err = NULL;
+
+|
+-    error_propagate_prepend(errp, local_err, args);
++    error_prepend(errp, args);
+|
+-    error_propagate(errp, local_err);
+|
+-    &local_err
++    errp
+)
+     ...>
+ }
+
+// Convert remaining local_err usage. For example, different kinds of
+// error checking in if conditionals. We can't merge this into
+// the previous hunk, as it conflicts with other substitutions in it (at
+// least with "- local_err = NULL").
+@ disable optional_qualifier@
+identifier rule1.fn, rule1.local_err;
+symbol errp;
+@@
+
+ fn(..., Error ** ____, ...)
+ {
+     <...
+-    local_err
++    *errp
+     ...>
+ }
+
+// Always use the same pattern for checking errors
+@ disable optional_qualifier@
+identifier rule1.fn;
+symbol errp;
+@@
+
+ fn(..., Error ** ____, ...)
+ {
+     <...
+-    *errp != NULL
++    *errp
+     ...>
+ }
+
+// Revert the temporary ____ identifier.
+@ disable optional_qualifier@
+identifier rule1.fn;
+@@
+
+ fn(..., Error **
+-   ____
++   errp
+    , ...)
+ {
+     ...
+ }
diff --git a/include/qapi/error.h b/include/qapi/error.h
index c865a7d2f1..91547fe4ea 100644
--- a/include/qapi/error.h
+++ b/include/qapi/error.h
@@ -264,6 +264,9 @@
  *         }
  *         ...
  *     }
+ *
+ * For mass-conversion, use script
+ *   scripts/coccinelle/auto-propagated-errp.cocci
  */
 
 #ifndef ERROR_H
diff --git a/MAINTAINERS b/MAINTAINERS
index c31c878c63..121953b24d 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -2166,6 +2166,7 @@ F: scripts/coccinelle/error-use-after-free.cocci
 F: scripts/coccinelle/error_propagate_null.cocci
 F: scripts/coccinelle/remove_local_err.cocci
 F: scripts/coccinelle/use-error_fatal.cocci
+F: scripts/coccinelle/auto-propagated-errp.cocci
 
 GDB stub
 M: Alex Bennée <alex.bennee@linaro.org>
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Tue Jul 07 16:51:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jul 2020 16:51:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jsqo9-0000FB-Fm; Tue, 07 Jul 2020 16:51:01 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Gwtg=AS=redhat.com=armbru@srs-us1.protection.inumbo.net>)
 id 1jsqo8-0000Ey-IF
 for xen-devel@lists.xenproject.org; Tue, 07 Jul 2020 16:51:00 +0000
X-Inumbo-ID: 07f329a9-c072-11ea-8dc2-12813bfff9fa
Received: from us-smtp-1.mimecast.com (unknown [205.139.110.61])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 07f329a9-c072-11ea-8dc2-12813bfff9fa;
 Tue, 07 Jul 2020 16:50:59 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
 s=mimecast20190719; t=1594140659;
 h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
 content-transfer-encoding:content-transfer-encoding:
 in-reply-to:in-reply-to:references:references;
 bh=GtSQ9ltFDy+ZS6gTE2EcxmbrVjKJQ192ZTarFddQmh4=;
 b=IdDMDYb/dEoY+IcSojtJq4x9TvwqizZgjdmay49JAj2Ox5uivpsDaZhhlUlMIHHRQEq0XB
 kt2zhOLsOW4NViO3QKYTB9Hcb46uvSixtEPxtRw3X2PFSgIQoZHxzgowVMtPLwkynGqHMp
 oNc0tsGch4HuRRSQl/HK1WxdHvaj59o=
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-450-0UWQII4DOdu_QowMQ0wkdA-1; Tue, 07 Jul 2020 12:50:55 -0400
X-MC-Unique: 0UWQII4DOdu_QowMQ0wkdA-1
Received: from smtp.corp.redhat.com (int-mx06.intmail.prod.int.phx2.redhat.com
 [10.5.11.16])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 39B2E80183C;
 Tue,  7 Jul 2020 16:50:54 +0000 (UTC)
Received: from blackfin.pond.sub.org (ovpn-112-143.ams2.redhat.com
 [10.36.112.143])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id 8D5415C1D0;
 Tue,  7 Jul 2020 16:50:48 +0000 (UTC)
Received: by blackfin.pond.sub.org (Postfix, from userid 1000)
 id 3102611326F0; Tue,  7 Jul 2020 18:50:37 +0200 (CEST)
From: Markus Armbruster <armbru@redhat.com>
To: qemu-devel@nongnu.org
Subject: [PATCH v12 6/8] virtio-9p: Use ERRP_AUTO_PROPAGATE()
Date: Tue,  7 Jul 2020 18:50:35 +0200
Message-Id: <20200707165037.1026246-7-armbru@redhat.com>
In-Reply-To: <20200707165037.1026246-1-armbru@redhat.com>
References: <20200707165037.1026246-1-armbru@redhat.com>
MIME-Version: 1.0
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.16
Authentication-Results: relay.mimecast.com;
 auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=armbru@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin Wolf <kwolf@redhat.com>, vsementsov@virtuozzo.com,
 Michael Roth <mdroth@linux.vnet.ibm.com>, qemu-block@nongnu.org,
 Paul Durrant <paul@xen.org>, Laszlo Ersek <lersek@redhat.com>,
 Christian Schoenebeck <qemu_oss@crudebyte.com>, groug@kaod.org,
 Max Reitz <mreitz@redhat.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Gerd Hoffmann <kraxel@redhat.com>, Stefan Hajnoczi <stefanha@redhat.com>,
 Anthony Perard <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org,
 =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>

If we want to add some info to errp (by error_prepend() or
error_append_hint()), we must use the ERRP_AUTO_PROPAGATE macro.
Otherwise, this info will not be added when errp == &error_fatal
(the program will exit prior to the error_append_hint() or
error_prepend() call).  Fix such cases.

If we want to check the error after a call to an errp-function, we
previously had to introduce a local_err and then propagate it to errp.
Instead, use the ERRP_AUTO_PROPAGATE macro; the benefits are:
1. No need for an explicit error_propagate() call
2. No need for an explicit local_err variable: use errp directly
3. ERRP_AUTO_PROPAGATE leaves errp as is if it's not NULL or
   &error_fatal, which means we don't break error_abort
   (we'll abort on error_set, not on error_propagate)

This commit is generated by command

    sed -n '/^virtio-9p$/,/^$/{s/^F: //p}' MAINTAINERS | \
    xargs git ls-files | grep '\.[hc]$' | \
    xargs spatch \
        --sp-file scripts/coccinelle/auto-propagated-errp.cocci \
        --macro-file scripts/cocci-macro-file.h \
        --in-place --no-show-diff --max-width 80

Reported-by: Kevin Wolf <kwolf@redhat.com>
Reported-by: Greg Kurz <groug@kaod.org>
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Acked-by: Greg Kurz <groug@kaod.org>
Reviewed-by: Christian Schoenebeck <qemu_oss@crudebyte.com>
[Commit message tweaked]
Signed-off-by: Markus Armbruster <armbru@redhat.com>
---
 hw/9pfs/9p-local.c | 12 +++++-------
 hw/9pfs/9p.c       |  1 +
 2 files changed, 6 insertions(+), 7 deletions(-)

diff --git a/hw/9pfs/9p-local.c b/hw/9pfs/9p-local.c
index 54e012e5b4..0361e0c0b4 100644
--- a/hw/9pfs/9p-local.c
+++ b/hw/9pfs/9p-local.c
@@ -1479,10 +1479,10 @@ static void error_append_security_model_hint(Error *const *errp)
 
 static int local_parse_opts(QemuOpts *opts, FsDriverEntry *fse, Error **errp)
 {
+    ERRP_AUTO_PROPAGATE();
     const char *sec_model = qemu_opt_get(opts, "security_model");
     const char *path = qemu_opt_get(opts, "path");
     const char *multidevs = qemu_opt_get(opts, "multidevs");
-    Error *local_err = NULL;
 
     if (!sec_model) {
         error_setg(errp, "security_model property not set");
@@ -1516,11 +1516,10 @@ static int local_parse_opts(QemuOpts *opts, FsDriverEntry *fse, Error **errp)
             fse->export_flags &= ~V9FS_FORBID_MULTIDEVS;
             fse->export_flags &= ~V9FS_REMAP_INODES;
         } else {
-            error_setg(&local_err, "invalid multidevs property '%s'",
+            error_setg(errp, "invalid multidevs property '%s'",
                        multidevs);
-            error_append_hint(&local_err, "Valid options are: multidevs="
+            error_append_hint(errp, "Valid options are: multidevs="
                               "[remap|forbid|warn]\n");
-            error_propagate(errp, local_err);
             return -1;
         }
     }
@@ -1530,9 +1529,8 @@ static int local_parse_opts(QemuOpts *opts, FsDriverEntry *fse, Error **errp)
         return -1;
     }
 
-    if (fsdev_throttle_parse_opts(opts, &fse->fst, &local_err)) {
-        error_propagate_prepend(errp, local_err,
-                                "invalid throttle configuration: ");
+    if (fsdev_throttle_parse_opts(opts, &fse->fst, errp)) {
+        error_prepend(errp, "invalid throttle configuration: ");
         return -1;
     }
 
diff --git a/hw/9pfs/9p.c b/hw/9pfs/9p.c
index 9755fba9a9..bdb1360482 100644
--- a/hw/9pfs/9p.c
+++ b/hw/9pfs/9p.c
@@ -4011,6 +4011,7 @@ void pdu_submit(V9fsPDU *pdu, P9MsgHeader *hdr)
 int v9fs_device_realize_common(V9fsState *s, const V9fsTransport *t,
                                Error **errp)
 {
+    ERRP_AUTO_PROPAGATE();
     int i, len;
     struct stat stat;
     FsDriverEntry *fse;
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Tue Jul 07 16:51:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jul 2020 16:51:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jsqnz-0000Dk-9G; Tue, 07 Jul 2020 16:50:51 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Gwtg=AS=redhat.com=armbru@srs-us1.protection.inumbo.net>)
 id 1jsqny-0000Df-4l
 for xen-devel@lists.xenproject.org; Tue, 07 Jul 2020 16:50:50 +0000
X-Inumbo-ID: 02b555cf-c072-11ea-8dc2-12813bfff9fa
Received: from us-smtp-delivery-1.mimecast.com (unknown [205.139.110.61])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 02b555cf-c072-11ea-8dc2-12813bfff9fa;
 Tue, 07 Jul 2020 16:50:49 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
 s=mimecast20190719; t=1594140649;
 h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
 content-transfer-encoding:content-transfer-encoding:
 in-reply-to:in-reply-to:references:references;
 bh=KKO2Q9u+xz+qTdzJ9ZmsOxTNiwSVbF6iRvMZkU+WmSs=;
 b=HlfXZsqzUZIFp22x0kaGtSSoFKguIpL+8VD/c3Sin28ZWkkHjVUn+YUTv791APz7bv6jRW
 iqIgn6EDd3tSjQqwd6MPZ6J7Vusw7H01gOLkp0qXCIC3ChzycMlJKH72lDDRiDPkFqxoVJ
 pEKTSdl/zhn5S9/13JhpOFbZ48ng8ww=
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-408-mq3_WHHyNKKe-_lpW2tf_w-1; Tue, 07 Jul 2020 12:50:46 -0400
X-MC-Unique: mq3_WHHyNKKe-_lpW2tf_w-1
Received: from smtp.corp.redhat.com (int-mx05.intmail.prod.int.phx2.redhat.com
 [10.5.11.15])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id E66C7461;
 Tue,  7 Jul 2020 16:50:44 +0000 (UTC)
Received: from blackfin.pond.sub.org (ovpn-112-143.ams2.redhat.com
 [10.36.112.143])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id 9169A73FC0;
 Tue,  7 Jul 2020 16:50:38 +0000 (UTC)
Received: by blackfin.pond.sub.org (Postfix, from userid 1000)
 id 1DA4A1132921; Tue,  7 Jul 2020 18:50:37 +0200 (CEST)
From: Markus Armbruster <armbru@redhat.com>
To: qemu-devel@nongnu.org
Subject: [PATCH v12 1/8] error: New macro ERRP_AUTO_PROPAGATE()
Date: Tue,  7 Jul 2020 18:50:30 +0200
Message-Id: <20200707165037.1026246-2-armbru@redhat.com>
In-Reply-To: <20200707165037.1026246-1-armbru@redhat.com>
References: <20200707165037.1026246-1-armbru@redhat.com>
MIME-Version: 1.0
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.15
Authentication-Results: relay.mimecast.com;
 auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=armbru@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin Wolf <kwolf@redhat.com>, vsementsov@virtuozzo.com,
 Michael Roth <mdroth@linux.vnet.ibm.com>, qemu-block@nongnu.org,
 Paul Durrant <paul@xen.org>, Laszlo Ersek <lersek@redhat.com>,
 Christian Schoenebeck <qemu_oss@crudebyte.com>, groug@kaod.org,
 Eric Blake <eblake@redhat.com>, Max Reitz <mreitz@redhat.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Gerd Hoffmann <kraxel@redhat.com>,
 Stefan Hajnoczi <stefanha@redhat.com>,
 Anthony Perard <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org,
 =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>

Introduce a new ERRP_AUTO_PROPAGATE macro, to be used at the start of
functions with an errp OUT parameter.

It has three goals:

1. Fix an issue with error_fatal and error_prepend()/error_append_hint():
the user can't see this additional information, because exit() happens in
error_setg() before the information is added. [Reported by Greg Kurz]

2. Fix an issue with error_abort and error_propagate(): when we wrap
error_abort with local_err+error_propagate(), the resulting coredump
refers to error_propagate() and not to the place where the error
happened.  (The macro itself doesn't fix the issue, but it allows us to
[3.] drop the local_err+error_propagate pattern, which definitely fixes
it.) [Reported by Kevin Wolf]

3. Drop the local_err+error_propagate pattern, which is used to work
around void functions with an errp parameter when the caller wants to
know the resulting status.  (Note: these functions could instead simply
be updated to return an int error code.)

To achieve these goals, later patches will add invocations of this
macro at the start of functions that either use
error_prepend()/error_append_hint() (solving 1) or use
local_err+error_propagate() to check errors, switching those functions
to use *errp instead (solving 2 and 3).
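The exit()-before-hint problem of goal 1, and the way rewriting @errp fixes it, can be sketched in plain, self-contained C. This is a simplified model, not QEMU's implementation: Error, error_setg_sketch(), error_append_hint_sketch(), and the simulated exit() (recorded in last_fatal_msg) are stand-ins, and GLib's g_auto() is replaced by the GCC/Clang cleanup attribute.

```c
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Minimal stand-in for QEMU's Error: a fixed-size message buffer. */
typedef struct Error {
    char msg[256];
} Error;

/* Stand-in for QEMU's special &error_fatal destination. */
static Error *error_fatal;

/* What the simulated exit() would have printed, kept for inspection. */
static char last_fatal_msg[512];

static void error_setg_sketch(Error **errp, const char *msg)
{
    if (errp == &error_fatal) {
        /* The real error_setg() prints and exit()s right here, before
         * any later error_append_hint() can run (goal 1). */
        snprintf(last_fatal_msg, sizeof(last_fatal_msg), "%s", msg);
        return;
    }
    if (errp) {
        Error *err = calloc(1, sizeof(*err));
        snprintf(err->msg, sizeof(err->msg), "%s", msg);
        *errp = err;
    }
}

static void error_append_hint_sketch(Error **errp, const char *hint)
{
    /* With a plain &error_fatal the "program" already exited above,
     * so this is a lost update. */
    if (errp && errp != &error_fatal && *errp) {
        size_t len = strlen((*errp)->msg);
        snprintf((*errp)->msg + len, sizeof((*errp)->msg) - len, "%s", hint);
    }
}

/* Hand-rolled equivalent of the patch's ERRP_AUTO_PROPAGATE(). */
typedef struct ErrorPropagator {
    Error *local_err;
    Error **errp;
} ErrorPropagator;

static void error_propagator_cleanup(ErrorPropagator *prop)
{
    if (prop->local_err) {
        /* Deliver the accumulated error to the original destination;
         * for &error_fatal this is where the (full) message "exits". */
        error_setg_sketch(prop->errp, prop->local_err->msg);
        free(prop->local_err);
    }
}

#define ERRP_AUTO_PROPAGATE_SKETCH()                                    \
    __attribute__((cleanup(error_propagator_cleanup)))                  \
    ErrorPropagator _auto_errp_prop = { .errp = errp };                 \
    do {                                                                \
        if (!errp || errp == &error_fatal) {                            \
            errp = &_auto_errp_prop.local_err;                          \
        }                                                               \
    } while (0)

/* Without the macro: the hint is lost for &error_fatal. */
static void set_quark_unguarded(Error **errp)
{
    error_setg_sketch(errp, "invalid quark");
    error_append_hint_sketch(errp, " -- valid quarks are up, down, ...");
}

/* With the macro: the hint lands on the local error before the cleanup
 * handler delivers the complete message. */
static void set_quark_guarded(Error **errp)
{
    ERRP_AUTO_PROPAGATE_SKETCH();
    error_setg_sketch(errp, "invalid quark");
    error_append_hint_sketch(errp, " -- valid quarks are up, down, ...");
}
```

Calling set_quark_unguarded(&error_fatal) leaves last_fatal_msg without the hint; set_quark_guarded(&error_fatal) delivers message plus hint.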

Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Paul Durrant <paul@xen.org>
Reviewed-by: Greg Kurz <groug@kaod.org>
Reviewed-by: Eric Blake <eblake@redhat.com>
[Comments merged properly with recent commit "error: Document Error
API usage rules", and edited for clarity.  Put ERRP_AUTO_PROPAGATE()
before its helpers, and touch up style.  Commit message tweaked.]
Signed-off-by: Markus Armbruster <armbru@redhat.com>
---
 include/qapi/error.h | 160 ++++++++++++++++++++++++++++++++++++++-----
 1 file changed, 141 insertions(+), 19 deletions(-)

diff --git a/include/qapi/error.h b/include/qapi/error.h
index 3fed49747d..c865a7d2f1 100644
--- a/include/qapi/error.h
+++ b/include/qapi/error.h
@@ -30,6 +30,10 @@
  *   job.  Since the value of @errp is about handling the error, the
  *   function should not examine it.
  *
+ * - The function may pass @errp to functions it calls to pass on
+ *   their errors to its caller.  If it dereferences @errp to check
+ *   for errors, it must use ERRP_AUTO_PROPAGATE().
+ *
  * - On success, the function should not touch *errp.  On failure, it
  *   should set a new error, e.g. with error_setg(errp, ...), or
  *   propagate an existing one, e.g. with error_propagate(errp, ...).
@@ -45,15 +49,17 @@
  * = Creating errors =
  *
  * Create an error:
- *     error_setg(&err, "situation normal, all fouled up");
+ *     error_setg(errp, "situation normal, all fouled up");
+ * where @errp points to the location to receive the error.
  *
  * Create an error and add additional explanation:
- *     error_setg(&err, "invalid quark");
- *     error_append_hint(&err, "Valid quarks are up, down, strange, "
+ *     error_setg(errp, "invalid quark");
+ *     error_append_hint(errp, "Valid quarks are up, down, strange, "
  *                       "charm, top, bottom.\n");
+ * This may require use of ERRP_AUTO_PROPAGATE(); more on that below.
  *
  * Do *not* contract this to
- *     error_setg(&err, "invalid quark\n" // WRONG!
+ *     error_setg(errp, "invalid quark\n" // WRONG!
  *                "Valid quarks are up, down, strange, charm, top, bottom.");
  *
  * = Reporting and destroying errors =
@@ -107,18 +113,6 @@
  * Errors get passed to the caller through the conventional @errp
  * parameter.
  *
- * Pass an existing error to the caller:
- *     error_propagate(errp, err);
- * where Error **errp is a parameter, by convention the last one.
- *
- * Pass an existing error to the caller with the message modified:
- *     error_propagate_prepend(errp, err,
- *                             "Could not frobnicate '%s': ", name);
- * This is more concise than
- *     error_propagate(errp, err); // don't do this
- *     error_prepend(errp, "Could not frobnicate '%s': ", name);
- * and works even when @errp is &error_fatal.
- *
  * Create a new error and pass it to the caller:
  *     error_setg(errp, "situation normal, all fouled up");
  *
@@ -128,18 +122,26 @@
  *         handle the error...
  *     }
  * when it doesn't, say a void function:
+ *     ERRP_AUTO_PROPAGATE();
+ *     foo(arg, errp);
+ *     if (*errp) {
+ *         handle the error...
+ *     }
+ * More on ERRP_AUTO_PROPAGATE() below.
+ *
+ * Code predating ERRP_AUTO_PROPAGATE() still exists, and looks like this:
  *     Error *err = NULL;
  *     foo(arg, &err);
  *     if (err) {
  *         handle the error...
- *         error_propagate(errp, err);
+ *         error_propagate(errp, err); // deprecated
  *     }
- * Do *not* "optimize" this to
+ * Avoid in new code.  Do *not* "optimize" it to
  *     foo(arg, errp);
  *     if (*errp) { // WRONG!
  *         handle the error...
  *     }
- * because errp may be NULL!
+ * because errp may be NULL!  Guard with ERRP_AUTO_PROPAGATE().
  *
  * But when all you do with the error is pass it on, please use
  *     foo(arg, errp);
@@ -158,6 +160,19 @@
  *         handle the error...
  *     }
  *
+ * Pass an existing error to the caller:
+ *     error_propagate(errp, err);
+ * This is rarely needed.  When @err is a local variable, use of
+ * ERRP_AUTO_PROPAGATE() commonly results in more readable code.
+ *
+ * Pass an existing error to the caller with the message modified:
+ *     error_propagate_prepend(errp, err,
+ *                             "Could not frobnicate '%s': ", name);
+ * This is more concise than
+ *     error_propagate(errp, err); // don't do this
+ *     error_prepend(errp, "Could not frobnicate '%s': ", name);
+ * and works even when @errp is &error_fatal.
+ *
  * Receive and accumulate multiple errors (first one wins):
  *     Error *err = NULL, *local_err = NULL;
  *     foo(arg, &err);
@@ -185,6 +200,70 @@
  *         error_setg(&err, ...); // WRONG!
  *     }
  * because this may pass a non-null err to error_setg().
+ *
+ * = Why, when and how to use ERRP_AUTO_PROPAGATE() =
+ *
+ * Without ERRP_AUTO_PROPAGATE(), use of the @errp parameter is
+ * restricted:
+ * - It must not be dereferenced, because it may be null.
+ * - It should not be passed to error_prepend() or
+ *   error_append_hint(), because that doesn't work with &error_fatal.
+ * ERRP_AUTO_PROPAGATE() lifts these restrictions.
+ *
+ * To use ERRP_AUTO_PROPAGATE(), add it right at the beginning of the
+ * function.  @errp can then be used without worrying about the
+ * argument being NULL or &error_fatal.
+ *
+ * Using it when it's not needed is safe, but please avoid cluttering
+ * the source with useless code.
+ *
+ * = Converting to ERRP_AUTO_PROPAGATE() =
+ *
+ * To convert a function to use ERRP_AUTO_PROPAGATE():
+ *
+ * 0. If the Error ** parameter is not named @errp, rename it to
+ *    @errp.
+ *
+ * 1. Add an ERRP_AUTO_PROPAGATE() invocation, by convention right at
+ *    the beginning of the function.  This makes @errp safe to use.
+ *
+ * 2. Replace &err by errp, and err by *errp.  Delete local variable
+ *    @err.
+ *
+ * 3. Delete error_propagate(errp, *errp), replace
+ *    error_propagate_prepend(errp, *errp, ...) by error_prepend(errp, ...),
+ *
+ * 4. Ensure @errp is valid at return: when you destroy *errp, set
+ *    errp = NULL.
+ *
+ * Example:
+ *
+ *     bool fn(..., Error **errp)
+ *     {
+ *         Error *err = NULL;
+ *
+ *         foo(arg, &err);
+ *         if (err) {
+ *             handle the error...
+ *             error_propagate(errp, err);
+ *             return false;
+ *         }
+ *         ...
+ *     }
+ *
+ * becomes
+ *
+ *     bool fn(..., Error **errp)
+ *     {
+ *         ERRP_AUTO_PROPAGATE();
+ *
+ *         foo(arg, errp);
+ *         if (*errp) {
+ *             handle the error...
+ *             return false;
+ *         }
+ *         ...
+ *     }
  */
 
 #ifndef ERROR_H
@@ -285,6 +364,7 @@ void error_setg_win32_internal(Error **errp,
  * the error object.
  * Else, move the error object from @local_err to *@dst_errp.
  * On return, @local_err is invalid.
+ * Please use ERRP_AUTO_PROPAGATE() instead when possible.
  * Please don't error_propagate(&error_fatal, ...), use
  * error_report_err() and exit(), because that's more obvious.
  */
@@ -296,6 +376,8 @@ void error_propagate(Error **dst_errp, Error *local_err);
  * Behaves like
  *     error_prepend(&local_err, fmt, ...);
  *     error_propagate(dst_errp, local_err);
+ * Please use ERRP_AUTO_PROPAGATE() and error_prepend() instead when
+ * possible.
  */
 void error_propagate_prepend(Error **dst_errp, Error *local_err,
                              const char *fmt, ...);
@@ -393,6 +475,46 @@ void error_set_internal(Error **errp,
                         ErrorClass err_class, const char *fmt, ...)
     GCC_FMT_ATTR(6, 7);
 
+/*
+ * Make @errp parameter easier to use regardless of argument value
+ *
+ * This macro is for use right at the beginning of a function that
+ * takes an Error **errp parameter to pass errors to its caller.  The
+ * parameter must be named @errp.
+ *
+ * It must be used when the function dereferences @errp or passes
+ * @errp to error_prepend(), error_vprepend(), or error_append_hint().
+ * It is safe to use even when it's not needed, but please avoid
+ * cluttering the source with useless code.
+ *
+ * If @errp is NULL or &error_fatal, rewrite it to point to a local
+ * Error variable, which will be automatically propagated to the
+ * original @errp on function exit.
+ *
+ * Note: &error_abort is not rewritten, because that would move the
+ * abort from the place where the error is created to the place where
+ * it's propagated.
+ */
+#define ERRP_AUTO_PROPAGATE()                                   \
+    g_auto(ErrorPropagator) _auto_errp_prop = {.errp = errp};   \
+    do {                                                        \
+        if (!errp || errp == &error_fatal) {                    \
+            errp = &_auto_errp_prop.local_err;                  \
+        }                                                       \
+    } while (0)
+
+typedef struct ErrorPropagator {
+    Error *local_err;
+    Error **errp;
+} ErrorPropagator;
+
+static inline void error_propagator_cleanup(ErrorPropagator *prop)
+{
+    error_propagate(prop->errp, prop->local_err);
+}
+
+G_DEFINE_AUTO_CLEANUP_CLEAR_FUNC(ErrorPropagator, error_propagator_cleanup);
+
 /*
  * Special error destination to abort on error.
  * See error_setg() and error_propagate() for details.
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Tue Jul 07 16:51:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jul 2020 16:51:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jsqoD-0000Hn-Pg; Tue, 07 Jul 2020 16:51:05 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Gwtg=AS=redhat.com=armbru@srs-us1.protection.inumbo.net>)
 id 1jsqoD-0000Ey-08
 for xen-devel@lists.xenproject.org; Tue, 07 Jul 2020 16:51:05 +0000
X-Inumbo-ID: 0b8b342a-c072-11ea-8dc2-12813bfff9fa
Received: from us-smtp-delivery-1.mimecast.com (unknown [205.139.110.61])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 0b8b342a-c072-11ea-8dc2-12813bfff9fa;
 Tue, 07 Jul 2020 16:51:04 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
 s=mimecast20190719; t=1594140663;
 h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
 content-transfer-encoding:content-transfer-encoding;
 bh=10CNakQ3NgheZX/vjI8iYykwppKbLWYe9EuKJd8Y9yA=;
 b=i0puxRCVJ2B2kPGU/OlWHDSw16Xf+e/6zYSK1PTpMNTXfLmneesYGKZ8MCS1WhdqznFC3o
 Wt1Mr2CrnvahpiDZgAFXseau/47/FAkh8h6+rDREK2yyjhr3HKmQX0sKR7PkxB5pBUhrsu
 d8BQV+bTkyELZ5y9oLwxrQKay4oVSXo=
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-354-dQR7_YpFNua-zqFIhuDs5w-1; Tue, 07 Jul 2020 12:50:50 -0400
X-MC-Unique: dQR7_YpFNua-zqFIhuDs5w-1
Received: from smtp.corp.redhat.com (int-mx04.intmail.prod.int.phx2.redhat.com
 [10.5.11.14])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id F2A12461;
 Tue,  7 Jul 2020 16:50:48 +0000 (UTC)
Received: from blackfin.pond.sub.org (ovpn-112-143.ams2.redhat.com
 [10.36.112.143])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id 8DE0F5D9F3;
 Tue,  7 Jul 2020 16:50:48 +0000 (UTC)
Received: by blackfin.pond.sub.org (Postfix, from userid 1000)
 id 1B50B1132FD2; Tue,  7 Jul 2020 18:50:37 +0200 (CEST)
From: Markus Armbruster <armbru@redhat.com>
To: qemu-devel@nongnu.org
Subject: [PATCH v12 0/8] error: auto propagated local_err part I
Date: Tue,  7 Jul 2020 18:50:29 +0200
Message-Id: <20200707165037.1026246-1-armbru@redhat.com>
MIME-Version: 1.0
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.14
Authentication-Results: relay.mimecast.com;
 auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=armbru@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin Wolf <kwolf@redhat.com>, vsementsov@virtuozzo.com,
 Michael Roth <mdroth@linux.vnet.ibm.com>, qemu-block@nongnu.org,
 Paul Durrant <paul@xen.org>, Laszlo Ersek <lersek@redhat.com>,
 Christian Schoenebeck <qemu_oss@crudebyte.com>, groug@kaod.org,
 Max Reitz <mreitz@redhat.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Gerd Hoffmann <kraxel@redhat.com>, Stefan Hajnoczi <stefanha@redhat.com>,
 Anthony Perard <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org,
 =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

To speed things up, I'm taking the liberty to respin Vladimir's series
with my documentation amendments.

After my documentation work, I'm very much inclined to rename
ERRP_AUTO_PROPAGATE() to ERRP_GUARD().  The fact that it propagates
under the hood is a detail.  What matters to its users is that it lets
them use @errp more freely.  Thoughts?

Based-on: Message-Id: <20200707160613.848843-1-armbru@redhat.com>

Available from my public repository https://repo.or.cz/qemu/armbru.git
on branch error-auto.

v12: (based on "[PATCH v4 00/45] Less clumsy error checking")
01: Comments merged properly with recent commit "error: Document Error
API usage rules", and edited for clarity.  Put ERRP_AUTO_PROPAGATE()
before its helpers, and touch up style.
01-08: Commit messages tweaked

Vladimir's cover letter for v11:

v11: (based-on "[PATCH v2 00/44] Less clumsy error checking")
01: minor rebase of documentation, keep r-bs
02: - minor comment tweaks [Markus]
    - use explicit file name in MAINTAINERS instead of pattern
    - add Markus's r-b
03,07,08: rebase changes, drop r-bs


v11 is available at
 https://src.openvz.org/scm/~vsementsov/qemu.git #tag up-auto-local-err-partI-v11
v10 is available at
 https://src.openvz.org/scm/~vsementsov/qemu.git #tag up-auto-local-err-partI-v10

In this series, there is no commit-per-subsystem script; each commit is
generated separately.

Still, the generating commands are very similar, and look like

    sed -n '/^<Subsystem name>$/,/^$/{s/^F: //p}' MAINTAINERS | \
    xargs git ls-files | grep '\.[hc]$' | \
    xargs spatch \
        --sp-file scripts/coccinelle/auto-propagated-errp.cocci \
        --macro-file scripts/cocci-macro-file.h \
        --in-place --no-show-diff --max-width 80

Note that in each generated commit, the generation command is the only
text indented by 8 spaces in 'git log -1' output, so to regenerate all
commits (for example, after a rebase, or a change to the Coccinelle
script), you may use the following command:

git rebase -x "sh -c \"git show --pretty= --name-only | xargs git checkout HEAD^ -- ; git reset; git log -1 | grep '^        ' | sh\"" HEAD~6

This will start an automated interactive rebase over the generated
patches, stopping whenever a regenerated patch changed (you may then do
git commit --amend to apply the updated generated changes).

Note:
  git show --pretty= --name-only   - lists the files changed in HEAD
  git log -1 | grep '^        ' | sh   - reruns the generation command of HEAD


To check that the changed .c files still compile:
git rebase -x "sh -c \"git show --pretty= --name-only | sed -n 's/\.c$/.o/p' | xargs make -j9\"" HEAD~6

Vladimir Sementsov-Ogievskiy (8):
  error: New macro ERRP_AUTO_PROPAGATE()
  scripts: Coccinelle script to use ERRP_AUTO_PROPAGATE()
  sd: Use ERRP_AUTO_PROPAGATE()
  pflash: Use ERRP_AUTO_PROPAGATE()
  fw_cfg: Use ERRP_AUTO_PROPAGATE()
  virtio-9p: Use ERRP_AUTO_PROPAGATE()
  nbd: Use ERRP_AUTO_PROPAGATE()
  xen: Use ERRP_AUTO_PROPAGATE()

 scripts/coccinelle/auto-propagated-errp.cocci | 337 ++++++++++++++++++
 include/block/nbd.h                           |   1 +
 include/qapi/error.h                          | 163 ++++++++-
 block/nbd.c                                   |   7 +-
 hw/9pfs/9p-local.c                            |  12 +-
 hw/9pfs/9p.c                                  |   1 +
 hw/block/dataplane/xen-block.c                |  17 +-
 hw/block/pflash_cfi01.c                       |   7 +-
 hw/block/pflash_cfi02.c                       |   7 +-
 hw/block/xen-block.c                          | 102 +++---
 hw/nvram/fw_cfg.c                             |  14 +-
 hw/pci-host/xen_igd_pt.c                      |   7 +-
 hw/sd/sdhci-pci.c                             |   7 +-
 hw/sd/sdhci.c                                 |  21 +-
 hw/sd/ssi-sd.c                                |  10 +-
 hw/xen/xen-backend.c                          |   7 +-
 hw/xen/xen-bus.c                              |  92 ++---
 hw/xen/xen-host-pci-device.c                  |  27 +-
 hw/xen/xen_pt.c                               |  25 +-
 hw/xen/xen_pt_config_init.c                   |  17 +-
 nbd/client.c                                  |   5 +
 nbd/server.c                                  |   5 +
 MAINTAINERS                                   |   1 +
 23 files changed, 659 insertions(+), 233 deletions(-)
 create mode 100644 scripts/coccinelle/auto-propagated-errp.cocci

-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Tue Jul 07 16:51:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jul 2020 16:51:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jsqoE-0000IB-3a; Tue, 07 Jul 2020 16:51:06 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Gwtg=AS=redhat.com=armbru@srs-us1.protection.inumbo.net>)
 id 1jsqoD-0000Dr-2L
 for xen-devel@lists.xenproject.org; Tue, 07 Jul 2020 16:51:05 +0000
X-Inumbo-ID: 07f41f7a-c072-11ea-8496-bc764e2007e4
Received: from us-smtp-delivery-1.mimecast.com (unknown [207.211.31.81])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 07f41f7a-c072-11ea-8496-bc764e2007e4;
 Tue, 07 Jul 2020 16:50:58 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
 s=mimecast20190719; t=1594140657;
 h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
 content-transfer-encoding:content-transfer-encoding:
 in-reply-to:in-reply-to:references:references;
 bh=aIJau8wOVykrPOVaricLGyo+FPnaWPbGXksNfPDBJtM=;
 b=PU83hAxKcx94CD/7EdNgm9kJphvbYlHMVE0DJrykf3YeBBrXfane8GZ4WhCLohrBfOKp/P
 U3zkTJVqYiVNtWmIM/EsTVB6Z3abK0kpnEigANGKgLBSMnbRZ0gHy1LN7MHvGzvbN8Ewd/
 7+NPfpMp03nFGgHzy7SRtC9/9znkCLs=
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-231-WEDo_pGxM1Wyw7Jv2vbvsw-1; Tue, 07 Jul 2020 12:50:55 -0400
X-MC-Unique: WEDo_pGxM1Wyw7Jv2vbvsw-1
Received: from smtp.corp.redhat.com (int-mx04.intmail.prod.int.phx2.redhat.com
 [10.5.11.14])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id D81738015F6;
 Tue,  7 Jul 2020 16:50:52 +0000 (UTC)
Received: from blackfin.pond.sub.org (ovpn-112-143.ams2.redhat.com
 [10.36.112.143])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id 2A0775D9C9;
 Tue,  7 Jul 2020 16:50:52 +0000 (UTC)
Received: by blackfin.pond.sub.org (Postfix, from userid 1000)
 id 3924F11326F3; Tue,  7 Jul 2020 18:50:37 +0200 (CEST)
From: Markus Armbruster <armbru@redhat.com>
To: qemu-devel@nongnu.org
Subject: [PATCH v12 8/8] xen: Use ERRP_AUTO_PROPAGATE()
Date: Tue,  7 Jul 2020 18:50:37 +0200
Message-Id: <20200707165037.1026246-9-armbru@redhat.com>
In-Reply-To: <20200707165037.1026246-1-armbru@redhat.com>
References: <20200707165037.1026246-1-armbru@redhat.com>
MIME-Version: 1.0
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.14
Authentication-Results: relay.mimecast.com;
 auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=armbru@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin Wolf <kwolf@redhat.com>, vsementsov@virtuozzo.com,
 Michael Roth <mdroth@linux.vnet.ibm.com>, qemu-block@nongnu.org,
 Paul Durrant <paul@xen.org>, Laszlo Ersek <lersek@redhat.com>,
 Christian Schoenebeck <qemu_oss@crudebyte.com>, groug@kaod.org,
 Max Reitz <mreitz@redhat.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Gerd Hoffmann <kraxel@redhat.com>, Stefan Hajnoczi <stefanha@redhat.com>,
 Anthony Perard <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org,
 =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>

If we want to add some info to errp (via error_prepend() or
error_append_hint()), we must use the ERRP_AUTO_PROPAGATE macro.
Otherwise, this info is not added when errp == &error_fatal
(the program exits before the error_append_hint() or
error_prepend() call).  Fix such cases.

If we want to check for an error after a call that takes errp, we
otherwise need to introduce local_err and then propagate it to errp.
Instead, use the ERRP_AUTO_PROPAGATE macro.  The benefits are:
1. No need for an explicit error_propagate() call
2. No need for an explicit local_err variable: use errp directly
3. ERRP_AUTO_PROPAGATE leaves errp as is if it's not NULL or
   &error_fatal, which means we don't break error_abort
   (we'll abort on error_set(), not on error_propagate())
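The converted shape that benefit 2 describes, a function that passes errp straight down and then dereferences it, can be sketched with a minimal self-contained model. All names here (realize_sketch, check_vdev, the stripped-down guard) are hypothetical stand-ins, not QEMU API; the guard is reduced to NULL protection, which is what makes `if (*errp)` safe.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

typedef struct Error {
    const char *msg;
} Error;

/* Stripped-down propagator: only guards against a NULL errp argument. */
typedef struct ErrorPropagator {
    Error *local_err;
    Error **errp;
} ErrorPropagator;

static void error_propagator_cleanup(ErrorPropagator *prop)
{
    if (prop->errp && prop->local_err) {
        *prop->errp = prop->local_err;  /* hand the error to the caller */
    }   /* else: the caller ignores errors, so the error is dropped */
}

#define ERRP_AUTO_PROPAGATE_SKETCH()                                    \
    __attribute__((cleanup(error_propagator_cleanup)))                  \
    ErrorPropagator _auto_errp_prop = { .errp = errp };                 \
    do {                                                                \
        if (!errp) {                                                    \
            errp = &_auto_errp_prop.local_err;                          \
        }                                                               \
    } while (0)

static Error no_vdev_error = { "vdev property not set" };

/* Hypothetical callee with an errp parameter; the caller's guard
 * guarantees errp is non-NULL here. */
static void check_vdev(int vdev_set, Error **errp)
{
    if (!vdev_set) {
        *errp = &no_vdev_error;
    }
}

/* Converted "realize": no local_err, no error_propagate() -- pass errp
 * straight down, then dereference it to stop early. */
static int realize_sketch(int vdev_set, Error **errp)
{
    ERRP_AUTO_PROPAGATE_SKETCH();

    check_vdev(vdev_set, errp);
    if (*errp) {                /* safe even when the caller passed NULL */
        return -1;
    }
    return 0;
}

/* What a caller that does care about the error sees. */
static const char *try_realize(int vdev_set)
{
    Error *err = NULL;

    realize_sketch(vdev_set, &err);
    return err ? err->msg : "ok";
}
```

A caller passing NULL still gets the -1 return without crashing, while a caller passing &err receives the error object on failure.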

This commit is generated by command

    sed -n '/^X86 Xen CPUs$/,/^$/{s/^F: //p}' MAINTAINERS | \
    xargs git ls-files | grep '\.[hc]$' | \
    xargs spatch \
        --sp-file scripts/coccinelle/auto-propagated-errp.cocci \
        --macro-file scripts/cocci-macro-file.h \
        --in-place --no-show-diff --max-width 80

Reported-by: Kevin Wolf <kwolf@redhat.com>
Reported-by: Greg Kurz <groug@kaod.org>
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
[Commit message tweaked]
Signed-off-by: Markus Armbruster <armbru@redhat.com>
---
 hw/block/dataplane/xen-block.c |  17 +++---
 hw/block/xen-block.c           | 102 ++++++++++++++-------------------
 hw/pci-host/xen_igd_pt.c       |   7 +--
 hw/xen/xen-backend.c           |   7 +--
 hw/xen/xen-bus.c               |  92 +++++++++++++----------------
 hw/xen/xen-host-pci-device.c   |  27 +++++----
 hw/xen/xen_pt.c                |  25 ++++----
 hw/xen/xen_pt_config_init.c    |  17 +++---
 8 files changed, 128 insertions(+), 166 deletions(-)

diff --git a/hw/block/dataplane/xen-block.c b/hw/block/dataplane/xen-block.c
index 5f8f15778b..1a077cc05f 100644
--- a/hw/block/dataplane/xen-block.c
+++ b/hw/block/dataplane/xen-block.c
@@ -723,8 +723,8 @@ void xen_block_dataplane_start(XenBlockDataPlane *dataplane,
                                unsigned int protocol,
                                Error **errp)
 {
+    ERRP_AUTO_PROPAGATE();
     XenDevice *xendev = dataplane->xendev;
-    Error *local_err = NULL;
     unsigned int ring_size;
     unsigned int i;
 
@@ -760,9 +760,8 @@ void xen_block_dataplane_start(XenBlockDataPlane *dataplane,
     }
 
     xen_device_set_max_grant_refs(xendev, dataplane->nr_ring_ref,
-                                  &local_err);
-    if (local_err) {
-        error_propagate(errp, local_err);
+                                  errp);
+    if (*errp) {
         goto stop;
     }
 
@@ -770,9 +769,8 @@ void xen_block_dataplane_start(XenBlockDataPlane *dataplane,
                                               dataplane->ring_ref,
                                               dataplane->nr_ring_ref,
                                               PROT_READ | PROT_WRITE,
-                                              &local_err);
-    if (local_err) {
-        error_propagate(errp, local_err);
+                                              errp);
+    if (*errp) {
         goto stop;
     }
 
@@ -805,9 +803,8 @@ void xen_block_dataplane_start(XenBlockDataPlane *dataplane,
     dataplane->event_channel =
         xen_device_bind_event_channel(xendev, event_channel,
                                       xen_block_dataplane_event, dataplane,
-                                      &local_err);
-    if (local_err) {
-        error_propagate(errp, local_err);
+                                      errp);
+    if (*errp) {
         goto stop;
     }
 
diff --git a/hw/block/xen-block.c b/hw/block/xen-block.c
index a775fba7c0..623ae5b8e0 100644
--- a/hw/block/xen-block.c
+++ b/hw/block/xen-block.c
@@ -195,6 +195,7 @@ static const BlockDevOps xen_block_dev_ops = {
 
 static void xen_block_realize(XenDevice *xendev, Error **errp)
 {
+    ERRP_AUTO_PROPAGATE();
     XenBlockDevice *blockdev = XEN_BLOCK_DEVICE(xendev);
     XenBlockDeviceClass *blockdev_class =
         XEN_BLOCK_DEVICE_GET_CLASS(xendev);
@@ -202,7 +203,6 @@ static void xen_block_realize(XenDevice *xendev, Error **errp)
     XenBlockVdev *vdev = &blockdev->props.vdev;
     BlockConf *conf = &blockdev->props.conf;
     BlockBackend *blk = conf->blk;
-    Error *local_err = NULL;
 
     if (vdev->type == XEN_BLOCK_VDEV_TYPE_INVALID) {
         error_setg(errp, "vdev property not set");
@@ -212,9 +212,8 @@ static void xen_block_realize(XenDevice *xendev, Error **errp)
     trace_xen_block_realize(type, vdev->disk, vdev->partition);
 
     if (blockdev_class->realize) {
-        blockdev_class->realize(blockdev, &local_err);
-        if (local_err) {
-            error_propagate(errp, local_err);
+        blockdev_class->realize(blockdev, errp);
+        if (*errp) {
             return;
         }
     }
@@ -280,8 +279,8 @@ static void xen_block_frontend_changed(XenDevice *xendev,
                                        enum xenbus_state frontend_state,
                                        Error **errp)
 {
+    ERRP_AUTO_PROPAGATE();
     enum xenbus_state backend_state = xen_device_backend_get_state(xendev);
-    Error *local_err = NULL;
 
     switch (frontend_state) {
     case XenbusStateInitialised:
@@ -290,15 +289,13 @@ static void xen_block_frontend_changed(XenDevice *xendev,
             break;
         }
 
-        xen_block_disconnect(xendev, &local_err);
-        if (local_err) {
-            error_propagate(errp, local_err);
+        xen_block_disconnect(xendev, errp);
+        if (*errp) {
             break;
         }
 
-        xen_block_connect(xendev, &local_err);
-        if (local_err) {
-            error_propagate(errp, local_err);
+        xen_block_connect(xendev, errp);
+        if (*errp) {
             break;
         }
 
@@ -311,9 +308,8 @@ static void xen_block_frontend_changed(XenDevice *xendev,
 
     case XenbusStateClosed:
     case XenbusStateUnknown:
-        xen_block_disconnect(xendev, &local_err);
-        if (local_err) {
-            error_propagate(errp, local_err);
+        xen_block_disconnect(xendev, errp);
+        if (*errp) {
             break;
         }
 
@@ -665,9 +661,9 @@ static void xen_block_blockdev_del(const char *node_name, Error **errp)
 static char *xen_block_blockdev_add(const char *id, QDict *qdict,
                                     Error **errp)
 {
+    ERRP_AUTO_PROPAGATE();
     const char *driver = qdict_get_try_str(qdict, "driver");
     BlockdevOptions *options = NULL;
-    Error *local_err = NULL;
     char *node_name;
     Visitor *v;
 
@@ -688,10 +684,9 @@ static char *xen_block_blockdev_add(const char *id, QDict *qdict,
         goto fail;
     }
 
-    qmp_blockdev_add(options, &local_err);
+    qmp_blockdev_add(options, errp);
 
-    if (local_err) {
-        error_propagate(errp, local_err);
+    if (*errp) {
         goto fail;
     }
 
@@ -710,14 +705,12 @@ fail:
 
 static void xen_block_drive_destroy(XenBlockDrive *drive, Error **errp)
 {
+    ERRP_AUTO_PROPAGATE();
     char *node_name = drive->node_name;
 
     if (node_name) {
-        Error *local_err = NULL;
-
-        xen_block_blockdev_del(node_name, &local_err);
-        if (local_err) {
-            error_propagate(errp, local_err);
+        xen_block_blockdev_del(node_name, errp);
+        if (*errp) {
             return;
         }
         g_free(node_name);
@@ -731,6 +724,7 @@ static XenBlockDrive *xen_block_drive_create(const char *id,
                                              const char *device_type,
                                              QDict *opts, Error **errp)
 {
+    ERRP_AUTO_PROPAGATE();
     const char *params = qdict_get_try_str(opts, "params");
     const char *mode = qdict_get_try_str(opts, "mode");
     const char *direct_io_safe = qdict_get_try_str(opts, "direct-io-safe");
@@ -738,7 +732,6 @@ static XenBlockDrive *xen_block_drive_create(const char *id,
     char *driver = NULL;
     char *filename = NULL;
     XenBlockDrive *drive = NULL;
-    Error *local_err = NULL;
     QDict *file_layer;
     QDict *driver_layer;
 
@@ -817,13 +810,12 @@ static XenBlockDrive *xen_block_drive_create(const char *id,
 
     g_assert(!drive->node_name);
     drive->node_name = xen_block_blockdev_add(drive->id, driver_layer,
-                                              &local_err);
+                                              errp);
 
     qobject_unref(driver_layer);
 
 done:
-    if (local_err) {
-        error_propagate(errp, local_err);
+    if (*errp) {
         xen_block_drive_destroy(drive, NULL);
         return NULL;
     }
@@ -848,8 +840,8 @@ static void xen_block_iothread_destroy(XenBlockIOThread *iothread,
 static XenBlockIOThread *xen_block_iothread_create(const char *id,
                                                    Error **errp)
 {
+    ERRP_AUTO_PROPAGATE();
     XenBlockIOThread *iothread = g_new(XenBlockIOThread, 1);
-    Error *local_err = NULL;
     QDict *opts;
     QObject *ret_data = NULL;
 
@@ -858,13 +850,11 @@ static XenBlockIOThread *xen_block_iothread_create(const char *id,
     opts = qdict_new();
     qdict_put_str(opts, "qom-type", TYPE_IOTHREAD);
     qdict_put_str(opts, "id", id);
-    qmp_object_add(opts, &ret_data, &local_err);
+    qmp_object_add(opts, &ret_data, errp);
     qobject_unref(opts);
     qobject_unref(ret_data);
 
-    if (local_err) {
-        error_propagate(errp, local_err);
-
+    if (*errp) {
         g_free(iothread->id);
         g_free(iothread);
         return NULL;
@@ -876,6 +866,7 @@ static XenBlockIOThread *xen_block_iothread_create(const char *id,
 static void xen_block_device_create(XenBackendInstance *backend,
                                     QDict *opts, Error **errp)
 {
+    ERRP_AUTO_PROPAGATE();
     XenBus *xenbus = xen_backend_get_bus(backend);
     const char *name = xen_backend_get_name(backend);
     unsigned long number;
@@ -883,7 +874,6 @@ static void xen_block_device_create(XenBackendInstance *backend,
     XenBlockDrive *drive = NULL;
     XenBlockIOThread *iothread = NULL;
     XenDevice *xendev = NULL;
-    Error *local_err = NULL;
     const char *type;
     XenBlockDevice *blockdev;
 
@@ -915,16 +905,15 @@ static void xen_block_device_create(XenBackendInstance *backend,
         goto fail;
     }
 
-    drive = xen_block_drive_create(vdev, device_type, opts, &local_err);
+    drive = xen_block_drive_create(vdev, device_type, opts, errp);
     if (!drive) {
-        error_propagate_prepend(errp, local_err, "failed to create drive: ");
+        error_prepend(errp, "failed to create drive: ");
         goto fail;
     }
 
-    iothread = xen_block_iothread_create(vdev, &local_err);
-    if (local_err) {
-        error_propagate_prepend(errp, local_err,
-                                "failed to create iothread: ");
+    iothread = xen_block_iothread_create(vdev, errp);
+    if (*errp) {
+        error_prepend(errp, "failed to create iothread: ");
         goto fail;
     }
 
@@ -932,32 +921,29 @@ static void xen_block_device_create(XenBackendInstance *backend,
     blockdev = XEN_BLOCK_DEVICE(xendev);
 
     if (!object_property_set_str(OBJECT(xendev), "vdev", vdev,
-                                 &local_err)) {
-        error_propagate_prepend(errp, local_err, "failed to set 'vdev': ");
+                                 errp)) {
+        error_prepend(errp, "failed to set 'vdev': ");
         goto fail;
     }
 
     if (!object_property_set_str(OBJECT(xendev), "drive",
                                  xen_block_drive_get_node_name(drive),
-                                 &local_err)) {
-        error_propagate_prepend(errp, local_err, "failed to set 'drive': ");
+                                 errp)) {
+        error_prepend(errp, "failed to set 'drive': ");
         goto fail;
     }
 
     if (!object_property_set_str(OBJECT(xendev), "iothread", iothread->id,
-                                 &local_err)) {
-        error_propagate_prepend(errp, local_err,
-                                "failed to set 'iothread': ");
+                                 errp)) {
+        error_prepend(errp, "failed to set 'iothread': ");
         goto fail;
     }
 
     blockdev->iothread = iothread;
     blockdev->drive = drive;
 
-    if (!qdev_realize_and_unref(DEVICE(xendev), BUS(xenbus), &local_err)) {
-        error_propagate_prepend(errp, local_err,
-                                "realization of device %s failed: ",
-                                type);
+    if (!qdev_realize_and_unref(DEVICE(xendev), BUS(xenbus), errp)) {
+        error_prepend(errp, "realization of device %s failed: ", type);
         goto fail;
     }
 
@@ -981,31 +967,29 @@ fail:
 static void xen_block_device_destroy(XenBackendInstance *backend,
                                      Error **errp)
 {
+    ERRP_AUTO_PROPAGATE();
     XenDevice *xendev = xen_backend_get_device(backend);
     XenBlockDevice *blockdev = XEN_BLOCK_DEVICE(xendev);
     XenBlockVdev *vdev = &blockdev->props.vdev;
     XenBlockDrive *drive = blockdev->drive;
     XenBlockIOThread *iothread = blockdev->iothread;
-    Error *local_err = NULL;
 
     trace_xen_block_device_destroy(vdev->number);
 
     object_unparent(OBJECT(xendev));
 
     if (iothread) {
-        xen_block_iothread_destroy(iothread, &local_err);
-        if (local_err) {
-            error_propagate_prepend(errp, local_err,
-                                    "failed to destroy iothread: ");
+        xen_block_iothread_destroy(iothread, errp);
+        if (*errp) {
+            error_prepend(errp, "failed to destroy iothread: ");
             return;
         }
     }
 
     if (drive) {
-        xen_block_drive_destroy(drive, &local_err);
-        if (local_err) {
-            error_propagate_prepend(errp, local_err,
-                                    "failed to destroy drive: ");
+        xen_block_drive_destroy(drive, errp);
+        if (*errp) {
+            error_prepend(errp, "failed to destroy drive: ");
             return;
         }
     }
diff --git a/hw/pci-host/xen_igd_pt.c b/hw/pci-host/xen_igd_pt.c
index efcc9347ff..29ade9ca25 100644
--- a/hw/pci-host/xen_igd_pt.c
+++ b/hw/pci-host/xen_igd_pt.c
@@ -79,17 +79,16 @@ static void host_pci_config_read(int pos, int len, uint32_t *val, Error **errp)
 
 static void igd_pt_i440fx_realize(PCIDevice *pci_dev, Error **errp)
 {
+    ERRP_AUTO_PROPAGATE();
     uint32_t val = 0;
     size_t i;
     int pos, len;
-    Error *local_err = NULL;
 
     for (i = 0; i < ARRAY_SIZE(igd_host_bridge_infos); i++) {
         pos = igd_host_bridge_infos[i].offset;
         len = igd_host_bridge_infos[i].len;
-        host_pci_config_read(pos, len, &val, &local_err);
-        if (local_err) {
-            error_propagate(errp, local_err);
+        host_pci_config_read(pos, len, &val, errp);
+        if (*errp) {
             return;
         }
         pci_default_write_config(pci_dev, pos, val, len);
diff --git a/hw/xen/xen-backend.c b/hw/xen/xen-backend.c
index da065f81b7..1cc0694053 100644
--- a/hw/xen/xen-backend.c
+++ b/hw/xen/xen-backend.c
@@ -98,9 +98,9 @@ static void xen_backend_list_remove(XenBackendInstance *backend)
 void xen_backend_device_create(XenBus *xenbus, const char *type,
                                const char *name, QDict *opts, Error **errp)
 {
+    ERRP_AUTO_PROPAGATE();
     const XenBackendImpl *impl = xen_backend_table_lookup(type);
     XenBackendInstance *backend;
-    Error *local_error = NULL;
 
     if (!impl) {
         return;
@@ -110,9 +110,8 @@ void xen_backend_device_create(XenBus *xenbus, const char *type,
     backend->xenbus = xenbus;
     backend->name = g_strdup(name);
 
-    impl->create(backend, opts, &local_error);
-    if (local_error) {
-        error_propagate(errp, local_error);
+    impl->create(backend, opts, errp);
+    if (*errp) {
         g_free(backend->name);
         g_free(backend);
         return;
diff --git a/hw/xen/xen-bus.c b/hw/xen/xen-bus.c
index c4e2162ae9..2ea5144ef0 100644
--- a/hw/xen/xen-bus.c
+++ b/hw/xen/xen-bus.c
@@ -53,9 +53,9 @@ static char *xen_device_get_frontend_path(XenDevice *xendev)
 
 static void xen_device_unplug(XenDevice *xendev, Error **errp)
 {
+    ERRP_AUTO_PROPAGATE();
     XenBus *xenbus = XEN_BUS(qdev_get_parent_bus(DEVICE(xendev)));
     const char *type = object_get_typename(OBJECT(xendev));
-    Error *local_err = NULL;
     xs_transaction_t tid;
 
     trace_xen_device_unplug(type, xendev->name);
@@ -69,14 +69,14 @@ again:
     }
 
     xs_node_printf(xenbus->xsh, tid, xendev->backend_path, "online",
-                   &local_err, "%u", 0);
-    if (local_err) {
+                   errp, "%u", 0);
+    if (*errp) {
         goto abort;
     }
 
     xs_node_printf(xenbus->xsh, tid, xendev->backend_path, "state",
-                   &local_err, "%u", XenbusStateClosing);
-    if (local_err) {
+                   errp, "%u", XenbusStateClosing);
+    if (*errp) {
         goto abort;
     }
 
@@ -96,7 +96,6 @@ abort:
      * from ending the transaction.
      */
     xs_transaction_end(xenbus->xsh, tid, true);
-    error_propagate(errp, local_err);
 }
 
 static void xen_bus_print_dev(Monitor *mon, DeviceState *dev, int indent)
@@ -205,15 +204,13 @@ static XenWatch *watch_list_add(XenWatchList *watch_list, const char *node,
                                 const char *key, XenWatchHandler handler,
                                 void *opaque, Error **errp)
 {
+    ERRP_AUTO_PROPAGATE();
     XenWatch *watch = new_watch(node, key, handler, opaque);
-    Error *local_err = NULL;
 
     notifier_list_add(&watch_list->notifiers, &watch->notifier);
 
-    xs_node_watch(watch_list->xsh, node, key, watch->token, &local_err);
-    if (local_err) {
-        error_propagate(errp, local_err);
-
+    xs_node_watch(watch_list->xsh, node, key, watch->token, errp);
+    if (*errp) {
         notifier_remove(&watch->notifier);
         free_watch(watch);
 
@@ -255,11 +252,11 @@ static void xen_bus_backend_create(XenBus *xenbus, const char *type,
                                    const char *name, char *path,
                                    Error **errp)
 {
+    ERRP_AUTO_PROPAGATE();
     xs_transaction_t tid;
     char **key;
     QDict *opts;
     unsigned int i, n;
-    Error *local_err = NULL;
 
     trace_xen_bus_backend_create(type, path);
 
@@ -314,13 +311,11 @@ again:
         return;
     }
 
-    xen_backend_device_create(xenbus, type, name, opts, &local_err);
+    xen_backend_device_create(xenbus, type, name, opts, errp);
     qobject_unref(opts);
 
-    if (local_err) {
-        error_propagate_prepend(errp, local_err,
-                                "failed to create '%s' device '%s': ",
-                                type, name);
+    if (*errp) {
+        error_prepend(errp, "failed to create '%s' device '%s': ", type, name);
     }
 }
 
@@ -692,9 +687,9 @@ static void xen_device_remove_watch(XenDevice *xendev, XenWatch *watch,
 
 static void xen_device_backend_create(XenDevice *xendev, Error **errp)
 {
+    ERRP_AUTO_PROPAGATE();
     XenBus *xenbus = XEN_BUS(qdev_get_parent_bus(DEVICE(xendev)));
     struct xs_permissions perms[2];
-    Error *local_err = NULL;
 
     xendev->backend_path = xen_device_get_backend_path(xendev);
 
@@ -706,30 +701,27 @@ static void xen_device_backend_create(XenDevice *xendev, Error **errp)
     g_assert(xenbus->xsh);
 
     xs_node_create(xenbus->xsh, XBT_NULL, xendev->backend_path, perms,
-                   ARRAY_SIZE(perms), &local_err);
-    if (local_err) {
-        error_propagate_prepend(errp, local_err,
-                                "failed to create backend: ");
+                   ARRAY_SIZE(perms), errp);
+    if (*errp) {
+        error_prepend(errp, "failed to create backend: ");
         return;
     }
 
     xendev->backend_state_watch =
         xen_device_add_watch(xendev, xendev->backend_path,
                              "state", xen_device_backend_changed,
-                             &local_err);
-    if (local_err) {
-        error_propagate_prepend(errp, local_err,
-                                "failed to watch backend state: ");
+                             errp);
+    if (*errp) {
+        error_prepend(errp, "failed to watch backend state: ");
         return;
     }
 
     xendev->backend_online_watch =
         xen_device_add_watch(xendev, xendev->backend_path,
                              "online", xen_device_backend_changed,
-                             &local_err);
-    if (local_err) {
-        error_propagate_prepend(errp, local_err,
-                                "failed to watch backend online: ");
+                             errp);
+    if (*errp) {
+        error_prepend(errp, "failed to watch backend online: ");
         return;
     }
 }
@@ -866,9 +858,9 @@ static bool xen_device_frontend_exists(XenDevice *xendev)
 
 static void xen_device_frontend_create(XenDevice *xendev, Error **errp)
 {
+    ERRP_AUTO_PROPAGATE();
     XenBus *xenbus = XEN_BUS(qdev_get_parent_bus(DEVICE(xendev)));
     struct xs_permissions perms[2];
-    Error *local_err = NULL;
 
     xendev->frontend_path = xen_device_get_frontend_path(xendev);
 
@@ -885,20 +877,18 @@ static void xen_device_frontend_create(XenDevice *xendev, Error **errp)
         g_assert(xenbus->xsh);
 
         xs_node_create(xenbus->xsh, XBT_NULL, xendev->frontend_path, perms,
-                       ARRAY_SIZE(perms), &local_err);
-        if (local_err) {
-            error_propagate_prepend(errp, local_err,
-                                    "failed to create frontend: ");
+                       ARRAY_SIZE(perms), errp);
+        if (*errp) {
+            error_prepend(errp, "failed to create frontend: ");
             return;
         }
     }
 
     xendev->frontend_state_watch =
         xen_device_add_watch(xendev, xendev->frontend_path, "state",
-                             xen_device_frontend_changed, &local_err);
-    if (local_err) {
-        error_propagate_prepend(errp, local_err,
-                                "failed to watch frontend state: ");
+                             xen_device_frontend_changed, errp);
+    if (*errp) {
+        error_prepend(errp, "failed to watch frontend state: ");
     }
 }
 
@@ -1247,11 +1237,11 @@ static void xen_device_exit(Notifier *n, void *data)
 
 static void xen_device_realize(DeviceState *dev, Error **errp)
 {
+    ERRP_AUTO_PROPAGATE();
     XenDevice *xendev = XEN_DEVICE(dev);
     XenDeviceClass *xendev_class = XEN_DEVICE_GET_CLASS(xendev);
     XenBus *xenbus = XEN_BUS(qdev_get_parent_bus(DEVICE(xendev)));
     const char *type = object_get_typename(OBJECT(xendev));
-    Error *local_err = NULL;
 
     if (xendev->frontend_id == DOMID_INVALID) {
         xendev->frontend_id = xen_domid;
@@ -1267,10 +1257,9 @@ static void xen_device_realize(DeviceState *dev, Error **errp)
         goto unrealize;
     }
 
-    xendev->name = xendev_class->get_name(xendev, &local_err);
-    if (local_err) {
-        error_propagate_prepend(errp, local_err,
-                                "failed to get device name: ");
+    xendev->name = xendev_class->get_name(xendev, errp);
+    if (*errp) {
+        error_prepend(errp, "failed to get device name: ");
         goto unrealize;
     }
 
@@ -1293,22 +1282,19 @@ static void xen_device_realize(DeviceState *dev, Error **errp)
     xendev->feature_grant_copy =
         (xengnttab_grant_copy(xendev->xgth, 0, NULL) == 0);
 
-    xen_device_backend_create(xendev, &local_err);
-    if (local_err) {
-        error_propagate(errp, local_err);
+    xen_device_backend_create(xendev, errp);
+    if (*errp) {
         goto unrealize;
     }
 
-    xen_device_frontend_create(xendev, &local_err);
-    if (local_err) {
-        error_propagate(errp, local_err);
+    xen_device_frontend_create(xendev, errp);
+    if (*errp) {
         goto unrealize;
     }
 
     if (xendev_class->realize) {
-        xendev_class->realize(xendev, &local_err);
-        if (local_err) {
-            error_propagate(errp, local_err);
+        xendev_class->realize(xendev, errp);
+        if (*errp) {
             goto unrealize;
         }
     }
diff --git a/hw/xen/xen-host-pci-device.c b/hw/xen/xen-host-pci-device.c
index 1b44dcafaf..02379c341c 100644
--- a/hw/xen/xen-host-pci-device.c
+++ b/hw/xen/xen-host-pci-device.c
@@ -333,8 +333,8 @@ void xen_host_pci_device_get(XenHostPCIDevice *d, uint16_t domain,
                              uint8_t bus, uint8_t dev, uint8_t func,
                              Error **errp)
 {
+    ERRP_AUTO_PROPAGATE();
     unsigned int v;
-    Error *err = NULL;
 
     d->config_fd = -1;
     d->domain = domain;
@@ -342,36 +342,36 @@ void xen_host_pci_device_get(XenHostPCIDevice *d, uint16_t domain,
     d->dev = dev;
     d->func = func;
 
-    xen_host_pci_config_open(d, &err);
-    if (err) {
+    xen_host_pci_config_open(d, errp);
+    if (*errp) {
         goto error;
     }
 
-    xen_host_pci_get_resource(d, &err);
-    if (err) {
+    xen_host_pci_get_resource(d, errp);
+    if (*errp) {
         goto error;
     }
 
-    xen_host_pci_get_hex_value(d, "vendor", &v, &err);
-    if (err) {
+    xen_host_pci_get_hex_value(d, "vendor", &v, errp);
+    if (*errp) {
         goto error;
     }
     d->vendor_id = v;
 
-    xen_host_pci_get_hex_value(d, "device", &v, &err);
-    if (err) {
+    xen_host_pci_get_hex_value(d, "device", &v, errp);
+    if (*errp) {
         goto error;
     }
     d->device_id = v;
 
-    xen_host_pci_get_dec_value(d, "irq", &v, &err);
-    if (err) {
+    xen_host_pci_get_dec_value(d, "irq", &v, errp);
+    if (*errp) {
         goto error;
     }
     d->irq = v;
 
-    xen_host_pci_get_hex_value(d, "class", &v, &err);
-    if (err) {
+    xen_host_pci_get_hex_value(d, "class", &v, errp);
+    if (*errp) {
         goto error;
     }
     d->class_code = v;
@@ -381,7 +381,6 @@ void xen_host_pci_device_get(XenHostPCIDevice *d, uint16_t domain,
     return;
 
 error:
-    error_propagate(errp, err);
 
     if (d->config_fd >= 0) {
         close(d->config_fd);
diff --git a/hw/xen/xen_pt.c b/hw/xen/xen_pt.c
index ab84443d5e..baa25eb91a 100644
--- a/hw/xen/xen_pt.c
+++ b/hw/xen/xen_pt.c
@@ -777,12 +777,12 @@ static void xen_pt_destroy(PCIDevice *d) {
 
 static void xen_pt_realize(PCIDevice *d, Error **errp)
 {
+    ERRP_AUTO_PROPAGATE();
     XenPCIPassthroughState *s = XEN_PT_DEVICE(d);
     int i, rc = 0;
     uint8_t machine_irq = 0, scratch;
     uint16_t cmd = 0;
     int pirq = XEN_PT_UNASSIGNED_PIRQ;
-    Error *err = NULL;
 
     /* register real device */
     XEN_PT_LOG(d, "Assigning real physical device %02x:%02x.%d"
@@ -793,10 +793,9 @@ static void xen_pt_realize(PCIDevice *d, Error **errp)
     xen_host_pci_device_get(&s->real_device,
                             s->hostaddr.domain, s->hostaddr.bus,
                             s->hostaddr.slot, s->hostaddr.function,
-                            &err);
-    if (err) {
-        error_append_hint(&err, "Failed to \"open\" the real pci device");
-        error_propagate(errp, err);
+                            errp);
+    if (*errp) {
+        error_append_hint(errp, "Failed to \"open\" the real pci device");
         return;
     }
 
@@ -823,11 +822,10 @@ static void xen_pt_realize(PCIDevice *d, Error **errp)
             return;
         }
 
-        xen_pt_setup_vga(s, &s->real_device, &err);
-        if (err) {
-            error_append_hint(&err, "Setup VGA BIOS of passthrough"
-                    " GFX failed");
-            error_propagate(errp, err);
+        xen_pt_setup_vga(s, &s->real_device, errp);
+        if (*errp) {
+            error_append_hint(errp, "Setup VGA BIOS of passthrough"
+                              " GFX failed");
             xen_host_pci_device_put(&s->real_device);
             return;
         }
@@ -840,10 +838,9 @@ static void xen_pt_realize(PCIDevice *d, Error **errp)
     xen_pt_register_regions(s, &cmd);
 
     /* reinitialize each config register to be emulated */
-    xen_pt_config_init(s, &err);
-    if (err) {
-        error_append_hint(&err, "PCI Config space initialisation failed");
-        error_propagate(errp, err);
+    xen_pt_config_init(s, errp);
+    if (*errp) {
+        error_append_hint(errp, "PCI Config space initialisation failed");
         rc = -1;
         goto err_out;
     }
diff --git a/hw/xen/xen_pt_config_init.c b/hw/xen/xen_pt_config_init.c
index d0d7c720a6..af3fbd1bfb 100644
--- a/hw/xen/xen_pt_config_init.c
+++ b/hw/xen/xen_pt_config_init.c
@@ -2008,8 +2008,8 @@ static void xen_pt_config_reg_init(XenPCIPassthroughState *s,
 
 void xen_pt_config_init(XenPCIPassthroughState *s, Error **errp)
 {
+    ERRP_AUTO_PROPAGATE();
     int i, rc;
-    Error *err = NULL;
 
     QLIST_INIT(&s->reg_grps);
 
@@ -2067,13 +2067,14 @@ void xen_pt_config_init(XenPCIPassthroughState *s, Error **errp)
 
                 /* initialize capability register */
                 for (j = 0; regs->size != 0; j++, regs++) {
-                    xen_pt_config_reg_init(s, reg_grp_entry, regs, &err);
-                    if (err) {
-                        error_append_hint(&err, "Failed to init register %d"
-                                " offsets 0x%x in grp_type = 0x%x (%d/%zu)", j,
-                                regs->offset, xen_pt_emu_reg_grps[i].grp_type,
-                                i, ARRAY_SIZE(xen_pt_emu_reg_grps));
-                        error_propagate(errp, err);
+                    xen_pt_config_reg_init(s, reg_grp_entry, regs, errp);
+                    if (*errp) {
+                        error_append_hint(errp, "Failed to init register %d"
+                                          " offsets 0x%x in grp_type = 0x%x (%d/%zu)",
+                                          j,
+                                          regs->offset,
+                                          xen_pt_emu_reg_grps[i].grp_type,
+                                          i, ARRAY_SIZE(xen_pt_emu_reg_grps));
                         xen_pt_config_delete(s);
                         return;
                     }
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Tue Jul 07 16:51:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jul 2020 16:51:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jsqoI-0000La-JJ; Tue, 07 Jul 2020 16:51:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Gwtg=AS=redhat.com=armbru@srs-us1.protection.inumbo.net>)
 id 1jsqoI-0000Dr-2X
 for xen-devel@lists.xenproject.org; Tue, 07 Jul 2020 16:51:10 +0000
X-Inumbo-ID: 0d763faa-c072-11ea-bca7-bc764e2007e4
Received: from us-smtp-delivery-1.mimecast.com (unknown [207.211.31.81])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 0d763faa-c072-11ea-bca7-bc764e2007e4;
 Tue, 07 Jul 2020 16:51:07 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
 s=mimecast20190719; t=1594140667;
 h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
 content-transfer-encoding:content-transfer-encoding:
 in-reply-to:in-reply-to:references:references;
 bh=XdaTtnXzRdmtkraOf3fJnKW205wPYgtaTPC/KbxTb9A=;
 b=Fq3QyNnWigSpK4O9xGqOTT18QL3trFM8gGVZmNqJq6lMxkPZw0YwVEkCWIs8hduFBd5vuP
 tJZ0tvlx30FriOcF7LQ6Z4Q9t6/viBz80ReAqmzEuHYUUSo/bsDorgAVirWPQNO80smXhy
 mCPOCLXEknHjGgt7M5xTEQLpOExN+FE=
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-39-CKUKUa5lMlaT6ynxi7hKYw-1; Tue, 07 Jul 2020 12:51:02 -0400
X-MC-Unique: CKUKUa5lMlaT6ynxi7hKYw-1
Received: from smtp.corp.redhat.com (int-mx01.intmail.prod.int.phx2.redhat.com
 [10.5.11.11])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 5205D804001;
 Tue,  7 Jul 2020 16:51:00 +0000 (UTC)
Received: from blackfin.pond.sub.org (ovpn-112-143.ams2.redhat.com
 [10.36.112.143])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id 8857B512FE;
 Tue,  7 Jul 2020 16:50:54 +0000 (UTC)
Received: by blackfin.pond.sub.org (Postfix, from userid 1000)
 id 2D1D711326EC; Tue,  7 Jul 2020 18:50:37 +0200 (CEST)
From: Markus Armbruster <armbru@redhat.com>
To: qemu-devel@nongnu.org
Subject: [PATCH v12 5/8] fw_cfg: Use ERRP_AUTO_PROPAGATE()
Date: Tue,  7 Jul 2020 18:50:34 +0200
Message-Id: <20200707165037.1026246-6-armbru@redhat.com>
In-Reply-To: <20200707165037.1026246-1-armbru@redhat.com>
References: <20200707165037.1026246-1-armbru@redhat.com>
MIME-Version: 1.0
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.11
Authentication-Results: relay.mimecast.com;
 auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=armbru@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin Wolf <kwolf@redhat.com>, vsementsov@virtuozzo.com,
 Michael Roth <mdroth@linux.vnet.ibm.com>, qemu-block@nongnu.org,
 Paul Durrant <paul@xen.org>, Laszlo Ersek <lersek@redhat.com>,
 Christian Schoenebeck <qemu_oss@crudebyte.com>, groug@kaod.org,
 Max Reitz <mreitz@redhat.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Gerd Hoffmann <kraxel@redhat.com>, Stefan Hajnoczi <stefanha@redhat.com>,
 Anthony Perard <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org,
 =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>

If we want to add some info to errp (by error_prepend() or
error_append_hint()), we must use the ERRP_AUTO_PROPAGATE macro.
Otherwise, this info will not be added when errp == &error_fatal
(the program will exit prior to the error_append_hint() or
error_prepend() call).  Fix such cases.

If we want to check the error after a call that takes errp, we
normally need to introduce a local_err and then propagate it to errp.
Instead, use the ERRP_AUTO_PROPAGATE macro; the benefits are:
1. No need for an explicit error_propagate() call
2. No need for an explicit local_err variable: use errp directly
3. ERRP_AUTO_PROPAGATE leaves errp as is if it's neither NULL nor
   &error_fatal, which means we don't break error_abort
   (we'll abort on error_set, not on error_propagate)

This commit is generated by command

    sed -n '/^Firmware configuration (fw_cfg)$/,/^$/{s/^F: //p}' \
        MAINTAINERS | \
    xargs git ls-files | grep '\.[hc]$' | \
    xargs spatch \
        --sp-file scripts/coccinelle/auto-propagated-errp.cocci \
        --macro-file scripts/cocci-macro-file.h \
        --in-place --no-show-diff --max-width 80

Reported-by: Kevin Wolf <kwolf@redhat.com>
Reported-by: Greg Kurz <groug@kaod.org>
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
[Commit message tweaked]
Signed-off-by: Markus Armbruster <armbru@redhat.com>
---
 hw/nvram/fw_cfg.c | 14 ++++++--------
 1 file changed, 6 insertions(+), 8 deletions(-)

diff --git a/hw/nvram/fw_cfg.c b/hw/nvram/fw_cfg.c
index 0408a31f8e..d5386c3235 100644
--- a/hw/nvram/fw_cfg.c
+++ b/hw/nvram/fw_cfg.c
@@ -1231,12 +1231,11 @@ static Property fw_cfg_io_properties[] = {
 
 static void fw_cfg_io_realize(DeviceState *dev, Error **errp)
 {
+    ERRP_AUTO_PROPAGATE();
     FWCfgIoState *s = FW_CFG_IO(dev);
-    Error *local_err = NULL;
 
-    fw_cfg_file_slots_allocate(FW_CFG(s), &local_err);
-    if (local_err) {
-        error_propagate(errp, local_err);
+    fw_cfg_file_slots_allocate(FW_CFG(s), errp);
+    if (*errp) {
         return;
     }
 
@@ -1282,14 +1281,13 @@ static Property fw_cfg_mem_properties[] = {
 
 static void fw_cfg_mem_realize(DeviceState *dev, Error **errp)
 {
+    ERRP_AUTO_PROPAGATE();
     FWCfgMemState *s = FW_CFG_MEM(dev);
     SysBusDevice *sbd = SYS_BUS_DEVICE(dev);
     const MemoryRegionOps *data_ops = &fw_cfg_data_mem_ops;
-    Error *local_err = NULL;
 
-    fw_cfg_file_slots_allocate(FW_CFG(s), &local_err);
-    if (local_err) {
-        error_propagate(errp, local_err);
+    fw_cfg_file_slots_allocate(FW_CFG(s), errp);
+    if (*errp) {
         return;
     }
 
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Tue Jul 07 16:53:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jul 2020 16:53:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jsqqb-0000w1-1o; Tue, 07 Jul 2020 16:53:33 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Gwtg=AS=redhat.com=armbru@srs-us1.protection.inumbo.net>)
 id 1jsqqa-0000vs-4N
 for xen-devel@lists.xenproject.org; Tue, 07 Jul 2020 16:53:32 +0000
X-Inumbo-ID: 635ccc04-c072-11ea-8496-bc764e2007e4
Received: from us-smtp-1.mimecast.com (unknown [207.211.31.81])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 635ccc04-c072-11ea-8496-bc764e2007e4;
 Tue, 07 Jul 2020 16:53:31 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
 s=mimecast20190719; t=1594140811;
 h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
 content-transfer-encoding:content-transfer-encoding:
 in-reply-to:in-reply-to:references:references;
 bh=YkAQltNuOP2YXnC4GnK3EmsF7L67rYh833RXsqUTHOE=;
 b=AHYQSo3yWbuAssq/srD4rbKBRuh6bngn1SjXLYfeHdBT6pLpDD4UqsEJrpvQXlaWa5u3Kr
 5pLea2Q3OHcx3imQsB5rUiUkE6qDB3F5HrB/v6j1OVI1o1kFmT1klAxw/HawGu99i9Vq7B
 Beim9GUDV3Y9f/cfF9fxv7qy0hyw86c=
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-407-RP_ZSErIOhiH6b3uAxhGHA-1; Tue, 07 Jul 2020 12:50:54 -0400
X-MC-Unique: RP_ZSErIOhiH6b3uAxhGHA-1
Received: from smtp.corp.redhat.com (int-mx02.intmail.prod.int.phx2.redhat.com
 [10.5.11.12])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 1CF71107ACF4;
 Tue,  7 Jul 2020 16:50:52 +0000 (UTC)
Received: from blackfin.pond.sub.org (ovpn-112-143.ams2.redhat.com
 [10.36.112.143])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id 22AF060E1C;
 Tue,  7 Jul 2020 16:50:50 +0000 (UTC)
Received: by blackfin.pond.sub.org (Postfix, from userid 1000)
 id 34DC611326F2; Tue,  7 Jul 2020 18:50:37 +0200 (CEST)
From: Markus Armbruster <armbru@redhat.com>
To: qemu-devel@nongnu.org
Subject: [PATCH v12 7/8] nbd: Use ERRP_AUTO_PROPAGATE()
Date: Tue,  7 Jul 2020 18:50:36 +0200
Message-Id: <20200707165037.1026246-8-armbru@redhat.com>
In-Reply-To: <20200707165037.1026246-1-armbru@redhat.com>
References: <20200707165037.1026246-1-armbru@redhat.com>
MIME-Version: 1.0
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.12
Authentication-Results: relay.mimecast.com;
 auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=armbru@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin Wolf <kwolf@redhat.com>, vsementsov@virtuozzo.com,
 Michael Roth <mdroth@linux.vnet.ibm.com>, qemu-block@nongnu.org,
 Paul Durrant <paul@xen.org>, Laszlo Ersek <lersek@redhat.com>,
 Christian Schoenebeck <qemu_oss@crudebyte.com>, groug@kaod.org,
 Max Reitz <mreitz@redhat.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Gerd Hoffmann <kraxel@redhat.com>, Stefan Hajnoczi <stefanha@redhat.com>,
 Anthony Perard <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org,
 =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>

If we want to add information to errp (via error_prepend() or
error_append_hint()), we must use the ERRP_AUTO_PROPAGATE macro.
Otherwise, the information is not added when errp == &error_fatal
(the program exits before the error_append_hint() or error_prepend()
call ever runs).  Fix such cases.

If we want to check the error after a call that takes errp, we have to
introduce a local_err variable and then propagate it to errp.  Instead,
use the ERRP_AUTO_PROPAGATE macro; its benefits are:
1. No need for an explicit error_propagate() call
2. No need for an explicit local_err variable: use errp directly
3. ERRP_AUTO_PROPAGATE leaves errp as is if it's neither NULL nor
   &error_fatal, which means we don't break error_abort
   (we'll abort on error_set, not on error_propagate)

This commit was generated by the command

    sed -n '/^Network Block Device (NBD)$/,/^$/{s/^F: //p}' \
        MAINTAINERS | \
    xargs git ls-files | grep '\.[hc]$' | \
    xargs spatch \
        --sp-file scripts/coccinelle/auto-propagated-errp.cocci \
        --macro-file scripts/cocci-macro-file.h \
        --in-place --no-show-diff --max-width 80

Reported-by: Kevin Wolf <kwolf@redhat.com>
Reported-by: Greg Kurz <groug@kaod.org>
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
[Commit message tweaked]
Signed-off-by: Markus Armbruster <armbru@redhat.com>
---
 include/block/nbd.h | 1 +
 block/nbd.c         | 7 +++----
 nbd/client.c        | 5 +++++
 nbd/server.c        | 5 +++++
 4 files changed, 14 insertions(+), 4 deletions(-)

diff --git a/include/block/nbd.h b/include/block/nbd.h
index 20363280ae..f7d87636d3 100644
--- a/include/block/nbd.h
+++ b/include/block/nbd.h
@@ -361,6 +361,7 @@ void nbd_server_start_options(NbdServerOptions *arg, Error **errp);
 static inline int nbd_read(QIOChannel *ioc, void *buffer, size_t size,
                            const char *desc, Error **errp)
 {
+    ERRP_AUTO_PROPAGATE();
     int ret = qio_channel_read_all(ioc, buffer, size, errp) < 0 ? -EIO : 0;
 
     if (ret < 0) {
diff --git a/block/nbd.c b/block/nbd.c
index 6876da04a7..b7cea0f650 100644
--- a/block/nbd.c
+++ b/block/nbd.c
@@ -1408,16 +1408,15 @@ static void nbd_client_close(BlockDriverState *bs)
 static QIOChannelSocket *nbd_establish_connection(SocketAddress *saddr,
                                                   Error **errp)
 {
+    ERRP_AUTO_PROPAGATE();
     QIOChannelSocket *sioc;
-    Error *local_err = NULL;
 
     sioc = qio_channel_socket_new();
     qio_channel_set_name(QIO_CHANNEL(sioc), "nbd-client");
 
-    qio_channel_socket_connect_sync(sioc, saddr, &local_err);
-    if (local_err) {
+    qio_channel_socket_connect_sync(sioc, saddr, errp);
+    if (*errp) {
         object_unref(OBJECT(sioc));
-        error_propagate(errp, local_err);
         return NULL;
     }
 
diff --git a/nbd/client.c b/nbd/client.c
index ba173108ba..e258ef3f7e 100644
--- a/nbd/client.c
+++ b/nbd/client.c
@@ -68,6 +68,7 @@ static int nbd_send_option_request(QIOChannel *ioc, uint32_t opt,
                                    uint32_t len, const char *data,
                                    Error **errp)
 {
+    ERRP_AUTO_PROPAGATE();
     NBDOption req;
     QEMU_BUILD_BUG_ON(sizeof(req) != 16);
 
@@ -153,6 +154,7 @@ static int nbd_receive_option_reply(QIOChannel *ioc, uint32_t opt,
 static int nbd_handle_reply_err(QIOChannel *ioc, NBDOptionReply *reply,
                                 bool strict, Error **errp)
 {
+    ERRP_AUTO_PROPAGATE();
     g_autofree char *msg = NULL;
 
     if (!(reply->type & (1 << 31))) {
@@ -337,6 +339,7 @@ static int nbd_receive_list(QIOChannel *ioc, char **name, char **description,
 static int nbd_opt_info_or_go(QIOChannel *ioc, uint32_t opt,
                               NBDExportInfo *info, Error **errp)
 {
+    ERRP_AUTO_PROPAGATE();
     NBDOptionReply reply;
     uint32_t len = strlen(info->name);
     uint16_t type;
@@ -882,6 +885,7 @@ static int nbd_start_negotiate(AioContext *aio_context, QIOChannel *ioc,
                                bool structured_reply, bool *zeroes,
                                Error **errp)
 {
+    ERRP_AUTO_PROPAGATE();
     uint64_t magic;
 
     trace_nbd_start_negotiate(tlscreds, hostname ? hostname : "<null>");
@@ -1017,6 +1021,7 @@ int nbd_receive_negotiate(AioContext *aio_context, QIOChannel *ioc,
                           const char *hostname, QIOChannel **outioc,
                           NBDExportInfo *info, Error **errp)
 {
+    ERRP_AUTO_PROPAGATE();
     int result;
     bool zeroes;
     bool base_allocation = info->base_allocation;
diff --git a/nbd/server.c b/nbd/server.c
index 20754e9ebc..8a12e586d7 100644
--- a/nbd/server.c
+++ b/nbd/server.c
@@ -211,6 +211,7 @@ static int GCC_FMT_ATTR(4, 0)
 nbd_negotiate_send_rep_verr(NBDClient *client, uint32_t type,
                             Error **errp, const char *fmt, va_list va)
 {
+    ERRP_AUTO_PROPAGATE();
     g_autofree char *msg = NULL;
     int ret;
     size_t len;
@@ -382,6 +383,7 @@ static int nbd_opt_read_name(NBDClient *client, char **name, uint32_t *length,
 static int nbd_negotiate_send_rep_list(NBDClient *client, NBDExport *exp,
                                        Error **errp)
 {
+    ERRP_AUTO_PROPAGATE();
     size_t name_len, desc_len;
     uint32_t len;
     const char *name = exp->name ? exp->name : "";
@@ -445,6 +447,7 @@ static void nbd_check_meta_export(NBDClient *client)
 static int nbd_negotiate_handle_export_name(NBDClient *client, bool no_zeroes,
                                             Error **errp)
 {
+    ERRP_AUTO_PROPAGATE();
     g_autofree char *name = NULL;
     char buf[NBD_REPLY_EXPORT_NAME_SIZE] = "";
     size_t len;
@@ -1289,6 +1292,7 @@ static int nbd_negotiate_options(NBDClient *client, Error **errp)
  */
 static coroutine_fn int nbd_negotiate(NBDClient *client, Error **errp)
 {
+    ERRP_AUTO_PROPAGATE();
     char buf[NBD_OLDSTYLE_NEGOTIATE_SIZE] = "";
     int ret;
 
@@ -1663,6 +1667,7 @@ void nbd_export_close(NBDExport *exp)
 
 void nbd_export_remove(NBDExport *exp, NbdServerRemoveMode mode, Error **errp)
 {
+    ERRP_AUTO_PROPAGATE();
     if (mode == NBD_SERVER_REMOVE_MODE_HARD || QTAILQ_EMPTY(&exp->clients)) {
         nbd_export_close(exp);
         return;
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Tue Jul 07 18:14:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jul 2020 18:14:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jss6x-0007rJ-2t; Tue, 07 Jul 2020 18:14:31 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=F44P=AS=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jss6v-0007rD-G1
 for xen-devel@lists.xenproject.org; Tue, 07 Jul 2020 18:14:29 +0000
X-Inumbo-ID: b27639d2-c07d-11ea-8dce-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b27639d2-c07d-11ea-8dce-12813bfff9fa;
 Tue, 07 Jul 2020 18:14:28 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=uzQkLX3+pGUeWWgtY43T8vmddHlOSqzV5mHYhds28nA=; b=ockjhuwHgZYGVSWrdYocZ+KDV
 PSNcz6X+/xMKIUlfBIpRMmOjLRwImAGnt4QyILIxEQv7e2Q32TMS6BwSdHBSKrq3a28KUpLRx0l6n
 ITD1I9JtpNKRMpEejkJ77JjywwEtBW++m21eo/ArMN9O9v3LZw0qFMjNMWt5WvTd22Vnw=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jss6u-0001kM-2t; Tue, 07 Jul 2020 18:14:28 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jss6t-0003mF-Qs; Tue, 07 Jul 2020 18:14:27 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jss6t-0006gq-Po; Tue, 07 Jul 2020 18:14:27 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151704-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 151704: regressions - FAIL
X-Osstest-Failures: qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-amd64-libvirt-vhd:debian-di-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-libvirt-xsm:guest-start:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qcow2:debian-di-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:redhat-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-i386-libvirt-pair:guest-start/debian:fail:regression
 qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-freebsd10-i386:guest-start:fail:regression
 qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:redhat-install:fail:regression
 qemu-mainline:test-amd64-amd64-libvirt:guest-start:fail:regression
 qemu-mainline:test-amd64-amd64-libvirt-xsm:guest-start:fail:regression
 qemu-mainline:test-amd64-i386-libvirt:guest-start:fail:regression
 qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-start:fail:regression
 qemu-mainline:test-armhf-armhf-xl-vhd:debian-di-install:fail:regression
 qemu-mainline:test-amd64-amd64-libvirt-pair:guest-start/debian:fail:regression
 qemu-mainline:test-armhf-armhf-libvirt:guest-start:fail:regression
 qemu-mainline:test-arm64-arm64-libvirt-xsm:guest-start:fail:regression
 qemu-mainline:test-armhf-armhf-libvirt-raw:debian-di-install:fail:regression
 qemu-mainline:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
 qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: qemuu=eb6490f544388dd24c0d054a96dd304bc7284450
X-Osstest-Versions-That: qemuu=9e3903136d9acde2fb2dd9e967ba928050a6cb4a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 07 Jul 2020 18:14:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151704 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151704/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-ovmf-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-win7-amd64 10 windows-install  fail REGR. vs. 151065
 test-amd64-amd64-libvirt-vhd 10 debian-di-install        fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-win7-amd64 10 windows-install   fail REGR. vs. 151065
 test-amd64-amd64-qemuu-nested-intel 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-libvirt-xsm  12 guest-start              fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-ws16-amd64 10 windows-install  fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-ovmf-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-qemuu-nested-amd 10 debian-hvm-install  fail REGR. vs. 151065
 test-amd64-amd64-xl-qcow2    10 debian-di-install        fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-qemuu-rhel6hvm-amd 10 redhat-install     fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-debianhvm-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-ws16-amd64 10 windows-install   fail REGR. vs. 151065
 test-amd64-i386-libvirt-pair 21 guest-start/debian       fail REGR. vs. 151065
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-freebsd10-i386 11 guest-start            fail REGR. vs. 151065
 test-amd64-i386-qemuu-rhel6hvm-intel 10 redhat-install   fail REGR. vs. 151065
 test-amd64-amd64-libvirt     12 guest-start              fail REGR. vs. 151065
 test-amd64-amd64-libvirt-xsm 12 guest-start              fail REGR. vs. 151065
 test-amd64-i386-libvirt      12 guest-start              fail REGR. vs. 151065
 test-amd64-i386-freebsd10-amd64 11 guest-start           fail REGR. vs. 151065
 test-armhf-armhf-xl-vhd      10 debian-di-install        fail REGR. vs. 151065
 test-amd64-amd64-libvirt-pair 21 guest-start/debian      fail REGR. vs. 151065
 test-armhf-armhf-libvirt     12 guest-start              fail REGR. vs. 151065
 test-arm64-arm64-libvirt-xsm 12 guest-start              fail REGR. vs. 151065
 test-armhf-armhf-libvirt-raw 10 debian-di-install        fail REGR. vs. 151065

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-rtds     16 guest-start/debian.repeat    fail  like 151065
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                eb6490f544388dd24c0d054a96dd304bc7284450
baseline version:
 qemuu                9e3903136d9acde2fb2dd9e967ba928050a6cb4a

Last test of basis   151065  2020-06-12 22:27:51 Z   24 days
Failing since        151101  2020-06-14 08:32:51 Z   23 days   30 attempts
Testing same since   151634  2020-07-05 00:36:29 Z    2 days    6 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ahmed Karaman <ahmedkhaledkaraman@gmail.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Bulekov <alxndr@bu.edu>
  Alexey Krasikov <alex-krasikov@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Allan Peramaki <aperamak@pp1.inet.fi>
  Andrew Jones <drjones@redhat.com>
  Ani Sinha <ani.sinha@nutanix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Ard Biesheuvel <ardb@kernel.org>
  Artyom Tarasenko <atar4qemu@gmail.com>
  Aurelien Jarno <aurelien@aurel32.net>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Beata Michalska <beata.michalska@linaro.org>
  Bin Meng <bin.meng@windriver.com>
  Cathy Zhang <cathy.zhang@intel.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Claudio Fontana <cfontana@suse.de>
  Cleber Rosa <crosa@redhat.com>
  Colin Xu <colin.xu@intel.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniele Buono <dbuono@linux.vnet.ibm.com>
  David Carlier <devnexen@gmail.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Derek Su <dereksu@qnap.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Emilio G. Cota <cota@braap.org>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Evgeny Yakovlev <eyakovlev@virtuozzo.com>
  fangying <fangying1@huawei.com>
  Farhan Ali <alifm@linux.ibm.com>
  Finn Thain <fthain@telegraphics.com.au>
  Geoffrey McRae <geoff@hostfission.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Greg Kurz <groug@kaod.org>
  Guenter Roeck <linux@roeck-us.net>
  Gustavo Romero <gromero@linux.ibm.com>
  Halil Pasic <pasic@linux.ibm.com>
  Helge Deller <deller@gmx.de>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Ian Jiang <ianjiang.ict@gmail.com>
  Igor Mammedov <imammedo@redhat.com>
  Janne Grunau <j@jannau.net>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Wang <jasowang@redhat.com>
  Jay Zhou <jianjay.zhou@huawei.com>
  Jean-Christophe Dubois <jcd@tribudubois.net>
  Jessica Clarke <jrtc27@jrtc27.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jingqi Liu <jingqi.liu@intel.com>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Joseph Myers <joseph@codesourcery.com>
  Julio Faracco <jcfaracco@gmail.com>
  Keqian Zhu <zhukeqian1@huawei.com>
  Kevin Wolf <kwolf@redhat.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Leonid Bloch <lb.workbox@gmail.com>
  Leonid Bloch <lbloch@janustech.com>
  Li Qiang <liq3ea@gmail.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  lichun <lichun@ruijie.com.cn>
  Like Xu <like.xu@linux.intel.com>
  Lingfeng Yang <lfy@google.com>
  Liran Alon <liran.alon@oracle.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Lukas Straub <lukasstraub2@web.de>
  Maciej S. Szmigiero <maciej.szmigiero@oracle.com>
  Magnus Damm <magnus.damm@gmail.com>
  Mao Zhongyi <maozhongyi@cmss.chinamobile.com>
  Marcelo Tosatti <mtosatti@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Masahiro Yamada <masahiroy@kernel.org>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Michael S. Tsirkin <mst@redhat.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Raphael Norwitz <raphael.norwitz@nutanix.com>
  Richard Henderson <richard.henderson@linaro.org>
  Richard W.M. Jones <rjones@redhat.com>
  Robert Foley <robert.foley@linaro.org>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Roman Kagan <rkagan@virtuozzo.com>
  Roman Kagan <rvkagan@yandex-team.ru>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Sebastian Rasmussen <sebras@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Tao Xu <tao3.xu@intel.com>
  Thomas Huth <thuth@redhat.com>
  Tong Ho <tong.ho@xilinx.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  WangBowen <bowen.wang@intel.com>
  Wei Huang <wei.huang2@amd.com>
  Wei Wang <wei.w.wang@intel.com>
  Ying Fang <fangying1@huawei.com>
  Yoshinori Sato <ysato@users.sourceforge.jp>
  Yuri Benditovich <yuri.benditovich@daynix.com>
  Zhang Chen <chen.zhang@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 17819 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Jul 07 18:48:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jul 2020 18:48:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jssdV-00027m-VE; Tue, 07 Jul 2020 18:48:09 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=JMg+=AS=virtuozzo.com=vsementsov@srs-us1.protection.inumbo.net>)
 id 1jssdU-00027h-03
 for xen-devel@lists.xenproject.org; Tue, 07 Jul 2020 18:48:08 +0000
X-Inumbo-ID: 63a40488-c082-11ea-b7bb-bc764e2007e4
Received: from EUR01-HE1-obe.outbound.protection.outlook.com (unknown
 [40.107.13.95]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 63a40488-c082-11ea-b7bb-bc764e2007e4;
 Tue, 07 Jul 2020 18:48:04 +0000 (UTC)
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=cc/5GsC2i4PcAsNyCJXfmcO9F2Zw8QHQNerajVGyxBbXkTj8WCGOTC1dHPXH2CaAM2hJwUros+hLAfLkPDn2xnCWHca35rb7I+eYt3jSfdony085Jw0a0CVdeaVmyErQjX3WU1j0cmYv8cGz0eCNzJAAKSk5eJhUv/SgdHcLCWILg53TF3yLuv65kuG1b0i3RX4gxbu2mD+u8czKUPJUJbX8Wfvihf4Ca1yGmlm+4bgBdkyfapds9DnBigGu2aHIghWm7/j5kSDPthQuu5LDWuEq8wUnvO7jxfxfcrwoZR8lj7kEXAGc+meJ1fnOmAV6KrrKxoNHpznV6LLvKnqtIw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; 
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=EOacSb4oX/N+euakjpkVsJbdOajqWcZuGY9DL710w6c=;
 b=CtaAH06D0oZMDc8/ma3Jpe1ZVLt2Lj4sBkzfsWJ0E6aPOIGyOntkWOeEriDQLfI/eaMa0Boadyi5il5VfekUe5LB2aovqkmMSD2PNGGh93KzPBxhPmLc04lRUiqpxhSQWg71ZB0LGu0illDcIQRpPnOFQ0ebv6IsdjnXzkmnkbLRzMbdhBCJXeN567s+u109U0UBtvyGs7DIamiScqoOU46zkrvyXihpMD7JoZPLm0SMzVmACnaIs0pVLHAHw/M5YoY/p6k2/9Sm0ellBI541UyxyjqctT1fmbhTz0R2/nemoWqmUih32f6RViax68/sbVsgJ2EMX/TPomiwJWgYkw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=virtuozzo.com; dmarc=pass action=none
 header.from=virtuozzo.com; dkim=pass header.d=virtuozzo.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=virtuozzo.com;
 s=selector2;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=EOacSb4oX/N+euakjpkVsJbdOajqWcZuGY9DL710w6c=;
 b=FpFfQddas2Tz8yUWtaW2zVczE/irtIDWTytLuu/bshf1K6OPuY6i9AwWc7xpc2uwX+whZdHHF05uDmFrrSsZWFSabnKM+fE9x3MAy7+IKpY4NtqRfxD9sIu7WmmjVlf/BzBt85PoHZusDJ4II4n5RZ2bdZwdArJAo1dP/JMFH0k=
Authentication-Results: linux.vnet.ibm.com; dkim=none (message not signed)
 header.d=none;linux.vnet.ibm.com; dmarc=none action=none
 header.from=virtuozzo.com;
Received: from VI1PR08MB5503.eurprd08.prod.outlook.com (2603:10a6:803:137::19)
 by VI1PR0801MB1982.eurprd08.prod.outlook.com (2603:10a6:800:83::19)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3174.20; Tue, 7 Jul
 2020 18:48:02 +0000
Received: from VI1PR08MB5503.eurprd08.prod.outlook.com
 ([fe80::2c53:d56b:77ba:8aac]) by VI1PR08MB5503.eurprd08.prod.outlook.com
 ([fe80::2c53:d56b:77ba:8aac%5]) with mapi id 15.20.3153.029; Tue, 7 Jul 2020
 18:48:02 +0000
Subject: Re: [PATCH v12 0/8] error: auto propagated local_err part I
To: Markus Armbruster <armbru@redhat.com>, qemu-devel@nongnu.org
References: <20200707165037.1026246-1-armbru@redhat.com>
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-ID: <9aec78ec-058c-37a9-4fdc-05f0613880b7@virtuozzo.com>
Date: Tue, 7 Jul 2020 21:47:59 +0300
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
In-Reply-To: <20200707165037.1026246-1-armbru@redhat.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: AM3PR05CA0141.eurprd05.prod.outlook.com
 (2603:10a6:207:3::19) To VI1PR08MB5503.eurprd08.prod.outlook.com
 (2603:10a6:803:137::19)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
Received: from [192.168.100.2] (185.215.60.58) by
 AM3PR05CA0141.eurprd05.prod.outlook.com (2603:10a6:207:3::19) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3153.27 via Frontend Transport; Tue, 7 Jul 2020 18:48:00 +0000
X-Originating-IP: [185.215.60.58]
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: e38c8d3b-020c-47ce-0d7b-08d822a6469b
X-MS-TrafficTypeDiagnostic: VI1PR0801MB1982:
X-Microsoft-Antispam-PRVS: <VI1PR0801MB1982D4DE4E4FF8CFC5E90B9FC1660@VI1PR0801MB1982.eurprd08.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:8882;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: VO+ttkIpBnsq9+iQ3jcFFzNWsyt8PQI9EIkiGbps6NGnqpJnIFirfQ+WVDl38SztfeQeWizod/Y332HrJcQ+PCuomUvOOj+KvQErS8jBGY6kCsMOH37gsM9wReyvhi6pXlhYWqy+VaIBN5clIYTyrQbWZqCRt8+uxA7JZhdZFzfMPIUrIubwF9un4e5SSRsMThYAGQENXtjA/qWQq1DOWd4pPdmYA8uZJXpenpvizVpAozt3Q7GYKSy7NpkL6D+/YNKVSF2/Bzhxx13r5K/uUqwgbaaO0xkL8zXkFlZampuhBODOZlpgyX0if/va/+yXu7YHeSMAmRuRXh2icnBVYJoqoAgaeBc9+uosk677dzVh81f8ybhKl1Hhp4MCPUkQ
X-Forefront-Antispam-Report: CIP:255.255.255.255; CTRY:; LANG:en; SCL:1; SRV:;
 IPV:NLI; SFV:NSPM; H:VI1PR08MB5503.eurprd08.prod.outlook.com; PTR:; CAT:NONE;
 SFTY:;
 SFS:(4636009)(346002)(376002)(39840400004)(136003)(366004)(396003)(7416002)(2906002)(36756003)(8676002)(8936002)(31686004)(31696002)(2616005)(956004)(4744005)(26005)(86362001)(16526019)(186003)(478600001)(4326008)(54906003)(66946007)(66476007)(66556008)(316002)(16576012)(6486002)(5660300002)(52116002)(43740500002);
 DIR:OUT; SFP:1102; 
X-MS-Exchange-AntiSpam-MessageData: lxpB24rJ3lu2hNbV+N/3UjDemjLtW36cTPUo1nRQZn7kep1b6ZNs1iNKmbS3D35kwEgRkUc1vf5W6k9eNx7PHmcGFG8TGBibLSeff5jg/k+Oi/8ci0A419ryFGM6vlPmgps+T1SRhzdOgzyNh+GvLVC+Hy3fvdNjTXIHQjUclZaQ4FuPVCW+DFr9AGH23kmylT0fj9wD0qKkz0TA9XNQd+k+nOprTKhYm3DoP147ntjsg/hR485eys5fBBkAiN29Dpc2VIvHBnZ3S1DINpnf1KM57NfXYoDgCdEuYU2R88XbOeElK5/tjXmp6l6hJh+QVXHz7Tgutlz6wvEAVyVJyE+2a5XNSLpAHU/Wk/JWWehfmncOIvzgkfiKDWLDhfgayiTfFNze5BnapzFNOTsHFXf3N5d6tnWKXO6oggNs8Y+l8KXPSujWUMHfB9QamITWHKTEb1UveBICDWaFAnI9mT2p9SeS5Gh8UbXW5iDd0Zo=
X-OriginatorOrg: virtuozzo.com
X-MS-Exchange-CrossTenant-Network-Message-Id: e38c8d3b-020c-47ce-0d7b-08d822a6469b
X-MS-Exchange-CrossTenant-AuthSource: VI1PR08MB5503.eurprd08.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 07 Jul 2020 18:48:01.8631 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 0bc7f26d-0264-416e-a6fc-8352af79c58f
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: isIu3/EOtpaZKJvfN+zNS1VKgSiuv2YKnu3hcZ3npJq4mZA/jOK54RCMtyQMdJ1sFlsKRcP/94DiqQo+0yr22Lymw1KrUOfniEdgLp/0TLo=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR0801MB1982
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin Wolf <kwolf@redhat.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Michael Roth <mdroth@linux.vnet.ibm.com>, qemu-block@nongnu.org,
 Paul Durrant <paul@xen.org>, Laszlo Ersek <lersek@redhat.com>,
 Christian Schoenebeck <qemu_oss@crudebyte.com>, groug@kaod.org,
 Max Reitz <mreitz@redhat.com>, Gerd Hoffmann <kraxel@redhat.com>,
 Stefan Hajnoczi <stefanha@redhat.com>,
 Anthony Perard <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org,
 =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

07.07.2020 19:50, Markus Armbruster wrote:
> To speed things up, I'm taking the liberty to respin Vladimir's series
> with my documentation amendments.

Thank you!

> 
> After my documentation work, I'm very much inclined to rename
> ERRP_AUTO_PROPAGATE() to ERRP_GUARD().  The fact that it propagates
> below the hood is detail.  What matters to its users is that it lets
> them use @errp more freely.  Thoughts?

No objections.  If we are making error propagation an internal implementation detail, there is no reason to shout about it in the macro name.


-- 
Best regards,
Vladimir


From xen-devel-bounces@lists.xenproject.org Tue Jul 07 18:53:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jul 2020 18:53:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jssiN-0002vL-Iv; Tue, 07 Jul 2020 18:53:11 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Om5Q=AS=redhat.com=eblake@srs-us1.protection.inumbo.net>)
 id 1jssiL-0002vG-Ov
 for xen-devel@lists.xenproject.org; Tue, 07 Jul 2020 18:53:10 +0000
X-Inumbo-ID: 198c9972-c083-11ea-8ddb-12813bfff9fa
Received: from us-smtp-1.mimecast.com (unknown [207.211.31.81])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 198c9972-c083-11ea-8ddb-12813bfff9fa;
 Tue, 07 Jul 2020 18:53:09 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
 s=mimecast20190719; t=1594147988;
 h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
 content-transfer-encoding:content-transfer-encoding:
 in-reply-to:in-reply-to:references:references;
 bh=P2nrkPBSg7lKn7Db+eLD8ivJZ2CITjUcGCMtuRpONqg=;
 b=JLSRERDek0k6H0w9rdsGErS6artpwUoeEldDgG6ofa68uRqwQIImxaJ8Wu2RI8U500dn1z
 s93+Gne7RCb+ndda5k5fj681/8hKKIWMVDpw1CJSwlTXcZvOds3hna+tiEW71qkjhTmVpW
 9X0sqUOQYr0RZlymtRLiQeLGixfgHus=
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-495-Wt_SiSRhOAuuW8c8FQiGcQ-1; Tue, 07 Jul 2020 14:52:53 -0400
X-MC-Unique: Wt_SiSRhOAuuW8c8FQiGcQ-1
Received: from smtp.corp.redhat.com (int-mx05.intmail.prod.int.phx2.redhat.com
 [10.5.11.15])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 297121B18BC7;
 Tue,  7 Jul 2020 18:52:51 +0000 (UTC)
Received: from [10.3.115.46] (ovpn-115-46.phx2.redhat.com [10.3.115.46])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id 7A27A73FC5;
 Tue,  7 Jul 2020 18:52:44 +0000 (UTC)
Subject: Re: [PATCH v12 0/8] error: auto propagated local_err part I
To: Markus Armbruster <armbru@redhat.com>, qemu-devel@nongnu.org
References: <20200707165037.1026246-1-armbru@redhat.com>
From: Eric Blake <eblake@redhat.com>
Organization: Red Hat, Inc.
Message-ID: <bbfca52b-0732-1242-ca85-59713d125e26@redhat.com>
Date: Tue, 7 Jul 2020 13:52:43 -0500
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.9.0
MIME-Version: 1.0
In-Reply-To: <20200707165037.1026246-1-armbru@redhat.com>
Content-Language: en-US
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.15
Authentication-Results: relay.mimecast.com;
 auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=eblake@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin Wolf <kwolf@redhat.com>, vsementsov@virtuozzo.com,
 qemu-block@nongnu.org, Paul Durrant <paul@xen.org>,
 =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@redhat.com>,
 Christian Schoenebeck <qemu_oss@crudebyte.com>,
 Michael Roth <mdroth@linux.vnet.ibm.com>, groug@kaod.org,
 Stefano Stabellini <sstabellini@kernel.org>, Gerd Hoffmann <kraxel@redhat.com>,
 Stefan Hajnoczi <stefanha@redhat.com>,
 Anthony Perard <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org,
 Max Reitz <mreitz@redhat.com>, Laszlo Ersek <lersek@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 7/7/20 11:50 AM, Markus Armbruster wrote:
> To speed things up, I'm taking the liberty to respin Vladimir's series
> with my documentation amendments.
> 
> After my documentation work, I'm very much inclined to rename
> ERRP_AUTO_PROPAGATE() to ERRP_GUARD().  The fact that it propagates
> below the hood is detail.  What matters to its users is that it lets
> them use @errp more freely.  Thoughts?

I like it - the shorter name is easier to type.

(The rename is a mechanical change, so if we agree to it, we should do 
it up front to minimize the churn in all the functions where we add 
uses of the macro.)

> 
> Based-on: Message-Id: <20200707160613.848843-1-armbru@redhat.com>
> 
> Available from my public repository https://repo.or.cz/qemu/armbru.git
> on branch error-auto.
> 
> v12: (based on "[PATCH v4 00/45] Less clumsy error checking")
> 01: Comments merged properly with recent commit "error: Document Error
> API usage rules", and edited for clarity.  Put ERRP_AUTO_PROPAGATE()
> before its helpers, and touch up style.
> 01-08: Commit messages tweaked
> 

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3226
Virtualization:  qemu.org | libvirt.org



From xen-devel-bounces@lists.xenproject.org Tue Jul 07 19:03:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jul 2020 19:03:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jssrp-0003r4-I0; Tue, 07 Jul 2020 19:02:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Om5Q=AS=redhat.com=eblake@srs-us1.protection.inumbo.net>)
 id 1jssrn-0003qz-RV
 for xen-devel@lists.xenproject.org; Tue, 07 Jul 2020 19:02:55 +0000
X-Inumbo-ID: 76ad438a-c084-11ea-8496-bc764e2007e4
Received: from us-smtp-delivery-1.mimecast.com (unknown [205.139.110.61])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 76ad438a-c084-11ea-8496-bc764e2007e4;
 Tue, 07 Jul 2020 19:02:54 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
 s=mimecast20190719; t=1594148574;
 h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
 content-transfer-encoding:content-transfer-encoding:
 in-reply-to:in-reply-to:references:references;
 bh=KI8ePZl9TDbDjrxDfDqnNlC+04UP8bDvXEL9iQK5BS0=;
 b=WMA3Ie0lgx/n0gbNS1yziw62jZYjOqH2419WEB39BlC5xEK9DIglyvnMRqQzEv87VfGfrg
 m9cJWOIFghYHaXUbaQ6Gu3Dp7lC9EtZl+Zjb2rawT79+Ygd5gh98tQMfp2GT+rXVGGqM3U
 /z5sAagCigepG+TnLK6vJyqwHKAFrb4=
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-316-UBMfkjp5NNKkrI0n77hKaA-1; Tue, 07 Jul 2020 15:02:50 -0400
X-MC-Unique: UBMfkjp5NNKkrI0n77hKaA-1
Received: from smtp.corp.redhat.com (int-mx03.intmail.prod.int.phx2.redhat.com
 [10.5.11.13])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id ED476107ACCD;
 Tue,  7 Jul 2020 19:02:48 +0000 (UTC)
Received: from [10.3.115.46] (ovpn-115-46.phx2.redhat.com [10.3.115.46])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id ED170797F3;
 Tue,  7 Jul 2020 19:02:41 +0000 (UTC)
Subject: Re: [PATCH v12 1/8] error: New macro ERRP_AUTO_PROPAGATE()
To: Markus Armbruster <armbru@redhat.com>, qemu-devel@nongnu.org
References: <20200707165037.1026246-1-armbru@redhat.com>
 <20200707165037.1026246-2-armbru@redhat.com>
From: Eric Blake <eblake@redhat.com>
Organization: Red Hat, Inc.
Message-ID: <afd4b693-2aec-247b-c0a7-7d061ed5bdff@redhat.com>
Date: Tue, 7 Jul 2020 14:02:41 -0500
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.9.0
MIME-Version: 1.0
In-Reply-To: <20200707165037.1026246-2-armbru@redhat.com>
Content-Language: en-US
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.13
Authentication-Results: relay.mimecast.com;
 auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=eblake@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin Wolf <kwolf@redhat.com>, vsementsov@virtuozzo.com,
 Michael Roth <mdroth@linux.vnet.ibm.com>, qemu-block@nongnu.org,
 Paul Durrant <paul@xen.org>, Laszlo Ersek <lersek@redhat.com>,
 Christian Schoenebeck <qemu_oss@crudebyte.com>, groug@kaod.org,
 Max Reitz <mreitz@redhat.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Gerd Hoffmann <kraxel@redhat.com>, Stefan Hajnoczi <stefanha@redhat.com>,
 Anthony Perard <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org,
 =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 7/7/20 11:50 AM, Markus Armbruster wrote:
> From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> 
> Introduce a new ERRP_AUTO_PROPAGATE macro, to be used at start of
> functions with an errp OUT parameter.
> 
> It has three goals:
> 
> 1. Fix issue with error_fatal and error_prepend/error_append_hint: user

the user

> can't see this additional information, because exit() happens in
> error_setg earlier than information is added. [Reported by Greg Kurz]
> 
> 2. Fix issue with error_abort and error_propagate: when we wrap
> error_abort by local_err+error_propagate, the resulting coredump will
> refer to error_propagate and not to the place where error happened.
> (the macro itself doesn't fix the issue, but it allows us to [3.] drop
> the local_err+error_propagate pattern, which will definitely fix the
> issue) [Reported by Kevin Wolf]
> 
> 3. Drop local_err+error_propagate pattern, which is used to workaround
> void functions with errp parameter, when caller wants to know resulting
> status. (Note: actually these functions could be merely updated to
> return int error code).
> 
> To achieve these goals, later patches will add invocations
> of this macro at the start of functions with either use
> error_prepend/error_append_hint (solving 1) or which use
> local_err+error_propagate to check errors, switching those
> functions to use *errp instead (solving 2 and 3).
> 
> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> Reviewed-by: Paul Durrant <paul@xen.org>
> Reviewed-by: Greg Kurz <groug@kaod.org>
> Reviewed-by: Eric Blake <eblake@redhat.com>
> [Comments merged properly with recent commit "error: Document Error
> API usage rules", and edited for clarity.  Put ERRP_AUTO_PROPAGATE()
> before its helpers, and touch up style.  Commit message tweaked.]
> Signed-off-by: Markus Armbruster <armbru@redhat.com>
> ---
>   include/qapi/error.h | 160 ++++++++++++++++++++++++++++++++++++++-----
>   1 file changed, 141 insertions(+), 19 deletions(-)
> 
> diff --git a/include/qapi/error.h b/include/qapi/error.h
> index 3fed49747d..c865a7d2f1 100644
> --- a/include/qapi/error.h
> +++ b/include/qapi/error.h

> @@ -128,18 +122,26 @@
>    *         handle the error...
>    *     }
>    * when it doesn't, say a void function:
> + *     ERRP_AUTO_PROPAGATE();
> + *     foo(arg, errp);
> + *     if (*errp) {
> + *         handle the error...
> + *     }
> + * More on ERRP_AUTO_PROPAGATE() below.
> + *
> + * Code predating ERRP_AUTO_PROPAGATE() still exits, and looks like this:

exists

>    *     Error *err = NULL;
>    *     foo(arg, &err);
>    *     if (err) {
>    *         handle the error...
> - *         error_propagate(errp, err);
> + *         error_propagate(errp, err); // deprecated
>    *     }
> - * Do *not* "optimize" this to
> + * Avoid in new code.  Do *not* "optimize" it to
>    *     foo(arg, errp);
>    *     if (*errp) { // WRONG!
>    *         handle the error...
>    *     }
> - * because errp may be NULL!
> + * because errp may be NULL!  Guard with ERRP_AUTO_PROPAGATE().

maybe:

because errp may be NULL without the ERRP_AUTO_PROPAGATE() guard.

>    *
>    * But when all you do with the error is pass it on, please use
>    *     foo(arg, errp);
> @@ -158,6 +160,19 @@
>    *         handle the error...
>    *     }
>    *
> + * Pass an existing error to the caller:

> + * = Converting to ERRP_AUTO_PROPAGATE() =
> + *
> + * To convert a function to use ERRP_AUTO_PROPAGATE():
> + *
> + * 0. If the Error ** parameter is not named @errp, rename it to
> + *    @errp.
> + *
> + * 1. Add an ERRP_AUTO_PROPAGATE() invocation, by convention right at
> + *    the beginning of the function.  This makes @errp safe to use.
> + *
> + * 2. Replace &err by errp, and err by *errp.  Delete local variable
> + *    @err.
> + *
> + * 3. Delete error_propagate(errp, *errp), replace
> + *    error_propagate_prepend(errp, *errp, ...) by error_prepend(errp, ...),
> + *

Why a comma here?

> + * 4. Ensure @errp is valid at return: when you destroy *errp, set
> + *    errp = NULL.
> + *
> + * Example:
> + *
> + *     bool fn(..., Error **errp)
> + *     {
> + *         Error *err = NULL;
> + *
> + *         foo(arg, &err);
> + *         if (err) {
> + *             handle the error...
> + *             error_propagate(errp, err);
> + *             return false;
> + *         }
> + *         ...
> + *     }
> + *
> + * becomes
> + *
> + *     bool fn(..., Error **errp)
> + *     {
> + *         ERRP_AUTO_PROPAGATE();
> + *
> + *         foo(arg, errp);
> + *         if (*errp) {
> + *             handle the error...
> + *             return false;
> + *         }
> + *         ...
> + *     }

Do we want the example to show the use of error_free and *errp = NULL?
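One hypothetical way the example could show that (not in the patch; the body below is a sketch in the same style as the doc comments above, with the guard line standing in for ERRP_AUTO_PROPAGATE()):

```c
bool fn(..., Error **errp)
{
    ERRP_AUTO_PROPAGATE();

    foo(arg, errp);
    if (*errp) {
        /* we can recover locally, so swallow the error */
        error_free(*errp);
        *errp = NULL;   /* step 4: keep errp valid at return */
        return retry(arg);
    }
    ...
}
```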

Otherwise, this is looking good to me.  It will need a tweak if we go 
with the shorter name ERRP_GUARD, but I like that idea.

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3226
Virtualization:  qemu.org | libvirt.org



From xen-devel-bounces@lists.xenproject.org Tue Jul 07 19:23:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jul 2020 19:23:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jstBY-0005d6-CM; Tue, 07 Jul 2020 19:23:20 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=F44P=AS=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jstBW-0005cX-QP
 for xen-devel@lists.xenproject.org; Tue, 07 Jul 2020 19:23:18 +0000
X-Inumbo-ID: 4ce2fe66-c087-11ea-8ddd-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4ce2fe66-c087-11ea-8ddd-12813bfff9fa;
 Tue, 07 Jul 2020 19:23:13 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=TBEsrYS8r8v9HKRGwC0Nokil/nk5rrR9oP43j7Ziypo=; b=YCp8KbJLlhgsy81xubYWZhcMG
 sKCKkCbvIeNuV0ELoB/5ZThg1iJ/auyXR+ebnq9Er8qs28XDPqr192+uKJVFfoleuTLic/iiqm3fD
 J13x3MH5l/xloukJ6JjYaESnIEzG+gl2qAnWgsvdUggblxVHEDxKExcbc5/SDrW7+3/o8=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jstBQ-000338-Qr; Tue, 07 Jul 2020 19:23:12 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jstBQ-00009r-GC; Tue, 07 Jul 2020 19:23:12 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jstBQ-0001Xx-FI; Tue, 07 Jul 2020 19:23:12 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151705-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 151705: regressions - FAIL
X-Osstest-Failures: linux-linus:test-arm64-arm64-libvirt-xsm:guest-start/debian.repeat:fail:regression
 linux-linus:build-i386-pvops:kernel-build:fail:regression
 linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:heisenbug
 linux-linus:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:heisenbug
 linux-linus:test-armhf-armhf-xl-rtds:guest-start.2:fail:allowable
 linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-examine:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-pair:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
X-Osstest-Versions-This: linux=bfe91da29bfad9941d5d703d45e29f0812a20724
X-Osstest-Versions-That: linux=1b5044021070efa3259f3e9548dc35d1eb6aa844
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 07 Jul 2020 19:23:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151705 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151705/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-libvirt-xsm 16 guest-start/debian.repeat fail REGR. vs. 151214
 build-i386-pvops              6 kernel-build             fail REGR. vs. 151214

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl-arndale   7 xen-boot         fail in 151690 pass in 151705
 test-armhf-armhf-xl-rtds     16 guest-start/debian.repeat  fail pass in 151690

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds     17 guest-start.2  fail in 151690 REGR. vs. 151214

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemut-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-examine       1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemut-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 151214
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 151214
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 151214
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 151214
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 151214
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 151214
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass

version targeted for testing:
 linux                bfe91da29bfad9941d5d703d45e29f0812a20724
baseline version:
 linux                1b5044021070efa3259f3e9548dc35d1eb6aa844

Last test of basis   151214  2020-06-18 02:27:46 Z   19 days
Failing since        151236  2020-06-19 19:10:35 Z   18 days   27 attempts
Testing same since   151690  2020-07-06 23:10:49 Z    0 days    2 attempts

------------------------------------------------------------
577 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             fail    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         blocked 
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 28056 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Jul 07 19:31:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jul 2020 19:31:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jstIy-0006Zj-Gj; Tue, 07 Jul 2020 19:31:00 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Hoqm=AS=nvidia.com=jhubbard@srs-us1.protection.inumbo.net>)
 id 1jstIy-0006Ze-0e
 for xen-devel@lists.xenproject.org; Tue, 07 Jul 2020 19:31:00 +0000
X-Inumbo-ID: 6287bdb4-c088-11ea-8de3-12813bfff9fa
Received: from hqnvemgate26.nvidia.com (unknown [216.228.121.65])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6287bdb4-c088-11ea-8de3-12813bfff9fa;
 Tue, 07 Jul 2020 19:30:59 +0000 (UTC)
Received: from hqpgpgate101.nvidia.com (Not Verified[216.228.121.13]) by
 hqnvemgate26.nvidia.com (using TLS: TLSv1.2, DES-CBC3-SHA)
 id <B5f04cd650001>; Tue, 07 Jul 2020 12:30:45 -0700
Received: from hqmail.nvidia.com ([172.20.161.6])
 by hqpgpgate101.nvidia.com (PGP Universal service);
 Tue, 07 Jul 2020 12:30:58 -0700
X-PGP-Universal: processed;
 by hqpgpgate101.nvidia.com on Tue, 07 Jul 2020 12:30:58 -0700
Received: from [10.2.50.36] (10.124.1.5) by HQMAIL107.nvidia.com
 (172.20.187.13) with Microsoft SMTP Server (TLS) id 15.0.1473.3; Tue, 7 Jul
 2020 19:30:57 +0000
Subject: Re: [PATCH v2 2/3] xen/privcmd: Mark pages as dirty
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>, Souptick Joarder
 <jrdr.linux@gmail.com>
References: <1594059372-15563-1-git-send-email-jrdr.linux@gmail.com>
 <1594059372-15563-3-git-send-email-jrdr.linux@gmail.com>
 <8fdd8c77-27dd-2847-7929-b5d3098b1b45@suse.com>
 <CAFqt6zZRx3oDO+p2e6EiDig9fzKirME-t6fanzDRh6e7gWx+nA@mail.gmail.com>
 <4abc0dd2-655c-16fa-dfc3-95904196c81f@suse.com>
From: John Hubbard <jhubbard@nvidia.com>
Message-ID: <4c6e52e7-1d33-132b-1d7e-e57963966dcc@nvidia.com>
Date: Tue, 7 Jul 2020 12:30:57 -0700
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <4abc0dd2-655c-16fa-dfc3-95904196c81f@suse.com>
X-Originating-IP: [10.124.1.5]
X-ClientProxiedBy: HQMAIL105.nvidia.com (172.20.187.12) To
 HQMAIL107.nvidia.com (172.20.187.13)
Content-Type: text/plain; charset="utf-8"; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: quoted-printable
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=nvidia.com; s=n1;
 t=1594150245; bh=dpqlDiB4hX+WGF7HO6/5EfWHYhPI+AkkSY6wCIzaoGI=;
 h=X-PGP-Universal:Subject:To:CC:References:From:Message-ID:Date:
 User-Agent:MIME-Version:In-Reply-To:X-Originating-IP:
 X-ClientProxiedBy:Content-Type:Content-Language:
 Content-Transfer-Encoding;
 b=k7md7ayLM9GOFI11DNbkGz1E86R2aKIrNml3+R0dG1CxEQpyzJwxmDVWz4rsASK/I
 Tqdu9EMUjsfwzvxkxQfxYOw/cqRVNZH3vST7zs68eD2pn+JZyrRb9s8MHKsB8ODbYn
 HfgOfBluQzlyPE7gtGVbu740+lonybCMcXoEUlPyVNdwaUB6n1GlppmnknUS6S44pQ
 53cO71d9AodweLvrmv422MJjRySe+0+CB34gdp7S0Oynv24TjAoeXoijmbpLzTfoNj
 MKtnXjDMTRCGqioDkQN7Dh+/amw/q/x0JeMhjvatfn/genCs/lSHd/LnBe1976d6Dd
 uiI+ffTpw6+og==
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>, sstabellini@kernel.org,
 linux-kernel@vger.kernel.org, Paul Durrant <xadimgnik@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 2020-07-07 04:43, Jürgen Groß wrote:
> On 07.07.20 13:30, Souptick Joarder wrote:
>> On Tue, Jul 7, 2020 at 3:08 PM Jürgen Groß <jgross@suse.com> wrote:
...
>>>> diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
>>>> index 33677ea..f6c1543 100644
>>>> --- a/drivers/xen/privcmd.c
>>>> +++ b/drivers/xen/privcmd.c
>>>> @@ -612,8 +612,11 @@ static void unlock_pages(struct page *pages[], unsigned int nr_pages)
>>>>   {
>>>>        unsigned int i;
>>>>
>>>> -     for (i = 0; i < nr_pages; i++)
>>>> +     for (i = 0; i < nr_pages; i++) {
>>>> +             if (!PageDirty(pages[i]))
>>>> +                     set_page_dirty_lock(pages[i]);
>>>
>>> With put_page() directly following I think you should be able to use
>>> set_page_dirty() instead, as there is obviously a reference to the page
>>> existing.
>>
>> Patch [3/3] will convert above codes to use unpin_user_pages_dirty_lock()
>> which internally do the same check. So I thought to keep linux-stable and
>> linux-next code in sync. John had a similar concern [1] and later agreed to keep
>> this check.
>>
>> Shall I keep this check?  No?

It doesn't matter *too* much, because patch 3/3 fixes up everything by
changing it all to unpin_user_pages_dirty_lock(). However, there is something
to be said for having correct interim patches, too. :)  Details:

>>
>> [1] https://lore.kernel.org/xen-devel/a750e5e5-fd5d-663b-c5fd-261d7c939ba7@nvidia.com/
>
> I wasn't referring to checking PageDirty(), but to the use of
> set_page_dirty_lock().
>
> Looking at the comment just before the implementation of
> set_page_dirty_lock() suggests that it is fine to use set_page_dirty()
> instead (so not calling lock_page()).


no no, that's a misreading of the comment. Unless this xen/privcmd code has
somehow taken a reference on page->mapping->host (which I do *not* think is
the case), then it is still racy to call set_page_dirty() here. Instead,
set_page_dirty_lock() should be used.


>
> Only the transition from get_user_pages_fast() to pin_user_pages_fast()
> requires to use the locked version IMO.
>

That's a different misunderstanding. :) pin_user_pages*() APIs are meant to be
functionally drop-in replacements for get_user_pages*(). Internally,
pin_user_pages*() functions do some additional tracking, but from a caller's
perspective, it should look the same. In other words, there is nothing
about pin_user_pages_fast() that requires set_page_dirty_lock() upon release.
The reason set_page_dirty_lock() was chosen is that there are very few
(none at all?) call sites that need to release and dirty a page, that also meet
the requirements to safely call set_page_dirty().

That's why there is an unpin_user_pages_dirty_lock(), but there is not a
corresponding unpin_user_pages_dirty() call: the latter has not been required
so far, even though the call site conversions are nearly done.


thanks,
--
John Hubbard
NVIDIA


From xen-devel-bounces@lists.xenproject.org Tue Jul 07 19:36:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jul 2020 19:36:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jstOE-0006k3-6F; Tue, 07 Jul 2020 19:36:26 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Om5Q=AS=redhat.com=eblake@srs-us1.protection.inumbo.net>)
 id 1jstOC-0006je-RE
 for xen-devel@lists.xenproject.org; Tue, 07 Jul 2020 19:36:24 +0000
X-Inumbo-ID: 23d5ff4f-c089-11ea-8de3-12813bfff9fa
Received: from us-smtp-delivery-1.mimecast.com (unknown [205.139.110.61])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 23d5ff4f-c089-11ea-8de3-12813bfff9fa;
 Tue, 07 Jul 2020 19:36:23 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
 s=mimecast20190719; t=1594150583;
 h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
 content-transfer-encoding:content-transfer-encoding:
 in-reply-to:in-reply-to:references:references;
 bh=s4pxyif6vCA7Q2ARf0cNNXCLU62fQ7KvB9o94CRQIdw=;
 b=hNmhmp7g9VxnHRhLFDU39rBQXaL7dx6e6G9wSA7pstdmrG811O0CXDVoDu8l+t8D0LlIMs
 NRrF4TTdamZuxykc77GIbyRgoMuGid86B7/uikO3vYG8zchx12m8sB0kDfwEmPfwsIRLRv
 JUVOkZGUFFaTtXssvWReFWmj3XOibfw=
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-150-lslRWG3pO1GJwf3QQRsP1Q-1; Tue, 07 Jul 2020 15:36:12 -0400
X-MC-Unique: lslRWG3pO1GJwf3QQRsP1Q-1
Received: from smtp.corp.redhat.com (int-mx08.intmail.prod.int.phx2.redhat.com
 [10.5.11.23])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id C0EA3800D5C;
 Tue,  7 Jul 2020 19:36:09 +0000 (UTC)
Received: from [10.3.115.46] (ovpn-115-46.phx2.redhat.com [10.3.115.46])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id A97601A7CE;
 Tue,  7 Jul 2020 19:36:02 +0000 (UTC)
Subject: Re: [PATCH v12 2/8] scripts: Coccinelle script to use
 ERRP_AUTO_PROPAGATE()
To: Markus Armbruster <armbru@redhat.com>, qemu-devel@nongnu.org
References: <20200707165037.1026246-1-armbru@redhat.com>
 <20200707165037.1026246-3-armbru@redhat.com>
From: Eric Blake <eblake@redhat.com>
Organization: Red Hat, Inc.
Message-ID: <764387d7-0d42-a291-d720-60df303c15e4@redhat.com>
Date: Tue, 7 Jul 2020 14:36:02 -0500
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.9.0
MIME-Version: 1.0
In-Reply-To: <20200707165037.1026246-3-armbru@redhat.com>
Content-Language: en-US
X-Scanned-By: MIMEDefang 2.84 on 10.5.11.23
Authentication-Results: relay.mimecast.com;
 auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=eblake@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin Wolf <kwolf@redhat.com>, vsementsov@virtuozzo.com,
 qemu-block@nongnu.org, Paul Durrant <paul@xen.org>,
 =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@redhat.com>,
 Christian Schoenebeck <qemu_oss@crudebyte.com>,
 Michael Roth <mdroth@linux.vnet.ibm.com>, groug@kaod.org,
 Stefano Stabellini <sstabellini@kernel.org>, Gerd Hoffmann <kraxel@redhat.com>,
 Stefan Hajnoczi <stefanha@redhat.com>,
 Anthony Perard <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org,
 Max Reitz <mreitz@redhat.com>, Laszlo Ersek <lersek@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 7/7/20 11:50 AM, Markus Armbruster wrote:
> From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> 
> Script adds ERRP_AUTO_PROPAGATE macro invocation where appropriate and
> does corresponding changes in code (look for details in
> include/qapi/error.h)
> 
> Usage example:
> spatch --sp-file scripts/coccinelle/auto-propagated-errp.cocci \
>   --macro-file scripts/cocci-macro-file.h --in-place --no-show-diff \
>   --max-width 80 FILES...
> 
> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> Reviewed-by: Markus Armbruster <armbru@redhat.com>
> Signed-off-by: Markus Armbruster <armbru@redhat.com>
> ---
>   scripts/coccinelle/auto-propagated-errp.cocci | 337 ++++++++++++++++++
>   include/qapi/error.h                          |   3 +
>   MAINTAINERS                                   |   1 +
>   3 files changed, 341 insertions(+)
>   create mode 100644 scripts/coccinelle/auto-propagated-errp.cocci

Needs a tweak if we go with ERRP_GUARD.  But that's easy.

> +
> +// Convert special case with goto separately.
> +// I tried merging this into the following rule the obvious way, but
> +// it made Coccinelle hang on block.c
> +//
> +// Note interesting thing: if we don't do it here, and try to fixup
> +// "out: }" things later after all transformations (the rule will be
> +// the same, just without error_propagate() call), coccinelle fails to
> +// match this "out: }".

"out: }" is not valid C; would referring to "out: ; }" fare any better?

> +@ disable optional_qualifier@
> +identifier rule1.fn, rule1.local_err, out;
> +symbol errp;
> +@@
> +
> + fn(..., Error ** ____, ...)
> + {
> +     <...
> +-    goto out;
> ++    return;
> +     ...>
> +- out:
> +-    error_propagate(errp, local_err);
> + }
> +
> +// Convert most of local_err related stuff.
> +//
> +// Note, that we inherit rule1.fn and rule1.local_err names, not
> +// objects themselves. We may match something not related to the
> +// pattern matched by rule1. For example, local_err may be defined with
> +// the same name in different blocks inside one function, and in one
> +// block follow the propagation pattern and in other block doesn't.
> +//
> +// Note also that errp-cleaning functions
> +//   error_free_errp
> +//   error_report_errp
> +//   error_reportf_errp
> +//   warn_report_errp
> +//   warn_reportf_errp
> +// are not yet implemented. They must call corresponding Error* -
> +// freeing function and then set *errp to NULL, to avoid further
> +// propagation to original errp (consider ERRP_AUTO_PROPAGATE in use).
> +// For example, error_free_errp may look like this:
> +//
> +//    void error_free_errp(Error **errp)
> +//    {
> +//        error_free(*errp);
> +//        *errp = NULL;
> +//    }

I guess we can still decide later whether we want these additional 
functions, or whether they will even help, given how many places we 
have already improved by applying this script as-is with Markus' 
cleanups in place.

While I won't call myself a Coccinelle expert, it at least looks sane 
enough that I'm comfortable if you add:

Reviewed-by: Eric Blake <eblake@redhat.com>

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3226
Virtualization:  qemu.org | libvirt.org



From xen-devel-bounces@lists.xenproject.org Tue Jul 07 19:40:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jul 2020 19:40:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jstSE-0007ay-OM; Tue, 07 Jul 2020 19:40:34 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CHg+=AS=cert.pl=michal.leszczynski@srs-us1.protection.inumbo.net>)
 id 1jstSD-0007ao-01
 for xen-devel@lists.xenproject.org; Tue, 07 Jul 2020 19:40:33 +0000
X-Inumbo-ID: b7d032be-c089-11ea-bb8b-bc764e2007e4
Received: from bagnar.nask.net.pl (unknown [195.187.242.196])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b7d032be-c089-11ea-bb8b-bc764e2007e4;
 Tue, 07 Jul 2020 19:40:32 +0000 (UTC)
Received: from bagnar.nask.net.pl (unknown [172.16.9.10])
 by bagnar.nask.net.pl (Postfix) with ESMTP id 0E816A2675;
 Tue,  7 Jul 2020 21:40:31 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by bagnar.nask.net.pl (Postfix) with ESMTP id 06A92A2691;
 Tue,  7 Jul 2020 21:40:30 +0200 (CEST)
X-Amavis-Alert: BAD HEADER SECTION, Duplicate header field: "References"
Received: from bagnar.nask.net.pl ([127.0.0.1])
 by localhost (bagnar.nask.net.pl [127.0.0.1]) (amavisd-new, port 10032)
 with ESMTP id fcq4hya3Rxz7; Tue,  7 Jul 2020 21:40:29 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by bagnar.nask.net.pl (Postfix) with ESMTP id 8CFE0A261F;
 Tue,  7 Jul 2020 21:40:29 +0200 (CEST)
X-Virus-Scanned: amavisd-new at bagnar.nask.net.pl
X-Amavis-Alert: BAD HEADER SECTION, Duplicate header field: "References"
Received: from bagnar.nask.net.pl ([127.0.0.1])
 by localhost (bagnar.nask.net.pl [127.0.0.1]) (amavisd-new, port 10026)
 with ESMTP id SD8nyO5q28Qn; Tue,  7 Jul 2020 21:40:29 +0200 (CEST)
Received: from belindir.nask.net.pl (belindir-ext.nask.net.pl
 [195.187.242.210])
 by bagnar.nask.net.pl (Postfix) with ESMTP id 5986AA2647;
 Tue,  7 Jul 2020 21:40:29 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by belindir.nask.net.pl (Postfix) with ESMTP id 3AE5A22383;
 Tue,  7 Jul 2020 21:39:59 +0200 (CEST)
X-Amavis-Alert: BAD HEADER SECTION, Duplicate header field: "References"
Received: from belindir.nask.net.pl ([127.0.0.1])
 by localhost (belindir.nask.net.pl [127.0.0.1]) (amavisd-new, port 10032)
 with ESMTP id kvb5TwRL8e28; Tue,  7 Jul 2020 21:39:53 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by belindir.nask.net.pl (Postfix) with ESMTP id ACE342242F;
 Tue,  7 Jul 2020 21:39:53 +0200 (CEST)
X-Quarantine-ID: <8p2776uf7EM3>
X-Virus-Scanned: amavisd-new at belindir.nask.net.pl
X-Amavis-Alert: BAD HEADER SECTION, Duplicate header field: "References"
Received: from belindir.nask.net.pl ([127.0.0.1])
 by localhost (belindir.nask.net.pl [127.0.0.1]) (amavisd-new, port 10026)
 with ESMTP id 8p2776uf7EM3; Tue,  7 Jul 2020 21:39:53 +0200 (CEST)
Received: from mq-desktop.cert.pl (unknown [195.187.238.217])
 by belindir.nask.net.pl (Postfix) with ESMTPSA id 6C40922303;
 Tue,  7 Jul 2020 21:39:53 +0200 (CEST)
From: =?UTF-8?q?Micha=C5=82=20Leszczy=C5=84ski?= <michal.leszczynski@cert.pl>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v6 04/11] common: add vmtrace_pt_size domain parameter
Date: Tue,  7 Jul 2020 21:39:43 +0200
Message-Id: <036bc768bfb074269d9bd4530304a11170b7142d.1594150543.git.michal.leszczynski@cert.pl>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1594150543.git.michal.leszczynski@cert.pl>
References: <cover.1594150543.git.michal.leszczynski@cert.pl>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 luwei.kang@intel.com, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Michal Leszczynski <michal.leszczynski@cert.pl>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 tamas.lengyel@intel.com,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Michal Leszczynski <michal.leszczynski@cert.pl>

Add the vmtrace_pt_size parameter to live domains and the
vmtrace_pt_order parameter to xen_domctl_createdomain.

Signed-off-by: Michal Leszczynski <michal.leszczynski@cert.pl>
---
 xen/arch/x86/domain.c       | 6 ++++++
 xen/common/domain.c         | 9 +++++++++
 xen/include/public/domctl.h | 1 +
 xen/include/xen/sched.h     | 3 +++
 4 files changed, 19 insertions(+)

diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index fee6c3931a..b75017b28b 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -499,6 +499,12 @@ int arch_sanitise_domain_config(struct xen_domctl_createdomain *config)
          */
         config->flags |= XEN_DOMCTL_CDF_oos_off;
 
+    if ( !hvm && config->processor_trace_buf_kb )
+    {
+        dprintk(XENLOG_INFO, "Processor trace is not supported on non-HVM\n");
+        return -EINVAL;
+    }
+
     return 0;
 }
 
diff --git a/xen/common/domain.c b/xen/common/domain.c
index a45cf023f7..e6e8f88da1 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -338,6 +338,12 @@ static int sanitise_domain_config(struct xen_domctl_createdomain *config)
         return -EINVAL;
     }
 
+    if ( config->processor_trace_buf_kb && !vmtrace_supported )
+    {
+        dprintk(XENLOG_INFO, "Processor tracing is not supported\n");
+        return -EINVAL;
+    }
+
     return arch_sanitise_domain_config(config);
 }
 
@@ -443,6 +449,9 @@ struct domain *domain_create(domid_t domid,
         d->nr_pirqs = min(d->nr_pirqs, nr_irqs);
 
         radix_tree_init(&d->pirq_tree);
+
+        if ( config->processor_trace_buf_kb )
+            d->processor_trace_buf_kb = config->processor_trace_buf_kb;
     }
 
     if ( (err = arch_domain_create(d, config)) != 0 )
diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
index 59bdc28c89..7681675a94 100644
--- a/xen/include/public/domctl.h
+++ b/xen/include/public/domctl.h
@@ -92,6 +92,7 @@ struct xen_domctl_createdomain {
     uint32_t max_evtchn_port;
     int32_t max_grant_frames;
     int32_t max_maptrack_frames;
+    uint32_t processor_trace_buf_kb;
 
     struct xen_arch_domainconfig arch;
 };
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index ac53519d7f..c046e59886 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -457,6 +457,9 @@ struct domain
     unsigned    pbuf_idx;
     spinlock_t  pbuf_lock;
 
+    /* Used by vmtrace features */
+    uint32_t    processor_trace_buf_kb;
+
     /* OProfile support. */
     struct xenoprof *xenoprof;
 
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue Jul 07 19:40:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jul 2020 19:40:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jstSG-0007b4-0G; Tue, 07 Jul 2020 19:40:36 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CHg+=AS=cert.pl=michal.leszczynski@srs-us1.protection.inumbo.net>)
 id 1jstSE-0007at-FX
 for xen-devel@lists.xenproject.org; Tue, 07 Jul 2020 19:40:34 +0000
X-Inumbo-ID: b7bf1741-c089-11ea-8de3-12813bfff9fa
Received: from bagnar.nask.net.pl (unknown [195.187.242.196])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b7bf1741-c089-11ea-8de3-12813bfff9fa;
 Tue, 07 Jul 2020 19:40:32 +0000 (UTC)
Received: from bagnar.nask.net.pl (unknown [172.16.9.10])
 by bagnar.nask.net.pl (Postfix) with ESMTP id 5FCBEA268B;
 Tue,  7 Jul 2020 21:40:31 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by bagnar.nask.net.pl (Postfix) with ESMTP id 5882EA264E;
 Tue,  7 Jul 2020 21:40:30 +0200 (CEST)
X-Amavis-Alert: BAD HEADER SECTION, Duplicate header field: "References"
Received: from bagnar.nask.net.pl ([127.0.0.1])
 by localhost (bagnar.nask.net.pl [127.0.0.1]) (amavisd-new, port 10032)
 with ESMTP id 5VbTMOSLIUg9; Tue,  7 Jul 2020 21:40:29 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by bagnar.nask.net.pl (Postfix) with ESMTP id B98B3A2660;
 Tue,  7 Jul 2020 21:40:29 +0200 (CEST)
X-Virus-Scanned: amavisd-new at bagnar.nask.net.pl
X-Amavis-Alert: BAD HEADER SECTION, Duplicate header field: "References"
Received: from bagnar.nask.net.pl ([127.0.0.1])
 by localhost (bagnar.nask.net.pl [127.0.0.1]) (amavisd-new, port 10026)
 with ESMTP id g6H3xNpPY_Yw; Tue,  7 Jul 2020 21:40:29 +0200 (CEST)
Received: from belindir.nask.net.pl (belindir-ext.nask.net.pl
 [195.187.242.210])
 by bagnar.nask.net.pl (Postfix) with ESMTP id 7A071A265E;
 Tue,  7 Jul 2020 21:40:29 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by belindir.nask.net.pl (Postfix) with ESMTP id 6232222477;
 Tue,  7 Jul 2020 21:39:59 +0200 (CEST)
X-Amavis-Alert: BAD HEADER SECTION, Duplicate header field: "References"
Received: from belindir.nask.net.pl ([127.0.0.1])
 by localhost (belindir.nask.net.pl [127.0.0.1]) (amavisd-new, port 10032)
 with ESMTP id HGbXudXT8yoR; Tue,  7 Jul 2020 21:39:53 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by belindir.nask.net.pl (Postfix) with ESMTP id 9964922427;
 Tue,  7 Jul 2020 21:39:53 +0200 (CEST)
X-Quarantine-ID: <gZ2uXSNp0qxh>
X-Virus-Scanned: amavisd-new at belindir.nask.net.pl
X-Amavis-Alert: BAD HEADER SECTION, Duplicate header field: "References"
Received: from belindir.nask.net.pl ([127.0.0.1])
 by localhost (belindir.nask.net.pl [127.0.0.1]) (amavisd-new, port 10026)
 with ESMTP id gZ2uXSNp0qxh; Tue,  7 Jul 2020 21:39:53 +0200 (CEST)
Received: from mq-desktop.cert.pl (unknown [195.187.238.217])
 by belindir.nask.net.pl (Postfix) with ESMTPSA id 510622236E;
 Tue,  7 Jul 2020 21:39:53 +0200 (CEST)
From: =?UTF-8?q?Micha=C5=82=20Leszczy=C5=84ski?= <michal.leszczynski@cert.pl>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v6 03/11] x86/vmx: add IPT cpu feature
Date: Tue,  7 Jul 2020 21:39:42 +0200
Message-Id: <4d6eac657d082efaa0e7d141b5c9a07791b31f94.1594150543.git.michal.leszczynski@cert.pl>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1594150543.git.michal.leszczynski@cert.pl>
References: <cover.1594150543.git.michal.leszczynski@cert.pl>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Julien Grall <julien@xen.org>, Kevin Tian <kevin.tian@intel.com>,
 Stefano Stabellini <sstabellini@kernel.org>, luwei.kang@intel.com,
 Jun Nakajima <jun.nakajima@intel.com>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Michal Leszczynski <michal.leszczynski@cert.pl>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 tamas.lengyel@intel.com,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Michal Leszczynski <michal.leszczynski@cert.pl>

Check whether the Intel Processor Trace feature is supported by the
current processor. Define the vmtrace_supported global variable.
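As a side note on where the new PROC_TRACE bit comes from: the cpufeatureset entry added below (word 5, bit 25) corresponds to CPUID.(EAX=7,ECX=0):EBX bit 25. A tiny sketch of that extraction, with the bit number taken from this patch rather than restated from the SDM:

```c
#include <assert.h>
#include <stdint.h>

/* CPUID.(EAX=7,ECX=0):EBX bit 25 advertises Intel Processor Trace;
 * this mirrors the PROC_TRACE entry (5*32+25) added in this patch. */
#define PROC_TRACE_BIT 25

/* Given the EBX output of CPUID leaf 7, subleaf 0, report whether
 * the Processor Trace feature bit is set. */
static int leaf7_ebx_has_ipt(uint32_t ebx)
{
    return (ebx >> PROC_TRACE_BIT) & 1u;
}
```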

Signed-off-by: Michal Leszczynski <michal.leszczynski@cert.pl>
---
 xen/arch/x86/hvm/vmx/vmcs.c                 | 15 ++++++++++++++-
 xen/common/domain.c                         |  2 ++
 xen/include/asm-x86/cpufeature.h            |  1 +
 xen/include/asm-x86/hvm/vmx/vmcs.h          |  1 +
 xen/include/public/arch-x86/cpufeatureset.h |  1 +
 xen/include/xen/domain.h                    |  2 ++
 6 files changed, 21 insertions(+), 1 deletion(-)

diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
index ca94c2bedc..3a53553f10 100644
--- a/xen/arch/x86/hvm/vmx/vmcs.c
+++ b/xen/arch/x86/hvm/vmx/vmcs.c
@@ -291,6 +291,20 @@ static int vmx_init_vmcs_config(void)
         _vmx_cpu_based_exec_control &=
             ~(CPU_BASED_CR8_LOAD_EXITING | CPU_BASED_CR8_STORE_EXITING);
 
+    rdmsrl(MSR_IA32_VMX_MISC, _vmx_misc_cap);
+
+    /* Check whether IPT is supported in VMX operation. */
+    if ( !smp_processor_id() )
+        vmtrace_supported = cpu_has_ipt &&
+                            (_vmx_misc_cap & VMX_MISC_PROC_TRACE);
+    else if ( vmtrace_supported &&
+              !(_vmx_misc_cap & VMX_MISC_PROC_TRACE) )
+    {
+        printk("VMX: IPT capabilities fatally differ between CPU%u and CPU0\n",
+               smp_processor_id());
+        return -EINVAL;
+    }
+
     if ( _vmx_cpu_based_exec_control & CPU_BASED_ACTIVATE_SECONDARY_CONTROLS )
     {
         min = 0;
@@ -305,7 +319,6 @@ static int vmx_init_vmcs_config(void)
                SECONDARY_EXEC_ENABLE_VIRT_EXCEPTIONS |
                SECONDARY_EXEC_XSAVES |
                SECONDARY_EXEC_TSC_SCALING);
-        rdmsrl(MSR_IA32_VMX_MISC, _vmx_misc_cap);
         if ( _vmx_misc_cap & VMX_MISC_VMWRITE_ALL )
             opt |= SECONDARY_EXEC_ENABLE_VMCS_SHADOWING;
         if ( opt_vpid_enabled )
diff --git a/xen/common/domain.c b/xen/common/domain.c
index 7cc9526139..a45cf023f7 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -82,6 +82,8 @@ struct vcpu *idle_vcpu[NR_CPUS] __read_mostly;
 
 vcpu_info_t dummy_vcpu_info;
 
+bool vmtrace_supported __read_mostly;
+
 static void __domain_finalise_shutdown(struct domain *d)
 {
     struct vcpu *v;
diff --git a/xen/include/asm-x86/cpufeature.h b/xen/include/asm-x86/cpufeature.h
index f790d5c1f8..555f696a26 100644
--- a/xen/include/asm-x86/cpufeature.h
+++ b/xen/include/asm-x86/cpufeature.h
@@ -104,6 +104,7 @@
 #define cpu_has_clwb            boot_cpu_has(X86_FEATURE_CLWB)
 #define cpu_has_avx512er        boot_cpu_has(X86_FEATURE_AVX512ER)
 #define cpu_has_avx512cd        boot_cpu_has(X86_FEATURE_AVX512CD)
+#define cpu_has_ipt             boot_cpu_has(X86_FEATURE_PROC_TRACE)
 #define cpu_has_sha             boot_cpu_has(X86_FEATURE_SHA)
 #define cpu_has_avx512bw        boot_cpu_has(X86_FEATURE_AVX512BW)
 #define cpu_has_avx512vl        boot_cpu_has(X86_FEATURE_AVX512VL)
diff --git a/xen/include/asm-x86/hvm/vmx/vmcs.h b/xen/include/asm-x86/hvm/vmx/vmcs.h
index 906810592f..6153ba6769 100644
--- a/xen/include/asm-x86/hvm/vmx/vmcs.h
+++ b/xen/include/asm-x86/hvm/vmx/vmcs.h
@@ -283,6 +283,7 @@ extern u32 vmx_secondary_exec_control;
 #define VMX_VPID_INVVPID_SINGLE_CONTEXT_RETAINING_GLOBAL 0x80000000000ULL
 extern u64 vmx_ept_vpid_cap;
 
+#define VMX_MISC_PROC_TRACE                     0x00004000
 #define VMX_MISC_CR3_TARGET                     0x01ff0000
 #define VMX_MISC_VMWRITE_ALL                    0x20000000
 
diff --git a/xen/include/public/arch-x86/cpufeatureset.h b/xen/include/public/arch-x86/cpufeatureset.h
index fe7492a225..2c91862f2d 100644
--- a/xen/include/public/arch-x86/cpufeatureset.h
+++ b/xen/include/public/arch-x86/cpufeatureset.h
@@ -217,6 +217,7 @@ XEN_CPUFEATURE(SMAP,          5*32+20) /*S  Supervisor Mode Access Prevention */
 XEN_CPUFEATURE(AVX512_IFMA,   5*32+21) /*A  AVX-512 Integer Fused Multiply Add */
 XEN_CPUFEATURE(CLFLUSHOPT,    5*32+23) /*A  CLFLUSHOPT instruction */
 XEN_CPUFEATURE(CLWB,          5*32+24) /*A  CLWB instruction */
+XEN_CPUFEATURE(PROC_TRACE,    5*32+25) /*   Processor Tracing feature */
 XEN_CPUFEATURE(AVX512PF,      5*32+26) /*A  AVX-512 Prefetch Instructions */
 XEN_CPUFEATURE(AVX512ER,      5*32+27) /*A  AVX-512 Exponent & Reciprocal Instrs */
 XEN_CPUFEATURE(AVX512CD,      5*32+28) /*A  AVX-512 Conflict Detection Instrs */
diff --git a/xen/include/xen/domain.h b/xen/include/xen/domain.h
index 7e51d361de..61ebc6c24d 100644
--- a/xen/include/xen/domain.h
+++ b/xen/include/xen/domain.h
@@ -130,4 +130,6 @@ struct vnuma_info {
 
 void vnuma_destroy(struct vnuma_info *vnuma);
 
+extern bool vmtrace_supported;
+
 #endif /* __XEN_DOMAIN_H__ */
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue Jul 07 19:40:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jul 2020 19:40:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jstSI-0007bq-8P; Tue, 07 Jul 2020 19:40:38 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CHg+=AS=cert.pl=michal.leszczynski@srs-us1.protection.inumbo.net>)
 id 1jstSH-0007ao-VE
 for xen-devel@lists.xenproject.org; Tue, 07 Jul 2020 19:40:37 +0000
X-Inumbo-ID: b7c80ff8-c089-11ea-bca7-bc764e2007e4
Received: from bagnar.nask.net.pl (unknown [195.187.242.196])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b7c80ff8-c089-11ea-bca7-bc764e2007e4;
 Tue, 07 Jul 2020 19:40:32 +0000 (UTC)
Received: from bagnar.nask.net.pl (unknown [172.16.9.10])
 by bagnar.nask.net.pl (Postfix) with ESMTP id 0B140A2660;
 Tue,  7 Jul 2020 21:40:31 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by bagnar.nask.net.pl (Postfix) with ESMTP id ECE7EA2657;
 Tue,  7 Jul 2020 21:40:29 +0200 (CEST)
X-Amavis-Alert: BAD HEADER SECTION, Duplicate header field: "References"
Received: from bagnar.nask.net.pl ([127.0.0.1])
 by localhost (bagnar.nask.net.pl [127.0.0.1]) (amavisd-new, port 10032)
 with ESMTP id qQ7tpZAsGLBM; Tue,  7 Jul 2020 21:40:29 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by bagnar.nask.net.pl (Postfix) with ESMTP id 637FBA264E;
 Tue,  7 Jul 2020 21:40:29 +0200 (CEST)
X-Virus-Scanned: amavisd-new at bagnar.nask.net.pl
X-Amavis-Alert: BAD HEADER SECTION, Duplicate header field: "References"
Received: from bagnar.nask.net.pl ([127.0.0.1])
 by localhost (bagnar.nask.net.pl [127.0.0.1]) (amavisd-new, port 10026)
 with ESMTP id 9XQj3sShKaPE; Tue,  7 Jul 2020 21:40:29 +0200 (CEST)
Received: from belindir.nask.net.pl (belindir-ext.nask.net.pl
 [195.187.242.210])
 by bagnar.nask.net.pl (Postfix) with ESMTP id 2F4DBA2489;
 Tue,  7 Jul 2020 21:40:29 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by belindir.nask.net.pl (Postfix) with ESMTP id 0E1D12245A;
 Tue,  7 Jul 2020 21:39:59 +0200 (CEST)
X-Amavis-Alert: BAD HEADER SECTION, Duplicate header field: "References"
Received: from belindir.nask.net.pl ([127.0.0.1])
 by localhost (belindir.nask.net.pl [127.0.0.1]) (amavisd-new, port 10032)
 with ESMTP id QEW3aQAhvDPS; Tue,  7 Jul 2020 21:39:53 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by belindir.nask.net.pl (Postfix) with ESMTP id 64D2522383;
 Tue,  7 Jul 2020 21:39:53 +0200 (CEST)
X-Quarantine-ID: <8m8VZJ10Geso>
X-Virus-Scanned: amavisd-new at belindir.nask.net.pl
X-Amavis-Alert: BAD HEADER SECTION, Duplicate header field: "References"
Received: from belindir.nask.net.pl ([127.0.0.1])
 by localhost (belindir.nask.net.pl [127.0.0.1]) (amavisd-new, port 10026)
 with ESMTP id 8m8VZJ10Geso; Tue,  7 Jul 2020 21:39:53 +0200 (CEST)
Received: from mq-desktop.cert.pl (unknown [195.187.238.217])
 by belindir.nask.net.pl (Postfix) with ESMTPSA id 3049C222A1;
 Tue,  7 Jul 2020 21:39:53 +0200 (CEST)
From: =?UTF-8?q?Micha=C5=82=20Leszczy=C5=84ski?= <michal.leszczynski@cert.pl>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v6 01/11] memory: batch processing in acquire_resource()
Date: Tue,  7 Jul 2020 21:39:40 +0200
Message-Id: <02415890e4e8211513b495228c790e1d16de767f.1594150543.git.michal.leszczynski@cert.pl>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1594150543.git.michal.leszczynski@cert.pl>
References: <cover.1594150543.git.michal.leszczynski@cert.pl>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 luwei.kang@intel.com, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Michal Leszczynski <michal.leszczynski@cert.pl>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 tamas.lengyel@intel.com
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Michal Leszczynski <michal.leszczynski@cert.pl>

Allow large resources to be acquired by letting acquire_resource()
process items in batches, using hypercall continuation.

Be aware that this modifies the behavior of the acquire_resource
call with frame_list=NULL. While previously it returned the size
of the internal array (32), with this patch it returns the maximum
number of frames that can be requested at once,
i.e. UINT_MAX >> MEMOP_EXTENT_SHIFT.
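The batching scheme can be modeled in isolation: each call consumes at most one array's worth of frames, advances a caller-visible cursor, and reports "restart me" until the cursor reaches the total. The names and the ERR_RESTART constant below are illustrative stand-ins for the hypercall-continuation machinery in the diff, not Xen APIs:

```c
#include <assert.h>

#define BATCH_SIZE  32         /* stands in for ARRAY_SIZE(mfn_list) */
#define ERR_RESTART (-1)       /* stands in for Xen's -ERESTART */

/* Consume at most BATCH_SIZE frames per call, advancing the
 * caller-visible cursor.  Returns ERR_RESTART while frames remain,
 * 0 once *start_extent has caught up with total_frames. */
static int acquire_batch(unsigned long total_frames,
                         unsigned long *start_extent)
{
    unsigned long remaining = total_frames - *start_extent;
    unsigned long batch = remaining < BATCH_SIZE ? remaining : BATCH_SIZE;

    /* ...acquire 'batch' frames starting at *start_extent here... */
    *start_extent += batch;

    return *start_extent == total_frames ? 0 : ERR_RESTART;
}
```

In the real patch the "call again" step is a hypercall continuation, with the cursor encoded into the op via MEMOP_EXTENT_SHIFT so the guest-visible interface stays a single memory_op call.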

Signed-off-by: Michal Leszczynski <michal.leszczynski@cert.pl>
---
 xen/common/memory.c | 49 ++++++++++++++++++++++++++++++++++++++++-----
 1 file changed, 44 insertions(+), 5 deletions(-)

diff --git a/xen/common/memory.c b/xen/common/memory.c
index 714077c1e5..eb42f883df 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -1046,10 +1046,12 @@ static int acquire_grant_table(struct domain *d, unsigned int id,
 }
 
 static int acquire_resource(
-    XEN_GUEST_HANDLE_PARAM(xen_mem_acquire_resource_t) arg)
+    XEN_GUEST_HANDLE_PARAM(xen_mem_acquire_resource_t) arg,
+    unsigned long *start_extent)
 {
     struct domain *d, *currd = current->domain;
     xen_mem_acquire_resource_t xmar;
+    uint32_t total_frames;
     /*
      * The mfn_list and gfn_list (below) arrays are ok on stack for the
      * moment since they are small, but if they need to grow in future
@@ -1069,7 +1071,7 @@ static int acquire_resource(
         if ( xmar.nr_frames )
             return -EINVAL;
 
-        xmar.nr_frames = ARRAY_SIZE(mfn_list);
+        xmar.nr_frames = UINT_MAX >> MEMOP_EXTENT_SHIFT;
 
         if ( __copy_field_to_guest(arg, &xmar, nr_frames) )
             return -EFAULT;
@@ -1077,8 +1079,28 @@ static int acquire_resource(
         return 0;
     }
 
+    total_frames = xmar.nr_frames;
+
+    /* Is the size too large for us to encode a continuation? */
+    if ( unlikely(xmar.nr_frames > (UINT_MAX >> MEMOP_EXTENT_SHIFT)) )
+        return -EINVAL;
+
+    if ( *start_extent )
+    {
+        /*
+         * Check whether start_extent is in bounds, as this
+         * value is visible to the calling domain.
+         */
+        if ( *start_extent > xmar.nr_frames )
+            return -EINVAL;
+
+        xmar.frame += *start_extent;
+        xmar.nr_frames -= *start_extent;
+        guest_handle_add_offset(xmar.frame_list, *start_extent);
+    }
+
     if ( xmar.nr_frames > ARRAY_SIZE(mfn_list) )
-        return -E2BIG;
+        xmar.nr_frames = ARRAY_SIZE(mfn_list);
 
     rc = rcu_lock_remote_domain_by_id(xmar.domid, &d);
     if ( rc )
@@ -1135,6 +1157,14 @@ static int acquire_resource(
         }
     }
 
+    if ( !rc )
+    {
+        *start_extent += xmar.nr_frames;
+
+        if ( *start_extent != total_frames )
+            rc = -ERESTART;
+    }
+
  out:
     rcu_unlock_domain(d);
 
@@ -1599,8 +1629,17 @@ long do_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 #endif
 
     case XENMEM_acquire_resource:
-        rc = acquire_resource(
-            guest_handle_cast(arg, xen_mem_acquire_resource_t));
+        do {
+            rc = acquire_resource(
+                guest_handle_cast(arg, xen_mem_acquire_resource_t),
+                &start_extent);
+
+            if ( hypercall_preempt_check() )
+                return hypercall_create_continuation(
+                    __HYPERVISOR_memory_op, "lh",
+                    op | (start_extent << MEMOP_EXTENT_SHIFT), arg);
+        } while ( rc == -ERESTART );
+
         break;
 
     default:
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue Jul 07 19:40:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jul 2020 19:40:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jstSK-0007d9-No; Tue, 07 Jul 2020 19:40:40 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CHg+=AS=cert.pl=michal.leszczynski@srs-us1.protection.inumbo.net>)
 id 1jstSJ-0007at-8A
 for xen-devel@lists.xenproject.org; Tue, 07 Jul 2020 19:40:39 +0000
X-Inumbo-ID: b7ba091c-c089-11ea-8de3-12813bfff9fa
Received: from bagnar.nask.net.pl (unknown [195.187.242.196])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b7ba091c-c089-11ea-8de3-12813bfff9fa;
 Tue, 07 Jul 2020 19:40:32 +0000 (UTC)
Received: from bagnar.nask.net.pl (unknown [172.16.9.10])
 by bagnar.nask.net.pl (Postfix) with ESMTP id EEF6DA265A;
 Tue,  7 Jul 2020 21:40:30 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by bagnar.nask.net.pl (Postfix) with ESMTP id E020EA268B;
 Tue,  7 Jul 2020 21:40:29 +0200 (CEST)
X-Amavis-Alert: BAD HEADER SECTION, Duplicate header field: "References"
Received: from bagnar.nask.net.pl ([127.0.0.1])
 by localhost (bagnar.nask.net.pl [127.0.0.1]) (amavisd-new, port 10032)
 with ESMTP id PinkwRJevqWr; Tue,  7 Jul 2020 21:40:29 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by bagnar.nask.net.pl (Postfix) with ESMTP id 66F75A2657;
 Tue,  7 Jul 2020 21:40:29 +0200 (CEST)
X-Virus-Scanned: amavisd-new at bagnar.nask.net.pl
X-Amavis-Alert: BAD HEADER SECTION, Duplicate header field: "References"
Received: from bagnar.nask.net.pl ([127.0.0.1])
 by localhost (bagnar.nask.net.pl [127.0.0.1]) (amavisd-new, port 10026)
 with ESMTP id hshjaMDDlq4C; Tue,  7 Jul 2020 21:40:29 +0200 (CEST)
Received: from belindir.nask.net.pl (belindir-ext.nask.net.pl
 [195.187.242.210])
 by bagnar.nask.net.pl (Postfix) with ESMTP id 35734A261F;
 Tue,  7 Jul 2020 21:40:29 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by belindir.nask.net.pl (Postfix) with ESMTP id 12F762245C;
 Tue,  7 Jul 2020 21:39:59 +0200 (CEST)
X-Amavis-Alert: BAD HEADER SECTION, Duplicate header field: "References"
Received: from belindir.nask.net.pl ([127.0.0.1])
 by localhost (belindir.nask.net.pl [127.0.0.1]) (amavisd-new, port 10032)
 with ESMTP id rQdVHyLH4fzM; Tue,  7 Jul 2020 21:39:53 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by belindir.nask.net.pl (Postfix) with ESMTP id 70EEE2241B;
 Tue,  7 Jul 2020 21:39:53 +0200 (CEST)
X-Quarantine-ID: <2pmxzWWtxK_F>
X-Virus-Scanned: amavisd-new at belindir.nask.net.pl
X-Amavis-Alert: BAD HEADER SECTION, Duplicate header field: "References"
Received: from belindir.nask.net.pl ([127.0.0.1])
 by localhost (belindir.nask.net.pl [127.0.0.1]) (amavisd-new, port 10026)
 with ESMTP id 2pmxzWWtxK_F; Tue,  7 Jul 2020 21:39:53 +0200 (CEST)
Received: from mq-desktop.cert.pl (unknown [195.187.238.217])
 by belindir.nask.net.pl (Postfix) with ESMTPSA id 45657222A3;
 Tue,  7 Jul 2020 21:39:53 +0200 (CEST)
From: =?UTF-8?q?Micha=C5=82=20Leszczy=C5=84ski?= <michal.leszczynski@cert.pl>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v6 02/11] x86/vmx: add Intel PT MSR definitions
Date: Tue,  7 Jul 2020 21:39:41 +0200
Message-Id: <ba3de1d4cd926b16a297d90055a03fda0762c2b5.1594150543.git.michal.leszczynski@cert.pl>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1594150543.git.michal.leszczynski@cert.pl>
References: <cover.1594150543.git.michal.leszczynski@cert.pl>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: luwei.kang@intel.com, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Michal Leszczynski <michal.leszczynski@cert.pl>,
 Jan Beulich <jbeulich@suse.com>, tamas.lengyel@intel.com,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Michal Leszczynski <michal.leszczynski@cert.pl>

Define constants related to Intel Processor Trace features.

Signed-off-by: Michal Leszczynski <michal.leszczynski@cert.pl>
Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
 xen/include/asm-x86/msr-index.h | 24 ++++++++++++++++++++++++
 1 file changed, 24 insertions(+)

diff --git a/xen/include/asm-x86/msr-index.h b/xen/include/asm-x86/msr-index.h
index 0fe98af923..4fd54fb5c9 100644
--- a/xen/include/asm-x86/msr-index.h
+++ b/xen/include/asm-x86/msr-index.h
@@ -72,7 +72,31 @@
 #define MSR_RTIT_OUTPUT_BASE                0x00000560
 #define MSR_RTIT_OUTPUT_MASK                0x00000561
 #define MSR_RTIT_CTL                        0x00000570
+#define  RTIT_CTL_TRACE_EN                  (_AC(1, ULL) <<  0)
+#define  RTIT_CTL_CYC_EN                    (_AC(1, ULL) <<  1)
+#define  RTIT_CTL_OS                        (_AC(1, ULL) <<  2)
+#define  RTIT_CTL_USR                       (_AC(1, ULL) <<  3)
+#define  RTIT_CTL_PWR_EVT_EN                (_AC(1, ULL) <<  4)
+#define  RTIT_CTL_FUP_ON_PTW                (_AC(1, ULL) <<  5)
+#define  RTIT_CTL_FABRIC_EN                 (_AC(1, ULL) <<  6)
+#define  RTIT_CTL_CR3_FILTER                (_AC(1, ULL) <<  7)
+#define  RTIT_CTL_TOPA                      (_AC(1, ULL) <<  8)
+#define  RTIT_CTL_MTC_EN                    (_AC(1, ULL) <<  9)
+#define  RTIT_CTL_TSC_EN                    (_AC(1, ULL) << 10)
+#define  RTIT_CTL_DIS_RETC                  (_AC(1, ULL) << 11)
+#define  RTIT_CTL_PTW_EN                    (_AC(1, ULL) << 12)
+#define  RTIT_CTL_BRANCH_EN                 (_AC(1, ULL) << 13)
+#define  RTIT_CTL_MTC_FREQ                  (_AC(0xf, ULL) << 14)
+#define  RTIT_CTL_CYC_THRESH                (_AC(0xf, ULL) << 19)
+#define  RTIT_CTL_PSB_FREQ                  (_AC(0xf, ULL) << 24)
+#define  RTIT_CTL_ADDR(n)                   (_AC(0xf, ULL) << (32 + 4 * (n)))
 #define MSR_RTIT_STATUS                     0x00000571
+#define  RTIT_STATUS_FILTER_EN              (_AC(1, ULL) <<  0)
+#define  RTIT_STATUS_CONTEXT_EN             (_AC(1, ULL) <<  1)
+#define  RTIT_STATUS_TRIGGER_EN             (_AC(1, ULL) <<  2)
+#define  RTIT_STATUS_ERROR                  (_AC(1, ULL) <<  4)
+#define  RTIT_STATUS_STOPPED                (_AC(1, ULL) <<  5)
+#define  RTIT_STATUS_BYTECNT                (_AC(0x1ffff, ULL) << 32)
 #define MSR_RTIT_CR3_MATCH                  0x00000572
 #define MSR_RTIT_ADDR_A(n)                 (0x00000580 + (n) * 2)
 #define MSR_RTIT_ADDR_B(n)                 (0x00000581 + (n) * 2)
-- 
2.17.1
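[Editor's note] The RTIT_CTL masks above are meant to be OR-ed together, with the multi-bit fields (MTC_FREQ, CYC_THRESH, PSB_FREQ) masked and shifted. A minimal standalone sketch of that usage, with `_AC()` reduced to a plain cast (the real macro lives in Xen's headers; the helper names here are hypothetical):

```c
#include <stdint.h>

/* Stand-in for Xen's _AC(): in C code it expands to a plain constant. */
#define _AC(x, suffix) ((uint64_t)(x))

#define RTIT_CTL_TRACE_EN   (_AC(1, ULL) <<  0)
#define RTIT_CTL_OS         (_AC(1, ULL) <<  2)
#define RTIT_CTL_USR        (_AC(1, ULL) <<  3)
#define RTIT_CTL_BRANCH_EN  (_AC(1, ULL) << 13)
#define RTIT_CTL_MTC_FREQ   (_AC(0xf, ULL) << 14)

/* Control value tracing both ring 0 and ring 3, with branch packets on. */
static uint64_t rtit_ctl_basic(void)
{
    return RTIT_CTL_TRACE_EN | RTIT_CTL_OS | RTIT_CTL_USR |
           RTIT_CTL_BRANCH_EN;
}

/* Write a 4-bit MTC frequency code into its field without disturbing
 * the other bits. */
static uint64_t rtit_ctl_set_mtc_freq(uint64_t ctl, unsigned int freq)
{
    return (ctl & ~RTIT_CTL_MTC_FREQ) |
           (((uint64_t)freq << 14) & RTIT_CTL_MTC_FREQ);
}
```

The value returned by `rtit_ctl_basic()` is what would eventually land in MSR_RTIT_CTL (0x570) for a trace covering both kernel and user execution.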



From xen-devel-bounces@lists.xenproject.org Tue Jul 07 19:40:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jul 2020 19:40:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jstSO-0007ep-0f; Tue, 07 Jul 2020 19:40:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CHg+=AS=cert.pl=michal.leszczynski@srs-us1.protection.inumbo.net>)
 id 1jstSM-0007ao-VY
 for xen-devel@lists.xenproject.org; Tue, 07 Jul 2020 19:40:42 +0000
X-Inumbo-ID: b7fb4044-c089-11ea-bb8b-bc764e2007e4
Received: from bagnar.nask.net.pl (unknown [195.187.242.196])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b7fb4044-c089-11ea-bb8b-bc764e2007e4;
 Tue, 07 Jul 2020 19:40:32 +0000 (UTC)
Received: from bagnar.nask.net.pl (unknown [172.16.9.10])
 by bagnar.nask.net.pl (Postfix) with ESMTP id 45DD5A2657;
 Tue,  7 Jul 2020 21:40:31 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by bagnar.nask.net.pl (Postfix) with ESMTP id 2DAB7A261F;
 Tue,  7 Jul 2020 21:40:30 +0200 (CEST)
Received: from bagnar.nask.net.pl ([127.0.0.1])
 by localhost (bagnar.nask.net.pl [127.0.0.1]) (amavisd-new, port 10032)
 with ESMTP id A7sP2nFKaCHb; Tue,  7 Jul 2020 21:40:29 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by bagnar.nask.net.pl (Postfix) with ESMTP id 76ADDA265A;
 Tue,  7 Jul 2020 21:40:29 +0200 (CEST)
X-Virus-Scanned: amavisd-new at bagnar.nask.net.pl
Received: from bagnar.nask.net.pl ([127.0.0.1])
 by localhost (bagnar.nask.net.pl [127.0.0.1]) (amavisd-new, port 10026)
 with ESMTP id S1odB7LY9wkz; Tue,  7 Jul 2020 21:40:29 +0200 (CEST)
Received: from belindir.nask.net.pl (belindir-ext.nask.net.pl
 [195.187.242.210])
 by bagnar.nask.net.pl (Postfix) with ESMTP id 3E4C7A2646;
 Tue,  7 Jul 2020 21:40:29 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by belindir.nask.net.pl (Postfix) with ESMTP id 192B622466;
 Tue,  7 Jul 2020 21:39:59 +0200 (CEST)
Received: from belindir.nask.net.pl ([127.0.0.1])
 by localhost (belindir.nask.net.pl [127.0.0.1]) (amavisd-new, port 10032)
 with ESMTP id ziXmPpBNaCej; Tue,  7 Jul 2020 21:39:53 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by belindir.nask.net.pl (Postfix) with ESMTP id 4AA252230B;
 Tue,  7 Jul 2020 21:39:53 +0200 (CEST)
X-Virus-Scanned: amavisd-new at belindir.nask.net.pl
Received: from belindir.nask.net.pl ([127.0.0.1])
 by localhost (belindir.nask.net.pl [127.0.0.1]) (amavisd-new, port 10026)
 with ESMTP id Gzc46ntvovfp; Tue,  7 Jul 2020 21:39:53 +0200 (CEST)
Received: from mq-desktop.cert.pl (unknown [195.187.238.217])
 by belindir.nask.net.pl (Postfix) with ESMTPSA id 13D8C21B7E;
 Tue,  7 Jul 2020 21:39:53 +0200 (CEST)
From: =?UTF-8?q?Micha=C5=82=20Leszczy=C5=84ski?= <michal.leszczynski@cert.pl>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v6 00/11] Implement support for external IPT monitoring
Date: Tue,  7 Jul 2020 21:39:39 +0200
Message-Id: <cover.1594150543.git.michal.leszczynski@cert.pl>
X-Mailer: git-send-email 2.17.1
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Julien Grall <julien@xen.org>, Kevin Tian <kevin.tian@intel.com>,
 Stefano Stabellini <sstabellini@kernel.org>, luwei.kang@intel.com,
 Jun Nakajima <jun.nakajima@intel.com>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Anthony PERARD <anthony.perard@citrix.com>, tamas.lengyel@intel.com,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Intel Processor Trace is an architectural extension available in modern Intel
family CPUs. It allows recording a detailed trace of activity while the
processor executes code. The recorded trace can be used to reconstruct the
code flow, that is, to find out the executed code paths, determine the
branches taken, and so forth.

The abovementioned feature is described in Intel(R) 64 and IA-32 Architectures
Software Developer's Manual, Volume 3C: System Programming Guide, Part 3,
Chapter 36: "Intel Processor Trace."

This patch series implements an interface that Dom0 can use to enable IPT
for particular vCPUs in a DomU, allowing for external monitoring. Such a
feature has numerous applications, such as malware analysis, fuzzing, and
performance testing.

Thanks also to Tamas K Lengyel for a few preliminary hints given before the
first version of this patch series was submitted to xen-devel.

Changed since v1:
  * MSR_RTIT_CTL is managed using MSR load lists
  * other PT-related MSRs are modified only when a vCPU goes out of context
  * trace buffer is now acquired as a resource
  * added vmtrace_pt_size parameter in xl.cfg; the size of the trace
    buffer must be specified at the moment of domain creation
  * trace buffers are allocated on domain creation and freed on
    domain destruction
  * HVMOP_vmtrace_ipt_enable/disable is limited to enabling/disabling PT;
    these calls don't manage buffer memory anymore
  * lifted 32 MFN/GFN array limit when acquiring resources
  * minor code style changes according to review

Changed since v2:
  * trace buffer is now allocated on domain creation (in v2 it was
    allocated when hvm param was set)
  * restored 32-item limit in mfn/gfn arrays in acquire_resource
    and instead implemented hypercall continuations
  * code changes according to Jan's and Roger's review

Changed since v3:
  * vmtrace HVMOPs are now implemented as DOMCTLs
  * patches split up according to Andrew's comments
  * code changes according to v3 review on the mailing list

Changed since v4:
  * rebased to commit be63d9d4
  * fixed dependencies between patches
    (earlier patches don't reference further patches)
  * introduced preemption check in acquire_resource
  * moved buffer allocation to common code
  * split up some patches according to code review
  * minor fixes according to code review

Changed since v5:
  * trace buffer size is now dynamically determined by the proctrace
    tool
  * trace buffer size variable is uniformly defined as uint32_t
    processor_trace_buf_kb in hypervisor, toolstack and ABI
  * buffer pages are no longer freed explicitly; reference counting
    is used instead
  * minor fixes according to code review

This patch series is available on GitHub:
https://github.com/icedevml/xen/tree/ipt-patch-v6


Michal Leszczynski (11):
  memory: batch processing in acquire_resource()
  x86/vmx: add Intel PT MSR definitions
  x86/vmx: add IPT cpu feature
  common: add vmtrace_pt_size domain parameter
  tools/libxl: add vmtrace_pt_size parameter
  x86/hvm: processor trace interface in HVM
  x86/vmx: implement IPT in VMX
  x86/mm: add vmtrace_buf resource type
  x86/domctl: add XEN_DOMCTL_vmtrace_op
  tools/libxc: add xc_vmtrace_* functions
  tools/proctrace: add proctrace tool

 docs/man/xl.cfg.5.pod.in                    |  13 ++
 tools/golang/xenlight/helpers.gen.go        |   2 +
 tools/golang/xenlight/types.gen.go          |   1 +
 tools/libxc/Makefile                        |   1 +
 tools/libxc/include/xenctrl.h               |  40 +++++
 tools/libxc/xc_vmtrace.c                    |  87 ++++++++++
 tools/libxl/libxl.h                         |   8 +
 tools/libxl/libxl_create.c                  |   1 +
 tools/libxl/libxl_types.idl                 |   4 +
 tools/proctrace/Makefile                    |  45 +++++
 tools/proctrace/proctrace.c                 | 179 ++++++++++++++++++++
 tools/xl/xl_parse.c                         |  22 +++
 xen/arch/x86/domain.c                       |  27 +++
 xen/arch/x86/domctl.c                       |  50 ++++++
 xen/arch/x86/hvm/vmx/vmcs.c                 |  15 +-
 xen/arch/x86/hvm/vmx/vmx.c                  | 110 ++++++++++++
 xen/common/domain.c                         |  46 +++++
 xen/common/memory.c                         |  80 ++++++++-
 xen/include/asm-x86/cpufeature.h            |   1 +
 xen/include/asm-x86/hvm/hvm.h               |  20 +++
 xen/include/asm-x86/hvm/vmx/vmcs.h          |   4 +
 xen/include/asm-x86/hvm/vmx/vmx.h           |  14 ++
 xen/include/asm-x86/msr-index.h             |  24 +++
 xen/include/public/arch-x86/cpufeatureset.h |   1 +
 xen/include/public/domctl.h                 |  29 ++++
 xen/include/public/memory.h                 |   1 +
 xen/include/xen/domain.h                    |   2 +
 xen/include/xen/sched.h                     |   7 +
 28 files changed, 828 insertions(+), 6 deletions(-)
 create mode 100644 tools/libxc/xc_vmtrace.c
 create mode 100644 tools/proctrace/Makefile
 create mode 100644 tools/proctrace/proctrace.c

-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue Jul 07 19:41:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jul 2020 19:41:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jstSi-0007nF-AN; Tue, 07 Jul 2020 19:41:04 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CHg+=AS=cert.pl=michal.leszczynski@srs-us1.protection.inumbo.net>)
 id 1jstSh-0007ml-5i
 for xen-devel@lists.xenproject.org; Tue, 07 Jul 2020 19:41:03 +0000
X-Inumbo-ID: c9c50350-c089-11ea-bca7-bc764e2007e4
Received: from bagnar.nask.net.pl (unknown [195.187.242.196])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c9c50350-c089-11ea-bca7-bc764e2007e4;
 Tue, 07 Jul 2020 19:41:02 +0000 (UTC)
Received: from bagnar.nask.net.pl (unknown [172.16.9.10])
 by bagnar.nask.net.pl (Postfix) with ESMTP id 58FB1A26A3;
 Tue,  7 Jul 2020 21:41:01 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by bagnar.nask.net.pl (Postfix) with ESMTP id 5684AA26BB;
 Tue,  7 Jul 2020 21:41:00 +0200 (CEST)
X-Amavis-Alert: BAD HEADER SECTION, Duplicate header field: "References"
Received: from bagnar.nask.net.pl ([127.0.0.1])
 by localhost (bagnar.nask.net.pl [127.0.0.1]) (amavisd-new, port 10032)
 with ESMTP id cPr368eJ3i4J; Tue,  7 Jul 2020 21:40:59 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by bagnar.nask.net.pl (Postfix) with ESMTP id 83B62A26AD;
 Tue,  7 Jul 2020 21:40:59 +0200 (CEST)
X-Virus-Scanned: amavisd-new at bagnar.nask.net.pl
X-Amavis-Alert: BAD HEADER SECTION, Duplicate header field: "References"
Received: from bagnar.nask.net.pl ([127.0.0.1])
 by localhost (bagnar.nask.net.pl [127.0.0.1]) (amavisd-new, port 10026)
 with ESMTP id zXXw5sv7UBpl; Tue,  7 Jul 2020 21:40:59 +0200 (CEST)
Received: from belindir.nask.net.pl (belindir-ext.nask.net.pl
 [195.187.242.210])
 by bagnar.nask.net.pl (Postfix) with ESMTP id 63757A2675;
 Tue,  7 Jul 2020 21:40:59 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by belindir.nask.net.pl (Postfix) with ESMTP id 531FB22467;
 Tue,  7 Jul 2020 21:40:05 +0200 (CEST)
X-Amavis-Alert: BAD HEADER SECTION, Duplicate header field: "References"
Received: from belindir.nask.net.pl ([127.0.0.1])
 by localhost (belindir.nask.net.pl [127.0.0.1]) (amavisd-new, port 10032)
 with ESMTP id mD7MzktKtjP9; Tue,  7 Jul 2020 21:39:59 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by belindir.nask.net.pl (Postfix) with ESMTP id 412DD22459;
 Tue,  7 Jul 2020 21:39:54 +0200 (CEST)
X-Quarantine-ID: <z977rWsuvJ92>
X-Virus-Scanned: amavisd-new at belindir.nask.net.pl
X-Amavis-Alert: BAD HEADER SECTION, Duplicate header field: "References"
Received: from belindir.nask.net.pl ([127.0.0.1])
 by localhost (belindir.nask.net.pl [127.0.0.1]) (amavisd-new, port 10026)
 with ESMTP id z977rWsuvJ92; Tue,  7 Jul 2020 21:39:54 +0200 (CEST)
Received: from mq-desktop.cert.pl (unknown [195.187.238.217])
 by belindir.nask.net.pl (Postfix) with ESMTPSA id 10B2F22454;
 Tue,  7 Jul 2020 21:39:54 +0200 (CEST)
From: =?UTF-8?q?Micha=C5=82=20Leszczy=C5=84ski?= <michal.leszczynski@cert.pl>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v6 11/11] tools/proctrace: add proctrace tool
Date: Tue,  7 Jul 2020 21:39:50 +0200
Message-Id: <8bc5959478d6ba1c1873615b53628094da578688.1594150543.git.michal.leszczynski@cert.pl>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1594150543.git.michal.leszczynski@cert.pl>
References: <cover.1594150543.git.michal.leszczynski@cert.pl>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: luwei.kang@intel.com, Michal Leszczynski <michal.leszczynski@cert.pl>,
 tamas.lengyel@intel.com, Ian Jackson <ian.jackson@eu.citrix.com>,
 Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Michal Leszczynski <michal.leszczynski@cert.pl>

Add a demonstration tool that uses xc_vmtrace_* calls to manage
external IPT monitoring of a DomU.

Signed-off-by: Michal Leszczynski <michal.leszczynski@cert.pl>
---
 tools/proctrace/Makefile    |  45 +++++++++
 tools/proctrace/proctrace.c | 179 ++++++++++++++++++++++++++++++++++++
 2 files changed, 224 insertions(+)
 create mode 100644 tools/proctrace/Makefile
 create mode 100644 tools/proctrace/proctrace.c

diff --git a/tools/proctrace/Makefile b/tools/proctrace/Makefile
new file mode 100644
index 0000000000..9c135229b9
--- /dev/null
+++ b/tools/proctrace/Makefile
@@ -0,0 +1,45 @@
+# Copyright (C) CERT Polska - NASK PIB
+# Author: Michał Leszczyński <michal.leszczynski@cert.pl>
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; under version 2 of the License.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+
+XEN_ROOT=$(CURDIR)/../..
+include $(XEN_ROOT)/tools/Rules.mk
+
+CFLAGS  += -Werror
+CFLAGS  += $(CFLAGS_libxenevtchn)
+CFLAGS  += $(CFLAGS_libxenctrl)
+LDLIBS  += $(LDLIBS_libxenctrl)
+LDLIBS  += $(LDLIBS_libxenevtchn)
+LDLIBS  += $(LDLIBS_libxenforeignmemory)
+
+.PHONY: all
+all: build
+
+.PHONY: build
+build: proctrace
+
+.PHONY: install
+install: build
+	$(INSTALL_DIR) $(DESTDIR)$(sbindir)
+	$(INSTALL_PROG) proctrace $(DESTDIR)$(sbindir)/proctrace
+
+.PHONY: uninstall
+uninstall:
+	rm -f $(DESTDIR)$(sbindir)/proctrace
+
+.PHONY: clean
+clean:
+	$(RM) -f proctrace $(DEPS_RM)
+
+.PHONY: distclean
+distclean: clean
+
+-include $(DEPS_INCLUDE)
diff --git a/tools/proctrace/proctrace.c b/tools/proctrace/proctrace.c
new file mode 100644
index 0000000000..3c1ccccee8
--- /dev/null
+++ b/tools/proctrace/proctrace.c
@@ -0,0 +1,179 @@
+/******************************************************************************
+ * tools/proctrace/proctrace.c
+ *
+ * Demonstrative tool for collecting Intel Processor Trace data from Xen.
+ *  Could be used to externally monitor a given vCPU in a given DomU.
+ *
+ * Copyright (C) 2020 by CERT Polska - NASK PIB
+ *
+ * Authors: Michał Leszczyński, michal.leszczynski@cert.pl
+ * Date:    June, 2020
+ *
+ *  This program is free software; you can redistribute it and/or modify
+ *  it under the terms of the GNU General Public License as published by
+ *  the Free Software Foundation; under version 2 of the License.
+ *
+ *  This program is distributed in the hope that it will be useful,
+ *  but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *  GNU General Public License for more details.
+ *
+ *  You should have received a copy of the GNU General Public License
+ *  along with this program; If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <stdlib.h>
+#include <stdio.h>
+#include <unistd.h>
+#include <sys/mman.h>
+#include <signal.h>
+#include <errno.h>
+
+#include <xenctrl.h>
+#include <xen/xen.h>
+#include <xenforeignmemory.h>
+
+volatile int interrupted = 0;
+volatile int domain_down = 0;
+
+void term_handler(int signum) {
+    interrupted = 1;
+}
+
+int main(int argc, char* argv[]) {
+    xc_interface *xc;
+    uint32_t domid;
+    uint32_t vcpu_id;
+    uint64_t size;
+
+    int rc = -1;
+    uint8_t *buf = NULL;
+    uint64_t last_offset = 0;
+
+    xenforeignmemory_handle *fmem;
+    xenforeignmemory_resource_handle *fres;
+
+    if (signal(SIGINT, term_handler) == SIG_ERR)
+    {
+        fprintf(stderr, "Failed to register signal handler\n");
+        return 1;
+    }
+
+    if (argc != 3) {
+        fprintf(stderr, "Usage: %s <domid> <vcpu_id>\n", argv[0]);
+        fprintf(stderr, "It's recommended to redirect this "
+                        "program's output to a file,\n");
+        fprintf(stderr, "or to pipe its output to xxd or another program.\n");
+        return 1;
+    }
+
+    domid = atoi(argv[1]);
+    vcpu_id = atoi(argv[2]);
+
+    xc = xc_interface_open(0, 0, 0);
+
+    if (!xc) {
+        fprintf(stderr, "Failed to open xc interface\n");
+        return 1;
+    }
+
+    fmem = xenforeignmemory_open(0, 0);
+
+    if (!fmem) {
+        fprintf(stderr, "Failed to open foreign memory interface\n");
+        return 1;
+    }
+
+    rc = xc_vmtrace_pt_enable(xc, domid, vcpu_id);
+
+    if (rc) {
+        fprintf(stderr, "Failed to call xc_vmtrace_pt_enable\n");
+        return 1;
+    }
+
+    rc = xc_vmtrace_pt_get_offset(xc, domid, vcpu_id, NULL, &size);
+
+    if (rc) {
+        fprintf(stderr, "Failed to get trace buffer size\n");
+        return 1;
+    }
+
+    fres = xenforeignmemory_map_resource(
+        fmem, domid, XENMEM_resource_vmtrace_buf,
+        /* vcpu: */ vcpu_id,
+        /* frame: */ 0,
+        /* num_frames: */ size >> XC_PAGE_SHIFT,
+        (void **)&buf,
+        PROT_READ, 0);
+
+    if (!buf) {
+        fprintf(stderr, "Failed to map trace buffer\n");
+        return 1;
+    }
+
+    while (!interrupted) {
+        uint64_t offset = last_offset;
+
+        rc = xc_vmtrace_pt_get_offset(xc, domid, vcpu_id, &offset, NULL);
+
+        if (rc == ENODATA) {
+            interrupted = 1;
+            domain_down = 1;
+        } else if (rc) {
+            fprintf(stderr, "Failed to call xc_vmtrace_pt_get_offset\n");
+            return 1;
+        }
+
+        if (offset > last_offset)
+        {
+            fwrite(buf + last_offset, offset - last_offset, 1, stdout);
+        }
+        else if (offset < last_offset)
+        {
+            /* buffer wrapped */
+            fwrite(buf + last_offset, size - last_offset, 1, stdout);
+            fwrite(buf, offset, 1, stdout);
+        }
+
+        last_offset = offset;
+        usleep(1000 * 100);
+    }
+
+    rc = xenforeignmemory_unmap_resource(fmem, fres);
+
+    if (rc) {
+        fprintf(stderr, "Failed to unmap resource\n");
+        return 1;
+    }
+
+    rc = xenforeignmemory_close(fmem);
+
+    if (rc) {
+        fprintf(stderr, "Failed to close fmem\n");
+        return 1;
+    }
+
+    /*
+     * Don't try to disable PT if the domain is already dying.
+     */
+    if (!domain_down) {
+        rc = xc_vmtrace_pt_disable(xc, domid, vcpu_id);
+
+        if (rc) {
+            fprintf(stderr, "Failed to call xc_vmtrace_pt_disable\n");
+            return 1;
+        }
+    }
+
+    rc = xc_interface_close(xc);
+
+    if (rc) {
+        fprintf(stderr, "Failed to close xc interface\n");
+        return 1;
+    }
+
+    return 0;
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
-- 
2.17.1
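[Editor's note] The main loop above handles the trace buffer as a ring: when the new offset is smaller than the previous one, the buffer has wrapped and two writes are needed. A self-contained sketch of that wrap handling (the `drain_trace` helper is hypothetical, not part of the patch):

```c
#include <stdint.h>
#include <string.h>

/* Copy everything produced since last_offset out of a circular trace
 * buffer of `size` bytes into out[], returning the number of bytes
 * copied. Mirrors the fwrite() logic in proctrace's main loop. */
static uint64_t drain_trace(const uint8_t *buf, uint64_t size,
                            uint64_t last_offset, uint64_t offset,
                            uint8_t *out)
{
    if (offset >= last_offset) {
        /* No wrap: a single contiguous region. */
        memcpy(out, buf + last_offset, offset - last_offset);
        return offset - last_offset;
    }

    /* Wrapped: copy the tail of the buffer, then the refilled head. */
    memcpy(out, buf + last_offset, size - last_offset);
    memcpy(out + (size - last_offset), buf, offset);
    return (size - last_offset) + offset;
}
```

With an 8-byte buffer, draining from offset 6 to offset 2 yields the four bytes at positions 6, 7, 0, 1, in that order.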



From xen-devel-bounces@lists.xenproject.org Tue Jul 07 19:41:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jul 2020 19:41:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jstSj-0007oG-Ja; Tue, 07 Jul 2020 19:41:05 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CHg+=AS=cert.pl=michal.leszczynski@srs-us1.protection.inumbo.net>)
 id 1jstSj-0007no-23
 for xen-devel@lists.xenproject.org; Tue, 07 Jul 2020 19:41:05 +0000
X-Inumbo-ID: c98e3e7e-c089-11ea-8de3-12813bfff9fa
Received: from bagnar.nask.net.pl (unknown [195.187.242.196])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c98e3e7e-c089-11ea-8de3-12813bfff9fa;
 Tue, 07 Jul 2020 19:41:02 +0000 (UTC)
Received: from bagnar.nask.net.pl (unknown [172.16.9.10])
 by bagnar.nask.net.pl (Postfix) with ESMTP id E789EA26AF;
 Tue,  7 Jul 2020 21:41:00 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by bagnar.nask.net.pl (Postfix) with ESMTP id E4FA5A2675;
 Tue,  7 Jul 2020 21:40:59 +0200 (CEST)
X-Amavis-Alert: BAD HEADER SECTION, Duplicate header field: "References"
Received: from bagnar.nask.net.pl ([127.0.0.1])
 by localhost (bagnar.nask.net.pl [127.0.0.1]) (amavisd-new, port 10032)
 with ESMTP id tpwVciS3kWfD; Tue,  7 Jul 2020 21:40:59 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by bagnar.nask.net.pl (Postfix) with ESMTP id 79919A26A3;
 Tue,  7 Jul 2020 21:40:59 +0200 (CEST)
X-Virus-Scanned: amavisd-new at bagnar.nask.net.pl
X-Amavis-Alert: BAD HEADER SECTION, Duplicate header field: "References"
Received: from bagnar.nask.net.pl ([127.0.0.1])
 by localhost (bagnar.nask.net.pl [127.0.0.1]) (amavisd-new, port 10026)
 with ESMTP id R9EPWexH40j8; Tue,  7 Jul 2020 21:40:59 +0200 (CEST)
Received: from belindir.nask.net.pl (belindir-ext.nask.net.pl
 [195.187.242.210])
 by bagnar.nask.net.pl (Postfix) with ESMTP id 4FFA3A261F;
 Tue,  7 Jul 2020 21:40:59 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by belindir.nask.net.pl (Postfix) with ESMTP id BEE562247F;
 Tue,  7 Jul 2020 21:40:04 +0200 (CEST)
X-Amavis-Alert: BAD HEADER SECTION, Duplicate header field: "References"
Received: from belindir.nask.net.pl ([127.0.0.1])
 by localhost (belindir.nask.net.pl [127.0.0.1]) (amavisd-new, port 10032)
 with ESMTP id fYEdaYTL948b; Tue,  7 Jul 2020 21:39:59 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by belindir.nask.net.pl (Postfix) with ESMTP id 0B00922456;
 Tue,  7 Jul 2020 21:39:54 +0200 (CEST)
X-Quarantine-ID: <hw4v3GfJxCBR>
X-Virus-Scanned: amavisd-new at belindir.nask.net.pl
X-Amavis-Alert: BAD HEADER SECTION, Duplicate header field: "References"
Received: from belindir.nask.net.pl ([127.0.0.1])
 by localhost (belindir.nask.net.pl [127.0.0.1]) (amavisd-new, port 10026)
 with ESMTP id hw4v3GfJxCBR; Tue,  7 Jul 2020 21:39:53 +0200 (CEST)
Received: from mq-desktop.cert.pl (unknown [195.187.238.217])
 by belindir.nask.net.pl (Postfix) with ESMTPSA id CD26022303;
 Tue,  7 Jul 2020 21:39:53 +0200 (CEST)
From: =?UTF-8?q?Micha=C5=82=20Leszczy=C5=84ski?= <michal.leszczynski@cert.pl>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v6 08/11] x86/mm: add vmtrace_buf resource type
Date: Tue,  7 Jul 2020 21:39:47 +0200
Message-Id: <2129d21e7ef7e960951a8baafab01d9392dff8f3.1594150543.git.michal.leszczynski@cert.pl>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1594150543.git.michal.leszczynski@cert.pl>
References: <cover.1594150543.git.michal.leszczynski@cert.pl>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 luwei.kang@intel.com, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Michal Leszczynski <michal.leszczynski@cert.pl>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 tamas.lengyel@intel.com
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Michal Leszczynski <michal.leszczynski@cert.pl>

Allow mapping the processor trace buffer using acquire_resource().

Signed-off-by: Michal Leszczynski <michal.leszczynski@cert.pl>
---
 xen/common/memory.c         | 31 +++++++++++++++++++++++++++++++
 xen/include/public/memory.h |  1 +
 2 files changed, 32 insertions(+)

diff --git a/xen/common/memory.c b/xen/common/memory.c
index eb42f883df..c0a22eb60f 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -1007,6 +1007,32 @@ static long xatp_permission_check(struct domain *d, unsigned int space)
     return xsm_add_to_physmap(XSM_TARGET, current->domain, d);
 }
 
+static int acquire_vmtrace_buf(struct domain *d, unsigned int id,
+                               uint64_t frame,
+                               uint64_t nr_frames,
+                               xen_pfn_t mfn_list[])
+{
+    mfn_t mfn;
+    unsigned int i;
+    uint64_t size;
+    struct vcpu *v = domain_vcpu(d, id);
+
+    if ( !v || !v->vmtrace.pt_buf )
+        return -EINVAL;
+
+    mfn = page_to_mfn(v->vmtrace.pt_buf);
+    size = v->domain->processor_trace_buf_kb * KB(1);
+
+    if ( (frame > (size >> PAGE_SHIFT)) ||
+         (nr_frames > ((size >> PAGE_SHIFT) - frame)) )
+        return -EINVAL;
+
+    for ( i = 0; i < nr_frames; i++ )
+        mfn_list[i] = mfn_x(mfn_add(mfn, frame + i));
+
+    return 0;
+}
+
 static int acquire_grant_table(struct domain *d, unsigned int id,
                                unsigned long frame,
                                unsigned int nr_frames,
@@ -1117,6 +1143,11 @@ static int acquire_resource(
                                  mfn_list);
         break;
 
+    case XENMEM_resource_vmtrace_buf:
+        rc = acquire_vmtrace_buf(d, xmar.id, xmar.frame, xmar.nr_frames,
+                                 mfn_list);
+        break;
+
     default:
         rc = arch_acquire_resource(d, xmar.type, xmar.id, xmar.frame,
                                    xmar.nr_frames, mfn_list);
diff --git a/xen/include/public/memory.h b/xen/include/public/memory.h
index 21057ed78e..f4c905a10e 100644
--- a/xen/include/public/memory.h
+++ b/xen/include/public/memory.h
@@ -625,6 +625,7 @@ struct xen_mem_acquire_resource {
 
 #define XENMEM_resource_ioreq_server 0
 #define XENMEM_resource_grant_table 1
+#define XENMEM_resource_vmtrace_buf 2
 
     /*
      * IN - a type-specific resource identifier, which must be zero
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue Jul 07 19:41:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jul 2020 19:41:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jstSn-0007rG-2T; Tue, 07 Jul 2020 19:41:09 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CHg+=AS=cert.pl=michal.leszczynski@srs-us1.protection.inumbo.net>)
 id 1jstSm-0007ml-4K
 for xen-devel@lists.xenproject.org; Tue, 07 Jul 2020 19:41:08 +0000
X-Inumbo-ID: cabb34d2-c089-11ea-bb8b-bc764e2007e4
Received: from bagnar.nask.net.pl (unknown [195.187.242.196])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id cabb34d2-c089-11ea-bb8b-bc764e2007e4;
 Tue, 07 Jul 2020 19:41:03 +0000 (UTC)
Received: from bagnar.nask.net.pl (unknown [172.16.9.10])
 by bagnar.nask.net.pl (Postfix) with ESMTP id 0D846A26BB;
 Tue,  7 Jul 2020 21:41:02 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by bagnar.nask.net.pl (Postfix) with ESMTP id ADD5EA26A8;
 Tue,  7 Jul 2020 21:41:00 +0200 (CEST)
X-Amavis-Alert: BAD HEADER SECTION, Duplicate header field: "References"
Received: from bagnar.nask.net.pl ([127.0.0.1])
 by localhost (bagnar.nask.net.pl [127.0.0.1]) (amavisd-new, port 10032)
 with ESMTP id fXx8pdKVsSKH; Tue,  7 Jul 2020 21:40:59 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by bagnar.nask.net.pl (Postfix) with ESMTP id A79E9A26AF;
 Tue,  7 Jul 2020 21:40:59 +0200 (CEST)
X-Virus-Scanned: amavisd-new at bagnar.nask.net.pl
X-Amavis-Alert: BAD HEADER SECTION, Duplicate header field: "References"
Received: from bagnar.nask.net.pl ([127.0.0.1])
 by localhost (bagnar.nask.net.pl [127.0.0.1]) (amavisd-new, port 10026)
 with ESMTP id l2_NCbfU6I40; Tue,  7 Jul 2020 21:40:59 +0200 (CEST)
Received: from belindir.nask.net.pl (belindir-ext.nask.net.pl
 [195.187.242.210])
 by bagnar.nask.net.pl (Postfix) with ESMTP id 61B5DA2660;
 Tue,  7 Jul 2020 21:40:59 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by belindir.nask.net.pl (Postfix) with ESMTP id 39C8D2247C;
 Tue,  7 Jul 2020 21:40:05 +0200 (CEST)
X-Amavis-Alert: BAD HEADER SECTION, Duplicate header field: "References"
Received: from belindir.nask.net.pl ([127.0.0.1])
 by localhost (belindir.nask.net.pl [127.0.0.1]) (amavisd-new, port 10032)
 with ESMTP id Lg6X8o5lhAGl; Tue,  7 Jul 2020 21:39:59 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by belindir.nask.net.pl (Postfix) with ESMTP id 0020522452;
 Tue,  7 Jul 2020 21:39:53 +0200 (CEST)
X-Quarantine-ID: <OFjbTfOjfw3p>
X-Virus-Scanned: amavisd-new at belindir.nask.net.pl
X-Amavis-Alert: BAD HEADER SECTION, Duplicate header field: "References"
Received: from belindir.nask.net.pl ([127.0.0.1])
 by localhost (belindir.nask.net.pl [127.0.0.1]) (amavisd-new, port 10026)
 with ESMTP id OFjbTfOjfw3p; Tue,  7 Jul 2020 21:39:53 +0200 (CEST)
Received: from mq-desktop.cert.pl (unknown [195.187.238.217])
 by belindir.nask.net.pl (Postfix) with ESMTPSA id B5839223C8;
 Tue,  7 Jul 2020 21:39:53 +0200 (CEST)
From: =?UTF-8?q?Micha=C5=82=20Leszczy=C5=84ski?= <michal.leszczynski@cert.pl>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v6 07/11] x86/vmx: implement IPT in VMX
Date: Tue,  7 Jul 2020 21:39:46 +0200
Message-Id: <7ddfc44d6ffde0fa307f0e074225f588c397aef0.1594150543.git.michal.leszczynski@cert.pl>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1594150543.git.michal.leszczynski@cert.pl>
References: <cover.1594150543.git.michal.leszczynski@cert.pl>
In-Reply-To: <cover.1594150543.git.michal.leszczynski@cert.pl>
References: <cover.1594150543.git.michal.leszczynski@cert.pl>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin Tian <kevin.tian@intel.com>, luwei.kang@intel.com,
 Jun Nakajima <jun.nakajima@intel.com>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Michal Leszczynski <michal.leszczynski@cert.pl>,
 Jan Beulich <jbeulich@suse.com>, tamas.lengyel@intel.com,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Michal Leszczynski <michal.leszczynski@cert.pl>

Use the Intel Processor Trace feature to provide the
vmtrace_pt_* interface for HVM/VMX.

Signed-off-by: Michal Leszczynski <michal.leszczynski@cert.pl>
---
 xen/arch/x86/hvm/vmx/vmx.c         | 110 +++++++++++++++++++++++++++++
 xen/include/asm-x86/hvm/vmx/vmcs.h |   3 +
 xen/include/asm-x86/hvm/vmx/vmx.h  |  14 ++++
 3 files changed, 127 insertions(+)
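
Note: the patch programs IPT in single-range output mode, where
MSR_RTIT_OUTPUT_MASK carries the buffer mask in its low 32 bits and the
current write offset in its high 32 bits. A standalone sketch of the
size validation and the mask union layout (little-endian layout as on
x86; helper names are illustrative, not Xen API):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Mirror of the ipt_state output_mask union: for a power-of-2 buffer
 * of `size` bytes, raw is programmed with size - 1, so `size` holds
 * the mask and `offset` the current write position.
 */
union output_mask {
    uint64_t raw;
    struct {
        uint32_t size;    /* mask: buffer size - 1 */
        uint32_t offset;  /* current write offset into the buffer */
    };
};

/* The size constraints enforced by vmx_init_pt(). */
static int pt_buf_size_ok(uint64_t size)
{
    /* At least one page, at most 4GiB, and a power of two. */
    return size >= (1ULL << 12) && size <= (1ULL << 32) &&
           !(size & (size - 1));
}
```

The power-of-2 requirement is what makes `size - 1` usable directly as
the hardware mask.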

diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index cc6d4ece22..63a5a76e16 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -428,6 +428,56 @@ static void vmx_domain_relinquish_resources(struct domain *d)
     vmx_free_vlapic_mapping(d);
 }
 
+static int vmx_init_pt(struct vcpu *v)
+{
+    int rc;
+    uint64_t size = v->domain->processor_trace_buf_kb * KB(1);
+
+    if ( !v->vmtrace.pt_buf || !size )
+        return -EINVAL;
+
+    /*
+     * We don't accept a trace buffer smaller than a single page; the
+     * upper bound is defined as 4GB in the specification.  The buffer
+     * size must also be a power of 2.
+     */
+    if ( size < PAGE_SIZE || size > GB(4) || (size & (size - 1)) )
+        return -EINVAL;
+
+    v->arch.hvm.vmx.ipt_state = xzalloc(struct ipt_state);
+
+    if ( !v->arch.hvm.vmx.ipt_state )
+        return -ENOMEM;
+
+    v->arch.hvm.vmx.ipt_state->output_base =
+        page_to_maddr(v->vmtrace.pt_buf);
+    v->arch.hvm.vmx.ipt_state->output_mask.raw = size - 1;
+
+    rc = vmx_add_host_load_msr(v, MSR_RTIT_CTL, 0);
+
+    if ( rc )
+        return rc;
+
+    rc = vmx_add_guest_msr(v, MSR_RTIT_CTL,
+                              RTIT_CTL_TRACE_EN | RTIT_CTL_OS |
+                              RTIT_CTL_USR | RTIT_CTL_BRANCH_EN);
+
+    if ( rc )
+        return rc;
+
+    return 0;
+}
+
+static int vmx_destroy_pt(struct vcpu *v)
+{
+    if ( v->arch.hvm.vmx.ipt_state )
+        xfree(v->arch.hvm.vmx.ipt_state);
+
+    v->arch.hvm.vmx.ipt_state = NULL;
+    return 0;
+}
+
+
 static int vmx_vcpu_initialise(struct vcpu *v)
 {
     int rc;
@@ -471,6 +521,14 @@ static int vmx_vcpu_initialise(struct vcpu *v)
 
     vmx_install_vlapic_mapping(v);
 
+    if ( v->domain->processor_trace_buf_kb )
+    {
+        rc = vmx_init_pt(v);
+
+        if ( rc )
+            return rc;
+    }
+
     return 0;
 }
 
@@ -483,6 +541,7 @@ static void vmx_vcpu_destroy(struct vcpu *v)
      * prior to vmx_domain_destroy so we need to disable PML for each vcpu
      * separately here.
      */
+    vmx_destroy_pt(v);
     vmx_vcpu_disable_pml(v);
     vmx_destroy_vmcs(v);
     passive_domain_destroy(v);
@@ -513,6 +572,18 @@ static void vmx_save_guest_msrs(struct vcpu *v)
      * be updated at any time via SWAPGS, which we cannot trap.
      */
     v->arch.hvm.vmx.shadow_gs = rdgsshadow();
+
+    if ( unlikely(v->arch.hvm.vmx.ipt_state &&
+                  v->arch.hvm.vmx.ipt_state->active) )
+    {
+        uint64_t rtit_ctl;
+        rdmsrl(MSR_RTIT_CTL, rtit_ctl);
+        BUG_ON(rtit_ctl & RTIT_CTL_TRACE_EN);
+
+        rdmsrl(MSR_RTIT_STATUS, v->arch.hvm.vmx.ipt_state->status);
+        rdmsrl(MSR_RTIT_OUTPUT_MASK,
+               v->arch.hvm.vmx.ipt_state->output_mask.raw);
+    }
 }
 
 static void vmx_restore_guest_msrs(struct vcpu *v)
@@ -524,6 +595,17 @@ static void vmx_restore_guest_msrs(struct vcpu *v)
 
     if ( cpu_has_msr_tsc_aux )
         wrmsr_tsc_aux(v->arch.msrs->tsc_aux);
+
+    if ( unlikely(v->arch.hvm.vmx.ipt_state &&
+                  v->arch.hvm.vmx.ipt_state->active) )
+    {
+        wrmsrl(MSR_RTIT_OUTPUT_BASE,
+               v->arch.hvm.vmx.ipt_state->output_base);
+        wrmsrl(MSR_RTIT_OUTPUT_MASK,
+               v->arch.hvm.vmx.ipt_state->output_mask.raw);
+        wrmsrl(MSR_RTIT_STATUS,
+               v->arch.hvm.vmx.ipt_state->status);
+    }
 }
 
 void vmx_update_cpu_exec_control(struct vcpu *v)
@@ -2240,6 +2322,25 @@ static bool vmx_get_pending_event(struct vcpu *v, struct x86_event *info)
     return true;
 }
 
+static int vmx_control_pt(struct vcpu *v, bool enable)
+{
+    if ( !v->arch.hvm.vmx.ipt_state )
+        return -EINVAL;
+
+    v->arch.hvm.vmx.ipt_state->active = enable;
+    return 0;
+}
+
+static int vmx_get_pt_offset(struct vcpu *v, uint64_t *offset, uint64_t *size)
+{
+    if ( !v->arch.hvm.vmx.ipt_state )
+        return -EINVAL;
+
+    *offset = v->arch.hvm.vmx.ipt_state->output_mask.offset;
+    *size = v->arch.hvm.vmx.ipt_state->output_mask.size + 1;
+    return 0;
+}
+
 static struct hvm_function_table __initdata vmx_function_table = {
     .name                 = "VMX",
     .cpu_up_prepare       = vmx_cpu_up_prepare,
@@ -2295,6 +2396,8 @@ static struct hvm_function_table __initdata vmx_function_table = {
     .altp2m_vcpu_update_vmfunc_ve = vmx_vcpu_update_vmfunc_ve,
     .altp2m_vcpu_emulate_ve = vmx_vcpu_emulate_ve,
     .altp2m_vcpu_emulate_vmfunc = vmx_vcpu_emulate_vmfunc,
+    .vmtrace_control_pt = vmx_control_pt,
+    .vmtrace_get_pt_offset = vmx_get_pt_offset,
     .tsc_scaling = {
         .max_ratio = VMX_TSC_MULTIPLIER_MAX,
     },
@@ -3674,6 +3777,13 @@ void vmx_vmexit_handler(struct cpu_user_regs *regs)
 
     hvm_invalidate_regs_fields(regs);
 
+    if ( unlikely(v->arch.hvm.vmx.ipt_state &&
+                  v->arch.hvm.vmx.ipt_state->active) )
+    {
+        rdmsrl(MSR_RTIT_OUTPUT_MASK,
+               v->arch.hvm.vmx.ipt_state->output_mask.raw);
+    }
+
     if ( paging_mode_hap(v->domain) )
     {
         /*
diff --git a/xen/include/asm-x86/hvm/vmx/vmcs.h b/xen/include/asm-x86/hvm/vmx/vmcs.h
index 6153ba6769..65971fa6ad 100644
--- a/xen/include/asm-x86/hvm/vmx/vmcs.h
+++ b/xen/include/asm-x86/hvm/vmx/vmcs.h
@@ -186,6 +186,9 @@ struct vmx_vcpu {
      * pCPU and wakeup the related vCPU.
      */
     struct pi_blocking_vcpu pi_blocking;
+
+    /* State of processor trace feature */
+    struct ipt_state      *ipt_state;
 };
 
 int vmx_create_vmcs(struct vcpu *v);
diff --git a/xen/include/asm-x86/hvm/vmx/vmx.h b/xen/include/asm-x86/hvm/vmx/vmx.h
index 111ccd7e61..8d7c67e43d 100644
--- a/xen/include/asm-x86/hvm/vmx/vmx.h
+++ b/xen/include/asm-x86/hvm/vmx/vmx.h
@@ -689,4 +689,18 @@ typedef union ldt_or_tr_instr_info {
     };
 } ldt_or_tr_instr_info_t;
 
+/* Processor Trace state per vCPU */
+struct ipt_state {
+    bool active;
+    uint64_t status;
+    uint64_t output_base;
+    union {
+        uint64_t raw;
+        struct {
+            uint32_t size;
+            uint32_t offset;
+        };
+    } output_mask;
+};
+
 #endif /* __ASM_X86_HVM_VMX_VMX_H__ */
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue Jul 07 19:41:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jul 2020 19:41:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jstSp-0007t4-CS; Tue, 07 Jul 2020 19:41:11 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CHg+=AS=cert.pl=michal.leszczynski@srs-us1.protection.inumbo.net>)
 id 1jstSo-0007no-0c
 for xen-devel@lists.xenproject.org; Tue, 07 Jul 2020 19:41:10 +0000
X-Inumbo-ID: c9b27083-c089-11ea-8de3-12813bfff9fa
Received: from bagnar.nask.net.pl (unknown [195.187.242.196])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c9b27083-c089-11ea-8de3-12813bfff9fa;
 Tue, 07 Jul 2020 19:41:02 +0000 (UTC)
Received: from bagnar.nask.net.pl (unknown [172.16.9.10])
 by bagnar.nask.net.pl (Postfix) with ESMTP id 3F581A26B6;
 Tue,  7 Jul 2020 21:41:01 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by bagnar.nask.net.pl (Postfix) with ESMTP id 3CA2FA26B9;
 Tue,  7 Jul 2020 21:41:00 +0200 (CEST)
X-Amavis-Alert: BAD HEADER SECTION, Duplicate header field: "References"
Received: from bagnar.nask.net.pl ([127.0.0.1])
 by localhost (bagnar.nask.net.pl [127.0.0.1]) (amavisd-new, port 10032)
 with ESMTP id Edb0dHd9Z37g; Tue,  7 Jul 2020 21:40:59 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by bagnar.nask.net.pl (Postfix) with ESMTP id 8BB3EA2657;
 Tue,  7 Jul 2020 21:40:59 +0200 (CEST)
X-Virus-Scanned: amavisd-new at bagnar.nask.net.pl
X-Amavis-Alert: BAD HEADER SECTION, Duplicate header field: "References"
Received: from bagnar.nask.net.pl ([127.0.0.1])
 by localhost (bagnar.nask.net.pl [127.0.0.1]) (amavisd-new, port 10026)
 with ESMTP id 2Y_eDj3TRnSX; Tue,  7 Jul 2020 21:40:59 +0200 (CEST)
Received: from belindir.nask.net.pl (belindir-ext.nask.net.pl
 [195.187.242.210])
 by bagnar.nask.net.pl (Postfix) with ESMTP id 579C1A265A;
 Tue,  7 Jul 2020 21:40:59 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by belindir.nask.net.pl (Postfix) with ESMTP id 1830E2243D;
 Tue,  7 Jul 2020 21:40:05 +0200 (CEST)
X-Amavis-Alert: BAD HEADER SECTION, Duplicate header field: "References"
Received: from belindir.nask.net.pl ([127.0.0.1])
 by localhost (belindir.nask.net.pl [127.0.0.1]) (amavisd-new, port 10032)
 with ESMTP id YPKWuYMmlrOk; Tue,  7 Jul 2020 21:39:59 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by belindir.nask.net.pl (Postfix) with ESMTP id D299522444;
 Tue,  7 Jul 2020 21:39:53 +0200 (CEST)
X-Quarantine-ID: <6kPGelPQA59y>
X-Virus-Scanned: amavisd-new at belindir.nask.net.pl
X-Amavis-Alert: BAD HEADER SECTION, Duplicate header field: "References"
Received: from belindir.nask.net.pl ([127.0.0.1])
 by localhost (belindir.nask.net.pl [127.0.0.1]) (amavisd-new, port 10026)
 with ESMTP id 6kPGelPQA59y; Tue,  7 Jul 2020 21:39:53 +0200 (CEST)
Received: from mq-desktop.cert.pl (unknown [195.187.238.217])
 by belindir.nask.net.pl (Postfix) with ESMTPSA id 9DBA82242E;
 Tue,  7 Jul 2020 21:39:53 +0200 (CEST)
From: =?UTF-8?q?Micha=C5=82=20Leszczy=C5=84ski?= <michal.leszczynski@cert.pl>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v6 06/11] x86/hvm: processor trace interface in HVM
Date: Tue,  7 Jul 2020 21:39:45 +0200
Message-Id: <1916e06793ffaaa70c471bcd6bcf168597793bd5.1594150543.git.michal.leszczynski@cert.pl>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1594150543.git.michal.leszczynski@cert.pl>
References: <cover.1594150543.git.michal.leszczynski@cert.pl>
In-Reply-To: <cover.1594150543.git.michal.leszczynski@cert.pl>
References: <cover.1594150543.git.michal.leszczynski@cert.pl>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 luwei.kang@intel.com, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Michal Leszczynski <michal.leszczynski@cert.pl>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 tamas.lengyel@intel.com,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Michal Leszczynski <michal.leszczynski@cert.pl>

Implement the necessary changes in common code and HVM to
support processor trace features. Define the vmtrace_pt_* API
and implement trace buffer allocation/deallocation in common
code.

Signed-off-by: Michal Leszczynski <michal.leszczynski@cert.pl>
---
 xen/arch/x86/domain.c         | 21 +++++++++++++++++++++
 xen/common/domain.c           | 35 +++++++++++++++++++++++++++++++++++
 xen/include/asm-x86/hvm/hvm.h | 20 ++++++++++++++++++++
 xen/include/xen/sched.h       |  4 ++++
 4 files changed, 80 insertions(+)
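
Note: vmtrace_alloc_buffers() converts the configured buffer size into
an allocation order for alloc_domheap_pages(). An illustrative
stand-in for get_order_from_bytes() (hypothetical name, PAGE_SHIFT
fixed to 12), showing what the order computation does:

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE  (1UL << PAGE_SHIFT)

/*
 * Sketch of the order computation behind the allocation: the order is
 * the smallest n such that (PAGE_SIZE << n) >= size, i.e. the buffer
 * is allocated as a naturally aligned power-of-2 run of pages.
 */
static unsigned int order_from_bytes(uint64_t size)
{
    unsigned int order = 0;

    while ( (PAGE_SIZE << order) < size )
        order++;

    return order;
}
```

Because the VMX side already rejects non-power-of-2 sizes, the order
allocation wastes no pages for valid configurations.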

diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index b75017b28b..8ce2ab6b8f 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -2205,6 +2205,27 @@ int domain_relinquish_resources(struct domain *d)
                 altp2m_vcpu_disable_ve(v);
         }
 
+        for_each_vcpu ( d, v )
+        {
+            unsigned int i;
+            uint64_t nr_pages = v->domain->processor_trace_buf_kb * KB(1);
+            nr_pages >>= PAGE_SHIFT;
+
+            if ( !v->vmtrace.pt_buf )
+                continue;
+
+            for ( i = 0; i < nr_pages; i++ )
+            {
+                struct page_info *pg = mfn_to_page(
+                    mfn_add(page_to_mfn(v->vmtrace.pt_buf), i));
+
+                put_page_alloc_ref(pg);
+                put_page_and_type(pg);
+            }
+
+            v->vmtrace.pt_buf = NULL;
+        }
+
         if ( is_pv_domain(d) )
         {
             for_each_vcpu ( d, v )
diff --git a/xen/common/domain.c b/xen/common/domain.c
index e6e8f88da1..193099a2ab 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -137,6 +137,38 @@ static void vcpu_destroy(struct vcpu *v)
     free_vcpu_struct(v);
 }
 
+static int vmtrace_alloc_buffers(struct vcpu *v)
+{
+    unsigned int i;
+    struct page_info *pg;
+    uint64_t size = v->domain->processor_trace_buf_kb * KB(1);
+
+    pg = alloc_domheap_pages(v->domain, get_order_from_bytes(size),
+                             MEMF_no_refcount);
+
+    if ( !pg )
+        return -ENOMEM;
+
+    for ( i = 0; i < (size >> PAGE_SHIFT); i++ )
+    {
+        struct page_info *pg_iter = mfn_to_page(
+            mfn_add(page_to_mfn(pg), i));
+
+        if ( !get_page_and_type(pg_iter, v->domain, PGT_writable_page) )
+        {
+            /*
+             * The domain can't possibly know about this page yet, so failure
+             * here is a clear indication of something fishy going on.
+             */
+            domain_crash(v->domain);
+            return -ENODATA;
+        }
+    }
+
+    v->vmtrace.pt_buf = pg;
+    return 0;
+}
+
 struct vcpu *vcpu_create(struct domain *d, unsigned int vcpu_id)
 {
     struct vcpu *v;
@@ -162,6 +194,9 @@ struct vcpu *vcpu_create(struct domain *d, unsigned int vcpu_id)
     v->vcpu_id = vcpu_id;
     v->dirty_cpu = VCPU_CPU_CLEAN;
 
+    if ( d->processor_trace_buf_kb && vmtrace_alloc_buffers(v) != 0 )
+        return NULL;
+
     spin_lock_init(&v->virq_lock);
 
     tasklet_init(&v->continue_hypercall_tasklet, NULL, NULL);
diff --git a/xen/include/asm-x86/hvm/hvm.h b/xen/include/asm-x86/hvm/hvm.h
index 1eb377dd82..476a216205 100644
--- a/xen/include/asm-x86/hvm/hvm.h
+++ b/xen/include/asm-x86/hvm/hvm.h
@@ -214,6 +214,10 @@ struct hvm_function_table {
     bool_t (*altp2m_vcpu_emulate_ve)(struct vcpu *v);
     int (*altp2m_vcpu_emulate_vmfunc)(const struct cpu_user_regs *regs);
 
+    /* vmtrace */
+    int (*vmtrace_control_pt)(struct vcpu *v, bool enable);
+    int (*vmtrace_get_pt_offset)(struct vcpu *v, uint64_t *offset, uint64_t *size);
+
     /*
      * Parameters and callbacks for hardware-assisted TSC scaling,
      * which are valid only when the hardware feature is available.
@@ -655,6 +659,22 @@ static inline bool altp2m_vcpu_emulate_ve(struct vcpu *v)
     return false;
 }
 
+static inline int vmtrace_control_pt(struct vcpu *v, bool enable)
+{
+    if ( hvm_funcs.vmtrace_control_pt )
+        return hvm_funcs.vmtrace_control_pt(v, enable);
+
+    return -EOPNOTSUPP;
+}
+
+static inline int vmtrace_get_pt_offset(struct vcpu *v, uint64_t *offset, uint64_t *size)
+{
+    if ( hvm_funcs.vmtrace_get_pt_offset )
+        return hvm_funcs.vmtrace_get_pt_offset(v, offset, size);
+
+    return -EOPNOTSUPP;
+}
+
 /*
  * This must be defined as a macro instead of an inline function,
  * because it uses 'struct vcpu' and 'struct domain' which have
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index c046e59886..b6f39233aa 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -253,6 +253,10 @@ struct vcpu
     /* vPCI per-vCPU area, used to store data for long running operations. */
     struct vpci_vcpu vpci;
 
+    struct {
+        struct page_info *pt_buf;
+    } vmtrace;
+
     struct arch_vcpu arch;
 };
 
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue Jul 07 19:41:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jul 2020 19:41:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jstSr-0007v9-Ll; Tue, 07 Jul 2020 19:41:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CHg+=AS=cert.pl=michal.leszczynski@srs-us1.protection.inumbo.net>)
 id 1jstSr-0007ml-4n
 for xen-devel@lists.xenproject.org; Tue, 07 Jul 2020 19:41:13 +0000
X-Inumbo-ID: cc6135ca-c089-11ea-b7bb-bc764e2007e4
Received: from bagnar.nask.net.pl (unknown [195.187.242.196])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id cc6135ca-c089-11ea-b7bb-bc764e2007e4;
 Tue, 07 Jul 2020 19:41:06 +0000 (UTC)
Received: from bagnar.nask.net.pl (unknown [172.16.9.10])
 by bagnar.nask.net.pl (Postfix) with ESMTP id 7AB06A2675;
 Tue,  7 Jul 2020 21:41:05 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by bagnar.nask.net.pl (Postfix) with ESMTP id 77C2FA26BD;
 Tue,  7 Jul 2020 21:41:04 +0200 (CEST)
X-Amavis-Alert: BAD HEADER SECTION, Duplicate header field: "References"
Received: from bagnar.nask.net.pl ([127.0.0.1])
 by localhost (bagnar.nask.net.pl [127.0.0.1]) (amavisd-new, port 10032)
 with ESMTP id YWX_RmwuQhJ8; Tue,  7 Jul 2020 21:40:59 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by bagnar.nask.net.pl (Postfix) with ESMTP id 74F96A2695;
 Tue,  7 Jul 2020 21:40:59 +0200 (CEST)
X-Virus-Scanned: amavisd-new at bagnar.nask.net.pl
X-Amavis-Alert: BAD HEADER SECTION, Duplicate header field: "References"
Received: from bagnar.nask.net.pl ([127.0.0.1])
 by localhost (bagnar.nask.net.pl [127.0.0.1]) (amavisd-new, port 10026)
 with ESMTP id i9YARP6nXRLI; Tue,  7 Jul 2020 21:40:59 +0200 (CEST)
Received: from belindir.nask.net.pl (belindir-ext.nask.net.pl
 [195.187.242.210])
 by bagnar.nask.net.pl (Postfix) with ESMTP id 51EF8A264E;
 Tue,  7 Jul 2020 21:40:59 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by belindir.nask.net.pl (Postfix) with ESMTP id D35E72248C;
 Tue,  7 Jul 2020 21:40:04 +0200 (CEST)
X-Amavis-Alert: BAD HEADER SECTION, Duplicate header field: "References"
Received: from belindir.nask.net.pl ([127.0.0.1])
 by localhost (belindir.nask.net.pl [127.0.0.1]) (amavisd-new, port 10032)
 with ESMTP id 9gHZUn6OFfvJ; Tue,  7 Jul 2020 21:39:59 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by belindir.nask.net.pl (Postfix) with ESMTP id 1ACA122303;
 Tue,  7 Jul 2020 21:39:54 +0200 (CEST)
X-Quarantine-ID: <Q7F0amUjHudM>
X-Virus-Scanned: amavisd-new at belindir.nask.net.pl
X-Amavis-Alert: BAD HEADER SECTION, Duplicate header field: "References"
Received: from belindir.nask.net.pl ([127.0.0.1])
 by localhost (belindir.nask.net.pl [127.0.0.1]) (amavisd-new, port 10026)
 with ESMTP id Q7F0amUjHudM; Tue,  7 Jul 2020 21:39:53 +0200 (CEST)
Received: from mq-desktop.cert.pl (unknown [195.187.238.217])
 by belindir.nask.net.pl (Postfix) with ESMTPSA id E408A22426;
 Tue,  7 Jul 2020 21:39:53 +0200 (CEST)
From: =?UTF-8?q?Micha=C5=82=20Leszczy=C5=84ski?= <michal.leszczynski@cert.pl>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v6 09/11] x86/domctl: add XEN_DOMCTL_vmtrace_op
Date: Tue,  7 Jul 2020 21:39:48 +0200
Message-Id: <a9899858dba4a7e22a0256cff734399bff348adb.1594150543.git.michal.leszczynski@cert.pl>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1594150543.git.michal.leszczynski@cert.pl>
References: <cover.1594150543.git.michal.leszczynski@cert.pl>
In-Reply-To: <cover.1594150543.git.michal.leszczynski@cert.pl>
References: <cover.1594150543.git.michal.leszczynski@cert.pl>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 luwei.kang@intel.com, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Michal Leszczynski <michal.leszczynski@cert.pl>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 tamas.lengyel@intel.com,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Michal Leszczynski <michal.leszczynski@cert.pl>

Implement a domctl to manage the runtime state of the
processor trace feature.

Signed-off-by: Michal Leszczynski <michal.leszczynski@cert.pl>
---
 xen/arch/x86/domctl.c       | 50 +++++++++++++++++++++++++++++++++++++
 xen/include/public/domctl.h | 28 +++++++++++++++++++++
 2 files changed, 78 insertions(+)
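
Note: a consumer calling XEN_DOMCTL_vmtrace_pt_get_offset gets back an
(offset, size) pair describing a ring buffer. A hedged sketch of how a
tool might split the ring into the two contiguous chunks to copy out,
under the assumption that the buffer has wrapped at least once (so the
oldest data starts at `offset`); helper and struct names are
hypothetical:

```c
#include <assert.h>
#include <stdint.h>

struct chunk { uint64_t start, len; };

/*
 * Hypothetical consumer-side helper: split a wrapped trace ring of
 * `size` bytes, whose current write position is `offset`, into the
 * two chunks to read in order (oldest data first).
 */
static void pt_ring_chunks(uint64_t offset, uint64_t size,
                           struct chunk out[2])
{
    out[0].start = offset;        /* oldest data: offset .. size */
    out[0].len   = size - offset;
    out[1].start = 0;             /* newest data: 0 .. offset */
    out[1].len   = offset;
}
```

If the buffer has not yet wrapped, only bytes [0, offset) are valid;
the domctl itself does not report wrap state, so a tool has to track
that separately.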

diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
index 6f2c69788d..6132499db4 100644
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -322,6 +322,50 @@ void arch_get_domain_info(const struct domain *d,
     info->arch_config.emulation_flags = d->arch.emulation_flags;
 }
 
+static int do_vmtrace_op(struct domain *d, struct xen_domctl_vmtrace_op *op,
+                         XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
+{
+    int rc;
+    struct vcpu *v;
+
+    if ( op->pad1 || op->pad2 )
+        return -EINVAL;
+
+    if ( !vmtrace_supported )
+        return -EOPNOTSUPP;
+
+    if ( !is_hvm_domain(d) )
+        return -EOPNOTSUPP;
+
+    if ( op->vcpu >= d->max_vcpus )
+        return -EINVAL;
+
+    v = domain_vcpu(d, op->vcpu);
+    rc = 0;
+
+    switch ( op->cmd )
+    {
+    case XEN_DOMCTL_vmtrace_pt_enable:
+    case XEN_DOMCTL_vmtrace_pt_disable:
+        vcpu_pause(v);
+        rc = vmtrace_control_pt(v, op->cmd == XEN_DOMCTL_vmtrace_pt_enable);
+        vcpu_unpause(v);
+        break;
+
+    case XEN_DOMCTL_vmtrace_pt_get_offset:
+        rc = vmtrace_get_pt_offset(v, &op->offset, &op->size);
+
+        if ( !rc && d->is_dying )
+            rc = -ENODATA;
+        break;
+
+    default:
+        rc = -EOPNOTSUPP;
+    }
+
+    return rc;
+}
+
 #define MAX_IOPORTS 0x10000
 
 long arch_do_domctl(
@@ -337,6 +381,12 @@ long arch_do_domctl(
     switch ( domctl->cmd )
     {
 
+    case XEN_DOMCTL_vmtrace_op:
+        ret = do_vmtrace_op(d, &domctl->u.vmtrace_op, u_domctl);
+        if ( !ret )
+            copyback = true;
+        break;
+
     case XEN_DOMCTL_shadow_op:
         ret = paging_domctl(d, &domctl->u.shadow_op, u_domctl, 0);
         if ( ret == -ERESTART )
diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
index 7681675a94..73c7ccbd16 100644
--- a/xen/include/public/domctl.h
+++ b/xen/include/public/domctl.h
@@ -1136,6 +1136,30 @@ struct xen_domctl_vuart_op {
                                  */
 };
 
+/* XEN_DOMCTL_vmtrace_op: Perform VM tracing related operation */
+#if defined(__XEN__) || defined(__XEN_TOOLS__)
+
+struct xen_domctl_vmtrace_op {
+    /* IN variable */
+    uint32_t cmd;
+/* Enable/disable external vmtrace for given domain */
+#define XEN_DOMCTL_vmtrace_pt_enable      1
+#define XEN_DOMCTL_vmtrace_pt_disable     2
+#define XEN_DOMCTL_vmtrace_pt_get_offset  3
+    domid_t domain;
+    uint16_t pad1;
+    uint32_t vcpu;
+    uint16_t pad2;
+
+    /* OUT variable */
+    uint64_aligned_t size;
+    uint64_aligned_t offset;
+};
+typedef struct xen_domctl_vmtrace_op xen_domctl_vmtrace_op_t;
+DEFINE_XEN_GUEST_HANDLE(xen_domctl_vmtrace_op_t);
+
+#endif /* defined(__XEN__) || defined(__XEN_TOOLS__) */
+
 struct xen_domctl {
     uint32_t cmd;
 #define XEN_DOMCTL_createdomain                   1
@@ -1217,6 +1241,7 @@ struct xen_domctl {
 #define XEN_DOMCTL_vuart_op                      81
 #define XEN_DOMCTL_get_cpu_policy                82
 #define XEN_DOMCTL_set_cpu_policy                83
+#define XEN_DOMCTL_vmtrace_op                    84
 #define XEN_DOMCTL_gdbsx_guestmemio            1000
 #define XEN_DOMCTL_gdbsx_pausevcpu             1001
 #define XEN_DOMCTL_gdbsx_unpausevcpu           1002
@@ -1277,6 +1302,9 @@ struct xen_domctl {
         struct xen_domctl_monitor_op        monitor_op;
         struct xen_domctl_psr_alloc         psr_alloc;
         struct xen_domctl_vuart_op          vuart_op;
+#if defined(__XEN__) || defined(__XEN_TOOLS__)
+        struct xen_domctl_vmtrace_op        vmtrace_op;
+#endif
         uint8_t                             pad[128];
     } u;
 };
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue Jul 07 19:41:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jul 2020 19:41:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jstSu-0007xX-0E; Tue, 07 Jul 2020 19:41:16 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CHg+=AS=cert.pl=michal.leszczynski@srs-us1.protection.inumbo.net>)
 id 1jstSt-0007no-0j
 for xen-devel@lists.xenproject.org; Tue, 07 Jul 2020 19:41:15 +0000
X-Inumbo-ID: c9b27082-c089-11ea-8de3-12813bfff9fa
Received: from bagnar.nask.net.pl (unknown [195.187.242.196])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c9b27082-c089-11ea-8de3-12813bfff9fa;
 Tue, 07 Jul 2020 19:41:02 +0000 (UTC)
Received: from bagnar.nask.net.pl (unknown [172.16.9.10])
 by bagnar.nask.net.pl (Postfix) with ESMTP id 2FD7FA2675;
 Tue,  7 Jul 2020 21:41:01 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by bagnar.nask.net.pl (Postfix) with ESMTP id 2BA35A26A3;
 Tue,  7 Jul 2020 21:41:00 +0200 (CEST)
X-Amavis-Alert: BAD HEADER SECTION, Duplicate header field: "References"
Received: from bagnar.nask.net.pl ([127.0.0.1])
 by localhost (bagnar.nask.net.pl [127.0.0.1]) (amavisd-new, port 10032)
 with ESMTP id xobwEcbRVpKw; Tue,  7 Jul 2020 21:40:59 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by bagnar.nask.net.pl (Postfix) with ESMTP id 76F41A269B;
 Tue,  7 Jul 2020 21:40:59 +0200 (CEST)
X-Virus-Scanned: amavisd-new at bagnar.nask.net.pl
X-Amavis-Alert: BAD HEADER SECTION, Duplicate header field: "References"
Received: from bagnar.nask.net.pl ([127.0.0.1])
 by localhost (bagnar.nask.net.pl [127.0.0.1]) (amavisd-new, port 10026)
 with ESMTP id EX9po0wo_flD; Tue,  7 Jul 2020 21:40:59 +0200 (CEST)
Received: from belindir.nask.net.pl (belindir-ext.nask.net.pl
 [195.187.242.210])
 by bagnar.nask.net.pl (Postfix) with ESMTP id 534B2A2657;
 Tue,  7 Jul 2020 21:40:59 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by belindir.nask.net.pl (Postfix) with ESMTP id E9C0322303;
 Tue,  7 Jul 2020 21:40:04 +0200 (CEST)
X-Amavis-Alert: BAD HEADER SECTION, Duplicate header field: "References"
Received: from belindir.nask.net.pl ([127.0.0.1])
 by localhost (belindir.nask.net.pl [127.0.0.1]) (amavisd-new, port 10032)
 with ESMTP id Zc-6faRX1ri1; Tue,  7 Jul 2020 21:39:59 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by belindir.nask.net.pl (Postfix) with ESMTP id BA90C2243D;
 Tue,  7 Jul 2020 21:39:53 +0200 (CEST)
X-Quarantine-ID: <vL0WSPJJQuxu>
X-Virus-Scanned: amavisd-new at belindir.nask.net.pl
X-Amavis-Alert: BAD HEADER SECTION, Duplicate header field: "References"
Received: from belindir.nask.net.pl ([127.0.0.1])
 by localhost (belindir.nask.net.pl [127.0.0.1]) (amavisd-new, port 10026)
 with ESMTP id vL0WSPJJQuxu; Tue,  7 Jul 2020 21:39:53 +0200 (CEST)
Received: from mq-desktop.cert.pl (unknown [195.187.238.217])
 by belindir.nask.net.pl (Postfix) with ESMTPSA id 857A32237F;
 Tue,  7 Jul 2020 21:39:53 +0200 (CEST)
From: =?UTF-8?q?Micha=C5=82=20Leszczy=C5=84ski?= <michal.leszczynski@cert.pl>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v6 05/11] tools/libxl: add vmtrace_pt_size parameter
Date: Tue,  7 Jul 2020 21:39:44 +0200
Message-Id: <ac7b950a7ef86cbf0c63fe428ec94e2b6fe27453.1594150543.git.michal.leszczynski@cert.pl>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1594150543.git.michal.leszczynski@cert.pl>
References: <cover.1594150543.git.michal.leszczynski@cert.pl>
In-Reply-To: <cover.1594150543.git.michal.leszczynski@cert.pl>
References: <cover.1594150543.git.michal.leszczynski@cert.pl>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: luwei.kang@intel.com, Wei Liu <wl@xen.org>,
 Michal Leszczynski <michal.leszczynski@cert.pl>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Anthony PERARD <anthony.perard@citrix.com>, tamas.lengyel@intel.com
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Michal Leszczynski <michal.leszczynski@cert.pl>

Allow specifying the size of the per-vCPU trace buffer at
domain creation time. The size defaults to zero, meaning the
feature is disabled.

Signed-off-by: Michal Leszczynski <michal.leszczynski@cert.pl>
---
 docs/man/xl.cfg.5.pod.in             | 13 +++++++++++++
 tools/golang/xenlight/helpers.gen.go |  2 ++
 tools/golang/xenlight/types.gen.go   |  1 +
 tools/libxl/libxl.h                  |  8 ++++++++
 tools/libxl/libxl_create.c           |  1 +
 tools/libxl/libxl_types.idl          |  4 ++++
 tools/xl/xl_parse.c                  | 22 ++++++++++++++++++++++
 7 files changed, 51 insertions(+)

diff --git a/docs/man/xl.cfg.5.pod.in b/docs/man/xl.cfg.5.pod.in
index 0532739c1f..ddef9b6014 100644
--- a/docs/man/xl.cfg.5.pod.in
+++ b/docs/man/xl.cfg.5.pod.in
@@ -683,6 +683,19 @@ If this option is not specified then it will default to B<false>.
 
 =back
 
+=item B<processor_trace_buf_kb=KBYTES>
+
+Specifies the size of the processor trace buffer that will be allocated
+for each vCPU belonging to this domain. Disabled (i.e.
+B<processor_trace_buf_kb=0>) by default. This must be set to a
+non-zero value in order to use processor tracing features with
+this domain.
+
+B<NOTE>: In order to use the Intel Processor Trace feature, this value
+must be between 8 kB and 4 GB, and it must be a power of 2.
+
+=back
+
 =head2 Devices
 
 The following options define the paravirtual, emulated and physical
diff --git a/tools/golang/xenlight/helpers.gen.go b/tools/golang/xenlight/helpers.gen.go
index 152c7e8e6b..3ce6f2374b 100644
--- a/tools/golang/xenlight/helpers.gen.go
+++ b/tools/golang/xenlight/helpers.gen.go
@@ -1117,6 +1117,7 @@ return fmt.Errorf("invalid union key '%v'", x.Type)}
 x.ArchArm.GicVersion = GicVersion(xc.arch_arm.gic_version)
 x.ArchArm.Vuart = VuartType(xc.arch_arm.vuart)
 x.Altp2M = Altp2MMode(xc.altp2m)
+x.ProcessorTraceBufKb = int(xc.processor_trace_buf_kb)
 
  return nil}
 
@@ -1592,6 +1593,7 @@ return fmt.Errorf("invalid union key '%v'", x.Type)}
 xc.arch_arm.gic_version = C.libxl_gic_version(x.ArchArm.GicVersion)
 xc.arch_arm.vuart = C.libxl_vuart_type(x.ArchArm.Vuart)
 xc.altp2m = C.libxl_altp2m_mode(x.Altp2M)
+xc.processor_trace_buf_kb = C.int(x.ProcessorTraceBufKb)
 
  return nil
  }
diff --git a/tools/golang/xenlight/types.gen.go b/tools/golang/xenlight/types.gen.go
index 663c1e86b4..f4bc16c0fd 100644
--- a/tools/golang/xenlight/types.gen.go
+++ b/tools/golang/xenlight/types.gen.go
@@ -516,6 +516,7 @@ GicVersion GicVersion
 Vuart VuartType
 }
 Altp2M Altp2MMode
+ProcessorTraceBufKb int
 }
 
 type domainBuildInfoTypeUnion interface {
diff --git a/tools/libxl/libxl.h b/tools/libxl/libxl.h
index 1cd6c38e83..fbf222967a 100644
--- a/tools/libxl/libxl.h
+++ b/tools/libxl/libxl.h
@@ -438,6 +438,14 @@
  */
 #define LIBXL_HAVE_CREATEINFO_PASSTHROUGH 1
 
+/*
+ * LIBXL_HAVE_PROCESSOR_TRACE_BUF_KB indicates that
+ * libxl_domain_build_info has a processor_trace_buf_kb parameter, which
+ * allows pre-allocation of processor tracing buffers of a given
+ * size.
+ */
+#define LIBXL_HAVE_PROCESSOR_TRACE_BUF_KB 1
+
 /*
  * libxl ABI compatibility
  *
diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
index 2814818e34..4d6318124a 100644
--- a/tools/libxl/libxl_create.c
+++ b/tools/libxl/libxl_create.c
@@ -608,6 +608,7 @@ int libxl__domain_make(libxl__gc *gc, libxl_domain_config *d_config,
             .max_evtchn_port = b_info->event_channels,
             .max_grant_frames = b_info->max_grant_frames,
             .max_maptrack_frames = b_info->max_maptrack_frames,
+            .processor_trace_buf_kb = b_info->processor_trace_buf_kb,
         };
 
         if (info->type != LIBXL_DOMAIN_TYPE_PV) {
diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
index 9d3f05f399..748fde65ab 100644
--- a/tools/libxl/libxl_types.idl
+++ b/tools/libxl/libxl_types.idl
@@ -645,6 +645,10 @@ libxl_domain_build_info = Struct("domain_build_info",[
     # supported by x86 HVM and ARM support is planned.
     ("altp2m", libxl_altp2m_mode),
 
+    # Size of preallocated processor trace buffers (in KBYTES).
+    # Use a zero value to disable this feature.
+    ("processor_trace_buf_kb", integer),
+
     ], dir=DIR_IN,
        copy_deprecated_fn="libxl__domain_build_info_copy_deprecated",
 )
diff --git a/tools/xl/xl_parse.c b/tools/xl/xl_parse.c
index 61b4ef7b7e..87e373b413 100644
--- a/tools/xl/xl_parse.c
+++ b/tools/xl/xl_parse.c
@@ -1861,6 +1861,28 @@ void parse_config_data(const char *config_source,
         }
     }
 
+    if (!xlu_cfg_get_long(config, "processor_trace_buf_kb", &l, 1) && l) {
+        if (l & (l - 1)) {
+            fprintf(stderr, "ERROR: processor_trace_buf_kb"
+                            " - must be a power of 2\n");
+            exit(1);
+        }
+
+        if (l < 8) {
+            fprintf(stderr, "ERROR: processor_trace_buf_kb"
+                            " - value is too small\n");
+            exit(1);
+        }
+
+        if (l > 1024*1024*4) {
+            fprintf(stderr, "ERROR: processor_trace_buf_kb"
+                            " - value is too large\n");
+            exit(1);
+        }
+
+        b_info->processor_trace_buf_kb = l;
+    }
+
     if (!xlu_cfg_get_list(config, "ioports", &ioports, &num_ioports, 0)) {
         b_info->num_ioports = num_ioports;
         b_info->ioports = calloc(num_ioports, sizeof(*b_info->ioports));
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue Jul 07 19:41:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jul 2020 19:41:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jstSz-00082p-GW; Tue, 07 Jul 2020 19:41:21 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CHg+=AS=cert.pl=michal.leszczynski@srs-us1.protection.inumbo.net>)
 id 1jstSy-0007no-16
 for xen-devel@lists.xenproject.org; Tue, 07 Jul 2020 19:41:20 +0000
X-Inumbo-ID: c9c1d00f-c089-11ea-8de3-12813bfff9fa
Received: from bagnar.nask.net.pl (unknown [195.187.242.196])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c9c1d00f-c089-11ea-8de3-12813bfff9fa;
 Tue, 07 Jul 2020 19:41:02 +0000 (UTC)
Received: from bagnar.nask.net.pl (unknown [172.16.9.10])
 by bagnar.nask.net.pl (Postfix) with ESMTP id 9D8B7A26B9;
 Tue,  7 Jul 2020 21:41:01 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by bagnar.nask.net.pl (Postfix) with ESMTP id 9461AA26BD;
 Tue,  7 Jul 2020 21:41:00 +0200 (CEST)
X-Amavis-Alert: BAD HEADER SECTION, Duplicate header field: "References"
Received: from bagnar.nask.net.pl ([127.0.0.1])
 by localhost (bagnar.nask.net.pl [127.0.0.1]) (amavisd-new, port 10032)
 with ESMTP id AHbW4AukT979; Tue,  7 Jul 2020 21:40:59 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by bagnar.nask.net.pl (Postfix) with ESMTP id A0EAEA2691;
 Tue,  7 Jul 2020 21:40:59 +0200 (CEST)
X-Virus-Scanned: amavisd-new at bagnar.nask.net.pl
X-Amavis-Alert: BAD HEADER SECTION, Duplicate header field: "References"
Received: from bagnar.nask.net.pl ([127.0.0.1])
 by localhost (bagnar.nask.net.pl [127.0.0.1]) (amavisd-new, port 10026)
 with ESMTP id ciAltz9WSXP5; Tue,  7 Jul 2020 21:40:59 +0200 (CEST)
Received: from belindir.nask.net.pl (belindir-ext.nask.net.pl
 [195.187.242.210])
 by bagnar.nask.net.pl (Postfix) with ESMTP id 7CD8CA26A8;
 Tue,  7 Jul 2020 21:40:59 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by belindir.nask.net.pl (Postfix) with ESMTP id 68EDD2242F;
 Tue,  7 Jul 2020 21:40:05 +0200 (CEST)
X-Amavis-Alert: BAD HEADER SECTION, Duplicate header field: "References"
Received: from belindir.nask.net.pl ([127.0.0.1])
 by localhost (belindir.nask.net.pl [127.0.0.1]) (amavisd-new, port 10032)
 with ESMTP id XSkxh8RtH51F; Tue,  7 Jul 2020 21:39:59 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by belindir.nask.net.pl (Postfix) with ESMTP id 367FC22426;
 Tue,  7 Jul 2020 21:39:54 +0200 (CEST)
X-Quarantine-ID: <R1Vij3rUE1XP>
X-Virus-Scanned: amavisd-new at belindir.nask.net.pl
X-Amavis-Alert: BAD HEADER SECTION, Duplicate header field: "References"
Received: from belindir.nask.net.pl ([127.0.0.1])
 by localhost (belindir.nask.net.pl [127.0.0.1]) (amavisd-new, port 10026)
 with ESMTP id R1Vij3rUE1XP; Tue,  7 Jul 2020 21:39:54 +0200 (CEST)
Received: from mq-desktop.cert.pl (unknown [195.187.238.217])
 by belindir.nask.net.pl (Postfix) with ESMTPSA id F013422450;
 Tue,  7 Jul 2020 21:39:53 +0200 (CEST)
From: =?UTF-8?q?Micha=C5=82=20Leszczy=C5=84ski?= <michal.leszczynski@cert.pl>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v6 10/11] tools/libxc: add xc_vmtrace_* functions
Date: Tue,  7 Jul 2020 21:39:49 +0200
Message-Id: <476203bca92f1fb0e8de2be2bcfb88695a5688f8.1594150543.git.michal.leszczynski@cert.pl>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1594150543.git.michal.leszczynski@cert.pl>
References: <cover.1594150543.git.michal.leszczynski@cert.pl>
In-Reply-To: <cover.1594150543.git.michal.leszczynski@cert.pl>
References: <cover.1594150543.git.michal.leszczynski@cert.pl>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: luwei.kang@intel.com, Michal Leszczynski <michal.leszczynski@cert.pl>,
 tamas.lengyel@intel.com, Ian Jackson <ian.jackson@eu.citrix.com>,
 Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Michal Leszczynski <michal.leszczynski@cert.pl>

Add functions in libxc that use the new XEN_DOMCTL_vmtrace interface.

Signed-off-by: Michal Leszczynski <michal.leszczynski@cert.pl>
---
 tools/libxc/Makefile          |  1 +
 tools/libxc/include/xenctrl.h | 40 ++++++++++++++++
 tools/libxc/xc_vmtrace.c      | 87 +++++++++++++++++++++++++++++++++++
 3 files changed, 128 insertions(+)
 create mode 100644 tools/libxc/xc_vmtrace.c

diff --git a/tools/libxc/Makefile b/tools/libxc/Makefile
index fae5969a73..605e44501d 100644
--- a/tools/libxc/Makefile
+++ b/tools/libxc/Makefile
@@ -27,6 +27,7 @@ CTRL_SRCS-y       += xc_csched2.c
 CTRL_SRCS-y       += xc_arinc653.c
 CTRL_SRCS-y       += xc_rt.c
 CTRL_SRCS-y       += xc_tbuf.c
+CTRL_SRCS-y       += xc_vmtrace.c
 CTRL_SRCS-y       += xc_pm.c
 CTRL_SRCS-y       += xc_cpu_hotplug.c
 CTRL_SRCS-y       += xc_resume.c
diff --git a/tools/libxc/include/xenctrl.h b/tools/libxc/include/xenctrl.h
index 4c89b7294c..491b2c3236 100644
--- a/tools/libxc/include/xenctrl.h
+++ b/tools/libxc/include/xenctrl.h
@@ -1585,6 +1585,46 @@ int xc_tbuf_set_cpu_mask(xc_interface *xch, xc_cpumap_t mask);
 
 int xc_tbuf_set_evt_mask(xc_interface *xch, uint32_t mask);
 
+/**
+ * Enable processor tracing for a given vCPU in a given DomU.
+ * Allocate the trace ring buffer of the size configured at domain creation.
+ *
+ * @parm xch a handle to an open hypervisor interface
+ * @parm domid domain identifier
+ * @parm vcpu vcpu identifier
+ * @return 0 on success, -1 on failure
+ */
+int xc_vmtrace_pt_enable(xc_interface *xch, uint32_t domid,
+                         uint32_t vcpu);
+
+/**
+ * Disable processor tracing for a given vCPU in a given DomU.
+ * Deallocate the trace ring buffer.
+ *
+ * @parm xch a handle to an open hypervisor interface
+ * @parm domid domain identifier
+ * @parm vcpu vcpu identifier
+ * @return 0 on success, -1 on failure
+ */
+int xc_vmtrace_pt_disable(xc_interface *xch, uint32_t domid,
+                          uint32_t vcpu);
+
+/**
+ * Get the current offset inside the trace ring buffer.
+ * This lets the caller determine how much data has been written into
+ * the buffer. Once the buffer overflows, the offset resets to 0 and
+ * the previous data is overwritten.
+ *
+ * @parm xch a handle to an open hypervisor interface
+ * @parm domid domain identifier
+ * @parm vcpu vcpu identifier
+ * @parm offset where the current offset inside the trace buffer is written
+ * @parm size where the total size of the trace buffer (in bytes) is written
+ * @return 0 on success, -1 on failure
+ */
+int xc_vmtrace_pt_get_offset(xc_interface *xch, uint32_t domid,
+                             uint32_t vcpu, uint64_t *offset, uint64_t *size);
+
 int xc_domctl(xc_interface *xch, struct xen_domctl *domctl);
 int xc_sysctl(xc_interface *xch, struct xen_sysctl *sysctl);
 
diff --git a/tools/libxc/xc_vmtrace.c b/tools/libxc/xc_vmtrace.c
new file mode 100644
index 0000000000..ee034da8d3
--- /dev/null
+++ b/tools/libxc/xc_vmtrace.c
@@ -0,0 +1,87 @@
+/******************************************************************************
+ * xc_vmtrace.c
+ *
+ * API for manipulating hardware tracing features
+ *
+ * Copyright (c) 2020, Michal Leszczynski
+ *
+ * Copyright 2020 CERT Polska. All rights reserved.
+ * Use is subject to license terms.
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation;
+ * version 2.1 of the License.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include "xc_private.h"
+#include <xen/trace.h>
+
+int xc_vmtrace_pt_enable(
+        xc_interface *xch, uint32_t domid, uint32_t vcpu)
+{
+    DECLARE_DOMCTL;
+    int rc;
+
+    domctl.cmd = XEN_DOMCTL_vmtrace_op;
+    domctl.domain = domid;
+    domctl.u.vmtrace_op.cmd = XEN_DOMCTL_vmtrace_pt_enable;
+    domctl.u.vmtrace_op.vcpu = vcpu;
+    domctl.u.vmtrace_op.pad1 = 0;
+    domctl.u.vmtrace_op.pad2 = 0;
+
+    rc = do_domctl(xch, &domctl);
+    return rc;
+}
+
+int xc_vmtrace_pt_get_offset(
+        xc_interface *xch, uint32_t domid, uint32_t vcpu,
+        uint64_t *offset, uint64_t *size)
+{
+    DECLARE_DOMCTL;
+    int rc;
+
+    domctl.cmd = XEN_DOMCTL_vmtrace_op;
+    domctl.domain = domid;
+    domctl.u.vmtrace_op.cmd = XEN_DOMCTL_vmtrace_pt_get_offset;
+    domctl.u.vmtrace_op.vcpu = vcpu;
+    domctl.u.vmtrace_op.pad1 = 0;
+    domctl.u.vmtrace_op.pad2 = 0;
+
+    rc = do_domctl(xch, &domctl);
+    if ( !rc )
+    {
+        if (offset)
+            *offset = domctl.u.vmtrace_op.offset;
+
+        if (size)
+            *size = domctl.u.vmtrace_op.size;
+    }
+
+    return rc;
+}
+
+int xc_vmtrace_pt_disable(xc_interface *xch, uint32_t domid, uint32_t vcpu)
+{
+    DECLARE_DOMCTL;
+    int rc;
+
+    domctl.cmd = XEN_DOMCTL_vmtrace_op;
+    domctl.domain = domid;
+    domctl.u.vmtrace_op.cmd = XEN_DOMCTL_vmtrace_pt_disable;
+    domctl.u.vmtrace_op.vcpu = vcpu;
+    domctl.u.vmtrace_op.pad1 = 0;
+    domctl.u.vmtrace_op.pad2 = 0;
+
+    rc = do_domctl(xch, &domctl);
+    return rc;
+}
+
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue Jul 07 19:44:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jul 2020 19:44:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jstVq-0000JF-0z; Tue, 07 Jul 2020 19:44:18 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Om5Q=AS=redhat.com=eblake@srs-us1.protection.inumbo.net>)
 id 1jstVo-0000J2-Sp
 for xen-devel@lists.xenproject.org; Tue, 07 Jul 2020 19:44:16 +0000
X-Inumbo-ID: 3d3ea318-c08a-11ea-bca7-bc764e2007e4
Received: from us-smtp-1.mimecast.com (unknown [207.211.31.120])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 3d3ea318-c08a-11ea-bca7-bc764e2007e4;
 Tue, 07 Jul 2020 19:44:15 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
 s=mimecast20190719; t=1594151054;
 h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
 content-transfer-encoding:content-transfer-encoding:
 in-reply-to:in-reply-to:references:references;
 bh=ZfMfperoI6IR1+/lO21emHFoYMQUVo5XjIjuDXPGCjc=;
 b=SK0gdn0IFQQlFTzvjUrrzEw03W8s3EGInuk83Y7lk6/QVgUVxWKv4L+sQeLkAKwv5C/qLY
 8CTsKIRLoQqPPBg0H8NuLW7Hz0OKVt49mnr8e+s9OheaRBfu/vBiDsnCFwdxGy0gaRYXYT
 y4N3KzuWLgV8n8vpJthLXDrFzFSDtD4=
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-309-FVu01SLLMvS_nSwA9aNUZw-1; Tue, 07 Jul 2020 15:44:11 -0400
X-MC-Unique: FVu01SLLMvS_nSwA9aNUZw-1
Received: from smtp.corp.redhat.com (int-mx04.intmail.prod.int.phx2.redhat.com
 [10.5.11.14])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 3B9458015FA;
 Tue,  7 Jul 2020 19:44:09 +0000 (UTC)
Received: from [10.3.115.46] (ovpn-115-46.phx2.redhat.com [10.3.115.46])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id 3B4715D9DC;
 Tue,  7 Jul 2020 19:43:55 +0000 (UTC)
Subject: Re: [PATCH v12 7/8] nbd: Use ERRP_AUTO_PROPAGATE()
To: Markus Armbruster <armbru@redhat.com>, qemu-devel@nongnu.org
References: <20200707165037.1026246-1-armbru@redhat.com>
 <20200707165037.1026246-8-armbru@redhat.com>
From: Eric Blake <eblake@redhat.com>
Organization: Red Hat, Inc.
Message-ID: <88ceae2e-eb02-de34-9210-354423bab4a4@redhat.com>
Date: Tue, 7 Jul 2020 14:43:54 -0500
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.9.0
MIME-Version: 1.0
In-Reply-To: <20200707165037.1026246-8-armbru@redhat.com>
Content-Language: en-US
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.14
Authentication-Results: relay.mimecast.com;
 auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=eblake@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin Wolf <kwolf@redhat.com>, vsementsov@virtuozzo.com,
 qemu-block@nongnu.org, Paul Durrant <paul@xen.org>,
 =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@redhat.com>,
 Christian Schoenebeck <qemu_oss@crudebyte.com>,
 Michael Roth <mdroth@linux.vnet.ibm.com>, groug@kaod.org,
 Stefano Stabellini <sstabellini@kernel.org>, Gerd Hoffmann <kraxel@redhat.com>,
 Stefan Hajnoczi <stefanha@redhat.com>,
 Anthony Perard <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org,
 Max Reitz <mreitz@redhat.com>, Laszlo Ersek <lersek@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 7/7/20 11:50 AM, Markus Armbruster wrote:
> From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> 
> If we want to add some info to errp (by error_prepend() or
> error_append_hint()), we must use the ERRP_AUTO_PROPAGATE macro.
> Otherwise, this info will not be added when errp == &error_fatal
> (the program will exit prior to the error_append_hint() or
> error_prepend() call).  Fix such cases.
> 
> If we want to check error after errp-function call, we need to
> introduce local_err and then propagate it to errp. Instead, use
> ERRP_AUTO_PROPAGATE macro, benefits are:
> 1. No need of explicit error_propagate call
> 2. No need of explicit local_err variable: use errp directly
> 3. ERRP_AUTO_PROPAGATE leaves errp as is if it's not NULL or
>     &error_fatal, this means that we don't break error_abort
>     (we'll abort on error_set, not on error_propagate)
> 
> This commit is generated by command
> 
>      sed -n '/^Network Block Device (NBD)$/,/^$/{s/^F: //p}' \
>          MAINTAINERS | \
>      xargs git ls-files | grep '\.[hc]$' | \
>      xargs spatch \
>          --sp-file scripts/coccinelle/auto-propagated-errp.cocci \
>          --macro-file scripts/cocci-macro-file.h \
>          --in-place --no-show-diff --max-width 80
> 
> Reported-by: Kevin Wolf <kwolf@redhat.com>
> Reported-by: Greg Kurz <groug@kaod.org>
> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
> Reviewed-by: Markus Armbruster <armbru@redhat.com>
> [Commit message tweaked]
> Signed-off-by: Markus Armbruster <armbru@redhat.com>
> ---

Reviewed-by: Eric Blake <eblake@redhat.com>

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3226
Virtualization:  qemu.org | libvirt.org



From xen-devel-bounces@lists.xenproject.org Tue Jul 07 20:01:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jul 2020 20:01:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jstma-00028v-HR; Tue, 07 Jul 2020 20:01:36 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Gwtg=AS=redhat.com=armbru@srs-us1.protection.inumbo.net>)
 id 1jstmZ-00028q-Gm
 for xen-devel@lists.xenproject.org; Tue, 07 Jul 2020 20:01:35 +0000
X-Inumbo-ID: a8575bde-c08c-11ea-bb8b-bc764e2007e4
Received: from us-smtp-1.mimecast.com (unknown [205.139.110.120])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id a8575bde-c08c-11ea-bb8b-bc764e2007e4;
 Tue, 07 Jul 2020 20:01:34 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
 s=mimecast20190719; t=1594152093;
 h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
 in-reply-to:in-reply-to:references:references;
 bh=ANH7oWwKRrDk54ynkXpx8px3Vbl0w8cuK3dMy77nJTc=;
 b=dGCbrmzpOA0yopaLK+qU/FxIl/PTPB3xGo1VrisrewTa6aH11ekeDPhjBv80JLxJtZplRo
 1G2S1jCOaonSaqv0qq3JmfsxwLOzdHQW8Nzvr0zwRDENObhIjOP0ooA6RXF91rHNWgSLtZ
 60KlOEdvyHpbKXJEyvtrQG+hz5Y8P+Q=
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-238-PFsI-5sUPva7wvO5__2oNg-1; Tue, 07 Jul 2020 16:01:32 -0400
X-MC-Unique: PFsI-5sUPva7wvO5__2oNg-1
Received: from smtp.corp.redhat.com (int-mx08.intmail.prod.int.phx2.redhat.com
 [10.5.11.23])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 717C318FE860;
 Tue,  7 Jul 2020 20:01:30 +0000 (UTC)
Received: from blackfin.pond.sub.org (ovpn-112-143.ams2.redhat.com
 [10.36.112.143])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id F079E1A835;
 Tue,  7 Jul 2020 20:01:23 +0000 (UTC)
Received: by blackfin.pond.sub.org (Postfix, from userid 1000)
 id 5BAA71132FD2; Tue,  7 Jul 2020 22:01:21 +0200 (CEST)
From: Markus Armbruster <armbru@redhat.com>
To: Eric Blake <eblake@redhat.com>
Subject: Re: [PATCH v12 1/8] error: New macro ERRP_AUTO_PROPAGATE()
References: <20200707165037.1026246-1-armbru@redhat.com>
 <20200707165037.1026246-2-armbru@redhat.com>
 <afd4b693-2aec-247b-c0a7-7d061ed5bdff@redhat.com>
Date: Tue, 07 Jul 2020 22:01:21 +0200
In-Reply-To: <afd4b693-2aec-247b-c0a7-7d061ed5bdff@redhat.com> (Eric Blake's
 message of "Tue, 7 Jul 2020 14:02:41 -0500")
Message-ID: <87h7uj6qni.fsf@dusky.pond.sub.org>
User-Agent: Gnus/5.13 (Gnus v5.13) Emacs/26.3 (gnu/linux)
MIME-Version: 1.0
X-Scanned-By: MIMEDefang 2.84 on 10.5.11.23
Authentication-Results: relay.mimecast.com;
 auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=armbru@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin Wolf <kwolf@redhat.com>, vsementsov@virtuozzo.com,
 qemu-block@nongnu.org, Paul Durrant <paul@xen.org>,
 Philippe =?utf-8?Q?Mathieu-Daud=C3=A9?= <philmd@redhat.com>,
 Christian Schoenebeck <qemu_oss@crudebyte.com>, qemu-devel@nongnu.org,
 Michael Roth <mdroth@linux.vnet.ibm.com>, groug@kaod.org,
 Stefano Stabellini <sstabellini@kernel.org>, Gerd Hoffmann <kraxel@redhat.com>,
 Stefan Hajnoczi <stefanha@redhat.com>,
 Anthony Perard <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org,
 Max Reitz <mreitz@redhat.com>, Laszlo Ersek <lersek@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Eric Blake <eblake@redhat.com> writes:

> On 7/7/20 11:50 AM, Markus Armbruster wrote:
>> From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
>>
>> Introduce a new ERRP_AUTO_PROPAGATE macro, to be used at start of
>> functions with an errp OUT parameter.
>>
>> It has three goals:
>>
>> 1. Fix issue with error_fatal and error_prepend/error_append_hint: user
>
> the user

Yes.

>> can't see this additional information, because exit() happens in
>> error_setg earlier than information is added. [Reported by Greg Kurz]
>>
>> 2. Fix issue with error_abort and error_propagate: when we wrap
>> error_abort by local_err+error_propagate, the resulting coredump will
>> refer to error_propagate and not to the place where error happened.
>> (the macro itself doesn't fix the issue, but it allows us to [3.] drop
>> the local_err+error_propagate pattern, which will definitely fix the
>> issue) [Reported by Kevin Wolf]
>>
>> 3. Drop local_err+error_propagate pattern, which is used to workaround
>> void functions with errp parameter, when caller wants to know resulting
>> status. (Note: actually these functions could be merely updated to
>> return int error code).
>>
>> To achieve these goals, later patches will add invocations
>> of this macro at the start of functions with either use
>> error_prepend/error_append_hint (solving 1) or which use
>> local_err+error_propagate to check errors, switching those
>> functions to use *errp instead (solving 2 and 3).
>>
>> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
>> Reviewed-by: Paul Durrant <paul@xen.org>
>> Reviewed-by: Greg Kurz <groug@kaod.org>
>> Reviewed-by: Eric Blake <eblake@redhat.com>
>> [Comments merged properly with recent commit "error: Document Error
>> API usage rules", and edited for clarity.  Put ERRP_AUTO_PROPAGATE()
>> before its helpers, and touch up style.  Commit message tweaked.]
>> Signed-off-by: Markus Armbruster <armbru@redhat.com>
>> ---
>>   include/qapi/error.h | 160 ++++++++++++++++++++++++++++++++++++++-----
>>   1 file changed, 141 insertions(+), 19 deletions(-)
>>
>> diff --git a/include/qapi/error.h b/include/qapi/error.h
>> index 3fed49747d..c865a7d2f1 100644
>> --- a/include/qapi/error.h
>> +++ b/include/qapi/error.h
>
>> @@ -128,18 +122,26 @@
>>    *         handle the error...
>>    *     }
>>    * when it doesn't, say a void function:
>> + *     ERRP_AUTO_PROPAGATE();
>> + *     foo(arg, errp);
>> + *     if (*errp) {
>> + *         handle the error...
>> + *     }
>> + * More on ERRP_AUTO_PROPAGATE() below.
>> + *
>> + * Code predating ERRP_AUTO_PROPAGATE() still exits, and looks like this:
>
> exists

Fixing...

>>    *     Error *err = NULL;
>>    *     foo(arg, &err);
>>    *     if (err) {
>>    *         handle the error...
>> - *         error_propagate(errp, err);
>> + *         error_propagate(errp, err); // deprecated
>>    *     }
>> - * Do *not* "optimize" this to
>> + * Avoid in new code.  Do *not* "optimize" it to
>>    *     foo(arg, errp);
>>    *     if (*errp) { // WRONG!
>>    *         handle the error...
>>    *     }
>> - * because errp may be NULL!
>> + * because errp may be NULL!  Guard with ERRP_AUTO_PROPAGATE().
>
> maybe:
>
> because errp may be NULL without the ERRP_AUTO_PROPAGATE() guard.

Sold.

>>    *
>>    * But when all you do with the error is pass it on, please use
>>    *     foo(arg, errp);
>> @@ -158,6 +160,19 @@
>>    *         handle the error...
>>    *     }
>>    *
>> + * Pass an existing error to the caller:
>
>> + * = Converting to ERRP_AUTO_PROPAGATE() =
>> + *
>> + * To convert a function to use ERRP_AUTO_PROPAGATE():
>> + *
>> + * 0. If the Error ** parameter is not named @errp, rename it to
>> + *    @errp.
>> + *
>> + * 1. Add an ERRP_AUTO_PROPAGATE() invocation, by convention right at
>> + *    the beginning of the function.  This makes @errp safe to use.
>> + *
>> + * 2. Replace &err by errp, and err by *errp.  Delete local variable
>> + *    @err.
>> + *
>> + * 3. Delete error_propagate(errp, *errp), replace
>> + *    error_propagate_prepend(errp, *errp, ...) by error_prepend(errp, ...),
>> + *
>
> Why a comma here?

Editing accident.

>> + * 4. Ensure @errp is valid at return: when you destroy *errp, set
>> + *    errp = NULL.
>> + *
>> + * Example:
>> + *
>> + *     bool fn(..., Error **errp)
>> + *     {
>> + *         Error *err = NULL;
>> + *
>> + *         foo(arg, &err);
>> + *         if (err) {
>> + *             handle the error...
>> + *             error_propagate(errp, err);
>> + *             return false;
>> + *         }
>> + *         ...
>> + *     }
>> + *
>> + * becomes
>> + *
>> + *     bool fn(..., Error **errp)
>> + *     {
>> + *         ERRP_AUTO_PROPAGATE();
>> + *
>> + *         foo(arg, errp);
>> + *         if (*errp) {
>> + *             handle the error...
>> + *             return false;
>> + *         }
>> + *         ...
>> + *     }
>
> Do we want the example to show the use of error_free and *errp = NULL?

Yes, but we're running out of time, so let's do it in the series that
introduces the usage to the code.

> Otherwise, this is looking good to me.  It will need a tweak if we go
> with the shorter name ERRP_GUARD, but I like that idea.

Tweaking away...

Thanks!



From xen-devel-bounces@lists.xenproject.org Tue Jul 07 20:08:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jul 2020 20:08:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jsttF-0002Nx-A8; Tue, 07 Jul 2020 20:08:29 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=JMg+=AS=virtuozzo.com=vsementsov@srs-us1.protection.inumbo.net>)
 id 1jsttD-0002Ns-39
 for xen-devel@lists.xenproject.org; Tue, 07 Jul 2020 20:08:27 +0000
X-Inumbo-ID: 9c1b19ae-c08d-11ea-b7bb-bc764e2007e4
Received: from EUR04-HE1-obe.outbound.protection.outlook.com (unknown
 [2a01:111:f400:fe0d::71c])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9c1b19ae-c08d-11ea-b7bb-bc764e2007e4;
 Tue, 07 Jul 2020 20:08:23 +0000 (UTC)
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=XiDrAuSpVmz6ZLc4y1YU5F0PypfliKCbPMiN4V2i7DJjHWGhVLGnK/pPFk8rrYPWJFZQ71eQ/BiXQCV75NVJjNwYj+IofTHQ1dKRkcAf6NLeIjotX/bJAIE5OuL+5WIr9fBEQ7ZGLYLzqS2BQb845BvfLgSSsiCFK9M0XpUhmb8kGq3G0L5WDhLemrm/LHfcTjxtmwZZWCxp6ig3sK0aSCmelUU5Gp9jDjEK1goPKZY28Ue38J9twrBVhpmiem203eL5Wc2yygIamDCclcYYQDpqlLwPv4wAUzilAmDhbi4NmXET9my2EqII3t8Mm5uT3ZTVTX76oXObP7BQ41GoUA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; 
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=1Y0d7pH2Kl9fMsIHQ/bgSh645mak3V/leis4tLusQVY=;
 b=E46U6tQ3uSgPrSa4gK1Jd+Xr7rsqx0rA1uxiz5nwbghMEbOaDKdIxg2tbETJeeMueHXCrfbBoeXhmLN2Z0ZOF89PvOLt/SlzbVBsbJVnIwn9x/UkwUc+Sz2qjPol3OZdft2PCWxQnvGW51/fSYIX+twv5yv1H+CK1SruGpSrxGd8WBPmKKSdlOlCewYCuAiM3QUz79NH/hED/Uk2BGLU4UilE3WT2YJyqjLDK/zabhQZlKXaR9zqPopdNv1PDI0VHk4Xmym7AITY4jpQghxQ6Dcq0ghL0h6RBEhl9DM8316fGAdGqioorCUnUw0356azpqtUjYEZ0qPxmoG3t7ioig==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=virtuozzo.com; dmarc=pass action=none
 header.from=virtuozzo.com; dkim=pass header.d=virtuozzo.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=virtuozzo.com;
 s=selector2;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=1Y0d7pH2Kl9fMsIHQ/bgSh645mak3V/leis4tLusQVY=;
 b=AiWrRKB/ymapRzUA3SkdCXG8gK1xpaaVemisZhRwWA+Etz7YnkCLsItS5BINixKRGTXm/0QLWx1/u/XHGdNjL7nPYOlGvnYdO7VynjQ7r0x+/Qhr3Zmw3lmJaMwiKQGkQc+eh0zjDMpUhFb4kWJVYhdAOYdL7urjCWeL68eGGyc=
Authentication-Results: redhat.com; dkim=none (message not signed)
 header.d=none;redhat.com; dmarc=none action=none header.from=virtuozzo.com;
Received: from VI1PR08MB5503.eurprd08.prod.outlook.com (2603:10a6:803:137::19)
 by VI1PR08MB5503.eurprd08.prod.outlook.com (2603:10a6:803:137::19)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3153.21; Tue, 7 Jul
 2020 20:08:21 +0000
Received: from VI1PR08MB5503.eurprd08.prod.outlook.com
 ([fe80::2c53:d56b:77ba:8aac]) by VI1PR08MB5503.eurprd08.prod.outlook.com
 ([fe80::2c53:d56b:77ba:8aac%5]) with mapi id 15.20.3153.029; Tue, 7 Jul 2020
 20:08:21 +0000
Subject: Re: [PATCH v12 1/8] error: New macro ERRP_AUTO_PROPAGATE()
To: Markus Armbruster <armbru@redhat.com>, qemu-devel@nongnu.org
References: <20200707165037.1026246-1-armbru@redhat.com>
 <20200707165037.1026246-2-armbru@redhat.com>
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-ID: <3bea211a-4522-f713-2edc-261730702114@virtuozzo.com>
Date: Tue, 7 Jul 2020 23:08:19 +0300
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
In-Reply-To: <20200707165037.1026246-2-armbru@redhat.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: AM4PR05CA0029.eurprd05.prod.outlook.com (2603:10a6:205::42)
 To VI1PR08MB5503.eurprd08.prod.outlook.com
 (2603:10a6:803:137::19)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
Received: from [192.168.100.2] (185.215.60.58) by
 AM4PR05CA0029.eurprd05.prod.outlook.com (2603:10a6:205::42) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3153.23 via Frontend Transport; Tue, 7 Jul 2020 20:08:20 +0000
X-Originating-IP: [185.215.60.58]
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: a20f82af-f381-4aa1-0deb-08d822b17f72
X-MS-TrafficTypeDiagnostic: VI1PR08MB5503:
X-Microsoft-Antispam-PRVS: <VI1PR08MB5503F80F03A35A8E5733A751C1660@VI1PR08MB5503.eurprd08.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:7691;
X-Forefront-PRVS: 0457F11EAF
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: nYngkoqd/Gcc6Scb0LbJnul6tlc1SNUaRKuqWTOGynR3oLNL74Go/4FHpxHMnWI9LX3EO8ryUt2qn1g48ITW0TesFK3k0mpYKr1FKebHwXiiF7a/QKqFsatRPTOyGBwDacvRU79wVkQucXManN4OMwrAvllRvplCYnoHcsEcscEfFYCs6lUxgdJv/q9agKjqvgHOdp/sFf6o/HnGyssV7e7Z4u6xhhFNwm2edhMWaKbUn0gBCSoVM4nGrBWAzLR4mAFbVMfuQ7LZdDh2GTcLvWiEPSM6qQp8XLDLQ1iRf8wioxn9gSeKZ6WWywXWKsnDof9sFuC7NgLaLN3x8M+ilQmLRw36ZAq0iwYZg5fze5mVZJeliCI4sItu5USqEw/p
X-Forefront-Antispam-Report: CIP:255.255.255.255; CTRY:; LANG:en; SCL:1; SRV:;
 IPV:NLI; SFV:NSPM; H:VI1PR08MB5503.eurprd08.prod.outlook.com; PTR:; CAT:NONE;
 SFTY:;
 SFS:(4636009)(366004)(4326008)(26005)(83380400001)(54906003)(8936002)(31696002)(2616005)(2906002)(16576012)(956004)(86362001)(31686004)(7416002)(5660300002)(6486002)(8676002)(498600001)(36756003)(52116002)(186003)(16526019)(66476007)(66556008)(66946007)(43740500002);
 DIR:OUT; SFP:1102; 
X-MS-Exchange-AntiSpam-MessageData: L4GlWqi6r3yNIcr4nKsZgPrT+ycvy+1peDO3JLJXl6eUG6bWnMnKbpvGAFbJYfsrbtKz5vTEwRXFvQSFPZVsRYJq3EGXBbxODAY2Bj2g3BxbnCkEyiRBBSG6fnoSL5z2J+gUsZnCpLL6q9KFN3Jd5qGcu08S5DNDrxb4qhjrW8Nk4ytx6ENGU/PDj7MQ5BCiwc5j46g//f8gvYLeK1BwRpDnurpH2GYfQ9PYk56I6PeNb2cv1nqKTDNSto+CzArUkoJobIPxZrC8B/t5b36iBSQoM9bqi8HZavpCCSFVEpwfP/7aGP63VxNBNq7C2HhjiUyKHfAns0cxSejKB0SO/WDIk+saxVi5mbMA13mInD0CSzxgaz3JBCcUCZ47CQTCbR6G39mRl4iahxnqp/yWKsXCSH//AJlSwZshYIHTL7lhEqFR9GqtIzAxRX3FDVGP+p+AWtW8XES59xKF2SLFvXe1RNCBcH3TsM42COn1Sh8=
X-OriginatorOrg: virtuozzo.com
X-MS-Exchange-CrossTenant-Network-Message-Id: a20f82af-f381-4aa1-0deb-08d822b17f72
X-MS-Exchange-CrossTenant-AuthSource: VI1PR08MB5503.eurprd08.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 07 Jul 2020 20:08:21.7095 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 0bc7f26d-0264-416e-a6fc-8352af79c58f
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: LPAkjGhAG/5rKuhS9mJwXZH0Mf31nmgHRaWqEtQ9ZY24Q8a7WRMtGxRCrHIWO5Kpin5aiiDeaw7vVsL3KyUvF6bwRJ1wqzv0v7Lqq3M+q/o=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR08MB5503
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin Wolf <kwolf@redhat.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Michael Roth <mdroth@linux.vnet.ibm.com>, qemu-block@nongnu.org,
 Paul Durrant <paul@xen.org>, Laszlo Ersek <lersek@redhat.com>,
 Christian Schoenebeck <qemu_oss@crudebyte.com>, groug@kaod.org,
 Eric Blake <eblake@redhat.com>, Max Reitz <mreitz@redhat.com>,
 Gerd Hoffmann <kraxel@redhat.com>, Stefan Hajnoczi <stefanha@redhat.com>,
 Anthony Perard <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org,
 =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

07.07.2020 19:50, Markus Armbruster wrote:
> From: Vladimir Sementsov-Ogievskiy<vsementsov@virtuozzo.com>
> 
> Introduce a new ERRP_AUTO_PROPAGATE macro, to be used at start of
> functions with an errp OUT parameter.
> 
> It has three goals:
> 
> 1. Fix issue with error_fatal and error_prepend/error_append_hint: user
> can't see this additional information, because exit() happens in
> error_setg before the information is added. [Reported by Greg Kurz]
> 
> 2. Fix issue with error_abort and error_propagate: when we wrap
> error_abort in local_err+error_propagate, the resulting coredump will
> refer to error_propagate and not to the place where the error happened.
> (the macro itself doesn't fix the issue, but it allows us to [3.] drop
> the local_err+error_propagate pattern, which will definitely fix the
> issue) [Reported by Kevin Wolf]
> 
> 3. Drop the local_err+error_propagate pattern, which is used to work
> around void functions with an errp parameter when the caller wants to
> know the resulting status. (Note: these functions could instead simply
> be updated to return an int error code.)
> 
> To achieve these goals, later patches will add invocations
> of this macro at the start of functions that either use
> error_prepend/error_append_hint (solving 1) or use
> local_err+error_propagate to check errors, switching those
> functions to use *errp instead (solving 2 and 3).
> 
> Signed-off-by: Vladimir Sementsov-Ogievskiy<vsementsov@virtuozzo.com>
> Reviewed-by: Paul Durrant<paul@xen.org>
> Reviewed-by: Greg Kurz<groug@kaod.org>
> Reviewed-by: Eric Blake<eblake@redhat.com>
> [Comments merged properly with recent commit "error: Document Error
> API usage rules", and edited for clarity.  Put ERRP_AUTO_PROPAGATE()
> before its helpers, and touch up style.  Commit message tweaked.]
> Signed-off-by: Markus Armbruster<armbru@redhat.com>

Ok, I see you have mostly rewritten the big comment, and not only in this patch, so I went and read the whole comment on top of this series.

=================================

    * Pass an existing error to the caller with the message modified:
    *     error_propagate_prepend(errp, err,
    *                             "Could not frobnicate '%s': ", name);
    * This is more concise than
    *     error_propagate(errp, err); // don't do this
    *     error_prepend(errp, "Could not frobnicate '%s': ", name);
    * and works even when @errp is &error_fatal.

- the latter doesn't consider ERRP_AUTO_PROPAGATE: since we know that ERRP_AUTO_PROPAGATE should be used wherever we use error_prepend, the latter should look like


ERRP_AUTO_PROPAGATE();
...
error_propagate(errp, err); // don't do this
error_prepend(errp, "Could not frobnicate '%s': ", name);

- and it works even when @errp is &error_fatal, so error_propagate_prepend now is just a shortcut, not the only correct way.


Still, the text is formally correct as is, and may be improved later.
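The equivalence described above can be sketched in a self-contained toy model. This is NOT QEMU's real implementation: the names mirror QEMU's API, but the bodies are simplified stand-ins, and where real QEMU's fatal path prints the message and calls exit(1), the model merely records the message in fatal_msg so the difference can be observed.

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Toy model of QEMU's Error machinery, for illustration only. */
typedef struct Error { char msg[256]; } Error;

static Error *error_fatal;   /* sentinel: callers pass &error_fatal */
static char fatal_msg[256];  /* what the user would see before exit(1) */
static Error storage;        /* one static Error is enough for the model */

static void error_handle_fatal(Error **errp, Error *err)
{
    if (errp == &error_fatal) {
        snprintf(fatal_msg, sizeof fatal_msg, "%s", err->msg);
        /* real QEMU: error_report_err(err); exit(1); */
    }
}

static void error_setg(Error **errp, const char *msg)
{
    if (!errp) {
        return;
    }
    snprintf(storage.msg, sizeof storage.msg, "%s", msg);
    error_handle_fatal(errp, &storage);  /* in QEMU, exit() happens HERE */
    *errp = &storage;
}

static void error_prepend(Error **errp, const char *pfx)
{
    char buf[256];

    if (!errp || !*errp) {
        return;
    }
    snprintf(buf, sizeof buf, "%s%s", pfx, (*errp)->msg);
    snprintf((*errp)->msg, sizeof (*errp)->msg, "%s", buf);
}

static void error_propagate(Error **errp, Error *local_err)
{
    if (!local_err) {
        return;
    }
    error_handle_fatal(errp, local_err);
    if (errp && !*errp) {
        *errp = local_err;
    }
}

/* Without a guard: the prefix never reaches the user, because the
 * fatal path already fired inside error_setg(). */
static void frobnicate_unguarded(Error **errp)
{
    error_setg(errp, "out of frobs");
    error_prepend(errp, "Could not frobnicate: ");  /* too late */
}

/* With errp redirected to a local (what ERRP_AUTO_PROPAGATE does behind
 * the scenes): the prefix is added first, and the deferred propagation
 * shows the user the complete message. */
static void frobnicate_guarded(Error **errp)
{
    Error *local_err = NULL;
    Error **orig_errp = errp;

    errp = &local_err;                              /* the "guard" */
    error_setg(errp, "out of frobs");
    error_prepend(errp, "Could not frobnicate: ");
    error_propagate(orig_errp, local_err);          /* fatal handling now */
}
```

In this model, frobnicate_unguarded(&error_fatal) records only "out of frobs", while frobnicate_guarded(&error_fatal) records "Could not frobnicate: out of frobs" - which is why, once the guard is in place, error_propagate_prepend becomes a mere shortcut.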

=================================

    * 2. Replace &err by errp, and err by *errp.  Delete local variable
    *    @err.

- hmm, a bit non-obvious. The variable can be named local_err. And in some rare cases it is still needed, to handle the error locally instead of passing it to the caller.

maybe just something like "Assume a local Error *err variable is used to get errors from called functions and then propagated to the caller's errp" before paragraph [2.] will help.


    *
    * 3. Delete error_propagate(errp, *errp), replace
    *    error_propagate_prepend(errp, *errp, ...) by error_prepend(errp, ...),
    *
    * 4. Ensure @errp is valid at return: when you destroy *errp, set
    *    errp = NULL.

=================================


Maybe it would be good to add a note about ERRP_AUTO_PROPAGATE() to the comment above error_append_hint() (and error_(v)prepend()).



=================================

   /*
    * Make @errp parameter easier to use regardless of argument value

maybe s/argument/its/

    *
    * This macro is for use right at the beginning of a function that
    * takes an Error **errp parameter to pass errors to its caller.  The
    * parameter must be named @errp.
    *
    * It must be used when the function dereferences @errp or passes
    * @errp to error_prepend(), error_vprepend(), or error_append_hint().
    * It is safe to use even when it's not needed, but please avoid
    * cluttering the source with useless code.
    *
    * If @errp is NULL or &error_fatal, rewrite it to point to a local
    * Error variable, which will be automatically propagated to the
    * original @errp on function exit.
    *
    * Note: &error_abort is not rewritten, because that would move the
    * abort from the place where the error is created to the place where
    * it's propagated.
    */
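The behavior this comment documents can be approximated with the GCC/Clang cleanup attribute. The sketch below is NOT QEMU's exact code (the real macro is built on glib's g_auto machinery): Error and error_propagate() are minimal stand-ins, and the fatal path records the message into a test hook instead of calling exit(1).

```c
#include <assert.h>
#include <stddef.h>

/* Minimal stand-in for QEMU's Error type, for illustration only. */
typedef struct Error { const char *msg; } Error;

static Error *error_fatal;           /* sentinel: callers pass &error_fatal */
static const char *seen_fatal_msg;   /* test hook instead of exit(1) */

static void error_propagate(Error **dst_errp, Error *local_err)
{
    if (!local_err) {
        return;
    }
    if (dst_errp == &error_fatal) {
        seen_fatal_msg = local_err->msg;  /* real QEMU would exit(1) here */
    } else if (dst_errp && !*dst_errp) {
        *dst_errp = local_err;
    }
}

typedef struct ErrorPropagator {
    Error *local_err;
    Error **errp;
} ErrorPropagator;

static void error_propagator_cleanup(ErrorPropagator *prop)
{
    error_propagate(prop->errp, prop->local_err);
}

/*
 * If @errp is NULL or &error_fatal, redirect it to a local Error that is
 * automatically propagated back to the original @errp on scope exit.
 * After this, *errp is always safe to dereference.
 */
#define ERRP_AUTO_PROPAGATE()                                       \
    __attribute__((cleanup(error_propagator_cleanup)))              \
    ErrorPropagator _auto_errp_prop = { NULL, errp };               \
    errp = (errp == NULL || errp == &error_fatal)                   \
         ? &_auto_errp_prop.local_err : errp

static Error an_error = { "it broke" };

static void fail_with_hint(Error **errp)
{
    ERRP_AUTO_PROPAGATE();
    *errp = &an_error;   /* safe: errp is never NULL past the guard */
    /* hints could be appended to *errp here; nothing fatal has fired yet */
}   /* cleanup runs on return: propagates to the original errp */
```

Note how &error_abort would deliberately be left untouched by the NULL/&error_fatal check, matching the comment's caveat about not moving the abort away from where the error is created.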

=================================


All these are minor, the documentation is good as is, thank you!

-- 
Best regards,
Vladimir


From xen-devel-bounces@lists.xenproject.org Tue Jul 07 20:11:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jul 2020 20:11:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jstwN-0003AC-Sa; Tue, 07 Jul 2020 20:11:43 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Gwtg=AS=redhat.com=armbru@srs-us1.protection.inumbo.net>)
 id 1jstwL-0003A6-Rb
 for xen-devel@lists.xenproject.org; Tue, 07 Jul 2020 20:11:41 +0000
X-Inumbo-ID: 11f0c746-c08e-11ea-8dea-12813bfff9fa
Received: from us-smtp-delivery-1.mimecast.com (unknown [205.139.110.61])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 11f0c746-c08e-11ea-8dea-12813bfff9fa;
 Tue, 07 Jul 2020 20:11:40 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
 s=mimecast20190719; t=1594152700;
 h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
 in-reply-to:in-reply-to:references:references;
 bh=FA94CubuNsJD3dhRwMuG/cH+l9RtDN/Na7ZHN23jtV8=;
 b=MMAt3OnjEjJuRUNue7Ocn11rB53cy+ns0ldW28IZPhYmlI95n22CyT0sZ+dP0/15v6sr0D
 u1k0ySvklAd7K5txdn1GylKlZvNKSx1e3Xdj8Sg/gpYxPyGMqk2gwLNN6NfU53uGsk6fn1
 xDohjD5HLNb9KeXQhr2aMNvT13oNlL0=
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-214-lsdgI-V3MtukxB0PW3T76A-1; Tue, 07 Jul 2020 16:11:39 -0400
X-MC-Unique: lsdgI-V3MtukxB0PW3T76A-1
Received: from smtp.corp.redhat.com (int-mx05.intmail.prod.int.phx2.redhat.com
 [10.5.11.15])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 4D941107ACCA;
 Tue,  7 Jul 2020 20:11:37 +0000 (UTC)
Received: from blackfin.pond.sub.org (ovpn-112-143.ams2.redhat.com
 [10.36.112.143])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id A7A1073FEA;
 Tue,  7 Jul 2020 20:11:30 +0000 (UTC)
Received: by blackfin.pond.sub.org (Postfix, from userid 1000)
 id 3451A1132FD2; Tue,  7 Jul 2020 22:11:28 +0200 (CEST)
From: Markus Armbruster <armbru@redhat.com>
To: Eric Blake <eblake@redhat.com>
Subject: Re: [PATCH v12 2/8] scripts: Coccinelle script to use
 ERRP_AUTO_PROPAGATE()
References: <20200707165037.1026246-1-armbru@redhat.com>
 <20200707165037.1026246-3-armbru@redhat.com>
 <764387d7-0d42-a291-d720-60df303c15e4@redhat.com>
Date: Tue, 07 Jul 2020 22:11:28 +0200
In-Reply-To: <764387d7-0d42-a291-d720-60df303c15e4@redhat.com> (Eric Blake's
 message of "Tue, 7 Jul 2020 14:36:02 -0500")
Message-ID: <87y2nv5bm7.fsf@dusky.pond.sub.org>
User-Agent: Gnus/5.13 (Gnus v5.13) Emacs/26.3 (gnu/linux)
MIME-Version: 1.0
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.15
Authentication-Results: relay.mimecast.com;
 auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=armbru@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin Wolf <kwolf@redhat.com>, vsementsov@virtuozzo.com,
 qemu-block@nongnu.org, Paul Durrant <paul@xen.org>,
 Laszlo Ersek <lersek@redhat.com>,
 Christian Schoenebeck <qemu_oss@crudebyte.com>, qemu-devel@nongnu.org,
 Michael Roth <mdroth@linux.vnet.ibm.com>, groug@kaod.org,
 Stefano Stabellini <sstabellini@kernel.org>, Gerd Hoffmann <kraxel@redhat.com>,
 Stefan Hajnoczi <stefanha@redhat.com>,
 Anthony Perard <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org,
 Max Reitz <mreitz@redhat.com>,
 Philippe =?utf-8?Q?Mathieu-Daud=C3=A9?= <philmd@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Eric Blake <eblake@redhat.com> writes:

> On 7/7/20 11:50 AM, Markus Armbruster wrote:
>> From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
>>
>> The script adds an ERRP_AUTO_PROPAGATE macro invocation where
>> appropriate and makes the corresponding changes in the code (see
>> include/qapi/error.h for details).
>>
>> Usage example:
>> spatch --sp-file scripts/coccinelle/auto-propagated-errp.cocci \
>>   --macro-file scripts/cocci-macro-file.h --in-place --no-show-diff \
>>   --max-width 80 FILES...
>>
>> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
>> Reviewed-by: Markus Armbruster <armbru@redhat.com>
>> Signed-off-by: Markus Armbruster <armbru@redhat.com>
>> ---
>>   scripts/coccinelle/auto-propagated-errp.cocci | 337 ++++++++++++++++++
>>   include/qapi/error.h                          |   3 +
>>   MAINTAINERS                                   |   1 +
>>   3 files changed, 341 insertions(+)
>>   create mode 100644 scripts/coccinelle/auto-propagated-errp.cocci
>
> Needs a tweak if we go with ERRP_GUARD.  But that's easy.
>
>> +
>> +// Convert special case with goto separately.
>> +// I tried merging this into the following rule the obvious way, but
>> +// it made Coccinelle hang on block.c
>> +//
>> +// Note an interesting thing: if we don't do it here, and try to fix
>> +// up "out: }" things later after all transformations (the rule will
>> +// be the same, just without the error_propagate() call), coccinelle
>> +// fails to match this "out: }".
>
> "out: }" is not valid C; would referring to "out: ; }" fare any better?

We can try for the next batch.

>> +@ disable optional_qualifier@
>> +identifier rule1.fn, rule1.local_err, out;
>> +symbol errp;
>> +@@
>> +
>> + fn(..., Error ** ____, ...)
>> + {
>> +     <...
>> +-    goto out;
>> ++    return;
>> +     ...>
>> +- out:
>> +-    error_propagate(errp, local_err);
>> + }
>> +
>> +// Convert most of local_err related stuff.
>> +//
>> +// Note that we inherit the rule1.fn and rule1.local_err names, not
>> +// the objects themselves. We may match something not related to the
>> +// pattern matched by rule1. For example, local_err may be defined with
>> +// the same name in different blocks inside one function, following
>> +// the propagation pattern in one block but not in another.
>> +//
>> +// Note also that errp-cleaning functions
>> +//   error_free_errp
>> +//   error_report_errp
>> +//   error_reportf_errp
>> +//   warn_report_errp
>> +//   warn_reportf_errp
>> +// are not yet implemented. They must call the corresponding
>> +// Error-freeing function and then set *errp to NULL, to avoid further
>> +// propagation to the original errp (assuming ERRP_AUTO_PROPAGATE is in
>> +// use).
>> +// For example, error_free_errp may look like this:
>> +//
>> +//    void error_free_errp(Error **errp)
>> +//    {
>> +//        error_free(*errp);
>> +//        *errp = NULL;
>> +//    }
>
> I guess we can still decide later whether we want these additional
> functions, or whether they will even help, given the number of places
> we have already improved by applying this script as-is with
> Markus' cleanups in place.

Yes.

> While I won't call myself a Coccinelle expert, it at least looks sane
> enough that I'm comfortable if you add:
>
> Reviewed-by: Eric Blake <eblake@redhat.com>

Thanks!



From xen-devel-bounces@lists.xenproject.org Tue Jul 07 20:44:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 07 Jul 2020 20:44:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jsuRN-0005o2-Fq; Tue, 07 Jul 2020 20:43:45 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Gwtg=AS=redhat.com=armbru@srs-us1.protection.inumbo.net>)
 id 1jsuRM-0005nx-9j
 for xen-devel@lists.xenproject.org; Tue, 07 Jul 2020 20:43:44 +0000
X-Inumbo-ID: 8be18320-c092-11ea-bca7-bc764e2007e4
Received: from us-smtp-delivery-1.mimecast.com (unknown [205.139.110.61])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 8be18320-c092-11ea-bca7-bc764e2007e4;
 Tue, 07 Jul 2020 20:43:43 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
 s=mimecast20190719; t=1594154623;
 h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
 in-reply-to:in-reply-to:references:references;
 bh=OSUiNs2fk9vf5Yf2XiF6PjTmNHMFGvknvlUiA71OCsM=;
 b=YnL9K+zPlgkYK2giTSJwivIg8J3vPsoQoplVkdyAV9b7UQOtahZMPkWkiCjIfiB4hSkFG7
 mIzlAFHtPpDGeRW2ZzypIQdyGPmAoGX5KdgQNPOj1nZcIW4/IHJbJsr7p1cncr+3Eg+uJR
 v1d5VG43NKhNB2X1vCQbi9b+/8h0Ni0=
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-444-RP6EYJbuNYOpi3320bjyLw-1; Tue, 07 Jul 2020 16:43:41 -0400
X-MC-Unique: RP6EYJbuNYOpi3320bjyLw-1
Received: from smtp.corp.redhat.com (int-mx07.intmail.prod.int.phx2.redhat.com
 [10.5.11.22])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 368F1108BD15;
 Tue,  7 Jul 2020 20:43:39 +0000 (UTC)
Received: from blackfin.pond.sub.org (ovpn-112-143.ams2.redhat.com
 [10.36.112.143])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id CAC8110013C2;
 Tue,  7 Jul 2020 20:43:32 +0000 (UTC)
Received: by blackfin.pond.sub.org (Postfix, from userid 1000)
 id 4FE451132FD2; Tue,  7 Jul 2020 22:43:29 +0200 (CEST)
From: Markus Armbruster <armbru@redhat.com>
To: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Subject: Re: [PATCH v12 1/8] error: New macro ERRP_AUTO_PROPAGATE()
References: <20200707165037.1026246-1-armbru@redhat.com>
 <20200707165037.1026246-2-armbru@redhat.com>
 <3bea211a-4522-f713-2edc-261730702114@virtuozzo.com>
Date: Tue, 07 Jul 2020 22:43:29 +0200
In-Reply-To: <3bea211a-4522-f713-2edc-261730702114@virtuozzo.com> (Vladimir
 Sementsov-Ogievskiy's message of "Tue, 7 Jul 2020 23:08:19 +0300")
Message-ID: <87a70b5a4u.fsf@dusky.pond.sub.org>
User-Agent: Gnus/5.13 (Gnus v5.13) Emacs/26.3 (gnu/linux)
MIME-Version: 1.0
X-Scanned-By: MIMEDefang 2.84 on 10.5.11.22
Authentication-Results: relay.mimecast.com;
 auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=armbru@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin Wolf <kwolf@redhat.com>, Stefano Stabellini <sstabellini@kernel.org>,
 qemu-block@nongnu.org, Paul Durrant <paul@xen.org>,
 Philippe =?utf-8?Q?Mathieu-Daud=C3=A9?= <philmd@redhat.com>,
 Christian Schoenebeck <qemu_oss@crudebyte.com>, qemu-devel@nongnu.org,
 Michael Roth <mdroth@linux.vnet.ibm.com>, groug@kaod.org,
 Gerd Hoffmann <kraxel@redhat.com>, Stefan Hajnoczi <stefanha@redhat.com>,
 Anthony Perard <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org,
 Max Reitz <mreitz@redhat.com>, Laszlo Ersek <lersek@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> writes:

> 07.07.2020 19:50, Markus Armbruster wrote:
>> From: Vladimir Sementsov-Ogievskiy<vsementsov@virtuozzo.com>
>>
>> Introduce a new ERRP_AUTO_PROPAGATE macro, to be used at start of
>> functions with an errp OUT parameter.
>>
>> It has three goals:
>>
>> 1. Fix issue with error_fatal and error_prepend/error_append_hint: user
>> can't see this additional information, because exit() happens in
>> error_setg before the information is added. [Reported by Greg Kurz]
>>
>> 2. Fix issue with error_abort and error_propagate: when we wrap
>> error_abort in local_err+error_propagate, the resulting coredump will
>> refer to error_propagate and not to the place where the error happened.
>> (the macro itself doesn't fix the issue, but it allows us to [3.] drop
>> the local_err+error_propagate pattern, which will definitely fix the
>> issue) [Reported by Kevin Wolf]
>>
>> 3. Drop the local_err+error_propagate pattern, which is used to work
>> around void functions with an errp parameter when the caller wants to
>> know the resulting status. (Note: these functions could instead simply
>> be updated to return an int error code.)
>>
>> To achieve these goals, later patches will add invocations
>> of this macro at the start of functions that either use
>> error_prepend/error_append_hint (solving 1) or use
>> local_err+error_propagate to check errors, switching those
>> functions to use *errp instead (solving 2 and 3).
>>
>> Signed-off-by: Vladimir Sementsov-Ogievskiy<vsementsov@virtuozzo.com>
>> Reviewed-by: Paul Durrant<paul@xen.org>
>> Reviewed-by: Greg Kurz<groug@kaod.org>
>> Reviewed-by: Eric Blake<eblake@redhat.com>
>> [Comments merged properly with recent commit "error: Document Error
>> API usage rules", and edited for clarity.  Put ERRP_AUTO_PROPAGATE()
>> before its helpers, and touch up style.  Commit message tweaked.]
>> Signed-off-by: Markus Armbruster<armbru@redhat.com>
>
> Ok, I see you have mostly rewritten the big comment

Guilty as charged...  I was happy with the contents you provided (and
grateful for it), but our parallel work caused some redundancy.  I went
beyond a minimal merge to get something that reads as one coherent
text.

> and not only in this patch, so I went and read the whole comment on top of this series.
>
> =================================
>
>    * Pass an existing error to the caller with the message modified:
>    *     error_propagate_prepend(errp, err,
>    *                             "Could not frobnicate '%s': ", name);
>    * This is more concise than
>    *     error_propagate(errp, err); // don't do this
>    *     error_prepend(errp, "Could not frobnicate '%s': ", name);
>    * and works even when @errp is &error_fatal.
>
> - the latter doesn't consider ERRP_AUTO_PROPAGATE: since we know that ERRP_AUTO_PROPAGATE should be used wherever we use error_prepend, the latter should look like
>
>
> ERRP_AUTO_PROPAGATE();
> ...
> error_propagate(errp, err); // don't do this
> error_prepend(errp, "Could not frobnicate '%s': ", name);
>
> - and it works even when @errp is &error_fatal, so error_propagate_prepend now is just a shortcut, not the only correct way.

I can duplicate the advice from the paragraph preceding it, like this:

     * This is rarely needed.  When @err is a local variable, use of
     * ERRP_GUARD() commonly results in more readable code.
     * Where it is needed, it is more concise than
     *     error_propagate(errp, err); // don't do this
     *     error_prepend(errp, "Could not frobnicate '%s': ", name);
     * and works even when @errp is &error_fatal.

> Still, the text is formally correct as is, and may be improved later.
>
> =================================
>
>    * 2. Replace &err by errp, and err by *errp.  Delete local variable
>    *    @err.
>
> - hmm, a bit non-obvious. The variable can be named local_err.

Yes, but I trust the reader can make that mental jump.

> And in some rare cases it is still needed, to handle the error locally instead of passing it to the caller.

I didn't think of this.

What about

     * To convert a function to use ERRP_GUARD(), assuming the local
     * variable it propagates to @errp is called @err:
     [...]
     * 2. Replace &err by errp, and err by *errp.  Delete local variable
     *    @err if it is now unused.

Nope, still no good: if we replace like that, @err *will* be unused, and
the locally handled error will leak to the caller.
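
To make the pitfall concrete, here is a sketch using a toy mock-up of the Error type (fast_path/slow_path/do_fast_or_slow are invented names for illustration): after a conversion to work directly on *errp, a locally handled error must be both destroyed and reset, or it escapes to the caller.

```c
#include <assert.h>
#include <stdarg.h>
#include <stdio.h>
#include <stdlib.h>

/* Toy stand-in for QEMU's Error type -- illustration only. */
typedef struct Error {
    char msg[128];
} Error;

static void error_setg(Error **errp, const char *fmt, ...)
{
    Error *err = malloc(sizeof(*err));
    va_list ap;

    va_start(ap, fmt);
    vsnprintf(err->msg, sizeof(err->msg), fmt, ap);
    va_end(ap);
    *errp = err;
}

static void error_free(Error *err)
{
    free(err);
}

/* A fast path that always fails in this sketch, and a fallback. */
static void fast_path(Error **errp) { error_setg(errp, "fast path unavailable"); }
static int  slow_path(Error **errp) { (void)errp; return 42; }

/*
 * Converted to work directly on *errp (as after an ERRP_GUARD()-style
 * rewrite, which guarantees errp != NULL).  The fast-path error is
 * handled right here, so it must NOT reach the caller: destroying
 * *errp alone is not enough, it must also be reset.
 */
static int do_fast_or_slow(Error **errp)
{
    fast_path(errp);
    if (*errp) {
        error_free(*errp);
        *errp = NULL;       /* drop this line and the caller sees a
                             * stale, already-freed error */
        return slow_path(errp);
    }
    return 0;
}
```

With a blind &err -> errp, err -> *errp replacement and no reset, the caller of do_fast_or_slow() would observe an error even though the function recovered.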

No time left for wordsmithing; let's improve on top.

> maybe just something like "Assume a local Error *err variable is used to get errors from called functions and is then propagated to the caller's errp" before paragraph [2.] will help.
>
>
>    *
>    * 3. Delete error_propagate(errp, *errp), replace
>    *    error_propagate_prepend(errp, *errp, ...) by error_prepend(errp, ...).
>    *
>    * 4. Ensure @errp is valid at return: when you destroy *errp, set
>    *    *errp = NULL.
>
> =================================
>
>
> Maybe it would be good to add a note about ERRP_AUTO_PROPAGATE() to the comments above error_append_hint() (and error_(v)prepend()).

Good point.
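
For context, the service the macro provides can be modeled roughly like this (errp_guard()/errp_guard_end() and the mock Error type are invented for this sketch; the real macro does it implicitly via an automatic variable, and also handles &error_fatal): when the caller passes NULL, the guard redirects @errp to a local variable so that error_prepend() and error_append_hint() have an object to update.

```c
#include <assert.h>
#include <stdarg.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Toy stand-in for QEMU's Error type -- illustration only. */
typedef struct Error {
    char msg[256];
} Error;

static void error_setg(Error **errp, const char *fmt, ...)
{
    Error *err = malloc(sizeof(*err));
    va_list ap;

    va_start(ap, fmt);
    vsnprintf(err->msg, sizeof(err->msg), fmt, ap);
    va_end(ap);
    *errp = err;
}

static void error_prepend(Error **errp, const char *fmt, ...)
{
    char prefix[128], combined[256];
    va_list ap;

    va_start(ap, fmt);
    vsnprintf(prefix, sizeof(prefix), fmt, ap);
    va_end(ap);
    snprintf(combined, sizeof(combined), "%s%s", prefix, (*errp)->msg);
    snprintf((*errp)->msg, sizeof((*errp)->msg), "%s", combined);
}

/* Invented helpers modeling the macro's entry/exit behavior. */
typedef struct ErrorPropagator {
    Error *local_err;
    Error **orig_errp;
} ErrorPropagator;

static Error **errp_guard(ErrorPropagator *prop, Error **errp)
{
    prop->local_err = NULL;
    prop->orig_errp = errp;
    /* NULL (and, in QEMU, &error_fatal) cannot be updated in place:
     * redirect to a local variable instead. */
    return errp ? errp : &prop->local_err;
}

static void errp_guard_end(ErrorPropagator *prop)
{
    if (!prop->orig_errp) {
        free(prop->local_err);   /* caller ignores errors */
    }
}

/* Safe to call with errp == NULL thanks to the guard. */
static void frobnicate(Error **errp)
{
    ErrorPropagator prop;

    errp = errp_guard(&prop, errp);       /* "ERRP_AUTO_PROPAGATE()" */
    error_setg(errp, "out of cycles");
    error_prepend(errp, "Could not frobnicate: ");  /* no NULL deref */
    errp_guard_end(&prop);
}
```

Without the guard, error_prepend() (and error_append_hint()) on a NULL @errp would dereference a null pointer, which is exactly why such functions need the macro at their top.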

> =================================
>
>   /*
>    * Make @errp parameter easier to use regardless of argument value
>
> may be s/argument/its/
>
>    *
>    * This macro is for use right at the beginning of a function that
>    * takes an Error **errp parameter to pass errors to its caller.  The
>    * parameter must be named @errp.
>    *
>    * It must be used when the function dereferences @errp or passes
>    * @errp to error_prepend(), error_vprepend(), or error_append_hint().
>    * It is safe to use even when it's not needed, but please avoid
>    * cluttering the source with useless code.
>    *
>    * If @errp is NULL or &error_fatal, rewrite it to point to a local
>    * Error variable, which will be automatically propagated to the
>    * original @errp on function exit.
>    *
>    * Note: &error_abort is not rewritten, because that would move the
>    * abort from the place where the error is created to the place where
>    * it's propagated.
>    */
>
> =================================
>
>
> All these are minor, the documentation is good as is, thank you!

Thanks for your review, and your patience!



From xen-devel-bounces@lists.xenproject.org Tue Jul 07 22:02:44 2020
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151712-mainreport@xen.org>
Subject: [xen-4.13-testing test] 151712: tolerable FAIL - PUSHED
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 07 Jul 2020 22:02:21 +0000

flight 151712 xen-4.13-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151712/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop             fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop              fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop              fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop             fail never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop             fail never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop             fail never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass

version targeted for testing:
 xen                  378321bb1fd5272653ae64f0306827614a3bd196
baseline version:
 xen                  9f7e8bac4ca279b3bfccb5f3730fb2e5398c95ab

Last test of basis   151337  2020-06-24 15:14:15 Z   13 days
Testing same since   151712  2020-07-07 13:13:16 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   9f7e8bac4c..378321bb1f  378321bb1fd5272653ae64f0306827614a3bd196 -> stable-4.13


From xen-devel-bounces@lists.xenproject.org Wed Jul 08 00:57:52 2020
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151713-mainreport@xen.org>
Subject: [xen-4.10-testing test] 151713: regressions - trouble:
 fail/pass/starved
X-Osstest-Versions-This: xen=93be943e7d759015bd5db41a48f6dce58e580d5a
X-Osstest-Versions-That: xen=fd6e49ecae03840610fdc6a416a638590c0b6535
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 08 Jul 2020 00:57:22 +0000

flight 151713 xen-4.10-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151713/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 14 guest-saverestore.2 fail REGR. vs. 151255
 test-armhf-armhf-xl-vhd     15 guest-start/debian.repeat fail REGR. vs. 151255

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop             fail like 151255
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2 fail like 151255
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail never pass
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop             fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop             fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop             fail never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop             fail never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-arm64-arm64-xl-thunderx  2 hosts-allocate               starved  n/a

version targeted for testing:
 xen                  93be943e7d759015bd5db41a48f6dce58e580d5a
baseline version:
 xen                  fd6e49ecae03840610fdc6a416a638590c0b6535

Last test of basis   151255  2020-06-20 12:32:09 Z   17 days
Testing same since   151713  2020-07-07 13:35:49 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 starved 
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 93be943e7d759015bd5db41a48f6dce58e580d5a
Author: Julien Grall <jgrall@amazon.com>
Date:   Tue Jul 7 15:32:54 2020 +0200

    xen: Check the alignment of the offset passed via VCPUOP_register_vcpu_info
    
    Currently a guest is able to register any guest physical address to use
    for the vcpu_info structure as long as the structure can fit in the
    rest of the frame.
    
    This means a guest can provide an address that is not aligned to the
    natural alignment of the structure.
    
    On Arm 32-bit, unaligned accesses are completely forbidden by the
    hypervisor. This will result in a data abort, which is fatal.
    
    On Arm 64-bit, unaligned accesses are only forbidden when used for atomic
    access. As the structure contains fields (such as evtchn_pending_sel)
    that are updated using atomic operations, any unaligned access will be
    fatal as well.
    
    While the misalignment is only fatal on Arm, a generic check is added,
    as an x86 guest shouldn't sensibly pass an unaligned address (doing so
    would result in a split lock).
    
    This is XSA-327.
    
    Reported-by: Julien Grall <jgrall@amazon.com>
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
    master commit: 3fdc211b01b29f252166937238efe02d15cb5780
    master date: 2020-07-07 14:41:00 +0200
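The check described above can be sketched in C as follows. This is a minimal illustration, not the actual Xen code: the structure layout and the helper name `vcpu_info_offset_ok` are stand-ins, and only the size and natural-alignment requirements from the commit message are modeled.

```c
#include <stdalign.h>
#include <stdint.h>

#define PAGE_SIZE 4096u

/* Simplified stand-in for the real vcpu_info layout; only its size and
 * natural alignment matter for this check. */
struct vcpu_info_sketch {
    uint64_t evtchn_pending_sel;
    uint8_t pad[56];
};

/* Sketch of the added validation: the offset a guest registers must
 * leave room for the structure within the frame and respect its natural
 * alignment, otherwise atomic updates would fault on Arm. */
static int vcpu_info_offset_ok(unsigned int offset)
{
    if (offset > PAGE_SIZE - sizeof(struct vcpu_info_sketch))
        return 0;   /* structure would not fit in the rest of the frame */
    if (offset & (alignof(struct vcpu_info_sketch) - 1))
        return 0;   /* not aligned to the structure's natural alignment */
    return 1;
}
```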

commit 4418841aa620071a86a70f1ad5ad6e1f8c3c2636
Author: Roger Pau Monné <roger.pau@citrix.com>
Date:   Tue Jul 7 15:32:21 2020 +0200

    x86/ept: flush cache when modifying PTEs and sharing page tables
    
    Modifications made to the page tables by EPT code need to be written
    to memory when the page tables are shared with the IOMMU, as Intel
    IOMMUs can be non-coherent and thus require changes to be written to
    memory in order to be visible to the IOMMU.
    
    In order to achieve this, make sure data is written back to memory
    after writing an EPT entry when the recalc bit is not set in
    atomic_write_ept_entry. If that bit is set, the entry will be
    adjusted and atomic_write_ept_entry will be called a second time
    without the recalc bit set. Note that when splitting a super page the
    new tables resulting from the split should also be written back.
    
    Failure to do so can allow devices behind the IOMMU access to the
    stale super page, or cause coherency issues as changes made by the
    processor to the page tables are not visible to the IOMMU.
    
    This allows removing the VT-d specific iommu_pte_flush helper, since
    the cache write back is now performed by atomic_write_ept_entry, and
    hence iommu_iotlb_flush can be used to flush the IOMMU TLB. The newly
    used method (iommu_iotlb_flush) can result in fewer flushes, since it
    may legitimately be called with 0 flags, in which case it becomes a
    no-op.
    
    This is part of XSA-321.
    
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    master commit: c23274fd0412381bd75068ebc9f8f8c90a4be748
    master date: 2020-07-07 14:40:11 +0200
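The write-back rule described above can be sketched as follows. Function names and the flush primitive are illustrative stand-ins, not the real Xen implementation; the point is the condition: flush after the store when tables are shared with a possibly non-coherent IOMMU, except on the recalc pass.

```c
#include <stdint.h>

typedef uint64_t ept_entry_t;

static int cache_writebacks;  /* counts simulated cache line write-backs */

/* Stand-in for the real cache write-back primitive (hypothetical name). */
static void sync_cache(void *addr, unsigned int size)
{
    (void)addr; (void)size;
    cache_writebacks++;
}

/* Sketch of the described rule: after storing an EPT entry, write the
 * line back to memory when the tables are shared with a possibly
 * non-coherent IOMMU, unless the recalc bit is set (the entry will be
 * rewritten shortly without it, triggering the flush then). */
static void write_ept_entry(ept_entry_t *slot, ept_entry_t new_e,
                            int iommu_shared, int recalc)
{
    *slot = new_e;  /* the real code performs an atomic write here */
    if (iommu_shared && !recalc)
        sync_cache(slot, sizeof(*slot));
}
```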

commit d9c67d382a60eaad717e06746f5bcce7d16e2e83
Author: Roger Pau Monné <roger.pau@citrix.com>
Date:   Tue Jul 7 15:31:54 2020 +0200

    vtd: optimize CPU cache sync
    
    Some VT-d IOMMUs are non-coherent, which requires a cache write back
    in order for the changes made by the CPU to be visible to the IOMMU.
    This cache write back was unconditionally done using clflush, but there
    are other, more efficient instructions to do so, hence implement support
    for them using the alternative framework.
    
    This is part of XSA-321.
    
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    master commit: a64ea16522a73a13a0d66cfa4b66a9d3b95dd9d6
    master date: 2020-07-07 14:39:54 +0200
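The selection logic can be sketched as below. Note an assumption: the commit only says "more efficient instructions"; clflushopt and clwb are the usual faster options on x86, but the exact set wired up is not stated here. The real code also patches the instruction in place at boot via the alternatives framework rather than branching at runtime.

```c
/* Possible cache write-back instructions, from baseline to preferred. */
enum cache_wb_insn { USE_CLFLUSH, USE_CLFLUSHOPT, USE_CLWB };

/* Sketch of the boot-time choice the alternatives framework bakes in. */
static enum cache_wb_insn pick_cache_wb(int has_clflushopt, int has_clwb)
{
    if (has_clwb)
        return USE_CLWB;        /* writes back without evicting the line */
    if (has_clflushopt)
        return USE_CLFLUSHOPT;  /* weaker ordering, flushes can overlap */
    return USE_CLFLUSH;         /* baseline, strongly ordered */
}
```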

commit 8976bab464775e2713ec8b6c910986d13891df3a
Author: Roger Pau Monné <roger.pau@citrix.com>
Date:   Tue Jul 7 15:31:20 2020 +0200

    x86/alternative: introduce alternative_2
    
    It's based on alternative_io_2 without inputs or outputs but with an
    added memory clobber.
    
    This is part of XSA-321.
    
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>
    master commit: 23570bce00ee6ba2139ece978ab6f03ff166e21d
    master date: 2020-07-07 14:39:25 +0200

commit 388e303baaa695f8167d040e33d9b8daa63a08dd
Author: Roger Pau Monné <roger.pau@citrix.com>
Date:   Tue Jul 7 15:30:56 2020 +0200

    vtd: don't assume addresses are aligned in sync_cache
    
    Current code in sync_cache assumes that the address passed in is
    aligned to a cache line size. Fix the code to support passing in
    arbitrary addresses not necessarily aligned to a cache line size.
    
    This is part of XSA-321.
    
    Reported-by: Jan Beulich <jbeulich@suse.com>
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    master commit: b6d9398144f21718d25daaf8d72669a75592abc5
    master date: 2020-07-07 14:39:05 +0200
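The fix amounts to rounding the start address down to a cache-line boundary and iterating line by line. A minimal sketch (addresses modeled as plain integers, and `flush_one_line` standing in for clflush and friends):

```c
#include <stdint.h>

#define CACHE_LINE 64u

/* Stand-in for the per-line flush instruction. */
static unsigned int lines_flushed;
static void flush_one_line(uintptr_t line) { (void)line; lines_flushed++; }

/* Sketch of the fix: cover [addr, addr + size) line by line, rounding
 * the start down to a cache-line boundary so callers may pass addresses
 * that are not cache-line aligned. */
static void sync_cache_range(uintptr_t addr, unsigned int size)
{
    uintptr_t line = addr & ~(uintptr_t)(CACHE_LINE - 1);
    uintptr_t end  = addr + size;

    for (; line < end; line += CACHE_LINE)
        flush_one_line(line);
}
```

For example, a 2-byte range starting at offset 63 straddles two cache lines, so both must be flushed; the old code would have started at 63 and missed the first line's boundary.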

commit 0b0a15580757e8103746941e4a558a745fc553f1
Author: Roger Pau Monné <roger.pau@citrix.com>
Date:   Tue Jul 7 15:30:23 2020 +0200

    x86/iommu: introduce a cache sync hook
    
    The hook is only implemented for VT-d and it uses the already existing
    iommu_sync_cache function present in VT-d code. The new hook is
    added so that the cache can be flushed by code outside of VT-d when
    using shared page tables.
    
    Note that alloc_pgtable_maddr must use the now locally defined
    sync_cache function, because IOMMU ops are not yet set up the first
    time the function gets called during IOMMU initialization.
    
    No functional change intended.
    
    This is part of XSA-321.
    
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    master commit: 91526b460e5009fc56edbd6809e66c327281faba
    master date: 2020-07-07 14:38:34 +0200
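The hook shape can be sketched as below. Structure and field names are illustrative, not the exact Xen definitions; the design point is that generic (non-VT-d) code calls through an optional function pointer, so coherent IOMMUs that leave it NULL pay nothing.

```c
#include <stddef.h>

/* Illustrative ops table with an optional cache sync hook. */
struct iommu_ops_sketch {
    void (*sync_cache)(const void *addr, unsigned int size);
};

static unsigned int vtd_syncs;
static void vtd_sync_cache(const void *addr, unsigned int size)
{
    (void)addr; (void)size;
    vtd_syncs++;          /* the real hook writes cache lines back */
}

static const struct iommu_ops_sketch vtd_ops = { .sync_cache = vtd_sync_cache };
static const struct iommu_ops_sketch coherent_ops = { .sync_cache = NULL };

/* Generic entry point: flush only when the active IOMMU provides a hook. */
static void iommu_sync_cache(const struct iommu_ops_sketch *ops,
                             const void *addr, unsigned int size)
{
    if (ops->sync_cache)
        ops->sync_cache(addr, size);
}
```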

commit 9df4399c7900d82592cc4fe889a96f1b3078a301
Author: Roger Pau Monné <roger.pau@citrix.com>
Date:   Tue Jul 7 15:30:03 2020 +0200

    vtd: prune (and rename) cache flush functions
    
    Rename __iommu_flush_cache to iommu_sync_cache and remove
    iommu_flush_cache_page. Also remove the iommu_flush_cache_entry
    wrapper and just use iommu_sync_cache instead. Note the _entry suffix
    was meaningless as the wrapper was already taking a size parameter in
    bytes. While there also constify the addr parameter.
    
    No functional change intended.
    
    This is part of XSA-321.
    
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    master commit: 62298825b9a44f45761acbd758138b5ba059ebd1
    master date: 2020-07-07 14:38:13 +0200

commit fd5703813f92451b39fdd8257596de0c45ebb160
Author: Jan Beulich <jbeulich@suse.com>
Date:   Tue Jul 7 15:29:44 2020 +0200

    vtd: improve IOMMU TLB flush
    
    Do not limit PSI flushes to order 0 pages, in order to avoid doing a
    full TLB flush if the passed-in page has an order greater than 0 and
    is aligned. This should increase the performance of IOMMU TLB flushes
    when dealing with page orders greater than 0.
    
    This is part of XSA-321.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
    master commit: 5fe515a0fede07543f2a3b049167b1fd8b873caf
    master date: 2020-07-07 14:37:46 +0200
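The alignment condition can be sketched as follows. The helper name is hypothetical; the rule it encodes is the one implied above: a page-selective invalidation (PSI) covering 2^order pages is only well-formed when the frame number is aligned to that order, and previously any order greater than 0 fell back to a full flush.

```c
#include <stdint.h>

/* Sketch: can a PSI of 2^order pages be used for this frame number? */
static int psi_usable(uint64_t dfn, unsigned int order)
{
    /* Aligned to the order means the low `order` bits are zero. */
    return (dfn & ((UINT64_C(1) << order) - 1)) == 0;
}
```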

commit a9bda69c6bf7da2933891f3e18f77f8a18f874bb
Author: Roger Pau Monné <roger.pau@citrix.com>
Date:   Tue Jul 7 15:29:19 2020 +0200

    x86/ept: atomically modify entries in ept_next_level
    
    ept_next_level was passing a live PTE pointer to ept_set_middle_entry,
    which was then modified without taking into account that the PTE could
    be part of a live EPT table. This wasn't a security issue because the
    pages returned by p2m_alloc_ptp are zeroed, so adding such an entry
    before actually initializing it didn't allow a guest to access
    physical memory addresses it wasn't supposed to access.
    
    This is part of XSA-328.
    
    Reported-by: Jan Beulich <jbeulich@suse.com>
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    master commit: bc3d9f95d661372b059a5539ae6cb1e79435bb95
    master date: 2020-07-07 14:37:12 +0200
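The pattern behind the fix is to construct the new entry fully in a local variable and publish it to the (possibly live) table with one store, rather than updating the in-table entry piecemeal. A sketch with illustrative field encodings (the real EPT bit layout differs):

```c
#include <stdint.h>

typedef uint64_t ept_entry_t;

/* Sketch of the fix: build the middle entry locally, then publish it to
 * the possibly-live table with a single write. Bit positions here are
 * illustrative, not the real EPT encoding. */
static void set_middle_entry(volatile ept_entry_t *slot, uint64_t mfn)
{
    ept_entry_t e = 0;

    e |= mfn << 12;   /* frame of the next-level table */
    e |= 0x7;         /* read/write/execute permissions */
    *slot = e;        /* one store publishes the fully-formed entry */
}
```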

commit a380168a5672cc3bf3066f630bab3c9355e4e1cf
Author: Jan Beulich <jbeulich@suse.com>
Date:   Tue Jul 7 15:28:53 2020 +0200

    x86/EPT: ept_set_middle_entry() related adjustments
    
    ept_split_super_page() wants to further modify the newly allocated
    table, so have ept_set_middle_entry() return the mapped pointer rather
    than tearing the mapping down only to re-establish it right away.
    
    Similarly ept_next_level() wants to hand back a mapped pointer of
    the next level page, so re-use the one established by
    ept_set_middle_entry() in case that path was taken.
    
    Pull the setting of suppress_ve ahead of insertion into the higher level
    table, and don't have ept_split_super_page() set the field a 2nd time.
    
    This is part of XSA-328.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
    master commit: 1104288186ee73a7f9bfa41cbaa5bb7611521028
    master date: 2020-07-07 14:36:52 +0200

commit c1a4914323eecce82a664896845947fa1dd73ce4
Author: Jan Beulich <jbeulich@suse.com>
Date:   Tue Jul 7 15:28:25 2020 +0200

    x86/shadow: correct an inverted conditional in dirty VRAM tracking
    
    This originally was "mfn_x(mfn) == INVALID_MFN". Make it like this
    again, taking the opportunity to also drop the unnecessary nearby
    braces.
    
    This is XSA-319.
    
    Fixes: 246a5a3377c2 ("xen: Use a typesafe to define INVALID_MFN")
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
    master commit: 23a216f99d40fbfbc2318ade89d8213eea6ba1f8
    master date: 2020-07-07 14:36:24 +0200

commit 6261a06d990cc50ea8765356d296a61cc2d4a3e5
Author: Julien Grall <jgrall@amazon.com>
Date:   Tue Jul 7 15:27:47 2020 +0200

    xen/common: event_channel: Don't ignore error in get_free_port()
    
    Currently, get_free_port() is assuming that the port has been allocated
    when evtchn_allocate_port() does not return -EBUSY.
    
    However, the function may return an error when:
        - We have exhausted all the event channels. This can happen if the
        limit configured by the administrator for the guest
        ('max_event_channels' in the xl cfg) is higher than what the ABI
        used by the guest supports. For instance, if the guest is using 2L,
        the limit should not be higher than 4095.
        - We cannot allocate memory (e.g. Xen has no more memory).
    
    Users of get_free_port() (such as EVTCHNOP_alloc_unbound) will then
    assume the port is valid and will next call evtchn_from_port(). This
    will result in a crash, as the memory backing the event channel
    structure is not present.
    
    Fixes: 368ae9a05fe ("xen/pvshim: forward evtchn ops between L0 Xen and L2 DomU")
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    master commit: 2e9c2bc292231823a3a021d2e0a9f1956bf00b3c
    master date: 2020-07-07 14:35:36 +0200
(qemu changes not included)
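The get_free_port() fix above can be sketched as follows. The allocation backend is simulated with a per-port result table; the names and the port-scan shape are a simplification of the real event channel code. The contract being fixed is: only rc == 0 means success, -EBUSY means try the next port, and any other error must be propagated instead of being mistaken for success.

```c
#include <errno.h>

#define MAX_PORTS 4

/* Simulated result of evtchn_allocate_port() for each port. */
static int port_rc[MAX_PORTS];

static int allocate_port(int port) { return port_rc[port]; }

/* Sketch of the fixed logic: succeed only on rc == 0; skip -EBUSY;
 * propagate any other error (e.g. -ENOMEM) rather than ignoring it. */
static int get_free_port(void)
{
    for (int port = 0; port < MAX_PORTS; port++) {
        int rc = allocate_port(port);

        if (rc == 0)
            return port;     /* port really is allocated and backed */
        if (rc != -EBUSY)
            return rc;       /* report the failure, don't assume success */
    }
    return -ENOSPC;
}
```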


From xen-devel-bounces@lists.xenproject.org Wed Jul 08 01:58:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jul 2020 01:58:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jszMA-0007sS-Vl; Wed, 08 Jul 2020 01:58:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1K4e=AT=gmail.com=jrdr.linux@srs-us1.protection.inumbo.net>)
 id 1jszM9-0007sN-3T
 for xen-devel@lists.xenproject.org; Wed, 08 Jul 2020 01:58:41 +0000
X-Inumbo-ID: 8ac6fc5a-c0be-11ea-bb8b-bc764e2007e4
Received: from mail-lj1-x243.google.com (unknown [2a00:1450:4864:20::243])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8ac6fc5a-c0be-11ea-bb8b-bc764e2007e4;
 Wed, 08 Jul 2020 01:58:40 +0000 (UTC)
Received: by mail-lj1-x243.google.com with SMTP id h19so52341759ljg.13
 for <xen-devel@lists.xenproject.org>; Tue, 07 Jul 2020 18:58:39 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc:content-transfer-encoding;
 bh=753o6lI+14JCSrORPzmiJM1I+sNyMEUrssi+r3hqays=;
 b=ADC+/wSk/ZvHTMRaqZVMsFab7QmR7F76m5bIXFQTCrGykFlnIcim/Mt9DsEFFmtIqH
 7QdE4Mw68tnCRxeihzvLD2YG4tKnCEXfsbVEMo502yX58MvOY1yGfgr9ZxBhctr/ThOQ
 nmwwiLCIVazXAt/cNFopl1b2jRhIDsjMxgzz/AcofwcRa/m2kc7oVv068u0HrhCjGaD1
 NjSUa0zJjtYdkT5c1r6Xd4h+m7qBgQGpKNIrXpkJLxjJw2T+Illoq7YlHWKRR0ySKZcs
 +993mJrKClq5YuIXWBrcwfJi1dvDLxJgtnPKMg6mk9uDmVkn1Oq3q2EkSBmIdL2XbUto
 /0/g==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc:content-transfer-encoding;
 bh=753o6lI+14JCSrORPzmiJM1I+sNyMEUrssi+r3hqays=;
 b=X4rdb+6TF7Y+MrmdufsZ++UsPyQY6yqnkh+AzCuijt8k5cFtsLHodkpkHthHytn1u8
 q8nfBjUdy3ifuCfB6s7/5PZ3N0w7/cLc0YZePHJc6pZTEAt+VtTJFlqTSNjSKLDtldto
 4Bx07aKXxV3OImrwx8xy5amEiAQLiTotfxVyFYJLcYD4Wmc2LOrnibkno9SnR9yrD4B8
 b8quxxHcyKOnkg3XzGnleqRIj2qJSYbjYIMIX/MiSAMmLQ25ztSmoenDPRDfXWULWLmg
 3WROz2hfQSjobU6AvHBm5nI4mR6y2aziqbThoIW4YXSDHDCdbnT0N3vA/+HQneUQfjsj
 jj7Q==
X-Gm-Message-State: AOAM53029TYr6Z7dTCGSwqOQL4piqvnrLEq6HRuHcZZgwKcNCmT7UteS
 lVVMtv9Kom5tWRIUJuMO4omoMZyciVdK7g9gsyc=
X-Google-Smtp-Source: ABdhPJwX7jxnpPSaGjGI8Z53hHfavgWbxkjQSz6JsvYOwflqZC+IcLyYA9D4DBIx0PEz3GyhJjdcCJ8z+zeYHjD+EwI=
X-Received: by 2002:a2e:920e:: with SMTP id k14mr33414682ljg.430.1594173518700; 
 Tue, 07 Jul 2020 18:58:38 -0700 (PDT)
MIME-Version: 1.0
References: <1594059372-15563-1-git-send-email-jrdr.linux@gmail.com>
 <1594059372-15563-2-git-send-email-jrdr.linux@gmail.com>
 <4bafb184-6f07-2582-3d0f-86fb53dd30dc@suse.com>
 <CAFqt6zaWbEiozfkEuMvusxig15buuS1vjJaj4Q5okxNsRz_1vw@mail.gmail.com>
 <7208d7fe-8822-8e9b-e531-05238ece0b02@suse.com>
In-Reply-To: <7208d7fe-8822-8e9b-e531-05238ece0b02@suse.com>
From: Souptick Joarder <jrdr.linux@gmail.com>
Date: Wed, 8 Jul 2020 07:37:00 +0530
Message-ID: <CAFqt6zYXnD2VvGNcAnLC3BTyA4vSBpFLHQq3h+BzEDQcGRJD2w@mail.gmail.com>
Subject: Re: [PATCH v2 1/3] xen/privcmd: Corrected error handling path
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: sstabellini@kernel.org, John Hubbard <jhubbard@nvidia.com>,
 linux-kernel@vger.kernel.org, Paul Durrant <xadimgnik@gmail.com>,
 xen-devel@lists.xenproject.org, Boris Ostrovsky <boris.ostrovsky@oracle.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Tue, Jul 7, 2020 at 5:15 PM Jürgen Groß <jgross@suse.com> wrote:
>
> On 07.07.20 13:40, Souptick Joarder wrote:
> > On Tue, Jul 7, 2020 at 3:05 PM Jürgen Groß <jgross@suse.com> wrote:
> >>
> >> On 06.07.20 20:16, Souptick Joarder wrote:
> >>> Previously, if lock_pages() ends up partially mapping pages, it used
> >>> to return -ERRNO, due to which unlock_pages() had to go through
> >>> each pages[i] till *nr_pages* to validate them. This can be avoided
> >>> by passing the correct number of partially mapped pages & -ERRNO separately,
> >>> while returning from lock_pages() due to error.
> >>>
> >>> With this fix unlock_pages() doesn't need to validate pages[i] till
> >>> *nr_pages* for the error scenario, and a few condition checks can be ignored.
> >>>
> >>> Signed-off-by: Souptick Joarder <jrdr.linux@gmail.com>
> >>> Cc: John Hubbard <jhubbard@nvidia.com>
> >>> Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
> >>> Cc: Paul Durrant <xadimgnik@gmail.com>
> >>> ---
> >>>    drivers/xen/privcmd.c | 31 +++++++++++++++----------------
> >>>    1 file changed, 15 insertions(+), 16 deletions(-)
> >>>
> >>> diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
> >>> index a250d11..33677ea 100644
> >>> --- a/drivers/xen/privcmd.c
> >>> +++ b/drivers/xen/privcmd.c
> >>> @@ -580,13 +580,13 @@ static long privcmd_ioctl_mmap_batch(
> >>>
> >>>    static int lock_pages(
> >>>        struct privcmd_dm_op_buf kbufs[], unsigned int num,
> >>> -     struct page *pages[], unsigned int nr_pages)
> >>> +     struct page *pages[], unsigned int nr_pages, unsigned int *pinned)
> >>>    {
> >>>        unsigned int i;
> >>> +     int page_count = 0;
> >>
> >> Initial value shouldn't be needed, and ...
> >>
> >>>
> >>>        for (i = 0; i < num; i++) {
> >>>                unsigned int requested;
> >>> -             int pinned;
> >>
> >> ... you could move the declaration here.
> >>
> >> With that done you can add my
> >>
> >> Reviewed-by: Juergen Gross <jgross@suse.com>
> >
> > Ok. But is it going to make any difference other than limiting scope?
>
> Dropping the initializer surely does, and in the end page_count just
> replaces the former pinned variable, so why would we want to widen the
> scope with this patch?

Agree, no reason to move it up. Will change it in v3.

>
>
> Juergen


From xen-devel-bounces@lists.xenproject.org Wed Jul 08 04:24:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jul 2020 04:24:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jt1cb-0003g9-1r; Wed, 08 Jul 2020 04:23:49 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=okql=AT=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jt1ca-0003fv-3m
 for xen-devel@lists.xenproject.org; Wed, 08 Jul 2020 04:23:48 +0000
X-Inumbo-ID: d09d0288-c0d2-11ea-b7bb-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d09d0288-c0d2-11ea-b7bb-bc764e2007e4;
 Wed, 08 Jul 2020 04:23:46 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=JD9xpqGWrvs2IBms+Kfc1LpdwOxEHQo0pohLZoTPdvU=; b=DJwEUnoJ6+LiGl0KaVFaBBujk
 L6swTgiSIC9Pjk09M1PyJT+WeevAbNS0rgLTCNsCylH+tJ/VE4k+esbfq1AdfiIPepVPrR6jl0Hg1
 h8c/EQfSt7hT1d/gvG3BMHxwMGqtESqxdsIP+w0ZUyh8gscXhP6Sgl7ZtOqcf/WzvfFis=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jt1cX-0007Mo-JF; Wed, 08 Jul 2020 04:23:45 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jt1cX-0006tw-3K; Wed, 08 Jul 2020 04:23:45 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jt1cX-0002yn-26; Wed, 08 Jul 2020 04:23:45 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151714-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.11-testing test] 151714: tolerable FAIL - PUSHED
X-Osstest-Failures: xen-4.11-testing:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:nonblocking
 xen-4.11-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-4.11-testing:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:nonblocking
 xen-4.11-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-4.11-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 xen-4.11-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-4.11-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 xen-4.11-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-4.11-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-4.11-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 xen-4.11-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 xen-4.11-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 xen-4.11-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 xen-4.11-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-4.11-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 xen-4.11-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-4.11-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 xen-4.11-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-4.11-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 xen-4.11-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 xen-4.11-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-4.11-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 xen-4.11-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 xen-4.11-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 xen-4.11-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-4.11-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 xen-4.11-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-4.11-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 xen-4.11-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 xen-4.11-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 xen-4.11-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 xen-4.11-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 xen-4.11-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-4.11-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 xen-4.11-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-4.11-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 xen-4.11-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 xen-4.11-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 xen-4.11-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 xen-4.11-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 xen-4.11-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 xen-4.11-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 xen-4.11-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-4.11-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 xen-4.11-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-4.11-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-4.11-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-4.11-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-4.11-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-4.11-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 xen-4.11-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 xen-4.11-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-4.11-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
X-Osstest-Versions-This: xen=ddaaccbbab6b19bf21ed2c097f3055a3c2544c8d
X-Osstest-Versions-That: xen=2b77729888fb851ab96e7f77bc854122626b4861
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 08 Jul 2020 04:23:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151714 xen-4.11-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151714/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop             fail never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop             fail never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass

version targeted for testing:
 xen                  ddaaccbbab6b19bf21ed2c097f3055a3c2544c8d
baseline version:
 xen                  2b77729888fb851ab96e7f77bc854122626b4861

Last test of basis   151318  2020-06-23 18:34:16 Z   14 days
Testing same since   151714  2020-07-07 13:35:55 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   2b77729888..ddaaccbbab  ddaaccbbab6b19bf21ed2c097f3055a3c2544c8d -> stable-4.11


From xen-devel-bounces@lists.xenproject.org Wed Jul 08 04:25:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jul 2020 04:25:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jt1eU-0003nR-FX; Wed, 08 Jul 2020 04:25:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ym/r=AT=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jt1eS-0003nJ-Ul
 for xen-devel@lists.xenproject.org; Wed, 08 Jul 2020 04:25:44 +0000
X-Inumbo-ID: 16c4b954-c0d3-11ea-bb8b-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 16c4b954-c0d3-11ea-bb8b-bc764e2007e4;
 Wed, 08 Jul 2020 04:25:44 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 31DACAD81;
 Wed,  8 Jul 2020 04:25:44 +0000 (UTC)
Subject: Re: [linux-linus test] 151705: regressions - FAIL
To: osstest service owner <osstest-admin@xenproject.org>,
 xen-devel@lists.xenproject.org
References: <osstest-151705-mainreport@xen.org>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <e47bb121-72f3-e608-1177-4991669ec5d7@suse.com>
Date: Wed, 8 Jul 2020 06:25:42 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.9.0
MIME-Version: 1.0
In-Reply-To: <osstest-151705-mainreport@xen.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 07.07.20 21:23, osstest service owner wrote:
> flight 151705 linux-linus real [real]
> http://logs.test-lab.xenproject.org/osstest/logs/151705/
> 
> Regressions :-(
> 
> Tests which did not succeed and are blocking,
> including tests which could not be run:

...

>   build-i386-pvops              6 kernel-build             fail REGR. vs. 151214

This is a known problem; a fixing patch is already queued for the
next rc.


Juergen


From xen-devel-bounces@lists.xenproject.org Wed Jul 08 04:58:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jul 2020 04:58:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jt29e-0006Ji-3L; Wed, 08 Jul 2020 04:57:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2zOC=AT=redhat.com=armbru@srs-us1.protection.inumbo.net>)
 id 1jt29c-0006Jd-SH
 for xen-devel@lists.xenproject.org; Wed, 08 Jul 2020 04:57:57 +0000
X-Inumbo-ID: 96195ac6-c0d7-11ea-bb8b-bc764e2007e4
Received: from us-smtp-1.mimecast.com (unknown [207.211.31.120])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 96195ac6-c0d7-11ea-bb8b-bc764e2007e4;
 Wed, 08 Jul 2020 04:57:55 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
 s=mimecast20190719; t=1594184275;
 h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
 content-transfer-encoding:content-transfer-encoding:
 in-reply-to:in-reply-to:references:references;
 bh=smbKjRsU5i3KhKg3QbXzYi2KAz29ytP0Ea1NEY02NP8=;
 b=PtTPUP8GHXGdumVac5oiLLMZd6SPW2UMnfgQAPOdEgCpGaHPbFVZV6nx3Cza/VNJ1YLeIh
 6vbITakfC7OlU81+exfkeTQH80Wav3kjO7cjjm1o++kRAnO0Psxvx6Kp6GlHFCZVjrEU+v
 Kh+POpDqU3TXlY9IBLwbLpAu3NUmeHw=
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-407-Pw-148vnOlaI8JRnwwDe1w-1; Wed, 08 Jul 2020 00:57:53 -0400
X-MC-Unique: Pw-148vnOlaI8JRnwwDe1w-1
Received: from smtp.corp.redhat.com (int-mx01.intmail.prod.int.phx2.redhat.com
 [10.5.11.11])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id C50A11005510;
 Wed,  8 Jul 2020 04:57:48 +0000 (UTC)
Received: from blackfin.pond.sub.org (ovpn-112-143.ams2.redhat.com
 [10.36.112.143])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id 676E271678;
 Wed,  8 Jul 2020 04:57:34 +0000 (UTC)
Received: by blackfin.pond.sub.org (Postfix, from userid 1000)
 id B747E1132FD2; Wed,  8 Jul 2020 06:57:32 +0200 (CEST)
From: Markus Armbruster <armbru@redhat.com>
To: Daniel P. =?utf-8?Q?Berrang=C3=A9?= <berrange@redhat.com>
Subject: Re: [PATCH] trivial: Remove trailing whitespaces
References: <20200706162300.1084753-1-dinechin@redhat.com>
 <20200707102510.GF2649462@redhat.com>
Date: Wed, 08 Jul 2020 06:57:32 +0200
In-Reply-To: <20200707102510.GF2649462@redhat.com> ("Daniel P. =?utf-8?Q?B?=
 =?utf-8?Q?errang=C3=A9=22's?=
 message of "Tue, 7 Jul 2020 11:25:10 +0100")
Message-ID: <878sfu4n9f.fsf@dusky.pond.sub.org>
User-Agent: Gnus/5.13 (Gnus v5.13) Emacs/26.3 (gnu/linux)
MIME-Version: 1.0
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.11
Authentication-Results: relay.mimecast.com;
 auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=armbru@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Peter Maydell <peter.maydell@linaro.org>,
 Dmitry Fleytman <dmitry.fleytman@gmail.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>, qemu-devel@nongnu.org,
 Max Filippov <jcmvbkbc@gmail.com>, Gerd Hoffmann <kraxel@redhat.com>,
 "Edgar E. Iglesias" <edgar.iglesias@gmail.com>,
 Jean-Christophe Dubois <jcd@tribudubois.net>, Marek Vasut <marex@denx.de>,
 Stefano Stabellini <sstabellini@kernel.org>, qemu-block@nongnu.org,
 qemu-trivial@nongnu.org, Paul Durrant <paul@xen.org>,
 Magnus Damm <magnus.damm@gmail.com>, Michael Roth <mdroth@linux.vnet.ibm.com>,
 =?utf-8?Q?Herv=C3=A9?= Poussineau <hpoussin@reactos.org>,
 Anthony Perard <anthony.perard@citrix.com>,
 =?utf-8?Q?Marc-Andr=C3=A9?= Lureau <marcandre.lureau@redhat.com>,
 David Gibson <david@gibson.dropbear.id.au>,
 Philippe =?utf-8?Q?Mathieu-Daud=C3=A9?= <philmd@redhat.com>,
 Artyom Tarasenko <atar4qemu@gmail.com>, Riku Voipio <riku.voipio@iki.fi>,
 Eduardo Habkost <ehabkost@redhat.com>, Michael Tokarev <mjt@tls.msk.ru>,
 Alistair Francis <alistair@alistair23.me>, Peter Lieven <pl@kamp.de>,
 "Dr. David Alan Gilbert" <dgilbert@redhat.com>,
 Roman Bolshakov <r.bolshakov@yadro.com>, qemu-arm@nongnu.org,
 Peter Chubb <peter.chubb@nicta.com.au>,
 Ronnie Sahlberg <ronniesahlberg@gmail.com>, xen-devel@lists.xenproject.org,
 Alex =?utf-8?Q?Benn=C3=A9e?= <alex.bennee@linaro.org>,
 Richard Henderson <rth@twiddle.net>, Kevin Wolf <kwolf@redhat.com>,
 Yoshinori Sato <ysato@users.sourceforge.jp>, Chris Wulff <crwulff@gmail.com>,
 Laurent Vivier <laurent@vivier.eu>, Max Reitz <mreitz@redhat.com>,
 qemu-ppc@nongnu.org, Christophe de Dinechin <dinechin@redhat.com>,
 Paolo Bonzini <pbonzini@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Daniel P. Berrangé <berrange@redhat.com> writes:

> On Mon, Jul 06, 2020 at 06:23:00PM +0200, Christophe de Dinechin wrote:
>> There are a number of unnecessary trailing whitespaces that have
>> accumulated over time in the source code. They cause stray changes
>> in patches if you use tools that automatically remove them.
>>
>> Tested by doing a `git diff -w` after the change.
>>
>> This could probably be turned into a pre-commit hook.

See .git/hooks/pre-commit.sample.

Expected test output is prone to flunk the whitespace test.  One
solution is to strip trailing whitespace from test output.
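As a hedged aside (not part of the original thread): both the cleanup patch and the proposed check reduce to "find or strip spaces/tabs at end of line". A minimal Python sketch of that logic, assuming nothing about checkpatch.pl's actual implementation:

```python
import re

def find_trailing_whitespace(text):
    """Return 1-based line numbers that a checkpatch-style check would flag."""
    return [i for i, line in enumerate(text.splitlines(), start=1)
            if line != line.rstrip(" \t")]

def strip_trailing_whitespace(text):
    """Remove spaces/tabs at line ends, as the tree-wide patch does."""
    return re.sub(r"[ \t]+$", "", text, flags=re.MULTILINE)

sample = "int x;  \nint y;\n\tdone \n"
print(find_trailing_whitespace(sample))   # -> [1, 3]
print(strip_trailing_whitespace(sample))  # trailing spaces removed
```

A real pre-commit hook would instead run the equivalent of `git diff --cached --check`, which already flags trailing whitespace in staged changes.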

> scripts/checkpatch.pl ought to be made to check it.
>
>>
>> Signed-off-by: Christophe de Dinechin <dinechin@redhat.com>
>> ---
[...]
>>  78 files changed, 440 insertions(+), 440 deletions(-)
>
> The cleanup is a good idea; however, I think it is probably better to
> split the patch roughly by subsystem. That will make it much easier
> to cherry-pick for people doing backports.

I doubt that's worth the trouble.

Acked-by: Markus Armbruster <armbru@redhat.com>



From xen-devel-bounces@lists.xenproject.org Wed Jul 08 05:40:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jul 2020 05:40:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jt2od-000281-EN; Wed, 08 Jul 2020 05:40:19 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ym/r=AT=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jt2oc-00027w-7S
 for xen-devel@lists.xenproject.org; Wed, 08 Jul 2020 05:40:18 +0000
X-Inumbo-ID: 800f5e28-c0dd-11ea-8e19-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 800f5e28-c0dd-11ea-8e19-12813bfff9fa;
 Wed, 08 Jul 2020 05:40:16 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id C55DEABE4;
 Wed,  8 Jul 2020 05:40:15 +0000 (UTC)
Subject: Re: [PATCH v2 2/3] xen/privcmd: Mark pages as dirty
To: John Hubbard <jhubbard@nvidia.com>, Souptick Joarder <jrdr.linux@gmail.com>
References: <1594059372-15563-1-git-send-email-jrdr.linux@gmail.com>
 <1594059372-15563-3-git-send-email-jrdr.linux@gmail.com>
 <8fdd8c77-27dd-2847-7929-b5d3098b1b45@suse.com>
 <CAFqt6zZRx3oDO+p2e6EiDig9fzKirME-t6fanzDRh6e7gWx+nA@mail.gmail.com>
 <4abc0dd2-655c-16fa-dfc3-95904196c81f@suse.com>
 <4c6e52e7-1d33-132b-1d7e-e57963966dcc@nvidia.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <df0a79be-540a-5f5f-4aa4-41a11131b9b5@suse.com>
Date: Wed, 8 Jul 2020 07:40:14 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.9.0
MIME-Version: 1.0
In-Reply-To: <4c6e52e7-1d33-132b-1d7e-e57963966dcc@nvidia.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>, sstabellini@kernel.org,
 linux-kernel@vger.kernel.org, Paul Durrant <xadimgnik@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 07.07.20 21:30, John Hubbard wrote:
> On 2020-07-07 04:43, Jürgen Groß wrote:
>> On 07.07.20 13:30, Souptick Joarder wrote:
>>> On Tue, Jul 7, 2020 at 3:08 PM Jürgen Groß <jgross@suse.com> wrote:
> ...
>>>>> diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
>>>>> index 33677ea..f6c1543 100644
>>>>> --- a/drivers/xen/privcmd.c
>>>>> +++ b/drivers/xen/privcmd.c
>>>>> @@ -612,8 +612,11 @@ static void unlock_pages(struct page *pages[], unsigned int nr_pages)
>>>>>    {
>>>>>        unsigned int i;
>>>>>
>>>>> -     for (i = 0; i < nr_pages; i++)
>>>>> +     for (i = 0; i < nr_pages; i++) {
>>>>> +             if (!PageDirty(pages[i]))
>>>>> +                     set_page_dirty_lock(pages[i]);
>>>>
>>>> With put_page() directly following I think you should be able to use
>>>> set_page_dirty() instead, as there is obviously a reference to the page
>>>> existing.
>>>
>>> Patch [3/3] will convert the above code to use
>>> unpin_user_pages_dirty_lock(), which internally does the same check.
>>> So I thought to keep the linux-stable and linux-next code in sync.
>>> John had a similar concern [1] and later agreed to keep this check.
>>>
>>> Shall I keep this check, or drop it?
> 
> It doesn't matter *too* much, because patch 3/3 fixes up everything by
> changing it all to unpin_user_pages_dirty_lock(). However, there is
> something to be said for having correct interim patches, too. :)  Details:
> 
>>>
>>> [1] https://lore.kernel.org/xen-devel/a750e5e5-fd5d-663b-c5fd-261d7c939ba7@nvidia.com/
>>>
>>
>> I wasn't referring to checking PageDirty(), but to the use of
>> set_page_dirty_lock().
>>
>> The comment just before the implementation of set_page_dirty_lock()
>> suggests that it is fine to use set_page_dirty() instead (i.e. without
>> calling lock_page()).
> 
> 
> no no, that's a misreading of the comment. Unless this xen/privcmd code has
> somehow taken a reference on page->mapping->host (which I do *not* think is
> the case), then it is still racy to call set_page_dirty() here. Instead,
> set_page_dirty_lock() should be used.

Ah, okay. Thanks for the clarification.

So you can add my

Reviewed-by: Juergen Gross <jgross@suse.com>


Juergen


From xen-devel-bounces@lists.xenproject.org Wed Jul 08 05:59:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jul 2020 05:59:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jt37J-00038v-2N; Wed, 08 Jul 2020 05:59:37 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=yNBQ=AT=epam.com=oleksandr_andrushchenko@srs-us1.protection.inumbo.net>)
 id 1jt37H-00038q-Sn
 for xen-devel@lists.xenproject.org; Wed, 08 Jul 2020 05:59:36 +0000
X-Inumbo-ID: 3215bee4-c0e0-11ea-bb8b-bc764e2007e4
Received: from EUR04-VI1-obe.outbound.protection.outlook.com (unknown
 [40.107.8.48]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3215bee4-c0e0-11ea-bb8b-bc764e2007e4;
 Wed, 08 Jul 2020 05:59:34 +0000 (UTC)
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=du/7gS0JjyMYmY2QnHsRiWDq8G4QGnJsFOXsRpIzraxavp7tMBNfePkftRhiVlPcmz1L3pNdEei3nx0B0pUX9cNAks7RTHdIWJbrHOCz49zTtoxqS4GFonmjXEl3Tr7VFWoRN2YE9I39Q3P3NmH7SGsUfrnILhkcTxEzTAz0QelSy88uReNv/KcCbDK0QQ+uNEyFeDWJaVrHyWOHeqvzX+qVfwoksdfUX3TwU3d7nGU3tzbmVCEcUmEDIwzbljIojOcZ2MOghw9haGavl3FKUzflEuQ8u2geei9EfuICVdcmJxCtiavF3OmzFq1MuYazEaEJf9vsGfXXCSGQ9A+iuQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; 
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=gkZben1qMnLj1VM1VKV6UvVAcUqY5BtSCC6VYfncShA=;
 b=UwerMGGjATM/T7Jw7/pg71wHLm6WERkCO++Daj/8E/Iyj6QxtS6dDYjwuDLUuXvGo50o+HiEPe1xQhfbSIlX+HfJVHp5p3ktJe8QU2umZZVtgWqdM51UgQxDlYtvPtcM3C4Q5wgHLO1I5/kd6AyODhZOn4i3KatO09gRC3UVqy5Xuy+dHiJYSb3oE/RA5cUWzc9B5O96uClEHCdWU3Oo7MOqOvTb6uXlfM+/Eod6yOZvXwkDj5KYeqP006hXsmvMDQd0V8xo9ZXT2oBZ63oFziFnGjulwlnNRVR3DzIPIsBXxtAgUIcsNwGpQYi9tHNSXUN68HJkKcm8T3lpdaRPxg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=epam.com; dmarc=pass action=none header.from=epam.com;
 dkim=pass header.d=epam.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=epam.com; s=selector1; 
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=gkZben1qMnLj1VM1VKV6UvVAcUqY5BtSCC6VYfncShA=;
 b=4CdfZMYGfimy9jke+yok6b23xX657LXM1XC5wxYSkPfA+Gbiwe4mc+up0R3NbPR/P8SQVw5wqb/5C5zx4yrvFuDXSby+1bBEO1NKla5fFpR5jp/OOh59X5Ai/IRdxI6WUZAFKxYp0PBN9RKVQHx5y/erKjcZlZkLjzoj+6u2PECp5fHmUSHxbvzbZt5uZ4cXCvmXsc/4kXZU7sz8oB3sqs6je/jFeE0TZU1Y33KAdbq9ywGMrPuhGH/cAnA/2/JiQSqCJY7mHCELCejiTnAsMam3hJrNVt/Dd7YVrKHbLg0TLPklg2BC3CTQ4dkz0ZRB31WS+lB6RG1R/uQFt1aBoQ==
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com (2603:10a6:20b:153::17)
 by AM0PR03MB6323.eurprd03.prod.outlook.com (2603:10a6:20b:159::19)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3153.23; Wed, 8 Jul
 2020 05:59:32 +0000
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::21e5:6d27:5ba0:f508]) by AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::21e5:6d27:5ba0:f508%9]) with mapi id 15.20.3174.021; Wed, 8 Jul 2020
 05:59:32 +0000
From: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
To: "paul@xen.org" <paul@xen.org>
Subject: Re: [PATCH v2] xen/displif: Protocol version 2
Thread-Topic: [PATCH v2] xen/displif: Protocol version 2
Thread-Index: AQHWT3f2pfc1d0tAak69A/eTMH1j5qjyit6AgAAWp4CAAUH0AIAACwsAgAAL5oCACUCJAA==
Date: Wed, 8 Jul 2020 05:59:32 +0000
Message-ID: <9bef2cd8-2ff4-2507-efc5-ed087333455c@epam.com>
References: <20200701071923.18883-1-andr2000@gmail.com>
 <dffd127d-c5a1-4c77-baa8-f1d931145bc4@suse.com>
 <b5a6e034-4d52-d6b2-7c14-3c44c4a19cc3@epam.com>
 <e442e4d9-fe79-7f65-c196-2a0a35923492@suse.com>
 <f50ec904-8cb2-2bd6-c3ba-35e8c44bd607@epam.com>
 <be21be56-ea1b-e558-6905-a6cb3e5e4849@suse.com>
In-Reply-To: <be21be56-ea1b-e558-6905-a6cb3e5e4849@suse.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
authentication-results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=epam.com;
x-originating-ip: [185.199.97.5]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 20ead0b4-cb1a-4c87-d193-08d8230415dc
x-ms-traffictypediagnostic: AM0PR03MB6323:
x-microsoft-antispam-prvs: <AM0PR03MB6323A48FDEAC8F88A263038BE7670@AM0PR03MB6323.eurprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:7219;
x-forefront-prvs: 04583CED1A
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info: 0DKK5V9/AzUe2pSrjOigVLrUrLVj9lTFVi1XrOTBuBzSlLFzK37MA5JDxMUOgSMLcXeqNrF3cNAA2xWSMZPXZ8bczQATDfrk9lBsQFlA2laK7Fva70ZD8mI0Mhk0Htt8lR1PjYbNcnLTQLVG0G7wmP8YUKG69UaYTZyMY2M5BukM10ls1/57ie8yPCRf/amaGYRXBUZq2W6e9SGOukHL0XKqB5e1pMcg8a0i6JExIzl3V7TRUQJNq42vHQHgY5iETMydcyI4uh3uhyI4E7dFokvFrqVJG/aVuk8cDMNX84MLJZ4vo+BU75fgIe8x47n1
x-forefront-antispam-report: CIP:255.255.255.255; CTRY:; LANG:en; SCL:1; SRV:;
 IPV:NLI; SFV:NSPM; H:AM0PR03MB6324.eurprd03.prod.outlook.com; PTR:; CAT:NONE;
 SFTY:;
 SFS:(4636009)(136003)(396003)(39860400002)(376002)(346002)(366004)(53546011)(6506007)(478600001)(83380400001)(5660300002)(8676002)(8936002)(2906002)(31686004)(186003)(26005)(36756003)(66574015)(86362001)(31696002)(2616005)(6486002)(6916009)(71200400001)(54906003)(66446008)(66556008)(64756008)(66476007)(91956017)(76116006)(66946007)(316002)(4326008)(6512007);
 DIR:OUT; SFP:1101; 
x-ms-exchange-antispam-messagedata: sRyp9idyElF/d4GYEeChL8M3ZysfL8pDfR1HhAXlECIvfIvcd4rtMwh3UClRozYbvS7tVE2KSe8ccDuiCwojfgNKcorVoT6jdSn4BQtjvW1fy25dCxWqPDoWqi4XskKkhN6mqn1xeW9ikd/qalvQBd7xrYRN4YR3s0bS6nRSr2GxdhDRMEyOmTpajkRYOpEEnn94Gx8nyF1LHa+9QEoYxrE6ALDhYPznO3zvRJzyF/saUHSCpUmcFa+i9guqrPM5ImNfipH2nYXMHJF95ABjo2mesE+um9uHxDS3jt/9odUJIaZKNugYu0ZmKHvXhdSGuVThL1/JA8h9yyQr8xkEzYgjpaf7Whoee6hJvSH/srKzUPn1d3UxYseC9uNp6UWGzhcjjG6ECY0mw6MukpGBP8sxAPl1h9gBwxevs261bpMDJk1320n0a/GB/CoXeIoXDZrkOW5CS/t3NPAL3M8fyu2fhN6fEURBAHRSvj93Z7E=
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="utf-8"
Content-ID: <F1DB7E96D656A84AA995342877F7CC7D@eurprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: epam.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: AM0PR03MB6324.eurprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 20ead0b4-cb1a-4c87-d193-08d8230415dc
X-MS-Exchange-CrossTenant-originalarrivaltime: 08 Jul 2020 05:59:32.5673 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b41b72d0-4e9f-4c26-8a69-f949f367c91d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: LG+0ZgFzeJeDZZux0AdPswB02BMFsyZIvZpT6d/f2nJH1axtM/noHuJS39UDZi526ocdlvi+UxV578tUqs9po6ny2pxzdpPk8NERsFD7mXPxmfeM5TH0uggCf/3c7N4J
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR03MB6323
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Jürgen Groß <jgross@suse.com>,
 Oleksandr Andrushchenko <andr2000@gmail.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi, Paul!

On 7/2/20 11:42 AM, Jürgen Groß wrote:
> On 02.07.20 09:59, Oleksandr Andrushchenko wrote:
>>
>> On 7/2/20 10:20 AM, Jürgen Groß wrote:
>>> On 01.07.20 14:07, Oleksandr Andrushchenko wrote:
>>>> On 7/1/20 1:46 PM, Jürgen Groß wrote:
>>>>> On 01.07.20 09:19, Oleksandr Andrushchenko wrote:
>>>>>> From: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
>>>>>>
>>>>>> 1. Add protocol version as an integer
>>>>>>
>>>>>> Version string, which is in fact an integer, is hard to handle in the
>>>>>> code that supports different protocol versions. To simplify that
>>>>>> also add the version as an integer.
>>>>>>
>>>>>> 2. Pass buffer offset with XENDISPL_OP_DBUF_CREATE
>>>>>>
>>>>>> There are cases when display data buffer is created with non-zero
>>>>>> offset to the data start. Handle such cases and provide that offset
>>>>>> while creating a display buffer.
>>>>>>
>>>>>> 3. Add XENDISPL_OP_GET_EDID command
>>>>>>
>>>>>> Add an optional request for reading Extended Display Identification
>>>>>> Data (EDID) structure which allows better configuration of the
>>>>>> display connectors over the configuration set in XenStore.
>>>>>> With this change connectors may have multiple resolutions defined
>>>>>> with respect to detailed timing definitions and additional properties
>>>>>> normally provided by displays.
>>>>>>
>>>>>> If this request is not supported by the backend then visible area
>>>>>> is defined by the relevant XenStore's "resolution" property.
>>>>>>
>>>>>> If backend provides extended display identification data (EDID) with
>>>>>> XENDISPL_OP_GET_EDID request then EDID values must take precedence
>>>>>> over the resolutions defined in XenStore.
>>>>>>
>>>>>> 4. Bump protocol version to 2.
>>>>>>
>>>>>> Signed-off-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
>>>>>
>>>>> Reviewed-by: Juergen Gross <jgross@suse.com>
>>>>
>>>> Thank you, do you want me to prepare the same for the kernel so
>>>>
>>>> you have it at hand when the time comes?
>>>
>>> It should be added to the kernel only when really needed (i.e. a user of
>>> the new functionality is showing up).
>>
>> We have a patch for that which adds EDID to the existing PV DRM frontend,
>>
>> so while upstreaming those changes I will also include changes to the protocol
>>
>> in the kernel series: for that we need the header in Xen tree first, right?
>
> Yes.
>
Is it possible that we have this change in the release please?

This is not used by any piece of code in Xen, so it won't hurt anything.

But I need this change in so I can proceed with the Linux kernel part:

we would like to upstream the relevant EDID change to the display

frontend driver and without the header in Xen tree we are stuck

Thank you in advance,

Oleksandr

>
> Juergen


From xen-devel-bounces@lists.xenproject.org Wed Jul 08 09:23:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jul 2020 09:23:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jt6Ie-0003qf-A7; Wed, 08 Jul 2020 09:23:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=okql=AT=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jt6Id-0003q0-FW
 for xen-devel@lists.xenproject.org; Wed, 08 Jul 2020 09:23:31 +0000
X-Inumbo-ID: abfc295c-c0fc-11ea-b7bb-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id abfc295c-c0fc-11ea-b7bb-bc764e2007e4;
 Wed, 08 Jul 2020 09:23:23 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=Z7vLaJlPK/z1M4E4vZzEg8W63IhT4RWM0ZtTJoLeV5A=; b=Tip1Ixjfcz2SJsob+i4IVF3KI
 R2JqPmNOYmR2cAvQvFuj8NxT/+RqISEpJ4vonOXvSSsNZAWglfPvst5XUahVsRO5bhNogcOVv8Ac4
 JP5Lp6jlZfLAji2Td5VTVsH7IfU0NlBFOL7clbt8lpU6KoAWQpPjYV6TVuSwlSTlkmidk=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jt6IV-0005So-70; Wed, 08 Jul 2020 09:23:23 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jt6IU-0003kN-R0; Wed, 08 Jul 2020 09:23:22 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jt6IU-0001B9-PC; Wed, 08 Jul 2020 09:23:22 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151715-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.12-testing test] 151715: tolerable FAIL - PUSHED
X-Osstest-Failures: xen-4.12-testing:test-amd64-amd64-xl-qcow2:guest-localmigrate/x10:fail:nonblocking
 xen-4.12-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 xen-4.12-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-4.12-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 xen-4.12-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 xen-4.12-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 xen-4.12-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-4.12-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-4.12-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-4.12-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 xen-4.12-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 xen-4.12-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 xen-4.12-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 xen-4.12-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 xen-4.12-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 xen-4.12-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-4.12-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-4.12-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-4.12-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-4.12-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-4.12-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-4.12-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 xen-4.12-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 xen-4.12-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-4.12-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 xen-4.12-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 xen-4.12-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 xen-4.12-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 xen-4.12-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 xen-4.12-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 xen-4.12-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 xen-4.12-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-4.12-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 xen-4.12-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-4.12-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 xen-4.12-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 xen-4.12-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-4.12-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-4.12-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-4.12-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-4.12-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-4.12-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-4.12-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 xen-4.12-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 xen-4.12-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-4.12-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 xen-4.12-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 xen-4.12-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 xen-4.12-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 xen-4.12-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 xen-4.12-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-4.12-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
X-Osstest-Versions-This: xen=19e0bbb4eba8d781b972448ec01ede6ca7fa22cb
X-Osstest-Versions-That: xen=050fe48dc981e0488de1f6c6c07d8110f3b7523b
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 08 Jul 2020 09:23:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151715 xen-4.12-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151715/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qcow2    17 guest-localmigrate/x10       fail  like 151388
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop             fail never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop              fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop             fail never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass

version targeted for testing:
 xen                  19e0bbb4eba8d781b972448ec01ede6ca7fa22cb
baseline version:
 xen                  050fe48dc981e0488de1f6c6c07d8110f3b7523b

Last test of basis   151388  2020-06-26 18:56:29 Z   11 days
Testing same since   151715  2020-07-07 13:36:08 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   050fe48dc9..19e0bbb4eb  19e0bbb4eba8d781b972448ec01ede6ca7fa22cb -> stable-4.12


From xen-devel-bounces@lists.xenproject.org Wed Jul 08 09:59:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jul 2020 09:59:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jt6rF-0006St-DQ; Wed, 08 Jul 2020 09:59:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=okql=AT=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jt6rD-0006So-Rm
 for xen-devel@lists.xenproject.org; Wed, 08 Jul 2020 09:59:15 +0000
X-Inumbo-ID: adbcd2fa-c101-11ea-bb8b-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id adbcd2fa-c101-11ea-bb8b-bc764e2007e4;
 Wed, 08 Jul 2020 09:59:14 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=QJsgid+SQfFcIKRxEoHEZS0zVtL3U2OEnqIYnkazhjM=; b=pv3fgA4mdGAPLI72lBRrZY3s+
 BpMQzontS1sviunBLn1DFNuqK+brbgj296rcAUuJC2ueUiymXURkaZyBwQFTbjzuXqafwPAr5Ljce
 cvVfliWLF8ogcTGyWHY+4QdJzI5OkbIxC9zdOVke2UVXBv0k8Z5fp2ryWoDcDxbsAn4gA=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jt6rB-00067d-Vs; Wed, 08 Jul 2020 09:59:14 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jt6rB-0007Fh-GC; Wed, 08 Jul 2020 09:59:13 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jt6rB-0007pa-FY; Wed, 08 Jul 2020 09:59:13 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151733-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-coverity test] 151733: all pass - PUSHED
X-Osstest-Versions-This: xen=3fdc211b01b29f252166937238efe02d15cb5780
X-Osstest-Versions-That: xen=be63d9d47f571a60d70f8fb630c03871312d9655
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 08 Jul 2020 09:59:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151733 xen-unstable-coverity real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151733/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 xen                  3fdc211b01b29f252166937238efe02d15cb5780
baseline version:
 xen                  be63d9d47f571a60d70f8fb630c03871312d9655

Last test of basis   151641  2020-07-05 09:18:32 Z    3 days
Testing same since   151733  2020-07-08 09:19:56 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Wei Liu <wl@xen.org>

jobs:
 coverity-amd64                                               pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   be63d9d47f..3fdc211b01  3fdc211b01b29f252166937238efe02d15cb5780 -> coverity-tested/smoke


From xen-devel-bounces@lists.xenproject.org Wed Jul 08 10:15:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jul 2020 10:15:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jt76Z-00089G-RW; Wed, 08 Jul 2020 10:15:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ybAq=AT=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jt76Z-00089B-3U
 for xen-devel@lists.xenproject.org; Wed, 08 Jul 2020 10:15:07 +0000
X-Inumbo-ID: e43f418a-c103-11ea-8496-bc764e2007e4
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e43f418a-c103-11ea-8496-bc764e2007e4;
 Wed, 08 Jul 2020 10:15:05 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1594203305;
 h=from:to:cc:subject:date:message-id:mime-version:
 content-transfer-encoding;
 bh=EqX5HsGWso4KEfZeIdFy2NtZiVaj6fAO0QiUvpxlhqQ=;
 b=YDBXX74yMRQfUykzNOh5LD4LkRS71jzXltqrU8ztDNuxCXtfdG9npnaQ
 76fmhsn2Lqv3Pi1GB8VoP5lq3qFP5ybTGGHA9YRUaD4D2wcAdHQ7g1onE
 WQ7EPluyGyyyPDyIEPFNocLnitazRBEIX6ZyHaKP0JY0oNTkte5dD/lIY I=;
Authentication-Results: esa1.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: eWb59Zf8b6Y/+7ppjFxVF2PriIQvxQ2UMzTIrLTPf5WxgJm4zySowcz+WUflV3Mzqd2a5nH/e/
 WRtFQc9VSGIybmU1pFzJ5iNyFwSwpQo3N197/exom9v1gnNAnHbrC2VMIzdlSk76zghvWbDJfJ
 LCVhJmR/hnBDI46D3YwRCSzMYoB3hPVxiJfZvlifFRWgYYkLI8ezRnZIciLtgjJgdhzs5deupg
 8z42rSrOAl5BYaEncnpJpyC2nOWJsQnnBS4Jfo7HUpund+vWOLNQFrBiH4/zsz68rGwNVx/qbv
 1hU=
X-SBRS: 2.7
X-MesageID: 22174640
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,327,1589256000"; d="scan'208";a="22174640"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Subject: [PATCH] x86/mtrr: Drop workaround for old 32bit CPUs
Date: Wed, 8 Jul 2020 11:14:43 +0100
Message-ID: <20200708101443.27321-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Jan Beulich <JBeulich@suse.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

This logic is dead as Xen is 64bit-only these days.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Wei Liu <wl@xen.org>
CC: Roger Pau Monné <roger.pau@citrix.com>
---
 xen/arch/x86/cpu/mtrr/generic.c | 17 -----------------
 1 file changed, 17 deletions(-)

diff --git a/xen/arch/x86/cpu/mtrr/generic.c b/xen/arch/x86/cpu/mtrr/generic.c
index 89634f918f..06fa0c0420 100644
--- a/xen/arch/x86/cpu/mtrr/generic.c
+++ b/xen/arch/x86/cpu/mtrr/generic.c
@@ -570,23 +570,6 @@ int generic_validate_add_page(unsigned long base, unsigned long size, unsigned i
 {
 	unsigned long lbase, last;
 
-	/*  For Intel PPro stepping <= 7, must be 4 MiB aligned 
-	    and not touch 0x70000000->0x7003FFFF */
-	if (is_cpu(INTEL) && boot_cpu_data.x86 == 6 &&
-	    boot_cpu_data.x86_model == 1 &&
-	    boot_cpu_data.x86_mask <= 7) {
-		if (base & ((1 << (22 - PAGE_SHIFT)) - 1)) {
-			printk(KERN_WARNING "mtrr: base(%#lx000) is not 4 MiB aligned\n", base);
-			return -EINVAL;
-		}
-		if (!(base + size < 0x70000 || base > 0x7003F) &&
-		    (type == MTRR_TYPE_WRCOMB
-		     || type == MTRR_TYPE_WRBACK)) {
-			printk(KERN_WARNING "mtrr: writable mtrr between 0x70000000 and 0x7003FFFF may hang the CPU.\n");
-			return -EINVAL;
-		}
-	}
-
 	/*  Check upper bits of base and last are equal and lower bits are 0
 	    for base and 1 for last  */
 	last = base + size - 1;
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Wed Jul 08 10:49:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jul 2020 10:49:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jt7d4-0002Du-Gu; Wed, 08 Jul 2020 10:48:42 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=65hh=AT=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jt7d3-0002Do-5C
 for xen-devel@lists.xenproject.org; Wed, 08 Jul 2020 10:48:41 +0000
X-Inumbo-ID: 92dac832-c108-11ea-8e28-12813bfff9fa
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 92dac832-c108-11ea-8e28-12813bfff9fa;
 Wed, 08 Jul 2020 10:48:36 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1594205315;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:content-transfer-encoding:in-reply-to;
 bh=96w8xFOqjvMpiz+UgwWoWBOlDLJJV1ZTJ/4HdyPtyiU=;
 b=XeV+SWUUyF2p7/YaZ3vmYNsV3AXfwJN+PUlqmg+GNVaxu3NNNT/2qnf3
 MYuoSA2ZvNjA6/DJ4uz1HSkaFRPIPLjDlfb8n8ftAaMoGcMmMgselCY/4
 XTBm6XJcEx6oZQUHdK5jOxBoYAqO8wopjvpUzA2qHGt1oWtLHtvdKNwPu U=;
Authentication-Results: esa4.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: Vwgn8c+qvSq74E1FDcXRKZSnY5J0JUKdnYXTrXkjPEWjIhztddzMOf/XaHARm3PmgkcSZ4sbhA
 QBY6HAZfV+FnjQRy9Wds8rRzFss59qAgSpkMkFY540J3qO0BETMjRLlOjp7somH5FeWGRpWhXi
 2IkWoeKBC0F5xlW4dJTia9gz5mS33uUbr4sDmOu/GPP5pc96NN/9JzBsjUUtZPdPH7eijKaD6A
 tyBck01yimnWY1WIbaDnXW+N+4hUpPOukbAdHiMRy4aY1AJ4RYSFqQ/jSzzPcX8z2hSdwm5uV6
 bLI=
X-SBRS: 2.7
X-MesageID: 22684830
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,327,1589256000"; d="scan'208";a="22684830"
Date: Wed, 8 Jul 2020 12:48:26 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH] x86/mtrr: Drop workaround for old 32bit CPUs
Message-ID: <20200708104826.GB7191@Air-de-Roger>
References: <20200708101443.27321-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20200708101443.27321-1-andrew.cooper3@citrix.com>
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 Jan Beulich <JBeulich@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Wed, Jul 08, 2020 at 11:14:43AM +0100, Andrew Cooper wrote:
> This logic is dead as Xen is 64bit-only these days.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

> ---
> CC: Jan Beulich <JBeulich@suse.com>
> CC: Wei Liu <wl@xen.org>
> CC: Roger Pau Monné <roger.pau@citrix.com>
> ---
>  xen/arch/x86/cpu/mtrr/generic.c | 17 -----------------
>  1 file changed, 17 deletions(-)
> 
> diff --git a/xen/arch/x86/cpu/mtrr/generic.c b/xen/arch/x86/cpu/mtrr/generic.c
> index 89634f918f..06fa0c0420 100644
> --- a/xen/arch/x86/cpu/mtrr/generic.c
> +++ b/xen/arch/x86/cpu/mtrr/generic.c
> @@ -570,23 +570,6 @@ int generic_validate_add_page(unsigned long base, unsigned long size, unsigned i
>  {
>  	unsigned long lbase, last;
>  
> -	/*  For Intel PPro stepping <= 7, must be 4 MiB aligned 
> -	    and not touch 0x70000000->0x7003FFFF */
> -	if (is_cpu(INTEL) && boot_cpu_data.x86 == 6 &&
> -	    boot_cpu_data.x86_model == 1 &&
> -	    boot_cpu_data.x86_mask <= 7) {
> -		if (base & ((1 << (22 - PAGE_SHIFT)) - 1)) {
> -			printk(KERN_WARNING "mtrr: base(%#lx000) is not 4 MiB aligned\n", base);
> -			return -EINVAL;
> -		}
> -		if (!(base + size < 0x70000 || base > 0x7003F) &&
> -		    (type == MTRR_TYPE_WRCOMB
> -		     || type == MTRR_TYPE_WRBACK)) {
> -			printk(KERN_WARNING "mtrr: writable mtrr between 0x70000000 and 0x7003FFFF may hang the CPU.\n");
> -			return -EINVAL;
> -		}
> -	}
> -
>  	/*  Check upper bits of base and last are equal and lower bits are 0
>  	    for base and 1 for last  */
>  	last = base + size - 1;

FWIW, you could also initialize last at definition time.

Roger.


From xen-devel-bounces@lists.xenproject.org Wed Jul 08 10:53:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jul 2020 10:53:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jt7hJ-00030N-2H; Wed, 08 Jul 2020 10:53:05 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ybAq=AT=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jt7hH-00030H-Nx
 for xen-devel@lists.xenproject.org; Wed, 08 Jul 2020 10:53:03 +0000
X-Inumbo-ID: 31e595ec-c109-11ea-b7bb-bc764e2007e4
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 31e595ec-c109-11ea-b7bb-bc764e2007e4;
 Wed, 08 Jul 2020 10:53:02 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1594205582;
 h=subject:to:cc:references:from:message-id:date:
 mime-version:in-reply-to:content-transfer-encoding;
 bh=dJYE+lAWD82IJCGRw/dF/vl314HHpf6JGSZL40tuNkE=;
 b=f91+ispgLcG5y79Q6HFMw9UIxaA62lIE2u/X67DQKJH4dSMG3gvvmQ+l
 FQDNBwFq4d1ERS33wDZeowN5Z+tH+77mHyyLAUHogL0JapwXQY/lTGp7M
 93N+zlUQ2Ee8W7J5VLwFj/Ay+1H83v4pd9q90BarzVUbrNlM3qFTHF2/7 k=;
Authentication-Results: esa6.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: px4yTnfH5mWopo2kT1HqDZOMMJGbDfRLN3R+tZgPrQgSVr6awm6WgNA/6/k5qYXvzPAhRfw5fp
 B895dWBT3wALOjZEQnFPRr8zwIulKu6IGIfstja0zS0PEERvz/e+sS5McxOj1baC30wBfzd+XM
 MMSRyMAkh49mddKSTy2+AUeMzCkFGonOo0dS8c5Py9NYse3VUu47WivBvu/yYh79n6KtyZ9aEh
 lyjCxmedZO5zwILBMRYlA+9VkVLBscoP9t1EaOX/RlGAzwkHxmufUnwq1/ej2kRtCk88qX/wxv
 lAQ=
X-SBRS: 2.7
X-MesageID: 22195081
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,327,1589256000"; d="scan'208";a="22195081"
Subject: Re: [PATCH] x86/mtrr: Drop workaround for old 32bit CPUs
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <20200708101443.27321-1-andrew.cooper3@citrix.com>
 <20200708104826.GB7191@Air-de-Roger>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <12874bc4-39e8-5ed4-3893-79154a206293@citrix.com>
Date: Wed, 8 Jul 2020 11:52:57 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <20200708104826.GB7191@Air-de-Roger>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 Jan Beulich <JBeulich@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 08/07/2020 11:48, Roger Pau Monné wrote:
> On Wed, Jul 08, 2020 at 11:14:43AM +0100, Andrew Cooper wrote:
>> This logic is dead as Xen is 64bit-only these days.
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks,

>> ---
>> CC: Jan Beulich <JBeulich@suse.com>
>> CC: Wei Liu <wl@xen.org>
>> CC: Roger Pau Monné <roger.pau@citrix.com>
>> ---
>>  xen/arch/x86/cpu/mtrr/generic.c | 17 -----------------
>>  1 file changed, 17 deletions(-)
>>
>> diff --git a/xen/arch/x86/cpu/mtrr/generic.c b/xen/arch/x86/cpu/mtrr/generic.c
>> index 89634f918f..06fa0c0420 100644
>> --- a/xen/arch/x86/cpu/mtrr/generic.c
>> +++ b/xen/arch/x86/cpu/mtrr/generic.c
>> @@ -570,23 +570,6 @@ int generic_validate_add_page(unsigned long base, unsigned long size, unsigned i
>>  {
>>  	unsigned long lbase, last;
>>  
>> -	/*  For Intel PPro stepping <= 7, must be 4 MiB aligned 
>> -	    and not touch 0x70000000->0x7003FFFF */
>> -	if (is_cpu(INTEL) && boot_cpu_data.x86 == 6 &&
>> -	    boot_cpu_data.x86_model == 1 &&
>> -	    boot_cpu_data.x86_mask <= 7) {
>> -		if (base & ((1 << (22 - PAGE_SHIFT)) - 1)) {
>> -			printk(KERN_WARNING "mtrr: base(%#lx000) is not 4 MiB aligned\n", base);
>> -			return -EINVAL;
>> -		}
>> -		if (!(base + size < 0x70000 || base > 0x7003F) &&
>> -		    (type == MTRR_TYPE_WRCOMB
>> -		     || type == MTRR_TYPE_WRBACK)) {
>> -			printk(KERN_WARNING "mtrr: writable mtrr between 0x70000000 and 0x7003FFFF may hang the CPU.\n");
>> -			return -EINVAL;
>> -		}
>> -	}
>> -
>>  	/*  Check upper bits of base and last are equal and lower bits are 0
>>  	    for base and 1 for last  */
>>  	last = base + size - 1;
> FWIW, you could also initialize last at definition time.

I've got some very different cleanup in mind for that code, seeing as it
can be simplified to a single test expression.

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed Jul 08 11:17:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jul 2020 11:17:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jt84S-0004n9-2I; Wed, 08 Jul 2020 11:17:00 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=65hh=AT=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jt84Q-0004n4-P5
 for xen-devel@lists.xenproject.org; Wed, 08 Jul 2020 11:16:58 +0000
X-Inumbo-ID: 890ca024-c10c-11ea-8e29-12813bfff9fa
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 890ca024-c10c-11ea-8e29-12813bfff9fa;
 Wed, 08 Jul 2020 11:16:57 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1594207017;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:content-transfer-encoding:in-reply-to;
 bh=bojDQ70VGgpEjsSAvhiVonk1IPlVtB8NqHG4JqvRoA8=;
 b=co+peH64vBQbk9Ne7UNpCl5I72Aq5xVIBswb/BoqCB0VhRtPl/U08WNF
 d+JcVzkwkY8JvJhpytcaDAWvkY2sZ4QcS6FmdWCt2k8tNhyL0eArkfcz3
 L4/VGws3sw1yvzlgDS+pJKe39J4F3TmN2TRLnIq692oyIAZwtxHss0Ket o=;
Authentication-Results: esa6.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: Y7LAl79xRXjtB7ODpVYEuFtpRq1EBz2Li8v/eqx7wuPkp9H/OOtOUb9ODvKCSfb56Rbbvjp1TI
 SYpncIND7tvjagbcxtS9kxWDNjkpocNgNptfJ1FQn2hHIp+XgQopyq6xZliXvCxgi5sVFC29LH
 T7BvnFerjdObrIAJPZpvGBHCgkJcHFt/RN1PzlFPjYdCRrpw8RrHlAr3447wrndc72Z5IwdkcQ
 1armTpImHl5yBv7+qL4sDzmPGzqd+etoFktYkt0R3XfXMOP7dCFj+SxM0dlYw3gjY73FH36yyl
 rmY=
X-SBRS: 2.7
X-MesageID: 22197937
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,327,1589256000"; d="scan'208";a="22197937"
Date: Wed, 8 Jul 2020 13:16:47 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH] x86/mtrr: Drop workaround for old 32bit CPUs
Message-ID: <20200708111647.GC7191@Air-de-Roger>
References: <20200708101443.27321-1-andrew.cooper3@citrix.com>
 <20200708104826.GB7191@Air-de-Roger>
 <12874bc4-39e8-5ed4-3893-79154a206293@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <12874bc4-39e8-5ed4-3893-79154a206293@citrix.com>
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 Jan Beulich <JBeulich@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Wed, Jul 08, 2020 at 11:52:57AM +0100, Andrew Cooper wrote:
> On 08/07/2020 11:48, Roger Pau Monné wrote:
> > On Wed, Jul 08, 2020 at 11:14:43AM +0100, Andrew Cooper wrote:
> >>  	/*  Check upper bits of base and last are equal and lower bits are 0
> >>  	    for base and 1 for last  */
> >>  	last = base + size - 1;
> > FWIW, you could also initialize last at definition time.
> 
> I've got some very different cleanup in mind for that code, seeing as it
> can be simplified to a single test expression.

Oh, I certainly didn't look at it that much :).

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Wed Jul 08 11:22:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jul 2020 11:22:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jt89Z-0005ab-Mx; Wed, 08 Jul 2020 11:22:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=lA1A=AT=yadro.com=r.bolshakov@srs-us1.protection.inumbo.net>)
 id 1jt840-0004lU-Ck
 for xen-devel@lists.xenproject.org; Wed, 08 Jul 2020 11:16:32 +0000
X-Inumbo-ID: 792ac01e-c10c-11ea-bb8b-bc764e2007e4
Received: from mta-01.yadro.com (unknown [89.207.88.252])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 792ac01e-c10c-11ea-bb8b-bc764e2007e4;
 Wed, 08 Jul 2020 11:16:31 +0000 (UTC)
Received: from localhost (unknown [127.0.0.1])
 by mta-01.yadro.com (Postfix) with ESMTP id F07E74C895;
 Wed,  8 Jul 2020 11:16:29 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=yadro.com; h=
 in-reply-to:content-disposition:content-type:content-type
 :mime-version:references:message-id:subject:subject:from:from
 :date:date:received:received:received; s=mta-01; t=1594206989;
 x=1596021390; bh=IZP5UhMNqaSvqAnZInLSw7aMUyR9UHa4g7mz2kaJfkI=; b=
 LwlLD4gqzKhmjDg+9WVU7o3CgUCt71MSdjCZREhDV0oSr58YKSEbnCG0ZNKD2k5K
 K3tD35qHsix1ygABezMpCKHuFTf4V8AvRtKhcuyU/eAfNtdtkJ65uJTKxajbTCNI
 E8KgP0bcTL3+zm7qxjuC99ogd006+TMn84nN/V+WM88=
X-Virus-Scanned: amavisd-new at yadro.com
Received: from mta-01.yadro.com ([127.0.0.1])
 by localhost (mta-01.yadro.com [127.0.0.1]) (amavisd-new, port 10024)
 with ESMTP id PGMWieCgyhvF; Wed,  8 Jul 2020 14:16:29 +0300 (MSK)
Received: from T-EXCH-02.corp.yadro.com (t-exch-02.corp.yadro.com
 [172.17.10.102])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-SHA384 (256/256 bits))
 (No client certificate requested)
 by mta-01.yadro.com (Postfix) with ESMTPS id A617C4C889;
 Wed,  8 Jul 2020 14:16:25 +0300 (MSK)
Received: from localhost (172.17.204.212) by T-EXCH-02.corp.yadro.com
 (172.17.10.102) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384_P384) id 15.1.669.32; Wed, 8 Jul
 2020 14:16:25 +0300
Date: Wed, 8 Jul 2020 14:16:24 +0300
From: Roman Bolshakov <r.bolshakov@yadro.com>
To: Christophe de Dinechin <dinechin@redhat.com>
Subject: Re: [PATCH] trivial: Remove trailing whitespaces
Message-ID: <20200708111624.GA29018@SPB-NB-133.local>
References: <20200706162300.1084753-1-dinechin@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <20200706162300.1084753-1-dinechin@redhat.com>
X-Originating-IP: [172.17.204.212]
X-ClientProxiedBy: T-EXCH-01.corp.yadro.com (172.17.10.101) To
 T-EXCH-02.corp.yadro.com (172.17.10.102)
X-Mailman-Approved-At: Wed, 08 Jul 2020 11:22:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Peter Maydell <peter.maydell@linaro.org>,
 Dmitry Fleytman <dmitry.fleytman@gmail.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>, qemu-devel@nongnu.org,
 Michael Roth <mdroth@linux.vnet.ibm.com>, Max Filippov <jcmvbkbc@gmail.com>,
 Gerd Hoffmann <kraxel@redhat.com>,
 "Edgar E. Iglesias" <edgar.iglesias@gmail.com>, Max Reitz <mreitz@redhat.com>,
 Marek Vasut <marex@denx.de>, Stefano Stabellini <sstabellini@kernel.org>,
 qemu-block@nongnu.org, qemu-trivial@nongnu.org, Paul Durrant <paul@xen.org>,
 Magnus Damm <magnus.damm@gmail.com>, Markus Armbruster <armbru@redhat.com>,
 =?iso-8859-1?Q?Herv=E9?= Poussineau <hpoussin@reactos.org>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 Anthony Perard <anthony.perard@citrix.com>,
 =?iso-8859-1?Q?Marc-Andr=E9?= Lureau <marcandre.lureau@redhat.com>,
 Richard Henderson <rth@twiddle.net>, Andrzej Zaborowski <balrogg@gmail.com>,
 Artyom Tarasenko <atar4qemu@gmail.com>,
 Alistair Francis <alistair@alistair23.me>,
 Eduardo Habkost <ehabkost@redhat.com>, Michael Tokarev <mjt@tls.msk.ru>,
 Riku Voipio <riku.voipio@iki.fi>, Peter Lieven <pl@kamp.de>,
 "Dr. David Alan Gilbert" <dgilbert@redhat.com>, qemu-arm@nongnu.org,
 Peter Chubb <peter.chubb@nicta.com.au>,
 Ronnie Sahlberg <ronniesahlberg@gmail.com>, xen-devel@lists.xenproject.org,
 Alex =?iso-8859-1?Q?Benn=E9e?= <alex.bennee@linaro.org>,
 David Gibson <david@gibson.dropbear.id.au>, Kevin Wolf <kwolf@redhat.com>,
 Daniel =?iso-8859-1?Q?P=2E_Berrang=E9?= <berrange@redhat.com>,
 Yoshinori Sato <ysato@users.sourceforge.jp>,
 Philippe =?iso-8859-1?Q?Mathieu-Daud=E9?= <philmd@redhat.com>,
 Chris Wulff <crwulff@gmail.com>, Laurent Vivier <laurent@vivier.eu>,
 Jean-Christophe Dubois <jcd@tribudubois.net>, qemu-ppc@nongnu.org,
 Paolo Bonzini <pbonzini@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Mon, Jul 06, 2020 at 06:23:00PM +0200, Christophe de Dinechin wrote:
> There are a number of unnecessary trailing whitespaces that have
> accumulated over time in the source code. They cause stray changes
> in patches if you use tools that automatically remove them.
> 
> Tested by doing a `git diff -w` after the change.
> 
> This could probably be turned into a pre-commit hook.
> 

For HVF bits,

Reviewed-by: Roman Bolshakov <r.bolshakov@yadro.com>

Thanks,
Roman


From xen-devel-bounces@lists.xenproject.org Wed Jul 08 11:31:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jul 2020 11:31:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jt8IB-0006Ts-Jz; Wed, 08 Jul 2020 11:31:11 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=okql=AT=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jt8IA-0006TY-Bz
 for xen-devel@lists.xenproject.org; Wed, 08 Jul 2020 11:31:10 +0000
X-Inumbo-ID: 80e32bbe-c10e-11ea-8496-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 80e32bbe-c10e-11ea-8496-bc764e2007e4;
 Wed, 08 Jul 2020 11:31:02 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=swFJcA061/g9KiIBe5it1CXv/JrjRUltQloULrW/bFQ=; b=irZk4eDZe0HtCHRlojLK+iZR3
 Btwaw/jKNNArFo9AzP5aCwra/qitPB/Qg159gaPIbnANnwevp8QiLf/476u4zZ10liWOea1ZJxaeG
 RzV0T2PJs+rXQHQZvD8CED3OoCniIKO+pKbxGVy6yC0B9mbxCAPiB2dWKu0IWMNLcLMzQ=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jt8I2-0007ux-7Q; Wed, 08 Jul 2020 11:31:02 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jt8I1-0003Zq-Up; Wed, 08 Jul 2020 11:31:01 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jt8I1-00056C-TA; Wed, 08 Jul 2020 11:31:01 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151716-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.9-testing test] 151716: regressions - trouble:
 fail/pass/starved
X-Osstest-Failures: xen-4.9-testing:test-amd64-i386-xl-qemut-debianhvm-amd64:guest-saverestore.2:fail:regression
 xen-4.9-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-saverestore.2:fail:regression
 xen-4.9-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-4.9-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-4.9-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-localmigrate/x10:fail:nonblocking
 xen-4.9-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-4.9-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-localmigrate/x10:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
 xen-4.9-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-localmigrate/x10:fail:nonblocking
 xen-4.9-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-4.9-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 xen-4.9-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-4.9-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-4.9-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-4.9-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 xen-4.9-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 xen-4.9-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 xen-4.9-testing:test-arm64-arm64-xl-thunderx:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This: xen=4597fc97b3b8870c39214e3aa4132ab711a40691
X-Osstest-Versions-That: xen=6e477c2ea4d5c26a7a7b2f850166aa79edc5225c
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 08 Jul 2020 11:31:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151716 xen-4.9-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151716/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemut-debianhvm-amd64 15 guest-saverestore.2 fail REGR. vs. 151223
 test-amd64-amd64-xl-qemuu-ws16-amd64 15 guest-saverestore.2 fail REGR. vs. 151223

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop      fail blocked in 151223
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 151223
 test-amd64-amd64-xl-qemut-win7-amd64 16 guest-localmigrate/x10 fail like 151223
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 151223
 test-amd64-amd64-xl-qemuu-win7-amd64 16 guest-localmigrate/x10 fail like 151223
 test-armhf-armhf-xl-rtds     16 guest-start/debian.repeat    fail  like 151223
 test-amd64-i386-xl-qemut-ws16-amd64 16 guest-localmigrate/x10 fail like 151223
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop             fail like 151223
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx  2 hosts-allocate               starved  n/a

version targeted for testing:
 xen                  4597fc97b3b8870c39214e3aa4132ab711a40691
baseline version:
 xen                  6e477c2ea4d5c26a7a7b2f850166aa79edc5225c

Last test of basis   151223  2020-06-18 15:48:44 Z   19 days
Testing same since   151716  2020-07-07 14:05:23 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 starved 
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 4597fc97b3b8870c39214e3aa4132ab711a40691
Author: Julien Grall <jgrall@amazon.com>
Date:   Tue Jul 7 15:43:59 2020 +0200

    xen: Check the alignment of the offset passed via VCPUOP_register_vcpu_info
    
    Currently a guest is able to register any guest physical address to use
    for the vcpu_info structure as long as the structure can fit in the
    rest of the frame.
    
    This means a guest can provide an address that is not aligned to the
    natural alignment of the structure.
    
    On Arm 32-bit, unaligned accesses are completely forbidden by the
    hypervisor. This will result in a data abort, which is fatal.
    
    On Arm 64-bit, unaligned accesses are only forbidden when used for atomic
    access. As the structure contains fields (such as evtchn_pending_sel)
    that are updated using atomic operations, any unaligned access will be
    fatal as well.
    
    While the misalignment is only fatal on Arm, a generic check is added
    as an x86 guest shouldn't sensibly pass an unaligned address (this
    would result in a split lock).
    
    This is XSA-327.
    
    Reported-by: Julien Grall <jgrall@amazon.com>
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
    master commit: 3fdc211b01b29f252166937238efe02d15cb5780
    master date: 2020-07-07 14:41:00 +0200

commit a852040fe3ab6658ab0dd4fa8f86f50db9d08874
Author: Roger Pau Monné <roger.pau@citrix.com>
Date:   Tue Jul 7 15:43:29 2020 +0200

    x86/ept: flush cache when modifying PTEs and sharing page tables
    
    Modifications made to the page tables by EPT code need to be written
    to memory when the page tables are shared with the IOMMU, as Intel
    IOMMUs can be non-coherent and thus require changes to be written to
    memory in order to be visible to the IOMMU.
    
    In order to achieve this, make sure data is written back to memory
    after writing an EPT entry when the recalc bit is not set in
    atomic_write_ept_entry. If that bit is set, the entry will be
    adjusted and atomic_write_ept_entry will be called a second time
    without the recalc bit set. Note that when splitting a super page the
    new tables resulting from the split should also be written back.
    
    Failure to do so can allow devices behind the IOMMU access to the
    stale super page, or cause coherency issues as changes made by the
    processor to the page tables are not visible to the IOMMU.
    
    This allows removing the VT-d specific iommu_pte_flush helper, since
    the cache write back is now performed by atomic_write_ept_entry, and
    hence iommu_iotlb_flush can be used to flush the IOMMU TLB. The newly
    used method (iommu_iotlb_flush) can result in fewer flushes, since it
    might sometimes be called rightly with 0 flags, in which case it
    becomes a no-op.
    
    This is part of XSA-321.
    
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    master commit: c23274fd0412381bd75068ebc9f8f8c90a4be748
    master date: 2020-07-07 14:40:11 +0200

commit 3c9a98410be01236f1de1ad171885552fae5397a
Author: Roger Pau Monné <roger.pau@citrix.com>
Date:   Tue Jul 7 15:42:59 2020 +0200

    vtd: optimize CPU cache sync
    
    Some VT-d IOMMUs are non-coherent, which requires a cache write back
    in order for the changes made by the CPU to be visible to the IOMMU.
    This cache write back was unconditionally done using clflush, but there are
    other more efficient instructions to do so, hence implement support
    for them using the alternative framework.
    
    This is part of XSA-321.
    
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    master commit: a64ea16522a73a13a0d66cfa4b66a9d3b95dd9d6
    master date: 2020-07-07 14:39:54 +0200

commit 46d6a0721405149ace209cdf23fa009fd366dbf2
Author: Roger Pau Monné <roger.pau@citrix.com>
Date:   Tue Jul 7 15:42:34 2020 +0200

    x86/alternative: introduce alternative_2
    
    It's based on alternative_io_2 without inputs or outputs but with an
    added memory clobber.
    
    This is part of XSA-321.
    
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>
    master commit: 23570bce00ee6ba2139ece978ab6f03ff166e21d
    master date: 2020-07-07 14:39:25 +0200

commit 83917016f87ca5feeb32f2fdcae32e560bb8d283
Author: Roger Pau Monné <roger.pau@citrix.com>
Date:   Tue Jul 7 15:42:15 2020 +0200

    vtd: don't assume addresses are aligned in sync_cache
    
    Current code in sync_cache assumes that the address passed in is
    aligned to a cache line size. Fix the code to support passing in
    arbitrary addresses not necessarily aligned to a cache line size.
    
    This is part of XSA-321.
    
    Reported-by: Jan Beulich <jbeulich@suse.com>
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    master commit: b6d9398144f21718d25daaf8d72669a75592abc5
    master date: 2020-07-07 14:39:05 +0200

commit 1c51a292788e3e006dd9b14ec987c5da662d4a50
Author: Roger Pau Monné <roger.pau@citrix.com>
Date:   Tue Jul 7 15:41:49 2020 +0200

    x86/iommu: introduce a cache sync hook
    
    The hook is only implemented for VT-d and it uses the already existing
    iommu_sync_cache function present in VT-d code. The new hook is
    added so that the cache can be flushed by code outside of VT-d when
    using shared page tables.
    
    Note that alloc_pgtable_maddr must use the now locally defined
    sync_cache function, because IOMMU ops are not yet set up the first
    time the function gets called during IOMMU initialization.
    
    No functional change intended.
    
    This is part of XSA-321.
    
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    master commit: 91526b460e5009fc56edbd6809e66c327281faba
    master date: 2020-07-07 14:38:34 +0200

commit 7338b3371c0535944346fb3336a5e28b5c080658
Author: Roger Pau Monné <roger.pau@citrix.com>
Date:   Tue Jul 7 15:41:18 2020 +0200

    vtd: prune (and rename) cache flush functions
    
    Rename __iommu_flush_cache to iommu_sync_cache and remove
    iommu_flush_cache_page. Also remove the iommu_flush_cache_entry
    wrapper and just use iommu_sync_cache instead. Note the _entry suffix
    was meaningless as the wrapper was already taking a size parameter in
    bytes. While there also constify the addr parameter.
    
    No functional change intended.
    
    This is part of XSA-321.
    
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    master commit: 62298825b9a44f45761acbd758138b5ba059ebd1
    master date: 2020-07-07 14:38:13 +0200

commit 6fe2c30d483c7c02db1d517edd1c708f81e62bd9
Author: Jan Beulich <jbeulich@suse.com>
Date:   Tue Jul 7 15:40:56 2020 +0200

    vtd: improve IOMMU TLB flush
    
    Do not limit PSI flushes to order 0 pages, in order to avoid doing a
    full TLB flush if the passed in page has an order greater than 0 and
    is aligned. Should increase the performance of IOMMU TLB flushes when
    dealing with page orders greater than 0.
    
    This is part of XSA-321.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
    master commit: 5fe515a0fede07543f2a3b049167b1fd8b873caf
    master date: 2020-07-07 14:37:46 +0200

commit 6ee71c98e35054045d47e65cb10587b0f60cae52
Author: Roger Pau Monné <roger.pau@citrix.com>
Date:   Tue Jul 7 15:40:33 2020 +0200

    x86/ept: atomically modify entries in ept_next_level
    
    ept_next_level was passing a live PTE pointer to ept_set_middle_entry,
    which was then modified without taking into account that the PTE could
    be part of a live EPT table. This wasn't a security issue because the
    pages returned by p2m_alloc_ptp are zeroed, so adding such an entry
    before actually initializing it didn't allow a guest to access
    physical memory addresses it wasn't supposed to access.
    
    This is part of XSA-328.
    
    Reported-by: Jan Beulich <jbeulich@suse.com>
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    master commit: bc3d9f95d661372b059a5539ae6cb1e79435bb95
    master date: 2020-07-07 14:37:12 +0200

commit 098d95995564f38f5415dd7b30096785db9e2337
Author: Jan Beulich <jbeulich@suse.com>
Date:   Tue Jul 7 15:40:11 2020 +0200

    x86/EPT: ept_set_middle_entry() related adjustments
    
    ept_split_super_page() wants to further modify the newly allocated
    table, so have ept_set_middle_entry() return the mapped pointer rather
    than tearing it down and then getting re-established right again.
    
    Similarly ept_next_level() wants to hand back a mapped pointer of
    the next level page, so re-use the one established by
    ept_set_middle_entry() in case that path was taken.
    
    Pull the setting of suppress_ve ahead of insertion into the higher level
    table, and don't have ept_split_super_page() set the field a 2nd time.
    
    This is part of XSA-328.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
    master commit: 1104288186ee73a7f9bfa41cbaa5bb7611521028
    master date: 2020-07-07 14:36:52 +0200

commit 715453066082072b38eaa754840c558e7a9edf88
Author: Jan Beulich <jbeulich@suse.com>
Date:   Tue Jul 7 15:39:47 2020 +0200

    x86/shadow: correct an inverted conditional in dirty VRAM tracking
    
    This originally was "mfn_x(mfn) == INVALID_MFN". Make it like this
    again, taking the opportunity to also drop the unnecessary nearby
    braces.
    
    This is XSA-319.
    
    Fixes: 246a5a3377c2 ("xen: Use a typesafe to define INVALID_MFN")
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
    master commit: 23a216f99d40fbfbc2318ade89d8213eea6ba1f8
    master date: 2020-07-07 14:36:24 +0200
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Wed Jul 08 11:35:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jul 2020 11:35:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jt8MQ-0006eL-95; Wed, 08 Jul 2020 11:35:34 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=4YfP=AT=redhat.com=berrange@srs-us1.protection.inumbo.net>)
 id 1jt8MO-0006eG-K9
 for xen-devel@lists.xenproject.org; Wed, 08 Jul 2020 11:35:32 +0000
X-Inumbo-ID: 210d60d2-c10f-11ea-8e2a-12813bfff9fa
Received: from us-smtp-1.mimecast.com (unknown [207.211.31.81])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 210d60d2-c10f-11ea-8e2a-12813bfff9fa;
 Wed, 08 Jul 2020 11:35:31 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
 s=mimecast20190719; t=1594208131;
 h=from:from:reply-to:reply-to:subject:subject:date:date:
 message-id:message-id:to:to:cc:cc:mime-version:mime-version:
 content-type:content-type:in-reply-to:in-reply-to:  references:references;
 bh=8Z2Gh8iDn/pnjTwj/tk9pg091TYfdl6bfJD17+GLrn4=;
 b=CkApSgbPAXg7DhbpCbfPLg9fcutA+UwuWT10Xd1HV0jq/iI0tfvuuQZ9vYkPVJjhZ8WMzy
 oGL+q7KEGgPu9H92r7WIfXLB+N3QhTOOQotL39rtWEjA+mLTesJfshrLiD+RTqqqaY1VbE
 7Cdw5PjuhX3+bbqc1IvxsjSGCLGRVqs=
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-141-wJIGFVQANCeLH0FuHYQ-RQ-1; Wed, 08 Jul 2020 07:35:16 -0400
X-MC-Unique: wJIGFVQANCeLH0FuHYQ-RQ-1
Received: from smtp.corp.redhat.com (int-mx02.intmail.prod.int.phx2.redhat.com
 [10.5.11.12])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id EFD221940920;
 Wed,  8 Jul 2020 11:35:10 +0000 (UTC)
Received: from redhat.com (unknown [10.36.110.36])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id 8423660E1C;
 Wed,  8 Jul 2020 11:34:39 +0000 (UTC)
Date: Wed, 8 Jul 2020 12:34:36 +0100
From: Daniel =?utf-8?B?UC4gQmVycmFuZ8Op?= <berrange@redhat.com>
To: qemu-devel@nongnu.org
Subject: Re: [PATCH] trivial: Remove trailing whitespaces
Message-ID: <20200708113436.GG3229307@redhat.com>
References: <20200706162300.1084753-1-dinechin@redhat.com>
 <159405307662.7847.17757844911728214859@d1fd068a5071>
MIME-Version: 1.0
In-Reply-To: <159405307662.7847.17757844911728214859@d1fd068a5071>
User-Agent: Mutt/1.14.3 (2020-06-14)
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.12
Authentication-Results: relay.mimecast.com;
 auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=berrange@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Reply-To: Daniel =?utf-8?B?UC4gQmVycmFuZ8Op?= <berrange@redhat.com>
Cc: peter.maydell@linaro.org, dmitry.fleytman@gmail.com, mst@redhat.com,
 jasowang@redhat.com, mark.cave-ayland@ilande.co.uk, armbru@redhat.com,
 jcmvbkbc@gmail.com, kraxel@redhat.com, edgar.iglesias@gmail.com,
 jcd@tribudubois.net, marex@denx.de, sstabellini@kernel.org,
 qemu-block@nongnu.org, qemu-trivial@nongnu.org, paul@xen.org,
 magnus.damm@gmail.com, mdroth@linux.vnet.ibm.com, hpoussin@reactos.org,
 anthony.perard@citrix.com, marcandre.lureau@redhat.com,
 david@gibson.dropbear.id.au, philmd@redhat.com, atar4qemu@gmail.com,
 riku.voipio@iki.fi, ehabkost@redhat.com, mjt@tls.msk.ru,
 alistair@alistair23.me, pl@kamp.de, dgilbert@redhat.com, r.bolshakov@yadro.com,
 qemu-arm@nongnu.org, peter.chubb@nicta.com.au, ronniesahlberg@gmail.com,
 xen-devel@lists.xenproject.org, alex.bennee@linaro.org, rth@twiddle.net,
 kwolf@redhat.com, ysato@users.sourceforge.jp, crwulff@gmail.com,
 laurent@vivier.eu, mreitz@redhat.com, qemu-ppc@nongnu.org, dinechin@redhat.com,
 pbonzini@redhat.com
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Mon, Jul 06, 2020 at 09:31:21AM -0700, no-reply@patchew.org wrote:
> Patchew URL: https://patchew.org/QEMU/20200706162300.1084753-1-dinechin@redhat.com/
> 
> 
> 
> Hi,
> 
> This series seems to have some coding style problems. See output below for
> more information:
> 
> Subject: [PATCH] trivial: Remove trailing whitespaces
> Type: series
> Message-id: 20200706162300.1084753-1-dinechin@redhat.com
> 
> === TEST SCRIPT BEGIN ===
> #!/bin/bash
> git rev-parse base > /dev/null || exit 0
> git config --local diff.renamelimit 0
> git config --local diff.renames True
> git config --local diff.algorithm histogram
> ./scripts/checkpatch.pl --mailback base..
> === TEST SCRIPT END ===
> 
> From https://github.com/patchew-project/qemu
>  * [new tag]         patchew/20200706162300.1084753-1-dinechin@redhat.com -> patchew/20200706162300.1084753-1-dinechin@redhat.com
> Switched to a new branch 'test'
> 9af3e90 trivial: Remove trailing whitespaces
> 
> === OUTPUT BEGIN ===

snip

> ERROR: trailing whitespace
> #2395: FILE: target/sh4/op_helper.c:149:
> +^I$

One case of trailing whitespace missed.


Regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|



From xen-devel-bounces@lists.xenproject.org Wed Jul 08 12:56:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jul 2020 12:56:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jt9c5-0004lw-SA; Wed, 08 Jul 2020 12:55:49 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=LfZR=AT=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1jt9c4-0004lr-7x
 for xen-devel@lists.xenproject.org; Wed, 08 Jul 2020 12:55:48 +0000
X-Inumbo-ID: 57269b88-c11a-11ea-8e35-12813bfff9fa
Received: from EUR05-VI1-obe.outbound.protection.outlook.com (unknown
 [40.107.21.57]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 57269b88-c11a-11ea-8e35-12813bfff9fa;
 Wed, 08 Jul 2020 12:55:47 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com; 
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=7O0pWyJHPvi8nQWXoAC5491PgIgYx6Koi38WUGnJONA=;
 b=PY/NIA7P+mGZ/MYsbtHOS0FwgVsWjCxdSRYpEZ64oGQzufRH5Kmks37X3hH0yDx9RXELDx1DZNOr5gcEbdh6lOzyVeUrIJ50kidc8kEuXdmbK5WGcoTx10szKDyKxw4SGWkk6PnCpnqmDTwB5N7a7oDQkmsCKG9VjlY3/a1JgwE=
Received: from AM5PR0201CA0024.eurprd02.prod.outlook.com
 (2603:10a6:203:3d::34) by AM5PR0801MB2017.eurprd08.prod.outlook.com
 (2603:10a6:203:42::22) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3153.27; Wed, 8 Jul
 2020 12:55:45 +0000
Received: from AM5EUR03FT025.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:203:3d:cafe::47) by AM5PR0201CA0024.outlook.office365.com
 (2603:10a6:203:3d::34) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3174.21 via Frontend
 Transport; Wed, 8 Jul 2020 12:55:45 +0000
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org;
 dmarc=bestguesspass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT025.mail.protection.outlook.com (10.152.16.157) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3174.21 via Frontend Transport; Wed, 8 Jul 2020 12:55:44 +0000
Received: ("Tessian outbound f7489b7e84a7:v62");
 Wed, 08 Jul 2020 12:55:44 +0000
X-CheckRecipientChecked: true
X-CR-MTA-CID: 8a6a04dc7f605275
X-CR-MTA-TID: 64aa7808
Received: from 9283b3c3f04b.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 0FD76029-7820-4A82-9E8B-6428A928BDE8.1; 
 Wed, 08 Jul 2020 12:55:37 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 9283b3c3f04b.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 08 Jul 2020 12:55:37 +0000
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=kpNeAywT9SXiqyv8+hUA+CUOPetR+aF8BY6FJ7yca8nu5c0JLoXMZ/zaiKiChjbMUxFW+Vvt+jqrWgXAcDGhCLGdPaUFeJdiSmXW+wJD1C5C+LNv62VoLUGsvZObUlHpMOCm4ppGOvFgm6XeipghISOtpUhpXNkY/tLG3wfJtHW9am3uHAYO0DZK3LTitYEr+Bo0IG9IBWsMYlIpbQPTLYwHSeDYEk+JBUCrPtlCJy0sWMaP8R1U3sj5iiM8x0hG+yVz8UPdYo+5muYrinzW3G81l2nip53x5Wz5iNsjMmHH2l/suMPMKBkV9ZwLiuRRuZVpuxbYB55BMR/CYSbfyQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; 
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=7O0pWyJHPvi8nQWXoAC5491PgIgYx6Koi38WUGnJONA=;
 b=eHQD7tbX6ONuBkzDVGblzBi6c1xKKRZXLO6iQ4qUw99Qw2XMl3UnSFnnhSHrVr0+YOb2hdt6whfyW3UTSIKOmbC2EAyVxWHyup+ugrY2TuPWHh4d82BD/3JpHp4P7qlbk2VQ9WhT+1tOcZA574CwQtgqACjEZABdnPm8A6cjhGT+vgxLJxMVkX8SLBoy9FRaMkQq7rl0x++SuMn5PjM8iAuXZnJqs8TF71CtdcV7DHjU7lnXQuMsejX9C/KMrNKANhhAv4SgWjYrL2LdcRYMQseJB2vMrQzUB6RHFAnyK9hPBj3OZ5YPQN6rzqVEA7QA7gW3CJLBuqLimYT/SgkKyg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com; 
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=7O0pWyJHPvi8nQWXoAC5491PgIgYx6Koi38WUGnJONA=;
 b=PY/NIA7P+mGZ/MYsbtHOS0FwgVsWjCxdSRYpEZ64oGQzufRH5Kmks37X3hH0yDx9RXELDx1DZNOr5gcEbdh6lOzyVeUrIJ50kidc8kEuXdmbK5WGcoTx10szKDyKxw4SGWkk6PnCpnqmDTwB5N7a7oDQkmsCKG9VjlY3/a1JgwE=
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DB8PR08MB5370.eurprd08.prod.outlook.com (2603:10a6:10:112::16) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3153.21; Wed, 8 Jul
 2020 12:55:36 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::4001:43ad:d113:46a8]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::4001:43ad:d113:46a8%5]) with mapi id 15.20.3174.021; Wed, 8 Jul 2020
 12:55:36 +0000
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Subject: PCI passthrough on arm Design Session MoM
Thread-Topic: PCI passthrough on arm Design Session MoM
Thread-Index: AQHWVScSyjWrTj065kev9PFLOqlwvQ==
Date: Wed, 8 Jul 2020 12:55:36 +0000
Message-ID: <4E0A40D3-2979-4A91-9376-C2B19B9F582E@arm.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
Authentication-Results-Original: lists.xenproject.org; dkim=none (message not
 signed) header.d=none; lists.xenproject.org;
 dmarc=none action=none header.from=arm.com;
x-originating-ip: [217.140.99.251]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 1f7caf64-a57a-483a-8cd9-08d8233e3a5b
x-ms-traffictypediagnostic: DB8PR08MB5370:|AM5PR0801MB2017:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS: <AM5PR0801MB2017AF39344CBDEC061F77AB9D670@AM5PR0801MB2017.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:10000;OLM:10000;
x-forefront-prvs: 04583CED1A
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original: ODSgBwEy0l3nzBWqW/PoizaqQ0RO9Pvzei2iLsvz9Mbd1o9DPT4sb/iQCRQCkqK3/9lta0spmBXrMrKYIdgD5bW/ZTcp6G4Q1YxDt1dqH0bWJTaTa4dS0FFzmxXClzIr/XwuLS2c2xUJIAZLnJ45X3HaZlJ2/gVuRdnixHCfI/5eJlv+uAILKO6ZfIxae/Zp16zg33BQ0SNu3gVw5c90PHvqdByKprN6fo0lW9sMDWkcCJwUaEYWSvb7fC0Pvw8Qal5hSOZAHFGgh56RQ55CBwLhOG40O2nw51RSxYb/UpDlYE1CG/n7Rd8KCYLyvew0knpFyuOmUAh3mIWSORbcug==
X-Forefront-Antispam-Report-Untrusted: CIP:255.255.255.255; CTRY:; LANG:en;
 SCL:1; SRV:; IPV:NLI; SFV:NSPM; H:DB7PR08MB3689.eurprd08.prod.outlook.com;
 PTR:; CAT:NONE; SFTY:;
 SFS:(4636009)(376002)(136003)(346002)(396003)(39860400002)(366004)(8936002)(4326008)(54906003)(186003)(66476007)(66556008)(64756008)(66446008)(66946007)(6916009)(2906002)(83380400001)(6512007)(33656002)(36756003)(8676002)(71200400001)(76116006)(91956017)(5660300002)(6486002)(478600001)(86362001)(6506007)(316002)(26005)(2616005);
 DIR:OUT; SFP:1101; 
x-ms-exchange-antispam-messagedata: IX59vcc3iPOtqpalKUwXUO+DwRZ/YyZf9PRTPRNioNPzc04/lZXCn6JyUPHycQSKbtRottCWvFCrC2kGM9IAvCHWEV7gqbcsPwziO2UU/b5GR9qPRdsjDm2N56TUbznrlG9kvo+C6sDWk+GWvm/uTi5uVbSV/5dscwEI8IQ5e3wV02QiRqmc0r4LGIW+5kLh9gQ51LoIPPGx+P1gmWzxino5L/N8mMsichkAy0ozZup/N7oEd3SWEdnmIfEGEnBVlpIrGqKxJMQSppyZfHKKgXE78L7TmcFpc7ttN0dd4uihI89GMQPODrlw7jdsubqJU/M1whE37/RqH8YVb3WdwScyZAhr1OBqh54LkMlXw77QjsJhsEGRbRQIMonBFp5WwB0mc7fEYX4waKj+FtwZAw34zehAC6gbbuApAPA7PW1xbhdKRfWGW6hueHtmJ+eY2J//fKqXWg6Y5O0maq6NChT7JcAJiEVyvCQz1kjvAoG2nN+F/CoPeQa7Bl8wlSyN
Content-Type: text/plain; charset="us-ascii"
Content-ID: <950234224D229F41A0B42F0ACBF90DE4@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB8PR08MB5370
Original-Authentication-Results: lists.xenproject.org;
 dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=none action=none
 header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped: AM5EUR03FT025.eop-EUR03.prod.protection.outlook.com
X-Forefront-Antispam-Report: CIP:63.35.35.123; CTRY:IE; LANG:en; SCL:1; SRV:;
 IPV:CAL; SFV:NSPM; H:64aa7808-outbound-1.mta.getcheckrecipient.com;
 PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com; CAT:NONE; SFTY:;
 SFS:(4636009)(39860400002)(136003)(346002)(376002)(396003)(46966005)(47076004)(2616005)(86362001)(33656002)(36906005)(336012)(316002)(6512007)(54906003)(6916009)(81166007)(6486002)(5660300002)(36756003)(82310400002)(83380400001)(356005)(82740400003)(8936002)(186003)(2906002)(478600001)(70586007)(8676002)(6506007)(70206006)(4326008)(26005);
 DIR:OUT; SFP:1101; 
X-MS-Office365-Filtering-Correlation-Id-Prvs: 48ddfc76-7c25-4d88-6524-08d8233e354d
X-Forefront-PRVS: 04583CED1A
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: ZnhuUCrcf3z5RnJprp6y4ZsMXmaIVehadzUf6CoTVaAMYkSQMHPY5iwnuXZhf/wupPBNlaEXXBlH2v+8+9uycZwXpby7qElK63Ukbf3LERkhSgr0Ww4r/xYfdN0mK6WfOIXWfqr35io1vl4aA4Kh2/9bntaasbLiK2Om3Q7NRJfOu6s6Y4/947/LUypompAw2pw2UXcfGFtNpigVxwtUOOjHTtpsFM9u7tYbMQCVWT8SQbGSHagV37/ZPhKtGwaAbEdkzyMMHv1tNgYGfz5LuQSTJ9Dtpdw18EK08B8L+WT/clcBmFpnMkiabGmrAH4Llc+v+5Hgl0oHVNpue6V+qUpbtfatflh1quISoSy3rZRm31DjVnmJ1Z5f7XDWFGPBbi7/w2ZCJRIUUdd8tppAqw==
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 08 Jul 2020 12:55:44.6358 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 1f7caf64-a57a-483a-8cd9-08d8233e3a5b
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d; Ip=[63.35.35.123];
 Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource: AM5EUR03FT025.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM5PR0801MB2017
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: nd <nd@arm.com>, Rahul Singh <Rahul.Singh@arm.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi,

Here are the notes we took during the design session around PCI devices passthrough on Arm.
Feel free to comment or add anything :-)

Bertrand

PCI devices passthrough on Arm Design Session
=============================================

Date: 7/7/2020

- X86 VPCI support is for the PVH guest.
- X86 PCI devices discovery code should be checked and maybe used on Arm as it is not very complex
	- Remark from Julien: This might not work in a number of cases
- Sanitisation of each PCI access for each guest in Xen is required
- MSI trap is not required for gicv3 but it is required for gicv2m
	- We do not plan to support non-ITS GICs
- Check possibility to add some specifications in EBBR for PCI enumeration (address assignment part)
- PCI enumeration support should not depend on DOM0 for safety reasons
- PCI enumeration could be done in several places
	- DTB, with some entries giving values to be applied by Xen
	- In Xen (complex; not wanted beyond device discovery)
	- In firmware, with Xen then doing device discovery
- As per Julien, it is difficult to tell Xen on which segment a PCI device is present
	- The current test implementation is done on Juno, where there is only one segment
	- This should be investigated with other hardware in the coming months
- Julien mentioned that clock issues will be complex to solve and that most hardware does not follow the ECAM standard
- Julien mentioned that Linux and Xen could do the enumeration in different ways, making it complex to have Linux do an enumeration after Xen
- We should push the code we have ASAP to the mailing list for review and discussion of the design
	- Arm will try to do that before the end of July
- It would be good to push some PCI support into Xen even though it would not be compatible with all hardware




From xen-devel-bounces@lists.xenproject.org Wed Jul 08 13:05:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jul 2020 13:05:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jt9kx-0005gF-Qd; Wed, 08 Jul 2020 13:04:59 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=sx7s=AT=gmail.com=alistair23@srs-us1.protection.inumbo.net>)
 id 1jt9kw-0005gA-Vt
 for xen-devel@lists.xenproject.org; Wed, 08 Jul 2020 13:04:59 +0000
X-Inumbo-ID: a024b6c0-c11b-11ea-bb8b-bc764e2007e4
Received: from mail-il1-x12d.google.com (unknown [2607:f8b0:4864:20::12d])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a024b6c0-c11b-11ea-bb8b-bc764e2007e4;
 Wed, 08 Jul 2020 13:04:58 +0000 (UTC)
Received: by mail-il1-x12d.google.com with SMTP id k6so38958998ili.6
 for <xen-devel@lists.xenproject.org>; Wed, 08 Jul 2020 06:04:58 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=mime-version:from:date:message-id:subject:to:cc;
 bh=9nLHeBgBW8UFVrjRxD8OX4sNylczp6LaD1uqbgcE0kc=;
 b=D4R5OTp/YEamTWt158xQhM2KZF8/FkwNtmmFPpCqWNlcJR/16DHFHfHCYZ/D+f9Gh1
 qkP7M46DHyW98nhUXBVvO+mQlFNcOQrSoAyJ1FK0hD1b/UACvrdQxo1v8c+HNilN04fQ
 nvKIhkLXZ5MzJLgr5OAzbINo7PyrGDrKfUsZzlPAxm/NaLlkojBviRwNf0eS2yP3LQCU
 XK+TGBSxpsh2VxppnJBHgA/ImrELkB/H8t50DVz+h011FXeL7Wmm25GwZADbw4XlbMHN
 8v1TgYNlLfb0evf05IAVzZu9bkxUzyPfbGrZU5yebH0IStWUEPbpn4POn7lwPZnlHlmd
 nBhw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:from:date:message-id:subject:to:cc;
 bh=9nLHeBgBW8UFVrjRxD8OX4sNylczp6LaD1uqbgcE0kc=;
 b=ZXc08GPufyrSB0r5EbMRkP+XkkykDnM0fKSMP5EuPngJfGRWjXB++aMQaic/EJMra2
 UWjw5IZt7I8hCJXvV5ILE/D7ZDgrPcXogzgoGsK/Ba8elLgQ5tBLwa99unysk5G8vd9K
 kjLaL9tyVBEMgTFyGK6UiOVIr+USi6fl2i+QrAmGb8gv5BK71zuk9FcZDzdKNRZU5vfc
 jyIeFyQhoLOFBDNew+oDhcyVl4BNYBl5S5yJ3wrW8lVOK2y+A3iriyxWSBkp1XlNndVE
 m7LrXSr3bxg78oedhrQl3Ze8rv10olHGO/Qy2ODrk7s9HtOf1xEAILqyq3RCj0yiedjF
 BjDw==
X-Gm-Message-State: AOAM5306Jn4U0Uy3kNg69n/O+ks35QAlvJrSObRtHMyxLg1+cnmfGqR7
 53w0XyvN9YTa0D7XQzs962o8596OdUNA8sZGDuMDjomE
X-Google-Smtp-Source: ABdhPJyqYE1MdnlgxzM8cc5OmZKD0G4/j2Ea01ySH8yJS56xlEHHZ/nypyWlQeqchSVz9cPzItfpU3aQVfUmrcPP7Qs=
X-Received: by 2002:a92:c213:: with SMTP id j19mr42080782ilo.40.1594213497749; 
 Wed, 08 Jul 2020 06:04:57 -0700 (PDT)
MIME-Version: 1.0
From: Alistair Francis <alistair23@gmail.com>
Date: Wed, 8 Jul 2020 05:55:06 -0700
Message-ID: <CAKmqyKPFMGtDLzc2RiEZR6KCcbPL6wumm+V5SNdxNf7fAowcBQ@mail.gmail.com>
Subject: Xen and RISC-V Design Session
To: "open list:X86" <xen-devel@lists.xenproject.org>
Content-Type: text/plain; charset="UTF-8"
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Bobby Eshleman <bobby.eshleman@starlab.io>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hey,

I'm not sure if there is a better way to suggest a design session but
can we have a Xen on RISC-V discussion tomorrow at the Xen Design
Summit?

Preferably in a later slot as the early slots are very early for me.

Alistair


From xen-devel-bounces@lists.xenproject.org Wed Jul 08 13:12:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jul 2020 13:12:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jt9rk-0006W6-Ek; Wed, 08 Jul 2020 13:12:00 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=65hh=AT=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jt9rj-0006W1-90
 for xen-devel@lists.xenproject.org; Wed, 08 Jul 2020 13:11:59 +0000
X-Inumbo-ID: 9a0ae830-c11c-11ea-8e36-12813bfff9fa
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 9a0ae830-c11c-11ea-8e36-12813bfff9fa;
 Wed, 08 Jul 2020 13:11:58 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1594213918;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:in-reply-to;
 bh=hX0TtMCUmf+4bowMcy/kzhrtYTZFcgfPybT5oM0w5oo=;
 b=f4z+pi1Gf1Mv0x9tTkgKaD3hPjXUB72X6WCabkmqpz7RVqZ9HGWJkIhU
 ns8a+jZ1wl8z7zSK/bcpI8/1BSqbuuT4eJa47EJFh2kGdGsQaklCYjN5a
 PvordE5LWSYJS3CwIoG7BTWK0hd8hmAYM9cyAsfXXRovnNbia/mv6dErd A=;
Authentication-Results: esa1.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: uJVjlQLQiM6CBRa0xQ0h5vOP8vHiHRY7zdJi/OYa2zYgz1YaJn95QaQXv7yFpPLf1upH9O0NR+
 HG8Nf8ldEVAEVM6DFZIN+xFgv0+LZCqIQnYVzkFcSRRui/JSWXyaaolHczpzbk5+mkQSp3GmnC
 Ymhrt9Z8yosx55URv/e1uWaws5RTTmxBKd5hZCJpnhCHvmNHpl+Zn4j/y1kC7nbL91vM/0bCM6
 1LTv6z7JP8TwVTv2FgMHy0YCJIxc7hrl3iCNjK8BMI+ICPHf8WslW6DoBBho4MN9rGn+kvFRQV
 SYU=
X-SBRS: 2.7
X-MesageID: 22188848
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,327,1589256000"; d="scan'208";a="22188848"
Date: Wed, 8 Jul 2020 15:11:50 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Alistair Francis <alistair23@gmail.com>, George Dunlap
 <george.dunlap@eu.citrix.com>
Subject: Re: Xen and RISC-V Design Session
Message-ID: <20200708131150.GD7191@Air-de-Roger>
References: <CAKmqyKPFMGtDLzc2RiEZR6KCcbPL6wumm+V5SNdxNf7fAowcBQ@mail.gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
In-Reply-To: <CAKmqyKPFMGtDLzc2RiEZR6KCcbPL6wumm+V5SNdxNf7fAowcBQ@mail.gmail.com>
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "open list:X86" <xen-devel@lists.xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Bobby Eshleman <bobby.eshleman@starlab.io>,
 Andrew Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Adding George in case he can help with the session placement.

On Wed, Jul 08, 2020 at 05:55:06AM -0700, Alistair Francis wrote:
> Hey,
> 
> I'm not sure if there is a better way to suggest a design session but
> can we have a Xen on RISC-V discussion tomorrow at the Xen Design
> Summit?

I think that would be interesting!

> Preferably in a later slot as the early slots are very early for me.

You have to register it at: https://design-sessions.xenproject.org/

Roger.


From xen-devel-bounces@lists.xenproject.org Wed Jul 08 13:18:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jul 2020 13:18:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jt9xj-0006iR-47; Wed, 08 Jul 2020 13:18:11 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=okql=AT=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jt9xh-0006hF-Ay
 for xen-devel@lists.xenproject.org; Wed, 08 Jul 2020 13:18:09 +0000
X-Inumbo-ID: 7346cbf0-c11d-11ea-bca7-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7346cbf0-c11d-11ea-bca7-bc764e2007e4;
 Wed, 08 Jul 2020 13:18:02 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Message-Id:Subject:To:Sender:
 Reply-To:Cc:MIME-Version:Content-Type:Content-Transfer-Encoding:Content-ID:
 Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc
 :Resent-Message-ID:In-Reply-To:References:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=ipP54IJqUL6IZrHjNqYyN1O5PMQxsRIFoJiVkykXmw4=; b=BW0GPQB6glWM6lsngWLmUYYa9G
 KU5l7qxRJQvfxPrV/1q+KaZQOWo65CzVSmPE976ZRbUKaRT++DbLiu/cIPc1vPxk8xehx8wHTogk2
 XF02qKprsLcpRxNOmHoaAwJA9YS2nOtvw1o5jHKgyv5AUmir0C+4bzt4F+P2hRGagro8=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jt9xZ-0001TS-GP; Wed, 08 Jul 2020 13:18:01 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jt9xZ-000353-4n; Wed, 08 Jul 2020 13:18:01 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jt9xZ-0004KT-47; Wed, 08 Jul 2020 13:18:01 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Subject: [qemu-mainline bisection] complete
 test-amd64-amd64-xl-qemuu-win7-amd64
Message-Id: <E1jt9xZ-0004KT-47@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 08 Jul 2020 13:18:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

branch xen-unstable
xenbranch xen-unstable
job test-amd64-amd64-xl-qemuu-win7-amd64
testid windows-install

Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://git.qemu.org/qemu.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  qemuu git://git.qemu.org/qemu.git
  Bug introduced:  7d3660e79830a069f1848bb4fa1cdf8f666424fb
  Bug not present: 9e3903136d9acde2fb2dd9e967ba928050a6cb4a
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/151734/


  (Revision log too long, omitted.)


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/qemu-mainline/test-amd64-amd64-xl-qemuu-win7-amd64.windows-install.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/qemu-mainline/test-amd64-amd64-xl-qemuu-win7-amd64.windows-install --summary-out=tmp/151734.bisection-summary --basis-template=151065 --blessings=real,real-bisect qemu-mainline test-amd64-amd64-xl-qemuu-win7-amd64 windows-install
Searching for failure / basis pass:
 151704 fail [host=godello1] / 151065 ok.
Failure / basis pass flights: 151704 / 151065
(tree with no url: minios)
Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://git.qemu.org/qemu.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git
Latest c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 627d1d6693b0594d257dbe1a3363a8d4bd4d8307 3c659044118e34603161457db9934a34f816d78b eb6490f544388dd24c0d054a96dd304bc7284450 88ab0c15525ced2eefe39220742efe4769089ad8 be63d9d47f571a60d70f8fb630c03871312d9655
Basis pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 dafce295e6f447ed8905db4e29241e2c6c2a4389 3c659044118e34603161457db9934a34f816d78b 9e3903136d9acde2fb2dd9e967ba928050a6cb4a 2e3de6253422112ae43e608661ba94ea6b345694 058023b343d4e366864831db46e9b438e9e3a178
Generating revisions with ./adhoc-revtuple-generator  git://xenbits.xen.org/linux-pvops.git#c3038e718a19fc596f7b1baba0f83d5146dc7784-c3038e718a19fc596f7b1baba0f83d5146dc7784 git://xenbits.xen.org/osstest/linux-firmware.git#c530a75c1e6a472b0eb9558310b518f0dfcd8860-c530a75c1e6a472b0eb9558310b518f0dfcd8860 git://xenbits.xen.org/osstest/ovmf.git#dafce295e6f447ed8905db4e29241e2c6c2a4389-627d1d6693b0594d257dbe1a3363a8d4bd4d8307 git://xenbits.xen.org/qemu-xen-traditional.git#3c659044118e34603161457db99\
 34a34f816d78b-3c659044118e34603161457db9934a34f816d78b git://git.qemu.org/qemu.git#9e3903136d9acde2fb2dd9e967ba928050a6cb4a-eb6490f544388dd24c0d054a96dd304bc7284450 git://xenbits.xen.org/osstest/seabios.git#2e3de6253422112ae43e608661ba94ea6b345694-88ab0c15525ced2eefe39220742efe4769089ad8 git://xenbits.xen.org/xen.git#058023b343d4e366864831db46e9b438e9e3a178-be63d9d47f571a60d70f8fb630c03871312d9655
Use of uninitialized value $parents in array dereference at ./adhoc-revtuple-generator line 465.
Use of uninitialized value in concatenation (.) or string at ./adhoc-revtuple-generator line 465.
Use of uninitialized value $parents in array dereference at ./adhoc-revtuple-generator line 465.
Use of uninitialized value in concatenation (.) or string at ./adhoc-revtuple-generator line 465.
Use of uninitialized value $parents in array dereference at ./adhoc-revtuple-generator line 465.
Use of uninitialized value in concatenation (.) or string at ./adhoc-revtuple-generator line 465.
Loaded 42847 nodes in revision graph
Searching for test results:
 151101 fail irrelevant
 151065 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 dafce295e6f447ed8905db4e29241e2c6c2a4389 3c659044118e34603161457db9934a34f816d78b 9e3903136d9acde2fb2dd9e967ba928050a6cb4a 2e3de6253422112ae43e608661ba94ea6b345694 058023b343d4e366864831db46e9b438e9e3a178
 151149 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9af1064995d479df92e399a682ba7e4b2fc78415 3c659044118e34603161457db9934a34f816d78b 7d3660e79830a069f1848bb4fa1cdf8f666424fb 2e3de6253422112ae43e608661ba94ea6b345694 b91825f628c9a62cf2a3a0d972ea81484a8b7fce
 151221 fail irrelevant
 151175 fail irrelevant
 151241 fail irrelevant
 151286 fail irrelevant
 151269 fail irrelevant
 151328 fail irrelevant
 151304 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 322969adf1fb3d6cfbd613f35121315715aff2ed 3c659044118e34603161457db9934a34f816d78b 171199f56f5f9bdf1e5d670d09ef1351d8f01bae 2e3de6253422112ae43e608661ba94ea6b345694 fde76f895d0aa817a1207d844d793239c6639bc6
 151377 fail irrelevant
 151353 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1a992030522622c42aa063788b3276789c56c1e1 3c659044118e34603161457db9934a34f816d78b d4b78317b7cf8c0c635b70086503813f79ff21ec 2e3de6253422112ae43e608661ba94ea6b345694 fde76f895d0aa817a1207d844d793239c6639bc6
 151399 fail irrelevant
 151414 fail irrelevant
 151435 fail irrelevant
 151459 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 0f01cec52f4794777feb067e4fa0bfcedfdc124e 3c659044118e34603161457db9934a34f816d78b e7651153a8801dad6805d450ea8bef9b46c1adf5 88ab0c15525ced2eefe39220742efe4769089ad8 88cfd062e8318dfeb67c7d2eb50b6cd224b0738a
 151471 fail irrelevant
 151485 fail irrelevant
 151500 fail irrelevant
 151518 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 00217f1919270007d7a911f89b32e39b9dcaa907 3c659044118e34603161457db9934a34f816d78b fc1bff958998910ec8d25db86cd2f53ff125f7ab 88ab0c15525ced2eefe39220742efe4769089ad8 23ca7ec0ba620db52a646d80e22f9703a6589f66
 151547 fail irrelevant
 151598 fail irrelevant
 151577 fail irrelevant
 151622 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 627d1d6693b0594d257dbe1a3363a8d4bd4d8307 3c659044118e34603161457db9934a34f816d78b 7b7515702012219410802a168ae4aa45b72a44df 88ab0c15525ced2eefe39220742efe4769089ad8 be63d9d47f571a60d70f8fb630c03871312d9655
 151656 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 627d1d6693b0594d257dbe1a3363a8d4bd4d8307 3c659044118e34603161457db9934a34f816d78b eb6490f544388dd24c0d054a96dd304bc7284450 88ab0c15525ced2eefe39220742efe4769089ad8 be63d9d47f571a60d70f8fb630c03871312d9655
 151634 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 627d1d6693b0594d257dbe1a3363a8d4bd4d8307 3c659044118e34603161457db9934a34f816d78b eb6490f544388dd24c0d054a96dd304bc7284450 88ab0c15525ced2eefe39220742efe4769089ad8 be63d9d47f571a60d70f8fb630c03871312d9655
 151645 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 627d1d6693b0594d257dbe1a3363a8d4bd4d8307 3c659044118e34603161457db9934a34f816d78b eb6490f544388dd24c0d054a96dd304bc7284450 88ab0c15525ced2eefe39220742efe4769089ad8 be63d9d47f571a60d70f8fb630c03871312d9655
 151669 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 627d1d6693b0594d257dbe1a3363a8d4bd4d8307 3c659044118e34603161457db9934a34f816d78b eb6490f544388dd24c0d054a96dd304bc7284450 88ab0c15525ced2eefe39220742efe4769089ad8 be63d9d47f571a60d70f8fb630c03871312d9655
 151685 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 627d1d6693b0594d257dbe1a3363a8d4bd4d8307 3c659044118e34603161457db9934a34f816d78b eb6490f544388dd24c0d054a96dd304bc7284450 88ab0c15525ced2eefe39220742efe4769089ad8 be63d9d47f571a60d70f8fb630c03871312d9655
 151710 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 dafce295e6f447ed8905db4e29241e2c6c2a4389 3c659044118e34603161457db9934a34f816d78b 9e3903136d9acde2fb2dd9e967ba928050a6cb4a 2e3de6253422112ae43e608661ba94ea6b345694 058023b343d4e366864831db46e9b438e9e3a178
 151704 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 627d1d6693b0594d257dbe1a3363a8d4bd4d8307 3c659044118e34603161457db9934a34f816d78b eb6490f544388dd24c0d054a96dd304bc7284450 88ab0c15525ced2eefe39220742efe4769089ad8 be63d9d47f571a60d70f8fb630c03871312d9655
 151717 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 627d1d6693b0594d257dbe1a3363a8d4bd4d8307 3c659044118e34603161457db9934a34f816d78b eb6490f544388dd24c0d054a96dd304bc7284450 88ab0c15525ced2eefe39220742efe4769089ad8 be63d9d47f571a60d70f8fb630c03871312d9655
 151718 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3ee4f6cb360a877d171f2f9bb76b0d46d2cfa985 3c659044118e34603161457db9934a34f816d78b 9e3903136d9acde2fb2dd9e967ba928050a6cb4a 2e3de6253422112ae43e608661ba94ea6b345694 7028534d8482d25860c4d1aa8e45f0b911abfc5a
 151720 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 394e8e4bf586b4749620a48a23c816ee19f0e04a 3c659044118e34603161457db9934a34f816d78b 9e3903136d9acde2fb2dd9e967ba928050a6cb4a 2e3de6253422112ae43e608661ba94ea6b345694 2995d0afdf2d3fb44d07eada088db3613741db1e
 151723 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 567bc4b4ae8a975791382dd30ac413bc0d3ce88c 3c659044118e34603161457db9934a34f816d78b 9e3903136d9acde2fb2dd9e967ba928050a6cb4a 2e3de6253422112ae43e608661ba94ea6b345694 b91825f628c9a62cf2a3a0d972ea81484a8b7fce
 151724 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fd708fe0e1f813d6faf02d92ec5e8d73ce876ed1 3c659044118e34603161457db9934a34f816d78b 7d3660e79830a069f1848bb4fa1cdf8f666424fb 2e3de6253422112ae43e608661ba94ea6b345694 b91825f628c9a62cf2a3a0d972ea81484a8b7fce
 151726 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 567bc4b4ae8a975791382dd30ac413bc0d3ce88c 3c659044118e34603161457db9934a34f816d78b 7d3660e79830a069f1848bb4fa1cdf8f666424fb 2e3de6253422112ae43e608661ba94ea6b345694 b91825f628c9a62cf2a3a0d972ea81484a8b7fce
 151730 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 567bc4b4ae8a975791382dd30ac413bc0d3ce88c 3c659044118e34603161457db9934a34f816d78b 9e3903136d9acde2fb2dd9e967ba928050a6cb4a 2e3de6253422112ae43e608661ba94ea6b345694 b91825f628c9a62cf2a3a0d972ea81484a8b7fce
 151731 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 567bc4b4ae8a975791382dd30ac413bc0d3ce88c 3c659044118e34603161457db9934a34f816d78b 7d3660e79830a069f1848bb4fa1cdf8f666424fb 2e3de6253422112ae43e608661ba94ea6b345694 b91825f628c9a62cf2a3a0d972ea81484a8b7fce
 151732 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 567bc4b4ae8a975791382dd30ac413bc0d3ce88c 3c659044118e34603161457db9934a34f816d78b 9e3903136d9acde2fb2dd9e967ba928050a6cb4a 2e3de6253422112ae43e608661ba94ea6b345694 b91825f628c9a62cf2a3a0d972ea81484a8b7fce
 151734 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 567bc4b4ae8a975791382dd30ac413bc0d3ce88c 3c659044118e34603161457db9934a34f816d78b 7d3660e79830a069f1848bb4fa1cdf8f666424fb 2e3de6253422112ae43e608661ba94ea6b345694 b91825f628c9a62cf2a3a0d972ea81484a8b7fce
Searching for interesting versions
 Result found: flight 151065 (pass), for basis pass
 Result found: flight 151634 (fail), for basis failure
 Repro found: flight 151710 (pass), for basis pass
 Repro found: flight 151717 (fail), for basis failure
 0 revisions at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 567bc4b4ae8a975791382dd30ac413bc0d3ce88c 3c659044118e34603161457db9934a34f816d78b 9e3903136d9acde2fb2dd9e967ba928050a6cb4a 2e3de6253422112ae43e608661ba94ea6b345694 b91825f628c9a62cf2a3a0d972ea81484a8b7fce
No revisions left to test, checking graph state.
 Result found: flight 151723 (pass), for last pass
 Result found: flight 151726 (fail), for first failure
 Repro found: flight 151730 (pass), for last pass
 Repro found: flight 151731 (fail), for first failure
 Repro found: flight 151732 (pass), for last pass
 Repro found: flight 151734 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  qemuu git://git.qemu.org/qemu.git
  Bug introduced:  7d3660e79830a069f1848bb4fa1cdf8f666424fb
  Bug not present: 9e3903136d9acde2fb2dd9e967ba928050a6cb4a
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/151734/


  (Revision log too long, omitted.)

pnmtopng: 164 colors found
Revision graph left in /home/logs/results/bisect/qemu-mainline/test-amd64-amd64-xl-qemuu-win7-amd64.windows-install.{dot,ps,png,html,svg}.
----------------------------------------
151734: tolerable ALL FAIL

flight 151734 qemu-mainline real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/151734/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 test-amd64-amd64-xl-qemuu-win7-amd64 10 windows-install fail baseline untested


jobs:
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Wed Jul 08 13:21:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jul 2020 13:21:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jtA0X-0007UF-M5; Wed, 08 Jul 2020 13:21:05 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=sx7s=AT=gmail.com=alistair23@srs-us1.protection.inumbo.net>)
 id 1jtA0W-0007UA-A5
 for xen-devel@lists.xenproject.org; Wed, 08 Jul 2020 13:21:04 +0000
X-Inumbo-ID: df9a59ca-c11d-11ea-bb8b-bc764e2007e4
Received: from mail-io1-xd2e.google.com (unknown [2607:f8b0:4864:20::d2e])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id df9a59ca-c11d-11ea-bb8b-bc764e2007e4;
 Wed, 08 Jul 2020 13:21:03 +0000 (UTC)
Received: by mail-io1-xd2e.google.com with SMTP id l1so4979983ioh.5
 for <xen-devel@lists.xenproject.org>; Wed, 08 Jul 2020 06:21:03 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc:content-transfer-encoding;
 bh=el09H0U7lna7w0VKAPojZ6lb1UMQcOdSFaycVKQzoT4=;
 b=n4/LIKVJIbbiYWHP3HF5WJZjRsL8vzat4uIwgZbXkvqaCgRSyf7ZEfIuByGXNhy+jh
 3XIou23lW5FaMpYxZZINSgBypG5Dl8YEIADuV2FKTBO6MCAgLJSAzeyMqUfZQgfxxOcV
 0LAL1jBygbjmJxEPIv/4fesxE5BZiSsZkpKjhASn+NPW3sh4/KuPepchuMPiY223NbGb
 ygoICg1FDuYpigWHU94hnGr+3ptTMCtRJMJWt1isyiuyBLOowDCZqONBZT2RIuEGXi+5
 9tP+4enLNipDjRJvt5BUrlniwkLRfgHr7hJaMbX1jHKR54mmsTbhUR0lmB5VbHdxPWym
 qhNw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc:content-transfer-encoding;
 bh=el09H0U7lna7w0VKAPojZ6lb1UMQcOdSFaycVKQzoT4=;
 b=lKWyAD5UIAFlEcdy8hYMOrvglSvCWB7X4M7+CrTAIYTeelfNY5J0iZBC7sh+HCyZb+
 +y7opesN6YDDZVTQTOFvdDu9DuZzUJAgFOlS5mR3BeEANscv4BJ9+HNsJ5k285AkZnIS
 xqKwD5qMrjuaW0PzHW1rRYxJlZ6XkZ5daAfo+B1Wb5gQXP/WFVFR0YvKfYq2YMZWoi3X
 /7+6xZbBHBxZqlW3L8KSuPd3OUJctW9Tkc8gbMov7BckBiUqyAznGN/wrDYKxuUx7Lnz
 dQuaq1A4NTR9vKfY92GfuwWoDld6KepYBWArecZRyLTHpGNKalIjXg8CrhaFDOzeEJ4U
 5ghA==
X-Gm-Message-State: AOAM533WDr14IBye1+iyoegbPVDpyri9PGZElEXIDL3kToJJSJMFtsWM
 8MI42Bf9UBSNzL3lY2Q00s0pqn0+Fj8o4D30D5s=
X-Google-Smtp-Source: ABdhPJydVMch3lmCTd03PLZjeyKB3p6GcgVtB1SuyB6p7MElZVVeHSD/5You4P5Cn2d8KCHts0sJcegDCGCfD/c8n5I=
X-Received: by 2002:a05:6638:dd3:: with SMTP id
 m19mr67790322jaj.106.1594214463549; 
 Wed, 08 Jul 2020 06:21:03 -0700 (PDT)
MIME-Version: 1.0
References: <CAKmqyKPFMGtDLzc2RiEZR6KCcbPL6wumm+V5SNdxNf7fAowcBQ@mail.gmail.com>
 <20200708131150.GD7191@Air-de-Roger>
In-Reply-To: <20200708131150.GD7191@Air-de-Roger>
From: Alistair Francis <alistair23@gmail.com>
Date: Wed, 8 Jul 2020 06:11:13 -0700
Message-ID: <CAKmqyKOhW=YJ-WW28v-Ddt5yDDfVfCJKwijfsXo0oWAcvfrg2w@mail.gmail.com>
Subject: Re: Xen and RISC-V Design Session
To: Roger Pau Monné <roger.pau@citrix.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
 "open list:X86" <xen-devel@lists.xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Bobby Eshleman <bobby.eshleman@starlab.io>,
 Andrew Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Wed, Jul 8, 2020 at 6:11 AM Roger Pau Monné <roger.pau@citrix.com> wrote:
>
> Adding George in case he can help with the session placement.
>
> On Wed, Jul 08, 2020 at 05:55:06AM -0700, Alistair Francis wrote:
> > Hey,
> >
> > I'm not sure if there is a better way to suggest a design session but
> > can we have a Xen on RISC-V discussion tomorrow at the Xen Design
> > Summit?
>
> I think that would be interesting!
>
> > Preferably in a later slot as the early slots are very early for me.
>
> You have to register it at: https://design-sessions.xenproject.org/

Thanks!

I don't have a verification code unfortunately. Is it possible to get one?

Alistair

>
> Roger.


From xen-devel-bounces@lists.xenproject.org Wed Jul 08 13:21:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jul 2020 13:21:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jtA1E-0007Xo-VT; Wed, 08 Jul 2020 13:21:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=M5eT=AT=citrix.com=george.dunlap@srs-us1.protection.inumbo.net>)
 id 1jtA1D-0007Xf-Kx
 for xen-devel@lists.xenproject.org; Wed, 08 Jul 2020 13:21:47 +0000
X-Inumbo-ID: f8e1a078-c11d-11ea-bb8b-bc764e2007e4
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f8e1a078-c11d-11ea-bb8b-bc764e2007e4;
 Wed, 08 Jul 2020 13:21:46 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1594214506;
 h=from:to:cc:subject:date:message-id:references:
 in-reply-to:content-id:content-transfer-encoding: mime-version;
 bh=JCcOhaQmMrJFhQ/6grXhm1QtDc+5GV1eZ41vFeKYK7I=;
 b=B9R4XfU+ie7zd3xxZW7Qxu5kO7qm2VD52m1xbvZ8BMkEwt8RA9Flsudu
 XWMDX2ntDOITgtDBEjxzu3Sn6RpOSEUphxGttTQusJvh2DNpHlB22f7UU
 3Z2XAaR1KNoCQPaiG4a7g1gBQNHtryoKBot7oQfNAvt/4vdIkQNdNZOBT 0=;
Authentication-Results: esa4.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: jd3rbUJL1tKUz3NzmWij23SQeQTbyv6J0FDVy5SgqGlaK/yeY9pcBPimvdQ17anr02D/YdwfXk
 1XX3ipCF1a7kA8FrxetvzcLksHI/vyq7UEDGJMsSSkjOn6nIPk8HpmxdS6WcR+CP/R4YAZftCi
 SOr0rPXwalvTBE4zHC3k0bJTykqz9SksOCV8CRP/dsSrVNaaSt/TU0c1ndjqQyaHBW+SVzeNbR
 TQFJgUaiKmMkLNdvFaZt8qUA7h0tb43+bGMQN6MuIHqzPUxsvcUDU+cxWXDmIPtzDo7cQIf/yw
 mmU=
X-SBRS: 2.7
X-MesageID: 22698887
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,327,1589256000"; d="scan'208";a="22698887"
From: George Dunlap <George.Dunlap@citrix.com>
To: Alistair Francis <alistair23@gmail.com>
Subject: Re: Xen and RISC-V Design Session
Thread-Topic: Xen and RISC-V Design Session
Thread-Index: AQHWVSlaXbsHlooEqk+f8JPClsuwY6j9hqiAgAAC7gA=
Date: Wed, 8 Jul 2020 13:21:42 +0000
Message-ID: <6CE81465-9F87-486F-A3CC-08857C9C4332@citrix.com>
References: <CAKmqyKPFMGtDLzc2RiEZR6KCcbPL6wumm+V5SNdxNf7fAowcBQ@mail.gmail.com>
 <20200708131150.GD7191@Air-de-Roger>
 <CAKmqyKOhW=YJ-WW28v-Ddt5yDDfVfCJKwijfsXo0oWAcvfrg2w@mail.gmail.com>
In-Reply-To: <CAKmqyKOhW=YJ-WW28v-Ddt5yDDfVfCJKwijfsXo0oWAcvfrg2w@mail.gmail.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-mailer: Apple Mail (2.3608.80.23.2.2)
x-ms-exchange-messagesentrepresentingtype: 1
x-ms-exchange-transport-fromentityheader: Hosted
Content-Type: text/plain; charset="utf-8"
Content-ID: <2053D2EF33E67944AB0C0423E3C99717@citrix.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Andrew Cooper <Andrew.Cooper3@citrix.com>,
 George Dunlap <George.Dunlap@citrix.com>,
 Bobby Eshleman <bobby.eshleman@starlab.io>,
 "open list:X86" <xen-devel@lists.xenproject.org>,
 Roger Pau Monne <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> On Jul 8, 2020, at 2:11 PM, Alistair Francis <alistair23@gmail.com> wrote:
> 
> On Wed, Jul 8, 2020 at 6:11 AM Roger Pau Monné <roger.pau@citrix.com> wrote:
>> 
>> Adding George in case he can help with the session placement.
>> 
>> On Wed, Jul 08, 2020 at 05:55:06AM -0700, Alistair Francis wrote:
>>> Hey,
>>> 
>>> I'm not sure if there is a better way to suggest a design session but
>>> can we have a Xen on RISC-V discussion tomorrow at the Xen Design
>>> Summit?
>> 
>> I think that would be interesting!
>> 
>>> Preferably in a later slot as the early slots are very early for me.
>> 
>> You have to register it at: https://design-sessions.xenproject.org/
> 
> Thanks!
> 
> I don't have a verification code unfortunately. Is it possible to get one?

VirtualPanda2020

Cheers,
 -George


From xen-devel-bounces@lists.xenproject.org Wed Jul 08 13:30:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jul 2020 13:30:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jtA9m-0008Tm-T2; Wed, 08 Jul 2020 13:30:38 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=sx7s=AT=gmail.com=alistair23@srs-us1.protection.inumbo.net>)
 id 1jtA9l-0008Th-TF
 for xen-devel@lists.xenproject.org; Wed, 08 Jul 2020 13:30:37 +0000
X-Inumbo-ID: 355f1322-c11f-11ea-bb8b-bc764e2007e4
Received: from mail-io1-xd36.google.com (unknown [2607:f8b0:4864:20::d36])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 355f1322-c11f-11ea-bb8b-bc764e2007e4;
 Wed, 08 Jul 2020 13:30:37 +0000 (UTC)
Received: by mail-io1-xd36.google.com with SMTP id q8so46970721iow.7
 for <xen-devel@lists.xenproject.org>; Wed, 08 Jul 2020 06:30:37 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc:content-transfer-encoding;
 bh=1qooX1rDuH5ISVhDXxteHHMQjGTfb4o9k5ecOHl4zFE=;
 b=ZMXH1PRlMtMlNvTzH2+g584oOYSzwektF8zD0v3vYgal2ybjsDL1+T7tQ8XH0xioyd
 NbMV90qsjd4BLjJeasRjRP4apfvwdx2/E58Os+sG4LsguVvEBJOi2EgYd0hhdD/KKFq0
 5howubC8KnFkivmJRHHUjdryXgAfmR+/dnNtOSXZKQfiGnz3cRAvsul+Xrp9/btarFGP
 dyC767EAnOCYCBtqxXPw2SjhGKvQIytyjt8QRuvhQB4tLqkKdvSgw6WhrbOoBf664Rn5
 WTq5IzNI9bm3/XECa8fMkhauYBWMfaQE2xPZUfMyoiA2xCI2qoxC/CwsmcYP4qo9OjoA
 2kDw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc:content-transfer-encoding;
 bh=1qooX1rDuH5ISVhDXxteHHMQjGTfb4o9k5ecOHl4zFE=;
 b=WJ577WWztevn7DWLYC06EO5VY1ajN2PavIQa2DkicQMuvdGcGc4slz+lbuoFLcpYmo
 BxRzih/sCZ41uRB6SLp/n8LTzWn1xOC06mEmKUa5nXmMC9rUMU3rh+1ws71wL+IlcAvS
 IIWw7f4oRLC0TOe8awrc4qevSBXL6+WK3CCbeRIm1jEn6MADtLzXJg5awzOnIVLeqbzO
 nw4suZM4i2WpcGT7yJVpd8715pKGaakShy4lo3LF3iC7Ikke1FVt9mzsWcKf5Xs9uRfg
 GONg77xMqbKfcLOVf5vzfNfycifj86OZUw+uWYxCnO9klGQQk/pbW0WkuDx8W9TvH/BU
 An7A==
X-Gm-Message-State: AOAM533s6jSCtQ2fontsGxLHt9dEtiACRazwk6zBsEc/Xts9XFC0GvQj
 oGVzl7xu9FItmgTT0smZp4MtlNXQRgK8wc7MHBc=
X-Google-Smtp-Source: ABdhPJw2bFr/j9pXgEUhk/KNCENpWt1tVaZINsxgi9B0uU2285XUZUK6MIhg5/1C7yF4hi0YcfFQJ8Wz42IC1lys7hQ=
X-Received: by 2002:a6b:8dd1:: with SMTP id
 p200mr37059144iod.118.1594215036696; 
 Wed, 08 Jul 2020 06:30:36 -0700 (PDT)
MIME-Version: 1.0
References: <CAKmqyKPFMGtDLzc2RiEZR6KCcbPL6wumm+V5SNdxNf7fAowcBQ@mail.gmail.com>
 <20200708131150.GD7191@Air-de-Roger>
 <CAKmqyKOhW=YJ-WW28v-Ddt5yDDfVfCJKwijfsXo0oWAcvfrg2w@mail.gmail.com>
 <6CE81465-9F87-486F-A3CC-08857C9C4332@citrix.com>
In-Reply-To: <6CE81465-9F87-486F-A3CC-08857C9C4332@citrix.com>
From: Alistair Francis <alistair23@gmail.com>
Date: Wed, 8 Jul 2020 06:20:47 -0700
Message-ID: <CAKmqyKP5j7tQLZ8ka=CoN93X87a1LQhnMTxSeYfFo0jviMzP-w@mail.gmail.com>
Subject: Re: Xen and RISC-V Design Session
To: George Dunlap <George.Dunlap@citrix.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "open list:X86" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <Andrew.Cooper3@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Bobby Eshleman <bobby.eshleman@starlab.io>,
 Roger Pau Monne <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Wed, Jul 8, 2020 at 6:21 AM George Dunlap <George.Dunlap@citrix.com> wrote:
>
>
>
> > On Jul 8, 2020, at 2:11 PM, Alistair Francis <alistair23@gmail.com> wrote:
> >
> > On Wed, Jul 8, 2020 at 6:11 AM Roger Pau Monné <roger.pau@citrix.com> wrote:
> >>
> >> Adding George in case he can help with the session placement.
> >>
> >> On Wed, Jul 08, 2020 at 05:55:06AM -0700, Alistair Francis wrote:
> >>> Hey,
> >>>
> >>> I'm not sure if there is a better way to suggest a design session but
> >>> can we have a Xen on RISC-V discussion tomorrow at the Xen Design
> >>> Summit?
> >>
> >> I think that would be interesting!
> >>
> >>> Preferably in a later slot as the early slots are very early for me.
> >>
> >> You have to register it at: https://design-sessions.xenproject.org/
> >
> > Thanks!
> >
> > I don't have a verification code unfortunately. Is it possible to get one?
>
> VirtualPanda2020

Thanks! Just submitted the proposal.

It would be really great to have Bobby attend (on CC) as he has been
working on it. I'm not sure what timezone he is in though.

Alistair

>
> Cheers,
>  -George


From xen-devel-bounces@lists.xenproject.org Wed Jul 08 13:32:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jul 2020 13:32:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jtABL-00008k-9x; Wed, 08 Jul 2020 13:32:15 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=65hh=AT=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jtABK-00008c-0y
 for xen-devel@lists.xenproject.org; Wed, 08 Jul 2020 13:32:14 +0000
X-Inumbo-ID: 6e1a8980-c11f-11ea-8e38-12813bfff9fa
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6e1a8980-c11f-11ea-8e38-12813bfff9fa;
 Wed, 08 Jul 2020 13:32:12 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1594215133;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:in-reply-to;
 bh=HtZuI3CdV67mITIBl5roTEp3sCpytQ+TQBB1qswd7/A=;
 b=Kzi1T70d9S+OuoAdSs+2XIRceO3j0DcD4OlOEONPkY/wcCdQbcPpKmjv
 M7xDfU8HyQOQmtEGEEtS1B4rPj0c6lCvvDvKpcrrbwpEIWF2t9YIRn7r/
 NzX+tfcRVvxah0QZx1zw/NWYnhnzR0xhCCc4nDyVx95MhBI/AzP+Kab9z U=;
Authentication-Results: esa3.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: bqgUmpPKJkm1N4Qd6szRWO97fo3vYR1GrX4nBemd9OaCzJzp+YxJazmJ2Ygqbtl3mNA6ioEesY
 uU891TXo8rqc6kw+8BxYMxsqI3PgyQSIvLRVf8Ijb8dBypPr1Ruqauj6kJpGmDhcO3NN2A8id/
 2yA/kp2DJSeyKGXoTH68EfKYbI6Xyi/Lw3RnM+IZ4iWWV3Y+hEDFfs/3HngX84ap/HrfNGiYwy
 DaCDUnbKs2WgGMwJas/1eD3W15gtWpaURXRUOTVc/jq8jkVRbx4cVQ4w2m/RPcV/+82kYHPBUF
 nUI=
X-SBRS: 2.7
X-MesageID: 21866724
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,327,1589256000"; d="scan'208";a="21866724"
Date: Wed, 8 Jul 2020 15:32:05 +0200
From: Roger Pau Monné <roger.pau@citrix.com>
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
Subject: Re: PCI passthrough on arm Design Session MoM
Message-ID: <20200708133205.GE7191@Air-de-Roger>
References: <4E0A40D3-2979-4A91-9376-C2B19B9F582E@arm.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
In-Reply-To: <4E0A40D3-2979-4A91-9376-C2B19B9F582E@arm.com>
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, nd <nd@arm.com>,
 Rahul Singh <Rahul.Singh@arm.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Wed, Jul 08, 2020 at 12:55:36PM +0000, Bertrand Marquis wrote:
> Hi,
> 
> Here are the notes we took during the design session around PCI devices passthrough on Arm.
> Feel free to comment or add anything :-)
> 
> Bertrand
> 
> PCI devices passthrough on Arm Design Session
> ======================================
> 
> Date: 7/7/2020
> 
> - X86 vPCI support is for the PVH guest.

Current vPCI is only for PVH dom0. We need to decide what to do for
PVH domUs, whether we want to use vPCI or xenpt from Paul:

http://xenbits.xen.org/gitweb/?p=people/pauldu/xenpt.git;a=summary

Or something else. I think this decision also needs to take into
account Arm.

> - x86 PCI device discovery code should be checked and maybe reused on Arm, as it is not very complex
> 	- Remark from Julien: this might not work in a number of cases
> - Sanitization of each PCI access for each guest in Xen is required
> - MSI trapping is not required for GICv3 (ITS) but is required for GICv2m
> 	- We do not plan to support non-ITS GICs
> - Check the possibility of adding some specification to EBBR for PCI enumeration (the address assignment part)
> - PCI enumeration support should not depend on DOM0 for safety reasons
> - PCI enumeration could be done in several places
> 	- DTB, with some entries giving values to be applied by Xen
> 	- In Xen (complex, not wanted beyond device discovery)
> 	- In firmware, followed by Xen device discovery
> - As per Julien, it is difficult to tell Xen on which segment a PCI device is present
> 	- Current test implementation is done on Juno where there is only one segment
> 	- This should be investigated with other hardware in the coming months

I'm not sure the segments used by Xen need to match the segments used
by the guest. A segment is just an abstract value assigned by the OS
(or Xen) to differentiate MMCFG (ECAM) regions, and whether such
numbers match doesn't seem relevant to me, as in the end Xen will
trap ECAM accesses and map them to the Xen-assigned segments.

Segment matching between the OS and Xen is only relevant when PCI
information needs to be conveyed between them using some kind of
hypercall, but I think you want to avoid using such side-band
communication channels anyway?

> - Julien mentioned that clock issues will be complex to solve, and that most hardware does not follow the ECAM standard
> - Julien mentioned that Linux and Xen could do the enumeration in different ways, making it complex to have Linux do an enumeration after Xen
> - We should push the code we have ASAP on the mailing list for a review and discussion on the design
> 	- Arm will try to do that before end of July

I will be happy to give it a look and provide feedback.

For such complex pieces of work I would recommend first sending some
kind of design document to the mailing list in order to make sure the
direction taken is accepted by the community, and we can also provide
feedback or point to existing components that may be helpful :). If
you already have code done that's also fine, feel free to send it.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Wed Jul 08 14:34:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jul 2020 14:34:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jtB9b-00057E-1G; Wed, 08 Jul 2020 14:34:31 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=I9ua=AT=gmail.com=bobbyeshleman@srs-us1.protection.inumbo.net>)
 id 1jtB9Z-000579-TQ
 for xen-devel@lists.xenproject.org; Wed, 08 Jul 2020 14:34:29 +0000
X-Inumbo-ID: 210e0640-c128-11ea-8496-bc764e2007e4
Received: from mail-pg1-x530.google.com (unknown [2607:f8b0:4864:20::530])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 210e0640-c128-11ea-8496-bc764e2007e4;
 Wed, 08 Jul 2020 14:34:29 +0000 (UTC)
Received: by mail-pg1-x530.google.com with SMTP id e18so21759038pgn.7
 for <xen-devel@lists.xenproject.org>; Wed, 08 Jul 2020 07:34:28 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=date:from:to:cc:subject:message-id:references:mime-version
 :content-disposition:in-reply-to:user-agent;
 bh=Yb7cX4ea2fPkbfiHgqV02jbgj81wzxI3uvVH5pLvZwk=;
 b=CPtKHGglkJqqmRyR9C8qoQL4AoMYiCDpXR6XIrWzo+w5yb9HiLwEATY+5iMA5n+3vQ
 amqA6xNKjgtObwGk/NFoQvdv20Z7XYASLWQSb6WtaNFuYfGfn81Kkd6aihWDvRoTknP3
 U9lB4b0Xv61X/8nbYkfk/yVWoB8UG/7NjgE7EK8XPRu27dBbqoZSdJqgzMdQZ3Qw2Ivq
 J8203Ftsbn6qwrd5EPG8gxTzfKKVN2471lKtUPa2GWBz32aRaG6BxO/20ITemX/0Rtbf
 CLZMRV2FtzbL0WlbQJOyeXPTL1q1p1S43GOcyMkw1wDuFDtokFbHG/RNcyoAjQGHuT18
 5jmg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:date:from:to:cc:subject:message-id:references
 :mime-version:content-disposition:in-reply-to:user-agent;
 bh=Yb7cX4ea2fPkbfiHgqV02jbgj81wzxI3uvVH5pLvZwk=;
 b=g8T1N99/W7di7RCZVKs21HZXiiEpk/NO6okVsWE9ipPAJtBib5TT72b9nosNwy8570
 FGCeErQqxofA3wkJxHdFtoI1Tjk64s3At8Q+AYoPQrT6TeaR/TOxe8qhnnezo85jPM+p
 RuUgboBmIqW0N4riaEFLNUcCC9feKPsUGicQtxBfJA4PDS7Eu+eeWUbifv8HMQwBHLwg
 xbrXB5tjwl6Og9r3kfdDnhnumZI0Oj9bUqf4BxaWCc3sQIg+quVBjl+nacQ9QbywzEEF
 Dr3hXwmSN2x2Ev+DfrDR2PZ59mbGRwYtuZaFlrU5FAhmXpStTuhZk+y/6xEOAWv0fkem
 y9Gg==
X-Gm-Message-State: AOAM531fvD9kcehWqIrvu8jOBxRSF+mB6BsUMFX59/AgDey3/lFZP+R8
 mJTILZtj0zvENaKhwFJfMu4=
X-Google-Smtp-Source: ABdhPJzyouzTBoWX0PV+jsigo+PfoifWL165NGaAWJZJXwa6vhTJZymHfyoZwHagFLkaMavI2RaHtQ==
X-Received: by 2002:a62:58c4:: with SMTP id
 m187mr51230914pfb.216.1594218868085; 
 Wed, 08 Jul 2020 07:34:28 -0700 (PDT)
Received: from piano ([2601:1c2:4f00:c640:fc6:7318:8185:4d2d])
 by smtp.gmail.com with ESMTPSA id n137sm102963pfd.194.2020.07.08.07.34.26
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 08 Jul 2020 07:34:27 -0700 (PDT)
Date: Wed, 8 Jul 2020 07:34:20 -0700
From: Bobby Eshleman <bobbyeshleman@gmail.com>
To: Alistair Francis <alistair23@gmail.com>
Subject: Re: Xen and RISC-V Design Session
Message-ID: <20200708143420.GA8562@piano>
References: <CAKmqyKPFMGtDLzc2RiEZR6KCcbPL6wumm+V5SNdxNf7fAowcBQ@mail.gmail.com>
 <20200708131150.GD7191@Air-de-Roger>
 <CAKmqyKOhW=YJ-WW28v-Ddt5yDDfVfCJKwijfsXo0oWAcvfrg2w@mail.gmail.com>
 <6CE81465-9F87-486F-A3CC-08857C9C4332@citrix.com>
 <CAKmqyKP5j7tQLZ8ka=CoN93X87a1LQhnMTxSeYfFo0jviMzP-w@mail.gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <CAKmqyKP5j7tQLZ8ka=CoN93X87a1LQhnMTxSeYfFo0jviMzP-w@mail.gmail.com>
User-Agent: Mutt/1.10.1 (2018-07-13)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Andrew Cooper <Andrew.Cooper3@citrix.com>,
 George Dunlap <George.Dunlap@citrix.com>,
 Bobby Eshleman <bobby.eshleman@starlab.io>,
 "open list:X86" <xen-devel@lists.xenproject.org>,
 Roger Pau Monne <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Wed, Jul 08, 2020 at 06:20:47AM -0700, Alistair Francis wrote:
> 
> Thanks! Just submitted the proposal.
> 
> It would be really great to have Bobby attend (on CC) as he has been
> working on it. I'm not sure what timezone he is in though.
> 

Hey Alistair,

I am on the west coast in the USA, so some of the later slots would work best
for me too.

Best,
Bobby


From xen-devel-bounces@lists.xenproject.org Wed Jul 08 15:10:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jul 2020 15:10:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jtBiS-0008OM-2x; Wed, 08 Jul 2020 15:10:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vA5+=AT=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1jtBiQ-0008OH-Mo
 for xen-devel@lists.xenproject.org; Wed, 08 Jul 2020 15:10:30 +0000
X-Inumbo-ID: 2881301e-c12d-11ea-b7bb-bc764e2007e4
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2881301e-c12d-11ea-b7bb-bc764e2007e4;
 Wed, 08 Jul 2020 15:10:28 +0000 (UTC)
Received: from localhost (c-67-164-102-47.hsd1.ca.comcast.net [67.164.102.47])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256
 bits)) (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id C8B632064B;
 Wed,  8 Jul 2020 15:10:27 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
 s=default; t=1594221028;
 bh=CkPT5VjDi6r3bKPhISgc9er9XvSARCVDrNHMl+ySW3o=;
 h=Date:From:To:cc:Subject:In-Reply-To:References:From;
 b=ubkCsteX/VTNE20KBPjTYTTb1JfpuBiOGIF7b9LIMEgfUVjIFQqoEbR18bknG5zf7
 u9p5DN+LlW5ZEuOCLEE0q/+ruNdoOQ/BlDhwMwD5ZTEJm88dtLdiJtiXRyw8ZXx0Ux
 PWG5LKgx65XJm0ydemCy9rt+zhlhqx6UxXl72OXs=
Date: Wed, 8 Jul 2020 08:10:27 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Bobby Eshleman <bobbyeshleman@gmail.com>
Subject: Re: Xen and RISC-V Design Session
In-Reply-To: <20200708143420.GA8562@piano>
Message-ID: <alpine.DEB.2.21.2007080808420.4124@sstabellini-ThinkPad-T480s>
References: <CAKmqyKPFMGtDLzc2RiEZR6KCcbPL6wumm+V5SNdxNf7fAowcBQ@mail.gmail.com>
 <20200708131150.GD7191@Air-de-Roger>
 <CAKmqyKOhW=YJ-WW28v-Ddt5yDDfVfCJKwijfsXo0oWAcvfrg2w@mail.gmail.com>
 <6CE81465-9F87-486F-A3CC-08857C9C4332@citrix.com>
 <CAKmqyKP5j7tQLZ8ka=CoN93X87a1LQhnMTxSeYfFo0jviMzP-w@mail.gmail.com>
 <20200708143420.GA8562@piano>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Andrew Cooper <Andrew.Cooper3@citrix.com>,
 George Dunlap <George.Dunlap@citrix.com>,
 Bobby Eshleman <bobby.eshleman@starlab.io>,
 "open list:X86" <xen-devel@lists.xenproject.org>,
 Alistair Francis <alistair23@gmail.com>,
 Roger Pau Monne <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Wed, 8 Jul 2020, Bobby Eshleman wrote:
> On Wed, Jul 08, 2020 at 06:20:47AM -0700, Alistair Francis wrote:
> > 
> > Thanks! Just submitted the proposal.
> > 
> > It would be really great to have Bobby attend (on CC) as he has been
> > working on it. I'm not sure what timezone he is in though.
> > 
> 
> Hey Alistair,
> 
> I am on the west coast in the USA, so some of the later slots would work best
> for me too.

I'd love to attend this session. Realistically we have two sessions
tomorrow we could use, the 7:15AM and the 8:30AM California time, we
could use one for FuSa and the other for RISC-V.


From xen-devel-bounces@lists.xenproject.org Wed Jul 08 15:13:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jul 2020 15:13:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jtBkw-0008Vy-Gm; Wed, 08 Jul 2020 15:13:06 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=sx7s=AT=gmail.com=alistair23@srs-us1.protection.inumbo.net>)
 id 1jtBku-0008V8-U9
 for xen-devel@lists.xenproject.org; Wed, 08 Jul 2020 15:13:05 +0000
X-Inumbo-ID: 8561223a-c12d-11ea-bca7-bc764e2007e4
Received: from mail-io1-xd2b.google.com (unknown [2607:f8b0:4864:20::d2b])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8561223a-c12d-11ea-bca7-bc764e2007e4;
 Wed, 08 Jul 2020 15:13:04 +0000 (UTC)
Received: by mail-io1-xd2b.google.com with SMTP id i4so47316994iov.11
 for <xen-devel@lists.xenproject.org>; Wed, 08 Jul 2020 08:13:04 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc; bh=xJ675J9bOACbo4EfI5RqubkWGD/0vf/sAuCPPSuqTY4=;
 b=JGYXQbkzxsX7+RxzoeG+dRD0Om4RkBNx/3ge5VfkP7Z+Su73PKgnkkmlCPAoacFAa3
 dHwi2L67aSQ+eMOoRAhAwiz6iYXjMUl8dxTMl4NdAWYaaA6zTGSimRTyaByXOA/Ns9Ms
 9o1OQDtMRI4SIGpFsz0Z6WwOTS6ZE7eotCPofyaIsKli2vP6cvNad+An29ybbHRkdhdg
 kjTH6NIgOlS7OUAUYa0QK7/yYngdR9NFClzBam5Nn2u67R6U6RYfUViOL8zRyqwM/Lqz
 JFn/1P4aS9Yj46ObJDZMY73RZjm97soWVEKRCFKNqIi52ZgvdpY0yib0YJVewh7RRCXg
 hbYQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc;
 bh=xJ675J9bOACbo4EfI5RqubkWGD/0vf/sAuCPPSuqTY4=;
 b=QY8T4TECOHLOUaVQBXdseHH+PU+rVRyp90p586FJtT1UIlnlUCLpNo4wzaydf0D6Y1
 AVeNDmTC68GcDolRDcv/vb9byRty6j1pbmpPSQcuZY3WoqBp0ouvs1FOg8A0WofzACc8
 N8di7SIwQWKWKCE/KAOAJG2a05GUISOHkL0295NhOSFHb7VN+xj2d0c/94VSS8d9tU3x
 WSyn+0lJEz5hXO6cYTcc+dIo8fXdNrzzKHgMHkw0CYr0z9LzX7m3q2m/n32hPKmTkvdo
 aFzW9SGwTyh/CslmtdkgRyqGkaqop3Ks/Vni9x4kdMs1EZk3zOLrcXxD5aBWJ5URz0nw
 TRRw==
X-Gm-Message-State: AOAM532GZtGxu3uuXf+l9zJSgOMTkmqbvEy4i12T7rnMVD3vAFLGO3Lw
 U/74fc/5bMS1znBH9o+rBqHu5sqxTjF7FSEetcQ=
X-Google-Smtp-Source: ABdhPJz3+czMgC71M/Ii9yPW4uFjNbnq8Z9aQ/gKITu9QPcH06eYUuPz96o4w6exyK89HcpfjSe3XMiAgV/N1WErBDI=
X-Received: by 2002:a02:5b83:: with SMTP id g125mr66655393jab.91.1594221184028; 
 Wed, 08 Jul 2020 08:13:04 -0700 (PDT)
MIME-Version: 1.0
References: <CAKmqyKPFMGtDLzc2RiEZR6KCcbPL6wumm+V5SNdxNf7fAowcBQ@mail.gmail.com>
 <20200708131150.GD7191@Air-de-Roger>
 <CAKmqyKOhW=YJ-WW28v-Ddt5yDDfVfCJKwijfsXo0oWAcvfrg2w@mail.gmail.com>
 <6CE81465-9F87-486F-A3CC-08857C9C4332@citrix.com>
 <CAKmqyKP5j7tQLZ8ka=CoN93X87a1LQhnMTxSeYfFo0jviMzP-w@mail.gmail.com>
 <20200708143420.GA8562@piano>
 <alpine.DEB.2.21.2007080808420.4124@sstabellini-ThinkPad-T480s>
In-Reply-To: <alpine.DEB.2.21.2007080808420.4124@sstabellini-ThinkPad-T480s>
From: Alistair Francis <alistair23@gmail.com>
Date: Wed, 8 Jul 2020 08:03:14 -0700
Message-ID: <CAKmqyKPrQKyz8HY00pGnS-mM8Dr5P-m69sziCJ-K8yiFoza08Q@mail.gmail.com>
Subject: Re: Xen and RISC-V Design Session
To: Stefano Stabellini <sstabellini@kernel.org>
Content-Type: text/plain; charset="UTF-8"
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Bobby Eshleman <bobbyeshleman@gmail.com>,
 Andrew Cooper <Andrew.Cooper3@citrix.com>,
 George Dunlap <George.Dunlap@citrix.com>,
 Bobby Eshleman <bobby.eshleman@starlab.io>,
 "open list:X86" <xen-devel@lists.xenproject.org>,
 Roger Pau Monne <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Wed, Jul 8, 2020 at 8:10 AM Stefano Stabellini
<sstabellini@kernel.org> wrote:
>
> On Wed, 8 Jul 2020, Bobby Eshleman wrote:
> > On Wed, Jul 08, 2020 at 06:20:47AM -0700, Alistair Francis wrote:
> > >
> > > Thanks! Just submitted the proposal.
> > >
> > > It would be really great to have Bobby attend (on CC) as he has been
> > > working on it. I'm not sure what timezone he is in though.
> > >
> >
> > Hey Alistair,
> >
> > I am on the west coast in the USA, so some of the later slots would work best
> > for me too.
>
> I'd love to attend this session. Realistically we have two sessions
> tomorrow we could use, the 7:15AM and the 8:30AM California time, we
> could use one for FuSa and the other for RISC-V.

Either of those work for me.

Alistair


From xen-devel-bounces@lists.xenproject.org Wed Jul 08 15:20:35 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jul 2020 15:20:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jtBs6-0000wC-E8; Wed, 08 Jul 2020 15:20:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=M5eT=AT=citrix.com=george.dunlap@srs-us1.protection.inumbo.net>)
 id 1jtBs5-0000w7-HO
 for xen-devel@lists.xenproject.org; Wed, 08 Jul 2020 15:20:29 +0000
X-Inumbo-ID: 8dd6996c-c12e-11ea-bca7-bc764e2007e4
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8dd6996c-c12e-11ea-bca7-bc764e2007e4;
 Wed, 08 Jul 2020 15:20:28 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1594221629;
 h=from:to:cc:subject:date:message-id:references:
 in-reply-to:content-id:content-transfer-encoding: mime-version;
 bh=WOMsddn/zKhCKzuqg9/VGFYaJcm5n1MgYlbeStwmZGQ=;
 b=OKclBi3EuBwAu6UOOL39kvr01I0JyoP8OeE9Wcsw0a2W+BwZqKLw43uH
 gczfyUat0F6UdwCRvq119f4ZpLiww7vg/rWqiRzO6DFJ6ro28FjvvnuUH
 4vmugCI0T+AY3Gm1v/WkH0CtZkIk9PGylcErmSCx4rTWz1sjzht8kYdco A=;
Authentication-Results: esa3.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: dFflYsRtA7q9fKlOSOu0X5bDHLayLWBAvbc9MkLQ3JoQics1HN8weY/ugGSxziABis9d+W4tdo
 k2O83Th9LZv8UcOFVX5utVAfSEqElW+9Ciys3M8mJRKjmOJ62NZ2JFyti/ESpq7oLhnS3LG9w8
 D2Bk6RxeSJuy7Isfbt2o3Y2WnvZzCc0/67BR6LfN19dOeGK9jcCh/8SKOYzy4NSPL7t7KtEmiW
 1kY+4/tN6+OopX7XOwnOltdnxB8rRlYoIJ0rN1vKZpo9qrNQyC+0z1oUy/aa5pFqepnRyJ/GNE
 6So=
X-SBRS: 2.7
X-MesageID: 21880057
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,327,1589256000"; d="scan'208";a="21880057"
From: George Dunlap <George.Dunlap@citrix.com>
To: Alistair Francis <alistair23@gmail.com>
Subject: Re: Xen and RISC-V Design Session
Thread-Topic: Xen and RISC-V Design Session
Thread-Index: AQHWVSlaXbsHlooEqk+f8JPClsuwY6j9hqiAgAAC7gD///++gIAAFI0AgAAKF4D///38AIAABMwA
Date: Wed, 8 Jul 2020 15:20:24 +0000
Message-ID: <CD72753B-2DFF-45CF-9E4C-B4AEE6813FF0@citrix.com>
References: <CAKmqyKPFMGtDLzc2RiEZR6KCcbPL6wumm+V5SNdxNf7fAowcBQ@mail.gmail.com>
 <20200708131150.GD7191@Air-de-Roger>
 <CAKmqyKOhW=YJ-WW28v-Ddt5yDDfVfCJKwijfsXo0oWAcvfrg2w@mail.gmail.com>
 <6CE81465-9F87-486F-A3CC-08857C9C4332@citrix.com>
 <CAKmqyKP5j7tQLZ8ka=CoN93X87a1LQhnMTxSeYfFo0jviMzP-w@mail.gmail.com>
 <20200708143420.GA8562@piano>
 <alpine.DEB.2.21.2007080808420.4124@sstabellini-ThinkPad-T480s>
 <CAKmqyKPrQKyz8HY00pGnS-mM8Dr5P-m69sziCJ-K8yiFoza08Q@mail.gmail.com>
In-Reply-To: <CAKmqyKPrQKyz8HY00pGnS-mM8Dr5P-m69sziCJ-K8yiFoza08Q@mail.gmail.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-mailer: Apple Mail (2.3608.80.23.2.2)
x-ms-exchange-messagesentrepresentingtype: 1
x-ms-exchange-transport-fromentityheader: Hosted
Content-Type: text/plain; charset="us-ascii"
Content-ID: <6F18A0A7D3A6B94589A800FB88BCEB1B@citrix.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Bobby Eshleman <bobbyeshleman@gmail.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Andrew Cooper <Andrew.Cooper3@citrix.com>,
 Bobby Eshleman <bobby.eshleman@starlab.io>,
 "open list:X86" <xen-devel@lists.xenproject.org>,
 Roger Pau Monne <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

OK, I made that restriction; the resulting schedule seems OK to me.  :-)

 -George

> On Jul 8, 2020, at 4:03 PM, Alistair Francis <alistair23@gmail.com> wrote:
> 
> On Wed, Jul 8, 2020 at 8:10 AM Stefano Stabellini
> <sstabellini@kernel.org> wrote:
>> 
>> On Wed, 8 Jul 2020, Bobby Eshleman wrote:
>>> On Wed, Jul 08, 2020 at 06:20:47AM -0700, Alistair Francis wrote:
>>>> 
>>>> Thanks! Just submitted the proposal.
>>>> 
>>>> It would be really great to have Bobby attend (on CC) as he has been
>>>> working on it. I'm not sure what timezone he is in though.
>>>> 
>>> 
>>> Hey Alistair,
>>> 
>>> I am on the west coast in the USA, so some of the later slots would work best
>>> for me too.
>> 
>> I'd love to attend this session. Realistically we have two sessions
>> tomorrow we could use, the 7:15AM and the 8:30AM California time, we
>> could use one for FuSa and the other for RISC-V.
> 
> Either of those work for me.
> 
> Alistair



From xen-devel-bounces@lists.xenproject.org Wed Jul 08 15:22:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jul 2020 15:22:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jtBuG-00012b-RZ; Wed, 08 Jul 2020 15:22:44 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=H35X=AT=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jtBuF-00012V-Ov
 for xen-devel@lists.xenproject.org; Wed, 08 Jul 2020 15:22:43 +0000
X-Inumbo-ID: de2a70dc-c12e-11ea-8e46-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id de2a70dc-c12e-11ea-8e46-12813bfff9fa;
 Wed, 08 Jul 2020 15:22:43 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id E45B4AFF9;
 Wed,  8 Jul 2020 15:22:42 +0000 (UTC)
Subject: Re: [PATCH] x86/mtrr: Drop workaround for old 32bit CPUs
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <20200708101443.27321-1-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <3fe7c767-99e7-f1e5-b8f6-10a011bddf8a@suse.com>
Date: Wed, 8 Jul 2020 17:22:43 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20200708101443.27321-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 08.07.2020 12:14, Andrew Cooper wrote:
> This logic is dead as Xen is 64bit-only these days.

Ah, yes, this should have long been gone.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jul 08 15:25:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jul 2020 15:25:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jtBwR-0001AB-8T; Wed, 08 Jul 2020 15:24:59 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vA5+=AT=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1jtBwP-0001A4-P2
 for xen-devel@lists.xenproject.org; Wed, 08 Jul 2020 15:24:57 +0000
X-Inumbo-ID: 2b63a8a0-c12f-11ea-8e46-12813bfff9fa
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2b63a8a0-c12f-11ea-8e46-12813bfff9fa;
 Wed, 08 Jul 2020 15:24:52 +0000 (UTC)
Received: from localhost (c-67-164-102-47.hsd1.ca.comcast.net [67.164.102.47])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256
 bits)) (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id C350220786;
 Wed,  8 Jul 2020 15:24:51 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
 s=default; t=1594221892;
 bh=J8aEgJv2hP6yfCMW9cCOzfndWOqixiCWC1FxYtawrcE=;
 h=Date:From:To:cc:Subject:In-Reply-To:References:From;
 b=ztkdtVVUiUe7iQbfU8d88S6YdwB5kxzzXZspRxmZBkrDOEEG2YWJM6D2er2fypESY
 3XHv2R/VPRtRLUsR1sbwx7XoJ7b/Ii5AWw5tTGVd32WmobhcgVY1oN7D8Yzgt90Yj4
 Imj5gRzBh/1JBbZ/hMr6fNLw0DmBcaUTfOMXZMJ0=
Date: Wed, 8 Jul 2020 08:24:51 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: George Dunlap <George.Dunlap@citrix.com>
Subject: Re: Xen and RISC-V Design Session
In-Reply-To: <CD72753B-2DFF-45CF-9E4C-B4AEE6813FF0@citrix.com>
Message-ID: <alpine.DEB.2.21.2007080824390.4124@sstabellini-ThinkPad-T480s>
References: <CAKmqyKPFMGtDLzc2RiEZR6KCcbPL6wumm+V5SNdxNf7fAowcBQ@mail.gmail.com>
 <20200708131150.GD7191@Air-de-Roger>
 <CAKmqyKOhW=YJ-WW28v-Ddt5yDDfVfCJKwijfsXo0oWAcvfrg2w@mail.gmail.com>
 <6CE81465-9F87-486F-A3CC-08857C9C4332@citrix.com>
 <CAKmqyKP5j7tQLZ8ka=CoN93X87a1LQhnMTxSeYfFo0jviMzP-w@mail.gmail.com>
 <20200708143420.GA8562@piano>
 <alpine.DEB.2.21.2007080808420.4124@sstabellini-ThinkPad-T480s>
 <CAKmqyKPrQKyz8HY00pGnS-mM8Dr5P-m69sziCJ-K8yiFoza08Q@mail.gmail.com>
 <CD72753B-2DFF-45CF-9E4C-B4AEE6813FF0@citrix.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Bobby Eshleman <bobbyeshleman@gmail.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Andrew Cooper <Andrew.Cooper3@citrix.com>,
 Bobby Eshleman <bobby.eshleman@starlab.io>,
 "open list:X86" <xen-devel@lists.xenproject.org>,
 Alistair Francis <alistair23@gmail.com>,
 Roger Pau Monne <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Perfect, thank you!

On Wed, 8 Jul 2020, George Dunlap wrote:
> OK, I made that restriction; the resulting schedule seems OK to me.  :-)
> 
>  -George
> 
> > On Jul 8, 2020, at 4:03 PM, Alistair Francis <alistair23@gmail.com> wrote:
> > 
> > On Wed, Jul 8, 2020 at 8:10 AM Stefano Stabellini
> > <sstabellini@kernel.org> wrote:
> >> 
> >> On Wed, 8 Jul 2020, Bobby Eshleman wrote:
> >>> On Wed, Jul 08, 2020 at 06:20:47AM -0700, Alistair Francis wrote:
> >>>> 
> >>>> Thanks! Just submitted the proposal.
> >>>> 
> >>>> It would be really great to have Bobby attend (on CC) as he has been
> >>>> working on it. I'm not sure what timezone he is in though.
> >>>> 
> >>> 
> >>> Hey Alistair,
> >>> 
> >>> I am on the west coast in the USA, so some of the later slots would work best
> >>> for me too.
> >> 
> >> I'd love to attend this session. Realistically we have two sessions
> >> tomorrow we could use, the 7:15AM and the 8:30AM California time, we
> >> could use one for FuSa and the other for RISC-V.
> > 
> > Either of those work for me.
> > 
> > Alistair
> 


From xen-devel-bounces@lists.xenproject.org Wed Jul 08 17:01:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jul 2020 17:01:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jtDRq-0001TA-6b; Wed, 08 Jul 2020 17:01:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=okql=AT=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jtDRo-0001Rz-MO
 for xen-devel@lists.xenproject.org; Wed, 08 Jul 2020 17:01:28 +0000
X-Inumbo-ID: a6335de8-c13c-11ea-bca7-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a6335de8-c13c-11ea-bca7-bc764e2007e4;
 Wed, 08 Jul 2020 17:01:22 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=Ui+hc3cpinPKthE/E7AN+/ey1siARBCtoMITsbuFJqU=; b=3CvUzfAiA1hHUQshzA5RKEjcH
 Bcl32UrVcKN/tjlcc7vEJv0OZb+9V8hVkd+ptMK4LyO+wWrrOatRV/KgmZptDO8iTFIjrCprmYqcy
 0DDAnAwRxHHxB1w+zP7jspl7lLhubvseNusgSJXalbdo8wWJt7Y3pf5ZxRCs/jWSjEhe8=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jtDRh-0006C5-Fv; Wed, 08 Jul 2020 17:01:21 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jtDRh-00062q-1C; Wed, 08 Jul 2020 17:01:21 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jtDRh-0004w2-0N; Wed, 08 Jul 2020 17:01:21 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151719-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 151719: tolerable FAIL - PUSHED
X-Osstest-Failures: xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=3fdc211b01b29f252166937238efe02d15cb5780
X-Osstest-Versions-That: xen=f97f99c8d88ebc108f6adc3ba74e87d53ba57c70
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 08 Jul 2020 17:01:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151719 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151719/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 151696
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 151696
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 151696
 test-armhf-armhf-xl-rtds     16 guest-start/debian.repeat    fail  like 151696
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 151696
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 151696
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 151696
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 151696
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 151696
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop             fail like 151696
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  3fdc211b01b29f252166937238efe02d15cb5780
baseline version:
 xen                  f97f99c8d88ebc108f6adc3ba74e87d53ba57c70

Last test of basis   151696  2020-07-07 03:11:56 Z    1 days
Testing same since   151719  2020-07-07 16:38:06 Z    1 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   f97f99c8d8..3fdc211b01  3fdc211b01b29f252166937238efe02d15cb5780 -> master


From xen-devel-bounces@lists.xenproject.org Wed Jul 08 19:37:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jul 2020 19:37:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jtFsj-0005O4-EH; Wed, 08 Jul 2020 19:37:25 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=okql=AT=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jtFsi-0005Nz-64
 for xen-devel@lists.xenproject.org; Wed, 08 Jul 2020 19:37:24 +0000
X-Inumbo-ID: 7170b248-c152-11ea-bca7-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7170b248-c152-11ea-bca7-bc764e2007e4;
 Wed, 08 Jul 2020 19:37:22 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=83j16kzvggRvfmD89RlbP9uGbPMPDVvlB/9qPmIxHHQ=; b=m/pRhbajalthpybMZZG6Us6w6
 Hg7yvkzvW7wewtz/ByJIosx63g/9YVCY1W1jWBhBctgDNBOiy4/xw2TJeKoUIt+0AvlRC3BdaUQCT
 0A1EdKGPBz5B4CYB/MDhl073UQ6OU4XuT4PXQct/ejcRS1II0nBU6wjPgI4iIrDZoEa4E=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jtFsf-0000h0-TL; Wed, 08 Jul 2020 19:37:21 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jtFsf-0004Ii-Kt; Wed, 08 Jul 2020 19:37:21 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jtFsf-00016F-K3; Wed, 08 Jul 2020 19:37:21 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151725-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 151725: all pass - PUSHED
X-Osstest-Versions-This: ovmf=bdafda8c457eb90c65f37026589b54258300f05c
X-Osstest-Versions-That: ovmf=627d1d6693b0594d257dbe1a3363a8d4bd4d8307
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 08 Jul 2020 19:37:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151725 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151725/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 bdafda8c457eb90c65f37026589b54258300f05c
baseline version:
 ovmf                 627d1d6693b0594d257dbe1a3363a8d4bd4d8307

Last test of basis   151590  2020-07-03 17:14:40 Z    5 days
Testing same since   151725  2020-07-07 23:40:20 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Garrett Kirkendall <garrett.kirkendall@amd.com>
  Kirkendall, Garrett <garrett.kirkendall@amd.com>
  Laszlo Ersek <lersek@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   627d1d6693..bdafda8c45  bdafda8c457eb90c65f37026589b54258300f05c -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Wed Jul 08 20:44:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jul 2020 20:44:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jtGvF-0002g2-Dn; Wed, 08 Jul 2020 20:44:05 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=okql=AT=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jtGvD-0002fY-Pq
 for xen-devel@lists.xenproject.org; Wed, 08 Jul 2020 20:44:03 +0000
X-Inumbo-ID: bdf48596-c15b-11ea-bb8b-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id bdf48596-c15b-11ea-bb8b-bc764e2007e4;
 Wed, 08 Jul 2020 20:43:56 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=x4BZCyfNq5FAECX+m3n2YDIJbbhRHWpNTAHhi3bcKmI=; b=dH5+F1nmzBtN/rOH7NrG+j+2/
 /USg4enJTH8eOSM1Bg03p6r35ePpMonZhCZ/j826L6GASpeZvPJogftr47ICGwlX+PgyXcOqh+nZI
 ViUSGqzkUV8hiwWYaBKdP20k/RCLZ1LMJ1m0otxQNzldtQ7UCwkCHNPrhQ9Tvry9MnExk=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jtGv5-0001yI-PR; Wed, 08 Jul 2020 20:43:55 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jtGv5-000116-CJ; Wed, 08 Jul 2020 20:43:55 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jtGv5-00065i-5P; Wed, 08 Jul 2020 20:43:55 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151721-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 151721: regressions - FAIL
X-Osstest-Failures: qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-amd64-libvirt-vhd:debian-di-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-libvirt-xsm:guest-start:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qcow2:debian-di-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:redhat-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-libvirt-pair:guest-start/debian:fail:regression
 qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-freebsd10-i386:guest-start:fail:regression
 qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:redhat-install:fail:regression
 qemu-mainline:test-amd64-amd64-libvirt:guest-start:fail:regression
 qemu-mainline:test-amd64-amd64-libvirt-xsm:guest-start:fail:regression
 qemu-mainline:test-amd64-i386-libvirt:guest-start:fail:regression
 qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-start:fail:regression
 qemu-mainline:test-armhf-armhf-xl-vhd:debian-di-install:fail:regression
 qemu-mainline:test-amd64-amd64-libvirt-pair:guest-start/debian:fail:regression
 qemu-mainline:test-armhf-armhf-libvirt:guest-start:fail:regression
 qemu-mainline:test-arm64-arm64-libvirt-xsm:guest-start:fail:regression
 qemu-mainline:test-armhf-armhf-libvirt-raw:debian-di-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: qemuu=c8eaf81fd22638691c5bdcc7d723d31fbb80ff6f
X-Osstest-Versions-That: qemuu=9e3903136d9acde2fb2dd9e967ba928050a6cb4a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 08 Jul 2020 20:43:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151721 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151721/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-ovmf-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-win7-amd64 10 windows-install  fail REGR. vs. 151065
 test-amd64-amd64-libvirt-vhd 10 debian-di-install        fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-win7-amd64 10 windows-install   fail REGR. vs. 151065
 test-amd64-amd64-qemuu-nested-intel 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-libvirt-xsm  12 guest-start              fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-ws16-amd64 10 windows-install  fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-ovmf-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-qemuu-nested-amd 10 debian-hvm-install  fail REGR. vs. 151065
 test-amd64-amd64-xl-qcow2    10 debian-di-install        fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-qemuu-rhel6hvm-amd 10 redhat-install     fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-debianhvm-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-ws16-amd64 10 windows-install   fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-libvirt-pair 21 guest-start/debian       fail REGR. vs. 151065
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-freebsd10-i386 11 guest-start            fail REGR. vs. 151065
 test-amd64-i386-qemuu-rhel6hvm-intel 10 redhat-install   fail REGR. vs. 151065
 test-amd64-amd64-libvirt     12 guest-start              fail REGR. vs. 151065
 test-amd64-amd64-libvirt-xsm 12 guest-start              fail REGR. vs. 151065
 test-amd64-i386-libvirt      12 guest-start              fail REGR. vs. 151065
 test-amd64-i386-freebsd10-amd64 11 guest-start           fail REGR. vs. 151065
 test-armhf-armhf-xl-vhd      10 debian-di-install        fail REGR. vs. 151065
 test-amd64-amd64-libvirt-pair 21 guest-start/debian      fail REGR. vs. 151065
 test-armhf-armhf-libvirt     12 guest-start              fail REGR. vs. 151065
 test-arm64-arm64-libvirt-xsm 12 guest-start              fail REGR. vs. 151065
 test-armhf-armhf-libvirt-raw 10 debian-di-install        fail REGR. vs. 151065

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                c8eaf81fd22638691c5bdcc7d723d31fbb80ff6f
baseline version:
 qemuu                9e3903136d9acde2fb2dd9e967ba928050a6cb4a

Last test of basis   151065  2020-06-12 22:27:51 Z   25 days
Failing since        151101  2020-06-14 08:32:51 Z   24 days   31 attempts
Testing same since   151721  2020-07-07 18:37:11 Z    1 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ahmed Karaman <ahmedkhaledkaraman@gmail.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Bulekov <alxndr@bu.edu>
  Alexey Krasikov <alex-krasikov@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Allan Peramaki <aperamak@pp1.inet.fi>
  Andrew Jones <drjones@redhat.com>
  Ani Sinha <ani.sinha@nutanix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Ard Biesheuvel <ardb@kernel.org>
  Artyom Tarasenko <atar4qemu@gmail.com>
  Aurelien Jarno <aurelien@aurel32.net>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Beata Michalska <beata.michalska@linaro.org>
  Bin Meng <bin.meng@windriver.com>
  Cathy Zhang <cathy.zhang@intel.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Cindy Lu <lulu@redhat.com>
  Claudio Fontana <cfontana@suse.de>
  Cleber Rosa <crosa@redhat.com>
  Colin Xu <colin.xu@intel.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniele Buono <dbuono@linux.vnet.ibm.com>
  David Carlier <devnexen@gmail.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Derek Su <dereksu@qnap.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Emilio G. Cota <cota@braap.org>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Evgeny Yakovlev <eyakovlev@virtuozzo.com>
  fangying <fangying1@huawei.com>
  Farhan Ali <alifm@linux.ibm.com>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Geoffrey McRae <geoff@hostfission.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Giuseppe Musacchio <thatlemon@gmail.com>
  Greg Kurz <groug@kaod.org>
  Guenter Roeck <linux@roeck-us.net>
  Gustavo Romero <gromero@linux.ibm.com>
  Halil Pasic <pasic@linux.ibm.com>
  Helge Deller <deller@gmx.de>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Ian Jiang <ianjiang.ict@gmail.com>
  Igor Mammedov <imammedo@redhat.com>
  Janne Grunau <j@jannau.net>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Wang <jasowang@redhat.com>
  Jay Zhou <jianjay.zhou@huawei.com>
  Jean-Christophe Dubois <jcd@tribudubois.net>
  Jessica Clarke <jrtc27@jrtc27.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jingqi Liu <jingqi.liu@intel.com>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Joseph Myers <joseph@codesourcery.com>
  Julio Faracco <jcfaracco@gmail.com>
  Keqian Zhu <zhukeqian1@huawei.com>
  Kevin Wolf <kwolf@redhat.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Leonid Bloch <lb.workbox@gmail.com>
  Leonid Bloch <lbloch@janustech.com>
  Li Qiang <liq3ea@gmail.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  lichun <lichun@ruijie.com.cn>
  Like Xu <like.xu@linux.intel.com>
  Lingfeng Yang <lfy@google.com>
  Lingshan zhu <lingshan.zhu@intel.com>
  Liran Alon <liran.alon@oracle.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Lukas Straub <lukasstraub2@web.de>
  Maciej S. Szmigiero <maciej.szmigiero@oracle.com>
  Magnus Damm <magnus.damm@gmail.com>
  Mao Zhongyi <maozhongyi@cmss.chinamobile.com>
  Marcelo Tosatti <mtosatti@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Masahiro Yamada <masahiroy@kernel.org>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxime Coquelin <maxime.coquelin@redhat.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michele Denber <denber@mindspring.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Raphael Norwitz <raphael.norwitz@nutanix.com>
  Richard Henderson <richard.henderson@linaro.org>
  Richard W.M. Jones <rjones@redhat.com>
  Riku Voipio <riku.voipio@linaro.org>
  Robert Foley <robert.foley@linaro.org>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Roman Kagan <rkagan@virtuozzo.com>
  Roman Kagan <rvkagan@yandex-team.ru>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Sebastian Rasmussen <sebras@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Tao Xu <tao3.xu@intel.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tiwei Bie <tiwei.bie@intel.com>
  Tong Ho <tong.ho@xilinx.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  WangBowen <bowen.wang@intel.com>
  Wei Huang <wei.huang2@amd.com>
  Wei Wang <wei.w.wang@intel.com>
  Ying Fang <fangying1@huawei.com>
  Yoshinori Sato <ysato@users.sourceforge.jp>
  Yuri Benditovich <yuri.benditovich@daynix.com>
  Zhang Chen <chen.zhang@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 19299 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Jul 08 23:07:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 08 Jul 2020 23:07:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jtJAA-0005dl-5X; Wed, 08 Jul 2020 23:07:38 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=okql=AT=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jtJA8-0005dg-QB
 for xen-devel@lists.xenproject.org; Wed, 08 Jul 2020 23:07:36 +0000
X-Inumbo-ID: cedd390c-c16f-11ea-8496-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id cedd390c-c16f-11ea-8496-bc764e2007e4;
 Wed, 08 Jul 2020 23:07:34 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=cSTmcSITX2Xp+bnr6AA1DQAVPRopuKri97v5bAcYVL4=; b=lEqWb3vGn8CnTfYNFgYzqeOiY
 cwK+QoNIG/RjB4xSj8txD1Xcu7DqS9O/O27DCrucO3zcTmEs4zuF184SLkIMrTpy6sIW5tJY6GHai
 zzzKpZsLzG42Yv0G8SjmpfaYEJq7ugiHqOKgV1JtT8gDDyoI14OYK4vWzqNRyFdjPmCf8=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jtJA6-0004cv-13; Wed, 08 Jul 2020 23:07:34 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jtJA5-0001Pz-Nx; Wed, 08 Jul 2020 23:07:33 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jtJA5-0006Xf-Mu; Wed, 08 Jul 2020 23:07:33 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151722-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 151722: regressions - FAIL
X-Osstest-Failures: linux-linus:test-arm64-arm64-libvirt-xsm:guest-start/debian.repeat:fail:regression
 linux-linus:build-i386-pvops:kernel-build:fail:regression
 linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:heisenbug
 linux-linus:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:heisenbug
 linux-linus:test-armhf-armhf-xl-rtds:guest-start:fail:heisenbug
 linux-linus:test-armhf-armhf-xl-rtds:guest-start.2:fail:allowable
 linux-linus:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-pair:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-examine:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
X-Osstest-Versions-This: linux=bfe91da29bfad9941d5d703d45e29f0812a20724
X-Osstest-Versions-That: linux=1b5044021070efa3259f3e9548dc35d1eb6aa844
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 08 Jul 2020 23:07:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151722 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151722/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-libvirt-xsm 16 guest-start/debian.repeat fail REGR. vs. 151214
 build-i386-pvops              6 kernel-build             fail REGR. vs. 151214

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl-arndale   7 xen-boot         fail in 151690 pass in 151722
 test-armhf-armhf-xl-rtds 16 guest-start/debian.repeat fail in 151705 pass in 151690
 test-armhf-armhf-xl-rtds     12 guest-start                fail pass in 151705

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds     17 guest-start.2  fail in 151690 REGR. vs. 151214

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemut-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-examine       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemut-ws16-amd64  1 build-check(1)              blocked n/a
 test-armhf-armhf-xl-rtds    13 migrate-support-check fail in 151705 never pass
 test-armhf-armhf-xl-rtds 14 saverestore-support-check fail in 151705 never pass
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 151214
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 151214
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 151214
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 151214
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 151214
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 151214
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass

version targeted for testing:
 linux                bfe91da29bfad9941d5d703d45e29f0812a20724
baseline version:
 linux                1b5044021070efa3259f3e9548dc35d1eb6aa844

Last test of basis   151214  2020-06-18 02:27:46 Z   20 days
Failing since        151236  2020-06-19 19:10:35 Z   19 days   28 attempts
Testing same since   151690  2020-07-06 23:10:49 Z    1 days    3 attempts

------------------------------------------------------------
577 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             fail    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         blocked 
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 28056 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Jul 09 00:28:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jul 2020 00:28:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jtKQQ-0004N6-EK; Thu, 09 Jul 2020 00:28:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=BKwJ=AU=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jtKQO-0004N1-HJ
 for xen-devel@lists.xenproject.org; Thu, 09 Jul 2020 00:28:28 +0000
X-Inumbo-ID: 1b65731a-c17b-11ea-b7bb-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1b65731a-c17b-11ea-b7bb-bc764e2007e4;
 Thu, 09 Jul 2020 00:28:27 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=fkBDi18tr5bh4YvlL/8oU0UtJ3Hr3LRBgsQeQJJhT6w=; b=Se9iBf//yu9w6aHfze1FXG/Fg
 v2BcFZLv8k+MNcqOe44gSh7cnSJIZjeCS3gLpqWr7QKTR9m9gCPa5XHA6wox8g7ENwGsGAxoXnrlG
 ibdCsyU1HtEQ3W795GPm0AXrREtPJeL+x9pld40KiEkufintxqtYNtKIwYsG/RbZdlXHA=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jtKQM-0006mY-Lf; Thu, 09 Jul 2020 00:28:26 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jtKQM-0004CG-Ay; Thu, 09 Jul 2020 00:28:26 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jtKQM-0003kJ-AJ; Thu, 09 Jul 2020 00:28:26 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151729-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 151729: regressions - FAIL
X-Osstest-Failures: libvirt:test-amd64-amd64-libvirt-pair:guest-migrate/src_host/dst_host:fail:regression
 libvirt:test-amd64-i386-libvirt-pair:guest-migrate/src_host/dst_host:fail:regression
 libvirt:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 libvirt:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 libvirt:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 libvirt:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 libvirt:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 libvirt:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 libvirt:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 libvirt:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 libvirt:test-arm64-arm64-libvirt-qcow2:migrate-support-check:fail:nonblocking
 libvirt:test-arm64-arm64-libvirt-qcow2:saverestore-support-check:fail:nonblocking
 libvirt:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 libvirt:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 libvirt:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 libvirt:test-arm64-arm64-libvirt:migrate-support-check:fail:nonblocking
 libvirt:test-arm64-arm64-libvirt:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: libvirt=26daf37623395c18e07bc86f58363f40e8f4235b
X-Osstest-Versions-That: libvirt=e7998ebeaf15e4e8825be0dd97aa1316f194f00d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 09 Jul 2020 00:28:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151729 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151729/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-pair 22 guest-migrate/src_host/dst_host fail REGR. vs. 151496
 test-amd64-i386-libvirt-pair 22 guest-migrate/src_host/dst_host fail REGR. vs. 151496

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 151496
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 151496
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-qcow2 12 migrate-support-check        fail never pass
 test-arm64-arm64-libvirt-qcow2 13 saverestore-support-check    fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt     14 saverestore-support-check    fail   never pass

version targeted for testing:
 libvirt              26daf37623395c18e07bc86f58363f40e8f4235b
baseline version:
 libvirt              e7998ebeaf15e4e8825be0dd97aa1316f194f00d

Last test of basis   151496  2020-07-01 04:23:43 Z    7 days
Failing since        151527  2020-07-02 04:29:15 Z    6 days    7 attempts
Testing same since   151729  2020-07-08 04:20:15 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrea Bolognani <abologna@redhat.com>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniel Veillard <veillard@redhat.com>
  Laine Stump <laine@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Yanqiu Zhang <yanqzhan@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-libvirt                                     pass    
 test-arm64-arm64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-i386-libvirt-pair                                 fail    
 test-arm64-arm64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 554 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Jul 09 01:23:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jul 2020 01:23:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jtLHF-0001vJ-96; Thu, 09 Jul 2020 01:23:05 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=BKwJ=AU=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jtLHE-0001ul-41
 for xen-devel@lists.xenproject.org; Thu, 09 Jul 2020 01:23:04 +0000
X-Inumbo-ID: b9a03716-c182-11ea-8e92-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b9a03716-c182-11ea-8e92-12813bfff9fa;
 Thu, 09 Jul 2020 01:22:59 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=h2bqwpavCTxf+8kSBWWmkARRv3wyQW3vlaLlkQMXhpc=; b=4JjmmQGp5+21QcigWx0Eb27Zc
 pNw/epGNm/hmilrjFxOslZ72rNNajsEcVjGJr7a0rlejWPGyJ8zdmI1f4C6t9vK3Jj961S0/RTgS9
 N6PlBtxFNGtrlK/bLixcy4r4+99S9YBo9Vpw8PHdCxzB1TqS8dgSrdwrCjK8WHz8Nafxc=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jtLH9-0000ko-0P; Thu, 09 Jul 2020 01:22:59 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jtLH8-0008Un-MW; Thu, 09 Jul 2020 01:22:58 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jtLH8-0006zF-Lr; Thu, 09 Jul 2020 01:22:58 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151728-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.10-testing test] 151728: tolerable trouble: fail/pass/starved
 - PUSHED
X-Osstest-Failures: xen-4.10-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:guest-saverestore.2:fail:heisenbug
 xen-4.10-testing:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:heisenbug
 xen-4.10-testing:test-xtf-amd64-amd64-1:xtf/test-hvm64-lbr-tsx-vmentry:fail:heisenbug
 xen-4.10-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-4.10-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 xen-4.10-testing:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:nonblocking
 xen-4.10-testing:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:nonblocking
 xen-4.10-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-4.10-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-4.10-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-4.10-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 xen-4.10-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-4.10-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 xen-4.10-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 xen-4.10-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 xen-4.10-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 xen-4.10-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-4.10-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 xen-4.10-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 xen-4.10-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-4.10-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-4.10-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-4.10-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-4.10-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 xen-4.10-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 xen-4.10-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-4.10-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 xen-4.10-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 xen-4.10-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-4.10-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 xen-4.10-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 xen-4.10-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 xen-4.10-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 xen-4.10-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 xen-4.10-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 xen-4.10-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 xen-4.10-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-4.10-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 xen-4.10-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 xen-4.10-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 xen-4.10-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 xen-4.10-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-4.10-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 xen-4.10-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 xen-4.10-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-4.10-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-4.10-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-4.10-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-4.10-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-4.10-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 xen-4.10-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 xen-4.10-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-4.10-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-4.10-testing:test-arm64-arm64-xl-thunderx:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This: xen=93be943e7d759015bd5db41a48f6dce58e580d5a
X-Osstest-Versions-That: xen=fd6e49ecae03840610fdc6a416a638590c0b6535
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 09 Jul 2020 01:22:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151728 xen-4.10-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151728/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 14 guest-saverestore.2 fail in 151713 pass in 151728
 test-armhf-armhf-xl-vhd 15 guest-start/debian.repeat fail in 151713 pass in 151728
 test-xtf-amd64-amd64-1   51 xtf/test-hvm64-lbr-tsx-vmentry fail pass in 151713

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop             fail like 151255
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2 fail like 151255
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail never pass
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop             fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop             fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop             fail never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop             fail never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-arm64-arm64-xl-thunderx  2 hosts-allocate               starved  n/a

version targeted for testing:
 xen                  93be943e7d759015bd5db41a48f6dce58e580d5a
baseline version:
 xen                  fd6e49ecae03840610fdc6a416a638590c0b6535

Last test of basis   151255  2020-06-20 12:32:09 Z   18 days
Testing same since   151713  2020-07-07 13:35:49 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 starved 
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   fd6e49ecae..93be943e7d  93be943e7d759015bd5db41a48f6dce58e580d5a -> stable-4.10


From xen-devel-bounces@lists.xenproject.org Thu Jul 09 04:58:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jul 2020 04:58:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jtOdp-0002qK-JC; Thu, 09 Jul 2020 04:58:37 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=BKwJ=AU=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jtOdo-0002q0-NJ
 for xen-devel@lists.xenproject.org; Thu, 09 Jul 2020 04:58:36 +0000
X-Inumbo-ID: d4ec9d84-c1a0-11ea-b7bb-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d4ec9d84-c1a0-11ea-b7bb-bc764e2007e4;
 Thu, 09 Jul 2020 04:58:30 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=+jdlYi1EZX86sptgJEViSg1W4VXZDtD4qzt1Tn7CynY=; b=5Ko4v/3doBUseXs9jmBwtzR0U
 T76rZU+JduYzFmlwIK9nBrnqD7v9wpL4vGxMCG2PVFljIbnOBcvW6v1x1Cl3rRmOJEtzTXNLnqz50
 R03E65BaZroM0j5qaG8tUWt4u7IJaaj8DaP0/h654jCg7x5z2OjayLZvPHJQp+WksaxaI=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jtOdh-0005GC-9A; Thu, 09 Jul 2020 04:58:29 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jtOdg-0004PT-OY; Thu, 09 Jul 2020 04:58:28 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jtOdg-0006Gw-MM; Thu, 09 Jul 2020 04:58:28 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151735-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.9-testing test] 151735: tolerable trouble: fail/pass/starved -
 PUSHED
X-Osstest-Failures: xen-4.9-testing:test-amd64-i386-xl-qemut-debianhvm-amd64:guest-saverestore.2:fail:heisenbug
 xen-4.9-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-saverestore.2:fail:heisenbug
 xen-4.9-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-localmigrate/x10:fail:heisenbug
 xen-4.9-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-4.9-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-4.9-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-4.9-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-localmigrate/x10:fail:nonblocking
 xen-4.9-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-4.9-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-localmigrate/x10:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
 xen-4.9-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-localmigrate/x10:fail:nonblocking
 xen-4.9-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-localmigrate/x10:fail:nonblocking
 xen-4.9-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-localmigrate/x10:fail:nonblocking
 xen-4.9-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-4.9-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-4.9-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-4.9-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-4.9-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 xen-4.9-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 xen-4.9-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 xen-4.9-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 xen-4.9-testing:test-arm64-arm64-xl-thunderx:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This: xen=4597fc97b3b8870c39214e3aa4132ab711a40691
X-Osstest-Versions-That: xen=6e477c2ea4d5c26a7a7b2f850166aa79edc5225c
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 09 Jul 2020 04:58:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151735 xen-4.9-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151735/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-qemut-debianhvm-amd64 15 guest-saverestore.2 fail in 151716 pass in 151735
 test-amd64-amd64-xl-qemuu-ws16-amd64 15 guest-saverestore.2 fail in 151716 pass in 151735
 test-amd64-i386-xl-qemut-win7-amd64 16 guest-localmigrate/x10 fail pass in 151716

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop      fail blocked in 151223
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop fail in 151716 blocked in 151223
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop   fail in 151716 like 151223
 test-amd64-amd64-xl-qemut-win7-amd64 16 guest-localmigrate/x10 fail in 151716 like 151223
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 151223
 test-amd64-amd64-xl-qemuu-win7-amd64 16 guest-localmigrate/x10 fail like 151223
 test-armhf-armhf-xl-rtds     16 guest-start/debian.repeat    fail  like 151223
 test-amd64-i386-xl-qemut-ws16-amd64 16 guest-localmigrate/x10 fail like 151223
 test-amd64-amd64-xl-qemuu-ws16-amd64 16 guest-localmigrate/x10 fail like 151223
 test-amd64-amd64-xl-qemut-ws16-amd64 16 guest-localmigrate/x10 fail like 151223
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop             fail like 151223
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx  2 hosts-allocate               starved  n/a

version targeted for testing:
 xen                  4597fc97b3b8870c39214e3aa4132ab711a40691
baseline version:
 xen                  6e477c2ea4d5c26a7a7b2f850166aa79edc5225c

Last test of basis   151223  2020-06-18 15:48:44 Z   20 days
Testing same since   151716  2020-07-07 14:05:23 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 starved 
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   6e477c2ea4..4597fc97b3  4597fc97b3b8870c39214e3aa4132ab711a40691 -> stable-4.9


From xen-devel-bounces@lists.xenproject.org Thu Jul 09 05:57:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jul 2020 05:57:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jtPYO-00089S-PA; Thu, 09 Jul 2020 05:57:04 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=C1N0=AU=pfupf.net=jg@srs-us1.protection.inumbo.net>)
 id 1jtPYN-00089N-NE
 for xen-devel@lists.xenproject.org; Thu, 09 Jul 2020 05:57:03 +0000
X-Inumbo-ID: 00c7fe50-c1a9-11ea-8ea6-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 00c7fe50-c1a9-11ea-8ea6-12813bfff9fa;
 Thu, 09 Jul 2020 05:57:00 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id A0682ABDE
 for <xen-devel@lists.xenproject.org>; Thu,  9 Jul 2020 05:56:59 +0000 (UTC)
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jg@pfupf.net>
Subject: Followup of yesterday's design session "refactoring the REST"
Message-ID: <a578fb24-cc6a-b3bd-b83d-3f7b9b1302cf@pfupf.net>
Date: Thu, 9 Jul 2020 07:56:58 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.9.0
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="------------46943642394ECFC057364D0A"
Content-Language: en-US
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

This is a multi-part message in MIME format.
--------------46943642394ECFC057364D0A
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 7bit

Yesterday's design session at Xen Developer Summit "Hypervisor Team: .."
had one topic regarding whether we should find specific maintainers of
all the files currently assigned to "THE REST" in order to lower the
amount of reviews for those assigned to be "THE REST" maintainers.

Modifying the MAINTAINERS file adding "REST@x.y" as REST maintainer
and running the rune:

git ls-files | while true; do f=`line`; [ "$f" = "" ] && exit; \
echo $f `./scripts/get_maintainer.pl -f $f | awk '{print $(NF)}'`; \
done | awk '/REST/ { print $1}'

shows that basically the following files are covered by "THE REST":

- files directly in /
- config/
- most files in docs/ (not docs/man/)
- misc/ (only one file)
- scripts/
- lots of files in xen/common/
- xen/crypto/
- lots of files in xen/drivers/
- lots of files in xen/include/
- xen/scripts/
- some files in xen/tools/

I have attached the file list.
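As a self-contained sketch of what the rune above computes (with hypothetical sample pairs standing in for the real `git ls-files` / `./scripts/get_maintainer.pl -f` output -- the file names and addresses below are placeholders, not taken from the attached list):

```shell
# Hypothetical "file maintainer" pairs; in the real rune these are
# produced by git ls-files piped through ./scripts/get_maintainer.pl -f,
# keeping only the last awk field ($NF) of the script's output.
pairs='Makefile REST@x.y
docs/man/xl.1.pod someone@example.org
xen/common/domain.c REST@x.y'

# Keep only the file names whose maintainer field matches REST,
# mirroring the trailing awk stage of the rune.
printf '%s\n' "$pairs" | awk '/REST/ { print $1 }'
```

Note that the `line` utility in the rune reads a single line from stdin; on systems that lack it, `git ls-files | while read -r f; do ...; done` is the usual portable replacement.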

So the basic idea to have a "hypervisor REST" and a "tools REST"
wouldn't make a huge difference, if we don't assign docs/ to "tools
REST".

So I think it would make sense to:

- look through the docs/ and xen/include/ files whether some of those
   can be assigned to a component already having dedicated maintainers

- try to find maintainers for the other files, especially those in
   xen/common/ and xen/drivers/ (including the related include files, of
   course)

- if any of the REST maintainers doesn't want to receive mails for a
   group of the remaining REST files split the REST maintainers/files up
   accordingly

Thoughts?


Juergen

--------------46943642394ECFC057364D0A
Content-Type: text/plain; charset=UTF-8;
 name="rest.txt"
Content-Transfer-Encoding: base64
Content-Disposition: attachment;
 filename="rest.txt"

LmdpdGFyY2hpdmUtaW5mbwouZ2l0YXR0cmlidXRlcwouZ2l0aWdub3JlCi5oZ2lnbm9yZQou
aGdzaWdzCi5oZ3RhZ3MKQ09ESU5HX1NUWUxFCkNPTlRSSUJVVElORwpDT1BZSU5HCkNSRURJ
VFMKQ29uZmlnLm1rCklOU1RBTEwKTUFJTlRBSU5FUlMKTWFrZWZpbGUKUkVBRE1FClNVUFBP
UlQubWQKY29uZmlnLmd1ZXNzCmNvbmZpZy5zdWIKY29uZmlnL0ZyZWVCU0QubWsKY29uZmln
L0xpbnV4Lm1rCmNvbmZpZy9OZXRCU0QubWsKY29uZmlnL05ldEJTRFJ1bXAubWsKY29uZmln
L09wZW5CU0QubWsKY29uZmlnL1N0ZEdOVS5tawpjb25maWcvU3VuT1MubWsKY29uZmlnL2Fy
bTMyLm1rCmNvbmZpZy9hcm02NC5tawpjb25maWcveDg2XzMyLm1rCmNvbmZpZy94ODZfNjQu
bWsKZG9jcy9JTkRFWApkb2NzL1JFQURNRS5jb2xvCmRvY3MvUkVBRE1FLnNvdXJjZQpkb2Nz
L2FkbWluLWd1aWRlL2luZGV4LnJzdApkb2NzL2FkbWluLWd1aWRlL2ludHJvZHVjdGlvbi5y
c3QKZG9jcy9hZG1pbi1ndWlkZS9taWNyb2NvZGUtbG9hZGluZy5yc3QKZG9jcy9hZG1pbi1n
dWlkZS94ZW4tb3ZlcnZpZXcuZHJhd2lvLnN2Zwpkb2NzL2NvbmYucHkKZG9jcy9kZXNpZ25z
L2FyZ28ucGFuZG9jCmRvY3MvZGVzaWducy9kbW9wLnBhbmRvYwpkb2NzL2Rlc2lnbnMvbm9u
LWNvb3BlcmF0aXZlLW1pZ3JhdGlvbi5tZApkb2NzL2Rlc2lnbnMvcWVtdS1kZXByaXZpbGVn
ZS5tZApkb2NzL2Rlc2lnbnMveGVuc3RvcmUtbWlncmF0aW9uLm1kCmRvY3MvZmVhdHVyZXMv
ZG9tMGxlc3MucGFuZG9jCmRvY3MvZmVhdHVyZXMvZmVhdHVyZS1sZXZlbGxpbmcucGFuZG9j
CmRvY3MvZmVhdHVyZXMvaHlwZXJ2aXNvcmZzLnBhbmRvYwpkb2NzL2ZlYXR1cmVzL2ludGVs
X3Bzcl9jYXRfY2RwLnBhbmRvYwpkb2NzL2ZlYXR1cmVzL2ludGVsX3Bzcl9tYmEucGFuZG9j
CmRvY3MvZmVhdHVyZXMvbGl2ZXBhdGNoLnBhbmRvYwpkb2NzL2ZlYXR1cmVzL21pZ3JhdGlv
bi5wYW5kb2MKZG9jcy9mZWF0dXJlcy9xZW11LWRlcHJpdmlsZWdlLnBhbmRvYwpkb2NzL2Zl
YXR1cmVzL3NjaGVkX2NyZWRpdC5wYW5kb2MKZG9jcy9mZWF0dXJlcy9zY2hlZF9jcmVkaXQy
LnBhbmRvYwpkb2NzL2ZlYXR1cmVzL3NjaGVkX3J0ZHMucGFuZG9jCmRvY3MvZmVhdHVyZXMv
dGVtcGxhdGUucGFuZG9jCmRvY3MvZmlncy9NYWtlZmlsZQpkb2NzL2ZpZ3MvbmV0d29yay1i
YXNpYy5maWcKZG9jcy9maWdzL25ldHdvcmstYnJpZGdlLmZpZwpkb2NzL2ZpZ3MveGVubG9n
by5lcHMKZG9jcy9nZW4taHRtbC1pbmRleApkb2NzL2dsb3NzYXJ5LnJzdApkb2NzL2d1ZXN0
LWd1aWRlL2luZGV4LnJzdApkb2NzL2d1ZXN0LWd1aWRlL3g4Ni9oeXBlcmNhbGwtYWJpLnJz
dApkb2NzL2d1ZXN0LWd1aWRlL3g4Ni9pbmRleC5yc3QKZG9jcy9oeXBlcnZpc29yLWd1aWRl
L2NvZGUtY292ZXJhZ2UucnN0CmRvY3MvaHlwZXJ2aXNvci1ndWlkZS9pbmRleC5yc3QKZG9j
cy9oeXBlcnZpc29yLWd1aWRlL3g4Ni9ob3cteGVuLWJvb3RzLnJzdApkb2NzL2h5cGVydmlz
b3ItZ3VpZGUveDg2L2luZGV4LnJzdApkb2NzL2luZGV4LnJzdApkb2NzL21pc2MvOXBmcy5w
YW5kb2MKZG9jcy9taXNjL2FtZC11Y29kZS1jb250YWluZXIudHh0CmRvY3MvbWlzYy9ibG9j
ay1zY3JpcHRzLnR4dApkb2NzL21pc2MvY29uc29sZS50eHQKZG9jcy9taXNjL2NyYXNoZGIu
dHh0CmRvY3MvbWlzYy9kaXN0cm9fbWFwcGluZy50eHQKZG9jcy9taXNjL2R1bXAtY29yZS1m
b3JtYXQudHh0CmRvY3MvbWlzYy9lZmkucGFuZG9jCmRvY3MvbWlzYy9ncmFudC10YWJsZXMu
dHh0CmRvY3MvbWlzYy9odm0tZW11bGF0ZWQtdW5wbHVnLnBhbmRvYwpkb2NzL21pc2MvaHlw
ZnMtcGF0aHMucGFuZG9jCmRvY3MvbWlzYy9rY29uZmlnLWxhbmd1YWdlLnJzdApkb2NzL21p
c2Mva2NvbmZpZy1tYWNyby1sYW5ndWFnZS5yc3QKZG9jcy9taXNjL2tjb25maWcucnN0CmRv
Y3MvbWlzYy9rZXhlY19hbmRfa2R1bXAudHh0CmRvY3MvbWlzYy9saWJ4bF9tZW1vcnkudHh0
CmRvY3MvbWlzYy9uZXRpZi1zdGFnaW5nLWdyYW50cy5wYW5kb2MKZG9jcy9taXNjL3ByaW50
ay1mb3JtYXRzLnR4dApkb2NzL21pc2MvcHYtZHJpdmVycy1saWZlY3ljbGUucGFuZG9jCmRv
Y3MvbWlzYy9wdmNhbGxzLnBhbmRvYwpkb2NzL21pc2MvcHZoLnBhbmRvYwpkb2NzL21pc2Mv
cWVtdS1iYWNrZW5kcy50eHQKZG9jcy9taXNjL3N0YXR1cy1vdmVycmlkZS10YWJsZS1zcGVj
LmZvZHQKZG9jcy9taXNjL3N0dWJkb20udHh0CmRvY3MvbWlzYy9zdXBwb3J0LW1hdHJpeC1o
ZWFkLmh0bWwKZG9jcy9taXNjL3Z0ZC1waS50eHQKZG9jcy9taXNjL3Z0ZC50eHQKZG9jcy9t
aXNjL3g4Ni14ZW5wdi1ib290bG9hZGVyLnBhbmRvYwpkb2NzL21pc2MveGVuLWNvbW1hbmQt
bGluZS5wYW5kb2MKZG9jcy9taXNjL3hlbi1lbnYtdGFibGUtc3BlYy5mb2R0CmRvY3MvbWlz
Yy94ZW4tZXJyb3ItaGFuZGxpbmcudHh0CmRvY3MvbWlzYy94ZW4tbWFrZWZpbGVzL21ha2Vm
aWxlcy5yc3QKZG9jcy9taXNjL3hlbl9jb25maWcuaHRtbApkb2NzL21pc2MveGVubW9uLnR4
dApkb2NzL21pc2MveGVucGFnaW5nLnR4dApkb2NzL21pc2MveGVuc3RvcmUtcGF0aHMucGFu
ZG9jCmRvY3MvbWlzYy94ZW5zdG9yZS1yaW5nLnR4dApkb2NzL21pc2MveGVuc3RvcmUudHh0
CmRvY3MvbWlzYy94bC1wc3IucGFuZG9jCmRvY3MvcGFyc2Utc3VwcG9ydC1tZApkb2NzL3By
b2Nlc3MvUlVCUklDCmRvY3MvcHJvY2Vzcy9icmFuY2hpbmctY2hlY2tsaXN0LnR4dApkb2Nz
L3Byb2Nlc3MvcmVsZWFzZS10ZWNobmljaWFuLWNoZWNrbGlzdC50eHQKZG9jcy9wcm9jZXNz
L3RhZ3MucGFuZG9jCmRvY3MvcHJvY2Vzcy94ZW4tcmVsZWFzZS1tYW5hZ2VtZW50LnBhbmRv
Ywpkb2NzL3NwZWNzL2xpYnhjLW1pZ3JhdGlvbi1zdHJlYW0ucGFuZG9jCmRvY3Mvc3BlY3Mv
bGlieGwtbWlncmF0aW9uLXN0cmVhbS5wYW5kb2MKZG9jcy9zdXBwb3J0LW1hdHJpeC1nZW5l
cmF0ZQpkb2NzL3hlbi1oZWFkZXJzCm1pc2MvY292ZXJpdHkvbW9kZWwuYwpzY3JpcHRzL2Fk
ZF9tYWludGFpbmVycy5wbApzY3JpcHRzL2dldF9tYWludGFpbmVyLnBsCnNjcmlwdHMvZ2l0
LWNoZWNrb3V0LnNoCnZlcnNpb24uc2gKeGVuL0NPUFlJTkcKeGVuL0tjb25maWcKeGVuL0tj
b25maWcuZGVidWcKeGVuL01ha2VmaWxlCnhlbi9SdWxlcy5tawp4ZW4vYXJjaC9LY29uZmln
Cnhlbi9jb21tb24vQ09QWUlORwp4ZW4vY29tbW9uL0tjb25maWcKeGVuL2NvbW1vbi9NYWtl
ZmlsZQp4ZW4vY29tbW9uL1JFQURNRS5zb3VyY2UKeGVuL2NvbW1vbi9iaXRtYXAuYwp4ZW4v
Y29tbW9uL2JzZWFyY2guYwp4ZW4vY29tbW9uL2J1bnppcDIuYwp4ZW4vY29tbW9uL2NvbXBh
dC9kb21haW4uYwp4ZW4vY29tbW9uL2NvbXBhdC9ncmFudF90YWJsZS5jCnhlbi9jb21tb24v
Y29tcGF0L2tlcm5lbC5jCnhlbi9jb21tb24vY29tcGF0L21lbW9yeS5jCnhlbi9jb21tb24v
Y29tcGF0L211bHRpY2FsbC5jCnhlbi9jb21tb24vY29tcGF0L3hlbm9wcm9mLmMKeGVuL2Nv
bW1vbi9jb21wYXQveGxhdC5jCnhlbi9jb21tb24vY29yZV9wYXJraW5nLmMKeGVuL2NvbW1v
bi9jb3ZlcmFnZS9NYWtlZmlsZQp4ZW4vY29tbW9uL2NvdmVyYWdlL2NvdmVyYWdlLmMKeGVu
L2NvbW1vbi9jb3ZlcmFnZS9jb3ZlcmFnZS5oCnhlbi9jb21tb24vY292ZXJhZ2UvZ2NjXzNf
NC5jCnhlbi9jb21tb24vY292ZXJhZ2UvZ2NjXzRfNy5jCnhlbi9jb21tb24vY292ZXJhZ2Uv
Z2NjXzRfOS5jCnhlbi9jb21tb24vY292ZXJhZ2UvZ2NjXzUuYwp4ZW4vY29tbW9uL2NvdmVy
YWdlL2djY183LmMKeGVuL2NvbW1vbi9jb3ZlcmFnZS9nY292LmMKeGVuL2NvbW1vbi9jb3Zl
cmFnZS9nY292LmgKeGVuL2NvbW1vbi9jb3ZlcmFnZS9nY292X2Jhc2UuYwp4ZW4vY29tbW9u
L2NvdmVyYWdlL2xsdm0uYwp4ZW4vY29tbW9uL2NwdS5jCnhlbi9jb21tb24vZGVidWd0cmFj
ZS5jCnhlbi9jb21tb24vZGVjb21wcmVzcy5jCnhlbi9jb21tb24vZGVjb21wcmVzcy5oCnhl
bi9jb21tb24vZG9tYWluLmMKeGVuL2NvbW1vbi9kb21jdGwuYwp4ZW4vY29tbW9uL2Vhcmx5
Y3Bpby5jCnhlbi9jb21tb24vZXZlbnRfMmwuYwp4ZW4vY29tbW9uL2V2ZW50X2NoYW5uZWwu
Ywp4ZW4vY29tbW9uL2V2ZW50X2ZpZm8uYwp4ZW4vY29tbW9uL2dkYnN0dWIuYwp4ZW4vY29t
bW9uL2dyYW50X3RhYmxlLmMKeGVuL2NvbW1vbi9ndWVzdGNvcHkuYwp4ZW4vY29tbW9uL2d1
bnppcC5jCnhlbi9jb21tb24vaHlwZnMuYwp4ZW4vY29tbW9uL2luZmxhdGUuYwp4ZW4vY29t
bW9uL2lycS5jCnhlbi9jb21tb24va2VybmVsLmMKeGVuL2NvbW1vbi9rZXloYW5kbGVyLmMK
eGVuL2NvbW1vbi9saWIuYwp4ZW4vY29tbW9uL2xpYmVsZi9DT1BZSU5HCnhlbi9jb21tb24v
bGliZWxmL01ha2VmaWxlCnhlbi9jb21tb24vbGliZWxmL1JFQURNRQp4ZW4vY29tbW9uL2xp
YmVsZi9saWJlbGYtZG9taW5mby5jCnhlbi9jb21tb24vbGliZWxmL2xpYmVsZi1sb2FkZXIu
Ywp4ZW4vY29tbW9uL2xpYmVsZi9saWJlbGYtcHJpdmF0ZS5oCnhlbi9jb21tb24vbGliZWxm
L2xpYmVsZi10b29scy5jCnhlbi9jb21tb24vbGlzdF9zb3J0LmMKeGVuL2NvbW1vbi9sejQv
ZGVjb21wcmVzcy5jCnhlbi9jb21tb24vbHo0L2RlZnMuaAp4ZW4vY29tbW9uL2x6by5jCnhl
bi9jb21tb24vbWVtb3J5LmMKeGVuL2NvbW1vbi9tdWx0aWNhbGwuYwp4ZW4vY29tbW9uL25v
dGlmaWVyLmMKeGVuL2NvbW1vbi9wYWdlX2FsbG9jLmMKeGVuL2NvbW1vbi9wZHguYwp4ZW4v
Y29tbW9uL3BlcmZjLmMKeGVuL2NvbW1vbi9wcmVlbXB0LmMKeGVuL2NvbW1vbi9yYWRpeC10
cmVlLmMKeGVuL2NvbW1vbi9yYW5kb20uYwp4ZW4vY29tbW9uL3Jhbmdlc2V0LmMKeGVuL2Nv
bW1vbi9yYnRyZWUuYwp4ZW4vY29tbW9uL3JjdXBkYXRlLmMKeGVuL2NvbW1vbi9yd2xvY2su
Ywp4ZW4vY29tbW9uL3NodXRkb3duLmMKeGVuL2NvbW1vbi9zbXAuYwp4ZW4vY29tbW9uL3Nv
ZnRpcnEuYwp4ZW4vY29tbW9uL3NvcnQuYwp4ZW4vY29tbW9uL3NwaW5sb2NrLmMKeGVuL2Nv
bW1vbi9zdG9wX21hY2hpbmUuYwp4ZW4vY29tbW9uL3N0cmluZy5jCnhlbi9jb21tb24vc3lt
Ym9scy1kdW1teS5jCnhlbi9jb21tb24vc3ltYm9scy5jCnhlbi9jb21tb24vc3lzY3RsLmMK
eGVuL2NvbW1vbi90YXNrbGV0LmMKeGVuL2NvbW1vbi90aW1lLmMKeGVuL2NvbW1vbi90aW1l
ci5jCnhlbi9jb21tb24vdWJzYW4vTWFrZWZpbGUKeGVuL2NvbW1vbi91YnNhbi91YnNhbi5j
Cnhlbi9jb21tb24vdWJzYW4vdWJzYW4uaAp4ZW4vY29tbW9uL3VubHo0LmMKeGVuL2NvbW1v
bi91bmx6bWEuYwp4ZW4vY29tbW9uL3VubHpvLmMKeGVuL2NvbW1vbi91bnh6LmMKeGVuL2Nv
bW1vbi92ZXJzaW9uLmMKeGVuL2NvbW1vbi92aXJ0dWFsX3JlZ2lvbi5jCnhlbi9jb21tb24v
dm1hcC5jCnhlbi9jb21tb24vdnNwcmludGYuYwp4ZW4vY29tbW9uL3dhaXQuYwp4ZW4vY29t
bW9uL3dhcm5pbmcuYwp4ZW4vY29tbW9uL3hlbm9wcm9mLmMKeGVuL2NvbW1vbi94bWFsbG9j
X3Rsc2YuYwp4ZW4vY29tbW9uL3h6L2NyYzMyLmMKeGVuL2NvbW1vbi94ei9kZWNfYmNqLmMK
eGVuL2NvbW1vbi94ei9kZWNfbHptYTIuYwp4ZW4vY29tbW9uL3h6L2RlY19zdHJlYW0uYwp4
ZW4vY29tbW9uL3h6L2x6bWEyLmgKeGVuL2NvbW1vbi94ei9wcml2YXRlLmgKeGVuL2NvbW1v
bi94ei9zdHJlYW0uaAp4ZW4vY3J5cHRvL01ha2VmaWxlCnhlbi9jcnlwdG8vUkVBRE1FLnNv
dXJjZQp4ZW4vY3J5cHRvL3Jpam5kYWVsLmMKeGVuL2NyeXB0by92bWFjLmMKeGVuL2RyaXZl
cnMvS2NvbmZpZwp4ZW4vZHJpdmVycy9NYWtlZmlsZQp4ZW4vZHJpdmVycy9jaGFyL0tjb25m
aWcKeGVuL2RyaXZlcnMvY2hhci9NYWtlZmlsZQp4ZW4vZHJpdmVycy9jaGFyL2NvbnNvbGUu
Ywp4ZW4vZHJpdmVycy9jaGFyL2NvbnNvbGVkLmMKeGVuL2RyaXZlcnMvY2hhci9laGNpLWRi
Z3AuYwp4ZW4vZHJpdmVycy9jaGFyL25zMTY1NTAuYwp4ZW4vZHJpdmVycy9jaGFyL3Nlcmlh
bC5jCnhlbi9kcml2ZXJzL2NoYXIveGVuX3B2X2NvbnNvbGUuYwp4ZW4vZHJpdmVycy9wY2kv
S2NvbmZpZwp4ZW4vZHJpdmVycy9wY2kvTWFrZWZpbGUKeGVuL2RyaXZlcnMvcGNpL3BjaS5j
Cnhlbi9kcml2ZXJzL3ZpZGVvL0tjb25maWcKeGVuL2RyaXZlcnMvdmlkZW8vTWFrZWZpbGUK
eGVuL2RyaXZlcnMvdmlkZW8vZm9udC5oCnhlbi9kcml2ZXJzL3ZpZGVvL2ZvbnRfOHgxNC5j
Cnhlbi9kcml2ZXJzL3ZpZGVvL2ZvbnRfOHgxNi5jCnhlbi9kcml2ZXJzL3ZpZGVvL2ZvbnRf
OHg4LmMKeGVuL2RyaXZlcnMvdmlkZW8vbGZiLmMKeGVuL2RyaXZlcnMvdmlkZW8vbGZiLmgK
eGVuL2RyaXZlcnMvdmlkZW8vbW9kZWxpbmVzLmgKeGVuL2RyaXZlcnMvdmlkZW8vdmVzYS5j
Cnhlbi9kcml2ZXJzL3ZpZGVvL3ZnYS5jCnhlbi9pbmNsdWRlL01ha2VmaWxlCnhlbi9pbmNs
dWRlL2NyeXB0by9SRUFETUUuc291cmNlCnhlbi9pbmNsdWRlL2NyeXB0by9yaWpuZGFlbC5o
Cnhlbi9pbmNsdWRlL2NyeXB0by92bWFjLmgKeGVuL2luY2x1ZGUvcHVibGljL0NPUFlJTkcK
eGVuL2luY2x1ZGUvcHVibGljL2FyY2gteDg2XzMyLmgKeGVuL2luY2x1ZGUvcHVibGljL2Fy
Y2gteDg2XzY0LmgKeGVuL2luY2x1ZGUvcHVibGljL2NhbGxiYWNrLmgKeGVuL2luY2x1ZGUv
cHVibGljL2RldmljZV90cmVlX2RlZnMuaAp4ZW4vaW5jbHVkZS9wdWJsaWMvZG9tMF9vcHMu
aAp4ZW4vaW5jbHVkZS9wdWJsaWMvZG9tY3RsLmgKeGVuL2luY2x1ZGUvcHVibGljL2VsZm5v
dGUuaAp4ZW4vaW5jbHVkZS9wdWJsaWMvZXJybm8uaAp4ZW4vaW5jbHVkZS9wdWJsaWMvZXZl
bnRfY2hhbm5lbC5oCnhlbi9pbmNsdWRlL3B1YmxpYy9mZWF0dXJlcy5oCnhlbi9pbmNsdWRl
L3B1YmxpYy9ncmFudF90YWJsZS5oCnhlbi9pbmNsdWRlL3B1YmxpYy9odm0vZG1fb3AuaAp4
ZW4vaW5jbHVkZS9wdWJsaWMvaHZtL2U4MjAuaAp4ZW4vaW5jbHVkZS9wdWJsaWMvaHZtL2h2
bV9pbmZvX3RhYmxlLmgKeGVuL2luY2x1ZGUvcHVibGljL2h2bS9odm1fb3AuaAp4ZW4vaW5j
bHVkZS9wdWJsaWMvaHZtL2h2bV92Y3B1LmgKeGVuL2luY2x1ZGUvcHVibGljL2h2bS9odm1f
eHNfc3RyaW5ncy5oCnhlbi9pbmNsdWRlL3B1YmxpYy9odm0vcGFyYW1zLmgKeGVuL2luY2x1
ZGUvcHVibGljL2h2bS9wdmRyaXZlcnMuaAp4ZW4vaW5jbHVkZS9wdWJsaWMvaHZtL3NhdmUu
aAp4ZW4vaW5jbHVkZS9wdWJsaWMvaHlwZnMuaAp4ZW4vaW5jbHVkZS9wdWJsaWMva2V4ZWMu
aAp4ZW4vaW5jbHVkZS9wdWJsaWMvbWVtb3J5LmgKeGVuL2luY2x1ZGUvcHVibGljL25taS5o
Cnhlbi9pbmNsdWRlL3B1YmxpYy9waHlzZGV2LmgKeGVuL2luY2x1ZGUvcHVibGljL3BsYXRm
b3JtLmgKeGVuL2luY2x1ZGUvcHVibGljL3BtdS5oCnhlbi9pbmNsdWRlL3B1YmxpYy9zY2hl
ZC5oCnhlbi9pbmNsdWRlL3B1YmxpYy9zeXNjdGwuaAp4ZW4vaW5jbHVkZS9wdWJsaWMvdG1l
bS5oCnhlbi9pbmNsdWRlL3B1YmxpYy90cmFjZS5oCnhlbi9pbmNsdWRlL3B1YmxpYy92Y3B1
LmgKeGVuL2luY2x1ZGUvcHVibGljL3ZlcnNpb24uaAp4ZW4vaW5jbHVkZS9wdWJsaWMveGVu
LWNvbXBhdC5oCnhlbi9pbmNsdWRlL3B1YmxpYy94ZW4uaAp4ZW4vaW5jbHVkZS9wdWJsaWMv
eGVuY29tbS5oCnhlbi9pbmNsdWRlL3B1YmxpYy94ZW5vcHJvZi5oCnhlbi9pbmNsdWRlL3B1
YmxpYy94c20vZmxhc2tfb3AuaAp4ZW4vaW5jbHVkZS94ZW4vODI1MC11YXJ0LmgKeGVuL2lu
Y2x1ZGUveGVuL2FjcGkuaAp4ZW4vaW5jbHVkZS94ZW4vYXRvbWljLmgKeGVuL2luY2x1ZGUv
eGVuL2JpdG1hcC5oCnhlbi9pbmNsdWRlL3hlbi9iaXRvcHMuaAp4ZW4vaW5jbHVkZS94ZW4v
Ynl0ZW9yZGVyL2JpZ19lbmRpYW4uaAp4ZW4vaW5jbHVkZS94ZW4vYnl0ZW9yZGVyL2dlbmVy
aWMuaAp4ZW4vaW5jbHVkZS94ZW4vYnl0ZW9yZGVyL2xpdHRsZV9lbmRpYW4uaAp4ZW4vaW5j
bHVkZS94ZW4vYnl0ZW9yZGVyL3N3YWIuaAp4ZW4vaW5jbHVkZS94ZW4vY2FjaGUuaAp4ZW4v
aW5jbHVkZS94ZW4vY29tcGF0LmgKeGVuL2luY2x1ZGUveGVuL2NvbXBpbGUuaC5pbgp4ZW4v
aW5jbHVkZS94ZW4vY29tcGlsZXIuaAp4ZW4vaW5jbHVkZS94ZW4vY29uZmlnLmgKeGVuL2lu
Y2x1ZGUveGVuL2NvbnNvbGUuaAp4ZW4vaW5jbHVkZS94ZW4vY29uc29sZWQuaAp4ZW4vaW5j
bHVkZS94ZW4vY29uc3QuaAp4ZW4vaW5jbHVkZS94ZW4vY292ZXJhZ2UuaAp4ZW4vaW5jbHVk
ZS94ZW4vY3Blci5oCnhlbi9pbmNsdWRlL3hlbi9jcHUuaAp4ZW4vaW5jbHVkZS94ZW4vY3B1
aWRsZS5oCnhlbi9pbmNsdWRlL3hlbi9jcHVtYXNrLmgKeGVuL2luY2x1ZGUveGVuL2N0eXBl
LmgKeGVuL2luY2x1ZGUveGVuL2RlY29tcHJlc3MuaAp4ZW4vaW5jbHVkZS94ZW4vZGVsYXku
aAp4ZW4vaW5jbHVkZS94ZW4vZG1pLmgKeGVuL2luY2x1ZGUveGVuL2RvbWFpbi5oCnhlbi9p
bmNsdWRlL3hlbi9kb21haW5fcGFnZS5oCnhlbi9pbmNsdWRlL3hlbi9lYXJseV9wcmludGsu
aAp4ZW4vaW5jbHVkZS94ZW4vZWFybHljcGlvLmgKeGVuL2luY2x1ZGUveGVuL2VmaS5oCnhl
bi9pbmNsdWRlL3hlbi9lbGYuaAp4ZW4vaW5jbHVkZS94ZW4vZWxmY29yZS5oCnhlbi9pbmNs
dWRlL3hlbi9lbGZzdHJ1Y3RzLmgKeGVuL2luY2x1ZGUveGVuL2Vyci5oCnhlbi9pbmNsdWRl
L3hlbi9lcnJuby5oCnhlbi9pbmNsdWRlL3hlbi9ldmVudC5oCnhlbi9pbmNsdWRlL3hlbi9l
dmVudF9maWZvLmgKeGVuL2luY2x1ZGUveGVuL2dkYnN0dWIuaAp4ZW4vaW5jbHVkZS94ZW4v
Z3JhbnRfdGFibGUuaAp4ZW4vaW5jbHVkZS94ZW4vZ3Vlc3RfYWNjZXNzLmgKeGVuL2luY2x1
ZGUveGVuL2d1bnppcC5oCnhlbi9pbmNsdWRlL3hlbi9oYXNoLmgKeGVuL2luY2x1ZGUveGVu
L2h5cGVyY2FsbC5oCnhlbi9pbmNsdWRlL3hlbi9oeXBmcy5oCnhlbi9pbmNsdWRlL3hlbi9p
bml0LmgKeGVuL2luY2x1ZGUveGVuL2ludHR5cGVzLmgKeGVuL2luY2x1ZGUveGVuL2lvY2Fw
LmgKeGVuL2luY2x1ZGUveGVuL2lycS5oCnhlbi9pbmNsdWRlL3hlbi9pcnFfY3B1c3RhdC5o
Cnhlbi9pbmNsdWRlL3hlbi9rY29uZmlnLmgKeGVuL2luY2x1ZGUveGVuL2tlcm5lbC5oCnhl
bi9pbmNsdWRlL3hlbi9rZXhlYy5oCnhlbi9pbmNsdWRlL3hlbi9rZXloYW5kbGVyLmgKeGVu
L2luY2x1ZGUveGVuL2tpbWFnZS5oCnhlbi9pbmNsdWRlL3hlbi9saWIuaAp4ZW4vaW5jbHVk
ZS94ZW4vbGliZWxmLmgKeGVuL2luY2x1ZGUveGVuL2xpc3QuaAp4ZW4vaW5jbHVkZS94ZW4v
bGlzdF9zb3J0LmgKeGVuL2luY2x1ZGUveGVuL2x6NC5oCnhlbi9pbmNsdWRlL3hlbi9sem8u
aAp4ZW4vaW5jbHVkZS94ZW4vbW0uaAp4ZW4vaW5jbHVkZS94ZW4vbXVsdGlib290LmgKeGVu
L2luY2x1ZGUveGVuL211bHRpYm9vdDIuaAp4ZW4vaW5jbHVkZS94ZW4vbXVsdGljYWxsLmgK
eGVuL2luY2x1ZGUveGVuL25vZGVtYXNrLmgKeGVuL2luY2x1ZGUveGVuL25vc3BlYy5oCnhl
bi9pbmNsdWRlL3hlbi9ub3RpZmllci5oCnhlbi9pbmNsdWRlL3hlbi9udW1hLmgKeGVuL2lu
Y2x1ZGUveGVuL3AybS1jb21tb24uaAp4ZW4vaW5jbHVkZS94ZW4vcGFnZS1kZWZzLmgKeGVu
L2luY2x1ZGUveGVuL3BhZ2luZy5oCnhlbi9pbmNsdWRlL3hlbi9wYXJhbS5oCnhlbi9pbmNs
dWRlL3hlbi9wY2kuaAp4ZW4vaW5jbHVkZS94ZW4vcGNpX2lkcy5oCnhlbi9pbmNsdWRlL3hl
bi9wY2lfcmVncy5oCnhlbi9pbmNsdWRlL3hlbi9wZHguaAp4ZW4vaW5jbHVkZS94ZW4vcGVy
Y3B1LmgKeGVuL2luY2x1ZGUveGVuL3BlcmZjLmgKeGVuL2luY2x1ZGUveGVuL3BlcmZjX2Rl
Zm4uaAp4ZW4vaW5jbHVkZS94ZW4vcGZuLmgKeGVuL2luY2x1ZGUveGVuL3Btc3RhdC5oCnhl
bi9pbmNsdWRlL3hlbi9wcmVlbXB0LmgKeGVuL2luY2x1ZGUveGVuL3ByZWZldGNoLmgKeGVu
L2luY2x1ZGUveGVuL3B2X2NvbnNvbGUuaAp4ZW4vaW5jbHVkZS94ZW4vcmFkaXgtdHJlZS5o
Cnhlbi9pbmNsdWRlL3hlbi9yYW5kb20uaAp4ZW4vaW5jbHVkZS94ZW4vcmFuZ2VzZXQuaAp4
ZW4vaW5jbHVkZS94ZW4vcmJ0cmVlLmgKeGVuL2luY2x1ZGUveGVuL3JjdXBkYXRlLmgKeGVu
L2luY2x1ZGUveGVuL3J3bG9jay5oCnhlbi9pbmNsdWRlL3hlbi9zY2hlZC5oCnhlbi9pbmNs
dWRlL3hlbi9zZXJpYWwuaAp4ZW4vaW5jbHVkZS94ZW4vc2hhcmVkLmgKeGVuL2luY2x1ZGUv
eGVuL3NodXRkb3duLmgKeGVuL2luY2x1ZGUveGVuL3NpemVzLmgKeGVuL2luY2x1ZGUveGVu
L3NtcC5oCnhlbi9pbmNsdWRlL3hlbi9zb2Z0aXJxLmgKeGVuL2luY2x1ZGUveGVuL3NvcnQu
aAp4ZW4vaW5jbHVkZS94ZW4vc3BpbmxvY2suaAp4ZW4vaW5jbHVkZS94ZW4vc3RkYXJnLmgK
eGVuL2luY2x1ZGUveGVuL3N0ZGJvb2wuaAp4ZW4vaW5jbHVkZS94ZW4vc3RvcF9tYWNoaW5l
LmgKeGVuL2luY2x1ZGUveGVuL3N0cmluZy5oCnhlbi9pbmNsdWRlL3hlbi9zdHJpbmdpZnku
aAp4ZW4vaW5jbHVkZS94ZW4vc3ltYm9scy5oCnhlbi9pbmNsdWRlL3hlbi90YXNrbGV0LmgK
eGVuL2luY2x1ZGUveGVuL3RpbWUuaAp4ZW4vaW5jbHVkZS94ZW4vdGltZXIuaAp4ZW4vaW5j
bHVkZS94ZW4vdHlwZXMuaAp4ZW4vaW5jbHVkZS94ZW4vdHlwZXNhZmUuaAp4ZW4vaW5jbHVk
ZS94ZW4vdmVyc2lvbi5oCnhlbi9pbmNsdWRlL3hlbi92Z2EuaAp4ZW4vaW5jbHVkZS94ZW4v
dmlkZW8uaAp4ZW4vaW5jbHVkZS94ZW4vdmlydHVhbF9yZWdpb24uaAp4ZW4vaW5jbHVkZS94
ZW4vdm1hcC5oCnhlbi9pbmNsdWRlL3hlbi93YWl0LmgKeGVuL2luY2x1ZGUveGVuL3dhcm5p
bmcuaAp4ZW4vaW5jbHVkZS94ZW4vd2F0Y2hkb2cuaAp4ZW4vaW5jbHVkZS94ZW4veGVub3By
b2YuaAp4ZW4vaW5jbHVkZS94ZW4veG1hbGxvYy5oCnhlbi9pbmNsdWRlL3hsYXQubHN0Cnhl
bi9saWIvTWFrZWZpbGUKeGVuL3NjcmlwdHMvS2J1aWxkLmluY2x1ZGUKeGVuL3NjcmlwdHMv
S2NvbmZpZy5pbmNsdWRlCnhlbi9zY3JpcHRzL01ha2VmaWxlLmNsZWFuCnhlbi9zY3JpcHRz
L2NsYW5nLXZlcnNpb24uc2gKeGVuL3NjcmlwdHMvZ2NjLXZlcnNpb24uc2gKeGVuL3Rlc3Qv
TWFrZWZpbGUKeGVuL3Rvb2xzL01ha2VmaWxlCnhlbi90b29scy9iaW5maWxlCnhlbi90b29s
cy9jb21wYXQtYnVpbGQtaGVhZGVyLnB5Cnhlbi90b29scy9jb21wYXQtYnVpbGQtc291cmNl
LnB5Cnhlbi90b29scy9nZW4tY3B1aWQucHkKeGVuL3Rvb2xzL2dldC1maWVsZHMuc2gKeGVu
L3Rvb2xzL3Byb2Nlc3MtYmFubmVyLnNlZAp4ZW4vdG9vbHMvc2NtdmVyc2lvbgp4ZW4vdG9v
bHMvc3ltYm9scy5jCnhlbi90b29scy94ZW4uZmxmCg==
--------------46943642394ECFC057364D0A--


From xen-devel-bounces@lists.xenproject.org Thu Jul 09 09:18:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jul 2020 09:18:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jtSge-0008Lb-GD; Thu, 09 Jul 2020 09:17:48 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ehwI=AU=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jtSgd-0008LW-EG
 for xen-devel@lists.xenproject.org; Thu, 09 Jul 2020 09:17:47 +0000
X-Inumbo-ID: 0d07c256-c1c5-11ea-8eaf-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0d07c256-c1c5-11ea-8eaf-12813bfff9fa;
 Thu, 09 Jul 2020 09:17:46 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 411ABADE5;
 Thu,  9 Jul 2020 09:17:45 +0000 (UTC)
Subject: Re: [PATCH] efi: avoid error message when booting under Xen
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: xen-devel@lists.xenproject.org, linux-fbdev@vger.kernel.org,
 dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org
References: <20200610141052.13258-1-jgross@suse.com>
 <094be567-2c82-7d5b-e432-288286c6c3fb@suse.com>
Message-ID: <ec21b883-dc5c-f3fe-e989-7fa13875a4c4@suse.com>
Date: Thu, 9 Jul 2020 11:17:44 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.9.0
MIME-Version: 1.0
In-Reply-To: <094be567-2c82-7d5b-e432-288286c6c3fb@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Peter Jones <pjones@redhat.com>,
 Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 28.06.20 10:50, Jürgen Groß wrote:
> Ping?
> 
> On 10.06.20 16:10, Juergen Gross wrote:
>> efifb_probe() will issue an error message in case the kernel is booted
>> as Xen dom0 from UEFI as EFI_MEMMAP won't be set in this case. Avoid
>> that message by calling efi_mem_desc_lookup() only if EFI_PARAVIRT
>> isn't set.
>>
>> Fixes: 38ac0287b7f4 ("fbdev/efifb: Honour UEFI memory map attributes 
>> when mapping the FB")
>> Signed-off-by: Juergen Gross <jgross@suse.com>
>> ---
>>   drivers/video/fbdev/efifb.c | 2 +-
>>   1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/drivers/video/fbdev/efifb.c b/drivers/video/fbdev/efifb.c
>> index 65491ae74808..f5eccd1373e9 100644
>> --- a/drivers/video/fbdev/efifb.c
>> +++ b/drivers/video/fbdev/efifb.c
>> @@ -453,7 +453,7 @@ static int efifb_probe(struct platform_device *dev)
>>       info->apertures->ranges[0].base = efifb_fix.smem_start;
>>       info->apertures->ranges[0].size = size_remap;
>> -    if (efi_enabled(EFI_BOOT) &&
>> +    if (efi_enabled(EFI_BOOT) && !efi_enabled(EFI_PARAVIRT) &&
>>           !efi_mem_desc_lookup(efifb_fix.smem_start, &md)) {
>>           if ((efifb_fix.smem_start + efifb_fix.smem_len) >
>>               (md.phys_addr + (md.num_pages << EFI_PAGE_SHIFT))) {
>>
> 

In case I see no reaction from the maintainer for another week I'll take
this patch through the Xen tree.


Juergen


From xen-devel-bounces@lists.xenproject.org Thu Jul 09 09:22:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jul 2020 09:22:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jtSl3-0000ho-25; Thu, 09 Jul 2020 09:22:21 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qFbs=AU=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jtSl1-0000hj-Je
 for xen-devel@lists.xenproject.org; Thu, 09 Jul 2020 09:22:19 +0000
X-Inumbo-ID: af794ab4-c1c5-11ea-bb8b-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id af794ab4-c1c5-11ea-bb8b-bc764e2007e4;
 Thu, 09 Jul 2020 09:22:19 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id D7896ADE5;
 Thu,  9 Jul 2020 09:22:17 +0000 (UTC)
Subject: Re: Followup of yesterday's design session "refactoring the REST"
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jg@pfupf.net>
References: <a578fb24-cc6a-b3bd-b83d-3f7b9b1302cf@pfupf.net>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <f0367a9e-fdfb-1bf6-d569-f380349f9dd8@suse.com>
Date: Thu, 9 Jul 2020 11:22:20 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <a578fb24-cc6a-b3bd-b83d-3f7b9b1302cf@pfupf.net>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 09.07.2020 07:56, Jürgen Groß wrote:
> Yesterday's design session at Xen Developer Summit "Hypervisor Team: .."
> had one topic regarding whether we should find specific maintainers of
> all the files currently assigned to "THE REST" in order to lower the
> amount of reviews for those assigned to be "THE REST" maintainers.
> 
> Modifying the MAINTAINERS file adding "REST@x.y" as REST maintainer
> and running the rune:
> 
> git ls-files | while true; do f=`line`; [ "$f" = "" ] && exit; \
> echo $f `./scripts/get_maintainer.pl -f $f | awk '{print $(NF)}'`; \
> done | awk '/REST/ { print $1}'
> 
> shows that basically the following files are covered by "THE REST":
> 
> - files directly in /
> - config/
> - most files in docs/ (not docs/man/)
> - misc/ (only one file)
> - scripts/
> - lots of files in xen/common/
> - xen/crypto/
> - lots of files in xen/drivers/
> - lots of files in xen/include/
> - xen/scripts/
> - some files in xen/tools/
> 
> I have attached the file list.
> 
> So the basic idea to have a "hypervisor REST" and a "tools REST"
> wouldn't make a huge difference, if we don't assign docs/ to "tools
> REST".
> 
> So I think it would make sense to:
> 
> - look through the docs/ and xen/include/ files whether some of those
>    can be assigned to a component already having dedicated maintainers
> 
> - try to find maintainers for the other files, especially those in
>    xen/common/ and xen/drivers/ (including the related include files, of
>    course)

At least for files in xen/common/ I think it was really intentional
that they - as core hypervisor files - fall under THE REST. We could
of course have a "Core Hypervisor" (or so) group, which would already
...

> - if any of the REST maintainers doesn't want to receive mails for a
>    group of the remaining REST files split the REST maintainers/files up
>    accordingly

... allow moving some into this direction.

For files under xen/drivers/ not currently covered by other entries it
may indeed be (more) feasible to find individual maintainers.
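For illustration, such a "Core Hypervisor" group could be expressed as a new MAINTAINERS stanza along these lines (the entry name, maintainer placeholder and file patterns here are hypothetical, sketched only to show the usual M:/L:/F: field layout, not an actual proposal):

```
CORE HYPERVISOR
M:	(maintainers to be decided)
L:	xen-devel@lists.xenproject.org
F:	xen/common/
F:	xen/include/xen/
```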

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jul 09 09:35:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jul 2020 09:35:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jtSxV-0001cx-82; Thu, 09 Jul 2020 09:35:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=sAgz=AU=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jtSxT-0001cs-Br
 for xen-devel@lists.xenproject.org; Thu, 09 Jul 2020 09:35:11 +0000
X-Inumbo-ID: 7b270966-c1c7-11ea-8496-bc764e2007e4
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7b270966-c1c7-11ea-8496-bc764e2007e4;
 Thu, 09 Jul 2020 09:35:10 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1594287310;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:content-transfer-encoding:in-reply-to;
 bh=gqMCyMr9A6fAlEZaEvpq/W0eCaBGhGhpO8X4MLxTtp0=;
 b=akaCTK/udqsczFGOLS7YtPKJc1W/TlmjeUXf6QnFEDV3to8cS+yXFNXA
 eB6Gvw3i4DrVGzMiZFrbUYTN0psvN/aCaN67VkRuc3LSJkcX6x+njy5iw
 EfiORaofF85jZWImKwq7ofmqMov5P6fJ95L5lstQS89HL4gv9v4eNV7wP 0=;
Authentication-Results: esa2.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: xcFvwX9mLqd7rZFTNTw/Ng0DWImv0Jfo1s53jxr/IKq3ViG0Xh/62gx2BFJUy46sk9Ih0weSci
 I9KlqXGrEpRufggJK+NNrGigg1InEv6h2jELrmbeKOdK3JlyR88vQdriHkbCic98U6g9ap8CqR
 /u9LzyIk7HockGinLfDvpjPBfLYiDCHVsIDcnyGYeOiG76f9EtryuYw5wdphQ+NqC61jGrQfHw
 kryzwTTiX0xbo9/IG1XmKSfMq2r7+Q4KXYUFiAJ9cxyOBBegd6fwmsFHAR3/RN2o6ktc9EIWzR
 D10=
X-SBRS: 2.7
X-MesageID: 21956476
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,331,1589256000"; d="scan'208";a="21956476"
Date: Thu, 9 Jul 2020 11:34:54 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: =?utf-8?B?SsO8cmdlbiBHcm/Dnw==?= <jg@pfupf.net>
Subject: Re: Followup of yesterday's design session "refactoring the REST"
Message-ID: <20200709093454.GF7191@Air-de-Roger>
References: <a578fb24-cc6a-b3bd-b83d-3f7b9b1302cf@pfupf.net>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <a578fb24-cc6a-b3bd-b83d-3f7b9b1302cf@pfupf.net>
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Thu, Jul 09, 2020 at 07:56:58AM +0200, Jürgen Groß wrote:
> Yesterday's design session at Xen Developer Summit "Hypervisor Team: .."
> had one topic regarding whether we should find specific maintainers of
> all the files currently assigned to "THE REST" in order to lower the
> amount of reviews for those assigned to be "THE REST" maintainers.
> 
> Modifying the MAINTAINERS file to add "REST@x.y" as the REST maintainer
> and running the rune:
> 
> git ls-files | while read -r f; do \
> echo "$f" $(./scripts/get_maintainer.pl -f "$f" | awk '{print $(NF)}'); \
> done | awk '/REST/ { print $1 }'
> 
> shows that basically the following files are covered by "THE REST":
> 
> - files directly in /
> - config/
> - most files in docs/ (not docs/man/)
> - misc/ (only one file)
> - scripts/
> - lots of files in xen/common/
> - xen/crypto/
> - lots of files in xen/drivers/
> - lots of files in xen/include/
> - xen/scripts/
> - some files in xen/tools/
> 
> I have attached the file list.

Thanks! I still have to go over the list in more detail, just some
comments below.

> So the basic idea of having a "hypervisor REST" and a "tools REST"
> wouldn't make a huge difference if we don't assign docs/ to "tools
> REST".
> 
> So I think it would make sense to:
> 
> - look through the docs/ and xen/include/ files whether some of those
>   can be assigned to a component already having dedicated maintainers
> 
> - try to find maintainers for the other files, especially those in
>   xen/common/ and xen/drivers/ (including the related include files, of
>   course)

I think it's important that xen/common files (especially the ones
containing interfaces exposed to guests) have at least one maintainer
from each supported architecture (Arm and x86 ATM), plus whatever
common code maintainers we want to have. It's sometimes easy (at least
for me) to forget about other arches, or to make wrong assumptions
about them, when modifying common code.

Drivers could also benefit from something similar IMO, with shared
driver code having a shared group of maintainers. For example, the
IOMMU code should ideally have a mix of maintainers from the current
implementations (Arm/Intel/AMD), plus again whatever common code
maintainers we want to have.

Roger.
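[Editor's note: the rune quoted above walks every tracked file and keeps those whose only match is "THE REST". The same idea can be sketched self-contained; the mini maintainer map and addresses below are hypothetical stand-ins, not the real MAINTAINERS syntax or ./scripts/get_maintainer.pl.]

```python
import fnmatch

# Hypothetical, simplified maintainer map: section -> (owner, glob patterns).
# The real MAINTAINERS file and ./scripts/get_maintainer.pl are far richer.
MAINTAINERS = {
    "X86": ("x86@example.org", ["xen/arch/x86/*"]),
    "THE REST": ("REST@x.y", ["*"]),  # catch-all, as in the patched file
}

def maintainers_for(path):
    """Every owner whose patterns match the given path."""
    return {owner
            for owner, patterns in MAINTAINERS.values()
            if any(fnmatch.fnmatch(path, p) for p in patterns)}

def rest_only(files):
    """Files covered by nothing more specific than "THE REST"."""
    return [f for f in files if maintainers_for(f) == {"REST@x.y"}]

print(rest_only(["xen/arch/x86/traps.c", "xen/common/page_alloc.c"]))
# -> ['xen/common/page_alloc.c']
```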


From xen-devel-bounces@lists.xenproject.org Thu Jul 09 10:30:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jul 2020 10:30:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jtTob-0006h4-W1; Thu, 09 Jul 2020 10:30:05 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=z2lT=AU=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1jtToa-0006XK-Ty
 for xen-devel@lists.xenproject.org; Thu, 09 Jul 2020 10:30:04 +0000
X-Inumbo-ID: 25de565a-c1cf-11ea-8496-bc764e2007e4
Received: from EUR05-AM6-obe.outbound.protection.outlook.com (unknown
 [40.107.22.80]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 25de565a-c1cf-11ea-8496-bc764e2007e4;
 Thu, 09 Jul 2020 10:30:03 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com; 
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=V70iKL15+X3CYs9tSao7Y8W3ZX5EWgAu8UTy9G+d8Bw=;
 b=HwP8xcBfCBdwlwsPyza1iDJOugfD7AohXMr0oYMcgY/3zH4bhUcMuHD7cok1FIPqKCYod6wkHMnVMsvhWIUwG1JdgDSEZ57+lC+r4rKfZ0S3X4/PFFQU6/+PY8LTfsLD0cFaM3w2jFCzc5RkGtkNg7tqHNmMnb1PbJLh/1fskUU=
Received: from DB7PR03CA0079.eurprd03.prod.outlook.com (2603:10a6:10:72::20)
 by AM0PR08MB3604.eurprd08.prod.outlook.com (2603:10a6:208:e3::25) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3174.21; Thu, 9 Jul
 2020 10:30:00 +0000
Received: from DB5EUR03FT049.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:72:cafe::89) by DB7PR03CA0079.outlook.office365.com
 (2603:10a6:10:72::20) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3174.21 via Frontend
 Transport; Thu, 9 Jul 2020 10:30:00 +0000
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org;
 dmarc=bestguesspass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DB5EUR03FT049.mail.protection.outlook.com (10.152.20.191) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3174.21 via Frontend Transport; Thu, 9 Jul 2020 10:29:59 +0000
Received: ("Tessian outbound 1dc58800d5dd:v62");
 Thu, 09 Jul 2020 10:29:59 +0000
X-CheckRecipientChecked: true
X-CR-MTA-CID: c080f6014f919a71
X-CR-MTA-TID: 64aa7808
Received: from cc472f304b99.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 1C4C06E2-9759-4C10-B0B9-035BF97ACE68.1; 
 Thu, 09 Jul 2020 10:29:54 +0000
Received: from EUR02-VE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id cc472f304b99.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Thu, 09 Jul 2020 10:29:54 +0000
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=I8m6CqWCn1OlkqPjbfoBY8FmGq+7Fxo21/RX8YG/csRCKjs0ZsuTK2m+XSco8OwYf6dLl2u4NMYG+LUinUkQkdZF7fliR0zmenB/X4Q9izXPTMQIGu1V8oKkQHZuSr4LxXGyls67N45oPy02X/20/iVwoZnkP/CaeOG0R8tXGlj0Vz7pXhp6qD4s0GXODokyXJaT4AYArW7Qdb5FORDKD7n7MCc7pko4tlr6dJfRj+qQcfsqfT+x0cGJ4Ikq6W4qTbayCejqpCt7W9AISkQixt3wXNmUgSSfUfMvIO2o2qRrfsAweoQ/KthGm7WkwWyrgEjx2LbHu69oNeQuPcOoXw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; 
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=V70iKL15+X3CYs9tSao7Y8W3ZX5EWgAu8UTy9G+d8Bw=;
 b=ZvCVKHxu4tIEnZN9cZhCz1HmIr9XW/bidUOeu372bNaoo2f8Ztn/JU5at/oQOpBCs/Xv57pobEOhfa79jOg+GtGPFgGVP/Qmt/43V3m5WHzGiX+BD4VdVqSE9UTWQWxrD98Sq2QF4rEutv3lKNEH/D59unz+vBNxgKVpfm1g745eisH1TVwdSoRaimA43tLXP6jjgVN6O79AT/p/rV1bpJzd+fjz9YEIZq3PcnZU2ZUtrxOnIV03zpcdiav2hLWYlUv5o9efZZFoeBzVvpNfErwAv2t5Kq9jO99eOWqqDu0nj6YN4V+tJSriDYNgM2OtsGV1ZSWQLwxtFRsggeE2TQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com; 
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=V70iKL15+X3CYs9tSao7Y8W3ZX5EWgAu8UTy9G+d8Bw=;
 b=HwP8xcBfCBdwlwsPyza1iDJOugfD7AohXMr0oYMcgY/3zH4bhUcMuHD7cok1FIPqKCYod6wkHMnVMsvhWIUwG1JdgDSEZ57+lC+r4rKfZ0S3X4/PFFQU6/+PY8LTfsLD0cFaM3w2jFCzc5RkGtkNg7tqHNmMnb1PbJLh/1fskUU=
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DBBPR08MB4776.eurprd08.prod.outlook.com (2603:10a6:10:f2::19) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3153.22; Thu, 9 Jul
 2020 10:29:52 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::4001:43ad:d113:46a8]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::4001:43ad:d113:46a8%5]) with mapi id 15.20.3174.022; Thu, 9 Jul 2020
 10:29:52 +0000
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: =?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>
Subject: Re: PCI passthrough on arm Design Session MoM
Thread-Topic: PCI passthrough on arm Design Session MoM
Thread-Index: AQHWVScSHsZRWGBXYkau2LjYpe8C1aj9rgiAgAFfbAA=
Date: Thu, 9 Jul 2020 10:29:52 +0000
Message-ID: <227BD2A5-1FC7-4B1A-8B93-4DBECC43BDA2@arm.com>
References: <4E0A40D3-2979-4A91-9376-C2B19B9F582E@arm.com>
 <20200708133205.GE7191@Air-de-Roger>
In-Reply-To: <20200708133205.GE7191@Air-de-Roger>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
Authentication-Results-Original: citrix.com; dkim=none (message not signed)
 header.d=none;citrix.com; dmarc=none action=none header.from=arm.com;
x-originating-ip: [82.24.250.194]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: a226a2d5-a98e-49a7-155d-08d823f30860
x-ms-traffictypediagnostic: DBBPR08MB4776:|AM0PR08MB3604:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS: <AM0PR08MB3604DC1C1BF38BF8E748B7D39D640@AM0PR08MB3604.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:10000;OLM:10000;
x-forefront-prvs: 04599F3534
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original: E1b7T49HIwV2Z6/eQ5sqJkHaFQKEcCisQyfbq9eaJhifR2fpIr0lXpzwWc3pC3MquKNckibv1y/PdeeWQ7R14LJjDL6OxXD/ejPKq/6nTcKe6rzqlZEvEgXWJVxOe85mJvsUSTl82+Nggi/D/Kc9JiC8p/z3o/7aga4br2eNavsT7oM/vQDdeVMmL2Kx/gtPR814RzX5gp1d5WypMV6Q+aamuE6QQ2WAqTqpUzfUwJ8ejTvrDpobdYa431qFraYgC0V8xXOfAhbehBbzAElk3JtTOoZ4NnKX9305Pu/ELNFoKMtPUt7bGsbZtebY0RNjbINJC0kpHNwyV42cpV4DukVdKEPj+1nc5hat5qZ/xskc7D2SXuys32I1vKsu3TxOn6okJ8Kl2yycTTsft9+KJQ==
X-Forefront-Antispam-Report-Untrusted: CIP:255.255.255.255; CTRY:; LANG:en;
 SCL:1; SRV:; IPV:NLI; SFV:NSPM; H:DB7PR08MB3689.eurprd08.prod.outlook.com;
 PTR:; CAT:NONE; SFTY:;
 SFS:(4636009)(346002)(366004)(396003)(376002)(39860400002)(136003)(316002)(76116006)(5660300002)(36756003)(6916009)(66446008)(64756008)(66556008)(6486002)(66476007)(8676002)(66946007)(2906002)(71200400001)(8936002)(91956017)(6506007)(53546011)(186003)(2616005)(33656002)(26005)(966005)(86362001)(478600001)(83380400001)(4326008)(54906003)(6512007);
 DIR:OUT; SFP:1101; 
x-ms-exchange-antispam-messagedata: Xk5Md9rOybQXIcbYDNBs0Mc45uxLR54skGkTzR0ZIsK0sNshZvWW0AcUaPnLjfnEYXGhIozvaGpBjui2DCR3bIq8WmbjKp8xzhCLyPO5c8mhEB1Hg8w6uU2iedJ+P+yku6Y6ATS8EyTMoB0AA9R0dj7R2PAidNe2iZbSUbfBr9ipYCyFUn5GuIIaUfn3TNvJANAGcCHZSwNZDU3E/V+30TVbJDrx92CFZXFvOAWxSu0+5fd5xCqGiaV1Mt+8W2BAyoxeK2B1TDc17qy7rcrNPfsBybO4ZT9lP8F9rACAmvC6tmV7NIDiGNSmZL4KfqELXx2VovqdgvGuhKbPq/6z3xUzffzUL5eq5M7m/1NiuRMj6rcRQzEqeI+RHvCLM0XGFhawj/9UaXkmHxf82SlDufNSQ4Z+0Jm9LzelJKio69ohhBICkvnoNZ7eAYgFpE7l95Yvtiaadgi25g/y8Pj1P4nEvCS9Y7c8dxrTdPAUmyBux+G/g7h9BkcdJY4mpRcn
Content-Type: text/plain; charset="utf-8"
Content-ID: <88476A19FF81CC47AF83457E67DFCDFB@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBBPR08MB4776
Original-Authentication-Results: citrix.com; dkim=none (message not signed)
 header.d=none;citrix.com; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped: DB5EUR03FT049.eop-EUR03.prod.protection.outlook.com
X-Forefront-Antispam-Report: CIP:63.35.35.123; CTRY:IE; LANG:en; SCL:1; SRV:;
 IPV:CAL; SFV:NSPM; H:64aa7808-outbound-1.mta.getcheckrecipient.com;
 PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com; CAT:NONE; SFTY:;
 SFS:(4636009)(346002)(396003)(376002)(136003)(39860400002)(46966005)(47076004)(186003)(356005)(336012)(81166007)(82310400002)(33656002)(6486002)(82740400003)(86362001)(36756003)(6862004)(70206006)(70586007)(6506007)(4326008)(5660300002)(53546011)(8936002)(2906002)(2616005)(26005)(83380400001)(316002)(8676002)(6512007)(54906003)(966005)(478600001);
 DIR:OUT; SFP:1101; 
X-MS-Office365-Filtering-Correlation-Id-Prvs: eb41ebff-3291-473e-2a80-08d823f30435
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: qmmZ8Cmes5f6k4MtCxxTcceVaIkSvXn7q6SWA1FFVs4XQJGoVHh0aISpulCTW3HfnBJBFf17gXLHPCG17seaOQjlNYBbWkpIKE7LjwRNcpFYmiCZevnlkMbxpCPn7Hq83xP7EYNMiEPVkaDfuDDLXssaEku07l84kt3+izEPjozNMwdECTqRz739QCfHkZYGGVc5nxs9vaKjwl0xGbPlF3HU6EUxhV769zlgqNsa91QAhNyrjA4nVgolzA2L7Q+M+jUTOCNgvhNkXswQwHGzA6eJi0EMiDC8ztNPO/dSGLNkyxxqLVFA5pxiJTHb39oCJr7W87vEkOZB3n/N2XEMqjvn5F2SYMs67VvGX4w6V6mFVOMiwHn5ZNPq5jGoS5EDxmvw2sezvi7iznUOZkSKBr1pzS4JvgKSsc4PIMgRVR2nbcYyFWSW3zz/FHB8tqptK75MC/enjk0EPDk6eszUV8RS4HwVpaE6ZuHvfJ3sVos=
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 09 Jul 2020 10:29:59.7404 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: a226a2d5-a98e-49a7-155d-08d823f30860
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d; Ip=[63.35.35.123];
 Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource: DB5EUR03FT049.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR08MB3604
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, nd <nd@arm.com>,
 Rahul Singh <Rahul.Singh@arm.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> On 8 Jul 2020, at 14:32, Roger Pau Monné <roger.pau@citrix.com> wrote:
> 
> On Wed, Jul 08, 2020 at 12:55:36PM +0000, Bertrand Marquis wrote:
>> Hi,
>> 
>> Here are the notes we took during the design session around PCI devices passthrough on Arm.
>> Feel free to comment or add anything :-)
>> 
>> Bertrand
>> 
>> PCI devices passthrough on Arm Design Session
>> ======================================
>> 
>> Date: 7/7/2020
>> 
>> - X86 VPCI support is for the PVH guest.
> 
> Current vPCI is only for PVH dom0. We need to decide what to do for
> PVH domUs, whether we want to use vPCI or xenpt from Paul:
> 
> http://xenbits.xen.org/gitweb/?p=people/pauldu/xenpt.git;a=summary
> 
> Or something else. I think this decision also needs to take into
> account Arm.

We are currently using vpci for guests.
But we could also look into xenpt, but from a quick check it does require a Dom0, which would defeat the Dom0less use case.

> 
>> - X86 PCI devices discovery code should be checked and maybe used on Arm as it is not very complex
>> 	- Remark from Julien: This might not work in a number of cases
>> - Sanitisation of the PCI accesses for each guest in Xen is required
>> - MSI trap is not required for gicv3 but it is required for gicv2m
>> 	- We do not plan to support non-ITS GICs
>> - Check the possibility to add some specifications in EBBR for PCI enumeration (address assignment part)
>> - PCI enumeration support should not depend on DOM0 for safety reasons
>> - PCI enumeration could be done in several places
>> 	- DTB, with some entries giving values to be applied by Xen
>> 	- In Xen (complex, not wanted beyond device discovery)
>> 	- In firmware, followed by Xen device discovery
>> - As per Julien it is difficult to tell Xen on which segment a PCI device is present
>> 	- The current test implementation is done on Juno, where there is only one segment
>> 	- This should be investigated with other hardware in the next months
> 
> I'm not sure the segments used by Xen need to match the segments used
> by the guest. This is just an abstract value assigned from the OS (or
> Xen) in order to differentiate different MMCFG (ECAM) regions, and
> whether such numbers match doesn't seem relevant to me, as at the end
> Xen will trap ECAM accesses and map such accesses to the Xen assigned
> segments.
> 
> Segments matching between the OS and Xen is only relevant when PCI
> information needs to be conveyed between the OS and Xen using some
> kind of hypercall, but I think you want to avoid using such side-band
> communication channels anyway?

We definitely want to avoid them.
On the Juno board we currently use, this question was ignored for now as there is only one region.
This is definitely something we need to investigate.

> 
>> - Julien mentioned that clocks issues will be complex to solve and most hardware is not following the ECAM standard
>> - Julien mentioned that Linux and Xen could do the enumeration in a different way, making it complex to have Linux doing an enumeration after Xen
>> - We should push the code we have ASAP on the mailing list for a review and discussion on the design
>> 	- Arm will try to do that before the end of July
> 
> I will be happy to give it a look and provide feedback.

Thanks, we will try to push our status for the end of next week.

> 
> For such complex pieces of work I would recommend to first send some
> kind of document to the mailing list in order to make sure the
> direction taken is accepted by the community, and we can also provide
> feedback or point to existing components that can be helpful :). If
> you have code done already that's also fine, feel free to send it.

We have some code done already, but we will definitely spend some time writing a design to agree on before going too far.
There are still some areas that we want to check technically for feasibility (regions, MSI, clocks for example).

Thanks a lot for your feedback
Bertrand
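[Editor's note: Roger's point about segments rests on the fixed ECAM layout: each segment is one MMCFG window, and the offset inside it encodes bus/device/function/register, so Xen can decode a trapped access and re-map it to its own segment numbering. A minimal, illustrative sketch of that decoding; the function name and base address are hypothetical.]

```python
def ecam_decode(mmcfg_base, addr):
    """Decode an ECAM (MMCFG) access into bus/dev/fn/register.

    Each PCI function owns a 4 KiB config window, laid out as
    base + (bus << 20 | dev << 15 | fn << 12 | reg).
    """
    off = addr - mmcfg_base
    bus = (off >> 20) & 0xFF
    dev = (off >> 15) & 0x1F
    fn = (off >> 12) & 0x7
    reg = off & 0xFFF
    return bus, dev, fn, reg

# Bus 3, device 4, function 2, register 0x10 in a window at 0xE0000000:
print(ecam_decode(0xE0000000, 0xE0000000 + (3 << 20) + (4 << 15) + (2 << 12) + 0x10))
# -> (3, 4, 2, 16)
```

Because the segment number is only a label for which window is being accessed, the guest's and Xen's numbers need not match as long as no side-band channel exchanges segment-qualified addresses.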


From xen-devel-bounces@lists.xenproject.org Thu Jul 09 10:43:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jul 2020 10:43:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jtU14-0007b8-5S; Thu, 09 Jul 2020 10:42:58 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=sAgz=AU=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jtU12-0007b3-Pq
 for xen-devel@lists.xenproject.org; Thu, 09 Jul 2020 10:42:56 +0000
X-Inumbo-ID: f214dc84-c1d0-11ea-8eb3-12813bfff9fa
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f214dc84-c1d0-11ea-8eb3-12813bfff9fa;
 Thu, 09 Jul 2020 10:42:55 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1594291375;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:content-transfer-encoding:in-reply-to;
 bh=ZNcSrL/sffho2hPU646wScTh9BRapwNszVWJHurTmpw=;
 b=Bb5C+1+4Y4wDcal0F93V1veNbYQC8WiRCvLoaG0CVbwgeKDb/OM+FRjG
 S1/a7ZT02bKBjOBEJQCHOF8oKerlRK0nhU7qOK5PrarG/y/AUNCoQSFsX
 SQOcOP9VbJZ6aqa9Kh96QUrd+uM2tKZ9SgA2e+GGn1wFE6ulBT/8XInZt Y=;
Authentication-Results: esa1.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: f1FV1F/pVtT9bOmvlu2myC0wQ18xub/hvV780AcLgepTfIG+8vWFnwLqLDfp+Gsgck8m0Crw8u
 kydZ5oiESOmrAwO8v4gpiYtuW8HR2Xp+cPcCFe/EHtfZYci4WXYblmDNJ5oQwamIfx/koPTcuK
 weF0Iqdg2KoKj5fdOiJA2lrxIcUKPJ00To9Nvb616NiCzMyYqKAmNY9uq0Ba3A7I7Ue71AhReV
 lPzrHx1VxHcWlppBuiM8vrpAaWki9S+/4EaoW+QCtGprkE8F+7gaechox940/m7M7NXMxFP/jf
 Ruc=
X-SBRS: 2.7
X-MesageID: 22274415
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,331,1589256000"; d="scan'208";a="22274415"
Date: Thu, 9 Jul 2020 12:42:45 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
Subject: Re: PCI passthrough on arm Design Session MoM
Message-ID: <20200709104245.GG7191@Air-de-Roger>
References: <4E0A40D3-2979-4A91-9376-C2B19B9F582E@arm.com>
 <20200708133205.GE7191@Air-de-Roger>
 <227BD2A5-1FC7-4B1A-8B93-4DBECC43BDA2@arm.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <227BD2A5-1FC7-4B1A-8B93-4DBECC43BDA2@arm.com>
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, nd <nd@arm.com>,
 Rahul Singh <Rahul.Singh@arm.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Thu, Jul 09, 2020 at 10:29:52AM +0000, Bertrand Marquis wrote:
> 
> 
> > On 8 Jul 2020, at 14:32, Roger Pau Monné <roger.pau@citrix.com> wrote:
> > 
> > On Wed, Jul 08, 2020 at 12:55:36PM +0000, Bertrand Marquis wrote:
> >> Hi,
> >> 
> >> Here are the notes we took during the design session around PCI devices passthrough on Arm.
> >> Feel free to comment or add anything :-)
> >> 
> >> Bertrand
> >> 
> >> PCI devices passthrough on Arm Design Session
> >> ======================================
> >> 
> >> Date: 7/7/2020
> >> 
> >> - X86 VPCI support is for the PVH guest.
> > 
> > Current vPCI is only for PVH dom0. We need to decide what to do for
> > PVH domUs, whether we want to use vPCI or xenpt from Paul:
> > 
> > http://xenbits.xen.org/gitweb/?p=people/pauldu/xenpt.git;a=summary
> > 
> > Or something else. I think this decision also needs to take into
> > account Arm.
> 
> We are currently using vpci for guests.
> But we could also look into xenpt, but from a quick check it does require a Dom0, which would defeat the Dom0less use case.

Right, you need a hardware domain in order to use xenpt. Note that
AFAICT you would also need a hardware domain in order to do the
placement of BARs if, for example, the firmware is not doing it for you.

Likely better to discuss all this against a design document instead of
raising specific issues here I guess :).

Roger.
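[Editor's note: the BAR placement mentioned above relies on the standard PCI sizing handshake: write all-ones to a BAR, read it back, and the writable bits reveal the region size. A minimal sketch for a 32-bit memory BAR; real code must also handle I/O and 64-bit BARs, and the function name is illustrative.]

```python
BAR_MEM_TYPE_MASK = 0xF  # low bits of a memory BAR encode type/prefetch flags

def mem_bar_size(readback):
    """Size of a 32-bit memory BAR, given the value read back
    after writing 0xFFFFFFFF to it."""
    base = readback & ~BAR_MEM_TYPE_MASK & 0xFFFFFFFF
    if base == 0:
        return 0  # BAR not implemented
    return (~base & 0xFFFFFFFF) + 1

print(hex(mem_bar_size(0xFFF00000)))  # a 1 MiB region
# -> 0x100000
```

Whoever performs this probe (firmware, a hardware domain, or Xen itself) must then assign non-overlapping addresses, which is why Dom0less setups cannot simply defer it to a hardware domain.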


From xen-devel-bounces@lists.xenproject.org Thu Jul 09 11:04:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jul 2020 11:04:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jtUM4-0000tk-2h; Thu, 09 Jul 2020 11:04:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=BKwJ=AU=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jtUM3-0000tP-4w
 for xen-devel@lists.xenproject.org; Thu, 09 Jul 2020 11:04:39 +0000
X-Inumbo-ID: f73d072e-c1d3-11ea-b7bb-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f73d072e-c1d3-11ea-b7bb-bc764e2007e4;
 Thu, 09 Jul 2020 11:04:31 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=QyOZ6//0VF30CYqv0zbDCRQb5hSTJrrbjCVrO3LfeWY=; b=rVyb+A4oQvTj5yMHVmY+kV1bq
 8a8bFqCyU4jMG8Qqa4nJW+kcgjYy5o81kS3MnibWXkllJoRAsZ3ivb5FvQcpkjxKKiKFhST4I2cF3
 HRpVaf4bSsh0aigDAVkd8zQih5TtpdFt79//pJ1l1wglGEh59qWhx1fVSYbGiQoTgDqnM=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jtULv-0004Yk-Jy; Thu, 09 Jul 2020 11:04:31 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jtULv-0004Ij-5o; Thu, 09 Jul 2020 11:04:31 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jtULv-0006pc-4q; Thu, 09 Jul 2020 11:04:31 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151740-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 151740: tolerable FAIL
X-Osstest-Failures: xen-unstable:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:heisenbug
 xen-unstable:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:guest-localmigrate/x10:fail:heisenbug
 xen-unstable:test-amd64-amd64-examine:memdisk-try-append:fail:heisenbug
 xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=3fdc211b01b29f252166937238efe02d15cb5780
X-Osstest-Versions-That: xen=3fdc211b01b29f252166937238efe02d15cb5780
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 09 Jul 2020 11:04:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151740 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151740/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl-rtds 16 guest-start/debian.repeat fail in 151719 pass in 151740
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 16 guest-localmigrate/x10 fail pass in 151719
 test-amd64-amd64-examine      4 memdisk-try-append         fail pass in 151719

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 151719
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 151719
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 151719
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 151719
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 151719
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 151719
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 151719
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 151719
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop             fail like 151719
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  3fdc211b01b29f252166937238efe02d15cb5780
baseline version:
 xen                  3fdc211b01b29f252166937238efe02d15cb5780

Last test of basis   151740  2020-07-08 17:04:23 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Thu Jul 09 12:01:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jul 2020 12:01:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jtVF7-0005xa-5v; Thu, 09 Jul 2020 12:01:33 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=sHay=AU=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jtVF5-0005wm-HG
 for xen-devel@lists.xenproject.org; Thu, 09 Jul 2020 12:01:31 +0000
X-Inumbo-ID: ecf6a812-c1db-11ea-8ebb-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ecf6a812-c1db-11ea-8ebb-12813bfff9fa;
 Thu, 09 Jul 2020 12:01:30 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=AfM26NXRFpMb9avqj7wJgLrOtlUtJTQKut8Bvojfonw=; b=Rf3PR0EvLcWf3N+DF2rdrpKMeu
 pHz8dQ/Squle3jakF4mhpcjNsCyxa5jQO8YpoyNaAq+habgyi8jQ/yjzOSjKo8hN6dkKd8GwGthLO
 mMQTHgga8ep/SwSxsbkKbOMsxYCm9Ksbwzg80EHZXFi2VeXjEyZ5hBgXyOgdLFT7Vz8w=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jtVEz-0005ck-Af; Thu, 09 Jul 2020 12:01:25 +0000
Received: from 54-240-197-233.amazon.com ([54.240.197.233]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jtVEz-00087o-0C; Thu, 09 Jul 2020 12:01:25 +0000
Subject: Re: [PATCH v4 for-4.14 2/2] pvcalls: Document correctly and
 explicitly the padding for all arches
To: xen-devel@lists.xenproject.org
References: <20200627095533.14145-1-julien@xen.org>
 <20200627095533.14145-3-julien@xen.org>
 <b7f41be0-f1d2-2c3b-c79f-5d9763dfb5df@xen.org>
From: Julien Grall <julien@xen.org>
Message-ID: <2e0a451e-58be-eff1-0030-be489e5098f3@xen.org>
Date: Thu, 9 Jul 2020 13:01:22 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:68.0)
 Gecko/20100101 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <b7f41be0-f1d2-2c3b-c79f-5d9763dfb5df@xen.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 paul@xen.org, Andrew Cooper <andrew.cooper3@citrix.com>,
 Julien Grall <jgrall@amazon.com>, Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi,

Gentle ping.

Is the new commit message fine?

Cheers,

On 04/07/2020 16:29, Julien Grall wrote:
> Hi,
> 
> On 27/06/2020 10:55, Julien Grall wrote:
>> From: Julien Grall <jgrall@amazon.com>
>>
>> The specification of pvcalls suggests there is padding for 32-bit x86
>> at the end of most of the structures. However, they are not described
>> in the public header.
>>
>> Because of that all the structures would be 32-bit aligned and not
>> 64-bit aligned for 32-bit x86.
>>
>> For all the other architectures supported (Arm and 64-bit x86), the
>> structures are aligned to 64-bit because they contain a uint64_t
>> field. Therefore all the structures contain implicit padding.
>>
>> Given the specification is authoritative, the padding will be the same
>> for all architectures. The potential breakage of compatibility ought
>> to be fine as pvcalls is still a tech preview.
>>
>> As an aside, the padding sadly cannot be mandated to be 0 as the
>> fields are already present. So it is not going to be possible to use
>> the padding to extend a command in the future.
>>
>> Signed-off-by: Julien Grall <jgrall@amazon.com>
> 
> It looks like most of the comments are on the commit message. So rather 
> than sending the series again, below is a new version of the commit 
> message:
> 
> "
> The specification of pvcalls suggests there is padding for 32-bit x86
> at the end of most of the structures. However, they are not described
> in the public header.
> 
> Because of that, all the structures would have different sizes on 
> 32-bit x86 and 64-bit x86.
> 
> For all the other architectures supported (Arm and 64-bit x86), the
> structures have the same sizes because they contain implicit padding 
> thanks to the 64-bit alignment of the uint64_t field.
> 
> Given the specification is authoritative, the padding will now be the 
> same for all architectures. The potential breakage of compatibility 
> ought to be fine as pvcalls is still a tech preview.
> "
> 
> Cheers,
> 
> 

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Jul 09 15:11:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jul 2020 15:11:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jtYD0-0004bf-PL; Thu, 09 Jul 2020 15:11:34 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=BKwJ=AU=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jtYCz-0004ba-JB
 for xen-devel@lists.xenproject.org; Thu, 09 Jul 2020 15:11:33 +0000
X-Inumbo-ID: 77bfe46c-c1f6-11ea-8ee5-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 77bfe46c-c1f6-11ea-8ee5-12813bfff9fa;
 Thu, 09 Jul 2020 15:11:30 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=shgaYWBia/1cE/i3Ag2vCAlllGm2K2CYDHuZD4e/1gE=; b=rKenImfPc/EVqOybYgTooxOsd
 Wa2L9yIuaASH1FioSWNNXes6JrO2HjkYn4UX63N0q+urKr9jomMrSQfJxEGoTFPF27l17CvoQBWoO
 H1gpnVZGh0rD0xuMJZwqpTatPqzaBrHK5bkjx02Q+GAsxPUsx48ePKlkyeLRnn5RzLtRU=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jtYCw-0000iN-1P; Thu, 09 Jul 2020 15:11:30 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jtYCv-0007OH-Ns; Thu, 09 Jul 2020 15:11:29 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jtYCv-0001wS-MP; Thu, 09 Jul 2020 15:11:29 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151744-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 151744: regressions - FAIL
X-Osstest-Failures: qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-amd64-libvirt-vhd:debian-di-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-libvirt-xsm:guest-start:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qcow2:debian-di-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:redhat-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-i386-libvirt-pair:guest-start/debian:fail:regression
 qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-freebsd10-i386:guest-start:fail:regression
 qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:redhat-install:fail:regression
 qemu-mainline:test-amd64-amd64-libvirt:guest-start:fail:regression
 qemu-mainline:test-amd64-amd64-libvirt-xsm:guest-start:fail:regression
 qemu-mainline:test-amd64-i386-libvirt:guest-start:fail:regression
 qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-start:fail:regression
 qemu-mainline:test-armhf-armhf-xl-vhd:debian-di-install:fail:regression
 qemu-mainline:test-amd64-amd64-libvirt-pair:guest-start/debian:fail:regression
 qemu-mainline:test-armhf-armhf-libvirt:guest-start:fail:regression
 qemu-mainline:test-arm64-arm64-libvirt-xsm:guest-start:fail:regression
 qemu-mainline:test-armhf-armhf-libvirt-raw:debian-di-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: qemuu=8796c64ecdfd34be394ea277aaaaa53df0c76996
X-Osstest-Versions-That: qemuu=9e3903136d9acde2fb2dd9e967ba928050a6cb4a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 09 Jul 2020 15:11:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151744 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151744/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-ovmf-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-win7-amd64 10 windows-install  fail REGR. vs. 151065
 test-amd64-amd64-libvirt-vhd 10 debian-di-install        fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-win7-amd64 10 windows-install   fail REGR. vs. 151065
 test-amd64-amd64-qemuu-nested-intel 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-libvirt-xsm  12 guest-start              fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-ws16-amd64 10 windows-install  fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-qemuu-nested-amd 10 debian-hvm-install  fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-ovmf-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qcow2    10 debian-di-install        fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-qemuu-rhel6hvm-amd 10 redhat-install     fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-debianhvm-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-ws16-amd64 10 windows-install   fail REGR. vs. 151065
 test-amd64-i386-libvirt-pair 21 guest-start/debian       fail REGR. vs. 151065
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-freebsd10-i386 11 guest-start            fail REGR. vs. 151065
 test-amd64-i386-qemuu-rhel6hvm-intel 10 redhat-install   fail REGR. vs. 151065
 test-amd64-amd64-libvirt     12 guest-start              fail REGR. vs. 151065
 test-amd64-amd64-libvirt-xsm 12 guest-start              fail REGR. vs. 151065
 test-amd64-i386-libvirt      12 guest-start              fail REGR. vs. 151065
 test-amd64-i386-freebsd10-amd64 11 guest-start           fail REGR. vs. 151065
 test-armhf-armhf-xl-vhd      10 debian-di-install        fail REGR. vs. 151065
 test-amd64-amd64-libvirt-pair 21 guest-start/debian      fail REGR. vs. 151065
 test-armhf-armhf-libvirt     12 guest-start              fail REGR. vs. 151065
 test-arm64-arm64-libvirt-xsm 12 guest-start              fail REGR. vs. 151065
 test-armhf-armhf-libvirt-raw 10 debian-di-install        fail REGR. vs. 151065

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                8796c64ecdfd34be394ea277aaaaa53df0c76996
baseline version:
 qemuu                9e3903136d9acde2fb2dd9e967ba928050a6cb4a

Last test of basis   151065  2020-06-12 22:27:51 Z   26 days
Failing since        151101  2020-06-14 08:32:51 Z   25 days   32 attempts
Testing same since   151744  2020-07-08 20:47:34 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ahmed Karaman <ahmedkhaledkaraman@gmail.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Bulekov <alxndr@bu.edu>
  Alexey Krasikov <alex-krasikov@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Allan Peramaki <aperamak@pp1.inet.fi>
  Andrew Jones <drjones@redhat.com>
  Ani Sinha <ani.sinha@nutanix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Ard Biesheuvel <ardb@kernel.org>
  Artyom Tarasenko <atar4qemu@gmail.com>
  Aurelien Jarno <aurelien@aurel32.net>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Beata Michalska <beata.michalska@linaro.org>
  Bin Meng <bin.meng@windriver.com>
  Catherine A. Frederick <chocola@animebitch.es>
  Cathy Zhang <cathy.zhang@intel.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Cindy Lu <lulu@redhat.com>
  Claudio Fontana <cfontana@suse.de>
  Cleber Rosa <crosa@redhat.com>
  Colin Xu <colin.xu@intel.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniele Buono <dbuono@linux.vnet.ibm.com>
  David Carlier <devnexen@gmail.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Derek Su <dereksu@qnap.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Emilio G. Cota <cota@braap.org>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Evgeny Yakovlev <eyakovlev@virtuozzo.com>
  fangying <fangying1@huawei.com>
  Farhan Ali <alifm@linux.ibm.com>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Geoffrey McRae <geoff@hostfission.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Giuseppe Musacchio <thatlemon@gmail.com>
  Greg Kurz <groug@kaod.org>
  Guenter Roeck <linux@roeck-us.net>
  Gustavo Romero <gromero@linux.ibm.com>
  Halil Pasic <pasic@linux.ibm.com>
  Helge Deller <deller@gmx.de>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Ian Jiang <ianjiang.ict@gmail.com>
  Igor Mammedov <imammedo@redhat.com>
  Janne Grunau <j@jannau.net>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Wang <jasowang@redhat.com>
  Jay Zhou <jianjay.zhou@huawei.com>
  Jean-Christophe Dubois <jcd@tribudubois.net>
  Jessica Clarke <jrtc27@jrtc27.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jingqi Liu <jingqi.liu@intel.com>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Joseph Myers <joseph@codesourcery.com>
  Julio Faracco <jcfaracco@gmail.com>
  Keqian Zhu <zhukeqian1@huawei.com>
  Kevin Wolf <kwolf@redhat.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Leonid Bloch <lb.workbox@gmail.com>
  Leonid Bloch <lbloch@janustech.com>
  Li Qiang <liq3ea@gmail.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  lichun <lichun@ruijie.com.cn>
  Like Xu <like.xu@linux.intel.com>
  Lingfeng Yang <lfy@google.com>
  Lingshan zhu <lingshan.zhu@intel.com>
  Liran Alon <liran.alon@oracle.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Lukas Straub <lukasstraub2@web.de>
  Maciej S. Szmigiero <maciej.szmigiero@oracle.com>
  Magnus Damm <magnus.damm@gmail.com>
  Mao Zhongyi <maozhongyi@cmss.chinamobile.com>
  Marcelo Tosatti <mtosatti@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Masahiro Yamada <masahiroy@kernel.org>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Maxime Coquelin <maxime.coquelin@redhat.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michele Denber <denber@mindspring.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Raphael Norwitz <raphael.norwitz@nutanix.com>
  Richard Henderson <richard.henderson@linaro.org>
  Richard W.M. Jones <rjones@redhat.com>
  Riku Voipio <riku.voipio@linaro.org>
  Robert Foley <robert.foley@linaro.org>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Roman Kagan <rkagan@virtuozzo.com>
  Roman Kagan <rvkagan@yandex-team.ru>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Sebastian Rasmussen <sebras@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Tao Xu <tao3.xu@intel.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tiwei Bie <tiwei.bie@intel.com>
  Tong Ho <tong.ho@xilinx.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  WangBowen <bowen.wang@intel.com>
  Wei Huang <wei.huang2@amd.com>
  Wei Wang <wei.w.wang@intel.com>
  Ying Fang <fangying1@huawei.com>
  Yoshinori Sato <ysato@users.sourceforge.jp>
  Yuri Benditovich <yuri.benditovich@daynix.com>
  Zhang Chen <chen.zhang@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 20291 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Jul 09 15:19:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jul 2020 15:19:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jtYL2-0004qi-O2; Thu, 09 Jul 2020 15:19:52 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=BKwJ=AU=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jtYL1-0004qd-46
 for xen-devel@lists.xenproject.org; Thu, 09 Jul 2020 15:19:51 +0000
X-Inumbo-ID: a0efcf22-c1f7-11ea-bca7-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a0efcf22-c1f7-11ea-bca7-bc764e2007e4;
 Thu, 09 Jul 2020 15:19:49 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=YDUDRhz4AoaWPeDEtP/q6UtbA7QPbAxEGm3YY4DVwb0=; b=INMV8A+DflYu/1bhluD6jzAvq
 jerEMNRYtnVp/GHS4WWZSw+yG8CsTwxLIiK3s/rRDmS41zxY+bGfKNptgapc7VeLdRoxfg8vwJuZD
 Uvghre/wwBiDiZFCe+PIoDAI+b5UwjqiTH8HXM2+TLtVFJfcEVGU8/BU1nM0VwiyhRvMc=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jtYKy-0000qx-LA; Thu, 09 Jul 2020 15:19:48 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jtYKy-00086Q-29; Thu, 09 Jul 2020 15:19:48 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jtYKy-0004VC-0p; Thu, 09 Jul 2020 15:19:48 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151747-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 151747: regressions - FAIL
X-Osstest-Failures: linux-linus:test-arm64-arm64-libvirt-xsm:guest-start/debian.repeat:fail:regression
 linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:regression
 linux-linus:build-i386-pvops:kernel-build:fail:regression
 linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-examine:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-pair:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
X-Osstest-Versions-This: linux=0bddd227f3dc55975e2b8dfa7fc6f959b062a2c7
X-Osstest-Versions-That: linux=1b5044021070efa3259f3e9548dc35d1eb6aa844
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 09 Jul 2020 15:19:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151747 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151747/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-libvirt-xsm 16 guest-start/debian.repeat fail REGR. vs. 151214
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 151214
 build-i386-pvops              6 kernel-build             fail REGR. vs. 151214

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemut-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemut-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-examine       1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 151214
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 151214
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 151214
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 151214
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 151214
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 151214
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass

version targeted for testing:
 linux                0bddd227f3dc55975e2b8dfa7fc6f959b062a2c7
baseline version:
 linux                1b5044021070efa3259f3e9548dc35d1eb6aa844

Last test of basis   151214  2020-06-18 02:27:46 Z   21 days
Failing since        151236  2020-06-19 19:10:35 Z   19 days   29 attempts
Testing same since   151747  2020-07-08 23:12:54 Z    0 days    1 attempts

------------------------------------------------------------
608 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             fail    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         blocked 
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 29588 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Jul 09 15:29:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jul 2020 15:29:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jtYUS-0005kL-Qr; Thu, 09 Jul 2020 15:29:36 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=BKwJ=AU=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jtYUR-0005k0-Ne
 for xen-devel@lists.xenproject.org; Thu, 09 Jul 2020 15:29:35 +0000
X-Inumbo-ID: fb0d0668-c1f8-11ea-8496-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id fb0d0668-c1f8-11ea-8496-bc764e2007e4;
 Thu, 09 Jul 2020 15:29:29 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=P2zXi/jWG4mQSY11qqxye/BhvOSAvXuhlr2QkAt+9fY=; b=uIctOdQQaQW7T2KvFWUl+xYbN
 jh7fQ0nvdg9He0oUShptNP+pcEUrzoDaU/XqEhyfz8CbOE2isOfGpVV8IGUN96krNUbFAQO+eF9rl
 w5Mk700jkeJMfX/Stj5GCouHRQI2tZRZAdtaxbNS4J9m/pzoeiYbF3muRMTae3bovNexA=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jtYUL-00011p-8k; Thu, 09 Jul 2020 15:29:29 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jtYUK-000056-SD; Thu, 09 Jul 2020 15:29:28 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jtYUK-0002If-RL; Thu, 09 Jul 2020 15:29:28 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151754-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 151754: regressions - FAIL
X-Osstest-Failures: libvirt:test-amd64-amd64-libvirt-pair:guest-migrate/src_host/dst_host:fail:regression
 libvirt:test-amd64-i386-libvirt-pair:guest-migrate/src_host/dst_host:fail:regression
 libvirt:test-amd64-i386-libvirt-xsm:guest-start.2:fail:regression
 libvirt:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 libvirt:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 libvirt:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 libvirt:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 libvirt:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 libvirt:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 libvirt:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 libvirt:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 libvirt:test-arm64-arm64-libvirt-qcow2:migrate-support-check:fail:nonblocking
 libvirt:test-arm64-arm64-libvirt-qcow2:saverestore-support-check:fail:nonblocking
 libvirt:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 libvirt:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 libvirt:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 libvirt:test-arm64-arm64-libvirt:migrate-support-check:fail:nonblocking
 libvirt:test-arm64-arm64-libvirt:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: libvirt=a4d97f0c199e980bb55d3b659214308c1b371b73
X-Osstest-Versions-That: libvirt=e7998ebeaf15e4e8825be0dd97aa1316f194f00d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 09 Jul 2020 15:29:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151754 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151754/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-pair 22 guest-migrate/src_host/dst_host fail REGR. vs. 151496
 test-amd64-i386-libvirt-pair 22 guest-migrate/src_host/dst_host fail REGR. vs. 151496
 test-amd64-i386-libvirt-xsm  19 guest-start.2            fail REGR. vs. 151496

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 151496
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 151496
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-qcow2 12 migrate-support-check        fail never pass
 test-arm64-arm64-libvirt-qcow2 13 saverestore-support-check    fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt     14 saverestore-support-check    fail   never pass

version targeted for testing:
 libvirt              a4d97f0c199e980bb55d3b659214308c1b371b73
baseline version:
 libvirt              e7998ebeaf15e4e8825be0dd97aa1316f194f00d

Last test of basis   151496  2020-07-01 04:23:43 Z    8 days
Failing since        151527  2020-07-02 04:29:15 Z    7 days    8 attempts
Testing same since   151754  2020-07-09 04:18:51 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrea Bolognani <abologna@redhat.com>
  Bastien Orivel <bastien.orivel@diateam.net>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniel Veillard <veillard@redhat.com>
  Erik Skultety <eskultet@redhat.com>
  Jianan Gao <jgao@redhat.com>
  Laine Stump <laine@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Michal Privoznik <mprivozn@redhat.com>
  Nicolas Brignone <nmbrignone@gmail.com>
  Peter Krempa <pkrempa@redhat.com>
  Yanqiu Zhang <yanqzhan@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-libvirt                                     pass    
 test-arm64-arm64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-i386-libvirt-pair                                 fail    
 test-arm64-arm64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1394 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Jul 09 18:47:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jul 2020 18:47:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jtbZO-0005iK-Ud; Thu, 09 Jul 2020 18:46:54 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=sHay=AU=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jtbZN-0005iF-UD
 for xen-devel@lists.xenproject.org; Thu, 09 Jul 2020 18:46:54 +0000
X-Inumbo-ID: 8e1ceed0-c214-11ea-8efe-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 8e1ceed0-c214-11ea-8efe-12813bfff9fa;
 Thu, 09 Jul 2020 18:46:52 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
 MIME-Version:Content-Type:Content-Transfer-Encoding:Content-ID:
 Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc
 :Resent-Message-ID:In-Reply-To:References:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=yAia7M8H/OX8wdWtPuYtBpWuLcvugvT5fqNIY3v1cqc=; b=bMBXouJoV4o2CuzXEwfCBzv6T3
 afu8X8hl70PQzWx83a5qLbJhQHOzWG5satYVHQYIQJrrmgfdJYyig7AVLc0+JVxOg010KBS7Xm2g7
 1ADGmb9Mq9YmXnjVwEzxATbDMRvO8bJbq+Y2mnZ09Be1G96RA3bpERRFamABP1pVge9Y=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jtbZK-0005Dk-W6; Thu, 09 Jul 2020 18:46:50 +0000
Received: from 54-240-197-227.amazon.com ([54.240.197.227]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jtbZK-00010P-L6; Thu, 09 Jul 2020 18:46:50 +0000
From: Julien Grall <julien@xen.org>
To: andrew.cooper3@citrix.com
Subject: [XTF] xenbus: Don't wait if the response ring is full
Date: Thu,  9 Jul 2020 19:46:47 +0100
Message-Id: <20200709184647.5159-1-julien@xen.org>
X-Mailer: git-send-email 2.17.1
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: wipawel@amazon.de, Julien Grall <jgrall@amazon.com>,
 xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Julien Grall <jgrall@amazon.com>

A XenStore response can be bigger than the response ring. In this case,
it is possible for the ring to be full (e.g. cons = 19 and prod = 1043).

However, XTF will consider that there is no data and therefore wait for
more input. This results in blocking indefinitely, as the ring is full.

This can be solved by not masking the difference between prod and
cons.

Signed-off-by: Julien Grall <jgrall@amazon.com>
---
 common/xenbus.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/common/xenbus.c b/common/xenbus.c
index 24fff4872372..f3bb30ac693f 100644
--- a/common/xenbus.c
+++ b/common/xenbus.c
@@ -75,7 +75,7 @@ static void xenbus_read(void *data, size_t len)
         uint32_t prod = ACCESS_ONCE(xb_ring->rsp_prod);
         uint32_t cons = ACCESS_ONCE(xb_ring->rsp_cons);
 
-        part = mask_xenbus_idx(prod - cons);
+        part = prod - cons;
 
         /* No data?  Kick xenstored and wait for it to produce some data. */
         if ( !part )
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu Jul 09 19:24:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jul 2020 19:24:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jtc9U-0000a5-1O; Thu, 09 Jul 2020 19:24:12 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=BKwJ=AU=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jtc9S-0000Zl-6L
 for xen-devel@lists.xenproject.org; Thu, 09 Jul 2020 19:24:10 +0000
X-Inumbo-ID: bf5ba9dc-c219-11ea-bb8b-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id bf5ba9dc-c219-11ea-bb8b-bc764e2007e4;
 Thu, 09 Jul 2020 19:24:03 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=IML250Zql7Jem+TvApKCx41UqnrBsEKDBw7EdzDsNM8=; b=n23v33n8bOcK7ISe17RMsQ36A
 zkPqKU/RPWRCBfP3FLkyQU4qvRpO1EbcDmoPTdumgLKvXQxIYaKhmgV2A+ZM6RCexJNBk+Sog10df
 fSYGWS5BEv968ZC/NtDjTqu1qf5xmWBGP9eeqBoe2RuDxs/64jYwo0Ub6qzFbA/ib2ZrU=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jtc9K-0005uE-Bk; Thu, 09 Jul 2020 19:24:02 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jtc9K-0002Dr-0T; Thu, 09 Jul 2020 19:24:02 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jtc9J-0001lG-Vz; Thu, 09 Jul 2020 19:24:01 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151757-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 151757: tolerable FAIL - PUSHED
X-Osstest-Failures: linux-5.4:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:allowable
 linux-5.4:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 linux-5.4:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 linux-5.4:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 linux-5.4:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
X-Osstest-Versions-This: linux=1c54d3c15afacf179c07ce6c57a0d43f412f1b3a
X-Osstest-Versions-That: linux=e75220890bf6b37c5f7b1dbd81d8292ed6d96643
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 09 Jul 2020 19:24:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151757 linux-5.4 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151757/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds    16 guest-start/debian.repeat fail REGR. vs. 151516

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop             fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop              fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop             fail never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass

version targeted for testing:
 linux                1c54d3c15afacf179c07ce6c57a0d43f412f1b3a
baseline version:
 linux                e75220890bf6b37c5f7b1dbd81d8292ed6d96643

Last test of basis   151516  2020-07-01 21:12:12 Z    7 days
Testing same since   151757  2020-07-09 08:18:27 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Agarwal, Anchal <anchalag@amazon.com>
  Alan Stern <stern@rowland.harvard.edu>
  Alex Deucher <alexander.deucher@amd.com>
  Alex Guzman <alex@guzman.io>
  Andrew Morton <akpm@linux-foundation.org>
  Anton Eidelman <anton@lightbitslabs.com>
  Ard Biesheuvel <ardb@kernel.org>
  Arnaldo Carvalho de Melo <acme@redhat.com>
  Babu Moger <babu.moger@amd.com>
  Borislav Petkov <bp@suse.de>
  Charan Teja Reddy <charante@codeaurora.org>
  Chen Tao <chentao107@huawei.com>
  Chen-Yu Tsai <wens@csie.org>
  Chris Murphy <lists@colorremedies.com>
  Chris Packham <chris.packham@alliedtelesis.co.nz>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christoph Hellwig <hch@lst.de>
  Chu Lin <linchuyuan@google.com>
  Daniel Lezcano <daniel.lezcano@linaro.org>
  Daniel Thompson <daniel.thompson@linaro.org>
  David Howells <dhowells@redhat.com>
  David S. Miller <davem@davemloft.net>
  Dien Pham <dien.pham.ry@renesas.com>
  Dongli Zhang <dongli.zhang@oracle.com>
  Douglas Anderson <dianders@chromium.org>
  Evan Quan <evan.quan@amd.com>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Guenter Roeck <linux@roeck-us.net>
  Hauke Mehrtens <hauke@hauke-m.de>
  Heiko Carstens <heiko.carstens@de.ibm.com>
  Herbert Xu <herbert@gondor.apana.org.au>
  Hou Tao <houtao1@huawei.com>
  Hsin-Yi Wang <hsinyi@chromium.org>
  Hugh Dickins <hughd@google.com>
  J. Bruce Fields <bfields@redhat.com>
  James Bottomley <James.Bottomley@HansenPartnership.com>
  Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
  Jason Gunthorpe <jgg@mellanox.com>
  Jens Axboe <axboe@kernel.dk>
  Kees Cook <keescook@chromium.org>
  Keith Busch <kbusch@kernel.org>
  Krzysztof Kozlowski <krzk@kernel.org>
  Leon Romanovsky <leonro@mellanox.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Marc Zyngier <maz@kernel.org>
  Mark Brown <broonie@kernel.org>
  Mark Zhang <markz@mellanox.com>
  Martin Blumenstingl <martin.blumenstingl@googlemail.com>
  Maxime Ripard <maxime@cerno.tech>
  Mel Gorman <mgorman@techsingularity.net>
  Michael Kao <michael.kao@mediatek.com>
  Michael Shych <michaelsh@mellanox.com>
  Mike Snitzer <snitzer@redhat.com>
  Misono Tomohiro <misono.tomohiro@jp.fujitsu.com>
  Nicholas Kazlauskas <nicholas.kazlauskas@amd.com>
  Niklas Soderlund <niklas.soderlund+renesas@ragnatech.se>
  Nirmoy Das <nirmoy.das@amd.com>
  Paul Aurich <paul@darkrain42.org>
  Peter Jones <pjones@redhat.com>
  Peter Zijlstra (Intel) <peterz@infradead.org>
  Qian Cai <cai@lca.pw>
  Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
  Rob Clark <robdclark@chromium.org>
  Sagi Grimberg <sagi@grimberg.me>
  Salvatore Bonaccorso <carnil@debian.org>
  Sasha Levin <sashal@kernel.org>
  Shuah Khan <skhan@linuxfoundation.org>
  Steve French <stfrench@microsoft.com>
  Steven Rostedt (VMware) <rostedt@goodmis.org>
  Sumit Semwal <sumit.semwal@linaro.org>
  Thomas Bogendoerfer <tsbogend@alpha.franken.de>
  Tuomas Tynkkynen <tuomas.tynkkynen@iki.fi>
  Valentin Schneider <valentin.schneider@arm.com>
  Van Do <van.do.xw@renesas.com>
  Vladimir Oltean <vladimir.oltean@nxp.com>
  Vlastimil Babka <vbabka@suse.cz>
  Wolfram Sang <wsa+renesas@sang-engineering.com>
  Wolfram Sang <wsa@kernel.org>
  Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
  Zhang Xiaoxu <zhangxiaoxu5@huawei.com>
  Zqiang <qiang.zhang@windriver.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

hint: The 'hooks/update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-receive' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
To xenbits.xen.org:/home/xen/git/linux-pvops.git
   e75220890bf6..1c54d3c15afa  1c54d3c15afacf179c07ce6c57a0d43f412f1b3a -> tested/linux-5.4


From xen-devel-bounces@lists.xenproject.org Thu Jul 09 21:44:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 09 Jul 2020 21:44:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jteKZ-0003TF-8N; Thu, 09 Jul 2020 21:43:47 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PDeG=AU=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1jteKY-0003TA-2f
 for xen-devel@lists.xenproject.org; Thu, 09 Jul 2020 21:43:46 +0000
X-Inumbo-ID: 4368e092-c22d-11ea-8f22-12813bfff9fa
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4368e092-c22d-11ea-8f22-12813bfff9fa;
 Thu, 09 Jul 2020 21:43:45 +0000 (UTC)
Received: from localhost (c-67-164-102-47.hsd1.ca.comcast.net [67.164.102.47])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256
 bits)) (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id E786220774;
 Thu,  9 Jul 2020 21:43:43 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
 s=default; t=1594331024;
 bh=6uwGewJZuGMzlEvaEh3gYRgYTjhsK9tA03tQLqLSXfQ=;
 h=Date:From:To:cc:Subject:In-Reply-To:References:From;
 b=aakhgkeGgwjspp3hEdLzeY3aOMxyXTjy25PT7sMfE7e6G+PUiFi/7+Rj1TCUJ/s4e
 DXRDxIvCsEhD1VTScBesyLsPwdq00BCO39EsGfauwrMZ0HdNkP3iInMgNBytesM7Qh
 gnRFN5eEtJjkUxUCqvR+8I4749AebmKO0Zjf1Jk4=
Date: Thu, 9 Jul 2020 14:43:43 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien@xen.org>
Subject: Re: [PATCH v4 for-4.14 2/2] pvcalls: Document correctly and
 explicitely the padding for all arches
In-Reply-To: <2e0a451e-58be-eff1-0030-be489e5098f3@xen.org>
Message-ID: <alpine.DEB.2.21.2007091443290.4124@sstabellini-ThinkPad-T480s>
References: <20200627095533.14145-1-julien@xen.org>
 <20200627095533.14145-3-julien@xen.org>
 <b7f41be0-f1d2-2c3b-c79f-5d9763dfb5df@xen.org>
 <2e0a451e-58be-eff1-0030-be489e5098f3@xen.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 paul@xen.org, Andrew Cooper <andrew.cooper3@citrix.com>,
 Julien Grall <jgrall@amazon.com>, Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Thu, 9 Jul 2020, Julien Grall wrote:
> Hi,
> 
> Gentle ping.
> 
> Is the new commit message fine?
> 
> Cheers,
> 
> On 04/07/2020 16:29, Julien Grall wrote:
> > Hi,
> > 
> > On 27/06/2020 10:55, Julien Grall wrote:
> > > From: Julien Grall <jgrall@amazon.com>
> > > 
> > > The specification of pvcalls suggests there is padding for 32-bit x86
> > > at the end of most of the structures. However, they are not described
> > > in the public header.
> > > 
> > > Because of that all the structures would be 32-bit aligned and not
> > > 64-bit aligned for 32-bit x86.
> > > 
> > > For all the other architectures supported (Arm and 64-bit x86), the
> > > structures are aligned to 64-bit because they contain a uint64_t
> > > field. Therefore all the structures contain implicit padding.
> > > 
> > > Given the specification is authoritative, the padding will be the
> > > same for all architectures. The potential breakage of compatibility
> > > ought to be fine as pvcalls is still a tech preview.
> > > 
> > > As an aside, the padding sadly cannot be mandated to be 0 as the
> > > fields are already present. So it is not going to be possible to use
> > > the padding for extending a command in the future.
> > > 
> > > Signed-off-by: Julien Grall <jgrall@amazon.com>
> > 
> > It looks like most of the comments are on the commit message. So rather
> > than sending the series again, below is a new version of the commit
> > message:
> > 
> > "
> > The specification of pvcalls suggests there is padding for 32-bit x86
> > at the end of most of the structures. However, they are not described
> > in the public header.
> > 
> > Because of that all the structures would have a different size between
> > 32-bit x86 and 64-bit x86.
> > 
> > For all the other architectures supported (Arm and 64-bit x86), the
> > structures have the same sizes because they contain implicit padding
> > thanks to the 64-bit alignment of the uint64_t field.
> > 
> > Given the specification is authoritative, the padding will now be the
> > same for all architectures. The potential breakage of compatibility
> > ought to be fine as pvcalls is still a tech preview.
> > "

Looks good to me

Acked-by: Stefano Stabellini <sstabellini@kernel.org>


From xen-devel-bounces@lists.xenproject.org Fri Jul 10 01:39:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jul 2020 01:39:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jthzw-000787-JX; Fri, 10 Jul 2020 01:38:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=8UDJ=AV=prgmr.com=srn@srs-us1.protection.inumbo.net>)
 id 1jthzv-000782-NZ
 for xen-devel@lists.xen.org; Fri, 10 Jul 2020 01:38:43 +0000
X-Inumbo-ID: 161f2742-c24e-11ea-b7bb-bc764e2007e4
Received: from mail.prgmr.com (unknown [2605:2700:0:5::4713:9506])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 161f2742-c24e-11ea-b7bb-bc764e2007e4;
 Fri, 10 Jul 2020 01:38:42 +0000 (UTC)
Received: from [192.168.2.47] (c-174-62-72-237.hsd1.ca.comcast.net
 [174.62.72.237]) (Authenticated sender: srn)
 by mail.prgmr.com (Postfix) with ESMTPSA id C2BEB720128
 for <xen-devel@lists.xen.org>; Thu,  9 Jul 2020 21:38:41 -0400 (EDT)
DKIM-Filter: OpenDKIM Filter v2.11.0 mail.prgmr.com C2BEB720128
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=prgmr.com;
 s=default; t=1594345121;
 bh=Zuofq26+UDaoVOyGqVyCpbefTyYVsjzLJ7mlHBwxUr4=;
 h=To:From:Subject:Date:From;
 b=SI/8kIqW51KVfwP4oi89MxiSVnyKVkm3200FVKzWF0ncXv3bnbGCqXrwaFrPEMxX6
 wO+MakTfHYnGovYcUyUsb2SD7dHZwBzr3rIxDlvS8LcYkO92Tq/BY7pslZR0llghkO
 9XSE5qZ2ZVPpeI27d7CIy4/33nsEUk2i5hV/FMlc=
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
From: Sarah Newman <srn@prgmr.com>
Subject: Linux 5.4.46: BUG: sleeping function called from invalid context at
 drivers/xen/preempt.c:37
Message-ID: <1a98b431-4112-3962-f7f4-7a335ef33cb6@prgmr.com>
Date: Thu, 9 Jul 2020 18:38:41 -0700
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

BUG: sleeping function called from invalid context at drivers/xen/preempt.c:37
in_atomic(): 1, irqs_disabled(): 0, non_block: 0, pid: 20775, name: xl
INFO: lockdep is turned off.
CPU: 1 PID: 20775 Comm: xl Tainted: G      D W         5.4.46-1_prgmr_debug.el7.x86_64 #1
Call Trace:
<IRQ>
dump_stack+0x8f/0xd0
___might_sleep.cold.76+0xb2/0x103
xen_maybe_preempt_hcall+0x48/0x70
xen_do_hypervisor_callback+0x37/0x40
RIP: e030:xen_hypercall_xen_version+0xa/0x20
Code: 51 41 53 b8 10 00 00 00 0f 05 41 5b 59 c3 cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc 51 41 53 b8 11 00 00 00 0f 05 <41> 5b 59 c3 cc 
cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc
RSP: e02b:ffffc900400dcc30 EFLAGS: 00000246
RAX: 000000000004000d RBX: 0000000000000200 RCX: ffffffff8100122a
RDX: ffff88812e788000 RSI: 0000000000000000 RDI: 0000000000000000
RBP: ffffffff83ee3ad0 R08: 0000000000000001 R09: 0000000000000001
R10: 0000000000000000 R11: 0000000000000246 R12: ffff8881824aa0b0
R13: 0000000865496000 R14: 0000000865496000 R15: ffff88815d040000
? xen_hypercall_xen_version+0xa/0x20
? xen_force_evtchn_callback+0x9/0x10
? check_events+0x12/0x20
? xen_restore_fl_direct+0x1f/0x20
? _raw_spin_unlock_irqrestore+0x53/0x60
? debug_dma_sync_single_for_cpu+0x91/0xc0
? _raw_spin_unlock_irqrestore+0x53/0x60
? xen_swiotlb_sync_single_for_cpu+0x3d/0x140
? mlx4_en_process_rx_cq+0x6b6/0x1110 [mlx4_en]
? mlx4_en_poll_rx_cq+0x64/0x100 [mlx4_en]
? net_rx_action+0x151/0x4a0
? __do_softirq+0xed/0x55b
? irq_exit+0xea/0x100
? xen_evtchn_do_upcall+0x2c/0x40
? xen_do_hypervisor_callback+0x29/0x40
</IRQ>
? xen_hypercall_domctl+0xa/0x20
? xen_hypercall_domctl+0x8/0x20
? privcmd_ioctl+0x221/0x990 [xen_privcmd]
? do_vfs_ioctl+0xa5/0x6f0
? ksys_ioctl+0x60/0x90
? trace_hardirqs_off_thunk+0x1a/0x20
? __x64_sys_ioctl+0x16/0x20
? do_syscall_64+0x62/0x250
? entry_SYSCALL_64_after_hwframe+0x49/0xbe


From xen-devel-bounces@lists.xenproject.org Fri Jul 10 01:40:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jul 2020 01:40:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jti26-0007ry-07; Fri, 10 Jul 2020 01:40:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=uxYN=AV=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jti24-0007rt-Lc
 for xen-devel@lists.xenproject.org; Fri, 10 Jul 2020 01:40:56 +0000
X-Inumbo-ID: 65311a98-c24e-11ea-b7bb-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 65311a98-c24e-11ea-b7bb-bc764e2007e4;
 Fri, 10 Jul 2020 01:40:55 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=M7RgWxc7yeg1/XN6bxFwQFkcgeFB7aJ2EbT4FTAOcZs=; b=c5V5X51iF29eM67btyV4rv1mY
 rq5x+DCwrUHIxpRYsMb4Z4+JwEbaNIGVi5W3GKHufZ11smBrr5PwJKMjTqgZv0PWmjq7yKLP4VXnH
 I11ZfXRA6TmajzeD4cOt5AA732vuognxY1oJB6JbTbARM6IOswAuflgrfGHxvANmOtBYM=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jti22-0006TI-Cb; Fri, 10 Jul 2020 01:40:54 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jti22-0003WW-1a; Fri, 10 Jul 2020 01:40:54 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jti22-0006Qh-06; Fri, 10 Jul 2020 01:40:54 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151759-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 151759: tolerable FAIL
X-Osstest-Failures: xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=3fdc211b01b29f252166937238efe02d15cb5780
X-Osstest-Versions-That: xen=3fdc211b01b29f252166937238efe02d15cb5780
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 10 Jul 2020 01:40:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151759 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151759/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 151740
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 151740
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 151740
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 151740
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 151740
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 151740
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 151740
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 151740
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop             fail like 151740
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  3fdc211b01b29f252166937238efe02d15cb5780
baseline version:
 xen                  3fdc211b01b29f252166937238efe02d15cb5780

Last test of basis   151759  2020-07-09 11:07:03 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Fri Jul 10 04:26:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jul 2020 04:26:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jtkcJ-0004Ir-CU; Fri, 10 Jul 2020 04:26:31 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=TFc2=AV=in.bosch.com=manikandan.chockalingam@srs-us1.protection.inumbo.net>)
 id 1jtkcF-0004Im-U4
 for xen-devel@lists.xen.org; Fri, 10 Jul 2020 04:26:30 +0000
X-Inumbo-ID: 8255d0c1-c265-11ea-8f56-12813bfff9fa
Received: from de-out1.bosch-org.com (unknown [139.15.230.186])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 8255d0c1-c265-11ea-8f56-12813bfff9fa;
 Fri, 10 Jul 2020 04:26:24 +0000 (UTC)
Received: from fe0vm1650.rbesz01.com
 (lb41g3-ha-dmz-psi-sl1-mailout.fe.ssn.bosch.com [139.15.230.188])
 by si0vms0217.rbdmz01.com (Postfix) with ESMTPS id 4B30Nb1jhdz4f3lwb;
 Fri, 10 Jul 2020 06:26:23 +0200 (CEST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=in.bosch.com;
 s=key1-intmail; t=1594355183;
 bh=0flT9FbF1bnILmJA1WrUz6hc6gUFrxwO0XIL8WgUsdU=; l=10;
 h=From:Subject:From:Reply-To:Sender;
 b=TRK7CCrDxpsJj2Y0PHDvyM/XO8LfkrOwtCb3fA7WsbWlbpHClJ6b6P8wreQO0xLFN
 GEVlmdVUT1WR2UmW+t9iT1tCjc4ZwjoVKGxM4EY5oaG861/F6netRdL/xC4MX4xCF9
 gnXxbFKF3WoCyuK3sEgBzkN4OzZc/9Ms5V2jYTqY=
Received: from fe0vm1741.rbesz01.com (unknown [10.58.172.176])
 by fe0vm1650.rbesz01.com (Postfix) with ESMTPS id 4B30Nb1Mbsz1DF;
 Fri, 10 Jul 2020 06:26:23 +0200 (CEST)
X-AuditID: 0a3aad15-e97ff700000001dd-fc-5f07edef2019
Received: from fe0vm1651.rbesz01.com ( [10.58.173.29])
 (using TLS with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (Client did not present a certificate)
 by fe0vm1741.rbesz01.com (SMG Outbound) with SMTP id 8E.D4.00477.FEDE70F5;
 Fri, 10 Jul 2020 06:26:23 +0200 (CEST)
Received: from FE-MBX2016.de.bosch.com (fe-mbx2016.de.bosch.com [10.3.231.22])
 by fe0vm1651.rbesz01.com (Postfix) with ESMTPS id 4B30Nb0csrzvl6;
 Fri, 10 Jul 2020 06:26:23 +0200 (CEST)
Received: from SGPMBX2022.APAC.bosch.com (10.187.83.37) by
 FE-MBX2016.de.bosch.com (10.3.231.22) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.1.1979.3; Fri, 10 Jul 2020 06:26:22 +0200
Received: from SGPMBX2022.APAC.bosch.com (10.187.83.37) by
 SGPMBX2022.APAC.bosch.com (10.187.83.37) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.1.1979.3; Fri, 10 Jul 2020 12:26:20 +0800
Received: from SGPMBX2022.APAC.bosch.com ([fe80::2d4d:b176:b210:896]) by
 SGPMBX2022.APAC.bosch.com ([fe80::2d4d:b176:b210:896%6]) with mapi id
 15.01.1979.003; Fri, 10 Jul 2020 12:26:20 +0800
From: "Manikandan Chockalingam (RBEI/ECF3)"
 <Manikandan.Chockalingam@in.bosch.com>
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
Subject: RE: [BUG] Xen build for RCAR failing
Thread-Topic: [BUG] Xen build for RCAR failing
Thread-Index: AdZUKc5JeR7gPpESR52uLkZK1kYwOwAEsnEAAAD8OlAAAEBtgAABZtcgAANXdAAAh1iJgA==
Date: Fri, 10 Jul 2020 04:26:20 +0000
Message-ID: <427f99fc7de04946957c2896e39fdb78@in.bosch.com>
References: <1b60ed1cd7834ed5957a2b4870602073@in.bosch.com>
 <1D0E7281-95D7-482E-BF6D-EE5B1FEE4876@arm.com>
 <ab84437081a346d6bf0f73581382c74e@in.bosch.com>
 <D84A5DA7-683C-480B-8837-C51D560FC2E1@arm.com>
 <139024a891324455a13a3d468908798d@in.bosch.com>
 <C3BCAA62-51EF-49DD-B978-6657BC6D5A21@arm.com>
In-Reply-To: <C3BCAA62-51EF-49DD-B978-6657BC6D5A21@arm.com>
Accept-Language: en-US, en-SG
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-originating-ip: [10.187.56.204]
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: nd <nd@arm.com>, "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hello Bertrand,

I couldn't find a dunfell branch in the following repos:
1. meta-selinux
2. xen-troops
3. meta-renesas [I took dunfell-dev]
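
One way to double-check which branches a layer repository actually publishes is `git ls-remote --heads <url>`, which lists remote branch refs without cloning. A minimal sketch follows; the refs shown are hypothetical sample data standing in for real `ls-remote` output, not the actual branch lists of these repositories:

```shell
# Sample refs in the format produced by `git ls-remote --heads <url>`
# (hypothetical data for illustration only).
sample_refs='0123abc	refs/heads/master
4567def	refs/heads/dunfell-dev
89abcde	refs/heads/zeus'

branch=dunfell
# Look for an exact branch match; report the closest name if absent.
if printf '%s\n' "$sample_refs" | grep -q "refs/heads/${branch}$"; then
    echo "${branch}: found"
else
    closest=$(printf '%s\n' "$sample_refs" | grep -o 'refs/heads/.*' | grep "${branch}" || echo none)
    echo "${branch}: not found (closest: ${closest})"
fi
```

Against a real repository you would pipe the live output instead, e.g. `git ls-remote --heads <repo-url> | grep dunfell`.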

Mit freundlichen Grüßen / Best regards

 Chockalingam Manikandan

ES-CM Core fn,ADIT (RBEI/ECF3)
Robert Bosch GmbH | Postfach 10 60 50 | 70049 Stuttgart | GERMANY | www.bosch.com
Tel. +91 80 6136-4452 | Fax +91 80 6617-0711 | Manikandan.Chockalingam@in.bosch.com

Registered Office: Stuttgart, Registration Court: Amtsgericht Stuttgart, HRB 14000;
Chairman of the Supervisory Board: Franz Fehrenbach; Managing Directors: Dr. Volkmar Denner,
Prof. Dr. Stefan Asenkerschbaumer, Dr. Michael Bolle, Dr. Christian Fischer, Dr. Stefan Hartung,
Dr. Markus Heyn, Harald Kröger, Christoph Kübel, Rolf Najork, Uwe Raschke, Peter Tyroller

-----Original Message-----
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
Sent: Tuesday, July 7, 2020 5:18 PM
To: Manikandan Chockalingam (RBEI/ECF3) <Manikandan.Chockalingam@in.bosch.com>
Cc: xen-devel@lists.xen.org; nd <nd@arm.com>
Subject: Re: [BUG] Xen build for RCAR failing



> On 7 Jul 2020, at 11:18, Manikandan Chockalingam (RBEI/ECF3) <Manikandan.Chockalingam@in.bosch.com> wrote:
> 
> Hello Bertrand,
> 
> Thank you. I will try a fresh build with the dunfell branch. All layers in the sense [poky, meta-openembedded, meta-linaro, meta-renesas, meta-virtualisation, meta-selinux, xen-troops], right?

right

> 
> Also, can I use the same proprietary drivers which I used for yocto2.19 [R-Car_Gen3_Series_Evaluation_Software_Package_for_Linux-20170427.zip] for this branch?

I have no idea what is in that, but I would guess it will probably not work that easily.
You might need to get in contact with Renesas for more up-to-date instructions on how to build that.

Bertrand



From xen-devel-bounces@lists.xenproject.org Fri Jul 10 04:35:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jul 2020 04:35:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jtkkY-0005Bh-7N; Fri, 10 Jul 2020 04:35:02 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=uxYN=AV=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jtkkX-0005Aw-40
 for xen-devel@lists.xenproject.org; Fri, 10 Jul 2020 04:35:01 +0000
X-Inumbo-ID: b2eab042-c266-11ea-8f56-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b2eab042-c266-11ea-8f56-12813bfff9fa;
 Fri, 10 Jul 2020 04:34:53 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=ZHQMAK3d5uKW5WbZKJjdpZDSfGtdX9EnxLHVeFRiSEA=; b=D3nPARYF79kBjezceFkSh+2zc
 EgVYNqnicsHu/K+fQZVp1xPrqNDL8VLG1IOlzcFZewWboFEcAwmSAzbuCmzzL0BhKpjWx3NwRyhoD
 ASZubMHCv15tCLTs4p5NjagBmpA0KqYpPxbpA4kE4YRzdvmX0JFFFvY5/0UBOYartDUT4=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jtkkO-0001gR-Jl; Fri, 10 Jul 2020 04:34:52 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jtkkN-0002qe-Nf; Fri, 10 Jul 2020 04:34:51 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jtkkN-0000vx-Mt; Fri, 10 Jul 2020 04:34:51 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151763-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 151763: regressions - FAIL
X-Osstest-Failures: qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-amd64-libvirt-vhd:debian-di-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-libvirt-xsm:guest-start:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qcow2:debian-di-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:redhat-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-libvirt-pair:guest-start/debian:fail:regression
 qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-freebsd10-i386:guest-start:fail:regression
 qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:redhat-install:fail:regression
 qemu-mainline:test-amd64-amd64-libvirt:guest-start:fail:regression
 qemu-mainline:test-amd64-amd64-libvirt-xsm:guest-start:fail:regression
 qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-start:fail:regression
 qemu-mainline:test-amd64-i386-libvirt:guest-start:fail:regression
 qemu-mainline:test-armhf-armhf-xl-vhd:debian-di-install:fail:regression
 qemu-mainline:test-amd64-amd64-libvirt-pair:guest-start/debian:fail:regression
 qemu-mainline:test-armhf-armhf-libvirt:guest-start:fail:regression
 qemu-mainline:test-arm64-arm64-libvirt-xsm:guest-start:fail:regression
 qemu-mainline:test-armhf-armhf-libvirt-raw:debian-di-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: qemuu=48f22ad04ead83e61b4b35871ec6f6109779b791
X-Osstest-Versions-That: qemuu=9e3903136d9acde2fb2dd9e967ba928050a6cb4a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 10 Jul 2020 04:34:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151763 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151763/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemuu-ovmf-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-win7-amd64 10 windows-install  fail REGR. vs. 151065
 test-amd64-amd64-libvirt-vhd 10 debian-di-install        fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-win7-amd64 10 windows-install   fail REGR. vs. 151065
 test-amd64-amd64-qemuu-nested-intel 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-libvirt-xsm  12 guest-start              fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-ws16-amd64 10 windows-install  fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-qemuu-nested-amd 10 debian-hvm-install  fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-ovmf-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qcow2    10 debian-di-install        fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-qemuu-rhel6hvm-amd 10 redhat-install     fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-debianhvm-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-ws16-amd64 10 windows-install   fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-libvirt-pair 21 guest-start/debian       fail REGR. vs. 151065
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-freebsd10-i386 11 guest-start            fail REGR. vs. 151065
 test-amd64-i386-qemuu-rhel6hvm-intel 10 redhat-install   fail REGR. vs. 151065
 test-amd64-amd64-libvirt     12 guest-start              fail REGR. vs. 151065
 test-amd64-amd64-libvirt-xsm 12 guest-start              fail REGR. vs. 151065
 test-amd64-i386-freebsd10-amd64 11 guest-start           fail REGR. vs. 151065
 test-amd64-i386-libvirt      12 guest-start              fail REGR. vs. 151065
 test-armhf-armhf-xl-vhd      10 debian-di-install        fail REGR. vs. 151065
 test-amd64-amd64-libvirt-pair 21 guest-start/debian      fail REGR. vs. 151065
 test-armhf-armhf-libvirt     12 guest-start              fail REGR. vs. 151065
 test-arm64-arm64-libvirt-xsm 12 guest-start              fail REGR. vs. 151065
 test-armhf-armhf-libvirt-raw 10 debian-di-install        fail REGR. vs. 151065

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                48f22ad04ead83e61b4b35871ec6f6109779b791
baseline version:
 qemuu                9e3903136d9acde2fb2dd9e967ba928050a6cb4a

Last test of basis   151065  2020-06-12 22:27:51 Z   27 days
Failing since        151101  2020-06-14 08:32:51 Z   25 days   33 attempts
Testing same since   151763  2020-07-09 15:13:58 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ahmed Karaman <ahmedkhaledkaraman@gmail.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Bulekov <alxndr@bu.edu>
  Alexey Krasikov <alex-krasikov@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Allan Peramaki <aperamak@pp1.inet.fi>
  Andrew Jones <drjones@redhat.com>
  Ani Sinha <ani.sinha@nutanix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Ard Biesheuvel <ardb@kernel.org>
  Artyom Tarasenko <atar4qemu@gmail.com>
  Aurelien Jarno <aurelien@aurel32.net>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Beata Michalska <beata.michalska@linaro.org>
  Bin Meng <bin.meng@windriver.com>
  Catherine A. Frederick <chocola@animebitch.es>
  Cathy Zhang <cathy.zhang@intel.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Cindy Lu <lulu@redhat.com>
  Claudio Fontana <cfontana@suse.de>
  Cleber Rosa <crosa@redhat.com>
  Colin Xu <colin.xu@intel.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniele Buono <dbuono@linux.vnet.ibm.com>
  David Carlier <devnexen@gmail.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Derek Su <dereksu@qnap.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Emilio G. Cota <cota@braap.org>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Evgeny Yakovlev <eyakovlev@virtuozzo.com>
  fangying <fangying1@huawei.com>
  Farhan Ali <alifm@linux.ibm.com>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Geoffrey McRae <geoff@hostfission.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Giuseppe Musacchio <thatlemon@gmail.com>
  Greg Kurz <groug@kaod.org>
  Guenter Roeck <linux@roeck-us.net>
  Gustavo Romero <gromero@linux.ibm.com>
  Halil Pasic <pasic@linux.ibm.com>
  Helge Deller <deller@gmx.de>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Ian Jiang <ianjiang.ict@gmail.com>
  Igor Mammedov <imammedo@redhat.com>
  Janne Grunau <j@jannau.net>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Wang <jasowang@redhat.com>
  Jay Zhou <jianjay.zhou@huawei.com>
  Jean-Christophe Dubois <jcd@tribudubois.net>
  Jessica Clarke <jrtc27@jrtc27.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jingqi Liu <jingqi.liu@intel.com>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Joseph Myers <joseph@codesourcery.com>
  Julio Faracco <jcfaracco@gmail.com>
  Keqian Zhu <zhukeqian1@huawei.com>
  Kevin Wolf <kwolf@redhat.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Leonid Bloch <lb.workbox@gmail.com>
  Leonid Bloch <lbloch@janustech.com>
  Li Qiang <liq3ea@gmail.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  lichun <lichun@ruijie.com.cn>
  Like Xu <like.xu@linux.intel.com>
  Lingfeng Yang <lfy@google.com>
  Lingshan zhu <lingshan.zhu@intel.com>
  Liran Alon <liran.alon@oracle.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Lukas Straub <lukasstraub2@web.de>
  Maciej S. Szmigiero <maciej.szmigiero@oracle.com>
  Magnus Damm <magnus.damm@gmail.com>
  Mao Zhongyi <maozhongyi@cmss.chinamobile.com>
  Marcelo Tosatti <mtosatti@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Masahiro Yamada <masahiroy@kernel.org>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Maxime Coquelin <maxime.coquelin@redhat.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michele Denber <denber@mindspring.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Raphael Norwitz <raphael.norwitz@nutanix.com>
  Richard Henderson <richard.henderson@linaro.org>
  Richard W.M. Jones <rjones@redhat.com>
  Riku Voipio <riku.voipio@linaro.org>
  Robert Foley <robert.foley@linaro.org>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Roman Kagan <rkagan@virtuozzo.com>
  Roman Kagan <rvkagan@yandex-team.ru>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Sebastian Rasmussen <sebras@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Tao Xu <tao3.xu@intel.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tiwei Bie <tiwei.bie@intel.com>
  Tong Ho <tong.ho@xilinx.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  WangBowen <bowen.wang@intel.com>
  Wei Huang <wei.huang2@amd.com>
  Wei Wang <wei.w.wang@intel.com>
  Ying Fang <fangying1@huawei.com>
  Yoshinori Sato <ysato@users.sourceforge.jp>
  Yuri Benditovich <yuri.benditovich@daynix.com>
  Zhang Chen <chen.zhang@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 20389 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Jul 10 05:37:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jul 2020 05:37:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jtliO-000255-38; Fri, 10 Jul 2020 05:36:52 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=AY6w=AV=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jtliM-000250-JO
 for xen-devel@lists.xen.org; Fri, 10 Jul 2020 05:36:50 +0000
X-Inumbo-ID: 59c58c04-c26f-11ea-8f5d-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 59c58c04-c26f-11ea-8f5d-12813bfff9fa;
 Fri, 10 Jul 2020 05:36:49 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id C656CAEC6;
 Fri, 10 Jul 2020 05:36:48 +0000 (UTC)
Subject: Re: Linux 5.4.46: BUG: sleeping function called from invalid context
 at drivers/xen/preempt.c:37
To: Sarah Newman <srn@prgmr.com>,
 "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
References: <1a98b431-4112-3962-f7f4-7a335ef33cb6@prgmr.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <922baa3b-5201-fa5f-e46a-e33b0b4f2ce0@suse.com>
Date: Fri, 10 Jul 2020 07:36:48 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.9.0
MIME-Version: 1.0
In-Reply-To: <1a98b431-4112-3962-f7f4-7a335ef33cb6@prgmr.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 10.07.20 03:38, Sarah Newman wrote:
> BUG: sleeping function called from invalid context at 
> drivers/xen/preempt.c:37
> in_atomic(): 1, irqs_disabled(): 0, non_block: 0, pid: 20775, name: xl
> INFO: lockdep is turned off.
> CPU: 1 PID: 20775 Comm: xl Tainted: G      D W         
> 5.4.46-1_prgmr_debug.el7.x86_64 #1
> Call Trace:
> <IRQ>
> dump_stack+0x8f/0xd0
> ___might_sleep.cold.76+0xb2/0x103
> xen_maybe_preempt_hcall+0x48/0x70
> xen_do_hypervisor_callback+0x37/0x40
> RIP: e030:xen_hypercall_xen_version+0xa/0x20
> Code: 51 41 53 b8 10 00 00 00 0f 05 41 5b 59 c3 cc cc cc cc cc cc cc cc 
> cc cc cc cc cc cc cc cc cc cc 51 41 53 b8 11 00 00 00 0f 05 <41> 5b 59 
> c3 cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc
> RSP: e02b:ffffc900400dcc30 EFLAGS: 00000246
> RAX: 000000000004000d RBX: 0000000000000200 RCX: ffffffff8100122a
> RDX: ffff88812e788000 RSI: 0000000000000000 RDI: 0000000000000000
> RBP: ffffffff83ee3ad0 R08: 0000000000000001 R09: 0000000000000001
> R10: 0000000000000000 R11: 0000000000000246 R12: ffff8881824aa0b0
> R13: 0000000865496000 R14: 0000000865496000 R15: ffff88815d040000
> ? xen_hypercall_xen_version+0xa/0x20
> ? xen_force_evtchn_callback+0x9/0x10
> ? check_events+0x12/0x20
> ? xen_restore_fl_direct+0x1f/0x20
> ? _raw_spin_unlock_irqrestore+0x53/0x60
> ? debug_dma_sync_single_for_cpu+0x91/0xc0
> ? _raw_spin_unlock_irqrestore+0x53/0x60
> ? xen_swiotlb_sync_single_for_cpu+0x3d/0x140
> ? mlx4_en_process_rx_cq+0x6b6/0x1110 [mlx4_en]
> ? mlx4_en_poll_rx_cq+0x64/0x100 [mlx4_en]
> ? net_rx_action+0x151/0x4a0
> ? __do_softirq+0xed/0x55b
> ? irq_exit+0xea/0x100
> ? xen_evtchn_do_upcall+0x2c/0x40
> ? xen_do_hypervisor_callback+0x29/0x40
> </IRQ>
> ? xen_hypercall_domctl+0xa/0x20
> ? xen_hypercall_domctl+0x8/0x20
> ? privcmd_ioctl+0x221/0x990 [xen_privcmd]
> ? do_vfs_ioctl+0xa5/0x6f0
> ? ksys_ioctl+0x60/0x90
> ? trace_hardirqs_off_thunk+0x1a/0x20
> ? __x64_sys_ioctl+0x16/0x20
> ? do_syscall_64+0x62/0x250
> ? entry_SYSCALL_64_after_hwframe+0x49/0xbe
> 

Sarah, thank you very much for this bug report! I think with your help
I have finally found a bug I've been hunting for a very long time.

The problem occurs when a preemptible hypercall is interrupted and a
softirq action is triggered which issues further hypercalls, one of
which is preempted, too.

In Linux 5.8 this problem will no longer occur thanks to the entry code
rework done by Thomas Gleixner. For kernels up to 5.7 I'll write a patch
that hopefully repairs this hard-to-reproduce problem, which IMO will
more often show up only as a WARN() splat in xen_mc_flush().


Juergen


From xen-devel-bounces@lists.xenproject.org Fri Jul 10 05:44:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jul 2020 05:44:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jtlpp-0002wO-0W; Fri, 10 Jul 2020 05:44:33 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=t5yY=AV=gmail.com=julien.grall.oss@srs-us1.protection.inumbo.net>)
 id 1jtlpn-0002wJ-SV
 for xen-devel@lists.xenproject.org; Fri, 10 Jul 2020 05:44:31 +0000
X-Inumbo-ID: 6cb03c96-c270-11ea-bb8b-bc764e2007e4
Received: from mail-wm1-x344.google.com (unknown [2a00:1450:4864:20::344])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6cb03c96-c270-11ea-bb8b-bc764e2007e4;
 Fri, 10 Jul 2020 05:44:30 +0000 (UTC)
Received: by mail-wm1-x344.google.com with SMTP id g75so4579836wme.5
 for <xen-devel@lists.xenproject.org>; Thu, 09 Jul 2020 22:44:30 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc; bh=4edQ0YselGGDwyfZkAcxeIabieLMJek3/TpD0WH/gAs=;
 b=UjYgshhZvEWQNEWUObMADwMsQSzjL+gWFhsuTHMIBcP61wwzte+xmJRtWR8zWDGM1P
 yKfTP3P5NlujAe6T7iRtODNObxBsAYGGvgIABKxUJZfLbKqzYgmWl5bMIYF1Ql+FrG4S
 DdD/RlugPLcc3Kq2x8OSi5flUDSIj8P4CP4QBOKjgrF44UTz1XCg4IdIH/aOPkYWd7Ve
 k4neig2NOv48t7nWgBuw2bNWouTRZEPOMF8P4Npcs9eVtIsITh1Yyow3ZFORSjnmEj85
 e0XNmYVkwFG4UJXCrZgEN3/22C+QJPciILXLA++u2I3UPPpdPAL0j6KIlzjcf5bCIfx/
 c3rw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc;
 bh=4edQ0YselGGDwyfZkAcxeIabieLMJek3/TpD0WH/gAs=;
 b=AQkapJjTkgg8UU7ZletJuZD2U4KCEYt9/OtEipRTl87/5EP++8FzUaE6klYOqhZMDW
 JnzcQw3/g1F/oDK1Fxh6phoEBJk4UglX+aXov3EP9kMzqtMWgKWsnzB+LFreOkXPNlEt
 uT6JnxHo4dRH5vVtklNafuyXR7vd7Go9p3aYOKNAO/yx0wzZ2wsW28LAlGLvLQNmnMKn
 IL9gqju/YAQIvEo5319n/7jaJ6BCjNgGnRgtDza9Wop1/zgQ997kvRqZP9XPVGRAsRgp
 bz/YEOYnfXybfKPvdo6xhK7D51/FTk0Zwi+TwigSJ1WcLs74p8XB5jJ6rkyWgp/PINPm
 MaPQ==
X-Gm-Message-State: AOAM5311ka/6Vb2MMuCdV7+xHXuUO7cXLRP71UtDl/tKLD6tYlJ9Nyu+
 IWzqeu91zYolkAcviHTgsOEyYtylx3F9K3OdbnQ=
X-Google-Smtp-Source: ABdhPJxysuxRtR4lmGruhCGGljqm79dbNa6fBn3RLiqmrOitAMZFSVIoeNqahz4At2LsIVYfu/QWmGcO3Dq95awlOKM=
X-Received: by 2002:a05:600c:2058:: with SMTP id
 p24mr3399981wmg.74.1594359870011; 
 Thu, 09 Jul 2020 22:44:30 -0700 (PDT)
MIME-Version: 1.0
References: <20200627095533.14145-1-julien@xen.org>
 <20200627095533.14145-3-julien@xen.org>
 <b7f41be0-f1d2-2c3b-c79f-5d9763dfb5df@xen.org>
 <2e0a451e-58be-eff1-0030-be489e5098f3@xen.org>
 <alpine.DEB.2.21.2007091443290.4124@sstabellini-ThinkPad-T480s>
In-Reply-To: <alpine.DEB.2.21.2007091443290.4124@sstabellini-ThinkPad-T480s>
From: Julien Grall <julien.grall.oss@gmail.com>
Date: Fri, 10 Jul 2020 06:44:16 +0100
Message-ID: <CAJ=z9a0jTjnTd5OGJfXQcXBFRMY3T3dNFF=K0qcDzP8_Po-ZXQ@mail.gmail.com>
Subject: Re: [PATCH v4 for-4.14 2/2] pvcalls: Document correctly and
 explicitely the padding for all arches
To: Stefano Stabellini <sstabellini@kernel.org>
Content-Type: multipart/alternative; boundary="000000000000b23c4d05aa0fd5ea"
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>, Wei Liu <wl@xen.org>,
 Paul Durrant <paul@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Julien Grall <jgrall@amazon.com>, Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 xen-devel <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

--000000000000b23c4d05aa0fd5ea
Content-Type: text/plain; charset="UTF-8"

On Thu, 9 Jul 2020, 22:43 Stefano Stabellini, <sstabellini@kernel.org>
wrote:

> On Thu, 9 Jul 2020, Julien Grall wrote:
> > Hi,
> >
> > Gentle ping.
> >
> > Is the new commit message fine?
> >
> > Cheers,
> >
> > On 04/07/2020 16:29, Julien Grall wrote:
> > > Hi,
> > >
> > > On 27/06/2020 10:55, Julien Grall wrote:
> > > > From: Julien Grall <jgrall@amazon.com>
> > > >
> > > > The specification of pvcalls suggests there is padding for 32-bit x86
> > > > at the end of most of the structures. However, they are not described
> > > > in the public header.
> > > >
> > > > Because of that all the structures would be 32-bit aligned and not
> > > > 64-bit aligned for 32-bit x86.
> > > >
> > > > For all the other architectures supported (Arm and 64-bit x86), the
> > > > structures are aligned to 64 bits because they contain a uint64_t field.
> > > > Therefore all the structures contain implicit padding.
> > > >
> > > > Given the specification is authoritative, the padding will be the same
> > > > for all architectures. The potential breakage of compatibility ought
> > > > to be fine as pvcalls is still a tech preview.
> > > >
> > > > As an aside, the padding sadly cannot be mandated to be 0 as the fields
> > > > are already present. So it will not be possible to use the padding
> > > > to extend a command in the future.
> > > >
> > > > Signed-off-by: Julien Grall <jgrall@amazon.com>
> > >
> > > It looks like most of the comments are on the commit message. So rather
> > > than sending the series again, below is a new version of the commit message:
> > >
> > > "
> > > The specification of pvcalls suggests there is padding for 32-bit x86
> > > at the end of most of the structures. However, they are not described
> > > in the public header.
> > >
> > > Because of that all the structures would have a different size between
> > > 32-bit x86 and 64-bit x86.
> > >
> > > For all the other architectures supported (Arm and 64-bit x86), the
> > > structures have the same sizes because they contain implicit padding
> > > thanks to the 64-bit alignment of their uint64_t fields.
> > >
> > > Given the specification is authoritative, the padding will now be the
> > > same for all architectures. The potential breakage of compatibility
> > > ought to be fine as pvcalls is still a tech preview.
> > > "
>
> Looks good to me
>
> Acked-by: Stefano Stabellini <sstabellini@kernel.org>
>


Thanks! I don't have access to my work laptop today. Can any of the
committers merge it so it doesn't miss 4.14?

Cheers,

>

--000000000000b23c4d05aa0fd5ea--


From xen-devel-bounces@lists.xenproject.org Fri Jul 10 06:09:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jul 2020 06:09:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jtmDp-0004kP-3f; Fri, 10 Jul 2020 06:09:21 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=uxYN=AV=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jtmDn-0004jz-VY
 for xen-devel@lists.xenproject.org; Fri, 10 Jul 2020 06:09:20 +0000
X-Inumbo-ID: dfa7c4c8-c273-11ea-8f5f-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id dfa7c4c8-c273-11ea-8f5f-12813bfff9fa;
 Fri, 10 Jul 2020 06:09:11 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=5OuohENlaEoD5MdiOOT/QHu/r+Y+CoUw2mUKagVpq4U=; b=hfXDVbOHR/95dOpY3AHoJBROZ
 bPXY5KIn8j00VvQl6LVt+fAPwX+Wej5SDv2pd6Hhn/vfI+RCRnvVVDa2dN2mP6AcczV1JFDIRJh+N
 YPu8F4D1pN2GfVBOMB1RDQKfwyiuV6qkmk4CG1xIi1jl9TrMWBLSTWADxP7ZV3zj0xYEg=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jtmDf-0003v8-BJ; Fri, 10 Jul 2020 06:09:11 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jtmDe-0005yq-Ov; Fri, 10 Jul 2020 06:09:10 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jtmDe-0000lq-Nu; Fri, 10 Jul 2020 06:09:10 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151764-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 151764: regressions - FAIL
X-Osstest-Failures: linux-linus:test-arm64-arm64-libvirt-xsm:guest-start/debian.repeat:fail:regression
 linux-linus:build-i386-pvops:kernel-build:fail:regression
 linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
 linux-linus:test-armhf-armhf-xl-vhd:guest-start:fail:heisenbug
 linux-linus:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-examine:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-pair:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
X-Osstest-Versions-This: linux=0bddd227f3dc55975e2b8dfa7fc6f959b062a2c7
X-Osstest-Versions-That: linux=1b5044021070efa3259f3e9548dc35d1eb6aa844
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 10 Jul 2020 06:09:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151764 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151764/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-libvirt-xsm 16 guest-start/debian.repeat fail REGR. vs. 151214
 build-i386-pvops              6 kernel-build             fail REGR. vs. 151214

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 10 debian-hvm-install fail in 151747 pass in 151764
 test-armhf-armhf-xl-vhd      11 guest-start                fail pass in 151747

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-examine       1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-armhf-armhf-xl-vhd     12 migrate-support-check fail in 151747 never pass
 test-armhf-armhf-xl-vhd 13 saverestore-support-check fail in 151747 never pass
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 151214
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 151214
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 151214
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 151214
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 151214
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 151214
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass

version targeted for testing:
 linux                0bddd227f3dc55975e2b8dfa7fc6f959b062a2c7
baseline version:
 linux                1b5044021070efa3259f3e9548dc35d1eb6aa844

Last test of basis   151214  2020-06-18 02:27:46 Z   22 days
Failing since        151236  2020-06-19 19:10:35 Z   20 days   30 attempts
Testing same since   151747  2020-07-08 23:12:54 Z    1 days    2 attempts

------------------------------------------------------------
608 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             fail    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         blocked 
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 29588 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Jul 10 07:46:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jul 2020 07:46:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jtnjK-0004KP-J7; Fri, 10 Jul 2020 07:45:58 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=cC6S=AV=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1jtnjI-0004KK-QG
 for xen-devel@lists.xen.org; Fri, 10 Jul 2020 07:45:56 +0000
X-Inumbo-ID: 62af5928-c281-11ea-8f66-12813bfff9fa
Received: from EUR05-AM6-obe.outbound.protection.outlook.com (unknown
 [40.107.22.62]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 62af5928-c281-11ea-8f66-12813bfff9fa;
 Fri, 10 Jul 2020 07:45:55 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com; 
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=1GTjuT4swsgCP3tPUeQWgUvpQsvHzcwQ4D+FLsHBsIs=;
 b=9Unu1IlN0jWjbMb/5VjdEDBYEtr3Lv07Ew9uGSqUOpweteGaxD+VOS5WrRBtKIW3w9DJNityt7MGc1wncOjcwKcW2TAHGsZDgFluLSGC+hkIssdk2MT91aBpEzyJjo+x2yv3KAyk9Vm9589MsV+/o7wy4dfyJ1ggxdYk5XvYSEY=
Received: from AM5PR04CA0017.eurprd04.prod.outlook.com (2603:10a6:206:1::30)
 by VI1PR08MB2653.eurprd08.prod.outlook.com (2603:10a6:802:1b::32) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3153.27; Fri, 10 Jul
 2020 07:45:53 +0000
Received: from AM5EUR03FT020.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:206:1:cafe::96) by AM5PR04CA0017.outlook.office365.com
 (2603:10a6:206:1::30) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3174.20 via Frontend
 Transport; Fri, 10 Jul 2020 07:45:51 +0000
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xen.org; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;lists.xen.org; dmarc=bestguesspass action=none
 header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT020.mail.protection.outlook.com (10.152.16.116) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3174.21 via Frontend Transport; Fri, 10 Jul 2020 07:45:51 +0000
Received: ("Tessian outbound 7de93d801f24:v62");
 Fri, 10 Jul 2020 07:45:51 +0000
X-CheckRecipientChecked: true
X-CR-MTA-CID: 5ad13238e87b4258
X-CR-MTA-TID: 64aa7808
Received: from a48785051686.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 358338DD-5F2F-4EBC-9597-0E09E8B9E826.1; 
 Fri, 10 Jul 2020 07:45:45 +0000
Received: from EUR03-VE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id a48785051686.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 10 Jul 2020 07:45:45 +0000
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=k0n5klfOZ+nJNxedrMlY1POY35Y5qvZBu2RL8m975jrpCpZYMlj1U648UptXsKB9ZptBV+wUr8d+WcipaTXSUBvv698lX1YK3wvThRa74x2GqL9MEMLF41AiytKm+dlRXHI6wchx00N4c6x3Xq9sbLQ347dlw/+B12ii7qxlOnsd8x2xpxITWiMfqU/TNKfbTkRGAAmyvmZDL9WRb5HRpot+G9KILWWIc5IDNr6ZYjlkiQNHpOfr8zzvKPPyJ618xLdypuRqw0VAEKzktNFjqjvERw4kduM5t9UjfMyoLZRfxj4YKciCNWIQf2MojM3evLOqM8m+rTAv2h/ErV5p0Q==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; 
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=1GTjuT4swsgCP3tPUeQWgUvpQsvHzcwQ4D+FLsHBsIs=;
 b=O+Ita1lfonhOKwkylUro/D7Sw33WPjdxSisbXQAwQSugE4JrkIqOi7BAazY+9XLGOH7+upnC/ujbqoy8BEIvWCPLVgpnUGXng1DNSSDQaB8zrBmLXOaebJzqTDUAGu5LlGGA9+fHN/dNPKIW0thS9bacSv8Rsu5Qp6v7jFrTOC7CdoH2tyYu0W6Q2bTXie+pgBqJZBOZNjg7Sp6etLJm8DAyT8KFSBD84Oo9wq2IIX4LRmh6awFtlZpkTJu4rGigmJhczXKbpbvn3zuH2XOyNnHLbETvTYc6QJBzj+RJjd+KQXlE3GkE7incve0nfXAgtRa8PyeN/14PLb1gRgJYkg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com; 
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=1GTjuT4swsgCP3tPUeQWgUvpQsvHzcwQ4D+FLsHBsIs=;
 b=9Unu1IlN0jWjbMb/5VjdEDBYEtr3Lv07Ew9uGSqUOpweteGaxD+VOS5WrRBtKIW3w9DJNityt7MGc1wncOjcwKcW2TAHGsZDgFluLSGC+hkIssdk2MT91aBpEzyJjo+x2yv3KAyk9Vm9589MsV+/o7wy4dfyJ1ggxdYk5XvYSEY=
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DB8PR08MB3993.eurprd08.prod.outlook.com (2603:10a6:10:ad::26) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3174.20; Fri, 10 Jul
 2020 07:45:44 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::4001:43ad:d113:46a8]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::4001:43ad:d113:46a8%5]) with mapi id 15.20.3174.022; Fri, 10 Jul 2020
 07:45:44 +0000
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: "Manikandan Chockalingam (RBEI/ECF3)"
 <Manikandan.Chockalingam@in.bosch.com>
Subject: Re: [BUG] Xen build for RCAR failing
Thread-Topic: [BUG] Xen build for RCAR failing
Thread-Index: AdZUKc5JeR7gPpESR52uLkZK1kYwOwAEsnEAAAD8OlAAAEBtgAABZtcgAANXdAAAh1iJgAAHDdGA
Date: Fri, 10 Jul 2020 07:45:44 +0000
Message-ID: <7285E20B-C902-4C3E-ADBA-4AD94EA9D59C@arm.com>
References: <1b60ed1cd7834ed5957a2b4870602073@in.bosch.com>
 <1D0E7281-95D7-482E-BF6D-EE5B1FEE4876@arm.com>
 <ab84437081a346d6bf0f73581382c74e@in.bosch.com>
 <D84A5DA7-683C-480B-8837-C51D560FC2E1@arm.com>
 <139024a891324455a13a3d468908798d@in.bosch.com>
 <C3BCAA62-51EF-49DD-B978-6657BC6D5A21@arm.com>
 <427f99fc7de04946957c2896e39fdb78@in.bosch.com>
In-Reply-To: <427f99fc7de04946957c2896e39fdb78@in.bosch.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
Authentication-Results-Original: in.bosch.com; dkim=none (message not signed)
 header.d=none; in.bosch.com;
 dmarc=none action=none header.from=arm.com; 
x-originating-ip: [82.24.250.194]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 712e76c9-9b03-49c1-719d-08d824a544a0
x-ms-traffictypediagnostic: DB8PR08MB3993:|VI1PR08MB2653:
X-Microsoft-Antispam-PRVS: <VI1PR08MB2653E6D84087B47731A715B69D650@VI1PR08MB2653.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:7219;OLM:7219;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original: syXDB3tSG6o8gVrIhA+mUfISaIagZKl7+9280es/CXxnGLdleaO0Jhteapk+Bi5iZbT5tUDDYjsxsfL+GJF2rO4mAcT4BCy7FCgo71xAgp8PLMg+N9iwtosJf/nd6aAd7glIPzRgztqvxU2ktixW34WvDyoHDV8+nwymljpW78F6gPL31nEfv0WY7YIW/JXp8q38eET9EsSD8MTBdz/TGe1HZnYZ9JssxS+SEXq/0id1pJuipyraPK3cxguBA0q9vWDPTycO4C5x5MTbCioNDRzkU81xapik16gNHugIGtWMff8g5TXlTnEacYvxK3Cg
X-Forefront-Antispam-Report-Untrusted: CIP:255.255.255.255; CTRY:; LANG:en;
 SCL:1; SRV:; IPV:NLI; SFV:NSPM; H:DB7PR08MB3689.eurprd08.prod.outlook.com;
 PTR:; CAT:NONE; SFTY:;
 SFS:(4636009)(396003)(376002)(366004)(39860400002)(346002)(136003)(8676002)(71200400001)(86362001)(36756003)(5660300002)(2616005)(6916009)(33656002)(66446008)(64756008)(66556008)(66476007)(478600001)(6512007)(2906002)(4326008)(186003)(54906003)(316002)(6486002)(76116006)(91956017)(26005)(6506007)(53546011)(4744005)(66946007)(8936002);
 DIR:OUT; SFP:1101; 
x-ms-exchange-antispam-messagedata: wkoU1ghornnjXK2psDE54/gSHCWRuzvqbq6kfZFvD0Uq0RxJW76d86KNp81TcrBPhb/hqNO5SJbzcmHpPS2KU64a+3EvXXWReRgU+gDd5SY6pGXVzrw6UwMcu3n5JE+l6P5Rd8GLYVqsJZd3+92LuqichUewv2Ki35r2I7qpTiE6C9LuGJ4LDmq+4rNN7wPZXmK+C6PCIlZpJ2df9Rh0COwXiMqoAY9TbYAjjS1Rjzs6wpNft2VLbYI+wtYHcxNzG7XlVQBvesIUb63ZDXZo+YpJ2/OQbg8QVEMyGhT0nzmxNqeRg5YTFTJyVCX2H0hf28pfi92FbibuJYyoQUA275bEgbEYhogJVSSOgR8Nv2Lkjpzg913Om1wjjE48bhKy8vl3YDlm0hN5mxpxtSSj1gxTaZfi4wQVx8cGbqjsRue+J3z8dM7izU9/So3ZAiUIeSv4MkBX7Q4DZs4YzkxUlqcuLJ05nAjayrAlziujIWQ=
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="us-ascii"
Content-ID: <A540B1C5776F834AA558BB5745382EE9@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB8PR08MB3993
Original-Authentication-Results: in.bosch.com; dkim=none (message not signed)
 header.d=none; in.bosch.com;
 dmarc=none action=none header.from=arm.com; 
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped: AM5EUR03FT020.eop-EUR03.prod.protection.outlook.com
X-Forefront-Antispam-Report: CIP:63.35.35.123; CTRY:IE; LANG:en; SCL:1; SRV:;
 IPV:CAL; SFV:NSPM; H:64aa7808-outbound-1.mta.getcheckrecipient.com;
 PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com; CAT:NONE; SFTY:;
 SFS:(4636009)(136003)(39860400002)(376002)(346002)(396003)(46966005)(26005)(4744005)(336012)(8676002)(8936002)(2616005)(186003)(2906002)(6512007)(70586007)(53546011)(82310400002)(6506007)(33656002)(70206006)(5660300002)(478600001)(86362001)(81166007)(47076004)(36906005)(54906003)(356005)(6862004)(82740400003)(4326008)(6486002)(36756003)(316002);
 DIR:OUT; SFP:1101; 
X-MS-Office365-Filtering-Correlation-Id-Prvs: 4f7559f2-3074-4e2c-7a47-08d824a54096
X-Forefront-PRVS: 046060344D
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: Ka5oLWAXpF8sOBxroMxfybdzU9RCz6dW4vep83XTHTqNRnNN2SO01ApExvdhI95T5kgbbH2aaeNKJr8kGzoAqjiMwijW2TjVFMlM3cfHQEqpvaN69BuDYzPi97Br5x8oX4gmFrIKxebgkUaBg5O1nxIHIom3fznhK7Wu1LOUGOuQSdQ1SaEk1KBKGRAS6AXnibJBR67xWUu4siaz4duIKgYAxj20c4aDR60/Rbbc7GJWeeqrwOz6KxUfv5f+JBoch/+oDsXgj+vZetBsJ7Cq9Ie458JPKo6Js5fuJkSDjdUPwdVRSwaIFF5bOh07g0aLPH58jnBs/htal7i0lYR4s0Tbp5Otmu7kyA8ZjY1KCqonQUMOat0ol7b0rlCHBXXUwAM8Ucr6by6Fqw4bPeLAQA==
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 10 Jul 2020 07:45:51.1930 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 712e76c9-9b03-49c1-719d-08d824a544a0
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d; Ip=[63.35.35.123];
 Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource: AM5EUR03FT020.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR08MB2653
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: nd <nd@arm.com>, "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>


> On 10 Jul 2020, at 05:26, Manikandan Chockalingam (RBEI/ECF3) <Manikandan.Chockalingam@in.bosch.com> wrote:
>
> Hello Bertrand,
>
> I couldn't find a dunfell branch in the following repos:
> 1. meta-selinux
> 2. xen-troops
> 3. meta-renesas [I took dunfell-dev]

Those are not layers I am using.
Not having a dunfell branch could mean either that the master branch of
those layers is compatible with Dunfell, or that those layers are not
maintained.

I would advise you to contact the maintainers of those layers.

Regards
Bertrand



From xen-devel-bounces@lists.xenproject.org Fri Jul 10 07:50:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jul 2020 07:50:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jtno9-000586-6l; Fri, 10 Jul 2020 07:50:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=AY6w=AV=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jtno7-000581-Np
 for xen-devel@lists.xenproject.org; Fri, 10 Jul 2020 07:50:55 +0000
X-Inumbo-ID: 14c71970-c282-11ea-b7bb-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 14c71970-c282-11ea-b7bb-bc764e2007e4;
 Fri, 10 Jul 2020 07:50:54 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 83B8AAD32;
 Fri, 10 Jul 2020 07:50:53 +0000 (UTC)
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org,
	stable@vger.kernel.org
Subject: [PATCH] xen: don't reschedule in preemption off sections
Date: Fri, 10 Jul 2020 09:50:50 +0200
Message-Id: <20200710075050.4769-1-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Sarah Newman <srn@prgmr.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

To support long-running hypercalls, xen_maybe_preempt_hcall() calls
cond_resched() in case a hypercall marked as preemptible has been
interrupted.

Normally this is no problem, as only hypercalls done via some ioctl()s
are marked as preemptible. In rare cases, when an interrupt occurs
during such a preemptible hypercall and a softirq action is started
from irq_exit(), a further hypercall issued by the softirq handler
will be regarded as preemptible, too. This might lead to rescheduling
despite the softirq handler potentially having called
preempt_disable(), leading to splats like:

BUG: sleeping function called from invalid context at drivers/xen/preempt.c:37
in_atomic(): 1, irqs_disabled(): 0, non_block: 0, pid: 20775, name: xl
INFO: lockdep is turned off.
CPU: 1 PID: 20775 Comm: xl Tainted: G D W 5.4.46-1_prgmr_debug.el7.x86_64 #1
Call Trace:
<IRQ>
dump_stack+0x8f/0xd0
___might_sleep.cold.76+0xb2/0x103
xen_maybe_preempt_hcall+0x48/0x70
xen_do_hypervisor_callback+0x37/0x40
RIP: e030:xen_hypercall_xen_version+0xa/0x20
Code: ...
RSP: e02b:ffffc900400dcc30 EFLAGS: 00000246
RAX: 000000000004000d RBX: 0000000000000200 RCX: ffffffff8100122a
RDX: ffff88812e788000 RSI: 0000000000000000 RDI: 0000000000000000
RBP: ffffffff83ee3ad0 R08: 0000000000000001 R09: 0000000000000001
R10: 0000000000000000 R11: 0000000000000246 R12: ffff8881824aa0b0
R13: 0000000865496000 R14: 0000000865496000 R15: ffff88815d040000
? xen_hypercall_xen_version+0xa/0x20
? xen_force_evtchn_callback+0x9/0x10
? check_events+0x12/0x20
? xen_restore_fl_direct+0x1f/0x20
? _raw_spin_unlock_irqrestore+0x53/0x60
? debug_dma_sync_single_for_cpu+0x91/0xc0
? _raw_spin_unlock_irqrestore+0x53/0x60
? xen_swiotlb_sync_single_for_cpu+0x3d/0x140
? mlx4_en_process_rx_cq+0x6b6/0x1110 [mlx4_en]
? mlx4_en_poll_rx_cq+0x64/0x100 [mlx4_en]
? net_rx_action+0x151/0x4a0
? __do_softirq+0xed/0x55b
? irq_exit+0xea/0x100
? xen_evtchn_do_upcall+0x2c/0x40
? xen_do_hypervisor_callback+0x29/0x40
</IRQ>
? xen_hypercall_domctl+0xa/0x20
? xen_hypercall_domctl+0x8/0x20
? privcmd_ioctl+0x221/0x990 [xen_privcmd]
? do_vfs_ioctl+0xa5/0x6f0
? ksys_ioctl+0x60/0x90
? trace_hardirqs_off_thunk+0x1a/0x20
? __x64_sys_ioctl+0x16/0x20
? do_syscall_64+0x62/0x250
? entry_SYSCALL_64_after_hwframe+0x49/0xbe

Fix that by testing preempt_count() before calling cond_resched().

In kernel 5.8 this can't happen any more due to the entry code rework.

Reported-by: Sarah Newman <srn@prgmr.com>
Fixes: 0fa2f5cb2b0ecd8 ("sched/preempt, xen: Use need_resched() instead of should_resched()")
Cc: Sarah Newman <srn@prgmr.com>
Signed-off-by: Juergen Gross <jgross@suse.com>
---
 drivers/xen/preempt.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/xen/preempt.c b/drivers/xen/preempt.c
index 17240c5325a3..6ad87b5c95ed 100644
--- a/drivers/xen/preempt.c
+++ b/drivers/xen/preempt.c
@@ -27,7 +27,7 @@ EXPORT_SYMBOL_GPL(xen_in_preemptible_hcall);
 asmlinkage __visible void xen_maybe_preempt_hcall(void)
 {
 	if (unlikely(__this_cpu_read(xen_in_preemptible_hcall)
-		     && need_resched())) {
+		     && need_resched() && !preempt_count())) {
 		/*
 		 * Clear flag as we may be rescheduled on a different
 		 * cpu.
-- 
2.26.2
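The condition the patch above ends up with can be modeled in plain C. This is a user-space sketch only: the variable and function names (`in_preemptible_hcall`, `need_resched_flag`, `preempt_count_model`, `may_preempt_hcall`) are stand-ins for the kernel's per-CPU flag, need_resched() and preempt_count(), and only the boolean logic is shown, not the scheduler.

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-ins (hypothetical names) for the kernel's per-CPU
 * xen_in_preemptible_hcall flag, need_resched() and preempt_count(). */
static bool in_preemptible_hcall;
static bool need_resched_flag;
static int  preempt_count_model;

/* Reschedule only when the hypercall is preemptible, the scheduler asks
 * for it, and preemption is not disabled -- the last term is the check
 * the patch adds. */
static bool may_preempt_hcall(void)
{
    return in_preemptible_hcall && need_resched_flag
           && preempt_count_model == 0;
}
```

With preemption disabled (nonzero count, as in a softirq handler that called preempt_disable()), the reschedule is suppressed even when the other two conditions hold.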



From xen-devel-bounces@lists.xenproject.org Fri Jul 10 07:53:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jul 2020 07:53:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jtnqw-0005GG-Kt; Fri, 10 Jul 2020 07:53:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=RM36=AV=amazon.de=prvs=453334893=wipawel@srs-us1.protection.inumbo.net>)
 id 1jtnqu-0005GB-WF
 for xen-devel@lists.xenproject.org; Fri, 10 Jul 2020 07:53:49 +0000
X-Inumbo-ID: 7c9f6336-c282-11ea-bb8b-bc764e2007e4
Received: from smtp-fw-9102.amazon.com (unknown [207.171.184.29])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7c9f6336-c282-11ea-bb8b-bc764e2007e4;
 Fri, 10 Jul 2020 07:53:48 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=amazon.de; i=@amazon.de; q=dns/txt; s=amazon201209;
 t=1594367629; x=1625903629;
 h=from:to:cc:subject:date:message-id:references:
 in-reply-to:content-id:mime-version: content-transfer-encoding;
 bh=rgE8KP+G2jBgkSX7/8VriuBGi2KgsQj33m8hlyStpus=;
 b=aetBFfIs1kLYrRPcizYRzOqTyZRaivkXz7kDk5cVM+tmXu7GbZ1qO2oi
 TaFSjJjEe3GJB/xv4Sp5118jpv3jF3+Z/iMpFb8+P1CXxA1iobw8aa/0c
 uvTukJpyU8QpyNI35ie38Qio2Kwh2+BB5ijC8p/fLTRrNOkG/hHz1PGNJ M=;
IronPort-SDR: 8K05o5Smf0O3xPGbO1WCyBp+X1hf38iHeQjtJUsbuJPetvlSFcBSQ9F+1N/jff0xPg4vYWKiUk
 5ZzMxP/ixsPg==
X-IronPort-AV: E=Sophos;i="5.75,335,1589241600"; d="scan'208";a="58783042"
Received: from sea32-co-svc-lb4-vlan3.sea.corp.amazon.com (HELO
 email-inbound-relay-1d-74cf8b49.us-east-1.amazon.com) ([10.47.23.38])
 by smtp-border-fw-out-9102.sea19.amazon.com with ESMTP;
 10 Jul 2020 07:53:42 +0000
Received: from EX13MTAUEA002.ant.amazon.com
 (iad55-ws-svc-p15-lb9-vlan2.iad.amazon.com [10.40.159.162])
 by email-inbound-relay-1d-74cf8b49.us-east-1.amazon.com (Postfix) with ESMTPS
 id 13009C1963; Fri, 10 Jul 2020 07:53:41 +0000 (UTC)
Received: from EX13D05EUB003.ant.amazon.com (10.43.166.253) by
 EX13MTAUEA002.ant.amazon.com (10.43.61.77) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Fri, 10 Jul 2020 07:53:40 +0000
Received: from EX13D05EUB003.ant.amazon.com (10.43.166.253) by
 EX13D05EUB003.ant.amazon.com (10.43.166.253) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Fri, 10 Jul 2020 07:53:39 +0000
Received: from EX13D05EUB003.ant.amazon.com ([10.43.166.253]) by
 EX13D05EUB003.ant.amazon.com ([10.43.166.253]) with mapi id 15.00.1497.006;
 Fri, 10 Jul 2020 07:53:39 +0000
From: "Wieczorkiewicz, Pawel" <wipawel@amazon.de>
To: Julien Grall <julien@xen.org>
Subject: Re: [XTF] xenbus: Don't wait if the response ring is full
Thread-Topic: [XTF] xenbus: Don't wait if the response ring is full
Thread-Index: AQHWVo856A8ZAu2mDEGOBIlrR83SyQ==
Date: Fri, 10 Jul 2020 07:53:39 +0000
Message-ID: <02F94EA5-3555-4D3B-97DF-98914410424B@amazon.com>
References: <20200709184647.5159-1-julien@xen.org>
In-Reply-To: <20200709184647.5159-1-julien@xen.org>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-ms-exchange-messagesentrepresentingtype: 1
x-ms-exchange-transport-fromentityheader: Hosted
x-originating-ip: [10.43.166.86]
Content-Type: text/plain; charset="utf-8"
Content-ID: <44A3B3D566F72048AF5A80600D164451@amazon.com>
MIME-Version: 1.0
Precedence: Bulk
Content-Transfer-Encoding: base64
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, "Grall,
 Julien" <jgrall@amazon.co.uk>, xen-devel <xen-devel@lists.xenproject.org>,
 "Wieczorkiewicz, Pawel" <wipawel@amazon.de>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> On 9. Jul 2020, at 20:46, Julien Grall <julien@xen.org> wrote:
> 
> 
> From: Julien Grall <jgrall@amazon.com>
> 
> XenStore response can be bigger than the response ring. In this case,
> it is possible to have the ring full (e.g. cons = 19 and prod = 1043).
> 
> However, XTF will consider that there is no data and therefore wait for
> more input. This will result in blocking indefinitely as the ring is full.
> 
> This can be solved by avoiding masking the difference between prod and
> cons.
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>
> ---
> common/xenbus.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/common/xenbus.c b/common/xenbus.c
> index 24fff4872372..f3bb30ac693f 100644
> --- a/common/xenbus.c
> +++ b/common/xenbus.c
> @@ -75,7 +75,7 @@ static void xenbus_read(void *data, size_t len)
>         uint32_t prod = ACCESS_ONCE(xb_ring->rsp_prod);
>         uint32_t cons = ACCESS_ONCE(xb_ring->rsp_cons);
> 
> -        part = mask_xenbus_idx(prod - cons);
> +        part = prod - cons;
> 
>         /* No data?  Kick xenstored and wait for it to produce some data. */
>         if ( !part )
> -- 
> 2.17.1
> 

Reviewed-by: Pawel Wieczorkiewicz <wipawel@amazon.de>

Amazon Development Center Germany GmbH
Krausenstr. 38
10117 Berlin
Geschaeftsfuehrung: Christian Schlaeger, Jonathan Weiss
Eingetragen am Amtsgericht Charlottenburg unter HRB 149173 B
Sitz: Berlin
Ust-ID: DE 289 237 879
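The full-ring bug addressed by the xenbus patch in the message above is easy to reproduce numerically. In this sketch the ring size (1024) and the `mask_xenbus_idx()` macro are assumptions modeled on the quoted diff, not taken from the actual XTF sources: with cons = 19 and prod = 1043 there are exactly 1024 unread bytes, and masking that difference makes a full ring indistinguishable from an empty one.

```c
#include <assert.h>
#include <stdint.h>

/* Assumed ring size and mask, modeled on the diff (hypothetical). */
#define XENBUS_RING_SIZE 1024u
#define mask_xenbus_idx(idx) ((idx) & (XENBUS_RING_SIZE - 1))

/* Old computation: masking the difference wraps 1024 down to 0, so a
 * completely full ring looks empty and the reader waits forever. */
static uint32_t unread_masked(uint32_t prod, uint32_t cons)
{
    return mask_xenbus_idx(prod - cons);
}

/* Fixed computation: the raw difference of free-running indices is the
 * number of unread bytes (unsigned wraparound keeps this correct). */
static uint32_t unread_plain(uint32_t prod, uint32_t cons)
{
    return prod - cons;
}
```

Masking is still needed when converting an index into a ring offset; the fix only avoids masking the *difference* used to decide whether data is pending.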



From xen-devel-bounces@lists.xenproject.org Fri Jul 10 09:50:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jul 2020 09:50:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jtpf6-0006dI-T8; Fri, 10 Jul 2020 09:49:44 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=RxGT=AV=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jtpf6-0006dD-1c
 for xen-devel@lists.xenproject.org; Fri, 10 Jul 2020 09:49:44 +0000
X-Inumbo-ID: acfd68e2-c292-11ea-8f79-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id acfd68e2-c292-11ea-8f79-12813bfff9fa;
 Fri, 10 Jul 2020 09:49:41 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id D99DAADC5;
 Fri, 10 Jul 2020 09:49:40 +0000 (UTC)
Subject: Re: [PATCH] xen: don't reschedule in preemption off sections
To: Juergen Gross <jgross@suse.com>
References: <20200710075050.4769-1-jgross@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <988ff766-b7de-2e25-2524-c412379686fc@suse.com>
Date: Fri, 10 Jul 2020 11:49:41 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20200710075050.4769-1-jgross@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>, stable@vger.kernel.org,
 Sarah Newman <srn@prgmr.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 10.07.2020 09:50, Juergen Gross wrote:
> To support long-running hypercalls, xen_maybe_preempt_hcall() calls
> cond_resched() in case a hypercall marked as preemptible has been
> interrupted.
> 
> Normally this is no problem, as only hypercalls done via some ioctl()s
> are marked as preemptible. In rare cases, when an interrupt occurs
> during such a preemptible hypercall and a softirq action is started
> from irq_exit(), a further hypercall issued by the softirq handler
> will be regarded as preemptible, too. This might lead to rescheduling
> despite the softirq handler potentially having called
> preempt_disable(), leading to splats like:
> 
> BUG: sleeping function called from invalid context at drivers/xen/preempt.c:37
> in_atomic(): 1, irqs_disabled(): 0, non_block: 0, pid: 20775, name: xl
> INFO: lockdep is turned off.
> CPU: 1 PID: 20775 Comm: xl Tainted: G D W 5.4.46-1_prgmr_debug.el7.x86_64 #1
> Call Trace:
> <IRQ>
> dump_stack+0x8f/0xd0
> ___might_sleep.cold.76+0xb2/0x103
> xen_maybe_preempt_hcall+0x48/0x70
> xen_do_hypervisor_callback+0x37/0x40
> RIP: e030:xen_hypercall_xen_version+0xa/0x20
> Code: ...
> RSP: e02b:ffffc900400dcc30 EFLAGS: 00000246
> RAX: 000000000004000d RBX: 0000000000000200 RCX: ffffffff8100122a
> RDX: ffff88812e788000 RSI: 0000000000000000 RDI: 0000000000000000
> RBP: ffffffff83ee3ad0 R08: 0000000000000001 R09: 0000000000000001
> R10: 0000000000000000 R11: 0000000000000246 R12: ffff8881824aa0b0
> R13: 0000000865496000 R14: 0000000865496000 R15: ffff88815d040000
> ? xen_hypercall_xen_version+0xa/0x20
> ? xen_force_evtchn_callback+0x9/0x10
> ? check_events+0x12/0x20
> ? xen_restore_fl_direct+0x1f/0x20
> ? _raw_spin_unlock_irqrestore+0x53/0x60
> ? debug_dma_sync_single_for_cpu+0x91/0xc0
> ? _raw_spin_unlock_irqrestore+0x53/0x60
> ? xen_swiotlb_sync_single_for_cpu+0x3d/0x140
> ? mlx4_en_process_rx_cq+0x6b6/0x1110 [mlx4_en]
> ? mlx4_en_poll_rx_cq+0x64/0x100 [mlx4_en]
> ? net_rx_action+0x151/0x4a0
> ? __do_softirq+0xed/0x55b
> ? irq_exit+0xea/0x100
> ? xen_evtchn_do_upcall+0x2c/0x40
> ? xen_do_hypervisor_callback+0x29/0x40
> </IRQ>
> ? xen_hypercall_domctl+0xa/0x20
> ? xen_hypercall_domctl+0x8/0x20
> ? privcmd_ioctl+0x221/0x990 [xen_privcmd]
> ? do_vfs_ioctl+0xa5/0x6f0
> ? ksys_ioctl+0x60/0x90
> ? trace_hardirqs_off_thunk+0x1a/0x20
> ? __x64_sys_ioctl+0x16/0x20
> ? do_syscall_64+0x62/0x250
> ? entry_SYSCALL_64_after_hwframe+0x49/0xbe
> 
> Fix that by testing preempt_count() before calling cond_resched().
> 
> In kernel 5.8 this can't happen any more due to the entry code rework.
> 
> Reported-by: Sarah Newman <srn@prgmr.com>
> Fixes: 0fa2f5cb2b0ecd8 ("sched/preempt, xen: Use need_resched() instead of should_resched()")
> Cc: Sarah Newman <srn@prgmr.com>
> Signed-off-by: Juergen Gross <jgross@suse.com>
> ---
>  drivers/xen/preempt.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/drivers/xen/preempt.c b/drivers/xen/preempt.c
> index 17240c5325a3..6ad87b5c95ed 100644
> --- a/drivers/xen/preempt.c
> +++ b/drivers/xen/preempt.c
> @@ -27,7 +27,7 @@ EXPORT_SYMBOL_GPL(xen_in_preemptible_hcall);
>  asmlinkage __visible void xen_maybe_preempt_hcall(void)
>  {
>  	if (unlikely(__this_cpu_read(xen_in_preemptible_hcall)
> -		     && need_resched())) {
> +		     && need_resched() && !preempt_count())) {

Doesn't this have at least the latent risk of hiding issues in
other call trees (by no longer triggering logging like the one
that has prompted this change)? Wouldn't it be better to save,
clear, and restore the flag in one of xen_evtchn_do_upcall() or
xen_do_hypervisor_callback()?

Jan
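The save/clear/restore alternative Jan proposes can be sketched in plain C. Here a plain variable stands in for the per-CPU xen_in_preemptible_hcall flag and a function pointer stands in for the real upcall path; all names (`handle_upcall`, `nested_handler`) are hypothetical, not the kernel's.

```c
#include <assert.h>
#include <stdbool.h>

static bool in_preemptible_hcall;
static bool flag_seen_by_handler;

/* A softirq handler issuing its own hypercall would observe the flag's
 * current value; record it for the test below. */
static void nested_handler(void)
{
    flag_seen_by_handler = in_preemptible_hcall;
}

/* Save the flag, clear it so nested work is never treated as a
 * preemptible hypercall, run the handler, then restore. */
static void handle_upcall(void (*handler)(void))
{
    bool saved = in_preemptible_hcall;  /* save */
    in_preemptible_hcall = false;       /* clear for nested work */
    handler();
    in_preemptible_hcall = saved;       /* restore */
}
```

The difference from the posted patch: instead of tolerating the stale flag and gating on preempt_count(), the upcall entry makes the flag accurate for the duration of any nested handlers.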


From xen-devel-bounces@lists.xenproject.org Fri Jul 10 10:17:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jul 2020 10:17:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jtq5a-0000gL-7C; Fri, 10 Jul 2020 10:17:06 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=qBAA=AV=samsung.com=b.zolnierkie@srs-us1.protection.inumbo.net>)
 id 1jtq5X-0000gG-KT
 for xen-devel@lists.xenproject.org; Fri, 10 Jul 2020 10:17:04 +0000
X-Inumbo-ID: 7eb955be-c296-11ea-8f7c-12813bfff9fa
Received: from mailout2.w1.samsung.com (unknown [210.118.77.12])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7eb955be-c296-11ea-8f7c-12813bfff9fa;
 Fri, 10 Jul 2020 10:17:01 +0000 (UTC)
Received: from eucas1p2.samsung.com (unknown [182.198.249.207])
 by mailout2.w1.samsung.com (KnoxPortal) with ESMTP id
 20200710101701euoutp022f4d00e92229c91bc3c3b0cbb04b83c7~gXLnKukvR1284512845euoutp02L
 for <xen-devel@lists.xenproject.org>; Fri, 10 Jul 2020 10:17:01 +0000 (GMT)
DKIM-Filter: OpenDKIM Filter v2.11.0 mailout2.w1.samsung.com
 20200710101701euoutp022f4d00e92229c91bc3c3b0cbb04b83c7~gXLnKukvR1284512845euoutp02L
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=samsung.com;
 s=mail20170921; t=1594376221;
 bh=zuImCTUG8KB0maKe8qkWKgOT1mxGOgJZkICKt5oGbeI=;
 h=Subject:To:Cc:From:Date:In-Reply-To:References:From;
 b=plnuMrsuGMEkycrAG8jAYl+NYTqyivgLb9gVuse4XXicaMM9PDEyMfw5nB5QoSfJm
 P1aQT6TW0KjabHQlpjLuvOfJMNx0eAQjIwyi7eBAgZXTdBaZyXsq6PVX9p622yLKya
 6tgQ+p2keFGMAmWTZwwfp05CEFWjmdO4ZfAR6TnY=
Received: from eusmges3new.samsung.com (unknown [203.254.199.245]) by
 eucas1p2.samsung.com (KnoxPortal) with ESMTP id
 20200710101701eucas1p2b25fdf4c5eb5debe392ff1ff0530324e~gXLnD4l0R0954409544eucas1p2H;
 Fri, 10 Jul 2020 10:17:01 +0000 (GMT)
Received: from eucas1p1.samsung.com ( [182.198.249.206]) by
 eusmges3new.samsung.com (EUCPMTA) with SMTP id B5.6D.06318.C10480F5; Fri, 10
 Jul 2020 11:17:00 +0100 (BST)
Received: from eusmtrp2.samsung.com (unknown [182.198.249.139]) by
 eucas1p2.samsung.com (KnoxPortal) with ESMTPA id
 20200710101700eucas1p27a9b4f0c67d6b5af361ad3085c830d39~gXLmrr0ra0940409404eucas1p23;
 Fri, 10 Jul 2020 10:17:00 +0000 (GMT)
Received: from eusmgms1.samsung.com (unknown [182.198.249.179]) by
 eusmtrp2.samsung.com (KnoxPortal) with ESMTP id
 20200710101700eusmtrp20bdf7cf974e4798332791a5c06de0237~gXLmq972P2032920329eusmtrp2m;
 Fri, 10 Jul 2020 10:17:00 +0000 (GMT)
X-AuditID: cbfec7f5-38bff700000018ae-fc-5f08401c4f46
Received: from eusmtip2.samsung.com ( [203.254.199.222]) by
 eusmgms1.samsung.com (EUCPMTA) with SMTP id F2.C0.06314.C10480F5; Fri, 10
 Jul 2020 11:17:00 +0100 (BST)
Received: from [106.120.51.71] (unknown [106.120.51.71]) by
 eusmtip2.samsung.com (KnoxPortal) with ESMTPA id
 20200710101700eusmtip20235ca730843f772b9729ddb9037521d~gXLmM2BMQ2026820268eusmtip2O;
 Fri, 10 Jul 2020 10:17:00 +0000 (GMT)
Subject: Re: [PATCH] efi: avoid error message when booting under Xen
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
From: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
Message-ID: <170e01b1-220d-5cb7-03b2-c70ed3ae58e4@samsung.com>
Date: Fri, 10 Jul 2020 12:16:57 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:60.0) Gecko/20100101
 Thunderbird/60.8.0
MIME-Version: 1.0
In-Reply-To: <ec21b883-dc5c-f3fe-e989-7fa13875a4c4@suse.com>
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-Brightmail-Tracker: H4sIAAAAAAAAA02SeUgUYRjG+XZmx3Fx5XM1fHGlYjEhI80UGUzESmrznyL6I+3QNSePPHe8
 KzDMI8uwJMxVWNsgS0HNPNI8QMuzNPNITUjzlszFEyMtZ0fJ/57vfd4f3/PASxOyCrEVHRwe
 zarDVaEKSkJWtax3Hbb2oH2PNPa4MuvLC4jpW1mgmPyho0zq2DBi2h7qxUxvbT7FZDwbNGLW
 KrJFHrSyvOgepfx+v1WkbNb3kcqFhn5KWVrRTyqXyveeo3wkbgFsaHAsq3Zw95METXWsiiPL
 TOP1uTlUEnpgkoGMacDOMPttgMxAElqGXyJY0BSTvCHDywjG33sLxhKC1cWP1A7RkdlBCEYh
 glJtOxIe8whKJuoNuDn2hNXiJBGvLbALzC1lGvFLBP6BoLhx3LBEYVd4lFa0RdO0FLtDn/YC
 PybxAShImSN4vQdfhMXRZjGvpdgM2nMnDKgxdoPxJ3WI1wS2hOEJrUjQ+yC5Ms+QDnCLETTO
 lm3H9oSprDEjQZvDXGvFtraGvzU8zAMlCDbSZ7bpagSF2Zvb9DEY6fpN8UkJfBBKax2E8XH4
 UJ1sKADYFAbnzYQQpvC4KocQxlJIT5UJ27ZQ9mInjjVk1LwispBCs6uaZlcdza46mv//FiCy
 CFmyMVxYIMs5hbNx9pwqjIsJD7S/FhFWjrZuqXOzdeUtavjj34QwjRQmUr8NylcmVsVyCWFN
 CGhCYSE98anzqkwaoEpIZNURvuqYUJZrQnKaVFhKnXSzV2Q4UBXN3mDZSFa944poY6sk9LUu
 1Gf1zc8v85cHCup69mfapMTrrp91jopZ0TGFcSaH9FG3ck9falv08u093xCy0p3f6lhf7yS/
 wznOhLSfqdv4dXugyNiuPOBUopP3kLQwOE3u9U5fEhdyT+v//GSSfHptWj5qox3pUz59nSUp
 q/zsZ4vEkzqXie6beXfdHNcnFSQXpHK0I9Sc6h+lhHiIRwMAAA==
X-Brightmail-Tracker: H4sIAAAAAAAAA+NgFjrGIsWRmVeSWpSXmKPExsVy+t/xe7oyDhzxBru/yFr8/PKe0eLK1/ds
 FnNuGlm0PbzFaHGi7wOrxeVdc9gsuhbeYLf4vmUykwOHx6ZVnWwe97uPM3kc/nCFxeP9vqts
 Huu3XGXx+LxJLoAtSs+mKL+0JFUhI7+4xFYp2tDCSM/Q0kLPyMRSz9DYPNbKyFRJ384mJTUn
 syy1SN8uQS/j2alvrAUb+Co+zJzO1sDYw9PFyMkhIWAicar3FHMXIxeHkMBSRokZK5sYuxg5
 gBIyEsfXl0HUCEv8udbFBlHzmlHi3KpLjCAJYQEXiW+rG5hAbBEBM4lXn3vZQYqYBR4xSnT9
 XcYE0fEGaGrbYRaQKjYBK4mJ7avANvAK2ElcmR8CEmYRUJVY0PqKGcQWFYiQOLxjFtgCXgFB
 iZMzn4C1cgrYSDyeugcsziygLvFn3iVmCFtc4taT+UwQtrxE89bZzBMYhWYhaZ+FpGUWkpZZ
 SFoWMLKsYhRJLS3OTc8tNtQrTswtLs1L10vOz93ECIzDbcd+bt7BeGlj8CFGAQ5GJR7eHf/Z
 4oVYE8uKK3MPMUpwMCuJ8DqdPR0nxJuSWFmVWpQfX1Sak1p8iNEU6LmJzFKiyfnAFJFXEm9o
 amhuYWlobmxubGahJM7bIXAwRkggPbEkNTs1tSC1CKaPiYNTqoHRb92WuV5/fNYZH7OQEeEs
 1LlrHL9SQy3SLE5/7zqVJ3U7H7R+CBFw2V8k+vr8M7WoDfpMocp+seJ/ulZHKGX1RXi+6mxK
 zPFR5rM/eOm+c0dOIEc8s5DCc4N9Vx5tjO+d5SjyKl4j/cnF36cMtp2Kazo3hV89ec6JO1/9
 r2/4L9h6P5nBuU+JpTgj0VCLuag4EQDuYox+2QIAAA==
X-CMS-MailID: 20200710101700eucas1p27a9b4f0c67d6b5af361ad3085c830d39
X-Msg-Generator: CA
Content-Type: text/plain; charset="utf-8"
X-RootMTR: 20200709091750eucas1p18003b0c8127600369485c62c1e587c22
X-EPHeader: CA
CMS-TYPE: 201P
X-CMS-RootMailID: 20200709091750eucas1p18003b0c8127600369485c62c1e587c22
References: <20200610141052.13258-1-jgross@suse.com>
 <094be567-2c82-7d5b-e432-288286c6c3fb@suse.com>
 <CGME20200709091750eucas1p18003b0c8127600369485c62c1e587c22@eucas1p1.samsung.com>
 <ec21b883-dc5c-f3fe-e989-7fa13875a4c4@suse.com>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: linux-fbdev@vger.kernel.org, linux-efi@vger.kernel.org,
 linux-kernel@vger.kernel.org, dri-devel@lists.freedesktop.org,
 Peter Jones <pjones@redhat.com>, xen-devel@lists.xenproject.org,
 Ard Biesheuvel <ardb@kernel.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>


[ added EFI Maintainer & ML to Cc: ]

Hi,

On 7/9/20 11:17 AM, Jürgen Groß wrote:
> On 28.06.20 10:50, Jürgen Groß wrote:
>> Ping?
>>
>> On 10.06.20 16:10, Juergen Gross wrote:
>>> efifb_probe() will issue an error message when the kernel is booted
>>> as Xen dom0 from UEFI, as EFI_MEMMAP won't be set in this case. Avoid
>>> that message by calling efi_mem_desc_lookup() only if EFI_PARAVIRT
>>> isn't set.
>>>
>>> Fixes: 38ac0287b7f4 ("fbdev/efifb: Honour UEFI memory map attributes when mapping the FB")
>>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>> ---
>>>   drivers/video/fbdev/efifb.c | 2 +-
>>>   1 file changed, 1 insertion(+), 1 deletion(-)
>>>
>>> diff --git a/drivers/video/fbdev/efifb.c b/drivers/video/fbdev/efifb.c
>>> index 65491ae74808..f5eccd1373e9 100644
>>> --- a/drivers/video/fbdev/efifb.c
>>> +++ b/drivers/video/fbdev/efifb.c
>>> @@ -453,7 +453,7 @@ static int efifb_probe(struct platform_device *dev)
>>>       info->apertures->ranges[0].base = efifb_fix.smem_start;
>>>       info->apertures->ranges[0].size = size_remap;
>>> -    if (efi_enabled(EFI_BOOT) &&
>>> +    if (efi_enabled(EFI_BOOT) && !efi_enabled(EFI_PARAVIRT) &&
>>>           !efi_mem_desc_lookup(efifb_fix.smem_start, &md)) {
>>>           if ((efifb_fix.smem_start + efifb_fix.smem_len) >
>>>               (md.phys_addr + (md.num_pages << EFI_PAGE_SHIFT))) {
>>>
>>
> 
> In case I see no reaction from the maintainer for another week I'll take
> this patch through the Xen tree.

From fbdev POV this change looks fine to me, and I'm OK with merging it
through the Xen or EFI tree:

Acked-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>

Best regards,
--
Bartlomiej Zolnierkiewicz
Samsung R&D Institute Poland
Samsung Electronics
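The efifb change acked above hinges on an efi_enabled()-style feature-bit test. The sketch below models that test; the bit positions, the flag word, and the helper names are arbitrary assumptions for illustration, not the kernel's definitions. Under Xen dom0, EFI_PARAVIRT is set while EFI_MEMMAP is not, so the memory-map lookup has to be skipped.

```c
#include <assert.h>

/* Assumed bit positions (hypothetical, not the kernel's). */
enum { EFI_BOOT = 0, EFI_PARAVIRT = 1, EFI_MEMMAP = 2 };
static unsigned long efi_flags;

/* Model of efi_enabled(): test one feature bit in the flag word. */
static int efi_enabled_model(int feature)
{
    return (int)((efi_flags >> feature) & 1ul);
}

/* The condition after the patch: consult the EFI memory map only on a
 * non-paravirtualized EFI boot. */
static int should_lookup_memmap(void)
{
    return efi_enabled_model(EFI_BOOT) && !efi_enabled_model(EFI_PARAVIRT);
}
```

Setting the paravirt bit, as a dom0 boot would, suppresses the lookup and with it the spurious error message.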


From xen-devel-bounces@lists.xenproject.org Fri Jul 10 10:50:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jul 2020 10:50:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jtqbm-0003q4-U9; Fri, 10 Jul 2020 10:50:22 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=AY6w=AV=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jtqbl-0003pz-Mc
 for xen-devel@lists.xenproject.org; Fri, 10 Jul 2020 10:50:21 +0000
X-Inumbo-ID: 25ef0ed8-c29b-11ea-b7bb-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 25ef0ed8-c29b-11ea-b7bb-bc764e2007e4;
 Fri, 10 Jul 2020 10:50:20 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id D3878AD75;
 Fri, 10 Jul 2020 10:50:19 +0000 (UTC)
Subject: Re: [PATCH] xen: don't reschedule in preemption off sections
To: Jan Beulich <jbeulich@suse.com>
References: <20200710075050.4769-1-jgross@suse.com>
 <988ff766-b7de-2e25-2524-c412379686fc@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <742457cf-4892-0e85-2fc8-d2eb9f8a3a51@suse.com>
Date: Fri, 10 Jul 2020 12:50:18 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.9.0
MIME-Version: 1.0
In-Reply-To: <988ff766-b7de-2e25-2524-c412379686fc@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>, stable@vger.kernel.org,
 Sarah Newman <srn@prgmr.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 10.07.20 11:49, Jan Beulich wrote:
> On 10.07.2020 09:50, Juergen Gross wrote:
>> For support of long running hypercalls xen_maybe_preempt_hcall() is
>> calling cond_resched() in case a hypercall marked as preemptible has
>> been interrupted.
>>
>> Normally this is no problem, as only hypercalls done via some ioctl()s
>> are marked as preemptible. In the rare case that an interrupt occurs
>> during such a preemptible hypercall and a softirq action is started
>> from irq_exit(), a further hypercall issued by the softirq handler
>> will be regarded as preemptible, too. This might lead to rescheduling
>> even though the softirq handler may have called preempt_disable(),
>> producing splats like:
>>
>> BUG: sleeping function called from invalid context at drivers/xen/preempt.c:37
>> in_atomic(): 1, irqs_disabled(): 0, non_block: 0, pid: 20775, name: xl
>> INFO: lockdep is turned off.
>> CPU: 1 PID: 20775 Comm: xl Tainted: G D W 5.4.46-1_prgmr_debug.el7.x86_64 #1
>> Call Trace:
>> <IRQ>
>> dump_stack+0x8f/0xd0
>> ___might_sleep.cold.76+0xb2/0x103
>> xen_maybe_preempt_hcall+0x48/0x70
>> xen_do_hypervisor_callback+0x37/0x40
>> RIP: e030:xen_hypercall_xen_version+0xa/0x20
>> Code: ...
>> RSP: e02b:ffffc900400dcc30 EFLAGS: 00000246
>> RAX: 000000000004000d RBX: 0000000000000200 RCX: ffffffff8100122a
>> RDX: ffff88812e788000 RSI: 0000000000000000 RDI: 0000000000000000
>> RBP: ffffffff83ee3ad0 R08: 0000000000000001 R09: 0000000000000001
>> R10: 0000000000000000 R11: 0000000000000246 R12: ffff8881824aa0b0
>> R13: 0000000865496000 R14: 0000000865496000 R15: ffff88815d040000
>> ? xen_hypercall_xen_version+0xa/0x20
>> ? xen_force_evtchn_callback+0x9/0x10
>> ? check_events+0x12/0x20
>> ? xen_restore_fl_direct+0x1f/0x20
>> ? _raw_spin_unlock_irqrestore+0x53/0x60
>> ? debug_dma_sync_single_for_cpu+0x91/0xc0
>> ? _raw_spin_unlock_irqrestore+0x53/0x60
>> ? xen_swiotlb_sync_single_for_cpu+0x3d/0x140
>> ? mlx4_en_process_rx_cq+0x6b6/0x1110 [mlx4_en]
>> ? mlx4_en_poll_rx_cq+0x64/0x100 [mlx4_en]
>> ? net_rx_action+0x151/0x4a0
>> ? __do_softirq+0xed/0x55b
>> ? irq_exit+0xea/0x100
>> ? xen_evtchn_do_upcall+0x2c/0x40
>> ? xen_do_hypervisor_callback+0x29/0x40
>> </IRQ>
>> ? xen_hypercall_domctl+0xa/0x20
>> ? xen_hypercall_domctl+0x8/0x20
>> ? privcmd_ioctl+0x221/0x990 [xen_privcmd]
>> ? do_vfs_ioctl+0xa5/0x6f0
>> ? ksys_ioctl+0x60/0x90
>> ? trace_hardirqs_off_thunk+0x1a/0x20
>> ? __x64_sys_ioctl+0x16/0x20
>> ? do_syscall_64+0x62/0x250
>> ? entry_SYSCALL_64_after_hwframe+0x49/0xbe
>>
>> Fix that by testing preempt_count() before calling cond_resched().
>>
>> In kernel 5.8 this can't happen any more due to the entry code rework.
>>
>> Reported-by: Sarah Newman <srn@prgmr.com>
>> Fixes: 0fa2f5cb2b0ecd8 ("sched/preempt, xen: Use need_resched() instead of should_resched()")
>> Cc: Sarah Newman <srn@prgmr.com>
>> Signed-off-by: Juergen Gross <jgross@suse.com>
>> ---
>>   drivers/xen/preempt.c | 2 +-
>>   1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/drivers/xen/preempt.c b/drivers/xen/preempt.c
>> index 17240c5325a3..6ad87b5c95ed 100644
>> --- a/drivers/xen/preempt.c
>> +++ b/drivers/xen/preempt.c
>> @@ -27,7 +27,7 @@ EXPORT_SYMBOL_GPL(xen_in_preemptible_hcall);
>>   asmlinkage __visible void xen_maybe_preempt_hcall(void)
>>   {
>>   	if (unlikely(__this_cpu_read(xen_in_preemptible_hcall)
>> -		     && need_resched())) {
>> +		     && need_resched() && !preempt_count())) {
> 
> Doesn't this have the at least latent risk of hiding issues in
> other call trees (by no longer triggering logging like the one
> that has prompted this change)? Wouldn't it be better to save,
> clear, and restore the flag in one of xen_evtchn_do_upcall() or
> xen_do_hypervisor_callback()?

First, regarding the "risk of hiding issues": it seems as if lots of
kernels aren't even configured to trigger this logging. It requires
CONFIG_DEBUG_ATOMIC_SLEEP to be enabled, and at least the SUSE kernels
don't seem to have it on. I suspect the occasional xen_mc_flush()
failures we have seen are related to this problem.

In theory, saving, clearing and restoring the flag would be fine, but
it can't be done in a single function with the code flow as it is up to
5.7. What would need to be done is to save and clear the flag in e.g.
__xen_evtchn_do_upcall() and to pass it to xen_maybe_preempt_hcall() as
a parameter. In xen_maybe_preempt_hcall() the passed flag value would
then be used for the decision whether to call cond_resched(), and the
flag would be restored afterwards (after the cond_resched() call).

This is basically the code flow as in kernel 5.8, but the flag handling
would be spread over various source files.

A problem would only be possible if a preempt_disable() were missing
when doing a hypercall which must not be interrupted by scheduling, but
then a kernel configured with preemption would already have a problem.

In order to keep this patch - which is for stable only, after all - as
simple as possible, I'd rather leave it as is.


Juergen


From xen-devel-bounces@lists.xenproject.org Fri Jul 10 11:01:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jul 2020 11:01:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jtqmQ-0004kK-V2; Fri, 10 Jul 2020 11:01:22 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=V7us=AV=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jtqmP-0004kF-Lz
 for xen-devel@lists.xenproject.org; Fri, 10 Jul 2020 11:01:21 +0000
X-Inumbo-ID: af77e5ca-c29c-11ea-bb8b-bc764e2007e4
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id af77e5ca-c29c-11ea-bb8b-bc764e2007e4;
 Fri, 10 Jul 2020 11:01:20 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1594378880;
 h=subject:to:cc:references:from:message-id:date:
 mime-version:in-reply-to:content-transfer-encoding;
 bh=puAbi62BkQpGBEAhGE13aYdWcLmORt6l59tmR5VIyyE=;
 b=XBNNoGBHiZaX1+pM7a/UCmUVhzz1GCFa2sYh88REbtIlnGBMchWqCGDD
 o6NGf8Bu+EiC84QQZi8KKC+XLI40FEPL+zPNDAekkWuGaUHi6ewt6CUTc
 6ny73241klvRAGtXPatQD6KI7vtzjkoia7QS6To3HASn+BJVz7HB8yhY8 Y=;
Authentication-Results: esa5.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: X5MtrZIjo/eiVV36SsriNXzfVv2nd5uvgr5eWfCiv93k/TR2/wGgs4btYZcRJLFVV2QXVmNgcR
 Fpgw88ROkfNvjRnh0xVLFfr+VtM2I0CjU/E9nkXAu0TYWhCth9EuV98Zu0MbhxlobixSBqy/aF
 E83VOhguMkd2FLdfEHcFd+2MHBk9A1fxOqEs1/K9FiHOfFtEOdHeBW0/kFZgESEj7NCwlfJEjv
 Cv6GbmKxfS3hsJqvyCQNhOkU71Rq0c3beyUQJbkGKRYjAzDgyYSwOyxmS7igmHF/KKrPuyov42
 /tU=
X-SBRS: 2.7
X-MesageID: 22255738
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,335,1589256000"; d="scan'208";a="22255738"
Subject: Re: [XTF] xenbus: Don't wait if the response ring is full
To: "Wieczorkiewicz, Pawel" <wipawel@amazon.de>, Julien Grall <julien@xen.org>
References: <20200709184647.5159-1-julien@xen.org>
 <02F94EA5-3555-4D3B-97DF-98914410424B@amazon.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <1e465ce8-c86a-60de-b95b-145982a70552@citrix.com>
Date: Fri, 10 Jul 2020 12:01:16 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <02F94EA5-3555-4D3B-97DF-98914410424B@amazon.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel <xen-devel@lists.xenproject.org>, "Grall,
 Julien" <jgrall@amazon.co.uk>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 10/07/2020 08:53, Wieczorkiewicz, Pawel wrote:
>> On 9. Jul 2020, at 20:46, Julien Grall <julien@xen.org> wrote:
>>
>> A XenStore response can be bigger than the response ring. In this case,
>> it is possible to have the ring full (e.g. cons = 19 and prod = 1043).
>>
>> However, XTF will consider that there is no data and therefore wait for
>> more input. This will block indefinitely, as the ring is full.
>>
>> This can be solved by not masking the difference between prod and
>> cons.
>>
>> Signed-off-by: Julien Grall <jgrall@amazon.com>
> Reviewed-by: Pawel Wieczorkiewicz <wipawel@amazon.de>

Applied, thanks.

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri Jul 10 11:11:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jul 2020 11:11:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jtqvo-0005c4-U7; Fri, 10 Jul 2020 11:11:04 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=uxYN=AV=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jtqvn-0005bk-Q4
 for xen-devel@lists.xenproject.org; Fri, 10 Jul 2020 11:11:03 +0000
X-Inumbo-ID: 079afd4a-c29e-11ea-bb8b-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 079afd4a-c29e-11ea-bb8b-bc764e2007e4;
 Fri, 10 Jul 2020 11:10:57 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=lrhUK3LaYZkt8dIPo0V4UgZzUtTxjfcUErraZRgGlYY=; b=UFJV/LMXX/I+NU39kOdv3cTY3
 fd/ycZEtGAdQXeJmcK7tUJC/z2u5Vswp8jWp3/bzSLwdcqERr9/Yxak2cfW9yQslUTKG8eXoPJjUT
 r5EMMEzZH7Jvb0ykRh+AhrkJZZkcbIykvdjLgxLfAVgt2W/+mFyyAoXRWxrxUy+S2ukZo=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jtqvh-0001fY-Bb; Fri, 10 Jul 2020 11:10:57 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jtqvg-00057P-VF; Fri, 10 Jul 2020 11:10:57 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jtqvg-0004j8-Uc; Fri, 10 Jul 2020 11:10:56 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151777-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 151777: tolerable all pass - PUSHED
X-Osstest-Failures: libvirt:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 libvirt:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 libvirt:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 libvirt:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 libvirt:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 libvirt:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 libvirt:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 libvirt:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 libvirt:test-arm64-arm64-libvirt-qcow2:migrate-support-check:fail:nonblocking
 libvirt:test-arm64-arm64-libvirt-qcow2:saverestore-support-check:fail:nonblocking
 libvirt:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 libvirt:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 libvirt:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 libvirt:test-arm64-arm64-libvirt:migrate-support-check:fail:nonblocking
 libvirt:test-arm64-arm64-libvirt:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
X-Osstest-Versions-That: libvirt=e7998ebeaf15e4e8825be0dd97aa1316f194f00d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 10 Jul 2020 11:10:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151777 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151777/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 151496
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 151496
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-qcow2 12 migrate-support-check        fail never pass
 test-arm64-arm64-libvirt-qcow2 13 saverestore-support-check    fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt     14 saverestore-support-check    fail   never pass

version targeted for testing:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad
baseline version:
 libvirt              e7998ebeaf15e4e8825be0dd97aa1316f194f00d

Last test of basis   151496  2020-07-01 04:23:43 Z    9 days
Failing since        151527  2020-07-02 04:29:15 Z    8 days    9 attempts
Testing same since   151777  2020-07-10 04:19:19 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrea Bolognani <abologna@redhat.com>
  Bastien Orivel <bastien.orivel@diateam.net>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniel Veillard <veillard@redhat.com>
  Erik Skultety <eskultet@redhat.com>
  Fangge Jin <fjin@redhat.com>
  Jianan Gao <jgao@redhat.com>
  Ján Tomko <jtomko@redhat.com>
  Laine Stump <laine@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Martin Kletzander <mkletzan@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Nicolas Brignone <nmbrignone@gmail.com>
  Peter Krempa <pkrempa@redhat.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Ryan Schmidt <git@ryandesign.com>
  Yanqiu Zhang <yanqzhan@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-libvirt                                     pass    
 test-arm64-arm64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-arm64-arm64-libvirt-qcow2                               pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/libvirt.git
   e7998ebeaf..2c846fa6bc  2c846fa6bcc11929c9fb857a22430fb9945654ad -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Fri Jul 10 11:36:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jul 2020 11:36:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jtrKK-0007Nm-1Y; Fri, 10 Jul 2020 11:36:24 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6UOn=AV=oracle.com=dan.carpenter@srs-us1.protection.inumbo.net>)
 id 1jtrKI-0007Nh-L8
 for xen-devel@lists.xenproject.org; Fri, 10 Jul 2020 11:36:22 +0000
X-Inumbo-ID: 93e5c426-c2a1-11ea-8f91-12813bfff9fa
Received: from userp2120.oracle.com (unknown [156.151.31.85])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 93e5c426-c2a1-11ea-8f91-12813bfff9fa;
 Fri, 10 Jul 2020 11:36:21 +0000 (UTC)
Received: from pps.filterd (userp2120.oracle.com [127.0.0.1])
 by userp2120.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 06ABWuaS065848;
 Fri, 10 Jul 2020 11:36:19 GMT
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com;
 h=date : from : to : cc
 : subject : message-id : mime-version : content-type; s=corp-2020-01-29;
 bh=3J30hYgYpoiRAxDBjTQRBuPvFkgjWUboNehE/fIyh6I=;
 b=A6xesikFkn+jf7BmmzdypSPLfM7uax7Tm5Q9BA+O3rs9tZVzawg/nXAf8dEsdWJs/nvg
 Pac5LRoHBaI8rJAylW+77RnG9nj29L1fibHsAVqx6FxcF+cnnOr6qTSiFO2WjARoREkJ
 OIzMfPPpr0rT33FV5L7Tsdq+3+UI1gcN7kPTXRuvoEnHcciSLHq4laUZ4TzqWNZEefCh
 TLJVRU30dL5UXNoMTtvPRq/x7aeJpfWsa1+KnggzGCZAQnGBR4qwPEOs55U1VI9a9WTi
 87s0AgsIza+I0Yaw+l+Q+ZYizZqKu5XMXuFBuDFLno13Bjef8sn2meAWoPdMuhLHOp6C 9Q== 
Received: from aserp3030.oracle.com (aserp3030.oracle.com [141.146.126.71])
 by userp2120.oracle.com with ESMTP id 325y0apxx5-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=FAIL);
 Fri, 10 Jul 2020 11:36:19 +0000
Received: from pps.filterd (aserp3030.oracle.com [127.0.0.1])
 by aserp3030.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 06ABTLDP037869;
 Fri, 10 Jul 2020 11:36:18 GMT
Received: from aserv0122.oracle.com (aserv0122.oracle.com [141.146.126.236])
 by aserp3030.oracle.com with ESMTP id 325k3jqewy-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Fri, 10 Jul 2020 11:36:18 +0000
Received: from abhmp0008.oracle.com (abhmp0008.oracle.com [141.146.116.14])
 by aserv0122.oracle.com (8.14.4/8.14.4) with ESMTP id 06ABaGgK013873;
 Fri, 10 Jul 2020 11:36:17 GMT
Received: from mwanda (/41.57.98.10) by default (Oracle Beehive Gateway v4.0)
 with ESMTP ; Fri, 10 Jul 2020 04:36:16 -0700
Date: Fri, 10 Jul 2020 14:36:10 +0300
From: Dan Carpenter <dan.carpenter@oracle.com>
To: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Juergen Gross <jgross@suse.com>
Subject: [PATCH] xen/xenbus: Fix a double free in xenbus_map_ring_pv()
Message-ID: <20200710113610.GA92345@mwanda>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
X-Mailer: git-send-email haha only kidding
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9677
 signatures=668680
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 malwarescore=0
 phishscore=0
 mlxlogscore=999 bulkscore=0 spamscore=0 mlxscore=0 adultscore=0
 suspectscore=2 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2006250000 definitions=main-2007100082
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9677
 signatures=668680
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 mlxscore=0
 adultscore=0 malwarescore=0
 clxscore=1011 impostorscore=0 phishscore=0 suspectscore=2
 priorityscore=1501 bulkscore=0 lowpriorityscore=0 mlxlogscore=999
 spamscore=0 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2006250000 definitions=main-2007100082
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Yan Yankovskyi <yyankovskyi@gmail.com>,
 Stefano Stabellini <sstabellini@kernel.org>, kernel-janitors@vger.kernel.org,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

When there is an error, the caller frees "info->node", so the free here
results in a double free. We should just delete the first kfree().

Fixes: 3848e4e0a32a ("xen/xenbus: avoid large structs and arrays on the stack")
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
---
 drivers/xen/xenbus/xenbus_client.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/drivers/xen/xenbus/xenbus_client.c b/drivers/xen/xenbus/xenbus_client.c
index 4f168b46fbca..786fbb7d8be0 100644
--- a/drivers/xen/xenbus/xenbus_client.c
+++ b/drivers/xen/xenbus/xenbus_client.c
@@ -693,10 +693,8 @@ static int xenbus_map_ring_pv(struct xenbus_device *dev,
 	bool leaked;
 
 	area = alloc_vm_area(XEN_PAGE_SIZE * nr_grefs, info->ptes);
-	if (!area) {
-		kfree(node);
+	if (!area)
 		return -ENOMEM;
-	}
 
 	for (i = 0; i < nr_grefs; i++)
 		info->phys_addrs[i] =
-- 
2.27.0



From xen-devel-bounces@lists.xenproject.org Fri Jul 10 11:55:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jul 2020 11:55:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jtrco-0000ny-6J; Fri, 10 Jul 2020 11:55:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=RxGT=AV=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jtrcm-0000nt-Rs
 for xen-devel@lists.xenproject.org; Fri, 10 Jul 2020 11:55:28 +0000
X-Inumbo-ID: 3ecd8926-c2a4-11ea-bb8b-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3ecd8926-c2a4-11ea-bb8b-bc764e2007e4;
 Fri, 10 Jul 2020 11:55:27 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 152FEAC46;
 Fri, 10 Jul 2020 11:55:27 +0000 (UTC)
Subject: Re: [PATCH] xen: don't reschedule in preemption off sections
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
References: <20200710075050.4769-1-jgross@suse.com>
 <988ff766-b7de-2e25-2524-c412379686fc@suse.com>
 <742457cf-4892-0e85-2fc8-d2eb9f8a3a51@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <af6db1b7-7802-0b2e-eb5f-ce69533b771f@suse.com>
Date: Fri, 10 Jul 2020 13:55:28 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <742457cf-4892-0e85-2fc8-d2eb9f8a3a51@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>, stable@vger.kernel.org,
 Sarah Newman <srn@prgmr.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 10.07.2020 12:50, Jürgen Groß wrote:
> On 10.07.20 11:49, Jan Beulich wrote:
>> On 10.07.2020 09:50, Juergen Gross wrote:
>>> For support of long running hypercalls xen_maybe_preempt_hcall() is
>>> calling cond_resched() in case a hypercall marked as preemptible has
>>> been interrupted.
>>>
>>> Normally this is no problem, as only hypercalls done via some ioctl()s
>>> are marked as preemptible. In the rare case that an interrupt occurs
>>> during such a preemptible hypercall and a softirq action is started
>>> from irq_exit(), a further hypercall issued by the softirq handler
>>> will be regarded as preemptible, too. This might lead to rescheduling
>>> even though the softirq handler may have called preempt_disable(),
>>> producing splats like:
>>>
>>> BUG: sleeping function called from invalid context at drivers/xen/preempt.c:37
>>> in_atomic(): 1, irqs_disabled(): 0, non_block: 0, pid: 20775, name: xl
>>> INFO: lockdep is turned off.
>>> CPU: 1 PID: 20775 Comm: xl Tainted: G D W 5.4.46-1_prgmr_debug.el7.x86_64 #1
>>> Call Trace:
>>> <IRQ>
>>> dump_stack+0x8f/0xd0
>>> ___might_sleep.cold.76+0xb2/0x103
>>> xen_maybe_preempt_hcall+0x48/0x70
>>> xen_do_hypervisor_callback+0x37/0x40
>>> RIP: e030:xen_hypercall_xen_version+0xa/0x20
>>> Code: ...
>>> RSP: e02b:ffffc900400dcc30 EFLAGS: 00000246
>>> RAX: 000000000004000d RBX: 0000000000000200 RCX: ffffffff8100122a
>>> RDX: ffff88812e788000 RSI: 0000000000000000 RDI: 0000000000000000
>>> RBP: ffffffff83ee3ad0 R08: 0000000000000001 R09: 0000000000000001
>>> R10: 0000000000000000 R11: 0000000000000246 R12: ffff8881824aa0b0
>>> R13: 0000000865496000 R14: 0000000865496000 R15: ffff88815d040000
>>> ? xen_hypercall_xen_version+0xa/0x20
>>> ? xen_force_evtchn_callback+0x9/0x10
>>> ? check_events+0x12/0x20
>>> ? xen_restore_fl_direct+0x1f/0x20
>>> ? _raw_spin_unlock_irqrestore+0x53/0x60
>>> ? debug_dma_sync_single_for_cpu+0x91/0xc0
>>> ? _raw_spin_unlock_irqrestore+0x53/0x60
>>> ? xen_swiotlb_sync_single_for_cpu+0x3d/0x140
>>> ? mlx4_en_process_rx_cq+0x6b6/0x1110 [mlx4_en]
>>> ? mlx4_en_poll_rx_cq+0x64/0x100 [mlx4_en]
>>> ? net_rx_action+0x151/0x4a0
>>> ? __do_softirq+0xed/0x55b
>>> ? irq_exit+0xea/0x100
>>> ? xen_evtchn_do_upcall+0x2c/0x40
>>> ? xen_do_hypervisor_callback+0x29/0x40
>>> </IRQ>
>>> ? xen_hypercall_domctl+0xa/0x20
>>> ? xen_hypercall_domctl+0x8/0x20
>>> ? privcmd_ioctl+0x221/0x990 [xen_privcmd]
>>> ? do_vfs_ioctl+0xa5/0x6f0
>>> ? ksys_ioctl+0x60/0x90
>>> ? trace_hardirqs_off_thunk+0x1a/0x20
>>> ? __x64_sys_ioctl+0x16/0x20
>>> ? do_syscall_64+0x62/0x250
>>> ? entry_SYSCALL_64_after_hwframe+0x49/0xbe
>>>
>>> Fix that by testing preempt_count() before calling cond_resched().
>>>
>>> In kernel 5.8 this can't happen any more due to the entry code rework.
>>>
>>> Reported-by: Sarah Newman <srn@prgmr.com>
>>> Fixes: 0fa2f5cb2b0ecd8 ("sched/preempt, xen: Use need_resched() instead of should_resched()")
>>> Cc: Sarah Newman <srn@prgmr.com>
>>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>> ---
>>>   drivers/xen/preempt.c | 2 +-
>>>   1 file changed, 1 insertion(+), 1 deletion(-)
>>>
>>> diff --git a/drivers/xen/preempt.c b/drivers/xen/preempt.c
>>> index 17240c5325a3..6ad87b5c95ed 100644
>>> --- a/drivers/xen/preempt.c
>>> +++ b/drivers/xen/preempt.c
>>> @@ -27,7 +27,7 @@ EXPORT_SYMBOL_GPL(xen_in_preemptible_hcall);
>>>   asmlinkage __visible void xen_maybe_preempt_hcall(void)
>>>   {
>>>   	if (unlikely(__this_cpu_read(xen_in_preemptible_hcall)
>>> -		     && need_resched())) {
>>> +		     && need_resched() && !preempt_count())) {
>>
>> Doesn't this have the at least latent risk of hiding issues in
>> other call trees (by no longer triggering logging like the one
>> that has prompted this change)? Wouldn't it be better to save,
>> clear, and restore the flag in one of xen_evtchn_do_upcall() or
>> xen_do_hypervisor_callback()?
> 
> First regarding "risk of hiding issues": it seems as if lots of kernels
> aren't even configured to trigger this logging. It would need
> CONFIG_DEBUG_ATOMIC_SLEEP to be enabled and at least SUSE kernels don't
> seem to have it on. I suspect the occasional xen_mc_flush() failures we
> have seen are related to this problem.
> 
> And in theory saving, clearing and restoring the flag would be fine, but
> it can't be done in a single function with the code flow as it is up to 5.7.
> What would need to be done is to save and clear the flag in e.g.
> __xen_evtchn_do_upcall() and to pass it to xen_maybe_preempt_hcall() as
> a parameter. In xen_maybe_preempt_hcall() the passed flag value would
> need to be used for the decision whether to call cond_resched() and then
> the flag could be restored (after the cond_resched() call).

I'm afraid I don't follow: If __xen_evtchn_do_upcall() cleared the flag,
xen_maybe_preempt_hcall() would amount to a no-op (up until the
flag's prior value would get restored), wouldn't it? No need to pass
anything into there.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Jul 10 12:01:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jul 2020 12:01:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jtriE-0001j5-6R; Fri, 10 Jul 2020 12:01:06 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=AY6w=AV=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jtriD-0001j0-H5
 for xen-devel@lists.xenproject.org; Fri, 10 Jul 2020 12:01:05 +0000
X-Inumbo-ID: 0794589e-c2a5-11ea-b7bb-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0794589e-c2a5-11ea-b7bb-bc764e2007e4;
 Fri, 10 Jul 2020 12:01:04 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id E2654AC46;
 Fri, 10 Jul 2020 12:01:03 +0000 (UTC)
Subject: Re: [PATCH] xen: don't reschedule in preemption off sections
To: Jan Beulich <jbeulich@suse.com>
References: <20200710075050.4769-1-jgross@suse.com>
 <988ff766-b7de-2e25-2524-c412379686fc@suse.com>
 <742457cf-4892-0e85-2fc8-d2eb9f8a3a51@suse.com>
 <af6db1b7-7802-0b2e-eb5f-ce69533b771f@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <97b15bd2-11f0-b530-dc07-b7d523bf88a2@suse.com>
Date: Fri, 10 Jul 2020 14:01:03 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.9.0
MIME-Version: 1.0
In-Reply-To: <af6db1b7-7802-0b2e-eb5f-ce69533b771f@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>, stable@vger.kernel.org,
 Sarah Newman <srn@prgmr.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 10.07.20 13:55, Jan Beulich wrote:
> On 10.07.2020 12:50, Jürgen Groß wrote:
>> On 10.07.20 11:49, Jan Beulich wrote:
>>> On 10.07.2020 09:50, Juergen Gross wrote:
>>>> For support of long running hypercalls xen_maybe_preempt_hcall() is
>>>> calling cond_resched() in case a hypercall marked as preemptible has
>>>> been interrupted.
>>>>
>>>> Normally this is no problem, as only hypercalls done via some ioctl()s
>>>> are marked to be preemptible. In rare cases when during such a
>>>> preemptible hypercall an interrupt occurs and any softirq action is
>>>> started from irq_exit(), a further hypercall issued by the softirq
>>>> handler will be regarded as preemptible, too. This might lead to
>>>> rescheduling in spite of the softirq handler potentially having set
>>>> preempt_disable(), leading to splats like:
>>>>
>>>> BUG: sleeping function called from invalid context at drivers/xen/preempt.c:37
>>>> in_atomic(): 1, irqs_disabled(): 0, non_block: 0, pid: 20775, name: xl
>>>> INFO: lockdep is turned off.
>>>> CPU: 1 PID: 20775 Comm: xl Tainted: G D W 5.4.46-1_prgmr_debug.el7.x86_64 #1
>>>> Call Trace:
>>>> <IRQ>
>>>> dump_stack+0x8f/0xd0
>>>> ___might_sleep.cold.76+0xb2/0x103
>>>> xen_maybe_preempt_hcall+0x48/0x70
>>>> xen_do_hypervisor_callback+0x37/0x40
>>>> RIP: e030:xen_hypercall_xen_version+0xa/0x20
>>>> Code: ...
>>>> RSP: e02b:ffffc900400dcc30 EFLAGS: 00000246
>>>> RAX: 000000000004000d RBX: 0000000000000200 RCX: ffffffff8100122a
>>>> RDX: ffff88812e788000 RSI: 0000000000000000 RDI: 0000000000000000
>>>> RBP: ffffffff83ee3ad0 R08: 0000000000000001 R09: 0000000000000001
>>>> R10: 0000000000000000 R11: 0000000000000246 R12: ffff8881824aa0b0
>>>> R13: 0000000865496000 R14: 0000000865496000 R15: ffff88815d040000
>>>> ? xen_hypercall_xen_version+0xa/0x20
>>>> ? xen_force_evtchn_callback+0x9/0x10
>>>> ? check_events+0x12/0x20
>>>> ? xen_restore_fl_direct+0x1f/0x20
>>>> ? _raw_spin_unlock_irqrestore+0x53/0x60
>>>> ? debug_dma_sync_single_for_cpu+0x91/0xc0
>>>> ? _raw_spin_unlock_irqrestore+0x53/0x60
>>>> ? xen_swiotlb_sync_single_for_cpu+0x3d/0x140
>>>> ? mlx4_en_process_rx_cq+0x6b6/0x1110 [mlx4_en]
>>>> ? mlx4_en_poll_rx_cq+0x64/0x100 [mlx4_en]
>>>> ? net_rx_action+0x151/0x4a0
>>>> ? __do_softirq+0xed/0x55b
>>>> ? irq_exit+0xea/0x100
>>>> ? xen_evtchn_do_upcall+0x2c/0x40
>>>> ? xen_do_hypervisor_callback+0x29/0x40
>>>> </IRQ>
>>>> ? xen_hypercall_domctl+0xa/0x20
>>>> ? xen_hypercall_domctl+0x8/0x20
>>>> ? privcmd_ioctl+0x221/0x990 [xen_privcmd]
>>>> ? do_vfs_ioctl+0xa5/0x6f0
>>>> ? ksys_ioctl+0x60/0x90
>>>> ? trace_hardirqs_off_thunk+0x1a/0x20
>>>> ? __x64_sys_ioctl+0x16/0x20
>>>> ? do_syscall_64+0x62/0x250
>>>> ? entry_SYSCALL_64_after_hwframe+0x49/0xbe
>>>>
>>>> Fix that by testing preempt_count() before calling cond_resched().
>>>>
>>>> In kernel 5.8 this can't happen any more due to the entry code rework.
>>>>
>>>> Reported-by: Sarah Newman <srn@prgmr.com>
>>>> Fixes: 0fa2f5cb2b0ecd8 ("sched/preempt, xen: Use need_resched() instead of should_resched()")
>>>> Cc: Sarah Newman <srn@prgmr.com>
>>>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>>> ---
>>>>    drivers/xen/preempt.c | 2 +-
>>>>    1 file changed, 1 insertion(+), 1 deletion(-)
>>>>
>>>> diff --git a/drivers/xen/preempt.c b/drivers/xen/preempt.c
>>>> index 17240c5325a3..6ad87b5c95ed 100644
>>>> --- a/drivers/xen/preempt.c
>>>> +++ b/drivers/xen/preempt.c
>>>> @@ -27,7 +27,7 @@ EXPORT_SYMBOL_GPL(xen_in_preemptible_hcall);
>>>>    asmlinkage __visible void xen_maybe_preempt_hcall(void)
>>>>    {
>>>>    	if (unlikely(__this_cpu_read(xen_in_preemptible_hcall)
>>>> -		     && need_resched())) {
>>>> +		     && need_resched() && !preempt_count())) {
>>>
>>> Doesn't this have the at least latent risk of hiding issues in
>>> other call trees (by no longer triggering logging like the one
>>> that has prompted this change)? Wouldn't it be better to save,
>>> clear, and restore the flag in one of xen_evtchn_do_upcall() or
>>> xen_do_hypervisor_callback()?
>>
>> First regarding "risk of hiding issues": it seems as if lots of kernels
>> aren't even configured to trigger this logging. It would need
>> CONFIG_DEBUG_ATOMIC_SLEEP to be enabled and at least SUSE kernels don't
>> seem to have it on. I suspect the occasional xen_mc_flush() failures we
>> have seen are related to this problem.
>>
>> And in theory saving, clearing and restoring the flag would be fine, but
>> it can't be done in a single function with the code flow as it is up to 5.7.
>> What would need to be done is to save and clear the flag in e.g.
>> __xen_evtchn_do_upcall() and to pass it to xen_maybe_preempt_hcall() as
>> a parameter. In xen_maybe_preempt_hcall() the passed flag value would
>> need to be used for the decision whether to call cond_resched() and then
>> the flag could be restored (after the cond_resched() call).
> 
> I'm afraid I don't follow: If __xen_evtchn_do_upcall() cleared the flag,
> xen_maybe_preempt_hcall() would amount to a no-op (up until the
> flag's prior value would get restored), wouldn't it? No need to pass
> anything into there.

The problem arises after __xen_evtchn_do_upcall() has restored the flag.
As soon as irq_exit() is called (either by xen_evtchn_do_upcall()
or by the caller of xen_hvm_evtchn_do_upcall()), softirq handling might
be executed, resulting in another hypercall, which might then be
preempted. This is exactly the case which happened in the original
report by Sarah.


Juergen



From xen-devel-bounces@lists.xenproject.org Fri Jul 10 12:11:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jul 2020 12:11:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jtrrz-0002dQ-4u; Fri, 10 Jul 2020 12:11:11 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=AY6w=AV=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jtrry-0002cu-7M
 for xen-devel@lists.xenproject.org; Fri, 10 Jul 2020 12:11:10 +0000
X-Inumbo-ID: 70136788-c2a6-11ea-8496-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 70136788-c2a6-11ea-8496-bc764e2007e4;
 Fri, 10 Jul 2020 12:11:09 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id BC9DBAD65;
 Fri, 10 Jul 2020 12:11:08 +0000 (UTC)
Subject: Re: Followup of yesterday's design session "refactoring the REST"
To: Jan Beulich <jbeulich@suse.com>
References: <a578fb24-cc6a-b3bd-b83d-3f7b9b1302cf@pfupf.net>
 <f0367a9e-fdfb-1bf6-d569-f380349f9dd8@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <0d666a2d-25c9-edf2-5eb3-3dcbff827c61@suse.com>
Date: Fri, 10 Jul 2020 14:11:08 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.9.0
MIME-Version: 1.0
In-Reply-To: <f0367a9e-fdfb-1bf6-d569-f380349f9dd8@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 09.07.20 11:22, Jan Beulich wrote:
> On 09.07.2020 07:56, Jürgen Groß wrote:
>> Yesterday's design session at Xen Developer Summit "Hypervisor Team: .."
>> had one topic regarding whether we should find specific maintainers of
>> all the files currently assigned to "THE REST" in order to lower the
>> number of reviews for those assigned to be "THE REST" maintainers.
>>
>> Modifying the MAINTAINERS file adding "REST@x.y" as REST maintainer
>> and running the rune:
>>
>> git ls-files | while true; do f=`line`; [ "$f" = "" ] && exit; \
>> echo $f `./scripts/get_maintainer.pl -f $f | awk '{print $(NF)}'`; \
>> done | awk '/REST/ { print $1}'
>>
>> shows that basically the following files are covered by "THE REST":
>>
>> - files directly in /
>> - config/
>> - most files in docs/ (not docs/man/)
>> - misc/ (only one file)
>> - scripts/
>> - lots of files in xen/common/
>> - xen/crypto/
>> - lots of files in xen/drivers/
>> - lots of files in xen/include/
>> - xen/scripts/
>> - some files in xen/tools/
>>
>> I have attached the file list.
>>
>> So the basic idea of having a "hypervisor REST" and a "tools REST"
>> wouldn't make a huge difference if we don't assign docs/ to "tools
>> REST".
>>
>> So I think it would make sense to:
>>
>> - look through the docs/ and xen/include/ files whether some of those
>>     can be assigned to a component already having dedicated maintainers
>>
>> - try to find maintainers for the other files, especially those in
>>     xen/common/ and xen/drivers/ (including the related include files, of
>>     course)
> 
> At least for files in xen/common/ I think it was really intentional
> that they - as core hypervisor files - fall under THE REST. We could

Depends on the files. Those files are under xen/common/ as they are not
architecture dependent. I agree that many of those files are core
hypervisor files, but OTOH e.g. common/sched/core.c has dedicated
maintainers in spite of being a core file.

I don't think files like xen/common/gdbstub.c or xen/common/xenoprof.c
fall in this category, so revisiting the file list would surely be a
good idea.

> of course have a "Core Hypervisor" (or so) group, which would already
> ...
> 
>> - if any of the REST maintainers doesn't want to receive mails for a
>>     group of the remaining REST files split the REST maintainers/files up
>>     accordingly
> 
> ... allow moving some into this direction.

Yes.

> For files under xen/drivers/ not currently covered by other entries it
> may indeed be (more) feasible to find individual maintainers.

I agree.


Juergen


From xen-devel-bounces@lists.xenproject.org Fri Jul 10 12:15:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jul 2020 12:15:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jtrwI-0002mP-Mw; Fri, 10 Jul 2020 12:15:38 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=AY6w=AV=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jtrwH-0002mK-0P
 for xen-devel@lists.xenproject.org; Fri, 10 Jul 2020 12:15:37 +0000
X-Inumbo-ID: 0e4fa903-c2a7-11ea-8f9c-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0e4fa903-c2a7-11ea-8f9c-12813bfff9fa;
 Fri, 10 Jul 2020 12:15:35 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 7AD0DADE2;
 Fri, 10 Jul 2020 12:15:34 +0000 (UTC)
Subject: Re: [PATCH] xen/xenbus: Fix a double free in xenbus_map_ring_pv()
To: Dan Carpenter <dan.carpenter@oracle.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>
References: <20200710113610.GA92345@mwanda>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <3434e219-216f-ba50-c001-35a066d20db4@suse.com>
Date: Fri, 10 Jul 2020 14:15:33 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.9.0
MIME-Version: 1.0
In-Reply-To: <20200710113610.GA92345@mwanda>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Yan Yankovskyi <yyankovskyi@gmail.com>,
 Stefano Stabellini <sstabellini@kernel.org>, kernel-janitors@vger.kernel.org,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 10.07.20 13:36, Dan Carpenter wrote:
> When there is an error, the caller frees "info->node", so the free here
> will result in a double free.  We should just delete the first kfree().
> 
> Fixes: 3848e4e0a32a ("xen/xenbus: avoid large structs and arrays on the stack")
> Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>

Thanks for spotting this!

Reviewed-by: Juergen Gross <jgross@suse.com>


Juergen


From xen-devel-bounces@lists.xenproject.org Fri Jul 10 12:33:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jul 2020 12:33:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jtsDg-0004Q8-7Z; Fri, 10 Jul 2020 12:33:36 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=uxYN=AV=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jtsDe-0004Po-B4
 for xen-devel@lists.xenproject.org; Fri, 10 Jul 2020 12:33:34 +0000
X-Inumbo-ID: 8cad1595-c2a9-11ea-8fa5-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 8cad1595-c2a9-11ea-8fa5-12813bfff9fa;
 Fri, 10 Jul 2020 12:33:26 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=i3fPsAHAcX7LCIVF5motm9E7rhpPU70bJ6JJb4vWRRY=; b=AWuk4Gnu2N9pwpYXCs7xSMFVw
 ALwVOmIdS3L5v61ttizHNVYISAbEu85pS3uY6WxCmn63fpL7ZuiKSTKys+HrGTAorinZbmOJkSi/j
 L74L2J0YE6BobV2ouYLHZnw01nA8iI29s2Sl+fIIm8bsfnLm60Y0O3WlIsC+jLN/Q9DIM=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jtsDV-0003CQ-Tk; Fri, 10 Jul 2020 12:33:25 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jtsDV-0007xv-Lz; Fri, 10 Jul 2020 12:33:25 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jtsDV-00053f-LG; Fri, 10 Jul 2020 12:33:25 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151774-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 151774: tolerable FAIL
X-Osstest-Failures: xen-unstable:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
 xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=3fdc211b01b29f252166937238efe02d15cb5780
X-Osstest-Versions-That: xen=3fdc211b01b29f252166937238efe02d15cb5780
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 10 Jul 2020 12:33:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151774 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151774/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 10 debian-hvm-install fail pass in 151759

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 151759
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 151759
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 151759
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 151759
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 151759
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 151759
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 151759
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 151759
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop             fail like 151759
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  3fdc211b01b29f252166937238efe02d15cb5780
baseline version:
 xen                  3fdc211b01b29f252166937238efe02d15cb5780

Last test of basis   151774  2020-07-10 02:00:25 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Fri Jul 10 13:12:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jul 2020 13:12:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jtson-0007gs-JF; Fri, 10 Jul 2020 13:11:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=q6aM=AV=citrix.com=anthony.perard@srs-us1.protection.inumbo.net>)
 id 1jtsom-0007gn-Vj
 for xen-devel@lists.xenproject.org; Fri, 10 Jul 2020 13:11:57 +0000
X-Inumbo-ID: edb5fea0-c2ae-11ea-bca7-bc764e2007e4
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id edb5fea0-c2ae-11ea-bca7-bc764e2007e4;
 Fri, 10 Jul 2020 13:11:56 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1594386716;
 h=from:to:cc:subject:date:message-id:mime-version:
 content-transfer-encoding;
 bh=jaVFTTx2HNGKf3JgEbZLbjj1J2bUpC47ifE5uMvXHoA=;
 b=JtpcWXhtNaouckQzlF/J/ouHl9WETwBX7nnSqH3mpHH8LpirOzcAdQht
 qZG2l0ixzYgw9U0tw1iUnW11lnQDgyc55Bk6smCG/GZTIWzPI1Vn4KnNv
 rmANPIFs3RSYLxiI9O1xqCidsCMUvyt2E9osGabrflMVBQjfXrH/1YuIl w=;
Authentication-Results: esa5.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: Kx/E3xb/iLQiWYXFukk0rCEoznEszOCzONcTv7LJ5UGbxDQ+Ev4/7IbcDUTP+bIzi7/wEt+zLk
 dxNZe0/FZZMJqWHfJBn+u7McrVUA3G3lDufYiy22nGJ7LrSm6/ChmcmxNClRQtbSNxcMP870az
 +XKD9wrO5VhaEA1X4Op/NROAEiLFVPt06dzZ6tJ4YYmd1Wnd0aRTR8uhD0SvaBSh1mMZUTJ0l1
 4Bm0uwOJeAiaz9QmecWPrkqZSrgTsSRzfvd/bw5iK90WFjLXYsL6ujMzOow4yxa95SRVbDC6pd
 YDc=
X-SBRS: 2.7
X-MesageID: 22265261
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,335,1589256000"; d="scan'208";a="22265261"
From: Anthony PERARD <anthony.perard@citrix.com>
To: <qemu-devel@nongnu.org>
Subject: [PULL 0/2] xen queue 2020-07-10
Date: Fri, 10 Jul 2020 14:11:43 +0100
Message-ID: <20200710131145.589476-1-anthony.perard@citrix.com>
X-Mailer: git-send-email 2.27.0
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Anthony PERARD <anthony.perard@citrix.com>,
 Peter Maydell <peter.maydell@linaro.org>, xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

The following changes since commit b6d7e9b66f59ca6ebc6e9b830cd5e7bf849d31cf:

  Merge remote-tracking branch 'remotes/stefanha/tags/tracing-pull-request' into staging (2020-07-10 09:01:28 +0100)

are available in the Git repository at:

  https://xenbits.xen.org/git-http/people/aperard/qemu-dm.git tags/pull-xen-20200710

for you to fetch changes up to dd29b5c30cd2a13f8c12376a8de84cb090c338bf:

  xen: cleanup unrealized flash devices (2020-07-10 13:49:16 +0100)

----------------------------------------------------------------
xen patches

Fixes for the hardening checks recently added to qdev.

----------------------------------------------------------------
Jason Andryuk (1):
      xen: Fix xen-legacy-backend qdev types

Paul Durrant (1):
      xen: cleanup unrealized flash devices

 hw/i386/pc_piix.c           | 9 ++++++---
 hw/i386/pc_sysfw.c          | 2 +-
 hw/xen/xen-legacy-backend.c | 4 ++--
 include/hw/i386/pc.h        | 1 +
 4 files changed, 10 insertions(+), 6 deletions(-)


From xen-devel-bounces@lists.xenproject.org Fri Jul 10 13:12:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jul 2020 13:12:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jtsos-0007h6-RZ; Fri, 10 Jul 2020 13:12:02 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=q6aM=AV=citrix.com=anthony.perard@srs-us1.protection.inumbo.net>)
 id 1jtsor-0007gn-Ot
 for xen-devel@lists.xenproject.org; Fri, 10 Jul 2020 13:12:01 +0000
X-Inumbo-ID: efbb6e74-c2ae-11ea-bca7-bc764e2007e4
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id efbb6e74-c2ae-11ea-bca7-bc764e2007e4;
 Fri, 10 Jul 2020 13:11:59 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1594386719;
 h=from:to:cc:subject:date:message-id:in-reply-to:
 references:mime-version:content-transfer-encoding;
 bh=gKGoav32jy6D1wTYeXqZwznlDdJCPEmNvpiim4Sia2E=;
 b=S7x0ZNn4fmK0WxabjoSoAV+bAApqyJ/G+4/aTV4yqj9s+lHbYu46X36K
 1moR36Qc/arO6RUGUpkTSoyt1fHEEA0VzHTv/LH14M7fRGmAicgxpONhO
 q3rMGJJbhTG8czLqGpy0zH3nzxSFziPCUAp53eBZZtam9LrtJd1E++/sk 4=;
Authentication-Results: esa6.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: PAEOZAOKqwv1y16lNBHg3giB8y3OZJyWJXsMaUXbOJ9Hc7VtEC77P29THrOmz6qG2SZYDoxRFW
 h6F7PAcRbfX+CjoSrh/Dcrcg0W52hgNAOgRHBIyDsyAquBlvYLHbTovsY5g9Xyu0HfoY1KGqjt
 S/xoGRgnuesTHCeVHRu+qqLTObwKfNW5ojPbJ6FHA3VhlnEVL6IKsber/OW8VEX7HoFG7RfXxX
 cewNIK1QNbRl04dnyt/sDRmSFMBRNMK+9Rh9jieeXMwgvTXnpUP0iXxBCDlng4Tyvew5BFxzJA
 eYE=
X-SBRS: 2.7
X-MesageID: 22397957
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,335,1589256000"; d="scan'208";a="22397957"
From: Anthony PERARD <anthony.perard@citrix.com>
To: <qemu-devel@nongnu.org>
Subject: [PULL 1/2] xen: Fix xen-legacy-backend qdev types
Date: Fri, 10 Jul 2020 14:11:44 +0100
Message-ID: <20200710131145.589476-2-anthony.perard@citrix.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20200710131145.589476-1-anthony.perard@citrix.com>
References: <20200710131145.589476-1-anthony.perard@citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Anthony PERARD <anthony.perard@citrix.com>,
 Peter Maydell <peter.maydell@linaro.org>, Jason Andryuk <jandryuk@gmail.com>,
 xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Jason Andryuk <jandryuk@gmail.com>

xen-sysdev is a TYPE_SYS_BUS_DEVICE.  bus_type should not be changed so
that it can plug into the System bus.  Otherwise this assert triggers:
qemu-system-i386: hw/core/qdev.c:102: qdev_set_parent_bus: Assertion
`dc->bus_type && object_dynamic_cast(OBJECT(bus), dc->bus_type)'
failed.

TYPE_XENBACKEND attaches to TYPE_XENSYSBUS, so its class_init needs to
be set accordingly to attach the qdev.  Otherwise the following assert
triggers:
qemu-system-i386: hw/core/qdev.c:102: qdev_set_parent_bus: Assertion
`dc->bus_type && object_dynamic_cast(OBJECT(bus), dc->bus_type)'
failed.

TYPE_XENBACKEND is not a subclass of XEN_XENSYSDEV, so its parent
should just be TYPE_DEVICE.  Change that.

Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
Acked-by: Paul Durrant <pdurrant@amazon.com>
Fixes: 81cb05732efb ("qdev: Assert devices are plugged into a bus that can take them")
Message-Id: <20200624121939.10282-1-jandryuk@gmail.com>
Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
---
 hw/xen/xen-legacy-backend.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/hw/xen/xen-legacy-backend.c b/hw/xen/xen-legacy-backend.c
index 7d4b13351e06..083d8dc1b28b 100644
--- a/hw/xen/xen-legacy-backend.c
+++ b/hw/xen/xen-legacy-backend.c
@@ -789,11 +789,12 @@ static void xendev_class_init(ObjectClass *klass, void *data)
     set_bit(DEVICE_CATEGORY_MISC, dc->categories);
     /* xen-backend devices can be plugged/unplugged dynamically */
     dc->user_creatable = true;
+    dc->bus_type = TYPE_XENSYSBUS;
 }
 
 static const TypeInfo xendev_type_info = {
     .name          = TYPE_XENBACKEND,
-    .parent        = TYPE_XENSYSDEV,
+    .parent        = TYPE_DEVICE,
     .class_init    = xendev_class_init,
     .instance_size = sizeof(struct XenLegacyDevice),
 };
@@ -824,7 +825,6 @@ static void xen_sysdev_class_init(ObjectClass *klass, void *data)
     DeviceClass *dc = DEVICE_CLASS(klass);
 
     device_class_set_props(dc, xen_sysdev_properties);
-    dc->bus_type = TYPE_XENSYSBUS;
 }
 
 static const TypeInfo xensysdev_info = {
-- 
Anthony PERARD



From xen-devel-bounces@lists.xenproject.org Fri Jul 10 13:12:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jul 2020 13:12:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jtsoy-0007hV-3e; Fri, 10 Jul 2020 13:12:08 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=q6aM=AV=citrix.com=anthony.perard@srs-us1.protection.inumbo.net>)
 id 1jtsow-0007gn-Or
 for xen-devel@lists.xenproject.org; Fri, 10 Jul 2020 13:12:06 +0000
X-Inumbo-ID: f0d071f6-c2ae-11ea-bb8b-bc764e2007e4
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f0d071f6-c2ae-11ea-bb8b-bc764e2007e4;
 Fri, 10 Jul 2020 13:12:01 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1594386722;
 h=from:to:cc:subject:date:message-id:in-reply-to:
 references:mime-version:content-transfer-encoding;
 bh=Nmb6NQAhCzH4JjUSe1UKDKjxPSYUFa2abwapsmcYj28=;
 b=ftZ7E5IhWP/oGVoTzEuI6PpBNSeAVf40WnBEnIa21ZQMfcsdfca1jMSq
 3YYoHqzerGdJGcJfFCEbk7Xls6GILGXtdDtJJjunh5NSgFy8jktx3bee4
 62yVCcvs1mnO+aJkH63Bd8Gpo5VxTWxf3jwlTYtpwhnlhzrjADylXzFlW s=;
Authentication-Results: esa2.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: 6NtbgrUcwTzHphZC00mVAgYezMiyjPFVaIvLjrIxDt0uyB1mfUWbETZ8MXofdy7WWIP/f99xNK
 THx3E4q70CtUqwk7DEhBuLQ5EN+Vd5dShH2ZMpXq31yq04fTUSoQ+TEUzEZv3Hsok6uMchbEkG
 3B/9g1pXg+gnbkdS4YFQspoSpXHGkTe2erQGk6bGKF7GheyyiXAJwhhCVV86RYTlRexKmmmI9V
 4Ie535q++HE0g7oRpfQCaITPndOvHl5VrsgJIDHFdaIYc4rcoLKeqP1Dw2MdwYhpziBU1WOcFb
 /88=
X-SBRS: 2.7
X-MesageID: 22068708
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,335,1589256000"; d="scan'208";a="22068708"
From: Anthony PERARD <anthony.perard@citrix.com>
To: <qemu-devel@nongnu.org>
Subject: [PULL 2/2] xen: cleanup unrealized flash devices
Date: Fri, 10 Jul 2020 14:11:45 +0100
Message-ID: <20200710131145.589476-3-anthony.perard@citrix.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20200710131145.589476-1-anthony.perard@citrix.com>
References: <20200710131145.589476-1-anthony.perard@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Anthony PERARD <anthony.perard@citrix.com>,
 Peter Maydell <peter.maydell@linaro.org>, Paul Durrant <pdurrant@amazon.com>,
 xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Paul Durrant <pdurrant@amazon.com>

The generic pc_machine_initfn() calls pc_system_flash_create() which creates
'system.flash0' and 'system.flash1' devices. These devices are then realized
by pc_system_flash_map() which is called from pc_system_firmware_init() which
itself is called via pc_memory_init(). The latter however is not called when
xen_enabled() is true and hence the following assertion fails:

qemu-system-i386: hw/core/qdev.c:439: qdev_assert_realized_properly:
Assertion `dev->realized' failed

These flash devices are unneeded when using Xen so this patch avoids the
assertion by simply removing them using pc_system_flash_cleanup_unused().

Reported-by: Jason Andryuk <jandryuk@gmail.com>
Fixes: ebc29e1beab0 ("pc: Support firmware configuration with -blockdev")
Signed-off-by: Paul Durrant <pdurrant@amazon.com>
Tested-by: Jason Andryuk <jandryuk@gmail.com>
Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20200624121841.17971-3-paul@xen.org>
Fixes: dfe8c79c4468 ("qdev: Assert onboard devices all get realized properly")
Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
---
 hw/i386/pc_piix.c    | 9 ++++++---
 hw/i386/pc_sysfw.c   | 2 +-
 include/hw/i386/pc.h | 1 +
 3 files changed, 8 insertions(+), 4 deletions(-)

diff --git a/hw/i386/pc_piix.c b/hw/i386/pc_piix.c
index 2bb42a814144..3469b1fd1072 100644
--- a/hw/i386/pc_piix.c
+++ b/hw/i386/pc_piix.c
@@ -186,9 +186,12 @@ static void pc_init1(MachineState *machine,
     if (!xen_enabled()) {
         pc_memory_init(pcms, system_memory,
                        rom_memory, &ram_memory);
-    } else if (machine->kernel_filename != NULL) {
-        /* For xen HVM direct kernel boot, load linux here */
-        xen_load_linux(pcms);
+    } else {
+        pc_system_flash_cleanup_unused(pcms);
+        if (machine->kernel_filename != NULL) {
+            /* For xen HVM direct kernel boot, load linux here */
+            xen_load_linux(pcms);
+        }
     }
 
     gsi_state = pc_gsi_create(&x86ms->gsi, pcmc->pci_enabled);
diff --git a/hw/i386/pc_sysfw.c b/hw/i386/pc_sysfw.c
index ec2a3b3e7eff..0ff47a4b5915 100644
--- a/hw/i386/pc_sysfw.c
+++ b/hw/i386/pc_sysfw.c
@@ -108,7 +108,7 @@ void pc_system_flash_create(PCMachineState *pcms)
     }
 }
 
-static void pc_system_flash_cleanup_unused(PCMachineState *pcms)
+void pc_system_flash_cleanup_unused(PCMachineState *pcms)
 {
     char *prop_name;
     int i;
diff --git a/include/hw/i386/pc.h b/include/hw/i386/pc.h
index a802e699749a..3d7ed3a55e30 100644
--- a/include/hw/i386/pc.h
+++ b/include/hw/i386/pc.h
@@ -186,6 +186,7 @@ ISADevice *pc_find_fdc0(void);
 
 /* pc_sysfw.c */
 void pc_system_flash_create(PCMachineState *pcms);
+void pc_system_flash_cleanup_unused(PCMachineState *pcms);
 void pc_system_firmware_init(PCMachineState *pcms, MemoryRegion *rom_memory);
 
 /* acpi-build.c */
-- 
Anthony PERARD



From xen-devel-bounces@lists.xenproject.org Fri Jul 10 13:27:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jul 2020 13:27:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jtt3x-0000OX-G4; Fri, 10 Jul 2020 13:27:37 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Va4v=AV=kernel.org=ardb@srs-us1.protection.inumbo.net>)
 id 1jtt3w-0000OS-A6
 for xen-devel@lists.xenproject.org; Fri, 10 Jul 2020 13:27:36 +0000
X-Inumbo-ID: 1ddb7158-c2b1-11ea-8496-bc764e2007e4
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1ddb7158-c2b1-11ea-8496-bc764e2007e4;
 Fri, 10 Jul 2020 13:27:35 +0000 (UTC)
Received: from mail-oi1-f169.google.com (mail-oi1-f169.google.com
 [209.85.167.169])
 (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 0E59A20836
 for <xen-devel@lists.xenproject.org>; Fri, 10 Jul 2020 13:27:35 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
 s=default; t=1594387655;
 bh=mNW++VsEqP3S7PL2ngGMIOCMSfxmahBdnZ67NyJxUvU=;
 h=References:In-Reply-To:From:Date:Subject:To:Cc:From;
 b=Y29NVJdkXAxsgk2xI3t1jiVzttBFvTNx8HCPIKiN8ZM7ftE9wB6zTnP1BMb0MkiHU
 QendZh5SWBTBNOnp2LBOwt7y9i89bCxloiL/H5BpHgUTMz/HsVGIVwSr+QGFlWIjcr
 y7jk4l2N4ARxgyR+SRTlUkcRkdfqRggSEE0L6X1I=
Received: by mail-oi1-f169.google.com with SMTP id l63so4739158oih.13
 for <xen-devel@lists.xenproject.org>; Fri, 10 Jul 2020 06:27:35 -0700 (PDT)
X-Gm-Message-State: AOAM533WdHvvYdKDgRtmdrxPskkqJ1zkSp7Hzq76kYWkxDI3uhzBkBCj
 oBT7j1Chlv7l1U5liEyISN51RFahMdH1g6Yd+aE=
X-Google-Smtp-Source: ABdhPJx27k85cRD5Br4iASr1hfkX2ZBE1Hh91r9eyyCurmoSEz5t5x80h7mauiVvJR80iRujQZjgdIQ5i6DKyNJJWKM=
X-Received: by 2002:aca:d643:: with SMTP id n64mr4195802oig.33.1594387654399; 
 Fri, 10 Jul 2020 06:27:34 -0700 (PDT)
MIME-Version: 1.0
References: <20200610141052.13258-1-jgross@suse.com>
 <094be567-2c82-7d5b-e432-288286c6c3fb@suse.com>
 <CGME20200709091750eucas1p18003b0c8127600369485c62c1e587c22@eucas1p1.samsung.com>
 <ec21b883-dc5c-f3fe-e989-7fa13875a4c4@suse.com>
 <170e01b1-220d-5cb7-03b2-c70ed3ae58e4@samsung.com>
In-Reply-To: <170e01b1-220d-5cb7-03b2-c70ed3ae58e4@samsung.com>
From: Ard Biesheuvel <ardb@kernel.org>
Date: Fri, 10 Jul 2020 16:27:23 +0300
X-Gmail-Original-Message-ID: <CAMj1kXGE52Y6QQhGLU6r_9x6TVftZqfS7zyLCiDusZhV4tbhjg@mail.gmail.com>
Message-ID: <CAMj1kXGE52Y6QQhGLU6r_9x6TVftZqfS7zyLCiDusZhV4tbhjg@mail.gmail.com>
Subject: Re: [PATCH] efi: avoid error message when booting under Xen
To: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>,
 linux-fbdev@vger.kernel.org, linux-efi <linux-efi@vger.kernel.org>,
 Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
 dri-devel@lists.freedesktop.org, Peter Jones <pjones@redhat.com>,
 xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Fri, 10 Jul 2020 at 13:17, Bartlomiej Zolnierkiewicz
<b.zolnierkie@samsung.com> wrote:
>
>
> [ added EFI Maintainer & ML to Cc: ]
>
> Hi,
>
> On 7/9/20 11:17 AM, Jürgen Groß wrote:
> > On 28.06.20 10:50, Jürgen Groß wrote:
> >> Ping?
> >>
> >> On 10.06.20 16:10, Juergen Gross wrote:
> >>> efifb_probe() will issue an error message in case the kernel is booted
> >>> as Xen dom0 from UEFI as EFI_MEMMAP won't be set in this case. Avoid
> >>> that message by calling efi_mem_desc_lookup() only if EFI_PARAVIRT
> >>> isn't set.
> >>>

Why not test for EFI_MEMMAP instead of EFI_BOOT?


> >>> Fixes: 38ac0287b7f4 ("fbdev/efifb: Honour UEFI memory map attributes when mapping the FB")
> >>> Signed-off-by: Juergen Gross <jgross@suse.com>
> >>> ---
> >>>   drivers/video/fbdev/efifb.c | 2 +-
> >>>   1 file changed, 1 insertion(+), 1 deletion(-)
> >>>
> >>> diff --git a/drivers/video/fbdev/efifb.c b/drivers/video/fbdev/efifb.c
> >>> index 65491ae74808..f5eccd1373e9 100644
> >>> --- a/drivers/video/fbdev/efifb.c
> >>> +++ b/drivers/video/fbdev/efifb.c
> >>> @@ -453,7 +453,7 @@ static int efifb_probe(struct platform_device *dev)
> >>>       info->apertures->ranges[0].base = efifb_fix.smem_start;
> >>>       info->apertures->ranges[0].size = size_remap;
> >>> -    if (efi_enabled(EFI_BOOT) &&
> >>> +    if (efi_enabled(EFI_BOOT) && !efi_enabled(EFI_PARAVIRT) &&
> >>>           !efi_mem_desc_lookup(efifb_fix.smem_start, &md)) {
> >>>           if ((efifb_fix.smem_start + efifb_fix.smem_len) >
> >>>               (md.phys_addr + (md.num_pages << EFI_PAGE_SHIFT))) {
> >>>
> >>
> >
> > In case I see no reaction from the maintainer for another week I'll take
> > this patch through the Xen tree.
>
> From fbdev POV this change looks fine to me and I'm OK with merging it
> through Xen or EFI tree:
>
> Acked-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
>
> Best regards,
> --
> Bartlomiej Zolnierkiewicz
> Samsung R&D Institute Poland
> Samsung Electronics


From xen-devel-bounces@lists.xenproject.org Fri Jul 10 13:39:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jul 2020 13:39:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jttEs-0001Ig-I2; Fri, 10 Jul 2020 13:38:54 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=AY6w=AV=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jttEr-0001Ib-84
 for xen-devel@lists.xenproject.org; Fri, 10 Jul 2020 13:38:53 +0000
X-Inumbo-ID: b12de9bc-c2b2-11ea-8496-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b12de9bc-c2b2-11ea-8496-bc764e2007e4;
 Fri, 10 Jul 2020 13:38:52 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id EAE72AC9F;
 Fri, 10 Jul 2020 13:38:51 +0000 (UTC)
Subject: Re: [PATCH] efi: avoid error message when booting under Xen
To: Ard Biesheuvel <ardb@kernel.org>,
 Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
References: <20200610141052.13258-1-jgross@suse.com>
 <094be567-2c82-7d5b-e432-288286c6c3fb@suse.com>
 <CGME20200709091750eucas1p18003b0c8127600369485c62c1e587c22@eucas1p1.samsung.com>
 <ec21b883-dc5c-f3fe-e989-7fa13875a4c4@suse.com>
 <170e01b1-220d-5cb7-03b2-c70ed3ae58e4@samsung.com>
 <CAMj1kXGE52Y6QQhGLU6r_9x6TVftZqfS7zyLCiDusZhV4tbhjg@mail.gmail.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <b4e60a2f-e761-d9ad-88ad-fe041109c063@suse.com>
Date: Fri, 10 Jul 2020 15:38:49 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.9.0
MIME-Version: 1.0
In-Reply-To: <CAMj1kXGE52Y6QQhGLU6r_9x6TVftZqfS7zyLCiDusZhV4tbhjg@mail.gmail.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: linux-fbdev@vger.kernel.org, linux-efi <linux-efi@vger.kernel.org>,
 Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
 dri-devel@lists.freedesktop.org, Peter Jones <pjones@redhat.com>,
 xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 10.07.20 15:27, Ard Biesheuvel wrote:
> On Fri, 10 Jul 2020 at 13:17, Bartlomiej Zolnierkiewicz
> <b.zolnierkie@samsung.com> wrote:
>>
>>
>> [ added EFI Maintainer & ML to Cc: ]
>>
>> Hi,
>>
>> On 7/9/20 11:17 AM, Jürgen Groß wrote:
>>> On 28.06.20 10:50, Jürgen Groß wrote:
>>>> Ping?
>>>>
>>>> On 10.06.20 16:10, Juergen Gross wrote:
>>>>> efifb_probe() will issue an error message in case the kernel is booted
>>>>> as Xen dom0 from UEFI as EFI_MEMMAP won't be set in this case. Avoid
>>>>> that message by calling efi_mem_desc_lookup() only if EFI_PARAVIRT
>>>>> isn't set.
>>>>>
> 
> Why not test for EFI_MEMMAP instead of EFI_BOOT?

Honestly I'm not sure EFI_BOOT is always set in that case. If you tell
me it is fine to just replace the test to check for EFI_MEMMAP I'm fine
to modify my patch.


Juergen


From xen-devel-bounces@lists.xenproject.org Fri Jul 10 13:41:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jul 2020 13:41:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jttHd-00023h-0C; Fri, 10 Jul 2020 13:41:45 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Va4v=AV=kernel.org=ardb@srs-us1.protection.inumbo.net>)
 id 1jttHb-00023c-Qn
 for xen-devel@lists.xenproject.org; Fri, 10 Jul 2020 13:41:43 +0000
X-Inumbo-ID: 17032a4a-c2b3-11ea-8fbd-12813bfff9fa
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 17032a4a-c2b3-11ea-8fbd-12813bfff9fa;
 Fri, 10 Jul 2020 13:41:43 +0000 (UTC)
Received: from mail-ot1-f49.google.com (mail-ot1-f49.google.com
 [209.85.210.49])
 (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 8C0A1207BB
 for <xen-devel@lists.xenproject.org>; Fri, 10 Jul 2020 13:41:42 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
 s=default; t=1594388502;
 bh=i75CBXIgGhcZUa4Q5eVv78+LmWfKj2lmU+kA47/adfU=;
 h=References:In-Reply-To:From:Date:Subject:To:Cc:From;
 b=BVP2jY9BJ54jjApBEPD0xvCJ3d7zT7Oyt4yCdw6oxKWIRmgnSMcGpIU28L0X3wCxh
 bXzut1P37bZxexmGGjWs8ERCJodOW/vggL3EuRUxtOy4TZBDtPsFf8/2Lxaz6AIvYX
 2BbnS0Pdbqa9ra4kz6hbuXtlqtqQ4nbR47BzrjBc=
Received: by mail-ot1-f49.google.com with SMTP id t18so4207536otq.5
 for <xen-devel@lists.xenproject.org>; Fri, 10 Jul 2020 06:41:42 -0700 (PDT)
X-Gm-Message-State: AOAM532p+OjODEF83P1KAIyhH+vOKQtDk9tTRzFDqdpKTGDA3TokXtU5
 7O6jlbtxd8NdNcj+Wyyp6cqLipAZ3s9oWu/y8ao=
X-Google-Smtp-Source: ABdhPJzMDks2bKtBSSU06I1kMxmshqTzgvxqZ+s5CEUL5PYeC51lRTbaxITZFvhimohokw08oLmR71FMFMUIrqKF7Zk=
X-Received: by 2002:a9d:7553:: with SMTP id b19mr11274563otl.77.1594388501943; 
 Fri, 10 Jul 2020 06:41:41 -0700 (PDT)
MIME-Version: 1.0
References: <20200610141052.13258-1-jgross@suse.com>
 <094be567-2c82-7d5b-e432-288286c6c3fb@suse.com>
 <CGME20200709091750eucas1p18003b0c8127600369485c62c1e587c22@eucas1p1.samsung.com>
 <ec21b883-dc5c-f3fe-e989-7fa13875a4c4@suse.com>
 <170e01b1-220d-5cb7-03b2-c70ed3ae58e4@samsung.com>
 <CAMj1kXGE52Y6QQhGLU6r_9x6TVftZqfS7zyLCiDusZhV4tbhjg@mail.gmail.com>
 <b4e60a2f-e761-d9ad-88ad-fe041109c063@suse.com>
In-Reply-To: <b4e60a2f-e761-d9ad-88ad-fe041109c063@suse.com>
From: Ard Biesheuvel <ardb@kernel.org>
Date: Fri, 10 Jul 2020 16:41:30 +0300
X-Gmail-Original-Message-ID: <CAMj1kXGsAsOiBsbhT9TXNBsjba=GXHegYbGOpaVFR0vZ8w3+bw@mail.gmail.com>
Message-ID: <CAMj1kXGsAsOiBsbhT9TXNBsjba=GXHegYbGOpaVFR0vZ8w3+bw@mail.gmail.com>
Subject: Re: [PATCH] efi: avoid error message when booting under Xen
To: Jürgen Groß <jgross@suse.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: linux-fbdev@vger.kernel.org, linux-efi <linux-efi@vger.kernel.org>,
 Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>,
 Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
 dri-devel@lists.freedesktop.org, Peter Jones <pjones@redhat.com>,
 xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Fri, 10 Jul 2020 at 16:38, Jürgen Groß <jgross@suse.com> wrote:
>
> On 10.07.20 15:27, Ard Biesheuvel wrote:
> > On Fri, 10 Jul 2020 at 13:17, Bartlomiej Zolnierkiewicz
> > <b.zolnierkie@samsung.com> wrote:
> >>
> >>
> >> [ added EFI Maintainer & ML to Cc: ]
> >>
> >> Hi,
> >>
> >>> On 7/9/20 11:17 AM, Jürgen Groß wrote:
> >>>> On 28.06.20 10:50, Jürgen Groß wrote:
> >>>> Ping?
> >>>>
> >>>> On 10.06.20 16:10, Juergen Gross wrote:
> >>>>> efifb_probe() will issue an error message in case the kernel is booted
> >>>>> as Xen dom0 from UEFI as EFI_MEMMAP won't be set in this case. Avoid
> >>>>> that message by calling efi_mem_desc_lookup() only if EFI_PARAVIRT
> >>>>> isn't set.
> >>>>>
> >
> > Why not test for EFI_MEMMAP instead of EFI_BOOT?
>
> Honestly, I'm not sure EFI_BOOT is always set in that case. If you tell
> me it is fine to just replace the test to check for EFI_MEMMAP, I'm
> happy to modify my patch.
>


Yes please


From xen-devel-bounces@lists.xenproject.org Fri Jul 10 14:25:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jul 2020 14:25:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jttxD-0005TA-Iy; Fri, 10 Jul 2020 14:24:43 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=AY6w=AV=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jttxB-0005T5-Np
 for xen-devel@lists.xenproject.org; Fri, 10 Jul 2020 14:24:41 +0000
X-Inumbo-ID: 16a8d742-c2b9-11ea-bca7-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 16a8d742-c2b9-11ea-bca7-bc764e2007e4;
 Fri, 10 Jul 2020 14:24:39 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 34ECEAC9F;
 Fri, 10 Jul 2020 14:24:39 +0000 (UTC)
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org, linux-fbdev@vger.kernel.org,
 dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org,
 linux-efi@vger.kernel.org
Subject: [PATCH v2] efi: avoid error message when booting under Xen
Date: Fri, 10 Jul 2020 16:22:53 +0200
Message-Id: <20200710142253.28070-1-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>, Peter Jones <pjones@redhat.com>,
 Ard Biesheuvel <ardb@kernel.org>,
 Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

efifb_probe() will issue an error message in case the kernel is booted
as Xen dom0 from UEFI as EFI_MEMMAP won't be set in this case. Avoid
that message by calling efi_mem_desc_lookup() only if EFI_MEMMAP is set.

Fixes: 38ac0287b7f4 ("fbdev/efifb: Honour UEFI memory map attributes when mapping the FB")
Signed-off-by: Juergen Gross <jgross@suse.com>
---
 drivers/video/fbdev/efifb.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/video/fbdev/efifb.c b/drivers/video/fbdev/efifb.c
index 65491ae74808..e57c00824965 100644
--- a/drivers/video/fbdev/efifb.c
+++ b/drivers/video/fbdev/efifb.c
@@ -453,7 +453,7 @@ static int efifb_probe(struct platform_device *dev)
 	info->apertures->ranges[0].base = efifb_fix.smem_start;
 	info->apertures->ranges[0].size = size_remap;
 
-	if (efi_enabled(EFI_BOOT) &&
+	if (efi_enabled(EFI_MEMMAP) &&
 	    !efi_mem_desc_lookup(efifb_fix.smem_start, &md)) {
 		if ((efifb_fix.smem_start + efifb_fix.smem_len) >
 		    (md.phys_addr + (md.num_pages << EFI_PAGE_SHIFT))) {
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Fri Jul 10 14:27:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jul 2020 14:27:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jttzr-0005af-4Z; Fri, 10 Jul 2020 14:27:27 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Va4v=AV=kernel.org=ardb@srs-us1.protection.inumbo.net>)
 id 1jttzq-0005aa-1G
 for xen-devel@lists.xenproject.org; Fri, 10 Jul 2020 14:27:26 +0000
X-Inumbo-ID: 7995c356-c2b9-11ea-bb8b-bc764e2007e4
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7995c356-c2b9-11ea-bb8b-bc764e2007e4;
 Fri, 10 Jul 2020 14:27:25 +0000 (UTC)
Received: from mail-oo1-f50.google.com (mail-oo1-f50.google.com
 [209.85.161.50])
 (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id DFAEF207D0
 for <xen-devel@lists.xenproject.org>; Fri, 10 Jul 2020 14:27:24 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
 s=default; t=1594391245;
 bh=z5GS7bzW3vt89ODcDbxTwcqTDUKbrhtLbUgZN/aPJks=;
 h=References:In-Reply-To:From:Date:Subject:To:Cc:From;
 b=udoNZF8P0ADm4fqn4Qy5x9C2vHCV2p9r+Ns+aU2CoUVQCPkWqe/5EBJWiz/+wZ+cB
 ex+pS3L2z+DH2RCdK6ZdxCS5z+5yAfPJTi10+K61d8UR22uyLQCnmMaVnZGFU6P9C/
 uuafWvzXw3P/2xm2EvwMJq+xrmLFHHZ/dRnwr/yQ=
Received: by mail-oo1-f50.google.com with SMTP id a9so1030315oof.12
 for <xen-devel@lists.xenproject.org>; Fri, 10 Jul 2020 07:27:24 -0700 (PDT)
X-Gm-Message-State: AOAM532Lxn4znYrbVkwesVLtQXrQwbgrdfeK+bhcaunVq4HxoFdrMK4K
 3OUvAPkFmLIX5OCeUkMfW/TmorB3M1Vc7KV1wVE=
X-Google-Smtp-Source: ABdhPJz6+EdbHPDvR54y8i1lkJirNkR+7jQgNft2XWsMwOMmyjZqITES3V6nwxYUU7/dFIw3al47o4ejMuE4Cv4kjXk=
X-Received: by 2002:a4a:b34b:: with SMTP id n11mr59771293ooo.41.1594391244224; 
 Fri, 10 Jul 2020 07:27:24 -0700 (PDT)
MIME-Version: 1.0
References: <20200710142253.28070-1-jgross@suse.com>
In-Reply-To: <20200710142253.28070-1-jgross@suse.com>
From: Ard Biesheuvel <ardb@kernel.org>
Date: Fri, 10 Jul 2020 17:27:13 +0300
X-Gmail-Original-Message-ID: <CAMj1kXEdm8MrdWVLO0w_-LJLvpiUURHhazv4-B39L1Bbk8kqFw@mail.gmail.com>
Message-ID: <CAMj1kXEdm8MrdWVLO0w_-LJLvpiUURHhazv4-B39L1Bbk8kqFw@mail.gmail.com>
Subject: Re: [PATCH v2] efi: avoid error message when booting under Xen
To: Juergen Gross <jgross@suse.com>
Content-Type: text/plain; charset="UTF-8"
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: linux-fbdev@vger.kernel.org, linux-efi <linux-efi@vger.kernel.org>,
 Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>,
 Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
 dri-devel@lists.freedesktop.org, Peter Jones <pjones@redhat.com>,
 xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Fri, 10 Jul 2020 at 17:24, Juergen Gross <jgross@suse.com> wrote:
>
> efifb_probe() will issue an error message in case the kernel is booted
> as Xen dom0 from UEFI as EFI_MEMMAP won't be set in this case. Avoid
> that message by calling efi_mem_desc_lookup() only if EFI_MEMMAP is set.
>
> Fixes: 38ac0287b7f4 ("fbdev/efifb: Honour UEFI memory map attributes when mapping the FB")
> Signed-off-by: Juergen Gross <jgross@suse.com>

Acked-by: Ard Biesheuvel <ardb@kernel.org>

> ---
>  drivers/video/fbdev/efifb.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/drivers/video/fbdev/efifb.c b/drivers/video/fbdev/efifb.c
> index 65491ae74808..e57c00824965 100644
> --- a/drivers/video/fbdev/efifb.c
> +++ b/drivers/video/fbdev/efifb.c
> @@ -453,7 +453,7 @@ static int efifb_probe(struct platform_device *dev)
>         info->apertures->ranges[0].base = efifb_fix.smem_start;
>         info->apertures->ranges[0].size = size_remap;
>
> -       if (efi_enabled(EFI_BOOT) &&
> +       if (efi_enabled(EFI_MEMMAP) &&
>             !efi_mem_desc_lookup(efifb_fix.smem_start, &md)) {
>                 if ((efifb_fix.smem_start + efifb_fix.smem_len) >
>                     (md.phys_addr + (md.num_pages << EFI_PAGE_SHIFT))) {
> --
> 2.26.2
>


From xen-devel-bounces@lists.xenproject.org Fri Jul 10 15:22:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jul 2020 15:22:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jtuqc-0002C0-90; Fri, 10 Jul 2020 15:21:58 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=uxYN=AV=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jtuqb-0002Bv-ET
 for xen-devel@lists.xenproject.org; Fri, 10 Jul 2020 15:21:57 +0000
X-Inumbo-ID: 16077ca0-c2c1-11ea-8fdb-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 16077ca0-c2c1-11ea-8fdb-12813bfff9fa;
 Fri, 10 Jul 2020 15:21:54 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=7s5TOGEEKsK1Aibyd/2WQ9ZAJWwpf083ue44RzHRanw=; b=MgaWf/OTNcbBsE3XVf50/1Mzf
 7t4Afav0y6qyC1grG8sT15nqVUurAgRncUo01Mq94A5W4HuHQozaqQg2XCvXvMgzQzQFnidkOR9F4
 PupMzyloY/OUPlolGiokTtT6z7RJWNOFoVGyMCP+QuPsM2e/1rN00OZD5YCJfmsddOW9s=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jtuqX-0006Nx-Ov; Fri, 10 Jul 2020 15:21:53 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jtuqX-0004EI-D0; Fri, 10 Jul 2020 15:21:53 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jtuqX-0003Z5-CO; Fri, 10 Jul 2020 15:21:53 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151789-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xtf test] 151789: all pass - PUSHED
X-Osstest-Versions-This: xtf=f645a19115e666ce6401ca63b7d7388571463b55
X-Osstest-Versions-That: xtf=2dd14fbcf9d03fdc300491939aeac75d3eb9e05f
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 10 Jul 2020 15:21:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151789 xtf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151789/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 xtf                  f645a19115e666ce6401ca63b7d7388571463b55
baseline version:
 xtf                  2dd14fbcf9d03fdc300491939aeac75d3eb9e05f

Last test of basis   151707  2020-07-07 10:39:37 Z    3 days
Testing same since   151789  2020-07-10 11:12:38 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Julien Grall <jgrall@amazon.com>

jobs:
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-amd64-pvops                                            pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xtf.git
   2dd14fb..f645a19  f645a19115e666ce6401ca63b7d7388571463b55 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Fri Jul 10 15:43:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jul 2020 15:43:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jtvB0-0003tH-4w; Fri, 10 Jul 2020 15:43:02 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ITf7=AV=oracle.com=boris.ostrovsky@srs-us1.protection.inumbo.net>)
 id 1jtvAy-0003t7-Kw
 for xen-devel@lists.xenproject.org; Fri, 10 Jul 2020 15:43:00 +0000
X-Inumbo-ID: 0850d72a-c2c4-11ea-8496-bc764e2007e4
Received: from userp2130.oracle.com (unknown [156.151.31.86])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0850d72a-c2c4-11ea-8496-bc764e2007e4;
 Fri, 10 Jul 2020 15:43:00 +0000 (UTC)
Received: from pps.filterd (userp2130.oracle.com [127.0.0.1])
 by userp2130.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 06AFbq3q171854;
 Fri, 10 Jul 2020 15:42:57 GMT
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com;
 h=subject : to : cc :
 references : from : message-id : date : mime-version : in-reply-to :
 content-type : content-transfer-encoding; s=corp-2020-01-29;
 bh=j8HQYQpdityIQVFdqi/pkPwE4xbfgRcxzNndIXYiCmM=;
 b=r54TsDYmtMMZFRo44tAOMPRrN7olXBpD+ObqkKGAJ4esXvQXpsRYmRFUOLAyewxybH3Y
 1Lj17RKnLiLJGVdyeOijikvhUeXSAqqFuBTLBM7vYLage8RD8Ee24+NpVw8AatMbPUxc
 quXeDbBdSUPalf4uMGEeciMhQx3bEMveptSeUYDx0x0bYO7sxr8DkQwe0bbHpHTgDC7e
 CybtsbAFDrJelBSrYIHcd8iwJW9bZq3S8CSvaIr6jqiGSUgnqv+2oFBVSSAVQhxv+PMp
 duL0btrfuM1a2OAw1atDIrcgbAf9LCc570DlrmnHgP5/TxrkotBGkvqur+8dXIbV05+J gA== 
Received: from userp3030.oracle.com (userp3030.oracle.com [156.151.31.80])
 by userp2130.oracle.com with ESMTP id 325y0ar7b7-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=FAIL);
 Fri, 10 Jul 2020 15:42:57 +0000
Received: from pps.filterd (userp3030.oracle.com [127.0.0.1])
 by userp3030.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 06AFcPLA191194;
 Fri, 10 Jul 2020 15:40:57 GMT
Received: from aserv0121.oracle.com (aserv0121.oracle.com [141.146.126.235])
 by userp3030.oracle.com with ESMTP id 325k42n0mk-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Fri, 10 Jul 2020 15:40:56 +0000
Received: from abhmp0010.oracle.com (abhmp0010.oracle.com [141.146.116.16])
 by aserv0121.oracle.com (8.14.4/8.13.8) with ESMTP id 06AFesTa000539;
 Fri, 10 Jul 2020 15:40:54 GMT
Received: from [10.39.229.143] (/10.39.229.143)
 by default (Oracle Beehive Gateway v4.0)
 with ESMTP ; Fri, 10 Jul 2020 08:40:54 -0700
Subject: Re: [PATCH] xen/xenbus: Fix a double free in xenbus_map_ring_pv()
To: Jürgen Groß <jgross@suse.com>,
 Dan Carpenter <dan.carpenter@oracle.com>
References: <20200710113610.GA92345@mwanda>
 <3434e219-216f-ba50-c001-35a066d20db4@suse.com>
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Autocrypt: addr=boris.ostrovsky@oracle.com; keydata=
 xsFNBFH8CgsBEAC0KiOi9siOvlXatK2xX99e/J3OvApoYWjieVQ9232Eb7GzCWrItCzP8FUV
 PQg8rMsSd0OzIvvjbEAvaWLlbs8wa3MtVLysHY/DfqRK9Zvr/RgrsYC6ukOB7igy2PGqZd+M
 MDnSmVzik0sPvB6xPV7QyFsykEgpnHbvdZAUy/vyys8xgT0PVYR5hyvhyf6VIfGuvqIsvJw5
 C8+P71CHI+U/IhsKrLrsiYHpAhQkw+Zvyeml6XSi5w4LXDbF+3oholKYCkPwxmGdK8MUIdkM
 d7iYdKqiP4W6FKQou/lC3jvOceGupEoDV9botSWEIIlKdtm6C4GfL45RD8V4B9iy24JHPlom
 woVWc0xBZboQguhauQqrBFooHO3roEeM1pxXjLUbDtH4t3SAI3gt4dpSyT3EvzhyNQVVIxj2
 FXnIChrYxR6S0ijSqUKO0cAduenhBrpYbz9qFcB/GyxD+ZWY7OgQKHUZMWapx5bHGQ8bUZz2
 SfjZwK+GETGhfkvNMf6zXbZkDq4kKB/ywaKvVPodS1Poa44+B9sxbUp1jMfFtlOJ3AYB0WDS
 Op3d7F2ry20CIf1Ifh0nIxkQPkTX7aX5rI92oZeu5u038dHUu/dO2EcuCjl1eDMGm5PLHDSP
 0QUw5xzk1Y8MG1JQ56PtqReO33inBXG63yTIikJmUXFTw6lLJwARAQABzTNCb3JpcyBPc3Ry
 b3Zza3kgKFdvcmspIDxib3Jpcy5vc3Ryb3Zza3lAb3JhY2xlLmNvbT7CwXgEEwECACIFAlH8
 CgsCGwMGCwkIBwMCBhUIAgkKCwQWAgMBAh4BAheAAAoJEIredpCGysGyasEP/j5xApopUf4g
 9Fl3UxZuBx+oduuw3JHqgbGZ2siA3EA4bKwtKq8eT7ekpApn4c0HA8TWTDtgZtLSV5IdH+9z
 JimBDrhLkDI3Zsx2CafL4pMJvpUavhc5mEU8myp4dWCuIylHiWG65agvUeFZYK4P33fGqoaS
 VGx3tsQIAr7MsQxilMfRiTEoYH0WWthhE0YVQzV6kx4wj4yLGYPPBtFqnrapKKC8yFTpgjaK
 jImqWhU9CSUAXdNEs/oKVR1XlkDpMCFDl88vKAuJwugnixjbPFTVPyoC7+4Bm/FnL3iwlJVE
 qIGQRspt09r+datFzPqSbp5Fo/9m4JSvgtPp2X2+gIGgLPWp2ft1NXHHVWP19sPgEsEJXSr9
 tskM8ScxEkqAUuDs6+x/ISX8wa5Pvmo65drN+JWA8EqKOHQG6LUsUdJolFM2i4Z0k40BnFU/
 kjTARjrXW94LwokVy4x+ZYgImrnKWeKac6fMfMwH2aKpCQLlVxdO4qvJkv92SzZz4538az1T
 m+3ekJAimou89cXwXHCFb5WqJcyjDfdQF857vTn1z4qu7udYCuuV/4xDEhslUq1+GcNDjAhB
 nNYPzD+SvhWEsrjuXv+fDONdJtmLUpKs4Jtak3smGGhZsqpcNv8nQzUGDQZjuCSmDqW8vn2o
 hWwveNeRTkxh+2x1Qb3GT46uzsFNBFH8CgsBEADGC/yx5ctcLQlB9hbq7KNqCDyZNoYu1HAB
 Hal3MuxPfoGKObEktawQPQaSTB5vNlDxKihezLnlT/PKjcXC2R1OjSDinlu5XNGc6mnky03q
 yymUPyiMtWhBBftezTRxWRslPaFWlg/h/Y1iDuOcklhpr7K1h1jRPCrf1yIoxbIpDbffnuyz
 kuto4AahRvBU4Js4sU7f/btU+h+e0AcLVzIhTVPIz7PM+Gk2LNzZ3/on4dnEc/qd+ZZFlOQ4
 KDN/hPqlwA/YJsKzAPX51L6Vv344pqTm6Z0f9M7YALB/11FO2nBB7zw7HAUYqJeHutCwxm7i
 BDNt0g9fhviNcJzagqJ1R7aPjtjBoYvKkbwNu5sWDpQ4idnsnck4YT6ctzN4I+6lfkU8zMzC
 gM2R4qqUXmxFIS4Bee+gnJi0Pc3KcBYBZsDK44FtM//5Cp9DrxRQOh19kNHBlxkmEb8kL/pw
 XIDcEq8MXzPBbxwHKJ3QRWRe5jPNpf8HCjnZz0XyJV0/4M1JvOua7IZftOttQ6KnM4m6WNIZ
 2ydg7dBhDa6iv1oKdL7wdp/rCulVWn8R7+3cRK95SnWiJ0qKDlMbIN8oGMhHdin8cSRYdmHK
 kTnvSGJNlkis5a+048o0C6jI3LozQYD/W9wq7MvgChgVQw1iEOB4u/3FXDEGulRVko6xCBU4
 SQARAQABwsFfBBgBAgAJBQJR/AoLAhsMAAoJEIredpCGysGyfvMQAIywR6jTqix6/fL0Ip8G
 jpt3uk//QNxGJE3ZkUNLX6N786vnEJvc1beCu6EwqD1ezG9fJKMl7F3SEgpYaiKEcHfoKGdh
 30B3Hsq44vOoxR6zxw2B/giADjhmWTP5tWQ9548N4VhIZMYQMQCkdqaueSL+8asp8tBNP+TJ
 PAIIANYvJaD8xA7sYUXGTzOXDh2THWSvmEWWmzok8er/u6ZKdS1YmZkUy8cfzrll/9hiGCTj
 u3qcaOM6i/m4hqtvsI1cOORMVwjJF4+IkC5ZBoeRs/xW5zIBdSUoC8L+OCyj5JETWTt40+lu
 qoqAF/AEGsNZTrwHJYu9rbHH260C0KYCNqmxDdcROUqIzJdzDKOrDmebkEVnxVeLJBIhYZUd
 t3Iq9hdjpU50TA6sQ3mZxzBdfRgg+vaj2DsJqI5Xla9QGKD+xNT6v14cZuIMZzO7w0DoojM4
 ByrabFsOQxGvE0w9Dch2BDSI2Xyk1zjPKxG1VNBQVx3flH37QDWpL2zlJikW29Ws86PHdthh
 Fm5PY8YtX576DchSP6qJC57/eAAe/9ztZdVAdesQwGb9hZHJc75B+VNm4xrh/PJO6c1THqdQ
 19WVJ+7rDx3PhVncGlbAOiiiE3NOFPJ1OQYxPKtpBUukAlOTnkKE6QcA4zckFepUkfmBV1wM
 Jg6OxFYd01z+a+oL
Message-ID: <0c55ff06-4129-4e25-449a-2b310eca39ba@oracle.com>
Date: Fri, 10 Jul 2020 11:40:53 -0400
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <3434e219-216f-ba50-c001-35a066d20db4@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
Content-Language: en-US
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9678
 signatures=668680
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 malwarescore=0
 suspectscore=2
 mlxlogscore=999 mlxscore=0 spamscore=0 phishscore=0 bulkscore=0
 adultscore=0 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2006250000 definitions=main-2007100106
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9678
 signatures=668680
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 mlxscore=0
 clxscore=1011
 lowpriorityscore=0 impostorscore=0 malwarescore=0 bulkscore=0 phishscore=0
 adultscore=0 suspectscore=2 mlxlogscore=999 priorityscore=1501 spamscore=0
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2006250000
 definitions=main-2007100106
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Yan Yankovskyi <yyankovskyi@gmail.com>,
 Stefano Stabellini <sstabellini@kernel.org>, kernel-janitors@vger.kernel.org,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 7/10/20 8:15 AM, Jürgen Groß wrote:
> On 10.07.20 13:36, Dan Carpenter wrote:
>> When there is an error the caller frees "info->node" so the free here
>> will result in a double free.  We should just delete the first kfree().
>>
>> Fixes: 3848e4e0a32a ("xen/xenbus: avoid large structs and arrays on
>> the stack")
>> Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
>
> Thanks for spotting this!
>
> Reviewed-by: Juergen Gross <jgross@suse.com>


Applied to for-linus-5.8b


-boris




From xen-devel-bounces@lists.xenproject.org Fri Jul 10 17:23:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jul 2020 17:23:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jtwkA-0004CC-2n; Fri, 10 Jul 2020 17:23:26 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=idFW=AV=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1jtwk8-0004C4-7U
 for xen-devel@lists.xenproject.org; Fri, 10 Jul 2020 17:23:24 +0000
X-Inumbo-ID: 0ea6f0e2-c2d2-11ea-8ff8-12813bfff9fa
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0ea6f0e2-c2d2-11ea-8ff8-12813bfff9fa;
 Fri, 10 Jul 2020 17:23:23 +0000 (UTC)
Received: from localhost (c-67-164-102-47.hsd1.ca.comcast.net [67.164.102.47])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256
 bits)) (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 74D2520725;
 Fri, 10 Jul 2020 17:23:22 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
 s=default; t=1594401803;
 bh=rAuE9jeGeeqGhOCjOngLXVOcRpO0n8j8ZAoZml36l+k=;
 h=Date:From:To:cc:Subject:In-Reply-To:References:From;
 b=GCGvc7A2swdSgZQM9VLG6v6Isa3U81e3H+SQmUSOjoXi7vktetME438dMXRJ34H42
 aHNMQA5n4X6fIhkjfedVLzGsZTRHDfLQzo6Rgey4siWO2M2zMc4v1OOqc5/8LkhvZ0
 kmIReJDrRl+BKA5/P1OJYrcEUZELDKqQOgt+LAAk=
Date: Fri, 10 Jul 2020 10:23:22 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: "Michael S. Tsirkin" <mst@redhat.com>
Subject: Re: [PATCH] xen: introduce xen_vring_use_dma
In-Reply-To: <20200701172219-mutt-send-email-mst@kernel.org>
Message-ID: <alpine.DEB.2.21.2007101019340.4124@sstabellini-ThinkPad-T480s>
References: <20200624050355-mutt-send-email-mst@kernel.org>
 <alpine.DEB.2.21.2006241047010.8121@sstabellini-ThinkPad-T480s>
 <20200624163940-mutt-send-email-mst@kernel.org>
 <alpine.DEB.2.21.2006241351430.8121@sstabellini-ThinkPad-T480s>
 <20200624181026-mutt-send-email-mst@kernel.org>
 <alpine.DEB.2.21.2006251014230.8121@sstabellini-ThinkPad-T480s>
 <20200626110629-mutt-send-email-mst@kernel.org>
 <alpine.DEB.2.21.2006291621300.8121@sstabellini-ThinkPad-T480s>
 <20200701133456.GA23888@infradead.org>
 <alpine.DEB.2.21.2007011020320.8121@sstabellini-ThinkPad-T480s>
 <20200701172219-mutt-send-email-mst@kernel.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: jgross@suse.com, Peng Fan <peng.fan@nxp.com>,
 Stefano Stabellini <sstabellini@kernel.org>, konrad.wilk@oracle.com,
 jasowang@redhat.com, x86@kernel.org, linux-kernel@vger.kernel.org,
 virtualization@lists.linux-foundation.org,
 Christoph Hellwig <hch@infradead.org>, iommu@lists.linux-foundation.org,
 linux-imx@nxp.com, xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com,
 linux-arm-kernel@lists.infradead.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Sorry for the late reply -- a couple of conferences kept me busy.


On Wed, 1 Jul 2020, Michael S. Tsirkin wrote:
> On Wed, Jul 01, 2020 at 10:34:53AM -0700, Stefano Stabellini wrote:
> > Would you be in favor of a more flexible check along the lines of the
> > one proposed in the patch that started this thread:
> > 
> >     if (xen_vring_use_dma())
> >             return true;
> > 
> > 
> > xen_vring_use_dma would be implemented so that it returns true when
> > xen_swiotlb is required and false otherwise.
> 
> Just to stress - with a patch like this virtio can *still* use DMA API
> if PLATFORM_ACCESS is set. So if DMA API is broken on some platforms
> as you seem to be saying, you guys should fix it before doing something
> like this..

Yes, the DMA API is broken with some interfaces (specifically: rpmsg and
trusty), but for them PLATFORM_ACCESS is never set. That is why the
errors weren't reported before. Xen special case aside, there is no
problem under normal circumstances.


If you are OK with this patch (after a bit of clean-up), Peng, could you
send an update, or would you like me to?


From xen-devel-bounces@lists.xenproject.org Fri Jul 10 18:17:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jul 2020 18:17:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jtxaD-0008N7-9I; Fri, 10 Jul 2020 18:17:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=rLy0=AV=amazon.com=prvs=4539a4144=anchalag@srs-us1.protection.inumbo.net>)
 id 1jtxaB-0008My-GB
 for xen-devel@lists.xenproject.org; Fri, 10 Jul 2020 18:17:11 +0000
X-Inumbo-ID: 92097502-c2d9-11ea-bca7-bc764e2007e4
Received: from smtp-fw-9101.amazon.com (unknown [207.171.184.25])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 92097502-c2d9-11ea-bca7-bc764e2007e4;
 Fri, 10 Jul 2020 18:17:10 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=amazon.com; i=@amazon.com; q=dns/txt; s=amazon201209;
 t=1594405031; x=1625941031;
 h=from:to:subject:date:message-id:references:in-reply-to:
 content-id:content-transfer-encoding:mime-version;
 bh=joWt3KR9L675nqN6Bcu4u09143359QD2X+bsh9y3Nvo=;
 b=n9SphVKE3a/YXLPBhxoj1oYS41yX+cGq1fYJCD6jEmMJytKFDDmoRfqR
 MdaOTol8hoB4KTkI3D6bto75cyISQcwTlBZYtRE+Ymd4huIEZ8rAjaaXy
 kDiZlDemPi1viOgjAH5TJGL+LnYdhuT77IrEDwEEoPrsmziazpuT3KnLY M=;
IronPort-SDR: UR91taBo6wMI9fsybWeyflevqTm4zIXRLwJEvpFt4kjWyx23bneS910mlSJDHbN9Xyn797xxCe
 mpgDSFyA3dkw==
X-IronPort-AV: E=Sophos;i="5.75,336,1589241600"; d="scan'208";a="50693043"
Received: from sea32-co-svc-lb4-vlan3.sea.corp.amazon.com (HELO
 email-inbound-relay-2a-e7be2041.us-west-2.amazon.com) ([10.47.23.38])
 by smtp-border-fw-out-9101.sea19.amazon.com with ESMTP;
 10 Jul 2020 18:17:08 +0000
Received: from EX13MTAUWB001.ant.amazon.com
 (pdx4-ws-svc-p6-lb7-vlan2.pdx.amazon.com [10.170.41.162])
 by email-inbound-relay-2a-e7be2041.us-west-2.amazon.com (Postfix) with ESMTPS
 id 66DC4A223F; Fri, 10 Jul 2020 18:17:06 +0000 (UTC)
Received: from EX13D10UWB002.ant.amazon.com (10.43.161.130) by
 EX13MTAUWB001.ant.amazon.com (10.43.161.249) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Fri, 10 Jul 2020 18:17:05 +0000
Received: from EX13D07UWB001.ant.amazon.com (10.43.161.238) by
 EX13D10UWB002.ant.amazon.com (10.43.161.130) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Fri, 10 Jul 2020 18:17:05 +0000
Received: from EX13D07UWB001.ant.amazon.com ([10.43.161.238]) by
 EX13D07UWB001.ant.amazon.com ([10.43.161.238]) with mapi id 15.00.1497.006;
 Fri, 10 Jul 2020 18:17:05 +0000
From: "Agarwal, Anchal" <anchalag@amazon.com>
To: "tglx@linutronix.de" <tglx@linutronix.de>, "mingo@redhat.com"
 <mingo@redhat.com>, "bp@alien8.de" <bp@alien8.de>, "hpa@zytor.com"
 <hpa@zytor.com>, "x86@kernel.org" <x86@kernel.org>,
 "boris.ostrovsky@oracle.com" <boris.ostrovsky@oracle.com>, "jgross@suse.com"
 <jgross@suse.com>, "linux-pm@vger.kernel.org" <linux-pm@vger.kernel.org>,
 "linux-mm@kvack.org" <linux-mm@kvack.org>, "Kamata, Munehisa"
 <kamatam@amazon.com>, "sstabellini@kernel.org" <sstabellini@kernel.org>,
 "konrad.wilk@oracle.com" <konrad.wilk@oracle.com>, "roger.pau@citrix.com"
 <roger.pau@citrix.com>, "axboe@kernel.dk" <axboe@kernel.dk>,
 "davem@davemloft.net" <davem@davemloft.net>, "rjw@rjwysocki.net"
 <rjw@rjwysocki.net>, "len.brown@intel.com" <len.brown@intel.com>,
 "pavel@ucw.cz" <pavel@ucw.cz>, "peterz@infradead.org" <peterz@infradead.org>, 
 "Valentin, Eduardo" <eduval@amazon.com>, "Singh, Balbir" <sblbir@amazon.com>, 
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 "vkuznets@redhat.com" <vkuznets@redhat.com>, "netdev@vger.kernel.org"
 <netdev@vger.kernel.org>, "linux-kernel@vger.kernel.org"
 <linux-kernel@vger.kernel.org>, "Woodhouse, David" <dwmw@amazon.co.uk>,
 "benh@kernel.crashing.org" <benh@kernel.crashing.org>
Subject: Re: [PATCH v2 00/11] Fix PM hibernation in Xen guests
Thread-Topic: [PATCH v2 00/11] Fix PM hibernation in Xen guests
Thread-Index: AQHWUJhxoD1ejJnCNkKYcMl0G0oJ3qkAtheA
Date: Fri, 10 Jul 2020 18:17:05 +0000
Message-ID: <324020A7-996F-4CF8-A2F4-46957CEA5F0C@amazon.com>
References: <cover.1593665947.git.anchalag@amazon.com>
In-Reply-To: <cover.1593665947.git.anchalag@amazon.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-ms-exchange-messagesentrepresentingtype: 1
x-ms-exchange-transport-fromentityheader: Hosted
x-originating-ip: [10.43.162.248]
Content-Type: text/plain; charset="utf-8"
Content-ID: <781BFA7803E4E748A2A73C4D4BD0B730@amazon.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
Precedence: Bulk
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Gentle ping on this series.

--
Anchal

    Hello,
    This series fixes PM hibernation for HVM guests running on the Xen
    hypervisor. The running guest can now be hibernated and resumed
    successfully at a later time. The fixes for PM hibernation are added to
    the block and network device drivers, i.e. xen-blkfront and
    xen-netfront. Any other driver that needs to add S4 support, if not
    already present, can follow the same method of introducing
    freeze/thaw/restore callbacks.
    The patches have been tested against the upstream kernel and Xen 4.11.
    Large scale testing was also done on Xen-based Amazon EC2 instances.
    All this testing involved running a memory-exhausting workload in the
    background.

    Guest hibernation does not involve any support from the hypervisor,
    and this way the guest has complete control over its state.
    Infrastructure restrictions on saving guest state can be overcome by
    guest-initiated hibernation.

    These patches were sent out as an RFC before, and all the feedback has
    been incorporated. The last v1 can be found here:

    [v1]: https://lkml.org/lkml/2020/5/19/1312
    All comments and feedback from v1 have been incorporated in the v2
    series. Any comments/suggestions are welcome.

    Known issues:
    1. KASLR causes intermittent hibernation failures. The VM fails to
    resume and has to be restarted. I will investigate this issue
    separately; it shouldn't be a blocker for this patch series.
    2. During hibernation, I sometimes observed that freezing of tasks
    fails due to busy XFS workqueues [xfs-cil/xfs-sync]. This is also
    intermittent, maybe 1 out of 200 runs, and hibernation is aborted in
    this case. Re-trying hibernation may work. This is a known issue with
    hibernation and some filesystems like XFS; it has been discussed by
    the community for years without an effective resolution at this point.

    Testing how-to:
    ---------------
    1. Set up the Xen hypervisor on a physical machine [I used Ubuntu
    16.04 + upstream Xen 4.11].
    2. Bring up an HVM guest with a kernel compiled with the hibernation
    patches [I used Ubuntu 18.04 netboot bionic images and also Amazon
    Linux on-prem images].
    3. Create a swap file of size = RAM size.
    4. Update grub parameters and reboot.
    5. Trigger pm-hibernation from within the VM.

    Example:
    Set up a file-backed swap space. Swap file size >= total memory on the
    system:
    sudo dd if=/dev/zero of=/swap bs=$(( 1024 * 1024 )) count=4096 # 4096MiB
    sudo chmod 600 /swap
    sudo mkswap /swap
    sudo swapon /swap

    Update the resume device/resume offset in grub if using a swap file:
    resume=/dev/xvda1 resume_offset=200704 no_console_suspend=1

    Execute:
    --------
    sudo pm-hibernate
    OR
    echo disk > /sys/power/state && echo reboot > /sys/power/disk

    Compute resume offset code:
    "
    #!/usr/bin/env python3
    import array
    import fcntl
    import sys

    # swap file (first argument)
    f = open(sys.argv[1], 'r')
    buf = array.array('L', [0])

    # FIBMAP ioctl (0x01): map block 0 of the file to its on-disk block
    ret = fcntl.ioctl(f.fileno(), 0x01, buf)
    print(buf[0])
    "


    Aleksei Besogonov (1):
      PM / hibernate: update the resume offset on SNAPSHOT_SET_SWAP_AREA

    Anchal Agarwal (4):
      x86/xen: Introduce new function to map HYPERVISOR_shared_info on
        Resume
      x86/xen: save and restore steal clock during PM hibernation
      xen: Introduce wrapper for save/restore sched clock offset
      xen: Update sched clock offset to avoid system instability in
        hibernation

    Munehisa Kamata (5):
      xen/manage: keep track of the on-going suspend mode
      xenbus: add freeze/thaw/restore callbacks support
      x86/xen: add system core suspend and resume callbacks
      xen-blkfront: add callbacks for PM suspend and hibernation
      xen-netfront: add callbacks for PM suspend and hibernation

    Thomas Gleixner (1):
      genirq: Shutdown irq chips in suspend/resume during hibernation

     arch/x86/xen/enlighten_hvm.c      |   7 ++
     arch/x86/xen/suspend.c            |  53 ++++++++++++++
     arch/x86/xen/time.c               |  15 ++-
     arch/x86/xen/xen-ops.h            |   3 +
     drivers/block/xen-blkfront.c      | 122 ++++++++++++++++++++++++++++++-
     drivers/net/xen-netfront.c        |  98 +++++++++++++++++++++++-
     drivers/xen/events/events_base.c  |   1 +
     drivers/xen/manage.c              |  60 +++++++++++++++
     drivers/xen/xenbus/xenbus_probe.c |  96 +++++++++++++++++++----
     include/linux/irq.h               |   2 +
     include/xen/xen-ops.h             |   3 +
     include/xen/xenbus.h              |   3 +
     kernel/irq/chip.c                 |   2 +-
     kernel/irq/internals.h            |   1 +
     kernel/irq/pm.c                   |  31 +++++---
     kernel/power/user.c               |   6 +-
     16 files changed, 470 insertions(+), 33 deletions(-)

    --
    2.20.1


From xen-devel-bounces@lists.xenproject.org Fri Jul 10 18:30:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jul 2020 18:30:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jtxn1-0001XS-Kj; Fri, 10 Jul 2020 18:30:27 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=uxYN=AV=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jtxn1-0001X8-3G
 for xen-devel@lists.xenproject.org; Fri, 10 Jul 2020 18:30:27 +0000
X-Inumbo-ID: 6885456a-c2db-11ea-b7bb-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6885456a-c2db-11ea-b7bb-bc764e2007e4;
 Fri, 10 Jul 2020 18:30:19 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=WW3qtZsgb6gNJP+Ag1jRYSMxRP5cphkBHaC4QIYg35E=; b=SoW7pyw4QKRqwO6IckFcxZKKU
 ZegHYdwTY/oS+MNrdyw7Dzp3lTo7pmUmy4eWREVCfquq7tgNWLtcNtghAv744HiwULa0bRN8LfnZt
 gVng54Gx+7VKIQUVzOAak0zpjjArYJvtX0VDqt7fXCjgW65Ci0/4FhdiSgdg/meygefPo=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jtxmt-00020V-8i; Fri, 10 Jul 2020 18:30:19 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jtxmt-0002n9-0r; Fri, 10 Jul 2020 18:30:19 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jtxmt-0007tg-0E; Fri, 10 Jul 2020 18:30:19 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151778-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 151778: regressions - FAIL
X-Osstest-Failures: qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-amd64-libvirt-vhd:debian-di-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-libvirt-xsm:guest-start:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qcow2:debian-di-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:redhat-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-i386-libvirt-pair:guest-start/debian:fail:regression
 qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-freebsd10-i386:guest-start:fail:regression
 qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:redhat-install:fail:regression
 qemu-mainline:test-amd64-amd64-libvirt:guest-start:fail:regression
 qemu-mainline:test-amd64-amd64-libvirt-xsm:guest-start:fail:regression
 qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-start:fail:regression
 qemu-mainline:test-amd64-i386-libvirt:guest-start:fail:regression
 qemu-mainline:test-armhf-armhf-xl-vhd:debian-di-install:fail:regression
 qemu-mainline:test-amd64-amd64-libvirt-pair:guest-start/debian:fail:regression
 qemu-mainline:test-armhf-armhf-libvirt:guest-start:fail:regression
 qemu-mainline:test-arm64-arm64-libvirt-xsm:guest-start:fail:regression
 qemu-mainline:test-armhf-armhf-libvirt-raw:debian-di-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: qemuu=aff2caf6b3fbab1062e117a47b66d27f7fd2f272
X-Osstest-Versions-That: qemuu=9e3903136d9acde2fb2dd9e967ba928050a6cb4a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 10 Jul 2020 18:30:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151778 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151778/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemuu-ovmf-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-win7-amd64 10 windows-install  fail REGR. vs. 151065
 test-amd64-amd64-libvirt-vhd 10 debian-di-install        fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-win7-amd64 10 windows-install   fail REGR. vs. 151065
 test-amd64-amd64-qemuu-nested-intel 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-libvirt-xsm  12 guest-start              fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-ws16-amd64 10 windows-install  fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-qemuu-nested-amd 10 debian-hvm-install  fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-ovmf-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qcow2    10 debian-di-install        fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-qemuu-rhel6hvm-amd 10 redhat-install     fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-debianhvm-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-ws16-amd64 10 windows-install   fail REGR. vs. 151065
 test-amd64-i386-libvirt-pair 21 guest-start/debian       fail REGR. vs. 151065
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-freebsd10-i386 11 guest-start            fail REGR. vs. 151065
 test-amd64-i386-qemuu-rhel6hvm-intel 10 redhat-install   fail REGR. vs. 151065
 test-amd64-amd64-libvirt     12 guest-start              fail REGR. vs. 151065
 test-amd64-amd64-libvirt-xsm 12 guest-start              fail REGR. vs. 151065
 test-amd64-i386-freebsd10-amd64 11 guest-start           fail REGR. vs. 151065
 test-amd64-i386-libvirt      12 guest-start              fail REGR. vs. 151065
 test-armhf-armhf-xl-vhd      10 debian-di-install        fail REGR. vs. 151065
 test-amd64-amd64-libvirt-pair 21 guest-start/debian      fail REGR. vs. 151065
 test-armhf-armhf-libvirt     12 guest-start              fail REGR. vs. 151065
 test-arm64-arm64-libvirt-xsm 12 guest-start              fail REGR. vs. 151065
 test-armhf-armhf-libvirt-raw 10 debian-di-install        fail REGR. vs. 151065

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                aff2caf6b3fbab1062e117a47b66d27f7fd2f272
baseline version:
 qemuu                9e3903136d9acde2fb2dd9e967ba928050a6cb4a

Last test of basis   151065  2020-06-12 22:27:51 Z   27 days
Failing since        151101  2020-06-14 08:32:51 Z   26 days   34 attempts
Testing same since   151778  2020-07-10 04:37:19 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ahmed Karaman <ahmedkhaledkaraman@gmail.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Bulekov <alxndr@bu.edu>
  Alexey Krasikov <alex-krasikov@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Allan Peramaki <aperamak@pp1.inet.fi>
  Andrew Jones <drjones@redhat.com>
  Ani Sinha <ani.sinha@nutanix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Ard Biesheuvel <ardb@kernel.org>
  Artyom Tarasenko <atar4qemu@gmail.com>
  Aurelien Jarno <aurelien@aurel32.net>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Beata Michalska <beata.michalska@linaro.org>
  Bin Meng <bin.meng@windriver.com>
  Catherine A. Frederick <chocola@animebitch.es>
  Cathy Zhang <cathy.zhang@intel.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christophe de Dinechin <dinechin@redhat.com>
  Cindy Lu <lulu@redhat.com>
  Claudio Fontana <cfontana@suse.de>
  Cleber Rosa <crosa@redhat.com>
  Colin Xu <colin.xu@intel.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniele Buono <dbuono@linux.vnet.ibm.com>
  David Carlier <devnexen@gmail.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Derek Su <dereksu@qnap.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Emilio G. Cota <cota@braap.org>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Evgeny Yakovlev <eyakovlev@virtuozzo.com>
  fangying <fangying1@huawei.com>
  Farhan Ali <alifm@linux.ibm.com>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Geoffrey McRae <geoff@hostfission.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Giuseppe Musacchio <thatlemon@gmail.com>
  Greg Kurz <groug@kaod.org>
  Guenter Roeck <linux@roeck-us.net>
  Gustavo Romero <gromero@linux.ibm.com>
  Halil Pasic <pasic@linux.ibm.com>
  Helge Deller <deller@gmx.de>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Ian Jiang <ianjiang.ict@gmail.com>
  Igor Mammedov <imammedo@redhat.com>
  Janne Grunau <j@jannau.net>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Wang <jasowang@redhat.com>
  Jay Zhou <jianjay.zhou@huawei.com>
  Jean-Christophe Dubois <jcd@tribudubois.net>
  Jessica Clarke <jrtc27@jrtc27.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jingqi Liu <jingqi.liu@intel.com>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Joseph Myers <joseph@codesourcery.com>
  Julio Faracco <jcfaracco@gmail.com>
  Keqian Zhu <zhukeqian1@huawei.com>
  Kevin Wolf <kwolf@redhat.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Leif Lindholm <leif@nuviainc.com>
  Leonid Bloch <lb.workbox@gmail.com>
  Leonid Bloch <lbloch@janustech.com>
  Li Qiang <liq3ea@gmail.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  lichun <lichun@ruijie.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  Like Xu <like.xu@linux.intel.com>
  Lingfeng Yang <lfy@google.com>
  Lingshan zhu <lingshan.zhu@intel.com>
  Liran Alon <liran.alon@oracle.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Lukas Straub <lukasstraub2@web.de>
  Maciej S. Szmigiero <maciej.szmigiero@oracle.com>
  Magnus Damm <magnus.damm@gmail.com>
  Mao Zhongyi <maozhongyi@cmss.chinamobile.com>
  Marcelo Tosatti <mtosatti@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Masahiro Yamada <masahiroy@kernel.org>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Maxime Coquelin <maxime.coquelin@redhat.com>
  Menno Lageman <menno.lageman@oracle.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michele Denber <denber@mindspring.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Radoslaw Biernacki <rad@semihalf.com>
  Raphael Norwitz <raphael.norwitz@nutanix.com>
  Richard Henderson <richard.henderson@linaro.org>
  Richard W.M. Jones <rjones@redhat.com>
  Riku Voipio <riku.voipio@linaro.org>
  Robert Foley <robert.foley@linaro.org>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Roman Kagan <rkagan@virtuozzo.com>
  Roman Kagan <rvkagan@yandex-team.ru>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Sebastian Rasmussen <sebras@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Tao Xu <tao3.xu@intel.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tiwei Bie <tiwei.bie@intel.com>
  Tong Ho <tong.ho@xilinx.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  WangBowen <bowen.wang@intel.com>
  Wei Huang <wei.huang2@amd.com>
  Wei Wang <wei.w.wang@intel.com>
  Ying Fang <fangying1@huawei.com>
  Yoshinori Sato <ysato@users.sourceforge.jp>
  Yuri Benditovich <yuri.benditovich@daynix.com>
  Zhang Chen <chen.zhang@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 20698 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Jul 10 19:30:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jul 2020 19:30:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jtyij-0007K9-Ad; Fri, 10 Jul 2020 19:30:05 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=uxYN=AV=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jtyih-0006pn-Us
 for xen-devel@lists.xenproject.org; Fri, 10 Jul 2020 19:30:04 +0000
X-Inumbo-ID: bcdf3f5a-c2e3-11ea-8496-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id bcdf3f5a-c2e3-11ea-8496-bc764e2007e4;
 Fri, 10 Jul 2020 19:29:57 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=rfv8npyf05H/6NWv7UsW1FNuZ4vF0Q1OCuldtKcuhoQ=; b=CtbA/dYsFuG2SPLRYljjBj+mi
 bFeMyz2mrbfiZlGr1uQG3JthjgU71ryrGVBtVrhZNprhEXjDD6axPOuLqy9MpbQ+8auD2MWCcFxdw
 x/P7HEc5fPxcZWWIyOb5FdvcLRqvEsD9PuwESKkZL+O4ep5LZyY5u3EuPU0DQu2Udbed8=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jtyia-00036M-S1; Fri, 10 Jul 2020 19:29:56 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jtyia-00044C-IO; Fri, 10 Jul 2020 19:29:56 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jtyia-0003ZG-HX; Fri, 10 Jul 2020 19:29:56 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151780-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 151780: regressions - FAIL
X-Osstest-Failures: linux-linus:test-arm64-arm64-libvirt-xsm:guest-start/debian.repeat:fail:regression
 linux-linus:build-i386-pvops:kernel-build:fail:regression
 linux-linus:test-armhf-armhf-libvirt-raw:guest-start:fail:regression
 linux-linus:test-amd64-i386-pair:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-examine:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
X-Osstest-Versions-This: linux=42f82040ee66db13525dc6f14b8559890b2f4c1c
X-Osstest-Versions-That: linux=1b5044021070efa3259f3e9548dc35d1eb6aa844
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 10 Jul 2020 19:29:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151780 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151780/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-libvirt-xsm 16 guest-start/debian.repeat fail REGR. vs. 151214
 build-i386-pvops              6 kernel-build             fail REGR. vs. 151214
 test-armhf-armhf-libvirt-raw 11 guest-start              fail REGR. vs. 151214

Tests which did not succeed, but are not blocking:
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-examine       1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemut-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl-qemut-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 151214
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 151214
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 151214
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 151214
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 151214
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass

version targeted for testing:
 linux                42f82040ee66db13525dc6f14b8559890b2f4c1c
baseline version:
 linux                1b5044021070efa3259f3e9548dc35d1eb6aa844

Last test of basis   151214  2020-06-18 02:27:46 Z   22 days
Failing since        151236  2020-06-19 19:10:35 Z   21 days   31 attempts
Testing same since   151780  2020-07-10 06:12:42 Z    0 days    1 attempts

------------------------------------------------------------
627 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             fail    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         blocked 
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 30552 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Jul 10 20:00:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jul 2020 20:00:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jtzBU-0000uy-1G; Fri, 10 Jul 2020 19:59:48 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=uxYN=AV=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jtzBS-0000ua-5V
 for xen-devel@lists.xenproject.org; Fri, 10 Jul 2020 19:59:46 +0000
X-Inumbo-ID: e346a8a0-c2e7-11ea-9026-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e346a8a0-c2e7-11ea-9026-12813bfff9fa;
 Fri, 10 Jul 2020 19:59:39 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=hBPh6ea2DZJ9CP+cW5lOl4aiXV1oyZLHVBr4oDd69hU=; b=0DkCGtROzzT5GafbpJH1sGEJI
 jxzi8chCKrkLfc3C0c0bkBmiFUbEpAROpxhdkzSiSzH7PAiy8Jk/LUIK2kcTJ/6TLU6X0QIZEdrRJ
 uPmHiuAQxWoA+8yzteb8k6bra3nq/CS2l2L+lLhoMbdyQNSWz8gb+Z+RxEyzo/bAL9p5U=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jtzBK-0003d2-SQ; Fri, 10 Jul 2020 19:59:38 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jtzBK-0004po-Iw; Fri, 10 Jul 2020 19:59:38 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jtzBK-0005UC-IK; Fri, 10 Jul 2020 19:59:38 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151802-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 151802: tolerable all pass - PUSHED
X-Osstest-Failures: xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=02d69864b51a4302a148c28d6d391238a6778b4b
X-Osstest-Versions-That: xen=3fdc211b01b29f252166937238efe02d15cb5780
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 10 Jul 2020 19:59:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151802 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151802/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  02d69864b51a4302a148c28d6d391238a6778b4b
baseline version:
 xen                  3fdc211b01b29f252166937238efe02d15cb5780

Last test of basis   151711  2020-07-07 13:13:16 Z    3 days
Testing same since   151802  2020-07-10 17:00:25 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Julien Grall <jgrall@amazon.com>
  Stefano Stabellini <sstabellini@kernel.org>
  Stefano Stabellini <stefano.stabellini@xilinx.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   3fdc211b01..02d69864b5  02d69864b51a4302a148c28d6d391238a6778b4b -> smoke


From xen-devel-bounces@lists.xenproject.org Fri Jul 10 22:34:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jul 2020 22:34:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ju1b3-0005vJ-HR; Fri, 10 Jul 2020 22:34:21 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=idFW=AV=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1ju1b2-0005vE-Hn
 for xen-devel@lists.xenproject.org; Fri, 10 Jul 2020 22:34:20 +0000
X-Inumbo-ID: 7e7dd7ca-c2fd-11ea-903c-12813bfff9fa
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7e7dd7ca-c2fd-11ea-903c-12813bfff9fa;
 Fri, 10 Jul 2020 22:34:19 +0000 (UTC)
Received: from localhost (c-67-164-102-47.hsd1.ca.comcast.net [67.164.102.47])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256
 bits)) (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 9CD5C20720;
 Fri, 10 Jul 2020 22:34:18 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
 s=default; t=1594420459;
 bh=30tMeSPdpSpzi9JMvJiPg87Z492wFVcJ870fYtV5zZQ=;
 h=Date:From:To:cc:Subject:From;
 b=Pw2KlUzZl4LpIy2xtWqNZk6KKD9pfLyg08tNZvDitK+/OGlAAkx2eGh0w/G6sD1pp
 qR2XHpnwwbJHysHRwhK/G42ZbVPzC3jOPtnW3QqHPQmSyltqJeX0y49vaJ8+J0YM7O
 xNtCHhTP/6SAsY0ESmmxCgCDbFBAdbmoyzak/Mpo=
Date: Fri, 10 Jul 2020 15:34:18 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: jgross@suse.com, boris.ostrovsky@oracle.com, konrad.wilk@oracle.com
Subject: [PATCH v3 00/11] fix swiotlb-xen for RPi4
Message-ID: <alpine.DEB.2.21.2007101521290.4124@sstabellini-ThinkPad-T480s>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: sstabellini@kernel.org, roman@zededa.com, linux-kernel@vger.kernel.org,
 hch@infradead.org, tamas@tklengyel.com, xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi all,

This series is a collection of fixes to get Linux running on the RPi4 as
dom0. Conceptually there are only two significant changes:

- make sure not to call virt_to_page on vmalloc virt addresses (patch
  #1)
- use phys_to_dma and dma_to_phys to translate phys to/from dma
  addresses (all other patches)


I addressed all comments by Christoph on v2 of the series except for
the one about merging the precursor "add struct device *" patches. I can
always merge them together later as needed.


Boris gave his Reviewed-by to the whole v2 series. I added his
Reviewed-by to all patches, including the ones with small cosmetic
fixes, except for patches #8, #9, and #10, because they are either new
or have changed significantly in this version of the series.

I retained Roman and Corey's Tested-by.


Cheers,

Stefano


git://git.kernel.org/pub/scm/linux/kernel/git/sstabellini/xen.git fix-rpi4-v3


Boris Ostrovsky (1):
      swiotlb-xen: use vmalloc_to_page on vmalloc virt addresses

Stefano Stabellini (10):
      swiotlb-xen: remove start_dma_addr
      swiotlb-xen: add struct device * parameter to xen_phys_to_bus
      swiotlb-xen: add struct device * parameter to xen_bus_to_phys
      swiotlb-xen: add struct device * parameter to xen_dma_sync_for_cpu
      swiotlb-xen: add struct device * parameter to xen_dma_sync_for_device
      swiotlb-xen: add struct device * parameter to is_xen_swiotlb_buffer
      swiotlb-xen: remove XEN_PFN_PHYS
      swiotlb-xen: introduce phys_to_dma/dma_to_phys translations
      xen/arm: introduce phys/dma translations in xen_dma_sync_for_*
      xen/arm: call dma_to_phys on the dma_addr_t parameter of dma_cache_maint

 arch/arm/xen/mm.c         |  34 +++++++++++++++----------------
 drivers/xen/swiotlb-xen.c | 119 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-------------------------------------------
 include/xen/page.h        |   1 -
 include/xen/swiotlb-xen.h |   8 ++++----
 4 files changed, 93 insertions(+), 69 deletions(-)


From xen-devel-bounces@lists.xenproject.org Fri Jul 10 22:34:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jul 2020 22:34:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ju1bD-0005wO-Pp; Fri, 10 Jul 2020 22:34:31 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=idFW=AV=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1ju1bC-0005wJ-FJ
 for xen-devel@lists.xenproject.org; Fri, 10 Jul 2020 22:34:30 +0000
X-Inumbo-ID: 84ba4f4c-c2fd-11ea-bb8b-bc764e2007e4
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 84ba4f4c-c2fd-11ea-bb8b-bc764e2007e4;
 Fri, 10 Jul 2020 22:34:30 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s.hsd1.ca.comcast.net
 (c-67-164-102-47.hsd1.ca.comcast.net [67.164.102.47])
 (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 48CB62075D;
 Fri, 10 Jul 2020 22:34:29 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
 s=default; t=1594420469;
 bh=3GwsLOtHN4zRKft6xlDRfAH+KgUHUTf+yyaVBSSdyn4=;
 h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
 b=jt2yh5WNYaKPWaJY8JQkbiswMPHMg7BCgVT5x2iJ3qNwf8+ekBcSJjbVzC8gz0G3r
 PNuGsVuGq0X296MIu4/hRFaRwZcNLrviXcJ+HjZ31g6Yg4pSVFCK5WlTb5HIX2kvj4
 Rqu3BmiwZNkKDZrBDxg3PWP5kl8dkREO6P7GkM/M=
From: Stefano Stabellini <sstabellini@kernel.org>
To: jgross@suse.com,
	boris.ostrovsky@oracle.com,
	konrad.wilk@oracle.com
Subject: [PATCH v3 02/11] swiotlb-xen: remove start_dma_addr
Date: Fri, 10 Jul 2020 15:34:18 -0700
Message-Id: <20200710223427.6897-2-sstabellini@kernel.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <alpine.DEB.2.21.2007101521290.4124@sstabellini-ThinkPad-T480s>
References: <alpine.DEB.2.21.2007101521290.4124@sstabellini-ThinkPad-T480s>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: hch@infradead.org, xen-devel@lists.xenproject.org, sstabellini@kernel.org,
 linux-kernel@vger.kernel.org,
 Stefano Stabellini <stefano.stabellini@xilinx.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Stefano Stabellini <stefano.stabellini@xilinx.com>

start_dma_addr is not strictly needed. Call virt_to_phys on
xen_io_tlb_start instead. It will be useful not to have start_dma_addr
around for the next patches.

Note that virt_to_phys is not the same as xen_virt_to_bus, but the value
is actually compared against __pa(xen_io_tlb_start) as passed to
swiotlb_init_with_tbl, so virt_to_phys is what we want here.

Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Tested-by: Corey Minyard <cminyard@mvista.com>
Tested-by: Roman Shaposhnik <roman@zededa.com>
---
Changes in v2:
- update commit message
---
 drivers/xen/swiotlb-xen.c | 7 ++-----
 1 file changed, 2 insertions(+), 5 deletions(-)

diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index 5fbadd07819b..89a775948a02 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -52,8 +52,6 @@ static unsigned long xen_io_tlb_nslabs;
  * Quick lookup value of the bus address of the IOTLB.
  */
 
-static u64 start_dma_addr;
-
 /*
  * Both of these functions should avoid XEN_PFN_PHYS because phys_addr_t
  * can be 32bit when dma_addr_t is 64bit leading to a loss in
@@ -241,7 +239,6 @@ int __ref xen_swiotlb_init(int verbose, bool early)
 		m_ret = XEN_SWIOTLB_EFIXUP;
 		goto error;
 	}
-	start_dma_addr = xen_virt_to_bus(xen_io_tlb_start);
 	if (early) {
 		if (swiotlb_init_with_tbl(xen_io_tlb_start, xen_io_tlb_nslabs,
 			 verbose))
@@ -392,8 +389,8 @@ static dma_addr_t xen_swiotlb_map_page(struct device *dev, struct page *page,
 	 */
 	trace_swiotlb_bounced(dev, dev_addr, size, swiotlb_force);
 
-	map = swiotlb_tbl_map_single(dev, start_dma_addr, phys,
-				     size, size, dir, attrs);
+	map = swiotlb_tbl_map_single(dev, virt_to_phys(xen_io_tlb_start),
+				     phys, size, size, dir, attrs);
 	if (map == (phys_addr_t)DMA_MAPPING_ERROR)
 		return DMA_MAPPING_ERROR;
 
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Fri Jul 10 22:34:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jul 2020 22:34:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ju1bF-0005wj-1j; Fri, 10 Jul 2020 22:34:33 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=idFW=AV=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1ju1bD-0005wR-Tn
 for xen-devel@lists.xenproject.org; Fri, 10 Jul 2020 22:34:31 +0000
X-Inumbo-ID: 847b9d42-c2fd-11ea-903c-12813bfff9fa
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 847b9d42-c2fd-11ea-903c-12813bfff9fa;
 Fri, 10 Jul 2020 22:34:29 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s.hsd1.ca.comcast.net
 (c-67-164-102-47.hsd1.ca.comcast.net [67.164.102.47])
 (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id D0AB420720;
 Fri, 10 Jul 2020 22:34:28 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
 s=default; t=1594420469;
 bh=N98sE0T1Wenp2xz8bU/9CGFuY0Q14hRpLDV1a8rPTfI=;
 h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
 b=xBZEktMj9lnVDi/7jRtBJyqzvM+VU07KDo+xAU1ggFMejrQ8eI7uVu68ssBJqz8/v
 jU0AhafpNHvkm+ms4tzTwIRAUwK4dasKLn3QWRlddk04NZE7OFbCS72Z+GL8Q0gOKr
 Bp3wgOG/f+MMbp/16nLGwlEHAFA3ZaldiMRJ5hP8=
From: Stefano Stabellini <sstabellini@kernel.org>
To: jgross@suse.com,
	boris.ostrovsky@oracle.com,
	konrad.wilk@oracle.com
Subject: [PATCH v3 01/11] swiotlb-xen: use vmalloc_to_page on vmalloc virt
 addresses
Date: Fri, 10 Jul 2020 15:34:17 -0700
Message-Id: <20200710223427.6897-1-sstabellini@kernel.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <alpine.DEB.2.21.2007101521290.4124@sstabellini-ThinkPad-T480s>
References: <alpine.DEB.2.21.2007101521290.4124@sstabellini-ThinkPad-T480s>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: hch@infradead.org, xen-devel@lists.xenproject.org, sstabellini@kernel.org,
 linux-kernel@vger.kernel.org,
 Stefano Stabellini <stefano.stabellini@xilinx.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Boris Ostrovsky <boris.ostrovsky@oracle.com>

xen_alloc_coherent_pages might return pages for which virt_to_phys and
virt_to_page don't work, e.g. ioremap'ed pages.

So in xen_swiotlb_free_coherent we can't assume that virt_to_page works.
Instead, add an is_vmalloc_addr check and use vmalloc_to_page on vmalloc
virt addresses.

This patch fixes the following crash at boot on RPi4 (the underlying
issue is not RPi4 specific):
https://marc.info/?l=xen-devel&m=158862573216800

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Tested-by: Corey Minyard <cminyard@mvista.com>
Tested-by: Roman Shaposhnik <roman@zededa.com>
---
Changes in v3:
- style improvements

Changes in v2:
- update commit message
---
 drivers/xen/swiotlb-xen.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index b6d27762c6f8..5fbadd07819b 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -335,6 +335,7 @@ xen_swiotlb_free_coherent(struct device *hwdev, size_t size, void *vaddr,
 	int order = get_order(size);
 	phys_addr_t phys;
 	u64 dma_mask = DMA_BIT_MASK(32);
+	struct page *page;
 
 	if (hwdev && hwdev->coherent_dma_mask)
 		dma_mask = hwdev->coherent_dma_mask;
@@ -346,9 +347,14 @@ xen_swiotlb_free_coherent(struct device *hwdev, size_t size, void *vaddr,
 	/* Convert the size to actually allocated. */
 	size = 1UL << (order + XEN_PAGE_SHIFT);
 
+	if (is_vmalloc_addr(vaddr))
+		page = vmalloc_to_page(vaddr);
+	else
+		page = virt_to_page(vaddr);
+
 	if (!WARN_ON((dev_addr + size - 1 > dma_mask) ||
 		     range_straddles_page_boundary(phys, size)) &&
-	    TestClearPageXenRemapped(virt_to_page(vaddr)))
+	    TestClearPageXenRemapped(page))
 		xen_destroy_contiguous_region(phys, order);
 
 	xen_free_coherent_pages(hwdev, size, vaddr, (dma_addr_t)phys, attrs);
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Fri Jul 10 22:34:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jul 2020 22:34:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ju1bF-0005wr-96; Fri, 10 Jul 2020 22:34:33 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=idFW=AV=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1ju1bE-0005wR-8W
 for xen-devel@lists.xenproject.org; Fri, 10 Jul 2020 22:34:32 +0000
X-Inumbo-ID: 8451828d-c2fd-11ea-903c-12813bfff9fa
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 8451828d-c2fd-11ea-903c-12813bfff9fa;
 Fri, 10 Jul 2020 22:34:31 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s.hsd1.ca.comcast.net
 (c-67-164-102-47.hsd1.ca.comcast.net [67.164.102.47])
 (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 75FCA207BB;
 Fri, 10 Jul 2020 22:34:30 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
 s=default; t=1594420470;
 bh=HgiPdFTfQq90v0qFVQCJWX2B++kBNShoCcFtOr7ILuk=;
 h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
 b=phL1a1HmIcF5IqHsARIBc9mSooTg4t/lbu/szXY/+4PhXlbLh/EXV1Y82x/OI9S9m
 PfJ0zM3h9v6KPmNL9qpxrZIycqVE+0H2/vQDY6n6j7N2e+o1NfgbR3z1M9k+3NlkQ+
 bhq96IoKnNBDNwDZZLRkhAXx4YUDXm9SKmyqiY3E=
From: Stefano Stabellini <sstabellini@kernel.org>
To: jgross@suse.com,
	boris.ostrovsky@oracle.com,
	konrad.wilk@oracle.com
Subject: [PATCH v3 03/11] swiotlb-xen: add struct device * parameter to
 xen_phys_to_bus
Date: Fri, 10 Jul 2020 15:34:19 -0700
Message-Id: <20200710223427.6897-3-sstabellini@kernel.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <alpine.DEB.2.21.2007101521290.4124@sstabellini-ThinkPad-T480s>
References: <alpine.DEB.2.21.2007101521290.4124@sstabellini-ThinkPad-T480s>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: hch@infradead.org, xen-devel@lists.xenproject.org, sstabellini@kernel.org,
 linux-kernel@vger.kernel.org,
 Stefano Stabellini <stefano.stabellini@xilinx.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Stefano Stabellini <stefano.stabellini@xilinx.com>

No functional changes. The parameter is unused in this patch but will be
used by subsequent patches.

Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Tested-by: Corey Minyard <cminyard@mvista.com>
Tested-by: Roman Shaposhnik <roman@zededa.com>
---
Changes in v3:
- add whitespace in title
- improve commit message
---
 drivers/xen/swiotlb-xen.c | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index 89a775948a02..dbe710a59bf2 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -57,7 +57,7 @@ static unsigned long xen_io_tlb_nslabs;
  * can be 32bit when dma_addr_t is 64bit leading to a loss in
  * information if the shift is done before casting to 64bit.
  */
-static inline dma_addr_t xen_phys_to_bus(phys_addr_t paddr)
+static inline dma_addr_t xen_phys_to_bus(struct device *dev, phys_addr_t paddr)
 {
 	unsigned long bfn = pfn_to_bfn(XEN_PFN_DOWN(paddr));
 	dma_addr_t dma = (dma_addr_t)bfn << XEN_PAGE_SHIFT;
@@ -78,9 +78,9 @@ static inline phys_addr_t xen_bus_to_phys(dma_addr_t baddr)
 	return paddr;
 }
 
-static inline dma_addr_t xen_virt_to_bus(void *address)
+static inline dma_addr_t xen_virt_to_bus(struct device *dev, void *address)
 {
-	return xen_phys_to_bus(virt_to_phys(address));
+	return xen_phys_to_bus(dev, virt_to_phys(address));
 }
 
 static inline int range_straddles_page_boundary(phys_addr_t p, size_t size)
@@ -309,7 +309,7 @@ xen_swiotlb_alloc_coherent(struct device *hwdev, size_t size,
 	 * Do not use virt_to_phys(ret) because on ARM it doesn't correspond
 	 * to *dma_handle. */
 	phys = *dma_handle;
-	dev_addr = xen_phys_to_bus(phys);
+	dev_addr = xen_phys_to_bus(hwdev, phys);
 	if (((dev_addr + size - 1 <= dma_mask)) &&
 	    !range_straddles_page_boundary(phys, size))
 		*dma_handle = dev_addr;
@@ -370,7 +370,7 @@ static dma_addr_t xen_swiotlb_map_page(struct device *dev, struct page *page,
 				unsigned long attrs)
 {
 	phys_addr_t map, phys = page_to_phys(page) + offset;
-	dma_addr_t dev_addr = xen_phys_to_bus(phys);
+	dma_addr_t dev_addr = xen_phys_to_bus(dev, phys);
 
 	BUG_ON(dir == DMA_NONE);
 	/*
@@ -395,7 +395,7 @@ static dma_addr_t xen_swiotlb_map_page(struct device *dev, struct page *page,
 		return DMA_MAPPING_ERROR;
 
 	phys = map;
-	dev_addr = xen_phys_to_bus(map);
+	dev_addr = xen_phys_to_bus(dev, map);
 
 	/*
 	 * Ensure that the address returned is DMA'ble
@@ -539,7 +539,7 @@ xen_swiotlb_sync_sg_for_device(struct device *dev, struct scatterlist *sgl,
 static int
 xen_swiotlb_dma_supported(struct device *hwdev, u64 mask)
 {
-	return xen_virt_to_bus(xen_io_tlb_end - 1) <= mask;
+	return xen_virt_to_bus(hwdev, xen_io_tlb_end - 1) <= mask;
 }
 
 const struct dma_map_ops xen_swiotlb_dma_ops = {
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Fri Jul 10 22:34:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jul 2020 22:34:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ju1bI-0005yX-HS; Fri, 10 Jul 2020 22:34:36 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=idFW=AV=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1ju1bH-0005wJ-Cg
 for xen-devel@lists.xenproject.org; Fri, 10 Jul 2020 22:34:35 +0000
X-Inumbo-ID: 85b4d26e-c2fd-11ea-bca7-bc764e2007e4
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 85b4d26e-c2fd-11ea-bca7-bc764e2007e4;
 Fri, 10 Jul 2020 22:34:31 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s.hsd1.ca.comcast.net
 (c-67-164-102-47.hsd1.ca.comcast.net [67.164.102.47])
 (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id E05EB207DF;
 Fri, 10 Jul 2020 22:34:30 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
 s=default; t=1594420471;
 bh=bb+w4hoTLC8vPpt7yxTmGZYiSiGXzA32wxBnynGozPY=;
 h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
 b=dfehD7AQAePQJ56EK6hgbwe6Jwhbv5sPi/IBt/TsLSyG92sE6vPwm14wGuknIP/2H
 25o7wKQ8GkOj3BfJ8E/5wD43xNqd1FCQrIREXWBizXP8ULoa4DPax2k33qTPmalqFE
 O1pRR5dDyGC10CJw7PD2/p8IQZbpUzLXhgRqKjB0=
From: Stefano Stabellini <sstabellini@kernel.org>
To: jgross@suse.com,
	boris.ostrovsky@oracle.com,
	konrad.wilk@oracle.com
Subject: [PATCH v3 04/11] swiotlb-xen: add struct device * parameter to
 xen_bus_to_phys
Date: Fri, 10 Jul 2020 15:34:20 -0700
Message-Id: <20200710223427.6897-4-sstabellini@kernel.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <alpine.DEB.2.21.2007101521290.4124@sstabellini-ThinkPad-T480s>
References: <alpine.DEB.2.21.2007101521290.4124@sstabellini-ThinkPad-T480s>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: hch@infradead.org, xen-devel@lists.xenproject.org, sstabellini@kernel.org,
 linux-kernel@vger.kernel.org,
 Stefano Stabellini <stefano.stabellini@xilinx.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Stefano Stabellini <stefano.stabellini@xilinx.com>

No functional changes. The parameter is unused in this patch but will be
used by subsequent patches.

Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Tested-by: Corey Minyard <cminyard@mvista.com>
Tested-by: Roman Shaposhnik <roman@zededa.com>
---
Changes in v3:
- add whitespace in title
- improve commit message
---
 drivers/xen/swiotlb-xen.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index dbe710a59bf2..a8e447137faf 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -67,7 +67,7 @@ static inline dma_addr_t xen_phys_to_bus(struct device *dev, phys_addr_t paddr)
 	return dma;
 }
 
-static inline phys_addr_t xen_bus_to_phys(dma_addr_t baddr)
+static inline phys_addr_t xen_bus_to_phys(struct device *dev, dma_addr_t baddr)
 {
 	unsigned long xen_pfn = bfn_to_pfn(XEN_PFN_DOWN(baddr));
 	dma_addr_t dma = (dma_addr_t)xen_pfn << XEN_PAGE_SHIFT;
@@ -339,7 +339,7 @@ xen_swiotlb_free_coherent(struct device *hwdev, size_t size, void *vaddr,
 
 	/* do not use virt_to_phys because on ARM it doesn't return you the
 	 * physical address */
-	phys = xen_bus_to_phys(dev_addr);
+	phys = xen_bus_to_phys(hwdev, dev_addr);
 
 	/* Convert the size to actually allocated. */
 	size = 1UL << (order + XEN_PAGE_SHIFT);
@@ -423,7 +423,7 @@ static dma_addr_t xen_swiotlb_map_page(struct device *dev, struct page *page,
 static void xen_swiotlb_unmap_page(struct device *hwdev, dma_addr_t dev_addr,
 		size_t size, enum dma_data_direction dir, unsigned long attrs)
 {
-	phys_addr_t paddr = xen_bus_to_phys(dev_addr);
+	phys_addr_t paddr = xen_bus_to_phys(hwdev, dev_addr);
 
 	BUG_ON(dir == DMA_NONE);
 
@@ -439,7 +439,7 @@ static void
 xen_swiotlb_sync_single_for_cpu(struct device *dev, dma_addr_t dma_addr,
 		size_t size, enum dma_data_direction dir)
 {
-	phys_addr_t paddr = xen_bus_to_phys(dma_addr);
+	phys_addr_t paddr = xen_bus_to_phys(dev, dma_addr);
 
 	if (!dev_is_dma_coherent(dev))
 		xen_dma_sync_for_cpu(dma_addr, paddr, size, dir);
@@ -452,7 +452,7 @@ static void
 xen_swiotlb_sync_single_for_device(struct device *dev, dma_addr_t dma_addr,
 		size_t size, enum dma_data_direction dir)
 {
-	phys_addr_t paddr = xen_bus_to_phys(dma_addr);
+	phys_addr_t paddr = xen_bus_to_phys(dev, dma_addr);
 
 	if (is_xen_swiotlb_buffer(dma_addr))
 		swiotlb_tbl_sync_single(dev, paddr, size, dir, SYNC_FOR_DEVICE);
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Fri Jul 10 22:34:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jul 2020 22:34:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ju1bK-0005zf-PW; Fri, 10 Jul 2020 22:34:38 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=idFW=AV=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1ju1bI-0005wR-S6
 for xen-devel@lists.xenproject.org; Fri, 10 Jul 2020 22:34:36 +0000
X-Inumbo-ID: 8683f058-c2fd-11ea-903c-12813bfff9fa
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 8683f058-c2fd-11ea-903c-12813bfff9fa;
 Fri, 10 Jul 2020 22:34:33 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s.hsd1.ca.comcast.net
 (c-67-164-102-47.hsd1.ca.comcast.net [67.164.102.47])
 (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 3DB1E207FB;
 Fri, 10 Jul 2020 22:34:32 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
 s=default; t=1594420472;
 bh=/MGI6bbpyfeW2PBoCKZ5Cz+tMk9tLtI9QoqYjLImzVc=;
 h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
 b=LgBdVYBgc9RgqvwP366WyyPlA8UgtCJ/fDBUVewj431Q3fqI3qEUCdwjfhqNZSC4q
 wQSN9eut6WxOj7lthfXxS+fEOCBgtpMtDbQokobURzuL/PwrI51K5viOHPCx9hRVP5
 735P3IzlseBdexGGPHvpyrwV7szvunail7KEBR60=
From: Stefano Stabellini <sstabellini@kernel.org>
To: jgross@suse.com,
	boris.ostrovsky@oracle.com,
	konrad.wilk@oracle.com
Subject: [PATCH v3 07/11] swiotlb-xen: add struct device * parameter to
 is_xen_swiotlb_buffer
Date: Fri, 10 Jul 2020 15:34:23 -0700
Message-Id: <20200710223427.6897-7-sstabellini@kernel.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <alpine.DEB.2.21.2007101521290.4124@sstabellini-ThinkPad-T480s>
References: <alpine.DEB.2.21.2007101521290.4124@sstabellini-ThinkPad-T480s>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: hch@infradead.org, xen-devel@lists.xenproject.org, sstabellini@kernel.org,
 linux-kernel@vger.kernel.org,
 Stefano Stabellini <stefano.stabellini@xilinx.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Stefano Stabellini <stefano.stabellini@xilinx.com>

No functional changes. The parameter is unused in this patch but will be
used by subsequent patches.

Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Tested-by: Corey Minyard <cminyard@mvista.com>
Tested-by: Roman Shaposhnik <roman@zededa.com>
---
Changes in v3:
- add whitespace in title
- improve commit message
---
 drivers/xen/swiotlb-xen.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index 8a3a7bcc5ec0..e2c35f45f91e 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -97,7 +97,7 @@ static inline int range_straddles_page_boundary(phys_addr_t p, size_t size)
 	return 0;
 }
 
-static int is_xen_swiotlb_buffer(dma_addr_t dma_addr)
+static int is_xen_swiotlb_buffer(struct device *dev, dma_addr_t dma_addr)
 {
 	unsigned long bfn = XEN_PFN_DOWN(dma_addr);
 	unsigned long xen_pfn = bfn_to_local_pfn(bfn);
@@ -431,7 +431,7 @@ static void xen_swiotlb_unmap_page(struct device *hwdev, dma_addr_t dev_addr,
 		xen_dma_sync_for_cpu(hwdev, dev_addr, paddr, size, dir);
 
 	/* NOTE: We use dev_addr here, not paddr! */
-	if (is_xen_swiotlb_buffer(dev_addr))
+	if (is_xen_swiotlb_buffer(hwdev, dev_addr))
 		swiotlb_tbl_unmap_single(hwdev, paddr, size, size, dir, attrs);
 }
 
@@ -444,7 +444,7 @@ xen_swiotlb_sync_single_for_cpu(struct device *dev, dma_addr_t dma_addr,
 	if (!dev_is_dma_coherent(dev))
 		xen_dma_sync_for_cpu(dev, dma_addr, paddr, size, dir);
 
-	if (is_xen_swiotlb_buffer(dma_addr))
+	if (is_xen_swiotlb_buffer(dev, dma_addr))
 		swiotlb_tbl_sync_single(dev, paddr, size, dir, SYNC_FOR_CPU);
 }
 
@@ -454,7 +454,7 @@ xen_swiotlb_sync_single_for_device(struct device *dev, dma_addr_t dma_addr,
 {
 	phys_addr_t paddr = xen_bus_to_phys(dev, dma_addr);
 
-	if (is_xen_swiotlb_buffer(dma_addr))
+	if (is_xen_swiotlb_buffer(dev, dma_addr))
 		swiotlb_tbl_sync_single(dev, paddr, size, dir, SYNC_FOR_DEVICE);
 
 	if (!dev_is_dma_coherent(dev))
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Fri Jul 10 22:34:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jul 2020 22:34:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ju1bN-00061J-7u; Fri, 10 Jul 2020 22:34:41 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=idFW=AV=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1ju1bM-0005wJ-Ct
 for xen-devel@lists.xenproject.org; Fri, 10 Jul 2020 22:34:40 +0000
X-Inumbo-ID: 85f912ee-c2fd-11ea-b7bb-bc764e2007e4
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 85f912ee-c2fd-11ea-b7bb-bc764e2007e4;
 Fri, 10 Jul 2020 22:34:32 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s.hsd1.ca.comcast.net
 (c-67-164-102-47.hsd1.ca.comcast.net [67.164.102.47])
 (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 5D3BB2088E;
 Fri, 10 Jul 2020 22:34:31 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
 s=default; t=1594420471;
 bh=Vs0xqtsJWIWR1q7n2x/kejMEtvYSj5EpVIj/uSYTM10=;
 h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
 b=IEfN+YoOhvU0/hM8GDqS8n1h5AfhunMYm2GQc6siAJRM0SWOS7Hl7yZCXjRCpO/ev
 3VLAOXEdk+xBD0On1LhX+YN09vwO5Hjl5QpGMH4phflyXdv0socTbjszzkegJEe48O
 WsYaPP0p6K8SyBbfj59+AuYV6o64CD7Qi3gLt+Jc=
From: Stefano Stabellini <sstabellini@kernel.org>
To: jgross@suse.com,
	boris.ostrovsky@oracle.com,
	konrad.wilk@oracle.com
Subject: [PATCH v3 05/11] swiotlb-xen: add struct device * parameter to
 xen_dma_sync_for_cpu
Date: Fri, 10 Jul 2020 15:34:21 -0700
Message-Id: <20200710223427.6897-5-sstabellini@kernel.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <alpine.DEB.2.21.2007101521290.4124@sstabellini-ThinkPad-T480s>
References: <alpine.DEB.2.21.2007101521290.4124@sstabellini-ThinkPad-T480s>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: hch@infradead.org, xen-devel@lists.xenproject.org, sstabellini@kernel.org,
 linux-kernel@vger.kernel.org,
 Stefano Stabellini <stefano.stabellini@xilinx.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Stefano Stabellini <stefano.stabellini@xilinx.com>

No functional changes. The parameter is unused in this patch but will be
used by subsequent patches.

Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Tested-by: Corey Minyard <cminyard@mvista.com>
Tested-by: Roman Shaposhnik <roman@zededa.com>
---
Changes in v3:
- add whitespace in title
- improve commit message
---
 arch/arm/xen/mm.c         | 5 +++--
 drivers/xen/swiotlb-xen.c | 4 ++--
 include/xen/swiotlb-xen.h | 5 +++--
 3 files changed, 8 insertions(+), 6 deletions(-)

diff --git a/arch/arm/xen/mm.c b/arch/arm/xen/mm.c
index d40e9e5fc52b..1a00e8003c64 100644
--- a/arch/arm/xen/mm.c
+++ b/arch/arm/xen/mm.c
@@ -71,8 +71,9 @@ static void dma_cache_maint(dma_addr_t handle, size_t size, u32 op)
  * pfn_valid returns true the pages is local and we can use the native
  * dma-direct functions, otherwise we call the Xen specific version.
  */
-void xen_dma_sync_for_cpu(dma_addr_t handle, phys_addr_t paddr, size_t size,
-		enum dma_data_direction dir)
+void xen_dma_sync_for_cpu(struct device *dev, dma_addr_t handle,
+			  phys_addr_t paddr, size_t size,
+			  enum dma_data_direction dir)
 {
 	if (pfn_valid(PFN_DOWN(handle)))
 		arch_sync_dma_for_cpu(paddr, size, dir);
diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index a8e447137faf..d04b7a15124f 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -428,7 +428,7 @@ static void xen_swiotlb_unmap_page(struct device *hwdev, dma_addr_t dev_addr,
 	BUG_ON(dir == DMA_NONE);
 
 	if (!dev_is_dma_coherent(hwdev) && !(attrs & DMA_ATTR_SKIP_CPU_SYNC))
-		xen_dma_sync_for_cpu(dev_addr, paddr, size, dir);
+		xen_dma_sync_for_cpu(hwdev, dev_addr, paddr, size, dir);
 
 	/* NOTE: We use dev_addr here, not paddr! */
 	if (is_xen_swiotlb_buffer(dev_addr))
@@ -442,7 +442,7 @@ xen_swiotlb_sync_single_for_cpu(struct device *dev, dma_addr_t dma_addr,
 	phys_addr_t paddr = xen_bus_to_phys(dev, dma_addr);
 
 	if (!dev_is_dma_coherent(dev))
-		xen_dma_sync_for_cpu(dma_addr, paddr, size, dir);
+		xen_dma_sync_for_cpu(dev, dma_addr, paddr, size, dir);
 
 	if (is_xen_swiotlb_buffer(dma_addr))
 		swiotlb_tbl_sync_single(dev, paddr, size, dir, SYNC_FOR_CPU);
diff --git a/include/xen/swiotlb-xen.h b/include/xen/swiotlb-xen.h
index ffc0d3902b71..f62d1854780b 100644
--- a/include/xen/swiotlb-xen.h
+++ b/include/xen/swiotlb-xen.h
@@ -4,8 +4,9 @@
 
 #include <linux/swiotlb.h>
 
-void xen_dma_sync_for_cpu(dma_addr_t handle, phys_addr_t paddr, size_t size,
-		enum dma_data_direction dir);
+void xen_dma_sync_for_cpu(struct device *dev, dma_addr_t handle,
+			  phys_addr_t paddr, size_t size,
+			  enum dma_data_direction dir);
 void xen_dma_sync_for_device(dma_addr_t handle, phys_addr_t paddr, size_t size,
 		enum dma_data_direction dir);
 
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Fri Jul 10 22:34:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jul 2020 22:34:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ju1bO-00062P-GO; Fri, 10 Jul 2020 22:34:42 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=idFW=AV=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1ju1bN-0005wR-S7
 for xen-devel@lists.xenproject.org; Fri, 10 Jul 2020 22:34:41 +0000
X-Inumbo-ID: 8753611c-c2fd-11ea-903c-12813bfff9fa
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 8753611c-c2fd-11ea-903c-12813bfff9fa;
 Fri, 10 Jul 2020 22:34:34 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s.hsd1.ca.comcast.net
 (c-67-164-102-47.hsd1.ca.comcast.net [67.164.102.47])
 (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 953A52075D;
 Fri, 10 Jul 2020 22:34:33 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
 s=default; t=1594420473;
 bh=tVsq6tAoJsobQNAWBk/5/5TNk+DY1mLqE8x5VMYbhB0=;
 h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
 b=GZaN8eitbLqQ3J1ffz011jaKi3evRotLPxTsAhAIMsFZsKmo4aMSEidGK3Ax57dr8
 hmq/KjEwIB8hP27Vl7Lnf9UMu23pRsMzy/xdemNKoa6Z/m34WRSM87zQKHL9kWPHMf
 sm0IMJ0E4d5DXQm6FTAO197SQvMUxLc4AmXdGoaU=
From: Stefano Stabellini <sstabellini@kernel.org>
To: jgross@suse.com,
	boris.ostrovsky@oracle.com,
	konrad.wilk@oracle.com
Subject: [PATCH v3 10/11] xen/arm: introduce phys/dma translations in
 xen_dma_sync_for_*
Date: Fri, 10 Jul 2020 15:34:26 -0700
Message-Id: <20200710223427.6897-10-sstabellini@kernel.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <alpine.DEB.2.21.2007101521290.4124@sstabellini-ThinkPad-T480s>
References: <alpine.DEB.2.21.2007101521290.4124@sstabellini-ThinkPad-T480s>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: hch@infradead.org, xen-devel@lists.xenproject.org, sstabellini@kernel.org,
 linux-kernel@vger.kernel.org,
 Stefano Stabellini <stefano.stabellini@xilinx.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Stefano Stabellini <stefano.stabellini@xilinx.com>

xen_dma_sync_for_cpu, xen_dma_sync_for_device, and xen_arch_need_swiotlb
are called with DMA addresses. On some platforms DMA addresses differ
from physical addresses, so before doing any operations on these
addresses we need to convert them back to physical addresses using
dma_to_phys.

Move the arch_sync_dma_for_cpu and arch_sync_dma_for_device calls from
xen_dma_sync_for_cpu/device to swiotlb-xen.c, and add a call to
dma_to_phys to do the address translation there.

dma_cache_maint is fixed by the next patch.

Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
Tested-by: Corey Minyard <cminyard@mvista.com>
Tested-by: Roman Shaposhnik <roman@zededa.com>

---
Changes in v2:
- improve commit message
- don't use pfn_valid

Changes in v3:
- move arch_sync_dma_for_cpu/device calls to swiotlb-xen.c
---
 arch/arm/xen/mm.c         | 17 ++++++-----------
 drivers/xen/swiotlb-xen.c | 32 ++++++++++++++++++++++++--------
 include/xen/swiotlb-xen.h |  6 ++----
 3 files changed, 32 insertions(+), 23 deletions(-)

diff --git a/arch/arm/xen/mm.c b/arch/arm/xen/mm.c
index f2414ea40a79..a8251a70f442 100644
--- a/arch/arm/xen/mm.c
+++ b/arch/arm/xen/mm.c
@@ -1,5 +1,6 @@
 // SPDX-License-Identifier: GPL-2.0-only
 #include <linux/cpu.h>
+#include <linux/dma-direct.h>
 #include <linux/dma-noncoherent.h>
 #include <linux/gfp.h>
 #include <linux/highmem.h>
@@ -72,22 +73,16 @@ static void dma_cache_maint(dma_addr_t handle, size_t size, u32 op)
  * dma-direct functions, otherwise we call the Xen specific version.
  */
 void xen_dma_sync_for_cpu(struct device *dev, dma_addr_t handle,
-			  phys_addr_t paddr, size_t size,
-			  enum dma_data_direction dir)
+			  size_t size, enum dma_data_direction dir)
 {
-	if (pfn_valid(PFN_DOWN(handle)))
-		arch_sync_dma_for_cpu(paddr, size, dir);
-	else if (dir != DMA_TO_DEVICE)
+	if (dir != DMA_TO_DEVICE)
 		dma_cache_maint(handle, size, GNTTAB_CACHE_INVAL);
 }
 
 void xen_dma_sync_for_device(struct device *dev, dma_addr_t handle,
-			     phys_addr_t paddr, size_t size,
-			     enum dma_data_direction dir)
+			     size_t size, enum dma_data_direction dir)
 {
-	if (pfn_valid(PFN_DOWN(handle)))
-		arch_sync_dma_for_device(paddr, size, dir);
-	else if (dir == DMA_FROM_DEVICE)
+	if (dir == DMA_FROM_DEVICE)
 		dma_cache_maint(handle, size, GNTTAB_CACHE_INVAL);
 	else
 		dma_cache_maint(handle, size, GNTTAB_CACHE_CLEAN);
@@ -98,7 +93,7 @@ bool xen_arch_need_swiotlb(struct device *dev,
 			   dma_addr_t dev_addr)
 {
 	unsigned int xen_pfn = XEN_PFN_DOWN(phys);
-	unsigned int bfn = XEN_PFN_DOWN(dev_addr);
+	unsigned int bfn = XEN_PFN_DOWN(dma_to_phys(dev, dev_addr));
 
 	/*
 	 * The swiotlb buffer should be used if
diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index a6a95358a8cb..39a0f2e0847c 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -413,8 +413,12 @@ static dma_addr_t xen_swiotlb_map_page(struct device *dev, struct page *page,
 	}
 
 done:
-	if (!dev_is_dma_coherent(dev) && !(attrs & DMA_ATTR_SKIP_CPU_SYNC))
-		xen_dma_sync_for_device(dev, dev_addr, phys, size, dir);
+	if (!dev_is_dma_coherent(dev) && !(attrs & DMA_ATTR_SKIP_CPU_SYNC)) {
+		if (pfn_valid(PFN_DOWN(dma_to_phys(dev, dev_addr))))
+			arch_sync_dma_for_device(phys, size, dir);
+		else
+			xen_dma_sync_for_device(dev, dev_addr, size, dir);
+	}
 	return dev_addr;
 }
 
@@ -433,8 +437,12 @@ static void xen_swiotlb_unmap_page(struct device *hwdev, dma_addr_t dev_addr,
 
 	BUG_ON(dir == DMA_NONE);
 
-	if (!dev_is_dma_coherent(hwdev) && !(attrs & DMA_ATTR_SKIP_CPU_SYNC))
-		xen_dma_sync_for_cpu(hwdev, dev_addr, paddr, size, dir);
+	if (!dev_is_dma_coherent(hwdev) && !(attrs & DMA_ATTR_SKIP_CPU_SYNC)) {
+		if (pfn_valid(PFN_DOWN(dma_to_phys(hwdev, dev_addr))))
+			arch_sync_dma_for_cpu(paddr, size, dir);
+		else
+			xen_dma_sync_for_cpu(hwdev, dev_addr, size, dir);
+	}
 
 	/* NOTE: We use dev_addr here, not paddr! */
 	if (is_xen_swiotlb_buffer(hwdev, dev_addr))
@@ -447,8 +455,12 @@ xen_swiotlb_sync_single_for_cpu(struct device *dev, dma_addr_t dma_addr,
 {
 	phys_addr_t paddr = xen_dma_to_phys(dev, dma_addr);
 
-	if (!dev_is_dma_coherent(dev))
-		xen_dma_sync_for_cpu(dev, dma_addr, paddr, size, dir);
+	if (!dev_is_dma_coherent(dev)) {
+		if (pfn_valid(PFN_DOWN(dma_to_phys(dev, dma_addr))))
+			arch_sync_dma_for_cpu(paddr, size, dir);
+		else
+			xen_dma_sync_for_cpu(dev, dma_addr, size, dir);
+	}
 
 	if (is_xen_swiotlb_buffer(dev, dma_addr))
 		swiotlb_tbl_sync_single(dev, paddr, size, dir, SYNC_FOR_CPU);
@@ -463,8 +475,12 @@ xen_swiotlb_sync_single_for_device(struct device *dev, dma_addr_t dma_addr,
 	if (is_xen_swiotlb_buffer(dev, dma_addr))
 		swiotlb_tbl_sync_single(dev, paddr, size, dir, SYNC_FOR_DEVICE);
 
-	if (!dev_is_dma_coherent(dev))
-		xen_dma_sync_for_device(dev, dma_addr, paddr, size, dir);
+	if (!dev_is_dma_coherent(dev)) {
+		if (pfn_valid(PFN_DOWN(dma_to_phys(dev, dma_addr))))
+			arch_sync_dma_for_device(paddr, size, dir);
+		else
+			xen_dma_sync_for_device(dev, dma_addr, size, dir);
+	}
 }
 
 /*
diff --git a/include/xen/swiotlb-xen.h b/include/xen/swiotlb-xen.h
index 6d235fe2b92d..d5eaf9d682b8 100644
--- a/include/xen/swiotlb-xen.h
+++ b/include/xen/swiotlb-xen.h
@@ -5,11 +5,9 @@
 #include <linux/swiotlb.h>
 
 void xen_dma_sync_for_cpu(struct device *dev, dma_addr_t handle,
-			  phys_addr_t paddr, size_t size,
-			  enum dma_data_direction dir);
+			  size_t size, enum dma_data_direction dir);
 void xen_dma_sync_for_device(struct device *dev, dma_addr_t handle,
-			     phys_addr_t paddr, size_t size,
-			     enum dma_data_direction dir);
+			     size_t size, enum dma_data_direction dir);
 
 extern int xen_swiotlb_init(int verbose, bool early);
 extern const struct dma_map_ops xen_swiotlb_dma_ops;
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Fri Jul 10 22:34:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jul 2020 22:34:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ju1bS-00066A-Qu; Fri, 10 Jul 2020 22:34:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=idFW=AV=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1ju1bR-0005wJ-DB
 for xen-devel@lists.xenproject.org; Fri, 10 Jul 2020 22:34:45 +0000
X-Inumbo-ID: 863ab99c-c2fd-11ea-bca7-bc764e2007e4
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 863ab99c-c2fd-11ea-bca7-bc764e2007e4;
 Fri, 10 Jul 2020 22:34:32 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s.hsd1.ca.comcast.net
 (c-67-164-102-47.hsd1.ca.comcast.net [67.164.102.47])
 (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id C8DA7207FC;
 Fri, 10 Jul 2020 22:34:31 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
 s=default; t=1594420472;
 bh=sMYkMYitD6sPj0ukCknVdpCgnLmfr5vqpCBV7xeRxnc=;
 h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
 b=ZkosITiJ48CQikhlYJWkeZGIhGG+eMgl8OaoZ/kNTsCH+/33DkflrU7cSBOMfzdUq
 bPWXlCXKKhcKjExNrnFg2SReGONdaI3jhfncDja5w3vlNRiOu2aF0upucK00xtmOYQ
 ITCaHzvbzVF33GKpwDOLggZY96Jh0H46k5ZQLxfY=
From: Stefano Stabellini <sstabellini@kernel.org>
To: jgross@suse.com,
	boris.ostrovsky@oracle.com,
	konrad.wilk@oracle.com
Subject: [PATCH v3 06/11] swiotlb-xen: add struct device * parameter to
 xen_dma_sync_for_device
Date: Fri, 10 Jul 2020 15:34:22 -0700
Message-Id: <20200710223427.6897-6-sstabellini@kernel.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <alpine.DEB.2.21.2007101521290.4124@sstabellini-ThinkPad-T480s>
References: <alpine.DEB.2.21.2007101521290.4124@sstabellini-ThinkPad-T480s>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: hch@infradead.org, xen-devel@lists.xenproject.org, sstabellini@kernel.org,
 linux-kernel@vger.kernel.org,
 Stefano Stabellini <stefano.stabellini@xilinx.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Stefano Stabellini <stefano.stabellini@xilinx.com>

No functional changes. The parameter is unused in this patch but will be
used by subsequent patches.

Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Tested-by: Corey Minyard <cminyard@mvista.com>
Tested-by: Roman Shaposhnik <roman@zededa.com>
---
Changes in v3:
- add whitespace in title
- improve commit message
---
 arch/arm/xen/mm.c         | 5 +++--
 drivers/xen/swiotlb-xen.c | 4 ++--
 include/xen/swiotlb-xen.h | 5 +++--
 3 files changed, 8 insertions(+), 6 deletions(-)

diff --git a/arch/arm/xen/mm.c b/arch/arm/xen/mm.c
index 1a00e8003c64..f2414ea40a79 100644
--- a/arch/arm/xen/mm.c
+++ b/arch/arm/xen/mm.c
@@ -81,8 +81,9 @@ void xen_dma_sync_for_cpu(struct device *dev, dma_addr_t handle,
 		dma_cache_maint(handle, size, GNTTAB_CACHE_INVAL);
 }
 
-void xen_dma_sync_for_device(dma_addr_t handle, phys_addr_t paddr, size_t size,
-		enum dma_data_direction dir)
+void xen_dma_sync_for_device(struct device *dev, dma_addr_t handle,
+			     phys_addr_t paddr, size_t size,
+			     enum dma_data_direction dir)
 {
 	if (pfn_valid(PFN_DOWN(handle)))
 		arch_sync_dma_for_device(paddr, size, dir);
diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index d04b7a15124f..8a3a7bcc5ec0 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -408,7 +408,7 @@ static dma_addr_t xen_swiotlb_map_page(struct device *dev, struct page *page,
 
 done:
 	if (!dev_is_dma_coherent(dev) && !(attrs & DMA_ATTR_SKIP_CPU_SYNC))
-		xen_dma_sync_for_device(dev_addr, phys, size, dir);
+		xen_dma_sync_for_device(dev, dev_addr, phys, size, dir);
 	return dev_addr;
 }
 
@@ -458,7 +458,7 @@ xen_swiotlb_sync_single_for_device(struct device *dev, dma_addr_t dma_addr,
 		swiotlb_tbl_sync_single(dev, paddr, size, dir, SYNC_FOR_DEVICE);
 
 	if (!dev_is_dma_coherent(dev))
-		xen_dma_sync_for_device(dma_addr, paddr, size, dir);
+		xen_dma_sync_for_device(dev, dma_addr, paddr, size, dir);
 }
 
 /*
diff --git a/include/xen/swiotlb-xen.h b/include/xen/swiotlb-xen.h
index f62d1854780b..6d235fe2b92d 100644
--- a/include/xen/swiotlb-xen.h
+++ b/include/xen/swiotlb-xen.h
@@ -7,8 +7,9 @@
 void xen_dma_sync_for_cpu(struct device *dev, dma_addr_t handle,
 			  phys_addr_t paddr, size_t size,
 			  enum dma_data_direction dir);
-void xen_dma_sync_for_device(dma_addr_t handle, phys_addr_t paddr, size_t size,
-		enum dma_data_direction dir);
+void xen_dma_sync_for_device(struct device *dev, dma_addr_t handle,
+			     phys_addr_t paddr, size_t size,
+			     enum dma_data_direction dir);
 
 extern int xen_swiotlb_init(int verbose, bool early);
 extern const struct dma_map_ops xen_swiotlb_dma_ops;
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Fri Jul 10 22:34:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jul 2020 22:34:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ju1bU-00067R-4E; Fri, 10 Jul 2020 22:34:48 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=idFW=AV=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1ju1bS-0005wR-SM
 for xen-devel@lists.xenproject.org; Fri, 10 Jul 2020 22:34:46 +0000
X-Inumbo-ID: 879c57f0-c2fd-11ea-903c-12813bfff9fa
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 879c57f0-c2fd-11ea-903c-12813bfff9fa;
 Fri, 10 Jul 2020 22:34:35 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s.hsd1.ca.comcast.net
 (c-67-164-102-47.hsd1.ca.comcast.net [67.164.102.47])
 (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 1306420842;
 Fri, 10 Jul 2020 22:34:34 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
 s=default; t=1594420474;
 bh=cbWejgBryP3wyPjTJqgvL1vW18Wt3qtjvaAEe9kL0pA=;
 h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
 b=FMZ1pvxBAj/UqOMhOUE4LnQZIiFubMpVx7zHItoWPFKhBrWbTBYPVAdTBFCy2+0G1
 9WCbwxcHHxgEqTzBBWG2k82KXIWM2L+Q1svJMEePdQc1BWjdG3rg+K0IjMVi9IuZ5m
 Fahls4gJEmbcd12Pe6pig9hLI9mD2vlA5ttxzRQQ=
From: Stefano Stabellini <sstabellini@kernel.org>
To: jgross@suse.com,
	boris.ostrovsky@oracle.com,
	konrad.wilk@oracle.com
Subject: [PATCH v3 11/11] xen/arm: call dma_to_phys on the dma_addr_t
 parameter of dma_cache_maint
Date: Fri, 10 Jul 2020 15:34:27 -0700
Message-Id: <20200710223427.6897-11-sstabellini@kernel.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <alpine.DEB.2.21.2007101521290.4124@sstabellini-ThinkPad-T480s>
References: <alpine.DEB.2.21.2007101521290.4124@sstabellini-ThinkPad-T480s>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: hch@infradead.org, xen-devel@lists.xenproject.org, sstabellini@kernel.org,
 linux-kernel@vger.kernel.org,
 Stefano Stabellini <stefano.stabellini@xilinx.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Stefano Stabellini <stefano.stabellini@xilinx.com>

dma_cache_maint is called with a DMA address, which could differ from
the physical address.

Add a struct device * parameter to dma_cache_maint.

Translate the dma_addr_t parameter of dma_cache_maint by calling
dma_to_phys. Do this for the first page and for every following page
when handling multi-page buffers.

Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Tested-by: Corey Minyard <cminyard@mvista.com>
Tested-by: Roman Shaposhnik <roman@zededa.com>
---
Changes in v2:
- improve commit message
---
 arch/arm/xen/mm.c | 15 +++++++++------
 1 file changed, 9 insertions(+), 6 deletions(-)

diff --git a/arch/arm/xen/mm.c b/arch/arm/xen/mm.c
index a8251a70f442..396797ffe2b1 100644
--- a/arch/arm/xen/mm.c
+++ b/arch/arm/xen/mm.c
@@ -43,15 +43,18 @@ unsigned long xen_get_swiotlb_free_pages(unsigned int order)
 static bool hypercall_cflush = false;
 
 /* buffers in highmem or foreign pages cannot cross page boundaries */
-static void dma_cache_maint(dma_addr_t handle, size_t size, u32 op)
+static void dma_cache_maint(struct device *dev, dma_addr_t handle,
+			    size_t size, u32 op)
 {
 	struct gnttab_cache_flush cflush;
 
-	cflush.a.dev_bus_addr = handle & XEN_PAGE_MASK;
 	cflush.offset = xen_offset_in_page(handle);
 	cflush.op = op;
+	handle &= XEN_PAGE_MASK;
 
 	do {
+		cflush.a.dev_bus_addr = dma_to_phys(dev, handle);
+
 		if (size + cflush.offset > XEN_PAGE_SIZE)
 			cflush.length = XEN_PAGE_SIZE - cflush.offset;
 		else
@@ -60,7 +63,7 @@ static void dma_cache_maint(dma_addr_t handle, size_t size, u32 op)
 		HYPERVISOR_grant_table_op(GNTTABOP_cache_flush, &cflush, 1);
 
 		cflush.offset = 0;
-		cflush.a.dev_bus_addr += cflush.length;
+		handle += cflush.length;
 		size -= cflush.length;
 	} while (size);
 }
@@ -76,16 +79,16 @@ void xen_dma_sync_for_cpu(struct device *dev, dma_addr_t handle,
 			  size_t size, enum dma_data_direction dir)
 {
 	if (dir != DMA_TO_DEVICE)
-		dma_cache_maint(handle, size, GNTTAB_CACHE_INVAL);
+		dma_cache_maint(dev, handle, size, GNTTAB_CACHE_INVAL);
 }
 
 void xen_dma_sync_for_device(struct device *dev, dma_addr_t handle,
 			     size_t size, enum dma_data_direction dir)
 {
 	if (dir == DMA_FROM_DEVICE)
-		dma_cache_maint(handle, size, GNTTAB_CACHE_INVAL);
+		dma_cache_maint(dev, handle, size, GNTTAB_CACHE_INVAL);
 	else
-		dma_cache_maint(handle, size, GNTTAB_CACHE_CLEAN);
+		dma_cache_maint(dev, handle, size, GNTTAB_CACHE_CLEAN);
 }
 
 bool xen_arch_need_swiotlb(struct device *dev,
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Fri Jul 10 22:34:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jul 2020 22:34:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ju1bX-0006AA-Cm; Fri, 10 Jul 2020 22:34:51 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=idFW=AV=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1ju1bW-0005wJ-DU
 for xen-devel@lists.xenproject.org; Fri, 10 Jul 2020 22:34:50 +0000
X-Inumbo-ID: 86c42e3e-c2fd-11ea-bca7-bc764e2007e4
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 86c42e3e-c2fd-11ea-bca7-bc764e2007e4;
 Fri, 10 Jul 2020 22:34:33 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s.hsd1.ca.comcast.net
 (c-67-164-102-47.hsd1.ca.comcast.net [67.164.102.47])
 (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id AE6EE2084C;
 Fri, 10 Jul 2020 22:34:32 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
 s=default; t=1594420473;
 bh=jLgIlw0XpAVuOCkBCu96F/m3f+tfTNFZAXAR4R0KYhQ=;
 h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
 b=vsgLFsqHx83ZxrVE357/rAy2BkLYAlHXH3STs/PYChzyTa06t95Oll0lDdW/RvhzA
 3w15KyVrAiCPA2w60Luc8SqDOVyQyqkDSy2CJJiPwbNNxRy1ZFtBmpSrg/JPH5N0pg
 53S4KsSkr+3N1JIHxPP80eE+akQ+CPF2WRZY7gSM=
From: Stefano Stabellini <sstabellini@kernel.org>
To: jgross@suse.com,
	boris.ostrovsky@oracle.com,
	konrad.wilk@oracle.com
Subject: [PATCH v3 08/11] swiotlb-xen: remove XEN_PFN_PHYS
Date: Fri, 10 Jul 2020 15:34:24 -0700
Message-Id: <20200710223427.6897-8-sstabellini@kernel.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <alpine.DEB.2.21.2007101521290.4124@sstabellini-ThinkPad-T480s>
References: <alpine.DEB.2.21.2007101521290.4124@sstabellini-ThinkPad-T480s>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: hch@infradead.org, xen-devel@lists.xenproject.org, sstabellini@kernel.org,
 linux-kernel@vger.kernel.org,
 Stefano Stabellini <stefano.stabellini@xilinx.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Stefano Stabellini <stefano.stabellini@xilinx.com>

XEN_PFN_PHYS is only used in one place in swiotlb-xen, making things more
complex than they need to be.

Remove the definition of XEN_PFN_PHYS and open-code the cast in the one
place where it is needed.

Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
---
 drivers/xen/swiotlb-xen.c | 7 +------
 include/xen/page.h        | 1 -
 2 files changed, 1 insertion(+), 7 deletions(-)

diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index e2c35f45f91e..03d118b6c141 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -52,11 +52,6 @@ static unsigned long xen_io_tlb_nslabs;
  * Quick lookup value of the bus address of the IOTLB.
  */
 
-/*
- * Both of these functions should avoid XEN_PFN_PHYS because phys_addr_t
- * can be 32bit when dma_addr_t is 64bit leading to a loss in
- * information if the shift is done before casting to 64bit.
- */
 static inline dma_addr_t xen_phys_to_bus(struct device *dev, phys_addr_t paddr)
 {
 	unsigned long bfn = pfn_to_bfn(XEN_PFN_DOWN(paddr));
@@ -101,7 +96,7 @@ static int is_xen_swiotlb_buffer(struct device *dev, dma_addr_t dma_addr)
 {
 	unsigned long bfn = XEN_PFN_DOWN(dma_addr);
 	unsigned long xen_pfn = bfn_to_local_pfn(bfn);
-	phys_addr_t paddr = XEN_PFN_PHYS(xen_pfn);
+	phys_addr_t paddr = (phys_addr_t)xen_pfn << XEN_PAGE_SHIFT;
 
 	/* If the address is outside our domain, it CAN
 	 * have the same virtual address as another address
diff --git a/include/xen/page.h b/include/xen/page.h
index df6d6b6ec66e..285677b42943 100644
--- a/include/xen/page.h
+++ b/include/xen/page.h
@@ -24,7 +24,6 @@
 
 #define XEN_PFN_DOWN(x)	((x) >> XEN_PAGE_SHIFT)
 #define XEN_PFN_UP(x)	(((x) + XEN_PAGE_SIZE-1) >> XEN_PAGE_SHIFT)
-#define XEN_PFN_PHYS(x)	((phys_addr_t)(x) << XEN_PAGE_SHIFT)
 
 #include <asm/xen/page.h>
 
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Fri Jul 10 22:34:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 10 Jul 2020 22:34:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ju1bc-0006Fz-Ot; Fri, 10 Jul 2020 22:34:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=idFW=AV=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1ju1bb-0005wJ-Dc
 for xen-devel@lists.xenproject.org; Fri, 10 Jul 2020 22:34:55 +0000
X-Inumbo-ID: 8709b2d8-c2fd-11ea-8496-bc764e2007e4
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8709b2d8-c2fd-11ea-8496-bc764e2007e4;
 Fri, 10 Jul 2020 22:34:34 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s.hsd1.ca.comcast.net
 (c-67-164-102-47.hsd1.ca.comcast.net [67.164.102.47])
 (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 2675920720;
 Fri, 10 Jul 2020 22:34:33 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
 s=default; t=1594420473;
 bh=ucoDUHYkjSRPOm2qbV+240NTBrykvQHA5UqCzIUfAYE=;
 h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
 b=WejIoLlLic1B7Dmj085WZh457ou/ls3auQwQETNjRv3STFqcYkb+c5Eos22MCsgQy
 j5Km6FnmA73P5v2e7pm1hFt3dq0NkCt5v3auBQMG9F3ycKHomSh8rx7b+F+jgUHNXX
 lEF1m7TGustCcFirrHcsPh7b/hHQkhV31TFi0JfY=
From: Stefano Stabellini <sstabellini@kernel.org>
To: jgross@suse.com,
	boris.ostrovsky@oracle.com,
	konrad.wilk@oracle.com
Subject: [PATCH v3 09/11] swiotlb-xen: introduce phys_to_dma/dma_to_phys
 translations
Date: Fri, 10 Jul 2020 15:34:25 -0700
Message-Id: <20200710223427.6897-9-sstabellini@kernel.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <alpine.DEB.2.21.2007101521290.4124@sstabellini-ThinkPad-T480s>
References: <alpine.DEB.2.21.2007101521290.4124@sstabellini-ThinkPad-T480s>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: hch@infradead.org, xen-devel@lists.xenproject.org, sstabellini@kernel.org,
 linux-kernel@vger.kernel.org,
 Stefano Stabellini <stefano.stabellini@xilinx.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Stefano Stabellini <stefano.stabellini@xilinx.com>

With some devices, physical addresses are different from DMA addresses.
To be able to deal with these cases, we need to call phys_to_dma on
physical addresses (including machine addresses in Xen terminology)
before returning them from xen_swiotlb_alloc_coherent and
xen_swiotlb_map_page.

We also need to convert dma addresses back to physical addresses using
dma_to_phys in xen_swiotlb_free_coherent and xen_swiotlb_unmap_page if
we want to do any operations on them.

Call dma_to_phys in is_xen_swiotlb_buffer.
Introduce xen_phys_to_dma and call phys_to_dma in its implementation.
Introduce xen_dma_to_phys and call dma_to_phys in its implementation.
Call xen_phys_to_dma/xen_dma_to_phys instead of
xen_phys_to_bus/xen_bus_to_phys throughout swiotlb-xen.c.

Everything is taken care of by these changes except for
xen_swiotlb_alloc_coherent and xen_swiotlb_free_coherent, which need a
few explicit phys_to_dma/dma_to_phys calls.

Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
Tested-by: Corey Minyard <cminyard@mvista.com>
Tested-by: Roman Shaposhnik <roman@zededa.com>
---
Changes in v2:
- improve commit message

Changes in v3:
- add xen_phys_to_dma and xen_dma_to_phys in this patch
---
 drivers/xen/swiotlb-xen.c | 53 +++++++++++++++++++++++----------------
 1 file changed, 32 insertions(+), 21 deletions(-)

diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index 03d118b6c141..a6a95358a8cb 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -52,30 +52,39 @@ static unsigned long xen_io_tlb_nslabs;
  * Quick lookup value of the bus address of the IOTLB.
  */
 
-static inline dma_addr_t xen_phys_to_bus(struct device *dev, phys_addr_t paddr)
+static inline phys_addr_t xen_phys_to_bus(struct device *dev, phys_addr_t paddr)
 {
 	unsigned long bfn = pfn_to_bfn(XEN_PFN_DOWN(paddr));
-	dma_addr_t dma = (dma_addr_t)bfn << XEN_PAGE_SHIFT;
+	phys_addr_t baddr = (phys_addr_t)bfn << XEN_PAGE_SHIFT;
 
-	dma |= paddr & ~XEN_PAGE_MASK;
+	baddr |= paddr & ~XEN_PAGE_MASK;
+	return baddr;
+}
 
-	return dma;
+static inline dma_addr_t xen_phys_to_dma(struct device *dev, phys_addr_t paddr)
+{
+	return phys_to_dma(dev, xen_phys_to_bus(dev, paddr));
 }
 
-static inline phys_addr_t xen_bus_to_phys(struct device *dev, dma_addr_t baddr)
+static inline phys_addr_t xen_bus_to_phys(struct device *dev,
+					  phys_addr_t baddr)
 {
 	unsigned long xen_pfn = bfn_to_pfn(XEN_PFN_DOWN(baddr));
-	dma_addr_t dma = (dma_addr_t)xen_pfn << XEN_PAGE_SHIFT;
-	phys_addr_t paddr = dma;
-
-	paddr |= baddr & ~XEN_PAGE_MASK;
+	phys_addr_t paddr = (xen_pfn << XEN_PAGE_SHIFT) |
+			    (baddr & ~XEN_PAGE_MASK);
 
 	return paddr;
 }
 
+static inline phys_addr_t xen_dma_to_phys(struct device *dev,
+					  dma_addr_t dma_addr)
+{
+	return xen_bus_to_phys(dev, dma_to_phys(dev, dma_addr));
+}
+
 static inline dma_addr_t xen_virt_to_bus(struct device *dev, void *address)
 {
-	return xen_phys_to_bus(dev, virt_to_phys(address));
+	return xen_phys_to_dma(dev, virt_to_phys(address));
 }
 
 static inline int range_straddles_page_boundary(phys_addr_t p, size_t size)
@@ -94,7 +103,7 @@ static inline int range_straddles_page_boundary(phys_addr_t p, size_t size)
 
 static int is_xen_swiotlb_buffer(struct device *dev, dma_addr_t dma_addr)
 {
-	unsigned long bfn = XEN_PFN_DOWN(dma_addr);
+	unsigned long bfn = XEN_PFN_DOWN(dma_to_phys(dev, dma_addr));
 	unsigned long xen_pfn = bfn_to_local_pfn(bfn);
 	phys_addr_t paddr = (phys_addr_t)xen_pfn << XEN_PAGE_SHIFT;
 
@@ -299,12 +308,12 @@ xen_swiotlb_alloc_coherent(struct device *hwdev, size_t size,
 	if (hwdev && hwdev->coherent_dma_mask)
 		dma_mask = hwdev->coherent_dma_mask;
 
-	/* At this point dma_handle is the physical address, next we are
+	/* At this point dma_handle is the dma address, next we are
 	 * going to set it to the machine address.
 	 * Do not use virt_to_phys(ret) because on ARM it doesn't correspond
 	 * to *dma_handle. */
-	phys = *dma_handle;
-	dev_addr = xen_phys_to_bus(hwdev, phys);
+	phys = dma_to_phys(hwdev, *dma_handle);
+	dev_addr = xen_phys_to_dma(hwdev, phys);
 	if (((dev_addr + size - 1 <= dma_mask)) &&
 	    !range_straddles_page_boundary(phys, size))
 		*dma_handle = dev_addr;
@@ -314,6 +323,7 @@ xen_swiotlb_alloc_coherent(struct device *hwdev, size_t size,
 			xen_free_coherent_pages(hwdev, size, ret, (dma_addr_t)phys, attrs);
 			return NULL;
 		}
+		*dma_handle = phys_to_dma(hwdev, *dma_handle);
 		SetPageXenRemapped(virt_to_page(ret));
 	}
 	memset(ret, 0, size);
@@ -334,7 +344,7 @@ xen_swiotlb_free_coherent(struct device *hwdev, size_t size, void *vaddr,
 
 	/* do not use virt_to_phys because on ARM it doesn't return you the
 	 * physical address */
-	phys = xen_bus_to_phys(hwdev, dev_addr);
+	phys = xen_dma_to_phys(hwdev, dev_addr);
 
 	/* Convert the size to actually allocated. */
 	size = 1UL << (order + XEN_PAGE_SHIFT);
@@ -349,7 +359,8 @@ xen_swiotlb_free_coherent(struct device *hwdev, size_t size, void *vaddr,
 	    TestClearPageXenRemapped(page))
 		xen_destroy_contiguous_region(phys, order);
 
-	xen_free_coherent_pages(hwdev, size, vaddr, (dma_addr_t)phys, attrs);
+	xen_free_coherent_pages(hwdev, size, vaddr, phys_to_dma(hwdev, phys),
+				attrs);
 }
 
 /*
@@ -365,7 +376,7 @@ static dma_addr_t xen_swiotlb_map_page(struct device *dev, struct page *page,
 				unsigned long attrs)
 {
 	phys_addr_t map, phys = page_to_phys(page) + offset;
-	dma_addr_t dev_addr = xen_phys_to_bus(dev, phys);
+	dma_addr_t dev_addr = xen_phys_to_dma(dev, phys);
 
 	BUG_ON(dir == DMA_NONE);
 	/*
@@ -390,7 +401,7 @@ static dma_addr_t xen_swiotlb_map_page(struct device *dev, struct page *page,
 		return DMA_MAPPING_ERROR;
 
 	phys = map;
-	dev_addr = xen_phys_to_bus(dev, map);
+	dev_addr = xen_phys_to_dma(dev, map);
 
 	/*
 	 * Ensure that the address returned is DMA'ble
@@ -418,7 +429,7 @@ static dma_addr_t xen_swiotlb_map_page(struct device *dev, struct page *page,
 static void xen_swiotlb_unmap_page(struct device *hwdev, dma_addr_t dev_addr,
 		size_t size, enum dma_data_direction dir, unsigned long attrs)
 {
-	phys_addr_t paddr = xen_bus_to_phys(hwdev, dev_addr);
+	phys_addr_t paddr = xen_dma_to_phys(hwdev, dev_addr);
 
 	BUG_ON(dir == DMA_NONE);
 
@@ -434,7 +445,7 @@ static void
 xen_swiotlb_sync_single_for_cpu(struct device *dev, dma_addr_t dma_addr,
 		size_t size, enum dma_data_direction dir)
 {
-	phys_addr_t paddr = xen_bus_to_phys(dev, dma_addr);
+	phys_addr_t paddr = xen_dma_to_phys(dev, dma_addr);
 
 	if (!dev_is_dma_coherent(dev))
 		xen_dma_sync_for_cpu(dev, dma_addr, paddr, size, dir);
@@ -447,7 +458,7 @@ static void
 xen_swiotlb_sync_single_for_device(struct device *dev, dma_addr_t dma_addr,
 		size_t size, enum dma_data_direction dir)
 {
-	phys_addr_t paddr = xen_bus_to_phys(dev, dma_addr);
+	phys_addr_t paddr = xen_dma_to_phys(dev, dma_addr);
 
 	if (is_xen_swiotlb_buffer(dev, dma_addr))
 		swiotlb_tbl_sync_single(dev, paddr, size, dir, SYNC_FOR_DEVICE);
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Sat Jul 11 02:56:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 11 Jul 2020 02:56:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ju5gC-0004Rw-DC; Sat, 11 Jul 2020 02:55:56 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+oRR=AW=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1ju5gB-0004Rc-Kl
 for xen-devel@lists.xenproject.org; Sat, 11 Jul 2020 02:55:55 +0000
X-Inumbo-ID: 06c6d3ce-c322-11ea-9064-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 06c6d3ce-c322-11ea-9064-12813bfff9fa;
 Sat, 11 Jul 2020 02:55:50 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=1obhdDij3IrWGIbqULvT00dUqyvRVcyphfkX+lt4ifo=; b=U61aKKGq0irpMaEPipl1rjxZy
 0eZIIcKyS/W5uTr2wnLeMBcv7hkjEhfUNNhTY2X4P0bvWQyfPZ8zteNk9+k7TYbLoYIdDNa42JC0d
 5VXs80BIeprNi1oqx29V/KqYNgRjtBvjbK25xdtpsNFdNwypnRxoikBe1gI+uCf46vdAI=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ju5g5-0005Eh-Io; Sat, 11 Jul 2020 02:55:49 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ju5g5-00050x-4U; Sat, 11 Jul 2020 02:55:49 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1ju5g5-0005bs-3m; Sat, 11 Jul 2020 02:55:49 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151808-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 151808: regressions - FAIL
X-Osstest-Failures: linux-linus:test-amd64-amd64-qemuu-nested-amd:xen-boot/l1:fail:regression
 linux-linus:test-arm64-arm64-libvirt-xsm:guest-start/debian.repeat:fail:regression
 linux-linus:build-i386-pvops:kernel-build:fail:regression
 linux-linus:test-armhf-armhf-xl-vhd:guest-start:fail:regression
 linux-linus:test-amd64-i386-examine:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-pair:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: linux=a581387e415bbb0085e7e67906c8f4a99746590e
X-Osstest-Versions-That: linux=1b5044021070efa3259f3e9548dc35d1eb6aa844
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 11 Jul 2020 02:55:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151808 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151808/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-nested-amd 14 xen-boot/l1         fail REGR. vs. 151214
 test-arm64-arm64-libvirt-xsm 16 guest-start/debian.repeat fail REGR. vs. 151214
 build-i386-pvops              6 kernel-build             fail REGR. vs. 151214
 test-armhf-armhf-xl-vhd      11 guest-start              fail REGR. vs. 151214

Tests which did not succeed, but are not blocking:
 test-amd64-i386-examine       1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemut-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemut-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 151214
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 151214
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 151214
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 151214
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 151214
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 151214
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass

version targeted for testing:
 linux                a581387e415bbb0085e7e67906c8f4a99746590e
baseline version:
 linux                1b5044021070efa3259f3e9548dc35d1eb6aa844

Last test of basis   151214  2020-06-18 02:27:46 Z   23 days
Failing since        151236  2020-06-19 19:10:35 Z   21 days   32 attempts
Testing same since   151808  2020-07-10 19:46:32 Z    0 days    1 attempts

------------------------------------------------------------
641 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             fail    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         blocked 
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 32017 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Jul 11 03:00:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 11 Jul 2020 03:00:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ju5kf-0005H8-1q; Sat, 11 Jul 2020 03:00:33 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+oRR=AW=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1ju5ke-0005Gi-AF
 for xen-devel@lists.xenproject.org; Sat, 11 Jul 2020 03:00:32 +0000
X-Inumbo-ID: aaa73646-c322-11ea-8496-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id aaa73646-c322-11ea-8496-bc764e2007e4;
 Sat, 11 Jul 2020 03:00:24 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=t4SeMKQiWVFH5YDpJPFBUlCOpaqglId1RM9KVH1Z9dk=; b=p2+QFrpxl1jNtSqlwpzM2IP/e
 nHzauxYl42GjC/3Nfg5TxADVVe7IHfvWdFH9qStPPPokmNNzZPoUrPgzUKXLjr6KAGUkl7/Epzefv
 nhe6cV8GC8FxlAQwJx6WDArc8ejJChe7rGEhXqpHWn2YaNVpQYvnYoZG0LIrvbq6IjnKQ=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ju5kV-0005LV-Ok; Sat, 11 Jul 2020 03:00:23 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ju5kV-00058h-H1; Sat, 11 Jul 2020 03:00:23 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1ju5kV-0008FI-Fz; Sat, 11 Jul 2020 03:00:23 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151804-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 151804: regressions - FAIL
X-Osstest-Failures: qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-amd64-libvirt-vhd:debian-di-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-libvirt-xsm:guest-start:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qcow2:debian-di-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:redhat-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-i386-libvirt-pair:guest-start/debian:fail:regression
 qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-freebsd10-i386:guest-start:fail:regression
 qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:redhat-install:fail:regression
 qemu-mainline:test-amd64-amd64-libvirt:guest-start:fail:regression
 qemu-mainline:test-amd64-amd64-libvirt-xsm:guest-start:fail:regression
 qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-start:fail:regression
 qemu-mainline:test-amd64-i386-libvirt:guest-start:fail:regression
 qemu-mainline:test-armhf-armhf-xl-vhd:debian-di-install:fail:regression
 qemu-mainline:test-amd64-amd64-libvirt-pair:guest-start/debian:fail:regression
 qemu-mainline:test-armhf-armhf-libvirt:guest-start:fail:regression
 qemu-mainline:test-arm64-arm64-libvirt-xsm:guest-start:fail:regression
 qemu-mainline:test-armhf-armhf-libvirt-raw:debian-di-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: qemuu=45db94cc90c286a9965a285ba19450f448760a09
X-Osstest-Versions-That: qemuu=9e3903136d9acde2fb2dd9e967ba928050a6cb4a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 11 Jul 2020 03:00:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151804 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151804/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemuu-ovmf-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-win7-amd64 10 windows-install  fail REGR. vs. 151065
 test-amd64-amd64-libvirt-vhd 10 debian-di-install        fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-win7-amd64 10 windows-install   fail REGR. vs. 151065
 test-amd64-amd64-qemuu-nested-intel 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-libvirt-xsm  12 guest-start              fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-ws16-amd64 10 windows-install  fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-qemuu-nested-amd 10 debian-hvm-install  fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-ovmf-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qcow2    10 debian-di-install        fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-qemuu-rhel6hvm-amd 10 redhat-install     fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-debianhvm-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-ws16-amd64 10 windows-install   fail REGR. vs. 151065
 test-amd64-i386-libvirt-pair 21 guest-start/debian       fail REGR. vs. 151065
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-freebsd10-i386 11 guest-start            fail REGR. vs. 151065
 test-amd64-i386-qemuu-rhel6hvm-intel 10 redhat-install   fail REGR. vs. 151065
 test-amd64-amd64-libvirt     12 guest-start              fail REGR. vs. 151065
 test-amd64-amd64-libvirt-xsm 12 guest-start              fail REGR. vs. 151065
 test-amd64-i386-freebsd10-amd64 11 guest-start           fail REGR. vs. 151065
 test-amd64-i386-libvirt      12 guest-start              fail REGR. vs. 151065
 test-armhf-armhf-xl-vhd      10 debian-di-install        fail REGR. vs. 151065
 test-amd64-amd64-libvirt-pair 21 guest-start/debian      fail REGR. vs. 151065
 test-armhf-armhf-libvirt     12 guest-start              fail REGR. vs. 151065
 test-arm64-arm64-libvirt-xsm 12 guest-start              fail REGR. vs. 151065
 test-armhf-armhf-libvirt-raw 10 debian-di-install        fail REGR. vs. 151065

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check       fail   never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check   fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check      fail   never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check  fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                45db94cc90c286a9965a285ba19450f448760a09
baseline version:
 qemuu                9e3903136d9acde2fb2dd9e967ba928050a6cb4a

Last test of basis   151065  2020-06-12 22:27:51 Z   28 days
Failing since        151101  2020-06-14 08:32:51 Z   26 days   35 attempts
Testing same since   151804  2020-07-10 18:37:56 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ahmed Karaman <ahmedkhaledkaraman@gmail.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Bulekov <alxndr@bu.edu>
  Alexey Krasikov <alex-krasikov@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Allan Peramaki <aperamak@pp1.inet.fi>
  Andrew Jones <drjones@redhat.com>
  Ani Sinha <ani.sinha@nutanix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Ard Biesheuvel <ardb@kernel.org>
  Artyom Tarasenko <atar4qemu@gmail.com>
  Aurelien Jarno <aurelien@aurel32.net>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Beata Michalska <beata.michalska@linaro.org>
  Bin Meng <bin.meng@windriver.com>
  Catherine A. Frederick <chocola@animebitch.es>
  Cathy Zhang <cathy.zhang@intel.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christophe de Dinechin <dinechin@redhat.com>
  Cindy Lu <lulu@redhat.com>
  Claudio Fontana <cfontana@suse.de>
  Cleber Rosa <crosa@redhat.com>
  Colin Xu <colin.xu@intel.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniele Buono <dbuono@linux.vnet.ibm.com>
  David Carlier <devnexen@gmail.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Derek Su <dereksu@qnap.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Emilio G. Cota <cota@braap.org>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Evgeny Yakovlev <eyakovlev@virtuozzo.com>
  fangying <fangying1@huawei.com>
  Farhan Ali <alifm@linux.ibm.com>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Geoffrey McRae <geoff@hostfission.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Giuseppe Musacchio <thatlemon@gmail.com>
  Greg Kurz <groug@kaod.org>
  Guenter Roeck <linux@roeck-us.net>
  Gustavo Romero <gromero@linux.ibm.com>
  Halil Pasic <pasic@linux.ibm.com>
  Helge Deller <deller@gmx.de>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Ian Jiang <ianjiang.ict@gmail.com>
  Igor Mammedov <imammedo@redhat.com>
  Janne Grunau <j@jannau.net>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Wang <jasowang@redhat.com>
  Jay Zhou <jianjay.zhou@huawei.com>
  Jean-Christophe Dubois <jcd@tribudubois.net>
  Jessica Clarke <jrtc27@jrtc27.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jingqi Liu <jingqi.liu@intel.com>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Joseph Myers <joseph@codesourcery.com>
  Julio Faracco <jcfaracco@gmail.com>
  Keqian Zhu <zhukeqian1@huawei.com>
  Kevin Wolf <kwolf@redhat.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Leif Lindholm <leif@nuviainc.com>
  Leonid Bloch <lb.workbox@gmail.com>
  Leonid Bloch <lbloch@janustech.com>
  Li Qiang <liq3ea@gmail.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  lichun <lichun@ruijie.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  Like Xu <like.xu@linux.intel.com>
  Lingfeng Yang <lfy@google.com>
  Lingshan zhu <lingshan.zhu@intel.com>
  Liran Alon <liran.alon@oracle.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Lukas Straub <lukasstraub2@web.de>
  Maciej S. Szmigiero <maciej.szmigiero@oracle.com>
  Magnus Damm <magnus.damm@gmail.com>
  Mao Zhongyi <maozhongyi@cmss.chinamobile.com>
  Marcelo Tosatti <mtosatti@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Masahiro Yamada <masahiroy@kernel.org>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Maxime Coquelin <maxime.coquelin@redhat.com>
  Menno Lageman <menno.lageman@oracle.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michele Denber <denber@mindspring.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Radoslaw Biernacki <rad@semihalf.com>
  Raphael Norwitz <raphael.norwitz@nutanix.com>
  Richard Henderson <richard.henderson@linaro.org>
  Richard W.M. Jones <rjones@redhat.com>
  Riku Voipio <riku.voipio@linaro.org>
  Robert Foley <robert.foley@linaro.org>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Roman Kagan <rkagan@virtuozzo.com>
  Roman Kagan <rvkagan@yandex-team.ru>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Sebastian Rasmussen <sebras@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Tao Xu <tao3.xu@intel.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tiwei Bie <tiwei.bie@intel.com>
  Tong Ho <tong.ho@xilinx.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  WangBowen <bowen.wang@intel.com>
  Wei Huang <wei.huang2@amd.com>
  Wei Wang <wei.w.wang@intel.com>
  Ying Fang <fangying1@huawei.com>
  Yoshinori Sato <ysato@users.sourceforge.jp>
  Yuri Benditovich <yuri.benditovich@daynix.com>
  Zhang Chen <chen.zhang@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 22498 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Jul 11 06:33:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 11 Jul 2020 06:33:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ju94h-0005v1-31; Sat, 11 Jul 2020 06:33:27 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+oRR=AW=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1ju94g-0005ud-1x
 for xen-devel@lists.xenproject.org; Sat, 11 Jul 2020 06:33:26 +0000
X-Inumbo-ID: 6976abfc-c340-11ea-bb8b-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6976abfc-c340-11ea-bb8b-bc764e2007e4;
 Sat, 11 Jul 2020 06:33:20 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=w6q3aj7JzV+GzJiWAc8oks4Zx/08dL3M+K+c/4T1yaY=; b=iGf8ENs6gGc7Fz4WQl7NUZCXK
 pF4VV3NEts5vQqN3vkC8PSC4yRz63EjKP3PL4EZGpHlmDMwRu3RgcxcIvdQiK95awR4GwozEuEIHK
 H+YQxbKFjHAg2AoNoIzPTS6t41vvP7uqkkIO6ukpv7wLfZPO6ot42YAKHoAi0gsfzGsuI=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ju94Z-0001Lu-AX; Sat, 11 Jul 2020 06:33:19 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ju94Y-000081-Qy; Sat, 11 Jul 2020 06:33:18 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1ju94Y-0002sF-QI; Sat, 11 Jul 2020 06:33:18 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151812-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 151812: all pass - PUSHED
X-Osstest-Versions-This: ovmf=f7f1b33282b7dc52a0c77860d3f4523b231a750f
X-Osstest-Versions-That: ovmf=bdafda8c457eb90c65f37026589b54258300f05c
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 11 Jul 2020 06:33:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151812 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151812/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 f7f1b33282b7dc52a0c77860d3f4523b231a750f
baseline version:
 ovmf                 bdafda8c457eb90c65f37026589b54258300f05c

Last test of basis   151725  2020-07-07 23:40:20 Z    3 days
Testing same since   151812  2020-07-10 22:11:49 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Guo Dong <guo.dong@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   bdafda8c45..f7f1b33282  f7f1b33282b7dc52a0c77860d3f4523b231a750f -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Sat Jul 11 07:30:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 11 Jul 2020 07:30:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ju9xr-0002Kq-3g; Sat, 11 Jul 2020 07:30:27 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+oRR=AW=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1ju9xp-0002KW-J7
 for xen-devel@lists.xenproject.org; Sat, 11 Jul 2020 07:30:25 +0000
X-Inumbo-ID: 5f82194e-c348-11ea-bb8b-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5f82194e-c348-11ea-bb8b-bc764e2007e4;
 Sat, 11 Jul 2020 07:30:19 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=WbpK5L7UI8AQf5nG5ks1543yV5trnN7/4NO6pqXqvmU=; b=gRaoVUDy8nGxVf6c2MXRUVANu
 /ciecqPrl0HVa/xf1cptr506KjIEM2EViXJOVOGkWtr12Uf5TjJ+YdIuRX/uDRKQ6oPrn/cbHfZoG
 HcReoHVjHsEI2kEgSVG56gJ0fc+qYW3tWHkRO3x/Cgx/2z4KRGEWDxUoZRGsAcp46fOB4=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ju9xj-0002NT-BF; Sat, 11 Jul 2020 07:30:19 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ju9xi-0002rD-OU; Sat, 11 Jul 2020 07:30:18 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1ju9xi-0003ZE-Nx; Sat, 11 Jul 2020 07:30:18 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151818-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 151818: regressions - FAIL
X-Osstest-Failures: libvirt:build-amd64-libvirt:libvirt-build:fail:regression
 libvirt:build-i386-libvirt:libvirt-build:fail:regression
 libvirt:build-arm64-libvirt:libvirt-build:fail:regression
 libvirt:build-armhf-libvirt:libvirt-build:fail:regression
 libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
 libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
X-Osstest-Versions-This: libvirt=866bb996441cd73800643df9fb7df2939bdb75bd
X-Osstest-Versions-That: libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 11 Jul 2020 07:30:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151818 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151818/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              866bb996441cd73800643df9fb7df2939bdb75bd
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z    1 days
Testing same since   151818  2020-07-11 04:18:52 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Laine Stump <laine@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Pavel Hrdina <phrdina@redhat.com>
  Prathamesh Chavan <pc44800@gmail.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 691 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Jul 11 08:52:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 11 Jul 2020 08:52:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1juBF4-0000wO-Nd; Sat, 11 Jul 2020 08:52:18 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+oRR=AW=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1juBF3-0000vw-Rg
 for xen-devel@lists.xenproject.org; Sat, 11 Jul 2020 08:52:17 +0000
X-Inumbo-ID: ceb9db48-c353-11ea-bb8b-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ceb9db48-c353-11ea-bb8b-bc764e2007e4;
 Sat, 11 Jul 2020 08:52:10 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=ZYN+LolpZ/TsMsIJSNvtXaKraY2BJNBH4QnVwU4TMRM=; b=ZieWTy0nH9x126EsA/ucfRTto
 LLucz4yCESVoHstirIXXGRPlSvyA/wiYaAieiVAVI6q27UmlohZcmIAMBQLtAJGwC/Jl86OGW2mgo
 fdjv9kbPt2Vj6C3A6p2mtbQs0ZiIGms/4ZPofJhuPCg9+QzDdvPr9GAlt3Ml5aJhOENck=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1juBEw-0004Ng-BA; Sat, 11 Jul 2020 08:52:10 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1juBEw-0006SM-2j; Sat, 11 Jul 2020 08:52:10 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1juBEw-0007WA-1z; Sat, 11 Jul 2020 08:52:10 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151809-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 151809: tolerable FAIL - PUSHED
X-Osstest-Failures: xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=02d69864b51a4302a148c28d6d391238a6778b4b
X-Osstest-Versions-That: xen=3fdc211b01b29f252166937238efe02d15cb5780
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 11 Jul 2020 08:52:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151809 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151809/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 151774
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 151774
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 151774
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 151774
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 151774
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 151774
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 151774
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 151774
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop             fail like 151774
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  02d69864b51a4302a148c28d6d391238a6778b4b
baseline version:
 xen                  3fdc211b01b29f252166937238efe02d15cb5780

Last test of basis   151774  2020-07-10 02:00:25 Z    1 days
Testing same since   151809  2020-07-10 20:06:54 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Julien Grall <jgrall@amazon.com>
  Stefano Stabellini <sstabellini@kernel.org>
  Stefano Stabellini <stefano.stabellini@xilinx.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   3fdc211b01..02d69864b5  02d69864b51a4302a148c28d6d391238a6778b4b -> master


From xen-devel-bounces@lists.xenproject.org Sat Jul 11 11:42:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 11 Jul 2020 11:42:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1juDt1-0006Xw-Mg; Sat, 11 Jul 2020 11:41:43 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+oRR=AW=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1juDt0-0006Xc-Ue
 for xen-devel@lists.xenproject.org; Sat, 11 Jul 2020 11:41:43 +0000
X-Inumbo-ID: 79c53e26-c36b-11ea-9097-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 79c53e26-c36b-11ea-9097-12813bfff9fa;
 Sat, 11 Jul 2020 11:41:36 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=z4FciB8UI8w+0y79oApzsWQEAZIo2TFO58Lu4PLBT9Y=; b=brMoO9imEN0pF0gcTqGYyFPi7
 CZkyTl1FXT/OmlyLedhQ6y8KZg/CiM5CxUG3qTVjex1Cys5zB9BBeTfrr5lwAJvGCDwtATupr8gq+
 wQBBMf8JOy4jFHlnFhhUVV7rQcRXOnLfN0suLV7uK81BXhoqdJS9cbOiFPhQXkkCHptLg=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1juDst-0007VC-Dl; Sat, 11 Jul 2020 11:41:35 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1juDss-0004Hm-UN; Sat, 11 Jul 2020 11:41:35 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1juDss-0001ph-Sx; Sat, 11 Jul 2020 11:41:34 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151815-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 151815: regressions - FAIL
X-Osstest-Failures: linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
 linux-linus:test-arm64-arm64-libvirt-xsm:guest-start/debian.repeat:fail:regression
 linux-linus:build-i386-pvops:kernel-build:fail:regression
 linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-pair:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-examine:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
X-Osstest-Versions-This: linux=aa0c9086b40c17a7ad94425b3b70dd1fdd7497bf
X-Osstest-Versions-That: linux=1b5044021070efa3259f3e9548dc35d1eb6aa844
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 11 Jul 2020 11:41:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151815 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151815/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-xl-xsm       7 xen-boot                 fail REGR. vs. 151214
 test-arm64-arm64-libvirt-xsm 16 guest-start/debian.repeat fail REGR. vs. 151214
 build-i386-pvops              6 kernel-build             fail REGR. vs. 151214

Tests which did not succeed, but are not blocking:
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl-qemut-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-examine       1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 151214
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 151214
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 151214
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 151214
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 151214
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 151214
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass

version targeted for testing:
 linux                aa0c9086b40c17a7ad94425b3b70dd1fdd7497bf
baseline version:
 linux                1b5044021070efa3259f3e9548dc35d1eb6aa844

Last test of basis   151214  2020-06-18 02:27:46 Z   23 days
Failing since        151236  2020-06-19 19:10:35 Z   21 days   33 attempts
Testing same since   151815  2020-07-11 02:58:08 Z    0 days    1 attempts

------------------------------------------------------------
645 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             fail    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         blocked 
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 32369 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Jul 11 12:23:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 11 Jul 2020 12:23:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1juEXB-0001Ym-Go; Sat, 11 Jul 2020 12:23:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+oRR=AW=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1juEXA-0001Yh-1J
 for xen-devel@lists.xenproject.org; Sat, 11 Jul 2020 12:23:12 +0000
X-Inumbo-ID: 48fa04f6-c371-11ea-b7bb-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 48fa04f6-c371-11ea-b7bb-bc764e2007e4;
 Sat, 11 Jul 2020 12:23:11 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=ubHWDZ09bX3Bj0JS2nhjC15y0APnhYXwuiAi/5enjkU=; b=N/mpGFVweV87ks1t3ko5OIvVr
 ov6wxkllpwc0Psey2RFZA6Sx4qlecnEtOgnI9ukldKCJuMIQsq6OxLumy+Teb9Nqx2l8pPoFkEe8R
 TgT2n7biZMBrLQklf42AfywGG+xFxVA0QdMrOdN4tXTv1shwyoevhlRko7/cfoS4auzAc=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1juEX8-0008Iw-ON; Sat, 11 Jul 2020 12:23:10 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1juEX8-0005ho-Gt; Sat, 11 Jul 2020 12:23:10 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1juEX8-0001nT-Dn; Sat, 11 Jul 2020 12:23:10 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151821-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 151821: all pass - PUSHED
X-Osstest-Versions-This: ovmf=f45e3a4afa65a45ea1a956a7c5e7410ff40190d1
X-Osstest-Versions-That: ovmf=f7f1b33282b7dc52a0c77860d3f4523b231a750f
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 11 Jul 2020 12:23:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151821 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151821/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 f45e3a4afa65a45ea1a956a7c5e7410ff40190d1
baseline version:
 ovmf                 f7f1b33282b7dc52a0c77860d3f4523b231a750f

Last test of basis   151812  2020-07-10 22:11:49 Z    0 days
Testing same since   151821  2020-07-11 06:33:51 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jiewen Yao <jiewen.yao@intel.com>
  Marcello Sylvester Bauer <marcello.bauer@9elements.com>
  Patrick Rudolph <patrick.rudolph@9elements.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   f7f1b33282..f45e3a4afa  f45e3a4afa65a45ea1a956a7c5e7410ff40190d1 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Sat Jul 11 12:52:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 11 Jul 2020 12:52:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1juEzV-0003zK-Sq; Sat, 11 Jul 2020 12:52:29 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=RtAK=AW=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1juEzV-0003zF-0W
 for xen-devel@lists.xenproject.org; Sat, 11 Jul 2020 12:52:29 +0000
X-Inumbo-ID: 5f9163b8-c375-11ea-90a7-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 5f9163b8-c375-11ea-90a7-12813bfff9fa;
 Sat, 11 Jul 2020 12:52:27 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 723E5ACC6;
 Sat, 11 Jul 2020 12:52:27 +0000 (UTC)
From: Juergen Gross <jgross@suse.com>
To: torvalds@linux-foundation.org
Subject: [GIT PULL] xen: branch for v5.8-rc5
Date: Sat, 11 Jul 2020 14:52:24 +0200
Message-Id: <20200711125224.14225-1-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com,
 linux-kernel@vger.kernel.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Linus,

Please git pull the following tag:

 git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip.git for-linus-5.8b-rc5-tag

xen: branch for v5.8-rc5

It is just one fix of a recent patch (double free in an error path).

Thanks.

Juergen

 drivers/xen/xenbus/xenbus_client.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

Dan Carpenter (1):
      xen/xenbus: Fix a double free in xenbus_map_ring_pv()


From xen-devel-bounces@lists.xenproject.org Sat Jul 11 14:11:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 11 Jul 2020 14:11:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1juGDv-00025l-3l; Sat, 11 Jul 2020 14:11:27 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=MSc6=AW=linaro.org=peter.maydell@srs-us1.protection.inumbo.net>)
 id 1juGDt-00025g-Ju
 for xen-devel@lists.xenproject.org; Sat, 11 Jul 2020 14:11:25 +0000
X-Inumbo-ID: 67899120-c380-11ea-bca7-bc764e2007e4
Received: from mail-oo1-xc34.google.com (unknown [2607:f8b0:4864:20::c34])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 67899120-c380-11ea-bca7-bc764e2007e4;
 Sat, 11 Jul 2020 14:11:25 +0000 (UTC)
Received: by mail-oo1-xc34.google.com with SMTP id a9so1537064oof.12
 for <xen-devel@lists.xenproject.org>; Sat, 11 Jul 2020 07:11:25 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc; bh=yzLl2PgbI1FNgby9NDzTcsup/gjp2Uex/YQuS8VmxYU=;
 b=qM2wRNus0twBuJctyCrhetAiA4H4O4PAUSsppX2YFQjOSiMUK3kMPVEhKqZQZF4g55
 DLBzmmcC8kLbOMDUTqN++GqlT6pM/Mb/7XridGwQkzYUGvCLjjCH0gy4HxgtLxe79P8c
 B18fyTtOpy+52VVatjOgMHPBmoC+4DTRyIU1gfxYUGlzABbyQgizczkL8X1hiKHYqdYF
 +YmgiVVjQBBs507FU9zEFxKTUDSFt6mVOFDt2jmNNia2RW2pcAK2Fj5Hp8CA9vsacnM1
 knv5h8ZfRwOFFyzCR3M0YnHn47bLPxC6H0YJA22vxBzeiaDsjj9wMk8xE0kzfGut8R1t
 g2hA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc;
 bh=yzLl2PgbI1FNgby9NDzTcsup/gjp2Uex/YQuS8VmxYU=;
 b=joK06mwv+yd+PnwhnDwNlpm1UhaBcMAjaTIrPwqHM6aPHTSouoGP+66l0Dgi58apTT
 AhTQhNGGmhsp4NV0OxdZgevSV6roVTM7irX1ziY7ZAwYn7sJe/jI6uqjYow7kLjOQFA/
 dwQ0YkThKULA47CpFfwsivNIvmLFR6oqQmgasVvBPbyRSckQNOCfuI9sRvWGd4EFMsd8
 Dhj/fgObxovLbJd6rSue4Heqhb1vT4gRUHhyzH2pv1P6bHrFZdtzxPNG4ANTBIm5jUg6
 7y+xKdnIFNtUpbhMhs6AIn4szH3F73RT3oMaFmtREk90nfOX9T2eTPagATXIcAgW8qxJ
 VPoA==
X-Gm-Message-State: AOAM532vYSswLTROT92dtoPr89XHNziEWz/ISXtPNmaCH6mYBg901jH4
 3f7xBZ8OsKpAXe7hDbyURrz2wTCnvPy0OLQP45uwvd7D
X-Google-Smtp-Source: ABdhPJzcy6k5v3Yt8/825njvJvXCdsdLtrZUgpv74mVPfIjDgfyzHYzYagTuYX8ycfvp2QWcSD74nWhxOdboT6Ly/mo=
X-Received: by 2002:a4a:9653:: with SMTP id r19mr41397241ooi.85.1594476684538; 
 Sat, 11 Jul 2020 07:11:24 -0700 (PDT)
MIME-Version: 1.0
References: <20200710131145.589476-1-anthony.perard@citrix.com>
In-Reply-To: <20200710131145.589476-1-anthony.perard@citrix.com>
From: Peter Maydell <peter.maydell@linaro.org>
Date: Sat, 11 Jul 2020 15:11:13 +0100
Message-ID: <CAFEAcA__qp9jrvdw7Zt6_y_Z9NjEtn+5arsds9cWpoWF=doYSA@mail.gmail.com>
Subject: Re: [PULL 0/2] xen queue 2020-07-10
To: Anthony PERARD <anthony.perard@citrix.com>
Content-Type: text/plain; charset="UTF-8"
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "open list:X86" <xen-devel@lists.xenproject.org>,
 QEMU Developers <qemu-devel@nongnu.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Fri, 10 Jul 2020 at 14:11, Anthony PERARD <anthony.perard@citrix.com> wrote:
>
> The following changes since commit b6d7e9b66f59ca6ebc6e9b830cd5e7bf849d31cf:
>
>   Merge remote-tracking branch 'remotes/stefanha/tags/tracing-pull-request' into staging (2020-07-10 09:01:28 +0100)
>
> are available in the Git repository at:
>
>   https://xenbits.xen.org/git-http/people/aperard/qemu-dm.git tags/pull-xen-20200710
>
> for you to fetch changes up to dd29b5c30cd2a13f8c12376a8de84cb090c338bf:
>
>   xen: cleanup unrealized flash devices (2020-07-10 13:49:16 +0100)
>
> ----------------------------------------------------------------
> xen patches
>
> Fixes following harden checks in qdev.
>
> ----------------------------------------------------------------


Applied, thanks.

Please update the changelog at https://wiki.qemu.org/ChangeLog/5.1
for any user-visible changes.

-- PMM


From xen-devel-bounces@lists.xenproject.org Sat Jul 11 15:02:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 11 Jul 2020 15:02:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1juH0e-0006O6-HR; Sat, 11 Jul 2020 15:01:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+oRR=AW=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1juH0d-0006O1-BD
 for xen-devel@lists.xenproject.org; Sat, 11 Jul 2020 15:01:47 +0000
X-Inumbo-ID: 6f84f87c-c387-11ea-bb8b-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6f84f87c-c387-11ea-bb8b-bc764e2007e4;
 Sat, 11 Jul 2020 15:01:44 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=ABIaLQarKdMv5X82/rRCK4CsWsTPCYJZHBXMBvjNdmI=; b=4JPDuU+5UVVI8Ou5rMsVLbH7U
 UWHYFAuTPQBLZMeIwAuCX/3hZT09b9/lQT7W6agavHJE0uwmb8dyYy3Caeu1mUwtwJ3h9baF06rlD
 NE9Tj8rBfbDX6xnZ1V1hvgWD6+15d6beKxvSSCwoR+8d5XqSESpHrI6nN7dqvUWViwOuU=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1juH0a-0002nU-0D; Sat, 11 Jul 2020 15:01:44 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1juH0Z-0005Mw-G3; Sat, 11 Jul 2020 15:01:43 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1juH0Z-0003JA-FN; Sat, 11 Jul 2020 15:01:43 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151816-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 151816: regressions - FAIL
X-Osstest-Failures: qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-amd64-libvirt-vhd:debian-di-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-libvirt-xsm:guest-start:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qcow2:debian-di-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:redhat-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-i386-libvirt-pair:guest-start/debian:fail:regression
 qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-freebsd10-i386:guest-start:fail:regression
 qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:redhat-install:fail:regression
 qemu-mainline:test-amd64-amd64-libvirt-xsm:guest-start:fail:regression
 qemu-mainline:test-amd64-amd64-libvirt:guest-start:fail:regression
 qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-start:fail:regression
 qemu-mainline:test-armhf-armhf-xl-vhd:debian-di-install:fail:regression
 qemu-mainline:test-amd64-amd64-libvirt-pair:guest-start/debian:fail:regression
 qemu-mainline:test-armhf-armhf-libvirt:guest-start:fail:regression
 qemu-mainline:test-arm64-arm64-libvirt-xsm:guest-start:fail:regression
 qemu-mainline:test-armhf-armhf-libvirt-raw:debian-di-install:fail:regression
 qemu-mainline:test-amd64-i386-libvirt:guest-start:fail:regression
 qemu-mainline:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
 qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: qemuu=45db94cc90c286a9965a285ba19450f448760a09
X-Osstest-Versions-That: qemuu=9e3903136d9acde2fb2dd9e967ba928050a6cb4a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 11 Jul 2020 15:01:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151816 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151816/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemuu-ovmf-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-win7-amd64 10 windows-install  fail REGR. vs. 151065
 test-amd64-amd64-libvirt-vhd 10 debian-di-install        fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-win7-amd64 10 windows-install   fail REGR. vs. 151065
 test-amd64-amd64-qemuu-nested-intel 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-libvirt-xsm  12 guest-start              fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-ws16-amd64 10 windows-install  fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-qemuu-nested-amd 10 debian-hvm-install  fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-ovmf-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qcow2    10 debian-di-install        fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-qemuu-rhel6hvm-amd 10 redhat-install     fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-debianhvm-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-ws16-amd64 10 windows-install   fail REGR. vs. 151065
 test-amd64-i386-libvirt-pair 21 guest-start/debian       fail REGR. vs. 151065
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-freebsd10-i386 11 guest-start            fail REGR. vs. 151065
 test-amd64-i386-qemuu-rhel6hvm-intel 10 redhat-install   fail REGR. vs. 151065
 test-amd64-amd64-libvirt-xsm 12 guest-start              fail REGR. vs. 151065
 test-amd64-amd64-libvirt     12 guest-start              fail REGR. vs. 151065
 test-amd64-i386-freebsd10-amd64 11 guest-start           fail REGR. vs. 151065
 test-armhf-armhf-xl-vhd      10 debian-di-install        fail REGR. vs. 151065
 test-amd64-amd64-libvirt-pair 21 guest-start/debian      fail REGR. vs. 151065
 test-armhf-armhf-libvirt     12 guest-start              fail REGR. vs. 151065
 test-arm64-arm64-libvirt-xsm 12 guest-start              fail REGR. vs. 151065
 test-armhf-armhf-libvirt-raw 10 debian-di-install        fail REGR. vs. 151065
 test-amd64-i386-libvirt      12 guest-start              fail REGR. vs. 151065

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-rtds     16 guest-start/debian.repeat    fail  like 151065
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                45db94cc90c286a9965a285ba19450f448760a09
baseline version:
 qemuu                9e3903136d9acde2fb2dd9e967ba928050a6cb4a

Last test of basis   151065  2020-06-12 22:27:51 Z   28 days
Failing since        151101  2020-06-14 08:32:51 Z   27 days   36 attempts
Testing same since   151804  2020-07-10 18:37:56 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ahmed Karaman <ahmedkhaledkaraman@gmail.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Bulekov <alxndr@bu.edu>
  Alexey Krasikov <alex-krasikov@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Allan Peramaki <aperamak@pp1.inet.fi>
  Andrew Jones <drjones@redhat.com>
  Ani Sinha <ani.sinha@nutanix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Ard Biesheuvel <ardb@kernel.org>
  Artyom Tarasenko <atar4qemu@gmail.com>
  Aurelien Jarno <aurelien@aurel32.net>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Beata Michalska <beata.michalska@linaro.org>
  Bin Meng <bin.meng@windriver.com>
  Catherine A. Frederick <chocola@animebitch.es>
  Cathy Zhang <cathy.zhang@intel.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christophe de Dinechin <dinechin@redhat.com>
  Cindy Lu <lulu@redhat.com>
  Claudio Fontana <cfontana@suse.de>
  Cleber Rosa <crosa@redhat.com>
  Colin Xu <colin.xu@intel.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniele Buono <dbuono@linux.vnet.ibm.com>
  David Carlier <devnexen@gmail.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Derek Su <dereksu@qnap.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Emilio G. Cota <cota@braap.org>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Evgeny Yakovlev <eyakovlev@virtuozzo.com>
  fangying <fangying1@huawei.com>
  Farhan Ali <alifm@linux.ibm.com>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Geoffrey McRae <geoff@hostfission.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Giuseppe Musacchio <thatlemon@gmail.com>
  Greg Kurz <groug@kaod.org>
  Guenter Roeck <linux@roeck-us.net>
  Gustavo Romero <gromero@linux.ibm.com>
  Halil Pasic <pasic@linux.ibm.com>
  Helge Deller <deller@gmx.de>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Ian Jiang <ianjiang.ict@gmail.com>
  Igor Mammedov <imammedo@redhat.com>
  Janne Grunau <j@jannau.net>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Wang <jasowang@redhat.com>
  Jay Zhou <jianjay.zhou@huawei.com>
  Jean-Christophe Dubois <jcd@tribudubois.net>
  Jessica Clarke <jrtc27@jrtc27.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jingqi Liu <jingqi.liu@intel.com>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Joseph Myers <joseph@codesourcery.com>
  Julio Faracco <jcfaracco@gmail.com>
  Keqian Zhu <zhukeqian1@huawei.com>
  Kevin Wolf <kwolf@redhat.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Leif Lindholm <leif@nuviainc.com>
  Leonid Bloch <lb.workbox@gmail.com>
  Leonid Bloch <lbloch@janustech.com>
  Li Qiang <liq3ea@gmail.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  lichun <lichun@ruijie.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  Like Xu <like.xu@linux.intel.com>
  Lingfeng Yang <lfy@google.com>
  Lingshan zhu <lingshan.zhu@intel.com>
  Liran Alon <liran.alon@oracle.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Lukas Straub <lukasstraub2@web.de>
  Maciej S. Szmigiero <maciej.szmigiero@oracle.com>
  Magnus Damm <magnus.damm@gmail.com>
  Mao Zhongyi <maozhongyi@cmss.chinamobile.com>
  Marcelo Tosatti <mtosatti@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Masahiro Yamada <masahiroy@kernel.org>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Maxime Coquelin <maxime.coquelin@redhat.com>
  Menno Lageman <menno.lageman@oracle.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michele Denber <denber@mindspring.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Radoslaw Biernacki <rad@semihalf.com>
  Raphael Norwitz <raphael.norwitz@nutanix.com>
  Richard Henderson <richard.henderson@linaro.org>
  Richard W.M. Jones <rjones@redhat.com>
  Riku Voipio <riku.voipio@linaro.org>
  Robert Foley <robert.foley@linaro.org>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Roman Kagan <rkagan@virtuozzo.com>
  Roman Kagan <rvkagan@yandex-team.ru>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Sebastian Rasmussen <sebras@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Tao Xu <tao3.xu@intel.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tiwei Bie <tiwei.bie@intel.com>
  Tong Ho <tong.ho@xilinx.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  WangBowen <bowen.wang@intel.com>
  Wei Huang <wei.huang2@amd.com>
  Wei Wang <wei.w.wang@intel.com>
  Ying Fang <fangying1@huawei.com>
  Yoshinori Sato <ysato@users.sourceforge.jp>
  Yuri Benditovich <yuri.benditovich@daynix.com>
  Zhang Chen <chen.zhang@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 22498 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Jul 11 18:25:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 11 Jul 2020 18:25:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1juKBP-0006B7-IX; Sat, 11 Jul 2020 18:25:07 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=XzhJ=AW=kernel.org=pr-tracker-bot@srs-us1.protection.inumbo.net>)
 id 1juKBO-0006B2-C2
 for xen-devel@lists.xenproject.org; Sat, 11 Jul 2020 18:25:06 +0000
X-Inumbo-ID: d779f4e8-c3a3-11ea-90e6-12813bfff9fa
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d779f4e8-c3a3-11ea-90e6-12813bfff9fa;
 Sat, 11 Jul 2020 18:25:05 +0000 (UTC)
Subject: Re: [GIT PULL] xen: branch for v5.8-rc5
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
 s=default; t=1594491904;
 bh=aRDoA7AMBzG3Y1+sI4J7hKyfzMcAetRpZkvTH+3d4yo=;
 h=From:In-Reply-To:References:Date:To:Cc:From;
 b=uMms6/j75GgQiY8o2tjDogC6jOYWJ/FYmW0coQHJcc1ZAqeGqt0CCuSO23Xx8oXg5
 aH+NVMDVwv0f/Mgc9HPllsQl7L43Ul0p5MmPbOiIVUWrILTc6wZgWOVm+j3+X78Jbv
 eC2h//O3sFO+MB6X6WmB26Us4X9WPFDNtejcA0/I=
From: pr-tracker-bot@kernel.org
In-Reply-To: <20200711125224.14225-1-jgross@suse.com>
References: <20200711125224.14225-1-jgross@suse.com>
X-PR-Tracked-List-Id: <linux-kernel.vger.kernel.org>
X-PR-Tracked-Message-Id: <20200711125224.14225-1-jgross@suse.com>
X-PR-Tracked-Remote: git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip.git
 for-linus-5.8b-rc5-tag
X-PR-Tracked-Commit-Id: ba8c423488974f02b538e9dc1730f0334f9b85aa
X-PR-Merge-Tree: torvalds/linux.git
X-PR-Merge-Refname: refs/heads/master
X-PR-Merge-Commit-Id: 0aea6d5c5be33ce94c16f9ab2f64de1f481f424b
Message-Id: <159449190473.25373.7244067193903548460.pr-tracker-bot@kernel.org>
Date: Sat, 11 Jul 2020 18:25:04 +0000
To: Juergen Gross <jgross@suse.com>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com,
 torvalds@linux-foundation.org, linux-kernel@vger.kernel.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

The pull request you sent on Sat, 11 Jul 2020 14:52:24 +0200:

> git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip.git for-linus-5.8b-rc5-tag

has been merged into torvalds/linux.git:
https://git.kernel.org/torvalds/c/0aea6d5c5be33ce94c16f9ab2f64de1f481f424b

Thank you!

-- 
Deet-doot-dot, I am a bot.
https://korg.wiki.kernel.org/userdoc/prtracker


From xen-devel-bounces@lists.xenproject.org Sat Jul 11 18:44:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 11 Jul 2020 18:44:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1juKUP-0007p4-89; Sat, 11 Jul 2020 18:44:45 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=EKeT=AW=redhat.com=mst@srs-us1.protection.inumbo.net>)
 id 1juKUN-0007ox-No
 for xen-devel@lists.xenproject.org; Sat, 11 Jul 2020 18:44:44 +0000
X-Inumbo-ID: 95197f81-c3a6-11ea-90f5-12813bfff9fa
Received: from us-smtp-delivery-1.mimecast.com (unknown [205.139.110.120])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 95197f81-c3a6-11ea-90f5-12813bfff9fa;
 Sat, 11 Jul 2020 18:44:42 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
 s=mimecast20190719; t=1594493082;
 h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
 in-reply-to:in-reply-to:references:references;
 bh=bS19/wWScaoigC28WtauFL/l6Ha+uxblMLUn/Mypy1E=;
 b=DQMnvWiiZflT0AWnTyRqDBlBueVW65Yir9ON4xHra06Sb1qqJ5IheuIH4SU+o1SGGeFc2Y
 0fG6vYIBtCaigfc6vl3GtrSfo23tjEwHVKqNfTiLkV20OXxmVb0CRguD/SXTI6DSC9VCYh
 M2zsNySXeufAyhmDxANWlwYjj6/5Qbc=
Received: from mail-wm1-f69.google.com (mail-wm1-f69.google.com
 [209.85.128.69]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-46-rwQCwpe6PQuZv95v1V8g5w-1; Sat, 11 Jul 2020 14:44:40 -0400
X-MC-Unique: rwQCwpe6PQuZv95v1V8g5w-1
Received: by mail-wm1-f69.google.com with SMTP id l5so11714297wml.7
 for <xen-devel@lists.xenproject.org>; Sat, 11 Jul 2020 11:44:40 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:date:from:to:cc:subject:message-id:references
 :mime-version:content-disposition:in-reply-to;
 bh=bS19/wWScaoigC28WtauFL/l6Ha+uxblMLUn/Mypy1E=;
 b=XPqvfsLYWtJ6GcSEiahaE6pxqZAFkj/7XBkZFOVnnP9denMB9rBTwsTLarzX/hBGq7
 25Ty5OzgC4jz4p8AGvIYvvuPd9qiqaD32zogpYv0Qa9eZ+ThDN9dfpQ8pUIU3L3hvtHf
 Pa7onfiDTOHrEcvBbcvSnTt7Zn4d4DKfGMeb+Y4OrTfRix39ZB/xERp9tTyxaKX2tML5
 8ZDPDYccL/ONnGssvcUeDD1yNTFOraVAzCclHiLaOAUieIl9g1gIyI/U/FK8iVQPRJr6
 Z0/0kxuo9LK3O0Wa5z+0L/LIiUHtOkjVMivg4lTPFbLf1jglUmpL4exdgLjTj8b06GQy
 a92Q==
X-Gm-Message-State: AOAM533nDvnXofHb/oy/kSjgQkRy5EEdRRvmskLrFz6gRbBUM3IszK5p
 aGGH74aGyW9tR48k3uV/EiO29Jvv/CC13y7F25iESSsMEL2p0KivM1RapNzioVPEq5z2346V0tI
 t/+tNvQwrTwnTmvhadTnYGilYArI=
X-Received: by 2002:a1c:4d11:: with SMTP id o17mr10838357wmh.134.1594493079797; 
 Sat, 11 Jul 2020 11:44:39 -0700 (PDT)
X-Google-Smtp-Source: ABdhPJygwQmxPTGDCpKObOVAAUSmALBrwy00qKV5C2K74YeE+tRXIP/JRrrSaIMIkdPE6mekfNCtDg==
X-Received: by 2002:a1c:4d11:: with SMTP id o17mr10838336wmh.134.1594493079525; 
 Sat, 11 Jul 2020 11:44:39 -0700 (PDT)
Received: from redhat.com (bzq-79-182-31-92.red.bezeqint.net. [79.182.31.92])
 by smtp.gmail.com with ESMTPSA id
 a84sm4541096wmh.47.2020.07.11.11.44.36
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sat, 11 Jul 2020 11:44:38 -0700 (PDT)
Date: Sat, 11 Jul 2020 14:44:34 -0400
From: "Michael S. Tsirkin" <mst@redhat.com>
To: Stefano Stabellini <sstabellini@kernel.org>
Subject: Re: [PATCH] xen: introduce xen_vring_use_dma
Message-ID: <20200711144334-mutt-send-email-mst@kernel.org>
References: <20200624163940-mutt-send-email-mst@kernel.org>
 <alpine.DEB.2.21.2006241351430.8121@sstabellini-ThinkPad-T480s>
 <20200624181026-mutt-send-email-mst@kernel.org>
 <alpine.DEB.2.21.2006251014230.8121@sstabellini-ThinkPad-T480s>
 <20200626110629-mutt-send-email-mst@kernel.org>
 <alpine.DEB.2.21.2006291621300.8121@sstabellini-ThinkPad-T480s>
 <20200701133456.GA23888@infradead.org>
 <alpine.DEB.2.21.2007011020320.8121@sstabellini-ThinkPad-T480s>
 <20200701172219-mutt-send-email-mst@kernel.org>
 <alpine.DEB.2.21.2007101019340.4124@sstabellini-ThinkPad-T480s>
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2007101019340.4124@sstabellini-ThinkPad-T480s>
Authentication-Results: relay.mimecast.com;
 auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=mst@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: jgross@suse.com, Peng Fan <peng.fan@nxp.com>, konrad.wilk@oracle.com,
 jasowang@redhat.com, x86@kernel.org, linux-kernel@vger.kernel.org,
 virtualization@lists.linux-foundation.org,
 Christoph Hellwig <hch@infradead.org>, iommu@lists.linux-foundation.org,
 linux-imx@nxp.com, xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com,
 linux-arm-kernel@lists.infradead.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Fri, Jul 10, 2020 at 10:23:22AM -0700, Stefano Stabellini wrote:
> Sorry for the late reply -- a couple of conferences kept me busy.
> 
> 
> On Wed, 1 Jul 2020, Michael S. Tsirkin wrote:
> > On Wed, Jul 01, 2020 at 10:34:53AM -0700, Stefano Stabellini wrote:
> > > Would you be in favor of a more flexible check along the lines of the
> > > one proposed in the patch that started this thread:
> > > 
> > >     if (xen_vring_use_dma())
> > >             return true;
> > > 
> > > 
> > > xen_vring_use_dma would be implemented so that it returns true when
> > > xen_swiotlb is required and false otherwise.
> > 
> > Just to stress - with a patch like this virtio can *still* use DMA API
> > if PLATFORM_ACCESS is set. So if DMA API is broken on some platforms
> > as you seem to be saying, you guys should fix it before doing something
> > like this..
> 
> Yes, DMA API is broken with some interfaces (specifically: rpmsg and
> trusty), but for them PLATFORM_ACCESS is never set. That is why the
> errors weren't reported before. Xen special case aside, there is no
> problem under normal circumstances.

So why not fix DMA API? Then this patch is not needed.


> 
> If you are OK with this patch (after a little bit of clean-up), Peng,
> are you OK with sending an update or do you want me to?
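
The helper discussed in this thread can be sketched as follows. This is a
minimal illustration only, not the actual patch: the `xen_swiotlb_enabled`
flag and its setup are assumptions standing in for however the real kernel
code records that the xen-swiotlb bounce-buffer path is required.

```c
#include <assert.h>
#include <stdbool.h>

/* Assumed flag: set once during Xen platform setup when the guest must
 * bounce I/O through xen-swiotlb (hypothetical; the in-kernel predicate
 * would differ). */
static bool xen_swiotlb_enabled;

/* Sketch of the proposed helper: virtio should take the DMA-API path
 * exactly when xen-swiotlb is required, and skip it otherwise. */
static inline bool xen_vring_use_dma(void)
{
        return xen_swiotlb_enabled;
}
```

A caller in vring setup would then do `if (xen_vring_use_dma()) return true;`
as in the quoted hunk, leaving non-Xen platforms to the existing
PLATFORM_ACCESS logic.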



From xen-devel-bounces@lists.xenproject.org Sat Jul 11 20:19:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 11 Jul 2020 20:19:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1juLy0-0006lu-Jb; Sat, 11 Jul 2020 20:19:24 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+oRR=AW=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1juLy0-0006la-2z
 for xen-devel@lists.xenproject.org; Sat, 11 Jul 2020 20:19:24 +0000
X-Inumbo-ID: cb7ed112-c3b3-11ea-b7bb-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id cb7ed112-c3b3-11ea-b7bb-bc764e2007e4;
 Sat, 11 Jul 2020 20:19:17 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=mXHJo7vYPLuCS2HlQb0qUinYKWji66rZtrBOtrOV+pw=; b=HHMim6ugKAfy+HJo36LkXInil
 NMN2PfmK4sKP13JJlNJ1tlNnSbeOlgON7BA+OXwbVLTq/MsTm1iDeFRMHWaH/wqd5QwsT0lQttGUi
 JD+1tcBk8yRkF2HznmVxWvkZkGU7Ntx1mlpK1OM7JHuK+qyshffACwqmF6mU8kXnakMa8=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1juLxs-0000kO-2L; Sat, 11 Jul 2020 20:19:16 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1juLxr-0007gf-RL; Sat, 11 Jul 2020 20:19:15 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1juLxr-0003Q0-QI; Sat, 11 Jul 2020 20:19:15 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151824-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 151824: tolerable FAIL
X-Osstest-Failures: xen-unstable:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:heisenbug
 xen-unstable:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
 xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=02d69864b51a4302a148c28d6d391238a6778b4b
X-Osstest-Versions-That: xen=02d69864b51a4302a148c28d6d391238a6778b4b
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 11 Jul 2020 20:19:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151824 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151824/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl-rtds     16 guest-start/debian.repeat  fail pass in 151809
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 10 debian-hvm-install fail pass in 151809

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 151809
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 151809
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 151809
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 151809
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 151809
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 151809
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 151809
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 151809
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop             fail like 151809
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  02d69864b51a4302a148c28d6d391238a6778b4b
baseline version:
 xen                  02d69864b51a4302a148c28d6d391238a6778b4b

Last test of basis   151824  2020-07-11 08:53:50 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Sat Jul 11 21:51:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 11 Jul 2020 21:51:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1juNOf-0006F0-K2; Sat, 11 Jul 2020 21:51:01 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+oRR=AW=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1juNOe-0006Eg-5I
 for xen-devel@lists.xenproject.org; Sat, 11 Jul 2020 21:51:00 +0000
X-Inumbo-ID: 972b7f34-c3c0-11ea-911b-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 972b7f34-c3c0-11ea-911b-12813bfff9fa;
 Sat, 11 Jul 2020 21:50:52 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=MmmPOaqznnUxhPG4iNhT/JxgHKF7lbyyrOf323SDspw=; b=MKh82UM1GICm5sDe/XGFnI1fg
 Xut/NEjteq1Bjx2cUgj8rQUXpQAHUVQJKWX+YiGLNY90D24LhiT5Ol8+wNQjp9nZ/54rID+jw3jFk
 ooOVo2vJuGwvhm4pRWfhL6ota7pcQAd9tyXoD2HTFnGItvBGZyud6Cv1rgG3diR47tOxs=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1juNOV-0002QA-U4; Sat, 11 Jul 2020 21:50:51 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1juNOV-00043g-LA; Sat, 11 Jul 2020 21:50:51 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1juNOV-00044Z-KW; Sat, 11 Jul 2020 21:50:51 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151830-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 151830: regressions - FAIL
X-Osstest-Failures: linux-linus:test-arm64-arm64-libvirt-xsm:guest-start/debian.repeat:fail:regression
 linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:regression
 linux-linus:build-i386-pvops:kernel-build:fail:regression
 linux-linus:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:allowable
 linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-examine:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-pair:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: linux=1df0d8960499e58963fd6c8ac75e544f2b417b29
X-Osstest-Versions-That: linux=1b5044021070efa3259f3e9548dc35d1eb6aa844
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 11 Jul 2020 21:50:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151830 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151830/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-libvirt-xsm 16 guest-start/debian.repeat fail REGR. vs. 151214
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 151214
 build-i386-pvops              6 kernel-build             fail REGR. vs. 151214

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds    16 guest-start/debian.repeat fail REGR. vs. 151214

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-examine       1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemut-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 151214
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 151214
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 151214
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 151214
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 151214
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 151214
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                1df0d8960499e58963fd6c8ac75e544f2b417b29
baseline version:
 linux                1b5044021070efa3259f3e9548dc35d1eb6aa844

Last test of basis   151214  2020-06-18 02:27:46 Z   23 days
Failing since        151236  2020-06-19 19:10:35 Z   22 days   34 attempts
Testing same since   151830  2020-07-11 12:12:38 Z    0 days    1 attempts

------------------------------------------------------------
710 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             fail    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         blocked 
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 37258 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Jul 11 21:59:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 11 Jul 2020 21:59:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1juNWa-0006Ti-Li; Sat, 11 Jul 2020 21:59:12 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+oRR=AW=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1juNWZ-0006TI-8m
 for xen-devel@lists.xenproject.org; Sat, 11 Jul 2020 21:59:11 +0000
X-Inumbo-ID: b9d664d0-c3c1-11ea-911c-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b9d664d0-c3c1-11ea-911c-12813bfff9fa;
 Sat, 11 Jul 2020 21:59:00 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Message-Id:Subject:To:Sender:
 Reply-To:Cc:MIME-Version:Content-Type:Content-Transfer-Encoding:Content-ID:
 Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc
 :Resent-Message-ID:In-Reply-To:References:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=EpopIUBx3KgtvDx/mmjz0+bROBd3QAEsjsP13XHCWxk=; b=JxoccIFDVthM7rbY8xk6HEGEkw
 exU8u7Ph3nePT90AyPx+yodr+zvC0+/7DDkQioSp4RgkoVJRGCzMPSKd0xMnW/qtbOKImJxWLaFKz
 ae/OGNvv1Ajg7m9A6vZk6LFM3dqqMfElWYkf48a0yXJzrNjiq1PhuoptLiwtry9hZ0lg=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1juNWN-0002YT-QN; Sat, 11 Jul 2020 21:58:59 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1juNWN-0004Yl-EH; Sat, 11 Jul 2020 21:58:59 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1juNWN-0007u8-DZ; Sat, 11 Jul 2020 21:58:59 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Subject: [qemu-mainline bisection] complete test-amd64-amd64-libvirt-vhd
Message-Id: <E1juNWN-0007u8-DZ@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 11 Jul 2020 21:58:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

branch xen-unstable
xenbranch xen-unstable
job test-amd64-amd64-libvirt-vhd
testid debian-di-install

Tree: libvirt git://xenbits.xen.org/libvirt.git
Tree: libvirt_keycodemapdb https://gitlab.com/keycodemap/keycodemapdb.git
Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://git.qemu.org/qemu.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  qemuu git://git.qemu.org/qemu.git
  Bug introduced:  81cb05732efb36971901c515b007869cc1d3a532
  Bug not present: d6b78ac8ecf94f56dbfbecc23fb4365d8772a41a
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/151836/


  commit 81cb05732efb36971901c515b007869cc1d3a532
  Author: Markus Armbruster <armbru@redhat.com>
  Date:   Tue Jun 9 14:23:37 2020 +0200
  
      qdev: Assert devices are plugged into a bus that can take them
      
      This would have caught some of the bugs I just fixed.
      
      Signed-off-by: Markus Armbruster <armbru@redhat.com>
      Reviewed-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
      Message-Id: <20200609122339.937862-23-armbru@redhat.com>


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/qemu-mainline/test-amd64-amd64-libvirt-vhd.debian-di-install.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/qemu-mainline/test-amd64-amd64-libvirt-vhd.debian-di-install --summary-out=tmp/151836.bisection-summary --basis-template=151065 --blessings=real,real-bisect qemu-mainline test-amd64-amd64-libvirt-vhd debian-di-install
Searching for failure / basis pass:
 151816 fail [host=pinot0] / 151149 [host=chardonnay0] 151101 [host=huxelrebe1] 151065 [host=huxelrebe0] 151047 [host=elbling0] 150970 [host=albana1] 150930 [host=godello1] 150916 [host=elbling1] 150909 [host=rimava1] 150899 [host=italia0] 150895 [host=albana0] 150831 [host=pinot1] 150694 [host=debina0] 150631 [host=fiano1] 150608 [host=debina1] 150593 [host=godello0] 150585 [host=fiano0] 150532 [host=huxelrebe0] 150492 [host=chardonnay0] 150457 ok.
Failure / basis pass flights: 151816 / 150457
(tree with no url: minios)
(tree in basispass but not in latest: libvirt_gnulib)
Tree: libvirt git://xenbits.xen.org/libvirt.git
Tree: libvirt_keycodemapdb https://gitlab.com/keycodemap/keycodemapdb.git
Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://git.qemu.org/qemu.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git
Latest 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 bdafda8c457eb90c65f37026589b54258300f05c 3c659044118e34603161457db9934a34f816d78b 45db94cc90c286a9965a285ba19450f448760a09 88ab0c15525ced2eefe39220742efe4769089ad8 3fdc211b01b29f252166937238efe02d15cb5780
Basis pass a1cd25b919509be2645dbe6f952d5263e0d4e4e5 317d3eeb963a515e15a63fa356d8ebcda7041a51 c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 568eee7cf319fa95183c8d3b5e8dcf6e078ab8b3 3c659044118e34603161457db9934a34f816d78b a20ab81d22300cca80325c284f21eefee99aa740 2e3de6253422112ae43e608661ba94ea6b345694 9f3e9139fa6c3d620eb08dff927518fc88200b8d
Generating revisions with ./adhoc-revtuple-generator  git://xenbits.xen.org/libvirt.git#a1cd25b919509be2645dbe6f952d5263e0d4e4e5-2c846fa6bcc11929c9fb857a22430fb9945654ad https://gitlab.com/keycodemap/keycodemapdb.git#317d3eeb963a515e15a63fa356d8ebcda7041a51-27acf0ef828bf719b2053ba398b195829413dbdd git://xenbits.xen.org/linux-pvops.git#c3038e718a19fc596f7b1baba0f83d5146dc7784-c3038e718a19fc596f7b1baba0f83d5146dc7784 git://xenbits.xen.org/osstest/linux-firmware.git#c530a75c1e6a472b0eb9558310b518f0\
 dfcd8860-c530a75c1e6a472b0eb9558310b518f0dfcd8860 git://xenbits.xen.org/osstest/ovmf.git#568eee7cf319fa95183c8d3b5e8dcf6e078ab8b3-bdafda8c457eb90c65f37026589b54258300f05c git://xenbits.xen.org/qemu-xen-traditional.git#3c659044118e34603161457db9934a34f816d78b-3c659044118e34603161457db9934a34f816d78b git://git.qemu.org/qemu.git#a20ab81d22300cca80325c284f21eefee99aa740-45db94cc90c286a9965a285ba19450f448760a09 git://xenbits.xen.org/osstest/seabios.git#2e3de6253422112ae43e608661ba94ea6b345694-88ab0c1\
 5525ced2eefe39220742efe4769089ad8 git://xenbits.xen.org/xen.git#9f3e9139fa6c3d620eb08dff927518fc88200b8d-3fdc211b01b29f252166937238efe02d15cb5780
Auto packing the repository in background for optimum performance.
See "git help gc" for manual housekeeping.
error: The last gc run reported the following. Please correct the root cause
and remove gc.log.
Automatic cleanup will not be performed until the file is removed.

warning: There are too many unreachable loose objects; run 'git prune' to remove them.

Auto packing the repository in background for optimum performance.
See "git help gc" for manual housekeeping.
error: The last gc run reported the following. Please correct the root cause
and remove gc.log.
Automatic cleanup will not be performed until the file is removed.

warning: There are too many unreachable loose objects; run 'git prune' to remove them.

Use of uninitialized value $parents in array dereference at ./adhoc-revtuple-generator line 465.
Use of uninitialized value in concatenation (.) or string at ./adhoc-revtuple-generator line 465.
Use of uninitialized value $parents in array dereference at ./adhoc-revtuple-generator line 465.
Use of uninitialized value in concatenation (.) or string at ./adhoc-revtuple-generator line 465.
Loaded 42057 nodes in revision graph
Searching for test results:
 150457 pass a1cd25b919509be2645dbe6f952d5263e0d4e4e5 317d3eeb963a515e15a63fa356d8ebcda7041a51 c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 568eee7cf319fa95183c8d3b5e8dcf6e078ab8b3 3c659044118e34603161457db9934a34f816d78b a20ab81d22300cca80325c284f21eefee99aa740 2e3de6253422112ae43e608661ba94ea6b345694 9f3e9139fa6c3d620eb08dff927518fc88200b8d
 150492 [host=chardonnay0]
 150532 [host=huxelrebe0]
 150585 [host=fiano0]
 150593 [host=godello0]
 150631 [host=fiano1]
 150608 [host=debina1]
 150694 [host=debina0]
 150831 [host=pinot1]
 150909 [host=rimava1]
 150930 [host=godello1]
 150916 [host=elbling1]
 150895 [host=albana0]
 150899 [host=italia0]
 150970 [host=albana1]
 151047 [host=elbling0]
 151101 [host=huxelrebe1]
 151065 [host=huxelrebe0]
 151149 [host=chardonnay0]
 151221 fail irrelevant
 151175 fail irrelevant
 151241 fail irrelevant
 151286 fail irrelevant
 151269 fail irrelevant
 151328 fail irrelevant
 151304 fail irrelevant
 151377 fail irrelevant
 151353 fail c5815b31976f3982d18c7f6c1367ab6e403eb7eb 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1a992030522622c42aa063788b3276789c56c1e1 3c659044118e34603161457db9934a34f816d78b d4b78317b7cf8c0c635b70086503813f79ff21ec 2e3de6253422112ae43e608661ba94ea6b345694 fde76f895d0aa817a1207d844d793239c6639bc6
 151399 fail irrelevant
 151414 fail irrelevant
 151435 fail irrelevant
 151459 fail d482cf6bef484e697f1dbb99f2504e7d67b149e7 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 0f01cec52f4794777feb067e4fa0bfcedfdc124e 3c659044118e34603161457db9934a34f816d78b e7651153a8801dad6805d450ea8bef9b46c1adf5 88ab0c15525ced2eefe39220742efe4769089ad8 88cfd062e8318dfeb67c7d2eb50b6cd224b0738a
 151471 fail irrelevant
 151485 fail irrelevant
 151500 fail irrelevant
 151518 fail e7998ebeaf15e4e8825be0dd97aa1316f194f00d 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 00217f1919270007d7a911f89b32e39b9dcaa907 3c659044118e34603161457db9934a34f816d78b fc1bff958998910ec8d25db86cd2f53ff125f7ab 88ab0c15525ced2eefe39220742efe4769089ad8 23ca7ec0ba620db52a646d80e22f9703a6589f66
 151547 fail irrelevant
 151598 fail irrelevant
 151577 fail irrelevant
 151622 fail irrelevant
 151656 fail irrelevant
 151634 fail irrelevant
 151645 fail irrelevant
 151669 fail irrelevant
 151685 fail irrelevant
 151704 fail irrelevant
 151748 fail ea3320048897f5279bc49cb49d26f8099706a834 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 239b50a863704f7960525799eda82de061c7c458 3c659044118e34603161457db9934a34f816d78b eefe34ea4b82c2b47abe28af4cc7247d51553626 2e3de6253422112ae43e608661ba94ea6b345694 25636ed707cf1211ce846c7ec58f8643e435d7a7
 151767 pass 63d08bec0b2dace2fcefffb23a1fa5b14c473d67 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 567bc4b4ae8a975791382dd30ac413bc0d3ce88c 3c659044118e34603161457db9934a34f816d78b eea8f5df4ecc607d64f091b8d916fcc11a697541 2e3de6253422112ae43e608661ba94ea6b345694 2995d0afdf2d3fb44d07eada088db3613741db1e
 151782 blocked bc85c34ea91c46588423fa24e56e09ca5aab31dd 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8035edbe12f0f2a58e8fa9b06d05c8ee1c69ffae 3c659044118e34603161457db9934a34f816d78b 7d2410cea154bf915fb30179ebda3b17ac36e70e 2e3de6253422112ae43e608661ba94ea6b345694 780aba2779b834f19b2a6f0dcdea0e7e0b5e1622
 151750 fail 07e1a18accee37a2850f3825c85cb29b1599b1e0 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 239b50a863704f7960525799eda82de061c7c458 3c659044118e34603161457db9934a34f816d78b 3f429a3400822141651486193d6af625eeab05a5 2e3de6253422112ae43e608661ba94ea6b345694 71ca0e0ad000e690899936327eb09709ab182ade
 151768 pass b934d5f42f29764277bc6f0f1cae19ada6f85e74 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 6ff7c838d09224dd4e4c9b5b93152d8db1b19740 3c659044118e34603161457db9934a34f816d78b 86e8c353f705f14f2f2fd7a6195cefa431aa24d9 2e3de6253422112ae43e608661ba94ea6b345694 058023b343d4e366864831db46e9b438e9e3a178
 151790 blocked ab55a8a0871207de5fe194f55cbbcecae7a3cfe9 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 568eee7cf319fa95183c8d3b5e8dcf6e078ab8b3 3c659044118e34603161457db9934a34f816d78b 773861274ad75a62c7ecf70ecc8e4ba31ed62190 2e3de6253422112ae43e608661ba94ea6b345694 ad33a573c009d72466432b41ba0591c64e819c19
 151769 pass 63d08bec0b2dace2fcefffb23a1fa5b14c473d67 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 567bc4b4ae8a975791382dd30ac413bc0d3ce88c 3c659044118e34603161457db9934a34f816d78b 3575b0aea983ad57804c9af739ed8ff7bc168393 2e3de6253422112ae43e608661ba94ea6b345694 b91825f628c9a62cf2a3a0d972ea81484a8b7fce
 151752 fail 6f28865223292a816f1bfde589901a00156cf8a1 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 58ae92a993687d913aa6dd00ef3497a1bc5f6c40 3c659044118e34603161457db9934a34f816d78b 54cdfe511219b8051046be55a6e156c4f08ff7ff 2e3de6253422112ae43e608661ba94ea6b345694 71ca0e0ad000e690899936327eb09709ab182ade
 151753 fail 3a58613b0cf6a29960b909e6fd7420639ff794bd 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 a2433243fbe471c250d7eddc2c7da325d91265fd 3c659044118e34603161457db9934a34f816d78b 6675a653d2e57ab09c32c0ea7b44a1d6c40a7f58 2e3de6253422112ae43e608661ba94ea6b345694 3625b04991b4d6affadd99d377ab84bac48dfff4
 151783 blocked 611e03127fcc84c7cd64b1da30140ca3b8fa1269 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 bb78cfbec07eda45118b630a09b0af549b43a135 3c659044118e34603161457db9934a34f816d78b fe0fe4735e798578097758781166cc221319b93d 2e3de6253422112ae43e608661ba94ea6b345694 d9f58cd54fe2f05e1f05e2fe254684bd1840de8e
 151778 fail irrelevant
 151755 pass cf9e7726b38bc93a2728638d435199297d2b3aaa 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 3c659044118e34603161457db9934a34f816d78b 53550e81e2cafe7c03a39526b95cd21b5194d9b1 2e3de6253422112ae43e608661ba94ea6b345694 3625b04991b4d6affadd99d377ab84bac48dfff4
 151737 pass a1cd25b919509be2645dbe6f952d5263e0d4e4e5 317d3eeb963a515e15a63fa356d8ebcda7041a51 c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 568eee7cf319fa95183c8d3b5e8dcf6e078ab8b3 3c659044118e34603161457db9934a34f816d78b a20ab81d22300cca80325c284f21eefee99aa740 2e3de6253422112ae43e608661ba94ea6b345694 9f3e9139fa6c3d620eb08dff927518fc88200b8d
 151770 pass f45735786a3d9bee622f80eab75131b0da485798 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 6ff7c838d09224dd4e4c9b5b93152d8db1b19740 3c659044118e34603161457db9934a34f816d78b 49ee11555262a256afec592dfed7c5902d5eefd2 2e3de6253422112ae43e608661ba94ea6b345694 726c78d14dfe6ec76f5e4c7756821a91f0a04b34
 151738 fail irrelevant
 151756 pass cf9e7726b38bc93a2728638d435199297d2b3aaa 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 a2433243fbe471c250d7eddc2c7da325d91265fd 3c659044118e34603161457db9934a34f816d78b 250bc43a406f7d46e319abe87c19548d4f027828 2e3de6253422112ae43e608661ba94ea6b345694 3625b04991b4d6affadd99d377ab84bac48dfff4
 151721 fail irrelevant
 151741 fail 36b1e8669d85f5dbde4e40a6625df9a78085c2a0 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1a992030522622c42aa063788b3276789c56c1e1 3c659044118e34603161457db9934a34f816d78b 61fee7f45955cd0bf9b79be9fa9c7ebabb5e6a85 2e3de6253422112ae43e608661ba94ea6b345694 fde76f895d0aa817a1207d844d793239c6639bc6
 151813 pass 8a4f331e8cb662d787a310df07fefd080879abde 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 3c659044118e34603161457db9934a34f816d78b 210d18674a34bb43bd05cdd68d24fd03e161ff3d 2e3de6253422112ae43e608661ba94ea6b345694 1251402caf8685f45d9d580f01583370f7e2d272
 151743 fail irrelevant
 151785 blocked a5a297f387fee9e9aa4cbc2df6788c330fd33ad1 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ca407c7246bf405da6d9b1b9d93e5e7f17b4b1f9 3c659044118e34603161457db9934a34f816d78b 250b1da35d579f42319af234f36207902ca4baa4 2e3de6253422112ae43e608661ba94ea6b345694 dde6174ada5280cd9a6396e3b12606360a0d29a3
 151758 fail 1eabe312ea4fa80922443ad73a950857c1f87786 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 3c659044118e34603161457db9934a34f816d78b 9fc7fc4d3909817555ce0af6bcb69dff1606140d 2e3de6253422112ae43e608661ba94ea6b345694 1251402caf8685f45d9d580f01583370f7e2d272
 151745 fail f57a8cd3df0167d72b87fdd868a287608a741b73 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 322969adf1fb3d6cfbd613f35121315715aff2ed 3c659044118e34603161457db9934a34f816d78b bae31bfa48b9caecee25da3d5333901a126a06b4 2e3de6253422112ae43e608661ba94ea6b345694 fde76f895d0aa817a1207d844d793239c6639bc6
 151794 blocked f6c79ca2af3607eb1cbbb7208c194f7cbf7a6abd 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 568eee7cf319fa95183c8d3b5e8dcf6e078ab8b3 3c659044118e34603161457db9934a34f816d78b 4ec2a1f53e8aaa22924614b64dde97321126943e 2e3de6253422112ae43e608661ba94ea6b345694 ad33a573c009d72466432b41ba0591c64e819c19
 151763 fail irrelevant
 151772 pass f45735786a3d9bee622f80eab75131b0da485798 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8035edbe12f0f2a58e8fa9b06d05c8ee1c69ffae 3c659044118e34603161457db9934a34f816d78b 5d2f557b47dfbf8f23277a5bdd8473d4607c681a 2e3de6253422112ae43e608661ba94ea6b345694 51ca66c37371b10b378513af126646de22eddb17
 151760 pass 63d08bec0b2dace2fcefffb23a1fa5b14c473d67 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3ee4f6cb360a877d171f2f9bb76b0d46d2cfa985 3c659044118e34603161457db9934a34f816d78b 9f1f264edbdf5516d6f208497310b3eedbc7b74c 2e3de6253422112ae43e608661ba94ea6b345694 2995d0afdf2d3fb44d07eada088db3613741db1e
 151744 fail irrelevant
 151761 pass 9170b0ee6f867d2be1165e83c80910b0e0ac952d 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 e1d24410da356731da70b3334f86343e11e207d2 3c659044118e34603161457db9934a34f816d78b 470dd165d152ff7ceac61c7b71c2b89220b3aad7 2e3de6253422112ae43e608661ba94ea6b345694 6a49b9a7920c82015381740905582b666160d955
 151786 blocked a5a297f387fee9e9aa4cbc2df6788c330fd33ad1 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ca407c7246bf405da6d9b1b9d93e5e7f17b4b1f9 3c659044118e34603161457db9934a34f816d78b 71b04329c4f7d5824a289ca5225e1883a278cf3b 2e3de6253422112ae43e608661ba94ea6b345694 e181db8ba4e0797b8f9b55996adfa71ffb5b4081
 151762 pass a1cd25b919509be2645dbe6f952d5263e0d4e4e5 317d3eeb963a515e15a63fa356d8ebcda7041a51 c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 568eee7cf319fa95183c8d3b5e8dcf6e078ab8b3 3c659044118e34603161457db9934a34f816d78b a20ab81d22300cca80325c284f21eefee99aa740 2e3de6253422112ae43e608661ba94ea6b345694 9f3e9139fa6c3d620eb08dff927518fc88200b8d
 151800 blocked 6f60d2a8503ce8c624bce6b53bf7b68476f5056f 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 568eee7cf319fa95183c8d3b5e8dcf6e078ab8b3 3c659044118e34603161457db9934a34f816d78b b8bee16e94df0fcd03bdad9969c30894418b0e6e 2e3de6253422112ae43e608661ba94ea6b345694 fced27b002c73c47c6c24ece2fe32b78157ad6b6
 151766 fail irrelevant
 151806 blocked d901fd6092414417ee59a4567d2c62f853a62d5c 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 568eee7cf319fa95183c8d3b5e8dcf6e078ab8b3 3c659044118e34603161457db9934a34f816d78b a20ab81d22300cca80325c284f21eefee99aa740 2e3de6253422112ae43e608661ba94ea6b345694 9f3e9139fa6c3d620eb08dff927518fc88200b8d
 151775 pass a1cd25b919509be2645dbe6f952d5263e0d4e4e5 317d3eeb963a515e15a63fa356d8ebcda7041a51 c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 568eee7cf319fa95183c8d3b5e8dcf6e078ab8b3 3c659044118e34603161457db9934a34f816d78b a20ab81d22300cca80325c284f21eefee99aa740 2e3de6253422112ae43e608661ba94ea6b345694 9f3e9139fa6c3d620eb08dff927518fc88200b8d
 151796 blocked 6f60d2a8503ce8c624bce6b53bf7b68476f5056f 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 568eee7cf319fa95183c8d3b5e8dcf6e078ab8b3 3c659044118e34603161457db9934a34f816d78b cf2d1203dcfc2bf964453d83a2302231ce77f2dc 2e3de6253422112ae43e608661ba94ea6b345694 422ec8fcf34cf961e81fbccd7d236fa2c1e678a8
 151779 fail irrelevant
 151788 blocked ab55a8a0871207de5fe194f55cbbcecae7a3cfe9 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 568eee7cf319fa95183c8d3b5e8dcf6e078ab8b3 3c659044118e34603161457db9934a34f816d78b d127de3baa64d1cabc8e1994e658688abb577ba9 2e3de6253422112ae43e608661ba94ea6b345694 ad33a573c009d72466432b41ba0591c64e819c19
 151810 pass 21597d3caad8c94996de05e5d426178966a17860 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 6ff7c838d09224dd4e4c9b5b93152d8db1b19740 3c659044118e34603161457db9934a34f816d78b 49ee11555262a256afec592dfed7c5902d5eefd2 2e3de6253422112ae43e608661ba94ea6b345694 16c36d27f2644737c34d4a0fc1de525d0ee185ad
 151797 blocked 6f60d2a8503ce8c624bce6b53bf7b68476f5056f 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 568eee7cf319fa95183c8d3b5e8dcf6e078ab8b3 3c659044118e34603161457db9934a34f816d78b b8bee16e94df0fcd03bdad9969c30894418b0e6e 2e3de6253422112ae43e608661ba94ea6b345694 3351acaee706b8e238b031a456bf181f97f167c3
 151801 pass a1cd25b919509be2645dbe6f952d5263e0d4e4e5 317d3eeb963a515e15a63fa356d8ebcda7041a51 c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 568eee7cf319fa95183c8d3b5e8dcf6e078ab8b3 3c659044118e34603161457db9934a34f816d78b a20ab81d22300cca80325c284f21eefee99aa740 2e3de6253422112ae43e608661ba94ea6b345694 9f3e9139fa6c3d620eb08dff927518fc88200b8d
 151828 pass 8a4f331e8cb662d787a310df07fefd080879abde 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 3c659044118e34603161457db9934a34f816d78b d6b78ac8ecf94f56dbfbecc23fb4365d8772a41a 2e3de6253422112ae43e608661ba94ea6b345694 1251402caf8685f45d9d580f01583370f7e2d272
 151814 pass a1cd25b919509be2645dbe6f952d5263e0d4e4e5 317d3eeb963a515e15a63fa356d8ebcda7041a51 c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 568eee7cf319fa95183c8d3b5e8dcf6e078ab8b3 3c659044118e34603161457db9934a34f816d78b a20ab81d22300cca80325c284f21eefee99aa740 2e3de6253422112ae43e608661ba94ea6b345694 9f3e9139fa6c3d620eb08dff927518fc88200b8d
 151798 blocked 6f60d2a8503ce8c624bce6b53bf7b68476f5056f 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 568eee7cf319fa95183c8d3b5e8dcf6e078ab8b3 3c659044118e34603161457db9934a34f816d78b cf2d1203dcfc2bf964453d83a2302231ce77f2dc 2e3de6253422112ae43e608661ba94ea6b345694 3351acaee706b8e238b031a456bf181f97f167c3
 151803 fail irrelevant
 151799 blocked 6297560761adf660497ab0053af18bab159f6b2f 317d3eeb963a515e15a63fa356d8ebcda7041a51 c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 568eee7cf319fa95183c8d3b5e8dcf6e078ab8b3 3c659044118e34603161457db9934a34f816d78b a20ab81d22300cca80325c284f21eefee99aa740 2e3de6253422112ae43e608661ba94ea6b345694 9f3e9139fa6c3d620eb08dff927518fc88200b8d
 151820 pass 8a4f331e8cb662d787a310df07fefd080879abde 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 3c659044118e34603161457db9934a34f816d78b 589b1be07c060e583d9f758ff0cb10e0f1ff242f 2e3de6253422112ae43e608661ba94ea6b345694 1251402caf8685f45d9d580f01583370f7e2d272
 151805 blocked 4ccc69707e9e4a16d66c1bc7b5de55bc3943e3dd 317d3eeb963a515e15a63fa356d8ebcda7041a51 c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 568eee7cf319fa95183c8d3b5e8dcf6e078ab8b3 3c659044118e34603161457db9934a34f816d78b a20ab81d22300cca80325c284f21eefee99aa740 2e3de6253422112ae43e608661ba94ea6b345694 9f3e9139fa6c3d620eb08dff927518fc88200b8d
 151807 blocked 0137bf0dab2738d5443e2f407239856e2aa25bb3 317d3eeb963a515e15a63fa356d8ebcda7041a51 c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 568eee7cf319fa95183c8d3b5e8dcf6e078ab8b3 3c659044118e34603161457db9934a34f816d78b a20ab81d22300cca80325c284f21eefee99aa740 2e3de6253422112ae43e608661ba94ea6b345694 9f3e9139fa6c3d620eb08dff927518fc88200b8d
 151811 pass 63d08bec0b2dace2fcefffb23a1fa5b14c473d67 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 a4cfb842fca9693a330cb5435284c1ee8bfbbace 3c659044118e34603161457db9934a34f816d78b 23374a84c5f08e20ec2506a6322330d51f9134c5 2e3de6253422112ae43e608661ba94ea6b345694 2995d0afdf2d3fb44d07eada088db3613741db1e
 151817 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 bdafda8c457eb90c65f37026589b54258300f05c 3c659044118e34603161457db9934a34f816d78b 45db94cc90c286a9965a285ba19450f448760a09 88ab0c15525ced2eefe39220742efe4769089ad8 3fdc211b01b29f252166937238efe02d15cb5780
 151804 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 bdafda8c457eb90c65f37026589b54258300f05c 3c659044118e34603161457db9934a34f816d78b 45db94cc90c286a9965a285ba19450f448760a09 88ab0c15525ced2eefe39220742efe4769089ad8 3fdc211b01b29f252166937238efe02d15cb5780
 151819 fail 257aba2dafee0fec97f3f0a2d06fb82587aaf1a0 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 3c659044118e34603161457db9934a34f816d78b 3e80f6902c13f6edb6675c0f33edcbbf0163ec32 2e3de6253422112ae43e608661ba94ea6b345694 1251402caf8685f45d9d580f01583370f7e2d272
 151823 fail 8a4f331e8cb662d787a310df07fefd080879abde 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 3c659044118e34603161457db9934a34f816d78b da9630c57ee386f8beb571ba6bb4a98d546c42ca 2e3de6253422112ae43e608661ba94ea6b345694 1251402caf8685f45d9d580f01583370f7e2d272
 151816 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 bdafda8c457eb90c65f37026589b54258300f05c 3c659044118e34603161457db9934a34f816d78b 45db94cc90c286a9965a285ba19450f448760a09 88ab0c15525ced2eefe39220742efe4769089ad8 3fdc211b01b29f252166937238efe02d15cb5780
 151831 fail 8a4f331e8cb662d787a310df07fefd080879abde 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 3c659044118e34603161457db9934a34f816d78b 81cb05732efb36971901c515b007869cc1d3a532 2e3de6253422112ae43e608661ba94ea6b345694 1251402caf8685f45d9d580f01583370f7e2d272
 151825 fail 8a4f331e8cb662d787a310df07fefd080879abde 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 3c659044118e34603161457db9934a34f816d78b 007d1dbf72536ec1b847a944832e4de1546af2ac 2e3de6253422112ae43e608661ba94ea6b345694 1251402caf8685f45d9d580f01583370f7e2d272
 151832 pass 8a4f331e8cb662d787a310df07fefd080879abde 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 3c659044118e34603161457db9934a34f816d78b d6b78ac8ecf94f56dbfbecc23fb4365d8772a41a 2e3de6253422112ae43e608661ba94ea6b345694 1251402caf8685f45d9d580f01583370f7e2d272
 151834 fail 8a4f331e8cb662d787a310df07fefd080879abde 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 3c659044118e34603161457db9934a34f816d78b 81cb05732efb36971901c515b007869cc1d3a532 2e3de6253422112ae43e608661ba94ea6b345694 1251402caf8685f45d9d580f01583370f7e2d272
 151835 pass 8a4f331e8cb662d787a310df07fefd080879abde 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 3c659044118e34603161457db9934a34f816d78b d6b78ac8ecf94f56dbfbecc23fb4365d8772a41a 2e3de6253422112ae43e608661ba94ea6b345694 1251402caf8685f45d9d580f01583370f7e2d272
 151836 fail 8a4f331e8cb662d787a310df07fefd080879abde 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 3c659044118e34603161457db9934a34f816d78b 81cb05732efb36971901c515b007869cc1d3a532 2e3de6253422112ae43e608661ba94ea6b345694 1251402caf8685f45d9d580f01583370f7e2d272
Searching for interesting versions
 Result found: flight 150457 (pass), for basis pass
 Result found: flight 151804 (fail), for basis failure
 Repro found: flight 151814 (pass), for basis pass
 Repro found: flight 151816 (fail), for basis failure
 0 revisions at 8a4f331e8cb662d787a310df07fefd080879abde 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 3c659044118e34603161457db9934a34f816d78b d6b78ac8ecf94f56dbfbecc23fb4365d8772a41a 2e3de6253422112ae43e608661ba94ea6b345694 1251402caf8685f45d9d580f01583370f7e2d272
No revisions left to test, checking graph state.
 Result found: flight 151828 (pass), for last pass
 Result found: flight 151831 (fail), for first failure
 Repro found: flight 151832 (pass), for last pass
 Repro found: flight 151834 (fail), for first failure
 Repro found: flight 151835 (pass), for last pass
 Repro found: flight 151836 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  qemuu git://git.qemu.org/qemu.git
  Bug introduced:  81cb05732efb36971901c515b007869cc1d3a532
  Bug not present: d6b78ac8ecf94f56dbfbecc23fb4365d8772a41a
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/151836/


  commit 81cb05732efb36971901c515b007869cc1d3a532
  Author: Markus Armbruster <armbru@redhat.com>
  Date:   Tue Jun 9 14:23:37 2020 +0200
  
      qdev: Assert devices are plugged into a bus that can take them
      
      This would have caught some of the bugs I just fixed.
      
      Signed-off-by: Markus Armbruster <armbru@redhat.com>
      Reviewed-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
      Message-Id: <20200609122339.937862-23-armbru@redhat.com>

dot: graph is too large for cairo-renderer bitmaps. Scaling by 0.202477 to fit
pnmtopng: 157 colors found
Revision graph left in /home/logs/results/bisect/qemu-mainline/test-amd64-amd64-libvirt-vhd.debian-di-install.{dot,ps,png,html,svg}.
----------------------------------------
151836: tolerable FAIL

flight 151836 qemu-mainline real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/151836/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd 10 debian-di-install       fail baseline untested


jobs:
 build-amd64-libvirt                                          pass    
 test-amd64-amd64-libvirt-vhd                                 fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Sun Jul 12 02:45:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 12 Jul 2020 02:45:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1juRzG-0006CO-3g; Sun, 12 Jul 2020 02:45:06 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YZHa=AX=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1juRzF-0006CJ-CQ
 for xen-devel@lists.xenproject.org; Sun, 12 Jul 2020 02:45:05 +0000
X-Inumbo-ID: af35b8b4-c3e9-11ea-bca7-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id af35b8b4-c3e9-11ea-bca7-bc764e2007e4;
 Sun, 12 Jul 2020 02:45:02 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=tDgGByBA/8dzs/iV0LuTmcdjwOCKp8Vt9M31/ZZh5DI=; b=3b7D8Tr9obuUcBR7wBe64fdJa
 NH2xzi8h9F8P8e7E3pZEkXm3tbMD9fADGaIZN1dhCfHvJefQAI1SzqUt2SXwJzcUmv6nErUWZEuE3
 wAZ2iu899SVL3scYCIw3L7DF61s1v9C8qz9HYrRECrD3IDjvLaXJuxhWfu/5qbUx2jvgs=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1juRzB-0001hr-OA; Sun, 12 Jul 2020 02:45:01 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1juRzB-0006Y5-Gh; Sun, 12 Jul 2020 02:45:01 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1juRzB-0005lJ-G5; Sun, 12 Jul 2020 02:45:01 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151833-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 151833: regressions - FAIL
X-Osstest-Failures: qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:redhat-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-freebsd10-i386:guest-start:fail:regression
 qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:redhat-install:fail:regression
 qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-start:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:windows-install:fail:regression
 qemu-mainline:test-armhf-armhf-xl-rtds:guest-start:fail:allowable
 qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: qemuu=827937158b72ce2265841ff528bba3c44a1bfbc8
X-Osstest-Versions-That: qemuu=9e3903136d9acde2fb2dd9e967ba928050a6cb4a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 12 Jul 2020 02:45:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151833 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151833/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemuu-ovmf-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-qemuu-nested-intel 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-qemuu-nested-amd 10 debian-hvm-install  fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-ovmf-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-qemuu-rhel6hvm-amd 10 redhat-install     fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-debianhvm-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-freebsd10-i386 11 guest-start            fail REGR. vs. 151065
 test-amd64-i386-qemuu-rhel6hvm-intel 10 redhat-install   fail REGR. vs. 151065
 test-amd64-i386-freebsd10-amd64 11 guest-start           fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-win7-amd64 10 windows-install  fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-win7-amd64 10 windows-install   fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-ws16-amd64 10 windows-install  fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-ws16-amd64 10 windows-install   fail REGR. vs. 151065

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds     12 guest-start              fail REGR. vs. 151065

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 151065
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 151065
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                827937158b72ce2265841ff528bba3c44a1bfbc8
baseline version:
 qemuu                9e3903136d9acde2fb2dd9e967ba928050a6cb4a

Last test of basis   151065  2020-06-12 22:27:51 Z   29 days
Failing since        151101  2020-06-14 08:32:51 Z   27 days   37 attempts
Testing same since   151833  2020-07-11 15:07:28 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ahmed Karaman <ahmedkhaledkaraman@gmail.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Bulekov <alxndr@bu.edu>
  Alexey Krasikov <alex-krasikov@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Allan Peramaki <aperamak@pp1.inet.fi>
  Andrew Jones <drjones@redhat.com>
  Ani Sinha <ani.sinha@nutanix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Ard Biesheuvel <ardb@kernel.org>
  Artyom Tarasenko <atar4qemu@gmail.com>
  Aurelien Jarno <aurelien@aurel32.net>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Beata Michalska <beata.michalska@linaro.org>
  Bin Meng <bin.meng@windriver.com>
  Catherine A. Frederick <chocola@animebitch.es>
  Cathy Zhang <cathy.zhang@intel.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christophe de Dinechin <dinechin@redhat.com>
  Cindy Lu <lulu@redhat.com>
  Claudio Fontana <cfontana@suse.de>
  Cleber Rosa <crosa@redhat.com>
  Colin Xu <colin.xu@intel.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniele Buono <dbuono@linux.vnet.ibm.com>
  David Carlier <devnexen@gmail.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Derek Su <dereksu@qnap.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Emilio G. Cota <cota@braap.org>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Evgeny Yakovlev <eyakovlev@virtuozzo.com>
  fangying <fangying1@huawei.com>
  Farhan Ali <alifm@linux.ibm.com>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Geoffrey McRae <geoff@hostfission.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Giuseppe Musacchio <thatlemon@gmail.com>
  Greg Kurz <groug@kaod.org>
  Guenter Roeck <linux@roeck-us.net>
  Gustavo Romero <gromero@linux.ibm.com>
  Halil Pasic <pasic@linux.ibm.com>
  Helge Deller <deller@gmx.de>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Ian Jiang <ianjiang.ict@gmail.com>
  Igor Mammedov <imammedo@redhat.com>
  Janne Grunau <j@jannau.net>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason Wang <jasowang@redhat.com>
  Jay Zhou <jianjay.zhou@huawei.com>
  Jean-Christophe Dubois <jcd@tribudubois.net>
  Jessica Clarke <jrtc27@jrtc27.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jingqi Liu <jingqi.liu@intel.com>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Joseph Myers <joseph@codesourcery.com>
  Julio Faracco <jcfaracco@gmail.com>
  Keqian Zhu <zhukeqian1@huawei.com>
  Kevin Wolf <kwolf@redhat.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Leif Lindholm <leif@nuviainc.com>
  Leonid Bloch <lb.workbox@gmail.com>
  Leonid Bloch <lbloch@janustech.com>
  Li Qiang <liq3ea@gmail.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  lichun <lichun@ruijie.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  Like Xu <like.xu@linux.intel.com>
  Lingfeng Yang <lfy@google.com>
  Lingshan zhu <lingshan.zhu@intel.com>
  Liran Alon <liran.alon@oracle.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Lukas Straub <lukasstraub2@web.de>
  Maciej S. Szmigiero <maciej.szmigiero@oracle.com>
  Magnus Damm <magnus.damm@gmail.com>
  Mao Zhongyi <maozhongyi@cmss.chinamobile.com>
  Marcelo Tosatti <mtosatti@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Masahiro Yamada <masahiroy@kernel.org>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Maxime Coquelin <maxime.coquelin@redhat.com>
  Menno Lageman <menno.lageman@oracle.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michele Denber <denber@mindspring.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Durrant <pdurrant@amazon.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Radoslaw Biernacki <rad@semihalf.com>
  Raphael Norwitz <raphael.norwitz@nutanix.com>
  Richard Henderson <richard.henderson@linaro.org>
  Richard W.M. Jones <rjones@redhat.com>
  Riku Voipio <riku.voipio@linaro.org>
  Robert Foley <robert.foley@linaro.org>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Roman Kagan <rkagan@virtuozzo.com>
  Roman Kagan <rvkagan@yandex-team.ru>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Sebastian Rasmussen <sebras@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Tao Xu <tao3.xu@intel.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tiwei Bie <tiwei.bie@intel.com>
  Tong Ho <tong.ho@xilinx.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  WangBowen <bowen.wang@intel.com>
  Wei Huang <wei.huang2@amd.com>
  Wei Wang <wei.w.wang@intel.com>
  Ying Fang <fangying1@huawei.com>
  Yoshinori Sato <ysato@users.sourceforge.jp>
  Yuri Benditovich <yuri.benditovich@daynix.com>
  Zhang Chen <chen.zhang@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 22620 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Jul 12 03:31:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 12 Jul 2020 03:31:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1juSiB-0001nL-Qo; Sun, 12 Jul 2020 03:31:31 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=QQ2C=AX=gmail.com=jrdr.linux@srs-us1.protection.inumbo.net>)
 id 1juSiA-0001nG-9D
 for xen-devel@lists.xenproject.org; Sun, 12 Jul 2020 03:31:30 +0000
X-Inumbo-ID: 2c6f78dc-c3f0-11ea-bb8b-bc764e2007e4
Received: from mail-pj1-x1043.google.com (unknown [2607:f8b0:4864:20::1043])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2c6f78dc-c3f0-11ea-bb8b-bc764e2007e4;
 Sun, 12 Jul 2020 03:31:29 +0000 (UTC)
Received: by mail-pj1-x1043.google.com with SMTP id o22so4549344pjw.2
 for <xen-devel@lists.xenproject.org>; Sat, 11 Jul 2020 20:31:29 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:to:cc:subject:date:message-id;
 bh=SNT1YRB7yGxFXAfJITtoOOG8C2EkzXWjMU3CGAepd0k=;
 b=k+ZQrb716nrHemLK6iqV7smPv77pw/GKXxXxuVENsQJXTAuf3uFvj6HLP5Quch18uy
 2ZsN4C/E23HahZ6bww+zVyRTrm+hKZ6ZNucZLA/dlPSfZ2SADETxSGM6oC3nuTKG7/rA
 MPuJMVXIEMOpAartu7zdD9dWKLGtHyr0Rif/zDUJjMQ0CSFmJgbgIA6Sf4Tpr+iIAk4g
 9w8rfln7yKkZedfZwbJo4pyRdAlTAOZ245dMl7LOJP3oLxse0hgi4Y+Edkabn2FFoQkx
 E33hXyDjxtFVKVPik4ao3wtz5nj9wKv0i5lEYAkCFT7nviQmF3KpJGm9EnrZJTbPPxgu
 gr8g==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:to:cc:subject:date:message-id;
 bh=SNT1YRB7yGxFXAfJITtoOOG8C2EkzXWjMU3CGAepd0k=;
 b=Cbll2OKLl1Y6sOljuMsRkwS4mfG82oaRmYkt+WJsH5+HIqJRf2uVDbYsWmeUsIWTML
 ZReQnIoS1gXeyH5K0XWnKLBdkg+3Od4r/hRw7/q6Y4xHONuKdYZBP0IRRMgEo8HRQ+DH
 cgoBt7CdbwttveDnetBA+uxWGoJ64S8gPqv/oLKjYRQyOZrIlOZeAyxYOVfBT9N2+oEZ
 5WpmD6ZdkK2r1l6JMleNOYkGpoK5rVzjNc5UN0oWttrodK8a0LfqPAiO5PhnPaTif90h
 4jCyflgo/78KoZv+ZL6m+fAMaBak5OcSlC9H+eQGSBikrGBtV3D5Crr3fOgaTQEAMvQb
 0yRw==
X-Gm-Message-State: AOAM531HnVO9vW+F6NCwtZ75xfs2Lv3WXbblO738eykbC3wf+AyVJk8q
 uBH/J2kJNdFCOnZwQPKNuSE=
X-Google-Smtp-Source: ABdhPJx3aQQO00Kcd3gxYPg+782A3wemaoqS7dDv8UmKUj3KOWgupYZB5ltmp2SDc0bqfCs5Nc4XNQ==
X-Received: by 2002:a17:902:8348:: with SMTP id
 z8mr47938844pln.113.1594524688860; 
 Sat, 11 Jul 2020 20:31:28 -0700 (PDT)
Received: from jordon-HP-15-Notebook-PC.domain.name ([122.167.224.89])
 by smtp.gmail.com with ESMTPSA id s89sm9750271pjj.28.2020.07.11.20.31.25
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Sat, 11 Jul 2020 20:31:27 -0700 (PDT)
From: Souptick Joarder <jrdr.linux@gmail.com>
To: boris.ostrovsky@oracle.com,
	jgross@suse.com,
	sstabellini@kernel.org
Subject: [PATCH v3 0/3] Few bug fixes and Convert to pin_user_pages*()
Date: Sun, 12 Jul 2020 09:09:52 +0530
Message-Id: <1594525195-28345-1-git-send-email-jrdr.linux@gmail.com>
X-Mailer: git-send-email 1.9.1
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, Paul Durrant <xadimgnik@gmail.com>,
 linux-kernel@vger.kernel.org, John Hubbard <jhubbard@nvidia.com>,
 Souptick Joarder <jrdr.linux@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

This series contains a few cleanups and minor bug fixes, and
converts get_user_pages() to pin_user_pages().

I have compile-tested this, but I am unable to run-time test it,
so any testing help is much appreciated.
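
For readers unfamiliar with the API change, the general shape of such a
conversion (a hedged illustration of the common kernel pattern, not the
actual drivers/xen/privcmd.c diff from this series) looks like this:

```c
/* Before: get_user_pages*() takes a reference per page; the caller
 * dirties and releases each page individually on the cleanup path. */
ret = get_user_pages_fast(addr, nr_pages, FOLL_WRITE, pages);
/* ... use the pages ... */
for (i = 0; i < nr_pages; i++) {
	set_page_dirty_lock(pages[i]);
	put_page(pages[i]);
}

/* After: pin_user_pages*() acquires FOLL_PIN pins, and
 * unpin_user_pages_dirty_lock() both marks the pages dirty
 * and releases the pins in one call. */
ret = pin_user_pages_fast(addr, nr_pages, FOLL_WRITE, pages);
/* ... use the pages ... */
unpin_user_pages_dirty_lock(pages, nr_pages, true);
```

The pin_user_pages*() family distinguishes DMA/long-term-access pins
from transient references, which the mm layer tracks separately.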

v2:
        Addressed a few review comments and a compile issue.
        Patch [1/2] from v1 was split into two patches in v2.
v3:
	Addressed a review comment. Added a review tag.

Cc: John Hubbard <jhubbard@nvidia.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Paul Durrant <xadimgnik@gmail.com>

Souptick Joarder (3):
  xen/privcmd: Corrected error handling path
  xen/privcmd: Mark pages as dirty
  xen/privcmd: Convert get_user_pages*() to pin_user_pages*()

 drivers/xen/privcmd.c | 32 ++++++++++++++------------------
 1 file changed, 14 insertions(+), 18 deletions(-)

-- 
1.9.1



From xen-devel-bounces@lists.xenproject.org Sun Jul 12 03:31:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 12 Jul 2020 03:31:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1juSiH-0001nv-6n; Sun, 12 Jul 2020 03:31:37 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=QQ2C=AX=gmail.com=jrdr.linux@srs-us1.protection.inumbo.net>)
 id 1juSiF-0001nG-3n
 for xen-devel@lists.xenproject.org; Sun, 12 Jul 2020 03:31:35 +0000
X-Inumbo-ID: 2f373f0a-c3f0-11ea-8496-bc764e2007e4
Received: from mail-pf1-x441.google.com (unknown [2607:f8b0:4864:20::441])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2f373f0a-c3f0-11ea-8496-bc764e2007e4;
 Sun, 12 Jul 2020 03:31:34 +0000 (UTC)
Received: by mail-pf1-x441.google.com with SMTP id m9so4487595pfh.0
 for <xen-devel@lists.xenproject.org>; Sat, 11 Jul 2020 20:31:34 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:to:cc:subject:date:message-id:in-reply-to:references;
 bh=aJSWgi0xiDq2TEjOwzrr33u9Kob9ua2/jtyaxeBw01I=;
 b=BJXHKUnEMD8COwDxple2uc80bG5s+/UbetrM7wV8qMA4W1AeaRkA1dUyAA+qm438vT
 Gy00Mc+YrUQxaq/DmQnTK1DFyC3H+fK0aLz6QNB/mvbhx4hZHMLc+AM+rAPjoq2VGf4k
 sTWacxWoTe8csdNCUsjNqLRQyLsgRh+itgowLckdvIv4Nqt2ooHoxOl3UtBKHxwHkPGa
 jgH0sN/FUstxj6s1MRpaEihCKLF7llJxium2YMebgjk+qKe3lORUerU1ZbCR0fNoAocG
 koPDJjKw+JGN2l7JqMyjXe2+sP64iRvf3HrcuZhNGqf+POiictvmNBqs9fI9orDMVJFq
 7KEw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
 :references;
 bh=aJSWgi0xiDq2TEjOwzrr33u9Kob9ua2/jtyaxeBw01I=;
 b=ElqGhu3LePqe7f8xqa8bJ413NRB3lMIILWu+bNM4+mzKv2sHz0jqq9GjD9Z4awW41B
 ejtQsCrPxgBCRU/cQHTzUkcRmmKFPlhJjeohOIMjLTYd/2j2D0MLmpzXBP3lwsDyGIyE
 aAqpDWHQ/c0YmJtcrVJhYRPBjDq8r6STQYmr2+50tn+9QXB9e8ZBuyWrHuKCJc7zSFIJ
 hSsfkidYNrrFtZ+3pnznbeFuRd47Qq6fnx7GoiP+fFsMJzbSlz5jGGtJJJoK/I29KY/K
 CQmBtERSRPevryidq6Z71ZDlQ6lC2017I9p846g+pRnIF8gwex+9EPOHiMWrWAzmEg+J
 vYlg==
X-Gm-Message-State: AOAM530Sl4h9ivQFa932WGkXZdPVMLOvtk7lsHRVP59PIRzlZ1QwK14w
 o5/wuwwx8dSH88VIl9aosBQ=
X-Google-Smtp-Source: ABdhPJzGcosdIsNjTLJCw8K3Nj8FTvzS0ofcadybPbfYsTh6uAimwx5dECsp1E5FMwZE8E+TEIqYmw==
X-Received: by 2002:a63:dd4d:: with SMTP id g13mr63445998pgj.179.1594524693521; 
 Sat, 11 Jul 2020 20:31:33 -0700 (PDT)
Received: from jordon-HP-15-Notebook-PC.domain.name ([122.167.224.89])
 by smtp.gmail.com with ESMTPSA id s89sm9750271pjj.28.2020.07.11.20.31.30
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Sat, 11 Jul 2020 20:31:32 -0700 (PDT)
From: Souptick Joarder <jrdr.linux@gmail.com>
To: boris.ostrovsky@oracle.com,
	jgross@suse.com,
	sstabellini@kernel.org
Subject: [PATCH v3 1/3] xen/privcmd: Corrected error handling path
Date: Sun, 12 Jul 2020 09:09:53 +0530
Message-Id: <1594525195-28345-2-git-send-email-jrdr.linux@gmail.com>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1594525195-28345-1-git-send-email-jrdr.linux@gmail.com>
References: <1594525195-28345-1-git-send-email-jrdr.linux@gmail.com>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, Paul Durrant <xadimgnik@gmail.com>,
 linux-kernel@vger.kernel.org, John Hubbard <jhubbard@nvidia.com>,
 Souptick Joarder <jrdr.linux@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Previously, if lock_pages() ended up partially mapping pages, it
returned -ERRNO, and unlock_pages() then had to walk every entry of
pages[i] up to *nr_pages* to validate it. This can be avoided by
having lock_pages() report the number of successfully mapped pages
separately from the -ERRNO it returns on error.

With this fix, unlock_pages() no longer needs to validate pages[i] up
to *nr_pages* in the error path, and a few condition checks can be dropped.

Signed-off-by: Souptick Joarder <jrdr.linux@gmail.com>
Reviewed-by: Juergen Gross <jgross@suse.com>
Cc: John Hubbard <jhubbard@nvidia.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Paul Durrant <xadimgnik@gmail.com>
---
 drivers/xen/privcmd.c | 31 +++++++++++++++----------------
 1 file changed, 15 insertions(+), 16 deletions(-)

diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
index 5dfc59f..b001673 100644
--- a/drivers/xen/privcmd.c
+++ b/drivers/xen/privcmd.c
@@ -579,13 +579,13 @@ static long privcmd_ioctl_mmap_batch(
 
 static int lock_pages(
 	struct privcmd_dm_op_buf kbufs[], unsigned int num,
-	struct page *pages[], unsigned int nr_pages)
+	struct page *pages[], unsigned int nr_pages, unsigned int *pinned)
 {
 	unsigned int i;
 
 	for (i = 0; i < num; i++) {
 		unsigned int requested;
-		int pinned;
+		int page_count;
 
 		requested = DIV_ROUND_UP(
 			offset_in_page(kbufs[i].uptr) + kbufs[i].size,
@@ -593,14 +593,15 @@ static int lock_pages(
 		if (requested > nr_pages)
 			return -ENOSPC;
 
-		pinned = get_user_pages_fast(
+		page_count = get_user_pages_fast(
 			(unsigned long) kbufs[i].uptr,
 			requested, FOLL_WRITE, pages);
-		if (pinned < 0)
-			return pinned;
+		if (page_count < 0)
+			return page_count;
 
-		nr_pages -= pinned;
-		pages += pinned;
+		*pinned += page_count;
+		nr_pages -= page_count;
+		pages += page_count;
 	}
 
 	return 0;
@@ -610,13 +611,8 @@ static void unlock_pages(struct page *pages[], unsigned int nr_pages)
 {
 	unsigned int i;
 
-	if (!pages)
-		return;
-
-	for (i = 0; i < nr_pages; i++) {
-		if (pages[i])
-			put_page(pages[i]);
-	}
+	for (i = 0; i < nr_pages; i++)
+		put_page(pages[i]);
 }
 
 static long privcmd_ioctl_dm_op(struct file *file, void __user *udata)
@@ -629,6 +625,7 @@ static long privcmd_ioctl_dm_op(struct file *file, void __user *udata)
 	struct xen_dm_op_buf *xbufs = NULL;
 	unsigned int i;
 	long rc;
+	unsigned int pinned = 0;
 
 	if (copy_from_user(&kdata, udata, sizeof(kdata)))
 		return -EFAULT;
@@ -682,9 +679,11 @@ static long privcmd_ioctl_dm_op(struct file *file, void __user *udata)
 		goto out;
 	}
 
-	rc = lock_pages(kbufs, kdata.num, pages, nr_pages);
-	if (rc)
+	rc = lock_pages(kbufs, kdata.num, pages, nr_pages, &pinned);
+	if (rc < 0) {
+		nr_pages = pinned;
 		goto out;
+	}
 
 	for (i = 0; i < kdata.num; i++) {
 		set_xen_guest_handle(xbufs[i].h, kbufs[i].uptr);
-- 
1.9.1



From xen-devel-bounces@lists.xenproject.org Sun Jul 12 03:31:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 12 Jul 2020 03:31:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1juSiO-0001pG-Fg; Sun, 12 Jul 2020 03:31:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=QQ2C=AX=gmail.com=jrdr.linux@srs-us1.protection.inumbo.net>)
 id 1juSiN-0001ot-5r
 for xen-devel@lists.xenproject.org; Sun, 12 Jul 2020 03:31:43 +0000
X-Inumbo-ID: 342b6aea-c3f0-11ea-b7bb-bc764e2007e4
Received: from mail-pf1-x443.google.com (unknown [2607:f8b0:4864:20::443])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 342b6aea-c3f0-11ea-b7bb-bc764e2007e4;
 Sun, 12 Jul 2020 03:31:42 +0000 (UTC)
Received: by mail-pf1-x443.google.com with SMTP id 207so4482629pfu.3
 for <xen-devel@lists.xenproject.org>; Sat, 11 Jul 2020 20:31:42 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:to:cc:subject:date:message-id:in-reply-to:references;
 bh=Z3iLvmsALgpeKkdpuNL17/HM7clfrVNQ9KU9ARLG1y0=;
 b=V4JjtjKfiTieITrJU1+IsNVyCXTofXiYb1IBwVWUF0NhkJZwo2pn4KSii2ojXEDJUI
 66oIhsD0yr8miUbgs/hnkZguouIlQFh57D8QELabHjbdW1UxLxGLLNBkzNIsKUoO8gQJ
 hI8vluoROpyGtD3WEqXuxsbWXfeiLkK/6UD16CPnJbJHi0BlLgXvWM25njV3H3Hm/6s+
 NwUxqQK575Tft3dW74YgxV38tHrMXyq+M4eIrmalxNhNHumwd5n531ub6o9lXtqZWAhu
 qqUOfZZHNjqwmn+xR0U4aefx1Joq7fTfTsBq4TT7aT6Qe32Y3aocPzm27rqmGh1kc35U
 jGSw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
 :references;
 bh=Z3iLvmsALgpeKkdpuNL17/HM7clfrVNQ9KU9ARLG1y0=;
 b=mr6DK1W7A/9UuVVUnVuFmM6h9gwR6GD1t8oeV1ZMOs1x0lajc4+tBc7uFsYNXJIyeo
 B7S1QRriuiCZgkoa+vzW7rA/VscBQe7AcJa9kFZefUpXsdmSVqZWPlwJR3EZmZbmmWEG
 fa7e6UXeUpSFr9EVrYXapFZn9NDL2bbcvdDRedBNjW0RRv7BnvpA2rIcmC5LI8aExwn8
 MFRBM2TD7MG/zm++2EqOvUyLB/q6t2ympaItx4AeTuUKC32pNJPgbasMXd3TF3G6A/bN
 4mVT83xoDHV0xGLNbQT5rCqbbl14uQKO0WYxKjLeK31bA+V0sncjLeI+1n+ocuVOdG12
 JX+A==
X-Gm-Message-State: AOAM5307iRvgL7zxHhGRAeZDc+IetzkECHCnD62CKKwm9+H7rFVImSZn
 ACOMxu6PEltAbTHGNVBU1yI=
X-Google-Smtp-Source: ABdhPJxXbi9KSZF/o0irOMdLPlhjV3pV92Mf5shXsy1/+Fp0VJiHmbskE4DYeA/T1f3pXWChiRQDSA==
X-Received: by 2002:a62:3583:: with SMTP id
 c125mr21856884pfa.158.1594524701949; 
 Sat, 11 Jul 2020 20:31:41 -0700 (PDT)
Received: from jordon-HP-15-Notebook-PC.domain.name ([122.167.224.89])
 by smtp.gmail.com with ESMTPSA id s89sm9750271pjj.28.2020.07.11.20.31.39
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Sat, 11 Jul 2020 20:31:41 -0700 (PDT)
From: Souptick Joarder <jrdr.linux@gmail.com>
To: boris.ostrovsky@oracle.com,
	jgross@suse.com,
	sstabellini@kernel.org
Subject: [PATCH v3 2/3] xen/privcmd: Mark pages as dirty
Date: Sun, 12 Jul 2020 09:09:54 +0530
Message-Id: <1594525195-28345-3-git-send-email-jrdr.linux@gmail.com>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1594525195-28345-1-git-send-email-jrdr.linux@gmail.com>
References: <1594525195-28345-1-git-send-email-jrdr.linux@gmail.com>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, Paul Durrant <xadimgnik@gmail.com>,
 linux-kernel@vger.kernel.org, John Hubbard <jhubbard@nvidia.com>,
 Souptick Joarder <jrdr.linux@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Pages need to be marked as dirty before being unpinned in
unlock_pages(); this was an oversight. Fix it.

Signed-off-by: Souptick Joarder <jrdr.linux@gmail.com>
Suggested-by: John Hubbard <jhubbard@nvidia.com>
Reviewed-by: Juergen Gross <jgross@suse.com>
Cc: John Hubbard <jhubbard@nvidia.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Paul Durrant <xadimgnik@gmail.com>
---
 drivers/xen/privcmd.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
index b001673..079d35b 100644
--- a/drivers/xen/privcmd.c
+++ b/drivers/xen/privcmd.c
@@ -611,8 +611,11 @@ static void unlock_pages(struct page *pages[], unsigned int nr_pages)
 {
 	unsigned int i;
 
-	for (i = 0; i < nr_pages; i++)
+	for (i = 0; i < nr_pages; i++) {
+		if (!PageDirty(pages[i]))
+			set_page_dirty_lock(pages[i]);
 		put_page(pages[i]);
+	}
 }
 
 static long privcmd_ioctl_dm_op(struct file *file, void __user *udata)
-- 
1.9.1



From xen-devel-bounces@lists.xenproject.org Sun Jul 12 03:31:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 12 Jul 2020 03:31:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1juSiT-0001qw-PR; Sun, 12 Jul 2020 03:31:49 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=QQ2C=AX=gmail.com=jrdr.linux@srs-us1.protection.inumbo.net>)
 id 1juSiS-0001ot-8w
 for xen-devel@lists.xenproject.org; Sun, 12 Jul 2020 03:31:48 +0000
X-Inumbo-ID: 373492d4-c3f0-11ea-bca7-bc764e2007e4
Received: from mail-pj1-x1041.google.com (unknown [2607:f8b0:4864:20::1041])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 373492d4-c3f0-11ea-bca7-bc764e2007e4;
 Sun, 12 Jul 2020 03:31:47 +0000 (UTC)
Received: by mail-pj1-x1041.google.com with SMTP id gc9so4521787pjb.2
 for <xen-devel@lists.xenproject.org>; Sat, 11 Jul 2020 20:31:47 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:to:cc:subject:date:message-id:in-reply-to:references;
 bh=CSSDnsVzCbm4HJvWsVbXXkGEs9xedyGKl/Rt6nKFdOE=;
 b=dbumPayO/fDTRmwmbfWoGMopnkBa2rTRRdhdRZxImMg+IAEJ4iJ8xxqnf4pNPHIBg1
 Twl174xM19B2sWzwefd5I9eq27cj5l4WVLLgvH8RrIDAYsmA5AKM/GXypd5+wBM+d3m0
 b34IyrConj/4kSOSkZLRoYbZzX/ILotrFvuLHy95osDmsX2ON1+iwKFO8738TKSReZYJ
 2C7jOsaov4l/WQlCt1iLyoWVRE7qOPoZ20O3E9H/PPRrrQd7alB/0C/ybSJlCGcI7Ou3
 aoh4/zO9wrDfQLZg6SgO+lQTHtcB2q2Xm6GAxchCvbsre8RS+cGwptniFoffEJI5O9D6
 h6OA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
 :references;
 bh=CSSDnsVzCbm4HJvWsVbXXkGEs9xedyGKl/Rt6nKFdOE=;
 b=SXsGadSeoX1Z9tsDe5i/OO9+Y+xDSSjO8sFg4ma8QBW+qnNjdwYYjm/OcaB2QdNLDi
 LE0zVsn2kPCDGNZ7+Dp+b//lC2lV9APp6nZnpFqMwnbpEtUotOLAMcGoVig5ntjakHG5
 Zwh7Exl9bE6eWLyw009w1UjhJFZuzOP2KA0AZJQGnBYwT31fTJ4nhgYRbyxeg7Ixue9n
 VtzLP1BBR8UlnXZc/cZWrFmk7mQQRD56sAJmhuqRrE/abg4DuzFXesndL/9gEbpZqWL3
 6Kg8KFf69ymclqmIAtUKu8V5eWpMqd6xSyGVNF4xuH2UijBN5vkJcAf1t+jqF4LuYKmP
 f1dA==
X-Gm-Message-State: AOAM530sSl2F+04TDIrq8agun/xqwVDHPhsn9tccmbmJYAfVcNJ22KWu
 Q3cqcpGYLFaWyHrqzjUTuJs=
X-Google-Smtp-Source: ABdhPJxjvGahtoyXNSccUhl0AeaffkZO4kJLPbsRyYmRclGhUYpFdroi35MHWOJWvCcdaQoqJ6BZFw==
X-Received: by 2002:a17:902:904c:: with SMTP id
 w12mr22660155plz.147.1594524707025; 
 Sat, 11 Jul 2020 20:31:47 -0700 (PDT)
Received: from jordon-HP-15-Notebook-PC.domain.name ([122.167.224.89])
 by smtp.gmail.com with ESMTPSA id s89sm9750271pjj.28.2020.07.11.20.31.44
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Sat, 11 Jul 2020 20:31:46 -0700 (PDT)
From: Souptick Joarder <jrdr.linux@gmail.com>
To: boris.ostrovsky@oracle.com,
	jgross@suse.com,
	sstabellini@kernel.org
Subject: [PATCH v3 3/3] xen/privcmd: Convert get_user_pages*() to
 pin_user_pages*()
Date: Sun, 12 Jul 2020 09:09:55 +0530
Message-Id: <1594525195-28345-4-git-send-email-jrdr.linux@gmail.com>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1594525195-28345-1-git-send-email-jrdr.linux@gmail.com>
References: <1594525195-28345-1-git-send-email-jrdr.linux@gmail.com>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, Paul Durrant <xadimgnik@gmail.com>,
 linux-kernel@vger.kernel.org, John Hubbard <jhubbard@nvidia.com>,
 Souptick Joarder <jrdr.linux@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

In 2019, we introduced pin_user_pages*(), and we are now converting
get_user_pages*() callers to the new API as appropriate. See [1] and
[2] for more information. This is case 5 as described in document [1].

[1] Documentation/core-api/pin_user_pages.rst

[2] "Explicit pinning of user-space pages":
        https://lwn.net/Articles/807108/

Signed-off-by: Souptick Joarder <jrdr.linux@gmail.com>
Reviewed-by: Juergen Gross <jgross@suse.com>
Cc: John Hubbard <jhubbard@nvidia.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Paul Durrant <xadimgnik@gmail.com>
---
 drivers/xen/privcmd.c | 10 ++--------
 1 file changed, 2 insertions(+), 8 deletions(-)

diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
index 079d35b..63abe6c 100644
--- a/drivers/xen/privcmd.c
+++ b/drivers/xen/privcmd.c
@@ -593,7 +593,7 @@ static int lock_pages(
 		if (requested > nr_pages)
 			return -ENOSPC;
 
-		page_count = get_user_pages_fast(
+		page_count = pin_user_pages_fast(
 			(unsigned long) kbufs[i].uptr,
 			requested, FOLL_WRITE, pages);
 		if (page_count < 0)
@@ -609,13 +609,7 @@ static int lock_pages(
 
 static void unlock_pages(struct page *pages[], unsigned int nr_pages)
 {
-	unsigned int i;
-
-	for (i = 0; i < nr_pages; i++) {
-		if (!PageDirty(pages[i]))
-			set_page_dirty_lock(pages[i]);
-		put_page(pages[i]);
-	}
+	unpin_user_pages_dirty_lock(pages, nr_pages, true);
 }
 
 static long privcmd_ioctl_dm_op(struct file *file, void __user *udata)
-- 
1.9.1



From xen-devel-bounces@lists.xenproject.org Sun Jul 12 04:50:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 12 Jul 2020 04:50:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1juTw0-000816-Lo; Sun, 12 Jul 2020 04:49:52 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YZHa=AX=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1juTvy-00080m-NH
 for xen-devel@lists.xenproject.org; Sun, 12 Jul 2020 04:49:50 +0000
X-Inumbo-ID: 1a77d95c-c3fb-11ea-bb8b-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1a77d95c-c3fb-11ea-bb8b-bc764e2007e4;
 Sun, 12 Jul 2020 04:49:43 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=Lm57S/d7MS0RYGYUmCdiScvFx9LC4H2C2+6zU2NqbHg=; b=O7QNI7BAhQ6Ck5pm5K+Q1sV9i
 qe0goBG5/L5nR9ou2zwjHIVQCz0GzbJKSiURGTGaS1qY+AsDOVeyIshhzhinFXtZM3LM4QaFjDQ4h
 /ZToHp6GL7DhAELhGwjo3VJjmMU1Tny1l4GEzcQFZnjGFxBicrwSvT3/nzjJIQhN+9RQc=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1juTvr-00049t-BL; Sun, 12 Jul 2020 04:49:43 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1juTvr-0006yw-0k; Sun, 12 Jul 2020 04:49:43 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1juTvq-0003Uy-Vy; Sun, 12 Jul 2020 04:49:42 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151838-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 151838: regressions - FAIL
X-Osstest-Failures: linux-linus:build-i386-pvops:kernel-build:fail:regression
 linux-linus:test-armhf-armhf-xl-rtds:guest-start:fail:allowable
 linux-linus:test-amd64-i386-xl:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-examine:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-pair:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
X-Osstest-Versions-This: linux=0aea6d5c5be33ce94c16f9ab2f64de1f481f424b
X-Osstest-Versions-That: linux=1b5044021070efa3259f3e9548dc35d1eb6aa844
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 12 Jul 2020 04:49:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151838 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151838/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-pvops              6 kernel-build             fail REGR. vs. 151214

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds     12 guest-start              fail REGR. vs. 151214

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-examine       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemut-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 151214
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 151214
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 151214
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 151214
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 151214
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 151214
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass

version targeted for testing:
 linux                0aea6d5c5be33ce94c16f9ab2f64de1f481f424b
baseline version:
 linux                1b5044021070efa3259f3e9548dc35d1eb6aa844

Last test of basis   151214  2020-06-18 02:27:46 Z   24 days
Failing since        151236  2020-06-19 19:10:35 Z   22 days   35 attempts
Testing same since   151838  2020-07-11 22:12:04 Z    0 days    1 attempts

------------------------------------------------------------
710 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             fail    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         blocked 
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 37325 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Jul 12 07:30:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 12 Jul 2020 07:30:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1juWQd-0004Kf-4R; Sun, 12 Jul 2020 07:29:39 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YZHa=AX=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1juWQb-0004Ka-VA
 for xen-devel@lists.xenproject.org; Sun, 12 Jul 2020 07:29:38 +0000
X-Inumbo-ID: 6f62a54f-c411-11ea-914f-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6f62a54f-c411-11ea-914f-12813bfff9fa;
 Sun, 12 Jul 2020 07:29:35 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=/EecfGm+Zcpo8qw50mweHv3yC5nPgF4Fyjjz5xYR86I=; b=6TC0BHxN5zyTApG+TcGY88jJm
 v6Vi9iHM5wabtzwGaoNe4UTw3Qh1xDx0xmcp6BDbrFO9MeA9uiDMGzufzI2D+C50zAe7f9YNDNLMr
 SRMwcbZUO8NHFhWu9hsuivjwg+S7ZUAUJcfThcbcRo4Ogsrftuhs9qjxANFmR/F7uhDKs=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1juWQY-0007Oe-Ex; Sun, 12 Jul 2020 07:29:34 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1juWQY-0006Iv-4l; Sun, 12 Jul 2020 07:29:34 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1juWQY-00022g-49; Sun, 12 Jul 2020 07:29:34 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151842-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 151842: regressions - FAIL
X-Osstest-Failures: libvirt:build-amd64-libvirt:libvirt-build:fail:regression
 libvirt:build-i386-libvirt:libvirt-build:fail:regression
 libvirt:build-arm64-libvirt:libvirt-build:fail:regression
 libvirt:build-armhf-libvirt:libvirt-build:fail:regression
 libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This: libvirt=866bb996441cd73800643df9fb7df2939bdb75bd
X-Osstest-Versions-That: libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 12 Jul 2020 07:29:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151842 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151842/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              866bb996441cd73800643df9fb7df2939bdb75bd
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z    2 days
Testing same since   151818  2020-07-11 04:18:52 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Laine Stump <laine@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Pavel Hrdina <phrdina@redhat.com>
  Prathamesh Chavan <pc44800@gmail.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 691 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Jul 12 10:18:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 12 Jul 2020 10:18:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1juZ42-0001dJ-1T; Sun, 12 Jul 2020 10:18:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YZHa=AX=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1juZ41-0001dE-FQ
 for xen-devel@lists.xenproject.org; Sun, 12 Jul 2020 10:18:29 +0000
X-Inumbo-ID: 074ef030-c429-11ea-bb8b-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 074ef030-c429-11ea-bb8b-bc764e2007e4;
 Sun, 12 Jul 2020 10:18:28 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=7aLdQhRPmWh+wVWWA6pZY3Kfru/iky5rj172kDlGXGY=; b=fWV25ik4+Dg1g0XJFYYMWUpN/
 oDH82goTnW1gK3rWUZzH5ou8JoGG/ktfXNpmmfUKd8SH1bIIm5DUAPIQxqR7goleCp5BEQIzJIOq+
 Q81omDxZekA8JYD8ZFAtXgt4U1prZsipG0f3e63jNm+emM7/otlfPxZm76fqTxxRO+0r8=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1juZ3z-0002kO-IX; Sun, 12 Jul 2020 10:18:27 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1juZ3z-0003kC-Ae; Sun, 12 Jul 2020 10:18:27 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1juZ3z-0001gS-9z; Sun, 12 Jul 2020 10:18:27 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151847-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-coverity test] 151847: all pass - PUSHED
X-Osstest-Versions-This: xen=02d69864b51a4302a148c28d6d391238a6778b4b
X-Osstest-Versions-That: xen=3fdc211b01b29f252166937238efe02d15cb5780
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 12 Jul 2020 10:18:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151847 xen-unstable-coverity real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151847/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 xen                  02d69864b51a4302a148c28d6d391238a6778b4b
baseline version:
 xen                  3fdc211b01b29f252166937238efe02d15cb5780

Last test of basis   151733  2020-07-08 09:19:56 Z    4 days
Testing same since   151847  2020-07-12 09:18:34 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Julien Grall <jgrall@amazon.com>
  Stefano Stabellini <sstabellini@kernel.org>
  Stefano Stabellini <stefano.stabellini@xilinx.com>

jobs:
 coverity-amd64                                               pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   3fdc211b01..02d69864b5  02d69864b51a4302a148c28d6d391238a6778b4b -> coverity-tested/smoke


From xen-devel-bounces@lists.xenproject.org Sun Jul 12 12:39:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 12 Jul 2020 12:39:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jubG3-0004RU-MG; Sun, 12 Jul 2020 12:39:03 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YZHa=AX=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jubG3-0004RP-78
 for xen-devel@lists.xenproject.org; Sun, 12 Jul 2020 12:39:03 +0000
X-Inumbo-ID: a8a772e6-c43c-11ea-917f-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a8a772e6-c43c-11ea-917f-12813bfff9fa;
 Sun, 12 Jul 2020 12:38:59 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=joPtVt5WOgssYD5Jhjf9mIJlQ6grNaxLA+vp76WzKO8=; b=g9aJhgj0lPzVroikkPTFKDggm
 2cSuCawUAl1rexvH5UGjBwB7HLpqWIKR7epzgkISaoBKC0OBfPzEJPf0FFDtY/h+NG8wk0a4PCFSh
 qgJ9vD2A2NpQPGZTNMbdPWeVyOEnLGGBjL/fy1wXa0ojPdu5+tO1ahMjDgqod/iOCsU6E=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jubFy-0005IJ-Lb; Sun, 12 Jul 2020 12:38:58 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jubFy-0007Ae-Ea; Sun, 12 Jul 2020 12:38:58 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jubFy-0003mp-Dj; Sun, 12 Jul 2020 12:38:58 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151840-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 151840: tolerable FAIL
X-Osstest-Failures: xen-unstable:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:heisenbug
 xen-unstable:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
 xen-unstable:test-armhf-armhf-xl-rtds:guest-start:fail:heisenbug
 xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=02d69864b51a4302a148c28d6d391238a6778b4b
X-Osstest-Versions-That: xen=02d69864b51a4302a148c28d6d391238a6778b4b
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 12 Jul 2020 12:38:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151840 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151840/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl-rtds 16 guest-start/debian.repeat fail in 151824 pass in 151809
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 10 debian-hvm-install fail in 151824 pass in 151840
 test-armhf-armhf-xl-rtds     12 guest-start                fail pass in 151824

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-rtds    13 migrate-support-check fail in 151824 never pass
 test-armhf-armhf-xl-rtds 14 saverestore-support-check fail in 151824 never pass
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 151824
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 151824
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 151824
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 151824
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 151824
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 151824
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 151824
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 151824
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop             fail like 151824
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  02d69864b51a4302a148c28d6d391238a6778b4b
baseline version:
 xen                  02d69864b51a4302a148c28d6d391238a6778b4b

Last test of basis   151840  2020-07-12 01:51:25 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Sun Jul 12 16:28:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 12 Jul 2020 16:28:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1juepp-0006QH-94; Sun, 12 Jul 2020 16:28:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YZHa=AX=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1juepo-0006Pn-11
 for xen-devel@lists.xenproject.org; Sun, 12 Jul 2020 16:28:12 +0000
X-Inumbo-ID: a92f07f4-c45c-11ea-b7bb-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a92f07f4-c45c-11ea-b7bb-bc764e2007e4;
 Sun, 12 Jul 2020 16:28:04 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=y1zEKnzq98nNFVhzW7/y1Iz6VzyidOkFNI65oTGiwyo=; b=fJrqWz/fm2jcGvXUSjFZjQ5AM
 1XPFL8hoqXyH0P58ucgJDaFD9K1YAozXPLNpJhbRDS9mkze4cCEFR9Z5qf5uhwCt2ke77n3FA1c2V
 S1a+aSAF4iR6/4wmj760Gv6USFzsZzeXm47/dbJ+7v5yBVT0yzULQ1saXJT/QnDonx5Go=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1juepf-0001Xr-K8; Sun, 12 Jul 2020 16:28:03 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1juepf-000560-Bf; Sun, 12 Jul 2020 16:28:03 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1juepf-00071q-B2; Sun, 12 Jul 2020 16:28:03 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151841-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 151841: regressions - FAIL
X-Osstest-Failures: qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:redhat-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-freebsd10-i386:guest-start:fail:regression
 qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:redhat-install:fail:regression
 qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-start:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:windows-install:fail:regression
 qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: qemuu=2033cc6efa98b831d7839e367aa7d5aa74d0750f
X-Osstest-Versions-That: qemuu=9e3903136d9acde2fb2dd9e967ba928050a6cb4a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 12 Jul 2020 16:28:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151841 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151841/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemuu-ovmf-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-qemuu-nested-intel 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-ovmf-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-qemuu-nested-amd 10 debian-hvm-install  fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-qemuu-rhel6hvm-amd 10 redhat-install     fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-debianhvm-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-freebsd10-i386 11 guest-start            fail REGR. vs. 151065
 test-amd64-i386-qemuu-rhel6hvm-intel 10 redhat-install   fail REGR. vs. 151065
 test-amd64-i386-freebsd10-amd64 11 guest-start           fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-win7-amd64 10 windows-install  fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-win7-amd64 10 windows-install   fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-ws16-amd64 10 windows-install  fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-ws16-amd64 10 windows-install   fail REGR. vs. 151065

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 151065
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 151065
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                2033cc6efa98b831d7839e367aa7d5aa74d0750f
baseline version:
 qemuu                9e3903136d9acde2fb2dd9e967ba928050a6cb4a

Last test of basis   151065  2020-06-12 22:27:51 Z   29 days
Failing since        151101  2020-06-14 08:32:51 Z   28 days   38 attempts
Testing same since   151841  2020-07-12 02:46:58 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ahmed Karaman <ahmedkhaledkaraman@gmail.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Boettcher <alexander.boettcher@genode-labs.com>
  Alexander Bulekov <alxndr@bu.edu>
  Alexey Krasikov <alex-krasikov@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Allan Peramaki <aperamak@pp1.inet.fi>
  Andrew Jones <drjones@redhat.com>
  Ani Sinha <ani.sinha@nutanix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Ard Biesheuvel <ardb@kernel.org>
  Artyom Tarasenko <atar4qemu@gmail.com>
  Aurelien Jarno <aurelien@aurel32.net>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Beata Michalska <beata.michalska@linaro.org>
  Bin Meng <bin.meng@windriver.com>
  Cameron Esfahani <dirty@apple.com>
  Catherine A. Frederick <chocola@animebitch.es>
  Cathy Zhang <cathy.zhang@intel.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christophe de Dinechin <dinechin@redhat.com>
  Cindy Lu <lulu@redhat.com>
  Claudio Fontana <cfontana@suse.de>
  Cleber Rosa <crosa@redhat.com>
  Colin Xu <colin.xu@intel.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniele Buono <dbuono@linux.vnet.ibm.com>
  David Carlier <devnexen@gmail.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Derek Su <dereksu@qnap.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Emilio G. Cota <cota@braap.org>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Evgeny Yakovlev <eyakovlev@virtuozzo.com>
  fangying <fangying1@huawei.com>
  Farhan Ali <alifm@linux.ibm.com>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Geoffrey McRae <geoff@hostfission.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Giuseppe Musacchio <thatlemon@gmail.com>
  Greg Kurz <groug@kaod.org>
  Guenter Roeck <linux@roeck-us.net>
  Gustavo Romero <gromero@linux.ibm.com>
  Halil Pasic <pasic@linux.ibm.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Ian Jiang <ianjiang.ict@gmail.com>
  Igor Mammedov <imammedo@redhat.com>
  Jan Kiszka <jan.kiszka@siemens.com>
  Janne Grunau <j@jannau.net>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason Wang <jasowang@redhat.com>
  Jay Zhou <jianjay.zhou@huawei.com>
  Jean-Christophe Dubois <jcd@tribudubois.net>
  Jessica Clarke <jrtc27@jrtc27.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jingqi Liu <jingqi.liu@intel.com>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Joseph Myers <joseph@codesourcery.com>
  Julio Faracco <jcfaracco@gmail.com>
  Keqian Zhu <zhukeqian1@huawei.com>
  Kevin Wolf <kwolf@redhat.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Leif Lindholm <leif@nuviainc.com>
  Leonid Bloch <lb.workbox@gmail.com>
  Leonid Bloch <lbloch@janustech.com>
  Li Qiang <liq3ea@gmail.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  lichun <lichun@ruijie.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  Like Xu <like.xu@linux.intel.com>
  Lingfeng Yang <lfy@google.com>
  Lingshan zhu <lingshan.zhu@intel.com>
  Liran Alon <liran.alon@oracle.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Lukas Straub <lukasstraub2@web.de>
  Luwei Kang <luwei.kang@intel.com>
  Maciej S. Szmigiero <maciej.szmigiero@oracle.com>
  Magnus Damm <magnus.damm@gmail.com>
  Mao Zhongyi <maozhongyi@cmss.chinamobile.com>
  Marcelo Tosatti <mtosatti@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Mario Smarduch <msmarduch@digitalocean.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Masahiro Yamada <masahiroy@kernel.org>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Maxime Coquelin <maxime.coquelin@redhat.com>
  Menno Lageman <menno.lageman@oracle.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michele Denber <denber@mindspring.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Durrant <pdurrant@amazon.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Radoslaw Biernacki <rad@semihalf.com>
  Raphael Norwitz <raphael.norwitz@nutanix.com>
  Richard Henderson <richard.henderson@linaro.org>
  Richard W.M. Jones <rjones@redhat.com>
  Riku Voipio <riku.voipio@linaro.org>
  Robert Foley <robert.foley@linaro.org>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Roman Kagan <rkagan@virtuozzo.com>
  Roman Kagan <rvkagan@yandex-team.ru>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Sebastian Rasmussen <sebras@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Tao Xu <tao3.xu@intel.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tiwei Bie <tiwei.bie@intel.com>
  Tong Ho <tong.ho@xilinx.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  WangBowen <bowen.wang@intel.com>
  Wei Huang <wei.huang2@amd.com>
  Wei Wang <wei.w.wang@intel.com>
  Xie Yongji <xieyongji@bytedance.com>
  Ying Fang <fangying1@huawei.com>
  Yoshinori Sato <ysato@users.sourceforge.jp>
  Yuri Benditovich <yuri.benditovich@daynix.com>
  Zhang Chen <chen.zhang@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 23617 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Jul 12 19:18:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 12 Jul 2020 19:18:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1juhUN-0003Oi-IH; Sun, 12 Jul 2020 19:18:15 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YZHa=AX=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1juhUM-0003OO-6R
 for xen-devel@lists.xenproject.org; Sun, 12 Jul 2020 19:18:14 +0000
X-Inumbo-ID: 6a70a0e6-c474-11ea-8496-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6a70a0e6-c474-11ea-8496-bc764e2007e4;
 Sun, 12 Jul 2020 19:18:07 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=HO9tGjP5VAob3C7pJs6QsMKVFku1/PVCpDkVBwgr8q4=; b=hEx95TcErLZnewYF1MvsNqFeV
 ILdHJanomFAenlVlyWgtLkKIlF6rdeh9nd5sGINITqo0QvhVKHf8Eo2Y9fRoSD+fBXoPm1d0eSo/C
 kGLRPLfwIt463ROl1kYtFjXLN9LGZWIwmwab6JMQO7GQE+q0Gj+vwNP2ZJIIgn+u+Rkpo=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1juhUE-0004eo-D6; Sun, 12 Jul 2020 19:18:06 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1juhUE-0004BX-66; Sun, 12 Jul 2020 19:18:06 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1juhUE-0007NZ-5O; Sun, 12 Jul 2020 19:18:06 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151843-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 151843: regressions - FAIL
X-Osstest-Failures: linux-linus:build-i386-pvops:kernel-build:fail:regression
 linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-pair:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-examine:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: linux=0aea6d5c5be33ce94c16f9ab2f64de1f481f424b
X-Osstest-Versions-That: linux=1b5044021070efa3259f3e9548dc35d1eb6aa844
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 12 Jul 2020 19:18:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151843 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151843/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-pvops              6 kernel-build             fail REGR. vs. 151214

Tests which did not succeed, but are not blocking:
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemut-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemut-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-examine       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 151214
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 151214
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 151214
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 151214
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 151214
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 151214
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                0aea6d5c5be33ce94c16f9ab2f64de1f481f424b
baseline version:
 linux                1b5044021070efa3259f3e9548dc35d1eb6aa844

Last test of basis   151214  2020-06-18 02:27:46 Z   24 days
Failing since        151236  2020-06-19 19:10:35 Z   23 days   36 attempts
Testing same since   151838  2020-07-11 22:12:04 Z    0 days    2 attempts

------------------------------------------------------------
710 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             fail    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         blocked 
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 37325 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Jul 12 22:26:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 12 Jul 2020 22:26:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jukQL-0001f8-By; Sun, 12 Jul 2020 22:26:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Mbu0=AX=gmail.com=marietto2008@srs-us1.protection.inumbo.net>)
 id 1jukQJ-0001f3-P7
 for xen-devel@lists.xenproject.org; Sun, 12 Jul 2020 22:26:16 +0000
X-Inumbo-ID: b1fcbac0-c48e-11ea-bb8b-bc764e2007e4
Received: from mail-lj1-x22f.google.com (unknown [2a00:1450:4864:20::22f])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b1fcbac0-c48e-11ea-bb8b-bc764e2007e4;
 Sun, 12 Jul 2020 22:26:14 +0000 (UTC)
Received: by mail-lj1-x22f.google.com with SMTP id x9so3537017ljc.5
 for <xen-devel@lists.xenproject.org>; Sun, 12 Jul 2020 15:26:14 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=mime-version:from:date:message-id:subject:to;
 bh=DndvfLd9uyHXeaqFN4iLX+XhPAuX2eei+4DQXZwmmoQ=;
 b=R0zymY9NzRguKeF3Hh71iYc/U2gg44U2ED9YBRRZJRmSp6HN5B1RcUFk/MgGwvW1fK
 4vFCIU+tONt6FZAYODw7twB5peyobl5SMv1Bd2nqFZSjfKJ3rW0aTlDX4o+yH1LIUE7l
 A2kxb38wZI/kUAHiiiID8GOkHVx61O6Jmfwn/VD8+q6orZVpN6awdZLLAVyHpttsh9Tg
 xS1VngL3EDFMdyCiHvKa4c7zoIxe9Du9O76jkkWroUFRPlYmhWv5OD+2DcDSeuJVrSbF
 W02krBBTfbbRNCQe/vWsWwyfXktPfEWOTDbJLtZ88f0AnC/oKuC2LBkN8LRkps/85QO+
 dlYw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:from:date:message-id:subject:to;
 bh=DndvfLd9uyHXeaqFN4iLX+XhPAuX2eei+4DQXZwmmoQ=;
 b=oe/K3qS0RGRBTgf+OB9QWxdbGOg2WT0ujGCQ4d39Nrd+FGkefeuHweY7ElV6VGFLnW
 L4vvWTTDD8RD9e0qMKKxArMU12Kx9P/YOBqjMEG2ZXr7IQGJfuWZm0mOyrUAKZglkClI
 EkDzl9NWARG5jzaZQ09RfSeWc9fLFozXv8vikY/77PsBkIzhxjUE3bkkngUiIdh7f/KT
 RmB+4AY7wI3jdtCJEnWhz1g9wLcmWrjaeoC+OuQt/buZ3tQjDrGKORLO/ro2Thr0YPDo
 5soBqIEz4a98nUs0KCJR4G45IAfyn3/FW9zSRyAIfT1BBlTZ05KVh4Wopr/MBD4MlrHJ
 /FQA==
X-Gm-Message-State: AOAM533RPPGMO1hGjYPFg8YzOOX6zXwNDguZChekkwornLDIr85EcR7c
 l8FomywzSheN61Kq7kiGFvIZSRdyTPt/Bwwqo3qpmxIAoTM=
X-Google-Smtp-Source: ABdhPJz1/3l/ddPsxJTHUY47v/X/GdUawWM8qnUQXFttSZgWrcpzupr58vuh/PlGeyVGwkCKb0Z3JyUeLDDGaz6Yqn4=
X-Received: by 2002:a05:651c:307:: with SMTP id
 a7mr45376729ljp.297.1594592772935; 
 Sun, 12 Jul 2020 15:26:12 -0700 (PDT)
MIME-Version: 1.0
From: Mario Marietto <marietto2008@gmail.com>
Date: Mon, 13 Jul 2020 00:25:36 +0200
Message-ID: <CA+1FSiiQCcOnqJFJ0NM2mawZrmu5+5BQDUoAQ+-LeX3uAQozpA@mail.gmail.com>
Subject: config parsing error in disk specification: unknown value for format:
 near `hdb' in `/usr/share/ovmf/OVMF.fd,hdb,w'
To: xen-devel@lists.xenproject.org
Content-Type: multipart/alternative; boundary="000000000000cad9a605aa460f5e"
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

--000000000000cad9a605aa460f5e
Content-Type: text/plain; charset="UTF-8"

Hello.

I'm a new Xen user. I'm learning how the Xen hypervisor works; I installed it
on Ubuntu 20.04 with the command: apt install xen-hypervisor. I want to boot
the physical installation of Windows 10 x64 that I have on /dev/sdb, which
contains these partitions:


root@ziomario-I9:/etc/xen# fdisk /dev/sdb -l

Disk /dev/sdb: 465,78 GiB, 500107862016 bytes, 976773168 sectors
Disk model: Samsung SSD 860
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt

Device          Start       End   Sectors   Size Type
/dev/sdb1          34    262177    262144   128M Microsoft reserved
/dev/sdb2      264192   1286143   1021952   499M Windows recovery environment
/dev/sdb3     1286144   1488895    202752    99M EFI System
/dev/sdb4     1488896 975591423 974102528 464,5G Microsoft basic data
/dev/sdb5   975591424 976773119   1181696   577M Windows recovery environment


I'm reading here: https://wiki.xenproject.org/wiki/OVMF

It says that if I want to enable EFI/UEFI for virtual machines I should add
the parameter bios='ovmf', and it also gives this example:

# This is a disk image with an EFI guest installed; you can also use a live CD
# if you prefer.
disk = [ '/data/s0-efi.qcow2,qcow2,hda,w' ]
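For context on the wiki's example (a sketch based on xl-disk-configuration(5), not part of the original mail): xl parses the comma-separated fields of a disk entry positionally as target, format, vdev, access, and the same entry can also be written in the unambiguous key=value form:

```
# Positional form: target,format,vdev,access
disk = [ '/data/s0-efi.qcow2,qcow2,hda,w' ]

# Equivalent key=value form:
disk = [ 'format=qcow2, vdev=hda, access=rw, target=/data/s0-efi.qcow2' ]
```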


So, according to my situation, I created a cfg file like this:


builder = 'hvm'
bios = 'ovmf'
vif = [ 'type=ioemu, bridge=xenbr0' ]
memory = 8192
name = "windows-10" # domain prefix name
disk = [ '/usr/share/ovmf/OVMF.fd,hdb,w' ]
boot = "c"
vcpus = 6 # number of cpus to assign
sdl = 1
stdvga = 0
serial = 'pty'
usbdevice = 'tablet' # Required for USB mouse
on_poweroff = 'destroy'
on_reboot   = 'destroy'
on_crash    = 'preserve'
device_model_args_hvm = [
    # Debug OVMF
    '-chardev', 'file,id=debugcon,path=/etc/xen/ovmf.log,',
    '-device', 'isa-debugcon,iobase=0x402,chardev=debugcon',
]



But it gives the following error:

xenwin.cfg: config parsing error in disk specification: unknown value for
format: near `hdb' in `/usr/share/ovmf/OVMF.fd,hdb,w'


Do you know where the error is?
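The error quoted above is consistent with xl reading the second positional field (`hdb`) as the disk format, which is not a valid format name. Since bios='ovmf' already tells xl to load the firmware, the OVMF.fd image would not normally appear in disk=[] at all; a raw passthrough entry for the physical SSD might instead look like this (an untested sketch, assuming /dev/sdb is the intended boot disk):

```
# Sketch: pass the whole physical disk through as a raw block device.
# bios='ovmf' already supplies the firmware; no OVMF.fd disk entry is needed.
disk = [ 'format=raw, vdev=hda, access=rw, target=/dev/sdb' ]
```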

-- 
Mario.

--000000000000cad9a605aa460f5e--


From xen-devel-bounces@lists.xenproject.org Mon Jul 13 01:42:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jul 2020 01:42:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1junTw-0002ed-8P; Mon, 13 Jul 2020 01:42:12 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Fjqj=AY=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1junTu-0002eY-US
 for xen-devel@lists.xenproject.org; Mon, 13 Jul 2020 01:42:11 +0000
X-Inumbo-ID: 0e6f1f30-c4aa-11ea-9216-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0e6f1f30-c4aa-11ea-9216-12813bfff9fa;
 Mon, 13 Jul 2020 01:42:05 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=q32uwym4P7p+vdfLA9T3I82w07gO5d2OnMArxDCYBHc=; b=hmwZnOtj+XjrA3ghHB7VxOigp
 qRwiqeaOAzuHUOwAnZ/7W1J6nqEYinlhop+yTEN8QrirR7BO3+fGZrLdHqFnv70fPGVxWzbrds/Yg
 4k3x+mGVAt9WWl+27WdVgkKHKq/Nx08z/c/TagMBTtyjDKH/AL+famhROtLWdp6In3zo4=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1junTo-00058y-NG; Mon, 13 Jul 2020 01:42:04 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1junTo-0002ur-BH; Mon, 13 Jul 2020 01:42:04 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1junTo-0003CT-AN; Mon, 13 Jul 2020 01:42:04 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151849-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 151849: regressions - FAIL
X-Osstest-Failures: qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:redhat-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-freebsd10-i386:guest-start:fail:regression
 qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:redhat-install:fail:regression
 qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-start:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:windows-install:fail:regression
 qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: qemuu=d34498309cff7560ac90c422c56e3137e6a64b19
X-Osstest-Versions-That: qemuu=9e3903136d9acde2fb2dd9e967ba928050a6cb4a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 13 Jul 2020 01:42:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151849 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151849/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemuu-ovmf-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-qemuu-nested-intel 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-ovmf-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-qemuu-nested-amd 10 debian-hvm-install  fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-qemuu-rhel6hvm-amd 10 redhat-install     fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-debianhvm-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-freebsd10-i386 11 guest-start            fail REGR. vs. 151065
 test-amd64-i386-qemuu-rhel6hvm-intel 10 redhat-install   fail REGR. vs. 151065
 test-amd64-i386-freebsd10-amd64 11 guest-start           fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-win7-amd64 10 windows-install  fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-win7-amd64 10 windows-install   fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-ws16-amd64 10 windows-install  fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-ws16-amd64 10 windows-install   fail REGR. vs. 151065

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 151065
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 151065
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                d34498309cff7560ac90c422c56e3137e6a64b19
baseline version:
 qemuu                9e3903136d9acde2fb2dd9e967ba928050a6cb4a

Last test of basis   151065  2020-06-12 22:27:51 Z   30 days
Failing since        151101  2020-06-14 08:32:51 Z   28 days   39 attempts
Testing same since   151849  2020-07-12 16:37:51 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ahmed Karaman <ahmedkhaledkaraman@gmail.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.m.mail@gmail.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Boettcher <alexander.boettcher@genode-labs.com>
  Alexander Bulekov <alxndr@bu.edu>
  Alexey Krasikov <alex-krasikov@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Allan Peramaki <aperamak@pp1.inet.fi>
  Andrew Jones <drjones@redhat.com>
  Ani Sinha <ani.sinha@nutanix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Ard Biesheuvel <ardb@kernel.org>
  Artyom Tarasenko <atar4qemu@gmail.com>
  Aurelien Jarno <aurelien@aurel32.net>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Beata Michalska <beata.michalska@linaro.org>
  Bin Meng <bin.meng@windriver.com>
  Cameron Esfahani <dirty@apple.com>
  Catherine A. Frederick <chocola@animebitch.es>
  Cathy Zhang <cathy.zhang@intel.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christophe de Dinechin <dinechin@redhat.com>
  Cindy Lu <lulu@redhat.com>
  Claudio Fontana <cfontana@suse.de>
  Cleber Rosa <crosa@redhat.com>
  Colin Xu <colin.xu@intel.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniele Buono <dbuono@linux.vnet.ibm.com>
  David Carlier <devnexen@gmail.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Derek Su <dereksu@qnap.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Ed Robbins <E.J.C.Robbins@kent.ac.uk>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Emilio G. Cota <cota@braap.org>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Evgeny Yakovlev <eyakovlev@virtuozzo.com>
  fangying <fangying1@huawei.com>
  Farhan Ali <alifm@linux.ibm.com>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Geoffrey McRae <geoff@hostfission.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Giuseppe Musacchio <thatlemon@gmail.com>
  Greg Kurz <groug@kaod.org>
  Guenter Roeck <linux@roeck-us.net>
  Gustavo Romero <gromero@linux.ibm.com>
  Halil Pasic <pasic@linux.ibm.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Ian Jiang <ianjiang.ict@gmail.com>
  Igor Mammedov <imammedo@redhat.com>
  Jan Kiszka <jan.kiszka@siemens.com>
  Janne Grunau <j@jannau.net>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason Wang <jasowang@redhat.com>
  Jay Zhou <jianjay.zhou@huawei.com>
  Jean-Christophe Dubois <jcd@tribudubois.net>
  Jessica Clarke <jrtc27@jrtc27.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jingqi Liu <jingqi.liu@intel.com>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Joseph Myers <joseph@codesourcery.com>
  Julio Faracco <jcfaracco@gmail.com>
  Keqian Zhu <zhukeqian1@huawei.com>
  Kevin Wolf <kwolf@redhat.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Leif Lindholm <leif@nuviainc.com>
  Leonid Bloch <lb.workbox@gmail.com>
  Leonid Bloch <lbloch@janustech.com>
  Li Qiang <liq3ea@gmail.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  lichun <lichun@ruijie.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  Like Xu <like.xu@linux.intel.com>
  Lingfeng Yang <lfy@google.com>
  Lingshan zhu <lingshan.zhu@intel.com>
  Liran Alon <liran.alon@oracle.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Lukas Straub <lukasstraub2@web.de>
  Luwei Kang <luwei.kang@intel.com>
  Maciej S. Szmigiero <maciej.szmigiero@oracle.com>
  Magnus Damm <magnus.damm@gmail.com>
  Mao Zhongyi <maozhongyi@cmss.chinamobile.com>
  Marcelo Tosatti <mtosatti@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Mario Smarduch <msmarduch@digitalocean.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Masahiro Yamada <masahiroy@kernel.org>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Maxime Coquelin <maxime.coquelin@redhat.com>
  Menno Lageman <menno.lageman@oracle.com>
  Michael Rolnik <mrolnik@gmail.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michele Denber <denber@mindspring.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Durrant <pdurrant@amazon.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Radoslaw Biernacki <rad@semihalf.com>
  Raphael Norwitz <raphael.norwitz@nutanix.com>
  Richard Henderson <richard.henderson@linaro.org>
  Richard W.M. Jones <rjones@redhat.com>
  Riku Voipio <riku.voipio@linaro.org>
  Robert Foley <robert.foley@linaro.org>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Roman Kagan <rkagan@virtuozzo.com>
  Roman Kagan <rvkagan@yandex-team.ru>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Sarah Harris <S.E.Harris@kent.ac.uk>
  Sebastian Rasmussen <sebras@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Tao Xu <tao3.xu@intel.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tiwei Bie <tiwei.bie@intel.com>
  Tong Ho <tong.ho@xilinx.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  WangBowen <bowen.wang@intel.com>
  Wei Huang <wei.huang2@amd.com>
  Wei Wang <wei.w.wang@intel.com>
  Xie Yongji <xieyongji@bytedance.com>
  Ying Fang <fangying1@huawei.com>
  Yoshinori Sato <ysato@users.sourceforge.jp>
  Yuri Benditovich <yuri.benditovich@daynix.com>
  Zhang Chen <chen.zhang@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 24438 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Jul 13 01:53:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jul 2020 01:53:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1junfE-0003ZI-DO; Mon, 13 Jul 2020 01:53:52 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Ulr8=AY=nxp.com=peng.fan@srs-us1.protection.inumbo.net>)
 id 1junfD-0003ZD-39
 for xen-devel@lists.xenproject.org; Mon, 13 Jul 2020 01:53:51 +0000
X-Inumbo-ID: b17efcf8-c4ab-11ea-b7bb-bc764e2007e4
Received: from EUR05-VI1-obe.outbound.protection.outlook.com (unknown
 [40.107.21.54]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b17efcf8-c4ab-11ea-b7bb-bc764e2007e4;
 Mon, 13 Jul 2020 01:53:49 +0000 (UTC)
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=LruRMIOJFrkQOpMYDRd1yBY/0DXM3V6+A1bEDdoxc9RmaRwS9HBRvq5Hoo8xHILQZW5LkamzP69XibMXDNBMw8uz5gAEJZCjk5XcroVB/x0vmXZxf0EdxhDZ+SnC4ga775AoPClrV2YFkPP2nX5x0yQdBA0CQ6qEGFNLiupejdNV+bD+2V/QSs0TVAGo419en+wm72yiZaZTnoWJW5dnfrABUesjhRfwqEmX7KqUr0B+AJ42xNCigcv42m3enWoyH3jYGja6s4+Uxwu+yYH47lKX8iqZVTAzGAR5C7N2Vc6Ez5uwcmuiWJgt3NdYuaqCN3Hfhy0C+p2ZmR5yRSiLyA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; 
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=0GABxCLbPG/InTruHWRpo4qulx4rYnvByLeP8nYru0E=;
 b=YjkUykQ71iQuSBu4BfIcWg+2KGefoG+bOWCDyA7KR2I5eb3EzcO4TnGTidRM51/GAwwcFHzOhVDLN0P7GSpzxCrbrG5ZUdGpwZ2ACv1y8Ckvp1pIl/sAa0MQVk3zZRT8y9Q0qn8tcjCU8cm+ZOwW/N3TgPeWiiUMUZ3EKH07PmrLdu0xF5h8D3e5oYhrwpBejTMB/TdejTxbDV+dtLxN7l1NoIsKDzgLs6N/WxGevp5Cg2slHo5qMqkjcjBhXYdJhSNAD7vdQ2qyY2k6nzTlfPrqQ/rc379jBqoSjd72nhoCFmOeiZzBlwYHG+DgSZB3lVBaQb2zXQOlwKcMBfQ6yA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=nxp.com; dmarc=pass action=none header.from=nxp.com; dkim=pass
 header.d=nxp.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=nxp.com; s=selector2; 
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=0GABxCLbPG/InTruHWRpo4qulx4rYnvByLeP8nYru0E=;
 b=heSq42g8WHT6NiBV2e/XNCtUZpyt80CMWW/KlzGxuWhXKHsOY5GLi5GhgTP2lifKD5lRveY7y65FuZcKKfacA5WFDt+MQOBE9BmSPh58C0EN4mN65PPF51KMh9fOZtD9JyrEYy/jC2TwsFdW1roXU1na/IPUOsPO9cUNhvalgE0=
Received: from DB6PR0402MB2760.eurprd04.prod.outlook.com (2603:10a6:4:a1::14)
 by DB6PR0402MB2934.eurprd04.prod.outlook.com (2603:10a6:4:9b::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3174.23; Mon, 13 Jul
 2020 01:53:47 +0000
Received: from DB6PR0402MB2760.eurprd04.prod.outlook.com
 ([fe80::2d36:b569:17c:7701]) by DB6PR0402MB2760.eurprd04.prod.outlook.com
 ([fe80::2d36:b569:17c:7701%4]) with mapi id 15.20.3174.025; Mon, 13 Jul 2020
 01:53:46 +0000
From: Peng Fan <peng.fan@nxp.com>
To: Stefano Stabellini <sstabellini@kernel.org>, "Michael S. Tsirkin"
 <mst@redhat.com>
Subject: RE: [PATCH] xen: introduce xen_vring_use_dma
Thread-Topic: [PATCH] xen: introduce xen_vring_use_dma
Thread-Index: AQHWSgTusARd8c8cRkWwDit233DtZajneYoAgACU6oCAAC7QAIAAEpoAgAAGSwCAAUK2gIABcSaAgAVA3oCAAnnkAIAAQwqAgAA/4wCADeHhAIADsWjQ
Date: Mon, 13 Jul 2020 01:53:46 +0000
Message-ID: <DB6PR0402MB2760A98A427AA48FA325635288600@DB6PR0402MB2760.eurprd04.prod.outlook.com>
References: <20200624050355-mutt-send-email-mst@kernel.org>
 <alpine.DEB.2.21.2006241047010.8121@sstabellini-ThinkPad-T480s>
 <20200624163940-mutt-send-email-mst@kernel.org>
 <alpine.DEB.2.21.2006241351430.8121@sstabellini-ThinkPad-T480s>
 <20200624181026-mutt-send-email-mst@kernel.org>
 <alpine.DEB.2.21.2006251014230.8121@sstabellini-ThinkPad-T480s>
 <20200626110629-mutt-send-email-mst@kernel.org>
 <alpine.DEB.2.21.2006291621300.8121@sstabellini-ThinkPad-T480s>
 <20200701133456.GA23888@infradead.org>
 <alpine.DEB.2.21.2007011020320.8121@sstabellini-ThinkPad-T480s>
 <20200701172219-mutt-send-email-mst@kernel.org>
 <alpine.DEB.2.21.2007101019340.4124@sstabellini-ThinkPad-T480s>
In-Reply-To: <alpine.DEB.2.21.2007101019340.4124@sstabellini-ThinkPad-T480s>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
authentication-results: kernel.org; dkim=none (message not signed)
 header.d=none;kernel.org; dmarc=none action=none header.from=nxp.com;
x-originating-ip: [92.121.68.129]
x-ms-publictraffictype: Email
x-ms-office365-filtering-ht: Tenant
x-ms-office365-filtering-correlation-id: 99a7de08-78dc-4b39-c1f7-08d826cf94d4
x-ms-traffictypediagnostic: DB6PR0402MB2934:
x-ms-exchange-transport-forked: True
x-microsoft-antispam-prvs: <DB6PR0402MB293480B64104501E330E226088600@DB6PR0402MB2934.eurprd04.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:9508;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info: HNjTjTjWrIzXfE6ltpWdBR2DeplyC5i9JvzDPZQia160JH+4EpFBFzUBzaTYh+uNrM/VAFmAV3PbNhBJHmw4II1QBugVpSYd+ZkN3p/kF1yaQTw4pm/RfS3uD0WGoDkSTdS79kgwDookFi9BKKxCd10mfxdveukGFFsjh/BIVkUj48h+STYHB61GG8gkXTt6ITZV0OnndJ8ZhM6eTVmusVfqT4VGMIOba0QJ7G5tYI+xecyBwKWGL5LPqc2bRxVYTRFcQlD97nLK37PDGgCUsc7osct3GMyYTmmMbUfTR9BNamWstINU8Hh7Y9Zp4MCrITQOw3eHM8GlxxprJogzeQ==
x-forefront-antispam-report: CIP:255.255.255.255; CTRY:; LANG:en; SCL:1; SRV:;
 IPV:NLI; SFV:NSPM; H:DB6PR0402MB2760.eurprd04.prod.outlook.com; PTR:; CAT:NONE;
 SFTY:;
 SFS:(4636009)(366004)(346002)(39860400002)(376002)(136003)(396003)(66476007)(66556008)(64756008)(66446008)(66946007)(76116006)(2906002)(86362001)(478600001)(8936002)(8676002)(4326008)(83380400001)(71200400001)(26005)(7696005)(33656002)(44832011)(9686003)(55016002)(110136005)(7416002)(6506007)(316002)(54906003)(186003)(5660300002)(52536014);
 DIR:OUT; SFP:1101; 
x-ms-exchange-antispam-messagedata: Iun8yMXMvh1uJny1TTawzz+RrjDW6kn8ygbIzjr3Mw6Ca8T8w5/hMDadUD5xBFjCFBsj4QPIiReVKRta2T+kh2YkRdvG58bGxrewzphSpr8HT6vhsCU/1QmupGEgZaPVUseO/ppaC+y0yTT/POW9E40JnMMr0/uKZrcYxKazMrOMAZ5quH6xrF2sj8N35RRCDqhDLSO7PVwhVKOtExLwKPDhV3juJVIk4L/5+OAIO7iJ7QreF5xC50QYFLi4bgA8BVNNt2Zqu/LbiiW+hmOWV5wVog8ablQmmxHgJkE0xIk3grgQ7hQoDvcj7g5k+s0oaD80LLjXhzmt3qHbTr4j9jD6/shsyW94xYVPdTW/caw5dVIQXvMhI5QcGOe78Yf5tKAlBryF38etGJEkPrvsO8ReuhJj6/l9q4SeAjyMwFrrWlJ5zkv4Z6OHWS60F8GC4mGuHGqOxZeHW8OwvsxL9Uw/dBc3P99uO6gTD+BH168=
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-OriginatorOrg: nxp.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: DB6PR0402MB2760.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 99a7de08-78dc-4b39-c1f7-08d826cf94d4
X-MS-Exchange-CrossTenant-originalarrivaltime: 13 Jul 2020 01:53:46.8602 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 686ea1d3-bc2b-4c6f-a92c-d99c5c301635
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: nhMchUJ1LWLK4MCDdWLbSri36Ld+/UF1kdGDfCfQG+NsyzpEL7vFL5ZnooxMhdUBejFSfT+ZNO67zUrafCM7YA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB6PR0402MB2934
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "jgross@suse.com" <jgross@suse.com>,
 "konrad.wilk@oracle.com" <konrad.wilk@oracle.com>,
 "jasowang@redhat.com" <jasowang@redhat.com>, "x86@kernel.org" <x86@kernel.org>,
 "linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
 "virtualization@lists.linux-foundation.org"
 <virtualization@lists.linux-foundation.org>,
 Christoph Hellwig <hch@infradead.org>,
 "iommu@lists.linux-foundation.org" <iommu@lists.linux-foundation.org>,
 dl-linux-imx <linux-imx@nxp.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 "boris.ostrovsky@oracle.com" <boris.ostrovsky@oracle.com>,
 "linux-arm-kernel@lists.infradead.org" <linux-arm-kernel@lists.infradead.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> Subject: Re: [PATCH] xen: introduce xen_vring_use_dma
>
> Sorry for the late reply -- a couple of conferences kept me busy.
>
>
> On Wed, 1 Jul 2020, Michael S. Tsirkin wrote:
> > On Wed, Jul 01, 2020 at 10:34:53AM -0700, Stefano Stabellini wrote:
> > > Would you be in favor of a more flexible check along the lines of
> > > the one proposed in the patch that started this thread:
> > >
> > >     if (xen_vring_use_dma())
> > >             return true;
> > >
> > >
> > > xen_vring_use_dma would be implemented so that it returns true when
> > > xen_swiotlb is required and false otherwise.
> >
> > Just to stress - with a patch like this virtio can *still* use DMA API
> > if PLATFORM_ACCESS is set. So if DMA API is broken on some platforms
> > as you seem to be saying, you guys should fix it before doing
> > something like this..
>
> Yes, DMA API is broken with some interfaces (specifically: rpmsg and trusty),
> but for them PLATFORM_ACCESS is never set. That is why the errors weren't
> reported before. Xen special case aside, there is no problem under normal
> circumstances.
>
>
> If you are OK with this patch (after a little bit of clean-up), Peng, are=
 you OK
> with sending an update or do you want me to?

If you could help, that would be great. You have a better grasp of
the whole picture.

Thanks,
Peng.



From xen-devel-bounces@lists.xenproject.org Mon Jul 13 02:22:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jul 2020 02:22:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1juo79-0006NU-Sl; Mon, 13 Jul 2020 02:22:43 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Fjqj=AY=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1juo78-0006NA-5u
 for xen-devel@lists.xenproject.org; Mon, 13 Jul 2020 02:22:42 +0000
X-Inumbo-ID: b695259c-c4af-11ea-921a-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b695259c-c4af-11ea-921a-12813bfff9fa;
 Mon, 13 Jul 2020 02:22:35 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=UBRFLU5PtylScJ86UwlxDUY52xR4jZ2Hu+BQNmcyLok=; b=XCV2FDNB15XTT1eZuwTu3ZXnq
 pASiWfjN/nfK7ydxTqNK8WuHszPPehGVO06eu8ZTt9GnTFGOqs/kBtt23kFEzGHMhjHZ/ESow7V0S
 Q388rKTjZjjDHSP+rKx4O7hnG/zppcxjgVDjyTnO2FZ9XQWM6ZN6ldcQ/p8onV+fqPryE=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1juo70-0006Jl-6R; Mon, 13 Jul 2020 02:22:34 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1juo6z-00052M-Gv; Mon, 13 Jul 2020 02:22:33 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1juo6z-00081r-Fs; Mon, 13 Jul 2020 02:22:33 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151852-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 151852: regressions - FAIL
X-Osstest-Failures: linux-linus:test-amd64-amd64-examine:memdisk-try-append:fail:regression
 linux-linus:build-i386-pvops:kernel-build:fail:regression
 linux-linus:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-examine:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-pair:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: linux=4437dd6e8f71e8b4b5546924a6e48e7edaf4a8dc
X-Osstest-Versions-That: linux=1b5044021070efa3259f3e9548dc35d1eb6aa844
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 13 Jul 2020 02:22:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151852 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151852/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-examine      4 memdisk-try-append       fail REGR. vs. 151214
 build-i386-pvops              6 kernel-build             fail REGR. vs. 151214

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemut-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemut-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-examine       1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 151214
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 151214
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 151214
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 151214
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 151214
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 151214
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                4437dd6e8f71e8b4b5546924a6e48e7edaf4a8dc
baseline version:
 linux                1b5044021070efa3259f3e9548dc35d1eb6aa844

Last test of basis   151214  2020-06-18 02:27:46 Z   24 days
Failing since        151236  2020-06-19 19:10:35 Z   23 days   37 attempts
Testing same since   151852  2020-07-12 19:40:20 Z    0 days    1 attempts

------------------------------------------------------------
715 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             fail    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         blocked 
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 37788 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Jul 13 05:26:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jul 2020 05:26:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1juqyE-00057I-H3; Mon, 13 Jul 2020 05:25:42 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=C18W=AY=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1juqyD-00057D-Go
 for xen-devel@lists.xenproject.org; Mon, 13 Jul 2020 05:25:41 +0000
X-Inumbo-ID: 49459df4-c4c9-11ea-9222-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 49459df4-c4c9-11ea-9222-12813bfff9fa;
 Mon, 13 Jul 2020 05:25:39 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id D9E3BAB89;
 Mon, 13 Jul 2020 05:25:39 +0000 (UTC)
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH] docs: specify stability of hypfs path documentation
Date: Mon, 13 Jul 2020 07:16:39 +0200
Message-Id: <20200713051639.26948-1-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, paul@xen.org, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

In docs/misc/hypfs-paths.pandoc the supported paths in the hypervisor
file system are specified. Make it clearer that path availability
might change, e.g. due to a path's scope widening or narrowing (such
as being limited to a specific architecture).

Signed-off-by: Juergen Gross <jgross@suse.com>
---
This might be a candidate for 4.14, as hypfs is new in 4.14 and the
documentation should be as clear as possible.
---
 docs/misc/hypfs-paths.pandoc | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/docs/misc/hypfs-paths.pandoc b/docs/misc/hypfs-paths.pandoc
index a111c6f25c..7ad4b7ba95 100644
--- a/docs/misc/hypfs-paths.pandoc
+++ b/docs/misc/hypfs-paths.pandoc
@@ -5,6 +5,9 @@ in the Xen hypervisor file system (hypfs).
 
 The hypervisor file system can be accessed via the xenhypfs tool.
 
+The availability of the hypervisor file system depends on the hypervisor
+config option CONFIG_HYPFS, which is enabled by default.
+
 ## Notation
 
 The hypervisor file system is similar to the Linux kernel's sysfs.
@@ -55,6 +58,11 @@ tags enclosed in square brackets.
 * CONFIG_* -- Path is valid only in case the hypervisor was built with
   the respective config option.
 
+Path availability is subject to change, e.g. a specific path specified
+for a single architecture now might be made available for other architectures
+in future, or it could be made conditional by an additional config option
+of the hypervisor.
+
 So an entry could look like this:
 
     /cpu-bugs/active-pv/xpti = ("No"|{"dom0", "domU", "PCID-on"}) [w,X86,PV]
-- 
2.26.2
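[Editorial aside: the entry notation documented by this patch (a path, a value specification, and an optional bracketed tag list such as `[w,X86,PV]`) is mechanical enough to parse. A minimal standalone sketch, hypothetical code and not part of Xen's tooling, could look like:]

```python
import re

# Hypothetical parser for hypfs-paths.pandoc entry lines of the form
#   /some/path = <value-spec> [tag,tag,...]
# The trailing tag list (e.g. [w,X86,PV]) is optional.
ENTRY_RE = re.compile(
    r'^(?P<path>\S+)\s*=\s*(?P<spec>.*?)(?:\s*\[(?P<tags>[^\]]*)\])?$')

def parse_entry(line):
    m = ENTRY_RE.match(line.strip())
    if not m:
        raise ValueError(f"not an entry line: {line!r}")
    tags = m.group('tags')
    return {
        'path': m.group('path'),
        'spec': m.group('spec'),
        'tags': tags.split(',') if tags else [],
    }

entry = parse_entry(
    '/cpu-bugs/active-pv/xpti = ("No"|{"dom0", "domU", "PCID-on"}) [w,X86,PV]')
print(entry['path'])  # /cpu-bugs/active-pv/xpti
print(entry['tags'])  # ['w', 'X86', 'PV']
```

The example entry is the one quoted from the document itself; any other path used with this sketch is illustrative only.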



From xen-devel-bounces@lists.xenproject.org Mon Jul 13 06:26:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jul 2020 06:26:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1juruc-0001bC-5T; Mon, 13 Jul 2020 06:26:02 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Fjqj=AY=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jurub-0001as-Hl
 for xen-devel@lists.xenproject.org; Mon, 13 Jul 2020 06:26:01 +0000
X-Inumbo-ID: b52a910c-c4d1-11ea-b7bb-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b52a910c-c4d1-11ea-b7bb-bc764e2007e4;
 Mon, 13 Jul 2020 06:25:55 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=8bsQaYRAHOxS+cQVVJ87dgV4fmWhx4LV8pu2Xfm9Qjc=; b=GWc/1kavgj+L9bM4YzL+7smfU
 dH35LN1zXDyvysjZYyqxq6I1pxy7YXnAKJOIU/RHYI5dt56RVYbkLAud2yeiznhLjlyJko+Q7muaK
 q6D2jqcRd242/SAgHpp4jCSmCp8Q5e1sM6iIADwRqcK63nQFNS56ySjt6/9psSL2Whntw=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1juruV-0003Zx-0y; Mon, 13 Jul 2020 06:25:55 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1juruU-0001Gc-L3; Mon, 13 Jul 2020 06:25:54 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1juruU-0005Eg-KO; Mon, 13 Jul 2020 06:25:54 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151858-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 151858: regressions - FAIL
X-Osstest-Failures: libvirt:build-amd64-libvirt:libvirt-build:fail:regression
 libvirt:build-i386-libvirt:libvirt-build:fail:regression
 libvirt:build-arm64-libvirt:libvirt-build:fail:regression
 libvirt:build-armhf-libvirt:libvirt-build:fail:regression
 libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
 libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
X-Osstest-Versions-This: libvirt=866bb996441cd73800643df9fb7df2939bdb75bd
X-Osstest-Versions-That: libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 13 Jul 2020 06:25:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151858 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151858/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              866bb996441cd73800643df9fb7df2939bdb75bd
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z    3 days
Testing same since   151818  2020-07-11 04:18:52 Z    2 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Laine Stump <laine@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Pavel Hrdina <phrdina@redhat.com>
  Prathamesh Chavan <pc44800@gmail.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 691 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Jul 13 07:46:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jul 2020 07:46:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jut9j-00087i-Hb; Mon, 13 Jul 2020 07:45:43 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=C18W=AY=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jut9i-000877-4R
 for xen-devel@lists.xenproject.org; Mon, 13 Jul 2020 07:45:42 +0000
X-Inumbo-ID: d6233408-c4dc-11ea-8496-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d6233408-c4dc-11ea-8496-bc764e2007e4;
 Mon, 13 Jul 2020 07:45:35 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id C9652AC82;
 Mon, 13 Jul 2020 07:45:36 +0000 (UTC)
From: Juergen Gross <jgross@suse.com>
To: minios-devel@lists.xenproject.org,
	xen-devel@lists.xenproject.org
Subject: [PATCH] mini-os: don't hard-wire xen internal paths
Date: Mon, 13 Jul 2020 09:45:31 +0200
Message-Id: <20200713074531.27547-1-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>, samuel.thibault@ens-lyon.org, wl@xen.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Mini-OS shouldn't use Xen internal paths for building. Import the
needed paths from Xen and fall back to the current values only if
the import was not possible.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 Config.mk | 15 ++++++++++++++-
 Makefile  | 35 ++++++++++++++++++-----------------
 2 files changed, 32 insertions(+), 18 deletions(-)

diff --git a/Config.mk b/Config.mk
index f6a2afa..cb823c2 100644
--- a/Config.mk
+++ b/Config.mk
@@ -33,6 +33,19 @@ endif
 #
 ifneq ($(XEN_ROOT),)
 MINIOS_ROOT=$(XEN_ROOT)/extras/mini-os
+
+-include $(XEN_ROOT)/stubdom/mini-os.mk
+
+XENSTORE_CPPFLAGS ?= -isystem $(XEN_ROOT)/tools/xenstore/include
+TOOLCORE_PATH ?= $(XEN_ROOT)/stubdom/libs-$(MINIOS_TARGET_ARCH)/toolcore
+TOOLLOG_PATH ?= $(XEN_ROOT)/stubdom/libs-$(MINIOS_TARGET_ARCH)/toollog
+EVTCHN_PATH ?= $(XEN_ROOT)/stubdom/libs-$(MINIOS_TARGET_ARCH)/evtchn
+GNTTAB_PATH ?= $(XEN_ROOT)/stubdom/libs-$(MINIOS_TARGET_ARCH)/gnttab
+CALL_PATH ?= $(XEN_ROOT)/stubdom/libs-$(MINIOS_TARGET_ARCH)/call
+FOREIGNMEMORY_PATH ?= $(XEN_ROOT)/stubdom/libs-$(MINIOS_TARGET_ARCH)/foreignmemory
+DEVICEMODEL_PATH ?= $(XEN_ROOT)/stubdom/libs-$(MINIOS_TARGET_ARCH)/devicemodel
+CTRL_PATH ?= $(XEN_ROOT)/stubdom/libxc-$(MINIOS_TARGET_ARCH)
+GUEST_PATH ?= $(XEN_ROOT)/stubdom/libxc-$(MINIOS_TARGET_ARCH)
 else
 MINIOS_ROOT=$(TOPLEVEL_DIR)
 endif
@@ -93,7 +106,7 @@ DEF_CPPFLAGS += -D__MINIOS__
 ifeq ($(libc),y)
 DEF_CPPFLAGS += -DHAVE_LIBC
 DEF_CPPFLAGS += -isystem $(MINIOS_ROOT)/include/posix
-DEF_CPPFLAGS += -isystem $(XEN_ROOT)/tools/xenstore/include
+DEF_CPPFLAGS += $(XENSTORE_CPPFLAGS)
 endif
 
 ifneq ($(LWIPDIR),)
diff --git a/Makefile b/Makefile
index be640cd..82422a5 100644
--- a/Makefile
+++ b/Makefile
@@ -125,23 +125,24 @@ OBJS := $(filter-out $(OBJ_DIR)/lwip%.o $(LWO), $(OBJS))
 
 ifeq ($(libc),y)
 ifeq ($(CONFIG_XC),y)
-APP_LDLIBS += -L$(XEN_ROOT)/stubdom/libs-$(MINIOS_TARGET_ARCH)/toolcore -whole-archive -lxentoolcore -no-whole-archive
-LIBS += $(XEN_ROOT)/stubdom/libs-$(MINIOS_TARGET_ARCH)/toolcore/libxentoolcore.a
-APP_LDLIBS += -L$(XEN_ROOT)/stubdom/libs-$(MINIOS_TARGET_ARCH)/toollog -whole-archive -lxentoollog -no-whole-archive
-LIBS += $(XEN_ROOT)/stubdom/libs-$(MINIOS_TARGET_ARCH)/toollog/libxentoollog.a
-APP_LDLIBS += -L$(XEN_ROOT)/stubdom/libs-$(MINIOS_TARGET_ARCH)/evtchn -whole-archive -lxenevtchn -no-whole-archive
-LIBS += $(XEN_ROOT)/stubdom/libs-$(MINIOS_TARGET_ARCH)/evtchn/libxenevtchn.a
-APP_LDLIBS += -L$(XEN_ROOT)/stubdom/libs-$(MINIOS_TARGET_ARCH)/gnttab -whole-archive -lxengnttab -no-whole-archive
-LIBS += $(XEN_ROOT)/stubdom/libs-$(MINIOS_TARGET_ARCH)/gnttab/libxengnttab.a
-APP_LDLIBS += -L$(XEN_ROOT)/stubdom/libs-$(MINIOS_TARGET_ARCH)/call -whole-archive -lxencall -no-whole-archive
-LIBS += $(XEN_ROOT)/stubdom/libs-$(MINIOS_TARGET_ARCH)/call/libxencall.a
-APP_LDLIBS += -L$(XEN_ROOT)/stubdom/libs-$(MINIOS_TARGET_ARCH)/foreignmemory -whole-archive -lxenforeignmemory -no-whole-archive
-LIBS += $(XEN_ROOT)/stubdom/libs-$(MINIOS_TARGET_ARCH)/foreignmemory/libxenforeignmemory.a
-APP_LDLIBS += -L$(XEN_ROOT)/stubdom/libs-$(MINIOS_TARGET_ARCH)/devicemodel -whole-archive -lxendevicemodel -no-whole-archive
-LIBS += $(XEN_ROOT)/stubdom/libs-$(MINIOS_TARGET_ARCH)/devicemodel/libxendevicemodel.a
-APP_LDLIBS += -L$(XEN_ROOT)/stubdom/libxc-$(MINIOS_TARGET_ARCH) -whole-archive -lxenguest -lxenctrl -no-whole-archive
-LIBS += $(XEN_ROOT)/stubdom/libxc-$(MINIOS_TARGET_ARCH)/libxenctrl.a
-LIBS += $(XEN_ROOT)/stubdom/libxc-$(MINIOS_TARGET_ARCH)/libxenguest.a
+APP_LDLIBS += -L$(TOOLCORE_PATH) -whole-archive -lxentoolcore -no-whole-archive
+LIBS += $(TOOLCORE_PATH)/libxentoolcore.a
+APP_LDLIBS += -L$(TOOLLOG_PATH) -whole-archive -lxentoollog -no-whole-archive
+LIBS += $(TOOLLOG_PATH)/libxentoollog.a
+APP_LDLIBS += -L$(EVTCHN_PATH) -whole-archive -lxenevtchn -no-whole-archive
+LIBS += $(EVTCHN_PATH)/libxenevtchn.a
+APP_LDLIBS += -L$(GNTTAB_PATH) -whole-archive -lxengnttab -no-whole-archive
+LIBS += $(GNTTAB_PATH)/libxengnttab.a
+APP_LDLIBS += -L$(CALL_PATH) -whole-archive -lxencall -no-whole-archive
+LIBS += $(CALL_PATH)/libxencall.a
+APP_LDLIBS += -L$(FOREIGNMEMORY_PATH) -whole-archive -lxenforeignmemory -no-whole-archive
+LIBS += $(FOREIGNMEMORY_PATH)/libxenforeignmemory.a
+APP_LDLIBS += -L$(DEVICEMODEL_PATH) -whole-archive -lxendevicemodel -no-whole-archive
+LIBS += $(DEVICEMODEL_PATH)/libxendevicemodel.a
+APP_LDLIBS += -L$(GUEST_PATH) -whole-archive -lxenguest -no-whole-archive
+LIBS += $(GUEST_PATH)/libxenguest.a
+APP_LDLIBS += -L$(CTRL_PATH) -whole-archive -lxenctrl -no-whole-archive
+LIBS += $(CTRL_PATH)/libxenctrl.a
 endif
 APP_LDLIBS += -lpci
 APP_LDLIBS += -lz
-- 
2.26.2
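[Editorial aside: the patch rests on two Make idioms: `-include` ignores the file if it does not exist, and `?=` assigns only when the variable is still unset. The resulting "import overrides, then fall back to defaults" behaviour can be sketched generically in Python; the variable names below mirror the patch but the code itself is illustrative only:]

```python
# Sketch of the Make pattern used in the patch:
#   -include $(XEN_ROOT)/stubdom/mini-os.mk  -> optional import of overrides
#   VAR ?= default                           -> assign only if not already set
def resolve_paths(imported):
    """imported: dict of overrides from Xen; empty if the optional
    include file was missing (mirroring -include semantics)."""
    paths = dict(imported)  # imported values win
    defaults = {
        'TOOLCORE_PATH': 'XEN_ROOT/stubdom/libs-ARCH/toolcore',
        'EVTCHN_PATH': 'XEN_ROOT/stubdom/libs-ARCH/evtchn',
    }
    for var, value in defaults.items():
        paths.setdefault(var, value)  # the ?= fallback
    return paths

# Import succeeded: Xen's value is kept.
print(resolve_paths({'EVTCHN_PATH': '/custom/evtchn'})['EVTCHN_PATH'])
# Import missing: fall back to the hard-wired default.
print(resolve_paths({})['EVTCHN_PATH'])
```

The design choice this illustrates: callers that know the real paths (the Xen build) export them, while a standalone Mini-OS build still works from the previous hard-wired defaults.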



From xen-devel-bounces@lists.xenproject.org Mon Jul 13 07:56:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jul 2020 07:56:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jutKJ-0000c0-Oa; Mon, 13 Jul 2020 07:56:39 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=S1uT=AY=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jutKI-0000bv-CS
 for xen-devel@lists.xenproject.org; Mon, 13 Jul 2020 07:56:38 +0000
X-Inumbo-ID: 60ae0d54-c4de-11ea-922b-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 60ae0d54-c4de-11ea-922b-12813bfff9fa;
 Mon, 13 Jul 2020 07:56:37 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id B2295ADAB;
 Mon, 13 Jul 2020 07:56:38 +0000 (UTC)
Subject: Re: [PATCH] docs: specify stability of hypfs path documentation
To: Juergen Gross <jgross@suse.com>
References: <20200713051639.26948-1-jgross@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <057edc96-85ae-2511-9713-99341a6b6486@suse.com>
Date: Mon, 13 Jul 2020 09:56:35 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20200713051639.26948-1-jgross@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, paul@xen.org, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 13.07.2020 07:16, Juergen Gross wrote:
> @@ -55,6 +58,11 @@ tags enclosed in square brackets.
>  * CONFIG_* -- Path is valid only in case the hypervisor was built with
>    the respective config option.
>  
> +Path availability is subject to change, e.g. a specific path specified
> +for a single architecture now might be made available for other architectures
> +in future, or it could be made conditional by an additional config option
> +of the hypervisor.

I agree this is worth clarifying. To me, between the lines, this
then suggests that paths are entirely unreliable, which I don't
think is what we want. So perhaps some further wording could be
added to make clear that we're not going to arbitrarily change
paths or their meaning? Or am I mistaken in understanding that
this interface is meant to act ABI-like?

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jul 13 08:04:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jul 2020 08:04:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jutRg-00022m-VE; Mon, 13 Jul 2020 08:04:16 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=C18W=AY=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jutRf-00022h-2L
 for xen-devel@lists.xenproject.org; Mon, 13 Jul 2020 08:04:15 +0000
X-Inumbo-ID: 6e6aa546-c4df-11ea-922e-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6e6aa546-c4df-11ea-922e-12813bfff9fa;
 Mon, 13 Jul 2020 08:04:10 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 45576AB55;
 Mon, 13 Jul 2020 08:04:11 +0000 (UTC)
Subject: Re: [PATCH] docs: specify stability of hypfs path documentation
To: Jan Beulich <jbeulich@suse.com>
References: <20200713051639.26948-1-jgross@suse.com>
 <057edc96-85ae-2511-9713-99341a6b6486@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <e8187851-9d1f-89d1-6e6b-53881e79a6b8@suse.com>
Date: Mon, 13 Jul 2020 10:04:08 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.9.0
MIME-Version: 1.0
In-Reply-To: <057edc96-85ae-2511-9713-99341a6b6486@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, paul@xen.org, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 13.07.20 09:56, Jan Beulich wrote:
> On 13.07.2020 07:16, Juergen Gross wrote:
>> @@ -55,6 +58,11 @@ tags enclosed in square brackets.
>>   * CONFIG_* -- Path is valid only in case the hypervisor was built with
>>     the respective config option.
>>   
>> +Path availability is subject to change, e.g. a specific path specified
>> +for a single architecture now might be made available for other architectures
>> +in the future, or it could be made conditional on an additional config option
>> +of the hypervisor.
> 
> I agree this is worthwhile clarifying. To me, between the lines,
> this then suggests that paths are entirely unreliable, which I
> don't think is what we want. So perhaps some further wording
> could be added to clarify that we're not going to arbitrarily
> change paths or their meaning? Or am I mistaken in understanding
> that this interface is meant to act ABI-like?

You are right. What about the following rewording:

"In case a tag for a path indicates that this path is available in some
  cases only, this availability might be extended or reduced in the future by
  modification or removal of the tag."


Juergen


From xen-devel-bounces@lists.xenproject.org Mon Jul 13 08:07:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jul 2020 08:07:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jutUb-0002A6-EY; Mon, 13 Jul 2020 08:07:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=S1uT=AY=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jutUa-00029y-3H
 for xen-devel@lists.xenproject.org; Mon, 13 Jul 2020 08:07:16 +0000
X-Inumbo-ID: dce5fdb8-c4df-11ea-bca7-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id dce5fdb8-c4df-11ea-bca7-bc764e2007e4;
 Mon, 13 Jul 2020 08:07:15 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id A39C5AB7D;
 Mon, 13 Jul 2020 08:07:16 +0000 (UTC)
Subject: Re: [PATCH] docs: specify stability of hypfs path documentation
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
References: <20200713051639.26948-1-jgross@suse.com>
 <057edc96-85ae-2511-9713-99341a6b6486@suse.com>
 <e8187851-9d1f-89d1-6e6b-53881e79a6b8@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <6187b535-bb04-4b7a-fbc8-7f6c297c1ddc@suse.com>
Date: Mon, 13 Jul 2020 10:07:13 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <e8187851-9d1f-89d1-6e6b-53881e79a6b8@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, paul@xen.org, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 13.07.2020 10:04, Jürgen Groß wrote:
> On 13.07.20 09:56, Jan Beulich wrote:
>> On 13.07.2020 07:16, Juergen Gross wrote:
>>> @@ -55,6 +58,11 @@ tags enclosed in square brackets.
>>>   * CONFIG_* -- Path is valid only in case the hypervisor was built with
>>>     the respective config option.
>>>   
>>> +Path availability is subject to change, e.g. a specific path specified
>>> +for a single architecture now might be made available for other architectures
>>> +in the future, or it could be made conditional on an additional config option
>>> +of the hypervisor.
>>
>> I agree this is worthwhile clarifying. To me, between the lines,
>> this then suggests that paths are entirely unreliable, which I
>> don't think is what we want. So perhaps some further wording
>> could be added to clarify that we're not going to arbitrarily
>> change paths or their meaning? Or am I mistaken in understanding
>> that this interface is meant to act ABI-like?
> 
> You are right. What about the following rewording:
> 
> "In case a tag for a path indicates that this path is available in some
>   cases only, this availability might be extended or reduced in the future by
>   modification or removal of the tag."

Yes, yet I'd go even further and add "A path once assigned a meaning
won't go away altogether or change its meaning, though" or some such.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jul 13 08:08:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jul 2020 08:08:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jutVR-0002Ey-Oe; Mon, 13 Jul 2020 08:08:09 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=bAa1=AY=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1jutVQ-0002Es-TB
 for xen-devel@lists.xenproject.org; Mon, 13 Jul 2020 08:08:08 +0000
X-Inumbo-ID: fc46ce08-c4df-11ea-8496-bc764e2007e4
Received: from mail-wr1-x443.google.com (unknown [2a00:1450:4864:20::443])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id fc46ce08-c4df-11ea-8496-bc764e2007e4;
 Mon, 13 Jul 2020 08:08:08 +0000 (UTC)
Received: by mail-wr1-x443.google.com with SMTP id j4so14795633wrp.10
 for <xen-devel@lists.xenproject.org>; Mon, 13 Jul 2020 01:08:08 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
 :mime-version:content-transfer-encoding:content-language
 :thread-index; bh=n0kxcs6gNGs9FJbIs3Lc7T0KsS0d3xL066fOjGqZegk=;
 b=pcjaFOmOWH7bFsme46lfUzgiDMH76YUAcKrxY/kWWwAMFaTOK1EmHZlPDKfvD4LqAH
 4blcM1nrhzcvamHkZjtv2KCa8he/ylRAPTPVNfVLgziX8YdA5pQEVv+w0J+RydwcGzm6
 tw5AdBLeQ5Uz7FGfe2vzx7v9sDfeGtYZw3CoEau0WOf8iCRpchJ23m65JR+oRaH4Y1mw
 0QHtR3c2QqUHhCTy/ClyvX16VoddM0RzMmyhglPZl+vYVY82+IhZoTzQIXpBZ6At7W5S
 y2zcFt6XbooWaVXBLQBVPdlskr+GaxmNnedVbh6QcQVOBMIsmyTwfR9/uMQ96jY3BZOQ
 lB1w==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
 :subject:date:message-id:mime-version:content-transfer-encoding
 :content-language:thread-index;
 bh=n0kxcs6gNGs9FJbIs3Lc7T0KsS0d3xL066fOjGqZegk=;
 b=RfCLuOlcl/+VSjtxM5R13+ddcPLaU/zM9NbT0Z5a0y3ZTJDMVeLSyFllbVC5U5QTvx
 kNRb2NC2cXlk8L1vgDsQRKp5fXmd4WqM1TL1Fci05cqtMMBiSIcJwA7C5/RbEcYTD71c
 0vrF7laRkX7Vel5FXesIn8VuwMKIrYTZTUbszyeAEUyAyBNIrimfSbTkLKTbELbOh9I2
 EO0amFlayDn2Y0xuWHzIYFC/GAPAXFCHPrXQgrkq5Kiu93fxE/nrSSkbf02f8eRb3ESS
 o5AoF3gcLyLB9GKJbnkTGWqt714Wjeh2txbJzgw2OqTlNdfIo1UM1d8fkUKRqkwfpll4
 AFXw==
X-Gm-Message-State: AOAM530sAJIKfw+qGhODw446OiZ2wFFg7v/35ntRszIszJJ3ORb0VLaP
 Omb7Fwvjh40GktJ4IODY6PM=
X-Google-Smtp-Source: ABdhPJytfL/cIdT+33/hj/QLy5PvtlQ6lTN3tAN0bJF3PHbH20eU5KnmkWKiks/uAreR0c/RFyjo9Q==
X-Received: by 2002:a5d:4845:: with SMTP id n5mr76883891wrs.353.1594627687426; 
 Mon, 13 Jul 2020 01:08:07 -0700 (PDT)
Received: from CBGR90WXYV0 (54-240-197-236.amazon.com. [54.240.197.236])
 by smtp.gmail.com with ESMTPSA id v3sm22686177wrq.57.2020.07.13.01.08.05
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Mon, 13 Jul 2020 01:08:06 -0700 (PDT)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
To: "'Souptick Joarder'" <jrdr.linux@gmail.com>, <boris.ostrovsky@oracle.com>,
 <jgross@suse.com>, <sstabellini@kernel.org>
References: <1594525195-28345-1-git-send-email-jrdr.linux@gmail.com>
 <1594525195-28345-2-git-send-email-jrdr.linux@gmail.com>
In-Reply-To: <1594525195-28345-2-git-send-email-jrdr.linux@gmail.com>
Subject: RE: [PATCH v3 1/3] xen/privcmd: Corrected error handling path
Date: Mon, 13 Jul 2020 09:08:04 +0100
Message-ID: <003801d658ec$bd526c70$37f74550$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="us-ascii"
Content-Transfer-Encoding: 7bit
X-Mailer: Microsoft Outlook 16.0
Content-Language: en-gb
Thread-Index: AQGQUUjbM0hB7euxJ1kpRMo6FNfk+gI2hbljqX+CmZA=
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Reply-To: paul@xen.org
Cc: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org,
 'Paul Durrant' <xadimgnik@gmail.com>, 'John Hubbard' <jhubbard@nvidia.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> -----Original Message-----
> From: Souptick Joarder <jrdr.linux@gmail.com>
> Sent: 12 July 2020 04:40
> To: boris.ostrovsky@oracle.com; jgross@suse.com; sstabellini@kernel.org
> Cc: xen-devel@lists.xenproject.org; linux-kernel@vger.kernel.org; Souptick Joarder
> <jrdr.linux@gmail.com>; John Hubbard <jhubbard@nvidia.com>; Paul Durrant <xadimgnik@gmail.com>
> Subject: [PATCH v3 1/3] xen/privcmd: Corrected error handling path
> 
> Previously, if lock_pages() ended up partially mapping pages, it
> returned -ERRNO, due to which unlock_pages() had to go through each
> pages[i] up to *nr_pages* to validate them. This can be avoided by
> passing the correct number of partially mapped pages and -ERRNO
> separately when returning from lock_pages() due to an error.
> 
> With this fix unlock_pages() doesn't need to validate pages[i] up to
> *nr_pages* in the error scenario, and a few condition checks can be
> dropped.
> 
> Signed-off-by: Souptick Joarder <jrdr.linux@gmail.com>
> Reviewed-by: Juergen Gross <jgross@suse.com>
> Cc: John Hubbard <jhubbard@nvidia.com>
> Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
> Cc: Paul Durrant <xadimgnik@gmail.com>

Reviewed-by: Paul Durrant <paul@xen.org>

> ---
>  drivers/xen/privcmd.c | 31 +++++++++++++++----------------
>  1 file changed, 15 insertions(+), 16 deletions(-)
> 
> diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
> index 5dfc59f..b001673 100644
> --- a/drivers/xen/privcmd.c
> +++ b/drivers/xen/privcmd.c
> @@ -579,13 +579,13 @@ static long privcmd_ioctl_mmap_batch(
> 
>  static int lock_pages(
>  	struct privcmd_dm_op_buf kbufs[], unsigned int num,
> -	struct page *pages[], unsigned int nr_pages)
> +	struct page *pages[], unsigned int nr_pages, unsigned int *pinned)
>  {
>  	unsigned int i;
> 
>  	for (i = 0; i < num; i++) {
>  		unsigned int requested;
> -		int pinned;
> +		int page_count;
> 
>  		requested = DIV_ROUND_UP(
>  			offset_in_page(kbufs[i].uptr) + kbufs[i].size,
> @@ -593,14 +593,15 @@ static int lock_pages(
>  		if (requested > nr_pages)
>  			return -ENOSPC;
> 
> -		pinned = get_user_pages_fast(
> +		page_count = get_user_pages_fast(
>  			(unsigned long) kbufs[i].uptr,
>  			requested, FOLL_WRITE, pages);
> -		if (pinned < 0)
> -			return pinned;
> +		if (page_count < 0)
> +			return page_count;
> 
> -		nr_pages -= pinned;
> -		pages += pinned;
> +		*pinned += page_count;
> +		nr_pages -= page_count;
> +		pages += page_count;
>  	}
> 
>  	return 0;
> @@ -610,13 +611,8 @@ static void unlock_pages(struct page *pages[], unsigned int nr_pages)
>  {
>  	unsigned int i;
> 
> -	if (!pages)
> -		return;
> -
> -	for (i = 0; i < nr_pages; i++) {
> -		if (pages[i])
> -			put_page(pages[i]);
> -	}
> +	for (i = 0; i < nr_pages; i++)
> +		put_page(pages[i]);
>  }
> 
>  static long privcmd_ioctl_dm_op(struct file *file, void __user *udata)
> @@ -629,6 +625,7 @@ static long privcmd_ioctl_dm_op(struct file *file, void __user *udata)
>  	struct xen_dm_op_buf *xbufs = NULL;
>  	unsigned int i;
>  	long rc;
> +	unsigned int pinned = 0;
> 
>  	if (copy_from_user(&kdata, udata, sizeof(kdata)))
>  		return -EFAULT;
> @@ -682,9 +679,11 @@ static long privcmd_ioctl_dm_op(struct file *file, void __user *udata)
>  		goto out;
>  	}
> 
> -	rc = lock_pages(kbufs, kdata.num, pages, nr_pages);
> -	if (rc)
> +	rc = lock_pages(kbufs, kdata.num, pages, nr_pages, &pinned);
> +	if (rc < 0) {
> +		nr_pages = pinned;
>  		goto out;
> +	}
> 
>  	for (i = 0; i < kdata.num; i++) {
>  		set_xen_guest_handle(xbufs[i].h, kbufs[i].uptr);
> --
> 1.9.1
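
[Editorial note: the semantics of the fix can be modeled in plain userspace
C. lock_pages() now reports the number of already-pinned pages through
*pinned even when it fails partway through, so the caller unwinds exactly
that many entries. The sketch below is an illustration only — fake_pin(),
lock_pages_sketch() and unlock_pages_sketch() are hypothetical stand-ins,
not the kernel API.]

```c
#include <assert.h>

enum { NBUF = 3, PAGES_PER_BUF = 4, MAXPAGES = NBUF * PAGES_PER_BUF };

/* Stand-in for get_user_pages_fast(): either pins all requested pages
 * or fails outright, pinning none of the current batch. */
static int fake_pin(int pages[], int requested, int should_fail)
{
    if (should_fail)
        return -14;                     /* a simulated -EFAULT */
    for (int i = 0; i < requested; i++)
        pages[i] = 1;                   /* "pinned" */
    return requested;
}

/* Mirrors the fixed lock_pages(): when pinning buffer 'fail_at' fails,
 * *pinned still holds the page count for buffers 0..fail_at-1. */
static int lock_pages_sketch(int pages[], unsigned int *pinned, int fail_at)
{
    int *p = pages;

    for (int i = 0; i < NBUF; i++) {
        int got = fake_pin(p, PAGES_PER_BUF, i == fail_at);

        if (got < 0)
            return got;                 /* error; *pinned already updated */
        *pinned += got;
        p += got;
    }
    return 0;
}

/* Mirrors the simplified unlock_pages(): no NULL checks needed, since
 * it is only ever asked to release pages that were actually pinned. */
static void unlock_pages_sketch(int pages[], unsigned int nr)
{
    for (unsigned int i = 0; i < nr; i++)
        pages[i] = 0;                   /* "put_page" */
}
```

Failing on the third buffer (index 2) leaves *pinned at 8, so cleanup
releases exactly the eight pages pinned for the first two buffers instead
of scanning the whole array.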




From xen-devel-bounces@lists.xenproject.org Mon Jul 13 08:09:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jul 2020 08:09:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jutWT-0002Lo-2V; Mon, 13 Jul 2020 08:09:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=bAa1=AY=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1jutWR-0002Le-MW
 for xen-devel@lists.xenproject.org; Mon, 13 Jul 2020 08:09:11 +0000
X-Inumbo-ID: 21cb17ec-c4e0-11ea-b7bb-bc764e2007e4
Received: from mail-wr1-x441.google.com (unknown [2a00:1450:4864:20::441])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 21cb17ec-c4e0-11ea-b7bb-bc764e2007e4;
 Mon, 13 Jul 2020 08:09:11 +0000 (UTC)
Received: by mail-wr1-x441.google.com with SMTP id k6so14803452wrn.3
 for <xen-devel@lists.xenproject.org>; Mon, 13 Jul 2020 01:09:11 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
 :mime-version:content-transfer-encoding:content-language
 :thread-index; bh=goZFdv0vYFu2qS+8tgZEgkamqnIThorE/kvTLGKqcjg=;
 b=OmYmHmxuhJhhhadcUjIexz8FybS0Qab1l5YUvnTcwlHRlfPb3o6CxYT7D3lPOTgSY1
 W/KjDgtKjNd/47TU74VaF4LA9NfwpYM5/nEIC1kC5YP70Mu8PQZCF1LEhvccq8evwrSr
 UnGRC0q/pmBw7NVdGDYVD4UOwaGXq/jCDodj/+l5uwYWx6bXOUuNzkbajhL+dOOf4ua9
 gJjnXtXfjCCfqYyQ6Jx7rzHyBO55DdJqT5suU4H/8lYqjJGxnsrlKc6DT/8Cbc7jHSRF
 ccsevynM3q1ZfxqS1eUCWCsljRbbZrAxiyCYmULL3DddmEmI62Li4GgR+JFl3k8VT3oA
 49Ag==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
 :subject:date:message-id:mime-version:content-transfer-encoding
 :content-language:thread-index;
 bh=goZFdv0vYFu2qS+8tgZEgkamqnIThorE/kvTLGKqcjg=;
 b=UeTDYSobwfVtg/pEAL4yBZZNuhVHN4DudGA+r6Egzcp1ssaH5VfwglVFjaHSK3vr9J
 7ilaDn00FMptr/yhhxm8vD8l7v2UEA3DPLjd50H4a+GiBP3oPtJ3HhikJu/1zhwfxbgI
 C5rDqsdQq3SEfrd40pyNKqfV2UXcgCSiKxe+XbL0JOUiGiAsijK138vvHS1ge7cJ4gxB
 mytetLnVFgEiJu7MC+9cUbNAjMz3xibIYArO2Tx0CdcKryZn8omsJdD3aaX9hOQTkIxT
 LvaMnFm3YXIAHAWv7L8ohMG5nBZoIsyM+ZtAFjO+5b9w/K7KKl+ia8gHSBF0Xn3Y4x9P
 dskw==
X-Gm-Message-State: AOAM532yGUELWQbDKwZcGJzJV8ZtIt/2OQAy3qlKDLXENdsFc9gG2tB1
 8QjyY+7cpBC0mLzjeLjLgEc=
X-Google-Smtp-Source: ABdhPJyr5gyI63fyswQfNkAN5vUdZgX+iXjJYkUAcBLa26UIsUoR2kkPYeok06gmIYfdASjUIA1utg==
X-Received: by 2002:adf:fa81:: with SMTP id h1mr79156106wrr.266.1594627750240; 
 Mon, 13 Jul 2020 01:09:10 -0700 (PDT)
Received: from CBGR90WXYV0 (54-240-197-236.amazon.com. [54.240.197.236])
 by smtp.gmail.com with ESMTPSA id j6sm23357762wro.25.2020.07.13.01.09.08
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Mon, 13 Jul 2020 01:09:09 -0700 (PDT)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
To: "'Souptick Joarder'" <jrdr.linux@gmail.com>, <boris.ostrovsky@oracle.com>,
 <jgross@suse.com>, <sstabellini@kernel.org>
References: <1594525195-28345-1-git-send-email-jrdr.linux@gmail.com>
 <1594525195-28345-3-git-send-email-jrdr.linux@gmail.com>
In-Reply-To: <1594525195-28345-3-git-send-email-jrdr.linux@gmail.com>
Subject: RE: [PATCH v3 2/3] xen/privcmd: Mark pages as dirty
Date: Mon, 13 Jul 2020 09:09:08 +0100
Message-ID: <003901d658ec$e2d93460$a88b9d20$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="us-ascii"
Content-Transfer-Encoding: 7bit
X-Mailer: Microsoft Outlook 16.0
Content-Language: en-gb
Thread-Index: AQGQUUjbM0hB7euxJ1kpRMo6FNfk+gF2cfqiqYWDhTA=
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Reply-To: paul@xen.org
Cc: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org,
 'Paul Durrant' <xadimgnik@gmail.com>, 'John Hubbard' <jhubbard@nvidia.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> -----Original Message-----
> From: Souptick Joarder <jrdr.linux@gmail.com>
> Sent: 12 July 2020 04:40
> To: boris.ostrovsky@oracle.com; jgross@suse.com; sstabellini@kernel.org
> Cc: xen-devel@lists.xenproject.org; linux-kernel@vger.kernel.org; Souptick Joarder
> <jrdr.linux@gmail.com>; John Hubbard <jhubbard@nvidia.com>; Paul Durrant <xadimgnik@gmail.com>
> Subject: [PATCH v3 2/3] xen/privcmd: Mark pages as dirty
> 
> Pages need to be marked as dirty before being unpinned in
> unlock_pages(), which was an oversight. This is fixed now.
> 
> Signed-off-by: Souptick Joarder <jrdr.linux@gmail.com>
> Suggested-by: John Hubbard <jhubbard@nvidia.com>
> Reviewed-by: Juergen Gross <jgross@suse.com>
> Cc: John Hubbard <jhubbard@nvidia.com>
> Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
> Cc: Paul Durrant <xadimgnik@gmail.com>

Reviewed-by: Paul Durrant <paul@xen.org>

> ---
>  drivers/xen/privcmd.c | 5 ++++-
>  1 file changed, 4 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
> index b001673..079d35b 100644
> --- a/drivers/xen/privcmd.c
> +++ b/drivers/xen/privcmd.c
> @@ -611,8 +611,11 @@ static void unlock_pages(struct page *pages[], unsigned int nr_pages)
>  {
>  	unsigned int i;
> 
> -	for (i = 0; i < nr_pages; i++)
> +	for (i = 0; i < nr_pages; i++) {
> +		if (!PageDirty(pages[i]))
> +			set_page_dirty_lock(pages[i]);
>  		put_page(pages[i]);
> +	}
>  }
> 
>  static long privcmd_ioctl_dm_op(struct file *file, void __user *udata)
> --
> 1.9.1
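
[Editorial note: the point of the ordering here is that a page must be
flagged dirty while a reference is still held, so data written through the
FOLL_WRITE mapping is not lost. A userspace model of that invariant —
with hypothetical stand-ins, not the kernel API — might look like:]

```c
#include <assert.h>

struct fake_page {
    int refcount;
    int dirty;
    int dirtied_while_held;     /* records whether ordering was correct */
};

/* Stand-in for set_page_dirty_lock(): note whether a reference was
 * still held at the moment the page was dirtied. */
static void fake_set_dirty(struct fake_page *p)
{
    p->dirty = 1;
    p->dirtied_while_held = (p->refcount > 0);
}

/* Stand-in for put_page(). */
static void fake_put(struct fake_page *p)
{
    p->refcount--;
}

/* Mirrors the fixed unlock_pages(): dirty each page (unless already
 * dirty) strictly before dropping the reference. */
static void unlock_pages_sketch(struct fake_page *pages, unsigned int nr)
{
    for (unsigned int i = 0; i < nr; i++) {
        if (!pages[i].dirty)
            fake_set_dirty(&pages[i]);
        fake_put(&pages[i]);
    }
}
```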




From xen-devel-bounces@lists.xenproject.org Mon Jul 13 08:10:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jul 2020 08:10:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jutXu-00036k-FI; Mon, 13 Jul 2020 08:10:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=bAa1=AY=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1jutXt-00036c-95
 for xen-devel@lists.xenproject.org; Mon, 13 Jul 2020 08:10:41 +0000
X-Inumbo-ID: 571fbc36-c4e0-11ea-b7bb-bc764e2007e4
Received: from mail-wm1-x344.google.com (unknown [2a00:1450:4864:20::344])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 571fbc36-c4e0-11ea-b7bb-bc764e2007e4;
 Mon, 13 Jul 2020 08:10:40 +0000 (UTC)
Received: by mail-wm1-x344.google.com with SMTP id f18so12240555wml.3
 for <xen-devel@lists.xenproject.org>; Mon, 13 Jul 2020 01:10:40 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
 :mime-version:content-transfer-encoding:content-language
 :thread-index; bh=nT2xE2I/1laRy3/n8/6HWxHbLx8UbAkaa/McJbaY/Go=;
 b=L61FhwCCvRDKQLIKdnXaGxMEL0Cgg8QW8GHZ/v5d4/c4Ig9855dnDAK7hZ3Wq81MI4
 qGwEokCjqapm8wRSgtkuMyucU0k+p2FetZQCLUDsTltZ8fg/4F8NpNvovCKRb/bXP7HX
 Ue3VEO//CgeIoEbLfWpRHvErKA0kri+aNdUQpz2B8nobBdvqjM1WD3qY428QfYJLtQ5G
 pAp6MyLRE3O84pp8zBWGe0tmiF0pXOzRvP0P/NL1gPzbUd48IENUBf4oZDwTXuy+4osv
 a+z0Rj6AUbKa9a63yMQNqYKWMMD1iBcAdVyr7vILDMeuHZuN6+EL57okjFctAnUNbBsK
 1Saw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
 :subject:date:message-id:mime-version:content-transfer-encoding
 :content-language:thread-index;
 bh=nT2xE2I/1laRy3/n8/6HWxHbLx8UbAkaa/McJbaY/Go=;
 b=loblu7048Gk8fFCnC7rIqMuHm/IuiDE5hT/rCk7rmhsHHWfyGCRcmR1+zeyVNk4ekI
 DyLuTjpbOB49wHYUn43r+TOQyIb5/+oZ1gpHrxUl0lLVL0u+dQgdbPB95mpWxVymM3yY
 ifuutqwADLHmDwkvTTaoGQdf0tITRmhDZHNpIAi0RCYjT1R5PUFEmlqB2JM0NeMtItKT
 kvODJ9FU7cBRDnA/XxbmIBBN8DACusC2uKPKT/93lSeoFXUdu8jCrbi0sSRzXeMNKyAr
 W5xAmzakFGarUY/XaxK4ZRnifDp2omGp6716mJMrAdA2pwXqseSQJUWeMKE81NVO4W4r
 62ng==
X-Gm-Message-State: AOAM530hu0aUu8o7cTZPzRTtkb1mReuiAqEcsbxmj9RvajmBz0KrrtsX
 3lM16rrwNAjUOKo66x3wm9w=
X-Google-Smtp-Source: ABdhPJygTefIsKyZlBfTQwg8Ok+6EIMpEHdvDLG1LGMonu/CaPUYSMBvfx4mgduaPCPtD/3iCo0qvQ==
X-Received: by 2002:a7b:cd09:: with SMTP id f9mr18552307wmj.160.1594627839822; 
 Mon, 13 Jul 2020 01:10:39 -0700 (PDT)
Received: from CBGR90WXYV0 (54-240-197-236.amazon.com. [54.240.197.236])
 by smtp.gmail.com with ESMTPSA id n17sm21674581wrs.2.2020.07.13.01.10.38
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Mon, 13 Jul 2020 01:10:39 -0700 (PDT)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
To: "'Souptick Joarder'" <jrdr.linux@gmail.com>, <boris.ostrovsky@oracle.com>,
 <jgross@suse.com>, <sstabellini@kernel.org>
References: <1594525195-28345-1-git-send-email-jrdr.linux@gmail.com>
 <1594525195-28345-4-git-send-email-jrdr.linux@gmail.com>
In-Reply-To: <1594525195-28345-4-git-send-email-jrdr.linux@gmail.com>
Subject: RE: [PATCH v3 3/3] xen/privcmd: Convert get_user_pages*() to
 pin_user_pages*()
Date: Mon, 13 Jul 2020 09:10:38 +0100
Message-ID: <003a01d658ed$1858e220$490aa660$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="us-ascii"
Content-Transfer-Encoding: 7bit
X-Mailer: Microsoft Outlook 16.0
Content-Language: en-gb
Thread-Index: AQGQUUjbM0hB7euxJ1kpRMo6FNfk+gGYi24xqYRzHNA=
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Reply-To: paul@xen.org
Cc: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org,
 'Paul Durrant' <xadimgnik@gmail.com>, 'John Hubbard' <jhubbard@nvidia.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> -----Original Message-----
> From: Souptick Joarder <jrdr.linux@gmail.com>
> Sent: 12 July 2020 04:40
> To: boris.ostrovsky@oracle.com; jgross@suse.com; sstabellini@kernel.org
> Cc: xen-devel@lists.xenproject.org; linux-kernel@vger.kernel.org; Souptick Joarder
> <jrdr.linux@gmail.com>; John Hubbard <jhubbard@nvidia.com>; Paul Durrant <xadimgnik@gmail.com>
> Subject: [PATCH v3 3/3] xen/privcmd: Convert get_user_pages*() to pin_user_pages*()
> 
> In 2019, we introduced pin_user_pages*() and now we are converting
> get_user_pages*() to the new API as appropriate. See [1] and [2]
> for more information. This is case 5 as per document [1].
> 
> [1] Documentation/core-api/pin_user_pages.rst
> 
> [2] "Explicit pinning of user-space pages":
>         https://lwn.net/Articles/807108/
> 
> Signed-off-by: Souptick Joarder <jrdr.linux@gmail.com>
> Reviewed-by: Juergen Gross <jgross@suse.com>
> Cc: John Hubbard <jhubbard@nvidia.com>
> Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
> Cc: Paul Durrant <xadimgnik@gmail.com>

Reviewed-by: Paul Durrant <paul@xen.org>

> ---
>  drivers/xen/privcmd.c | 10 ++--------
>  1 file changed, 2 insertions(+), 8 deletions(-)
> 
> diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
> index 079d35b..63abe6c 100644
> --- a/drivers/xen/privcmd.c
> +++ b/drivers/xen/privcmd.c
> @@ -593,7 +593,7 @@ static int lock_pages(
>  		if (requested > nr_pages)
>  			return -ENOSPC;
> 
> -		page_count = get_user_pages_fast(
> +		page_count = pin_user_pages_fast(
>  			(unsigned long) kbufs[i].uptr,
>  			requested, FOLL_WRITE, pages);
>  		if (page_count < 0)
> @@ -609,13 +609,7 @@ static int lock_pages(
> 
>  static void unlock_pages(struct page *pages[], unsigned int nr_pages)
>  {
> -	unsigned int i;
> -
> -	for (i = 0; i < nr_pages; i++) {
> -		if (!PageDirty(pages[i]))
> -			set_page_dirty_lock(pages[i]);
> -		put_page(pages[i]);
> -	}
> +	unpin_user_pages_dirty_lock(pages, nr_pages, true);
>  }
> 
>  static long privcmd_ioctl_dm_op(struct file *file, void __user *udata)
> --
> 1.9.1
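
[Editorial note: "case 5" in pin_user_pages.rst is the lifecycle pin →
write to the pages → mark dirty and unpin. The model below illustrates
that lifecycle in userspace; model_pin(), model_dm_op() and
model_unpin_dirty() are hypothetical stand-ins, not the kernel API.]

```c
#include <assert.h>
#include <string.h>

enum { NR = 4 };

/* A userspace model of the "case 5" lifecycle from
 * Documentation/core-api/pin_user_pages.rst. */
struct model {
    int pinned[NR];
    int dirty[NR];
    char data[NR];
};

static int model_pin(struct model *m)           /* ~ pin_user_pages_fast() */
{
    for (int i = 0; i < NR; i++)
        m->pinned[i] = 1;
    return NR;
}

static void model_dm_op(struct model *m)        /* the hypercall writes */
{
    memset(m->data, 'x', NR);
}

/* ~ unpin_user_pages_dirty_lock(pages, nr, true): replaces the
 * open-coded dirty-then-put loop with one balanced release step. */
static void model_unpin_dirty(struct model *m, int nr)
{
    for (int i = 0; i < nr; i++) {
        m->dirty[i] = 1;
        m->pinned[i] = 0;
    }
}
```

The consolidation matters because pages pinned with pin_user_pages*()
must be released with an unpin_* helper; the single call also keeps the
dirty-marking and the release paired for every page.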




From xen-devel-bounces@lists.xenproject.org Mon Jul 13 08:13:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jul 2020 08:13:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jutan-0003GC-1n; Mon, 13 Jul 2020 08:13:41 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=bAa1=AY=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1jutam-0003G6-DA
 for xen-devel@lists.xenproject.org; Mon, 13 Jul 2020 08:13:40 +0000
X-Inumbo-ID: c1e70452-c4e0-11ea-b7bb-bc764e2007e4
Received: from mail-wm1-x342.google.com (unknown [2a00:1450:4864:20::342])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c1e70452-c4e0-11ea-b7bb-bc764e2007e4;
 Mon, 13 Jul 2020 08:13:39 +0000 (UTC)
Received: by mail-wm1-x342.google.com with SMTP id l2so12251097wmf.0
 for <xen-devel@lists.xenproject.org>; Mon, 13 Jul 2020 01:13:39 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
 :mime-version:content-transfer-encoding:content-language
 :thread-index; bh=Ar0ph7hv3Yh6PwDGEGAD9y+GhfIA3gUk01Sca/V8LyQ=;
 b=mlRwTfR7RDvOrnBEJ5S7rP6ObfAIVmA8W4J7oOeVAA3vbOUl0bXanzaBLwDrzG5/Lr
 FIgQ3OQrLC0iZU8GXD4w/RKJXxbC+gYWcwM5KwQyO1+vFw6Rru5xEBj86OZRjHqzbUo5
 1Ad4wVTIZePwkOjCrurIQ4WKOAh6jYFlJibOHJ9i/qb2ApOrDXtAz/BmHjzq54qwUq0g
 3rO0gS5QGVXHkW8jfeiB9oxQ1pBi2zHM1s7HjiRI78rsJtsDkQzNc087h5Tf5TFCXeDw
 bQw4dJy1o5PdoGOFcPuHInv8uQb5drjvixokUo6iRxLFFxsqnhhW/piVpGOwrBS6ZmD7
 Yi7Q==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
 :subject:date:message-id:mime-version:content-transfer-encoding
 :content-language:thread-index;
 bh=Ar0ph7hv3Yh6PwDGEGAD9y+GhfIA3gUk01Sca/V8LyQ=;
 b=L2g80KofQ/IEdCEGKtn4eUgQOEix0h4NXbOSSUrKlOkm9PwUPmhbPtHT7iDrME8jP9
 TjJH07L7l5v2DjIC1zgEnGoEjX+M+KPo4pKoadV2azTHyzxeykolGmemOfUG/PvwuveX
 YTmyJOWH7RD2setvkT5lCZuHzVGumLMApLhXeGlOLqIAbOdGAkLWdwJhppwbvfwu+Cd4
 NXTtiaE/HL5i6zgPcm1RZos1GtG239WwgQ8zWJfqtzQ8u7AHJn0JUIhVeS+BtOklaMYy
 AFksS4XTV28hFtNxQWIZiYegproCKtCwUl25ZHxjBb8E67kyFoBQNS3w6EzKPjha04Vt
 4W3Q==
X-Gm-Message-State: AOAM531RKfRpuaPe8CjMsZep2ncuMbJpqW+Cg4gh8RpnHwrX7580lLht
 nbEaEVM9R6z/gPoEjYdV+Ns=
X-Google-Smtp-Source: ABdhPJyVyUJJTvpZOfshsPBjra2CRmozn1fcF3R3+0T6/6b8nEDE+IPag8OJMWDCyFMfEdLmjAA+VA==
X-Received: by 2002:a7b:cf18:: with SMTP id l24mr17476565wmg.116.1594628019011; 
 Mon, 13 Jul 2020 01:13:39 -0700 (PDT)
Received: from CBGR90WXYV0 (54-240-197-228.amazon.com. [54.240.197.228])
 by smtp.gmail.com with ESMTPSA id j6sm23374885wro.25.2020.07.13.01.13.37
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Mon, 13 Jul 2020 01:13:38 -0700 (PDT)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
To: "'Juergen Gross'" <jgross@suse.com>,
	<xen-devel@lists.xenproject.org>
References: <20200713051639.26948-1-jgross@suse.com>
In-Reply-To: <20200713051639.26948-1-jgross@suse.com>
Subject: RE: [PATCH] docs: specify stability of hypfs path documentation
Date: Mon, 13 Jul 2020 09:13:37 +0100
Message-ID: <003b01d658ed$831ae680$8950b380$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="us-ascii"
Content-Transfer-Encoding: 7bit
X-Mailer: Microsoft Outlook 16.0
Content-Language: en-gb
Thread-Index: AQIFNc7lIAokfYuPpo0uaUrzq7VTS6inbwKA
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Reply-To: paul@xen.org
Cc: 'Stefano Stabellini' <sstabellini@kernel.org>,
 'Julien Grall' <julien@xen.org>, 'Wei Liu' <wl@xen.org>,
 'Andrew Cooper' <andrew.cooper3@citrix.com>,
 'Ian Jackson' <ian.jackson@eu.citrix.com>,
 'George Dunlap' <george.dunlap@citrix.com>, 'Jan Beulich' <jbeulich@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> -----Original Message-----
> From: Juergen Gross <jgross@suse.com>
> Sent: 13 July 2020 06:17
> To: xen-devel@lists.xenproject.org
> Cc: paul@xen.org; Juergen Gross <jgross@suse.com>; Andrew Cooper <andrew.cooper3@citrix.com>; George
> Dunlap <george.dunlap@citrix.com>; Ian Jackson <ian.jackson@eu.citrix.com>; Jan Beulich
> <jbeulich@suse.com>; Julien Grall <julien@xen.org>; Stefano Stabellini <sstabellini@kernel.org>; Wei
> Liu <wl@xen.org>
> Subject: [PATCH] docs: specify stability of hypfs path documentation
> 
> In docs/misc/hypfs-paths.pandoc the supported paths in the hypervisor
> file system are specified. Make it more clear that path availability
> might change, e.g. due to scope widening or narrowing (e.g. being
> limited to a specific architecture).
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>
> ---
> This might be a candidate for 4.14, as hypfs is new in 4.14 and the
> documentation should be as clear as possible.

Agreed. Since this is a pure documentation change it carries no risk, so once the final wording is agreed then consider it...

Release-acked-by: Paul Durrant <paul@xen.org>

> ---
>  docs/misc/hypfs-paths.pandoc | 8 ++++++++
>  1 file changed, 8 insertions(+)
> 
> diff --git a/docs/misc/hypfs-paths.pandoc b/docs/misc/hypfs-paths.pandoc
> index a111c6f25c..7ad4b7ba95 100644
> --- a/docs/misc/hypfs-paths.pandoc
> +++ b/docs/misc/hypfs-paths.pandoc
> @@ -5,6 +5,9 @@ in the Xen hypervisor file system (hypfs).
> 
>  The hypervisor file system can be accessed via the xenhypfs tool.
> 
> +The availability of the hypervisor file system depends on the hypervisor
> +config option CONFIG_HYPFS, which is on by default.
> +
>  ## Notation
> 
>  The hypervisor file system is similar to the Linux kernel's sysfs.
> @@ -55,6 +58,11 @@ tags enclosed in square brackets.
>  * CONFIG_* -- Path is valid only in case the hypervisor was built with
>    the respective config option.
> 
> +Path availability is subject to change, e.g. a specific path specified
> +for a single architecture now might be made available for other architectures
> +in future, or it could be made conditional by an additional config option
> +of the hypervisor.
> +
>  So an entry could look like this:
> 
>      /cpu-bugs/active-pv/xpti = ("No"|{"dom0", "domU", "PCID-on"}) [w,X86,PV]
> --
> 2.26.2
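[For readers of the archive: the documented entry quoted above packs three pieces of information into one line, with the availability tags in trailing square brackets. A minimal sketch of pulling such a line apart with plain shell parameter expansion; the variable names here are illustrative, not part of the patch or of any Xen tool:]

```shell
# Parse one documented hypfs entry into its path and its availability
# tags (the comma-separated list in the trailing square brackets).
line='/cpu-bugs/active-pv/xpti = ("No"|{"dom0", "domU", "PCID-on"}) [w,X86,PV]'

path=${line%% *}            # everything before the first space
tags=${line##*\[}           # strip up to the last '['
tags=${tags%\]}             # drop the closing ']'

echo "$path"                # /cpu-bugs/active-pv/xpti
echo "$tags"                # w,X86,PV
```

[Here "w" marks the entry writable and "X86,PV" restricts it to x86 PV builds, which is exactly the kind of tag whose scope the patch says may widen or narrow over time.]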




From xen-devel-bounces@lists.xenproject.org Mon Jul 13 08:14:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jul 2020 08:14:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jutbq-0003LP-CP; Mon, 13 Jul 2020 08:14:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=C18W=AY=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jutbp-0003LG-Oy
 for xen-devel@lists.xenproject.org; Mon, 13 Jul 2020 08:14:45 +0000
X-Inumbo-ID: e8f37ea4-c4e0-11ea-bca7-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e8f37ea4-c4e0-11ea-bca7-bc764e2007e4;
 Mon, 13 Jul 2020 08:14:45 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 5A3ABAB7D;
 Mon, 13 Jul 2020 08:14:46 +0000 (UTC)
Subject: Re: [PATCH] docs: specify stability of hypfs path documentation
To: Jan Beulich <jbeulich@suse.com>
References: <20200713051639.26948-1-jgross@suse.com>
 <057edc96-85ae-2511-9713-99341a6b6486@suse.com>
 <e8187851-9d1f-89d1-6e6b-53881e79a6b8@suse.com>
 <6187b535-bb04-4b7a-fbc8-7f6c297c1ddc@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <6d4fcc81-c488-f12f-fd2e-2c8ebaab4c01@suse.com>
Date: Mon, 13 Jul 2020 10:14:43 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.9.0
MIME-Version: 1.0
In-Reply-To: <6187b535-bb04-4b7a-fbc8-7f6c297c1ddc@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, paul@xen.org, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 13.07.20 10:07, Jan Beulich wrote:
> On 13.07.2020 10:04, Jürgen Groß wrote:
>> On 13.07.20 09:56, Jan Beulich wrote:
>>> On 13.07.2020 07:16, Juergen Gross wrote:
>>>> @@ -55,6 +58,11 @@ tags enclosed in square brackets.
>>>>    * CONFIG_* -- Path is valid only in case the hypervisor was built with
>>>>      the respective config option.
>>>>    
>>>> +Path availability is subject to change, e.g. a specific path specified
>>>> +for a single architecture now might be made available for other architectures
>>>> +in future, or it could be made conditional by an additional config option
>>>> +of the hypervisor.
>>>
>>> I agree this is worthwhile clarifying. To me, between the lines,
>>> this then suggests that paths are entirely unreliable, which I
>>> don't think is what we want. So perhaps some further clarification
>>> could be added to clarify that we're not going to arbitrarily
>>> change paths or their meaning? Or am I mistaken in understanding
>>> that this interface is meant to act ABI-like?
>>
>> You are right. What about the following rewording:
>>
>> "In case a tag for a path indicates that this path is available in some
>>    case only, this availability might be extended or reduced in future by
>>    modification or removal of the tag."
> 
> Yes, yet I'd go even further and add "A path once assigned meaning
> won't go away altogether or change its meaning, though" or some such.

Fine with me. Will send V2.


Juergen


From xen-devel-bounces@lists.xenproject.org Mon Jul 13 08:15:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jul 2020 08:15:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jutcF-0003Oj-Lb; Mon, 13 Jul 2020 08:15:11 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=C18W=AY=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jutcD-0003OX-Up
 for xen-devel@lists.xenproject.org; Mon, 13 Jul 2020 08:15:09 +0000
X-Inumbo-ID: f75fc830-c4e0-11ea-8496-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f75fc830-c4e0-11ea-8496-bc764e2007e4;
 Mon, 13 Jul 2020 08:15:09 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 9304FAF01;
 Mon, 13 Jul 2020 08:15:10 +0000 (UTC)
Subject: Re: [PATCH] docs: specify stability of hypfs path documentation
To: paul@xen.org, xen-devel@lists.xenproject.org
References: <20200713051639.26948-1-jgross@suse.com>
 <003b01d658ed$831ae680$8950b380$@xen.org>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <f7ff8cc6-9730-8ec9-c5c6-32d3708dd40a@suse.com>
Date: Mon, 13 Jul 2020 10:15:08 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.9.0
MIME-Version: 1.0
In-Reply-To: <003b01d658ed$831ae680$8950b380$@xen.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: 'Stefano Stabellini' <sstabellini@kernel.org>,
 'Julien Grall' <julien@xen.org>, 'Wei Liu' <wl@xen.org>,
 'Andrew Cooper' <andrew.cooper3@citrix.com>,
 'Ian Jackson' <ian.jackson@eu.citrix.com>,
 'George Dunlap' <george.dunlap@citrix.com>, 'Jan Beulich' <jbeulich@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 13.07.20 10:13, Paul Durrant wrote:
>> -----Original Message-----
>> From: Juergen Gross <jgross@suse.com>
>> Sent: 13 July 2020 06:17
>> To: xen-devel@lists.xenproject.org
>> Cc: paul@xen.org; Juergen Gross <jgross@suse.com>; Andrew Cooper <andrew.cooper3@citrix.com>; George
>> Dunlap <george.dunlap@citrix.com>; Ian Jackson <ian.jackson@eu.citrix.com>; Jan Beulich
>> <jbeulich@suse.com>; Julien Grall <julien@xen.org>; Stefano Stabellini <sstabellini@kernel.org>; Wei
>> Liu <wl@xen.org>
>> Subject: [PATCH] docs: specify stability of hypfs path documentation
>>
>> In docs/misc/hypfs-paths.pandoc the supported paths in the hypervisor
>> file system are specified. Make it more clear that path availability
>> might change, e.g. due to scope widening or narrowing (e.g. being
>> limited to a specific architecture).
>>
>> Signed-off-by: Juergen Gross <jgross@suse.com>
>> ---
>> This might be a candidate for 4.14, as hypfs is new in 4.14 and the
>> documentation should be as clear as possible.
> 
> Agreed. Since this is a pure documentation change it carries no risk, so once the final wording is agreed then consider it...
> 
> Release-acked-by: Paul Durrant <paul@xen.org>

Thanks!


Juergen


From xen-devel-bounces@lists.xenproject.org Mon Jul 13 08:43:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jul 2020 08:43:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1juu2q-00018k-6W; Mon, 13 Jul 2020 08:42:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=C18W=AY=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1juu2o-00018L-K6
 for xen-devel@lists.xenproject.org; Mon, 13 Jul 2020 08:42:38 +0000
X-Inumbo-ID: caa2a39a-c4e4-11ea-8496-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id caa2a39a-c4e4-11ea-8496-bc764e2007e4;
 Mon, 13 Jul 2020 08:42:32 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 8077CB013;
 Mon, 13 Jul 2020 08:42:33 +0000 (UTC)
From: Juergen Gross <jgross@suse.com>
To: minios-devel@lists.xenproject.org,
	xen-devel@lists.xenproject.org
Subject: [PATCH v2] mini-os: don't hard-wire xen internal paths
Date: Mon, 13 Jul 2020 10:42:30 +0200
Message-Id: <20200713084230.18177-1-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>, samuel.thibault@ens-lyon.org, wl@xen.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Mini-OS shouldn't use Xen internal paths for building. Import the
needed paths from Xen and fall back to the current values only if
the import was not possible.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V2: correct typo (XCALL_APTH -> CALL_PATH)
---
 Config.mk | 15 ++++++++++++++-
 Makefile  | 35 ++++++++++++++++++-----------------
 2 files changed, 32 insertions(+), 18 deletions(-)

diff --git a/Config.mk b/Config.mk
index f6a2afa..cb823c2 100644
--- a/Config.mk
+++ b/Config.mk
@@ -33,6 +33,19 @@ endif
 #
 ifneq ($(XEN_ROOT),)
 MINIOS_ROOT=$(XEN_ROOT)/extras/mini-os
+
+-include $(XEN_ROOT)/stubdom/mini-os.mk
+
+XENSTORE_CPPFLAGS ?= -isystem $(XEN_ROOT)/tools/xenstore/include
+TOOLCORE_PATH ?= $(XEN_ROOT)/stubdom/libs-$(MINIOS_TARGET_ARCH)/toolcore
+TOOLLOG_PATH ?= $(XEN_ROOT)/stubdom/libs-$(MINIOS_TARGET_ARCH)/toollog
+EVTCHN_PATH ?= $(XEN_ROOT)/stubdom/libs-$(MINIOS_TARGET_ARCH)/evtchn
+GNTTAB_PATH ?= $(XEN_ROOT)/stubdom/libs-$(MINIOS_TARGET_ARCH)/gnttab
+CALL_PATH ?= $(XEN_ROOT)/stubdom/libs-$(MINIOS_TARGET_ARCH)/call
+FOREIGNMEMORY_PATH ?= $(XEN_ROOT)/stubdom/libs-$(MINIOS_TARGET_ARCH)/foreignmemory
+DEVICEMODEL_PATH ?= $(XEN_ROOT)/stubdom/libs-$(MINIOS_TARGET_ARCH)/devicemodel
+CTRL_PATH ?= $(XEN_ROOT)/stubdom/libxc-$(MINIOS_TARGET_ARCH)
+GUEST_PATH ?= $(XEN_ROOT)/stubdom/libxc-$(MINIOS_TARGET_ARCH)
 else
 MINIOS_ROOT=$(TOPLEVEL_DIR)
 endif
@@ -93,7 +106,7 @@ DEF_CPPFLAGS += -D__MINIOS__
 ifeq ($(libc),y)
 DEF_CPPFLAGS += -DHAVE_LIBC
 DEF_CPPFLAGS += -isystem $(MINIOS_ROOT)/include/posix
-DEF_CPPFLAGS += -isystem $(XEN_ROOT)/tools/xenstore/include
+DEF_CPPFLAGS += $(XENSTORE_CPPFLAGS)
 endif
 
 ifneq ($(LWIPDIR),)
diff --git a/Makefile b/Makefile
index be640cd..4b76b55 100644
--- a/Makefile
+++ b/Makefile
@@ -125,23 +125,24 @@ OBJS := $(filter-out $(OBJ_DIR)/lwip%.o $(LWO), $(OBJS))
 
 ifeq ($(libc),y)
 ifeq ($(CONFIG_XC),y)
-APP_LDLIBS += -L$(XEN_ROOT)/stubdom/libs-$(MINIOS_TARGET_ARCH)/toolcore -whole-archive -lxentoolcore -no-whole-archive
-LIBS += $(XEN_ROOT)/stubdom/libs-$(MINIOS_TARGET_ARCH)/toolcore/libxentoolcore.a
-APP_LDLIBS += -L$(XEN_ROOT)/stubdom/libs-$(MINIOS_TARGET_ARCH)/toollog -whole-archive -lxentoollog -no-whole-archive
-LIBS += $(XEN_ROOT)/stubdom/libs-$(MINIOS_TARGET_ARCH)/toollog/libxentoollog.a
-APP_LDLIBS += -L$(XEN_ROOT)/stubdom/libs-$(MINIOS_TARGET_ARCH)/evtchn -whole-archive -lxenevtchn -no-whole-archive
-LIBS += $(XEN_ROOT)/stubdom/libs-$(MINIOS_TARGET_ARCH)/evtchn/libxenevtchn.a
-APP_LDLIBS += -L$(XEN_ROOT)/stubdom/libs-$(MINIOS_TARGET_ARCH)/gnttab -whole-archive -lxengnttab -no-whole-archive
-LIBS += $(XEN_ROOT)/stubdom/libs-$(MINIOS_TARGET_ARCH)/gnttab/libxengnttab.a
-APP_LDLIBS += -L$(XEN_ROOT)/stubdom/libs-$(MINIOS_TARGET_ARCH)/call -whole-archive -lxencall -no-whole-archive
-LIBS += $(XEN_ROOT)/stubdom/libs-$(MINIOS_TARGET_ARCH)/call/libxencall.a
-APP_LDLIBS += -L$(XEN_ROOT)/stubdom/libs-$(MINIOS_TARGET_ARCH)/foreignmemory -whole-archive -lxenforeignmemory -no-whole-archive
-LIBS += $(XEN_ROOT)/stubdom/libs-$(MINIOS_TARGET_ARCH)/foreignmemory/libxenforeignmemory.a
-APP_LDLIBS += -L$(XEN_ROOT)/stubdom/libs-$(MINIOS_TARGET_ARCH)/devicemodel -whole-archive -lxendevicemodel -no-whole-archive
-LIBS += $(XEN_ROOT)/stubdom/libs-$(MINIOS_TARGET_ARCH)/devicemodel/libxendevicemodel.a
-APP_LDLIBS += -L$(XEN_ROOT)/stubdom/libxc-$(MINIOS_TARGET_ARCH) -whole-archive -lxenguest -lxenctrl -no-whole-archive
-LIBS += $(XEN_ROOT)/stubdom/libxc-$(MINIOS_TARGET_ARCH)/libxenctrl.a
-LIBS += $(XEN_ROOT)/stubdom/libxc-$(MINIOS_TARGET_ARCH)/libxenguest.a
+APP_LDLIBS += -L$(TOOLCORE_PATH) -whole-archive -lxentoolcore -no-whole-archive
+LIBS += $(TOOLCORE_PATH)/libxentoolcore.a
+APP_LDLIBS += -L$(TOOLLOG_PATH) -whole-archive -lxentoollog -no-whole-archive
+LIBS += $(TOOLLOG_PATH)/libxentoollog.a
+APP_LDLIBS += -L$(EVTCHN_PATH) -whole-archive -lxenevtchn -no-whole-archive
+LIBS += $(EVTCHN_PATH)/libxenevtchn.a
+APP_LDLIBS += -L$(GNTTAB_PATH) -whole-archive -lxengnttab -no-whole-archive
+LIBS += $(GNTTAB_PATH)/libxengnttab.a
+APP_LDLIBS += -L$(CALL_PATH) -whole-archive -lxencall -no-whole-archive
+LIBS += $(CALL_PATH)/libxencall.a
+APP_LDLIBS += -L$(FOREIGNMEMORY_PATH) -whole-archive -lxenforeignmemory -no-whole-archive
+LIBS += $(FOREIGNMEMORY_PATH)/libxenforeignmemory.a
+APP_LDLIBS += -L$(DEVICEMODEL_PATH) -whole-archive -lxendevicemodel -no-whole-archive
+LIBS += $(DEVICEMODEL_PATH)/libxendevicemodel.a
+APP_LDLIBS += -L$(GUEST_PATH) -whole-archive -lxenguest -no-whole-archive
+LIBS += $(GUEST_PATH)/libxenguest.a
+APP_LDLIBS += -L$(CTRL_PATH) -whole-archive -lxenctrl -no-whole-archive
+LIBS += $(CTRL_PATH)/libxenctrl.a
 endif
 APP_LDLIBS += -lpci
 APP_LDLIBS += -lz
-- 
2.26.2
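[For readers of the archive: the Config.mk hunk above relies on two GNU make features working together, `-include` (which silently ignores a missing file) and `?=` (which assigns only when the variable is still unset), so values imported from `stubdom/mini-os.mk` win over the hard-wired defaults. A standalone sketch of that pattern, using made-up file and variable contents:]

```shell
# Demonstrate the "-include" + "?=" fallback pattern in a scratch dir.
tmp=$(mktemp -d)
printf '%s\n' \
    '-include override.mk' \
    'CALL_PATH ?= /default/call' \
    '$(info $(CALL_PATH))' \
    'all: ;' > "$tmp/Makefile"

(cd "$tmp" && make -s)      # no override.mk yet: the ?= default applies

printf 'CALL_PATH := /xen/call\n' > "$tmp/override.mk"
(cd "$tmp" && make -s)      # the imported value now wins over the default

rm -rf "$tmp"
```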



From xen-devel-bounces@lists.xenproject.org Mon Jul 13 09:47:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jul 2020 09:47:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1juv2q-0006BE-GP; Mon, 13 Jul 2020 09:46:44 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=GELZ=AY=redhat.com=kraxel@srs-us1.protection.inumbo.net>)
 id 1juv2o-0006B9-OA
 for xen-devel@lists.xenproject.org; Mon, 13 Jul 2020 09:46:42 +0000
X-Inumbo-ID: c15a7552-c4ed-11ea-9239-12813bfff9fa
Received: from us-smtp-delivery-1.mimecast.com (unknown [205.139.110.120])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id c15a7552-c4ed-11ea-9239-12813bfff9fa;
 Mon, 13 Jul 2020 09:46:41 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
 s=mimecast20190719; t=1594633601;
 h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
 content-transfer-encoding:content-transfer-encoding:
 in-reply-to:in-reply-to:references:references;
 bh=B0Oe5UV2qyeUG9Z0o/6/qc3LlnZaZG6gs7b/Di/jHF4=;
 b=BNfy4jRNb+q7fTp8yS586Xua+3qTe/i7iKosN4pSzSpw3XCtc26N70zb6A6V04EBIY5Uj+
 f7Xrvi7xCjqubLKUqrg8V+OVXa9thi+U/JiOSBuNCtzMJ2DPMc+LIvszHSX9srJ2moK5ka
 OBg1B1C5BOXDp4BHcApNYInorCR9DyM=
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-180-SmNjCEALNaCu-3G69JToMA-1; Mon, 13 Jul 2020 05:46:38 -0400
X-MC-Unique: SmNjCEALNaCu-3G69JToMA-1
Received: from smtp.corp.redhat.com (int-mx05.intmail.prod.int.phx2.redhat.com
 [10.5.11.15])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 6DB3E8015F4;
 Mon, 13 Jul 2020 09:46:34 +0000 (UTC)
Received: from sirius.home.kraxel.org (ovpn-115-89.ams2.redhat.com
 [10.36.115.89])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 303A327DE7C;
 Mon, 13 Jul 2020 09:46:24 +0000 (UTC)
Received: by sirius.home.kraxel.org (Postfix, from userid 1000)
 id 1CA893EBB1; Mon, 13 Jul 2020 11:46:24 +0200 (CEST)
Date: Mon, 13 Jul 2020 11:46:24 +0200
From: Gerd Hoffmann <kraxel@redhat.com>
To: Philippe =?utf-8?Q?Mathieu-Daud=C3=A9?= <f4bug@amsat.org>
Subject: Re: [PATCH 00/26] hw/usb: Give it love, reduce 'hw/usb.h' inclusion
 out of hw/usb/
Message-ID: <20200713094624.occmvxdb56kvbqy2@sirius.home.kraxel.org>
References: <20200704144943.18292-1-f4bug@amsat.org>
MIME-Version: 1.0
In-Reply-To: <20200704144943.18292-1-f4bug@amsat.org>
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.15
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=iso-8859-1
Content-Transfer-Encoding: 8bit
Content-Disposition: inline
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Peter Maydell <peter.maydell@linaro.org>,
 "Michael S. Tsirkin" <mst@redhat.com>,
 Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>, qemu-devel@nongnu.org,
 Jiaxun Yang <jiaxun.yang@flygoat.com>, BALATON Zoltan <balaton@eik.bme.hu>,
 "Edgar E. Iglesias" <edgar.iglesias@gmail.com>,
 Huacai Chen <chenhc@lemote.com>, Stefano Stabellini <sstabellini@kernel.org>,
 xen-devel@lists.xenproject.org, Yoshinori Sato <ysato@users.sourceforge.jp>,
 Paul Durrant <paul@xen.org>, Magnus Damm <magnus.damm@gmail.com>,
 Markus Armbruster <armbru@redhat.com>,
 =?utf-8?B?SGVydsOp?= Poussineau <hpoussin@reactos.org>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 Anthony Perard <anthony.perard@citrix.com>,
 Samuel Thibault <samuel.thibault@ens-lyon.org>,
 Leif Lindholm <leif@nuviainc.com>, Andrzej Zaborowski <balrogg@gmail.com>,
 Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
 Eduardo Habkost <ehabkost@redhat.com>,
 Alistair Francis <alistair@alistair23.me>,
 "Dr. David Alan Gilbert" <dgilbert@redhat.com>,
 Beniamino Galvani <b.galvani@gmail.com>,
 Niek Linnenbank <nieklinnenbank@gmail.com>, qemu-arm@nongnu.org,
 =?utf-8?Q?Marc-Andr=C3=A9?= Lureau <marcandre.lureau@redhat.com>,
 Richard Henderson <rth@twiddle.net>,
 Radoslaw Biernacki <radoslaw.biernacki@linaro.org>,
 Igor Mitsyanko <i.mitsyanko@gmail.com>,
 Philippe =?utf-8?Q?Mathieu-Daud=C3=A9?= <philmd@redhat.com>,
 Paul Zimmerman <pauldzim@gmail.com>, qemu-ppc@nongnu.org,
 David Gibson <david@gibson.dropbear.id.au>,
 Paolo Bonzini <pbonzini@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Sat, Jul 04, 2020 at 04:49:17PM +0200, Philippe Mathieu-Daudé wrote:
> Hi,
> 
> This is the second time I try to replace a magic typename string
> by a constant, and Zoltan warns me this is counterproductive as
> "hw/usb.h" pulls in an insane amount of code.
> 
> Time to give the usb subsystem some love and move forward.
> 
> This series can be decomposed as follows:
> 
>  1-2:    preliminary machine cleanups (arm/ppc)
>  3-13:   usb related headers cleanups
>  14-15:  usb quirks cleanup
>  16-18:  refactor usb_get_dev_path() to add usb_get_port_path()
>  19:     let spapr use usb_get_port_path() to make USBDevice opaque
>  20:     extract the public USB API (for machine/board/soc)
>  21:     make the older "usb.h" internal to hw/usb/
>  22-25:  use TYPENAME definitions
>  26:     cover dwc2 in MAINTAINERS
> 
> Please review.

Looks good overall, I don't feel like squeezing this into 5.1 though.
Can you repost (with the few comments addressed) once 5.2 is open for
development in roughly a month?

thanks,
  Gerd



From xen-devel-bounces@lists.xenproject.org Mon Jul 13 11:44:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jul 2020 11:44:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1juws4-0007cO-4J; Mon, 13 Jul 2020 11:43:44 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Fjqj=AY=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1juws3-0007c4-M7
 for xen-devel@lists.xenproject.org; Mon, 13 Jul 2020 11:43:43 +0000
X-Inumbo-ID: 152d22e7-c4fe-11ea-9243-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 152d22e7-c4fe-11ea-9243-12813bfff9fa;
 Mon, 13 Jul 2020 11:43:35 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=oi/c1m8/qFg/+TEn1BBCXCPxxA27qgIce9/HwLGAQrc=; b=anaPa3hjP3mvpcKO+VGULFyiU
 ENMIu+EvmeHH6vXelZiJhDmt7+LMrfqlWtle2WB6NYRSsL5d2VXEJbsDyDoQ5buiVlDB3Kv8Q4cuS
 v4DDBTy1jK8iGwx/aoVwKz1Aa5OjO0v5sk6sixOfknopX4WYK5S2Sagf1czK4y12QL254=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1juwrv-000486-JI; Mon, 13 Jul 2020 11:43:35 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1juwrv-0003Ub-9p; Mon, 13 Jul 2020 11:43:35 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1juwrv-00055q-96; Mon, 13 Jul 2020 11:43:35 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151854-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 151854: tolerable FAIL
X-Osstest-Failures: xen-unstable:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=02d69864b51a4302a148c28d6d391238a6778b4b
X-Osstest-Versions-That: xen=02d69864b51a4302a148c28d6d391238a6778b4b
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 13 Jul 2020 11:43:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151854 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151854/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-rtds     16 guest-start/debian.repeat    fail  like 151824
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 151840
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 151840
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 151840
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 151840
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 151840
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 151840
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 151840
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 151840
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop             fail like 151840
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  02d69864b51a4302a148c28d6d391238a6778b4b
baseline version:
 xen                  02d69864b51a4302a148c28d6d391238a6778b4b

Last test of basis   151854  2020-07-13 01:51:12 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Mon Jul 13 13:36:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jul 2020 13:36:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1juyd6-0008M2-Ck; Mon, 13 Jul 2020 13:36:24 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7ndB=AY=gmail.com=refactormyself@srs-us1.protection.inumbo.net>)
 id 1juyPm-0007TO-6S
 for xen-devel@lists.xenproject.org; Mon, 13 Jul 2020 13:22:38 +0000
X-Inumbo-ID: eaf90130-c50b-11ea-bb8b-bc764e2007e4
Received: from mail-ed1-x544.google.com (unknown [2a00:1450:4864:20::544])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id eaf90130-c50b-11ea-bb8b-bc764e2007e4;
 Mon, 13 Jul 2020 13:22:37 +0000 (UTC)
Received: by mail-ed1-x544.google.com with SMTP id a1so8212201edt.10
 for <xen-devel@lists.xenproject.org>; Mon, 13 Jul 2020 06:22:37 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:to:cc:subject:date:message-id;
 bh=TEh/JkkBeDq0BmwWDLkY2ALcEwIhER9cNUR1loI6e78=;
 b=hlFk02nnRr+rQKvpcmIRPBcv9Vppks3wU3BsbHQlS35RNRr0Lm2YUw7/utWjHu70OH
 fu21BCriHQitKJ9VJJ9HkQBoc0oNiPianJ9DfryJ5s4iB77Gi9+P+wSDFA8dw7sDMdjN
 CRk+C/cn77UZdk8p4eMVCYK2N1kZ9NOWbvXAMsA8NoetiFgB4PjRem+NJr8k6FdLryN7
 KSXqy55pf7Mo7GW41Rrh43kHUodvpJcq7yyqbdwHNICmnMteEU7rrwF/z8u+pZc3nMOJ
 t5bRIsTEDAnzpyq0Z/t5ptBct6erL48Bl/rPmxQHyXzHf8lqWgUjMDOqwGt/I2ULZZSO
 jYxQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:to:cc:subject:date:message-id;
 bh=TEh/JkkBeDq0BmwWDLkY2ALcEwIhER9cNUR1loI6e78=;
 b=bxo3hiry3KYOa7F4NUxNXwA0JQQVlnIYXslHfHtM8NInh92Cr9LyyqkPl1q/zT5TpI
 tq8Z2IhYlpDinT3FRwf/h66zImxj5VluAitku94cONY46ZnpceebGV0gXTtIn/yDnk6b
 BU3qe+kktTHTI6gycZKy2qVPMuVjlAr4tGP7oHIIZAgRNx67nOSM5Y48tICNxskATHFN
 HckbqqkEw3q4CzZiXbtkvVzftGthg7jJbl0spexZe/nmlfSBXTTrpQDni6NGFFrwPUdM
 ffX+L1rnCGGKgbZkPZpWVVP0Z2UwbJExP1oEeCQcrp20MhttHHDe60JV7Cxy5klVaeRh
 vDTw==
X-Gm-Message-State: AOAM531fmjqB0Bt6/nwiasJYLtQmbATWyLw+XepO/DjbOl+7ey81rh9a
 0BW5DT6/TWGaIwvKyKFvIGw=
X-Google-Smtp-Source: ABdhPJw69dKLZM419xxp02IeffDX2nQ+F52A+jh3cXUOZ7YUfQhM+G+xXauU5Oq4EhNqFD1PfU4cXQ==
X-Received: by 2002:a50:fb93:: with SMTP id e19mr83849853edq.106.1594646556147; 
 Mon, 13 Jul 2020 06:22:36 -0700 (PDT)
Received: from net.saheed (54007186.dsl.pool.telekom.hu. [84.0.113.134])
 by smtp.gmail.com with ESMTPSA id n9sm11806540edr.46.2020.07.13.06.22.30
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 13 Jul 2020 06:22:35 -0700 (PDT)
From: "Saheed O. Bolarinwa" <refactormyself@gmail.com>
To: helgaas@kernel.org
Subject: [RFC PATCH 00/35] Move all PCIBIOS* definitions into arch/x86
Date: Mon, 13 Jul 2020 14:22:12 +0200
Message-Id: <20200713122247.10985-1-refactormyself@gmail.com>
X-Mailer: git-send-email 2.18.2
X-Mailman-Approved-At: Mon, 13 Jul 2020 13:36:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Rich Felker <dalias@libc.org>,
 "Martin K. Petersen" <martin.petersen@oracle.com>, linux-sh@vger.kernel.org,
 linux-pci@vger.kernel.org, linux-nvme@lists.infradead.org,
 Yicong Yang <yangyicong@hisilicon.com>, Keith Busch <kbusch@kernel.org>,
 netdev@vger.kernel.org, Paul Mackerras <paulus@samba.org>,
 linux-i2c@vger.kernel.org, bcm-kernel-feedback-list@broadcom.com,
 sparclinux@vger.kernel.org, rfi@lists.rocketboards.org,
 Toan Le <toan@os.amperecomputing.com>, Greg Ungerer <gerg@linux-m68k.org>,
 Marek Vasut <marek.vasut+renesas@gmail.com>, Rob Herring <robh@kernel.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Sagi Grimberg <sagi@grimberg.me>,
 Yoshinori Sato <ysato@users.sourceforge.jp>, linux-scsi@vger.kernel.org,
 Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
 Michael Ellerman <mpe@ellerman.id.au>, linux-atm-general@lists.sourceforge.net,
 Russell King <linux@armlinux.org.uk>,
 Realtek linux nic maintainers <nic_swsd@realtek.com>,
 Christoph Hellwig <hch@lst.de>, Ley Foon Tan <ley.foon.tan@intel.com>,
 Geert Uytterhoeven <geert@linux-m68k.org>,
 =?UTF-8?q?Rafa=C5=82=20Mi=C5=82ecki?= <zajec5@gmail.com>,
 Chas Williams <3chas3@gmail.com>,
 Benjamin Herrenschmidt <benh@kernel.crashing.org>,
 xen-devel@lists.xenproject.org, Matt Turner <mattst88@gmail.com>,
 linux-mips@vger.kernel.org, linux-kernel-mentees@lists.linuxfoundation.org,
 Kevin Hilman <khilman@baylibre.com>, Guenter Roeck <linux@roeck-us.net>,
 linux-hwmon@vger.kernel.org, Jean Delvare <jdelvare@suse.com>,
 Andrew Donnellan <ajd@linux.ibm.com>, Arnd Bergmann <arnd@arndb.de>,
 Ray Jui <rjui@broadcom.com>, "James E.J. Bottomley" <jejb@linux.ibm.com>,
 Yue Wang <yue.wang@Amlogic.com>, Jens Axboe <axboe@fb.com>,
 Jakub Kicinski <kuba@kernel.org>, linux-m68k@lists.linux-m68k.org,
 Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>,
 Ivan Kokshaysky <ink@jurassic.park.msu.ru>, Michael Buesch <m@bues.ch>,
 skhan@linuxfoundation.org, bjorn@helgaas.com,
 linux-amlogic@lists.infradead.org,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>, Guan Xuetao <gxt@pku.edu.cn>,
 linux-arm-kernel@lists.infradead.org, Richard Henderson <rth@twiddle.net>,
 Juergen Gross <jgross@suse.com>, Michal Simek <monstr@monstr.eu>,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 Scott Branden <sbranden@broadcom.com>, Bjorn Helgaas <bhelgaas@google.com>,
 Jingoo Han <jingoohan1@gmail.com>,
 "Saheed O. Bolarinwa" <refactormyself@gmail.com>,
 Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>,
 linux-wireless@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-renesas-soc@vger.kernel.org, Brian King <brking@us.ibm.com>,
 Philipp Zabel <p.zabel@pengutronix.de>, linux-alpha@vger.kernel.org,
 Frederic Barrat <fbarrat@linux.ibm.com>,
 Gustavo Pimentel <gustavo.pimentel@synopsys.com>,
 linuxppc-dev@lists.ozlabs.org, "David S. Miller" <davem@davemloft.net>,
 Heiner Kallweit <hkallweit1@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>



The goal of this series is to move the definitions of *all* PCIBIOS* values
from include/linux/pci.h into arch/x86 and limit their use to code there.
All other tree-specific definitions will be left intact; perhaps they can
be renamed later.

PCIBIOS* is an x86 concept defined by the PCI spec. The PCIBIOS* error
codes are positive values, and this introduces complexities which other
archs need not incur.

PLAN:

1.   [PATCH v0 1-36] Replace all PCIBIOS_SUCCESSFUL with 0

2a.  Audit all functions returning PCIBIOS_* error values directly or
     indirectly, and guard against possible bugs introduced by (2b)

2b.  Make all functions returning PCIBIOS_* error values call
     pcibios_err_to_errno(). *This will change their behaviour, by design.*

3.   Clone pcibios_err_to_errno() into arch/x86/pci/pcbios.c as _v2.
     This version handles the positive error codes directly and uses no
     PCIBIOS* definitions, so calls to it have no outside dependencies.

4.   Make all x86 code that needs to convert to -E* values call the
     cloned version, pcibios_err_to_errno_v2()

5.   Assign PCIBIOS_* error values directly to generic -E* errors

6.   Refactor pcibios_err_to_errno() and mark it deprecated

7.   Replace all calls to pcibios_err_to_errno() with the proper -E* value
     or 0.

8.   Remove all PCIBIOS* definitions from include/linux/pci.h, along
     with pcibios_err_to_errno().

9.   Redefine all PCIBIOS* macros with their original values inside
     arch/x86/pci/pcbios.c

10.  Redefine pcibios_err_to_errno() inside arch/x86/pci/pcbios.c

11.  Replace pcibios_err_to_errno_v2() calls with pcibios_err_to_errno()

12.  Remove pcibios_err_to_errno_v2()

Suggested-by: Bjorn Helgaas <bjorn@helgaas.com>
Suggested-by: Yicong Yang <yangyicong@hisilicon.com>
Signed-off-by: "Saheed O. Bolarinwa" <refactormyself@gmail.com>


Bolarinwa Olayemi Saheed (35):
  Change PCIBIOS_SUCCESSFUL to 0
  Change PCIBIOS_SUCCESSFUL to 0
  Change PCIBIOS_SUCCESSFUL to 0
  Tidy Success/Failure checks
  Change PCIBIOS_SUCCESSFUL to 0
  Tidy Success/Failure checks
  Change PCIBIOS_SUCCESSFUL to 0
  Tidy Success/Failure checks
  Change PCIBIOS_SUCCESSFUL to 0
  Tidy Success/Failure checks
  Change PCIBIOS_SUCCESSFUL to 0
  Tidy Success/Failure checks
  Change PCIBIOS_SUCCESSFUL to 0
  Change PCIBIOS_SUCCESSFUL to 0
  Tidy Success/Failure checks
  Change PCIBIOS_SUCCESSFUL to 0
  Tidy Success/Failure checks
  Change PCIBIOS_SUCCESSFUL to 0
  Change PCIBIOS_SUCCESSFUL to 0
  Tidy Success/Failure checks
  Fix Style ERROR: assignment in if condition
  Change PCIBIOS_SUCCESSFUL to 0
  Change PCIBIOS_SUCCESSFUL to 0
  Change PCIBIOS_SUCCESSFUL to 0
  Tidy Success/Failure checks
  Change PCIBIOS_SUCCESSFUL to 0
  Tidy Success/Failure checks
  Change PCIBIOS_SUCCESSFUL to 0
  Tidy Success/Failure checks
  Change PCIBIOS_SUCCESSFUL to 0
  Change PCIBIOS_SUCCESSFUL to 0
  Change PCIBIOS_SUCCESSFUL to 0
  Tidy Success/Failure checks
  Change PCIBIOS_SUCCESSFUL to 0
  Tidy Success/Failure checks

 arch/alpha/kernel/core_apecs.c                |  4 +--
 arch/alpha/kernel/core_cia.c                  |  4 +--
 arch/alpha/kernel/core_irongate.c             |  4 +--
 arch/alpha/kernel/core_lca.c                  |  4 +--
 arch/alpha/kernel/core_marvel.c               |  4 +--
 arch/alpha/kernel/core_mcpcia.c               |  4 +--
 arch/alpha/kernel/core_polaris.c              |  4 +--
 arch/alpha/kernel/core_t2.c                   |  4 +--
 arch/alpha/kernel/core_titan.c                |  4 +--
 arch/alpha/kernel/core_tsunami.c              |  4 +--
 arch/alpha/kernel/core_wildfire.c             |  4 +--
 arch/alpha/kernel/sys_miata.c                 |  2 +-
 arch/arm/common/it8152.c                      |  4 +--
 arch/arm/mach-cns3xxx/pcie.c                  |  2 +-
 arch/arm/mach-footbridge/dc21285.c            |  4 +--
 arch/arm/mach-iop32x/pci.c                    |  6 ++--
 arch/arm/mach-ixp4xx/common-pci.c             |  8 ++---
 arch/arm/mach-orion5x/pci.c                   |  4 +--
 arch/arm/plat-orion/pcie.c                    |  8 ++---
 arch/m68k/coldfire/pci.c                      |  8 ++---
 arch/microblaze/pci/indirect_pci.c            |  4 +--
 arch/mips/pci/fixup-ath79.c                   |  2 +-
 arch/mips/pci/ops-bcm63xx.c                   | 14 ++++----
 arch/mips/pci/ops-bonito64.c                  |  4 +--
 arch/mips/pci/ops-gt64xxx_pci0.c              |  4 +--
 arch/mips/pci/ops-lantiq.c                    |  4 +--
 arch/mips/pci/ops-loongson2.c                 |  4 +--
 arch/mips/pci/ops-mace.c                      |  4 +--
 arch/mips/pci/ops-msc.c                       |  4 +--
 arch/mips/pci/ops-rc32434.c                   |  6 ++--
 arch/mips/pci/ops-sni.c                       |  4 +--
 arch/mips/pci/ops-tx3927.c                    |  2 +-
 arch/mips/pci/ops-tx4927.c                    |  2 +-
 arch/mips/pci/ops-vr41xx.c                    |  4 +--
 arch/mips/pci/pci-alchemy.c                   |  6 ++--
 arch/mips/pci/pci-ar2315.c                    |  5 ++-
 arch/mips/pci/pci-ar71xx.c                    |  4 +--
 arch/mips/pci/pci-ar724x.c                    |  6 ++--
 arch/mips/pci/pci-bcm1480.c                   |  4 +--
 arch/mips/pci/pci-bcm1480ht.c                 |  4 +--
 arch/mips/pci/pci-mt7620.c                    |  4 +--
 arch/mips/pci/pci-octeon.c                    | 12 +++----
 arch/mips/pci/pci-rt2880.c                    |  4 +--
 arch/mips/pci/pci-rt3883.c                    |  4 +--
 arch/mips/pci/pci-sb1250.c                    |  4 +--
 arch/mips/pci/pci-virtio-guest.c              |  4 +--
 arch/mips/pci/pci-xlp.c                       |  4 +--
 arch/mips/pci/pci-xlr.c                       |  4 +--
 arch/mips/pci/pci-xtalk-bridge.c              | 14 ++++----
 arch/mips/pci/pcie-octeon.c                   |  4 +--
 arch/mips/txx9/generic/pci.c                  |  5 ++-
 arch/powerpc/kernel/rtas_pci.c                |  4 +--
 arch/powerpc/platforms/4xx/pci.c              |  4 +--
 arch/powerpc/platforms/52xx/efika.c           |  4 +--
 arch/powerpc/platforms/52xx/mpc52xx_pci.c     |  4 +--
 arch/powerpc/platforms/82xx/pq2.c             |  2 +-
 arch/powerpc/platforms/85xx/mpc85xx_cds.c     |  2 +-
 arch/powerpc/platforms/85xx/mpc85xx_ds.c      |  2 +-
 arch/powerpc/platforms/86xx/mpc86xx_hpcn.c    |  2 +-
 arch/powerpc/platforms/chrp/pci.c             |  8 ++---
 arch/powerpc/platforms/embedded6xx/holly.c    |  2 +-
 .../platforms/embedded6xx/mpc7448_hpc2.c      |  2 +-
 arch/powerpc/platforms/fsl_uli1575.c          |  2 +-
 arch/powerpc/platforms/maple/pci.c            | 18 +++++-----
 arch/powerpc/platforms/pasemi/pci.c           |  6 ++--
 arch/powerpc/platforms/powermac/pci.c         |  8 ++---
 arch/powerpc/platforms/powernv/eeh-powernv.c  |  4 +--
 arch/powerpc/platforms/powernv/pci.c          |  4 +--
 arch/powerpc/platforms/pseries/eeh_pseries.c  |  4 +--
 arch/powerpc/sysdev/fsl_pci.c                 |  2 +-
 arch/powerpc/sysdev/indirect_pci.c            |  4 +--
 arch/powerpc/sysdev/tsi108_pci.c              |  4 +--
 arch/sh/drivers/pci/common.c                  |  3 +-
 arch/sh/drivers/pci/ops-dreamcast.c           |  4 +--
 arch/sh/drivers/pci/ops-sh4.c                 |  4 +--
 arch/sh/drivers/pci/ops-sh7786.c              |  8 ++---
 arch/sh/drivers/pci/pci.c                     |  2 +-
 arch/sparc/kernel/pci_common.c                | 28 +++++++--------
 arch/unicore32/kernel/pci.c                   |  4 +--
 drivers/atm/iphase.c                          | 20 ++++++-----
 drivers/atm/lanai.c                           |  8 ++---
 drivers/bcma/driver_pci_host.c                |  4 +--
 drivers/hwmon/sis5595.c                       | 13 +++----
 drivers/hwmon/via686a.c                       | 13 +++----
 drivers/hwmon/vt8231.c                        | 13 +++----
 drivers/i2c/busses/i2c-ali15x3.c              |  5 ++-
 drivers/i2c/busses/i2c-nforce2.c              |  3 +-
 drivers/i2c/busses/i2c-sis5595.c              | 15 +++-----
 drivers/misc/cxl/vphb.c                       |  4 +--
 drivers/net/ethernet/realtek/r8169_main.c     |  2 +-
 drivers/nvme/host/pci.c                       |  2 +-
 drivers/pci/access.c                          | 14 ++++----
 drivers/pci/controller/dwc/pci-meson.c        |  4 +--
 .../pci/controller/dwc/pcie-designware-host.c |  2 +-
 drivers/pci/controller/dwc/pcie-designware.c  |  4 +--
 drivers/pci/controller/dwc/pcie-hisi.c        |  4 +--
 drivers/pci/controller/dwc/pcie-tegra194.c    |  4 +--
 .../pci/controller/mobiveil/pcie-mobiveil.c   |  4 +--
 drivers/pci/controller/pci-aardvark.c         |  4 +--
 drivers/pci/controller/pci-ftpci100.c         |  4 +--
 drivers/pci/controller/pci-hyperv.c           |  8 ++---
 drivers/pci/controller/pci-mvebu.c            |  4 +--
 drivers/pci/controller/pci-thunder-ecam.c     | 36 +++++++++----------
 drivers/pci/controller/pci-thunder-pem.c      |  4 +--
 drivers/pci/controller/pci-xgene.c            |  5 ++-
 drivers/pci/controller/pcie-altera.c          | 16 ++++-----
 drivers/pci/controller/pcie-iproc.c           | 10 +++---
 drivers/pci/controller/pcie-mediatek.c        |  4 +--
 drivers/pci/controller/pcie-rcar-host.c       |  8 ++---
 drivers/pci/controller/pcie-rockchip-host.c   | 10 +++---
 drivers/pci/pci-bridge-emul.c                 | 14 ++++----
 drivers/pci/pci.c                             |  8 ++---
 drivers/pci/pcie/bw_notification.c            |  4 +--
 drivers/pci/probe.c                           |  4 +--
 drivers/pci/quirks.c                          |  4 +--
 drivers/pci/syscall.c                         |  8 ++---
 drivers/pci/xen-pcifront.c                    |  2 +-
 drivers/scsi/ipr.c                            | 16 ++++-----
 drivers/scsi/pmcraid.c                        |  6 ++--
 drivers/ssb/driver_gige.c                     |  4 +--
 drivers/ssb/driver_pcicore.c                  |  4 +--
 drivers/xen/xen-pciback/conf_space.c          |  2 +-
 122 files changed, 347 insertions(+), 369 deletions(-)

-- 
2.18.2



From xen-devel-bounces@lists.xenproject.org Mon Jul 13 13:36:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jul 2020 13:36:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1juyd6-0008M8-L8; Mon, 13 Jul 2020 13:36:24 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7ndB=AY=gmail.com=refactormyself@srs-us1.protection.inumbo.net>)
 id 1juyPr-0007TO-0o
 for xen-devel@lists.xenproject.org; Mon, 13 Jul 2020 13:22:43 +0000
X-Inumbo-ID: ebbe1f92-c50b-11ea-bb8b-bc764e2007e4
Received: from mail-ej1-x641.google.com (unknown [2a00:1450:4864:20::641])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ebbe1f92-c50b-11ea-bb8b-bc764e2007e4;
 Mon, 13 Jul 2020 13:22:38 +0000 (UTC)
Received: by mail-ej1-x641.google.com with SMTP id dp18so17116023ejc.8
 for <xen-devel@lists.xenproject.org>; Mon, 13 Jul 2020 06:22:38 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:to:cc:subject:date:message-id:in-reply-to:references;
 bh=z4DbCQASJpB18XYS7ktJTNc93Wbe31Dke4XqE2e1oq0=;
 b=DOSHDGVmk30fZFiz3WKd7zMCSK69f3jEL1u8AsUJjxB2mBuvUr1qBHCx969bF8n9Q2
 KtbdMAJyAt3EFUnpKPSIMJbXHWkN6pnl/9Tnrb3Gl0n+kYXCTO2h5BNyruT2BHFjgl+g
 wH9iQCx1EI7m7ZrVqhRyZdNkYPv0uZXHNRXI4H3ZgM4nssEF6Z4+vhCyAINKoOna9COA
 2vQ/UXMl3YhqVXxT1k4grDXlx/GMqUZZFWedx8icvuVMGE/cnpL+73qZzJoFpLnQEXqM
 kKNEdCxwfmXyA7RuYT4PGHN4OQXUHW+RCFO60LXxoSfPzZmiOAwLoId6GJoDAnwMnMXf
 6VkA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
 :references;
 bh=z4DbCQASJpB18XYS7ktJTNc93Wbe31Dke4XqE2e1oq0=;
 b=rHJC60PW/osm74ZIXIb0RGp+mIAg54R/B5b5ZsXVSU7oOmEfsyeAdYD0HQGugH/fzH
 7xUtV9uPi2Y5VzFr5VSHkAzDpx7aUEiSFOIAiavWMoks+2ml2eO0rwlFIh+RrkGypghB
 l0Onkxod/3P8xrnZVDsFu/ObOMc7CpUpnYjRtuwxpBNZ10v4jyfHLyZsVjCs0XkfTUht
 +aIoaCEKOScWIgSAxAgMuGaNcc2/0pFoCNECYO4H9MnUzq+eyw++GdLKKHR5a8/NQ/Ln
 mQgpKOT0RTFnWUA+KyoSONg09Qc9hXmhYlNL/L3op1d6KetQlJTH7KA4VSRCLCtAr+lG
 2OJA==
X-Gm-Message-State: AOAM530bBvRk8NJeZ98VFI5a4zDy6n3uv4wh+jiUZT4vPHJb2wPcvfsW
 518eeN1+/uIIsQx0AxsWy9g=
X-Google-Smtp-Source: ABdhPJyNJ+2A+7vhlimkJ5sh3qnTf4AmYHcIfFpOxzTngMqwIMgJRuRgZYpxdl6ypR4zISe0e31EHw==
X-Received: by 2002:a17:906:4dd4:: with SMTP id
 f20mr77154689ejw.170.1594646557469; 
 Mon, 13 Jul 2020 06:22:37 -0700 (PDT)
Received: from net.saheed (54007186.dsl.pool.telekom.hu. [84.0.113.134])
 by smtp.gmail.com with ESMTPSA id n9sm11806540edr.46.2020.07.13.06.22.36
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 13 Jul 2020 06:22:37 -0700 (PDT)
From: "Saheed O. Bolarinwa" <refactormyself@gmail.com>
To: helgaas@kernel.org, Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>
Subject: [RFC PATCH 01/35] xen-pciback: Change PCIBIOS_SUCCESSFUL to 0
Date: Mon, 13 Jul 2020 14:22:13 +0200
Message-Id: <20200713122247.10985-2-refactormyself@gmail.com>
X-Mailer: git-send-email 2.18.2
In-Reply-To: <20200713122247.10985-1-refactormyself@gmail.com>
References: <20200713122247.10985-1-refactormyself@gmail.com>
X-Mailman-Approved-At: Mon, 13 Jul 2020 13:36:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "Saheed O. Bolarinwa" <refactormyself@gmail.com>, skhan@linuxfoundation.org,
 linux-kernel@vger.kernel.org, linux-pci@vger.kernel.org, bjorn@helgaas.com,
 xen-devel@lists.xenproject.org, linux-kernel-mentees@lists.linuxfoundation.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

According to the PCI specification (Chapter 2), the PCIBIOS* values are
an x86 concept, so their scope should be limited to arch/x86.

Change all uses of PCIBIOS_SUCCESSFUL to 0.

Signed-off-by: "Saheed O. Bolarinwa" <refactormyself@gmail.com>
---
 drivers/xen/xen-pciback/conf_space.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/xen/xen-pciback/conf_space.c b/drivers/xen/xen-pciback/conf_space.c
index 059de92aea7d..0e7577f16f78 100644
--- a/drivers/xen/xen-pciback/conf_space.c
+++ b/drivers/xen/xen-pciback/conf_space.c
@@ -130,7 +130,7 @@ static inline u32 merge_value(u32 val, u32 new_val, u32 new_val_mask,
 static int xen_pcibios_err_to_errno(int err)
 {
 	switch (err) {
-	case PCIBIOS_SUCCESSFUL:
+	case 0:
 		return XEN_PCI_ERR_success;
 	case PCIBIOS_DEVICE_NOT_FOUND:
 		return XEN_PCI_ERR_dev_not_found;
-- 
2.18.2



From xen-devel-bounces@lists.xenproject.org Mon Jul 13 14:04:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jul 2020 14:04:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1juz40-0002eS-SM; Mon, 13 Jul 2020 14:04:12 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=C18W=AY=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1juz3z-0002eN-Jy
 for xen-devel@lists.xenproject.org; Mon, 13 Jul 2020 14:04:11 +0000
X-Inumbo-ID: b97bc844-c511-11ea-bca7-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b97bc844-c511-11ea-bca7-bc764e2007e4;
 Mon, 13 Jul 2020 14:04:11 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 2FAF5ACA0;
 Mon, 13 Jul 2020 14:04:12 +0000 (UTC)
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v2] docs: specify stability of hypfs path documentation
Date: Mon, 13 Jul 2020 16:03:38 +0200
Message-Id: <20200713140338.16172-1-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, paul@xen.org, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

docs/misc/hypfs-paths.pandoc specifies the supported paths in the
hypervisor file system. Make it clearer that path availability might
change, e.g. due to a path's scope widening or narrowing (for instance,
becoming limited to a specific architecture).

Signed-off-by: Juergen Gross <jgross@suse.com>
Release-acked-by: Paul Durrant <paul@xen.org>
---
V2: reworded as requested by Jan Beulich
---
 docs/misc/hypfs-paths.pandoc | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/docs/misc/hypfs-paths.pandoc b/docs/misc/hypfs-paths.pandoc
index a111c6f25c..00a7cec031 100644
--- a/docs/misc/hypfs-paths.pandoc
+++ b/docs/misc/hypfs-paths.pandoc
@@ -5,6 +5,9 @@ in the Xen hypervisor file system (hypfs).
 
 The hypervisor file system can be accessed via the xenhypfs tool.
 
+The availability of the hypervisor file system depends on the hypervisor
+config option CONFIG_HYPFS, which is enabled by default.
+
 ## Notation
 
 The hypervisor file system is similar to the Linux kernel's sysfs.
@@ -55,6 +58,11 @@ tags enclosed in square brackets.
 * CONFIG_* -- Path is valid only in case the hypervisor was built with
   the respective config option.
 
+Where a tag indicates that a path is available only in certain cases,
+that availability may be extended or reduced in the future by modifying
+or removing the tag. A path, once assigned a meaning, won't go away
+altogether or change its meaning, though.
+
 So an entry could look like this:
 
     /cpu-bugs/active-pv/xpti = ("No"|{"dom0", "domU", "PCID-on"}) [w,X86,PV]
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Mon Jul 13 14:48:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jul 2020 14:48:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1juzkC-0005zk-Ue; Mon, 13 Jul 2020 14:47:48 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=S1uT=AY=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1juzkB-0005zf-R7
 for xen-devel@lists.xenproject.org; Mon, 13 Jul 2020 14:47:47 +0000
X-Inumbo-ID: d0bee1de-c517-11ea-926b-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d0bee1de-c517-11ea-926b-12813bfff9fa;
 Mon, 13 Jul 2020 14:47:47 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 34F2DAB55;
 Mon, 13 Jul 2020 14:47:48 +0000 (UTC)
Subject: Re: [PATCH v2] docs: specify stability of hypfs path documentation
To: Juergen Gross <jgross@suse.com>
References: <20200713140338.16172-1-jgross@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <8a96b1b9-cbcb-557a-5b82-661bbe40fe25@suse.com>
Date: Mon, 13 Jul 2020 16:47:45 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20200713140338.16172-1-jgross@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, paul@xen.org, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 13.07.2020 16:03, Juergen Gross wrote:
> docs/misc/hypfs-paths.pandoc specifies the supported paths in the
> hypervisor file system. Make it clearer that path availability might
> change, e.g. due to a path's scope widening or narrowing (for instance,
> becoming limited to a specific architecture).
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>
> Release-acked-by: Paul Durrant <paul@xen.org>

Acked-by: Jan Beulich <jbeulich@suse.com>

However, I'd like agreement by at least one other REST maintainer on
...

> @@ -55,6 +58,11 @@ tags enclosed in square brackets.
>  * CONFIG_* -- Path is valid only in case the hypervisor was built with
>    the respective config option.
>  
> +Where a tag indicates that a path is available only in certain cases,
> +that availability may be extended or reduced in the future by modifying
> +or removing the tag. A path, once assigned a meaning, won't go away
> +altogether or change its meaning, though.

... the newly imposed guarantee we're now making. We really want to
avoid declaring something as stable without being quite certain we
can keep it stable.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jul 13 15:02:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jul 2020 15:02:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1juzyg-0007dr-7U; Mon, 13 Jul 2020 15:02:46 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Fjqj=AY=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1juzye-0007dX-HK
 for xen-devel@lists.xenproject.org; Mon, 13 Jul 2020 15:02:44 +0000
X-Inumbo-ID: e48d4c9e-c519-11ea-926e-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e48d4c9e-c519-11ea-926e-12813bfff9fa;
 Mon, 13 Jul 2020 15:02:38 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Message-Id:Subject:To:Sender:
 Reply-To:Cc:MIME-Version:Content-Type:Content-Transfer-Encoding:Content-ID:
 Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc
 :Resent-Message-ID:In-Reply-To:References:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=rpKeXUYZHKV9NfrzOsArYDEtX0bjGWA2mnWH2YBxS2U=; b=H4Z4vOnbd9+XCkvQUiX22CAGj3
 OsmCN+0UAnUyU3Wd2tIoAMGmyvV0oa+Z153YX1SiK9p4KBKF32nntCagACIoW/jVr+wkAnN9mfJhP
 tsmNS9zp+zdqA6OSTInqXjvmas6GunFBeZTRIyjVA2r7tt16sSXefvDhrXQoFmypYbGU=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1juzyY-0008Ej-ET; Mon, 13 Jul 2020 15:02:38 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1juzyY-0000P6-48; Mon, 13 Jul 2020 15:02:38 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1juzyY-0000qP-3Q; Mon, 13 Jul 2020 15:02:38 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Subject: [qemu-mainline bisection] complete test-amd64-i386-xl-qemuu-win7-amd64
Message-Id: <E1juzyY-0000qP-3Q@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 13 Jul 2020 15:02:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

branch xen-unstable
xenbranch xen-unstable
job test-amd64-i386-xl-qemuu-win7-amd64
testid windows-install

Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://git.qemu.org/qemu.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  qemuu git://git.qemu.org/qemu.git
  Bug introduced:  7d3660e79830a069f1848bb4fa1cdf8f666424fb
  Bug not present: 9e3903136d9acde2fb2dd9e967ba928050a6cb4a
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/151862/


  (Revision log too long, omitted.)


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/qemu-mainline/test-amd64-i386-xl-qemuu-win7-amd64.windows-install.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/qemu-mainline/test-amd64-i386-xl-qemuu-win7-amd64.windows-install --summary-out=tmp/151862.bisection-summary --basis-template=151065 --blessings=real,real-bisect qemu-mainline test-amd64-i386-xl-qemuu-win7-amd64 windows-install
Searching for failure / basis pass:
 151849 fail [host=pinot0] / 151065 ok.
Failure / basis pass flights: 151849 / 151065
(tree with no url: minios)
Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://git.qemu.org/qemu.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git
Latest c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f45e3a4afa65a45ea1a956a7c5e7410ff40190d1 3c659044118e34603161457db9934a34f816d78b d34498309cff7560ac90c422c56e3137e6a64b19 88ab0c15525ced2eefe39220742efe4769089ad8 02d69864b51a4302a148c28d6d391238a6778b4b
Basis pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 dafce295e6f447ed8905db4e29241e2c6c2a4389 3c659044118e34603161457db9934a34f816d78b 9e3903136d9acde2fb2dd9e967ba928050a6cb4a 2e3de6253422112ae43e608661ba94ea6b345694 058023b343d4e366864831db46e9b438e9e3a178
Generating revisions with ./adhoc-revtuple-generator  git://xenbits.xen.org/linux-pvops.git#c3038e718a19fc596f7b1baba0f83d5146dc7784-c3038e718a19fc596f7b1baba0f83d5146dc7784 git://xenbits.xen.org/osstest/linux-firmware.git#c530a75c1e6a472b0eb9558310b518f0dfcd8860-c530a75c1e6a472b0eb9558310b518f0dfcd8860 git://xenbits.xen.org/osstest/ovmf.git#dafce295e6f447ed8905db4e29241e2c6c2a4389-f45e3a4afa65a45ea1a956a7c5e7410ff40190d1 git://xenbits.xen.org/qemu-xen-traditional.git#3c659044118e34603161457db9934a34f816d78b-3c659044118e34603161457db9934a34f816d78b git://git.qemu.org/qemu.git#9e3903136d9acde2fb2dd9e967ba928050a6cb4a-d34498309cff7560ac90c422c56e3137e6a64b19 git://xenbits.xen.org/osstest/seabios.git#2e3de6253422112ae43e608661ba94ea6b345694-88ab0c15525ced2eefe39220742efe4769089ad8 git://xenbits.xen.org/xen.git#058023b343d4e366864831db46e9b438e9e3a178-02d69864b51a4302a148c28d6d391238a6778b4b
From git://cache:9419/git://git.qemu.org/qemu
   6c87d9f311..00ce6c36b3  master     -> origin/master
Use of uninitialized value $parents in array dereference at ./adhoc-revtuple-generator line 465.
Use of uninitialized value in concatenation (.) or string at ./adhoc-revtuple-generator line 465.
Use of uninitialized value $parents in array dereference at ./adhoc-revtuple-generator line 465.
Use of uninitialized value in concatenation (.) or string at ./adhoc-revtuple-generator line 465.
Use of uninitialized value $parents in array dereference at ./adhoc-revtuple-generator line 465.
Use of uninitialized value in concatenation (.) or string at ./adhoc-revtuple-generator line 465.
Use of uninitialized value $parents in array dereference at ./adhoc-revtuple-generator line 465.
Use of uninitialized value in concatenation (.) or string at ./adhoc-revtuple-generator line 465.
Loaded 55435 nodes in revision graph
Searching for test results:
 151101 fail irrelevant
 151065 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 dafce295e6f447ed8905db4e29241e2c6c2a4389 3c659044118e34603161457db9934a34f816d78b 9e3903136d9acde2fb2dd9e967ba928050a6cb4a 2e3de6253422112ae43e608661ba94ea6b345694 058023b343d4e366864831db46e9b438e9e3a178
 151149 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9af1064995d479df92e399a682ba7e4b2fc78415 3c659044118e34603161457db9934a34f816d78b 7d3660e79830a069f1848bb4fa1cdf8f666424fb 2e3de6253422112ae43e608661ba94ea6b345694 b91825f628c9a62cf2a3a0d972ea81484a8b7fce
 151221 fail irrelevant
 151175 fail irrelevant
 151241 fail irrelevant
 151286 fail irrelevant
 151269 fail irrelevant
 151328 fail irrelevant
 151304 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 322969adf1fb3d6cfbd613f35121315715aff2ed 3c659044118e34603161457db9934a34f816d78b 171199f56f5f9bdf1e5d670d09ef1351d8f01bae 2e3de6253422112ae43e608661ba94ea6b345694 fde76f895d0aa817a1207d844d793239c6639bc6
 151377 fail irrelevant
 151353 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1a992030522622c42aa063788b3276789c56c1e1 3c659044118e34603161457db9934a34f816d78b d4b78317b7cf8c0c635b70086503813f79ff21ec 2e3de6253422112ae43e608661ba94ea6b345694 fde76f895d0aa817a1207d844d793239c6639bc6
 151399 fail irrelevant
 151414 fail irrelevant
 151435 fail irrelevant
 151459 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 0f01cec52f4794777feb067e4fa0bfcedfdc124e 3c659044118e34603161457db9934a34f816d78b e7651153a8801dad6805d450ea8bef9b46c1adf5 88ab0c15525ced2eefe39220742efe4769089ad8 88cfd062e8318dfeb67c7d2eb50b6cd224b0738a
 151471 fail irrelevant
 151485 fail irrelevant
 151500 fail irrelevant
 151518 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 00217f1919270007d7a911f89b32e39b9dcaa907 3c659044118e34603161457db9934a34f816d78b fc1bff958998910ec8d25db86cd2f53ff125f7ab 88ab0c15525ced2eefe39220742efe4769089ad8 23ca7ec0ba620db52a646d80e22f9703a6589f66
 151547 fail irrelevant
 151598 fail irrelevant
 151577 fail irrelevant
 151622 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 627d1d6693b0594d257dbe1a3363a8d4bd4d8307 3c659044118e34603161457db9934a34f816d78b 7b7515702012219410802a168ae4aa45b72a44df 88ab0c15525ced2eefe39220742efe4769089ad8 be63d9d47f571a60d70f8fb630c03871312d9655
 151656 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 627d1d6693b0594d257dbe1a3363a8d4bd4d8307 3c659044118e34603161457db9934a34f816d78b eb6490f544388dd24c0d054a96dd304bc7284450 88ab0c15525ced2eefe39220742efe4769089ad8 be63d9d47f571a60d70f8fb630c03871312d9655
 151634 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 627d1d6693b0594d257dbe1a3363a8d4bd4d8307 3c659044118e34603161457db9934a34f816d78b eb6490f544388dd24c0d054a96dd304bc7284450 88ab0c15525ced2eefe39220742efe4769089ad8 be63d9d47f571a60d70f8fb630c03871312d9655
 151645 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 627d1d6693b0594d257dbe1a3363a8d4bd4d8307 3c659044118e34603161457db9934a34f816d78b eb6490f544388dd24c0d054a96dd304bc7284450 88ab0c15525ced2eefe39220742efe4769089ad8 be63d9d47f571a60d70f8fb630c03871312d9655
 151669 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 627d1d6693b0594d257dbe1a3363a8d4bd4d8307 3c659044118e34603161457db9934a34f816d78b eb6490f544388dd24c0d054a96dd304bc7284450 88ab0c15525ced2eefe39220742efe4769089ad8 be63d9d47f571a60d70f8fb630c03871312d9655
 151685 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 627d1d6693b0594d257dbe1a3363a8d4bd4d8307 3c659044118e34603161457db9934a34f816d78b eb6490f544388dd24c0d054a96dd304bc7284450 88ab0c15525ced2eefe39220742efe4769089ad8 be63d9d47f571a60d70f8fb630c03871312d9655
 151704 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 627d1d6693b0594d257dbe1a3363a8d4bd4d8307 3c659044118e34603161457db9934a34f816d78b eb6490f544388dd24c0d054a96dd304bc7284450 88ab0c15525ced2eefe39220742efe4769089ad8 be63d9d47f571a60d70f8fb630c03871312d9655
 151778 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 bdafda8c457eb90c65f37026589b54258300f05c 3c659044118e34603161457db9934a34f816d78b aff2caf6b3fbab1062e117a47b66d27f7fd2f272 88ab0c15525ced2eefe39220742efe4769089ad8 3fdc211b01b29f252166937238efe02d15cb5780
 151721 fail irrelevant
 151763 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 bdafda8c457eb90c65f37026589b54258300f05c 3c659044118e34603161457db9934a34f816d78b 48f22ad04ead83e61b4b35871ec6f6109779b791 88ab0c15525ced2eefe39220742efe4769089ad8 3fdc211b01b29f252166937238efe02d15cb5780
 151744 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 bdafda8c457eb90c65f37026589b54258300f05c 3c659044118e34603161457db9934a34f816d78b 8796c64ecdfd34be394ea277aaaaa53df0c76996 88ab0c15525ced2eefe39220742efe4769089ad8 3fdc211b01b29f252166937238efe02d15cb5780
 151804 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 bdafda8c457eb90c65f37026589b54258300f05c 3c659044118e34603161457db9934a34f816d78b 45db94cc90c286a9965a285ba19450f448760a09 88ab0c15525ced2eefe39220742efe4769089ad8 3fdc211b01b29f252166937238efe02d15cb5780
 151816 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 bdafda8c457eb90c65f37026589b54258300f05c 3c659044118e34603161457db9934a34f816d78b 45db94cc90c286a9965a285ba19450f448760a09 88ab0c15525ced2eefe39220742efe4769089ad8 3fdc211b01b29f252166937238efe02d15cb5780
 151833 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f45e3a4afa65a45ea1a956a7c5e7410ff40190d1 3c659044118e34603161457db9934a34f816d78b 827937158b72ce2265841ff528bba3c44a1bfbc8 88ab0c15525ced2eefe39220742efe4769089ad8 02d69864b51a4302a148c28d6d391238a6778b4b
 151862 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 567bc4b4ae8a975791382dd30ac413bc0d3ce88c 3c659044118e34603161457db9934a34f816d78b 7d3660e79830a069f1848bb4fa1cdf8f666424fb 2e3de6253422112ae43e608661ba94ea6b345694 b91825f628c9a62cf2a3a0d972ea81484a8b7fce
 151837 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 dafce295e6f447ed8905db4e29241e2c6c2a4389 3c659044118e34603161457db9934a34f816d78b 9e3903136d9acde2fb2dd9e967ba928050a6cb4a 2e3de6253422112ae43e608661ba94ea6b345694 058023b343d4e366864831db46e9b438e9e3a178
 151839 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f45e3a4afa65a45ea1a956a7c5e7410ff40190d1 3c659044118e34603161457db9934a34f816d78b 827937158b72ce2265841ff528bba3c44a1bfbc8 88ab0c15525ced2eefe39220742efe4769089ad8 02d69864b51a4302a148c28d6d391238a6778b4b
 151844 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3ee4f6cb360a877d171f2f9bb76b0d46d2cfa985 3c659044118e34603161457db9934a34f816d78b 9e3903136d9acde2fb2dd9e967ba928050a6cb4a 2e3de6253422112ae43e608661ba94ea6b345694 7028534d8482d25860c4d1aa8e45f0b911abfc5a
 151845 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 394e8e4bf586b4749620a48a23c816ee19f0e04a 3c659044118e34603161457db9934a34f816d78b 9e3903136d9acde2fb2dd9e967ba928050a6cb4a 2e3de6253422112ae43e608661ba94ea6b345694 2995d0afdf2d3fb44d07eada088db3613741db1e
 151846 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 567bc4b4ae8a975791382dd30ac413bc0d3ce88c 3c659044118e34603161457db9934a34f816d78b 9e3903136d9acde2fb2dd9e967ba928050a6cb4a 2e3de6253422112ae43e608661ba94ea6b345694 b91825f628c9a62cf2a3a0d972ea81484a8b7fce
 151841 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f45e3a4afa65a45ea1a956a7c5e7410ff40190d1 3c659044118e34603161457db9934a34f816d78b 2033cc6efa98b831d7839e367aa7d5aa74d0750f 88ab0c15525ced2eefe39220742efe4769089ad8 02d69864b51a4302a148c28d6d391238a6778b4b
 151848 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fd708fe0e1f813d6faf02d92ec5e8d73ce876ed1 3c659044118e34603161457db9934a34f816d78b 7d3660e79830a069f1848bb4fa1cdf8f666424fb 2e3de6253422112ae43e608661ba94ea6b345694 b91825f628c9a62cf2a3a0d972ea81484a8b7fce
 151850 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 dafce295e6f447ed8905db4e29241e2c6c2a4389 3c659044118e34603161457db9934a34f816d78b 9e3903136d9acde2fb2dd9e967ba928050a6cb4a 2e3de6253422112ae43e608661ba94ea6b345694 058023b343d4e366864831db46e9b438e9e3a178
 151851 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f45e3a4afa65a45ea1a956a7c5e7410ff40190d1 3c659044118e34603161457db9934a34f816d78b 2033cc6efa98b831d7839e367aa7d5aa74d0750f 88ab0c15525ced2eefe39220742efe4769089ad8 02d69864b51a4302a148c28d6d391238a6778b4b
 151849 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f45e3a4afa65a45ea1a956a7c5e7410ff40190d1 3c659044118e34603161457db9934a34f816d78b d34498309cff7560ac90c422c56e3137e6a64b19 88ab0c15525ced2eefe39220742efe4769089ad8 02d69864b51a4302a148c28d6d391238a6778b4b
 151853 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 567bc4b4ae8a975791382dd30ac413bc0d3ce88c 3c659044118e34603161457db9934a34f816d78b 7d3660e79830a069f1848bb4fa1cdf8f666424fb 2e3de6253422112ae43e608661ba94ea6b345694 b91825f628c9a62cf2a3a0d972ea81484a8b7fce
 151857 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f45e3a4afa65a45ea1a956a7c5e7410ff40190d1 3c659044118e34603161457db9934a34f816d78b d34498309cff7560ac90c422c56e3137e6a64b19 88ab0c15525ced2eefe39220742efe4769089ad8 02d69864b51a4302a148c28d6d391238a6778b4b
 151859 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 567bc4b4ae8a975791382dd30ac413bc0d3ce88c 3c659044118e34603161457db9934a34f816d78b 9e3903136d9acde2fb2dd9e967ba928050a6cb4a 2e3de6253422112ae43e608661ba94ea6b345694 b91825f628c9a62cf2a3a0d972ea81484a8b7fce
 151860 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 567bc4b4ae8a975791382dd30ac413bc0d3ce88c 3c659044118e34603161457db9934a34f816d78b 7d3660e79830a069f1848bb4fa1cdf8f666424fb 2e3de6253422112ae43e608661ba94ea6b345694 b91825f628c9a62cf2a3a0d972ea81484a8b7fce
 151861 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 567bc4b4ae8a975791382dd30ac413bc0d3ce88c 3c659044118e34603161457db9934a34f816d78b 9e3903136d9acde2fb2dd9e967ba928050a6cb4a 2e3de6253422112ae43e608661ba94ea6b345694 b91825f628c9a62cf2a3a0d972ea81484a8b7fce
Searching for interesting versions
 Result found: flight 151065 (pass), for basis pass
 Result found: flight 151849 (fail), for basis failure
 Repro found: flight 151850 (pass), for basis pass
 Repro found: flight 151857 (fail), for basis failure
 0 revisions at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 567bc4b4ae8a975791382dd30ac413bc0d3ce88c 3c659044118e34603161457db9934a34f816d78b 9e3903136d9acde2fb2dd9e967ba928050a6cb4a 2e3de6253422112ae43e608661ba94ea6b345694 b91825f628c9a62cf2a3a0d972ea81484a8b7fce
No revisions left to test, checking graph state.
 Result found: flight 151846 (pass), for last pass
 Result found: flight 151853 (fail), for first failure
 Repro found: flight 151859 (pass), for last pass
 Repro found: flight 151860 (fail), for first failure
 Repro found: flight 151861 (pass), for last pass
 Repro found: flight 151862 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  qemuu git://git.qemu.org/qemu.git
  Bug introduced:  7d3660e79830a069f1848bb4fa1cdf8f666424fb
  Bug not present: 9e3903136d9acde2fb2dd9e967ba928050a6cb4a
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/151862/


  (Revision log too long, omitted.)

dot: graph is too large for cairo-renderer bitmaps. Scaling by 0.780631 to fit
pnmtopng: 176 colors found
Revision graph left in /home/logs/results/bisect/qemu-mainline/test-amd64-i386-xl-qemuu-win7-amd64.windows-install.{dot,ps,png,html,svg}.
----------------------------------------
151862: tolerable ALL FAIL

flight 151862 qemu-mainline real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/151862/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-win7-amd64 10 windows-install  fail baseline untested


jobs:
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Mon Jul 13 15:08:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jul 2020 15:08:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jv04F-0007rB-06; Mon, 13 Jul 2020 15:08:31 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=GBjS=AY=arndb.de=arnd@srs-us1.protection.inumbo.net>)
 id 1jv04E-0007r6-Cm
 for xen-devel@lists.xenproject.org; Mon, 13 Jul 2020 15:08:30 +0000
X-Inumbo-ID: b4d8d04e-c51a-11ea-8496-bc764e2007e4
Received: from mout.kundenserver.de (unknown [212.227.17.13])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b4d8d04e-c51a-11ea-8496-bc764e2007e4;
 Mon, 13 Jul 2020 15:08:28 +0000 (UTC)
Received: from mail-lf1-f49.google.com ([209.85.167.49]) by
 mrelayeu.kundenserver.de (mreue108 [212.227.15.145]) with ESMTPSA (Nemesis)
 id 1Mg6i8-1kVxun2WkA-00hbxs for <xen-devel@lists.xenproject.org>; Mon, 13 Jul
 2020 17:08:27 +0200
Received: by mail-lf1-f49.google.com with SMTP id m26so9270820lfo.13
 for <xen-devel@lists.xenproject.org>; Mon, 13 Jul 2020 08:08:27 -0700 (PDT)
X-Gm-Message-State: AOAM530bBuQ9F/0yVaZQJIjtDI25LVnq8n4YNuGbZjFgSwCTqWw7vf5G
 vp1ANJSyLo8lvPxr+OI/+PE7G0P3N1GeomYHuAo=
X-Google-Smtp-Source: ABdhPJwTQltR//Qoy7CJjbYFWf5SE8su8bBzApvuOmTDy/uiiaL5a6kEWZGPL3gOs+ATNW8dGkRO2Pf0imsPrOKn0vM=
X-Received: by 2002:ac2:51a1:: with SMTP id f1mr53585665lfk.173.1594652906936; 
 Mon, 13 Jul 2020 08:08:26 -0700 (PDT)
MIME-Version: 1.0
References: <20200713122247.10985-1-refactormyself@gmail.com>
In-Reply-To: <20200713122247.10985-1-refactormyself@gmail.com>
From: Arnd Bergmann <arnd@arndb.de>
Date: Mon, 13 Jul 2020 17:08:10 +0200
X-Gmail-Original-Message-ID: <CAK8P3a3NWSZw6678k1O2eJ6-c5GuW7484PRvEzU9MEPPrCD-yw@mail.gmail.com>
Message-ID: <CAK8P3a3NWSZw6678k1O2eJ6-c5GuW7484PRvEzU9MEPPrCD-yw@mail.gmail.com>
Subject: Re: [RFC PATCH 00/35] Move all PCIBIOS* definitions into arch/x86
To: "Saheed O. Bolarinwa" <refactormyself@gmail.com>
Content-Type: text/plain; charset="UTF-8"
X-Provags-ID: V03:K1:XFtXGoka/3tyw1FhbSmnONvaU+vzKLgXjQqTD9l5k8YtAsjujNv
 Pa8T3ZAGZ9La16GpeF5i++nkqzF/44rTyq/p/jK5AiQykuJ3U+hXctfd/SnZunVKYSki2JQ
 Hxeg0FnhVpvdtueHmN6S3RKp7WWC+K5Z9qADkSDwcwWj0LJsP15CD2R8dIwV3xY3CCW76rw
 O7DHMWHP30fvFmJKrfFig==
X-Spam-Flag: NO
X-UI-Out-Filterresults: notjunk:1;V03:K0:rTCVYQ6W/II=:+iAUCRNTn1Fv7kzF7ZV35d
 Lm2gmW7hIihPGowNRuTixmNL5gWx/8kG84dbJGgPBbfGuEeTgBJqdE7Ep8hoU+GHo5nzMsIhe
 GovCT4Vpzuk9o4n1QDg7lHxKmvEVj5bo4Sjva4CBDYFC0cKPgHrlPj5ON1m2W+pGO8I55nVw4
 h/Q0J/jS5RM3Eh87PjzLX8qjRsgoM+T5hVCkRg737Z8scCWGJjF2urvAjaAOuvINV6NFsujKr
 DUMtv3bTfRv2L4qRK2fSSwypbaRGbJrnqY711cUEvGPwq1C9vBQOznwfRiCf9XSNeGHeXINQq
 IOItU2efwRyt/u1l/uRdSNeXEpnxhm0v6K6vN2BO/8eJFZSdHfMB0HAFDBgs2u3Rz7ZFZ2w0c
 B+NDUHrKlRcKjEHqDwF1xN+u2ct6ZMthxQ3jWvsS/RZCC/vOjR+OwWbmG4C7hB5hsBI9kCexx
 AAVsNeFjaYgON0F2zt2xI6PGz49KAB0YH1pjvRa/Nlin9SGoFvnZxlx+UPSHJNaAhm8rQdWfV
 U924BhNUGR++szjgZZ0Z6ZCTuW0b+iVxoAyL4dBD830QoSyoJLYt5HjuC/5eG9aQt1beFiNo3
 VUiy8lPurJ/wIuAhIfV7meuaJNXyny9kJjY61qv1WNdB9PPnEKl64Rggk9aQ7JeqGIsacBwOw
 kCyqAYN6hjMWv+YZVZrI2gQhFAhQLhXwKeP0W1jotwxsHj9NGJ6/QuDPVFKje5uye0mYY/eQ8
 nCtC7k+J2ijlWZ0Y/Jo/zsSKTKCvV2EZWdB3+RFPBe66mbnrI029hF6pLfIJ3IUOH7niiKeDk
 jLZkbFQMvg/H252cEsMDeeFZvoeJNHLAsy1GgN1A99HQmsWeCzjIjf25XbRPtGMwZvkNSAt
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Rich Felker <dalias@libc.org>,
 "Martin K. Petersen" <martin.petersen@oracle.com>,
 Linux-sh list <linux-sh@vger.kernel.org>,
 linux-pci <linux-pci@vger.kernel.org>, linux-nvme@lists.infradead.org,
 Yicong Yang <yangyicong@hisilicon.com>,
 sparclinux <sparclinux@vger.kernel.org>,
 Realtek linux nic maintainers <nic_swsd@realtek.com>,
 Paul Mackerras <paulus@samba.org>, Linux I2C <linux-i2c@vger.kernel.org>,
 bcm-kernel-feedback-list <bcm-kernel-feedback-list@broadcom.com>,
 Bjorn Helgaas <helgaas@kernel.org>, rfi@lists.rocketboards.org,
 Toan Le <toan@os.amperecomputing.com>, Greg Ungerer <gerg@linux-m68k.org>,
 Marek Vasut <marek.vasut+renesas@gmail.com>, Rob Herring <robh@kernel.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Sagi Grimberg <sagi@grimberg.me>,
 Yoshinori Sato <ysato@users.sourceforge.jp>,
 linux-scsi <linux-scsi@vger.kernel.org>,
 Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
 Michael Ellerman <mpe@ellerman.id.au>, linux-atm-general@lists.sourceforge.net,
 Russell King <linux@armlinux.org.uk>, Ley Foon Tan <ley.foon.tan@intel.com>,
 Christoph Hellwig <hch@lst.de>, Geert Uytterhoeven <geert@linux-m68k.org>,
 =?UTF-8?B?UmFmYcWCIE1pxYJlY2tp?= <zajec5@gmail.com>,
 Chas Williams <3chas3@gmail.com>,
 Benjamin Herrenschmidt <benh@kernel.crashing.org>,
 xen-devel <xen-devel@lists.xenproject.org>, Matt Turner <mattst88@gmail.com>,
 "open list:BROADCOM NVRAM DRIVER" <linux-mips@vger.kernel.org>,
 linux-kernel-mentees@lists.linuxfoundation.org,
 Kevin Hilman <khilman@baylibre.com>, Guenter Roeck <linux@roeck-us.net>,
 linux-hwmon@vger.kernel.org, Jean Delvare <jdelvare@suse.com>,
 Andrew Donnellan <ajd@linux.ibm.com>, Ray Jui <rjui@broadcom.com>,
 "James E.J. Bottomley" <jejb@linux.ibm.com>,
 Linux-Renesas <linux-renesas-soc@vger.kernel.org>,
 Yue Wang <yue.wang@amlogic.com>, Jens Axboe <axboe@fb.com>,
 Jakub Kicinski <kuba@kernel.org>, linux-m68k <linux-m68k@lists.linux-m68k.org>,
 Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>,
 Ivan Kokshaysky <ink@jurassic.park.msu.ru>, Michael Buesch <m@bues.ch>,
 Shuah Khan <skhan@linuxfoundation.org>, bjorn@helgaas.com,
 "open list:ARM/Amlogic Meson SoC support" <linux-amlogic@lists.infradead.org>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>, Guan Xuetao <gxt@pku.edu.cn>,
 Linux ARM <linux-arm-kernel@lists.infradead.org>,
 Richard Henderson <rth@twiddle.net>, Juergen Gross <jgross@suse.com>,
 Michal Simek <monstr@monstr.eu>,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 Scott Branden <sbranden@broadcom.com>, Bjorn Helgaas <bhelgaas@google.com>,
 Jingoo Han <jingoohan1@gmail.com>, Networking <netdev@vger.kernel.org>,
 Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>,
 linux-wireless <linux-wireless@vger.kernel.org>,
 "linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
 Keith Busch <kbusch@kernel.org>, Brian King <brking@us.ibm.com>,
 Philipp Zabel <p.zabel@pengutronix.de>, alpha <linux-alpha@vger.kernel.org>,
 Frederic Barrat <fbarrat@linux.ibm.com>,
 Gustavo Pimentel <gustavo.pimentel@synopsys.com>,
 linuxppc-dev <linuxppc-dev@lists.ozlabs.org>,
 "David S. Miller" <davem@davemloft.net>,
 Heiner Kallweit <hkallweit1@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Mon, Jul 13, 2020 at 3:22 PM Saheed O. Bolarinwa
<refactormyself@gmail.com> wrote:
> The goal of this series is to move the definitions of *all* PCIBIOS* from
> include/linux/pci.h to arch/x86 and limit their use to within there.
> All other tree-specific definitions will be left intact. Maybe they can
> be renamed.
>
> PCIBIOS* is an x86 concept as defined by the PCI spec. The error
> codes returned by PCIBIOS* are positive values, and this introduces some
> complexities which other archs need not incur.

I think the intention is good, but I find the series in its current form
very hard to review, in particular the way you touch some functions three
times with trivial changes. Instead of

1) replace PCIBIOS_SUCCESSFUL with 0
2) drop pointless 0-comparison
3) reformat whitespace

I would suggest combining the first two steps into one patch per
subsystem and dropping the third step.

> PLAN:
>
> 1.   [PATCH v0 1-36] Replace all PCIBIOS_SUCCESSFUL with 0
>
> 2a.  Audit all functions returning PCIBIOS_* error values directly or
>      indirectly and prevent possible bug coming in (2b)
>
> 2b.  Make all functions returning PCIBIOS_* error values call
>      pcibios_err_to_errno(). *This will change their behaviour, for good.*
>
> 3.   Clone a pcibios_err_to_errno() into arch/x86/pci/pcbios.c as _v2.
>      This handles the positive error codes directly and will not use any
>      PCIBIOS* definitions. So calls to it have no outside dependence.
>
> 4.   Make all x86 codes that needs to convert to -E* values call the
>      cloned version - pcibios_err_to_errno_v2()
>
> 5.   Assign PCIBIOS_* errors values directly to generic -E* errors
>
> 6.   Refactor pcibios_err_to_errno() and mark it deprecated
>
> 7.   Replace all calls to pcibios_err_to_errno() with the proper -E* value
>      or 0.
>
> 8.   Remove all PCIBIOS* definitions in include/linux/pci.h and
>      pcibios_err_to_errno() too.
>
> 9.   Redefine all PCIBIOS* definitions with original values inside
>      arch/x86/pci/pcbios.c
>
> 10.  Redefine pcibios_err_to_errno() inside arch/x86/pci/pcbios.c
>
> 11.  Replace pcibios_err_to_errno_v2() calls with pcibios_err_to_errno()
>
> 12.  Remove pcibios_err_to_errno_v2()
>
> Suggested-by: Bjorn Helgaas <bjorn@helgaas.com>
> Suggested-by: Yicong Yang <yangyicong@hisilicon.com>
> Signed-off-by: "Saheed O. Bolarinwa" <refactormyself@gmail.com>

I would hope that there is a simpler procedure to get to good
code than 12 steps that rename the same things multiple times.

Maybe the work can be split up differently, with a similar end result
but fewer, more easily reviewed patches. The way I'd look at the
problem, there are three main areas that can be dealt with one at
a time:

a) callers of the high-level config space accessors
   pci_{write,read}_config_{byte,word,dword}, mostly in device
   drivers.
b) low-level implementation of the config space accessors
   through struct pci_ops
c) all other occurrences of these constants

Starting with a), my first question is whether any high-level drivers
even need to care about errors from these functions. I see 4913
callers that ignore the return code and 576 that actually
check it, and almost none care about the specific error (as you
found as well). Unless we conclude that most PCI drivers are
wrong, could we just change the return type to 'void' and assume
they never fail for valid arguments on a valid pci_device*?

For b), it might be nice to also change other aspects of the interface,
e.g. passing a pci_host_bridge pointer plus bus number instead of
a pci_bus pointer, or having the callback in the pci_host_bridge
structure.

> Bolarinwa Olayemi Saheed (35):
>   Change PCIBIOS_SUCCESSFUL to 0
>   Change PCIBIOS_SUCCESSFUL to 0
>   Change PCIBIOS_SUCCESSFUL to 0
>   Tidy Success/Failure checks
>   Change PCIBIOS_SUCCESSFUL to 0
>   Tidy Success/Failure checks
>   Change PCIBIOS_SUCCESSFUL to 0

Some patches have identical subject lines including the subsystem
prefix, which you should avoid. Try to also fix the git request-pull
output to not drop that prefix here so the list makes more sense.

        Arnd


From xen-devel-bounces@lists.xenproject.org Mon Jul 13 15:55:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jul 2020 15:55:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jv0nH-0003VF-Iv; Mon, 13 Jul 2020 15:55:03 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=52To=AY=oracle.com=boris.ostrovsky@srs-us1.protection.inumbo.net>)
 id 1jv0nG-0003VA-0N
 for xen-devel@lists.xenproject.org; Mon, 13 Jul 2020 15:55:02 +0000
X-Inumbo-ID: 3525a33e-c521-11ea-927e-12813bfff9fa
Received: from aserp2120.oracle.com (unknown [141.146.126.78])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 3525a33e-c521-11ea-927e-12813bfff9fa;
 Mon, 13 Jul 2020 15:55:00 +0000 (UTC)
Received: from pps.filterd (aserp2120.oracle.com [127.0.0.1])
 by aserp2120.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 06DFkrsq142046;
 Mon, 13 Jul 2020 15:54:25 GMT
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com;
 h=subject : to :
 references : from : message-id : date : mime-version : in-reply-to :
 content-type : content-transfer-encoding; s=corp-2020-01-29;
 bh=YfsCV97Xo9LXbqDIaJggcb1pS+4bJbSWqsCksn6aRt4=;
 b=qfIW8De2FYILCNtlXt/UGbYzyfngsSU3s6D7AZqhh+ORukvy1OS5Bk9irwhm3BG5fqxz
 Sh9mxtjU6u08LA1gORyS6FgGuSJTgftzoOW1sjHXOzvkxrmerVFue7spOoEFbisa+Rz3
 JUBl4UaMF5jrkciTUDuhfp/Ig78kuViMQZa+nMaKGbOqkCnSJvZH3KzxYImCEAjHz302
 HvT7eiPx79vM+CQWVWS398iBHBImeVIQCV/E8RS0PxTAv+4PaPcVywzSmZOOOFNWvG/g
 9I5QEOXeqEYxOBHrv6BzUD21wFfHSArHQ0accz5j650r5fJt7U43bq1uVT6B+36lVzB3 Kw== 
Received: from userp3030.oracle.com (userp3030.oracle.com [156.151.31.80])
 by aserp2120.oracle.com with ESMTP id 3275ckyv8y-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=FAIL);
 Mon, 13 Jul 2020 15:54:25 +0000
Received: from pps.filterd (userp3030.oracle.com [127.0.0.1])
 by userp3030.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 06DFmYAG136898;
 Mon, 13 Jul 2020 15:52:24 GMT
Received: from aserv0121.oracle.com (aserv0121.oracle.com [141.146.126.235])
 by userp3030.oracle.com with ESMTP id 327qbvt2rt-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Mon, 13 Jul 2020 15:52:24 +0000
Received: from abhmp0011.oracle.com (abhmp0011.oracle.com [141.146.116.17])
 by aserv0121.oracle.com (8.14.4/8.13.8) with ESMTP id 06DFq5Ll000743;
 Mon, 13 Jul 2020 15:52:06 GMT
Received: from [10.39.206.45] (/10.39.206.45)
 by default (Oracle Beehive Gateway v4.0)
 with ESMTP ; Mon, 13 Jul 2020 08:52:05 -0700
Subject: Re: [PATCH v2 01/11] xen/manage: keep track of the on-going suspend
 mode
To: Anchal Agarwal <anchalag@amazon.com>, tglx@linutronix.de, mingo@redhat.com,
 bp@alien8.de, hpa@zytor.com, x86@kernel.org, jgross@suse.com,
 linux-pm@vger.kernel.org, linux-mm@kvack.org, kamatam@amazon.com,
 sstabellini@kernel.org, konrad.wilk@oracle.com, roger.pau@citrix.com,
 axboe@kernel.dk, davem@davemloft.net, rjw@rjwysocki.net,
 len.brown@intel.com, pavel@ucw.cz, peterz@infradead.org,
 eduval@amazon.com, sblbir@amazon.com, xen-devel@lists.xenproject.org,
 vkuznets@redhat.com, netdev@vger.kernel.org,
 linux-kernel@vger.kernel.org, dwmw@amazon.co.uk, benh@kernel.crashing.org
References: <cover.1593665947.git.anchalag@amazon.com>
 <20200702182136.GA3511@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Autocrypt: addr=boris.ostrovsky@oracle.com; keydata=
 xsFNBFH8CgsBEAC0KiOi9siOvlXatK2xX99e/J3OvApoYWjieVQ9232Eb7GzCWrItCzP8FUV
 PQg8rMsSd0OzIvvjbEAvaWLlbs8wa3MtVLysHY/DfqRK9Zvr/RgrsYC6ukOB7igy2PGqZd+M
 MDnSmVzik0sPvB6xPV7QyFsykEgpnHbvdZAUy/vyys8xgT0PVYR5hyvhyf6VIfGuvqIsvJw5
 C8+P71CHI+U/IhsKrLrsiYHpAhQkw+Zvyeml6XSi5w4LXDbF+3oholKYCkPwxmGdK8MUIdkM
 d7iYdKqiP4W6FKQou/lC3jvOceGupEoDV9botSWEIIlKdtm6C4GfL45RD8V4B9iy24JHPlom
 woVWc0xBZboQguhauQqrBFooHO3roEeM1pxXjLUbDtH4t3SAI3gt4dpSyT3EvzhyNQVVIxj2
 FXnIChrYxR6S0ijSqUKO0cAduenhBrpYbz9qFcB/GyxD+ZWY7OgQKHUZMWapx5bHGQ8bUZz2
 SfjZwK+GETGhfkvNMf6zXbZkDq4kKB/ywaKvVPodS1Poa44+B9sxbUp1jMfFtlOJ3AYB0WDS
 Op3d7F2ry20CIf1Ifh0nIxkQPkTX7aX5rI92oZeu5u038dHUu/dO2EcuCjl1eDMGm5PLHDSP
 0QUw5xzk1Y8MG1JQ56PtqReO33inBXG63yTIikJmUXFTw6lLJwARAQABzTNCb3JpcyBPc3Ry
 b3Zza3kgKFdvcmspIDxib3Jpcy5vc3Ryb3Zza3lAb3JhY2xlLmNvbT7CwXgEEwECACIFAlH8
 CgsCGwMGCwkIBwMCBhUIAgkKCwQWAgMBAh4BAheAAAoJEIredpCGysGyasEP/j5xApopUf4g
 9Fl3UxZuBx+oduuw3JHqgbGZ2siA3EA4bKwtKq8eT7ekpApn4c0HA8TWTDtgZtLSV5IdH+9z
 JimBDrhLkDI3Zsx2CafL4pMJvpUavhc5mEU8myp4dWCuIylHiWG65agvUeFZYK4P33fGqoaS
 VGx3tsQIAr7MsQxilMfRiTEoYH0WWthhE0YVQzV6kx4wj4yLGYPPBtFqnrapKKC8yFTpgjaK
 jImqWhU9CSUAXdNEs/oKVR1XlkDpMCFDl88vKAuJwugnixjbPFTVPyoC7+4Bm/FnL3iwlJVE
 qIGQRspt09r+datFzPqSbp5Fo/9m4JSvgtPp2X2+gIGgLPWp2ft1NXHHVWP19sPgEsEJXSr9
 tskM8ScxEkqAUuDs6+x/ISX8wa5Pvmo65drN+JWA8EqKOHQG6LUsUdJolFM2i4Z0k40BnFU/
 kjTARjrXW94LwokVy4x+ZYgImrnKWeKac6fMfMwH2aKpCQLlVxdO4qvJkv92SzZz4538az1T
 m+3ekJAimou89cXwXHCFb5WqJcyjDfdQF857vTn1z4qu7udYCuuV/4xDEhslUq1+GcNDjAhB
 nNYPzD+SvhWEsrjuXv+fDONdJtmLUpKs4Jtak3smGGhZsqpcNv8nQzUGDQZjuCSmDqW8vn2o
 hWwveNeRTkxh+2x1Qb3GT46uzsFNBFH8CgsBEADGC/yx5ctcLQlB9hbq7KNqCDyZNoYu1HAB
 Hal3MuxPfoGKObEktawQPQaSTB5vNlDxKihezLnlT/PKjcXC2R1OjSDinlu5XNGc6mnky03q
 yymUPyiMtWhBBftezTRxWRslPaFWlg/h/Y1iDuOcklhpr7K1h1jRPCrf1yIoxbIpDbffnuyz
 kuto4AahRvBU4Js4sU7f/btU+h+e0AcLVzIhTVPIz7PM+Gk2LNzZ3/on4dnEc/qd+ZZFlOQ4
 KDN/hPqlwA/YJsKzAPX51L6Vv344pqTm6Z0f9M7YALB/11FO2nBB7zw7HAUYqJeHutCwxm7i
 BDNt0g9fhviNcJzagqJ1R7aPjtjBoYvKkbwNu5sWDpQ4idnsnck4YT6ctzN4I+6lfkU8zMzC
 gM2R4qqUXmxFIS4Bee+gnJi0Pc3KcBYBZsDK44FtM//5Cp9DrxRQOh19kNHBlxkmEb8kL/pw
 XIDcEq8MXzPBbxwHKJ3QRWRe5jPNpf8HCjnZz0XyJV0/4M1JvOua7IZftOttQ6KnM4m6WNIZ
 2ydg7dBhDa6iv1oKdL7wdp/rCulVWn8R7+3cRK95SnWiJ0qKDlMbIN8oGMhHdin8cSRYdmHK
 kTnvSGJNlkis5a+048o0C6jI3LozQYD/W9wq7MvgChgVQw1iEOB4u/3FXDEGulRVko6xCBU4
 SQARAQABwsFfBBgBAgAJBQJR/AoLAhsMAAoJEIredpCGysGyfvMQAIywR6jTqix6/fL0Ip8G
 jpt3uk//QNxGJE3ZkUNLX6N786vnEJvc1beCu6EwqD1ezG9fJKMl7F3SEgpYaiKEcHfoKGdh
 30B3Hsq44vOoxR6zxw2B/giADjhmWTP5tWQ9548N4VhIZMYQMQCkdqaueSL+8asp8tBNP+TJ
 PAIIANYvJaD8xA7sYUXGTzOXDh2THWSvmEWWmzok8er/u6ZKdS1YmZkUy8cfzrll/9hiGCTj
 u3qcaOM6i/m4hqtvsI1cOORMVwjJF4+IkC5ZBoeRs/xW5zIBdSUoC8L+OCyj5JETWTt40+lu
 qoqAF/AEGsNZTrwHJYu9rbHH260C0KYCNqmxDdcROUqIzJdzDKOrDmebkEVnxVeLJBIhYZUd
 t3Iq9hdjpU50TA6sQ3mZxzBdfRgg+vaj2DsJqI5Xla9QGKD+xNT6v14cZuIMZzO7w0DoojM4
 ByrabFsOQxGvE0w9Dch2BDSI2Xyk1zjPKxG1VNBQVx3flH37QDWpL2zlJikW29Ws86PHdthh
 Fm5PY8YtX576DchSP6qJC57/eAAe/9ztZdVAdesQwGb9hZHJc75B+VNm4xrh/PJO6c1THqdQ
 19WVJ+7rDx3PhVncGlbAOiiiE3NOFPJ1OQYxPKtpBUukAlOTnkKE6QcA4zckFepUkfmBV1wM
 Jg6OxFYd01z+a+oL
Message-ID: <50298859-0d0e-6eb0-029b-30df2a4ecd63@oracle.com>
Date: Mon, 13 Jul 2020 11:52:01 -0400
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20200702182136.GA3511@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
Content-Language: en-US
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9681
 signatures=668680
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 suspectscore=0
 spamscore=0
 mlxlogscore=999 bulkscore=0 malwarescore=0 mlxscore=0 phishscore=0
 adultscore=0 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2006250000 definitions=main-2007130117
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9681
 signatures=668680
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 suspectscore=0
 priorityscore=1501
 bulkscore=0 adultscore=0 lowpriorityscore=0 phishscore=0 spamscore=0
 impostorscore=0 malwarescore=0 mlxlogscore=999 clxscore=1011 mlxscore=0
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2006250000
 definitions=main-2007130117
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 7/2/20 2:21 PM, Anchal Agarwal wrote:
> From: Munehisa Kamata <kamatam@amazon.com>
>
> Guest hibernation is different from xen suspend/resume/live migration.
> Xen save/restore does not use pm_ops as is needed by guest hibernation.
> Hibernation in a guest follows the ACPI path and is guest initiated; the
> hibernation image is saved within the guest, as compared to the other
> modes, which are xen toolstack assisted and where image creation/storage
> is in control of the hypervisor/host machine.
> To differentiate between Xen suspend and PM hibernation, keep track
> of the on-going suspend mode, mainly by using a new PM notifier.
> Introduce simple functions which help to know the on-going suspend mode
> so that other Xen-related code can behave differently according to the
> current suspend mode.
> Since Xen suspend doesn't have a corresponding PM event, its main logic
> is modified to acquire pm_mutex and set the current mode.
>
> Though acquiring pm_mutex is still the right thing to do, we may
> see a deadlock if PM hibernation is interrupted by Xen suspend.
> PM hibernation depends on the xenwatch thread to process xenbus state
> transactions, but the thread will sleep waiting for pm_mutex, which is
> already held by the PM hibernation context in this scenario. Xen shutdown
> code may need some changes to avoid the issue.
>
> [Anchal Agarwal: Changelog]:
>  RFC v1->v2: Code refactoring
>  v1->v2:     Remove unused functions for PM SUSPEND/PM hibernation
>
> Signed-off-by: Anchal Agarwal <anchalag@amazon.com>
> Signed-off-by: Munehisa Kamata <kamatam@amazon.com>
> ---
>  drivers/xen/manage.c  | 60 +++++++++++++++++++++++++++++++
>  include/xen/xen-ops.h |  1 +
>  2 files changed, 61 insertions(+)
>
> diff --git a/drivers/xen/manage.c b/drivers/xen/manage.c
> index cd046684e0d1..69833fd6cfd1 100644
> --- a/drivers/xen/manage.c
> +++ b/drivers/xen/manage.c
> @@ -14,6 +14,7 @@
>  #include <linux/freezer.h>
>  #include <linux/syscore_ops.h>
>  #include <linux/export.h>
> +#include <linux/suspend.h>
>
>  #include <xen/xen.h>
>  #include <xen/xenbus.h>
> @@ -40,6 +41,20 @@ enum shutdown_state {
>  /* Ignore multiple shutdown requests. */
>  static enum shutdown_state shutting_down = SHUTDOWN_INVALID;
>
> +enum suspend_modes {
> +	NO_SUSPEND = 0,
> +	XEN_SUSPEND,
> +	PM_HIBERNATION,
> +};
> +
> +/* Protected by pm_mutex */
> +static enum suspend_modes suspend_mode = NO_SUSPEND;
> +
> +bool xen_is_xen_suspend(void)


Weren't you going to call this pv suspend? (And also --- is this suspend
or hibernation? Your commit messages and cover letter talk about fixing
hibernation).


> +{
> +	return suspend_mode == XEN_SUSPEND;
> +}
> +



> +
> +static int xen_pm_notifier(struct notifier_block *notifier,
> +			unsigned long pm_event, void *unused)
> +{
> +	switch (pm_event) {
> +	case PM_SUSPEND_PREPARE:
> +	case PM_HIBERNATION_PREPARE:
> +	case PM_RESTORE_PREPARE:
> +		suspend_mode = PM_HIBERNATION;


Do you ever use this mode? It seems to me all you care about is whether
or not we are doing XEN_SUSPEND. And so perhaps suspend_mode could
become a boolean. And then maybe even drop it altogether, because you
should be able to key off (shutting_down == SHUTDOWN_SUSPEND).


> +		break;
> +	case PM_POST_SUSPEND:
> +	case PM_POST_RESTORE:
> +	case PM_POST_HIBERNATION:
> +		/* Set back to the default */
> +		suspend_mode = NO_SUSPEND;
> +		break;
> +	default:
> +		pr_warn("Receive unknown PM event 0x%lx\n", pm_event);
> +		return -EINVAL;
> +	}
> +
> +	return 0;
> +};



> +static int xen_setup_pm_notifier(void)
> +{
> +	if (!xen_hvm_domain())
> +		return -ENODEV;


I forgot --- what did we decide about non-x86 (i.e. ARM)?


And PVH dom0.


-boris





From xen-devel-bounces@lists.xenproject.org Mon Jul 13 17:03:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jul 2020 17:03:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jv1ri-0001PW-3P; Mon, 13 Jul 2020 17:03:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Fjqj=AY=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jv1rg-0001PR-Uz
 for xen-devel@lists.xenproject.org; Mon, 13 Jul 2020 17:03:40 +0000
X-Inumbo-ID: ccb20ba8-c52a-11ea-b7bb-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ccb20ba8-c52a-11ea-b7bb-bc764e2007e4;
 Mon, 13 Jul 2020 17:03:40 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=y7CFwtj47kXWSAoFXxs1Pea+RooZIyw6E/4AKAnqBZo=; b=pMOgaKnQ/wckiFDPyYzWlXy7Q
 1KPDv5EzMQydYoFfomiOpaaUDhgrllym7fnTr4YYB9WbCvoLUtqmWlJOP+t4953IpgJGEk4SHJ+A2
 FEm0fPptQ2QS+vMbHkG029mJBInoc2hhzWOnnydNrOiNGFqzBHh9czsrA8x7+cUqdpmS8=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jv1rg-0002me-1a; Mon, 13 Jul 2020 17:03:40 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jv1rf-00089L-PY; Mon, 13 Jul 2020 17:03:39 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jv1rf-0007w5-Ox; Mon, 13 Jul 2020 17:03:39 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151863-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 151863: tolerable all pass - PUSHED
X-Osstest-Failures: xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=165f3afbfc3db70fcfdccad07085cde0a03c858b
X-Osstest-Versions-That: xen=02d69864b51a4302a148c28d6d391238a6778b4b
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 13 Jul 2020 17:03:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151863 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151863/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  165f3afbfc3db70fcfdccad07085cde0a03c858b
baseline version:
 xen                  02d69864b51a4302a148c28d6d391238a6778b4b

Last test of basis   151802  2020-07-10 17:00:25 Z    2 days
Testing same since   151863  2020-07-13 14:00:24 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ian Jackson <ian.jackson@eu.citrix.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   02d69864b5..165f3afbfc  165f3afbfc3db70fcfdccad07085cde0a03c858b -> smoke


From xen-devel-bounces@lists.xenproject.org Mon Jul 13 17:26:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jul 2020 17:26:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jv2Dj-0003J3-Vr; Mon, 13 Jul 2020 17:26:27 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Fjqj=AY=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jv2Di-0003Iy-Ri
 for xen-devel@lists.xenproject.org; Mon, 13 Jul 2020 17:26:26 +0000
X-Inumbo-ID: f9ee8076-c52d-11ea-bb8b-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f9ee8076-c52d-11ea-bb8b-bc764e2007e4;
 Mon, 13 Jul 2020 17:26:24 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=P8cEfFeEw+0UigrDzvNOXy/+pTRZvRyF3vdkrQpIpt8=; b=zbRxsfytSnEv2YY2Aba2KpTiZ
 oXVKgNExel6WsUjNt7lk+LuWfu9e4+iOges5JkPvxkW/YoXwyT7R5ifaTVCZH6asA4Z+MVli/fbAl
 iVGXiAJkhxXDsHnX5Z11I7Q4Btk6WaNZN0fIkYnF062HlDNr0abXFbH6AD7aeFnEFtL4o=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jv2Dg-0003FX-31; Mon, 13 Jul 2020 17:26:24 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jv2Df-0001cS-Ns; Mon, 13 Jul 2020 17:26:23 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jv2Df-0007vp-N9; Mon, 13 Jul 2020 17:26:23 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151856-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 151856: regressions - FAIL
X-Osstest-Failures: linux-linus:test-arm64-arm64-libvirt-xsm:guest-start/debian.repeat:fail:regression
 linux-linus:build-i386-pvops:kernel-build:fail:regression
 linux-linus:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:allowable
 linux-linus:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-pair:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-examine:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: linux=11ba468877bb23f28956a35e896356252d63c983
X-Osstest-Versions-That: linux=1b5044021070efa3259f3e9548dc35d1eb6aa844
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 13 Jul 2020 17:26:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151856 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151856/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-libvirt-xsm 16 guest-start/debian.repeat fail REGR. vs. 151214
 build-i386-pvops              6 kernel-build             fail REGR. vs. 151214

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds    16 guest-start/debian.repeat fail REGR. vs. 151214

Tests which did not succeed, but are not blocking:
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-examine       1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 151214
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 151214
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 151214
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 151214
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 151214
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 151214
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                11ba468877bb23f28956a35e896356252d63c983
baseline version:
 linux                1b5044021070efa3259f3e9548dc35d1eb6aa844

Last test of basis   151214  2020-06-18 02:27:46 Z   25 days
Failing since        151236  2020-06-19 19:10:35 Z   23 days   38 attempts
Testing same since   151856  2020-07-13 02:24:43 Z    0 days    1 attempts

------------------------------------------------------------
717 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             fail    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         blocked 
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 37860 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Jul 13 19:46:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jul 2020 19:46:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jv4PC-0006Rj-QC; Mon, 13 Jul 2020 19:46:26 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=52To=AY=oracle.com=boris.ostrovsky@srs-us1.protection.inumbo.net>)
 id 1jv4PC-0006Re-3w
 for xen-devel@lists.xenproject.org; Mon, 13 Jul 2020 19:46:26 +0000
X-Inumbo-ID: 88cded64-c541-11ea-92a2-12813bfff9fa
Received: from userp2120.oracle.com (unknown [156.151.31.85])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 88cded64-c541-11ea-92a2-12813bfff9fa;
 Mon, 13 Jul 2020 19:46:25 +0000 (UTC)
Received: from pps.filterd (userp2120.oracle.com [127.0.0.1])
 by userp2120.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 06DJgqUw018314;
 Mon, 13 Jul 2020 19:45:49 GMT
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com;
 h=subject : to :
 references : from : message-id : date : mime-version : in-reply-to :
 content-type : content-transfer-encoding; s=corp-2020-01-29;
 bh=/NOAgD8eugJROilkvt1yb3FiBOxsBZAAo+loaRjbXIQ=;
 b=lxUjQUezzTHbjv2PKH7fkacbn0jBsQ+PeCiUU88pjX3DXY3oSg9AApLFZGiJ+Dl+1gqB
 oMv2nSC+u/kZF++mLtmdHz7E4CLjaS9QXG8MyyQEyRDrLZ7AJZ4x17LszrhITiCS0UpT
 u9cksQmygGs+Ft0/I3SlnEdA9vCZMlfzf8UuvpflCVLvmUDOWpzN9fjPfATNGEl0EfMS
 6QgvfTeC3qcYajz5+Ts5BKihav9cS+RfK+qCzzYf6KR0PRWM6uSNpdJGDIcahQNUjxIC
 z0BnZIJwFMjVMfp1cZydZ88AsiUW6Dtra2C2hIDleRZhrNPqkMHrOgstLdLnR5rpN7BD 0Q== 
Received: from userp3030.oracle.com (userp3030.oracle.com [156.151.31.80])
 by userp2120.oracle.com with ESMTP id 32762n91gc-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=FAIL);
 Mon, 13 Jul 2020 19:45:49 +0000
Received: from pps.filterd (userp3030.oracle.com [127.0.0.1])
 by userp3030.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 06DJgcft104509;
 Mon, 13 Jul 2020 19:43:49 GMT
Received: from userv0122.oracle.com (userv0122.oracle.com [156.151.31.75])
 by userp3030.oracle.com with ESMTP id 327qbw4gh2-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Mon, 13 Jul 2020 19:43:49 +0000
Received: from abhmp0014.oracle.com (abhmp0014.oracle.com [141.146.116.20])
 by userv0122.oracle.com (8.14.4/8.14.4) with ESMTP id 06DJhdle025941;
 Mon, 13 Jul 2020 19:43:39 GMT
Received: from [10.39.206.45] (/10.39.206.45)
 by default (Oracle Beehive Gateway v4.0)
 with ESMTP ; Mon, 13 Jul 2020 12:43:38 -0700
Subject: Re: [PATCH v2 00/11] Fix PM hibernation in Xen guests
To: "Agarwal, Anchal" <anchalag@amazon.com>,
 "tglx@linutronix.de" <tglx@linutronix.de>,
 "mingo@redhat.com" <mingo@redhat.com>, "bp@alien8.de" <bp@alien8.de>,
 "hpa@zytor.com" <hpa@zytor.com>, "x86@kernel.org" <x86@kernel.org>,
 "jgross@suse.com" <jgross@suse.com>,
 "linux-pm@vger.kernel.org" <linux-pm@vger.kernel.org>,
 "linux-mm@kvack.org" <linux-mm@kvack.org>,
 "Kamata, Munehisa" <kamatam@amazon.com>,
 "sstabellini@kernel.org" <sstabellini@kernel.org>,
 "konrad.wilk@oracle.com" <konrad.wilk@oracle.com>,
 "roger.pau@citrix.com" <roger.pau@citrix.com>,
 "axboe@kernel.dk" <axboe@kernel.dk>,
 "davem@davemloft.net" <davem@davemloft.net>,
 "rjw@rjwysocki.net" <rjw@rjwysocki.net>,
 "len.brown@intel.com" <len.brown@intel.com>, "pavel@ucw.cz" <pavel@ucw.cz>,
 "peterz@infradead.org" <peterz@infradead.org>,
 "Valentin, Eduardo" <eduval@amazon.com>, "Singh, Balbir"
 <sblbir@amazon.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 "vkuznets@redhat.com" <vkuznets@redhat.com>,
 "netdev@vger.kernel.org" <netdev@vger.kernel.org>,
 "linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
 "Woodhouse, David" <dwmw@amazon.co.uk>,
 "benh@kernel.crashing.org" <benh@kernel.crashing.org>
References: <cover.1593665947.git.anchalag@amazon.com>
 <324020A7-996F-4CF8-A2F4-46957CEA5F0C@amazon.com>
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Autocrypt: addr=boris.ostrovsky@oracle.com; keydata=
 xsFNBFH8CgsBEAC0KiOi9siOvlXatK2xX99e/J3OvApoYWjieVQ9232Eb7GzCWrItCzP8FUV
 PQg8rMsSd0OzIvvjbEAvaWLlbs8wa3MtVLysHY/DfqRK9Zvr/RgrsYC6ukOB7igy2PGqZd+M
 MDnSmVzik0sPvB6xPV7QyFsykEgpnHbvdZAUy/vyys8xgT0PVYR5hyvhyf6VIfGuvqIsvJw5
 C8+P71CHI+U/IhsKrLrsiYHpAhQkw+Zvyeml6XSi5w4LXDbF+3oholKYCkPwxmGdK8MUIdkM
 d7iYdKqiP4W6FKQou/lC3jvOceGupEoDV9botSWEIIlKdtm6C4GfL45RD8V4B9iy24JHPlom
 woVWc0xBZboQguhauQqrBFooHO3roEeM1pxXjLUbDtH4t3SAI3gt4dpSyT3EvzhyNQVVIxj2
 FXnIChrYxR6S0ijSqUKO0cAduenhBrpYbz9qFcB/GyxD+ZWY7OgQKHUZMWapx5bHGQ8bUZz2
 SfjZwK+GETGhfkvNMf6zXbZkDq4kKB/ywaKvVPodS1Poa44+B9sxbUp1jMfFtlOJ3AYB0WDS
 Op3d7F2ry20CIf1Ifh0nIxkQPkTX7aX5rI92oZeu5u038dHUu/dO2EcuCjl1eDMGm5PLHDSP
 0QUw5xzk1Y8MG1JQ56PtqReO33inBXG63yTIikJmUXFTw6lLJwARAQABzTNCb3JpcyBPc3Ry
 b3Zza3kgKFdvcmspIDxib3Jpcy5vc3Ryb3Zza3lAb3JhY2xlLmNvbT7CwXgEEwECACIFAlH8
 CgsCGwMGCwkIBwMCBhUIAgkKCwQWAgMBAh4BAheAAAoJEIredpCGysGyasEP/j5xApopUf4g
 9Fl3UxZuBx+oduuw3JHqgbGZ2siA3EA4bKwtKq8eT7ekpApn4c0HA8TWTDtgZtLSV5IdH+9z
 JimBDrhLkDI3Zsx2CafL4pMJvpUavhc5mEU8myp4dWCuIylHiWG65agvUeFZYK4P33fGqoaS
 VGx3tsQIAr7MsQxilMfRiTEoYH0WWthhE0YVQzV6kx4wj4yLGYPPBtFqnrapKKC8yFTpgjaK
 jImqWhU9CSUAXdNEs/oKVR1XlkDpMCFDl88vKAuJwugnixjbPFTVPyoC7+4Bm/FnL3iwlJVE
 qIGQRspt09r+datFzPqSbp5Fo/9m4JSvgtPp2X2+gIGgLPWp2ft1NXHHVWP19sPgEsEJXSr9
 tskM8ScxEkqAUuDs6+x/ISX8wa5Pvmo65drN+JWA8EqKOHQG6LUsUdJolFM2i4Z0k40BnFU/
 kjTARjrXW94LwokVy4x+ZYgImrnKWeKac6fMfMwH2aKpCQLlVxdO4qvJkv92SzZz4538az1T
 m+3ekJAimou89cXwXHCFb5WqJcyjDfdQF857vTn1z4qu7udYCuuV/4xDEhslUq1+GcNDjAhB
 nNYPzD+SvhWEsrjuXv+fDONdJtmLUpKs4Jtak3smGGhZsqpcNv8nQzUGDQZjuCSmDqW8vn2o
 hWwveNeRTkxh+2x1Qb3GT46uzsFNBFH8CgsBEADGC/yx5ctcLQlB9hbq7KNqCDyZNoYu1HAB
 Hal3MuxPfoGKObEktawQPQaSTB5vNlDxKihezLnlT/PKjcXC2R1OjSDinlu5XNGc6mnky03q
 yymUPyiMtWhBBftezTRxWRslPaFWlg/h/Y1iDuOcklhpr7K1h1jRPCrf1yIoxbIpDbffnuyz
 kuto4AahRvBU4Js4sU7f/btU+h+e0AcLVzIhTVPIz7PM+Gk2LNzZ3/on4dnEc/qd+ZZFlOQ4
 KDN/hPqlwA/YJsKzAPX51L6Vv344pqTm6Z0f9M7YALB/11FO2nBB7zw7HAUYqJeHutCwxm7i
 BDNt0g9fhviNcJzagqJ1R7aPjtjBoYvKkbwNu5sWDpQ4idnsnck4YT6ctzN4I+6lfkU8zMzC
 gM2R4qqUXmxFIS4Bee+gnJi0Pc3KcBYBZsDK44FtM//5Cp9DrxRQOh19kNHBlxkmEb8kL/pw
 XIDcEq8MXzPBbxwHKJ3QRWRe5jPNpf8HCjnZz0XyJV0/4M1JvOua7IZftOttQ6KnM4m6WNIZ
 2ydg7dBhDa6iv1oKdL7wdp/rCulVWn8R7+3cRK95SnWiJ0qKDlMbIN8oGMhHdin8cSRYdmHK
 kTnvSGJNlkis5a+048o0C6jI3LozQYD/W9wq7MvgChgVQw1iEOB4u/3FXDEGulRVko6xCBU4
 SQARAQABwsFfBBgBAgAJBQJR/AoLAhsMAAoJEIredpCGysGyfvMQAIywR6jTqix6/fL0Ip8G
 jpt3uk//QNxGJE3ZkUNLX6N786vnEJvc1beCu6EwqD1ezG9fJKMl7F3SEgpYaiKEcHfoKGdh
 30B3Hsq44vOoxR6zxw2B/giADjhmWTP5tWQ9548N4VhIZMYQMQCkdqaueSL+8asp8tBNP+TJ
 PAIIANYvJaD8xA7sYUXGTzOXDh2THWSvmEWWmzok8er/u6ZKdS1YmZkUy8cfzrll/9hiGCTj
 u3qcaOM6i/m4hqtvsI1cOORMVwjJF4+IkC5ZBoeRs/xW5zIBdSUoC8L+OCyj5JETWTt40+lu
 qoqAF/AEGsNZTrwHJYu9rbHH260C0KYCNqmxDdcROUqIzJdzDKOrDmebkEVnxVeLJBIhYZUd
 t3Iq9hdjpU50TA6sQ3mZxzBdfRgg+vaj2DsJqI5Xla9QGKD+xNT6v14cZuIMZzO7w0DoojM4
 ByrabFsOQxGvE0w9Dch2BDSI2Xyk1zjPKxG1VNBQVx3flH37QDWpL2zlJikW29Ws86PHdthh
 Fm5PY8YtX576DchSP6qJC57/eAAe/9ztZdVAdesQwGb9hZHJc75B+VNm4xrh/PJO6c1THqdQ
 19WVJ+7rDx3PhVncGlbAOiiiE3NOFPJ1OQYxPKtpBUukAlOTnkKE6QcA4zckFepUkfmBV1wM
 Jg6OxFYd01z+a+oL
Message-ID: <c6688a0c-7fec-97d2-3dcc-e160e97206e6@oracle.com>
Date: Mon, 13 Jul 2020 15:43:33 -0400
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <324020A7-996F-4CF8-A2F4-46957CEA5F0C@amazon.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-US
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9681
 signatures=668680
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 suspectscore=0
 spamscore=0
 mlxlogscore=999 bulkscore=0 malwarescore=0 mlxscore=0 phishscore=0
 adultscore=0 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2006250000 definitions=main-2007130141
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9681
 signatures=668680
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 mlxscore=0
 malwarescore=0 spamscore=0
 clxscore=1015 priorityscore=1501 mlxlogscore=999 lowpriorityscore=0
 bulkscore=0 suspectscore=0 phishscore=0 adultscore=0 impostorscore=0
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2006250000
 definitions=main-2007130141
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 7/10/20 2:17 PM, Agarwal, Anchal wrote:
> Gentle ping on this series. 


Have you tested save/restore?


-boris





From xen-devel-bounces@lists.xenproject.org Mon Jul 13 21:07:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jul 2020 21:07:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jv5er-0004h8-Hd; Mon, 13 Jul 2020 21:06:41 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Fjqj=AY=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jv5eq-0004go-6u
 for xen-devel@lists.xenproject.org; Mon, 13 Jul 2020 21:06:40 +0000
X-Inumbo-ID: ba2bf9ae-c54c-11ea-92aa-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ba2bf9ae-c54c-11ea-92aa-12813bfff9fa;
 Mon, 13 Jul 2020 21:06:32 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=PGLazznxF/D7yIb4PeJEMyDmDNmULnfjW9hpIc/01Fs=; b=G5HRJ4xhG8qGXMDNC+RREMyxn
 SJSC0rudfQN/HHBd5mLrTELoYtFQFEc5fB7MH40sAapFBQzH9TRjFVKa0ny2AMNGZgxPdmmN15d2O
 LDUpHr6RhPdGHVhfQe4OTib9jqpw6kLXNbf5v9cWkJ4rQew/qCVzx65HmMiYt1YjmxPDk=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jv5eh-0007st-MH; Mon, 13 Jul 2020 21:06:31 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jv5eg-0003yg-W1; Mon, 13 Jul 2020 21:06:31 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jv5eg-00064z-VP; Mon, 13 Jul 2020 21:06:30 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151855-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 151855: regressions - FAIL
X-Osstest-Failures: qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:redhat-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:redhat-install:fail:regression
 qemu-mainline:test-amd64-i386-freebsd10-i386:guest-start:fail:regression
 qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-start:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:heisenbug
 qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
 qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: qemuu=d34498309cff7560ac90c422c56e3137e6a64b19
X-Osstest-Versions-That: qemuu=9e3903136d9acde2fb2dd9e967ba928050a6cb4a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 13 Jul 2020 21:06:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151855 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151855/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemuu-ovmf-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-qemuu-nested-intel 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-ovmf-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-qemuu-nested-amd 10 debian-hvm-install  fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-qemuu-rhel6hvm-amd 10 redhat-install     fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-debianhvm-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-qemuu-rhel6hvm-intel 10 redhat-install   fail REGR. vs. 151065
 test-amd64-i386-freebsd10-i386 11 guest-start            fail REGR. vs. 151065
 test-amd64-i386-freebsd10-amd64 11 guest-start           fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-win7-amd64 10 windows-install  fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-win7-amd64 10 windows-install   fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-ws16-amd64 10 windows-install  fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-ws16-amd64 10 windows-install   fail REGR. vs. 151065

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-rtds     18 guest-localmigrate/x10     fail pass in 151849

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 151065
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 151065
 test-armhf-armhf-xl-rtds     16 guest-start/debian.repeat    fail  like 151065
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                d34498309cff7560ac90c422c56e3137e6a64b19
baseline version:
 qemuu                9e3903136d9acde2fb2dd9e967ba928050a6cb4a

Last test of basis   151065  2020-06-12 22:27:51 Z   30 days
Failing since        151101  2020-06-14 08:32:51 Z   29 days   40 attempts
Testing same since   151849  2020-07-12 16:37:51 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ahmed Karaman <ahmedkhaledkaraman@gmail.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.m.mail@gmail.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Boettcher <alexander.boettcher@genode-labs.com>
  Alexander Bulekov <alxndr@bu.edu>
  Alexey Krasikov <alex-krasikov@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Allan Peramaki <aperamak@pp1.inet.fi>
  Andrew Jones <drjones@redhat.com>
  Ani Sinha <ani.sinha@nutanix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Ard Biesheuvel <ardb@kernel.org>
  Artyom Tarasenko <atar4qemu@gmail.com>
  Aurelien Jarno <aurelien@aurel32.net>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Beata Michalska <beata.michalska@linaro.org>
  Bin Meng <bin.meng@windriver.com>
  Cameron Esfahani <dirty@apple.com>
  Catherine A. Frederick <chocola@animebitch.es>
  Cathy Zhang <cathy.zhang@intel.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christophe de Dinechin <dinechin@redhat.com>
  Cindy Lu <lulu@redhat.com>
  Claudio Fontana <cfontana@suse.de>
  Cleber Rosa <crosa@redhat.com>
  Colin Xu <colin.xu@intel.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniele Buono <dbuono@linux.vnet.ibm.com>
  David Carlier <devnexen@gmail.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Derek Su <dereksu@qnap.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Ed Robbins <E.J.C.Robbins@kent.ac.uk>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Emilio G. Cota <cota@braap.org>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Evgeny Yakovlev <eyakovlev@virtuozzo.com>
  fangying <fangying1@huawei.com>
  Farhan Ali <alifm@linux.ibm.com>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Geoffrey McRae <geoff@hostfission.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Giuseppe Musacchio <thatlemon@gmail.com>
  Greg Kurz <groug@kaod.org>
  Guenter Roeck <linux@roeck-us.net>
  Gustavo Romero <gromero@linux.ibm.com>
  Halil Pasic <pasic@linux.ibm.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Ian Jiang <ianjiang.ict@gmail.com>
  Igor Mammedov <imammedo@redhat.com>
  Jan Kiszka <jan.kiszka@siemens.com>
  Janne Grunau <j@jannau.net>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason Wang <jasowang@redhat.com>
  Jay Zhou <jianjay.zhou@huawei.com>
  Jean-Christophe Dubois <jcd@tribudubois.net>
  Jessica Clarke <jrtc27@jrtc27.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jingqi Liu <jingqi.liu@intel.com>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Joseph Myers <joseph@codesourcery.com>
  Julio Faracco <jcfaracco@gmail.com>
  Keqian Zhu <zhukeqian1@huawei.com>
  Kevin Wolf <kwolf@redhat.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Leif Lindholm <leif@nuviainc.com>
  Leonid Bloch <lb.workbox@gmail.com>
  Leonid Bloch <lbloch@janustech.com>
  Li Qiang <liq3ea@gmail.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  lichun <lichun@ruijie.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  Like Xu <like.xu@linux.intel.com>
  Lingfeng Yang <lfy@google.com>
  Lingshan zhu <lingshan.zhu@intel.com>
  Liran Alon <liran.alon@oracle.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Lukas Straub <lukasstraub2@web.de>
  Luwei Kang <luwei.kang@intel.com>
  Maciej S. Szmigiero <maciej.szmigiero@oracle.com>
  Magnus Damm <magnus.damm@gmail.com>
  Mao Zhongyi <maozhongyi@cmss.chinamobile.com>
  Marcelo Tosatti <mtosatti@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Mario Smarduch <msmarduch@digitalocean.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Masahiro Yamada <masahiroy@kernel.org>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Maxime Coquelin <maxime.coquelin@redhat.com>
  Menno Lageman <menno.lageman@oracle.com>
  Michael Rolnik <mrolnik@gmail.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michele Denber <denber@mindspring.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Durrant <pdurrant@amazon.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Radoslaw Biernacki <rad@semihalf.com>
  Raphael Norwitz <raphael.norwitz@nutanix.com>
  Richard Henderson <richard.henderson@linaro.org>
  Richard W.M. Jones <rjones@redhat.com>
  Riku Voipio <riku.voipio@linaro.org>
  Robert Foley <robert.foley@linaro.org>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Roman Kagan <rkagan@virtuozzo.com>
  Roman Kagan <rvkagan@yandex-team.ru>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Sarah Harris <S.E.Harris@kent.ac.uk>
  Sebastian Rasmussen <sebras@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Tao Xu <tao3.xu@intel.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tiwei Bie <tiwei.bie@intel.com>
  Tong Ho <tong.ho@xilinx.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  WangBowen <bowen.wang@intel.com>
  Wei Huang <wei.huang2@amd.com>
  Wei Wang <wei.w.wang@intel.com>
  Xie Yongji <xieyongji@bytedance.com>
  Ying Fang <fangying1@huawei.com>
  Yoshinori Sato <ysato@users.sourceforge.jp>
  Yuri Benditovich <yuri.benditovich@daynix.com>
  Zhang Chen <chen.zhang@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 24438 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Jul 13 21:13:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jul 2020 21:13:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jv5lL-0005ZH-Bt; Mon, 13 Jul 2020 21:13:23 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Fjqj=AY=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jv5lK-0005ZC-1S
 for xen-devel@lists.xenproject.org; Mon, 13 Jul 2020 21:13:22 +0000
X-Inumbo-ID: ae122124-c54d-11ea-bb8b-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ae122124-c54d-11ea-bb8b-bc764e2007e4;
 Mon, 13 Jul 2020 21:13:21 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=laf81dCMnqLhyWOADgLo3ZfPx6gkEFZUfMZWpMJL9zM=; b=XNiV4cccs/BbiTIm0b+lBxZKA
 /wL/qnjRRrm4XZ6pXSqB/WVWFkWE8vaKWVxcwkyxe5qVN167wHBERUII/tzQ6Cj5NkoXyNHcq3EUy
 582f432C/ci3BbDfeQslB+WbJWizvidY9llaf2N68dbzNFcl3p8TpjaeP2DhF1taxZr4Q=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jv5lI-00080l-W1; Mon, 13 Jul 2020 21:13:21 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jv5lI-0004EU-LD; Mon, 13 Jul 2020 21:13:20 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jv5lI-0005t4-Kf; Mon, 13 Jul 2020 21:13:20 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151867-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 151867: all pass - PUSHED
X-Osstest-Versions-This: ovmf=d9a4084544134eea50f62e88d79c466ae91f0455
X-Osstest-Versions-That: ovmf=f45e3a4afa65a45ea1a956a7c5e7410ff40190d1
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 13 Jul 2020 21:13:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151867 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151867/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 d9a4084544134eea50f62e88d79c466ae91f0455
baseline version:
 ovmf                 f45e3a4afa65a45ea1a956a7c5e7410ff40190d1

Last test of basis   151821  2020-07-11 06:33:51 Z    2 days
Testing same since   151867  2020-07-13 16:09:22 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Guo Dong <guo.dong@intel.com>
  Marcello Sylvester Bauer <marcello.bauer@9elements.com>
  Patrick Rudolph <patrick.rudolph@9elements.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   f45e3a4afa..d9a4084544  d9a4084544134eea50f62e88d79c466ae91f0455 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Mon Jul 13 22:02:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jul 2020 22:02:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jv6WP-0001Gx-66; Mon, 13 Jul 2020 22:02:01 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1HzM=AY=kernel.org=helgaas@srs-us1.protection.inumbo.net>)
 id 1jv6WO-0001Gs-Cw
 for xen-devel@lists.xenproject.org; Mon, 13 Jul 2020 22:02:00 +0000
X-Inumbo-ID: 7958b66c-c554-11ea-bca7-bc764e2007e4
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7958b66c-c554-11ea-bca7-bc764e2007e4;
 Mon, 13 Jul 2020 22:01:59 +0000 (UTC)
Received: from localhost (mobile-166-175-191-139.mycingular.net
 [166.175.191.139])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id A520620663;
 Mon, 13 Jul 2020 22:01:57 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
 s=default; t=1594677718;
 bh=HSAUHsS9kXD1HIJNE6WsRPhRQVem0B8cFajFbFBR0EI=;
 h=Date:From:To:Cc:Subject:In-Reply-To:From;
 b=MWvKJTjvl/1YEIl1J2bYU68QOMCk3laZW9SifUEehjNOErgOIYQha/bgzSaTFKI2l
 QuIAmIsupUSbzYcZR4NjPAxoi7WbrMXAjLRHTnmV/Yc0VJOJrp/0Y79w5eZfIKt71L
 +1J5CchrY3/xdkH17Ff+7DtiGZ9FgLPSUNKvv7eo=
Date: Mon, 13 Jul 2020 17:01:56 -0500
From: Bjorn Helgaas <helgaas@kernel.org>
To: "Saheed O. Bolarinwa" <refactormyself@gmail.com>
Subject: Re: [RFC PATCH 00/35] Move all PCIBIOS* definitions into arch/x86
Message-ID: <20200713220156.GA284762@bjorn-Precision-5520>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20200713122247.10985-1-refactormyself@gmail.com>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Rich Felker <dalias@libc.org>,
 "Martin K. Petersen" <martin.petersen@oracle.com>, linux-sh@vger.kernel.org,
 linux-pci@vger.kernel.org, linux-nvme@lists.infradead.org,
 Yicong Yang <yangyicong@hisilicon.com>, Keith Busch <kbusch@kernel.org>,
 Realtek linux nic maintainers <nic_swsd@realtek.com>,
 Paul Mackerras <paulus@samba.org>, linux-i2c@vger.kernel.org,
 bcm-kernel-feedback-list@broadcom.com, sparclinux@vger.kernel.org,
 rfi@lists.rocketboards.org, Toan Le <toan@os.amperecomputing.com>,
 Greg Ungerer <gerg@linux-m68k.org>,
 Marek Vasut <marek.vasut+renesas@gmail.com>, Rob Herring <robh@kernel.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Sagi Grimberg <sagi@grimberg.me>,
 Yoshinori Sato <ysato@users.sourceforge.jp>, linux-scsi@vger.kernel.org,
 Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
 Michael Ellerman <mpe@ellerman.id.au>, linux-atm-general@lists.sourceforge.net,
 Russell King <linux@armlinux.org.uk>, Ley Foon Tan <ley.foon.tan@intel.com>,
 Christoph Hellwig <hch@lst.de>, Geert Uytterhoeven <geert@linux-m68k.org>,
 =?utf-8?B?UmFmYcWCIE1pxYJlY2tp?= <zajec5@gmail.com>,
 Chas Williams <3chas3@gmail.com>,
 Benjamin Herrenschmidt <benh@kernel.crashing.org>,
 xen-devel@lists.xenproject.org, Matt Turner <mattst88@gmail.com>,
 linux-mips@vger.kernel.org, linux-kernel-mentees@lists.linuxfoundation.org,
 Kevin Hilman <khilman@baylibre.com>, Guenter Roeck <linux@roeck-us.net>,
 linux-hwmon@vger.kernel.org, Jean Delvare <jdelvare@suse.com>,
 Andrew Donnellan <ajd@linux.ibm.com>, Arnd Bergmann <arnd@arndb.de>,
 Ray Jui <rjui@broadcom.com>, "James E.J. Bottomley" <jejb@linux.ibm.com>,
 Yue Wang <yue.wang@Amlogic.com>, Jens Axboe <axboe@fb.com>,
 Jakub Kicinski <kuba@kernel.org>, linux-m68k@lists.linux-m68k.org,
 Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>,
 Ivan Kokshaysky <ink@jurassic.park.msu.ru>, Michael Buesch <m@bues.ch>,
 skhan@linuxfoundation.org, bjorn@helgaas.com,
 linux-amlogic@lists.infradead.org,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>, Guan Xuetao <gxt@pku.edu.cn>,
 linux-arm-kernel@lists.infradead.org, Richard Henderson <rth@twiddle.net>,
 Juergen Gross <jgross@suse.com>, Michal Simek <monstr@monstr.eu>,
 Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
 Scott Branden <sbranden@broadcom.com>, Bjorn Helgaas <bhelgaas@google.com>,
 Jingoo Han <jingoohan1@gmail.com>, netdev@vger.kernel.org,
 Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>,
 linux-wireless@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-renesas-soc@vger.kernel.org, Brian King <brking@us.ibm.com>,
 Philipp Zabel <p.zabel@pengutronix.de>, linux-alpha@vger.kernel.org,
 Frederic Barrat <fbarrat@linux.ibm.com>,
 Gustavo Pimentel <gustavo.pimentel@synopsys.com>,
 linuxppc-dev@lists.ozlabs.org, "David S. Miller" <davem@davemloft.net>,
 Heiner Kallweit <hkallweit1@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Mon, Jul 13, 2020 at 02:22:12PM +0200, Saheed O. Bolarinwa wrote:
> The goal of this series is to move the definitions of *all* PCIBIOS* from
> include/linux/pci.h into arch/x86 and to limit their use to that
> architecture. All other tree-specific definitions will be left intact.
> Maybe they can be renamed.
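
The class of change under discussion looks roughly like this (a
hypothetical driver fragment for illustration; it is not taken from the
series itself):

```
/* Hypothetical illustration only. Config-space accessors historically
 * returned PCIBIOS_* error codes; with those names confined to
 * arch/x86, callers compare against plain 0 instead. Since
 * PCIBIOS_SUCCESSFUL is defined as 0, the two forms are equivalent. */

/* Before: */
if (pci_read_config_word(pdev, PCI_VENDOR_ID, &vendor)
    != PCIBIOS_SUCCESSFUL)
        return -EIO;

/* After: */
if (pci_read_config_word(pdev, PCI_VENDOR_ID, &vendor) != 0)
        return -EIO;
```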

More comments later, but a few trivial whitespace issues you can clean
up in the meantime.  Don't repost for at least a few days to avoid
spamming everybody.  I found these with:

  $ b4 am -om/ 20200713122247.10985-1-refactormyself@gmail.com
  $ git am m/20200713_refactormyself_move_all_pcibios_definitions_into_arch_x86.mbx

  Applying: atm: Change PCIBIOS_SUCCESSFUL to 0
  .git/rebase-apply/patch:11: trailing whitespace.
	  iadev = INPH_IA_DEV(dev);
  .git/rebase-apply/patch:12: trailing whitespace.
	  for(i=0; i<64; i++)
  .git/rebase-apply/patch:13: trailing whitespace.
	    if ((error = pci_read_config_dword(iadev->pci,
  .git/rebase-apply/patch:16: trailing whitespace, space before tab in indent.
		return error;
  .git/rebase-apply/patch:17: trailing whitespace.
	  writel(0, iadev->reg+IPHASE5575_EXT_RESET);
  warning: squelched 5 whitespace errors
  warning: 10 lines add whitespace errors.
  Applying: atm: Tidy Success/Failure checks
  .git/rebase-apply/patch:13: trailing whitespace.

  .git/rebase-apply/patch:14: trailing whitespace.
	  iadev = INPH_IA_DEV(dev);
  .git/rebase-apply/patch:15: trailing whitespace.
	  for(i=0; i<64; i++)
  .git/rebase-apply/patch:21: trailing whitespace.
	  writel(0, iadev->reg+IPHASE5575_EXT_RESET);
  .git/rebase-apply/patch:22: trailing whitespace.
	  for(i=0; i<64; i++)
  warning: squelched 3 whitespace errors
  warning: 8 lines add whitespace errors.
  Applying: atm: Fix Style ERROR- assignment in if condition
  .git/rebase-apply/patch:12: trailing whitespace.
	  unsigned int pci[64];
  .git/rebase-apply/patch:13: trailing whitespace.

  .git/rebase-apply/patch:14: trailing whitespace.
	  iadev = INPH_IA_DEV(dev);
  .git/rebase-apply/patch:23: trailing whitespace.
	  writel(0, iadev->reg+IPHASE5575_EXT_RESET);
  .git/rebase-apply/patch:32: trailing whitespace.
	  udelay(5);
  warning: squelched 2 whitespace errors
  warning: 7 lines add whitespace errors.
  Applying: PCI: Change PCIBIOS_SUCCESSFUL to 0
  .git/rebase-apply/patch:37: trailing whitespace.
  struct pci_ops apecs_pci_ops =
  .git/rebase-apply/patch:50: trailing whitespace.
  static int
  .git/rebase-apply/patch:59: trailing whitespace.
  struct pci_ops cia_pci_ops =
  .git/rebase-apply/patch:94: trailing whitespace.
  static int
  .git/rebase-apply/patch:103: trailing whitespace.
  struct pci_ops lca_pci_ops =
  warning: squelched 10 whitespace errors
  warning: 15 lines add whitespace errors.


From xen-devel-bounces@lists.xenproject.org Mon Jul 13 22:56:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 13 Jul 2020 22:56:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jv7Mq-0005TJ-FI; Mon, 13 Jul 2020 22:56:12 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=xZO8=AY=runbox.com=m.v.b@srs-us1.protection.inumbo.net>)
 id 1jv7Mo-0005TE-47
 for xen-devel@lists.xenproject.org; Mon, 13 Jul 2020 22:56:11 +0000
X-Inumbo-ID: 08a03a46-c55c-11ea-bb8b-bc764e2007e4
Received: from aibo.runbox.com (unknown [91.220.196.211])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 08a03a46-c55c-11ea-bb8b-bc764e2007e4;
 Mon, 13 Jul 2020 22:56:06 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=runbox.com; 
 s=selector2;
 h=Content-Transfer-Encoding:Content-Type:MIME-Version:Date:
 Message-ID:To:Subject:From; bh=guBFdf7SaZKe9dfYA/vayezyCZQor71TAanVYv2h/UM=;
 b=YknaB173KeflUwvXAuuTi7n5HSiHtbFbEJLXDcjvecM9JMjOt3oYJ/EsiDDUCWSVCH2bQLS7gM
 w5LnVyB9atKRDbE1rIWdCgeynaY6tDpHhasKxNUTfnv6KkhfgZuj9Wg/IDz7FmWwLvfw15z7M+C0D
 clp0AXPra/EE5/FgD0CRjk+waqkFe8t/gjQ5JmRK+//KyAkLAGhmdGhkd7bqGNAmYhrCTVkrRm4DZ
 8jPiM2w8NDG3Pi1maaAEotbwjsrrWXfQQK1DSdJtjGDPxx20KZcjMBvL177BpzSTvLDb9ehrXk1mr
 yceo+Y8YaHwc1d50pvEa0sNvHKWNvC7LNQ9Dw==;
Received: from [10.9.9.74] (helo=submission03.runbox)
 by mailtransmit03.runbox with esmtp (Exim 4.86_2)
 (envelope-from <m.v.b@runbox.com>) id 1jv7Mj-0006Al-Ga
 for xen-devel@lists.xenproject.org; Tue, 14 Jul 2020 00:56:05 +0200
Received: by submission03.runbox with esmtpsa [Authenticated alias (536975)]
 (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1)
 id 1jv7Mi-0002AY-3l; Tue, 14 Jul 2020 00:56:04 +0200
From: "M. Vefa Bicakci" <m.v.b@runbox.com>
Subject: Bug involving %fs register and suspend-to-RAM
To: xen-devel@lists.xenproject.org
Message-ID: <6d4a452c-26fa-f204-16fa-4dc630ca3bea@runbox.com>
Date: Tue, 14 Jul 2020 01:56:02 +0300
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
Content-Type: text/plain; charset=windows-1252; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hello all,

I encountered an unusual bug involving Xen's and Linux's handling of the
%fs register after resuming from suspend-to-RAM (S2R) with Xen 4.8.y,
and I am reaching out to the xen-devel mailing list, because the bug
appears to affect Xen's master branch too.

In summary, Xen uses/overwrites the %fs register during the resume from
S2R in "xen/arch/x86/boot/wakeup.S", and this register's value is not
restored from the saved context during the context switch to dom0 after
the resume.  Furthermore, Linux running in dom0 appears to leak the %fs
register's value to systemd-sleep, which in turn encounters a
segmentation fault when it attempts to access a variable in thread-local
storage. The fault is intermittent: a segfault is observed roughly once
every 10 to 15 suspend and resume cycles.

Here is the problematic code in context from Xen 4.8.y, where the %fs
register is used:

   xen/arch/x86/boot/wakeup.S
   37
   38  1:      # Show some progress if VGA is resumed
   39          movw    $0xb800, %ax
   40          movw    %ax, %fs
   41          movw    $0x0e00 + 'L', %fs:(0x10)
   42
   43          # boot trampoline is under 1M, and shift its start into
   44          # %fs to reference symbols in that area
   45          mov     wakesym(trampoline_seg), %fs
   46          lidt    %fs:bootsym(idt_48)
   47          lgdt    %fs:bootsym(gdt_48)

And here are some logs obtained from an instrumented version of Linux.
(The magic value 0x9b00 originates from line 45 in the code excerpt
above, which was verified by instrumenting Xen.)

   kernel: Enabling non-boot CPUs ...
   kernel: installing Xen timer for CPU 1
   kernel: xxx: save_fsgs: cpu:0 task:systemd-sleep saved fsindex != 0 (0x9b00)
   kernel: xxx: load_seg_legacy: cpu:0 prev_task:systemd-sleep next_task:swapper/0 next_index: 0 <= 3
   kernel: xxx: load_seg_legacy: cpu:0 prev_task:systemd-sleep next_task:swapper/0 next_base == 0. next_index:0x0 CPU has X86_BUG_NULL_SEG
   kernel: xxx: load_seg_legacy: loading segments. cpu:0. prev_task:swapper/0: next_task:systemd-sleep May cause a segfault... next_index: 39680 > 3 (0x9b00)
   ...
   kernel: PM: suspend exit
   kernel: systemd-sleep[7102]: segfault at 10 ip 00007fb79ac283f9 sp 00007ffd7c1dd508 error 4 in libc-2.24.so[7fb79abb1000+1bd000]

I encountered this bug while testing patches that I had cherry-picked to
Qubes OS's Xen 4.8.y tree to reduce power consumption after S2R by
parking unused CPU hyperthreads. Interestingly, if the changes to park
the CPU's hyperthreads are reverted, then the bug cannot be reproduced
(at least in my experience).

Note that this issue appears to affect recent Xen versions as well. The
%fs register continues to be used in "wakeup.S" to report progress on
the console/VGA, as can be seen in the code listing above. With Xen
4.8.y, unconditionally restoring %fs in the function
restore_rest_processor_state (in file xen/arch/x86/acpi/suspend.c) is
sufficient to resolve this issue.
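
The shape of that Xen 4.8.y workaround is roughly the following (a
simplified sketch, not the literal patch at [2]):

```
/* xen/arch/x86/acpi/suspend.c -- sketch */
void restore_rest_processor_state(void)
{
        /* ... existing MSR/segment restore logic ... */

        /* wakeup.S leaves a stale real-mode selector in %fs, so
         * reload the register unconditionally on the resume path. */
        asm volatile ( "mov %0, %%fs" : : "r" (0) );
}
```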

Some of my debugging notes can be found at [1], and a patch that
resolves this issue in Xen 4.8.y (along with cherry-picked patches to
park hyperthreads) can be found at [2].

All this to say, there appears to be a bug that might negatively affect
Xen's master branch as well, and I wanted to bring this to the attention
of the xen-devel mailing list's subscribers, on the recommendation of
Marek Marczykowski-Górecki of the Qubes OS team.

If there is any further information or assistance I can provide, I would
be happy to help. I would also appreciate any feedback regarding the
last patch at [2] (i.e.,
"patch-0009-x86-acpi-suspend-Unconditionally-restore-fs.patch").

Thank you,

Vefa

[1] https://github.com/QubesOS/qubes-issues/issues/5210
[2] https://github.com/m-v-b/qubes-vmm-xen/commit/9678563a9d099db2879af9c534dbc655ea086d17



From xen-devel-bounces@lists.xenproject.org Tue Jul 14 04:45:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jul 2020 04:45:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvCoS-0002iM-9X; Tue, 14 Jul 2020 04:45:04 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=xDy+=AZ=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jvCoR-0002i2-1K
 for xen-devel@lists.xenproject.org; Tue, 14 Jul 2020 04:45:03 +0000
X-Inumbo-ID: c3d60414-c58c-11ea-bca7-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c3d60414-c58c-11ea-bca7-bc764e2007e4;
 Tue, 14 Jul 2020 04:44:56 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=LHdYzZe2SwEgjur/SLjSTLfOyYzwxcu5B1kuqoHAVkY=; b=tR7jYbzcv/jd4x48+NzQvX7te
 LrrOmPGTwOuRFJ5UUy95tNuQffFFS3qArWpTg0FrSNvIRQq0+kEIRzAZT0UPoEQkZGXiqfrjblzQg
 B5+9J6JsmkxGhMTETRSg7U2dd2p2TgnTFJyog9WKJIwW7fim8/1wdK3kKuMy6SYhabNq4=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jvCoJ-0002nj-FX; Tue, 14 Jul 2020 04:44:55 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jvCoJ-0002ob-37; Tue, 14 Jul 2020 04:44:55 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jvCoJ-0004jC-2V; Tue, 14 Jul 2020 04:44:55 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151869-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 151869: regressions - FAIL
X-Osstest-Failures: xen-unstable:build-arm64-pvops:kernel-build:fail:regression
 xen-unstable:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
 xen-unstable:test-arm64-arm64-examine:build-check(1):blocked:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
 xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=165f3afbfc3db70fcfdccad07085cde0a03c858b
X-Osstest-Versions-That: xen=02d69864b51a4302a148c28d6d391238a6778b4b
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 14 Jul 2020 04:44:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151869 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151869/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-arm64-pvops             6 kernel-build             fail REGR. vs. 151854

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-examine      1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 151854
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 151854
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 151854
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 151854
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 151854
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 151854
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 151854
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 151854
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop             fail like 151854
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  165f3afbfc3db70fcfdccad07085cde0a03c858b
baseline version:
 xen                  02d69864b51a4302a148c28d6d391238a6778b4b

Last test of basis   151854  2020-07-13 01:51:12 Z    1 days
Testing same since   151869  2020-07-13 17:06:25 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ian Jackson <ian.jackson@eu.citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            fail    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     blocked 
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 165f3afbfc3db70fcfdccad07085cde0a03c858b
Author: Ian Jackson <ian.jackson@eu.citrix.com>
Date:   Mon Jul 13 14:50:33 2020 +0100

    Config.mk: Unnail versions (for unstable branch)
    
    Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>

commit 3df0424e69549ca21613fad3654509c35b2a3e94
Author: Ian Jackson <ian.jackson@eu.citrix.com>
Date:   Mon Jul 13 14:50:06 2020 +0100

    Branch Xen 4.15: Change version numbers
    
    And rerun autogen.sh.  No changes other than to versions.
    
    Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Tue Jul 14 07:44:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jul 2020 07:44:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvFbh-00014W-1A; Tue, 14 Jul 2020 07:44:05 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dU3A=AZ=samsung.com=b.zolnierkie@srs-us1.protection.inumbo.net>)
 id 1jvFbf-00014R-So
 for xen-devel@lists.xenproject.org; Tue, 14 Jul 2020 07:44:04 +0000
X-Inumbo-ID: c64a9110-c5a5-11ea-bb8b-bc764e2007e4
Received: from mailout2.w1.samsung.com (unknown [210.118.77.12])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c64a9110-c5a5-11ea-bb8b-bc764e2007e4;
 Tue, 14 Jul 2020 07:43:58 +0000 (UTC)
Received: from eucas1p1.samsung.com (unknown [182.198.249.206])
 by mailout2.w1.samsung.com (KnoxPortal) with ESMTP id
 20200714074357euoutp0270a64a58d71dba43cb8b1967f4f50391~hjrHDrHcS2905529055euoutp02x
 for <xen-devel@lists.xenproject.org>; Tue, 14 Jul 2020 07:43:57 +0000 (GMT)
DKIM-Filter: OpenDKIM Filter v2.11.0 mailout2.w1.samsung.com
 20200714074357euoutp0270a64a58d71dba43cb8b1967f4f50391~hjrHDrHcS2905529055euoutp02x
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=samsung.com;
 s=mail20170921; t=1594712637;
 bh=9FFWm4rsrO3DZUrBCqVdTmUf2OLNkpr0y/XL43tw84k=;
 h=Subject:To:Cc:From:Date:In-Reply-To:References:From;
 b=CcUqPNyhsViA/Idl1LClhP81wfgRxwG48w1XXx/i47+bGNdzUWbfeDfo7ZoTwUYIf
 5Q8tW7viwh8NYjZoyUfuao0h1zEsD5fd/sIAIZx7p/stGQ2rQmmriHRIpWz/todPm4
 zwxCClxnEVrsXRlcT6SCyyvERNLG5UamSunGorK8=
Received: from eusmges3new.samsung.com (unknown [203.254.199.245]) by
 eucas1p2.samsung.com (KnoxPortal) with ESMTP id
 20200714074357eucas1p25141581b6dd71d92ae3ad81afa90e594~hjrG4ByGa2165521655eucas1p2J;
 Tue, 14 Jul 2020 07:43:57 +0000 (GMT)
Received: from eucas1p1.samsung.com ( [182.198.249.206]) by
 eusmges3new.samsung.com (EUCPMTA) with SMTP id B2.23.06318.C326D0F5; Tue, 14
 Jul 2020 08:43:56 +0100 (BST)
Received: from eusmtrp2.samsung.com (unknown [182.198.249.139]) by
 eucas1p1.samsung.com (KnoxPortal) with ESMTPA id
 20200714074356eucas1p15a1f57757ad1c40e9f7531a1fecd1f6d~hjrGdqDfm0685606856eucas1p1E;
 Tue, 14 Jul 2020 07:43:56 +0000 (GMT)
Received: from eusmgms2.samsung.com (unknown [182.198.249.180]) by
 eusmtrp2.samsung.com (KnoxPortal) with ESMTP id
 20200714074356eusmtrp26e887373ffa755e3bc2d1486c78c216f~hjrGc44M72697626976eusmtrp23;
 Tue, 14 Jul 2020 07:43:56 +0000 (GMT)
X-AuditID: cbfec7f5-38bff700000018ae-25-5f0d623ca237
Received: from eusmtip2.samsung.com ( [203.254.199.222]) by
 eusmgms2.samsung.com (EUCPMTA) with SMTP id 6B.2F.06017.C326D0F5; Tue, 14
 Jul 2020 08:43:56 +0100 (BST)
Received: from [106.120.51.71] (unknown [106.120.51.71]) by
 eusmtip2.samsung.com (KnoxPortal) with ESMTPA id
 20200714074356eusmtip2b68aaccf1ec7da02052cb9604d6b0f60~hjrGDQU2G1674116741eusmtip2u;
 Tue, 14 Jul 2020 07:43:56 +0000 (GMT)
Subject: Re: [PATCH v2] efi: avoid error message when booting under Xen
To: Juergen Gross <jgross@suse.com>
From: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
Message-ID: <0a5494ff-431d-5667-680f-77987cff2984@samsung.com>
Date: Tue, 14 Jul 2020 09:43:55 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:60.0) Gecko/20100101
 Thunderbird/60.8.0
MIME-Version: 1.0
In-Reply-To: <20200710142253.28070-1-jgross@suse.com>
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-CMS-MailID: 20200714074356eucas1p15a1f57757ad1c40e9f7531a1fecd1f6d
X-Msg-Generator: CA
Content-Type: text/plain; charset="utf-8"
X-RootMTR: 20200710142443eucas1p120b3d15e1f7be8bb95ffb9d83875fa70
X-EPHeader: CA
CMS-TYPE: 201P
X-CMS-RootMailID: 20200710142443eucas1p120b3d15e1f7be8bb95ffb9d83875fa70
References: <CGME20200710142443eucas1p120b3d15e1f7be8bb95ffb9d83875fa70@eucas1p1.samsung.com>
 <20200710142253.28070-1-jgross@suse.com>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: linux-fbdev@vger.kernel.org, linux-efi@vger.kernel.org,
 linux-kernel@vger.kernel.org, dri-devel@lists.freedesktop.org,
 Peter Jones <pjones@redhat.com>, xen-devel@lists.xenproject.org,
 Ard Biesheuvel <ardb@kernel.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>


On 7/10/20 4:22 PM, Juergen Gross wrote:
> efifb_probe() will issue an error message in case the kernel is booted
> as Xen dom0 from UEFI as EFI_MEMMAP won't be set in this case. Avoid
> that message by calling efi_mem_desc_lookup() only if EFI_MEMMAP is set.
> 
> Fixes: 38ac0287b7f4 ("fbdev/efifb: Honour UEFI memory map attributes when mapping the FB")
> Signed-off-by: Juergen Gross <jgross@suse.com>

Acked-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>

Best regards,
--
Bartlomiej Zolnierkiewicz
Samsung R&D Institute Poland
Samsung Electronics

> ---
>  drivers/video/fbdev/efifb.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/drivers/video/fbdev/efifb.c b/drivers/video/fbdev/efifb.c
> index 65491ae74808..e57c00824965 100644
> --- a/drivers/video/fbdev/efifb.c
> +++ b/drivers/video/fbdev/efifb.c
> @@ -453,7 +453,7 @@ static int efifb_probe(struct platform_device *dev)
>  	info->apertures->ranges[0].base = efifb_fix.smem_start;
>  	info->apertures->ranges[0].size = size_remap;
>  
> -	if (efi_enabled(EFI_BOOT) &&
> +	if (efi_enabled(EFI_MEMMAP) &&
>  	    !efi_mem_desc_lookup(efifb_fix.smem_start, &md)) {
>  		if ((efifb_fix.smem_start + efifb_fix.smem_len) >
>  		    (md.phys_addr + (md.num_pages << EFI_PAGE_SHIFT))) {
> 



From xen-devel-bounces@lists.xenproject.org Tue Jul 14 07:52:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jul 2020 07:52:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvFjM-0001vR-OW; Tue, 14 Jul 2020 07:52:00 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=xDy+=AZ=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jvFjL-0001v1-Eo
 for xen-devel@lists.xenproject.org; Tue, 14 Jul 2020 07:51:59 +0000
X-Inumbo-ID: e223f628-c5a6-11ea-92f0-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e223f628-c5a6-11ea-92f0-12813bfff9fa;
 Tue, 14 Jul 2020 07:51:53 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=MIuy21U/TxhU/xAjai52YNGwetfoabhz1b4VrgiI+AI=; b=K+wm2xq3LtWP1XAZE8c+gy7fC
 Dv925vB1UvPzOZbmP83+Cidt3XuO6vkNnP6zG3iuq2ZBxgNXMZ1dpUTAFPdSuEUjj43FguBgbSVUV
 XR4t9QO1lKXvujtOQ4bT9sOQlmg0VvyDjTEElWkH6ohIHeT+1v5au9m1vPSwiIpa8GTGw=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jvFjF-00072K-4T; Tue, 14 Jul 2020 07:51:53 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jvFjE-0000AR-Q7; Tue, 14 Jul 2020 07:51:52 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jvFjE-0006nF-PN; Tue, 14 Jul 2020 07:51:52 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151870-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 151870: regressions - FAIL
X-Osstest-Failures: linux-linus:test-arm64-arm64-libvirt-xsm:guest-start/debian.repeat:fail:regression
 linux-linus:build-i386-pvops:kernel-build:fail:regression
 linux-linus:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-examine:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-pair:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: linux=11ba468877bb23f28956a35e896356252d63c983
X-Osstest-Versions-That: linux=1b5044021070efa3259f3e9548dc35d1eb6aa844
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 14 Jul 2020 07:51:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151870 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151870/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-libvirt-xsm 16 guest-start/debian.repeat fail REGR. vs. 151214
 build-i386-pvops              6 kernel-build             fail REGR. vs. 151214

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-examine       1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 151214
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 151214
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 151214
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 151214
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 151214
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 151214
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                11ba468877bb23f28956a35e896356252d63c983
baseline version:
 linux                1b5044021070efa3259f3e9548dc35d1eb6aa844

Last test of basis   151214  2020-06-18 02:27:46 Z   26 days
Failing since        151236  2020-06-19 19:10:35 Z   24 days   39 attempts
Testing same since   151856  2020-07-13 02:24:43 Z    1 days    2 attempts

------------------------------------------------------------
717 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             fail    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         blocked 
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 37860 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Jul 14 08:06:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jul 2020 08:06:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvFxc-0003Tc-Hf; Tue, 14 Jul 2020 08:06:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=bg5W=AZ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jvFxb-0003TX-CG
 for xen-devel@lists.xenproject.org; Tue, 14 Jul 2020 08:06:43 +0000
X-Inumbo-ID: f365284c-c5a8-11ea-8496-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f365284c-c5a8-11ea-8496-bc764e2007e4;
 Tue, 14 Jul 2020 08:06:42 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id C9D98AD76;
 Tue, 14 Jul 2020 08:06:43 +0000 (UTC)
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH v2] x86emul: avoid assembler warning about .type not taking
 effect in test harness
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Message-ID: <42875d48-10e4-cc88-70ac-8979fea2493c@suse.com>
Date: Tue, 14 Jul 2020 10:06:41 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Paul Durrant <paul@xen.org>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

gcc re-orders top-level blocks by default when optimizing. This
re-ordering results in all our .type directives being emitted to the
assembly file first, followed by gcc's. The assembler warns about
attempts to change the type of a symbol when it was already set (and
when there's no intervening setting to "notype").

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: Refine description to no longer claim a gcc change to be the reason.

--- a/tools/tests/x86_emulator/Makefile
+++ b/tools/tests/x86_emulator/Makefile
@@ -295,4 +295,9 @@ x86-emulate.o cpuid.o test_x86_emulator.
 x86-emulate.o: x86_emulate/x86_emulate.c
 x86-emulate.o: HOSTCFLAGS += -D__XEN_TOOLS__
 
+# In order for our custom .type assembler directives to reliably land after
+# gcc's, we need to keep it from re-ordering top-level constructs.
+$(call cc-option-add,HOSTCFLAGS-toplevel,HOSTCC,-fno-toplevel-reorder)
+test_x86_emulator.o: HOSTCFLAGS += $(HOSTCFLAGS-toplevel)
+
 test_x86_emulator.o: $(addsuffix .h,$(TESTCASES)) $(addsuffix -opmask.h,$(OPMASK))


From xen-devel-bounces@lists.xenproject.org Tue Jul 14 10:25:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jul 2020 10:25:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvI7U-0006mY-7O; Tue, 14 Jul 2020 10:25:04 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CaI7=AZ=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jvI7S-0006mT-QD
 for xen-devel@lists.xenproject.org; Tue, 14 Jul 2020 10:25:02 +0000
X-Inumbo-ID: 4638e6cc-c5bc-11ea-8496-bc764e2007e4
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4638e6cc-c5bc-11ea-8496-bc764e2007e4;
 Tue, 14 Jul 2020 10:25:01 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1594722302;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:in-reply-to;
 bh=EKq76JGCpX9JyuiBmD1RFZqcz2aaCNExo1bFMJfbehY=;
 b=D/FtVt7MOXBW7J2wbQM5HXhb5ZS2C+0RyxVNYa7K3RI3lgVd4M56UzNb
 bIlNAUix+182K5kKGDILbf+7640YUS1Xbqc5KeuAFbS1OYx6hoNXqqkRx
 cp+yy2ixQv1Py3Z8RU3L+ONFiE0q4P5GQ32UTjKiHDAfcTqLKOUulLmQ4 A=;
Authentication-Results: esa3.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: hRnzf6DI4KKkJSoRozZQ7nLmEmjCFGKgNJGqkc2ENhFFjm4CKsl4YzXWqDgQFfRFS3NnKPmz1D
 BU5ZfVEt7aWYlfWvWqQUdHOSKOgS8BdTDlTn/FcBt9G+1m4bFq3kW+4bGIcEdMWafGI0mqcdoe
 OIECEMFCT7hZ8osW1prnkM55uousRj3hP4FZ3I/kHx8MwINGTHcieb46u5r4xW9vpa3Tg7tmEE
 BKtu+RNP7bNGtxdxUA8ElveA2KCW+6nuThjq0H9DzPXO6JWR5S54qnwH9+trkiqCVRDx06k3e3
 eRs=
X-SBRS: 2.7
X-MesageID: 22313854
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,350,1589256000"; d="scan'208";a="22313854"
Date: Tue, 14 Jul 2020 12:24:48 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Subject: Re: [PATCH v2 2/7] x86/mce: add compat struct checking for
 XEN_MC_inject_v2
Message-ID: <20200714102448.GH7191@Air-de-Roger>
References: <bb6a96c6-b6b1-76ff-f9db-10bec0fb4ab1@suse.com>
 <007679c8-84d5-2e91-e48e-68746741fb45@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
In-Reply-To: <007679c8-84d5-2e91-e48e-68746741fb45@suse.com>
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Paul Durrant <paul@xen.org>,
 George Dunlap <George.Dunlap@eu.citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Wed, Jul 01, 2020 at 12:25:48PM +0200, Jan Beulich wrote:
> 84e364f2eda2 ("x86: add CMCI software injection interface") merely made
> sure things would build, without any concern about things actually
> working:
> - despite the addition of xenctl_bitmap to xlat.lst, the resulting macro
>   wasn't invoked anywhere (which would have led to recognizing that the
>   structure appeared to have no fully compatible layout, despite the use
>   of a 64-bit handle),
> - the interface struct itself was neither added to xlat.lst (and the
>   resulting macro then invoked) nor was any manual checking of
>   individual fields added.
> 
> Adjust compat header generation logic to retain XEN_GUEST_HANDLE_64(),
> which is intentionally laid out to be compatible between different size
> guests. Invoke the missing checking (implicitly through CHECK_mc).
> 
> No change in the resulting generated code.
> 
> Fixes: 84e364f2eda2 ("x86: add CMCI software injection interface")
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

LGTM, just one question below.

> --- a/xen/tools/compat-build-header.py
> +++ b/xen/tools/compat-build-header.py
> @@ -19,6 +19,7 @@ pats = [
>   [ r"(^|[^\w])xen_?(\w*)_compat_t([^\w]|$$)", r"\1compat_\2_t\3" ],
>   [ r"(^|[^\w])XEN_?", r"\1COMPAT_" ],
>   [ r"(^|[^\w])Xen_?", r"\1Compat_" ],
> + [ r"(^|[^\w])COMPAT_HANDLE_64\(", r"\1XEN_GUEST_HANDLE_64(" ],

Why do you need to match with the opening parenthesis?

Is this for the #ifndef XEN_GUEST_HANDLE_64 instances? Don't they need
to also be replaced with the compat types?
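As a rough stand-alone sketch of the substitution loop (trimmed to just the two patterns relevant here, and assuming they apply in list order, as the script's loop does):

```python
import re

# Two of the patterns from compat-build-header.py, as quoted in the hunk
# above, applied in list order.
pats = [
    [r"(^|[^\w])XEN_?", r"\1COMPAT_"],
    [r"(^|[^\w])COMPAT_HANDLE_64\(", r"\1XEN_GUEST_HANDLE_64("],
]

def xlat(line):
    # Apply each pattern once, in order, to the line.
    for pat, repl in pats:
        line = re.sub(pat, repl, line)
    return line

# The trailing "\(" restricts the new rule to macro invocations:
print(xlat("COMPAT_HANDLE_64(uint64) h;"))   # XEN_GUEST_HANDLE_64(uint64) h;
print(xlat("random COMPAT_HANDLE_64 text"))  # unchanged: no "(" follows
```

Without the trailing "\(" the rule would also fire on bare mentions of COMPAT_HANDLE_64 (e.g. in preprocessor conditionals), not just on invocations.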

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue Jul 14 10:47:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jul 2020 10:47:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvITD-0008VM-PY; Tue, 14 Jul 2020 10:47:31 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=bg5W=AZ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jvITC-0008VH-PS
 for xen-devel@lists.xenproject.org; Tue, 14 Jul 2020 10:47:30 +0000
X-Inumbo-ID: 695681de-c5bf-11ea-8496-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 695681de-c5bf-11ea-8496-bc764e2007e4;
 Tue, 14 Jul 2020 10:47:29 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id AC26FACF9;
 Tue, 14 Jul 2020 10:47:30 +0000 (UTC)
Subject: Re: [PATCH v7 03/15] x86/mm: rewrite virt_to_xen_l*e
To: Hongyan Xia <hx242@xen.org>
References: <cover.1590750232.git.hongyxia@amazon.com>
 <fd5d98198d9539b232a570a83e7a24be2407e739.1590750232.git.hongyxia@amazon.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <826d5a28-c391-dd30-d588-6f730b454c18@suse.com>
Date: Tue, 14 Jul 2020 12:47:27 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <fd5d98198d9539b232a570a83e7a24be2407e739.1590750232.git.hongyxia@amazon.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, julien@xen.org,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, xen-devel@lists.xenproject.org,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 29.05.2020 13:11, Hongyan Xia wrote:
> From: Wei Liu <wei.liu2@citrix.com>
> 
> Rewrite those functions to use the new APIs. Modify their callers to unmap
> the pointer returned. Since alloc_xen_pagetable_new() is almost never
> useful unless accompanied by page clearing and a mapping, introduce a
> helper alloc_map_clear_xen_pt() for this sequence.
> 
> Note that the change of virt_to_xen_l1e() also requires vmap_to_mfn() to
> unmap the page, which requires domain_page.h header in vmap.
> 
> Signed-off-by: Wei Liu <wei.liu2@citrix.com>
> Signed-off-by: Hongyan Xia <hongyxia@amazon.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>
with two further small adjustments:

> --- a/xen/arch/x86/mm.c
> +++ b/xen/arch/x86/mm.c
> @@ -4948,8 +4948,28 @@ void free_xen_pagetable_new(mfn_t mfn)
>          free_xenheap_page(mfn_to_virt(mfn_x(mfn)));
>  }
>  
> +void *alloc_map_clear_xen_pt(mfn_t *pmfn)
> +{
> +    mfn_t mfn = alloc_xen_pagetable_new();
> +    void *ret;
> +
> +    if ( mfn_eq(mfn, INVALID_MFN) )
> +        return NULL;
> +
> +    if ( pmfn )
> +        *pmfn = mfn;
> +    ret = map_domain_page(mfn);
> +    clear_page(ret);
> +
> +    return ret;
> +}
> +
>  static DEFINE_SPINLOCK(map_pgdir_lock);
>  
> +/*
> + * For virt_to_xen_lXe() functions, they take a virtual address and return a
> + * pointer to Xen's LX entry. Caller needs to unmap the pointer.
> + */
>  static l3_pgentry_t *virt_to_xen_l3e(unsigned long v)

May I suggest s/virtual/linear/ to at least make the new comment
correct?

> --- a/xen/include/asm-x86/page.h
> +++ b/xen/include/asm-x86/page.h
> @@ -291,7 +291,13 @@ void copy_page_sse2(void *, const void *);
>  #define pfn_to_paddr(pfn)   __pfn_to_paddr(pfn)
>  #define paddr_to_pfn(pa)    __paddr_to_pfn(pa)
>  #define paddr_to_pdx(pa)    pfn_to_pdx(paddr_to_pfn(pa))
> -#define vmap_to_mfn(va)     _mfn(l1e_get_pfn(*virt_to_xen_l1e((unsigned long)(va))))
> +
> +#define vmap_to_mfn(va) ({                                                  \
> +        const l1_pgentry_t *pl1e_ = virt_to_xen_l1e((unsigned long)(va));   \
> +        mfn_t mfn_ = l1e_get_mfn(*pl1e_);                                   \
> +        unmap_domain_page(pl1e_);                                           \
> +        mfn_; })

Just like is already the case in domain_page_map_to_mfn() I think
you want to add "BUG_ON(!pl1e)" here to limit the impact of any
problem to DoS (rather than a possible privilege escalation).

Or actually, considering the only case where virt_to_xen_l1e()
would return NULL, returning INVALID_MFN here would likely be
even more robust. There appears to be just a single caller, which
would need adjusting to cope with an error coming back. In fact,
it already ASSERT()s, despite NULL currently never coming back
from vmap_to_page(). I think the loop there would better be

    for ( i = 0; i < pages; i++ )
    {
        struct page_info *page = vmap_to_page(va + i * PAGE_SIZE);

        if ( page )
            page_list_add(page, &pg_list);
        else
            printk_once(...);
    }

Thoughts?

Jan


From xen-devel-bounces@lists.xenproject.org Tue Jul 14 11:17:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jul 2020 11:17:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvIwF-0002cR-0g; Tue, 14 Jul 2020 11:17:31 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=xDy+=AZ=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jvIwD-0002c7-RT
 for xen-devel@lists.xenproject.org; Tue, 14 Jul 2020 11:17:29 +0000
X-Inumbo-ID: 9767f932-c5c3-11ea-b7bb-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9767f932-c5c3-11ea-b7bb-bc764e2007e4;
 Tue, 14 Jul 2020 11:17:23 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=0ZAN8ZvqimyJJt0Soudf+92kF8H4XTgzYz/w1s6G/Ec=; b=VN8l5jZnvRu3e0AR+LJoXAiNK
 hpfs9K4p4grKr4i8exHjWtXDI1E7h1w5YbNR1jABGHkjnqTvpEYgM0ep6Z7lSzCWcG6MfvD+P7S3j
 03CQNwcfaI59iZjXuMbm4WmlMHH+XVm3sim1kz3HkhweEjRvCMjRTrhxz44b2wbICSCtg=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jvIw7-0003Mo-Gm; Tue, 14 Jul 2020 11:17:23 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jvIw7-0007Od-4Y; Tue, 14 Jul 2020 11:17:23 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jvIw7-00089g-42; Tue, 14 Jul 2020 11:17:23 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151891-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 151891: tolerable all pass - PUSHED
X-Osstest-Failures: xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=1969576661f3e34318e9b0a61a1a38f9a5aee16f
X-Osstest-Versions-That: xen=165f3afbfc3db70fcfdccad07085cde0a03c858b
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 14 Jul 2020 11:17:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151891 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151891/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  1969576661f3e34318e9b0a61a1a38f9a5aee16f
baseline version:
 xen                  165f3afbfc3db70fcfdccad07085cde0a03c858b

Last test of basis   151863  2020-07-13 14:00:24 Z    0 days
Testing same since   151891  2020-07-14 09:00:47 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   165f3afbfc..1969576661  1969576661f3e34318e9b0a61a1a38f9a5aee16f -> smoke


From xen-devel-bounces@lists.xenproject.org Tue Jul 14 11:19:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jul 2020 11:19:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvIxs-0002jJ-Cw; Tue, 14 Jul 2020 11:19:12 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CaI7=AZ=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jvIxq-0002jD-Sd
 for xen-devel@lists.xenproject.org; Tue, 14 Jul 2020 11:19:10 +0000
X-Inumbo-ID: d5ce9619-c5c3-11ea-92fb-12813bfff9fa
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d5ce9619-c5c3-11ea-92fb-12813bfff9fa;
 Tue, 14 Jul 2020 11:19:09 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1594725549;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:in-reply-to;
 bh=bmMPQd+N8T9YBrSTYgCcoOlLXK1DFjRkp+a//IvzCNQ=;
 b=YTz8UlXSGNV2OcXX/cvKYb15Ik22+D6TInVYEF82cjkWN0y5D+caPbX5
 Bves7EjjyvVT+r0XEqHqAk8e22BwONqtH5PVf0FjlhIXndn2hg70rB353
 bP8MEOTPLigTaWvd15e1P2YOMQaZ38/gl6+ROuTKAyyaXE7dc5dO4/EQH I=;
Authentication-Results: esa6.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: dKvtN1W44Xs5OPJjtNdDWaeoDcRLea9VIVmnYr3g6VwRy0IdJIg9VyRU7czC6pkBts3fj5ptR/
 elVIgypijoulLpE33wL6LdkUb3HLI/0Mg7/vQQBQ2SVrzMlE0gX5VhohOg9+X2n8bAZOoE44Sl
 jtjMcQT0QnbzDEgG4fEJDjWZBavMRBtTJx+5gHrI5aat7emVe438Oh1grqKsfVUTJBqz9qxNH9
 7seSSqxxnd9vwZVTB0SUBxV01VHz0bZiwR7ZmI6jv7QwCfUeNRWKuRuuUGglz+a4Qs4FskQnOR
 62A=
X-SBRS: 2.7
X-MesageID: 22653112
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,350,1589256000"; d="scan'208";a="22653112"
Date: Tue, 14 Jul 2020 13:19:00 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Subject: Re: [PATCH v2 3/7] x86/mce: bring hypercall subop compat checking in
 sync again
Message-ID: <20200714111900.GI7191@Air-de-Roger>
References: <bb6a96c6-b6b1-76ff-f9db-10bec0fb4ab1@suse.com>
 <5d53a2e3-716c-2213-96e5-9d37371c482c@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
In-Reply-To: <5d53a2e3-716c-2213-96e5-9d37371c482c@suse.com>
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Paul Durrant <paul@xen.org>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Wed, Jul 01, 2020 at 12:26:54PM +0200, Jan Beulich wrote:
> Use a typedef in struct xen_mc also for the two subops "manually"
> translated in the handler, just for consistency. No functional
> change.

I'm slightly puzzled by the fact that mc_fetch is marked as needing
checking while mc_physcpuinfo is marked as needing translation.
Shouldn't both be marked as needing translation? (Since both need to
handle a guest pointer using XEN_GUEST_HANDLE.)

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue Jul 14 11:20:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jul 2020 11:20:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvIz7-0003UM-Nx; Tue, 14 Jul 2020 11:20:29 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CaI7=AZ=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jvIz6-0003UG-Qq
 for xen-devel@lists.xenproject.org; Tue, 14 Jul 2020 11:20:28 +0000
X-Inumbo-ID: 04d12412-c5c4-11ea-92fb-12813bfff9fa
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 04d12412-c5c4-11ea-92fb-12813bfff9fa;
 Tue, 14 Jul 2020 11:20:28 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1594725628;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:content-transfer-encoding:in-reply-to;
 bh=Vt9c3nTJZogbN6qLCKM6gSJ5dFdCX/WrcqZiM8ZlO/Q=;
 b=Lfa+UxTxVWi9xzOOWLn7o+5sW9s/vVeKhqHuOT2QfNCTa54DtRDfDq25
 dv4xrjLeahw0DwA5SaklIqG5dIuBXhWpA3dsq5NQ6VRSNUB8ogKZqeFqS
 ilw/7qlv9/B4C4ynVtGlumsjomJ+TQzjgr97PZzoaswupeXdPKgBlSVKE g=;
Authentication-Results: esa2.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: VE1OCxTXnGzaJJbvQ+p0n95YcaiXxN0ZIjl3t4yUpWCq9PADJ/cTx7bmHcNJ5uqe7WckrCG/oe
 j76OX6XFfhxJ2JBe7AX9ZB178Moi2dSWZukiVNtST6EY3pbYIy2of2fLggu12l+Nc3KWe8RBw5
 QbNAa2sIR9OQtT+gVWn86hQH0sM6BeyA9YexosbrfOnvcks6/YqkE6N1bAWsjSb1bRYwcCyVtL
 XbF2qRBjzskXlJqFIgqWWlP88PQwqlb4smsH559v8nEigid53qkBUG4YMZzIe6ri4ubVGtEyrI
 pYk=
X-SBRS: 2.7
X-MesageID: 22326596
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,350,1589256000"; d="scan'208";a="22326596"
Date: Tue, 14 Jul 2020 13:19:57 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Subject: Re: [PATCH v2 4/7] x86/dmop: add compat struct checking for
 XEN_DMOP_map_mem_type_to_ioreq_server
Message-ID: <20200714111957.GJ7191@Air-de-Roger>
References: <bb6a96c6-b6b1-76ff-f9db-10bec0fb4ab1@suse.com>
 <8bb00b11-7004-51c4-c679-83da922d085b@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <8bb00b11-7004-51c4-c679-83da922d085b@suse.com>
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Paul Durrant <paul@xen.org>,
 George Dunlap <George.Dunlap@eu.citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Wed, Jul 01, 2020 at 12:27:16PM +0200, Jan Beulich wrote:
> This was forgotten when the subop was added.
> 
> Also take the opportunity and move the dm_op_relocate_memory entry in
> xlat.lst to its designated place.
> 
> No change in the resulting generated code.
> 
> Fixes: ca2b511d3ff4 ("x86/ioreq server: add DMOP to map guest ram with p2m_ioreq_server to an ioreq server")
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks.


From xen-devel-bounces@lists.xenproject.org Tue Jul 14 11:44:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jul 2020 11:44:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvJMT-0005H3-Iv; Tue, 14 Jul 2020 11:44:37 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=bg5W=AZ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jvJMS-0005Gy-RM
 for xen-devel@lists.xenproject.org; Tue, 14 Jul 2020 11:44:36 +0000
X-Inumbo-ID: 63abc1f6-c5c7-11ea-92fd-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 63abc1f6-c5c7-11ea-92fd-12813bfff9fa;
 Tue, 14 Jul 2020 11:44:35 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 305BAB009;
 Tue, 14 Jul 2020 11:44:37 +0000 (UTC)
Subject: Re: [PATCH v2 2/7] x86/mce: add compat struct checking for
 XEN_MC_inject_v2
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <bb6a96c6-b6b1-76ff-f9db-10bec0fb4ab1@suse.com>
 <007679c8-84d5-2e91-e48e-68746741fb45@suse.com>
 <20200714102448.GH7191@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <89e25262-7ad2-1b82-ed35-b4564ee40f62@suse.com>
Date: Tue, 14 Jul 2020 13:44:35 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20200714102448.GH7191@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Paul Durrant <paul@xen.org>,
 George Dunlap <George.Dunlap@eu.citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 14.07.2020 12:24, Roger Pau Monné wrote:
> On Wed, Jul 01, 2020 at 12:25:48PM +0200, Jan Beulich wrote:
>> --- a/xen/tools/compat-build-header.py
>> +++ b/xen/tools/compat-build-header.py
>> @@ -19,6 +19,7 @@ pats = [
>>   [ r"(^|[^\w])xen_?(\w*)_compat_t([^\w]|$$)", r"\1compat_\2_t\3" ],
>>   [ r"(^|[^\w])XEN_?", r"\1COMPAT_" ],
>>   [ r"(^|[^\w])Xen_?", r"\1Compat_" ],
>> + [ r"(^|[^\w])COMPAT_HANDLE_64\(", r"\1XEN_GUEST_HANDLE_64(" ],
> 
> Why do you need to match with the opening parenthesis?
> 
> Is this for the #ifndef XEN_GUEST_HANDLE_64 instances? Don't they need
> to also be replaced with the compat types?

Well, I wanted to be as strict as possible: along with matching
a non-identifier character at the beginning, I also wanted the
other side of the name to be delimited.
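
A minimal sketch of the substitution under discussion (the pattern is
taken from the quoted compat-build-header.py hunk; the sample input
lines are illustrative):

```python
import re

# Pattern from the quoted compat-build-header.py hunk: the trailing
# "\(" delimits the macro name on the right, so longer identifiers
# that merely start with COMPAT_HANDLE_64 are left untouched.
pat = r"(^|[^\w])COMPAT_HANDLE_64\("
rep = r"\1XEN_GUEST_HANDLE_64("

print(re.sub(pat, rep, "typedef COMPAT_HANDLE_64(compat_ulong_t) h;"))
# -> typedef XEN_GUEST_HANDLE_64(compat_ulong_t) h;

print(re.sub(pat, rep, "COMPAT_HANDLE_64_EXT(x)"))  # no match, unchanged
```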

Jan


From xen-devel-bounces@lists.xenproject.org Tue Jul 14 11:47:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jul 2020 11:47:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvJOz-0005Np-0N; Tue, 14 Jul 2020 11:47:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=bg5W=AZ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jvJOx-0005Nj-Pg
 for xen-devel@lists.xenproject.org; Tue, 14 Jul 2020 11:47:11 +0000
X-Inumbo-ID: c091a750-c5c7-11ea-8496-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c091a750-c5c7-11ea-8496-bc764e2007e4;
 Tue, 14 Jul 2020 11:47:11 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 1762CAB3D;
 Tue, 14 Jul 2020 11:47:13 +0000 (UTC)
Subject: Re: [PATCH v2 3/7] x86/mce: bring hypercall subop compat checking in
 sync again
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <bb6a96c6-b6b1-76ff-f9db-10bec0fb4ab1@suse.com>
 <5d53a2e3-716c-2213-96e5-9d37371c482c@suse.com>
 <20200714111900.GI7191@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <f82edef5-ee75-b24c-0a24-03ed38486882@suse.com>
Date: Tue, 14 Jul 2020 13:47:11 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20200714111900.GI7191@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Paul Durrant <paul@xen.org>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 14.07.2020 13:19, Roger Pau Monné wrote:
> On Wed, Jul 01, 2020 at 12:26:54PM +0200, Jan Beulich wrote:
>> Use a typedef in struct xen_mc also for the two subops "manually"
>> translated in the handler, just for consistency. No functional
>> change.
> 
> I'm slightly puzzled by the fact that mc_fetch is marked as needing
> checking while mc_physcpuinfo is marked as needing translation.
> Shouldn't both be marked as needing translation? (since both need to
> handle a guest pointer using XEN_GUEST_HANDLE)

I guess I'm confused - I see an exclamation mark on both respective
lines in xlat.lst.
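
For reference, entries in xlat.lst carry a one-character marker: `?`
asks the build to generate a CHECK_ macro (the layout is already
compatible, so only checking is needed), while `!` generates an XLAT_
macro (translation needed). An illustrative sketch of the two forms
(the header path is an assumption, not quoted from the file):

```
?	mcinfo_common		arch-x86/xen-mca.h
!	mc_fetch		arch-x86/xen-mca.h
```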

Jan


From xen-devel-bounces@lists.xenproject.org Tue Jul 14 11:52:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jul 2020 11:52:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvJUB-0006Dd-Ld; Tue, 14 Jul 2020 11:52:35 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=bg5W=AZ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jvJU9-0006DY-Vx
 for xen-devel@lists.xenproject.org; Tue, 14 Jul 2020 11:52:34 +0000
X-Inumbo-ID: 8065f950-c5c8-11ea-92fe-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 8065f950-c5c8-11ea-92fe-12813bfff9fa;
 Tue, 14 Jul 2020 11:52:33 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id DDA2AAB3D;
 Tue, 14 Jul 2020 11:52:34 +0000 (UTC)
Subject: Re: [PATCH v7 04/15] x86/mm: switch to new APIs in map_pages_to_xen
To: Hongyan Xia <hx242@xen.org>
References: <cover.1590750232.git.hongyxia@amazon.com>
 <0c7cd882e8f8e853d2da78f41cce42d1f70b3bdf.1590750232.git.hongyxia@amazon.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <66c3855c-1b4a-69fd-7eb7-b4ee481174c4@suse.com>
Date: Tue, 14 Jul 2020 13:52:32 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <0c7cd882e8f8e853d2da78f41cce42d1f70b3bdf.1590750232.git.hongyxia@amazon.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, julien@xen.org,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 29.05.2020 13:11, Hongyan Xia wrote:
> From: Wei Liu <wei.liu2@citrix.com>
> 
> Page tables allocated in that function should be mapped and unmapped
> now.
> 
> Signed-off-by: Wei Liu <wei.liu2@citrix.com>
> Signed-off-by: Hongyan Xia <hongyxia@amazon.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>


From xen-devel-bounces@lists.xenproject.org Tue Jul 14 11:55:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jul 2020 11:55:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvJWv-0006MB-2z; Tue, 14 Jul 2020 11:55:25 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=bg5W=AZ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jvJWu-0006M6-1N
 for xen-devel@lists.xenproject.org; Tue, 14 Jul 2020 11:55:24 +0000
X-Inumbo-ID: e5f4a4ce-c5c8-11ea-b7bb-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e5f4a4ce-c5c8-11ea-b7bb-bc764e2007e4;
 Tue, 14 Jul 2020 11:55:23 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 4FB90B000;
 Tue, 14 Jul 2020 11:55:25 +0000 (UTC)
Subject: Re: [PATCH v7 05/15] x86/mm: switch to new APIs in modify_xen_mappings
To: Hongyan Xia <hx242@xen.org>
References: <cover.1590750232.git.hongyxia@amazon.com>
 <2d57f21e24cc898ba41ec537ea5df7ad5dfd6a05.1590750232.git.hongyxia@amazon.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <2daa8c68-ab74-d836-3ceb-c477d70c8624@suse.com>
Date: Tue, 14 Jul 2020 13:55:23 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <2d57f21e24cc898ba41ec537ea5df7ad5dfd6a05.1590750232.git.hongyxia@amazon.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, julien@xen.org,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 29.05.2020 13:11, Hongyan Xia wrote:
> From: Wei Liu <wei.liu2@citrix.com>
> 
> Page tables allocated in that function should be mapped and unmapped
> now.
> 
> Note that pl2e may now be mapped and unmapped in different iterations, so
> we need to add clean-ups for that.
> 
> Signed-off-by: Wei Liu <wei.liu2@citrix.com>
> Signed-off-by: Hongyan Xia <hongyxia@amazon.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>



From xen-devel-bounces@lists.xenproject.org Tue Jul 14 12:02:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jul 2020 12:02:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvJdr-0007Hz-CO; Tue, 14 Jul 2020 12:02:35 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=bg5W=AZ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jvJdp-0007Ht-GJ
 for xen-devel@lists.xenproject.org; Tue, 14 Jul 2020 12:02:33 +0000
X-Inumbo-ID: e5df9ba0-c5c9-11ea-92ff-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e5df9ba0-c5c9-11ea-92ff-12813bfff9fa;
 Tue, 14 Jul 2020 12:02:32 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id A0D89B008;
 Tue, 14 Jul 2020 12:02:34 +0000 (UTC)
Subject: Re: [PATCH v7 06/15] x86_64/mm: introduce pl2e in paging_init
To: Hongyan Xia <hx242@xen.org>
References: <cover.1590750232.git.hongyxia@amazon.com>
 <6e840488d3512bb1b0d5e678391323df5301e1e0.1590750232.git.hongyxia@amazon.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <ab9f656a-bad6-c19b-cce4-001aa4236637@suse.com>
Date: Tue, 14 Jul 2020 14:02:32 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <6e840488d3512bb1b0d5e678391323df5301e1e0.1590750232.git.hongyxia@amazon.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, julien@xen.org,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 29.05.2020 13:11, Hongyan Xia wrote:
> From: Wei Liu <wei.liu2@citrix.com>
> 
> We will soon map and unmap pages in paging_init(). Introduce pl2e so
> that we can use l2_ro_mpt to point to the page table itself.
> 
> No functional change.
> 
> Signed-off-by: Wei Liu <wei.liu2@citrix.com>

Acked-by: Jan Beulich <jbeulich@suse.com>



From xen-devel-bounces@lists.xenproject.org Tue Jul 14 12:03:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jul 2020 12:03:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvJf7-0007Oh-Nk; Tue, 14 Jul 2020 12:03:53 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=bg5W=AZ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jvJf6-0007OX-PW
 for xen-devel@lists.xenproject.org; Tue, 14 Jul 2020 12:03:52 +0000
X-Inumbo-ID: 1522ab1e-c5ca-11ea-92ff-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1522ab1e-c5ca-11ea-92ff-12813bfff9fa;
 Tue, 14 Jul 2020 12:03:52 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id EDF0DB021;
 Tue, 14 Jul 2020 12:03:53 +0000 (UTC)
Subject: Re: [PATCH v7 07/15] x86_64/mm: switch to new APIs in paging_init
To: Hongyan Xia <hx242@xen.org>
References: <cover.1590750232.git.hongyxia@amazon.com>
 <7eb8f68f2202d97062d714d35a8b1d6a972cc623.1590750232.git.hongyxia@amazon.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <eb6d00d1-ea87-b33e-b2de-eacba4e6f4da@suse.com>
Date: Tue, 14 Jul 2020 14:03:52 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <7eb8f68f2202d97062d714d35a8b1d6a972cc623.1590750232.git.hongyxia@amazon.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, julien@xen.org,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 29.05.2020 13:11, Hongyan Xia wrote:
> From: Wei Liu <wei.liu2@citrix.com>
> 
> Map and unmap pages instead of relying on the direct map.
> 
> Signed-off-by: Wei Liu <wei.liu2@citrix.com>
> Signed-off-by: Hongyan Xia <hongyxia@amazon.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>
albeit preferably with ...

> --- a/xen/arch/x86/x86_64/mm.c
> +++ b/xen/arch/x86/x86_64/mm.c
> @@ -481,6 +481,7 @@ void __init paging_init(void)
>      l3_pgentry_t *l3_ro_mpt;
>      l2_pgentry_t *pl2e = NULL, *l2_ro_mpt = NULL;
>      struct page_info *l1_pg;
> +    mfn_t l3_ro_mpt_mfn, l2_ro_mpt_mfn;

... just a single variable named "mfn" here. Afaics the life ranges
allow for this (imo) readability improvement.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Jul 14 12:29:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jul 2020 12:29:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvK3M-0000n8-I1; Tue, 14 Jul 2020 12:28:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=xDy+=AZ=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jvK3L-0000mo-51
 for xen-devel@lists.xenproject.org; Tue, 14 Jul 2020 12:28:55 +0000
X-Inumbo-ID: 9098f962-c5cd-11ea-bb8b-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9098f962-c5cd-11ea-bb8b-bc764e2007e4;
 Tue, 14 Jul 2020 12:28:47 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=wu0w5t5A7G4HCprLI39nsdD8wHsIgP+/zdpN1Am3CKA=; b=zs+y09qTvscE2zC4jqBDE+AHU
 0fhbl0onweNME3KfSgeSl4hHh3CuD08pxuXN2QTnvwTqrdETwvHwDNKvNRarT84Wp1ENWqTbppQtY
 12QOr1pRyQu7DSvCherHIwenWH1paFLcCLu43crKBdyBvuBRUyPg7HPIeTijwBvvb4Yzo=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jvK3C-0004tH-RD; Tue, 14 Jul 2020 12:28:46 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jvK3C-0001aj-Hm; Tue, 14 Jul 2020 12:28:46 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jvK3C-0007Nt-H8; Tue, 14 Jul 2020 12:28:46 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151874-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 151874: regressions - FAIL
X-Osstest-Failures: qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:redhat-install:fail:regression
 qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:redhat-install:fail:regression
 qemu-mainline:test-amd64-i386-freebsd10-i386:guest-start:fail:regression
 qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-start:fail:regression
 qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:windows-install:fail:regression
 qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: qemuu=20c1df5476e1e9b5d3f5b94f9f3ce01d21f14c46
X-Osstest-Versions-That: qemuu=9e3903136d9acde2fb2dd9e967ba928050a6cb4a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 14 Jul 2020 12:28:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151874 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151874/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemuu-ovmf-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-qemuu-nested-intel 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-ovmf-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-qemuu-rhel6hvm-amd 10 redhat-install     fail REGR. vs. 151065
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-debianhvm-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-qemuu-rhel6hvm-intel 10 redhat-install   fail REGR. vs. 151065
 test-amd64-i386-freebsd10-i386 11 guest-start            fail REGR. vs. 151065
 test-amd64-i386-freebsd10-amd64 11 guest-start           fail REGR. vs. 151065
 test-amd64-amd64-qemuu-nested-amd 10 debian-hvm-install  fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-win7-amd64 10 windows-install  fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-win7-amd64 10 windows-install   fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-ws16-amd64 10 windows-install  fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-ws16-amd64 10 windows-install   fail REGR. vs. 151065

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 151065
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 151065
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                20c1df5476e1e9b5d3f5b94f9f3ce01d21f14c46
baseline version:
 qemuu                9e3903136d9acde2fb2dd9e967ba928050a6cb4a

Last test of basis   151065  2020-06-12 22:27:51 Z   31 days
Failing since        151101  2020-06-14 08:32:51 Z   30 days   41 attempts
Testing same since   151874  2020-07-13 21:40:56 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Ahmed Karaman <ahmedkhaledkaraman@gmail.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.m.mail@gmail.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Boettcher <alexander.boettcher@genode-labs.com>
  Alexander Bulekov <alxndr@bu.edu>
  Alexey Krasikov <alex-krasikov@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Allan Peramaki <aperamak@pp1.inet.fi>
  Andrew Jones <drjones@redhat.com>
  Ani Sinha <ani.sinha@nutanix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Ard Biesheuvel <ardb@kernel.org>
  Artyom Tarasenko <atar4qemu@gmail.com>
  Aurelien Jarno <aurelien@aurel32.net>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Beata Michalska <beata.michalska@linaro.org>
  Bin Meng <bin.meng@windriver.com>
  Cameron Esfahani <dirty@apple.com>
  Catherine A. Frederick <chocola@animebitch.es>
  Cathy Zhang <cathy.zhang@intel.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christophe de Dinechin <dinechin@redhat.com>
  Cindy Lu <lulu@redhat.com>
  Claudio Fontana <cfontana@suse.de>
  Cleber Rosa <crosa@redhat.com>
  Colin Xu <colin.xu@intel.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniele Buono <dbuono@linux.vnet.ibm.com>
  David Carlier <devnexen@gmail.com>
  David Edmondson <david.edmondson@oracle.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Derek Su <dereksu@qnap.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Ed Robbins <E.J.C.Robbins@kent.ac.uk>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Emilio G. Cota <cota@braap.org>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Evgeny Yakovlev <eyakovlev@virtuozzo.com>
  fangying <fangying1@huawei.com>
  Farhan Ali <alifm@linux.ibm.com>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Geoffrey McRae <geoff@hostfission.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Giuseppe Musacchio <thatlemon@gmail.com>
  Greg Kurz <groug@kaod.org>
  Guenter Roeck <linux@roeck-us.net>
  Gustavo Romero <gromero@linux.ibm.com>
  Halil Pasic <pasic@linux.ibm.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Howard Spoelstra <hsp.cat7@gmail.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Ian Jiang <ianjiang.ict@gmail.com>
  Igor Mammedov <imammedo@redhat.com>
  Jan Kiszka <jan.kiszka@siemens.com>
  Janne Grunau <j@jannau.net>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason Wang <jasowang@redhat.com>
  Jay Zhou <jianjay.zhou@huawei.com>
  Jean-Christophe Dubois <jcd@tribudubois.net>
  Jessica Clarke <jrtc27@jrtc27.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jingqi Liu <jingqi.liu@intel.com>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Joseph Myers <joseph@codesourcery.com>
  Julio Faracco <jcfaracco@gmail.com>
  Keqian Zhu <zhukeqian1@huawei.com>
  Kevin Wolf <kwolf@redhat.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Leif Lindholm <leif@nuviainc.com>
  Leonid Bloch <lb.workbox@gmail.com>
  Leonid Bloch <lbloch@janustech.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@gmail.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  lichun <lichun@ruijie.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  Like Xu <like.xu@linux.intel.com>
  Lingfeng Yang <lfy@google.com>
  Lingshan zhu <lingshan.zhu@intel.com>
  Liran Alon <liran.alon@oracle.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Lukas Straub <lukasstraub2@web.de>
  Luwei Kang <luwei.kang@intel.com>
  Maciej S. Szmigiero <maciej.szmigiero@oracle.com>
  Magnus Damm <magnus.damm@gmail.com>
  Mao Zhongyi <maozhongyi@cmss.chinamobile.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marcelo Tosatti <mtosatti@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Mario Smarduch <msmarduch@digitalocean.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Masahiro Yamada <masahiroy@kernel.org>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Maxime Coquelin <maxime.coquelin@redhat.com>
  Menno Lageman <menno.lageman@oracle.com>
  Michael Rolnik <mrolnik@gmail.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michele Denber <denber@mindspring.com>
  Nir Soffer <nsoffer@redhat.com>
  Olaf Hering <olaf@aepfle.de>
  Pan Nengyuan <pannengyuan@huawei.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Durrant <pdurrant@amazon.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Radoslaw Biernacki <rad@semihalf.com>
  Raphael Norwitz <raphael.norwitz@nutanix.com>
  Richard Henderson <richard.henderson@linaro.org>
  Richard W.M. Jones <rjones@redhat.com>
  Riku Voipio <riku.voipio@linaro.org>
  Robert Foley <robert.foley@linaro.org>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Roman Kagan <rkagan@virtuozzo.com>
  Roman Kagan <rvkagan@yandex-team.ru>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Sarah Harris <S.E.Harris@kent.ac.uk>
  Sebastian Rasmussen <sebras@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Tao Xu <tao3.xu@intel.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tiwei Bie <tiwei.bie@intel.com>
  Tong Ho <tong.ho@xilinx.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  WangBowen <bowen.wang@intel.com>
  Wei Huang <wei.huang2@amd.com>
  Wei Wang <wei.w.wang@intel.com>
  Wentong Wu <wentong.wu@intel.com>
  Xie Yongji <xieyongji@bytedance.com>
  Ying Fang <fangying1@huawei.com>
  Yoshinori Sato <ysato@users.sourceforge.jp>
  Yuri Benditovich <yuri.benditovich@daynix.com>
  Zhang Chen <chen.zhang@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 26285 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Jul 14 12:32:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jul 2020 12:32:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvK6Z-0001ZO-4V; Tue, 14 Jul 2020 12:32:15 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=bg5W=AZ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jvK6Y-0001ZJ-3m
 for xen-devel@lists.xenproject.org; Tue, 14 Jul 2020 12:32:14 +0000
X-Inumbo-ID: 0b4a43aa-c5ce-11ea-b7bb-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0b4a43aa-c5ce-11ea-b7bb-bc764e2007e4;
 Tue, 14 Jul 2020 12:32:13 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 630AEAD3A;
 Tue, 14 Jul 2020 12:32:15 +0000 (UTC)
Subject: Re: [PATCH v7 08/15] x86_64/mm: switch to new APIs in setup_m2p_table
To: Hongyan Xia <hx242@xen.org>
References: <cover.1590750232.git.hongyxia@amazon.com>
 <14aec5f25e8226a45dbc6b26fcc9981ea5f66a90.1590750232.git.hongyxia@amazon.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <625a5450-daca-0837-6977-4b0ae237d988@suse.com>
Date: Tue, 14 Jul 2020 14:32:13 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <14aec5f25e8226a45dbc6b26fcc9981ea5f66a90.1590750232.git.hongyxia@amazon.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, julien@xen.org,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 29.05.2020 13:11, Hongyan Xia wrote:
> From: Wei Liu <wei.liu2@citrix.com>
> 
> Avoid repetitive mapping of l2_ro_mpt by keeping it across loops, and
> only unmap and map it when crossing 1G boundaries.
> 
> Signed-off-by: Wei Liu <wei.liu2@citrix.com>
> Signed-off-by: Hongyan Xia <hongyxia@amazon.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>

I do think, however, that ...

> @@ -438,32 +443,29 @@ static int setup_m2p_table(struct mem_hotadd_info *info)
>  
>              ASSERT(!(l3e_get_flags(l3_ro_mpt[l3_table_offset(va)]) &
>                    _PAGE_PSE));
> -            if ( l3e_get_flags(l3_ro_mpt[l3_table_offset(va)]) &
> -              _PAGE_PRESENT )
> -                l2_ro_mpt = l3e_to_l2e(l3_ro_mpt[l3_table_offset(va)]) +
> -                  l2_table_offset(va);
> -            else
> +            if ( (l3e_get_flags(l3_ro_mpt[l3_table_offset(va)]) &
> +                  _PAGE_PRESENT) && !l2_ro_mpt)
> +                l2_ro_mpt = map_l2t_from_l3e(l3_ro_mpt[l3_table_offset(va)]);
> +            else if ( !l2_ro_mpt )
>              {

... this would be slightly neater as

            if ( l2_ro_mpt )
                /* nothing */;
            else if ( l3e_get_flags(l3_ro_mpt[l3_table_offset(va)]) &
                      _PAGE_PRESENT )
                l2_ro_mpt = map_l2t_from_l3e(l3_ro_mpt[l3_table_offset(va)]);
            else
            {
                ...

My R-b holds if you would change it like this.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Jul 14 12:33:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jul 2020 12:33:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvK7z-0001hA-F9; Tue, 14 Jul 2020 12:33:43 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=bg5W=AZ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jvK7x-0001gO-Vw
 for xen-devel@lists.xenproject.org; Tue, 14 Jul 2020 12:33:42 +0000
X-Inumbo-ID: 3fa27df2-c5ce-11ea-9305-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 3fa27df2-c5ce-11ea-9305-12813bfff9fa;
 Tue, 14 Jul 2020 12:33:41 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 45022AD3A;
 Tue, 14 Jul 2020 12:33:43 +0000 (UTC)
Subject: Re: [PATCH v7 08/15] x86_64/mm: switch to new APIs in setup_m2p_table
To: Hongyan Xia <hx242@xen.org>
References: <cover.1590750232.git.hongyxia@amazon.com>
 <14aec5f25e8226a45dbc6b26fcc9981ea5f66a90.1590750232.git.hongyxia@amazon.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <10b00e52-03b4-a413-0822-ff9882f82d06@suse.com>
Date: Tue, 14 Jul 2020 14:33:41 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <14aec5f25e8226a45dbc6b26fcc9981ea5f66a90.1590750232.git.hongyxia@amazon.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, julien@xen.org,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 29.05.2020 13:11, Hongyan Xia wrote:
> From: Wei Liu <wei.liu2@citrix.com>
> 
> Avoid repetitive mapping of l2_ro_mpt by keeping it across loops, and
> only unmap and map it when crossing 1G boundaries.

Oh, one other thing: This reads as if there were such repetitive
mapping operations, and the patch took care of them. How about
starting this with "While doing so, avoid making ..."?

Jan


From xen-devel-bounces@lists.xenproject.org Tue Jul 14 12:42:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jul 2020 12:42:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvKGS-0002Ym-BX; Tue, 14 Jul 2020 12:42:28 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=bg5W=AZ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jvKGR-0002Yh-BG
 for xen-devel@lists.xenproject.org; Tue, 14 Jul 2020 12:42:27 +0000
X-Inumbo-ID: 78ab5e60-c5cf-11ea-9307-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 78ab5e60-c5cf-11ea-9307-12813bfff9fa;
 Tue, 14 Jul 2020 12:42:26 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 6CED5AC49;
 Tue, 14 Jul 2020 12:42:28 +0000 (UTC)
Subject: Re: [PATCH v7 09/15] efi: use new page table APIs in copy_mapping
To: Hongyan Xia <hx242@xen.org>
References: <cover.1590750232.git.hongyxia@amazon.com>
 <0259b645c81ecc3879240e30760b0e7641a2b602.1590750232.git.hongyxia@amazon.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <bfe28c9c-af4e-96c2-9e6c-354a5bf626d8@suse.com>
Date: Tue, 14 Jul 2020 14:42:26 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <0259b645c81ecc3879240e30760b0e7641a2b602.1590750232.git.hongyxia@amazon.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, julien@xen.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 29.05.2020 13:11, Hongyan Xia wrote:
> From: Wei Liu <wei.liu2@citrix.com>
> 
> After inspection ARM doesn't have alloc_xen_pagetable so this function
> is x86 only, which means it is safe for us to change.

Well, it sits inside a "#ifndef CONFIG_ARM" section.

> @@ -1442,29 +1443,42 @@ static __init void copy_mapping(unsigned long mfn, unsigned long end,
>                                                   unsigned long emfn))
>  {
>      unsigned long next;
> +    l3_pgentry_t *l3src = NULL, *l3dst = NULL;
>  
>      for ( ; mfn < end; mfn = next )
>      {
>          l4_pgentry_t l4e = efi_l4_pgtable[l4_table_offset(mfn << PAGE_SHIFT)];
> -        l3_pgentry_t *l3src, *l3dst;
>          unsigned long va = (unsigned long)mfn_to_virt(mfn);
>  
> +        if ( !((mfn << PAGE_SHIFT) & ((1UL << L4_PAGETABLE_SHIFT) - 1)) )

To be in line with ...

> +        {
> +            UNMAP_DOMAIN_PAGE(l3src);
> +            UNMAP_DOMAIN_PAGE(l3dst);
> +        }
>          next = mfn + (1UL << (L3_PAGETABLE_SHIFT - PAGE_SHIFT));

... this, please avoid the left shift of mfn in the if(). Judging from
code further down I also think the zapping of l3src would better be
dependent upon va than upon mfn.

>          if ( !is_valid(mfn, min(next, end)) )
>              continue;
> -        if ( !(l4e_get_flags(l4e) & _PAGE_PRESENT) )
> +        if ( !l3dst )
>          {
> -            l3dst = alloc_xen_pagetable();
> -            BUG_ON(!l3dst);
> -            clear_page(l3dst);
> -            efi_l4_pgtable[l4_table_offset(mfn << PAGE_SHIFT)] =
> -                l4e_from_paddr(virt_to_maddr(l3dst), __PAGE_HYPERVISOR);
> +            if ( !(l4e_get_flags(l4e) & _PAGE_PRESENT) )
> +            {
> +                mfn_t l3mfn;
> +
> +                l3dst = alloc_map_clear_xen_pt(&l3mfn);
> +                BUG_ON(!l3dst);
> +                efi_l4_pgtable[l4_table_offset(mfn << PAGE_SHIFT)] =
> +                    l4e_from_mfn(l3mfn, __PAGE_HYPERVISOR);
> +            }
> +            else
> +                l3dst = map_l3t_from_l4e(l4e);
>          }
> -        else
> -            l3dst = l4e_to_l3e(l4e);

As for the earlier patch, maybe again neater if you started with

        if ( l3dst )
            /* nothing */;
        else if ...

Would also save a level of indentation as it seems.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Jul 14 13:12:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jul 2020 13:12:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvKjd-0005MX-Tm; Tue, 14 Jul 2020 13:12:37 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=S7fe=AZ=cert.pl=michall@srs-us1.protection.inumbo.net>)
 id 1jvKjd-0005MS-Bv
 for xen-devel@lists.xenproject.org; Tue, 14 Jul 2020 13:12:37 +0000
X-Inumbo-ID: aec7fba8-c5d3-11ea-8496-bc764e2007e4
Received: from bagnar.nask.net.pl (unknown [195.187.242.196])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id aec7fba8-c5d3-11ea-8496-bc764e2007e4;
 Tue, 14 Jul 2020 13:12:35 +0000 (UTC)
Received: from bagnar.nask.net.pl (unknown [172.16.9.10])
 by bagnar.nask.net.pl (Postfix) with ESMTP id 61C3BA2BA8;
 Tue, 14 Jul 2020 15:12:34 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by bagnar.nask.net.pl (Postfix) with ESMTP id 45090A1796;
 Tue, 14 Jul 2020 15:12:33 +0200 (CEST)
Received: from bagnar.nask.net.pl ([127.0.0.1])
 by localhost (bagnar.nask.net.pl [127.0.0.1]) (amavisd-new, port 10032)
 with ESMTP id JzGPGcf6Nwfj; Tue, 14 Jul 2020 15:12:32 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by bagnar.nask.net.pl (Postfix) with ESMTP id 83634A2BA8;
 Tue, 14 Jul 2020 15:12:32 +0200 (CEST)
X-Virus-Scanned: amavisd-new at bagnar.nask.net.pl
Received: from bagnar.nask.net.pl ([127.0.0.1])
 by localhost (bagnar.nask.net.pl [127.0.0.1]) (amavisd-new, port 10026)
 with ESMTP id dotFs9edIDvX; Tue, 14 Jul 2020 15:12:32 +0200 (CEST)
Received: from belindir.nask.net.pl (belindir-ext.nask.net.pl
 [195.187.242.210])
 by bagnar.nask.net.pl (Postfix) with ESMTP id 4FEF7A1796;
 Tue, 14 Jul 2020 15:12:32 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by belindir.nask.net.pl (Postfix) with ESMTP id 1E0CD205EA;
 Tue, 14 Jul 2020 15:12:02 +0200 (CEST)
Received: from belindir.nask.net.pl ([127.0.0.1])
 by localhost (belindir.nask.net.pl [127.0.0.1]) (amavisd-new, port 10032)
 with ESMTP id LjkiNPvFauii; Tue, 14 Jul 2020 15:11:56 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by belindir.nask.net.pl (Postfix) with ESMTP id 52E6020DB0;
 Tue, 14 Jul 2020 15:11:56 +0200 (CEST)
X-Virus-Scanned: amavisd-new at belindir.nask.net.pl
Received: from belindir.nask.net.pl ([127.0.0.1])
 by localhost (belindir.nask.net.pl [127.0.0.1]) (amavisd-new, port 10026)
 with ESMTP id tiy1MUQ1TTHP; Tue, 14 Jul 2020 15:11:56 +0200 (CEST)
Received: from belindir.nask.net.pl (belindir.nask.net.pl [172.16.10.10])
 by belindir.nask.net.pl (Postfix) with ESMTP id 218CE206F7;
 Tue, 14 Jul 2020 15:11:56 +0200 (CEST)
Date: Tue, 14 Jul 2020 15:11:55 +0200 (CEST)
From: =?utf-8?Q?Micha=C5=82_Leszczy=C5=84ski?= <michal.leszczynski@cert.pl>
To: xen-devel@lists.xenproject.org
Message-ID: <878376484.57739222.1594732315968.JavaMail.zimbra@cert.pl>
In-Reply-To: <cover.1594150543.git.michal.leszczynski@cert.pl>
References: <cover.1594150543.git.michal.leszczynski@cert.pl>
Subject: Re: [PATCH v6 00/11] Implement support for external IPT monitoring
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
X-Originating-IP: [172.16.10.10]
X-Mailer: Zimbra 8.6.0_GA_1194 (ZimbraWebClient - GC83 (Win)/8.6.0_GA_1194)
Thread-Topic: Implement support for external IPT monitoring
Thread-Index: JnwhUhLclswxe0oQ1Qji7hws02nUzA==
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin Tian <kevin.tian@intel.com>,
 Stefano Stabellini <sstabellini@kernel.org>, luwei kang <luwei.kang@intel.com>,
 Jan Beulich <jbeulich@suse.com>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 tamas lengyel <tamas.lengyel@intel.com>, Jun Nakajima <jun.nakajima@intel.com>,
 Anthony PERARD <anthony.perard@citrix.com>, Julien Grall <julien@xen.org>,
 Roger Pau =?utf-8?Q?Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

----- On 7 Jul 2020, at 21:39, Michał Leszczyński <michal.leszczynski@cert.pl> wrote:

> Intel Processor Trace is an architectural extension available in modern Intel
> family CPUs. It allows recording a detailed trace of activity while the
> processor executes code. One might use the recorded trace to reconstruct
> the code flow, that is, to find out the executed code paths, determine
> branches taken, and so forth.
>
> The abovementioned feature is described in Intel(R) 64 and IA-32 Architectures
> Software Developer's Manual, Volume 3C: System Programming Guide, Part 3,
> Chapter 36: "Intel Processor Trace."
>
> This patch series implements an interface that Dom0 could use in order to
> enable IPT for particular vCPUs in DomU, allowing for external monitoring. Such
> a feature has numerous applications like malware monitoring, fuzzing, or
> performance testing.
>
> Thanks also to Tamas K Lengyel for a few preliminary hints before the
> first version of this patch was submitted to xen-devel.
>
> Changed since v1:
>  * MSR_RTIT_CTL is managed using MSR load lists
>  * other PT-related MSRs are modified only when a vCPU goes out of context
>  * trace buffer is now acquired as a resource
>  * added vmtrace_pt_size parameter in xl.cfg; the size of the trace buffer
>    must be specified at the moment of domain creation
>  * trace buffers are allocated on domain creation, destroyed on
>    domain destruction
>  * HVMOP_vmtrace_ipt_enable/disable is limited to enabling/disabling PT;
>    these calls don't manage buffer memory anymore
>  * lifted the 32 MFN/GFN array limit when acquiring resources
>  * minor code style changes according to review
>
> Changed since v2:
>  * trace buffer is now allocated on domain creation (in v2 it was
>    allocated when the HVM param was set)
>  * restored the 32-item limit in mfn/gfn arrays in acquire_resource
>    and instead implemented hypercall continuations
>  * code changes according to Jan's and Roger's review
>
> Changed since v3:
>  * vmtrace HVMOPs are now implemented as DOMCTLs
>  * patches split up according to Andrew's comments
>  * code changes according to v3 review on the mailing list
>
> Changed since v4:
>  * rebased to commit be63d9d4
>  * fixed dependencies between patches
>    (earlier patches don't reference later patches)
>  * introduced a preemption check in acquire_resource
>  * moved buffer allocation to common code
>  * split some patches according to code review
>  * minor fixes according to code review
>
> Changed since v5:
>  * trace buffer size is now dynamically determined by the proctrace
>    tool
>  * trace buffer size variable is uniformly defined as uint32_t
>    processor_trace_buf_kb in the hypervisor, toolstack and ABI
>  * buffer pages are no longer freed explicitly; a reference count is
>    used instead
>  * minor fixes according to code review
>
> This patch series is available on GitHub:
> https://github.com/icedevml/xen/tree/ipt-patch-v6
>
>
> Michal Leszczynski (11):
>  memory: batch processing in acquire_resource()
>  x86/vmx: add Intel PT MSR definitions
>  x86/vmx: add IPT cpu feature
>  common: add vmtrace_pt_size domain parameter
>  tools/libxl: add vmtrace_pt_size parameter
>  x86/hvm: processor trace interface in HVM
>  x86/vmx: implement IPT in VMX
>  x86/mm: add vmtrace_buf resource type
>  x86/domctl: add XEN_DOMCTL_vmtrace_op
>  tools/libxc: add xc_vmtrace_* functions
>  tools/proctrace: add proctrace tool
>
> docs/man/xl.cfg.5.pod.in                    |  13 ++
> tools/golang/xenlight/helpers.gen.go        |   2 +
> tools/golang/xenlight/types.gen.go          |   1 +
> tools/libxc/Makefile                        |   1 +
> tools/libxc/include/xenctrl.h               |  40 +++++
> tools/libxc/xc_vmtrace.c                    |  87 ++++++++++
> tools/libxl/libxl.h                         |   8 +
> tools/libxl/libxl_create.c                  |   1 +
> tools/libxl/libxl_types.idl                 |   4 +
> tools/proctrace/Makefile                    |  45 +++++
> tools/proctrace/proctrace.c                 | 179 ++++++++++++++++++++
> tools/xl/xl_parse.c                         |  22 +++
> xen/arch/x86/domain.c                       |  27 +++
> xen/arch/x86/domctl.c                       |  50 ++++++
> xen/arch/x86/hvm/vmx/vmcs.c                 |  15 +-
> xen/arch/x86/hvm/vmx/vmx.c                  | 110 ++++++++++++
> xen/common/domain.c                         |  46 +++++
> xen/common/memory.c                         |  80 ++++++++-
> xen/include/asm-x86/cpufeature.h            |   1 +
> xen/include/asm-x86/hvm/hvm.h               |  20 +++
> xen/include/asm-x86/hvm/vmx/vmcs.h          |   4 +
> xen/include/asm-x86/hvm/vmx/vmx.h           |  14 ++
> xen/include/asm-x86/msr-index.h             |  24 +++
> xen/include/public/arch-x86/cpufeatureset.h |   1 +
> xen/include/public/domctl.h                 |  29 ++++
> xen/include/public/memory.h                 |   1 +
> xen/include/xen/domain.h                    |   2 +
> xen/include/xen/sched.h                     |   7 +
> 28 files changed, 828 insertions(+), 6 deletions(-)
> create mode 100644 tools/libxc/xc_vmtrace.c
> create mode 100644 tools/proctrace/Makefile
> create mode 100644 tools/proctrace/proctrace.c
>
> --
> 2.17.1


Kind reminder about this new patch version for external IPT monitoring.


Best regards,
Michał Leszczyński
CERT Polska


From xen-devel-bounces@lists.xenproject.org Tue Jul 14 13:43:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jul 2020 13:43:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvLCw-0007sz-E1; Tue, 14 Jul 2020 13:42:54 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Ad9T=AZ=in.bosch.com=manikandan.chockalingam@srs-us1.protection.inumbo.net>)
 id 1jvLCu-0007su-IE
 for xen-devel@lists.xen.org; Tue, 14 Jul 2020 13:42:53 +0000
X-Inumbo-ID: e7706d9c-c5d7-11ea-b7bb-bc764e2007e4
Received: from de-out1.bosch-org.com (unknown [139.15.230.186])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e7706d9c-c5d7-11ea-b7bb-bc764e2007e4;
 Tue, 14 Jul 2020 13:42:48 +0000 (UTC)
Received: from si0vm1948.rbesz01.com
 (lb41g3-ha-dmz-psi-sl1-mailout.fe.ssn.bosch.com [139.15.230.188])
 by fe0vms0186.rbdmz01.com (Postfix) with ESMTPS id 4B5hXk500Gz1XLFs0;
 Tue, 14 Jul 2020 15:42:46 +0200 (CEST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=in.bosch.com;
 s=key1-intmail; t=1594734166;
 bh=D8zwxoergcrtek4NXmGatgUgF/p/9IbmioM2WmJFroQ=; l=10;
 h=From:Subject:From:Reply-To:Sender;
 b=SwcreMK9QwlVLBUgS120siOB+ZMsz2rWPIfKuSE0RTcDzya+7N8hJF7An1z6/X3bC
 AmmcENXQ7gjPANHAS+yI1gpPGIrDHWokdt/PnH8Xix9vgEdFSz0Z3qi1RDVGAcJtwu
 6MzJ4wLJ3fHeezafsEiauFRaSTIHhd7Q2whTZgjE=
Received: from fe0vm1741.rbesz01.com (unknown [10.58.172.176])
 by si0vm1948.rbesz01.com (Postfix) with ESMTPS id 4B5hXk4Znlz1jG;
 Tue, 14 Jul 2020 15:42:46 +0200 (CEST)
X-AuditID: 0a3aad15-e97ff700000001dd-57-5f0db65648bc
Received: from si0vm1949.rbesz01.com ( [10.58.173.29])
 (using TLS with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (Client did not present a certificate)
 by fe0vm1741.rbesz01.com (SMG Outbound) with SMTP id 00.BC.00477.656BD0F5;
 Tue, 14 Jul 2020 15:42:46 +0200 (CEST)
Received: from FE-MBX2066.de.bosch.com (fe-mbx2066.de.bosch.com [10.3.231.171])
 by si0vm1949.rbesz01.com (Postfix) with ESMTPS id 4B5hXk331Zz6CjZNx;
 Tue, 14 Jul 2020 15:42:46 +0200 (CEST)
Received: from SGPMBX2024.APAC.bosch.com (10.187.83.44) by
 FE-MBX2066.de.bosch.com (10.3.231.171) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.1.1979.3; Tue, 14 Jul 2020 15:42:45 +0200
Received: from SGPMBX2022.APAC.bosch.com (10.187.83.37) by
 SGPMBX2024.APAC.bosch.com (10.187.83.44) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.1.1979.3; Tue, 14 Jul 2020 21:42:44 +0800
Received: from SGPMBX2022.APAC.bosch.com ([fe80::2d4d:b176:b210:896]) by
 SGPMBX2022.APAC.bosch.com ([fe80::2d4d:b176:b210:896%6]) with mapi id
 15.01.1979.003; Tue, 14 Jul 2020 21:42:44 +0800
From: "Manikandan Chockalingam (RBEI/ECF3)"
 <Manikandan.Chockalingam@in.bosch.com>
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
Subject: RE: [BUG] Xen build for RCAR failing
Thread-Topic: [BUG] Xen build for RCAR failing
Thread-Index: AdZUKc5JeR7gPpESR52uLkZK1kYwOwAEsnEAAAD8OlAAAEBtgAABZtcgAANXdAAAh1iJgADaJ12w
Date: Tue, 14 Jul 2020 13:42:44 +0000
Message-ID: <67b4454424c4485fb59d542d052aaf2d@in.bosch.com>
References: <1b60ed1cd7834ed5957a2b4870602073@in.bosch.com>
 <1D0E7281-95D7-482E-BF6D-EE5B1FEE4876@arm.com>
 <ab84437081a346d6bf0f73581382c74e@in.bosch.com>
 <D84A5DA7-683C-480B-8837-C51D560FC2E1@arm.com>
 <139024a891324455a13a3d468908798d@in.bosch.com>
 <C3BCAA62-51EF-49DD-B978-6657BC6D5A21@arm.com> 
Accept-Language: en-US, en-SG
Content-Language: en-US
X-MS-Has-Attach: yes
X-MS-TNEF-Correlator: 
x-originating-ip: [10.187.56.210]
Content-Type: multipart/mixed;
 boundary="_002_67b4454424c4485fb59d542d052aaf2dinboschcom_"
MIME-Version: 1.0
X-Brightmail-Tracker: H4sIAAAAAAAAA+NgFjrDIsWRmVeSWpSXmKPExsXCZbVWVjdsG2+8wfqvchZLl2xmsjizvIfZ
 YsnHxSwOzB5r5q1h9Di6+zdTAFMUl01Kak5mWWqRvl0CV8bdW42sBfeyKx7dOcHSwNid3MXI
 ySEhYCLRcmM2cxcjF4eQwAwmiRvvHrJAOPsZJXYsv80KUiUk8JFR4uvLMojEZ0aJL11nmSCc
 Q4wS07qOMINUsQmESOzbe4MdxBYR0Jf4ffMHWDezgIfEqqt7GEFsYQFdia5zs6Fq9CS2LuwH
 quEAsqMkfkzQBgmzCKhKNF1pAivnFbCWWP+uD+qizUwSTz/+YQJJMArISiy6OYkFYr64xK0n
 85kg/hGReHjxNBuELSrx8vE/VghbUeLR/VVMEPUxEncabrNALBCUODnzCcsERrFZSEbNQlI2
 C0kZRFxP4sbUKWwQtrbEsoWvmSFsJ4nj779Bxc0kDhxdCRTnArIPMEqs/N8OVaQoMaX7IfsC
 Rs5VjKJpqQZluYbmJoZ6RUmpxVUGhnrJ+bmbGCGRLLqD8U73B71DjEwcjIcYVYBaH21YfYFR
 iiUvPy9VSYSXh4s3Xog3JbGyKrUoP76oNCe1+BCjNAeLkjivCs/GOCGB9MSS1OzU1ILUIpgs
 EwenVAOThCivw0nRBsZbx+73epgbzY5InTLnzfS8WRuUZeNEj7QJ3bmge8jVr+FZu7blhbdH
 DMqk/pVkJp3PPntpY8sCPp1ycREL8QaOjQcvblRWuT43J6va0Ued3ZQhenJl6vWdTs7btmmU
 sqauuPuY99G+UL/UNBveu/G3rx6NWrP1/Z87K9OPTz9gcK562oHTdjttJPlty//FtTBG3fM+
 LZ5YwPWnNXneS87pdeI9W4vXKq0xcLwdaKz+Z/GV9YmrmLjuC17fWeJ7PH/S0wXxNvGWFxhc
 7D7fmhgRdUdg8+ni/ouWSrPun0h9vDWf2UbxzKlFnw13VrJYlusvPsxu9TJ5vrX4ojadw7E/
 3EsjmAOUWIozEg21mIuKEwEAW0i9XwMAAA==
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: nd <nd@arm.com>, "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

--_002_67b4454424c4485fb59d542d052aaf2dinboschcom_
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable

Hello Bertrand,

I succeeded in building the core minimal image with dunfell and its compatible branches [except xen-troops (modified some files to complete the build)].

But I face the following error while booting.

(XEN) ****************************************
(XEN) Panic on CPU 0:
(XEN) Timer: Unable to retrieve IRQ 0 from the device tree
(XEN) ****************************************

My Build Configuration:
BB_VERSION           = "1.46.0"
BUILD_SYS            = "x86_64-linux"
NATIVELSBSTRING      = "universal"
TARGET_SYS           = "aarch64-poky-linux"
MACHINE              = "salvator-x"
DISTRO               = "poky"
DISTRO_VERSION       = "3.1.1"
TUNE_FEATURES        = "aarch64 cortexa57-cortexa53"
TARGET_FPU           = ""
SOC_FAMILY           = "rcar-gen3:r8a7795"
meta
meta-poky
meta-yocto-bsp       = "tmp:39d7cf1abb2c88baaedb3a627eba8827747b2eb9"
meta-rcar-gen3       = "dunfell-dev:2b3b0447fbc6c360a43a13525ae63a253fe3e5b7"
meta-oe
meta-python
meta-filesystems
meta-networking      = "tmp:cc6fc6b1641ab23089c1e3bba11e0c6394f0867c"
meta-selinux         = "dunfell:7af62c91d7d00a260cf28e7908955539304d100d"
meta-virtualization  = "dunfell:ffd787fb850e5a1657a01febc8402c74832147a1"
meta-rcar-gen3-xen   = "master:60699c631d541aeeaebaeec9a087efed9385ee42"

May I know the reason for this error? Am I missing something here? Attaching complete logs for reference.

Mit freundlichen Grüßen / Best regards

 Chockalingam Manikandan

ES-CM Core fn,ADIT (RBEI/ECF3)
Robert Bosch GmbH | Postfach 10 60 50 | 70049 Stuttgart | GERMANY | www.bosch.com
Tel. +91 80 6136-4452 | Fax +91 80 6617-0711 | Manikandan.Chockalingam@in.bosch.com

Registered Office: Stuttgart, Registration Court: Amtsgericht Stuttgart, HRB 14000;
Chairman of the Supervisory Board: Franz Fehrenbach; Managing Directors: Dr. Volkmar Denner,
Prof. Dr. Stefan Asenkerschbaumer, Dr. Michael Bolle, Dr. Christian Fischer, Dr. Stefan Hartung,
Dr. Markus Heyn, Harald Kröger, Christoph Kübel, Rolf Najork, Uwe Raschke, Peter Tyroller

-----Original Message-----
From: Manikandan Chockalingam (RBEI/ECF3)
Sent: Friday, July 10, 2020 9:56 AM
To: 'Bertrand Marquis' <Bertrand.Marquis@arm.com>
Cc: xen-devel@lists.xen.org; nd <nd@arm.com>
Subject: RE: [BUG] Xen build for RCAR failing

Hello Bertrand,

I couldn't find the dunfell branch in the following repos: 1. meta-selinux 2. xen-troops 3. meta-renesas [I took dunfell-dev]


-----Original Message-----
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
Sent: Tuesday, July 7, 2020 5:18 PM
To: Manikandan Chockalingam (RBEI/ECF3) <Manikandan.Chockalingam@in.bosch.com>
Cc: xen-devel@lists.xen.org; nd <nd@arm.com>
Subject: Re: [BUG] Xen build for RCAR failing



> On 7 Jul 2020, at 11:18, Manikandan Chockalingam (RBEI/ECF3) <Manikandan.Chockalingam@in.bosch.com> wrote:
>
> Hello Bertrand,
>
> Thank you. I will try a fresh build with the dunfell branch. All layers in the sense of [poky, meta-openembedded, meta-linaro, meta-renesas, meta-virtualisation, meta-selinux, xen-troops], right?

right

>
> Also, can I use the same proprietary drivers which I used for yocto2.19 [R-Car_Gen3_Series_Evaluation_Software_Package_for_Linux-20170427.zip] for this branch?

I have no idea what is in that, but I would guess it will probably not work that easily.
You might need to get in contact with Renesas to get more up-to-date instructions on how to build that.

Bertrand


--_002_67b4454424c4485fb59d542d052aaf2dinboschcom_
Content-Type: text/plain; name="built-u-boot-xen-bootup-error.txt"
Content-Description: built-u-boot-xen-bootup-error.txt
Content-Disposition: attachment;
	filename="built-u-boot-xen-bootup-error.txt"; size=6480;
	creation-date="Tue, 14 Jul 2020 13:38:45 GMT";
	modification-date="Tue, 14 Jul 2020 13:38:45 GMT"
Content-Transfer-Encoding: base64

WyAgICAwLjAwMDE0OV0gTk9USUNFOiAgQkwyOiBSLUNhciBIMyBJbml0aWFsIFByb2dyYW0gTG9h
ZGVyKENBNTcpDQpbICAgIDAuMDA0NTg1XSBOT1RJQ0U6ICBCTDI6IEluaXRpYWwgUHJvZ3JhbSBM
b2FkZXIoUmV2LjIuMC42KQ0KWyAgICAwLjAxMDExOV0gTk9USUNFOiAgQkwyOiBQUlIgaXMgUi1D
YXIgSDMgVmVyLjIuMA0KWyAgICAwLjAxNDc4N10gTk9USUNFOiAgQkwyOiBCb2FyZCBpcyBTYWx2
YXRvci1YIFJldi4xLjANClsgICAgMC4wMTk4MTRdIE5PVElDRTogIEJMMjogQm9vdCBkZXZpY2Ug
aXMgSHlwZXJGbGFzaCgxNjBNSHopDQpbICAgIDAuMDI1MzI1XSBOT1RJQ0U6ICBCTDI6IExDTSBz
dGF0ZSBpcyBDTQ0KWyAgICAwLjAyOTM2OF0gTk9USUNFOiAgQkwyOiBBVlMgc2V0dGluZyBzdWNj
ZWVkZWQuIERWRlNfU2V0VklEPTB4NTMNClsgICAgMC4wMzUzODRdIE5PVElDRTogIEJMMjogRERS
MzIwMChyZXYuMC40MCkNClsgICAgMC4wNTA3MjJdIE5PVElDRTogIEJMMjogW0NPTERfQk9PVF0N
ClsgICAgMC4wNTk0OTddIE5PVElDRTogIEJMMjogRFJBTSBTcGxpdCBpcyA0Y2gNClsgICAgMC4w
NjIxOTJdIE5PVElDRTogIEJMMjogUW9TIGlzIGRlZmF1bHQgc2V0dGluZyhyZXYuMC4yMSkNClsg
ICAgMC4wNjc2MzVdIE5PVElDRTogIEJMMjogRFJBTSByZWZyZXNoIGludGVydmFsIDEuOTUgdXNl
Yw0KWyAgICAwLjA3Mjk5Ml0gTk9USUNFOiAgQkwyOiBQZXJpb2RpYyBXcml0ZSBEUSBUcmFpbmlu
Zw0KWyAgICAwLjA3ODAyM10gTk9USUNFOiAgQkwyOiB2MS41KHJlbGVhc2UpOmFmOWY0MjkNClsg
ICAgMC4wODI0MTBdIE5PVElDRTogIEJMMjogQnVpbHQgOiAwNzowNzoyMiwgSnVsIDE0IDIwMjAN
ClsgICAgMC4wODc1OTddIE5PVElDRTogIEJMMjogTm9ybWFsIGJvb3QNClsgICAgMC4wOTEyMzhd
IE5PVElDRTogIEJMMjogZHN0PTB4ZTYzMjUxMDAgc3JjPTB4ODE4MDAwMCBsZW49NTEyKDB4MjAw
KQ0KWyAgICAwLjA5NzYyM10gTk9USUNFOiAgQkwyOiBkc3Q9MHg0M2YwMDAwMCBzcmM9MHg4MTgw
NDAwIGxlbj02MTQ0KDB4MTgwMCkNClsgICAgMC4xMDQyMzNdIE5PVElDRTogIEJMMjogZHN0PTB4
NDQwMDAwMDAgc3JjPTB4ODFjMDAwMCBsZW49NjU1MzYoMHgxMDAwMCkNClsgICAgMC4xMTEyMTdd
IE5PVElDRTogIEJMMjogZHN0PTB4NDQxMDAwMDAgc3JjPTB4ODIwMDAwMCBsZW49MTA0ODU3Nigw
eDEwMDAwMCkNClsgICAgMC4xMjIxNzldIE5PVElDRTogIEJMMjogZHN0PTB4NTAwMDAwMDAgc3Jj
PTB4ODY0MDAwMCBsZW49MTA0ODU3NigweDEwMDAwMCkNClsgICAgMC4xMzIwMzNdIE5PVElDRTog
IEJMMjogQm9vdGluZyBCTDMxDQoNClUtQm9vdCAyMDE4LjA5IChKdWwgMTIgMjAyMCAtIDE5OjA5
OjEzICswMDAwKQ0KDQpDUFU6IFJlbmVzYXMgRWxlY3Ryb25pY3MgUjhBNzc5NSByZXYgMi4wDQpN
b2RlbDogUmVuZXNhcyBTYWx2YXRvci1YIGJvYXJkIGJhc2VkIG9uIHI4YTc3OTUgRVMyLjArDQpE
UkFNOiAgMy45IEdpQg0KQmFuayAjMDogMHgwNDgwMDAwMDAgLSAweDA3ZmZmZmZmZiwgODk2IE1p
Qg0KQmFuayAjMTogMHg1MDAwMDAwMDAgLSAweDUzZmZmZmZmZiwgMSBHaUINCkJhbmsgIzI6IDB4
NjAwMDAwMDAwIC0gMHg2M2ZmZmZmZmYsIDEgR2lCDQpCYW5rICMzOiAweDcwMDAwMDAwMCAtIDB4
NzNmZmZmZmZmLCAxIEdpQg0KDQpNTUM6ICAgc2RAZWUxMDAwMDA6IDAsIHNkQGVlMTQwMDAwOiAx
LCBzZEBlZTE2MDAwMDogMg0KTG9hZGluZyBFbnZpcm9ubWVudCBmcm9tIE1NQy4uLiBPSw0KSW46
ICAgIHNlcmlhbEBlNmU4ODAwMA0KT3V0OiAgIHNlcmlhbEBlNmU4ODAwMA0KRXJyOiAgIHNlcmlh
bEBlNmU4ODAwMA0KTmV0OiAgIA0KRXJyb3I6IGV0aGVybmV0QGU2ODAwMDAwIGFkZHJlc3Mgbm90
IHNldC4NCmV0aC0xOiBldGhlcm5ldEBlNjgwMDAwMA0KSGl0IGFueSBrZXkgdG8gc3RvcCBhdXRv
Ym9vdDogIDAgDQo9PiBzZXRlbnYgZXRoYWRkciBkZTphZDpiZTplZjowMTowMg0KPT4gc2V0ZW52
IGlwYWRkciAxOTIuMTY4LjIuNTENCj0+IHNldGVudiBzZXJ2ZXJpcCAxOTIuMTY4LjIuMTENCj0+
IHRmdHAgMHg0QTAwMDAwMCByOGE3Nzk1LXNhbHZhdG9yLXgteGVuLmR0Yg0KZXRoZXJuZXRAZTY4
MDAwMDAgV2FpdGluZyBmb3IgUEhZIGF1dG8gbmVnb3RpYXRpb24gdG8gY29tcGxldGUuLi4uLiBk
b25lDQpVc2luZyBldGhlcm5ldEBlNjgwMDAwMCBkZXZpY2UNClRGVFAgZnJvbSBzZXJ2ZXIgMTky
LjE2OC4yLjExOyBvdXIgSVAgYWRkcmVzcyBpcyAxOTIuMTY4LjIuNTENCkZpbGVuYW1lICdyOGE3
Nzk1LXNhbHZhdG9yLXgteGVuLmR0YicuDQpMb2FkIGFkZHJlc3M6IDB4NGEwMDAwMDANCkxvYWRp
bmc6ICoNCkFSUCBSZXRyeSBjb3VudCBleGNlZWRlZDsgc3RhcnRpbmcgYWdhaW4NCj0+IHRmdHAg
MHg0QTAwMDAwMCByOGE3Nzk1LXNhbHZhdG9yLXgteGVuLmR0Yg0KVXNpbmcgZXRoZXJuZXRAZTY4
MDAwMDAgZGV2aWNlDQpURlRQIGZyb20gc2VydmVyIDE5Mi4xNjguMi4xMTsgb3VyIElQIGFkZHJl
c3MgaXMgMTkyLjE2OC4yLjUxDQpGaWxlbmFtZSAncjhhNzc5NS1zYWx2YXRvci14LXhlbi5kdGIn
Lg0KTG9hZCBhZGRyZXNzOiAweDRhMDAwMDAwDQpMb2FkaW5nOiAjIyMjIyMNCiAgICAgICAgIDIu
MiBNaUIvcw0KZG9uZQ0KQnl0ZXMgdHJhbnNmZXJyZWQgPSA4MDc4MyAoMTNiOGYgaGV4KQ0KPT4g
dGZ0cCAweDQ4MjAwMDAwIEltYWdlDQpVc2luZyBldGhlcm5ldEBlNjgwMDAwMCBkZXZpY2UNClRG
VFAgZnJvbSBzZXJ2ZXIgMTkyLjE2OC4yLjExOyBvdXIgSVAgYWRkcmVzcyBpcyAxOTIuMTY4LjIu
NTENCkZpbGVuYW1lICdJbWFnZScuDQpMb2FkIGFkZHJlc3M6IDB4NDgyMDAwMDANCkxvYWRpbmc6
ICMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMj
IyMjIyMjIyMjDQogICAgICAgICAjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMj
IyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIw0KICAgICAgICAgIyMjIyMjIyMjIyMjIyMjIyMj
IyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMNCiAgICAgICAg
ICMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMj
IyMjIyMjIyMjDQogICAgICAgICAjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMj
IyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIw0KICAgICAgICAgIyMjIyMjIyMjIyMjIyMjIyMj
IyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMNCiAgICAgICAg
ICMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMj
IyMjIyMjIyMjDQogICAgICAgICAjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMj
IyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIw0KICAgICAgICAgIyMjIyMjIyMjIyMjIyMjIyMj
IyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMNCiAgICAgICAg
ICMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMj
IyMjIyMjIyMjDQogICAgICAgICAjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMj
IyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIw0KICAgICAgICAgIyMjIyMjIyMjIyMjIyMjIyMj
IyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMNCiAgICAgICAg
ICMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMj
IyMjIyMjIyMjDQogICAgICAgICAjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMj
IyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIw0KICAgICAgICAgIyMjIyMjIyMjIyMjIyMjIyMj
IyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMNCiAgICAgICAg
ICMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMj
IyMjIyMjIyMjDQogICAgICAgICAjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMj
IyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIw0KICAgICAgICAgIyMjIyMjIyMjIyMjIyMjIyMj
IyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMNCiAgICAgICAg
ICMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMj
IyMjIyMjIyMjDQogICAgICAgICAjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMj
IyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIw0KICAgICAgICAgIyMjIyMjIyMjIyMjIyMjIyMj
IyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMNCiAgICAgICAg
ICMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMj
IyMjIyMjIyMjDQogICAgICAgICAjIyMjIyMjIyMjIyMNCiAgICAgICAgIDIuMyBNaUIvcw0KZG9u
ZQ0KQnl0ZXMgdHJhbnNmZXJyZWQgPSAyMTE1NjM1MiAoMTQyZDIwMCBoZXgpDQo9PiB0ZnRwIDB4
NDgwMDAwMDAgeGVuDQpVc2luZyBldGhlcm5ldEBlNjgwMDAwMCBkZXZpY2UNClRGVFAgZnJvbSBz
ZXJ2ZXIgMTkyLjE2OC4yLjExOyBvdXIgSVAgYWRkcmVzcyBpcyAxOTIuMTY4LjIuNTENCkZpbGVu
YW1lICd4ZW4nLg0KTG9hZCBhZGRyZXNzOiAweDQ4MDAwMDAwDQpMb2FkaW5nOiAjIyMjIyMjIyMj
IyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIw0KICAgICAg
ICAgMi4xIE1pQi9zDQpkb25lDQpCeXRlcyB0cmFuc2ZlcnJlZCA9IDg1MzMyOCAoZDA1NTAgaGV4
KQ0KPT4gYm9vdGkgMHg0ODAwMDAwMCAtIDB4NEEwMDAwMDANCiMjIEZsYXR0ZW5lZCBEZXZpY2Ug
VHJlZSBibG9iIGF0IDRhMDAwMDAwDQogICBCb290aW5nIHVzaW5nIHRoZSBmZHQgYmxvYiBhdCAw
eDRhMDAwMDAwDQogICBVc2luZyBEZXZpY2UgVHJlZSBpbiBwbGFjZSBhdCAwMDAwMDAwMDRhMDAw
MDAwLCBlbmQgMDAwMDAwMDA0YTAxNmI4ZQ0KDQpTdGFydGluZyBrZXJuZWwgLi4uDQoNCiBYZW4g
NC4xMi40LXByZQ0KKFhFTikgWGVuIHZlcnNpb24gNC4xMi40LXByZSAobWFuaWthbmRhbkApIChh
YXJjaDY0LWxpbnV4LWdudS1nY2MgKFNvdXJjZXJ5IENvZGVCZW5jaCAyMDE4LjA1LTcpIDcuMy4x
IDIwMTgwNjEyKSBkMA0KKFhFTikgTGF0ZXN0IENoYW5nZVNldDogVHVlIEp1bCA3IDE1OjEzOjQw
IDIwMjAgKzAyMDAgZ2l0OjE5ZTBiYmINCihYRU4pIFByb2Nlc3NvcjogNDExZmQwNzM6ICJBUk0g
TGltaXRlZCIsIHZhcmlhbnQ6IDB4MSwgcGFydCAweGQwNywgcmV2IDB4Mw0KKFhFTikgNjQtYml0
IEV4ZWN1dGlvbjoNCihYRU4pICAgUHJvY2Vzc29yIEZlYXR1cmVzOiAwMDAwMDAwMDAwMDAyMjIy
IDAwMDAwMDAwMDAwMDAwMDANCihYRU4pICAgICBFeGNlcHRpb24gTGV2ZWxzOiBFTDM6NjQrMzIg
RUwyOjY0KzMyIEVMMTo2NCszMiBFTDA6NjQrMzINCihYRU4pICAgICBFeHRlbnNpb25zOiBGbG9h
dGluZ1BvaW50IEFkdmFuY2VkU0lNRA0KKFhFTikgICBEZWJ1ZyBGZWF0dXJlczogMDAwMDAwMDAx
MDMwNTEwNiAwMDAwMDAwMDAwMDAwMDAwDQooWEVOKSAgIEF1eGlsaWFyeSBGZWF0dXJlczogMDAw
MDAwMDAwMDAwMDAwMCAwMDAwMDAwMDAwMDAwMDAwDQooWEVOKSAgIE1lbW9yeSBNb2RlbCBGZWF0
dXJlczogMDAwMDAwMDAwMDAwMTEyNCAwMDAwMDAwMDAwMDAwMDAwDQooWEVOKSAgIElTQSBGZWF0
dXJlczogIDAwMDAwMDAwMDAwMTExMjAgMDAwMDAwMDAwMDAwMDAwMA0KKFhFTikgMzItYml0IEV4
ZWN1dGlvbjoNCihYRU4pICAgUHJvY2Vzc29yIEZlYXR1cmVzOiAwMDAwMDEzMTowMDAxMTAxMQ0K
KFhFTikgICAgIEluc3RydWN0aW9uIFNldHM6IEFBcmNoMzIgQTMyIFRodW1iIFRodW1iLTIgSmF6
ZWxsZQ0KKFhFTikgICAgIEV4dGVuc2lvbnM6IEdlbmVyaWNUaW1lciBTZWN1cml0eQ0KKFhFTikg
ICBEZWJ1ZyBGZWF0dXJlczogMDMwMTAwNjYNCihYRU4pICAgQXV4aWxpYXJ5IEZlYXR1cmVzOiAw
MDAwMDAwMA0KKFhFTikgICBNZW1vcnkgTW9kZWwgRmVhdHVyZXM6IDEwMjAxMTA1IDQwMDAwMDAw
IDAxMjYwMDAwIDAyMTAyMjExDQooWEVOKSAgSVNBIEZlYXR1cmVzOiAwMjEwMTExMCAxMzExMjEx
MSAyMTIzMjA0MiAwMTExMjEzMSAwMDAxMTE0MiAwMDAxMTEyMQ0KKFhFTikgDQooWEVOKSAqKioq
KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqDQooWEVOKSBQYW5pYyBvbiBDUFUg
MDoNCihYRU4pIFRpbWVyOiBVbmFibGUgdG8gcmV0cmlldmUgSVJRIDAgZnJvbSB0aGUgZGV2aWNl
IHRyZWUNCihYRU4pICoqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioNCihY
RU4pIA0KKFhFTikgUmVib290IGluIGZpdmUgc2Vjb25kcy4uLg0K

--_002_67b4454424c4485fb59d542d052aaf2dinboschcom_--


From xen-devel-bounces@lists.xenproject.org Tue Jul 14 14:30:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jul 2020 14:30:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvLwU-00030V-5p; Tue, 14 Jul 2020 14:29:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CaI7=AZ=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jvLwS-00030Q-Tz
 for xen-devel@lists.xenproject.org; Tue, 14 Jul 2020 14:29:56 +0000
X-Inumbo-ID: 7cd94aec-c5de-11ea-bca7-bc764e2007e4
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7cd94aec-c5de-11ea-bca7-bc764e2007e4;
 Tue, 14 Jul 2020 14:29:56 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1594736996;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:content-transfer-encoding:in-reply-to;
 bh=b8om463gRNF4qLu5qLFh8zoCkH24j5z2o6bB4TYwNp4=;
 b=IfaKIxB4a9HLe7pLX4lEu5xnxG04frl8+Es0b9g9A8U2HCQrkopGBgW1
 Sr7tsbk3tj1n2+4jt5OMry/32V9Mjr0uPtCKdRTrrItgvzYap4RpWS/Kr
 Qy33t/vvQM8Ji66/igbNikJaeQmJkPt9EahH0On6/PsMOM4iKrLXHGOkJ s=;
Authentication-Results: esa5.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: DoJ72Sl64rH9CcCLYR0m4n/sCQF0S1+885X9QLdm5Z6gs5bV24vng9fU2/J7QjuyQsb+BbP+io
 Cs7c32lugl5WPFWUdAmijNhCTeYiSU0uP5bts31IOca2GEmay7wWUZo5EF/gvigiQDbWxMAF6S
 sS9WGPw8RUanxUt7PLtI8gvws85jffWDSjpwJ7uTTgWZa5qAgka+Pszr97G/OunK3f40btA2rA
 YTOqN4v5yeGAo8bkwY1QWkHZh9UKFsVEQo9cHdcgdhbV1wm0lB4ANAeHdfkOEq4jN98nyXU4nF
 6Ck=
X-SBRS: 2.7
X-MesageID: 22541779
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,350,1589256000"; d="scan'208";a="22541779"
Date: Tue, 14 Jul 2020 16:29:48 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Subject: Re: [PATCH v2 5/7] x86: generalize padding field handling
Message-ID: <20200714142948.GK7191@Air-de-Roger>
References: <bb6a96c6-b6b1-76ff-f9db-10bec0fb4ab1@suse.com>
 <83274416-2812-53c9-f8cb-23ebdf73782e@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <83274416-2812-53c9-f8cb-23ebdf73782e@suse.com>
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Paul Durrant <paul@xen.org>,
 George Dunlap <George.Dunlap@eu.citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Wed, Jul 01, 2020 at 12:27:37PM +0200, Jan Beulich wrote:
> The original intention was to ignore padding fields, but the pattern
> matched only ones whose names started with an underscore. Also match
> fields whose names are in line with the C spec by not having a leading
> underscore. (Note that the leading ^ in the sed regexps was pointless
> and hence get dropped.)
> 
> This requires adjusting some vNUMA macros, to avoid triggering
> "enumeration value ... not handled in switch" warnings, which - due to
> -Werror - would cause the build to fail. (I have to admit that I find
> these padding fields odd, when translation of the containing structure
> is needed anyway.)
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

> ---
> While for translation macros skipping padding fields pretty surely is a
> reasonable thing to do, we may want to consider not ignoring them when
> generating checking macros.
> 
> --- a/xen/common/compat/memory.c
> +++ b/xen/common/compat/memory.c
> @@ -354,10 +354,13 @@ int compat_memory_op(unsigned int cmd, X
>                  return -EFAULT;
>  
>  #define XLAT_vnuma_topology_info_HNDL_vdistance_h(_d_, _s_)		\
> +            case XLAT_vnuma_topology_info_vdistance_pad:                \
>              guest_from_compat_handle((_d_)->vdistance.h, (_s_)->vdistance.h)
>  #define XLAT_vnuma_topology_info_HNDL_vcpu_to_vnode_h(_d_, _s_)		\
> +            case XLAT_vnuma_topology_info_vcpu_to_vnode_pad:            \
>              guest_from_compat_handle((_d_)->vcpu_to_vnode.h, (_s_)->vcpu_to_vnode.h)
>  #define XLAT_vnuma_topology_info_HNDL_vmemrange_h(_d_, _s_)		\
> +            case XLAT_vnuma_topology_info_vmemrange_pad:                \
>              guest_from_compat_handle((_d_)->vmemrange.h, (_s_)->vmemrange.h)

I find this quite ugly; would it be better to just handle them with a
default case in the XLAT_ macros?

AFAICT it will also set (_d_)->vmemrange.h twice?

>  
>              XLAT_vnuma_topology_info(nat.vnuma, &cmp.vnuma);
> --- a/xen/tools/get-fields.sh
> +++ b/xen/tools/get-fields.sh
> @@ -218,7 +218,7 @@ for line in sys.stdin.readlines():
>  				fi
>  				;;
>  			[\,\;])
> -				if [ $level = 2 -a -n "$(echo $id | $SED 's,^_pad[[:digit:]]*,,')" ]
> +				if [ $level = 2 -a -n "$(echo $id | $SED 's,_\?pad[[:digit:]]*,,')" ]
>  				then
>  					if [ $kind = union ]
>  					then
> @@ -347,7 +347,7 @@ build_body ()
>  			fi
>  			;;
>  		[\,\;])
> -			if [ $level = 2 -a -n "$(echo $id | $SED 's,^_pad[[:digit:]]*,,')" ]
> +			if [ $level = 2 -a -n "$(echo $id | $SED 's,_\?pad[[:digit:]]*,,')" ]
>  			then
>  				if [ -z "$array" -a -z "$array_type" ]
>  				then
> @@ -437,7 +437,7 @@ check_field ()
>  				id=$token
>  				;;
>  			[\,\;])
> -				if [ $level = 2 -a -n "$(echo $id | $SED 's,^_pad[[:digit:]]*,,')" ]
> +				if [ $level = 2 -a -n "$(echo $id | $SED 's,_\?pad[[:digit:]]*,,')" ]
>  				then
>  					check_field $1 $2 $3.$id "$fields"
>  					test "$token" != ";" || fields= id=
> @@ -491,7 +491,7 @@ build_check ()
>  			test $level != 2 -o $arrlvl != 1 || id=$token
>  			;;
>  		[\,\;])
> -			if [ $level = 2 -a -n "$(echo $id | $SED 's,^_pad[[:digit:]]*,,')" ]
> +			if [ $level = 2 -a -n "$(echo $id | $SED 's,_\?pad[[:digit:]]*,,')" ]
>  			then
>  				check_field $kind $1 $id "$fields"
>  				test "$token" != ";" || fields= id=

I have to admit I'm not overly happy with this level of repetition
(not that you introduce it here), but I would prefer to have the
regexp in a single place if possible; it's easy to miss instances
IMO.
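For reference, the behavioural difference between the two patterns can be
checked quickly from a shell, and the test could be kept in a single place
along these lines (a sketch only — GNU sed is assumed for the \? BRE
extension, and the helper/variable names are illustrative, not from the
tree):

```shell
# Old pattern: only field names with a leading underscore were treated
# as padding (the substitution empties the name).
for id in _pad _pad1 pad1 data; do
    printf 'old: %-6s -> "%s"\n' "$id" "$(echo "$id" | sed 's,^_pad[[:digit:]]*,,')"
done

# New pattern: the underscore is optional, so plain "pad1" now matches too.
for id in _pad _pad1 pad1 data; do
    printf 'new: %-6s -> "%s"\n' "$id" "$(echo "$id" | sed 's,_\?pad[[:digit:]]*,,')"
done

# One way to keep the regexp in a single place (hypothetical helper):
PAD_RE='_\?pad[[:digit:]]*'
is_padding () {
    test -z "$(echo "$1" | sed "s,$PAD_RE,,")"
}

is_padding _pad3 && echo "_pad3 is padding"
is_padding vdistance || echo "vdistance is not padding"
```

With the old pattern "pad1" survives the substitution (so it is not
treated as padding); with the new one it is emptied, while a real field
name such as "data" or "vdistance" is left intact either way.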

Thanks.


From xen-devel-bounces@lists.xenproject.org Tue Jul 14 14:31:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jul 2020 14:31:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvLxb-0003hf-GH; Tue, 14 Jul 2020 14:31:07 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CaI7=AZ=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jvLxa-0003hX-8R
 for xen-devel@lists.xenproject.org; Tue, 14 Jul 2020 14:31:06 +0000
X-Inumbo-ID: a57251f7-c5de-11ea-931c-12813bfff9fa
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a57251f7-c5de-11ea-931c-12813bfff9fa;
 Tue, 14 Jul 2020 14:31:04 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1594737065;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:content-transfer-encoding:in-reply-to;
 bh=8FOpZIUFkT24rTxJ54S6+D0PJNk5JhVGQbVAYis6n7I=;
 b=Zxf+DndZy3wGPRYaRer9mcimy7QC0HKqjD2+ZdjKAjFzbar4OQVSZM2J
 dSX1rD378H3R0gAr+OWjIU4Rv+gyqFpMeAXiKezJKwofiAaHdlj24YDWO
 8QLHMot2p8s2LTpkDPM5bEDvU98IbOIshmD8BHTKr2cJAxZsBAGwsOWSc Q=;
Authentication-Results: esa1.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: 9qnxXcY4NEsRUKhOCIWlGWUEOW9Soj8rsp7Md0TeUyPO4z/fWV5rOeYKybZYpkpAwT/oqDBeKO
 80bLUEHNVBsZRGdVCiQFTzIFK9t9aAd0+tjxb/ByBltyVHQRpGEbUDMO4cP58bhbfVykLK8fjI
 sMuRaP8CkBUY0BH1H+idzoTyLSEeo4J6WZgcQwC7qjIQhdcnRNKc9QI1X6lNf4QZGv/y7U70pF
 ylgjNi9e+TX/W/zmSSNECBtC6H/LtEQabdusj3fIvdtPNChK4CrHsV/Duew7VV/N7juwASWd49
 nts=
X-SBRS: 2.7
X-MesageID: 22660398
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,350,1589256000"; d="scan'208";a="22660398"
Date: Tue, 14 Jul 2020 16:30:56 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Subject: Re: [PATCH v2 2/7] x86/mce: add compat struct checking for
 XEN_MC_inject_v2
Message-ID: <20200714143056.GL7191@Air-de-Roger>
References: <bb6a96c6-b6b1-76ff-f9db-10bec0fb4ab1@suse.com>
 <007679c8-84d5-2e91-e48e-68746741fb45@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <007679c8-84d5-2e91-e48e-68746741fb45@suse.com>
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Paul Durrant <paul@xen.org>,
 George Dunlap <George.Dunlap@eu.citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Wed, Jul 01, 2020 at 12:25:48PM +0200, Jan Beulich wrote:
> 84e364f2eda2 ("x86: add CMCI software injection interface") merely made
> sure things would build, without any concern about things actually
> working:
> - despite the addition of xenctl_bitmap to xlat.lst, the resulting macro
>   wasn't invoked anywhere (which would have led to recognizing that the
>   structure appeared to have no fully compatible layout, despite the use
>   of a 64-bit handle),
> - the interface struct itself was neither added to xlat.lst (and the
>   resulting macro then invoked) nor was any manual checking of
>   individual fields added.
> 
> Adjust compat header generation logic to retain XEN_GUEST_HANDLE_64(),
> which is intentionally laid out to be compatible between different size
> guests. Invoke the missing checking (implicitly through CHECK_mc).
> 
> No change in the resulting generated code.
> 
> Fixes: 84e364f2eda2 ("x86: add CMCI software injection interface")
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks.


From xen-devel-bounces@lists.xenproject.org Tue Jul 14 14:32:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jul 2020 14:32:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvLyZ-0003nV-RU; Tue, 14 Jul 2020 14:32:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CaI7=AZ=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jvLyY-0003nH-Mb
 for xen-devel@lists.xenproject.org; Tue, 14 Jul 2020 14:32:06 +0000
X-Inumbo-ID: ca3e2690-c5de-11ea-bb8b-bc764e2007e4
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ca3e2690-c5de-11ea-bb8b-bc764e2007e4;
 Tue, 14 Jul 2020 14:32:06 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1594737125;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:content-transfer-encoding:in-reply-to;
 bh=itH0grY7MffT3KoiW6iaLuvAdbO+5SArD63hAEr6TxE=;
 b=gXto9f/lmICFWo/C3I15YMe+5u09NQR/6I/R9pCnhTv+eJIiv4HhGUoD
 I2pYLWaobF1ThC/PBhAHux92VjldrkbV2csnbYWvjQEhDAmfYWID2kxNl
 Wb/ErNbmYYl/h7iCzGfFebJqI80NknZSfgTl2fB1cG/jgW3EotQHFE0TQ Q=;
Authentication-Results: esa6.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: l8GoQ23qy4S7V7XESdJ18zy/KiZNdFpDs5A2qjD1RZE1PXg9JqtiWwHqLD9TJuDKEI4zFYjUvh
 OugjLDZO+RsyLsVBu61su+PkptPNFpSCnC2YSK0d8GI6e6IvKaQtEAwvtfSaDGICi+xgmz8lyN
 /w3LIMIRe5rKuf+vDmf/0X7U1wQJXVUgylRFLbtLUBtxgfoOKXmWP2h0QWBImH3BopUWdFiFLX
 incJZEybQeDu3um0C58UrAGDmlw95mwy4DUyrVOWPheerc+8xvqOzbe8XM2PMuTbd1K9U0NDwp
 TXY=
X-SBRS: 2.7
X-MesageID: 22673717
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,350,1589256000"; d="scan'208";a="22673717"
Date: Tue, 14 Jul 2020 16:31:57 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Subject: Re: [PATCH v2 3/7] x86/mce: bring hypercall subop compat checking in
 sync again
Message-ID: <20200714143157.GM7191@Air-de-Roger>
References: <bb6a96c6-b6b1-76ff-f9db-10bec0fb4ab1@suse.com>
 <5d53a2e3-716c-2213-96e5-9d37371c482c@suse.com>
 <20200714111900.GI7191@Air-de-Roger>
 <f82edef5-ee75-b24c-0a24-03ed38486882@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <f82edef5-ee75-b24c-0a24-03ed38486882@suse.com>
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Paul Durrant <paul@xen.org>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Tue, Jul 14, 2020 at 01:47:11PM +0200, Jan Beulich wrote:
> On 14.07.2020 13:19, Roger Pau Monné wrote:
> > On Wed, Jul 01, 2020 at 12:26:54PM +0200, Jan Beulich wrote:
> >> Use a typedef in struct xen_mc also for the two subops "manually"
> >> translated in the handler, just for consistency. No functional
> >> change.
> > 
> > I'm slightly puzzled by the fact that mc_fetch is marked as needing
> > checking while mc_physcpuinfo is marked as needing translation;
> > shouldn't both be marked as needing translation? (since both need to
> > handle a guest pointer using XEN_GUEST_HANDLE)
> 
> I guess I'm confused - I see an exclamation mark on both respective

No, I was the one confused; you are right that both are marked as
needing translation.

Roger.


From xen-devel-bounces@lists.xenproject.org Tue Jul 14 14:32:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jul 2020 14:32:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvLzG-0003t4-50; Tue, 14 Jul 2020 14:32:50 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CaI7=AZ=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jvLzE-0003sp-QL
 for xen-devel@lists.xenproject.org; Tue, 14 Jul 2020 14:32:48 +0000
X-Inumbo-ID: e35c7dc0-c5de-11ea-931c-12813bfff9fa
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e35c7dc0-c5de-11ea-931c-12813bfff9fa;
 Tue, 14 Jul 2020 14:32:48 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1594737167;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:content-transfer-encoding:in-reply-to;
 bh=W+4mPB3k7E9WEQwkaLmFSNqFF5GBBRz9I004eyGpiao=;
 b=N3pQLoomMYgOjUC6ALF3R3hQjG6n4ehqzxnKj0tJzm5FAXcGcOkXG29G
 Ayg2yVlybZ36rjBgQ0gRroq6DFC96HtmR09QKfjOUvb+7Rsn9d2/Ov18B
 j6HsvrPp9pmuyQZtGGXmcWTsWcLsZyxG/Hxjp3TcNy48A0U4KE93sxfFH g=;
Authentication-Results: esa6.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: e9W0yGQB1TiwM2wn+v7JH3noqdCbU1DSvWWsyO2sIODaC0rR+Zfj0+962f6JwP/OImmHvA6ue6
 uaVci7yMfrSY48s2Q6LBQTt15I7HxUr/Nxvip2iHzFQQWZ/1Lz+kCpUMOUr1Iq206yqCIJ6Vn9
 AQciIStJH4ogvOR8mhZ5LoG/qVSehW2OeF/Wau2C2ofEndxMd1aDUoXSoENiN8yj5Cn9/nH4IW
 rJP/ViMjN84N/Cb9QYpX1k25sXj4kni+B0KifBwA5RkIcIJm2ud/mIbamb5R79EOK1KUs8xRED
 TKw=
X-SBRS: 2.7
X-MesageID: 22673810
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,350,1589256000"; d="scan'208";a="22673810"
Date: Tue, 14 Jul 2020 16:32:38 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Subject: Re: [PATCH v2 3/7] x86/mce: bring hypercall subop compat checking in
 sync again
Message-ID: <20200714143238.GN7191@Air-de-Roger>
References: <bb6a96c6-b6b1-76ff-f9db-10bec0fb4ab1@suse.com>
 <5d53a2e3-716c-2213-96e5-9d37371c482c@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <5d53a2e3-716c-2213-96e5-9d37371c482c@suse.com>
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Paul Durrant <paul@xen.org>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Wed, Jul 01, 2020 at 12:26:54PM +0200, Jan Beulich wrote:
> Use a typedef in struct xen_mc also for the two subops "manually"
> translated in the handler, just for consistency. No functional
> change.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks.


From xen-devel-bounces@lists.xenproject.org Tue Jul 14 14:58:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jul 2020 14:58:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvMNo-0005wh-HE; Tue, 14 Jul 2020 14:58:12 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CaI7=AZ=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jvMNn-0005wc-PY
 for xen-devel@lists.xenproject.org; Tue, 14 Jul 2020 14:58:11 +0000
X-Inumbo-ID: 6f19a9e8-c5e2-11ea-9320-12813bfff9fa
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6f19a9e8-c5e2-11ea-9320-12813bfff9fa;
 Tue, 14 Jul 2020 14:58:11 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1594738690;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:in-reply-to;
 bh=8GwmC1NYm0grqvw7mMFRcv5MLXu3sjy/KxOFU51OiTI=;
 b=Wr3g114WHbd49hER/E+Z2bxM5nwxmyQEh7DdoUVpMK/qjkXtICEjCuGv
 j7g/CI4JuwVu4V+OdH5PiXCnpUCn/4LW8cmKnyt8AT5xQgXIsKuypNHr6
 mCtqd8qQhaZ9JTFrES+QKRdD2YKHhMqed6evFCJw8I25/tDkBzMb1nA+N E=;
Authentication-Results: esa5.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: ObyGNI763xUGEZXGQWCbEQOyEiBqPG5Lau/VeEp9Dokf5bC8HmXx/m8Pxz4HQd+Bg5VE9tVddP
 SImemHO1A7NWUjcO2Gudq+OlPNQ9c1I+oDEE1S6I1Tp7KAEbn+4EGNpuoTA8zUoFpyHC+qS2vi
 eQzNs63d/PqIshaWkQkI9uRlehnHthegKvagiPg+n44PI32aSjJNUh3uHna3GOWNPuKM/vkFlS
 X8Vkm3Iegz+M0XmIBgVoyruZrpSc4WLiTuIExAk87FX1xTGfS4Vp7/Zy3RZk2Sb6sqiYfyPa3Q
 ubw=
X-SBRS: 2.7
X-MesageID: 22544735
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,350,1589256000"; d="scan'208";a="22544735"
Date: Tue, 14 Jul 2020 16:58:00 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Subject: Re: [PATCH v2 6/7] flask: drop dead compat translation code
Message-ID: <20200714145800.GO7191@Air-de-Roger>
References: <bb6a96c6-b6b1-76ff-f9db-10bec0fb4ab1@suse.com>
 <7711f68d-394e-a74f-81fa-51f8447174ce@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
In-Reply-To: <7711f68d-394e-a74f-81fa-51f8447174ce@suse.com>
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Paul Durrant <paul@xen.org>,
 George Dunlap <George.Dunlap@eu.citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Daniel de Graaf <dgdegra@tycho.nsa.gov>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Wed, Jul 01, 2020 at 12:28:07PM +0200, Jan Beulich wrote:
> Translation macros aren't needed at all (or else a devicetree_label
> entry would have been missing), and userlist was removed quite some
> time ago.
> 
> No functional change.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> --- a/xen/include/xlat.lst
> +++ b/xen/include/xlat.lst
> @@ -148,14 +148,11 @@
>  ?	xenoprof_init			xenoprof.h
>  ?	xenoprof_passive		xenoprof.h
>  ?	flask_access			xsm/flask_op.h
> -!	flask_boolean			xsm/flask_op.h
>  ?	flask_cache_stats		xsm/flask_op.h
>  ?	flask_hash_stats		xsm/flask_op.h
> -!	flask_load			xsm/flask_op.h
>  ?	flask_ocontext			xsm/flask_op.h
>  ?	flask_peersid			xsm/flask_op.h
>  ?	flask_relabel			xsm/flask_op.h
>  ?	flask_setavc_threshold		xsm/flask_op.h
>  ?	flask_setenforce		xsm/flask_op.h
> -!	flask_sid_context		xsm/flask_op.h
>  ?	flask_transition		xsm/flask_op.h

Shouldn't those become checks then?

Sorry, I realize my knowledge of all this compat stuff is very poor.

Thanks.


From xen-devel-bounces@lists.xenproject.org Tue Jul 14 15:04:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jul 2020 15:04:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvMTM-0006nR-6I; Tue, 14 Jul 2020 15:03:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CaI7=AZ=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jvMTL-0006nM-FC
 for xen-devel@lists.xenproject.org; Tue, 14 Jul 2020 15:03:55 +0000
X-Inumbo-ID: 3bc51a18-c5e3-11ea-bb8b-bc764e2007e4
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3bc51a18-c5e3-11ea-bb8b-bc764e2007e4;
 Tue, 14 Jul 2020 15:03:54 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1594739036;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:content-transfer-encoding:in-reply-to;
 bh=ES+PpSMP/jGPxKETpq2ksPRVP5enEb1jmX7U4+BWQZQ=;
 b=CY+0b8O5wOvEp1fCzQ13wgO8TnF83mzVBxPtOa1rihUwzaKrCHJ+2BMw
 9eVfN7vza2tguxKWMEfBRDsHAqVcXfVRLuGnMUT9Zf3hQDlVF2wMfI+wD
 O5Mb7gB7NLPBW2pz8NwyfD+KHqvfRtS5YN8GJQdaR+GzBQtjXX9H6ZiE6 I=;
Authentication-Results: esa1.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: MwyClK8ACq6LlffZj8u0yXzH3r95/eznJztR+uyRriagpgkmGer/jfyrt++WjsdPGxUpNz8RKO
 W89tPHZ0oHGMVBLO0ICaieLnqKIhF2QWpS3SqNEP7oFYtallmXdxAuyKvN925+GS3U4Z2w3nO0
 UgxVUmJjt9iUn/sqXQJnNSwZCf1s9unnNNvXz7sY3sF7sGB1HUcfI9ouHJs+3YFVHa4Nn1U50q
 +n1XGDz54IWxAOoGA5bb0SV6nAEL09tGXJHRxqIqz2wVYzTbnitSp6p9pJYfrfrA7z5xdgt1ix
 2b4=
X-SBRS: 2.7
X-MesageID: 22664202
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,350,1589256000"; d="scan'208";a="22664202"
Date: Tue, 14 Jul 2020 17:03:44 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Subject: Re: [PATCH v2 7/7] x86: only generate compat headers actually needed
Message-ID: <20200714150344.GP7191@Air-de-Roger>
References: <bb6a96c6-b6b1-76ff-f9db-10bec0fb4ab1@suse.com>
 <5892f237-cfcf-eb19-058c-bd4f45c7bc97@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <5892f237-cfcf-eb19-058c-bd4f45c7bc97@suse.com>
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Paul Durrant <paul@xen.org>,
 George Dunlap <George.Dunlap@eu.citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Wed, Jul 01, 2020 at 12:28:27PM +0200, Jan Beulich wrote:
> As was already the case for XSM/Flask, avoid generating compat headers
> when they're not going to be needed. To address resulting build issues
> - move compat/hvm/dm_op.h inclusion to the only source file needing it,
> - add a little bit of #ifdef-ary.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

TBH I wouldn't mind generating the compat headers even when not
actually used by the build; it's sometimes useful to have them for
review context without having to play with the build options.

Thanks.


From xen-devel-bounces@lists.xenproject.org Tue Jul 14 15:05:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jul 2020 15:05:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvMVK-0006tU-Iz; Tue, 14 Jul 2020 15:05:58 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CaI7=AZ=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jvMVJ-0006tO-Ft
 for xen-devel@lists.xenproject.org; Tue, 14 Jul 2020 15:05:57 +0000
X-Inumbo-ID: 849cfe5e-c5e3-11ea-9322-12813bfff9fa
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 849cfe5e-c5e3-11ea-9322-12813bfff9fa;
 Tue, 14 Jul 2020 15:05:56 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1594739157;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:content-transfer-encoding:in-reply-to;
 bh=dJQoIV9pNeRqJsoQNrNGEJdhBKWs5EPfNOlMfrHHwhU=;
 b=Ue71bs+jc8KBaPiN8wxwq0RykUCN+r90lOSvKHUDsszZ8SdV6rMnW4+y
 esgwcpIZKcx7HtgsUyl1OnlLqLwYGWgB3UVaXPR4WibAFNqhNWtM7lCEO
 mgsp3ffJqMamKIWlqlIbN1gjw1MphXTqGLqeWz46oqmJ6+QRTAfIii0v/ Q=;
Authentication-Results: esa2.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: tAalSOLjbJ2ZM3IueLa3z4mYTIqgwuTXydjV8WZD7B+wfd01YjOsXgRDph5qm02QIQDJnerORQ
 qLQvyhq1N6GYkLWV+T2nTQLKeuS6oVMYm5f22iN7CgllsTrHxHlonFM5dXFFxtGea4SOw6iJod
 LhQwWbF+MH7k9Ga0cjgMFKdOCnOysd7p5/IvmsfKR/cnGjVtA38sDcsM+pcvVQuhe0B1+AF3hg
 VqnK8cvNarB/XTSq+vsMV0Pnt3OVLmuzSkX8KjNax8oFnIjDD2xijtl1Hh3jtVBZxWsEwE3Twu
 oSk=
X-SBRS: 2.7
X-MesageID: 22350201
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,350,1589256000"; d="scan'208";a="22350201"
Date: Tue, 14 Jul 2020 17:05:48 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: =?utf-8?Q?Micha=C5=82_Leszczy=C5=84ski?= <michal.leszczynski@cert.pl>
Subject: Re: [PATCH v6 00/11] Implement support for external IPT monitoring
Message-ID: <20200714150548.GQ7191@Air-de-Roger>
References: <cover.1594150543.git.michal.leszczynski@cert.pl>
 <878376484.57739222.1594732315968.JavaMail.zimbra@cert.pl>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <878376484.57739222.1594732315968.JavaMail.zimbra@cert.pl>
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin
 Tian <kevin.tian@intel.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien@xen.org>, Jan Beulich <jbeulich@suse.com>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, luwei kang <luwei.kang@intel.com>,
 tamas lengyel <tamas.lengyel@intel.com>, Jun Nakajima <jun.nakajima@intel.com>,
 Anthony PERARD <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Tue, Jul 14, 2020 at 03:11:55PM +0200, Michał Leszczyński wrote:
> Kind reminder about this new patch version for external IPT monitoring.

It's on my queue, but with XenSummit I haven't been able to take a
look; I'll try to get to it between today and tomorrow.

Roger.


From xen-devel-bounces@lists.xenproject.org Tue Jul 14 15:56:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jul 2020 15:56:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvNIK-0002cf-9m; Tue, 14 Jul 2020 15:56:36 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=xDy+=AZ=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jvNIJ-0002ca-Fh
 for xen-devel@lists.xenproject.org; Tue, 14 Jul 2020 15:56:35 +0000
X-Inumbo-ID: 973874a6-c5ea-11ea-9335-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 973874a6-c5ea-11ea-9335-12813bfff9fa;
 Tue, 14 Jul 2020 15:56:33 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=9/TyLAsML8lzOOFLGuMxqYe4LgwFUKBaCuFoIa5y6TA=; b=nyJBXvtMtEhyE1asj2zChjPmx
 gT5qNuuVa2YTM+e2330LJD/E10/rD4tx6fXUSd4W0UIokoszj1g1nPsxnCa/AzQUW8oSfXTNWV6YX
 2/x5XPHkqgByrLVhH58sqIPu36Zqkcz4101PpZ7Agvckwi0eAmHDG+Tzuiy5Cx03zDYmg=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jvNIH-0000og-MR; Tue, 14 Jul 2020 15:56:33 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jvNIH-0006B3-E8; Tue, 14 Jul 2020 15:56:33 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jvNIH-0006q5-DW; Tue, 14 Jul 2020 15:56:33 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151892-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.14-testing baseline test] 151892: tolerable FAIL
X-Osstest-Failures: xen-4.14-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 xen-4.14-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-4.14-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-4.14-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-4.14-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-4.14-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-4.14-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-4.14-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-4.14-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
X-Osstest-Versions-This: xen=02d69864b51a4302a148c28d6d391238a6778b4b
X-Osstest-Versions-That: xen=02d69864b51a4302a148c28d6d391238a6778b4b
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 14 Jul 2020 15:56:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

"Old" tested version had not actually been tested; therefore in this
flight we test it, rather than a new candidate.  The baseline, if
any, is the most recent actually tested revision.

flight 151892 xen-4.14-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151892/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop             fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop              fail never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop             fail never pass

version targeted for testing:
 xen                  02d69864b51a4302a148c28d6d391238a6778b4b
baseline version:
 xen                  02d69864b51a4302a148c28d6d391238a6778b4b

Last test of basis   151892  2020-07-14 09:21:35 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Tue Jul 14 16:43:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jul 2020 16:43:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvO15-0007Re-Cz; Tue, 14 Jul 2020 16:42:51 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=xDy+=AZ=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jvO14-0007Qt-VX
 for xen-devel@lists.xenproject.org; Tue, 14 Jul 2020 16:42:51 +0000
X-Inumbo-ID: 0ae7461a-c5f1-11ea-b7bb-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0ae7461a-c5f1-11ea-b7bb-bc764e2007e4;
 Tue, 14 Jul 2020 16:42:45 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=YIFTkEkgmzzF9+W5g34Wsi13xmDHkXmgkdLdNg2j/p4=; b=XCWcSXQceMeQ/uprG2fPByL21
 Z+EjandjZFlUI1/j2RyKiMN2iQFowy3ZEEdxuGIKsyEafArDMwWkvIU1fPisV2ZE+xVpJeJwxaJrB
 6u8Z+0cdvN+3amVlhZDpmvwUFQI64JY+Y2eBUBChw6WPmMZ8v9ubJkg8sUHqhTA5gz44E=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jvO0y-0002HQ-JD; Tue, 14 Jul 2020 16:42:44 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jvO0y-0007Zq-7h; Tue, 14 Jul 2020 16:42:44 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jvO0y-0002rr-6v; Tue, 14 Jul 2020 16:42:44 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151883-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 151883: regressions - FAIL
X-Osstest-Failures: libvirt:build-amd64-libvirt:libvirt-build:fail:regression
 libvirt:build-i386-libvirt:libvirt-build:fail:regression
 libvirt:build-arm64-libvirt:libvirt-build:fail:regression
 libvirt:build-armhf-libvirt:libvirt-build:fail:regression
 libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
 libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
X-Osstest-Versions-This: libvirt=2f470a4fb1edbe2da702e398314b9db201bb991e
X-Osstest-Versions-That: libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 14 Jul 2020 16:42:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151883 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151883/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              2f470a4fb1edbe2da702e398314b9db201bb991e
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z    4 days
Failing since        151818  2020-07-11 04:18:52 Z    3 days    4 attempts
Testing same since   151883  2020-07-14 04:19:11 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Bastien Orivel <bastien.orivel@diateam.net>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Jin Yan <jinyan12@huawei.com>
  Laine Stump <laine@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Pavel Hrdina <phrdina@redhat.com>
  Prathamesh Chavan <pc44800@gmail.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1010 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Jul 14 17:16:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jul 2020 17:16:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvOXd-0001dP-9G; Tue, 14 Jul 2020 17:16:29 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=xDy+=AZ=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jvOXc-0001d5-52
 for xen-devel@lists.xenproject.org; Tue, 14 Jul 2020 17:16:28 +0000
X-Inumbo-ID: bcee1b64-c5f5-11ea-9342-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id bcee1b64-c5f5-11ea-9342-12813bfff9fa;
 Tue, 14 Jul 2020 17:16:21 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=3w6gYnKBua3p356eh4FaJ3t7dtGbQ82+hMuNktdHiFQ=; b=GY/nXZRvMb2krbaW82RdBbrG5
 H643YwLEbayLOP00VGl+PnGJ8XclG2ys/2KC/llJB2NJx5/RRl+tiQYNO8/CHoE98WHouhhPs8B4V
 MFPa/TOgbu2JN4TMKI93wNEt95wEGd2icoUaSc1zRaPxb6DfL/3RC1sfcEhk5Lb382GZw=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jvOXV-0002xZ-1g; Tue, 14 Jul 2020 17:16:21 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jvOXU-0000UY-Iq; Tue, 14 Jul 2020 17:16:20 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jvOXU-0006Ux-ID; Tue, 14 Jul 2020 17:16:20 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151881-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 151881: all pass - PUSHED
X-Osstest-Versions-This: ovmf=9c6f3545aee0808b78a0ad4480b6eb9d24989dc1
X-Osstest-Versions-That: ovmf=d9a4084544134eea50f62e88d79c466ae91f0455
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 14 Jul 2020 17:16:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151881 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151881/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 9c6f3545aee0808b78a0ad4480b6eb9d24989dc1
baseline version:
 ovmf                 d9a4084544134eea50f62e88d79c466ae91f0455

Last test of basis   151867  2020-07-13 16:09:22 Z    1 days
Testing same since   151881  2020-07-14 03:39:27 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ray Ni <ray.ni@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   d9a4084544..9c6f3545ae  9c6f3545aee0808b78a0ad4480b6eb9d24989dc1 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Tue Jul 14 18:36:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jul 2020 18:36:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvPn9-0008HZ-6c; Tue, 14 Jul 2020 18:36:35 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=xDy+=AZ=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jvPn8-0008H9-Og
 for xen-devel@lists.xenproject.org; Tue, 14 Jul 2020 18:36:34 +0000
X-Inumbo-ID: ecc7b876-c600-11ea-bb8b-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ecc7b876-c600-11ea-bb8b-bc764e2007e4;
 Tue, 14 Jul 2020 18:36:26 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Message-Id:Subject:To:Sender:
 Reply-To:Cc:MIME-Version:Content-Type:Content-Transfer-Encoding:Content-ID:
 Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc
 :Resent-Message-ID:In-Reply-To:References:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=q61d333kC4dEd+BnB92LNG+p7/LsgGt+WJHqqOQgvew=; b=0uuSzbTgSdxgXDFIoNkun73wI4
 Q3co2M4FDgBxWRjKIFXDr2kifCSXxH+uxSg8LPr7KBIycm4tlXYtNCsxVX3/72Z1E/OvmFuBBOWaw
 kbeReF/HO9ykzNSXhjLjWdvJt25A8BJBYYMkvW+/3USm+cjfZAnrW/EGckiW7DXLo8B8=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jvPmz-0004f1-Tr; Tue, 14 Jul 2020 18:36:25 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jvPmz-0003D3-Ge; Tue, 14 Jul 2020 18:36:25 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jvPmz-0005II-Eh; Tue, 14 Jul 2020 18:36:25 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Subject: [qemu-mainline bisection] complete test-amd64-amd64-qemuu-nested-intel
Message-Id: <E1jvPmz-0005II-Eh@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 14 Jul 2020 18:36:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

branch xen-unstable
xenbranch xen-unstable
job test-amd64-amd64-qemuu-nested-intel
testid debian-hvm-install

Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://git.qemu.org/qemu.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  qemuu git://git.qemu.org/qemu.git
  Bug introduced:  81cb05732efb36971901c515b007869cc1d3a532
  Bug not present: d6b78ac8ecf94f56dbfbecc23fb4365d8772a41a
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/151897/


  commit 81cb05732efb36971901c515b007869cc1d3a532
  Author: Markus Armbruster <armbru@redhat.com>
  Date:   Tue Jun 9 14:23:37 2020 +0200
  
      qdev: Assert devices are plugged into a bus that can take them
      
      This would have caught some of the bugs I just fixed.
      
      Signed-off-by: Markus Armbruster <armbru@redhat.com>
      Reviewed-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
      Message-Id: <20200609122339.937862-23-armbru@redhat.com>


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/qemu-mainline/test-amd64-amd64-qemuu-nested-intel.debian-hvm-install.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/qemu-mainline/test-amd64-amd64-qemuu-nested-intel.debian-hvm-install --summary-out=tmp/151897.bisection-summary --basis-template=151065 --blessings=real,real-bisect qemu-mainline test-amd64-amd64-qemuu-nested-intel debian-hvm-install
Searching for failure / basis pass:
 151874 fail [host=albana0] / 151149 ok.
Failure / basis pass flights: 151874 / 151149
(tree with no url: minios)
Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://git.qemu.org/qemu.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git
Latest c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 d9a4084544134eea50f62e88d79c466ae91f0455 3c659044118e34603161457db9934a34f816d78b 20c1df5476e1e9b5d3f5b94f9f3ce01d21f14c46 88ab0c15525ced2eefe39220742efe4769089ad8 02d69864b51a4302a148c28d6d391238a6778b4b
Basis pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9af1064995d479df92e399a682ba7e4b2fc78415 3c659044118e34603161457db9934a34f816d78b 7d3660e79830a069f1848bb4fa1cdf8f666424fb 2e3de6253422112ae43e608661ba94ea6b345694 b91825f628c9a62cf2a3a0d972ea81484a8b7fce
Generating revisions with ./adhoc-revtuple-generator  git://xenbits.xen.org/linux-pvops.git#c3038e718a19fc596f7b1baba0f83d5146dc7784-c3038e718a19fc596f7b1baba0f83d5146dc7784 git://xenbits.xen.org/osstest/linux-firmware.git#c530a75c1e6a472b0eb9558310b518f0dfcd8860-c530a75c1e6a472b0eb9558310b518f0dfcd8860 git://xenbits.xen.org/osstest/ovmf.git#9af1064995d479df92e399a682ba7e4b2fc78415-d9a4084544134eea50f62e88d79c466ae91f0455 git://xenbits.xen.org/qemu-xen-traditional.git#3c659044118e34603161457db9934a34f816d78b-3c659044118e34603161457db9934a34f816d78b git://git.qemu.org/qemu.git#7d3660e79830a069f1848bb4fa1cdf8f666424fb-20c1df5476e1e9b5d3f5b94f9f3ce01d21f14c46 git://xenbits.xen.org/osstest/seabios.git#2e3de6253422112ae43e608661ba94ea6b345694-88ab0c15525ced2eefe39220742efe4769089ad8 git://xenbits.xen.org/xen.git#b91825f628c9a62cf2a3a0d972ea81484a8b7fce-02d69864b51a4302a148c28d6d391238a6778b4b
Use of uninitialized value $parents in array dereference at ./adhoc-revtuple-generator line 465.
Use of uninitialized value in concatenation (.) or string at ./adhoc-revtuple-generator line 465.
Use of uninitialized value $parents in array dereference at ./adhoc-revtuple-generator line 465.
Use of uninitialized value in concatenation (.) or string at ./adhoc-revtuple-generator line 465.
Use of uninitialized value $parents in array dereference at ./adhoc-revtuple-generator line 465.
Use of uninitialized value in concatenation (.) or string at ./adhoc-revtuple-generator line 465.
Use of uninitialized value $parents in array dereference at ./adhoc-revtuple-generator line 465.
Use of uninitialized value in concatenation (.) or string at ./adhoc-revtuple-generator line 465.
Use of uninitialized value $parents in array dereference at ./adhoc-revtuple-generator line 465.
Use of uninitialized value in concatenation (.) or string at ./adhoc-revtuple-generator line 465.
Use of uninitialized value $parents in array dereference at ./adhoc-revtuple-generator line 465.
Use of uninitialized value in concatenation (.) or string at ./adhoc-revtuple-generator line 465.
Use of uninitialized value $parents in array dereference at ./adhoc-revtuple-generator line 465.
Use of uninitialized value in concatenation (.) or string at ./adhoc-revtuple-generator line 465.
Use of uninitialized value $parents in array dereference at ./adhoc-revtuple-generator line 465.
Use of uninitialized value in concatenation (.) or string at ./adhoc-revtuple-generator line 465.
Use of uninitialized value $parents in array dereference at ./adhoc-revtuple-generator line 465.
Use of uninitialized value in concatenation (.) or string at ./adhoc-revtuple-generator line 465.
Loaded 125292 nodes in revision graph
Searching for test results:
 151160 pass irrelevant
 151101 pass irrelevant
 151065 [host=fiano0]
 151149 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9af1064995d479df92e399a682ba7e4b2fc78415 3c659044118e34603161457db9934a34f816d78b 7d3660e79830a069f1848bb4fa1cdf8f666424fb 2e3de6253422112ae43e608661ba94ea6b345694 b91825f628c9a62cf2a3a0d972ea81484a8b7fce
 151212 pass irrelevant
 151168 pass irrelevant
 151221 fail irrelevant
 151171 pass irrelevant
 151193 pass irrelevant
 151172 pass irrelevant
 151215 pass irrelevant
 151174 pass irrelevant
 151176 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9af1064995d479df92e399a682ba7e4b2fc78415 3c659044118e34603161457db9934a34f816d78b 7d3660e79830a069f1848bb4fa1cdf8f666424fb 2e3de6253422112ae43e608661ba94ea6b345694 b91825f628c9a62cf2a3a0d972ea81484a8b7fce
 151177 pass irrelevant
 151178 pass irrelevant
 151179 pass irrelevant
 151180 pass irrelevant
 151199 pass irrelevant
 151182 pass irrelevant
 151183 pass irrelevant
 151216 pass irrelevant
 151185 pass irrelevant
 151189 pass irrelevant
 151175 fail irrelevant
 151218 pass irrelevant
 151190 pass irrelevant
 151202 pass irrelevant
 151220 pass irrelevant
 151207 pass irrelevant
 151211 pass irrelevant
 151241 fail irrelevant
 151286 fail irrelevant
 151269 fail irrelevant
 151328 fail irrelevant
 151304 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 322969adf1fb3d6cfbd613f35121315715aff2ed 3c659044118e34603161457db9934a34f816d78b 171199f56f5f9bdf1e5d670d09ef1351d8f01bae 2e3de6253422112ae43e608661ba94ea6b345694 fde76f895d0aa817a1207d844d793239c6639bc6
 151377 fail irrelevant
 151353 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1a992030522622c42aa063788b3276789c56c1e1 3c659044118e34603161457db9934a34f816d78b d4b78317b7cf8c0c635b70086503813f79ff21ec 2e3de6253422112ae43e608661ba94ea6b345694 fde76f895d0aa817a1207d844d793239c6639bc6
 151399 fail irrelevant
 151414 fail irrelevant
 151435 fail irrelevant
 151459 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 0f01cec52f4794777feb067e4fa0bfcedfdc124e 3c659044118e34603161457db9934a34f816d78b e7651153a8801dad6805d450ea8bef9b46c1adf5 88ab0c15525ced2eefe39220742efe4769089ad8 88cfd062e8318dfeb67c7d2eb50b6cd224b0738a
 151471 fail irrelevant
 151485 fail irrelevant
 151500 fail irrelevant
 151518 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 00217f1919270007d7a911f89b32e39b9dcaa907 3c659044118e34603161457db9934a34f816d78b fc1bff958998910ec8d25db86cd2f53ff125f7ab 88ab0c15525ced2eefe39220742efe4769089ad8 23ca7ec0ba620db52a646d80e22f9703a6589f66
 151547 fail irrelevant
 151598 fail irrelevant
 151577 fail irrelevant
 151622 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 627d1d6693b0594d257dbe1a3363a8d4bd4d8307 3c659044118e34603161457db9934a34f816d78b 7b7515702012219410802a168ae4aa45b72a44df 88ab0c15525ced2eefe39220742efe4769089ad8 be63d9d47f571a60d70f8fb630c03871312d9655
 151656 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 627d1d6693b0594d257dbe1a3363a8d4bd4d8307 3c659044118e34603161457db9934a34f816d78b eb6490f544388dd24c0d054a96dd304bc7284450 88ab0c15525ced2eefe39220742efe4769089ad8 be63d9d47f571a60d70f8fb630c03871312d9655
 151634 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 627d1d6693b0594d257dbe1a3363a8d4bd4d8307 3c659044118e34603161457db9934a34f816d78b eb6490f544388dd24c0d054a96dd304bc7284450 88ab0c15525ced2eefe39220742efe4769089ad8 be63d9d47f571a60d70f8fb630c03871312d9655
 151645 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 627d1d6693b0594d257dbe1a3363a8d4bd4d8307 3c659044118e34603161457db9934a34f816d78b eb6490f544388dd24c0d054a96dd304bc7284450 88ab0c15525ced2eefe39220742efe4769089ad8 be63d9d47f571a60d70f8fb630c03871312d9655
 151669 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 627d1d6693b0594d257dbe1a3363a8d4bd4d8307 3c659044118e34603161457db9934a34f816d78b eb6490f544388dd24c0d054a96dd304bc7284450 88ab0c15525ced2eefe39220742efe4769089ad8 be63d9d47f571a60d70f8fb630c03871312d9655
 151685 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 627d1d6693b0594d257dbe1a3363a8d4bd4d8307 3c659044118e34603161457db9934a34f816d78b eb6490f544388dd24c0d054a96dd304bc7284450 88ab0c15525ced2eefe39220742efe4769089ad8 be63d9d47f571a60d70f8fb630c03871312d9655
 151704 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 627d1d6693b0594d257dbe1a3363a8d4bd4d8307 3c659044118e34603161457db9934a34f816d78b eb6490f544388dd24c0d054a96dd304bc7284450 88ab0c15525ced2eefe39220742efe4769089ad8 be63d9d47f571a60d70f8fb630c03871312d9655
 151778 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 bdafda8c457eb90c65f37026589b54258300f05c 3c659044118e34603161457db9934a34f816d78b aff2caf6b3fbab1062e117a47b66d27f7fd2f272 88ab0c15525ced2eefe39220742efe4769089ad8 3fdc211b01b29f252166937238efe02d15cb5780
 151721 fail irrelevant
 151763 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 bdafda8c457eb90c65f37026589b54258300f05c 3c659044118e34603161457db9934a34f816d78b 48f22ad04ead83e61b4b35871ec6f6109779b791 88ab0c15525ced2eefe39220742efe4769089ad8 3fdc211b01b29f252166937238efe02d15cb5780
 151744 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 bdafda8c457eb90c65f37026589b54258300f05c 3c659044118e34603161457db9934a34f816d78b 8796c64ecdfd34be394ea277aaaaa53df0c76996 88ab0c15525ced2eefe39220742efe4769089ad8 3fdc211b01b29f252166937238efe02d15cb5780
 151804 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 bdafda8c457eb90c65f37026589b54258300f05c 3c659044118e34603161457db9934a34f816d78b 45db94cc90c286a9965a285ba19450f448760a09 88ab0c15525ced2eefe39220742efe4769089ad8 3fdc211b01b29f252166937238efe02d15cb5780
 151816 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 bdafda8c457eb90c65f37026589b54258300f05c 3c659044118e34603161457db9934a34f816d78b 45db94cc90c286a9965a285ba19450f448760a09 88ab0c15525ced2eefe39220742efe4769089ad8 3fdc211b01b29f252166937238efe02d15cb5780
 151833 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f45e3a4afa65a45ea1a956a7c5e7410ff40190d1 3c659044118e34603161457db9934a34f816d78b 827937158b72ce2265841ff528bba3c44a1bfbc8 88ab0c15525ced2eefe39220742efe4769089ad8 02d69864b51a4302a148c28d6d391238a6778b4b
 151882 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 3c659044118e34603161457db9934a34f816d78b 589b1be07c060e583d9f758ff0cb10e0f1ff242f 2e3de6253422112ae43e608661ba94ea6b345694 1251402caf8685f45d9d580f01583370f7e2d272
 151865 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9af1064995d479df92e399a682ba7e4b2fc78415 3c659044118e34603161457db9934a34f816d78b 7d3660e79830a069f1848bb4fa1cdf8f666424fb 2e3de6253422112ae43e608661ba94ea6b345694 b91825f628c9a62cf2a3a0d972ea81484a8b7fce
 151885 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 3c659044118e34603161457db9934a34f816d78b d6b78ac8ecf94f56dbfbecc23fb4365d8772a41a 2e3de6253422112ae43e608661ba94ea6b345694 1251402caf8685f45d9d580f01583370f7e2d272
 151866 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f45e3a4afa65a45ea1a956a7c5e7410ff40190d1 3c659044118e34603161457db9934a34f816d78b d34498309cff7560ac90c422c56e3137e6a64b19 88ab0c15525ced2eefe39220742efe4769089ad8 02d69864b51a4302a148c28d6d391238a6778b4b
 151855 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f45e3a4afa65a45ea1a956a7c5e7410ff40190d1 3c659044118e34603161457db9934a34f816d78b d34498309cff7560ac90c422c56e3137e6a64b19 88ab0c15525ced2eefe39220742efe4769089ad8 02d69864b51a4302a148c28d6d391238a6778b4b
 151841 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f45e3a4afa65a45ea1a956a7c5e7410ff40190d1 3c659044118e34603161457db9934a34f816d78b 2033cc6efa98b831d7839e367aa7d5aa74d0750f 88ab0c15525ced2eefe39220742efe4769089ad8 02d69864b51a4302a148c28d6d391238a6778b4b
 151887 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 3c659044118e34603161457db9934a34f816d78b 81cb05732efb36971901c515b007869cc1d3a532 2e3de6253422112ae43e608661ba94ea6b345694 1251402caf8685f45d9d580f01583370f7e2d272
 151868 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 239b50a863704f7960525799eda82de061c7c458 3c659044118e34603161457db9934a34f816d78b eefe34ea4b82c2b47abe28af4cc7247d51553626 2e3de6253422112ae43e608661ba94ea6b345694 25636ed707cf1211ce846c7ec58f8643e435d7a7
 151897 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 3c659044118e34603161457db9934a34f816d78b 81cb05732efb36971901c515b007869cc1d3a532 2e3de6253422112ae43e608661ba94ea6b345694 1251402caf8685f45d9d580f01583370f7e2d272
 151871 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 239b50a863704f7960525799eda82de061c7c458 3c659044118e34603161457db9934a34f816d78b 3f429a3400822141651486193d6af625eeab05a5 2e3de6253422112ae43e608661ba94ea6b345694 71ca0e0ad000e690899936327eb09709ab182ade
 151849 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f45e3a4afa65a45ea1a956a7c5e7410ff40190d1 3c659044118e34603161457db9934a34f816d78b d34498309cff7560ac90c422c56e3137e6a64b19 88ab0c15525ced2eefe39220742efe4769089ad8 02d69864b51a4302a148c28d6d391238a6778b4b
 151872 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 58ae92a993687d913aa6dd00ef3497a1bc5f6c40 3c659044118e34603161457db9934a34f816d78b 54cdfe511219b8051046be55a6e156c4f08ff7ff 2e3de6253422112ae43e608661ba94ea6b345694 71ca0e0ad000e690899936327eb09709ab182ade
 151888 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 3c659044118e34603161457db9934a34f816d78b d6b78ac8ecf94f56dbfbecc23fb4365d8772a41a 2e3de6253422112ae43e608661ba94ea6b345694 1251402caf8685f45d9d580f01583370f7e2d272
 151873 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 a2433243fbe471c250d7eddc2c7da325d91265fd 3c659044118e34603161457db9934a34f816d78b b77b5b3dc7a4730d804090d359c57d33573cf85a 2e3de6253422112ae43e608661ba94ea6b345694 3625b04991b4d6affadd99d377ab84bac48dfff4
 151876 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 3c659044118e34603161457db9934a34f816d78b db2322469a245eb9d9aa1c98747f6d595cca8f35 2e3de6253422112ae43e608661ba94ea6b345694 1251402caf8685f45d9d580f01583370f7e2d272
 151877 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 3c659044118e34603161457db9934a34f816d78b 9354eaaf16fdb98651574f131ff66ad974e50bba 2e3de6253422112ae43e608661ba94ea6b345694 1251402caf8685f45d9d580f01583370f7e2d272
 151890 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 3c659044118e34603161457db9934a34f816d78b 81cb05732efb36971901c515b007869cc1d3a532 2e3de6253422112ae43e608661ba94ea6b345694 1251402caf8685f45d9d580f01583370f7e2d272
 151878 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 3c659044118e34603161457db9934a34f816d78b 9940b2cfbc05cdffdf6b42227a80cb1e6d2a85c2 2e3de6253422112ae43e608661ba94ea6b345694 1251402caf8685f45d9d580f01583370f7e2d272
 151879 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 3c659044118e34603161457db9934a34f816d78b 81cb05732efb36971901c515b007869cc1d3a532 2e3de6253422112ae43e608661ba94ea6b345694 1251402caf8685f45d9d580f01583370f7e2d272
 151893 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 3c659044118e34603161457db9934a34f816d78b d6b78ac8ecf94f56dbfbecc23fb4365d8772a41a 2e3de6253422112ae43e608661ba94ea6b345694 1251402caf8685f45d9d580f01583370f7e2d272
 151880 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 3c659044118e34603161457db9934a34f816d78b 75a6ed875ff0a2eb6b2971ae2098ed09963d7329 2e3de6253422112ae43e608661ba94ea6b345694 1251402caf8685f45d9d580f01583370f7e2d272
 151874 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 d9a4084544134eea50f62e88d79c466ae91f0455 3c659044118e34603161457db9934a34f816d78b 20c1df5476e1e9b5d3f5b94f9f3ce01d21f14c46 88ab0c15525ced2eefe39220742efe4769089ad8 02d69864b51a4302a148c28d6d391238a6778b4b
 151894 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9af1064995d479df92e399a682ba7e4b2fc78415 3c659044118e34603161457db9934a34f816d78b 7d3660e79830a069f1848bb4fa1cdf8f666424fb 2e3de6253422112ae43e608661ba94ea6b345694 b91825f628c9a62cf2a3a0d972ea81484a8b7fce
 151896 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 d9a4084544134eea50f62e88d79c466ae91f0455 3c659044118e34603161457db9934a34f816d78b 20c1df5476e1e9b5d3f5b94f9f3ce01d21f14c46 88ab0c15525ced2eefe39220742efe4769089ad8 02d69864b51a4302a148c28d6d391238a6778b4b
Searching for interesting versions
 Result found: flight 151149 (pass), for basis pass
 Result found: flight 151874 (fail), for basis failure
 Repro found: flight 151894 (pass), for basis pass
 Repro found: flight 151896 (fail), for basis failure
 0 revisions at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 3c659044118e34603161457db9934a34f816d78b d6b78ac8ecf94f56dbfbecc23fb4365d8772a41a 2e3de6253422112ae43e608661ba94ea6b345694 1251402caf8685f45d9d580f01583370f7e2d272
No revisions left to test, checking graph state.
 Result found: flight 151885 (pass), for last pass
 Result found: flight 151887 (fail), for first failure
 Repro found: flight 151888 (pass), for last pass
 Repro found: flight 151890 (fail), for first failure
 Repro found: flight 151893 (pass), for last pass
 Repro found: flight 151897 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  qemuu git://git.qemu.org/qemu.git
  Bug introduced:  81cb05732efb36971901c515b007869cc1d3a532
  Bug not present: d6b78ac8ecf94f56dbfbecc23fb4365d8772a41a
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/151897/


  commit 81cb05732efb36971901c515b007869cc1d3a532
  Author: Markus Armbruster <armbru@redhat.com>
  Date:   Tue Jun 9 14:23:37 2020 +0200
  
      qdev: Assert devices are plugged into a bus that can take them
      
      This would have caught some of the bugs I just fixed.
      
      Signed-off-by: Markus Armbruster <armbru@redhat.com>
      Reviewed-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
      Message-Id: <20200609122339.937862-23-armbru@redhat.com>

dot: graph is too large for cairo-renderer bitmaps. Scaling by 0.765656 to fit
pnmtopng: 215 colors found
Revision graph left in /home/logs/results/bisect/qemu-mainline/test-amd64-amd64-qemuu-nested-intel.debian-hvm-install.{dot,ps,png,html,svg}.
----------------------------------------
151897: tolerable ALL FAIL

flight 151897 qemu-mainline real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/151897/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 test-amd64-amd64-qemuu-nested-intel 10 debian-hvm-install fail baseline untested


jobs:
 test-amd64-amd64-qemuu-nested-intel                          fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Tue Jul 14 19:30:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 14 Jul 2020 19:30:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvQdU-0004kW-Hx; Tue, 14 Jul 2020 19:30:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hY5v=AZ=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1jvQdS-0004kR-J1
 for xen-devel@lists.xen.org; Tue, 14 Jul 2020 19:30:38 +0000
X-Inumbo-ID: 7e6e73d0-c608-11ea-bb8b-bc764e2007e4
Received: from mail-wr1-x443.google.com (unknown [2a00:1450:4864:20::443])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7e6e73d0-c608-11ea-bb8b-bc764e2007e4;
 Tue, 14 Jul 2020 19:30:37 +0000 (UTC)
Received: by mail-wr1-x443.google.com with SMTP id o11so24363703wrv.9
 for <xen-devel@lists.xen.org>; Tue, 14 Jul 2020 12:30:37 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc; bh=T6z4+TdptBRLmJh0/2xDTJHl7ScuqKH20jtDl7baOmw=;
 b=eZLwx8TFnJQcpIqGLP+j7hmtUeweHdIkJOEx1BUvhL+kg5i+oaucDiqCgdmTnz/GSM
 Xg1e2MoIqX/LCzBE4emJ1NJSgvj9fvxvqGxNwFN31NbLGX/9oEP2VRNhXy9mRrFlEvNm
 Rxu0qEB5ShL9k8ELTfcqD9Ucz9gCbHSNS8dUC+dHRK7ONy207WFvLBZkDd1ttIlKzzwv
 GmICFFZwEJTwR0+mKX2yyhctWJlvnlffeoxOcRB4c07nNHLY0bevtILc1rAzkh2YT2di
 K3NlOYiAwOshYCOOa8AYFlzqW924GgYO56w6fMrtf+JWHdGmi1aiEwkB5S6dLVu/tXvF
 pisA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc;
 bh=T6z4+TdptBRLmJh0/2xDTJHl7ScuqKH20jtDl7baOmw=;
 b=SO8ytudoDT+QQeqIjHrMMbfko519cNhQmeQ68wMu1u25qgr543LaeS3F0YDWQvkVFn
 W4Dtg4SGd5+9VNdyHF0bU+kypJTZXZ54RVI1OLDtNTSm7plZbcn7dfPWWsPDpc1ZXY/1
 CgM3YIMR0AZ0NSM3IaZS5R0w4XQExTv9AcmtdH9eY4cUOPIm1h6+t2g7kaU1njs2eWwl
 hTeaqIJZPwbgFySY58a8G7dsLvnYaxnJuqgb4S7bi4WXgKt88vTrTOsisIWbchlAFnIB
 y8WZ5zsA52C6HejUOaN/E+v758tUfHxkEm749b0dXqLBSDFr95uBhC6Cof/sOeOCAVUO
 9k7A==
X-Gm-Message-State: AOAM5335+7jMkPr9GGB5auRlCf0W2nJyo1X+/mzkDRz9ED3ywOOQenhX
 VZowptq9w1Cg1pJNm5k6H+hcHtjwAgVKplKMcAQ=
X-Google-Smtp-Source: ABdhPJyKpFT5i87sa0gTIfu0IvoNb4zJVH186yGeMIjaj0+goz2JNy4abwmtfHqbqJmsBok8gLh1ev4ta8hP4+XbUxk=
X-Received: by 2002:a5d:4986:: with SMTP id r6mr7267166wrq.424.1594755036679; 
 Tue, 14 Jul 2020 12:30:36 -0700 (PDT)
MIME-Version: 1.0
References: <1b60ed1cd7834ed5957a2b4870602073@in.bosch.com>
 <1D0E7281-95D7-482E-BF6D-EE5B1FEE4876@arm.com>
 <ab84437081a346d6bf0f73581382c74e@in.bosch.com>
 <D84A5DA7-683C-480B-8837-C51D560FC2E1@arm.com>
 <139024a891324455a13a3d468908798d@in.bosch.com>
 <C3BCAA62-51EF-49DD-B978-6657BC6D5A21@arm.com>
 <67b4454424c4485fb59d542d052aaf2d@in.bosch.com>
In-Reply-To: <67b4454424c4485fb59d542d052aaf2d@in.bosch.com>
From: Oleksandr Tyshchenko <olekstysh@gmail.com>
Date: Tue, 14 Jul 2020 22:30:25 +0300
Message-ID: <CAPD2p-nZZpDBZ5yc=gVvVAW1oFdN0KZ2jMH-T59W_sntsENwxw@mail.gmail.com>
Subject: Re: [BUG] Xen build for RCAR failing
To: "Manikandan Chockalingam (RBEI/ECF3)"
 <Manikandan.Chockalingam@in.bosch.com>
Content-Type: multipart/alternative; boundary="0000000000007710f805aa6bd79e"
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: nd <nd@arm.com>, "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

--0000000000007710f805aa6bd79e
Content-Type: text/plain; charset="UTF-8"

Hello

[Sorry for the possible format issues]

On Tue, Jul 14, 2020 at 4:44 PM Manikandan Chockalingam (RBEI/ECF3) <
Manikandan.Chockalingam@in.bosch.com> wrote:

> Hello Bertrand,
>
> I succeeded in building the core minimal image with dunfell and its
> compatible branches [except xen-troops (modified some files to complete the
> build)].
>
> But I face the following error while booting.
>
> (XEN) ****************************************
> (XEN) Panic on CPU 0:
> (XEN) Timer: Unable to retrieve IRQ 0 from the device tree
> (XEN) ****************************************
>


The reason for that problem *might* be the arch timer node in your
device tree, which contains an "interrupts-extended" property instead of
just "interrupts". As far as I remember, Xen v4.12 doesn't have the
required support to handle "interrupts-extended"; it went in a bit later [1].
If this is the real reason, I think you should either switch to a newer
Xen version or modify your arch timer node to use the "interrupts"
property [2]. I would suggest using a newer Xen version if possible
(at least v4.13).

[1]
https://lists.xenproject.org/archives/html/xen-devel/2019-05/msg01775.html
[2]
https://github.com/otyshchenko1/linux/commit/c25044845f2c3678f5df789881e7a125556af6fc
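
For illustration only, the rewrite in [2] amounts to something like the
sketch below: drop "interrupts-extended" and express the same four timer
PPIs via "interrupt-parent" plus a plain "interrupts" property, which older
Xen can parse. The &gic phandle, the CPU mask, and the exact PPI numbers
are placeholders here and must match your SoC's real device tree:

```dts
/* Hypothetical sketch, not taken from any particular board file.
 * Replace <&gic> and the cell values with your platform's actual ones. */
timer {
	compatible = "arm,armv8-timer";

	/* Before (not understood by Xen v4.12):
	 * interrupts-extended = <&gic GIC_PPI 13 IRQ_TYPE_LEVEL_LOW>, ...;
	 */

	/* After: same interrupts, but with an explicit parent and the
	 * classic "interrupts" property. */
	interrupt-parent = <&gic>;
	interrupts = <GIC_PPI 13 (GIC_CPU_MASK_SIMPLE(4) | IRQ_TYPE_LEVEL_LOW)>,
		     <GIC_PPI 14 (GIC_CPU_MASK_SIMPLE(4) | IRQ_TYPE_LEVEL_LOW)>,
		     <GIC_PPI 11 (GIC_CPU_MASK_SIMPLE(4) | IRQ_TYPE_LEVEL_LOW)>,
		     <GIC_PPI 10 (GIC_CPU_MASK_SIMPLE(4) | IRQ_TYPE_LEVEL_LOW)>;
};
```

This only works when all timer interrupts share a single interrupt
parent; "interrupts-extended" exists precisely for the case where they
do not, so check your original node before converting it.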

-- 
Regards,

Oleksandr Tyshchenko

--0000000000007710f805aa6bd79e--


From xen-devel-bounces@lists.xenproject.org Wed Jul 15 00:53:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jul 2020 00:53:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvVfd-0006DK-1V; Wed, 15 Jul 2020 00:53:13 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tzA3=A2=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jvVfb-0006DF-Ky
 for xen-devel@lists.xenproject.org; Wed, 15 Jul 2020 00:53:11 +0000
X-Inumbo-ID: 8d21c832-c635-11ea-937f-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 8d21c832-c635-11ea-937f-12813bfff9fa;
 Wed, 15 Jul 2020 00:53:09 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=ELxH55Ujf8UAHF/6pA2F5ODsJkgjhcBowmKIPxMxsiE=; b=zgQwrEYkVYgjVFjKOdUH5yqzL
 CSqf0bpIlfz90s8JPMW5RJc40HUZgm7aSpkuaN2O3GEXdEzfQ8guTipxuQoyXLPz/xIalH1CQ5e9N
 ebLy4QAJJT4Ox5zyal4omCo8dBpgDaJNx0ao65i0rzUeQa+OBo8LPRPcgCjoQO2w9D1KI=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jvVfY-0004Td-GH; Wed, 15 Jul 2020 00:53:08 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jvVfY-0007fl-7l; Wed, 15 Jul 2020 00:53:08 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jvVfY-0007SP-75; Wed, 15 Jul 2020 00:53:08 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151884-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 151884: tolerable FAIL - PUSHED
X-Osstest-Failures: xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=165f3afbfc3db70fcfdccad07085cde0a03c858b
X-Osstest-Versions-That: xen=02d69864b51a4302a148c28d6d391238a6778b4b
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 15 Jul 2020 00:53:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151884 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151884/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 151854
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 151854
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 151854
 test-armhf-armhf-xl-rtds     16 guest-start/debian.repeat    fail  like 151854
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 151854
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 151854
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 151854
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 151854
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 151854
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop             fail like 151854
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  165f3afbfc3db70fcfdccad07085cde0a03c858b
baseline version:
 xen                  02d69864b51a4302a148c28d6d391238a6778b4b

Last test of basis   151854  2020-07-13 01:51:12 Z    1 days
Testing same since   151869  2020-07-13 17:06:25 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ian Jackson <ian.jackson@eu.citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   02d69864b5..165f3afbfc  165f3afbfc3db70fcfdccad07085cde0a03c858b -> master


From xen-devel-bounces@lists.xenproject.org Wed Jul 15 03:30:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jul 2020 03:30:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvY7Z-0003qn-Pu; Wed, 15 Jul 2020 03:30:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tzA3=A2=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jvY7Y-0003qT-TT
 for xen-devel@lists.xenproject.org; Wed, 15 Jul 2020 03:30:12 +0000
X-Inumbo-ID: 7a8c2832-c64b-11ea-b7bb-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7a8c2832-c64b-11ea-b7bb-bc764e2007e4;
 Wed, 15 Jul 2020 03:30:07 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=GMZP34yYKCmiLNSy4nhz/0OZLpJwUpfd9FY1kwRboRg=; b=Kfi6Jcugd+BDs4/HbOpl2D0Mu
 ISgXuBI7L2zhawG6JR9LcMC3uDEAugNC/zLPxEJpG+EqHhahcVXclUk++gYviLwFb8c1BUCzpG5F3
 F6btho/TpbfP9aq/wd3SvYwSyKAYp7avLLshT/Z5vCExtwnxVipcAJ7yHcGwJf7t4Y7EE=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jvY7S-00013t-8a; Wed, 15 Jul 2020 03:30:06 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jvY7R-0002YV-PE; Wed, 15 Jul 2020 03:30:05 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jvY7R-0003TH-OS; Wed, 15 Jul 2020 03:30:05 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151898-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 151898: all pass - PUSHED
X-Osstest-Versions-This: ovmf=256c4470f86e53661c070f8c64a1052e975f9ef0
X-Osstest-Versions-That: ovmf=9c6f3545aee0808b78a0ad4480b6eb9d24989dc1
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 15 Jul 2020 03:30:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151898 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151898/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 256c4470f86e53661c070f8c64a1052e975f9ef0
baseline version:
 ovmf                 9c6f3545aee0808b78a0ad4480b6eb9d24989dc1

Last test of basis   151881  2020-07-14 03:39:27 Z    0 days
Testing same since   151898  2020-07-14 17:42:39 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Michael D Kinney <michael.d.kinney@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   9c6f3545ae..256c4470f8  256c4470f86e53661c070f8c64a1052e975f9ef0 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Wed Jul 15 03:44:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jul 2020 03:44:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvYLM-0004pE-7t; Wed, 15 Jul 2020 03:44:28 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tzA3=A2=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jvYLL-0004p9-4c
 for xen-devel@lists.xenproject.org; Wed, 15 Jul 2020 03:44:27 +0000
X-Inumbo-ID: 7abff264-c64d-11ea-9386-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7abff264-c64d-11ea-9386-12813bfff9fa;
 Wed, 15 Jul 2020 03:44:26 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=GX2WR77YK05OGaY6ZwyEnqg1M6FsDOlHr4aB+v+Vlz8=; b=zvZU4dnw3bGAOVOe+wc+6ROTc
 Q9diCklAccWZ/k+lmERJJUE1Qzxh3TQXDHuxmD9N95IEnGC7Dd6/Zx5cwE5UuOmiUSCk83Lg4w27R
 IFUqVHuAx0f/OdvMKz7fDtV1raX2ou+wcpjA81DnUo4BsLxq88XIPK4Wrqq+HOvKNi3Fk=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jvYLJ-0001Lh-MU; Wed, 15 Jul 2020 03:44:25 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jvYLJ-0003fm-FH; Wed, 15 Jul 2020 03:44:25 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jvYLJ-0008Pi-EX; Wed, 15 Jul 2020 03:44:25 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151889-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 151889: regressions - FAIL
X-Osstest-Failures: linux-linus:test-arm64-arm64-libvirt-xsm:guest-start/debian.repeat:fail:regression
 linux-linus:build-i386-pvops:kernel-build:fail:regression
 linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-examine:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-pair:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: linux=0dc589da873b58b70f4caf4b070fb0cf70fdd1dc
X-Osstest-Versions-That: linux=1b5044021070efa3259f3e9548dc35d1eb6aa844
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 15 Jul 2020 03:44:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151889 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151889/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-libvirt-xsm 16 guest-start/debian.repeat fail REGR. vs. 151214
 build-i386-pvops              6 kernel-build             fail REGR. vs. 151214

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemut-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-examine       1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemut-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 151214
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 151214
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 151214
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 151214
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 151214
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 151214
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                0dc589da873b58b70f4caf4b070fb0cf70fdd1dc
baseline version:
 linux                1b5044021070efa3259f3e9548dc35d1eb6aa844

Last test of basis   151214  2020-06-18 02:27:46 Z   27 days
Failing since        151236  2020-06-19 19:10:35 Z   25 days   40 attempts
Testing same since   151889  2020-07-14 07:53:17 Z    0 days    1 attempts

------------------------------------------------------------
719 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             fail    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         blocked 
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 38170 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Jul 15 06:27:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jul 2020 06:27:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvasi-0001e5-4j; Wed, 15 Jul 2020 06:27:04 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=9G22=A2=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jvasg-0001e0-9y
 for xen-devel@lists.xenproject.org; Wed, 15 Jul 2020 06:27:02 +0000
X-Inumbo-ID: 30eec5f4-c664-11ea-b7bb-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 30eec5f4-c664-11ea-b7bb-bc764e2007e4;
 Wed, 15 Jul 2020 06:27:01 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 86E72B007;
 Wed, 15 Jul 2020 06:27:03 +0000 (UTC)
Subject: Re: [PATCH v2 3/7] x86/mce: bring hypercall subop compat checking in
 sync again
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <bb6a96c6-b6b1-76ff-f9db-10bec0fb4ab1@suse.com>
 <5d53a2e3-716c-2213-96e5-9d37371c482c@suse.com>
 <20200714111900.GI7191@Air-de-Roger>
 <f82edef5-ee75-b24c-0a24-03ed38486882@suse.com>
 <20200714143157.GM7191@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <0e27d46f-b0b2-8681-7f9e-9c12bb99da4a@suse.com>
Date: Wed, 15 Jul 2020 08:27:01 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20200714143157.GM7191@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Paul Durrant <paul@xen.org>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 14.07.2020 16:31, Roger Pau Monné wrote:
> On Tue, Jul 14, 2020 at 01:47:11PM +0200, Jan Beulich wrote:
>> On 14.07.2020 13:19, Roger Pau Monné wrote:
>>> On Wed, Jul 01, 2020 at 12:26:54PM +0200, Jan Beulich wrote:
>>>> Use a typedef in struct xen_mc also for the two subops "manually"
>>>> translated in the handler, just for consistency. No functional
>>>> change.
>>>
>>> I'm slightly puzzled by the fact that mc_fetch is marked as needs
>>> checking while mc_physcpuinfo is marked as needs translation;
>>> shouldn't both be marked as needing translation? (since both need to
>>> handle a guest pointer using XEN_GUEST_HANDLE)
>>
>> I guess I'm confused - I see an exclamation mark on both respective
> 
> No, I was the one confused, you are right that both are marked as need
> translation.

And just to mention it explicitly - I think the lines could be
dropped, as they look to be there just for documentation (if at
all). The resulting XLAT_* macros don't get used anywhere.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jul 15 06:36:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jul 2020 06:36:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvb1Z-0002X4-09; Wed, 15 Jul 2020 06:36:13 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=9G22=A2=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jvb1Y-0002Wz-CC
 for xen-devel@lists.xenproject.org; Wed, 15 Jul 2020 06:36:12 +0000
X-Inumbo-ID: 78739084-c665-11ea-9393-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 78739084-c665-11ea-9393-12813bfff9fa;
 Wed, 15 Jul 2020 06:36:10 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 03659ADC0;
 Wed, 15 Jul 2020 06:36:13 +0000 (UTC)
Subject: Re: [PATCH v2 5/7] x86: generalize padding field handling
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <bb6a96c6-b6b1-76ff-f9db-10bec0fb4ab1@suse.com>
 <83274416-2812-53c9-f8cb-23ebdf73782e@suse.com>
 <20200714142948.GK7191@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <a319e308-9cf3-52dc-1883-fe749e3c5629@suse.com>
Date: Wed, 15 Jul 2020 08:36:10 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20200714142948.GK7191@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Paul Durrant <paul@xen.org>,
 George Dunlap <George.Dunlap@eu.citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 14.07.2020 16:29, Roger Pau Monné wrote:
> On Wed, Jul 01, 2020 at 12:27:37PM +0200, Jan Beulich wrote:
>> The original intention was to ignore padding fields, but the pattern
>> matched only ones whose names started with an underscore. Also match
>> fields whose names are in line with the C spec by not having a leading
>> underscore. (Note that the leading ^ in the sed regexps was pointless
>> and hence gets dropped.)
>>
>> This requires adjusting some vNUMA macros, to avoid triggering
>> "enumeration value ... not handled in switch" warnings, which - due to
>> -Werror - would cause the build to fail. (I have to admit that I find
>> these padding fields odd, when translation of the containing structure
>> is needed anyway.)
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks.

>> ---
>> While for translation macros skipping padding fields pretty surely is a
>> reasonable thing to do, we may want to consider not ignoring them when
>> generating checking macros.

(note this remark, towards your question at the end)

>> --- a/xen/common/compat/memory.c
>> +++ b/xen/common/compat/memory.c
>> @@ -354,10 +354,13 @@ int compat_memory_op(unsigned int cmd, X
>>                  return -EFAULT;
>>  
>>  #define XLAT_vnuma_topology_info_HNDL_vdistance_h(_d_, _s_)		\
>> +            case XLAT_vnuma_topology_info_vdistance_pad:                \
>>              guest_from_compat_handle((_d_)->vdistance.h, (_s_)->vdistance.h)
>>  #define XLAT_vnuma_topology_info_HNDL_vcpu_to_vnode_h(_d_, _s_)		\
>> +            case XLAT_vnuma_topology_info_vcpu_to_vnode_pad:            \
>>              guest_from_compat_handle((_d_)->vcpu_to_vnode.h, (_s_)->vcpu_to_vnode.h)
>>  #define XLAT_vnuma_topology_info_HNDL_vmemrange_h(_d_, _s_)		\
>> +            case XLAT_vnuma_topology_info_vmemrange_pad:                \
>>              guest_from_compat_handle((_d_)->vmemrange.h, (_s_)->vmemrange.h)
> 
> I find this quite ugly, would it be better to just handle them with a
> default case in the XLAT_ macros?

Default cases are deliberately not added, so that missing case labels
can be spotted: most compilers warn about unhandled enumeration values
when the controlling expression is of enum type.

> AFAICT it will also set (_d_)->vmemrange.h twice?

I'm not seeing it (and if it were, I'd then also wonder why not for the
other two handles above). This is the generated macro:

#define XLAT_vnuma_topology_info(_d_, _s_) do { \
    (_d_)->domid = (_s_)->domid; \
    (_d_)->nr_vnodes = (_s_)->nr_vnodes; \
    (_d_)->nr_vcpus = (_s_)->nr_vcpus; \
    (_d_)->nr_vmemranges = (_s_)->nr_vmemranges; \
    switch (vdistance) { \
    case XLAT_vnuma_topology_info_vdistance_h: \
        XLAT_vnuma_topology_info_HNDL_vdistance_h(_d_, _s_); \
        break; \
    } \
    switch (vcpu_to_vnode) { \
    case XLAT_vnuma_topology_info_vcpu_to_vnode_h: \
        XLAT_vnuma_topology_info_HNDL_vcpu_to_vnode_h(_d_, _s_); \
        break; \
    } \
    switch (vmemrange) { \
    case XLAT_vnuma_topology_info_vmemrange_h: \
        XLAT_vnuma_topology_info_HNDL_vmemrange_h(_d_, _s_); \
        break; \
    } \
} while (0)

Am I overlooking any further aspect?

>> --- a/xen/tools/get-fields.sh
>> +++ b/xen/tools/get-fields.sh
>> @@ -218,7 +218,7 @@ for line in sys.stdin.readlines():
>>  				fi
>>  				;;
>>  			[\,\;])
>> -				if [ $level = 2 -a -n "$(echo $id | $SED 's,^_pad[[:digit:]]*,,')" ]
>> +				if [ $level = 2 -a -n "$(echo $id | $SED 's,_\?pad[[:digit:]]*,,')" ]
>>  				then
>>  					if [ $kind = union ]
>>  					then
>> @@ -347,7 +347,7 @@ build_body ()
>>  			fi
>>  			;;
>>  		[\,\;])
>> -			if [ $level = 2 -a -n "$(echo $id | $SED 's,^_pad[[:digit:]]*,,')" ]
>> +			if [ $level = 2 -a -n "$(echo $id | $SED 's,_\?pad[[:digit:]]*,,')" ]
>>  			then
>>  				if [ -z "$array" -a -z "$array_type" ]
>>  				then
>> @@ -437,7 +437,7 @@ check_field ()
>>  				id=$token
>>  				;;
>>  			[\,\;])
>> -				if [ $level = 2 -a -n "$(echo $id | $SED 's,^_pad[[:digit:]]*,,')" ]
>> +				if [ $level = 2 -a -n "$(echo $id | $SED 's,_\?pad[[:digit:]]*,,')" ]
>>  				then
>>  					check_field $1 $2 $3.$id "$fields"
>>  					test "$token" != ";" || fields= id=
>> @@ -491,7 +491,7 @@ build_check ()
>>  			test $level != 2 -o $arrlvl != 1 || id=$token
>>  			;;
>>  		[\,\;])
>> -			if [ $level = 2 -a -n "$(echo $id | $SED 's,^_pad[[:digit:]]*,,')" ]
>> +			if [ $level = 2 -a -n "$(echo $id | $SED 's,_\?pad[[:digit:]]*,,')" ]
>>  			then
>>  				check_field $kind $1 $id "$fields"
>>  				test "$token" != ";" || fields= id=
> 
> I have to admit I'm not overly happy with this level of repetition
> (not that you introduce it here), but I would prefer to have the
> regexp in a single place if possible, it's easy to miss instances
> IMO.

I too thought so while making the changes, but besides viewing this
as an orthogonal adjustment I'm also, as per the remark further up,
unconvinced the expressions actually want to be the same between
the checking macros and the xlat ones.
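As a side note, the practical difference between the two expressions can be demonstrated directly. The field names below are made up for illustration, and $SED is assumed to be GNU sed (which supports the \? operator the patch relies on):

```shell
SED=sed
# Old expression: anchored, underscore mandatory. Only a leading
# "_pad<digits>" is stripped, so a plain "pad0" survives the filter
# and would be treated as a real structure member.
echo pad0  | $SED 's,^_pad[[:digit:]]*,,'    # -> "pad0" (kept)
echo _pad1 | $SED 's,^_pad[[:digit:]]*,,'    # -> ""     (filtered)
# New expression: the underscore is optional, so "pad0" is filtered
# as well. (Being unanchored, it would also strip an embedded
# "pad<digits>" inside a longer identifier.)
echo pad0  | $SED 's,_\?pad[[:digit:]]*,,'   # -> ""     (filtered)
echo domid | $SED 's,_\?pad[[:digit:]]*,,'   # -> "domid" (kept)
```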

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jul 15 06:42:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jul 2020 06:42:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvb7u-0003Mt-Rm; Wed, 15 Jul 2020 06:42:46 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=9G22=A2=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jvb7t-0003Mo-Ur
 for xen-devel@lists.xenproject.org; Wed, 15 Jul 2020 06:42:45 +0000
X-Inumbo-ID: 62d32a68-c666-11ea-9393-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 62d32a68-c666-11ea-9393-12813bfff9fa;
 Wed, 15 Jul 2020 06:42:44 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 384EAADC0;
 Wed, 15 Jul 2020 06:42:46 +0000 (UTC)
Subject: Re: [PATCH v2 6/7] flask: drop dead compat translation code
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <bb6a96c6-b6b1-76ff-f9db-10bec0fb4ab1@suse.com>
 <7711f68d-394e-a74f-81fa-51f8447174ce@suse.com>
 <20200714145800.GO7191@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <937a51c5-7563-0ac2-4ada-b4dfd7a5d636@suse.com>
Date: Wed, 15 Jul 2020 08:42:44 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20200714145800.GO7191@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Paul Durrant <paul@xen.org>,
 George Dunlap <George.Dunlap@eu.citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Daniel de Graaf <dgdegra@tycho.nsa.gov>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 14.07.2020 16:58, Roger Pau Monné wrote:
> On Wed, Jul 01, 2020 at 12:28:07PM +0200, Jan Beulich wrote:
>> Translation macros aren't needed at all (or else a devicetree_label
>> entry would have been missing), and userlist has been removed quite some
>> time ago.
>>
>> No functional change.
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>
>> --- a/xen/include/xlat.lst
>> +++ b/xen/include/xlat.lst
>> @@ -148,14 +148,11 @@
>>  ?	xenoprof_init			xenoprof.h
>>  ?	xenoprof_passive		xenoprof.h
>>  ?	flask_access			xsm/flask_op.h
>> -!	flask_boolean			xsm/flask_op.h
>>  ?	flask_cache_stats		xsm/flask_op.h
>>  ?	flask_hash_stats		xsm/flask_op.h
>> -!	flask_load			xsm/flask_op.h
>>  ?	flask_ocontext			xsm/flask_op.h
>>  ?	flask_peersid			xsm/flask_op.h
>>  ?	flask_relabel			xsm/flask_op.h
>>  ?	flask_setavc_threshold		xsm/flask_op.h
>>  ?	flask_setenforce		xsm/flask_op.h
>> -!	flask_sid_context		xsm/flask_op.h
>>  ?	flask_transition		xsm/flask_op.h
> 
> Shouldn't those become checks then?

No, checking will never succeed for structures containing
XEN_GUEST_HANDLE(). But there's no point in generating xlat macros
when they're never used. There are two fundamentally different
strategies for handling the compat hypercalls: One is to wrap a
translation layer around the native hypercall; that's where the
xlat macros come into play. The other, used here, is to compile
the entire hypercall function a second time, arranging for the
compat structures to be used in place of the native ones. No xlat
macros are involved there; all that's needed are correctly
translated structures. (For completeness, x86's MCA hypercall uses
yet another, quite ad hoc strategy, likewise not involving any
xlat macro use; hence the consideration to possibly drop the
respective lines from the file here.)

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jul 15 06:47:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jul 2020 06:47:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvbCV-0003X6-EU; Wed, 15 Jul 2020 06:47:31 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7dcN=A2=in.bosch.com=manikandan.chockalingam@srs-us1.protection.inumbo.net>)
 id 1jvbCS-0003X1-CX
 for xen-devel@lists.xen.org; Wed, 15 Jul 2020 06:47:29 +0000
X-Inumbo-ID: 08f68890-c667-11ea-bb8b-bc764e2007e4
Received: from de-out1.bosch-org.com (unknown [139.15.230.186])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 08f68890-c667-11ea-bb8b-bc764e2007e4;
 Wed, 15 Jul 2020 06:47:24 +0000 (UTC)
Received: from fe0vm1649.rbesz01.com
 (lb41g3-ha-dmz-psi-sl1-mailout.fe.ssn.bosch.com [139.15.230.188])
 by fe0vms0186.rbdmz01.com (Postfix) with ESMTPS id 4B67Gx4BcHz1XLFjw;
 Wed, 15 Jul 2020 08:47:21 +0200 (CEST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=in.bosch.com;
 s=key1-intmail; t=1594795641;
 bh=mR9h4JGYWQgt7ke11rD9GIjdV1evllpFWcrxvUuRh8A=; l=10;
 h=From:Subject:From:Reply-To:Sender;
 b=f+AP/zuj4qUwtZcuSJojhIybjindZIwFdvPt8RSY3r0nFCTxB/xpR8W9GIsstnZmY
 G8hi6tw6lZUbsaki8B24g6dBaSbG1J9pXRzb02x8Ycp2qdCJiL0zO9Haw80TvjOfRq
 kEAVpqwXLB88ya7oD/hie6/QMb5tbrPGuZ6vMGTU=
Received: from si0vm2082.rbesz01.com (unknown [10.58.172.176])
 by fe0vm1649.rbesz01.com (Postfix) with ESMTPS id 4B67Gx3kdBz7DS;
 Wed, 15 Jul 2020 08:47:21 +0200 (CEST)
X-AuditID: 0a3aad16-82fff700000077c5-8b-5f0ea6791a14
Received: from si0vm1949.rbesz01.com ( [10.58.173.29])
 (using TLS with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (Client did not present a certificate)
 by si0vm2082.rbesz01.com (SMG Outbound) with SMTP id 1E.22.30661.976AE0F5;
 Wed, 15 Jul 2020 08:47:21 +0200 (CEST)
Received: from FE-MBX2025.de.bosch.com (fe-mbx2025.de.bosch.com [10.3.231.35])
 by si0vm1949.rbesz01.com (Postfix) with ESMTPS id 4B67Gx2D2Sz6CjZNx; 
 Wed, 15 Jul 2020 08:47:21 +0200 (CEST)
Received: from SGPMBX2022.APAC.bosch.com (10.187.83.37) by
 FE-MBX2025.de.bosch.com (10.3.231.35) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.1.1979.3; Wed, 15 Jul 2020 08:47:20 +0200
Received: from SGPMBX2022.APAC.bosch.com (10.187.83.37) by
 SGPMBX2022.APAC.bosch.com (10.187.83.37) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.1.1979.3; Wed, 15 Jul 2020 14:47:19 +0800
Received: from SGPMBX2022.APAC.bosch.com ([fe80::2d4d:b176:b210:896]) by
 SGPMBX2022.APAC.bosch.com ([fe80::2d4d:b176:b210:896%6]) with mapi id
 15.01.1979.003; Wed, 15 Jul 2020 14:47:19 +0800
From: "Manikandan Chockalingam (RBEI/ECF3)"
 <Manikandan.Chockalingam@in.bosch.com>
To: Oleksandr Tyshchenko <olekstysh@gmail.com>
Subject: RE: [BUG] Xen build for RCAR failing
Thread-Topic: [BUG] Xen build for RCAR failing
Thread-Index: AdZUKc5JeR7gPpESR52uLkZK1kYwOwAEsnEAAAD8OlAAAEBtgAABZtcgAANXdAAAh1iJgADaJ12w///vToD//r6pMA==
Date: Wed, 15 Jul 2020 06:47:19 +0000
Message-ID: <3f155a0b598745a3b2d158599dd992fd@in.bosch.com>
References: <1b60ed1cd7834ed5957a2b4870602073@in.bosch.com>
 <1D0E7281-95D7-482E-BF6D-EE5B1FEE4876@arm.com>
 <ab84437081a346d6bf0f73581382c74e@in.bosch.com>
 <D84A5DA7-683C-480B-8837-C51D560FC2E1@arm.com>
 <139024a891324455a13a3d468908798d@in.bosch.com>
 <C3BCAA62-51EF-49DD-B978-6657BC6D5A21@arm.com>
 <67b4454424c4485fb59d542d052aaf2d@in.bosch.com>
 <CAPD2p-nZZpDBZ5yc=gVvVAW1oFdN0KZ2jMH-T59W_sntsENwxw@mail.gmail.com>
In-Reply-To: <CAPD2p-nZZpDBZ5yc=gVvVAW1oFdN0KZ2jMH-T59W_sntsENwxw@mail.gmail.com>
Accept-Language: en-US, en-SG
Content-Language: en-US
X-MS-Has-Attach: yes
X-MS-TNEF-Correlator: 
x-originating-ip: [10.187.56.213]
Content-Type: multipart/mixed;
 boundary="_004_3f155a0b598745a3b2d158599dd992fdinboschcom_"
MIME-Version: 1.0
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: nd <nd@arm.com>, "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

--_004_3f155a0b598745a3b2d158599dd992fdinboschcom_
Content-Type: multipart/alternative;
	boundary="_000_3f155a0b598745a3b2d158599dd992fdinboschcom_"

--_000_3f155a0b598745a3b2d158599dd992fdinboschcom_
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit

Hello Oleksandr Tyshchenko,
Thanks for your quick response. With Xen stable 4.13 branch, the mentioned issue is solved.

Is there any document which I could refer to bring up Xen[DOM0] and have an hands on ? because am currently seeing no output after this

(XEN) *** Serial input to DOM0 (type 'CTRL-a' three times to switch input)
(XEN) Freed 324kB init memory.
(XEN) *** Serial input to Xen (type 'CTRL-a' three times to switch input)

Attaching the complete logs for reference.

Mit freundlichen Grüßen / Best regards

Chockalingam Manikandan

ES-CM Core fn,ADIT (RBEI/ECF3)
Robert Bosch GmbH | Postfach 10 60 50 | 70049 Stuttgart | GERMANY | www.bosch.com
Tel. +91 80 6136-4452 | Fax +91 80 6617-0711 | Manikandan.Chockalingam@in.bosch.com

Registered Office: Stuttgart, Registration Court: Amtsgericht Stuttgart, HRB 14000;
Chairman of the Supervisory Board: Franz Fehrenbach; Managing Directors: Dr. Volkmar Denner,
Prof. Dr. Stefan Asenkerschbaumer, Dr. Michael Bolle, Dr. Christian Fischer, Dr. Stefan Hartung,
Dr. Markus Heyn, Harald Kröger, Christoph Kübel, Rolf Najork, Uwe Raschke, Peter Tyroller

From: Oleksandr Tyshchenko <olekstysh@gmail.com>
Sent: Wednesday, July 15, 2020 1:00 AM
To: Manikandan Chockalingam (RBEI/ECF3) <Manikandan.Chockalingam@in.bosch.com>
Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>; nd <nd@arm.com>; xen-devel@lists.xen.org
Subject: Re: [BUG] Xen build for RCAR failing

Hello

[Sorry for the possible format issues]

On Tue, Jul 14, 2020 at 4:44 PM Manikandan Chockalingam (RBEI/ECF3) <Manikandan.Chockalingam@in.bosch.com> wrote:
Hello Bertrand,

I succeeded in building the core minimal image with dunfell and its compatible branches [except xen-troops (modified some files to complete the build)].

But I face the following error while booting.

(XEN) ****************************************
(XEN) Panic on CPU 0:
(XEN) Timer: Unable to retrieve IRQ 0 from the device tree
(XEN) ****************************************


The reason for that problem *might* be in the arch timer node in your device tree which contains "interrupts-extended" property instead of just "interrupts". As far as I remember Xen v4.12 doesn't have required support to handle "interrupts-extended".
It went in a bit later [1]. If this is the real reason, I think you should either switch to the new Xen version or modify your arch timer node in a way to use the "interrupts" property [2]. I would suggest using the new Xen version if possible (at least v4.13).

[1] https://lists.xenproject.org/archives/html/xen-devel/2019-05/msg01775.html
[2] https://github.com/otyshchenko1/linux/commit/c25044845f2c3678f5df789881e7a125556af6fc

--
Regards,

Oleksandr Tyshchenko
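[Editorial note: the arch timer workaround referenced in this thread (option [2], for Xen versions predating "interrupts-extended" support) amounts to a device-tree change along these lines. The node below is a generic sketch assuming the usual GIC binding macros are included, not the actual R-Car Salvator-X DTS.]

```dts
timer {
	compatible = "arm,armv8-timer";
	/* Xen v4.12 cannot parse this form: */
	/* interrupts-extended = <&gic GIC_PPI 13 IRQ_TYPE_LEVEL_LOW>, ...; */
	/* ...so express the same four PPIs with the plain property: */
	interrupts = <GIC_PPI 13 IRQ_TYPE_LEVEL_LOW>,
		     <GIC_PPI 14 IRQ_TYPE_LEVEL_LOW>,
		     <GIC_PPI 11 IRQ_TYPE_LEVEL_LOW>,
		     <GIC_PPI 10 IRQ_TYPE_LEVEL_LOW>;
};
```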

--_000_3f155a0b598745a3b2d158599dd992fdinboschcom_--

--_004_3f155a0b598745a3b2d158599dd992fdinboschcom_
Content-Type: text/plain; name="built-u-boot-xen-bootup-error_4.13.txt"
Content-Description: built-u-boot-xen-bootup-error_4.13.txt
Content-Disposition: attachment;
	filename="built-u-boot-xen-bootup-error_4.13.txt"; size=6604;
	creation-date="Wed, 15 Jul 2020 06:20:25 GMT";
	modification-date="Wed, 15 Jul 2020 06:20:25 GMT"
Content-Transfer-Encoding: base64

VXNpbmcgZXRoZXJuZXRAZTY4MDAwMDAgZGV2aWNlDQpURlRQIGZyb20gc2VydmVyIDE5Mi4xNjgu
Mi4xMTsgb3VyIElQIGFkZHJlc3MgaXMgMTkyLjE2OC4yLjUxDQpGaWxlbmFtZSAncjhhNzc5NS1z
alvator-x-xen.dtb'.
Load address: 0x4a000000
Loading: ######
         1.8 MiB/s
done
Bytes transferred = 80783 (13b8f hex)
=> tftp 0x48200000 Image
Using ethernet@e6800000 device
TFTP from server 192.168.2.11; our IP address is 192.168.2.51
Filename 'Image'.
Load address: 0x48200000
Loading: #################################################################
         #################################################################
         #################################################################
         #################################################################
         #############
         2.2 MiB/s
done
Bytes transferred = 21156352 (142d200 hex)
=> tftp 0x48000000 xen
Using ethernet@e6800000 device
TFTP from server 192.168.2.11; our IP address is 192.168.2.51
Filename 'xen'.
Load address: 0x48000000
Loading: #################################################################
         1 MiB/s
done
Bytes transferred = 918864 (e0550 hex)
=> booti 0x48000000 - 0x4A000000
## Flattened Device Tree blob at 4a000000
   Booting using the fdt blob at 0x4a000000
   Using Device Tree in place at 000000004a000000, end 000000004a016b8e

Starting kernel ...

 Xen 4.13.2-pre
(XEN) Xen version 4.13.2-pre (manikandan@) (aarch64-linux-gnu-gcc (Sourcery CodeBench 2018.05-7) 7.3.1 20180612) d0
(XEN) Latest ChangeSet: Tue Jul 7 15:05:36 2020 +0200 git:378321b
(XEN) Processor: 411fd073: "ARM Limited", variant: 0x1, part 0xd07, rev 0x3
(XEN) 64-bit Execution:
(XEN)   Processor Features: 0000000000002222 0000000000000000
(XEN)     Exception Levels: EL3:64+32 EL2:64+32 EL1:64+32 EL0:64+32
(XEN)     Extensions: FloatingPoint AdvancedSIMD
(XEN)   Debug Features: 0000000010305106 0000000000000000
(XEN)   Auxiliary Features: 0000000000000000 0000000000000000
(XEN)   Memory Model Features: 0000000000001124 0000000000000000
(XEN)   ISA Features:  0000000000011120 0000000000000000
(XEN) 32-bit Execution:
(XEN)   Processor Features: 00000131:00011011
(XEN)     Instruction Sets: AArch32 A32 Thumb Thumb-2 Jazelle
(XEN)     Extensions: GenericTimer Security
(XEN)   Debug Features: 03010066
(XEN)   Auxiliary Features: 00000000
(XEN)   Memory Model Features: 10201105 40000000 01260000 02102211
(XEN)  ISA Features: 02101110 13112111 21232042 01112131 00011142 00011121
(XEN) Generic Timer IRQ: phys=30 hyp=26 virt=27 Freq: 8333 KHz
(XEN) GICv2 initialization:
(XEN)         gic_dist_addr=00000000f1010000
(XEN)         gic_cpu_addr=00000000f1020000
(XEN)         gic_hyp_addr=00000000f1040000
(XEN)         gic_vcpu_addr=00000000f1060000
(XEN)         gic_maintenance_irq=25
(XEN) GICv2: Adjusting CPU interface base to 0xf102f000
(XEN) GICv2: 512 lines, 8 cpus, secure (IID 0200043b).
(XEN) XSM Framework v1.0.0 initialized
(XEN) Initialising XSM SILO mode
(XEN) Using scheduler: SMP Credit Scheduler rev2 (credit2)
(XEN) Initializing Credit2 scheduler
(XEN) Allocated console ring of 16 KiB.
(XEN) Bringing up CPU1
(XEN) Bringing up CPU2
(XEN) Bringing up CPU3
(XEN) Bringing up CPU4
(XEN) CPU4 MIDR (0x410fd034) does not match boot CPU MIDR (0x411fd073),
(XEN) disable cpu (see big.LITTLE.txt under docs/).
(XEN) CPU4 never came online
(XEN) Failed to bring up CPU 4 (error -5)
(XEN) Bringing up CPU5
(XEN) CPU5 MIDR (0x410fd034) does not match boot CPU MIDR (0x411fd073),
(XEN) disable cpu (see big.LITTLE.txt under docs/).
(XEN) CPU5 never came online
(XEN) Failed to bring up CPU 5 (error -5)
(XEN) Bringing up CPU6
(XEN) CPU6 MIDR (0x410fd034) does not match boot CPU MIDR (0x411fd073),
(XEN) disable cpu (see big.LITTLE.txt under docs/).
(XEN) CPU6 never came online
(XEN) Failed to bring up CPU 6 (error -5)
(XEN) Bringing up CPU7
(XEN) CPU7 MIDR (0x410fd034) does not match boot CPU MIDR (0x411fd073),
(XEN) disable cpu (see big.LITTLE.txt under docs/).
(XEN) CPU7 never came online
(XEN) Failed to bring up CPU 7 (error -5)
(XEN) Brought up 4 CPUs
(XEN) I/O virtualisation disabled
(XEN) P2M: 44-bit IPA with 44-bit PA and 8-bit VMID
(XEN) P2M: 4 levels with order-0 root, VTCR 0x80043594
(XEN) *** LOADING DOMAIN 0 ***
(XEN) Loading d0 kernel from boot module @ 0000000048200000
(XEN) Allocating 1:1 mappings totalling 1024MB for dom0:
(XEN) BANK[0] 0x00000600000000-0x00000640000000 (1024MB)
(XEN) Grant table range: 0x00000048000000-0x00000048040000
(XEN) Allocating PPI 16 for event channel interrupt
(XEN) Loading zImage from 0000000048200000 to 0000000600080000-0000000600f80000
(XEN) Loading d0 DTB to 0x0000000608000000-0x0000000608012d37
(XEN) Initial low memory virq threshold set at 0x4000 pages.
(XEN) Scrubbing Free RAM in background
(XEN) Std. Loglevel: Errors and warnings
(XEN) Guest Loglevel: Nothing (Rate-limited: Errors and warnings)
(XEN) *** Serial input to DOM0 (type 'CTRL-a' three times to switch input)
(XEN) Freed 324kB init memory.
(XEN) *** Serial input to Xen (type 'CTRL-a' three times to switch input)
(XEN) *** Serial input to DOM0 (type 'CTRL-a' three times to switch input)
(XEN) *** Serial input to Xen (type 'CTRL-a' three times to switch input)

--_004_3f155a0b598745a3b2d158599dd992fdinboschcom_--


From xen-devel-bounces@lists.xenproject.org Wed Jul 15 06:47:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jul 2020 06:47:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvbCW-0003Xj-QV; Wed, 15 Jul 2020 06:47:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=9G22=A2=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jvbCV-0003X1-Fk
 for xen-devel@lists.xenproject.org; Wed, 15 Jul 2020 06:47:31 +0000
X-Inumbo-ID: 0de6c5ae-c667-11ea-bca7-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0de6c5ae-c667-11ea-bca7-bc764e2007e4;
 Wed, 15 Jul 2020 06:47:31 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 45AD2AC85;
 Wed, 15 Jul 2020 06:47:33 +0000 (UTC)
Subject: Re: [PATCH v2 7/7] x86: only generate compat headers actually needed
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <bb6a96c6-b6b1-76ff-f9db-10bec0fb4ab1@suse.com>
 <5892f237-cfcf-eb19-058c-bd4f45c7bc97@suse.com>
 <20200714150344.GP7191@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <41db65e1-8be9-fc19-48e7-3b83b8da1cf6@suse.com>
Date: Wed, 15 Jul 2020 08:47:31 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20200714150344.GP7191@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Paul Durrant <paul@xen.org>,
 George Dunlap <George.Dunlap@eu.citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 14.07.2020 17:03, Roger Pau Monné wrote:
> On Wed, Jul 01, 2020 at 12:28:27PM +0200, Jan Beulich wrote:
>> As was already the case for XSM/Flask, avoid generating compat headers
>> when they're not going to be needed. To address resulting build issues
>> - move compat/hvm/dm_op.h inclusion to the only source file needing it,
>> - add a little bit of #ifdef-ary.
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks.

> TBH I wouldn't mind generating the compat headers even when not
> actually used by the build, sometimes is useful to have them for
> review context without having to play with the build options.

Right, that's what the post-commit-message remark says. The main
goal is to be consistent in what we do. The primary reason for
me to have chosen this route is that the compat header generation
isn't really quick, compared to the rest of the build process. It
is so slow that commit 7e9009891688 ("include: parallelize
compat/xlat.h generation") was a significant (i.e. very
noticeable) win.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jul 15 07:44:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jul 2020 07:44:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvc5m-0000A8-6N; Wed, 15 Jul 2020 07:44:38 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7qVd=A2=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1jvc5l-0000A3-MH
 for xen-devel@lists.xen.org; Wed, 15 Jul 2020 07:44:37 +0000
X-Inumbo-ID: 0772a7da-c66f-11ea-bb8b-bc764e2007e4
Received: from EUR03-VE1-obe.outbound.protection.outlook.com (unknown
 [40.107.5.84]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0772a7da-c66f-11ea-bb8b-bc764e2007e4;
 Wed, 15 Jul 2020 07:44:36 +0000 (UTC)
Received: from AM6P192CA0001.EURP192.PROD.OUTLOOK.COM (2603:10a6:209:83::14)
 by AM6PR08MB4550.eurprd08.prod.outlook.com (2603:10a6:20b:71::20) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3195.18; Wed, 15 Jul
 2020 07:44:34 +0000
Received: from AM5EUR03FT055.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:209:83:cafe::a3) by AM6P192CA0001.outlook.office365.com
 (2603:10a6:209:83::14) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3195.18 via Frontend
 Transport; Wed, 15 Jul 2020 07:44:34 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT055.mail.protection.outlook.com (10.152.17.214) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3195.18 via Frontend Transport; Wed, 15 Jul 2020 07:44:33 +0000
Received: ("Tessian outbound 2ae7cfbcc26c:v62");
 Wed, 15 Jul 2020 07:44:33 +0000
X-CheckRecipientChecked: true
X-CR-MTA-CID: 81a4a8d1426ff4af
X-CR-MTA-TID: 64aa7808
Received: from f6a6afc1680d.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 F087AF4C-0902-4820-A421-8F82856249BC.1; 
 Wed, 15 Jul 2020 07:44:28 +0000
Received: from EUR02-AM5-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id f6a6afc1680d.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 15 Jul 2020 07:44:28 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DB6PR0801MB1909.eurprd08.prod.outlook.com (2603:10a6:4:72::21) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3174.23; Wed, 15 Jul
 2020 07:44:26 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::4001:43ad:d113:46a8]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::4001:43ad:d113:46a8%5]) with mapi id 15.20.3174.025; Wed, 15 Jul 2020
 07:44:26 +0000
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: "Manikandan Chockalingam (RBEI/ECF3)"
 <Manikandan.Chockalingam@in.bosch.com>
Subject: Re: [BUG] Xen build for RCAR failing
Thread-Topic: [BUG] Xen build for RCAR failing
Thread-Index: AdZUKc5JeR7gPpESR52uLkZK1kYwOwAEsnEAAAD8OlAAAEBtgAABZtcgAANXdAAAh1iJgADaJ12wAA6tYIAAF6P1gAAB/qoA
Date: Wed, 15 Jul 2020 07:44:26 +0000
Message-ID: <0AC5E91F-7C7A-4B5A-AE55-E48574AB04C5@arm.com>
References: <1b60ed1cd7834ed5957a2b4870602073@in.bosch.com>
 <1D0E7281-95D7-482E-BF6D-EE5B1FEE4876@arm.com>
 <ab84437081a346d6bf0f73581382c74e@in.bosch.com>
 <D84A5DA7-683C-480B-8837-C51D560FC2E1@arm.com>
 <139024a891324455a13a3d468908798d@in.bosch.com>
 <C3BCAA62-51EF-49DD-B978-6657BC6D5A21@arm.com>
 <67b4454424c4485fb59d542d052aaf2d@in.bosch.com>
 <CAPD2p-nZZpDBZ5yc=gVvVAW1oFdN0KZ2jMH-T59W_sntsENwxw@mail.gmail.com>
 <3f155a0b598745a3b2d158599dd992fd@in.bosch.com>
In-Reply-To: <3f155a0b598745a3b2d158599dd992fd@in.bosch.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
Authentication-Results-Original: in.bosch.com; dkim=none (message not signed)
 header.d=none; in.bosch.com;
 dmarc=none action=none header.from=arm.com; 
Content-Type: text/plain; charset="utf-8"
Content-ID: <A846D54247F29443B6DD41CD3509882C@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Oleksandr Tyshchenko <olekstysh@gmail.com>, nd <nd@arm.com>,
 "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi Manikandan,

> On 15 Jul 2020, at 08:47, Manikandan Chockalingam (RBEI/ECF3) <Manikandan.Chockalingam@in.bosch.com> wrote:
>
> Hello Oleksandr Tyshchenko,
> Thanks for your quick response. With Xen stable 4.13 branch, the mentioned issue is solved.
>
> Is there any document which I could refer to bring up Xen[DOM0] and have an hands on ? because am currently seeing no output after this
>
> (XEN) *** Serial input to DOM0 (type 'CTRL-a' three times to switch input)
> (XEN) Freed 324kB init memory.
> (XEN) *** Serial input to Xen (type 'CTRL-a' three times to switch input)
>
> Attaching the complete logs for reference.

Your DOM0 is probably trying to use the wrong console.
You should add “console=hvc0” to your Dom0 linux command line (and remove any other console= argument)

Regards
Bertrand

>
> Mit freundlichen Grüßen / Best regards
>
> Chockalingam Manikandan
>
> ES-CM Core fn,ADIT (RBEI/ECF3)
> Robert Bosch GmbH | Postfach 10 60 50 | 70049 Stuttgart | GERMANY | www.bosch.com
> Tel. +91 80 6136-4452 | Fax +91 80 6617-0711 | Manikandan.Chockalingam@in.bosch.com
>
> Registered Office: Stuttgart, Registration Court: Amtsgericht Stuttgart, HRB 14000;
> Chairman of the Supervisory Board: Franz Fehrenbach; Managing Directors: Dr. Volkmar Denner,
> Prof. Dr. Stefan Asenkerschbaumer, Dr. Michael Bolle, Dr. Christian Fischer, Dr. Stefan Hartung,
> Dr. Markus Heyn, Harald Kröger, Christoph Kübel, Rolf Najork, Uwe Raschke, Peter Tyroller
>
> From: Oleksandr Tyshchenko <olekstysh@gmail.com>
> Sent: Wednesday, July 15, 2020 1:00 AM
> To: Manikandan Chockalingam (RBEI/ECF3) <Manikandan.Chockalingam@in.bosch.com>
> Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>; nd <nd@arm.com>; xen-devel@lists.xen.org
> Subject: Re: [BUG] Xen build for RCAR failing
>
> Hello
>
> [Sorry for the possible format issues]
>
> On Tue, Jul 14, 2020 at 4:44 PM Manikandan Chockalingam (RBEI/ECF3) <Manikandan.Chockalingam@in.bosch.com> wrote:
> Hello Bertrand,
>
> I succeeded in building the core minimal image with dunfell and its compatible branches [except xen-troops (modified some files to complete the build)].
>
> But I face the following error while booting.
>
> (XEN) *****************************************
> (XEN) Panic on CPU 0:
> (XEN) Timer: Unable to retrieve IRQ 0 from the device tree
> (XEN) ****************************************
>
>
> The reason for that problem *might* be in the arch timer node in your device tree which contains "interrupts-extended" property instead of just "interrupts". As far as I remember Xen v4.12 doesn't have required support to handle "interrupts-extended".
> It went in a bit later [1]. If this is the real reason, I think you should either switch to the new Xen version or modify your arch timer node in a way to use the "interrupts" property [2]. I would suggest using the new Xen version if possible (at least v4.13).
>
> [1] https://lists.xenproject.org/archives/html/xen-devel/2019-05/msg01775.html
> [2] https://github.com/otyshchenko1/linux/commit/c25044845f2c3678f5df789881e7a125556af6fc
>
> --
> Regards,
>
> Oleksandr Tyshchenko
> <built-u-boot-xen-bootup-error_4.13.txt>
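
For reference, on Arm Xen takes the Dom0 kernel command line from the device tree's /chosen node, so Bertrand's suggestion can be applied via the xen,dom0-bootargs property. A minimal sketch with illustrative values (only console=hvc0 is the point here):

```dts
chosen {
        /* Command line for Xen itself (illustrative values) */
        xen,xen-bootargs = "console=dtuart dom0_mem=1024M";
        /* Command line Xen hands to the Dom0 kernel: route its
         * console to the Xen paravirtual console */
        xen,dom0-bootargs = "console=hvc0 rw";
};
```

Alternatively, console=hvc0 can simply be appended wherever the Dom0 command line is currently being set (e.g. an existing bootargs property or a boot script).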


From xen-devel-bounces@lists.xenproject.org Wed Jul 15 07:45:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jul 2020 07:45:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvc6v-0000Dg-I3; Wed, 15 Jul 2020 07:45:49 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=9G22=A2=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jvc6u-0000DZ-Ay
 for xen-devel@lists.xenproject.org; Wed, 15 Jul 2020 07:45:48 +0000
X-Inumbo-ID: 31fd1c10-c66f-11ea-b7bb-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 31fd1c10-c66f-11ea-b7bb-bc764e2007e4;
 Wed, 15 Jul 2020 07:45:47 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id C5CE2AC7F;
 Wed, 15 Jul 2020 07:45:49 +0000 (UTC)
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] x86/CPUID: move some static masks into .init
Message-ID: <2e3dfe1a-bc8b-6774-ef7e-efb565343c52@suse.com>
Date: Wed, 15 Jul 2020 09:45:47 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Except for hvm_shadow_max_featuremask and deep_features, these masks
are referenced by __init functions only.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/cpuid.c
+++ b/xen/arch/x86/cpuid.c
@@ -16,12 +16,15 @@
 const uint32_t known_features[] = INIT_KNOWN_FEATURES;
 const uint32_t special_features[] = INIT_SPECIAL_FEATURES;
 
-static const uint32_t pv_max_featuremask[] = INIT_PV_MAX_FEATURES;
+static const uint32_t __initconst pv_max_featuremask[] = INIT_PV_MAX_FEATURES;
 static const uint32_t hvm_shadow_max_featuremask[] = INIT_HVM_SHADOW_MAX_FEATURES;
-static const uint32_t hvm_hap_max_featuremask[] = INIT_HVM_HAP_MAX_FEATURES;
-static const uint32_t pv_def_featuremask[] = INIT_PV_DEF_FEATURES;
-static const uint32_t hvm_shadow_def_featuremask[] = INIT_HVM_SHADOW_DEF_FEATURES;
-static const uint32_t hvm_hap_def_featuremask[] = INIT_HVM_HAP_DEF_FEATURES;
+static const uint32_t __initconst hvm_hap_max_featuremask[] =
+    INIT_HVM_HAP_MAX_FEATURES;
+static const uint32_t __initconst pv_def_featuremask[] = INIT_PV_DEF_FEATURES;
+static const uint32_t __initconst hvm_shadow_def_featuremask[] =
+    INIT_HVM_SHADOW_DEF_FEATURES;
+static const uint32_t __initconst hvm_hap_def_featuremask[] =
+    INIT_HVM_HAP_DEF_FEATURES;
 static const uint32_t deep_features[] = INIT_DEEP_FEATURES;
 
 static int __init parse_xen_cpuid(const char *s)


From xen-devel-bounces@lists.xenproject.org Wed Jul 15 07:47:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jul 2020 07:47:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvc8q-0000MI-VM; Wed, 15 Jul 2020 07:47:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tzA3=A2=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jvc8p-0000MC-Nk
 for xen-devel@lists.xenproject.org; Wed, 15 Jul 2020 07:47:47 +0000
X-Inumbo-ID: 784d55c2-c66f-11ea-b7bb-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 784d55c2-c66f-11ea-b7bb-bc764e2007e4;
 Wed, 15 Jul 2020 07:47:45 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jvc8m-0006uT-7c; Wed, 15 Jul 2020 07:47:44 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jvc8l-0006so-QO; Wed, 15 Jul 2020 07:47:44 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jvc8l-0005FM-Js; Wed, 15 Jul 2020 07:47:43 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151895-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 151895: regressions - FAIL
X-Osstest-Failures: qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:redhat-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:redhat-install:fail:regression
 qemu-mainline:test-amd64-i386-freebsd10-i386:guest-start:fail:regression
 qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-start:fail:regression
 qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:windows-install:fail:regression
 qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: qemuu=20c1df5476e1e9b5d3f5b94f9f3ce01d21f14c46
X-Osstest-Versions-That: qemuu=9e3903136d9acde2fb2dd9e967ba928050a6cb4a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 15 Jul 2020 07:47:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151895 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151895/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemuu-ovmf-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-qemuu-nested-intel 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-ovmf-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-qemuu-rhel6hvm-amd 10 redhat-install     fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-debianhvm-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-qemuu-rhel6hvm-intel 10 redhat-install   fail REGR. vs. 151065
 test-amd64-i386-freebsd10-i386 11 guest-start            fail REGR. vs. 151065
 test-amd64-i386-freebsd10-amd64 11 guest-start           fail REGR. vs. 151065
 test-amd64-amd64-qemuu-nested-amd 10 debian-hvm-install  fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-win7-amd64 10 windows-install  fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-win7-amd64 10 windows-install   fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-ws16-amd64 10 windows-install  fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-ws16-amd64 10 windows-install   fail REGR. vs. 151065

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 151065
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 151065
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                20c1df5476e1e9b5d3f5b94f9f3ce01d21f14c46
baseline version:
 qemuu                9e3903136d9acde2fb2dd9e967ba928050a6cb4a

Last test of basis   151065  2020-06-12 22:27:51 Z   32 days
Failing since        151101  2020-06-14 08:32:51 Z   30 days   42 attempts
Testing same since   151874  2020-07-13 21:40:56 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Ahmed Karaman <ahmedkhaledkaraman@gmail.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.m.mail@gmail.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Boettcher <alexander.boettcher@genode-labs.com>
  Alexander Bulekov <alxndr@bu.edu>
  Alexey Krasikov <alex-krasikov@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Allan Peramaki <aperamak@pp1.inet.fi>
  Andrew Jones <drjones@redhat.com>
  Ani Sinha <ani.sinha@nutanix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Ard Biesheuvel <ardb@kernel.org>
  Artyom Tarasenko <atar4qemu@gmail.com>
  Aurelien Jarno <aurelien@aurel32.net>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Beata Michalska <beata.michalska@linaro.org>
  Bin Meng <bin.meng@windriver.com>
  Cameron Esfahani <dirty@apple.com>
  Catherine A. Frederick <chocola@animebitch.es>
  Cathy Zhang <cathy.zhang@intel.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christophe de Dinechin <dinechin@redhat.com>
  Cindy Lu <lulu@redhat.com>
  Claudio Fontana <cfontana@suse.de>
  Cleber Rosa <crosa@redhat.com>
  Colin Xu <colin.xu@intel.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniele Buono <dbuono@linux.vnet.ibm.com>
  David Carlier <devnexen@gmail.com>
  David Edmondson <david.edmondson@oracle.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Derek Su <dereksu@qnap.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Ed Robbins <E.J.C.Robbins@kent.ac.uk>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Emilio G. Cota <cota@braap.org>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Evgeny Yakovlev <eyakovlev@virtuozzo.com>
  fangying <fangying1@huawei.com>
  Farhan Ali <alifm@linux.ibm.com>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Geoffrey McRae <geoff@hostfission.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Giuseppe Musacchio <thatlemon@gmail.com>
  Greg Kurz <groug@kaod.org>
  Guenter Roeck <linux@roeck-us.net>
  Gustavo Romero <gromero@linux.ibm.com>
  Halil Pasic <pasic@linux.ibm.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Howard Spoelstra <hsp.cat7@gmail.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Ian Jiang <ianjiang.ict@gmail.com>
  Igor Mammedov <imammedo@redhat.com>
  Jan Kiszka <jan.kiszka@siemens.com>
  Janne Grunau <j@jannau.net>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason Wang <jasowang@redhat.com>
  Jay Zhou <jianjay.zhou@huawei.com>
  Jean-Christophe Dubois <jcd@tribudubois.net>
  Jessica Clarke <jrtc27@jrtc27.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jingqi Liu <jingqi.liu@intel.com>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Joseph Myers <joseph@codesourcery.com>
  Julio Faracco <jcfaracco@gmail.com>
  Keqian Zhu <zhukeqian1@huawei.com>
  Kevin Wolf <kwolf@redhat.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Leif Lindholm <leif@nuviainc.com>
  Leonid Bloch <lb.workbox@gmail.com>
  Leonid Bloch <lbloch@janustech.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@gmail.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  lichun <lichun@ruijie.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  Like Xu <like.xu@linux.intel.com>
  Lingfeng Yang <lfy@google.com>
  Lingshan zhu <lingshan.zhu@intel.com>
  Liran Alon <liran.alon@oracle.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Lukas Straub <lukasstraub2@web.de>
  Luwei Kang <luwei.kang@intel.com>
  Maciej S. Szmigiero <maciej.szmigiero@oracle.com>
  Magnus Damm <magnus.damm@gmail.com>
  Mao Zhongyi <maozhongyi@cmss.chinamobile.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marcelo Tosatti <mtosatti@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Mario Smarduch <msmarduch@digitalocean.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Masahiro Yamada <masahiroy@kernel.org>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Maxime Coquelin <maxime.coquelin@redhat.com>
  Menno Lageman <menno.lageman@oracle.com>
  Michael Rolnik <mrolnik@gmail.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michele Denber <denber@mindspring.com>
  Nir Soffer <nsoffer@redhat.com>
  Olaf Hering <olaf@aepfle.de>
  Pan Nengyuan <pannengyuan@huawei.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Durrant <pdurrant@amazon.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Radoslaw Biernacki <rad@semihalf.com>
  Raphael Norwitz <raphael.norwitz@nutanix.com>
  Richard Henderson <richard.henderson@linaro.org>
  Richard W.M. Jones <rjones@redhat.com>
  Riku Voipio <riku.voipio@linaro.org>
  Robert Foley <robert.foley@linaro.org>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Roman Kagan <rkagan@virtuozzo.com>
  Roman Kagan <rvkagan@yandex-team.ru>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Sarah Harris <S.E.Harris@kent.ac.uk>
  Sebastian Rasmussen <sebras@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Tao Xu <tao3.xu@intel.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tiwei Bie <tiwei.bie@intel.com>
  Tong Ho <tong.ho@xilinx.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  WangBowen <bowen.wang@intel.com>
  Wei Huang <wei.huang2@amd.com>
  Wei Wang <wei.w.wang@intel.com>
  Wentong Wu <wentong.wu@intel.com>
  Xie Yongji <xieyongji@bytedance.com>
  Ying Fang <fangying1@huawei.com>
  Yoshinori Sato <ysato@users.sourceforge.jp>
  Yuri Benditovich <yuri.benditovich@daynix.com>
  Zhang Chen <chen.zhang@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 26285 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Jul 15 08:32:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jul 2020 08:32:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvcpm-000519-0V; Wed, 15 Jul 2020 08:32:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=uJCN=A2=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1jvcpl-000514-BL
 for xen-devel@lists.xen.org; Wed, 15 Jul 2020 08:32:09 +0000
X-Inumbo-ID: ab8c5446-c675-11ea-b7bb-bc764e2007e4
Received: from mail-wr1-x433.google.com (unknown [2a00:1450:4864:20::433])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ab8c5446-c675-11ea-b7bb-bc764e2007e4;
 Wed, 15 Jul 2020 08:32:08 +0000 (UTC)
Received: by mail-wr1-x433.google.com with SMTP id z13so1494940wrw.5
 for <xen-devel@lists.xen.org>; Wed, 15 Jul 2020 01:32:08 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc; bh=oFaTic2J9aEhYOAcvU8ROjalBQdQEVEoC2cLZwlwnKU=;
 b=tBZf1pqm1aKtx68JMF1vWwJHNBf7SzgQOxCSWBuFsPT3ZSj88H8aVam0PXGtxWsK2p
 LPJu1Imph3yAZvVB7lDAdayNjtrjGbMFZjtffDSV6BR+bQTyimWT4txevBfCKReLlKUs
 H+Ew4lRbrfxQ3o442AN/PRzZgDD83DEkaGsWoZUCoIeWuQsOgxdhOZH5Q61QLj/nYChR
 fPz0iig3nwkRRPBiRugqWmszBvyhxu0zknvODsN/x+NcdSecGQuYWjOJ2WeA1P0ogI1v
 L/JwXmPR53fxbtuFWj1kEfHx1tVhWYRH+C9FJzIY2dJeEFD7heQASGi5YbujabQDcLJe
 TXNg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc;
 bh=oFaTic2J9aEhYOAcvU8ROjalBQdQEVEoC2cLZwlwnKU=;
 b=ODM9YAXsyv4BIL7E+12vccpn/ZcQZbK9VhqHh6a2bWDSngbCXl9jbP8aGBev+FwloJ
 nXQzA9megKJCVT14aoG7uUnsRsdR4uC+/P+QibMKKBEdQMPVlBFf4691R0sS5tjRIPB/
 RtG1WL7B0uUy5Ss/eLMcFFdxRVJ7mERLQLhvYvj+kxz2uwJLuo91IwYStMC3G7gyETNC
 2m8VZkZu5CFiEwcq7G2fEeXBqzmz520O5g8awTN+ka00Gmg04S5DZ8LDajFqKDK40SBM
 Be8CSBycjjFhIRGOhPZMPzjk9RvvhFdn6xMjfq+9zLDCDsR0u0Cw9X0LumjLmgXYzxjC
 Dvzg==
X-Gm-Message-State: AOAM532QlWpjZEHo/a6ZotUn+yyPqWAVRb5Et48b8CI08vx12Ks90Igt
 5CLCu5hTjb+eQZCs5RxVqqJFSV4tN+Od5FbNX3I=
X-Google-Smtp-Source: ABdhPJy6RLGkDJvsLOl/Ba4wmCtrxHhI4uA34a7lWIKol4dLNNOYWcYi2fTzk34ryT+Z+vuD3nx/xjWrMRN7o+16jW4=
X-Received: by 2002:adf:f751:: with SMTP id z17mr10544307wrp.114.1594801927584; 
 Wed, 15 Jul 2020 01:32:07 -0700 (PDT)
MIME-Version: 1.0
References: <1b60ed1cd7834ed5957a2b4870602073@in.bosch.com>
 <1D0E7281-95D7-482E-BF6D-EE5B1FEE4876@arm.com>
 <ab84437081a346d6bf0f73581382c74e@in.bosch.com>
 <D84A5DA7-683C-480B-8837-C51D560FC2E1@arm.com>
 <139024a891324455a13a3d468908798d@in.bosch.com>
 <C3BCAA62-51EF-49DD-B978-6657BC6D5A21@arm.com>
 <67b4454424c4485fb59d542d052aaf2d@in.bosch.com>
 <CAPD2p-nZZpDBZ5yc=gVvVAW1oFdN0KZ2jMH-T59W_sntsENwxw@mail.gmail.com>
 <3f155a0b598745a3b2d158599dd992fd@in.bosch.com>
 <0AC5E91F-7C7A-4B5A-AE55-E48574AB04C5@arm.com>
In-Reply-To: <0AC5E91F-7C7A-4B5A-AE55-E48574AB04C5@arm.com>
From: Oleksandr Tyshchenko <olekstysh@gmail.com>
Date: Wed, 15 Jul 2020 11:31:55 +0300
Message-ID: <CAPD2p-kh7XoG+DGs08rjJQBOOPUDFOsc_yys+p6jTAqRBYyymg@mail.gmail.com>
Subject: Re: [BUG] Xen build for RCAR failing
To: "Manikandan Chockalingam (RBEI/ECF3)"
 <Manikandan.Chockalingam@in.bosch.com>
Content-Type: multipart/alternative; boundary="0000000000006180a705aa76c2a0"
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: nd <nd@arm.com>, Bertrand Marquis <Bertrand.Marquis@arm.com>,
 "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

--0000000000006180a705aa76c2a0
Content-Type: text/plain; charset="UTF-8"

Hello all

[Sorry for the possible format issues]


> > Is there any document I could refer to in order to bring up Xen [Dom0]
> > and get hands-on experience? Because I am currently seeing no output
> > after this.
>

The only document I am aware of is [1]. *Although it is not strictly about
bringing up Xen Dom0*, it describes how to bring up a whole Xen-based
system (based on the v4.13 release) with a thin, generic ARMv8 Dom0
without H/W access, a DomD with H/W access (based on Renesas BSP v3.21.0),
and a DomU with a set of PV drivers. It therefore contains a lot of
information that is probably extra/unneeded for you, but you can still
refer to the meta-xt-prod-devel/meta-xt-images Yocto layers to find
information on various Xen bits (including the default chosen node for
the host device tree).

[1]
https://github.com/xen-troops/meta-xt-prod-devel/blob/master/INSTALL.txt

-- 
Regards,

Oleksandr Tyshchenko

--0000000000006180a705aa76c2a0--


From xen-devel-bounces@lists.xenproject.org Wed Jul 15 08:34:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jul 2020 08:34:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvcsS-00059a-Em; Wed, 15 Jul 2020 08:34:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=osgb=A2=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jvcsQ-00059V-Vt
 for xen-devel@lists.xenproject.org; Wed, 15 Jul 2020 08:34:55 +0000
X-Inumbo-ID: 0d0e30c2-c676-11ea-bb8b-bc764e2007e4
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0d0e30c2-c676-11ea-bb8b-bc764e2007e4;
 Wed, 15 Jul 2020 08:34:54 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1594802093;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:content-transfer-encoding:in-reply-to;
 bh=m1zYlnWCGeigBRLopOjVVCPrIjQTIp4NpqO1z7cdwF4=;
 b=c3vSpRcrchH2MGhfiZvcH1S4LCkmxE+Qbx9C2m6wBm8O7tsHgSgwf0ND
 PqqL1rLTujN1lDWNTi7kn/k5gYbHd9PKjl6i7Pde6lbyxwwHClnrQbBAL
 FfdyEqCv5RXcQxplegZzobFLImJYHHk5QWhELHGK8/ZVN/Oaeh1VwOON6 s=;
Authentication-Results: esa4.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: gpXS6aAanqMBJpS65vS253WVzDpztu8kXwN+gYoGnX3JitLB9lQLZIAmPz9YOoJl+lLComVGv+
 M3xnMFQ02IPwTh0t0mTFCQ1WyMTahyxo3RRaLc65CzY2oSxsA5Pua4Mk4yfqB4+HQgSm9Qx0Gx
 kZo6sK/nfXiEIo4bocXLeI0xhV5s2Yx5CcwJceYJVJ3PyvEDhwGNk62muZfiJ4v2qQvfbeOisG
 hYRFWZ1WKOHo+kA8OsJ0LbmMwEwomlsBUfu1TmG7roxGHND7eOH6Aocxz/JtsuVoMIbZe/dMiq
 6bE=
X-SBRS: 2.7
X-MesageID: 23256205
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,354,1589256000"; d="scan'208";a="23256205"
Date: Wed, 15 Jul 2020 10:34:41 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Subject: Re: [PATCH v2 5/7] x86: generalize padding field handling
Message-ID: <20200715083441.GR7191@Air-de-Roger>
References: <bb6a96c6-b6b1-76ff-f9db-10bec0fb4ab1@suse.com>
 <83274416-2812-53c9-f8cb-23ebdf73782e@suse.com>
 <20200714142948.GK7191@Air-de-Roger>
 <a319e308-9cf3-52dc-1883-fe749e3c5629@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <a319e308-9cf3-52dc-1883-fe749e3c5629@suse.com>
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Paul Durrant <paul@xen.org>,
 George Dunlap <George.Dunlap@eu.citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Wed, Jul 15, 2020 at 08:36:10AM +0200, Jan Beulich wrote:
> On 14.07.2020 16:29, Roger Pau Monné wrote:
> > On Wed, Jul 01, 2020 at 12:27:37PM +0200, Jan Beulich wrote:
> >> The original intention was to ignore padding fields, but the pattern
> >> matched only ones whose names started with an underscore. Also match
> >> fields whose names are in line with the C spec by not having a leading
> >> underscore. (Note that the leading ^ in the sed regexps was pointless
> >> and hence get dropped.)
> >>
> >> This requires adjusting some vNUMA macros, to avoid triggering
> >> "enumeration value ... not handled in switch" warnings, which - due to
> >> -Werror - would cause the build to fail. (I have to admit that I find
> >> these padding fields odd, when translation of the containing structure
> >> is needed anyway.)
> >>
> >> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> > 
> > Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
> 
> Thanks.
> 
> >> ---
> >> While for translation macros skipping padding fields pretty surely is a
> >> reasonable thing to do, we may want to consider not ignoring them when
> >> generating checking macros.
> 
> (note this remark, towards your question at the end)
> 
> >> --- a/xen/common/compat/memory.c
> >> +++ b/xen/common/compat/memory.c
> >> @@ -354,10 +354,13 @@ int compat_memory_op(unsigned int cmd, X
> >>                  return -EFAULT;
> >>  
> >>  #define XLAT_vnuma_topology_info_HNDL_vdistance_h(_d_, _s_)		\
> >> +            case XLAT_vnuma_topology_info_vdistance_pad:                \
> >>              guest_from_compat_handle((_d_)->vdistance.h, (_s_)->vdistance.h)
> >>  #define XLAT_vnuma_topology_info_HNDL_vcpu_to_vnode_h(_d_, _s_)		\
> >> +            case XLAT_vnuma_topology_info_vcpu_to_vnode_pad:            \
> >>              guest_from_compat_handle((_d_)->vcpu_to_vnode.h, (_s_)->vcpu_to_vnode.h)
> >>  #define XLAT_vnuma_topology_info_HNDL_vmemrange_h(_d_, _s_)		\
> >> +            case XLAT_vnuma_topology_info_vmemrange_pad:                \
> >>              guest_from_compat_handle((_d_)->vmemrange.h, (_s_)->vmemrange.h)
> > 
> > I find this quite ugly, would it be better to just handle them with a
> > default case in the XLAT_ macros?
> 
> Default cases explicitly do not get added to be able to spot missing
> case labels, as most compilers will warn about such when the controlling
> expression is of enum type.

As you say in the comment above, ignoring those for the translation
macros would be better, and would avoid the ugliness of having to add
the _pad cases here.

> > AFAICT it will also set (_d_)->vmemrange.h twice?
> 
> I'm not seeing it (and if it was, I'd then also wonder why not for the
> other two handles above). This is the generated macro:
> 
> #define XLAT_vnuma_topology_info(_d_, _s_) do { \
>     (_d_)->domid = (_s_)->domid; \
>     (_d_)->nr_vnodes = (_s_)->nr_vnodes; \
>     (_d_)->nr_vcpus = (_s_)->nr_vcpus; \
>     (_d_)->nr_vmemranges = (_s_)->nr_vmemranges; \
>     switch (vdistance) { \
>     case XLAT_vnuma_topology_info_vdistance_h: \
>         XLAT_vnuma_topology_info_HNDL_vdistance_h(_d_, _s_); \
>         break; \
>     } \
>     switch (vcpu_to_vnode) { \
>     case XLAT_vnuma_topology_info_vcpu_to_vnode_h: \
>         XLAT_vnuma_topology_info_HNDL_vcpu_to_vnode_h(_d_, _s_); \
>         break; \
>     } \
>     switch (vmemrange) { \
>     case XLAT_vnuma_topology_info_vmemrange_h: \
>         XLAT_vnuma_topology_info_HNDL_vmemrange_h(_d_, _s_); \
>         break; \
>     } \
> } while (0)
> 
> Am I overlooking any further aspect?

No, vdistance, vcpu_to_vnode and vmemrange are set by the caller, so
the enums will never have the _pad value, and hence the assignment will
only be done once; you are right.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Wed Jul 15 08:41:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jul 2020 08:41:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvcym-0005zr-5o; Wed, 15 Jul 2020 08:41:28 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=osgb=A2=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jvcyl-0005zl-3K
 for xen-devel@lists.xenproject.org; Wed, 15 Jul 2020 08:41:27 +0000
X-Inumbo-ID: f685dbba-c676-11ea-939d-12813bfff9fa
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f685dbba-c676-11ea-939d-12813bfff9fa;
 Wed, 15 Jul 2020 08:41:24 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1594802485;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:content-transfer-encoding:in-reply-to;
 bh=qOIlK/cVyyOE1sWXX5QKm0Oq6F74EPo/yMXtJBvTGzc=;
 b=ETTz8vtcZn5eFji4cWWsgyyh0YeCz3cKGfCJfFx5V/XE+b391Q9Hi6/a
 XtpTq8/mOcIkahl7I+YnfJ71CkDtPgm3AOkP/cFedBRujpCbHdo3TaERp
 5FGmSUrlcArdXLztyKFwSF08yLvimm2gmRVAfAPzcoX/g6pUh68BOjTEl w=;
Authentication-Results: esa3.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: b7rsqCGmoulWnZI2MCgkcsNEKMBHqsqx/4UsM2fKwx2lHSNCWuPfD5mkXkZHbezHKbJ6RMvscE
 6NCiv6ivzGBbmr4bN5qMTB/rVAmL0lLoJ3j/EPfj91VQbF9abgeQ/OF5cILErAv3+++ZdmIQUo
 lukqUnj5FPabD6n16K6oq96NBSK616oX9iVjE2ti3nUJYrQRG1lIGne25K6zz/t6VcVyZqBj/d
 bFAyv/w2IdyVroDvtbeeZVnW82yyrGFCbzGPuYLNMXy3lOGrqpFsU/AOQBOSDyC+Q2JVwwmqkr
 ZnM=
X-SBRS: 2.7
X-MesageID: 22407872
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,354,1589256000"; d="scan'208";a="22407872"
Date: Wed, 15 Jul 2020 10:41:15 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Subject: Re: [PATCH v2 6/7] flask: drop dead compat translation code
Message-ID: <20200715084115.GS7191@Air-de-Roger>
References: <bb6a96c6-b6b1-76ff-f9db-10bec0fb4ab1@suse.com>
 <7711f68d-394e-a74f-81fa-51f8447174ce@suse.com>
 <20200714145800.GO7191@Air-de-Roger>
 <937a51c5-7563-0ac2-4ada-b4dfd7a5d636@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <937a51c5-7563-0ac2-4ada-b4dfd7a5d636@suse.com>
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Paul Durrant <paul@xen.org>,
 George Dunlap <George.Dunlap@eu.citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Daniel de Graaf <dgdegra@tycho.nsa.gov>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Wed, Jul 15, 2020 at 08:42:44AM +0200, Jan Beulich wrote:
> On 14.07.2020 16:58, Roger Pau Monné wrote:
> > On Wed, Jul 01, 2020 at 12:28:07PM +0200, Jan Beulich wrote:
> >> Translation macros aren't needed at all (or else a devicetree_label
> >> entry would have been missing), and userlist has been removed quite some
> >> time ago.
> >>
> >> No functional change.
> >>
> >> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> >>
> >> --- a/xen/include/xlat.lst
> >> +++ b/xen/include/xlat.lst
> >> @@ -148,14 +148,11 @@
> >>  ?	xenoprof_init			xenoprof.h
> >>  ?	xenoprof_passive		xenoprof.h
> >>  ?	flask_access			xsm/flask_op.h
> >> -!	flask_boolean			xsm/flask_op.h
> >>  ?	flask_cache_stats		xsm/flask_op.h
> >>  ?	flask_hash_stats		xsm/flask_op.h
> >> -!	flask_load			xsm/flask_op.h
> >>  ?	flask_ocontext			xsm/flask_op.h
> >>  ?	flask_peersid			xsm/flask_op.h
> >>  ?	flask_relabel			xsm/flask_op.h
> >>  ?	flask_setavc_threshold		xsm/flask_op.h
> >>  ?	flask_setenforce		xsm/flask_op.h
> >> -!	flask_sid_context		xsm/flask_op.h
> >>  ?	flask_transition		xsm/flask_op.h
> > 
> > Shouldn't those become checks then?
> 
> No, checking will never succeed for structures containing
> XEN_GUEST_HANDLE(). But there's no point in generating xlat macros
> when they're never used. There are two fundamentally different
> strategies for handling the compat hypercalls: One is to wrap a
> translation layer around the native hypercall. That's where the
> xlat macros come into play. The other, used here, is to compile
> the entire hypercall function a second time, arranging for the
> compat structures to get used in place of the native ones. There
> are no xlat macros involved here, all that's needed are correctly
> translated structures. (For completeness, x86's MCA hypercall
> uses yet another, quite adhoc strategy for handling, but also not
> involving any xlat macro use. Hence the consideration there to
> possibly drop the respective lines from the file here.)

Thanks, I think this explanation is helpful, and I wonder whether it
would be possible to have something along these lines in a file or as a
comment somewhere, maybe at the top of xlat.lst?

Also could you add a line to the commit message noting that flask code
doesn't use any of the translation macros because it follows a
different approach to compat handling?

IMO the compat code is complicated to understand, and it also seems to
be mostly undocumented.

For the patch:

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks.


From xen-devel-bounces@lists.xenproject.org Wed Jul 15 08:43:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jul 2020 08:43:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvd0s-00067Z-IU; Wed, 15 Jul 2020 08:43:38 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=osgb=A2=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jvd0q-00067S-Kd
 for xen-devel@lists.xenproject.org; Wed, 15 Jul 2020 08:43:36 +0000
X-Inumbo-ID: 450ad31c-c677-11ea-bb8b-bc764e2007e4
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 450ad31c-c677-11ea-bb8b-bc764e2007e4;
 Wed, 15 Jul 2020 08:43:35 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1594802615;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:content-transfer-encoding:in-reply-to;
 bh=LtIovrDRKRVQOx6LezTe8u0WyCRzFKL+6oJm1gZCxU0=;
 b=fsgq+PqwrldIJ8oOfRPrzUX66l9vbAsADJQ208PA0WeEfxLgfjWxihtP
 weCC9Z6r0MBib7MV73EujBYK1/G6HuyiTcvuSKR382fTYGRZuNpHGIivX
 3e2lx+PjGubpB9bwQ73ShRVTUMFostofH0Xz9sKEkfPiSdRczPpMwukvA A=;
Authentication-Results: esa3.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: 6gD07dbHdMmGsBeraAY5oSVSybiFjz4OED5sAvK0ArctRRDcgo/R5JaboNEOi0zNP8XgSRZt+6
 S9IN2PIMYvgcY0mxOkkoMwqTvrVS9AVvOxLfKXRvrF58MTIwROtPzgIy8SNoGGYm+oH1EwiqVv
 Z1Zz9er2TP7gCUXrODa63in3hKS+fK+rw0Woy7DOxTUe/DbwpmpNT+7vbNJcyneGDJYjo7pyLZ
 euhxalbUYHx2vKzPIPkCUlVmtz51AhZbiMHBl7z4jcg9OZ5OnBkmLCCTOYBk0+c1/WwdcDRuWq
 Htw=
X-SBRS: 2.7
X-MesageID: 22408004
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,354,1589256000"; d="scan'208";a="22408004"
Date: Wed, 15 Jul 2020 10:43:25 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Subject: Re: [PATCH v2 1/7] x86: fix compat header generation
Message-ID: <20200715084325.GT7191@Air-de-Roger>
References: <bb6a96c6-b6b1-76ff-f9db-10bec0fb4ab1@suse.com>
 <a8139d0e-f332-b877-dea8-3ce8a6869285@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <a8139d0e-f332-b877-dea8-3ce8a6869285@suse.com>
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Paul Durrant <paul@xen.org>,
 George Dunlap <George.Dunlap@eu.citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Wed, Jul 01, 2020 at 12:25:15PM +0200, Jan Beulich wrote:
> As was pointed out by 0e2e54966af5 ("mm: fix public declaration of
> struct xen_mem_acquire_resource"), we're not currently handling structs
> correctly that have uint64_aligned_t fields. #pragma pack(4) suppresses
> the necessary alignment even if the type did properly survive (which
> it also didn't) in the process of generating the headers. Overall,
> with the above mentioned change applied, there's only a latent issue
> here afaict, i.e. no other of our interface structs is currently
> affected.
> 
> As a result it is clear that using #pragma pack(4) is not an option.
> Drop all uses from compat header generation. Make sure
> {,u}int64_aligned_t actually survives, such that explicitly aligned
> fields will remain aligned. Arrange for {,u}int64_t to be transformed
> into a type that's 64 bits wide and 4-byte aligned, by utilizing that
> in typedef-s the "aligned" attribute can be used to reduce alignment.
> Additionally, for the cases where native structures get re-used,
> enforce suitable alignment via typedef-s (which allow alignment to be
> reduced).
> 
> This use of typedef-s makes necessary changes to CHECK_*() macro
> generation: Previously get-fields.sh relied on finding struct/union
> keywords when other compound types were used. We now need to use the
> typedef-s (guaranteeing suitable alignment) now, and hence the script
> has to recognize those cases, too. (Unfortunately there are a few
> special cases to be dealt with, but this is really not much different
> from e.g. the pre-existing compat_domain_handle_t special case.)
> 
> This need to use typedef-s is certainly somewhat fragile going forward,
> as in similar future cases it is imperative to also use typedef-s, or
> else the CHECK_*() macros won't check what they're supposed to check. I
> don't currently see any means to avoid this fragility, though.
> 
> There's one change to generated code according to my observations: In
> arch_compat_vcpu_op() the runstate area "area" variable would previously
> have been put in a just 4-byte aligned stack slot (despite being 8 bytes
> in size), whereas now it gets put in an 8-byte aligned location.
> 
> There also results some curious inconsistency in struct xen_mc from
> these changes - I intend to clean this up later on. Otherwise unrelated
> code would also need adjustment right here.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks.


From xen-devel-bounces@lists.xenproject.org Wed Jul 15 08:47:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jul 2020 08:47:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvd4W-0006GD-2n; Wed, 15 Jul 2020 08:47:24 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=9G22=A2=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jvd4U-0006G8-W4
 for xen-devel@lists.xenproject.org; Wed, 15 Jul 2020 08:47:23 +0000
X-Inumbo-ID: cc1d5cf8-c677-11ea-bca7-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id cc1d5cf8-c677-11ea-bca7-bc764e2007e4;
 Wed, 15 Jul 2020 08:47:22 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 5921CAE65;
 Wed, 15 Jul 2020 08:47:24 +0000 (UTC)
Subject: Re: [PATCH v2 5/7] x86: generalize padding field handling
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <bb6a96c6-b6b1-76ff-f9db-10bec0fb4ab1@suse.com>
 <83274416-2812-53c9-f8cb-23ebdf73782e@suse.com>
 <20200714142948.GK7191@Air-de-Roger>
 <a319e308-9cf3-52dc-1883-fe749e3c5629@suse.com>
 <20200715083441.GR7191@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <cf99e680-c870-bb7c-8513-dc5b17595afe@suse.com>
Date: Wed, 15 Jul 2020 10:47:21 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20200715083441.GR7191@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Paul Durrant <paul@xen.org>,
 George Dunlap <George.Dunlap@eu.citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 15.07.2020 10:34, Roger Pau Monné wrote:
> On Wed, Jul 15, 2020 at 08:36:10AM +0200, Jan Beulich wrote:
>> On 14.07.2020 16:29, Roger Pau Monné wrote:
>>> On Wed, Jul 01, 2020 at 12:27:37PM +0200, Jan Beulich wrote:
>>>> --- a/xen/common/compat/memory.c
>>>> +++ b/xen/common/compat/memory.c
>>>> @@ -354,10 +354,13 @@ int compat_memory_op(unsigned int cmd, X
>>>>                  return -EFAULT;
>>>>  
>>>>  #define XLAT_vnuma_topology_info_HNDL_vdistance_h(_d_, _s_)		\
>>>> +            case XLAT_vnuma_topology_info_vdistance_pad:                \
>>>>              guest_from_compat_handle((_d_)->vdistance.h, (_s_)->vdistance.h)
>>>>  #define XLAT_vnuma_topology_info_HNDL_vcpu_to_vnode_h(_d_, _s_)		\
>>>> +            case XLAT_vnuma_topology_info_vcpu_to_vnode_pad:            \
>>>>              guest_from_compat_handle((_d_)->vcpu_to_vnode.h, (_s_)->vcpu_to_vnode.h)
>>>>  #define XLAT_vnuma_topology_info_HNDL_vmemrange_h(_d_, _s_)		\
>>>> +            case XLAT_vnuma_topology_info_vmemrange_pad:                \
>>>>              guest_from_compat_handle((_d_)->vmemrange.h, (_s_)->vmemrange.h)
>>>
>>> I find this quite ugly, would it be better to just handle them with a
>>> default case in the XLAT_ macros?
>>
>> Default cases explicitly do not get added to be able to spot missing
>> case labels, as most compilers will warn about such when the controlling
>> expression is of enum type.
> 
> As you say on the comment above, ignoring those for translation
> macros would be better, and would avoid the ugliness of having to add
> the _pad cases here.

Ah, yes, provided the proposed adjustment would also suppress the
generation of the respective enumerators.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jul 15 08:51:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jul 2020 08:51:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvd8t-000746-Mb; Wed, 15 Jul 2020 08:51:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tzA3=A2=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jvd8s-000741-9v
 for xen-devel@lists.xenproject.org; Wed, 15 Jul 2020 08:51:54 +0000
X-Inumbo-ID: 6d910eb8-c678-11ea-bb8b-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6d910eb8-c678-11ea-bb8b-bc764e2007e4;
 Wed, 15 Jul 2020 08:51:52 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=cDJ7DfhOmk5fFpJLBNUaqmzvv+udHQ1GRVdbwZmc+c4=; b=rrry1YejXwDgYS6F3Cv/3okdK
 vql0D9hq+Kiu/JNXBwZc4HnQqa4IFoA2AP1qSw9mBfjMN6P0ZkwDHaXjrdrqCXFHeH4DmMIYi+wzV
 sN0O1SH1JTEC7aNdv826fQ7wfnk0Cfv2MS4NDrJJpS8LhiZXT7U+m2cRr2SWqutwsbSoI=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jvd8p-0000Kr-PZ; Wed, 15 Jul 2020 08:51:51 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jvd8p-0002oQ-6Q; Wed, 15 Jul 2020 08:51:51 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jvd8p-0006Mg-5i; Wed, 15 Jul 2020 08:51:51 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151899-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.14-testing test] 151899: tolerable FAIL - PUSHED
X-Osstest-Failures: xen-4.14-testing:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:allowable
 xen-4.14-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 xen-4.14-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-4.14-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 xen-4.14-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-4.14-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-4.14-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-4.14-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-4.14-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
X-Osstest-Versions-This: xen=ce3c4493e4e6c94495ddd8538e801a35980bff0d
X-Osstest-Versions-That: xen=02d69864b51a4302a148c28d6d391238a6778b4b
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 15 Jul 2020 08:51:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151899 xen-4.14-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151899/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds    16 guest-start/debian.repeat fail REGR. vs. 151892

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop             fail never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop             fail never pass

version targeted for testing:
 xen                  ce3c4493e4e6c94495ddd8538e801a35980bff0d
baseline version:
 xen                  02d69864b51a4302a148c28d6d391238a6778b4b

Last test of basis   151892  2020-07-14 09:21:35 Z    0 days
Testing same since   151899  2020-07-14 18:07:15 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ian Jackson <ian.jackson@eu.citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   02d69864b5..ce3c4493e4  ce3c4493e4e6c94495ddd8538e801a35980bff0d -> stable-4.14


From xen-devel-bounces@lists.xenproject.org Wed Jul 15 08:52:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jul 2020 08:52:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvd9U-00078C-50; Wed, 15 Jul 2020 08:52:32 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=9G22=A2=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jvd9S-000780-CK
 for xen-devel@lists.xenproject.org; Wed, 15 Jul 2020 08:52:30 +0000
X-Inumbo-ID: 82b1e77c-c678-11ea-939f-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 82b1e77c-c678-11ea-939f-12813bfff9fa;
 Wed, 15 Jul 2020 08:52:28 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id A844DAD75;
 Wed, 15 Jul 2020 08:52:30 +0000 (UTC)
Subject: Re: [PATCH v2 6/7] flask: drop dead compat translation code
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <bb6a96c6-b6b1-76ff-f9db-10bec0fb4ab1@suse.com>
 <7711f68d-394e-a74f-81fa-51f8447174ce@suse.com>
 <20200714145800.GO7191@Air-de-Roger>
 <937a51c5-7563-0ac2-4ada-b4dfd7a5d636@suse.com>
 <20200715084115.GS7191@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <9dc3e0fd-3e1a-5bd0-b8c7-01287e5c2c93@suse.com>
Date: Wed, 15 Jul 2020 10:52:28 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20200715084115.GS7191@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Paul Durrant <paul@xen.org>,
 George Dunlap <George.Dunlap@eu.citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Daniel de Graaf <dgdegra@tycho.nsa.gov>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 15.07.2020 10:41, Roger Pau Monné wrote:
> On Wed, Jul 15, 2020 at 08:42:44AM +0200, Jan Beulich wrote:
>> On 14.07.2020 16:58, Roger Pau Monné wrote:
>>> On Wed, Jul 01, 2020 at 12:28:07PM +0200, Jan Beulich wrote:
>>>> Translation macros aren't needed at all (or else a devicetree_label
>>>> entry would have been missing), and userlist has been removed quite some
>>>> time ago.
>>>>
>>>> No functional change.
>>>>
>>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>>>
>>>> --- a/xen/include/xlat.lst
>>>> +++ b/xen/include/xlat.lst
>>>> @@ -148,14 +148,11 @@
>>>>  ?	xenoprof_init			xenoprof.h
>>>>  ?	xenoprof_passive		xenoprof.h
>>>>  ?	flask_access			xsm/flask_op.h
>>>> -!	flask_boolean			xsm/flask_op.h
>>>>  ?	flask_cache_stats		xsm/flask_op.h
>>>>  ?	flask_hash_stats		xsm/flask_op.h
>>>> -!	flask_load			xsm/flask_op.h
>>>>  ?	flask_ocontext			xsm/flask_op.h
>>>>  ?	flask_peersid			xsm/flask_op.h
>>>>  ?	flask_relabel			xsm/flask_op.h
>>>>  ?	flask_setavc_threshold		xsm/flask_op.h
>>>>  ?	flask_setenforce		xsm/flask_op.h
>>>> -!	flask_sid_context		xsm/flask_op.h
>>>>  ?	flask_transition		xsm/flask_op.h
>>>
>>> Shouldn't those become checks then?
>>
>> No, checking will never succeed for structures containing
>> XEN_GUEST_HANDLE(). But there's no point in generating xlat macros
>> when they're never used. There are two fundamentally different
>> strategies for handling the compat hypercalls: One is to wrap a
>> translation layer around the native hypercall. That's where the
>> xlat macros come into play. The other, used here, is to compile
>> the entire hypercall function a second time, arranging for the
>> compat structures to get used in place of the native ones. There
>> are no xlat macros involved here; all that's needed are correctly
>> translated structures. (For completeness, x86's MCA hypercall
>> uses yet another, quite ad hoc strategy for handling, but also not
>> involving any xlat macro use. Hence the consideration there to
>> possibly drop the respective lines from the file here.)
> 
> Thanks, I think this explanation is helpful and I wonder whether it
> would be possible to have something along these lines in a file or as a
> comment somewhere, maybe at the top of xlat.lst?

To be honest - I'm not sure: Such a comment may indeed be helpful
to have, but I don't think I can see any single good place for it
to live. For people editing xlat.lst (a file the existence of which
many aren't even aware of), this would be a good place. But how
would others have any chance of running into this comment?

> Also could you add a line to the commit message noting that flask code
> doesn't use any of the translation macros because it follows a
> different approach to compat handling?

I've made the sentence start "Translation macros aren't used (and
hence needed) at all ..." - is that enough of an adjustment?

> For the patch:
> 
> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jul 15 08:56:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jul 2020 08:56:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvdDS-0007Mw-P5; Wed, 15 Jul 2020 08:56:38 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=9G22=A2=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jvdDQ-0007Mr-TS
 for xen-devel@lists.xenproject.org; Wed, 15 Jul 2020 08:56:36 +0000
X-Inumbo-ID: 16747358-c679-11ea-b7bb-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 16747358-c679-11ea-b7bb-bc764e2007e4;
 Wed, 15 Jul 2020 08:56:36 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 9B0DEAF44;
 Wed, 15 Jul 2020 08:56:38 +0000 (UTC)
Subject: Re: [PATCH v2 1/7] x86: fix compat header generation
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <bb6a96c6-b6b1-76ff-f9db-10bec0fb4ab1@suse.com>
 <a8139d0e-f332-b877-dea8-3ce8a6869285@suse.com>
 <20200715084325.GT7191@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <f92d154e-514e-031f-aaad-f1534a06e514@suse.com>
Date: Wed, 15 Jul 2020 10:56:36 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20200715084325.GT7191@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Paul Durrant <paul@xen.org>,
 George Dunlap <George.Dunlap@eu.citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 15.07.2020 10:43, Roger Pau Monné wrote:
> On Wed, Jul 01, 2020 at 12:25:15PM +0200, Jan Beulich wrote:
>> As was pointed out by 0e2e54966af5 ("mm: fix public declaration of
>> struct xen_mem_acquire_resource"), we're not currently handling structs
>> correctly that have uint64_aligned_t fields. #pragma pack(4) suppresses
>> the necessary alignment even if the type did properly survive (which
>> it also didn't) in the process of generating the headers. Overall,
>> with the above mentioned change applied, there's only a latent issue
>> here afaict, i.e. no other of our interface structs is currently
>> affected.
>>
>> As a result it is clear that using #pragma pack(4) is not an option.
>> Drop all uses from compat header generation. Make sure
>> {,u}int64_aligned_t actually survives, such that explicitly aligned
>> fields will remain aligned. Arrange for {,u}int64_t to be transformed
>> into a type that's 64 bits wide and 4-byte aligned, by utilizing that
>> in typedef-s the "aligned" attribute can be used to reduce alignment.
>> Additionally, for the cases where native structures get re-used,
>> enforce suitable alignment via typedef-s (which allow alignment to be
>> reduced).
>>
>> This use of typedef-s makes necessary changes to CHECK_*() macro
>> generation: Previously get-fields.sh relied on finding struct/union
>> keywords when other compound types were used. We now need to use the
>> typedef-s (guaranteeing suitable alignment), and hence the script
>> has to recognize those cases, too. (Unfortunately there are a few
>> special cases to be dealt with, but this is really not much different
>> from e.g. the pre-existing compat_domain_handle_t special case.)
>>
>> This need to use typedef-s is certainly somewhat fragile going forward,
>> as in similar future cases it is imperative to also use typedef-s, or
>> else the CHECK_*() macros won't check what they're supposed to check. I
>> don't currently see any means to avoid this fragility, though.
>>
>> There's one change to generated code according to my observations: In
>> arch_compat_vcpu_op() the runstate area "area" variable would previously
>> have been put in a just 4-byte aligned stack slot (despite being 8 bytes
>> in size), whereas now it gets put in an 8-byte aligned location.
>>
>> There also results some curious inconsistency in struct xen_mc from
>> these changes - I intend to clean this up later on. Otherwise unrelated
>> code would also need adjustment right here.
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks. I shall send out v3, as I had to fix an issue with old gcc:
There were two (identical) typedef-s for {,u}int64_compat_t, which
newer gcc versions tolerate, but older ones don't.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jul 15 09:01:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jul 2020 09:01:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvdIN-0008Di-F4; Wed, 15 Jul 2020 09:01:43 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=U57p=A2=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jvdIM-0008Dd-T7
 for xen-devel@lists.xenproject.org; Wed, 15 Jul 2020 09:01:42 +0000
X-Inumbo-ID: cccba14e-c679-11ea-93a3-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id cccba14e-c679-11ea-93a3-12813bfff9fa;
 Wed, 15 Jul 2020 09:01:42 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 7F3B0AF59;
 Wed, 15 Jul 2020 09:01:44 +0000 (UTC)
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH] qemu-trad: remove Xen path dependencies
Date: Wed, 15 Jul 2020 11:01:40 +0200
Message-Id: <20200715090140.7590-1-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>, ian.jackson@eu.citrix.com
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

xen-hooks.mak contains hard-wired paths for the libraries used by
qemu-trad. Replace those with the make variables from Xen's Rules.mk,
which is already included.

This in turn removes the need to add runtime link paths for the
indirect libraries that the directly used libraries depend on.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 xen-hooks.mak | 18 ++++--------------
 1 file changed, 4 insertions(+), 14 deletions(-)

diff --git a/xen-hooks.mak b/xen-hooks.mak
index a68eba3c..2689db0f 100644
--- a/xen-hooks.mak
+++ b/xen-hooks.mak
@@ -1,10 +1,8 @@
-CPPFLAGS+= -I$(XEN_ROOT)/tools/libs/toollog/include
-CPPFLAGS+= -I$(XEN_ROOT)/tools/libs/evtchn/include
-CPPFLAGS+= -I$(XEN_ROOT)/tools/libs/gnttab/include
+XEN_LIBS = evtchn gnttab ctrl guest store
+
 CPPFLAGS+= -DXC_WANT_COMPAT_MAP_FOREIGN_API
 CPPFLAGS+= -DXC_WANT_COMPAT_DEVICEMODEL_API
-CPPFLAGS+= -I$(XEN_ROOT)/tools/libxc/include
-CPPFLAGS+= -I$(XEN_ROOT)/tools/xenstore/include
+CPPFLAGS += $(foreach lib,$(XEN_LIBS),$(CFLAGS_libxen$(lib)))
 CPPFLAGS+= -I$(XEN_ROOT)/tools/include
 
 SSE2 := $(call cc-option,-msse2,)
@@ -22,15 +20,7 @@ endif
 
 CFLAGS += $(CMDLINE_CFLAGS)
 
-LIBS += -L$(XEN_ROOT)/tools/libs/evtchn -lxenevtchn
-LIBS += -L$(XEN_ROOT)/tools/libs/gnttab -lxengnttab
-LIBS += -L$(XEN_ROOT)/tools/libxc -lxenctrl -lxenguest
-LIBS += -L$(XEN_ROOT)/tools/xenstore -lxenstore
-LIBS += -Wl,-rpath-link=$(XEN_ROOT)/tools/libs/toollog
-LIBS += -Wl,-rpath-link=$(XEN_ROOT)/tools/libs/toolcore
-LIBS += -Wl,-rpath-link=$(XEN_ROOT)/tools/libs/call
-LIBS += -Wl,-rpath-link=$(XEN_ROOT)/tools/libs/foreignmemory
-LIBS += -Wl,-rpath-link=$(XEN_ROOT)/tools/libs/devicemodel
+LIBS += $(foreach lib,$(XEN_LIBS),$(LDLIBS_libxen$(lib)))
 
 LDFLAGS := $(CFLAGS) $(LDFLAGS)
 
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Wed Jul 15 09:36:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jul 2020 09:36:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvdpo-0002PR-7G; Wed, 15 Jul 2020 09:36:16 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=osgb=A2=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jvdpn-0002Oi-8O
 for xen-devel@lists.xenproject.org; Wed, 15 Jul 2020 09:36:15 +0000
X-Inumbo-ID: 9fe4c7e7-c67e-11ea-93a9-12813bfff9fa
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 9fe4c7e7-c67e-11ea-93a9-12813bfff9fa;
 Wed, 15 Jul 2020 09:36:14 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1594805775;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:content-transfer-encoding:in-reply-to;
 bh=eAVhxkVPJFov0SlG2bMT+jjb/4wuJ2Q5ALt7SUBTOwA=;
 b=JmWNlT/243wk6jWsyBsJYJ1zqtzOUxH3UcOr//CadZ/hzXiOuYQduXoq
 u2oQa0EqxOARG3bOIiXzaiWtSi3JbO6o01om/3FcRWfRD6nVwTaiF/Oxy
 CLIae4B5m/W+Vv5fgXf7/nt4TmOcf8uei3SK5HR6zb+F29w2MqbeVyQaj I=;
Authentication-Results: esa2.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: Un/Fyobl/UYAVxynwhwit+fa0ZWnsYnH9le0Mc4fQg0D/61/UYkxcb8ruMmo6HiixFtSpcT1Qi
 sWXUactz9LAIZ+dG03z/s+Ln1+c+XNc6NH/AMG25Aix2ftrgFxPWdQJobJlIkxLiW/fuBq1SRM
 nYLhaE4Ys1b3Ak8eS9BddkswWL9yzZveiRaRVOYGYftMw/Ek07trSjIswP5wNKOepmR+rOZeou
 qj184Zo3XnFquoCxWgmZZ6fpeFl3EEomn23djO+MjcWmkllPQwFLeXh/SFY4rEIBmE6f9wRbkB
 alE=
X-SBRS: 2.7
X-MesageID: 22418574
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,354,1589256000"; d="scan'208";a="22418574"
Date: Wed, 15 Jul 2020 11:36:06 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: =?utf-8?Q?Micha=C5=82_Leszczy=C5=84ski?= <michal.leszczynski@cert.pl>
Subject: Re: [PATCH v6 01/11] memory: batch processing in acquire_resource()
Message-ID: <20200715093606.GU7191@Air-de-Roger>
References: <cover.1594150543.git.michal.leszczynski@cert.pl>
 <02415890e4e8211513b495228c790e1d16de767f.1594150543.git.michal.leszczynski@cert.pl>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <02415890e4e8211513b495228c790e1d16de767f.1594150543.git.michal.leszczynski@cert.pl>
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, luwei.kang@intel.com,
 tamas.lengyel@intel.com, Jan
 Beulich <jbeulich@suse.com>, xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Tue, Jul 07, 2020 at 09:39:40PM +0200, Michał Leszczyński wrote:
> From: Michal Leszczynski <michal.leszczynski@cert.pl>
> 
> Allow the acquisition of large resources by letting acquire_resource()
> process items in batches, using a hypercall continuation.
> 
> Be aware that this modifies the behavior of an acquire_resource
> call with frame_list=NULL. While previously it returned the size
> of the internal array (32), with this patch it returns the maximum
> number of frames that can be requested at once,
> i.e. UINT_MAX >> MEMOP_EXTENT_SHIFT.
> 
> Signed-off-by: Michal Leszczynski <michal.leszczynski@cert.pl>
> ---
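The new frame_list=NULL return value follows from how continuations are encoded for XENMEM_* ops: start_extent is packed into the bits of the hypercall's cmd argument above the sub-op. A minimal user-space sketch of that encoding (MEMOP_EXTENT_SHIFT is taken as 6, matching Xen's public headers at the time of writing; treat it as an assumption here):

```c
#include <assert.h>
#include <limits.h>

/* Assumed value; matches xen/include/public/memory.h at time of writing. */
#define MEMOP_EXTENT_SHIFT 6
#define MEMOP_CMD_MASK     ((1U << MEMOP_EXTENT_SHIFT) - 1)

/*
 * A continuation packs start_extent into the bits of cmd above the
 * sub-op, so the largest extent a continuation can carry -- and hence
 * the largest request the interface can admit -- is
 * UINT_MAX >> MEMOP_EXTENT_SHIFT.
 */
static unsigned int encode_cmd(unsigned int op, unsigned int start_extent)
{
    assert(start_extent <= UINT_MAX >> MEMOP_EXTENT_SHIFT);
    return op | (start_extent << MEMOP_EXTENT_SHIFT);
}

static unsigned int decode_op(unsigned int cmd)
{
    return cmd & MEMOP_CMD_MASK;
}

static unsigned int decode_extent(unsigned int cmd)
{
    return cmd >> MEMOP_EXTENT_SHIFT;
}
```

Round-tripping an extent through the encoding shows why UINT_MAX >> MEMOP_EXTENT_SHIFT is the natural upper bound advertised to guests.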

FWIW, as I think I also said on a previous version, I would prefer
the changelog between versions to be added to each patch; having it
only in the cover letter is not very helpful, as I usually care about
the specific changes made to each patch.

I've just got one comment that needs addressing below.

>  xen/common/memory.c | 49 ++++++++++++++++++++++++++++++++++++++++-----
>  1 file changed, 44 insertions(+), 5 deletions(-)
> 
> diff --git a/xen/common/memory.c b/xen/common/memory.c
> index 714077c1e5..eb42f883df 100644
> --- a/xen/common/memory.c
> +++ b/xen/common/memory.c
> @@ -1046,10 +1046,12 @@ static int acquire_grant_table(struct domain *d, unsigned int id,
>  }
>  
>  static int acquire_resource(
> -    XEN_GUEST_HANDLE_PARAM(xen_mem_acquire_resource_t) arg)
> +    XEN_GUEST_HANDLE_PARAM(xen_mem_acquire_resource_t) arg,
> +    unsigned long *start_extent)
>  {
>      struct domain *d, *currd = current->domain;
>      xen_mem_acquire_resource_t xmar;
> +    uint32_t total_frames;
>      /*
>       * The mfn_list and gfn_list (below) arrays are ok on stack for the
>       * moment since they are small, but if they need to grow in future
> @@ -1069,7 +1071,7 @@ static int acquire_resource(
>          if ( xmar.nr_frames )
>              return -EINVAL;
>  
> -        xmar.nr_frames = ARRAY_SIZE(mfn_list);
> +        xmar.nr_frames = UINT_MAX >> MEMOP_EXTENT_SHIFT;
>  
>          if ( __copy_field_to_guest(arg, &xmar, nr_frames) )
>              return -EFAULT;
> @@ -1077,8 +1079,28 @@ static int acquire_resource(
>          return 0;
>      }
>  
> +    total_frames = xmar.nr_frames;
> +
> +    /* Is the size too large for us to encode a continuation? */
> +    if ( unlikely(xmar.nr_frames > (UINT_MAX >> MEMOP_EXTENT_SHIFT)) )
> +        return -EINVAL;
> +
> +    if ( *start_extent )
> +    {
> +        /*
> +         * Check whether start_extent is in bounds, as this
> +         * value is visible to the calling domain.
> +         */
> +        if ( *start_extent > xmar.nr_frames )
> +            return -EINVAL;
> +
> +        xmar.frame += *start_extent;
> +        xmar.nr_frames -= *start_extent;
> +        guest_handle_add_offset(xmar.frame_list, *start_extent);
> +    }
> +
>      if ( xmar.nr_frames > ARRAY_SIZE(mfn_list) )
> -        return -E2BIG;
> +        xmar.nr_frames = ARRAY_SIZE(mfn_list);
>  
>      rc = rcu_lock_remote_domain_by_id(xmar.domid, &d);
>      if ( rc )
> @@ -1135,6 +1157,14 @@ static int acquire_resource(
>          }
>      }
>  
> +    if ( !rc )
> +    {
> +        *start_extent += xmar.nr_frames;
> +
> +        if ( *start_extent != total_frames )
> +            rc = -ERESTART;
> +    }
> +
>   out:
>      rcu_unlock_domain(d);
>  
> @@ -1599,8 +1629,17 @@ long do_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
>  #endif
>  
>      case XENMEM_acquire_resource:
> -        rc = acquire_resource(
> -            guest_handle_cast(arg, xen_mem_acquire_resource_t));
> +        do {
> +            rc = acquire_resource(
> +                guest_handle_cast(arg, xen_mem_acquire_resource_t),
> +                &start_extent);

I think it would be interesting from a performance PoV to move the
xmar copy_from_guest here, so that each call to acquire_resource in
the loop doesn't need to perform a copy from the guest. That's more
relevant for translated callers, where a copy_from_guest involves a
guest page table walk and a p2m walk.
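The suggested refactor can be sketched in plain user-space C (all names below are stand-ins, not Xen code): decode the guest argument once before the batching loop and hand the decoded struct to the worker, so a translated guest pays for the expensive guest access once rather than once per batch.

```c
#include <assert.h>
#include <string.h>

/* Toy stand-ins (assumptions): "guest memory" is just a buffer, and the
 * stub counts how many times the expensive copy actually runs. */
struct xmar { unsigned int nr_frames; };

static struct xmar guest_arg = { .nr_frames = 7 };
static unsigned int copies;

static void copy_from_guest_stub(struct xmar *dst)
{
    memcpy(dst, &guest_arg, sizeof(*dst));
    ++copies;
}

/* The batch worker takes the already-decoded argument, so the
 * per-iteration guest access (and, for translated guests, the page
 * table / p2m walks) disappears from the loop body. */
static unsigned int acquire_batch(const struct xmar *xmar, unsigned int start)
{
    unsigned int left = xmar->nr_frames - start;

    return start + (left > 3 ? 3 : left);
}

static unsigned int run_all_batches(void)
{
    struct xmar xmar;
    unsigned int start = 0;

    copy_from_guest_stub(&xmar);          /* hoisted: one copy in total */
    while ( start < xmar.nr_frames )
        start = acquire_batch(&xmar, start);

    return start;
}
```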

> +
> +            if ( hypercall_preempt_check() )

You are missing an rc == -ERESTART check here: you don't want to
encode a continuation if rc is anything other than -ERESTART, AFAICT.
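A user-space sketch of the loop shape being asked for (stubs only, not Xen code; the -ERESTART value is assumed): the continuation path is taken only when the batch worker reported -ERESTART *and* a preemption is actually pending.

```c
#include <assert.h>
#include <stdbool.h>

#define ERESTART 85   /* numeric value is an assumption for the sketch */

/* Stubbed hypervisor primitive: preemption "fires" when the countdown
 * reaches zero. */
static int preempt_countdown;
static bool hypercall_preempt_check(void) { return --preempt_countdown == 0; }

static const unsigned long total_frames = 5;

/* Processes at most two frames per call; -ERESTART means "more to do". */
static int acquire_resource_stub(unsigned long *start_extent)
{
    unsigned long left = total_frames - *start_extent;

    *start_extent += left > 2 ? 2 : left;
    return *start_extent == total_frames ? 0 : -ERESTART;
}

/* Only bail out to encode a continuation when rc is -ERESTART *and*
 * preemption is pending; any other rc terminates the loop as-is. */
static int do_memory_op_stub(unsigned long *start_extent)
{
    int rc;

    do {
        rc = acquire_resource_stub(start_extent);
        if ( rc == -ERESTART && hypercall_preempt_check() )
            return -ERESTART;   /* real code would encode cmd | start_extent */
    } while ( rc == -ERESTART );

    return rc;
}
```

Re-invoking with the preserved start_extent models the guest resuming the continuation until the whole request completes.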

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Wed Jul 15 09:45:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jul 2020 09:45:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvdyC-0003Hk-SF; Wed, 15 Jul 2020 09:44:56 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=9G22=A2=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jvdyC-0003Hf-13
 for xen-devel@lists.xenproject.org; Wed, 15 Jul 2020 09:44:56 +0000
X-Inumbo-ID: d630cead-c67f-11ea-93a9-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d630cead-c67f-11ea-93a9-12813bfff9fa;
 Wed, 15 Jul 2020 09:44:55 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 7C681AF0B;
 Wed, 15 Jul 2020 09:44:57 +0000 (UTC)
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH v2 0/2] x86: RTC handling adjustments
Message-ID: <416ac9b1-93d1-81a2-be19-d58d509fdfec@suse.com>
Date: Wed, 15 Jul 2020 11:44:55 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Paul Durrant <paul@xen.org>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

The first patch addresses a recent regression and hence ought to be
considered for 4.14, despite it getting late. I noticed the problem
while re-basing the 2nd patch here, which I decided to now re-post
despite the original discussion on v1 not having led to any clear
result (i.e. the supposed "leave Dom0 handle the RTC" has never
materialized over the past almost 5 years).

1: restore pv_rtc_handler() invocation
2: detect CMOS aliasing on ports other than 0x70/0x71

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jul 15 09:47:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jul 2020 09:47:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jve0V-0003Om-9P; Wed, 15 Jul 2020 09:47:19 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=9G22=A2=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jve0T-0003Oh-ST
 for xen-devel@lists.xenproject.org; Wed, 15 Jul 2020 09:47:17 +0000
X-Inumbo-ID: 2b154376-c680-11ea-93a9-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2b154376-c680-11ea-93a9-12813bfff9fa;
 Wed, 15 Jul 2020 09:47:17 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id BEBEDAF0B;
 Wed, 15 Jul 2020 09:47:19 +0000 (UTC)
Subject: [PATCH v2 1/2] x86: restore pv_rtc_handler() invocation
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <416ac9b1-93d1-81a2-be19-d58d509fdfec@suse.com>
Message-ID: <59f26856-d23d-bb69-0403-39e77acbf85c@suse.com>
Date: Wed, 15 Jul 2020 11:47:17 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <416ac9b1-93d1-81a2-be19-d58d509fdfec@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu <wl@xen.org>,
 Paul Durrant <paul@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

This was lost when making the logic accessible to PVH Dom0.

Fixes: 835d8d69d96a ("x86/rtc: provide mediated access to RTC for PVH dom0")
Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/time.c
+++ b/xen/arch/x86/time.c
@@ -1160,6 +1160,10 @@ void rtc_guest_write(unsigned int port,
     case RTC_PORT(1):
         if ( !ioports_access_permitted(currd, RTC_PORT(0), RTC_PORT(1)) )
             break;
+
+        if ( pv_rtc_handler )
+            pv_rtc_handler(currd->arch.cmos_idx & 0x7f, data);
+
         spin_lock_irqsave(&rtc_lock, flags);
         outb(currd->arch.cmos_idx & 0x7f, RTC_PORT(0));
         outb(data, RTC_PORT(1));



From xen-devel-bounces@lists.xenproject.org Wed Jul 15 09:48:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jul 2020 09:48:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jve19-0003T3-JH; Wed, 15 Jul 2020 09:47:59 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=9G22=A2=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jve17-0003S0-I0
 for xen-devel@lists.xenproject.org; Wed, 15 Jul 2020 09:47:57 +0000
X-Inumbo-ID: 4234bb18-c680-11ea-bca7-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4234bb18-c680-11ea-bca7-bc764e2007e4;
 Wed, 15 Jul 2020 09:47:56 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 8B0ADAF0B;
 Wed, 15 Jul 2020 09:47:58 +0000 (UTC)
Subject: [PATCH v2 2/2] x86: detect CMOS aliasing on ports other than 0x70/0x71
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <416ac9b1-93d1-81a2-be19-d58d509fdfec@suse.com>
Message-ID: <72a63cba-bfdb-ae3c-284b-8ba5b9d7f7a9@suse.com>
Date: Wed, 15 Jul 2020 11:47:56 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <416ac9b1-93d1-81a2-be19-d58d509fdfec@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

... in order to also intercept accesses through the alias ports.

Also stop intercepting accesses to the CMOS ports if we won't ourselves
use the CMOS RTC.
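For intuition on the mask arithmetic used in the patch below: the probe sets a bit in cmos_alias_mask for each power-of-two offset (2 and 4) at which the CMOS index/data pair is found mirrored, and `port & ~cmos_alias_mask` then folds any alias port back onto the canonical 0x70/0x71 pair. A toy sketch (the mask value here is assumed, not probed):

```c
#include <assert.h>

#define RTC_PORT(x) (0x70 + (x))   /* matches Xen's definition */

/* Suppose the probe found the CMOS pair mirrored at offsets 2 and 4
 * (i.e. ports 0x72/0x73 and 0x74/0x75, and with both bits set also
 * 0x76/0x77). */
static const unsigned int cmos_alias_mask = 2 | 4;

/* The switch statements in rtc_guest_read()/rtc_guest_write()
 * dispatch on this folded port value. */
static unsigned int canonical_port(unsigned int port)
{
    return port & ~cmos_alias_mask;
}
```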

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: Re-base.

--- a/xen/arch/x86/physdev.c
+++ b/xen/arch/x86/physdev.c
@@ -670,6 +670,80 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_H
     return ret;
 }
 
+#ifndef COMPAT
+#include <asm/mc146818rtc.h>
+
+unsigned int __read_mostly cmos_alias_mask;
+
+static int __init probe_cmos_alias(void)
+{
+    unsigned int i, offs;
+
+    if ( acpi_gbl_FADT.boot_flags & ACPI_FADT_NO_CMOS_RTC )
+        return 0;
+
+    for ( offs = 2; offs < 8; offs <<= 1 )
+    {
+        bool read = true;
+
+        for ( i = RTC_REG_D + 1; i < 0x80; ++i )
+        {
+            uint8_t normal, alt;
+            unsigned long flags;
+
+            if ( i == acpi_gbl_FADT.century )
+                continue;
+
+            spin_lock_irqsave(&rtc_lock, flags);
+
+            normal = CMOS_READ(i);
+            if ( inb(RTC_PORT(offs)) != i )
+                read = false;
+
+            alt = inb(RTC_PORT(offs + 1));
+
+            spin_unlock_irqrestore(&rtc_lock, flags);
+
+            if ( normal != alt )
+                break;
+
+            process_pending_softirqs();
+        }
+        if ( i == 0x80 )
+        {
+            cmos_alias_mask |= offs;
+            printk(XENLOG_INFO "CMOS aliased at %02x, index %s\n",
+                   RTC_PORT(offs), read ? "r/w" : "w/o");
+        }
+    }
+
+    return 0;
+}
+__initcall(probe_cmos_alias);
+
+/* Has the administrator granted sufficient permission for this I/O access? */
+bool admin_io_okay(unsigned int port, unsigned int bytes,
+                   const struct domain *d)
+{
+    /*
+     * Port 0xcf8 (CONFIG_ADDRESS) is only visible for DWORD accesses.
+     * We never permit direct access to that register.
+     */
+    if ( (port == 0xcf8) && (bytes == 4) )
+        return false;
+
+    /*
+     * We also never permit direct access to the RTC/CMOS registers
+     * if we may be accessing the RTC ones ourselves.
+     */
+    if ( !(acpi_gbl_FADT.boot_flags & ACPI_FADT_NO_CMOS_RTC) &&
+         ((port & ~(cmos_alias_mask | 1)) == RTC_PORT(0)) )
+        return false;
+
+    return ioports_access_permitted(d, port, port + bytes - 1);
+}
+#endif /* COMPAT */
+
 /*
  * Local variables:
  * mode: C
--- a/xen/arch/x86/pv/emul-priv-op.c
+++ b/xen/arch/x86/pv/emul-priv-op.c
@@ -198,24 +198,6 @@ static bool guest_io_okay(unsigned int p
     return false;
 }
 
-/* Has the administrator granted sufficient permission for this I/O access? */
-static bool admin_io_okay(unsigned int port, unsigned int bytes,
-                          const struct domain *d)
-{
-    /*
-     * Port 0xcf8 (CONFIG_ADDRESS) is only visible for DWORD accesses.
-     * We never permit direct access to that register.
-     */
-    if ( (port == 0xcf8) && (bytes == 4) )
-        return false;
-
-    /* We also never permit direct access to the RTC/CMOS registers. */
-    if ( ((port & ~1) == RTC_PORT(0)) )
-        return false;
-
-    return ioports_access_permitted(d, port, port + bytes - 1);
-}
-
 static bool pci_cfg_ok(struct domain *currd, unsigned int start,
                        unsigned int size, uint32_t *write)
 {
--- a/xen/arch/x86/setup.c
+++ b/xen/arch/x86/setup.c
@@ -48,7 +48,7 @@
 #include <xen/cpu.h>
 #include <asm/nmi.h>
 #include <asm/alternative.h>
-#include <asm/mc146818rtc.h>
+#include <asm/iocap.h>
 #include <asm/cpuid.h>
 #include <asm/spec_ctrl.h>
 #include <asm/guest.h>
@@ -2009,37 +2009,33 @@ int __hwdom_init xen_in_range(unsigned l
 static int __hwdom_init io_bitmap_cb(unsigned long s, unsigned long e,
                                      void *ctx)
 {
-    struct domain *d = ctx;
+    const struct domain *d = ctx;
     unsigned int i;
 
     ASSERT(e <= INT_MAX);
     for ( i = s; i <= e; i++ )
-        __clear_bit(i, d->arch.hvm.io_bitmap);
+        if ( admin_io_okay(i, 1, d) )
+            __clear_bit(i, d->arch.hvm.io_bitmap);
 
     return 0;
 }
 
 void __hwdom_init setup_io_bitmap(struct domain *d)
 {
-    int rc;
+    if ( !is_hvm_domain(d) )
+        return;
 
-    if ( is_hvm_domain(d) )
-    {
-        bitmap_fill(d->arch.hvm.io_bitmap, 0x10000);
-        rc = rangeset_report_ranges(d->arch.ioport_caps, 0, 0x10000,
-                                    io_bitmap_cb, d);
-        BUG_ON(rc);
-        /*
-         * NB: we need to trap accesses to 0xcf8 in order to intercept
-         * 4 byte accesses, that need to be handled by Xen in order to
-         * keep consistency.
-         * Access to 1 byte RTC ports also needs to be trapped in order
-         * to keep consistency with PV.
-         */
-        __set_bit(0xcf8, d->arch.hvm.io_bitmap);
-        __set_bit(RTC_PORT(0), d->arch.hvm.io_bitmap);
-        __set_bit(RTC_PORT(1), d->arch.hvm.io_bitmap);
-    }
+    bitmap_fill(d->arch.hvm.io_bitmap, 0x10000);
+    if ( rangeset_report_ranges(d->arch.ioport_caps, 0, 0x10000,
+                                io_bitmap_cb, d) )
+        BUG();
+
+    /*
+     * We need to trap 4-byte accesses to 0xcf8 (see admin_io_okay(),
+     * guest_io_read(), and guest_io_write()), which isn't covered by
+     * the admin_io_okay() check in io_bitmap_cb().
+     */
+    __set_bit(0xcf8, d->arch.hvm.io_bitmap);
 }
 
 /*
--- a/xen/arch/x86/time.c
+++ b/xen/arch/x86/time.c
@@ -1092,7 +1092,10 @@ static unsigned long get_cmos_time(void)
         if ( seconds < 60 )
         {
             if ( rtc.sec != seconds )
+            {
                 cmos_rtc_probe = false;
+                acpi_gbl_FADT.boot_flags &= ~ACPI_FADT_NO_CMOS_RTC;
+            }
             break;
         }
 
@@ -1114,7 +1117,7 @@ unsigned int rtc_guest_read(unsigned int
     unsigned long flags;
     unsigned int data = ~0;
 
-    switch ( port )
+    switch ( port & ~cmos_alias_mask )
     {
     case RTC_PORT(0):
         /*
@@ -1126,11 +1129,12 @@ unsigned int rtc_guest_read(unsigned int
         break;
 
     case RTC_PORT(1):
-        if ( !ioports_access_permitted(currd, RTC_PORT(0), RTC_PORT(1)) )
+        if ( !ioports_access_permitted(currd, port - 1, port) )
             break;
         spin_lock_irqsave(&rtc_lock, flags);
-        outb(currd->arch.cmos_idx & 0x7f, RTC_PORT(0));
-        data = inb(RTC_PORT(1));
+        outb(currd->arch.cmos_idx & (0xff >> (port == RTC_PORT(1))),
+             port - 1);
+        data = inb(port);
         spin_unlock_irqrestore(&rtc_lock, flags);
         break;
 
@@ -1146,8 +1150,10 @@ void rtc_guest_write(unsigned int port,
     struct domain *currd = current->domain;
     unsigned long flags;
 
-    switch ( port )
+    switch ( port & ~cmos_alias_mask )
     {
+        unsigned int idx;
+
     case RTC_PORT(0):
         /*
          * All PV domains (and PVH dom0) are allowed to write to the latched
@@ -1158,15 +1164,17 @@ void rtc_guest_write(unsigned int port,
         break;
 
     case RTC_PORT(1):
-        if ( !ioports_access_permitted(currd, RTC_PORT(0), RTC_PORT(1)) )
+        if ( !ioports_access_permitted(currd, port - 1, port) )
             break;
 
+        idx = currd->arch.cmos_idx & (0xff >> (port == RTC_PORT(1)));
+
         if ( pv_rtc_handler )
-            pv_rtc_handler(currd->arch.cmos_idx & 0x7f, data);
+            pv_rtc_handler(idx, data);
 
         spin_lock_irqsave(&rtc_lock, flags);
-        outb(currd->arch.cmos_idx & 0x7f, RTC_PORT(0));
-        outb(data, RTC_PORT(1));
+        outb(idx, port - 1);
+        outb(data, port);
         spin_unlock_irqrestore(&rtc_lock, flags);
         break;
 
--- a/xen/include/asm-x86/iocap.h
+++ b/xen/include/asm-x86/iocap.h
@@ -18,4 +18,7 @@
     (!rangeset_is_empty((d)->iomem_caps) ||             \
      !rangeset_is_empty((d)->arch.ioport_caps))
 
+bool admin_io_okay(unsigned int port, unsigned int bytes,
+                   const struct domain *d);
+
 #endif /* __X86_IOCAP_H__ */
--- a/xen/include/asm-x86/mc146818rtc.h
+++ b/xen/include/asm-x86/mc146818rtc.h
@@ -9,6 +9,8 @@
 
 extern spinlock_t rtc_lock;             /* serialize CMOS RAM access */
 
+extern unsigned int cmos_alias_mask;
+
 /**********************************************************************
  * register summary
  **********************************************************************/



From xen-devel-bounces@lists.xenproject.org Wed Jul 15 09:57:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jul 2020 09:57:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jve9o-0004OV-LU; Wed, 15 Jul 2020 09:56:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=9G22=A2=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jve9n-0004OQ-Fo
 for xen-devel@lists.xenproject.org; Wed, 15 Jul 2020 09:56:55 +0000
X-Inumbo-ID: 8353465e-c681-11ea-bca7-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8353465e-c681-11ea-bca7-bc764e2007e4;
 Wed, 15 Jul 2020 09:56:54 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 40A7CAF8A;
 Wed, 15 Jul 2020 09:56:57 +0000 (UTC)
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH 0/5] x86: mostly shadow related XSA-319 follow-up
Message-ID: <a4dc8db4-0388-a922-838e-42c6f4635639@suse.com>
Date: Wed, 15 Jul 2020 11:56:54 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Tim Deegan <tim@xen.org>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

This in particular goes a few small steps further towards proper
!HVM and !PV config handling (i.e. no carrying of unnecessary
baggage).

1: x86/shadow: dirty VRAM tracking is needed for HVM only
2: x86/shadow: shadow_table[] needs only one entry for PV-only configs
3: x86/PV: drop a few misleading paging_mode_refcounts() checks
4: x86/shadow: have just a single instance of sh_set_toplevel_shadow()
5: x86/shadow: l3table[] and gl3e[] are HVM only

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jul 15 09:58:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jul 2020 09:58:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jveAz-0004US-0R; Wed, 15 Jul 2020 09:58:09 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=9G22=A2=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jveAx-0004UK-RK
 for xen-devel@lists.xenproject.org; Wed, 15 Jul 2020 09:58:07 +0000
X-Inumbo-ID: accbd924-c681-11ea-bca7-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id accbd924-c681-11ea-bca7-bc764e2007e4;
 Wed, 15 Jul 2020 09:58:04 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id D7CA8AF8A;
 Wed, 15 Jul 2020 09:58:06 +0000 (UTC)
Subject: [PATCH 1/5] x86/shadow: dirty VRAM tracking is needed for HVM only
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <a4dc8db4-0388-a922-838e-42c6f4635639@suse.com>
Message-ID: <a77ac11b-a198-491d-1819-13ba75f72cd8@suse.com>
Date: Wed, 15 Jul 2020 11:58:04 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <a4dc8db4-0388-a922-838e-42c6f4635639@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Tim Deegan <tim@xen.org>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Move shadow_track_dirty_vram() into hvm.c (requiring two static
functions to become non-static). More importantly though make sure we
don't de-reference d->arch.hvm.dirty_vram for a non-HVM guest. This was
a latent issue only because the field happens to live far enough into
struct hvm_domain to be outside the part overlapping with struct pv_domain.
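The "latent issue" can be made concrete with a toy layout (all field names and sizes below are invented, not Xen's actual structures): when the PV and HVM per-domain variants share a union, a field located past sizeof() of the smaller variant doesn't alias any PV field, so de-referencing it for a PV guest only "worked" because it happened to read memory no PV code writes.

```c
#include <assert.h>
#include <stddef.h>

/* Invented miniature of the layout the commit message describes. */
struct pv_domain  { long pv_a, pv_b; };
struct hvm_domain { long hvm_a, hvm_b, hvm_c; void *dirty_vram; };

struct arch_domain {
    union {
        struct pv_domain  pv;
        struct hvm_domain hvm;
    };
};
```

If dirty_vram instead sat inside the overlapping prefix, reading d->arch.hvm.dirty_vram for a PV guest would reinterpret live PV state, which is exactly the class of bug the guard against non-HVM guests prevents.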

While moving shadow_track_dirty_vram(), some purely typographic
adjustments are being made, like inserting missing blanks or putting
braces on their own lines.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Acked-by: Tim Deegan <tim@xen.org>

--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -999,7 +999,7 @@ void shadow_prealloc(struct domain *d, u
 
 /* Deliberately free all the memory we can: this will tear down all of
  * this domain's shadows */
-static void shadow_blow_tables(struct domain *d)
+void shadow_blow_tables(struct domain *d)
 {
     struct page_info *sp, *t;
     struct vcpu *v;
@@ -2029,7 +2029,7 @@ int sh_remove_write_access(struct domain
 /* Remove all mappings of a guest frame from the shadow tables.
  * Returns non-zero if we need to flush TLBs. */
 
-static int sh_remove_all_mappings(struct domain *d, mfn_t gmfn, gfn_t gfn)
+int sh_remove_all_mappings(struct domain *d, mfn_t gmfn, gfn_t gfn)
 {
     struct page_info *page = mfn_to_page(gmfn);
 
@@ -3162,203 +3162,6 @@ static void sh_clean_dirty_bitmap(struct
 }
 
 
-/**************************************************************************/
-/* VRAM dirty tracking support */
-int shadow_track_dirty_vram(struct domain *d,
-                            unsigned long begin_pfn,
-                            unsigned long nr,
-                            XEN_GUEST_HANDLE(void) guest_dirty_bitmap)
-{
-    int rc = 0;
-    unsigned long end_pfn = begin_pfn + nr;
-    unsigned long dirty_size = (nr + 7) / 8;
-    int flush_tlb = 0;
-    unsigned long i;
-    p2m_type_t t;
-    struct sh_dirty_vram *dirty_vram;
-    struct p2m_domain *p2m = p2m_get_hostp2m(d);
-    uint8_t *dirty_bitmap = NULL;
-
-    if ( end_pfn < begin_pfn || end_pfn > p2m->max_mapped_pfn + 1 )
-        return -EINVAL;
-
-    /* We perform p2m lookups, so lock the p2m upfront to avoid deadlock */
-    p2m_lock(p2m_get_hostp2m(d));
-    paging_lock(d);
-
-    dirty_vram = d->arch.hvm.dirty_vram;
-
-    if ( dirty_vram && (!nr ||
-             ( begin_pfn != dirty_vram->begin_pfn
-            || end_pfn   != dirty_vram->end_pfn )) )
-    {
-        /* Different tracking, tear the previous down. */
-        gdprintk(XENLOG_INFO, "stopping tracking VRAM %lx - %lx\n", dirty_vram->begin_pfn, dirty_vram->end_pfn);
-        xfree(dirty_vram->sl1ma);
-        xfree(dirty_vram->dirty_bitmap);
-        xfree(dirty_vram);
-        dirty_vram = d->arch.hvm.dirty_vram = NULL;
-    }
-
-    if ( !nr )
-        goto out;
-
-    dirty_bitmap = vzalloc(dirty_size);
-    if ( dirty_bitmap == NULL )
-    {
-        rc = -ENOMEM;
-        goto out;
-    }
-    /* This should happen seldomly (Video mode change),
-     * no need to be careful. */
-    if ( !dirty_vram )
-    {
-        /* Throw away all the shadows rather than walking through them
-         * up to nr times getting rid of mappings of each pfn */
-        shadow_blow_tables(d);
-
-        gdprintk(XENLOG_INFO, "tracking VRAM %lx - %lx\n", begin_pfn, end_pfn);
-
-        rc = -ENOMEM;
-        if ( (dirty_vram = xmalloc(struct sh_dirty_vram)) == NULL )
-            goto out;
-        dirty_vram->begin_pfn = begin_pfn;
-        dirty_vram->end_pfn = end_pfn;
-        d->arch.hvm.dirty_vram = dirty_vram;
-
-        if ( (dirty_vram->sl1ma = xmalloc_array(paddr_t, nr)) == NULL )
-            goto out_dirty_vram;
-        memset(dirty_vram->sl1ma, ~0, sizeof(paddr_t) * nr);
-
-        if ( (dirty_vram->dirty_bitmap = xzalloc_array(uint8_t, dirty_size)) == NULL )
-            goto out_sl1ma;
-
-        dirty_vram->last_dirty = NOW();
-
-        /* Tell the caller that this time we could not track dirty bits. */
-        rc = -ENODATA;
-    }
-    else if (dirty_vram->last_dirty == -1)
-        /* still completely clean, just copy our empty bitmap */
-        memcpy(dirty_bitmap, dirty_vram->dirty_bitmap, dirty_size);
-    else
-    {
-        mfn_t map_mfn = INVALID_MFN;
-        void *map_sl1p = NULL;
-
-        /* Iterate over VRAM to track dirty bits. */
-        for ( i = 0; i < nr; i++ ) {
-            mfn_t mfn = get_gfn_query_unlocked(d, begin_pfn + i, &t);
-            struct page_info *page;
-            int dirty = 0;
-            paddr_t sl1ma = dirty_vram->sl1ma[i];
-
-            if ( mfn_eq(mfn, INVALID_MFN) )
-                dirty = 1;
-            else
-            {
-                page = mfn_to_page(mfn);
-                switch (page->u.inuse.type_info & PGT_count_mask)
-                {
-                case 0:
-                    /* No guest reference, nothing to track. */
-                    break;
-                case 1:
-                    /* One guest reference. */
-                    if ( sl1ma == INVALID_PADDR )
-                    {
-                        /* We don't know which sl1e points to this, too bad. */
-                        dirty = 1;
-                        /* TODO: Heuristics for finding the single mapping of
-                         * this gmfn */
-                        flush_tlb |= sh_remove_all_mappings(d, mfn,
-                                                            _gfn(begin_pfn + i));
-                    }
-                    else
-                    {
-                        /* Hopefully the most common case: only one mapping,
-                         * whose dirty bit we can use. */
-                        l1_pgentry_t *sl1e;
-                        mfn_t sl1mfn = maddr_to_mfn(sl1ma);
-
-                        if ( !mfn_eq(sl1mfn, map_mfn) )
-                        {
-                            if ( map_sl1p )
-                                unmap_domain_page(map_sl1p);
-                            map_sl1p = map_domain_page(sl1mfn);
-                            map_mfn = sl1mfn;
-                        }
-                        sl1e = map_sl1p + (sl1ma & ~PAGE_MASK);
-
-                        if ( l1e_get_flags(*sl1e) & _PAGE_DIRTY )
-                        {
-                            dirty = 1;
-                            /* Note: this is atomic, so we may clear a
-                             * _PAGE_ACCESSED set by another processor. */
-                            l1e_remove_flags(*sl1e, _PAGE_DIRTY);
-                            flush_tlb = 1;
-                        }
-                    }
-                    break;
-                default:
-                    /* More than one guest reference,
-                     * we don't afford tracking that. */
-                    dirty = 1;
-                    break;
-                }
-            }
-
-            if ( dirty )
-            {
-                dirty_vram->dirty_bitmap[i / 8] |= 1 << (i % 8);
-                dirty_vram->last_dirty = NOW();
-            }
-        }
-
-        if ( map_sl1p )
-            unmap_domain_page(map_sl1p);
-
-        memcpy(dirty_bitmap, dirty_vram->dirty_bitmap, dirty_size);
-        memset(dirty_vram->dirty_bitmap, 0, dirty_size);
-        if ( dirty_vram->last_dirty + SECONDS(2) < NOW() )
-        {
-            /* was clean for more than two seconds, try to disable guest
-             * write access */
-            for ( i = begin_pfn; i < end_pfn; i++ )
-            {
-                mfn_t mfn = get_gfn_query_unlocked(d, i, &t);
-                if ( !mfn_eq(mfn, INVALID_MFN) )
-                    flush_tlb |= sh_remove_write_access(d, mfn, 1, 0);
-            }
-            dirty_vram->last_dirty = -1;
-        }
-    }
-    if ( flush_tlb )
-        guest_flush_tlb_mask(d, d->dirty_cpumask);
-    goto out;
-
-out_sl1ma:
-    xfree(dirty_vram->sl1ma);
-out_dirty_vram:
-    xfree(dirty_vram);
-    dirty_vram = d->arch.hvm.dirty_vram = NULL;
-
-out:
-    paging_unlock(d);
-    if ( rc == 0 && dirty_bitmap != NULL &&
-         copy_to_guest(guest_dirty_bitmap, dirty_bitmap, dirty_size) )
-    {
-        paging_lock(d);
-        for ( i = 0; i < dirty_size; i++ )
-            dirty_vram->dirty_bitmap[i] |= dirty_bitmap[i];
-        paging_unlock(d);
-        rc = -EFAULT;
-    }
-    vfree(dirty_bitmap);
-    p2m_unlock(p2m_get_hostp2m(d));
-    return rc;
-}
-
 /* Fluhs TLB of selected vCPUs. */
 bool shadow_flush_tlb(bool (*flush_vcpu)(void *ctxt, struct vcpu *v),
                       void *ctxt)
--- a/xen/arch/x86/mm/shadow/hvm.c
+++ b/xen/arch/x86/mm/shadow/hvm.c
@@ -691,6 +691,218 @@ static void sh_emulate_unmap_dest(struct
     atomic_inc(&v->domain->arch.paging.shadow.gtable_dirty_version);
 }
 
+/**************************************************************************/
+/* VRAM dirty tracking support */
+int shadow_track_dirty_vram(struct domain *d,
+                            unsigned long begin_pfn,
+                            unsigned long nr,
+                            XEN_GUEST_HANDLE(void) guest_dirty_bitmap)
+{
+    int rc = 0;
+    unsigned long end_pfn = begin_pfn + nr;
+    unsigned long dirty_size = (nr + 7) / 8;
+    int flush_tlb = 0;
+    unsigned long i;
+    p2m_type_t t;
+    struct sh_dirty_vram *dirty_vram;
+    struct p2m_domain *p2m = p2m_get_hostp2m(d);
+    uint8_t *dirty_bitmap = NULL;
+
+    if ( end_pfn < begin_pfn || end_pfn > p2m->max_mapped_pfn + 1 )
+        return -EINVAL;
+
+    /* We perform p2m lookups, so lock the p2m upfront to avoid deadlock */
+    p2m_lock(p2m_get_hostp2m(d));
+    paging_lock(d);
+
+    dirty_vram = d->arch.hvm.dirty_vram;
+
+    if ( dirty_vram && (!nr ||
+             ( begin_pfn != dirty_vram->begin_pfn
+            || end_pfn   != dirty_vram->end_pfn )) )
+    {
+        /* Different tracking, tear the previous down. */
+        gdprintk(XENLOG_INFO, "stopping tracking VRAM %lx - %lx\n", dirty_vram->begin_pfn, dirty_vram->end_pfn);
+        xfree(dirty_vram->sl1ma);
+        xfree(dirty_vram->dirty_bitmap);
+        xfree(dirty_vram);
+        dirty_vram = d->arch.hvm.dirty_vram = NULL;
+    }
+
+    if ( !nr )
+        goto out;
+
+    dirty_bitmap = vzalloc(dirty_size);
+    if ( dirty_bitmap == NULL )
+    {
+        rc = -ENOMEM;
+        goto out;
+    }
+    /*
+     * This should happen seldomly (Video mode change),
+     * no need to be careful.
+     */
+    if ( !dirty_vram )
+    {
+        /*
+         * Throw away all the shadows rather than walking through them
+         * up to nr times getting rid of mappings of each pfn.
+         */
+        shadow_blow_tables(d);
+
+        gdprintk(XENLOG_INFO, "tracking VRAM %lx - %lx\n", begin_pfn, end_pfn);
+
+        rc = -ENOMEM;
+        if ( (dirty_vram = xmalloc(struct sh_dirty_vram)) == NULL )
+            goto out;
+        dirty_vram->begin_pfn = begin_pfn;
+        dirty_vram->end_pfn = end_pfn;
+        d->arch.hvm.dirty_vram = dirty_vram;
+
+        if ( (dirty_vram->sl1ma = xmalloc_array(paddr_t, nr)) == NULL )
+            goto out_dirty_vram;
+        memset(dirty_vram->sl1ma, ~0, sizeof(paddr_t) * nr);
+
+        if ( (dirty_vram->dirty_bitmap = xzalloc_array(uint8_t, dirty_size)) == NULL )
+            goto out_sl1ma;
+
+        dirty_vram->last_dirty = NOW();
+
+        /* Tell the caller that this time we could not track dirty bits. */
+        rc = -ENODATA;
+    }
+    else if ( dirty_vram->last_dirty == -1 )
+        /* still completely clean, just copy our empty bitmap */
+        memcpy(dirty_bitmap, dirty_vram->dirty_bitmap, dirty_size);
+    else
+    {
+        mfn_t map_mfn = INVALID_MFN;
+        void *map_sl1p = NULL;
+
+        /* Iterate over VRAM to track dirty bits. */
+        for ( i = 0; i < nr; i++ )
+        {
+            mfn_t mfn = get_gfn_query_unlocked(d, begin_pfn + i, &t);
+            struct page_info *page;
+            int dirty = 0;
+            paddr_t sl1ma = dirty_vram->sl1ma[i];
+
+            if ( mfn_eq(mfn, INVALID_MFN) )
+                dirty = 1;
+            else
+            {
+                page = mfn_to_page(mfn);
+                switch ( page->u.inuse.type_info & PGT_count_mask )
+                {
+                case 0:
+                    /* No guest reference, nothing to track. */
+                    break;
+
+                case 1:
+                    /* One guest reference. */
+                    if ( sl1ma == INVALID_PADDR )
+                    {
+                        /* We don't know which sl1e points to this, too bad. */
+                        dirty = 1;
+                        /*
+                         * TODO: Heuristics for finding the single mapping of
+                         * this gmfn
+                         */
+                        flush_tlb |= sh_remove_all_mappings(d, mfn,
+                                                            _gfn(begin_pfn + i));
+                    }
+                    else
+                    {
+                        /*
+                         * Hopefully the most common case: only one mapping,
+                         * whose dirty bit we can use.
+                         */
+                        l1_pgentry_t *sl1e;
+                        mfn_t sl1mfn = maddr_to_mfn(sl1ma);
+
+                        if ( !mfn_eq(sl1mfn, map_mfn) )
+                        {
+                            if ( map_sl1p )
+                                unmap_domain_page(map_sl1p);
+                            map_sl1p = map_domain_page(sl1mfn);
+                            map_mfn = sl1mfn;
+                        }
+                        sl1e = map_sl1p + (sl1ma & ~PAGE_MASK);
+
+                        if ( l1e_get_flags(*sl1e) & _PAGE_DIRTY )
+                        {
+                            dirty = 1;
+                            /*
+                             * Note: this is atomic, so we may clear a
+                             * _PAGE_ACCESSED set by another processor.
+                             */
+                            l1e_remove_flags(*sl1e, _PAGE_DIRTY);
+                            flush_tlb = 1;
+                        }
+                    }
+                    break;
+
+                default:
+                    /* More than one guest reference,
+                     * we don't afford tracking that. */
+                    dirty = 1;
+                    break;
+                }
+            }
+
+            if ( dirty )
+            {
+                dirty_vram->dirty_bitmap[i / 8] |= 1 << (i % 8);
+                dirty_vram->last_dirty = NOW();
+            }
+        }
+
+        if ( map_sl1p )
+            unmap_domain_page(map_sl1p);
+
+        memcpy(dirty_bitmap, dirty_vram->dirty_bitmap, dirty_size);
+        memset(dirty_vram->dirty_bitmap, 0, dirty_size);
+        if ( dirty_vram->last_dirty + SECONDS(2) < NOW() )
+        {
+            /*
+             * Was clean for more than two seconds, try to disable guest
+             * write access.
+             */
+            for ( i = begin_pfn; i < end_pfn; i++ )
+            {
+                mfn_t mfn = get_gfn_query_unlocked(d, i, &t);
+                if ( !mfn_eq(mfn, INVALID_MFN) )
+                    flush_tlb |= sh_remove_write_access(d, mfn, 1, 0);
+            }
+            dirty_vram->last_dirty = -1;
+        }
+    }
+    if ( flush_tlb )
+        guest_flush_tlb_mask(d, d->dirty_cpumask);
+    goto out;
+
+ out_sl1ma:
+    xfree(dirty_vram->sl1ma);
+ out_dirty_vram:
+    xfree(dirty_vram);
+    dirty_vram = d->arch.hvm.dirty_vram = NULL;
+
+ out:
+    paging_unlock(d);
+    if ( rc == 0 && dirty_bitmap != NULL &&
+         copy_to_guest(guest_dirty_bitmap, dirty_bitmap, dirty_size) )
+    {
+        paging_lock(d);
+        for ( i = 0; i < dirty_size; i++ )
+            dirty_vram->dirty_bitmap[i] |= dirty_bitmap[i];
+        paging_unlock(d);
+        rc = -EFAULT;
+    }
+    vfree(dirty_bitmap);
+    p2m_unlock(p2m_get_hostp2m(d));
+    return rc;
+}
+
 /*
  * Local variables:
  * mode: C
--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -494,7 +494,6 @@ _sh_propagate(struct vcpu *v,
     guest_l1e_t guest_entry = { guest_intpte };
     shadow_l1e_t *sp = shadow_entry_ptr;
     struct domain *d = v->domain;
-    struct sh_dirty_vram *dirty_vram = d->arch.hvm.dirty_vram;
     gfn_t target_gfn = guest_l1e_get_gfn(guest_entry);
     u32 pass_thru_flags;
     u32 gflags, sflags;
@@ -649,15 +648,19 @@ _sh_propagate(struct vcpu *v,
         }
     }
 
-    if ( unlikely((level == 1) && dirty_vram
-            && dirty_vram->last_dirty == -1
-            && gfn_x(target_gfn) >= dirty_vram->begin_pfn
-            && gfn_x(target_gfn) < dirty_vram->end_pfn) )
+    if ( unlikely(level == 1) && is_hvm_domain(d) )
     {
-        if ( ft & FETCH_TYPE_WRITE )
-            dirty_vram->last_dirty = NOW();
-        else
-            sflags &= ~_PAGE_RW;
+        struct sh_dirty_vram *dirty_vram = d->arch.hvm.dirty_vram;
+
+        if ( dirty_vram && dirty_vram->last_dirty == -1 &&
+             gfn_x(target_gfn) >= dirty_vram->begin_pfn &&
+             gfn_x(target_gfn) < dirty_vram->end_pfn )
+        {
+            if ( ft & FETCH_TYPE_WRITE )
+                dirty_vram->last_dirty = NOW();
+            else
+                sflags &= ~_PAGE_RW;
+        }
     }
 
     /* Read-only memory */
@@ -1082,7 +1085,7 @@ static inline void shadow_vram_get_l1e(s
     unsigned long gfn;
     struct sh_dirty_vram *dirty_vram = d->arch.hvm.dirty_vram;
 
-    if ( !dirty_vram         /* tracking disabled? */
+    if ( !is_hvm_domain(d) || !dirty_vram /* tracking disabled? */
          || !(flags & _PAGE_RW) /* read-only mapping? */
          || !mfn_valid(mfn) )   /* mfn can be invalid in mmio_direct */
         return;
@@ -1113,7 +1116,7 @@ static inline void shadow_vram_put_l1e(s
     unsigned long gfn;
     struct sh_dirty_vram *dirty_vram = d->arch.hvm.dirty_vram;
 
-    if ( !dirty_vram         /* tracking disabled? */
+    if ( !is_hvm_domain(d) || !dirty_vram /* tracking disabled? */
          || !(flags & _PAGE_RW) /* read-only mapping? */
          || !mfn_valid(mfn) )   /* mfn can be invalid in mmio_direct */
         return;
--- a/xen/arch/x86/mm/shadow/private.h
+++ b/xen/arch/x86/mm/shadow/private.h
@@ -438,6 +438,14 @@ mfn_t oos_snapshot_lookup(struct domain
 
 #endif /* (SHADOW_OPTIMIZATIONS & SHOPT_OUT_OF_SYNC) */
 
+/* Deliberately free all the memory we can: tear down all of d's shadows. */
+void shadow_blow_tables(struct domain *d);
+
+/*
+ * Remove all mappings of a guest frame from the shadow tables.
+ * Returns non-zero if we need to flush TLBs.
+ */
+int sh_remove_all_mappings(struct domain *d, mfn_t gmfn, gfn_t gfn);
 
 /* Reset the up-pointers of every L3 shadow to 0.
  * This is called when l3 shadows stop being pinnable, to clear out all



From xen-devel-bounces@lists.xenproject.org Wed Jul 15 09:58:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jul 2020 09:58:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jveBX-0004Xr-9N; Wed, 15 Jul 2020 09:58:43 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=9G22=A2=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jveBV-0004Xh-Kt
 for xen-devel@lists.xenproject.org; Wed, 15 Jul 2020 09:58:41 +0000
X-Inumbo-ID: c15f22d8-c681-11ea-93a9-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c15f22d8-c681-11ea-93a9-12813bfff9fa;
 Wed, 15 Jul 2020 09:58:39 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 61C38AF95;
 Wed, 15 Jul 2020 09:58:41 +0000 (UTC)
Subject: [PATCH 2/5] x86/shadow: shadow_table[] needs only one entry for
 PV-only configs
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <a4dc8db4-0388-a922-838e-42c6f4635639@suse.com>
Message-ID: <6d93b35d-3d9a-a5e8-7e7c-9fcd2b930ced@suse.com>
Date: Wed, 15 Jul 2020 11:58:39 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <a4dc8db4-0388-a922-838e-42c6f4635639@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Tim Deegan <tim@xen.org>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Furthermore the field isn't needed at all with shadow support disabled -
move it into struct shadow_vcpu.

Introduce for_each_shadow_table(), shortening loops for the 4-level case
at the same time.

Adjust loop variables and a function parameter to be "unsigned int"
where applicable at the same time. Also move a comment that ended up
misplaced due to incremental additions.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -959,13 +959,15 @@ static void _shadow_prealloc(struct doma
     perfc_incr(shadow_prealloc_2);
 
     for_each_vcpu(d, v)
-        for ( i = 0 ; i < 4 ; i++ )
+        for ( i = 0; i < ARRAY_SIZE(v->arch.paging.shadow.shadow_table); i++ )
         {
-            if ( !pagetable_is_null(v->arch.shadow_table[i]) )
+            if ( !pagetable_is_null(v->arch.paging.shadow.shadow_table[i]) )
             {
                 TRACE_SHADOW_PATH_FLAG(TRCE_SFLAG_PREALLOC_UNHOOK);
-                shadow_unhook_mappings(d,
-                               pagetable_get_mfn(v->arch.shadow_table[i]), 0);
+                shadow_unhook_mappings(
+                    d,
+                    pagetable_get_mfn(v->arch.paging.shadow.shadow_table[i]),
+                    0);
 
                 /* See if that freed up enough space */
                 if ( d->arch.paging.shadow.free_pages >= pages )
@@ -1018,10 +1020,12 @@ void shadow_blow_tables(struct domain *d
 
     /* Second pass: unhook entries of in-use shadows */
     for_each_vcpu(d, v)
-        for ( i = 0 ; i < 4 ; i++ )
-            if ( !pagetable_is_null(v->arch.shadow_table[i]) )
-                shadow_unhook_mappings(d,
-                               pagetable_get_mfn(v->arch.shadow_table[i]), 0);
+        for ( i = 0; i < ARRAY_SIZE(v->arch.paging.shadow.shadow_table); i++ )
+            if ( !pagetable_is_null(v->arch.paging.shadow.shadow_table[i]) )
+                shadow_unhook_mappings(
+                    d,
+                    pagetable_get_mfn(v->arch.paging.shadow.shadow_table[i]),
+                    0);
 
     /* Make sure everyone sees the unshadowings */
     guest_flush_tlb_mask(d, d->dirty_cpumask);
--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -85,6 +85,15 @@ const char *const fetch_type_names[] = {
 };
 #endif
 
+#if SHADOW_PAGING_LEVELS == 3
+# define for_each_shadow_table(v, i) \
+    for ( (i) = 0; \
+          (i) < ARRAY_SIZE((v)->arch.paging.shadow.shadow_table); \
+          ++(i) )
+#else
+# define for_each_shadow_table(v, i) for ( (i) = 0; (i) < 1; ++(i) )
+#endif
+
 /* Helper to perform a local TLB flush. */
 static void sh_flush_local(const struct domain *d)
 {
@@ -1624,7 +1633,7 @@ static shadow_l4e_t * shadow_get_and_cre
                                                 mfn_t *sl4mfn)
 {
     /* There is always a shadow of the top level table.  Get it. */
-    *sl4mfn = pagetable_get_mfn(v->arch.shadow_table[0]);
+    *sl4mfn = pagetable_get_mfn(v->arch.paging.shadow.shadow_table[0]);
     /* Reading the top level table is always valid. */
     return sh_linear_l4_table(v) + shadow_l4_linear_offset(gw->va);
 }
@@ -1740,7 +1749,7 @@ static shadow_l2e_t * shadow_get_and_cre
     return sh_linear_l2_table(v) + shadow_l2_linear_offset(gw->va);
 #else /* 32bit... */
     /* There is always a shadow of the top level table.  Get it. */
-    *sl2mfn = pagetable_get_mfn(v->arch.shadow_table[0]);
+    *sl2mfn = pagetable_get_mfn(v->arch.paging.shadow.shadow_table[0]);
     /* This next line is important: the guest l2 has a 16k
      * shadow, we need to return the right mfn of the four. This
      * call will set it for us as a side-effect. */
@@ -2333,6 +2342,7 @@ int sh_safe_not_to_sync(struct vcpu *v,
     struct domain *d = v->domain;
     struct page_info *sp;
     mfn_t smfn;
+    unsigned int i;
 
     if ( !sh_type_has_up_pointer(d, SH_type_l1_shadow) )
         return 0;
@@ -2365,14 +2375,10 @@ int sh_safe_not_to_sync(struct vcpu *v,
     ASSERT(mfn_valid(smfn));
 #endif
 
-    if ( pagetable_get_pfn(v->arch.shadow_table[0]) == mfn_x(smfn)
-#if (SHADOW_PAGING_LEVELS == 3)
-         || pagetable_get_pfn(v->arch.shadow_table[1]) == mfn_x(smfn)
-         || pagetable_get_pfn(v->arch.shadow_table[2]) == mfn_x(smfn)
-         || pagetable_get_pfn(v->arch.shadow_table[3]) == mfn_x(smfn)
-#endif
-        )
-        return 0;
+    for_each_shadow_table(v, i)
+        if ( pagetable_get_pfn(v->arch.paging.shadow.shadow_table[i]) ==
+             mfn_x(smfn) )
+            return 0;
 
     /* Only in use in one toplevel shadow, and it's not the one we're
      * running on */
@@ -3287,10 +3293,12 @@ static int sh_page_fault(struct vcpu *v,
         for_each_vcpu(d, tmp)
         {
 #if GUEST_PAGING_LEVELS == 3
-            int i;
-            for ( i = 0; i < 4; i++ )
+            unsigned int i;
+
+            for_each_shadow_table(v, i)
             {
-                mfn_t smfn = pagetable_get_mfn(v->arch.shadow_table[i]);
+                mfn_t smfn = pagetable_get_mfn(
+                                 v->arch.paging.shadow.shadow_table[i]);
 
                 if ( mfn_valid(smfn) && (mfn_x(smfn) != 0) )
                 {
@@ -3707,7 +3715,7 @@ sh_update_linear_entries(struct vcpu *v)
      *
      * Because HVM guests run on the same monitor tables regardless of the
      * shadow tables in use, the linear mapping of the shadow tables has to
-     * be updated every time v->arch.shadow_table changes.
+     * be updated every time v->arch.paging.shadow.shadow_table changes.
      */
 
     /* Don't try to update the monitor table if it doesn't exist */
@@ -3723,8 +3731,9 @@ sh_update_linear_entries(struct vcpu *v)
     if ( v == current )
     {
         __linear_l4_table[l4_linear_offset(SH_LINEAR_PT_VIRT_START)] =
-            l4e_from_pfn(pagetable_get_pfn(v->arch.shadow_table[0]),
-                         __PAGE_HYPERVISOR_RW);
+            l4e_from_pfn(
+                pagetable_get_pfn(v->arch.paging.shadow.shadow_table[0]),
+                __PAGE_HYPERVISOR_RW);
     }
     else
     {
@@ -3732,8 +3741,9 @@ sh_update_linear_entries(struct vcpu *v)
 
         ml4e = map_domain_page(pagetable_get_mfn(v->arch.hvm.monitor_table));
         ml4e[l4_table_offset(SH_LINEAR_PT_VIRT_START)] =
-            l4e_from_pfn(pagetable_get_pfn(v->arch.shadow_table[0]),
-                         __PAGE_HYPERVISOR_RW);
+            l4e_from_pfn(
+                pagetable_get_pfn(v->arch.paging.shadow.shadow_table[0]),
+                __PAGE_HYPERVISOR_RW);
         unmap_domain_page(ml4e);
     }
 
@@ -3812,7 +3822,7 @@ sh_update_linear_entries(struct vcpu *v)
 
 
 /*
- * Removes vcpu->arch.shadow_table[].
+ * Removes v->arch.paging.shadow.shadow_table[].
  * Does all appropriate management/bookkeeping/refcounting/etc...
  */
 static void
@@ -3820,38 +3830,34 @@ sh_detach_old_tables(struct vcpu *v)
 {
     struct domain *d = v->domain;
     mfn_t smfn;
-    int i = 0;
+    unsigned int i;
 
     ////
-    //// vcpu->arch.shadow_table[]
+    //// vcpu->arch.paging.shadow.shadow_table[]
     ////
 
-#if GUEST_PAGING_LEVELS == 3
-    /* PAE guests have four shadow_table entries */
-    for ( i = 0 ; i < 4 ; i++ )
-#endif
+    for_each_shadow_table(v, i)
     {
-        smfn = pagetable_get_mfn(v->arch.shadow_table[i]);
+        smfn = pagetable_get_mfn(v->arch.paging.shadow.shadow_table[i]);
         if ( mfn_x(smfn) )
             sh_put_ref(d, smfn, 0);
-        v->arch.shadow_table[i] = pagetable_null();
+        v->arch.paging.shadow.shadow_table[i] = pagetable_null();
     }
 }
 
 /* Set up the top-level shadow and install it in slot 'slot' of shadow_table */
 static void
 sh_set_toplevel_shadow(struct vcpu *v,
-                       int slot,
+                       unsigned int slot,
                        mfn_t gmfn,
                        unsigned int root_type)
 {
     mfn_t smfn;
     pagetable_t old_entry, new_entry;
-
     struct domain *d = v->domain;
 
     /* Remember the old contents of this slot */
-    old_entry = v->arch.shadow_table[slot];
+    old_entry = v->arch.paging.shadow.shadow_table[slot];
 
     /* Now figure out the new contents: is this a valid guest MFN? */
     if ( !mfn_valid(gmfn) )
@@ -3893,7 +3899,7 @@ sh_set_toplevel_shadow(struct vcpu *v,
     SHADOW_PRINTK("%u/%u [%u] gmfn %#"PRI_mfn" smfn %#"PRI_mfn"\n",
                   GUEST_PAGING_LEVELS, SHADOW_PAGING_LEVELS, slot,
                   mfn_x(gmfn), mfn_x(pagetable_get_mfn(new_entry)));
-    v->arch.shadow_table[slot] = new_entry;
+    v->arch.paging.shadow.shadow_table[slot] = new_entry;
 
     /* Decrement the refcount of the old contents of this slot */
     if ( !pagetable_is_null(old_entry) ) {
@@ -3999,7 +4005,7 @@ sh_update_cr3(struct vcpu *v, int do_loc
 
 
     ////
-    //// vcpu->arch.shadow_table[]
+    //// vcpu->arch.paging.shadow.shadow_table[]
     ////
 
     /* We revoke write access to the new guest toplevel page(s) before we
@@ -4056,7 +4062,7 @@ sh_update_cr3(struct vcpu *v, int do_loc
     sh_set_toplevel_shadow(v, 0, gmfn, SH_type_l4_shadow);
     if ( !shadow_mode_external(d) && !is_pv_32bit_domain(d) )
     {
-        mfn_t smfn = pagetable_get_mfn(v->arch.shadow_table[0]);
+        mfn_t smfn = pagetable_get_mfn(v->arch.paging.shadow.shadow_table[0]);
 
         if ( !(v->arch.flags & TF_kernel_mode) && VM_ASSIST(d, m2p_strict) )
             zap_ro_mpt(smfn);
@@ -4074,9 +4080,10 @@ sh_update_cr3(struct vcpu *v, int do_loc
     ///
 #if SHADOW_PAGING_LEVELS == 3
         {
-            mfn_t smfn = pagetable_get_mfn(v->arch.shadow_table[0]);
-            int i;
-            for ( i = 0; i < 4; i++ )
+            mfn_t smfn = pagetable_get_mfn(v->arch.paging.shadow.shadow_table[0]);
+            unsigned int i;
+
+            for_each_shadow_table(v, i)
             {
 #if GUEST_PAGING_LEVELS == 2
                 /* 2-on-3: make a PAE l3 that points at the four-page l2 */
@@ -4084,7 +4091,7 @@ sh_update_cr3(struct vcpu *v, int do_loc
                     smfn = sh_next_page(smfn);
 #else
                 /* 3-on-3: make a PAE l3 that points at the four l2 pages */
-                smfn = pagetable_get_mfn(v->arch.shadow_table[i]);
+                smfn = pagetable_get_mfn(v->arch.paging.shadow.shadow_table[i]);
 #endif
                 v->arch.paging.shadow.l3table[i] =
                     (mfn_x(smfn) == 0)
@@ -4108,7 +4115,7 @@ sh_update_cr3(struct vcpu *v, int do_loc
         /* We don't support PV except guest == shadow == config levels */
         BUILD_BUG_ON(GUEST_PAGING_LEVELS != SHADOW_PAGING_LEVELS);
         /* Just use the shadow top-level directly */
-        make_cr3(v, pagetable_get_mfn(v->arch.shadow_table[0]));
+        make_cr3(v, pagetable_get_mfn(v->arch.paging.shadow.shadow_table[0]));
     }
 #endif
 
@@ -4124,7 +4131,8 @@ sh_update_cr3(struct vcpu *v, int do_loc
         v->arch.hvm.hw_cr[3] = virt_to_maddr(&v->arch.paging.shadow.l3table);
 #else
         /* 4-on-4: Just use the shadow top-level directly */
-        v->arch.hvm.hw_cr[3] = pagetable_get_paddr(v->arch.shadow_table[0]);
+        v->arch.hvm.hw_cr[3] =
+            pagetable_get_paddr(v->arch.paging.shadow.shadow_table[0]);
 #endif
         hvm_update_guest_cr3(v, noflush);
     }
@@ -4443,7 +4451,7 @@ static void sh_pagetable_dying(paddr_t g
 {
     struct vcpu *v = current;
     struct domain *d = v->domain;
-    int i = 0;
+    unsigned int i;
     int flush = 0;
     int fast_path = 0;
     paddr_t gcr3 = 0;
@@ -4474,15 +4482,16 @@ static void sh_pagetable_dying(paddr_t g
         gl3pa = map_domain_page(l3mfn);
         gl3e = (guest_l3e_t *)(gl3pa + ((unsigned long)gpa & ~PAGE_MASK));
     }
-    for ( i = 0; i < 4; i++ )
+    for_each_shadow_table(v, i)
     {
         mfn_t smfn, gmfn;
 
-        if ( fast_path ) {
-            if ( pagetable_is_null(v->arch.shadow_table[i]) )
+        if ( fast_path )
+        {
+            if ( pagetable_is_null(v->arch.paging.shadow.shadow_table[i]) )
                 smfn = INVALID_MFN;
             else
-                smfn = pagetable_get_mfn(v->arch.shadow_table[i]);
+                smfn = pagetable_get_mfn(v->arch.paging.shadow.shadow_table[i]);
         }
         else
         {
--- a/xen/include/asm-x86/domain.h
+++ b/xen/include/asm-x86/domain.h
@@ -135,6 +135,14 @@ struct shadow_vcpu {
     l3_pgentry_t l3table[4] __attribute__((__aligned__(32)));
     /* PAE guests: per-vcpu cache of the top-level *guest* entries */
     l3_pgentry_t gl3e[4] __attribute__((__aligned__(32)));
+
+    /* shadow(s) of guest (MFN) */
+#ifdef CONFIG_HVM
+    pagetable_t shadow_table[4];
+#else
+    pagetable_t shadow_table[1];
+#endif
+
     /* Last MFN that we emulated a write to as unshadow heuristics. */
     unsigned long last_emulated_mfn_for_unshadow;
     /* MFN of the last shadow that we shot a writeable mapping in */
@@ -576,6 +584,10 @@ struct arch_vcpu
         struct hvm_vcpu hvm;
     };
 
+    /*
+     * guest_table{,_user} hold a ref to the page, and also a type-count
+     * unless shadow refcounts are in use
+     */
     pagetable_t guest_table_user;       /* (MFN) x86/64 user-space pagetable */
     pagetable_t guest_table;            /* (MFN) guest notion of cr3 */
     struct page_info *old_guest_table;  /* partially destructed pagetable */
@@ -583,9 +595,7 @@ struct arch_vcpu
                                         /* former, if any */
     bool old_guest_table_partial;       /* Are we dropping a type ref, or just
                                          * finishing up a partial de-validation? */
-    /* guest_table holds a ref to the page, and also a type-count unless
-     * shadow refcounts are in use */
-    pagetable_t shadow_table[4];        /* (MFN) shadow(s) of guest */
+
     unsigned long cr3;                  /* (MA) value to install in HW CR3 */
 
     /*

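The hunks above replace the hard-coded `for ( i = 0; i < 4; i++ )` with for_each_shadow_table() and move shadow_table[] into v->arch.paging.shadow, where the array holds four slots in HVM builds and one otherwise. A minimal standalone model of that pattern (the struct, the macro body, and the helper below are illustrative stand-ins, not the actual Xen definitions):

```c
#include <assert.h>
#include <stddef.h>

/* Toy stand-in for Xen's pagetable_t; the real type wraps an MFN. */
typedef struct { unsigned long pfn; } pagetable_t;

/* Model of the conditionally sized per-vcpu array from the patch:
 * four top-level shadows for HVM (PAE guests), one otherwise. */
#ifdef CONFIG_HVM
#define SHADOW_TABLE_SLOTS 4
#else
#define SHADOW_TABLE_SLOTS 1
#endif

struct shadow_vcpu_model {
    pagetable_t shadow_table[SHADOW_TABLE_SLOTS];
};

#define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))

/* Plausible shape of a for_each_shadow_table() iterator: visit
 * exactly the slots the build provides, instead of hard-coding 4. */
#define for_each_shadow_table(sv, i) \
    for ( (i) = 0; (i) < ARRAY_SIZE((sv)->shadow_table); (i)++ )

static unsigned int count_slots(struct shadow_vcpu_model *sv)
{
    unsigned int i, n = 0;

    for_each_shadow_table(sv, i)
        n++;
    return n;
}
```

Sizing the array by configuration and iterating via ARRAY_SIZE means no caller needs its own "#ifdef CONFIG_HVM" around the loop bound.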


From xen-devel-bounces@lists.xenproject.org Wed Jul 15 09:59:27 2020
Subject: [PATCH 3/5] x86/PV: drop a few misleading paging_mode_refcounts()
 checks
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <a4dc8db4-0388-a922-838e-42c6f4635639@suse.com>
Message-ID: <9f8d0c4d-dec2-0175-09df-51d5e11c88e1@suse.com>
Date: Wed, 15 Jul 2020 11:59:25 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <a4dc8db4-0388-a922-838e-42c6f4635639@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Tim Deegan <tim@xen.org>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

The filling and cleaning up of v->arch.guest_table in new_guest_cr3()
has apparently been inconsistent so far: a type ref was acquired
unconditionally for the new top-level page table, but the dropping of
the old type ref was conditional upon !paging_mode_refcounts(). Drop
that condition, and mirror the result in arch_set_info_guest().

Also move new_guest_cr3()'s #ifdef to enclose the entire function: both
callers are now built only when CONFIG_PV is enabled, so there is no
need to retain a stub.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
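
The invariant this patch restores can be shown with a toy refcount model (the struct and helpers below are illustrative, not Xen's actual page type-count API): the type ref taken when a top-level page table is installed must be dropped unconditionally when that table is replaced, with no paging-mode special case on only one side of the pairing.

```c
#include <assert.h>
#include <stddef.h>

/* Toy page with a type refcount. */
struct page {
    int type_count;
};

static void get_type_ref(struct page *pg)
{
    pg->type_count++;
}

static void put_type_ref(struct page *pg)
{
    /* Dropping a ref that was never taken would be the bug the
     * patch guards against: asymmetric acquire/release paths. */
    assert(pg->type_count > 0);
    pg->type_count--;
}

/* Install a new base table, releasing the previously installed one.
 * Loosely mirrors new_guest_cr3() after the patch: acquire for the
 * new page, then drop the old page's ref unconditionally. */
static struct page *switch_base(struct page *old, struct page *new)
{
    get_type_ref(new);
    if ( old != NULL )
        put_type_ref(old);
    return new;
}
```

After any sequence of switches, exactly the currently installed page holds a type ref and every replaced page is back to zero.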

--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -1122,8 +1122,6 @@ int arch_set_info_guest(
 
     if ( !cr3_page )
         rc = -EINVAL;
-    else if ( paging_mode_refcounts(d) )
-        /* nothing */;
     else if ( cr3_page == v->arch.old_guest_table )
     {
         v->arch.old_guest_table = NULL;
@@ -1144,8 +1142,7 @@ int arch_set_info_guest(
         case -ERESTART:
             break;
         case 0:
-            if ( !compat && !VM_ASSIST(d, m2p_strict) &&
-                 !paging_mode_refcounts(d) )
+            if ( !compat && !VM_ASSIST(d, m2p_strict) )
                 fill_ro_mpt(cr3_mfn);
             break;
         default:
@@ -1166,7 +1163,7 @@ int arch_set_info_guest(
 
             if ( !cr3_page )
                 rc = -EINVAL;
-            else if ( !paging_mode_refcounts(d) )
+            else
             {
                 rc = get_page_type_preemptible(cr3_page, PGT_root_page_table);
                 switch ( rc )
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -3149,9 +3149,9 @@ int vcpu_destroy_pagetables(struct vcpu
     return rc;
 }
 
+#ifdef CONFIG_PV
 int new_guest_cr3(mfn_t mfn)
 {
-#ifdef CONFIG_PV
     struct vcpu *curr = current;
     struct domain *d = curr->domain;
     int rc;
@@ -3220,7 +3220,7 @@ int new_guest_cr3(mfn_t mfn)
 
     pv_destroy_ldt(curr); /* Unconditional TLB flush later. */
 
-    if ( !VM_ASSIST(d, m2p_strict) && !paging_mode_refcounts(d) )
+    if ( !VM_ASSIST(d, m2p_strict) )
         fill_ro_mpt(mfn);
     curr->arch.guest_table = pagetable_from_mfn(mfn);
     update_cr3(curr);
@@ -3231,30 +3231,24 @@ int new_guest_cr3(mfn_t mfn)
     {
         struct page_info *page = mfn_to_page(old_base_mfn);
 
-        if ( paging_mode_refcounts(d) )
-            put_page(page);
-        else
-            switch ( rc = put_page_and_type_preemptible(page) )
-            {
-            case -EINTR:
-            case -ERESTART:
-                curr->arch.old_guest_ptpg = NULL;
-                curr->arch.old_guest_table = page;
-                curr->arch.old_guest_table_partial = (rc == -ERESTART);
-                rc = -ERESTART;
-                break;
-            default:
-                BUG_ON(rc);
-                break;
-            }
+        switch ( rc = put_page_and_type_preemptible(page) )
+        {
+        case -EINTR:
+        case -ERESTART:
+            curr->arch.old_guest_ptpg = NULL;
+            curr->arch.old_guest_table = page;
+            curr->arch.old_guest_table_partial = (rc == -ERESTART);
+            rc = -ERESTART;
+            break;
+        default:
+            BUG_ON(rc);
+            break;
+        }
     }
 
     return rc;
-#else
-    ASSERT_UNREACHABLE();
-    return -EINVAL;
-#endif
 }
+#endif
 
 #ifdef CONFIG_PV
 static int vcpumask_to_pcpumask(



From xen-devel-bounces@lists.xenproject.org Wed Jul 15 09:59:50 2020
Subject: [PATCH 4/5] x86/shadow: have just a single instance of
 sh_set_toplevel_shadow()
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <a4dc8db4-0388-a922-838e-42c6f4635639@suse.com>
Message-ID: <a9308564-be9e-8112-8ca6-499a7501cfa7@suse.com>
Date: Wed, 15 Jul 2020 11:59:47 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <a4dc8db4-0388-a922-838e-42c6f4635639@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Tim Deegan <tim@xen.org>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

The only guest/shadow-level-dependent piece here is the call to
sh_make_shadow(). Make a pointer to the respective function an
argument of sh_set_toplevel_shadow(), allowing the function to be
moved to common.c.

This implies making get_shadow_status() available to common.c; its set
and delete counterparts are moved along with it.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
Besides reducing compiled code size, this also avoids the function
becoming unused in !HVM builds (in two out of the three objects built)
in a subsequent change.
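
The refactoring pattern, hoisting level-independent logic into common code by passing the one level-specific step as a callback, can be sketched as follows (types, names, and the cache-miss logic are simplified stand-ins, not the actual Xen implementation):

```c
#include <assert.h>
#include <stdint.h>

/* Toy MFN type standing in for Xen's mfn_t. */
typedef struct { unsigned long m; } mfn_t;

#define INVALID_MFN_VAL (~0UL)

/* Pretend shadow-status lookup; always misses in this toy model. */
static mfn_t lookup_shadow(mfn_t gmfn)
{
    (void)gmfn;
    return (mfn_t){ INVALID_MFN_VAL };
}

/* Common-code helper: everything here is level-independent; only
 * the shadow-creation step differs per GUEST_PAGING_LEVELS, so it
 * is taken as a function pointer (as sh_set_toplevel_shadow() now
 * takes make_shadow). */
static mfn_t set_toplevel_shadow(mfn_t gmfn, uint32_t type,
                                 mfn_t (*make_shadow)(mfn_t, uint32_t))
{
    mfn_t smfn = lookup_shadow(gmfn);

    if ( smfn.m == INVALID_MFN_VAL )   /* cache miss: shadow the page */
        smfn = make_shadow(gmfn, type);
    return smfn;
}

/* A per-level implementation, as multi.c supplies one per build of
 * GUEST_PAGING_LEVELS; the arithmetic is an arbitrary toy mapping. */
static mfn_t make_shadow_l4(mfn_t gmfn, uint32_t type)
{
    (void)type;
    return (mfn_t){ gmfn.m + 1000 };
}
```

Each multi.c instantiation keeps only its own make_shadow flavour, while the install/refcount bookkeeping compiles once, which is where the code-size reduction mentioned above comes from.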

--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -2560,6 +2560,80 @@ void shadow_update_paging_modes(struct v
     paging_unlock(v->domain);
 }
 
+/* Set up the top-level shadow and install it in slot 'slot' of shadow_table */
+void sh_set_toplevel_shadow(struct vcpu *v,
+                            unsigned int slot,
+                            mfn_t gmfn,
+                            unsigned int root_type,
+                            mfn_t (*make_shadow)(struct vcpu *v,
+                                                 mfn_t gmfn,
+                                                 uint32_t shadow_type))
+{
+    mfn_t smfn;
+    pagetable_t old_entry, new_entry;
+    struct domain *d = v->domain;
+
+    /* Remember the old contents of this slot */
+    old_entry = v->arch.paging.shadow.shadow_table[slot];
+
+    /* Now figure out the new contents: is this a valid guest MFN? */
+    if ( !mfn_valid(gmfn) )
+    {
+        new_entry = pagetable_null();
+        goto install_new_entry;
+    }
+
+    /* Guest mfn is valid: shadow it and install the shadow */
+    smfn = get_shadow_status(d, gmfn, root_type);
+    if ( !mfn_valid(smfn) )
+    {
+        /* Make sure there's enough free shadow memory. */
+        shadow_prealloc(d, root_type, 1);
+        /* Shadow the page. */
+        smfn = make_shadow(v, gmfn, root_type);
+    }
+    ASSERT(mfn_valid(smfn));
+
+    /* Take a ref to this page: it will be released in sh_detach_old_tables()
+     * or the next call to set_toplevel_shadow() */
+    if ( sh_get_ref(d, smfn, 0) )
+    {
+        /* Pin the shadow and put it (back) on the list of pinned shadows */
+        sh_pin(d, smfn);
+
+        new_entry = pagetable_from_mfn(smfn);
+    }
+    else
+    {
+        printk(XENLOG_G_ERR "can't install %"PRI_mfn" as toplevel shadow\n",
+               mfn_x(smfn));
+        domain_crash(d);
+        new_entry = pagetable_null();
+    }
+
+ install_new_entry:
+    /* Done.  Install it */
+    SHADOW_PRINTK("%u [%u] gmfn %#"PRI_mfn" smfn %#"PRI_mfn"\n",
+                  v->arch.paging.mode->shadow.shadow_levels, slot,
+                  mfn_x(gmfn), mfn_x(pagetable_get_mfn(new_entry)));
+    v->arch.paging.shadow.shadow_table[slot] = new_entry;
+
+    /* Decrement the refcount of the old contents of this slot */
+    if ( !pagetable_is_null(old_entry) )
+    {
+        mfn_t old_smfn = pagetable_get_mfn(old_entry);
+        /* Need to repin the old toplevel shadow if it's been unpinned
+         * by shadow_prealloc(): in PV mode we're still running on this
+         * shadow and it's not safe to free it yet. */
+        if ( !mfn_to_page(old_smfn)->u.sh.pinned && !sh_pin(d, old_smfn) )
+        {
+            printk(XENLOG_G_ERR "can't re-pin %"PRI_mfn"\n", mfn_x(old_smfn));
+            domain_crash(d);
+        }
+        sh_put_ref(d, old_smfn, 0);
+    }
+}
+
 /**************************************************************************/
 /* Turning on and off shadow features */
 
--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -103,7 +103,7 @@ static void sh_flush_local(const struct
 /**************************************************************************/
 /* Hash table mapping from guest pagetables to shadows
  *
- * Normal case: maps the mfn of a guest page to the mfn of its shadow page.
+ * normal case: see private.h.
  * FL1's:       maps the *gfn* of the start of a superpage to the mfn of a
  *              shadow L1 which maps its "splinters".
  */
@@ -117,16 +117,6 @@ get_fl1_shadow_status(struct domain *d,
     return smfn;
 }
 
-static inline mfn_t
-get_shadow_status(struct domain *d, mfn_t gmfn, u32 shadow_type)
-/* Look for shadows in the hash table */
-{
-    mfn_t smfn = shadow_hash_lookup(d, mfn_x(gmfn), shadow_type);
-    ASSERT(!mfn_valid(smfn) || mfn_to_page(smfn)->u.sh.head);
-    perfc_incr(shadow_get_shadow_status);
-    return smfn;
-}
-
 static inline void
 set_fl1_shadow_status(struct domain *d, gfn_t gfn, mfn_t smfn)
 /* Put an FL1 shadow into the hash table */
@@ -139,27 +129,6 @@ set_fl1_shadow_status(struct domain *d,
 }
 
 static inline void
-set_shadow_status(struct domain *d, mfn_t gmfn, u32 shadow_type, mfn_t smfn)
-/* Put a shadow into the hash table */
-{
-    int res;
-
-    SHADOW_PRINTK("d%d gmfn=%lx, type=%08x, smfn=%lx\n",
-                  d->domain_id, mfn_x(gmfn), shadow_type, mfn_x(smfn));
-
-    ASSERT(mfn_to_page(smfn)->u.sh.head);
-
-    /* 32-bit PV guests don't own their l4 pages so can't get_page them */
-    if ( !is_pv_32bit_domain(d) || shadow_type != SH_type_l4_64_shadow )
-    {
-        res = get_page(mfn_to_page(gmfn), d);
-        ASSERT(res == 1);
-    }
-
-    shadow_hash_insert(d, mfn_x(gmfn), shadow_type, smfn);
-}
-
-static inline void
 delete_fl1_shadow_status(struct domain *d, gfn_t gfn, mfn_t smfn)
 /* Remove a shadow from the hash table */
 {
@@ -169,19 +138,6 @@ delete_fl1_shadow_status(struct domain *
     shadow_hash_delete(d, gfn_x(gfn), SH_type_fl1_shadow, smfn);
 }
 
-static inline void
-delete_shadow_status(struct domain *d, mfn_t gmfn, u32 shadow_type, mfn_t smfn)
-/* Remove a shadow from the hash table */
-{
-    SHADOW_PRINTK("d%d gmfn=%"PRI_mfn", type=%08x, smfn=%"PRI_mfn"\n",
-                  d->domain_id, mfn_x(gmfn), shadow_type, mfn_x(smfn));
-    ASSERT(mfn_to_page(smfn)->u.sh.head);
-    shadow_hash_delete(d, mfn_x(gmfn), shadow_type, smfn);
-    /* 32-bit PV guests don't own their l4 pages; see set_shadow_status */
-    if ( !is_pv_32bit_domain(d) || shadow_type != SH_type_l4_64_shadow )
-        put_page(mfn_to_page(gmfn));
-}
-
 
 /**************************************************************************/
 /* Functions for walking the guest page tables */
@@ -3845,78 +3801,6 @@ sh_detach_old_tables(struct vcpu *v)
     }
 }
 
-/* Set up the top-level shadow and install it in slot 'slot' of shadow_table */
-static void
-sh_set_toplevel_shadow(struct vcpu *v,
-                       unsigned int slot,
-                       mfn_t gmfn,
-                       unsigned int root_type)
-{
-    mfn_t smfn;
-    pagetable_t old_entry, new_entry;
-    struct domain *d = v->domain;
-
-    /* Remember the old contents of this slot */
-    old_entry = v->arch.paging.shadow.shadow_table[slot];
-
-    /* Now figure out the new contents: is this a valid guest MFN? */
-    if ( !mfn_valid(gmfn) )
-    {
-        new_entry = pagetable_null();
-        goto install_new_entry;
-    }
-
-    /* Guest mfn is valid: shadow it and install the shadow */
-    smfn = get_shadow_status(d, gmfn, root_type);
-    if ( !mfn_valid(smfn) )
-    {
-        /* Make sure there's enough free shadow memory. */
-        shadow_prealloc(d, root_type, 1);
-        /* Shadow the page. */
-        smfn = sh_make_shadow(v, gmfn, root_type);
-    }
-    ASSERT(mfn_valid(smfn));
-
-    /* Take a ref to this page: it will be released in sh_detach_old_tables()
-     * or the next call to set_toplevel_shadow() */
-    if ( sh_get_ref(d, smfn, 0) )
-    {
-        /* Pin the shadow and put it (back) on the list of pinned shadows */
-        sh_pin(d, smfn);
-
-        new_entry = pagetable_from_mfn(smfn);
-    }
-    else
-    {
-        printk(XENLOG_G_ERR "can't install %"PRI_mfn" as toplevel shadow\n",
-               mfn_x(smfn));
-        domain_crash(d);
-        new_entry = pagetable_null();
-    }
-
- install_new_entry:
-    /* Done.  Install it */
-    SHADOW_PRINTK("%u/%u [%u] gmfn %#"PRI_mfn" smfn %#"PRI_mfn"\n",
-                  GUEST_PAGING_LEVELS, SHADOW_PAGING_LEVELS, slot,
-                  mfn_x(gmfn), mfn_x(pagetable_get_mfn(new_entry)));
-    v->arch.paging.shadow.shadow_table[slot] = new_entry;
-
-    /* Decrement the refcount of the old contents of this slot */
-    if ( !pagetable_is_null(old_entry) ) {
-        mfn_t old_smfn = pagetable_get_mfn(old_entry);
-        /* Need to repin the old toplevel shadow if it's been unpinned
-         * by shadow_prealloc(): in PV mode we're still running on this
-         * shadow and it's not safe to free it yet. */
-        if ( !mfn_to_page(old_smfn)->u.sh.pinned && !sh_pin(d, old_smfn) )
-        {
-            printk(XENLOG_G_ERR "can't re-pin %"PRI_mfn"\n", mfn_x(old_smfn));
-            domain_crash(d);
-        }
-        sh_put_ref(d, old_smfn, 0);
-    }
-}
-
-
 static void
 sh_update_cr3(struct vcpu *v, int do_locking, bool noflush)
 /* Updates vcpu->arch.cr3 after the guest has changed CR3.
@@ -4014,7 +3898,7 @@ sh_update_cr3(struct vcpu *v, int do_loc
 #if GUEST_PAGING_LEVELS == 2
     if ( sh_remove_write_access(d, gmfn, 2, 0) != 0 )
         guest_flush_tlb_mask(d, d->dirty_cpumask);
-    sh_set_toplevel_shadow(v, 0, gmfn, SH_type_l2_shadow);
+    sh_set_toplevel_shadow(v, 0, gmfn, SH_type_l2_shadow, sh_make_shadow);
 #elif GUEST_PAGING_LEVELS == 3
     /* PAE guests have four shadow_table entries, based on the
      * current values of the guest's four l3es. */
@@ -4048,18 +3932,20 @@ sh_update_cr3(struct vcpu *v, int do_loc
                 if ( p2m_is_ram(p2mt) )
                     sh_set_toplevel_shadow(v, i, gl2mfn, (i == 3)
                                            ? SH_type_l2h_shadow
-                                           : SH_type_l2_shadow);
+                                           : SH_type_l2_shadow,
+                                           sh_make_shadow);
                 else
-                    sh_set_toplevel_shadow(v, i, INVALID_MFN, 0);
+                    sh_set_toplevel_shadow(v, i, INVALID_MFN, 0,
+                                           sh_make_shadow);
             }
             else
-                sh_set_toplevel_shadow(v, i, INVALID_MFN, 0);
+                sh_set_toplevel_shadow(v, i, INVALID_MFN, 0, sh_make_shadow);
         }
     }
 #elif GUEST_PAGING_LEVELS == 4
     if ( sh_remove_write_access(d, gmfn, 4, 0) != 0 )
         guest_flush_tlb_mask(d, d->dirty_cpumask);
-    sh_set_toplevel_shadow(v, 0, gmfn, SH_type_l4_shadow);
+    sh_set_toplevel_shadow(v, 0, gmfn, SH_type_l4_shadow, sh_make_shadow);
     if ( !shadow_mode_external(d) && !is_pv_32bit_domain(d) )
     {
         mfn_t smfn = pagetable_get_mfn(v->arch.paging.shadow.shadow_table[0]);
--- a/xen/arch/x86/mm/shadow/private.h
+++ b/xen/arch/x86/mm/shadow/private.h
@@ -357,6 +357,15 @@ mfn_t shadow_alloc(struct domain *d,
                     unsigned long backpointer);
 void  shadow_free(struct domain *d, mfn_t smfn);
 
+/* Set up the top-level shadow and install it in slot 'slot' of shadow_table */
+void sh_set_toplevel_shadow(struct vcpu *v,
+                            unsigned int slot,
+                            mfn_t gmfn,
+                            unsigned int root_type,
+                            mfn_t (*make_shadow)(struct vcpu *v,
+                                                 mfn_t gmfn,
+                                                 uint32_t shadow_type));
+
 /* Install the xen mappings in various flavours of shadow */
 void sh_install_xen_entries_in_l4(struct domain *, mfn_t gl4mfn, mfn_t sl4mfn);
 
@@ -701,6 +710,58 @@ static inline void sh_unpin(struct domai
 }
 
 
+/**************************************************************************/
+/* Hash table mapping from guest pagetables to shadows
+ *
+ * Normal case: maps the mfn of a guest page to the mfn of its shadow page.
+ * FL1's:       see multi.c.
+ */
+
+static inline mfn_t
+get_shadow_status(struct domain *d, mfn_t gmfn, u32 shadow_type)
+/* Look for shadows in the hash table */
+{
+    mfn_t smfn = shadow_hash_lookup(d, mfn_x(gmfn), shadow_type);
+    ASSERT(!mfn_valid(smfn) || mfn_to_page(smfn)->u.sh.head);
+    perfc_incr(shadow_get_shadow_status);
+    return smfn;
+}
+
+static inline void
+set_shadow_status(struct domain *d, mfn_t gmfn, u32 shadow_type, mfn_t smfn)
+/* Put a shadow into the hash table */
+{
+    int res;
+
+    SHADOW_PRINTK("d%d gmfn=%lx, type=%08x, smfn=%lx\n",
+                  d->domain_id, mfn_x(gmfn), shadow_type, mfn_x(smfn));
+
+    ASSERT(mfn_to_page(smfn)->u.sh.head);
+
+    /* 32-bit PV guests don't own their l4 pages so can't get_page them */
+    if ( !is_pv_32bit_domain(d) || shadow_type != SH_type_l4_64_shadow )
+    {
+        res = get_page(mfn_to_page(gmfn), d);
+        ASSERT(res == 1);
+    }
+
+    shadow_hash_insert(d, mfn_x(gmfn), shadow_type, smfn);
+}
+
+static inline void
+delete_shadow_status(struct domain *d, mfn_t gmfn, u32 shadow_type, mfn_t smfn)
+/* Remove a shadow from the hash table */
+{
+    SHADOW_PRINTK("d%d gmfn=%"PRI_mfn", type=%08x, smfn=%"PRI_mfn"\n",
+                  d->domain_id, mfn_x(gmfn), shadow_type, mfn_x(smfn));
+    ASSERT(mfn_to_page(smfn)->u.sh.head);
+    shadow_hash_delete(d, mfn_x(gmfn), shadow_type, smfn);
+    /* 32-bit PV guests don't own their l4 pages; see set_shadow_status */
+    if ( !is_pv_32bit_domain(d) || shadow_type != SH_type_l4_64_shadow )
+        put_page(mfn_to_page(gmfn));
+}
+
+
 /**************************************************************************/
 /* PTE-write emulation. */
 



From xen-devel-bounces@lists.xenproject.org Wed Jul 15 10:00:11 2020
Subject: [PATCH 5/5] x86/shadow: l3table[] and gl3e[] are HVM only
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <a4dc8db4-0388-a922-838e-42c6f4635639@suse.com>
Message-ID: <a3b9b496-e860-e657-2afc-c0658871fa3f@suse.com>
Date: Wed, 15 Jul 2020 12:00:09 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <a4dc8db4-0388-a922-838e-42c6f4635639@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Tim Deegan <tim@xen.org>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

... by the very fact that they are 3-level specific, while PV always
runs in 4-level mode. This requires adding some seemingly redundant
#ifdef-s; some of them can be dropped again once 2- and 3-level guest
code is no longer built in !HVM configs, but I'm afraid quite a bit of
disentangling work remains before that becomes possible.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
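
The three-way dispatch this patch adds to sh_walk_guest_tables() - the 32/64-bit path, a !CONFIG_HVM stub for the PAE case, and the real PAE walk only in HVM builds - can be modelled standalone like this (the struct and return values are simplified stand-ins for the real walk_t and guest_walk_tables()):

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

#define GUEST_PAGING_LEVELS 3
/* CONFIG_HVM deliberately left undefined here, so the stub path
 * below is the one that gets compiled. */

struct walk { int filled; };

static bool walk_guest_tables(struct walk *gw)
{
#if GUEST_PAGING_LEVELS != 3        /* 32-bit or 64-bit guests */
    gw->filled = GUEST_PAGING_LEVELS;
    return true;
#elif !defined(CONFIG_HVM)
    /* PAE guests cannot exist in !HVM builds: zero the walk and
     * fail, matching the memset(gw, 0, ...) stub in the patch. */
    memset(gw, 0, sizeof(*gw));
    return false;
#else                               /* PAE under HVM */
    gw->filled = 3;
    return true;
#endif
}
```

Putting the !CONFIG_HVM branch between the level checks lets the 3-level source files still compile in PV-only configs without dragging in the HVM-only gl3e cache.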

--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -150,10 +150,7 @@ sh_walk_guest_tables(struct vcpu *v, uns
                           ? cr3_pa(v->arch.hvm.guest_cr[3]) >> PAGE_SHIFT
                           : pagetable_get_pfn(v->arch.guest_table));
 
-#if GUEST_PAGING_LEVELS == 3 /* PAE */
-    return guest_walk_tables(v, p2m_get_hostp2m(v->domain), va, gw, pfec,
-                             root_gfn, INVALID_MFN, v->arch.paging.shadow.gl3e);
-#else /* 32 or 64 */
+#if GUEST_PAGING_LEVELS != 3 /* 32 or 64 */
     const struct domain *d = v->domain;
     mfn_t root_mfn = (v->arch.flags & TF_kernel_mode
                       ? pagetable_get_mfn(v->arch.guest_table)
@@ -165,6 +162,13 @@ sh_walk_guest_tables(struct vcpu *v, uns
     unmap_domain_page(root_map);
 
     return ok;
+#elif !defined(CONFIG_HVM)
+    (void)root_gfn;
+    memset(gw, 0, sizeof(*gw));
+    return false;
+#else /* PAE */
+    return guest_walk_tables(v, p2m_get_hostp2m(v->domain), va, gw, pfec,
+                             root_gfn, INVALID_MFN, v->arch.paging.shadow.gl3e);
 #endif
 }
 
@@ -211,7 +215,7 @@ shadow_check_gwalk(struct vcpu *v, unsig
     l3p = map_domain_page(gw->l3mfn);
     mismatch |= (gw->l3e.l3 != l3p[guest_l3_table_offset(va)].l3);
     unmap_domain_page(l3p);
-#else
+#elif defined(CONFIG_HVM)
     mismatch |= (gw->l3e.l3 !=
                  v->arch.paging.shadow.gl3e[guest_l3_table_offset(va)].l3);
 #endif
@@ -1693,6 +1697,8 @@ static shadow_l2e_t * shadow_get_and_cre
     }
     /* Now follow it down a level.  Guaranteed to succeed. */
     return sh_linear_l2_table(v) + shadow_l2_linear_offset(gw->va);
+#elif !defined(CONFIG_HVM)
+    return NULL;
 #elif GUEST_PAGING_LEVELS == 3 /* PAE... */
     /* We never demand-shadow PAE l3es: they are only created in
      * sh_update_cr3().  Check if the relevant sl3e is present. */
@@ -3526,6 +3532,8 @@ static bool sh_invlpg(struct vcpu *v, un
         if ( !(shadow_l3e_get_flags(sl3e) & _PAGE_PRESENT) )
             return false;
     }
+#elif !defined(CONFIG_HVM)
+    return false;
 #else /* SHADOW_PAGING_LEVELS == 3 */
     if ( !(l3e_get_flags(v->arch.paging.shadow.l3table[shadow_l3_linear_offset(linear)])
            & _PAGE_PRESENT) )
@@ -3679,7 +3687,9 @@ sh_update_linear_entries(struct vcpu *v)
          pagetable_get_pfn(v->arch.hvm.monitor_table) == 0 )
         return;
 
-#if SHADOW_PAGING_LEVELS == 4
+#if !defined(CONFIG_HVM)
+    return;
+#elif SHADOW_PAGING_LEVELS == 4
 
     /* For HVM, just need to update the l4e that points to the shadow l4. */
 
@@ -3816,7 +3826,7 @@ sh_update_cr3(struct vcpu *v, int do_loc
 {
     struct domain *d = v->domain;
     mfn_t gmfn;
-#if GUEST_PAGING_LEVELS == 3
+#if GUEST_PAGING_LEVELS == 3 && defined(CONFIG_HVM)
     const guest_l3e_t *gl3e;
     unsigned int i, guest_idx;
 #endif
@@ -3867,7 +3877,7 @@ sh_update_cr3(struct vcpu *v, int do_loc
 #endif
         gmfn = pagetable_get_mfn(v->arch.guest_table);
 
-#if GUEST_PAGING_LEVELS == 3
+#if GUEST_PAGING_LEVELS == 3 && defined(CONFIG_HVM)
     /*
      * On PAE guests we don't use a mapping of the guest's own top-level
      * table.  We cache the current state of that table and shadow that,
@@ -3895,10 +3905,22 @@ sh_update_cr3(struct vcpu *v, int do_loc
     /* We revoke write access to the new guest toplevel page(s) before we
      * replace the old shadow pagetable(s), so that we can safely use the
      * (old) shadow linear maps in the writeable mapping heuristics. */
-#if GUEST_PAGING_LEVELS == 2
-    if ( sh_remove_write_access(d, gmfn, 2, 0) != 0 )
+#if GUEST_PAGING_LEVELS == 4
+    if ( sh_remove_write_access(d, gmfn, 4, 0) != 0 )
         guest_flush_tlb_mask(d, d->dirty_cpumask);
-    sh_set_toplevel_shadow(v, 0, gmfn, SH_type_l2_shadow, sh_make_shadow);
+    sh_set_toplevel_shadow(v, 0, gmfn, SH_type_l4_shadow, sh_make_shadow);
+    if ( !shadow_mode_external(d) && !is_pv_32bit_domain(d) )
+    {
+        mfn_t smfn = pagetable_get_mfn(v->arch.paging.shadow.shadow_table[0]);
+
+        if ( !(v->arch.flags & TF_kernel_mode) && VM_ASSIST(d, m2p_strict) )
+            zap_ro_mpt(smfn);
+        else if ( (v->arch.flags & TF_kernel_mode) &&
+                  !VM_ASSIST(d, m2p_strict) )
+            fill_ro_mpt(smfn);
+    }
+#elif !defined(CONFIG_HVM)
+    ASSERT_UNREACHABLE();
 #elif GUEST_PAGING_LEVELS == 3
     /* PAE guests have four shadow_table entries, based on the
      * current values of the guest's four l3es. */
@@ -3907,7 +3929,8 @@ sh_update_cr3(struct vcpu *v, int do_loc
         gfn_t gl2gfn;
         mfn_t gl2mfn;
         p2m_type_t p2mt;
-        const guest_l3e_t *gl3e = v->arch.paging.shadow.gl3e;
+
+        gl3e = v->arch.paging.shadow.gl3e;
 
         /* First, make all four entries read-only. */
         for ( i = 0; i < 4; i++ )
@@ -3942,25 +3965,16 @@ sh_update_cr3(struct vcpu *v, int do_loc
                 sh_set_toplevel_shadow(v, i, INVALID_MFN, 0, sh_make_shadow);
         }
     }
-#elif GUEST_PAGING_LEVELS == 4
-    if ( sh_remove_write_access(d, gmfn, 4, 0) != 0 )
+#elif GUEST_PAGING_LEVELS == 2
+    if ( sh_remove_write_access(d, gmfn, 2, 0) != 0 )
         guest_flush_tlb_mask(d, d->dirty_cpumask);
-    sh_set_toplevel_shadow(v, 0, gmfn, SH_type_l4_shadow, sh_make_shadow);
-    if ( !shadow_mode_external(d) && !is_pv_32bit_domain(d) )
-    {
-        mfn_t smfn = pagetable_get_mfn(v->arch.paging.shadow.shadow_table[0]);
-
-        if ( !(v->arch.flags & TF_kernel_mode) && VM_ASSIST(d, m2p_strict) )
-            zap_ro_mpt(smfn);
-        else if ( (v->arch.flags & TF_kernel_mode) &&
-                  !VM_ASSIST(d, m2p_strict) )
-            fill_ro_mpt(smfn);
-    }
+    sh_set_toplevel_shadow(v, 0, gmfn, SH_type_l2_shadow, sh_make_shadow);
 #else
 #error This should never happen
 #endif
 
 
+#ifdef CONFIG_HVM
     ///
     /// v->arch.paging.shadow.l3table
     ///
@@ -3986,7 +4000,7 @@ sh_update_cr3(struct vcpu *v, int do_loc
             }
         }
 #endif /* SHADOW_PAGING_LEVELS == 3 */
-
+#endif /* CONFIG_HVM */
 
     ///
     /// v->arch.cr3
@@ -4006,6 +4020,7 @@ sh_update_cr3(struct vcpu *v, int do_loc
 #endif
 
 
+#ifdef CONFIG_HVM
     ///
     /// v->arch.hvm.hw_cr[3]
     ///
@@ -4022,6 +4037,7 @@ sh_update_cr3(struct vcpu *v, int do_loc
 #endif
         hvm_update_guest_cr3(v, noflush);
     }
+#endif /* CONFIG_HVM */
 
     /* Fix up the linear pagetable mappings */
     sh_update_linear_entries(v);
--- a/xen/include/asm-x86/domain.h
+++ b/xen/include/asm-x86/domain.h
@@ -131,15 +131,16 @@ struct shadow_domain {
 
 struct shadow_vcpu {
 #ifdef CONFIG_SHADOW_PAGING
+#ifdef CONFIG_HVM
     /* PAE guests: per-vcpu shadow top-level table */
     l3_pgentry_t l3table[4] __attribute__((__aligned__(32)));
     /* PAE guests: per-vcpu cache of the top-level *guest* entries */
     l3_pgentry_t gl3e[4] __attribute__((__aligned__(32)));
 
     /* shadow(s) of guest (MFN) */
-#ifdef CONFIG_HVM
     pagetable_t shadow_table[4];
 #else
+    /* shadow of guest (MFN) */
     pagetable_t shadow_table[1];
 #endif
 



From xen-devel-bounces@lists.xenproject.org Wed Jul 15 10:01:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jul 2020 10:01:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jveE3-0005hD-Pt; Wed, 15 Jul 2020 10:01:19 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hlgd=A2=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1jveE2-0005gz-0v
 for xen-devel@lists.xenproject.org; Wed, 15 Jul 2020 10:01:18 +0000
X-Inumbo-ID: 1fd2f6f0-c682-11ea-b7bb-bc764e2007e4
Received: from mail-wr1-x431.google.com (unknown [2a00:1450:4864:20::431])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1fd2f6f0-c682-11ea-b7bb-bc764e2007e4;
 Wed, 15 Jul 2020 10:01:17 +0000 (UTC)
Received: by mail-wr1-x431.google.com with SMTP id s10so1769753wrw.12
 for <xen-devel@lists.xenproject.org>; Wed, 15 Jul 2020 03:01:17 -0700 (PDT)
X-Received: by 2002:adf:eec2:: with SMTP id a2mr10369846wrp.127.1594807276683; 
 Wed, 15 Jul 2020 03:01:16 -0700 (PDT)
Received: from CBGR90WXYV0 (54-240-197-235.amazon.com. [54.240.197.235])
 by smtp.gmail.com with ESMTPSA id u2sm2468011wml.16.2020.07.15.03.01.15
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Wed, 15 Jul 2020 03:01:16 -0700 (PDT)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
To: "'Jan Beulich'" <jbeulich@suse.com>,
	<xen-devel@lists.xenproject.org>
References: <416ac9b1-93d1-81a2-be19-d58d509fdfec@suse.com>
 <59f26856-d23d-bb69-0403-39e77acbf85c@suse.com>
In-Reply-To: <59f26856-d23d-bb69-0403-39e77acbf85c@suse.com>
Subject: RE: [PATCH v2 1/2] x86: restore pv_rtc_handler() invocation
Date: Wed, 15 Jul 2020 11:01:15 +0100
Message-ID: <001301d65a8e$e10e1260$a32a3720$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="utf-8"
Content-Transfer-Encoding: quoted-printable
X-Mailer: Microsoft Outlook 16.0
Content-Language: en-gb
Thread-Index: AQH810IKdkrzCSlAezDylIfWz5DUNADle+uqqLRCwlA=
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Reply-To: paul@xen.org
Cc: 'Andrew Cooper' <andrew.cooper3@citrix.com>, 'Wei Liu' <wl@xen.org>,
 =?utf-8?Q?'Roger_Pau_Monn=C3=A9'?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Sent: 15 July 2020 10:47
> To: xen-devel@lists.xenproject.org
> Cc: Andrew Cooper <andrew.cooper3@citrix.com>; Paul Durrant <paul@xen.org>; Wei Liu <wl@xen.org>;
> Roger Pau Monné <roger.pau@citrix.com>
> Subject: [PATCH v2 1/2] x86: restore pv_rtc_handler() invocation
>
> This was lost when making the logic accessible to PVH Dom0.
>
> Fixes: 835d8d69d96a ("x86/rtc: provide mediated access to RTC for PVH dom0")
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>
> --- a/xen/arch/x86/time.c
> +++ b/xen/arch/x86/time.c
> @@ -1160,6 +1160,10 @@ void rtc_guest_write(unsigned int port,
>      case RTC_PORT(1):
>          if ( !ioports_access_permitted(currd, RTC_PORT(0), RTC_PORT(1)) )
>              break;
> +
> +        if ( pv_rtc_handler )
> +            pv_rtc_handler(currd->arch.cmos_idx & 0x7f, data);
> +

This appears to be semantically slightly different to the old code in that it is only done for a write to RTC_PORT(1), whereas it would have been done on a write to either RTC_PORT(0) or RTC_PORT(1) before. Is that of any concern?

  Paul

>          spin_lock_irqsave(&rtc_lock, flags);
>          outb(currd->arch.cmos_idx & 0x7f, RTC_PORT(0));
>          outb(data, RTC_PORT(1));




From xen-devel-bounces@lists.xenproject.org Wed Jul 15 10:02:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jul 2020 10:02:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jveEz-0005nX-4Y; Wed, 15 Jul 2020 10:02:17 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=osgb=A2=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jveEy-0005mg-0g
 for xen-devel@lists.xenproject.org; Wed, 15 Jul 2020 10:02:16 +0000
X-Inumbo-ID: 425ab334-c682-11ea-93a9-12813bfff9fa
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 425ab334-c682-11ea-93a9-12813bfff9fa;
 Wed, 15 Jul 2020 10:02:15 +0000 (UTC)
Authentication-Results: esa2.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
X-SBRS: 2.7
X-MesageID: 22420329
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,354,1589256000"; d="scan'208";a="22420329"
Date: Wed, 15 Jul 2020 12:02:07 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: =?utf-8?Q?Micha=C5=82_Leszczy=C5=84ski?= <michal.leszczynski@cert.pl>
Subject: Re: [PATCH v6 03/11] x86/vmx: add IPT cpu feature
Message-ID: <20200715100207.GV7191@Air-de-Roger>
References: <cover.1594150543.git.michal.leszczynski@cert.pl>
 <4d6eac657d082efaa0e7d141b5c9a07791b31f94.1594150543.git.michal.leszczynski@cert.pl>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <4d6eac657d082efaa0e7d141b5c9a07791b31f94.1594150543.git.michal.leszczynski@cert.pl>
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Julien Grall <julien@xen.org>, Kevin Tian <kevin.tian@intel.com>,
 Stefano Stabellini <sstabellini@kernel.org>, tamas.lengyel@intel.com,
 Jun Nakajima <jun.nakajima@intel.com>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, luwei.kang@intel.com,
 Jan Beulich <jbeulich@suse.com>, xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Tue, Jul 07, 2020 at 09:39:42PM +0200, Michał Leszczyński wrote:
> From: Michal Leszczynski <michal.leszczynski@cert.pl>
> 
> Check if the Intel Processor Trace feature is supported by the current
> processor. Define the vmtrace_supported global variable.

IIRC there was some discussion in previous versions about whether
vmtrace_supported should be globally exposed to all arches. Since I
see the symbol is defined in a common file I assume the Arm maintainers
agree with this approach, and hence it would be helpful to add a note to
the commit message, e.g.:

"The vmtrace_supported global variable is defined in common code with
the expectation that other arches also supporting processor tracing
can make use of it."

> Signed-off-by: Michal Leszczynski <michal.leszczynski@cert.pl>

LGTM:

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

> ---
>  xen/arch/x86/hvm/vmx/vmcs.c                 | 15 ++++++++++++++-
>  xen/common/domain.c                         |  2 ++
>  xen/include/asm-x86/cpufeature.h            |  1 +
>  xen/include/asm-x86/hvm/vmx/vmcs.h          |  1 +
>  xen/include/public/arch-x86/cpufeatureset.h |  1 +
>  xen/include/xen/domain.h                    |  2 ++
>  6 files changed, 21 insertions(+), 1 deletion(-)
> 
> diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
> index ca94c2bedc..3a53553f10 100644
> --- a/xen/arch/x86/hvm/vmx/vmcs.c
> +++ b/xen/arch/x86/hvm/vmx/vmcs.c
> @@ -291,6 +291,20 @@ static int vmx_init_vmcs_config(void)
>          _vmx_cpu_based_exec_control &=
>              ~(CPU_BASED_CR8_LOAD_EXITING | CPU_BASED_CR8_STORE_EXITING);
>  
> +    rdmsrl(MSR_IA32_VMX_MISC, _vmx_misc_cap);
> +
> +    /* Check whether IPT is supported in VMX operation. */
> +    if ( !smp_processor_id() )
> +        vmtrace_supported = cpu_has_ipt &&
> +                            (_vmx_misc_cap & VMX_MISC_PROC_TRACE);

Andrew also suggested setting vmtrace_supported to the value of
cpu_has_ipt during CPU identification and then clearing it here if VMX
doesn't support IPT. Since this implementation won't add support for
PV guests I'm not especially thrilled, and I think this approach is fine
for the time being. If/when PV support is added we will have to
re-arrange this a bit.

Thanks.


From xen-devel-bounces@lists.xenproject.org Wed Jul 15 10:02:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jul 2020 10:02:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jveFC-0005pD-De; Wed, 15 Jul 2020 10:02:30 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=9G22=A2=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jveFB-0005p2-KY
 for xen-devel@lists.xenproject.org; Wed, 15 Jul 2020 10:02:29 +0000
X-Inumbo-ID: 4a527e82-c682-11ea-93a9-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4a527e82-c682-11ea-93a9-12813bfff9fa;
 Wed, 15 Jul 2020 10:02:28 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 22A74B166;
 Wed, 15 Jul 2020 10:02:31 +0000 (UTC)
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH 0/2] VT-d: XSA-321 follow-up
Message-ID: <2ad1607d-0909-f1cc-83bf-2546b28a9128@suse.com>
Date: Wed, 15 Jul 2020 12:02:28 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin Tian <kevin.tian@intel.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

1: install sync_cache hook on demand
2: use clear_page() in alloc_pgtable_maddr()

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jul 15 10:03:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jul 2020 10:03:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jveGc-00061f-S6; Wed, 15 Jul 2020 10:03:58 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=9G22=A2=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jveGc-00061W-3r
 for xen-devel@lists.xenproject.org; Wed, 15 Jul 2020 10:03:58 +0000
X-Inumbo-ID: 7f21ce56-c682-11ea-93a9-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7f21ce56-c682-11ea-93a9-12813bfff9fa;
 Wed, 15 Jul 2020 10:03:57 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id BEB99AF99;
 Wed, 15 Jul 2020 10:03:59 +0000 (UTC)
Subject: [PATCH 1/2] VT-d: install sync_cache hook on demand
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <2ad1607d-0909-f1cc-83bf-2546b28a9128@suse.com>
Message-ID: <0036b69f-0d56-9ac4-1afa-06640c9007de@suse.com>
Date: Wed, 15 Jul 2020 12:03:57 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <2ad1607d-0909-f1cc-83bf-2546b28a9128@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin Tian <kevin.tian@intel.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Instead of checking inside the hook whether any non-coherent IOMMUs are
present, simply install the hook only when this is the case.

To prove that there are no other references to the now dynamically
updated ops structure (and hence that its updating happens early
enough), make it static and rename it at the same time.

Note that this change implies that sync_cache() shouldn't be called
directly unless there are unusual circumstances, as is the case in
alloc_pgtable_maddr(), which gets invoked too early for iommu_ops to
have been set (and therefore we also need to be careful there to
avoid accessing vtd_ops later on, as it lives in .init).

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/drivers/passthrough/vtd/extern.h
+++ b/xen/drivers/passthrough/vtd/extern.h
@@ -28,7 +28,6 @@
 struct pci_ats_dev;
 extern bool_t rwbf_quirk;
 extern const struct iommu_init_ops intel_iommu_init_ops;
-extern const struct iommu_ops intel_iommu_ops;
 
 void print_iommu_regs(struct acpi_drhd_unit *drhd);
 void print_vtd_entries(struct vtd_iommu *iommu, int bus, int devfn, u64 gmfn);
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -59,6 +59,7 @@ bool __read_mostly iommu_snoop = true;
 
 int nr_iommus;
 
+static struct iommu_ops vtd_ops;
 static struct tasklet vtd_fault_tasklet;
 
 static int setup_hwdom_device(u8 devfn, struct pci_dev *);
@@ -146,16 +147,11 @@ static int context_get_domain_id(struct
     return domid;
 }
 
-static int iommus_incoherent;
-
 static void sync_cache(const void *addr, unsigned int size)
 {
     static unsigned long clflush_size = 0;
     const void *end = addr + size;
 
-    if ( !iommus_incoherent )
-        return;
-
     if ( clflush_size == 0 )
         clflush_size = get_cache_line_size();
 
@@ -217,7 +213,8 @@ uint64_t alloc_pgtable_maddr(unsigned lo
         vaddr = __map_domain_page(cur_pg);
         memset(vaddr, 0, PAGE_SIZE);
 
-        sync_cache(vaddr, PAGE_SIZE);
+        if ( (iommu_ops.init ? &iommu_ops : &vtd_ops)->sync_cache )
+            sync_cache(vaddr, PAGE_SIZE);
         unmap_domain_page(vaddr);
         cur_pg++;
     }
@@ -1227,7 +1224,7 @@ int __init iommu_alloc(struct acpi_drhd_
     iommu->nr_pt_levels = agaw_to_level(agaw);
 
     if ( !ecap_coherent(iommu->ecap) )
-        iommus_incoherent = 1;
+        vtd_ops.sync_cache = sync_cache;
 
     /* allocate domain id bitmap */
     nr_dom = cap_ndoms(iommu->cap);
@@ -2737,7 +2734,7 @@ static int __init intel_iommu_quarantine
     return level ? -ENOMEM : rc;
 }
 
-const struct iommu_ops __initconstrel intel_iommu_ops = {
+static struct iommu_ops __initdata vtd_ops = {
     .init = intel_iommu_domain_init,
     .hwdom_init = intel_iommu_hwdom_init,
     .quarantine_init = intel_iommu_quarantine_init,
@@ -2768,11 +2765,10 @@ const struct iommu_ops __initconstrel in
     .iotlb_flush_all = iommu_flush_iotlb_all,
     .get_reserved_device_memory = intel_iommu_get_reserved_device_memory,
     .dump_p2m_table = vtd_dump_p2m_table,
-    .sync_cache = sync_cache,
 };
 
 const struct iommu_init_ops __initconstrel intel_iommu_init_ops = {
-    .ops = &intel_iommu_ops,
+    .ops = &vtd_ops,
     .setup = vtd_setup,
     .supports_x2apic = intel_iommu_supports_eim,
 };



From xen-devel-bounces@lists.xenproject.org Wed Jul 15 10:04:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jul 2020 10:04:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jveGv-00064x-6N; Wed, 15 Jul 2020 10:04:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=9G22=A2=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jveGt-000641-AL
 for xen-devel@lists.xenproject.org; Wed, 15 Jul 2020 10:04:15 +0000
X-Inumbo-ID: 89790662-c682-11ea-8496-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 89790662-c682-11ea-8496-bc764e2007e4;
 Wed, 15 Jul 2020 10:04:14 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 2205FAFB0;
 Wed, 15 Jul 2020 10:04:17 +0000 (UTC)
Subject: [PATCH 2/2] VT-d: use clear_page() in alloc_pgtable_maddr()
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <2ad1607d-0909-f1cc-83bf-2546b28a9128@suse.com>
Message-ID: <14f8b940-252f-9837-8958-5e76e1c3f06f@suse.com>
Date: Wed, 15 Jul 2020 12:04:15 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <2ad1607d-0909-f1cc-83bf-2546b28a9128@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin Tian <kevin.tian@intel.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

For full pages this is (meant to be) more efficient. Also change the
type and reduce the scope of the involved local variable.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -199,7 +199,6 @@ static void sync_cache(const void *addr,
 uint64_t alloc_pgtable_maddr(unsigned long npages, nodeid_t node)
 {
     struct page_info *pg, *cur_pg;
-    u64 *vaddr;
     unsigned int i;
 
     pg = alloc_domheap_pages(NULL, get_order_from_pages(npages),
@@ -210,8 +209,9 @@ uint64_t alloc_pgtable_maddr(unsigned lo
     cur_pg = pg;
     for ( i = 0; i < npages; i++ )
     {
-        vaddr = __map_domain_page(cur_pg);
-        memset(vaddr, 0, PAGE_SIZE);
+        void *vaddr = __map_domain_page(cur_pg);
+
+        clear_page(vaddr);
 
         if ( (iommu_ops.init ? &iommu_ops : &vtd_ops)->sync_cache )
             sync_cache(vaddr, PAGE_SIZE);



From xen-devel-bounces@lists.xenproject.org Wed Jul 15 10:08:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jul 2020 10:08:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jveLD-0006Im-Rn; Wed, 15 Jul 2020 10:08:43 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=9G22=A2=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jveLD-0006Ih-6k
 for xen-devel@lists.xenproject.org; Wed, 15 Jul 2020 10:08:43 +0000
X-Inumbo-ID: 292587d0-c683-11ea-bb8b-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 292587d0-c683-11ea-bb8b-bc764e2007e4;
 Wed, 15 Jul 2020 10:08:42 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id ECED0B6D9;
 Wed, 15 Jul 2020 10:08:44 +0000 (UTC)
Subject: Re: [PATCH v2 1/2] x86: restore pv_rtc_handler() invocation
To: paul@xen.org
References: <416ac9b1-93d1-81a2-be19-d58d509fdfec@suse.com>
 <59f26856-d23d-bb69-0403-39e77acbf85c@suse.com>
 <001301d65a8e$e10e1260$a32a3720$@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <c0764a4e-b3f3-06e7-dbeb-1104f967525e@suse.com>
Date: Wed, 15 Jul 2020 12:08:42 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <001301d65a8e$e10e1260$a32a3720$@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org,
 =?UTF-8?B?J1JvZ2VyIFBhdSBNb25uw6kn?= <roger.pau@citrix.com>,
 'Wei Liu' <wl@xen.org>, 'Andrew Cooper' <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 15.07.2020 12:01, Paul Durrant wrote:
>> -----Original Message-----
>> From: Jan Beulich <jbeulich@suse.com>
>> Sent: 15 July 2020 10:47
>> To: xen-devel@lists.xenproject.org
>> Cc: Andrew Cooper <andrew.cooper3@citrix.com>; Paul Durrant <paul@xen.org>; Wei Liu <wl@xen.org>;
>> Roger Pau Monné <roger.pau@citrix.com>
>> Subject: [PATCH v2 1/2] x86: restore pv_rtc_handler() invocation
>>
>> This was lost when making the logic accessible to PVH Dom0.
>>
>> Fixes: 835d8d69d96a ("x86/rtc: provide mediated access to RTC for PVH dom0")
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>
>> --- a/xen/arch/x86/time.c
>> +++ b/xen/arch/x86/time.c
>> @@ -1160,6 +1160,10 @@ void rtc_guest_write(unsigned int port,
>>      case RTC_PORT(1):
>>          if ( !ioports_access_permitted(currd, RTC_PORT(0), RTC_PORT(1)) )
>>              break;
>> +
>> +        if ( pv_rtc_handler )
>> +            pv_rtc_handler(currd->arch.cmos_idx & 0x7f, data);
>> +
> 
> This appears to be semantically slightly different to the old code in that it is only done for a write to RTC_PORT(1), whereas it would have been done on a write to either RTC_PORT(0) or RTC_PORT(1) before. Is that of any concern?

The old code was (quoting plain 4.13.1)

        else if ( port == RTC_PORT(0) )
        {
            currd->arch.cmos_idx = data;
        }
        else if ( (port == RTC_PORT(1)) &&
                  ioports_access_permitted(currd, RTC_PORT(0), RTC_PORT(1)) )
        {
            unsigned long flags;

            if ( pv_rtc_handler )
                pv_rtc_handler(currd->arch.cmos_idx & 0x7f, data);
            spin_lock_irqsave(&rtc_lock, flags);
            outb(currd->arch.cmos_idx & 0x7f, RTC_PORT(0));
            outb(data, RTC_PORT(1));
            spin_unlock_irqrestore(&rtc_lock, flags);
        }

which I think similarly invoked the hook for RTC_PORT(1) only.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jul 15 10:08:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jul 2020 10:08:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jveLL-0006JF-5S; Wed, 15 Jul 2020 10:08:51 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=osgb=A2=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jveLK-0006J3-Fo
 for xen-devel@lists.xenproject.org; Wed, 15 Jul 2020 10:08:50 +0000
X-Inumbo-ID: 2d58c5f6-c683-11ea-bb8b-bc764e2007e4
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2d58c5f6-c683-11ea-bb8b-bc764e2007e4;
 Wed, 15 Jul 2020 10:08:49 +0000 (UTC)
Authentication-Results: esa3.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
X-SBRS: 2.7
X-MesageID: 22413213
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,354,1589256000"; d="scan'208";a="22413213"
Date: Wed, 15 Jul 2020 12:08:41 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Subject: Re: [PATCH v2 6/7] flask: drop dead compat translation code
Message-ID: <20200715100841.GW7191@Air-de-Roger>
References: <bb6a96c6-b6b1-76ff-f9db-10bec0fb4ab1@suse.com>
 <7711f68d-394e-a74f-81fa-51f8447174ce@suse.com>
 <20200714145800.GO7191@Air-de-Roger>
 <937a51c5-7563-0ac2-4ada-b4dfd7a5d636@suse.com>
 <20200715084115.GS7191@Air-de-Roger>
 <9dc3e0fd-3e1a-5bd0-b8c7-01287e5c2c93@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <9dc3e0fd-3e1a-5bd0-b8c7-01287e5c2c93@suse.com>
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Paul Durrant <paul@xen.org>,
 George Dunlap <George.Dunlap@eu.citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Daniel de Graaf <dgdegra@tycho.nsa.gov>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Wed, Jul 15, 2020 at 10:52:28AM +0200, Jan Beulich wrote:
> On 15.07.2020 10:41, Roger Pau Monné wrote:
> > On Wed, Jul 15, 2020 at 08:42:44AM +0200, Jan Beulich wrote:
> >> On 14.07.2020 16:58, Roger Pau Monné wrote:
> >>> On Wed, Jul 01, 2020 at 12:28:07PM +0200, Jan Beulich wrote:
> >>>> Translation macros aren't needed at all (or else a devicetree_label
> >>>> entry would have been missing), and userlist was removed quite some
> >>>> time ago.
> >>>>
> >>>> No functional change.
> >>>>
> >>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> >>>>
> >>>> --- a/xen/include/xlat.lst
> >>>> +++ b/xen/include/xlat.lst
> >>>> @@ -148,14 +148,11 @@
> >>>>  ?	xenoprof_init			xenoprof.h
> >>>>  ?	xenoprof_passive		xenoprof.h
> >>>>  ?	flask_access			xsm/flask_op.h
> >>>> -!	flask_boolean			xsm/flask_op.h
> >>>>  ?	flask_cache_stats		xsm/flask_op.h
> >>>>  ?	flask_hash_stats		xsm/flask_op.h
> >>>> -!	flask_load			xsm/flask_op.h
> >>>>  ?	flask_ocontext			xsm/flask_op.h
> >>>>  ?	flask_peersid			xsm/flask_op.h
> >>>>  ?	flask_relabel			xsm/flask_op.h
> >>>>  ?	flask_setavc_threshold		xsm/flask_op.h
> >>>>  ?	flask_setenforce		xsm/flask_op.h
> >>>> -!	flask_sid_context		xsm/flask_op.h
> >>>>  ?	flask_transition		xsm/flask_op.h
> >>>
> >>> Shouldn't those become checks then?
> >>
> >> No, checking will never succeed for structures containing
> >> XEN_GUEST_HANDLE(). But there's no point in generating xlat macros
> >> when they're never used. There are two fundamentally different
> >> strategies for handling the compat hypercalls: One is to wrap a
> >> translation layer around the native hypercall. That's where the
> >> xlat macros come into play. The other, used here, is to compile
> >> the entire hypercall function a second time, arranging for the
> >> compat structures to get used in place of the native ones. There
> >> are no xlat macros involved here; all that's needed is correctly
> >> translated structures. (For completeness, x86's MCA hypercall
> >> uses yet another, quite ad hoc strategy for handling, but also not
> >> involving any xlat macro use. Hence the consideration there to
> >> possibly drop the respective lines from the file here.)
> > 
> > Thanks, I think this explanation is helpful and I wonder whether it
> > would be possible to have something along these lines in a file or as a
> > comment somewhere, maybe at the top of xlat.lst?
> 
> To be honest - I'm not sure: Such a comment may indeed be helpful
> to have, but I don't think I can see any single good place for it
> to live. For people editing xlat.lst (a file the existence of which
> many aren't even aware of), this would be a good place. But how
> would others have any chance of running into this comment?

I would add it to xlat.lst rather than not adding it at all.

If we can find a better place to add it I'm all for it, but as I said I
would rather add it somewhere right now than defer adding it until the
perfect place is found.

> > Also could you add a line to the commit message noting that flask code
> > doesn't use any of the translation macros because it follows a
> > different approach to compat handling?
> 
> I've made the sentence start "Translation macros aren't used (and
> hence needed) at all ..." - is that enough of an adjustment?

Yes, that's fine.

Thanks.


From xen-devel-bounces@lists.xenproject.org Wed Jul 15 10:10:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jul 2020 10:10:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jveMq-00078Z-PT; Wed, 15 Jul 2020 10:10:24 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tzA3=A2=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jveMp-00078A-Ea
 for xen-devel@lists.xenproject.org; Wed, 15 Jul 2020 10:10:23 +0000
X-Inumbo-ID: 6205bb88-c683-11ea-8496-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6205bb88-c683-11ea-8496-bc764e2007e4;
 Wed, 15 Jul 2020 10:10:17 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=4j4Vmq+BupEO1No6Z0EgZlfIMC5GPOT4aFBLG4ckspc=; b=pX1Wib2Z6mHWHwPkyRLS2KcG/
 XOp+Qd34C6Gm3wFvHzcCE33UBAoBxdPNRCd/T6BYDPOF4n+2mmCn2UFjNO7TFI6j01AIIKzNwIchk
 +rrS/g7nU0YM6LliqcYb9KFEOC0YN8bhGBoiK+d4nlPyPL8mQVdYZk8s+/x1Yw2MnpbRE=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jveMi-00025F-QT; Wed, 15 Jul 2020 10:10:16 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jveMi-0005jm-7c; Wed, 15 Jul 2020 10:10:16 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jveMi-0003Pp-6x; Wed, 15 Jul 2020 10:10:16 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151916-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-coverity test] 151916: all pass - PUSHED
X-Osstest-Versions-This: xen=1969576661f3e34318e9b0a61a1a38f9a5aee16f
X-Osstest-Versions-That: xen=02d69864b51a4302a148c28d6d391238a6778b4b
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 15 Jul 2020 10:10:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151916 xen-unstable-coverity real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151916/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 xen                  1969576661f3e34318e9b0a61a1a38f9a5aee16f
baseline version:
 xen                  02d69864b51a4302a148c28d6d391238a6778b4b

Last test of basis   151847  2020-07-12 09:18:34 Z    3 days
Testing same since   151916  2020-07-15 09:18:25 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 coverity-amd64                                               pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   02d69864b5..1969576661  1969576661f3e34318e9b0a61a1a38f9a5aee16f -> coverity-tested/smoke


From xen-devel-bounces@lists.xenproject.org Wed Jul 15 10:13:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jul 2020 10:13:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvePc-0007JF-AV; Wed, 15 Jul 2020 10:13:16 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=9G22=A2=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jvePb-0007JA-Qt
 for xen-devel@lists.xenproject.org; Wed, 15 Jul 2020 10:13:15 +0000
X-Inumbo-ID: cb06f0f2-c683-11ea-93ae-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id cb06f0f2-c683-11ea-93ae-12813bfff9fa;
 Wed, 15 Jul 2020 10:13:14 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 8F6D7B6DD;
 Wed, 15 Jul 2020 10:13:16 +0000 (UTC)
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH 0/2] common: XSA-327 follow-up
Message-ID: <fef45e49-bcb4-dc11-8f7f-b2f5e4bf1a73@suse.com>
Date: Wed, 15 Jul 2020 12:13:13 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, George Dunlap <George.Dunlap@eu.citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

There are a few largely cosmetic things that were discussed in the
context of the XSA, but which weren't really XSA material.

1: common: map_vcpu_info() cosmetics
2: evtchn/fifo: don't enforce higher than necessary alignment

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jul 15 10:15:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jul 2020 10:15:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jveRU-0007PX-NG; Wed, 15 Jul 2020 10:15:12 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=9G22=A2=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jveRT-0007PS-UR
 for xen-devel@lists.xenproject.org; Wed, 15 Jul 2020 10:15:11 +0000
X-Inumbo-ID: 10cd5f18-c684-11ea-93af-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 10cd5f18-c684-11ea-93af-12813bfff9fa;
 Wed, 15 Jul 2020 10:15:11 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 9793EB090;
 Wed, 15 Jul 2020 10:15:13 +0000 (UTC)
Subject: [PATCH 1/2] common: map_vcpu_info() cosmetics
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <fef45e49-bcb4-dc11-8f7f-b2f5e4bf1a73@suse.com>
Message-ID: <2a0341c7-3836-a8c0-9516-b6a08e2720ec@suse.com>
Date: Wed, 15 Jul 2020 12:15:10 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <fef45e49-bcb4-dc11-8f7f-b2f5e4bf1a73@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, George Dunlap <George.Dunlap@eu.citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Use ENXIO instead of EINVAL to cover the two cases of the address not
satisfying the requirements. This makes a failure here stand out better
at the call site.

Also add a missing compat-mode related size check: If the sizes
differed, other code in the function would need changing. Accompany this
by a change to the initial sizeof() expression, tying it to the type of
the variable we're actually after (matching e.g. the alignof() added by
XSA-327).

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -1229,17 +1229,18 @@ int map_vcpu_info(struct vcpu *v, unsign
     struct page_info *page;
     unsigned int align;
 
-    if ( offset > (PAGE_SIZE - sizeof(vcpu_info_t)) )
-        return -EINVAL;
+    if ( offset > (PAGE_SIZE - sizeof(*new_info)) )
+        return -ENXIO;
 
 #ifdef CONFIG_COMPAT
+    BUILD_BUG_ON(sizeof(*new_info) != sizeof(new_info->compat));
     if ( has_32bit_shinfo(d) )
         align = alignof(new_info->compat);
     else
 #endif
         align = alignof(*new_info);
     if ( offset & (align - 1) )
-        return -EINVAL;
+        return -ENXIO;
 
     if ( !mfn_eq(v->vcpu_info_mfn, INVALID_MFN) )
         return -EINVAL;



From xen-devel-bounces@lists.xenproject.org Wed Jul 15 10:16:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jul 2020 10:16:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jveSj-0007W1-1m; Wed, 15 Jul 2020 10:16:29 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=9G22=A2=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jveSi-0007Vs-A2
 for xen-devel@lists.xenproject.org; Wed, 15 Jul 2020 10:16:28 +0000
X-Inumbo-ID: 3e600278-c684-11ea-93af-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 3e600278-c684-11ea-93af-12813bfff9fa;
 Wed, 15 Jul 2020 10:16:27 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 19FADB090;
 Wed, 15 Jul 2020 10:16:30 +0000 (UTC)
Subject: [PATCH 2/2] evtchn/fifo: don't enforce higher than necessary alignment
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <fef45e49-bcb4-dc11-8f7f-b2f5e4bf1a73@suse.com>
Message-ID: <e47a9ef5-5f4c-1ca6-1b31-f7b10516e5ed@suse.com>
Date: Wed, 15 Jul 2020 12:16:27 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <fef45e49-bcb4-dc11-8f7f-b2f5e4bf1a73@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, George Dunlap <George.Dunlap@eu.citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Neither the code nor the original commit provides any justification for
the need to 8-byte align the struct in all cases. Enforce just as much
alignment as the structure actually needs - 4 bytes - by using alignof()
instead of a literal number.

Take the opportunity and also
- add so far missing validation that native and compat mode layouts of
  the structures actually match,
- tie sizeof() expressions to the types of the fields we're actually
  after, rather than specifying the type explicitly (which in the
  general case risks a disconnect, even if there's close to zero risk in
  this particular case),
- use ENXIO instead of EINVAL for the two cases of the address not
  satisfying the requirements, which makes a failure here stand out
  better at the call site.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
I question the need for the array_index_nospec() here. Or else I'd
expect map_vcpu_info() would also need the same.

--- a/xen/common/event_fifo.c
+++ b/xen/common/event_fifo.c
@@ -504,6 +504,16 @@ static void setup_ports(struct domain *d
     }
 }
 
+#ifdef CONFIG_COMPAT
+
+#include <compat/event_channel.h>
+
+#define xen_evtchn_fifo_control_block evtchn_fifo_control_block
+CHECK_evtchn_fifo_control_block;
+#undef xen_evtchn_fifo_control_block
+
+#endif
+
 int evtchn_fifo_init_control(struct evtchn_init_control *init_control)
 {
     struct domain *d = current->domain;
@@ -523,19 +533,20 @@ int evtchn_fifo_init_control(struct evtc
         return -ENOENT;
 
     /* Must not cross page boundary. */
-    if ( offset > (PAGE_SIZE - sizeof(evtchn_fifo_control_block_t)) )
-        return -EINVAL;
+    if ( offset > (PAGE_SIZE - sizeof(*v->evtchn_fifo->control_block)) )
+        return -ENXIO;
 
     /*
      * Make sure the guest controlled value offset is bounded even during
      * speculative execution.
      */
     offset = array_index_nospec(offset,
-                           PAGE_SIZE - sizeof(evtchn_fifo_control_block_t) + 1);
+                                PAGE_SIZE -
+                                sizeof(*v->evtchn_fifo->control_block) + 1);
 
-    /* Must be 8-bytes aligned. */
-    if ( offset & (8 - 1) )
-        return -EINVAL;
+    /* Must be suitably aligned. */
+    if ( offset & (alignof(*v->evtchn_fifo->control_block) - 1) )
+        return -ENXIO;
 
     spin_lock(&d->event_lock);
 
--- a/xen/include/xlat.lst
+++ b/xen/include/xlat.lst
@@ -46,6 +46,7 @@
 ?	evtchn_bind_vcpu		event_channel.h
 ?	evtchn_bind_virq		event_channel.h
 ?	evtchn_close			event_channel.h
+?	evtchn_fifo_control_block	event_channel.h
 ?	evtchn_op			event_channel.h
 ?	evtchn_send			event_channel.h
 ?	evtchn_status			event_channel.h



From xen-devel-bounces@lists.xenproject.org Wed Jul 15 10:29:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jul 2020 10:29:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvef9-0008UM-8V; Wed, 15 Jul 2020 10:29:19 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=9G22=A2=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jvef8-0008UH-7t
 for xen-devel@lists.xenproject.org; Wed, 15 Jul 2020 10:29:18 +0000
X-Inumbo-ID: 092c57ee-c686-11ea-bca7-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 092c57ee-c686-11ea-bca7-bc764e2007e4;
 Wed, 15 Jul 2020 10:29:17 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id C7728B57B;
 Wed, 15 Jul 2020 10:29:19 +0000 (UTC)
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH 0/8] x86: build adjustments
Message-ID: <3375cacd-d3b7-9f06-44a7-4b684b6a77d6@suse.com>
Date: Wed, 15 Jul 2020 12:29:17 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

This is in part just a loosely connected set of changes, aiming in
particular at further reducing the shim binary.

Patch 3 depends functionally (but not contextually in any way) on the
previously submitted "x86/shadow: dirty VRAM tracking is needed for
HVM only". I could imagine, though, that this ends up being the most
controversial part of the series.

1: x86/EFI: sanitize build logic
2: x86: don't build with EFI support in shim-exclusive mode
3: x86: shrink struct arch_{vcpu,domain} when !HVM
4: Arm: prune #include-s needed by domain.h
5: bitmap: move to/from xenctl_bitmap conversion helpers
6: x86: move domain_cpu_policy_changed()
7: x86: move cpu_{up,down}_helper()
8: x86: don't include domctl and alike in shim-exclusive builds

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jul 15 10:37:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jul 2020 10:37:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jveme-0000v8-W6; Wed, 15 Jul 2020 10:37:04 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=9G22=A2=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jvemc-0000v2-Tn
 for xen-devel@lists.xenproject.org; Wed, 15 Jul 2020 10:37:02 +0000
X-Inumbo-ID: 1e08857e-c687-11ea-bb8b-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1e08857e-c687-11ea-bb8b-bc764e2007e4;
 Wed, 15 Jul 2020 10:37:02 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 4EE4EB1BA;
 Wed, 15 Jul 2020 10:37:04 +0000 (UTC)
Subject: [PATCH 1/8] x86/EFI: sanitize build logic
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <3375cacd-d3b7-9f06-44a7-4b684b6a77d6@suse.com>
Message-ID: <1a9b1a94-62c0-7a45-9446-9dd8b2b56f0f@suse.com>
Date: Wed, 15 Jul 2020 12:37:02 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <3375cacd-d3b7-9f06-44a7-4b684b6a77d6@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

With the changes done over time, as far as linking goes the only special
thing about building with EFI support enabled is the need for the dummy
relocations object for xen.gz, uniformly in all build stages. All other
efi/*.o can be consumed from the built_in*.o files.

In efi/Makefile, besides moving relocs-dummy.o to "extra", also properly
split between obj-y and obj-bin-y.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/Makefile
+++ b/xen/arch/x86/Makefile
@@ -113,28 +113,35 @@ $(TARGET): $(TARGET)-syms $(efi-y) boot/
 		{ echo "No Multiboot2 header found" >&2; false; }
 	mv $(TMP) $(TARGET)
 
+# Check if the compiler supports the MS ABI.
+export XEN_BUILD_EFI := $(shell $(CC) $(XEN_CFLAGS) -c efi/check.c -o efi/check.o 2>/dev/null && echo y)
+# Check if the linker supports PE.
+XEN_BUILD_PE := $(if $(XEN_BUILD_EFI),$(shell $(LD) -mi386pep --subsystem=10 -o efi/check.efi efi/check.o 2>/dev/null && echo y))
+CFLAGS-$(XEN_BUILD_EFI) += -DXEN_BUILD_EFI
+
 ALL_OBJS := $(BASEDIR)/arch/x86/boot/built_in.o $(BASEDIR)/arch/x86/efi/built_in.o $(ALL_OBJS)
+EFI_OBJS-$(XEN_BUILD_EFI) := efi/relocs-dummy.o
 
 ifeq ($(CONFIG_LTO),y)
 # Gather all LTO objects together
 prelink_lto.o: $(ALL_OBJS)
 	$(LD_LTO) -r -o $@ $^
 
-prelink-efi_lto.o: $(ALL_OBJS) efi/runtime.o efi/compat.o
-	$(LD_LTO) -r -o $@ $(filter-out %/efi/built_in.o,$^)
+prelink-efi_lto.o: $(ALL_OBJS)
+	$(LD_LTO) -r -o $@ $^
 
 # Link it with all the binary objects
-prelink.o: $(patsubst %/built_in.o,%/built_in_bin.o,$(ALL_OBJS)) prelink_lto.o
+prelink.o: $(patsubst %/built_in.o,%/built_in_bin.o,$(ALL_OBJS)) prelink_lto.o $(EFI_OBJS-y)
 	$(LD) $(XEN_LDFLAGS) -r -o $@ $^
 
-prelink-efi.o: $(patsubst %/built_in.o,%/built_in_bin.o,$(ALL_OBJS)) prelink-efi_lto.o efi/boot.init.o
+prelink-efi.o: $(patsubst %/built_in.o,%/built_in_bin.o,$(ALL_OBJS)) prelink-efi_lto.o
 	$(LD) $(XEN_LDFLAGS) -r -o $@ $^
 else
-prelink.o: $(ALL_OBJS)
+prelink.o: $(ALL_OBJS) $(EFI_OBJS-y)
 	$(LD) $(XEN_LDFLAGS) -r -o $@ $^
 
-prelink-efi.o: $(ALL_OBJS) efi/boot.init.o efi/runtime.o efi/compat.o
-	$(LD) $(XEN_LDFLAGS) -r -o $@ $(filter-out %/efi/built_in.o,$^)
+prelink-efi.o: $(ALL_OBJS)
+	$(LD) $(XEN_LDFLAGS) -r -o $@ $^
 endif
 
 $(TARGET)-syms: prelink.o xen.lds
@@ -171,12 +178,6 @@ EFI_LDFLAGS += --minor-image-version=$(X
 EFI_LDFLAGS += --major-os-version=2 --minor-os-version=0
 EFI_LDFLAGS += --major-subsystem-version=2 --minor-subsystem-version=0
 
-# Check if the compiler supports the MS ABI.
-export XEN_BUILD_EFI := $(shell $(CC) $(XEN_CFLAGS) -c efi/check.c -o efi/check.o 2>/dev/null && echo y)
-# Check if the linker supports PE.
-XEN_BUILD_PE := $(if $(XEN_BUILD_EFI),$(shell $(LD) -mi386pep --subsystem=10 -o efi/check.efi efi/check.o 2>/dev/null && echo y))
-CFLAGS-$(XEN_BUILD_EFI) += -DXEN_BUILD_EFI
-
 $(TARGET).efi: VIRT_BASE = 0x$(shell $(NM) efi/relocs-dummy.o | sed -n 's, A VIRT_START$$,,p')
 $(TARGET).efi: ALT_BASE = 0x$(shell $(NM) efi/relocs-dummy.o | sed -n 's, A ALT_START$$,,p')
 
--- a/xen/arch/x86/efi/Makefile
+++ b/xen/arch/x86/efi/Makefile
@@ -14,6 +14,7 @@ $(call cc-option-add,cflags-stack-bounda
 $(EFIOBJ): CFLAGS-stack-boundary := $(cflags-stack-boundary)
 
 obj-y := stub.o
-obj-$(XEN_BUILD_EFI) := $(EFIOBJ) relocs-dummy.o
-extra-$(XEN_BUILD_EFI) += buildid.o
+obj-$(XEN_BUILD_EFI) := $(filter-out %.init.o,$(EFIOBJ))
+obj-bin-$(XEN_BUILD_EFI) := $(filter %.init.o,$(EFIOBJ))
+extra-$(XEN_BUILD_EFI) += buildid.o relocs-dummy.o
 nocov-$(XEN_BUILD_EFI) += stub.o



From xen-devel-bounces@lists.xenproject.org Wed Jul 15 10:37:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jul 2020 10:37:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvenI-0000yY-9S; Wed, 15 Jul 2020 10:37:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=9G22=A2=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jvenH-0000yS-Oc
 for xen-devel@lists.xenproject.org; Wed, 15 Jul 2020 10:37:43 +0000
X-Inumbo-ID: 369e6d10-c687-11ea-8496-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 369e6d10-c687-11ea-8496-bc764e2007e4;
 Wed, 15 Jul 2020 10:37:43 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 98506B1F7;
 Wed, 15 Jul 2020 10:37:45 +0000 (UTC)
Subject: [PATCH 2/8] x86: don't build with EFI support in shim-exclusive mode
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <3375cacd-d3b7-9f06-44a7-4b684b6a77d6@suse.com>
Message-ID: <3354347f-b931-2a33-48b8-51eabf654ca2@suse.com>
Date: Wed, 15 Jul 2020 12:37:43 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <3375cacd-d3b7-9f06-44a7-4b684b6a77d6@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

There's no need for xen.efi at all, and there's also no need for EFI
support in xen.gz since the shim runs in PVH mode, i.e. without any
firmware (and hence, by implication, also without an EFI one).

The slightly odd-looking use of $(space) is to ensure the new ifneq()
evaluates consistently between "build" and "install" invocations of
make.
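
The effect can be illustrated with a stand-alone sketch (hypothetical file and variable values, not the actual Xen Makefile): GNU make treats a whitespace-only value as non-empty in ifneq(), so appending $(space) keeps the conditional's result stable even when the $(shell ...) part yields nothing.

```make
# Minimal sketch (hypothetical names, not the actual Xen Makefile):
empty :=
space := $(empty) $(empty)

# $(shell ...) may yield nothing in some invocations (here it always
# does); appending $(space) keeps the variable non-empty either way,
# so the ifneq below evaluates the same in "build" and "install" runs.
efi-y := $(shell false && echo xen.efi) $(space)

# A shim-exclusive config would instead override with a genuinely
# empty value:
#   efi-y :=
# and the conditional section below would then be skipped.

ifneq ($(efi-y),)
$(info running EFI tool chain checks)
endif
```

This is the same reason many Makefiles wrap such tests in $(strip ...): only a truly empty value, not a whitespace-only one, compares equal to the empty argument.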

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
There are further anomalies associated with the need to use $(space)
here:
- xen.efi rebuilding gets suppressed when installing (typically as
  root) from a non-root-owned tree. I think we should suppress
  re-building of xen.gz in this case as well, since the available
  tool chains may vary (and hence a partial or full re-build may
  mistakenly occur).
- xen.lds (re-)generation has a dependency issue: the value of
  XEN_BUILD_EFI changing between builds (as would happen on a
  pre-built tree with a shim-exclusive config, to which this patch
  is then applied) does not cause it to be re-built. Anthony's
  switch to Linux's build system will address this, as far as I can
  tell, so I didn't see a need to supply a separate patch.

--- a/xen/arch/x86/Makefile
+++ b/xen/arch/x86/Makefile
@@ -80,7 +80,9 @@ x86_emulate.o: x86_emulate/x86_emulate.c
 
 efi-y := $(shell if [ ! -r $(BASEDIR)/include/xen/compile.h -o \
                       -O $(BASEDIR)/include/xen/compile.h ]; then \
-                         echo '$(TARGET).efi'; fi)
+                         echo '$(TARGET).efi'; fi) \
+         $(space)
+efi-$(CONFIG_PV_SHIM_EXCLUSIVE) :=
 
 ifneq ($(build_id_linker),)
 notes_phdrs = --notes
@@ -113,11 +115,13 @@ $(TARGET): $(TARGET)-syms $(efi-y) boot/
 		{ echo "No Multiboot2 header found" >&2; false; }
 	mv $(TMP) $(TARGET)
 
+ifneq ($(efi-y),)
 # Check if the compiler supports the MS ABI.
 export XEN_BUILD_EFI := $(shell $(CC) $(XEN_CFLAGS) -c efi/check.c -o efi/check.o 2>/dev/null && echo y)
 # Check if the linker supports PE.
 XEN_BUILD_PE := $(if $(XEN_BUILD_EFI),$(shell $(LD) -mi386pep --subsystem=10 -o efi/check.efi efi/check.o 2>/dev/null && echo y))
 CFLAGS-$(XEN_BUILD_EFI) += -DXEN_BUILD_EFI
+endif
 
 ALL_OBJS := $(BASEDIR)/arch/x86/boot/built_in.o $(BASEDIR)/arch/x86/efi/built_in.o $(ALL_OBJS)
 EFI_OBJS-$(XEN_BUILD_EFI) := efi/relocs-dummy.o



From xen-devel-bounces@lists.xenproject.org Wed Jul 15 10:38:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jul 2020 10:38:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvenq-000156-Jx; Wed, 15 Jul 2020 10:38:18 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=9G22=A2=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jvenp-00014E-Fi
 for xen-devel@lists.xenproject.org; Wed, 15 Jul 2020 10:38:17 +0000
X-Inumbo-ID: 4a7fa0cf-c687-11ea-93b5-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4a7fa0cf-c687-11ea-93b5-12813bfff9fa;
 Wed, 15 Jul 2020 10:38:16 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id ED61DB1BA;
 Wed, 15 Jul 2020 10:38:18 +0000 (UTC)
Subject: [PATCH 3/8] x86: shrink struct arch_{vcpu,domain} when !HVM
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <3375cacd-d3b7-9f06-44a7-4b684b6a77d6@suse.com>
Message-ID: <4a9040f0-dbb1-18ce-ebea-2bfe5d68bf86@suse.com>
Date: Wed, 15 Jul 2020 12:38:16 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <3375cacd-d3b7-9f06-44a7-4b684b6a77d6@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

While this won't affect overall memory overhead (struct vcpu as well as
struct domain get allocated as single pages) nor code size (the offsets
into the base structures are too large to be representable as signed 8-
bit displacements), it'll allow the tail of struct pv_{domain,vcpu} to
share a cache line with subsequent struct arch_{domain,vcpu} fields.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
RFC: There is a risk associated with this: if we still have code
     somewhere accessing the HVM parts of the structures without a prior
     type check of the guest, this is going to end up worse than the so
     far not uncommon case of the access simply going to space unused by
     PV. We may therefore want to consider whether to further restrict
     when this conversion to union gets done.
     And of course there's also the risk of future compilers complaining
     about this abuse of unions. But this is limited to code that's dead
     in !HVM configs, so it is only an apparent problem.

--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -709,7 +709,7 @@ long arch_do_domctl(
         unsigned int fmp = domctl->u.ioport_mapping.first_mport;
         unsigned int np = domctl->u.ioport_mapping.nr_ports;
         unsigned int add = domctl->u.ioport_mapping.add_mapping;
-        struct hvm_domain *hvm;
+        hvm_domain_t *hvm;
         struct g2m_ioport *g2m_ioport;
         int found = 0;
 
--- a/xen/include/asm-x86/domain.h
+++ b/xen/include/asm-x86/domain.h
@@ -310,7 +310,7 @@ struct arch_domain
 
     union {
         struct pv_domain pv;
-        struct hvm_domain hvm;
+        hvm_domain_t hvm;
     };
 
     struct paging_domain paging;
@@ -573,7 +573,7 @@ struct arch_vcpu
     /* Virtual Machine Extensions */
     union {
         struct pv_vcpu pv;
-        struct hvm_vcpu hvm;
+        hvm_vcpu_t hvm;
     };
 
     pagetable_t guest_table_user;       /* (MFN) x86/64 user-space pagetable */
--- a/xen/include/asm-x86/hvm/domain.h
+++ b/xen/include/asm-x86/hvm/domain.h
@@ -99,7 +99,13 @@ struct hvm_pi_ops {
 
 #define MAX_NR_IOREQ_SERVERS 8
 
-struct hvm_domain {
+typedef
+#ifdef CONFIG_HVM
+struct
+#else
+union
+#endif
+hvm_domain {
     /* Guest page range used for non-default ioreq servers */
     struct {
         unsigned long base;
@@ -203,7 +209,7 @@ struct hvm_domain {
 #ifdef CONFIG_MEM_SHARING
     struct mem_sharing_domain mem_sharing;
 #endif
-};
+} hvm_domain_t;
 
 #endif /* __ASM_X86_HVM_DOMAIN_H__ */
 
--- a/xen/include/asm-x86/hvm/vcpu.h
+++ b/xen/include/asm-x86/hvm/vcpu.h
@@ -149,7 +149,13 @@ struct altp2mvcpu {
 
 #define vcpu_altp2m(v) ((v)->arch.hvm.avcpu)
 
-struct hvm_vcpu {
+typedef
+#ifdef CONFIG_HVM
+struct
+#else
+union
+#endif
+hvm_vcpu {
     /* Guest control-register and EFER values, just as the guest sees them. */
     unsigned long       guest_cr[5];
     unsigned long       guest_efer;
@@ -213,7 +219,7 @@ struct hvm_vcpu {
     struct x86_event     inject_event;
 
     struct viridian_vcpu *viridian;
-};
+} hvm_vcpu_t;
 
 #endif /* __ASM_X86_HVM_VCPU_H__ */
 



From xen-devel-bounces@lists.xenproject.org Wed Jul 15 10:39:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jul 2020 10:39:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jveoe-0001CJ-3S; Wed, 15 Jul 2020 10:39:08 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=9G22=A2=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jveoc-0001CA-Eb
 for xen-devel@lists.xenproject.org; Wed, 15 Jul 2020 10:39:06 +0000
X-Inumbo-ID: 67d5a574-c687-11ea-93b5-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 67d5a574-c687-11ea-93b5-12813bfff9fa;
 Wed, 15 Jul 2020 10:39:05 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 322B1B1F7;
 Wed, 15 Jul 2020 10:39:08 +0000 (UTC)
Subject: [PATCH 4/8] Arm: prune #include-s needed by domain.h
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <3375cacd-d3b7-9f06-44a7-4b684b6a77d6@suse.com>
Message-ID: <150525bb-1c48-c331-3212-eff18bc4c13d@suse.com>
Date: Wed, 15 Jul 2020 12:39:06 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <3375cacd-d3b7-9f06-44a7-4b684b6a77d6@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Julien Grall <julien@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

asm/domain.h is a dependency of xen/sched.h, and hence should not itself
include xen/sched.h. Nor should any of the other #include-s used by it.
While at it, also drop two other #include-s that aren't needed by this
particular header.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/include/asm-arm/domain.h
+++ b/xen/include/asm-arm/domain.h
@@ -2,7 +2,7 @@
 #define __ASM_DOMAIN_H__
 
 #include <xen/cache.h>
-#include <xen/sched.h>
+#include <xen/timer.h>
 #include <asm/page.h>
 #include <asm/p2m.h>
 #include <asm/vfp.h>
@@ -11,8 +11,6 @@
 #include <asm/vgic.h>
 #include <asm/vpl011.h>
 #include <public/hvm/params.h>
-#include <xen/serial.h>
-#include <xen/rbtree.h>
 
 struct hvm_domain
 {
--- a/xen/include/asm-arm/vfp.h
+++ b/xen/include/asm-arm/vfp.h
@@ -1,7 +1,7 @@
 #ifndef _ASM_VFP_H
 #define _ASM_VFP_H
 
-#include <xen/sched.h>
+struct vcpu;
 
 #if defined(CONFIG_ARM_32)
 # include <asm/arm32/vfp.h>



From xen-devel-bounces@lists.xenproject.org Wed Jul 15 10:40:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jul 2020 10:40:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvepu-0001yt-Ez; Wed, 15 Jul 2020 10:40:26 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=9G22=A2=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jveps-0001yn-OI
 for xen-devel@lists.xenproject.org; Wed, 15 Jul 2020 10:40:24 +0000
X-Inumbo-ID: 95feb65c-c687-11ea-93b5-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 95feb65c-c687-11ea-93b5-12813bfff9fa;
 Wed, 15 Jul 2020 10:40:23 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 97A66B1F7;
 Wed, 15 Jul 2020 10:40:25 +0000 (UTC)
Subject: [PATCH 5/8] bitmap: move to/from xenctl_bitmap conversion helpers
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <3375cacd-d3b7-9f06-44a7-4b684b6a77d6@suse.com>
Message-ID: <5835147f-8428-1d74-7d6e-bbb5522289c7@suse.com>
Date: Wed, 15 Jul 2020 12:40:23 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <3375cacd-d3b7-9f06-44a7-4b684b6a77d6@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, George Dunlap <George.Dunlap@eu.citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

A subsequent change will exclude domctl.c from getting built for a
particular configuration, yet the two functions get used from elsewhere.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/common/bitmap.c
+++ b/xen/common/bitmap.c
@@ -9,6 +9,9 @@
 #include <xen/errno.h>
 #include <xen/bitmap.h>
 #include <xen/bitops.h>
+#include <xen/cpumask.h>
+#include <xen/domain.h>
+#include <xen/guest_access.h>
 #include <asm/byteorder.h>
 
 /*
@@ -384,3 +387,87 @@ void bitmap_byte_to_long(unsigned long *
 }
 
 #endif
+
+int bitmap_to_xenctl_bitmap(struct xenctl_bitmap *xenctl_bitmap,
+                            const unsigned long *bitmap, unsigned int nbits)
+{
+    unsigned int guest_bytes, copy_bytes, i;
+    uint8_t zero = 0;
+    int err = 0;
+    uint8_t *bytemap = xmalloc_array(uint8_t, (nbits + 7) / 8);
+
+    if ( !bytemap )
+        return -ENOMEM;
+
+    guest_bytes = (xenctl_bitmap->nr_bits + 7) / 8;
+    copy_bytes  = min_t(unsigned int, guest_bytes, (nbits + 7) / 8);
+
+    bitmap_long_to_byte(bytemap, bitmap, nbits);
+
+    if ( copy_bytes != 0 )
+        if ( copy_to_guest(xenctl_bitmap->bitmap, bytemap, copy_bytes) )
+            err = -EFAULT;
+
+    for ( i = copy_bytes; !err && i < guest_bytes; i++ )
+        if ( copy_to_guest_offset(xenctl_bitmap->bitmap, i, &zero, 1) )
+            err = -EFAULT;
+
+    xfree(bytemap);
+
+    return err;
+}
+
+int xenctl_bitmap_to_bitmap(unsigned long *bitmap,
+                            const struct xenctl_bitmap *xenctl_bitmap,
+                            unsigned int nbits)
+{
+    unsigned int guest_bytes, copy_bytes;
+    int err = 0;
+    uint8_t *bytemap = xzalloc_array(uint8_t, (nbits + 7) / 8);
+
+    if ( !bytemap )
+        return -ENOMEM;
+
+    guest_bytes = (xenctl_bitmap->nr_bits + 7) / 8;
+    copy_bytes  = min_t(unsigned int, guest_bytes, (nbits + 7) / 8);
+
+    if ( copy_bytes != 0 )
+    {
+        if ( copy_from_guest(bytemap, xenctl_bitmap->bitmap, copy_bytes) )
+            err = -EFAULT;
+        if ( (xenctl_bitmap->nr_bits & 7) && (guest_bytes == copy_bytes) )
+            bytemap[guest_bytes-1] &= ~(0xff << (xenctl_bitmap->nr_bits & 7));
+    }
+
+    if ( !err )
+        bitmap_byte_to_long(bitmap, bytemap, nbits);
+
+    xfree(bytemap);
+
+    return err;
+}
+
+int cpumask_to_xenctl_bitmap(struct xenctl_bitmap *xenctl_cpumap,
+                             const cpumask_t *cpumask)
+{
+    return bitmap_to_xenctl_bitmap(xenctl_cpumap, cpumask_bits(cpumask),
+                                   nr_cpu_ids);
+}
+
+int xenctl_bitmap_to_cpumask(cpumask_var_t *cpumask,
+                             const struct xenctl_bitmap *xenctl_cpumap)
+{
+    int err = 0;
+
+    if ( alloc_cpumask_var(cpumask) ) {
+        err = xenctl_bitmap_to_bitmap(cpumask_bits(*cpumask), xenctl_cpumap,
+                                      nr_cpu_ids);
+        /* In case of error, cleanup is up to us, as the caller won't care! */
+        if ( err )
+            free_cpumask_var(*cpumask);
+    }
+    else
+        err = -ENOMEM;
+
+    return err;
+}
--- a/xen/common/domctl.c
+++ b/xen/common/domctl.c
@@ -34,91 +34,6 @@
 
 static DEFINE_SPINLOCK(domctl_lock);
 
-static int bitmap_to_xenctl_bitmap(struct xenctl_bitmap *xenctl_bitmap,
-                                   const unsigned long *bitmap,
-                                   unsigned int nbits)
-{
-    unsigned int guest_bytes, copy_bytes, i;
-    uint8_t zero = 0;
-    int err = 0;
-    uint8_t *bytemap = xmalloc_array(uint8_t, (nbits + 7) / 8);
-
-    if ( !bytemap )
-        return -ENOMEM;
-
-    guest_bytes = (xenctl_bitmap->nr_bits + 7) / 8;
-    copy_bytes  = min_t(unsigned int, guest_bytes, (nbits + 7) / 8);
-
-    bitmap_long_to_byte(bytemap, bitmap, nbits);
-
-    if ( copy_bytes != 0 )
-        if ( copy_to_guest(xenctl_bitmap->bitmap, bytemap, copy_bytes) )
-            err = -EFAULT;
-
-    for ( i = copy_bytes; !err && i < guest_bytes; i++ )
-        if ( copy_to_guest_offset(xenctl_bitmap->bitmap, i, &zero, 1) )
-            err = -EFAULT;
-
-    xfree(bytemap);
-
-    return err;
-}
-
-int xenctl_bitmap_to_bitmap(unsigned long *bitmap,
-                            const struct xenctl_bitmap *xenctl_bitmap,
-                            unsigned int nbits)
-{
-    unsigned int guest_bytes, copy_bytes;
-    int err = 0;
-    uint8_t *bytemap = xzalloc_array(uint8_t, (nbits + 7) / 8);
-
-    if ( !bytemap )
-        return -ENOMEM;
-
-    guest_bytes = (xenctl_bitmap->nr_bits + 7) / 8;
-    copy_bytes  = min_t(unsigned int, guest_bytes, (nbits + 7) / 8);
-
-    if ( copy_bytes != 0 )
-    {
-        if ( copy_from_guest(bytemap, xenctl_bitmap->bitmap, copy_bytes) )
-            err = -EFAULT;
-        if ( (xenctl_bitmap->nr_bits & 7) && (guest_bytes == copy_bytes) )
-            bytemap[guest_bytes-1] &= ~(0xff << (xenctl_bitmap->nr_bits & 7));
-    }
-
-    if ( !err )
-        bitmap_byte_to_long(bitmap, bytemap, nbits);
-
-    xfree(bytemap);
-
-    return err;
-}
-
-int cpumask_to_xenctl_bitmap(struct xenctl_bitmap *xenctl_cpumap,
-                             const cpumask_t *cpumask)
-{
-    return bitmap_to_xenctl_bitmap(xenctl_cpumap, cpumask_bits(cpumask),
-                                   nr_cpu_ids);
-}
-
-int xenctl_bitmap_to_cpumask(cpumask_var_t *cpumask,
-                             const struct xenctl_bitmap *xenctl_cpumap)
-{
-    int err = 0;
-
-    if ( alloc_cpumask_var(cpumask) ) {
-        err = xenctl_bitmap_to_bitmap(cpumask_bits(*cpumask), xenctl_cpumap,
-                                      nr_cpu_ids);
-        /* In case of error, cleanup is up to us, as the caller won't care! */
-        if ( err )
-            free_cpumask_var(*cpumask);
-    }
-    else
-        err = -ENOMEM;
-
-    return err;
-}
-
 static int nodemask_to_xenctl_bitmap(struct xenctl_bitmap *xenctl_nodemap,
                                      const nodemask_t *nodemask)
 {
--- a/xen/include/xen/domain.h
+++ b/xen/include/xen/domain.h
@@ -30,6 +30,8 @@ void arch_get_domain_info(const struct d
 int xenctl_bitmap_to_bitmap(unsigned long *bitmap,
                             const struct xenctl_bitmap *xenctl_bitmap,
                             unsigned int nbits);
+int bitmap_to_xenctl_bitmap(struct xenctl_bitmap *xenctl_bitmap,
+                            const unsigned long *bitmap, unsigned int nbits);
 
 /*
  * Arch-specifics.


From xen-devel-bounces@lists.xenproject.org Wed Jul 15 10:40:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jul 2020 10:40:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jveqM-000222-P9; Wed, 15 Jul 2020 10:40:54 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=9G22=A2=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jveqM-00021v-4J
 for xen-devel@lists.xenproject.org; Wed, 15 Jul 2020 10:40:54 +0000
X-Inumbo-ID: a7d60e16-c687-11ea-93b5-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a7d60e16-c687-11ea-93b5-12813bfff9fa;
 Wed, 15 Jul 2020 10:40:53 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 8A10BB1F7;
 Wed, 15 Jul 2020 10:40:55 +0000 (UTC)
Subject: [PATCH 6/8] x86: move domain_cpu_policy_changed()
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <3375cacd-d3b7-9f06-44a7-4b684b6a77d6@suse.com>
Message-ID: <2ec231cd-a6bb-af88-1019-695eefced925@suse.com>
Date: Wed, 15 Jul 2020 12:40:53 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <3375cacd-d3b7-9f06-44a7-4b684b6a77d6@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

This is in preparation for making the building of domctl.c conditional.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -294,6 +294,173 @@ void update_guest_memory_policy(struct v
     }
 }
 
+void domain_cpu_policy_changed(struct domain *d)
+{
+    const struct cpuid_policy *p = d->arch.cpuid;
+    struct vcpu *v;
+
+    if ( is_pv_domain(d) )
+    {
+        if ( ((levelling_caps & LCAP_1cd) == LCAP_1cd) )
+        {
+            uint64_t mask = cpuidmask_defaults._1cd;
+            uint32_t ecx = p->basic._1c;
+            uint32_t edx = p->basic._1d;
+
+            /*
+             * Must expose hosts HTT and X2APIC value so a guest using native
+             * CPUID can correctly interpret other leaves which cannot be
+             * masked.
+             */
+            if ( cpu_has_x2apic )
+                ecx |= cpufeat_mask(X86_FEATURE_X2APIC);
+            if ( cpu_has_htt )
+                edx |= cpufeat_mask(X86_FEATURE_HTT);
+
+            switch ( boot_cpu_data.x86_vendor )
+            {
+            case X86_VENDOR_INTEL:
+                /*
+                 * Intel masking MSRs are documented as AND masks.
+                 * Experimentally, they are applied after OSXSAVE and APIC
+                 * are fast-forwarded from real hardware state.
+                 */
+                mask &= ((uint64_t)edx << 32) | ecx;
+
+                if ( ecx & cpufeat_mask(X86_FEATURE_XSAVE) )
+                    ecx = cpufeat_mask(X86_FEATURE_OSXSAVE);
+                else
+                    ecx = 0;
+                edx = cpufeat_mask(X86_FEATURE_APIC);
+
+                mask |= ((uint64_t)edx << 32) | ecx;
+                break;
+
+            case X86_VENDOR_AMD:
+            case X86_VENDOR_HYGON:
+                mask &= ((uint64_t)ecx << 32) | edx;
+
+                /*
+                 * AMD masking MSRs are documented as overrides.
+                 * Experimentally, fast-forwarding of the OSXSAVE and APIC
+                 * bits from real hardware state only occurs if the MSR has
+                 * the respective bits set.
+                 */
+                if ( ecx & cpufeat_mask(X86_FEATURE_XSAVE) )
+                    ecx = cpufeat_mask(X86_FEATURE_OSXSAVE);
+                else
+                    ecx = 0;
+                edx = cpufeat_mask(X86_FEATURE_APIC);
+
+                /*
+                 * If the Hypervisor bit is set in the policy, we can also
+                 * forward it into real CPUID.
+                 */
+                if ( p->basic.hypervisor )
+                    ecx |= cpufeat_mask(X86_FEATURE_HYPERVISOR);
+
+                mask |= ((uint64_t)ecx << 32) | edx;
+                break;
+            }
+
+            d->arch.pv.cpuidmasks->_1cd = mask;
+        }
+
+        if ( ((levelling_caps & LCAP_6c) == LCAP_6c) )
+        {
+            uint64_t mask = cpuidmask_defaults._6c;
+
+            if ( boot_cpu_data.x86_vendor == X86_VENDOR_AMD )
+                mask &= (~0ULL << 32) | p->basic.raw[6].c;
+
+            d->arch.pv.cpuidmasks->_6c = mask;
+        }
+
+        if ( ((levelling_caps & LCAP_7ab0) == LCAP_7ab0) )
+        {
+            uint64_t mask = cpuidmask_defaults._7ab0;
+
+            /*
+             * Leaf 7[0].eax is max_subleaf, not a feature mask.  Take it
+             * wholesale from the policy, but clamp the features in 7[0].ebx
+             * per usual.
+             */
+            if ( boot_cpu_data.x86_vendor &
+                 (X86_VENDOR_AMD | X86_VENDOR_HYGON) )
+                mask = (((uint64_t)p->feat.max_subleaf << 32) |
+                        ((uint32_t)mask & p->feat._7b0));
+
+            d->arch.pv.cpuidmasks->_7ab0 = mask;
+        }
+
+        if ( ((levelling_caps & LCAP_Da1) == LCAP_Da1) )
+        {
+            uint64_t mask = cpuidmask_defaults.Da1;
+            uint32_t eax = p->xstate.Da1;
+
+            if ( boot_cpu_data.x86_vendor == X86_VENDOR_INTEL )
+                mask &= (~0ULL << 32) | eax;
+
+            d->arch.pv.cpuidmasks->Da1 = mask;
+        }
+
+        if ( ((levelling_caps & LCAP_e1cd) == LCAP_e1cd) )
+        {
+            uint64_t mask = cpuidmask_defaults.e1cd;
+            uint32_t ecx = p->extd.e1c;
+            uint32_t edx = p->extd.e1d;
+
+            /*
+             * Must expose hosts CMP_LEGACY value so a guest using native
+             * CPUID can correctly interpret other leaves which cannot be
+             * masked.
+             */
+            if ( cpu_has_cmp_legacy )
+                ecx |= cpufeat_mask(X86_FEATURE_CMP_LEGACY);
+
+            /*
+             * If not emulating AMD or Hygon, clear the duplicated features
+             * in e1d.
+             */
+            if ( !(p->x86_vendor & (X86_VENDOR_AMD | X86_VENDOR_HYGON)) )
+                edx &= ~CPUID_COMMON_1D_FEATURES;
+
+            switch ( boot_cpu_data.x86_vendor )
+            {
+            case X86_VENDOR_INTEL:
+                mask &= ((uint64_t)edx << 32) | ecx;
+                break;
+
+            case X86_VENDOR_AMD:
+            case X86_VENDOR_HYGON:
+                mask &= ((uint64_t)ecx << 32) | edx;
+
+                /*
+                 * Fast-forward bits - Must be set in the masking MSR for
+                 * fast-forwarding to occur in hardware.
+                 */
+                ecx = 0;
+                edx = cpufeat_mask(X86_FEATURE_APIC);
+
+                mask |= ((uint64_t)ecx << 32) | edx;
+                break;
+            }
+
+            d->arch.pv.cpuidmasks->e1cd = mask;
+        }
+    }
+
+    for_each_vcpu ( d, v )
+    {
+        cpuid_policy_updated(v);
+
+        /* If PMU version is zero then the guest doesn't have VPMU */
+        if ( boot_cpu_data.x86_vendor == X86_VENDOR_INTEL &&
+             p->basic.pmu_version == 0 )
+            vpmu_destroy(v);
+    }
+}
+
 #ifndef CONFIG_BIGMEM
 /*
  * The hole may be at or above the 44-bit boundary, so we need to determine
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -49,173 +49,6 @@ static int gdbsx_guest_mem_io(domid_t do
 }
 #endif
 
-void domain_cpu_policy_changed(struct domain *d)
-{
-    const struct cpuid_policy *p = d->arch.cpuid;
-    struct vcpu *v;
-
-    if ( is_pv_domain(d) )
-    {
-        if ( ((levelling_caps & LCAP_1cd) == LCAP_1cd) )
-        {
-            uint64_t mask = cpuidmask_defaults._1cd;
-            uint32_t ecx = p->basic._1c;
-            uint32_t edx = p->basic._1d;
-
-            /*
-             * Must expose hosts HTT and X2APIC value so a guest using native
-             * CPUID can correctly interpret other leaves which cannot be
-             * masked.
-             */
-            if ( cpu_has_x2apic )
-                ecx |= cpufeat_mask(X86_FEATURE_X2APIC);
-            if ( cpu_has_htt )
-                edx |= cpufeat_mask(X86_FEATURE_HTT);
-
-            switch ( boot_cpu_data.x86_vendor )
-            {
-            case X86_VENDOR_INTEL:
-                /*
-                 * Intel masking MSRs are documented as AND masks.
-                 * Experimentally, they are applied after OSXSAVE and APIC
-                 * are fast-forwarded from real hardware state.
-                 */
-                mask &= ((uint64_t)edx << 32) | ecx;
-
-                if ( ecx & cpufeat_mask(X86_FEATURE_XSAVE) )
-                    ecx = cpufeat_mask(X86_FEATURE_OSXSAVE);
-                else
-                    ecx = 0;
-                edx = cpufeat_mask(X86_FEATURE_APIC);
-
-                mask |= ((uint64_t)edx << 32) | ecx;
-                break;
-
-            case X86_VENDOR_AMD:
-            case X86_VENDOR_HYGON:
-                mask &= ((uint64_t)ecx << 32) | edx;
-
-                /*
-                 * AMD masking MSRs are documented as overrides.
-                 * Experimentally, fast-forwarding of the OSXSAVE and APIC
-                 * bits from real hardware state only occurs if the MSR has
-                 * the respective bits set.
-                 */
-                if ( ecx & cpufeat_mask(X86_FEATURE_XSAVE) )
-                    ecx = cpufeat_mask(X86_FEATURE_OSXSAVE);
-                else
-                    ecx = 0;
-                edx = cpufeat_mask(X86_FEATURE_APIC);
-
-                /*
-                 * If the Hypervisor bit is set in the policy, we can also
-                 * forward it into real CPUID.
-                 */
-                if ( p->basic.hypervisor )
-                    ecx |= cpufeat_mask(X86_FEATURE_HYPERVISOR);
-
-                mask |= ((uint64_t)ecx << 32) | edx;
-                break;
-            }
-
-            d->arch.pv.cpuidmasks->_1cd = mask;
-        }
-
-        if ( ((levelling_caps & LCAP_6c) == LCAP_6c) )
-        {
-            uint64_t mask = cpuidmask_defaults._6c;
-
-            if ( boot_cpu_data.x86_vendor == X86_VENDOR_AMD )
-                mask &= (~0ULL << 32) | p->basic.raw[6].c;
-
-            d->arch.pv.cpuidmasks->_6c = mask;
-        }
-
-        if ( ((levelling_caps & LCAP_7ab0) == LCAP_7ab0) )
-        {
-            uint64_t mask = cpuidmask_defaults._7ab0;
-
-            /*
-             * Leaf 7[0].eax is max_subleaf, not a feature mask.  Take it
-             * wholesale from the policy, but clamp the features in 7[0].ebx
-             * per usual.
-             */
-            if ( boot_cpu_data.x86_vendor &
-                 (X86_VENDOR_AMD | X86_VENDOR_HYGON) )
-                mask = (((uint64_t)p->feat.max_subleaf << 32) |
-                        ((uint32_t)mask & p->feat._7b0));
-
-            d->arch.pv.cpuidmasks->_7ab0 = mask;
-        }
-
-        if ( ((levelling_caps & LCAP_Da1) == LCAP_Da1) )
-        {
-            uint64_t mask = cpuidmask_defaults.Da1;
-            uint32_t eax = p->xstate.Da1;
-
-            if ( boot_cpu_data.x86_vendor == X86_VENDOR_INTEL )
-                mask &= (~0ULL << 32) | eax;
-
-            d->arch.pv.cpuidmasks->Da1 = mask;
-        }
-
-        if ( ((levelling_caps & LCAP_e1cd) == LCAP_e1cd) )
-        {
-            uint64_t mask = cpuidmask_defaults.e1cd;
-            uint32_t ecx = p->extd.e1c;
-            uint32_t edx = p->extd.e1d;
-
-            /*
-             * Must expose hosts CMP_LEGACY value so a guest using native
-             * CPUID can correctly interpret other leaves which cannot be
-             * masked.
-             */
-            if ( cpu_has_cmp_legacy )
-                ecx |= cpufeat_mask(X86_FEATURE_CMP_LEGACY);
-
-            /*
-             * If not emulating AMD or Hygon, clear the duplicated features
-             * in e1d.
-             */
-            if ( !(p->x86_vendor & (X86_VENDOR_AMD | X86_VENDOR_HYGON)) )
-                edx &= ~CPUID_COMMON_1D_FEATURES;
-
-            switch ( boot_cpu_data.x86_vendor )
-            {
-            case X86_VENDOR_INTEL:
-                mask &= ((uint64_t)edx << 32) | ecx;
-                break;
-
-            case X86_VENDOR_AMD:
-            case X86_VENDOR_HYGON:
-                mask &= ((uint64_t)ecx << 32) | edx;
-
-                /*
-                 * Fast-forward bits - Must be set in the masking MSR for
-                 * fast-forwarding to occur in hardware.
-                 */
-                ecx = 0;
-                edx = cpufeat_mask(X86_FEATURE_APIC);
-
-                mask |= ((uint64_t)ecx << 32) | edx;
-                break;
-            }
-
-            d->arch.pv.cpuidmasks->e1cd = mask;
-        }
-    }
-
-    for_each_vcpu ( d, v )
-    {
-        cpuid_policy_updated(v);
-
-        /* If PMU version is zero then the guest doesn't have VPMU */
-        if ( boot_cpu_data.x86_vendor == X86_VENDOR_INTEL &&
-             p->basic.pmu_version == 0 )
-            vpmu_destroy(v);
-    }
-}
-
 static int update_domain_cpu_policy(struct domain *d,
                                     xen_domctl_cpu_policy_t *xdpc)
 {



From xen-devel-bounces@lists.xenproject.org Wed Jul 15 10:41:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jul 2020 10:41:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jveqh-000272-2c; Wed, 15 Jul 2020 10:41:15 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=9G22=A2=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jveqg-00026k-35
 for xen-devel@lists.xenproject.org; Wed, 15 Jul 2020 10:41:14 +0000
X-Inumbo-ID: b382be76-c687-11ea-93b5-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b382be76-c687-11ea-93b5-12813bfff9fa;
 Wed, 15 Jul 2020 10:41:12 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 25614B56F;
 Wed, 15 Jul 2020 10:41:15 +0000 (UTC)
Subject: [PATCH 7/8] x86: move cpu_{up,down}_helper()
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <3375cacd-d3b7-9f06-44a7-4b684b6a77d6@suse.com>
Message-ID: <261477ac-629f-b60c-afe7-9b609231c375@suse.com>
Date: Wed, 15 Jul 2020 12:41:13 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <3375cacd-d3b7-9f06-44a7-4b684b6a77d6@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

This is in preparation for making the building of sysctl.c conditional.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/smp.c
+++ b/xen/arch/x86/smp.c
@@ -22,6 +22,7 @@
 #include <asm/hardirq.h>
 #include <asm/hpet.h>
 #include <asm/hvm/support.h>
+#include <asm/setup.h>
 #include <irq_vectors.h>
 #include <mach_apic.h>
 
@@ -396,3 +397,36 @@ void call_function_interrupt(struct cpu_
     perfc_incr(ipis);
     smp_call_function_interrupt();
 }
+
+long cpu_up_helper(void *data)
+{
+    unsigned int cpu = (unsigned long)data;
+    int ret = cpu_up(cpu);
+
+    /* Have one more go on EBUSY. */
+    if ( ret == -EBUSY )
+        ret = cpu_up(cpu);
+
+    if ( !ret && !opt_smt &&
+         cpu_data[cpu].compute_unit_id == INVALID_CUID &&
+         cpumask_weight(per_cpu(cpu_sibling_mask, cpu)) > 1 )
+    {
+        ret = cpu_down_helper(data);
+        if ( ret )
+            printk("Could not re-offline CPU%u (%d)\n", cpu, ret);
+        else
+            ret = -EPERM;
+    }
+
+    return ret;
+}
+
+long cpu_down_helper(void *data)
+{
+    int cpu = (unsigned long)data;
+    int ret = cpu_down(cpu);
+    /* Have one more go on EBUSY. */
+    if ( ret == -EBUSY )
+        ret = cpu_down(cpu);
+    return ret;
+}
--- a/xen/arch/x86/sysctl.c
+++ b/xen/arch/x86/sysctl.c
@@ -79,39 +79,6 @@ static void l3_cache_get(void *arg)
         l3_info->size = info.size / 1024; /* in KB unit */
 }
 
-long cpu_up_helper(void *data)
-{
-    unsigned int cpu = (unsigned long)data;
-    int ret = cpu_up(cpu);
-
-    /* Have one more go on EBUSY. */
-    if ( ret == -EBUSY )
-        ret = cpu_up(cpu);
-
-    if ( !ret && !opt_smt &&
-         cpu_data[cpu].compute_unit_id == INVALID_CUID &&
-         cpumask_weight(per_cpu(cpu_sibling_mask, cpu)) > 1 )
-    {
-        ret = cpu_down_helper(data);
-        if ( ret )
-            printk("Could not re-offline CPU%u (%d)\n", cpu, ret);
-        else
-            ret = -EPERM;
-    }
-
-    return ret;
-}
-
-long cpu_down_helper(void *data)
-{
-    int cpu = (unsigned long)data;
-    int ret = cpu_down(cpu);
-    /* Have one more go on EBUSY. */
-    if ( ret == -EBUSY )
-        ret = cpu_down(cpu);
-    return ret;
-}
-
 static long smt_up_down_helper(void *data)
 {
     bool up = (bool)data;



From xen-devel-bounces@lists.xenproject.org Wed Jul 15 10:41:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jul 2020 10:41:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jverL-0002EI-CP; Wed, 15 Jul 2020 10:41:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=9G22=A2=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jverK-0002E5-3W
 for xen-devel@lists.xenproject.org; Wed, 15 Jul 2020 10:41:54 +0000
X-Inumbo-ID: cbaa3308-c687-11ea-b7bb-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id cbaa3308-c687-11ea-b7bb-bc764e2007e4;
 Wed, 15 Jul 2020 10:41:53 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 9EAFEB56F;
 Wed, 15 Jul 2020 10:41:55 +0000 (UTC)
Subject: [PATCH 8/8] x86: don't include domctl and alike in shim-exclusive
 builds
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <3375cacd-d3b7-9f06-44a7-4b684b6a77d6@suse.com>
Message-ID: <56a60744-8046-e90c-d6fa-9c7392d83c51@suse.com>
Date: Wed, 15 Jul 2020 12:41:53 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <3375cacd-d3b7-9f06-44a7-4b684b6a77d6@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, George Dunlap <George.Dunlap@eu.citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

There is no need for platform-wide, system-wide, or per-domain control
in this case. Hence avoid including this dead code in the build.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/Makefile
+++ b/xen/arch/x86/Makefile
@@ -23,7 +23,6 @@ obj-$(CONFIG_GDBSX) += debug.o
 obj-y += delay.o
 obj-y += desc.o
 obj-bin-y += dmi_scan.init.o
-obj-y += domctl.o
 obj-y += domain.o
 obj-bin-y += dom0_build.init.o
 obj-y += domain_page.o
@@ -51,7 +50,6 @@ obj-y += numa.o
 obj-y += pci.o
 obj-y += percpu.o
 obj-y += physdev.o x86_64/physdev.o
-obj-y += platform_hypercall.o x86_64/platform_hypercall.o
 obj-y += psr.o
 obj-y += setup.o
 obj-y += shutdown.o
@@ -60,7 +58,6 @@ obj-y += smpboot.o
 obj-y += spec_ctrl.o
 obj-y += srat.o
 obj-y += string.o
-obj-y += sysctl.o
 obj-y += time.o
 obj-y += trace.o
 obj-y += traps.o
@@ -71,6 +68,13 @@ obj-$(CONFIG_TBOOT) += tboot.o
 obj-y += hpet.o
 obj-y += vm_event.o
 obj-y += xstate.o
+
+ifneq ($(CONFIG_PV_SHIM_EXCLUSIVE),y)
+obj-y += domctl.o
+obj-y += platform_hypercall.o x86_64/platform_hypercall.o
+obj-y += sysctl.o
+endif
+
 extra-y += asm-macros.i
 
 ifneq ($(CONFIG_HVM),y)
--- a/xen/arch/x86/mm/paging.c
+++ b/xen/arch/x86/mm/paging.c
@@ -47,6 +47,8 @@
 /* Per-CPU variable for enforcing the lock ordering */
 DEFINE_PER_CPU(int, mm_lock_level);
 
+#ifndef CONFIG_PV_SHIM_EXCLUSIVE
+
 /************************************************/
 /*              LOG DIRTY SUPPORT               */
 /************************************************/
@@ -628,6 +630,8 @@ void paging_log_dirty_init(struct domain
     d->arch.paging.log_dirty.ops = ops;
 }
 
+#endif /* CONFIG_PV_SHIM_EXCLUSIVE */
+
 /************************************************/
 /*           CODE FOR PAGING SUPPORT            */
 /************************************************/
@@ -667,7 +671,7 @@ void paging_vcpu_init(struct vcpu *v)
         shadow_vcpu_init(v);
 }
 
-
+#ifndef CONFIG_PV_SHIM_EXCLUSIVE
 int paging_domctl(struct domain *d, struct xen_domctl_shadow_op *sc,
                   XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl,
                   bool_t resuming)
@@ -788,6 +792,7 @@ long paging_domctl_continuation(XEN_GUES
 
     return ret;
 }
+#endif /* CONFIG_PV_SHIM_EXCLUSIVE */
 
 /* Call when destroying a domain */
 int paging_teardown(struct domain *d)
@@ -803,10 +808,12 @@ int paging_teardown(struct domain *d)
     if ( preempted )
         return -ERESTART;
 
+#ifndef CONFIG_PV_SHIM_EXCLUSIVE
     /* clean up log dirty resources. */
     rc = paging_free_log_dirty_bitmap(d, 0);
     if ( rc == -ERESTART )
         return rc;
+#endif
 
     /* Move populate-on-demand cache back to domain_list for destruction */
     rc = p2m_pod_empty_cache(d);
--- a/xen/arch/x86/pv/hypercall.c
+++ b/xen/arch/x86/pv/hypercall.c
@@ -42,7 +42,9 @@ const hypercall_table_t pv_hypercall_tab
     COMPAT_CALL(set_callbacks),
     HYPERCALL(fpu_taskswitch),
     HYPERCALL(sched_op_compat),
+#ifndef CONFIG_PV_SHIM_EXCLUSIVE
     COMPAT_CALL(platform_op),
+#endif
     HYPERCALL(set_debugreg),
     HYPERCALL(get_debugreg),
     COMPAT_CALL(update_descriptor),
@@ -72,8 +74,10 @@ const hypercall_table_t pv_hypercall_tab
 #endif
     HYPERCALL(event_channel_op),
     COMPAT_CALL(physdev_op),
+#ifndef CONFIG_PV_SHIM_EXCLUSIVE
     HYPERCALL(sysctl),
     HYPERCALL(domctl),
+#endif
 #ifdef CONFIG_KEXEC
     COMPAT_CALL(kexec_op),
 #endif
@@ -89,7 +93,9 @@ const hypercall_table_t pv_hypercall_tab
     HYPERCALL(hypfs_op),
 #endif
     HYPERCALL(mca),
+#ifndef CONFIG_PV_SHIM_EXCLUSIVE
     HYPERCALL(arch_1),
+#endif
 };
 
 #undef do_arch_1
--- a/xen/common/Makefile
+++ b/xen/common/Makefile
@@ -6,7 +6,6 @@ obj-$(CONFIG_CORE_PARKING) += core_parki
 obj-y += cpu.o
 obj-$(CONFIG_DEBUG_TRACE) += debugtrace.o
 obj-$(CONFIG_HAS_DEVICE_TREE) += device_tree.o
-obj-y += domctl.o
 obj-y += domain.o
 obj-y += event_2l.o
 obj-y += event_channel.o
@@ -26,7 +25,6 @@ obj-$(CONFIG_NEEDS_LIST_SORT) += list_so
 obj-$(CONFIG_LIVEPATCH) += livepatch.o livepatch_elf.o
 obj-$(CONFIG_MEM_ACCESS) += mem_access.o
 obj-y += memory.o
-obj-y += monitor.o
 obj-y += multicall.o
 obj-y += notifier.o
 obj-y += page_alloc.o
@@ -47,7 +45,6 @@ obj-y += spinlock.o
 obj-y += stop_machine.o
 obj-y += string.o
 obj-y += symbols.o
-obj-y += sysctl.o
 obj-y += tasklet.o
 obj-y += time.o
 obj-y += timer.o
@@ -66,6 +63,12 @@ obj-bin-$(CONFIG_X86) += $(foreach n,dec
 
 obj-$(CONFIG_COMPAT) += $(addprefix compat/,domain.o kernel.o memory.o multicall.o xlat.o)
 
+ifneq ($(CONFIG_PV_SHIM_EXCLUSIVE),y)
+obj-y += domctl.o
+obj-y += monitor.o
+obj-y += sysctl.o
+endif
+
 extra-y := symbols-dummy.o
 
 obj-$(CONFIG_COVERAGE) += coverage/
--- a/xen/include/asm-x86/paging.h
+++ b/xen/include/asm-x86/paging.h
@@ -154,6 +154,8 @@ struct paging_mode {
 /*****************************************************************************
  * Log dirty code */
 
+#ifndef CONFIG_PV_SHIM_EXCLUSIVE
+
 /* get the dirty bitmap for a specific range of pfns */
 void paging_log_dirty_range(struct domain *d,
                             unsigned long begin_pfn,
@@ -202,6 +204,15 @@ struct sh_dirty_vram {
     s_time_t last_dirty;
 };
 
+#else /* !CONFIG_PV_SHIM_EXCLUSIVE */
+
+static inline void paging_log_dirty_init(struct domain *d,
+                                         const struct log_dirty_ops *ops) {}
+static inline void paging_mark_dirty(struct domain *d, mfn_t gmfn) {}
+static inline void paging_mark_pfn_dirty(struct domain *d, pfn_t pfn) {}
+
+#endif /* CONFIG_PV_SHIM_EXCLUSIVE */
+
 /*****************************************************************************
  * Entry points into the paging-assistance code */
 
--- a/xen/include/xen/domain.h
+++ b/xen/include/xen/domain.h
@@ -130,6 +130,10 @@ struct vnuma_info {
     struct xen_vmemrange *vmemrange;
 };
 
+#ifndef CONFIG_PV_SHIM_EXCLUSIVE
 void vnuma_destroy(struct vnuma_info *vnuma);
+#else
+static inline void vnuma_destroy(struct vnuma_info *vnuma) { ASSERT(!vnuma); }
+#endif
 
 #endif /* __XEN_DOMAIN_H__ */



From xen-devel-bounces@lists.xenproject.org Wed Jul 15 10:47:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jul 2020 10:47:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvewH-0002Th-5L; Wed, 15 Jul 2020 10:47:01 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OEEU=A2=gmail.com=julien.grall.oss@srs-us1.protection.inumbo.net>)
 id 1jvewF-0002Tc-FB
 for xen-devel@lists.xenproject.org; Wed, 15 Jul 2020 10:46:59 +0000
X-Inumbo-ID: 818fe4ba-c688-11ea-b7bb-bc764e2007e4
Received: from mail-wr1-x442.google.com (unknown [2a00:1450:4864:20::442])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 818fe4ba-c688-11ea-b7bb-bc764e2007e4;
 Wed, 15 Jul 2020 10:46:58 +0000 (UTC)
Received: by mail-wr1-x442.google.com with SMTP id a6so2091154wrm.4
 for <xen-devel@lists.xenproject.org>; Wed, 15 Jul 2020 03:46:58 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc; bh=8JVmAnFYLZBWhHKTRFwTsF7Zq5AURrqOGdvr7wu65vE=;
 b=A10uFI79EE+EDA4GobzYM2NQlMjG2gGA7DqoZZZ8bsDTGsqeuuee0cUBhNmISjRPWZ
 btfigjttIz5aJ5jM2H0weO5F1ZmoMFUoqZTra+FFLC2anUBX3TWqkFiBf2aqu+WsxgqX
 rU2DTdHOhmYn2zK1SdrDLWVXI0NqpmXXkTlmhqKxfQ9gp7vJNfIjLKEKIshBCWX/8GN6
 U1hJCK+byhWxpblxwkW7dsu7VjPPv5XB/ojdomTtOy2gfmimiT0VIAciGW7lf4S24Z3U
 hJoo6znuteS/jn1XqTztnxZqWsXkgb0JTkQ46CMnXtKR1/EIDcPiSDUi8VZkeVNTEFVP
 9Aeg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc;
 bh=8JVmAnFYLZBWhHKTRFwTsF7Zq5AURrqOGdvr7wu65vE=;
 b=NaJvRSCGSzBV7WozkRq6mNw0tFhfmPtBOqRH4b3txZKrWLeo5eSrDpo8qJsUTYs2fg
 Y+QfFddTyzoIelsdGqusKTp0rk+haroFkU1CIjFyK8d+Fix4YiujAXlTS3sqtI6xIDCQ
 jIFox1USa7HrF/tpXDs9Q2yN0GqA5CFVYmJYcikhLMrR/oJIbZqKfmH/3b1lITN8d0Be
 gY3+eTMzZDSvBgUi7+B/5KoFOH4IhhNSK1p/HACECDZ5giW6lvcfN2E+0Jq/78wT8OW4
 mm/p6IxMYY/UBxYHzGX49y+jDh4uWS+e5c+e1JLiVPmpYUVcR3Cugj+bbXy9bpBDGYQo
 1LHg==
X-Gm-Message-State: AOAM532yrUGmJGi9sprMwJ7x/yvwFdnyG+bC9nXL/v5HHWqooKx3tDJj
 zSXcunvv5WM6GUwNkoXe0fRq6LX0e+cQuUS8dSA=
X-Google-Smtp-Source: ABdhPJxA1dqLdZU5PkabR+qwX90PSPzTbNQLCt47AeXPzQPFOEErRXn0boqOR9FXaxXC5ScHqtZgrmOixKMKEmKQ/Cc=
X-Received: by 2002:adf:e850:: with SMTP id d16mr10999443wrn.426.1594810017559; 
 Wed, 15 Jul 2020 03:46:57 -0700 (PDT)
MIME-Version: 1.0
References: <fef45e49-bcb4-dc11-8f7f-b2f5e4bf1a73@suse.com>
 <e47a9ef5-5f4c-1ca6-1b31-f7b10516e5ed@suse.com>
In-Reply-To: <e47a9ef5-5f4c-1ca6-1b31-f7b10516e5ed@suse.com>
From: Julien Grall <julien.grall.oss@gmail.com>
Date: Wed, 15 Jul 2020 12:46:45 +0200
Message-ID: <CAJ=z9a1AWYYVGwHWOct9j3bVDhPtWG7R3tQY05+6BY-9g3C1kQ@mail.gmail.com>
Subject: Re: [PATCH 2/2] evtchn/fifo: don't enforce higher than necessary
 alignment
To: Jan Beulich <jbeulich@suse.com>
Content-Type: multipart/alternative; boundary="00000000000094b88c05aa78a428"
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 George Dunlap <George.Dunlap@eu.citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@citrix.com>,
 xen-devel <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

--00000000000094b88c05aa78a428
Content-Type: text/plain; charset="UTF-8"

On Wed, 15 Jul 2020, 12:17 Jan Beulich, <jbeulich@suse.com> wrote:

> Neither the code nor the original commit provide any justification for
> the need to 8-byte align the struct in all cases. Enforce just as much
> alignment as the structure actually needs - 4 bytes - by using alignof()
> instead of a literal number.
>
> Take the opportunity and also
> - add so far missing validation that native and compat mode layouts of
>   the structures actually match,
> - tie sizeof() expressions to the types of the fields we're actually
>   after, rather than specifying the type explicitly (which in the
>   general case risks a disconnect, even if there's close to zero risk in
>   this particular case),
> - use ENXIO instead of EINVAL for the two cases of the address not
>   satisfying the requirements, which will make an issue here better
>   stand out at the call site.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> I question the need for the array_index_nospec() here. Or else I'd
> expect map_vcpu_info() would also need the same.
>
> --- a/xen/common/event_fifo.c
> +++ b/xen/common/event_fifo.c
> @@ -504,6 +504,16 @@ static void setup_ports(struct domain *d
>      }
>  }
>
> +#ifdef CONFIG_COMPAT
> +
> +#include <compat/event_channel.h>
> +
> +#define xen_evtchn_fifo_control_block evtchn_fifo_control_block
> +CHECK_evtchn_fifo_control_block;
> +#undef xen_evtchn_fifo_control_block
> +
> +#endif
> +
>  int evtchn_fifo_init_control(struct evtchn_init_control *init_control)
>  {
>      struct domain *d = current->domain;
> @@ -523,19 +533,20 @@ int evtchn_fifo_init_control(struct evtc
>          return -ENOENT;
>
>      /* Must not cross page boundary. */
> -    if ( offset > (PAGE_SIZE - sizeof(evtchn_fifo_control_block_t)) )
> -        return -EINVAL;
> +    if ( offset > (PAGE_SIZE - sizeof(*v->evtchn_fifo->control_block)) )
> +        return -ENXIO;
>
>      /*
>       * Make sure the guest controlled value offset is bounded even during
>       * speculative execution.
>       */
>      offset = array_index_nospec(offset,
> -                           PAGE_SIZE -
> sizeof(evtchn_fifo_control_block_t) + 1);
> +                                PAGE_SIZE -
> +                                sizeof(*v->evtchn_fifo->control_block) +
> 1);
>
> -    /* Must be 8-bytes aligned. */
> -    if ( offset & (8 - 1) )
> -        return -EINVAL;
> +    /* Must be suitably aligned. */
> +    if ( offset & (alignof(*v->evtchn_fifo->control_block) - 1) )
> +        return -ENXIO;
>

A guest relying on this new alignment wouldn't work on older versions of
Xen. So I don't think a guest would ever be able to use it.

Therefore is it really worth the change?



>      spin_lock(&d->event_lock);
>
> --- a/xen/include/xlat.lst
> +++ b/xen/include/xlat.lst
> @@ -46,6 +46,7 @@
>  ?      evtchn_bind_vcpu                event_channel.h
>  ?      evtchn_bind_virq                event_channel.h
>  ?      evtchn_close                    event_channel.h
> +?      evtchn_fifo_control_block       event_channel.h
>  ?      evtchn_op                       event_channel.h
>  ?      evtchn_send                     event_channel.h
>  ?      evtchn_status                   event_channel.h
>
>

--00000000000094b88c05aa78a428--


From xen-devel-bounces@lists.xenproject.org Wed Jul 15 10:47:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jul 2020 10:47:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvewV-0002Un-Dw; Wed, 15 Jul 2020 10:47:15 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=9G22=A2=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jvewU-0002Ue-3O
 for xen-devel@lists.xenproject.org; Wed, 15 Jul 2020 10:47:14 +0000
X-Inumbo-ID: 8a28beee-c688-11ea-93b7-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 8a28beee-c688-11ea-93b7-12813bfff9fa;
 Wed, 15 Jul 2020 10:47:12 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 38A1FAD93;
 Wed, 15 Jul 2020 10:47:15 +0000 (UTC)
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH 0/4] x86: some assembler macro rework
Message-ID: <58b9211a-f6dd-85da-d0bd-c927ac537a5d@suse.com>
Date: Wed, 15 Jul 2020 12:47:13 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Parts of this were discussed in the context of Andrew's CET-SS work.
Other parts simply fit the underlying picture.

1: replace __ASM_{CL,ST}AC
2: reduce CET-SS related #ifdef-ary
3: drop ASM_{CL,ST}AC
4: fold indirect_thunk_asm.h into asm-defns.h

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jul 15 10:48:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jul 2020 10:48:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvexT-0002e1-Nk; Wed, 15 Jul 2020 10:48:15 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=9G22=A2=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jvexS-0002dr-Qi
 for xen-devel@lists.xenproject.org; Wed, 15 Jul 2020 10:48:14 +0000
X-Inumbo-ID: aeb598fe-c688-11ea-93b7-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id aeb598fe-c688-11ea-93b7-12813bfff9fa;
 Wed, 15 Jul 2020 10:48:14 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 90D91ADC2;
 Wed, 15 Jul 2020 10:48:16 +0000 (UTC)
Subject: [PATCH 1/4] x86: replace __ASM_{CL,ST}AC
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <58b9211a-f6dd-85da-d0bd-c927ac537a5d@suse.com>
Message-ID: <fc8e042e-fef8-ac38-34d8-16b13e4b0135@suse.com>
Date: Wed, 15 Jul 2020 12:48:14 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <58b9211a-f6dd-85da-d0bd-c927ac537a5d@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Introduce proper assembler macros instead, enabled only when the
assembler itself doesn't support the insns. To avoid duplicating the
macros for assembly and C files, have them processed into asm-macros.h.
This in turn requires adding a multiple inclusion guard when generating
that header.

No change to generated code.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/Makefile
+++ b/xen/arch/x86/Makefile
@@ -235,7 +235,10 @@ $(BASEDIR)/include/asm-x86/asm-macros.h:
 	echo '#if 0' >$@.new
 	echo '.if 0' >>$@.new
 	echo '#endif' >>$@.new
+	echo '#ifndef __ASM_MACROS_H__' >>$@.new
+	echo '#define __ASM_MACROS_H__' >>$@.new
 	echo 'asm ( ".include \"$@\"" );' >>$@.new
+	echo '#endif /* __ASM_MACROS_H__ */' >>$@.new
 	echo '#if 0' >>$@.new
 	echo '.endif' >>$@.new
 	cat $< >>$@.new
--- a/xen/arch/x86/arch.mk
+++ b/xen/arch/x86/arch.mk
@@ -20,6 +20,7 @@ $(call as-option-add,CFLAGS,CC,"rdrand %
 $(call as-option-add,CFLAGS,CC,"rdfsbase %rax",-DHAVE_AS_FSGSBASE)
 $(call as-option-add,CFLAGS,CC,"xsaveopt (%rax)",-DHAVE_AS_XSAVEOPT)
 $(call as-option-add,CFLAGS,CC,"rdseed %eax",-DHAVE_AS_RDSEED)
+$(call as-option-add,CFLAGS,CC,"clac",-DHAVE_AS_CLAC_STAC)
 $(call as-option-add,CFLAGS,CC,"clwb (%rax)",-DHAVE_AS_CLWB)
 $(call as-option-add,CFLAGS,CC,".equ \"x\"$$(comma)1",-DHAVE_AS_QUOTED_SYM)
 $(call as-option-add,CFLAGS,CC,"invpcid (%rax)$$(comma)%rax",-DHAVE_AS_INVPCID)
--- a/xen/arch/x86/asm-macros.c
+++ b/xen/arch/x86/asm-macros.c
@@ -1 +1,2 @@
+#include <asm/asm-defns.h>
 #include <asm/alternative-asm.h>
--- /dev/null
+++ b/xen/include/asm-x86/asm-defns.h
@@ -0,0 +1,9 @@
+#ifndef HAVE_AS_CLAC_STAC
+.macro clac
+    .byte 0x0f, 0x01, 0xca
+.endm
+
+.macro stac
+    .byte 0x0f, 0x01, 0xcb
+.endm
+#endif
--- a/xen/include/asm-x86/asm_defns.h
+++ b/xen/include/asm-x86/asm_defns.h
@@ -13,10 +13,12 @@
 #include <asm/alternative.h>
 
 #ifdef __ASSEMBLY__
+#include <asm/asm-defns.h>
 #ifndef CONFIG_INDIRECT_THUNK
 .equ CONFIG_INDIRECT_THUNK, 0
 #endif
 #else
+#include <asm/asm-macros.h>
 asm ( "\t.equ CONFIG_INDIRECT_THUNK, "
       __stringify(IS_ENABLED(CONFIG_INDIRECT_THUNK)) );
 #endif
@@ -200,34 +202,27 @@ register unsigned long current_stack_poi
 
 #endif
 
-/* "Raw" instruction opcodes */
-#define __ASM_CLAC      ".byte 0x0f,0x01,0xca"
-#define __ASM_STAC      ".byte 0x0f,0x01,0xcb"
-
 #ifdef __ASSEMBLY__
 .macro ASM_STAC
-    ALTERNATIVE "", __ASM_STAC, X86_FEATURE_XEN_SMAP
+    ALTERNATIVE "", stac, X86_FEATURE_XEN_SMAP
 .endm
 .macro ASM_CLAC
-    ALTERNATIVE "", __ASM_CLAC, X86_FEATURE_XEN_SMAP
+    ALTERNATIVE "", clac, X86_FEATURE_XEN_SMAP
 .endm
 #else
 static always_inline void clac(void)
 {
     /* Note: a barrier is implicit in alternative() */
-    alternative("", __ASM_CLAC, X86_FEATURE_XEN_SMAP);
+    alternative("", "clac", X86_FEATURE_XEN_SMAP);
 }
 
 static always_inline void stac(void)
 {
     /* Note: a barrier is implicit in alternative() */
-    alternative("", __ASM_STAC, X86_FEATURE_XEN_SMAP);
+    alternative("", "stac", X86_FEATURE_XEN_SMAP);
 }
 #endif
 
-#undef __ASM_STAC
-#undef __ASM_CLAC
-
 #ifdef __ASSEMBLY__
 .macro SAVE_ALL op, compat=0
 .ifeqs "\op", "CLAC"
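
For illustration, the effect of the Makefile change can be sketched in
plain shell (file names here are assumed stand-ins for make's $@ and $<;
this is not the literal rule): the generated stub now carries a C-level
multiple inclusion guard around the asm(".include ...") line, while the
leading/trailing #if 0 / .if 0 trickery keeps the file valid for both C
and assembly consumers.

```shell
# Sketch of the asm-macros.h generation step with the new inclusion
# guard. The real rule lives in xen/arch/x86/Makefile and appends the
# preprocessed macro definitions afterwards.
out=asm-macros.h.new
{
  printf '%s\n' '#if 0'
  printf '%s\n' '.if 0'
  printf '%s\n' '#endif'
  printf '%s\n' '#ifndef __ASM_MACROS_H__'
  printf '%s\n' '#define __ASM_MACROS_H__'
  printf '%s\n' 'asm ( ".include \"asm-macros.h\"" );'
  printf '%s\n' '#endif /* __ASM_MACROS_H__ */'
  printf '%s\n' '#if 0'
  printf '%s\n' '.endif'
  # the macro definitions (make's $<) would be appended here
} > "$out"
```

A C compiler sees only the guarded asm() line; the assembler skips the
C preprocessor directives via the .if 0 / .endif bracketing.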



From xen-devel-bounces@lists.xenproject.org Wed Jul 15 10:48:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jul 2020 10:48:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvey1-0002iJ-15; Wed, 15 Jul 2020 10:48:49 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=9G22=A2=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jvexz-0002i6-Mq
 for xen-devel@lists.xenproject.org; Wed, 15 Jul 2020 10:48:47 +0000
X-Inumbo-ID: c204da5a-c688-11ea-93b7-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c204da5a-c688-11ea-93b7-12813bfff9fa;
 Wed, 15 Jul 2020 10:48:46 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id F3884AAE8;
 Wed, 15 Jul 2020 10:48:48 +0000 (UTC)
Subject: [PATCH 2/4] x86: reduce CET-SS related #ifdef-ary
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <58b9211a-f6dd-85da-d0bd-c927ac537a5d@suse.com>
Message-ID: <58615a18-7f81-c000-d499-1a49f4752879@suse.com>
Date: Wed, 15 Jul 2020 12:48:46 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <58b9211a-f6dd-85da-d0bd-c927ac537a5d@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Commit b586a81b7a90 ("x86/CET: Fix build following c/s 43b98e7190") had
to introduce a number of #ifdef-s to make the build work with older tool
chains. Introduce an assembler macro to cover for tool chains that don't
know of CET-SS, allowing those conditionals where SETSSBSY alone is the
problem to be dropped again.

No change to generated code.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
Now that I've done this I'm no longer sure which direction is better to
follow: On one hand this introduces dead code (even if just NOPs) into
CET-SS-disabled builds. Otoh this is a step towards breaking the tool
chain version dependency of the feature.

I've also dropped conditionals around bigger chunks of code; while I
think that's preferable, I'm open to undo those parts.

--- a/xen/arch/x86/boot/x86_64.S
+++ b/xen/arch/x86/boot/x86_64.S
@@ -31,7 +31,6 @@ ENTRY(__high_start)
         jz      .L_bsp
 
         /* APs.  Set up shadow stacks before entering C. */
-#ifdef CONFIG_XEN_SHSTK
         testl   $cpufeat_mask(X86_FEATURE_XEN_SHSTK), \
                 CPUINFO_FEATURE_OFFSET(X86_FEATURE_XEN_SHSTK) + boot_cpu_data(%rip)
         je      .L_ap_shstk_done
@@ -55,7 +54,6 @@ ENTRY(__high_start)
         mov     $XEN_MINIMAL_CR4 | X86_CR4_CET, %ecx
         mov     %rcx, %cr4
         setssbsy
-#endif
 
 .L_ap_shstk_done:
         call    start_secondary
--- a/xen/arch/x86/setup.c
+++ b/xen/arch/x86/setup.c
@@ -668,7 +668,7 @@ static void __init noreturn reinit_bsp_s
     stack_base[0] = stack;
     memguard_guard_stack(stack);
 
-    if ( IS_ENABLED(CONFIG_XEN_SHSTK) && cpu_has_xen_shstk )
+    if ( cpu_has_xen_shstk )
     {
         wrmsrl(MSR_PL0_SSP,
                (unsigned long)stack + (PRIMARY_SHSTK_SLOT + 1) * PAGE_SIZE - 8);
--- a/xen/arch/x86/x86_64/compat/entry.S
+++ b/xen/arch/x86/x86_64/compat/entry.S
@@ -198,9 +198,7 @@ ENTRY(cr4_pv32_restore)
 
 /* See lstar_enter for entry register state. */
 ENTRY(cstar_enter)
-#ifdef CONFIG_XEN_SHSTK
         ALTERNATIVE "", "setssbsy", X86_FEATURE_XEN_SHSTK
-#endif
         /* sti could live here when we don't switch page tables below. */
         CR4_PV32_RESTORE
         movq  8(%rsp),%rax /* Restore %rax. */
--- a/xen/arch/x86/x86_64/entry.S
+++ b/xen/arch/x86/x86_64/entry.S
@@ -237,9 +237,7 @@ iret_exit_to_guest:
  * %ss must be saved into the space left by the trampoline.
  */
 ENTRY(lstar_enter)
-#ifdef CONFIG_XEN_SHSTK
         ALTERNATIVE "", "setssbsy", X86_FEATURE_XEN_SHSTK
-#endif
         /* sti could live here when we don't switch page tables below. */
         movq  8(%rsp),%rax /* Restore %rax. */
         movq  $FLAT_KERNEL_SS,8(%rsp)
@@ -273,9 +271,7 @@ ENTRY(lstar_enter)
         jmp   test_all_events
 
 ENTRY(sysenter_entry)
-#ifdef CONFIG_XEN_SHSTK
         ALTERNATIVE "", "setssbsy", X86_FEATURE_XEN_SHSTK
-#endif
         /* sti could live here when we don't switch page tables below. */
         pushq $FLAT_USER_SS
         pushq $0
--- a/xen/include/asm-x86/asm-defns.h
+++ b/xen/include/asm-x86/asm-defns.h
@@ -7,3 +7,9 @@
     .byte 0x0f, 0x01, 0xcb
 .endm
 #endif
+
+#ifndef CONFIG_HAS_AS_CET_SS
+.macro setssbsy
+    .byte 0xf3, 0x0f, 0x01, 0xe8
+.endm
+#endif
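
The fallback .macro-s hand-encode the instructions as raw bytes. Purely
for reference (this helper is an illustration, not part of the patch),
the encodings used here and in patch 1 can be tabulated:

```shell
# Raw opcode bytes emitted by the fallback .macro-s when the assembler
# lacks the insns (clac/stac from patch 1, setssbsy from this patch).
fallback_bytes() {
  case "$1" in
    clac)     printf '%s\n' '0x0f, 0x01, 0xca' ;;
    stac)     printf '%s\n' '0x0f, 0x01, 0xcb' ;;
    setssbsy) printf '%s\n' '0xf3, 0x0f, 0x01, 0xe8' ;;
    *)        return 1 ;;
  esac
}

fallback_bytes setssbsy   # -> 0xf3, 0x0f, 0x01, 0xe8
```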



From xen-devel-bounces@lists.xenproject.org Wed Jul 15 10:49:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jul 2020 10:49:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jveyI-0002lD-9j; Wed, 15 Jul 2020 10:49:06 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=9G22=A2=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jveyG-0002kw-Q4
 for xen-devel@lists.xenproject.org; Wed, 15 Jul 2020 10:49:04 +0000
X-Inumbo-ID: cc6437c0-c688-11ea-93b7-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id cc6437c0-c688-11ea-93b7-12813bfff9fa;
 Wed, 15 Jul 2020 10:49:04 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 64185AD93;
 Wed, 15 Jul 2020 10:49:06 +0000 (UTC)
Subject: [PATCH 3/4] x86: drop ASM_{CL,ST}AC
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <58b9211a-f6dd-85da-d0bd-c927ac537a5d@suse.com>
Message-ID: <048c3702-f0b0-6f8e-341e-bec6cfaded27@suse.com>
Date: Wed, 15 Jul 2020 12:49:04 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <58b9211a-f6dd-85da-d0bd-c927ac537a5d@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Use ALTERNATIVE directly, so that it is visible at the use sites that
alternative code patching is in use. Similarly avoid hiding that fact
inside SAVE_ALL.

No change to generated code.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -2165,9 +2165,9 @@ void activate_debugregs(const struct vcp
 void asm_domain_crash_synchronous(unsigned long addr)
 {
     /*
-     * We need clear AC bit here because in entry.S AC is set
-     * by ASM_STAC to temporarily allow accesses to user pages
-     * which is prevented by SMAP by default.
+     * We need to clear the AC bit here because in entry.S it gets set
+     * to temporarily allow accesses to user pages, which SMAP prevents
+     * by default.
      *
      * For some code paths, where this function is called, clac()
      * is not needed, but adding clac() here instead of each place
--- a/xen/arch/x86/x86_64/compat/entry.S
+++ b/xen/arch/x86/x86_64/compat/entry.S
@@ -12,7 +12,7 @@
 #include <irq_vectors.h>
 
 ENTRY(entry_int82)
-        ASM_CLAC
+        ALTERNATIVE "", clac, X86_FEATURE_XEN_SMAP
         pushq $0
         movl  $HYPERCALL_VECTOR, 4(%rsp)
         SAVE_ALL compat=1 /* DPL1 gate, restricted to 32bit PV guests only. */
@@ -285,7 +285,7 @@ ENTRY(compat_int80_direct_trap)
 compat_create_bounce_frame:
         ASSERT_INTERRUPTS_ENABLED
         mov   %fs,%edi
-        ASM_STAC
+        ALTERNATIVE "", stac, X86_FEATURE_XEN_SMAP
         testb $2,UREGS_cs+8(%rsp)
         jz    1f
         /* Push new frame at registered guest-OS stack base. */
@@ -332,7 +332,7 @@ compat_create_bounce_frame:
         movl  TRAPBOUNCE_error_code(%rdx),%eax
 .Lft8:  movl  %eax,%fs:(%rsi)           # ERROR CODE
 1:
-        ASM_CLAC
+        ALTERNATIVE "", clac, X86_FEATURE_XEN_SMAP
         /* Rewrite our stack frame and return to guest-OS mode. */
         /* IA32 Ref. Vol. 3: TF, VM, RF and NT flags are cleared on trap. */
         andl  $~(X86_EFLAGS_VM|X86_EFLAGS_RF|\
@@ -372,7 +372,7 @@ compat_crash_page_fault_4:
         addl  $4,%esi
 compat_crash_page_fault:
 .Lft14: mov   %edi,%fs
-        ASM_CLAC
+        ALTERNATIVE "", clac, X86_FEATURE_XEN_SMAP
         movl  %esi,%edi
         call  show_page_walk
         jmp   dom_crash_sync_extable
--- a/xen/arch/x86/x86_64/entry.S
+++ b/xen/arch/x86/x86_64/entry.S
@@ -277,7 +277,7 @@ ENTRY(sysenter_entry)
         pushq $0
         pushfq
 GLOBAL(sysenter_eflags_saved)
-        ASM_CLAC
+        ALTERNATIVE "", clac, X86_FEATURE_XEN_SMAP
         pushq $3 /* ring 3 null cs */
         pushq $0 /* null rip */
         pushq $0
@@ -331,7 +331,7 @@ UNLIKELY_END(sysenter_gpf)
         jmp   .Lbounce_exception
 
 ENTRY(int80_direct_trap)
-        ASM_CLAC
+        ALTERNATIVE "", clac, X86_FEATURE_XEN_SMAP
         pushq $0
         movl  $0x80, 4(%rsp)
         SAVE_ALL
@@ -450,7 +450,7 @@ __UNLIKELY_END(create_bounce_frame_bad_s
 
         subq  $7*8,%rsi
         movq  UREGS_ss+8(%rsp),%rax
-        ASM_STAC
+        ALTERNATIVE "", stac, X86_FEATURE_XEN_SMAP
         movq  VCPU_domain(%rbx),%rdi
         STORE_GUEST_STACK(rax,6)        # SS
         movq  UREGS_rsp+8(%rsp),%rax
@@ -488,7 +488,7 @@ __UNLIKELY_END(create_bounce_frame_bad_s
         STORE_GUEST_STACK(rax,1)        # R11
         movq  UREGS_rcx+8(%rsp),%rax
         STORE_GUEST_STACK(rax,0)        # RCX
-        ASM_CLAC
+        ALTERNATIVE "", clac, X86_FEATURE_XEN_SMAP
 
 #undef STORE_GUEST_STACK
 
@@ -525,11 +525,11 @@ domain_crash_page_fault_2x8:
 domain_crash_page_fault_1x8:
         addq  $8,%rsi
 domain_crash_page_fault_0x8:
-        ASM_CLAC
+        ALTERNATIVE "", clac, X86_FEATURE_XEN_SMAP
         movq  %rsi,%rdi
         call  show_page_walk
 ENTRY(dom_crash_sync_extable)
-        ASM_CLAC
+        ALTERNATIVE "", clac, X86_FEATURE_XEN_SMAP
         # Get out of the guest-save area of the stack.
         GET_STACK_END(ax)
         leaq  STACK_CPUINFO_FIELD(guest_cpu_user_regs)(%rax),%rsp
@@ -587,7 +587,8 @@ UNLIKELY_END(exit_cr3)
         iretq
 
 ENTRY(common_interrupt)
-        SAVE_ALL CLAC
+        ALTERNATIVE "", clac, X86_FEATURE_XEN_SMAP
+        SAVE_ALL
 
         GET_STACK_END(14)
 
@@ -619,7 +620,8 @@ ENTRY(page_fault)
         movl  $TRAP_page_fault,4(%rsp)
 /* No special register assumptions. */
 GLOBAL(handle_exception)
-        SAVE_ALL CLAC
+        ALTERNATIVE "", clac, X86_FEATURE_XEN_SMAP
+        SAVE_ALL
 
         GET_STACK_END(14)
 
@@ -824,7 +826,8 @@ ENTRY(entry_CP)
 ENTRY(double_fault)
         movl  $TRAP_double_fault,4(%rsp)
         /* Set AC to reduce chance of further SMAP faults */
-        SAVE_ALL STAC
+        ALTERNATIVE "", stac, X86_FEATURE_XEN_SMAP
+        SAVE_ALL
 
         GET_STACK_END(14)
 
@@ -857,7 +860,8 @@ ENTRY(nmi)
         pushq $0
         movl  $TRAP_nmi,4(%rsp)
 handle_ist_exception:
-        SAVE_ALL CLAC
+        ALTERNATIVE "", clac, X86_FEATURE_XEN_SMAP
+        SAVE_ALL
 
         GET_STACK_END(14)
 
--- a/xen/include/asm-x86/asm_defns.h
+++ b/xen/include/asm-x86/asm_defns.h
@@ -200,16 +200,6 @@ register unsigned long current_stack_poi
         UNLIKELY_END_SECTION "\n"          \
         ".Llikely." #tag ".%=:"
 
-#endif
-
-#ifdef __ASSEMBLY__
-.macro ASM_STAC
-    ALTERNATIVE "", stac, X86_FEATURE_XEN_SMAP
-.endm
-.macro ASM_CLAC
-    ALTERNATIVE "", clac, X86_FEATURE_XEN_SMAP
-.endm
-#else
 static always_inline void clac(void)
 {
     /* Note: a barrier is implicit in alternative() */
@@ -224,18 +214,7 @@ static always_inline void stac(void)
 #endif
 
 #ifdef __ASSEMBLY__
-.macro SAVE_ALL op, compat=0
-.ifeqs "\op", "CLAC"
-        ASM_CLAC
-.else
-.ifeqs "\op", "STAC"
-        ASM_STAC
-.else
-.ifnb \op
-        .err
-.endif
-.endif
-.endif
+.macro SAVE_ALL compat=0
         addq  $-(UREGS_error_code-UREGS_r15), %rsp
         cld
         movq  %rdi,UREGS_rdi(%rsp)



From xen-devel-bounces@lists.xenproject.org Wed Jul 15 10:49:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jul 2020 10:49:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jveyr-0002sL-OT; Wed, 15 Jul 2020 10:49:41 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=9G22=A2=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jveyq-0002s8-Lt
 for xen-devel@lists.xenproject.org; Wed, 15 Jul 2020 10:49:40 +0000
X-Inumbo-ID: e1d7700e-c688-11ea-bca7-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e1d7700e-c688-11ea-bca7-bc764e2007e4;
 Wed, 15 Jul 2020 10:49:40 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 6061DADC2;
 Wed, 15 Jul 2020 10:49:42 +0000 (UTC)
Subject: [PATCH 4/4] x86: fold indirect_thunk_asm.h into asm-defns.h
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <58b9211a-f6dd-85da-d0bd-c927ac537a5d@suse.com>
Message-ID: <af69f44a-5009-40e8-fbbc-6f78b67f7adb@suse.com>
Date: Wed, 15 Jul 2020 12:49:40 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <58b9211a-f6dd-85da-d0bd-c927ac537a5d@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

There's little point in having two separate headers both getting
included by asm_defns.h. In particular this reduces the number of
dual-use headers in which the asm ( ".include ..." ) construct needs
suitable guarding.

No change to generated code.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/Makefile
+++ b/xen/Makefile
@@ -139,7 +139,7 @@ ifeq ($(TARGET_ARCH),x86)
 t1 = $(call as-insn,$(CC),".L0: .L1: .skip (.L1 - .L0)",,-no-integrated-as)
 
 # Check whether clang asm()-s support .include.
-t2 = $(call as-insn,$(CC) -I$(BASEDIR)/include,".include \"asm-x86/indirect_thunk_asm.h\"",,-no-integrated-as)
+t2 = $(call as-insn,$(CC) -I$(BASEDIR)/include,".include \"asm-x86/asm-defns.h\"",,-no-integrated-as)
 
 # Check whether clang keeps .macro-s between asm()-s:
 # https://bugs.llvm.org/show_bug.cgi?id=36110
--- a/xen/include/asm-x86/asm-defns.h
+++ b/xen/include/asm-x86/asm-defns.h
@@ -13,3 +13,40 @@
     .byte 0xf3, 0x0f, 0x01, 0xe8
 .endm
 #endif
+
+.macro INDIRECT_BRANCH insn:req arg:req
+/*
+ * Create an indirect branch.  insn is one of call/jmp, arg is a single
+ * register.
+ *
+ * With no compiler support, this degrades into a plain indirect call/jmp.
+ * With compiler support, dispatch to the correct __x86_indirect_thunk_*
+ */
+    .if CONFIG_INDIRECT_THUNK == 1
+
+        $done = 0
+        .irp reg, ax, cx, dx, bx, bp, si, di, 8, 9, 10, 11, 12, 13, 14, 15
+        .ifeqs "\arg", "%r\reg"
+            \insn __x86_indirect_thunk_r\reg
+            $done = 1
+           .exitm
+        .endif
+        .endr
+
+        .if $done != 1
+            .error "Bad register arg \arg"
+        .endif
+
+    .else
+        \insn *\arg
+    .endif
+.endm
+
+/* Convenience wrappers. */
+.macro INDIRECT_CALL arg:req
+    INDIRECT_BRANCH call \arg
+.endm
+
+.macro INDIRECT_JMP arg:req
+    INDIRECT_BRANCH jmp \arg
+.endm
--- a/xen/include/asm-x86/asm_defns.h
+++ b/xen/include/asm-x86/asm_defns.h
@@ -22,7 +22,6 @@
 asm ( "\t.equ CONFIG_INDIRECT_THUNK, "
       __stringify(IS_ENABLED(CONFIG_INDIRECT_THUNK)) );
 #endif
-#include <asm/indirect_thunk_asm.h>
 
 #ifndef __ASSEMBLY__
 void ret_from_intr(void);
--- a/xen/include/asm-x86/indirect_thunk_asm.h
+++ /dev/null
@@ -1,53 +0,0 @@
-/*
- * Trickery to allow this header to be included at the C level, to permit
- * proper dependency tracking in .*.o.d files, while still having it contain
- * assembler only macros.
- */
-#ifndef __ASSEMBLY__
-# if 0
-  .if 0
-# endif
-asm ( "\t.include \"asm/indirect_thunk_asm.h\"" );
-# if 0
-  .endif
-# endif
-#else
-
-.macro INDIRECT_BRANCH insn:req arg:req
-/*
- * Create an indirect branch.  insn is one of call/jmp, arg is a single
- * register.
- *
- * With no compiler support, this degrades into a plain indirect call/jmp.
- * With compiler support, dispatch to the correct __x86_indirect_thunk_*
- */
-    .if CONFIG_INDIRECT_THUNK == 1
-
-        $done = 0
-        .irp reg, ax, cx, dx, bx, bp, si, di, 8, 9, 10, 11, 12, 13, 14, 15
-        .ifeqs "\arg", "%r\reg"
-            \insn __x86_indirect_thunk_r\reg
-            $done = 1
-           .exitm
-        .endif
-        .endr
-
-        .if $done != 1
-            .error "Bad register arg \arg"
-        .endif
-
-    .else
-        \insn *\arg
-    .endif
-.endm
-
-/* Convenience wrappers. */
-.macro INDIRECT_CALL arg:req
-    INDIRECT_BRANCH call \arg
-.endm
-
-.macro INDIRECT_JMP arg:req
-    INDIRECT_BRANCH jmp \arg
-.endm
-
-#endif
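
The register matching that the moved INDIRECT_BRANCH macro performs via
.irp can be illustrated by a shell translation (a hedged sketch for
exposition only, not part of the patch): map a register operand to the
thunk symbol the macro would dispatch to, and reject anything else.

```shell
# Shell rendition of the .irp loop in INDIRECT_BRANCH: for an operand
# %r<reg> in the allowed set, name the __x86_indirect_thunk_r<reg>
# symbol the call/jmp targets when CONFIG_INDIRECT_THUNK is enabled.
thunk_for() {
  for reg in ax cx dx bx bp si di 8 9 10 11 12 13 14 15; do
    if [ "$1" = "%r$reg" ]; then
      printf '%s\n' "__x86_indirect_thunk_r$reg"
      return 0
    fi
  done
  echo "Bad register arg $1" >&2
  return 1
}

thunk_for %rcx   # -> __x86_indirect_thunk_rcx
```

Note %rsp is deliberately absent from the list, as in the macro itself.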



From xen-devel-bounces@lists.xenproject.org Wed Jul 15 11:22:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jul 2020 11:22:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvfUK-0006Iz-EW; Wed, 15 Jul 2020 11:22:12 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tzA3=A2=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jvfUJ-0006Iu-4B
 for xen-devel@lists.xenproject.org; Wed, 15 Jul 2020 11:22:11 +0000
X-Inumbo-ID: 6be9b8d4-c68d-11ea-93bf-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6be9b8d4-c68d-11ea-93bf-12813bfff9fa;
 Wed, 15 Jul 2020 11:22:09 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=Rb2dpUwLgUovP1LcWrHtHaHM/nLAyoPPr/NupJbKgGk=; b=kDSUAhhDLnabYsSc8wtm+fhOY
 WPcS9ED1yacEJV6f40Sn7N0PJDuZz+aUZp0GavYYIk9JseCQs/Wk8wY/grK1IKDdBceSYYSKGL0Jp
 9x35oDHPqxinLhAp8Z77ysYSSe/3zeJZ3zsQWikEMK/uN655LTMyd3W63vCwQN+eLsDsY=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jvfUG-0003fN-V1; Wed, 15 Jul 2020 11:22:09 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jvfUG-0007ev-B1; Wed, 15 Jul 2020 11:22:08 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jvfUG-0008FK-9I; Wed, 15 Jul 2020 11:22:08 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151900-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-upstream-4.14-testing baseline test] 151900: tolerable FAIL
X-Osstest-Failures: qemu-upstream-4.14-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 qemu-upstream-4.14-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 qemu-upstream-4.14-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-upstream-4.14-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-upstream-4.14-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-upstream-4.14-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 qemu-upstream-4.14-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 qemu-upstream-4.14-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 qemu-upstream-4.14-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 qemu-upstream-4.14-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 qemu-upstream-4.14-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 qemu-upstream-4.14-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 qemu-upstream-4.14-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 qemu-upstream-4.14-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 qemu-upstream-4.14-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 qemu-upstream-4.14-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 qemu-upstream-4.14-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-upstream-4.14-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-upstream-4.14-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 qemu-upstream-4.14-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 qemu-upstream-4.14-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 qemu-upstream-4.14-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 qemu-upstream-4.14-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 qemu-upstream-4.14-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 qemu-upstream-4.14-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 qemu-upstream-4.14-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 qemu-upstream-4.14-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-upstream-4.14-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-upstream-4.14-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 qemu-upstream-4.14-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 qemu-upstream-4.14-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 qemu-upstream-4.14-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 qemu-upstream-4.14-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 qemu-upstream-4.14-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 qemu-upstream-4.14-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 qemu-upstream-4.14-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-upstream-4.14-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-upstream-4.14-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 qemu-upstream-4.14-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 qemu-upstream-4.14-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 qemu-upstream-4.14-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 qemu-upstream-4.14-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 qemu-upstream-4.14-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 qemu-upstream-4.14-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 qemu-upstream-4.14-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 qemu-upstream-4.14-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-upstream-4.14-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: qemuu=ea6d3cd1ed79d824e605a70c3626bc437c386260
X-Osstest-Versions-That: qemuu=ea6d3cd1ed79d824e605a70c3626bc437c386260
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 15 Jul 2020 11:22:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

"Old" tested version had not actually been tested; therefore in this
flight we test it, rather than a new candidate.  The baseline, if
any, is the most recent actually tested revision.

flight 151900 qemu-upstream-4.14-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151900/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop             fail never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop             fail never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop              fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                ea6d3cd1ed79d824e605a70c3626bc437c386260
baseline version:
 qemuu                ea6d3cd1ed79d824e605a70c3626bc437c386260

Last test of basis   151900  2020-07-14 18:08:51 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Wed Jul 15 11:55:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jul 2020 11:55:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvfzj-0000Wk-3D; Wed, 15 Jul 2020 11:54:39 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=9G22=A2=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jvfzh-0000Wf-Qy
 for xen-devel@lists.xenproject.org; Wed, 15 Jul 2020 11:54:37 +0000
X-Inumbo-ID: f49b1f84-c691-11ea-93cb-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f49b1f84-c691-11ea-93cb-12813bfff9fa;
 Wed, 15 Jul 2020 11:54:37 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 49D96B14D;
 Wed, 15 Jul 2020 11:54:39 +0000 (UTC)
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH v3 0/2] x86: RTC handling adjustments
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Message-ID: <5426dd6f-50cd-dc23-5c6b-0ab631d98d38@suse.com>
Date: Wed, 15 Jul 2020 13:54:36 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Paul Durrant <paul@xen.org>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

The first patch addresses a recent regression and hence ought to be
considered for 4.14, despite it getting late. I noticed the problem
while re-basing the 2nd patch here, which I decided to now re-post
despite the original discussion on v1 not having led to any clear
result (i.e. the supposed "leave Dom0 handle the RTC" has never
materialized over the past almost 5 years), while I still think
that the changed code is at least a little bit of an improvement.

1: restore pv_rtc_handler() invocation
2: detect CMOS aliasing on ports other than 0x70/0x71

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jul 15 11:56:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jul 2020 11:56:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvg1p-0000cm-FV; Wed, 15 Jul 2020 11:56:49 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=9G22=A2=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jvg1o-0000ce-97
 for xen-devel@lists.xenproject.org; Wed, 15 Jul 2020 11:56:48 +0000
X-Inumbo-ID: 428134fe-c692-11ea-bb8b-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 428134fe-c692-11ea-bb8b-bc764e2007e4;
 Wed, 15 Jul 2020 11:56:47 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 0CEE4AC7D;
 Wed, 15 Jul 2020 11:56:50 +0000 (UTC)
Subject: [PATCH v3 1/2] x86: restore pv_rtc_handler() invocation
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <5426dd6f-50cd-dc23-5c6b-0ab631d98d38@suse.com>
Message-ID: <7dd4b668-06ca-807a-9cc1-77430b2376a8@suse.com>
Date: Wed, 15 Jul 2020 13:56:47 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <5426dd6f-50cd-dc23-5c6b-0ab631d98d38@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu <wl@xen.org>,
 Paul Durrant <paul@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

This was lost when making the logic accessible to PVH Dom0.

While doing so make the access to the global function pointer safe
against races (as noticed by Roger): The only current user wants to be
invoked just once (but can tolerate being invoked multiple times),
zapping the pointer at that point.

Fixes: 835d8d69d96a ("x86/rtc: provide mediated access to RTC for PVH dom0")
Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v3: Latch pointer under lock.
v2: New.

--- a/xen/arch/x86/time.c
+++ b/xen/arch/x86/time.c
@@ -1148,6 +1148,8 @@ void rtc_guest_write(unsigned int port,
 
     switch ( port )
     {
+        typeof(pv_rtc_handler) hook;
+
     case RTC_PORT(0):
         /*
          * All PV domains (and PVH dom0) are allowed to write to the latched
@@ -1160,6 +1162,14 @@ void rtc_guest_write(unsigned int port,
     case RTC_PORT(1):
         if ( !ioports_access_permitted(currd, RTC_PORT(0), RTC_PORT(1)) )
             break;
+
+        spin_lock_irqsave(&rtc_lock, flags);
+        hook = pv_rtc_handler;
+        spin_unlock_irqrestore(&rtc_lock, flags);
+
+        if ( hook )
+            hook(currd->arch.cmos_idx & 0x7f, data);
+
         spin_lock_irqsave(&rtc_lock, flags);
         outb(currd->arch.cmos_idx & 0x7f, RTC_PORT(0));
         outb(data, RTC_PORT(1));



From xen-devel-bounces@lists.xenproject.org Wed Jul 15 11:57:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jul 2020 11:57:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvg28-0000es-Om; Wed, 15 Jul 2020 11:57:08 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=9G22=A2=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jvg27-0000ef-MY
 for xen-devel@lists.xenproject.org; Wed, 15 Jul 2020 11:57:07 +0000
X-Inumbo-ID: 4de68af6-c692-11ea-bb8b-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4de68af6-c692-11ea-bb8b-bc764e2007e4;
 Wed, 15 Jul 2020 11:57:06 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 280CAAC6E;
 Wed, 15 Jul 2020 11:57:09 +0000 (UTC)
Subject: [PATCH v3 2/2] x86: detect CMOS aliasing on ports other than 0x70/0x71
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <5426dd6f-50cd-dc23-5c6b-0ab631d98d38@suse.com>
Message-ID: <441327cd-a7d6-8cb6-bf90-69df8e509425@suse.com>
Date: Wed, 15 Jul 2020 13:57:07 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <5426dd6f-50cd-dc23-5c6b-0ab631d98d38@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu <wl@xen.org>,
 Paul Durrant <paul@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

... in order to also intercept accesses through the alias ports.

Also stop intercepting accesses to the CMOS ports if we won't ourselves
use the CMOS RTC.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v3: Re-base over change to earlier patch.
v2: Re-base.

--- a/xen/arch/x86/physdev.c
+++ b/xen/arch/x86/physdev.c
@@ -670,6 +670,80 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_H
     return ret;
 }
 
+#ifndef COMPAT
+#include <asm/mc146818rtc.h>
+
+unsigned int __read_mostly cmos_alias_mask;
+
+static int __init probe_cmos_alias(void)
+{
+    unsigned int i, offs;
+
+    if ( acpi_gbl_FADT.boot_flags & ACPI_FADT_NO_CMOS_RTC )
+        return 0;
+
+    for ( offs = 2; offs < 8; offs <<= 1 )
+    {
+        bool read = true;
+
+        for ( i = RTC_REG_D + 1; i < 0x80; ++i )
+        {
+            uint8_t normal, alt;
+            unsigned long flags;
+
+            if ( i == acpi_gbl_FADT.century )
+                continue;
+
+            spin_lock_irqsave(&rtc_lock, flags);
+
+            normal = CMOS_READ(i);
+            if ( inb(RTC_PORT(offs)) != i )
+                read = false;
+
+            alt = inb(RTC_PORT(offs + 1));
+
+            spin_unlock_irqrestore(&rtc_lock, flags);
+
+            if ( normal != alt )
+                break;
+
+            process_pending_softirqs();
+        }
+        if ( i == 0x80 )
+        {
+            cmos_alias_mask |= offs;
+            printk(XENLOG_INFO "CMOS aliased at %02x, index %s\n",
+                   RTC_PORT(offs), read ? "r/w" : "w/o");
+        }
+    }
+
+    return 0;
+}
+__initcall(probe_cmos_alias);
+
+/* Has the administrator granted sufficient permission for this I/O access? */
+bool admin_io_okay(unsigned int port, unsigned int bytes,
+                   const struct domain *d)
+{
+    /*
+     * Port 0xcf8 (CONFIG_ADDRESS) is only visible for DWORD accesses.
+     * We never permit direct access to that register.
+     */
+    if ( (port == 0xcf8) && (bytes == 4) )
+        return false;
+
+    /*
+     * We also never permit direct access to the RTC/CMOS registers
+     * if we may be accessing the RTC ones ourselves.
+     */
+    if ( !(acpi_gbl_FADT.boot_flags & ACPI_FADT_NO_CMOS_RTC) &&
+         ((port & ~(cmos_alias_mask | 1)) == RTC_PORT(0)) )
+        return false;
+
+    return ioports_access_permitted(d, port, port + bytes - 1);
+}
+#endif /* COMPAT */
+
 /*
  * Local variables:
  * mode: C
--- a/xen/arch/x86/pv/emul-priv-op.c
+++ b/xen/arch/x86/pv/emul-priv-op.c
@@ -198,24 +198,6 @@ static bool guest_io_okay(unsigned int p
     return false;
 }
 
-/* Has the administrator granted sufficient permission for this I/O access? */
-static bool admin_io_okay(unsigned int port, unsigned int bytes,
-                          const struct domain *d)
-{
-    /*
-     * Port 0xcf8 (CONFIG_ADDRESS) is only visible for DWORD accesses.
-     * We never permit direct access to that register.
-     */
-    if ( (port == 0xcf8) && (bytes == 4) )
-        return false;
-
-    /* We also never permit direct access to the RTC/CMOS registers. */
-    if ( ((port & ~1) == RTC_PORT(0)) )
-        return false;
-
-    return ioports_access_permitted(d, port, port + bytes - 1);
-}
-
 static bool pci_cfg_ok(struct domain *currd, unsigned int start,
                        unsigned int size, uint32_t *write)
 {
--- a/xen/arch/x86/setup.c
+++ b/xen/arch/x86/setup.c
@@ -48,7 +48,7 @@
 #include <xen/cpu.h>
 #include <asm/nmi.h>
 #include <asm/alternative.h>
-#include <asm/mc146818rtc.h>
+#include <asm/iocap.h>
 #include <asm/cpuid.h>
 #include <asm/spec_ctrl.h>
 #include <asm/guest.h>
@@ -2009,37 +2009,33 @@ int __hwdom_init xen_in_range(unsigned l
 static int __hwdom_init io_bitmap_cb(unsigned long s, unsigned long e,
                                      void *ctx)
 {
-    struct domain *d = ctx;
+    const struct domain *d = ctx;
     unsigned int i;
 
     ASSERT(e <= INT_MAX);
     for ( i = s; i <= e; i++ )
-        __clear_bit(i, d->arch.hvm.io_bitmap);
+        if ( admin_io_okay(i, 1, d) )
+            __clear_bit(i, d->arch.hvm.io_bitmap);
 
     return 0;
 }
 
 void __hwdom_init setup_io_bitmap(struct domain *d)
 {
-    int rc;
+    if ( !is_hvm_domain(d) )
+        return;
 
-    if ( is_hvm_domain(d) )
-    {
-        bitmap_fill(d->arch.hvm.io_bitmap, 0x10000);
-        rc = rangeset_report_ranges(d->arch.ioport_caps, 0, 0x10000,
-                                    io_bitmap_cb, d);
-        BUG_ON(rc);
-        /*
-         * NB: we need to trap accesses to 0xcf8 in order to intercept
-         * 4 byte accesses, that need to be handled by Xen in order to
-         * keep consistency.
-         * Access to 1 byte RTC ports also needs to be trapped in order
-         * to keep consistency with PV.
-         */
-        __set_bit(0xcf8, d->arch.hvm.io_bitmap);
-        __set_bit(RTC_PORT(0), d->arch.hvm.io_bitmap);
-        __set_bit(RTC_PORT(1), d->arch.hvm.io_bitmap);
-    }
+    bitmap_fill(d->arch.hvm.io_bitmap, 0x10000);
+    if ( rangeset_report_ranges(d->arch.ioport_caps, 0, 0x10000,
+                                io_bitmap_cb, d) )
+        BUG();
+
+    /*
+     * We need to trap 4-byte accesses to 0xcf8 (see admin_io_okay(),
+     * guest_io_read(), and guest_io_write()), which isn't covered by
+     * the admin_io_okay() check in io_bitmap_cb().
+     */
+    __set_bit(0xcf8, d->arch.hvm.io_bitmap);
 }
 
 /*
--- a/xen/arch/x86/time.c
+++ b/xen/arch/x86/time.c
@@ -1092,7 +1092,10 @@ static unsigned long get_cmos_time(void)
         if ( seconds < 60 )
         {
             if ( rtc.sec != seconds )
+            {
                 cmos_rtc_probe = false;
+                acpi_gbl_FADT.boot_flags &= ~ACPI_FADT_NO_CMOS_RTC;
+            }
             break;
         }
 
@@ -1114,7 +1117,7 @@ unsigned int rtc_guest_read(unsigned int
     unsigned long flags;
     unsigned int data = ~0;
 
-    switch ( port )
+    switch ( port & ~cmos_alias_mask )
     {
     case RTC_PORT(0):
         /*
@@ -1126,11 +1129,12 @@ unsigned int rtc_guest_read(unsigned int
         break;
 
     case RTC_PORT(1):
-        if ( !ioports_access_permitted(currd, RTC_PORT(0), RTC_PORT(1)) )
+        if ( !ioports_access_permitted(currd, port - 1, port) )
             break;
         spin_lock_irqsave(&rtc_lock, flags);
-        outb(currd->arch.cmos_idx & 0x7f, RTC_PORT(0));
-        data = inb(RTC_PORT(1));
+        outb(currd->arch.cmos_idx & (0xff >> (port == RTC_PORT(1))),
+             port - 1);
+        data = inb(port);
         spin_unlock_irqrestore(&rtc_lock, flags);
         break;
 
@@ -1146,9 +1150,10 @@ void rtc_guest_write(unsigned int port,
     struct domain *currd = current->domain;
     unsigned long flags;
 
-    switch ( port )
+    switch ( port & ~cmos_alias_mask )
     {
         typeof(pv_rtc_handler) hook;
+        unsigned int idx;
 
     case RTC_PORT(0):
         /*
@@ -1160,19 +1165,21 @@ void rtc_guest_write(unsigned int port,
         break;
 
     case RTC_PORT(1):
-        if ( !ioports_access_permitted(currd, RTC_PORT(0), RTC_PORT(1)) )
+        if ( !ioports_access_permitted(currd, port - 1, port) )
             break;
 
         spin_lock_irqsave(&rtc_lock, flags);
         hook = pv_rtc_handler;
         spin_unlock_irqrestore(&rtc_lock, flags);
 
+        idx = currd->arch.cmos_idx & (0xff >> (port == RTC_PORT(1)));
+
         if ( hook )
-            hook(currd->arch.cmos_idx & 0x7f, data);
+            hook(idx, data);
 
         spin_lock_irqsave(&rtc_lock, flags);
-        outb(currd->arch.cmos_idx & 0x7f, RTC_PORT(0));
-        outb(data, RTC_PORT(1));
+        outb(idx, port - 1);
+        outb(data, port);
         spin_unlock_irqrestore(&rtc_lock, flags);
         break;
 
--- a/xen/include/asm-x86/iocap.h
+++ b/xen/include/asm-x86/iocap.h
@@ -18,4 +18,7 @@
     (!rangeset_is_empty((d)->iomem_caps) ||             \
      !rangeset_is_empty((d)->arch.ioport_caps))
 
+bool admin_io_okay(unsigned int port, unsigned int bytes,
+                   const struct domain *d);
+
 #endif /* __X86_IOCAP_H__ */
--- a/xen/include/asm-x86/mc146818rtc.h
+++ b/xen/include/asm-x86/mc146818rtc.h
@@ -9,6 +9,8 @@
 
 extern spinlock_t rtc_lock;             /* serialize CMOS RAM access */
 
+extern unsigned int cmos_alias_mask;
+
 /**********************************************************************
  * register summary
  **********************************************************************/



From xen-devel-bounces@lists.xenproject.org Wed Jul 15 12:02:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jul 2020 12:02:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvg6o-0001eh-R7; Wed, 15 Jul 2020 12:01:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=9G22=A2=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jvg6n-0001eb-Tv
 for xen-devel@lists.xenproject.org; Wed, 15 Jul 2020 12:01:57 +0000
X-Inumbo-ID: fb1ff91e-c692-11ea-8496-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id fb1ff91e-c692-11ea-8496-bc764e2007e4;
 Wed, 15 Jul 2020 12:01:57 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id B4F9FAC7D;
 Wed, 15 Jul 2020 12:01:59 +0000 (UTC)
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH 0/3] x86/hvm: follow-up to f7039ee41b3d
Message-ID: <42270be7-43d6-ba53-3896-e20b5d7e3de0@suse.com>
Date: Wed, 15 Jul 2020 14:01:56 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu <wl@xen.org>,
 Paul Durrant <paul@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Much of what gets done here was discussed in the context of
"ioreq: handle pending emulation racing with ioreq server
destruction" already.

1: fold hvm_io_assist() into its only caller
2: re-work hvm_wait_for_io() a little
3: fold both instances of looking up a hvm_ioreq_vcpu with a request pending

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jul 15 12:03:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jul 2020 12:03:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvg8i-0001mh-7B; Wed, 15 Jul 2020 12:03:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=9G22=A2=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jvg8h-0001mZ-2K
 for xen-devel@lists.xenproject.org; Wed, 15 Jul 2020 12:03:55 +0000
X-Inumbo-ID: 40fb2e22-c693-11ea-b7bb-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 40fb2e22-c693-11ea-b7bb-bc764e2007e4;
 Wed, 15 Jul 2020 12:03:54 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 01579AEFB;
 Wed, 15 Jul 2020 12:03:57 +0000 (UTC)
Subject: [PATCH 1/3] x86/HVM: fold hvm_io_assist() into its only caller
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <42270be7-43d6-ba53-3896-e20b5d7e3de0@suse.com>
Message-ID: <94d683d7-302b-f0c2-0108-3f6d76b8df9b@suse.com>
Date: Wed, 15 Jul 2020 14:03:53 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <42270be7-43d6-ba53-3896-e20b5d7e3de0@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Paul Durrant <paul@xen.org>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

While there are two call sites, the function they're in can be slightly
re-arranged such that the code sequence can be added at its bottom. Note
that the function's only caller has already checked sv->pending, and
that the prior while() loop was just a slightly more fancy if()
(allowing an early break out of the construct).

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -103,23 +103,12 @@ bool hvm_io_pending(struct vcpu *v)
     return false;
 }
 
-static void hvm_io_assist(struct hvm_ioreq_vcpu *sv, uint64_t data)
-{
-    struct vcpu *v = sv->vcpu;
-    ioreq_t *ioreq = &v->arch.hvm.hvm_io.io_req;
-
-    if ( hvm_ioreq_needs_completion(ioreq) )
-        ioreq->data = data;
-
-    sv->pending = false;
-}
-
 static bool hvm_wait_for_io(struct hvm_ioreq_vcpu *sv, ioreq_t *p)
 {
     unsigned int prev_state = STATE_IOREQ_NONE;
+    uint64_t data = ~0;
 
-    while ( sv->pending )
-    {
+    do {
         unsigned int state = p->state;
 
         smp_rmb();
@@ -132,7 +121,6 @@ static bool hvm_wait_for_io(struct hvm_i
              * emulator is dying and it races with an I/O being
              * requested.
              */
-            hvm_io_assist(sv, ~0ul);
             break;
         }
 
@@ -149,7 +137,7 @@ static bool hvm_wait_for_io(struct hvm_i
         {
         case STATE_IORESP_READY: /* IORESP_READY -> NONE */
             p->state = STATE_IOREQ_NONE;
-            hvm_io_assist(sv, p->data);
+            data = p->data;
             break;
         case STATE_IOREQ_READY:  /* IOREQ_{READY,INPROCESS} -> IORESP_READY */
         case STATE_IOREQ_INPROCESS:
@@ -164,7 +152,13 @@ static bool hvm_wait_for_io(struct hvm_i
             domain_crash(sv->vcpu->domain);
             return false; /* bail */
         }
-    }
+    } while ( false );
+
+    p = &sv->vcpu->arch.hvm.hvm_io.io_req;
+    if ( hvm_ioreq_needs_completion(p) )
+        p->data = data;
+
+    sv->pending = false;
 
     return true;
 }



From xen-devel-bounces@lists.xenproject.org Wed Jul 15 12:04:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jul 2020 12:04:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvg8z-0001od-Gg; Wed, 15 Jul 2020 12:04:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=9G22=A2=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jvg8y-0001oT-AH
 for xen-devel@lists.xenproject.org; Wed, 15 Jul 2020 12:04:12 +0000
X-Inumbo-ID: 4b3f336a-c693-11ea-8496-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4b3f336a-c693-11ea-8496-bc764e2007e4;
 Wed, 15 Jul 2020 12:04:11 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 37782AB76;
 Wed, 15 Jul 2020 12:04:14 +0000 (UTC)
Subject: [PATCH 2/3] x86/HVM: re-work hvm_wait_for_io() a little
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <42270be7-43d6-ba53-3896-e20b5d7e3de0@suse.com>
Message-ID: <872c2d16-f49a-41fd-68ae-f1e0ee14c7d8@suse.com>
Date: Wed, 15 Jul 2020 14:04:12 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <42270be7-43d6-ba53-3896-e20b5d7e3de0@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Paul Durrant <paul@xen.org>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Convert the function's main loop to a more ordinary one, without goto
and without keeping steps inside the loop which don't need to be there
at all.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -106,24 +106,17 @@ bool hvm_io_pending(struct vcpu *v)
 static bool hvm_wait_for_io(struct hvm_ioreq_vcpu *sv, ioreq_t *p)
 {
     unsigned int prev_state = STATE_IOREQ_NONE;
+    unsigned int state = p->state;
     uint64_t data = ~0;
 
-    do {
-        unsigned int state = p->state;
-
-        smp_rmb();
-
-    recheck:
-        if ( unlikely(state == STATE_IOREQ_NONE) )
-        {
-            /*
-             * The only reason we should see this case is when an
-             * emulator is dying and it races with an I/O being
-             * requested.
-             */
-            break;
-        }
+    smp_rmb();
 
+    /*
+     * The only reason we should see this condition be false is when an
+     * emulator dying races with I/O being requested.
+     */
+    while ( likely(state != STATE_IOREQ_NONE) )
+    {
         if ( unlikely(state < prev_state) )
         {
             gdprintk(XENLOG_ERR, "Weird HVM ioreq state transition %u -> %u\n",
@@ -139,20 +132,24 @@ static bool hvm_wait_for_io(struct hvm_i
             p->state = STATE_IOREQ_NONE;
             data = p->data;
             break;
+
         case STATE_IOREQ_READY:  /* IOREQ_{READY,INPROCESS} -> IORESP_READY */
         case STATE_IOREQ_INPROCESS:
             wait_on_xen_event_channel(sv->ioreq_evtchn,
                                       ({ state = p->state;
                                          smp_rmb();
                                          state != prev_state; }));
-            goto recheck;
+            continue;
+
         default:
             gdprintk(XENLOG_ERR, "Weird HVM iorequest state %u\n", state);
             sv->pending = false;
             domain_crash(sv->vcpu->domain);
             return false; /* bail */
         }
-    } while ( false );
+
+        break;
+    }
 
     p = &sv->vcpu->arch.hvm.hvm_io.io_req;
     if ( hvm_ioreq_needs_completion(p) )



From xen-devel-bounces@lists.xenproject.org Wed Jul 15 12:05:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jul 2020 12:05:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvg9u-0001wL-SC; Wed, 15 Jul 2020 12:05:10 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=9G22=A2=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jvg9u-0001wE-7t
 for xen-devel@lists.xenproject.org; Wed, 15 Jul 2020 12:05:10 +0000
X-Inumbo-ID: 6de2f24e-c693-11ea-93cc-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6de2f24e-c693-11ea-93cc-12813bfff9fa;
 Wed, 15 Jul 2020 12:05:09 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 54009B1CE;
 Wed, 15 Jul 2020 12:05:12 +0000 (UTC)
Subject: [PATCH 3/3] x86/HVM: fold both instances of looking up a
 hvm_ioreq_vcpu with a request pending
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <42270be7-43d6-ba53-3896-e20b5d7e3de0@suse.com>
Message-ID: <ab92e0ec-d869-dae6-f47c-b7ac55bea326@suse.com>
Date: Wed, 15 Jul 2020 14:05:09 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <42270be7-43d6-ba53-3896-e20b5d7e3de0@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Paul Durrant <paul@xen.org>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

It seems pretty likely that the "break" in the loop getting replaced in
handle_hvm_io_completion() was meant to exit both nested loops at the
same time. Re-purpose what has been hvm_io_pending() to hand back the
struct hvm_ioreq_vcpu instance found, and use it to replace the two
nested loops.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -81,7 +81,8 @@ static ioreq_t *get_ioreq(struct hvm_ior
     return &p->vcpu_ioreq[v->vcpu_id];
 }
 
-bool hvm_io_pending(struct vcpu *v)
+static struct hvm_ioreq_vcpu *get_pending_vcpu(const struct vcpu *v,
+                                               struct hvm_ioreq_server **srvp)
 {
     struct domain *d = v->domain;
     struct hvm_ioreq_server *s;
@@ -96,11 +97,20 @@ bool hvm_io_pending(struct vcpu *v)
                               list_entry )
         {
             if ( sv->vcpu == v && sv->pending )
-                return true;
+            {
+                if ( srvp )
+                    *srvp = s;
+                return sv;
+            }
         }
     }
 
-    return false;
+    return NULL;
+}
+
+bool hvm_io_pending(struct vcpu *v)
+{
+    return get_pending_vcpu(v, NULL);
 }
 
 static bool hvm_wait_for_io(struct hvm_ioreq_vcpu *sv, ioreq_t *p)
@@ -165,8 +175,8 @@ bool handle_hvm_io_completion(struct vcp
     struct domain *d = v->domain;
     struct hvm_vcpu_io *vio = &v->arch.hvm.hvm_io;
     struct hvm_ioreq_server *s;
+    struct hvm_ioreq_vcpu *sv;
     enum hvm_io_completion io_completion;
-    unsigned int id;
 
     if ( has_vpci(d) && vpci_process_pending(v) )
     {
@@ -174,23 +184,9 @@ bool handle_hvm_io_completion(struct vcp
         return false;
     }
 
-    FOR_EACH_IOREQ_SERVER(d, id, s)
-    {
-        struct hvm_ioreq_vcpu *sv;
-
-        list_for_each_entry ( sv,
-                              &s->ioreq_vcpu_list,
-                              list_entry )
-        {
-            if ( sv->vcpu == v && sv->pending )
-            {
-                if ( !hvm_wait_for_io(sv, get_ioreq(s, v)) )
-                    return false;
-
-                break;
-            }
-        }
-    }
+    sv = get_pending_vcpu(v, &s);
+    if ( sv && !hvm_wait_for_io(sv, get_ioreq(s, v)) )
+        return false;
 
     vio->io_req.state = hvm_ioreq_needs_completion(&vio->io_req) ?
         STATE_IORESP_READY : STATE_IOREQ_NONE;


From xen-devel-bounces@lists.xenproject.org Wed Jul 15 12:13:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jul 2020 12:13:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvgIC-0002tB-Mp; Wed, 15 Jul 2020 12:13:44 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=9G22=A2=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jvgIB-0002t6-LU
 for xen-devel@lists.xenproject.org; Wed, 15 Jul 2020 12:13:43 +0000
X-Inumbo-ID: 9fce9f1e-c694-11ea-93cf-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 9fce9f1e-c694-11ea-93cf-12813bfff9fa;
 Wed, 15 Jul 2020 12:13:43 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 7A593B1E7;
 Wed, 15 Jul 2020 12:13:45 +0000 (UTC)
Subject: Re: [PATCH v6 01/11] memory: batch processing in acquire_resource()
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 =?UTF-8?Q?Micha=c5=82_Leszczy=c5=84ski?= <michal.leszczynski@cert.pl>
References: <cover.1594150543.git.michal.leszczynski@cert.pl>
 <02415890e4e8211513b495228c790e1d16de767f.1594150543.git.michal.leszczynski@cert.pl>
 <20200715093606.GU7191@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <61828142-8135-deee-43c6-1a2124f55756@suse.com>
Date: Wed, 15 Jul 2020 14:13:42 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20200715093606.GU7191@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, luwei.kang@intel.com,
 tamas.lengyel@intel.com, xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 15.07.2020 11:36, Roger Pau Monné wrote:
> On Tue, Jul 07, 2020 at 09:39:40PM +0200, Michał Leszczyński wrote:
>> @@ -1599,8 +1629,17 @@ long do_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
>>  #endif
>>  
>>      case XENMEM_acquire_resource:
>> -        rc = acquire_resource(
>> -            guest_handle_cast(arg, xen_mem_acquire_resource_t));
>> +        do {
>> +            rc = acquire_resource(
>> +                guest_handle_cast(arg, xen_mem_acquire_resource_t),
>> +                &start_extent);
> 
> I think it would be interesting from a performance PoV to move the
> xmar copy_from_guest here, so that each call to acquire_resource
> in the loop doesn't need to perform a copy from guest. That's
> more relevant for translated callers, where a copy_from_guest involves
> a guest page-table walk and a p2m walk.

This isn't just a nice-to-have for performance reasons, but a
correctness/consistency thing: A rogue (or buggy) guest may alter
the structure between two such reads. It _may_ be the case that
we're dealing fine with this right now, but it would feel like a
trap to fall into later on.

>> +
>> +            if ( hypercall_preempt_check() )
> 
> You are missing a rc == -ERESTART check here, you don't want to encode
> a continuation if rc is different than -ERESTART AFAICT.

At which point the subsequent containing do/while() likely wants
adjusting too, e.g. to "for ( ; ; )".

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jul 15 12:13:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jul 2020 12:13:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvgIQ-0002uH-VS; Wed, 15 Jul 2020 12:13:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=osgb=A2=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jvgIP-0002u4-PG
 for xen-devel@lists.xenproject.org; Wed, 15 Jul 2020 12:13:57 +0000
X-Inumbo-ID: a80cdca4-c694-11ea-bca7-bc764e2007e4
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a80cdca4-c694-11ea-bca7-bc764e2007e4;
 Wed, 15 Jul 2020 12:13:57 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1594815237;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:in-reply-to;
 bh=18xsUfIrITO/uLx/6yzDas52v/QFuFZM5fa1tNXshps=;
 b=MR9b2X3oMUszQw1W2fLscTeGZN1wZ40ZEmqdLN102tAxrUjI5fMKMXvR
 CrL1eyiuJBkILdJwagsQuqDK23ve+45QE/hmZGX/wc55ZIAl89LpGH6v9
 6WP/5iglLG4RZB9BhkgbVlGNTcwbOCEnmcaB4IfO9k9EH4RTC2syM5KjV c=;
Authentication-Results: esa1.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: f9ar1GRO/hivYYe9vK8tx2xxirWGSS/IArLK/iT2AycvlE88yiOwCqcQ9H3fZtQ76YleHU7r8M
 jBhs0RbiyGhyX+No+i7eViroXmV3KYtBNsgIX6KALqTjI6JxqC0UmwBwaWrQgdIFGQ8ohHGbP3
 BTY7UrHQ5cs3OnFGW1G6Zy5CZPSzEgA4WZVmgguIJH9lcH8h6sx72MZ2MtYP677lAJmmeESq+t
 R0tyM+9v5w73nsZsnE2W3wC8F84vwaGtIWM0pBaqPnyunWdLwbHMk8aBetP9gJqpce898pM2jP
 1w0=
X-SBRS: 2.7
X-MesageID: 22744247
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,355,1589256000"; d="scan'208";a="22744247"
Date: Wed, 15 Jul 2020 14:13:47 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Subject: Re: [PATCH v3 1/2] x86: restore pv_rtc_handler() invocation
Message-ID: <20200715121347.GY7191@Air-de-Roger>
References: <5426dd6f-50cd-dc23-5c6b-0ab631d98d38@suse.com>
 <7dd4b668-06ca-807a-9cc1-77430b2376a8@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
In-Reply-To: <7dd4b668-06ca-807a-9cc1-77430b2376a8@suse.com>
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Paul Durrant <paul@xen.org>, Wei Liu <wl@xen.org>, Andrew
 Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Wed, Jul 15, 2020 at 01:56:47PM +0200, Jan Beulich wrote:
> This was lost when making the logic accessible to PVH Dom0.
> 
> While doing so make the access to the global function pointer safe
> against races (as noticed by Roger): The only current user wants to be
> invoked just once (but can tolerate to be invoked multiple times),
> zapping the pointer at that point.
> 
> Fixes: 835d8d69d96a ("x86/rtc: provide mediated access to RTC for PVH dom0")
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Thanks, sorry I have one comment below.

> ---
> v3: Latch pointer under lock.
> v2: New.
> 
> --- a/xen/arch/x86/time.c
> +++ b/xen/arch/x86/time.c
> @@ -1148,6 +1148,8 @@ void rtc_guest_write(unsigned int port,
>  
>      switch ( port )
>      {
> +        typeof(pv_rtc_handler) hook;

Nit: FWIW, given the current structure of the function I would just have placed
it together with the rest of the top-level local variables.

> +
>      case RTC_PORT(0):
>          /*
>           * All PV domains (and PVH dom0) are allowed to write to the latched
> @@ -1160,6 +1162,14 @@ void rtc_guest_write(unsigned int port,
>      case RTC_PORT(1):
>          if ( !ioports_access_permitted(currd, RTC_PORT(0), RTC_PORT(1)) )
>              break;
> +
> +        spin_lock_irqsave(&rtc_lock, flags);
> +        hook = pv_rtc_handler;
> +        spin_unlock_irqrestore(&rtc_lock, flags);

Given that clearing the pv_rtc_handler variable in handle_rtc_once is
not done while holding the rtc_lock, I'm not sure there's much point
in holding the lock here, ie: just doing something like:

hook = pv_rtc_handler;
if ( hook )
    hook(currd->arch.cmos_idx & 0x7f, data);

Should be as safe as what you do.

We also assume that setting pv_rtc_handler to NULL is an atomic
operation.

Roger.


From xen-devel-bounces@lists.xenproject.org Wed Jul 15 12:29:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jul 2020 12:29:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvgWx-0003ye-9D; Wed, 15 Jul 2020 12:28:59 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=9G22=A2=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jvgWv-0003yZ-KT
 for xen-devel@lists.xenproject.org; Wed, 15 Jul 2020 12:28:57 +0000
X-Inumbo-ID: bfe0b038-c696-11ea-93d4-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id bfe0b038-c696-11ea-93d4-12813bfff9fa;
 Wed, 15 Jul 2020 12:28:56 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 513C2AC83;
 Wed, 15 Jul 2020 12:28:58 +0000 (UTC)
Subject: Re: [PATCH v6 01/11] memory: batch processing in acquire_resource()
To: =?UTF-8?Q?Micha=c5=82_Leszczy=c5=84ski?= <michal.leszczynski@cert.pl>
References: <cover.1594150543.git.michal.leszczynski@cert.pl>
 <02415890e4e8211513b495228c790e1d16de767f.1594150543.git.michal.leszczynski@cert.pl>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <83100b6c-3a06-e379-bef0-fcbd8fdcce98@suse.com>
Date: Wed, 15 Jul 2020 14:28:55 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <02415890e4e8211513b495228c790e1d16de767f.1594150543.git.michal.leszczynski@cert.pl>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 tamas.lengyel@intel.com, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, luwei.kang@intel.com,
 xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 07.07.2020 21:39, Michał Leszczyński wrote:
> From: Michal Leszczynski <michal.leszczynski@cert.pl>
> 
> Allow to acquire large resources by allowing acquire_resource()
> to process items in batches, using hypercall continuation.
> 
> Be aware that this modifies the behavior of acquire_resource
> call with frame_list=NULL. While previously it would return
> the size of internal array (32), with this patch it returns
> the maximal quantity of frames that could be requested at once,
> i.e. UINT_MAX >> MEMOP_EXTENT_SHIFT.

This isn't really a behavioral change, and hence I'd prefer this
to be re-worded: It was and is the upper bound on request sizes
that gets reported here. It's just that this upper bound now
changes.

> --- a/xen/common/memory.c
> +++ b/xen/common/memory.c
> @@ -1046,10 +1046,12 @@ static int acquire_grant_table(struct domain *d, unsigned int id,
>  }
>  
>  static int acquire_resource(
> -    XEN_GUEST_HANDLE_PARAM(xen_mem_acquire_resource_t) arg)
> +    XEN_GUEST_HANDLE_PARAM(xen_mem_acquire_resource_t) arg,
> +    unsigned long *start_extent)
>  {
>      struct domain *d, *currd = current->domain;
>      xen_mem_acquire_resource_t xmar;
> +    uint32_t total_frames;

Please don't use fixed width types when plain C types will do
(unsigned int here).

> @@ -1069,7 +1071,7 @@ static int acquire_resource(
>          if ( xmar.nr_frames )
>              return -EINVAL;
>  
> -        xmar.nr_frames = ARRAY_SIZE(mfn_list);
> +        xmar.nr_frames = UINT_MAX >> MEMOP_EXTENT_SHIFT;
>  
>          if ( __copy_field_to_guest(arg, &xmar, nr_frames) )
>              return -EFAULT;
> @@ -1077,8 +1079,28 @@ static int acquire_resource(
>          return 0;
>      }
>  
> +    total_frames = xmar.nr_frames;
> +
> +    /* Is the size too large for us to encode a continuation? */
> +    if ( unlikely(xmar.nr_frames > (UINT_MAX >> MEMOP_EXTENT_SHIFT)) )
> +        return -EINVAL;
> +
> +    if ( *start_extent )
> +    {
> +        /*
> +         * Check whether start_extent is in bounds, as this
> +         * value is visible to the calling domain.
> +         */
> +        if ( *start_extent > xmar.nr_frames )
> +            return -EINVAL;
> +
> +        xmar.frame += *start_extent;
> +        xmar.nr_frames -= *start_extent;
> +        guest_handle_add_offset(xmar.frame_list, *start_extent);
> +    }

May I ask that you drop the if() around this block of code?

Also, looking at this, I wonder whether it's a good idea to use the
"start extent" model here anyway: You could as well update the
struct (saying that it may be clobbered in the public header) and
copy the whole thing back to the original guest struct. This would
then remove the pretty arbitrary "UINT_MAX >> MEMOP_EXTENT_SHIFT"
limit you currently need to enforce. The main question is whether
we'd consider such an adjustment to an existing interface
acceptable; there's an at least theoretical risk that it may break
existing callers. Then again no existing caller can sensibly have
specified a count above 32, and when the copying back would be
limited to just the continuation case, no such caller would be
affected in any way afaict.
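The bound check and start_extent arithmetic under discussion can be sketched in isolation. MEMOP_EXTENT_SHIFT's value (6), the batch size, and all function names below are assumptions for illustration, not Xen's actual code:

```c
#include <limits.h>

/* Illustrative constants: the continuation's start extent is packed into
 * the memop command word, so only UINT_MAX >> MEMOP_EXTENT_SHIFT extents
 * can ever be encoded. */
#define MEMOP_EXTENT_SHIFT 6
#define MAX_ENCODABLE_FRAMES (UINT_MAX >> MEMOP_EXTENT_SHIFT)
#define BATCH_SIZE 32  /* size of the internal mfn_list array */

/* Number of continuation batches a request needs; 0 if the request is
 * empty or too large for the start extent to be encoded at all. */
static unsigned int batches_needed(unsigned int nr_frames)
{
    if ( nr_frames == 0 || nr_frames > MAX_ENCODABLE_FRAMES )
        return 0;
    return (nr_frames + BATCH_SIZE - 1) / BATCH_SIZE;
}

/* Offset a continued request, mirroring the xmar.frame/xmar.nr_frames
 * adjustment in the patch; returns the frames still to process, or 0
 * for an out-of-bounds (guest-visible) start_extent. */
static unsigned int frames_remaining(unsigned int nr_frames,
                                     unsigned long start_extent)
{
    if ( start_extent > nr_frames )
        return 0;
    return nr_frames - (unsigned int)start_extent;
}
```

This is what makes the "UINT_MAX >> MEMOP_EXTENT_SHIFT" limit arise in the first place: the continuation point must fit in the bits left over after the shift.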

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jul 15 12:31:16 2020
From: Paul Durrant <xadimgnik@gmail.com>
To: "'Jan Beulich'" <jbeulich@suse.com>, <xen-devel@lists.xenproject.org>
Cc: 'Andrew Cooper' <andrew.cooper3@citrix.com>, 'Wei Liu' <wl@xen.org>,
 'Roger Pau Monné' <roger.pau@citrix.com>
Subject: RE: [PATCH v3 1/2] x86: restore pv_rtc_handler() invocation
Date: Wed, 15 Jul 2020 13:31:11 +0100
Message-ID: <001801d65aa3$d33bd090$79b371b0$@xen.org>

> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Sent: 15 July 2020 12:57
> To: xen-devel@lists.xenproject.org
> Cc: Andrew Cooper <andrew.cooper3@citrix.com>; Paul Durrant <paul@xen.org>; Wei Liu <wl@xen.org>;
> Roger Pau Monné <roger.pau@citrix.com>
> Subject: [PATCH v3 1/2] x86: restore pv_rtc_handler() invocation
> 
> This was lost when making the logic accessible to PVH Dom0.
> 
> While doing so make the access to the global function pointer safe
> against races (as noticed by Roger): The only current user wants to be
> invoked just once (but can tolerate to be invoked multiple times),
> zapping the pointer at that point.
> 
> Fixes: 835d8d69d96a ("x86/rtc: provide mediated access to RTC for PVH dom0")
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> v3: Latch pointer under lock.
> v2: New.
> 
> --- a/xen/arch/x86/time.c
> +++ b/xen/arch/x86/time.c
> @@ -1148,6 +1148,8 @@ void rtc_guest_write(unsigned int port,
> 
>      switch ( port )
>      {
> +        typeof(pv_rtc_handler) hook;
> +
>      case RTC_PORT(0):
>          /*
>           * All PV domains (and PVH dom0) are allowed to write to the latched
> @@ -1160,6 +1162,14 @@ void rtc_guest_write(unsigned int port,
>      case RTC_PORT(1):
>          if ( !ioports_access_permitted(currd, RTC_PORT(0), RTC_PORT(1)) )
>              break;
> +
> +        spin_lock_irqsave(&rtc_lock, flags);
> +        hook = pv_rtc_handler;
> +        spin_unlock_irqrestore(&rtc_lock, flags);
> +
> +        if ( hook )
> +            hook(currd->arch.cmos_idx & 0x7f, data);
> +
>          spin_lock_irqsave(&rtc_lock, flags);
>          outb(currd->arch.cmos_idx & 0x7f, RTC_PORT(0));
>          outb(data, RTC_PORT(1));

LGTM.

Release-acked-by: Paul Durrant <paul@xen.org>




From xen-devel-bounces@lists.xenproject.org Wed Jul 15 12:36:56 2020
From: Jan Beulich <jbeulich@suse.com>
To: Roger Pau Monné <roger.pau@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Paul Durrant <paul@xen.org>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH v3 1/2] x86: restore pv_rtc_handler() invocation
Date: Wed, 15 Jul 2020 14:36:49 +0200
Message-ID: <2b9de0fd-5973-ed66-868c-ffadca83edf3@suse.com>

On 15.07.2020 14:13, Roger Pau Monné wrote:
> On Wed, Jul 15, 2020 at 01:56:47PM +0200, Jan Beulich wrote:
>> @@ -1160,6 +1162,14 @@ void rtc_guest_write(unsigned int port,
>>      case RTC_PORT(1):
>>          if ( !ioports_access_permitted(currd, RTC_PORT(0), RTC_PORT(1)) )
>>              break;
>> +
>> +        spin_lock_irqsave(&rtc_lock, flags);
>> +        hook = pv_rtc_handler;
>> +        spin_unlock_irqrestore(&rtc_lock, flags);
> 
> Given that clearing the pv_rtc_handler variable in handle_rtc_once is
> not done while holding the rtc_lock, I'm not sure there's much point
> in holding the lock here, ie: just doing something like:
> 
> hook = pv_rtc_handler;
> if ( hook )
>     hook(currd->arch.cmos_idx & 0x7f, data);
> 
> Should be as safe as what you do.

No, the compiler is free to eliminate the local variable and read
the global one twice (and it may change contents in between) then.
I could use ACCESS_ONCE() or read_atomic() here, but then it would
become quite clear that at the same time ...

> We also assume that setting pv_rtc_handler to NULL is an atomic
> operation.

... this (which isn't different from what we do elsewhere, and we
really can't fix everything at the same time) ought to also become
ACCESS_ONCE() (or write_atomic()).

A compromise might be to use barrier() in place of the locking for
now ...
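Jan's point about the compiler being free to re-read the global can be sketched with the Linux/Xen-style volatile cast. Only ACCESS_ONCE and the pv_rtc_handler name come from the thread; the hook type and helper names are invented for the example:

```c
#include <stddef.h>

/* A volatile-qualified access forces exactly one load, so the compiler
 * cannot eliminate the local and read the global twice (once for the
 * NULL check, once for the call). */
#define ACCESS_ONCE(x) (*(volatile __typeof__(x) *)&(x))

typedef void (*rtc_hook_t)(unsigned int idx, unsigned int data);

static rtc_hook_t pv_rtc_handler;  /* may be zapped concurrently */
static unsigned int calls;

static void count_hook(unsigned int idx, unsigned int data)
{
    (void)idx;
    (void)data;
    calls++;
}

/* Latch the pointer once; use only the local copy afterwards. */
static int invoke_hook(unsigned int idx, unsigned int data)
{
    rtc_hook_t hook = ACCESS_ONCE(pv_rtc_handler);

    if ( hook )
    {
        hook(idx, data);
        return 1;
    }
    return 0;
}

static int demo(void)
{
    pv_rtc_handler = count_hook;
    int first = invoke_hook(0x32, 0x7f);   /* hook present -> invoked */
    pv_rtc_handler = NULL;
    int second = invoke_hook(0x32, 0x7f);  /* hook zapped -> skipped */
    return first == 1 && second == 0 && calls == 1;
}
```

Note the sketch only shows the single-load guarantee; as the thread says, the store zapping the pointer would equally need an atomic/ONCE treatment to make the pairing complete.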

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jul 15 12:40:50 2020
From: Paul Durrant <xadimgnik@gmail.com>
To: "'Jan Beulich'" <jbeulich@suse.com>, <xen-devel@lists.xenproject.org>
Cc: 'Andrew Cooper' <andrew.cooper3@citrix.com>, 'Wei Liu' <wl@xen.org>,
 'Roger Pau Monné' <roger.pau@citrix.com>
Subject: RE: [PATCH 1/3] x86/HVM: fold hvm_io_assist() into its only caller
Date: Wed, 15 Jul 2020 13:40:42 +0100
Message-ID: <001901d65aa5$27c03560$7740a020$@xen.org>

> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Sent: 15 July 2020 13:04
> To: xen-devel@lists.xenproject.org
> Cc: Andrew Cooper <andrew.cooper3@citrix.com>; Roger Pau Monné <roger.pau@citrix.com>; Wei Liu
> <wl@xen.org>; Paul Durrant <paul@xen.org>
> Subject: [PATCH 1/3] x86/HVM: fold hvm_io_assist() into its only caller
> 
> While there are two call sites, the function they're in can be slightly
> re-arranged such that the code sequence can be added at its bottom. Note
> that the function's only caller has already checked sv->pending, and
> that the prior while() loop was just a slightly more fancy if()
> (allowing an early break out of the construct).
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> --- a/xen/arch/x86/hvm/ioreq.c
> +++ b/xen/arch/x86/hvm/ioreq.c
> @@ -103,23 +103,12 @@ bool hvm_io_pending(struct vcpu *v)
>      return false;
>  }
> 
> -static void hvm_io_assist(struct hvm_ioreq_vcpu *sv, uint64_t data)
> -{
> -    struct vcpu *v = sv->vcpu;
> -    ioreq_t *ioreq = &v->arch.hvm.hvm_io.io_req;
> -
> -    if ( hvm_ioreq_needs_completion(ioreq) )
> -        ioreq->data = data;
> -
> -    sv->pending = false;
> -}
> -
>  static bool hvm_wait_for_io(struct hvm_ioreq_vcpu *sv, ioreq_t *p)
>  {
>      unsigned int prev_state = STATE_IOREQ_NONE;
> +    uint64_t data = ~0;
> 
> -    while ( sv->pending )
> -    {
> +    do {
>          unsigned int state = p->state;

I guess this is beneficial from the point of view of restricting scope and...

> 
>          smp_rmb();
> @@ -132,7 +121,6 @@ static bool hvm_wait_for_io(struct hvm_i
>               * emulator is dying and it races with an I/O being
>               * requested.
>               */
> -            hvm_io_assist(sv, ~0ul);
>              break;

...(as you say) allowing this early break, but a forward jump to an 'out' label would be more consistent with other code. It works though so...

Reviewed-by: Paul Durrant <paul@xen.org>


>          }
> 
> @@ -149,7 +137,7 @@ static bool hvm_wait_for_io(struct hvm_i
>          {
>          case STATE_IORESP_READY: /* IORESP_READY -> NONE */
>              p->state = STATE_IOREQ_NONE;
> -            hvm_io_assist(sv, p->data);
> +            data = p->data;
>              break;
>          case STATE_IOREQ_READY:  /* IOREQ_{READY,INPROCESS} -> IORESP_READY */
>          case STATE_IOREQ_INPROCESS:
> @@ -164,7 +152,13 @@ static bool hvm_wait_for_io(struct hvm_i
>              domain_crash(sv->vcpu->domain);
>              return false; /* bail */
>          }
> -    }
> +    } while ( false );
> +
> +    p = &sv->vcpu->arch.hvm.hvm_io.io_req;
> +    if ( hvm_ioreq_needs_completion(p) )
> +        p->data = data;
> +
> +    sv->pending = false;
> 
>      return true;
>  }
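The single-pass do { } while ( false ) idiom in the patch can be shown in isolation. This is only a structural sketch with invented state names, not the real hvm_wait_for_io()/ioreq code:

```c
#include <stdbool.h>

/* 'break' acts as a structured early exit to the common completion path
 * below, avoiding a goto to an 'out' label. */
enum io_state { IO_NONE, IO_RESP_READY, IO_BAD };

static int complete_io(enum io_state s, unsigned long resp,
                       unsigned long *data_out)
{
    unsigned long data = ~0ul;  /* default, e.g. when the emulator died */

    do {
        if ( s == IO_NONE )
            break;              /* early exit: nothing to latch */
        if ( s == IO_BAD )
            return 0;           /* bail: caller would crash the domain */
        data = resp;            /* response ready: latch it */
    } while ( false );

    /* Common completion path, corresponding to the folded-in
     * hvm_io_assist() body. */
    *data_out = data;
    return 1;
}
```

Both the 'break' and the fall-through case converge on the completion code at the bottom, which is exactly why the helper could be folded into its caller.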




From xen-devel-bounces@lists.xenproject.org Wed Jul 15 12:42:52 2020
From: Jan Beulich <jbeulich@suse.com>
To: Julien Grall <julien.grall.oss@gmail.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 George Dunlap <George.Dunlap@eu.citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@citrix.com>,
 xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH 2/2] evtchn/fifo: don't enforce higher than necessary alignment
Date: Wed, 15 Jul 2020 14:42:46 +0200
Message-ID: <005381d5-3fb5-640f-002c-106c628a77a2@suse.com>

On 15.07.2020 12:46, Julien Grall wrote:
> On Wed, 15 Jul 2020, 12:17 Jan Beulich, <jbeulich@suse.com> wrote:
> 
>> Neither the code nor the original commit provide any justification for
>> the need to 8-byte align the struct in all cases. Enforce just as much
>> alignment as the structure actually needs - 4 bytes - by using alignof()
>> instead of a literal number.
>>
>> Take the opportunity and also
>> - add so far missing validation that native and compat mode layouts of
>>   the structures actually match,
>> - tie sizeof() expressions to the types of the fields we're actually
>>   after, rather than specifying the type explicitly (which in the
>>   general case risks a disconnect, even if there's close to zero risk in
>>   this particular case),
>> - use ENXIO instead of EINVAL for the two cases of the address not
>>   satisfying the requirements, which will make an issue here better
>>   stand out at the call site.
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>> ---
>> I question the need for the array_index_nospec() here. Or else I'd
>> expect map_vcpu_info() would also need the same.
>>
>> --- a/xen/common/event_fifo.c
>> +++ b/xen/common/event_fifo.c
>> @@ -504,6 +504,16 @@ static void setup_ports(struct domain *d
>>      }
>>  }
>>
>> +#ifdef CONFIG_COMPAT
>> +
>> +#include <compat/event_channel.h>
>> +
>> +#define xen_evtchn_fifo_control_block evtchn_fifo_control_block
>> +CHECK_evtchn_fifo_control_block;
>> +#undef xen_evtchn_fifo_control_block
>> +
>> +#endif
>> +
>>  int evtchn_fifo_init_control(struct evtchn_init_control *init_control)
>>  {
>>      struct domain *d = current->domain;
>> @@ -523,19 +533,20 @@ int evtchn_fifo_init_control(struct evtc
>>          return -ENOENT;
>>
>>      /* Must not cross page boundary. */
>> -    if ( offset > (PAGE_SIZE - sizeof(evtchn_fifo_control_block_t)) )
>> -        return -EINVAL;
>> +    if ( offset > (PAGE_SIZE - sizeof(*v->evtchn_fifo->control_block)) )
>> +        return -ENXIO;
>>
>>      /*
>>       * Make sure the guest controlled value offset is bounded even during
>>       * speculative execution.
>>       */
>>      offset = array_index_nospec(offset,
>> -                           PAGE_SIZE -
>> sizeof(evtchn_fifo_control_block_t) + 1);
>> +                                PAGE_SIZE -
>> +                                sizeof(*v->evtchn_fifo->control_block) +
>> 1);
>>
>> -    /* Must be 8-bytes aligned. */
>> -    if ( offset & (8 - 1) )
>> -        return -EINVAL;
>> +    /* Must be suitably aligned. */
>> +    if ( offset & (alignof(*v->evtchn_fifo->control_block) - 1) )
>> +        return -ENXIO;
>>
> 
> A guest relying on this new alignment wouldn't work on older versions of
> Xen, so I don't think a guest would ever be able to use it.
> 
> Therefore, is it really worth the change?

That's the question. One of your arguments for using a literal number
also for the vCPU info mapping check was that here a literal number
is used. The goal isn't so much relaxation of the interface, but
making the code consistent as well as eliminating (what I'd call) a
kludge.

Guests not caring to be able to run on older versions could also make
use of the relaxation (which may be more relevant in 10 years time
than it is now).
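The alignof()-based validation being proposed can be sketched like this, assuming a hypothetical 4-byte-aligned control block (the real evtchn_fifo_control_block layout is not reproduced here):

```c
#include <stdalign.h>
#include <stddef.h>

#define PAGE_SIZE 4096u

/* Hypothetical stand-in: only 32-bit members, hence 4-byte alignment,
 * which is the premise of the relaxation in the patch. */
struct control_block {
    unsigned int ready;
    unsigned int head[3];
};

/* A guest-supplied offset must leave room for the block within the page
 * and be aligned to what the structure actually needs, rather than to a
 * literal 8. */
static int offset_ok(unsigned int offset)
{
    if ( offset > PAGE_SIZE - sizeof(struct control_block) )
        return 0;  /* would cross the page boundary */
    if ( offset & (alignof(struct control_block) - 1) )
        return 0;  /* misaligned for this structure */
    return 1;
}
```

With a 4-byte-aligned structure, an offset such as 4 passes this check but would have been rejected by the old literal "offset & (8 - 1)" test, which is the relaxation being debated.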

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jul 15 12:45:38 2020
From: Jan Beulich <jbeulich@suse.com>
To: paul@xen.org
Cc: xen-devel@lists.xenproject.org, 'Roger Pau Monné' <roger.pau@citrix.com>,
 'Wei Liu' <wl@xen.org>, 'Andrew Cooper' <andrew.cooper3@citrix.com>
Subject: Re: [PATCH 1/3] x86/HVM: fold hvm_io_assist() into its only caller
Date: Wed, 15 Jul 2020 14:45:34 +0200
Message-ID: <ec74b9ba-dbc5-7ab4-0f65-6119bc49c333@suse.com>

On 15.07.2020 14:40, Paul Durrant wrote:
>> From: Jan Beulich <jbeulich@suse.com>
>> Sent: 15 July 2020 13:04
>>
>>  static bool hvm_wait_for_io(struct hvm_ioreq_vcpu *sv, ioreq_t *p)
>>  {
>>      unsigned int prev_state = STATE_IOREQ_NONE;
>> +    uint64_t data = ~0;
>>
>> -    while ( sv->pending )
>> -    {
>> +    do {
>>          unsigned int state = p->state;
> 
> I guess this is beneficial from the point of view of restricting scope and...
> 
>>
>>          smp_rmb();
>> @@ -132,7 +121,6 @@ static bool hvm_wait_for_io(struct hvm_i
>>               * emulator is dying and it races with an I/O being
>>               * requested.
>>               */
>> -            hvm_io_assist(sv, ~0ul);
>>              break;
> 
> ...(as you say) allowing this early break, but a forward jump to an 'out' label would be more consistent with other code. It works though so...

Since this gets restructured by subsequent patches I thought I'd
avoid the introduction of a disliked by me "goto".

> Reviewed-by: Paul Durrant <paul@xen.org>

Thanks, Jan


From xen-devel-bounces@lists.xenproject.org Wed Jul 15 12:47:27 2020
From: Paul Durrant <xadimgnik@gmail.com>
To: "'Jan Beulich'" <jbeulich@suse.com>, <xen-devel@lists.xenproject.org>
Cc: 'Andrew Cooper' <andrew.cooper3@citrix.com>, 'Wei Liu' <wl@xen.org>,
 'Roger Pau Monné' <roger.pau@citrix.com>
Subject: RE: [PATCH 2/3] x86/HVM: re-work hvm_wait_for_io() a little
Date: Wed, 15 Jul 2020 13:47:21 +0100
Message-ID: <001b01d65aa6$15710e10$40532a30$@xen.org>

> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Sent: 15 July 2020 13:04
> To: xen-devel@lists.xenproject.org
> Cc: Andrew Cooper <andrew.cooper3@citrix.com>; Roger Pau Monné <roger.pau@citrix.com>; Wei Liu
> <wl@xen.org>; Paul Durrant <paul@xen.org>
> Subject: [PATCH 2/3] x86/HVM: re-work hvm_wait_for_io() a little
>
> Convert the function's main loop to a more ordinary one, without goto
> and without initial steps not needing to be inside a loop at all.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>
> --- a/xen/arch/x86/hvm/ioreq.c
> +++ b/xen/arch/x86/hvm/ioreq.c
> @@ -106,24 +106,17 @@ bool hvm_io_pending(struct vcpu *v)
>  static bool hvm_wait_for_io(struct hvm_ioreq_vcpu *sv, ioreq_t *p)
>  {
>      unsigned int prev_state = STATE_IOREQ_NONE;
> +    unsigned int state = p->state;
>      uint64_t data = ~0;
>
> -    do {
> -        unsigned int state = p->state;

Oh, you lose the loop here anyway, so the conversion in patch #1 was only transient.

> -
> -        smp_rmb();
> -
> -    recheck:
> -        if ( unlikely(state == STATE_IOREQ_NONE) )
> -        {
> -            /*
> -             * The only reason we should see this case is when an
> -             * emulator is dying and it races with an I/O being
> -             * requested.
> -             */
> -            break;
> -        }
> +    smp_rmb();
>
> +    /*
> +     * The only reason we should see this condition be false is when an
> +     * emulator dying races with I/O being requested.
> +     */
> +    while ( likely(state != STATE_IOREQ_NONE) )
> +    {
>          if ( unlikely(state < prev_state) )
>          {
>              gdprintk(XENLOG_ERR, "Weird HVM ioreq state transition %u -> %u\n",
> @@ -139,20 +132,24 @@ static bool hvm_wait_for_io(struct hvm_i
>              p->state = STATE_IOREQ_NONE;
>              data = p->data;
>              break;
> +

Possibly mention the style fix-up in the commit comment.

>          case STATE_IOREQ_READY:  /* IOREQ_{READY,INPROCESS} -> IORESP_READY */
>          case STATE_IOREQ_INPROCESS:
>              wait_on_xen_event_channel(sv->ioreq_evtchn,
>                                        ({ state = p->state;
>                                           smp_rmb();
>                                           state != prev_state; }));
> -            goto recheck;
> +            continue;
> +

You could just break out of the switch now, I guess. Anyway...

Reviewed-by: Paul Durrant <paul@xen.org>

>          default:
>              gdprintk(XENLOG_ERR, "Weird HVM iorequest state %u\n", state);
>              sv->pending = false;
>              domain_crash(sv->vcpu->domain);
>              return false; /* bail */
>          }
> -    } while ( false );
> +
> +        break;
> +    }
>
>      p = &sv->vcpu->arch.hvm.hvm_io.io_req;
>      if ( hvm_ioreq_needs_completion(p) )




From xen-devel-bounces@lists.xenproject.org Wed Jul 15 12:50:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jul 2020 12:50:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvgri-0006wc-OC; Wed, 15 Jul 2020 12:50:26 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=9G22=A2=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jvgrh-0006wX-LE
 for xen-devel@lists.xenproject.org; Wed, 15 Jul 2020 12:50:25 +0000
X-Inumbo-ID: c04ac2ae-c699-11ea-bb8b-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c04ac2ae-c699-11ea-bb8b-bc764e2007e4;
 Wed, 15 Jul 2020 12:50:25 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 877CCAC23;
 Wed, 15 Jul 2020 12:50:27 +0000 (UTC)
Subject: Re: [PATCH 2/3] x86/HVM: re-work hvm_wait_for_io() a little
To: paul@xen.org, xen-devel@lists.xenproject.org
References: <42270be7-43d6-ba53-3896-e20b5d7e3de0@suse.com>
 <872c2d16-f49a-41fd-68ae-f1e0ee14c7d8@suse.com>
 <001b01d65aa6$15710e10$40532a30$@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <8d1908c1-05d7-2335-6e11-0b20a377361c@suse.com>
Date: Wed, 15 Jul 2020 14:50:24 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <001b01d65aa6$15710e10$40532a30$@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
Cc: 'Andrew Cooper' <andrew.cooper3@citrix.com>, 'Wei Liu' <wl@xen.org>,
 =?UTF-8?B?J1JvZ2VyIFBhdSBNb25uw6kn?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 15.07.2020 14:47, Paul Durrant wrote:
>> From: Jan Beulich <jbeulich@suse.com>
>> Sent: 15 July 2020 13:04
>>
>> @@ -139,20 +132,24 @@ static bool hvm_wait_for_io(struct hvm_i
>>              p->state = STATE_IOREQ_NONE;
>>              data = p->data;
>>              break;
>> +
> 
> Possibly mention the style fix-up in the commit comment.

Done.

>>          case STATE_IOREQ_READY:  /* IOREQ_{READY,INPROCESS} -> IORESP_READY */
>>          case STATE_IOREQ_INPROCESS:
>>              wait_on_xen_event_channel(sv->ioreq_evtchn,
>>                                        ({ state = p->state;
>>                                           smp_rmb();
>>                                           state != prev_state; }));
>> -            goto recheck;
>> +            continue;
>> +
> 
> You could just break out of the switch now, I guess.

I can't because of (see below).

> Anyway...
> 
> Reviewed-by: Paul Durrant <paul@xen.org>

Thanks.

>>          default:
>>              gdprintk(XENLOG_ERR, "Weird HVM iorequest state %u\n", state);
>>              sv->pending = false;
>>              domain_crash(sv->vcpu->domain);
>>              return false; /* bail */
>>          }
>> -    } while ( false );
>> +
>> +        break;
>> +    }

This "break" requires the use of "continue" inside the switch().

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jul 15 12:51:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jul 2020 12:51:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvgt4-00074R-2t; Wed, 15 Jul 2020 12:51:50 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=U57p=A2=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jvgt2-00074M-HS
 for xen-devel@lists.xenproject.org; Wed, 15 Jul 2020 12:51:48 +0000
X-Inumbo-ID: f10b1ce0-c699-11ea-93d9-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f10b1ce0-c699-11ea-93d9-12813bfff9fa;
 Wed, 15 Jul 2020 12:51:46 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 4FC80AD81;
 Wed, 15 Jul 2020 12:51:49 +0000 (UTC)
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org,
	xen-devel@dornerworks.com
Subject: [PATCH 00/12] tools: move more libraries into tools/libs
Date: Wed, 15 Jul 2020 14:51:43 +0200
Message-Id: <20200715125143.15199-1-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
Cc: Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?=
 <marmarek@invisiblethingslab.com>,
 Stewart Hildebrand <stewart.hildebrand@dornerworks.com>,
 Josh Whitehead <josh.whitehead@dornerworks.com>,
 Jan Beulich <jbeulich@suse.com>, Anthony PERARD <anthony.perard@citrix.com>,
 Samuel Thibault <samuel.thibault@ens-lyon.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Move some more libraries under tools/libs, including libxenctrl. This
results in a fair amount of cleanup of the library build logic and
some restructuring of the tools directory.

I have (for now) left out further libraries such as libxenguest and
libxl, but I can try moving those, too, if wanted.

Please note that patch 8 ("tools: move libxenctrl below tools/libs")
needs the related mini-os and qemu-trad patches applied in order not
to break the build:

https://lists.xen.org/archives/html/xen-devel/2020-07/msg00548.html
https://lists.xen.org/archives/html/xen-devel/2020-07/msg00617.html

As discussed at the Xen developers summit, this series has been
selected to act as a test case for sending pull requests via GitLab.
This is why only the cover letter, and _not_ the individual patches,
is sent to xen-devel.

The full series is available at:

git@gitlab.com:jgross1/xen.git libbuild/v1


Juergen Gross (12):
  stubdom: add stubdom/mini-os.mk for Xen paths used by Mini-OS
  tools: switch XEN_LIBXEN* make variables to lower case (XEN_libxen*)
  tools: add a copy of library headers in tools/include
  tools: don't call make recursively from libs.mk
  tools: define ROUNDUP() in tools/include/xen-tools/libs.h
  tools/misc: don't use libxenctrl internals from misc tools
  tools/libxc: untangle libxenctrl from libxenguest
  tools: move libxenctrl below tools/libs
  tools: split libxenstore into new tools/libs/store directory
  tools: split libxenvchan into new tools/libs/vchan directory
  tools: split libxenstat into new tools/libs/stat directory
  tools: generate most contents of library make variables

 .gitignore                                    |  29 +++-
 MAINTAINERS                                   |   2 +-
 stubdom/Makefile                              |  29 +++-
 stubdom/grub/kexec.c                          |   2 +-
 stubdom/mini-os.mk                            |  17 ++
 tools/Makefile                                |  15 +-
 tools/Rules.mk                                | 142 ++++++---------
 tools/console/daemon/io.c                     |   6 +-
 tools/golang/xenlight/Makefile                |   4 +-
 tools/helpers/init-xenstore-domain.c          |   2 +-
 tools/include/xen-tools/libs.h                |   4 +
 tools/libs/Makefile                           |   4 +
 tools/libs/call/Makefile                      |   3 +-
 tools/libs/call/buffer.c                      |   3 +-
 tools/libs/ctrl/Makefile                      |  68 ++++++++
 tools/{libxc => libs/ctrl}/include/xenctrl.h  |   0
 .../ctrl}/include/xenctrl_compat.h            |   0
 .../ctrl/include/xenctrl_dom.h}               |  10 +-
 tools/libs/ctrl/libxenctrl.map                |   3 +
 tools/{libxc => libs/ctrl}/xc_altp2m.c        |   0
 tools/{libxc => libs/ctrl}/xc_arinc653.c      |   0
 tools/{libxc => libs/ctrl}/xc_bitops.h        |   0
 tools/{libxc => libs/ctrl}/xc_core.c          |   5 +-
 tools/{libxc => libs/ctrl}/xc_core.h          |   2 +-
 tools/{libxc => libs/ctrl}/xc_core_arm.c      |   2 +-
 tools/{libxc => libs/ctrl}/xc_core_arm.h      |   0
 tools/{libxc => libs/ctrl}/xc_core_x86.c      |   6 +-
 tools/{libxc => libs/ctrl}/xc_core_x86.h      |   0
 tools/{libxc => libs/ctrl}/xc_cpu_hotplug.c   |   0
 tools/{libxc => libs/ctrl}/xc_cpupool.c       |   0
 tools/{libxc => libs/ctrl}/xc_csched.c        |   0
 tools/{libxc => libs/ctrl}/xc_csched2.c       |   0
 .../ctrl}/xc_devicemodel_compat.c             |   0
 tools/{libxc => libs/ctrl}/xc_domain.c        | 129 +-------------
 tools/{libxc => libs/ctrl}/xc_evtchn.c        |   0
 tools/{libxc => libs/ctrl}/xc_evtchn_compat.c |   0
 tools/{libxc => libs/ctrl}/xc_flask.c         |   0
 .../{libxc => libs/ctrl}/xc_foreign_memory.c  |   0
 tools/{libxc => libs/ctrl}/xc_freebsd.c       |   0
 tools/{libxc => libs/ctrl}/xc_gnttab.c        |   0
 tools/{libxc => libs/ctrl}/xc_gnttab_compat.c |   0
 tools/{libxc => libs/ctrl}/xc_hcall_buf.c     |   1 -
 tools/{libxc => libs/ctrl}/xc_kexec.c         |   0
 tools/{libxc => libs/ctrl}/xc_linux.c         |   0
 tools/{libxc => libs/ctrl}/xc_mem_access.c    |   0
 tools/{libxc => libs/ctrl}/xc_mem_paging.c    |   0
 tools/{libxc => libs/ctrl}/xc_memshr.c        |   0
 tools/{libxc => libs/ctrl}/xc_minios.c        |   0
 tools/{libxc => libs/ctrl}/xc_misc.c          |   0
 tools/{libxc => libs/ctrl}/xc_monitor.c       |   0
 tools/{libxc => libs/ctrl}/xc_msr_x86.h       |   0
 tools/{libxc => libs/ctrl}/xc_netbsd.c        |   0
 tools/{libxc => libs/ctrl}/xc_pagetab.c       |   0
 tools/{libxc => libs/ctrl}/xc_physdev.c       |   0
 tools/{libxc => libs/ctrl}/xc_pm.c            |   0
 tools/{libxc => libs/ctrl}/xc_private.c       |   3 +-
 tools/{libxc => libs/ctrl}/xc_private.h       |  36 ++++
 tools/{libxc => libs/ctrl}/xc_psr.c           |   0
 tools/{libxc => libs/ctrl}/xc_resource.c      |   0
 tools/{libxc => libs/ctrl}/xc_resume.c        |   2 -
 tools/{libxc => libs/ctrl}/xc_rt.c            |   0
 tools/{libxc => libs/ctrl}/xc_solaris.c       |   0
 tools/{libxc => libs/ctrl}/xc_tbuf.c          |   0
 tools/{libxc => libs/ctrl}/xc_vm_event.c      |   0
 .../ctrl/xenctrl.pc.in}                       |   0
 tools/libs/devicemodel/Makefile               |   3 +-
 tools/libs/evtchn/Makefile                    |   3 +-
 tools/libs/foreignmemory/Makefile             |   3 +-
 tools/libs/foreignmemory/linux.c              |   3 +-
 tools/libs/gnttab/Makefile                    |   3 +-
 tools/libs/gnttab/private.h                   |   3 -
 tools/libs/hypfs/Makefile                     |   3 +-
 tools/libs/libs.mk                            |  34 +++-
 .../{xenstat/libxenstat => libs/stat}/COPYING |   0
 .../libxenstat => libs/stat}/Makefile         |  97 +++--------
 .../stat}/bindings/swig/perl/.empty           |   0
 .../stat}/bindings/swig/python/.empty         |   0
 .../stat}/bindings/swig/xenstat.i             |   0
 .../src => libs/stat/include}/xenstat.h       |   3 +
 tools/libs/stat/libxenstat.map                |  54 ++++++
 .../libxenstat/src => libs/stat}/xenstat.c    |   0
 .../libxenstat => libs/stat}/xenstat.pc.in    |   2 +-
 .../src => libs/stat}/xenstat_freebsd.c       |   0
 .../src => libs/stat}/xenstat_linux.c         |   4 +-
 .../src => libs/stat}/xenstat_netbsd.c        |   0
 .../src => libs/stat}/xenstat_priv.h          |   0
 .../src => libs/stat}/xenstat_qmp.c           |   0
 .../src => libs/stat}/xenstat_solaris.c       |   0
 tools/libs/store/Makefile                     |  65 +++++++
 .../store}/include/compat/xs.h                |   0
 .../store}/include/compat/xs_lib.h            |   0
 .../store}/include/xenstore.h                 |   0
 tools/libs/store/libxenstore.map              |  49 ++++++
 tools/{xenstore => libs/store}/xenstore.pc.in |   0
 tools/{xenstore => libs/store}/xs.c           |   0
 tools/libs/toolcore/Makefile                  |   2 +-
 tools/libs/toollog/Makefile                   |   2 +-
 tools/libs/vchan/Makefile                     |  18 ++
 .../vchan/include}/libxenvchan.h              |   0
 tools/{libvchan => libs/vchan}/init.c         |   0
 tools/{libvchan => libs/vchan}/io.c           |   0
 tools/libs/vchan/libxenvchan.map              |  16 ++
 tools/{libvchan => libs/vchan}/xenvchan.pc.in |   0
 tools/libvchan/Makefile                       |  95 -----------
 tools/libxc/Makefile                          | 161 +++++-------------
 tools/libxc/include/xenguest.h                |   8 +-
 tools/libxc/xc_efi.h                          | 158 -----------------
 tools/libxc/xc_elf.h                          |  16 --
 .../libxc/{xc_cpuid_x86.c => xg_cpuid_x86.c}  |   0
 tools/libxc/{xc_dom_arm.c => xg_dom_arm.c}    |   2 +-
 ...imageloader.c => xg_dom_armzimageloader.c} |   2 +-
 ...{xc_dom_binloader.c => xg_dom_binloader.c} |   2 +-
 tools/libxc/{xc_dom_boot.c => xg_dom_boot.c}  |   2 +-
 ...bzimageloader.c => xg_dom_bzimageloader.c} |   2 +-
 ...m_compat_linux.c => xg_dom_compat_linux.c} |   2 +-
 tools/libxc/{xc_dom_core.c => xg_dom_core.c}  |   2 +-
 ...c_dom_decompress.h => xg_dom_decompress.h} |   4 +-
 ...compress_lz4.c => xg_dom_decompress_lz4.c} |   2 +-
 ...ss_unsafe.c => xg_dom_decompress_unsafe.c} |   2 +-
 ...ss_unsafe.h => xg_dom_decompress_unsafe.h} |   2 +-
 ...ip2.c => xg_dom_decompress_unsafe_bzip2.c} |   2 +-
 ...lzma.c => xg_dom_decompress_unsafe_lzma.c} |   2 +-
 ...o1x.c => xg_dom_decompress_unsafe_lzo1x.c} |   2 +-
 ...afe_xz.c => xg_dom_decompress_unsafe_xz.c} |   2 +-
 ...{xc_dom_elfloader.c => xg_dom_elfloader.c} |   2 +-
 ...{xc_dom_hvmloader.c => xg_dom_hvmloader.c} |   2 +-
 tools/libxc/{xc_dom_x86.c => xg_dom_x86.c}    |   2 +-
 tools/libxc/xg_domain.c                       | 149 ++++++++++++++++
 .../libxc/{xc_nomigrate.c => xg_nomigrate.c}  |   0
 .../{xc_offline_page.c => xg_offline_page.c}  |   2 +-
 tools/libxc/xg_private.h                      |  23 ---
 tools/libxc/xg_save_restore.h                 |  13 --
 .../libxc/{xc_sr_common.c => xg_sr_common.c}  |   2 +-
 .../libxc/{xc_sr_common.h => xg_sr_common.h}  |   4 +-
 ...{xc_sr_common_x86.c => xg_sr_common_x86.c} |   2 +-
 ...{xc_sr_common_x86.h => xg_sr_common_x86.h} |   2 +-
 ..._common_x86_pv.c => xg_sr_common_x86_pv.c} |   2 +-
 ..._common_x86_pv.h => xg_sr_common_x86_pv.h} |   2 +-
 .../{xc_sr_restore.c => xg_sr_restore.c}      |   2 +-
 ...tore_x86_hvm.c => xg_sr_restore_x86_hvm.c} |   2 +-
 ...estore_x86_pv.c => xg_sr_restore_x86_pv.c} |   2 +-
 tools/libxc/{xc_sr_save.c => xg_sr_save.c}    |   2 +-
 ...sr_save_x86_hvm.c => xg_sr_save_x86_hvm.c} |   2 +-
 ...c_sr_save_x86_pv.c => xg_sr_save_x86_pv.c} |   2 +-
 ..._stream_format.h => xg_sr_stream_format.h} |   0
 tools/libxc/{xc_suspend.c => xg_suspend.c}    |   0
 tools/libxl/Makefile                          |   2 +-
 tools/libxl/libxl_arm.c                       |   2 +-
 tools/libxl/libxl_arm.h                       |   2 +-
 tools/libxl/libxl_create.c                    |   2 +-
 tools/libxl/libxl_dm.c                        |   2 +-
 tools/libxl/libxl_dom.c                       |   2 +-
 tools/libxl/libxl_internal.h                  |   5 +-
 tools/libxl/libxl_vnuma.c                     |   2 +-
 tools/libxl/libxl_x86.c                       |   2 +-
 tools/libxl/libxl_x86_acpi.c                  |   2 +-
 tools/misc/Makefile                           |   5 +-
 tools/misc/xen-hptool.c                       |   8 +-
 tools/misc/xen-mfndump.c                      |  70 ++++----
 tools/python/Makefile                         |   2 +-
 tools/python/setup.py                         |  12 +-
 tools/python/xen/lowlevel/xc/xc.c             |   2 +-
 tools/vchan/Makefile                          |  37 ++++
 tools/{libvchan => vchan}/node-select.c       |   0
 tools/{libvchan => vchan}/node.c              |   0
 .../{libvchan => vchan}/vchan-socket-proxy.c  |   0
 tools/xcutils/readnotes.c                     |   2 +-
 tools/xenstat/Makefile                        |  10 --
 tools/xenstore/Makefile                       |  82 +--------
 tools/xenstore/{include => }/xenstore_lib.h   |   0
 tools/xenstore/xenstored_core.c               |   2 -
 tools/{xenstat => }/xentop/Makefile           |   2 +-
 tools/{xenstat => }/xentop/TODO               |   0
 tools/{xenstat => }/xentop/xentop.c           |   0
 174 files changed, 856 insertions(+), 986 deletions(-)
 create mode 100644 stubdom/mini-os.mk
 create mode 100644 tools/libs/ctrl/Makefile
 rename tools/{libxc => libs/ctrl}/include/xenctrl.h (100%)
 rename tools/{libxc => libs/ctrl}/include/xenctrl_compat.h (100%)
 rename tools/{libxc/include/xc_dom.h => libs/ctrl/include/xenctrl_dom.h} (98%)
 create mode 100644 tools/libs/ctrl/libxenctrl.map
 rename tools/{libxc => libs/ctrl}/xc_altp2m.c (100%)
 rename tools/{libxc => libs/ctrl}/xc_arinc653.c (100%)
 rename tools/{libxc => libs/ctrl}/xc_bitops.h (100%)
 rename tools/{libxc => libs/ctrl}/xc_core.c (99%)
 rename tools/{libxc => libs/ctrl}/xc_core.h (99%)
 rename tools/{libxc => libs/ctrl}/xc_core_arm.c (99%)
 rename tools/{libxc => libs/ctrl}/xc_core_arm.h (100%)
 rename tools/{libxc => libs/ctrl}/xc_core_x86.c (98%)
 rename tools/{libxc => libs/ctrl}/xc_core_x86.h (100%)
 rename tools/{libxc => libs/ctrl}/xc_cpu_hotplug.c (100%)
 rename tools/{libxc => libs/ctrl}/xc_cpupool.c (100%)
 rename tools/{libxc => libs/ctrl}/xc_csched.c (100%)
 rename tools/{libxc => libs/ctrl}/xc_csched2.c (100%)
 rename tools/{libxc => libs/ctrl}/xc_devicemodel_compat.c (100%)
 rename tools/{libxc => libs/ctrl}/xc_domain.c (94%)
 rename tools/{libxc => libs/ctrl}/xc_evtchn.c (100%)
 rename tools/{libxc => libs/ctrl}/xc_evtchn_compat.c (100%)
 rename tools/{libxc => libs/ctrl}/xc_flask.c (100%)
 rename tools/{libxc => libs/ctrl}/xc_foreign_memory.c (100%)
 rename tools/{libxc => libs/ctrl}/xc_freebsd.c (100%)
 rename tools/{libxc => libs/ctrl}/xc_gnttab.c (100%)
 rename tools/{libxc => libs/ctrl}/xc_gnttab_compat.c (100%)
 rename tools/{libxc => libs/ctrl}/xc_hcall_buf.c (99%)
 rename tools/{libxc => libs/ctrl}/xc_kexec.c (100%)
 rename tools/{libxc => libs/ctrl}/xc_linux.c (100%)
 rename tools/{libxc => libs/ctrl}/xc_mem_access.c (100%)
 rename tools/{libxc => libs/ctrl}/xc_mem_paging.c (100%)
 rename tools/{libxc => libs/ctrl}/xc_memshr.c (100%)
 rename tools/{libxc => libs/ctrl}/xc_minios.c (100%)
 rename tools/{libxc => libs/ctrl}/xc_misc.c (100%)
 rename tools/{libxc => libs/ctrl}/xc_monitor.c (100%)
 rename tools/{libxc => libs/ctrl}/xc_msr_x86.h (100%)
 rename tools/{libxc => libs/ctrl}/xc_netbsd.c (100%)
 rename tools/{libxc => libs/ctrl}/xc_pagetab.c (100%)
 rename tools/{libxc => libs/ctrl}/xc_physdev.c (100%)
 rename tools/{libxc => libs/ctrl}/xc_pm.c (100%)
 rename tools/{libxc => libs/ctrl}/xc_private.c (99%)
 rename tools/{libxc => libs/ctrl}/xc_private.h (91%)
 rename tools/{libxc => libs/ctrl}/xc_psr.c (100%)
 rename tools/{libxc => libs/ctrl}/xc_resource.c (100%)
 rename tools/{libxc => libs/ctrl}/xc_resume.c (99%)
 rename tools/{libxc => libs/ctrl}/xc_rt.c (100%)
 rename tools/{libxc => libs/ctrl}/xc_solaris.c (100%)
 rename tools/{libxc => libs/ctrl}/xc_tbuf.c (100%)
 rename tools/{libxc => libs/ctrl}/xc_vm_event.c (100%)
 rename tools/{libxc/xencontrol.pc.in => libs/ctrl/xenctrl.pc.in} (100%)
 rename tools/{xenstat/libxenstat => libs/stat}/COPYING (100%)
 rename tools/{xenstat/libxenstat => libs/stat}/Makefile (56%)
 rename tools/{xenstat/libxenstat => libs/stat}/bindings/swig/perl/.empty (100%)
 rename tools/{xenstat/libxenstat => libs/stat}/bindings/swig/python/.empty (100%)
 rename tools/{xenstat/libxenstat => libs/stat}/bindings/swig/xenstat.i (100%)
 rename tools/{xenstat/libxenstat/src => libs/stat/include}/xenstat.h (98%)
 create mode 100644 tools/libs/stat/libxenstat.map
 rename tools/{xenstat/libxenstat/src => libs/stat}/xenstat.c (100%)
 rename tools/{xenstat/libxenstat => libs/stat}/xenstat.pc.in (82%)
 rename tools/{xenstat/libxenstat/src => libs/stat}/xenstat_freebsd.c (100%)
 rename tools/{xenstat/libxenstat/src => libs/stat}/xenstat_linux.c (98%)
 rename tools/{xenstat/libxenstat/src => libs/stat}/xenstat_netbsd.c (100%)
 rename tools/{xenstat/libxenstat/src => libs/stat}/xenstat_priv.h (100%)
 rename tools/{xenstat/libxenstat/src => libs/stat}/xenstat_qmp.c (100%)
 rename tools/{xenstat/libxenstat/src => libs/stat}/xenstat_solaris.c (100%)
 create mode 100644 tools/libs/store/Makefile
 rename tools/{xenstore => libs/store}/include/compat/xs.h (100%)
 rename tools/{xenstore => libs/store}/include/compat/xs_lib.h (100%)
 rename tools/{xenstore => libs/store}/include/xenstore.h (100%)
 create mode 100644 tools/libs/store/libxenstore.map
 rename tools/{xenstore => libs/store}/xenstore.pc.in (100%)
 rename tools/{xenstore => libs/store}/xs.c (100%)
 create mode 100644 tools/libs/vchan/Makefile
 rename tools/{libvchan => libs/vchan/include}/libxenvchan.h (100%)
 rename tools/{libvchan => libs/vchan}/init.c (100%)
 rename tools/{libvchan => libs/vchan}/io.c (100%)
 create mode 100644 tools/libs/vchan/libxenvchan.map
 rename tools/{libvchan => libs/vchan}/xenvchan.pc.in (100%)
 delete mode 100644 tools/libvchan/Makefile
 delete mode 100644 tools/libxc/xc_efi.h
 delete mode 100644 tools/libxc/xc_elf.h
 rename tools/libxc/{xc_cpuid_x86.c => xg_cpuid_x86.c} (100%)
 rename tools/libxc/{xc_dom_arm.c => xg_dom_arm.c} (99%)
 rename tools/libxc/{xc_dom_armzimageloader.c => xg_dom_armzimageloader.c} (99%)
 rename tools/libxc/{xc_dom_binloader.c => xg_dom_binloader.c} (99%)
 rename tools/libxc/{xc_dom_boot.c => xg_dom_boot.c} (99%)
 rename tools/libxc/{xc_dom_bzimageloader.c => xg_dom_bzimageloader.c} (99%)
 rename tools/libxc/{xc_dom_compat_linux.c => xg_dom_compat_linux.c} (99%)
 rename tools/libxc/{xc_dom_core.c => xg_dom_core.c} (99%)
 rename tools/libxc/{xc_dom_decompress.h => xg_dom_decompress.h} (62%)
 rename tools/libxc/{xc_dom_decompress_lz4.c => xg_dom_decompress_lz4.c} (98%)
 rename tools/libxc/{xc_dom_decompress_unsafe.c => xg_dom_decompress_unsafe.c} (96%)
 rename tools/libxc/{xc_dom_decompress_unsafe.h => xg_dom_decompress_unsafe.h} (97%)
 rename tools/libxc/{xc_dom_decompress_unsafe_bzip2.c => xg_dom_decompress_unsafe_bzip2.c} (87%)
 rename tools/libxc/{xc_dom_decompress_unsafe_lzma.c => xg_dom_decompress_unsafe_lzma.c} (87%)
 rename tools/libxc/{xc_dom_decompress_unsafe_lzo1x.c => xg_dom_decompress_unsafe_lzo1x.c} (96%)
 rename tools/libxc/{xc_dom_decompress_unsafe_xz.c => xg_dom_decompress_unsafe_xz.c} (95%)
 rename tools/libxc/{xc_dom_elfloader.c => xg_dom_elfloader.c} (99%)
 rename tools/libxc/{xc_dom_hvmloader.c => xg_dom_hvmloader.c} (99%)
 rename tools/libxc/{xc_dom_x86.c => xg_dom_x86.c} (99%)
 create mode 100644 tools/libxc/xg_domain.c
 rename tools/libxc/{xc_nomigrate.c => xg_nomigrate.c} (100%)
 rename tools/libxc/{xc_offline_page.c => xg_offline_page.c} (99%)
 rename tools/libxc/{xc_sr_common.c => xg_sr_common.c} (99%)
 rename tools/libxc/{xc_sr_common.h => xg_sr_common.h} (99%)
 rename tools/libxc/{xc_sr_common_x86.c => xg_sr_common_x86.c} (99%)
 rename tools/libxc/{xc_sr_common_x86.h => xg_sr_common_x86.h} (98%)
 rename tools/libxc/{xc_sr_common_x86_pv.c => xg_sr_common_x86_pv.c} (99%)
 rename tools/libxc/{xc_sr_common_x86_pv.h => xg_sr_common_x86_pv.h} (98%)
 rename tools/libxc/{xc_sr_restore.c => xg_sr_restore.c} (99%)
 rename tools/libxc/{xc_sr_restore_x86_hvm.c => xg_sr_restore_x86_hvm.c} (99%)
 rename tools/libxc/{xc_sr_restore_x86_pv.c => xg_sr_restore_x86_pv.c} (99%)
 rename tools/libxc/{xc_sr_save.c => xg_sr_save.c} (99%)
 rename tools/libxc/{xc_sr_save_x86_hvm.c => xg_sr_save_x86_hvm.c} (99%)
 rename tools/libxc/{xc_sr_save_x86_pv.c => xg_sr_save_x86_pv.c} (99%)
 rename tools/libxc/{xc_sr_stream_format.h => xg_sr_stream_format.h} (100%)
 rename tools/libxc/{xc_suspend.c => xg_suspend.c} (100%)
 create mode 100644 tools/vchan/Makefile
 rename tools/{libvchan => vchan}/node-select.c (100%)
 rename tools/{libvchan => vchan}/node.c (100%)
 rename tools/{libvchan => vchan}/vchan-socket-proxy.c (100%)
 delete mode 100644 tools/xenstat/Makefile
 rename tools/xenstore/{include => }/xenstore_lib.h (100%)
 rename tools/{xenstat => }/xentop/Makefile (97%)
 rename tools/{xenstat => }/xentop/TODO (100%)
 rename tools/{xenstat => }/xentop/xentop.c (100%)

-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Wed Jul 15 12:52:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jul 2020 12:52:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvgtf-00078W-Gj; Wed, 15 Jul 2020 12:52:27 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hlgd=A2=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1jvgtf-00078Q-1G
 for xen-devel@lists.xenproject.org; Wed, 15 Jul 2020 12:52:27 +0000
X-Inumbo-ID: 088c1d60-c69a-11ea-b7bb-bc764e2007e4
Received: from mail-wm1-x329.google.com (unknown [2a00:1450:4864:20::329])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 088c1d60-c69a-11ea-b7bb-bc764e2007e4;
 Wed, 15 Jul 2020 12:52:26 +0000 (UTC)
Received: by mail-wm1-x329.google.com with SMTP id o8so5454984wmh.4
 for <xen-devel@lists.xenproject.org>; Wed, 15 Jul 2020 05:52:26 -0700 (PDT)
X-Received: by 2002:a7b:c18f:: with SMTP id y15mr8733045wmi.85.1594817545545; 
 Wed, 15 Jul 2020 05:52:25 -0700 (PDT)
Received: from CBGR90WXYV0 (54-240-197-235.amazon.com. [54.240.197.235])
 by smtp.gmail.com with ESMTPSA id z63sm3645724wmb.2.2020.07.15.05.52.24
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Wed, 15 Jul 2020 05:52:25 -0700 (PDT)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
To: "'Jan Beulich'" <jbeulich@suse.com>,
	<xen-devel@lists.xenproject.org>
References: <42270be7-43d6-ba53-3896-e20b5d7e3de0@suse.com>
 <ab92e0ec-d869-dae6-f47c-b7ac55bea326@suse.com>
In-Reply-To: <ab92e0ec-d869-dae6-f47c-b7ac55bea326@suse.com>
Subject: RE: [PATCH 3/3] x86/HVM: fold both instances of looking up a
 hvm_ioreq_vcpu with a request pending
Date: Wed, 15 Jul 2020 13:52:24 +0100
Message-ID: <001d01d65aa6$c9c57400$5d505c00$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="utf-8"
Content-Transfer-Encoding: quoted-printable
X-Mailer: Microsoft Outlook 16.0
Content-Language: en-gb
Thread-Index: AQKAf3+muaXr7mwqqHZG6k9bSD0TwwGy5eBrp6a2GcA=
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Reply-To: paul@xen.org
Cc: 'Andrew Cooper' <andrew.cooper3@citrix.com>, 'Wei Liu' <wl@xen.org>,
 =?utf-8?Q?'Roger_Pau_Monn=C3=A9'?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Sent: 15 July 2020 13:05
> To: xen-devel@lists.xenproject.org
> Cc: Andrew Cooper <andrew.cooper3@citrix.com>; Roger Pau Monné <roger.pau@citrix.com>; Wei Liu
> <wl@xen.org>; Paul Durrant <paul@xen.org>
> Subject: [PATCH 3/3] x86/HVM: fold both instances of looking up a hvm_ioreq_vcpu with a request
> pending
>
> It seems pretty likely that the "break" in the loop getting replaced in
> handle_hvm_io_completion() was meant to exit both nested loops at the
> same time. Re-purpose what has been hvm_io_pending() to hand back the
> struct hvm_ioreq_vcpu instance found, and use it to replace the two
> nested loops.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Yes, much neater...

Reviewed-by: Paul Durrant <paul@xen.org>

>
> --- a/xen/arch/x86/hvm/ioreq.c
> +++ b/xen/arch/x86/hvm/ioreq.c
> @@ -81,7 +81,8 @@ static ioreq_t *get_ioreq(struct hvm_ior
>      return &p->vcpu_ioreq[v->vcpu_id];
>  }
>
> -bool hvm_io_pending(struct vcpu *v)
> +static struct hvm_ioreq_vcpu *get_pending_vcpu(const struct vcpu *v,
> +                                               struct hvm_ioreq_server **srvp)
>  {
>      struct domain *d = v->domain;
>      struct hvm_ioreq_server *s;
> @@ -96,11 +97,20 @@ bool hvm_io_pending(struct vcpu *v)
>                                list_entry )
>          {
>              if ( sv->vcpu == v && sv->pending )
> -                return true;
> +            {
> +                if ( srvp )
> +                    *srvp = s;
> +                return sv;
> +            }
>          }
>      }
>
> -    return false;
> +    return NULL;
> +}
> +
> +bool hvm_io_pending(struct vcpu *v)
> +{
> +    return get_pending_vcpu(v, NULL);
>  }
>
>  static bool hvm_wait_for_io(struct hvm_ioreq_vcpu *sv, ioreq_t *p)
> @@ -165,8 +175,8 @@ bool handle_hvm_io_completion(struct vcp
>      struct domain *d = v->domain;
>      struct hvm_vcpu_io *vio = &v->arch.hvm.hvm_io;
>      struct hvm_ioreq_server *s;
> +    struct hvm_ioreq_vcpu *sv;
>      enum hvm_io_completion io_completion;
> -    unsigned int id;
>
>      if ( has_vpci(d) && vpci_process_pending(v) )
>      {
> @@ -174,23 +184,9 @@ bool handle_hvm_io_completion(struct vcp
>          return false;
>      }
>
> -    FOR_EACH_IOREQ_SERVER(d, id, s)
> -    {
> -        struct hvm_ioreq_vcpu *sv;
> -
> -        list_for_each_entry ( sv,
> -                              &s->ioreq_vcpu_list,
> -                              list_entry )
> -        {
> -            if ( sv->vcpu == v && sv->pending )
> -            {
> -                if ( !hvm_wait_for_io(sv, get_ioreq(s, v)) )
> -                    return false;
> -
> -                break;
> -            }
> -        }
> -    }
> +    sv = get_pending_vcpu(v, &s);
> +    if ( sv && !hvm_wait_for_io(sv, get_ioreq(s, v)) )
> +        return false;
>
>      vio->io_req.state = hvm_ioreq_needs_completion(&vio->io_req) ?
>          STATE_IORESP_READY : STATE_IOREQ_NONE;
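
The shape of the refactoring can be sketched in isolation. The structures and names below are illustrative stand-ins for the real hvm_ioreq_server/hvm_ioreq_vcpu lists, not the Xen code itself: a lookup helper whose non-NULL return replaces both the boolean predicate and the nested-loop search.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical stand-ins for the Xen structures; field and type
 * names are illustrative only. */
struct vcpu_state { int vcpu_id; int pending; };
struct server { struct vcpu_state *list; size_t n; };

/* Walk every server's vcpu list and return the entry with a request
 * pending, optionally reporting which server it belongs to. A NULL
 * return means "nothing pending", exiting both loops at once. */
static struct vcpu_state *get_pending(struct server *srvs, size_t nsrv,
                                      int vcpu_id, struct server **srvp)
{
    for ( size_t i = 0; i < nsrv; i++ )
        for ( size_t j = 0; j < srvs[i].n; j++ )
            if ( srvs[i].list[j].vcpu_id == vcpu_id &&
                 srvs[i].list[j].pending )
            {
                if ( srvp )
                    *srvp = &srvs[i];
                return &srvs[i].list[j];
            }

    return NULL;
}

/* The old boolean predicate becomes a thin wrapper over the helper,
 * mirroring how hvm_io_pending() is re-purposed in the patch. */
static int io_pending(struct server *srvs, size_t nsrv, int vcpu_id)
{
    return get_pending(srvs, nsrv, vcpu_id, NULL) != NULL;
}
```

Callers that only need the yes/no answer keep the old interface; the one caller that needs the entry and its server uses the helper directly.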



From xen-devel-bounces@lists.xenproject.org Wed Jul 15 13:01:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jul 2020 13:01:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvh2b-00089B-Gi; Wed, 15 Jul 2020 13:01:41 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=9G22=A2=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jvh2a-000896-Mi
 for xen-devel@lists.xenproject.org; Wed, 15 Jul 2020 13:01:40 +0000
X-Inumbo-ID: 526a27f1-c69b-11ea-93da-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 526a27f1-c69b-11ea-93da-12813bfff9fa;
 Wed, 15 Jul 2020 13:01:39 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 27F5BAD81;
 Wed, 15 Jul 2020 13:01:42 +0000 (UTC)
Subject: Re: [PATCH 00/12] tools: move more libraries into tools/libs
To: Juergen Gross <jgross@suse.com>, George Dunlap <george.dunlap@citrix.com>
References: <20200715125143.15199-1-jgross@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <8dbf6f72-8968-47e1-6806-436acdccc928@suse.com>
Date: Wed, 15 Jul 2020 15:01:39 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20200715125143.15199-1-jgross@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 =?UTF-8?Q?Marek_Marczykowski-G=c3=b3recki?= <marmarek@invisiblethingslab.com>,
 Stewart Hildebrand <stewart.hildebrand@dornerworks.com>,
 Josh Whitehead <josh.whitehead@dornerworks.com>,
 Samuel Thibault <samuel.thibault@ens-lyon.org>,
 Anthony PERARD <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 15.07.2020 14:51, Juergen Gross wrote:
> Move some more libraries under tools/libs, including libxenctrl. This
> is resulting in a lot of cleanup work regarding building libs and
> restructuring of the tools directory.
> 
> I have (for now) left out some more libraries like libxenguest and
> libxl, but I can have a try moving those, too, if wanted.
> 
> Please note that patch 8 ("tools: move libxenctrl below tools/libs")
> needs the related mini-os and qemu-trad patches applied in order not
> to break the build:
> 
> https://lists.xen.org/archives/html/xen-devel/2020-07/msg00548.html
> https://lists.xen.org/archives/html/xen-devel/2020-07/msg00617.html
> 
> As discussed at the Xen developers summit this series has been
> selected to act as a test case for sending pull requests via gitlab.
> This is the reason the patches are _not_ sent individually to
> xen-devel, but only the cover letter.

I don't think I've seen any summary of that discussion, and hence I
also can't judge or know whether this is meant just for huge and
presumably very mechanical series, or as a general form of patch
submission. The immediate point that strikes me is - how would I
comment on such series without having to go to gitlab? (There are
likely other issues I'd want to see addressed before this becomes
a common process.)

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jul 15 13:06:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jul 2020 13:06:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvh70-0008Il-30; Wed, 15 Jul 2020 13:06:14 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=9G22=A2=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jvh6y-0008If-Lv
 for xen-devel@lists.xenproject.org; Wed, 15 Jul 2020 13:06:12 +0000
X-Inumbo-ID: f461c248-c69b-11ea-b7bb-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f461c248-c69b-11ea-b7bb-bc764e2007e4;
 Wed, 15 Jul 2020 13:06:11 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id ED95DB60B;
 Wed, 15 Jul 2020 13:06:13 +0000 (UTC)
Subject: Re: [PATCH v7 10/15] efi: switch to new APIs in EFI code
To: Hongyan Xia <hx242@xen.org>
References: <cover.1590750232.git.hongyxia@amazon.com>
 <586cb3db63838c5eb10822cdd4efec999e886f02.1590750232.git.hongyxia@amazon.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <681eb5d3-d72b-5f2e-5bfa-8c409224e354@suse.com>
Date: Wed, 15 Jul 2020 15:06:11 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <586cb3db63838c5eb10822cdd4efec999e886f02.1590750232.git.hongyxia@amazon.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, julien@xen.org,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 29.05.2020 13:11, Hongyan Xia wrote:
> From: Wei Liu <wei.liu2@citrix.com>
> 
> Signed-off-by: Wei Liu <wei.liu2@citrix.com>
> Signed-off-by: Hongyan Xia <hongyxia@amazon.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>


From xen-devel-bounces@lists.xenproject.org Wed Jul 15 13:10:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jul 2020 13:10:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvhBW-0000h0-O6; Wed, 15 Jul 2020 13:10:54 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=U57p=A2=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jvhBV-0000gv-7Y
 for xen-devel@lists.xenproject.org; Wed, 15 Jul 2020 13:10:53 +0000
X-Inumbo-ID: 9bd0ad15-c69c-11ea-93dd-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 9bd0ad15-c69c-11ea-93dd-12813bfff9fa;
 Wed, 15 Jul 2020 13:10:52 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id CFCB2B61F;
 Wed, 15 Jul 2020 13:10:54 +0000 (UTC)
Subject: Re: [PATCH 00/12] tools: move more libraries into tools/libs
To: Jan Beulich <jbeulich@suse.com>, George Dunlap <george.dunlap@citrix.com>
References: <20200715125143.15199-1-jgross@suse.com>
 <8dbf6f72-8968-47e1-6806-436acdccc928@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <b4dc4c9d-fa5a-e480-7d4a-95480c40cd1d@suse.com>
Date: Wed, 15 Jul 2020 15:10:50 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.9.0
MIME-Version: 1.0
In-Reply-To: <8dbf6f72-8968-47e1-6806-436acdccc928@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 =?UTF-8?Q?Marek_Marczykowski-G=c3=b3recki?= <marmarek@invisiblethingslab.com>,
 Stewart Hildebrand <stewart.hildebrand@dornerworks.com>,
 Josh Whitehead <josh.whitehead@dornerworks.com>,
 Samuel Thibault <samuel.thibault@ens-lyon.org>,
 Anthony PERARD <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 15.07.20 15:01, Jan Beulich wrote:
> On 15.07.2020 14:51, Juergen Gross wrote:
>> Move some more libraries under tools/libs, including libxenctrl. This
>> is resulting in a lot of cleanup work regarding building libs and
>> restructuring of the tools directory.
>>
>> I have (for now) left out some more libraries like libxenguest and
>> libxl, but I can have a try moving those, too, if wanted.
>>
>> Please note that patch 8 ("tools: move libxenctrl below tools/libs")
>> needs the related mini-os and qemu-trad patches applied in order not
>> to break the build:
>>
>> https://lists.xen.org/archives/html/xen-devel/2020-07/msg00548.html
>> https://lists.xen.org/archives/html/xen-devel/2020-07/msg00617.html
>>
>> As discussed at the Xen developers summit this series has been
>> selected to act as a test case for sending pull requests via gitlab.
>> This is the reason the patches are _not_ sent individually to
>> xen-devel, but only the cover letter.
> 
> I don't think I've seen any summary of that discussion, and hence I
> also can't judge or know whether this is meant just for huge and
> presumably very mechanical series, or as a general form of patch
> submission. The immediate point that strikes me is - how would I
> comment on such series without having to go to gitlab? (There are
> likely other issues I'd want to see addressed before this becomes
> a common process.)

Those are basically the reasons we decided to have a try.

There is certainly a how-to needed for submitters and some scripting
to at least ping the maintainers. I have volunteered to send a series
via gitlab as I already had this one in the works and the main
maintainers of the modified files agreed to try the gitlab workflow.

We need to learn whether this is something to consider or not, and for
this purpose we need a practical example.

TBH I had some reservations as I like to be able to review patches while
being offline, so I want to make sure this is still possible somehow.
And being part of the experiment ensures I'm not missing anything. :-)


Juergen


From xen-devel-bounces@lists.xenproject.org Wed Jul 15 13:11:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jul 2020 13:11:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvhBf-0000hk-0R; Wed, 15 Jul 2020 13:11:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=9G22=A2=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jvhBd-0000hV-NU
 for xen-devel@lists.xenproject.org; Wed, 15 Jul 2020 13:11:01 +0000
X-Inumbo-ID: a119a41a-c69c-11ea-b7bb-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a119a41a-c69c-11ea-b7bb-bc764e2007e4;
 Wed, 15 Jul 2020 13:11:01 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id B9E92B61D;
 Wed, 15 Jul 2020 13:11:03 +0000 (UTC)
Subject: Re: [PATCH v7 12/15] x86/smpboot: switch clone_mapping() to new APIs
To: Hongyan Xia <hx242@xen.org>
References: <cover.1590750232.git.hongyxia@amazon.com>
 <2c1d26b0c7fc681d291adc50f65f77922f10f9d2.1590750232.git.hongyxia@amazon.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <93bcbb14-3159-3757-874b-0bdc124dee1f@suse.com>
Date: Wed, 15 Jul 2020 15:11:01 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <2c1d26b0c7fc681d291adc50f65f77922f10f9d2.1590750232.git.hongyxia@amazon.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, julien@xen.org,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 29.05.2020 13:11, Hongyan Xia wrote:
> From: Wei Liu <wei.liu2@citrix.com>
> 
> Signed-off-by: Wei Liu <wei.liu2@citrix.com>
> Signed-off-by: Hongyan Xia <hongyxia@amazon.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>


From xen-devel-bounces@lists.xenproject.org Wed Jul 15 13:16:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jul 2020 13:16:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvhGc-0000yw-KS; Wed, 15 Jul 2020 13:16:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=9G22=A2=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jvhGc-0000yr-7a
 for xen-devel@lists.xenproject.org; Wed, 15 Jul 2020 13:16:10 +0000
X-Inumbo-ID: 58e2b88e-c69d-11ea-bb8b-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 58e2b88e-c69d-11ea-bb8b-bc764e2007e4;
 Wed, 15 Jul 2020 13:16:09 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 0DD46B6F9;
 Wed, 15 Jul 2020 13:16:12 +0000 (UTC)
Subject: Re: [PATCH v7 14/15] x86: switch to use domheap page for page tables
To: Hongyan Xia <hx242@xen.org>
References: <cover.1590750232.git.hongyxia@amazon.com>
 <85808fae77da535b2997bede8965d22d5c80c5d3.1590750232.git.hongyxia@amazon.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <7f402905-adba-130f-b000-a98f7e607d2d@suse.com>
Date: Wed, 15 Jul 2020 15:16:08 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <85808fae77da535b2997bede8965d22d5c80c5d3.1590750232.git.hongyxia@amazon.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, julien@xen.org,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 29.05.2020 13:11, Hongyan Xia wrote:
> From: Hongyan Xia <hongyxia@amazon.com>
> 
> Signed-off-by: Wei Liu <wei.liu2@citrix.com>
> Signed-off-by: Hongyan Xia <hongyxia@amazon.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>
with a sufficiently minor remark:

> --- a/xen/arch/x86/mm.c
> +++ b/xen/arch/x86/mm.c
> @@ -4918,10 +4918,11 @@ mfn_t alloc_xen_pagetable_new(void)
>  {
>      if ( system_state != SYS_STATE_early_boot )
>      {
> -        void *ptr = alloc_xenheap_page();
>  
> -        BUG_ON(!hardware_domain && !ptr);
> -        return ptr ? virt_to_mfn(ptr) : INVALID_MFN;
> +        struct page_info *pg = alloc_domheap_page(NULL, 0);
> +
> +        BUG_ON(!hardware_domain && !pg);
> +        return pg ? page_to_mfn(pg) : INVALID_MFN;

pg doesn't even get de-referenced, let alone modified. Hence it
would better be pointer-to-const, despite this possibly feeling a
little odd to some of us given this is a freshly allocated page.

Jan
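
The style point can be illustrated with a stand-alone sketch; the allocator and conversion below are stubs, not the Xen API. A pointer that is only NULL-checked and converted, never dereferenced or written through, can be const-qualified even though it names a freshly allocated page.

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

struct page_info { unsigned long flags; };

/* Stand-in for page_to_mfn(): only the address is consumed, so the
 * parameter can legitimately be pointer-to-const. */
static uintptr_t page_to_mfn_stub(const struct page_info *pg)
{
    return (uintptr_t)pg >> 12;
}

static uintptr_t alloc_pagetable_mfn(void)
{
    /* const is fine here: pg is never dereferenced, only checked
     * against NULL and converted to a frame number. */
    const struct page_info *pg = malloc(sizeof(*pg));

    return pg ? page_to_mfn_stub(pg) : (uintptr_t)-1;
}
```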


From xen-devel-bounces@lists.xenproject.org Wed Jul 15 13:32:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jul 2020 13:32:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvhWM-0002hs-1H; Wed, 15 Jul 2020 13:32:26 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=osgb=A2=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jvhWK-0002hn-Rf
 for xen-devel@lists.xenproject.org; Wed, 15 Jul 2020 13:32:24 +0000
X-Inumbo-ID: 9d76f3e6-c69f-11ea-bb8b-bc764e2007e4
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9d76f3e6-c69f-11ea-bb8b-bc764e2007e4;
 Wed, 15 Jul 2020 13:32:23 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1594819944;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:content-transfer-encoding:in-reply-to;
 bh=XxZF1lQLVaj+iOEvxrgNcgapri9fOtFZTq6pex1n/ek=;
 b=iikROLQjzIDybMHYeq4D+B4d8Fw81gah/GUhnefO7KSZDQot4AU8rhEB
 tRKbiUl9TRGV5RSasyUP0XrrQHoMcYnrDf0evoQXx3vz9vpPd5KJiA8L7
 a+9wX0QXRXTfQ0pCpPtPpjNosAXgGg11r0sVK+tiR0ABxFk9P5ZBZ3Hn7 Q=;
Authentication-Results: esa3.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: PXo8abT0JRIxGXqYuhOP3KYoEMNinXjxLJDHexb8tzyRu5HoFBEYgg1x2wuCHspLNfver4FMWk
 s/j5Tk2ox3JSvjh3opu6Hb+yUD6lhFyayrrj03kDH3RobhvmpO0RkmxyhqnqoVDSCwissXGNty
 zu2h8O8R5b5ApfFWTDrZR588lbtItKQ44yRWA8jWq/UXBuPax4AMSr6lBNMZ1TzR1EY1WY8Y+q
 ooV2+DI5sZHtmlD7ot4lhASgokJh1Cuk5M0aXC3U3YY+KU3PTevMw5L/Yz8PuChuSAzV9ith/D
 ix8=
X-SBRS: 2.7
X-MesageID: 22429765
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,355,1589256000"; d="scan'208";a="22429765"
Date: Wed, 15 Jul 2020 15:32:17 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Subject: Re: [PATCH v3 1/2] x86: restore pv_rtc_handler() invocation
Message-ID: <20200715133217.GZ7191@Air-de-Roger>
References: <5426dd6f-50cd-dc23-5c6b-0ab631d98d38@suse.com>
 <7dd4b668-06ca-807a-9cc1-77430b2376a8@suse.com>
 <20200715121347.GY7191@Air-de-Roger>
 <2b9de0fd-5973-ed66-868c-ffadca83edf3@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <2b9de0fd-5973-ed66-868c-ffadca83edf3@suse.com>
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Paul Durrant <paul@xen.org>, Wei Liu <wl@xen.org>, Andrew
 Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Wed, Jul 15, 2020 at 02:36:49PM +0200, Jan Beulich wrote:
> On 15.07.2020 14:13, Roger Pau Monné wrote:
> > On Wed, Jul 15, 2020 at 01:56:47PM +0200, Jan Beulich wrote:
> >> @@ -1160,6 +1162,14 @@ void rtc_guest_write(unsigned int port,
> >>      case RTC_PORT(1):
> >>          if ( !ioports_access_permitted(currd, RTC_PORT(0), RTC_PORT(1)) )
> >>              break;
> >> +
> >> +        spin_lock_irqsave(&rtc_lock, flags);
> >> +        hook = pv_rtc_handler;
> >> +        spin_unlock_irqrestore(&rtc_lock, flags);
> > 
> > Given that clearing the pv_rtc_handler variable in handle_rtc_once is
> > not done while holding the rtc_lock, I'm not sure there's much point
> > in holding the lock here, ie: just doing something like:
> > 
> > hook = pv_rtc_handler;
> > if ( hook )
> >     hook(currd->arch.cmos_idx & 0x7f, data);
> > 
> > Should be as safe as what you do.
> 
> No, the compiler is free to eliminate the local variable and read
> the global one twice (and it may change contents in between) then.
> I could use ACCESS_ONCE() or read_atomic() here, but then it would
> become quite clear that at the same time ...
> 
> > We also assume that setting pv_rtc_handler to NULL is an atomic
> > operation.
> 
> ... this (which isn't different from what we do elsewhere, and we
> really can't fix everything at the same time) ought to also become
> ACCESS_ONCE() (or write_atomic()).
> 
> A compromise might be to use barrier() in place of the locking for
> now ...

Oh, right. Didn't realize you did it in order to prevent
optimizations. Using the lock seems also quite weird IMO, so I'm not
sure it's much better than just using ACCESS_ONCE (or a barrier).
Anyway, I don't want to delay this any longer, so:

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

Feel free to change to ACCESS_ONCE or barrier if you think it's
clearer.

Thanks.
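
The hazard Jan describes is that a plain read of pv_rtc_handler into a local may legally be compiled as two loads of the global, so the NULL check and the call could observe different values while another CPU clears the hook. A minimal sketch of the snapshot-once pattern, using stand-in names and a simplified ACCESS_ONCE() macro rather than the actual Xen definitions:

```c
#include <assert.h>
#include <stddef.h>

/* Simplified ACCESS_ONCE() in the Linux/Xen style; illustrative,
 * not the real definition. The volatile cast forces exactly one
 * load of the object. */
#define ACCESS_ONCE(x) (*(volatile __typeof__(x) *)&(x))

typedef void (*rtc_hook_t)(unsigned int idx, unsigned int data);

static rtc_hook_t pv_rtc_handler;   /* cleared (once) elsewhere */
static unsigned int seen_idx, seen_data;

static void log_hook(unsigned int idx, unsigned int data)
{
    seen_idx = idx;
    seen_data = data;
}

static void rtc_guest_write_stub(unsigned int idx, unsigned int data)
{
    /* Snapshot the hook exactly once; with a plain read the compiler
     * may drop the local and re-load the global between the check
     * and the indirect call. */
    rtc_hook_t hook = ACCESS_ONCE(pv_rtc_handler);

    if ( hook )
        hook(idx & 0x7f, data);
}
```

Once the hook is cleared, later writes simply skip the call; the snapshot only guarantees the check and the call agree on one value.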


From xen-devel-bounces@lists.xenproject.org Wed Jul 15 13:51:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jul 2020 13:51:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvhoe-0004QA-J6; Wed, 15 Jul 2020 13:51:20 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=9G22=A2=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jvhoc-0004Q2-Un
 for xen-devel@lists.xenproject.org; Wed, 15 Jul 2020 13:51:18 +0000
X-Inumbo-ID: 41745504-c6a2-11ea-bca7-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 41745504-c6a2-11ea-bca7-bc764e2007e4;
 Wed, 15 Jul 2020 13:51:17 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 3A4CFAC53;
 Wed, 15 Jul 2020 13:51:20 +0000 (UTC)
Subject: Re: [PATCH v3 1/2] x86: restore pv_rtc_handler() invocation
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <5426dd6f-50cd-dc23-5c6b-0ab631d98d38@suse.com>
 <7dd4b668-06ca-807a-9cc1-77430b2376a8@suse.com>
 <20200715121347.GY7191@Air-de-Roger>
 <2b9de0fd-5973-ed66-868c-ffadca83edf3@suse.com>
 <20200715133217.GZ7191@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <cd08f928-2be9-314b-56e6-bb414247caff@suse.com>
Date: Wed, 15 Jul 2020 15:51:17 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20200715133217.GZ7191@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Paul Durrant <paul@xen.org>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 15.07.2020 15:32, Roger Pau Monné wrote:
> On Wed, Jul 15, 2020 at 02:36:49PM +0200, Jan Beulich wrote:
>> On 15.07.2020 14:13, Roger Pau Monné wrote:
>>> On Wed, Jul 15, 2020 at 01:56:47PM +0200, Jan Beulich wrote:
>>>> @@ -1160,6 +1162,14 @@ void rtc_guest_write(unsigned int port,
>>>>      case RTC_PORT(1):
>>>>          if ( !ioports_access_permitted(currd, RTC_PORT(0), RTC_PORT(1)) )
>>>>              break;
>>>> +
>>>> +        spin_lock_irqsave(&rtc_lock, flags);
>>>> +        hook = pv_rtc_handler;
>>>> +        spin_unlock_irqrestore(&rtc_lock, flags);
>>>
>>> Given that clearing the pv_rtc_handler variable in handle_rtc_once is
>>> not done while holding the rtc_lock, I'm not sure there's much point
>>> in holding the lock here, ie: just doing something like:
>>>
>>> hook = pv_rtc_handler;
>>> if ( hook )
>>>     hook(currd->arch.cmos_idx & 0x7f, data);
>>>
>>> Should be as safe as what you do.
>>
>> No, the compiler is free to eliminate the local variable and read
>> the global one twice (and it may change contents in between) then.
>> I could use ACCESS_ONCE() or read_atomic() here, but then it would
>> become quite clear that at the same time ...
>>
>>> We also assume that setting pv_rtc_handler to NULL is an atomic
>>> operation.
>>
>> ... this (which isn't different from what we do elsewhere, and we
>> really can't fix everything at the same time) ought to also become
>> ACCESS_ONCE() (or write_atomic()).
>>
>> A compromise might be to use barrier() in place of the locking for
>> now ...
> 
> Oh, right. Didn't realize you did it in order to prevent
> optimizations. Using the lock also seems quite weird IMO, so I'm not
> sure it's much better than just using ACCESS_ONCE (or a barrier).
> Anyway, I don't want to delay this any longer, so:
> 
> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks.

> Feel free to change to ACCESS_ONCE or barrier if you think it's
> clearer.

I did so (also on the writer side), not least based on guessing
what Andrew would presumably have preferred.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jul 15 14:03:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jul 2020 14:03:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvhzu-0005Ya-Si; Wed, 15 Jul 2020 14:02:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OEEU=A2=gmail.com=julien.grall.oss@srs-us1.protection.inumbo.net>)
 id 1jvhzt-0005YV-JF
 for xen-devel@lists.xenproject.org; Wed, 15 Jul 2020 14:02:57 +0000
X-Inumbo-ID: e1fbe842-c6a3-11ea-bca7-bc764e2007e4
Received: from mail-wm1-x341.google.com (unknown [2a00:1450:4864:20::341])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e1fbe842-c6a3-11ea-bca7-bc764e2007e4;
 Wed, 15 Jul 2020 14:02:56 +0000 (UTC)
Received: by mail-wm1-x341.google.com with SMTP id 17so5962230wmo.1
 for <xen-devel@lists.xenproject.org>; Wed, 15 Jul 2020 07:02:56 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc; bh=fFdxqI5g3k4kC0Cs/kmhn8G4+G4mXjZCQXUJFnALC6c=;
 b=lUXqaFYE4rAgO9L9Voumo+aDCDx/z1qFxobeSljmk4kiU5DdPj4MGyksmeF7Wbe9m1
 snuW7dzyEydLjfGt4SH+8zjfyAhJ0PbLJqiItIkLSL2mDfTz8WYsPogSnatxrAtn9FKg
 eIlF5fgR0sOoBqrKTFQJukyY+oIVawNXE95xKzJLjS7qYMDTqxLoQ88i4jF3g44m6rEH
 64yOy8W8cDnfIHTVu3CALVk681/fAPwBvnbf8E3BYIhAro1ABj7UxfKixc6J6Yw2Wa3g
 yVf7Ecse+DE92JxCy58E7JDzw4NLMvZrI+7SeM1qfWPpwqH3QRZm6Wf0rNJZQth4o9AC
 nwew==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc;
 bh=fFdxqI5g3k4kC0Cs/kmhn8G4+G4mXjZCQXUJFnALC6c=;
 b=BauJu2K3lvPgaR/HjPc9SaUqUreLW4EjuJdeBWMyZ7zGjAMhzEzcLzGeRevB/FCqc6
 njDqUdPkmnRAUnTINlM5UMJ+Bx1YjOmdRRXnqn5UJIHf64taWBtORGwJp7bThfUsYygn
 mDS6hHiqzMsO1GSvl+HpU+oTREHcuUV4USJ544uCHD8IhvT+WGZD1070l/3GLiTh95pU
 oUJ6/YVBzIrZ6ywbyd0XILK5rF3tb3rFZVM7Kv/jIaklRMMBeY7+27rfKlsXeDE9eoa+
 6ReE4tH/Gp2f/9EDWzfXBPQMdIkp/epf+2aWER+yN/aX8vyFh846NFGPf573L0PDaZfT
 /iFQ==
X-Gm-Message-State: AOAM532pGS3qGGIJZSN2tvCGJnPDGQcn3IcmEZMCOOMm9GIXZoRbR7W3
 6sHcBSed93azsecrxuEhQcsiNw9R0P/+TbtYKdo=
X-Google-Smtp-Source: ABdhPJxbv+JiGpWbOGxmA6Ap1a/KsDzNb4gmJbNp29ERmyMwMsJxF1Et28IEfkhAlqpptDK+wWBmHcyVgzPYfgAZY84=
X-Received: by 2002:a05:600c:2058:: with SMTP id
 p24mr8814981wmg.74.1594821775800; 
 Wed, 15 Jul 2020 07:02:55 -0700 (PDT)
MIME-Version: 1.0
References: <fef45e49-bcb4-dc11-8f7f-b2f5e4bf1a73@suse.com>
 <e47a9ef5-5f4c-1ca6-1b31-f7b10516e5ed@suse.com>
 <CAJ=z9a1AWYYVGwHWOct9j3bVDhPtWG7R3tQY05+6BY-9g3C1kQ@mail.gmail.com>
 <005381d5-3fb5-640f-002c-106c628a77a2@suse.com>
In-Reply-To: <005381d5-3fb5-640f-002c-106c628a77a2@suse.com>
From: Julien Grall <julien.grall.oss@gmail.com>
Date: Wed, 15 Jul 2020 16:02:43 +0200
Message-ID: <CAJ=z9a0LBhO7qJyF-WdBnkD52dXew-TgjTuUC7aeoS8rC13iwQ@mail.gmail.com>
Subject: Re: [PATCH 2/2] evtchn/fifo: don't enforce higher than necessary
 alignment
To: Jan Beulich <jbeulich@suse.com>
Content-Type: multipart/alternative; boundary="0000000000006d3f0c05aa7b61d4"
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 George Dunlap <George.Dunlap@eu.citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@citrix.com>,
 xen-devel <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

--0000000000006d3f0c05aa7b61d4
Content-Type: text/plain; charset="UTF-8"

On Wed, 15 Jul 2020, 14:42 Jan Beulich, <jbeulich@suse.com> wrote:

> On 15.07.2020 12:46, Julien Grall wrote:
> > On Wed, 15 Jul 2020, 12:17 Jan Beulich, <jbeulich@suse.com> wrote:
> >
> >> Neither the code nor the original commit provide any justification for
> >> the need to 8-byte align the struct in all cases. Enforce just as much
> >> alignment as the structure actually needs - 4 bytes - by using alignof()
> >> instead of a literal number.
> >>
> >> Take the opportunity and also
> >> - add so far missing validation that native and compat mode layouts of
> >>   the structures actually match,
> >> - tie sizeof() expressions to the types of the fields we're actually
> >>   after, rather than specifying the type explicitly (which in the
> >>   general case risks a disconnect, even if there's close to zero risk in
> >>   this particular case),
> >> - use ENXIO instead of EINVAL for the two cases of the address not
> >>   satisfying the requirements, which will make an issue here better
> >>   stand out at the call site.
> >>
> >> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> >> ---
> >> I question the need for the array_index_nospec() here. Or else I'd
> >> expect map_vcpu_info() would also need the same.
> >>
> >> --- a/xen/common/event_fifo.c
> >> +++ b/xen/common/event_fifo.c
> >> @@ -504,6 +504,16 @@ static void setup_ports(struct domain *d
> >>      }
> >>  }
> >>
> >> +#ifdef CONFIG_COMPAT
> >> +
> >> +#include <compat/event_channel.h>
> >> +
> >> +#define xen_evtchn_fifo_control_block evtchn_fifo_control_block
> >> +CHECK_evtchn_fifo_control_block;
> >> +#undef xen_evtchn_fifo_control_block
> >> +
> >> +#endif
> >> +
> >>  int evtchn_fifo_init_control(struct evtchn_init_control *init_control)
> >>  {
> >>      struct domain *d = current->domain;
> >> @@ -523,19 +533,20 @@ int evtchn_fifo_init_control(struct evtc
> >>          return -ENOENT;
> >>
> >>      /* Must not cross page boundary. */
> >> -    if ( offset > (PAGE_SIZE - sizeof(evtchn_fifo_control_block_t)) )
> >> -        return -EINVAL;
> >> +    if ( offset > (PAGE_SIZE - sizeof(*v->evtchn_fifo->control_block))
> )
> >> +        return -ENXIO;
> >>
> >>      /*
> >>       * Make sure the guest controlled value offset is bounded even
> during
> >>       * speculative execution.
> >>       */
> >>      offset = array_index_nospec(offset,
> >> -                           PAGE_SIZE -
> >> sizeof(evtchn_fifo_control_block_t) + 1);
> >> +                                PAGE_SIZE -
> >> +                                sizeof(*v->evtchn_fifo->control_block)
> +
> >> 1);
> >>
> >> -    /* Must be 8-bytes aligned. */
> >> -    if ( offset & (8 - 1) )
> >> -        return -EINVAL;
> >> +    /* Must be suitably aligned. */
> >> +    if ( offset & (alignof(*v->evtchn_fifo->control_block) - 1) )
> >> +        return -ENXIO;
> >>
> >
> > A guest relying on this new alignment wouldn't work on older version of
> > Xen. So I don't think a guest would ever be able to use it.
> >
> > Therefore is it really worth the change?
>
> That's the question. One of your arguments for using a literal number
> also for the vCPU info mapping check was that here a literal number
> is used. The goal isn't so much relaxation of the interface, but
> making the code consistent as well as eliminating a (as I'd call it)
> kludge.
>

Your commit message led me to think the relaxation is the key motivation
for changing the code.



> Guests not caring to be able to run on older versions could also make
> use of the relaxation (which may be more relevant in 10 years' time
> than it is now).


That makes sense. However, I am a bit concerned that an OS developer may
not notice the alignment problem with older versions.

I would suggest at least documenting the expected alignment in the public
header.



> Jan
>

--0000000000006d3f0c05aa7b61d4--


From xen-devel-bounces@lists.xenproject.org Wed Jul 15 14:37:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jul 2020 14:37:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jviX7-0008Gd-Nt; Wed, 15 Jul 2020 14:37:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+F45=A2=citrix.com=george.dunlap@srs-us1.protection.inumbo.net>)
 id 1jviX6-0008GY-RJ
 for xen-devel@lists.xenproject.org; Wed, 15 Jul 2020 14:37:16 +0000
X-Inumbo-ID: ad450c14-c6a8-11ea-bca7-bc764e2007e4
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ad450c14-c6a8-11ea-bca7-bc764e2007e4;
 Wed, 15 Jul 2020 14:37:15 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1594823835;
 h=from:to:cc:subject:date:message-id:references:
 in-reply-to:content-id:content-transfer-encoding: mime-version;
 bh=/tMb/WMnsmLLYWFUNCttYu/ZMOUPlZZvZasyxQDli7c=;
 b=glCgp0v7E6U6ezsJqT8HTg8m2d9q1gRgQlyvNLO+k83Sa+LiaV5xNP25
 4cXI9k3PAkpu2cxKZk5eBf08F61dwcmo/6zQJNgqaqJu+uZmkmNaU4QTp
 hJz1DEL0DYIhIty0DCUkU3cbk1UZmXGHDAMVjiU35BI8NccuxEmY8oYzH 8=;
Authentication-Results: esa5.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: ViQlBA6jt7rRCfn0fjTA5aMpL9bZBBDPr94kQYM+SOs9DiXNKJ+KZs7t1bxYSMtJh0Xaj8IOK1
 yEwFN/WCgLIbeAAm7fyC4M9FDMI7Ket2HKX/O5I/l/JZx9dj2YWxeoSEW8HOVsa5vqrmByTKFk
 3LanK4Jwgnw5K9kdYvMLuPbG16xbKGgQweQYYR+uSCGSMK5RNzeLMtPSXRl5+DL38481ZCX/sW
 ThmIZenkgsbQVnwLdI5zL4Lrdht4lu+mzzngBx0rvh0m4gvWhcoB6eArA4tZ5B8A/cj/C6tG6u
 3Mo=
X-SBRS: 2.7
X-MesageID: 22637927
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,355,1589256000"; d="scan'208";a="22637927"
From: George Dunlap <George.Dunlap@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Subject: Re: [PATCH v2] docs: specify stability of hypfs path documentation
Thread-Topic: [PATCH v2] docs: specify stability of hypfs path documentation
Thread-Index: AQHWWR59fvtpmEuro06JsoMrFTmX86kFdV+AgAMhtYA=
Date: Wed, 15 Jul 2020 14:37:11 +0000
Message-ID: <68F727A8-29B8-4846-8BE9-BD4F6E0DC60D@citrix.com>
References: <20200713140338.16172-1-jgross@suse.com>
 <8a96b1b9-cbcb-557a-5b82-661bbe40fe25@suse.com>
In-Reply-To: <8a96b1b9-cbcb-557a-5b82-661bbe40fe25@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-mailer: Apple Mail (2.3608.80.23.2.2)
x-ms-exchange-messagesentrepresentingtype: 1
x-ms-exchange-transport-fromentityheader: Hosted
Content-Type: text/plain; charset="utf-8"
Content-ID: <8ABD25FC33C9B44CB88AFDF79476C986@citrix.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien
 Grall <julien@xen.org>, Wei Liu <wl@xen.org>, Paul Durrant <paul@xen.org>,
 Andrew Cooper <Andrew.Cooper3@citrix.com>,
 Ian Jackson <Ian.Jackson@citrix.com>,
 "open list:X86" <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> On Jul 13, 2020, at 3:47 PM, Jan Beulich <JBeulich@suse.com> wrote:
> 
> On 13.07.2020 16:03, Juergen Gross wrote:
>> In docs/misc/hypfs-paths.pandoc the supported paths in the hypervisor
>> file system are specified. Make it more clear that path availability
>> might change, e.g. due to scope widening or narrowing (e.g. being
>> limited to a specific architecture).
>> 
>> Signed-off-by: Juergen Gross <jgross@suse.com>
>> Release-acked-by: Paul Durrant <paul@xen.org>
> 
> Acked-by: Jan Beulich <jbeulich@suse.com>
> 
> However, I'd like agreement by at least one other REST maintainer on
> ...
> 
>> @@ -55,6 +58,11 @@ tags enclosed in square brackets.
>> * CONFIG_* -- Path is valid only in case the hypervisor was built with
>>   the respective config option.
>> 
>> +In case a tag for a path indicates that this path is available in some
>> +case only, this availability might be extended or reduced in future by
>> +modification or removal of the tag. A path once assigned meaning won't go
>> +away altogether or change its meaning, though.
> 
> ... the newly imposed guarantee we're now making. We really want to
> avoid declaring something as stable without being quite certain we
> can keep it stable.

The declaration of new nodes must all happen in this file, right?  So as long as the maintainer(s) of this file are aware of that, and it’s commented so that people know that expectation, I think it’s OK.

But this paragraph isn’t very clear to me: what does “might be extended or reduced … but won’t go away altogether” mean?

It sounds like you’re saying:

1. Paths listed without conditions will always be available

2. Paths listed with conditions may be extended: i.e., a node currently listed as PV might also become available for HVM guests

3. Paths listed with conditions might have those conditions reduced, but will never entirely disappear.  So something currently listed as PV might be reduced to CONFIG_HAS_FOO, but won’t be completely removed.

Is that what you meant?

 -George


From xen-devel-bounces@lists.xenproject.org Wed Jul 15 14:41:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jul 2020 14:41:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvib9-0000h0-9a; Wed, 15 Jul 2020 14:41:27 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=U57p=A2=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jvib7-0000gv-U0
 for xen-devel@lists.xenproject.org; Wed, 15 Jul 2020 14:41:25 +0000
X-Inumbo-ID: 42140728-c6a9-11ea-bca7-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 42140728-c6a9-11ea-bca7-bc764e2007e4;
 Wed, 15 Jul 2020 14:41:25 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id BD406AF70;
 Wed, 15 Jul 2020 14:41:27 +0000 (UTC)
Subject: Re: [PATCH v2] docs: specify stability of hypfs path documentation
To: George Dunlap <George.Dunlap@citrix.com>, Jan Beulich <JBeulich@suse.com>
References: <20200713140338.16172-1-jgross@suse.com>
 <8a96b1b9-cbcb-557a-5b82-661bbe40fe25@suse.com>
 <68F727A8-29B8-4846-8BE9-BD4F6E0DC60D@citrix.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <8505ec1a-bc50-ea16-306f-998c27045e30@suse.com>
Date: Wed, 15 Jul 2020 16:41:22 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.9.0
MIME-Version: 1.0
In-Reply-To: <68F727A8-29B8-4846-8BE9-BD4F6E0DC60D@citrix.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Paul Durrant <paul@xen.org>,
 Andrew Cooper <Andrew.Cooper3@citrix.com>,
 Ian Jackson <Ian.Jackson@citrix.com>,
 "open list:X86" <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 15.07.20 16:37, George Dunlap wrote:
> 
> 
>> On Jul 13, 2020, at 3:47 PM, Jan Beulich <JBeulich@suse.com> wrote:
>>
>> On 13.07.2020 16:03, Juergen Gross wrote:
>>> In docs/misc/hypfs-paths.pandoc the supported paths in the hypervisor
>>> file system are specified. Make it more clear that path availability
>>> might change, e.g. due to scope widening or narrowing (e.g. being
>>> limited to a specific architecture).
>>>
>>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>> Release-acked-by: Paul Durrant <paul@xen.org>
>>
>> Acked-by: Jan Beulich <jbeulich@suse.com>
>>
>> However, I'd like agreement by at least one other REST maintainer on
>> ...
>>
>>> @@ -55,6 +58,11 @@ tags enclosed in square brackets.
>>> * CONFIG_* -- Path is valid only in case the hypervisor was built with
>>>    the respective config option.
>>>
>>> +In case a tag for a path indicates that this path is available in some
>>> +case only, this availability might be extended or reduced in future by
>>> +modification or removal of the tag. A path once assigned meaning won't go
>>> +away altogether or change its meaning, though.
>>
>> ... the newly imposed guarantee we're now making. We really want to
>> avoid declaring something as stable without being quite certain we
>> can keep it stable.
> 
> The declaration of new nodes must all happen in this file, right?  So as long as the maintainer(s) of this file are aware of that, and it’s commented so that people know that expectation, I think it’s OK.
> 
> But this paragraph isn’t very clear to me: what does “might be extended or reduced … but won’t go away altogether” mean?
> 
> It sounds like you’re saying:
> 
> 1. Paths listed without conditions will always be available
> 
> 2. Paths listed with conditions may be extended: i.e., a node currently listed as PV might also become available for HVM guests
> 
> 3. Paths listed with conditions might have those conditions reduced, but will never entirely disappear.  So something currently listed as PV might be reduced to CONFIG_HAS_FOO, but won’t be completely removed.
> 
> Is that what you meant?

Yes.


Juergen


From xen-devel-bounces@lists.xenproject.org Wed Jul 15 14:52:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jul 2020 14:52:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvilL-0001dj-AY; Wed, 15 Jul 2020 14:51:59 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=osgb=A2=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jvilJ-0001de-NU
 for xen-devel@lists.xenproject.org; Wed, 15 Jul 2020 14:51:57 +0000
X-Inumbo-ID: b9cfe394-c6aa-11ea-b7bb-bc764e2007e4
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b9cfe394-c6aa-11ea-b7bb-bc764e2007e4;
 Wed, 15 Jul 2020 14:51:55 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1594824716;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:content-transfer-encoding:in-reply-to;
 bh=V9kxh95A7KNWJvFvSiPuEvHRhmMVhybG14SBrb4QY+E=;
 b=EIVyseaYNLTMjGKi7B4bXp5wvzfV9PtV2NZlWaPXeqoe8RuGwz3jq7Uh
 HVa/Bp6RMETHU1Lz+51juj7+UuE0CEQqKLHy2GgapUe+T5SBD8oc6CAKv
 5EtbOuLWr/pYGSYwhIqeUbjVHp+osvW1Vbg2LkvGLP5+dxwi4aGsjnPL9 s=;
Authentication-Results: esa1.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: f1sF7yFgansxJzTdNSfXVLpbKRJOxUUfXhqwzcRT1UUeoMqh5ONM+bti1/z7HeNB2mKLtOfNgc
 /1L4DlYsmLCrNwEcPZVKUeHbv3lL4eXZYTc6U85X5nsjvNmPMVflVG5snfjeTclJ3JcUz0O0Da
 6bKy7af6HyRYQEPbBbYZc8OlICyCRx6GixQJg5CYtyLP6zoUAYr/mBNu+BS4MBetmfmrwGmQL7
 o7JTOYPmqaeSAt7SwObxy8fd5iiIBeGy5a5UJZMoD7wwbN4jMVb4sM51Q03ZzrtlEAWQZRLxFT
 9Xs=
X-SBRS: 2.7
X-MesageID: 22761476
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,355,1589256000"; d="scan'208";a="22761476"
Date: Wed, 15 Jul 2020 16:51:44 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Subject: Re: [PATCH v3 1/2] x86: restore pv_rtc_handler() invocation
Message-ID: <20200715145144.GA7191@Air-de-Roger>
References: <5426dd6f-50cd-dc23-5c6b-0ab631d98d38@suse.com>
 <7dd4b668-06ca-807a-9cc1-77430b2376a8@suse.com>
 <20200715121347.GY7191@Air-de-Roger>
 <2b9de0fd-5973-ed66-868c-ffadca83edf3@suse.com>
 <20200715133217.GZ7191@Air-de-Roger>
 <cd08f928-2be9-314b-56e6-bb414247caff@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <cd08f928-2be9-314b-56e6-bb414247caff@suse.com>
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Paul Durrant <paul@xen.org>, Wei Liu <wl@xen.org>, Andrew
 Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Wed, Jul 15, 2020 at 03:51:17PM +0200, Jan Beulich wrote:
> On 15.07.2020 15:32, Roger Pau Monné wrote:
> > On Wed, Jul 15, 2020 at 02:36:49PM +0200, Jan Beulich wrote:
> >> On 15.07.2020 14:13, Roger Pau Monné wrote:
> >>> On Wed, Jul 15, 2020 at 01:56:47PM +0200, Jan Beulich wrote:
> >>>> @@ -1160,6 +1162,14 @@ void rtc_guest_write(unsigned int port,
> >>>>      case RTC_PORT(1):
> >>>>          if ( !ioports_access_permitted(currd, RTC_PORT(0), RTC_PORT(1)) )
> >>>>              break;
> >>>> +
> >>>> +        spin_lock_irqsave(&rtc_lock, flags);
> >>>> +        hook = pv_rtc_handler;
> >>>> +        spin_unlock_irqrestore(&rtc_lock, flags);
> >>>
> >>> Given that clearing the pv_rtc_handler variable in handle_rtc_once is
> >>> not done while holding the rtc_lock, I'm not sure there's much point
> >>> in holding the lock here, ie: just doing something like:
> >>>
> >>> hook = pv_rtc_handler;
> >>> if ( hook )
> >>>     hook(currd->arch.cmos_idx & 0x7f, data);
> >>>
> >>> Should be as safe as what you do.
> >>
> >> No, the compiler is free to eliminate the local variable and read
> >> the global one twice (and it may change contents in between) then.
> >> I could use ACCESS_ONCE() or read_atomic() here, but then it would
> >> become quite clear that at the same time ...
> >>
> >>> We also assume that setting pv_rtc_handler to NULL is an atomic
> >>> operation.
> >>
> >> ... this (which isn't different from what we do elsewhere, and we
> >> really can't fix everything at the same time) ought to also become
> >> ACCESS_ONCE() (or write_atomic()).
> >>
> >> A compromise might be to use barrier() in place of the locking for
> >> now ...
> > 
> > Oh, right. Didn't realize you did it in order to prevent
> > optimizations. Using the lock seems also quite weird IMO, so I'm not
> > sure it's much better than just using ACCESS_ONCE (or a barrier).
> > Anyway, I don't want to delay this any longer, so:
> > 
> > Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
> 
> Thanks.
> 
> > Feel free to change to ACCESS_ONCE or barrier if you think it's
> > clearer.
> 
> I did so (also on the writer side), not the least based on guessing
> what Andrew would presumably have preferred.

Thanks! Sorry if I'm being pedantic, but is the ACCESS_ONCE on the writer
side actually required? I'm not sure I see what ACCESS_ONCE protects
against in handle_rtc_once.

Roger.


From xen-devel-bounces@lists.xenproject.org Wed Jul 15 15:07:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jul 2020 15:07:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvj0S-0002h9-OO; Wed, 15 Jul 2020 15:07:36 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tzA3=A2=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jvj0R-0002h4-Tx
 for xen-devel@lists.xenproject.org; Wed, 15 Jul 2020 15:07:35 +0000
X-Inumbo-ID: e9b2aec8-c6ac-11ea-b7bb-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e9b2aec8-c6ac-11ea-b7bb-bc764e2007e4;
 Wed, 15 Jul 2020 15:07:34 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=hUSotM9XIVTsJe4rVwvMcmQ+IMgJuA3mSB6p2H4qFZU=; b=lXIanbLxt/v34lnO9F8gpcJlD
 NX7RbDtqtDxMbsMh20TY4kJzKvKp/vkw4g398ZBaaIwM34yVdv9hU0MxLBH/IPlXH3/lQxOSXNq05
 8mA2lGJ3xGIchXRm7JfNlFSRTW3nvOnu8VPX2JJIxWNoAyX+sBaCJkScSHhL2SZ/GOmfU=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jvj0Q-0008Ra-Bu; Wed, 15 Jul 2020 15:07:34 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jvj0Q-0002IN-3T; Wed, 15 Jul 2020 15:07:34 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jvj0Q-0001a6-2O; Wed, 15 Jul 2020 15:07:34 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151910-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 151910: regressions - FAIL
X-Osstest-Failures: libvirt:build-amd64-libvirt:libvirt-build:fail:regression
 libvirt:build-i386-libvirt:libvirt-build:fail:regression
 libvirt:build-arm64-libvirt:libvirt-build:fail:regression
 libvirt:build-armhf-libvirt:libvirt-build:fail:regression
 libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
 libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
X-Osstest-Versions-This: libvirt=fcac6490f28152614cf5be70683940c4dfda0f40
X-Osstest-Versions-That: libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 15 Jul 2020 15:07:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151910 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151910/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              fcac6490f28152614cf5be70683940c4dfda0f40
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z    5 days
Failing since        151818  2020-07-11 04:18:52 Z    4 days    5 attempts
Testing same since   151910  2020-07-15 04:18:44 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrea Bolognani <abologna@redhat.com>
  Bastien Orivel <bastien.orivel@diateam.net>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Jin Yan <jinyan12@huawei.com>
  Laine Stump <laine@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Pavel Hrdina <phrdina@redhat.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1146 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Jul 15 15:08:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jul 2020 15:08:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvj1l-0002ne-8Z; Wed, 15 Jul 2020 15:08:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=osgb=A2=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jvj1j-0002nX-OT
 for xen-devel@lists.xenproject.org; Wed, 15 Jul 2020 15:08:55 +0000
X-Inumbo-ID: 19262482-c6ad-11ea-bca7-bc764e2007e4
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 19262482-c6ad-11ea-bca7-bc764e2007e4;
 Wed, 15 Jul 2020 15:08:54 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1594825736;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:content-transfer-encoding:in-reply-to;
 bh=UkLPCuhQ/WsRka3FdyCpuF4/kTgr19gqQgiD7k0GE20=;
 b=Qrmj0HvNwqJMCOK7yHJUUtdaj/uHV4/MbbRwfU1flv7US2RV6vG31x1U
 5lgUFERFXxFUlL672LQab0c17yKXzKYTXxy/gypkEDvpMsY+fi9/7Qnob
 9lFqFLJ0k1egpa6jWBXxubt4Pu+/KLJoUKImO34q8NZnFSr21c2Z1PlZe I=;
Authentication-Results: esa2.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: fJ1kjlxO2FthyXKxezrLDLEPdr3FBerqSF8kS3NqyqKXs0egrj8AWWYk5WZ30G1heLDvTMglDT
 sfxnBK2Ag+sPLRo98u7JwisWG5X5xHCSuavWq8W64WGIgbeC3FpHhNF9nxVexQFYYQSsWuXYkH
 i5C6KiQFF0SqpjHnfXUYcoGD7ntWhGoW9QnMjfLYCUDuxVUAWkPi3VvW3y6Q0hhmWpM6eJNm4I
 ++ySy62Z9iAqB5wdiEO0UbDYzvTjbXkBMTg5zQDgQXMApcoG612mgyLG1YQsCNHQLoEpNY90bd
 Y3A=
X-SBRS: 2.7
X-MesageID: 22449146
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,355,1589256000"; d="scan'208";a="22449146"
Date: Wed, 15 Jul 2020 17:08:46 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: =?utf-8?Q?Micha=C5=82_Leszczy=C5=84ski?= <michal.leszczynski@cert.pl>
Subject: Re: [PATCH v6 04/11] common: add vmtrace_pt_size domain parameter
Message-ID: <20200715150846.GB7191@Air-de-Roger>
References: <cover.1594150543.git.michal.leszczynski@cert.pl>
 <036bc768bfb074269d9bd4530304a11170b7142d.1594150543.git.michal.leszczynski@cert.pl>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <036bc768bfb074269d9bd4530304a11170b7142d.1594150543.git.michal.leszczynski@cert.pl>
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 tamas.lengyel@intel.com, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, luwei.kang@intel.com,
 Jan Beulich <jbeulich@suse.com>, xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Tue, Jul 07, 2020 at 09:39:43PM +0200, Michał Leszczyński wrote:
> From: Michal Leszczynski <michal.leszczynski@cert.pl>
> 
> Add vmtrace_pt_size domain parameter in live domain and
> vmtrace_pt_order parameter in xen_domctl_createdomain.
> 
> Signed-off-by: Michal Leszczynski <michal.leszczynski@cert.pl>
> ---
>  xen/arch/x86/domain.c       | 6 ++++++
>  xen/common/domain.c         | 9 +++++++++
>  xen/include/public/domctl.h | 1 +
>  xen/include/xen/sched.h     | 3 +++
>  4 files changed, 19 insertions(+)
> 
> diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
> index fee6c3931a..b75017b28b 100644
> --- a/xen/arch/x86/domain.c
> +++ b/xen/arch/x86/domain.c
> @@ -499,6 +499,12 @@ int arch_sanitise_domain_config(struct xen_domctl_createdomain *config)
>           */
>          config->flags |= XEN_DOMCTL_CDF_oos_off;
>  
> +    if ( !hvm && config->processor_trace_buf_kb )
> +    {
> +        dprintk(XENLOG_INFO, "Processor trace is not supported on non-HVM\n");
> +        return -EINVAL;
> +    }
> +
>      return 0;
>  }
>  
> diff --git a/xen/common/domain.c b/xen/common/domain.c
> index a45cf023f7..e6e8f88da1 100644
> --- a/xen/common/domain.c
> +++ b/xen/common/domain.c
> @@ -338,6 +338,12 @@ static int sanitise_domain_config(struct xen_domctl_createdomain *config)
>          return -EINVAL;
>      }
>  
> +    if ( config->processor_trace_buf_kb && !vmtrace_supported )
> +    {
> +        dprintk(XENLOG_INFO, "Processor tracing is not supported\n");
> +        return -EINVAL;
> +    }
> +
>      return arch_sanitise_domain_config(config);
>  }
>  
> @@ -443,6 +449,9 @@ struct domain *domain_create(domid_t domid,
>          d->nr_pirqs = min(d->nr_pirqs, nr_irqs);
>  
>          radix_tree_init(&d->pirq_tree);
> +
> +        if ( config->processor_trace_buf_kb )
> +            d->processor_trace_buf_kb = config->processor_trace_buf_kb;
>      }
>  
>      if ( (err = arch_domain_create(d, config)) != 0 )
> diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
> index 59bdc28c89..7681675a94 100644
> --- a/xen/include/public/domctl.h
> +++ b/xen/include/public/domctl.h
> @@ -92,6 +92,7 @@ struct xen_domctl_createdomain {
>      uint32_t max_evtchn_port;
>      int32_t max_grant_frames;
>      int32_t max_maptrack_frames;
> +    uint32_t processor_trace_buf_kb;
>  
>      struct xen_arch_domainconfig arch;
>  };
> diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
> index ac53519d7f..c046e59886 100644
> --- a/xen/include/xen/sched.h
> +++ b/xen/include/xen/sched.h
> @@ -457,6 +457,9 @@ struct domain
>      unsigned    pbuf_idx;
>      spinlock_t  pbuf_lock;
>  
> +    /* Used by vmtrace features */
> +    uint32_t    processor_trace_buf_kb;

Any reason for adding it here instead of at the end of the
struct? Also, I think this should be unsigned int; there's no reason to
use a fixed-width type here.

IMO it would be nice to have a Kconfig option for this, so that it can
be disabled at build time, for example by default on Arm (or
on x86 if HVM is not built).

Most recently added features have followed this model, for example
Argo, where a Kconfig option was added so that the feature can be
disabled at build time. Let me know if you need some tips about it; I'm
happy to help get this into Kconfig.
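A sketch of the kind of Kconfig knob being suggested is below. The symbol name, prompt, dependencies, and default are all hypothetical; the real option would need to match whatever the series ends up calling the feature and where HVM support is expressed in Xen's Kconfig:

```kconfig
config VMTRACE
	bool "Guest processor tracing support" if EXPERT
	depends on X86 && HVM
	default y
	help
	  Allocate per-vCPU buffers for hardware processor tracing
	  (e.g. Intel Processor Trace) and expose the associated
	  domain creation parameter.  Say N to compile the feature out.
```

With such an option, the new fields and checks would be wrapped in `#ifdef CONFIG_VMTRACE`, so builds without it carry no cost.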

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Wed Jul 15 15:11:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jul 2020 15:11:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvj47-0003cz-Mm; Wed, 15 Jul 2020 15:11:23 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=wXN5=A2=citrix.com=edvin.torok@srs-us1.protection.inumbo.net>)
 id 1jvj46-0003cu-Pg
 for xen-devel@lists.xenproject.org; Wed, 15 Jul 2020 15:11:22 +0000
X-Inumbo-ID: 70d5109e-c6ad-11ea-9401-12813bfff9fa
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 70d5109e-c6ad-11ea-9401-12813bfff9fa;
 Wed, 15 Jul 2020 15:11:21 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1594825881;
 h=from:to:cc:subject:date:message-id:mime-version:
 content-transfer-encoding;
 bh=gz7UzHeVSBo2HbZIQ5cfhOZSE7lzjmWWU60xaJUfS2k=;
 b=CTY0KCKbbaRias51RQDxRoibowKX8btB1TLLwF6i8ECTISEL7Y+2Rivp
 lOj+BYgdVT+Y8XM6ky/84Ccql3+Hm79z0/RJAhqjEmCIiWDVsIv32BSjO
 WdxQdmFZ0V8gROZmjq/z0cpLx3ulwqoxo0L3kbuaeQsSplADw2czL+H6K k=;
Authentication-Results: esa5.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: Y/G9YdrDIqr9DUYl2LIfwsu7cZxfhN3Woz9oKYDES7hhR8kuittwv5WzjbpLVpdvKoX0+FgyJL
 US7FBC4ou2bLqxyVOl2MiKjf9VwOTraUe50Cz+343LgNOEuxy8cwrTr9ygy7uTPsf5L0UiZZ/k
 6qNTqeyf2LP2XCtdqSUgBGxZZSPChNZkkda9jHcSUR9prFOmEy8iII4x4nqwxZYyyReqE6xJ+u
 +uQdCgbjG5nOqEqDBD8HJABMJ/bCo3MSUhW9wYB0A1UAiB4XCciUxBk1GJjq7ZCVaSZpj4P54C
 OfI=
X-SBRS: 2.7
X-MesageID: 22642331
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,355,1589256000"; d="scan'208";a="22642331"
From: =?UTF-8?q?Edwin=20T=C3=B6r=C3=B6k?= <edvin.torok@citrix.com>
To: <xen-devel@lists.xenproject.org>
Subject: [PATCH v1 0/1] oxenstored: fix ABI breakage in reset watches
Date: Wed, 15 Jul 2020 16:10:55 +0100
Message-ID: <cover.1594825512.git.edvin.torok@citrix.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: David Scott <dave@recoil.org>, Ian Jackson <ian.jackson@eu.citrix.com>,
 =?UTF-8?q?Edwin=20T=C3=B6r=C3=B6k?= <edvin.torok@citrix.com>,
 Wei Liu <wl@xen.org>, Christian Lindig <christian.lindig@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

dbc84d2983969bb47d294131ed9e6bbbdc2aec49 (Xen >= 4.9.0) deleted XS_RESTRICT
from oxenstored, which caused all the following opcodes to be shifted by 1,
breaking the ABI compared to the C version and guests.

The affected opcode is 'reset watches'; for example, Linux uses it during
kexec if a control/platform-feature-xs_reset_watches field is present in
xenstore.

Edwin Török (1):
  oxenstored: fix ABI breakage introduced in Xen 4.9.0

 tools/ocaml/libs/xb/op.ml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Wed Jul 15 15:11:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jul 2020 15:11:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvj4J-0003ee-VG; Wed, 15 Jul 2020 15:11:35 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=wXN5=A2=citrix.com=edvin.torok@srs-us1.protection.inumbo.net>)
 id 1jvj4I-0003eS-TX
 for xen-devel@lists.xenproject.org; Wed, 15 Jul 2020 15:11:34 +0000
X-Inumbo-ID: 784095e2-c6ad-11ea-bca7-bc764e2007e4
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 784095e2-c6ad-11ea-bca7-bc764e2007e4;
 Wed, 15 Jul 2020 15:11:34 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1594825894;
 h=from:to:cc:subject:date:message-id:in-reply-to:
 references:mime-version:content-transfer-encoding;
 bh=K1owBnRIAVTLefyKeo/+v16BnObQDQqtz+nCvF/ExPk=;
 b=KDGMtsEu6pA+66JLREo8V4CXgbfH7TK7LDW8ABX/moM2KGARVQkDE+Up
 1F/Gw/ZAnGqt/8lGy+zQ/3+c1pJL/Z96PvRToG+YcLGOw+AgMY6gkIbfT
 7KNDo5317R7qjCO+Dn86VaNQNeQqaqKly5AXMjArbTk8eLJ5SJIG5KatN M=;
Authentication-Results: esa6.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: CzHeqjY9VbefBM2DJ6NxvUrHVDQlZCRk4LRNwhjRMWOHZTVopogD3gmx0tv/NAbChMfShjV0sV
 swl7+jP9sy8lFDCCUMPh1rjCwkezY7wfm7+/7h5h2b3iAxIm7vmZUJcbLcTV6UwtAxIHvq3HJ5
 WoJGFVYbuarawQyJlvJiK9XSHZdouH2uviZeoPe/ykm5VJTfn+GdG7F0MnFflPa0Qb4V1iEraG
 /AbqG9psoN1pnKYGRF9U3MZaAly4Q4I29pAR/ff8u6hrPNNSWL7undOQzm7jroEW0CL8jRz0RL
 cD4=
X-SBRS: 2.7
X-MesageID: 22775216
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,355,1589256000"; d="scan'208";a="22775216"
From: =?UTF-8?q?Edwin=20T=C3=B6r=C3=B6k?= <edvin.torok@citrix.com>
To: <xen-devel@lists.xenproject.org>
Subject: [PATCH v1 1/1] oxenstored: fix ABI breakage introduced in Xen 4.9.0
Date: Wed, 15 Jul 2020 16:10:56 +0100
Message-ID: <6fcfdb706cc2f666069c1d0bbc59d22f660fc81d.1594825512.git.edvin.torok@citrix.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <cover.1594825512.git.edvin.torok@citrix.com>
References: <cover.1594825512.git.edvin.torok@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Igor Druzhinin <igor.druzhinin@citrix.com>, Wei Liu <wl@xen.org>, Ian
 Jackson <ian.jackson@eu.citrix.com>,
 =?UTF-8?q?Edwin=20T=C3=B6r=C3=B6k?= <edvin.torok@citrix.com>,
 David Scott <dave@recoil.org>, Christian
 Lindig <christian.lindig@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

dbc84d2983969bb47d294131ed9e6bbbdc2aec49 (Xen >= 4.9.0) deleted XS_RESTRICT
from oxenstored, which caused all the following opcodes to be shifted by 1:
reset_watches became off-by-one compared to the C version of xenstored.

Looking at the C code, the opcode for reset watches must satisfy:
XS_RESET_WATCHES = XS_SET_TARGET + 2

So add the placeholder `Invalid` in the OCaml<->C mapping list.
(Note that the code here doesn't simply convert the OCaml constructor to
 an integer, so we don't need to introduce a dummy constructor).

Igor says that with a suitably patched xenopsd to enable watch reset,
we now see `reset watches` during kdump of a guest in xenstored-access.log.

Signed-off-by: Edwin Török <edvin.torok@citrix.com>
Tested-by: Igor Druzhinin <igor.druzhinin@citrix.com>
---
 tools/ocaml/libs/xb/op.ml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/ocaml/libs/xb/op.ml b/tools/ocaml/libs/xb/op.ml
index d4f1f08185..9bcab0f38c 100644
--- a/tools/ocaml/libs/xb/op.ml
+++ b/tools/ocaml/libs/xb/op.ml
@@ -28,7 +28,7 @@ let operation_c_mapping =
            Transaction_end; Introduce; Release;
            Getdomainpath; Write; Mkdir; Rm;
            Setperms; Watchevent; Error; Isintroduced;
-           Resume; Set_target; Reset_watches |]
+           Resume; Set_target; Invalid; Reset_watches |]
 let size = Array.length operation_c_mapping
 
 let array_search el a =
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Wed Jul 15 15:17:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jul 2020 15:17:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvjAG-0003uA-Mc; Wed, 15 Jul 2020 15:17:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=osgb=A2=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jvjAG-0003u5-24
 for xen-devel@lists.xenproject.org; Wed, 15 Jul 2020 15:17:44 +0000
X-Inumbo-ID: 543f182a-c6ae-11ea-bca7-bc764e2007e4
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 543f182a-c6ae-11ea-bca7-bc764e2007e4;
 Wed, 15 Jul 2020 15:17:43 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1594826263;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:content-transfer-encoding:in-reply-to;
 bh=yETrZ0jRFJ21wdLMJaSuvvLmnouw5Q8hyeovgzLGKq0=;
 b=QnLonrapaQUkNuK4oTHI/cqOeZtZJRWmXK1KqV20jnhRhhNKkLdIRf8t
 19DsvxhXwwBQxISdfHIs+V64oFVxIBpbOmkI3XQVN0nMYUuPSR6kere0e
 I52gwQ+sPvkileeTx9LCXP24F7RjD/y0CGnwQjvuWscnU01RiJ5AYsXn+ 0=;
Authentication-Results: esa4.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: QCX7I9qvL8lSTyGjeqiCzn8Wufn8YQiM5bKagH93Anez7ScorX051BT1NfqxpZK47w17iwPrAt
 vlrW0ZP6kaGlLUh6PrXbNvahQyfKW0d0aAzvN+YQj8H44OTQdyKqlt2xrM15qcTA9BYx2TjxWG
 eTDCGE0zb/Yqhh5W7fGvfwzK6p8Ro9rOuF94XCdt32JZ7r1BTBGhqWp0xx4XfefSdNUT/sR9zu
 BX680PZN1cikLi+JyG4E2z4u+sKNoTtCf1SLHCAkzIzUfTlAkCn+wET5+AdT6SZ/6PSwO6H710
 gZU=
X-SBRS: 2.7
X-MesageID: 23292598
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,355,1589256000"; d="scan'208";a="23292598"
Date: Wed, 15 Jul 2020 17:17:35 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: =?utf-8?Q?Micha=C5=82_Leszczy=C5=84ski?= <michal.leszczynski@cert.pl>
Subject: Re: [PATCH v6 05/11] tools/libxl: add vmtrace_pt_size parameter
Message-ID: <20200715151735.GC7191@Air-de-Roger>
References: <cover.1594150543.git.michal.leszczynski@cert.pl>
 <ac7b950a7ef86cbf0c63fe428ec94e2b6fe27453.1594150543.git.michal.leszczynski@cert.pl>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <ac7b950a7ef86cbf0c63fe428ec94e2b6fe27453.1594150543.git.michal.leszczynski@cert.pl>
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: luwei.kang@intel.com, Wei Liu <wl@xen.org>, tamas.lengyel@intel.com,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Anthony PERARD <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Tue, Jul 07, 2020 at 09:39:44PM +0200, Michał Leszczyński wrote:
> From: Michal Leszczynski <michal.leszczynski@cert.pl>
> 
> Allow specifying the size of the per-vCPU trace buffer upon
> domain creation. This is zero by default (meaning: not enabled).
> 
> Signed-off-by: Michal Leszczynski <michal.leszczynski@cert.pl>
> ---
>  docs/man/xl.cfg.5.pod.in             | 13 +++++++++++++
>  tools/golang/xenlight/helpers.gen.go |  2 ++
>  tools/golang/xenlight/types.gen.go   |  1 +
>  tools/libxl/libxl.h                  |  8 ++++++++
>  tools/libxl/libxl_create.c           |  1 +
>  tools/libxl/libxl_types.idl          |  4 ++++
>  tools/xl/xl_parse.c                  | 22 ++++++++++++++++++++++
>  7 files changed, 51 insertions(+)
> 
> diff --git a/docs/man/xl.cfg.5.pod.in b/docs/man/xl.cfg.5.pod.in
> index 0532739c1f..ddef9b6014 100644
> --- a/docs/man/xl.cfg.5.pod.in
> +++ b/docs/man/xl.cfg.5.pod.in
> @@ -683,6 +683,19 @@ If this option is not specified then it will default to B<false>.
>  
>  =back
>  
> +=item B<processor_trace_buf_kb=KBYTES>
> +
> +Specifies the size of the processor trace buffer that will be allocated
> +for each vCPU belonging to this domain. Disabled (i.e.
> +B<processor_trace_buf_kb=0>) by default. This must be set to a
> +non-zero value in order to be able to use processor tracing features
> +with this domain.
> +
> +B<NOTE>: In order to use the Intel Processor Trace feature, this value
> +must be between 8 kB and 4 GB and it must be a power of 2.

I think the minimum that we could support is 4KB? (ie: one page?). Not
that it matters much, as I don't think anyone would use such small
buffers.

> diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
> index 9d3f05f399..748fde65ab 100644
> --- a/tools/libxl/libxl_types.idl
> +++ b/tools/libxl/libxl_types.idl
> @@ -645,6 +645,10 @@ libxl_domain_build_info = Struct("domain_build_info",[
>      # supported by x86 HVM and ARM support is planned.
>      ("altp2m", libxl_altp2m_mode),
>  
> +    # Size of preallocated processor trace buffers (in KBYTES).
> +    # Use zero value to disable this feature.
> +    ("processor_trace_buf_kb", integer),

MemKB should be used here instead of integer.

> +
>      ], dir=DIR_IN,
>         copy_deprecated_fn="libxl__domain_build_info_copy_deprecated",
>  )
> diff --git a/tools/xl/xl_parse.c b/tools/xl/xl_parse.c
> index 61b4ef7b7e..87e373b413 100644
> --- a/tools/xl/xl_parse.c
> +++ b/tools/xl/xl_parse.c
> @@ -1861,6 +1861,28 @@ void parse_config_data(const char *config_source,
>          }
>      }
>  
> +    if (!xlu_cfg_get_long(config, "processor_trace_buf_kb", &l, 1) && l) {
> +        if (l & (l - 1)) {
> +            fprintf(stderr, "ERROR: processor_trace_buf_kb"
> +                            " - must be a power of 2\n");
> +            exit(1);
> +        }
> +
> +        if (l < 8) {
> +            fprintf(stderr, "ERROR: processor_trace_buf_kb"
> +                            " - value is too small\n");
> +            exit(1);
> +        }
> +
> +        if (l > 1024*1024*4) {
> +            fprintf(stderr, "ERROR: processor_trace_buf_kb"
> +                            " - value is too large\n");
> +            exit(1);

Those checks shouldn't be here: this is libxl common code, and those
limitations are specific to the Intel implementation. They should live
inside the hypervisor IMO, or, if we really want to have them in libxl
for some reason, they should be moved to libxl_x86.c.

Thanks.


From xen-devel-bounces@lists.xenproject.org Wed Jul 15 15:21:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jul 2020 15:21:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvjEK-0004kj-8I; Wed, 15 Jul 2020 15:21:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=uifZ=A2=citrix.com=christian.lindig@srs-us1.protection.inumbo.net>)
 id 1jvjEJ-0004ke-HM
 for xen-devel@lists.xenproject.org; Wed, 15 Jul 2020 15:21:55 +0000
X-Inumbo-ID: ea332cf4-c6ae-11ea-bca7-bc764e2007e4
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ea332cf4-c6ae-11ea-bca7-bc764e2007e4;
 Wed, 15 Jul 2020 15:21:54 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1594826515;
 h=from:to:cc:subject:date:message-id:references:
 in-reply-to:content-transfer-encoding:mime-version;
 bh=tIIOFUyNL3Wdct8tKDjv7nitVdE2HsIhnbVMuNFpn8o=;
 b=IwLYu36SChRIBBBTLcbK/GLBafa+Oj/izF+7E5/Y459zBbxy0CUKf/DR
 yJQnMDhlMEpKXReVdJ2SiaTwc75olwdGFRxZ3WdsDtpCXPJ9M5SMl7CwL
 nOplk8K8eGceRoLFaqbCmseeaZIh/uYA8KNuwf2jBQE3sv6R22KO+l3U4 U=;
Authentication-Results: esa3.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: W1DqJkeJubrxKu7yPJiWNggEIR7mSRCcbs0MeMnq2nCNDXodK9+9W1jXkiyFf5tUSRnBhVhwTe
 ztUnWp7F2RqW/rFl0xj+z5jAq0XW5jpqOOJEbXtGub0MY0N6/nLa2Qm0N7yGgSJ/f3jQXW4uMW
 sChWFKxtVBqZDgOn9GpQmyM9z8d+EmpXL56aSdxzv3S/stBjExjXw8DuUzjLNwSjPnFFQEgKuz
 zQc9oazQ4wGhMr7y/9WQd3rZbSANlBx2EXMP5s9jTggeZFA1yiVU14Dcn7ibG3qIqKLdvjZPaO
 zUE=
X-SBRS: 2.7
X-MesageID: 22443251
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,355,1589256000"; d="scan'208";a="22443251"
From: Christian Lindig <christian.lindig@citrix.com>
To: Edwin Torok <edvin.torok@citrix.com>, "xen-devel@lists.xenproject.org"
 <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH v1 1/1] oxenstored: fix ABI breakage introduced in Xen
 4.9.0
Thread-Topic: [PATCH v1 1/1] oxenstored: fix ABI breakage introduced in Xen
 4.9.0
Thread-Index: AQHWWro4sadfUNXtWUGmqr5ru6wdMKkIwcwd
Date: Wed, 15 Jul 2020 15:21:50 +0000
Message-ID: <1594826510774.33560@citrix.com>
References: <cover.1594825512.git.edvin.torok@citrix.com>,
 <6fcfdb706cc2f666069c1d0bbc59d22f660fc81d.1594825512.git.edvin.torok@citrix.com>
In-Reply-To: <6fcfdb706cc2f666069c1d0bbc59d22f660fc81d.1594825512.git.edvin.torok@citrix.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-ms-exchange-transport-fromentityheader: Hosted
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <Ian.Jackson@citrix.com>,
 Igor Druzhinin <igor.druzhinin@citrix.com>, Wei
 Liu <wl@xen.org>, David Scott <dave@recoil.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

________________________________________
From: Edwin Török <edvin.torok@citrix.com>
Sent: 15 July 2020 16:10
To: xen-devel@lists.xenproject.org
Cc: Edwin Torok; Christian Lindig; David Scott; Ian Jackson; Wei Liu; Igor Druzhinin
Subject: [PATCH v1 1/1] oxenstored: fix ABI breakage introduced in Xen 4.9.0

dbc84d2983969bb47d294131ed9e6bbbdc2aec49 (Xen >= 4.9.0) deleted XS_RESTRICT
from oxenstored, which caused all the following opcodes to be shifted by 1:
reset_watches became off-by-one compared to the C version of xenstored.

Looking at the C code the opcode for reset watches needs:
XS_RESET_WATCHES = XS_SET_TARGET + 2

So add the placeholder `Invalid` in the OCaml<->C mapping list.
(Note that the code here doesn't simply convert the OCaml constructor to
 an integer, so we don't need to introduce a dummy constructor).

Igor says that with a suitably patched xenopsd to enable watch reset,
we now see `reset watches` during kdump of a guest in xenstored-access.log.


Signed-off-by: Edwin Török <edvin.torok@citrix.com>
Tested-by: Igor Druzhinin <igor.druzhinin@citrix.com>
---
 tools/ocaml/libs/xb/op.ml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/ocaml/libs/xb/op.ml b/tools/ocaml/libs/xb/op.ml
index d4f1f08185..9bcab0f38c 100644
--- a/tools/ocaml/libs/xb/op.ml
+++ b/tools/ocaml/libs/xb/op.ml
@@ -28,7 +28,7 @@ let operation_c_mapping =
            Transaction_end; Introduce; Release;
            Getdomainpath; Write; Mkdir; Rm;
            Setperms; Watchevent; Error; Isintroduced;
-           Resume; Set_target; Reset_watches |]
+           Resume; Set_target; Invalid; Reset_watches |]
 let size = Array.length operation_c_mapping

 let array_search el a =
--
2.25.1

-- 
Acked-by: Christian Lindig <christian.lindig@citrix.com>


From xen-devel-bounces@lists.xenproject.org Wed Jul 15 15:41:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jul 2020 15:41:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvjX4-0006VF-U7; Wed, 15 Jul 2020 15:41:18 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=o5qj=A2=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jvjX3-0006V4-AL
 for xen-devel@lists.xenproject.org; Wed, 15 Jul 2020 15:41:17 +0000
X-Inumbo-ID: 9e73d4c8-c6b1-11ea-b7bb-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9e73d4c8-c6b1-11ea-b7bb-bc764e2007e4;
 Wed, 15 Jul 2020 15:41:16 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jvjX0-0006HI-Vv; Wed, 15 Jul 2020 16:41:15 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH] docs/process/branching-checklist: Get osstest branch right
Date: Wed, 15 Jul 2020 16:41:12 +0100
Message-Id: <20200715154112.4719-1-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 ian.jackson@eu.citrix.com, George Dunlap <george.dunlap@citrix.com>,
 Jan Beulich <jbeulich@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

The runes for this manual osstest run were wrong.  It needs to run as
osstest, and cr-for-branches should be run from testing.git.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 docs/process/branching-checklist.txt | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/process/branching-checklist.txt b/docs/process/branching-checklist.txt
index e286e65962..0e83272caa 100644
--- a/docs/process/branching-checklist.txt
+++ b/docs/process/branching-checklist.txt
@@ -86,8 +86,8 @@ including turning off debug.
 
 Set off a manual osstest run, since the osstest cr-for-branches change
 will take a while to take effect:
-  ssh osstest.test-lab
-  cd branches/for-xen-$v-testing.git
+  ssh osstest@osstest.test-lab
+  cd testing.git
   screen -S $v
   BRANCHES=xen-$v-testing ./cr-for-branches branches -w "./cr-daily-branch --real"
 
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Wed Jul 15 15:43:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jul 2020 15:43:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvjZd-0006dO-D4; Wed, 15 Jul 2020 15:43:57 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=osgb=A2=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jvjZc-0006dJ-9R
 for xen-devel@lists.xenproject.org; Wed, 15 Jul 2020 15:43:56 +0000
X-Inumbo-ID: fd095db4-c6b1-11ea-9405-12813bfff9fa
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id fd095db4-c6b1-11ea-9405-12813bfff9fa;
 Wed, 15 Jul 2020 15:43:55 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1594827835;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:content-transfer-encoding:in-reply-to;
 bh=eFX4X3Du+yMs4sKTHGETgVtzJC47K3y4UNn5g7DdCp0=;
 b=cfQac7w6Dmw1G4MWMdJSFflbnCx2NuACQr2IU+Vx7T56kNKuDSzdwdz3
 sZpsV2yJ8AMy0EqUo10eFsjgL+fO45qpsza2xxdGXnEN71oR3gCkKgrzs
 dLfinjUX7IQQWsNzm8EEWzKlC/2PCiUcB+qLm2r5VkUb2Kyp9fyPA3ubu k=;
Authentication-Results: esa3.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: zy1S6SxYV46G6qHME/eZ5AvW8W/1oZddLztwLx1MC9IYuBS3xlCEDhynA9X6XTlDbvu5sgAgwQ
 9CJDCC59sW24r/apdt47HNeeubJT2RjfbpfkZuP3i0oaO60P4PwNHkDGM9Q9aBgOeU4cBy1o2g
 3tMXh5jj7Q1SqfNrkA1tzehLQ5ureRV43Yk53p68fgCC2/4UGZTTi7/90+7mO2gsusGO5EVAqN
 6BMSJB/fN4dW+JJj9ktn2aMBCUD4s868hqJW81VZuASYJBiJ0WkaDNhuzTZwZ+mXn7CbCOVnCC
 wqA=
X-SBRS: 2.7
X-MesageID: 22445449
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,355,1589256000"; d="scan'208";a="22445449"
Date: Wed, 15 Jul 2020 17:43:47 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: =?utf-8?Q?Micha=C5=82_Leszczy=C5=84ski?= <michal.leszczynski@cert.pl>
Subject: Re: [PATCH v6 06/11] x86/hvm: processor trace interface in HVM
Message-ID: <20200715154347.GD7191@Air-de-Roger>
References: <cover.1594150543.git.michal.leszczynski@cert.pl>
 <1916e06793ffaaa70c471bcd6bcf168597793bd5.1594150543.git.michal.leszczynski@cert.pl>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <1916e06793ffaaa70c471bcd6bcf168597793bd5.1594150543.git.michal.leszczynski@cert.pl>
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 tamas.lengyel@intel.com, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, luwei.kang@intel.com,
 Jan Beulich <jbeulich@suse.com>, xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Tue, Jul 07, 2020 at 09:39:45PM +0200, Michał Leszczyński wrote:
> From: Michal Leszczynski <michal.leszczynski@cert.pl>
> 
> Implement necessary changes in common code/HVM to support
> processor trace features. Define vmtrace_pt_* API and
> implement trace buffer allocation/deallocation in common
> code.
> 
> Signed-off-by: Michal Leszczynski <michal.leszczynski@cert.pl>
> ---
>  xen/arch/x86/domain.c         | 21 +++++++++++++++++++++
>  xen/common/domain.c           | 35 +++++++++++++++++++++++++++++++++++
>  xen/include/asm-x86/hvm/hvm.h | 20 ++++++++++++++++++++
>  xen/include/xen/sched.h       |  4 ++++
>  4 files changed, 80 insertions(+)
> 
> diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
> index b75017b28b..8ce2ab6b8f 100644
> --- a/xen/arch/x86/domain.c
> +++ b/xen/arch/x86/domain.c
> @@ -2205,6 +2205,27 @@ int domain_relinquish_resources(struct domain *d)
>                  altp2m_vcpu_disable_ve(v);
>          }
>  
> +        for_each_vcpu ( d, v )
> +        {
> +            unsigned int i;
> +            uint64_t nr_pages = v->domain->processor_trace_buf_kb * KB(1);
> +            nr_pages >>= PAGE_SHIFT;

It would be easier as:

unsigned int nr_pages = d->processor_trace_buf_kb / KB(4);

Or maybe:

unsigned int nr_pages = d->processor_trace_buf_kb >> (PAGE_SHIFT - 10);

> +
> +            if ( !v->vmtrace.pt_buf )
> +                continue;
> +
> +            for ( i = 0; i < nr_pages; i++ )
> +            {
> +                struct page_info *pg = mfn_to_page(
> +                    mfn_add(page_to_mfn(v->vmtrace.pt_buf), i));

You can just do:

struct page_info *pg = v->vmtrace.pt_buf[i];

> +
> +                put_page_alloc_ref(pg);
> +                put_page_and_type(pg);
> +            }
> +
> +            v->vmtrace.pt_buf = NULL;
> +        }
> +
>          if ( is_pv_domain(d) )
>          {
>              for_each_vcpu ( d, v )
> diff --git a/xen/common/domain.c b/xen/common/domain.c
> index e6e8f88da1..193099a2ab 100644
> --- a/xen/common/domain.c
> +++ b/xen/common/domain.c
> @@ -137,6 +137,38 @@ static void vcpu_destroy(struct vcpu *v)
>      free_vcpu_struct(v);
>  }
>  
> +static int vmtrace_alloc_buffers(struct vcpu *v)
> +{
> +    unsigned int i;
> +    struct page_info *pg;
> +    uint64_t size = v->domain->processor_trace_buf_kb * KB(1);

Same here, you could just use a number of pages directly and turn size
into 'unsigned int nr_pages', and then use get_order_from_pages
below.

> +
> +    pg = alloc_domheap_pages(v->domain, get_order_from_bytes(size),
> +                             MEMF_no_refcount);
> +

Extra newline.

> +    if ( !pg )
> +        return -ENOMEM;
> +
> +    for ( i = 0; i < (size >> PAGE_SHIFT); i++ )
> +    {
> +        struct page_info *pg_iter = mfn_to_page(
> +            mfn_add(page_to_mfn(pg), i));

Same as above here, just use pg[i],

> +
> +        if ( !get_page_and_type(pg_iter, v->domain, PGT_writable_page) )
> +        {
> +            /*
> +             * The domain can't possibly know about this page yet, so failure
> +             * here is a clear indication of something fishy going on.
> +             */
> +            domain_crash(v->domain);
> +            return -ENODATA;

ENODATA is IMO a weird return code, ENOMEM would likely be better.

What about the pg array of pages, don't you need to free it somehow?
(and likely drop the references to pages before pg[i] on the array)

> +        }
> +    }

Also you seem to assume that size is a power of 2, but I think that's
only guaranteed by the current Intel implementation, and hence other
implementations could have a more lax requirement (or even Intel when
using TOPA).

So you need to free the remaining pages if
(1 << get_order_from_pages(nr_pages)) != nr_pages.

> +
> +    v->vmtrace.pt_buf = pg;
> +    return 0;
> +}
> +
>  struct vcpu *vcpu_create(struct domain *d, unsigned int vcpu_id)
>  {
>      struct vcpu *v;
> @@ -162,6 +194,9 @@ struct vcpu *vcpu_create(struct domain *d, unsigned int vcpu_id)
>      v->vcpu_id = vcpu_id;
>      v->dirty_cpu = VCPU_CPU_CLEAN;
>  
> +    if ( d->processor_trace_buf_kb && vmtrace_alloc_buffers(v) != 0 )
> +        return NULL;

Don't you need to do some cleanup here in case of failure? AFAICT this
seems to leak the allocated v at least.

>      spin_lock_init(&v->virq_lock);
>  
>      tasklet_init(&v->continue_hypercall_tasklet, NULL, NULL);
> diff --git a/xen/include/asm-x86/hvm/hvm.h b/xen/include/asm-x86/hvm/hvm.h
> index 1eb377dd82..476a216205 100644
> --- a/xen/include/asm-x86/hvm/hvm.h
> +++ b/xen/include/asm-x86/hvm/hvm.h
> @@ -214,6 +214,10 @@ struct hvm_function_table {
>      bool_t (*altp2m_vcpu_emulate_ve)(struct vcpu *v);
>      int (*altp2m_vcpu_emulate_vmfunc)(const struct cpu_user_regs *regs);
>  
> +    /* vmtrace */
> +    int (*vmtrace_control_pt)(struct vcpu *v, bool enable);
> +    int (*vmtrace_get_pt_offset)(struct vcpu *v, uint64_t *offset, uint64_t *size);
> +
>      /*
>       * Parameters and callbacks for hardware-assisted TSC scaling,
>       * which are valid only when the hardware feature is available.
> @@ -655,6 +659,22 @@ static inline bool altp2m_vcpu_emulate_ve(struct vcpu *v)
>      return false;
>  }
>  
> +static inline int vmtrace_control_pt(struct vcpu *v, bool enable)
> +{
> +    if ( hvm_funcs.vmtrace_control_pt )
> +        return hvm_funcs.vmtrace_control_pt(v, enable);
> +
> +    return -EOPNOTSUPP;
> +}
> +
> +static inline int vmtrace_get_pt_offset(struct vcpu *v, uint64_t *offset, uint64_t *size)
> +{
> +    if ( hvm_funcs.vmtrace_get_pt_offset )
> +        return hvm_funcs.vmtrace_get_pt_offset(v, offset, size);
> +
> +    return -EOPNOTSUPP;
> +}

I think this API would be better placed together with the VMX
implementation of those functions; introducing it in this patch is
not required since there are no callers?

Thanks.


From xen-devel-bounces@lists.xenproject.org Wed Jul 15 15:57:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jul 2020 15:57:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvjmF-0007aI-QX; Wed, 15 Jul 2020 15:56:59 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tzA3=A2=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jvjmE-0007aD-J7
 for xen-devel@lists.xenproject.org; Wed, 15 Jul 2020 15:56:58 +0000
X-Inumbo-ID: cfb1affe-c6b3-11ea-bca7-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id cfb1affe-c6b3-11ea-bca7-bc764e2007e4;
 Wed, 15 Jul 2020 15:56:57 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=n0bLvbgZywoL0EDC0g/KkVkjpNSI183mn2yebHyHfy4=; b=0s/Rh+mTxFucxRz6mx8Hehlhj
 LzJekMzjLANuTYd1fsGMHTKPxW0JwldqTMXaBYF2N8PVj7fA9rXGesHiW4zQrBRyb3amfBDrB4DuX
 otAv7fsC1/8KwL1fThLHV/yBuuak8vGi0L0lX7MwClNtf3BeWyVFVauV7extHmj7brark=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jvjmC-000116-R8; Wed, 15 Jul 2020 15:56:56 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jvjmC-0003sE-Bp; Wed, 15 Jul 2020 15:56:56 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jvjmC-0007RG-B8; Wed, 15 Jul 2020 15:56:56 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151907-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 151907: all pass - PUSHED
X-Osstest-Versions-This: ovmf=c7195b9ec3c5f8f40119c96ac4a0ab1e8cebe9dc
X-Osstest-Versions-That: ovmf=256c4470f86e53661c070f8c64a1052e975f9ef0
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 15 Jul 2020 15:56:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151907 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151907/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 c7195b9ec3c5f8f40119c96ac4a0ab1e8cebe9dc
baseline version:
 ovmf                 256c4470f86e53661c070f8c64a1052e975f9ef0

Last test of basis   151898  2020-07-14 17:42:39 Z    0 days
Testing same since   151907  2020-07-15 03:30:35 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Zhichao Gao <zhichao.gao@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   256c4470f8..c7195b9ec3  c7195b9ec3c5f8f40119c96ac4a0ab1e8cebe9dc -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Wed Jul 15 15:59:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jul 2020 15:59:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvjog-0007io-8t; Wed, 15 Jul 2020 15:59:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=COeM=A2=xen.org=paul@srs-us1.protection.inumbo.net>)
 id 1jvjof-0007iB-EY
 for xen-devel@lists.xenproject.org; Wed, 15 Jul 2020 15:59:29 +0000
X-Inumbo-ID: 26c6066e-c6b4-11ea-8496-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 26c6066e-c6b4-11ea-8496-bc764e2007e4;
 Wed, 15 Jul 2020 15:59:23 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Reply-To:Message-Id:Date:Subject:To:From:Sender:Cc:
 MIME-Version:Content-Type:Content-Transfer-Encoding:Content-ID:
 Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc
 :Resent-Message-ID:In-Reply-To:References:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=f3yeW6sSSoYP7f3EUuNv6OxXPwLcAHQm+R6d13VyfOk=; b=5mfnarL1bmdj/Run/5T+UjZKHo
 /MXYkYfu3cEAfgFzFCIJhC5lfJR7R5QAyi8ZQTvSzBXLu3cfzeTwC6uSA8NoK++K17B1siA4suQiw
 72M8kX81JnLJBGMdBH20cMufksgLif2CJ7NQXooM2+E8wRM6ge/fYn4gFlkRjRT6C8rU=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1jvjoZ-00013U-Ce; Wed, 15 Jul 2020 15:59:23 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235]
 helo=CBG-R90WXYV0.amazon.com) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1jvjoZ-0001SJ-1N; Wed, 15 Jul 2020 15:59:23 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org, xen-users@lists.xenproject.org,
 xen-announce@lists.xenproject.org
Subject: Xen 4.14 RC6
Date: Wed, 15 Jul 2020 16:59:21 +0100
Message-Id: <20200715155921.5543-1-paul@xen.org>
X-Mailer: git-send-email 2.17.1
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Reply-To: xen-devel@lists.xenproject.org, paul@xen.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi all,

Xen 4.14 RC6 is tagged. You can check that out from xen.git:

git://xenbits.xen.org/xen.git 4.14.0-rc6

For your convenience there is also a tarball at:
https://downloads.xenproject.org/release/xen/4.14.0-rc6/xen-4.14.0-rc6.tar.gz

And the signature is at:
https://downloads.xenproject.org/release/xen/4.14.0-rc6/xen-4.14.0-rc6.tar.gz.sig

This RC is built from the new stable-4.14 branch, with CONFIG_DEBUG=n. Please
test, as this is intended to be the final RC before release. As always, please
send bug reports and test reports to xen-devel@lists.xenproject.org. When
sending bug reports, please CC relevant maintainers and me (paul@xen.org).

  Paul Durrant



From xen-devel-bounces@lists.xenproject.org Wed Jul 15 16:04:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jul 2020 16:04:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvjtg-0000l3-Ad; Wed, 15 Jul 2020 16:04:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=osgb=A2=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jvjte-0000kx-RZ
 for xen-devel@lists.xenproject.org; Wed, 15 Jul 2020 16:04:38 +0000
X-Inumbo-ID: e1e75dbc-c6b4-11ea-bca7-bc764e2007e4
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e1e75dbc-c6b4-11ea-bca7-bc764e2007e4;
 Wed, 15 Jul 2020 16:04:37 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1594829077;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:content-transfer-encoding:in-reply-to;
 bh=48p4kslhnmsyZF5L086IqNPIBWykDiVgHXxxLjb+fFw=;
 b=RQW9S0EAKDCuSn/+fPpf7O3tb7ayUKnJwvMPkrQ2CjNGOgWaK3cdRoVZ
 7NQTu1UsTyDEiY10QsuMHqVI43H5SMnj78k1LEDugrBJrmb/qdV4EjG8n
 AnaEO9ysUTUPMgFlGBScBW8ytizu/nWrXDcELU0qnF7TQZn/Hq/K/cfVF U=;
Authentication-Results: esa6.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: 7HpVz495a2Lzcv0+m9enR+Gy8pkxcyUrXV3iUivkFNrWDcgDwTTBerVl5NvFjLv0o8bOFBb5wy
 8DTxDDSnJRRgxCX09t3i80jt9s+h4pe/h4rKmGJiQ2eCnjQ+gMhDtbQU/b3zUE267q229xy99a
 +7jp+GDsDn7PrCY2PUagzh9dKwZ+s6AhAt1ifDKcrfOba4izeSOZ7DzZ7Ijnaf27lrAQ9Ww5F6
 fxH/AldoP6FPe+LiYFK7FIqXSqufU2M5qM6wJVFNZE8WfZ+KfE3W/KvSLm4Qzgw/J7xshNDV9v
 tV0=
X-SBRS: 2.7
X-MesageID: 22780377
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,355,1589256000"; d="scan'208";a="22780377"
Date: Wed, 15 Jul 2020 18:04:30 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: =?utf-8?Q?Micha=C5=82_Leszczy=C5=84ski?= <michal.leszczynski@cert.pl>
Subject: Re: [PATCH v6 07/11] x86/vmx: implement IPT in VMX
Message-ID: <20200715160430.GE7191@Air-de-Roger>
References: <cover.1594150543.git.michal.leszczynski@cert.pl>
 <7ddfc44d6ffde0fa307f0e074225f588c397aef0.1594150543.git.michal.leszczynski@cert.pl>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <7ddfc44d6ffde0fa307f0e074225f588c397aef0.1594150543.git.michal.leszczynski@cert.pl>
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin Tian <kevin.tian@intel.com>, tamas.lengyel@intel.com,
 Jun Nakajima <jun.nakajima@intel.com>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, luwei.kang@intel.com,
 Jan Beulich <jbeulich@suse.com>, xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Tue, Jul 07, 2020 at 09:39:46PM +0200, Michał Leszczyński wrote:
> From: Michal Leszczynski <michal.leszczynski@cert.pl>
> 
> Use Intel Processor Trace feature to provide vmtrace_pt_*
> interface for HVM/VMX.
> 
> Signed-off-by: Michal Leszczynski <michal.leszczynski@cert.pl>
> ---
>  xen/arch/x86/hvm/vmx/vmx.c         | 110 +++++++++++++++++++++++++++++
>  xen/include/asm-x86/hvm/vmx/vmcs.h |   3 +
>  xen/include/asm-x86/hvm/vmx/vmx.h  |  14 ++++
>  3 files changed, 127 insertions(+)
> 
> diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
> index cc6d4ece22..63a5a76e16 100644
> --- a/xen/arch/x86/hvm/vmx/vmx.c
> +++ b/xen/arch/x86/hvm/vmx/vmx.c
> @@ -428,6 +428,56 @@ static void vmx_domain_relinquish_resources(struct domain *d)
>      vmx_free_vlapic_mapping(d);
>  }
>  
> +static int vmx_init_pt(struct vcpu *v)
> +{
> +    int rc;
> +    uint64_t size = v->domain->processor_trace_buf_kb * KB(1);

As I commented in other patches, I don't think there's a need to have
the size in bytes, and hence you could just convert to a number of pages?

You might have to check that the value is rounded to a page boundary.

> +
> +    if ( !v->vmtrace.pt_buf || !size )
> +        return -EINVAL;
> +
> +    /*
> +     * We don't accept trace buffer size smaller than single page
> +     * and the upper bound is defined as 4GB in the specification.
> +     * The buffer size must be also a power of 2.
> +     */
> +    if ( size < PAGE_SIZE || size > GB(4) || (size & (size - 1)) )
> +        return -EINVAL;

IMO there should be a hook to sanitize the buffer size before you go
and allocate it. It makes no sense to allocate a buffer only to come
here and realize it's not suitable.

> +
> +    v->arch.hvm.vmx.ipt_state = xzalloc(struct ipt_state);
> +

Extra newline.

> +    if ( !v->arch.hvm.vmx.ipt_state )
> +        return -ENOMEM;
> +
> +    v->arch.hvm.vmx.ipt_state->output_base =
> +        page_to_maddr(v->vmtrace.pt_buf);

The above fits on a single line now. You could also avoid having an
output_base field and just do the conversion in vmx_restore_guest_msrs;
I'm not sure there's much value in having this cached here.

> +    v->arch.hvm.vmx.ipt_state->output_mask.raw = size - 1;
> +
> +    rc = vmx_add_host_load_msr(v, MSR_RTIT_CTL, 0);
> +
> +    if ( rc )
> +        return rc;
> +
> +    rc = vmx_add_guest_msr(v, MSR_RTIT_CTL,
> +                              RTIT_CTL_TRACE_EN | RTIT_CTL_OS |
> +                              RTIT_CTL_USR | RTIT_CTL_BRANCH_EN);
> +
> +    if ( rc )
> +        return rc;

We don't usually leave an empty line between setting and testing rc.

> +
> +    return 0;
> +}
> +
> +static int vmx_destroy_pt(struct vcpu* v)
> +{
> +    if ( v->arch.hvm.vmx.ipt_state )
> +        xfree(v->arch.hvm.vmx.ipt_state);
> +
> +    v->arch.hvm.vmx.ipt_state = NULL;
> +    return 0;
> +}
> +
> +

Double newline, just one newline please between functions.

>  static int vmx_vcpu_initialise(struct vcpu *v)
>  {
>      int rc;
> @@ -471,6 +521,14 @@ static int vmx_vcpu_initialise(struct vcpu *v)
>  
>      vmx_install_vlapic_mapping(v);
>  
> +    if ( v->domain->processor_trace_buf_kb )

Can you move this check inside vmx_init_pt, so that here you just
do:

return vmx_init_pt(v);

> +    {
> +        rc = vmx_init_pt(v);
> +
> +        if ( rc )
> +            return rc;
> +    }
> +
>      return 0;
>  }
>  
> @@ -483,6 +541,7 @@ static void vmx_vcpu_destroy(struct vcpu *v)
>       * prior to vmx_domain_destroy so we need to disable PML for each vcpu
>       * separately here.
>       */
> +    vmx_destroy_pt(v);
>      vmx_vcpu_disable_pml(v);
>      vmx_destroy_vmcs(v);
>      passive_domain_destroy(v);
> @@ -513,6 +572,18 @@ static void vmx_save_guest_msrs(struct vcpu *v)
>       * be updated at any time via SWAPGS, which we cannot trap.
>       */
>      v->arch.hvm.vmx.shadow_gs = rdgsshadow();
> +
> +    if ( unlikely(v->arch.hvm.vmx.ipt_state &&
> +                  v->arch.hvm.vmx.ipt_state->active) )
> +    {
> +        uint64_t rtit_ctl;

Missing newline.

> +        rdmsrl(MSR_RTIT_CTL, rtit_ctl);
> +        BUG_ON(rtit_ctl & RTIT_CTL_TRACE_EN);
> +
> +        rdmsrl(MSR_RTIT_STATUS, v->arch.hvm.vmx.ipt_state->status);
> +        rdmsrl(MSR_RTIT_OUTPUT_MASK,
> +               v->arch.hvm.vmx.ipt_state->output_mask.raw);
> +    }
>  }
>  
>  static void vmx_restore_guest_msrs(struct vcpu *v)
> @@ -524,6 +595,17 @@ static void vmx_restore_guest_msrs(struct vcpu *v)
>  
>      if ( cpu_has_msr_tsc_aux )
>          wrmsr_tsc_aux(v->arch.msrs->tsc_aux);
> +
> +    if ( unlikely(v->arch.hvm.vmx.ipt_state &&
> +                  v->arch.hvm.vmx.ipt_state->active) )
> +    {
> +        wrmsrl(MSR_RTIT_OUTPUT_BASE,
> +               v->arch.hvm.vmx.ipt_state->output_base);
> +        wrmsrl(MSR_RTIT_OUTPUT_MASK,
> +               v->arch.hvm.vmx.ipt_state->output_mask.raw);
> +        wrmsrl(MSR_RTIT_STATUS,
> +               v->arch.hvm.vmx.ipt_state->status);
> +    }
>  }
>  
>  void vmx_update_cpu_exec_control(struct vcpu *v)
> @@ -2240,6 +2322,25 @@ static bool vmx_get_pending_event(struct vcpu *v, struct x86_event *info)
>      return true;
>  }
>  
> +static int vmx_control_pt(struct vcpu *v, bool enable)
> +{
> +    if ( !v->arch.hvm.vmx.ipt_state )
> +        return -EINVAL;
> +
> +    v->arch.hvm.vmx.ipt_state->active = enable;

I think you should assert that the vCPU is paused, as doing this on a
non-paused vCPU is not going to work reliably.

> +    return 0;
> +}
> +
> +static int vmx_get_pt_offset(struct vcpu *v, uint64_t *offset, uint64_t *size)
> +{
> +    if ( !v->arch.hvm.vmx.ipt_state )
> +        return -EINVAL;
> +
> +    *offset = v->arch.hvm.vmx.ipt_state->output_mask.offset;
> +    *size = v->arch.hvm.vmx.ipt_state->output_mask.size + 1;
> +    return 0;
> +}
> +
>  static struct hvm_function_table __initdata vmx_function_table = {
>      .name                 = "VMX",
>      .cpu_up_prepare       = vmx_cpu_up_prepare,
> @@ -2295,6 +2396,8 @@ static struct hvm_function_table __initdata vmx_function_table = {
>      .altp2m_vcpu_update_vmfunc_ve = vmx_vcpu_update_vmfunc_ve,
>      .altp2m_vcpu_emulate_ve = vmx_vcpu_emulate_ve,
>      .altp2m_vcpu_emulate_vmfunc = vmx_vcpu_emulate_vmfunc,
> +    .vmtrace_control_pt = vmx_control_pt,
> +    .vmtrace_get_pt_offset = vmx_get_pt_offset,
>      .tsc_scaling = {
>          .max_ratio = VMX_TSC_MULTIPLIER_MAX,
>      },
> @@ -3674,6 +3777,13 @@ void vmx_vmexit_handler(struct cpu_user_regs *regs)
>  
>      hvm_invalidate_regs_fields(regs);
>  
> +    if ( unlikely(v->arch.hvm.vmx.ipt_state &&
> +                  v->arch.hvm.vmx.ipt_state->active) )
> +    {
> +        rdmsrl(MSR_RTIT_OUTPUT_MASK,
> +               v->arch.hvm.vmx.ipt_state->output_mask.raw);
> +    }

Unneeded braces.

Thanks.


From xen-devel-bounces@lists.xenproject.org Wed Jul 15 16:25:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jul 2020 16:25:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvkE3-0002Zn-Vm; Wed, 15 Jul 2020 16:25:43 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=o5qj=A2=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jvkE2-0002Yi-A6
 for xen-devel@lists.xenproject.org; Wed, 15 Jul 2020 16:25:42 +0000
X-Inumbo-ID: d00b3dae-c6b7-11ea-bca7-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d00b3dae-c6b7-11ea-bca7-bc764e2007e4;
 Wed, 15 Jul 2020 16:25:36 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jvkDv-0001sU-Je; Wed, 15 Jul 2020 17:25:35 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH 01/12] stubdom: add stubdom/mini-os.mk for Xen paths used by
 Mini-OS
Date: Wed, 15 Jul 2020 17:25:00 +0100
Message-Id: <20200715162511.5941-3-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200715162511.5941-1-ian.jackson@eu.citrix.com>
References: <20200715162511.5941-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Reply-To: incoming+61544a64d0c2dc4555813e58f3810dd7@incoming.gitlab.com,
 xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
 Samuel Thibault <samuel.thibault@ens-lyon.org>, ian.jackson@eu.citrix.com
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Juergen Gross <jgross@suse.com>

stubdom/mini-os.mk should contain paths used by Mini-OS when built as
stubdom.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 stubdom/mini-os.mk | 17 +++++++++++++++++
 1 file changed, 17 insertions(+)
 create mode 100644 stubdom/mini-os.mk

diff --git a/stubdom/mini-os.mk b/stubdom/mini-os.mk
new file mode 100644
index 0000000000..32528bb91f
--- /dev/null
+++ b/stubdom/mini-os.mk
@@ -0,0 +1,17 @@
+# Included by Mini-OS stubdom builds to set variables depending on Xen
+# internal paths.
+#
+# Input variables are:
+# XEN_ROOT
+# MINIOS_TARGET_ARCH
+
+XENSTORE_CPPFLAGS = -isystem $(XEN_ROOT)/tools/xenstore/include
+TOOLCORE_PATH = $(XEN_ROOT)/stubdom/libs-$(MINIOS_TARGET_ARCH)/toolcore
+TOOLLOG_PATH = $(XEN_ROOT)/stubdom/libs-$(MINIOS_TARGET_ARCH)/toollog
+EVTCHN_PATH = $(XEN_ROOT)/stubdom/libs-$(MINIOS_TARGET_ARCH)/evtchn
+GNTTAB_PATH = $(XEN_ROOT)/stubdom/libs-$(MINIOS_TARGET_ARCH)/gnttab
+CALL_PATH = $(XEN_ROOT)/stubdom/libs-$(MINIOS_TARGET_ARCH)/call
+FOREIGNMEMORY_PATH = $(XEN_ROOT)/stubdom/libs-$(MINIOS_TARGET_ARCH)/foreignmemory
+DEVICEMODEL_PATH = $(XEN_ROOT)/stubdom/libs-$(MINIOS_TARGET_ARCH)/devicemodel
+CTRL_PATH = $(XEN_ROOT)/stubdom/libxc-$(MINIOS_TARGET_ARCH)
+GUEST_PATH = $(XEN_ROOT)/stubdom/libxc-$(MINIOS_TARGET_ARCH)
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Wed Jul 15 16:25:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jul 2020 16:25:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvkDt-0002Yn-4U; Wed, 15 Jul 2020 16:25:33 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=o5qj=A2=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jvkDs-0002Yi-GZ
 for xen-devel@lists.xenproject.org; Wed, 15 Jul 2020 16:25:32 +0000
X-Inumbo-ID: cc98d0aa-c6b7-11ea-b7bb-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id cc98d0aa-c6b7-11ea-b7bb-bc764e2007e4;
 Wed, 15 Jul 2020 16:25:30 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jvkDp-0001sU-Hz; Wed, 15 Jul 2020 17:25:29 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org,
	xen-devel@dornerworks.com
Subject: [PATCH 00/12] tools: move more libraries into tools/libs
Date: Wed, 15 Jul 2020 17:24:58 +0100
Message-Id: <20200715162511.5941-1-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Reply-To: incoming+61544a64d0c2dc4555813e58f3810dd7@incoming.gitlab.com,
 xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 ian.jackson@eu.citrix.com, George Dunlap <george.dunlap@citrix.com>,
 =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?=
 <marmarek@invisiblethingslab.com>,
 Stewart Hildebrand <stewart.hildebrand@dornerworks.com>,
 Josh Whitehead <josh.whitehead@dornerworks.com>,
 Jan Beulich <jbeulich@suse.com>, Anthony PERARD <anthony.perard@citrix.com>,
 Samuel Thibault <samuel.thibault@ens-lyon.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

[ NB: this patch series is actually from Juergen Gross.

  It is being experimentally handled as a Merge Request in gitlab, in
  part to see what problems there are with that workflow that will
  need extra tooling or whatever.

  I have manually generated this series using git-format-patch,
  scripts/add_maintainers.pl, and git-send-email.  I expect that if we
  adopt this as a real workflow, we will want to make a robot do some
  of that.

  I have set replies to go to the Gitlab comment thread and to
  xen-devel.  Again this is experimental.  We are likely to need
  something to automatically collect acks, at the very least.

  Reviewers: for now, please review this series as normal.  You may
  reply to the messages by email.  Please, for now, send your replies
  to gitlab and to the mailing list.  I think I have set the reply-to
  appropriately.

  Alternatively you may review the code in the gitlab web UI.  But
  please do not use the line-by-line comment system: write only to the
  main MR discussion thread.

  Thanks, Ian ]

Move some more libraries under tools/libs, including libxenctrl. This
is resulting in a lot of cleanup work regarding building libs and
restructuring of the tools directory.

I have (for now) left out some more libraries like libxenguest and
libxl, but I can try moving those, too, if wanted.

Please note that patch 8 ("tools: move libxenctrl below tools/libs")
needs the related mini-os and qemu-trad patches applied in order to
not break the build:

https://lists.xen.org/archives/html/xen-devel/2020-07/msg00548.html

https://lists.xen.org/archives/html/xen-devel/2020-07/msg00617.html



Juergen Gross (12):
  stubdom: add stubdom/mini-os.mk for Xen paths used by Mini-OS
  tools: switch XEN_LIBXEN* make variables to lower case (XEN_libxen*)
  tools: add a copy of library headers in tools/include
  tools: don't call make recursively from libs.mk
  tools: define ROUNDUP() in tools/include/xen-tools/libs.h
  tools/misc: don't use libxenctrl internals from misc tools
  tools/libxc: untangle libxenctrl from libxenguest
  tools: move libxenctrl below tools/libs
  tools: split libxenstore into new tools/libs/store directory
  tools: split libxenvchan into new tools/libs/vchan directory
  tools: split libxenstat into new tools/libs/stat directory
  tools: generate most contents of library make variables

 .gitignore                                    |  29 +++-
 MAINTAINERS                                   |   2 +-
 stubdom/Makefile                              |  29 +++-
 stubdom/grub/kexec.c                          |   2 +-
 stubdom/mini-os.mk                            |  17 ++
 tools/Makefile                                |  15 +-
 tools/Rules.mk                                | 142 ++++++---------
 tools/console/daemon/io.c                     |   6 +-
 tools/golang/xenlight/Makefile                |   4 +-
 tools/helpers/init-xenstore-domain.c          |   2 +-
 tools/include/xen-tools/libs.h                |   4 +
 tools/libs/Makefile                           |   4 +
 tools/libs/call/Makefile                      |   3 +-
 tools/libs/call/buffer.c                      |   3 +-
 tools/libs/ctrl/Makefile                      |  68 ++++++++
 tools/{libxc => libs/ctrl}/include/xenctrl.h  |   0
 .../ctrl}/include/xenctrl_compat.h            |   0
 .../ctrl/include/xenctrl_dom.h}               |  10 +-
 tools/libs/ctrl/libxenctrl.map                |   3 +
 tools/{libxc => libs/ctrl}/xc_altp2m.c        |   0
 tools/{libxc => libs/ctrl}/xc_arinc653.c      |   0
 tools/{libxc => libs/ctrl}/xc_bitops.h        |   0
 tools/{libxc => libs/ctrl}/xc_core.c          |   5 +-
 tools/{libxc => libs/ctrl}/xc_core.h          |   2 +-
 tools/{libxc => libs/ctrl}/xc_core_arm.c      |   2 +-
 tools/{libxc => libs/ctrl}/xc_core_arm.h      |   0
 tools/{libxc => libs/ctrl}/xc_core_x86.c      |   6 +-
 tools/{libxc => libs/ctrl}/xc_core_x86.h      |   0
 tools/{libxc => libs/ctrl}/xc_cpu_hotplug.c   |   0
 tools/{libxc => libs/ctrl}/xc_cpupool.c       |   0
 tools/{libxc => libs/ctrl}/xc_csched.c        |   0
 tools/{libxc => libs/ctrl}/xc_csched2.c       |   0
 .../ctrl}/xc_devicemodel_compat.c             |   0
 tools/{libxc => libs/ctrl}/xc_domain.c        | 129 +-------------
 tools/{libxc => libs/ctrl}/xc_evtchn.c        |   0
 tools/{libxc => libs/ctrl}/xc_evtchn_compat.c |   0
 tools/{libxc => libs/ctrl}/xc_flask.c         |   0
 .../{libxc => libs/ctrl}/xc_foreign_memory.c  |   0
 tools/{libxc => libs/ctrl}/xc_freebsd.c       |   0
 tools/{libxc => libs/ctrl}/xc_gnttab.c        |   0
 tools/{libxc => libs/ctrl}/xc_gnttab_compat.c |   0
 tools/{libxc => libs/ctrl}/xc_hcall_buf.c     |   1 -
 tools/{libxc => libs/ctrl}/xc_kexec.c         |   0
 tools/{libxc => libs/ctrl}/xc_linux.c         |   0
 tools/{libxc => libs/ctrl}/xc_mem_access.c    |   0
 tools/{libxc => libs/ctrl}/xc_mem_paging.c    |   0
 tools/{libxc => libs/ctrl}/xc_memshr.c        |   0
 tools/{libxc => libs/ctrl}/xc_minios.c        |   0
 tools/{libxc => libs/ctrl}/xc_misc.c          |   0
 tools/{libxc => libs/ctrl}/xc_monitor.c       |   0
 tools/{libxc => libs/ctrl}/xc_msr_x86.h       |   0
 tools/{libxc => libs/ctrl}/xc_netbsd.c        |   0
 tools/{libxc => libs/ctrl}/xc_pagetab.c       |   0
 tools/{libxc => libs/ctrl}/xc_physdev.c       |   0
 tools/{libxc => libs/ctrl}/xc_pm.c            |   0
 tools/{libxc => libs/ctrl}/xc_private.c       |   3 +-
 tools/{libxc => libs/ctrl}/xc_private.h       |  36 ++++
 tools/{libxc => libs/ctrl}/xc_psr.c           |   0
 tools/{libxc => libs/ctrl}/xc_resource.c      |   0
 tools/{libxc => libs/ctrl}/xc_resume.c        |   2 -
 tools/{libxc => libs/ctrl}/xc_rt.c            |   0
 tools/{libxc => libs/ctrl}/xc_solaris.c       |   0
 tools/{libxc => libs/ctrl}/xc_tbuf.c          |   0
 tools/{libxc => libs/ctrl}/xc_vm_event.c      |   0
 .../ctrl/xenctrl.pc.in}                       |   0
 tools/libs/devicemodel/Makefile               |   3 +-
 tools/libs/evtchn/Makefile                    |   3 +-
 tools/libs/foreignmemory/Makefile             |   3 +-
 tools/libs/foreignmemory/linux.c              |   3 +-
 tools/libs/gnttab/Makefile                    |   3 +-
 tools/libs/gnttab/private.h                   |   3 -
 tools/libs/hypfs/Makefile                     |   3 +-
 tools/libs/libs.mk                            |  34 +++-
 .../{xenstat/libxenstat => libs/stat}/COPYING |   0
 .../libxenstat => libs/stat}/Makefile         |  97 +++--------
 .../stat}/bindings/swig/perl/.empty           |   0
 .../stat}/bindings/swig/python/.empty         |   0
 .../stat}/bindings/swig/xenstat.i             |   0
 .../src => libs/stat/include}/xenstat.h       |   3 +
 tools/libs/stat/libxenstat.map                |  54 ++++++
 .../libxenstat/src => libs/stat}/xenstat.c    |   0
 .../libxenstat => libs/stat}/xenstat.pc.in    |   2 +-
 .../src => libs/stat}/xenstat_freebsd.c       |   0
 .../src => libs/stat}/xenstat_linux.c         |   4 +-
 .../src => libs/stat}/xenstat_netbsd.c        |   0
 .../src => libs/stat}/xenstat_priv.h          |   0
 .../src => libs/stat}/xenstat_qmp.c           |   0
 .../src => libs/stat}/xenstat_solaris.c       |   0
 tools/libs/store/Makefile                     |  65 +++++++
 .../store}/include/compat/xs.h                |   0
 .../store}/include/compat/xs_lib.h            |   0
 .../store}/include/xenstore.h                 |   0
 tools/libs/store/libxenstore.map              |  49 ++++++
 tools/{xenstore => libs/store}/xenstore.pc.in |   0
 tools/{xenstore => libs/store}/xs.c           |   0
 tools/libs/toolcore/Makefile                  |   2 +-
 tools/libs/toollog/Makefile                   |   2 +-
 tools/libs/vchan/Makefile                     |  18 ++
 .../vchan/include}/libxenvchan.h              |   0
 tools/{libvchan => libs/vchan}/init.c         |   0
 tools/{libvchan => libs/vchan}/io.c           |   0
 tools/libs/vchan/libxenvchan.map              |  16 ++
 tools/{libvchan => libs/vchan}/xenvchan.pc.in |   0
 tools/libvchan/Makefile                       |  95 -----------
 tools/libxc/Makefile                          | 161 +++++-------------
 tools/libxc/include/xenguest.h                |   8 +-
 tools/libxc/xc_efi.h                          | 158 -----------------
 tools/libxc/xc_elf.h                          |  16 --
 .../libxc/{xc_cpuid_x86.c => xg_cpuid_x86.c}  |   0
 tools/libxc/{xc_dom_arm.c => xg_dom_arm.c}    |   2 +-
 ...imageloader.c => xg_dom_armzimageloader.c} |   2 +-
 ...{xc_dom_binloader.c => xg_dom_binloader.c} |   2 +-
 tools/libxc/{xc_dom_boot.c => xg_dom_boot.c}  |   2 +-
 ...bzimageloader.c => xg_dom_bzimageloader.c} |   2 +-
 ...m_compat_linux.c => xg_dom_compat_linux.c} |   2 +-
 tools/libxc/{xc_dom_core.c => xg_dom_core.c}  |   2 +-
 ...c_dom_decompress.h => xg_dom_decompress.h} |   4 +-
 ...compress_lz4.c => xg_dom_decompress_lz4.c} |   2 +-
 ...ss_unsafe.c => xg_dom_decompress_unsafe.c} |   2 +-
 ...ss_unsafe.h => xg_dom_decompress_unsafe.h} |   2 +-
 ...ip2.c => xg_dom_decompress_unsafe_bzip2.c} |   2 +-
 ...lzma.c => xg_dom_decompress_unsafe_lzma.c} |   2 +-
 ...o1x.c => xg_dom_decompress_unsafe_lzo1x.c} |   2 +-
 ...afe_xz.c => xg_dom_decompress_unsafe_xz.c} |   2 +-
 ...{xc_dom_elfloader.c => xg_dom_elfloader.c} |   2 +-
 ...{xc_dom_hvmloader.c => xg_dom_hvmloader.c} |   2 +-
 tools/libxc/{xc_dom_x86.c => xg_dom_x86.c}    |   2 +-
 tools/libxc/xg_domain.c                       | 149 ++++++++++++++++
 .../libxc/{xc_nomigrate.c => xg_nomigrate.c}  |   0
 .../{xc_offline_page.c => xg_offline_page.c}  |   2 +-
 tools/libxc/xg_private.h                      |  23 ---
 tools/libxc/xg_save_restore.h                 |  13 --
 .../libxc/{xc_sr_common.c => xg_sr_common.c}  |   2 +-
 .../libxc/{xc_sr_common.h => xg_sr_common.h}  |   4 +-
 ...{xc_sr_common_x86.c => xg_sr_common_x86.c} |   2 +-
 ...{xc_sr_common_x86.h => xg_sr_common_x86.h} |   2 +-
 ..._common_x86_pv.c => xg_sr_common_x86_pv.c} |   2 +-
 ..._common_x86_pv.h => xg_sr_common_x86_pv.h} |   2 +-
 .../{xc_sr_restore.c => xg_sr_restore.c}      |   2 +-
 ...tore_x86_hvm.c => xg_sr_restore_x86_hvm.c} |   2 +-
 ...estore_x86_pv.c => xg_sr_restore_x86_pv.c} |   2 +-
 tools/libxc/{xc_sr_save.c => xg_sr_save.c}    |   2 +-
 ...sr_save_x86_hvm.c => xg_sr_save_x86_hvm.c} |   2 +-
 ...c_sr_save_x86_pv.c => xg_sr_save_x86_pv.c} |   2 +-
 ..._stream_format.h => xg_sr_stream_format.h} |   0
 tools/libxc/{xc_suspend.c => xg_suspend.c}    |   0
 tools/libxl/Makefile                          |   2 +-
 tools/libxl/libxl_arm.c                       |   2 +-
 tools/libxl/libxl_arm.h                       |   2 +-
 tools/libxl/libxl_create.c                    |   2 +-
 tools/libxl/libxl_dm.c                        |   2 +-
 tools/libxl/libxl_dom.c                       |   2 +-
 tools/libxl/libxl_internal.h                  |   5 +-
 tools/libxl/libxl_vnuma.c                     |   2 +-
 tools/libxl/libxl_x86.c                       |   2 +-
 tools/libxl/libxl_x86_acpi.c                  |   2 +-
 tools/misc/Makefile                           |   5 +-
 tools/misc/xen-hptool.c                       |   8 +-
 tools/misc/xen-mfndump.c                      |  70 ++++----
 tools/python/Makefile                         |   2 +-
 tools/python/setup.py                         |  12 +-
 tools/python/xen/lowlevel/xc/xc.c             |   2 +-
 tools/vchan/Makefile                          |  37 ++++
 tools/{libvchan => vchan}/node-select.c       |   0
 tools/{libvchan => vchan}/node.c              |   0
 .../{libvchan => vchan}/vchan-socket-proxy.c  |   0
 tools/xcutils/readnotes.c                     |   2 +-
 tools/xenstat/Makefile                        |  10 --
 tools/xenstore/Makefile                       |  82 +--------
 tools/xenstore/{include => }/xenstore_lib.h   |   0
 tools/xenstore/xenstored_core.c               |   2 -
 tools/{xenstat => }/xentop/Makefile           |   2 +-
 tools/{xenstat => }/xentop/TODO               |   0
 tools/{xenstat => }/xentop/xentop.c           |   0
 174 files changed, 856 insertions(+), 986 deletions(-)
 create mode 100644 stubdom/mini-os.mk
 create mode 100644 tools/libs/ctrl/Makefile
 rename tools/{libxc => libs/ctrl}/include/xenctrl.h (100%)
 rename tools/{libxc => libs/ctrl}/include/xenctrl_compat.h (100%)
 rename tools/{libxc/include/xc_dom.h => libs/ctrl/include/xenctrl_dom.h} (98%)
 create mode 100644 tools/libs/ctrl/libxenctrl.map
 rename tools/{libxc => libs/ctrl}/xc_altp2m.c (100%)
 rename tools/{libxc => libs/ctrl}/xc_arinc653.c (100%)
 rename tools/{libxc => libs/ctrl}/xc_bitops.h (100%)
 rename tools/{libxc => libs/ctrl}/xc_core.c (99%)
 rename tools/{libxc => libs/ctrl}/xc_core.h (99%)
 rename tools/{libxc => libs/ctrl}/xc_core_arm.c (99%)
 rename tools/{libxc => libs/ctrl}/xc_core_arm.h (100%)
 rename tools/{libxc => libs/ctrl}/xc_core_x86.c (98%)
 rename tools/{libxc => libs/ctrl}/xc_core_x86.h (100%)
 rename tools/{libxc => libs/ctrl}/xc_cpu_hotplug.c (100%)
 rename tools/{libxc => libs/ctrl}/xc_cpupool.c (100%)
 rename tools/{libxc => libs/ctrl}/xc_csched.c (100%)
 rename tools/{libxc => libs/ctrl}/xc_csched2.c (100%)
 rename tools/{libxc => libs/ctrl}/xc_devicemodel_compat.c (100%)
 rename tools/{libxc => libs/ctrl}/xc_domain.c (94%)
 rename tools/{libxc => libs/ctrl}/xc_evtchn.c (100%)
 rename tools/{libxc => libs/ctrl}/xc_evtchn_compat.c (100%)
 rename tools/{libxc => libs/ctrl}/xc_flask.c (100%)
 rename tools/{libxc => libs/ctrl}/xc_foreign_memory.c (100%)
 rename tools/{libxc => libs/ctrl}/xc_freebsd.c (100%)
 rename tools/{libxc => libs/ctrl}/xc_gnttab.c (100%)
 rename tools/{libxc => libs/ctrl}/xc_gnttab_compat.c (100%)
 rename tools/{libxc => libs/ctrl}/xc_hcall_buf.c (99%)
 rename tools/{libxc => libs/ctrl}/xc_kexec.c (100%)
 rename tools/{libxc => libs/ctrl}/xc_linux.c (100%)
 rename tools/{libxc => libs/ctrl}/xc_mem_access.c (100%)
 rename tools/{libxc => libs/ctrl}/xc_mem_paging.c (100%)
 rename tools/{libxc => libs/ctrl}/xc_memshr.c (100%)
 rename tools/{libxc => libs/ctrl}/xc_minios.c (100%)
 rename tools/{libxc => libs/ctrl}/xc_misc.c (100%)
 rename tools/{libxc => libs/ctrl}/xc_monitor.c (100%)
 rename tools/{libxc => libs/ctrl}/xc_msr_x86.h (100%)
 rename tools/{libxc => libs/ctrl}/xc_netbsd.c (100%)
 rename tools/{libxc => libs/ctrl}/xc_pagetab.c (100%)
 rename tools/{libxc => libs/ctrl}/xc_physdev.c (100%)
 rename tools/{libxc => libs/ctrl}/xc_pm.c (100%)
 rename tools/{libxc => libs/ctrl}/xc_private.c (99%)
 rename tools/{libxc => libs/ctrl}/xc_private.h (91%)
 rename tools/{libxc => libs/ctrl}/xc_psr.c (100%)
 rename tools/{libxc => libs/ctrl}/xc_resource.c (100%)
 rename tools/{libxc => libs/ctrl}/xc_resume.c (99%)
 rename tools/{libxc => libs/ctrl}/xc_rt.c (100%)
 rename tools/{libxc => libs/ctrl}/xc_solaris.c (100%)
 rename tools/{libxc => libs/ctrl}/xc_tbuf.c (100%)
 rename tools/{libxc => libs/ctrl}/xc_vm_event.c (100%)
 rename tools/{libxc/xencontrol.pc.in => libs/ctrl/xenctrl.pc.in} (100%)
 rename tools/{xenstat/libxenstat => libs/stat}/COPYING (100%)
 rename tools/{xenstat/libxenstat => libs/stat}/Makefile (56%)
 rename tools/{xenstat/libxenstat => libs/stat}/bindings/swig/perl/.empty (100%)
 rename tools/{xenstat/libxenstat => libs/stat}/bindings/swig/python/.empty (100%)
 rename tools/{xenstat/libxenstat => libs/stat}/bindings/swig/xenstat.i (100%)
 rename tools/{xenstat/libxenstat/src => libs/stat/include}/xenstat.h (98%)
 create mode 100644 tools/libs/stat/libxenstat.map
 rename tools/{xenstat/libxenstat/src => libs/stat}/xenstat.c (100%)
 rename tools/{xenstat/libxenstat => libs/stat}/xenstat.pc.in (82%)
 rename tools/{xenstat/libxenstat/src => libs/stat}/xenstat_freebsd.c (100%)
 rename tools/{xenstat/libxenstat/src => libs/stat}/xenstat_linux.c (98%)
 rename tools/{xenstat/libxenstat/src => libs/stat}/xenstat_netbsd.c (100%)
 rename tools/{xenstat/libxenstat/src => libs/stat}/xenstat_priv.h (100%)
 rename tools/{xenstat/libxenstat/src => libs/stat}/xenstat_qmp.c (100%)
 rename tools/{xenstat/libxenstat/src => libs/stat}/xenstat_solaris.c (100%)
 create mode 100644 tools/libs/store/Makefile
 rename tools/{xenstore => libs/store}/include/compat/xs.h (100%)
 rename tools/{xenstore => libs/store}/include/compat/xs_lib.h (100%)
 rename tools/{xenstore => libs/store}/include/xenstore.h (100%)
 create mode 100644 tools/libs/store/libxenstore.map
 rename tools/{xenstore => libs/store}/xenstore.pc.in (100%)
 rename tools/{xenstore => libs/store}/xs.c (100%)
 create mode 100644 tools/libs/vchan/Makefile
 rename tools/{libvchan => libs/vchan/include}/libxenvchan.h (100%)
 rename tools/{libvchan => libs/vchan}/init.c (100%)
 rename tools/{libvchan => libs/vchan}/io.c (100%)
 create mode 100644 tools/libs/vchan/libxenvchan.map
 rename tools/{libvchan => libs/vchan}/xenvchan.pc.in (100%)
 delete mode 100644 tools/libvchan/Makefile
 delete mode 100644 tools/libxc/xc_efi.h
 delete mode 100644 tools/libxc/xc_elf.h
 rename tools/libxc/{xc_cpuid_x86.c => xg_cpuid_x86.c} (100%)
 rename tools/libxc/{xc_dom_arm.c => xg_dom_arm.c} (99%)
 rename tools/libxc/{xc_dom_armzimageloader.c => xg_dom_armzimageloader.c} (99%)
 rename tools/libxc/{xc_dom_binloader.c => xg_dom_binloader.c} (99%)
 rename tools/libxc/{xc_dom_boot.c => xg_dom_boot.c} (99%)
 rename tools/libxc/{xc_dom_bzimageloader.c => xg_dom_bzimageloader.c} (99%)
 rename tools/libxc/{xc_dom_compat_linux.c => xg_dom_compat_linux.c} (99%)
 rename tools/libxc/{xc_dom_core.c => xg_dom_core.c} (99%)
 rename tools/libxc/{xc_dom_decompress.h => xg_dom_decompress.h} (62%)
 rename tools/libxc/{xc_dom_decompress_lz4.c => xg_dom_decompress_lz4.c} (98%)
 rename tools/libxc/{xc_dom_decompress_unsafe.c => xg_dom_decompress_unsafe.c} (96%)
 rename tools/libxc/{xc_dom_decompress_unsafe.h => xg_dom_decompress_unsafe.h} (97%)
 rename tools/libxc/{xc_dom_decompress_unsafe_bzip2.c => xg_dom_decompress_unsafe_bzip2.c} (87%)
 rename tools/libxc/{xc_dom_decompress_unsafe_lzma.c => xg_dom_decompress_unsafe_lzma.c} (87%)
 rename tools/libxc/{xc_dom_decompress_unsafe_lzo1x.c => xg_dom_decompress_unsafe_lzo1x.c} (96%)
 rename tools/libxc/{xc_dom_decompress_unsafe_xz.c => xg_dom_decompress_unsafe_xz.c} (95%)
 rename tools/libxc/{xc_dom_elfloader.c => xg_dom_elfloader.c} (99%)
 rename tools/libxc/{xc_dom_hvmloader.c => xg_dom_hvmloader.c} (99%)
 rename tools/libxc/{xc_dom_x86.c => xg_dom_x86.c} (99%)
 create mode 100644 tools/libxc/xg_domain.c
 rename tools/libxc/{xc_nomigrate.c => xg_nomigrate.c} (100%)
 rename tools/libxc/{xc_offline_page.c => xg_offline_page.c} (99%)
 rename tools/libxc/{xc_sr_common.c => xg_sr_common.c} (99%)
 rename tools/libxc/{xc_sr_common.h => xg_sr_common.h} (99%)
 rename tools/libxc/{xc_sr_common_x86.c => xg_sr_common_x86.c} (99%)
 rename tools/libxc/{xc_sr_common_x86.h => xg_sr_common_x86.h} (98%)
 rename tools/libxc/{xc_sr_common_x86_pv.c => xg_sr_common_x86_pv.c} (99%)
 rename tools/libxc/{xc_sr_common_x86_pv.h => xg_sr_common_x86_pv.h} (98%)
 rename tools/libxc/{xc_sr_restore.c => xg_sr_restore.c} (99%)
 rename tools/libxc/{xc_sr_restore_x86_hvm.c => xg_sr_restore_x86_hvm.c} (99%)
 rename tools/libxc/{xc_sr_restore_x86_pv.c => xg_sr_restore_x86_pv.c} (99%)
 rename tools/libxc/{xc_sr_save.c => xg_sr_save.c} (99%)
 rename tools/libxc/{xc_sr_save_x86_hvm.c => xg_sr_save_x86_hvm.c} (99%)
 rename tools/libxc/{xc_sr_save_x86_pv.c => xg_sr_save_x86_pv.c} (99%)
 rename tools/libxc/{xc_sr_stream_format.h => xg_sr_stream_format.h} (100%)
 rename tools/libxc/{xc_suspend.c => xg_suspend.c} (100%)
 create mode 100644 tools/vchan/Makefile
 rename tools/{libvchan => vchan}/node-select.c (100%)
 rename tools/{libvchan => vchan}/node.c (100%)
 rename tools/{libvchan => vchan}/vchan-socket-proxy.c (100%)
 delete mode 100644 tools/xenstat/Makefile
 rename tools/xenstore/{include => }/xenstore_lib.h (100%)
 rename tools/{xenstat => }/xentop/Makefile (97%)
 rename tools/{xenstat => }/xentop/TODO (100%)
 rename tools/{xenstat => }/xentop/xentop.c (100%)

-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Wed Jul 15 16:25:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jul 2020 16:25:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvkDy-0002ZO-LS; Wed, 15 Jul 2020 16:25:38 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=o5qj=A2=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jvkDx-0002Yi-9z
 for xen-devel@lists.xenproject.org; Wed, 15 Jul 2020 16:25:37 +0000
X-Inumbo-ID: cfe5c24a-c6b7-11ea-bca7-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id cfe5c24a-c6b7-11ea-bca7-bc764e2007e4;
 Wed, 15 Jul 2020 16:25:36 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jvkDv-0001sU-91; Wed, 15 Jul 2020 17:25:35 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH 1/1] docs/process/branching-checklist: Get osstest branch right
Date: Wed, 15 Jul 2020 17:24:59 +0100
Message-Id: <20200715162511.5941-2-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200715162511.5941-1-ian.jackson@eu.citrix.com>
References: <20200715162511.5941-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Reply-To: incoming+61544a64d0c2dc4555813e58f3810dd7@incoming.gitlab.com,
 xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 ian.jackson@eu.citrix.com, George Dunlap <george.dunlap@citrix.com>,
 Jan Beulich <jbeulich@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

The runes for this manual osstest were wrong.  It needs to run as
osstest, and cr-for-branches should be run from testing.git.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 docs/process/branching-checklist.txt | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/process/branching-checklist.txt b/docs/process/branching-checklist.txt
index e286e65962..0e83272caa 100644
--- a/docs/process/branching-checklist.txt
+++ b/docs/process/branching-checklist.txt
@@ -86,8 +86,8 @@ including turning off debug.
 
 Set off a manual osstest run, since the osstest cr-for-branches change
 will take a while to take effect:
-  ssh osstest.test-lab
-  cd branches/for-xen-$v-testing.git
+  ssh osstest@osstest.test-lab
+  cd testing.git
   screen -S $v
   BRANCHES=xen-$v-testing ./cr-for-branches branches -w "./cr-daily-branch --real"
 
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Wed Jul 15 16:25:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jul 2020 16:25:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvkE8-0002b5-CA; Wed, 15 Jul 2020 16:25:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=o5qj=A2=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jvkE7-0002Yi-AH
 for xen-devel@lists.xenproject.org; Wed, 15 Jul 2020 16:25:47 +0000
X-Inumbo-ID: d073c1b2-c6b7-11ea-b7bb-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d073c1b2-c6b7-11ea-b7bb-bc764e2007e4;
 Wed, 15 Jul 2020 16:25:37 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jvkDw-0001sU-8D; Wed, 15 Jul 2020 17:25:36 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH 03/12] tools: add a copy of library headers in tools/include
Date: Wed, 15 Jul 2020 17:25:02 +0100
Message-Id: <20200715162511.5941-5-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200715162511.5941-1-ian.jackson@eu.citrix.com>
References: <20200715162511.5941-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Reply-To: incoming+61544a64d0c2dc4555813e58f3810dd7@incoming.gitlab.com,
 xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 ian.jackson@eu.citrix.com, George Dunlap <george.dunlap@citrix.com>,
 Jan Beulich <jbeulich@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Juergen Gross <jgross@suse.com>

The headers.chk target in tools/Rules.mk compiles each header
standalone in order to verify that it does not include any internal
header.

Unfortunately the set of headers tested against is incomplete: the
public headers of the other Xen libraries are not on the include path
of the test compile run, so the check fails whenever a tested header
includes an official Xen library header.

Fix that by copying the official headers located in
tools/libs/*/include to tools/include.

In order to support libraries whose header is not named xen<lib>.h, or
which install multiple headers, add a LIBHEADER make variable which a
library-specific Makefile can set in those cases.
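
For illustration only (the library name "foo" and its header names are
invented for this sketch, not taken from the tree), a library installing
more than one public header could then set, in its own Makefile:

```make
# Hypothetical tools/libs/foo/Makefile fragment: this library ships two
# public headers, so it overrides the default of xen$(LIBNAME).h.
LIBNAME := foo
LIBHEADER := xenfoo.h xenfoo_compat.h

include $(XEN_ROOT)/tools/libs/libs.mk
```

Libraries with the default single xen<lib>.h header need not set
LIBHEADER at all.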

Move the headers.chk target from Rules.mk to libs.mk as it is used
for libraries in tools/libs only.

Add a NO_HEADERS_CHK variable to skip the header check, as this will
be needed e.g. for libxenctrl.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 .gitignore         |  1 +
 tools/Rules.mk     |  8 --------
 tools/libs/libs.mk | 26 +++++++++++++++++++++++---
 3 files changed, 24 insertions(+), 11 deletions(-)

diff --git a/.gitignore b/.gitignore
index 36ce2ea104..5ea48af818 100644
--- a/.gitignore
+++ b/.gitignore
@@ -188,6 +188,7 @@ tools/hotplug/Linux/xendomains
 tools/hotplug/NetBSD/rc.d/xencommons
 tools/hotplug/NetBSD/rc.d/xendriverdomain
 tools/include/acpi
+tools/include/*.h
 tools/include/xen/*
 tools/include/xen-xsm/*
 tools/include/xen-foreign/*.(c|h|size)
diff --git a/tools/Rules.mk b/tools/Rules.mk
index b42e50ebf6..5d699cfd39 100644
--- a/tools/Rules.mk
+++ b/tools/Rules.mk
@@ -225,14 +225,6 @@ INSTALL_PYTHON_PROG = \
 %.opic: %.S
 	$(CC) $(CPPFLAGS) -DPIC $(CFLAGS) $(CFLAGS.opic) -fPIC -c -o $@ $< $(APPEND_CFLAGS)
 
-headers.chk:
-	for i in $(filter %.h,$^); do \
-	    $(CC) -x c -ansi -Wall -Werror $(CFLAGS_xeninclude) \
-	          -S -o /dev/null $$i || exit 1; \
-	    echo $$i; \
-	done >$@.new
-	mv $@.new $@
-
 subdirs-all subdirs-clean subdirs-install subdirs-distclean subdirs-uninstall: .phony
 	@set -e; for subdir in $(SUBDIRS) $(SUBDIRS-y); do \
 		$(MAKE) subdir-$(patsubst subdirs-%,%,$@)-$$subdir; \
diff --git a/tools/libs/libs.mk b/tools/libs/libs.mk
index 8027ae7400..8045c00e9a 100644
--- a/tools/libs/libs.mk
+++ b/tools/libs/libs.mk
@@ -34,6 +34,10 @@ endif
 
 PKG_CONFIG_LOCAL := $(foreach pc,$(PKG_CONFIG),$(PKG_CONFIG_DIR)/$(pc))
 
+LIBHEADER ?= xen$(LIBNAME).h
+LIBHEADERS = $(foreach h, $(LIBHEADER), include/$(h))
+LIBHEADERSGLOB = $(foreach h, $(LIBHEADER), $(XEN_ROOT)/tools/include/$(h))
+
 $(PKG_CONFIG_LOCAL): PKG_CONFIG_PREFIX = $(XEN_ROOT)
 $(PKG_CONFIG_LOCAL): PKG_CONFIG_LIBDIR = $(CURDIR)
 
@@ -47,7 +51,22 @@ build:
 .PHONY: libs
 libs: headers.chk $(LIB) $(PKG_CONFIG_INST) $(PKG_CONFIG_LOCAL)
 
-headers.chk: $(wildcard include/*.h) $(AUTOINCS)
+ifneq ($(NO_HEADERS_CHK),y)
+headers.chk:
+	for i in $(filter %.h,$^); do \
+	    $(CC) -x c -ansi -Wall -Werror $(CFLAGS_xeninclude) \
+	          -S -o /dev/null $$i || exit 1; \
+	    echo $$i; \
+	done >$@.new
+	mv $@.new $@
+else
+.PHONY: headers.chk
+endif
+
+headers.chk: $(LIBHEADERSGLOB) $(AUTOINCS)
+
+$(LIBHEADERSGLOB): $(LIBHEADERS)
+	for i in $(realpath $(LIBHEADERS)); do ln -sf $$i $(XEN_ROOT)/tools/include; done
 
 libxen$(LIBNAME).a: $(LIB_OBJS)
 	$(AR) rc $@ $^
@@ -68,13 +87,13 @@ install: build
 	$(INSTALL_DATA) libxen$(LIBNAME).a $(DESTDIR)$(libdir)
 	$(SYMLINK_SHLIB) libxen$(LIBNAME).so.$(MAJOR).$(MINOR) $(DESTDIR)$(libdir)/libxen$(LIBNAME).so.$(MAJOR)
 	$(SYMLINK_SHLIB) libxen$(LIBNAME).so.$(MAJOR) $(DESTDIR)$(libdir)/libxen$(LIBNAME).so
-	$(INSTALL_DATA) include/xen$(LIBNAME).h $(DESTDIR)$(includedir)
+	for i in $(LIBHEADERS); do $(INSTALL_DATA) $$i $(DESTDIR)$(includedir); done
 	$(INSTALL_DATA) xen$(LIBNAME).pc $(DESTDIR)$(PKG_INSTALLDIR)
 
 .PHONY: uninstall
 uninstall:
 	rm -f $(DESTDIR)$(PKG_INSTALLDIR)/xen$(LIBNAME).pc
-	rm -f $(DESTDIR)$(includedir)/xen$(LIBNAME).h
+	for i in $(LIBHEADER); do rm -f $(DESTDIR)$(includedir)/$$i; done
 	rm -f $(DESTDIR)$(libdir)/libxen$(LIBNAME).so
 	rm -f $(DESTDIR)$(libdir)/libxen$(LIBNAME).so.$(MAJOR)
 	rm -f $(DESTDIR)$(libdir)/libxen$(LIBNAME).so.$(MAJOR).$(MINOR)
@@ -90,6 +109,7 @@ clean:
 	rm -f libxen$(LIBNAME).so.$(MAJOR).$(MINOR) libxen$(LIBNAME).so.$(MAJOR)
 	rm -f headers.chk
 	rm -f xen$(LIBNAME).pc
+	rm -f $(LIBHEADERSGLOB)
 
 .PHONY: distclean
 distclean: clean
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Wed Jul 15 16:25:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jul 2020 16:25:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvkED-0002d9-Mj; Wed, 15 Jul 2020 16:25:53 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=o5qj=A2=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jvkEC-0002Yi-AI
 for xen-devel@lists.xenproject.org; Wed, 15 Jul 2020 16:25:52 +0000
X-Inumbo-ID: d0395734-c6b7-11ea-b7bb-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d0395734-c6b7-11ea-b7bb-bc764e2007e4;
 Wed, 15 Jul 2020 16:25:36 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jvkDv-0001sU-SJ; Wed, 15 Jul 2020 17:25:35 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH 02/12] tools: switch XEN_LIBXEN* make variables to lower case
 (XEN_libxen*)
Date: Wed, 15 Jul 2020 17:25:01 +0100
Message-Id: <20200715162511.5941-4-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200715162511.5941-1-ian.jackson@eu.citrix.com>
References: <20200715162511.5941-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Reply-To: incoming+61544a64d0c2dc4555813e58f3810dd7@incoming.gitlab.com,
 xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>, ian.jackson@eu.citrix.com,
 George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Juergen Gross <jgross@suse.com>

In order to harmonize the names of library-related make variables,
switch the XEN_LIBXEN* names to XEN_libxen*, as all other related
variables (e.g. CFLAGS_libxen*, SHDEPS_libxen*, ...) already use this
pattern.

Rename XEN_LIBXC to XEN_libxenctrl, XEN_XENSTORE to XEN_libxenstore,
XEN_XENLIGHT to XEN_libxenlight, XEN_XLUTIL to XEN_libxlutil, and
XEN_LIBVCHAN to XEN_libxenvchan for the same reason.

Introduce XEN_libxenguest with the same value as XEN_libxenctrl.

No functional change.
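
Consumers are unaffected by the rename. A hypothetical tool Makefile
(sketch, not a file from the tree) keeps using only the derived
per-library variables, whose names now line up with the XEN_libxen*
path variables they are built from:

```make
# Sketch of a consumer: only the XEN_* path variables were renamed,
# so the derived CFLAGS_/LDLIBS_ usage is unchanged.
CFLAGS += $(CFLAGS_libxenctrl)
LDLIBS += $(LDLIBS_libxenctrl)
```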

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 tools/Rules.mk                    | 120 +++++++++++++++---------------
 tools/golang/xenlight/Makefile    |   4 +-
 tools/libs/call/Makefile          |   2 +-
 tools/libs/devicemodel/Makefile   |   2 +-
 tools/libs/evtchn/Makefile        |   2 +-
 tools/libs/foreignmemory/Makefile |   2 +-
 tools/libs/gnttab/Makefile        |   2 +-
 tools/libs/hypfs/Makefile         |   2 +-
 tools/libs/toolcore/Makefile      |   2 +-
 tools/libs/toollog/Makefile       |   2 +-
 tools/libvchan/Makefile           |   2 +-
 tools/libxc/Makefile              |   2 +-
 tools/xenstat/libxenstat/Makefile |   2 +-
 tools/xenstore/Makefile           |   2 +-
 14 files changed, 75 insertions(+), 73 deletions(-)

diff --git a/tools/Rules.mk b/tools/Rules.mk
index 5ed5664bf7..b42e50ebf6 100644
--- a/tools/Rules.mk
+++ b/tools/Rules.mk
@@ -12,21 +12,23 @@ INSTALL = $(XEN_ROOT)/tools/cross-install
 LDFLAGS += $(PREPEND_LDFLAGS_XEN_TOOLS)
 
 XEN_INCLUDE        = $(XEN_ROOT)/tools/include
-XEN_LIBXENTOOLCORE  = $(XEN_ROOT)/tools/libs/toolcore
-XEN_LIBXENTOOLLOG  = $(XEN_ROOT)/tools/libs/toollog
-XEN_LIBXENEVTCHN   = $(XEN_ROOT)/tools/libs/evtchn
-XEN_LIBXENGNTTAB   = $(XEN_ROOT)/tools/libs/gnttab
-XEN_LIBXENCALL     = $(XEN_ROOT)/tools/libs/call
-XEN_LIBXENFOREIGNMEMORY = $(XEN_ROOT)/tools/libs/foreignmemory
-XEN_LIBXENDEVICEMODEL = $(XEN_ROOT)/tools/libs/devicemodel
-XEN_LIBXENHYPFS    = $(XEN_ROOT)/tools/libs/hypfs
-XEN_LIBXC          = $(XEN_ROOT)/tools/libxc
-XEN_XENLIGHT       = $(XEN_ROOT)/tools/libxl
+XEN_libxentoolcore = $(XEN_ROOT)/tools/libs/toolcore
+XEN_libxentoollog  = $(XEN_ROOT)/tools/libs/toollog
+XEN_libxenevtchn   = $(XEN_ROOT)/tools/libs/evtchn
+XEN_libxengnttab   = $(XEN_ROOT)/tools/libs/gnttab
+XEN_libxencall     = $(XEN_ROOT)/tools/libs/call
+XEN_libxenforeignmemory = $(XEN_ROOT)/tools/libs/foreignmemory
+XEN_libxendevicemodel = $(XEN_ROOT)/tools/libs/devicemodel
+XEN_libxenhypfs    = $(XEN_ROOT)/tools/libs/hypfs
+XEN_libxenctrl     = $(XEN_ROOT)/tools/libxc
+# Currently libxenguest lives in the same directory as libxenctrl
+XEN_libxenguest    = $(XEN_libxenctrl)
+XEN_libxenlight    = $(XEN_ROOT)/tools/libxl
 # Currently libxlutil lives in the same directory as libxenlight
-XEN_XLUTIL         = $(XEN_XENLIGHT)
-XEN_XENSTORE       = $(XEN_ROOT)/tools/xenstore
-XEN_LIBXENSTAT     = $(XEN_ROOT)/tools/xenstat/libxenstat/src
-XEN_LIBVCHAN       = $(XEN_ROOT)/tools/libvchan
+XEN_libxlutil      = $(XEN_libxenlight)
+XEN_libxenstore    = $(XEN_ROOT)/tools/xenstore
+XEN_libxenstat     = $(XEN_ROOT)/tools/xenstat/libxenstat/src
+XEN_libxenvchan    = $(XEN_ROOT)/tools/libvchan
 
 CFLAGS_xeninclude = -I$(XEN_INCLUDE)
 
@@ -97,75 +99,75 @@ endif
 # Consumers of libfoo should not directly use $(SHDEPS_libfoo) or
 # $(SHLIB_libfoo)
 
-CFLAGS_libxentoollog = -I$(XEN_LIBXENTOOLLOG)/include $(CFLAGS_xeninclude)
+CFLAGS_libxentoollog = -I$(XEN_libxentoollog)/include $(CFLAGS_xeninclude)
 SHDEPS_libxentoollog =
-LDLIBS_libxentoollog = $(SHDEPS_libxentoollog) $(XEN_LIBXENTOOLLOG)/libxentoollog$(libextension)
-SHLIB_libxentoollog  = $(SHDEPS_libxentoollog) -Wl,-rpath-link=$(XEN_LIBXENTOOLLOG)
+LDLIBS_libxentoollog = $(SHDEPS_libxentoollog) $(XEN_libxentoollog)/libxentoollog$(libextension)
+SHLIB_libxentoollog  = $(SHDEPS_libxentoollog) -Wl,-rpath-link=$(XEN_libxentoollog)
 
-CFLAGS_libxentoolcore = -I$(XEN_LIBXENTOOLCORE)/include $(CFLAGS_xeninclude)
+CFLAGS_libxentoolcore = -I$(XEN_libxentoolcore)/include $(CFLAGS_xeninclude)
 SHDEPS_libxentoolcore =
-LDLIBS_libxentoolcore = $(SHDEPS_libxentoolcore) $(XEN_LIBXENTOOLCORE)/libxentoolcore$(libextension)
-SHLIB_libxentoolcore  = $(SHDEPS_libxentoolcore) -Wl,-rpath-link=$(XEN_LIBXENTOOLCORE)
+LDLIBS_libxentoolcore = $(SHDEPS_libxentoolcore) $(XEN_libxentoolcore)/libxentoolcore$(libextension)
+SHLIB_libxentoolcore  = $(SHDEPS_libxentoolcore) -Wl,-rpath-link=$(XEN_libxentoolcore)
 
-CFLAGS_libxenevtchn = -I$(XEN_LIBXENEVTCHN)/include $(CFLAGS_xeninclude)
+CFLAGS_libxenevtchn = -I$(XEN_libxenevtchn)/include $(CFLAGS_xeninclude)
 SHDEPS_libxenevtchn = $(SHLIB_libxentoolcore)
-LDLIBS_libxenevtchn = $(SHDEPS_libxenevtchn) $(XEN_LIBXENEVTCHN)/libxenevtchn$(libextension)
-SHLIB_libxenevtchn  = $(SHDEPS_libxenevtchn) -Wl,-rpath-link=$(XEN_LIBXENEVTCHN)
+LDLIBS_libxenevtchn = $(SHDEPS_libxenevtchn) $(XEN_libxenevtchn)/libxenevtchn$(libextension)
+SHLIB_libxenevtchn  = $(SHDEPS_libxenevtchn) -Wl,-rpath-link=$(XEN_libxenevtchn)
 
-CFLAGS_libxengnttab = -I$(XEN_LIBXENGNTTAB)/include $(CFLAGS_xeninclude)
+CFLAGS_libxengnttab = -I$(XEN_libxengnttab)/include $(CFLAGS_xeninclude)
 SHDEPS_libxengnttab = $(SHLIB_libxentoollog) $(SHLIB_libxentoolcore)
-LDLIBS_libxengnttab = $(SHDEPS_libxengnttab) $(XEN_LIBXENGNTTAB)/libxengnttab$(libextension)
-SHLIB_libxengnttab  = $(SHDEPS_libxengnttab) -Wl,-rpath-link=$(XEN_LIBXENGNTTAB)
+LDLIBS_libxengnttab = $(SHDEPS_libxengnttab) $(XEN_libxengnttab)/libxengnttab$(libextension)
+SHLIB_libxengnttab  = $(SHDEPS_libxengnttab) -Wl,-rpath-link=$(XEN_libxengnttab)
 
-CFLAGS_libxencall = -I$(XEN_LIBXENCALL)/include $(CFLAGS_xeninclude)
+CFLAGS_libxencall = -I$(XEN_libxencall)/include $(CFLAGS_xeninclude)
 SHDEPS_libxencall = $(SHLIB_libxentoolcore)
-LDLIBS_libxencall = $(SHDEPS_libxencall) $(XEN_LIBXENCALL)/libxencall$(libextension)
-SHLIB_libxencall  = $(SHDEPS_libxencall) -Wl,-rpath-link=$(XEN_LIBXENCALL)
+LDLIBS_libxencall = $(SHDEPS_libxencall) $(XEN_libxencall)/libxencall$(libextension)
+SHLIB_libxencall  = $(SHDEPS_libxencall) -Wl,-rpath-link=$(XEN_libxencall)
 
-CFLAGS_libxenforeignmemory = -I$(XEN_LIBXENFOREIGNMEMORY)/include $(CFLAGS_xeninclude)
+CFLAGS_libxenforeignmemory = -I$(XEN_libxenforeignmemory)/include $(CFLAGS_xeninclude)
 SHDEPS_libxenforeignmemory = $(SHLIB_libxentoolcore)
-LDLIBS_libxenforeignmemory = $(SHDEPS_libxenforeignmemory) $(XEN_LIBXENFOREIGNMEMORY)/libxenforeignmemory$(libextension)
-SHLIB_libxenforeignmemory  = $(SHDEPS_libxenforeignmemory) -Wl,-rpath-link=$(XEN_LIBXENFOREIGNMEMORY)
+LDLIBS_libxenforeignmemory = $(SHDEPS_libxenforeignmemory) $(XEN_libxenforeignmemory)/libxenforeignmemory$(libextension)
+SHLIB_libxenforeignmemory  = $(SHDEPS_libxenforeignmemory) -Wl,-rpath-link=$(XEN_libxenforeignmemory)
 
-CFLAGS_libxendevicemodel = -I$(XEN_LIBXENDEVICEMODEL)/include $(CFLAGS_xeninclude)
+CFLAGS_libxendevicemodel = -I$(XEN_libxendevicemodel)/include $(CFLAGS_xeninclude)
 SHDEPS_libxendevicemodel = $(SHLIB_libxentoollog) $(SHLIB_libxentoolcore) $(SHLIB_libxencall)
-LDLIBS_libxendevicemodel = $(SHDEPS_libxendevicemodel) $(XEN_LIBXENDEVICEMODEL)/libxendevicemodel$(libextension)
-SHLIB_libxendevicemodel  = $(SHDEPS_libxendevicemodel) -Wl,-rpath-link=$(XEN_LIBXENDEVICEMODEL)
+LDLIBS_libxendevicemodel = $(SHDEPS_libxendevicemodel) $(XEN_libxendevicemodel)/libxendevicemodel$(libextension)
+SHLIB_libxendevicemodel  = $(SHDEPS_libxendevicemodel) -Wl,-rpath-link=$(XEN_libxendevicemodel)
 
-CFLAGS_libxenhypfs = -I$(XEN_LIBXENHYPFS)/include $(CFLAGS_xeninclude)
+CFLAGS_libxenhypfs = -I$(XEN_libxenhypfs)/include $(CFLAGS_xeninclude)
 SHDEPS_libxenhypfs = $(SHLIB_libxentoollog) $(SHLIB_libxentoolcore) $(SHLIB_libxencall)
-LDLIBS_libxenhypfs = $(SHDEPS_libxenhypfs) $(XEN_LIBXENHYPFS)/libxenhypfs$(libextension)
-SHLIB_libxenhypfs  = $(SHDEPS_libxenhypfs) -Wl,-rpath-link=$(XEN_LIBXENHYPFS)
+LDLIBS_libxenhypfs = $(SHDEPS_libxenhypfs) $(XEN_libxenhypfs)/libxenhypfs$(libextension)
+SHLIB_libxenhypfs  = $(SHDEPS_libxenhypfs) -Wl,-rpath-link=$(XEN_libxenhypfs)
 
 # code which compiles against libxenctrl get __XEN_TOOLS__ and
 # therefore sees the unstable hypercall interfaces.
-CFLAGS_libxenctrl = -I$(XEN_LIBXC)/include $(CFLAGS_libxentoollog) $(CFLAGS_libxenforeignmemory) $(CFLAGS_libxendevicemodel) $(CFLAGS_xeninclude) -D__XEN_TOOLS__
+CFLAGS_libxenctrl = -I$(XEN_libxenctrl)/include $(CFLAGS_libxentoollog) $(CFLAGS_libxenforeignmemory) $(CFLAGS_libxendevicemodel) $(CFLAGS_xeninclude) -D__XEN_TOOLS__
 SHDEPS_libxenctrl = $(SHLIB_libxentoollog) $(SHLIB_libxenevtchn) $(SHLIB_libxengnttab) $(SHLIB_libxencall) $(SHLIB_libxenforeignmemory) $(SHLIB_libxendevicemodel)
-LDLIBS_libxenctrl = $(SHDEPS_libxenctrl) $(XEN_LIBXC)/libxenctrl$(libextension)
-SHLIB_libxenctrl  = $(SHDEPS_libxenctrl) -Wl,-rpath-link=$(XEN_LIBXC)
+LDLIBS_libxenctrl = $(SHDEPS_libxenctrl) $(XEN_libxenctrl)/libxenctrl$(libextension)
+SHLIB_libxenctrl  = $(SHDEPS_libxenctrl) -Wl,-rpath-link=$(XEN_libxenctrl)
 
-CFLAGS_libxenguest = -I$(XEN_LIBXC)/include $(CFLAGS_libxenevtchn) $(CFLAGS_libxenforeignmemory) $(CFLAGS_xeninclude)
+CFLAGS_libxenguest = -I$(XEN_libxenguest)/include $(CFLAGS_libxenevtchn) $(CFLAGS_libxenforeignmemory) $(CFLAGS_xeninclude)
 SHDEPS_libxenguest = $(SHLIB_libxenevtchn)
-LDLIBS_libxenguest = $(SHDEPS_libxenguest) $(XEN_LIBXC)/libxenguest$(libextension)
-SHLIB_libxenguest  = $(SHDEPS_libxenguest) -Wl,-rpath-link=$(XEN_LIBXC)
+LDLIBS_libxenguest = $(SHDEPS_libxenguest) $(XEN_libxenguest)/libxenguest$(libextension)
+SHLIB_libxenguest  = $(SHDEPS_libxenguest) -Wl,-rpath-link=$(XEN_libxenguest)
 
-CFLAGS_libxenstore = -I$(XEN_XENSTORE)/include $(CFLAGS_xeninclude)
+CFLAGS_libxenstore = -I$(XEN_libxenstore)/include $(CFLAGS_xeninclude)
 SHDEPS_libxenstore = $(SHLIB_libxentoolcore)
-LDLIBS_libxenstore = $(SHDEPS_libxenstore) $(XEN_XENSTORE)/libxenstore$(libextension)
-SHLIB_libxenstore  = $(SHDEPS_libxenstore) -Wl,-rpath-link=$(XEN_XENSTORE)
+LDLIBS_libxenstore = $(SHDEPS_libxenstore) $(XEN_libxenstore)/libxenstore$(libextension)
+SHLIB_libxenstore  = $(SHDEPS_libxenstore) -Wl,-rpath-link=$(XEN_libxenstore)
 ifeq ($(CONFIG_Linux),y)
 LDLIBS_libxenstore += -ldl
 endif
 
-CFLAGS_libxenstat  = -I$(XEN_LIBXENSTAT)
+CFLAGS_libxenstat  = -I$(XEN_libxenstat)
 SHDEPS_libxenstat  = $(SHLIB_libxenctrl) $(SHLIB_libxenstore)
-LDLIBS_libxenstat  = $(SHDEPS_libxenstat) $(XEN_LIBXENSTAT)/libxenstat$(libextension)
-SHLIB_libxenstat   = $(SHDEPS_libxenstat) -Wl,-rpath-link=$(XEN_LIBXENSTAT)
+LDLIBS_libxenstat  = $(SHDEPS_libxenstat) $(XEN_libxenstat)/libxenstat$(libextension)
+SHLIB_libxenstat   = $(SHDEPS_libxenstat) -Wl,-rpath-link=$(XEN_libxenstat)
 
-CFLAGS_libxenvchan = -I$(XEN_LIBVCHAN) $(CFLAGS_libxengnttab) $(CFLAGS_libxenevtchn)
+CFLAGS_libxenvchan = -I$(XEN_libxenvchan) $(CFLAGS_libxengnttab) $(CFLAGS_libxenevtchn)
 SHDEPS_libxenvchan = $(SHLIB_libxentoollog) $(SHLIB_libxenstore) $(SHLIB_libxenevtchn) $(SHLIB_libxengnttab)
-LDLIBS_libxenvchan = $(SHDEPS_libxenvchan) $(XEN_LIBVCHAN)/libxenvchan$(libextension)
-SHLIB_libxenvchan  = $(SHDEPS_libxenvchan) -Wl,-rpath-link=$(XEN_LIBVCHAN)
+LDLIBS_libxenvchan = $(SHDEPS_libxenvchan) $(XEN_libxenvchan)/libxenvchan$(libextension)
+SHLIB_libxenvchan  = $(SHDEPS_libxenvchan) -Wl,-rpath-link=$(XEN_libxenvchan)
 
 ifeq ($(debug),y)
 # Disable optimizations
@@ -176,15 +178,15 @@ else
 CFLAGS += -O2 -fomit-frame-pointer
 endif
 
-CFLAGS_libxenlight = -I$(XEN_XENLIGHT) $(CFLAGS_libxenctrl) $(CFLAGS_xeninclude)
+CFLAGS_libxenlight = -I$(XEN_libxenlight) $(CFLAGS_libxenctrl) $(CFLAGS_xeninclude)
 SHDEPS_libxenlight = $(SHLIB_libxenctrl) $(SHLIB_libxenstore) $(SHLIB_libxenhypfs)
-LDLIBS_libxenlight = $(SHDEPS_libxenlight) $(XEN_XENLIGHT)/libxenlight$(libextension)
-SHLIB_libxenlight  = $(SHDEPS_libxenlight) -Wl,-rpath-link=$(XEN_XENLIGHT)
+LDLIBS_libxenlight = $(SHDEPS_libxenlight) $(XEN_libxenlight)/libxenlight$(libextension)
+SHLIB_libxenlight  = $(SHDEPS_libxenlight) -Wl,-rpath-link=$(XEN_libxenlight)
 
-CFLAGS_libxlutil = -I$(XEN_XLUTIL)
+CFLAGS_libxlutil = -I$(XEN_libxlutil)
 SHDEPS_libxlutil = $(SHLIB_libxenlight)
-LDLIBS_libxlutil = $(SHDEPS_libxlutil) $(XEN_XLUTIL)/libxlutil$(libextension)
-SHLIB_libxlutil  = $(SHDEPS_libxlutil) -Wl,-rpath-link=$(XEN_XLUTIL)
+LDLIBS_libxlutil = $(SHDEPS_libxlutil) $(XEN_libxlutil)/libxlutil$(libextension)
+SHLIB_libxlutil  = $(SHDEPS_libxlutil) -Wl,-rpath-link=$(XEN_libxlutil)
 
 CFLAGS += -D__XEN_INTERFACE_VERSION__=__XEN_LATEST_INTERFACE_VERSION__
 
diff --git a/tools/golang/xenlight/Makefile b/tools/golang/xenlight/Makefile
index eac9dbf12a..a83fff7573 100644
--- a/tools/golang/xenlight/Makefile
+++ b/tools/golang/xenlight/Makefile
@@ -30,11 +30,11 @@ idl-gen: $(GOXL_GEN_FILES)
 #
 # NB that because the users of this library need to be able to
 # recompile the library from source, it needs to include '-lxenlight'
-# in the LDFLAGS; and thus we need to add -L$(XEN_XENLIGHT) here
+# in the LDFLAGS; and thus we need to add -L$(XEN_libxenlight) here
 # so that it can find the actual library.
 .PHONY: build
 build: xenlight.go $(GOXL_GEN_FILES)
-	CGO_CFLAGS="$(CFLAGS_libxenlight) $(CFLAGS_libxentoollog)" CGO_LDFLAGS="$(LDLIBS_libxenlight) $(LDLIBS_libxentoollog) -L$(XEN_XENLIGHT) -L$(XEN_LIBXENTOOLLOG)" $(GO) build -x
+	CGO_CFLAGS="$(CFLAGS_libxenlight) $(CFLAGS_libxentoollog)" CGO_LDFLAGS="$(LDLIBS_libxenlight) $(LDLIBS_libxentoollog) -L$(XEN_libxenlight) -L$(XEN_libxentoollog)" $(GO) build -x
 
 .PHONY: install
 install: build
diff --git a/tools/libs/call/Makefile b/tools/libs/call/Makefile
index 7f6dc3fcbd..7994b411fa 100644
--- a/tools/libs/call/Makefile
+++ b/tools/libs/call/Makefile
@@ -15,5 +15,5 @@ SRCS-$(CONFIG_MiniOS)  += minios.c
 
 include $(XEN_ROOT)/tools/libs/libs.mk
 
-$(PKG_CONFIG_LOCAL): PKG_CONFIG_INCDIR = $(XEN_LIBXENCALL)/include
+$(PKG_CONFIG_LOCAL): PKG_CONFIG_INCDIR = $(XEN_libxencall)/include
 $(PKG_CONFIG_LOCAL): PKG_CONFIG_CFLAGS_LOCAL = $(CFLAGS_xeninclude)
diff --git a/tools/libs/devicemodel/Makefile b/tools/libs/devicemodel/Makefile
index 61bfa35273..d9d1d1b850 100644
--- a/tools/libs/devicemodel/Makefile
+++ b/tools/libs/devicemodel/Makefile
@@ -15,5 +15,5 @@ SRCS-$(CONFIG_MiniOS)  += compat.c
 
 include $(XEN_ROOT)/tools/libs/libs.mk
 
-$(PKG_CONFIG_LOCAL): PKG_CONFIG_INCDIR = $(XEN_LIBXENDEVICEMODEL)/include
+$(PKG_CONFIG_LOCAL): PKG_CONFIG_INCDIR = $(XEN_libxendevicemodel)/include
 $(PKG_CONFIG_LOCAL): PKG_CONFIG_CFLAGS_LOCAL = $(CFLAGS_xeninclude)
diff --git a/tools/libs/evtchn/Makefile b/tools/libs/evtchn/Makefile
index 9206f622ef..d7aa4d402f 100644
--- a/tools/libs/evtchn/Makefile
+++ b/tools/libs/evtchn/Makefile
@@ -15,4 +15,4 @@ SRCS-$(CONFIG_MiniOS)  += minios.c
 
 include $(XEN_ROOT)/tools/libs/libs.mk
 
-$(PKG_CONFIG_LOCAL): PKG_CONFIG_INCDIR = $(XEN_LIBXENEVTCHN)/include
+$(PKG_CONFIG_LOCAL): PKG_CONFIG_INCDIR = $(XEN_libxenevtchn)/include
diff --git a/tools/libs/foreignmemory/Makefile b/tools/libs/foreignmemory/Makefile
index 28f1bddc96..823989681d 100644
--- a/tools/libs/foreignmemory/Makefile
+++ b/tools/libs/foreignmemory/Makefile
@@ -15,5 +15,5 @@ SRCS-$(CONFIG_MiniOS)  += minios.c
 
 include $(XEN_ROOT)/tools/libs/libs.mk
 
-$(PKG_CONFIG_LOCAL): PKG_CONFIG_INCDIR = $(XEN_LIBXENFOREIGNMEMORY)/include
+$(PKG_CONFIG_LOCAL): PKG_CONFIG_INCDIR = $(XEN_libxenforeignmemory)/include
 $(PKG_CONFIG_LOCAL): PKG_CONFIG_CFLAGS_LOCAL = $(CFLAGS_xeninclude)
diff --git a/tools/libs/gnttab/Makefile b/tools/libs/gnttab/Makefile
index 2da8fbbb7f..c0fffdac71 100644
--- a/tools/libs/gnttab/Makefile
+++ b/tools/libs/gnttab/Makefile
@@ -17,5 +17,5 @@ SRCS-$(CONFIG_NetBSD)  += gnttab_unimp.c gntshr_unimp.c
 
 include $(XEN_ROOT)/tools/libs/libs.mk
 
-$(PKG_CONFIG_LOCAL): PKG_CONFIG_INCDIR = $(XEN_LIBXENGNTTAB)/include
+$(PKG_CONFIG_LOCAL): PKG_CONFIG_INCDIR = $(XEN_libxengnttab)/include
 $(PKG_CONFIG_LOCAL): PKG_CONFIG_CFLAGS_LOCAL = $(CFLAGS_xeninclude)
diff --git a/tools/libs/hypfs/Makefile b/tools/libs/hypfs/Makefile
index 06dd449929..b4c41f6189 100644
--- a/tools/libs/hypfs/Makefile
+++ b/tools/libs/hypfs/Makefile
@@ -12,5 +12,5 @@ SRCS-y                 += core.c
 
 include ../libs.mk
 
-$(PKG_CONFIG_LOCAL): PKG_CONFIG_INCDIR = $(XEN_LIBXENHYPFS)/include
+$(PKG_CONFIG_LOCAL): PKG_CONFIG_INCDIR = $(XEN_libxenhypfs)/include
 $(PKG_CONFIG_LOCAL): PKG_CONFIG_CFLAGS_LOCAL = $(CFLAGS_xeninclude)
diff --git a/tools/libs/toolcore/Makefile b/tools/libs/toolcore/Makefile
index 9c5a92d93f..85ff2b26fd 100644
--- a/tools/libs/toolcore/Makefile
+++ b/tools/libs/toolcore/Makefile
@@ -10,7 +10,7 @@ SRCS-y	+= handlereg.c
 
 include $(XEN_ROOT)/tools/libs/libs.mk
 
-$(PKG_CONFIG_LOCAL): PKG_CONFIG_INCDIR = $(XEN_LIBXENTOOLCORE)/include
+$(PKG_CONFIG_LOCAL): PKG_CONFIG_INCDIR = $(XEN_libxentoolcore)/include
 
 $(LIB_OBJS): $(AUTOINCS)
 $(PIC_OBJS): $(AUTOINCS)
diff --git a/tools/libs/toollog/Makefile b/tools/libs/toollog/Makefile
index 9156e5d08e..2d3ae4e627 100644
--- a/tools/libs/toollog/Makefile
+++ b/tools/libs/toollog/Makefile
@@ -10,4 +10,4 @@ SRCS-y	+= xtl_logger_stdio.c
 
 include $(XEN_ROOT)/tools/libs/libs.mk
 
-$(PKG_CONFIG_LOCAL): PKG_CONFIG_INCDIR = $(XEN_LIBXENTOOLLOG)/include
+$(PKG_CONFIG_LOCAL): PKG_CONFIG_INCDIR = $(XEN_libxentoollog)/include
diff --git a/tools/libvchan/Makefile b/tools/libvchan/Makefile
index 913bcc8884..025a935cb7 100644
--- a/tools/libvchan/Makefile
+++ b/tools/libvchan/Makefile
@@ -35,7 +35,7 @@ endif
 PKG_CONFIG_LOCAL := $(foreach pc,$(PKG_CONFIG),$(PKG_CONFIG_DIR)/$(pc))
 
 $(PKG_CONFIG_LOCAL): PKG_CONFIG_PREFIX = $(XEN_ROOT)
-$(PKG_CONFIG_LOCAL): PKG_CONFIG_INCDIR = $(XEN_LIBVCHAN)
+$(PKG_CONFIG_LOCAL): PKG_CONFIG_INCDIR = $(XEN_libxenvchan)
 $(PKG_CONFIG_LOCAL): PKG_CONFIG_LIBDIR = $(CURDIR)
 $(PKG_CONFIG_LOCAL): PKG_CONFIG_CFLAGS_LOCAL = $(CFLAGS_xeninclude)
 
diff --git a/tools/libxc/Makefile b/tools/libxc/Makefile
index fae5969a73..1e64116bd4 100644
--- a/tools/libxc/Makefile
+++ b/tools/libxc/Makefile
@@ -168,7 +168,7 @@ endif
 PKG_CONFIG_LOCAL := $(foreach pc,$(PKG_CONFIG),$(PKG_CONFIG_DIR)/$(pc))
 
 $(PKG_CONFIG_LOCAL): PKG_CONFIG_PREFIX = $(XEN_ROOT)
-$(PKG_CONFIG_LOCAL): PKG_CONFIG_INCDIR = $(XEN_LIBXC)/include
+$(PKG_CONFIG_LOCAL): PKG_CONFIG_INCDIR = $(XEN_libxenctrl)/include
 $(PKG_CONFIG_LOCAL): PKG_CONFIG_LIBDIR = $(CURDIR)
 $(PKG_CONFIG_LOCAL): PKG_CONFIG_CFLAGS_LOCAL = $(CFLAGS_xeninclude)
 
diff --git a/tools/xenstat/libxenstat/Makefile b/tools/xenstat/libxenstat/Makefile
index 03cb212e3b..3d05ecdd9f 100644
--- a/tools/xenstat/libxenstat/Makefile
+++ b/tools/xenstat/libxenstat/Makefile
@@ -50,7 +50,7 @@ endif
 PKG_CONFIG_LOCAL := $(foreach pc,$(PKG_CONFIG),$(PKG_CONFIG_DIR)/$(pc))
 
 $(PKG_CONFIG_LOCAL): PKG_CONFIG_PREFIX = $(XEN_ROOT)
-$(PKG_CONFIG_LOCAL): PKG_CONFIG_INCDIR = $(XEN_LIBXENSTAT)
+$(PKG_CONFIG_LOCAL): PKG_CONFIG_INCDIR = $(XEN_libxenstat)
 $(PKG_CONFIG_LOCAL): PKG_CONFIG_LIBDIR = $(CURDIR)
 
 .PHONY: all
diff --git a/tools/xenstore/Makefile b/tools/xenstore/Makefile
index 445e9911b2..0a64ac1571 100644
--- a/tools/xenstore/Makefile
+++ b/tools/xenstore/Makefile
@@ -128,7 +128,7 @@ endif
 PKG_CONFIG_LOCAL := $(foreach pc,$(PKG_CONFIG),$(PKG_CONFIG_DIR)/$(pc))
 
 $(PKG_CONFIG_LOCAL): PKG_CONFIG_PREFIX = $(XEN_ROOT)
-$(PKG_CONFIG_LOCAL): PKG_CONFIG_INCDIR = $(XEN_XENSTORE)/include
+$(PKG_CONFIG_LOCAL): PKG_CONFIG_INCDIR = $(XEN_libxenstore)/include
 $(PKG_CONFIG_LOCAL): PKG_CONFIG_LIBDIR = $(CURDIR)
 $(PKG_CONFIG_LOCAL): PKG_CONFIG_CFLAGS_LOCAL = $(CFLAGS_xeninclude)
 
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Wed Jul 15 16:25:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jul 2020 16:25:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvkEI-0002fY-83; Wed, 15 Jul 2020 16:25:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=o5qj=A2=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jvkEH-0002Yi-AT
 for xen-devel@lists.xenproject.org; Wed, 15 Jul 2020 16:25:57 +0000
X-Inumbo-ID: d0b03a0c-c6b7-11ea-bb8b-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d0b03a0c-c6b7-11ea-bb8b-bc764e2007e4;
 Wed, 15 Jul 2020 16:25:37 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jvkDw-0001sU-Gu; Wed, 15 Jul 2020 17:25:36 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH 04/12] tools: don't call make recursively from libs.mk
Date: Wed, 15 Jul 2020 17:25:03 +0100
Message-Id: <20200715162511.5941-6-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200715162511.5941-1-ian.jackson@eu.citrix.com>
References: <20200715162511.5941-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Reply-To: incoming+61544a64d0c2dc4555813e58f3810dd7@incoming.gitlab.com,
 xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>, ian.jackson@eu.citrix.com,
 Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Juergen Gross <jgross@suse.com>

During the build of a Xen library, make is invoked again recursively via
libs.mk. This is not necessary, as the same ordering can be achieved with a
simple dependency.
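
To illustrate the change outside the tree (hypothetical targets, not the
actual libs.mk rules): the old rule re-invoked make as a child process, while
a plain prerequisite lets the single make instance handle the ordering itself:

```make
# Before: a recursive invocation spawns a second make process just to
# build the "libs" target.
#
# build:
#	$(MAKE) libs

# After: a plain prerequisite achieves the same ordering in-process,
# with no extra fork/exec of make.
build: libs

libs:
	@echo building libs
```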

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 tools/libs/libs.mk | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/tools/libs/libs.mk b/tools/libs/libs.mk
index 8045c00e9a..764f5441e2 100644
--- a/tools/libs/libs.mk
+++ b/tools/libs/libs.mk
@@ -45,8 +45,7 @@ $(PKG_CONFIG_LOCAL): PKG_CONFIG_LIBDIR = $(CURDIR)
 all: build
 
 .PHONY: build
-build:
-	$(MAKE) libs
+build: libs
 
 .PHONY: libs
 libs: headers.chk $(LIB) $(PKG_CONFIG_INST) $(PKG_CONFIG_LOCAL)
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Wed Jul 15 16:26:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jul 2020 16:26:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvkEN-0002iF-JS; Wed, 15 Jul 2020 16:26:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=o5qj=A2=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jvkEM-0002Yi-Am
 for xen-devel@lists.xenproject.org; Wed, 15 Jul 2020 16:26:02 +0000
X-Inumbo-ID: d0e35540-c6b7-11ea-b7bb-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d0e35540-c6b7-11ea-b7bb-bc764e2007e4;
 Wed, 15 Jul 2020 16:25:37 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jvkDw-0001sU-VQ; Wed, 15 Jul 2020 17:25:37 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH 05/12] tools: define ROUNDUP() in
 tools/include/xen-tools/libs.h
Date: Wed, 15 Jul 2020 17:25:04 +0100
Message-Id: <20200715162511.5941-7-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200715162511.5941-1-ian.jackson@eu.citrix.com>
References: <20200715162511.5941-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Reply-To: incoming+61544a64d0c2dc4555813e58f3810dd7@incoming.gitlab.com,
 xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>, Anthony PERARD <anthony.perard@citrix.com>,
 ian.jackson@eu.citrix.com, Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Juergen Gross <jgross@suse.com>

Today there are multiple copies of the ROUNDUP() macro in various
sources and headers. Define it once in tools/include/xen-tools/libs.h.

Using xen-tools/libs.h enables removing copies of MIN() and MAX(), too.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 tools/console/daemon/io.c        | 6 +-----
 tools/include/xen-tools/libs.h   | 4 ++++
 tools/libs/call/buffer.c         | 3 +--
 tools/libs/foreignmemory/linux.c | 3 +--
 tools/libs/gnttab/private.h      | 3 ---
 tools/libxc/xg_private.h         | 1 -
 tools/libxl/libxl_internal.h     | 3 ---
 tools/xenstore/xenstored_core.c  | 2 --
 8 files changed, 7 insertions(+), 18 deletions(-)

diff --git a/tools/console/daemon/io.c b/tools/console/daemon/io.c
index a43c57edad..4af27ffc5d 100644
--- a/tools/console/daemon/io.c
+++ b/tools/console/daemon/io.c
@@ -49,9 +49,7 @@
 #include <sys/ioctl.h>
 #include <libutil.h>
 #endif
-
-#define MAX(a, b) (((a) > (b)) ? (a) : (b))
-#define MIN(a, b) (((a) < (b)) ? (a) : (b))
+#include <xen-tools/libs.h>
 
 /* Each 10 bits takes ~ 3 digits, plus one, plus one for nul terminator. */
 #define MAX_STRLEN(x) ((sizeof(x) * CHAR_BIT + CHAR_BIT-1) / 10 * 3 + 2)
@@ -80,8 +78,6 @@ static struct pollfd  *fds;
 static unsigned int current_array_size;
 static unsigned int nr_fds;
 
-#define ROUNDUP(_x,_w) (((unsigned long)(_x)+(1UL<<(_w))-1) & ~((1UL<<(_w))-1))
-
 struct buffer {
 	char *data;
 	size_t consumed;
diff --git a/tools/include/xen-tools/libs.h b/tools/include/xen-tools/libs.h
index cc7dfc8c64..a16e0c3807 100644
--- a/tools/include/xen-tools/libs.h
+++ b/tools/include/xen-tools/libs.h
@@ -59,4 +59,8 @@
     })
 #endif
 
+#ifndef ROUNDUP
+#define ROUNDUP(_x,_w) (((unsigned long)(_x)+(1UL<<(_w))-1) & ~((1UL<<(_w))-1))
+#endif
+
 #endif	/* __XEN_TOOLS_LIBS__ */
diff --git a/tools/libs/call/buffer.c b/tools/libs/call/buffer.c
index 0b6af2db60..085674d882 100644
--- a/tools/libs/call/buffer.c
+++ b/tools/libs/call/buffer.c
@@ -16,14 +16,13 @@
 #include <errno.h>
 #include <string.h>
 #include <pthread.h>
+#include <xen-tools/libs.h>
 
 #include "private.h"
 
 #define DBGPRINTF(_m...) \
     xtl_log(xcall->logger, XTL_DEBUG, -1, "xencall:buffer", _m)
 
-#define ROUNDUP(_x,_w) (((unsigned long)(_x)+(1UL<<(_w))-1) & ~((1UL<<(_w))-1))
-
 pthread_mutex_t cache_mutex = PTHREAD_MUTEX_INITIALIZER;
 
 static void cache_lock(xencall_handle *xcall)
diff --git a/tools/libs/foreignmemory/linux.c b/tools/libs/foreignmemory/linux.c
index 8daa5828e3..fe73d5ab72 100644
--- a/tools/libs/foreignmemory/linux.c
+++ b/tools/libs/foreignmemory/linux.c
@@ -25,11 +25,10 @@
 
 #include <sys/mman.h>
 #include <sys/ioctl.h>
+#include <xen-tools/libs.h>
 
 #include "private.h"
 
-#define ROUNDUP(_x,_w) (((unsigned long)(_x)+(1UL<<(_w))-1) & ~((1UL<<(_w))-1))
-
 #ifndef O_CLOEXEC
 #define O_CLOEXEC 0
 #endif
diff --git a/tools/libs/gnttab/private.h b/tools/libs/gnttab/private.h
index c5e23639b1..eb6a6abe54 100644
--- a/tools/libs/gnttab/private.h
+++ b/tools/libs/gnttab/private.h
@@ -5,9 +5,6 @@
 #include <xentoolcore_internal.h>
 #include <xengnttab.h>
 
-/* Set of macros/defines used by both Linux and FreeBSD */
-#define ROUNDUP(_x,_w) (((unsigned long)(_x)+(1UL<<(_w))-1) & ~((1UL<<(_w))-1))
-
 #define GTERROR(_l, _f...) xtl_log(_l, XTL_ERROR, errno, "gnttab", _f)
 #define GSERROR(_l, _f...) xtl_log(_l, XTL_ERROR, errno, "gntshr", _f)
 
diff --git a/tools/libxc/xg_private.h b/tools/libxc/xg_private.h
index f0a4b2c616..40b5baecde 100644
--- a/tools/libxc/xg_private.h
+++ b/tools/libxc/xg_private.h
@@ -95,7 +95,6 @@ typedef uint64_t x86_pgentry_t;
 #define PAGE_SIZE_X86           (1UL << PAGE_SHIFT_X86)
 #define PAGE_MASK_X86           (~(PAGE_SIZE_X86-1))
 
-#define ROUNDUP(_x,_w) (((unsigned long)(_x)+(1UL<<(_w))-1) & ~((1UL<<(_w))-1))
 #define NRPAGES(x) (ROUNDUP(x, PAGE_SHIFT) >> PAGE_SHIFT)
 
 
diff --git a/tools/libxl/libxl_internal.h b/tools/libxl/libxl_internal.h
index 94a23179d3..c63d0686fd 100644
--- a/tools/libxl/libxl_internal.h
+++ b/tools/libxl/libxl_internal.h
@@ -132,9 +132,6 @@
 #define MB(_mb)     (_AC(_mb, ULL) << 20)
 #define GB(_gb)     (_AC(_gb, ULL) << 30)
 
-#define ROUNDUP(_val, _order)                                           \
-    (((unsigned long)(_val)+(1UL<<(_order))-1) & ~((1UL<<(_order))-1))
-
 #define DIV_ROUNDUP(n, d) (((n) + (d) - 1) / (d))
 
 #define MASK_EXTR(v, m) (((v) & (m)) / ((m) & -(m)))
diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
index 7bd959f28b..9700772d40 100644
--- a/tools/xenstore/xenstored_core.c
+++ b/tools/xenstore/xenstored_core.c
@@ -73,8 +73,6 @@ static unsigned int nr_fds;
 static int sock = -1;
 static int ro_sock = -1;
 
-#define ROUNDUP(_x, _w) (((unsigned long)(_x)+(1UL<<(_w))-1) & ~((1UL<<(_w))-1))
-
 static bool verbose = false;
 LIST_HEAD(connections);
 int tracefd = -1;
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Wed Jul 15 16:26:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jul 2020 16:26:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvkET-0002ku-0w; Wed, 15 Jul 2020 16:26:09 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=o5qj=A2=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jvkER-0002Yi-Al
 for xen-devel@lists.xenproject.org; Wed, 15 Jul 2020 16:26:07 +0000
X-Inumbo-ID: d10c44f0-c6b7-11ea-bca7-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d10c44f0-c6b7-11ea-bca7-bc764e2007e4;
 Wed, 15 Jul 2020 16:25:38 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jvkDx-0001sU-94; Wed, 15 Jul 2020 17:25:37 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH 06/12] tools/misc: don't use libxenctrl internals from misc
 tools
Date: Wed, 15 Jul 2020 17:25:05 +0100
Message-Id: <20200715162511.5941-8-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200715162511.5941-1-ian.jackson@eu.citrix.com>
References: <20200715162511.5941-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Reply-To: incoming+61544a64d0c2dc4555813e58f3810dd7@incoming.gitlab.com,
 xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>, ian.jackson@eu.citrix.com,
 Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Juergen Gross <jgross@suse.com>

xen-hptool and xen-mfndump use libxenctrl internals, e.g. by including
private headers. Fix that by using the correct official headers or other
means.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 tools/libxc/xg_save_restore.h |  4 --
 tools/misc/Makefile           |  4 --
 tools/misc/xen-hptool.c       |  8 ++--
 tools/misc/xen-mfndump.c      | 70 +++++++++++++++++++----------------
 4 files changed, 43 insertions(+), 43 deletions(-)

diff --git a/tools/libxc/xg_save_restore.h b/tools/libxc/xg_save_restore.h
index 303081df0d..b904296997 100644
--- a/tools/libxc/xg_save_restore.h
+++ b/tools/libxc/xg_save_restore.h
@@ -109,10 +109,6 @@ static inline int get_platform_info(xc_interface *xch, uint32_t dom,
 #define M2P_SIZE(_m)    ROUNDUP(((_m) * sizeof(xen_pfn_t)), M2P_SHIFT)
 #define M2P_CHUNKS(_m)  (M2P_SIZE((_m)) >> M2P_SHIFT)
 
-/* Returns TRUE if the PFN is currently mapped */
-#define is_mapped(pfn_type) (!((pfn_type) & 0x80000000UL))
-
-
 #define GET_FIELD(_p, _f, _w) (((_w) == 8) ? ((_p)->x64._f) : ((_p)->x32._f))
 
 #define SET_FIELD(_p, _f, _v, _w) do {          \
diff --git a/tools/misc/Makefile b/tools/misc/Makefile
index 9fdb13597f..4e2e8f3b17 100644
--- a/tools/misc/Makefile
+++ b/tools/misc/Makefile
@@ -93,15 +93,11 @@ xenhypfs: xenhypfs.o
 xenlockprof: xenlockprof.o
 	$(CC) $(LDFLAGS) -o $@ $< $(LDLIBS_libxenctrl) $(APPEND_LDFLAGS)
 
-# xen-hptool incorrectly uses libxc internals
-xen-hptool.o: CFLAGS += -I$(XEN_ROOT)/tools/libxc $(CFLAGS_libxencall)
 xen-hptool: xen-hptool.o
 	$(CC) $(LDFLAGS) -o $@ $< $(LDLIBS_libxenevtchn) $(LDLIBS_libxenctrl) $(LDLIBS_libxenguest) $(LDLIBS_libxenstore) $(APPEND_LDFLAGS)
 
 xenhypfs.o: CFLAGS += $(CFLAGS_libxenhypfs)
 
-# xen-mfndump incorrectly uses libxc internals
-xen-mfndump.o: CFLAGS += -I$(XEN_ROOT)/tools/libxc $(CFLAGS_libxencall)
 xen-mfndump: xen-mfndump.o
 	$(CC) $(LDFLAGS) -o $@ $< $(LDLIBS_libxenevtchn) $(LDLIBS_libxenctrl) $(LDLIBS_libxenguest) $(APPEND_LDFLAGS)
 
diff --git a/tools/misc/xen-hptool.c b/tools/misc/xen-hptool.c
index 6e27d9cf43..7f17f24942 100644
--- a/tools/misc/xen-hptool.c
+++ b/tools/misc/xen-hptool.c
@@ -1,9 +1,11 @@
+#include <stdlib.h>
+#include <string.h>
+#include <unistd.h>
 #include <xenevtchn.h>
 #include <xenctrl.h>
-#include <xc_private.h>
-#include <xc_core.h>
+#include <xenguest.h>
 #include <xenstore.h>
-#include <unistd.h>
+#include <xen-tools/libs.h>
 
 static xc_interface *xch;
 
diff --git a/tools/misc/xen-mfndump.c b/tools/misc/xen-mfndump.c
index 858bd0e26b..39ab00eb55 100644
--- a/tools/misc/xen-mfndump.c
+++ b/tools/misc/xen-mfndump.c
@@ -1,11 +1,17 @@
-#define XC_WANT_COMPAT_MAP_FOREIGN_API
-#include <xenctrl.h>
-#include <xc_private.h>
-#include <xc_core.h>
+#include <stdlib.h>
+#include <string.h>
+#include <sys/mman.h>
 #include <unistd.h>
 #include <inttypes.h>
 
-#include "xg_save_restore.h"
+#define XC_WANT_COMPAT_MAP_FOREIGN_API
+#include <xenctrl.h>
+#include <xenguest.h>
+
+#include <xen-tools/libs.h>
+
+#define M2P_SIZE(_m)    ROUNDUP(((_m) * sizeof(xen_pfn_t)), 21)
+#define is_mapped(pfn_type) (!((pfn_type) & 0x80000000UL))
 
 static xc_interface *xch;
 
@@ -41,13 +47,13 @@ int dump_m2p_func(int argc, char *argv[])
     /* Map M2P and obtain gpfn */
     if ( xc_maximum_ram_page(xch, &max_mfn) < 0 )
     {
-        ERROR("Failed to get the maximum mfn");
+        fprintf(stderr, "Failed to get the maximum mfn");
         return -1;
     }
 
     if ( !(m2p_table = xc_map_m2p(xch, max_mfn, PROT_READ, NULL)) )
     {
-        ERROR("Failed to map live M2P table");
+        fprintf(stderr, "Failed to map live M2P table");
         return -1;
     }
 
@@ -80,7 +86,7 @@ int dump_p2m_func(int argc, char *argv[])
     if ( xc_domain_getinfo(xch, domid, 1, &info) != 1 ||
          info.domid != domid )
     {
-        ERROR("Failed to obtain info for domain %d\n", domid);
+        fprintf(stderr, "Failed to obtain info for domain %d\n", domid);
         return -1;
     }
 
@@ -88,7 +94,7 @@ int dump_p2m_func(int argc, char *argv[])
     memset(&minfo, 0, sizeof(minfo));
     if ( xc_map_domain_meminfo(xch, domid, &minfo) )
     {
-        ERROR("Could not map domain %d memory information\n", domid);
+        fprintf(stderr, "Could not map domain %d memory information\n", domid);
         return -1;
     }
 
@@ -167,7 +173,7 @@ int dump_ptes_func(int argc, char *argv[])
     if ( xc_domain_getinfo(xch, domid, 1, &info) != 1 ||
          info.domid != domid )
     {
-        ERROR("Failed to obtain info for domain %d\n", domid);
+        fprintf(stderr, "Failed to obtain info for domain %d\n", domid);
         return -1;
     }
 
@@ -175,7 +181,7 @@ int dump_ptes_func(int argc, char *argv[])
     memset(&minfo, 0, sizeof(minfo));
     if ( xc_map_domain_meminfo(xch, domid, &minfo) )
     {
-        ERROR("Could not map domain %d memory information\n", domid);
+        fprintf(stderr, "Could not map domain %d memory information\n", domid);
         return -1;
     }
 
@@ -185,35 +191,35 @@ int dump_ptes_func(int argc, char *argv[])
          !(m2p_table = xc_map_m2p(xch, max_mfn, PROT_READ, NULL)) )
     {
         xc_unmap_domain_meminfo(xch, &minfo);
-        ERROR("Failed to map live M2P table");
+        fprintf(stderr, "Failed to map live M2P table");
         return -1;
     }
 
     pfn = m2p_table[mfn];
     if ( pfn >= minfo.p2m_size )
     {
-        ERROR("pfn 0x%lx out of range for domain %d\n", pfn, domid);
+        fprintf(stderr, "pfn 0x%lx out of range for domain %d\n", pfn, domid);
         rc = -1;
         goto out;
     }
 
     if ( !(minfo.pfn_type[pfn] & XEN_DOMCTL_PFINFO_LTABTYPE_MASK) )
     {
-        ERROR("pfn 0x%lx for domain %d is not a PT\n", pfn, domid);
+        fprintf(stderr, "pfn 0x%lx for domain %d is not a PT\n", pfn, domid);
         rc = -1;
         goto out;
     }
 
-    page = xc_map_foreign_range(xch, domid, PAGE_SIZE, PROT_READ,
+    page = xc_map_foreign_range(xch, domid, XC_PAGE_SIZE, PROT_READ,
                                 minfo.p2m_table[pfn]);
     if ( !page )
     {
-        ERROR("Failed to map 0x%lx\n", minfo.p2m_table[pfn]);
+        fprintf(stderr, "Failed to map 0x%lx\n", minfo.p2m_table[pfn]);
         rc = -1;
         goto out;
     }
 
-    pte_num = PAGE_SIZE / 8;
+    pte_num = XC_PAGE_SIZE / 8;
 
     printf(" --- Dumping %d PTEs for domain %d ---\n", pte_num, domid);
     printf(" Guest Width: %u, PT Levels: %u P2M size: = %lu\n",
@@ -249,7 +255,7 @@ int dump_ptes_func(int argc, char *argv[])
 
  out:
     if ( page )
-        munmap(page, PAGE_SIZE);
+        munmap(page, XC_PAGE_SIZE);
     xc_unmap_domain_meminfo(xch, &minfo);
     munmap(m2p_table, M2P_SIZE(max_mfn));
     return rc;
@@ -275,7 +281,7 @@ int lookup_pte_func(int argc, char *argv[])
     if ( xc_domain_getinfo(xch, domid, 1, &info) != 1 ||
          info.domid != domid )
     {
-        ERROR("Failed to obtain info for domain %d\n", domid);
+        fprintf(stderr, "Failed to obtain info for domain %d\n", domid);
         return -1;
     }
 
@@ -283,11 +289,11 @@ int lookup_pte_func(int argc, char *argv[])
     memset(&minfo, 0, sizeof(minfo));
     if ( xc_map_domain_meminfo(xch, domid, &minfo) )
     {
-        ERROR("Could not map domain %d memory information\n", domid);
+        fprintf(stderr, "Could not map domain %d memory information\n", domid);
         return -1;
     }
 
-    pte_num = PAGE_SIZE / 8;
+    pte_num = XC_PAGE_SIZE / 8;
 
     printf(" --- Lookig for PTEs mapping mfn 0x%lx for domain %d ---\n",
            mfn, domid);
@@ -299,7 +305,7 @@ int lookup_pte_func(int argc, char *argv[])
         if ( !(minfo.pfn_type[i] & XEN_DOMCTL_PFINFO_LTABTYPE_MASK) )
             continue;
 
-        page = xc_map_foreign_range(xch, domid, PAGE_SIZE, PROT_READ,
+        page = xc_map_foreign_range(xch, domid, XC_PAGE_SIZE, PROT_READ,
                                     minfo.p2m_table[i]);
         if ( !page )
             continue;
@@ -309,15 +315,15 @@ int lookup_pte_func(int argc, char *argv[])
             uint64_t pte = ((const uint64_t*)page)[j];
 
 #define __MADDR_BITS_X86  ((minfo.guest_width == 8) ? 52 : 44)
-#define __MFN_MASK_X86    ((1ULL << (__MADDR_BITS_X86 - PAGE_SHIFT_X86)) - 1)
-            if ( ((pte >> PAGE_SHIFT_X86) & __MFN_MASK_X86) == mfn)
+#define __MFN_MASK_X86    ((1ULL << (__MADDR_BITS_X86 - XC_PAGE_SHIFT)) - 1)
+            if ( ((pte >> XC_PAGE_SHIFT) & __MFN_MASK_X86) == mfn)
                 printf("  0x%lx <-- [0x%lx][%lu]: 0x%"PRIx64"\n",
                        mfn, minfo.p2m_table[i], j, pte);
 #undef __MADDR_BITS_X86
 #undef __MFN_MASK_X8
         }
 
-        munmap(page, PAGE_SIZE);
+        munmap(page, XC_PAGE_SIZE);
         page = NULL;
     }
 
@@ -348,15 +354,15 @@ int memcmp_mfns_func(int argc, char *argv[])
          xc_domain_getinfo(xch, domid2, 1, &info2) != 1 ||
          info1.domid != domid1 || info2.domid != domid2)
     {
-        ERROR("Failed to obtain info for domains\n");
+        fprintf(stderr, "Failed to obtain info for domains\n");
         return -1;
     }
 
-    page1 = xc_map_foreign_range(xch, domid1, PAGE_SIZE, PROT_READ, mfn1);
-    page2 = xc_map_foreign_range(xch, domid2, PAGE_SIZE, PROT_READ, mfn2);
+    page1 = xc_map_foreign_range(xch, domid1, XC_PAGE_SIZE, PROT_READ, mfn1);
+    page2 = xc_map_foreign_range(xch, domid2, XC_PAGE_SIZE, PROT_READ, mfn2);
     if ( !page1 || !page2 )
     {
-        ERROR("Failed to map either 0x%lx[dom %d] or 0x%lx[dom %d]\n",
+        fprintf(stderr, "Failed to map either 0x%lx[dom %d] or 0x%lx[dom %d]\n",
               mfn1, domid1, mfn2, domid2);
         rc = -1;
         goto out;
@@ -365,13 +371,13 @@ int memcmp_mfns_func(int argc, char *argv[])
     printf(" --- Comparing the content of 2 MFNs ---\n");
     printf(" 1: 0x%lx[dom %d], 2: 0x%lx[dom %d]\n",
            mfn1, domid1, mfn2, domid2);
-    printf("  memcpy(1, 2) = %d\n", memcmp(page1, page2, PAGE_SIZE));
+    printf("  memcpy(1, 2) = %d\n", memcmp(page1, page2, XC_PAGE_SIZE));
 
  out:
     if ( page1 )
-        munmap(page1, PAGE_SIZE);
+        munmap(page1, XC_PAGE_SIZE);
     if ( page2 )
-        munmap(page2, PAGE_SIZE);
+        munmap(page2, XC_PAGE_SIZE);
     return rc;
 }
 
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Wed Jul 15 16:26:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jul 2020 16:26:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvkEY-0002oT-Bb; Wed, 15 Jul 2020 16:26:14 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=o5qj=A2=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jvkEW-0002Yi-B3
 for xen-devel@lists.xenproject.org; Wed, 15 Jul 2020 16:26:12 +0000
X-Inumbo-ID: d157443c-c6b7-11ea-b7bb-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d157443c-c6b7-11ea-b7bb-bc764e2007e4;
 Wed, 15 Jul 2020 16:25:38 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jvkDx-0001sU-JK; Wed, 15 Jul 2020 17:25:37 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH 07/12] tools/libxc: untangle libxenctrl from libxenguest
Date: Wed, 15 Jul 2020 17:25:06 +0100
Message-Id: <20200715162511.5941-9-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200715162511.5941-1-ian.jackson@eu.citrix.com>
References: <20200715162511.5941-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Reply-To: incoming+61544a64d0c2dc4555813e58f3810dd7@incoming.gitlab.com,
 xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>, Wei Liu <wl@xen.org>,
 ian.jackson@eu.citrix.com, =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?=
 <marmarek@invisiblethingslab.com>, Anthony PERARD <anthony.perard@citrix.com>,
 Samuel Thibault <samuel.thibault@ens-lyon.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Juergen Gross <jgross@suse.com>

The sources of libxenctrl and libxenguest are completely entangled. In
practice libxenguest is a user of libxenctrl, so don't let any libxenctrl
source include xg_private.h.

This can be achieved by moving all definitions used by libxenctrl from
xg_private.h to xc_private.h. Additionally, xc_dom.h needs to be
exported, so rename it to xenctrl_dom.h.

Rename all libxenguest sources from xc_* to xg_*.

Move the xc_[un]map_domain_meminfo() functions to a new source file,
xg_domain.c, as they are declared in include/xenguest.h and should
therefore live in libxenguest.

Remove xc_efi.h and xc_elf.h as they aren't used anywhere.
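The layering invariant this patch establishes (no libxenctrl source may
include xg_private.h) can be checked mechanically. A sketch of such a
check follows; the file names and temporary directory are illustrative
only, and in a real tree one would run the grep from tools/libxc:

```shell
# Sketch: verify that no libxenctrl (xc_*.c) source still includes
# xg_private.h. Demo files stand in for a real tools/libxc checkout.
set -e
demo=$(mktemp -d)
cd "$demo"
printf '#include "xc_private.h"\n' > xc_core.c      # libxenctrl source
printf '#include "xg_private.h"\n' > xg_dom_core.c  # libxenguest source

# Any xc_*.c pulling in xg_private.h would be a layering violation.
if grep -l 'xg_private\.h' xc_*.c; then
    echo "FAIL: libxenctrl source includes xg_private.h"
else
    echo "OK: libxenctrl no longer depends on libxenguest internals"
fi
```

Run against the tree after this patch, the grep should find no matches,
since every remaining xc_*.c includes only xc_private.h and friends.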

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 stubdom/grub/kexec.c                          |   2 +-
 tools/helpers/init-xenstore-domain.c          |   2 +-
 tools/libxc/Makefile                          |  64 +++----
 .../libxc/include/{xc_dom.h => xenctrl_dom.h} |  10 +-
 tools/libxc/include/xenguest.h                |   8 +-
 tools/libxc/xc_core.c                         |   5 +-
 tools/libxc/xc_core.h                         |   2 +-
 tools/libxc/xc_core_arm.c                     |   2 +-
 tools/libxc/xc_core_x86.c                     |   6 +-
 tools/libxc/xc_domain.c                       | 129 +-------------
 tools/libxc/xc_efi.h                          | 158 ------------------
 tools/libxc/xc_elf.h                          |  16 --
 tools/libxc/xc_hcall_buf.c                    |   1 -
 tools/libxc/xc_private.c                      |   3 +-
 tools/libxc/xc_private.h                      |  36 ++++
 tools/libxc/xc_resume.c                       |   2 -
 .../libxc/{xc_cpuid_x86.c => xg_cpuid_x86.c}  |   0
 tools/libxc/{xc_dom_arm.c => xg_dom_arm.c}    |   2 +-
 ...imageloader.c => xg_dom_armzimageloader.c} |   2 +-
 ...{xc_dom_binloader.c => xg_dom_binloader.c} |   2 +-
 tools/libxc/{xc_dom_boot.c => xg_dom_boot.c}  |   2 +-
 ...bzimageloader.c => xg_dom_bzimageloader.c} |   2 +-
 ...m_compat_linux.c => xg_dom_compat_linux.c} |   2 +-
 tools/libxc/{xc_dom_core.c => xg_dom_core.c}  |   2 +-
 ...c_dom_decompress.h => xg_dom_decompress.h} |   4 +-
 ...compress_lz4.c => xg_dom_decompress_lz4.c} |   2 +-
 ...ss_unsafe.c => xg_dom_decompress_unsafe.c} |   2 +-
 ...ss_unsafe.h => xg_dom_decompress_unsafe.h} |   2 +-
 ...ip2.c => xg_dom_decompress_unsafe_bzip2.c} |   2 +-
 ...lzma.c => xg_dom_decompress_unsafe_lzma.c} |   2 +-
 ...o1x.c => xg_dom_decompress_unsafe_lzo1x.c} |   2 +-
 ...afe_xz.c => xg_dom_decompress_unsafe_xz.c} |   2 +-
 ...{xc_dom_elfloader.c => xg_dom_elfloader.c} |   2 +-
 ...{xc_dom_hvmloader.c => xg_dom_hvmloader.c} |   2 +-
 tools/libxc/{xc_dom_x86.c => xg_dom_x86.c}    |   2 +-
 tools/libxc/xg_domain.c                       | 149 +++++++++++++++++
 .../libxc/{xc_nomigrate.c => xg_nomigrate.c}  |   0
 .../{xc_offline_page.c => xg_offline_page.c}  |   2 +-
 tools/libxc/xg_private.h                      |  22 ---
 tools/libxc/xg_save_restore.h                 |   9 -
 .../libxc/{xc_sr_common.c => xg_sr_common.c}  |   2 +-
 .../libxc/{xc_sr_common.h => xg_sr_common.h}  |   4 +-
 ...{xc_sr_common_x86.c => xg_sr_common_x86.c} |   2 +-
 ...{xc_sr_common_x86.h => xg_sr_common_x86.h} |   2 +-
 ..._common_x86_pv.c => xg_sr_common_x86_pv.c} |   2 +-
 ..._common_x86_pv.h => xg_sr_common_x86_pv.h} |   2 +-
 .../{xc_sr_restore.c => xg_sr_restore.c}      |   2 +-
 ...tore_x86_hvm.c => xg_sr_restore_x86_hvm.c} |   2 +-
 ...estore_x86_pv.c => xg_sr_restore_x86_pv.c} |   2 +-
 tools/libxc/{xc_sr_save.c => xg_sr_save.c}    |   2 +-
 ...sr_save_x86_hvm.c => xg_sr_save_x86_hvm.c} |   2 +-
 ...c_sr_save_x86_pv.c => xg_sr_save_x86_pv.c} |   2 +-
 ..._stream_format.h => xg_sr_stream_format.h} |   0
 tools/libxc/{xc_suspend.c => xg_suspend.c}    |   0
 tools/libxl/libxl_arm.c                       |   2 +-
 tools/libxl/libxl_arm.h                       |   2 +-
 tools/libxl/libxl_create.c                    |   2 +-
 tools/libxl/libxl_dm.c                        |   2 +-
 tools/libxl/libxl_dom.c                       |   2 +-
 tools/libxl/libxl_internal.h                  |   2 +-
 tools/libxl/libxl_vnuma.c                     |   2 +-
 tools/libxl/libxl_x86.c                       |   2 +-
 tools/libxl/libxl_x86_acpi.c                  |   2 +-
 tools/python/xen/lowlevel/xc/xc.c             |   2 +-
 tools/xcutils/readnotes.c                     |   2 +-
 65 files changed, 284 insertions(+), 430 deletions(-)
 rename tools/libxc/include/{xc_dom.h => xenctrl_dom.h} (98%)
 delete mode 100644 tools/libxc/xc_efi.h
 delete mode 100644 tools/libxc/xc_elf.h
 rename tools/libxc/{xc_cpuid_x86.c => xg_cpuid_x86.c} (100%)
 rename tools/libxc/{xc_dom_arm.c => xg_dom_arm.c} (99%)
 rename tools/libxc/{xc_dom_armzimageloader.c => xg_dom_armzimageloader.c} (99%)
 rename tools/libxc/{xc_dom_binloader.c => xg_dom_binloader.c} (99%)
 rename tools/libxc/{xc_dom_boot.c => xg_dom_boot.c} (99%)
 rename tools/libxc/{xc_dom_bzimageloader.c => xg_dom_bzimageloader.c} (99%)
 rename tools/libxc/{xc_dom_compat_linux.c => xg_dom_compat_linux.c} (99%)
 rename tools/libxc/{xc_dom_core.c => xg_dom_core.c} (99%)
 rename tools/libxc/{xc_dom_decompress.h => xg_dom_decompress.h} (62%)
 rename tools/libxc/{xc_dom_decompress_lz4.c => xg_dom_decompress_lz4.c} (98%)
 rename tools/libxc/{xc_dom_decompress_unsafe.c => xg_dom_decompress_unsafe.c} (96%)
 rename tools/libxc/{xc_dom_decompress_unsafe.h => xg_dom_decompress_unsafe.h} (97%)
 rename tools/libxc/{xc_dom_decompress_unsafe_bzip2.c => xg_dom_decompress_unsafe_bzip2.c} (87%)
 rename tools/libxc/{xc_dom_decompress_unsafe_lzma.c => xg_dom_decompress_unsafe_lzma.c} (87%)
 rename tools/libxc/{xc_dom_decompress_unsafe_lzo1x.c => xg_dom_decompress_unsafe_lzo1x.c} (96%)
 rename tools/libxc/{xc_dom_decompress_unsafe_xz.c => xg_dom_decompress_unsafe_xz.c} (95%)
 rename tools/libxc/{xc_dom_elfloader.c => xg_dom_elfloader.c} (99%)
 rename tools/libxc/{xc_dom_hvmloader.c => xg_dom_hvmloader.c} (99%)
 rename tools/libxc/{xc_dom_x86.c => xg_dom_x86.c} (99%)
 create mode 100644 tools/libxc/xg_domain.c
 rename tools/libxc/{xc_nomigrate.c => xg_nomigrate.c} (100%)
 rename tools/libxc/{xc_offline_page.c => xg_offline_page.c} (99%)
 rename tools/libxc/{xc_sr_common.c => xg_sr_common.c} (99%)
 rename tools/libxc/{xc_sr_common.h => xg_sr_common.h} (99%)
 rename tools/libxc/{xc_sr_common_x86.c => xg_sr_common_x86.c} (99%)
 rename tools/libxc/{xc_sr_common_x86.h => xg_sr_common_x86.h} (98%)
 rename tools/libxc/{xc_sr_common_x86_pv.c => xg_sr_common_x86_pv.c} (99%)
 rename tools/libxc/{xc_sr_common_x86_pv.h => xg_sr_common_x86_pv.h} (98%)
 rename tools/libxc/{xc_sr_restore.c => xg_sr_restore.c} (99%)
 rename tools/libxc/{xc_sr_restore_x86_hvm.c => xg_sr_restore_x86_hvm.c} (99%)
 rename tools/libxc/{xc_sr_restore_x86_pv.c => xg_sr_restore_x86_pv.c} (99%)
 rename tools/libxc/{xc_sr_save.c => xg_sr_save.c} (99%)
 rename tools/libxc/{xc_sr_save_x86_hvm.c => xg_sr_save_x86_hvm.c} (99%)
 rename tools/libxc/{xc_sr_save_x86_pv.c => xg_sr_save_x86_pv.c} (99%)
 rename tools/libxc/{xc_sr_stream_format.h => xg_sr_stream_format.h} (100%)
 rename tools/libxc/{xc_suspend.c => xg_suspend.c} (100%)

diff --git a/stubdom/grub/kexec.c b/stubdom/grub/kexec.c
index 0e68b969a2..24001220a9 100644
--- a/stubdom/grub/kexec.c
+++ b/stubdom/grub/kexec.c
@@ -20,7 +20,7 @@
 #include <sys/mman.h>
 
 #include <xenctrl.h>
-#include <xc_dom.h>
+#include <xenctrl_dom.h>
 
 #include <kernel.h>
 #include <console.h>
diff --git a/tools/helpers/init-xenstore-domain.c b/tools/helpers/init-xenstore-domain.c
index 4ce8299c3c..5bdb48dc80 100644
--- a/tools/helpers/init-xenstore-domain.c
+++ b/tools/helpers/init-xenstore-domain.c
@@ -8,7 +8,7 @@
 #include <sys/ioctl.h>
 #include <sys/mman.h>
 #include <xenctrl.h>
-#include <xc_dom.h>
+#include <xenctrl_dom.h>
 #include <xenstore.h>
 #include <xen/sys/xenbus_dev.h>
 #include <xen-xsm/flask/flask.h>
diff --git a/tools/libxc/Makefile b/tools/libxc/Makefile
index 1e64116bd4..6f94b5bb4c 100644
--- a/tools/libxc/Makefile
+++ b/tools/libxc/Makefile
@@ -52,20 +52,22 @@ CTRL_SRCS-y       += xc_gnttab_compat.c
 CTRL_SRCS-y       += xc_devicemodel_compat.c
 
 GUEST_SRCS-y :=
-GUEST_SRCS-y += xg_private.c xc_suspend.c
+GUEST_SRCS-y += xg_private.c
+GUEST_SRCS-y += xg_domain.c
+GUEST_SRCS-y += xg_suspend.c
 ifeq ($(CONFIG_MIGRATE),y)
-GUEST_SRCS-y += xc_sr_common.c
-GUEST_SRCS-$(CONFIG_X86) += xc_sr_common_x86.c
-GUEST_SRCS-$(CONFIG_X86) += xc_sr_common_x86_pv.c
-GUEST_SRCS-$(CONFIG_X86) += xc_sr_restore_x86_pv.c
-GUEST_SRCS-$(CONFIG_X86) += xc_sr_restore_x86_hvm.c
-GUEST_SRCS-$(CONFIG_X86) += xc_sr_save_x86_pv.c
-GUEST_SRCS-$(CONFIG_X86) += xc_sr_save_x86_hvm.c
-GUEST_SRCS-y += xc_sr_restore.c
-GUEST_SRCS-y += xc_sr_save.c
-GUEST_SRCS-y += xc_offline_page.c
+GUEST_SRCS-y += xg_sr_common.c
+GUEST_SRCS-$(CONFIG_X86) += xg_sr_common_x86.c
+GUEST_SRCS-$(CONFIG_X86) += xg_sr_common_x86_pv.c
+GUEST_SRCS-$(CONFIG_X86) += xg_sr_restore_x86_pv.c
+GUEST_SRCS-$(CONFIG_X86) += xg_sr_restore_x86_hvm.c
+GUEST_SRCS-$(CONFIG_X86) += xg_sr_save_x86_pv.c
+GUEST_SRCS-$(CONFIG_X86) += xg_sr_save_x86_hvm.c
+GUEST_SRCS-y += xg_sr_restore.c
+GUEST_SRCS-y += xg_sr_save.c
+GUEST_SRCS-y += xg_offline_page.c
 else
-GUEST_SRCS-y += xc_nomigrate.c
+GUEST_SRCS-y += xg_nomigrate.c
 endif
 
 vpath %.c ../../xen/common/libelf
@@ -86,25 +88,26 @@ GUEST_SRCS-y                 += cpuid.c msr.c
 endif
 
 # new domain builder
-GUEST_SRCS-y                 += xc_dom_core.c xc_dom_boot.c
-GUEST_SRCS-y                 += xc_dom_elfloader.c
-GUEST_SRCS-$(CONFIG_X86)     += xc_dom_bzimageloader.c
-GUEST_SRCS-$(CONFIG_X86)     += xc_dom_decompress_lz4.c
-GUEST_SRCS-$(CONFIG_X86)     += xc_dom_hvmloader.c
-GUEST_SRCS-$(CONFIG_ARM)     += xc_dom_armzimageloader.c
-GUEST_SRCS-y                 += xc_dom_binloader.c
-GUEST_SRCS-y                 += xc_dom_compat_linux.c
-
-GUEST_SRCS-$(CONFIG_X86)     += xc_dom_x86.c
-GUEST_SRCS-$(CONFIG_X86)     += xc_cpuid_x86.c
-GUEST_SRCS-$(CONFIG_ARM)     += xc_dom_arm.c
+GUEST_SRCS-y                 += xg_dom_core.c
+GUEST_SRCS-y                 += xg_dom_boot.c
+GUEST_SRCS-y                 += xg_dom_elfloader.c
+GUEST_SRCS-$(CONFIG_X86)     += xg_dom_bzimageloader.c
+GUEST_SRCS-$(CONFIG_X86)     += xg_dom_decompress_lz4.c
+GUEST_SRCS-$(CONFIG_X86)     += xg_dom_hvmloader.c
+GUEST_SRCS-$(CONFIG_ARM)     += xg_dom_armzimageloader.c
+GUEST_SRCS-y                 += xg_dom_binloader.c
+GUEST_SRCS-y                 += xg_dom_compat_linux.c
+
+GUEST_SRCS-$(CONFIG_X86)     += xg_dom_x86.c
+GUEST_SRCS-$(CONFIG_X86)     += xg_cpuid_x86.c
+GUEST_SRCS-$(CONFIG_ARM)     += xg_dom_arm.c
 
 ifeq ($(CONFIG_LIBXC_MINIOS),y)
-GUEST_SRCS-y                 += xc_dom_decompress_unsafe.c
-GUEST_SRCS-y                 += xc_dom_decompress_unsafe_bzip2.c
-GUEST_SRCS-y                 += xc_dom_decompress_unsafe_lzma.c
-GUEST_SRCS-y                 += xc_dom_decompress_unsafe_lzo1x.c
-GUEST_SRCS-y                 += xc_dom_decompress_unsafe_xz.c
+GUEST_SRCS-y                 += xg_dom_decompress_unsafe.c
+GUEST_SRCS-y                 += xg_dom_decompress_unsafe_bzip2.c
+GUEST_SRCS-y                 += xg_dom_decompress_unsafe_lzma.c
+GUEST_SRCS-y                 += xg_dom_decompress_unsafe_lzo1x.c
+GUEST_SRCS-y                 += xg_dom_decompress_unsafe_xz.c
 endif
 
 -include $(XEN_TARGET_ARCH)/Makefile
@@ -190,7 +193,7 @@ install: build
 	$(INSTALL_DATA) libxenctrl.a $(DESTDIR)$(libdir)
 	$(SYMLINK_SHLIB) libxenctrl.so.$(MAJOR).$(MINOR) $(DESTDIR)$(libdir)/libxenctrl.so.$(MAJOR)
 	$(SYMLINK_SHLIB) libxenctrl.so.$(MAJOR) $(DESTDIR)$(libdir)/libxenctrl.so
-	$(INSTALL_DATA) include/xenctrl.h include/xenctrl_compat.h $(DESTDIR)$(includedir)
+	$(INSTALL_DATA) include/xenctrl.h include/xenctrl_compat.h include/xenctrl_dom.h $(DESTDIR)$(includedir)
 	$(INSTALL_SHLIB) libxenguest.so.$(MAJOR).$(MINOR) $(DESTDIR)$(libdir)
 	$(INSTALL_DATA) libxenguest.a $(DESTDIR)$(libdir)
 	$(SYMLINK_SHLIB) libxenguest.so.$(MAJOR).$(MINOR) $(DESTDIR)$(libdir)/libxenguest.so.$(MAJOR)
@@ -210,6 +213,7 @@ uninstall:
 	rm -f $(DESTDIR)$(PKG_INSTALLDIR)/xencontrol.pc
 	rm -f $(DESTDIR)$(includedir)/xenctrl.h
 	rm -f $(DESTDIR)$(includedir)/xenctrl_compat.h
+	rm -f $(DESTDIR)$(includedir)/xenctrl_dom.h
 	rm -f $(DESTDIR)$(libdir)/libxenctrl.so
 	rm -f $(DESTDIR)$(libdir)/libxenctrl.so.$(MAJOR)
 	rm -f $(DESTDIR)$(libdir)/libxenctrl.so.$(MAJOR).$(MINOR)
diff --git a/tools/libxc/include/xc_dom.h b/tools/libxc/include/xenctrl_dom.h
similarity index 98%
rename from tools/libxc/include/xc_dom.h
rename to tools/libxc/include/xenctrl_dom.h
index 52a4d6c8c0..40b85b7755 100644
--- a/tools/libxc/include/xc_dom.h
+++ b/tools/libxc/include/xenctrl_dom.h
@@ -17,9 +17,7 @@
 #define _XC_DOM_H
 
 #include <xen/libelf/libelf.h>
-#include <xenguest.h>
 
-#define INVALID_PFN ((xen_pfn_t)-1)
 #define X86_HVM_NR_SPECIAL_PAGES    8
 #define X86_HVM_END_SPECIAL_REGION  0xff000u
 #define XG_MAX_MODULES 2
@@ -38,6 +36,12 @@ struct xc_dom_seg {
     xen_pfn_t pages;
 };
 
+struct xc_hvm_firmware_module {
+    uint8_t  *data;
+    uint32_t  length;
+    uint64_t  guest_addr_out;
+};
+
 struct xc_dom_mem {
     struct xc_dom_mem *next;
     void *ptr;
@@ -255,6 +259,8 @@ struct xc_dom_arch {
     int (*setup_pgtables) (struct xc_dom_image * dom);
 
     /* arch-specific data structs setup */
+    /* in Mini-OS environment start_info might be a macro, avoid collision. */
+#undef start_info
     int (*start_info) (struct xc_dom_image * dom);
     int (*shared_info) (struct xc_dom_image * dom, void *shared_info);
     int (*vcpu) (struct xc_dom_image * dom);
diff --git a/tools/libxc/include/xenguest.h b/tools/libxc/include/xenguest.h
index 7a12d21ff2..4643384790 100644
--- a/tools/libxc/include/xenguest.h
+++ b/tools/libxc/include/xenguest.h
@@ -22,6 +22,8 @@
 #ifndef XENGUEST_H
 #define XENGUEST_H
 
+#include <xenctrl_dom.h>
+
 #define XC_NUMA_NO_NODE   (~0U)
 
 #define XCFLAGS_LIVE      (1 << 0)
@@ -249,12 +251,6 @@ int xc_linux_build(xc_interface *xch,
                    unsigned int console_evtchn,
                    unsigned long *console_mfn);
 
-struct xc_hvm_firmware_module {
-    uint8_t  *data;
-    uint32_t  length;
-    uint64_t  guest_addr_out;
-};
-
 /*
  * Sets *lockfd to -1.
  * Has deallocated everything even on error.
diff --git a/tools/libxc/xc_core.c b/tools/libxc/xc_core.c
index 2ee1d205b4..e8c6fb96f9 100644
--- a/tools/libxc/xc_core.c
+++ b/tools/libxc/xc_core.c
@@ -60,12 +60,13 @@
  *
  */
 
-#include "xg_private.h"
+#include "xc_private.h"
 #include "xc_core.h"
-#include "xc_dom.h"
 #include <stdlib.h>
 #include <unistd.h>
 
+#include <xen/libelf/libelf.h>
+
 /* number of pages to write at a time */
 #define DUMP_INCREMENT (4 * 1024)
 
diff --git a/tools/libxc/xc_core.h b/tools/libxc/xc_core.h
index ed7ed53ca5..36fb755da2 100644
--- a/tools/libxc/xc_core.h
+++ b/tools/libxc/xc_core.h
@@ -21,7 +21,7 @@
 #define XC_CORE_H
 
 #include "xen/version.h"
-#include "xg_private.h"
+#include "xc_private.h"
 #include "xen/libelf/elfstructs.h"
 
 /* section names */
diff --git a/tools/libxc/xc_core_arm.c b/tools/libxc/xc_core_arm.c
index c3c492c971..7b587b4cc5 100644
--- a/tools/libxc/xc_core_arm.c
+++ b/tools/libxc/xc_core_arm.c
@@ -16,7 +16,7 @@
  *
  */
 
-#include "xg_private.h"
+#include "xc_private.h"
 #include "xc_core.h"
 
 #include <xen-tools/libs.h>
diff --git a/tools/libxc/xc_core_x86.c b/tools/libxc/xc_core_x86.c
index 54852a2d1a..cb76e6207b 100644
--- a/tools/libxc/xc_core_x86.c
+++ b/tools/libxc/xc_core_x86.c
@@ -17,12 +17,10 @@
  *
  */
 
-#include "xg_private.h"
+#include "xc_private.h"
 #include "xc_core.h"
 #include <xen/hvm/e820.h>
 
-#define GET_FIELD(_p, _f) ((dinfo->guest_width==8) ? ((_p)->x64._f) : ((_p)->x32._f))
-
 int
 xc_core_arch_gpfn_may_present(struct xc_core_arch_context *arch_ctxt,
                               unsigned long pfn)
@@ -98,7 +96,7 @@ xc_core_arch_map_p2m_rw(xc_interface *xch, struct domain_info_context *dinfo, xc
 
     live_p2m_frame_list_list =
         xc_map_foreign_range(xch, dom, PAGE_SIZE, PROT_READ,
-                             GET_FIELD(live_shinfo, arch.pfn_to_mfn_frame_list_list));
+                             GET_FIELD(live_shinfo, arch.pfn_to_mfn_frame_list_list, dinfo->guest_width));
 
     if ( !live_p2m_frame_list_list )
     {
diff --git a/tools/libxc/xc_domain.c b/tools/libxc/xc_domain.c
index 71829c2bce..43fab50c06 100644
--- a/tools/libxc/xc_domain.c
+++ b/tools/libxc/xc_domain.c
@@ -21,8 +21,7 @@
 
 #include "xc_private.h"
 #include "xc_core.h"
-#include "xg_private.h"
-#include "xg_save_restore.h"
+#include "xc_private.h"
 #include <xen/memory.h>
 #include <xen/hvm/hvm_op.h>
 
@@ -1892,132 +1891,6 @@ int xc_domain_unbind_pt_spi_irq(xc_interface *xch,
                                         PT_IRQ_TYPE_SPI, 0, 0, 0, 0, spi));
 }
 
-int xc_unmap_domain_meminfo(xc_interface *xch, struct xc_domain_meminfo *minfo)
-{
-    struct domain_info_context _di = { .guest_width = minfo->guest_width,
-                                       .p2m_size = minfo->p2m_size};
-    struct domain_info_context *dinfo = &_di;
-
-    free(minfo->pfn_type);
-    if ( minfo->p2m_table )
-        munmap(minfo->p2m_table, P2M_FL_ENTRIES * PAGE_SIZE);
-    minfo->p2m_table = NULL;
-
-    return 0;
-}
-
-int xc_map_domain_meminfo(xc_interface *xch, uint32_t domid,
-                          struct xc_domain_meminfo *minfo)
-{
-    struct domain_info_context _di;
-    struct domain_info_context *dinfo = &_di;
-
-    xc_dominfo_t info;
-    shared_info_any_t *live_shinfo;
-    xen_capabilities_info_t xen_caps = "";
-    int i;
-
-    /* Only be initialized once */
-    if ( minfo->pfn_type || minfo->p2m_table )
-    {
-        errno = EINVAL;
-        return -1;
-    }
-
-    if ( xc_domain_getinfo(xch, domid, 1, &info) != 1 )
-    {
-        PERROR("Could not get domain info");
-        return -1;
-    }
-
-    if ( xc_domain_get_guest_width(xch, domid, &minfo->guest_width) )
-    {
-        PERROR("Could not get domain address size");
-        return -1;
-    }
-    _di.guest_width = minfo->guest_width;
-
-    /* Get page table levels (see get_platform_info() in xg_save_restore.h */
-    if ( xc_version(xch, XENVER_capabilities, &xen_caps) )
-    {
-        PERROR("Could not get Xen capabilities (for page table levels)");
-        return -1;
-    }
-    if ( strstr(xen_caps, "xen-3.0-x86_64") )
-        /* Depends on whether it's a compat 32-on-64 guest */
-        minfo->pt_levels = ( (minfo->guest_width == 8) ? 4 : 3 );
-    else if ( strstr(xen_caps, "xen-3.0-x86_32p") )
-        minfo->pt_levels = 3;
-    else if ( strstr(xen_caps, "xen-3.0-x86_32") )
-        minfo->pt_levels = 2;
-    else
-    {
-        errno = EFAULT;
-        return -1;
-    }
-
-    /* We need the shared info page for mapping the P2M */
-    live_shinfo = xc_map_foreign_range(xch, domid, PAGE_SIZE, PROT_READ,
-                                       info.shared_info_frame);
-    if ( !live_shinfo )
-    {
-        PERROR("Could not map the shared info frame (MFN 0x%lx)",
-               info.shared_info_frame);
-        return -1;
-    }
-
-    if ( xc_core_arch_map_p2m_writable(xch, minfo->guest_width, &info,
-                                       live_shinfo, &minfo->p2m_table,
-                                       &minfo->p2m_size) )
-    {
-        PERROR("Could not map the P2M table");
-        munmap(live_shinfo, PAGE_SIZE);
-        return -1;
-    }
-    munmap(live_shinfo, PAGE_SIZE);
-    _di.p2m_size = minfo->p2m_size;
-
-    /* Make space and prepare for getting the PFN types */
-    minfo->pfn_type = calloc(sizeof(*minfo->pfn_type), minfo->p2m_size);
-    if ( !minfo->pfn_type )
-    {
-        PERROR("Could not allocate memory for the PFN types");
-        goto failed;
-    }
-    for ( i = 0; i < minfo->p2m_size; i++ )
-        minfo->pfn_type[i] = xc_pfn_to_mfn(i, minfo->p2m_table,
-                                           minfo->guest_width);
-
-    /* Retrieve PFN types in batches */
-    for ( i = 0; i < minfo->p2m_size ; i+=1024 )
-    {
-        int count = ((minfo->p2m_size - i ) > 1024 ) ?
-                        1024: (minfo->p2m_size - i);
-
-        if ( xc_get_pfn_type_batch(xch, domid, count, minfo->pfn_type + i) )
-        {
-            PERROR("Could not get %d-eth batch of PFN types", (i+1)/1024);
-            goto failed;
-        }
-    }
-
-    return 0;
-
-failed:
-    if ( minfo->pfn_type )
-    {
-        free(minfo->pfn_type);
-        minfo->pfn_type = NULL;
-    }
-    if ( minfo->p2m_table )
-    {
-        munmap(minfo->p2m_table, P2M_FL_ENTRIES * PAGE_SIZE);
-        minfo->p2m_table = NULL;
-    }
-
-    return -1;
-}
-
 int xc_domain_memory_mapping(
     xc_interface *xch,
     uint32_t domid,
diff --git a/tools/libxc/xc_efi.h b/tools/libxc/xc_efi.h
deleted file mode 100644
index dbe105be8f..0000000000
--- a/tools/libxc/xc_efi.h
+++ /dev/null
@@ -1,158 +0,0 @@
-/*
- * Extensible Firmware Interface
- * Based on 'Extensible Firmware Interface Specification' version 0.9, April 30, 1999
- *
- * This library is free software; you can redistribute it and/or
- * modify it under the terms of the GNU Lesser General Public
- * License as published by the Free Software Foundation;
- * version 2.1 of the License.
- *
- * This library is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
- * Lesser General Public License for more details.
- *
- * You should have received a copy of the GNU Lesser General Public
- * License along with this library; If not, see <http://www.gnu.org/licenses/>.
- *
- * Copyright (C) 1999 VA Linux Systems
- * Copyright (C) 1999 Walt Drummond <drummond@valinux.com>
- * Copyright (C) 1999, 2002-2003 Hewlett-Packard Co.
- *      David Mosberger-Tang <davidm@hpl.hp.com>
- *      Stephane Eranian <eranian@hpl.hp.com>
- */
-
-#ifndef XC_EFI_H
-#define XC_EFI_H
-
-/* definitions from xen/include/asm-ia64/linux-xen/linux/efi.h */
-
-typedef struct {
-        uint8_t b[16];
-} efi_guid_t;
-
-#define EFI_GUID(a,b,c,d0,d1,d2,d3,d4,d5,d6,d7) \
-((efi_guid_t) \
-{{ (a) & 0xff, ((a) >> 8) & 0xff, ((a) >> 16) & 0xff, ((a) >> 24) & 0xff, \
-  (b) & 0xff, ((b) >> 8) & 0xff, \
-  (c) & 0xff, ((c) >> 8) & 0xff, \
-  (d0), (d1), (d2), (d3), (d4), (d5), (d6), (d7) }})
-
-/*
- * Generic EFI table header
- */
-typedef struct {
-	uint64_t signature;
-	uint32_t revision;
-	uint32_t headersize;
-	uint32_t crc32;
-	uint32_t reserved;
-} efi_table_hdr_t;
-
-/*
- * Memory map descriptor:
- */
-
-/* Memory types: */
-#define EFI_RESERVED_TYPE                0
-#define EFI_LOADER_CODE                  1
-#define EFI_LOADER_DATA                  2
-#define EFI_BOOT_SERVICES_CODE           3
-#define EFI_BOOT_SERVICES_DATA           4
-#define EFI_RUNTIME_SERVICES_CODE        5
-#define EFI_RUNTIME_SERVICES_DATA        6
-#define EFI_CONVENTIONAL_MEMORY          7
-#define EFI_UNUSABLE_MEMORY              8
-#define EFI_ACPI_RECLAIM_MEMORY          9
-#define EFI_ACPI_MEMORY_NVS             10
-#define EFI_MEMORY_MAPPED_IO            11
-#define EFI_MEMORY_MAPPED_IO_PORT_SPACE 12
-#define EFI_PAL_CODE                    13
-#define EFI_MAX_MEMORY_TYPE             14
-
-/* Attribute values: */
-#define EFI_MEMORY_UC           ((uint64_t)0x0000000000000001ULL)    /* uncached */
-#define EFI_MEMORY_WC           ((uint64_t)0x0000000000000002ULL)    /* write-coalescing */
-#define EFI_MEMORY_WT           ((uint64_t)0x0000000000000004ULL)    /* write-through */
-#define EFI_MEMORY_WB           ((uint64_t)0x0000000000000008ULL)    /* write-back */
-#define EFI_MEMORY_WP           ((uint64_t)0x0000000000001000ULL)    /* write-protect */
-#define EFI_MEMORY_RP           ((uint64_t)0x0000000000002000ULL)    /* read-protect */
-#define EFI_MEMORY_XP           ((uint64_t)0x0000000000004000ULL)    /* execute-protect */
-#define EFI_MEMORY_RUNTIME      ((uint64_t)0x8000000000000000ULL)    /* range requires runtime mapping */
-#define EFI_MEMORY_DESCRIPTOR_VERSION   1
-
-#define EFI_PAGE_SHIFT          12
-
-/*
- * For current x86 implementations of EFI, there is
- * additional padding in the mem descriptors.  This is not
- * the case in ia64.  Need to have this fixed in the f/w.
- */
-typedef struct {
-        uint32_t type;
-        uint32_t pad;
-        uint64_t phys_addr;
-        uint64_t virt_addr;
-        uint64_t num_pages;
-        uint64_t attribute;
-#if defined (__i386__)
-        uint64_t pad1;
-#endif
-} efi_memory_desc_t;
-
-/*
- * EFI Runtime Services table
- */
-#define EFI_RUNTIME_SERVICES_SIGNATURE	((uint64_t)0x5652453544e5552ULL)
-#define EFI_RUNTIME_SERVICES_REVISION	0x00010000
-
-typedef struct {
-	efi_table_hdr_t hdr;
-	unsigned long get_time;
-	unsigned long set_time;
-	unsigned long get_wakeup_time;
-	unsigned long set_wakeup_time;
-	unsigned long set_virtual_address_map;
-	unsigned long convert_pointer;
-	unsigned long get_variable;
-	unsigned long get_next_variable;
-	unsigned long set_variable;
-	unsigned long get_next_high_mono_count;
-	unsigned long reset_system;
-} efi_runtime_services_t;
-
-/*
- *  EFI Configuration Table and GUID definitions
- */
-#define NULL_GUID \
-    EFI_GUID(  0x00000000, 0x0000, 0x0000, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 )
-#define ACPI_20_TABLE_GUID    \
-    EFI_GUID(  0x8868e871, 0xe4f1, 0x11d3, 0xbc, 0x22, 0x0, 0x80, 0xc7, 0x3c, 0x88, 0x81 )
-#define SAL_SYSTEM_TABLE_GUID    \
-    EFI_GUID(  0xeb9d2d32, 0x2d88, 0x11d3, 0x9a, 0x16, 0x0, 0x90, 0x27, 0x3f, 0xc1, 0x4d )
-
-typedef struct {
-	efi_guid_t guid;
-	unsigned long table;
-} efi_config_table_t;
-
-#define EFI_SYSTEM_TABLE_SIGNATURE ((uint64_t)0x5453595320494249ULL)
-#define EFI_SYSTEM_TABLE_REVISION  ((1 << 16) | 00)
-
-typedef struct {
-	efi_table_hdr_t hdr;
-	unsigned long fw_vendor;	/* physical addr of CHAR16 vendor string */
-	uint32_t fw_revision;
-	unsigned long con_in_handle;
-	unsigned long con_in;
-	unsigned long con_out_handle;
-	unsigned long con_out;
-	unsigned long stderr_handle;
-	unsigned long stderr;
-	efi_runtime_services_t *runtime;
-	unsigned long boottime;
-	unsigned long nr_tables;
-	unsigned long tables;
-} efi_system_table_t;
-
-#endif /* XC_EFI_H */
diff --git a/tools/libxc/xc_elf.h b/tools/libxc/xc_elf.h
deleted file mode 100644
index acbc0280bd..0000000000
--- a/tools/libxc/xc_elf.h
+++ /dev/null
@@ -1,16 +0,0 @@
-/*
- * This library is free software; you can redistribute it and/or
- * modify it under the terms of the GNU Lesser General Public
- * License as published by the Free Software Foundation;
- * version 2.1 of the License.
- *
- * This library is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
- * Lesser General Public License for more details.
- *
- * You should have received a copy of the GNU Lesser General Public
- * License along with this library; If not, see <http://www.gnu.org/licenses/>.
- */
-
-#include <xen/libelf/elfstructs.h>
diff --git a/tools/libxc/xc_hcall_buf.c b/tools/libxc/xc_hcall_buf.c
index c1230a1e2b..200671f36f 100644
--- a/tools/libxc/xc_hcall_buf.c
+++ b/tools/libxc/xc_hcall_buf.c
@@ -19,7 +19,6 @@
 #include <string.h>
 
 #include "xc_private.h"
-#include "xg_private.h"
 
 xc_hypercall_buffer_t XC__HYPERCALL_BUFFER_NAME(HYPERCALL_BUFFER_NULL) = {
     .hbuf = NULL,
diff --git a/tools/libxc/xc_private.c b/tools/libxc/xc_private.c
index 90974d572e..8af96b1b7e 100644
--- a/tools/libxc/xc_private.c
+++ b/tools/libxc/xc_private.c
@@ -18,8 +18,7 @@
  */
 
 #include "xc_private.h"
-#include "xg_private.h"
-#include "xc_dom.h"
+#include "xenctrl_dom.h"
 #include <stdarg.h>
 #include <stdlib.h>
 #include <unistd.h>
diff --git a/tools/libxc/xc_private.h b/tools/libxc/xc_private.h
index c77edb3c4c..f0b5f83ac8 100644
--- a/tools/libxc/xc_private.h
+++ b/tools/libxc/xc_private.h
@@ -26,6 +26,7 @@
 #include <sys/types.h>
 #include <sys/stat.h>
 #include <stdlib.h>
+#include <limits.h>
 #include <sys/ioctl.h>
 
 #include "_paths.h"
@@ -62,6 +63,39 @@ struct iovec {
 #include <sys/uio.h>
 #endif
 
+#define ROUNDUP(_x,_w) (((unsigned long)(_x)+(1UL<<(_w))-1) & ~((1UL<<(_w))-1))
+
+#define GET_FIELD(_p, _f, _w) (((_w) == 8) ? ((_p)->x64._f) : ((_p)->x32._f))
+
+#define SET_FIELD(_p, _f, _v, _w) do {          \
+    if ((_w) == 8)                              \
+        (_p)->x64._f = (_v);                    \
+    else                                        \
+        (_p)->x32._f = (_v);                    \
+} while (0)
+
+/* XXX SMH: following skanky macros rely on variable p2m_size being set */
+/* XXX TJD: also, "guest_width" should be the guest's sizeof(unsigned long) */
+
+struct domain_info_context {
+    unsigned int guest_width;
+    unsigned long p2m_size;
+};
+
+/* Number of xen_pfn_t in a page */
+#define FPP             (PAGE_SIZE/(dinfo->guest_width))
+
+/* Number of entries in the pfn_to_mfn_frame_list_list */
+#define P2M_FLL_ENTRIES (((dinfo->p2m_size)+(FPP*FPP)-1)/(FPP*FPP))
+
+/* Number of entries in the pfn_to_mfn_frame_list */
+#define P2M_FL_ENTRIES  (((dinfo->p2m_size)+FPP-1)/FPP)
+
+/* Size in bytes of the pfn_to_mfn_frame_list     */
+#define P2M_GUEST_FL_SIZE ((P2M_FL_ENTRIES) * (dinfo->guest_width))
+#define P2M_TOOLS_FL_SIZE ((P2M_FL_ENTRIES) *                           \
+                           max_t(size_t, sizeof(xen_pfn_t), dinfo->guest_width))
+
 #define DECLARE_DOMCTL struct xen_domctl domctl
 #define DECLARE_SYSCTL struct xen_sysctl sysctl
 #define DECLARE_PHYSDEV_OP struct physdev_op physdev_op
@@ -75,6 +109,8 @@ struct iovec {
 #define PAGE_SIZE               XC_PAGE_SIZE
 #define PAGE_MASK               XC_PAGE_MASK
 
+#define INVALID_PFN ((xen_pfn_t)-1)
+
 /*
 ** Define max dirty page cache to permit during save/restore -- need to balance 
 ** keeping cache usage down with CPU impact of invalidating too often.
diff --git a/tools/libxc/xc_resume.c b/tools/libxc/xc_resume.c
index c169204fac..94c6c9fb31 100644
--- a/tools/libxc/xc_resume.c
+++ b/tools/libxc/xc_resume.c
@@ -14,8 +14,6 @@
  */
 
 #include "xc_private.h"
-#include "xg_private.h"
-#include "xg_save_restore.h"
 
 #if defined(__i386__) || defined(__x86_64__)
 
diff --git a/tools/libxc/xc_cpuid_x86.c b/tools/libxc/xg_cpuid_x86.c
similarity index 100%
rename from tools/libxc/xc_cpuid_x86.c
rename to tools/libxc/xg_cpuid_x86.c
diff --git a/tools/libxc/xc_dom_arm.c b/tools/libxc/xg_dom_arm.c
similarity index 99%
rename from tools/libxc/xc_dom_arm.c
rename to tools/libxc/xg_dom_arm.c
index 931404c222..3f66f1d890 100644
--- a/tools/libxc/xc_dom_arm.c
+++ b/tools/libxc/xg_dom_arm.c
@@ -24,7 +24,7 @@
 #include <xen-tools/libs.h>
 
 #include "xg_private.h"
-#include "xc_dom.h"
+#include "xenctrl_dom.h"
 
 #define NR_MAGIC_PAGES 4
 #define CONSOLE_PFN_OFFSET 0
diff --git a/tools/libxc/xc_dom_armzimageloader.c b/tools/libxc/xg_dom_armzimageloader.c
similarity index 99%
rename from tools/libxc/xc_dom_armzimageloader.c
rename to tools/libxc/xg_dom_armzimageloader.c
index 0df8c2a4b1..4246c8e5fa 100644
--- a/tools/libxc/xc_dom_armzimageloader.c
+++ b/tools/libxc/xg_dom_armzimageloader.c
@@ -25,7 +25,7 @@
 #include <inttypes.h>
 
 #include "xg_private.h"
-#include "xc_dom.h"
+#include "xenctrl_dom.h"
 
 #include <arpa/inet.h> /* XXX ntohl is not the right function... */
 
diff --git a/tools/libxc/xc_dom_binloader.c b/tools/libxc/xg_dom_binloader.c
similarity index 99%
rename from tools/libxc/xc_dom_binloader.c
rename to tools/libxc/xg_dom_binloader.c
index d6f7f2a500..870a921427 100644
--- a/tools/libxc/xc_dom_binloader.c
+++ b/tools/libxc/xg_dom_binloader.c
@@ -83,7 +83,7 @@
 #include <inttypes.h>
 
 #include "xg_private.h"
-#include "xc_dom.h"
+#include "xenctrl_dom.h"
 
 #define round_pgup(_p)    (((_p)+(PAGE_SIZE_X86-1))&PAGE_MASK_X86)
 #define round_pgdown(_p)  ((_p)&PAGE_MASK_X86)
diff --git a/tools/libxc/xc_dom_boot.c b/tools/libxc/xg_dom_boot.c
similarity index 99%
rename from tools/libxc/xc_dom_boot.c
rename to tools/libxc/xg_dom_boot.c
index bb599b33ba..1e31e92244 100644
--- a/tools/libxc/xc_dom_boot.c
+++ b/tools/libxc/xg_dom_boot.c
@@ -31,7 +31,7 @@
 #include <zlib.h>
 
 #include "xg_private.h"
-#include "xc_dom.h"
+#include "xenctrl_dom.h"
 #include "xc_core.h"
 #include <xen/hvm/params.h>
 #include <xen/grant_table.h>
diff --git a/tools/libxc/xc_dom_bzimageloader.c b/tools/libxc/xg_dom_bzimageloader.c
similarity index 99%
rename from tools/libxc/xc_dom_bzimageloader.c
rename to tools/libxc/xg_dom_bzimageloader.c
index a7d70cc7c6..f959a77602 100644
--- a/tools/libxc/xc_dom_bzimageloader.c
+++ b/tools/libxc/xg_dom_bzimageloader.c
@@ -32,7 +32,7 @@
 #include <inttypes.h>
 
 #include "xg_private.h"
-#include "xc_dom_decompress.h"
+#include "xg_dom_decompress.h"
 
 #include <xen-tools/libs.h>
 
diff --git a/tools/libxc/xc_dom_compat_linux.c b/tools/libxc/xg_dom_compat_linux.c
similarity index 99%
rename from tools/libxc/xc_dom_compat_linux.c
rename to tools/libxc/xg_dom_compat_linux.c
index b3d43feed9..b645f0b14b 100644
--- a/tools/libxc/xc_dom_compat_linux.c
+++ b/tools/libxc/xg_dom_compat_linux.c
@@ -30,7 +30,7 @@
 
 #include "xenctrl.h"
 #include "xg_private.h"
-#include "xc_dom.h"
+#include "xenctrl_dom.h"
 
 /* ------------------------------------------------------------------------ */
 
diff --git a/tools/libxc/xc_dom_core.c b/tools/libxc/xg_dom_core.c
similarity index 99%
rename from tools/libxc/xc_dom_core.c
rename to tools/libxc/xg_dom_core.c
index 327c8a8575..1c91cce315 100644
--- a/tools/libxc/xc_dom_core.c
+++ b/tools/libxc/xg_dom_core.c
@@ -32,7 +32,7 @@
 #include <assert.h>
 
 #include "xg_private.h"
-#include "xc_dom.h"
+#include "xenctrl_dom.h"
 #include "_paths.h"
 
 /* ------------------------------------------------------------------------ */
diff --git a/tools/libxc/xc_dom_decompress.h b/tools/libxc/xg_dom_decompress.h
similarity index 62%
rename from tools/libxc/xc_dom_decompress.h
rename to tools/libxc/xg_dom_decompress.h
index 42cefa3f0e..c5ab2e59eb 100644
--- a/tools/libxc/xc_dom_decompress.h
+++ b/tools/libxc/xg_dom_decompress.h
@@ -1,7 +1,7 @@
 #ifndef __MINIOS__
-# include "xc_dom.h"
+# include "xenctrl_dom.h"
 #else
-# include "xc_dom_decompress_unsafe.h"
+# include "xg_dom_decompress_unsafe.h"
 #endif
 
 int xc_try_lz4_decode(struct xc_dom_image *dom, void **blob, size_t *size);
diff --git a/tools/libxc/xc_dom_decompress_lz4.c b/tools/libxc/xg_dom_decompress_lz4.c
similarity index 98%
rename from tools/libxc/xc_dom_decompress_lz4.c
rename to tools/libxc/xg_dom_decompress_lz4.c
index b6a33f27a8..97ba620d86 100644
--- a/tools/libxc/xc_dom_decompress_lz4.c
+++ b/tools/libxc/xg_dom_decompress_lz4.c
@@ -4,7 +4,7 @@
 #include <stdint.h>
 
 #include "xg_private.h"
-#include "xc_dom_decompress.h"
+#include "xg_dom_decompress.h"
 
 #define CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
 
diff --git a/tools/libxc/xc_dom_decompress_unsafe.c b/tools/libxc/xg_dom_decompress_unsafe.c
similarity index 96%
rename from tools/libxc/xc_dom_decompress_unsafe.c
rename to tools/libxc/xg_dom_decompress_unsafe.c
index 164e35558f..21d964787d 100644
--- a/tools/libxc/xc_dom_decompress_unsafe.c
+++ b/tools/libxc/xg_dom_decompress_unsafe.c
@@ -3,7 +3,7 @@
 #include <inttypes.h>
 
 #include "xg_private.h"
-#include "xc_dom_decompress_unsafe.h"
+#include "xg_dom_decompress_unsafe.h"
 
 static struct xc_dom_image *unsafe_dom;
 static unsigned char *output_blob;
diff --git a/tools/libxc/xc_dom_decompress_unsafe.h b/tools/libxc/xg_dom_decompress_unsafe.h
similarity index 97%
rename from tools/libxc/xc_dom_decompress_unsafe.h
rename to tools/libxc/xg_dom_decompress_unsafe.h
index 64f68864b1..fb84b6add8 100644
--- a/tools/libxc/xc_dom_decompress_unsafe.h
+++ b/tools/libxc/xg_dom_decompress_unsafe.h
@@ -1,4 +1,4 @@
-#include "xc_dom.h"
+#include "xenctrl_dom.h"
 
 typedef int decompress_fn(unsigned char *inbuf, unsigned int len,
                           int (*fill)(void*, unsigned int),
diff --git a/tools/libxc/xc_dom_decompress_unsafe_bzip2.c b/tools/libxc/xg_dom_decompress_unsafe_bzip2.c
similarity index 87%
rename from tools/libxc/xc_dom_decompress_unsafe_bzip2.c
rename to tools/libxc/xg_dom_decompress_unsafe_bzip2.c
index 4dcabe4061..9d3709e6cc 100644
--- a/tools/libxc/xc_dom_decompress_unsafe_bzip2.c
+++ b/tools/libxc/xg_dom_decompress_unsafe_bzip2.c
@@ -3,7 +3,7 @@
 #include <inttypes.h>
 
 #include "xg_private.h"
-#include "xc_dom_decompress_unsafe.h"
+#include "xg_dom_decompress_unsafe.h"
 
 #include "../../xen/common/bunzip2.c"
 
diff --git a/tools/libxc/xc_dom_decompress_unsafe_lzma.c b/tools/libxc/xg_dom_decompress_unsafe_lzma.c
similarity index 87%
rename from tools/libxc/xc_dom_decompress_unsafe_lzma.c
rename to tools/libxc/xg_dom_decompress_unsafe_lzma.c
index 4ee8cdbab1..5d178f0c43 100644
--- a/tools/libxc/xc_dom_decompress_unsafe_lzma.c
+++ b/tools/libxc/xg_dom_decompress_unsafe_lzma.c
@@ -3,7 +3,7 @@
 #include <inttypes.h>
 
 #include "xg_private.h"
-#include "xc_dom_decompress_unsafe.h"
+#include "xg_dom_decompress_unsafe.h"
 
 #include "../../xen/common/unlzma.c"
 
diff --git a/tools/libxc/xc_dom_decompress_unsafe_lzo1x.c b/tools/libxc/xg_dom_decompress_unsafe_lzo1x.c
similarity index 96%
rename from tools/libxc/xc_dom_decompress_unsafe_lzo1x.c
rename to tools/libxc/xg_dom_decompress_unsafe_lzo1x.c
index 59888b9da2..a4f8ebd42d 100644
--- a/tools/libxc/xc_dom_decompress_unsafe_lzo1x.c
+++ b/tools/libxc/xg_dom_decompress_unsafe_lzo1x.c
@@ -5,7 +5,7 @@
 #include <stdint.h>
 
 #include "xg_private.h"
-#include "xc_dom_decompress_unsafe.h"
+#include "xg_dom_decompress_unsafe.h"
 
 typedef uint8_t u8;
 typedef uint32_t u32;
diff --git a/tools/libxc/xc_dom_decompress_unsafe_xz.c b/tools/libxc/xg_dom_decompress_unsafe_xz.c
similarity index 95%
rename from tools/libxc/xc_dom_decompress_unsafe_xz.c
rename to tools/libxc/xg_dom_decompress_unsafe_xz.c
index fe7a7f49b4..ff6824b38d 100644
--- a/tools/libxc/xc_dom_decompress_unsafe_xz.c
+++ b/tools/libxc/xg_dom_decompress_unsafe_xz.c
@@ -6,7 +6,7 @@
 #include <inttypes.h>
 
 #include "xg_private.h"
-#include "xc_dom_decompress_unsafe.h"
+#include "xg_dom_decompress_unsafe.h"
 
 // TODO
 #define XZ_DEC_X86
diff --git a/tools/libxc/xc_dom_elfloader.c b/tools/libxc/xg_dom_elfloader.c
similarity index 99%
rename from tools/libxc/xc_dom_elfloader.c
rename to tools/libxc/xg_dom_elfloader.c
index b327db219d..7043c3bbba 100644
--- a/tools/libxc/xc_dom_elfloader.c
+++ b/tools/libxc/xg_dom_elfloader.c
@@ -26,7 +26,7 @@
 #include <inttypes.h>
 
 #include "xg_private.h"
-#include "xc_dom.h"
+#include "xenctrl_dom.h"
 #include "xc_bitops.h"
 
 #define XEN_VER "xen-3.0"
diff --git a/tools/libxc/xc_dom_hvmloader.c b/tools/libxc/xg_dom_hvmloader.c
similarity index 99%
rename from tools/libxc/xc_dom_hvmloader.c
rename to tools/libxc/xg_dom_hvmloader.c
index 3f0bd65547..995a0f3dc3 100644
--- a/tools/libxc/xc_dom_hvmloader.c
+++ b/tools/libxc/xg_dom_hvmloader.c
@@ -26,7 +26,7 @@
 #include <assert.h>
 
 #include "xg_private.h"
-#include "xc_dom.h"
+#include "xenctrl_dom.h"
 #include "xc_bitops.h"
 
 /* ------------------------------------------------------------------------ */
diff --git a/tools/libxc/xc_dom_x86.c b/tools/libxc/xg_dom_x86.c
similarity index 99%
rename from tools/libxc/xc_dom_x86.c
rename to tools/libxc/xg_dom_x86.c
index 9439805eaa..842dbcccdd 100644
--- a/tools/libxc/xc_dom_x86.c
+++ b/tools/libxc/xg_dom_x86.c
@@ -38,7 +38,7 @@
 #include <xen-tools/libs.h>
 
 #include "xg_private.h"
-#include "xc_dom.h"
+#include "xenctrl_dom.h"
 #include "xenctrl.h"
 
 /* ------------------------------------------------------------------------ */
diff --git a/tools/libxc/xg_domain.c b/tools/libxc/xg_domain.c
new file mode 100644
index 0000000000..e982f7f077
--- /dev/null
+++ b/tools/libxc/xg_domain.c
@@ -0,0 +1,149 @@
+/******************************************************************************
+ * xg_domain.c
+ *
+ * API for manipulating and obtaining information on domains.
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation;
+ * version 2.1 of the License.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; If not, see <http://www.gnu.org/licenses/>.
+ *
+ * Copyright (c) 2003, K A Fraser.
+ */
+
+#include "xg_private.h"
+#include "xc_core.h"
+
+int xc_unmap_domain_meminfo(xc_interface *xch, struct xc_domain_meminfo *minfo)
+{
+    struct domain_info_context _di = { .guest_width = minfo->guest_width,
+                                       .p2m_size = minfo->p2m_size};
+    struct domain_info_context *dinfo = &_di;
+
+    free(minfo->pfn_type);
+    if ( minfo->p2m_table )
+        munmap(minfo->p2m_table, P2M_FL_ENTRIES * PAGE_SIZE);
+    minfo->p2m_table = NULL;
+
+    return 0;
+}
+
+int xc_map_domain_meminfo(xc_interface *xch, uint32_t domid,
+                          struct xc_domain_meminfo *minfo)
+{
+    struct domain_info_context _di;
+    struct domain_info_context *dinfo = &_di;
+
+    xc_dominfo_t info;
+    shared_info_any_t *live_shinfo;
+    xen_capabilities_info_t xen_caps = "";
+    int i;
+
+    /* Must only be initialized once */
+    if ( minfo->pfn_type || minfo->p2m_table )
+    {
+        errno = EINVAL;
+        return -1;
+    }
+
+    if ( xc_domain_getinfo(xch, domid, 1, &info) != 1 )
+    {
+        PERROR("Could not get domain info");
+        return -1;
+    }
+
+    if ( xc_domain_get_guest_width(xch, domid, &minfo->guest_width) )
+    {
+        PERROR("Could not get domain address size");
+        return -1;
+    }
+    _di.guest_width = minfo->guest_width;
+
+    /* Get page table levels (see get_platform_info() in xg_save_restore.h) */
+    if ( xc_version(xch, XENVER_capabilities, &xen_caps) )
+    {
+        PERROR("Could not get Xen capabilities (for page table levels)");
+        return -1;
+    }
+    if ( strstr(xen_caps, "xen-3.0-x86_64") )
+        /* Depends on whether it's a compat 32-on-64 guest */
+        minfo->pt_levels = ( (minfo->guest_width == 8) ? 4 : 3 );
+    else if ( strstr(xen_caps, "xen-3.0-x86_32p") )
+        minfo->pt_levels = 3;
+    else if ( strstr(xen_caps, "xen-3.0-x86_32") )
+        minfo->pt_levels = 2;
+    else
+    {
+        errno = EFAULT;
+        return -1;
+    }
+
+    /* We need the shared info page for mapping the P2M */
+    live_shinfo = xc_map_foreign_range(xch, domid, PAGE_SIZE, PROT_READ,
+                                       info.shared_info_frame);
+    if ( !live_shinfo )
+    {
+        PERROR("Could not map the shared info frame (MFN 0x%lx)",
+               info.shared_info_frame);
+        return -1;
+    }
+
+    if ( xc_core_arch_map_p2m_writable(xch, minfo->guest_width, &info,
+                                       live_shinfo, &minfo->p2m_table,
+                                       &minfo->p2m_size) )
+    {
+        PERROR("Could not map the P2M table");
+        munmap(live_shinfo, PAGE_SIZE);
+        return -1;
+    }
+    munmap(live_shinfo, PAGE_SIZE);
+    _di.p2m_size = minfo->p2m_size;
+
+    /* Make space and prepare for getting the PFN types */
+    minfo->pfn_type = calloc(sizeof(*minfo->pfn_type), minfo->p2m_size);
+    if ( !minfo->pfn_type )
+    {
+        PERROR("Could not allocate memory for the PFN types");
+        goto failed;
+    }
+    for ( i = 0; i < minfo->p2m_size; i++ )
+        minfo->pfn_type[i] = xc_pfn_to_mfn(i, minfo->p2m_table,
+                                           minfo->guest_width);
+
+    /* Retrieve PFN types in batches */
+    for ( i = 0; i < minfo->p2m_size; i += 1024 )
+    {
+        int count = ((minfo->p2m_size - i) > 1024) ?
+                        1024 : (minfo->p2m_size - i);
+
+        if ( xc_get_pfn_type_batch(xch, domid, count, minfo->pfn_type + i) )
+        {
+            PERROR("Could not get %d-eth batch of PFN types", (i+1)/1024);
+            goto failed;
+        }
+    }
+
+    return 0;
+
+failed:
+    if ( minfo->pfn_type )
+    {
+        free(minfo->pfn_type);
+        minfo->pfn_type = NULL;
+    }
+    if ( minfo->p2m_table )
+    {
+        munmap(minfo->p2m_table, P2M_FL_ENTRIES * PAGE_SIZE);
+        minfo->p2m_table = NULL;
+    }
+
+    return -1;
+}
diff --git a/tools/libxc/xc_nomigrate.c b/tools/libxc/xg_nomigrate.c
similarity index 100%
rename from tools/libxc/xc_nomigrate.c
rename to tools/libxc/xg_nomigrate.c
diff --git a/tools/libxc/xc_offline_page.c b/tools/libxc/xg_offline_page.c
similarity index 99%
rename from tools/libxc/xc_offline_page.c
rename to tools/libxc/xg_offline_page.c
index 19538fc436..77e8889b11 100644
--- a/tools/libxc/xc_offline_page.c
+++ b/tools/libxc/xg_offline_page.c
@@ -28,7 +28,7 @@
 #include <xc_core.h>
 
 #include "xc_private.h"
-#include "xc_dom.h"
+#include "xenctrl_dom.h"
 #include "xg_private.h"
 #include "xg_save_restore.h"
 
diff --git a/tools/libxc/xg_private.h b/tools/libxc/xg_private.h
index 40b5baecde..0000b2b9b6 100644
--- a/tools/libxc/xg_private.h
+++ b/tools/libxc/xg_private.h
@@ -97,15 +97,6 @@ typedef uint64_t x86_pgentry_t;
 
 #define NRPAGES(x) (ROUNDUP(x, PAGE_SHIFT) >> PAGE_SHIFT)
 
-
-/* XXX SMH: following skanky macros rely on variable p2m_size being set */
-/* XXX TJD: also, "guest_width" should be the guest's sizeof(unsigned long) */
-
-struct domain_info_context {
-    unsigned int guest_width;
-    unsigned long p2m_size;
-};
-
 static inline xen_pfn_t xc_pfn_to_mfn(xen_pfn_t pfn, xen_pfn_t *p2m,
                                       unsigned gwidth)
 {
@@ -121,19 +112,6 @@ static inline xen_pfn_t xc_pfn_to_mfn(xen_pfn_t pfn, xen_pfn_t *p2m,
     }
 }
 
-/* Number of xen_pfn_t in a page */
-#define FPP             (PAGE_SIZE/(dinfo->guest_width))
-
-/* Number of entries in the pfn_to_mfn_frame_list_list */
-#define P2M_FLL_ENTRIES (((dinfo->p2m_size)+(FPP*FPP)-1)/(FPP*FPP))
-
-/* Number of entries in the pfn_to_mfn_frame_list */
-#define P2M_FL_ENTRIES  (((dinfo->p2m_size)+FPP-1)/FPP)
-
-/* Size in bytes of the pfn_to_mfn_frame_list     */
-#define P2M_GUEST_FL_SIZE ((P2M_FL_ENTRIES) * (dinfo->guest_width))
-#define P2M_TOOLS_FL_SIZE ((P2M_FL_ENTRIES) *                           \
-                           max_t(size_t, sizeof(xen_pfn_t), dinfo->guest_width))
 
 /* Masks for PTE<->PFN conversions */
 #define MADDR_BITS_X86  ((dinfo->guest_width == 8) ? 52 : 44)
diff --git a/tools/libxc/xg_save_restore.h b/tools/libxc/xg_save_restore.h
index b904296997..88120eb54b 100644
--- a/tools/libxc/xg_save_restore.h
+++ b/tools/libxc/xg_save_restore.h
@@ -109,15 +109,6 @@ static inline int get_platform_info(xc_interface *xch, uint32_t dom,
 #define M2P_SIZE(_m)    ROUNDUP(((_m) * sizeof(xen_pfn_t)), M2P_SHIFT)
 #define M2P_CHUNKS(_m)  (M2P_SIZE((_m)) >> M2P_SHIFT)
 
-#define GET_FIELD(_p, _f, _w) (((_w) == 8) ? ((_p)->x64._f) : ((_p)->x32._f))
-
-#define SET_FIELD(_p, _f, _v, _w) do {          \
-    if ((_w) == 8)                              \
-        (_p)->x64._f = (_v);                    \
-    else                                        \
-        (_p)->x32._f = (_v);                    \
-} while (0)
-
 #define UNFOLD_CR3(_c)                                                  \
   ((uint64_t)((dinfo->guest_width == 8)                                 \
               ? ((_c) >> 12)                                            \
diff --git a/tools/libxc/xc_sr_common.c b/tools/libxc/xg_sr_common.c
similarity index 99%
rename from tools/libxc/xc_sr_common.c
rename to tools/libxc/xg_sr_common.c
index 7c54b03414..17567ab133 100644
--- a/tools/libxc/xc_sr_common.c
+++ b/tools/libxc/xg_sr_common.c
@@ -1,6 +1,6 @@
 #include <assert.h>
 
-#include "xc_sr_common.h"
+#include "xg_sr_common.h"
 
 #include <xen-tools/libs.h>
 
diff --git a/tools/libxc/xc_sr_common.h b/tools/libxc/xg_sr_common.h
similarity index 99%
rename from tools/libxc/xc_sr_common.h
rename to tools/libxc/xg_sr_common.h
index f3bdea8006..13fcc47420 100644
--- a/tools/libxc/xc_sr_common.h
+++ b/tools/libxc/xg_sr_common.h
@@ -5,10 +5,10 @@
 
 #include "xg_private.h"
 #include "xg_save_restore.h"
-#include "xc_dom.h"
+#include "xenctrl_dom.h"
 #include "xc_bitops.h"
 
-#include "xc_sr_stream_format.h"
+#include "xg_sr_stream_format.h"
 
 /* String representation of Domain Header types. */
 const char *dhdr_type_to_str(uint32_t type);
diff --git a/tools/libxc/xc_sr_common_x86.c b/tools/libxc/xg_sr_common_x86.c
similarity index 99%
rename from tools/libxc/xc_sr_common_x86.c
rename to tools/libxc/xg_sr_common_x86.c
index 77ea044a74..6f12483907 100644
--- a/tools/libxc/xc_sr_common_x86.c
+++ b/tools/libxc/xg_sr_common_x86.c
@@ -1,4 +1,4 @@
-#include "xc_sr_common_x86.h"
+#include "xg_sr_common_x86.h"
 
 int write_x86_tsc_info(struct xc_sr_context *ctx)
 {
diff --git a/tools/libxc/xc_sr_common_x86.h b/tools/libxc/xg_sr_common_x86.h
similarity index 98%
rename from tools/libxc/xc_sr_common_x86.h
rename to tools/libxc/xg_sr_common_x86.h
index e08d81e0e7..b55758c96d 100644
--- a/tools/libxc/xc_sr_common_x86.h
+++ b/tools/libxc/xg_sr_common_x86.h
@@ -1,7 +1,7 @@
 #ifndef __COMMON_X86__H
 #define __COMMON_X86__H
 
-#include "xc_sr_common.h"
+#include "xg_sr_common.h"
 
 /*
  * Obtains a domains TSC information from Xen and writes a X86_TSC_INFO record
diff --git a/tools/libxc/xc_sr_common_x86_pv.c b/tools/libxc/xg_sr_common_x86_pv.c
similarity index 99%
rename from tools/libxc/xc_sr_common_x86_pv.c
rename to tools/libxc/xg_sr_common_x86_pv.c
index d3d425cb82..cd33406aab 100644
--- a/tools/libxc/xc_sr_common_x86_pv.c
+++ b/tools/libxc/xg_sr_common_x86_pv.c
@@ -1,6 +1,6 @@
 #include <assert.h>
 
-#include "xc_sr_common_x86_pv.h"
+#include "xg_sr_common_x86_pv.h"
 
 xen_pfn_t mfn_to_pfn(struct xc_sr_context *ctx, xen_pfn_t mfn)
 {
diff --git a/tools/libxc/xc_sr_common_x86_pv.h b/tools/libxc/xg_sr_common_x86_pv.h
similarity index 98%
rename from tools/libxc/xc_sr_common_x86_pv.h
rename to tools/libxc/xg_sr_common_x86_pv.h
index 2ed03309af..953b5bfb8d 100644
--- a/tools/libxc/xc_sr_common_x86_pv.h
+++ b/tools/libxc/xg_sr_common_x86_pv.h
@@ -1,7 +1,7 @@
 #ifndef __COMMON_X86_PV_H
 #define __COMMON_X86_PV_H
 
-#include "xc_sr_common_x86.h"
+#include "xg_sr_common_x86.h"
 
 /* Virtual address ranges reserved for hypervisor. */
 #define HYPERVISOR_VIRT_START_X86_64 0xFFFF800000000000ULL
diff --git a/tools/libxc/xc_sr_restore.c b/tools/libxc/xg_sr_restore.c
similarity index 99%
rename from tools/libxc/xc_sr_restore.c
rename to tools/libxc/xg_sr_restore.c
index bc811e6e3a..b57a787519 100644
--- a/tools/libxc/xc_sr_restore.c
+++ b/tools/libxc/xg_sr_restore.c
@@ -2,7 +2,7 @@
 
 #include <assert.h>
 
-#include "xc_sr_common.h"
+#include "xg_sr_common.h"
 
 /*
  * Read and validate the Image and Domain headers.
diff --git a/tools/libxc/xc_sr_restore_x86_hvm.c b/tools/libxc/xg_sr_restore_x86_hvm.c
similarity index 99%
rename from tools/libxc/xc_sr_restore_x86_hvm.c
rename to tools/libxc/xg_sr_restore_x86_hvm.c
index a77624cc9d..d6ea6f3012 100644
--- a/tools/libxc/xc_sr_restore_x86_hvm.c
+++ b/tools/libxc/xg_sr_restore_x86_hvm.c
@@ -1,7 +1,7 @@
 #include <assert.h>
 #include <arpa/inet.h>
 
-#include "xc_sr_common_x86.h"
+#include "xg_sr_common_x86.h"
 
 /*
  * Process an HVM_CONTEXT record from the stream.
diff --git a/tools/libxc/xc_sr_restore_x86_pv.c b/tools/libxc/xg_sr_restore_x86_pv.c
similarity index 99%
rename from tools/libxc/xc_sr_restore_x86_pv.c
rename to tools/libxc/xg_sr_restore_x86_pv.c
index d086271efb..dc50b0f5a8 100644
--- a/tools/libxc/xc_sr_restore_x86_pv.c
+++ b/tools/libxc/xg_sr_restore_x86_pv.c
@@ -1,6 +1,6 @@
 #include <assert.h>
 
-#include "xc_sr_common_x86_pv.h"
+#include "xg_sr_common_x86_pv.h"
 
 static xen_pfn_t pfn_to_mfn(const struct xc_sr_context *ctx, xen_pfn_t pfn)
 {
diff --git a/tools/libxc/xc_sr_save.c b/tools/libxc/xg_sr_save.c
similarity index 99%
rename from tools/libxc/xc_sr_save.c
rename to tools/libxc/xg_sr_save.c
index 80b1d5de1f..d74c72cba6 100644
--- a/tools/libxc/xc_sr_save.c
+++ b/tools/libxc/xg_sr_save.c
@@ -1,7 +1,7 @@
 #include <assert.h>
 #include <arpa/inet.h>
 
-#include "xc_sr_common.h"
+#include "xg_sr_common.h"
 
 /*
  * Writes an Image header and Domain header into the stream.
diff --git a/tools/libxc/xc_sr_save_x86_hvm.c b/tools/libxc/xg_sr_save_x86_hvm.c
similarity index 99%
rename from tools/libxc/xc_sr_save_x86_hvm.c
rename to tools/libxc/xg_sr_save_x86_hvm.c
index 0b2abb26bd..1634a7bc43 100644
--- a/tools/libxc/xc_sr_save_x86_hvm.c
+++ b/tools/libxc/xg_sr_save_x86_hvm.c
@@ -1,6 +1,6 @@
 #include <assert.h>
 
-#include "xc_sr_common_x86.h"
+#include "xg_sr_common_x86.h"
 
 #include <xen/hvm/params.h>
 
diff --git a/tools/libxc/xc_sr_save_x86_pv.c b/tools/libxc/xg_sr_save_x86_pv.c
similarity index 99%
rename from tools/libxc/xc_sr_save_x86_pv.c
rename to tools/libxc/xg_sr_save_x86_pv.c
index c7e246ef4f..4964f1f7b8 100644
--- a/tools/libxc/xc_sr_save_x86_pv.c
+++ b/tools/libxc/xg_sr_save_x86_pv.c
@@ -1,7 +1,7 @@
 #include <assert.h>
 #include <limits.h>
 
-#include "xc_sr_common_x86_pv.h"
+#include "xg_sr_common_x86_pv.h"
 
 /* Check a 64 bit virtual address for being canonical. */
 static inline bool is_canonical_address(xen_vaddr_t vaddr)
diff --git a/tools/libxc/xc_sr_stream_format.h b/tools/libxc/xg_sr_stream_format.h
similarity index 100%
rename from tools/libxc/xc_sr_stream_format.h
rename to tools/libxc/xg_sr_stream_format.h
diff --git a/tools/libxc/xc_suspend.c b/tools/libxc/xg_suspend.c
similarity index 100%
rename from tools/libxc/xc_suspend.c
rename to tools/libxc/xg_suspend.c
diff --git a/tools/libxl/libxl_arm.c b/tools/libxl/libxl_arm.c
index 34f8a29056..975a4d730a 100644
--- a/tools/libxl/libxl_arm.c
+++ b/tools/libxl/libxl_arm.c
@@ -3,7 +3,7 @@
 #include "libxl_libfdt_compat.h"
 #include "libxl_arm.h"
 
-#include <xc_dom.h>
+#include <xenctrl_dom.h>
 #include <stdbool.h>
 #include <libfdt.h>
 #include <assert.h>
diff --git a/tools/libxl/libxl_arm.h b/tools/libxl/libxl_arm.h
index 8aef210d4c..52c2ab5e3a 100644
--- a/tools/libxl/libxl_arm.h
+++ b/tools/libxl/libxl_arm.h
@@ -17,7 +17,7 @@
 #include "libxl_internal.h"
 #include "libxl_arch.h"
 
-#include <xc_dom.h>
+#include <xenctrl_dom.h>
 
 _hidden
 int libxl__prepare_acpi(libxl__gc *gc, libxl_domain_build_info *info,
diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
index 2814818e34..1031b75159 100644
--- a/tools/libxl/libxl_create.c
+++ b/tools/libxl/libxl_create.c
@@ -20,7 +20,7 @@
 #include "libxl_internal.h"
 #include "libxl_arch.h"
 
-#include <xc_dom.h>
+#include <xenctrl_dom.h>
 #include <xenguest.h>
 #include <xen/hvm/hvm_info_table.h>
 #include <xen/hvm/e820.h>
diff --git a/tools/libxl/libxl_dm.c b/tools/libxl/libxl_dm.c
index f2dc5696b9..fec4e0fbe5 100644
--- a/tools/libxl/libxl_dm.c
+++ b/tools/libxl/libxl_dm.c
@@ -19,7 +19,7 @@
 
 #include "libxl_internal.h"
 
-#include <xc_dom.h>
+#include <xenctrl_dom.h>
 #include <xen/hvm/e820.h>
 #include <sys/types.h>
 #include <pwd.h>
diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
index f8661e90d4..e2dca64aa1 100644
--- a/tools/libxl/libxl_dom.c
+++ b/tools/libxl/libxl_dom.c
@@ -20,7 +20,7 @@
 #include "libxl_internal.h"
 #include "libxl_arch.h"
 
-#include <xc_dom.h>
+#include <xenctrl_dom.h>
 #include <xen/hvm/hvm_info_table.h>
 #include <xen/hvm/hvm_xs_strings.h>
 #include <xen/hvm/e820.h>
diff --git a/tools/libxl/libxl_internal.h b/tools/libxl/libxl_internal.h
index c63d0686fd..e16ae9630b 100644
--- a/tools/libxl/libxl_internal.h
+++ b/tools/libxl/libxl_internal.h
@@ -57,7 +57,7 @@
 #include <xenctrl.h>
 #include <xenguest.h>
 #include <xenhypfs.h>
-#include <xc_dom.h>
+#include <xenctrl_dom.h>
 
 #include <xen-tools/libs.h>
 
diff --git a/tools/libxl/libxl_vnuma.c b/tools/libxl/libxl_vnuma.c
index 8ec2abb2e6..c2e144ceae 100644
--- a/tools/libxl/libxl_vnuma.c
+++ b/tools/libxl/libxl_vnuma.c
@@ -17,7 +17,7 @@
 #include "libxl_arch.h"
 #include <stdlib.h>
 
-#include <xc_dom.h>
+#include <xenctrl_dom.h>
 
 bool libxl__vnuma_configured(const libxl_domain_build_info *b_info)
 {
diff --git a/tools/libxl/libxl_x86.c b/tools/libxl/libxl_x86.c
index e57f63282e..7d95506e00 100644
--- a/tools/libxl/libxl_x86.c
+++ b/tools/libxl/libxl_x86.c
@@ -1,7 +1,7 @@
 #include "libxl_internal.h"
 #include "libxl_arch.h"
 
-#include <xc_dom.h>
+#include <xenctrl_dom.h>
 
 int libxl__arch_domain_prepare_config(libxl__gc *gc,
                                       libxl_domain_config *d_config,
diff --git a/tools/libxl/libxl_x86_acpi.c b/tools/libxl/libxl_x86_acpi.c
index ed6610c84e..3df86c7be5 100644
--- a/tools/libxl/libxl_x86_acpi.c
+++ b/tools/libxl/libxl_x86_acpi.c
@@ -18,7 +18,7 @@
 #include <xen/hvm/e820.h>
 #include "libacpi/libacpi.h"
 
-#include <xc_dom.h>
+#include <xenctrl_dom.h>
 
  /* Number of pages holding ACPI tables */
 #define NUM_ACPI_PAGES 16
diff --git a/tools/python/xen/lowlevel/xc/xc.c b/tools/python/xen/lowlevel/xc/xc.c
index 8fde5f311f..8c7b184f0b 100644
--- a/tools/python/xen/lowlevel/xc/xc.c
+++ b/tools/python/xen/lowlevel/xc/xc.c
@@ -17,7 +17,7 @@
 #include <arpa/inet.h>
 
 #include <xen/elfnote.h>
-#include "xc_dom.h"
+#include "xenctrl_dom.h"
 #include <xen/hvm/hvm_info_table.h>
 #include <xen/hvm/params.h>
 
diff --git a/tools/xcutils/readnotes.c b/tools/xcutils/readnotes.c
index e682dd1a21..a6b7358e70 100644
--- a/tools/xcutils/readnotes.c
+++ b/tools/xcutils/readnotes.c
@@ -12,7 +12,7 @@
 #include <sys/mman.h>
 
 #include <xg_private.h>
-#include <xc_dom.h> /* gunzip bits */
+#include <xenctrl_dom.h> /* gunzip bits */
 
 #include <xen/libelf/libelf.h>
 
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Wed Jul 15 16:26:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jul 2020 16:26:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvkEd-0002s5-2J; Wed, 15 Jul 2020 16:26:19 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=o5qj=A2=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jvkEb-0002Yi-B7
 for xen-devel@lists.xenproject.org; Wed, 15 Jul 2020 16:26:17 +0000
X-Inumbo-ID: d1acb2dc-c6b7-11ea-bca7-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d1acb2dc-c6b7-11ea-bca7-bc764e2007e4;
 Wed, 15 Jul 2020 16:25:39 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jvkDy-0001sU-51; Wed, 15 Jul 2020 17:25:38 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org,
	xen-devel@dornerworks.com
Subject: [PATCH 08/12] tools: move libxenctrl below tools/libs
Date: Wed, 15 Jul 2020 17:25:07 +0100
Message-Id: <20200715162511.5941-10-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200715162511.5941-1-ian.jackson@eu.citrix.com>
References: <20200715162511.5941-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Reply-To: incoming+61544a64d0c2dc4555813e58f3810dd7@incoming.gitlab.com,
 xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 ian.jackson@eu.citrix.com, George Dunlap <george.dunlap@citrix.com>,
 =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?=
 <marmarek@invisiblethingslab.com>, Anthony PERARD <anthony.perard@citrix.com>,
 Josh Whitehead <josh.whitehead@dornerworks.com>,
 Jan Beulich <jbeulich@suse.com>,
 Stewart Hildebrand <stewart.hildebrand@dornerworks.com>,
 Samuel Thibault <samuel.thibault@ens-lyon.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Juergen Gross <jgross@suse.com>

Today tools/libxc needs to be built after tools/libs, as libxenctrl
depends on some libraries in tools/libs. This in turn blocks moving
other libraries that depend on libxenctrl below tools/libs.

So carve out libxenctrl from tools/libxc and move it into
tools/libs/ctrl.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 .gitignore                                    |  7 ++
 MAINTAINERS                                   |  2 +-
 stubdom/Makefile                              | 29 +++++-
 stubdom/mini-os.mk                            |  2 +-
 tools/Makefile                                |  3 +
 tools/Rules.mk                                |  9 +-
 tools/libs/Makefile                           |  1 +
 tools/libs/ctrl/Makefile                      | 69 +++++++++++++
 tools/{libxc => libs/ctrl}/include/xenctrl.h  |  0
 .../ctrl}/include/xenctrl_compat.h            |  0
 .../ctrl}/include/xenctrl_dom.h               |  0
 tools/libs/ctrl/libxenctrl.map                |  3 +
 tools/{libxc => libs/ctrl}/xc_altp2m.c        |  0
 tools/{libxc => libs/ctrl}/xc_arinc653.c      |  0
 tools/{libxc => libs/ctrl}/xc_bitops.h        |  0
 tools/{libxc => libs/ctrl}/xc_core.c          |  0
 tools/{libxc => libs/ctrl}/xc_core.h          |  0
 tools/{libxc => libs/ctrl}/xc_core_arm.c      |  0
 tools/{libxc => libs/ctrl}/xc_core_arm.h      |  0
 tools/{libxc => libs/ctrl}/xc_core_x86.c      |  0
 tools/{libxc => libs/ctrl}/xc_core_x86.h      |  0
 tools/{libxc => libs/ctrl}/xc_cpu_hotplug.c   |  0
 tools/{libxc => libs/ctrl}/xc_cpupool.c       |  0
 tools/{libxc => libs/ctrl}/xc_csched.c        |  0
 tools/{libxc => libs/ctrl}/xc_csched2.c       |  0
 .../ctrl}/xc_devicemodel_compat.c             |  0
 tools/{libxc => libs/ctrl}/xc_domain.c        |  0
 tools/{libxc => libs/ctrl}/xc_evtchn.c        |  0
 tools/{libxc => libs/ctrl}/xc_evtchn_compat.c |  0
 tools/{libxc => libs/ctrl}/xc_flask.c         |  0
 .../{libxc => libs/ctrl}/xc_foreign_memory.c  |  0
 tools/{libxc => libs/ctrl}/xc_freebsd.c       |  0
 tools/{libxc => libs/ctrl}/xc_gnttab.c        |  0
 tools/{libxc => libs/ctrl}/xc_gnttab_compat.c |  0
 tools/{libxc => libs/ctrl}/xc_hcall_buf.c     |  0
 tools/{libxc => libs/ctrl}/xc_kexec.c         |  0
 tools/{libxc => libs/ctrl}/xc_linux.c         |  0
 tools/{libxc => libs/ctrl}/xc_mem_access.c    |  0
 tools/{libxc => libs/ctrl}/xc_mem_paging.c    |  0
 tools/{libxc => libs/ctrl}/xc_memshr.c        |  0
 tools/{libxc => libs/ctrl}/xc_minios.c        |  0
 tools/{libxc => libs/ctrl}/xc_misc.c          |  0
 tools/{libxc => libs/ctrl}/xc_monitor.c       |  0
 tools/{libxc => libs/ctrl}/xc_msr_x86.h       |  0
 tools/{libxc => libs/ctrl}/xc_netbsd.c        |  0
 tools/{libxc => libs/ctrl}/xc_pagetab.c       |  0
 tools/{libxc => libs/ctrl}/xc_physdev.c       |  0
 tools/{libxc => libs/ctrl}/xc_pm.c            |  0
 tools/{libxc => libs/ctrl}/xc_private.c       |  0
 tools/{libxc => libs/ctrl}/xc_private.h       |  0
 tools/{libxc => libs/ctrl}/xc_psr.c           |  0
 tools/{libxc => libs/ctrl}/xc_resource.c      |  0
 tools/{libxc => libs/ctrl}/xc_resume.c        |  0
 tools/{libxc => libs/ctrl}/xc_rt.c            |  0
 tools/{libxc => libs/ctrl}/xc_solaris.c       |  0
 tools/{libxc => libs/ctrl}/xc_tbuf.c          |  0
 tools/{libxc => libs/ctrl}/xc_vm_event.c      |  0
 .../ctrl/xenctrl.pc.in}                       |  0
 tools/libxc/Makefile                          | 99 +++----------------
 tools/libxl/Makefile                          |  2 +-
 tools/misc/Makefile                           |  1 +
 tools/python/Makefile                         |  2 +-
 tools/python/setup.py                         | 10 +-
 63 files changed, 132 insertions(+), 107 deletions(-)
 create mode 100644 tools/libs/ctrl/Makefile
 rename tools/{libxc => libs/ctrl}/include/xenctrl.h (100%)
 rename tools/{libxc => libs/ctrl}/include/xenctrl_compat.h (100%)
 rename tools/{libxc => libs/ctrl}/include/xenctrl_dom.h (100%)
 create mode 100644 tools/libs/ctrl/libxenctrl.map
 rename tools/{libxc => libs/ctrl}/xc_altp2m.c (100%)
 rename tools/{libxc => libs/ctrl}/xc_arinc653.c (100%)
 rename tools/{libxc => libs/ctrl}/xc_bitops.h (100%)
 rename tools/{libxc => libs/ctrl}/xc_core.c (100%)
 rename tools/{libxc => libs/ctrl}/xc_core.h (100%)
 rename tools/{libxc => libs/ctrl}/xc_core_arm.c (100%)
 rename tools/{libxc => libs/ctrl}/xc_core_arm.h (100%)
 rename tools/{libxc => libs/ctrl}/xc_core_x86.c (100%)
 rename tools/{libxc => libs/ctrl}/xc_core_x86.h (100%)
 rename tools/{libxc => libs/ctrl}/xc_cpu_hotplug.c (100%)
 rename tools/{libxc => libs/ctrl}/xc_cpupool.c (100%)
 rename tools/{libxc => libs/ctrl}/xc_csched.c (100%)
 rename tools/{libxc => libs/ctrl}/xc_csched2.c (100%)
 rename tools/{libxc => libs/ctrl}/xc_devicemodel_compat.c (100%)
 rename tools/{libxc => libs/ctrl}/xc_domain.c (100%)
 rename tools/{libxc => libs/ctrl}/xc_evtchn.c (100%)
 rename tools/{libxc => libs/ctrl}/xc_evtchn_compat.c (100%)
 rename tools/{libxc => libs/ctrl}/xc_flask.c (100%)
 rename tools/{libxc => libs/ctrl}/xc_foreign_memory.c (100%)
 rename tools/{libxc => libs/ctrl}/xc_freebsd.c (100%)
 rename tools/{libxc => libs/ctrl}/xc_gnttab.c (100%)
 rename tools/{libxc => libs/ctrl}/xc_gnttab_compat.c (100%)
 rename tools/{libxc => libs/ctrl}/xc_hcall_buf.c (100%)
 rename tools/{libxc => libs/ctrl}/xc_kexec.c (100%)
 rename tools/{libxc => libs/ctrl}/xc_linux.c (100%)
 rename tools/{libxc => libs/ctrl}/xc_mem_access.c (100%)
 rename tools/{libxc => libs/ctrl}/xc_mem_paging.c (100%)
 rename tools/{libxc => libs/ctrl}/xc_memshr.c (100%)
 rename tools/{libxc => libs/ctrl}/xc_minios.c (100%)
 rename tools/{libxc => libs/ctrl}/xc_misc.c (100%)
 rename tools/{libxc => libs/ctrl}/xc_monitor.c (100%)
 rename tools/{libxc => libs/ctrl}/xc_msr_x86.h (100%)
 rename tools/{libxc => libs/ctrl}/xc_netbsd.c (100%)
 rename tools/{libxc => libs/ctrl}/xc_pagetab.c (100%)
 rename tools/{libxc => libs/ctrl}/xc_physdev.c (100%)
 rename tools/{libxc => libs/ctrl}/xc_pm.c (100%)
 rename tools/{libxc => libs/ctrl}/xc_private.c (100%)
 rename tools/{libxc => libs/ctrl}/xc_private.h (100%)
 rename tools/{libxc => libs/ctrl}/xc_psr.c (100%)
 rename tools/{libxc => libs/ctrl}/xc_resource.c (100%)
 rename tools/{libxc => libs/ctrl}/xc_resume.c (100%)
 rename tools/{libxc => libs/ctrl}/xc_rt.c (100%)
 rename tools/{libxc => libs/ctrl}/xc_solaris.c (100%)
 rename tools/{libxc => libs/ctrl}/xc_tbuf.c (100%)
 rename tools/{libxc => libs/ctrl}/xc_vm_event.c (100%)
 rename tools/{libxc/xencontrol.pc.in => libs/ctrl/xenctrl.pc.in} (100%)

diff --git a/.gitignore b/.gitignore
index 5ea48af818..e28c21641c 100644
--- a/.gitignore
+++ b/.gitignore
@@ -114,6 +114,8 @@ tools/libs/hypfs/headers.chk
 tools/libs/hypfs/xenhypfs.pc
 tools/libs/call/headers.chk
 tools/libs/call/xencall.pc
+tools/libs/ctrl/_paths.h
+tools/libs/ctrl/xenctrl.pc
 tools/libs/foreignmemory/headers.chk
 tools/libs/foreignmemory/xenforeignmemory.pc
 tools/libs/devicemodel/headers.chk
@@ -195,6 +197,11 @@ tools/include/xen-foreign/*.(c|h|size)
 tools/include/xen-foreign/checker
 tools/libvchan/xenvchan.pc
 tools/libxc/*.pc
+tools/libxc/xc_bitops.h
+tools/libxc/xc_core.h
+tools/libxc/xc_core_arm.h
+tools/libxc/xc_core_x86.h
+tools/libxc/xc_private.h
 tools/libxl/_libxl.api-for-check
 tools/libxl/*.api-ok
 tools/libxl/*.pc
diff --git a/MAINTAINERS b/MAINTAINERS
index e374816755..7d25ed9c95 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -226,7 +226,7 @@ M:	Stewart Hildebrand <stewart.hildebrand@dornerworks.com>
 S:	Supported
 L:	xen-devel@dornerworks.com
 F:	xen/common/sched/arinc653.c
-F:	tools/libxc/xc_arinc653.c
+F:	tools/libs/ctrl/xc_arinc653.c
 
 ARM (W/ VIRTUALISATION EXTENSIONS) ARCHITECTURE
 M:	Stefano Stabellini <sstabellini@kernel.org>
diff --git a/stubdom/Makefile b/stubdom/Makefile
index af8cde41b9..8147635ad0 100644
--- a/stubdom/Makefile
+++ b/stubdom/Makefile
@@ -351,13 +351,16 @@ libs-$(XEN_TARGET_ARCH)/foreignmemory/stamp: $(XEN_ROOT)/tools/libs/foreignmemor
 libs-$(XEN_TARGET_ARCH)/devicemodel/stamp: $(XEN_ROOT)/tools/libs/devicemodel/Makefile
 	$(do_links)
 
+libs-$(XEN_TARGET_ARCH)/ctrl/stamp: $(XEN_ROOT)/tools/libs/ctrl/Makefile
+	$(do_links)
+
 libxc-$(XEN_TARGET_ARCH)/stamp: $(XEN_ROOT)/tools/libxc/Makefile
 	$(do_links)
 
 xenstore/stamp: $(XEN_ROOT)/tools/xenstore/Makefile
 	$(do_links)
 
-LINK_LIBS_DIRS := toolcore toollog evtchn gnttab call foreignmemory devicemodel
+LINK_LIBS_DIRS := toolcore toollog evtchn gnttab call foreignmemory devicemodel ctrl
 LINK_DIRS := libxc-$(XEN_TARGET_ARCH) xenstore $(foreach dir,$(LINK_LIBS_DIRS),libs-$(XEN_TARGET_ARCH)/$(dir))
 LINK_STAMPS := $(foreach dir,$(LINK_DIRS),$(dir)/stamp)
 
@@ -405,6 +408,7 @@ libs-$(XEN_TARGET_ARCH)/toollog/libxentoollog.a: mk-headers-$(XEN_TARGET_ARCH) $
 
 .PHONY: libxenevtchn
 libxenevtchn: libs-$(XEN_TARGET_ARCH)/evtchn/libxenevtchn.a
+libs-$(XEN_TARGET_ARCH)/evtchn/libxenevtchn.a: libxentoolcore
 libs-$(XEN_TARGET_ARCH)/evtchn/libxenevtchn.a: mk-headers-$(XEN_TARGET_ARCH) $(NEWLIB_STAMPFILE)
 	CPPFLAGS="$(TARGET_CPPFLAGS)" CFLAGS="$(TARGET_CFLAGS)" $(MAKE) DESTDIR= -C libs-$(XEN_TARGET_ARCH)/evtchn
 
@@ -414,6 +418,7 @@ libs-$(XEN_TARGET_ARCH)/evtchn/libxenevtchn.a: mk-headers-$(XEN_TARGET_ARCH) $(N
 
 .PHONY: libxengnttab
 libxengnttab: libs-$(XEN_TARGET_ARCH)/gnttab/libxengnttab.a
+libs-$(XEN_TARGET_ARCH)/gnttab/libxengnttab.a: libxentoolcore libxentoollog
 libs-$(XEN_TARGET_ARCH)/gnttab/libxengnttab.a: mk-headers-$(XEN_TARGET_ARCH) $(NEWLIB_STAMPFILE)
 	CPPFLAGS="$(TARGET_CPPFLAGS)" CFLAGS="$(TARGET_CFLAGS)" $(MAKE) DESTDIR= -C libs-$(XEN_TARGET_ARCH)/gnttab
 
@@ -423,6 +428,7 @@ libs-$(XEN_TARGET_ARCH)/gnttab/libxengnttab.a: mk-headers-$(XEN_TARGET_ARCH) $(N
 
 .PHONY: libxencall
 libxencall: libs-$(XEN_TARGET_ARCH)/call/libxencall.a
+libs-$(XEN_TARGET_ARCH)/call/libxencall.a: libxentoolcore
 libs-$(XEN_TARGET_ARCH)/call/libxencall.a: mk-headers-$(XEN_TARGET_ARCH) $(NEWLIB_STAMPFILE)
 	CPPFLAGS="$(TARGET_CPPFLAGS)" CFLAGS="$(TARGET_CFLAGS)" $(MAKE) DESTDIR= -C libs-$(XEN_TARGET_ARCH)/call
 
@@ -432,6 +438,7 @@ libs-$(XEN_TARGET_ARCH)/call/libxencall.a: mk-headers-$(XEN_TARGET_ARCH) $(NEWLI
 
 .PHONY: libxenforeignmemory
 libxenforeignmemory: libs-$(XEN_TARGET_ARCH)/foreignmemory/libxenforeignmemory.a
+libs-$(XEN_TARGET_ARCH)/foreignmemory/libxenforeignmemory.a: libxentoolcore
 libs-$(XEN_TARGET_ARCH)/foreignmemory/libxenforeignmemory.a: mk-headers-$(XEN_TARGET_ARCH) $(NEWLIB_STAMPFILE)
 	CPPFLAGS="$(TARGET_CPPFLAGS)" CFLAGS="$(TARGET_CFLAGS)" $(MAKE) DESTDIR= -C libs-$(XEN_TARGET_ARCH)/foreignmemory
 
@@ -441,20 +448,31 @@ libs-$(XEN_TARGET_ARCH)/foreignmemory/libxenforeignmemory.a: mk-headers-$(XEN_TA
 
 .PHONY: libxendevicemodel
 libxendevicemodel: libs-$(XEN_TARGET_ARCH)/devicemodel/libxendevicemodel.a
+libs-$(XEN_TARGET_ARCH)/devicemodel/libxendevicemodel.a: libxentoolcore libxentoollog libxencall
 libs-$(XEN_TARGET_ARCH)/devicemodel/libxendevicemodel.a: mk-headers-$(XEN_TARGET_ARCH) $(NEWLIB_STAMPFILE)
 	CPPFLAGS="$(TARGET_CPPFLAGS)" CFLAGS="$(TARGET_CFLAGS)" $(MAKE) DESTDIR= -C libs-$(XEN_TARGET_ARCH)/devicemodel
 
+#######
+#######
+# libxenctrl
+#######
+
+.PHONY: libxenctrl
+libxenctrl: libs-$(XEN_TARGET_ARCH)/ctrl/libxenctrl.a
+libs-$(XEN_TARGET_ARCH)/ctrl/libxenctrl.a: libxentoollog libxencall libxenevtchn libxengnttab libxenforeignmemory libxendevicemodel
+libs-$(XEN_TARGET_ARCH)/ctrl/libxenctrl.a: mk-headers-$(XEN_TARGET_ARCH) $(NEWLIB_STAMPFILE)
+	CPPFLAGS="$(TARGET_CPPFLAGS)" CFLAGS="$(TARGET_CFLAGS)" $(MAKE) DESTDIR= CONFIG_LIBXC_MINIOS=y -C libs-$(XEN_TARGET_ARCH)/ctrl
+
 #######
 # libxc
 #######
 
 .PHONY: libxc
-libxc: libxc-$(XEN_TARGET_ARCH)/libxenctrl.a libxc-$(XEN_TARGET_ARCH)/libxenguest.a
-libxc-$(XEN_TARGET_ARCH)/libxenctrl.a: mk-headers-$(XEN_TARGET_ARCH) libxentoolcore libxentoollog libxenevtchn libxengnttab libxencall libxenforeignmemory libxendevicemodel cross-zlib
+libxc: libxc-$(XEN_TARGET_ARCH)/libxenguest.a
+libxc-$(XEN_TARGET_ARCH)/libxenguest.a: libxenevtchn libxenctrl cross-zlib
+libxc-$(XEN_TARGET_ARCH)/libxenguest.a: mk-headers-$(XEN_TARGET_ARCH) $(NEWLIB_STAMPFILE)
 	CPPFLAGS="$(TARGET_CPPFLAGS)" CFLAGS="$(TARGET_CFLAGS)" $(MAKE) DESTDIR= CONFIG_LIBXC_MINIOS=y -C libxc-$(XEN_TARGET_ARCH)
 
- libxc-$(XEN_TARGET_ARCH)/libxenguest.a: libxc-$(XEN_TARGET_ARCH)/libxenctrl.a
-
 #######
 # ioemu
 #######
@@ -681,6 +699,7 @@ clean:
 	[ ! -e libs-$(XEN_TARGET_ARCH)/call/Makefile ] || $(MAKE) DESTDIR= -C libs-$(XEN_TARGET_ARCH)/call clean
 	[ ! -e libs-$(XEN_TARGET_ARCH)/foreignmemory/Makefile ] || $(MAKE) DESTDIR= -C libs-$(XEN_TARGET_ARCH)/foreignmemory clean
 	[ ! -e libs-$(XEN_TARGET_ARCH)/devicemodel/Makefile ] || $(MAKE) DESTDIR= -C libs-$(XEN_TARGET_ARCH)/devicemodel clean
+	[ ! -e libs-$(XEN_TARGET_ARCH)/ctrl/Makefile ] || $(MAKE) DESTDIR= -C libs-$(XEN_TARGET_ARCH)/ctrl clean
 	[ ! -e libxc-$(XEN_TARGET_ARCH)/Makefile ] || $(MAKE) DESTDIR= -C libxc-$(XEN_TARGET_ARCH) clean
 	-[ ! -d ioemu ] || $(MAKE) DESTDIR= -C ioemu clean
 	-[ ! -d xenstore ] || $(MAKE) DESTDIR= -C xenstore clean
diff --git a/stubdom/mini-os.mk b/stubdom/mini-os.mk
index 32528bb91f..b1387df3f8 100644
--- a/stubdom/mini-os.mk
+++ b/stubdom/mini-os.mk
@@ -13,5 +13,5 @@ GNTTAB_PATH = $(XEN_ROOT)/stubdom/libs-$(MINIOS_TARGET_ARCH)/gnttab
 CALL_PATH = $(XEN_ROOT)/stubdom/libs-$(MINIOS_TARGET_ARCH)/call
 FOREIGNMEMORY_PATH = $(XEN_ROOT)/stubdom/libs-$(MINIOS_TARGET_ARCH)/foreignmemory
 DEVICEMODEL_PATH = $(XEN_ROOT)/stubdom/libs-$(MINIOS_TARGET_ARCH)/devicemodel
-CTRL_PATH = $(XEN_ROOT)/stubdom/libxc-$(MINIOS_TARGET_ARCH)
+CTRL_PATH = $(XEN_ROOT)/stubdom/libs-$(MINIOS_TARGET_ARCH)/ctrl
 GUEST_PATH = $(XEN_ROOT)/stubdom/libxc-$(MINIOS_TARGET_ARCH)
diff --git a/tools/Makefile b/tools/Makefile
index c10946e3b1..7c62c599dd 100644
--- a/tools/Makefile
+++ b/tools/Makefile
@@ -255,6 +255,7 @@ subdir-all-qemu-xen-dir: qemu-xen-dir-find
 		-I$(XEN_ROOT)/tools/libs/gnttab/include \
 		-I$(XEN_ROOT)/tools/libs/foreignmemory/include \
 		-I$(XEN_ROOT)/tools/libs/devicemodel/include \
+		-I$(XEN_ROOT)/tools/libs/ctrl/include \
 		-I$(XEN_ROOT)/tools/libxc/include \
 		-I$(XEN_ROOT)/tools/xenstore/include \
 		-I$(XEN_ROOT)/tools/xenstore/compat/include \
@@ -266,6 +267,7 @@ subdir-all-qemu-xen-dir: qemu-xen-dir-find
 		-L$(XEN_ROOT)/tools/libs/gnttab \
 		-L$(XEN_ROOT)/tools/libs/foreignmemory \
 		-L$(XEN_ROOT)/tools/libs/devicemodel \
+		-L$(XEN_ROOT)/tools/libs/ctrl \
 		-Wl,-rpath-link=$(XEN_ROOT)/tools/libs/toolcore \
 		-Wl,-rpath-link=$(XEN_ROOT)/tools/libs/toollog \
 		-Wl,-rpath-link=$(XEN_ROOT)/tools/libs/evtchn \
@@ -273,6 +275,7 @@ subdir-all-qemu-xen-dir: qemu-xen-dir-find
 		-Wl,-rpath-link=$(XEN_ROOT)/tools/libs/call \
 		-Wl,-rpath-link=$(XEN_ROOT)/tools/libs/foreignmemory \
 		-Wl,-rpath-link=$(XEN_ROOT)/tools/libs/devicemodel \
+		-Wl,-rpath-link=$(XEN_ROOT)/tools/libs/ctrl \
 		$(QEMU_UPSTREAM_RPATH)" \
 		--bindir=$(LIBEXEC_BIN) \
 		--datadir=$(SHAREDIR)/qemu-xen \
diff --git a/tools/Rules.mk b/tools/Rules.mk
index 5d699cfd39..6bc3347a1c 100644
--- a/tools/Rules.mk
+++ b/tools/Rules.mk
@@ -20,9 +20,8 @@ XEN_libxencall     = $(XEN_ROOT)/tools/libs/call
 XEN_libxenforeignmemory = $(XEN_ROOT)/tools/libs/foreignmemory
 XEN_libxendevicemodel = $(XEN_ROOT)/tools/libs/devicemodel
 XEN_libxenhypfs    = $(XEN_ROOT)/tools/libs/hypfs
-XEN_libxenctrl     = $(XEN_ROOT)/tools/libxc
-# Currently libxenguest lives in the same directory as libxenctrl
-XEN_libxenguest    = $(XEN_libxenctrl)
+XEN_libxenctrl     = $(XEN_ROOT)/tools/libs/ctrl
+XEN_libxenguest    = $(XEN_ROOT)/tools/libxc
 XEN_libxenlight    = $(XEN_ROOT)/tools/libxl
 # Currently libxlutil lives in the same directory as libxenlight
 XEN_libxlutil      = $(XEN_libxenlight)
@@ -147,7 +146,7 @@ LDLIBS_libxenctrl = $(SHDEPS_libxenctrl) $(XEN_libxenctrl)/libxenctrl$(libextens
 SHLIB_libxenctrl  = $(SHDEPS_libxenctrl) -Wl,-rpath-link=$(XEN_libxenctrl)
 
 CFLAGS_libxenguest = -I$(XEN_libxenguest)/include $(CFLAGS_libxenevtchn) $(CFLAGS_libxenforeignmemory) $(CFLAGS_xeninclude)
-SHDEPS_libxenguest = $(SHLIB_libxenevtchn)
+SHDEPS_libxenguest = $(SHLIB_libxenevtchn) $(SHLIB_libxenctrl)
 LDLIBS_libxenguest = $(SHDEPS_libxenguest) $(XEN_libxenguest)/libxenguest$(libextension)
 SHLIB_libxenguest  = $(SHDEPS_libxenguest) -Wl,-rpath-link=$(XEN_libxenguest)
 
@@ -179,7 +178,7 @@ CFLAGS += -O2 -fomit-frame-pointer
 endif
 
 CFLAGS_libxenlight = -I$(XEN_libxenlight) $(CFLAGS_libxenctrl) $(CFLAGS_xeninclude)
-SHDEPS_libxenlight = $(SHLIB_libxenctrl) $(SHLIB_libxenstore) $(SHLIB_libxenhypfs)
+SHDEPS_libxenlight = $(SHLIB_libxenctrl) $(SHLIB_libxenstore) $(SHLIB_libxenhypfs) $(SHLIB_libxenguest)
 LDLIBS_libxenlight = $(SHDEPS_libxenlight) $(XEN_libxenlight)/libxenlight$(libextension)
 SHLIB_libxenlight  = $(SHDEPS_libxenlight) -Wl,-rpath-link=$(XEN_libxenlight)
 
diff --git a/tools/libs/Makefile b/tools/libs/Makefile
index 69cdfb5975..7648ea0e4c 100644
--- a/tools/libs/Makefile
+++ b/tools/libs/Makefile
@@ -9,6 +9,7 @@ SUBDIRS-y += gnttab
 SUBDIRS-y += call
 SUBDIRS-y += foreignmemory
 SUBDIRS-y += devicemodel
+SUBDIRS-y += ctrl
 SUBDIRS-y += hypfs
 
 ifeq ($(CONFIG_RUMP),y)
diff --git a/tools/libs/ctrl/Makefile b/tools/libs/ctrl/Makefile
new file mode 100644
index 0000000000..bda0b43dfa
--- /dev/null
+++ b/tools/libs/ctrl/Makefile
@@ -0,0 +1,69 @@
+XEN_ROOT = $(CURDIR)/../../..
+include $(XEN_ROOT)/tools/Rules.mk
+
+MAJOR    = 4.14
+MINOR    = 0
+LIBNAME  := ctrl
+USELIBS  := toollog call evtchn gnttab foreignmemory devicemodel
+
+SRCS-y       += xc_altp2m.c
+SRCS-y       += xc_core.c
+SRCS-$(CONFIG_X86) += xc_core_x86.c
+SRCS-$(CONFIG_ARM) += xc_core_arm.c
+SRCS-y       += xc_cpupool.c
+SRCS-y       += xc_domain.c
+SRCS-y       += xc_evtchn.c
+SRCS-y       += xc_gnttab.c
+SRCS-y       += xc_misc.c
+SRCS-y       += xc_flask.c
+SRCS-y       += xc_physdev.c
+SRCS-y       += xc_private.c
+SRCS-y       += xc_csched.c
+SRCS-y       += xc_csched2.c
+SRCS-y       += xc_arinc653.c
+SRCS-y       += xc_rt.c
+SRCS-y       += xc_tbuf.c
+SRCS-y       += xc_pm.c
+SRCS-y       += xc_cpu_hotplug.c
+SRCS-y       += xc_resume.c
+SRCS-y       += xc_vm_event.c
+SRCS-y       += xc_monitor.c
+SRCS-y       += xc_mem_paging.c
+SRCS-y       += xc_mem_access.c
+SRCS-y       += xc_memshr.c
+SRCS-y       += xc_hcall_buf.c
+SRCS-y       += xc_foreign_memory.c
+SRCS-y       += xc_kexec.c
+SRCS-y       += xc_resource.c
+SRCS-$(CONFIG_X86) += xc_psr.c
+SRCS-$(CONFIG_X86) += xc_pagetab.c
+SRCS-$(CONFIG_Linux) += xc_linux.c
+SRCS-$(CONFIG_FreeBSD) += xc_freebsd.c
+SRCS-$(CONFIG_SunOS) += xc_solaris.c
+SRCS-$(CONFIG_NetBSD) += xc_netbsd.c
+SRCS-$(CONFIG_NetBSDRump) += xc_netbsd.c
+SRCS-$(CONFIG_MiniOS) += xc_minios.c
+SRCS-y       += xc_evtchn_compat.c
+SRCS-y       += xc_gnttab_compat.c
+SRCS-y       += xc_devicemodel_compat.c
+
+CFLAGS   += -D__XEN_TOOLS__
+CFLAGS	+= $(PTHREAD_CFLAGS)
+CFLAGS += -include $(XEN_ROOT)/tools/config.h
+
+# Needed for posix_fadvise64() in xc_linux.c
+CFLAGS-$(CONFIG_Linux) += -D_GNU_SOURCE
+
+LIBHEADER := xenctrl.h xenctrl_compat.h xenctrl_dom.h
+
+NO_HEADERS_CHK := y
+
+include $(XEN_ROOT)/tools/libs/libs.mk
+
+genpath-target = $(call buildmakevars2header,_paths.h)
+$(eval $(genpath-target))
+
+xc_private.h: _paths.h
+
+$(PKG_CONFIG_LOCAL): PKG_CONFIG_INCDIR = $(XEN_libxenctrl)/include
+$(PKG_CONFIG_LOCAL): PKG_CONFIG_CFLAGS_LOCAL = $(CFLAGS_xeninclude)
diff --git a/tools/libxc/include/xenctrl.h b/tools/libs/ctrl/include/xenctrl.h
similarity index 100%
rename from tools/libxc/include/xenctrl.h
rename to tools/libs/ctrl/include/xenctrl.h
diff --git a/tools/libxc/include/xenctrl_compat.h b/tools/libs/ctrl/include/xenctrl_compat.h
similarity index 100%
rename from tools/libxc/include/xenctrl_compat.h
rename to tools/libs/ctrl/include/xenctrl_compat.h
diff --git a/tools/libxc/include/xenctrl_dom.h b/tools/libs/ctrl/include/xenctrl_dom.h
similarity index 100%
rename from tools/libxc/include/xenctrl_dom.h
rename to tools/libs/ctrl/include/xenctrl_dom.h
diff --git a/tools/libs/ctrl/libxenctrl.map b/tools/libs/ctrl/libxenctrl.map
new file mode 100644
index 0000000000..26f1402d6d
--- /dev/null
+++ b/tools/libs/ctrl/libxenctrl.map
@@ -0,0 +1,3 @@
+VERS_4.14.0 {
+	global: *; /* Expose everything */
+};
diff --git a/tools/libxc/xc_altp2m.c b/tools/libs/ctrl/xc_altp2m.c
similarity index 100%
rename from tools/libxc/xc_altp2m.c
rename to tools/libs/ctrl/xc_altp2m.c
diff --git a/tools/libxc/xc_arinc653.c b/tools/libs/ctrl/xc_arinc653.c
similarity index 100%
rename from tools/libxc/xc_arinc653.c
rename to tools/libs/ctrl/xc_arinc653.c
diff --git a/tools/libxc/xc_bitops.h b/tools/libs/ctrl/xc_bitops.h
similarity index 100%
rename from tools/libxc/xc_bitops.h
rename to tools/libs/ctrl/xc_bitops.h
diff --git a/tools/libxc/xc_core.c b/tools/libs/ctrl/xc_core.c
similarity index 100%
rename from tools/libxc/xc_core.c
rename to tools/libs/ctrl/xc_core.c
diff --git a/tools/libxc/xc_core.h b/tools/libs/ctrl/xc_core.h
similarity index 100%
rename from tools/libxc/xc_core.h
rename to tools/libs/ctrl/xc_core.h
diff --git a/tools/libxc/xc_core_arm.c b/tools/libs/ctrl/xc_core_arm.c
similarity index 100%
rename from tools/libxc/xc_core_arm.c
rename to tools/libs/ctrl/xc_core_arm.c
diff --git a/tools/libxc/xc_core_arm.h b/tools/libs/ctrl/xc_core_arm.h
similarity index 100%
rename from tools/libxc/xc_core_arm.h
rename to tools/libs/ctrl/xc_core_arm.h
diff --git a/tools/libxc/xc_core_x86.c b/tools/libs/ctrl/xc_core_x86.c
similarity index 100%
rename from tools/libxc/xc_core_x86.c
rename to tools/libs/ctrl/xc_core_x86.c
diff --git a/tools/libxc/xc_core_x86.h b/tools/libs/ctrl/xc_core_x86.h
similarity index 100%
rename from tools/libxc/xc_core_x86.h
rename to tools/libs/ctrl/xc_core_x86.h
diff --git a/tools/libxc/xc_cpu_hotplug.c b/tools/libs/ctrl/xc_cpu_hotplug.c
similarity index 100%
rename from tools/libxc/xc_cpu_hotplug.c
rename to tools/libs/ctrl/xc_cpu_hotplug.c
diff --git a/tools/libxc/xc_cpupool.c b/tools/libs/ctrl/xc_cpupool.c
similarity index 100%
rename from tools/libxc/xc_cpupool.c
rename to tools/libs/ctrl/xc_cpupool.c
diff --git a/tools/libxc/xc_csched.c b/tools/libs/ctrl/xc_csched.c
similarity index 100%
rename from tools/libxc/xc_csched.c
rename to tools/libs/ctrl/xc_csched.c
diff --git a/tools/libxc/xc_csched2.c b/tools/libs/ctrl/xc_csched2.c
similarity index 100%
rename from tools/libxc/xc_csched2.c
rename to tools/libs/ctrl/xc_csched2.c
diff --git a/tools/libxc/xc_devicemodel_compat.c b/tools/libs/ctrl/xc_devicemodel_compat.c
similarity index 100%
rename from tools/libxc/xc_devicemodel_compat.c
rename to tools/libs/ctrl/xc_devicemodel_compat.c
diff --git a/tools/libxc/xc_domain.c b/tools/libs/ctrl/xc_domain.c
similarity index 100%
rename from tools/libxc/xc_domain.c
rename to tools/libs/ctrl/xc_domain.c
diff --git a/tools/libxc/xc_evtchn.c b/tools/libs/ctrl/xc_evtchn.c
similarity index 100%
rename from tools/libxc/xc_evtchn.c
rename to tools/libs/ctrl/xc_evtchn.c
diff --git a/tools/libxc/xc_evtchn_compat.c b/tools/libs/ctrl/xc_evtchn_compat.c
similarity index 100%
rename from tools/libxc/xc_evtchn_compat.c
rename to tools/libs/ctrl/xc_evtchn_compat.c
diff --git a/tools/libxc/xc_flask.c b/tools/libs/ctrl/xc_flask.c
similarity index 100%
rename from tools/libxc/xc_flask.c
rename to tools/libs/ctrl/xc_flask.c
diff --git a/tools/libxc/xc_foreign_memory.c b/tools/libs/ctrl/xc_foreign_memory.c
similarity index 100%
rename from tools/libxc/xc_foreign_memory.c
rename to tools/libs/ctrl/xc_foreign_memory.c
diff --git a/tools/libxc/xc_freebsd.c b/tools/libs/ctrl/xc_freebsd.c
similarity index 100%
rename from tools/libxc/xc_freebsd.c
rename to tools/libs/ctrl/xc_freebsd.c
diff --git a/tools/libxc/xc_gnttab.c b/tools/libs/ctrl/xc_gnttab.c
similarity index 100%
rename from tools/libxc/xc_gnttab.c
rename to tools/libs/ctrl/xc_gnttab.c
diff --git a/tools/libxc/xc_gnttab_compat.c b/tools/libs/ctrl/xc_gnttab_compat.c
similarity index 100%
rename from tools/libxc/xc_gnttab_compat.c
rename to tools/libs/ctrl/xc_gnttab_compat.c
diff --git a/tools/libxc/xc_hcall_buf.c b/tools/libs/ctrl/xc_hcall_buf.c
similarity index 100%
rename from tools/libxc/xc_hcall_buf.c
rename to tools/libs/ctrl/xc_hcall_buf.c
diff --git a/tools/libxc/xc_kexec.c b/tools/libs/ctrl/xc_kexec.c
similarity index 100%
rename from tools/libxc/xc_kexec.c
rename to tools/libs/ctrl/xc_kexec.c
diff --git a/tools/libxc/xc_linux.c b/tools/libs/ctrl/xc_linux.c
similarity index 100%
rename from tools/libxc/xc_linux.c
rename to tools/libs/ctrl/xc_linux.c
diff --git a/tools/libxc/xc_mem_access.c b/tools/libs/ctrl/xc_mem_access.c
similarity index 100%
rename from tools/libxc/xc_mem_access.c
rename to tools/libs/ctrl/xc_mem_access.c
diff --git a/tools/libxc/xc_mem_paging.c b/tools/libs/ctrl/xc_mem_paging.c
similarity index 100%
rename from tools/libxc/xc_mem_paging.c
rename to tools/libs/ctrl/xc_mem_paging.c
diff --git a/tools/libxc/xc_memshr.c b/tools/libs/ctrl/xc_memshr.c
similarity index 100%
rename from tools/libxc/xc_memshr.c
rename to tools/libs/ctrl/xc_memshr.c
diff --git a/tools/libxc/xc_minios.c b/tools/libs/ctrl/xc_minios.c
similarity index 100%
rename from tools/libxc/xc_minios.c
rename to tools/libs/ctrl/xc_minios.c
diff --git a/tools/libxc/xc_misc.c b/tools/libs/ctrl/xc_misc.c
similarity index 100%
rename from tools/libxc/xc_misc.c
rename to tools/libs/ctrl/xc_misc.c
diff --git a/tools/libxc/xc_monitor.c b/tools/libs/ctrl/xc_monitor.c
similarity index 100%
rename from tools/libxc/xc_monitor.c
rename to tools/libs/ctrl/xc_monitor.c
diff --git a/tools/libxc/xc_msr_x86.h b/tools/libs/ctrl/xc_msr_x86.h
similarity index 100%
rename from tools/libxc/xc_msr_x86.h
rename to tools/libs/ctrl/xc_msr_x86.h
diff --git a/tools/libxc/xc_netbsd.c b/tools/libs/ctrl/xc_netbsd.c
similarity index 100%
rename from tools/libxc/xc_netbsd.c
rename to tools/libs/ctrl/xc_netbsd.c
diff --git a/tools/libxc/xc_pagetab.c b/tools/libs/ctrl/xc_pagetab.c
similarity index 100%
rename from tools/libxc/xc_pagetab.c
rename to tools/libs/ctrl/xc_pagetab.c
diff --git a/tools/libxc/xc_physdev.c b/tools/libs/ctrl/xc_physdev.c
similarity index 100%
rename from tools/libxc/xc_physdev.c
rename to tools/libs/ctrl/xc_physdev.c
diff --git a/tools/libxc/xc_pm.c b/tools/libs/ctrl/xc_pm.c
similarity index 100%
rename from tools/libxc/xc_pm.c
rename to tools/libs/ctrl/xc_pm.c
diff --git a/tools/libxc/xc_private.c b/tools/libs/ctrl/xc_private.c
similarity index 100%
rename from tools/libxc/xc_private.c
rename to tools/libs/ctrl/xc_private.c
diff --git a/tools/libxc/xc_private.h b/tools/libs/ctrl/xc_private.h
similarity index 100%
rename from tools/libxc/xc_private.h
rename to tools/libs/ctrl/xc_private.h
diff --git a/tools/libxc/xc_psr.c b/tools/libs/ctrl/xc_psr.c
similarity index 100%
rename from tools/libxc/xc_psr.c
rename to tools/libs/ctrl/xc_psr.c
diff --git a/tools/libxc/xc_resource.c b/tools/libs/ctrl/xc_resource.c
similarity index 100%
rename from tools/libxc/xc_resource.c
rename to tools/libs/ctrl/xc_resource.c
diff --git a/tools/libxc/xc_resume.c b/tools/libs/ctrl/xc_resume.c
similarity index 100%
rename from tools/libxc/xc_resume.c
rename to tools/libs/ctrl/xc_resume.c
diff --git a/tools/libxc/xc_rt.c b/tools/libs/ctrl/xc_rt.c
similarity index 100%
rename from tools/libxc/xc_rt.c
rename to tools/libs/ctrl/xc_rt.c
diff --git a/tools/libxc/xc_solaris.c b/tools/libs/ctrl/xc_solaris.c
similarity index 100%
rename from tools/libxc/xc_solaris.c
rename to tools/libs/ctrl/xc_solaris.c
diff --git a/tools/libxc/xc_tbuf.c b/tools/libs/ctrl/xc_tbuf.c
similarity index 100%
rename from tools/libxc/xc_tbuf.c
rename to tools/libs/ctrl/xc_tbuf.c
diff --git a/tools/libxc/xc_vm_event.c b/tools/libs/ctrl/xc_vm_event.c
similarity index 100%
rename from tools/libxc/xc_vm_event.c
rename to tools/libs/ctrl/xc_vm_event.c
diff --git a/tools/libxc/xencontrol.pc.in b/tools/libs/ctrl/xenctrl.pc.in
similarity index 100%
rename from tools/libxc/xencontrol.pc.in
rename to tools/libs/ctrl/xenctrl.pc.in
diff --git a/tools/libxc/Makefile b/tools/libxc/Makefile
index 6f94b5bb4c..1312223093 100644
--- a/tools/libxc/Makefile
+++ b/tools/libxc/Makefile
@@ -9,47 +9,10 @@ ifeq ($(CONFIG_LIBXC_MINIOS),y)
 override CONFIG_MIGRATE := n
 endif
 
-CTRL_SRCS-y       :=
-CTRL_SRCS-y       += xc_altp2m.c
-CTRL_SRCS-y       += xc_core.c
-CTRL_SRCS-$(CONFIG_X86) += xc_core_x86.c
-CTRL_SRCS-$(CONFIG_ARM) += xc_core_arm.c
-CTRL_SRCS-y       += xc_cpupool.c
-CTRL_SRCS-y       += xc_domain.c
-CTRL_SRCS-y       += xc_evtchn.c
-CTRL_SRCS-y       += xc_gnttab.c
-CTRL_SRCS-y       += xc_misc.c
-CTRL_SRCS-y       += xc_flask.c
-CTRL_SRCS-y       += xc_physdev.c
-CTRL_SRCS-y       += xc_private.c
-CTRL_SRCS-y       += xc_csched.c
-CTRL_SRCS-y       += xc_csched2.c
-CTRL_SRCS-y       += xc_arinc653.c
-CTRL_SRCS-y       += xc_rt.c
-CTRL_SRCS-y       += xc_tbuf.c
-CTRL_SRCS-y       += xc_pm.c
-CTRL_SRCS-y       += xc_cpu_hotplug.c
-CTRL_SRCS-y       += xc_resume.c
-CTRL_SRCS-y       += xc_vm_event.c
-CTRL_SRCS-y       += xc_monitor.c
-CTRL_SRCS-y       += xc_mem_paging.c
-CTRL_SRCS-y       += xc_mem_access.c
-CTRL_SRCS-y       += xc_memshr.c
-CTRL_SRCS-y       += xc_hcall_buf.c
-CTRL_SRCS-y       += xc_foreign_memory.c
-CTRL_SRCS-y       += xc_kexec.c
-CTRL_SRCS-y       += xc_resource.c
-CTRL_SRCS-$(CONFIG_X86) += xc_psr.c
-CTRL_SRCS-$(CONFIG_X86) += xc_pagetab.c
-CTRL_SRCS-$(CONFIG_Linux) += xc_linux.c
-CTRL_SRCS-$(CONFIG_FreeBSD) += xc_freebsd.c
-CTRL_SRCS-$(CONFIG_SunOS) += xc_solaris.c
-CTRL_SRCS-$(CONFIG_NetBSD) += xc_netbsd.c
-CTRL_SRCS-$(CONFIG_NetBSDRump) += xc_netbsd.c
-CTRL_SRCS-$(CONFIG_MiniOS) += xc_minios.c
-CTRL_SRCS-y       += xc_evtchn_compat.c
-CTRL_SRCS-y       += xc_gnttab_compat.c
-CTRL_SRCS-y       += xc_devicemodel_compat.c
+LINK_FILES := xc_private.h xc_core.h xc_core_x86.h xc_core_arm.h xc_bitops.h
+
+$(LINK_FILES):
+	ln -sf $(XEN_ROOT)/tools/libs/ctrl/$(notdir $@) $@
 
 GUEST_SRCS-y :=
 GUEST_SRCS-y += xg_private.c
@@ -124,26 +87,14 @@ CFLAGS	+= $(CFLAGS_libxentoollog)
 CFLAGS	+= $(CFLAGS_libxenevtchn)
 CFLAGS	+= $(CFLAGS_libxendevicemodel)
 
-CTRL_LIB_OBJS := $(patsubst %.c,%.o,$(CTRL_SRCS-y))
-CTRL_PIC_OBJS := $(patsubst %.c,%.opic,$(CTRL_SRCS-y))
-
 GUEST_LIB_OBJS := $(patsubst %.c,%.o,$(GUEST_SRCS-y))
 GUEST_PIC_OBJS := $(patsubst %.c,%.opic,$(GUEST_SRCS-y))
 
-$(CTRL_LIB_OBJS) $(GUEST_LIB_OBJS) \
-$(CTRL_PIC_OBJS) $(GUEST_PIC_OBJS): CFLAGS += -include $(XEN_ROOT)/tools/config.h
+$(GUEST_LIB_OBJS) $(GUEST_PIC_OBJS): CFLAGS += -include $(XEN_ROOT)/tools/config.h
 
 # libxenguest includes xc_private.h, so needs this despite not using
 # this functionality directly.
-$(CTRL_LIB_OBJS) $(GUEST_LIB_OBJS) \
-$(CTRL_PIC_OBJS) $(GUEST_PIC_OBJS): CFLAGS += $(CFLAGS_libxencall) $(CFLAGS_libxenforeignmemory)
-
-$(CTRL_LIB_OBJS) $(CTRL_PIC_OBJS): CFLAGS += $(CFLAGS_libxengnttab)
-
-LIB := libxenctrl.a
-ifneq ($(nosharedlibs),y)
-LIB += libxenctrl.so libxenctrl.so.$(MAJOR) libxenctrl.so.$(MAJOR).$(MINOR)
-endif
+$(GUEST_LIB_OBJS) $(GUEST_PIC_OBJS): CFLAGS += $(CFLAGS_libxencall) $(CFLAGS_libxenforeignmemory)
 
 LIB += libxenguest.a
 ifneq ($(nosharedlibs),y)
@@ -155,10 +106,9 @@ $(eval $(genpath-target))
 
 xc_private.h: _paths.h
 
-$(CTRL_LIB_OBJS) $(GUEST_LIB_OBJS) \
-$(CTRL_PIC_OBJS) $(GUEST_PIC_OBJS): xc_private.h
+$(GUEST_LIB_OBJS) $(GUEST_PIC_OBJS): $(LINK_FILES)
 
-PKG_CONFIG := xencontrol.pc xenguest.pc
+PKG_CONFIG := xenguest.pc
 PKG_CONFIG_VERSION := $(MAJOR).$(MINOR)
 
 ifneq ($(CONFIG_LIBXC_MINIOS),y)
@@ -189,17 +139,11 @@ libs: $(LIB) $(PKG_CONFIG_INST) $(PKG_CONFIG_LOCAL)
 install: build
 	$(INSTALL_DIR) $(DESTDIR)$(libdir)
 	$(INSTALL_DIR) $(DESTDIR)$(includedir)
-	$(INSTALL_SHLIB) libxenctrl.so.$(MAJOR).$(MINOR) $(DESTDIR)$(libdir)
-	$(INSTALL_DATA) libxenctrl.a $(DESTDIR)$(libdir)
-	$(SYMLINK_SHLIB) libxenctrl.so.$(MAJOR).$(MINOR) $(DESTDIR)$(libdir)/libxenctrl.so.$(MAJOR)
-	$(SYMLINK_SHLIB) libxenctrl.so.$(MAJOR) $(DESTDIR)$(libdir)/libxenctrl.so
-	$(INSTALL_DATA) include/xenctrl.h include/xenctrl_compat.h include/xenctrl_dom.h $(DESTDIR)$(includedir)
 	$(INSTALL_SHLIB) libxenguest.so.$(MAJOR).$(MINOR) $(DESTDIR)$(libdir)
 	$(INSTALL_DATA) libxenguest.a $(DESTDIR)$(libdir)
 	$(SYMLINK_SHLIB) libxenguest.so.$(MAJOR).$(MINOR) $(DESTDIR)$(libdir)/libxenguest.so.$(MAJOR)
 	$(SYMLINK_SHLIB) libxenguest.so.$(MAJOR) $(DESTDIR)$(libdir)/libxenguest.so
 	$(INSTALL_DATA) include/xenguest.h $(DESTDIR)$(includedir)
-	$(INSTALL_DATA) xencontrol.pc $(DESTDIR)$(PKG_INSTALLDIR)
 	$(INSTALL_DATA) xenguest.pc $(DESTDIR)$(PKG_INSTALLDIR)
 
 .PHONY: uninstall
@@ -210,14 +154,6 @@ uninstall:
 	rm -f $(DESTDIR)$(libdir)/libxenguest.so.$(MAJOR)
 	rm -f $(DESTDIR)$(libdir)/libxenguest.so.$(MAJOR).$(MINOR)
 	rm -f $(DESTDIR)$(libdir)/libxenguest.a
-	rm -f $(DESTDIR)$(PKG_INSTALLDIR)/xencontrol.pc
-	rm -f $(DESTDIR)$(includedir)/xenctrl.h
-	rm -f $(DESTDIR)$(includedir)/xenctrl_compat.h
-	rm -f $(DESTDIR)$(includedir)/xenctrl_dom.h
-	rm -f $(DESTDIR)$(libdir)/libxenctrl.so
-	rm -f $(DESTDIR)$(libdir)/libxenctrl.so.$(MAJOR)
-	rm -f $(DESTDIR)$(libdir)/libxenctrl.so.$(MAJOR).$(MINOR)
-	rm -f $(DESTDIR)$(libdir)/libxenctrl.a
 
 .PHONY: TAGS
 TAGS:
@@ -227,8 +163,8 @@ TAGS:
 clean:
 	rm -rf *.rpm $(LIB) *~ $(DEPS_RM) \
             _paths.h \
-	    xencontrol.pc xenguest.pc \
-            $(CTRL_LIB_OBJS) $(CTRL_PIC_OBJS) \
+	    $(LINK_FILES) \
+	    xenguest.pc \
             $(GUEST_LIB_OBJS) $(GUEST_PIC_OBJS)
 
 .PHONY: distclean
@@ -244,19 +180,6 @@ rpm: build
 	mv staging/i386/*.rpm .
 	rm -rf staging
 
-# libxenctrl
-
-libxenctrl.a: $(CTRL_LIB_OBJS)
-	$(AR) rc $@ $^
-
-libxenctrl.so: libxenctrl.so.$(MAJOR)
-	$(SYMLINK_SHLIB) $< $@
-libxenctrl.so.$(MAJOR): libxenctrl.so.$(MAJOR).$(MINOR)
-	$(SYMLINK_SHLIB) $< $@
-
-libxenctrl.so.$(MAJOR).$(MINOR): $(CTRL_PIC_OBJS)
-	$(CC) $(LDFLAGS) $(PTHREAD_LDFLAGS) -Wl,$(SONAME_LDFLAG) -Wl,libxenctrl.so.$(MAJOR) $(SHLIB_LDFLAGS) -o $@ $^ $(LDLIBS_libxentoollog) $(LDLIBS_libxenevtchn) $(LDLIBS_libxengnttab) $(LDLIBS_libxencall) $(LDLIBS_libxenforeignmemory) $(LDLIBS_libxendevicemodel) $(PTHREAD_LIBS) $(APPEND_LDFLAGS)
-
 # libxenguest
 
 libxenguest.a: $(GUEST_LIB_OBJS)
@@ -277,7 +200,7 @@ xc_dom_bzimageloader.o: CFLAGS += $(filter -D%,$(zlib-options))
 xc_dom_bzimageloader.opic: CFLAGS += $(filter -D%,$(zlib-options))
 
 libxenguest.so.$(MAJOR).$(MINOR): COMPRESSION_LIBS = $(filter -l%,$(zlib-options))
-libxenguest.so.$(MAJOR).$(MINOR): $(GUEST_PIC_OBJS) libxenctrl.so
+libxenguest.so.$(MAJOR).$(MINOR): $(GUEST_PIC_OBJS)
 	$(CC) $(LDFLAGS) -Wl,$(SONAME_LDFLAG) -Wl,libxenguest.so.$(MAJOR) $(SHLIB_LDFLAGS) -o $@ $(GUEST_PIC_OBJS) $(COMPRESSION_LIBS) -lz $(LDLIBS_libxenevtchn) $(LDLIBS_libxenctrl) $(PTHREAD_LIBS) $(APPEND_LDFLAGS)
 
 -include $(DEPS_INCLUDE)
diff --git a/tools/libxl/Makefile b/tools/libxl/Makefile
index 38cd43abae..b1cd5d8ef2 100644
--- a/tools/libxl/Makefile
+++ b/tools/libxl/Makefile
@@ -188,7 +188,7 @@ libxl_dom.o: CFLAGS += -I$(XEN_ROOT)/tools  # include libacpi/x86.h
 libxl_x86_acpi.o: CFLAGS += -I$(XEN_ROOT)/tools
 
 SAVE_HELPER_OBJS = libxl_save_helper.o _libxl_save_msgs_helper.o
-$(SAVE_HELPER_OBJS): CFLAGS += $(CFLAGS_libxenctrl) $(CFLAGS_libxenevtchn)
+$(SAVE_HELPER_OBJS): CFLAGS += $(CFLAGS_libxenctrl) $(CFLAGS_libxenevtchn) $(CFLAGS_libxenguest)
 
 PKG_CONFIG = xenlight.pc xlutil.pc
 PKG_CONFIG_VERSION := $(MAJOR).$(MINOR)
diff --git a/tools/misc/Makefile b/tools/misc/Makefile
index 4e2e8f3b17..7d37f297a9 100644
--- a/tools/misc/Makefile
+++ b/tools/misc/Makefile
@@ -6,6 +6,7 @@ CFLAGS += -Werror
 CFLAGS += -include $(XEN_ROOT)/tools/config.h
 CFLAGS += $(CFLAGS_libxenevtchn)
 CFLAGS += $(CFLAGS_libxenctrl)
+CFLAGS += $(CFLAGS_libxenguest)
 CFLAGS += $(CFLAGS_xeninclude)
 CFLAGS += $(CFLAGS_libxenstore)
 
diff --git a/tools/python/Makefile b/tools/python/Makefile
index 8d22c03676..bd3d62a36c 100644
--- a/tools/python/Makefile
+++ b/tools/python/Makefile
@@ -33,7 +33,7 @@ uninstall:
 
 .PHONY: test
 test:
-	LD_LIBRARY_PATH=$$(readlink -f ../libxc):$$(readlink -f ../xenstore) $(PYTHON) -m unittest discover
+	LD_LIBRARY_PATH=$$(readlink -f ../libs/ctrl):$$(readlink -f ../libxc):$$(readlink -f ../xenstore) $(PYTHON) -m unittest discover
 
 .PHONY: clean
 clean:
diff --git a/tools/python/setup.py b/tools/python/setup.py
index 8faf1c0ddc..24b284af39 100644
--- a/tools/python/setup.py
+++ b/tools/python/setup.py
@@ -9,7 +9,7 @@ extra_compile_args  = [ "-fno-strict-aliasing", "-Werror" ]
 PATH_XEN      = XEN_ROOT + "/tools/include"
 PATH_LIBXENTOOLLOG = XEN_ROOT + "/tools/libs/toollog"
 PATH_LIBXENEVTCHN = XEN_ROOT + "/tools/libs/evtchn"
-PATH_LIBXC    = XEN_ROOT + "/tools/libxc"
+PATH_LIBXENCTRL = XEN_ROOT + "/tools/libs/ctrl"
 PATH_LIBXL    = XEN_ROOT + "/tools/libxl"
 PATH_XENSTORE = XEN_ROOT + "/tools/xenstore"
 
@@ -18,11 +18,11 @@ xc = Extension("xc",
                include_dirs       = [ PATH_XEN,
                                       PATH_LIBXENTOOLLOG + "/include",
                                       PATH_LIBXENEVTCHN + "/include",
-                                      PATH_LIBXC + "/include",
+                                      PATH_LIBXENCTRL + "/include",
                                       "xen/lowlevel/xc" ],
-               library_dirs       = [ PATH_LIBXC ],
-               libraries          = [ "xenctrl", "xenguest" ],
-               depends            = [ PATH_LIBXC + "/libxenctrl.so", PATH_LIBXC + "/libxenguest.so" ],
+               library_dirs       = [ PATH_LIBXENCTRL ],
+               libraries          = [ "xenctrl" ],
+               depends            = [ PATH_LIBXENCTRL + "/libxenctrl.so" ],
                extra_link_args    = [ "-Wl,-rpath-link="+PATH_LIBXENTOOLLOG ],
                sources            = [ "xen/lowlevel/xc/xc.c" ])
 
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Wed Jul 15 16:50:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jul 2020 16:50:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvkbw-0005ph-J0; Wed, 15 Jul 2020 16:50:24 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=cSQs=A2=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1jvkbv-0005pc-OO
 for xen-devel@lists.xenproject.org; Wed, 15 Jul 2020 16:50:23 +0000
X-Inumbo-ID: 45c2e292-c6bb-11ea-9421-12813bfff9fa
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 45c2e292-c6bb-11ea-9421-12813bfff9fa;
 Wed, 15 Jul 2020 16:50:22 +0000 (UTC)
Received: from localhost (c-67-164-102-47.hsd1.ca.comcast.net [67.164.102.47])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256
 bits)) (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 92EA020663;
 Wed, 15 Jul 2020 16:50:20 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
 s=default; t=1594831821;
 bh=MsnVDJvH5tGqYJ73GYPExLOKOnGhVsOzCRRS2prv6qE=;
 h=Date:From:To:cc:Subject:In-Reply-To:References:From;
 b=iXhnZldnLaOlvUXB59a0CIHvgBFHcScVxFSaP3tyQG3ufL0QxzRcLoST9StJWX3Di
 ocda536tl0fZuzmUmNSDuiUneCezmx4W4m3Ejgt2wrpfwuxZGZp3/b1t/OqACt1YKd
 HUSpZk0icpuIDdfQxUe/V+JEat/3q/GQkp7NHZEY=
Date: Wed, 15 Jul 2020 09:50:20 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: incoming+61544a64d0c2dc4555813e58f3810dd7@incoming.gitlab.com, 
 xen-devel@lists.xenproject.org
Subject: Re: [PATCH 00/12] tools: move more libraries into tools/libs
In-Reply-To: <20200715162511.5941-1-ian.jackson@eu.citrix.com>
Message-ID: <alpine.DEB.2.21.2007150945230.4124@sstabellini-ThinkPad-T480s>
References: <20200715162511.5941-1-ian.jackson@eu.citrix.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@dornerworks.com, Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien@xen.org>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, ian.jackson@eu.citrix.com,
 George Dunlap <george.dunlap@citrix.com>,
 =?UTF-8?Q?Marek_Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
 Stewart Hildebrand <stewart.hildebrand@dornerworks.com>,
 Josh Whitehead <josh.whitehead@dornerworks.com>,
 Jan Beulich <jbeulich@suse.com>, Anthony PERARD <anthony.perard@citrix.com>,
 Samuel Thibault <samuel.thibault@ens-lyon.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Wed, 15 Jul 2020, Ian Jackson wrote:
> [ NB: this patch series is actually from Juergen Gross.
> 
>   It is being experimentally handled as a Merge Request in gitlab, in
>   part to see what problems there are with that workflow that will
>   need extra tooling or whatever.
> 
>   I have manually generated this series using git-format-patch,
>   scripts/add_maintainers.pl, and git-send-email.  I expect that if we
>   adopt this as a real workflow, we will want to make a robot do some
>   of that.
> 
>   I have set replies to go to the Gitlab comment thread and to
>   xen-devel.  Again this is experimental.  We are likely to need
>   something to automatically collect acks, at the very least.
> 
>   Reviewers: for now, please review this series as normal.  You may
>   reply to the messages by email.  Please, for now, send your replies
>   to gitlab and to the mailing list.  I think I have set the reply-to
>   appropriately.
> 
>   Alternatively you may review the code in the gitlab web UI.  But
>   please do not use the line-by-line comment system: write only to the
>   main MR discussion thread.

Thanks for doing this Ian.

I am curious about this: why not the line-by-line comment system? It
looks like it would be the most similar to email comments. Is it
because comments made that way cannot be sent via email, while the main
MR discussion thread can?


From xen-devel-bounces@lists.xenproject.org Wed Jul 15 16:54:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jul 2020 16:54:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvkgJ-00060P-3n; Wed, 15 Jul 2020 16:54:55 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=cSQs=A2=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1jvkgI-00060K-17
 for xen-devel@lists.xenproject.org; Wed, 15 Jul 2020 16:54:54 +0000
X-Inumbo-ID: e73f0c9a-c6bb-11ea-9421-12813bfff9fa
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e73f0c9a-c6bb-11ea-9421-12813bfff9fa;
 Wed, 15 Jul 2020 16:54:53 +0000 (UTC)
Received: from localhost (c-67-164-102-47.hsd1.ca.comcast.net [67.164.102.47])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256
 bits)) (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 82FC620672;
 Wed, 15 Jul 2020 16:54:52 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
 s=default; t=1594832092;
 bh=vIE56xHguh9oMOrReykdU+3lDSXsqRwrprk5pkM7Qq8=;
 h=Date:From:To:cc:Subject:In-Reply-To:References:From;
 b=EYRa8hlmcmJjrPosothC9AU9Yq1muhce8TjFmOhQ27bNOWVe75q/fUiIMlT7NWiW2
 SC6b7jxXBNfDoE0SpKstnDLBoeI7S+FMvVNoIdvLWerKRndIbv0/OXCiR0rMYNjG6i
 m58e14AD5Xusv+q/X5aIzAqgDmC+DsQ0iKKxcTM0=
Date: Wed, 15 Jul 2020 09:54:52 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Jan Beulich <jbeulich@suse.com>
Subject: Re: [PATCH 4/8] Arm: prune #include-s needed by domain.h
In-Reply-To: <150525bb-1c48-c331-3212-eff18bc4c13d@suse.com>
Message-ID: <alpine.DEB.2.21.2007150954410.4124@sstabellini-ThinkPad-T480s>
References: <3375cacd-d3b7-9f06-44a7-4b684b6a77d6@suse.com>
 <150525bb-1c48-c331-3212-eff18bc4c13d@suse.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Wed, 15 Jul 2020, Jan Beulich wrote:
> asm/domain.h is a dependency of xen/sched.h, and hence should not itself
> include xen/sched.h. Nor should any of the other #include-s used by it.
> While at it, also drop two other #include-s that aren't needed by this
> particular header.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Stefano Stabellini <sstabellini@kernel.org>


> --- a/xen/include/asm-arm/domain.h
> +++ b/xen/include/asm-arm/domain.h
> @@ -2,7 +2,7 @@
>  #define __ASM_DOMAIN_H__
>  
>  #include <xen/cache.h>
> -#include <xen/sched.h>
> +#include <xen/timer.h>
>  #include <asm/page.h>
>  #include <asm/p2m.h>
>  #include <asm/vfp.h>
> @@ -11,8 +11,6 @@
>  #include <asm/vgic.h>
>  #include <asm/vpl011.h>
>  #include <public/hvm/params.h>
> -#include <xen/serial.h>
> -#include <xen/rbtree.h>
>  
>  struct hvm_domain
>  {
> --- a/xen/include/asm-arm/vfp.h
> +++ b/xen/include/asm-arm/vfp.h
> @@ -1,7 +1,7 @@
>  #ifndef _ASM_VFP_H
>  #define _ASM_VFP_H
>  
> -#include <xen/sched.h>
> +struct vcpu;
>  
>  #if defined(CONFIG_ARM_32)
>  # include <asm/arm32/vfp.h>
> 


From xen-devel-bounces@lists.xenproject.org Wed Jul 15 16:55:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jul 2020 16:55:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvkgU-00061k-CA; Wed, 15 Jul 2020 16:55:06 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3pvp=A2=citrix.com=ian.jackson@srs-us1.protection.inumbo.net>)
 id 1jvkgS-00061R-Gs
 for xen-devel@lists.xenproject.org; Wed, 15 Jul 2020 16:55:04 +0000
X-Inumbo-ID: ecd92744-c6bb-11ea-9422-12813bfff9fa
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ecd92744-c6bb-11ea-9422-12813bfff9fa;
 Wed, 15 Jul 2020 16:55:02 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1594832102;
 h=from:mime-version:content-transfer-encoding:message-id:
 date:to:cc:subject:in-reply-to:references;
 bh=RxF184ngZ84smV8JN0d/jaLXpcSNUgsTt6IavUVihR0=;
 b=O/n8DfCLW84B33YzY5TIU6C5IEL1bne3ypuPtnIQaJfzCvGIFA88Ticu
 os3/KFbW91mCTgf2uA9j9axyEm+MoCexPTbCqnoUDN+PoriJO0ZbBJ7r5
 bnSPmN1sGcCsFmuMjpKHWWM5Y8dZGNeYKPDJ4UrN/aEmy8vt10iD8qwBw s=;
Authentication-Results: esa5.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: D//hff2Xhv9zoJJFxuJV/vGaAUxVcrPadZJsfFp3kiHBsWbupTjq78KkBBxu8ilYLsu9QuXNor
 iukMqaGgsnSXqEM+xn1SQyQbej9iBGA/L3nUQGf7nt1GecaNBA6ubvz7eJJUY1eo2UnY9RgcIU
 nNTSdBdrB679HcS7K+dQOAKnXOluvvFUqLSw+4A2ITUZAj/ZQ4n+fYvjOLRFcx0aEM+9qGQliM
 eN12qqv2xRmILQTlpw4paxg5As/NnPGwq7bOx3GZv/V+dNA6hvudUqWjFXtQlPDShXTXK5OLPn
 Dic=
X-SBRS: 2.7
X-MesageID: 22652175
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,355,1589256000"; d="scan'208";a="22652175"
From: Ian Jackson <ian.jackson@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Message-ID: <24335.13539.223285.885690@mariner.uk.xensource.com>
Date: Wed, 15 Jul 2020 17:54:59 +0100
To: "incoming+61544a64d0c2dc4555813e58f3810dd7@incoming.gitlab.com"
 <incoming+61544a64d0c2dc4555813e58f3810dd7@incoming.gitlab.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH 04/12] tools: don't call make recursively from libs.mk
In-Reply-To: <20200715162511.5941-6-ian.jackson@eu.citrix.com>
References: <20200715162511.5941-1-ian.jackson@eu.citrix.com>
 <20200715162511.5941-6-ian.jackson@eu.citrix.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>, Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Ian Jackson writes ("[PATCH 04/12] tools: don't call make recursively from libs.mk"):
> From: Juergen Gross <jgross@suse.com>
> 
> During the build of a xen library, make is called again via libs.mk. This is
> not necessary, as the same can be achieved by a simple dependency.

Reviewed-by: Ian Jackson <ian.jackson@eu.citrix.com>


From xen-devel-bounces@lists.xenproject.org Wed Jul 15 16:55:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jul 2020 16:55:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvkgX-00062m-Ko; Wed, 15 Jul 2020 16:55:09 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=o5qj=A2=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jvkgW-00062T-1z
 for xen-devel@lists.xenproject.org; Wed, 15 Jul 2020 16:55:08 +0000
X-Inumbo-ID: eee4a2ac-c6bb-11ea-b7bb-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id eee4a2ac-c6bb-11ea-b7bb-bc764e2007e4;
 Wed, 15 Jul 2020 16:55:06 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jvkDy-0001sU-JH; Wed, 15 Jul 2020 17:25:38 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH 09/12] tools: split libxenstore into new tools/libs/store
 directory
Date: Wed, 15 Jul 2020 17:25:08 +0100
Message-Id: <20200715162511.5941-11-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200715162511.5941-1-ian.jackson@eu.citrix.com>
References: <20200715162511.5941-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Reply-To: incoming+61544a64d0c2dc4555813e58f3810dd7@incoming.gitlab.com,
 xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 ian.jackson@eu.citrix.com, George Dunlap <george.dunlap@citrix.com>,
 =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?=
 <marmarek@invisiblethingslab.com>, Jan Beulich <jbeulich@suse.com>,
 Samuel Thibault <samuel.thibault@ens-lyon.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Juergen Gross <jgross@suse.com>

There is no reason why libxenstore should not live in the tools/libs
directory.

The files shared between libxenstore and xenstored are kept in the
tools/xenstore directory so they remain easily accessible to
xenstore-stubdom, which needs the xenstored files in order to build.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 .gitignore                                    |  7 +-
 stubdom/mini-os.mk                            |  2 +-
 tools/Makefile                                |  8 +-
 tools/Rules.mk                                |  2 +-
 tools/libs/Makefile                           |  1 +
 tools/libs/store/Makefile                     | 66 +++++++++++++++
 .../store}/include/compat/xs.h                |  0
 .../store}/include/compat/xs_lib.h            |  0
 .../store}/include/xenstore.h                 |  0
 tools/libs/store/libxenstore.map              | 49 +++++++++++
 tools/{xenstore => libs/store}/xenstore.pc.in |  0
 tools/{xenstore => libs/store}/xs.c           |  0
 tools/python/setup.py                         |  2 +-
 tools/xenstore/Makefile                       | 82 +------------------
 tools/xenstore/{include => }/xenstore_lib.h   |  0
 15 files changed, 133 insertions(+), 86 deletions(-)
 create mode 100644 tools/libs/store/Makefile
 rename tools/{xenstore => libs/store}/include/compat/xs.h (100%)
 rename tools/{xenstore => libs/store}/include/compat/xs_lib.h (100%)
 rename tools/{xenstore => libs/store}/include/xenstore.h (100%)
 create mode 100644 tools/libs/store/libxenstore.map
 rename tools/{xenstore => libs/store}/xenstore.pc.in (100%)
 rename tools/{xenstore => libs/store}/xs.c (100%)
 rename tools/xenstore/{include => }/xenstore_lib.h (100%)

diff --git a/.gitignore b/.gitignore
index e28c21641c..dad694a979 100644
--- a/.gitignore
+++ b/.gitignore
@@ -120,6 +120,12 @@ tools/libs/foreignmemory/headers.chk
 tools/libs/foreignmemory/xenforeignmemory.pc
 tools/libs/devicemodel/headers.chk
 tools/libs/devicemodel/xendevicemodel.pc
+tools/libs/store/headers.chk
+tools/libs/store/list.h
+tools/libs/store/utils.h
+tools/libs/store/xenstore.pc
+tools/libs/store/xs_lib.c
+tools/libs/store/include/xenstore_lib.h
 tools/console/xenconsole
 tools/console/xenconsoled
 tools/console/client/_paths.h
@@ -280,7 +286,6 @@ tools/xenstore/xenstore-control
 tools/xenstore/xenstore-ls
 tools/xenstore/xenstored
 tools/xenstore/xenstored_test
-tools/xenstore/xenstore.pc
 tools/xenstore/xs_tdb_dump
 tools/xentrace/xentrace_setsize
 tools/xentrace/tbctl
diff --git a/stubdom/mini-os.mk b/stubdom/mini-os.mk
index b1387df3f8..55b26c9517 100644
--- a/stubdom/mini-os.mk
+++ b/stubdom/mini-os.mk
@@ -5,7 +5,7 @@
 # XEN_ROOT
 # MINIOS_TARGET_ARCH
 
-XENSTORE_CPPFLAGS = -isystem $(XEN_ROOT)/tools/xenstore/include
+XENSTORE_CPPFLAGS = -isystem $(XEN_ROOT)/tools/libs/store/include
 TOOLCORE_PATH = $(XEN_ROOT)/stubdom/libs-$(MINIOS_TARGET_ARCH)/toolcore
 TOOLLOG_PATH = $(XEN_ROOT)/stubdom/libs-$(MINIOS_TARGET_ARCH)/toollog
 EVTCHN_PATH = $(XEN_ROOT)/stubdom/libs-$(MINIOS_TARGET_ARCH)/evtchn
diff --git a/tools/Makefile b/tools/Makefile
index 7c62c599dd..88231856d7 100644
--- a/tools/Makefile
+++ b/tools/Makefile
@@ -47,7 +47,7 @@ SUBDIRS-$(OCAML_TOOLS) += ocaml
 endif
 
 ifeq ($(CONFIG_RUMP),y)
-SUBDIRS-y := libs libxc xenstore
+SUBDIRS-y := libs libxc
 endif
 
 # For the sake of linking, set the sys-root
@@ -256,12 +256,12 @@ subdir-all-qemu-xen-dir: qemu-xen-dir-find
 		-I$(XEN_ROOT)/tools/libs/foreignmemory/include \
 		-I$(XEN_ROOT)/tools/libs/devicemodel/include \
 		-I$(XEN_ROOT)/tools/libs/ctrl/include \
+		-I$(XEN_ROOT)/tools/libs/store/include \
+		-I$(XEN_ROOT)/tools/libs/store/compat/include \
 		-I$(XEN_ROOT)/tools/libxc/include \
-		-I$(XEN_ROOT)/tools/xenstore/include \
-		-I$(XEN_ROOT)/tools/xenstore/compat/include \
 		$(EXTRA_CFLAGS_QEMU_XEN)" \
 		--extra-ldflags="-L$(XEN_ROOT)/tools/libxc \
-		-L$(XEN_ROOT)/tools/xenstore \
+		-L$(XEN_ROOT)/tools/libs/store \
 		-L$(XEN_ROOT)/tools/libs/toolcore \
 		-L$(XEN_ROOT)/tools/libs/evtchn \
 		-L$(XEN_ROOT)/tools/libs/gnttab \
diff --git a/tools/Rules.mk b/tools/Rules.mk
index 6bc3347a1c..cd1b49bca8 100644
--- a/tools/Rules.mk
+++ b/tools/Rules.mk
@@ -21,11 +21,11 @@ XEN_libxenforeignmemory = $(XEN_ROOT)/tools/libs/foreignmemory
 XEN_libxendevicemodel = $(XEN_ROOT)/tools/libs/devicemodel
 XEN_libxenhypfs    = $(XEN_ROOT)/tools/libs/hypfs
 XEN_libxenctrl     = $(XEN_ROOT)/tools/libs/ctrl
+XEN_libxenstore    = $(XEN_ROOT)/tools/libs/store
 XEN_libxenguest    = $(XEN_ROOT)/tools/libxc
 XEN_libxenlight    = $(XEN_ROOT)/tools/libxl
 # Currently libxlutil lives in the same directory as libxenlight
 XEN_libxlutil      = $(XEN_libxenlight)
-XEN_libxenstore    = $(XEN_ROOT)/tools/xenstore
 XEN_libxenstat     = $(XEN_ROOT)/tools/xenstat/libxenstat/src
 XEN_libxenvchan    = $(XEN_ROOT)/tools/libvchan
 
diff --git a/tools/libs/Makefile b/tools/libs/Makefile
index 7648ea0e4c..27a7df9b31 100644
--- a/tools/libs/Makefile
+++ b/tools/libs/Makefile
@@ -11,6 +11,7 @@ SUBDIRS-y += foreignmemory
 SUBDIRS-y += devicemodel
 SUBDIRS-y += ctrl
 SUBDIRS-y += hypfs
+SUBDIRS-y += store
 
 ifeq ($(CONFIG_RUMP),y)
 SUBDIRS-y := toolcore
diff --git a/tools/libs/store/Makefile b/tools/libs/store/Makefile
new file mode 100644
index 0000000000..76b30145cf
--- /dev/null
+++ b/tools/libs/store/Makefile
@@ -0,0 +1,66 @@
+XEN_ROOT=$(CURDIR)/../../..
+include $(XEN_ROOT)/tools/Rules.mk
+
+MAJOR = 3.0
+MINOR = 3
+LIBNAME  := store
+USELIBS  := toolcore
+
+ifeq ($(CONFIG_Linux),y)
+APPEND_LDFLAGS += -ldl
+endif
+
+SRCS-y   += xs_lib.c
+SRCS-y   += xs.c
+
+LIBHEADER = xenstore.h xenstore_lib.h
+
+include ../libs.mk
+
+# Include configure output (config.h)
+CFLAGS += -include $(XEN_ROOT)/tools/config.h
+CFLAGS += $(CFLAGS_libxentoolcore)
+CFLAGS += -DXEN_LIB_STORED="\"$(XEN_LIB_STORED)\""
+CFLAGS += -DXEN_RUN_STORED="\"$(XEN_RUN_STORED)\""
+
+LINK_FILES = xs_lib.c include/xenstore_lib.h list.h utils.h
+
+$(LIB_OBJS): $(LINK_FILES)
+
+$(LINK_FILES):
+	ln -sf $(XEN_ROOT)/tools/xenstore/$(notdir $@) $@
+
+xs.opic: CFLAGS += -DUSE_PTHREAD
+ifeq ($(CONFIG_Linux),y)
+xs.opic: CFLAGS += -DUSE_DLSYM
+else
+PKG_CONFIG_REMOVE += -ldl
+endif
+
+$(PKG_CONFIG_LOCAL): PKG_CONFIG_INCDIR = $(XEN_libxenstore)/include
+$(PKG_CONFIG_LOCAL): PKG_CONFIG_CFLAGS_LOCAL = $(CFLAGS_xeninclude)
+
+.PHONY: install
+install: install-headers
+
+.PHONY: install-headers
+install-headers:
+	$(INSTALL_DIR) $(DESTDIR)$(includedir)
+	$(INSTALL_DIR) $(DESTDIR)$(includedir)/xenstore-compat
+	$(INSTALL_DATA) include/compat/xs.h $(DESTDIR)$(includedir)/xenstore-compat/xs.h
+	$(INSTALL_DATA) include/compat/xs_lib.h $(DESTDIR)$(includedir)/xenstore-compat/xs_lib.h
+	ln -sf xenstore-compat/xs.h  $(DESTDIR)$(includedir)/xs.h
+	ln -sf xenstore-compat/xs_lib.h $(DESTDIR)$(includedir)/xs_lib.h
+
+.PHONY: uninstall
+uninstall: uninstall-headers
+
+.PHONY: uninstall-headers
+uninstall-headers:
+	rm -f $(DESTDIR)$(includedir)/xs_lib.h
+	rm -f $(DESTDIR)$(includedir)/xs.h
+	rm -f $(DESTDIR)$(includedir)/xenstore-compat/xs_lib.h
+	rm -f $(DESTDIR)$(includedir)/xenstore-compat/xs.h
+	if [ -d $(DESTDIR)$(includedir)/xenstore-compat ]; then \
+		rmdir --ignore-fail-on-non-empty $(DESTDIR)$(includedir)/xenstore-compat; \
+	fi
diff --git a/tools/xenstore/include/compat/xs.h b/tools/libs/store/include/compat/xs.h
similarity index 100%
rename from tools/xenstore/include/compat/xs.h
rename to tools/libs/store/include/compat/xs.h
diff --git a/tools/xenstore/include/compat/xs_lib.h b/tools/libs/store/include/compat/xs_lib.h
similarity index 100%
rename from tools/xenstore/include/compat/xs_lib.h
rename to tools/libs/store/include/compat/xs_lib.h
diff --git a/tools/xenstore/include/xenstore.h b/tools/libs/store/include/xenstore.h
similarity index 100%
rename from tools/xenstore/include/xenstore.h
rename to tools/libs/store/include/xenstore.h
diff --git a/tools/libs/store/libxenstore.map b/tools/libs/store/libxenstore.map
new file mode 100644
index 0000000000..9854305a2c
--- /dev/null
+++ b/tools/libs/store/libxenstore.map
@@ -0,0 +1,49 @@
+VERS_3.0.3 {
+	global:
+		xs_open;
+		xs_close;
+		xs_daemon_open;
+		xs_domain_open;
+		xs_daemon_open_readonly;
+		xs_daemon_close;
+		xs_daemon_destroy_postfork;
+		xs_directory;
+		xs_read;
+		xs_write;
+		xs_mkdir;
+		xs_rm;
+		xs_restrict;
+		xs_get_permissions;
+		xs_set_permissions;
+		xs_watch;
+		xs_fileno;
+		xs_check_watch;
+		xs_read_watch;
+		xs_unwatch;
+		xs_transaction_start;
+		xs_transaction_end;
+		xs_introduce_domain;
+		xs_set_target;
+		xs_resume_domain;
+		xs_release_domain;
+		xs_get_domain_path;
+		xs_path_is_subpath;
+		xs_is_domain_introduced;
+		xs_control_command;
+		xs_debug_command;
+		xs_suspend_evtchn_port;
+		xs_daemon_rootdir;
+		xs_daemon_rundir;
+		xs_daemon_socket;
+		xs_daemon_socket_ro;
+		xs_domain_dev;
+		xs_daemon_tdb;
+		xs_write_all;
+		xs_strings_to_perms;
+		xs_perm_to_string;
+		xs_count_strings;
+		expanding_buffer_ensure;
+		sanitise_value;
+		unsanitise_value;
+	local: *; /* Do not expose anything by default */
+};
diff --git a/tools/xenstore/xenstore.pc.in b/tools/libs/store/xenstore.pc.in
similarity index 100%
rename from tools/xenstore/xenstore.pc.in
rename to tools/libs/store/xenstore.pc.in
diff --git a/tools/xenstore/xs.c b/tools/libs/store/xs.c
similarity index 100%
rename from tools/xenstore/xs.c
rename to tools/libs/store/xs.c
diff --git a/tools/python/setup.py b/tools/python/setup.py
index 24b284af39..8254464aff 100644
--- a/tools/python/setup.py
+++ b/tools/python/setup.py
@@ -11,7 +11,7 @@ PATH_LIBXENTOOLLOG = XEN_ROOT + "/tools/libs/toollog"
 PATH_LIBXENEVTCHN = XEN_ROOT + "/tools/libs/evtchn"
 PATH_LIBXENCTRL = XEN_ROOT + "/tools/libs/ctrl"
 PATH_LIBXL    = XEN_ROOT + "/tools/libxl"
-PATH_XENSTORE = XEN_ROOT + "/tools/xenstore"
+PATH_XENSTORE = XEN_ROOT + "/tools/libs/store"
 
 xc = Extension("xc",
                extra_compile_args = extra_compile_args,
diff --git a/tools/xenstore/Makefile b/tools/xenstore/Makefile
index 0a64ac1571..9a0f0d012d 100644
--- a/tools/xenstore/Makefile
+++ b/tools/xenstore/Makefile
@@ -34,17 +34,7 @@ XENSTORED_OBJS_$(CONFIG_MiniOS) = xenstored_minios.o
 XENSTORED_OBJS += $(XENSTORED_OBJS_y)
 LDLIBS_xenstored += -lrt
 
-ifneq ($(XENSTORE_STATIC_CLIENTS),y)
-LIBXENSTORE := libxenstore.so
-else
-LIBXENSTORE := libxenstore.a
-xenstore xenstore-control: CFLAGS += -static
-endif
-
-ALL_TARGETS = libxenstore.a clients
-ifneq ($(nosharedlibs),y)
-ALL_TARGETS += libxenstore.so
-endif
+ALL_TARGETS = clients
 ifeq ($(XENSTORE_XENSTORED),y)
 ALL_TARGETS += xs_tdb_dump xenstored
 endif
@@ -87,60 +77,21 @@ xenstored.a: $(XENSTORED_OBJS)
 $(CLIENTS): xenstore
 	ln -f xenstore $@
 
-xenstore: xenstore_client.o $(LIBXENSTORE)
+xenstore: xenstore_client.o
 	$(CC) $< $(LDFLAGS) $(LDLIBS_libxenstore) $(LDLIBS_libxentoolcore) $(SOCKET_LIBS) -o $@ $(APPEND_LDFLAGS)
 
-xenstore-control: xenstore_control.o $(LIBXENSTORE)
+xenstore-control: xenstore_control.o
 	$(CC) $< $(LDFLAGS) $(LDLIBS_libxenstore) $(LDLIBS_libxentoolcore) $(SOCKET_LIBS) -o $@ $(APPEND_LDFLAGS)
 
 xs_tdb_dump: xs_tdb_dump.o utils.o tdb.o talloc.o
 	$(CC) $^ $(LDFLAGS) -o $@ $(APPEND_LDFLAGS)
 
-libxenstore.so: libxenstore.so.$(MAJOR)
-	ln -sf $< $@
-libxenstore.so.$(MAJOR): libxenstore.so.$(MAJOR).$(MINOR)
-	ln -sf $< $@
-
-xs.opic: CFLAGS += -DUSE_PTHREAD
-ifeq ($(CONFIG_Linux),y)
-xs.opic: CFLAGS += -DUSE_DLSYM
-libxenstore.so.$(MAJOR).$(MINOR): APPEND_LDFLAGS += -ldl
-else
-PKG_CONFIG_REMOVE += -ldl
-endif
-
-libxenstore.so.$(MAJOR).$(MINOR): xs.opic xs_lib.opic
-	$(CC) $(LDFLAGS) $(PTHREAD_LDFLAGS) -Wl,$(SONAME_LDFLAG) -Wl,libxenstore.so.$(MAJOR) $(SHLIB_LDFLAGS) -o $@ $^ $(LDLIBS_libxentoolcore) $(SOCKET_LIBS) $(PTHREAD_LIBS) $(APPEND_LDFLAGS)
-
-libxenstore.a: xs.o xs_lib.o
-	$(AR) rcs $@ $^
-
-PKG_CONFIG := xenstore.pc
-PKG_CONFIG_VERSION := $(MAJOR).$(MINOR)
-
-ifneq ($(CONFIG_LIBXC_MINIOS),y)
-PKG_CONFIG_INST := $(PKG_CONFIG)
-$(PKG_CONFIG_INST): PKG_CONFIG_PREFIX = $(prefix)
-$(PKG_CONFIG_INST): PKG_CONFIG_INCDIR = $(includedir)
-$(PKG_CONFIG_INST): PKG_CONFIG_LIBDIR = $(libdir)
-endif
-
-PKG_CONFIG_LOCAL := $(foreach pc,$(PKG_CONFIG),$(PKG_CONFIG_DIR)/$(pc))
-
-$(PKG_CONFIG_LOCAL): PKG_CONFIG_PREFIX = $(XEN_ROOT)
-$(PKG_CONFIG_LOCAL): PKG_CONFIG_INCDIR = $(XEN_libxenstore)/include
-$(PKG_CONFIG_LOCAL): PKG_CONFIG_LIBDIR = $(CURDIR)
-$(PKG_CONFIG_LOCAL): PKG_CONFIG_CFLAGS_LOCAL = $(CFLAGS_xeninclude)
-
-$(LIBXENSTORE): $(PKG_CONFIG_INST) $(PKG_CONFIG_LOCAL)
-
 .PHONY: clean
 clean:
-	rm -f *.a *.o *.opic *.so* xenstored_probes.h
+	rm -f *.a *.o xenstored_probes.h
 	rm -f xenstored xs_random xs_stress xs_crashme
 	rm -f xs_tdb_dump xenstore-control init-xenstore-domain
 	rm -f xenstore $(CLIENTS)
-	rm -f xenstore.pc
 	$(RM) $(DEPS_RM)
 
 .PHONY: distclean
@@ -157,8 +108,6 @@ tarball: clean
 .PHONY: install
 install: all
 	$(INSTALL_DIR) $(DESTDIR)$(bindir)
-	$(INSTALL_DIR) $(DESTDIR)$(includedir)
-	$(INSTALL_DIR) $(DESTDIR)$(includedir)/xenstore-compat
 ifeq ($(XENSTORE_XENSTORED),y)
 	$(INSTALL_DIR) $(DESTDIR)$(sbindir)
 	$(INSTALL_DIR) $(DESTDIR)$(XEN_LIB_STORED)
@@ -169,32 +118,9 @@ endif
 	set -e ; for c in $(CLIENTS) ; do \
 		ln -f $(DESTDIR)$(bindir)/xenstore $(DESTDIR)$(bindir)/$${c} ; \
 	done
-	$(INSTALL_DIR) $(DESTDIR)$(libdir)
-	$(INSTALL_SHLIB) libxenstore.so.$(MAJOR).$(MINOR) $(DESTDIR)$(libdir)
-	ln -sf libxenstore.so.$(MAJOR).$(MINOR) $(DESTDIR)$(libdir)/libxenstore.so.$(MAJOR)
-	ln -sf libxenstore.so.$(MAJOR) $(DESTDIR)$(libdir)/libxenstore.so
-	$(INSTALL_DATA) libxenstore.a $(DESTDIR)$(libdir)
-	$(INSTALL_DATA) include/xenstore.h $(DESTDIR)$(includedir)
-	$(INSTALL_DATA) include/xenstore_lib.h $(DESTDIR)$(includedir)
-	$(INSTALL_DATA) include/compat/xs.h $(DESTDIR)$(includedir)/xenstore-compat/xs.h
-	$(INSTALL_DATA) include/compat/xs_lib.h $(DESTDIR)$(includedir)/xenstore-compat/xs_lib.h
-	ln -sf xenstore-compat/xs.h  $(DESTDIR)$(includedir)/xs.h
-	ln -sf xenstore-compat/xs_lib.h $(DESTDIR)$(includedir)/xs_lib.h
-	$(INSTALL_DATA) xenstore.pc $(DESTDIR)$(PKG_INSTALLDIR)
 
 .PHONY: uninstall
 uninstall:
-	rm -f $(DESTDIR)$(PKG_INSTALLDIR)/xenstore.pc
-	rm -f $(DESTDIR)$(includedir)/xs_lib.h
-	rm -f $(DESTDIR)$(includedir)/xs.h
-	rm -f $(DESTDIR)$(includedir)/xenstore-compat/xs_lib.h
-	rm -f $(DESTDIR)$(includedir)/xenstore-compat/xs.h
-	rm -f $(DESTDIR)$(includedir)/xenstore_lib.h
-	rm -f $(DESTDIR)$(includedir)/xenstore.h
-	rm -f $(DESTDIR)$(libdir)/libxenstore.a
-	rm -f $(DESTDIR)$(libdir)/libxenstore.so
-	rm -f $(DESTDIR)$(libdir)/libxenstore.so.$(MAJOR)
-	rm -f $(DESTDIR)$(libdir)/libxenstore.so.$(MAJOR).$(MINOR)
 	rm -f $(addprefix $(DESTDIR)$(bindir)/, $(CLIENTS))
 	rm -f $(DESTDIR)$(bindir)/xenstore
 	rm -f $(DESTDIR)$(bindir)/xenstore-control
diff --git a/tools/xenstore/include/xenstore_lib.h b/tools/xenstore/xenstore_lib.h
similarity index 100%
rename from tools/xenstore/include/xenstore_lib.h
rename to tools/xenstore/xenstore_lib.h
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Wed Jul 15 16:55:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jul 2020 16:55:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvkgg-00065E-UZ; Wed, 15 Jul 2020 16:55:18 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=o5qj=A2=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jvkgf-00064u-Oc
 for xen-devel@lists.xenproject.org; Wed, 15 Jul 2020 16:55:17 +0000
X-Inumbo-ID: f5198b9c-c6bb-11ea-8496-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f5198b9c-c6bb-11ea-8496-bc764e2007e4;
 Wed, 15 Jul 2020 16:55:16 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jvkDz-0001sU-2q; Wed, 15 Jul 2020 17:25:39 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH 10/12] tools: split libxenvchan into new tools/libs/vchan
 directory
Date: Wed, 15 Jul 2020 17:25:09 +0100
Message-Id: <20200715162511.5941-12-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200715162511.5941-1-ian.jackson@eu.citrix.com>
References: <20200715162511.5941-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Reply-To: incoming+61544a64d0c2dc4555813e58f3810dd7@incoming.gitlab.com,
 xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 ian.jackson@eu.citrix.com, George Dunlap <george.dunlap@citrix.com>,
 Jan Beulich <jbeulich@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Juergen Gross <jgross@suse.com>

There is no reason why libxenvchan should not live in the tools/libs
directory.

At the same time move libxenvchan.h to a dedicated include directory
in tools/libs/vchan in order to follow the same pattern as the other
libraries in tools/libs.

As tools/libvchan no longer contains a library, rename it to
tools/vchan.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 .gitignore                                    |  7 +-
 tools/Makefile                                |  2 +-
 tools/Rules.mk                                |  4 +-
 tools/libs/Makefile                           |  1 +
 tools/libs/vchan/Makefile                     | 19 ++++
 .../vchan/include}/libxenvchan.h              |  0
 tools/{libvchan => libs/vchan}/init.c         |  0
 tools/{libvchan => libs/vchan}/io.c           |  0
 tools/libs/vchan/libxenvchan.map              | 16 ++++
 tools/{libvchan => libs/vchan}/xenvchan.pc.in |  0
 tools/libvchan/Makefile                       | 95 -------------------
 tools/vchan/Makefile                          | 37 ++++++++
 tools/{libvchan => vchan}/node-select.c       |  0
 tools/{libvchan => vchan}/node.c              |  0
 .../{libvchan => vchan}/vchan-socket-proxy.c  |  0
 15 files changed, 80 insertions(+), 101 deletions(-)
 create mode 100644 tools/libs/vchan/Makefile
 rename tools/{libvchan => libs/vchan/include}/libxenvchan.h (100%)
 rename tools/{libvchan => libs/vchan}/init.c (100%)
 rename tools/{libvchan => libs/vchan}/io.c (100%)
 create mode 100644 tools/libs/vchan/libxenvchan.map
 rename tools/{libvchan => libs/vchan}/xenvchan.pc.in (100%)
 delete mode 100644 tools/libvchan/Makefile
 create mode 100644 tools/vchan/Makefile
 rename tools/{libvchan => vchan}/node-select.c (100%)
 rename tools/{libvchan => vchan}/node.c (100%)
 rename tools/{libvchan => vchan}/vchan-socket-proxy.c (100%)

diff --git a/.gitignore b/.gitignore
index dad694a979..3e130b0596 100644
--- a/.gitignore
+++ b/.gitignore
@@ -126,6 +126,8 @@ tools/libs/store/utils.h
 tools/libs/store/xenstore.pc
 tools/libs/store/xs_lib.c
 tools/libs/store/include/xenstore_lib.h
+tools/libs/vchan/headers.chk
+tools/libs/vchan/xenvchan.pc
 tools/console/xenconsole
 tools/console/xenconsoled
 tools/console/client/_paths.h
@@ -201,7 +203,6 @@ tools/include/xen/*
 tools/include/xen-xsm/*
 tools/include/xen-foreign/*.(c|h|size)
 tools/include/xen-foreign/checker
-tools/libvchan/xenvchan.pc
 tools/libxc/*.pc
 tools/libxc/xc_bitops.h
 tools/libxc/xc_core.h
@@ -387,8 +388,8 @@ tools/misc/xenhypfs
 tools/misc/xenwatchdogd
 tools/misc/xen-hvmcrash
 tools/misc/xen-lowmemd
-tools/libvchan/vchan-node[12]
-tools/libvchan/vchan-socket-proxy
+tools/vchan/vchan-node[12]
+tools/vchan/vchan-socket-proxy
 tools/ocaml/*/.ocamldep.make
 tools/ocaml/*/*.cm[ixao]
 tools/ocaml/*/*.cmxa
diff --git a/tools/Makefile b/tools/Makefile
index 88231856d7..ed119fffd7 100644
--- a/tools/Makefile
+++ b/tools/Makefile
@@ -21,7 +21,7 @@ SUBDIRS-y += xenmon
 SUBDIRS-y += xenstat
 SUBDIRS-$(CONFIG_NetBSD) += xenbackendd
 SUBDIRS-y += libfsimage
-SUBDIRS-$(CONFIG_Linux) += libvchan
+SUBDIRS-$(CONFIG_Linux) += vchan
 
 # do not recurse in to a dir we are about to delete
 ifneq "$(MAKECMDGOALS)" "distclean"
diff --git a/tools/Rules.mk b/tools/Rules.mk
index cd1b49bca8..30ee379484 100644
--- a/tools/Rules.mk
+++ b/tools/Rules.mk
@@ -22,12 +22,12 @@ XEN_libxendevicemodel = $(XEN_ROOT)/tools/libs/devicemodel
 XEN_libxenhypfs    = $(XEN_ROOT)/tools/libs/hypfs
 XEN_libxenctrl     = $(XEN_ROOT)/tools/libs/ctrl
 XEN_libxenstore    = $(XEN_ROOT)/tools/libs/store
+XEN_libxenvchan    = $(XEN_ROOT)/tools/libs/vchan
 XEN_libxenguest    = $(XEN_ROOT)/tools/libxc
 XEN_libxenlight    = $(XEN_ROOT)/tools/libxl
 # Currently libxlutil lives in the same directory as libxenlight
 XEN_libxlutil      = $(XEN_libxenlight)
 XEN_libxenstat     = $(XEN_ROOT)/tools/xenstat/libxenstat/src
-XEN_libxenvchan    = $(XEN_ROOT)/tools/libvchan
 
 CFLAGS_xeninclude = -I$(XEN_INCLUDE)
 
@@ -163,7 +163,7 @@ SHDEPS_libxenstat  = $(SHLIB_libxenctrl) $(SHLIB_libxenstore)
 LDLIBS_libxenstat  = $(SHDEPS_libxenstat) $(XEN_libxenstat)/libxenstat$(libextension)
 SHLIB_libxenstat   = $(SHDEPS_libxenstat) -Wl,-rpath-link=$(XEN_libxenstat)
 
-CFLAGS_libxenvchan = -I$(XEN_libxenvchan) $(CFLAGS_libxengnttab) $(CFLAGS_libxenevtchn)
+CFLAGS_libxenvchan = -I$(XEN_libxenvchan)/include $(CFLAGS_libxengnttab) $(CFLAGS_libxenevtchn)
 SHDEPS_libxenvchan = $(SHLIB_libxentoollog) $(SHLIB_libxenstore) $(SHLIB_libxenevtchn) $(SHLIB_libxengnttab)
 LDLIBS_libxenvchan = $(SHDEPS_libxenvchan) $(XEN_libxenvchan)/libxenvchan$(libextension)
 SHLIB_libxenvchan  = $(SHDEPS_libxenvchan) -Wl,-rpath-link=$(XEN_libxenvchan)
diff --git a/tools/libs/Makefile b/tools/libs/Makefile
index 27a7df9b31..396116b0b3 100644
--- a/tools/libs/Makefile
+++ b/tools/libs/Makefile
@@ -12,6 +12,7 @@ SUBDIRS-y += devicemodel
 SUBDIRS-y += ctrl
 SUBDIRS-y += hypfs
 SUBDIRS-y += store
+SUBDIRS-$(CONFIG_Linux) += vchan
 
 ifeq ($(CONFIG_RUMP),y)
 SUBDIRS-y := toolcore
diff --git a/tools/libs/vchan/Makefile b/tools/libs/vchan/Makefile
new file mode 100644
index 0000000000..bf944d251c
--- /dev/null
+++ b/tools/libs/vchan/Makefile
@@ -0,0 +1,19 @@
+XEN_ROOT = $(CURDIR)/../../..
+include $(XEN_ROOT)/tools/Rules.mk
+
+MAJOR = 4.14
+MINOR = 0
+LIBNAME  := vchan
+USELIBS  := toollog store evtchn gnttab
+
+CFLAGS += $(CFLAGS_libxenctrl)
+
+LIBHEADER := libxenvchan.h
+
+SRCS-y += init.c
+SRCS-y += io.c
+
+include $(XEN_ROOT)/tools/libs/libs.mk
+
+$(PKG_CONFIG_LOCAL): PKG_CONFIG_INCDIR = $(XEN_libxenvchan)/include
+$(PKG_CONFIG_LOCAL): PKG_CONFIG_CFLAGS_LOCAL = $(CFLAGS_xeninclude)
diff --git a/tools/libvchan/libxenvchan.h b/tools/libs/vchan/include/libxenvchan.h
similarity index 100%
rename from tools/libvchan/libxenvchan.h
rename to tools/libs/vchan/include/libxenvchan.h
diff --git a/tools/libvchan/init.c b/tools/libs/vchan/init.c
similarity index 100%
rename from tools/libvchan/init.c
rename to tools/libs/vchan/init.c
diff --git a/tools/libvchan/io.c b/tools/libs/vchan/io.c
similarity index 100%
rename from tools/libvchan/io.c
rename to tools/libs/vchan/io.c
diff --git a/tools/libs/vchan/libxenvchan.map b/tools/libs/vchan/libxenvchan.map
new file mode 100644
index 0000000000..9ac00c5b45
--- /dev/null
+++ b/tools/libs/vchan/libxenvchan.map
@@ -0,0 +1,16 @@
+VERS_4.14.0 {
+	global:
+		libxenvchan_server_init;
+		libxenvchan_client_init;
+		libxenvchan_close;
+		libxenvchan_recv;
+		libxenvchan_read;
+		libxenvchan_send;
+		libxenvchan_write;
+		libxenvchan_wait;
+		libxenvchan_fd_for_select;
+		libxenvchan_is_open;
+		libxenvchan_data_ready;
+		libxenvchan_buffer_space;
+	local: *; /* Do not expose anything by default */
+};
diff --git a/tools/libvchan/xenvchan.pc.in b/tools/libs/vchan/xenvchan.pc.in
similarity index 100%
rename from tools/libvchan/xenvchan.pc.in
rename to tools/libs/vchan/xenvchan.pc.in
diff --git a/tools/libvchan/Makefile b/tools/libvchan/Makefile
deleted file mode 100644
index 025a935cb7..0000000000
--- a/tools/libvchan/Makefile
+++ /dev/null
@@ -1,95 +0,0 @@
-#
-# tools/libvchan/Makefile
-#
-
-XEN_ROOT = $(CURDIR)/../..
-include $(XEN_ROOT)/tools/Rules.mk
-
-LIBVCHAN_OBJS = init.o io.o
-NODE_OBJS = node.o
-NODE2_OBJS = node-select.o
-
-LIBVCHAN_PIC_OBJS = $(patsubst %.o,%.opic,$(LIBVCHAN_OBJS))
-LIBVCHAN_LIBS = $(LDLIBS_libxenstore) $(LDLIBS_libxengnttab) $(LDLIBS_libxenevtchn)
-$(LIBVCHAN_OBJS) $(LIBVCHAN_PIC_OBJS): CFLAGS += $(CFLAGS_libxenstore) $(CFLAGS_libxengnttab) $(CFLAGS_libxenevtchn)
-$(NODE_OBJS) $(NODE2_OBJS): CFLAGS += $(CFLAGS_libxengnttab) $(CFLAGS_libxenevtchn)
-vchan-socket-proxy.o: CFLAGS += $(CFLAGS_libxenstore) $(CFLAGS_libxenctrl) $(CFLAGS_libxengnttab) $(CFLAGS_libxenevtchn)
-
-MAJOR = 4.14
-MINOR = 0
-
-CFLAGS += -I../include -I.
-
-io.o io.opic: CFLAGS += $(CFLAGS_libxenctrl) # for xen_mb et al
-
-PKG_CONFIG := xenvchan.pc
-PKG_CONFIG_VERSION := $(MAJOR).$(MINOR)
-
-ifneq ($(CONFIG_LIBXC_MINIOS),y)
-PKG_CONFIG_INST := $(PKG_CONFIG)
-$(PKG_CONFIG_INST): PKG_CONFIG_PREFIX = $(prefix)
-$(PKG_CONFIG_INST): PKG_CONFIG_INCDIR = $(includedir)
-$(PKG_CONFIG_INST): PKG_CONFIG_LIBDIR = $(libdir)
-endif
-
-PKG_CONFIG_LOCAL := $(foreach pc,$(PKG_CONFIG),$(PKG_CONFIG_DIR)/$(pc))
-
-$(PKG_CONFIG_LOCAL): PKG_CONFIG_PREFIX = $(XEN_ROOT)
-$(PKG_CONFIG_LOCAL): PKG_CONFIG_INCDIR = $(XEN_libxenvchan)
-$(PKG_CONFIG_LOCAL): PKG_CONFIG_LIBDIR = $(CURDIR)
-$(PKG_CONFIG_LOCAL): PKG_CONFIG_CFLAGS_LOCAL = $(CFLAGS_xeninclude)
-
-.PHONY: all
-all: libxenvchan.so vchan-node1 vchan-node2 vchan-socket-proxy libxenvchan.a $(PKG_CONFIG_INST) $(PKG_CONFIG_LOCAL)
-
-libxenvchan.so: libxenvchan.so.$(MAJOR)
-	ln -sf $< $@
-
-libxenvchan.so.$(MAJOR): libxenvchan.so.$(MAJOR).$(MINOR)
-	ln -sf $< $@
-
-libxenvchan.so.$(MAJOR).$(MINOR): $(LIBVCHAN_PIC_OBJS)
-	$(CC) $(LDFLAGS) -Wl,$(SONAME_LDFLAG) -Wl,libxenvchan.so.$(MAJOR) $(SHLIB_LDFLAGS) -o $@ $^ $(LIBVCHAN_LIBS) $(APPEND_LDFLAGS)
-
-libxenvchan.a: $(LIBVCHAN_OBJS)
-	$(AR) rcs libxenvchan.a $^
-
-vchan-node1: $(NODE_OBJS) libxenvchan.so
-	$(CC) $(LDFLAGS) -o $@ $(NODE_OBJS) $(LDLIBS_libxenvchan) $(APPEND_LDFLAGS)
-
-vchan-node2: $(NODE2_OBJS) libxenvchan.so
-	$(CC) $(LDFLAGS) -o $@ $(NODE2_OBJS) $(LDLIBS_libxenvchan) $(APPEND_LDFLAGS)
-
-vchan-socket-proxy: vchan-socket-proxy.o libxenvchan.so
-	$(CC) $(LDFLAGS) -o $@ $< $(LDLIBS_libxenvchan) $(LDLIBS_libxenstore) $(LDLIBS_libxenctrl) $(APPEND_LDFLAGS)
-
-.PHONY: install
-install: all
-	$(INSTALL_DIR) $(DESTDIR)$(libdir)
-	$(INSTALL_DIR) $(DESTDIR)$(includedir)
-	$(INSTALL_DIR) $(DESTDIR)$(bindir)
-	$(INSTALL_PROG) libxenvchan.so.$(MAJOR).$(MINOR) $(DESTDIR)$(libdir)
-	ln -sf libxenvchan.so.$(MAJOR).$(MINOR) $(DESTDIR)$(libdir)/libxenvchan.so.$(MAJOR)
-	ln -sf libxenvchan.so.$(MAJOR) $(DESTDIR)$(libdir)/libxenvchan.so
-	$(INSTALL_PROG) vchan-socket-proxy $(DESTDIR)$(bindir)
-	$(INSTALL_DATA) libxenvchan.h $(DESTDIR)$(includedir)
-	$(INSTALL_DATA) libxenvchan.a $(DESTDIR)$(libdir)
-	$(INSTALL_DATA) xenvchan.pc $(DESTDIR)$(PKG_INSTALLDIR)
-
-.PHONY: uninstall
-uninstall:
-	rm -f $(DESTDIR)$(PKG_INSTALLDIR)/xenvchan.pc
-	rm -f $(DESTDIR)$(libdir)/libxenvchan.a
-	rm -f $(DESTDIR)$(includedir)/libxenvchan.h
-	rm -f $(DESTDIR)$(libdir)/libxenvchan.so
-	rm -f $(DESTDIR)$(libdir)/libxenvchan.so.$(MAJOR)
-	rm -f $(DESTDIR)$(libdir)/libxenvchan.so.$(MAJOR).$(MINOR)
-
-.PHONY: clean
-clean:
-	$(RM) -f *.o *.opic *.so* *.a vchan-node1 vchan-node2 $(DEPS_RM)
-	$(RM) -f xenvchan.pc
-
-distclean: clean
-
--include $(DEPS_INCLUDE)
diff --git a/tools/vchan/Makefile b/tools/vchan/Makefile
new file mode 100644
index 0000000000..a731e0e073
--- /dev/null
+++ b/tools/vchan/Makefile
@@ -0,0 +1,37 @@
+#
+# tools/vchan/Makefile
+#
+
+XEN_ROOT = $(CURDIR)/../..
+include $(XEN_ROOT)/tools/Rules.mk
+
+NODE_OBJS = node.o
+NODE2_OBJS = node-select.o
+
+$(NODE_OBJS) $(NODE2_OBJS): CFLAGS += $(CFLAGS_libxenvchan) $(CFLAGS_libxengnttab) $(CFLAGS_libxenevtchn)
+vchan-socket-proxy.o: CFLAGS += $(CFLAGS_libxenvchan) $(CFLAGS_libxenstore) $(CFLAGS_libxenctrl) $(CFLAGS_libxengnttab) $(CFLAGS_libxenevtchn)
+
+.PHONY: all
+all: vchan-node1 vchan-node2 vchan-socket-proxy
+
+vchan-node1: $(NODE_OBJS)
+	$(CC) $(LDFLAGS) -o $@ $(NODE_OBJS) $(LDLIBS_libxenvchan) $(APPEND_LDFLAGS)
+
+vchan-node2: $(NODE2_OBJS)
+	$(CC) $(LDFLAGS) -o $@ $(NODE2_OBJS) $(LDLIBS_libxenvchan) $(APPEND_LDFLAGS)
+
+vchan-socket-proxy: vchan-socket-proxy.o
+	$(CC) $(LDFLAGS) -o $@ $< $(LDLIBS_libxenvchan) $(LDLIBS_libxenstore) $(LDLIBS_libxenctrl) $(APPEND_LDFLAGS)
+
+.PHONY: install
+install: all
+	$(INSTALL_DIR) $(DESTDIR)$(bindir)
+	$(INSTALL_PROG) vchan-socket-proxy $(DESTDIR)$(bindir)
+
+.PHONY: clean
+clean:
+	$(RM) -f *.o vchan-node1 vchan-node2 $(DEPS_RM)
+
+distclean: clean
+
+-include $(DEPS_INCLUDE)
diff --git a/tools/libvchan/node-select.c b/tools/vchan/node-select.c
similarity index 100%
rename from tools/libvchan/node-select.c
rename to tools/vchan/node-select.c
diff --git a/tools/libvchan/node.c b/tools/vchan/node.c
similarity index 100%
rename from tools/libvchan/node.c
rename to tools/vchan/node.c
diff --git a/tools/libvchan/vchan-socket-proxy.c b/tools/vchan/vchan-socket-proxy.c
similarity index 100%
rename from tools/libvchan/vchan-socket-proxy.c
rename to tools/vchan/vchan-socket-proxy.c
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Wed Jul 15 16:55:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jul 2020 16:55:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvkh6-0006DN-CX; Wed, 15 Jul 2020 16:55:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=o5qj=A2=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jvkh5-0006Bo-58
 for xen-devel@lists.xenproject.org; Wed, 15 Jul 2020 16:55:43 +0000
X-Inumbo-ID: 0124f41c-c6bc-11ea-bb8b-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0124f41c-c6bc-11ea-bb8b-bc764e2007e4;
 Wed, 15 Jul 2020 16:55:36 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jvkDz-0001sU-HT; Wed, 15 Jul 2020 17:25:39 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH 11/12] tools: split libxenstat into new tools/libs/stat
 directory
Date: Wed, 15 Jul 2020 17:25:10 +0100
Message-Id: <20200715162511.5941-13-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200715162511.5941-1-ian.jackson@eu.citrix.com>
References: <20200715162511.5941-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Reply-To: incoming+61544a64d0c2dc4555813e58f3810dd7@incoming.gitlab.com,
 xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 ian.jackson@eu.citrix.com, George Dunlap <george.dunlap@citrix.com>,
 Jan Beulich <jbeulich@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Juergen Gross <jgross@suse.com>

There is no reason why libxenstat should not live in the tools/libs
directory.

At the same time move xenstat.h to a dedicated include directory
in tools/libs/stat in order to follow the same pattern as the other
libraries in tools/libs.

As xentop is now the only directory left under tools/xenstat, move it
directly under tools and remove tools/xenstat.

Fix some missing prototype errors (add one prototype and make two
functions static).

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 .gitignore                                    |  7 +-
 tools/Makefile                                |  2 +-
 tools/Rules.mk                                |  4 +-
 tools/libs/Makefile                           |  1 +
 .../{xenstat/libxenstat => libs/stat}/COPYING |  0
 .../libxenstat => libs/stat}/Makefile         | 98 ++++---------------
 .../stat}/bindings/swig/perl/.empty           |  0
 .../stat}/bindings/swig/python/.empty         |  0
 .../stat}/bindings/swig/xenstat.i             |  0
 .../src => libs/stat/include}/xenstat.h       |  3 +
 tools/libs/stat/libxenstat.map                | 54 ++++++++++
 .../libxenstat/src => libs/stat}/xenstat.c    |  0
 .../libxenstat => libs/stat}/xenstat.pc.in    |  2 +-
 .../src => libs/stat}/xenstat_freebsd.c       |  0
 .../src => libs/stat}/xenstat_linux.c         |  4 +-
 .../src => libs/stat}/xenstat_netbsd.c        |  0
 .../src => libs/stat}/xenstat_priv.h          |  0
 .../src => libs/stat}/xenstat_qmp.c           |  0
 .../src => libs/stat}/xenstat_solaris.c       |  0
 tools/xenstat/Makefile                        | 10 --
 tools/{xenstat => }/xentop/Makefile           |  2 +-
 tools/{xenstat => }/xentop/TODO               |  0
 tools/{xenstat => }/xentop/xentop.c           |  0
 23 files changed, 90 insertions(+), 97 deletions(-)
 rename tools/{xenstat/libxenstat => libs/stat}/COPYING (100%)
 rename tools/{xenstat/libxenstat => libs/stat}/Makefile (56%)
 rename tools/{xenstat/libxenstat => libs/stat}/bindings/swig/perl/.empty (100%)
 rename tools/{xenstat/libxenstat => libs/stat}/bindings/swig/python/.empty (100%)
 rename tools/{xenstat/libxenstat => libs/stat}/bindings/swig/xenstat.i (100%)
 rename tools/{xenstat/libxenstat/src => libs/stat/include}/xenstat.h (98%)
 create mode 100644 tools/libs/stat/libxenstat.map
 rename tools/{xenstat/libxenstat/src => libs/stat}/xenstat.c (100%)
 rename tools/{xenstat/libxenstat => libs/stat}/xenstat.pc.in (82%)
 rename tools/{xenstat/libxenstat/src => libs/stat}/xenstat_freebsd.c (100%)
 rename tools/{xenstat/libxenstat/src => libs/stat}/xenstat_linux.c (98%)
 rename tools/{xenstat/libxenstat/src => libs/stat}/xenstat_netbsd.c (100%)
 rename tools/{xenstat/libxenstat/src => libs/stat}/xenstat_priv.h (100%)
 rename tools/{xenstat/libxenstat/src => libs/stat}/xenstat_qmp.c (100%)
 rename tools/{xenstat/libxenstat/src => libs/stat}/xenstat_solaris.c (100%)
 delete mode 100644 tools/xenstat/Makefile
 rename tools/{xenstat => }/xentop/Makefile (97%)
 rename tools/{xenstat => }/xentop/TODO (100%)
 rename tools/{xenstat => }/xentop/xentop.c (100%)

diff --git a/.gitignore b/.gitignore
index 3e130b0596..5c240c25db 100644
--- a/.gitignore
+++ b/.gitignore
@@ -120,6 +120,9 @@ tools/libs/foreignmemory/headers.chk
 tools/libs/foreignmemory/xenforeignmemory.pc
 tools/libs/devicemodel/headers.chk
 tools/libs/devicemodel/xendevicemodel.pc
+tools/libs/stat/_paths.h
+tools/libs/stat/headers.chk
+tools/libs/stat/xenstat.pc
 tools/libs/store/headers.chk
 tools/libs/store/list.h
 tools/libs/store/utils.h
@@ -273,9 +276,6 @@ tools/xenmon/xentrace_setmask
 tools/xenmon/xenbaked
 tools/xenpaging/xenpaging
 tools/xenpmd/xenpmd
-tools/xenstat/libxenstat/src/_paths.h
-tools/xenstat/libxenstat/xenstat.pc
-tools/xenstat/xentop/xentop
 tools/xenstore/xenstore
 tools/xenstore/xenstore-chmod
 tools/xenstore/xenstore-exists
@@ -288,6 +288,7 @@ tools/xenstore/xenstore-ls
 tools/xenstore/xenstored
 tools/xenstore/xenstored_test
 tools/xenstore/xs_tdb_dump
+tools/xentop/xentop
 tools/xentrace/xentrace_setsize
 tools/xentrace/tbctl
 tools/xentrace/xenctx
diff --git a/tools/Makefile b/tools/Makefile
index ed119fffd7..e37b4c0f12 100644
--- a/tools/Makefile
+++ b/tools/Makefile
@@ -18,7 +18,7 @@ SUBDIRS-$(CONFIG_XCUTILS) += xcutils
 SUBDIRS-$(CONFIG_X86) += firmware
 SUBDIRS-y += console
 SUBDIRS-y += xenmon
-SUBDIRS-y += xenstat
+SUBDIRS-y += xentop
 SUBDIRS-$(CONFIG_NetBSD) += xenbackendd
 SUBDIRS-y += libfsimage
 SUBDIRS-$(CONFIG_Linux) += vchan
diff --git a/tools/Rules.mk b/tools/Rules.mk
index 30ee379484..279df14152 100644
--- a/tools/Rules.mk
+++ b/tools/Rules.mk
@@ -22,12 +22,12 @@ XEN_libxendevicemodel = $(XEN_ROOT)/tools/libs/devicemodel
 XEN_libxenhypfs    = $(XEN_ROOT)/tools/libs/hypfs
 XEN_libxenctrl     = $(XEN_ROOT)/tools/libs/ctrl
 XEN_libxenstore    = $(XEN_ROOT)/tools/libs/store
+XEN_libxenstat     = $(XEN_ROOT)/tools/libs/stat
 XEN_libxenvchan    = $(XEN_ROOT)/tools/libs/vchan
 XEN_libxenguest    = $(XEN_ROOT)/tools/libxc
 XEN_libxenlight    = $(XEN_ROOT)/tools/libxl
 # Currently libxlutil lives in the same directory as libxenlight
 XEN_libxlutil      = $(XEN_libxenlight)
-XEN_libxenstat     = $(XEN_ROOT)/tools/xenstat/libxenstat/src
 
 CFLAGS_xeninclude = -I$(XEN_INCLUDE)
 
@@ -158,7 +158,7 @@ ifeq ($(CONFIG_Linux),y)
 LDLIBS_libxenstore += -ldl
 endif
 
-CFLAGS_libxenstat  = -I$(XEN_libxenstat)
+CFLAGS_libxenstat  = -I$(XEN_libxenstat)/include
 SHDEPS_libxenstat  = $(SHLIB_libxenctrl) $(SHLIB_libxenstore)
 LDLIBS_libxenstat  = $(SHDEPS_libxenstat) $(XEN_libxenstat)/libxenstat$(libextension)
 SHLIB_libxenstat   = $(SHDEPS_libxenstat) -Wl,-rpath-link=$(XEN_libxenstat)
diff --git a/tools/libs/Makefile b/tools/libs/Makefile
index 396116b0b3..a1985cf7a7 100644
--- a/tools/libs/Makefile
+++ b/tools/libs/Makefile
@@ -12,6 +12,7 @@ SUBDIRS-y += devicemodel
 SUBDIRS-y += ctrl
 SUBDIRS-y += hypfs
 SUBDIRS-y += store
+SUBDIRS-y += stat
 SUBDIRS-$(CONFIG_Linux) += vchan
 
 ifeq ($(CONFIG_RUMP),y)
diff --git a/tools/xenstat/libxenstat/COPYING b/tools/libs/stat/COPYING
similarity index 100%
rename from tools/xenstat/libxenstat/COPYING
rename to tools/libs/stat/COPYING
diff --git a/tools/xenstat/libxenstat/Makefile b/tools/libs/stat/Makefile
similarity index 56%
rename from tools/xenstat/libxenstat/Makefile
rename to tools/libs/stat/Makefile
index 3d05ecdd9f..6162395a9a 100644
--- a/tools/xenstat/libxenstat/Makefile
+++ b/tools/libs/stat/Makefile
@@ -15,80 +15,29 @@
 XEN_ROOT=$(CURDIR)/../../..
 include $(XEN_ROOT)/tools/Rules.mk
 
-LDCONFIG=ldconfig
-MAKE_LINK=ln -sf
-
-MAJOR=4.14
-MINOR=0
-
-LIB=src/libxenstat.a
-SHLIB=src/libxenstat.so.$(MAJOR).$(MINOR)
-SHLIB_LINKS=src/libxenstat.so.$(MAJOR) src/libxenstat.so
-OBJECTS-y=src/xenstat.o src/xenstat_qmp.o
-OBJECTS-$(CONFIG_Linux) += src/xenstat_linux.o
-OBJECTS-$(CONFIG_SunOS) += src/xenstat_solaris.o
-OBJECTS-$(CONFIG_NetBSD) += src/xenstat_netbsd.o
-OBJECTS-$(CONFIG_FreeBSD) += src/xenstat_freebsd.o
-SONAME_FLAGS=-Wl,$(SONAME_LDFLAG) -Wl,libxenstat.so.$(MAJOR)
-
-CFLAGS+=-fPIC -Werror
-CFLAGS+=-Isrc $(CFLAGS_libxenctrl) $(CFLAGS_libxenstore) $(CFLAGS_xeninclude) -include $(XEN_ROOT)/tools/config.h
-
-LDLIBS-y = $(LDLIBS_libxenstore) $(LDLIBS_libxenctrl) -lyajl
-LDLIBS-$(CONFIG_SunOS) += -lkstat
-
-PKG_CONFIG := xenstat.pc
-PKG_CONFIG_VERSION := $(MAJOR).$(MINOR)
-
-ifneq ($(CONFIG_LIBXC_MINIOS),y)
-PKG_CONFIG_INST := $(PKG_CONFIG)
-$(PKG_CONFIG_INST): PKG_CONFIG_PREFIX = $(prefix)
-$(PKG_CONFIG_INST): PKG_CONFIG_INCDIR = $(includedir)
-$(PKG_CONFIG_INST): PKG_CONFIG_LIBDIR = $(libdir)
-endif
-
-PKG_CONFIG_LOCAL := $(foreach pc,$(PKG_CONFIG),$(PKG_CONFIG_DIR)/$(pc))
+MAJOR = 4.14
+MINOR = 0
+LIBNAME  := stat
+USELIBS  := ctrl store
 
-$(PKG_CONFIG_LOCAL): PKG_CONFIG_PREFIX = $(XEN_ROOT)
-$(PKG_CONFIG_LOCAL): PKG_CONFIG_INCDIR = $(XEN_libxenstat)
-$(PKG_CONFIG_LOCAL): PKG_CONFIG_LIBDIR = $(CURDIR)
+CFLAGS += $(CFLAGS_libxenctrl) $(CFLAGS_libxenstore) $(CFLAGS_xeninclude) -include $(XEN_ROOT)/tools/config.h
 
-.PHONY: all
-all: $(LIB) $(SHLIB) $(SHLIB_LINKS) $(PKG_CONFIG_INST) $(PKG_CONFIG_LOCAL)
+SRCS-y += xenstat.c
+SRCS-y += xenstat_qmp.c
+SRCS-$(CONFIG_Linux) += xenstat_linux.c
+SRCS-$(CONFIG_SunOS) += xenstat_solaris.c
+SRCS-$(CONFIG_NetBSD) += xenstat_netbsd.c
+SRCS-$(CONFIG_FreeBSD) += xenstat_freebsd.c
 
-$(OBJECTS-y): src/_paths.h
-
-$(LIB): $(OBJECTS-y)
-	$(AR) rc $@ $^
-	$(RANLIB) $@
+LDLIBS-y += -lyajl
+LDLIBS-$(CONFIG_SunOS) += -lkstat
+APPEND_LDFLAGS += $(LDLIBS-y)
 
-$(SHLIB): $(OBJECTS-y)
-	$(CC) $(LDFLAGS) $(SONAME_FLAGS) $(SHLIB_LDFLAGS) -o $@ \
-	    $(OBJECTS-y) $(LDLIBS-y) $(APPEND_LDFLAGS)
+include $(XEN_ROOT)/tools/libs/libs.mk
 
-src/libxenstat.so.$(MAJOR): $(SHLIB)
-	$(MAKE_LINK) $(<F) $@
+$(PKG_CONFIG_LOCAL): PKG_CONFIG_INCDIR = $(XEN_libxenstat)/include
 
-src/libxenstat.so: src/libxenstat.so.$(MAJOR)
-	$(MAKE_LINK) $(<F) $@
-
-.PHONY: install
-install: all
-	$(INSTALL_DATA) src/xenstat.h $(DESTDIR)$(includedir)
-	$(INSTALL_DATA) $(LIB) $(DESTDIR)$(libdir)/libxenstat.a
-	$(INSTALL_PROG) src/libxenstat.so.$(MAJOR).$(MINOR) $(DESTDIR)$(libdir)
-	ln -sf libxenstat.so.$(MAJOR).$(MINOR) $(DESTDIR)$(libdir)/libxenstat.so.$(MAJOR)
-	ln -sf libxenstat.so.$(MAJOR) $(DESTDIR)$(libdir)/libxenstat.so
-	$(INSTALL_DATA) xenstat.pc $(DESTDIR)$(PKG_INSTALLDIR)
-
-.PHONY: uninstall
-uninstall:
-	rm -f $(DESTDIR)$(PKG_INSTALLDIR)/xenstat.pc
-	rm -f $(DESTDIR)$(libdir)/libxenstat.so
-	rm -f $(DESTDIR)$(libdir)/libxenstat.so.$(MAJOR)
-	rm -f $(DESTDIR)$(libdir)/libxenstat.so.$(MAJOR).$(MINOR)
-	rm -f $(DESTDIR)$(libdir)/libxenstat.a
-	rm -f $(DESTDIR)$(includedir)/xenstat.h
+$(LIB_OBJS): _paths.h
 
 PYLIB=bindings/swig/python/_xenstat.so
 PYMOD=bindings/swig/python/xenstat.py
@@ -109,9 +58,9 @@ install-bindings: install-perl-bindings install-python-bindings
 .PHONY: uninstall-bindings
 uninstall-bindings: uninstall-perl-bindings uninstall-python-bindings
 
-$(BINDINGS): $(SHLIB) $(SHLIB_LINKS) src/xenstat.h
+$(BINDINGS): $(SHLIB) $(SHLIB_LINKS) include/xenstat.h
 
-SWIG_FLAGS=-module xenstat -Isrc
+SWIG_FLAGS=-module xenstat -Iinclude -I.
 
 # Python bindings
 PYTHON_VERSION=$(PYTHON:python%=%)
@@ -177,14 +126,9 @@ endif
 
 .PHONY: clean
 clean:
-	rm -f $(LIB) $(SHLIB) $(SHLIB_LINKS) $(OBJECTS-y) \
-	      $(BINDINGS) $(BINDINGSRC) $(DEPS_RM) src/_paths.h
-	rm -f xenstat.pc
-
-.PHONY: distclean
-distclean: clean
+	rm -f $(BINDINGS) $(BINDINGSRC) $(DEPS_RM) _paths.h
 
 -include $(DEPS_INCLUDE)
 
-genpath-target = $(call buildmakevars2header,src/_paths.h)
+genpath-target = $(call buildmakevars2header,_paths.h)
 $(eval $(genpath-target))
diff --git a/tools/xenstat/libxenstat/bindings/swig/perl/.empty b/tools/libs/stat/bindings/swig/perl/.empty
similarity index 100%
rename from tools/xenstat/libxenstat/bindings/swig/perl/.empty
rename to tools/libs/stat/bindings/swig/perl/.empty
diff --git a/tools/xenstat/libxenstat/bindings/swig/python/.empty b/tools/libs/stat/bindings/swig/python/.empty
similarity index 100%
rename from tools/xenstat/libxenstat/bindings/swig/python/.empty
rename to tools/libs/stat/bindings/swig/python/.empty
diff --git a/tools/xenstat/libxenstat/bindings/swig/xenstat.i b/tools/libs/stat/bindings/swig/xenstat.i
similarity index 100%
rename from tools/xenstat/libxenstat/bindings/swig/xenstat.i
rename to tools/libs/stat/bindings/swig/xenstat.i
diff --git a/tools/xenstat/libxenstat/src/xenstat.h b/tools/libs/stat/include/xenstat.h
similarity index 98%
rename from tools/xenstat/libxenstat/src/xenstat.h
rename to tools/libs/stat/include/xenstat.h
index 76a660f321..c3b98909dd 100644
--- a/tools/xenstat/libxenstat/src/xenstat.h
+++ b/tools/libs/stat/include/xenstat.h
@@ -71,6 +71,9 @@ unsigned long long xenstat_node_tot_mem(xenstat_node * node);
 /* Get amount of free memory on a node */
 unsigned long long xenstat_node_free_mem(xenstat_node * node);
 
+/* Get amount of freeable memory on a node */
+long xenstat_node_freeable_mb(xenstat_node * node);
+
 /* Find the number of domains existing on a node */
 unsigned int xenstat_node_num_domains(xenstat_node * node);
 
diff --git a/tools/libs/stat/libxenstat.map b/tools/libs/stat/libxenstat.map
new file mode 100644
index 0000000000..b736d8e94c
--- /dev/null
+++ b/tools/libs/stat/libxenstat.map
@@ -0,0 +1,54 @@
+VERS_4.14.0 {
+	global:
+		xenstat_init;
+		xenstat_uninit;
+		xenstat_get_node;
+		xenstat_free_node;
+		xenstat_node_domain;
+		xenstat_node_domain_by_index;
+		xenstat_node_xen_version;
+		xenstat_node_tot_mem;
+		xenstat_node_free_mem;
+		xenstat_node_freeable_mb;
+		xenstat_node_num_domains;
+		xenstat_node_num_cpus;
+		xenstat_node_cpu_hz;
+		xenstat_domain_id;
+		xenstat_domain_name;
+		xenstat_domain_cpu_ns;
+		xenstat_domain_num_vcpus;
+		xenstat_domain_vcpu;
+		xenstat_domain_cur_mem;
+		xenstat_domain_max_mem;
+		xenstat_domain_ssid;
+		xenstat_domain_dying;
+		xenstat_domain_crashed;
+		xenstat_domain_shutdown;
+		xenstat_domain_paused;
+		xenstat_domain_blocked;
+		xenstat_domain_running;
+		xenstat_domain_num_networks;
+		xenstat_domain_network;
+		xenstat_domain_num_vbds;
+		xenstat_domain_vbd;
+		xenstat_vcpu_online;
+		xenstat_vcpu_ns;
+		xenstat_network_id;
+		xenstat_network_rbytes;
+		xenstat_network_rpackets;
+		xenstat_network_rerrs;
+		xenstat_network_rdrop;
+		xenstat_network_tbytes;
+		xenstat_network_tpackets;
+		xenstat_network_terrs;
+		xenstat_network_tdrop;
+		xenstat_vbd_type;
+		xenstat_vbd_dev;
+		xenstat_vbd_oo_reqs;
+		xenstat_vbd_rd_reqs;
+		xenstat_vbd_wr_reqs;
+		xenstat_vbd_rd_sects;
+		xenstat_vbd_wr_sects;
+		xenstat_vbd_error;
+	local: *; /* Do not expose anything by default */
+};
diff --git a/tools/xenstat/libxenstat/src/xenstat.c b/tools/libs/stat/xenstat.c
similarity index 100%
rename from tools/xenstat/libxenstat/src/xenstat.c
rename to tools/libs/stat/xenstat.c
diff --git a/tools/xenstat/libxenstat/xenstat.pc.in b/tools/libs/stat/xenstat.pc.in
similarity index 82%
rename from tools/xenstat/libxenstat/xenstat.pc.in
rename to tools/libs/stat/xenstat.pc.in
index ad00577c89..6005593ba1 100644
--- a/tools/xenstat/libxenstat/xenstat.pc.in
+++ b/tools/libs/stat/xenstat.pc.in
@@ -7,4 +7,4 @@ Description: The Xenstat library for Xen hypervisor
 Version: @@version@@
 Cflags: -I${includedir}
 Libs: @@libsflag@@${libdir} -lxenstat
-Requires.private: xencontrol,xenstore
+Requires.private: xencontrol,xenstore,yajl
diff --git a/tools/xenstat/libxenstat/src/xenstat_freebsd.c b/tools/libs/stat/xenstat_freebsd.c
similarity index 100%
rename from tools/xenstat/libxenstat/src/xenstat_freebsd.c
rename to tools/libs/stat/xenstat_freebsd.c
diff --git a/tools/xenstat/libxenstat/src/xenstat_linux.c b/tools/libs/stat/xenstat_linux.c
similarity index 98%
rename from tools/xenstat/libxenstat/src/xenstat_linux.c
rename to tools/libs/stat/xenstat_linux.c
index 7530349eee..793263f2b6 100644
--- a/tools/xenstat/libxenstat/src/xenstat_linux.c
+++ b/tools/libs/stat/xenstat_linux.c
@@ -64,7 +64,7 @@ static const char PROCNETDEV_HEADER[] =
 
 /* We need to get the name of the bridge interface for use with bonding interfaces */
 /* Use excludeName parameter to avoid adding bridges we don't care about, eg. virbr0 */
-void getBridge(char *excludeName, char *result, size_t resultLen)
+static void getBridge(char *excludeName, char *result, size_t resultLen)
 {
 	struct dirent *de;
 	DIR *d;
@@ -89,7 +89,7 @@ void getBridge(char *excludeName, char *result, size_t resultLen)
 
 /* parseNetLine provides regular expression based parsing for lines from /proc/net/dev, all the */
 /* information are parsed but not all are used in our case, ie. for xenstat */
-int parseNetDevLine(char *line, char *iface, unsigned long long *rxBytes, unsigned long long *rxPackets,
+static int parseNetDevLine(char *line, char *iface, unsigned long long *rxBytes, unsigned long long *rxPackets,
 		unsigned long long *rxErrs, unsigned long long *rxDrops, unsigned long long *rxFifo,
 		unsigned long long *rxFrames, unsigned long long *rxComp, unsigned long long *rxMcast,
 		unsigned long long *txBytes, unsigned long long *txPackets, unsigned long long *txErrs,
diff --git a/tools/xenstat/libxenstat/src/xenstat_netbsd.c b/tools/libs/stat/xenstat_netbsd.c
similarity index 100%
rename from tools/xenstat/libxenstat/src/xenstat_netbsd.c
rename to tools/libs/stat/xenstat_netbsd.c
diff --git a/tools/xenstat/libxenstat/src/xenstat_priv.h b/tools/libs/stat/xenstat_priv.h
similarity index 100%
rename from tools/xenstat/libxenstat/src/xenstat_priv.h
rename to tools/libs/stat/xenstat_priv.h
diff --git a/tools/xenstat/libxenstat/src/xenstat_qmp.c b/tools/libs/stat/xenstat_qmp.c
similarity index 100%
rename from tools/xenstat/libxenstat/src/xenstat_qmp.c
rename to tools/libs/stat/xenstat_qmp.c
diff --git a/tools/xenstat/libxenstat/src/xenstat_solaris.c b/tools/libs/stat/xenstat_solaris.c
similarity index 100%
rename from tools/xenstat/libxenstat/src/xenstat_solaris.c
rename to tools/libs/stat/xenstat_solaris.c
diff --git a/tools/xenstat/Makefile b/tools/xenstat/Makefile
deleted file mode 100644
index b300f31289..0000000000
--- a/tools/xenstat/Makefile
+++ /dev/null
@@ -1,10 +0,0 @@
-XEN_ROOT = $(CURDIR)/../..
-include $(XEN_ROOT)/tools/Rules.mk
-
-SUBDIRS :=
-SUBDIRS += libxenstat
-SUBDIRS += xentop
-
-.PHONY: all install clean distclean uninstall
-
-all install clean distclean uninstall: %: subdirs-%
diff --git a/tools/xenstat/xentop/Makefile b/tools/xentop/Makefile
similarity index 97%
rename from tools/xenstat/xentop/Makefile
rename to tools/xentop/Makefile
index ec612db2a2..0034114684 100644
--- a/tools/xenstat/xentop/Makefile
+++ b/tools/xentop/Makefile
@@ -10,7 +10,7 @@
 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 # GNU General Public License for more details.
 
-XEN_ROOT=$(CURDIR)/../../..
+XEN_ROOT=$(CURDIR)/../..
 include $(XEN_ROOT)/tools/Rules.mk
 
 ifneq ($(XENSTAT_XENTOP),y)
diff --git a/tools/xenstat/xentop/TODO b/tools/xentop/TODO
similarity index 100%
rename from tools/xenstat/xentop/TODO
rename to tools/xentop/TODO
diff --git a/tools/xenstat/xentop/xentop.c b/tools/xentop/xentop.c
similarity index 100%
rename from tools/xenstat/xentop/xentop.c
rename to tools/xentop/xentop.c
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Wed Jul 15 16:55:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jul 2020 16:55:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvkhC-0006Fb-M7; Wed, 15 Jul 2020 16:55:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=o5qj=A2=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jvkhB-0006F0-G5
 for xen-devel@lists.xenproject.org; Wed, 15 Jul 2020 16:55:49 +0000
X-Inumbo-ID: 08084f72-c6bc-11ea-bca7-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 08084f72-c6bc-11ea-bca7-bc764e2007e4;
 Wed, 15 Jul 2020 16:55:48 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jvkDz-0001sU-Qh; Wed, 15 Jul 2020 17:25:39 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH 12/12] tools: generate most contents of library make variables
Date: Wed, 15 Jul 2020 17:25:11 +0100
Message-Id: <20200715162511.5941-14-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200715162511.5941-1-ian.jackson@eu.citrix.com>
References: <20200715162511.5941-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Reply-To: incoming+61544a64d0c2dc4555813e58f3810dd7@incoming.gitlab.com,
 xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>, ian.jackson@eu.citrix.com,
 Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Juergen Gross <jgross@suse.com>

Library-related make variables (CFLAGS_lib*, SHDEPS_lib*, LDLIBS_lib*
and SHLIB_lib*) mostly follow a common pattern for their values.
Generate most of this content automatically by adding a new per-library
variable defining which other libraries a library depends on. This in
turn makes it possible to drop the USELIBS variable from each library
Makefile.
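For one library the foreach/eval loops effectively expand to the same
assignments the old hand-written block in Rules.mk contained; as an
illustrative sketch (taking evtchn, with USELIBS_evtchn := toollog
toolcore, as the example):

```make
# What the generated rules expand to for one library (evtchn).
XEN_libxenevtchn    = $(XEN_ROOT)/tools/libs/evtchn
CFLAGS_libxenevtchn = -I$(XEN_libxenevtchn)/include $(CFLAGS_xeninclude)
# SHDEPS is built from USELIBS_evtchn, one SHLIB_ entry per dependency:
SHDEPS_libxenevtchn = $(SHLIB_libxentoollog) $(SHLIB_libxentoolcore)
LDLIBS_libxenevtchn = $(SHDEPS_libxenevtchn) $(XEN_libxenevtchn)/libxenevtchn$(libextension)
SHLIB_libxenevtchn  = $(SHDEPS_libxenevtchn) -Wl,-rpath-link=$(XEN_libxenevtchn)
```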

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 tools/Rules.mk                    | 103 ++++++++++--------------------
 tools/libs/call/Makefile          |   1 -
 tools/libs/ctrl/Makefile          |   1 -
 tools/libs/devicemodel/Makefile   |   1 -
 tools/libs/evtchn/Makefile        |   1 -
 tools/libs/foreignmemory/Makefile |   1 -
 tools/libs/gnttab/Makefile        |   1 -
 tools/libs/hypfs/Makefile         |   1 -
 tools/libs/libs.mk                |   5 +-
 tools/libs/stat/Makefile          |   1 -
 tools/libs/store/Makefile         |   1 -
 tools/libs/vchan/Makefile         |   1 -
 12 files changed, 35 insertions(+), 83 deletions(-)

diff --git a/tools/Rules.mk b/tools/Rules.mk
index 279df14152..89b28e3fad 100644
--- a/tools/Rules.mk
+++ b/tools/Rules.mk
@@ -12,18 +12,38 @@ INSTALL = $(XEN_ROOT)/tools/cross-install
 LDFLAGS += $(PREPEND_LDFLAGS_XEN_TOOLS)
 
 XEN_INCLUDE        = $(XEN_ROOT)/tools/include
-XEN_libxentoolcore = $(XEN_ROOT)/tools/libs/toolcore
-XEN_libxentoollog  = $(XEN_ROOT)/tools/libs/toollog
-XEN_libxenevtchn   = $(XEN_ROOT)/tools/libs/evtchn
-XEN_libxengnttab   = $(XEN_ROOT)/tools/libs/gnttab
-XEN_libxencall     = $(XEN_ROOT)/tools/libs/call
-XEN_libxenforeignmemory = $(XEN_ROOT)/tools/libs/foreignmemory
-XEN_libxendevicemodel = $(XEN_ROOT)/tools/libs/devicemodel
-XEN_libxenhypfs    = $(XEN_ROOT)/tools/libs/hypfs
-XEN_libxenctrl     = $(XEN_ROOT)/tools/libs/ctrl
-XEN_libxenstore    = $(XEN_ROOT)/tools/libs/store
-XEN_libxenstat     = $(XEN_ROOT)/tools/libs/stat
-XEN_libxenvchan    = $(XEN_ROOT)/tools/libs/vchan
+
+LIBS_LIBS += toolcore
+USELIBS_toolcore :=
+LIBS_LIBS += toollog
+USELIBS_toollog :=
+LIBS_LIBS += evtchn
+USELIBS_evtchn := toollog toolcore
+LIBS_LIBS += gnttab
+USELIBS_gnttab := toollog toolcore
+LIBS_LIBS += call
+USELIBS_call := toollog toolcore
+LIBS_LIBS += foreignmemory
+USELIBS_foreignmemory := toollog toolcore
+LIBS_LIBS += devicemodel
+USELIBS_devicemodel := toollog toolcore call
+LIBS_LIBS += hypfs
+USELIBS_hypfs := toollog toolcore call
+LIBS_LIBS += ctrl
+USELIBS_ctrl := toollog call evtchn gnttab foreignmemory devicemodel
+LIBS_LIBS += store
+USELIBS_store := toolcore
+LIBS_LIBS += stat
+USELIBS_stat := ctrl store
+LIBS_LIBS += vchan
+USELIBS_vchan := toollog store gnttab evtchn
+
+$(foreach lib,$(LIBS_LIBS),$(eval XEN_libxen$(lib) = $(XEN_ROOT)/tools/libs/$(lib)))
+$(foreach lib,$(LIBS_LIBS),$(eval CFLAGS_libxen$(lib) = -I$(XEN_libxen$(lib))/include $(CFLAGS_xeninclude)))
+$(foreach lib,$(LIBS_LIBS),$(eval SHDEPS_libxen$(lib) = $(foreach use,$(USELIBS_$(lib)),$(SHLIB_libxen$(use)))))
+$(foreach lib,$(LIBS_LIBS),$(eval LDLIBS_libxen$(lib) = $(SHDEPS_libxen$(lib)) $(XEN_libxen$(lib))/libxen$(lib)$(libextension)))
+$(foreach lib,$(LIBS_LIBS),$(eval SHLIB_libxen$(lib) = $(SHDEPS_libxen$(lib)) -Wl,-rpath-link=$(XEN_libxen$(lib))))
+
 XEN_libxenguest    = $(XEN_ROOT)/tools/libxc
 XEN_libxenlight    = $(XEN_ROOT)/tools/libxl
 # Currently libxlutil lives in the same directory as libxenlight
@@ -98,76 +118,19 @@ endif
 # Consumers of libfoo should not directly use $(SHDEPS_libfoo) or
 # $(SHLIB_libfoo)
 
-CFLAGS_libxentoollog = -I$(XEN_libxentoollog)/include $(CFLAGS_xeninclude)
-SHDEPS_libxentoollog =
-LDLIBS_libxentoollog = $(SHDEPS_libxentoollog) $(XEN_libxentoollog)/libxentoollog$(libextension)
-SHLIB_libxentoollog  = $(SHDEPS_libxentoollog) -Wl,-rpath-link=$(XEN_libxentoollog)
-
-CFLAGS_libxentoolcore = -I$(XEN_libxentoolcore)/include $(CFLAGS_xeninclude)
-SHDEPS_libxentoolcore =
-LDLIBS_libxentoolcore = $(SHDEPS_libxentoolcore) $(XEN_libxentoolcore)/libxentoolcore$(libextension)
-SHLIB_libxentoolcore  = $(SHDEPS_libxentoolcore) -Wl,-rpath-link=$(XEN_libxentoolcore)
-
-CFLAGS_libxenevtchn = -I$(XEN_libxenevtchn)/include $(CFLAGS_xeninclude)
-SHDEPS_libxenevtchn = $(SHLIB_libxentoolcore)
-LDLIBS_libxenevtchn = $(SHDEPS_libxenevtchn) $(XEN_libxenevtchn)/libxenevtchn$(libextension)
-SHLIB_libxenevtchn  = $(SHDEPS_libxenevtchn) -Wl,-rpath-link=$(XEN_libxenevtchn)
-
-CFLAGS_libxengnttab = -I$(XEN_libxengnttab)/include $(CFLAGS_xeninclude)
-SHDEPS_libxengnttab = $(SHLIB_libxentoollog) $(SHLIB_libxentoolcore)
-LDLIBS_libxengnttab = $(SHDEPS_libxengnttab) $(XEN_libxengnttab)/libxengnttab$(libextension)
-SHLIB_libxengnttab  = $(SHDEPS_libxengnttab) -Wl,-rpath-link=$(XEN_libxengnttab)
-
-CFLAGS_libxencall = -I$(XEN_libxencall)/include $(CFLAGS_xeninclude)
-SHDEPS_libxencall = $(SHLIB_libxentoolcore)
-LDLIBS_libxencall = $(SHDEPS_libxencall) $(XEN_libxencall)/libxencall$(libextension)
-SHLIB_libxencall  = $(SHDEPS_libxencall) -Wl,-rpath-link=$(XEN_libxencall)
-
-CFLAGS_libxenforeignmemory = -I$(XEN_libxenforeignmemory)/include $(CFLAGS_xeninclude)
-SHDEPS_libxenforeignmemory = $(SHLIB_libxentoolcore)
-LDLIBS_libxenforeignmemory = $(SHDEPS_libxenforeignmemory) $(XEN_libxenforeignmemory)/libxenforeignmemory$(libextension)
-SHLIB_libxenforeignmemory  = $(SHDEPS_libxenforeignmemory) -Wl,-rpath-link=$(XEN_libxenforeignmemory)
-
-CFLAGS_libxendevicemodel = -I$(XEN_libxendevicemodel)/include $(CFLAGS_xeninclude)
-SHDEPS_libxendevicemodel = $(SHLIB_libxentoollog) $(SHLIB_libxentoolcore) $(SHLIB_libxencall)
-LDLIBS_libxendevicemodel = $(SHDEPS_libxendevicemodel) $(XEN_libxendevicemodel)/libxendevicemodel$(libextension)
-SHLIB_libxendevicemodel  = $(SHDEPS_libxendevicemodel) -Wl,-rpath-link=$(XEN_libxendevicemodel)
-
-CFLAGS_libxenhypfs = -I$(XEN_libxenhypfs)/include $(CFLAGS_xeninclude)
-SHDEPS_libxenhypfs = $(SHLIB_libxentoollog) $(SHLIB_libxentoolcore) $(SHLIB_libxencall)
-LDLIBS_libxenhypfs = $(SHDEPS_libxenhypfs) $(XEN_libxenhypfs)/libxenhypfs$(libextension)
-SHLIB_libxenhypfs  = $(SHDEPS_libxenhypfs) -Wl,-rpath-link=$(XEN_libxenhypfs)
-
 # code which compiles against libxenctrl get __XEN_TOOLS__ and
 # therefore sees the unstable hypercall interfaces.
-CFLAGS_libxenctrl = -I$(XEN_libxenctrl)/include $(CFLAGS_libxentoollog) $(CFLAGS_libxenforeignmemory) $(CFLAGS_libxendevicemodel) $(CFLAGS_xeninclude) -D__XEN_TOOLS__
-SHDEPS_libxenctrl = $(SHLIB_libxentoollog) $(SHLIB_libxenevtchn) $(SHLIB_libxengnttab) $(SHLIB_libxencall) $(SHLIB_libxenforeignmemory) $(SHLIB_libxendevicemodel)
-LDLIBS_libxenctrl = $(SHDEPS_libxenctrl) $(XEN_libxenctrl)/libxenctrl$(libextension)
-SHLIB_libxenctrl  = $(SHDEPS_libxenctrl) -Wl,-rpath-link=$(XEN_libxenctrl)
+CFLAGS_libxenctrl += $(CFLAGS_libxentoollog) $(CFLAGS_libxenforeignmemory) $(CFLAGS_libxendevicemodel) -D__XEN_TOOLS__
 
 CFLAGS_libxenguest = -I$(XEN_libxenguest)/include $(CFLAGS_libxenevtchn) $(CFLAGS_libxenforeignmemory) $(CFLAGS_xeninclude)
 SHDEPS_libxenguest = $(SHLIB_libxenevtchn) $(SHLIB_libxenctrl)
 LDLIBS_libxenguest = $(SHDEPS_libxenguest) $(XEN_libxenguest)/libxenguest$(libextension)
 SHLIB_libxenguest  = $(SHDEPS_libxenguest) -Wl,-rpath-link=$(XEN_libxenguest)
 
-CFLAGS_libxenstore = -I$(XEN_libxenstore)/include $(CFLAGS_xeninclude)
-SHDEPS_libxenstore = $(SHLIB_libxentoolcore)
-LDLIBS_libxenstore = $(SHDEPS_libxenstore) $(XEN_libxenstore)/libxenstore$(libextension)
-SHLIB_libxenstore  = $(SHDEPS_libxenstore) -Wl,-rpath-link=$(XEN_libxenstore)
 ifeq ($(CONFIG_Linux),y)
 LDLIBS_libxenstore += -ldl
 endif
 
-CFLAGS_libxenstat  = -I$(XEN_libxenstat)/include
-SHDEPS_libxenstat  = $(SHLIB_libxenctrl) $(SHLIB_libxenstore)
-LDLIBS_libxenstat  = $(SHDEPS_libxenstat) $(XEN_libxenstat)/libxenstat$(libextension)
-SHLIB_libxenstat   = $(SHDEPS_libxenstat) -Wl,-rpath-link=$(XEN_libxenstat)
-
-CFLAGS_libxenvchan = -I$(XEN_libxenvchan)/include $(CFLAGS_libxengnttab) $(CFLAGS_libxenevtchn)
-SHDEPS_libxenvchan = $(SHLIB_libxentoollog) $(SHLIB_libxenstore) $(SHLIB_libxenevtchn) $(SHLIB_libxengnttab)
-LDLIBS_libxenvchan = $(SHDEPS_libxenvchan) $(XEN_libxenvchan)/libxenvchan$(libextension)
-SHLIB_libxenvchan  = $(SHDEPS_libxenvchan) -Wl,-rpath-link=$(XEN_libxenvchan)
-
 ifeq ($(debug),y)
 # Disable optimizations
 CFLAGS += -O0 -fno-omit-frame-pointer
diff --git a/tools/libs/call/Makefile b/tools/libs/call/Makefile
index 7994b411fa..acddc41928 100644
--- a/tools/libs/call/Makefile
+++ b/tools/libs/call/Makefile
@@ -4,7 +4,6 @@ include $(XEN_ROOT)/tools/Rules.mk
 MAJOR    = 1
 MINOR    = 2
 LIBNAME  := call
-USELIBS  := toollog toolcore
 
 SRCS-y                 += core.c buffer.c
 SRCS-$(CONFIG_Linux)   += linux.c
diff --git a/tools/libs/ctrl/Makefile b/tools/libs/ctrl/Makefile
index bda0b43dfa..9d9b32cd2b 100644
--- a/tools/libs/ctrl/Makefile
+++ b/tools/libs/ctrl/Makefile
@@ -4,7 +4,6 @@ include $(XEN_ROOT)/tools/Rules.mk
 MAJOR    = 4.14
 MINOR    = 0
 LIBNAME  := ctrl
-USELIBS  := toollog call evtchn gnttab foreignmemory devicemodel
 
 SRCS-y       += xc_altp2m.c
 SRCS-y       += xc_core.c
diff --git a/tools/libs/devicemodel/Makefile b/tools/libs/devicemodel/Makefile
index d9d1d1b850..0c0d9114cb 100644
--- a/tools/libs/devicemodel/Makefile
+++ b/tools/libs/devicemodel/Makefile
@@ -4,7 +4,6 @@ include $(XEN_ROOT)/tools/Rules.mk
 MAJOR    = 1
 MINOR    = 3
 LIBNAME  := devicemodel
-USELIBS  := toollog toolcore call
 
 SRCS-y                 += core.c
 SRCS-$(CONFIG_Linux)   += linux.c
diff --git a/tools/libs/evtchn/Makefile b/tools/libs/evtchn/Makefile
index d7aa4d402f..ea529e5fb7 100644
--- a/tools/libs/evtchn/Makefile
+++ b/tools/libs/evtchn/Makefile
@@ -4,7 +4,6 @@ include $(XEN_ROOT)/tools/Rules.mk
 MAJOR    = 1
 MINOR    = 1
 LIBNAME  := evtchn
-USELIBS  := toollog toolcore
 
 SRCS-y                 += core.c
 SRCS-$(CONFIG_Linux)   += linux.c
diff --git a/tools/libs/foreignmemory/Makefile b/tools/libs/foreignmemory/Makefile
index 823989681d..17057fc1e1 100644
--- a/tools/libs/foreignmemory/Makefile
+++ b/tools/libs/foreignmemory/Makefile
@@ -4,7 +4,6 @@ include $(XEN_ROOT)/tools/Rules.mk
 MAJOR    = 1
 MINOR    = 3
 LIBNAME  := foreignmemory
-USELIBS  := toollog toolcore
 
 SRCS-y                 += core.c
 SRCS-$(CONFIG_Linux)   += linux.c
diff --git a/tools/libs/gnttab/Makefile b/tools/libs/gnttab/Makefile
index c0fffdac71..2467330d1a 100644
--- a/tools/libs/gnttab/Makefile
+++ b/tools/libs/gnttab/Makefile
@@ -4,7 +4,6 @@ include $(XEN_ROOT)/tools/Rules.mk
 MAJOR    = 1
 MINOR    = 2
 LIBNAME  := gnttab
-USELIBS  := toollog toolcore
 
 SRCS-GNTTAB            += gnttab_core.c
 SRCS-GNTSHR            += gntshr_core.c
diff --git a/tools/libs/hypfs/Makefile b/tools/libs/hypfs/Makefile
index b4c41f6189..af358547f2 100644
--- a/tools/libs/hypfs/Makefile
+++ b/tools/libs/hypfs/Makefile
@@ -4,7 +4,6 @@ include $(XEN_ROOT)/tools/Rules.mk
 MAJOR    = 1
 MINOR    = 0
 LIBNAME  := hypfs
-USELIBS  := toollog toolcore call
 
 APPEND_LDFLAGS += -lz
 
diff --git a/tools/libs/libs.mk b/tools/libs/libs.mk
index 764f5441e2..dde0360b1e 100644
--- a/tools/libs/libs.mk
+++ b/tools/libs/libs.mk
@@ -4,15 +4,14 @@
 #   LIBNAME: name of lib to build, will be prepended with "libxen"
 #   MAJOR:   major version of lib
 #   MINOR:   minor version of lib
-#   USELIBS: xen libs to use (e.g. "toolcore toollog")
 
 SHLIB_LDFLAGS += -Wl,--version-script=libxen$(LIBNAME).map
 
 CFLAGS   += -Werror -Wmissing-prototypes
 CFLAGS   += -I./include $(CFLAGS_xeninclude)
-CFLAGS   += $(foreach lib, $(USELIBS), $(CFLAGS_libxen$(lib)))
+CFLAGS   += $(foreach lib, $(USELIBS_$(LIBNAME)), $(CFLAGS_libxen$(lib)))
 
-LDUSELIBS = $(foreach lib, $(USELIBS), $(LDLIBS_libxen$(lib)))
+LDUSELIBS = $(foreach lib, $(USELIBS_$(LIBNAME)), $(LDLIBS_libxen$(lib)))
 
 LIB_OBJS := $(SRCS-y:.c=.o)
 PIC_OBJS := $(SRCS-y:.c=.opic)
diff --git a/tools/libs/stat/Makefile b/tools/libs/stat/Makefile
index 6162395a9a..87bfa1b91c 100644
--- a/tools/libs/stat/Makefile
+++ b/tools/libs/stat/Makefile
@@ -18,7 +18,6 @@ include $(XEN_ROOT)/tools/Rules.mk
 MAJOR = 4.14
 MINOR = 0
 LIBNAME  := stat
-USELIBS  := ctrl store
 
 CFLAGS += $(CFLAGS_libxenctrl) $(CFLAGS_libxenstore) $(CFLAGS_xeninclude) -include $(XEN_ROOT)/tools/config.h
 
diff --git a/tools/libs/store/Makefile b/tools/libs/store/Makefile
index 76b30145cf..2aa8add521 100644
--- a/tools/libs/store/Makefile
+++ b/tools/libs/store/Makefile
@@ -4,7 +4,6 @@ include $(XEN_ROOT)/tools/Rules.mk
 MAJOR = 3.0
 MINOR = 3
 LIBNAME  := store
-USELIBS  := toolcore
 
 ifeq ($(CONFIG_Linux),y)
 APPEND_LDFLAGS += -ldl
diff --git a/tools/libs/vchan/Makefile b/tools/libs/vchan/Makefile
index bf944d251c..5b59ff5265 100644
--- a/tools/libs/vchan/Makefile
+++ b/tools/libs/vchan/Makefile
@@ -4,7 +4,6 @@ include $(XEN_ROOT)/tools/Rules.mk
 MAJOR = 4.14
 MINOR = 0
 LIBNAME  := vchan
-USELIBS  := toollog store evtchn gnttab
 
 CFLAGS += $(CFLAGS_libxenctrl)
 
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Wed Jul 15 17:00:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jul 2020 17:00:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvklO-0007Id-Eq; Wed, 15 Jul 2020 17:00:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3pvp=A2=citrix.com=ian.jackson@srs-us1.protection.inumbo.net>)
 id 1jvklN-0007IY-4g
 for xen-devel@lists.xenproject.org; Wed, 15 Jul 2020 17:00:09 +0000
X-Inumbo-ID: a2bddc4e-c6bc-11ea-b7bb-bc764e2007e4
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a2bddc4e-c6bc-11ea-b7bb-bc764e2007e4;
 Wed, 15 Jul 2020 17:00:08 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1594832409;
 h=from:mime-version:content-transfer-encoding:message-id:
 date:to:cc:subject:in-reply-to:references;
 bh=zkz+mz662VvK3ybS1BY7w6MMW+kanXb3XebXrtv5YAQ=;
 b=aHv4UZE+zql1drRZKp2Hko11+t91Eey/M1s/L97ISs5gl1fkA2eVx/kn
 wiU/+Oh5KJpqHvM7qsivpz+/eZgUTTJcGp7a56hZ4eaAdAVTxTCqLFfFw
 6BeQaVpm+SJhFK5/h9PWxshN8X56VC/X7bYnRzSteCANZyZVnsJqqYyEI 4=;
Authentication-Results: esa1.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: z2poUUt8oWRQ/JwDI0gFZuklBUQ7DxRSVUsV9KYfu9wO2q0nOVRzU9B8L07TfK1h1YRGb7kFtg
 p2al74z8XInMoI1lx7jjGGBCawW+ToHeSZ1OpTzc5GOnFs0x/24VoVfjcYHwPUzAU81qdJ3Mgw
 nVZJTov4FabtcIWiqweNS8BPXBZ4fq84f2e0RyV08TTJJH4oNk1xAz42qP8liyK+jhNAhxeuR4
 16kk+5zn6+BlngL7mUAyvA5EZpDKOPtbY2z/YX07RqL/PjmwHXba0wk6BhLza5I4F2aXFlsNZp
 hbo=
X-SBRS: 2.7
X-MesageID: 22775239
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,355,1589256000"; d="scan'208";a="22775239"
From: Ian Jackson <ian.jackson@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Message-ID: <24335.13844.281994.447792@mariner.uk.xensource.com>
Date: Wed, 15 Jul 2020 18:00:04 +0100
To: "incoming+61544a64d0c2dc4555813e58f3810dd7@incoming.gitlab.com"
 <incoming+61544a64d0c2dc4555813e58f3810dd7@incoming.gitlab.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH 06/12] tools/misc: don't use libxenctrl internals from misc
 tools
In-Reply-To: <20200715162511.5941-8-ian.jackson@eu.citrix.com>
References: <20200715162511.5941-1-ian.jackson@eu.citrix.com>
 <20200715162511.5941-8-ian.jackson@eu.citrix.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>, Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Ian Jackson writes ("[PATCH 06/12] tools/misc: don't use libxenctrl internals from misc tools"):
> From: Juergen Gross <jgross@suse.com>
> 
> xen-hptool and xen-mfndump are using internals from libxenctrl e.g. by
> including private headers. Fix that by using either the correct
> official headers or use other means.
...
> -        ERROR("Failed to get the maximum mfn");
> +        fprintf(stderr, "Failed to get the maximum mfn");
>          return -1;

I think this is definitely worse than providing a duplicate definition of
ERROR.

Also, patches like this are much easier to read if the different kinds
of change are split up.  Do you think you can break it up further?

Thanks,
Ian.


From xen-devel-bounces@lists.xenproject.org Wed Jul 15 17:03:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jul 2020 17:03:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvkoK-0007Rj-0J; Wed, 15 Jul 2020 17:03:12 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3pvp=A2=citrix.com=ian.jackson@srs-us1.protection.inumbo.net>)
 id 1jvkoJ-0007Rc-1b
 for xen-devel@lists.xenproject.org; Wed, 15 Jul 2020 17:03:11 +0000
X-Inumbo-ID: 0f23504e-c6bd-11ea-9426-12813bfff9fa
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0f23504e-c6bd-11ea-9426-12813bfff9fa;
 Wed, 15 Jul 2020 17:03:09 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1594832589;
 h=from:mime-version:content-transfer-encoding:message-id:
 date:to:cc:subject:in-reply-to:references;
 bh=h9cQtaiuufh23WdS0XPBhjmP1otLEwg0Lui2su8Vyjc=;
 b=OFr68LEMmGcnebM3OrcR/9Z+sS4QtKfajQFxZrkwpVp/dhIvHjm6f4JH
 hqG7gGEWd48J/sftbJdvAxMjbJmhwc9A+/ms6So4zk27A8s1Qrdcmkfzy
 /LokJaD7QzgFUdVGTAw9nWe2Y8lnnamkbDdKGyLXiUHdzNc+kNme8Mb95 I=;
Authentication-Results: esa4.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: 0WHqnRHcG2Q+KSilCWSNrM81dMHowC7dUaLdom3SDVgZ7yy+CQTyqUVQZZWhv1HnC3dy6x5iZo
 MjVlMlxDkGrnHrF0K2gHTltr2I4r5cC1rCWW2eBz6zjmv5wbl56NYv+Q7uijcHKCfXjB02Devb
 YXseRlW4c3hymo24cvZAoCn3x9z0lFNnW75QiqPFH8xj9aceCeEbxH9ogAjcTGaBJLWB0Q8v9Y
 GO604ENJxGa/4VkBIEJV5DFF+z3eLANaBp2oCZ2OPG5Ft+vN1qroHF8KIjY4UIoZAsXf2PENYp
 pd4=
X-SBRS: 2.7
X-MesageID: 23304575
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,355,1589256000"; d="scan'208";a="23304575"
From: Ian Jackson <ian.jackson@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Message-ID: <24335.14023.865810.803319@mariner.uk.xensource.com>
Date: Wed, 15 Jul 2020 18:03:03 +0100
To: Stefano Stabellini <sstabellini@kernel.org>
Subject: Re: [PATCH 00/12] tools: move more libraries into tools/libs
In-Reply-To: <alpine.DEB.2.21.2007150945230.4124@sstabellini-ThinkPad-T480s>
References: <20200715162511.5941-1-ian.jackson@eu.citrix.com>
 <alpine.DEB.2.21.2007150945230.4124@sstabellini-ThinkPad-T480s>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@dornerworks.com" <xen-devel@dornerworks.com>,
 Julien Grall <julien@xen.org>,
 "incoming+61544a64d0c2dc4555813e58f3810dd7@incoming.gitlab.com"
 <incoming+61544a64d0c2dc4555813e58f3810dd7@incoming.gitlab.com>,
 Wei Liu <wl@xen.org>, Andrew Cooper <Andrew.Cooper3@citrix.com>,
 George Dunlap <George.Dunlap@citrix.com>,
 Marek =?iso-8859-1?Q?Marczykowski-G=F3recki?=
 <marmarek@invisiblethingslab.com>,
 Stewart Hildebrand <stewart.hildebrand@dornerworks.com>,
 Josh Whitehead <josh.whitehead@dornerworks.com>, Jan
 Beulich <jbeulich@suse.com>, Samuel Thibault <samuel.thibault@ens-lyon.org>,
 Anthony Perard <anthony.perard@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Stefano Stabellini writes ("Re: [PATCH 00/12] tools: move more libraries into tools/libs"):
> On Wed, 15 Jul 2020, Ian Jackson wrote:
> > [ NB: this patch series is actually from Juergen Gross.
> > 
> >   It is being experimentally handled as a Merge Request in gitlab, in
> >   part to see what problems there are with that workflow that will
> >   need extra tooling or whatever.
> > 
> >   I have manually generated this series using git-format-patch,
> >   scripts/add_maintainers.pl, and git-send-email.  I expect that if we
> >   adopt this as a real workflow, we will want to make a robot do some
> >   of that.
> > 
> >   I have set replies to go to the Gitlab comment thread and to
> >   xen-devel.  Again this is experimental.  We are likely to need
> >   something to automatically collect acks, at the very least.
> > 
> >   Reviewers: for now, please review this series as normal.  You may
> >   reply to the messages by email.  Please, for now, send your replies
> >   to gitlab and to the mailing list.  I think I have set the reply-to
> >   appropriately.
> > 
> >   Alternatively you may review the code in the gitlab web UI.  But
> >   please do not use the line-by-line comment system: write only to the
> >   main MR discussion thread.
> 
> Thanks for doing this Ian.
> 
> I am curious about this: why not the line-by-line comment system? It
> looks like it would be the most similar to email comments. Is it
> because comments done that way cannot be sent via email while the main
> MR discussion thread can?

Well, they can sort of go by email, but they can't be created by email,
and the webpage mouse UI is clearly primary for them.

Also they are difficult to archive, so difficult to look at later.

I guess this is an experiment, so feel free to try it, but be prepared
to c&p many messages manually to the list...

Ian.


From xen-devel-bounces@lists.xenproject.org Wed Jul 15 17:06:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jul 2020 17:06:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvkr6-0007Z6-Gx; Wed, 15 Jul 2020 17:06:04 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=cSQs=A2=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1jvkr4-0007Yz-Kz
 for xen-devel@lists.xenproject.org; Wed, 15 Jul 2020 17:06:02 +0000
X-Inumbo-ID: 75f660fe-c6bd-11ea-bb8b-bc764e2007e4
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 75f660fe-c6bd-11ea-bb8b-bc764e2007e4;
 Wed, 15 Jul 2020 17:06:02 +0000 (UTC)
Received: from localhost (c-67-164-102-47.hsd1.ca.comcast.net [67.164.102.47])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256
 bits)) (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 0D53F2065E;
 Wed, 15 Jul 2020 17:06:01 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
 s=default; t=1594832761;
 bh=MVQs60eyaseKttygMi5Okmio5UqpxovjpBaGYM2KopU=;
 h=Date:From:To:cc:Subject:In-Reply-To:References:From;
 b=CBPLDZ0a2kofL5+ir7rsYDlbbpzpSFpGzRjY5GXtgxKoOkAVo9rx/IXw2nZaAg6FP
 0PGoTpRFSOeEWLcCxLE4J3D3xFoVDU+sXxptoew5BleNGM0IN0YcxXbu+SlhudGRTe
 VCtmQZUU53N6u7CQ+nguMHVqQdrr4STCBfllpVWM=
Date: Wed, 15 Jul 2020 10:06:00 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: "Michael S. Tsirkin" <mst@redhat.com>
Subject: Re: [PATCH] xen: introduce xen_vring_use_dma
In-Reply-To: <20200711144334-mutt-send-email-mst@kernel.org>
Message-ID: <alpine.DEB.2.21.2007151001140.4124@sstabellini-ThinkPad-T480s>
References: <20200624163940-mutt-send-email-mst@kernel.org>
 <alpine.DEB.2.21.2006241351430.8121@sstabellini-ThinkPad-T480s>
 <20200624181026-mutt-send-email-mst@kernel.org>
 <alpine.DEB.2.21.2006251014230.8121@sstabellini-ThinkPad-T480s>
 <20200626110629-mutt-send-email-mst@kernel.org>
 <alpine.DEB.2.21.2006291621300.8121@sstabellini-ThinkPad-T480s>
 <20200701133456.GA23888@infradead.org>
 <alpine.DEB.2.21.2007011020320.8121@sstabellini-ThinkPad-T480s>
 <20200701172219-mutt-send-email-mst@kernel.org>
 <alpine.DEB.2.21.2007101019340.4124@sstabellini-ThinkPad-T480s>
 <20200711144334-mutt-send-email-mst@kernel.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: jgross@suse.com, Peng Fan <peng.fan@nxp.com>,
 Stefano Stabellini <sstabellini@kernel.org>, konrad.wilk@oracle.com,
 jasowang@redhat.com, x86@kernel.org, linux-kernel@vger.kernel.org,
 virtualization@lists.linux-foundation.org,
 Christoph Hellwig <hch@infradead.org>, iommu@lists.linux-foundation.org,
 linux-imx@nxp.com, xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com,
 linux-arm-kernel@lists.infradead.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Sat, 11 Jul 2020, Michael S. Tsirkin wrote:
> On Fri, Jul 10, 2020 at 10:23:22AM -0700, Stefano Stabellini wrote:
> > Sorry for the late reply -- a couple of conferences kept me busy.
> > 
> > 
> > On Wed, 1 Jul 2020, Michael S. Tsirkin wrote:
> > > On Wed, Jul 01, 2020 at 10:34:53AM -0700, Stefano Stabellini wrote:
> > > > Would you be in favor of a more flexible check along the lines of the
> > > > one proposed in the patch that started this thread:
> > > > 
> > > >     if (xen_vring_use_dma())
> > > >             return true;
> > > > 
> > > > 
> > > > xen_vring_use_dma would be implemented so that it returns true when
> > > > xen_swiotlb is required and false otherwise.
> > > 
> > > Just to stress - with a patch like this virtio can *still* use DMA API
> > > if PLATFORM_ACCESS is set. So if DMA API is broken on some platforms
> > > as you seem to be saying, you guys should fix it before doing something
> > > like this..
> > 
> > Yes, the DMA API is broken with some interfaces (specifically: rpmsg
> > and trusty), but for them PLATFORM_ACCESS is never set. That is why the
> > errors weren't reported before. The Xen special case aside, there is no
> > problem under normal circumstances.
> 
> So why not fix DMA API? Then this patch is not needed.

It looks like the conversation is going in circles :-)

I tried to explain in the following email why, even if we fixed the DMA
API to work with rpmsg and trusty, we would still need this patch:
https://marc.info/?l=linux-kernel&m=159347446709625&w=2


At that time it looked like you agreed that we needed to improve this
check?  (https://marc.info/?l=linux-kernel&m=159363662418250&w=2)


From xen-devel-bounces@lists.xenproject.org Wed Jul 15 17:07:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jul 2020 17:07:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvksc-0007eW-SW; Wed, 15 Jul 2020 17:07:38 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3pvp=A2=citrix.com=ian.jackson@srs-us1.protection.inumbo.net>)
 id 1jvksa-0007eR-Ny
 for xen-devel@lists.xenproject.org; Wed, 15 Jul 2020 17:07:36 +0000
X-Inumbo-ID: add44c0c-c6bd-11ea-8496-bc764e2007e4
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id add44c0c-c6bd-11ea-8496-bc764e2007e4;
 Wed, 15 Jul 2020 17:07:36 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1594832857;
 h=from:mime-version:content-transfer-encoding:message-id:
 date:to:cc:subject:in-reply-to:references;
 bh=FbX8MjGKU+Jfj1e6lWUlUNKCQU6B26jDQrRmqinAWoI=;
 b=FsERpbvM4wF+rYh1AOG13MjJTRidrivsie2Pm1yaDYV+9uu/8DVhE6Ju
 cw5ohQ1qi2rar7bRX73rsd5BvTc47Iy7a7zufwcziH5cnByBIFTWl7+Wh
 z1/4HfphDS7Cx7ZG2N3edoswBNeKEK1hpMpY+kLhwNGCYUX/sdyt2Bnwj A=;
Authentication-Results: esa1.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: D/DN2EO85rxYrB9J08pvpyvYW26JREQWaJAbgsFzLG69gc4OmjyR4KldJNsUye+fN17yBVse0W
 prHEHU1JUzMTPBNdthMXHfK7n8nMfrhgd8on6j5ufvELg6merpQ7E45UowWvok967oZnZHP0Rx
 saRi1ByNNnrILOmKPAeiZ6N0uSAKBv284+LNjWbSDn8bLU8SBre4coTmb36QV/lPGuhzGXGmSN
 QCjqVgq1BIJWbs37VixQi4ez5ieQqmIY1eGYYcr2PL4GOKYgtdRTt9EaBIBkI/HUg2Qwc+dS3w
 ok8=
X-SBRS: 2.7
X-MesageID: 22776360
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,355,1589256000"; d="scan'208";a="22776360"
From: Ian Jackson <ian.jackson@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Message-ID: <24335.14276.232430.125457@mariner.uk.xensource.com>
Date: Wed, 15 Jul 2020 18:07:16 +0100
To: "incoming+61544a64d0c2dc4555813e58f3810dd7@incoming.gitlab.com"
 <incoming+61544a64d0c2dc4555813e58f3810dd7@incoming.gitlab.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH 07/12] tools/libxc: untangle libxenctrl from libxenguest
In-Reply-To: <20200715162511.5941-9-ian.jackson@eu.citrix.com>
References: <20200715162511.5941-1-ian.jackson@eu.citrix.com>
 <20200715162511.5941-9-ian.jackson@eu.citrix.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>,
 Samuel Thibault <samuel.thibault@ens-lyon.org>,
 Marek =?iso-8859-1?Q?Marczykowski-G=F3recki?=
 <marmarek@invisiblethingslab.com>, Wei Liu <wl@xen.org>,
 Anthony Perard <anthony.perard@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Ian Jackson writes ("[PATCH 07/12] tools/libxc: untangle libxenctrl from libxenguest"):
> From: Juergen Gross <jgross@suse.com>
> 
> Sources of libxenctrl and libxenguest are completely entangled. In
> practice libxenguest is a user of libxenctrl, so don't let any source
> libxenctrl include xg_private.h.
> 
> This can be achieved by moving all definitions used by libxenctrl from
> xg_private.h to xc_private.h. Additionally xc_dom.h needs to be
> exported, so rename it to xenctrl_dom.h.
> 
> Rename all libxenguest sources from xc_* to xg_*.
> 
> Move the xc_[un]map_domain_meminfo() functions to a new source file, xg_domain.c, as
> they are defined in include/xenguest.h and should be in libxenguest.
> 
> Remove xc_efi.h and xc_elf.h as they aren't used anywhere.

Reviewing this is quite difficult.  Is there any way it could be split
up?  Perhaps some of it could be generated automatically?  (E.g. you
could send a patch whose commit message contained the perl rune you used
to generate it, which would enable a reviewer to see what was supposed
to be going on.)

Thanks,
Ian.


From xen-devel-bounces@lists.xenproject.org Wed Jul 15 17:08:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jul 2020 17:08:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvkt7-0007j1-5n; Wed, 15 Jul 2020 17:08:09 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3pvp=A2=citrix.com=ian.jackson@srs-us1.protection.inumbo.net>)
 id 1jvkt6-0007ir-Ba
 for xen-devel@lists.xenproject.org; Wed, 15 Jul 2020 17:08:08 +0000
X-Inumbo-ID: c07006f8-c6bd-11ea-9426-12813bfff9fa
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c07006f8-c6bd-11ea-9426-12813bfff9fa;
 Wed, 15 Jul 2020 17:08:07 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1594832888;
 h=from:mime-version:content-transfer-encoding:message-id:
 date:to:cc:subject:in-reply-to:references;
 bh=qTgzyhjsAt/1FKsWB9d9JIii+uf1SPMOShrWMF/KTM8=;
 b=avH/RFdQH4n/qRqAfUW1DolK3HuLN25/TuBwQKedEYLSclmpDmsB0heZ
 PTJ6ier/yxlcR8cDwJGBbtOx+Sbfm2bu9k94PfcLnJnZEEjK6RC2ALU9X
 kUpWKSKtEI4fthRv5c7TAhmCrG/OxhxB33G1j9II9pPZBXOXjfNA25mHY o=;
Authentication-Results: esa2.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: eeRG5UZqRpKInzxF2ZlmT8QAvkN+a6snyMzpC15UyrExHNKCWULJptKXH8f4cCeGGi/Qs5m0p6
 TQYs0kSN3BpaqG6ZNM3vcWIJKO0v+/kM7Uw+deSZNOgw1mq+aldSSbsbB1CeyyXnAutJJMmtnB
 pQYeOkVcx1T6yiKZonGg9pBbnZ+i2vj5BNgRVnQ3DYqaclG+1Z+Jkkr7flnljTMTdVQ9UJ2BSr
 tScXYWnLXH+KUGCMO5rL8BVATkDltt1Te8E6+VkhFWeien7QBGv+Mwd1CI1rV+4J5SAIEEygsp
 JTU=
X-SBRS: 2.7
X-MesageID: 22461457
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,355,1589256000"; d="scan'208";a="22461457"
From: Ian Jackson <ian.jackson@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Message-ID: <24335.14321.25667.767614@mariner.uk.xensource.com>
Date: Wed, 15 Jul 2020 18:08:01 +0100
To: "incoming+61544a64d0c2dc4555813e58f3810dd7@incoming.gitlab.com"
 <incoming+61544a64d0c2dc4555813e58f3810dd7@incoming.gitlab.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH 08/12] tools: move libxenctrl below tools/libs
In-Reply-To: <20200715162511.5941-10-ian.jackson@eu.citrix.com>
References: <20200715162511.5941-1-ian.jackson@eu.citrix.com>
 <20200715162511.5941-10-ian.jackson@eu.citrix.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@dornerworks.com" <xen-devel@dornerworks.com>,
 Juergen Gross <jgross@suse.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien@xen.org>, Wei Liu <wl@xen.org>,
 Andrew Cooper <Andrew.Cooper3@citrix.com>,
 George Dunlap <George.Dunlap@citrix.com>,
 Marek =?iso-8859-1?Q?Marczykowski-G=F3recki?=
 <marmarek@invisiblethingslab.com>, Anthony Perard <anthony.perard@citrix.com>,
 Josh Whitehead <josh.whitehead@dornerworks.com>,
 Jan Beulich <jbeulich@suse.com>,
 Stewart Hildebrand <stewart.hildebrand@dornerworks.com>,
 Samuel Thibault <samuel.thibault@ens-lyon.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Ian Jackson writes ("[PATCH 08/12] tools: move libxenctrl below tools/libs"):
> From: Juergen Gross <jgross@suse.com>
> 
> Today tools/libxc needs to be built after tools/libs as libxenctrl is
> depending on some libraries in tools/libs. This in turn blocks moving
> other libraries depending on libxenctrl below tools/libs.
> 
> So carve out libxenctrl from tools/libxc and move it into
> tools/libs/ctrl.

Again, maybe this could be done in two steps: 1. split the library, 2. move it?

Ian.


From xen-devel-bounces@lists.xenproject.org Wed Jul 15 17:20:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jul 2020 17:20:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvl4u-0000vS-FS; Wed, 15 Jul 2020 17:20:20 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=osgb=A2=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jvl4t-0000vN-GB
 for xen-devel@lists.xenproject.org; Wed, 15 Jul 2020 17:20:19 +0000
X-Inumbo-ID: 742a6f34-c6bf-11ea-942a-12813bfff9fa
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 742a6f34-c6bf-11ea-942a-12813bfff9fa;
 Wed, 15 Jul 2020 17:20:18 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1594833618;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:content-transfer-encoding:in-reply-to;
 bh=NpUi5rCb1Mghqi/e7W+ruW3dFQi5FPlOR4vRl+xgeEs=;
 b=GPfRCp1k8n1oqs9eVded6T2TYNam70H8z8dXzknpFHAzByN8JHlUtjkO
 PGnzdVCtSjoAdkuGOhFWfUfYtvR6Hd/WhP8R1aKAEW07avLke+gM5OWX1
 Ip2UTrS2HZNUcmJx66CL0+LrFGHei4FlKjlpjnjIZGWeg/YFgtNoH3JAF E=;
Authentication-Results: esa5.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: sY6ax7KGjdNZMhYTY+TEcliG6z5cXU3OU2uMlfkQ4N9eIV3GIjxsQdDNRcoMHBTwg+KaEAK0Yl
 J5XCWp52C/vmkY9Shl4+s5a8infloPYf75EXjdKGnISjhAHu83RAINZaUcnf2byaxZ/Px00A56
 ZBNBbvuxza6IU9IprGldY4RikdKNhYuniTwUR3N/F8R2olrqgx/x53uWXes/CGpydAMHAumTwG
 Pr+i4jJUhF6WwAJakrshm5cJBmVctfscKpungCpkyO9Qne1Uqbl2B6YZp0R2se/khph13/umDi
 +gs=
X-SBRS: 2.7
X-MesageID: 22654934
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,355,1589256000"; d="scan'208";a="22654934"
Date: Wed, 15 Jul 2020 19:20:11 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: =?utf-8?Q?Micha=C5=82_Leszczy=C5=84ski?= <michal.leszczynski@cert.pl>
Subject: Re: [PATCH v6 08/11] x86/mm: add vmtrace_buf resource type
Message-ID: <20200715172011.GF7191@Air-de-Roger>
References: <cover.1594150543.git.michal.leszczynski@cert.pl>
 <2129d21e7ef7e960951a8baafab01d9392dff8f3.1594150543.git.michal.leszczynski@cert.pl>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <2129d21e7ef7e960951a8baafab01d9392dff8f3.1594150543.git.michal.leszczynski@cert.pl>
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, luwei.kang@intel.com,
 tamas.lengyel@intel.com, Jan
 Beulich <jbeulich@suse.com>, xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Tue, Jul 07, 2020 at 09:39:47PM +0200, Michał Leszczyński wrote:
> From: Michal Leszczynski <michal.leszczynski@cert.pl>
> 
> Allow to map processor trace buffer using
> acquire_resource().
> 
> Signed-off-by: Michal Leszczynski <michal.leszczynski@cert.pl>
> ---
>  xen/common/memory.c         | 31 +++++++++++++++++++++++++++++++
>  xen/include/public/memory.h |  1 +
>  2 files changed, 32 insertions(+)
> 
> diff --git a/xen/common/memory.c b/xen/common/memory.c
> index eb42f883df..c0a22eb60f 100644
> --- a/xen/common/memory.c
> +++ b/xen/common/memory.c
> @@ -1007,6 +1007,32 @@ static long xatp_permission_check(struct domain *d, unsigned int space)
>      return xsm_add_to_physmap(XSM_TARGET, current->domain, d);
>  }
>  
> +static int acquire_vmtrace_buf(struct domain *d, unsigned int id,
> +                               uint64_t frame,
> +                               uint64_t nr_frames,

frame and nr_frames should be unsigned int, as I think they can't
overflow an unsigned integer?

> +                               xen_pfn_t mfn_list[])
> +{
> +    mfn_t mfn;
> +    unsigned int i;
> +    uint64_t size;
> +    struct vcpu *v = domain_vcpu(d, id);
> +
> +    if ( !v || !v->vmtrace.pt_buf )
> +        return -EINVAL;
> +
> +    mfn = page_to_mfn(v->vmtrace.pt_buf);
> +    size = v->domain->processor_trace_buf_kb * KB(1);

I think this should be the number of pages. Also, you want to access d
directly; there's no need to go through v->domain->...

> +
> +    if ( (frame > (size >> PAGE_SHIFT)) ||
> +         (nr_frames > ((size >> PAGE_SHIFT) - frame)) )

Isn't this off by one (shouldn't it use >=)? If size is 8 pages, the
valid pages are [0,7], but requesting page 8 would be out of range.

> +        return -EINVAL;
> +
> +    for ( i = 0; i < nr_frames; i++ )
> +        mfn_list[i] = mfn_x(mfn_add(mfn, frame + i));

You could drop the mfn variable and just do:

mfn_list[i] = mfn_x(page_to_mfn(v->vmtrace.pt_buf[frame + i]));

Thanks.


From xen-devel-bounces@lists.xenproject.org Wed Jul 15 17:32:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jul 2020 17:32:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvlGY-0001qw-Je; Wed, 15 Jul 2020 17:32:22 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=osgb=A2=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jvlGX-0001qr-MN
 for xen-devel@lists.xenproject.org; Wed, 15 Jul 2020 17:32:21 +0000
X-Inumbo-ID: 21f1f443-c6c1-11ea-942e-12813bfff9fa
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 21f1f443-c6c1-11ea-942e-12813bfff9fa;
 Wed, 15 Jul 2020 17:32:20 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1594834340;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:content-transfer-encoding:in-reply-to;
 bh=LBDx2QqeDw9WZ96RcUl126PhE18mwDrNiy8GsGKJWF8=;
 b=Td41bDWm5PrUR4zbUiAuYHhiCmwACtafDcRD7CrlaABDUtP0fOZhcHy7
 mr7vwZuvQrAsfUr1PxT7oKAKQHzB3EDj3e3QxEFd/yPCfPA9XhEUIquQI
 gEuWNIy30u72wznhoPMJZS95/+xoNNdBUSIZ29HAGPZBw9eOnHsAKrQKO o=;
Authentication-Results: esa5.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: 8nwuYvLGVw1/QLgYMNx9F8v9aXiQsE//zWNPJxkmi1eXbDbPrfqC6zEoBrWi4qUAKZJUbrvnsf
 vShPtRYnzglJonkrogX2ItiioYpy48kfsfvJCYGl12rxL30ulBWQn9BgZkJy4SWZF3gOiDsU80
 OMf+yzp30IaxkWRJg6bfolDV9sES/++gTNFSIgdna7CI1eHQSMqtRKqVwhj0+cZQg7UVRmTrFx
 Crl4r1YNCMiZlUxkwbpSsFjtESpnIpwaqK8U8XpzzLvyRcA4EoNrs+m8lAMl9o0zWwHtKHryBx
 OWw=
X-SBRS: 2.7
X-MesageID: 22655873
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,356,1589256000"; d="scan'208";a="22655873"
Date: Wed, 15 Jul 2020 19:32:02 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: =?utf-8?Q?Micha=C5=82_Leszczy=C5=84ski?= <michal.leszczynski@cert.pl>
Subject: Re: [PATCH v6 09/11] x86/domctl: add XEN_DOMCTL_vmtrace_op
Message-ID: <20200715173202.GG7191@Air-de-Roger>
References: <cover.1594150543.git.michal.leszczynski@cert.pl>
 <a9899858dba4a7e22a0256cff734399bff348adb.1594150543.git.michal.leszczynski@cert.pl>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <a9899858dba4a7e22a0256cff734399bff348adb.1594150543.git.michal.leszczynski@cert.pl>
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 tamas.lengyel@intel.com, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, luwei.kang@intel.com,
 Jan Beulich <jbeulich@suse.com>, xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Tue, Jul 07, 2020 at 09:39:48PM +0200, Michał Leszczyński wrote:
> From: Michal Leszczynski <michal.leszczynski@cert.pl>
> 
> Implement domctl to manage the runtime state of
> processor trace feature.
> 
> Signed-off-by: Michal Leszczynski <michal.leszczynski@cert.pl>
> ---
>  xen/arch/x86/domctl.c       | 50 +++++++++++++++++++++++++++++++++++++
>  xen/include/public/domctl.h | 28 +++++++++++++++++++++
>  2 files changed, 78 insertions(+)
> 
> diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
> index 6f2c69788d..6132499db4 100644
> --- a/xen/arch/x86/domctl.c
> +++ b/xen/arch/x86/domctl.c
> @@ -322,6 +322,50 @@ void arch_get_domain_info(const struct domain *d,
>      info->arch_config.emulation_flags = d->arch.emulation_flags;
>  }
>  
> +static int do_vmtrace_op(struct domain *d, struct xen_domctl_vmtrace_op *op,
> +                         XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
> +{
> +    int rc;
> +    struct vcpu *v;
> +
> +    if ( op->pad1 || op->pad2 )
> +        return -EINVAL;
> +
> +    if ( !vmtrace_supported )
> +        return -EOPNOTSUPP;
> +
> +    if ( !is_hvm_domain(d) )
> +        return -EOPNOTSUPP;

You can join both ifs into a single one, since they both return
-EOPNOTSUPP.

> +
> +    if ( op->vcpu >= d->max_vcpus )
> +        return -EINVAL;

You can remove this check and just use the return value of domain_vcpu
instead; if it's NULL, the vCPU ID is wrong.

> +
> +    v = domain_vcpu(d, op->vcpu);
> +    rc = 0;

No need to init rc to 0, as it would be unconditionally initialized in
the switch below due to the default case.

> +
> +    switch ( op->cmd )
> +    {
> +    case XEN_DOMCTL_vmtrace_pt_enable:
> +    case XEN_DOMCTL_vmtrace_pt_disable:
> +        vcpu_pause(v);
> +        rc = vmtrace_control_pt(v, op->cmd == XEN_DOMCTL_vmtrace_pt_enable);
> +        vcpu_unpause(v);
> +        break;
> +
> +    case XEN_DOMCTL_vmtrace_pt_get_offset:
> +        rc = vmtrace_get_pt_offset(v, &op->offset, &op->size);

In order to get consistent values here the vCPU should be paused, or
else you just get stale values from the last vmexit?

Maybe that's fine for your use case?

> +
> +        if ( !rc && d->is_dying )
> +            rc = ENODATA;

This check seems weird; if it is really required, shouldn't it be done
before attempting to fetch the data?

> +        break;
> +
> +    default:
> +        rc = -EOPNOTSUPP;
> +    }
> +
> +    return rc;
> +}
> +
>  #define MAX_IOPORTS 0x10000
>  
>  long arch_do_domctl(
> @@ -337,6 +381,12 @@ long arch_do_domctl(
>      switch ( domctl->cmd )
>      {
>  
> +    case XEN_DOMCTL_vmtrace_op:
> +        ret = do_vmtrace_op(d, &domctl->u.vmtrace_op, u_domctl);
> +        if ( !ret )
> +            copyback = true;
> +        break;

Nit: new additions usually go at the end of the switch.

> +
>      case XEN_DOMCTL_shadow_op:
>          ret = paging_domctl(d, &domctl->u.shadow_op, u_domctl, 0);
>          if ( ret == -ERESTART )
> diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
> index 7681675a94..73c7ccbd16 100644
> --- a/xen/include/public/domctl.h
> +++ b/xen/include/public/domctl.h
> @@ -1136,6 +1136,30 @@ struct xen_domctl_vuart_op {
>                                   */
>  };
>  
> +/* XEN_DOMCTL_vmtrace_op: Perform VM tracing related operation */
> +#if defined(__XEN__) || defined(__XEN_TOOLS__)
> +
> +struct xen_domctl_vmtrace_op {
> +    /* IN variable */
> +    uint32_t cmd;
> +/* Enable/disable external vmtrace for given domain */
> +#define XEN_DOMCTL_vmtrace_pt_enable      1
> +#define XEN_DOMCTL_vmtrace_pt_disable     2
> +#define XEN_DOMCTL_vmtrace_pt_get_offset  3
> +    domid_t domain;
> +    uint16_t pad1;
> +    uint32_t vcpu;
> +    uint16_t pad2;

pad2 should be a uint32_t? Otherwise the compiler adds an invisible
padding field. (Maybe I'm missing something; I haven't checked with
pahole.)

> +
> +    /* OUT variable */
> +    uint64_aligned_t size;
> +    uint64_aligned_t offset;
> +};
> +typedef struct xen_domctl_vmtrace_op xen_domctl_vmtrace_op_t;
> +DEFINE_XEN_GUEST_HANDLE(xen_domctl_vmtrace_op_t);
> +
> +#endif /* defined(__XEN__) || defined(__XEN_TOOLS__) */
> +
>  struct xen_domctl {
>      uint32_t cmd;
>  #define XEN_DOMCTL_createdomain                   1
> @@ -1217,6 +1241,7 @@ struct xen_domctl {
>  #define XEN_DOMCTL_vuart_op                      81
>  #define XEN_DOMCTL_get_cpu_policy                82
>  #define XEN_DOMCTL_set_cpu_policy                83
> +#define XEN_DOMCTL_vmtrace_op                    84
>  #define XEN_DOMCTL_gdbsx_guestmemio            1000
>  #define XEN_DOMCTL_gdbsx_pausevcpu             1001
>  #define XEN_DOMCTL_gdbsx_unpausevcpu           1002
> @@ -1277,6 +1302,9 @@ struct xen_domctl {
>          struct xen_domctl_monitor_op        monitor_op;
>          struct xen_domctl_psr_alloc         psr_alloc;
>          struct xen_domctl_vuart_op          vuart_op;
> +#if defined(__XEN__) || defined(__XEN_TOOLS__)
> +        struct xen_domctl_vmtrace_op        vmtrace_op;
> +#endif

No need for the preprocessor conditionals here; the whole of domctl.h
is only to be used by the tools or Xen, and is already properly guarded
(see the #error at the top of the file).

Thanks.


From xen-devel-bounces@lists.xenproject.org Wed Jul 15 18:25:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jul 2020 18:25:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvm5v-00068u-U8; Wed, 15 Jul 2020 18:25:27 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=QyDK=A2=protonmail.com=peter.jac@srs-us1.protection.inumbo.net>)
 id 1jvl1G-0000D5-Cb
 for xen-devel@lists.xenproject.org; Wed, 15 Jul 2020 17:16:34 +0000
X-Inumbo-ID: edb623bd-c6be-11ea-9429-12813bfff9fa
Received: from mail-40140.protonmail.ch (unknown [185.70.40.140])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id edb623bd-c6be-11ea-9429-12813bfff9fa;
 Wed, 15 Jul 2020 17:16:32 +0000 (UTC)
Date: Wed, 15 Jul 2020 17:16:22 +0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=protonmail.com;
 s=protonmail; t=1594833391;
 bh=FGH8dWCw4hV8XCyDkS3efIcLcu5p/CZ6AfNx0XnEB7s=;
 h=Date:To:From:Reply-To:Subject:From;
 b=umYRc7WXhZFnrVxaJySJ2udSuyjRul8hIqy9yd2KKfPPTOhKAwGbXgjCJHY4qsW+d
 LNR5Kw4Rq2Kg45w9pkbfME1Ym2IAs829XI6v0Eg21BQjLPyrm6EOry78jVzO1gVgyI
 NGfw0W0hiO7ygzzXdwBBVsBPpQ4OGUriX8eI/+WM=
To: xen-users@lists.xenproject.org, xen-devel <xen-devel@lists.xenproject.org>
From: peter.jac@protonmail.com
Subject: OpenSUSE and Xen
Message-ID: <jy5CG4DqGrBAir35SWF8DPAZHsHJPYzw4pdQWS8fMylQgEe3hrFhfJG2lZmgvXorh1TayKgqOgrEX8413UGzmF-RU4i_V479poF1OrnKNw8=@protonmail.com>
MIME-Version: 1.0
Content-Type: multipart/alternative;
 boundary="b1_F64iiwOB3gLbt06sPrBEgOIgF8BgaRWf0z9GLTC3c"
X-Spam-Status: No, score=-1.2 required=7.0 tests=ALL_TRUSTED,DKIM_SIGNED,
 DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,FREEMAIL_FROM,HTML_MESSAGE
 shortcircuit=no autolearn=disabled version=3.4.4
X-Spam-Checker-Version: SpamAssassin 3.4.4 (2020-01-24) on mail.protonmail.ch
X-Mailman-Approved-At: Wed, 15 Jul 2020 18:25:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Reply-To: peter.jac@protonmail.com
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hello,

I have a question for all Xen Project developers and users. Does
OpenSUSE support the Xen Project? Has anyone installed OpenSUSE? Why
does OpenSUSE offer no option to install Xen during installation, but
does offer one for KVM? Why is no Xen package included?

Cheers.



From xen-devel-bounces@lists.xenproject.org Wed Jul 15 19:15:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jul 2020 19:15:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvmsN-0001sf-TL; Wed, 15 Jul 2020 19:15:31 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tzA3=A2=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jvmsN-0001rt-1O
 for xen-devel@lists.xenproject.org; Wed, 15 Jul 2020 19:15:31 +0000
X-Inumbo-ID: 894b64ee-c6cf-11ea-9442-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 894b64ee-c6cf-11ea-9442-12813bfff9fa;
 Wed, 15 Jul 2020 19:15:25 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=rtfPxxs9RJKDBDgCr/cAQe/AFi8/yn9QY7bXAAyOoAw=; b=bisfxwo3bhN6+DzKKWTt3ClD6
 8g1Oy+rdSBcON1wb4RNToZ70gWpsUcTG4s4GBgTrQoDAScv/Dv3dxSMsYjSIJRlmcNsmjRySwHDJU
 pmwgkYzM1xq7riumwGu9RRGhonu8oQeUTsVLqaEm7H9p5h/X+8ajzOwB2kTec3saV/JrM=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jvmsG-0005lh-K6; Wed, 15 Jul 2020 19:15:24 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jvmsF-0005vA-MK; Wed, 15 Jul 2020 19:15:23 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jvmsF-0008Fk-Lm; Wed, 15 Jul 2020 19:15:23 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151921-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 151921: tolerable all pass - PUSHED
X-Osstest-Failures: xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=f8fe3c07363d11fc81d8e7382dbcaa357c861569
X-Osstest-Versions-That: xen=1969576661f3e34318e9b0a61a1a38f9a5aee16f
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 15 Jul 2020 19:15:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151921 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151921/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  f8fe3c07363d11fc81d8e7382dbcaa357c861569
baseline version:
 xen                  1969576661f3e34318e9b0a61a1a38f9a5aee16f

Last test of basis   151891  2020-07-14 09:00:47 Z    1 days
Testing same since   151921  2020-07-15 14:09:51 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   1969576661..f8fe3c0736  f8fe3c07363d11fc81d8e7382dbcaa357c861569 -> smoke


From xen-devel-bounces@lists.xenproject.org Wed Jul 15 19:17:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jul 2020 19:17:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvmtw-0001xB-8f; Wed, 15 Jul 2020 19:17:08 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Mv0N=A2=panix.com=marcotte@srs-us1.protection.inumbo.net>)
 id 1jvmtv-0001x5-Fe
 for xen-devel@lists.xenproject.org; Wed, 15 Jul 2020 19:17:07 +0000
X-Inumbo-ID: c57e20d3-c6cf-11ea-9442-12813bfff9fa
Received: from mailbackend.panix.com (unknown [166.84.1.89])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c57e20d3-c6cf-11ea-9442-12813bfff9fa;
 Wed, 15 Jul 2020 19:17:06 +0000 (UTC)
Received: from panix5.panix.com (panix5.panix.com [166.84.1.5])
 by mailbackend.panix.com (Postfix) with ESMTP id 4B6Rw15XLqz1Bb8;
 Wed, 15 Jul 2020 15:17:05 -0400 (EDT)
Received: by panix5.panix.com (Postfix, from userid 13564)
 id 4B6Rw15RRSzfYm; Wed, 15 Jul 2020 15:17:05 -0400 (EDT)
Date: Wed, 15 Jul 2020 15:17:05 -0400
From: Brian Marcotte <marcotte@panix.com>
To: "Durrant, Paul" <pdurrant@amazon.co.uk>
Subject: Re: [Xen-devel] XEN Qdisk Ceph rbd support broken?
Message-ID: <20200715191705.GA20643@panix.com>
References: <AC8105C4-6DAD-4AB0-AC3F-B4CDD151CDEB@ispire.me>
 <763e69df40604c51bb72477c706ec24b@EX13D32EUC003.ant.amazon.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <763e69df40604c51bb72477c706ec24b@EX13D32EUC003.ant.amazon.com>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Jules <jules@ispire.me>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 "oleksandr_grytsov@epam.com" <oleksandr_grytsov@epam.com>,
 "wl@xen.org" <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

This issue with Xen 4.13 and Ceph/RBD was last discussed back in February.

> Remote network Ceph image works fine with Xen 4.12.x ...
> 
> In Xen 4.13.0, which I tested recently, it fails with the error
> message "no such file or directory", as if it were trying to access the
> image via the local filesystem instead of over the network.
> ---
> 
> I doubt the issue is in xl/libxl; sounds more likely to be in QEMU. The
> PV block backend infrastructure in QEMU was changed between the 4.12
> and 4.13 releases. Have you tried using an older QEMU with 4.13?

I'm also encountering the problem:

    failed to create drive: Could not open 'rbd:rbd/machine.disk0': No such file or directory

Xenstore has "params" like this:

    aio:rbd:rbd/machine.disk0

If I set it to "rbd:rbd/machine.disk0", I get a different message:

  failed to create drive: Parameter 'pool' is missing

Using upstream QEMU versions 2 or 3 works fine.

The interesting thing is that access by the virtual BIOS works fine. So,
for a PVHVM domain, GRUB loads, which then loads a kernel, but the kernel
can't access the disks.
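The symptoms above are consistent with the backend mishandling the prefixed
disk spec. A rough stand-alone sketch of the kind of prefix parsing involved
(hypothetical helper name; the real logic lives in libxl and QEMU's qdisk
backend, not in this code):

```python
# Hypothetical sketch: the xenstore "params" key carries
# "aio:rbd:rbd/machine.disk0". If only the leading format hint ("aio:")
# is stripped, the backend still sees "rbd:rbd/machine.disk0"; a backend
# that dispatches on the "rbd:" protocol prefix opens the network image,
# while one that treats the whole value as a local file path fails with
# "No such file or directory".
def strip_format_prefix(params):
    # Drop a leading format hint like "aio:" or "qcow2:", if present.
    known_formats = ("aio", "raw", "qcow2", "vhd")
    head, sep, rest = params.partition(":")
    if sep and head in known_formats:
        return rest
    return params

spec = strip_format_prefix("aio:rbd:rbd/machine.disk0")
# spec is now "rbd:rbd/machine.disk0" -- whether that works depends on
# whether the backend then recognizes the "rbd:" protocol prefix.
```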

--
- Brian


From xen-devel-bounces@lists.xenproject.org Wed Jul 15 19:50:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jul 2020 19:50:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvnPo-000506-UI; Wed, 15 Jul 2020 19:50:04 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PCNO=A2=amazon.com=prvs=458800b8f=anchalag@srs-us1.protection.inumbo.net>)
 id 1jvnPn-0004mM-MT
 for xen-devel@lists.xenproject.org; Wed, 15 Jul 2020 19:50:03 +0000
X-Inumbo-ID: 5fe63138-c6d4-11ea-9447-12813bfff9fa
Received: from smtp-fw-2101.amazon.com (unknown [72.21.196.25])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 5fe63138-c6d4-11ea-9447-12813bfff9fa;
 Wed, 15 Jul 2020 19:50:03 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=amazon.com; i=@amazon.com; q=dns/txt; s=amazon201209;
 t=1594842604; x=1626378604;
 h=date:from:to:cc:message-id:references:mime-version:
 in-reply-to:subject;
 bh=99ygPzuHwg4Q3YW9N+NvtkYNtktLaI4YT2h/lpn7jBQ=;
 b=UFnWsfiuKcYxKApwDqwbO4JuWJM9eKqMvcjYfa7ae83L+E3IOktXwAD5
 1wHHvVKUrgAus7aPNrkORvrLyZLqsnyIeTugQzQrucQe/iiX/JBzJmcBU
 KA+7Qe+7VED+J/xRZLe/xhq02bb//HcCqu8K5r42HbhhM5tMMZznPTi66 w=;
IronPort-SDR: KVs3TTCGlt4jbzRtfh8Z1F2oZ0rdfRw93Xpe0nJRsHl/jLIcyO7oUiuuap5BxGwGfPqm/SHJu5
 uUVJci5HS3mg==
X-IronPort-AV: E=Sophos;i="5.75,356,1589241600"; d="scan'208";a="42072089"
Subject: Re: [PATCH v2 00/11] Fix PM hibernation in Xen guests
Received: from iad12-co-svc-p1-lb1-vlan2.amazon.com (HELO
 email-inbound-relay-1d-38ae4ad2.us-east-1.amazon.com) ([10.43.8.2])
 by smtp-border-fw-out-2101.iad2.amazon.com with ESMTP;
 15 Jul 2020 19:50:03 +0000
Received: from EX13MTAUEB002.ant.amazon.com
 (iad55-ws-svc-p15-lb9-vlan2.iad.amazon.com [10.40.159.162])
 by email-inbound-relay-1d-38ae4ad2.us-east-1.amazon.com (Postfix) with ESMTPS
 id 3C48DA216B; Wed, 15 Jul 2020 19:49:55 +0000 (UTC)
Received: from EX13D08UEB004.ant.amazon.com (10.43.60.142) by
 EX13MTAUEB002.ant.amazon.com (10.43.60.12) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Wed, 15 Jul 2020 19:49:34 +0000
Received: from EX13MTAUEB002.ant.amazon.com (10.43.60.12) by
 EX13D08UEB004.ant.amazon.com (10.43.60.142) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Wed, 15 Jul 2020 19:49:33 +0000
Received: from dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com
 (172.22.96.68) by mail-relay.amazon.com (10.43.60.234) with Microsoft SMTP
 Server id 15.0.1497.2 via Frontend Transport; Wed, 15 Jul 2020 19:49:33 +0000
Received: by dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com (Postfix,
 from userid 4335130)
 id 60CEF4E7C7; Wed, 15 Jul 2020 19:49:33 +0000 (UTC)
Date: Wed, 15 Jul 2020 19:49:33 +0000
From: Anchal Agarwal <anchalag@amazon.com>
To: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Message-ID: <20200715194933.GA17938@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
References: <cover.1593665947.git.anchalag@amazon.com>
 <324020A7-996F-4CF8-A2F4-46957CEA5F0C@amazon.com>
 <c6688a0c-7fec-97d2-3dcc-e160e97206e6@oracle.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <c6688a0c-7fec-97d2-3dcc-e160e97206e6@oracle.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Precedence: Bulk
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "Valentin, Eduardo" <eduval@amazon.com>,
 "len.brown@intel.com" <len.brown@intel.com>,
 "peterz@infradead.org" <peterz@infradead.org>,
 "benh@kernel.crashing.org" <benh@kernel.crashing.org>,
 "x86@kernel.org" <x86@kernel.org>, "linux-mm@kvack.org" <linux-mm@kvack.org>,
 "pavel@ucw.cz" <pavel@ucw.cz>, "hpa@zytor.com" <hpa@zytor.com>,
 "sstabellini@kernel.org" <sstabellini@kernel.org>, "Kamata,
 Munehisa" <kamatam@amazon.com>, "mingo@redhat.com" <mingo@redhat.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "Singh,
 Balbir" <sblbir@amazon.com>, "axboe@kernel.dk" <axboe@kernel.dk>,
 "konrad.wilk@oracle.com" <konrad.wilk@oracle.com>,
 "bp@alien8.de" <bp@alien8.de>, "tglx@linutronix.de" <tglx@linutronix.de>,
 "jgross@suse.com" <jgross@suse.com>,
 "netdev@vger.kernel.org" <netdev@vger.kernel.org>,
 "linux-pm@vger.kernel.org" <linux-pm@vger.kernel.org>,
 "rjw@rjwysocki.net" <rjw@rjwysocki.net>,
 "linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
 "vkuznets@redhat.com" <vkuznets@redhat.com>,
 "davem@davemloft.net" <davem@davemloft.net>, "Woodhouse,
 David" <dwmw@amazon.co.uk>, "roger.pau@citrix.com" <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Mon, Jul 13, 2020 at 03:43:33PM -0400, Boris Ostrovsky wrote:
> CAUTION: This email originated from outside of the organization. Do not click links or open attachments unless you can confirm the sender and know the content is safe.
> 
> 
> 
> On 7/10/20 2:17 PM, Agarwal, Anchal wrote:
> > Gentle ping on this series.
> 
> 
> Have you tested save/restore?
>
No, not with the last few series. But that's a good point; I will test it and get
back to you. Do you see anything specific in the series that suggests otherwise?

Thanks,
Anchal
> 
> -boris
> 
> 
> 


From xen-devel-bounces@lists.xenproject.org Wed Jul 15 19:54:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jul 2020 19:54:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvnTu-0005LA-GY; Wed, 15 Jul 2020 19:54:18 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Vvos=A2=suse.com=jfehlig@srs-us1.protection.inumbo.net>)
 id 1jvnTt-0005L5-46
 for xen-devel@lists.xenproject.org; Wed, 15 Jul 2020 19:54:17 +0000
X-Inumbo-ID: f625c1a4-c6d4-11ea-9448-12813bfff9fa
Received: from de-smtp-delivery-102.mimecast.com (unknown [51.163.158.102])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f625c1a4-c6d4-11ea-9448-12813bfff9fa;
 Wed, 15 Jul 2020 19:54:16 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com;
 s=mimecast20200619; t=1594842855;
 h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
 content-transfer-encoding:content-transfer-encoding:
 in-reply-to:in-reply-to:references:references;
 bh=8SS4qKv3xtpptNMOJJuE/l8WzUkx3uT8CN/b8Ziu1s0=;
 b=XB+niildCyrEoZbSuYK+aTZ/F0gJhD673eRTv6SqBZmHMf6ohbqW2Yib46gV5XcY9CG0Cv
 rJUaB97DwaBFHppk7x5aS1yZhYDtPDtucqrrySh5G9QJHJNu6GGV0SMXMndhlLWfP913W7
 WXZRwwONPiskE19Recg5y9To7agzhHQ=
Received: from EUR03-DB5-obe.outbound.protection.outlook.com
 (mail-db5eur03lp2054.outbound.protection.outlook.com [104.47.10.54]) (Using
 TLS) by relay.mimecast.com with ESMTP id
 de-mta-17-Vp181LPJOi-QXHZg94-o0A-1; Wed, 15 Jul 2020 21:54:13 +0200
X-MC-Unique: Vp181LPJOi-QXHZg94-o0A-1
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=ZXKDuLcMHXFxYtQewIld7TUFICnoGbutWdjSc5I6NOYQW+ubIviubS8uxD2/9w0BqHedVMS40tx/OfCy9uh6O6Vof8ZKUpBwYLRRC4He1RPrdrY1UJu+Oc09cPbn8Oq71TlPqd4mNPx15dPcRG2KijPpUHbtYdFpm3xy+seg+RMaoVanQH5yNyKvy6WI85fhGLOvLralEzzo2BidksE2a7PnlEyA6NDK9yehY0Br+nISrKCi2sIegzlhS8xJjnR7YCN6dXmuHoi6kHEbGyWud/ZSp+pbf5jcGyWNaaZ0kwnPMmb9e6Jff1A803ZkbXX/6BLnCSPbbmE8E4RgbLZHNw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; 
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=8SS4qKv3xtpptNMOJJuE/l8WzUkx3uT8CN/b8Ziu1s0=;
 b=NHStHwat2ZTO6Wos2m7BRbJrWP9ItkmqyTX3Zt1Mr3pPNF3dcJTMfq/QJ3js0eRJVuHa7SYp8O/1jJTVnw7LS8116xl2rcDkdifbcB5LdMoudJ3n424JoE7U6R8zom2Xf6JFyZlkofB1tXILXpHwJbxOvc7mmDnUdJHXjzuf0MZb+OEm0t89qDA0OTUhgTYn9+stAziIeyby8BAzJZ9nhuXIBZEjeWuUJrjLDjMOfEktNb0lEN1NGOBxS86UYsXsaiXri4Wsk6Tk4jCBmAKDgcmMNgEYiEoy8DPsa4OqXVTCTX4U4U747MeMU9c4NT5LjAApzCVLaqIYguYid4io7A==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: eu.citrix.com; dkim=none (message not signed)
 header.d=none;eu.citrix.com; dmarc=none action=none header.from=suse.com;
Received: from VI1PR0401MB2429.eurprd04.prod.outlook.com
 (2603:10a6:800:2c::13) by VI1PR04MB5534.eurprd04.prod.outlook.com
 (2603:10a6:803:d2::32) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3195.17; Wed, 15 Jul
 2020 19:54:05 +0000
Received: from VI1PR0401MB2429.eurprd04.prod.outlook.com
 ([fe80::7cc0:b0a4:b951:90e2]) by VI1PR0401MB2429.eurprd04.prod.outlook.com
 ([fe80::7cc0:b0a4:b951:90e2%11]) with mapi id 15.20.3195.018; Wed, 15 Jul
 2020 19:54:05 +0000
Subject: Re: [libvirt test] 151910: regressions - FAIL
To: osstest service owner <osstest-admin@xenproject.org>,
 xen-devel@lists.xenproject.org
References: <osstest-151910-mainreport@xen.org>
From: Jim Fehlig <jfehlig@suse.com>
Message-ID: <5b44b5dc-bc37-bdaa-47a4-5f5b72392f45@suse.com>
Date: Wed, 15 Jul 2020 13:53:58 -0600
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.9.0
In-Reply-To: <osstest-151910-mainreport@xen.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: AM3PR05CA0105.eurprd05.prod.outlook.com
 (2603:10a6:207:1::31) To VI1PR0401MB2429.eurprd04.prod.outlook.com
 (2603:10a6:800:2c::13)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
Received: from [192.168.1.55] (192.225.185.214) by
 AM3PR05CA0105.eurprd05.prod.outlook.com (2603:10a6:207:1::31) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3195.17 via Frontend Transport; Wed, 15 Jul 2020 19:54:03 +0000
X-Originating-IP: [192.225.185.214]
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: b9932b12-eb32-4076-9af6-08d828f8d41c
X-MS-TrafficTypeDiagnostic: VI1PR04MB5534:
X-Microsoft-Antispam-PRVS: <VI1PR04MB553442FD5D8500DAE6561233C67E0@VI1PR04MB5534.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:6108;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: eDUsdl9kcUYrV/cSgHdj3qOe5MypBXtvWKwPP59eMJ+NO5i0dBdJ1T23PsRBOVc7I/OhB0WLSn2pSpOOX0nX1A3DUCHHuvOWULKHFNdRAmIjlWEPsXxDOEsfnNEhztFFtjfK8shJyCzGzgNzyEOigyjwKQNgnMRrC3+A1e4YNXUOq33k8XsnI7xnA4G3/IoQYKiBWptXZAXNCc6z31rpqcBIU83TpZ9866jkU1FNAs0yS68nMAzSY6X0KitMj/qAJjUJPu+MbRX//gU08a9436ibVB3xyNCAguskUOoSGWPTwLfLojPsfrLMA3jldQ9VY4H7cVFALTWDtxry4kQRKY0FkCi9DrmdW9L/AW/EbZULO+VtIWpKDGPjqKn808KEcJ7c+DfKeO+iVNAbCSFnridwwHGtvpy1tW6gTE/OaVFXDtCg0+59b4DFrEIEpCAkbFom51jkGxnydaFnEVtH5w==
X-Forefront-Antispam-Report: CIP:255.255.255.255; CTRY:; LANG:en; SCL:1; SRV:;
 IPV:NLI; SFV:NSPM; H:VI1PR0401MB2429.eurprd04.prod.outlook.com; PTR:; CAT:NONE;
 SFTY:;
 SFS:(366004)(396003)(376002)(136003)(39860400002)(346002)(8676002)(26005)(83380400001)(36756003)(8936002)(66476007)(31686004)(4744005)(16576012)(86362001)(966005)(66556008)(66946007)(186003)(31696002)(5660300002)(16526019)(316002)(2906002)(478600001)(6486002)(956004)(2616005)(53546011)(52116002)(4326008)(6666004)(43740500002);
 DIR:OUT; SFP:1101; 
X-MS-Exchange-AntiSpam-MessageData: kbVPRUB2Dq5+wYlEKmreZZXefrD+c0Bd1Jf8Q6A1W6M+0Z+RyLplKgZA62Opm4vHGALiIWu2aLmr6Anyn0MA57U2C3gKlNrexS/jUjb+bAYwQNhZud9yWnvXoa0nR5HiLPPqqvU0Io5lGMnXpkHcNdCyUTw06XvEhCOISlTQfOziFomErBt6jKoppFdwnoS2nNWvIcQZ1hXawKSRkgYVGEeQ642iAABJjOs11Yfo9NxEQpLhEvMW1XnpbwO/Hqn5Dg+aEn67HxQ2XUyIvG2ymaRrS0NU9BlCkJmlDXl9unTFkxsAs4Ixej/6MD8mKiXQVurKWSN/p+Lo7WtLWkfKsHTN9/XUfcl/LO/MwS1yTDT+n/NO1xa+an1SWRnbdqAL9iGeQieyJ8RN+WIZc4SlFYhN+O9aNfiFWBEBBGiAuAGJRYHlFig3UIXvTuJ6R96uA/l+Q7nCEJt5hq7RaRx/j4XObuLWFg5APC4tMdXcwQ4=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: b9932b12-eb32-4076-9af6-08d828f8d41c
X-MS-Exchange-CrossTenant-AuthSource: VI1PR0401MB2429.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 15 Jul 2020 19:54:05.0259 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: SqKpf+IJoBzglg6EgfJQYCFeVylAfA0C1PEhV14OfhUEusEcaq51Lm6iG7yNphlF5FdePDZEkGbcZaVFsTtuDA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR04MB5534
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 7/15/20 9:07 AM, osstest service owner wrote:
> flight 151910 libvirt real [real]
> http://logs.test-lab.xenproject.org/osstest/logs/151910/
> 
> Regressions :-(
> 
> Tests which did not succeed and are blocking,
> including tests which could not be run:
>   build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
>   build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
>   build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777
>   build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777

I see the same configure failure has been encountered since July 11:

checking for XDR... no
configure: error: You must install the libtirpc >= 0.1.10 pkg-config module to 
compile libvirt

AFAICT there have been no related changes in libvirt (which has required
libtirpc for over two years). Has this package changed in Debian, or is it no
longer part of the base build config?
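The configure check is effectively a pkg-config module lookup plus a
minimum-version comparison; a rough stand-alone sketch of that comparison
(illustrative only, not libvirt's actual m4 code):

```python
def version_tuple(v):
    # "0.1.10" -> (0, 1, 10); good enough for dotted numeric versions
    return tuple(int(part) for part in v.split("."))

def libtirpc_ok(found_version, minimum="0.1.10"):
    # Mirrors what a check like PKG_CHECK_MODULES([XDR], [libtirpc >= 0.1.10])
    # decides: tuple comparison is element-wise, so 0.1.9 < 0.1.10 < 0.2.0.
    return version_tuple(found_version) >= version_tuple(minimum)
```

So the failure means pkg-config either cannot find libtirpc.pc at all, or
finds a version below the minimum, on the build hosts.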

Regards,
Jim



From xen-devel-bounces@lists.xenproject.org Wed Jul 15 20:00:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jul 2020 20:00:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvnZS-0005gX-6l; Wed, 15 Jul 2020 20:00:02 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tzA3=A2=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jvnZQ-0005Wj-QE
 for xen-devel@lists.xenproject.org; Wed, 15 Jul 2020 20:00:00 +0000
X-Inumbo-ID: bfb95332-c6d5-11ea-8496-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id bfb95332-c6d5-11ea-8496-bc764e2007e4;
 Wed, 15 Jul 2020 19:59:53 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=6LNz+ePavW/Vmjl1LbQNPjuq+FNWBbLX1HAtYV5n0Fw=; b=jtVwBUc+61BFYc/lO2zUcN0pf
 5X/wZVD+v8UlOJiZ6I6/YnVxNGtXzLM+IGbC5u54rxUfFDir6rYF7o9gtuIZzji2kbzNRGAZ+/bJR
 y89jYpd/HHzOrLQ7NEwgyRwz8XwH1K1XT8ca52aNmqmKQSuGzZzQEXKqzUCECvCgd6wQQ=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jvnZJ-0006fh-82; Wed, 15 Jul 2020 19:59:53 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jvnZI-0007Qz-Tl; Wed, 15 Jul 2020 19:59:53 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jvnZI-0001Fb-Sp; Wed, 15 Jul 2020 19:59:52 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151903-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 151903: regressions - FAIL
X-Osstest-Failures: xen-unstable:test-armhf-armhf-libvirt:guest-start.2:fail:regression
 xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=1969576661f3e34318e9b0a61a1a38f9a5aee16f
X-Osstest-Versions-That: xen=165f3afbfc3db70fcfdccad07085cde0a03c858b
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 15 Jul 2020 19:59:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151903 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151903/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-libvirt     17 guest-start.2            fail REGR. vs. 151884

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 151884
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 151884
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 151884
 test-armhf-armhf-xl-rtds     16 guest-start/debian.repeat    fail  like 151884
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 151884
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 151884
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 151884
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 151884
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 151884
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop             fail like 151884
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  1969576661f3e34318e9b0a61a1a38f9a5aee16f
baseline version:
 xen                  165f3afbfc3db70fcfdccad07085cde0a03c858b

Last test of basis   151884  2020-07-14 04:47:14 Z    1 days
Testing same since   151903  2020-07-15 01:07:47 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 1969576661f3e34318e9b0a61a1a38f9a5aee16f
Author: Jan Beulich <jbeulich@suse.com>
Date:   Tue Jul 14 10:00:45 2020 +0200

    VT-x: simplify/clarify vmx_load_pdptrs()
    
    * Guests outside of long mode can't have PCID enabled. Drop the
      respective check to make more obvious that there's no security issue
      (from potentially accessing past the mapped page's boundary).
    
    * Only bits 5...31 of CR3 are relevant in 32-bit PAE mode; all others
      are ignored. The high 32 ones may in particular have remained
      unchanged after leaving long mode.
    
    * Drop the unnecessary and badly typed local variable p.
    
    * Don't open-code hvm_long_mode_active() (and extend this to the related
      nested VT-x code).
    
    * Constify guest_pdptes to clarify that we're only reading from the
      page.
    
    * Drop the "crash" label now that there's only a single path leading
      there.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
    Reviewed-by: Kevin Tian <kevin.tian@intel.com>
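As a hedged sketch of the CR3 handling the second bullet describes (the helper name pae_pdpt_addr is hypothetical, not taken from the patch; the mask follows the PAE rule that only bits 5..31 of CR3 locate the 32-byte-aligned page-directory-pointer table):

```c
#include <stdint.h>

/* In 32-bit PAE mode only CR3 bits 5..31 are architecturally
 * relevant: they hold the 32-byte-aligned physical address of the
 * page-directory-pointer table. Bits 0..4 and the high 32 bits
 * (which may still hold a stale long-mode value) are ignored. */
static uint64_t pae_pdpt_addr(uint64_t cr3)
{
	return cr3 & 0xffffffe0u;
}
```

Masking rather than rejecting set high bits matters here because, as the bullet notes, the upper half of CR3 may legitimately have remained unchanged after leaving long mode.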

commit f36f4bf582d353d8424154b14b663b075a0276e3
Author: Roger Pau Monné <roger.pau@citrix.com>
Date:   Tue Jul 14 09:57:17 2020 +0200

    x86/mwait: remove unneeded local variables
    
    Remove the eax and cstate local variables, the same can be directly
    fetched from acpi_processor_cx without any transformations.
    
    No functional change.
    
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Wed Jul 15 20:50:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jul 2020 20:50:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvoLo-0001Kh-D8; Wed, 15 Jul 2020 20:50:00 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PCNO=A2=amazon.com=prvs=458800b8f=anchalag@srs-us1.protection.inumbo.net>)
 id 1jvoLm-0001Kc-Si
 for xen-devel@lists.xenproject.org; Wed, 15 Jul 2020 20:49:58 +0000
X-Inumbo-ID: be3f6080-c6dc-11ea-b7bb-bc764e2007e4
Received: from smtp-fw-6002.amazon.com (unknown [52.95.49.90])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id be3f6080-c6dc-11ea-b7bb-bc764e2007e4;
 Wed, 15 Jul 2020 20:49:57 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=amazon.com; i=@amazon.com; q=dns/txt; s=amazon201209;
 t=1594846198; x=1626382198;
 h=date:from:to:cc:message-id:references:mime-version:
 content-transfer-encoding:in-reply-to:subject;
 bh=Z0Bj5n1UQ2ZkITPj7VMLUCR260N7o5zgaQh0zJTjo9w=;
 b=rHcL5k2EM9ZnAeTwmLHvVfBzhBqtXI1Rl1pkYVHndn+SimCjMfAyYbvm
 LkIed8BdWvtvlWzoVFr0fnFWPLWUh51en8D7+XKd6fvXgNSA4xfxVi77k
 wcMvUuFYp8S4c/5hPcoN7ccBPq0+YxzqUPJGGK7/Td2cKpyti0+KJf3qJ s=;
IronPort-SDR: egCxTTQAJwMI43/0aX/z8oXu3NQj1TiM9Enuy46fU6CQHIN1t4Iofu4RMoWeW9abbFwiF0d9Lw
 DMcfZ+o0VE1A==
X-IronPort-AV: E=Sophos;i="5.75,356,1589241600"; d="scan'208";a="42136950"
Subject: Re: [PATCH v2 01/11] xen/manage: keep track of the on-going suspend
 mode
Received: from iad12-co-svc-p1-lb1-vlan3.amazon.com (HELO
 email-inbound-relay-2a-119b4f96.us-west-2.amazon.com) ([10.43.8.6])
 by smtp-border-fw-out-6002.iad6.amazon.com with ESMTP;
 15 Jul 2020 20:49:55 +0000
Received: from EX13MTAUWB001.ant.amazon.com
 (pdx4-ws-svc-p6-lb7-vlan2.pdx.amazon.com [10.170.41.162])
 by email-inbound-relay-2a-119b4f96.us-west-2.amazon.com (Postfix) with ESMTPS
 id 75ECA1A289B; Wed, 15 Jul 2020 20:49:53 +0000 (UTC)
Received: from EX13D01UWB004.ant.amazon.com (10.43.161.157) by
 EX13MTAUWB001.ant.amazon.com (10.43.161.207) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Wed, 15 Jul 2020 20:49:44 +0000
Received: from EX13MTAUWB001.ant.amazon.com (10.43.161.207) by
 EX13d01UWB004.ant.amazon.com (10.43.161.157) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Wed, 15 Jul 2020 20:49:44 +0000
Received: from dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com
 (172.22.96.68) by mail-relay.amazon.com (10.43.161.249) with Microsoft SMTP
 Server id 15.0.1497.2 via Frontend Transport; Wed, 15 Jul 2020 20:49:43 +0000
Received: by dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com (Postfix,
 from userid 4335130)
 id A13584E7C6; Wed, 15 Jul 2020 20:49:43 +0000 (UTC)
Date: Wed, 15 Jul 2020 20:49:43 +0000
From: Anchal Agarwal <anchalag@amazon.com>
To: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Message-ID: <20200715204943.GB17938@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
References: <cover.1593665947.git.anchalag@amazon.com>
 <20200702182136.GA3511@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
 <50298859-0d0e-6eb0-029b-30df2a4ecd63@oracle.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <50298859-0d0e-6eb0-029b-30df2a4ecd63@oracle.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Precedence: Bulk
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: eduval@amazon.com, len.brown@intel.com, peterz@infradead.org,
 benh@kernel.crashing.org, x86@kernel.org, linux-mm@kvack.org, pavel@ucw.cz,
 hpa@zytor.com, sstabellini@kernel.org, kamatam@amazon.com, mingo@redhat.com,
 xen-devel@lists.xenproject.org, sblbir@amazon.com, axboe@kernel.dk,
 konrad.wilk@oracle.com, anchalag@amazon.com, bp@alien8.de, tglx@linutronix.de,
 jgross@suse.com, netdev@vger.kernel.org, linux-pm@vger.kernel.org,
 rjw@rjwysocki.net, linux-kernel@vger.kernel.org, vkuznets@redhat.com,
 davem@davemloft.net, dwmw@amazon.co.uk, roger.pau@citrix.com
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Mon, Jul 13, 2020 at 11:52:01AM -0400, Boris Ostrovsky wrote:
> 
> 
> 
> On 7/2/20 2:21 PM, Anchal Agarwal wrote:
> > From: Munehisa Kamata <kamatam@amazon.com>
> >
> > Guest hibernation is different from xen suspend/resume/live migration.
> > Xen save/restore does not use pm_ops as is needed by guest hibernation.
> > Hibernation in the guest follows the ACPI path and is guest initiated; the
> > hibernation image is saved within the guest, as opposed to the other modes,
> > which are Xen-toolstack assisted and where image creation/storage is under
> > the control of the hypervisor/host machine.
> > To differentiate between Xen suspend and PM hibernation, keep track
> > of the ongoing suspend mode, mainly by using a new PM notifier.
> > Introduce simple functions which report the ongoing suspend mode
> > so that other Xen-related code can behave differently according to the
> > current suspend mode.
> > Since Xen suspend doesn't have a corresponding PM event, its main logic
> > is modified to acquire pm_mutex and set the current mode.
> >
> > Though acquiring pm_mutex is still the right thing to do, we may
> > see a deadlock if PM hibernation is interrupted by Xen suspend.
> > PM hibernation depends on the xenwatch thread to process xenbus state
> > transactions, but the thread will sleep waiting for pm_mutex, which is
> > already held by the PM hibernation context in that scenario. Xen shutdown
> > code may need some changes to avoid the issue.
> > code may need some changes to avoid the issue.
> >
> > [Anchal Agarwal: Changelog]:
> >  RFC v1->v2: Code refactoring
> >  v1->v2:     Remove unused functions for PM SUSPEND/PM hibernation
> >
> > Signed-off-by: Anchal Agarwal <anchalag@amazon.com>
> > Signed-off-by: Munehisa Kamata <kamatam@amazon.com>
> > ---
> >  drivers/xen/manage.c  | 60 +++++++++++++++++++++++++++++++++++++++++++
> >  include/xen/xen-ops.h |  1 +
> >  2 files changed, 61 insertions(+)
> >
> > diff --git a/drivers/xen/manage.c b/drivers/xen/manage.c
> > index cd046684e0d1..69833fd6cfd1 100644
> > --- a/drivers/xen/manage.c
> > +++ b/drivers/xen/manage.c
> > @@ -14,6 +14,7 @@
> >  #include <linux/freezer.h>
> >  #include <linux/syscore_ops.h>
> >  #include <linux/export.h>
> > +#include <linux/suspend.h>
> >
> >  #include <xen/xen.h>
> >  #include <xen/xenbus.h>
> > @@ -40,6 +41,20 @@ enum shutdown_state {
> >  /* Ignore multiple shutdown requests. */
> >  static enum shutdown_state shutting_down = SHUTDOWN_INVALID;
> >
> > +enum suspend_modes {
> > +     NO_SUSPEND = 0,
> > +     XEN_SUSPEND,
> > +     PM_HIBERNATION,
> > +};
> > +
> > +/* Protected by pm_mutex */
> > +static enum suspend_modes suspend_mode = NO_SUSPEND;
> > +
> > +bool xen_is_xen_suspend(void)
> 
> 
> Weren't you going to call this pv suspend? (And also --- is this suspend
> or hibernation? Your commit messages and cover letter talk about fixing
> hibernation).
> 
> 
This is hibernation for pvhvm/hvm/pv-on-hvm guests, as you may call them.
The method is just there to check whether "xen suspend" is in progress.
I do not see "xen_suspend" differentiating between pv and hvm
domains until later in the code; hence I abstracted it to xen_is_xen_suspend.
> > +{
> > +     return suspend_mode == XEN_SUSPEND;
> > +}
> > +
> 
> 
> 
> > +
> > +static int xen_pm_notifier(struct notifier_block *notifier,
> > +                     unsigned long pm_event, void *unused)
> > +{
> > +     switch (pm_event) {
> > +     case PM_SUSPEND_PREPARE:
> > +     case PM_HIBERNATION_PREPARE:
> > +     case PM_RESTORE_PREPARE:
> > +             suspend_mode = PM_HIBERNATION;
> 
> 
> Do you ever use this mode? It seems to me all you care about is whether
> or not we are doing XEN_SUSPEND. And so perhaps suspend_mode could
> become a boolean. And then maybe even drop it altogether, because you
> should be able to key off (shutting_down == SHUTDOWN_SUSPEND).
> 
> 
The mode was left there in case it's needed for the restore-prepare cases. But you
are right: the only thing I currently care about is whether shutting_down ==
SHUTDOWN_SUSPEND. In fact, the notifier may not be needed in the first place;
xen_is_xen_suspend could work right off the bat using the 'shutting_down' variable
itself. *I think so.* I will test it on my end and send an updated patch.
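A minimal, self-contained sketch of that simplification (the enum values here are illustrative stand-ins for the ones in drivers/xen/manage.c; only the SHUTDOWN_SUSPEND comparison is the point):

```c
#include <stdbool.h>

/* Illustrative stand-in for the shutdown state machine in
 * drivers/xen/manage.c; only SHUTDOWN_SUSPEND matters for this check. */
enum shutdown_state {
	SHUTDOWN_INVALID = -1,
	SHUTDOWN_POWEROFF = 0,
	SHUTDOWN_SUSPEND = 2,
};

static enum shutdown_state shutting_down = SHUTDOWN_INVALID;

/* Hypothetical simplification: no PM notifier and no separate
 * suspend_mode -- key directly off the state that the Xen suspend
 * path already sets. */
bool xen_is_xen_suspend(void)
{
	return shutting_down == SHUTDOWN_SUSPEND;
}
```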
> > +             break;
> > +     case PM_POST_SUSPEND:
> > +     case PM_POST_RESTORE:
> > +     case PM_POST_HIBERNATION:
> > +             /* Set back to the default */
> > +             suspend_mode = NO_SUSPEND;
> > +             break;
> > +     default:
> > +             pr_warn("Receive unknown PM event 0x%lx\n", pm_event);
> > +             return -EINVAL;
> > +     }
> > +
> > +     return 0;
> > +};
> 
> 
> 
> > +static int xen_setup_pm_notifier(void)
> > +{
> > +     if (!xen_hvm_domain())
> > +             return -ENODEV;
> 
> 
> I forgot --- what did we decide about non-x86 (i.e. ARM)?
It would be great to support that; however, it's out of
scope for this patch set.
I'll be happy to discuss it separately.
> 
> 
> And PVH dom0.
That's another good use case to make it work with; however, I still
think that should be tested and worked on separately, as the feature itself
(PVH Dom0) is very new.
> 
> 
Thanks,
Anchal
> -boris
> 
> 
> 


From xen-devel-bounces@lists.xenproject.org Wed Jul 15 20:50:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jul 2020 20:50:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvoMW-00021P-NH; Wed, 15 Jul 2020 20:50:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=IaIB=A2=oracle.com=boris.ostrovsky@srs-us1.protection.inumbo.net>)
 id 1jvoMU-00021G-J5
 for xen-devel@lists.xenproject.org; Wed, 15 Jul 2020 20:50:43 +0000
X-Inumbo-ID: d861b03a-c6dc-11ea-8496-bc764e2007e4
Received: from userp2130.oracle.com (unknown [156.151.31.86])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d861b03a-c6dc-11ea-8496-bc764e2007e4;
 Wed, 15 Jul 2020 20:50:41 +0000 (UTC)
Received: from pps.filterd (userp2130.oracle.com [127.0.0.1])
 by userp2130.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 06FKWKj1066227;
 Wed, 15 Jul 2020 20:50:15 GMT
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com;
 h=subject : to : cc :
 references : from : message-id : date : mime-version : in-reply-to :
 content-type : content-transfer-encoding; s=corp-2020-01-29;
 bh=JvYRaru1KyYvRqOyLFyBN0GUWD3NhbyrMdaoHytyByQ=;
 b=BtYr5FPVDBA0xZ3ZC5s7rA9zlu4IMZKwCOpKP8o4mKoCgEhKuDLX+jys8JSOT68qZ706
 e0aIdV9/NT5XqvrPwVWccCbGkewNxOMAdYI/TH7IY6/G7LEI5z3VsGkRmOpPtowp/s4Y
 y0BzoU7Sbhe1/BHk5LNpq/isGw/ELrGnOdPTb/QpfqpIIVwesbt6kZr7I8Ne1Ha8kw/l
 n1lp7ZG3rP9nSczU7NwVmYUB+bde3OmiHxgAqlyR778p8tTKC2+V9g5KEIs4qA+qVi42
 e5vbUn2XZWYPeOC+N1WZHY1brO5USwU8uSsr4nqBpxEz2HfZL9qJLCKw+Z+nV0vX5gCn gg== 
Received: from aserp3030.oracle.com (aserp3030.oracle.com [141.146.126.71])
 by userp2130.oracle.com with ESMTP id 3274urdt23-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=FAIL);
 Wed, 15 Jul 2020 20:50:15 +0000
Received: from pps.filterd (aserp3030.oracle.com [127.0.0.1])
 by aserp3030.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 06FKXeaD171549;
 Wed, 15 Jul 2020 20:50:14 GMT
Received: from userv0122.oracle.com (userv0122.oracle.com [156.151.31.75])
 by aserp3030.oracle.com with ESMTP id 327q0ryurw-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Wed, 15 Jul 2020 20:50:14 +0000
Received: from abhmp0016.oracle.com (abhmp0016.oracle.com [141.146.116.22])
 by userv0122.oracle.com (8.14.4/8.14.4) with ESMTP id 06FKo8kx002016;
 Wed, 15 Jul 2020 20:50:08 GMT
Received: from [10.39.217.130] (/10.39.217.130)
 by default (Oracle Beehive Gateway v4.0)
 with ESMTP ; Wed, 15 Jul 2020 13:50:08 -0700
Subject: Re: [PATCH v2 00/11] Fix PM hibernation in Xen guests
To: Anchal Agarwal <anchalag@amazon.com>
References: <cover.1593665947.git.anchalag@amazon.com>
 <324020A7-996F-4CF8-A2F4-46957CEA5F0C@amazon.com>
 <c6688a0c-7fec-97d2-3dcc-e160e97206e6@oracle.com>
 <20200715194933.GA17938@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Autocrypt: addr=boris.ostrovsky@oracle.com; keydata=
 xsFNBFH8CgsBEAC0KiOi9siOvlXatK2xX99e/J3OvApoYWjieVQ9232Eb7GzCWrItCzP8FUV
 PQg8rMsSd0OzIvvjbEAvaWLlbs8wa3MtVLysHY/DfqRK9Zvr/RgrsYC6ukOB7igy2PGqZd+M
 MDnSmVzik0sPvB6xPV7QyFsykEgpnHbvdZAUy/vyys8xgT0PVYR5hyvhyf6VIfGuvqIsvJw5
 C8+P71CHI+U/IhsKrLrsiYHpAhQkw+Zvyeml6XSi5w4LXDbF+3oholKYCkPwxmGdK8MUIdkM
 d7iYdKqiP4W6FKQou/lC3jvOceGupEoDV9botSWEIIlKdtm6C4GfL45RD8V4B9iy24JHPlom
 woVWc0xBZboQguhauQqrBFooHO3roEeM1pxXjLUbDtH4t3SAI3gt4dpSyT3EvzhyNQVVIxj2
 FXnIChrYxR6S0ijSqUKO0cAduenhBrpYbz9qFcB/GyxD+ZWY7OgQKHUZMWapx5bHGQ8bUZz2
 SfjZwK+GETGhfkvNMf6zXbZkDq4kKB/ywaKvVPodS1Poa44+B9sxbUp1jMfFtlOJ3AYB0WDS
 Op3d7F2ry20CIf1Ifh0nIxkQPkTX7aX5rI92oZeu5u038dHUu/dO2EcuCjl1eDMGm5PLHDSP
 0QUw5xzk1Y8MG1JQ56PtqReO33inBXG63yTIikJmUXFTw6lLJwARAQABzTNCb3JpcyBPc3Ry
 b3Zza3kgKFdvcmspIDxib3Jpcy5vc3Ryb3Zza3lAb3JhY2xlLmNvbT7CwXgEEwECACIFAlH8
 CgsCGwMGCwkIBwMCBhUIAgkKCwQWAgMBAh4BAheAAAoJEIredpCGysGyasEP/j5xApopUf4g
 9Fl3UxZuBx+oduuw3JHqgbGZ2siA3EA4bKwtKq8eT7ekpApn4c0HA8TWTDtgZtLSV5IdH+9z
 JimBDrhLkDI3Zsx2CafL4pMJvpUavhc5mEU8myp4dWCuIylHiWG65agvUeFZYK4P33fGqoaS
 VGx3tsQIAr7MsQxilMfRiTEoYH0WWthhE0YVQzV6kx4wj4yLGYPPBtFqnrapKKC8yFTpgjaK
 jImqWhU9CSUAXdNEs/oKVR1XlkDpMCFDl88vKAuJwugnixjbPFTVPyoC7+4Bm/FnL3iwlJVE
 qIGQRspt09r+datFzPqSbp5Fo/9m4JSvgtPp2X2+gIGgLPWp2ft1NXHHVWP19sPgEsEJXSr9
 tskM8ScxEkqAUuDs6+x/ISX8wa5Pvmo65drN+JWA8EqKOHQG6LUsUdJolFM2i4Z0k40BnFU/
 kjTARjrXW94LwokVy4x+ZYgImrnKWeKac6fMfMwH2aKpCQLlVxdO4qvJkv92SzZz4538az1T
 m+3ekJAimou89cXwXHCFb5WqJcyjDfdQF857vTn1z4qu7udYCuuV/4xDEhslUq1+GcNDjAhB
 nNYPzD+SvhWEsrjuXv+fDONdJtmLUpKs4Jtak3smGGhZsqpcNv8nQzUGDQZjuCSmDqW8vn2o
 hWwveNeRTkxh+2x1Qb3GT46uzsFNBFH8CgsBEADGC/yx5ctcLQlB9hbq7KNqCDyZNoYu1HAB
 Hal3MuxPfoGKObEktawQPQaSTB5vNlDxKihezLnlT/PKjcXC2R1OjSDinlu5XNGc6mnky03q
 yymUPyiMtWhBBftezTRxWRslPaFWlg/h/Y1iDuOcklhpr7K1h1jRPCrf1yIoxbIpDbffnuyz
 kuto4AahRvBU4Js4sU7f/btU+h+e0AcLVzIhTVPIz7PM+Gk2LNzZ3/on4dnEc/qd+ZZFlOQ4
 KDN/hPqlwA/YJsKzAPX51L6Vv344pqTm6Z0f9M7YALB/11FO2nBB7zw7HAUYqJeHutCwxm7i
 BDNt0g9fhviNcJzagqJ1R7aPjtjBoYvKkbwNu5sWDpQ4idnsnck4YT6ctzN4I+6lfkU8zMzC
 gM2R4qqUXmxFIS4Bee+gnJi0Pc3KcBYBZsDK44FtM//5Cp9DrxRQOh19kNHBlxkmEb8kL/pw
 XIDcEq8MXzPBbxwHKJ3QRWRe5jPNpf8HCjnZz0XyJV0/4M1JvOua7IZftOttQ6KnM4m6WNIZ
 2ydg7dBhDa6iv1oKdL7wdp/rCulVWn8R7+3cRK95SnWiJ0qKDlMbIN8oGMhHdin8cSRYdmHK
 kTnvSGJNlkis5a+048o0C6jI3LozQYD/W9wq7MvgChgVQw1iEOB4u/3FXDEGulRVko6xCBU4
 SQARAQABwsFfBBgBAgAJBQJR/AoLAhsMAAoJEIredpCGysGyfvMQAIywR6jTqix6/fL0Ip8G
 jpt3uk//QNxGJE3ZkUNLX6N786vnEJvc1beCu6EwqD1ezG9fJKMl7F3SEgpYaiKEcHfoKGdh
 30B3Hsq44vOoxR6zxw2B/giADjhmWTP5tWQ9548N4VhIZMYQMQCkdqaueSL+8asp8tBNP+TJ
 PAIIANYvJaD8xA7sYUXGTzOXDh2THWSvmEWWmzok8er/u6ZKdS1YmZkUy8cfzrll/9hiGCTj
 u3qcaOM6i/m4hqtvsI1cOORMVwjJF4+IkC5ZBoeRs/xW5zIBdSUoC8L+OCyj5JETWTt40+lu
 qoqAF/AEGsNZTrwHJYu9rbHH260C0KYCNqmxDdcROUqIzJdzDKOrDmebkEVnxVeLJBIhYZUd
 t3Iq9hdjpU50TA6sQ3mZxzBdfRgg+vaj2DsJqI5Xla9QGKD+xNT6v14cZuIMZzO7w0DoojM4
 ByrabFsOQxGvE0w9Dch2BDSI2Xyk1zjPKxG1VNBQVx3flH37QDWpL2zlJikW29Ws86PHdthh
 Fm5PY8YtX576DchSP6qJC57/eAAe/9ztZdVAdesQwGb9hZHJc75B+VNm4xrh/PJO6c1THqdQ
 19WVJ+7rDx3PhVncGlbAOiiiE3NOFPJ1OQYxPKtpBUukAlOTnkKE6QcA4zckFepUkfmBV1wM
 Jg6OxFYd01z+a+oL
Message-ID: <6145a0d9-fd4e-a739-407e-97f7261eecd8@oracle.com>
Date: Wed, 15 Jul 2020 16:49:57 -0400
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20200715194933.GA17938@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
Content-Language: en-US
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9683
 signatures=668680
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 mlxscore=0
 malwarescore=0 spamscore=0
 mlxlogscore=999 bulkscore=0 adultscore=0 phishscore=0 suspectscore=0
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2006250000
 definitions=main-2007150158
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9683
 signatures=668680
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0
 lowpriorityscore=0 impostorscore=0
 suspectscore=0 phishscore=0 spamscore=0 mlxlogscore=999 malwarescore=0
 mlxscore=0 priorityscore=1501 adultscore=0 bulkscore=0 clxscore=1015
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2006250000
 definitions=main-2007150158
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "Valentin, Eduardo" <eduval@amazon.com>,
 "len.brown@intel.com" <len.brown@intel.com>,
 "peterz@infradead.org" <peterz@infradead.org>,
 "benh@kernel.crashing.org" <benh@kernel.crashing.org>,
 "x86@kernel.org" <x86@kernel.org>, "linux-mm@kvack.org" <linux-mm@kvack.org>,
 "pavel@ucw.cz" <pavel@ucw.cz>, "hpa@zytor.com" <hpa@zytor.com>,
 "sstabellini@kernel.org" <sstabellini@kernel.org>, "Kamata,
 Munehisa" <kamatam@amazon.com>, "mingo@redhat.com" <mingo@redhat.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "Singh,
 Balbir" <sblbir@amazon.com>, "axboe@kernel.dk" <axboe@kernel.dk>,
 "konrad.wilk@oracle.com" <konrad.wilk@oracle.com>,
 "bp@alien8.de" <bp@alien8.de>, "tglx@linutronix.de" <tglx@linutronix.de>,
 "jgross@suse.com" <jgross@suse.com>,
 "netdev@vger.kernel.org" <netdev@vger.kernel.org>,
 "linux-pm@vger.kernel.org" <linux-pm@vger.kernel.org>,
 "rjw@rjwysocki.net" <rjw@rjwysocki.net>,
 "linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
 "vkuznets@redhat.com" <vkuznets@redhat.com>,
 "davem@davemloft.net" <davem@davemloft.net>, "Woodhouse,
 David" <dwmw@amazon.co.uk>, "roger.pau@citrix.com" <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 7/15/20 3:49 PM, Anchal Agarwal wrote:
> On Mon, Jul 13, 2020 at 03:43:33PM -0400, Boris Ostrovsky wrote:
>>
>>
>>
>> On 7/10/20 2:17 PM, Agarwal, Anchal wrote:
>>> Gentle ping on this series.
>>
>> Have you tested save/restore?
>>
> No, not with the last few series. But a good point, I will test that and get
> back to you. Do you see anything specific in the series that suggests otherwise?


root@ovs104> xl save pvh saved
Saving to saved new xl format (info 0x3/0x0/1699)
xc: info: Saving domain 3, type x86 HVM
xc: Frames: 1044480/1044480  100%
xc: End of stream: 0/0    0%
root@ovs104> xl restore saved
Loading new save file saved (new xl fmt info 0x3/0x0/1699)
 Savefile contains xl domain config in JSON format
Parsing config from <saved>
xc: info: Found x86 HVM domain from Xen 4.13
xc: info: Restoring domain
xc: info: Restore successful
xc: info: XenStore: mfn 0xfeffc, dom 0, evt 1
xc: info: Console: mfn 0xfefff, dom 0, evt 2
root@ovs104> xl console pvh
[  139.943872] ------------[ cut here ]------------
[  139.943872] kernel BUG at arch/x86/xen/enlighten.c:205!
[  139.943872] invalid opcode: 0000 [#1] SMP PTI
[  139.943872] CPU: 0 PID: 11 Comm: migration/0 Not tainted 5.8.0-rc5 #26
[  139.943872] RIP: 0010:xen_vcpu_setup+0x16d/0x180
[  139.943872] Code: 4a 8b 14 f5 40 c9 1b 82 48 89 d8 48 89 2c 02 8b 05 a4 d4 40 01 85 c0 0f 85 15 ff ff ff 4a 8b 04 f5 40 c9 1b 82 e9 f4 fe ff ff <0f> 0b b8 ed ff ff ff e9 14 ff ff ff e8 12 4f 86 00 66 90 66 66 66
[  139.943872] RSP: 0018:ffffc9000006bdb0 EFLAGS: 00010046
[  139.943872] RAX: 0000000000000000 RBX: ffffc9000014fe00 RCX: 0000000000000000
[  139.943872] RDX: ffff88803fc00000 RSI: 0000000000016128 RDI: 0000000000000000
[  139.943872] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
[  139.943872] R10: ffffffff826174a0 R11: ffffc9000006bcb4 R12: 0000000000016120
[  139.943872] R13: 0000000000016120 R14: 0000000000016128 R15: 0000000000000000
[  139.943872] FS:  0000000000000000(0000) GS:ffff88803fc00000(0000) knlGS:0000000000000000
[  139.943872] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[  139.943872] CR2: 00007f704be8b000 CR3: 000000003901a004 CR4: 00000000000606f0
[  139.943872] Call Trace:
[  139.943872]  ? __kmalloc+0x167/0x260
[  139.943872]  ? xen_manage_runstate_time+0x14a/0x170
[  139.943872]  xen_vcpu_restore+0x134/0x170
[  139.943872]  xen_hvm_post_suspend+0x1d/0x30
[  139.943872]  xen_arch_post_suspend+0x13/0x30
[  139.943872]  xen_suspend+0x87/0x190
[  139.943872]  multi_cpu_stop+0x6d/0x110
[  139.943872]  ? stop_machine_yield+0x10/0x10
[  139.943872]  cpu_stopper_thread+0x47/0x100
[  139.943872]  smpboot_thread_fn+0xc5/0x160
[  139.943872]  ? sort_range+0x20/0x20
[  139.943872]  kthread+0xfe/0x140
[  139.943872]  ? kthread_park+0x90/0x90
[  139.943872]  ret_from_fork+0x22/0x30
[  139.943872] Modules linked in:
[  139.943872] ---[ end trace 74716859a6b4f0a8 ]---
[  139.943872] RIP: 0010:xen_vcpu_setup+0x16d/0x180
[  139.943872] Code: 4a 8b 14 f5 40 c9 1b 82 48 89 d8 48 89 2c 02 8b 05 a4 d4 40 01 85 c0 0f 85 15 ff ff ff 4a 8b 04 f5 40 c9 1b 82 e9 f4 fe ff ff <0f> 0b b8 ed ff ff ff e9 14 ff ff ff e8 12 4f 86 00 66 90 66 66 66
[  139.943872] RSP: 0018:ffffc9000006bdb0 EFLAGS: 00010046
[  139.943872] RAX: 0000000000000000 RBX: ffffc9000014fe00 RCX: 0000000000000000
[  139.943872] RDX: ffff88803fc00000 RSI: 0000000000016128 RDI: 0000000000000000
[  139.943872] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
[  139.943872] R10: ffffffff826174a0 R11: ffffc9000006bcb4 R12: 0000000000016120
[  139.943872] R13: 0000000000016120 R14: 0000000000016128 R15: 0000000000000000
[  139.943872] FS:  0000000000000000(0000) GS:ffff88803fc00000(0000) knlGS:0000000000000000
[  139.943872] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[  139.943872] CR2: 00007f704be8b000 CR3: 000000003901a004 CR4: 00000000000606f0
[  139.943872] Kernel panic - not syncing: Fatal exception
[  139.943872] Shutting down cpus with NMI
[  143.927559] Kernel Offset: disabled
root@ovs104>



From xen-devel-bounces@lists.xenproject.org Wed Jul 15 21:19:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jul 2020 21:19:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvonm-0003wV-2M; Wed, 15 Jul 2020 21:18:54 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=IaIB=A2=oracle.com=boris.ostrovsky@srs-us1.protection.inumbo.net>)
 id 1jvonl-0003wQ-Lw
 for xen-devel@lists.xenproject.org; Wed, 15 Jul 2020 21:18:53 +0000
X-Inumbo-ID: c854b27e-c6e0-11ea-945a-12813bfff9fa
Received: from aserp2130.oracle.com (unknown [141.146.126.79])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c854b27e-c6e0-11ea-945a-12813bfff9fa;
 Wed, 15 Jul 2020 21:18:52 +0000 (UTC)
Received: from pps.filterd (aserp2130.oracle.com [127.0.0.1])
 by aserp2130.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 06FL8QHE006742;
 Wed, 15 Jul 2020 21:18:27 GMT
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com;
 h=subject : to : cc :
 references : from : message-id : date : mime-version : in-reply-to :
 content-type : content-transfer-encoding; s=corp-2020-01-29;
 bh=zDExtjrwlYBSKB/0QpKhHxZClcZ7lvE9kA+uBbq6Hnc=;
 b=HLwVRZq7VXZQTeFp9Hr52hQ++nIsXdzuR0+KkjceuzLc38AYSzxt89+nmcY4Zxb76rea
 23RIVuKtIsZ+Bu9EOI3VVle5a57J6+fbrFZ4YT7xFo1KjZWthR+NJtUCJRlhZSVBc0XZ
 Q+Ah/Fzws952Boh5nkRCT0oa66P43PenkoUES3MtsrLuTMZ/oFhxqdVZvC/V1r/ZLOlJ
 jl8jC0swD8b2BmZRe7ugv5aGocHfrYBGLZbcIG6E/e8RoRCr0+DebRDRwO5WzEHyjTsa
 6pQ+Dgh2Rd8GHAdyX/1MBSHUgmFND4ZzDYub89a4AAveZiPA+0xS0MwzgMJ7UdWl7t4t 0A== 
Received: from aserp3030.oracle.com (aserp3030.oracle.com [141.146.126.71])
 by aserp2130.oracle.com with ESMTP id 327s65m4me-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=FAIL);
 Wed, 15 Jul 2020 21:18:27 +0000
Received: from pps.filterd (aserp3030.oracle.com [127.0.0.1])
 by aserp3030.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 06FL7eTR101530;
 Wed, 15 Jul 2020 21:18:26 GMT
Received: from userv0122.oracle.com (userv0122.oracle.com [156.151.31.75])
 by aserp3030.oracle.com with ESMTP id 327q0s1562-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Wed, 15 Jul 2020 21:18:26 +0000
Received: from abhmp0019.oracle.com (abhmp0019.oracle.com [141.146.116.25])
 by userv0122.oracle.com (8.14.4/8.14.4) with ESMTP id 06FLILCr010868;
 Wed, 15 Jul 2020 21:18:21 GMT
Received: from [10.39.217.130] (/10.39.217.130)
 by default (Oracle Beehive Gateway v4.0)
 with ESMTP ; Wed, 15 Jul 2020 14:18:20 -0700
Subject: Re: [PATCH v2 01/11] xen/manage: keep track of the on-going suspend
 mode
To: Anchal Agarwal <anchalag@amazon.com>
References: <cover.1593665947.git.anchalag@amazon.com>
 <20200702182136.GA3511@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
 <50298859-0d0e-6eb0-029b-30df2a4ecd63@oracle.com>
 <20200715204943.GB17938@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Autocrypt: addr=boris.ostrovsky@oracle.com; keydata=
 xsFNBFH8CgsBEAC0KiOi9siOvlXatK2xX99e/J3OvApoYWjieVQ9232Eb7GzCWrItCzP8FUV
 PQg8rMsSd0OzIvvjbEAvaWLlbs8wa3MtVLysHY/DfqRK9Zvr/RgrsYC6ukOB7igy2PGqZd+M
 MDnSmVzik0sPvB6xPV7QyFsykEgpnHbvdZAUy/vyys8xgT0PVYR5hyvhyf6VIfGuvqIsvJw5
 C8+P71CHI+U/IhsKrLrsiYHpAhQkw+Zvyeml6XSi5w4LXDbF+3oholKYCkPwxmGdK8MUIdkM
 d7iYdKqiP4W6FKQou/lC3jvOceGupEoDV9botSWEIIlKdtm6C4GfL45RD8V4B9iy24JHPlom
 woVWc0xBZboQguhauQqrBFooHO3roEeM1pxXjLUbDtH4t3SAI3gt4dpSyT3EvzhyNQVVIxj2
 FXnIChrYxR6S0ijSqUKO0cAduenhBrpYbz9qFcB/GyxD+ZWY7OgQKHUZMWapx5bHGQ8bUZz2
 SfjZwK+GETGhfkvNMf6zXbZkDq4kKB/ywaKvVPodS1Poa44+B9sxbUp1jMfFtlOJ3AYB0WDS
 Op3d7F2ry20CIf1Ifh0nIxkQPkTX7aX5rI92oZeu5u038dHUu/dO2EcuCjl1eDMGm5PLHDSP
 0QUw5xzk1Y8MG1JQ56PtqReO33inBXG63yTIikJmUXFTw6lLJwARAQABzTNCb3JpcyBPc3Ry
 b3Zza3kgKFdvcmspIDxib3Jpcy5vc3Ryb3Zza3lAb3JhY2xlLmNvbT7CwXgEEwECACIFAlH8
 CgsCGwMGCwkIBwMCBhUIAgkKCwQWAgMBAh4BAheAAAoJEIredpCGysGyasEP/j5xApopUf4g
 9Fl3UxZuBx+oduuw3JHqgbGZ2siA3EA4bKwtKq8eT7ekpApn4c0HA8TWTDtgZtLSV5IdH+9z
 JimBDrhLkDI3Zsx2CafL4pMJvpUavhc5mEU8myp4dWCuIylHiWG65agvUeFZYK4P33fGqoaS
 VGx3tsQIAr7MsQxilMfRiTEoYH0WWthhE0YVQzV6kx4wj4yLGYPPBtFqnrapKKC8yFTpgjaK
 jImqWhU9CSUAXdNEs/oKVR1XlkDpMCFDl88vKAuJwugnixjbPFTVPyoC7+4Bm/FnL3iwlJVE
 qIGQRspt09r+datFzPqSbp5Fo/9m4JSvgtPp2X2+gIGgLPWp2ft1NXHHVWP19sPgEsEJXSr9
 tskM8ScxEkqAUuDs6+x/ISX8wa5Pvmo65drN+JWA8EqKOHQG6LUsUdJolFM2i4Z0k40BnFU/
 kjTARjrXW94LwokVy4x+ZYgImrnKWeKac6fMfMwH2aKpCQLlVxdO4qvJkv92SzZz4538az1T
 m+3ekJAimou89cXwXHCFb5WqJcyjDfdQF857vTn1z4qu7udYCuuV/4xDEhslUq1+GcNDjAhB
 nNYPzD+SvhWEsrjuXv+fDONdJtmLUpKs4Jtak3smGGhZsqpcNv8nQzUGDQZjuCSmDqW8vn2o
 hWwveNeRTkxh+2x1Qb3GT46uzsFNBFH8CgsBEADGC/yx5ctcLQlB9hbq7KNqCDyZNoYu1HAB
 Hal3MuxPfoGKObEktawQPQaSTB5vNlDxKihezLnlT/PKjcXC2R1OjSDinlu5XNGc6mnky03q
 yymUPyiMtWhBBftezTRxWRslPaFWlg/h/Y1iDuOcklhpr7K1h1jRPCrf1yIoxbIpDbffnuyz
 kuto4AahRvBU4Js4sU7f/btU+h+e0AcLVzIhTVPIz7PM+Gk2LNzZ3/on4dnEc/qd+ZZFlOQ4
 KDN/hPqlwA/YJsKzAPX51L6Vv344pqTm6Z0f9M7YALB/11FO2nBB7zw7HAUYqJeHutCwxm7i
 BDNt0g9fhviNcJzagqJ1R7aPjtjBoYvKkbwNu5sWDpQ4idnsnck4YT6ctzN4I+6lfkU8zMzC
 gM2R4qqUXmxFIS4Bee+gnJi0Pc3KcBYBZsDK44FtM//5Cp9DrxRQOh19kNHBlxkmEb8kL/pw
 XIDcEq8MXzPBbxwHKJ3QRWRe5jPNpf8HCjnZz0XyJV0/4M1JvOua7IZftOttQ6KnM4m6WNIZ
 2ydg7dBhDa6iv1oKdL7wdp/rCulVWn8R7+3cRK95SnWiJ0qKDlMbIN8oGMhHdin8cSRYdmHK
 kTnvSGJNlkis5a+048o0C6jI3LozQYD/W9wq7MvgChgVQw1iEOB4u/3FXDEGulRVko6xCBU4
 SQARAQABwsFfBBgBAgAJBQJR/AoLAhsMAAoJEIredpCGysGyfvMQAIywR6jTqix6/fL0Ip8G
 jpt3uk//QNxGJE3ZkUNLX6N786vnEJvc1beCu6EwqD1ezG9fJKMl7F3SEgpYaiKEcHfoKGdh
 30B3Hsq44vOoxR6zxw2B/giADjhmWTP5tWQ9548N4VhIZMYQMQCkdqaueSL+8asp8tBNP+TJ
 PAIIANYvJaD8xA7sYUXGTzOXDh2THWSvmEWWmzok8er/u6ZKdS1YmZkUy8cfzrll/9hiGCTj
 u3qcaOM6i/m4hqtvsI1cOORMVwjJF4+IkC5ZBoeRs/xW5zIBdSUoC8L+OCyj5JETWTt40+lu
 qoqAF/AEGsNZTrwHJYu9rbHH260C0KYCNqmxDdcROUqIzJdzDKOrDmebkEVnxVeLJBIhYZUd
 t3Iq9hdjpU50TA6sQ3mZxzBdfRgg+vaj2DsJqI5Xla9QGKD+xNT6v14cZuIMZzO7w0DoojM4
 ByrabFsOQxGvE0w9Dch2BDSI2Xyk1zjPKxG1VNBQVx3flH37QDWpL2zlJikW29Ws86PHdthh
 Fm5PY8YtX576DchSP6qJC57/eAAe/9ztZdVAdesQwGb9hZHJc75B+VNm4xrh/PJO6c1THqdQ
 19WVJ+7rDx3PhVncGlbAOiiiE3NOFPJ1OQYxPKtpBUukAlOTnkKE6QcA4zckFepUkfmBV1wM
 Jg6OxFYd01z+a+oL
Message-ID: <0ca3c501-e69a-d2c9-a24c-f83afd4bdb8c@oracle.com>
Date: Wed, 15 Jul 2020 17:18:08 -0400
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20200715204943.GB17938@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
Content-Language: en-US
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9683
 signatures=668680
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 mlxscore=0
 malwarescore=0 spamscore=0
 mlxlogscore=999 bulkscore=0 adultscore=0 phishscore=0 suspectscore=0
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2006250000
 definitions=main-2007150160
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9683
 signatures=668680
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 adultscore=0
 malwarescore=0
 phishscore=0 mlxscore=0 priorityscore=1501 lowpriorityscore=0 spamscore=0
 clxscore=1015 bulkscore=0 mlxlogscore=999 impostorscore=0 suspectscore=0
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2006250000
 definitions=main-2007150160
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: eduval@amazon.com, len.brown@intel.com, peterz@infradead.org,
 benh@kernel.crashing.org, x86@kernel.org, linux-mm@kvack.org, pavel@ucw.cz,
 hpa@zytor.com, sstabellini@kernel.org, kamatam@amazon.com, mingo@redhat.com,
 xen-devel@lists.xenproject.org, sblbir@amazon.com, axboe@kernel.dk,
 konrad.wilk@oracle.com, bp@alien8.de, tglx@linutronix.de, jgross@suse.com,
 netdev@vger.kernel.org, linux-pm@vger.kernel.org, rjw@rjwysocki.net,
 linux-kernel@vger.kernel.org, vkuznets@redhat.com, davem@davemloft.net,
 dwmw@amazon.co.uk, roger.pau@citrix.com
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 7/15/20 4:49 PM, Anchal Agarwal wrote:
> On Mon, Jul 13, 2020 at 11:52:01AM -0400, Boris Ostrovsky wrote:
>>
>>
>>
>> On 7/2/20 2:21 PM, Anchal Agarwal wrote:
>>> +
>>> +bool xen_is_xen_suspend(void)
>>
>> Weren't you going to call this pv suspend? (And also --- is this suspend
>> or hibernation? Your commit messages and cover letter talk about fixing
>> hibernation).
>>
>>
> This is for hibernation for pvhvm/hvm/pv-on-hvm guests, as you may call it.
> The method is just there to check if "xen suspend" is in progress.
> I do not see "xen_suspend" differentiating between pv or hvm
> domains until later in the code; hence, I abstracted it to xen_is_xen_suspend.


I meant "pv suspend" in the sense that this is paravirtual suspend, not
suspend for paravirtual guests. Just like pv drivers are for both pv and
hvm guests.


And then --- should it be pv suspend or pv hibernation?



>>> +{
>>> +     return suspend_mode == XEN_SUSPEND;
>>> +}
>>> +
>>
>> +static int xen_setup_pm_notifier(void)
>> +{
>> +     if (!xen_hvm_domain())
>> +             return -ENODEV;
>>
>> I forgot --- what did we decide about non-x86 (i.e. ARM)?
> It would be great to support that; however, it's out of
> scope for this patch set.
> I'll be happy to discuss it separately.


I wasn't implying that this *should* work on ARM but rather whether this
will break ARM somehow (because xen_hvm_domain() is true there).



>>
>> And PVH dom0.
> That's another good use case to make it work with; however, I still
> think that should be tested/worked on separately, as the feature itself
> (PVH Dom0) is very new.


Same question here --- will this break PVH dom0?


-boris




From xen-devel-bounces@lists.xenproject.org Wed Jul 15 22:19:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jul 2020 22:19:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvpkK-0000UP-Pz; Wed, 15 Jul 2020 22:19:24 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=szr+=A2=gmail.com=christopher.w.clark@srs-us1.protection.inumbo.net>)
 id 1jvpkJ-0000UK-Qp
 for xen-devel@lists.xenproject.org; Wed, 15 Jul 2020 22:19:23 +0000
X-Inumbo-ID: 3beba410-c6e9-11ea-bb8b-bc764e2007e4
Received: from mail-oo1-xc43.google.com (unknown [2607:f8b0:4864:20::c43])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3beba410-c6e9-11ea-bb8b-bc764e2007e4;
 Wed, 15 Jul 2020 22:19:22 +0000 (UTC)
Received: by mail-oo1-xc43.google.com with SMTP id t12so787578ooc.10
 for <xen-devel@lists.xenproject.org>; Wed, 15 Jul 2020 15:19:22 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=mime-version:from:date:message-id:subject:to:cc
 :content-transfer-encoding;
 bh=qfx+yuKlbednJyVorfvp+HJSN+F2bTyRUM8zrY/s2k0=;
 b=vZ25fROQzOV1EB4ilKfBKLfASK4Nofrs2o4/eRG3m3Uu4rl8G5T1YfdoE7E72YHoFp
 8hs0RNxCIHzlmozAOllUEKrvkr+kfPwPEmb6zTVMNk0d1JQnY1dMr+b9WRgPB0+KuJlI
 z1rL08HJxu6nHsk/BdmO9TqmkZKCP3LUYijeysc0d+UJ3r8GNgVVPGBc3epWOlUAS2Pz
 g45m1OLh1LwPsxxTSWNQq+uisW39LeJLyY4LnqUuRVxUJmnLhP6Zb7pQ4lMs/pSZYvyU
 u8fF8du692HxGdFYFR7ZbZVuILDNx4jDQZTBo0mGruHhe6DzmiwPiZD7rU4N+XzipYfQ
 Ba6w==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:from:date:message-id:subject:to:cc
 :content-transfer-encoding;
 bh=qfx+yuKlbednJyVorfvp+HJSN+F2bTyRUM8zrY/s2k0=;
 b=oQeKWLWL/sbAdQ1G4JXIWubyFJbI1LgP6be/1c9F6+eh4ni9d/Qy/DuyFh+x62l6o9
 1HatB5feVHA47W62cspJMiKljCBPXy80IK3M5iB9l+9+z1s4/edpoMmwrF6N6F41SH8l
 L4e/m2vw/NfhAcsB9eQPJLyhEnRZX3Uu8XjFR2DPrJLhRQaHInhJLpHT+0EVse4UfDAf
 6ojmKYcqJPSNgQPZOlOqCSLcQ01Fp8l62T1MewYHHJJZK7Zs6Kzng5XjI+OJegb8gNnH
 KSJ2dBnzkRxJOjn4ebVXLxWeuafrFDNXvV8iJRxh3EiYkG60XGlYYSthXhBdoPOeWTBE
 Qe6g==
X-Gm-Message-State: AOAM533AHcAGaleYc/I7fZLcQYKRAvFiXKW4DtXDcOEzceocqhD/zp0l
 jVNewoO8fusVfpN/WWQBMu06C0GOM4CA5aZPsUz8K0yY
X-Google-Smtp-Source: ABdhPJzVZk4MFV3ldlIh5dRwFEalmr8n69fgDDy7Y8qquaBSjsALgHSJ3bUgCGZN/VuqbW5o+SMLgQ9wictm2u1fCDw=
X-Received: by 2002:a4a:9630:: with SMTP id q45mr1390129ooi.34.1594851561559; 
 Wed, 15 Jul 2020 15:19:21 -0700 (PDT)
MIME-Version: 1.0
From: Christopher Clark <christopher.w.clark@gmail.com>
Date: Wed, 15 Jul 2020 15:19:07 -0700
Message-ID: <CACMJ4GYLKR99Y-J9L-qAG75BeNL9URSEi3HfYjSrCOCohqsw-A@mail.gmail.com>
Subject: Design Sessions notes: Xen system boot: launching VMs (DomB mode of
 dom0less)
To: xen-devel <xen-devel@lists.xenproject.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Daniel Smith <dpsmith@apertussolutions.com>,
 Adam Schwalm <adam.schwalm@starlab.io>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

# Session Notes on Xen system boot: launching VMs (DomB mode of dom0less)

Sessions Host: Christopher Clark. Scribing: Daniel Smith & Christopher Clark.

The DomB-mode-for-dom0less topic was covered in two design session slots
at the Xen Design & Developer Summit 2020.

## Session 1: Xen system boot: launching VMs (DomB)

A talk presenting background on the project and a progress update on the
development work sponsored by Star Lab Corp. This talk is preparatory
material for the following second session.

> A presentation of progress towards building DomB: a new mode of
> starting Xen with guest workloads launched at host boot - including
> support for x86 platforms, system disaggregation and running without
> dom0, and architecture to support measurement of system launch.

Slides are available here:
https://static.sched.com/hosted_files/xen2020/91/DomB%20-%20Xen%20Design%20%26%20Developer%20Summit%202020.pdf

## Session 2: Next steps for Xen system boot: launching VMs (DomB)

A Design Session discussion, discussing forward direction and topics
identified during building the initial prototype work.

### Session seed notes, shared on-screen during the session:

* Basics: general structure:
    - bootloader loads domain materials into RAM (kernels, ramdisks)
    - some metadata, in binary form, describes the domains to be launched
    - hypervisor performs domain construction
        - PVH and PV supported
    - only one guest is unpaused by the hypervisor: domB
    - domB unpauses other domains when ready to do so
        -> allows measurement to be performed by domB
        -> allows configuration to be applied by domB
        -> allows domB to sequence startup of other domains, if necessary
    - domB permissions: no hardware access, limited privileges to do setup operations
    - hardware domain permissions: subset of the current dom0, no is_priv for control ops

* Needed for usability:
    - support for bringup of PV devices
        -> toolstack needs to be aware of Initial Domains as started and initiate the bringup of backends

* Questions:
    - is claiming the first multiboot module, and dynamically toggling dom0/domB mode, acceptable?
    - is there a tree + binary format outside of Xen that provides what we need?
    - is there momentum behind a technology elsewhere that Xen needs to support?
    - logic for building ACPI tables:
        - enable DomB to do this for other Initial Domains?
    - how best to implement "atomic handoff":
        - exit of DomB
        - continuation of the Initial Domains after their configuration by DomB
    - how best to surface the Launch Control Module contents to DomB?
        - ACPI tables? (PVH)
        - what about PV mode?

* Guidance:
    - how to bring up PV disk and network (etc.) for the Initial Domains?
        - A: the toolstack domain interrogates Xen, gets data on the Initial Domains, and then uses its own database to bring them up
            - means coordination between data in the toolstack and config in the LCM
    - guest kernel decompression
        - complex, and implementation is not shared with anything else
        - would prefer to do the decompress in a guest context rather than the hypervisor, and use a bootloader-supplied decompressor binary, outside the hypervisor

* To Research / Investigate:
    - from Stefano: "system device tree"
    - Implementing support for HVM-mode initial domains:
        - primary use case is "non-PV VMs that can have devices directly assigned"
            - so PVH with working PCI passthrough would suffice
            - but having the ability to launch HVM too would be nice
        - needs bringup of the device emulator, and Xen configured to connect it


### Comments and discussion during the session:

_Jason Andryuk, Q: does the hypervisor construct multiple domains or domB only?_
Christopher, A: the hypervisor constructs multiple domains

_Damien Thenot (dthenot), Q: Could DomB be used to explore hardware
and create domain drivers as needed?_
Christopher: yes it could; but we don't want domB to become a dom0 again

> post-session note: this is about wanting to avoid unbounded scope creep for a single domB:
> the domB structure will enable doing these functions in other independent initial domains
> that are also launched and run at host boot. DomB is unlikely to have permission to perform
> any domain creation by default, since it won't need it - it just applies configuration and
> unpause to the other domains that Xen builds at host boot.

_Bobby Eshleman, Q: does no hardware access imply no TPM access here?
Just thinking about the measurement capabilities of DomB._
Christopher: yes, but it is under discussion. A possibility is to put a
minimal TPM driver in Xen so that DomB can be measured before launch.
Roger Pau Monné: TPM is just assigned to dom0 (or the hw domain),
there's no special handling of it in Xen
Jason Andryuk: You can have the bootloader measure all the pieces into
the TPM before transitioning to Xen/domB, but those would be the
compressed artifacts.

> post-session note: enabling a strong full-system architecture for measured launch and
> virtual TPM support for domains, where the vTPM is rooted in the physical TPM, is
> important and a motivation behind the DomB architecture.

_Bertrand Marquis: Could you explain a bit more about the decompression? I
do not quite get why it is done in Xen._
Christopher: if the dom0 kernel is detected as compressed, Xen will
decompress it.
Andy Cooper: Xen needs to decompress the ELF header to get the ELF notes
to boot a PV domain.
Christopher: one thought is to do it in another VMCS context
Andy: yes, but that adds a lot of overhead

_Christopher: Is the proposed LCM detection a reasonable upstreamable approach?_
Andy: yes, it is acceptable
??: Arm uses device tree
- Christopher: isn't it fixed, to describe hardware?
Bertrand: Xen already has logic to extend the tree
Stefano: could domB use a small key/value device tree with LCM fields
and use the existing DTB parser in Arm Xen
Julien: don't use libfdt on untrusted device trees; it is not suitable for
the hypervisor

_Christopher: Is it foreign to use ACPI to expose LCM to guests on ARM?_
Bertrand, Stefano: ARM now has ACPI, so it's not really foreign; it is ok

_Topic: Getting device info to the toolstack after launch_
Jürgen Groß: xenstore stubdom is upstream/available;
xl/libxl is a separate issue.
The problem with dom0less is getting xenstored up before domU
starts trying to do xenstore/device setups.

< xenstore setup discussion >
  - basic conclusion is that it is a bit of a mess and needs to be cleaned up

_Nicolas Poirot, Q: if domB starts guests and quits, will there be no
management (reboot, shutdown) of the guests?_
Rich Persaud [stacktrust]: I think yes for the static partitioning use
case, which overlaps somewhat with dom0less.  If management is needed,
one of the started guests can be a privileged toolstack domain.

> post-session note: domB doesn't host the management, control or toolstack software that does that, and
> it does not have the control domain privileged permissions that are needed to do it.
> However, you can start a control domain at host boot, with the DomB architecture, and that will handle it.
> You describe the control domain you want in the Launch Control Module, provide a kernel and optional ramdisk
> to the bootloader, and then Xen will build it and DomB will assist with configuring and starting that domain.


_Rich Persaud [stacktrust] : For those new to domB, some material for
offline reading:_

* Dec 2019 design meeting in Cambridge: https://lists.gt.net/xen/devel/577800
* May 2020 domB design doc v1: https://lists.gt.net/xen/devel/586365
* TrenchBoot (DRTM):
https://github.com/TrenchBoot/documentation/tree/master/presentations
* OpenXT & boot integrity: https://openxt.org/ecosystems/
* PSEC 2019: https://www.platformsecuritysummit.com/2019/videos/
* PSEC 2018: https://www.platformsecuritysummit.com/2018/videos/

We can also schedule a separate conference call after Xen Summit.
You can email rp@stacktrust.org if you're interested in being included
in a future domB conference call to review v2 of the design doc.

> post-session note: DomB is being built with intent to support the
> 'Hardened Access Terminals' (HAT) architecture, also presented at the
> summit, with slides available here:

https://static.sched.com/hosted_files/xen2020/46/Reliable%20Platform%20Security_%20Xen%20and%20the%20Fidelis%20Platform%20for%20Hardened%20Access%20Terminals%20%28HAT%29.pdf

### Observations

> - general tone was supportive from many sides
> - device tree needs looking at, and if so, will need a security-capable
>   parser (libfdt is specifically not suitable for it)
> - xenstore is a pain point (yet again)
> - we can't ditch the existing kernel decompressor since PV needs to read
>   the ELF notes, which need decompressing to access
> - TPM access needs explaining in our documentation

A big thanks to the conference attendees for the interest expressed in
the two sessions that enabled both of these to be scheduled, and for the
positive and active engagement in the discussions.


From xen-devel-bounces@lists.xenproject.org Wed Jul 15 23:25:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 15 Jul 2020 23:25:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvqmC-0006Bj-SG; Wed, 15 Jul 2020 23:25:24 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tzA3=A2=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jvqmA-0006Be-Rw
 for xen-devel@lists.xenproject.org; Wed, 15 Jul 2020 23:25:22 +0000
X-Inumbo-ID: 72bdcf32-c6f2-11ea-9466-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 72bdcf32-c6f2-11ea-9466-12813bfff9fa;
 Wed, 15 Jul 2020 23:25:19 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=PL+AIi2bcZVZ3kgCPkFNm8poDiR3eOPvPltzWcuy/A4=; b=UP4Het/UnZ4T0XPnOsKbQXgUB
 802HSP/K4Od2xJ5SfawZowozX7P+4w4yvG9oR5qFFU/eG1lO5WMb0j8UViZ03+QPSXOPLlejTIkjo
 uBcMZiz4DBW0JbHW/vmyM7uxhmanisXUX/iPKShiaPVv9xgOdfdPzRoK9JNXSdByZJbkI=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jvqm7-0002XC-Ct; Wed, 15 Jul 2020 23:25:19 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jvqm7-0007o8-1L; Wed, 15 Jul 2020 23:25:19 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jvqm7-0007wk-0d; Wed, 15 Jul 2020 23:25:19 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151908-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 151908: regressions - FAIL
X-Osstest-Failures: linux-linus:test-arm64-arm64-libvirt-xsm:guest-start/debian.repeat:fail:regression
 linux-linus:build-i386-pvops:kernel-build:fail:regression
 linux-linus:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:allowable
 linux-linus:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-examine:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-pair:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: linux=e9919e11e219eaa5e8041b7b1a196839143e9125
X-Osstest-Versions-That: linux=1b5044021070efa3259f3e9548dc35d1eb6aa844
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 15 Jul 2020 23:25:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151908 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151908/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-libvirt-xsm 16 guest-start/debian.repeat fail REGR. vs. 151214
 build-i386-pvops              6 kernel-build             fail REGR. vs. 151214

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds    16 guest-start/debian.repeat fail REGR. vs. 151214

Tests which did not succeed, but are not blocking:
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-xl-qemut-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-examine       1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 151214
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 151214
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 151214
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 151214
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 151214
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 151214
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                e9919e11e219eaa5e8041b7b1a196839143e9125
baseline version:
 linux                1b5044021070efa3259f3e9548dc35d1eb6aa844

Last test of basis   151214  2020-06-18 02:27:46 Z   27 days
Failing since        151236  2020-06-19 19:10:35 Z   26 days   41 attempts
Testing same since   151908  2020-07-15 03:47:00 Z    0 days    1 attempts

------------------------------------------------------------
725 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             fail    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         blocked 
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 38304 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Jul 16 04:10:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jul 2020 04:10:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvvE6-0006fb-PV; Thu, 16 Jul 2020 04:10:30 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=sYN5=A3=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jvvE5-0006fW-S0
 for xen-devel@lists.xenproject.org; Thu, 16 Jul 2020 04:10:29 +0000
X-Inumbo-ID: 46d23b56-c71a-11ea-947a-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 46d23b56-c71a-11ea-947a-12813bfff9fa;
 Thu, 16 Jul 2020 04:10:26 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=oiVkyIg9DDmEw24nd1Oj91i1k4ZrurUWPBrKX9z65TA=; b=aExr7oF72beIKs4qe60fYKIKS
 +jSw9vatmFwbhhzvFYqyTuXXiFxT8XLi7uwU39BPNRKHYy9LTFwJzTHb2pS+DyWjcemxROTjfzsC/
 OUvTI/os86JuOZHjXHCjOEx4FiejA4Tl2s3GyrUYT/bSXHvHnNFmdAqsy1I6jtPkFUkSs=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jvvE1-0002Mb-EZ; Thu, 16 Jul 2020 04:10:25 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jvvE0-0001me-MR; Thu, 16 Jul 2020 04:10:24 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jvvE0-00079L-LX; Thu, 16 Jul 2020 04:10:24 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151914-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 151914: regressions - FAIL
X-Osstest-Failures: qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:redhat-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:redhat-install:fail:regression
 qemu-mainline:test-amd64-i386-freebsd10-i386:guest-start:fail:regression
 qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-start:fail:regression
 qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:windows-install:fail:regression
 qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: qemuu=c920fdba39480989cb5f1af3cc63acccef021b54
X-Osstest-Versions-That: qemuu=9e3903136d9acde2fb2dd9e967ba928050a6cb4a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 16 Jul 2020 04:10:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151914 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151914/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemuu-ovmf-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-qemuu-nested-intel 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-ovmf-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-qemuu-rhel6hvm-amd 10 redhat-install     fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-debianhvm-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-qemuu-rhel6hvm-intel 10 redhat-install   fail REGR. vs. 151065
 test-amd64-i386-freebsd10-i386 11 guest-start            fail REGR. vs. 151065
 test-amd64-i386-freebsd10-amd64 11 guest-start           fail REGR. vs. 151065
 test-amd64-amd64-qemuu-nested-amd 10 debian-hvm-install  fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-win7-amd64 10 windows-install  fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-win7-amd64 10 windows-install   fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-ws16-amd64 10 windows-install  fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-ws16-amd64 10 windows-install   fail REGR. vs. 151065

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 151065
 test-armhf-armhf-xl-rtds     16 guest-start/debian.repeat    fail  like 151065
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 151065
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                c920fdba39480989cb5f1af3cc63acccef021b54
baseline version:
 qemuu                9e3903136d9acde2fb2dd9e967ba928050a6cb4a

Last test of basis   151065  2020-06-12 22:27:51 Z   33 days
Failing since        151101  2020-06-14 08:32:51 Z   31 days   43 attempts
Testing same since   151914  2020-07-15 07:49:43 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Ahmed Karaman <ahmedkhaledkaraman@gmail.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.m.mail@gmail.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Boettcher <alexander.boettcher@genode-labs.com>
  Alexander Bulekov <alxndr@bu.edu>
  Alexandre Mergnat <amergnat@baylibre.com>
  Alexey Krasikov <alex-krasikov@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Allan Peramaki <aperamak@pp1.inet.fi>
  Andrew Jones <drjones@redhat.com>
  Ani Sinha <ani.sinha@nutanix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Ard Biesheuvel <ardb@kernel.org>
  Artyom Tarasenko <atar4qemu@gmail.com>
  Atish Patra <atish.patra@wdc.com>
  Aurelien Jarno <aurelien@aurel32.net>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Basil Salman <basil@daynix.com>
  Basil Salman <bsalman@redhat.com>
  Beata Michalska <beata.michalska@linaro.org>
  Bin Meng <bin.meng@windriver.com>
  Bin Meng <bmeng.cn@gmail.com>
  Cameron Esfahani <dirty@apple.com>
  Catherine A. Frederick <chocola@animebitch.es>
  Cathy Zhang <cathy.zhang@intel.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christophe de Dinechin <dinechin@redhat.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Cindy Lu <lulu@redhat.com>
  Claudio Fontana <cfontana@suse.de>
  Cleber Rosa <crosa@redhat.com>
  Colin Xu <colin.xu@intel.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniele Buono <dbuono@linux.vnet.ibm.com>
  David Carlier <devnexen@gmail.com>
  David Edmondson <david.edmondson@oracle.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Denis V. Lunev <den@openvz.org>
  Derek Su <dereksu@qnap.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Ed Robbins <E.J.C.Robbins@kent.ac.uk>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Emilio G. Cota <cota@braap.org>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Evgeny Yakovlev <eyakovlev@virtuozzo.com>
  fangying <fangying1@huawei.com>
  Farhan Ali <alifm@linux.ibm.com>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frank Chang <frank.chang@sifive.com>
  Geoffrey McRae <geoff@hostfission.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Giuseppe Musacchio <thatlemon@gmail.com>
  Greg Kurz <groug@kaod.org>
  Guenter Roeck <linux@roeck-us.net>
  Gustavo Romero <gromero@linux.ibm.com>
  Halil Pasic <pasic@linux.ibm.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Howard Spoelstra <hsp.cat7@gmail.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Ian Jiang <ianjiang.ict@gmail.com>
  Igor Mammedov <imammedo@redhat.com>
  Jan Kiszka <jan.kiszka@siemens.com>
  Janne Grunau <j@jannau.net>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason Wang <jasowang@redhat.com>
  Jay Zhou <jianjay.zhou@huawei.com>
  Jean-Christophe Dubois <jcd@tribudubois.net>
  Jessica Clarke <jrtc27@jrtc27.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jingqi Liu <jingqi.liu@intel.com>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Joseph Myers <joseph@codesourcery.com>
  Josh Kunz <jkz@google.com>
  Juan Quintela <quintela@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Keqian Zhu <zhukeqian1@huawei.com>
  Kevin Wolf <kwolf@redhat.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Leif Lindholm <leif@nuviainc.com>
  Leonid Bloch <lb.workbox@gmail.com>
  Leonid Bloch <lbloch@janustech.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@gmail.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  lichun <lichun@ruijie.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  Like Xu <like.xu@linux.intel.com>
  Lingfeng Yang <lfy@google.com>
  Lingshan zhu <lingshan.zhu@intel.com>
  Liran Alon <liran.alon@oracle.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Lukas Straub <lukasstraub2@web.de>
  Luwei Kang <luwei.kang@intel.com>
  Maciej S. Szmigiero <maciej.szmigiero@oracle.com>
  Magnus Damm <magnus.damm@gmail.com>
  Mao Zhongyi <maozhongyi@cmss.chinamobile.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marcelo Tosatti <mtosatti@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Mario Smarduch <msmarduch@digitalocean.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Masahiro Yamada <masahiroy@kernel.org>
  Matus Kysel <mkysel@tachyum.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Maxime Coquelin <maxime.coquelin@redhat.com>
  Menno Lageman <menno.lageman@oracle.com>
  Michael Rolnik <mrolnik@gmail.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michele Denber <denber@mindspring.com>
  Nir Soffer <nsoffer@redhat.com>
  Olaf Hering <olaf@aepfle.de>
  Pan Nengyuan <pannengyuan@huawei.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Durrant <pdurrant@amazon.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Radoslaw Biernacki <rad@semihalf.com>
  Raphael Norwitz <raphael.norwitz@nutanix.com>
  Richard Henderson <richard.henderson@linaro.org>
  Richard W.M. Jones <rjones@redhat.com>
  Riku Voipio <riku.voipio@linaro.org>
  Robert Foley <robert.foley@linaro.org>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Roman Kagan <rkagan@virtuozzo.com>
  Roman Kagan <rvkagan@yandex-team.ru>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Sarah Harris <S.E.Harris@kent.ac.uk>
  Sebastian Rasmussen <sebras@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Tao Xu <tao3.xu@intel.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tiwei Bie <tiwei.bie@intel.com>
  Tong Ho <tong.ho@xilinx.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  WangBowen <bowen.wang@intel.com>
  Wei Huang <wei.huang2@amd.com>
  Wei Wang <wei.w.wang@intel.com>
  Wentong Wu <wentong.wu@intel.com>
  Xie Yongji <xieyongji@bytedance.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Ying Fang <fangying1@huawei.com>
  Yoshinori Sato <ysato@users.sourceforge.jp>
  Yuri Benditovich <yuri.benditovich@daynix.com>
  Zhang Chen <chen.zhang@intel.com>
  Zheng Chuan <zhengchuan@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 27597 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Jul 16 04:18:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jul 2020 04:18:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvvLW-0006vY-NA; Thu, 16 Jul 2020 04:18:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=sYN5=A3=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jvvLV-0006vE-He
 for xen-devel@lists.xenproject.org; Thu, 16 Jul 2020 04:18:09 +0000
X-Inumbo-ID: 56dc8c4e-c71b-11ea-bb8b-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 56dc8c4e-c71b-11ea-bb8b-bc764e2007e4;
 Thu, 16 Jul 2020 04:18:02 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=8jsflyg2AVuJ7fAPf/PuiXp/45sWnWsyON0F8CNay3I=; b=Pz5Hmnb1+Yb1TP/qu3RJfKkU5
 KG78u9UgrNHtxh++wj1T4uTWXgbcFU99OPldBjfh95yxnnsSwhFh35ZFTDtzprUGbznjFQnHfhA9V
 xzfDGmF2RYWuZicN5IbH8tENPwzus16Dhf/zq8sjHvU51kit3eMct7Sd9aO1rJC3BoI2g=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jvvLN-0002g0-KD; Thu, 16 Jul 2020 04:18:01 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jvvLN-0002HM-5v; Thu, 16 Jul 2020 04:18:01 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jvvLN-0002gh-5M; Thu, 16 Jul 2020 04:18:01 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151922-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.14-testing test] 151922: tolerable FAIL - PUSHED
X-Osstest-Failures: xen-4.14-testing:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:allowable
 xen-4.14-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 xen-4.14-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-4.14-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 xen-4.14-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-4.14-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-4.14-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-4.14-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-4.14-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
X-Osstest-Versions-This: xen=d820391d2fba67566c52d5e0a047e70483265b6e
X-Osstest-Versions-That: xen=ce3c4493e4e6c94495ddd8538e801a35980bff0d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 16 Jul 2020 04:18:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151922 xen-4.14-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151922/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     18 guest-localmigrate/x10   fail REGR. vs. 151899

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop             fail never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop              fail never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop             fail never pass

version targeted for testing:
 xen                  d820391d2fba67566c52d5e0a047e70483265b6e
baseline version:
 xen                  ce3c4493e4e6c94495ddd8538e801a35980bff0d

Last test of basis   151899  2020-07-14 18:07:15 Z    1 days
Testing same since   151922  2020-07-15 14:10:44 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   ce3c4493e4..d820391d2f  d820391d2fba67566c52d5e0a047e70483265b6e -> stable-4.14


From xen-devel-bounces@lists.xenproject.org Thu Jul 16 04:25:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jul 2020 04:25:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvvS6-0007nY-NI; Thu, 16 Jul 2020 04:24:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=sYN5=A3=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jvvS5-0007nE-Fl
 for xen-devel@lists.xenproject.org; Thu, 16 Jul 2020 04:24:57 +0000
X-Inumbo-ID: 4ae84940-c71c-11ea-8496-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4ae84940-c71c-11ea-8496-bc764e2007e4;
 Thu, 16 Jul 2020 04:24:51 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=AmuuirbdmCY47tLsTP9PXQs6SUOTQtK04uXmhHmYGS0=; b=YteSxArCe9vU8RbgFN5M2KHU/
 ToRvvO0pxuo0gXsdyA3bPkX1XMIXNuBriD4jZ+cD0XGdf38fucb1xJJPY4/gpfUHQKINZ1pYz00Mm
 NO3AR1KDrzjHz7IXRnGV/3VlQGzEeaDg22TpUFtlV6ZdhHodpy4BlWiz5G4/mgx4YnFqQ=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jvvRz-0002nr-IY; Thu, 16 Jul 2020 04:24:51 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jvvRz-0002jo-1p; Thu, 16 Jul 2020 04:24:51 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jvvRz-0008B6-1E; Thu, 16 Jul 2020 04:24:51 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151923-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 151923: all pass - PUSHED
X-Osstest-Versions-This: ovmf=e77966b341b993291ab2d95718b88a6a0d703d0c
X-Osstest-Versions-That: ovmf=c7195b9ec3c5f8f40119c96ac4a0ab1e8cebe9dc
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 16 Jul 2020 04:24:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151923 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151923/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 e77966b341b993291ab2d95718b88a6a0d703d0c
baseline version:
 ovmf                 c7195b9ec3c5f8f40119c96ac4a0ab1e8cebe9dc

Last test of basis   151907  2020-07-15 03:30:35 Z    1 days
Testing same since   151923  2020-07-15 16:10:49 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Eric Dong <eric.dong@intel.com>
  Laszlo Ersek <lersek@redhat.com>
  Michael D Kinney <michael.d.kinney@intel.com>
  Oleksiy Yakovlev <oleksiyy@ami.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   c7195b9ec3..e77966b341  e77966b341b993291ab2d95718b88a6a0d703d0c -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Thu Jul 16 06:33:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jul 2020 06:33:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvxRh-00024a-QG; Thu, 16 Jul 2020 06:32:41 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=1QOp=A3=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jvxRg-00024V-B5
 for xen-devel@lists.xenproject.org; Thu, 16 Jul 2020 06:32:40 +0000
X-Inumbo-ID: 241c4d2c-c72e-11ea-9485-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 241c4d2c-c72e-11ea-9485-12813bfff9fa;
 Thu, 16 Jul 2020 06:32:38 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id F34B2B029;
 Thu, 16 Jul 2020 06:32:40 +0000 (UTC)
Subject: Re: OpenSUSE and Xen
To: peter.jac@protonmail.com, xen-users@lists.xenproject.org,
 xen-devel <xen-devel@lists.xenproject.org>
References: <jy5CG4DqGrBAir35SWF8DPAZHsHJPYzw4pdQWS8fMylQgEe3hrFhfJG2lZmgvXorh1TayKgqOgrEX8413UGzmF-RU4i_V479poF1OrnKNw8=@protonmail.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <a6b4386f-6722-486d-9933-3cb32906b474@suse.com>
Date: Thu, 16 Jul 2020 08:32:36 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.9.0
MIME-Version: 1.0
In-Reply-To: <jy5CG4DqGrBAir35SWF8DPAZHsHJPYzw4pdQWS8fMylQgEe3hrFhfJG2lZmgvXorh1TayKgqOgrEX8413UGzmF-RU4i_V479poF1OrnKNw8=@protonmail.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 15.07.20 19:16, peter.jac@protonmail.com wrote:
> 
> Hello,
> I have a question from all Xen Project developers and users.

What? Not from me. ;-)

> Is OpenSUSE supporting Xen Project?

Why do you ask this here, instead of an OpenSUSE list/forum/whatever?

> Did anyone install OpenSUSE?

I know several people who have done so, yes.

> Why OpenSUSE not have any option about 
> installing Xen during installation but have an option about KVM?

Ask the OpenSUSE community?

BTW, I'm able to select "Xen Virtualization Host and Tools" from the
"Software" menu when installing openSUSE 15.2. It is even above the
"KVM Virtualization Host and Tools" option.

> Why Xen package doesn't included?

They are.


Juergen


From xen-devel-bounces@lists.xenproject.org Thu Jul 16 07:14:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jul 2020 07:14:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvy5g-0005fx-GM; Thu, 16 Jul 2020 07:14:00 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=VjZN=A3=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1jvy5e-0005fs-TU
 for xen-devel@lists.xenproject.org; Thu, 16 Jul 2020 07:13:59 +0000
X-Inumbo-ID: ea522dd6-c733-11ea-bca7-bc764e2007e4
Received: from mail-wm1-x343.google.com (unknown [2a00:1450:4864:20::343])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ea522dd6-c733-11ea-bca7-bc764e2007e4;
 Thu, 16 Jul 2020 07:13:58 +0000 (UTC)
Received: by mail-wm1-x343.google.com with SMTP id f139so10150246wmf.5
 for <xen-devel@lists.xenproject.org>; Thu, 16 Jul 2020 00:13:58 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
 :mime-version:content-transfer-encoding:content-language
 :thread-index; bh=dEiJpr7zg7TDzM405ehbGQSkS7c2TKgEhMD8ZWgg1Xo=;
 b=hxtuhfsgnJKRloI2z4b4Rzj/9LxrQXkw0VpnwH1zRwL+f0psgQftH0hWiTuY4mmGgP
 VB1+2egDQmZMBgnllDz/v9qTP8q0zf+JgQpzxBowA8TeSoTn6gPeKyBU+zYVfgBwWtt7
 CEIEb7ljhE0jdlaKHECA32vddg4GWT294C4QjRam8A/dQafJ9B9M0ra0hpE7b2wI9MS1
 Z7uWFa/hvDUtH4KmWpkVhSToOteMfzBFX3x04jz3LbU6OiRZGqqsV7n5l5RF00usZXo6
 yybUMRC0oT4Ws+o48uQ9CJtHKjr9aXr9N8xspL1YOGK2ZkzMjffOzIVS2+gyn5NkquOV
 pNrw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
 :subject:date:message-id:mime-version:content-transfer-encoding
 :content-language:thread-index;
 bh=dEiJpr7zg7TDzM405ehbGQSkS7c2TKgEhMD8ZWgg1Xo=;
 b=Syho+Rc/T2UcTUWvY/+XTBoB673jebe5d1bkqg6WGpCxhN+iFa7/pQT7wkHC83c8pp
 dn5GWm92UZToo0sxWEekxzRMLqe7dLFhqP5J81X299oNYwyZ+twn2cDh5xNSHK6QdjiR
 ztzPtcAaTHBUeQOk2Pk5mz9EnBGjuvULHyYp43Q0e7ixYKVkG4WrGVTjfmznuka5mo7A
 u22sKxqPse+wuregZzsnFb9gEVx43VCKBreoNnEJiCdmsCbBbQRkH46QN/tHiikgapLT
 bXPkk8JElt0v7nIdLQSBDtEjV7YYQ1A4JiO4ZGMSbqG9pnZzOmb1OnnmAOcTXHiUqpwa
 m6mg==
X-Gm-Message-State: AOAM531/i6+3CE8iewnWRIZkAb5pnBrrGt+WJPuKXSN7mttkt8SdHWtw
 3run2utWItBi4Btdoxyx2u4=
X-Google-Smtp-Source: ABdhPJxo4Rz9/UkYjPcqrQ4s++x6iRZZnU2rCw5Vot9RAuAW4aWaa3P3egs9IqGBWenZBs3NUsa9IQ==
X-Received: by 2002:a1c:59c2:: with SMTP id n185mr3154727wmb.104.1594883637255; 
 Thu, 16 Jul 2020 00:13:57 -0700 (PDT)
Received: from CBGR90WXYV0 (54-240-197-235.amazon.com. [54.240.197.235])
 by smtp.gmail.com with ESMTPSA id v9sm7678998wri.3.2020.07.16.00.13.55
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Thu, 16 Jul 2020 00:13:56 -0700 (PDT)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
To: "'Brian Marcotte'" <marcotte@panix.com>
References: <AC8105C4-6DAD-4AB0-AC3F-B4CDD151CDEB@ispire.me>
 <763e69df40604c51bb72477c706ec24b@EX13D32EUC003.ant.amazon.com>
 <20200715191705.GA20643@panix.com>
In-Reply-To: <20200715191705.GA20643@panix.com>
Subject: RE: [EXTERNAL] [Xen-devel] XEN Qdisk Ceph rbd support broken?
Date: Thu, 16 Jul 2020 08:13:55 +0100
Message-ID: <000b01d65b40$ab7fead0$027fc070$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="US-ASCII"
Content-Transfer-Encoding: 7bit
X-Mailer: Microsoft Outlook 16.0
Content-Language: en-gb
Thread-Index: AQHWWtzrZblaFdJFoUym9DWZsRt33akJySvg
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Reply-To: paul@xen.org
Cc: 'Jules' <jules@ispire.me>, xen-devel@lists.xenproject.org,
 oleksandr_grytsov@epam.com, wl@xen.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> -----Original Message-----
> From: Brian Marcotte <marcotte@panix.com>
> Sent: 15 July 2020 20:17
> To: Durrant, Paul <pdurrant@amazon.co.uk>
> Cc: Jules <jules@ispire.me>; xen-devel@lists.xenproject.org; oleksandr_grytsov@epam.com; wl@xen.org
> Subject: RE: [EXTERNAL] [Xen-devel] XEN Qdisk Ceph rbd support broken?
> 
> CAUTION: This email originated from outside of the organization. Do not click links or open
> attachments unless you can confirm the sender and know the content is safe.
> 
> 
> 
> This issue with Xen 4.13 and Ceph/RBD was last discussed back in February.
> 
> > Remote network Ceph image works fine with Xen 4.12.x ...
> >
> > In Xen 4.13.0 which I have tested recently it blames with the error
> > message "no such file or directory" as it would try accessing the image
> > over filesystem instead of remote network image.
> > ---
> >
> > I doubt the issue is in xl/libxl; sounds more likely to be in QEMU. The
> > PV block backend infrastructure in QEMU was changed between the 4.12
> > and 4.13 releases. Have you tried using an older QEMU with 4.13?
> 
> I'm also encountering the problem:
> 
>     failed to create drive: Could not open 'rbd:rbd/machine.disk0': No such file or directory
> 
> Xenstore has "params" like this:
> 
>     aio:rbd:rbd/machine.disk0
> 
> If I set it to "rbd:rbd/machine.disk0", I get a different message:
> 
>   failed to create drive: Parameter 'pool' is missing
> 
> Using upstream QEMU versions 2 or 3 works fine.
> 
> The interesting thing is that access by the virtual BIOS works fine. So,
> for a PVHVM domain, GRUB loads which loads a kernel, but the kernel can't
> access the disks.

Brian,

  That's not entirely surprising, as the BIOS is likely to be using an emulated device rather than a PV interface. Your issue stems
from the auto-creation code in xen-block:

https://git.qemu.org/?p=qemu.git;a=blob;f=hw/block/xen-block.c;hb=HEAD#l723

  The "aio:rbd:rbd/machine.disk0" string is generated by libxl; it does look a little odd and will fool the parser there. The
error you see after modifying the string appears to be because QEMU's QMP block-device instantiation code is objecting to a missing
parameter. Older QEMUs circumvented that code, which is almost certainly why you don't see the issue with versions 2 or 3.

  Paul





> 
> --
> - Brian



From xen-devel-bounces@lists.xenproject.org Thu Jul 16 08:15:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jul 2020 08:15:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvz2u-0002mH-Gh; Thu, 16 Jul 2020 08:15:12 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=oOKz=A3=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jvz2s-0002mC-Dv
 for xen-devel@lists.xenproject.org; Thu, 16 Jul 2020 08:15:10 +0000
X-Inumbo-ID: 75a9636c-c73c-11ea-948b-12813bfff9fa
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 75a9636c-c73c-11ea-948b-12813bfff9fa;
 Thu, 16 Jul 2020 08:15:09 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1594887309;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:content-transfer-encoding:in-reply-to;
 bh=QM4Yo08UU96sEkO/vwoDacKsS3qDPIvblp+LQRxpJLU=;
 b=LGUT0qXBIWPvBbeZ7F7SZtECSYNhazYozEfqU6ehEoOIZqrnW4EY7IKY
 GXX6DH0NTZVxAn3OtfoCxo5ToZZTpbA56nmMtcZB9a1A/hkBTiYxWCyII
 mPeAr86xRft0WnSZYYy2t+bBJkqG8ZM4BMfYUCnaIVMbl6UtA3sc5DXhD o=;
Authentication-Results: esa4.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: HN64iurN4tvosFYXRllUbPa5w9feraQzTUBYr1ZVlWnGsKrnMNqR/bp36R+16yGXaR2ppMWY4Z
 WF7GXvBFhj0fxHM6v864cbmJH32YTM+AZ4gFVCSLctj7N/EdoifqUmZbvwRJvDQZIfCQY/mb/R
 pfHVY+qEqTPBr1fmmLgvMhCG1QI07EBRWjIaEY5xvYVMOTeB5I+qRba3NWnXb92Jy5zImLcQms
 va7mgP9Us6S1TOPpjnmIHWxkAvQTLvdcNu/zzMu1a+14qaPzCf34RLwKxMOZ+z7wyWOX6LXqux
 jUQ=
X-SBRS: 2.7
X-MesageID: 23353559
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,358,1589256000"; d="scan'208";a="23353559"
Date: Thu, 16 Jul 2020 10:14:55 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Subject: Re: [PATCH v6 01/11] memory: batch processing in acquire_resource()
Message-ID: <20200716081455.GH7191@Air-de-Roger>
References: <cover.1594150543.git.michal.leszczynski@cert.pl>
 <02415890e4e8211513b495228c790e1d16de767f.1594150543.git.michal.leszczynski@cert.pl>
 <20200715093606.GU7191@Air-de-Roger>
 <61828142-8135-deee-43c6-1a2124f55756@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <61828142-8135-deee-43c6-1a2124f55756@suse.com>
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 =?utf-8?Q?Micha=C5=82_Leszczy=C5=84ski?= <michal.leszczynski@cert.pl>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, luwei.kang@intel.com,
 tamas.lengyel@intel.com, xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Wed, Jul 15, 2020 at 02:13:42PM +0200, Jan Beulich wrote:
> On 15.07.2020 11:36, Roger Pau Monné wrote:
> > On Tue, Jul 07, 2020 at 09:39:40PM +0200, Michał Leszczyński wrote:
> >> @@ -1599,8 +1629,17 @@ long do_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
> >>  #endif
> >>  
> >>      case XENMEM_acquire_resource:
> >> -        rc = acquire_resource(
> >> -            guest_handle_cast(arg, xen_mem_acquire_resource_t));
> >> +        do {
> >> +            rc = acquire_resource(
> >> +                guest_handle_cast(arg, xen_mem_acquire_resource_t),
> >> +                &start_extent);
> > 
> > I think it would be interesting from a performance PoV to move the
> > xmar copy_from_guest here, so that each call to acquire_resource
> > in the loop doesn't need to perform a copy from guest. That's
> > more relevant for translated callers, where a copy_from_guest involves
> > a guest page table and a p2m walk.
> 
> This isn't just a nice-to-have for performance reasons, but a
> correctness/consistency thing: A rogue (or buggy) guest may alter
> the structure between two such reads. It _may_ be the case that
> we're dealing fine with this right now, but it would feel like a
> trap to fall into later on.

I *think* this is safe, given you copy from guest and perform the
checks for each iteration. I agree the copy should be pulled out of
the loop together with the checks. There's no point in performing it
for every iteration.

> >> +
> >> +            if ( hypercall_preempt_check() )
> > 
> You are missing an rc == -ERESTART check here; you don't want to encode
> a continuation if rc is different from -ERESTART AFAICT.
> 
> At which point the subsequent containing do/while() likely wants
> adjusting too, e.g. to "for ( ; ; )".

That's another option, but you would need to add an extra
if ( rc != -ERESTART ) break; to the loop body if the while condition
is removed.

Roger.


From xen-devel-bounces@lists.xenproject.org Thu Jul 16 08:20:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jul 2020 08:20:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvz7X-0002w8-42; Thu, 16 Jul 2020 08:19:59 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ojOb=A3=protonmail.com=peter.jac@srs-us1.protection.inumbo.net>)
 id 1jvz7V-0002w2-Km
 for xen-devel@lists.xenproject.org; Thu, 16 Jul 2020 08:19:57 +0000
X-Inumbo-ID: 21666004-c73d-11ea-948b-12813bfff9fa
Received: from mail-40130.protonmail.ch (unknown [185.70.40.130])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 21666004-c73d-11ea-948b-12813bfff9fa;
 Thu, 16 Jul 2020 08:19:56 +0000 (UTC)
Date: Thu, 16 Jul 2020 08:19:50 +0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=protonmail.com;
 s=protonmail; t=1594887595;
 bh=xEJ9L75McO5xAyI2evYbwkorZc+ujoatY3A6NsvlZ5o=;
 h=Date:To:From:Reply-To:Subject:In-Reply-To:References:From;
 b=ohgpv9SLT1buuHcNajLwev1sC4H3Os8JX1DnosTZYSwCD1PtizdpdSdi5980eZWdQ
 uyh5Wj+9VWbWZ/K8xjn8b6QgpUepz5Tru+0GQOXOQ0g9Y2sjnOzkyjuxgZR5hIBVL3
 IEuFlJGLjmkBSLvlrJZsGOEPqks2OcHoq8T3qULU=
To: jgross@suse.com, xen-users@lists.xenproject.org,
 xen-devel@lists.xenproject.org
From: peter.jac@protonmail.com
Subject: Re: OpenSUSE and Xen
Message-ID: <uoEjrT5tecvyi0A__wOfDHcCjeQTklewSye5avJNNFIdIj07bJqiWKgfuPDME-moDKtYf0M5GtJjAnXpAez4pDtYq-J2kQKvWs82oi7gNuk=@protonmail.com>
In-Reply-To: <a6b4386f-6722-486d-9933-3cb32906b474@suse.com>
References: <jy5CG4DqGrBAir35SWF8DPAZHsHJPYzw4pdQWS8fMylQgEe3hrFhfJG2lZmgvXorh1TayKgqOgrEX8413UGzmF-RU4i_V479poF1OrnKNw8=@protonmail.com>
 <a6b4386f-6722-486d-9933-3cb32906b474@suse.com>
MIME-Version: 1.0
Content-Type: multipart/alternative;
 boundary="b1_KWkRiXfl3OmaYq6zkzQYYRrxYpPs3k6Ww0lDDodlFTY"
X-Spam-Status: No, score=-1.2 required=7.0 tests=ALL_TRUSTED,DKIM_SIGNED,
 DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,FREEMAIL_FROM,HTML_MESSAGE
 shortcircuit=no autolearn=disabled version=3.4.4
X-Spam-Checker-Version: SpamAssassin 3.4.4 (2020-01-24) on mail.protonmail.ch
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Reply-To: peter.jac@protonmail.com
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

This is a multi-part message in MIME format.

--b1_KWkRiXfl3OmaYq6zkzQYYRrxYpPs3k6Ww0lDDodlFTY
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit

Did you use internet during installation? The KVM package is available
on the DVD without the internet connection but Xen...

Sent from ProtonMail mobile

-------- Original Message --------
On Jul 16, 2020, 11:02 AM, Jürgen Groß wrote:

> On 15.07.20 19:16, peter.jac@protonmail.com wrote:
>>
>> Hello,
>> I have a question from all Xen Project developers and users.
>
> What? Not from me. ;-)
>
>> Is OpenSUSE supporting Xen Project?
>
> Why do you ask this here, instead of an OpenSUSE list/forum/whatever?
>
>> Did anyone install OpenSUSE?
>
> I know several people having done so, yes.
>
>> Why OpenSUSE not have any option about
>> installing Xen during installation but have an option about KVM?
>
> Ask OpenSUSE community?
>
> BTW, I'm able to select "Xen Virtualization Host and Tools" from the
> "Software" menu when installing openSUSE 15.2. It is even above the
> "KVM Virtualization Host and Tools" option.
>
>> Why Xen package doesn't included?
>
> They are.
>
> Juergen



--b1_KWkRiXfl3OmaYq6zkzQYYRrxYpPs3k6Ww0lDDodlFTY--



From xen-devel-bounces@lists.xenproject.org Thu Jul 16 08:26:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jul 2020 08:26:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvzE1-0003lt-UK; Thu, 16 Jul 2020 08:26:41 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=oOKz=A3=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jvzE0-0003lo-LE
 for xen-devel@lists.xenproject.org; Thu, 16 Jul 2020 08:26:40 +0000
X-Inumbo-ID: 114d59a6-c73e-11ea-948b-12813bfff9fa
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 114d59a6-c73e-11ea-948b-12813bfff9fa;
 Thu, 16 Jul 2020 08:26:38 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1594887998;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:content-transfer-encoding:in-reply-to;
 bh=eJpoVJ5o3Om7legFwg1TQfD6fd6IeunmXaH8O4YF6OQ=;
 b=M9dWhRqKl4gLZKwkZLFgzaSG1m5OxtTEt7RBRDZrIkCRvIgog9GNdbWc
 PQECeTuIVknysplydSbX6SzdkxiTZHA+91TOcAohz1by4Q7Lg+YpDpuIQ
 e+E8prnaWlQyHZOr1RMK2FleEJkTnkScnODUPyR2cn/yOkCyMdLgx/EqE o=;
Authentication-Results: esa6.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: BOluNTHkSYVdzAMNQb58R6010i5ICF06kvXn1w7dBi2F6w5uRxXGfl8G8ujIKy9LXU1DpqdzA3
 OwGy2MpWKdDTH0M8ZHq8fxcFvgE8BmPfa/ps6/PxDDu4GsBCU0GQor1ZGHY2NxfOFdk4sDVgyp
 5iSz+983d3HsU+ruXy2HddlTFvprP1+lMasYyKr2cjOz0c0VjCt59fRIhWMj/tZoBXpIdB2vqb
 5Qp0HcelBMhhpcbVKj0IK66Y8zBjx54uZfHrTacN6Y0uq0Pj4VEYBT7ijNmaxo7n6pkY2RH8eg
 NKI=
X-SBRS: 2.7
X-MesageID: 22834130
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,358,1589256000"; d="scan'208";a="22834130"
Date: Thu, 16 Jul 2020 10:26:30 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: =?utf-8?Q?Micha=C5=82_Leszczy=C5=84ski?= <michal.leszczynski@cert.pl>
Subject: Re: [PATCH v6 10/11] tools/libxc: add xc_vmtrace_* functions
Message-ID: <20200716082630.GI7191@Air-de-Roger>
References: <cover.1594150543.git.michal.leszczynski@cert.pl>
 <476203bca92f1fb0e8de2be2bcfb88695a5688f8.1594150543.git.michal.leszczynski@cert.pl>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <476203bca92f1fb0e8de2be2bcfb88695a5688f8.1594150543.git.michal.leszczynski@cert.pl>
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, tamas.lengyel@intel.com,
 luwei.kang@intel.com, Ian Jackson <ian.jackson@eu.citrix.com>,
 Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Tue, Jul 07, 2020 at 09:39:49PM +0200, Michał Leszczyński wrote:
> From: Michal Leszczynski <michal.leszczynski@cert.pl>
> 
> Add functions in libxc that use the new XEN_DOMCTL_vmtrace interface.
> 
> Signed-off-by: Michal Leszczynski <michal.leszczynski@cert.pl>
> ---
>  tools/libxc/Makefile          |  1 +
>  tools/libxc/include/xenctrl.h | 40 ++++++++++++++++
>  tools/libxc/xc_vmtrace.c      | 87 +++++++++++++++++++++++++++++++++++
>  3 files changed, 128 insertions(+)
>  create mode 100644 tools/libxc/xc_vmtrace.c
> 
> diff --git a/tools/libxc/Makefile b/tools/libxc/Makefile
> index fae5969a73..605e44501d 100644
> --- a/tools/libxc/Makefile
> +++ b/tools/libxc/Makefile
> @@ -27,6 +27,7 @@ CTRL_SRCS-y       += xc_csched2.c
>  CTRL_SRCS-y       += xc_arinc653.c
>  CTRL_SRCS-y       += xc_rt.c
>  CTRL_SRCS-y       += xc_tbuf.c
> +CTRL_SRCS-y       += xc_vmtrace.c
>  CTRL_SRCS-y       += xc_pm.c
>  CTRL_SRCS-y       += xc_cpu_hotplug.c
>  CTRL_SRCS-y       += xc_resume.c
> diff --git a/tools/libxc/include/xenctrl.h b/tools/libxc/include/xenctrl.h
> index 4c89b7294c..491b2c3236 100644
> --- a/tools/libxc/include/xenctrl.h
> +++ b/tools/libxc/include/xenctrl.h
> @@ -1585,6 +1585,46 @@ int xc_tbuf_set_cpu_mask(xc_interface *xch, xc_cpumap_t mask);
>  
>  int xc_tbuf_set_evt_mask(xc_interface *xch, uint32_t mask);
>  
> +/**
> + * Enable processor trace for given vCPU in given DomU.
> + * Allocate the trace ringbuffer with given size.
> + *
> + * @parm xch a handle to an open hypervisor interface
> + * @parm domid domain identifier
> + * @parm vcpu vcpu identifier
> + * @return 0 on success, -1 on failure
> + */
> +int xc_vmtrace_pt_enable(xc_interface *xch, uint32_t domid,
> +                         uint32_t vcpu);
> +
> +/**
> + * Disable processor trace for given vCPU in given DomU.
> + * Deallocate the trace ringbuffer.
> + *
> + * @parm xch a handle to an open hypervisor interface
> + * @parm domid domain identifier
> + * @parm vcpu vcpu identifier
> + * @return 0 on success, -1 on failure
> + */
> +int xc_vmtrace_pt_disable(xc_interface *xch, uint32_t domid,
> +                          uint32_t vcpu);
> +
> +/**
> + * Get current offset inside the trace ringbuffer.
> + * This allows to determine how much data was written into the buffer.
> + * Once buffer overflows, the offset will reset to 0 and the previous
> + * data will be overriden.
                   ^ overridden.
> + *
> + * @parm xch a handle to an open hypervisor interface
> + * @parm domid domain identifier
> + * @parm vcpu vcpu identifier
> + * @parm offset current offset inside trace buffer will be written there
> + * @parm size the total size of the trace buffer (in bytes)
> + * @return 0 on success, -1 on failure
> + */
> +int xc_vmtrace_pt_get_offset(xc_interface *xch, uint32_t domid,
> +                             uint32_t vcpu, uint64_t *offset, uint64_t *size);
> +
>  int xc_domctl(xc_interface *xch, struct xen_domctl *domctl);
>  int xc_sysctl(xc_interface *xch, struct xen_sysctl *sysctl);
>  
> diff --git a/tools/libxc/xc_vmtrace.c b/tools/libxc/xc_vmtrace.c
> new file mode 100644
> index 0000000000..ee034da8d3
> --- /dev/null
> +++ b/tools/libxc/xc_vmtrace.c
> @@ -0,0 +1,87 @@
> +/******************************************************************************
> + * xc_vmtrace.c
> + *
> + * API for manipulating hardware tracing features
> + *
> + * Copyright (c) 2020, Michal Leszczynski
> + *
> + * Copyright 2020 CERT Polska. All rights reserved.
> + * Use is subject to license terms.
> + *
> + * This library is free software; you can redistribute it and/or
> + * modify it under the terms of the GNU Lesser General Public
> + * License as published by the Free Software Foundation;
> + * version 2.1 of the License.
> + *
> + * This library is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
> + * Lesser General Public License for more details.
> + *
> + * You should have received a copy of the GNU Lesser General Public
> + * License along with this library; If not, see <http://www.gnu.org/licenses/>.
> + */
> +
> +#include "xc_private.h"
> +#include <xen/trace.h>
> +
> +int xc_vmtrace_pt_enable(
> +        xc_interface *xch, uint32_t domid, uint32_t vcpu)
> +{
> +    DECLARE_DOMCTL;

You could do:

DECLARE_DOMCTL = {
    .cmd = XEN_DOMCTL_vmtrace_op,
    .domain = domid,
    ...
};

And avoid setting the fields below; the same applies to the rest of
the functions. Note that when doing this there's no need to set the
padding fields, as they will be zeroed by the initialization.

> +    int rc;
> +
> +    domctl.cmd = XEN_DOMCTL_vmtrace_op;
> +    domctl.domain = domid;
> +    domctl.u.vmtrace_op.cmd = XEN_DOMCTL_vmtrace_pt_enable;
> +    domctl.u.vmtrace_op.vcpu = vcpu;
> +    domctl.u.vmtrace_op.pad1 = 0;
> +    domctl.u.vmtrace_op.pad2 = 0;
> +
> +    rc = do_domctl(xch, &domctl);
> +    return rc;

Just do 'return do_domctl(xch, &domctl);', and you can drop the rc
variable (here and in xc_vmtrace_pt_disable).

> +}
> +
> +int xc_vmtrace_pt_get_offset(
> +        xc_interface *xch, uint32_t domid, uint32_t vcpu,
> +        uint64_t *offset, uint64_t *size)
> +{
> +    DECLARE_DOMCTL;
> +    int rc;
> +
> +    domctl.cmd = XEN_DOMCTL_vmtrace_op;
> +    domctl.domain = domid;
> +    domctl.u.vmtrace_op.cmd = XEN_DOMCTL_vmtrace_pt_get_offset;
> +    domctl.u.vmtrace_op.vcpu = vcpu;
> +    domctl.u.vmtrace_op.pad1 = 0;
> +    domctl.u.vmtrace_op.pad2 = 0;
> +
> +    rc = do_domctl(xch, &domctl);
> +    if ( !rc )
> +    {
> +        if (offset)
               ^ missing spaces.

Thanks.


From xen-devel-bounces@lists.xenproject.org Thu Jul 16 08:42:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jul 2020 08:42:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvzTQ-0005dM-QQ; Thu, 16 Jul 2020 08:42:36 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=oOKz=A3=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jvzTP-0005dH-6L
 for xen-devel@lists.xenproject.org; Thu, 16 Jul 2020 08:42:35 +0000
X-Inumbo-ID: 4ab1f5f6-c740-11ea-bca7-bc764e2007e4
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4ab1f5f6-c740-11ea-bca7-bc764e2007e4;
 Thu, 16 Jul 2020 08:42:33 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1594888955;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:content-transfer-encoding:in-reply-to;
 bh=ThoXaI+9EcbSt53nPcI0hOiCh0/m8Q04GcOj/lwd8k0=;
 b=fAo/o0blSCJLuTOTaVivGEApzfNiEkbcySPg1mK6O3sj03KhpFENsFN4
 MFoasCoGKBOSB7Wq0b/uJQVKeQ+Y+1RWkV4Mkx82v6jN29m/X6cDjGEZg
 qcEgbgUELodop6WCIbdzC8F4FkdI/rZUIs0zUappBV8LXsoD1SCjZ4HEc 4=;
Authentication-Results: esa3.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: A5OWCGzZxaEmS5z09n2G2HEoN4Hbb/2KaukvTCnPUiCU8fyaBtM1MinYpyXsQj5cw3VBIMlLYZ
 f4pMeVTc9+SYFflnQnjgyu6h4Kx0gQcbuWCek/Qr36LsFOMkU2fOMlM8ITPnE4bo3wfnhzfa56
 Uv/AlDEHy4o50aC2JmzQmgr4cexHH/s2dvYGUCwbtDoLeWatDguYDWVjeJjGj9wfYY4smxo6uh
 0ZlGNdeenz7sX09AHpKdwnWW+j1LmYTdk1zDIKUX34iK+ix8Mvhvd9RYC9Vr0+TWHEu8rwA8Dg
 Xvs=
X-SBRS: 2.7
X-MesageID: 22504981
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,358,1589256000"; d="scan'208";a="22504981"
Date: Thu, 16 Jul 2020 10:42:09 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: =?utf-8?Q?Micha=C5=82_Leszczy=C5=84ski?= <michal.leszczynski@cert.pl>
Subject: Re: [PATCH v6 11/11] tools/proctrace: add proctrace tool
Message-ID: <20200716084209.GJ7191@Air-de-Roger>
References: <cover.1594150543.git.michal.leszczynski@cert.pl>
 <8bc5959478d6ba1c1873615b53628094da578688.1594150543.git.michal.leszczynski@cert.pl>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <8bc5959478d6ba1c1873615b53628094da578688.1594150543.git.michal.leszczynski@cert.pl>
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, tamas.lengyel@intel.com,
 luwei.kang@intel.com, Ian Jackson <ian.jackson@eu.citrix.com>,
 Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Tue, Jul 07, 2020 at 09:39:50PM +0200, Michał Leszczyński wrote:
> From: Michal Leszczynski <michal.leszczynski@cert.pl>
> 
> Add an demonstration tool that uses xc_vmtrace_* calls in order
      ^ a
> to manage external IPT monitoring for DomU.
> 
> Signed-off-by: Michal Leszczynski <michal.leszczynski@cert.pl>
> ---
>  tools/proctrace/Makefile    |  45 +++++++++
>  tools/proctrace/proctrace.c | 179 ++++++++++++++++++++++++++++++++++++
>  2 files changed, 224 insertions(+)
>  create mode 100644 tools/proctrace/Makefile
>  create mode 100644 tools/proctrace/proctrace.c
> 
> diff --git a/tools/proctrace/Makefile b/tools/proctrace/Makefile
> new file mode 100644
> index 0000000000..9c135229b9
> --- /dev/null
> +++ b/tools/proctrace/Makefile
> @@ -0,0 +1,45 @@
> +# Copyright (C) CERT Polska - NASK PIB
> +# Author: Michał Leszczyński <michal.leszczynski@cert.pl>
> +#
> +# This program is free software; you can redistribute it and/or modify
> +# it under the terms of the GNU General Public License as published by
> +# the Free Software Foundation; under version 2 of the License.
> +#
> +# This program is distributed in the hope that it will be useful,
> +# but WITHOUT ANY WARRANTY; without even the implied warranty of
> +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> +# GNU General Public License for more details.
> +
> +XEN_ROOT=$(CURDIR)/../..
> +include $(XEN_ROOT)/tools/Rules.mk
> +
> +CFLAGS  += -Werror
> +CFLAGS  += $(CFLAGS_libxenevtchn)
> +CFLAGS  += $(CFLAGS_libxenctrl)
> +LDLIBS  += $(LDLIBS_libxenctrl)
> +LDLIBS  += $(LDLIBS_libxenevtchn)
> +LDLIBS  += $(LDLIBS_libxenforeignmemory)
> +
> +.PHONY: all
> +all: build
> +
> +.PHONY: build
> +build: proctrace
> +
> +.PHONY: install
> +install: build
> +	$(INSTALL_DIR) $(DESTDIR)$(sbindir)
> +	$(INSTALL_PROG) proctrace $(DESTDIR)$(sbindir)/proctrace
> +
> +.PHONY: uninstall
> +uninstall:
> +	rm -f $(DESTDIR)$(sbindir)/proctrace
> +
> +.PHONY: clean
> +clean:
> +	$(RM) -f proctrace $(DEPS_RM)
> +
> +.PHONY: distclean
> +distclean: clean
> +
> +-include $(DEPS_INCLUDE)
> diff --git a/tools/proctrace/proctrace.c b/tools/proctrace/proctrace.c
> new file mode 100644
> index 0000000000..3c1ccccee8
> --- /dev/null
> +++ b/tools/proctrace/proctrace.c
> @@ -0,0 +1,179 @@
> +/******************************************************************************
> + * tools/proctrace.c
> + *
> + * Demonstrative tool for collecting Intel Processor Trace data from Xen.
> + *  Could be used to externally monitor a given vCPU in given DomU.
> + *
> + * Copyright (C) 2020 by CERT Polska - NASK PIB
> + *
> + * Authors: Michał Leszczyński, michal.leszczynski@cert.pl
> + * Date:    June, 2020
> + *
> + *  This program is free software; you can redistribute it and/or modify
> + *  it under the terms of the GNU General Public License as published by
> + *  the Free Software Foundation; under version 2 of the License.
> + *
> + *  This program is distributed in the hope that it will be useful,
> + *  but WITHOUT ANY WARRANTY; without even the implied warranty of
> + *  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> + *  GNU General Public License for more details.
> + *
> + *  You should have received a copy of the GNU General Public License
> + *  along with this program; If not, see <http://www.gnu.org/licenses/>.
> + */
> +
> +#include <stdlib.h>
> +#include <stdio.h>
> +#include <sys/mman.h>
> +#include <signal.h>
> +#include <errno.h>
> +
> +#include <xenctrl.h>
> +#include <xen/xen.h>
> +#include <xenforeignmemory.h>
> +
> +volatile int interrupted = 0;
> +volatile int domain_down = 0;

No need for the initialization; globals are already zero-initialized.

> +void term_handler(int signum) {
> +    interrupted = 1;
> +}
> +
> +int main(int argc, char* argv[]) {
> +    xc_interface *xc;
> +    uint32_t domid;
> +    uint32_t vcpu_id;
> +    uint64_t size;
> +
> +    int rc = -1;
> +    uint8_t *buf = NULL;
> +    uint64_t last_offset = 0;
> +
> +    xenforeignmemory_handle *fmem;
> +    xenforeignmemory_resource_handle *fres;
> +
> +    if (signal(SIGINT, term_handler) == SIG_ERR)
> +    {
> +        fprintf(stderr, "Failed to register signal handler\n");
> +        return 1;
> +    }
> +
> +    if (argc != 3) {
> +        fprintf(stderr, "Usage: %s <domid> <vcpu_id>\n", argv[0]);
> +        fprintf(stderr, "It's recommended to redirect this"
> +                        "program's output to file\n");
> +        fprintf(stderr, "or to pipe it's output to xxd or other program.\n");
> +        return 1;
> +    }
> +
> +    domid = atoi(argv[1]);
> +    vcpu_id = atoi(argv[2]);
> +
> +    xc = xc_interface_open(0, 0, 0);
> +
> +    fmem = xenforeignmemory_open(0, 0);

I think you also need to test that fmem is set? (like you do for xc).

> +
> +    if (!xc) {
> +        fprintf(stderr, "Failed to open xc interface\n");
> +        return 1;
> +    }
> +
> +    rc = xc_vmtrace_pt_enable(xc, domid, vcpu_id);
> +
> +    if (rc) {
> +        fprintf(stderr, "Failed to call xc_vmtrace_pt_enable\n");
> +        return 1;
> +    }
> +    
> +    rc = xc_vmtrace_pt_get_offset(xc, domid, vcpu_id, NULL, &size);
> +
> +    if (rc) {
> +        fprintf(stderr, "Failed to get trace buffer size\n");
> +        return 1;
> +    }
> +
> +    fres = xenforeignmemory_map_resource(
> +        fmem, domid, XENMEM_resource_vmtrace_buf,
> +        /* vcpu: */ vcpu_id,
> +        /* frame: */ 0,
> +        /* num_frames: */ size >> XC_PAGE_SHIFT,
> +        (void **)&buf,
> +        PROT_READ, 0);
> +
> +    if (!buf) {
> +        fprintf(stderr, "Failed to map trace buffer\n");
> +        return 1;
> +    }
> +
> +    while (!interrupted) {
> +        uint64_t offset;
> +        rc = xc_vmtrace_pt_get_offset(xc, domid, vcpu_id, &offset, NULL);
> +
> +        if (rc == ENODATA) {
> +            interrupted = 1;
> +            domain_down = 1;
> +	} else if (rc) {

Hard tab.

> +            fprintf(stderr, "Failed to call xc_vmtrace_pt_get_offset\n");

Should you try to disable vmtrace here before exiting?

> +            return 1;
> +        }
> +
> +        if (offset > last_offset)
> +        {
> +            fwrite(buf + last_offset, offset - last_offset, 1, stdout);
> +        }
> +        else if (offset < last_offset)
> +        {
> +            // buffer wrapped

I know this is a test utility, but I would prefer if you could use the
C comment style /* */.

> +            fwrite(buf + last_offset, size - last_offset, 1, stdout);
> +            fwrite(buf, offset, 1, stdout);
> +        }
> +
> +        last_offset = offset;
> +        usleep(1000 * 100);
> +    }
> +
> +    rc = xenforeignmemory_unmap_resource(fmem, fres);
> +
> +    if (rc) {
> +        fprintf(stderr, "Failed to unmap resource\n");
> +        return 1;
> +    }
> +
> +    rc = xenforeignmemory_close(fmem);
> +
> +    if (rc) {
> +        fprintf(stderr, "Failed to close fmem\n");
> +        return 1;
> +    }
> +
> +    /*
> +     * Don't try to disable PT if the domain is already dying.
> +     */
> +    if (!domain_down) {
> +        rc = xc_vmtrace_pt_disable(xc, domid, vcpu_id);

I'm not sure you can assume a domain is dying just because
xc_vmtrace_pt_get_offset has returned ENODATA. Is there any harm in
unconditionally attempting to disable vmtrace?

Thanks.


From xen-devel-bounces@lists.xenproject.org Thu Jul 16 09:09:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jul 2020 09:09:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jvzta-0007W4-D6; Thu, 16 Jul 2020 09:09:38 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OKj/=A3=huawei.com=miaoqinglang@srs-us1.protection.inumbo.net>)
 id 1jvzn3-0007O7-Vu
 for xen-devel@lists.xenproject.org; Thu, 16 Jul 2020 09:02:54 +0000
X-Inumbo-ID: 1f77d9d4-c743-11ea-948f-12813bfff9fa
Received: from huawei.com (unknown [45.249.212.190])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1f77d9d4-c743-11ea-948f-12813bfff9fa;
 Thu, 16 Jul 2020 09:02:50 +0000 (UTC)
Received: from DGGEMS413-HUB.china.huawei.com (unknown [172.30.72.58])
 by Forcepoint Email with ESMTP id 17F9E5A95E9514988E2A;
 Thu, 16 Jul 2020 17:02:48 +0800 (CST)
Received: from localhost.localdomain.localdomain (10.175.113.25) by
 DGGEMS413-HUB.china.huawei.com (10.3.19.213) with Microsoft SMTP Server id
 14.3.487.0; Thu, 16 Jul 2020 17:02:46 +0800
From: Qinglang Miao <miaoqinglang@huawei.com>
To: Greg Kroah-Hartman <gregkh@linuxfoundation.org>, Boris Ostrovsky
 <boris.ostrovsky@oracle.com>, Juergen Gross <jgross@suse.com>, Chen-Yu Tsai
 <wens@csie.org>, Thomas Gleixner <tglx@linutronix.de>, Stefano Stabellini
 <sstabellini@kernel.org>
Subject: [PATCH -next] x86/xen: Convert to DEFINE_SHOW_ATTRIBUTE
Date: Thu, 16 Jul 2020 17:06:41 +0800
Message-ID: <20200716090641.14184-1-miaoqinglang@huawei.com>
X-Mailer: git-send-email 2.20.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-Originating-IP: [10.175.113.25]
X-CFilter-Loop: Reflected
X-Mailman-Approved-At: Thu, 16 Jul 2020 09:09:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Chen Huang <chenhuang5@huawei.com>

Use DEFINE_SHOW_ATTRIBUTE macro to simplify the code.

Signed-off-by: Chen Huang <chenhuang5@huawei.com>
---
 arch/x86/xen/p2m.c | 12 +-----------
 1 file changed, 1 insertion(+), 11 deletions(-)

diff --git a/arch/x86/xen/p2m.c b/arch/x86/xen/p2m.c
index 4cf680e2e..0f4a449de 100644
--- a/arch/x86/xen/p2m.c
+++ b/arch/x86/xen/p2m.c
@@ -799,17 +799,7 @@ static int p2m_dump_show(struct seq_file *m, void *v)
 	return 0;
 }
 
-static int p2m_dump_open(struct inode *inode, struct file *filp)
-{
-	return single_open(filp, p2m_dump_show, NULL);
-}
-
-static const struct file_operations p2m_dump_fops = {
-	.open		= p2m_dump_open,
-	.read		= seq_read,
-	.llseek		= seq_lseek,
-	.release	= single_release,
-};
+DEFINE_SHOW_ATTRIBUTE(p2m_dump);
 
 static struct dentry *d_mmu_debug;
 
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu Jul 16 10:06:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jul 2020 10:06:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jw0mN-00043P-Hq; Thu, 16 Jul 2020 10:06:15 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=dhnJ=A3=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jw0mM-00043K-Gh
 for xen-devel@lists.xenproject.org; Thu, 16 Jul 2020 10:06:14 +0000
X-Inumbo-ID: fa918f08-c74b-11ea-b7bb-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id fa918f08-c74b-11ea-b7bb-bc764e2007e4;
 Thu, 16 Jul 2020 10:06:13 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 3D9B3B69F;
 Thu, 16 Jul 2020 10:06:16 +0000 (UTC)
Subject: Re: [PATCH v3 1/2] x86: restore pv_rtc_handler() invocation
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <5426dd6f-50cd-dc23-5c6b-0ab631d98d38@suse.com>
 <7dd4b668-06ca-807a-9cc1-77430b2376a8@suse.com>
 <20200715121347.GY7191@Air-de-Roger>
 <2b9de0fd-5973-ed66-868c-ffadca83edf3@suse.com>
 <20200715133217.GZ7191@Air-de-Roger>
 <cd08f928-2be9-314b-56e6-bb414247caff@suse.com>
 <20200715145144.GA7191@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <ff1926c7-cc21-03ad-1dff-53c703450151@suse.com>
Date: Thu, 16 Jul 2020 12:06:14 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20200715145144.GA7191@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Paul Durrant <paul@xen.org>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 15.07.2020 16:51, Roger Pau Monné wrote:
> On Wed, Jul 15, 2020 at 03:51:17PM +0200, Jan Beulich wrote:
>> On 15.07.2020 15:32, Roger Pau Monné wrote:
>>> Feel free to change to ACCESS_ONCE or barrier if you think it's
>>> clearer.
>>
>> I did so (also on the writer side), not the least based on guessing
>> what Andrew would presumably have preferred.
> 
> Thanks! Sorry I might be pedantic, but is the ACCESS_ONCE on the write
> side actually required? I'm not sure I see what ACCESS_ONCE protects
> against in handle_rtc_once.

Well, this is all sort of a mess, I think. We have this mixture of
ACCESS_ONCE() and read_atomic() / write_atomic(), but I don't think
we use them consistently, and I'm not sure either is suitable to
deal with all (theoretical) corner cases.

read_atomic() / write_atomic() guarantee a single insn to be used
to access a piece of data. I'm uncertain whether they also guarantee
single access (i.e. that the compiler won't replicate the asm()-s).
The wording in gcc doc is pretty precise, but not quite enough imo
to be entirely certain.

ACCESS_ONCE() guarantees single access, but doesn't guarantee that
the compiler wouldn't split this single access into multiple insns.
(It's just, like elsewhere, that it would be pretty silly of it if
it did.)

Yesterday, as said, I tried to in particular do what I expect/guess
Andrew would have wanted done. This is despite me not being entirely
convinced this is the right thing to do here, i.e. personally I
would have preferred read_atomic() / write_atomic(), as I think the
intention of what the gcc doc is saying is what we want (taking
into consideration both uses of "volatile" in these helpers).

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jul 16 10:11:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jul 2020 10:11:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jw0rY-0004rv-7M; Thu, 16 Jul 2020 10:11:36 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=dhnJ=A3=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jw0rW-0004rq-UC
 for xen-devel@lists.xenproject.org; Thu, 16 Jul 2020 10:11:34 +0000
X-Inumbo-ID: b9894ac2-c74c-11ea-949a-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b9894ac2-c74c-11ea-949a-12813bfff9fa;
 Thu, 16 Jul 2020 10:11:33 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 9C99BB850;
 Thu, 16 Jul 2020 10:11:36 +0000 (UTC)
Subject: Re: [PATCH v2] docs: specify stability of hypfs path documentation
To: George Dunlap <George.Dunlap@citrix.com>, Juergen Gross <jgross@suse.com>
References: <20200713140338.16172-1-jgross@suse.com>
 <8a96b1b9-cbcb-557a-5b82-661bbe40fe25@suse.com>
 <68F727A8-29B8-4846-8BE9-BD4F6E0DC60D@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <9f5e86cc-4f64-982b-d84b-1de6b2739a2b@suse.com>
Date: Thu, 16 Jul 2020 12:11:34 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <68F727A8-29B8-4846-8BE9-BD4F6E0DC60D@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Paul Durrant <paul@xen.org>,
 Andrew Cooper <Andrew.Cooper3@citrix.com>,
 Ian Jackson <Ian.Jackson@citrix.com>,
 "open list:X86" <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 15.07.2020 16:37, George Dunlap wrote:
> It sounds like you’re saying:
> 
> 1. Paths listed without conditions will always be available
> 
> 2. Paths listed with conditions may be extended: i.e., a node currently listed as PV might also become available for HVM guests
> 
> 3. Paths listed with conditions might have those conditions reduced, but will never entirely disappear.  So something currently listed as PV might be reduced to CONFIG_HAS_FOO, but won’t be completely removed.
> 
> Is that what you meant?

I see Jürgen replied "yes" to this, but I'm not sure about 1.
above: I think it's quite reasonable to expect that paths without
condition may gain a condition. Just like paths now having a
condition and (perhaps temporarily) losing it shouldn't all of
a sudden become "always available" when they weren't meant to
be.

As far as 3. goes, I'm also unsure to what extent this is any better
stability-wise (from a consumer pov) than allowing paths to
entirely disappear.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jul 16 10:14:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jul 2020 10:14:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jw0uC-00050A-Mj; Thu, 16 Jul 2020 10:14:20 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=oOKz=A3=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jw0uA-000504-Gt
 for xen-devel@lists.xenproject.org; Thu, 16 Jul 2020 10:14:18 +0000
X-Inumbo-ID: 1aecbca4-c74d-11ea-8496-bc764e2007e4
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1aecbca4-c74d-11ea-8496-bc764e2007e4;
 Thu, 16 Jul 2020 10:14:17 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1594894457;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:content-transfer-encoding:in-reply-to;
 bh=G8y/Werdp1fC4yWK8mnbUpnENPeqZCERXosdcj65L2Q=;
 b=JdKDr9BSt+922rwuY96JH2IdA4whLBNxpNRgqs9sZaDpjckMQY1XOwhU
 RMGlM032KYyUm/YKwxhbDKqhwrZczq+kVeNe4EVmrxLy3/AK4lcl5aAzY
 +Zeakzx6ypWZ+weLbip4mDxnWgpJtajxXno/pEhrlvqNEA17vjGc/oeVz 4=;
Authentication-Results: esa4.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: cHUKvwbhjDaAptNNP9/325Js1K18w6caGyK74KVjdm+uT/0y6xo817C1iL2InG2RnfRvc+wG4F
 sp2o+W7JrkGVXK/1eb2gFeGnxfsyZqhzy1sxo1dDG603v/CkEc6bMuLGSii+wVsybPy5DVJPPa
 /CLllbFOmox41UoMJyFCJT4ZoZ2hJZtEi6Y3kZODZSBYvE+v3XJ7jHsyK0Aj2fbnrn5TSYmQSk
 bZEN6KqQZrRwnSEULuX/0ULTCX3OXjuQLAgcwzOVNyOx0PX3pH4zya8aqlrIH9f8ZyoFbEj6jf
 C7Q=
X-SBRS: 2.7
X-MesageID: 23359823
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,358,1589256000"; d="scan'208";a="23359823"
Date: Thu, 16 Jul 2020 12:14:09 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Subject: Re: [PATCH 1/2] VT-d: install sync_cache hook on demand
Message-ID: <20200716101409.GK7191@Air-de-Roger>
References: <2ad1607d-0909-f1cc-83bf-2546b28a9128@suse.com>
 <0036b69f-0d56-9ac4-1afa-06640c9007de@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <0036b69f-0d56-9ac4-1afa-06640c9007de@suse.com>
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Kevin
 Tian <kevin.tian@intel.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Wed, Jul 15, 2020 at 12:03:57PM +0200, Jan Beulich wrote:
> Instead of checking inside the hook whether any non-coherent IOMMUs are
> present, simply install the hook only when this is the case.
> 
> To prove that there are no other references to the now dynamically
> updated ops structure (and hence that its updating happens early
> enough), make it static and rename it at the same time.
> 
> Note that this change implies that sync_cache() shouldn't be called
> directly unless there are unusual circumstances, as is the case in
> alloc_pgtable_maddr(), which gets invoked too early for iommu_ops to
> be set already (and therefore we also need to be careful there to
> avoid accessing vtd_ops later on, as it lives in .init).
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

I think this is slightly better than what we currently have:

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

I would however prefer if we also added a check to assert that
alloc_pgtable_maddr is never called before iommu_alloc. We could maybe
poison the .sync_cache field, and then either set to NULL or to
sync_cache in iommu_alloc?

Maybe I'm just overly paranoid with this.

Thanks.


From xen-devel-bounces@lists.xenproject.org Thu Jul 16 10:14:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jul 2020 10:14:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jw0ub-00052g-0N; Thu, 16 Jul 2020 10:14:45 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=sYN5=A3=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jw0uZ-00051n-Ra
 for xen-devel@lists.xenproject.org; Thu, 16 Jul 2020 10:14:43 +0000
X-Inumbo-ID: 25f3889f-c74d-11ea-949a-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 25f3889f-c74d-11ea-949a-12813bfff9fa;
 Thu, 16 Jul 2020 10:14:36 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=+IP6H6BgHXR6VnVhhVW33Z1jCldh48iTgdRBYUZlGOM=; b=S1+FybE1vN2fDoSuqV6A48yZe
 EUOl8idD03osyLXYYbAxzseYXRbMUEvRyyUrdGNVvb7ozR4JOz5QXPZCUkHDCFKkfofvSJfGsAKdG
 m9d/zyUhC3M/7q+74HEniag8+gqD/r35pnEH4BswFH/fkrlJpT9yGqcOwZa9EwSq6HJaU=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jw0uS-0002RJ-K8; Thu, 16 Jul 2020 10:14:36 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jw0uS-00039h-CE; Thu, 16 Jul 2020 10:14:36 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jw0uS-0004oS-Be; Thu, 16 Jul 2020 10:14:36 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151935-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 151935: regressions - FAIL
X-Osstest-Failures: libvirt:build-amd64-libvirt:libvirt-build:fail:regression
 libvirt:build-i386-libvirt:libvirt-build:fail:regression
 libvirt:build-arm64-libvirt:libvirt-build:fail:regression
 libvirt:build-armhf-libvirt:libvirt-build:fail:regression
 libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
X-Osstest-Versions-This: libvirt=e71e13488dc1aa65456e54a4b41bc925821b4263
X-Osstest-Versions-That: libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 16 Jul 2020 10:14:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151935 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151935/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              e71e13488dc1aa65456e54a4b41bc925821b4263
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z    6 days
Failing since        151818  2020-07-11 04:18:52 Z    5 days    6 attempts
Testing same since   151935  2020-07-16 04:21:24 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrea Bolognani <abologna@redhat.com>
  Bastien Orivel <bastien.orivel@diateam.net>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Jin Yan <jinyan12@huawei.com>
  Laine Stump <laine@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Pavel Hrdina <phrdina@redhat.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Stefan Berger <stefanb@linux.ibm.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1264 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Jul 16 10:15:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jul 2020 10:15:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jw0v4-00058I-GS; Thu, 16 Jul 2020 10:15:14 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=oOKz=A3=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jw0v3-000585-9l
 for xen-devel@lists.xenproject.org; Thu, 16 Jul 2020 10:15:13 +0000
X-Inumbo-ID: 3b09fa2f-c74d-11ea-949a-12813bfff9fa
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 3b09fa2f-c74d-11ea-949a-12813bfff9fa;
 Thu, 16 Jul 2020 10:15:12 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1594894512;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:content-transfer-encoding:in-reply-to;
 bh=ZGgRMcmFpKiMJJeKQrq1cmjNWWRGADY93PC47sKyqTI=;
 b=FDH398T1/GtckM2eKCV8JIhqnnvbhbQuaIkDcuAIh27HkB7UkFyzUm4/
 PKx7piE5PfcIbf80LPXwfc2hXGJirDBVvwjw5IbNCe5I7nodEvbWlYB+i
 8RtFxfy7+uMupd1kh2UiwGUuy0k0aGGihjjRaWEnjtP0ixwp5KQ/fnlyo c=;
Authentication-Results: esa5.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: GUxQoiJCzufxyegeXCzlN1GvGCASndEcgyd/ORhWartFeTVvML8GVzidefhxX9chu5uiE4Lm2Q
 r51mQ6RdA1VEOc765xIZa8sVHIlEPgCcN4JYyB7O0OAKh5m5mqo/PIeZw+1tpddqSdBjuLcmCn
 1IndVtk+4HX+/i4+buWG6HBUyAmCA+PW2hTKDS+AZJ0QRJAM3w0meLU3OLaL83gpo0Xe+UO8AI
 kyr0xr70UXXUKF2FzhN6loNu6okZEYnzCCaAkZ6bO2IvPLA2VfeCV4YI89kQcaCgX6HWDQYXo+
 RoQ=
X-SBRS: 2.7
X-MesageID: 22707710
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,358,1589256000"; d="scan'208";a="22707710"
Date: Thu, 16 Jul 2020 12:15:04 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Subject: Re: [PATCH 2/2] VT-d: use clear_page() in alloc_pgtable_maddr()
Message-ID: <20200716101504.GL7191@Air-de-Roger>
References: <2ad1607d-0909-f1cc-83bf-2546b28a9128@suse.com>
 <14f8b940-252f-9837-8958-5e76e1c3f06f@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <14f8b940-252f-9837-8958-5e76e1c3f06f@suse.com>
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Kevin
 Tian <kevin.tian@intel.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Wed, Jul 15, 2020 at 12:04:15PM +0200, Jan Beulich wrote:
> For full pages this is (meant to be) more efficient. Also change the
> type and reduce the scope of the involved local variable.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks.


From xen-devel-bounces@lists.xenproject.org Thu Jul 16 10:25:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jul 2020 10:25:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jw15A-00066P-I5; Thu, 16 Jul 2020 10:25:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=dhnJ=A3=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jw159-00066K-TF
 for xen-devel@lists.xenproject.org; Thu, 16 Jul 2020 10:25:39 +0000
X-Inumbo-ID: b1869a58-c74e-11ea-b7bb-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b1869a58-c74e-11ea-b7bb-bc764e2007e4;
 Thu, 16 Jul 2020 10:25:39 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 348B2AEF8;
 Thu, 16 Jul 2020 10:25:42 +0000 (UTC)
Subject: Re: [PATCH 1/2] VT-d: install sync_cache hook on demand
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <2ad1607d-0909-f1cc-83bf-2546b28a9128@suse.com>
 <0036b69f-0d56-9ac4-1afa-06640c9007de@suse.com>
 <20200716101409.GK7191@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <a051d3e7-eaf5-6121-823b-db1a737bc085@suse.com>
Date: Thu, 16 Jul 2020 12:25:40 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20200716101409.GK7191@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Kevin Tian <kevin.tian@intel.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 16.07.2020 12:14, Roger Pau Monné wrote:
> On Wed, Jul 15, 2020 at 12:03:57PM +0200, Jan Beulich wrote:
>> Instead of checking inside the hook whether any non-coherent IOMMUs are
>> present, simply install the hook only when this is the case.
>>
>> To prove that there are no other references to the now dynamically
>> updated ops structure (and hence that its updating happens early
>> enough), make it static and rename it at the same time.
>>
>> Note that this change implies that sync_cache() shouldn't be called
>> directly unless there are unusual circumstances, as is the case in
>> alloc_pgtable_maddr(), which gets invoked too early for iommu_ops to
>> be set already (and therefore we also need to be careful there to
>> avoid accessing vtd_ops later on, as it lives in .init).
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> I think this is slightly better than what we currently have:
> 
> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks.

> I would however prefer if we also added a check to assert that
> alloc_pgtable_maddr is never called before iommu_alloc.

It would be quite odd for this to happen - what point would
there be to allocate a table to hang off of an IOMMU when
no IOMMU was found at all so far? Furthermore, such a
restriction could either be viewed to not suffice afaict (as
a subsequent iommu_alloc() may in principle find a non-
coherent IOMMU), or to be pointless (until a non-coherent
IOMMU was found and allocated any table for, it doesn't
really matter whether we sync cache). In the end, whether to
sync cache in alloc_pgtable_maddr() doesn't really depend on
any global property, but only on the property / properties
of the IOMMU(s) the table is going to be made visible to.

> We could maybe
> poison the .sync_cache field, and then either set to NULL or to
> sync_cache in iommu_alloc?

Poisoning is at least latently problematic, due to alternative
insn patching we use for indirect calls. There are two passes,
where the 1st pass skips any instances where the target address
is still NULL. Of course that code could be made to honor the
poison value as well, but to me this looks like going too far.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jul 16 10:31:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jul 2020 10:31:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jw1Ai-0006vT-6w; Thu, 16 Jul 2020 10:31:24 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=1QOp=A3=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jw1Ah-0006vO-Lv
 for xen-devel@lists.xenproject.org; Thu, 16 Jul 2020 10:31:23 +0000
X-Inumbo-ID: 7e53d276-c74f-11ea-bca7-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7e53d276-c74f-11ea-bca7-bc764e2007e4;
 Thu, 16 Jul 2020 10:31:22 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id AE3F7AEF8;
 Thu, 16 Jul 2020 10:31:25 +0000 (UTC)
Subject: Re: [PATCH v2] docs: specify stability of hypfs path documentation
To: Jan Beulich <jbeulich@suse.com>, George Dunlap <George.Dunlap@citrix.com>
References: <20200713140338.16172-1-jgross@suse.com>
 <8a96b1b9-cbcb-557a-5b82-661bbe40fe25@suse.com>
 <68F727A8-29B8-4846-8BE9-BD4F6E0DC60D@citrix.com>
 <9f5e86cc-4f64-982b-d84b-1de6b2739a2b@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <4c681c7c-be69-7e1a-4cd9-c9e05fe85372@suse.com>
Date: Thu, 16 Jul 2020 12:31:21 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.9.0
MIME-Version: 1.0
In-Reply-To: <9f5e86cc-4f64-982b-d84b-1de6b2739a2b@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Paul Durrant <paul@xen.org>,
 Andrew Cooper <Andrew.Cooper3@citrix.com>,
 Ian Jackson <Ian.Jackson@citrix.com>,
 "open list:X86" <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 16.07.20 12:11, Jan Beulich wrote:
> On 15.07.2020 16:37, George Dunlap wrote:
>> It sounds like you’re saying:
>>
>> 1. Paths listed without conditions will always be available
>>
>> 2. Paths listed with conditions may be extended: i.e., a node currently listed as PV might also become available for HVM guests
>>
>> 3. Paths listed with conditions might have those conditions reduced, but will never entirely disappear.  So something currently listed as PV might be reduced to CONFIG_HAS_FOO, but won’t be completely removed.
>>
>> Is that what you meant?
> 
> I see Jürgen replied "yes" to this, but I'm not sure about 1.
> above: I think it's quite reasonable to expect that paths without
> a condition may gain one. Just like paths currently having a
> condition and (perhaps temporarily) losing it shouldn't all of a
> sudden become "always available" when they weren't meant to be.
> 
> As far as 3. goes, I'm also unsure how far this is any better
> stability-wise (from a consumer pov) than allowing paths to
> disappear entirely.

The idea is that any user tool using hypfs can rely on paths under 1 to
exist, while the ones under 3 might not be there due to the hypervisor
config or the system in use.

A path not being allowed to entirely disappear ensures that it remains
in the documentation, so the same path can't be reused for something
different in the future.


Juergen


From xen-devel-bounces@lists.xenproject.org Thu Jul 16 10:31:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jul 2020 10:31:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jw1Ak-0006vi-Ez; Thu, 16 Jul 2020 10:31:26 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=oOKz=A3=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jw1Aj-0006vZ-G3
 for xen-devel@lists.xenproject.org; Thu, 16 Jul 2020 10:31:25 +0000
X-Inumbo-ID: 7e42dc46-c74f-11ea-949b-12813bfff9fa
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7e42dc46-c74f-11ea-949b-12813bfff9fa;
 Thu, 16 Jul 2020 10:31:24 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1594895485;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:content-transfer-encoding:in-reply-to;
 bh=NBFRfwyxPgTlZ8qfQKummKCUon0ZFqZEtEKuh7EX5dk=;
 b=dCBSXsSoG0Sycc8gUpCI58WH12V0wD1+uYvnYsgu0gcnJFYcBjvUKJZK
 SOiSW1RZ14nhoVQTMnUl26usC7oZU42OrACx5/mMLHxslTLNuHC62jmw5
 xgP6XGYos9MAO2jJCkdQVTc6GwS/614HiFswkjJTEBrS423nuIb7bR/0r E=;
Authentication-Results: esa1.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: dN4IxCAtEEFFXNtkXI+PVdoa37nyoaS1pZN5o6eWMohq7LVkwHy/W5Y3OAeBCI9JAALwSvPvQh
 2JYMzLFmRg+Za/eegmI6Y8bvnZTBaF4H+gLn2mwaRGsdcMKmWz2xTneledNeUTY1mpdYwefjO8
 uR5FhVowgH0uPJpVNz7oQUE24FLiK4/og+S9txr5EfmbhMk4rKqbSFxXIYIsFki2Lpw1w/pHAM
 wgbRX1gYG599Sw1Y5sBHmqoOd03ETA2DXuBIezpnTlnmuitNtw183WWa+q+3HL/HJqW/gvcbmZ
 8GQ=
X-SBRS: 2.7
X-MesageID: 22830947
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,358,1589256000"; d="scan'208";a="22830947"
Date: Thu, 16 Jul 2020 12:31:10 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Subject: Re: [PATCH v3 1/2] x86: restore pv_rtc_handler() invocation
Message-ID: <20200716103110.GM7191@Air-de-Roger>
References: <5426dd6f-50cd-dc23-5c6b-0ab631d98d38@suse.com>
 <7dd4b668-06ca-807a-9cc1-77430b2376a8@suse.com>
 <20200715121347.GY7191@Air-de-Roger>
 <2b9de0fd-5973-ed66-868c-ffadca83edf3@suse.com>
 <20200715133217.GZ7191@Air-de-Roger>
 <cd08f928-2be9-314b-56e6-bb414247caff@suse.com>
 <20200715145144.GA7191@Air-de-Roger>
 <ff1926c7-cc21-03ad-1dff-53c703450151@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <ff1926c7-cc21-03ad-1dff-53c703450151@suse.com>
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Paul Durrant <paul@xen.org>, Wei Liu <wl@xen.org>, Andrew
 Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Thu, Jul 16, 2020 at 12:06:14PM +0200, Jan Beulich wrote:
> On 15.07.2020 16:51, Roger Pau Monné wrote:
> > On Wed, Jul 15, 2020 at 03:51:17PM +0200, Jan Beulich wrote:
> >> On 15.07.2020 15:32, Roger Pau Monné wrote:
> >>> Feel free to change to ACCESS_ONCE or barrier if you think it's
> >>> clearer.
> >>
> >> I did so (also on the writer side), not the least based on guessing
> >> what Andrew would presumably have preferred.
> > 
> > Thanks! Sorry I might be pedantic, but is the ACCESS_ONCE on the write
> > side actually required? I'm not sure I see what ACCESS_ONCE protects
> > against in handle_rtc_once.
> 
> Well, this is all sort of a mess, I think. We have this mixture of
> ACCESS_ONCE() and read_atomic() / write_atomic(), but I don't think
> we use them consistently, and I'm not sure either is suitable to
> deal with all (theoretical) corner cases.
> 
> read_atomic() / write_atomic() guarantee a single insn to be used
> to access a piece of data. I'm uncertain whether they also guarantee
> single access (i.e. that the compiler won't replicate the asm()-s).

Yes, that would be my expectation from my reading of the manual, as
it prevents gcc from doing the following: "move it out of loops or
omit it on the assumption that the result from a previous call is
still valid".

> The wording in gcc doc is pretty precise, but not quite enough imo
> to be entirely certain.

I agree it's not that precise.

> ACCESS_ONCE() guarantees single access, but doesn't guarantee that
> the compiler wouldn't split this single access into multiple insns.
> (It's just, like elsewhere, that it would be pretty silly of it if
> it did.)
> 
> Yesterday, as said, I tried to in particular do what I expect/guess
> Andrew would have wanted done. This is despite me not being entirely
> convinced this is the right thing to do here, i.e. personally I
> would have preferred read_atomic() / write_atomic(), as I think the
> intention of what the gcc doc is saying is what we want (taking
> into consideration both uses of "volatile" in these helpers).

Well, gcc states:

"Note that the compiler can move even volatile asm instructions
relative to other code, including across jump instructions."

So I think we likely want to use {read/write}_atomic plus a compiler
barrier? I'm not sure anyway how the read of pv_rtc_handler could be
moved, but I guess I'm not that creative :).

AFAICT we require a write_atomic in handle_rtc_once in order to ensure
a single instruction is used (no barrier required), and then we
require a read_atomic + a compiler barrier in rtc_guest_write in order
to prevent the compiler from optimizing the accesses to 'hook' in any
way? (that barrier might not be strictly required, as you say it's not
fully clear whether 'asm volatile' doesn't provide the necessary
protection here).

Thanks.


From xen-devel-bounces@lists.xenproject.org Thu Jul 16 10:34:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jul 2020 10:34:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jw1Dy-000799-VO; Thu, 16 Jul 2020 10:34:46 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=oOKz=A3=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jw1Dx-000793-QK
 for xen-devel@lists.xenproject.org; Thu, 16 Jul 2020 10:34:45 +0000
X-Inumbo-ID: f6b02328-c74f-11ea-949b-12813bfff9fa
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f6b02328-c74f-11ea-949b-12813bfff9fa;
 Thu, 16 Jul 2020 10:34:44 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1594895685;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:content-transfer-encoding:in-reply-to;
 bh=xdME8FFOAnP4xOI4yWGeI2hniWTXZr14SdbFrNhcUzI=;
 b=EzFClyhv5lbi2ru6O+t1LpxQ5G6533j9AaNDjkzePtrOz8SoyitHOHmF
 cDLnuINA2/sS7hlDoWZJLtqxrolMWxQglyliQoIV/e4MfYj7LRQAFRNO1
 t+tslbEyA5/e3RKrHJXNPzjpfZ5E8jaJZHuKzoCqNvWDqHxjvN/nO7Qju I=;
Authentication-Results: esa3.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: hAOVjV6VmwPMPFTql9GX7W71qPlw21i+K1BM3+mOVGPxx1RwIO8BCPIX7yBDY1HLd9yCpXHKX8
 Lw4/gqnY3qCVgBmObx4XRJq0YHjnTA6prbckmG+D23Rr0v8qpJFK0XJJ1mwtKUUzv096WvvGtv
 AlDrmGjVZ2I55iYmMgl4AGu7MK2H2pHCgpYbkDPJfxoQH3vUXf6DQoCJs86/1hcbYogVPpP2q1
 5zuRTFARMtlTf7STAGPa2zSE7ANyMfmOJQ+TokzWNFQaOonhx3O/uRWC9cHdVduV+QjQulDcYU
 z4w=
X-SBRS: 2.7
X-MesageID: 22510858
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,359,1589256000"; d="scan'208";a="22510858"
Date: Thu, 16 Jul 2020 12:34:37 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Subject: Re: [PATCH 1/2] VT-d: install sync_cache hook on demand
Message-ID: <20200716103437.GN7191@Air-de-Roger>
References: <2ad1607d-0909-f1cc-83bf-2546b28a9128@suse.com>
 <0036b69f-0d56-9ac4-1afa-06640c9007de@suse.com>
 <20200716101409.GK7191@Air-de-Roger>
 <a051d3e7-eaf5-6121-823b-db1a737bc085@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <a051d3e7-eaf5-6121-823b-db1a737bc085@suse.com>
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Kevin
 Tian <kevin.tian@intel.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Thu, Jul 16, 2020 at 12:25:40PM +0200, Jan Beulich wrote:
> On 16.07.2020 12:14, Roger Pau Monné wrote:
> > On Wed, Jul 15, 2020 at 12:03:57PM +0200, Jan Beulich wrote:
> >> Instead of checking inside the hook whether any non-coherent IOMMUs are
> >> present, simply install the hook only when this is the case.
> >>
> >> To prove that there are no other references to the now dynamically
> >> updated ops structure (and hence that its updating happens early
> >> enough), make it static and rename it at the same time.
> >>
> >> Note that this change implies that sync_cache() shouldn't be called
> >> directly unless there are unusual circumstances, like is the case in
> >> alloc_pgtable_maddr(), which gets invoked too early for iommu_ops to
> >> be set already (and therefore we also need to be careful there to
> >> avoid accessing vtd_ops later on, as it lives in .init).
> >>
> >> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> > 
> > I think this is slightly better than what we currently have:
> > 
> > Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
> 
> Thanks.
> 
> > I would however prefer if we also added a check to assert that
> > alloc_pgtable_maddr is never called before iommu_alloc.
> 
> It would be quite odd for this to happen - what point would
> there be to allocate a table to hang off of an IOMMU when
> no IOMMU was found at all so far? Furthermore, such a
> restriction could either be viewed to not suffice afaict (as
> a subsequent iommu_alloc() may in principle find a
> non-coherent IOMMU), or to be pointless (until a non-coherent
> IOMMU was found and allocated any table for, it doesn't
> really matter whether we sync cache). In the end, whether to
> sync cache in alloc_pgtable_maddr() doesn't really depend on
> any global property, but only on the property / properties
> of the IOMMU(s) the table is going to be made visible to.

Right, I think I was indeed overly paranoid. I was mostly worried
about iommu_alloc calling alloc_pgtable_maddr before checking whether
the IOMMU is incoherent, but this is unlikely to happen.

Thanks.


From xen-devel-bounces@lists.xenproject.org Thu Jul 16 10:53:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jul 2020 10:53:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jw1VY-0000Nv-HJ; Thu, 16 Jul 2020 10:52:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=dhnJ=A3=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jw1VW-0000NC-OF
 for xen-devel@lists.xenproject.org; Thu, 16 Jul 2020 10:52:54 +0000
X-Inumbo-ID: 7fbf0128-c752-11ea-b7bb-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7fbf0128-c752-11ea-b7bb-bc764e2007e4;
 Thu, 16 Jul 2020 10:52:53 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 9C88FAD1F;
 Thu, 16 Jul 2020 10:52:56 +0000 (UTC)
Subject: Re: [PATCH v3 1/2] x86: restore pv_rtc_handler() invocation
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <5426dd6f-50cd-dc23-5c6b-0ab631d98d38@suse.com>
 <7dd4b668-06ca-807a-9cc1-77430b2376a8@suse.com>
 <20200715121347.GY7191@Air-de-Roger>
 <2b9de0fd-5973-ed66-868c-ffadca83edf3@suse.com>
 <20200715133217.GZ7191@Air-de-Roger>
 <cd08f928-2be9-314b-56e6-bb414247caff@suse.com>
 <20200715145144.GA7191@Air-de-Roger>
 <ff1926c7-cc21-03ad-1dff-53c703450151@suse.com>
 <20200716103110.GM7191@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <1ab50fc4-fbf4-7a5d-e98b-48442df5d2ca@suse.com>
Date: Thu, 16 Jul 2020 12:52:53 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20200716103110.GM7191@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Paul Durrant <paul@xen.org>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 16.07.2020 12:31, Roger Pau Monné wrote:
> On Thu, Jul 16, 2020 at 12:06:14PM +0200, Jan Beulich wrote:
>> On 15.07.2020 16:51, Roger Pau Monné wrote:
>>> On Wed, Jul 15, 2020 at 03:51:17PM +0200, Jan Beulich wrote:
>>>> On 15.07.2020 15:32, Roger Pau Monné wrote:
>>>>> Feel free to change to ACCESS_ONCE or barrier if you think it's
>>>>> clearer.
>>>>
>>>> I did so (also on the writer side), not the least based on guessing
>>>> what Andrew would presumably have preferred.
>>>
>>> Thanks! Sorry I might be pedantic, but is the ACCESS_ONCE on the write
>>> side actually required? I'm not sure I see what ACCESS_ONCE protects
>>> against in handle_rtc_once.
>>
>> Well, this is all sort of a mess, I think. We have this mixture of
>> ACCESS_ONCE() and read_atomic() / write_atomic(), but I don't think
>> we use them consistently, and I'm not sure either is suitable to
>> deal with all (theoretical) corner cases.
>>
>> read_atomic() / write_atomic() guarantee a single insn to be used
>> to access a piece of data. I'm uncertain whether they also guarantee
>> single access (i.e. that the compiler won't replicate the asm()-s).
> 
> Yes, that would be my expectation from my reading of the manual, as
> it prevents gcc from: "move it out of loops or omit it on the
> assumption that the result from a previous call is still valid".
> 
>> The wording in gcc doc is pretty precise, but not quite enough imo
>> to be entirely certain.
> 
> I agree it's not that precise.
> 
>> ACCESS_ONCE() guarantees single access, but doesn't guarantee that
>> the compiler wouldn't split this single access into multiple insns.
>> (It's just, like elsewhere, that it would be pretty silly of it if
>> it did.)
>>
>> Yesterday, as said, I tried to in particular do what I expect/guess
>> Andrew would have wanted done. This is despite me not being entirely
>> convinced this is the right thing to do here, i.e. personally I
>> would have preferred read_atomic() / write_atomic(), as I think the
>> intention of what the gcc doc is saying is what we want (taking
>> into consideration both uses of "volatile" in these helpers).
> 
> Well, gcc states:
> 
> "Note that the compiler can move even volatile asm instructions
> relative to other code, including across jump instructions."
> 
> So I think we likely want to use {read/write}_atomic plus a compiler
> barrier? I'm not sure anyway how the read of pv_rtc_handler could be
> moved, but I guess I'm not that creative :).
> 
> AFAICT we require a write_atomic in handle_rtc_once in order to ensure
> a single instruction is used (no barrier required), and then we
> require a read_atomic + a compiler barrier in rtc_guest_write in order
> to prevent the compiler from optimizing the accesses to 'hook' in any
> way? (that barrier might not be strictly required, as you say it's not
> fully clear whether 'asm volatile' doesn't provide the necessary
> protection here).

Yes, but I'd really like to wait for Andrew's input here before we
make any further adjustments. In any event, what went in yesterday is
already better than the original code that needed restoring.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jul 16 11:24:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jul 2020 11:24:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jw202-0002y9-4o; Thu, 16 Jul 2020 11:24:26 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=dhnJ=A3=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jw200-0002y4-S8
 for xen-devel@lists.xenproject.org; Thu, 16 Jul 2020 11:24:24 +0000
X-Inumbo-ID: e65b635a-c756-11ea-bb8b-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e65b635a-c756-11ea-bb8b-bc764e2007e4;
 Thu, 16 Jul 2020 11:24:24 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id BDCC0AB7D;
 Thu, 16 Jul 2020 11:24:26 +0000 (UTC)
Subject: Re: [PATCH v2] docs: specify stability of hypfs path documentation
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
References: <20200713140338.16172-1-jgross@suse.com>
 <8a96b1b9-cbcb-557a-5b82-661bbe40fe25@suse.com>
 <68F727A8-29B8-4846-8BE9-BD4F6E0DC60D@citrix.com>
 <9f5e86cc-4f64-982b-d84b-1de6b2739a2b@suse.com>
 <4c681c7c-be69-7e1a-4cd9-c9e05fe85372@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <2567da9b-be43-3f0d-e213-562b5454f4b7@suse.com>
Date: Thu, 16 Jul 2020 13:24:24 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <4c681c7c-be69-7e1a-4cd9-c9e05fe85372@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Paul Durrant <paul@xen.org>,
 Andrew Cooper <Andrew.Cooper3@citrix.com>,
 George Dunlap <George.Dunlap@citrix.com>, Ian Jackson <Ian.Jackson@citrix.com>,
 "open list:X86" <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 16.07.2020 12:31, Jürgen Groß wrote:
> On 16.07.20 12:11, Jan Beulich wrote:
>> On 15.07.2020 16:37, George Dunlap wrote:
>>> It sounds like you’re saying:
>>>
>>> 1. Paths listed without conditions will always be available
>>>
>>> 2. Paths listed with conditions may be extended: i.e., a node currently listed as PV might also become available for HVM guests
>>>
>>> 3. Paths listed with conditions might have those conditions reduced, but will never entirely disappear.  So something currently listed as PV might be reduced to CONFIG_HAS_FOO, but won’t be completely removed.
>>>
>>> Is that what you meant?
>>
>> I see Jürgen replied "yes" to this, but I'm not sure about 1.
>> above: I think it's quite reasonable to expect that paths without
>> a condition may gain one. Just like paths currently having a
>> condition and (perhaps temporarily) losing it shouldn't all of a
>> sudden become "always available" when they weren't meant to be.
>>
>> As far as 3. goes, I'm also unsure how far this is any better
>> stability-wise (from a consumer pov) than allowing paths to
>> disappear entirely.
> 
> The idea is that any user tool using hypfs can rely on paths under 1 to
> exist, while the ones under 3 might not be there due to the hypervisor
> config or the system in use.
> 
> A path not being allowed to entirely disappear ensures that it remains
> in the documentation, so the same path can't be reused for something
> different in the future.

And then how do you deal with a condition getting dropped, and
later wanting to get re-added? Do we need a placeholder condition
like [ALWAYS] or [TRUE]?

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jul 16 11:29:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jul 2020 11:29:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jw257-00039d-Vk; Thu, 16 Jul 2020 11:29:41 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=sYN5=A3=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jw256-00039J-Tv
 for xen-devel@lists.xenproject.org; Thu, 16 Jul 2020 11:29:40 +0000
X-Inumbo-ID: 9f03543a-c757-11ea-bca7-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9f03543a-c757-11ea-bca7-bc764e2007e4;
 Thu, 16 Jul 2020 11:29:33 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=Vb+vMPU8eRVarfTSnk/X91pdMwXE+rR1yz02SBIGOIM=; b=CYlRfazmcqdO1czF4rD+RK9QG
 JKjdND6dRmrki9ILdeUeMXPBciMyfVWcZXc5Mth+Q15R2W/45/8Alp5u392TJzgPoPoWLUC3ggCoH
 IaUTm1ICvX6Fxb1fxNYdFKCAR6+G1Ig0K8MNYbVQNPqvkGVuFUbppqxjgMBDdiBzvqdpE=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jw24z-00041A-0l; Thu, 16 Jul 2020 11:29:33 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jw24y-0006lO-EP; Thu, 16 Jul 2020 11:29:32 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jw24y-00033p-Dm; Thu, 16 Jul 2020 11:29:32 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151926-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 151926: tolerable FAIL - PUSHED
X-Osstest-Failures: xen-unstable:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:allowable
 xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=f8fe3c07363d11fc81d8e7382dbcaa357c861569
X-Osstest-Versions-That: xen=165f3afbfc3db70fcfdccad07085cde0a03c858b
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 16 Jul 2020 11:29:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151926 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151926/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     18 guest-localmigrate/x10   fail REGR. vs. 151884

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 151884
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 151884
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 151884
 test-armhf-armhf-xl-rtds     16 guest-start/debian.repeat    fail  like 151884
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 151884
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 151884
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 151884
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 151884
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 151884
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop             fail like 151884
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  f8fe3c07363d11fc81d8e7382dbcaa357c861569
baseline version:
 xen                  165f3afbfc3db70fcfdccad07085cde0a03c858b

Last test of basis   151884  2020-07-14 04:47:14 Z    2 days
Failing since        151903  2020-07-15 01:07:47 Z    1 days    2 attempts
Testing same since   151926  2020-07-15 20:02:43 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   165f3afbfc..f8fe3c0736  f8fe3c07363d11fc81d8e7382dbcaa357c861569 -> master


From xen-devel-bounces@lists.xenproject.org Thu Jul 16 11:34:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jul 2020 11:34:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jw29x-00041v-RU; Thu, 16 Jul 2020 11:34:41 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=1QOp=A3=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jw29x-00041q-A2
 for xen-devel@lists.xenproject.org; Thu, 16 Jul 2020 11:34:41 +0000
X-Inumbo-ID: 55f7cc7a-c758-11ea-94a4-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 55f7cc7a-c758-11ea-94a4-12813bfff9fa;
 Thu, 16 Jul 2020 11:34:40 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 93702AE44;
 Thu, 16 Jul 2020 11:34:43 +0000 (UTC)
Subject: Re: [PATCH v2] docs: specify stability of hypfs path documentation
To: Jan Beulich <jbeulich@suse.com>
References: <20200713140338.16172-1-jgross@suse.com>
 <8a96b1b9-cbcb-557a-5b82-661bbe40fe25@suse.com>
 <68F727A8-29B8-4846-8BE9-BD4F6E0DC60D@citrix.com>
 <9f5e86cc-4f64-982b-d84b-1de6b2739a2b@suse.com>
 <4c681c7c-be69-7e1a-4cd9-c9e05fe85372@suse.com>
 <2567da9b-be43-3f0d-e213-562b5454f4b7@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <757f5f78-6ec9-c740-18bf-a01105d552d7@suse.com>
Date: Thu, 16 Jul 2020 13:34:38 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.9.0
MIME-Version: 1.0
In-Reply-To: <2567da9b-be43-3f0d-e213-562b5454f4b7@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Paul Durrant <paul@xen.org>,
 Andrew Cooper <Andrew.Cooper3@citrix.com>,
 George Dunlap <George.Dunlap@citrix.com>, Ian Jackson <Ian.Jackson@citrix.com>,
 "open list:X86" <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 16.07.20 13:24, Jan Beulich wrote:
> On 16.07.2020 12:31, Jürgen Groß wrote:
>> On 16.07.20 12:11, Jan Beulich wrote:
>>> On 15.07.2020 16:37, George Dunlap wrote:
>>>> It sounds like you’re saying:
>>>>
>>>> 1. Paths listed without conditions will always be available
>>>>
>>>> 2. Paths listed with conditions may be extended: i.e., a node currently listed as PV might also become available for HVM guests
>>>>
>>>> 3. Paths listed with conditions might have those conditions reduced, but will never entirely disappear.  So something currently listed as PV might be reduced to CONFIG_HAS_FOO, but won’t be completely removed.
>>>>
>>>> Is that what you meant?
>>>
>>> I see Jürgen replied "yes" to this, but I'm not sure about 1.
>>> above: I think it's quite reasonable to expect that paths without
>>> condition may gain a condition. Just like paths now having a
>>> condition and (perhaps temporarily) losing it shouldn't all of a
>>> sudden become "always available" when they weren't meant to
>>> be.
>>>
>>> As far as 3. goes, I'm also unsure to what extent this is any
>>> better stability-wise (from a consumer pov) than allowing paths to
>>> entirely disappear.
>>
>> The idea is that any user tool using hypfs can rely on paths under 1
>> to exist, while the ones under 3 might not be there due to the
>> hypervisor config or the system in use.
>>
>> A path not being allowed to entirely disappear ensures that it remains
>> in the documentation, so the same path can't be reused for something
>> different in future.
> 
> And then how do you deal with a condition getting dropped, and
> later wanting to get re-added? Do we need a placeholder condition
> like [ALWAYS] or [TRUE]?

Dropping a condition has to be considered very carefully, same as
introducing a new path without any condition.

In the worst case you can still go with [CONFIG_HYPFS].
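[For illustration only; the exact notation below is hypothetical and not quoted from the actual hypfs path documentation. It merely sketches the distinction the thread draws between an unconditional entry (point 1) and a conditional one (point 3):]

```
#### /buildinfo/version = STRING

Unconditional: consumers may rely on this path existing.

#### /params/{param} = STRING [CONFIG_HYPFS]

Present only when the hypervisor was built with CONFIG_HYPFS. The
condition may later be relaxed or changed, but the path itself stays
documented and is never reused for something different.
```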


Juergen


From xen-devel-bounces@lists.xenproject.org Thu Jul 16 11:41:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jul 2020 11:41:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jw2Gh-0004s1-Jw; Thu, 16 Jul 2020 11:41:39 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=oOKz=A3=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jw2Gg-0004rw-E9
 for xen-devel@lists.xenproject.org; Thu, 16 Jul 2020 11:41:38 +0000
X-Inumbo-ID: 4e9453ee-c759-11ea-94a5-12813bfff9fa
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4e9453ee-c759-11ea-94a5-12813bfff9fa;
 Thu, 16 Jul 2020 11:41:37 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1594899697;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:content-transfer-encoding:in-reply-to;
 bh=GSo6+S84yGEpCHdoBUFu/eMIj/LXkStHwfl7+iH6gHY=;
 b=Sutngue4IPHv792Wv6IJb3tfopSatQ9fxv2o7NbJq5VjOC2iFVjAunp7
 QyRKnNf3Q04A7Wb4Cc8k5ea1gyRvAxlESAQx0HAfqm79HDTMAqbSUjIci
 VHPWI0hyNQmQi4+FsUyQJu4MQutF3kJ/Y2pshWEq0hmhs9YXQOD0af5gW k=;
Authentication-Results: esa5.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: 3H88suKxwGm3a3+3NSWMAjkXsT76LRHv8f00wQwjNoQo9p19eU9K7XOKR4D07zkZjnwcxJengj
 Kyin4DMI18exrzbdZFMEdDiRnjx5Rvr29rK6p4TGVl43+RiROPqCTbCzHudTJc15+d1WN3PSpu
 GZHmTvO4wB5BK+LZYjBBqin62vDCpJn/wdPEZZ3WuGBK6VBYc/AWL1HkO581/BKscCcfaVqGkL
 BsMCRQrGZGMxFIG0VTv9cf7DMUAMzbsWgaEqqWzjXgbXTL6V+iOdGd0Z6onulpwc8d68ZdnxWk
 Dm8=
X-SBRS: 2.7
X-MesageID: 22712951
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,359,1589256000"; d="scan'208";a="22712951"
Date: Thu, 16 Jul 2020 13:41:28 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Subject: Re: [PATCH 1/2] common: map_vcpu_info() cosmetics
Message-ID: <20200716114128.GO7191@Air-de-Roger>
References: <fef45e49-bcb4-dc11-8f7f-b2f5e4bf1a73@suse.com>
 <2a0341c7-3836-a8c0-9516-b6a08e2720ec@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <2a0341c7-3836-a8c0-9516-b6a08e2720ec@suse.com>
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, George Dunlap <George.Dunlap@eu.citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Wed, Jul 15, 2020 at 12:15:10PM +0200, Jan Beulich wrote:
> Use ENXIO instead of EINVAL to cover the two cases of the address not
> satisfying the requirements. This will make an issue here better stand
> out at the call site.

Not sure whether I would use EFAULT instead of ENXIO, as the
description of it is 'bad address', which seems more in line with the
error that we are trying to report.

> Also add a missing compat-mode related size check: If the sizes
> differed, other code in the function would need changing. Accompany this
> by a change to the initial sizeof() expression, tying it to the type of
> the variable we're actually after (matching e.g. the alignof() added by
> XSA-327).
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks.


From xen-devel-bounces@lists.xenproject.org Thu Jul 16 11:48:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jul 2020 11:48:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jw2Ng-00054s-Ae; Thu, 16 Jul 2020 11:48:52 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=dhnJ=A3=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jw2Ne-00054n-UQ
 for xen-devel@lists.xenproject.org; Thu, 16 Jul 2020 11:48:50 +0000
X-Inumbo-ID: 506b140e-c75a-11ea-b7bb-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 506b140e-c75a-11ea-b7bb-bc764e2007e4;
 Thu, 16 Jul 2020 11:48:50 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 47592B931;
 Thu, 16 Jul 2020 11:48:53 +0000 (UTC)
Subject: Re: [PATCH 1/2] common: map_vcpu_info() cosmetics
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <fef45e49-bcb4-dc11-8f7f-b2f5e4bf1a73@suse.com>
 <2a0341c7-3836-a8c0-9516-b6a08e2720ec@suse.com>
 <20200716114128.GO7191@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <03a4d446-c26b-f171-57fd-a1eb13dad6c0@suse.com>
Date: Thu, 16 Jul 2020 13:48:51 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20200716114128.GO7191@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, George Dunlap <George.Dunlap@eu.citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 16.07.2020 13:41, Roger Pau Monné wrote:
> On Wed, Jul 15, 2020 at 12:15:10PM +0200, Jan Beulich wrote:
>> Use ENXIO instead of EINVAL to cover the two cases of the address not
>> satisfying the requirements. This will make an issue here better stand
>> out at the call site.
> 
> Not sure whether I would use EFAULT instead of ENXIO, as the
> description of it is 'bad address', which seems more in line with the
> error that we are trying to report.

The address isn't bad in the sense of causing a fault, it's just
that we elect to not allow it. Hence I don't think EFAULT is
suitable. I'm open to replacement suggestions for ENXIO, though.
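[To make the distinction concrete, here is a minimal standalone sketch -- not Xen's actual map_vcpu_info() code; the helper name, the alignment requirement, and the page-crossing rule are invented for illustration -- of returning -ENXIO for an address that is rejected by policy, so the failure stands out from a generic -EINVAL at the call site:]

```c
/* Illustrative sketch only (hypothetical helper, not Xen code): a
 * dedicated errno value lets a caller tell "address rejected by our
 * placement policy" (ENXIO) apart from other invalid arguments
 * (EINVAL), which is the point being made above. */
#include <assert.h>
#include <errno.h>
#include <stdint.h>

#define PAGE_SIZE 4096u

/* Reject guest addresses that are misaligned or straddle a page. */
static int check_vcpu_info_addr(uint64_t gaddr, unsigned int size)
{
    if ( gaddr & 3 )                        /* insufficient alignment */
        return -ENXIO;
    if ( (gaddr & (PAGE_SIZE - 1)) + size > PAGE_SIZE )
        return -ENXIO;                      /* crosses a page boundary */
    return 0;
}
```

[A caller that propagates -ENXIO unchanged then reports a distinct error for "address not acceptable", while -EINVAL remains available for genuinely malformed arguments.]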

>> Also add a missing compat-mode related size check: If the sizes
>> differed, other code in the function would need changing. Accompany this
>> by a change to the initial sizeof() expression, tying it to the type of
>> the variable we're actually after (matching e.g. the alignof() added by
>> XSA-327).
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks, Jan


From xen-devel-bounces@lists.xenproject.org Thu Jul 16 11:54:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jul 2020 11:54:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jw2TG-0005um-3z; Thu, 16 Jul 2020 11:54:38 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=AXgs=A3=citrix.com=george.dunlap@srs-us1.protection.inumbo.net>)
 id 1jw2TE-0005uh-BC
 for xen-devel@lists.xenproject.org; Thu, 16 Jul 2020 11:54:36 +0000
X-Inumbo-ID: 1e16853c-c75b-11ea-bb8b-bc764e2007e4
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1e16853c-c75b-11ea-bb8b-bc764e2007e4;
 Thu, 16 Jul 2020 11:54:35 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1594900476;
 h=from:to:subject:date:message-id:content-id:
 content-transfer-encoding:mime-version;
 bh=VZ84eSJHuEXHFeY9Oq583Sd5WV44pROfefSm7ED9P8A=;
 b=XgTbwsrKHz5B6LGpy9pnsVegECm1jCXR6WIH0j98/HWu7e7ln2Mcnsud
 8EpcOqsjYclJqT1r0O872ryfJ/q1B0t6/zaOavaNWA+uXQ1o94xcw06rA
 wLK03hGXPJYf624QxwvkiX4oiMUALXaSfhAnVn7MEzNiTGs8xHWCwijeZ c=;
Authentication-Results: esa1.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: hSgwiYzQK5sUKsr3JM2ioRs/WbQFPbZnxLG0eJv7z/VL7K3V6ARoMpsUojGK8WYQDHJfBHFlB0
 n4NEItZ7txnZDnLmmzmbj7tnzO2Nmri7w5QPmtjDKwOmBvAZJTnS7xWvfhtAZTYL7XHZITDU5o
 CUKe2jNgqJNQaY6fn1VPwojEQgUlly+4bvRoJ+Pk+hFwvPjqzBlaRJV/Z+6G1dw+Um/F0Ti6SE
 oH6NFjbjcZ0jii00awptbm1YaZEJJxg3CY3+/aAoI1KlV50TaKCkfUj1jxvfcJLkRXeDDcQu9V
 at8=
X-SBRS: 2.7
X-MesageID: 22835307
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,359,1589256000"; d="scan'208";a="22835307"
From: George Dunlap <George.Dunlap@citrix.com>
To: xen-devel <xen-devel@lists.xenproject.org>
Subject: Saved notes for design sessions
Thread-Topic: Saved notes for design sessions
Thread-Index: AQHWW2fcrzg6m88NVUmqDHsRdn6IYw==
Date: Thu, 16 Jul 2020 11:54:29 +0000
Message-ID: <8D97F48E-1948-4C1D-965F-0B42797516DD@citrix.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-mailer: Apple Mail (2.3608.80.23.2.2)
x-ms-exchange-messagesentrepresentingtype: 1
x-ms-exchange-transport-fromentityheader: Hosted
Content-Type: text/plain; charset="us-ascii"
Content-ID: <2E51A99BF8E6A140922844A77786B437@citrix.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hey all,

PDFs of the saved shared notes for the design sessions can be found in this zipfile:

https://cryptpad.fr/file/#/2/file/LoJZpSq+vHKNoisVqdsPj3Z9/

The files are labeled with the start/end time and the room in which they were held; that should help you find out the notes which are pertinent to you.

Please remember to post a summary of your design session to xen-devel for posterity.

Thanks!

 -George


From xen-devel-bounces@lists.xenproject.org Thu Jul 16 12:06:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jul 2020 12:06:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jw2f4-0006vi-Sj; Thu, 16 Jul 2020 12:06:50 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=dhnJ=A3=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jw2f3-0006vd-LE
 for xen-devel@lists.xenproject.org; Thu, 16 Jul 2020 12:06:49 +0000
X-Inumbo-ID: d31373c2-c75c-11ea-94ab-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d31373c2-c75c-11ea-94ab-12813bfff9fa;
 Thu, 16 Jul 2020 12:06:48 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 7A511AFBF;
 Thu, 16 Jul 2020 12:06:51 +0000 (UTC)
Subject: Re: [PATCH 2/2] evtchn/fifo: don't enforce higher than necessary
 alignment
To: Julien Grall <julien.grall.oss@gmail.com>
References: <fef45e49-bcb4-dc11-8f7f-b2f5e4bf1a73@suse.com>
 <e47a9ef5-5f4c-1ca6-1b31-f7b10516e5ed@suse.com>
 <CAJ=z9a1AWYYVGwHWOct9j3bVDhPtWG7R3tQY05+6BY-9g3C1kQ@mail.gmail.com>
 <005381d5-3fb5-640f-002c-106c628a77a2@suse.com>
 <CAJ=z9a0LBhO7qJyF-WdBnkD52dXew-TgjTuUC7aeoS8rC13iwQ@mail.gmail.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <083b8df6-4587-b8dc-e501-4ed8e47aac51@suse.com>
Date: Thu, 16 Jul 2020 14:06:49 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <CAJ=z9a0LBhO7qJyF-WdBnkD52dXew-TgjTuUC7aeoS8rC13iwQ@mail.gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 George Dunlap <George.Dunlap@eu.citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@citrix.com>,
 xen-devel <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 15.07.2020 16:02, Julien Grall wrote:
> On Wed, 15 Jul 2020, 14:42 Jan Beulich, <jbeulich@suse.com> wrote:
> 
>> On 15.07.2020 12:46, Julien Grall wrote:
>>> On Wed, 15 Jul 2020, 12:17 Jan Beulich, <jbeulich@suse.com> wrote:
>>>
>>>> Neither the code nor the original commit provide any justification for
>>>> the need to 8-byte align the struct in all cases. Enforce just as much
>>>> alignment as the structure actually needs - 4 bytes - by using alignof()
>>>> instead of a literal number.
>>>>
>>>> Take the opportunity and also
>>>> - add so far missing validation that native and compat mode layouts of
>>>>   the structures actually match,
>>>> - tie sizeof() expressions to the types of the fields we're actually
>>>>   after, rather than specifying the type explicitly (which in the
>>>>   general case risks a disconnect, even if there's close to zero risk in
>>>>   this particular case),
>>>> - use ENXIO instead of EINVAL for the two cases of the address not
>>>>   satisfying the requirements, which will make an issue here better
>>>>   stand out at the call site.
>>>>
>>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>>> ---
>>>> I question the need for the array_index_nospec() here. Or else I'd
>>>> expect map_vcpu_info() would also need the same.
>>>>
>>>> --- a/xen/common/event_fifo.c
>>>> +++ b/xen/common/event_fifo.c
>>>> @@ -504,6 +504,16 @@ static void setup_ports(struct domain *d
>>>>      }
>>>>  }
>>>>
>>>> +#ifdef CONFIG_COMPAT
>>>> +
>>>> +#include <compat/event_channel.h>
>>>> +
>>>> +#define xen_evtchn_fifo_control_block evtchn_fifo_control_block
>>>> +CHECK_evtchn_fifo_control_block;
>>>> +#undef xen_evtchn_fifo_control_block
>>>> +
>>>> +#endif
>>>> +
>>>>  int evtchn_fifo_init_control(struct evtchn_init_control *init_control)
>>>>  {
>>>>      struct domain *d = current->domain;
>>>> @@ -523,19 +533,20 @@ int evtchn_fifo_init_control(struct evtc
>>>>          return -ENOENT;
>>>>
>>>>      /* Must not cross page boundary. */
>>>> -    if ( offset > (PAGE_SIZE - sizeof(evtchn_fifo_control_block_t)) )
>>>> -        return -EINVAL;
>>>> +    if ( offset > (PAGE_SIZE - sizeof(*v->evtchn_fifo->control_block))
>> )
>>>> +        return -ENXIO;
>>>>
>>>>      /*
>>>>       * Make sure the guest controlled value offset is bounded even
>> during
>>>>       * speculative execution.
>>>>       */
>>>>      offset = array_index_nospec(offset,
>>>> -                           PAGE_SIZE -
>>>> sizeof(evtchn_fifo_control_block_t) + 1);
>>>> +                                PAGE_SIZE -
>>>> +                                sizeof(*v->evtchn_fifo->control_block)
>> +
>>>> 1);
>>>>
>>>> -    /* Must be 8-bytes aligned. */
>>>> -    if ( offset & (8 - 1) )
>>>> -        return -EINVAL;
>>>> +    /* Must be suitably aligned. */
>>>> +    if ( offset & (alignof(*v->evtchn_fifo->control_block) - 1) )
>>>> +        return -ENXIO;
>>>>
>>>
>>> A guest relying on this new alignment wouldn't work on older version of
>>> Xen. So I don't think a guest would ever be able to use it.
>>>
>>> Therefore is it really worth the change?
>>
>> That's the question. One of your arguments for using a literal number
>> also for the vCPU info mapping check was that here a literal number
>> is used. The goal isn't so much relaxation of the interface, but
>> making the code consistent as well as eliminating what I'd call a
>> kludge.
>>
> 
> Your commit message led me to think the relaxation is the key motivation for
> changing the code.

I've added a clarifying sentence.

>> Guests not caring to be able to run on older versions could also make
>> use of the relaxation (which may be more relevant in 10 years' time
>> than it is now).
> 
> 
> That makes sense. However, I am a bit concerned that an OS developer may
> not notice the alignment problem with older versions.
> 
> I would suggest at least documenting the expected alignment in the public
> header.

Done for v2.

Jan
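The two checks being adjusted in the patch above can be illustrated with a small stand-alone sketch. The struct layout here is hypothetical (only its member widths matter for alignof()), and offset_ok() merely mirrors the page-boundary and alignment tests from evtchn_fifo_init_control(); it is not the hypervisor code itself:

```c
#include <assert.h>
#include <stdalign.h>
#include <stdint.h>

#define PAGE_SIZE 4096UL

/* Hypothetical control-block layout: all members are 32-bit, so
 * alignof() evaluates to 4, where the old code insisted on a literal 8. */
struct control_block {
    uint32_t ready;
    uint32_t _rsvd;
    uint32_t head[16];
};

/* Mirrors the two checks in evtchn_fifo_init_control(): the block must
 * not cross a page boundary and must be suitably aligned. */
static int offset_ok(unsigned long offset)
{
    if (offset > PAGE_SIZE - sizeof(struct control_block))
        return 0;                      /* would cross the page boundary */
    if (offset & (alignof(struct control_block) - 1))
        return 0;                      /* insufficiently aligned */
    return 1;
}
```

With this layout, offset 4 is now accepted even though the previous literal-8 mask would have rejected it, which is exactly the relaxation under discussion.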


From xen-devel-bounces@lists.xenproject.org Thu Jul 16 12:21:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jul 2020 12:21:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jw2tL-00006Y-43; Thu, 16 Jul 2020 12:21:35 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=dhnJ=A3=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jw2tJ-00006T-4K
 for xen-devel@lists.xenproject.org; Thu, 16 Jul 2020 12:21:33 +0000
X-Inumbo-ID: e1d541cc-c75e-11ea-94ac-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e1d541cc-c75e-11ea-94ac-12813bfff9fa;
 Thu, 16 Jul 2020 12:21:32 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 41396B1D3;
 Thu, 16 Jul 2020 12:21:35 +0000 (UTC)
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] compat: add a little bit of description to xlat.lst
Message-ID: <d7d95acc-11b0-278b-373e-0115cfa99b51@suse.com>
Date: Thu, 16 Jul 2020 14:21:33 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, George Dunlap <George.Dunlap@eu.citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Requested-by: Roger Pau Monné <roger.pau@citrix.com>
Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/include/xlat.lst
+++ b/xen/include/xlat.lst
@@ -1,3 +1,22 @@
+# There are a few fundamentally different strategies for handling compat
+# (sub-)hypercalls:
+#
+# 1) Wrap a translation layer around the native hypercall. Structures involved
+# in this model should use translation (xlat) macros generated by adding
+# !-prefixed lines here.
+#
+# 2) Compile the entire hypercall function a second time, arranging for the
+# compat structures to get used in place of the native ones. There are no xlat
+# macros involved here; all that's needed are correctly translated structures.
+#
+# 3) Ad-hoc translation, which may or may not involve adding entries here.
+#
+# 4) Any mixture of the above.
+#
+# In all models any structures re-used in their native form should have
+# ?-mark prefixed lines added here, with the resulting checking macros invoked
+# somewhere in the code handling the hypercall or its translation.
+#
 # First column indicator:
 # ! - needs translation
 # ? - needs checking
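Strategy 1 above can be sketched in isolation. The structure names and the xlat helper below are hypothetical stand-ins for what the generated XLAT_* macros boil down to (field-by-field copying, widening compat fields to their native counterparts), not the actual generated code:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical native and compat layouts of the same hypercall
 * argument: a 32-bit guest passes a narrower address field. */
struct xen_example    { uint64_t gaddr; uint32_t len; };
struct compat_example { uint32_t gaddr; uint32_t len; };

/* What a generated XLAT_example() macro amounts to for strategy 1:
 * copy each field, widening where the native type is larger. */
static void xlat_example(struct xen_example *nat,
                         const struct compat_example *cmp)
{
    nat->gaddr = cmp->gaddr;    /* zero-extended to 64 bits */
    nat->len   = cmp->len;
}
```

The ?-mark checking macros serve the complementary case: they statically verify that a structure's native and compat layouts already match, so no such copying is needed.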


From xen-devel-bounces@lists.xenproject.org Thu Jul 16 12:37:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jul 2020 12:37:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jw38U-00017M-LH; Thu, 16 Jul 2020 12:37:14 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=QxFF=A3=gmail.com=philippe.mathieu.daude@srs-us1.protection.inumbo.net>)
 id 1jw38U-000173-23
 for xen-devel@lists.xenproject.org; Thu, 16 Jul 2020 12:37:14 +0000
X-Inumbo-ID: 110dedde-c761-11ea-8496-bc764e2007e4
Received: from mail-wm1-x335.google.com (unknown [2a00:1450:4864:20::335])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 110dedde-c761-11ea-8496-bc764e2007e4;
 Thu, 16 Jul 2020 12:37:10 +0000 (UTC)
Received: by mail-wm1-x335.google.com with SMTP id j18so10150079wmi.3
 for <xen-devel@lists.xenproject.org>; Thu, 16 Jul 2020 05:37:10 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=sender:from:to:cc:subject:date:message-id:in-reply-to:references
 :mime-version:content-transfer-encoding;
 bh=G5sq9E9UDOnIr1aJgtoFVrLHsTos8+IaCLhksEowqzI=;
 b=XA9l6EiVlZzlbxHORQGsXcyp3vsil3xpF2im2hzpEfWVcmC7O75p+0YEKm3qeqs8gS
 QQRrZbZQmPKph88kE7PY/ZdUba/yzS6pNIb4zDgjxFa6Ge9Ovjp6dTZ+BWzLEqQ3HvCw
 dwzxMAVNCldSl6TKnRCtMhGp0oyThLDZ9I5+qVord9YFhNqmY42BP6aBQm3nEffu4EdR
 FL6sf5bKQDFl1c8XCsCgbmi5KxnvBAv+CuQRJanggG5UMy0p+an7A/tJKI7iTHjXwJN8
 7hbD0ga9bU3ln8RZB9mARbjebQId1iY6ONSkhv6Qf61BiACUkHuhnjXnjpxmR1WDn8Pa
 QJHQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:sender:from:to:cc:subject:date:message-id
 :in-reply-to:references:mime-version:content-transfer-encoding;
 bh=G5sq9E9UDOnIr1aJgtoFVrLHsTos8+IaCLhksEowqzI=;
 b=mi7kPNIRV4F5Up7LmzKnXJqojRs8WHmWRzgNORqut4NrUhLO02fsa0cQV7tcfMfb/o
 j+fANil10w3DeD0GgfnpkUIVQgx1tixbdZY1gqCN4anqUFkCSBVxOJaEtrZxfQMg5sga
 nljILHGTctH3uBkqiWTw1zKnjBszSpTDuUOpSrzAJq3ZlxtitGzBALf7sA+hIgK0URAb
 5GY0gInUlM/VMrIeCARgvy623xVaf59be753Bdg18OMGXwVKAz/VAsGrFVnPuNvpVpZU
 6+AFt1gI9smuKzhzDNl72ksTpVESH53bbJw0Pof9MN/rqeKQ9xSFwO14RIxna+CjYfVm
 WIww==
X-Gm-Message-State: AOAM532vhEsxQFS2bAn/1AQ3kiCFk0OW70788e2s2eAF22qY/ngna+cy
 hnd6v3sU8m/DuKwnTRwwJUw=
X-Google-Smtp-Source: ABdhPJxNr5sWe/rAQ0WwFmtNZQKrc+IVyitTjIZZS8rdOwUabdDJ5fZYRWatelaG7xt8QwLRVW+hrA==
X-Received: by 2002:a05:600c:22d3:: with SMTP id
 19mr4196642wmg.48.1594903029736; 
 Thu, 16 Jul 2020 05:37:09 -0700 (PDT)
Received: from x1w.redhat.com (138.red-83-57-170.dynamicip.rima-tde.net.
 [83.57.170.138])
 by smtp.gmail.com with ESMTPSA id c194sm8336673wme.8.2020.07.16.05.37.07
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 16 Jul 2020 05:37:09 -0700 (PDT)
From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>
To: Markus Armbruster <armbru@redhat.com>,
	qemu-devel@nongnu.org
Subject: [PATCH-for-5.2 v2 1/2] block/block-backend: Trace blk_attach_dev()
Date: Thu, 16 Jul 2020 14:37:03 +0200
Message-Id: <20200716123704.6557-2-f4bug@amsat.org>
X-Mailer: git-send-email 2.21.3
In-Reply-To: <20200716123704.6557-1-f4bug@amsat.org>
References: <20200716123704.6557-1-f4bug@amsat.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Fam Zheng <fam@euphon.net>, Kevin Wolf <kwolf@redhat.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 =?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
 Eduardo Habkost <ehabkost@redhat.com>, qemu-block@nongnu.org,
 Paul Durrant <paul@xen.org>, Laurent Vivier <laurent@vivier.eu>,
 Max Reitz <mreitz@redhat.com>, Paolo Bonzini <pbonzini@redhat.com>,
 Anthony Perard <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org,
 John Snow <jsnow@redhat.com>,
 =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Add a trace event to follow devices attaching block drives:

  $ qemu-system-arm -M n800 -trace blk_\*
  9513@1594040428.738162:blk_attach_dev attaching blk 'sd0' to device 'omap2-mmc'
  9513@1594040428.738189:blk_attach_dev attaching blk 'sd0' to device 'sd-card'
  qemu-system-arm: sd_init failed: blk 'sd0' already attached by device 'sd-card'

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
---
 block/block-backend.c | 1 +
 block/trace-events    | 1 +
 2 files changed, 2 insertions(+)

diff --git a/block/block-backend.c b/block/block-backend.c
index 0bf0188133..63ff940ef9 100644
--- a/block/block-backend.c
+++ b/block/block-backend.c
@@ -888,6 +888,7 @@ void blk_get_perm(BlockBackend *blk, uint64_t *perm, uint64_t *shared_perm)
  */
 int blk_attach_dev(BlockBackend *blk, DeviceState *dev)
 {
+    trace_blk_attach_dev(blk_name(blk), object_get_typename(OBJECT(dev)));
     if (blk->dev) {
         return -EBUSY;
     }
diff --git a/block/trace-events b/block/trace-events
index dbe76a7613..aa641c2034 100644
--- a/block/trace-events
+++ b/block/trace-events
@@ -9,6 +9,7 @@ blk_co_preadv(void *blk, void *bs, int64_t offset, unsigned int bytes, int flags
 blk_co_pwritev(void *blk, void *bs, int64_t offset, unsigned int bytes, int flags) "blk %p bs %p offset %"PRId64" bytes %u flags 0x%x"
 blk_root_attach(void *child, void *blk, void *bs) "child %p blk %p bs %p"
 blk_root_detach(void *child, void *blk, void *bs) "child %p blk %p bs %p"
+blk_attach_dev(const char *blk_name, const char *dev_name) "attaching blk '%s' to device '%s'"
 
 # io.c
 bdrv_co_preadv(void *bs, int64_t offset, int64_t nbytes, unsigned int flags) "bs %p offset %"PRId64" nbytes %"PRId64" flags 0x%x"
-- 
2.21.3
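For context, each line in trace-events expands into a trace_*() helper at build time. The sketch below approximates what the "log" backend generates for the new event; the real generated code also prints pid/timestamp prefixes (visible in the commit message output) and uses QEMU's trace-control machinery rather than a plain flag:

```c
#include <stdio.h>

/* Approximation of the generated helper for the new trace-events line
 * under the "log" backend; returns the number of characters printed so
 * callers (and tests) can observe whether the event fired. */
static _Bool trace_event_blk_attach_dev_enabled = 1;

static int trace_blk_attach_dev(const char *blk_name, const char *dev_name)
{
    if (!trace_event_blk_attach_dev_enabled)
        return 0;
    return printf("blk_attach_dev attaching blk '%s' to device '%s'\n",
                  blk_name, dev_name);
}
```

This is why the patch only touches two files: the helper itself is generated from the trace-events line, and the call site just invokes it.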



From xen-devel-bounces@lists.xenproject.org Thu Jul 16 12:37:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jul 2020 12:37:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jw38Z-00017i-UH; Thu, 16 Jul 2020 12:37:19 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=QxFF=A3=gmail.com=philippe.mathieu.daude@srs-us1.protection.inumbo.net>)
 id 1jw38Z-000173-27
 for xen-devel@lists.xenproject.org; Thu, 16 Jul 2020 12:37:19 +0000
X-Inumbo-ID: 12a88cd0-c761-11ea-bb8b-bc764e2007e4
Received: from mail-wr1-x441.google.com (unknown [2a00:1450:4864:20::441])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 12a88cd0-c761-11ea-bb8b-bc764e2007e4;
 Thu, 16 Jul 2020 12:37:13 +0000 (UTC)
Received: by mail-wr1-x441.google.com with SMTP id z13so6899519wrw.5
 for <xen-devel@lists.xenproject.org>; Thu, 16 Jul 2020 05:37:13 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=sender:from:to:cc:subject:date:message-id:in-reply-to:references
 :mime-version:content-transfer-encoding;
 bh=Yt7eySGtavnjPzGN+i8j014hrBU1DwpMXeuXr5l5jTo=;
 b=GwJrWi7NzhlZA4yxH4OfDvTOmFPCV1YmqJH5FABJF0+CA65AsrRknug6fPfWlv83Qd
 GkI3GoROwQQP9LuFLuo2xdfVAEmPqS0thSSDl9K+vaJr5F1/sZPXuAPl0CB20fO0SFSP
 3b117ls+ZNIOevkLPFyAac3rN9DwA6g31571KA6tdQx2DCPu3wYuhFJfrf8Vny5liIV/
 yyjO4JRhTwcI4M5s7rYk0QOLELwhyggUNLk5uuPiilLonV1m60Lp08kmF4wkmYnaarQe
 Pr4BqRZ8V1+7DkIZUIec5c3kcSyfPOk+bUMv8WC9AC/9XwSpqnfIrH9R/hNhvJyu5Fzv
 D0fw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:sender:from:to:cc:subject:date:message-id
 :in-reply-to:references:mime-version:content-transfer-encoding;
 bh=Yt7eySGtavnjPzGN+i8j014hrBU1DwpMXeuXr5l5jTo=;
 b=IU3LOE7hORG/knbT8dE/cb2fSvNJQyPvdw4n7ilnitYY7iQfCk3WRNbsXXWosPLAIz
 NuWp5igJrXRFW2wMyK/VYkGqPn08D34uPr0y3OrW8rKrRZsII72xKX3cNsxpSNXCnvXx
 LeiGYXqfAZokQHuzefsRQzaFC7L9rMbb1ucxSoOwFKlQ1Tc/nSJscqLQrYkDtuy3/j0/
 6Tiz/fKZ4yUSsnw8F5D1Y5zjwO9e7EB/KdYZq+zgHPUMAJeCrXgfTiHHUYu1kaj23ymL
 xUns9mKckw2qlWcNWOS8vQGBUgOc9Ul1bxdGcsMSp3G3fEJJYFzNR9fa/JXNayPx6rHw
 Ibag==
X-Gm-Message-State: AOAM533zRRGydsp+ZPSdoclm1wCHRYsrEF0EUoylsa+E9yzbTo64fTyG
 4O/ytsNGcHxCAngSrtvmpVg=
X-Google-Smtp-Source: ABdhPJz9Mv+iabMWKJD6ocXUgDtFCLEwVYiCUdorqAbxwQkYska8o+cgZhIN7hekPn4aaRBpVKJSaA==
X-Received: by 2002:adf:8b5a:: with SMTP id v26mr4826248wra.165.1594903032286; 
 Thu, 16 Jul 2020 05:37:12 -0700 (PDT)
Received: from x1w.redhat.com (138.red-83-57-170.dynamicip.rima-tde.net.
 [83.57.170.138])
 by smtp.gmail.com with ESMTPSA id c194sm8336673wme.8.2020.07.16.05.37.09
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 16 Jul 2020 05:37:11 -0700 (PDT)
From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>
To: Markus Armbruster <armbru@redhat.com>,
	qemu-devel@nongnu.org
Subject: [RFC PATCH-for-5.2 v2 2/2] block/block-backend: Let blk_attach_dev()
 provide helpful error
Date: Thu, 16 Jul 2020 14:37:04 +0200
Message-Id: <20200716123704.6557-3-f4bug@amsat.org>
X-Mailer: git-send-email 2.21.3
In-Reply-To: <20200716123704.6557-1-f4bug@amsat.org>
References: <20200716123704.6557-1-f4bug@amsat.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Fam Zheng <fam@euphon.net>, Kevin Wolf <kwolf@redhat.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 =?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
 Eduardo Habkost <ehabkost@redhat.com>, qemu-block@nongnu.org,
 Paul Durrant <paul@xen.org>, Laurent Vivier <laurent@vivier.eu>,
 Max Reitz <mreitz@redhat.com>, Paolo Bonzini <pbonzini@redhat.com>,
 Anthony Perard <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org,
 John Snow <jsnow@redhat.com>,
 =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Let blk_attach_dev() take an Error* object to return helpful
information. Adapt the callers.

  $ qemu-system-arm -M n800
  qemu-system-arm: sd_init failed: cannot attach blk 'sd0' to device 'sd-card'
  blk 'sd0' is already attached by device 'omap2-mmc'
  Drive 'sd0' is already in use because it has been automatically connected to another device
  (do you need 'if=none' in the drive options?)

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
---
v2: Rebased after 668f62ec62 ("error: Eliminate error_propagate()")
---
 include/sysemu/block-backend.h   |  2 +-
 block/block-backend.c            | 11 ++++++++++-
 hw/block/fdc.c                   |  4 +---
 hw/block/swim.c                  |  4 +---
 hw/block/xen-block.c             |  5 +++--
 hw/core/qdev-properties-system.c | 16 +++++++++-------
 hw/ide/qdev.c                    |  4 +---
 hw/scsi/scsi-disk.c              |  4 +---
 8 files changed, 27 insertions(+), 23 deletions(-)

diff --git a/include/sysemu/block-backend.h b/include/sysemu/block-backend.h
index 8203d7f6f9..118fbad0b4 100644
--- a/include/sysemu/block-backend.h
+++ b/include/sysemu/block-backend.h
@@ -113,7 +113,7 @@ BlockDeviceIoStatus blk_iostatus(const BlockBackend *blk);
 void blk_iostatus_disable(BlockBackend *blk);
 void blk_iostatus_reset(BlockBackend *blk);
 void blk_iostatus_set_err(BlockBackend *blk, int error);
-int blk_attach_dev(BlockBackend *blk, DeviceState *dev);
+int blk_attach_dev(BlockBackend *blk, DeviceState *dev, Error **errp);
 void blk_detach_dev(BlockBackend *blk, DeviceState *dev);
 DeviceState *blk_get_attached_dev(BlockBackend *blk);
 char *blk_get_attached_dev_id(BlockBackend *blk);
diff --git a/block/block-backend.c b/block/block-backend.c
index 63ff940ef9..b7be0a4619 100644
--- a/block/block-backend.c
+++ b/block/block-backend.c
@@ -884,12 +884,21 @@ void blk_get_perm(BlockBackend *blk, uint64_t *perm, uint64_t *shared_perm)
 
 /*
  * Attach device model @dev to @blk.
+ *
+ * @blk: Block backend
+ * @dev: Device to attach the block backend to
+ * @errp: pointer to a NULL-initialized error object
+ *
  * Return 0 on success, -EBUSY when a device model is attached already.
  */
-int blk_attach_dev(BlockBackend *blk, DeviceState *dev)
+int blk_attach_dev(BlockBackend *blk, DeviceState *dev, Error **errp)
 {
     trace_blk_attach_dev(blk_name(blk), object_get_typename(OBJECT(dev)));
     if (blk->dev) {
+        error_setg(errp, "cannot attach blk '%s' to device '%s'",
+                   blk_name(blk), object_get_typename(OBJECT(dev)));
+        error_append_hint(errp, "blk '%s' is already attached by device '%s'\n",
+                          blk_name(blk), object_get_typename(OBJECT(blk->dev)));
         return -EBUSY;
     }
 
diff --git a/hw/block/fdc.c b/hw/block/fdc.c
index e9ed3eef45..bc39244152 100644
--- a/hw/block/fdc.c
+++ b/hw/block/fdc.c
@@ -519,7 +519,6 @@ static void floppy_drive_realize(DeviceState *qdev, Error **errp)
     FloppyBus *bus = FLOPPY_BUS(qdev->parent_bus);
     FDrive *drive;
     bool read_only;
-    int ret;
 
     if (dev->unit == -1) {
         for (dev->unit = 0; dev->unit < MAX_FD; dev->unit++) {
@@ -545,8 +544,7 @@ static void floppy_drive_realize(DeviceState *qdev, Error **errp)
     if (!dev->conf.blk) {
         /* Anonymous BlockBackend for an empty drive */
         dev->conf.blk = blk_new(qemu_get_aio_context(), 0, BLK_PERM_ALL);
-        ret = blk_attach_dev(dev->conf.blk, qdev);
-        assert(ret == 0);
+        blk_attach_dev(dev->conf.blk, qdev, &error_abort);
 
         /* Don't take write permissions on an empty drive to allow attaching a
          * read-only node later */
diff --git a/hw/block/swim.c b/hw/block/swim.c
index 74f56e8f46..2f1ecd0bb2 100644
--- a/hw/block/swim.c
+++ b/hw/block/swim.c
@@ -159,7 +159,6 @@ static void swim_drive_realize(DeviceState *qdev, Error **errp)
     SWIMDrive *dev = SWIM_DRIVE(qdev);
     SWIMBus *bus = SWIM_BUS(qdev->parent_bus);
     FDrive *drive;
-    int ret;
 
     if (dev->unit == -1) {
         for (dev->unit = 0; dev->unit < SWIM_MAX_FD; dev->unit++) {
@@ -185,8 +184,7 @@ static void swim_drive_realize(DeviceState *qdev, Error **errp)
     if (!dev->conf.blk) {
         /* Anonymous BlockBackend for an empty drive */
         dev->conf.blk = blk_new(qemu_get_aio_context(), 0, BLK_PERM_ALL);
-        ret = blk_attach_dev(dev->conf.blk, qdev);
-        assert(ret == 0);
+        blk_attach_dev(dev->conf.blk, qdev, &error_abort);
     }
 
     if (!blkconf_blocksizes(&dev->conf, errp)) {
diff --git a/hw/block/xen-block.c b/hw/block/xen-block.c
index 8a7a3f5452..eb99757faf 100644
--- a/hw/block/xen-block.c
+++ b/hw/block/xen-block.c
@@ -609,14 +609,15 @@ static void xen_cdrom_realize(XenBlockDevice *blockdev, Error **errp)
     blockdev->device_type = "cdrom";
 
     if (!conf->blk) {
+        Error *local_err = NULL;
         int rc;
 
         /* Set up an empty drive */
         conf->blk = blk_new(qemu_get_aio_context(), 0, BLK_PERM_ALL);
 
-        rc = blk_attach_dev(conf->blk, DEVICE(blockdev));
+        rc = blk_attach_dev(conf->blk, DEVICE(blockdev), &local_err);
         if (!rc) {
-            error_setg_errno(errp, -rc, "failed to create drive");
+            error_propagate_prepend(errp, local_err, "failed to create drive");
             return;
         }
     }
diff --git a/hw/core/qdev-properties-system.c b/hw/core/qdev-properties-system.c
index 3e4f16fc21..e3a757b1c3 100644
--- a/hw/core/qdev-properties-system.c
+++ b/hw/core/qdev-properties-system.c
@@ -136,17 +136,19 @@ static void set_drive_helper(Object *obj, Visitor *v, const char *name,
                    object_get_typename(OBJECT(dev)), prop->name, str);
         goto fail;
     }
-    if (blk_attach_dev(blk, dev) < 0) {
+    if (blk_attach_dev(blk, dev, errp) < 0) {
         DriveInfo *dinfo = blk_legacy_dinfo(blk);
 
         if (dinfo && dinfo->type != IF_NONE) {
-            error_setg(errp, "Drive '%s' is already in use because "
-                       "it has been automatically connected to another "
-                       "device (did you need 'if=none' in the drive options?)",
-                       str);
+            error_append_hint(errp,
+                              "Drive '%s' is already in use because it has "
+                              "been automatically connected to another device\n"
+                              "(do you need 'if=none' in the drive options?)\n",
+                              str);
         } else {
-            error_setg(errp, "Drive '%s' is already in use by another device",
-                       str);
+            error_append_hint(errp,
+                              "Drive '%s' is already in use by another device",
+                              str);
         }
         goto fail;
     }
diff --git a/hw/ide/qdev.c b/hw/ide/qdev.c
index 27ff1f7f66..9a02eb87f4 100644
--- a/hw/ide/qdev.c
+++ b/hw/ide/qdev.c
@@ -165,7 +165,6 @@ static void ide_dev_initfn(IDEDevice *dev, IDEDriveKind kind, Error **errp)
 {
     IDEBus *bus = DO_UPCAST(IDEBus, qbus, dev->qdev.parent_bus);
     IDEState *s = bus->ifs + dev->unit;
-    int ret;
 
     if (!dev->conf.blk) {
         if (kind != IDE_CD) {
@@ -174,8 +173,7 @@ static void ide_dev_initfn(IDEDevice *dev, IDEDriveKind kind, Error **errp)
         } else {
             /* Anonymous BlockBackend for an empty drive */
             dev->conf.blk = blk_new(qemu_get_aio_context(), 0, BLK_PERM_ALL);
-            ret = blk_attach_dev(dev->conf.blk, &dev->qdev);
-            assert(ret == 0);
+            blk_attach_dev(dev->conf.blk, &dev->qdev, &error_abort);
         }
     }
 
diff --git a/hw/scsi/scsi-disk.c b/hw/scsi/scsi-disk.c
index 8ce68a9dd6..92350642c7 100644
--- a/hw/scsi/scsi-disk.c
+++ b/hw/scsi/scsi-disk.c
@@ -2451,14 +2451,12 @@ static void scsi_cd_realize(SCSIDevice *dev, Error **errp)
 {
     SCSIDiskState *s = DO_UPCAST(SCSIDiskState, qdev, dev);
     AioContext *ctx;
-    int ret;
 
     if (!dev->conf.blk) {
         /* Anonymous BlockBackend for an empty drive. As we put it into
          * dev->conf, qdev takes care of detaching on unplug. */
         dev->conf.blk = blk_new(qemu_get_aio_context(), 0, BLK_PERM_ALL);
-        ret = blk_attach_dev(dev->conf.blk, &dev->qdev);
-        assert(ret == 0);
+        blk_attach_dev(dev->conf.blk, &dev->qdev, &error_abort);
     }
 
     ctx = blk_get_aio_context(dev->conf.blk);
-- 
2.21.3
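The error-reporting pattern this patch follows (set one primary message, then attach hints that front-ends may print separately) can be mimicked with a toy Error type. This is a deliberately simplified stand-in, not QEMU's qapi/error.h API, whose real functions are variadic and support &error_abort/&error_fatal sentinels:

```c
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Toy stand-in for QEMU's Error object: one primary message plus an
 * accumulated hint string, echoing error_setg()/error_append_hint(). */
typedef struct Error {
    char msg[128];
    char hint[256];
} Error;

static void error_setg(Error **errp, const char *text)
{
    if (!errp)
        return;                         /* caller doesn't want details */
    *errp = calloc(1, sizeof(**errp));
    snprintf((*errp)->msg, sizeof((*errp)->msg), "%s", text);
}

static void error_append_hint(Error **errp, const char *text)
{
    if (errp && *errp)                  /* hints only extend an existing error */
        strncat((*errp)->hint, text,
                sizeof((*errp)->hint) - strlen((*errp)->hint) - 1);
}
```

The shape matters for the patch: blk_attach_dev() sets the primary message, and set_drive_helper() can then append its if=none hint to the same error instead of overwriting it.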



From xen-devel-bounces@lists.xenproject.org Thu Jul 16 12:37:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jul 2020 12:37:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jw38Q-00017A-DI; Thu, 16 Jul 2020 12:37:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=QxFF=A3=gmail.com=philippe.mathieu.daude@srs-us1.protection.inumbo.net>)
 id 1jw38P-000173-6z
 for xen-devel@lists.xenproject.org; Thu, 16 Jul 2020 12:37:09 +0000
X-Inumbo-ID: 0fd91844-c761-11ea-bb8b-bc764e2007e4
Received: from mail-wr1-x430.google.com (unknown [2a00:1450:4864:20::430])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0fd91844-c761-11ea-bb8b-bc764e2007e4;
 Thu, 16 Jul 2020 12:37:08 +0000 (UTC)
Received: by mail-wr1-x430.google.com with SMTP id s10so6841937wrw.12
 for <xen-devel@lists.xenproject.org>; Thu, 16 Jul 2020 05:37:08 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=sender:from:to:cc:subject:date:message-id:mime-version
 :content-transfer-encoding;
 bh=mB1xxUu5zqmOlJyeJOkLUsa5p10TlJZt5oeYf/aQyPQ=;
 b=ARb60FeHPT4lfmhfWj5c3Dc/GDZ7UeDtLAXLlobXruj8nFLuFB8D7NGC+HwAJ4+Pl5
 ftFweWixOb1ptNP3cP2cdVoqSGHphD6ybCp/hLm/0UaMIV/T3fRr2FUHNPfVRrF0fAJt
 Vzby7G4rs6h1XU7Iu2oLv8VCIaFKAiAlB5T28WiZrAe3z0KgBjSVDvRW5ivN8RJgKfw+
 oR4nWU1wPrBBVSopd6Ii3lgiTLcc/TgEH5xypsyd/1UVRTY+xV9x7g+fLlK8k54MTeov
 IZ332Gh1fr2+UV+VAY0oXfYxOdBWWXdQsnGZ3M0ABYdsmImf7s7Ur2WyZ4dQ9vypJTnP
 pSIA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:sender:from:to:cc:subject:date:message-id
 :mime-version:content-transfer-encoding;
 bh=mB1xxUu5zqmOlJyeJOkLUsa5p10TlJZt5oeYf/aQyPQ=;
 b=d9FTn02njEE8L+H7HYMj0pERy0sfHU5L5rV/Rb5dzm6OylxtZYTOBiBhVATDYhZzxv
 YdLqoVJk8YQlEfSrjHsItEm9x06vVZerKu1pieJrbdVlXOVaDEd/De+0zIlJ5keFZoM7
 jSOnWGp5nIphj7jAY22Os5TjRoA4IY1ycGVP5XQMw3axrP9LdZWPqieZDmN5tjDTWtzu
 ewORAHAyyOwPEZTnuQrvOXzDdH85Yrxmq3w2Ob1I+A9yCTeyPW2KgPA87/qHH1eeThiQ
 7dxc9D5u5hsg5F0tL//+P1dNzoCxIKDR/n4Sq3rxKpdwCA6Q7sqH21nMtDerppqAhU0D
 E+pQ==
X-Gm-Message-State: AOAM533h/a+8NEYiKpLrwFNlVuacC7fD8FCy0Y/301/9aL5oDZP/NZjv
 hmUwoPE6EttM69MnJSTz/wk=
X-Google-Smtp-Source: ABdhPJz9g/vGJP2RKVZgF9h6f20OFk4NNucEE3jZ+fi3QR4RP7R2yi143VXJ45AZJ8lOK4q+3nyDgQ==
X-Received: by 2002:a5d:4845:: with SMTP id n5mr4796592wrs.353.1594903027555; 
 Thu, 16 Jul 2020 05:37:07 -0700 (PDT)
Received: from x1w.redhat.com (138.red-83-57-170.dynamicip.rima-tde.net.
 [83.57.170.138])
 by smtp.gmail.com with ESMTPSA id c194sm8336673wme.8.2020.07.16.05.37.05
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 16 Jul 2020 05:37:06 -0700 (PDT)
From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>
To: Markus Armbruster <armbru@redhat.com>,
	qemu-devel@nongnu.org
Subject: [PATCH-for-5.2 v2 0/2] block/block-backend: Let blk_attach_dev()
 provide helpful error
Date: Thu, 16 Jul 2020 14:37:02 +0200
Message-Id: <20200716123704.6557-1-f4bug@amsat.org>
X-Mailer: git-send-email 2.21.3
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Fam Zheng <fam@euphon.net>, Kevin Wolf <kwolf@redhat.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 =?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
 Eduardo Habkost <ehabkost@redhat.com>, qemu-block@nongnu.org,
 Paul Durrant <paul@xen.org>, Laurent Vivier <laurent@vivier.eu>,
 Max Reitz <mreitz@redhat.com>, Paolo Bonzini <pbonzini@redhat.com>,
 Anthony Perard <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org,
 John Snow <jsnow@redhat.com>,
 =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

A pair of patches which helps me debug an issue where a block
drive is already attached.

Suggestions on how to use the Error API correctly/better are welcome,
in particular in qdev-properties-system::set_drive_helper().

Since v1:
- Rebased after 668f62ec62 ("error: Eliminate error_propagate()")

Philippe Mathieu-Daudé (2):
  block/block-backend: Trace blk_attach_dev()
  block/block-backend: Let blk_attach_dev() provide helpful error

 include/sysemu/block-backend.h   |  2 +-
 block/block-backend.c            | 12 +++++++++++-
 hw/block/fdc.c                   |  4 +---
 hw/block/swim.c                  |  4 +---
 hw/block/xen-block.c             |  5 +++--
 hw/core/qdev-properties-system.c | 16 +++++++++-------
 hw/ide/qdev.c                    |  4 +---
 hw/scsi/scsi-disk.c              |  4 +---
 block/trace-events               |  1 +
 9 files changed, 29 insertions(+), 23 deletions(-)

-- 
2.21.3



From xen-devel-bounces@lists.xenproject.org Thu Jul 16 12:42:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jul 2020 12:42:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jw3DR-00027H-Mk; Thu, 16 Jul 2020 12:42:21 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=QxFF=A3=gmail.com=philippe.mathieu.daude@srs-us1.protection.inumbo.net>)
 id 1jw3DQ-00027C-5a
 for xen-devel@lists.xenproject.org; Thu, 16 Jul 2020 12:42:20 +0000
X-Inumbo-ID: c93d5f7a-c761-11ea-8496-bc764e2007e4
Received: from mail-wm1-x343.google.com (unknown [2a00:1450:4864:20::343])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c93d5f7a-c761-11ea-8496-bc764e2007e4;
 Thu, 16 Jul 2020 12:42:19 +0000 (UTC)
Received: by mail-wm1-x343.google.com with SMTP id q15so10177282wmj.2
 for <xen-devel@lists.xenproject.org>; Thu, 16 Jul 2020 05:42:19 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=sender:subject:to:cc:references:from:message-id:date:user-agent
 :mime-version:in-reply-to:content-language:content-transfer-encoding;
 bh=oOd1Aym1uv5KKe0noR2NOU6jCkeQAG1+aul16luO/II=;
 b=jBNP5ouL0+8xKV29EOKkSz3SM9MGHRvLpSfh57q6k0ZmGC6AuoHr5/pRHNZsFha63n
 zAYARYNEzq1JyGWj6cF3Dc31X/7NKIenLGrJT14qD8souxjTj+5T64F0dNSGHP1MyxbH
 AnD6VP9wQpDfjcuG4emJJ3xr5PiBvqldx4mgzyutHq6y8iqW6jBuhhyUOWFXZ+L+zVtc
 8G+0AXHVVtrp6TGrGgDl9ohF6tJ5HiI9oXTEDAPTvB8Zzy5mhBmZGosjMpaEUjC018EY
 Cg0JDZ6Ls7mP0XYf5wglWJSDeJG9vUlXPNgjVB6O3+2f3WvVV+3UmF+KWHMCzH9cQ/Rz
 Czrg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:sender:subject:to:cc:references:from:message-id
 :date:user-agent:mime-version:in-reply-to:content-language
 :content-transfer-encoding;
 bh=oOd1Aym1uv5KKe0noR2NOU6jCkeQAG1+aul16luO/II=;
 b=pKLTMZtVPr4dD69pLI2QePI6KG8XgE79a28cfzbR1QnbBzbEw6VonFSt91BfTz36qe
 x9FfAGih68KMSF8QFEO0yRYnzmyAQpAJ4n5XJHxV36QQDBNaMIwc8btcCPM+zRULgksl
 dAafUa3iot8yKLIkiuhqKPpi6dCSY2i/tBZQ3MeZ2vTBIQyGRcqa02cTN7Ozwh4E7WMk
 hH1RnMcoZ8C056jOlO5kXrYhK7ySPvqqKxNxt1CBNXnMFsUY5yz0jeUmGNloUrEezWyh
 c5O3opYTtCHapp2v/622RljN/o6VdqLa2q7/GA8/PXWav0y+vgP9S/EXwfmxxKxCMsVZ
 EtZw==
X-Gm-Message-State: AOAM532EiBxyLLOrpGMmRRrCo9AxYB5BQCxQbFk1NKwn5a11Wj7iPVm4
 xiLSSibfzX5p75MLmQt+V0I=
X-Google-Smtp-Source: ABdhPJwiubA6N1ZbLaINf6jicEifGqUVKpccq2W4gVLvNYVc0q80pSswMs28W2q5IWZjl2OdrdelKg==
X-Received: by 2002:a05:600c:218f:: with SMTP id
 e15mr4078065wme.187.1594903338621; 
 Thu, 16 Jul 2020 05:42:18 -0700 (PDT)
Received: from [192.168.1.37] (138.red-83-57-170.dynamicip.rima-tde.net.
 [83.57.170.138])
 by smtp.gmail.com with ESMTPSA id k131sm8613997wmb.36.2020.07.16.05.42.17
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 16 Jul 2020 05:42:17 -0700 (PDT)
Subject: Re: [PATCH-for-5.2 v2 1/2] block/block-backend: Trace blk_attach_dev()
To: Markus Armbruster <armbru@redhat.com>, qemu-devel@nongnu.org
References: <20200716123704.6557-1-f4bug@amsat.org>
 <20200716123704.6557-2-f4bug@amsat.org>
From: =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <f4bug@amsat.org>
Message-ID: <1cdce80f-ef11-b495-98ec-d76dcc6cc729@amsat.org>
Date: Thu, 16 Jul 2020 14:42:16 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.5.0
MIME-Version: 1.0
In-Reply-To: <20200716123704.6557-2-f4bug@amsat.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Fam Zheng <fam@euphon.net>, Kevin Wolf <kwolf@redhat.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 =?UTF-8?Q?Daniel_P=2e_Berrang=c3=a9?= <berrange@redhat.com>,
 Eduardo Habkost <ehabkost@redhat.com>, qemu-block@nongnu.org,
 Paul Durrant <paul@xen.org>, Laurent Vivier <laurent@vivier.eu>,
 Max Reitz <mreitz@redhat.com>, xen-devel@lists.xenproject.org,
 Anthony Perard <anthony.perard@citrix.com>,
 Paolo Bonzini <pbonzini@redhat.com>, John Snow <jsnow@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 7/16/20 2:37 PM, Philippe Mathieu-Daudé wrote:
> Add a trace event to follow devices attaching block drives:
> 
>   $ qemu-system-arm -M n800 -trace blk_\*
>   9513@1594040428.738162:blk_attach_dev attaching blk 'sd0' to device 'omap2-mmc'
>   9513@1594040428.738189:blk_attach_dev attaching blk 'sd0' to device 'sd-card'
>   qemu-system-arm: sd_init failed: blk 'sd0' already attached by device 'sd-card'

Oops, the current error is:

    qemu-system-arm: sd_init failed: Drive 'sd0' is already in use by
another device

> 
> Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
> ---
>  block/block-backend.c | 1 +
>  block/trace-events    | 1 +
>  2 files changed, 2 insertions(+)
> 
> diff --git a/block/block-backend.c b/block/block-backend.c
> index 0bf0188133..63ff940ef9 100644
> --- a/block/block-backend.c
> +++ b/block/block-backend.c
> @@ -888,6 +888,7 @@ void blk_get_perm(BlockBackend *blk, uint64_t *perm, uint64_t *shared_perm)
>   */
>  int blk_attach_dev(BlockBackend *blk, DeviceState *dev)
>  {
> +    trace_blk_attach_dev(blk_name(blk), object_get_typename(OBJECT(dev)));
>      if (blk->dev) {
>          return -EBUSY;
>      }
> diff --git a/block/trace-events b/block/trace-events
> index dbe76a7613..aa641c2034 100644
> --- a/block/trace-events
> +++ b/block/trace-events
> @@ -9,6 +9,7 @@ blk_co_preadv(void *blk, void *bs, int64_t offset, unsigned int bytes, int flags
>  blk_co_pwritev(void *blk, void *bs, int64_t offset, unsigned int bytes, int flags) "blk %p bs %p offset %"PRId64" bytes %u flags 0x%x"
>  blk_root_attach(void *child, void *blk, void *bs) "child %p blk %p bs %p"
>  blk_root_detach(void *child, void *blk, void *bs) "child %p blk %p bs %p"
> +blk_attach_dev(const char *blk_name, const char *dev_name) "attaching blk '%s' to device '%s'"
>  
>  # io.c
>  bdrv_co_preadv(void *bs, int64_t offset, int64_t nbytes, unsigned int flags) "bs %p offset %"PRId64" nbytes %"PRId64" flags 0x%x"
> 


From xen-devel-bounces@lists.xenproject.org Thu Jul 16 13:05:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jul 2020 13:05:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jw3ZJ-0003tg-Jl; Thu, 16 Jul 2020 13:04:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=pNdb=A3=redhat.com=berrange@srs-us1.protection.inumbo.net>)
 id 1jw3ZJ-0003tb-0q
 for xen-devel@lists.xenproject.org; Thu, 16 Jul 2020 13:04:57 +0000
X-Inumbo-ID: f1503728-c764-11ea-bca7-bc764e2007e4
Received: from us-smtp-delivery-1.mimecast.com (unknown [207.211.31.81])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id f1503728-c764-11ea-bca7-bc764e2007e4;
 Thu, 16 Jul 2020 13:04:54 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
 s=mimecast20190719; t=1594904694;
 h=from:from:reply-to:reply-to:subject:subject:date:date:
 message-id:message-id:to:to:cc:cc:mime-version:mime-version:
 content-type:content-type:
 content-transfer-encoding:content-transfer-encoding:
 in-reply-to:in-reply-to:references:references;
 bh=az+FmhN9pIiPX+LZPos7raST+pyWDt55nirXChkVUsk=;
 b=fYZRw1uTWVebuDFBB7Wgt42/fQ8ZYH6ftBBY4EOrMiDqmKan6D7JJws4OxtDguqM9hgJzd
 Che/A6edVbJurCconjDuXvK32+BxHiDoYtTYhsaeJrrPhVsAw7M2DJRWAXlWWv4dKCpvjf
 /vJzXlwL6UHt0+hq0oN29BP7DzxBjUc=
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-319-wJh8r9ASND-BW_uZ_7uN3Q-1; Thu, 16 Jul 2020 09:04:49 -0400
X-MC-Unique: wJh8r9ASND-BW_uZ_7uN3Q-1
Received: from smtp.corp.redhat.com (int-mx02.intmail.prod.int.phx2.redhat.com
 [10.5.11.12])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 9AB15107ACCA;
 Thu, 16 Jul 2020 13:04:47 +0000 (UTC)
Received: from redhat.com (unknown [10.36.110.42])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id 99B3D6FEF4;
 Thu, 16 Jul 2020 13:04:43 +0000 (UTC)
Date: Thu, 16 Jul 2020 14:04:40 +0100
From: Daniel =?utf-8?B?UC4gQmVycmFuZ8Op?= <berrange@redhat.com>
To: Philippe =?utf-8?Q?Mathieu-Daud=C3=A9?= <f4bug@amsat.org>
Subject: Re: [RFC PATCH-for-5.2 v2 2/2] block/block-backend: Let
 blk_attach_dev() provide helpful error
Message-ID: <20200716130440.GT227735@redhat.com>
References: <20200716123704.6557-1-f4bug@amsat.org>
 <20200716123704.6557-3-f4bug@amsat.org>
MIME-Version: 1.0
In-Reply-To: <20200716123704.6557-3-f4bug@amsat.org>
User-Agent: Mutt/1.14.5 (2020-06-23)
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.12
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Disposition: inline
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Reply-To: Daniel =?utf-8?B?UC4gQmVycmFuZ8Op?= <berrange@redhat.com>
Cc: Fam Zheng <fam@euphon.net>, Kevin Wolf <kwolf@redhat.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Eduardo Habkost <ehabkost@redhat.com>, qemu-block@nongnu.org,
 Paul Durrant <paul@xen.org>, Markus Armbruster <armbru@redhat.com>,
 qemu-devel@nongnu.org, Paolo Bonzini <pbonzini@redhat.com>,
 Anthony Perard <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org,
 Max Reitz <mreitz@redhat.com>, John Snow <jsnow@redhat.com>,
 Laurent Vivier <laurent@vivier.eu>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Thu, Jul 16, 2020 at 02:37:04PM +0200, Philippe Mathieu-Daudé wrote:
> Let blk_attach_dev() take an Error* object to return helpful
> information. Adapt the callers.
> 
>   $ qemu-system-arm -M n800
>   qemu-system-arm: sd_init failed: cannot attach blk 'sd0' to device 'sd-card'
>   blk 'sd0' is already attached by device 'omap2-mmc'
>   Drive 'sd0' is already in use because it has been automatically connected to another device
>   (do you need 'if=none' in the drive options?)
> 
> Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
> ---
> v2: Rebased after 668f62ec62 ("error: Eliminate error_propagate()")
> ---
>  include/sysemu/block-backend.h   |  2 +-
>  block/block-backend.c            | 11 ++++++++++-
>  hw/block/fdc.c                   |  4 +---
>  hw/block/swim.c                  |  4 +---
>  hw/block/xen-block.c             |  5 +++--
>  hw/core/qdev-properties-system.c | 16 +++++++++-------
>  hw/ide/qdev.c                    |  4 +---
>  hw/scsi/scsi-disk.c              |  4 +---
>  8 files changed, 27 insertions(+), 23 deletions(-)
> 
> diff --git a/include/sysemu/block-backend.h b/include/sysemu/block-backend.h
> index 8203d7f6f9..118fbad0b4 100644
> --- a/include/sysemu/block-backend.h
> +++ b/include/sysemu/block-backend.h
> @@ -113,7 +113,7 @@ BlockDeviceIoStatus blk_iostatus(const BlockBackend *blk);
>  void blk_iostatus_disable(BlockBackend *blk);
>  void blk_iostatus_reset(BlockBackend *blk);
>  void blk_iostatus_set_err(BlockBackend *blk, int error);
> -int blk_attach_dev(BlockBackend *blk, DeviceState *dev);
> +int blk_attach_dev(BlockBackend *blk, DeviceState *dev, Error **errp);
>  void blk_detach_dev(BlockBackend *blk, DeviceState *dev);
>  DeviceState *blk_get_attached_dev(BlockBackend *blk);
>  char *blk_get_attached_dev_id(BlockBackend *blk);
> diff --git a/block/block-backend.c b/block/block-backend.c
> index 63ff940ef9..b7be0a4619 100644
> --- a/block/block-backend.c
> +++ b/block/block-backend.c
> @@ -884,12 +884,21 @@ void blk_get_perm(BlockBackend *blk, uint64_t *perm, uint64_t *shared_perm)
>  
>  /*
>   * Attach device model @dev to @blk.
> + *
> + * @blk: Block backend
> + * @dev: Device to attach the block backend to
> + * @errp: pointer to NULL initialized error object
> + *
>   * Return 0 on success, -EBUSY when a device model is attached already.
>   */
> -int blk_attach_dev(BlockBackend *blk, DeviceState *dev)
> +int blk_attach_dev(BlockBackend *blk, DeviceState *dev, Error **errp)
>  {
>      trace_blk_attach_dev(blk_name(blk), object_get_typename(OBJECT(dev)));
>      if (blk->dev) {
> +        error_setg(errp, "cannot attach blk '%s' to device '%s'",
> +                   blk_name(blk), object_get_typename(OBJECT(dev)));
> +        error_append_hint(errp, "blk '%s' is already attached by device '%s'\n",
> +                          blk_name(blk), object_get_typename(OBJECT(blk->dev)));

I would have a preference for expanding the main error message and not
using a hint.  Any hint is completely thrown away when using QMP :-(


Regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|



From xen-devel-bounces@lists.xenproject.org Thu Jul 16 13:19:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jul 2020 13:19:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jw3n5-0004s3-0P; Thu, 16 Jul 2020 13:19:11 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=xJjh=A3=xenbits.xen.org=iwj@srs-us1.protection.inumbo.net>)
 id 1jw3n3-0004rq-Rn
 for xen-devel@lists.xen.org; Thu, 16 Jul 2020 13:19:09 +0000
X-Inumbo-ID: eb9fc9f4-c766-11ea-b7bb-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id eb9fc9f4-c766-11ea-b7bb-bc764e2007e4;
 Thu, 16 Jul 2020 13:19:04 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Date:Message-Id:Subject:CC:From:To:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Sender:Reply-To:Content-ID:
 Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc
 :Resent-Message-ID:In-Reply-To:References:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=3VGZzYg6mn62RtPmWMdMHRbsHJM8t32N5zUX+qK4CUw=; b=G7u8LMu9ALbkKy8HZA3VKFPwtJ
 ExmVOFeRW3Hkjfbt1HVZrvl2CAeZsfBaiq++awmQq1YLnmRYVn9YPIXoEm46dg9pJ6Ps4DPkdYzwd
 pHGAM9/5cy/0G36Iv8l0Y5VmPyGTJIxMP3KLqKuPyYsnAunp95ZFEt46TTSt9M8iRZIY=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenbits.xen.org>)
 id 1jw3ms-0006Kq-VB; Thu, 16 Jul 2020 13:18:58 +0000
Received: from iwj by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <iwj@xenbits.xen.org>)
 id 1jw3ms-0006i6-Se; Thu, 16 Jul 2020 13:18:58 +0000
Content-Type: multipart/mixed; boundary="=separator"; charset="utf-8"
Content-Transfer-Encoding: binary
MIME-Version: 1.0
X-Mailer: MIME-tools 5.509 (Entity 5.509)
To: xen-announce@lists.xen.org, xen-devel@lists.xen.org,
 xen-users@lists.xen.org, oss-security@lists.openwall.com
From: Xen.org security team <security@xen.org>
Subject: Xen Security Advisory 329 v2 - Linux ioperm bitmap context
 switching issues
Message-Id: <E1jw3ms-0006i6-Se@xenbits.xenproject.org>
Date: Thu, 16 Jul 2020 13:18:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "Xen.org security team" <security-team-members@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

--=separator
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

                    Xen Security Advisory XSA-329
                              version 2

             Linux ioperm bitmap context switching issues

UPDATES IN VERSION 2
====================

Public release.

ISSUE DESCRIPTION
=================

Linux 5.5 overhauled the internal state handling for the iopl() and ioperm()
system calls.  Unfortunately, one aspect of context switching wasn't wired up
correctly for the Xen PVOps case.

IMPACT
======

IO port permissions don't get rescinded when context switching to an
unprivileged task.  Therefore, all userspace can use the IO ports granted to
the most recently scheduled task with IO port permissions.

VULNERABLE SYSTEMS
==================

Only x86 guests are vulnerable.

All versions of Linux from 5.5 are potentially vulnerable.

Linux is only vulnerable when running as an x86 PV guest.  Linux is not
vulnerable when running as an x86 HVM/PVH guest.

The vulnerability can only be exploited in domains which have been granted
access to IO ports by Xen.  This is typically only the hardware domain, and
guests configured with PCI Passthrough.

MITIGATION
==========

Running only HVM/PVH guests avoids the vulnerability.

CREDITS
=======

This issue was discovered by Andy Lutomirski.

RESOLUTION
==========

Applying the appropriate attached patch resolves this issue.

xsa329.patch           Linux 5.5 and later

$ sha256sum xsa329*
cdb5ac9bfd21192b5965e8ec0a1c4fcf12d0a94a962a8158cd27810e6aa362f0  xsa329.patch
$

DEPLOYMENT DURING EMBARGO
=========================

Deployment of the patches and/or mitigations described above (or
others which are substantially similar) is permitted during the
embargo, even on public-facing systems with untrusted guest users and
administrators.

But: Distribution of updated software is prohibited (except to other
members of the predisclosure list).

Predisclosure list members who wish to deploy significantly different
patches and/or mitigations, please contact the Xen Project Security
Team.


(Note: this during-embargo deployment notice is retained in
post-embargo publicly released Xen Project advisories, even though it
is then no longer applicable.  This is to enable the community to have
oversight of the Xen Project Security Team's decisionmaking.)

For more information about permissible uses of embargoed information,
consult the Xen Project community's agreed Security Policy:
  http://www.xenproject.org/security-policy.html
-----BEGIN PGP SIGNATURE-----

iQFABAEBCAAqFiEEI+MiLBRfRHX6gGCng/4UyVfoK9kFAl8QU6EMHHBncEB4ZW4u
b3JnAAoJEIP+FMlX6CvZ/sEIAMiCOnz119KTlRU50HTwa4pvIgLphf9htTbPzHXS
iEb8yINqMxmep8NRcAzwFREQP+Z4Tue1upt31Vx0RPkFZpUklLuuBSXsV0JA7+UM
LSGyWhkzDdnfj6iPUHycGmFzRTzkbB7qfcMj7khCvuYtSNbTUdOgUq04ngZksrSJ
UMhfgUNKXawULKvVe7572L/AQTmMXK8eaolb+eWtf1U2pFkZQR8GWoLmiFbKLks2
X2tRUF4U4cHEBzxXRzYrD1ArWLajqK6hQmauwgkCCSowvCHoD1dTv55GlrlEo4od
MSB6YOVLl7HJuUw1GmwlKjA8XqStHq1Fi0urvlKCfHfK2Wk=
=MP+m
-----END PGP SIGNATURE-----

--=separator
Content-Type: application/octet-stream; name="xsa329.patch"
Content-Disposition: attachment; filename="xsa329.patch"
Content-Transfer-Encoding: base64

RnJvbTogQW5keSBMdXRvbWlyc2tpIDxsdXRvQGtlcm5lbC5vcmc+ClN1Ympl
Y3Q6IHg4Ni9pb3Blcm06IEZpeCBpbyBiaXRtYXAgaW52YWxpZGF0aW9uIG9u
IFhlbiBQVgoKdHNzX2ludmFsaWRhdGVfaW9fYml0bWFwKCkgd2Fzbid0IHdp
cmVkIHVwIHByb3Blcmx5IHRocm91Z2ggdGhlIHB2b3AKbWFjaGluZXJ5LCBz
byB0aGUgVFNTIGFuZCBYZW4ncyBpbyBiaXRtYXAgd291bGQgZ2V0IG91dCBv
ZiBzeW5jCndoZW5ldmVyIGRpc2FibGluZyBhIHZhbGlkIGlvIGJpdG1hcC4K
CkFkZCBhIG5ldyBwdm9wIGZvciB0c3NfaW52YWxpZGF0ZV9pb19iaXRtYXAo
KSB0byBmaXggaXQuCgpUaGlzIGlzIFhTQS0zMjkuCgpGaXhlczogMjJmZTVi
MDQzOWRkICgieDg2L2lvcGVybTogTW92ZSBUU1MgYml0bWFwIHVwZGF0ZSB0
byBleGl0IHRvIHVzZXIgd29yayIpClNpZ25lZC1vZmYtYnk6IEFuZHkgTHV0
b21pcnNraSA8bHV0b0BrZXJuZWwub3JnPgpSZXZpZXdlZC1ieTogSnVlcmdl
biBHcm9zcyA8amdyb3NzQHN1c2UuY29tPgpSZXZpZXdlZC1ieTogVGhvbWFz
IEdsZWl4bmVyIDx0Z2x4QGxpbnV0cm9uaXguZGU+CgpkaWZmIC0tZ2l0IGEv
YXJjaC94ODYvaW5jbHVkZS9hc20vaW9fYml0bWFwLmggYi9hcmNoL3g4Ni9p
bmNsdWRlL2FzbS9pb19iaXRtYXAuaAppbmRleCBhYzFhOTlmZmJkOGQuLjdm
MDgwZjVjN2RlZiAxMDA2NDQKLS0tIGEvYXJjaC94ODYvaW5jbHVkZS9hc20v
aW9fYml0bWFwLmgKKysrIGIvYXJjaC94ODYvaW5jbHVkZS9hc20vaW9fYml0
bWFwLmgKQEAgLTE5LDEyICsxOSwyOCBAQCBzdHJ1Y3QgdGFza19zdHJ1Y3Q7
CiB2b2lkIGlvX2JpdG1hcF9zaGFyZShzdHJ1Y3QgdGFza19zdHJ1Y3QgKnRz
ayk7CiB2b2lkIGlvX2JpdG1hcF9leGl0KHN0cnVjdCB0YXNrX3N0cnVjdCAq
dHNrKTsKIAorc3RhdGljIGlubGluZSB2b2lkIG5hdGl2ZV90c3NfaW52YWxp
ZGF0ZV9pb19iaXRtYXAodm9pZCkKK3sKKwkvKgorCSAqIEludmFsaWRhdGUg
dGhlIEkvTyBiaXRtYXAgYnkgbW92aW5nIGlvX2JpdG1hcF9iYXNlIG91dHNp
ZGUgdGhlCisJICogVFNTIGxpbWl0IHNvIGFueSBzdWJzZXF1ZW50IEkvTyBh
Y2Nlc3MgZnJvbSB1c2VyIHNwYWNlIHdpbGwKKwkgKiB0cmlnZ2VyIGEgI0dQ
LgorCSAqCisJICogVGhpcyBpcyBjb3JyZWN0IGV2ZW4gd2hlbiBWTUVYSVQg
cmV3cml0ZXMgdGhlIFRTUyBsaW1pdAorCSAqIHRvIDB4NjcgYXMgdGhlIG9u
bHkgcmVxdWlyZW1lbnQgaXMgdGhhdCB0aGUgYmFzZSBwb2ludHMKKwkgKiBv
dXRzaWRlIHRoZSBsaW1pdC4KKwkgKi8KKwl0aGlzX2NwdV93cml0ZShjcHVf
dHNzX3J3Lng4Nl90c3MuaW9fYml0bWFwX2Jhc2UsCisJCSAgICAgICBJT19C
SVRNQVBfT0ZGU0VUX0lOVkFMSUQpOworfQorCiB2b2lkIG5hdGl2ZV90c3Nf
dXBkYXRlX2lvX2JpdG1hcCh2b2lkKTsKIAogI2lmZGVmIENPTkZJR19QQVJB
VklSVF9YWEwKICNpbmNsdWRlIDxhc20vcGFyYXZpcnQuaD4KICNlbHNlCiAj
ZGVmaW5lIHRzc191cGRhdGVfaW9fYml0bWFwIG5hdGl2ZV90c3NfdXBkYXRl
X2lvX2JpdG1hcAorI2RlZmluZSB0c3NfaW52YWxpZGF0ZV9pb19iaXRtYXAg
bmF0aXZlX3Rzc19pbnZhbGlkYXRlX2lvX2JpdG1hcAogI2VuZGlmCiAKICNl
bHNlCmRpZmYgLS1naXQgYS9hcmNoL3g4Ni9pbmNsdWRlL2FzbS9wYXJhdmly
dC5oIGIvYXJjaC94ODYvaW5jbHVkZS9hc20vcGFyYXZpcnQuaAppbmRleCA1
Y2E1ZDI5N2RmNzUuLjNkMmFmZWNkZTUwYyAxMDA2NDQKLS0tIGEvYXJjaC94
ODYvaW5jbHVkZS9hc20vcGFyYXZpcnQuaAorKysgYi9hcmNoL3g4Ni9pbmNs
dWRlL2FzbS9wYXJhdmlydC5oCkBAIC0zMDIsNiArMzAyLDExIEBAIHN0YXRp
YyBpbmxpbmUgdm9pZCB3cml0ZV9pZHRfZW50cnkoZ2F0ZV9kZXNjICpkdCwg
aW50IGVudHJ5LCBjb25zdCBnYXRlX2Rlc2MgKmcpCiB9CiAKICNpZmRlZiBD
T05GSUdfWDg2X0lPUExfSU9QRVJNCitzdGF0aWMgaW5saW5lIHZvaWQgdHNz
X2ludmFsaWRhdGVfaW9fYml0bWFwKHZvaWQpCit7CisJUFZPUF9WQ0FMTDAo
Y3B1LmludmFsaWRhdGVfaW9fYml0bWFwKTsKK30KKwogc3RhdGljIGlubGlu
ZSB2b2lkIHRzc191cGRhdGVfaW9fYml0bWFwKHZvaWQpCiB7CiAJUFZPUF9W
Q0FMTDAoY3B1LnVwZGF0ZV9pb19iaXRtYXApOwpkaWZmIC0tZ2l0IGEvYXJj
aC94ODYvaW5jbHVkZS9hc20vcGFyYXZpcnRfdHlwZXMuaCBiL2FyY2gveDg2
L2luY2x1ZGUvYXNtL3BhcmF2aXJ0X3R5cGVzLmgKaW5kZXggNzMyZjYyZTA0
ZGRiLi44ZGZjYjI1MDhlNmQgMTAwNjQ0Ci0tLSBhL2FyY2gveDg2L2luY2x1
ZGUvYXNtL3BhcmF2aXJ0X3R5cGVzLmgKKysrIGIvYXJjaC94ODYvaW5jbHVk
ZS9hc20vcGFyYXZpcnRfdHlwZXMuaApAQCAtMTQxLDYgKzE0MSw3IEBAIHN0
cnVjdCBwdl9jcHVfb3BzIHsKIAl2b2lkICgqbG9hZF9zcDApKHVuc2lnbmVk
IGxvbmcgc3AwKTsKIAogI2lmZGVmIENPTkZJR19YODZfSU9QTF9JT1BFUk0K
Kwl2b2lkICgqaW52YWxpZGF0ZV9pb19iaXRtYXApKHZvaWQpOwogCXZvaWQg
KCp1cGRhdGVfaW9fYml0bWFwKSh2b2lkKTsKICNlbmRpZgogCmRpZmYgLS1n
aXQgYS9hcmNoL3g4Ni9rZXJuZWwvcGFyYXZpcnQuYyBiL2FyY2gveDg2L2tl
cm5lbC9wYXJhdmlydC5jCmluZGV4IDY3NGE3ZDY2ZDk2MC4uZGUyMTM4YmEz
OGU1IDEwMDY0NAotLS0gYS9hcmNoL3g4Ni9rZXJuZWwvcGFyYXZpcnQuYwor
KysgYi9hcmNoL3g4Ni9rZXJuZWwvcGFyYXZpcnQuYwpAQCAtMzI0LDcgKzMy
NCw4IEBAIHN0cnVjdCBwYXJhdmlydF9wYXRjaF90ZW1wbGF0ZSBwdl9vcHMg
PSB7CiAJLmNwdS5zd2FwZ3MJCT0gbmF0aXZlX3N3YXBncywKIAogI2lmZGVm
IENPTkZJR19YODZfSU9QTF9JT1BFUk0KLQkuY3B1LnVwZGF0ZV9pb19iaXRt
YXAJPSBuYXRpdmVfdHNzX3VwZGF0ZV9pb19iaXRtYXAsCisJLmNwdS5pbnZh
bGlkYXRlX2lvX2JpdG1hcAk9IG5hdGl2ZV90c3NfaW52YWxpZGF0ZV9pb19i
aXRtYXAsCisJLmNwdS51cGRhdGVfaW9fYml0bWFwCQk9IG5hdGl2ZV90c3Nf
dXBkYXRlX2lvX2JpdG1hcCwKICNlbmRpZgogCiAJLmNwdS5zdGFydF9jb250
ZXh0X3N3aXRjaAk9IHBhcmF2aXJ0X25vcCwKZGlmZiAtLWdpdCBhL2FyY2gv
eDg2L2tlcm5lbC9wcm9jZXNzLmMgYi9hcmNoL3g4Ni9rZXJuZWwvcHJvY2Vz
cy5jCmluZGV4IGYzNjJjZTBkNWFjMC4uZmU2N2RiZDc2ZTUxIDEwMDY0NAot
LS0gYS9hcmNoL3g4Ni9rZXJuZWwvcHJvY2Vzcy5jCisrKyBiL2FyY2gveDg2
L2tlcm5lbC9wcm9jZXNzLmMKQEAgLTMyMiwyMCArMzIyLDYgQEAgdm9pZCBh
cmNoX3NldHVwX25ld19leGVjKHZvaWQpCiB9CiAKICNpZmRlZiBDT05GSUdf
WDg2X0lPUExfSU9QRVJNCi1zdGF0aWMgaW5saW5lIHZvaWQgdHNzX2ludmFs
aWRhdGVfaW9fYml0bWFwKHN0cnVjdCB0c3Nfc3RydWN0ICp0c3MpCi17Ci0J
LyoKLQkgKiBJbnZhbGlkYXRlIHRoZSBJL08gYml0bWFwIGJ5IG1vdmluZyBp
b19iaXRtYXBfYmFzZSBvdXRzaWRlIHRoZQotCSAqIFRTUyBsaW1pdCBzbyBh
bnkgc3Vic2VxdWVudCBJL08gYWNjZXNzIGZyb20gdXNlciBzcGFjZSB3aWxs
Ci0JICogdHJpZ2dlciBhICNHUC4KLQkgKgotCSAqIFRoaXMgaXMgY29ycmVj
dCBldmVuIHdoZW4gVk1FWElUIHJld3JpdGVzIHRoZSBUU1MgbGltaXQKLQkg
KiB0byAweDY3IGFzIHRoZSBvbmx5IHJlcXVpcmVtZW50IGlzIHRoYXQgdGhl
IGJhc2UgcG9pbnRzCi0JICogb3V0c2lkZSB0aGUgbGltaXQuCi0JICovCi0J
dHNzLT54ODZfdHNzLmlvX2JpdG1hcF9iYXNlID0gSU9fQklUTUFQX09GRlNF
VF9JTlZBTElEOwotfQotCiBzdGF0aWMgaW5saW5lIHZvaWQgc3dpdGNoX3Rv
X2JpdG1hcCh1bnNpZ25lZCBsb25nIHRpZnApCiB7CiAJLyoKQEAgLTM0Niw3
ICszMzIsNyBAQCBzdGF0aWMgaW5saW5lIHZvaWQgc3dpdGNoX3RvX2JpdG1h
cCh1bnNpZ25lZCBsb25nIHRpZnApCiAJICogdXNlciBtb2RlLgogCSAqLwog
CWlmICh0aWZwICYgX1RJRl9JT19CSVRNQVApCi0JCXRzc19pbnZhbGlkYXRl
X2lvX2JpdG1hcCh0aGlzX2NwdV9wdHIoJmNwdV90c3NfcncpKTsKKwkJdHNz
X2ludmFsaWRhdGVfaW9fYml0bWFwKCk7CiB9CiAKIHN0YXRpYyB2b2lkIHRz
c19jb3B5X2lvX2JpdG1hcChzdHJ1Y3QgdHNzX3N0cnVjdCAqdHNzLCBzdHJ1
Y3QgaW9fYml0bWFwICppb2JtKQpAQCAtMzgwLDcgKzM2Niw3IEBAIHZvaWQg
bmF0aXZlX3Rzc191cGRhdGVfaW9fYml0bWFwKHZvaWQpCiAJdTE2ICpiYXNl
ID0gJnRzcy0+eDg2X3Rzcy5pb19iaXRtYXBfYmFzZTsKIAogCWlmICghdGVz
dF90aHJlYWRfZmxhZyhUSUZfSU9fQklUTUFQKSkgewotCQl0c3NfaW52YWxp
ZGF0ZV9pb19iaXRtYXAodHNzKTsKKwkJbmF0aXZlX3Rzc19pbnZhbGlkYXRl
X2lvX2JpdG1hcCgpOwogCQlyZXR1cm47CiAJfQogCmRpZmYgLS1naXQgYS9h
cmNoL3g4Ni94ZW4vZW5saWdodGVuX3B2LmMgYi9hcmNoL3g4Ni94ZW4vZW5s
aWdodGVuX3B2LmMKaW5kZXggYWNjNDlmYTZhMDk3Li5jNDc1YTExYzY2MjAg
MTAwNjQ0Ci0tLSBhL2FyY2gveDg2L3hlbi9lbmxpZ2h0ZW5fcHYuYworKysg
Yi9hcmNoL3g4Ni94ZW4vZW5saWdodGVuX3B2LmMKQEAgLTg1MCw2ICs4NTAs
MTcgQEAgc3RhdGljIHZvaWQgeGVuX2xvYWRfc3AwKHVuc2lnbmVkIGxvbmcg
c3AwKQogfQogCiAjaWZkZWYgQ09ORklHX1g4Nl9JT1BMX0lPUEVSTQorc3Rh
dGljIHZvaWQgeGVuX2ludmFsaWRhdGVfaW9fYml0bWFwKHZvaWQpCit7CisJ
c3RydWN0IHBoeXNkZXZfc2V0X2lvYml0bWFwIGlvYml0bWFwID0geworCQku
Yml0bWFwID0gMCwKKwkJLm5yX3BvcnRzID0gMCwKKwl9OworCisJbmF0aXZl
X3Rzc19pbnZhbGlkYXRlX2lvX2JpdG1hcCgpOworCUhZUEVSVklTT1JfcGh5
c2Rldl9vcChQSFlTREVWT1Bfc2V0X2lvYml0bWFwLCAmaW9iaXRtYXApOwor
fQorCiBzdGF0aWMgdm9pZCB4ZW5fdXBkYXRlX2lvX2JpdG1hcCh2b2lkKQog
ewogCXN0cnVjdCBwaHlzZGV2X3NldF9pb2JpdG1hcCBpb2JpdG1hcDsKQEAg
LTEwNzksNiArMTA5MCw3IEBAIHN0YXRpYyBjb25zdCBzdHJ1Y3QgcHZfY3B1
X29wcyB4ZW5fY3B1X29wcyBfX2luaXRjb25zdCA9IHsKIAkubG9hZF9zcDAg
PSB4ZW5fbG9hZF9zcDAsCiAKICNpZmRlZiBDT05GSUdfWDg2X0lPUExfSU9Q
RVJNCisJLmludmFsaWRhdGVfaW9fYml0bWFwID0geGVuX2ludmFsaWRhdGVf
aW9fYml0bWFwLAogCS51cGRhdGVfaW9fYml0bWFwID0geGVuX3VwZGF0ZV9p
b19iaXRtYXAsCiAjZW5kaWYKIAkuaW9fZGVsYXkgPSB4ZW5faW9fZGVsYXks
Cg==

--=separator--


From xen-devel-bounces@lists.xenproject.org Thu Jul 16 14:20:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jul 2020 14:20:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jw4kV-0002nM-Ut; Thu, 16 Jul 2020 14:20:35 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=AXgs=A3=citrix.com=george.dunlap@srs-us1.protection.inumbo.net>)
 id 1jw4kU-0002nH-Ll
 for xen-devel@lists.xenproject.org; Thu, 16 Jul 2020 14:20:34 +0000
X-Inumbo-ID: 8256c458-c76f-11ea-94be-12813bfff9fa
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 8256c458-c76f-11ea-94be-12813bfff9fa;
 Thu, 16 Jul 2020 14:20:33 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1594909234;
 h=from:to:cc:subject:date:message-id:references:
 in-reply-to:content-id:content-transfer-encoding: mime-version;
 bh=mI+n4MWWbp6/xnt7yQROW1Mv+5F30+fTKHNKqAe+xuY=;
 b=HpmbW+I3nfCV7tmDlb4s1PgOvK6UupQ/2/3Q5U7DkiakrEJoxuWv50ay
 qvM7Tuhq+3CXB3/TPv+5AX5O4hG4PB9wqY7qRkQHz4X+OWL6pygVeq0Gt
 5kpUPCKsLrtWyEhaLjgLREIdgnwlHh7jTRQNkqL2xpfEasaNs06pQpryY U=;
Authentication-Results: esa3.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: SXJJF0O1jF8+i76+bT9n60MxMkBvi3U/stUfieunfvW1GusEiy5DWWoFc0eFlu0gCtzUHrn7jc
 ycv1TwOaakKxCnqa34L6Uohft09XhQxr4yL9DCxpHn8SZazJ5VCAJX3AYCjbulxiuVpsfh9NaX
 ZY88kOyLujxxKkdeV1fywZCDGFQR7IAXLQuVYCwQLGDBz1jd1PtgGmrjS+KXSGHkAWkU5QWiJ+
 uRI/p86XImVxasZw8b1ZhKu1/rcT3HlbOyu7M2r/H3M347exBNm2BGdtmG+hR7BDhlt8dujqm0
 dCQ=
X-SBRS: 2.7
X-MesageID: 22532812
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,359,1589256000"; d="scan'208";a="22532812"
From: George Dunlap <George.Dunlap@citrix.com>
To: =?utf-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Subject: Re: [PATCH v2] docs: specify stability of hypfs path documentation
Thread-Topic: [PATCH v2] docs: specify stability of hypfs path documentation
Thread-Index: AQHWWR59fvtpmEuro06JsoMrFTmX86kFdV+AgAMhtYCAAUgfAIAABYeAgAAO0gCAAALcAIAALlYA
Date: Thu, 16 Jul 2020 14:20:28 +0000
Message-ID: <A8D7C0A3-BA48-40F2-B290-C73BC1CDEBD1@citrix.com>
References: <20200713140338.16172-1-jgross@suse.com>
 <8a96b1b9-cbcb-557a-5b82-661bbe40fe25@suse.com>
 <68F727A8-29B8-4846-8BE9-BD4F6E0DC60D@citrix.com>
 <9f5e86cc-4f64-982b-d84b-1de6b2739a2b@suse.com>
 <4c681c7c-be69-7e1a-4cd9-c9e05fe85372@suse.com>
 <2567da9b-be43-3f0d-e213-562b5454f4b7@suse.com>
 <757f5f78-6ec9-c740-18bf-a01105d552d7@suse.com>
In-Reply-To: <757f5f78-6ec9-c740-18bf-a01105d552d7@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-mailer: Apple Mail (2.3608.80.23.2.2)
x-ms-exchange-messagesentrepresentingtype: 1
x-ms-exchange-transport-fromentityheader: Hosted
Content-Type: text/plain; charset="utf-8"
Content-ID: <1B28F961F9ACD34392C5C594BE9E3004@citrix.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien
 Grall <julien@xen.org>, Wei Liu <wl@xen.org>, Paul Durrant <paul@xen.org>,
 Andrew Cooper <Andrew.Cooper3@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Ian Jackson <Ian.Jackson@citrix.com>,
 "open list:X86" <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>


> On Jul 16, 2020, at 12:34 PM, Jürgen Groß <jgross@suse.com> wrote:
> 
> On 16.07.20 13:24, Jan Beulich wrote:
>> On 16.07.2020 12:31, Jürgen Groß wrote:
>>> On 16.07.20 12:11, Jan Beulich wrote:
>>>> On 15.07.2020 16:37, George Dunlap wrote:
>>>>> IT sounds like you’re saying:
>>>>> 
>>>>> 1. Paths listed without conditions will always be available
>>>>> 
>>>>> 2. Paths listed with conditions may be extended: i.e., a node currently listed as PV might also become available for HVM guests
>>>>> 
>>>>> 3. Paths listed with conditions might have those conditions reduced, but will never entirely disappear.  So something currently listed as PV might be reduced to CONFIG_HAS_FOO, but won’t be completely removed.
>>>>> 
>>>>> Is that what you meant?
>>>> 
>>>> I see Jürgen replied "yes" to this, but I'm not sure about 1.
>>>> above: I think it's quite reasonable to expect that paths without
>>>> condition may gain a condition. Just like paths now having a
>>>> condition and (perhaps temporarily) losing it shouldn't all of
>>>> the sudden become "always available" when they weren't meant to
>>>> be.
>>>> 
>>>> As far a 3. goes, I'm also unsure in how far this is any better
>>>> stability wise (from a consumer pov) than allowing paths to
>>>> entirely disappear.
>>> 
>>> The idea is that any user tool using hypfs can rely on paths under 1 to
>>> exist, while the ones under 3 might not be there due to the hypervisor
>>> config or the used system.
>>> 
>>> A path not being allowed to entirely disappear ensures that it remains
>>> in the documentation, so the same path can't be reused for something
>>> different in future.
>> And then how do you deal with a condition getting dropped, and
>> later wanting to get re-added? Do we need a placeholder condition
>> like [ALWAYS] or [TRUE]?
> 
> Dropping a condition has to be considered very carefully, same as
> introducing a new path without any condition.
> 
> In worst case you can still go with [CONFIG_HYPFS].

Couldn’t we just have a section of the document for dead paths that aren’t allowed to be used?

Alternately, we could have a tag for entries we don’t want used anymore; [DEAD] or [OBSOLETE] maybe? [DEFUNCT]? [REMOVED]?

So I think I’d write a separate section, like this:

~~
# Stability

Path *presence* is not stable, but path *meaning* is always stable: if a tool you write finds a path present, it can rely on behavior in future versions of the hypervisors, and in different configurations.  Specifically:

1. Conditions under which paths are used may be extended, restricted, or removed.  For example, a path that’s always available only on ARM systems may become available on x86; or a path available on both systems may be restricted to only appearing on ARM systems.  Paths may also disappear entirely.

2. However, the meaning of a path will never change.  If a path is present, it will always have exactly the meaning that it always had.  In order to maintain this, removed paths should be retained with the tag [REMOVED].  The path may be restored *only* if the restored version of the path is compatible with the previous functionality.
~~~

Thoughts?

 -George


From xen-devel-bounces@lists.xenproject.org Thu Jul 16 14:32:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jul 2020 14:32:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jw4vV-00046K-Di; Thu, 16 Jul 2020 14:31:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=oOKz=A3=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jw4vU-000467-3A
 for xen-devel@lists.xenproject.org; Thu, 16 Jul 2020 14:31:56 +0000
X-Inumbo-ID: 184e0614-c771-11ea-b7bb-bc764e2007e4
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 184e0614-c771-11ea-b7bb-bc764e2007e4;
 Thu, 16 Jul 2020 14:31:54 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1594909915;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:in-reply-to;
 bh=mX6+Al+DCPGvTaGpH5vyQE9N5lqASJCT9hhRoUNp4AQ=;
 b=N7ALdckvL3cNjFZ9rRnCz4vnct7RcpZxpUIiXspkQPnyopgatYA7sBMf
 bEs7mOAxjc/nD6Jcfvc99lpRJIcel4GlMuriTUYd1NwN+R/TpHppCjI2z
 YPezcqhxhQZGVIOX37kmAbDlRyu9fRAjcMdTQsZPewMoTeTX5ht9StBly 8=;
Authentication-Results: esa1.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: ru5cYZopSMX+x0tUfo7K/rPcyH9pFqhrn6q0L/yjTD+mcvkF3r0V+ueHGisPhcm1Q2x1iPoTct
 drdUyDBaYlt5X2tXx6MO4VM9zIfkvvCKpTaClIAgXwDXYK9LwBUKh42mBw1tHRC2x1tBYu2FWk
 dUYLEEYiTKeoaQoULB+qwPzCaUH11CH/lqmhS1cnDT1LxWYkq/qpPfXxmXyso+9GvOBXgXXVzW
 7lpcgrDFX5tqBLKdH2v0TDSFmxvruYlYsBaW0OPqzg0rxEZODFhYdlJQst3Y8uPCsx8lEpXRJh
 Yrs=
X-SBRS: 2.7
X-MesageID: 22852512
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,359,1589256000"; d="scan'208";a="22852512"
Date: Thu, 16 Jul 2020 16:31:45 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Subject: Re: [PATCH v2 2/2] x86: detect CMOS aliasing on ports other than
 0x70/0x71
Message-ID: <20200716143145.GP7191@Air-de-Roger>
References: <416ac9b1-93d1-81a2-be19-d58d509fdfec@suse.com>
 <72a63cba-bfdb-ae3c-284b-8ba5b9d7f7a9@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
In-Reply-To: <72a63cba-bfdb-ae3c-284b-8ba5b9d7f7a9@suse.com>
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Wed, Jul 15, 2020 at 11:47:56AM +0200, Jan Beulich wrote:
> ... in order to also intercept accesses through the alias ports.
> 
> Also stop intercepting accesses to the CMOS ports if we won't ourselves
> use the CMOS RTC.

I think you are missing the registration of the aliased ports in
rtc_init for a PVH hardware domain; hw_rtc_io will currently only get
called for accesses to 0x70-0x71.

> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> v2: Re-base.
> 
> --- a/xen/arch/x86/physdev.c
> +++ b/xen/arch/x86/physdev.c
> @@ -670,6 +670,80 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_H
>      return ret;
>  }
>  
> +#ifndef COMPAT
> +#include <asm/mc146818rtc.h>
> +
> +unsigned int __read_mostly cmos_alias_mask;
> +
> +static int __init probe_cmos_alias(void)
> +{
> +    unsigned int i, offs;
> +
> +    if ( acpi_gbl_FADT.boot_flags & ACPI_FADT_NO_CMOS_RTC )
> +        return 0;
> +
> +    for ( offs = 2; offs < 8; offs <<= 1 )
> +    {
> +        bool read = true;
> +
> +        for ( i = RTC_REG_D + 1; i < 0x80; ++i )
> +        {
> +            uint8_t normal, alt;
> +            unsigned long flags;
> +
> +            if ( i == acpi_gbl_FADT.century )
> +                continue;

I'm missing something: why do you exclude the century register from
the comparison?

> @@ -2009,37 +2009,33 @@ int __hwdom_init xen_in_range(unsigned l
>  static int __hwdom_init io_bitmap_cb(unsigned long s, unsigned long e,
>                                       void *ctx)
>  {
> -    struct domain *d = ctx;
> +    const struct domain *d = ctx;

Urg, it's kind of weird to constify d ...

>      unsigned int i;
>  
>      ASSERT(e <= INT_MAX);
>      for ( i = s; i <= e; i++ )
> -        __clear_bit(i, d->arch.hvm.io_bitmap);
> +        if ( admin_io_okay(i, 1, d) )
> +            __clear_bit(i, d->arch.hvm.io_bitmap);

... when you are modifying the bitmap here.

>  
>      return 0;
>  }
>  
>  void __hwdom_init setup_io_bitmap(struct domain *d)
>  {
> -    int rc;
> +    if ( !is_hvm_domain(d) )
> +        return;
>  
> -    if ( is_hvm_domain(d) )
> -    {
> -        bitmap_fill(d->arch.hvm.io_bitmap, 0x10000);
> -        rc = rangeset_report_ranges(d->arch.ioport_caps, 0, 0x10000,
> -                                    io_bitmap_cb, d);
> -        BUG_ON(rc);
> -        /*
> -         * NB: we need to trap accesses to 0xcf8 in order to intercept
> -         * 4 byte accesses, that need to be handled by Xen in order to
> -         * keep consistency.
> -         * Access to 1 byte RTC ports also needs to be trapped in order
> -         * to keep consistency with PV.
> -         */
> -        __set_bit(0xcf8, d->arch.hvm.io_bitmap);
> -        __set_bit(RTC_PORT(0), d->arch.hvm.io_bitmap);
> -        __set_bit(RTC_PORT(1), d->arch.hvm.io_bitmap);
> -    }
> +    bitmap_fill(d->arch.hvm.io_bitmap, 0x10000);
> +    if ( rangeset_report_ranges(d->arch.ioport_caps, 0, 0x10000,
> +                                io_bitmap_cb, d) )
> +        BUG();

You can use BUG_ON() directly; there's no need for the if. IIRC it's safe to
call admin_io_okay (and thus ioports_access_permitted) when already
holding the rangeset lock, as both are read-lockers and can safely
recurse.

> +
> +    /*
> +     * We need to trap 4-byte accesses to 0xcf8 (see admin_io_okay(),
> +     * guest_io_read(), and guest_io_write()), which isn't covered by
> +     * the admin_io_okay() check in io_bitmap_cb().
> +     */
> +    __set_bit(0xcf8, d->arch.hvm.io_bitmap);
>  }
>  
>  /*
> --- a/xen/arch/x86/time.c
> +++ b/xen/arch/x86/time.c
> @@ -1092,7 +1092,10 @@ static unsigned long get_cmos_time(void)
>          if ( seconds < 60 )
>          {
>              if ( rtc.sec != seconds )
> +            {
>                  cmos_rtc_probe = false;
> +                acpi_gbl_FADT.boot_flags &= ~ACPI_FADT_NO_CMOS_RTC;

Do you need to set this flag also when using the EFI runtime services
to get the time in get_cmos_time? In that case the RTC is not used,
and hence could be handed over to the hardware domain?

> +            }
>              break;
>          }
>  
> @@ -1114,7 +1117,7 @@ unsigned int rtc_guest_read(unsigned int
>      unsigned long flags;
>      unsigned int data = ~0;
>  
> -    switch ( port )
> +    switch ( port & ~cmos_alias_mask )
>      {
>      case RTC_PORT(0):
>          /*
> @@ -1126,11 +1129,12 @@ unsigned int rtc_guest_read(unsigned int
>          break;
>  
>      case RTC_PORT(1):
> -        if ( !ioports_access_permitted(currd, RTC_PORT(0), RTC_PORT(1)) )
> +        if ( !ioports_access_permitted(currd, port - 1, port) )
>              break;
>          spin_lock_irqsave(&rtc_lock, flags);
> -        outb(currd->arch.cmos_idx & 0x7f, RTC_PORT(0));
> -        data = inb(RTC_PORT(1));
> +        outb(currd->arch.cmos_idx & (0xff >> (port == RTC_PORT(1))),

Why do you only mask this for accesses to the non-aliased ports? If
the RTC is aliased, don't you also want to mask the aliased accesses
in the same way?

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Thu Jul 16 14:34:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jul 2020 14:34:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jw4y1-0004Jw-Si; Thu, 16 Jul 2020 14:34:33 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=1QOp=A3=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jw4y0-0004Jr-AB
 for xen-devel@lists.xenproject.org; Thu, 16 Jul 2020 14:34:32 +0000
X-Inumbo-ID: 758204e8-c771-11ea-94bf-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 758204e8-c771-11ea-94bf-12813bfff9fa;
 Thu, 16 Jul 2020 14:34:31 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 035C7B57E;
 Thu, 16 Jul 2020 14:34:34 +0000 (UTC)
Subject: Re: [PATCH v2] docs: specify stability of hypfs path documentation
To: George Dunlap <George.Dunlap@citrix.com>
References: <20200713140338.16172-1-jgross@suse.com>
 <8a96b1b9-cbcb-557a-5b82-661bbe40fe25@suse.com>
 <68F727A8-29B8-4846-8BE9-BD4F6E0DC60D@citrix.com>
 <9f5e86cc-4f64-982b-d84b-1de6b2739a2b@suse.com>
 <4c681c7c-be69-7e1a-4cd9-c9e05fe85372@suse.com>
 <2567da9b-be43-3f0d-e213-562b5454f4b7@suse.com>
 <757f5f78-6ec9-c740-18bf-a01105d552d7@suse.com>
 <A8D7C0A3-BA48-40F2-B290-C73BC1CDEBD1@citrix.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <fd8902f6-0172-2f1f-aae8-fec096d4bff5@suse.com>
Date: Thu, 16 Jul 2020 16:34:29 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.9.0
MIME-Version: 1.0
In-Reply-To: <A8D7C0A3-BA48-40F2-B290-C73BC1CDEBD1@citrix.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Paul Durrant <paul@xen.org>,
 Andrew Cooper <Andrew.Cooper3@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Ian Jackson <Ian.Jackson@citrix.com>,
 "open list:X86" <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 16.07.20 16:20, George Dunlap wrote:
> 
> 
>> On Jul 16, 2020, at 12:34 PM, Jürgen Groß <jgross@suse.com> wrote:
>>
>> On 16.07.20 13:24, Jan Beulich wrote:
>>> On 16.07.2020 12:31, Jürgen Groß wrote:
>>>> On 16.07.20 12:11, Jan Beulich wrote:
>>>>> On 15.07.2020 16:37, George Dunlap wrote:
>>>>>> IT sounds like you’re saying:
>>>>>>
>>>>>> 1. Paths listed without conditions will always be available
>>>>>>
>>>>>> 2. Paths listed with conditions may be extended: i.e., a node currently listed as PV might also become available for HVM guests
>>>>>>
>>>>>> 3. Paths listed with conditions might have those conditions reduced, but will never entirely disappear.  So something currently listed as PV might be reduced to CONFIG_HAS_FOO, but won’t be completely removed.
>>>>>>
>>>>>> Is that what you meant?
>>>>>
>>>>> I see Jürgen replied "yes" to this, but I'm not sure about 1.
>>>>> above: I think it's quite reasonable to expect that paths without
>>>>> condition may gain a condition. Just like paths now having a
>>>>> condition and (perhaps temporarily) losing it shouldn't all of
>>>>> the sudden become "always available" when they weren't meant to
>>>>> be.
>>>>>
>>>>> As far a 3. goes, I'm also unsure in how far this is any better
>>>>> stability wise (from a consumer pov) than allowing paths to
>>>>> entirely disappear.
>>>>
>>>> The idea is that any user tool using hypfs can rely on paths under 1 to
>>>> exist, while the ones under 3 might not be there due to the hypervisor
>>>> config or the used system.
>>>>
>>>> A path not being allowed to entirely disappear ensures that it remains
>>>> in the documentation, so the same path can't be reused for something
>>>> different in future.
>>> And then how do you deal with a condition getting dropped, and
>>> later wanting to get re-added? Do we need a placeholder condition
>>> like [ALWAYS] or [TRUE]?
>>
>> Dropping a condition has to be considered very carefully, same as
>> introducing a new path without any condition.
>>
>> In worst case you can still go with [CONFIG_HYPFS].
> 
> Couldn’t we just have a section of the document for dead paths that aren’t allowed to be used?
> 
> Alternately, we could have a tag for entries we don’t want used anymore; [DEAD] or [OBSOLETE] maybe? [DEFUNCT]? [REMOVED]?
> 
> So I think I’d write a separate section, like this:
> 
> ~~
> # Stability
> 
> Path *presence* is not stable, but path *meaning* is always stable: if a tool you write finds a path present, it can rely on behavior in future versions of the hypervisors, and in different configurations.  Specifically:
> 
> 1. Conditions under which paths are used may be extended, restricted, or removed.  For example, a path that’s always available only on ARM systems may become available on x86; or a path available on both systems may be restricted to only appearing on ARM systems.  Paths may also disappear entirely.
> 
> 2. However, the meaning of a path will never change.  If a path is present, it will always have exactly the meaning that it always had.  In order to maintain this, removed paths should be retained with the tag [REMOVED].  The path may be restored *only* if the restored version of the path is compatible with the previous functionality.
> ~~~
> 
> Thoughts?

Would work for me, too.


Juergen



From xen-devel-bounces@lists.xenproject.org Thu Jul 16 14:36:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jul 2020 14:36:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jw4zi-0004Tf-9K; Thu, 16 Jul 2020 14:36:18 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=sYN5=A3=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jw4zg-0004TW-Kg
 for xen-devel@lists.xenproject.org; Thu, 16 Jul 2020 14:36:16 +0000
X-Inumbo-ID: b073bc72-c771-11ea-b7bb-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b073bc72-c771-11ea-b7bb-bc764e2007e4;
 Thu, 16 Jul 2020 14:36:09 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=vOV9SHw/6GWcD4t+XZEaTKHdvc3WLw9BftX6h6cWfAE=; b=ZrKpM56oEAC78ZGSBfN4wYCoG
 TDNUDp6Rb5FqMPh91f52q9wGILwQjXlq2Zm1dOSl0ELhwitl/jf79mysamW1g71R7Qa0GTsdV53UO
 kE/Q6XdS1LDgNC/akSCyKrWBStVMDwchip0U91auPEOvtDNes7CoX7hltfUCptatMXLl8=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jw4zZ-0007xQ-3n; Thu, 16 Jul 2020 14:36:09 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jw4zY-0004Bt-PU; Thu, 16 Jul 2020 14:36:08 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jw4zY-0004bX-M3; Thu, 16 Jul 2020 14:36:08 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151930-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 151930: regressions - FAIL
X-Osstest-Failures: linux-linus:test-arm64-arm64-libvirt-xsm:guest-start/debian.repeat:fail:regression
 linux-linus:test-amd64-amd64-examine:memdisk-try-append:fail:regression
 linux-linus:build-i386-pvops:kernel-build:fail:regression
 linux-linus:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:allowable
 linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-pair:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-examine:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: linux=994e99a96c9b502b3f6ee411457007cd52051cf5
X-Osstest-Versions-That: linux=1b5044021070efa3259f3e9548dc35d1eb6aa844
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 16 Jul 2020 14:36:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151930 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151930/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-libvirt-xsm 16 guest-start/debian.repeat fail REGR. vs. 151214
 test-amd64-amd64-examine      4 memdisk-try-append       fail REGR. vs. 151214
 build-i386-pvops              6 kernel-build             fail REGR. vs. 151214

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds    16 guest-start/debian.repeat fail REGR. vs. 151214

Tests which did not succeed, but are not blocking:
 test-amd64-i386-qemut-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemut-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemut-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-i386-examine       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 151214
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 151214
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 151214
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 151214
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 151214
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 151214
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                994e99a96c9b502b3f6ee411457007cd52051cf5
baseline version:
 linux                1b5044021070efa3259f3e9548dc35d1eb6aa844

Last test of basis   151214  2020-06-18 02:27:46 Z   28 days
Failing since        151236  2020-06-19 19:10:35 Z   26 days   42 attempts
Testing same since   151930  2020-07-15 23:40:54 Z    0 days    1 attempts

------------------------------------------------------------
732 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             fail    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         blocked 
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 38887 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Jul 16 14:42:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jul 2020 14:42:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jw55i-0005Pz-47; Thu, 16 Jul 2020 14:42:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=oOKz=A3=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jw55g-0005Pu-0F
 for xen-devel@lists.xenproject.org; Thu, 16 Jul 2020 14:42:28 +0000
X-Inumbo-ID: 915f599e-c772-11ea-bb8b-bc764e2007e4
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 915f599e-c772-11ea-bb8b-bc764e2007e4;
 Thu, 16 Jul 2020 14:42:27 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1594910547;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:content-transfer-encoding:in-reply-to;
 bh=6FUZUHpNfVY8WNpRF1zA5bVD5ePWW6OJT4WzlFNGH/k=;
 b=f50lpYkLpZzEB3ou1U3wHu9CfBMX+9ud2cQ+dGxah4PbHshGMh+JfAqm
 9K3bdkxmXAXIxF/yH73RmD7mdmzb5oERAjhMQx1XQPjpDAeOw/K+b/0es
 c2xIyUHgStx4z7N4Fb5Fri3sNMYciE7alRzNTpzoN6K7eTKQGry8wJRNP I=;
Authentication-Results: esa5.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: UKu99vf4dhACvHwiK1lYsBPqdkxTEnyBz22rwdQydEKJS/kCbSRuP+uUNlWXKvqV27rn07lVbU
 xZuEqqtlRwq0a1xrhXnR/HNaPOmydcOrR12Mwi8ySjjB8Fcz6D9k0CLSfZGTW0Xrz9CcR0hhm5
 Cg1GDpWXORxH88PDxUUo9A5XVDzXoC92FzptmCuNhjwmi88uuDWAyUZtSjwADL+M33s5naPz86
 cyfOGf0qCHLHDOPGHoBROry1h4NuHpwsPzE/42tOgGKKGWik03xbJb17bznfyb0sBFxTKer6Vm
 Qkc=
X-SBRS: 2.7
X-MesageID: 22731684
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,359,1589256000"; d="scan'208";a="22731684"
Date: Thu, 16 Jul 2020 16:42:19 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Subject: Re: [PATCH 1/2] common: map_vcpu_info() cosmetics
Message-ID: <20200716144219.GQ7191@Air-de-Roger>
References: <fef45e49-bcb4-dc11-8f7f-b2f5e4bf1a73@suse.com>
 <2a0341c7-3836-a8c0-9516-b6a08e2720ec@suse.com>
 <20200716114128.GO7191@Air-de-Roger>
 <03a4d446-c26b-f171-57fd-a1eb13dad6c0@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <03a4d446-c26b-f171-57fd-a1eb13dad6c0@suse.com>
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, George Dunlap <George.Dunlap@eu.citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Thu, Jul 16, 2020 at 01:48:51PM +0200, Jan Beulich wrote:
> On 16.07.2020 13:41, Roger Pau Monné wrote:
> > On Wed, Jul 15, 2020 at 12:15:10PM +0200, Jan Beulich wrote:
> >> Use ENXIO instead of EINVAL to cover the two cases of the address not
> >> satisfying the requirements. This will make an issue here better stand
> >> out at the call site.
> > 
> > Not sure whether I would use EFAULT instead of ENXIO, as the
> > description of it is 'bad address' which seems more in line with the
> > error that we are trying to report.
> 
> The address isn't bad in the sense of causing a fault, it's just
> that we elect to not allow it. Hence I don't think EFAULT is
> suitable. I'm open to replacement suggestions for ENXIO, though.

Well, using an address that's not properly aligned to the requirements
of an interface would cause a fault? (in this case it's a software
interface, but the concept applies equally).

Anyway, not something worth arguing about I think, so unless someone
else disagrees I'm fine with using ENXIO.

Thanks.


From xen-devel-bounces@lists.xenproject.org Thu Jul 16 14:44:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jul 2020 14:44:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jw57U-0005XX-HB; Thu, 16 Jul 2020 14:44:20 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=oOKz=A3=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jw57T-0005XR-MO
 for xen-devel@lists.xenproject.org; Thu, 16 Jul 2020 14:44:19 +0000
X-Inumbo-ID: d3e3d5f6-c772-11ea-94c4-12813bfff9fa
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d3e3d5f6-c772-11ea-94c4-12813bfff9fa;
 Thu, 16 Jul 2020 14:44:18 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1594910658;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:content-transfer-encoding:in-reply-to;
 bh=qwTSYK560X0BNoFkVmqMazfkzBL6WpAx2P98JYb/EO4=;
 b=VDeqjvMkchkq3TlrwyPu4trVka2e2VxvtgwN8mqfN6Gb7c2oHhgTKREK
 eub2xSlyPKMbhOtwrVjzgqD7XjybOFdoWntPUogxLvqC7gr0KsMWrXbbA
 9oRPBkJzstYk59GqV3d9g4oOmIaoTE0EpgAWkUUSr6l3grONvrFgohQWh s=;
Authentication-Results: esa5.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: WjIjTeGoOehPh+ITlKQb35aLNU0DcABCMpNGUqW7T+gYa3gQCU1Gk2cNA9Tpn4TsgBMz7QVnSZ
 JAsvMs6UJGLRJc7++DAy1VP727Y2Gh0XYUCAKzemuGJ/a87P4z5F1IZq1cTgs4dI+PxpySexTi
 WFHI5GSN94vux7yH2BjmfyvhslxsUkpdHzB4Jir/+JXU/NT+NviORmAAsVR2RJj2YcW9tOhBqx
 Gvsm1ea8TMi0F8oC9jENzSKZxTbR1QiRPz7IuOmGxgw+VmmHlqBRxPFXqHnfi02MiR9/WRdNMW
 7a4=
X-SBRS: 2.7
X-MesageID: 22731835
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,359,1589256000"; d="scan'208";a="22731835"
Date: Thu, 16 Jul 2020 16:44:04 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Subject: Re: [PATCH] compat: add a little bit of description to xlat.lst
Message-ID: <20200716144404.GR7191@Air-de-Roger>
References: <d7d95acc-11b0-278b-373e-0115cfa99b51@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <d7d95acc-11b0-278b-373e-0115cfa99b51@suse.com>
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien
 Grall <julien@xen.org>, Wei Liu <wl@xen.org>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Andrew
 Cooper <andrew.cooper3@citrix.com>, Ian Jackson <ian.jackson@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Thu, Jul 16, 2020 at 02:21:33PM +0200, Jan Beulich wrote:
> Requested-by: Roger Pau Monné <roger.pau@citrix.com>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks for writing this up!


From xen-devel-bounces@lists.xenproject.org Thu Jul 16 15:01:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jul 2020 15:01:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jw5No-0007G9-Fh; Thu, 16 Jul 2020 15:01:12 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=dhnJ=A3=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jw5Nm-0007G2-N8
 for xen-devel@lists.xenproject.org; Thu, 16 Jul 2020 15:01:10 +0000
X-Inumbo-ID: 2e9f35c4-c775-11ea-94c9-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2e9f35c4-c775-11ea-94c9-12813bfff9fa;
 Thu, 16 Jul 2020 15:01:10 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 15D5AACBF;
 Thu, 16 Jul 2020 15:01:13 +0000 (UTC)
Subject: Re: [PATCH 1/2] common: map_vcpu_info() cosmetics
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <fef45e49-bcb4-dc11-8f7f-b2f5e4bf1a73@suse.com>
 <2a0341c7-3836-a8c0-9516-b6a08e2720ec@suse.com>
 <20200716114128.GO7191@Air-de-Roger>
 <03a4d446-c26b-f171-57fd-a1eb13dad6c0@suse.com>
 <20200716144219.GQ7191@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <d64ee03f-4663-39ce-fd72-5702029e0182@suse.com>
Date: Thu, 16 Jul 2020 17:01:10 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20200716144219.GQ7191@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, George Dunlap <George.Dunlap@eu.citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 16.07.2020 16:42, Roger Pau Monné wrote:
> On Thu, Jul 16, 2020 at 01:48:51PM +0200, Jan Beulich wrote:
>> On 16.07.2020 13:41, Roger Pau Monné wrote:
>>> On Wed, Jul 15, 2020 at 12:15:10PM +0200, Jan Beulich wrote:
>>>> Use ENXIO instead of EINVAL to cover the two cases of the address not
>>>> satisfying the requirements. This will make an issue here better stand
>>>> out at the call site.
>>>
>>> Not sure whether I would use EFAULT instead of ENXIO, as the
>>> description of it is 'bad address' which seems more in line with the
>>> error that we are trying to report.
>>
>> The address isn't bad in the sense of causing a fault, it's just
>> that we elect to not allow it. Hence I don't think EFAULT is
>> suitable. I'm open to replacement suggestions for ENXIO, though.
> 
> Well, using an address that's not properly aligned to the requirements
> of an interface would cause a fault? (in this case it's a software
> interface, but the concept applies equally).

Not necessarily, see x86's behavior. Also even on strict arches
it is typically possible to cover for the misalignment by using
suitable instructions; it's still an implementation choice to not
do so.

> Anyway, not something worth arguing about I think, so unless someone
> else disagrees I'm fine with using ENXIO.

Good, thanks.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jul 16 15:32:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jul 2020 15:32:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jw5rg-0001Yi-Ui; Thu, 16 Jul 2020 15:32:04 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=sYN5=A3=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jw5rf-0001Yd-Fs
 for xen-devel@lists.xenproject.org; Thu, 16 Jul 2020 15:32:03 +0000
X-Inumbo-ID: 7f1e7b82-c779-11ea-8496-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7f1e7b82-c779-11ea-8496-bc764e2007e4;
 Thu, 16 Jul 2020 15:32:02 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=YK5+rWAzPxg5iH3SvNGtYxLpTilZXgC6lrTxFI5DUMk=; b=iRvv+7Rk4frBkH3zs/5JxoOGS
 +GAt2gC3d/Q5kNGgwKcA9+9sKjJnVyzZ/7um5l9K0ON45J8SnworQZmcx7niSIhSlPKqwwZ6VQQG8
 hThNlqHtq/Gv3urLegpPiOZhGfPERrvTfmEFbw3YYVo7E/V+wVcDQc8DE4H15LcHwjLWM=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jw5re-0000q1-AC; Thu, 16 Jul 2020 15:32:02 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jw5rd-0001EW-UQ; Thu, 16 Jul 2020 15:32:02 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jw5rd-0006WL-Ts; Thu, 16 Jul 2020 15:32:01 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151937-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 151937: all pass - PUSHED
X-Osstest-Versions-This: ovmf=d9269d69138860edb1ec9796ed48549dc6ba5735
X-Osstest-Versions-That: ovmf=e77966b341b993291ab2d95718b88a6a0d703d0c
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 16 Jul 2020 15:32:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151937 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151937/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 d9269d69138860edb1ec9796ed48549dc6ba5735
baseline version:
 ovmf                 e77966b341b993291ab2d95718b88a6a0d703d0c

Last test of basis   151923  2020-07-15 16:10:49 Z    0 days
Testing same since   151937  2020-07-16 04:25:27 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Gary Lin <glin@suse.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   e77966b341..d9269d6913  d9269d69138860edb1ec9796ed48549dc6ba5735 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Thu Jul 16 15:34:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jul 2020 15:34:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jw5u5-0001ge-Fq; Thu, 16 Jul 2020 15:34:33 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=oOKz=A3=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jw5u4-0001gZ-FY
 for xen-devel@lists.xenproject.org; Thu, 16 Jul 2020 15:34:32 +0000
X-Inumbo-ID: d76b004e-c779-11ea-94d5-12813bfff9fa
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d76b004e-c779-11ea-94d5-12813bfff9fa;
 Thu, 16 Jul 2020 15:34:31 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1594913671;
 h=from:to:cc:subject:date:message-id:mime-version:
 content-transfer-encoding;
 bh=yo0e+eVcoUNmo8qcJeqb09b3DNk81JPuxOK5U0VhGBg=;
 b=QMyYD831qjBj80/aCs15RKZUvxYzEmELjCtPh0zExgQL54T/qbG+OeL9
 CUJhRzCjuSg3dd7WDXqA5A63utcDfOUULkvl9RGP/4BrJJjxhPKkyI2AA
 EBoSBCM5L4AghwRUT5MOaOm85FcaJpyYUMxknF6D4HwZeWf06tyES8CTg 0=;
Authentication-Results: esa1.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: U4y800+yX2wnwbhFPL8jNTMeKK5ujNqT1UfVlHUL9mcul4XnKXAAjlPDQsyJYv66KCV/2IX4a9
 Z8BaDB1hkDXwGvcO3HhS0jImLBJB5EP4JnT4Y6FT9PwzPZArLLdlsWh1kxQ7OJm1D99B2C7Mu2
 U34LByF7ublHiX0+gtXfZgzAuWV4YXFMgNWJSJmuMb1Jp53Pd7ydLW60NwnJu51ykDDWY4KUDT
 MY7JmwxWjz5DQq/3rDYCghdbssCbvyayjTS84LF4FVyuSiLv4PcrooUNQwDwG3UvLfBLqg2wgy
 2tk=
X-SBRS: 2.7
X-MesageID: 22860474
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,359,1589256000"; d="scan'208";a="22860474"
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xenproject.org>
Subject: [osstest PATCH] Revert "make-flight: Temporarily disable flaky test"
Date: Thu, 16 Jul 2020 17:34:24 +0200
Message-ID: <20200716153424.40953-1-roger.pau@citrix.com>
X-Mailer: git-send-email 2.27.0
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: ian.jackson@eu.citrix.com, Roger Pau Monne <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

This reverts commit c436ff754810c15e4d2a434257d1d07498883acb.

Now that XSA-321 has been released we can try to enable PVH dom0
tests again.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
 make-flight | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/make-flight b/make-flight
index b25144f7..b8942c1c 100755
--- a/make-flight
+++ b/make-flight
@@ -774,7 +774,7 @@ test_matrix_do_one () {
   xen-4.10-testing) test_dom0pvh=n ;;
   xen-4.11-testing) test_dom0pvh=n ;;
   xen-4.12-testing) test_dom0pvh=n ;;
-  *)                test_dom0pvh=n ;;
+  *)                test_dom0pvh=y ;;
   esac
 
   # core scheduling tests for versions >= 4.13 only
-- 
2.27.0
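[Editor's note: the one-line change above flips only the default arm of a
shell case statement — the named stable branches keep PVH dom0 tests
disabled, while every other branch (e.g. xen-unstable) now enables them.
A minimal standalone sketch of that pattern, with a hypothetical branch
value, not the osstest make-flight script itself:]

```shell
# Sketch of the case-statement pattern the patch changes (standalone;
# the branch name below is a hypothetical example, not harness code).
branch="xen-unstable"

case "$branch" in
  xen-4.10-testing) test_dom0pvh=n ;;   # stable branches stay disabled
  xen-4.11-testing) test_dom0pvh=n ;;
  xen-4.12-testing) test_dom0pvh=n ;;
  *)                test_dom0pvh=y ;;   # the reverted default: enabled
esac

echo "dom0pvh=$test_dom0pvh"
```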



From xen-devel-bounces@lists.xenproject.org Thu Jul 16 16:01:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jul 2020 16:01:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jw6JY-0004hP-0x; Thu, 16 Jul 2020 16:00:52 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=5EBU=A3=gmail.com=rosbrookn@srs-us1.protection.inumbo.net>)
 id 1jw6JW-0004hK-UW
 for xen-devel@lists.xenproject.org; Thu, 16 Jul 2020 16:00:51 +0000
X-Inumbo-ID: 84e4c072-c77d-11ea-b7bb-bc764e2007e4
Received: from mail-il1-x143.google.com (unknown [2607:f8b0:4864:20::143])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 84e4c072-c77d-11ea-b7bb-bc764e2007e4;
 Thu, 16 Jul 2020 16:00:50 +0000 (UTC)
Received: by mail-il1-x143.google.com with SMTP id e18so5486972ilr.7
 for <xen-devel@lists.xenproject.org>; Thu, 16 Jul 2020 09:00:50 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:to:cc:subject:date:message-id;
 bh=b5g1gVvxaRMqQg8QPF9s+84SQrecniVaDWZJDlYGle0=;
 b=qIXzUNjN0ZuQJGNwKpQsUufEGiDfFMQ/Nam8XMu1xHtFNiPaiS+0VAW2j4BIfhcM2I
 roj80QTRIynf200UBn7UzaSj/2NXeIa5/uKSDExFXtEGWX2d/GZ9o3fX12Z2uJwTZxse
 fOQuwQrD7NATGR6ITiVgkBDQ6SzWc8BLbOu9uXlzRZbjnM7Gzsm9aoD1TMFLcsT8+RFv
 W4UTuRINj9yPG+oDiZJPFJbA7/r0PZXjPZelSSUJ77kzVs1/rOVblKN3Qa6D3PnoDsg6
 rgXRru6F5AQTykY78gFXNJRRjXVOSbBBylAkdK6+kxaxMKW3YEz0or+/HvLtX0h8L8Bb
 D01w==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:to:cc:subject:date:message-id;
 bh=b5g1gVvxaRMqQg8QPF9s+84SQrecniVaDWZJDlYGle0=;
 b=KrwZFbE+Oneg/y0ebmbsv5r3nytzvXQSO/M4rhTgADQfABY6t32XvNefJaeRMbcT63
 pkdWBrvqj7SK1skVgEMPgfBijHZreVs7jrzY6d23LYzJBktrxFSxX1apZyXRr/smhS0E
 3yWqIkm9niYDnqtWLYwC411l0SkZBHxUdzPsOhizj0IHM4dkEhybquNzVCoCP/R9zsvp
 pbJ9rwc8SMznNJMD7p9DlEaA6z+Y5qKFwTgIfdajJrG8jiTgBnxQ2MUqQDV31nLvyVyZ
 uq1hqq+gDDAu3Zv4fA3/R1eTjRLDc1bITBLpuwfOe/dGHLuL6HMizJk+ERoRVUGgRcyJ
 ftBw==
X-Gm-Message-State: AOAM532whXbYZZnXZrecQJeBo+k9fKsoVCFUzt3cF5brrPH86+4/Exj9
 XE1XXSPUMw8XI5vixHBsI85SfRMpfA4=
X-Google-Smtp-Source: ABdhPJxBLwSgvW42ygkxFOPYVA8CDh4UgDfOJgGUW4dFbgVgPtdrElnDxY0CCJoBZQhk473hxgLNlQ==
X-Received: by 2002:a92:c792:: with SMTP id c18mr5467150ilk.223.1594915249460; 
 Thu, 16 Jul 2020 09:00:49 -0700 (PDT)
Received: from six.lan (cpe-67-241-56-252.twcny.res.rr.com. [67.241.56.252])
 by smtp.gmail.com with ESMTPSA id s12sm2957216ilk.58.2020.07.16.09.00.47
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 16 Jul 2020 09:00:48 -0700 (PDT)
From: Nick Rosbrook <rosbrookn@gmail.com>
X-Google-Original-From: Nick Rosbrook <rosbrookn@ainfosec.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH] MAINTAINERS: add myself as a golang bindings maintainer
Date: Thu, 16 Jul 2020 12:00:26 -0400
Message-Id: <2e7fd9648245db7918b674953bb9643733259420.1594914981.git.rosbrookn@ainfosec.com>
X-Mailer: git-send-email 2.17.1
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Nick Rosbrook <rosbrookn@ainfosec.com>, Jan Beulich <jbeulich@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com>
---
 MAINTAINERS | 1 +
 1 file changed, 1 insertion(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index e374816755..33fe51324e 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -298,6 +298,7 @@ F:	tools/debugger/gdbsx/
 
 GOLANG BINDINGS
 M:	George Dunlap <george.dunlap@citrix.com>
+M:	Nick Rosbrook <rosbrookn@ainfosec.com>
 S:	Maintained
 F:	tools/golang
 
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu Jul 16 16:14:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jul 2020 16:14:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jw6WN-0005gi-99; Thu, 16 Jul 2020 16:14:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=AXgs=A3=citrix.com=george.dunlap@srs-us1.protection.inumbo.net>)
 id 1jw6WM-0005gd-E4
 for xen-devel@lists.xenproject.org; Thu, 16 Jul 2020 16:14:06 +0000
X-Inumbo-ID: 5ecadbea-c77f-11ea-bca7-bc764e2007e4
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5ecadbea-c77f-11ea-bca7-bc764e2007e4;
 Thu, 16 Jul 2020 16:14:05 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1594916046;
 h=from:to:cc:subject:date:message-id:references:
 in-reply-to:content-id:content-transfer-encoding: mime-version;
 bh=dLHhgHO9Y/hl14cdfSP+15wmYO/DQNbvpNe547Ndmko=;
 b=I+tc81QG2PPhZw5zPcknyeP3QWPZVUDX/aEwOFKZCaWo/rRbVdRyPgEP
 qUTK9aDo2+dNlIz9hw9xZiq7TwgFJqfVjLNUw/6KcBHOQOF42ZBhq207T
 ptHUfMgtnX8CHj17vh333kLrzBjh1HxbG3Uw/Fr0GlkDvNIHBlqpqi5ym g=;
Authentication-Results: esa1.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: 61Kg7tnhg2c3HSwAbBkI5u2vIbSRSKp+ewd8EXL3YuzZl7Qa19QPbY/W4EKGrBaqyhT2VYMGYq
 0NhEHiYqSVBnPfWmlQbiZEJXARuaH/34M5J9BjNQyI6eBH3jH2ujtfhAki6qJKbunkhNuBFTG0
 lousThyTNNoUGI/nGGWjU+oh/7X4EyB9RhqXCG0RvFGPmC8O1feX1BWPt1SyBZjiVgsjut8z9I
 hQj7ctbiIEhcvLB1v0Ka2mSwgEi3KSXedrV2+nhJ0kICB5O72CuwfgvbK2iXD2jtGJ8v4f1Mxz
 +yQ=
X-SBRS: 2.7
X-MesageID: 22864209
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,360,1589256000"; d="scan'208";a="22864209"
From: George Dunlap <George.Dunlap@citrix.com>
To: Nick Rosbrook <rosbrookn@gmail.com>
Subject: Re: [PATCH] MAINTAINERS: add myself as a golang bindings maintainer
Thread-Topic: [PATCH] MAINTAINERS: add myself as a golang bindings maintainer
Thread-Index: AQHWW4pIUvo5/lAbGUaesEAOcXnVnqkKP58A
Date: Thu, 16 Jul 2020 16:14:00 +0000
Message-ID: <B0A42BA1-7D5F-4532-BF35-B1EA0F1169FF@citrix.com>
References: <2e7fd9648245db7918b674953bb9643733259420.1594914981.git.rosbrookn@ainfosec.com>
In-Reply-To: <2e7fd9648245db7918b674953bb9643733259420.1594914981.git.rosbrookn@ainfosec.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-mailer: Apple Mail (2.3608.80.23.2.2)
x-ms-exchange-messagesentrepresentingtype: 1
x-ms-exchange-transport-fromentityheader: Hosted
Content-Type: text/plain; charset="us-ascii"
Content-ID: <FA235796577D7B4FAB30D6493E89E075@citrix.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien
 Grall <julien@xen.org>, Wei Liu <wl@xen.org>,
 Andrew Cooper <Andrew.Cooper3@citrix.com>,
 Nick Rosbrook <rosbrookn@ainfosec.com>, Jan Beulich <jbeulich@suse.com>, Ian
 Jackson <Ian.Jackson@citrix.com>,
 "open list:X86" <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>



> On Jul 16, 2020, at 5:00 PM, Nick Rosbrook <rosbrookn@gmail.com> wrote:
>=20
> Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com>

Acked-by: George Dunlap <george.dunlap@citrix.com>




From xen-devel-bounces@lists.xenproject.org Thu Jul 16 16:17:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jul 2020 16:17:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jw6Zk-0005oA-Pw; Thu, 16 Jul 2020 16:17:36 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=R7sX=A3=gmail.com=julien.grall.oss@srs-us1.protection.inumbo.net>)
 id 1jw6Zj-0005o5-GT
 for xen-devel@lists.xenproject.org; Thu, 16 Jul 2020 16:17:35 +0000
X-Inumbo-ID: db52c15a-c77f-11ea-b7bb-bc764e2007e4
Received: from mail-wr1-x443.google.com (unknown [2a00:1450:4864:20::443])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id db52c15a-c77f-11ea-b7bb-bc764e2007e4;
 Thu, 16 Jul 2020 16:17:34 +0000 (UTC)
Received: by mail-wr1-x443.google.com with SMTP id k6so7692336wrn.3
 for <xen-devel@lists.xenproject.org>; Thu, 16 Jul 2020 09:17:34 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc; bh=xodYPEeS7dp1LUqF530uwFZaAtzHV5zb4yrnlJn6COQ=;
 b=P4VwAXAYTNHcTFoeQVwCBU1a3pmTveDh0+nzvoNHmPWPk8GCsw6TEfcf7Vxa6f0gtr
 WYd27HyPdNGQsxk3Qe+Fclcpe9156u/dyTvX4cP6GmMfMe7uVGw8IaJtUSblv0IyTttu
 /YfgP5U48h/N/24QtsAx+e0w7Ss6ViWWKBaxz2ZSgIw9CLg3ZgcINXDvSNkErlgo/fVf
 OViB41oizw5h9HZ+B1CL4YmnPp4m6X53VvtN7cc/s5ZdBBwCs60R4Msb+XFicG1YDAKi
 Ipy2Lk8mrF0wBDcVhlY87UvEKytW1mHx/qF219/lKsIPU2g0a9LOU0JvgflytvKAG79J
 deDw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc;
 bh=xodYPEeS7dp1LUqF530uwFZaAtzHV5zb4yrnlJn6COQ=;
 b=gaK1zcvPR1tsiYSzE56fXGbu8nxzfAZUDzDuAbGy6qmrXNnObVunQzKR2GjRNlTuIg
 tocXiPPVOSzNUEHCWyO0JCM3/LrhBG/fAzVU+yykP+s4t6QHnm2C+EbhhlSqCrbpVyfq
 qrocubY4xpZUvFxtZFallZ6IZdoiyPpnqLJEnCt3GtibXR4G6FlVVP9WEDvb/j+/EKHe
 pVf9HaP4f9TTAL2jX1YCl3BfLAxpIi1Cq2RwZX4wUvlan3+0AClMPrdM4le0rGsPe+Mp
 P948Vqkr2CB7pGRCJrcmZVxMY9DPuye25cxhvkvlZKJfrRILoL2ZZEjamKCqfnkfYpMN
 ErcA==
X-Gm-Message-State: AOAM532anAsnTlIKJ1vw2x/i8XQwT7ou+1GXOgG43CI31MkAonxuxpU2
 vkyqPyvzVYc6NI4ewshGX7Y6csGcOsbhUJiOZ9g=
X-Google-Smtp-Source: ABdhPJztXc0xYdj3Oqi1QD7Rcor//iwik1QLBcbFBF57O9TKPTyGJBqZsKXAzBug0mTgQLcUDZat1vmrpJGdsETbo+4=
X-Received: by 2002:adf:e88b:: with SMTP id d11mr5555855wrm.378.1594916253884; 
 Thu, 16 Jul 2020 09:17:33 -0700 (PDT)
MIME-Version: 1.0
References: <fef45e49-bcb4-dc11-8f7f-b2f5e4bf1a73@suse.com>
 <2a0341c7-3836-a8c0-9516-b6a08e2720ec@suse.com>
 <20200716114128.GO7191@Air-de-Roger>
 <03a4d446-c26b-f171-57fd-a1eb13dad6c0@suse.com>
 <20200716144219.GQ7191@Air-de-Roger>
 <d64ee03f-4663-39ce-fd72-5702029e0182@suse.com>
In-Reply-To: <d64ee03f-4663-39ce-fd72-5702029e0182@suse.com>
From: Julien Grall <julien.grall.oss@gmail.com>
Date: Thu, 16 Jul 2020 18:17:21 +0200
Message-ID: <CAJ=z9a2gCm7LNOpJUO4nbwUExmtd8KH2TBvt4VXCaqiAeXuCcg@mail.gmail.com>
Subject: Re: [PATCH 1/2] common: map_vcpu_info() cosmetics
To: Jan Beulich <jbeulich@suse.com>
Content-Type: multipart/alternative; boundary="000000000000c264d605aa9160f7"
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 George Dunlap <George.Dunlap@eu.citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@citrix.com>,
 xen-devel <xen-devel@lists.xenproject.org>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

--000000000000c264d605aa9160f7
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

On Thu, 16 Jul 2020, 17:01 Jan Beulich, <jbeulich@suse.com> wrote:

> On 16.07.2020 16:42, Roger Pau Monné wrote:
> > On Thu, Jul 16, 2020 at 01:48:51PM +0200, Jan Beulich wrote:
> >> On 16.07.2020 13:41, Roger Pau Monné wrote:
> >>> On Wed, Jul 15, 2020 at 12:15:10PM +0200, Jan Beulich wrote:
> >>>> Use ENXIO instead of EINVAL to cover the two cases of the address not
> >>>> satisfying the requirements. This will make an issue here better stand
> >>>> out at the call site.
> >>>
> >>> Not sure whether I would use EFAULT instead of ENXIO, as the
> >>> description of it is 'bad address' which seems more inline with the
> >>> error that we are trying to report.
> >>
> >> The address isn't bad in the sense of causing a fault, it's just
> >> that we elect to not allow it. Hence I don't think EFAULT is
> >> suitable. I'm open to replacement suggestions for ENXIO, though.
> >
> > Well, using an address that's not properly aligned to the requirements
> > of an interface would cause a fault? (in this case it's a software
> > interface, but the concept applies equally).
>
> Not necessarily, see x86'es behavior. Also even on strict arches
> it is typically possible to cover for the misalignment by using
> suitable instructions; it's still an implementation choice to not
> do so.

I am not sure about your argument here... Yes it might be possible, but at
what cost?

> > Anyway, not something worth arguing about I think, so unless someone
> > else disagrees I'm fine with using ENXIO.
>
> Good, thanks.

-EFAULT can be described as "Bad address". I think it makes more sense than
-ENXIO here even if it may not strictly result to a fault on some arch.

> Jan
>

--000000000000c264d605aa9160f7--


From xen-devel-bounces@lists.xenproject.org Thu Jul 16 17:10:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jul 2020 17:10:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jw7Ol-0002Ho-2h; Thu, 16 Jul 2020 17:10:19 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ynM9=A3=arm.com=rahul.singh@srs-us1.protection.inumbo.net>)
 id 1jw7Oj-0002Hj-3M
 for xen-devel@lists.xenproject.org; Thu, 16 Jul 2020 17:10:17 +0000
X-Inumbo-ID: 3743a400-c787-11ea-9505-12813bfff9fa
Received: from EUR02-HE1-obe.outbound.protection.outlook.com (unknown
 [40.107.1.45]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 3743a400-c787-11ea-9505-12813bfff9fa;
 Thu, 16 Jul 2020 17:10:16 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com; 
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=k9327hoVOp+SlRJK5dGd3XiqB91osgeONFftx4DWY1I=;
 b=H+PFuUTHV7yNCugAqQ3/RJlhYWsgpkQiN0ICvcZinS8Q1OHWhFqjvMDXS7aMFZAzZ6zr1shSUaQWTnkyyJalI+fJWyqAPjYWo+74j/NWNmvxSnumsU5QquTl1Lg4FvzScsa10hLyBF725kNosvIKRL27QJpUB/XSvrQT5al/tfs=
Received: from DB6PR0202CA0018.eurprd02.prod.outlook.com (2603:10a6:4:29::28)
 by VI1PR08MB3902.eurprd08.prod.outlook.com (2603:10a6:803:c2::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3195.18; Thu, 16 Jul
 2020 17:10:12 +0000
Received: from DB5EUR03FT025.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:4:29:cafe::3e) by DB6PR0202CA0018.outlook.office365.com
 (2603:10a6:4:29::28) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3195.17 via Frontend
 Transport; Thu, 16 Jul 2020 17:10:12 +0000
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org;
 dmarc=bestguesspass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DB5EUR03FT025.mail.protection.outlook.com (10.152.20.104) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3195.18 via Frontend Transport; Thu, 16 Jul 2020 17:10:12 +0000
Received: ("Tessian outbound 2ae7cfbcc26c:v62");
 Thu, 16 Jul 2020 17:10:12 +0000
X-CheckRecipientChecked: true
X-CR-MTA-CID: e56855c0790134f3
X-CR-MTA-TID: 64aa7808
Received: from 701db4b602e2.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 067F246E-B8DE-4866-8A65-9F71E18EC650.1; 
 Thu, 16 Jul 2020 17:10:07 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 701db4b602e2.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Thu, 16 Jul 2020 17:10:07 +0000
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=cfXzXs44t/oqlYHQSeVI0aKrwGIBdaEaGUaYmTLTwh4Vw+HTluVXP/P6tLox5sPO+TDtfbf7pcHgnZBf2r++TBzcd59e2v/X5MxngKzP6qCgck5oJh/E4pUvcCpmEh1ZOVXSfJ9KVQbnAb31EfqPJkryJUCe7bjOWZaISOnANfgO1pET2i6a4vL3pZaDdcV9jT7E1cTBzttEI6nUxRl3KZL7zHU8h/hx0cZaAya3Hf9scBYTS6c8NwL1WUQ1uxgjXUTZny147AoyzGeU5czbNRDMmuM0VOgA5UgPt8LLkdOZOHIMWTEDzCFVYDQC7V23ObkH11DjP5AMRZeAtGbzEg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; 
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=k9327hoVOp+SlRJK5dGd3XiqB91osgeONFftx4DWY1I=;
 b=hA71L26ePdn4cbZ6gC/6ktDfB8XegEj31o7Oav8XU8400avMCOeMCZ7qB4TtV3Kesene6kuXKRbuCg/9moChL6m2VNfebdMCo0dWu62iQbdsDz7jI/GBtszSOwgggyF0FEuLyX3y2Ip/4P/kPaRlqzgdH0UzaYNrE02EXzlGtB5HbXDqc4wqDgUNk79V7SULmnyoe0Z1ct9O7K9m+1aWu37JU8VNADYSdbHaw+LjllYYWj5BYzvKnu78MoekqWj7p0OafmRAK56IK2/pmDqLtNi2IFD3wmEaupNfBW6fWd/OyOd5ET/TvExz3kQE0BrKB8G6y49UcuOpxlrWUJpPcg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com; 
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=k9327hoVOp+SlRJK5dGd3XiqB91osgeONFftx4DWY1I=;
 b=H+PFuUTHV7yNCugAqQ3/RJlhYWsgpkQiN0ICvcZinS8Q1OHWhFqjvMDXS7aMFZAzZ6zr1shSUaQWTnkyyJalI+fJWyqAPjYWo+74j/NWNmvxSnumsU5QquTl1Lg4FvzScsa10hLyBF725kNosvIKRL27QJpUB/XSvrQT5al/tfs=
Received: from AM6PR08MB3560.eurprd08.prod.outlook.com (2603:10a6:20b:4c::19)
 by AM6PR08MB3815.eurprd08.prod.outlook.com (2603:10a6:20b:8b::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3174.22; Thu, 16 Jul
 2020 17:10:06 +0000
Received: from AM6PR08MB3560.eurprd08.prod.outlook.com
 ([fe80::e891:3b33:766:cad5]) by AM6PR08MB3560.eurprd08.prod.outlook.com
 ([fe80::e891:3b33:766:cad5%6]) with mapi id 15.20.3174.026; Thu, 16 Jul 2020
 17:10:05 +0000
From: Rahul Singh <Rahul.Singh@arm.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: RFC: PCI devices passthrough on Arm design proposal
Thread-Topic: PCI devices passthrough on Arm design proposal
Thread-Index: AQHWW4kYTVU0hTDyYEitKlUuU5vZlKkKf2uAgAACLIA=
Date: Thu, 16 Jul 2020 17:10:05 +0000
Message-ID: <E9CBAA57-5EF3-47F9-8A40-F5D7816DB2A4@arm.com>
References: <3F6E40FB-79C5-4AE8-81CA-E16CA37BB298@arm.com>
 <BD475825-10F6-4538-8294-931E370A602C@arm.com>
In-Reply-To: <BD475825-10F6-4538-8294-931E370A602C@arm.com>
Accept-Language: en-US
Content-Language: en-GB
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
user-agent: Microsoft-MacOutlook/16.38.20061401
Authentication-Results-Original: lists.xenproject.org; dkim=none (message not
 signed) header.d=none; lists.xenproject.org;
 dmarc=none action=none header.from=arm.com;
x-originating-ip: [86.26.38.125]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: ef595ec3-f365-450b-a2da-08d829ab19e9
x-ms-traffictypediagnostic: AM6PR08MB3815:|VI1PR08MB3902:
X-Microsoft-Antispam-PRVS: <VI1PR08MB39029FBB5BFC0D743BBC5DF6FC7F0@VI1PR08MB3902.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:10000;OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original: zCvD+tP4hWbvaoImxt5LWmfw7ka3nqB1tS9BFaCZjph0p01wv+AcvT4o2Lr7Hx0mIRJjIQ13x3PkSWUKkvN8CqnzEsgY7PN3D3CqQlSAGidSCT4D4XboSExxto8UlanJUSOzyXDF48X8OB5/ESVk37rXYqqUlHAc75ZDC9X/iEl41ASVllANnDoZ8K57sx3XeovjbMv1ke7QakkBVT8jhcVPCF3+zweSziPQwZmwRGDootiAP79K79A+4G5Cgsgvuyj0xtXIfqKmsLe7Q0AT3foBN2XnbUY448c8Ir26CTZRvrAZHdxaN+UnyZbp3I2oloovxzCxjGzrSRMNXMQUwg==
X-Forefront-Antispam-Report-Untrusted: CIP:255.255.255.255; CTRY:; LANG:en;
 SCL:1; SRV:; IPV:NLI; SFV:NSPM; H:AM6PR08MB3560.eurprd08.prod.outlook.com;
 PTR:; CAT:NONE; SFTY:;
 SFS:(4636009)(366004)(376002)(396003)(346002)(39860400002)(136003)(6506007)(8936002)(76116006)(66556008)(86362001)(91956017)(316002)(4326008)(54906003)(71200400001)(2616005)(66476007)(66946007)(64756008)(66446008)(36756003)(478600001)(186003)(8676002)(26005)(6486002)(2906002)(33656002)(83380400001)(6512007)(5660300002)(55236004)(6916009);
 DIR:OUT; SFP:1101; 
x-ms-exchange-antispam-messagedata: DQ8YwQzWAXrkhmOWvQV+uRA8iWLJJz6YYY/QlVoygcApxLfRQ4+WhG10zn7TDXbHBDU1gPlp4/YCFWauffMsJ7zMBPsFDMvYgscz/df9kBdhZUdkOLrjOqbBy4PVt5vDt1G3efFEObtOeoXsX6sp6vK8IfzC6taxYkoXnEpswOj2OKO9dTY+rugoWSLhG1Z+FXFAVD1tpyl5vV2baQxc9vTs5JmKhY0zvyN3fdULvYdQMhHjSq7UuUsv57TkUMEy1slWrPVyWra67Fq+LOj4iUSjEztVB1w63+AQ5aeFx44mAy4ejychgacL7ZLSW0fMnsMowoiA+JLv5z92lULlx0GQXtCUyQwnud1ni8xoXBQj9Y7KRN38+eeCbgEZIeDAlOJe2NgcWqXFR61uJessQGh+P4MBYQ5J19uAC/jt8Pr+ueCPnSTVeyCsrCaE8rKV5pL9dr8jnVTnyXqy86bdbXnkU5Q78T5lmzkDPd6jQtE=
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="utf-8"
Content-ID: <923782005408E4449FCFB30B2C461F33@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM6PR08MB3815
Original-Authentication-Results: lists.xenproject.org;
 dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=none action=none
 header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped: DB5EUR03FT025.eop-EUR03.prod.protection.outlook.com
X-Forefront-Antispam-Report: CIP:63.35.35.123; CTRY:IE; LANG:en; SCL:1; SRV:;
 IPV:CAL; SFV:NSPM; H:64aa7808-outbound-1.mta.getcheckrecipient.com;
 PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com; CAT:NONE; SFTY:;
 SFS:(4636009)(136003)(39860400002)(376002)(396003)(346002)(46966005)(5660300002)(70206006)(316002)(54906003)(82310400002)(186003)(26005)(6916009)(86362001)(6486002)(478600001)(2906002)(8936002)(336012)(83380400001)(70586007)(6512007)(33656002)(8676002)(356005)(2616005)(6506007)(81166007)(4326008)(82740400003)(36756003)(47076004);
 DIR:OUT; SFP:1101; 
X-MS-Office365-Filtering-Correlation-Id-Prvs: 164c2b76-5b49-4c04-a820-08d829ab160c
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: P9sSwe9VJBvUlb8Zhm7V/L02vkD4TsEOBasWFOmfnBov++lj/I9TOj/UPz9OIu9KQwqgfTv87KDTw/r5bhZBqX7wHYulwzJtx7jHjtubxUUeHP9OZ72kIKjZH6r1w/TAEZr6Su+EAAbXrLTdPLTD4K5mLQ7KpRQIkUtTWV3ZhPUsESqkmd/o1yG+Hj1q2rSnTbx1wMimUNiiSuw9uwaTTi4YltmIZ+KhJ0wjU+H6g0K08/uar4FuvIZIDSj00gnsX14VMA+rroZiPIs0Hoy6fF93DK9VUt4mRWPu04g3lMjpNPW/KOSvtP4gLSDzsad+W1OhYUxKv6B1hVvu7ai0NU1AjzGBbMp1vbaGusKECy+DB0i5CgMEWuOQEaOW1z48wm3QfCQb3c0FDxfc5BfbpQ==
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 16 Jul 2020 17:10:12.3733 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: ef595ec3-f365-450b-a2da-08d829ab19e9
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d; Ip=[63.35.35.123];
 Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource: DB5EUR03FT025.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR08MB3902
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: nd <nd@arm.com>, Stefano Stabellini <sstabellini@kernel.org>,
 =?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>,
 Julien Grall <julien.grall.oss@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hello All,

Following up on the discussion on PCI passthrough support on ARM that we had at the Xen summit, we are submitting a Request For Comment and a design proposal for PCI passthrough support on ARM. Feel free to give your feedback.

The following describes the high-level design proposal of the PCI passthrough support and how the different modules within the system interact with each other to assign a particular PCI device to the guest.

# Title:

PCI devices passthrough on Arm design proposal

# Problem statement:

On ARM there is no support to assign a PCI device to a guest. PCI device passthrough allows guests to have full access to some PCI devices, which then appear and behave as if they were physically attached to the guest operating system, while providing full isolation of the PCI devices.

Goal of this work is to also support the Dom0less configuration, so the PCI backend/frontend drivers used on x86 shall not be used on Arm. It will reuse the existing VPCI concept from x86 and implement the virtual PCI bus through IO emulation such that only assigned devices are visible to the guest and the guest can use the standard PCI driver.

Only Dom0 and Xen will have access to the real PCI bus; the guest will have direct access to the assigned device itself. IOMEM will be mapped to the guest and interrupts will be redirected to the guest. The SMMU has to be configured correctly for the device's DMA transactions to work.

## Current state: Draft version

# Proposer(s): Rahul Singh, Bertrand Marquis

# Proposal:

This section describes the different subsystems needed to support PCI device passthrough and how these subsystems interact with each other to assign a device to the guest.

# PCI Terminology:

Host Bridge: The host bridge allows the PCI devices to talk to the rest of the computer.
ECAM: ECAM (Enhanced Configuration Access Mechanism) is a mechanism developed to allow PCIe to access configuration space. The space available per function is 4KB.

# Discovering PCI Host Bridge in XEN:

In order to support PCI passthrough, XEN should be aware of all the PCI host bridges available on the system and should be able to access the PCI configuration space. ECAM configuration access is supported as of now. During boot, XEN will read the PCI device tree node “reg” property and will map the ECAM space into XEN memory using the “ioremap_nocache()” function.

If there is more than one segment on the system, XEN will read the “linux,pci-domain” property from the device tree node and configure the host bridge segment number accordingly. All the PCI device tree nodes should have the “linux,pci-domain” property so that there will be no conflicts. During hardware domain boot, Linux will also use the same “linux,pci-domain” property and assign the domain number to the host bridge.

When Dom0 tries to access the PCI config space of a device, XEN will find the corresponding host bridge based on the segment number and access the config space assigned to that bridge.

Limitations:
* Only PCI ECAM configuration space access is supported.
* Only device tree binding is supported as of now; ACPI is not supported.
* The PCI host bridge access code needs to be ported to XEN to access the configuration space (the generic one works, but lots of platforms will require some specific code or quirks).

# Discovering PCI devices:

PCI-PCIe enumeration is the process of detecting devices connected to a host. It is the responsibility of the hardware domain or boot firmware to do the PCI enumeration and configure the BARs, PCI capabilities, and MSI/MSI-X configuration.

PCI-PCIe enumeration in XEN is not feasible for the configuration part, as it would require a lot of code inside Xen, which would require a lot of maintenance. Added to this, many platforms require some quirks in that part of the PCI code, which would greatly increase Xen's complexity. Once the hardware domain enumerates the device, it will communicate it to XEN via the below hypercall.

#define PHYSDEVOP_pci_device_add        25
struct physdev_pci_device_add {
    uint16_t seg;
    uint8_t bus;
    uint8_t devfn;
    uint32_t flags;
    struct {
        uint8_t bus;
        uint8_t devfn;
    } physfn;
    /*
     * Optional parameters array.
     * First element ([0]) is PXM domain associated with the device
     * (if XEN_PCI_DEV_PXM is set).
     */
    uint32_t optarr[XEN_FLEX_ARRAY_DIM];
};

As the hypercall argument has the PCI segment number, XEN will access the PCI config space based on this segment number and find the host bridge corresponding to this segment number. At this stage the host bridge is fully initialized, so there will be no issue accessing the config space.

XEN will add the PCI devices to the linked list maintained in XEN using the function pci_add_device(). XEN will be aware of all the PCI devices on the system and all the devices will be added to the hardware domain.

Limitations:
* When PCI devices are added to XEN, the MSI capability is not initialized inside XEN and not supported as of now.
* The ACS capability is disabled for ARM as of now, as devices are not accessible after enabling it.
* The Dom0less implementation will require Xen to have the capability to discover the PCI devices itself (without depending on Dom0 to declare them to Xen).

# Enable the existing x86 virtual PCI support for ARM:

The existing VPCI support available for x86 is adapted for Arm. When the device is added to XEN via the hypercall “PHYSDEVOP_pci_device_add”, a VPCI handler for the config space access is added to the PCI device to emulate the PCI device.

An MMIO trap handler for the PCI ECAM space is registered in XEN so that when a guest tries to access the PCI config space, XEN will trap the access and emulate the read/write using the VPCI and not the real PCI hardware.

Limitations:
* No handler is registered for the MSI configuration.
* Only legacy interrupts are supported and tested as of now; MSI is not implemented or tested.

# Assign the device to the guest:

Assigning a PCI device from the hardware domain to the guest is done using the below guest config option. When the xl tool creates the domain, PCI devices will be assigned to the guest VPCI bus.

	pci=[ "PCI_SPEC_STRING", "PCI_SPEC_STRING", ...]

The guest will only be able to access the assigned devices and see the bridges. The guest will not be able to access or see the devices that are not assigned to it.

Limitation:
* As of now all the bridges on the PCI bus are seen by the guest on the VPCI bus.

# Emulated PCI device tree node in libxl:

Libxl creates a virtual PCI device tree node in the device tree to enable the guest OS to discover the virtual PCI bus during guest boot. We introduced the new config option [vpci="pci_ecam"] for guests. When this config option is enabled in a guest configuration, a PCI device tree node will be created in the guest device tree.

A new area has been reserved in the Arm guest physical map at which the VPCI bus is declared in the device tree (“reg” and “ranges” parameters of the node). A trap handler for PCI ECAM accesses from the guest has been registered at the defined address and redirects requests to the VPCI driver in Xen.

Limitation:
* Only one PCI device tree node is supported as of now.

BAR value and IOMEM mapping:

The Linux guest will do the PCI enumeration based on the area reserved for ECAM and the IOMEM ranges in the VPCI device tree node. Once a PCI device is assigned to the guest, XEN will map the guest PCI IOMEM region to the real physical IOMEM region only for the assigned devices.

As of now we have not modified the existing VPCI code to map the guest PCI IOMEM region to the real physical IOMEM region. We used the existing guest “iomem” config option to map the region.
For example:

	Guest reserved IOMEM region:  0x04020000
	Real physical IOMEM region:   0x50000000
	IOMEM size:                   128MB
	iomem config will be:         iomem = ["0x50000,0x8000@0x4020"]

There is no need to map the ECAM space, as XEN already has access to the ECAM space; XEN will trap ECAM accesses from the guest and will perform the read/write on the VPCI bus.

IOMEM accesses will not be trapped; the guest will directly access the IOMEM region of the assigned device via stage-2 translation.

In the same way, we mapped the assigned device's IRQs to the guest using the below config option.

	irqs= [ NUMBER, NUMBER, ...]

Limitations:
* Need to avoid the “iomem” and “irqs” guest config options and map the IOMEM region and IRQ at the same time when the device is assigned to the guest using the “pci” guest config option when xl creates the domain.
* Emulated BAR values on the VPCI bus should reflect the IOMEM mapped address.
* The x86 mapping code should be ported to Arm so that the stage-2 translation is adapted when the guest modifies the BAR register values (to map the address requested by the guest for a specific IOMEM region to the address actually contained in the real BAR register of the corresponding device).

# SMMU configuration for guest:

When assigning PCI devices to a guest, the SMMU configuration should be updated to remove access to the hardware domain memory and add configuration to have access to the guest memory with the proper address translation so that the device can do DMA operations from and to the guest memory only.

# MSI/MSI-X support:
Not implemented or tested as of now.

# ITS support:
Not implemented or tested as of now.

Regards,
Rahul


From xen-devel-bounces@lists.xenproject.org Thu Jul 16 20:23:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jul 2020 20:23:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jwAPm-0001Io-Iz; Thu, 16 Jul 2020 20:23:34 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=5AHM=A3=panix.com=marcotte@srs-us1.protection.inumbo.net>)
 id 1jwAPl-0001Ij-2u
 for xen-devel@lists.xenproject.org; Thu, 16 Jul 2020 20:23:33 +0000
X-Inumbo-ID: 37c11a5a-c7a2-11ea-9536-12813bfff9fa
Received: from mailbackend.panix.com (unknown [166.84.1.89])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 37c11a5a-c7a2-11ea-9536-12813bfff9fa;
 Thu, 16 Jul 2020 20:23:32 +0000 (UTC)
Received: from panix5.panix.com (panix5.panix.com [166.84.1.5])
 by mailbackend.panix.com (Postfix) with ESMTP id 4B75LC6fphzYvF;
 Thu, 16 Jul 2020 16:23:31 -0400 (EDT)
Received: by panix5.panix.com (Postfix, from userid 13564)
 id 4B75LC6jbVzfYm; Thu, 16 Jul 2020 16:23:31 -0400 (EDT)
Date: Thu, 16 Jul 2020 16:23:31 -0400
From: Brian Marcotte <marcotte@panix.com>
To: Paul Durrant <xadimgnik@gmail.com>
Subject: Re: [EXTERNAL] [Xen-devel] XEN Qdisk Ceph rbd support broken?
Message-ID: <20200716202331.GA9471@panix.com>
References: <AC8105C4-6DAD-4AB0-AC3F-B4CDD151CDEB@ispire.me>
 <763e69df40604c51bb72477c706ec24b@EX13D32EUC003.ant.amazon.com>
 <20200715191705.GA20643@panix.com>
 <000b01d65b40$ab7fead0$027fc070$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <000b01d65b40$ab7fead0$027fc070$@xen.org>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: 'Jules' <jules@ispire.me>, xen-devel@lists.xenproject.org,
 oleksandr_grytsov@epam.com, wl@xen.org, paul@xen.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> Your issue stems from the auto-creation code in xen-block:
> 
> The "aio:rbd:rbd/machine.disk0" string is generated by libxl and does
> look a little odd and will fool the parser there, but the error you see
> after modifying the string appears to be because QEMU's QMP block
> device instantiation code is objecting to a missing parameter. Older
> QEMUs circumvented that code which is almost certainly why you don't
> see the issue with versions 2 or 3.

Xen 4.13 and 4.14 include QEMU 4 and 5. They don't work with Ceph/RBD.

Are you saying that xl/libxl is doing the right thing and the problem
needs to be fixed in QEMU?

--
- Brian


From xen-devel-bounces@lists.xenproject.org Thu Jul 16 20:52:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jul 2020 20:52:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jwArF-0003lw-Ul; Thu, 16 Jul 2020 20:51:57 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CN1r=A3=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1jwArE-0003lr-Oj
 for xen-devel@lists.xenproject.org; Thu, 16 Jul 2020 20:51:56 +0000
X-Inumbo-ID: 2e1ea8ce-c7a6-11ea-9538-12813bfff9fa
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2e1ea8ce-c7a6-11ea-9538-12813bfff9fa;
 Thu, 16 Jul 2020 20:51:54 +0000 (UTC)
Received: from localhost (c-67-164-102-47.hsd1.ca.comcast.net [67.164.102.47])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256
 bits)) (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 6E0A62082F;
 Thu, 16 Jul 2020 20:51:53 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
 s=default; t=1594932713;
 bh=eeIGzqsuVCsDcDosxNRiQ62ktzasYvbSZ6mXibm2gSY=;
 h=Date:From:To:cc:Subject:In-Reply-To:References:From;
 b=Yexu+DWFZs52gJA14L3YuU2HzfTibvri3F1uA16uDmkm9kHwAHCiFhoMiGQy80O0A
 YZrqO96MH4QMJP9tSMWhT5bXzTkl4qDminXzGBm6wFalSf7n7QKr5Ry8Pz9ZaJUbc5
 DVqcvdTFtDghn9GKW67CMlPb4W0SvCW9zpdqm7Ys=
Date: Thu, 16 Jul 2020 13:51:45 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Rahul Singh <Rahul.Singh@arm.com>
Subject: Re: RFC: PCI devices passthrough on Arm design proposal
In-Reply-To: <E9CBAA57-5EF3-47F9-8A40-F5D7816DB2A4@arm.com>
Message-ID: <alpine.DEB.2.21.2007161258160.3886@sstabellini-ThinkPad-T480s>
References: <3F6E40FB-79C5-4AE8-81CA-E16CA37BB298@arm.com>
 <BD475825-10F6-4538-8294-931E370A602C@arm.com>
 <E9CBAA57-5EF3-47F9-8A40-F5D7816DB2A4@arm.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: multipart/mixed; BOUNDARY="8323329-1721092369-1594930123=:3886"
Content-ID: <alpine.DEB.2.21.2007161314040.3886@sstabellini-ThinkPad-T480s>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 nd <nd@arm.com>, Stefano Stabellini <sstabellini@kernel.org>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Julien Grall <julien.grall.oss@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--8323329-1721092369-1594930123=:3886
Content-Type: text/plain; CHARSET=UTF-8
Content-Transfer-Encoding: 8BIT
Content-ID: <alpine.DEB.2.21.2007161314041.3886@sstabellini-ThinkPad-T480s>

On Thu, 16 Jul 2020, Rahul Singh wrote:
> Hello All,
> 
> Following up on the discussion on PCI passthrough support on ARM that we had at the Xen summit, we are submitting a Request For Comment and a design proposal for PCI passthrough support on ARM. Feel free to give your feedback.
> 
> The following describes the high-level design proposal of the PCI passthrough support and how the different modules within the system interact with each other to assign a particular PCI device to the guest.

I think the proposal is good and I only have a couple of thoughts to
share below.


> # Title:
> 
> PCI devices passthrough on Arm design proposal
> 
> # Problem statement:
> 
> On ARM there is no support to assign a PCI device to a guest. PCI device passthrough allows guests to have full access to some PCI devices, which then appear and behave as if they were physically attached to the guest operating system, while providing full isolation of the PCI devices.
> 
> Goal of this work is to also support the Dom0less configuration, so the PCI backend/frontend drivers used on x86 shall not be used on Arm. It will reuse the existing VPCI concept from x86 and implement the virtual PCI bus through IO emulation such that only assigned devices are visible to the guest and the guest can use the standard PCI driver.
> 
> Only Dom0 and Xen will have access to the real PCI bus; the guest will have direct access to the assigned device itself. IOMEM will be mapped to the guest and interrupts will be redirected to the guest. The SMMU has to be configured correctly for the device's DMA transactions to work.
> 
> ## Current state: Draft version
> 
> # Proposer(s): Rahul Singh, Bertrand Marquis
> 
> # Proposal:
> 
> This section describes the different subsystems needed to support PCI device passthrough and how these subsystems interact with each other to assign a device to the guest.
> 
> # PCI Terminology:
> 
> Host Bridge: Host bridge allows the PCI devices to talk to the rest of the computer.  
> ECAM: ECAM (Enhanced Configuration Access Mechanism) is a mechanism developed to allow PCIe to access configuration space. The space available per function is 4KB.
> 
> # Discovering PCI Host Bridge in XEN:
> 
> In order to support PCI passthrough, XEN should be aware of all the PCI host bridges available on the system and should be able to access the PCI configuration space. ECAM configuration access is supported as of now. During boot, XEN will read the PCI device tree node “reg” property and will map the ECAM space into XEN memory using the “ioremap_nocache()” function.
> 
> If there is more than one segment on the system, XEN will read the “linux,pci-domain” property from the device tree node and configure the host bridge segment number accordingly. All the PCI device tree nodes should have the “linux,pci-domain” property so that there will be no conflicts. During hardware domain boot, Linux will also use the same “linux,pci-domain” property and assign the domain number to the host bridge.
> 
> When Dom0 tries to access the PCI config space of the device, XEN will find the corresponding host bridge based on segment number and access the corresponding config space assigned to that bridge.
> 
> Limitations:
> * Only PCI ECAM configuration space access is supported.
> * Only device tree binding is supported as of now; ACPI is not supported.
> * The PCI host bridge access code needs to be ported to XEN to access the configuration space (the generic one works, but lots of platforms will require some specific code or quirks).
>
> # Discovering PCI devices:
> 
> PCI-PCIe enumeration is a process of detecting devices connected to its host. It is the responsibility of the hardware domain or boot firmware to do the PCI enumeration and configure the BAR, PCI capabilities, and MSI/MSI-X configuration.
> 
> PCI-PCIe enumeration in XEN is not feasible for the configuration part, as it would require a lot of code inside Xen, which would require a lot of maintenance. Added to this, many platforms require some quirks in that part of the PCI code, which would greatly increase Xen's complexity. Once the hardware domain enumerates the device, it will communicate it to XEN via the below hypercall.
> 
> #define PHYSDEVOP_pci_device_add        25
> struct physdev_pci_device_add {
>     uint16_t seg;
>     uint8_t bus;
>     uint8_t devfn;
>     uint32_t flags;
>     struct {
>         uint8_t bus;
>         uint8_t devfn;
>     } physfn;
>     /*
>      * Optional parameters array.
>      * First element ([0]) is PXM domain associated with the device
>      * (if XEN_PCI_DEV_PXM is set).
>      */
>     uint32_t optarr[XEN_FLEX_ARRAY_DIM];
> };
> 
> As the hypercall argument has the PCI segment number, XEN will access the PCI config space based on this segment number and find the host bridge corresponding to this segment number. At this stage the host bridge is fully initialized, so there will be no issue accessing the config space.
> 
> XEN will add the PCI devices to the linked list maintained in XEN using the function pci_add_device(). XEN will be aware of all the PCI devices on the system and all the devices will be added to the hardware domain.
> 
> Limitations:
> * When PCI devices are added to XEN, the MSI capability is not initialized inside XEN and not supported as of now.
> * The ACS capability is disabled for ARM as of now, as devices are not accessible after enabling it.
> * The Dom0less implementation will require Xen to have the capability to discover the PCI devices itself (without depending on Dom0 to declare them to Xen).
 
I think it is fine to assume that for dom0less the "firmware" has taken
care of setting up the BARs correctly. Starting with that assumption, it
looks like it should be "easy" to walk the PCI topology in Xen when/if
there is no dom0?


> # Enable the existing x86 virtual PCI support for ARM:
> 
> The existing VPCI support available for x86 is adapted for Arm. When the device is added to XEN via the hypercall “PHYSDEVOP_pci_device_add”, a VPCI handler for the config space access is added to the PCI device to emulate the PCI device.
> 
> An MMIO trap handler for the PCI ECAM space is registered in XEN so that when a guest tries to access the PCI config space, XEN will trap the access and emulate the read/write using the VPCI and not the real PCI hardware.
> 
> Limitations:
> * No handler is registered for the MSI configuration.
> * Only legacy interrupts are supported and tested as of now; MSI is not implemented or tested.
> 
> # Assign the device to the guest:
> 
> Assigning a PCI device from the hardware domain to the guest is done using the below guest config option. When the xl tool creates the domain, PCI devices will be assigned to the guest VPCI bus.
> 	pci=[ "PCI_SPEC_STRING", "PCI_SPEC_STRING", ...]
> 
> The guest will only be able to access the assigned devices and see the bridges. The guest will not be able to access or see the devices that are not assigned to it.
> 
> Limitation:
> * As of now all the bridges in the PCI bus are seen by the guest on the VPCI bus.

We need to come up with something similar for dom0less too. It could be
exactly the same thing (a list of BDFs as strings as a device tree
property) or something else if we can come up with a better idea.
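
One possible shape for such a list-of-BDFs property, sketched as a device tree fragment (the "xen,domain" compatible exists for dom0less domains, but the "xen,pci-assigned" property name and the BDF values here are purely hypothetical, not an agreed binding):

```dts
/dts-v1/;

/ {
    chosen {
        domU1 {
            compatible = "xen,domain";
            /* Hypothetical property: devices to assign, listed as
             * segment:bus:device.function strings. */
            xen,pci-assigned = "0000:00:03.0", "0000:01:00.0";
        };
    };
};
```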


> # Emulated PCI device tree node in libxl:
> 
> Libxl creates a virtual PCI device tree node in the device tree to enable the guest OS to discover the virtual PCI bus during guest boot. We introduced the new config option [vpci="pci_ecam"] for guests. When this config option is enabled in a guest configuration, a PCI device tree node will be created in the guest device tree.
> 
> A new area has been reserved in the arm guest physical map at which the VPCI bus is declared in the device tree (reg and ranges parameters of the node). A trap handler for the PCI ECAM access from guest has been registered at the defined address and redirects requests to the VPCI driver in Xen.
> 
> Limitation:
> * Only one PCI device tree node is supported as of now.

I think vpci="pci_ecam" should be optional: if pci=[ "PCI_SPEC_STRING",
...] is specified, then vpci="pci_ecam" is implied.

vpci="pci_ecam" is only useful one day in the future when we want to be
able to emulate other non-ecam host bridges. For now we could even skip
it.
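
An illustrative guest config fragment combining the two options (the BDF value is made up; the explicit vpci line would become redundant under the implied-by-pci behaviour suggested above):

```
# xl guest config sketch for the proposal under discussion
vpci = "pci_ecam"
pci = [ "0000:00:03.0" ]
```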


> BAR value and IOMEM mapping:
> 
> The Linux guest will do the PCI enumeration based on the area reserved for ECAM and the IOMEM ranges in the VPCI device tree node. Once a PCI device is assigned to the guest, XEN will map the guest PCI IOMEM region to the real physical IOMEM region only for the assigned devices.
> 
> As of now we have not modified the existing VPCI code to map the guest PCI IOMEM region to the real physical IOMEM region. We used the existing guest “iomem” config option to map the region.
> For example:
> 	Guest reserved IOMEM region:  0x04020000
> 	Real physical IOMEM region:   0x50000000
> 	IOMEM size:                   128MB
> 	iomem config will be:         iomem = ["0x50000,0x8000@0x4020"]
> 
> There is no need to map the ECAM space as XEN already have access to the ECAM space and XEN will trap ECAM accesses from the guest and will perform read/write on the VPCI bus.
> 
> IOMEM access will not be trapped and the guest will directly access the IOMEM region of the assigned device via stage-2 translation.
> 
> In the same way, we mapped the assigned device's IRQs to the guest using the below config option.
> 	irqs= [ NUMBER, NUMBER, ...]
> 
> Limitation:
> * Need to avoid the “iomem” and “irqs” guest config options and map the IOMEM region and IRQ at the same time when the device is assigned to the guest using the “pci” guest config option when xl creates the domain.
> * Emulated BAR values on the VPCI bus should reflect the IOMEM mapped address.
> * The x86 mapping code should be ported to Arm so that the stage-2 translation is adapted when the guest modifies the BAR register values (to map the address requested by the guest for a specific IOMEM region to the address actually contained in the real BAR register of the corresponding device).
> 
> # SMMU configuration for guest:
> 
> When assigning PCI devices to a guest, the SMMU configuration should be updated to remove access to the hardware domain memory and add
> configuration to have access to the guest memory with the proper address translation so that the device can do DMA operations from and to the guest memory only.
> 
> # MSI/MSI-X support:
> Not implemented or tested as of now.
> 
> # ITS support:
> Not implemented or tested as of now.
--8323329-1721092369-1594930123=:3886--


From xen-devel-bounces@lists.xenproject.org Thu Jul 16 21:50:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jul 2020 21:50:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jwBlJ-0008JF-3S; Thu, 16 Jul 2020 21:49:53 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=sYN5=A3=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jwBlH-0008In-DL
 for xen-devel@lists.xenproject.org; Thu, 16 Jul 2020 21:49:51 +0000
X-Inumbo-ID: 425eb560-c7ae-11ea-b7bb-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 425eb560-c7ae-11ea-b7bb-bc764e2007e4;
 Thu, 16 Jul 2020 21:49:44 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=jlduG5hfZMR4/2TW+yTWh9vv/2Sv5Eyaqk66LqwiT7Q=; b=zZl845q6lhMhtZTzTjpnvfFtq
 bafPerrvRef9DTyyh+lev4r6uZoY8h3++LfTL7jRs2Nf7baixRBmECmx/dg6wCHOznatwC6KrKISx
 De+biQZtI5XcU0botmQe2f3eqP2LwnuyBUPJ6qjddhZfwUA6WaZekV7IGwV7dSOqwg1QI=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jwBl9-0000hW-GC; Thu, 16 Jul 2020 21:49:43 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jwBl9-00032P-1x; Thu, 16 Jul 2020 21:49:43 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jwBl9-0002Lx-0X; Thu, 16 Jul 2020 21:49:43 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151939-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 151939: tolerable FAIL - PUSHED
X-Osstest-Failures: linux-5.4:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 linux-5.4:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 linux-5.4:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 linux-5.4:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
X-Osstest-Versions-This: linux=c57b1153a58a6263863667296b5f00933fc46a4f
X-Osstest-Versions-That: linux=1c54d3c15afacf179c07ce6c57a0d43f412f1b3a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 16 Jul 2020 21:49:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151939 linux-5.4 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151939/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop              fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop             fail never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop             fail never pass

version targeted for testing:
 linux                c57b1153a58a6263863667296b5f00933fc46a4f
baseline version:
 linux                1c54d3c15afacf179c07ce6c57a0d43f412f1b3a

Last test of basis   151757  2020-07-09 08:18:27 Z    7 days
Testing same since   151939  2020-07-16 06:40:22 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aditya Pakki <pakki001@umn.edu>
  Adrian Hunter <adrian.hunter@intel.com>
  Alex Deucher <alexander.deucher@amd.com>
  Alexandru Elisei <alexandru.elisei@arm.com>
  Alexei Starovoitov <ast@kernel.org>
  Andre Edich <andre.edich@microchip.com>
  Andrew Bowers <andrewx.bowers@intel.com>
  Andrew Scull <ascull@google.com>
  Andy Shevchenko <andriy.shevchenko@linux.intel.com>
  Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
  Arnaldo Carvalho de Melo <acme@redhat.com>
  Aya Levin <ayal@mellanox.com>
  Bartosz Golaszewski <bgolaszewski@baylibre.com>
  Benjamin Poirier <benjamin.poirier@gmail.com>
  Boris Burkov <boris@bur.io>
  Charles Keepax <ckeepax@opensource.cirrus.com>
  Chengguang Xu <cgxu519@mykernel.net>
  Chris Chiu <chiu@endlessm.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian König <christian.koenig@amd.com>
  Christoph Hellwig <hch@lst.de>
  Chun-Kuang Hu <chunkuang.hu@kernel.org>
  Ciara Loftus <ciara.loftus@intel.com>
  Codrin Ciubotariu <codrin.ciubotariu@microchip.com>
  Dan Carpenter <dan.carpenter@oracle.com>
  Daniel Drake <drake@endlessm.com>
  Dany Madden <drt@linux.ibm.com>
  David S. Miller <davem@davemloft.net>
  David Sterba <dsterba@suse.com>
  Davide Caratti <dcaratti@redhat.com>
  Dennis Dalessandro <dennis.dalessandro@intel.com>
  Divya Indi <divya.indi@oracle.com>
  Douglas Anderson <dianders@chromium.org>
  Eran Ben Elisha <eranbe@mellanox.com>
  Eric Dumazet <edumazet@google.com>
  Even Brenden <evenbrenden@gmail.com>
  Felipe Balbi <balbi@kernel.org>
  Gerald Schaefer <gerald.schaefer@de.ibm.com> # s390
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Hans de Goede <hdegoede@redhat.com>
  Hector Martin <marcan@marcan.st>
  Heiko Carstens <hca@linux.ibm.com>
  Heiko Carstens <heiko.carstens@de.ibm.com>
  Hsin-Yi Wang <hsinyi@chromium.org>
  Huazhong Tan <tanhuazhong@huawei.com>
  Hui Wang <hui.wang@canonical.com>
  Huy Nguyen <huyn@mellanox.com>
  Ido Schimmel <idosch@mellanox.com>
  Igor Russkikh <irusskikh@marvell.com>
  Ingo Molnar <mingo@kernel.org>
  Jakub Sitnicki <jakub@cloudflare.com>
  James Morse <james.morse@arm.com>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Gunthorpe <jgg@nvidia.com>
  Jeff Kirsher <jeffrey.t.kirsher@intel.com>
  Jens Axboe <axboe@kernel.dk>
  Jens Thoms Toerring <jt@toerring.de>
  Jessica Yu <jeyu@kernel.org>
  Jian-Hong Pan <jian-hong@endlessm.com>
  Jiri Kosina <jkosina@suse.cz>
  Jiri Olsa <jolsa@redhat.com>
  Joe Lawrence <joe.lawrence@redhat.com>
  Joerg Roedel <jroedel@suse.de>
  Johannes Berg <johannes.berg@intel.com>
  John Fastabend <john.fastabend@gmail.com>
  Jonathan Toppins <jtoppins@redhat.com>
  Josef Bacik <josef@toxicpanda.com>
  Josh Poimboeuf <jpoimboe@redhat.com>
  Kaike Wan <kaike.wan@intel.com>
  Kamal Heib <kamalheib1@gmail.com>
  Kees Cook <keescook@chromium.org>
  Kim Phillips <kim.phillips@amd.com>
  Krzysztof Kozlowski <krzk@kernel.org>
  Leon Romanovsky <leonro@mellanox.com>
  Li Heng <liheng40@huawei.com>
  Linus Walleij <linus.walleij@linaro.org>
  Lu Baolu <baolu.lu@linux.intel.com>
  Luca Coelho <luciano.coelho@intel.com>
  Marc Zyngier <maz@kernel.org>
  Marco Elver <elver@google.com>
  Marek Olšák <marek.olsak@amd.com>
  Mark Brown <broonie@kernel.org>
  Martin K. Petersen <martin.petersen@oracle.com>
  Martin KaFai Lau <kafai@fb.com>
  Max Gurtovoy <maxg@mellanox.com>
  Michael Ellerman <mpe@ellerman.id.au>
  Michal Suchanek <msuchanek@suse.de>
  Mike Snitzer <snitzer@redhat.com>
  Mikulas Patocka <mpatocka@redhat.com>
  Ming Lei <ming.lei@redhat.com>
  Miroslav Benes <mbenes@suse.cz>
  Namhyung Kim <namhyung@kernel.org>
  Neil Armstrong <narmstrong@baylibre.com>
  Nicolas Ferre <nicolas.ferre@microchip.com>
  Nicolin Chen <nicoleotsuka@gmail.com>
  Pablo Neira Ayuso <pablo@netfilter.org>
  Paolo Bonzini <pbonzini@redhat.com>
  Parthiban Veerasooran <Parthiban.Veerasooran@microchip.com>
  Pavel Hofman <pavel.hofman@ivitera.com>
  Peng Ma <peng.ma@nxp.com>
  Peter Zijlstra (Intel) <peterz@infradead.org>
  Peter Zijlstra <peterz@infradead.org>
  Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com>
  Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
  Rajat Jain <rajatja@google.com>
  Russell King <rmk+kernel@armlinux.org.uk>
  Saeed Mahameed <saeedm@mellanox.com>
  Sascha Hauer <s.hauer@pengutronix.de>
  Sasha Levin <sashal@kernel.org>
  Scott Wood <swood@redhat.com>
  Sean Christopherson <sean.j.christopherson@intel.com>
  Sebastian Andrzej Siewior <bigeasy@linutronix.de>
  Shawn Guo <shawnguo@kernel.org>
  Sowjanya Komatineni <skomatineni@nvidia.com>
  Srinivas Kandagatla <srinivas.kandagatla@linaro.org>
  Stanislav Saner <ssaner@redhat.com>
  Stephane Eranian <eranian@google.com>
  Stephane Eranian <eraniangoogle.com>
  Steve French <stfrench@microsoft.com>
  Steven Price <steven.price@arm.com>
  Sudarsana Reddy Kalluru <skalluru@marvell.com>
  Takashi Iwai <tiwai@suse.de>
  Thierry Reding <treding@nvidia.com>
  Tom Rix <trix@redhat.com>
  Tomas Henzl <thenzl@redhat.com>
  Tony Lindgren <tony@atomide.com>
  Ulf Hansson <ulf.hansson@linaro.org>
  Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
  Vasily Gorbik <gor@linux.ibm.com>
  Vincent Bernat <vincent@bernat.ch>
  Vineet Gupta <vgupta@synopsys.com>
  Vinod Koul <vkoul@kernel.org>
  Wei Li <liwei391@huawei.com>
  Will Deacon <will@kernel.org>
  xidongwang <wangxidong_97@163.com>
  Xin Tan <tanxin.ctf@gmail.com>
  Xiyu Yang <xiyuyang19@fudan.edu.cn>
  Yonglong Liu <liuyonglong@huawei.com>
  yu kuai <yukuai3@huawei.com>
  Zhang Xiaoxu <zhangxiaoxu5@huawei.com>
  Zheng Bin <zhengbin13@huawei.com>
  Zhenzhong Duan <zhenzhong.duan@gmail.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

hint: The 'hooks/update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-receive' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
To xenbits.xen.org:/home/xen/git/linux-pvops.git
   1c54d3c15afa..c57b1153a58a  c57b1153a58a6263863667296b5f00933fc46a4f -> tested/linux-5.4


From xen-devel-bounces@lists.xenproject.org Thu Jul 16 22:08:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jul 2020 22:08:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jwC2z-0001pC-1v; Thu, 16 Jul 2020 22:08:09 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=sYN5=A3=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jwC2x-0001p7-Rd
 for xen-devel@lists.xenproject.org; Thu, 16 Jul 2020 22:08:07 +0000
X-Inumbo-ID: d2b2634e-c7b0-11ea-bb8b-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d2b2634e-c7b0-11ea-bb8b-bc764e2007e4;
 Thu, 16 Jul 2020 22:08:05 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=DSaGYVuu0fSmo2UPQ395hHZiDynwVeUpF5+VoCe3ozI=; b=2nxs4zZTBrYaMA41N5AWIps/i
 IbRjJnwt84yqhuc6d0k1cewc4i21MOFKYyi7VLSms9TSsIPvIDjec4acK4BwTP9WfrHsiwH2/IU/h
 8iXyWAXhTpX+15beqazqX6rKjU3p/ufDXkpj/2u0gCsaP+J0OiFE7kqIKdZZBMDPQ2kbg=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jwC2u-00016C-Lp; Thu, 16 Jul 2020 22:08:04 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jwC2u-0003mq-CN; Thu, 16 Jul 2020 22:08:04 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jwC2u-0007PR-Bj; Thu, 16 Jul 2020 22:08:04 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151934-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 151934: regressions - FAIL
X-Osstest-Failures: qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:redhat-install:fail:regression
 qemu-mainline:test-amd64-i386-freebsd10-i386:guest-start:fail:regression
 qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-start:fail:regression
 qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install:fail:regression
 qemu-mainline:test-armhf-armhf-xl-vhd:leak-check/check:fail:regression
 qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:redhat-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:windows-install:fail:regression
 qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: qemuu=8746309137ba470d1b2e8f5ce86ac228625db940
X-Osstest-Versions-That: qemuu=9e3903136d9acde2fb2dd9e967ba928050a6cb4a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 16 Jul 2020 22:08:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151934 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151934/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemuu-ovmf-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-qemuu-nested-intel 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-ovmf-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-debianhvm-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-qemuu-rhel6hvm-intel 10 redhat-install   fail REGR. vs. 151065
 test-amd64-i386-freebsd10-i386 11 guest-start            fail REGR. vs. 151065
 test-amd64-i386-freebsd10-amd64 11 guest-start           fail REGR. vs. 151065
 test-amd64-amd64-qemuu-nested-amd 10 debian-hvm-install  fail REGR. vs. 151065
 test-armhf-armhf-xl-vhd      18 leak-check/check         fail REGR. vs. 151065
 test-amd64-i386-qemuu-rhel6hvm-amd 10 redhat-install     fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-win7-amd64 10 windows-install  fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-win7-amd64 10 windows-install   fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-ws16-amd64 10 windows-install  fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-ws16-amd64 10 windows-install   fail REGR. vs. 151065

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 151065
 test-armhf-armhf-xl-rtds     16 guest-start/debian.repeat    fail  like 151065
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 151065
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                8746309137ba470d1b2e8f5ce86ac228625db940
baseline version:
 qemuu                9e3903136d9acde2fb2dd9e967ba928050a6cb4a

Last test of basis   151065  2020-06-12 22:27:51 Z   33 days
Failing since        151101  2020-06-14 08:32:51 Z   32 days   44 attempts
Testing same since   151934  2020-07-16 04:14:44 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Ahmed Karaman <ahmedkhaledkaraman@gmail.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.m.mail@gmail.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Richardson <Alexander.Richardson@cl.cam.ac.uk>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Boettcher <alexander.boettcher@genode-labs.com>
  Alexander Bulekov <alxndr@bu.edu>
  Alexandre Mergnat <amergnat@baylibre.com>
  Alexey Krasikov <alex-krasikov@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Allan Peramaki <aperamak@pp1.inet.fi>
  Andrew Jones <drjones@redhat.com>
  Ani Sinha <ani.sinha@nutanix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Ard Biesheuvel <ardb@kernel.org>
  Artyom Tarasenko <atar4qemu@gmail.com>
  Atish Patra <atish.patra@wdc.com>
  Aurelien Jarno <aurelien@aurel32.net>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Basil Salman <basil@daynix.com>
  Basil Salman <bsalman@redhat.com>
  Beata Michalska <beata.michalska@linaro.org>
  Bin Meng <bin.meng@windriver.com>
  Bin Meng <bmeng.cn@gmail.com>
  Cameron Esfahani <dirty@apple.com>
  Catherine A. Frederick <chocola@animebitch.es>
  Cathy Zhang <cathy.zhang@intel.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christophe de Dinechin <dinechin@redhat.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Cindy Lu <lulu@redhat.com>
  Claudio Fontana <cfontana@suse.de>
  Cleber Rosa <crosa@redhat.com>
  Colin Xu <colin.xu@intel.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniele Buono <dbuono@linux.vnet.ibm.com>
  David Carlier <devnexen@gmail.com>
  David Edmondson <david.edmondson@oracle.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Denis V. Lunev <den@openvz.org>
  Derek Su <dereksu@qnap.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Ed Robbins <E.J.C.Robbins@kent.ac.uk>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Emilio G. Cota <cota@braap.org>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Evgeny Yakovlev <eyakovlev@virtuozzo.com>
  fangying <fangying1@huawei.com>
  Farhan Ali <alifm@linux.ibm.com>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frank Chang <frank.chang@sifive.com>
  Geoffrey McRae <geoff@hostfission.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Giuseppe Musacchio <thatlemon@gmail.com>
  Greg Kurz <groug@kaod.org>
  Guenter Roeck <linux@roeck-us.net>
  Gustavo Romero <gromero@linux.ibm.com>
  Halil Pasic <pasic@linux.ibm.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Howard Spoelstra <hsp.cat7@gmail.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Ian Jiang <ianjiang.ict@gmail.com>
  Igor Mammedov <imammedo@redhat.com>
  Jan Kiszka <jan.kiszka@siemens.com>
  Janne Grunau <j@jannau.net>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason Wang <jasowang@redhat.com>
  Jay Zhou <jianjay.zhou@huawei.com>
  Jean-Christophe Dubois <jcd@tribudubois.net>
  Jessica Clarke <jrtc27@jrtc27.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jingqi Liu <jingqi.liu@intel.com>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Joseph Myers <joseph@codesourcery.com>
  Josh Kunz <jkz@google.com>
  Juan Quintela <quintela@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Keqian Zhu <zhukeqian1@huawei.com>
  Kevin Wolf <kwolf@redhat.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Leif Lindholm <leif@nuviainc.com>
  Leonid Bloch <lb.workbox@gmail.com>
  Leonid Bloch <lbloch@janustech.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@gmail.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  lichun <lichun@ruijie.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  Like Xu <like.xu@linux.intel.com>
  Lingfeng Yang <lfy@google.com>
  Lingshan zhu <lingshan.zhu@intel.com>
  Liran Alon <liran.alon@oracle.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Lukas Straub <lukasstraub2@web.de>
  Luwei Kang <luwei.kang@intel.com>
  Maciej S. Szmigiero <maciej.szmigiero@oracle.com>
  Magnus Damm <magnus.damm@gmail.com>
  Mao Zhongyi <maozhongyi@cmss.chinamobile.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marcelo Tosatti <mtosatti@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Mario Smarduch <msmarduch@digitalocean.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Masahiro Yamada <masahiroy@kernel.org>
  Matus Kysel <mkysel@tachyum.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Maxime Coquelin <maxime.coquelin@redhat.com>
  Menno Lageman <menno.lageman@oracle.com>
  Michael Rolnik <mrolnik@gmail.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michele Denber <denber@mindspring.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Olaf Hering <olaf@aepfle.de>
  Pan Nengyuan <pannengyuan@huawei.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Durrant <pdurrant@amazon.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Radoslaw Biernacki <rad@semihalf.com>
  Raphael Norwitz <raphael.norwitz@nutanix.com>
  Richard Henderson <richard.henderson@linaro.org>
  Richard W.M. Jones <rjones@redhat.com>
  Riku Voipio <riku.voipio@linaro.org>
  Robert Foley <robert.foley@linaro.org>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Roman Kagan <rkagan@virtuozzo.com>
  Roman Kagan <rvkagan@yandex-team.ru>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Sarah Harris <S.E.Harris@kent.ac.uk>
  Sebastian Rasmussen <sebras@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Tao Xu <tao3.xu@intel.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tiwei Bie <tiwei.bie@intel.com>
  Tong Ho <tong.ho@xilinx.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  WangBowen <bowen.wang@intel.com>
  Wei Huang <wei.huang2@amd.com>
  Wei Wang <wei.w.wang@intel.com>
  Wentong Wu <wentong.wu@intel.com>
  Xie Yongji <xieyongji@bytedance.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Ying Fang <fangying1@huawei.com>
  Yoshinori Sato <ysato@users.sourceforge.jp>
  Yuri Benditovich <yuri.benditovich@daynix.com>
  Zhang Chen <chen.zhang@intel.com>
  Zheng Chuan <zhengchuan@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 28624 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Jul 16 23:28:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 16 Jul 2020 23:28:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jwDIr-0000Mf-Px; Thu, 16 Jul 2020 23:28:37 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2Du1=A3=amazon.com=prvs=459385da7=anchalag@srs-us1.protection.inumbo.net>)
 id 1jwDIq-0000Ma-Lm
 for xen-devel@lists.xenproject.org; Thu, 16 Jul 2020 23:28:36 +0000
X-Inumbo-ID: 1145933c-c7bc-11ea-8496-bc764e2007e4
Received: from smtp-fw-33001.amazon.com (unknown [207.171.190.10])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1145933c-c7bc-11ea-8496-bc764e2007e4;
 Thu, 16 Jul 2020 23:28:35 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=amazon.com; i=@amazon.com; q=dns/txt; s=amazon201209;
 t=1594942116; x=1626478116;
 h=date:from:to:cc:message-id:references:mime-version:
 content-transfer-encoding:in-reply-to:subject;
 bh=jYMN3GJrGjA7ojiGUc+ciVEe93zlOhFmaxfZs4H/WZY=;
 b=aXWtekAmjTmmJCjdkTR35j/1FKXtr4KRsuse/fl7DxJGp8IuJ7QJQmcc
 nmgtXqS6dJbcoNKrfx0kjweWeaDNo4rYvtvg57wtEliCQ0HXRJIvPyHC5
 4+zO4xGcQPycY1tu67h09l4V9KSwFx3F0W+mBK9teiu6i+D6FzKkJBahh 4=;
IronPort-SDR: l3hkHaieLfP9aicaua6dk5DKRxOesNbvVEFYq8ruQ20zrMR0yFNnemTS5j3jWHYSO5I6275xJA
 fwBryLxen1ag==
X-IronPort-AV: E=Sophos;i="5.75,360,1589241600"; d="scan'208";a="59225594"
Subject: Re: [PATCH v2 00/11] Fix PM hibernation in Xen guests
Received: from sea32-co-svc-lb4-vlan3.sea.corp.amazon.com (HELO
 email-inbound-relay-2a-90c42d1d.us-west-2.amazon.com) ([10.47.23.38])
 by smtp-border-fw-out-33001.sea14.amazon.com with ESMTP;
 16 Jul 2020 23:28:33 +0000
Received: from EX13MTAUEE002.ant.amazon.com
 (pdx4-ws-svc-p6-lb7-vlan3.pdx.amazon.com [10.170.41.166])
 by email-inbound-relay-2a-90c42d1d.us-west-2.amazon.com (Postfix) with ESMTPS
 id 0DA0EA2021; Thu, 16 Jul 2020 23:28:31 +0000 (UTC)
Received: from EX13D08UEE002.ant.amazon.com (10.43.62.92) by
 EX13MTAUEE002.ant.amazon.com (10.43.62.24) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Thu, 16 Jul 2020 23:28:13 +0000
Received: from EX13MTAUEA002.ant.amazon.com (10.43.61.77) by
 EX13D08UEE002.ant.amazon.com (10.43.62.92) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Thu, 16 Jul 2020 23:28:12 +0000
Received: from dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com
 (172.22.96.68) by mail-relay.amazon.com (10.43.61.169) with Microsoft SMTP
 Server id 15.0.1497.2 via Frontend Transport; Thu, 16 Jul 2020 23:28:13 +0000
Received: by dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com (Postfix,
 from userid 4335130)
 id CEA575697F; Thu, 16 Jul 2020 23:28:12 +0000 (UTC)
Date: Thu, 16 Jul 2020 23:28:12 +0000
From: Anchal Agarwal <anchalag@amazon.com>
To: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Message-ID: <20200716232812.GA26338@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
References: <cover.1593665947.git.anchalag@amazon.com>
 <324020A7-996F-4CF8-A2F4-46957CEA5F0C@amazon.com>
 <c6688a0c-7fec-97d2-3dcc-e160e97206e6@oracle.com>
 <20200715194933.GA17938@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
 <6145a0d9-fd4e-a739-407e-97f7261eecd8@oracle.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="iso-8859-1"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <6145a0d9-fd4e-a739-407e-97f7261eecd8@oracle.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Precedence: Bulk
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "Valentin, Eduardo" <eduval@amazon.com>,
 "len.brown@intel.com" <len.brown@intel.com>,
 "peterz@infradead.org" <peterz@infradead.org>,
 "benh@kernel.crashing.org" <benh@kernel.crashing.org>,
 "x86@kernel.org" <x86@kernel.org>, "linux-mm@kvack.org" <linux-mm@kvack.org>,
 "pavel@ucw.cz" <pavel@ucw.cz>, "hpa@zytor.com" <hpa@zytor.com>,
 "sstabellini@kernel.org" <sstabellini@kernel.org>, "Kamata,
 Munehisa" <kamatam@amazon.com>, "mingo@redhat.com" <mingo@redhat.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "Singh,
 Balbir" <sblbir@amazon.com>, "axboe@kernel.dk" <axboe@kernel.dk>,
 "konrad.wilk@oracle.com" <konrad.wilk@oracle.com>,
 "bp@alien8.de" <bp@alien8.de>, "tglx@linutronix.de" <tglx@linutronix.de>,
 "jgross@suse.com" <jgross@suse.com>,
 "netdev@vger.kernel.org" <netdev@vger.kernel.org>,
 "linux-pm@vger.kernel.org" <linux-pm@vger.kernel.org>,
 "rjw@rjwysocki.net" <rjw@rjwysocki.net>,
 "linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
 "vkuznets@redhat.com" <vkuznets@redhat.com>,
 "davem@davemloft.net" <davem@davemloft.net>, "Woodhouse,
 David" <dwmw@amazon.co.uk>, "roger.pau@citrix.com" <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Wed, Jul 15, 2020 at 04:49:57PM -0400, Boris Ostrovsky wrote:
> CAUTION: This email originated from outside of the organization. Do not click links or open attachments unless you can confirm the sender and know the content is safe.
> 
> 
> 
> On 7/15/20 3:49 PM, Anchal Agarwal wrote:
> > On Mon, Jul 13, 2020 at 03:43:33PM -0400, Boris Ostrovsky wrote:
> >>
> >>
> >>
> >>
> >> On 7/10/20 2:17 PM, Agarwal, Anchal wrote:
> >>> Gentle ping on this series.
> >>
> >> Have you tested save/restore?
> >>
> > No, not with the last few series. But a good point, I will test that and get
> > back to you. Do you see anything specific in the series that suggests otherwise?
> 
> 
> root@ovs104> xl save pvh saved
> Saving to saved new xl format (info 0x3/0x0/1699)
> xc: info: Saving domain 3, type x86 HVM
> xc: Frames: 1044480/1044480  100%
> xc: End of stream: 0/0    0%
> root@ovs104> xl restore saved
> Loading new save file saved (new xl fmt info 0x3/0x0/1699)
>  Savefile contains xl domain config in JSON format
> Parsing config from <saved>
> xc: info: Found x86 HVM domain from Xen 4.13
> xc: info: Restoring domain
> xc: info: Restore successful
> xc: info: XenStore: mfn 0xfeffc, dom 0, evt 1
> xc: info: Console: mfn 0xfefff, dom 0, evt 2
> root@ovs104> xl console pvh
> [ 139.943872] ------------[ cut here ]------------
> [ 139.943872] kernel BUG at arch/x86/xen/enlighten.c:205!
> [ 139.943872] invalid opcode: 0000 [#1] SMP PTI
> [ 139.943872] CPU: 0 PID: 11 Comm: migration/0 Not tainted 5.8.0-rc5 #26
> [ 139.943872] RIP: 0010:xen_vcpu_setup+0x16d/0x180
> [ 139.943872] Code: 4a 8b 14 f5 40 c9 1b 82 48 89 d8 48 89 2c 02 8b 05
> a4 d4 40 01 85 c0 0f 85 15 ff ff ff 4a 8b 04 f5 40 c9 1b 82 e9 f4 fe ff
> ff <0f> 0b b8 ed ff ff ff e9 14 ff ff ff e8 12 4f 86 00 66 90 66 66 66
> [ 139.943872] RSP: 0018:ffffc9000006bdb0 EFLAGS: 00010046
> [ 139.943872] RAX: 0000000000000000 RBX: ffffc9000014fe00 RCX:
> 0000000000000000
> [ 139.943872] RDX: ffff88803fc00000 RSI: 0000000000016128 RDI:
> 0000000000000000
> [ 139.943872] RBP: 0000000000000000 R08: 0000000000000000 R09:
> 0000000000000000
> [ 139.943872] R10: ffffffff826174a0 R11: ffffc9000006bcb4 R12:
> 0000000000016120
> [ 139.943872] R13: 0000000000016120 R14: 0000000000016128 R15:
> 0000000000000000
> [ 139.943872] FS:  0000000000000000(0000) GS:ffff88803fc00000(0000)
> knlGS:0000000000000000
> [ 139.943872] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> [ 139.943872] CR2: 00007f704be8b000 CR3: 000000003901a004 CR4:
> 00000000000606f0
> [ 139.943872] Call Trace:
> [ 139.943872] ? __kmalloc+0x167/0x260
> [ 139.943872] ? xen_manage_runstate_time+0x14a/0x170
> [ 139.943872] xen_vcpu_restore+0x134/0x170
> [ 139.943872] xen_hvm_post_suspend+0x1d/0x30
> [ 139.943872] xen_arch_post_suspend+0x13/0x30
> [ 139.943872] xen_suspend+0x87/0x190
> [ 139.943872] multi_cpu_stop+0x6d/0x110
> [ 139.943872] ? stop_machine_yield+0x10/0x10
> [ 139.943872] cpu_stopper_thread+0x47/0x100
> [ 139.943872] smpboot_thread_fn+0xc5/0x160
> [ 139.943872] ? sort_range+0x20/0x20
> [ 139.943872] kthread+0xfe/0x140
> [ 139.943872] ? kthread_park+0x90/0x90
> [ 139.943872] ret_from_fork+0x22/0x30
> [ 139.943872] Modules linked in:
> [ 139.943872] ---[ end trace 74716859a6b4f0a8 ]---
> [ 139.943872] RIP: 0010:xen_vcpu_setup+0x16d/0x180
> [ 139.943872] Code: 4a 8b 14 f5 40 c9 1b 82 48 89 d8 48 89 2c 02 8b 05
> a4 d4 40 01 85 c0 0f 85 15 ff ff ff 4a 8b 04 f5 40 c9 1b 82 e9 f4 fe ff
> ff <0f> 0b b8 ed ff ff ff e9 14 ff ff ff e8 12 4f 86 00 66 90 66 66 66
> [ 139.943872] RSP: 0018:ffffc9000006bdb0 EFLAGS: 00010046
> [ 139.943872] RAX: 0000000000000000 RBX: ffffc9000014fe00 RCX:
> 0000000000000000
> [ 139.943872] RDX: ffff88803fc00000 RSI: 0000000000016128 RDI:
> 0000000000000000
> [ 139.943872] RBP: 0000000000000000 R08: 0000000000000000 R09:
> 0000000000000000
> [ 139.943872] R10: ffffffff826174a0 R11: ffffc9000006bcb4 R12:
> 0000000000016120
> [ 139.943872] R13: 0000000000016120 R14: 0000000000016128 R15:
> 0000000000000000
> [ 139.943872] FS:  0000000000000000(0000) GS:ffff88803fc00000(0000)
> knlGS:0000000000000000
> [ 139.943872] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> [ 139.943872] CR2: 00007f704be8b000 CR3: 000000003901a004 CR4:
> 00000000000606f0
> [ 139.943872] Kernel panic - not syncing: Fatal exception
> [ 139.943872] Shutting down cpus with NMI
> [ 143.927559] Kernel Offset: disabled
> root@ovs104>
>
I think I have found the bug. There were no issues with the V1 series that I
sent; however, there were issues with V2. I tested both series and found xl
save/restore to be working in V1 but not in V2 — I should have tested that
before posting. It looks like the issue comes from executing the syscore ops
registered for the hibernation use case during the call to xen_suspend().
I remember your earlier comment asking why we need to check the suspend mode
in xen_syscore_suspend() [patch-004], and I removed that check based on my
theoretical understanding of your suggestion that, since the lock_system_sleep()
lock is taken, hibernation cannot be initiated. What I overlooked is the part of
the code where, during xen_suspend(), all registered syscore_ops suspend
callbacks are invoked — so the ones registered for PM hibernation get called as
well. Without a check on the suspend mode there, the callback does not return
early, even though it should never execute in the Xen suspend case.
I will restore that part of the check from the V1 version of patch-004 and send
an updated patch with the fix.

Thanks,
Anchal


From xen-devel-bounces@lists.xenproject.org Fri Jul 17 02:49:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jul 2020 02:49:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jwGQS-00025T-T6; Fri, 17 Jul 2020 02:48:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hPx8=A4=intel.com=kevin.tian@srs-us1.protection.inumbo.net>)
 id 1jwGQQ-00025O-QP
 for xen-devel@lists.xenproject.org; Fri, 17 Jul 2020 02:48:39 +0000
X-Inumbo-ID: 017d7174-c7d8-11ea-bb8b-bc764e2007e4
Received: from mga05.intel.com (unknown [192.55.52.43])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 017d7174-c7d8-11ea-bb8b-bc764e2007e4;
 Fri, 17 Jul 2020 02:48:34 +0000 (UTC)
IronPort-SDR: +ku72ieCpUj6BiG0cCLANam8WHw2+8+G7ITrPt3PdTm1AHj4EL5QKFQXC5MjHvmFoPhLA9T9rM
 xsxEYn7yGTNg==
X-IronPort-AV: E=McAfee;i="6000,8403,9684"; a="234379831"
X-IronPort-AV: E=Sophos;i="5.75,361,1589266800"; d="scan'208";a="234379831"
X-Amp-Result: SKIPPED(no attachment in message)
X-Amp-File-Uploaded: False
Received: from fmsmga006.fm.intel.com ([10.253.24.20])
 by fmsmga105.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 16 Jul 2020 19:48:33 -0700
IronPort-SDR: 27u9ZPBRvB8dwNTpknMqOKTcEPpZ6WlIgTTl8/93s6P5zb6BWpU1ukbq19wQ5gt/YYOl6t4Rry
 aaw7nsD5wlag==
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="5.75,361,1589266800"; d="scan'208";a="486327508"
Received: from fmsmsx602.amr.corp.intel.com ([10.18.126.82])
 by fmsmga006.fm.intel.com with ESMTP; 16 Jul 2020 19:48:33 -0700
Received: from fmsmsx602.amr.corp.intel.com (10.18.126.82) by
 fmsmsx602.amr.corp.intel.com (10.18.126.82) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.1.1713.5; Thu, 16 Jul 2020 19:48:32 -0700
Received: from FMSEDG001.ED.cps.intel.com (10.1.192.133) by
 fmsmsx602.amr.corp.intel.com (10.18.126.82) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256) id 15.1.1713.5
 via Frontend Transport; Thu, 16 Jul 2020 19:48:32 -0700
Received: from NAM12-DM6-obe.outbound.protection.outlook.com (104.47.59.169)
 by edgegateway.intel.com (192.55.55.68) with Microsoft SMTP Server (TLS) id
 14.3.439.0; Thu, 16 Jul 2020 19:48:31 -0700
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=ko7LwWSOX2cgWAIq7nb9Ybr+/CTg6NZX5I64YSEJzYPpPvofkjYyq+zkkZmQeivTUdgKwh1LmL+iUs6sPtf1xTrV6LRR1Axd5h/jzHa9G1jVf/56hFktJnTHVLJ6wqAJiDI6BB3SWfhfjkO6tFcQ5LM30SFU00V+hcqK3MUESwt/gOeV/9QdjZLW7jqYqNg9ZkeGJzS+ZpHpogTAaFY11gkA/pTALADUrvhuB74DaZH6CtcknFeVAV7QD0DDMVGfgfxxGW6BE80Jgc36q6uq+oQw52KCQBwbCzRUulPmRez7bSOkHS5c3KANtMyNsZFkHdfpqhY61iRIdY/KaujI4A==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; 
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=nqUNMdOna51lVMddSEw3v4JJroMRzmwwFd/ZAXjxsk8=;
 b=THh+oD2LWZIJbbGYotae/MIC9pa02HuNdfwXc6ZYIYT+GvfvE6wXrjo8dTIxRtRlZHMH8rF7cSidmQZ6DWHsGKhKsqJkWzwqcSC+3Dabdf4gOZ9/Tvpi2pTwQSJTLipWp0uTqU+ARzvZNTIOZcnjFz2UZv013zQeHiFvUvyTZZPuQLnqcLVP/Xi8APHvvNxBUObrvJdONvTlCsAyutdCXEWqDHWmzu2TDh9goujmlXmiIEE1IC8RP/q/s3dtyZ/jaM8PwrnG1tmkKqMdSaEmlUj+hDlPz6vxBKQDL6DD3z/Ae/CkQ5BuTIrlgS7N6pq4wLhz+9DpOiDMdOL1HkKaqw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=intel.com; dmarc=pass action=none header.from=intel.com;
 dkim=pass header.d=intel.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=intel.onmicrosoft.com; 
 s=selector2-intel-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=nqUNMdOna51lVMddSEw3v4JJroMRzmwwFd/ZAXjxsk8=;
 b=hOwnou9yMSI195yXQdzAgXeoKutkBNSAxcVM4pzFBVARLu6pAgnZHYHrWw5yCzDlAhwodiV+nJZbAXRsdEiEteY1evCBz9y7Ljvfxz78q2NwLtg+InUpHy14pCshNk76RkhGznuH3YlZivsRNcDTdvmAU7O7YkGsmwVEwfTXnlo=
Received: from MWHPR11MB1645.namprd11.prod.outlook.com (2603:10b6:301:b::12)
 by MWHPR1101MB2253.namprd11.prod.outlook.com (2603:10b6:301:52::17) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3195.18; Fri, 17 Jul
 2020 02:48:29 +0000
Received: from MWHPR11MB1645.namprd11.prod.outlook.com
 ([fe80::9864:e0cb:af36:6feb]) by MWHPR11MB1645.namprd11.prod.outlook.com
 ([fe80::9864:e0cb:af36:6feb%5]) with mapi id 15.20.3195.018; Fri, 17 Jul 2020
 02:48:29 +0000
From: "Tian, Kevin" <kevin.tian@intel.com>
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
 <xen-devel@lists.xenproject.org>
Subject: RE: [PATCH 1/2] VT-d: install sync_cache hook on demand
Thread-Topic: [PATCH 1/2] VT-d: install sync_cache hook on demand
Thread-Index: AQHWWo9R9JuT380chUCL6kT51/xX/6kLFAIA
Date: Fri, 17 Jul 2020 02:48:29 +0000
Message-ID: <MWHPR11MB1645176C28AD7F37615C79378C7C0@MWHPR11MB1645.namprd11.prod.outlook.com>
References: <2ad1607d-0909-f1cc-83bf-2546b28a9128@suse.com>
 <0036b69f-0d56-9ac4-1afa-06640c9007de@suse.com>
In-Reply-To: <0036b69f-0d56-9ac4-1afa-06640c9007de@suse.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
dlp-version: 11.2.0.6
dlp-product: dlpe-windows
dlp-reaction: no-action
authentication-results: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=intel.com;
x-originating-ip: [192.198.147.213]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 03c6982c-59d9-4533-8134-08d829fbe32e
x-ms-traffictypediagnostic: MWHPR1101MB2253:
x-microsoft-antispam-prvs: <MWHPR1101MB2253920F702B6D77C79C89F18C7C0@MWHPR1101MB2253.namprd11.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:9508;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info: v8XsXVOjU+qAymc52E2F39cAEnw1Haz4ovPx58yPLXZBUsTPO8tPly8QuSkgMgNN+fXBGF0AyjcU8syZia9UQ/wUsbr/W/8aqCOiAXQ83M10UkdA2S71u4w1J0O8IvJ+MTyNDdMcnOHCnuCRhpYzR0CrRvoGsge1BP/euxzd1YN+GxBr5Fw3oY8c1PFDyRBEJK6SiGXEMRmsftZYy/ltTGQMrs3dftymcKqweUEwXjPqozSPLnpj4Q+u3rk1hyKa4aGEIK7eYp/v7uq+ugYGNnODBp7onIcGNnka/aIM+t2cEsIWXtxY3LttRwg2S6EYthe1lnXFR0oDBvpzsQ8+pQ==
x-forefront-antispam-report: CIP:255.255.255.255; CTRY:; LANG:en; SCL:1; SRV:;
 IPV:NLI; SFV:NSPM; H:MWHPR11MB1645.namprd11.prod.outlook.com; PTR:; CAT:NONE;
 SFTY:;
 SFS:(4636009)(346002)(366004)(396003)(376002)(39860400002)(136003)(76116006)(8936002)(9686003)(33656002)(66446008)(8676002)(66556008)(83380400001)(66476007)(6506007)(64756008)(52536014)(7696005)(186003)(26005)(110136005)(5660300002)(71200400001)(66946007)(86362001)(55016002)(478600001)(2906002)(316002);
 DIR:OUT; SFP:1102; 
x-ms-exchange-antispam-messagedata: oB8sN9ajGtSPXR8Af/MTCA6MHGO2lCKRxXfqtx1bUNxa5afQpb6V+QM5hplyP4X15cSjcUI8yNy0u5zzPSukYZKXINs/KbEdQ/yIwySU7VFMitrniNjP5A/OLmsP4QlaucK5VCmwcBsSE6o+GCWkrBcP2CbgUfyc4Vwo/w4JPog4aTDk2L/T9CQivtIiPBD5stBuBs/GPBqYooqRqLOtJDb9f/2toAPEihtinzxCU33np59RQcBlbe5NrMEu0dlQYoZuSwZBejoGiBX5PHm5Kz8WCc5BNpZHW3eHgU86gPI+3MeLif5gStZHTrszbLegNsMcOVp3WKuNViZ5E/KPFEnWE0MqMeRjcxU8olE/pxlcE7jx0Qsc5qyF8h0P9EU/3QyV2UGmYB8imCKkvtA8oJKk2J5o+Bxewzm4MM41OZ37kxqHqtTXu0OFbNA2FPlIVFs977qZEchCXxTRX6VLFcagALZDEhvO9Z5WuLoSkl9dA69unn1BlxV2tRBWS/hC
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: MWHPR11MB1645.namprd11.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 03c6982c-59d9-4533-8134-08d829fbe32e
X-MS-Exchange-CrossTenant-originalarrivaltime: 17 Jul 2020 02:48:29.5382 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 46c98d88-e344-4ed4-8496-4ed7712e255d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: py6jFCLPmZfWWUSnEN64haaYBrwj59xG6+sMOaBx7WHWTShEsEgD6Bh/0Ug94ifXOBdMV6/859Q9CMhuy1Zo1g==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MWHPR1101MB2253
X-OriginatorOrg: intel.com
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> From: Jan Beulich <jbeulich@suse.com>
> Sent: Wednesday, July 15, 2020 6:04 PM
> 
> Instead of checking inside the hook whether any non-coherent IOMMUs are
> present, simply install the hook only when this is the case.
> 
> To prove that there are no other references to the now dynamically
> updated ops structure (and hence that its updating happens early
> enough), make it static and rename it at the same time.
> 
> Note that this change implies that sync_cache() shouldn't be called
> directly unless there are unusual circumstances, like is the case in
> alloc_pgtable_maddr(), which gets invoked too early for iommu_ops to
> be set already (and therefore we also need to be careful there to
> avoid accessing vtd_ops later on, as it lives in .init).
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Kevin Tian <kevin.tian@intel.com>

> 
> --- a/xen/drivers/passthrough/vtd/extern.h
> +++ b/xen/drivers/passthrough/vtd/extern.h
> @@ -28,7 +28,6 @@
>  struct pci_ats_dev;
>  extern bool_t rwbf_quirk;
>  extern const struct iommu_init_ops intel_iommu_init_ops;
> -extern const struct iommu_ops intel_iommu_ops;
> 
>  void print_iommu_regs(struct acpi_drhd_unit *drhd);
>  void print_vtd_entries(struct vtd_iommu *iommu, int bus, int devfn, u64
> gmfn);
> --- a/xen/drivers/passthrough/vtd/iommu.c
> +++ b/xen/drivers/passthrough/vtd/iommu.c
> @@ -59,6 +59,7 @@ bool __read_mostly iommu_snoop = true;
> 
>  int nr_iommus;
> 
> +static struct iommu_ops vtd_ops;
>  static struct tasklet vtd_fault_tasklet;
> 
>  static int setup_hwdom_device(u8 devfn, struct pci_dev *);
> @@ -146,16 +147,11 @@ static int context_get_domain_id(struct
>      return domid;
>  }
> 
> -static int iommus_incoherent;
> -
>  static void sync_cache(const void *addr, unsigned int size)
>  {
>      static unsigned long clflush_size = 0;
>      const void *end = addr + size;
> 
> -    if ( !iommus_incoherent )
> -        return;
> -
>      if ( clflush_size == 0 )
>          clflush_size = get_cache_line_size();
> 
> @@ -217,7 +213,8 @@ uint64_t alloc_pgtable_maddr(unsigned lo
>          vaddr = __map_domain_page(cur_pg);
>          memset(vaddr, 0, PAGE_SIZE);
> 
> -        sync_cache(vaddr, PAGE_SIZE);
> +        if ( (iommu_ops.init ? &iommu_ops : &vtd_ops)->sync_cache )
> +            sync_cache(vaddr, PAGE_SIZE);
>          unmap_domain_page(vaddr);
>          cur_pg++;
>      }
> @@ -1227,7 +1224,7 @@ int __init iommu_alloc(struct acpi_drhd_
>      iommu->nr_pt_levels = agaw_to_level(agaw);
> 
>      if ( !ecap_coherent(iommu->ecap) )
> -        iommus_incoherent = 1;
> +        vtd_ops.sync_cache = sync_cache;
> 
>      /* allocate domain id bitmap */
>      nr_dom = cap_ndoms(iommu->cap);
> @@ -2737,7 +2734,7 @@ static int __init intel_iommu_quarantine
>      return level ? -ENOMEM : rc;
>  }
> 
> -const struct iommu_ops __initconstrel intel_iommu_ops = {
> +static struct iommu_ops __initdata vtd_ops = {
>      .init = intel_iommu_domain_init,
>      .hwdom_init = intel_iommu_hwdom_init,
>      .quarantine_init = intel_iommu_quarantine_init,
> @@ -2768,11 +2765,10 @@ const struct iommu_ops __initconstrel in
>      .iotlb_flush_all = iommu_flush_iotlb_all,
>      .get_reserved_device_memory =
> intel_iommu_get_reserved_device_memory,
>      .dump_p2m_table = vtd_dump_p2m_table,
> -    .sync_cache = sync_cache,
>  };
> 
>  const struct iommu_init_ops __initconstrel intel_iommu_init_ops = {
> -    .ops = &intel_iommu_ops,
> +    .ops = &vtd_ops,
>      .setup = vtd_setup,
>      .supports_x2apic = intel_iommu_supports_eim,
>  };


From xen-devel-bounces@lists.xenproject.org Fri Jul 17 02:49:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jul 2020 02:49:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jwGR0-00026t-6x; Fri, 17 Jul 2020 02:49:14 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hPx8=A4=intel.com=kevin.tian@srs-us1.protection.inumbo.net>)
 id 1jwGQz-00026k-Lj
 for xen-devel@lists.xenproject.org; Fri, 17 Jul 2020 02:49:13 +0000
X-Inumbo-ID: 17b64678-c7d8-11ea-9579-12813bfff9fa
Received: from mga09.intel.com (unknown [134.134.136.24])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 17b64678-c7d8-11ea-9579-12813bfff9fa;
 Fri, 17 Jul 2020 02:49:11 +0000 (UTC)
IronPort-SDR: +v9j2Ftv0Gnvx+rSI4WTG+1sf5Wohq5SqryUN8XFzLwg8nMp92SJq2H9jbVVSLeo5EmKVoeO4n
 zIy5Jqsv5G1w==
X-IronPort-AV: E=McAfee;i="6000,8403,9684"; a="150909894"
X-IronPort-AV: E=Sophos;i="5.75,361,1589266800"; d="scan'208";a="150909894"
X-Amp-Result: SKIPPED(no attachment in message)
X-Amp-File-Uploaded: False
Received: from fmsmga005.fm.intel.com ([10.253.24.32])
 by orsmga102.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 16 Jul 2020 19:49:08 -0700
IronPort-SDR: 2OXcmu4kXRiDvZs7LK49cX/J3VfFh1LxC2zBc12YJSTerYHwnsI8IrvF2ZFmelWQxV1h/Gcfk1
 VJGZpIxOvywQ==
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="5.75,361,1589266800"; d="scan'208";a="486829054"
Received: from fmsmsx105.amr.corp.intel.com ([10.18.124.203])
 by fmsmga005.fm.intel.com with ESMTP; 16 Jul 2020 19:49:07 -0700
Received: from fmsmsx158.amr.corp.intel.com (10.18.116.75) by
 FMSMSX105.amr.corp.intel.com (10.18.124.203) with Microsoft SMTP Server (TLS)
 id 14.3.439.0; Thu, 16 Jul 2020 19:49:06 -0700
Received: from FMSEDG002.ED.cps.intel.com (10.1.192.134) by
 fmsmsx158.amr.corp.intel.com (10.18.116.75) with Microsoft SMTP Server (TLS)
 id 14.3.439.0; Thu, 16 Jul 2020 19:49:06 -0700
Received: from NAM10-DM6-obe.outbound.protection.outlook.com (104.47.58.107)
 by edgegateway.intel.com (192.55.55.69) with Microsoft SMTP Server (TLS) id
 14.3.439.0; Thu, 16 Jul 2020 19:49:05 -0700
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=It9fFztwMRTYiuKbRJ30dg7bNTHtDsI/TrcCPNW3MiIOSY5oqrSuPR7LartyLIp5q9scr8klFkqa63uvxhBbK585RWXkeljxuoTKZjyT6NIBBY96Tv8dFGovULXWAT4RcSif3p04/xlHjov0lXudSTNmqeZ5gMXT1sbK0O04FbBDoyMurrj/8csb6auyGkBgPHPFu6WGHe6B9NkFlcip1uMjdMs52s/Cidle5N4EBOuv7t/G1lVlNrSD1fi3+PAH4HLBzxe55As6zR8Epm1ZF1OTWjW4/h+pF92bGLPeNvBJYwwDrTlpo2+utQ6tJWSW0UnPY/jus06KvJ3d0WzTdw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; 
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Hk965ARW+ikK3M59/OeHcUZO/J+ppj2m64EOXbCKxxU=;
 b=C4yHCYdPh/QU37lgHd3+bDRU+ZwEfSZPCq48aSVWNiWasO0aDGf7jKOouNuioNKVJJZv/0my7mERR8fmUSYcbO7lzcttAy2EcdhMQa+kaubfNPVoD1N6EFHg9RJQ1DxEqSOGD9+9ZUKC8pbqx/0wsebhL95QfnRooJRS0MPED6yVdsXrQHQ/GrlMnbCMHHN57ZHNCx6Az4yJht5ZmwDBgy/RN17sirjwKsqY8jRjNEus218U2fzsdiGBwepE9IOw4/QDpZBiDelyRsgxkSNoA0dyQDSrp6VngFtP7zlf61/+bCIuyPZ9hrxu9x/8fFxS1dRtivty8A7n3tPR4thNTg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=intel.com; dmarc=pass action=none header.from=intel.com;
 dkim=pass header.d=intel.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=intel.onmicrosoft.com; 
 s=selector2-intel-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Hk965ARW+ikK3M59/OeHcUZO/J+ppj2m64EOXbCKxxU=;
 b=ZU/gOdOFJ+KS/fTIExX3cVd0dZj64QWAv9KyWRn6Oyzv1+JPml+PY/aeSj8SGejQ2EJ3dJtlH4SqekR4EHWicTPohresq3aC1TgJ2rNt++5INMHQ/gHolJnTCWHQhQ/qUkvWlUwEf8mnOPH6JvndS+YuZ6ikgkyHzGayx8UAoXM=
Received: from MWHPR11MB1645.namprd11.prod.outlook.com (2603:10b6:301:b::12)
 by MWHPR1101MB2253.namprd11.prod.outlook.com (2603:10b6:301:52::17) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3195.18; Fri, 17 Jul
 2020 02:49:04 +0000
Received: from MWHPR11MB1645.namprd11.prod.outlook.com
 ([fe80::9864:e0cb:af36:6feb]) by MWHPR11MB1645.namprd11.prod.outlook.com
 ([fe80::9864:e0cb:af36:6feb%5]) with mapi id 15.20.3195.018; Fri, 17 Jul 2020
 02:49:04 +0000
From: "Tian, Kevin" <kevin.tian@intel.com>
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
 <xen-devel@lists.xenproject.org>
Subject: RE: [PATCH 2/2] VT-d: use clear_page() in alloc_pgtable_maddr()
Thread-Topic: [PATCH 2/2] VT-d: use clear_page() in alloc_pgtable_maddr()
Thread-Index: AQHWWo9Tnv7rOln+0USMn9HVMeMuSqkLFGfQ
Date: Fri, 17 Jul 2020 02:49:04 +0000
Message-ID: <MWHPR11MB1645486699AEF7BF39B652518C7C0@MWHPR11MB1645.namprd11.prod.outlook.com>
References: <2ad1607d-0909-f1cc-83bf-2546b28a9128@suse.com>
 <14f8b940-252f-9837-8958-5e76e1c3f06f@suse.com>
In-Reply-To: <14f8b940-252f-9837-8958-5e76e1c3f06f@suse.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
dlp-version: 11.2.0.6
dlp-product: dlpe-windows
dlp-reaction: no-action
authentication-results: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=intel.com;
x-originating-ip: [192.198.147.213]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 1c0412e2-7897-4269-bd1b-08d829fbf7e2
x-ms-traffictypediagnostic: MWHPR1101MB2253:
x-microsoft-antispam-prvs: <MWHPR1101MB22533CB1779F0FF580D61A1A8C7C0@MWHPR1101MB2253.namprd11.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:5236;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info: wTc5ML7CHni+LSqfffsROcfjVuoUW+S9adhdaZ9axefgS+RiZzFbvqZZ7Jt8hm9hEzMPstuwvWRMnIwieR87suqXS2q+NObRZKvfSsQJ6iwyQKQBiC6SdoxubOplICmumHtQSRt9OJVebozObd6/eC95sFhL+qwgEay69td24HOTqGftapgBsSvEpqa21xl+FCcLRMwRkhGqPOia9ljna694Q48dB6oTVYQwJpXMXqbMkx67EpdGzevu7cLotVvS8X+1oJmZ5wj2Rdfke7UNnX4Ka1Twatn0Qtqs9uLZ/SuKzjIqbGUjYtPsePAd1+dONCKQiWMI5NLBa+rpR2gN9g==
x-forefront-antispam-report: CIP:255.255.255.255; CTRY:; LANG:en; SCL:1; SRV:;
 IPV:NLI; SFV:NSPM; H:MWHPR11MB1645.namprd11.prod.outlook.com; PTR:; CAT:NONE;
 SFTY:;
 SFS:(4636009)(346002)(366004)(396003)(376002)(39860400002)(136003)(76116006)(8936002)(9686003)(33656002)(66446008)(8676002)(66556008)(83380400001)(66476007)(4744005)(6506007)(64756008)(52536014)(7696005)(186003)(26005)(110136005)(5660300002)(71200400001)(66946007)(86362001)(55016002)(478600001)(2906002)(316002);
 DIR:OUT; SFP:1102; 
x-ms-exchange-antispam-messagedata: pFmsTUr/C1iaHo7cSIDS7Yr4dAvZ7iWgvJFIrdwYpVuqR7sKk0Msi8df9Dmi7E+bnT9u42ksnnwPMWV3gs+Iw7emqJiBFSajHnxRmWNhyoY1YegmOFxxkvW50NM9hzSzfqGUjbd17vsKnDsZJPaMYeEPE9Jlclf135m9veysSl5jCHEawbbA9sUcikW1QZtNuKc/2TE7YHlN1NuFjCvJlkeH/fZF71S6W6PPWCqVvFPzVPNYJV30L2okvD7vwWTXv66uTPKFFZVZdutD+WZBAWobDvY/YqxMJDxKJVcTEgpHqLp+Apfa7yvMybgTDyZ6CKXzSOOb8wgmyFJO6ycuxXZeLSO7SyLjkZDctoXbeaOFboEbDSBmWrm79NTrLpAQSsp5vfmJ5e9p5k+Qny8+ZZH3VrtfgiroZ9zgtGpK7lVs0n5Hc/thSQe2nksL55LEYoY720Iuw4F0b8FHzloqRvcVQqusnCL9okabZYggrZO7WlaYXlJAnf3YcFdRmg6T
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: MWHPR11MB1645.namprd11.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 1c0412e2-7897-4269-bd1b-08d829fbf7e2
X-MS-Exchange-CrossTenant-originalarrivaltime: 17 Jul 2020 02:49:04.2928 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 46c98d88-e344-4ed4-8496-4ed7712e255d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: 6EUJ5XUfSq6xTqBcpx470htxKZ/kPSD3VMUFs7/IyhQGRIkIg+7Lram1o73NhhCqh09SxFhir0W9iM4Qm6gIXQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MWHPR1101MB2253
X-OriginatorOrg: intel.com
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> From: Jan Beulich <jbeulich@suse.com>
> Sent: Wednesday, July 15, 2020 6:04 PM
> 
> For full pages this is (meant to be) more efficient. Also change the
> type and reduce the scope of the involved local variable.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Kevin Tian <kevin.tian@intel.com>

> 
> --- a/xen/drivers/passthrough/vtd/iommu.c
> +++ b/xen/drivers/passthrough/vtd/iommu.c
> @@ -199,7 +199,6 @@ static void sync_cache(const void *addr,
>  uint64_t alloc_pgtable_maddr(unsigned long npages, nodeid_t node)
>  {
>      struct page_info *pg, *cur_pg;
> -    u64 *vaddr;
>      unsigned int i;
> 
>      pg = alloc_domheap_pages(NULL, get_order_from_pages(npages),
> @@ -210,8 +209,9 @@ uint64_t alloc_pgtable_maddr(unsigned lo
>      cur_pg = pg;
>      for ( i = 0; i < npages; i++ )
>      {
> -        vaddr = __map_domain_page(cur_pg);
> -        memset(vaddr, 0, PAGE_SIZE);
> +        void *vaddr = __map_domain_page(cur_pg);
> +
> +        clear_page(vaddr);
> 
>          if ( (iommu_ops.init ? &iommu_ops : &vtd_ops)->sync_cache )
>              sync_cache(vaddr, PAGE_SIZE);


From xen-devel-bounces@lists.xenproject.org Fri Jul 17 04:41:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jul 2020 04:41:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jwIAu-0003dP-LV; Fri, 17 Jul 2020 04:40:44 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=sKPa=A4=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jwIAt-0003dK-Ey
 for xen-devel@lists.xenproject.org; Fri, 17 Jul 2020 04:40:43 +0000
X-Inumbo-ID: aa86da9e-c7e7-11ea-9585-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id aa86da9e-c7e7-11ea-9585-12813bfff9fa;
 Fri, 17 Jul 2020 04:40:40 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=uwYMW0aKR1nau2Uk5Sd7Rvis4WASyZUJ1P2zwi7ENrQ=; b=kakaGV0k5krH43mSpf4MxemCS
 mC+OBcXs93lgdXxkfQIe0J+8Gz4XHtAIJuk+goHLZTW6yB+OzWUknXrjScTOXERkyKRQoFLstBILz
 lBCTZ0o/PfzcsnJWByCQP7S/WHF/nUQClbegu4RcwDMdWdWYd7PtNXWc7iRlM3N7RmjT4=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jwIAp-0003BV-7U; Fri, 17 Jul 2020 04:40:39 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jwIAo-0000PI-VQ; Fri, 17 Jul 2020 04:40:39 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jwIAo-00009J-Up; Fri, 17 Jul 2020 04:40:38 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151942-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 151942: tolerable FAIL
X-Osstest-Failures: xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=f8fe3c07363d11fc81d8e7382dbcaa357c861569
X-Osstest-Versions-That: xen=f8fe3c07363d11fc81d8e7382dbcaa357c861569
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 17 Jul 2020 04:40:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151942 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151942/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 151926
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 151926
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 151926
 test-armhf-armhf-xl-rtds     16 guest-start/debian.repeat    fail  like 151926
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 151926
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 151926
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 151926
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 151926
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 151926
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop             fail like 151926
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  f8fe3c07363d11fc81d8e7382dbcaa357c861569
baseline version:
 xen                  f8fe3c07363d11fc81d8e7382dbcaa357c861569

Last test of basis   151942  2020-07-16 11:32:41 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Fri Jul 17 05:18:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jul 2020 05:18:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jwIlZ-0006nj-Lj; Fri, 17 Jul 2020 05:18:37 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=sKPa=A4=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jwIlY-0006ne-J8
 for xen-devel@lists.xenproject.org; Fri, 17 Jul 2020 05:18:36 +0000
X-Inumbo-ID: f4f5151e-c7ec-11ea-958a-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f4f5151e-c7ec-11ea-958a-12813bfff9fa;
 Fri, 17 Jul 2020 05:18:32 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=ii/Oogwn5FuGGClEL/50opsfGSsDdd55F54LSvsadhM=; b=GHGubMQKqvv/c7SGbCzTnAcX+
 dVrwatiPdiGDDBwnrHdFE0msXn2pQZHL+9dhhwbz0NJDv4UYEhRP8k1agTIw0GKj7H1uOTySkVwgl
 9mlkyk9xmoIhJHSvQh7QMvHvsqsWvelLN1HjmSXTf84IDRq2glL3m7b8iRcZfMIn/0WfI=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jwIlT-0004HU-Ro; Fri, 17 Jul 2020 05:18:31 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jwIlT-0002u3-IN; Fri, 17 Jul 2020 05:18:31 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jwIlT-0005uL-Hf; Fri, 17 Jul 2020 05:18:31 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151946-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 151946: all pass
X-Osstest-Versions-This: ovmf=21a23e6966c2eb597a8db98d6837a4c01b3cad4a
X-Osstest-Versions-That: ovmf=d9269d69138860edb1ec9796ed48549dc6ba5735
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 17 Jul 2020 05:18:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151946 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151946/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 21a23e6966c2eb597a8db98d6837a4c01b3cad4a
baseline version:
 ovmf                 d9269d69138860edb1ec9796ed48549dc6ba5735

Last test of basis   151937  2020-07-16 04:25:27 Z    1 days
Testing same since   151946  2020-07-16 17:27:21 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Dandan Bi <dandan.bi@intel.com>
  Vin Xue <vinxue@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
 ! [rejected]              21a23e6966c2eb597a8db98d6837a4c01b3cad4a -> xen-tested-master (needs force)
error: failed to push some refs to 'osstest@xenbits.xen.org:/home/xen/git/osstest/ovmf.git'
hint: You cannot update a remote ref that points at a non-commit object,
hint: or update a remote ref to make it point at a non-commit object,
hint: without using the '--force' option.
------------------------------------------------------------
commit 21a23e6966c2eb597a8db98d6837a4c01b3cad4a
Author: Vin Xue <vinxue@outlook.com>
Date:   Tue Jul 14 10:09:35 2020 +0800

    SignedCapsulePkg: Address NULL pointer dereference case.
    
    The original code calls GetFmpImageDescriptors() to obtain the
    OriginalFmpImageInfoBuf pointer, which is NULL on failure.
    OriginalFmpImageInfoBuf should not be NULL here, so the reported
    NULL pointer dereference should be a false positive.
    
    Cc: Jiewen Yao <jiewen.yao@intel.com>
    Cc: Chao Zhang <chao.b.zhang@intel.com>
    Signed-off-by: Vin Xue <vinxue@outlook.com>
    Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>
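
    The guard pattern the commit describes can be sketched in plain C.
    This is a hypothetical minimal analog: ImageDescriptor and
    get_image_descriptors() are stand-ins invented for illustration, not
    the real EDK II types or the actual GetFmpImageDescriptors() API.

    ```c
    #include <assert.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Hypothetical stand-in for an FMP image descriptor. */
    typedef struct {
        unsigned image_index;
    } ImageDescriptor;

    /* Stand-in for a GetFmpImageDescriptors()-style helper: returns a
     * freshly allocated descriptor buffer, or NULL on failure. */
    static ImageDescriptor *get_image_descriptors(int fail)
    {
        if (fail) {
            return NULL;
        }
        ImageDescriptor *buf = malloc(sizeof(*buf));
        if (buf) {
            buf->image_index = 1;
        }
        return buf;
    }

    int main(void)
    {
        ImageDescriptor *buf = get_image_descriptors(1);
        /* Guard the dereference: without this check a static analyzer
         * flags a potential NULL pointer dereference, even when the
         * failure path is believed unreachable in practice. */
        if (buf == NULL) {
            printf("descriptor lookup failed\n");
            return 0;
        }
        printf("image index %u\n", buf->image_index);
        free(buf);
        return 0;
    }
    ```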

commit 1da651cdb77f42787e55da5a8f85e61d5258824f
Author: Dandan Bi <dandan.bi@intel.com>
Date:   Fri Nov 1 14:41:23 2019 +0800

    MdeModulePkg/DisplayEngine: Add Debug message to show mismatch menu info
    
    REF: https://bugzilla.tianocore.org/show_bug.cgi?id=2326
    
    Currently, when a mismatch is met for a one-of or ordered-list
    menu, only a popup window indicates the mismatch, with no further
    information for debugging. This patch adds debug messages with
    details about the mismatched menu, which is helpful for debugging.
    
    Cc: Liming Gao <liming.gao@intel.com>
    Cc: Eric Dong <eric.dong@intel.com>
    Cc: Laszlo Ersek <lersek@redhat.com>
    Signed-off-by: Dandan Bi <dandan.bi@intel.com>
    Reviewed-by: Eric Dong <eric.dong@intel.com>


From xen-devel-bounces@lists.xenproject.org Fri Jul 17 05:33:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jul 2020 05:33:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jwIze-0008Qk-0P; Fri, 17 Jul 2020 05:33:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=aSkl=A4=redhat.com=armbru@srs-us1.protection.inumbo.net>)
 id 1jwIzc-0008Qf-63
 for xen-devel@lists.xenproject.org; Fri, 17 Jul 2020 05:33:08 +0000
X-Inumbo-ID: fdb7594e-c7ee-11ea-bb8b-bc764e2007e4
Received: from us-smtp-delivery-1.mimecast.com (unknown [205.139.110.120])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id fdb7594e-c7ee-11ea-bb8b-bc764e2007e4;
 Fri, 17 Jul 2020 05:33:06 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
 s=mimecast20190719; t=1594963985;
 h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
 content-transfer-encoding:content-transfer-encoding:
 in-reply-to:in-reply-to:references:references;
 bh=M9kggi+ZEqEOoDN9bqFMB91/EhttHJqL/akxG2l4I1M=;
 b=AcsP8NvU0jyNVRYZAv0ARjrbRaD2hDvGPIBQwRklyTE2R1/7wqT8Awup30W/rnD8wepY8x
 cg5K0sVswF5yHfUDB9EKjaKINzhr/i3Q84aiZMe/O29Lj5ghy3TouwMVT4ShqkMaieLnKm
 WVdClHeCwkOxqxBc+r3hTfjhA1XwMmE=
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-408-AVt7XhWCN0ClwNhgV-cpdw-1; Fri, 17 Jul 2020 01:33:02 -0400
X-MC-Unique: AVt7XhWCN0ClwNhgV-cpdw-1
Received: from smtp.corp.redhat.com (int-mx01.intmail.prod.int.phx2.redhat.com
 [10.5.11.11])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 7E70E1080;
 Fri, 17 Jul 2020 05:33:00 +0000 (UTC)
Received: from blackfin.pond.sub.org (ovpn-112-143.ams2.redhat.com
 [10.36.112.143])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id 6F15F724A9;
 Fri, 17 Jul 2020 05:32:57 +0000 (UTC)
Received: by blackfin.pond.sub.org (Postfix, from userid 1000)
 id F049611386A6; Fri, 17 Jul 2020 07:32:55 +0200 (CEST)
From: Markus Armbruster <armbru@redhat.com>
To: Daniel P. =?utf-8?Q?Berrang=C3=A9?= <berrange@redhat.com>
Subject: Re: [RFC PATCH-for-5.2 v2 2/2] block/block-backend: Let
 blk_attach_dev() provide helpful error
References: <20200716123704.6557-1-f4bug@amsat.org>
 <20200716123704.6557-3-f4bug@amsat.org>
 <20200716130440.GT227735@redhat.com>
Date: Fri, 17 Jul 2020 07:32:55 +0200
In-Reply-To: <20200716130440.GT227735@redhat.com> ("Daniel P. =?utf-8?Q?Be?=
 =?utf-8?Q?rrang=C3=A9=22's?=
 message of "Thu, 16 Jul 2020 14:04:40 +0100")
Message-ID: <87o8oek8oo.fsf@dusky.pond.sub.org>
User-Agent: Gnus/5.13 (Gnus v5.13) Emacs/26.3 (gnu/linux)
MIME-Version: 1.0
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.11
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Fam Zheng <fam@euphon.net>, Kevin Wolf <kwolf@redhat.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Eduardo Habkost <ehabkost@redhat.com>, qemu-block@nongnu.org,
 Paul Durrant <paul@xen.org>,
 Philippe =?utf-8?Q?Mathieu-Daud=C3=A9?= <f4bug@amsat.org>,
 qemu-devel@nongnu.org, xen-devel@lists.xenproject.org,
 Anthony Perard <anthony.perard@citrix.com>,
 Paolo Bonzini <pbonzini@redhat.com>, Max Reitz <mreitz@redhat.com>,
 John Snow <jsnow@redhat.com>, Laurent Vivier <laurent@vivier.eu>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Daniel P. Berrangé <berrange@redhat.com> writes:

> On Thu, Jul 16, 2020 at 02:37:04PM +0200, Philippe Mathieu-Daudé wrote:
>> Let blk_attach_dev() take an Error* object to return helpful
>> information. Adapt the callers.
>>
>>   $ qemu-system-arm -M n800
>>   qemu-system-arm: sd_init failed: cannot attach blk 'sd0' to device 'sd-card'
>>   blk 'sd0' is already attached by device 'omap2-mmc'
>>   Drive 'sd0' is already in use because it has been automatically connected to another device
>>   (do you need 'if=none' in the drive options?)
>>
>> Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
>> ---
>> v2: Rebased after 668f62ec62 ("error: Eliminate error_propagate()")
>> ---
>>  include/sysemu/block-backend.h   |  2 +-
>>  block/block-backend.c            | 11 ++++++++++-
>>  hw/block/fdc.c                   |  4 +---
>>  hw/block/swim.c                  |  4 +---
>>  hw/block/xen-block.c             |  5 +++--
>>  hw/core/qdev-properties-system.c | 16 +++++++++-------
>>  hw/ide/qdev.c                    |  4 +---
>>  hw/scsi/scsi-disk.c              |  4 +---
>>  8 files changed, 27 insertions(+), 23 deletions(-)
>>
>> diff --git a/include/sysemu/block-backend.h b/include/sysemu/block-backend.h
>> index 8203d7f6f9..118fbad0b4 100644
>> --- a/include/sysemu/block-backend.h
>> +++ b/include/sysemu/block-backend.h
>> @@ -113,7 +113,7 @@ BlockDeviceIoStatus blk_iostatus(const BlockBackend *blk);
>>  void blk_iostatus_disable(BlockBackend *blk);
>>  void blk_iostatus_reset(BlockBackend *blk);
>>  void blk_iostatus_set_err(BlockBackend *blk, int error);
>> -int blk_attach_dev(BlockBackend *blk, DeviceState *dev);
>> +int blk_attach_dev(BlockBackend *blk, DeviceState *dev, Error **errp);
>>  void blk_detach_dev(BlockBackend *blk, DeviceState *dev);
>>  DeviceState *blk_get_attached_dev(BlockBackend *blk);
>>  char *blk_get_attached_dev_id(BlockBackend *blk);
>> diff --git a/block/block-backend.c b/block/block-backend.c
>> index 63ff940ef9..b7be0a4619 100644
>> --- a/block/block-backend.c
>> +++ b/block/block-backend.c
>> @@ -884,12 +884,21 @@ void blk_get_perm(BlockBackend *blk, uint64_t *perm, uint64_t *shared_perm)
>>
>>  /*
>>   * Attach device model @dev to @blk.
>> + *
>> + * @blk: Block backend
>> + * @dev: Device to attach the block backend to
>> + * @errp: pointer to NULL initialized error object
>> + *
>>   * Return 0 on success, -EBUSY when a device model is attached already.
>>   */
>> -int blk_attach_dev(BlockBackend *blk, DeviceState *dev)
>> +int blk_attach_dev(BlockBackend *blk, DeviceState *dev, Error **errp)
>>  {
>>      trace_blk_attach_dev(blk_name(blk), object_get_typename(OBJECT(dev)));
>>      if (blk->dev) {
>> +        error_setg(errp, "cannot attach blk '%s' to device '%s'",
>> +                   blk_name(blk), object_get_typename(OBJECT(dev)));
>> +        error_append_hint(errp, "blk '%s' is already attached by device '%s'\n",
>> +                          blk_name(blk), object_get_typename(OBJECT(blk->dev)));
>
> I would have a preference for expanding the main error message and not
> using a hint.  Any hint is completely thrown away when using QMP :-(

Hints work best in cases like

    error message
    hint suggesting things to try to fix it, in CLI syntax

    error message rejecting a configuration value
    hint listing possible values, in CLI syntax

Why "in CLI syntax"?  Well, we need to use *some* syntax to be useful.
Hints have always been phrased for the CLI, simply because the hint
feature grew out of my disgust over the loss of lovingly written CLI
hints in the conversion to Error.

Hints in CLI syntax would be misleading in QMP.  We never extended QMP
to transport hints.

Hints may tempt you in a case like

    error message that is painfully long, because it really tries hard to explain, which is laudable in theory, but sadly illegible in practice; also, interminable sentences, meh, this is an error message, not a Joyce novel

to get something like

    terse error message
    Explanation that is rather long, because it really tries hard to
    explain.  It can be multiple sentences, and lines are wrapped at a
    reasonable length.

Comes out okay in the CLI, but the explanation is lost in QMP.

I don't have a good solution to offer for errors that genuinely need
explaining.



From xen-devel-bounces@lists.xenproject.org Fri Jul 17 06:22:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jul 2020 06:22:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jwJkw-0004FR-WF; Fri, 17 Jul 2020 06:22:03 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=sKPa=A4=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jwJkv-0004F7-TM
 for xen-devel@lists.xenproject.org; Fri, 17 Jul 2020 06:22:01 +0000
X-Inumbo-ID: cf825946-c7f5-11ea-9590-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id cf825946-c7f5-11ea-9590-12813bfff9fa;
 Fri, 17 Jul 2020 06:21:55 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=QEGN3qajBcHZCDaev7mGGDTe1UefqtsPt85GvZ15ozk=; b=NEqAqXs3LpGaJp9kGOKbKKK32
 Js5rA31v4xh4xgM15rVDXDV0K6U2Fu6vKI6/zNcqx6q1QyXo5rVw73eu4Av+TulwrHFewd9FZc598
 rm6iQmSkDcjfQvDGTGnzH1n3+UhUt9eZcYxWCh6cVNUWRp4mfVW/yei3oTb3uPrF8E93s=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jwJko-0005e0-RH; Fri, 17 Jul 2020 06:21:54 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jwJko-00055i-HQ; Fri, 17 Jul 2020 06:21:54 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jwJko-0002MC-Gj; Fri, 17 Jul 2020 06:21:54 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151947-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [seabios test] 151947: tolerable FAIL - PUSHED
X-Osstest-Failures: seabios:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 seabios:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 seabios:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 seabios:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 seabios:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 seabios:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 seabios:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
X-Osstest-Versions-This: seabios=6ada2285d9918859699c92e09540e023e0a16054
X-Osstest-Versions-That: seabios=88ab0c15525ced2eefe39220742efe4769089ad8
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 17 Jul 2020 06:21:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151947 seabios real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151947/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 151387
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 151387
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 151387
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop             fail like 151387
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass

version targeted for testing:
 seabios              6ada2285d9918859699c92e09540e023e0a16054
baseline version:
 seabios              88ab0c15525ced2eefe39220742efe4769089ad8

Last test of basis   151387  2020-06-26 18:45:26 Z   20 days
Testing same since   151947  2020-07-16 17:27:32 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Kevin O'Connor <kevin@koconnor.net>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/seabios.git
   88ab0c1..6ada228  6ada2285d9918859699c92e09540e023e0a16054 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Fri Jul 17 06:33:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jul 2020 06:33:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jwJvo-0005Az-58; Fri, 17 Jul 2020 06:33:16 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=sKPa=A4=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jwJvn-0005Au-6s
 for xen-devel@lists.xenproject.org; Fri, 17 Jul 2020 06:33:15 +0000
X-Inumbo-ID: 62b2fb98-c7f7-11ea-9592-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 62b2fb98-c7f7-11ea-9592-12813bfff9fa;
 Fri, 17 Jul 2020 06:33:11 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Message-Id:Subject:To:Sender:
 Reply-To:Cc:MIME-Version:Content-Type:Content-Transfer-Encoding:Content-ID:
 Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc
 :Resent-Message-ID:In-Reply-To:References:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=bxvLCeawTEaXq6BucfhAqFe9/AJyJBpM+VPshB4VtEc=; b=LXgEOGupBRF6ffXBcdD9D2oESi
 VefcDhv4aPyHiqRomHCXzbITutMyP35UCVlmFv73Ma1NMVSf45V89jtoJ/jxTNtm4y0tPm0pFF0Os
 dOe6cYced8aTaMYUAyrS9w1IeBz0r0baYbbzcsYxuZG9cBdvHE2tTReursZkEu6HC5eI=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jwJvj-0005rh-6d; Fri, 17 Jul 2020 06:33:11 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jwJvi-0005Sa-V8; Fri, 17 Jul 2020 06:33:10 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jwJvi-0008Dl-UR; Fri, 17 Jul 2020 06:33:10 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Subject: [qemu-mainline bisection] complete
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm
Message-Id: <E1jwJvi-0008Dl-UR@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 17 Jul 2020 06:33:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

branch xen-unstable
xenbranch xen-unstable
job test-amd64-i386-xl-qemuu-debianhvm-i386-xsm
testid debian-hvm-install

Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://git.qemu.org/qemu.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  qemuu git://git.qemu.org/qemu.git
  Bug introduced:  da278d58a092bfcc4e36f1e274229c1468dea731
  Bug not present: 23accdf162dcccb9fec9585a64ad01a87b13da5c
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/151958/


  commit da278d58a092bfcc4e36f1e274229c1468dea731
  Author: Philippe Mathieu-Daudé <philmd@redhat.com>
  Date:   Fri May 8 12:02:22 2020 +0200
  
      accel: Move Xen accelerator code under accel/xen/
      
      This code is not related to hardware emulation.
      Move it under accel/ with the other hypervisors.
      
      Reviewed-by: Paul Durrant <paul@xen.org>
      Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
      Message-Id: <20200508100222.7112-1-philmd@redhat.com>
      Reviewed-by: Juan Quintela <quintela@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/qemu-mainline/test-amd64-i386-xl-qemuu-debianhvm-i386-xsm.debian-hvm-install.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/qemu-mainline/test-amd64-i386-xl-qemuu-debianhvm-i386-xsm.debian-hvm-install --summary-out=tmp/151958.bisection-summary --basis-template=151065 --blessings=real,real-bisect qemu-mainline test-amd64-i386-xl-qemuu-debianhvm-i386-xsm debian-hvm-install
Searching for failure / basis pass:
 151934 fail [host=albana1] / 151065 [host=huxelrebe1] 151047 [host=albana0] 150970 [host=huxelrebe0] 150930 [host=elbling1] 150916 [host=elbling0] 150895 [host=debina0] 150831 [host=chardonnay0] 150694 [host=debina1] 150631 [host=pinot0] 150608 [host=fiano1] 150593 ok.
Failure / basis pass flights: 151934 / 150593
(tree with no url: minios)
Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://git.qemu.org/qemu.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git
Latest c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c7195b9ec3c5f8f40119c96ac4a0ab1e8cebe9dc 3c659044118e34603161457db9934a34f816d78b 8746309137ba470d1b2e8f5ce86ac228625db940 88ab0c15525ced2eefe39220742efe4769089ad8 165f3afbfc3db70fcfdccad07085cde0a03c858b
Basis pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 568eee7cf319fa95183c8d3b5e8dcf6e078ab8b3 3c659044118e34603161457db9934a34f816d78b 4ec2a1f53e8aaa22924614b64dde97321126943e 2e3de6253422112ae43e608661ba94ea6b345694 1497e78068421d83956f8e82fb6e1bf1fc3b1199
Generating revisions with ./adhoc-revtuple-generator  git://xenbits.xen.org/linux-pvops.git#c3038e718a19fc596f7b1baba0f83d5146dc7784-c3038e718a19fc596f7b1baba0f83d5146dc7784 git://xenbits.xen.org/osstest/linux-firmware.git#c530a75c1e6a472b0eb9558310b518f0dfcd8860-c530a75c1e6a472b0eb9558310b518f0dfcd8860 git://xenbits.xen.org/osstest/ovmf.git#568eee7cf319fa95183c8d3b5e8dcf6e078ab8b3-c7195b9ec3c5f8f40119c96ac4a0ab1e8cebe9dc git://xenbits.xen.org/qemu-xen-traditional.git#3c659044118e34603161457db99\
 34a34f816d78b-3c659044118e34603161457db9934a34f816d78b git://git.qemu.org/qemu.git#4ec2a1f53e8aaa22924614b64dde97321126943e-8746309137ba470d1b2e8f5ce86ac228625db940 git://xenbits.xen.org/osstest/seabios.git#2e3de6253422112ae43e608661ba94ea6b345694-88ab0c15525ced2eefe39220742efe4769089ad8 git://xenbits.xen.org/xen.git#1497e78068421d83956f8e82fb6e1bf1fc3b1199-165f3afbfc3db70fcfdccad07085cde0a03c858b
From git://cache:9419/git://xenbits.xen.org/osstest/seabios
   88ab0c1..6ada228  xen-tested-master -> origin/xen-tested-master
Use of uninitialized value $parents in array dereference at ./adhoc-revtuple-generator line 465.
Use of uninitialized value in concatenation (.) or string at ./adhoc-revtuple-generator line 465.
Use of uninitialized value $parents in array dereference at ./adhoc-revtuple-generator line 465.
Use of uninitialized value in concatenation (.) or string at ./adhoc-revtuple-generator line 465.
Use of uninitialized value $parents in array dereference at ./adhoc-revtuple-generator line 465.
Use of uninitialized value in concatenation (.) or string at ./adhoc-revtuple-generator line 465.
Use of uninitialized value $parents in array dereference at ./adhoc-revtuple-generator line 465.
Use of uninitialized value in concatenation (.) or string at ./adhoc-revtuple-generator line 465.
Loaded 55650 nodes in revision graph
Searching for test results:
 150585 [host=pinot1]
 150593 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 568eee7cf319fa95183c8d3b5e8dcf6e078ab8b3 3c659044118e34603161457db9934a34f816d78b 4ec2a1f53e8aaa22924614b64dde97321126943e 2e3de6253422112ae43e608661ba94ea6b345694 1497e78068421d83956f8e82fb6e1bf1fc3b1199
 150631 [host=pinot0]
 150608 [host=fiano1]
 150694 [host=debina1]
 150831 [host=chardonnay0]
 150909 []
 150930 [host=elbling1]
 150916 [host=elbling0]
 150895 [host=debina0]
 150899 []
 150970 [host=huxelrebe0]
 151047 [host=albana0]
 151101 fail irrelevant
 151065 [host=huxelrebe1]
 151149 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9af1064995d479df92e399a682ba7e4b2fc78415 3c659044118e34603161457db9934a34f816d78b 7d3660e79830a069f1848bb4fa1cdf8f666424fb 2e3de6253422112ae43e608661ba94ea6b345694 b91825f628c9a62cf2a3a0d972ea81484a8b7fce
 151221 fail irrelevant
 151175 fail irrelevant
 151241 fail irrelevant
 151286 fail irrelevant
 151269 fail irrelevant
 151328 fail irrelevant
 151304 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 322969adf1fb3d6cfbd613f35121315715aff2ed 3c659044118e34603161457db9934a34f816d78b 171199f56f5f9bdf1e5d670d09ef1351d8f01bae 2e3de6253422112ae43e608661ba94ea6b345694 fde76f895d0aa817a1207d844d793239c6639bc6
 151377 fail irrelevant
 151353 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1a992030522622c42aa063788b3276789c56c1e1 3c659044118e34603161457db9934a34f816d78b d4b78317b7cf8c0c635b70086503813f79ff21ec 2e3de6253422112ae43e608661ba94ea6b345694 fde76f895d0aa817a1207d844d793239c6639bc6
 151399 fail irrelevant
 151414 fail irrelevant
 151435 fail irrelevant
 151459 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 0f01cec52f4794777feb067e4fa0bfcedfdc124e 3c659044118e34603161457db9934a34f816d78b e7651153a8801dad6805d450ea8bef9b46c1adf5 88ab0c15525ced2eefe39220742efe4769089ad8 88cfd062e8318dfeb67c7d2eb50b6cd224b0738a
 151471 fail irrelevant
 151485 fail irrelevant
 151500 fail irrelevant
 151518 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 00217f1919270007d7a911f89b32e39b9dcaa907 3c659044118e34603161457db9934a34f816d78b fc1bff958998910ec8d25db86cd2f53ff125f7ab 88ab0c15525ced2eefe39220742efe4769089ad8 23ca7ec0ba620db52a646d80e22f9703a6589f66
 151547 fail irrelevant
 151598 fail irrelevant
 151577 fail irrelevant
 151622 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 627d1d6693b0594d257dbe1a3363a8d4bd4d8307 3c659044118e34603161457db9934a34f816d78b 7b7515702012219410802a168ae4aa45b72a44df 88ab0c15525ced2eefe39220742efe4769089ad8 be63d9d47f571a60d70f8fb630c03871312d9655
 151656 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 627d1d6693b0594d257dbe1a3363a8d4bd4d8307 3c659044118e34603161457db9934a34f816d78b eb6490f544388dd24c0d054a96dd304bc7284450 88ab0c15525ced2eefe39220742efe4769089ad8 be63d9d47f571a60d70f8fb630c03871312d9655
 151634 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 627d1d6693b0594d257dbe1a3363a8d4bd4d8307 3c659044118e34603161457db9934a34f816d78b eb6490f544388dd24c0d054a96dd304bc7284450 88ab0c15525ced2eefe39220742efe4769089ad8 be63d9d47f571a60d70f8fb630c03871312d9655
 151645 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 627d1d6693b0594d257dbe1a3363a8d4bd4d8307 3c659044118e34603161457db9934a34f816d78b eb6490f544388dd24c0d054a96dd304bc7284450 88ab0c15525ced2eefe39220742efe4769089ad8 be63d9d47f571a60d70f8fb630c03871312d9655
 151669 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 627d1d6693b0594d257dbe1a3363a8d4bd4d8307 3c659044118e34603161457db9934a34f816d78b eb6490f544388dd24c0d054a96dd304bc7284450 88ab0c15525ced2eefe39220742efe4769089ad8 be63d9d47f571a60d70f8fb630c03871312d9655
 151685 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 627d1d6693b0594d257dbe1a3363a8d4bd4d8307 3c659044118e34603161457db9934a34f816d78b eb6490f544388dd24c0d054a96dd304bc7284450 88ab0c15525ced2eefe39220742efe4769089ad8 be63d9d47f571a60d70f8fb630c03871312d9655
 151704 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 627d1d6693b0594d257dbe1a3363a8d4bd4d8307 3c659044118e34603161457db9934a34f816d78b eb6490f544388dd24c0d054a96dd304bc7284450 88ab0c15525ced2eefe39220742efe4769089ad8 be63d9d47f571a60d70f8fb630c03871312d9655
 151778 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 bdafda8c457eb90c65f37026589b54258300f05c 3c659044118e34603161457db9934a34f816d78b aff2caf6b3fbab1062e117a47b66d27f7fd2f272 88ab0c15525ced2eefe39220742efe4769089ad8 3fdc211b01b29f252166937238efe02d15cb5780
 151721 fail irrelevant
 151763 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 bdafda8c457eb90c65f37026589b54258300f05c 3c659044118e34603161457db9934a34f816d78b 48f22ad04ead83e61b4b35871ec6f6109779b791 88ab0c15525ced2eefe39220742efe4769089ad8 3fdc211b01b29f252166937238efe02d15cb5780
 151744 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 bdafda8c457eb90c65f37026589b54258300f05c 3c659044118e34603161457db9934a34f816d78b 8796c64ecdfd34be394ea277aaaaa53df0c76996 88ab0c15525ced2eefe39220742efe4769089ad8 3fdc211b01b29f252166937238efe02d15cb5780
 151804 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 bdafda8c457eb90c65f37026589b54258300f05c 3c659044118e34603161457db9934a34f816d78b 45db94cc90c286a9965a285ba19450f448760a09 88ab0c15525ced2eefe39220742efe4769089ad8 3fdc211b01b29f252166937238efe02d15cb5780
 151816 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 bdafda8c457eb90c65f37026589b54258300f05c 3c659044118e34603161457db9934a34f816d78b 45db94cc90c286a9965a285ba19450f448760a09 88ab0c15525ced2eefe39220742efe4769089ad8 3fdc211b01b29f252166937238efe02d15cb5780
 151833 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f45e3a4afa65a45ea1a956a7c5e7410ff40190d1 3c659044118e34603161457db9934a34f816d78b 827937158b72ce2265841ff528bba3c44a1bfbc8 88ab0c15525ced2eefe39220742efe4769089ad8 02d69864b51a4302a148c28d6d391238a6778b4b
 151909 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 6ff7c838d09224dd4e4c9b5b93152d8db1b19740 3c659044118e34603161457db9934a34f816d78b 86e8c353f705f14f2f2fd7a6195cefa431aa24d9 2e3de6253422112ae43e608661ba94ea6b345694 058023b343d4e366864831db46e9b438e9e3a178
 151855 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f45e3a4afa65a45ea1a956a7c5e7410ff40190d1 3c659044118e34603161457db9934a34f816d78b d34498309cff7560ac90c422c56e3137e6a64b19 88ab0c15525ced2eefe39220742efe4769089ad8 02d69864b51a4302a148c28d6d391238a6778b4b
 151841 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f45e3a4afa65a45ea1a956a7c5e7410ff40190d1 3c659044118e34603161457db9934a34f816d78b 2033cc6efa98b831d7839e367aa7d5aa74d0750f 88ab0c15525ced2eefe39220742efe4769089ad8 02d69864b51a4302a148c28d6d391238a6778b4b
 151925 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 14c7ed8b51f60097ad771277da69f74b22a7a759 3c659044118e34603161457db9934a34f816d78b af509738f8e4400c26d321abeac924efb04fbfa0 2e3de6253422112ae43e608661ba94ea6b345694 6a49b9a7920c82015381740905582b666160d955
 151849 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f45e3a4afa65a45ea1a956a7c5e7410ff40190d1 3c659044118e34603161457db9934a34f816d78b d34498309cff7560ac90c422c56e3137e6a64b19 88ab0c15525ced2eefe39220742efe4769089ad8 02d69864b51a4302a148c28d6d391238a6778b4b
 151917 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 bb78cfbec07eda45118b630a09b0af549b43a135 3c659044118e34603161457db9934a34f816d78b fe0fe4735e798578097758781166cc221319b93d 2e3de6253422112ae43e608661ba94ea6b345694 d9f58cd54fe2f05e1f05e2fe254684bd1840de8e
 151901 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 568eee7cf319fa95183c8d3b5e8dcf6e078ab8b3 3c659044118e34603161457db9934a34f816d78b 4ec2a1f53e8aaa22924614b64dde97321126943e 2e3de6253422112ae43e608661ba94ea6b345694 1497e78068421d83956f8e82fb6e1bf1fc3b1199
 151911 blocked c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 567bc4b4ae8a975791382dd30ac413bc0d3ce88c 3c659044118e34603161457db9934a34f816d78b 6345d7e2aeb6f7bbaa9c1e7e94e21fccf9453c70 2e3de6253422112ae43e608661ba94ea6b345694 2995d0afdf2d3fb44d07eada088db3613741db1e
 151902 fail irrelevant
 151874 fail irrelevant
 151895 fail irrelevant
 151904 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3ee4f6cb360a877d171f2f9bb76b0d46d2cfa985 3c659044118e34603161457db9934a34f816d78b 9f1f264edbdf5516d6f208497310b3eedbc7b74c 2e3de6253422112ae43e608661ba94ea6b345694 2995d0afdf2d3fb44d07eada088db3613741db1e
 151920 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 568eee7cf319fa95183c8d3b5e8dcf6e078ab8b3 3c659044118e34603161457db9934a34f816d78b 6bb228190ef0b45669d285114cf8a280c55f4b39 2e3de6253422112ae43e608661ba94ea6b345694 ad33a573c009d72466432b41ba0591c64e819c19
 151912 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 6ff7c838d09224dd4e4c9b5b93152d8db1b19740 3c659044118e34603161457db9934a34f816d78b 49ee11555262a256afec592dfed7c5902d5eefd2 2e3de6253422112ae43e608661ba94ea6b345694 726c78d14dfe6ec76f5e4c7756821a91f0a04b34
 151918 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ca407c7246bf405da6d9b1b9d93e5e7f17b4b1f9 3c659044118e34603161457db9934a34f816d78b 250b1da35d579f42319af234f36207902ca4baa4 2e3de6253422112ae43e608661ba94ea6b345694 dde6174ada5280cd9a6396e3b12606360a0d29a3
 151905 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 e1d24410da356731da70b3334f86343e11e207d2 3c659044118e34603161457db9934a34f816d78b 470dd165d152ff7ceac61c7b71c2b89220b3aad7 2e3de6253422112ae43e608661ba94ea6b345694 6a49b9a7920c82015381740905582b666160d955
 151906 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 567bc4b4ae8a975791382dd30ac413bc0d3ce88c 3c659044118e34603161457db9934a34f816d78b eea8f5df4ecc607d64f091b8d916fcc11a697541 2e3de6253422112ae43e608661ba94ea6b345694 2995d0afdf2d3fb44d07eada088db3613741db1e
 151914 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 256c4470f86e53661c070f8c64a1052e975f9ef0 3c659044118e34603161457db9934a34f816d78b c920fdba39480989cb5f1af3cc63acccef021b54 88ab0c15525ced2eefe39220742efe4769089ad8 165f3afbfc3db70fcfdccad07085cde0a03c858b
 151913 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8035edbe12f0f2a58e8fa9b06d05c8ee1c69ffae 3c659044118e34603161457db9934a34f816d78b 5d2f557b47dfbf8f23277a5bdd8473d4607c681a 2e3de6253422112ae43e608661ba94ea6b345694 51ca66c37371b10b378513af126646de22eddb17
 151919 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ca407c7246bf405da6d9b1b9d93e5e7f17b4b1f9 3c659044118e34603161457db9934a34f816d78b 71b04329c4f7d5824a289ca5225e1883a278cf3b 2e3de6253422112ae43e608661ba94ea6b345694 e181db8ba4e0797b8f9b55996adfa71ffb5b4081
 151931 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 14c7ed8b51f60097ad771277da69f74b22a7a759 3c659044118e34603161457db9934a34f816d78b b889212973dabee119a1ab21326a27fc51b88d6d 2e3de6253422112ae43e608661ba94ea6b345694 6a49b9a7920c82015381740905582b666160d955
 151915 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8035edbe12f0f2a58e8fa9b06d05c8ee1c69ffae 3c659044118e34603161457db9934a34f816d78b 7d2410cea154bf915fb30179ebda3b17ac36e70e 2e3de6253422112ae43e608661ba94ea6b345694 780aba2779b834f19b2a6f0dcdea0e7e0b5e1622
 151924 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ca407c7246bf405da6d9b1b9d93e5e7f17b4b1f9 3c659044118e34603161457db9934a34f816d78b 98d59d5dd8b662ba8ec7c522faa9b88823389711 2e3de6253422112ae43e608661ba94ea6b345694 dde6174ada5280cd9a6396e3b12606360a0d29a3
 151928 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 cde194be8d43ee642db719b4761889f46dd80f27 3c659044118e34603161457db9934a34f816d78b 80bde69353924d99e2a7f5ac6be0ab4cf489d33c 2e3de6253422112ae43e608661ba94ea6b345694 6a49b9a7920c82015381740905582b666160d955
 151949 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 14c7ed8b51f60097ad771277da69f74b22a7a759 3c659044118e34603161457db9934a34f816d78b da278d58a092bfcc4e36f1e274229c1468dea731 2e3de6253422112ae43e608661ba94ea6b345694 6a49b9a7920c82015381740905582b666160d955
 151929 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 14c7ed8b51f60097ad771277da69f74b22a7a759 3c659044118e34603161457db9934a34f816d78b 0d48b436327955c69e2eb53f88aba9aa1e0dbaa0 2e3de6253422112ae43e608661ba94ea6b345694 6a49b9a7920c82015381740905582b666160d955
 151940 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 14c7ed8b51f60097ad771277da69f74b22a7a759 3c659044118e34603161457db9934a34f816d78b aaacf1c15a225ffeb1ff066b78e211594b3a5053 2e3de6253422112ae43e608661ba94ea6b345694 6a49b9a7920c82015381740905582b666160d955
 151933 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 568eee7cf319fa95183c8d3b5e8dcf6e078ab8b3 3c659044118e34603161457db9934a34f816d78b 4ec2a1f53e8aaa22924614b64dde97321126943e 2e3de6253422112ae43e608661ba94ea6b345694 1497e78068421d83956f8e82fb6e1bf1fc3b1199
 151936 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 256c4470f86e53661c070f8c64a1052e975f9ef0 3c659044118e34603161457db9934a34f816d78b c920fdba39480989cb5f1af3cc63acccef021b54 88ab0c15525ced2eefe39220742efe4769089ad8 165f3afbfc3db70fcfdccad07085cde0a03c858b
 151938 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 14c7ed8b51f60097ad771277da69f74b22a7a759 3c659044118e34603161457db9934a34f816d78b da278d58a092bfcc4e36f1e274229c1468dea731 2e3de6253422112ae43e608661ba94ea6b345694 6a49b9a7920c82015381740905582b666160d955
 151941 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 14c7ed8b51f60097ad771277da69f74b22a7a759 3c659044118e34603161457db9934a34f816d78b 73b994f6d74ec00a1d78daf4145096ff9f0e2982 2e3de6253422112ae43e608661ba94ea6b345694 6a49b9a7920c82015381740905582b666160d955
 151948 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 14c7ed8b51f60097ad771277da69f74b22a7a759 3c659044118e34603161457db9934a34f816d78b 23accdf162dcccb9fec9585a64ad01a87b13da5c 2e3de6253422112ae43e608661ba94ea6b345694 6a49b9a7920c82015381740905582b666160d955
 151943 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 14c7ed8b51f60097ad771277da69f74b22a7a759 3c659044118e34603161457db9934a34f816d78b d6048bfd12e24a0980ba2040cfaa2b101df3fa16 2e3de6253422112ae43e608661ba94ea6b345694 6a49b9a7920c82015381740905582b666160d955
 151934 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c7195b9ec3c5f8f40119c96ac4a0ab1e8cebe9dc 3c659044118e34603161457db9934a34f816d78b 8746309137ba470d1b2e8f5ce86ac228625db940 88ab0c15525ced2eefe39220742efe4769089ad8 165f3afbfc3db70fcfdccad07085cde0a03c858b
 151950 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 568eee7cf319fa95183c8d3b5e8dcf6e078ab8b3 3c659044118e34603161457db9934a34f816d78b 4ec2a1f53e8aaa22924614b64dde97321126943e 2e3de6253422112ae43e608661ba94ea6b345694 1497e78068421d83956f8e82fb6e1bf1fc3b1199
 151951 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c7195b9ec3c5f8f40119c96ac4a0ab1e8cebe9dc 3c659044118e34603161457db9934a34f816d78b 8746309137ba470d1b2e8f5ce86ac228625db940 88ab0c15525ced2eefe39220742efe4769089ad8 165f3afbfc3db70fcfdccad07085cde0a03c858b
 151953 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 14c7ed8b51f60097ad771277da69f74b22a7a759 3c659044118e34603161457db9934a34f816d78b 23accdf162dcccb9fec9585a64ad01a87b13da5c 2e3de6253422112ae43e608661ba94ea6b345694 6a49b9a7920c82015381740905582b666160d955
 151954 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 14c7ed8b51f60097ad771277da69f74b22a7a759 3c659044118e34603161457db9934a34f816d78b da278d58a092bfcc4e36f1e274229c1468dea731 2e3de6253422112ae43e608661ba94ea6b345694 6a49b9a7920c82015381740905582b666160d955
 151955 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 14c7ed8b51f60097ad771277da69f74b22a7a759 3c659044118e34603161457db9934a34f816d78b 23accdf162dcccb9fec9585a64ad01a87b13da5c 2e3de6253422112ae43e608661ba94ea6b345694 6a49b9a7920c82015381740905582b666160d955
 151958 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 14c7ed8b51f60097ad771277da69f74b22a7a759 3c659044118e34603161457db9934a34f816d78b da278d58a092bfcc4e36f1e274229c1468dea731 2e3de6253422112ae43e608661ba94ea6b345694 6a49b9a7920c82015381740905582b666160d955
Searching for interesting versions
 Result found: flight 150593 (pass), for basis pass
 Result found: flight 151934 (fail), for basis failure (at ancestor ~1)
 Repro found: flight 151950 (pass), for basis pass
 Repro found: flight 151951 (fail), for basis failure
 0 revisions at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 14c7ed8b51f60097ad771277da69f74b22a7a759 3c659044118e34603161457db9934a34f816d78b 23accdf162dcccb9fec9585a64ad01a87b13da5c 2e3de6253422112ae43e608661ba94ea6b345694 6a49b9a7920c82015381740905582b666160d955
No revisions left to test, checking graph state.
 Result found: flight 151948 (pass), for last pass
 Result found: flight 151949 (fail), for first failure
 Repro found: flight 151953 (pass), for last pass
 Repro found: flight 151954 (fail), for first failure
 Repro found: flight 151955 (pass), for last pass
 Repro found: flight 151958 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  qemuu git://git.qemu.org/qemu.git
  Bug introduced:  da278d58a092bfcc4e36f1e274229c1468dea731
  Bug not present: 23accdf162dcccb9fec9585a64ad01a87b13da5c
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/151958/


  commit da278d58a092bfcc4e36f1e274229c1468dea731
  Author: Philippe Mathieu-Daudé <philmd@redhat.com>
  Date:   Fri May 8 12:02:22 2020 +0200
  
      accel: Move Xen accelerator code under accel/xen/
      
      This code is not related to hardware emulation.
      Move it under accel/ with the other hypervisors.
      
      Reviewed-by: Paul Durrant <paul@xen.org>
      Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
      Message-Id: <20200508100222.7112-1-philmd@redhat.com>
      Reviewed-by: Juan Quintela <quintela@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

dot: graph is too large for cairo-renderer bitmaps. Scaling by 0.487169 to fit
pnmtopng: 206 colors found
Revision graph left in /home/logs/results/bisect/qemu-mainline/test-amd64-i386-xl-qemuu-debianhvm-i386-xsm.debian-hvm-install.{dot,ps,png,html,svg}.
----------------------------------------
151958: tolerable ALL FAIL

flight 151958 qemu-mainline real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/151958/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail baseline untested


jobs:
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Fri Jul 17 06:54:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jul 2020 06:54:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jwKFn-0006xN-3i; Fri, 17 Jul 2020 06:53:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zdYj=A4=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1jwKFl-0006xI-GD
 for xen-devel@lists.xenproject.org; Fri, 17 Jul 2020 06:53:53 +0000
X-Inumbo-ID: 45858be6-c7fa-11ea-bb8b-bc764e2007e4
Received: from EUR01-VE1-obe.outbound.protection.outlook.com (unknown
 [2a01:111:f400:fe1f::620])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 45858be6-c7fa-11ea-bb8b-bc764e2007e4;
 Fri, 17 Jul 2020 06:53:51 +0000 (UTC)
Received: from DB6PR07CA0083.eurprd07.prod.outlook.com (2603:10a6:6:2b::21) by
 VI1PR08MB3309.eurprd08.prod.outlook.com (2603:10a6:803:41::16) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3174.20; Fri, 17 Jul 2020 06:53:49 +0000
Received: from DB5EUR03FT025.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:6:2b:cafe::b4) by DB6PR07CA0083.outlook.office365.com
 (2603:10a6:6:2b::21) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3195.10 via Frontend
 Transport; Fri, 17 Jul 2020 06:53:49 +0000
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org;
 dmarc=bestguesspass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DB5EUR03FT025.mail.protection.outlook.com (10.152.20.104) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3195.18 via Frontend Transport; Fri, 17 Jul 2020 06:53:48 +0000
Received: ("Tessian outbound 73b502bf693a:v62");
 Fri, 17 Jul 2020 06:53:48 +0000
X-CheckRecipientChecked: true
X-CR-MTA-CID: 368887eb5e22a41f
X-CR-MTA-TID: 64aa7808
Received: from 897e10ff5d24.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 1CF4385D-D921-473D-9A01-BCC45E049945.1; 
 Fri, 17 Jul 2020 06:53:43 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 897e10ff5d24.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 17 Jul 2020 06:53:43 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DBBPR08MB4773.eurprd08.prod.outlook.com (2603:10a6:10:d9::18) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3195.17; Fri, 17 Jul
 2020 06:53:41 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::4001:43ad:d113:46a8]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::4001:43ad:d113:46a8%5]) with mapi id 15.20.3174.028; Fri, 17 Jul 2020
 06:53:41 +0000
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Stefano Stabellini <sstabellini@kernel.org>
Subject: Re: RFC: PCI devices passthrough on Arm design proposal
Thread-Topic: RFC: PCI devices passthrough on Arm design proposal
Thread-Index: AQHWW4kYTVU0hTDyYEitKlUuU5vZlKkKf2uAgAACLICAAC0rgIAAqC0A
Date: Fri, 17 Jul 2020 06:53:41 +0000
Message-ID: <BB4645DF-A040-4912-AC35-C98105917FD5@arm.com>
References: <3F6E40FB-79C5-4AE8-81CA-E16CA37BB298@arm.com>
 <BD475825-10F6-4538-8294-931E370A602C@arm.com>
 <E9CBAA57-5EF3-47F9-8A40-F5D7816DB2A4@arm.com>
 <alpine.DEB.2.21.2007161258160.3886@sstabellini-ThinkPad-T480s>
In-Reply-To: <alpine.DEB.2.21.2007161258160.3886@sstabellini-ThinkPad-T480s>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
Authentication-Results-Original: kernel.org; dkim=none (message not signed)
 header.d=none;kernel.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [2a01:e0a:13:6f10:52:283b:b28d:2112]
Content-Type: text/plain; charset="utf-8"
Content-ID: <0241A1D24169AB429BF91B83FD172ADA@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBBPR08MB4773
Original-Authentication-Results: kernel.org; dkim=none (message not signed)
 header.d=none;kernel.org; dmarc=none action=none header.from=arm.com;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Jul 2020 06:53:48.9427 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: deed8aa2-2e6b-467b-b774-08d82a1e287b
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d; Ip=[63.35.35.123];
 Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource: DB5EUR03FT025.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR08MB3309
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 nd <nd@arm.com>, Rahul Singh <Rahul.Singh@arm.com>,
 Julien Grall <julien.grall.oss@gmail.com>,
 =?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> On 16 Jul 2020, at 22:51, Stefano Stabellini <sstabellini@kernel.org> wrote:
> 
> On Thu, 16 Jul 2020, Rahul Singh wrote:
>> Hello All,
>> 
>> Following up on discussion on PCI Passthrough support on ARM that we had at the XEN summit, we are submitting a Review For Comment and a design proposal for PCI passthrough support on ARM. Feel free to give your feedback.
>> 
>> The followings describe the high-level design proposal of the PCI passthrough support and how the different modules within the system interacts with each other to assign a particular PCI device to the guest.
> 
> I think the proposal is good and I only have a couple of thoughts to
> share below.
> 
> 
>> # Title:
>> 
>> PCI devices passthrough on Arm design proposal
>> 
>> # Problem statement:
>> 
>> On ARM there in no support to assign a PCI device to a guest. PCI device passthrough capability allows guests to have full access to some PCI devices. PCI device passthrough allows PCI devices to appear and behave as if they were physically attached to the guest operating system and provide full isolation of the PCI devices.
>> 
>> Goal of this work is to also support Dom0Less configuration so the PCI backend/frontend drivers used on x86 shall not be used on Arm. It will use the existing VPCI concept from X86 and implement the virtual PCI bus through IO emulation such that only assigned devices are visible to the guest and guest can use the standard PCI driver.
>> 
>> Only Dom0 and Xen will have access to the real PCI bus, guest will have a direct access to the assigned device itself. IOMEM memory will be mapped to the guest and interrupt will be redirected to the guest. SMMU has to be configured correctly to have DMA transaction.
>> 
>> ## Current state: Draft version
>> 
>> # Proposer(s): Rahul Singh, Bertrand Marquis
>> 
>> # Proposal:
>> 
>> This section will describe the different subsystem to support the PCI device passthrough and how these subsystems interact with each other to assign a device to the guest.
>> 
>> # PCI Terminology:
>> 
>> Host Bridge: Host bridge allows the PCI devices to talk to the rest of the computer.
>> ECAM: ECAM (Enhanced Configuration Access Mechanism) is a mechanism developed to allow PCIe to access configuration space. The space available per function is 4KB.
>> 
>> # Discovering PCI Host Bridge in XEN:
>> 
>> In order to support the PCI passthrough XEN should be aware of all the PCI host bridges available on the system and should be able to access the PCI configuration space. ECAM configuration access is supported as of now. XEN during boot will read the PCI device tree node "reg" property and will map the ECAM space to the XEN memory using the "ioremap_nocache ()" function.
>> 
>> If there are more than one segment on the system, XEN will read the "linux,pci-domain" property from the device tree node and configure the host bridge segment number accordingly. All the PCI device tree nodes should have the "linux,pci-domain" property so that there will be no conflicts. During hardware domain boot Linux will also use the same "linux,pci-domain" property and assign the domain number to the host bridge.
>> 
>> When Dom0 tries to access the PCI config space of the device, XEN will find the corresponding host bridge based on segment number and access the corresponding config space assigned to that bridge.
>> 
>> Limitation:
>> * Only PCI ECAM configuration space access is supported.
>> * Device tree binding is supported as of now, ACPI is not supported.
>> * Need to port the PCI host bridge access code to XEN to access the configuration space (generic one works but lots of platforms will required some specific code or quirks).
>> 
>> # Discovering PCI devices:
>> 
>> PCI-PCIe enumeration is a process of detecting devices connected to its host. It is the responsibility of the hardware domain or boot firmware to do the PCI enumeration and configure the BAR, PCI capabilities, and MSI/MSI-X configuration.
>> 
>> PCI-PCIe enumeration in XEN is not feasible for the configuration part as it would require a lot of code inside Xen which would require a lot of maintenance. Added to this many platforms require some quirks in that part of the PCI code which would greatly improve Xen complexity. Once hardware domain enumerates the device then it will communicate to XEN via the below hypercall.
>> 
>> #define PHYSDEVOP_pci_device_add        25
>> struct physdev_pci_device_add {
>>     uint16_t seg;
>>     uint8_t bus;
>>     uint8_t devfn;
>>     uint32_t flags;
>>     struct {
>>         uint8_t bus;
>>         uint8_t devfn;
>>     } physfn;
>>     /*
>>     * Optional parameters array.
>>     * First element ([0]) is PXM domain associated with the device (if * XEN_PCI_DEV_PXM is set)
>>     */
>>     uint32_t optarr[XEN_FLEX_ARRAY_DIM];
>>     };
>> 
>> As the hypercall argument has the PCI segment number, XEN will access the PCI config space based on this segment number and find the host-bridge corresponding to this segment number. At this stage host bridge is fully initialized so there will be no issue to access the config space.
>> 
>> XEN will add the PCI devices in the linked list maintain in XEN using the function pci_add_device(). XEN will be aware of all the PCI devices on the system and all the device will be added to the hardware domain.
>> 
>> Limitations:
>> * When PCI devices are added to XEN, MSI capability is not initialized inside XEN and not supported as of now.
>> * ACS capability is disable for ARM as of now as after enabling it devices are not accessible.
>> * Dom0Less implementation will require to have the capacity inside Xen to discover the PCI devices (without depending on Dom0 to declare them to Xen).
> 
> I think it is fine to assume that for dom0less the "firmware" has taken
> care of setting up the BARs correctly. Starting with that assumption, it
> looks like it should be "easy" to walk the PCI topology in Xen when/if
> there is no dom0?

Yes as we discussed during the design session, we currently think that it is the way to go.
We are for now relying on Dom0 to get the list of PCI devices but this is definitely the strategy we would like to use to have Dom0 support.
If this is working well, I even think we could get rid of the hypercall all together.

> 
> 
>> # Enable the existing x86 virtual PCI support for ARM:
>> 
>> The existing VPCI support available for X86 is adapted for Arm. When the device is added to XEN via the hyper call "PHYSDEVOP_pci_device_add", VPCI handler for the config space access is added to the PCI device to emulate the PCI devices.
>> 
>> A MMIO trap handler for the PCI ECAM space is registered in XEN so that when guest is trying to access the PCI config space, XEN will trap the access and emulate read/write using the VPCI and not the real PCI hardware.
>> 
>> Limitation:
>> * No handler is register for the MSI configuration.
>> * Only legacy interrupt is supported and tested as of now, MSI is not implemented and tested.
>> 
>> # Assign the device to the guest:
>> 
>> Assign the PCI device from the hardware domain to the guest is done using the below guest config option. When xl tool create the domain, PCI devices will be assigned to the guest VPCI bus.
>> 	pci=[ "PCI_SPEC_STRING", "PCI_SPEC_STRING", ...]
>> 
>> Guest will be only able to access the assigned devices and see the bridges. Guest will not be able to access or see the devices that are no assigned to him.
>> 
>> Limitation:
>> * As of now all the bridges in the PCI bus are seen by the guest on the VPCI bus.
> 
> We need to come up with something similar for dom0less too. It could be
> exactly the same thing (a list of BDFs as strings as a device tree
> property) or something else if we can come up with a better idea.

Fully agree.
Maybe a tree topology could allow more possibilities (like giving BAR values) in the future.
> 
> 
>> # Emulated PCI device tree node in libxl:
>> 
>> Libxl is creating a virtual PCI device tree node in the device tree to enable the guest OS to discover the virtual PCI during guest boot. We introduced the new config option [vpci="pci_ecam"] for guests. When this config option is enabled in a guest configuration, a PCI device tree node will be created in the guest device tree.
>> 
>> A new area has been reserved in the arm guest physical map at which the VPCI bus is declared in the device tree (reg and ranges parameters of the node). A trap handler for the PCI ECAM access from guest has been registered at the defined address and redirects requests to the VPCI driver in Xen.
>> 
>> Limitation:
>> * Only one PCI device tree node is supported as of now.
> 
> I think vpci="pci_ecam" should be optional: if pci=[ "PCI_SPEC_STRING",
> ...] is specififed, then vpci="pci_ecam" is implied.
> 
> vpci="pci_ecam" is only useful one day in the future when we want to be
> able to emulate other non-ecam host bridges. For now we could even skip
> it.

This would create a problem if xl is used to add a PCI device as we need the PCI node to be in the DTB when the guest is created.
I agree this is not needed but removing it might create more complexity in the code.

Bertrand

> 
> 
>> BAR value and IOMEM mapping:
>> 
>> Linux guest will do the PCI enumeration based on the area reserved for ECAM and IOMEM ranges in the VPCI device tree node. Once PCI device is assigned to the guest, XEN will map the guest PCI IOMEM region to the real physical IOMEM region only for the assigned devices.
>> 
>> As of now we have not modified the existing VPCI code to map the guest PCI IOMEM region to the real physical IOMEM region. We used the existing guest "iomem" config option to map the region.
>> For example:
>> 	Guest reserved IOMEM region:  0x04020000
>> 	Real physical IOMEM region:   0x50000000
>> 	IOMEM size:                   128MB
>> 	iomem config will be:         iomem = ["0x50000,0x8000@0x4020"]
>> 
>> There is no need to map the ECAM space as XEN already have access to the ECAM space and XEN will trap ECAM accesses from the guest and will perform read/write on the VPCI bus.
>> 
>> IOMEM access will not be trapped and the guest will directly access the IOMEM region of the assigned device via stage-2 translation.
>> 
>> In the same, we mapped the assigned devices IRQ to the guest using below config options.
>> 	irqs= [ NUMBER, NUMBER, ...]
>> 
>> Limitation:
>> * Need to avoid the "iomem" and "irq" guest config options and map the IOMEM region and IRQ at the same time when device is assigned to the guest using the "pci" guest config options when xl creates the domain.
>> * Emulated BAR values on the VPCI bus should reflect the IOMEM mapped address.
>> * X86 mapping code should be ported on Arm so that the stage-2 translation is adapted when the guest is doing a modification of the BAR registers values (to map the address requested by the guest for a specific IOMEM to the address actually contained in the real BAR register of the corresponding device).
>> 
>> # SMMU configuration for guest:
>> 
>> When assigning PCI devices to a guest, the SMMU configuration should be updated to remove access to the hardware domain memory and add
>> configuration to have access to the guest memory with the proper address translation so that the device can do DMA operations from and to the guest memory only.
>> 
>> # MSI/MSI-X support:
>> Not implement and tested as of now.
>> 
>> # ITS support:
>> Not implement and tested as of now.
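[Editor's note: the proposal above relies on ECAM, which gives every PCI function a fixed 4 KB config-space window. As a generic sketch (the standard ECAM offset formula, not code from the thread), a BDF-to-MMIO-offset decode looks like this:]

```python
# Standard ECAM layout (PCIe spec): each function gets a 4 KB window, so a
# config register's byte offset inside the ECAM region decomposes as
#   offset = (bus << 20) | (device << 15) | (function << 12) | reg
def ecam_offset(bus: int, dev: int, fn: int, reg: int = 0) -> int:
    """Byte offset of a config register inside an ECAM region."""
    assert 0 <= bus < 256 and 0 <= dev < 32 and 0 <= fn < 8 and 0 <= reg < 4096
    return (bus << 20) | (dev << 15) | (fn << 12) | reg

# e.g. function 00:01.0, vendor-ID register (reg 0), sits 32 KB into the region:
print(hex(ecam_offset(0, 1, 0)))  # -> 0x8000
```
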

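[Editor's note: the iomem example in the proposal is page-frame arithmetic with 4 KB pages. A small sketch (editor's illustration, not code from the thread; `iomem_entry` is a hypothetical helper name) that reproduces the numbers quoted above:]

```python
# xl's iomem= option takes page frame numbers: "<start_pfn>,<nr_pages>@<guest_pfn>"
PAGE_SHIFT = 12  # 4 KB pages

def iomem_entry(phys_addr: int, size: int, guest_addr: int) -> str:
    """Build an xl iomem= entry from byte addresses (must be page-aligned)."""
    page = 1 << PAGE_SHIFT
    assert phys_addr % page == 0 and size % page == 0 and guest_addr % page == 0
    return "0x%x,0x%x@0x%x" % (
        phys_addr >> PAGE_SHIFT, size >> PAGE_SHIFT, guest_addr >> PAGE_SHIFT)

# Physical 0x50000000, 128 MB, mapped at guest 0x04020000:
print(iomem_entry(0x50000000, 128 << 20, 0x04020000))  # -> 0x50000,0x8000@0x4020
```
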

From xen-devel-bounces@lists.xenproject.org Fri Jul 17 07:42:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jul 2020 07:42:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jwL0K-0002nm-3h; Fri, 17 Jul 2020 07:42:00 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6P0s=A4=epam.com=prvs=5467d6a743=oleksandr_andrushchenko@srs-us1.protection.inumbo.net>)
 id 1jwL0I-0002nh-J3
 for xen-devel@lists.xenproject.org; Fri, 17 Jul 2020 07:41:58 +0000
X-Inumbo-ID: fd976d8e-c800-11ea-bb8b-bc764e2007e4
Received: from mx0b-0039f301.pphosted.com (unknown [148.163.137.242])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id fd976d8e-c800-11ea-bb8b-bc764e2007e4;
 Fri, 17 Jul 2020 07:41:56 +0000 (UTC)
Received: from pps.filterd (m0174680.ppops.net [127.0.0.1])
 by mx0b-0039f301.pphosted.com (8.16.0.42/8.16.0.42) with SMTP id
 06H7ab9b011335; Fri, 17 Jul 2020 07:41:51 GMT
Received: from eur05-db8-obe.outbound.protection.outlook.com
 (mail-db8eur05lp2111.outbound.protection.outlook.com [104.47.17.111])
 by mx0b-0039f301.pphosted.com with ESMTP id 32as5e9qjg-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Fri, 17 Jul 2020 07:41:51 +0000
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=A5+sD9NpfcIkjCZ+tsoOb1s8bWEGGY3YC4bt9a9BHvjzxiOqF1ERqWoj0CzGVkb2gxJtaGwhDFEaPfRT4aRwNk8qjGyeQmDDz+njFA0OQ65cYW1C8aZnhM7pSbn672BI7Goe6yWbURCVVavZCX4FjWFqYA4K8vk2Bz9sOZIY9du1zWJLhorEdovZJaWngF1QLLQz1FLZ5miviqwQ8k5ksDOkbUqsiHHCfuFVhQRNO+ytPTKnXp2cxG9rKhbMyRpgr62RXhbrlTjtf0Xp5OiJsI6MJSaXSTjvIdSEi6DU2P2u4WvFryFTu8DJa34M/DdlVvaWHAlkbltFQy84JeRX1A==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; 
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=FDUPDh7/fFGpVB1H+916F6hu9kABpilUU+M1gPj+sV8=;
 b=ggaBVa8upMV0Qcqw5pLFmyiESUH7xUAqoIAvz9LGpnavMO7ugjMN6Mbj2TuG+uX2IjZhN9s/xtJP8SwcyehqnXT5OSRqIAi0qaVSQ2UW6PoD2k1Ob7Eo7BBztRxznCw80fg+3TeYOjP2jk5G6WSjiASN3a9/B9oqOLueMce7mQkyC52mxPM0xiESzK6mvh03jG6aP0YqhdweGqJkk3E0zGn2L459AlTASqtPQOmcqp2lJhyzjbzdsaK9UspFOnoHWLtSAzHsKEKKxckOXmsbwqCrU6UazO6XyV6RFyJOjW7qnOpYMwJnGe46gbQytqvQNziDt9m+NpPRpLDLzS/mJw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=epam.com; dmarc=pass action=none header.from=epam.com;
 dkim=pass header.d=epam.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=epam.com; s=selector1; 
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=FDUPDh7/fFGpVB1H+916F6hu9kABpilUU+M1gPj+sV8=;
 b=0WXO4t8FeWTivA/OPh+3OQM3b46tCXYzaW2iIg+IDvV6d779ghKTiQloCChXXLsG7AWbFY1xZm2jDEVmj6hUJZO/aTH39zWRvChLI7LLfXV8lNBZ0GEEYDuMUuYLrKgGQv1apItgAwt/7Y4yvuFNs1XW2e9e7Gyj+07VJgREurfne5u4eIZlsB1vOMWYjETQq85/nRf75qM1pJvfdBkxE/uy0gf1t/seVEm0SixL4mCX47FER/2k3fcb4KL+lPmSJM99DCU5Q8kNNaBioSa3VJv3oe+bCLpQY4SedL8Suzf/LD2cfa8zFfuWtwsG0D9IdJzasG/JntTVHa2se3HCiQ==
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com (2603:10a6:20b:153::17)
 by AM4PR0301MB2196.eurprd03.prod.outlook.com (2603:10a6:200:4d::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3195.17; Fri, 17 Jul
 2020 07:41:48 +0000
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::21e5:6d27:5ba0:f508]) by AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::21e5:6d27:5ba0:f508%9]) with mapi id 15.20.3195.022; Fri, 17 Jul 2020
 07:41:48 +0000
From: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
To: Bertrand Marquis <Bertrand.Marquis@arm.com>, Stefano Stabellini
 <sstabellini@kernel.org>
Subject: Re: RFC: PCI devices passthrough on Arm design proposal
Thread-Topic: RFC: PCI devices passthrough on Arm design proposal
Thread-Index: AQHWXA26EtJZFfhoUEGdjsOmzYm6Xg==
Date: Fri, 17 Jul 2020 07:41:48 +0000
Message-ID: <f69f86dc-7a8c-4c25-c059-0e391de51d7f@epam.com>
References: <3F6E40FB-79C5-4AE8-81CA-E16CA37BB298@arm.com>
 <BD475825-10F6-4538-8294-931E370A602C@arm.com>
 <E9CBAA57-5EF3-47F9-8A40-F5D7816DB2A4@arm.com>
 <alpine.DEB.2.21.2007161258160.3886@sstabellini-ThinkPad-T480s>
 <BB4645DF-A040-4912-AC35-C98105917FD5@arm.com>
In-Reply-To: <BB4645DF-A040-4912-AC35-C98105917FD5@arm.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
authentication-results: arm.com; dkim=none (message not signed)
 header.d=none;arm.com; dmarc=none action=none header.from=epam.com;
x-originating-ip: [185.199.97.5]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: fb7e7039-e396-4cb0-6084-08d82a24dcb6
x-ms-traffictypediagnostic: AM4PR0301MB2196:
x-microsoft-antispam-prvs: <AM4PR0301MB2196E38ADF85AEECDFB3FA7FE77C0@AM4PR0301MB2196.eurprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:10000;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info: C/Q3jm7/1vfSAI0fPKYS5+A9CfZsFpkGNh9L8A4vNQVOsLCSzkDofd+jmBye/xI7YPBpQ2+6GtzLIRkBccXCnpQ67BbYjeZ045PL5raQfRyn1GmwhWS+npIbOKIoEPAjab2X0eWUUkb6u+GMYAk0LoSHaqKGkavGw21LJywfdFKUIUOUbDJLo9BolJ2aMl8qkk5Y2n/UVq+92UoumPishnA7fUectV5s9KGC6XriIDz8bT5vQQQYDDzx7BvhHMAJK8kv7sgQ7kWMCcug27Ju0r0jNwuYiJsVwLj6W8ayvfn6gBO6TC7i1iwfnYcVJvUAy+tByeOxVpvivdPvcEi/sSVX5oXL1VyV/j5riHZwNhmGp6C0Mori3B3PGvskZNo6o0koqNkyesOLaa1h1Vhqaw==
x-forefront-antispam-report: CIP:255.255.255.255; CTRY:; LANG:en; SCL:1; SRV:;
 IPV:NLI; SFV:NSPM; H:AM0PR03MB6324.eurprd03.prod.outlook.com; PTR:; CAT:NONE;
 SFTY:;
 SFS:(4636009)(346002)(366004)(376002)(136003)(396003)(39860400002)(316002)(6512007)(4326008)(2616005)(5660300002)(478600001)(966005)(31686004)(36756003)(71200400001)(2906002)(30864003)(186003)(53546011)(86362001)(110136005)(54906003)(8676002)(83380400001)(31696002)(6506007)(66446008)(66556008)(66946007)(66476007)(91956017)(64756008)(76116006)(8936002)(26005)(6486002);
 DIR:OUT; SFP:1101; 
x-ms-exchange-antispam-messagedata: rugPjgPQpiilfe2x5Sjfme62mJac+4Ya/5Dad427meIxH0EbU4c6Vi/6RjThZqHgk5u4E0bU5BySy6XInRe/SeBYzC7glyyE/jerXr9xDJIdgX224XG4o3pFzks1wNr44y7CZ030fNcgnNIJThG/tlbx8h5qEvFt0bnwFw+hlJX3qs1Xcnqo5nw7dMj2z91yUXdN81d1aS5H9fOJAlp6C2sIpoF9j3eNBYoRoYUJw3mIqXS9iH6DKOi0Je3HQ8068me1iVdjFoHLzvkqNP8Dlb6/smXQUHUrqNbK32oQH+ttTAY9hwfhWbQgFNOjVEasoYplRk8FLJrdict9PnwzcBVVgLI1fM4eqJvhNe3u3lie+8UsbKPG45+zAlrOoDVFJ3cIQVdByRfI6kvZ9Ulk4fgyjLNqan1ilD2n/T0axPk1lfCQoSuz5SwAFNSirkESqdqq05A/y14+ZtHRQjAgnfZ+UyAWuMDVEZdVgSuAD1E=
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="utf-8"
Content-ID: <417DE3C9C3D97A44BD2069AC40DE2E89@eurprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: epam.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: AM0PR03MB6324.eurprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: fb7e7039-e396-4cb0-6084-08d82a24dcb6
X-MS-Exchange-CrossTenant-originalarrivaltime: 17 Jul 2020 07:41:48.2008 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b41b72d0-4e9f-4c26-8a69-f949f367c91d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: eVCm95KNlTkg5pcJaxTN8NKSXT0oNds+XPnAJxF8ndRhGMVpCWmg0Cvw82NhAWUJB0uhqYyMKHXxUiH6JjWEKg1QhIN7KcVDP5kLqKMpmvdw/GuKYS6G4yGX0X6qxIZ6
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM4PR0301MB2196
X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.235, 18.0.687
 definitions=2020-07-17_04:2020-07-17,
 2020-07-17 signatures=0
X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0
 malwarescore=0 suspectscore=0
 bulkscore=0 spamscore=0 phishscore=0 priorityscore=1501 clxscore=1011
 adultscore=0 impostorscore=0 mlxlogscore=999 mlxscore=0 lowpriorityscore=0
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2006250000
 definitions=main-2007170058
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 nd <nd@arm.com>, Rahul Singh <Rahul.Singh@arm.com>,
 =?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>,
 Julien Grall <julien.grall.oss@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

DQpPbiA3LzE3LzIwIDk6NTMgQU0sIEJlcnRyYW5kIE1hcnF1aXMgd3JvdGU6DQo+DQo+PiBPbiAx
NiBKdWwgMjAyMCwgYXQgMjI6NTEsIFN0ZWZhbm8gU3RhYmVsbGluaSA8c3N0YWJlbGxpbmlAa2Vy
bmVsLm9yZz4gd3JvdGU6DQo+Pg0KPj4gT24gVGh1LCAxNiBKdWwgMjAyMCwgUmFodWwgU2luZ2gg
d3JvdGU6DQo+Pj4gSGVsbG8gQWxsLA0KPj4+DQo+Pj4gRm9sbG93aW5nIHVwIG9uIGRpc2N1c3Np
b24gb24gUENJIFBhc3N0aHJvdWdoIHN1cHBvcnQgb24gQVJNIHRoYXQgd2UgaGFkIGF0IHRoZSBY
RU4gc3VtbWl0LCB3ZSBhcmUgc3VibWl0dGluZyBhIFJldmlldyBGb3IgQ29tbWVudCBhbmQgYSBk
ZXNpZ24gcHJvcG9zYWwgZm9yIFBDSSBwYXNzdGhyb3VnaCBzdXBwb3J0IG9uIEFSTS4gRmVlbCBm
cmVlIHRvIGdpdmUgeW91ciBmZWVkYmFjay4NCj4+Pg0KPj4+IFRoZSBmb2xsb3dpbmdzIGRlc2Ny
aWJlIHRoZSBoaWdoLWxldmVsIGRlc2lnbiBwcm9wb3NhbCBvZiB0aGUgUENJIHBhc3N0aHJvdWdo
IHN1cHBvcnQgYW5kIGhvdyB0aGUgZGlmZmVyZW50IG1vZHVsZXMgd2l0aGluIHRoZSBzeXN0ZW0g
aW50ZXJhY3RzIHdpdGggZWFjaCBvdGhlciB0byBhc3NpZ24gYSBwYXJ0aWN1bGFyIFBDSSBkZXZp
Y2UgdG8gdGhlIGd1ZXN0Lg0KPj4gSSB0aGluayB0aGUgcHJvcG9zYWwgaXMgZ29vZCBhbmQgSSBv
bmx5IGhhdmUgYSBjb3VwbGUgb2YgdGhvdWdodHMgdG8NCj4+IHNoYXJlIGJlbG93Lg0KPj4NCj4+
DQo+Pj4gIyBUaXRsZToNCj4+Pg0KPj4+IFBDSSBkZXZpY2VzIHBhc3N0aHJvdWdoIG9uIEFybSBk
ZXNpZ24gcHJvcG9zYWwNCj4+Pg0KPj4+ICMgUHJvYmxlbSBzdGF0ZW1lbnQ6DQo+Pj4NCj4+PiBP
biBBUk0gdGhlcmUgaW4gbm8gc3VwcG9ydCB0byBhc3NpZ24gYSBQQ0kgZGV2aWNlIHRvIGEgZ3Vl
c3QuIFBDSSBkZXZpY2UgcGFzc3Rocm91Z2ggY2FwYWJpbGl0eSBhbGxvd3MgZ3Vlc3RzIHRvIGhh
dmUgZnVsbCBhY2Nlc3MgdG8gc29tZSBQQ0kgZGV2aWNlcy4gUENJIGRldmljZSBwYXNzdGhyb3Vn
aCBhbGxvd3MgUENJIGRldmljZXMgdG8gYXBwZWFyIGFuZCBiZWhhdmUgYXMgaWYgdGhleSB3ZXJl
IHBoeXNpY2FsbHkgYXR0YWNoZWQgdG8gdGhlIGd1ZXN0IG9wZXJhdGluZyBzeXN0ZW0gYW5kIHBy
b3ZpZGUgZnVsbCBpc29sYXRpb24gb2YgdGhlIFBDSSBkZXZpY2VzLg0KPj4+DQo+Pj4gR29hbCBv
ZiB0aGlzIHdvcmsgaXMgdG8gYWxzbyBzdXBwb3J0IERvbTBMZXNzIGNvbmZpZ3VyYXRpb24gc28g
dGhlIFBDSSBiYWNrZW5kL2Zyb250ZW5kIGRyaXZlcnMgdXNlZCBvbiB4ODYgc2hhbGwgbm90IGJl
IHVzZWQgb24gQXJtLiBJdCB3aWxsIHVzZSB0aGUgZXhpc3RpbmcgVlBDSSBjb25jZXB0IGZyb20g
WDg2IGFuZCBpbXBsZW1lbnQgdGhlIHZpcnR1YWwgUENJIGJ1cyB0aHJvdWdoIElPIGVtdWxhdGlv
buKAiyBzdWNoIHRoYXQgb25seSBhc3NpZ25lZCBkZXZpY2VzIGFyZSB2aXNpYmxl4oCLIHRvIHRo
ZSBndWVzdCBhbmQgZ3Vlc3QgY2FuIHVzZSB0aGUgc3RhbmRhcmQgUENJIGRyaXZlci4NCj4+Pg0K
Pj4+IE9ubHkgRG9tMCBhbmQgWGVuIHdpbGwgaGF2ZSBhY2Nlc3MgdG8gdGhlIHJlYWwgUENJIGJ1
cywNCg0KU28sIGluIHRoaXMgY2FzZSBob3cgaXMgdGhlIGFjY2VzcyBzZXJpYWxpemF0aW9uIGdv
aW5nIHRvIHdvcms/DQoNCkkgbWVhbiB0aGF0IGlmIGJvdGggWGVuIGFuZCBEb20wIGFyZSBhYm91
dCB0byBhY2Nlc3MgdGhlIGJ1cyBhdCB0aGUgc2FtZSB0aW1lPw0KDQpUaGVyZSB3YXMgYSBkaXNj
dXNzaW9uIG9uIHRoZSBzYW1lIGJlZm9yZSBbMV0gYW5kIElNTyBpdCB3YXMgbm90IGRlY2lkZWQg
b24NCg0KaG93IHRvIGRlYWwgd2l0aCB0aGF0Lg0KDQo+Pj4g4oCLIGd1ZXN0IHdpbGwgaGF2ZSBh
IGRpcmVjdCBhY2Nlc3MgdG8gdGhlIGFzc2lnbmVkIGRldmljZSBpdHNlbGbigIsuIElPTUVNIG1l
bW9yeSB3aWxsIGJlIG1hcHBlZCB0byB0aGUgZ3Vlc3Qg4oCLYW5kIGludGVycnVwdCB3aWxsIGJl
IHJlZGlyZWN0ZWQgdG8gdGhlIGd1ZXN0LiBTTU1VIGhhcyB0byBiZSBjb25maWd1cmVkIGNvcnJl
Y3RseSB0byBoYXZlIERNQSB0cmFuc2FjdGlvbi4NCj4+Pg0KPj4+ICMjIEN1cnJlbnQgc3RhdGU6
4oCvRHJhZnQgdmVyc2lvbg0KPj4+DQo+Pj4gIyBQcm9wb3NlcihzKTogUmFodWwgU2luZ2gsIEJl
cnRyYW5kIE1hcnF1aXMNCj4+Pg0KPj4+ICMgUHJvcG9zYWw6DQo+Pj4NCj4+PiBUaGlzIHNlY3Rp
b24gd2lsbCBkZXNjcmliZSB0aGUgZGlmZmVyZW50IHN1YnN5c3RlbSB0byBzdXBwb3J0IHRoZSBQ
Q0kgZGV2aWNlIHBhc3N0aHJvdWdoIGFuZCBob3cgdGhlc2Ugc3Vic3lzdGVtcyBpbnRlcmFjdCB3
aXRoIGVhY2ggb3RoZXIgdG8gYXNzaWduIGEgZGV2aWNlIHRvIHRoZSBndWVzdC4NCj4+Pg0KPj4+
ICMgUENJIFRlcm1pbm9sb2d5Og0KPj4+DQo+Pj4gSG9zdCBCcmlkZ2U6IEhvc3QgYnJpZGdlIGFs
bG93cyB0aGUgUENJIGRldmljZXMgdG8gdGFsayB0byB0aGUgcmVzdCBvZiB0aGUgY29tcHV0ZXIu
DQo+Pj4gRUNBTTogRUNBTSAoRW5oYW5jZWQgQ29uZmlndXJhdGlvbiBBY2Nlc3MgTWVjaGFuaXNt
KSBpcyBhIG1lY2hhbmlzbSBkZXZlbG9wZWQgdG8gYWxsb3cgUENJZSB0byBhY2Nlc3MgY29uZmln
dXJhdGlvbiBzcGFjZS4gVGhlIHNwYWNlIGF2YWlsYWJsZSBwZXIgZnVuY3Rpb24gaXMgNEtCLg0K
Pj4+DQo+Pj4gIyBEaXNjb3ZlcmluZyBQQ0kgSG9zdCBCcmlkZ2UgaW4gWEVOOg0KPj4+DQo+Pj4g
SW4gb3JkZXIgdG8gc3VwcG9ydCB0aGUgUENJIHBhc3N0aHJvdWdoIFhFTiBzaG91bGQgYmUgYXdh
cmUgb2YgYWxsIHRoZSBQQ0kgaG9zdCBicmlkZ2VzIGF2YWlsYWJsZSBvbiB0aGUgc3lzdGVtIGFu
ZCBzaG91bGQgYmUgYWJsZSB0byBhY2Nlc3MgdGhlIFBDSSBjb25maWd1cmF0aW9uIHNwYWNlLiBF
Q0FNIGNvbmZpZ3VyYXRpb24gYWNjZXNzIGlzIHN1cHBvcnRlZCBhcyBvZiBub3cuIFhFTiBkdXJp
bmcgYm9vdCB3aWxsIHJlYWQgdGhlIFBDSSBkZXZpY2UgdHJlZSBub2RlIOKAnHJlZ+KAnSBwcm9w
ZXJ0eSBhbmQgd2lsbCBtYXAgdGhlIEVDQU0gc3BhY2UgdG8gdGhlIFhFTiBtZW1vcnkgdXNpbmcg
dGhlIOKAnGlvcmVtYXBfbm9jYWNoZSAoKeKAnSBmdW5jdGlvbi4NCj4+Pg0KPj4+IElmIHRoZXJl
IGFyZSBtb3JlIHRoYW4gb25lIHNlZ21lbnQgb24gdGhlIHN5c3RlbSwgWEVOIHdpbGwgcmVhZCB0
aGUg4oCcbGludXgsIHBjaS1kb21haW7igJ0gcHJvcGVydHkgZnJvbSB0aGUgZGV2aWNlIHRyZWUg
bm9kZSBhbmQgY29uZmlndXJlICB0aGUgaG9zdCBicmlkZ2Ugc2VnbWVudCBudW1iZXIgYWNjb3Jk
aW5nbHkuIEFsbCB0aGUgUENJIGRldmljZSB0cmVlIG5vZGVzIHNob3VsZCBoYXZlIHRoZSDigJxs
aW51eCxwY2ktZG9tYWlu4oCdIHByb3BlcnR5IHNvIHRoYXQgdGhlcmUgd2lsbCBiZSBubyBjb25m
bGljdHMuIER1cmluZyBoYXJkd2FyZSBkb21haW4gYm9vdCBMaW51eCB3aWxsIGFsc28gdXNlIHRo
ZSBzYW1lIOKAnGxpbnV4LHBjaS1kb21haW7igJ0gcHJvcGVydHkgYW5kIGFzc2lnbiB0aGUgZG9t
YWluIG51bWJlciB0byB0aGUgaG9zdCBicmlkZ2UuDQo+Pj4NCj4+PiBXaGVuIERvbTAgdHJpZXMg
dG8gYWNjZXNzIHRoZSBQQ0kgY29uZmlnIHNwYWNlIG9mIHRoZSBkZXZpY2UsIFhFTiB3aWxsIGZp
bmQgdGhlIGNvcnJlc3BvbmRpbmcgaG9zdCBicmlkZ2UgYmFzZWQgb24gc2VnbWVudCBudW1iZXIg
YW5kIGFjY2VzcyB0aGUgY29ycmVzcG9uZGluZyBjb25maWcgc3BhY2UgYXNzaWduZWQgdG8gdGhh
dCBicmlkZ2UuDQo+Pj4NCj4+PiBMaW1pdGF0aW9uOg0KPj4+ICogT25seSBQQ0kgRUNBTSBjb25m
aWd1cmF0aW9uIHNwYWNlIGFjY2VzcyBpcyBzdXBwb3J0ZWQuDQoNClRoaXMgaXMgcmVhbGx5IHRo
ZSBsaW1pdGF0aW9uIHdoaWNoIHdlIGhhdmUgdG8gdGhpbmsgb2Ygbm93IGFzIHRoZXJlIGFyZSBs
b3RzIG9mDQoNCkhXIHcvbyBFQ0FNIHN1cHBvcnQgYW5kIG5vdCBwcm92aWRpbmcgYSB3YXkgdG8g
dXNlIFBDSShlKSBvbiB0aG9zZSBib2FyZHMNCg0Kd291bGQgcmVuZGVyIHRoZW0gdXNlbGVzcyB3
cnQgUENJLiBJIGRvbid0IHN1Z2dlc3QgdG8gaGF2ZSBzb21lIHJlYWwgY29kZSBmb3INCg0KdGhh
dCwgYnV0IEkgd291bGQgc3VnZ2VzdCB3ZSBkZXNpZ24gc29tZSBpbnRlcmZhY2VzIGZyb20gZGF5
IDAuDQoNCkF0IHRoZSBzYW1lIHRpbWUgSSBkbyB1bmRlcnN0YW5kIHRoYXQgc3VwcG9ydGluZyBu
b24tRUNBTSBicmlkZ2VzIGlzIGEgcGFpbg0KDQo+Pj4gKiBEZXZpY2UgdHJlZSBiaW5kaW5nIGlz
IHN1cHBvcnRlZCBhcyBvZiBub3csIEFDUEkgaXMgbm90IHN1cHBvcnRlZC4NCj4+PiAqIE5lZWQg
dG8gcG9ydCB0aGUgUENJIGhvc3QgYnJpZGdlIGFjY2VzcyBjb2RlIHRvIFhFTiB0byBhY2Nlc3Mg
dGhlIGNvbmZpZ3VyYXRpb24gc3BhY2UgKGdlbmVyaWMgb25lIHdvcmtzIGJ1dCBsb3RzIG9mIHBs
YXRmb3JtcyB3aWxsIHJlcXVpcmVkIHNvbWUgc3BlY2lmaWMgY29kZSBvciBxdWlya3MpLg0KPj4+
DQo+Pj4gIyBEaXNjb3ZlcmluZyBQQ0kgZGV2aWNlczoNCj4+Pg0KPj4+IFBDSS1QQ0llIGVudW1l
cmF0aW9uIGlzIGEgcHJvY2VzcyBvZiBkZXRlY3RpbmcgZGV2aWNlcyBjb25uZWN0ZWQgdG8gaXRz
IGhvc3QuIEl0IGlzIHRoZSByZXNwb25zaWJpbGl0eSBvZiB0aGUgaGFyZHdhcmUgZG9tYWluIG9y
IGJvb3QgZmlybXdhcmUgdG8gZG8gdGhlIFBDSSBlbnVtZXJhdGlvbiBhbmQgY29uZmlndXJlDQpH
cmVhdCwgc28gd2UgYXNzdW1lIGhlcmUgdGhhdCB0aGUgYm9vdGxvYWRlciBjYW4gZG8gdGhlIGVu
dW1lcmF0aW9uIGFuZCBjb25maWd1cmF0aW9uLi4uDQo+Pj4gICB0aGUgQkFSLCBQQ0kgY2FwYWJp
bGl0aWVzLCBhbmQgTVNJL01TSS1YIGNvbmZpZ3VyYXRpb24uDQo+Pj4NCj4+PiBQQ0ktUENJZSBl
bnVtZXJhdGlvbiBpbiBYRU4gaXMgbm90IGZlYXNpYmxlIGZvciB0aGUgY29uZmlndXJhdGlvbiBw
YXJ0IGFzIGl0IHdvdWxkIHJlcXVpcmUgYSBsb3Qgb2YgY29kZSBpbnNpZGUgWGVuIHdoaWNoIHdv
dWxkIHJlcXVpcmUgYSBsb3Qgb2YgbWFpbnRlbmFuY2UuIEFkZGVkIHRvIHRoaXMgbWFueSBwbGF0
Zm9ybXMgcmVxdWlyZSBzb21lIHF1aXJrcyBpbiB0aGF0IHBhcnQgb2YgdGhlIFBDSSBjb2RlIHdo
aWNoIHdvdWxkIGdyZWF0bHkgaW1wcm92ZSBYZW4gY29tcGxleGl0eS4gT25jZSBoYXJkd2FyZSBk
b21haW4gZW51bWVyYXRlcyB0aGUgZGV2aWNlIHRoZW4gaXQgd2lsbCBjb21tdW5pY2F0ZSB0byBY
RU4gdmlhIHRoZSBiZWxvdyBoeXBlcmNhbGwuDQo+Pj4NCj4+PiAjZGVmaW5lIFBIWVNERVZPUF9w
Y2lfZGV2aWNlX2FkZCAgICAgICAgMjUNCj4+PiBzdHJ1Y3QgcGh5c2Rldl9wY2lfZGV2aWNlX2Fk
ZCB7DQo+Pj4gICAgIHVpbnQxNl90IHNlZzsNCj4+PiAgICAgdWludDhfdCBidXM7DQo+Pj4gICAg
IHVpbnQ4X3QgZGV2Zm47DQo+Pj4gICAgIHVpbnQzMl90IGZsYWdzOw0KPj4+ICAgICBzdHJ1Y3Qg
ew0KPj4+ICAgICAJdWludDhfdCBidXM7DQo+Pj4gICAgIAl1aW50OF90IGRldmZuOw0KPj4+ICAg
ICB9IHBoeXNmbjsNCj4+PiAgICAgLyoNCj4+PiAgICAgKiBPcHRpb25hbCBwYXJhbWV0ZXJzIGFy
cmF5Lg0KPj4+ICAgICAqIEZpcnN0IGVsZW1lbnQgKFswXSkgaXMgUFhNIGRvbWFpbiBhc3NvY2lh
dGVkIHdpdGggdGhlIGRldmljZSAoaWYgKiBYRU5fUENJX0RFVl9QWE0gaXMgc2V0KQ0KPj4+ICAg
ICAqLw0KPj4+ICAgICB1aW50MzJfdCBvcHRhcnJbWEVOX0ZMRVhfQVJSQVlfRElNXTsNCj4+PiAg
ICAgfTsNCj4+Pg0KPj4+IEFzIHRoZSBoeXBlcmNhbGwgYXJndW1lbnQgaGFzIHRoZSBQQ0kgc2Vn
bWVudCBudW1iZXIsIFhFTiB3aWxsIGFjY2VzcyB0aGUgUENJIGNvbmZpZyBzcGFjZSBiYXNlZCBv
biB0aGlzIHNlZ21lbnQgbnVtYmVyIGFuZCBmaW5kIHRoZSBob3N0LWJyaWRnZSBjb3JyZXNwb25k
aW5nIHRvIHRoaXMgc2VnbWVudCBudW1iZXIuIEF0IHRoaXMgc3RhZ2UgaG9zdCBicmlkZ2UgaXMg
ZnVsbHkgaW5pdGlhbGl6ZWQgc28gdGhlcmUgd2lsbCBiZSBubyBpc3N1ZSB0byBhY2Nlc3MgdGhl
IGNvbmZpZyBzcGFjZS4NCj4+Pg0KPj4+IFhFTiB3aWxsIGFkZCB0aGUgUENJIGRldmljZXMgaW4g
dGhlIGxpbmtlZCBsaXN0IG1haW50YWluIGluIFhFTiB1c2luZyB0aGUgZnVuY3Rpb24gcGNpX2Fk
ZF9kZXZpY2UoKS4gWEVOIHdpbGwgYmUgYXdhcmUgb2YgYWxsIHRoZSBQQ0kgZGV2aWNlcyBvbiB0
aGUgc3lzdGVtIGFuZCBhbGwgdGhlIGRldmljZSB3aWxsIGJlIGFkZGVkIHRvIHRoZSBoYXJkd2Fy
ZSBkb21haW4uDQo+Pj4NCj4+PiBMaW1pdGF0aW9uczoNCj4+PiAqIFdoZW4gUENJIGRldmljZXMg
YXJlIGFkZGVkIHRvIFhFTiwgTVNJIGNhcGFiaWxpdHkgaXMgbm90IGluaXRpYWxpemVkIGluc2lk
ZSBYRU4gYW5kIG5vdCBzdXBwb3J0ZWQgYXMgb2Ygbm93Lg0KPj4+ICogQUNTIGNhcGFiaWxpdHkg
aXMgZGlzYWJsZSBmb3IgQVJNIGFzIG9mIG5vdyBhcyBhZnRlciBlbmFibGluZyBpdCBkZXZpY2Vz
IGFyZSBub3QgYWNjZXNzaWJsZS4NCj4+PiAqIERvbTBMZXNzIGltcGxlbWVudGF0aW9uIHdpbGwg
cmVxdWlyZSB0byBoYXZlIHRoZSBjYXBhY2l0eSBpbnNpZGUgWGVuIHRvIGRpc2NvdmVyIHRoZSBQ
Q0kgZGV2aWNlcyAod2l0aG91dCBkZXBlbmRpbmcgb24gRG9tMCB0byBkZWNsYXJlIHRoZW0gdG8g
WGVuKS4NCj4+IEkgdGhpbmsgaXQgaXMgZmluZSB0byBhc3N1bWUgdGhhdCBmb3IgZG9tMGxlc3Mg
dGhlICJmaXJtd2FyZSIgaGFzIHRha2VuDQo+PiBjYXJlIG9mIHNldHRpbmcgdXAgdGhlIEJBUnMg
Y29ycmVjdGx5LiBTdGFydGluZyB3aXRoIHRoYXQgYXNzdW1wdGlvbiwgaXQNCj4+IGxvb2tzIGxp
a2UgaXQgc2hvdWxkIGJlICJlYXN5IiB0byB3YWxrIHRoZSBQQ0kgdG9wb2xvZ3kgaW4gWGVuIHdo
ZW4vaWYNCj4+IHRoZXJlIGlzIG5vIGRvbTA/DQo+IFllcyBhcyB3ZSBkaXNjdXNzZWQgZHVyaW5n
IHRoZSBkZXNpZ24gc2Vzc2lvbiwgd2UgY3VycmVudGx5IHRoaW5rIHRoYXQgaXQgaXMgdGhlIHdh
eSB0byBnby4NCj4gV2UgYXJlIGZvciBub3cgcmVseWluZyBvbiBEb20wIHRvIGdldCB0aGUgbGlz
dCBvZiBQQ0kgZGV2aWNlcyBidXQgdGhpcyBpcyBkZWZpbml0ZWx5IHRoZSBzdHJhdGVneSB3ZSB3
b3VsZCBsaWtlIHRvIHVzZSB0byBoYXZlIERvbTAgc3VwcG9ydC4NCj4gSWYgdGhpcyBpcyB3b3Jr
aW5nIHdlbGwsIEkgZXZlbiB0aGluayB3ZSBjb3VsZCBnZXQgcmlkIG9mIHRoZSBoeXBlcmNhbGwg
YWxsIHRvZ2V0aGVyLg0KLi4uYW5kIHRoaXMgaXMgdGhlIHNhbWUgd2F5IG9mIGNvbmZpZ3VyaW5n
IGlmIGVudW1lcmF0aW9uIGhhcHBlbnMgaW4gdGhlIGJvb3Rsb2FkZXI/DQoNCkkgZG8gc3VwcG9y
dCB0aGUgaWRlYSB3ZSBnbyBhd2F5IGZyb20gUEhZU0RFVk9QX3BjaV9kZXZpY2VfYWRkLCBidXQg
ZHJpdmVyIGRvbWFpbg0KDQpqdXN0IHNpZ25hbHMgWGVuIHRoYXQgdGhlIGVudW1lcmF0aW9uIGlz
IGRvbmUgYW5kIFhlbiBjYW4gdHJhdmVyc2UgdGhlIGJ1cyBieSB0aGF0IHRpbWUuDQoNClBsZWFz
ZSBhbHNvIG5vdGUsIHRoYXQgdGhlcmUgYXJlIGFjdHVhbGx5IDMgY2FzZXMgcG9zc2libGUgd3J0
IHdoZXJlIHRoZSBlbnVtZXJhdGlvbiBhbmQNCg0KY29uZmlndXJhdGlvbiBoYXBwZW5zOiBib290
IGZpcm13YXJlLCBEb20wLCBYZW4uIFNvLCBpdCBzZWVtcyB3ZQ0KDQphcmUgZ29pbmcgdG8gaGF2
ZSBkaWZmZXJlbnQgYXBwcm9hY2hlcyBmb3IgdGhlIGZpcnN0IHR3byAoc2VlIG15IGNvbW1lbnQg
YWJvdmUgb24NCg0KdGhlIGh5cGVyY2FsbCB1c2UgaW4gRG9tMCkuIFNvLCB3YWxraW5nIHRoZSBi
dXMgb3Vyc2VsdmVzIGluIFhlbiBzZWVtcyB0byBiZSBnb29kIGZvciBhbGwNCg0KdGhlIHVzZS1j
YXNlcyBhYm92ZQ0KDQo+DQo+DQo+Pg0KPj4+ICMgRW5hYmxlIHRoZSBleGlzdGluZyB4ODYgdmly
dHVhbCBQQ0kgc3VwcG9ydCBmb3IgQVJNOg0KPj4+DQo+Pj4gVGhlIGV4aXN0aW5nIFZQQ0kgc3Vw
cG9ydCBhdmFpbGFibGUgZm9yIFg4NiBpcyBhZGFwdGVkIGZvciBBcm0uIFdoZW4gdGhlIGRldmlj
ZSBpcyBhZGRlZCB0byBYRU4gdmlhIHRoZSBoeXBlciBjYWxsIOKAnFBIWVNERVZPUF9wY2lfZGV2
aWNlX2FkZOKAnSwgVlBDSSBoYW5kbGVyIGZvciB0aGUgY29uZmlnIHNwYWNlIGFjY2VzcyBpcyBh
ZGRlZCB0byB0aGUgUENJIGRldmljZSB0byBlbXVsYXRlIHRoZSBQQ0kgZGV2aWNlcy4NCj4+Pg0K
Pj4+IEEgTU1JTyB0cmFwIGhhbmRsZXIgZm9yIHRoZSBQQ0kgRUNBTSBzcGFjZSBpcyByZWdpc3Rl
cmVkIGluIFhFTiBzbyB0aGF0IHdoZW4gZ3Vlc3QgaXMgdHJ5aW5nIHRvIGFjY2VzcyB0aGUgUENJ
IGNvbmZpZyBzcGFjZSwgWEVOIHdpbGwgdHJhcCB0aGUgYWNjZXNzIGFuZCBlbXVsYXRlIHJlYWQv
d3JpdGUgdXNpbmcgdGhlIFZQQ0kgYW5kIG5vdCB0aGUgcmVhbCBQQ0kgaGFyZHdhcmUuDQpKdXN0
IHRvIG1ha2UgaXQgY2xlYXI6IERvbTAgc3RpbGwgYWNjZXNzIHRoZSBidXMgZGlyZWN0bHkgdy9v
IGVtdWxhdGlvbiwgcmlnaHQ/DQo+Pj4NCj4+PiBMaW1pdGF0aW9uOg0KPj4+ICogTm8gaGFuZGxl
ciBpcyByZWdpc3RlciBmb3IgdGhlIE1TSSBjb25maWd1cmF0aW9uLg0KPj4+ICogT25seSBsZWdh
Y3kgaW50ZXJydXB0IGlzIHN1cHBvcnRlZCBhbmQgdGVzdGVkIGFzIG9mIG5vdywgTVNJIGlzIG5v
dCBpbXBsZW1lbnRlZCBhbmQgdGVzdGVkLg0KPj4+DQo+Pj4gIyBBc3NpZ24gdGhlIGRldmljZSB0
byB0aGUgZ3Vlc3Q6DQo+Pj4NCj4+PiBBc3NpZ24gdGhlIFBDSSBkZXZpY2UgZnJvbSB0aGUgaGFy
ZHdhcmUgZG9tYWluIHRvIHRoZSBndWVzdCBpcyBkb25lIHVzaW5nIHRoZSBiZWxvdyBndWVzdCBj
b25maWcgb3B0aW9uLiBXaGVuIHhsIHRvb2wgY3JlYXRlIHRoZSBkb21haW4sIFBDSSBkZXZpY2Vz
IHdpbGwgYmUgYXNzaWduZWQgdG8gdGhlIGd1ZXN0IFZQQ0kgYnVzLg0KPj4+IAlwY2k9WyAiUENJ
X1NQRUNfU1RSSU5HIiwgIlBDSV9TUEVDX1NUUklORyIsIC4uLl0NCj4+Pg0KPj4+IEd1ZXN0IHdp
bGwgYmUgb25seSBhYmxlIHRvIGFjY2VzcyB0aGUgYXNzaWduZWQgZGV2aWNlcyBhbmQgc2VlIHRo
ZSBicmlkZ2VzLiBHdWVzdCB3aWxsIG5vdCBiZSBhYmxlIHRvIGFjY2VzcyBvciBzZWUgdGhlIGRl
dmljZXMgdGhhdCBhcmUgbm8gYXNzaWduZWQgdG8gaGltLg0KRG9lcyB0aGlzIG1lYW4gdGhhdCB3
ZSBkbyBub3QgbmVlZCB0byBjb25maWd1cmUgdGhlIGJyaWRnZXMgYXMgdGhvc2UgYXJlIGV4cG9z
ZWQgdG8gdGhlIGd1ZXN0IGltcGxpY2l0bHk/DQo+Pj4NCj4+PiBMaW1pdGF0aW9uOg0KPj4+ICog
QXMgb2Ygbm93IGFsbCB0aGUgYnJpZGdlcyBpbiB0aGUgUENJIGJ1cyBhcmUgc2VlbiBieSB0aGUg
Z3Vlc3Qgb24gdGhlIFZQQ0kgYnVzLg0KDQpTbywgd2hhdCBoYXBwZW5zIGlmIGEgZ3Vlc3QgdHJp
ZXMgdG8gYWNjZXNzIHRoZSBicmlkZ2UgdGhhdCBkb2Vzbid0IGhhdmUgdGhlIGFzc2lnbmVkDQoN
ClBDSSBkZXZpY2U/IEUuZy4gd2UgcGFzcyBQQ0llX2RldjAgd2hpY2ggaXMgYmVoaW5kIEJyaWRn
ZTAgYW5kIHRoZSBndWVzdCBhbHNvIHNlZXMNCg0KQnJpZGdlMSBhbmQgdHJpZXMgdG8gYWNjZXNz
IGRldmljZXMgYmVoaW5kIGl0IGR1cmluZyB0aGUgZW51bWVyYXRpb24uDQoNCkNvdWxkIHlvdSBw
bGVhc2UgY2xhcmlmeT8NCg0KPj4gV2UgbmVlZCB0byBjb21lIHVwIHdpdGggc29tZXRoaW5nIHNp
bWlsYXIgZm9yIGRvbTBsZXNzIHRvby4gSXQgY291bGQgYmUNCj4+IGV4YWN0bHkgdGhlIHNhbWUg
dGhpbmcgKGEgbGlzdCBvZiBCREZzIGFzIHN0cmluZ3MgYXMgYSBkZXZpY2UgdHJlZQ0KPj4gcHJv
cGVydHkpIG9yIHNvbWV0aGluZyBlbHNlIGlmIHdlIGNhbiBjb21lIHVwIHdpdGggYSBiZXR0ZXIg
aWRlYS4NCj4gRnVsbHkgYWdyZWUuDQo+IE1heWJlIGEgdHJlZSB0b3BvbG9neSBjb3VsZCBhbGxv
dyBtb3JlIHBvc3NpYmlsaXRpZXMgKGxpa2UgZ2l2aW5nIEJBUiB2YWx1ZXMpIGluIHRoZSBmdXR1
cmUuDQo+Pg0KPj4+ICMgRW11bGF0ZWQgUENJIGRldmljZSB0cmVlIG5vZGUgaW4gbGlieGw6DQo+
Pj4NCj4+PiBMaWJ4bCBpcyBjcmVhdGluZyBhIHZpcnR1YWwgUENJIGRldmljZSB0cmVlIG5vZGUg
aW4gdGhlIGRldmljZSB0cmVlIHRvIGVuYWJsZSB0aGUgZ3Vlc3QgT1MgdG8gZGlzY292ZXIgdGhl
IHZpcnR1YWwgUENJIGR1cmluZyBndWVzdCBib290LiBXZSBpbnRyb2R1Y2VkIHRoZSBuZXcgY29u
ZmlnIG9wdGlvbiBbdnBjaT0icGNpX2VjYW0iXSBmb3IgZ3Vlc3RzLiBXaGVuIHRoaXMgY29uZmln
IG9wdGlvbiBpcyBlbmFibGVkIGluIGEgZ3Vlc3QgY29uZmlndXJhdGlvbiwgYSBQQ0kgZGV2aWNl
IHRyZWUgbm9kZSB3aWxsIGJlIGNyZWF0ZWQgaW4gdGhlIGd1ZXN0IGRldmljZSB0cmVlLg0KPj4+
DQo+Pj4gQSBuZXcgYXJlYSBoYXMgYmVlbiByZXNlcnZlZCBpbiB0aGUgYXJtIGd1ZXN0IHBoeXNp
Y2FsIG1hcCBhdCB3aGljaCB0aGUgVlBDSSBidXMgaXMgZGVjbGFyZWQgaW4gdGhlIGRldmljZSB0
cmVlIChyZWcgYW5kIHJhbmdlcyBwYXJhbWV0ZXJzIG9mIHRoZSBub2RlKS4gQSB0cmFwIGhhbmRs
ZXIgZm9yIHRoZSBQQ0kgRUNBTSBhY2Nlc3MgZnJvbSBndWVzdCBoYXMgYmVlbiByZWdpc3RlcmVk
IGF0IHRoZSBkZWZpbmVkIGFkZHJlc3MgYW5kIHJlZGlyZWN0cyByZXF1ZXN0cyB0byB0aGUgVlBD
SSBkcml2ZXIgaW4gWGVuLg0KPj4+DQo+Pj4gTGltaXRhdGlvbjoNCj4+PiAqIE9ubHkgb25lIFBD
SSBkZXZpY2UgdHJlZSBub2RlIGlzIHN1cHBvcnRlZCBhcyBvZiBub3cuDQo+PiBJIHRoaW5rIHZw
Y2k9InBjaV9lY2FtIiBzaG91bGQgYmUgb3B0aW9uYWw6IGlmIHBjaT1bICJQQ0lfU1BFQ19TVFJJ
TkciLA0KPj4gLi4uXSBpcyBzcGVjaWZpZmVkLCB0aGVuIHZwY2k9InBjaV9lY2FtIiBpcyBpbXBs
aWVkLg0KPj4NCj4+IHZwY2k9InBjaV9lY2FtIiBpcyBvbmx5IHVzZWZ1bCBvbmUgZGF5IGluIHRo
ZSBmdXR1cmUgd2hlbiB3ZSB3YW50IHRvIGJlDQo+PiBhYmxlIHRvIGVtdWxhdGUgb3RoZXIgbm9u
LWVjYW0gaG9zdCBicmlkZ2VzLiBGb3Igbm93IHdlIGNvdWxkIGV2ZW4gc2tpcA0KPj4gaXQuDQo+
IFRoaXMgd291bGQgY3JlYXRlIGEgcHJvYmxlbSBpZiB4bCBpcyB1c2VkIHRvIGFkZCBhIFBDSSBk
ZXZpY2UgYXMgd2UgbmVlZCB0aGUgUENJIG5vZGUgdG8gYmUgaW4gdGhlIERUQiB3aGVuIHRoZSBn
dWVzdCBpcyBjcmVhdGVkLg0KPiBJIGFncmVlIHRoaXMgaXMgbm90IG5lZWRlZCBidXQgcmVtb3Zp
bmcgaXQgbWlnaHQgY3JlYXRlIG1vcmUgY29tcGxleGl0eSBpbiB0aGUgY29kZS4NCg0KSSB3b3Vs
ZCBzdWdnZXN0IHdlIGhhdmUgaXQgZnJvbSBkYXkgMCBhcyB0aGVyZSBhcmUgcGxlbnR5IG9mIEhX
IGF2YWlsYWJsZSB3aGljaCBpcyBub3QgRUNBTS4NCg0KSGF2aW5nIHZwY2kgYWxsb3dzIG90aGVy
IGJyaWRnZXMgdG8gYmUgc3VwcG9ydGVkDQoNCj4NCj4gQmVydHJhbmQNCj4NCj4+DQo+Pj4gQkFS
IHZhbHVlIGFuZCBJT01FTSBtYXBwaW5nOg0KPj4+DQo+Pj4gTGludXggZ3Vlc3Qgd2lsbCBkbyB0
aGUgUENJIGVudW1lcmF0aW9uIGJhc2VkIG9uIHRoZSBhcmVhIHJlc2VydmVkIGZvciBFQ0FNIGFu
ZCBJT01FTSByYW5nZXMgaW4gdGhlIFZQQ0kgZGV2aWNlIHRyZWUgbm9kZS4gT25jZSBQQ0kJZGV2
aWNlIGlzIGFzc2lnbmVkIHRvIHRoZSBndWVzdCwgWEVOIHdpbGwgbWFwIHRoZSBndWVzdCBQQ0kg
SU9NRU0gcmVnaW9uIHRvIHRoZSByZWFsIHBoeXNpY2FsIElPTUVNIHJlZ2lvbiBvbmx5IGZvciB0
aGUgYXNzaWduZWQgZGV2aWNlcy4NCj4+Pg0KPj4+IEFzIG9mIG5vdyB3ZSBoYXZlIG5vdCBtb2Rp
ZmllZCB0aGUgZXhpc3RpbmcgVlBDSSBjb2RlIHRvIG1hcCB0aGUgZ3Vlc3QgUENJIElPTUVNIHJl
Z2lvbiB0byB0aGUgcmVhbCBwaHlzaWNhbCBJT01FTSByZWdpb24uIFdlIHVzZWQgdGhlIGV4aXN0
aW5nIGd1ZXN0IOKAnGlvbWVt4oCdIGNvbmZpZyBvcHRpb24gdG8gbWFwIHRoZSByZWdpb24uDQo+
Pj4gRm9yIGV4YW1wbGU6DQo+Pj4gCUd1ZXN0IHJlc2VydmVkIElPTUVNIHJlZ2lvbjogIDB4MDQw
MjAwMDANCj4+PiAgICAgCVJlYWwgcGh5c2ljYWwgSU9NRU0gcmVnaW9uOjB4NTAwMDAwMDANCj4+
PiAgICAgCUlPTUVNIHNpemU6MTI4TUINCj4+PiAgICAgCWlvbWVtIGNvbmZpZyB3aWxsIGJlOiAg
IGlvbWVtID0gWyIweDUwMDAwLDB4ODAwMEAweDQwMjAiXQ0KPj4+DQo+Pj4gVGhlcmUgaXMgbm8g
bmVlZCB0byBtYXAgdGhlIEVDQU0gc3BhY2UgYXMgWEVOIGFscmVhZHkgaGF2ZSBhY2Nlc3MgdG8g
dGhlIEVDQU0gc3BhY2UgYW5kIFhFTiB3aWxsIHRyYXAgRUNBTSBhY2Nlc3NlcyBmcm9tIHRoZSBn
dWVzdCBhbmQgd2lsbCBwZXJmb3JtIHJlYWQvd3JpdGUgb24gdGhlIFZQQ0kgYnVzLg0KPj4+DQo+
Pj4gSU9NRU0gYWNjZXNzIHdpbGwgbm90IGJlIHRyYXBwZWQgYW5kIHRoZSBndWVzdCB3aWxsIGRp
cmVjdGx5IGFjY2VzcyB0aGUgSU9NRU0gcmVnaW9uIG9mIHRoZSBhc3NpZ25lZCBkZXZpY2Ugdmlh
IHN0YWdlLTIgdHJhbnNsYXRpb24uDQo+Pj4NCj4+PiBJbiB0aGUgc2FtZSwgd2UgbWFwcGVkIHRo
ZSBhc3NpZ25lZCBkZXZpY2VzIElSUSB0byB0aGUgZ3Vlc3QgdXNpbmcgYmVsb3cgY29uZmlnIG9w
dGlvbnMuDQo+Pj4gCWlycXM9IFsgTlVNQkVSLCBOVU1CRVIsIC4uLl0NCj4+Pg0KPj4+IExpbWl0
YXRpb246DQo+Pj4gKiBOZWVkIHRvIGF2b2lkIHRoZSDigJxpb21lbeKAnSBhbmQg4oCcaXJx4oCd
IGd1ZXN0IGNvbmZpZyBvcHRpb25zIGFuZCBtYXAgdGhlIElPTUVNIHJlZ2lvbiBhbmQgSVJRIGF0
IHRoZSBzYW1lIHRpbWUgd2hlbiBkZXZpY2UgaXMgYXNzaWduZWQgdG8gdGhlIGd1ZXN0IHVzaW5n
IHRoZSDigJxwY2nigJ0gZ3Vlc3QgY29uZmlnIG9wdGlvbnMgd2hlbiB4bCBjcmVhdGVzIHRoZSBk
b21haW4uDQo+Pj4gKiBFbXVsYXRlZCBCQVIgdmFsdWVzIG9uIHRoZSBWUENJIGJ1cyBzaG91bGQg
cmVmbGVjdCB0aGUgSU9NRU0gbWFwcGVkIGFkZHJlc3MuDQo+Pj4gKiBYODYgbWFwcGluZyBjb2Rl
IHNob3VsZCBiZSBwb3J0ZWQgb24gQXJtIHNvIHRoYXQgdGhlIHN0YWdlLTIgdHJhbnNsYXRpb24g
aXMgYWRhcHRlZCB3aGVuIHRoZSBndWVzdCBpcyBkb2luZyBhIG1vZGlmaWNhdGlvbiBvZiB0aGUg
QkFSIHJlZ2lzdGVycyB2YWx1ZXMgKHRvIG1hcCB0aGUgYWRkcmVzcyByZXF1ZXN0ZWQgYnkgdGhl
IGd1ZXN0IGZvciBhIHNwZWNpZmljIElPTUVNIHRvIHRoZSBhZGRyZXNzIGFjdHVhbGx5IGNvbnRh
aW5lZCBpbiB0aGUgcmVhbCBCQVIgcmVnaXN0ZXIgb2YgdGhlIGNvcnJlc3BvbmRpbmcgZGV2aWNl
KS4NCj4+Pg0KPj4+ICMgU01NVSBjb25maWd1cmF0aW9uIGZvciBndWVzdDoNCj4+Pg0KPj4+IFdo
ZW4gYXNzaWduaW5nIFBDSSBkZXZpY2VzIHRvIGEgZ3Vlc3QsIHRoZSBTTU1VIGNvbmZpZ3VyYXRp
b24gc2hvdWxkIGJlIHVwZGF0ZWQgdG8gcmVtb3ZlIGFjY2VzcyB0byB0aGUgaGFyZHdhcmUgZG9t
YWluIG1lbW9yeQ0KDQpTbywgYXMgdGhlIGhhcmR3YXJlIGRvbWFpbiBzdGlsbCBoYXMgYWNjZXNz
IHRvIHRoZSBQQ0kgY29uZmlndXJhdGlvbiBzcGFjZSwgd2UNCg0KY2FuIHBvdGVudGlhbGx5IGhh
dmUgYSBjb25kaXRpb24gd2hlbiBEb20wIGFjY2Vzc2VzIHRoZSBkZXZpY2UuIEFGQUlVLCBpZiB3
ZSBoYXZlDQoNCnBjaSBmcm9udC9iYWNrIHRoZW4gYmVmb3JlIGFzc2lnbmluZyB0aGUgZGV2aWNl
IHRvIHRoZSBndWVzdCB3ZSB1bmJpbmQgaXQgZnJvbSB0aGUNCg0KcmVhbCBkcml2ZXIgYW5kIGJp
bmQgdG8gdGhlIGJhY2suIEFyZSB3ZSBnb2luZyB0byBkbyBzb21ldGhpbmcgc2ltaWxhciBoZXJl
Pw0KDQoNClRoYW5rIHlvdSwNCg0KT2xla3NhbmRyDQoNCj4+PiAgIGFuZCBhZGQNCj4+PiBjb25m
aWd1cmF0aW9uIHRvIGhhdmUgYWNjZXNzIHRvIHRoZSBndWVzdCBtZW1vcnkgd2l0aCB0aGUgcHJv
cGVyIGFkZHJlc3MgdHJhbnNsYXRpb24gc28gdGhhdCB0aGUgZGV2aWNlIGNhbiBkbyBETUEgb3Bl
cmF0aW9ucyBmcm9tIGFuZCB0byB0aGUgZ3Vlc3QgbWVtb3J5IG9ubHkuDQo+Pj4NCj4+PiAjIE1T
SS9NU0ktWCBzdXBwb3J0Og0KPj4+IE5vdCBpbXBsZW1lbnQgYW5kIHRlc3RlZCBhcyBvZiBub3cu
DQo+Pj4NCj4+PiAjIElUUyBzdXBwb3J0Og0KPj4+IE5vdCBpbXBsZW1lbnQgYW5kIHRlc3RlZCBh
cyBvZiBub3cuDQpbMV0gaHR0cHM6Ly9saXN0cy54ZW4ub3JnL2FyY2hpdmVzL2h0bWwveGVuLWRl
dmVsLzIwMTctMDUvbXNnMDI2NzQuaHRtbA==


From xen-devel-bounces@lists.xenproject.org Fri Jul 17 07:48:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jul 2020 07:48:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jwL6D-00030z-U3; Fri, 17 Jul 2020 07:48:05 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ybK2=A4=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1jwL6C-00030u-La
 for xen-devel@lists.xenproject.org; Fri, 17 Jul 2020 07:48:04 +0000
X-Inumbo-ID: d837ec7a-c801-11ea-8496-bc764e2007e4
Received: from mail-wm1-x32a.google.com (unknown [2a00:1450:4864:20::32a])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d837ec7a-c801-11ea-8496-bc764e2007e4;
 Fri, 17 Jul 2020 07:48:04 +0000 (UTC)
Received: by mail-wm1-x32a.google.com with SMTP id l2so15893082wmf.0
 for <xen-devel@lists.xenproject.org>; Fri, 17 Jul 2020 00:48:04 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
 :mime-version:content-transfer-encoding:content-language
 :thread-index; bh=q1mnJvk5nRU6AupHxotXt81TJLZlwg220QYxb1o3trc=;
 b=RKOAhZJziDDC1OKdUkiZy68Inh1a4MzvRtvUcThEGiO57ts5/oA90TXqrybPYD5t4j
 kQHAtqDw//728yqU9mvRRVq6Di3TAUMRBVX1WphsOZHl7/9VXFZnnWInQHbpFbuBF/tP
 2StN3nP7ov3XJvp8P/cWWHEo5gkeQRFn1EHJ/p2tArulsHG4eFG/SCvtcS+60yiUdI6Y
 odMJROQvIQ3cCQjBV/+Bwl3Z4E16xMeZSOZSV35eCcDk/uUfqa5XGfFigf0ZBdj84Iiq
 vvyI6PfMlRJVUAE44NsTpKTDObXpeKO1nQ8sh5/tVr/CnfFExFRfE64oAK8YzQnx6oyK
 6oCw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
 :subject:date:message-id:mime-version:content-transfer-encoding
 :content-language:thread-index;
 bh=q1mnJvk5nRU6AupHxotXt81TJLZlwg220QYxb1o3trc=;
 b=YkMhxa+vz2dzCHc1eDxzx9W5NPiVYws5Zdow3H7+RqfO8ngA397ok4sGyUZhD8sOSC
 5MLbRqKXrCi2SIsMwNWfipSfKWYQJZK2cvZbLKCTrzXtY/voI/HD7n51NORt7HJMhM3X
 rYPf64CrkdcRM8ArJyfe/Evf8DElG6XSm1t8O6S4O7UydcXARmsHx4hYxdGRnXcT6G03
 uuyCc6lQGXWNtfEXzm6Y6F6CLYd5dc95S1ZtrSmPYyJBn4i90EjPekQEs18TtnYhOSUj
 x9JdaIMaGkepcZzVvbWgu+p8kvFNKCWrW2k11l43JTzbK83UGRTRAj+wphhuYeUH+15c
 Wfew==
X-Gm-Message-State: AOAM533wycKm8wOyW6kyMsV7pIyUMSx5JV+COwK0egTaKmNYNbbkVMZB
 SIAdOwaOXyZqh7J6XnBY2C8=
X-Google-Smtp-Source: ABdhPJzgrzhwXL4HTew6EcS8j1IeB8pn7V7tO1kAxuHLqzRSTk8JTiLaMri21GPHDvVZwehINNl4IQ==
X-Received: by 2002:a1c:cc12:: with SMTP id h18mr8482009wmb.56.1594972083173; 
 Fri, 17 Jul 2020 00:48:03 -0700 (PDT)
Received: from CBGR90WXYV0 (54-240-197-227.amazon.com. [54.240.197.227])
 by smtp.gmail.com with ESMTPSA id e8sm12469612wrp.26.2020.07.17.00.48.02
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Fri, 17 Jul 2020 00:48:02 -0700 (PDT)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
To: "'Brian Marcotte'" <marcotte@panix.com>,
 "'Paul Durrant'" <xadimgnik@gmail.com>
References: <AC8105C4-6DAD-4AB0-AC3F-B4CDD151CDEB@ispire.me>
 <763e69df40604c51bb72477c706ec24b@EX13D32EUC003.ant.amazon.com>
 <20200715191705.GA20643@panix.com> <000b01d65b40$ab7fead0$027fc070$@xen.org>
 <20200716202331.GA9471@panix.com>
In-Reply-To: <20200716202331.GA9471@panix.com>
Subject: RE: [EXTERNAL] [Xen-devel] XEN Qdisk Ceph rbd support broken?
Date: Fri, 17 Jul 2020 08:48:01 +0100
Message-ID: <003401d65c0e$995bb4f0$cc131ed0$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="US-ASCII"
Content-Transfer-Encoding: 7bit
X-Mailer: Microsoft Outlook 16.0
Content-Language: en-gb
Thread-Index: AQFy/a26FkUIi5mjo8XOqnAYvmCCoAJQGLlhAdA3FVkBzMgo7gIKkLk6qZJjndA=
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Reply-To: paul@xen.org
Cc: 'Jules' <jules@ispire.me>, xen-devel@lists.xenproject.org,
 oleksandr_grytsov@epam.com, wl@xen.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> -----Original Message-----
> From: Brian Marcotte <marcotte@panix.com>
> Sent: 16 July 2020 21:24
> To: Paul Durrant <xadimgnik@gmail.com>
> Cc: paul@xen.org; 'Jules' <jules@ispire.me>; xen-devel@lists.xenproject.org;
> oleksandr_grytsov@epam.com; wl@xen.org
> Subject: Re: [EXTERNAL] [Xen-devel] XEN Qdisk Ceph rbd support broken?
> 
> > Your issue stems from the auto-creation code in xen-block:
> >
> > The "aio:rbd:rbd/machine.disk0" string is generated by libxl. It does
> > look a little odd and will fool the parser there, but the error you see
> > after modifying the string appears to be because QEMU's QMP block
> > device instantiation code is objecting to a missing parameter. Older
> > QEMUs circumvented that code, which is almost certainly why you don't
> > see the issue with QEMU versions 2 or 3.
> 
> Xen 4.13 and 4.14 include QEMU 4 and 5. They don't work with Ceph/RBD.
> 
> Are you saying that xl/libxl is doing the right thing and the problem
> needs to be fixed in QEMU?

Unfortunately, from what you describe, it sounds like there is a problem in both. To get something going, you could bring a domain
up paused and then try manually adding your rbd device using the QMP shell.
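
For reference, the QMP command for that experiment would look something like the following (a sketch only: the node name is invented here, and the exact set of rbd options accepted varies by QEMU version):

```json
{ "execute": "blockdev-add",
  "arguments": {
    "driver": "rbd",
    "node-name": "xvda-rbd",
    "pool": "rbd",
    "image": "machine.disk0"
  } }
```

If adding the device by hand succeeds, that would point at the libxl-generated configuration rather than QEMU's rbd driver itself.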

It would be useful if a toolstack maintainer could take a look at this issue in the near future.

  Paul

> 
> --
> - Brian



From xen-devel-bounces@lists.xenproject.org Fri Jul 17 07:57:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jul 2020 07:57:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jwLEq-00043h-B2; Fri, 17 Jul 2020 07:57:00 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YfW4=A4=redhat.com=mcascell@srs-us1.protection.inumbo.net>)
 id 1jwLCf-0003zk-6h
 for xen-devel@lists.xen.org; Fri, 17 Jul 2020 07:54:45 +0000
X-Inumbo-ID: c4f333ee-c802-11ea-bb8b-bc764e2007e4
Received: from us-smtp-1.mimecast.com (unknown [207.211.31.120])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id c4f333ee-c802-11ea-bb8b-bc764e2007e4;
 Fri, 17 Jul 2020 07:54:40 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
 s=mimecast20190719; t=1594972480;
 h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
 in-reply-to:in-reply-to:references:references;
 bh=mCVsudiUaJnZfsa/Js80mZXCIgbfZRyKNbODyzDeHsc=;
 b=dDFq5leE4PegyJBPQ2PL/Sc8qZVh/IdIpJ/aEJ2va+4DJvxvmu1BO810e1DrgwasSYoibq
 LjJSjGGDtk3aRk2XHbX0Uq0mN+8G/0yscW5MEnLmyBhwaBowyIrKU83dBgowbr8dFQrEnb
 grxs2C4Hl67gspLJmz7TaciNkcZbXOo=
Received: from mail-ed1-f71.google.com (mail-ed1-f71.google.com
 [209.85.208.71]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-311-IB7N8f-FN0KslI3AfRGdPg-1; Fri, 17 Jul 2020 03:54:33 -0400
X-MC-Unique: IB7N8f-FN0KslI3AfRGdPg-1
Received: by mail-ed1-f71.google.com with SMTP id u25so5396655edq.1
 for <xen-devel@lists.xen.org>; Fri, 17 Jul 2020 00:54:33 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc;
 bh=mCVsudiUaJnZfsa/Js80mZXCIgbfZRyKNbODyzDeHsc=;
 b=ICPnR83sorxmZEHOrFccK759XtqMVQ7Z3QFK71k76HncC+OgGVKaDTuWA3Y1hIwbbw
 zLKdU6FU4GhHX3AgHS+Gkfwy+GQvLthWzSMbl5ii3GrGsMuVrzwNFbsvb6UBpRWtkYYM
 hJJOZ7UX+isCXO9fKw0Qeui2jmlzM9y/zJPqPi8lIIkVgh5cQ7V9p0rIjjD8sXSEydzL
 aWUI9D9xzl+v7Fl9tpGh+Yf8M+77HaoTI+A3KBKv5gZKNKuXsdr/ItTcDeQKuuqDP/hR
 RbpqaGcVJE13VsUcHHgDpu4x89lAI5O+AG3UMltVjT4B4B6bk6VdZAgQ/2yVqs+ZwWdy
 PsQQ==
X-Gm-Message-State: AOAM530PU+P5PUK3ykSJllm2DnnBTh4XABZpVzJXod8x6uMICEuTStxW
 FxdBXZ/znOix5/lmR3CaPUFDN83VXUg2yGJUJtn0eR4uetkwAWfljie2a0EiyqMvGjFXdDAD1Ub
 EeT+MLaFnAeDMiPqZoRpeYYJmZDvYBA7JWg==
X-Received: by 2002:a05:6402:128c:: with SMTP id
 w12mr8386843edv.65.1594972472439; 
 Fri, 17 Jul 2020 00:54:32 -0700 (PDT)
X-Google-Smtp-Source: ABdhPJxLKN6gyw2bSTiwmuJkYyGhJ40svEPXSZ9d0JZod1YuwomtXujKsvrP2uzhY6aXV7fbZddiMBwbXT37wvkoyuk=
X-Received: by 2002:a05:6402:128c:: with SMTP id
 w12mr8386835edv.65.1594972472226; 
 Fri, 17 Jul 2020 00:54:32 -0700 (PDT)
MIME-Version: 1.0
References: <E1jw3ms-0006i6-Se@xenbits.xenproject.org>
In-Reply-To: <E1jw3ms-0006i6-Se@xenbits.xenproject.org>
From: Mauro Matteo Cascella <mcascell@redhat.com>
Date: Fri, 17 Jul 2020 09:54:21 +0200
Message-ID: <CAA8xKjVib9UERsMrAy3nNdVssNxLciXTmmhmXqq1gvhO16URew@mail.gmail.com>
Subject: Re: [oss-security] Xen Security Advisory 329 v2 - Linux ioperm bitmap
 context switching issues
To: oss-security@lists.openwall.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: multipart/alternative; boundary="000000000000a245b105aa9e773c"
X-Mailman-Approved-At: Fri, 17 Jul 2020 07:56:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-users@lists.xen.org, xen-announce@lists.xen.org,
 "Xen.org security team" <security-team-members@xen.org>,
 xen-devel@lists.xen.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

--000000000000a245b105aa9e773c
Content-Type: text/plain; charset="UTF-8"

Hello,

Will a CVE be assigned to this flaw?

Thanks,

On Thu, Jul 16, 2020 at 3:21 PM Xen.org security team <security@xen.org>
wrote:

> -----BEGIN PGP SIGNED MESSAGE-----
> Hash: SHA256
>
>                     Xen Security Advisory XSA-329
>                               version 2
>
>              Linux ioperm bitmap context switching issues
>
> UPDATES IN VERSION 2
> ====================
>
> Public release.
>
> ISSUE DESCRIPTION
> =================
>
> Linux 5.5 overhauled the internal state handling for the iopl() and ioperm()
> system calls.  Unfortunately, one aspect of context switching wasn't wired up
> correctly for the Xen PVOps case.
>
> IMPACT
> ======
>
> IO port permissions don't get rescinded when context switching to an
> unprivileged task.  Therefore, all userspace can use the IO ports granted to
> the most recently scheduled task with IO port permissions.
>
> VULNERABLE SYSTEMS
> ==================
>
> Only x86 guests are vulnerable.
>
> All versions of Linux from 5.5 are potentially vulnerable.
>
> Linux is only vulnerable when running as an x86 PV guest.  Linux is not
> vulnerable when running as an x86 HVM/PVH guest.
>
> The vulnerability can only be exploited in domains which have been granted
> access to IO ports by Xen.  This is typically only the hardware domain, and
> guests configured with PCI Passthrough.
>
> MITIGATION
> ==========
>
> Running only HVM/PVH guests avoids the vulnerability.
>
> CREDITS
> =======
>
> This issue was discovered by Andy Lutomirski.
>
> RESOLUTION
> ==========
>
> Applying the appropriate attached patch resolves this issue.
>
> xsa329.patch           Linux 5.5 and later
>
> $ sha256sum xsa329*
> cdb5ac9bfd21192b5965e8ec0a1c4fcf12d0a94a962a8158cd27810e6aa362f0
> xsa329.patch
> $
>
> DEPLOYMENT DURING EMBARGO
> =========================
>
> Deployment of the patches and/or mitigations described above (or
> others which are substantially similar) is permitted during the
> embargo, even on public-facing systems with untrusted guest users and
> administrators.
>
> But: Distribution of updated software is prohibited (except to other
> members of the predisclosure list).
>
> Predisclosure list members who wish to deploy significantly different
> patches and/or mitigations, please contact the Xen Project Security
> Team.
>
>
> (Note: this during-embargo deployment notice is retained in
> post-embargo publicly released Xen Project advisories, even though it
> is then no longer applicable.  This is to enable the community to have
> oversight of the Xen Project Security Team's decisionmaking.)
>
> For more information about permissible uses of embargoed information,
> consult the Xen Project community's agreed Security Policy:
>   http://www.xenproject.org/security-policy.html
> -----BEGIN PGP SIGNATURE-----
>
> iQFABAEBCAAqFiEEI+MiLBRfRHX6gGCng/4UyVfoK9kFAl8QU6EMHHBncEB4ZW4u
> b3JnAAoJEIP+FMlX6CvZ/sEIAMiCOnz119KTlRU50HTwa4pvIgLphf9htTbPzHXS
> iEb8yINqMxmep8NRcAzwFREQP+Z4Tue1upt31Vx0RPkFZpUklLuuBSXsV0JA7+UM
> LSGyWhkzDdnfj6iPUHycGmFzRTzkbB7qfcMj7khCvuYtSNbTUdOgUq04ngZksrSJ
> UMhfgUNKXawULKvVe7572L/AQTmMXK8eaolb+eWtf1U2pFkZQR8GWoLmiFbKLks2
> X2tRUF4U4cHEBzxXRzYrD1ArWLajqK6hQmauwgkCCSowvCHoD1dTv55GlrlEo4od
> MSB6YOVLl7HJuUw1GmwlKjA8XqStHq1Fi0urvlKCfHfK2Wk=
> =MP+m
> -----END PGP SIGNATURE-----
>


-- 
Mauro Matteo Cascella, Red Hat Product Security
6F78 E20B 5935 928C F0A8  1A9D 4E55 23B8 BB34 10B0

--000000000000a245b105aa9e773c--



From xen-devel-bounces@lists.xenproject.org Fri Jul 17 08:03:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jul 2020 08:03:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jwLL3-0005W0-H8; Fri, 17 Jul 2020 08:03:25 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=sKPa=A4=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jwLL2-0005Vb-Dt
 for xen-devel@lists.xenproject.org; Fri, 17 Jul 2020 08:03:24 +0000
X-Inumbo-ID: f9d5b248-c803-11ea-9598-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f9d5b248-c803-11ea-9598-12813bfff9fa;
 Fri, 17 Jul 2020 08:03:19 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=Il0BN/JryQ35rriEH4U15RyqFPIPH00SiujR8fQ12yU=; b=jzuaJ3V4GiDYsAIa9NSR5X0VV
 mXaahBViULDH+BzaK7dmu9jNs6EvlOGcVa6/t5ql+Y8jyqyTrzo79TyXZJbyUB4Oj3d2fUVA34d+i
 JjSNmVpx1Mqd0F8NL53Mp6g8AqnpdyLJ96/idbpRpmEk2FzOoXWLNobMjRTZ5G0EznYyk=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jwLKw-0008F6-L1; Fri, 17 Jul 2020 08:03:18 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jwLKw-0003Ig-1D; Fri, 17 Jul 2020 08:03:18 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jwLKw-0001ZM-0V; Fri, 17 Jul 2020 08:03:18 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151956-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 151956: regressions - FAIL
X-Osstest-Failures: libvirt:build-amd64-libvirt:libvirt-build:fail:regression
 libvirt:build-i386-libvirt:libvirt-build:fail:regression
 libvirt:build-arm64-libvirt:libvirt-build:fail:regression
 libvirt:build-armhf-libvirt:libvirt-build:fail:regression
 libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This: libvirt=8b0cb0e666f9fc24e13fe2480663c1688dd6a411
X-Osstest-Versions-That: libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 17 Jul 2020 08:03:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151956 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151956/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              8b0cb0e666f9fc24e13fe2480663c1688dd6a411
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z    7 days
Failing since        151818  2020-07-11 04:18:52 Z    6 days    7 attempts
Testing same since   151956  2020-07-17 04:18:53 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrea Bolognani <abologna@redhat.com>
  Bastien Orivel <bastien.orivel@diateam.net>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Jin Yan <jinyan12@huawei.com>
  Laine Stump <laine@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Pavel Hrdina <phrdina@redhat.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Stefan Berger <stefanb@linux.ibm.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1318 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Jul 17 08:10:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jul 2020 08:10:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jwLRl-0006M2-Af; Fri, 17 Jul 2020 08:10:21 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=1N/p=A4=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jwLRj-0006Lx-Lu
 for xen-devel@lists.xenproject.org; Fri, 17 Jul 2020 08:10:19 +0000
X-Inumbo-ID: f3c53b8e-c804-11ea-b7bb-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f3c53b8e-c804-11ea-b7bb-bc764e2007e4;
 Fri, 17 Jul 2020 08:10:18 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 4EAF0B1AA;
 Fri, 17 Jul 2020 08:10:22 +0000 (UTC)
Subject: Re: RFC: PCI devices passthrough on Arm design proposal
To: Rahul Singh <Rahul.Singh@arm.com>
References: <3F6E40FB-79C5-4AE8-81CA-E16CA37BB298@arm.com>
 <BD475825-10F6-4538-8294-931E370A602C@arm.com>
 <E9CBAA57-5EF3-47F9-8A40-F5D7816DB2A4@arm.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <a50c714c-1642-0354-3f19-5a6f7278d8aa@suse.com>
Date: Fri, 17 Jul 2020 10:10:21 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <E9CBAA57-5EF3-47F9-8A40-F5D7816DB2A4@arm.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 nd <nd@arm.com>, Julien Grall <julien.grall.oss@gmail.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 16.07.2020 19:10, Rahul Singh wrote:
> # Discovering PCI devices:
> 
> PCI/PCIe enumeration is the process of detecting the devices connected to a host. It is the responsibility of the hardware domain or the boot firmware to do the PCI enumeration and to configure the BARs, PCI capabilities, and MSI/MSI-X configuration.
> 
> Doing the configuration part of PCI/PCIe enumeration in Xen is not feasible, as it would require a lot of code inside Xen, which in turn would require a lot of maintenance. In addition, many platforms need quirks in that part of the PCI code, which would greatly increase Xen's complexity. Once the hardware domain has enumerated a device, it communicates it to Xen via the hypercall below.
> 
> #define PHYSDEVOP_pci_device_add        25
> struct physdev_pci_device_add {
>     uint16_t seg;
>     uint8_t bus;
>     uint8_t devfn;
>     uint32_t flags;
>     struct {
>         uint8_t bus;
>         uint8_t devfn;
>     } physfn;
>     /*
>      * Optional parameters array.
>      * First element ([0]) is PXM domain associated with the device
>      * (if XEN_PCI_DEV_PXM is set).
>      */
>     uint32_t optarr[XEN_FLEX_ARRAY_DIM];
> };
> 
> As the hypercall argument carries the PCI segment number, Xen will access the PCI config space based on that segment number and find the corresponding host bridge. At this stage the host bridge is fully initialized, so there will be no issue accessing the config space.
> 
> Xen will add the PCI devices to the linked list maintained in Xen, using the function pci_add_device(). Xen will then be aware of all the PCI devices on the system, and all the devices will be assigned to the hardware domain.

Have you had any thoughts about Dom0 re-arranging the bus numbering?
This is, afaict, a still open issue on x86 as well.

> Limitations:
> * When PCI devices are added to XEN, MSI capability is not initialized inside XEN and not supported as of now.

I think this is a pretty severe limitation, as modern devices tend to
not support pin based interrupts anymore.

> # Emulated PCI device tree node in libxl:
> 
> Libxl creates a virtual PCI node in the device tree to enable the guest OS to discover the virtual PCI bus during guest boot. We introduced the new config option [vpci="pci_ecam"] for guests. When this config option is enabled in a guest's configuration, a PCI device tree node will be created in the guest device tree.

I support Stefano's suggestion for this to be an optional thing, i.e.
there to be no need for it when there are PCI devices assigned to the
guest anyway. I also wonder about the pci_ prefix here - isn't
vpci="ecam" as unambiguous?

> A new area has been reserved in the Arm guest physical map, at which the VPCI bus is declared in the device tree (via the reg and ranges properties of the node). A trap handler for PCI ECAM accesses from the guest has been registered at the defined address and redirects requests to the VPCI driver in Xen.
> 
> Limitation:
> * Only one PCI device tree node is supported as of now.
> 
> BAR value and IOMEM mapping:
> 
> The Linux guest will do the PCI enumeration based on the ECAM area and IOMEM ranges reserved in the VPCI device tree node. Once a PCI device is assigned to the guest, Xen will map the guest PCI IOMEM region to the real physical IOMEM region, but only for the assigned devices.
> 
> As of now we have not modified the existing VPCI code to map the guest PCI IOMEM region to the real physical IOMEM region. We used the existing guest “iomem” config option to map the region.
> For example:
>     Guest reserved IOMEM region: 0x04020000
>     Real physical IOMEM region:  0x50000000
>     IOMEM size:                  128MB
>     iomem config will be:        iomem = ["0x50000,0x8000@0x4020"]

This surely is planned to go away before the code hits upstream? The
ranges really should be read out of the BARs, as I see the
"limitations" section further down suggests, but it's not clear
whether "limitations" are items that you plan to take care of before
submitting your code for review.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Jul 17 08:16:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jul 2020 08:16:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jwLXX-0006Yd-2n; Fri, 17 Jul 2020 08:16:19 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=1N/p=A4=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jwLXV-0006YY-24
 for xen-devel@lists.xenproject.org; Fri, 17 Jul 2020 08:16:17 +0000
X-Inumbo-ID: c8e8de42-c805-11ea-b7bb-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c8e8de42-c805-11ea-b7bb-bc764e2007e4;
 Fri, 17 Jul 2020 08:16:16 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id D9AB3ADF0;
 Fri, 17 Jul 2020 08:16:19 +0000 (UTC)
Subject: Re: [PATCH 1/2] common: map_vcpu_info() cosmetics
To: Julien Grall <julien.grall.oss@gmail.com>
References: <fef45e49-bcb4-dc11-8f7f-b2f5e4bf1a73@suse.com>
 <2a0341c7-3836-a8c0-9516-b6a08e2720ec@suse.com>
 <20200716114128.GO7191@Air-de-Roger>
 <03a4d446-c26b-f171-57fd-a1eb13dad6c0@suse.com>
 <20200716144219.GQ7191@Air-de-Roger>
 <d64ee03f-4663-39ce-fd72-5702029e0182@suse.com>
 <CAJ=z9a2gCm7LNOpJUO4nbwUExmtd8KH2TBvt4VXCaqiAeXuCcg@mail.gmail.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <7b9dc9b2-e117-6bbb-05a7-e82c4529e5e7@suse.com>
Date: Fri, 17 Jul 2020 10:16:18 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <CAJ=z9a2gCm7LNOpJUO4nbwUExmtd8KH2TBvt4VXCaqiAeXuCcg@mail.gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 George Dunlap <George.Dunlap@eu.citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@citrix.com>,
 xen-devel <xen-devel@lists.xenproject.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 16.07.2020 18:17, Julien Grall wrote:
> On Thu, 16 Jul 2020, 17:01 Jan Beulich, <jbeulich@suse.com> wrote:
> 
>> On 16.07.2020 16:42, Roger Pau Monné wrote:
>>> On Thu, Jul 16, 2020 at 01:48:51PM +0200, Jan Beulich wrote:
>>>> On 16.07.2020 13:41, Roger Pau Monné wrote:
>>>>> On Wed, Jul 15, 2020 at 12:15:10PM +0200, Jan Beulich wrote:
>>>>>> Use ENXIO instead of EINVAL to cover the two cases of the address not
>>>>>> satisfying the requirements. This will make an issue here better stand
>>>>>> out at the call site.
>>>>>
>>>>> Not sure whether I would use EFAULT instead of ENXIO, as the
>>>>> description of it is 'bad address' which seems more inline with the
>>>>> error that we are trying to report.
>>>>
>>>> The address isn't bad in the sense of causing a fault, it's just
>>>> that we elect to not allow it. Hence I don't think EFAULT is
>>>> suitable. I'm open to replacement suggestions for ENXIO, though.
>>>
>>> Well, using an address that's not properly aligned to the requirements
>>> of an interface would cause a fault? (in this case it's a software
>>> interface, but the concept applies equally).
>>
>> Not necessarily, see x86'es behavior. Also even on strict arches
>> it is typically possible to cover for the misalignment by using
>> suitable instructions; it's still an implementation choice to not
>> do so.
> 
> 
> I am not sure about your argument here... Yes it might be possible, but at
> what cost?

The cost is what influences the decision whether to support it. Nevertheless
it remains an implementation decision rather than a hardware-imposed
restriction, and hence I don't consider -EFAULT suitable here.

>>> Anyway, not something worth arguing about I think, so unless someone
>>> else disagrees I'm fine with using ENXIO.
>>
>> Good, thanks.
>>
> 
> -EFAULT can be described as "Bad address". I think it makes more sense than
> -ENXIO here even if it may not strictly result in a fault on some arches.

As said - I don't consider EFAULT applicable here; I also consider EINVAL
as too generic. I'll be happy to see replacement suggestions for my ENXIO
choice, but obviously I'm not overly happy to see options re-suggested
which I have already said I ruled out.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Jul 17 08:35:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jul 2020 08:35:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jwLpb-0008JD-Ir; Fri, 17 Jul 2020 08:34:59 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=sKPa=A4=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jwLpZ-0008It-So
 for xen-devel@lists.xenproject.org; Fri, 17 Jul 2020 08:34:57 +0000
X-Inumbo-ID: 6243a3ea-c808-11ea-959c-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6243a3ea-c808-11ea-959c-12813bfff9fa;
 Fri, 17 Jul 2020 08:34:52 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=TDSsp+v5s6IjGtQz93GInU908XIyVVUg9N+RqfL1uLc=; b=455zSSd6kqfdHYKSOLbkqbnRj
 T5cMRUnZ4rcPDvThPWvyCdbwxPdHN77NT/fwjfqaLppi2x5vK81MU9DKkCvEuO7UQOiB2OhaVLHSV
 NJcJ1VcRR+q/mC+VAvnStA7oNhZ70GMgjfCZylKTtt37MVLEwgG4tLPUofaYxO0xfdwFA=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jwLpT-0000RH-Ux; Fri, 17 Jul 2020 08:34:52 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jwLpT-0006HZ-Gx; Fri, 17 Jul 2020 08:34:51 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jwLpT-0006dU-GF; Fri, 17 Jul 2020 08:34:51 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151944-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 151944: regressions - FAIL
X-Osstest-Failures: linux-linus:test-arm64-arm64-libvirt-xsm:guest-start/debian.repeat:fail:regression
 linux-linus:build-i386-pvops:kernel-build:fail:regression
 linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-examine:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-pair:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: linux=f8456690ba8eb18ea4714e68554e242a04f65cff
X-Osstest-Versions-That: linux=1b5044021070efa3259f3e9548dc35d1eb6aa844
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 17 Jul 2020 08:34:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151944 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151944/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-libvirt-xsm 16 guest-start/debian.repeat fail REGR. vs. 151214
 build-i386-pvops              6 kernel-build             fail REGR. vs. 151214

Tests which did not succeed, but are not blocking:
 test-amd64-i386-qemut-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemut-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-i386-examine       1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 151214
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 151214
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 151214
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 151214
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 151214
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 151214
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                f8456690ba8eb18ea4714e68554e242a04f65cff
baseline version:
 linux                1b5044021070efa3259f3e9548dc35d1eb6aa844

Last test of basis   151214  2020-06-18 02:27:46 Z   29 days
Failing since        151236  2020-06-19 19:10:35 Z   27 days   43 attempts
Testing same since   151944  2020-07-16 14:39:18 Z    0 days    1 attempts

------------------------------------------------------------
733 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             fail    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         blocked 
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 38963 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Jul 17 08:48:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jul 2020 08:48:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jwM2C-0000ta-V0; Fri, 17 Jul 2020 08:48:00 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=T3hc=A4=gmail.com=andr2000@srs-us1.protection.inumbo.net>)
 id 1jwM2B-0000tG-Tf
 for xen-devel@lists.xenproject.org; Fri, 17 Jul 2020 08:47:59 +0000
X-Inumbo-ID: 36bebfa0-c80a-11ea-bb8b-bc764e2007e4
Received: from mail-lf1-x143.google.com (unknown [2a00:1450:4864:20::143])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 36bebfa0-c80a-11ea-bb8b-bc764e2007e4;
 Fri, 17 Jul 2020 08:47:58 +0000 (UTC)
Received: by mail-lf1-x143.google.com with SMTP id y13so5601175lfe.9
 for <xen-devel@lists.xenproject.org>; Fri, 17 Jul 2020 01:47:58 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=subject:to:cc:references:from:message-id:date:user-agent
 :mime-version:in-reply-to:content-transfer-encoding:content-language;
 bh=+cjpQErgu64AMMblDNbZsWCTAraG+7PnGSTM25MPtPo=;
 b=nVUOUvlK1FjIf3AHNf/KHr9I2WXBkPAeHkzaHbmHK/41R7p9uNUdE68mJXuH/MSlih
 7i4AK4LQFYq1vhuASxj7R5q/JAgQvV8ED06Uqqa22dPo7IiBpBfG0BgUxtUdnkPE09DZ
 fvvnCoGMq+EQoWbivLoZaECDO43TBCdaFrsee6fqpMEgTGH+HilolNe5In0SQIW604ZO
 XSavuiaNvRQ9e4scaqrnjlOjCvH8FLFHyBywICiKsF9UamDCbyblgsutwf39B1WReZlF
 bslVHAwxyGv1IA4q1iKLVetJa302gib5+j4zy2qAcxRUGVj6X4rV+95yalX0U5rdqzPv
 E4SA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:subject:to:cc:references:from:message-id:date
 :user-agent:mime-version:in-reply-to:content-transfer-encoding
 :content-language;
 bh=+cjpQErgu64AMMblDNbZsWCTAraG+7PnGSTM25MPtPo=;
 b=ILv+tZYbLwf+t4gad+gpU1f1yVMzwHPI79wUy2fATDKz4ElNzYyBJUPuHsntWkNoMx
 FJevksm/gm7zbezVDaGuajiLvonwHEgUbFN8f72467LCJHuA7NHaEQrSCozo9pQns5tH
 EfXA6GWoKpeiYD0Jm2GZ8SSiNi9SwxppNNtLgUvCATIXl8OUC2a2LVkU08SAhVLAclaG
 nEqi6DyvAy2IowojvpkJhvBVEh4DgHnjPzUva1ACSOwjOcHKNix9ysbNBLFlcvPGSHTh
 iqnjefW+gjE2N7ZqGXoOC71CQy1jqY7EnMTho1usG+HiP6gH7aYAHD7T633Rb42yqPFb
 /o9A==
X-Gm-Message-State: AOAM530t1edKkRerApJJGpXbBPVhBJeTxp9PxrW5QkcQUoEwVYX2pVN/
 mrox/+IJIlZSWIfOLo7wmiI=
X-Google-Smtp-Source: ABdhPJxHrjs4ruDWsJIEWsI2FyFYHbirRkItJqAOiGnkUrjeog0tvOHSrsQJbkXW7sGyHWccAPEwtg==
X-Received: by 2002:ac2:51a1:: with SMTP id f1mr4219703lfk.173.1594975677602; 
 Fri, 17 Jul 2020 01:47:57 -0700 (PDT)
Received: from [192.168.10.4] ([185.199.97.5])
 by smtp.gmail.com with ESMTPSA id l12sm1531346ljj.43.2020.07.17.01.47.56
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 17 Jul 2020 01:47:56 -0700 (PDT)
Subject: Re: RFC: PCI devices passthrough on Arm design proposal
To: Jan Beulich <jbeulich@suse.com>, Rahul Singh <Rahul.Singh@arm.com>
References: <3F6E40FB-79C5-4AE8-81CA-E16CA37BB298@arm.com>
 <BD475825-10F6-4538-8294-931E370A602C@arm.com>
 <E9CBAA57-5EF3-47F9-8A40-F5D7816DB2A4@arm.com>
 <a50c714c-1642-0354-3f19-5a6f7278d8aa@suse.com>
From: Oleksandr Andrushchenko <andr2000@gmail.com>
Message-ID: <6b4cadff-ffdc-848a-2b57-be55f61f5bc7@gmail.com>
Date: Fri, 17 Jul 2020 11:47:55 +0300
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <a50c714c-1642-0354-3f19-5a6f7278d8aa@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit
Content-Language: en-US
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, nd <nd@arm.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien.grall.oss@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>


On 7/17/20 11:10 AM, Jan Beulich wrote:
> On 16.07.2020 19:10, Rahul Singh wrote:
>> # Discovering PCI devices:
>>
>> PCI/PCIe enumeration is the process of detecting the devices connected to a host. It is the responsibility of the hardware domain or the boot firmware to do the PCI enumeration and to configure the BARs, PCI capabilities, and MSI/MSI-X settings.
>>
>> Doing the configuration part of PCI/PCIe enumeration in Xen is not feasible, as it would require a lot of code inside Xen and hence a lot of maintenance. In addition, many platforms require quirks in that part of the PCI code, which would greatly increase Xen's complexity. Once the hardware domain has enumerated a device, it communicates it to Xen via the hypercall below.
>>
>> #define PHYSDEVOP_pci_device_add        25
>> struct physdev_pci_device_add {
>>      uint16_t seg;
>>      uint8_t bus;
>>      uint8_t devfn;
>>      uint32_t flags;
>>      struct {
>>      	uint8_t bus;
>>      	uint8_t devfn;
>>      } physfn;
>>      /*
>>       * Optional parameters array.
>>       * First element ([0]) is PXM domain associated with the device
>>       * (if XEN_PCI_DEV_PXM is set)
>>       */
>>      uint32_t optarr[XEN_FLEX_ARRAY_DIM];
>> };
>>
>> As the hypercall argument carries the PCI segment number, Xen will access the PCI config space based on this segment number and find the host bridge corresponding to it. At this stage the host bridge is fully initialized, so there will be no issue accessing the config space.
>>
>> Xen will add the PCI devices to the linked list maintained in Xen using the function pci_add_device(). Xen will be aware of all the PCI devices on the system, and all the devices will be added to the hardware domain.
> Have you had any thoughts about Dom0 re-arranging the bus numbering?
> This is, afaict, a still open issue on x86 as well.

This can get even trickier as we may have PCI enumerated at boot time
by the firmware and then Dom0 may perform the enumeration differently.
So, Xen needs to be aware of what is going to be used as the source of
the enumeration data and be ready to re-build its internal structures
in order to be aligned with that entity: e.g. compare the Dom0 and
Dom0less use-cases.

>
>> Limitations:
>> * When PCI devices are added to Xen, the MSI capability is not initialized inside Xen and is not supported as of now.
> I think this is a pretty severe limitation, as modern devices tend to
> not support pin based interrupts anymore.
>
>> # Emulated PCI device tree node in libxl:
>>
>> Libxl creates a virtual PCI device tree node in the device tree to enable the guest OS to discover the virtual PCI bus during guest boot. We introduced the new config option [vpci="pci_ecam"] for guests. When this config option is enabled in a guest configuration, a PCI device tree node will be created in the guest device tree.
> I support Stefano's suggestion for this to be an optional thing, i.e.
> there to be no need for it when there are PCI devices assigned to the
> guest anyway. I also wonder about the pci_ prefix here - isn't
> vpci="ecam" as unambiguous?
>
>> A new area has been reserved in the Arm guest physical map at which the VPCI bus is declared in the device tree (reg and ranges parameters of the node). A trap handler for PCI ECAM accesses from the guest has been registered at the defined address and redirects requests to the VPCI driver in Xen.
>>
>> Limitation:
>> * Only one PCI device tree node is supported as of now.
>>
>> BAR value and IOMEM mapping:
>>
>> The Linux guest will do the PCI enumeration based on the area reserved for ECAM and the IOMEM ranges in the VPCI device tree node. Once a PCI device is assigned to the guest, Xen will map the guest PCI IOMEM region to the real physical IOMEM region, but only for the assigned devices.
>>
>> As of now we have not modified the existing VPCI code to map the guest PCI IOMEM region to the real physical IOMEM region. We used the existing guest “iomem” config option to map the region.
>> For example:
>>      Guest reserved IOMEM region: 0x04020000
>>      Real physical IOMEM region:  0x50000000
>>      IOMEM size:                  128MB
>>      iomem config will be:        iomem = ["0x50000,0x8000@0x4020"]
> This surely is planned to go away before the code hits upstream? The
> ranges really should be read out of the BARs, as I see the
> "limitations" section further down suggests, but it's not clear
> whether "limitations" are items that you plan to take care of before
> submitting your code for review.
>
> Jan
>


From xen-devel-bounces@lists.xenproject.org Fri Jul 17 09:22:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jul 2020 09:22:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jwMZR-0004Va-Ie; Fri, 17 Jul 2020 09:22:21 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=/fKj=A4=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jwMZP-0004VV-Mx
 for xen-devel@lists.xenproject.org; Fri, 17 Jul 2020 09:22:19 +0000
X-Inumbo-ID: 02fbc2a8-c80f-11ea-bca7-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 02fbc2a8-c80f-11ea-bca7-bc764e2007e4;
 Fri, 17 Jul 2020 09:22:18 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=ddxKEY0lA45qy6ua1yonzSg5nBItlF+1Mxkg7U80taU=; b=PnhcnAj+NTVKjS50m7Lkh0QySj
 JrfvrTYunAZcdtBY4BEYsw9kbsRc5wPSCdBOIFxQ0nY3wIkm49Y9FSj4dHHDLKNXsAOIUhLaFaP87
 DndTHpOe2a7DhNzHjcLoAeZXocSBqFMbG25IGvLBBYIy1Ag/ON3p9J5k8ipTAXWcaRbk=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jwMZM-0001RK-Pc; Fri, 17 Jul 2020 09:22:16 +0000
Received: from 54-240-197-227.amazon.com ([54.240.197.227]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jwMZM-0002Mb-GU; Fri, 17 Jul 2020 09:22:16 +0000
Subject: Re: [PATCH 1/2] common: map_vcpu_info() cosmetics
To: Jan Beulich <jbeulich@suse.com>, Julien Grall <julien.grall.oss@gmail.com>
References: <fef45e49-bcb4-dc11-8f7f-b2f5e4bf1a73@suse.com>
 <2a0341c7-3836-a8c0-9516-b6a08e2720ec@suse.com>
 <20200716114128.GO7191@Air-de-Roger>
 <03a4d446-c26b-f171-57fd-a1eb13dad6c0@suse.com>
 <20200716144219.GQ7191@Air-de-Roger>
 <d64ee03f-4663-39ce-fd72-5702029e0182@suse.com>
 <CAJ=z9a2gCm7LNOpJUO4nbwUExmtd8KH2TBvt4VXCaqiAeXuCcg@mail.gmail.com>
 <7b9dc9b2-e117-6bbb-05a7-e82c4529e5e7@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <be5e1706-55de-e7b7-c302-5440f4c321a8@xen.org>
Date: Fri, 17 Jul 2020 10:22:14 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:68.0)
 Gecko/20100101 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <7b9dc9b2-e117-6bbb-05a7-e82c4529e5e7@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 George Dunlap <George.Dunlap@eu.citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@citrix.com>,
 xen-devel <xen-devel@lists.xenproject.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>



On 17/07/2020 09:16, Jan Beulich wrote:
> On 16.07.2020 18:17, Julien Grall wrote:
>> On Thu, 16 Jul 2020, 17:01 Jan Beulich, <jbeulich@suse.com> wrote:
>>
>>> On 16.07.2020 16:42, Roger Pau Monné wrote:
>>>> On Thu, Jul 16, 2020 at 01:48:51PM +0200, Jan Beulich wrote:
>>>>> On 16.07.2020 13:41, Roger Pau Monné wrote:
>>>>>> On Wed, Jul 15, 2020 at 12:15:10PM +0200, Jan Beulich wrote:
>>>>>>> Use ENXIO instead of EINVAL to cover the two cases of the address not
>>>>>>> satisfying the requirements. This will make an issue here better stand
>>>>>>> out at the call site.
>>>>>>
>>>>>> Not sure whether I would use EFAULT instead of ENXIO, as the
>>>>>> description of it is 'bad address' which seems more inline with the
>>>>>> error that we are trying to report.
>>>>>
>>>>> The address isn't bad in the sense of causing a fault, it's just
>>>>> that we elect to not allow it. Hence I don't think EFAULT is
>>>>> suitable. I'm open to replacement suggestions for ENXIO, though.
>>>>
>>>> Well, using an address that's not properly aligned to the requirements
>>>> of an interface would cause a fault? (in this case it's a software
>>>> interface, but the concept applies equally).
>>>
>>> Not necessarily, see x86'es behavior. Also even on strict arches
>>
>> it is typically possible to cover for the misalignment by using
>>> suitable instructions; it's still an implementation choice to not
>>> do so.
>>
>>
>> I am not sure about your argument here... Yes it might be possible, but at
>> what cost?
> 
> The cost is what influences the decision whether to support it. Nevertheless
> it remains an implementation decision rather than a hardware imposed
> restriction, and hence I don't consider -EFAULT suitable here.
> 
>>>> Anyway, not something worth arguing about I think, so unless someone
>>>> else disagrees I'm fine with using ENXIO.
>>>
>>> Good, thanks.
>>>
>>
>> -EFAULT can be described as "Bad address". I think it makes more sense than
>> -ENXIO here even if it may not strictly result to a fault on some arch.
> 
> As said - I don't consider EFAULT applicable here;

AFAICT, you don't consider it because you think that using the address 
means it will always lead to a fault. However, this is just a strict 
interpretation of the error code. A less strict interpretation is it 
could be used for any address that is considered to be invalid.

-ENXIO makes less sense because the address exists. To re-use your 
argument, this is just an implementation detail.

> I also consider EINVAL
> as too generic. I'll be happy to see replacement suggestions for my ENXIO
> choice, but obviously I'm not overly happy to see options re-suggested
> which I did already say I've ruled out.

I think I am allowed to express my opinion even if it means this was 
already said... However, I should have been clearer and said that I 
agree with Roger's suggestion of -EFAULT.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Jul 17 09:54:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jul 2020 09:54:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jwN4V-00078P-5v; Fri, 17 Jul 2020 09:54:27 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=5Bgo=A4=citrix.com=ian.jackson@srs-us1.protection.inumbo.net>)
 id 1jwN4T-00078K-TA
 for xen-devel@lists.xenproject.org; Fri, 17 Jul 2020 09:54:25 +0000
X-Inumbo-ID: 7ea04a6a-c813-11ea-b7bb-bc764e2007e4
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7ea04a6a-c813-11ea-b7bb-bc764e2007e4;
 Fri, 17 Jul 2020 09:54:24 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1594979665;
 h=from:mime-version:content-transfer-encoding:message-id:
 date:to:cc:subject:in-reply-to:references;
 bh=fI81c/8hwVQWBLLk8cS0glHAIbt+oZbJtZK0qW4qzuQ=;
 b=GE4SXnsZJtOxfIQAzmwafGjY5kzUbpu2mpEkdJ9SirPZqJuP20OeQYUY
 xxosrk+hdb4YAd59m7raoXkzb83uie5VmsPAm7xlB+TpVP7mJ8VAA23DM
 cjiwqfBiA2TTh1jLwYm1/Z801MLEe2yl9lI5UK/ToFHWydxt3mIw+SamE A=;
Authentication-Results: esa1.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: JgVqh6+/0rrA4OrAXwNdi68JopV9V6p52OnTzLJcXw/55EvOhC8pPjfS5QVrzCapLG7/FdM5HI
 JuYsOWInvrC4LHuY43JLExFm1T2FwGoUUlOF4YvOtylodGu4v1ByzIyE/0xMZ3bqOnnA2+3bwW
 Bwgy2GzHz0W9+eBZRkhQ4E9aged/gJl05xrnrtbzWDIKbxRZNWbH8K/Y2yMBXcIiFONT5VRVyI
 C01J17bWlibUaC7F/RKqECE/CfWJ3RcoRBv/VMGflIlCg2tpBEAVVwNKoas0TqAUwXabpAKEBw
 xMs=
X-SBRS: 2.7
X-MesageID: 22922260
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,362,1589256000"; d="scan'208";a="22922260"
From: Ian Jackson <ian.jackson@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: 8bit
Message-ID: <24337.30027.196236.23948@mariner.uk.xensource.com>
Date: Fri, 17 Jul 2020 10:54:19 +0100
To: Roger Pau Monne <roger.pau@citrix.com>
Subject: Re: [osstest PATCH] Revert "make-flight: Temporarily disable flaky
 test"
In-Reply-To: <20200716153424.40953-1-roger.pau@citrix.com>
References: <20200716153424.40953-1-roger.pau@citrix.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Roger Pau Monne writes ("[osstest PATCH] Revert "make-flight: Temporarily disable flaky test""):
> This reverts commit c436ff754810c15e4d2a434257d1d07498883acb.
> 
> Now that XSA-321 has been released we can try to enable PVH dom0
> tests again.
> 
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>

Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

And pushed, thanks.

Ian.


From xen-devel-bounces@lists.xenproject.org Fri Jul 17 10:00:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jul 2020 10:00:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jwNAO-00082R-SV; Fri, 17 Jul 2020 10:00:32 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=1N/p=A4=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jwNAN-00081v-Gg
 for xen-devel@lists.xenproject.org; Fri, 17 Jul 2020 10:00:31 +0000
X-Inumbo-ID: 58628196-c814-11ea-95ae-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 58628196-c814-11ea-95ae-12813bfff9fa;
 Fri, 17 Jul 2020 10:00:30 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 968B1AD26;
 Fri, 17 Jul 2020 10:00:33 +0000 (UTC)
Subject: Re: [PATCH v2 2/2] x86: detect CMOS aliasing on ports other than
 0x70/0x71
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <416ac9b1-93d1-81a2-be19-d58d509fdfec@suse.com>
 <72a63cba-bfdb-ae3c-284b-8ba5b9d7f7a9@suse.com>
 <20200716143145.GP7191@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <77904c06-ff49-14c2-385f-2a3ec4d477f1@suse.com>
Date: Fri, 17 Jul 2020 12:00:32 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20200716143145.GP7191@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 16.07.2020 16:31, Roger Pau Monné wrote:
> On Wed, Jul 15, 2020 at 11:47:56AM +0200, Jan Beulich wrote:
>> ... in order to also intercept accesses through the alias ports.
>>
>> Also stop intercepting accesses to the CMOS ports if we won't ourselves
>> use the CMOS RTC.
> 
> I think you are missing the registration of the aliased ports in
> rtc_init for a PVH hardware domain, hw_rtc_io will currently only get
> called by accesses to 0x70-0x71.

Oh, right - a re-basing oversight. Thanks for noticing. (It's not
just the registration that's missing, but also the avoiding of it
in case ACPI_FADT_NO_CMOS_RTC is set.)

>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>> ---
>> v2: Re-base.
>>
>> --- a/xen/arch/x86/physdev.c
>> +++ b/xen/arch/x86/physdev.c
>> @@ -670,6 +670,80 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_H
>>      return ret;
>>  }
>>  
>> +#ifndef COMPAT
>> +#include <asm/mc146818rtc.h>
>> +
>> +unsigned int __read_mostly cmos_alias_mask;
>> +
>> +static int __init probe_cmos_alias(void)
>> +{
>> +    unsigned int i, offs;
>> +
>> +    if ( acpi_gbl_FADT.boot_flags & ACPI_FADT_NO_CMOS_RTC )
>> +        return 0;
>> +
>> +    for ( offs = 2; offs < 8; offs <<= 1 )
>> +    {
>> +        bool read = true;
>> +
>> +        for ( i = RTC_REG_D + 1; i < 0x80; ++i )
>> +        {
>> +            uint8_t normal, alt;
>> +            unsigned long flags;
>> +
>> +            if ( i == acpi_gbl_FADT.century )
>> +                continue;
> 
> I'm missing something, why do you avoid the century register for
> comparison reasons?

Just like the other RTC registers - their contents may change behind
our backs, here e.g. over New Year between two centuries.

>> @@ -2009,37 +2009,33 @@ int __hwdom_init xen_in_range(unsigned l
>>  static int __hwdom_init io_bitmap_cb(unsigned long s, unsigned long e,
>>                                       void *ctx)
>>  {
>> -    struct domain *d = ctx;
>> +    const struct domain *d = ctx;
> 
> Urg, it's kind of weird to constify d ...
> 
>>      unsigned int i;
>>  
>>      ASSERT(e <= INT_MAX);
>>      for ( i = s; i <= e; i++ )
>> -        __clear_bit(i, d->arch.hvm.io_bitmap);
>> +        if ( admin_io_okay(i, 1, d) )
>> +            __clear_bit(i, d->arch.hvm.io_bitmap);
> 
> ... when you are modifying the bitmap here.

Well - I'm not modifying what d points to. In principle these I/O
bitmaps are shared; it's just Dom0 which gets a separate one. So 
modifying the bitmap really is unrelated to modifying struct domain.

>>  void __hwdom_init setup_io_bitmap(struct domain *d)
>>  {
>> -    int rc;
>> +    if ( !is_hvm_domain(d) )
>> +        return;
>>  
>> -    if ( is_hvm_domain(d) )
>> -    {
>> -        bitmap_fill(d->arch.hvm.io_bitmap, 0x10000);
>> -        rc = rangeset_report_ranges(d->arch.ioport_caps, 0, 0x10000,
>> -                                    io_bitmap_cb, d);
>> -        BUG_ON(rc);
>> -        /*
>> -         * NB: we need to trap accesses to 0xcf8 in order to intercept
>> -         * 4 byte accesses, that need to be handled by Xen in order to
>> -         * keep consistency.
>> -         * Access to 1 byte RTC ports also needs to be trapped in order
>> -         * to keep consistency with PV.
>> -         */
>> -        __set_bit(0xcf8, d->arch.hvm.io_bitmap);
>> -        __set_bit(RTC_PORT(0), d->arch.hvm.io_bitmap);
>> -        __set_bit(RTC_PORT(1), d->arch.hvm.io_bitmap);
>> -    }
>> +    bitmap_fill(d->arch.hvm.io_bitmap, 0x10000);
>> +    if ( rangeset_report_ranges(d->arch.ioport_caps, 0, 0x10000,
>> +                                io_bitmap_cb, d) )
>> +        BUG();
> 
> You can directly use BUG_ON, no need for the if.

Long ago we agreed to avoid BUG_ON() with expressions that have
required (side) effects. I.e. just like for ASSERT(), where the
expression wouldn't get evaluated at all when NDEBUG is defined.

> IIRC it's safe to
> call admin_io_okay (and thus ioports_access_permitted) when already
> holding the rangeset lock, as both are read-lockers and can safely
> recurse.

I'm afraid I don't see the connection of this remark to the
construct in question.

>> --- a/xen/arch/x86/time.c
>> +++ b/xen/arch/x86/time.c
>> @@ -1092,7 +1092,10 @@ static unsigned long get_cmos_time(void)
>>          if ( seconds < 60 )
>>          {
>>              if ( rtc.sec != seconds )
>> +            {
>>                  cmos_rtc_probe = false;
>> +                acpi_gbl_FADT.boot_flags &= ~ACPI_FADT_NO_CMOS_RTC;
> 
> Do you need to set this flag also when using the EFI runtime services
> in order to get the time in get_cmos_time? In that case the RTC is not
> used, and hence could be handed to the hardware domain?

Whether the EFI runtime services use the RTC is unknown. There are
specific precautions towards this in the UEFI spec, iirc.

>> @@ -1114,7 +1117,7 @@ unsigned int rtc_guest_read(unsigned int
>>      unsigned long flags;
>>      unsigned int data = ~0;
>>  
>> -    switch ( port )
>> +    switch ( port & ~cmos_alias_mask )
>>      {
>>      case RTC_PORT(0):
>>          /*
>> @@ -1126,11 +1129,12 @@ unsigned int rtc_guest_read(unsigned int
>>          break;
>>  
>>      case RTC_PORT(1):
>> -        if ( !ioports_access_permitted(currd, RTC_PORT(0), RTC_PORT(1)) )
>> +        if ( !ioports_access_permitted(currd, port - 1, port) )
>>              break;
>>          spin_lock_irqsave(&rtc_lock, flags);
>> -        outb(currd->arch.cmos_idx & 0x7f, RTC_PORT(0));
>> -        data = inb(RTC_PORT(1));
>> +        outb(currd->arch.cmos_idx & (0xff >> (port == RTC_PORT(1))),
> 
> Why do you only mask this for accesses to the non aliased ports? If
> the RTC is aliased you also want to mask the aliased accesses in the
> same way?

Bit 7 in port 70 has a different meaning (NMI mask); you can access
RTC/CMOS bytes 0-127 only this way. There are chipsets which provide
256-byte CMOS, where the high half can be accessed via the aliases.

However, seeing this comment of yours I noticed that there's still a
related bug here: When the guest reads/writes the index port, I _also_
need to mask off the high bit when it's the non-aliased port that gets
accessed. Otherwise Dom0 writing port 74 but then reading port 71
could lead to bit 7 getting set in port 70.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Jul 17 10:48:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jul 2020 10:48:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jwNv1-0003Bo-JE; Fri, 17 Jul 2020 10:48:43 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=XmQz=A4=arm.com=rahul.singh@srs-us1.protection.inumbo.net>)
 id 1jwNv0-0003Bi-9u
 for xen-devel@lists.xenproject.org; Fri, 17 Jul 2020 10:48:42 +0000
X-Inumbo-ID: 1356986a-c81b-11ea-95b8-12813bfff9fa
Received: from EUR03-VE1-obe.outbound.protection.outlook.com (unknown
 [40.107.5.53]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1356986a-c81b-11ea-95b8-12813bfff9fa;
 Fri, 17 Jul 2020 10:48:41 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com; 
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=scmTMjVaRH1ezbx1VIu7voixZKa+4eKWKAqyFmDgkSU=;
 b=XWrSZgH+BhA/tclzubT/2XU/qMYCqenSzPE0Ni0k1bhTzmj8fxdi1CLN1yBdAXvzqfAMlNo4zV9/l06VZY6uHTjjnuak5HkadFDvuaRdjo9CAjPKgaSflOvIdBgUE6J6KkYaUAgtaVKUesYOIjBTaPOEEUiwq0yjdOQNn2F2D8g=
Received: from MR2P264CA0135.FRAP264.PROD.OUTLOOK.COM (2603:10a6:500:30::27)
 by DB8PR08MB5273.eurprd08.prod.outlook.com (2603:10a6:10:e8::25) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3195.17; Fri, 17 Jul
 2020 10:48:38 +0000
Received: from VE1EUR03FT033.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:500:30:cafe::ae) by MR2P264CA0135.outlook.office365.com
 (2603:10a6:500:30::27) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3195.18 via Frontend
 Transport; Fri, 17 Jul 2020 10:48:37 +0000
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org;
 dmarc=bestguesspass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT033.mail.protection.outlook.com (10.152.18.147) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3195.18 via Frontend Transport; Fri, 17 Jul 2020 10:48:37 +0000
Received: ("Tessian outbound 1c27ecaec3d6:v62");
 Fri, 17 Jul 2020 10:48:36 +0000
X-CheckRecipientChecked: true
X-CR-MTA-CID: b615941b5ff91042
X-CR-MTA-TID: 64aa7808
Received: from efc8edb381a6.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 A440ABFA-ABBD-4D2E-A23D-AAB62CF3AE7D.1; 
 Fri, 17 Jul 2020 10:48:31 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id efc8edb381a6.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 17 Jul 2020 10:48:31 +0000
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=ZQhhtmdHinmjO+V8xD1Sor+6uiaDQ+aOYo0Y7I+ilTWrDv87GfDH0tE2W7x5nKfkheCkbH3n+zSv89kLD39lC34U9K5Vf45i+gA6+BwKb3EtjUj0ngWPKvYK2/l6WkI7wDBo95m8a5iMV5s1lr0ogoXf6vouGSaU2WEj2g3uzFXPMyDh6/TSwqyaJQr1mcshdbfOgNOTlOJF2v6SBAUgB9VqYmcz1z+e98d6+gqqq58V6T2G1FNH/kD1uyZ2rQLEQjM5ML0a2KJnYpZf/A/xpPJqUMZkwruu5boffCG3WzKuJ+itM8+9ePCFj1alJD1+YenGWZXrIaybfSkUxAmI+g==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; 
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=scmTMjVaRH1ezbx1VIu7voixZKa+4eKWKAqyFmDgkSU=;
 b=nCoIgG6BLgO4lP8S2bB9ofchmY6y4AmhrIb/TjC1W9TxCvwGn8bCjHfeTNrMqCzjgVzBYCMbtFMyDse3Z24+pI3dU3/PrdxI+J9eseCGPlC/v4AjJv6mG1yLbig+rovIjVwwi+Bl3lmMP535urp/wAJK/2b5qWc8HNClmG8Kbl3zqkSwaPo40yznByoapqiwDLt6YFw3rsQ5Ae6lp0ybXKDVqQBVA6gFIVInE34n0VlzYQw74eRp7dESaa45Z9TR3Q7SAWxGjvMVKRd24rrFwQm5uFfXCgTbKN8c2zzl0L8IKQ157u13ZZy7Jndr7rzrCVSsJIWcQ/VHf3TRHxqx0g==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com; 
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=scmTMjVaRH1ezbx1VIu7voixZKa+4eKWKAqyFmDgkSU=;
 b=XWrSZgH+BhA/tclzubT/2XU/qMYCqenSzPE0Ni0k1bhTzmj8fxdi1CLN1yBdAXvzqfAMlNo4zV9/l06VZY6uHTjjnuak5HkadFDvuaRdjo9CAjPKgaSflOvIdBgUE6J6KkYaUAgtaVKUesYOIjBTaPOEEUiwq0yjdOQNn2F2D8g=
Received: from AM6PR08MB3560.eurprd08.prod.outlook.com (2603:10a6:20b:4c::19)
 by AM5PR0801MB1811.eurprd08.prod.outlook.com (2603:10a6:203:39::15)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3174.21; Fri, 17 Jul
 2020 10:48:29 +0000
Received: from AM6PR08MB3560.eurprd08.prod.outlook.com
 ([fe80::e891:3b33:766:cad5]) by AM6PR08MB3560.eurprd08.prod.outlook.com
 ([fe80::e891:3b33:766:cad5%6]) with mapi id 15.20.3174.026; Fri, 17 Jul 2020
 10:48:29 +0000
From: Rahul Singh <Rahul.Singh@arm.com>
To: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>, Bertrand
 Marquis <Bertrand.Marquis@arm.com>, Stefano Stabellini
 <sstabellini@kernel.org>
Subject: Re: PCI devices passthrough on Arm design proposal
Thread-Topic: PCI devices passthrough on Arm design proposal
Thread-Index: AQHWXCfOXvffVM7hqk2nmODs1ql70Q==
Date: Fri, 17 Jul 2020 10:48:29 +0000
Message-ID: <D0FA2330-D6AC-4A0C-98FD-62B8BA200E27@arm.com>
Accept-Language: en-US
Content-Language: en-GB
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 nd <nd@arm.com>, =?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>,
 Julien Grall <julien.grall.oss@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 17/07/2020, 8:42 am, "Oleksandr Andrushchenko" <Oleksandr_Andrushchenko@epam.com> wrote:

    On 7/17/20 9:53 AM, Bertrand Marquis wrote:
    >
    >> On 16 Jul 2020, at 22:51, Stefano Stabellini <sstabellini@kernel.org> wrote:
    >>
    >> On Thu, 16 Jul 2020, Rahul Singh wrote:
    >>> Hello All,
    >>>
    >>> Following up on the discussion on PCI Passthrough support on ARM that we had at the XEN summit, we are submitting a Request For Comment and a design proposal for PCI passthrough support on ARM. Feel free to give your feedback.
    >>>
    >>> The following describes the high-level design proposal of the PCI passthrough support and how the different modules within the system interact with each other to assign a particular PCI device to the guest.
    >> I think the proposal is good and I only have a couple of thoughts to
    >> share below.
    >>
    >>
    >>> # Title:
    >>>
    >>> PCI devices passthrough on Arm design proposal
    >>>
    >>> # Problem statement:
    >>>
    >>> On ARM there is no support to assign a PCI device to a guest. PCI device passthrough capability allows guests to have full access to some PCI devices. PCI device passthrough allows PCI devices to appear and behave as if they were physically attached to the guest operating system and provides full isolation of the PCI devices.
    >>>
    >>> Goal of this work is to also support Dom0Less configuration, so the PCI backend/frontend drivers used on x86 shall not be used on Arm. It will use the existing VPCI concept from X86 and implement the virtual PCI bus through IO emulation such that only assigned devices are visible to the guest and the guest can use the standard PCI driver.
    >>>
    >>> Only Dom0 and Xen will have access to the real PCI bus,

    So, in this case how is the access serialization going to work?

    I mean, what if both Xen and Dom0 are about to access the bus at the same time?

    There was a discussion on the same before [1] and IMO it was not decided on how to deal with that.

Dom0 also accesses the real PCI hardware via MMIO config space traps in XEN. We will take care of config space access serialization in XEN.

    >>> guest will have direct access to the assigned device itself. IOMEM memory will be mapped to the guest and interrupts will be redirected to the guest. SMMU has to be configured correctly to have DMA transactions.
    >>>
    >>> ## Current state: Draft version
    >>>
    >>> # Proposer(s): Rahul Singh, Bertrand Marquis
    >>>
    >>> # Proposal:
    >>>
    >>> This section will describe the different subsystems to support the PCI device passthrough and how these subsystems interact with each other to assign a device to the guest.
    >>>
    >>> # PCI Terminology:
    >>>
    >>> Host Bridge: Host bridge allows the PCI devices to talk to the rest of the computer.
    >>> ECAM: ECAM (Enhanced Configuration Access Mechanism) is a mechanism developed to allow PCIe to access configuration space. The space available per function is 4KB.
    >>>
    >>> # Discovering PCI Host Bridge in XEN:
    >>>
    >>> In order to support the PCI passthrough XEN should be aware of all the PCI host bridges available on the system and should be able to access the PCI configuration space. ECAM configuration access is supported as of now. XEN during boot will read the PCI device tree node "reg" property and will map the ECAM space to the XEN memory using the "ioremap_nocache()" function.
    >>>
    >>> If there is more than one segment on the system, XEN will read the "linux,pci-domain" property from the device tree node and configure the host bridge segment number accordingly. All the PCI device tree nodes should have the "linux,pci-domain" property so that there will be no conflicts. During hardware domain boot Linux will also use the same "linux,pci-domain" property and assign the domain number to the host bridge.
    >>>
    >>> When Dom0 tries to access the PCI config space of the device, XEN will find the corresponding host bridge based on the segment number and access the corresponding config space assigned to that bridge.
    >>>
    >>> Limitation:
    >>> * Only PCI ECAM configuration space access is supported.

    This is really the limitation which we have to think of now, as there are lots of HW w/o ECAM support, and not providing a way to use PCI(e) on those boards would render them useless wrt PCI. I don't suggest to have some real code for that, but I would suggest we design some interfaces from day 0.

    At the same time I do understand that supporting non-ECAM bridges is a pain.

Adding any type of host bridge is supported; we did put the ECAM specific code in an identified source file so that other types can be implemented. As of now we have implemented the ECAM support and we are right now implementing support for N1SDP, which requires specific quirks which will be done in a separate source file.

    >>> * Device tree binding is supported as of now, ACPI is not supported.
    >>> * Need to port the PCI host bridge access code to XEN to access the configuration space (the generic one works but lots of platforms will require some specific code or quirks).
    >>>
    >>> # Discovering PCI devices:
    >>>
    >>> PCI-PCIe enumeration is a process of detecting devices connected to its host. It is the responsibility of the hardware domain or boot firmware to do the PCI enumeration and configure
    Great, so we assume here that the bootloader can do the enumeration and configuration...
    >>>   the BAR, PCI capabilities, and MSI/MSI-X configuration.
    >>>
    >>> PCI-PCIe enumeration in XEN is not feasible for the configuration part as it would require a lot of code inside Xen which would require a lot of maintenance. Added to this, many platforms require some quirks in that part of the PCI code, which would greatly increase Xen complexity. Once the hardware domain enumerates the device, it will communicate it to XEN via the below hypercall.
    >>>
    >>> #define PHYSDEVOP_pci_device_add        25
    >>> struct physdev_pci_device_add {
    >>>     uint16_t seg;
    >>>     uint8_t bus;
    >>>     uint8_t devfn;
    >>>     uint32_t flags;
    >>>     struct {
    >>>     	uint8_t bus;
    >>>     	uint8_t devfn;
    >>>     } physfn;
    >>>     /*
    >>>      * Optional parameters array.
    >>>      * First element ([0]) is PXM domain associated with the device
    >>>      * (if XEN_PCI_DEV_PXM is set)
    >>>      */
    >>>     uint32_t optarr[XEN_FLEX_ARRAY_DIM];
    >>> };
    >>>
    >>> As the hypercall argument has the PCI segment number, XEN will access the PCI config space based on this segment number and find the host bridge corresponding to this segment number. At this stage the host bridge is fully initialized so there will be no issue to access the config space.
    >>>
    >>> XEN will add the PCI devices to the linked list maintained in XEN using the function pci_add_device(). XEN will be aware of all the PCI devices on the system and all the devices will be added to the hardware domain.
    >>>
    >>> Limitations:
    >>> * When PCI devices are added to XEN, MSI capability is not initialized inside XEN and not supported as of now.
    >>> * ACS capability is disabled for ARM as of now, as after enabling it devices are not accessible.
    >>> * Dom0Less implementation will require to have the capacity inside Xen to discover the PCI devices (without depending on Dom0 to declare them to Xen).
    >> I think it is fine to assume that for dom0less the "firmware" has taken
    >> care of setting up the BARs correctly. Starting with that assumption, it
    >> looks like it should be "easy" to walk the PCI topology in Xen when/if
    >> there is no dom0?
    > Yes, as we discussed during the design session, we currently think that it is the way to go.
    > We are for now relying on Dom0 to get the list of PCI devices but this is definitely the strategy we would like to use to have Dom0 support.
    > If this is working well, I even think we could get rid of the hypercall altogether.
    ...and this is the same way of configuring if enumeration happens in the bootloader?

    I do support the idea we go away from PHYSDEVOP_pci_device_add, but the driver domain just signals Xen that the enumeration is done and Xen can traverse the bus by that time.

    Please also note that there are actually 3 cases possible wrt where the enumeration and configuration happens: boot firmware, Dom0, Xen. So, it seems we are going to have different approaches for the first two (see my comment above on the hypercall use in Dom0). So, walking the bus ourselves in Xen seems to be good for all the use-cases above.

In that case we may have to implement a new hypercall to inform XEN that enumeration is complete and it can now scan the devices. We could tell Xen to delay its enumeration until this hypercall is called using a xen command line parameter. This way, when this is not required because the firmware did the enumeration, we can properly support Dom0Less.

    >
    >
    >>
    >>> # Enable the existing x86 virtual PCI support for ARM:
    >>>
    >>> The existing VPCI support available for X86 is adapted for Arm. When the device is added to XEN via the hypercall "PHYSDEVOP_pci_device_add", a VPCI handler for the config space access is added to the PCI device to emulate the PCI devices.
    >>>
    >>> An MMIO trap handler for the PCI ECAM space is registered in XEN so that when a guest is trying to access the PCI config space, XEN will trap the access and emulate read/write using the VPCI and not the real PCI hardware.
    Just to make it clear: Dom0 still accesses the bus directly w/o emulation, right?
    >>>
    >>> Limitation:
    >>> * No handler is registered for the MSI configuration.
    >>> * Only legacy interrupt is supported and tested as of now, MSI is not implemented and tested.
    >>>
    >>> # Assign the device to the guest:
    >>>
    >>> Assigning the PCI device from the hardware domain to the guest is done using the below guest config option. When the xl tool creates the domain, PCI devices will be assigned to the guest VPCI bus.
    >>> 	pci=[ "PCI_SPEC_STRING", "PCI_SPEC_STRING", ...]
    >>>
    >>> Guest will only be able to access the assigned devices and see the bridges. Guest will not be able to access or see the devices that are not assigned to it.
    Does this mean that we do not need to configure the bridges as those are exposed to the guest implicitly?
    >>>
    >>> Limitation:
    >>> * As of now all the bridges in the PCI bus are seen by the guest on the VPCI bus.

    So, what happens if a guest tries to access a bridge that doesn't have the assigned PCI device? E.g. we pass PCIe_dev0 which is behind Bridge0, and the guest also sees Bridge1 and tries to access devices behind it during the enumeration.

    Could you please clarify?

The bridges are only accessible read-only and cannot be modified. Even though a guest would see the bridge, the VPCI will only show the assigned devices behind it. If there is no device behind that bridge assigned to the guest, the guest will see an empty bus behind that bridge.

    >> We need to come up with something similar for dom0less too. It could be
    >> exactly the same thing (a list of BDFs as strings as a device tree
    >> property) or something else if we can come up with a better idea.
    > Fully agree.
    > Maybe a tree topology could allow more possibilities (like giving BAR values) in the future.
    >>
    >>> # Emulated PCI device tree node in libxl:
    >>>
    >>> Libxl is creating a virtual PCI device tree node in the device tree to enable the guest OS to discover the virtual PCI during guest boot. We introduced the new config option [vpci="pci_ecam"] for guests. When this config option is enabled in a guest configuration, a PCI device tree node will be created in the guest device tree.
    >>>
    >>> A new area has been reserved in the arm guest physical map at which the VPCI bus is declared in the device tree (reg and ranges parameters of the node). A trap handler for the PCI ECAM access from guest has been registered at the defined address and redirects requests to the VPCI driver in Xen.
    >>>
    >>> Limitation:
    >>> * Only one PCI device tree node is supported as of now.
    >> I think vpci="pci_ecam" should be optional: if pci=[ "PCI_SPEC_STRING",
    >> ...] is specified, then vpci="pci_ecam" is implied.
    >>
    >> vpci="pci_ecam" is only useful one day in the future when we want to be
    >> able to emulate other non-ecam host bridges. For now we could even skip
    >> it.
    > This would create a problem if xl is used to add a PCI device, as we need the PCI node to be in the DTB when the guest is created.
    > I agree this is not needed but removing it might create more complexity in the code.

    I would suggest we have it from day 0 as there is plenty of HW available which is not ECAM.

    Having vpci allows other bridges to be supported.

Yes, we agree. We will take care of that.

    >
    > Bertrand
    >
    >>
    >>> BAR value and IOMEM mapping:
    >>>
    >>> Linux guest will do the PCI enumeration based on the area reserved for ECAM and IOMEM ranges in the VPCI device tree node. Once a PCI device is assigned to the guest, XEN will map the guest PCI IOMEM region to the real physical IOMEM region only for the assigned devices.
    >>>
    >>> As of now we have not modified the existing VPCI code to map the guest PCI IOMEM region to the real physical IOMEM region. We used the existing guest "iomem" config option to map the region.
    >>> For example:
    >>> 	Guest reserved IOMEM region:  0x04020000
    >>> 	Real physical IOMEM region:   0x50000000
    >>> 	IOMEM size:                   128MB
    >>> 	iomem config will be:         iomem = ["0x50000,0x8000@0x4020"]
    >>>
    >>> There is no need to map the ECAM space as XEN already has access to the ECAM space, and XEN will trap ECAM accesses from the guest and will perform read/write on the VPCI bus.
    >>>
    >>> IOMEM access will not be trapped and the guest will directly access the IOMEM region of the assigned device via stage-2 translation.
    >>>
    >>> In the same way, we mapped the assigned devices' IRQs to the guest using the below config options.
    >>> 	irqs= [ NUMBER, NUMBER, ...]
    >>>
    >>> Limitation:
    >>> * Need to avoid the "iomem" and "irq" guest config options and map the IOMEM region and IRQ at the same time when the device is assigned to the guest using the "pci" guest config options when xl creates the domain.
    >>> * Emulated BAR values on the VPCI bus should reflect the IOMEM mapped address.
    >>> * X86 mapping code should be ported on Arm so that the stage-2 translation is adapted when the guest is doing a modification of the BAR registers values (to map the address requested by the guest for a specific IOMEM to the address actually contained in the real BAR register of the corresponding device).
    >>>
    >>> # SMMU configuration for guest:
    >>>
    >>> When assigning PCI devices to a guest, the SMMU configuration should be updated to remove access to the hardware domain memory

    So, as the hardware domain still has access to the PCI configuration space, we can potentially have a condition when Dom0 accesses the device. AFAIU, if we have pci front/back then before assigning the device to the guest we unbind it from the real driver and bind it to the back. Are we going to do something similar here?

Yes, we have to unbind the driver from the hardware domain before assigning the device to the guest. Also, as soon as Xen has done its PCI enumeration (either on boot or after a hypercall from the hardware domain), only Xen will access the physical PCI bus; everybody else will go through VPCI.

    Thank you,

    Oleksandr

    >>>   and add
    >>> configuration to have access to the guest memory with the proper address translation so that the device can do DMA operations from and to the guest memory only.
    >>>
    >>> # MSI/MSI-X support:
    >>> Not implemented and tested as of now.
    >>>
    >>> # ITS support:
    >>> Not implemented and tested as of now.
    [1] https://lists.xen.org/archives/html/xen-devel/2017-05/msg02674.html


From xen-devel-bounces@lists.xenproject.org Fri Jul 17 10:56:30 2020
Subject: Re: [PATCH 1/2] common: map_vcpu_info() cosmetics
To: Julien Grall <julien@xen.org>, Julien Grall <julien.grall.oss@gmail.com>
References: <fef45e49-bcb4-dc11-8f7f-b2f5e4bf1a73@suse.com>
 <2a0341c7-3836-a8c0-9516-b6a08e2720ec@suse.com>
 <20200716114128.GO7191@Air-de-Roger>
 <03a4d446-c26b-f171-57fd-a1eb13dad6c0@suse.com>
 <20200716144219.GQ7191@Air-de-Roger>
 <d64ee03f-4663-39ce-fd72-5702029e0182@suse.com>
 <CAJ=z9a2gCm7LNOpJUO4nbwUExmtd8KH2TBvt4VXCaqiAeXuCcg@mail.gmail.com>
 <7b9dc9b2-e117-6bbb-05a7-e82c4529e5e7@suse.com>
 <be5e1706-55de-e7b7-c302-5440f4c321a8@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <c8a341b7-0252-7c42-0375-130a1542c731@suse.com>
Date: Fri, 17 Jul 2020 12:56:19 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <be5e1706-55de-e7b7-c302-5440f4c321a8@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
Cc: Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 George Dunlap <George.Dunlap@eu.citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@citrix.com>,
 xen-devel <xen-devel@lists.xenproject.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 17.07.2020 11:22, Julien Grall wrote:
> 
> 
> On 17/07/2020 09:16, Jan Beulich wrote:
>> On 16.07.2020 18:17, Julien Grall wrote:
>>> On Thu, 16 Jul 2020, 17:01 Jan Beulich, <jbeulich@suse.com> wrote:
>>>
>>>> On 16.07.2020 16:42, Roger Pau Monné wrote:
>>>>> On Thu, Jul 16, 2020 at 01:48:51PM +0200, Jan Beulich wrote:
>>>>>> On 16.07.2020 13:41, Roger Pau Monné wrote:
>>>>>>> On Wed, Jul 15, 2020 at 12:15:10PM +0200, Jan Beulich wrote:
>>>>>>>> Use ENXIO instead of EINVAL to cover the two cases of the address not
>>>>>>>> satisfying the requirements. This will make an issue here better stand
>>>>>>>> out at the call site.
>>>>>>>
>>>>>>> Not sure whether I would use EFAULT instead of ENXIO, as the
>>>>>>> description of it is 'bad address' which seems more inline with the
>>>>>>> error that we are trying to report.
>>>>>>
>>>>>> The address isn't bad in the sense of causing a fault, it's just
>>>>>> that we elect to not allow it. Hence I don't think EFAULT is
>>>>>> suitable. I'm open to replacement suggestions for ENXIO, though.
>>>>>
>>>>> Well, using an address that's not properly aligned to the requirements
>>>>> of an interface would cause a fault? (in this case it's a software
>>>>> interface, but the concept applies equally).
>>>>
>>>> Not necessarily, see x86'es behavior. Also even on strict arches
>>>> it is typically possible to cover for the misalignment by using
>>>> suitable instructions; it's still an implementation choice to not
>>>> do so.
>>>
>>>
>>> I am not sure about your argument here... Yes it might be possible, but at
>>> what cost?
>>
>> The cost is what influences the decision whether to support it. Nevertheless
>> it remains an implementation decision rather than a hardware imposed
>> restriction, and hence I don't consider -EFAULT suitable here.
>>
>>>>> Anyway, not something worth arguing about I think, so unless someone
>>>>> else disagrees I'm fine with using ENXIO.
>>>>
>>>> Good, thanks.
>>>>
>>>
>>> -EFAULT can be described as "Bad address". I think it makes more sense than
>>> -ENXIO here even if it may not strictly result to a fault on some arch.
>>
>> As said - I don't consider EFAULT applicable here;
> 
> AFAICT, you don't consider it because you think that using the address 
> means it will always lead to a fault. However, this is just a strict 
> interpretation of the error code. A less strict interpretation is it 
> could be used for any address that is considered to be invalid.
> 
> -ENXIO makes less sense because the address exists. To re-use your 
> argument, this is just an implementation details.

To be honest, with all the errno values (and with how we use them in
Xen) I rarely pay attention to the text that's associated with them,
but rather what their symbolic name says. For ENXIO, I don't consider
"No such device or address" any more sensible than "Bad address" for
EFAULT.

Since I'm against EFAULT (and EINVAL) and you're against ENXIO, how
about you suggest a 3rd alternative? EPERM comes to mind, but could
easily be mistaken for yet something else. My goal really is to use
an error code here which makes it immediately clear what the actual
problem was, with no ambiguity to other possible error sources on
this hypercall handling path. I don't really care about which of the
ones we use here that aren't already used anywhere on this path. I'd
even be fine with presumably (at least to some people) absurd ones
like EILSEQ or EXDEV, which we use elsewhere, and hardly in the sense
they were originally meant, but again for the purpose of making the
cause of the error easily identifiable.

For this purpose, to me ENXIO seemed to be a reasonable fit. So again
- I'm open to suggestions, but it has to be a not commonly used error
code.
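As a sketch of the pattern being discussed, not actual Xen code: a check that rejects a misaligned guest-supplied address with an error code not used elsewhere on the path, so the failure stands out at the call site. The function name and parameters here are purely illustrative.

```c
#include <errno.h>
#include <stdint.h>

/* Illustrative only: reject an address that violates the interface's
 * alignment requirement with an error code that is otherwise unused on
 * this hypothetical handling path, making the cause unambiguous. */
static int check_gaddr(uint64_t gaddr, uint64_t align)
{
    if ( gaddr & (align - 1) )
        return -ENXIO; /* deliberately uncommon on this path */
    return 0;
}
```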

Jan


From xen-devel-bounces@lists.xenproject.org Fri Jul 17 11:17:06 2020
Date: Fri, 17 Jul 2020 13:16:44 +0200
From: Roger Pau Monné <roger.pau@citrix.com>
To: Rahul Singh <Rahul.Singh@arm.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 nd <nd@arm.com>, Stefano Stabellini <sstabellini@kernel.org>, Julien
 Grall <julien.grall.oss@gmail.com>
Subject: Re: RFC: PCI devices passthrough on Arm design proposal
Message-ID: <20200717111644.GS7191@Air-de-Roger>

I've wrapped the email to 80 columns in order to make it easier to
reply.

Thanks for doing this, I think the design is good, I have some
questions below so that I understand the full picture.

On Thu, Jul 16, 2020 at 05:10:05PM +0000, Rahul Singh wrote:
> Hello All,
> 
> Following up on the discussion on PCI passthrough support on ARM that we
> had at the Xen summit, we are submitting a Request For Comment and a
> design proposal for PCI passthrough support on ARM. Feel free to
> give your feedback.
> 
> The following describes the high-level design proposal of the PCI
> passthrough support and how the different modules within the system
> interact with each other to assign a particular PCI device to the
> guest.
> 
> # Title:
> 
> PCI devices passthrough on Arm design proposal
> 
> # Problem statement:
> 
> On ARM there is no support to assign a PCI device to a guest. PCI
> device passthrough capability allows guests to have full access to
> some PCI devices. PCI device passthrough allows PCI devices to
> appear and behave as if they were physically attached to the guest
> operating system, and provides full isolation of the PCI devices.
> 
> A goal of this work is to also support Dom0less configurations, so the
> PCI backend/frontend drivers used on x86 shall not be used on Arm.
> It will use the existing vPCI concept from x86 and implement the
> virtual PCI bus through IO emulation, such that only assigned devices
> are visible to the guest and the guest can use a standard PCI
> driver.
> 
> Only Dom0 and Xen will have access to the real PCI bus; the guest will
> have direct access to the assigned device itself. IOMEM
> will be mapped to the guest and interrupts will be redirected to the
> guest. The SMMU has to be configured correctly for DMA
> transactions to work.
> 
> ## Current state: Draft version
> 
> # Proposer(s): Rahul Singh, Bertrand Marquis
> 
> # Proposal:
> 
> This section will describe the different subsystems needed to support
> PCI device passthrough and how these subsystems interact with each
> other to assign a device to the guest.
> 
> # PCI Terminology:
> 
> Host Bridge: The host bridge allows the PCI devices to talk to the rest
> of the computer.
> 
> ECAM: ECAM (Enhanced Configuration Access Mechanism) is a mechanism
> developed to allow PCIe to access configuration space. The space
> available per function is 4KB.
> 
> # Discovering PCI Host Bridge in XEN:
> 
> In order to support PCI passthrough, XEN should be aware of all
> the PCI host bridges available on the system and should be able to
> access the PCI configuration space. Only ECAM configuration access is
> supported as of now. XEN during boot will read the PCI device tree
> node “reg” property and will map the ECAM space into the XEN memory
> using the “ioremap_nocache()” function.

What about ACPI? I think you should also mention the MMCFG table,
which should contain the information about the ECAM region(s) (or at
least that's how it works on x86). Just realized that you don't
support ACPI ATM, so you can ignore this comment.
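As background for the 4KB-per-function ECAM layout mentioned in the proposal: the config space offset of a function is fixed by the bus/device/function numbers, so mapping the "reg" region gives Xen access to every function behind that bridge. A generic sketch of the standard computation (`ecam_addr` is an illustrative helper, not Xen code):

```c
#include <stdint.h>

/* Standard ECAM layout: 4KB of config space per function, located at
 * base + (bus << 20 | device << 15 | function << 12 | register). */
static uint64_t ecam_addr(uint64_t base, unsigned int bus,
                          unsigned int dev, unsigned int fn,
                          unsigned int reg)
{
    return base + ((uint64_t)bus << 20) + ((uint64_t)dev << 15) +
           ((uint64_t)fn << 12) + reg;
}
```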

> 
> If there is more than one segment on the system, XEN will read the
> “linux,pci-domain” property from the device tree node and configure
> the host bridge segment number accordingly. All the PCI device tree
> nodes should have the “linux,pci-domain” property so that there will
> be no conflicts. During hardware domain boot, Linux will also use the
> same “linux,pci-domain” property and assign the domain number to the
> host bridge.

So it's my understanding that the PCI domain (or segment) is just an
abstract concept to differentiate all the Root Complexes present on
the system, but the host bridge itself is not aware of the segment
assigned to it in any way.

I'm not sure Xen and the hardware domain having matching segments is a
requirement, if you use vPCI you can match the segment (from Xen's
PoV) by just checking from which ECAM region the access has been
performed.

The only reason to require matching segment values between Xen and the
hardware domain is to allow using hypercalls against the PCI devices,
ie: to be able to use hypercalls to assign a device to a domain from
the hardware domain.

I have 0 understanding of DT or its spec, but why does this have a
'linux,' prefix? The segment number is part of the PCI spec, and not
something specific to Linux IMO.

> 
> When Dom0 tries to access the PCI config space of the device, XEN
> will find the corresponding host bridge based on segment number and
> access the corresponding config space assigned to that bridge.
> 
> Limitation:
> * Only PCI ECAM configuration space access is supported.
> * Device tree binding is supported as of now, ACPI is not supported.
> * Need to port the PCI host bridge access code to XEN to access the
>   configuration space (the generic one works but lots of platforms will
>   require some specific code or quirks).
> 
> # Discovering PCI devices:
> 
> PCI-PCIe enumeration is a process of detecting devices connected to
> its host. It is the responsibility of the hardware domain or boot
> firmware to do the PCI enumeration and configure the BAR, PCI
> capabilities, and MSI/MSI-X configuration.
> 
> PCI-PCIe enumeration in XEN is not feasible for the configuration
> part as it would require a lot of code inside Xen which would
> require a lot of maintenance. Added to this, many platforms require
> some quirks in that part of the PCI code, which would greatly increase
> Xen's complexity. Once the hardware domain enumerates a device, it
> will communicate it to XEN via the below hypercall.
> 
> #define PHYSDEVOP_pci_device_add        25
> struct physdev_pci_device_add {
>     uint16_t seg;
>     uint8_t bus;
>     uint8_t devfn;
>     uint32_t flags;
>     struct {
>         uint8_t bus;
>         uint8_t devfn;
>     } physfn;
>     /*
>      * Optional parameters array.
>      * First element ([0]) is PXM domain associated with the device (if
>      * XEN_PCI_DEV_PXM is set)
>      */
>     uint32_t optarr[XEN_FLEX_ARRAY_DIM];
> };
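For illustration, populating the hypercall arguments for a hypothetical device 0000:03:00.0 could look as below. This is a sketch reproducing the struct above without the optional optarr[] tail; `make_add()` and the local `PCI_DEVFN()` macro are illustrative names, not the real Xen/Linux definitions. devfn packs the 5-bit slot and 3-bit function numbers.

```c
#include <stdint.h>

/* devfn encoding: 5-bit slot in the high bits, 3-bit function below. */
#define PCI_DEVFN(slot, fn) ((((slot) & 0x1f) << 3) | ((fn) & 0x7))

/* Local copy of the hypercall argument struct from the proposal,
 * with the optional optarr[] tail omitted for brevity. */
struct physdev_pci_device_add {
    uint16_t seg;
    uint8_t bus;
    uint8_t devfn;
    uint32_t flags;
    struct { uint8_t bus; uint8_t devfn; } physfn;
};

static struct physdev_pci_device_add make_add(uint16_t seg, uint8_t bus,
                                              uint8_t slot, uint8_t fn)
{
    struct physdev_pci_device_add add = {
        .seg = seg, .bus = bus, .devfn = PCI_DEVFN(slot, fn),
    };
    return add;
}
```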
> 
> As the hypercall argument has the PCI segment number, XEN will
> access the PCI config space based on this segment number and find
> the host-bridge corresponding to this segment number. At this stage
> host bridge is fully initialized so there will be no issue to access
> the config space.
> 
> XEN will add the PCI devices to the linked list maintained in XEN
> using the function pci_add_device(). XEN will be aware of all the
> PCI devices on the system and all the devices will be added to the
> hardware domain.
> 
> Limitations:
> * When PCI devices are added to XEN, MSI capability is
>   not initialized inside XEN and not supported as of now.

I assume you will mask such capability and will prevent the guest (or
hardware domain) from interacting with it?

> * ACS capability is disabled for ARM as of now, as after enabling it
>   devices are not accessible.
> * A Dom0less implementation will require the capability inside Xen
>   to discover the PCI devices (without depending on Dom0 to declare them
>   to Xen).

I assume the firmware will properly initialize the host bridge and
configure the resources for each device, so that Xen just has to walk
the PCI space and find the devices.

TBH that would be my preferred method, because then you can get rid of
the hypercall.

Is there any way for Xen to know whether the host bridge is properly
set up and thus the PCI bus can be scanned?

That way Arm could do something similar to x86, where Xen will scan
the bus and discover devices, but you could still provide the
hypercall in case the bus cannot be scanned by Xen (because it hasn't
been setup).

> 
> # Enable the existing x86 virtual PCI support for ARM:
> 
> The existing vPCI support available for x86 is adapted for Arm. When
> the device is added to XEN via the hypercall
> “PHYSDEVOP_pci_device_add”, a VPCI handler for the config space access
> is added to the PCI device to emulate the PCI devices.
> 
> An MMIO trap handler for the PCI ECAM space is registered in XEN so
> that when the guest tries to access the PCI config space, XEN will
> trap the access and emulate reads/writes using the VPCI and not the
> real PCI hardware.
> 
> Limitation:
> * No handler is registered for the MSI configuration.

But you need to mask MSI/MSI-X capabilities in the config space in
order to prevent access from domains? (and by mask I mean remove from
the list of capabilities and prevent reads/writes to that
configuration space).

Note this is already implemented for x86, and I've tried to add arch_
hooks for arch specific stuff so that it could be reused by Arm. But
maybe this would require a different design document?

> * Only legacy interrupt is supported and tested as of now, MSI is not
>   implemented and tested.
> 
> # Assign the device to the guest:
> 
> Assigning a PCI device from the hardware domain to the guest is done
> using the below guest config option. When the xl tool creates the domain,
> PCI devices will be assigned to the guest VPCI bus.
>
> pci=[ "PCI_SPEC_STRING", "PCI_SPEC_STRING", ...]
> 
> The guest will only be able to access the assigned devices and see the
> bridges. The guest will not be able to access or see the devices that
> are not assigned to it.
> 
> Limitation:
> * As of now all the bridges in the PCI bus are seen by
>   the guest on the VPCI bus.

I don't think you need all of them, just the ones that are higher up
on the hierarchy of the device you are trying to passthrough?

Which kind of access do guest have to PCI bridges config space?

This should be limited to read-only accesses in order to be safe.

Emulating a PCI bridge in Xen using vPCI shouldn't be that
complicated, so you could likely replace the real bridges with
emulated ones. Or even provide a fake topology to the guest using an
emulated bridge.

> 
> # Emulated PCI device tree node in libxl:
> 
> Libxl is creating a virtual PCI device tree node in the device tree
> to enable the guest OS to discover the virtual PCI during guest
> boot. We introduced the new config option [vpci="pci_ecam"] for
> guests. When this config option is enabled in a guest configuration,
> a PCI device tree node will be created in the guest device tree.
> 
> A new area has been reserved in the arm guest physical map at which
> the VPCI bus is declared in the device tree (reg and ranges
> parameters of the node). A trap handler for the PCI ECAM access from
> guest has been registered at the defined address and redirects
> requests to the VPCI driver in Xen.

Can't you deduce the requirement of such DT node based on the presence
of a 'pci=' option in the same config file?

Also I wouldn't discard that in the future you might want to use
different emulators for different devices, so it might be helpful to
introduce something like:

pci = [ '08:00.0,backend=vpci', '09:00.0,backend=xenpt', '0a:00.0,backend=qemu', ... ]

For the time being Arm will require backend=vpci for all the passed
through devices, but I wouldn't rule out this changing in the future.

> Limitation:
> * Only one PCI device tree node is supported as of now.
> 
> BAR value and IOMEM mapping:
> 
> A Linux guest will do the PCI enumeration based on the area reserved
> for ECAM and the IOMEM ranges in the VPCI device tree node. Once a PCI
> device is assigned to the guest, XEN will map the guest PCI IOMEM
> region to the real physical IOMEM region only for the assigned
> devices.

PCI IOMEM == BARs? Or are you referring to the ECAM access window?

> As of now we have not modified the existing VPCI code to map the
> guest PCI IOMEM region to the real physical IOMEM region. We used
> the existing guest “iomem” config option to map the region. For
> example:
> Guest reserved IOMEM region: 0x04020000
> Real physical IOMEM region:  0x50000000
> IOMEM size: 128MB
> iomem config will be: iomem = ["0x50000,0x8000@0x4020"]
> 
> There is no need to map the ECAM space as XEN already have access to
> the ECAM space and XEN will trap ECAM accesses from the guest and
> will perform read/write on the VPCI bus.
> 
> IOMEM access will not be trapped and the guest will directly access
> the IOMEM region of the assigned device via stage-2 translation.
> 
> In the same way, we mapped the assigned device's IRQs to the guest using
> the below config option: irqs = [ NUMBER, NUMBER, ...]

Are you providing this for the hardware domain also? Or are irqs
fetched from the DT in that case?

> Limitation:
> * Need to avoid the “iomem” and “irq” guest config
>   options and map the IOMEM region and IRQ at the same time when
>   device is assigned to the guest using the “pci” guest config options
>   when xl creates the domain.
> * Emulated BAR values on the VPCI bus should reflect the IOMEM mapped
>   address.

It was my understanding that you would identity map the BAR into the
domU stage-2 translation, and that changes by the guest won't be
allowed.

> * X86 mapping code should be ported on Arm so that the stage-2
>   translation is adapted when the guest is doing a modification of the
>   BAR registers values (to map the address requested by the guest for
>   a specific IOMEM to the address actually contained in the real BAR
>   register of the corresponding device).

I think the above means that you want to allow the guest to change the
position of the BAR in the stage-2 translation _without_ allowing it
to change the position of the BAR in the physical memory map, is that
correct?
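The behaviour described in that question (the guest moves the BAR in its own physical map while the real BAR stays put) can be sketched as follows. `struct vbar` and `vbar_write()` are illustrative names, and the p2m calls are stubbed out as comments; this is not the x86 vPCI implementation.

```c
#include <stdint.h>

/* Illustrative emulated-BAR state: where the guest sees the BAR vs.
 * where the real BAR lives. On a guest write only the guest-visible
 * address changes; the stage-2 translation is updated to keep pointing
 * at the unchanged physical address. */
struct vbar {
    uint64_t guest_addr; /* base in the guest physical map */
    uint64_t phys_addr;  /* real BAR value, never changed by the guest */
    uint64_t size;       /* power of two, so BARs are size-aligned */
};

static void vbar_write(struct vbar *bar, uint64_t new_guest_addr)
{
    /* p2m_unmap(bar->guest_addr, bar->size);               (hypothetical) */
    bar->guest_addr = new_guest_addr & ~(bar->size - 1);
    /* p2m_map(bar->guest_addr, bar->phys_addr, bar->size); (hypothetical) */
}
```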

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Fri Jul 17 11:27:10 2020
Date: Fri, 17 Jul 2020 12:26:57 +0100
From: Julien Grall <julien@xen.org>
To: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Stefano Stabellini <sstabellini@kernel.org>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 nd <nd@arm.com>, Rahul Singh <Rahul.Singh@arm.com>,
 Roger Pau Monné <roger.pau@citrix.com>,
 Julien Grall <julien.grall.oss@gmail.com>
Subject: Re: RFC: PCI devices passthrough on Arm design proposal
Message-ID: <547d91e8-a6fe-6430-b020-f9c550bfc22b@xen.org>



On 17/07/2020 08:41, Oleksandr Andrushchenko wrote:
>>> We need to come up with something similar for dom0less too. It could be
>>> exactly the same thing (a list of BDFs as strings as a device tree
>>> property) or something else if we can come up with a better idea.
>> Fully agree.
>> Maybe a tree topology could allow more possibilities (like giving BAR values) in the future.
>>>
>>>> # Emulated PCI device tree node in libxl:
>>>>
>>>> Libxl is creating a virtual PCI device tree node in the device tree to enable the guest OS to discover the virtual PCI during guest boot. We introduced the new config option [vpci="pci_ecam"] for guests. When this config option is enabled in a guest configuration, a PCI device tree node will be created in the guest device tree.
>>>>
>>>> A new area has been reserved in the arm guest physical map at which the VPCI bus is declared in the device tree (reg and ranges parameters of the node). A trap handler for the PCI ECAM access from guest has been registered at the defined address and redirects requests to the VPCI driver in Xen.
>>>>
>>>> Limitation:
>>>> * Only one PCI device tree node is supported as of now.
>>> I think vpci="pci_ecam" should be optional: if pci=[ "PCI_SPEC_STRING",
>>> ...] is specified, then vpci="pci_ecam" is implied.
>>>
>>> vpci="pci_ecam" is only useful one day in the future when we want to be
>>> able to emulate other non-ecam host bridges. For now we could even skip
>>> it.
>> This would create a problem if xl is used to add a PCI device as we need the PCI node to be in the DTB when the guest is created.
>> I agree this is not needed but removing it might create more complexity in the code.
> 
> I would suggest we have it from day 0 as there are plenty of HW available which is not ECAM.
> 
> Having vpci allows other bridges to be supported

So I can understand why you would want to have a driver for non-ECAM 
host PCI controller. However, why would you want to emulate a non-ECAM 
PCI controller to a guest?

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Jul 17 11:42:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jul 2020 11:42:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jwOkg-0008Or-Lm; Fri, 17 Jul 2020 11:42:06 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6P0s=A4=epam.com=prvs=5467d6a743=oleksandr_andrushchenko@srs-us1.protection.inumbo.net>)
 id 1jwOkf-0008Om-78
 for xen-devel@lists.xenproject.org; Fri, 17 Jul 2020 11:42:05 +0000
X-Inumbo-ID: 88007af8-c822-11ea-95d4-12813bfff9fa
Received: from mx0a-0039f301.pphosted.com (unknown [148.163.133.242])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 88007af8-c822-11ea-95d4-12813bfff9fa;
 Fri, 17 Jul 2020 11:42:02 +0000 (UTC)
Received: from pps.filterd (m0174679.ppops.net [127.0.0.1])
 by mx0a-0039f301.pphosted.com (8.16.0.42/8.16.0.42) with SMTP id
 06HBfVxf012038; Fri, 17 Jul 2020 11:41:57 GMT
Received: from eur05-db8-obe.outbound.protection.outlook.com
 (mail-db8eur05lp2106.outbound.protection.outlook.com [104.47.17.106])
 by mx0a-0039f301.pphosted.com with ESMTP id 32a1nrx44u-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Fri, 17 Jul 2020 11:41:57 +0000
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=ZIfWzTERBW5KHbenM2b0UOPO0Vj/A8w6GPXowLBSVuzsxxrarAtLlDYtjJ7XUzKZouF7GW2h8AcTKDr8UulMcj0I4jp6GQzgJRsuRi6hwgcoIZS9N9dB1h7xAgF7m6bz0emQs86DoC2Ul2wLQEat1LlHLwQuZGzkmSBKYwAHzYaP+F4CW3mkIqAyP1Rfy7C3pylvvKPwXpc6Y+6Py0U+Uq0GlUNQmMwrfTi4G+E5aiFEfu5mZFYEs5F3/InIcH7VhuB2hgH5fCHbC6uGzi0DjMLfCiuWRwZEY+A1yKLumRnO6mNSCY3J8UphvnIYIexnDRdra4Za730RLAVbXAv3cg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; 
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=8RIiAIK16zN+pwbDHiqilk8OyUtBGYsWq8/wR1DF/+Y=;
 b=Yjoq4qd7n0VMMVdt8Q5I+4gVezOnQMtebupv7+UN2U+914VYNBpWr2AbEC4eQsuSuu9RrdFSb4uIiyCfj6Yr4Ucjx11vkOTdck93kfdQEoVhxetOOB76rFaGyDxqo43u1rAslUipSZ5MG13vTkFPVHMhaqOymLqdJ1vGRiUA/j5YXaAJ9KoMeQ/llmvVgoXvI7/LJw7/C77FxlBAXhUSvkyU30UTa9ebgOlnhX2sL75F96J7FnQ8aW7LatnY0dO24hSYfrtkbwg6cjYAxZ6RQPYyCRPMZfPJCjtH2zTXuKiO164Z5d8u1LrYtbUXbxkrxuDegp5MRUPTKszRlORpkQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=epam.com; dmarc=pass action=none header.from=epam.com;
 dkim=pass header.d=epam.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=epam.com; s=selector1; 
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=8RIiAIK16zN+pwbDHiqilk8OyUtBGYsWq8/wR1DF/+Y=;
 b=LknPjq56u7A2QVAmPNFcak2tUHGh6ytJ54tvTm83cn3rzXq5BH4coZTYICajxvGfojLfuH+CcGfv2+89Seqg1Tc4eBr7m7yYFhh/i53p3Hi7MoVjqv0rruUfPZCID6IiwN2p+MU0No9FxIenWYmMjc+NNazgzh6paUyUyUFw44VpYFgKrBbAPek1FtgIqw/EVLDaHlNqY2LOH9/mSKHHCvFyy+GSp1evxJK2pxz5IusooANyAw7qORE8bfEvZzwwbKkTU0/DPF275A8ZC/+nfCRJFizMfEq2XH/XTvPDWSUO9g0S/XWYq9He0XqTozLURY1CzvCdk7/XFHMgYDHiLQ==
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com (2603:10a6:20b:153::17)
 by AM0PR0302MB3425.eurprd03.prod.outlook.com (2603:10a6:208:3::23)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3174.20; Fri, 17 Jul
 2020 11:41:53 +0000
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::21e5:6d27:5ba0:f508]) by AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::21e5:6d27:5ba0:f508%9]) with mapi id 15.20.3195.022; Fri, 17 Jul 2020
 11:41:53 +0000
From: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
To: Julien Grall <julien@xen.org>, Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Stefano Stabellini <sstabellini@kernel.org>
Subject: Re: RFC: PCI devices passthrough on Arm design proposal
Thread-Topic: RFC: PCI devices passthrough on Arm design proposal
Thread-Index: AQHWXA26EtJZFfhoUEGdjsOmzYm6XqkLokCAgAAEKwA=
Date: Fri, 17 Jul 2020 11:41:53 +0000
Message-ID: <0cfe750e-2213-d6d3-80c5-494ede727304@epam.com>
References: <3F6E40FB-79C5-4AE8-81CA-E16CA37BB298@arm.com>
 <BD475825-10F6-4538-8294-931E370A602C@arm.com>
 <E9CBAA57-5EF3-47F9-8A40-F5D7816DB2A4@arm.com>
 <alpine.DEB.2.21.2007161258160.3886@sstabellini-ThinkPad-T480s>
 <BB4645DF-A040-4912-AC35-C98105917FD5@arm.com>
 <f69f86dc-7a8c-4c25-c059-0e391de51d7f@epam.com>
 <547d91e8-a6fe-6430-b020-f9c550bfc22b@xen.org>
In-Reply-To: <547d91e8-a6fe-6430-b020-f9c550bfc22b@xen.org>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
authentication-results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=epam.com;
x-originating-ip: [185.199.97.5]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: c775fe10-41ea-44dd-35c0-08d82a4666f1
x-ms-traffictypediagnostic: AM0PR0302MB3425:
x-microsoft-antispam-prvs: <AM0PR0302MB3425C5A416F7CE40E7D59023E77C0@AM0PR0302MB3425.eurprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:10000;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info: vQzahmauYlanfE25dCjemKfa+JZGE39EM214gmB0Fw6rUeG3ii1ylfsCVUuo9QOjFlD5TXcqiaxNLCR7T1OS/Q+O0dnf1Zl/rSizlCtr/ITvUZA1UCQlLLtDJrqSQPp2EQGwzqocHtDj1MOfda3iqrcZ6ymX64f9bA7IxRU6FmtTJpXxCa9NO2/JtI4RXX5Uvchfu+Tr9H7duzCaL/lkACngEvAz2TMKDFXi1yYfR0QVFMUBHw8ZsNz02yoso5JnrwXhHXcQbhpvkFE1ROR0eVqMr6hRDAXgFVbUDvHESUlcLwmQs6eE3r3fbBWMyrROVHv92Ij7oYzWNmv9a1cHag==
x-forefront-antispam-report: CIP:255.255.255.255; CTRY:; LANG:en; SCL:1; SRV:;
 IPV:NLI; SFV:NSPM; H:AM0PR03MB6324.eurprd03.prod.outlook.com; PTR:; CAT:NONE;
 SFTY:;
 SFS:(4636009)(366004)(39860400002)(346002)(376002)(396003)(136003)(4326008)(316002)(478600001)(8676002)(2906002)(91956017)(110136005)(31696002)(54906003)(76116006)(8936002)(6512007)(64756008)(66446008)(66476007)(66556008)(71200400001)(83380400001)(186003)(31686004)(2616005)(6486002)(53546011)(6506007)(66946007)(86362001)(26005)(36756003)(5660300002);
 DIR:OUT; SFP:1101; 
x-ms-exchange-antispam-messagedata: WBGeEKacMFbouKDslMpx/xREzUONxm8mDVzIsWSmZ43deIbteocyIRpCErePch53J4tPelXAGnkIFNhBVH2jphSpiYQAzEOjwQxfAQN0/vc1UL793GK51YKs3bDUYW631Aa9ebE/JCl1ekZ4jYIP5hM2W/yyxItCSTT7l0+do03yTnsbONkSvygYxe2HfjzIUXrZGORYxaFMDVv0rMSG2METsY3QVA0F31cXgNrtm23DHnu/Qo6E/6qQPjV5mU0OQlbeQuap1XlxwvRcWHU3wPCt793tW/UETULS2rD3zCBDhQENT5lvoowFtCDDvkwWNCgEPqJTSTdgCpPTQRHHaee1U1IxSpnMSWofhzhZKHuMkH43PBZ1M94e/AVfAmOCQRR1XPZQnf8/+psW1N9PecxJA7p99RGresgAd8nd7BZ+djJysrghztvEm5HYZSgimwM1e9Ezwpx3z6GyQF5gWqRRuW1+aBu1Ew95RIRYgNk=
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="utf-8"
Content-ID: <E92BDBDA81883048B0B7E3A1C2079E98@eurprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: epam.com
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 nd <nd@arm.com>, Rahul Singh <Rahul.Singh@arm.com>,
 Julien Grall <julien.grall.oss@gmail.com>,
 Roger Pau Monné <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 7/17/20 2:26 PM, Julien Grall wrote:
>
>
> On 17/07/2020 08:41, Oleksandr Andrushchenko wrote:
>>>> We need to come up with something similar for dom0less too. It could be
>>>> exactly the same thing (a list of BDFs as strings as a device tree
>>>> property) or something else if we can come up with a better idea.
>>> Fully agree.
>>> Maybe a tree topology could allow more possibilities (like giving BAR values) in the future.
>>>>
>>>>> # Emulated PCI device tree node in libxl:
>>>>>
>>>>> Libxl is creating a virtual PCI device tree node in the device tree to enable the guest OS to discover the virtual PCI during guest boot. We introduced the new config option [vpci="pci_ecam"] for guests. When this config option is enabled in a guest configuration, a PCI device tree node will be created in the guest device tree.
>>>>>
>>>>> A new area has been reserved in the arm guest physical map at which the VPCI bus is declared in the device tree (reg and ranges parameters of the node). A trap handler for the PCI ECAM access from guest has been registered at the defined address and redirects requests to the VPCI driver in Xen.
>>>>>
>>>>> Limitation:
>>>>> * Only one PCI device tree node is supported as of now.
>>>> I think vpci="pci_ecam" should be optional: if pci=[ "PCI_SPEC_STRING",
>>>> ...] is specified, then vpci="pci_ecam" is implied.
>>>>
>>>> vpci="pci_ecam" is only useful one day in the future when we want to be
>>>> able to emulate other non-ECAM host bridges. For now we could even skip
>>>> it.
>>> This would create a problem if xl is used to add a PCI device, as we need the PCI node to be in the DTB when the guest is created.
>>> I agree this is not needed but removing it might create more complexity in the code.
>>
>> I would suggest we have it from day 0 as there is plenty of HW available which is not ECAM.
>>
>> Having vpci allows other bridges to be supported.
>
> So I can understand why you would want to have a driver for a non-ECAM host PCI controller. However, why would you want to emulate a non-ECAM PCI controller to a guest?
Indeed. No need to emulate non-ECAM.
>
> Cheers,
>


From xen-devel-bounces@lists.xenproject.org Fri Jul 17 12:47:31 2020
From: Rahul Singh <Rahul.Singh@arm.com>
To: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
Subject: Re: RFC: PCI devices passthrough on Arm design proposal
Thread-Topic: RFC: PCI devices passthrough on Arm design proposal
Thread-Index: AQHWW4kYTVU0hTDyYEitKlUuU5vZlKkKf2uAgAACLICAAC0rgIAAqC0AgAANcgCAAFU7gA==
Date: Fri, 17 Jul 2020 12:46:51 +0000
Message-ID: <E4755A88-798C-42FF-8DAD-DC4FD3C7B571@arm.com>
References: <3F6E40FB-79C5-4AE8-81CA-E16CA37BB298@arm.com>
 <BD475825-10F6-4538-8294-931E370A602C@arm.com>
 <E9CBAA57-5EF3-47F9-8A40-F5D7816DB2A4@arm.com>
 <alpine.DEB.2.21.2007161258160.3886@sstabellini-ThinkPad-T480s>
 <BB4645DF-A040-4912-AC35-C98105917FD5@arm.com>
 <f69f86dc-7a8c-4c25-c059-0e391de51d7f@epam.com>
In-Reply-To: <f69f86dc-7a8c-4c25-c059-0e391de51d7f@epam.com>
Accept-Language: en-US
Content-Language: en-US
Content-Type: multipart/alternative;
 boundary="_000_E4755A88798C42FF8DADDC4FD3C7B571armcom_"
MIME-Version: 1.0
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Roger Pau Monné <roger.pau@citrix.com>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 nd <nd@arm.com>, Julien Grall <julien.grall.oss@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

--_000_E4755A88798C42FF8DADDC4FD3C7B571armcom_
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64

Sorry for the previous mail formatting issue. Replying again so that the comment history is not missed.

On 17 Jul 2020, at 8:41 am, Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com> wrote:


On 7/17/20 9:53 AM, Bertrand Marquis wrote:

On 16 Jul 2020, at 22:51, Stefano Stabellini <sstabellini@kernel.org> wrote:

On Thu, 16 Jul 2020, Rahul Singh wrote:
Hello All,

Following up on the discussion on PCI Passthrough support on ARM that we had at the XEN summit, we are submitting a Request For Comment and a design proposal for PCI passthrough support on ARM. Feel free to give your feedback.

The following describes the high-level design proposal of the PCI passthrough support and how the different modules within the system interact with each other to assign a particular PCI device to the guest.
I think the proposal is good and I only have a couple of thoughts to
share below.


# Title:

PCI devices passthrough on Arm design proposal

# Problem statement:

On ARM there is no support to assign a PCI device to a guest. PCI device passthrough capability allows guests to have full access to some PCI devices. PCI device passthrough allows PCI devices to appear and behave as if they were physically attached to the guest operating system and provides full isolation of the PCI devices.

The goal of this work is to also support Dom0Less configuration, so the PCI backend/frontend drivers used on x86 shall not be used on Arm. It will use the existing VPCI concept from x86 and implement the virtual PCI bus through IO emulation such that only assigned devices are visible to the guest and the guest can use the standard PCI driver.

Only Dom0 and Xen will have access to the real PCI bus,

So, in this case how is the access serialization going to work?

I mean, what if both Xen and Dom0 are about to access the bus at the same time?

There was a discussion on the same before [1] and IMO it was not decided on

how to deal with that.

Dom0 also accesses the real PCI hardware via the MMIO config space trap in XEN. We will take care of the config space access lock in XEN.

A guest will have direct access to the assigned device itself. IOMEM memory will be mapped to the guest and interrupts will be redirected to the guest. The SMMU has to be configured correctly to have DMA transactions.

## Current state: Draft version

# Proposer(s): Rahul Singh, Bertrand Marquis

# Proposal:

This section will describe the different subsystems to support the PCI device passthrough and how these subsystems interact with each other to assign a device to the guest.

# PCI Terminology:

Host Bridge: The host bridge allows the PCI devices to talk to the rest of the computer.
ECAM: ECAM (Enhanced Configuration Access Mechanism) is a mechanism developed to allow PCIe to access configuration space. The space available per function is 4KB.

# Discovering PCI Host Bridge in XEN:

In order to support PCI passthrough, XEN should be aware of all the PCI host bridges available on the system and should be able to access the PCI configuration space. ECAM configuration access is supported as of now. XEN during boot will read the PCI device tree node "reg" property and will map the ECAM space to the XEN memory using the "ioremap_nocache()" function.

If there is more than one segment on the system, XEN will read the "linux,pci-domain" property from the device tree node and configure the host bridge segment number accordingly. All the PCI device tree nodes should have the "linux,pci-domain" property so that there will be no conflicts. During hardware domain boot, Linux will also use the same "linux,pci-domain" property and assign the domain number to the host bridge.

When Dom0 tries to access the PCI config space of the device, XEN will find the corresponding host bridge based on the segment number and access the corresponding config space assigned to that bridge.

Limitation:
* Only PCI ECAM configuration space access is supported.

This is really the limitation which we have to think of now as there are lots of

HW w/o ECAM support, and not providing a way to use PCI(e) on those boards

would render them useless wrt PCI. I don't suggest to have some real code for

that, but I would suggest we design some interfaces from day 0.

At the same time I do understand that supporting non-ECAM bridges is a pain.

Adding any type of host bridge is supported; we did put the ECAM-specific code in an identified source file so that other types can be implemented. As of now we have implemented the ECAM support and we are right now implementing support for N1SDP, which requires specific quirks which will be done in a separate source file.


* Device tree binding is supported as of now, ACPI is not supported.
* Need to port the PCI host bridge access code to XEN to access the configuration space (the generic one works but lots of platforms will require some specific code or quirks).

# Discovering PCI devices:

PCI-PCIe enumeration is a process of detecting devices connected to its host. It is the responsibility of the hardware domain or boot firmware to do the PCI enumeration and configure
Great, so we assume here that the bootloader can do the enumeration and configuration...
 the BAR, PCI capabilities, and MSI/MSI-X configuration.

PCI-PCIe enumeration in XEN is not feasible for the configuration part as it would require a lot of code inside Xen which would require a lot of maintenance. Added to this, many platforms require some quirks in that part of the PCI code which would greatly increase Xen complexity. Once the hardware domain enumerates the devices it will communicate them to XEN via the below hypercall.

#define PHYSDEVOP_pci_device_add        25
struct physdev_pci_device_add {
    uint16_t seg;
    uint8_t bus;
    uint8_t devfn;
    uint32_t flags;
    struct {
        uint8_t bus;
        uint8_t devfn;
    } physfn;
    /*
     * Optional parameters array.
     * First element ([0]) is PXM domain associated with the device
     * (if XEN_PCI_DEV_PXM is set)
     */
    uint32_t optarr[XEN_FLEX_ARRAY_DIM];
};

As the hypercall argument has the PCI segment number, XEN will access the PCI config space based on this segment number and find the host bridge corresponding to this segment number. At this stage the host bridge is fully initialized, so there will be no issue in accessing the config space.

XEN will add the PCI devices to the linked list maintained in XEN using the function pci_add_device(). XEN will be aware of all the PCI devices on the system and all the devices will be added to the hardware domain.

Limitations:
* When PCI devices are added to XEN, the MSI capability is not initialized inside XEN and not supported as of now.
* The ACS capability is disabled for ARM as of now, as after enabling it devices are not accessible.
* The Dom0Less implementation will require the capability inside Xen to discover the PCI devices (without depending on Dom0 to declare them to Xen).
I think it is fine to assume that for dom0less the "firmware" has taken
care of setting up the BARs correctly. Starting with that assumption, it
looks like it should be "easy" to walk the PCI topology in Xen when/if
there is no dom0?
Yes, as we discussed during the design session, we currently think that it is the way to go.
We are for now relying on Dom0 to get the list of PCI devices, but this is definitely the strategy we would like to use to have Dom0Less support.
If this is working well, I even think we could get rid of the hypercall altogether.
...and this is the same way of configuring if enumeration happens in the bootloader?

I do support the idea we go away from PHYSDEVOP_pci_device_add, but the driver domain

just signals Xen that the enumeration is done and Xen can traverse the bus by that time.

Please also note that there are actually 3 cases possible wrt where the enumeration and

configuration happens: boot firmware, Dom0, Xen. So, it seems we

are going to have different approaches for the first two (see my comment above on

the hypercall use in Dom0). So, walking the bus ourselves in Xen seems to be good for all

the use-cases above.


In that case we may have to implement a new hypercall to inform XEN that enumeration is complete and to now scan the devices. We could tell Xen to delay its enumeration until this hypercall is called, using a Xen command line parameter. This way, when this is not required because the firmware did the enumeration, we can properly support Dom0Less.




# Enable the existing x86 virtual PCI support for ARM:

The existing VPCI support available for x86 is adapted for Arm. When the device is added to XEN via the hypercall "PHYSDEVOP_pci_device_add", a VPCI handler for the config space access is added to the PCI device to emulate the PCI devices.

An MMIO trap handler for the PCI ECAM space is registered in XEN so that when a guest is trying to access the PCI config space, XEN will trap the access and emulate read/write using the VPCI and not the real PCI hardware.
Just to make it clear: Dom0 still accesses the bus directly w/o emulation, right?

No. Once Xen has done its PCI enumeration (either on boot or after a hypercall from the hardware domain), only Xen will access the physical PCI bus; everybody else will go through VPCI.


Limitation:
* No handler is registered for the MSI configuration.
* Only legacy interrupt is supported and tested as of now; MSI is not implemented and tested.

# Assign the device to the guest:

Assigning the PCI device from the hardware domain to the guest is done using the below guest config option. When the xl tool creates the domain, PCI devices will be assigned to the guest VPCI bus.
pci=[ "PCI_SPEC_STRING", "PCI_SPEC_STRING", ...]

The guest will only be able to access the assigned devices and see the bridges. The guest will not be able to access or see the devices that are not assigned to it.
Does this mean that we do not need to configure the bridges, as those are exposed to the guest implicitly?

Limitation:
* As of now all the bridges in the PCI bus are seen by the guest on the VPCI bus.

So, what happens if a guest tries to access a bridge that doesn't have an assigned

PCI device? E.g. we pass PCIe_dev0 which is behind Bridge0 and the guest also sees

Bridge1 and tries to access devices behind it during the enumeration.

Could you please clarify?

The bridges are only accessible read-only and cannot be modified. Even though a guest would see the bridge, the VPCI will only show the assigned devices behind it. If there is no device behind that bridge assigned to the guest, the guest will see an empty bus behind that bridge.


We need to come up with something similar for dom0less too. It could be
exactly the same thing (a list of BDFs as strings as a device tree
property) or something else if we can come up with a better idea.
Fully agree.
Maybe a tree topology could allow more possibilities (like giving BAR values) in the future.

# Emulated PCI device tree node in libxl:

Libxl is creating a virtual PCI device tree node in the device tree to enable the guest OS to discover the virtual PCI during guest boot. We introduced the new config option [vpci="pci_ecam"] for guests. When this config option is enabled in a guest configuration, a PCI device tree node will be created in the guest device tree.

A new area has been reserved in the arm guest physical map at which the VPCI bus is declared in the device tree (reg and ranges parameters of the node). A trap handler for the PCI ECAM access from guest has been registered at the defined address and redirects requests to the VPCI driver in Xen.

Limitation:
* Only one PCI device tree node is supported as of now.
I think vpci="pci_ecam" should be optional: if pci=[ "PCI_SPEC_STRING",
...] is specified, then vpci="pci_ecam" is implied.

vpci="pci_ecam" is only useful one day in the future when we want to be
able to emulate other non-ECAM host bridges. For now we could even skip
it.
This would create a problem if xl is used to add a PCI device, as we need the PCI node to be in the DTB when the guest is created.
I agree this is not needed but removing it might create more complexity in the code.

I would suggest we have it from day 0 as there is plenty of HW available which is not ECAM.

Having vpci allows other bridges to be supported.

Yes, we agree.



Bertrand


BAR value and IOMEM mapping:

The Linux guest will do the PCI enumeration based on the area reserved for ECAM and IOMEM ranges in the VPCI device tree node. Once a PCI device is assigned to the guest, XEN will map the guest PCI IOMEM region to the real physical IOMEM region only for the assigned devices.

As of now we have not modified the existing VPCI code to map the guest PCI IOMEM region to the real physical IOMEM region. We used the existing guest "iomem" config option to map the region.
For example:
Guest reserved IOMEM region:  0x04020000
    Real physical IOMEM region: 0x50000000
    IOM
RU0gc2l6ZToxMjhNQg0KICAgIGlvbWVtIGNvbmZpZyB3aWxsIGJlOiAgIGlvbWVtID0gWyIweDUw
MDAwLDB4ODAwMEAweDQwMjAiXQ0KDQpUaGVyZSBpcyBubyBuZWVkIHRvIG1hcCB0aGUgRUNBTSBz
cGFjZSBhcyBYRU4gYWxyZWFkeSBoYXZlIGFjY2VzcyB0byB0aGUgRUNBTSBzcGFjZSBhbmQgWEVO
IHdpbGwgdHJhcCBFQ0FNIGFjY2Vzc2VzIGZyb20gdGhlIGd1ZXN0IGFuZCB3aWxsIHBlcmZvcm0g
cmVhZC93cml0ZSBvbiB0aGUgVlBDSSBidXMuDQoNCklPTUVNIGFjY2VzcyB3aWxsIG5vdCBiZSB0
cmFwcGVkIGFuZCB0aGUgZ3Vlc3Qgd2lsbCBkaXJlY3RseSBhY2Nlc3MgdGhlIElPTUVNIHJlZ2lv
biBvZiB0aGUgYXNzaWduZWQgZGV2aWNlIHZpYSBzdGFnZS0yIHRyYW5zbGF0aW9uLg0KDQpJbiB0
aGUgc2FtZSwgd2UgbWFwcGVkIHRoZSBhc3NpZ25lZCBkZXZpY2VzIElSUSB0byB0aGUgZ3Vlc3Qg
dXNpbmcgYmVsb3cgY29uZmlnIG9wdGlvbnMuDQppcnFzPSBbIE5VTUJFUiwgTlVNQkVSLCAuLi5d
DQoNCkxpbWl0YXRpb246DQoqIE5lZWQgdG8gYXZvaWQgdGhlIOKAnGlvbWVt4oCdIGFuZCDigJxp
cnHigJ0gZ3Vlc3QgY29uZmlnIG9wdGlvbnMgYW5kIG1hcCB0aGUgSU9NRU0gcmVnaW9uIGFuZCBJ
UlEgYXQgdGhlIHNhbWUgdGltZSB3aGVuIGRldmljZSBpcyBhc3NpZ25lZCB0byB0aGUgZ3Vlc3Qg
dXNpbmcgdGhlIOKAnHBjaeKAnSBndWVzdCBjb25maWcgb3B0aW9ucyB3aGVuIHhsIGNyZWF0ZXMg
dGhlIGRvbWFpbi4NCiogRW11bGF0ZWQgQkFSIHZhbHVlcyBvbiB0aGUgVlBDSSBidXMgc2hvdWxk
IHJlZmxlY3QgdGhlIElPTUVNIG1hcHBlZCBhZGRyZXNzLg0KKiBYODYgbWFwcGluZyBjb2RlIHNo
b3VsZCBiZSBwb3J0ZWQgb24gQXJtIHNvIHRoYXQgdGhlIHN0YWdlLTIgdHJhbnNsYXRpb24gaXMg
YWRhcHRlZCB3aGVuIHRoZSBndWVzdCBpcyBkb2luZyBhIG1vZGlmaWNhdGlvbiBvZiB0aGUgQkFS
IHJlZ2lzdGVycyB2YWx1ZXMgKHRvIG1hcCB0aGUgYWRkcmVzcyByZXF1ZXN0ZWQgYnkgdGhlIGd1
ZXN0IGZvciBhIHNwZWNpZmljIElPTUVNIHRvIHRoZSBhZGRyZXNzIGFjdHVhbGx5IGNvbnRhaW5l
ZCBpbiB0aGUgcmVhbCBCQVIgcmVnaXN0ZXIgb2YgdGhlIGNvcnJlc3BvbmRpbmcgZGV2aWNlKS4N
Cg0KIyBTTU1VIGNvbmZpZ3VyYXRpb24gZm9yIGd1ZXN0Og0KDQpXaGVuIGFzc2lnbmluZyBQQ0kg
ZGV2aWNlcyB0byBhIGd1ZXN0LCB0aGUgU01NVSBjb25maWd1cmF0aW9uIHNob3VsZCBiZSB1cGRh
dGVkIHRvIHJlbW92ZSBhY2Nlc3MgdG8gdGhlIGhhcmR3YXJlIGRvbWFpbiBtZW1vcnkNCg0KU28s
IGFzIHRoZSBoYXJkd2FyZSBkb21haW4gc3RpbGwgaGFzIGFjY2VzcyB0byB0aGUgUENJIGNvbmZp
Z3VyYXRpb24gc3BhY2UsIHdlDQoNCmNhbiBwb3RlbnRpYWxseSBoYXZlIGEgY29uZGl0aW9uIHdo
ZW4gRG9tMCBhY2Nlc3NlcyB0aGUgZGV2aWNlLiBBRkFJVSwgaWYgd2UgaGF2ZQ0KDQpwY2kgZnJv
bnQvYmFjayB0aGVuIGJlZm9yZSBhc3NpZ25pbmcgdGhlIGRldmljZSB0byB0aGUgZ3Vlc3Qgd2Ug
dW5iaW5kIGl0IGZyb20gdGhlDQoNCnJlYWwgZHJpdmVyIGFuZCBiaW5kIHRvIHRoZSBiYWNrLiBB
cmUgd2UgZ29pbmcgdG8gZG8gc29tZXRoaW5nIHNpbWlsYXIgaGVyZT8NCg0KWWVzIHdlIGhhdmUg
dG8gdW5iaW5kIHRoZSBkcml2ZXIgZnJvbSB0aGUgaGFyZHdhcmUgZG9tYWluIGJlZm9yZSBhc3Np
Z25pbmcgdGhlIGRldmljZSB0byB0aGUgZ3Vlc3QuIEFsc28gYXMgc29vbiBhcyBYZW4gaGFzIGRv
bmUgaGlzIFBDSSBlbnVtZXJhdGlvbiAoZWl0aGVyIG9uIGJvb3Qgb3IgYWZ0ZXIgYW4gaHlwZXJj
YWxsIGZyb20gdGhlIGhhcmR3YXJlIGRvbWFpbiksIG9ubHkgWGVuIHdpbGwgYWNjZXNzIHRoZSBw
aHlzaWNhbCBQQ0kgYnVzLCBldmVyeWJvZHkgZWxzZSB3aWxsIGdvIHRocm91Z2ggVlBDSS4NCg0K
LSBSYWh1bA0KDQoNCg0KVGhhbmsgeW91LA0KDQpPbGVrc2FuZHINCg0KIGFuZCBhZGQNCmNvbmZp
Z3VyYXRpb24gdG8gaGF2ZSBhY2Nlc3MgdG8gdGhlIGd1ZXN0IG1lbW9yeSB3aXRoIHRoZSBwcm9w
ZXIgYWRkcmVzcyB0cmFuc2xhdGlvbiBzbyB0aGF0IHRoZSBkZXZpY2UgY2FuIGRvIERNQSBvcGVy
YXRpb25zIGZyb20gYW5kIHRvIHRoZSBndWVzdCBtZW1vcnkgb25seS4NCg0KIyBNU0kvTVNJLVgg
c3VwcG9ydDoNCk5vdCBpbXBsZW1lbnQgYW5kIHRlc3RlZCBhcyBvZiBub3cuDQoNCiMgSVRTIHN1
cHBvcnQ6DQpOb3QgaW1wbGVtZW50IGFuZCB0ZXN0ZWQgYXMgb2Ygbm93Lg0KWzFdIGh0dHBzOi8v
bGlzdHMueGVuLm9yZy9hcmNoaXZlcy9odG1sL3hlbi1kZXZlbC8yMDE3LTA1L21zZzAyNjc0Lmh0
bWwNCg0K
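Putting the guest-side options from this thread together, an xl guest configuration using the proposed vpci option alongside the current interim "iomem"/"irqs" workaround could look roughly as below. The BDF, the IRQ number, and the memory values are illustrative only (the iomem entry reuses the example above); this is a sketch, not a tested configuration:

```
# Illustrative xl guest config fragment for Arm PCI passthrough (values are examples)
vpci = "pci_ecam"                    # expose the emulated ECAM virtual PCI bus
pci = [ "0000:00:01.0" ]             # device to assign (hypothetical BDF)

# Interim workaround until the "pci" option maps IOMEM and IRQs automatically:
iomem = [ "0x50000,0x8000@0x4020" ]  # 128MB at phys 0x50000000 -> guest 0x04020000
irqs = [ 112 ]                       # device interrupt (hypothetical SPI number)
```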

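The numbers in the iomem example above are hexadecimal 4 KiB page frame numbers, not byte addresses (the xl format being "IOMEM_START,NUM_PAGES@GFN"). A small sketch of the conversion, assuming that format:

```python
PAGE_SHIFT = 12  # 4 KiB pages


def xl_iomem_entry(phys_base: int, size: int, guest_base: int) -> str:
    """Build an xl iomem= entry ("IOMEM_START,NUM_PAGES@GFN", hex page
    frame numbers) from byte addresses and a byte size."""
    page = 1 << PAGE_SHIFT
    # All three values must be page-aligned to be expressible as frame numbers.
    assert phys_base % page == 0 and size % page == 0 and guest_base % page == 0
    return "%#x,%#x@%#x" % (
        phys_base >> PAGE_SHIFT,   # first machine frame number
        size >> PAGE_SHIFT,        # number of pages
        guest_base >> PAGE_SHIFT,  # guest frame number to map at
    )


# Example from the text: real 0x50000000, 128 MiB, guest 0x04020000
print(xl_iomem_entry(0x50000000, 128 << 20, 0x04020000))  # 0x50000,0x8000@0x4020
```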
YigyNTUsIDI1NSwgMjU1KTsgZm9udC1zaXplOiAxMHB0OyBmb250LWZhbWlseTogJnF1b3Q7Q2Fs
aWJyaSBMaWdodCZxdW90OywgJnF1b3Q7Q2FsaWJyaSBMaWdodF9FbWJlZGRlZEZvbnQmcXVvdDss
ICZxdW90O0NhbGlicmkgTGlnaHRfTVNGb250U2VydmljZSZxdW90Oywgc2Fucy1zZXJpZjsgLXdl
YmtpdC1mb250LWtlcm5pbmc6IG5vbmU7IGxpbmUtaGVpZ2h0OiAxNy4yNjY3cHg7IGZvbnQtdmFy
aWFudC1saWdhdHVyZXM6IG5vbmUgIWltcG9ydGFudDsiPjxzcGFuIGNsYXNzPSJCQ1gwIE5vcm1h
bFRleHRSdW4gU0NYVzY0NTc5NTA1IiBzdHlsZT0ibWFyZ2luOiAwcHg7IHBhZGRpbmc6IDBweDsg
dXNlci1zZWxlY3Q6IHRleHQ7IC13ZWJraXQtdXNlci1kcmFnOiBub25lOyAtd2Via2l0LXRhcC1o
aWdobGlnaHQtY29sb3I6IHRyYW5zcGFyZW50OyBiYWNrZ3JvdW5kLWNvbG9yOiBpbmhlcml0OyI+
QWRkaW5nDQogYW55PC9zcGFuPjwvc3Bhbj48c3BhbiBkYXRhLWNvbnRyYXN0PSJhdXRvIiBjbGFz
cz0iVGV4dFJ1biBCQ1gwIFNDWFc2NDU3OTUwNSIgeG1sOmxhbmc9IkVOLUdCIiBsYW5nPSJFTi1H
QiIgc3R5bGU9Im1hcmdpbjogMHB4OyBwYWRkaW5nOiAwcHg7IC13ZWJraXQtdXNlci1kcmFnOiBu
b25lOyBvcnBoYW5zOiAyOyB0ZXh0LWFsaWduOiBqdXN0aWZ5OyB3aWRvd3M6IDI7IGJhY2tncm91
bmQtY29sb3I6IHJnYigyNTUsIDI1NSwgMjU1KTsgZm9udC1zaXplOiAxMHB0OyBmb250LWZhbWls
eTogJnF1b3Q7Q2FsaWJyaSBMaWdodCZxdW90OywgJnF1b3Q7Q2FsaWJyaSBMaWdodF9FbWJlZGRl
ZEZvbnQmcXVvdDssICZxdW90O0NhbGlicmkgTGlnaHRfTVNGb250U2VydmljZSZxdW90Oywgc2Fu
cy1zZXJpZjsgLXdlYmtpdC1mb250LWtlcm5pbmc6IG5vbmU7IGxpbmUtaGVpZ2h0OiAxNy4yNjY3
cHg7IGZvbnQtdmFyaWFudC1saWdhdHVyZXM6IG5vbmUgIWltcG9ydGFudDsiPjxzcGFuIGNsYXNz
PSJCQ1gwIE5vcm1hbFRleHRSdW4gU0NYVzY0NTc5NTA1IiBzdHlsZT0ibWFyZ2luOiAwcHg7IHBh
ZGRpbmc6IDBweDsgdXNlci1zZWxlY3Q6IHRleHQ7IC13ZWJraXQtdXNlci1kcmFnOiBub25lOyAt
d2Via2l0LXRhcC1oaWdobGlnaHQtY29sb3I6IHRyYW5zcGFyZW50OyBiYWNrZ3JvdW5kLWNvbG9y
OiBpbmhlcml0OyI+Jm5ic3A7dHlwZQ0KIG9mPC9zcGFuPjwvc3Bhbj48c3BhbiBkYXRhLWNvbnRy
YXN0PSJhdXRvIiBjbGFzcz0iVGV4dFJ1biBCQ1gwIFNDWFc2NDU3OTUwNSIgeG1sOmxhbmc9IkVO
LUdCIiBsYW5nPSJFTi1HQiIgc3R5bGU9Im1hcmdpbjogMHB4OyBwYWRkaW5nOiAwcHg7IC13ZWJr
aXQtdXNlci1kcmFnOiBub25lOyBvcnBoYW5zOiAyOyB0ZXh0LWFsaWduOiBqdXN0aWZ5OyB3aWRv
d3M6IDI7IGJhY2tncm91bmQtY29sb3I6IHJnYigyNTUsIDI1NSwgMjU1KTsgZm9udC1zaXplOiAx
MHB0OyBmb250LWZhbWlseTogJnF1b3Q7Q2FsaWJyaSBMaWdodCZxdW90OywgJnF1b3Q7Q2FsaWJy
aSBMaWdodF9FbWJlZGRlZEZvbnQmcXVvdDssICZxdW90O0NhbGlicmkgTGlnaHRfTVNGb250U2Vy
dmljZSZxdW90Oywgc2Fucy1zZXJpZjsgLXdlYmtpdC1mb250LWtlcm5pbmc6IG5vbmU7IGxpbmUt
aGVpZ2h0OiAxNy4yNjY3cHg7IGZvbnQtdmFyaWFudC1saWdhdHVyZXM6IG5vbmUgIWltcG9ydGFu
dDsiPjxzcGFuIGNsYXNzPSJCQ1gwIE5vcm1hbFRleHRSdW4gU0NYVzY0NTc5NTA1IiBzdHlsZT0i
bWFyZ2luOiAwcHg7IHBhZGRpbmc6IDBweDsgdXNlci1zZWxlY3Q6IHRleHQ7IC13ZWJraXQtdXNl
ci1kcmFnOiBub25lOyAtd2Via2l0LXRhcC1oaWdobGlnaHQtY29sb3I6IHRyYW5zcGFyZW50OyBi
YWNrZ3JvdW5kLWNvbG9yOiBpbmhlcml0OyI+Jm5ic3A7aG9zdA0KIGJyaWRnZSBpcyBzdXBwb3J0
ZWQ8L3NwYW4+PC9zcGFuPjxzcGFuIGRhdGEtY29udHJhc3Q9ImF1dG8iIGNsYXNzPSJUZXh0UnVu
IEJDWDAgU0NYVzY0NTc5NTA1IiB4bWw6bGFuZz0iRU4tR0IiIGxhbmc9IkVOLUdCIiBzdHlsZT0i
bWFyZ2luOiAwcHg7IHBhZGRpbmc6IDBweDsgLXdlYmtpdC11c2VyLWRyYWc6IG5vbmU7IG9ycGhh
bnM6IDI7IHRleHQtYWxpZ246IGp1c3RpZnk7IHdpZG93czogMjsgYmFja2dyb3VuZC1jb2xvcjog
cmdiKDI1NSwgMjU1LCAyNTUpOyBmb250LXNpemU6IDEwcHQ7IGZvbnQtZmFtaWx5OiAmcXVvdDtD
YWxpYnJpIExpZ2h0JnF1b3Q7LCAmcXVvdDtDYWxpYnJpIExpZ2h0X0VtYmVkZGVkRm9udCZxdW90
OywgJnF1b3Q7Q2FsaWJyaSBMaWdodF9NU0ZvbnRTZXJ2aWNlJnF1b3Q7LCBzYW5zLXNlcmlmOyAt
d2Via2l0LWZvbnQta2VybmluZzogbm9uZTsgbGluZS1oZWlnaHQ6IDE3LjI2NjdweDsgZm9udC12
YXJpYW50LWxpZ2F0dXJlczogbm9uZSAhaW1wb3J0YW50OyI+PHNwYW4gY2xhc3M9IkJDWDAgTm9y
bWFsVGV4dFJ1biBTQ1hXNjQ1Nzk1MDUiIHN0eWxlPSJtYXJnaW46IDBweDsgcGFkZGluZzogMHB4
OyB1c2VyLXNlbGVjdDogdGV4dDsgLXdlYmtpdC11c2VyLWRyYWc6IG5vbmU7IC13ZWJraXQtdGFw
LWhpZ2hsaWdodC1jb2xvcjogdHJhbnNwYXJlbnQ7IGJhY2tncm91bmQtY29sb3I6IGluaGVyaXQ7
Ij4sDQogd2UgZGlkIHB1dCB0aGUgRUNBTSBzcGVjaWZpYyBjb2RlIGluIGFuJm5ic3A7PC9zcGFu
PjxzcGFuIGNsYXNzPSJOb3JtYWxUZXh0UnVuIEJDWDAgU0NYVzY0NTc5NTA1IFNwZWxsaW5nRXJy
b3JWMiIgc3R5bGU9Im1hcmdpbjogMHB4OyBwYWRkaW5nOiAwcHg7IHVzZXItc2VsZWN0OiB0ZXh0
OyAtd2Via2l0LXVzZXItZHJhZzogbm9uZTsgLXdlYmtpdC10YXAtaGlnaGxpZ2h0LWNvbG9yOiB0
cmFuc3BhcmVudDsgYmFja2dyb3VuZC1yZXBlYXQ6IHJlcGVhdC14OyBiYWNrZ3JvdW5kLXBvc2l0
aW9uOiBsZWZ0IGJvdHRvbTsgYmFja2dyb3VuZC1pbWFnZTogdXJsKCZxdW90O2RhdGE6aW1hZ2Uv
c3ZnJiM0Mzt4bWw7YmFzZTY0LFBEOTRiV3dnZG1WeWMybHZiajBpTVM0d0lpQmxibU52WkdsdVp6
MGlWVlJHTFRnaVB6NEtQSE4yWnlCM2FXUjBhRDBpTlhCNElpQm9aV2xuYUhROUlqUndlQ0lnZG1s
bGQwSnZlRDBpTUNBd0lEVWdOQ0lnZG1WeWMybHZiajBpTVM0eElpQjRiV3h1Y3owaWFIUjBjRG92
TDNkM2R5NTNNeTV2Y21jdk1qQXdNQzl6ZG1jaUlIaHRiRzV6T25oc2FXNXJQU0pvZEhSd09pOHZk
M2QzTG5jekxtOXlaeTh4T1RrNUwzaHNhVzVySWo0S0lDQWdJRHdoTFMwZ1IyVnVaWEpoZEc5eU9p
QlRhMlYwWTJnZ05UWXVNaUFvT0RFMk56SXBJQzBnYUhSMGNITTZMeTl6YTJWMFkyZ3VZMjl0SUMw
dFBnb2dJQ0FnUEhScGRHeGxQbk53Wld4c2FXNW5YM054ZFdsbloyeGxQQzkwYVhSc1pUNEtJQ0Fn
SUR4a1pYTmpQa055WldGMFpXUWdkMmwwYUNCVGEyVjBZMmd1UEM5a1pYTmpQZ29nSUNBZ1BHY2dh
V1E5SWtac1lXZHpJaUJ6ZEhKdmEyVTlJbTV2Ym1VaUlITjBjbTlyWlMxM2FXUjBhRDBpTVNJZ1pt
bHNiRDBpYm05dVpTSWdabWxzYkMxeWRXeGxQU0psZG1WdWIyUmtJajRLSUNBZ0lDQWdJQ0E4WnlC
MGNtRnVjMlp2Y20wOUluUnlZVzV6YkdGMFpTZ3RNVEF4TUM0d01EQXdNREFzSUMweU9UWXVNREF3
TURBd0tTSWdhV1E5SW5Od1pXeHNhVzVuWDNOeGRXbG5aMnhsSWo0S0lDQWdJQ0FnSUNBZ0lDQWdQ
R2NnZEhKaGJuTm1iM0p0UFNKMGNtRnVjMnhoZEdVb01UQXhNQzR3TURBd01EQXNJREk1Tmk0d01E
QXdNREFwSWo0S0lDQWdJQ0FnSUNBZ0lDQWdJQ0FnSUR4d1lYUm9JR1E5SWswd0xETWdRekV1TWpV
c015QXhMakkxTERFZ01pNDFMREVnUXpNdU56VXNNU0F6TGpjMUxETWdOU3d6SWlCcFpEMGlVR0Yw
YUNJZ2MzUnliMnRsUFNJalJVSXdNREF3SWlCemRISnZhMlV0ZDJsa2RHZzlJakVpUGp3dmNHRjBh
RDRLSUNBZ0lDQWdJQ0FnSUNBZ0lDQWdJRHh5WldOMElHbGtQU0pTWldOMFlXNW5iR1VpSUhnOUlq
QWlJSGs5SWpBaUlIZHBaSFJvUFNJMUlpQm9aV2xuYUhROUlqUWlQand2Y21WamRENEtJQ0FnSUNB
Z0lDQWdJQ0FnUEM5blBnb2dJQ0FnSUNBZ0lEd3ZaejRLSUNBZ0lEd3ZaejRLUEM5emRtYyYjNDM7
JnF1b3Q7KTsgYm9yZGVyLWJvdHRvbTogMXB4IHNvbGlkIHRyYW5zcGFyZW50OyBiYWNrZ3JvdW5k
LWNvbG9yOiBpbmhlcml0OyI+aWRlbnRpZmVkPC9zcGFuPjxzcGFuIGNsYXNzPSJCQ1gwIE5vcm1h
bFRleHRSdW4gU0NYVzY0NTc5NTA1IiBzdHlsZT0ibWFyZ2luOiAwcHg7IHBhZGRpbmc6IDBweDsg
dXNlci1zZWxlY3Q6IHRleHQ7IC13ZWJraXQtdXNlci1kcmFnOiBub25lOyAtd2Via2l0LXRhcC1o
aWdobGlnaHQtY29sb3I6IHRyYW5zcGFyZW50OyBiYWNrZ3JvdW5kLWNvbG9yOiBpbmhlcml0OyI+
Jm5ic3A7c291cmNlDQogZmlsZSBzbyB0aGF0IG90aGVyIHR5cGVzIGNhbiBiZSBpbXBsZW1lbnRl
ZDwvc3Bhbj48L3NwYW4+PHNwYW4gZGF0YS1jb250cmFzdD0iYXV0byIgY2xhc3M9IlRleHRSdW4g
QkNYMCBTQ1hXNjQ1Nzk1MDUiIHhtbDpsYW5nPSJFTi1HQiIgbGFuZz0iRU4tR0IiIHN0eWxlPSJt
YXJnaW46IDBweDsgcGFkZGluZzogMHB4OyAtd2Via2l0LXVzZXItZHJhZzogbm9uZTsgb3JwaGFu
czogMjsgdGV4dC1hbGlnbjoganVzdGlmeTsgd2lkb3dzOiAyOyBiYWNrZ3JvdW5kLWNvbG9yOiBy
Z2IoMjU1LCAyNTUsIDI1NSk7IGZvbnQtc2l6ZTogMTBwdDsgZm9udC1mYW1pbHk6ICZxdW90O0Nh
bGlicmkgTGlnaHQmcXVvdDssICZxdW90O0NhbGlicmkgTGlnaHRfRW1iZWRkZWRGb250JnF1b3Q7
LCAmcXVvdDtDYWxpYnJpIExpZ2h0X01TRm9udFNlcnZpY2UmcXVvdDssIHNhbnMtc2VyaWY7IC13
ZWJraXQtZm9udC1rZXJuaW5nOiBub25lOyBsaW5lLWhlaWdodDogMTcuMjY2N3B4OyBmb250LXZh
cmlhbnQtbGlnYXR1cmVzOiBub25lICFpbXBvcnRhbnQ7Ij48c3BhbiBjbGFzcz0iQkNYMCBOb3Jt
YWxUZXh0UnVuIFNDWFc2NDU3OTUwNSIgc3R5bGU9Im1hcmdpbjogMHB4OyBwYWRkaW5nOiAwcHg7
IHVzZXItc2VsZWN0OiB0ZXh0OyAtd2Via2l0LXVzZXItZHJhZzogbm9uZTsgLXdlYmtpdC10YXAt
aGlnaGxpZ2h0LWNvbG9yOiB0cmFuc3BhcmVudDsgYmFja2dyb3VuZC1jb2xvcjogaW5oZXJpdDsi
Pi4NCiBBcyBvZiBub3cgd2UgaGF2ZSZuYnNwOzwvc3Bhbj48L3NwYW4+PHNwYW4gZGF0YS1jb250
cmFzdD0iYXV0byIgY2xhc3M9IlRleHRSdW4gQkNYMCBTQ1hXNjQ1Nzk1MDUiIHhtbDpsYW5nPSJF
Ti1HQiIgbGFuZz0iRU4tR0IiIHN0eWxlPSJtYXJnaW46IDBweDsgcGFkZGluZzogMHB4OyAtd2Vi
a2l0LXVzZXItZHJhZzogbm9uZTsgb3JwaGFuczogMjsgdGV4dC1hbGlnbjoganVzdGlmeTsgd2lk
b3dzOiAyOyBiYWNrZ3JvdW5kLWNvbG9yOiByZ2IoMjU1LCAyNTUsIDI1NSk7IGZvbnQtc2l6ZTog
MTBwdDsgZm9udC1mYW1pbHk6ICZxdW90O0NhbGlicmkgTGlnaHQmcXVvdDssICZxdW90O0NhbGli
cmkgTGlnaHRfRW1iZWRkZWRGb250JnF1b3Q7LCAmcXVvdDtDYWxpYnJpIExpZ2h0X01TRm9udFNl
cnZpY2UmcXVvdDssIHNhbnMtc2VyaWY7IC13ZWJraXQtZm9udC1rZXJuaW5nOiBub25lOyBsaW5l
LWhlaWdodDogMTcuMjY2N3B4OyBmb250LXZhcmlhbnQtbGlnYXR1cmVzOiBub25lICFpbXBvcnRh
bnQ7Ij48c3BhbiBjbGFzcz0iQkNYMCBOb3JtYWxUZXh0UnVuIFNDWFc2NDU3OTUwNSIgc3R5bGU9
Im1hcmdpbjogMHB4OyBwYWRkaW5nOiAwcHg7IHVzZXItc2VsZWN0OiB0ZXh0OyAtd2Via2l0LXVz
ZXItZHJhZzogbm9uZTsgLXdlYmtpdC10YXAtaGlnaGxpZ2h0LWNvbG9yOiB0cmFuc3BhcmVudDsg
YmFja2dyb3VuZC1jb2xvcjogaW5oZXJpdDsiPmltcGxlbWVudGVkJm5ic3A7PC9zcGFuPjwvc3Bh
bj48c3BhbiBkYXRhLWNvbnRyYXN0PSJhdXRvIiBjbGFzcz0iVGV4dFJ1biBCQ1gwIFNDWFc2NDU3
OTUwNSIgeG1sOmxhbmc9IkVOLUdCIiBsYW5nPSJFTi1HQiIgc3R5bGU9Im1hcmdpbjogMHB4OyBw
YWRkaW5nOiAwcHg7IC13ZWJraXQtdXNlci1kcmFnOiBub25lOyBvcnBoYW5zOiAyOyB0ZXh0LWFs
aWduOiBqdXN0aWZ5OyB3aWRvd3M6IDI7IGJhY2tncm91bmQtY29sb3I6IHJnYigyNTUsIDI1NSwg
MjU1KTsgZm9udC1zaXplOiAxMHB0OyBmb250LWZhbWlseTogJnF1b3Q7Q2FsaWJyaSBMaWdodCZx
dW90OywgJnF1b3Q7Q2FsaWJyaSBMaWdodF9FbWJlZGRlZEZvbnQmcXVvdDssICZxdW90O0NhbGli
cmkgTGlnaHRfTVNGb250U2VydmljZSZxdW90Oywgc2Fucy1zZXJpZjsgLXdlYmtpdC1mb250LWtl
cm5pbmc6IG5vbmU7IGxpbmUtaGVpZ2h0OiAxNy4yNjY3cHg7IGZvbnQtdmFyaWFudC1saWdhdHVy
ZXM6IG5vbmUgIWltcG9ydGFudDsiPjxzcGFuIGNsYXNzPSJCQ1gwIE5vcm1hbFRleHRSdW4gU0NY
VzY0NTc5NTA1IiBzdHlsZT0ibWFyZ2luOiAwcHg7IHBhZGRpbmc6IDBweDsgdXNlci1zZWxlY3Q6
IHRleHQ7IC13ZWJraXQtdXNlci1kcmFnOiBub25lOyAtd2Via2l0LXRhcC1oaWdobGlnaHQtY29s
b3I6IHRyYW5zcGFyZW50OyBiYWNrZ3JvdW5kLWNvbG9yOiBpbmhlcml0OyI+dGhlDQogRUNBTSBz
dXBwb3J0PC9zcGFuPjwvc3Bhbj48c3BhbiBkYXRhLWNvbnRyYXN0PSJhdXRvIiBjbGFzcz0iVGV4
dFJ1biBCQ1gwIFNDWFc2NDU3OTUwNSIgeG1sOmxhbmc9IkVOLUdCIiBsYW5nPSJFTi1HQiIgc3R5
bGU9Im1hcmdpbjogMHB4OyBwYWRkaW5nOiAwcHg7IC13ZWJraXQtdXNlci1kcmFnOiBub25lOyBv
cnBoYW5zOiAyOyB0ZXh0LWFsaWduOiBqdXN0aWZ5OyB3aWRvd3M6IDI7IGJhY2tncm91bmQtY29s
b3I6IHJnYigyNTUsIDI1NSwgMjU1KTsgZm9udC1zaXplOiAxMHB0OyBmb250LWZhbWlseTogJnF1
b3Q7Q2FsaWJyaSBMaWdodCZxdW90OywgJnF1b3Q7Q2FsaWJyaSBMaWdodF9FbWJlZGRlZEZvbnQm
cXVvdDssICZxdW90O0NhbGlicmkgTGlnaHRfTVNGb250U2VydmljZSZxdW90Oywgc2Fucy1zZXJp
ZjsgLXdlYmtpdC1mb250LWtlcm5pbmc6IG5vbmU7IGxpbmUtaGVpZ2h0OiAxNy4yNjY3cHg7IGZv
bnQtdmFyaWFudC1saWdhdHVyZXM6IG5vbmUgIWltcG9ydGFudDsiPjxzcGFuIGNsYXNzPSJCQ1gw
IE5vcm1hbFRleHRSdW4gU0NYVzY0NTc5NTA1IiBzdHlsZT0ibWFyZ2luOiAwcHg7IHBhZGRpbmc6
IDBweDsgdXNlci1zZWxlY3Q6IHRleHQ7IC13ZWJraXQtdXNlci1kcmFnOiBub25lOyAtd2Via2l0
LXRhcC1oaWdobGlnaHQtY29sb3I6IHRyYW5zcGFyZW50OyBiYWNrZ3JvdW5kLWNvbG9yOiBpbmhl
cml0OyI+Jm5ic3A7YW5kDQogd2UgYXJlIGltcGxlbWVudGluZyByaWdodCBub3cgc3VwcG9ydCBm
b3IgTjFTRFAgd2hpY2ggcmVxdWlyZXMgc3BlY2lmaWMgcXVpcmtzIHdoaWNoIHdpbGwgYmUgZG9u
ZSBpbiBhIHNlcGFyYXRlIHNvdXJjZSBmaWxlPC9zcGFuPjwvc3Bhbj48c3BhbiBkYXRhLWNvbnRy
YXN0PSJhdXRvIiBjbGFzcz0iVGV4dFJ1biBCQ1gwIFNDWFc2NDU3OTUwNSIgeG1sOmxhbmc9IkVO
LUdCIiBsYW5nPSJFTi1HQiIgc3R5bGU9Im1hcmdpbjogMHB4OyBwYWRkaW5nOiAwcHg7IC13ZWJr
aXQtdXNlci1kcmFnOiBub25lOyBvcnBoYW5zOiAyOyB0ZXh0LWFsaWduOiBqdXN0aWZ5OyB3aWRv
d3M6IDI7IGJhY2tncm91bmQtY29sb3I6IHJnYigyNTUsIDI1NSwgMjU1KTsgZm9udC1zaXplOiAx
MHB0OyBmb250LWZhbWlseTogJnF1b3Q7Q2FsaWJyaSBMaWdodCZxdW90OywgJnF1b3Q7Q2FsaWJy
aSBMaWdodF9FbWJlZGRlZEZvbnQmcXVvdDssICZxdW90O0NhbGlicmkgTGlnaHRfTVNGb250U2Vy
dmljZSZxdW90Oywgc2Fucy1zZXJpZjsgLXdlYmtpdC1mb250LWtlcm5pbmc6IG5vbmU7IGxpbmUt
aGVpZ2h0OiAxNy4yNjY3cHg7IGZvbnQtdmFyaWFudC1saWdhdHVyZXM6IG5vbmUgIWltcG9ydGFu
dDsiPjxzcGFuIGNsYXNzPSJCQ1gwIE5vcm1hbFRleHRSdW4gU0NYVzY0NTc5NTA1IiBzdHlsZT0i
bWFyZ2luOiAwcHg7IHBhZGRpbmc6IDBweDsgdXNlci1zZWxlY3Q6IHRleHQ7IC13ZWJraXQtdXNl
ci1kcmFnOiBub25lOyAtd2Via2l0LXRhcC1oaWdobGlnaHQtY29sb3I6IHRyYW5zcGFyZW50OyBi
YWNrZ3JvdW5kLWNvbG9yOiBpbmhlcml0OyI+Ljwvc3Bhbj48L3NwYW4+PHNwYW4gY2xhc3M9IkVP
UCBCQ1gwIFNDWFc2NDU3OTUwNSIgZGF0YS1jY3AtcHJvcHM9InsmcXVvdDsxMzQyMzMyNzkmcXVv
dDs6dHJ1ZSwmcXVvdDszMzU1NTE1NTAmcXVvdDs6NiwmcXVvdDszMzU1NTE2MjAmcXVvdDs6Niwm
cXVvdDszMzU1NTk2ODUmcXVvdDs6NzIwfSIgc3R5bGU9Im1hcmdpbjogMHB4OyBwYWRkaW5nOiAw
cHg7IC13ZWJraXQtdXNlci1kcmFnOiBub25lOyBmb250LXZhcmlhbnQtbGlnYXR1cmVzOiBub3Jt
YWw7IG9ycGhhbnM6IDI7IHRleHQtYWxpZ246IGp1c3RpZnk7IHdpZG93czogMjsgYmFja2dyb3Vu
ZC1jb2xvcjogcmdiKDI1NSwgMjU1LCAyNTUpOyBmb250LXNpemU6IDEwcHQ7IGxpbmUtaGVpZ2h0
OiAxNy4yNjY3cHg7IGZvbnQtZmFtaWx5OiAmcXVvdDtDYWxpYnJpIExpZ2h0JnF1b3Q7LCAmcXVv
dDtDYWxpYnJpIExpZ2h0X0VtYmVkZGVkRm9udCZxdW90OywgJnF1b3Q7Q2FsaWJyaSBMaWdodF9N
U0ZvbnRTZXJ2aWNlJnF1b3Q7LCBzYW5zLXNlcmlmOyI+Jm5ic3A7PC9zcGFuPjwvZGl2Pg0KPGRp
dj4NCjxkaXYgc3R5bGU9Im9ycGhhbnM6IDI7IHRleHQtYWxpZ246IGp1c3RpZnk7IHdpZG93czog
MjsiIGNsYXNzPSIiPjxmb250IGZhY2U9IkNhbGlicmkgTGlnaHQsIENhbGlicmkgTGlnaHRfRW1i
ZWRkZWRGb250LCBDYWxpYnJpIExpZ2h0X01TRm9udFNlcnZpY2UsIHNhbnMtc2VyaWYiIHNpemU9
IjIiIGNsYXNzPSIiPjxzcGFuIHN0eWxlPSJiYWNrZ3JvdW5kLWNvbG9yOiByZ2IoMjU1LCAyNTUs
IDI1NSk7IiBjbGFzcz0iIj48YnIgY2xhc3M9IiI+DQo8L3NwYW4+PC9mb250PjwvZGl2Pg0KPGJs
b2NrcXVvdGUgdHlwZT0iY2l0ZSIgY2xhc3M9IiI+DQo8ZGl2IGNsYXNzPSIiPjxiciBzdHlsZT0i
Y2FyZXQtY29sb3I6IHJnYigwLCAwLCAwKTsgZm9udC1mYW1pbHk6IEhlbHZldGljYTsgZm9udC1z
aXplOiAxMnB4OyBmb250LXN0eWxlOiBub3JtYWw7IGZvbnQtdmFyaWFudC1jYXBzOiBub3JtYWw7
IGZvbnQtd2VpZ2h0OiBub3JtYWw7IGxldHRlci1zcGFjaW5nOiBub3JtYWw7IHRleHQtYWxpZ246
IHN0YXJ0OyB0ZXh0LWluZGVudDogMHB4OyB0ZXh0LXRyYW5zZm9ybTogbm9uZTsgd2hpdGUtc3Bh
Y2U6IG5vcm1hbDsgd29yZC1zcGFjaW5nOiAwcHg7IC13ZWJraXQtdGV4dC1zdHJva2Utd2lkdGg6
IDBweDsgdGV4dC1kZWNvcmF0aW9uOiBub25lOyIgY2xhc3M9IiI+DQo8YmxvY2txdW90ZSB0eXBl
PSJjaXRlIiBzdHlsZT0iZm9udC1mYW1pbHk6IEhlbHZldGljYTsgZm9udC1zaXplOiAxMnB4OyBm
b250LXN0eWxlOiBub3JtYWw7IGZvbnQtdmFyaWFudC1jYXBzOiBub3JtYWw7IGZvbnQtd2VpZ2h0
OiBub3JtYWw7IGxldHRlci1zcGFjaW5nOiBub3JtYWw7IG9ycGhhbnM6IGF1dG87IHRleHQtYWxp
Z246IHN0YXJ0OyB0ZXh0LWluZGVudDogMHB4OyB0ZXh0LXRyYW5zZm9ybTogbm9uZTsgd2hpdGUt
c3BhY2U6IG5vcm1hbDsgd2lkb3dzOiBhdXRvOyB3b3JkLXNwYWNpbmc6IDBweDsgLXdlYmtpdC10
ZXh0LXNpemUtYWRqdXN0OiBhdXRvOyAtd2Via2l0LXRleHQtc3Ryb2tlLXdpZHRoOiAwcHg7IHRl
eHQtZGVjb3JhdGlvbjogbm9uZTsiIGNsYXNzPSIiPg0KPGJsb2NrcXVvdGUgdHlwZT0iY2l0ZSIg
Y2xhc3M9IiI+DQo8YmxvY2txdW90ZSB0eXBlPSJjaXRlIiBjbGFzcz0iIj4qIERldmljZSB0cmVl
IGJpbmRpbmcgaXMgc3VwcG9ydGVkIGFzIG9mIG5vdywgQUNQSSBpcyBub3Qgc3VwcG9ydGVkLjxi
ciBjbGFzcz0iIj4NCiogTmVlZCB0byBwb3J0IHRoZSBQQ0kgaG9zdCBicmlkZ2UgYWNjZXNzIGNv
ZGUgdG8gWEVOIHRvIGFjY2VzcyB0aGUgY29uZmlndXJhdGlvbiBzcGFjZSAoZ2VuZXJpYyBvbmUg
d29ya3MgYnV0IGxvdHMgb2YgcGxhdGZvcm1zIHdpbGwgcmVxdWlyZWQgc29tZSBzcGVjaWZpYyBj
b2RlIG9yIHF1aXJrcykuPGJyIGNsYXNzPSIiPg0KPGJyIGNsYXNzPSIiPg0KIyBEaXNjb3Zlcmlu
ZyBQQ0kgZGV2aWNlczo8YnIgY2xhc3M9IiI+DQo8YnIgY2xhc3M9IiI+DQpQQ0ktUENJZSBlbnVt
ZXJhdGlvbiBpcyBhIHByb2Nlc3Mgb2YgZGV0ZWN0aW5nIGRldmljZXMgY29ubmVjdGVkIHRvIGl0
cyBob3N0LiBJdCBpcyB0aGUgcmVzcG9uc2liaWxpdHkgb2YgdGhlIGhhcmR3YXJlIGRvbWFpbiBv
ciBib290IGZpcm13YXJlIHRvIGRvIHRoZSBQQ0kgZW51bWVyYXRpb24gYW5kIGNvbmZpZ3VyZTxi
ciBjbGFzcz0iIj4NCjwvYmxvY2txdW90ZT4NCjwvYmxvY2txdW90ZT4NCjwvYmxvY2txdW90ZT4N
CjxzcGFuIHN0eWxlPSJjYXJldC1jb2xvcjogcmdiKDAsIDAsIDApOyBmb250LWZhbWlseTogSGVs
dmV0aWNhOyBmb250LXNpemU6IDEycHg7IGZvbnQtc3R5bGU6IG5vcm1hbDsgZm9udC12YXJpYW50
LWNhcHM6IG5vcm1hbDsgZm9udC13ZWlnaHQ6IG5vcm1hbDsgbGV0dGVyLXNwYWNpbmc6IG5vcm1h
bDsgdGV4dC1hbGlnbjogc3RhcnQ7IHRleHQtaW5kZW50OiAwcHg7IHRleHQtdHJhbnNmb3JtOiBu
b25lOyB3aGl0ZS1zcGFjZTogbm9ybWFsOyB3b3JkLXNwYWNpbmc6IDBweDsgLXdlYmtpdC10ZXh0
LXN0cm9rZS13aWR0aDogMHB4OyB0ZXh0LWRlY29yYXRpb246IG5vbmU7IGZsb2F0OiBub25lOyBk
aXNwbGF5OiBpbmxpbmUgIWltcG9ydGFudDsiIGNsYXNzPSIiPkdyZWF0LA0KIHNvIHdlIGFzc3Vt
ZSBoZXJlIHRoYXQgdGhlIGJvb3Rsb2FkZXIgY2FuIGRvIHRoZSBlbnVtZXJhdGlvbiBhbmQgY29u
ZmlndXJhdGlvbi4uLjwvc3Bhbj48YnIgc3R5bGU9ImNhcmV0LWNvbG9yOiByZ2IoMCwgMCwgMCk7
IGZvbnQtZmFtaWx5OiBIZWx2ZXRpY2E7IGZvbnQtc2l6ZTogMTJweDsgZm9udC1zdHlsZTogbm9y
bWFsOyBmb250LXZhcmlhbnQtY2Fwczogbm9ybWFsOyBmb250LXdlaWdodDogbm9ybWFsOyBsZXR0
ZXItc3BhY2luZzogbm9ybWFsOyB0ZXh0LWFsaWduOiBzdGFydDsgdGV4dC1pbmRlbnQ6IDBweDsg
dGV4dC10cmFuc2Zvcm06IG5vbmU7IHdoaXRlLXNwYWNlOiBub3JtYWw7IHdvcmQtc3BhY2luZzog
MHB4OyAtd2Via2l0LXRleHQtc3Ryb2tlLXdpZHRoOiAwcHg7IHRleHQtZGVjb3JhdGlvbjogbm9u
ZTsiIGNsYXNzPSIiPg0KPGJsb2NrcXVvdGUgdHlwZT0iY2l0ZSIgc3R5bGU9ImZvbnQtZmFtaWx5
OiBIZWx2ZXRpY2E7IGZvbnQtc2l6ZTogMTJweDsgZm9udC1zdHlsZTogbm9ybWFsOyBmb250LXZh
cmlhbnQtY2Fwczogbm9ybWFsOyBmb250LXdlaWdodDogbm9ybWFsOyBsZXR0ZXItc3BhY2luZzog
bm9ybWFsOyBvcnBoYW5zOiBhdXRvOyB0ZXh0LWFsaWduOiBzdGFydDsgdGV4dC1pbmRlbnQ6IDBw
eDsgdGV4dC10cmFuc2Zvcm06IG5vbmU7IHdoaXRlLXNwYWNlOiBub3JtYWw7IHdpZG93czogYXV0
bzsgd29yZC1zcGFjaW5nOiAwcHg7IC13ZWJraXQtdGV4dC1zaXplLWFkanVzdDogYXV0bzsgLXdl
YmtpdC10ZXh0LXN0cm9rZS13aWR0aDogMHB4OyB0ZXh0LWRlY29yYXRpb246IG5vbmU7IiBjbGFz
cz0iIj4NCjxibG9ja3F1b3RlIHR5cGU9ImNpdGUiIGNsYXNzPSIiPg0KPGJsb2NrcXVvdGUgdHlw
ZT0iY2l0ZSIgY2xhc3M9IiI+Jm5ic3A7dGhlIEJBUiwgUENJIGNhcGFiaWxpdGllcywgYW5kIE1T
SS9NU0ktWCBjb25maWd1cmF0aW9uLjxiciBjbGFzcz0iIj4NCjxiciBjbGFzcz0iIj4NClBDSS1Q
Q0llIGVudW1lcmF0aW9uIGluIFhFTiBpcyBub3QgZmVhc2libGUgZm9yIHRoZSBjb25maWd1cmF0
aW9uIHBhcnQgYXMgaXQgd291bGQgcmVxdWlyZSBhIGxvdCBvZiBjb2RlIGluc2lkZSBYZW4gd2hp
Y2ggd291bGQgcmVxdWlyZSBhIGxvdCBvZiBtYWludGVuYW5jZS4gQWRkZWQgdG8gdGhpcyBtYW55
IHBsYXRmb3JtcyByZXF1aXJlIHNvbWUgcXVpcmtzIGluIHRoYXQgcGFydCBvZiB0aGUgUENJIGNv
ZGUgd2hpY2ggd291bGQgZ3JlYXRseSBpbXByb3ZlDQogWGVuIGNvbXBsZXhpdHkuIE9uY2UgaGFy
ZHdhcmUgZG9tYWluIGVudW1lcmF0ZXMgdGhlIGRldmljZSB0aGVuIGl0IHdpbGwgY29tbXVuaWNh
dGUgdG8gWEVOIHZpYSB0aGUgYmVsb3cgaHlwZXJjYWxsLjxiciBjbGFzcz0iIj4NCjxiciBjbGFz
cz0iIj4NCiNkZWZpbmUgUEhZU0RFVk9QX3BjaV9kZXZpY2VfYWRkICZuYnNwOyZuYnNwOyZuYnNw
OyZuYnNwOyZuYnNwOyZuYnNwOyZuYnNwOzI1PGJyIGNsYXNzPSIiPg0Kc3RydWN0IHBoeXNkZXZf
cGNpX2RldmljZV9hZGQgezxiciBjbGFzcz0iIj4NCiZuYnNwOyZuYnNwOyZuYnNwO3VpbnQxNl90
IHNlZzs8YnIgY2xhc3M9IiI+DQombmJzcDsmbmJzcDsmbmJzcDt1aW50OF90IGJ1czs8YnIgY2xh
c3M9IiI+DQombmJzcDsmbmJzcDsmbmJzcDt1aW50OF90IGRldmZuOzxiciBjbGFzcz0iIj4NCiZu
YnNwOyZuYnNwOyZuYnNwO3VpbnQzMl90IGZsYWdzOzxiciBjbGFzcz0iIj4NCiZuYnNwOyZuYnNw
OyZuYnNwO3N0cnVjdCB7PGJyIGNsYXNzPSIiPg0KJm5ic3A7Jm5ic3A7Jm5ic3A7PHNwYW4gY2xh
c3M9IkFwcGxlLXRhYi1zcGFuIiBzdHlsZT0id2hpdGUtc3BhY2U6IHByZTsiPiA8L3NwYW4+dWlu
dDhfdCBidXM7PGJyIGNsYXNzPSIiPg0KJm5ic3A7Jm5ic3A7Jm5ic3A7PHNwYW4gY2xhc3M9IkFw
cGxlLXRhYi1zcGFuIiBzdHlsZT0id2hpdGUtc3BhY2U6IHByZTsiPiA8L3NwYW4+dWludDhfdCBk
ZXZmbjs8YnIgY2xhc3M9IiI+DQombmJzcDsmbmJzcDsmbmJzcDt9IHBoeXNmbjs8YnIgY2xhc3M9
IiI+DQombmJzcDsmbmJzcDsmbmJzcDsvKjxiciBjbGFzcz0iIj4NCiZuYnNwOyZuYnNwOyZuYnNw
OyogT3B0aW9uYWwgcGFyYW1ldGVycyBhcnJheS48YnIgY2xhc3M9IiI+DQombmJzcDsmbmJzcDsm
bmJzcDsqIEZpcnN0IGVsZW1lbnQgKFswXSkgaXMgUFhNIGRvbWFpbiBhc3NvY2lhdGVkIHdpdGgg
dGhlIGRldmljZSAoaWYgKiBYRU5fUENJX0RFVl9QWE0gaXMgc2V0KTxiciBjbGFzcz0iIj4NCiZu
YnNwOyZuYnNwOyZuYnNwOyovPGJyIGNsYXNzPSIiPg0KJm5ic3A7Jm5ic3A7Jm5ic3A7dWludDMy
X3Qgb3B0YXJyW1hFTl9GTEVYX0FSUkFZX0RJTV07PGJyIGNsYXNzPSIiPg0KJm5ic3A7Jm5ic3A7
Jm5ic3A7fTs8YnIgY2xhc3M9IiI+DQo8YnIgY2xhc3M9IiI+DQpBcyB0aGUgaHlwZXJjYWxsIGFy
Z3VtZW50IGhhcyB0aGUgUENJIHNlZ21lbnQgbnVtYmVyLCBYRU4gd2lsbCBhY2Nlc3MgdGhlIFBD
SSBjb25maWcgc3BhY2UgYmFzZWQgb24gdGhpcyBzZWdtZW50IG51bWJlciBhbmQgZmluZCB0aGUg
aG9zdC1icmlkZ2UgY29ycmVzcG9uZGluZyB0byB0aGlzIHNlZ21lbnQgbnVtYmVyLiBBdCB0aGlz
IHN0YWdlIGhvc3QgYnJpZGdlIGlzIGZ1bGx5IGluaXRpYWxpemVkIHNvIHRoZXJlIHdpbGwgYmUg
bm8gaXNzdWUgdG8NCiBhY2Nlc3MgdGhlIGNvbmZpZyBzcGFjZS48YnIgY2xhc3M9IiI+DQo8YnIg
Y2xhc3M9IiI+DQpYRU4gd2lsbCBhZGQgdGhlIFBDSSBkZXZpY2VzIGluIHRoZSBsaW5rZWQgbGlz
dCBtYWludGFpbiBpbiBYRU4gdXNpbmcgdGhlIGZ1bmN0aW9uIHBjaV9hZGRfZGV2aWNlKCkuIFhF
TiB3aWxsIGJlIGF3YXJlIG9mIGFsbCB0aGUgUENJIGRldmljZXMgb24gdGhlIHN5c3RlbSBhbmQg
YWxsIHRoZSBkZXZpY2Ugd2lsbCBiZSBhZGRlZCB0byB0aGUgaGFyZHdhcmUgZG9tYWluLjxiciBj
bGFzcz0iIj4NCjxiciBjbGFzcz0iIj4NCkxpbWl0YXRpb25zOjxiciBjbGFzcz0iIj4NCiogV2hl
biBQQ0kgZGV2aWNlcyBhcmUgYWRkZWQgdG8gWEVOLCBNU0kgY2FwYWJpbGl0eSBpcyBub3QgaW5p
dGlhbGl6ZWQgaW5zaWRlIFhFTiBhbmQgbm90IHN1cHBvcnRlZCBhcyBvZiBub3cuPGJyIGNsYXNz
PSIiPg0KKiBBQ1MgY2FwYWJpbGl0eSBpcyBkaXNhYmxlIGZvciBBUk0gYXMgb2Ygbm93IGFzIGFm
dGVyIGVuYWJsaW5nIGl0IGRldmljZXMgYXJlIG5vdCBhY2Nlc3NpYmxlLjxiciBjbGFzcz0iIj4N
CiogRG9tMExlc3MgaW1wbGVtZW50YXRpb24gd2lsbCByZXF1aXJlIHRvIGhhdmUgdGhlIGNhcGFj
aXR5IGluc2lkZSBYZW4gdG8gZGlzY292ZXIgdGhlIFBDSSBkZXZpY2VzICh3aXRob3V0IGRlcGVu
ZGluZyBvbiBEb20wIHRvIGRlY2xhcmUgdGhlbSB0byBYZW4pLjxiciBjbGFzcz0iIj4NCjwvYmxv
Y2txdW90ZT4NCkkgdGhpbmsgaXQgaXMgZmluZSB0byBhc3N1bWUgdGhhdCBmb3IgZG9tMGxlc3Mg
dGhlICZxdW90O2Zpcm13YXJlJnF1b3Q7IGhhcyB0YWtlbjxiciBjbGFzcz0iIj4NCmNhcmUgb2Yg
c2V0dGluZyB1cCB0aGUgQkFScyBjb3JyZWN0bHkuIFN0YXJ0aW5nIHdpdGggdGhhdCBhc3N1bXB0
aW9uLCBpdDxiciBjbGFzcz0iIj4NCmxvb2tzIGxpa2UgaXQgc2hvdWxkIGJlICZxdW90O2Vhc3km
cXVvdDsgdG8gd2FsayB0aGUgUENJIHRvcG9sb2d5IGluIFhlbiB3aGVuL2lmPGJyIGNsYXNzPSIi
Pg0KdGhlcmUgaXMgbm8gZG9tMD88YnIgY2xhc3M9IiI+DQo8L2Jsb2NrcXVvdGU+DQpZZXMgYXMg
d2UgZGlzY3Vzc2VkIGR1cmluZyB0aGUgZGVzaWduIHNlc3Npb24sIHdlIGN1cnJlbnRseSB0aGlu
ayB0aGF0IGl0IGlzIHRoZSB3YXkgdG8gZ28uPGJyIGNsYXNzPSIiPg0KV2UgYXJlIGZvciBub3cg
cmVseWluZyBvbiBEb20wIHRvIGdldCB0aGUgbGlzdCBvZiBQQ0kgZGV2aWNlcyBidXQgdGhpcyBp
cyBkZWZpbml0ZWx5IHRoZSBzdHJhdGVneSB3ZSB3b3VsZCBsaWtlIHRvIHVzZSB0byBoYXZlIERv
bTAgc3VwcG9ydC48YnIgY2xhc3M9IiI+DQpJZiB0aGlzIGlzIHdvcmtpbmcgd2VsbCwgSSBldmVu
IHRoaW5rIHdlIGNvdWxkIGdldCByaWQgb2YgdGhlIGh5cGVyY2FsbCBhbGwgdG9nZXRoZXIuPGJy
IGNsYXNzPSIiPg0KPC9ibG9ja3F1b3RlPg0KPHNwYW4gc3R5bGU9ImNhcmV0LWNvbG9yOiByZ2Io
MCwgMCwgMCk7IGZvbnQtZmFtaWx5OiBIZWx2ZXRpY2E7IGZvbnQtc2l6ZTogMTJweDsgZm9udC1z
dHlsZTogbm9ybWFsOyBmb250LXZhcmlhbnQtY2Fwczogbm9ybWFsOyBmb250LXdlaWdodDogbm9y
bWFsOyBsZXR0ZXItc3BhY2luZzogbm9ybWFsOyB0ZXh0LWFsaWduOiBzdGFydDsgdGV4dC1pbmRl
bnQ6IDBweDsgdGV4dC10cmFuc2Zvcm06IG5vbmU7IHdoaXRlLXNwYWNlOiBub3JtYWw7IHdvcmQt
c3BhY2luZzogMHB4OyAtd2Via2l0LXRleHQtc3Ryb2tlLXdpZHRoOiAwcHg7IHRleHQtZGVjb3Jh
dGlvbjogbm9uZTsgZmxvYXQ6IG5vbmU7IGRpc3BsYXk6IGlubGluZSAhaW1wb3J0YW50OyIgY2xh
c3M9IiI+Li4uYW5kDQogdGhpcyBpcyB0aGUgc2FtZSB3YXkgb2YgY29uZmlndXJpbmcgaWYgZW51
bWVyYXRpb24gaGFwcGVucyBpbiB0aGUgYm9vdGxvYWRlcj88L3NwYW4+PGJyIHN0eWxlPSJjYXJl
dC1jb2xvcjogcmdiKDAsIDAsIDApOyBmb250LWZhbWlseTogSGVsdmV0aWNhOyBmb250LXNpemU6
IDEycHg7IGZvbnQtc3R5bGU6IG5vcm1hbDsgZm9udC12YXJpYW50LWNhcHM6IG5vcm1hbDsgZm9u
dC13ZWlnaHQ6IG5vcm1hbDsgbGV0dGVyLXNwYWNpbmc6IG5vcm1hbDsgdGV4dC1hbGlnbjogc3Rh
cnQ7IHRleHQtaW5kZW50OiAwcHg7IHRleHQtdHJhbnNmb3JtOiBub25lOyB3aGl0ZS1zcGFjZTog
bm9ybWFsOyB3b3JkLXNwYWNpbmc6IDBweDsgLXdlYmtpdC10ZXh0LXN0cm9rZS13aWR0aDogMHB4
OyB0ZXh0LWRlY29yYXRpb246IG5vbmU7IiBjbGFzcz0iIj4NCjxiciBzdHlsZT0iY2FyZXQtY29s
b3I6IHJnYigwLCAwLCAwKTsgZm9udC1mYW1pbHk6IEhlbHZldGljYTsgZm9udC1zaXplOiAxMnB4
OyBmb250LXN0eWxlOiBub3JtYWw7IGZvbnQtdmFyaWFudC1jYXBzOiBub3JtYWw7IGZvbnQtd2Vp
Z2h0OiBub3JtYWw7IGxldHRlci1zcGFjaW5nOiBub3JtYWw7IHRleHQtYWxpZ246IHN0YXJ0OyB0
ZXh0LWluZGVudDogMHB4OyB0ZXh0LXRyYW5zZm9ybTogbm9uZTsgd2hpdGUtc3BhY2U6IG5vcm1h
bDsgd29yZC1zcGFjaW5nOiAwcHg7IC13ZWJraXQtdGV4dC1zdHJva2Utd2lkdGg6IDBweDsgdGV4
dC1kZWNvcmF0aW9uOiBub25lOyIgY2xhc3M9IiI+DQo8c3BhbiBzdHlsZT0iY2FyZXQtY29sb3I6
IHJnYigwLCAwLCAwKTsgZm9udC1mYW1pbHk6IEhlbHZldGljYTsgZm9udC1zaXplOiAxMnB4OyBm
b250LXN0eWxlOiBub3JtYWw7IGZvbnQtdmFyaWFudC1jYXBzOiBub3JtYWw7IGZvbnQtd2VpZ2h0
OiBub3JtYWw7IGxldHRlci1zcGFjaW5nOiBub3JtYWw7IHRleHQtYWxpZ246IHN0YXJ0OyB0ZXh0
LWluZGVudDogMHB4OyB0ZXh0LXRyYW5zZm9ybTogbm9uZTsgd2hpdGUtc3BhY2U6IG5vcm1hbDsg
d29yZC1zcGFjaW5nOiAwcHg7IC13ZWJraXQtdGV4dC1zdHJva2Utd2lkdGg6IDBweDsgdGV4dC1k
ZWNvcmF0aW9uOiBub25lOyBmbG9hdDogbm9uZTsgZGlzcGxheTogaW5saW5lICFpbXBvcnRhbnQ7
IiBjbGFzcz0iIj5JDQogZG8gc3VwcG9ydCB0aGUgaWRlYSB3ZSBnbyBhd2F5IGZyb20gUEhZU0RF
Vk9QX3BjaV9kZXZpY2VfYWRkLCBidXQgZHJpdmVyIGRvbWFpbjwvc3Bhbj48YnIgc3R5bGU9ImNh
cmV0LWNvbG9yOiByZ2IoMCwgMCwgMCk7IGZvbnQtZmFtaWx5OiBIZWx2ZXRpY2E7IGZvbnQtc2l6
ZTogMTJweDsgZm9udC1zdHlsZTogbm9ybWFsOyBmb250LXZhcmlhbnQtY2Fwczogbm9ybWFsOyBm
b250LXdlaWdodDogbm9ybWFsOyBsZXR0ZXItc3BhY2luZzogbm9ybWFsOyB0ZXh0LWFsaWduOiBz
dGFydDsgdGV4dC1pbmRlbnQ6IDBweDsgdGV4dC10cmFuc2Zvcm06IG5vbmU7IHdoaXRlLXNwYWNl
OiBub3JtYWw7IHdvcmQtc3BhY2luZzogMHB4OyAtd2Via2l0LXRleHQtc3Ryb2tlLXdpZHRoOiAw
cHg7IHRleHQtZGVjb3JhdGlvbjogbm9uZTsiIGNsYXNzPSIiPg0KPGJyIHN0eWxlPSJjYXJldC1j
b2xvcjogcmdiKDAsIDAsIDApOyBmb250LWZhbWlseTogSGVsdmV0aWNhOyBmb250LXNpemU6IDEy
cHg7IGZvbnQtc3R5bGU6IG5vcm1hbDsgZm9udC12YXJpYW50LWNhcHM6IG5vcm1hbDsgZm9udC13
ZWlnaHQ6IG5vcm1hbDsgbGV0dGVyLXNwYWNpbmc6IG5vcm1hbDsgdGV4dC1hbGlnbjogc3RhcnQ7
IHRleHQtaW5kZW50OiAwcHg7IHRleHQtdHJhbnNmb3JtOiBub25lOyB3aGl0ZS1zcGFjZTogbm9y
bWFsOyB3b3JkLXNwYWNpbmc6IDBweDsgLXdlYmtpdC10ZXh0LXN0cm9rZS13aWR0aDogMHB4OyB0
ZXh0LWRlY29yYXRpb246IG5vbmU7IiBjbGFzcz0iIj4NCjxzcGFuIHN0eWxlPSJjYXJldC1jb2xv
cjogcmdiKDAsIDAsIDApOyBmb250LWZhbWlseTogSGVsdmV0aWNhOyBmb250LXNpemU6IDEycHg7
IGZvbnQtc3R5bGU6IG5vcm1hbDsgZm9udC12YXJpYW50LWNhcHM6IG5vcm1hbDsgZm9udC13ZWln
aHQ6IG5vcm1hbDsgbGV0dGVyLXNwYWNpbmc6IG5vcm1hbDsgdGV4dC1hbGlnbjogc3RhcnQ7IHRl
eHQtaW5kZW50OiAwcHg7IHRleHQtdHJhbnNmb3JtOiBub25lOyB3aGl0ZS1zcGFjZTogbm9ybWFs
OyB3b3JkLXNwYWNpbmc6IDBweDsgLXdlYmtpdC10ZXh0LXN0cm9rZS13aWR0aDogMHB4OyB0ZXh0
LWRlY29yYXRpb246IG5vbmU7IGZsb2F0OiBub25lOyBkaXNwbGF5OiBpbmxpbmUgIWltcG9ydGFu
dDsiIGNsYXNzPSIiPmp1c3QNCiBzaWduYWxzIFhlbiB0aGF0IHRoZSBlbnVtZXJhdGlvbiBpcyBk
b25lIGFuZCBYZW4gY2FuIHRyYXZlcnNlIHRoZSBidXMgYnkgdGhhdCB0aW1lLjwvc3Bhbj48YnIg
c3R5bGU9ImNhcmV0LWNvbG9yOiByZ2IoMCwgMCwgMCk7IGZvbnQtZmFtaWx5OiBIZWx2ZXRpY2E7
IGZvbnQtc2l6ZTogMTJweDsgZm9udC1zdHlsZTogbm9ybWFsOyBmb250LXZhcmlhbnQtY2Fwczog
bm9ybWFsOyBmb250LXdlaWdodDogbm9ybWFsOyBsZXR0ZXItc3BhY2luZzogbm9ybWFsOyB0ZXh0
LWFsaWduOiBzdGFydDsgdGV4dC1pbmRlbnQ6IDBweDsgdGV4dC10cmFuc2Zvcm06IG5vbmU7IHdo
aXRlLXNwYWNlOiBub3JtYWw7IHdvcmQtc3BhY2luZzogMHB4OyAtd2Via2l0LXRleHQtc3Ryb2tl
LXdpZHRoOiAwcHg7IHRleHQtZGVjb3JhdGlvbjogbm9uZTsiIGNsYXNzPSIiPg0KPGJyIHN0eWxl
PSJjYXJldC1jb2xvcjogcmdiKDAsIDAsIDApOyBmb250LWZhbWlseTogSGVsdmV0aWNhOyBmb250
LXNpemU6IDEycHg7IGZvbnQtc3R5bGU6IG5vcm1hbDsgZm9udC12YXJpYW50LWNhcHM6IG5vcm1h
bDsgZm9udC13ZWlnaHQ6IG5vcm1hbDsgbGV0dGVyLXNwYWNpbmc6IG5vcm1hbDsgdGV4dC1hbGln
bjogc3RhcnQ7IHRleHQtaW5kZW50OiAwcHg7IHRleHQtdHJhbnNmb3JtOiBub25lOyB3aGl0ZS1z
cGFjZTogbm9ybWFsOyB3b3JkLXNwYWNpbmc6IDBweDsgLXdlYmtpdC10ZXh0LXN0cm9rZS13aWR0
aDogMHB4OyB0ZXh0LWRlY29yYXRpb246IG5vbmU7IiBjbGFzcz0iIj4NCjxzcGFuIHN0eWxlPSJj
YXJldC1jb2xvcjogcmdiKDAsIDAsIDApOyBmb250LWZhbWlseTogSGVsdmV0aWNhOyBmb250LXNp
emU6IDEycHg7IGZvbnQtc3R5bGU6IG5vcm1hbDsgZm9udC12YXJpYW50LWNhcHM6IG5vcm1hbDsg
Zm9udC13ZWlnaHQ6IG5vcm1hbDsgbGV0dGVyLXNwYWNpbmc6IG5vcm1hbDsgdGV4dC1hbGlnbjog
c3RhcnQ7IHRleHQtaW5kZW50OiAwcHg7IHRleHQtdHJhbnNmb3JtOiBub25lOyB3aGl0ZS1zcGFj
ZTogbm9ybWFsOyB3b3JkLXNwYWNpbmc6IDBweDsgLXdlYmtpdC10ZXh0LXN0cm9rZS13aWR0aDog
MHB4OyB0ZXh0LWRlY29yYXRpb246IG5vbmU7IGZsb2F0OiBub25lOyBkaXNwbGF5OiBpbmxpbmUg
IWltcG9ydGFudDsiIGNsYXNzPSIiPlBsZWFzZQ0KIGFsc28gbm90ZSwgdGhhdCB0aGVyZSBhcmUg
YWN0dWFsbHkgMyBjYXNlcyBwb3NzaWJsZSB3cnQgd2hlcmUgdGhlIGVudW1lcmF0aW9uIGFuZDwv
c3Bhbj48YnIgc3R5bGU9ImNhcmV0LWNvbG9yOiByZ2IoMCwgMCwgMCk7IGZvbnQtZmFtaWx5OiBI
ZWx2ZXRpY2E7IGZvbnQtc2l6ZTogMTJweDsgZm9udC1zdHlsZTogbm9ybWFsOyBmb250LXZhcmlh
bnQtY2Fwczogbm9ybWFsOyBmb250LXdlaWdodDogbm9ybWFsOyBsZXR0ZXItc3BhY2luZzogbm9y
bWFsOyB0ZXh0LWFsaWduOiBzdGFydDsgdGV4dC1pbmRlbnQ6IDBweDsgdGV4dC10cmFuc2Zvcm06
IG5vbmU7IHdoaXRlLXNwYWNlOiBub3JtYWw7IHdvcmQtc3BhY2luZzogMHB4OyAtd2Via2l0LXRl
eHQtc3Ryb2tlLXdpZHRoOiAwcHg7IHRleHQtZGVjb3JhdGlvbjogbm9uZTsiIGNsYXNzPSIiPg0K
PGJyIHN0eWxlPSJjYXJldC1jb2xvcjogcmdiKDAsIDAsIDApOyBmb250LWZhbWlseTogSGVsdmV0
aWNhOyBmb250LXNpemU6IDEycHg7IGZvbnQtc3R5bGU6IG5vcm1hbDsgZm9udC12YXJpYW50LWNh
cHM6IG5vcm1hbDsgZm9udC13ZWlnaHQ6IG5vcm1hbDsgbGV0dGVyLXNwYWNpbmc6IG5vcm1hbDsg
dGV4dC1hbGlnbjogc3RhcnQ7IHRleHQtaW5kZW50OiAwcHg7IHRleHQtdHJhbnNmb3JtOiBub25l
OyB3aGl0ZS1zcGFjZTogbm9ybWFsOyB3b3JkLXNwYWNpbmc6IDBweDsgLXdlYmtpdC10ZXh0LXN0
cm9rZS13aWR0aDogMHB4OyB0ZXh0LWRlY29yYXRpb246IG5vbmU7IiBjbGFzcz0iIj4NCjxzcGFu
IHN0eWxlPSJjYXJldC1jb2xvcjogcmdiKDAsIDAsIDApOyBmb250LWZhbWlseTogSGVsdmV0aWNh
OyBmb250LXNpemU6IDEycHg7IGZvbnQtc3R5bGU6IG5vcm1hbDsgZm9udC12YXJpYW50LWNhcHM6
IG5vcm1hbDsgZm9udC13ZWlnaHQ6IG5vcm1hbDsgbGV0dGVyLXNwYWNpbmc6IG5vcm1hbDsgdGV4
dC1hbGlnbjogc3RhcnQ7IHRleHQtaW5kZW50OiAwcHg7IHRleHQtdHJhbnNmb3JtOiBub25lOyB3
aGl0ZS1zcGFjZTogbm9ybWFsOyB3b3JkLXNwYWNpbmc6IDBweDsgLXdlYmtpdC10ZXh0LXN0cm9r
ZS13aWR0aDogMHB4OyB0ZXh0LWRlY29yYXRpb246IG5vbmU7IGZsb2F0OiBub25lOyBkaXNwbGF5
OiBpbmxpbmUgIWltcG9ydGFudDsiIGNsYXNzPSIiPmNvbmZpZ3VyYXRpb24NCiBoYXBwZW5zOiBi
b290IGZpcm13YXJlLCBEb20wLCBYZW4uIFNvLCBpdCBzZWVtcyB3ZTwvc3Bhbj48YnIgc3R5bGU9
ImNhcmV0LWNvbG9yOiByZ2IoMCwgMCwgMCk7IGZvbnQtZmFtaWx5OiBIZWx2ZXRpY2E7IGZvbnQt
c2l6ZTogMTJweDsgZm9udC1zdHlsZTogbm9ybWFsOyBmb250LXZhcmlhbnQtY2Fwczogbm9ybWFs
OyBmb250LXdlaWdodDogbm9ybWFsOyBsZXR0ZXItc3BhY2luZzogbm9ybWFsOyB0ZXh0LWFsaWdu
OiBzdGFydDsgdGV4dC1pbmRlbnQ6IDBweDsgdGV4dC10cmFuc2Zvcm06IG5vbmU7IHdoaXRlLXNw
YWNlOiBub3JtYWw7IHdvcmQtc3BhY2luZzogMHB4OyAtd2Via2l0LXRleHQtc3Ryb2tlLXdpZHRo
OiAwcHg7IHRleHQtZGVjb3JhdGlvbjogbm9uZTsiIGNsYXNzPSIiPg0KPGJyIHN0eWxlPSJjYXJl
dC1jb2xvcjogcmdiKDAsIDAsIDApOyBmb250LWZhbWlseTogSGVsdmV0aWNhOyBmb250LXNpemU6
IDEycHg7IGZvbnQtc3R5bGU6IG5vcm1hbDsgZm9udC12YXJpYW50LWNhcHM6IG5vcm1hbDsgZm9u
dC13ZWlnaHQ6IG5vcm1hbDsgbGV0dGVyLXNwYWNpbmc6IG5vcm1hbDsgdGV4dC1hbGlnbjogc3Rh
cnQ7IHRleHQtaW5kZW50OiAwcHg7IHRleHQtdHJhbnNmb3JtOiBub25lOyB3aGl0ZS1zcGFjZTog
bm9ybWFsOyB3b3JkLXNwYWNpbmc6IDBweDsgLXdlYmtpdC10ZXh0LXN0cm9rZS13aWR0aDogMHB4
OyB0ZXh0LWRlY29yYXRpb246IG5vbmU7IiBjbGFzcz0iIj4NCjxzcGFuIHN0eWxlPSJjYXJldC1j
b2xvcjogcmdiKDAsIDAsIDApOyBmb250LWZhbWlseTogSGVsdmV0aWNhOyBmb250LXNpemU6IDEy
cHg7IGZvbnQtc3R5bGU6IG5vcm1hbDsgZm9udC12YXJpYW50LWNhcHM6IG5vcm1hbDsgZm9udC13
ZWlnaHQ6IG5vcm1hbDsgbGV0dGVyLXNwYWNpbmc6IG5vcm1hbDsgdGV4dC1hbGlnbjogc3RhcnQ7
IHRleHQtaW5kZW50OiAwcHg7IHRleHQtdHJhbnNmb3JtOiBub25lOyB3aGl0ZS1zcGFjZTogbm9y
bWFsOyB3b3JkLXNwYWNpbmc6IDBweDsgLXdlYmtpdC10ZXh0LXN0cm9rZS13aWR0aDogMHB4OyB0
ZXh0LWRlY29yYXRpb246IG5vbmU7IGZsb2F0OiBub25lOyBkaXNwbGF5OiBpbmxpbmUgIWltcG9y
> are going to have different approaches for the first two (see my comment
> above on the hypercall use in Dom0). So, walking the bus ourselves in Xen
> seems to be good for all the use-cases above.
In that case we may have to implement a new hypercall to inform Xen that
enumeration is complete, so that it can then scan the devices. We could tell
Xen, via a command-line parameter, to delay its own enumeration until this
hypercall is made. That way, when this is not required because the firmware
already did the enumeration, we can still properly support Dom0Less.
>> # Enable the existing x86 virtual PCI support for ARM:
>>
>> The existing vPCI support available for x86 is adapted for Arm. When the
>> device is added to Xen via the hypercall PHYSDEVOP_pci_device_add, a vPCI
>> handler for config space accesses is added to the PCI device to emulate
>> it.
>>
>> An MMIO trap handler for the PCI ECAM space is registered in Xen, so that
>> when a guest tries to access the PCI config space, Xen traps the access
>> and emulates the read/write through vPCI instead of the real PCI
>> hardware.
>
> Just to make it clear: Dom0 still accesses the bus directly, without
> emulation, right?
No. Once Xen has done its PCI enumeration (either at boot, or after a
hypercall from the hardware domain), only Xen will access the physical PCI
bus; everybody else will go through vPCI.
>> Limitations:
>> * No handler is registered for MSI configuration.
>> * Only legacy interrupts are supported and tested as of now; MSI is not
>>   implemented or tested.
>>
>> # Assign the device to the guest:
>>
>> Assigning a PCI device from the hardware domain to a guest is done using
>> the guest config option below. When the xl tool creates the domain, the
>> PCI devices are assigned to the guest's vPCI bus.
>>
>> 	pci = [ "PCI_SPEC_STRING", "PCI_SPEC_STRING", ... ]
>>
>> The guest will only be able to access the assigned devices and to see the
>> bridges. The guest will not be able to access or see devices that are not
>> assigned to it.
>
> Does this mean that we do not need to configure the bridges, as those are
> exposed to the guest implicitly?
>
>> Limitations:
>> * As of now, all the bridges on the PCI bus are seen by the guest on the
>>   vPCI bus.
>
> So, what happens if a guest tries to access a bridge that doesn't have an
> assigned PCI device behind it? E.g. we pass through PCIe_dev0, which is
> behind Bridge0, and the guest also sees Bridge1 and tries to access
> devices behind it during enumeration.
>
> Could you please clarify?
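For reference, the quoted option in a guest config might look like this (the
BDFs below are made up for illustration; xl.cfg(5) documents the full
PCI_SPEC_STRING syntax):

```
# Illustrative xl guest config fragment
name   = "guest0"
# Pass through two functions by segment:bus:device.function
pci    = [ "0000:01:00.0", "0000:03:00.1" ]
```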
The bridges are only accessible read-only and cannot be modified. Even though
a guest can see a bridge, vPCI will only show the assigned devices behind it.
If there is no device behind that bridge assigned to the guest, the guest
will see an empty bus behind that bridge.
ZXh0OyAtd2Via2l0LXVzZXItZHJhZzogbm9uZTsgLXdlYmtpdC10YXAtaGlnaGxpZ2h0LWNvbG9y
OiB0cmFuc3BhcmVudDsgYmFja2dyb3VuZC1jb2xvcjogaW5oZXJpdDsiPjxiciBjbGFzcz0iIj4N
>>> We need to come up with something similar for dom0less too. It could be
>>> exactly the same thing (a list of BDFs as strings as a device tree
>>> property) or something else if we can come up with a better idea.
>>
>> Fully agree.
>> Maybe a tree topology could allow more possibilities (like giving BAR
>> values) in the future.
>>
>>>> # Emulated PCI device tree node in libxl:
>>>>
>>>> Libxl is creating a virtual PCI device tree node in the device tree to
>>>> enable the guest OS to discover the virtual PCI during guest boot. We
>>>> introduced the new config option [vpci="pci_ecam"] for guests. When
>>>> this config option is enabled in a guest configuration, a PCI device
>>>> tree node will be created in the guest device tree.
>>>>
>>>> A new area has been reserved in the Arm guest physical map at which the
>>>> VPCI bus is declared in the device tree (reg and ranges parameters of
>>>> the node). A trap handler for PCI ECAM accesses from the guest has been
>>>> registered at the defined address and redirects requests to the VPCI
>>>> driver in Xen.
>>>>
>>>> Limitation:
>>>> * Only one PCI device tree node is supported as of now.
>>>
>>> I think vpci="pci_ecam" should be optional: if pci=[ "PCI_SPEC_STRING",
>>> ...] is specified, then vpci="pci_ecam" is implied.
>>>
>>> vpci="pci_ecam" is only useful one day in the future when we want to be
>>> able to emulate other non-ECAM host bridges. For now we could even skip
>>> it.
>>
>> This would create a problem if xl is used to add a PCI device, as we need
>> the PCI node to be in the DTB when the guest is created.
>> I agree this is not needed, but removing it might create more complexity
>> in the code.
>
> I would suggest we have it from day 0 as there is plenty of HW available
> which is not ECAM.
>
> Having vpci allows other bridges to be supported.
Yes, we agree.
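As an illustration of the discussion above, a guest configuration combining
the two options might look like the sketch below. The vpci syntax is the one
quoted in this thread, the BDF is a hypothetical placeholder, and whether a
"pci" list alone should imply vpci="pci_ecam" is exactly the open question:

```
# Sketch of a guest config, not final syntax:
vpci = "pci_ecam"           # explicit; could be implied by "pci" below
pci  = [ "0000:03:00.0" ]   # hypothetical BDF of the assigned device
```

Keeping the option explicit from day 0 also leaves room for values other
than "pci_ecam" once non-ECAM host bridges are emulated.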
>> Bertrand
>>
>>>> BAR value and IOMEM mapping:
>>>>
>>>> The Linux guest will do the PCI enumeration based on the area reserved
>>>> for ECAM and the IOMEM ranges in the VPCI device tree node. Once a PCI
>>>> device is assigned to the guest, Xen will map the guest PCI IOMEM
>>>> region to the real physical IOMEM region, only for the assigned
>>>> devices.
>>>>
>>>> As of now we have not modified the existing VPCI code to map the guest
>>>> PCI IOMEM region to the real physical IOMEM region. We used the
>>>> existing guest "iomem" config option to map the region.
>>>> For example:
>>>>     Guest reserved IOMEM region:  0x04020000
>>>>     Real physical IOMEM region:   0x50000000
>>>>     IOMEM size:                   128MB
>>>>     iomem config will be:         iomem = ["0x50000,0x8000@0x4020"]
>>>>
>>>> There is no need to map the ECAM space, as Xen already has access to
>>>> the ECAM space; Xen will trap ECAM accesses from the guest and perform
>>>> the read/write on the VPCI bus.
>>>>
>>>> IOMEM accesses will not be trapped: the guest will directly access the
>>>> IOMEM region of the assigned device via the stage-2 translation.
>>>>
>>>> In the same way, we mapped the assigned devices' IRQs to the guest
>>>> using the config option below:
>>>>     irqs = [ NUMBER, NUMBER, ... ]
>>>>
>>>> Limitations:
>>>> * Need to avoid the "iomem" and "irq" guest config options and map the
>>>>   IOMEM region and IRQ at the same time as the device is assigned to
>>>>   the guest via the "pci" guest config option, when xl creates the
>>>>   domain.
>>>> * Emulated BAR values on the VPCI bus should reflect the IOMEM mapped
>>>>   address.
>>>> * The x86 mapping code should be ported to Arm so that the stage-2
>>>>   translation is adapted when the guest modifies the BAR register
>>>>   values (to map the address requested by the guest for a specific
>>>>   IOMEM to the address actually contained in the real BAR register of
>>>>   the corresponding device).
>>>>
>>>> # SMMU configuration for guest:
>>>>
>>>> When assigning PCI devices to a guest, the SMMU configuration should be
>>>> updated to remove access to the hardware domain memory.
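The numbers in the quoted iomem example line up because the iomem option
takes page frame numbers (4 KiB pages) rather than byte addresses, in the
form "MFN,NR_PAGES@GFN"; a quick sketch of the conversion, using the values
from the example:

```shell
# Derive the iomem entry from the example's byte addresses and size.
real_base=$(( 0x50000000 ))        # real physical IOMEM region
guest_base=$(( 0x04020000 ))       # guest reserved IOMEM region
size=$(( 128 * 1024 * 1024 ))      # IOMEM size: 128MB
# Shift by 12 to turn byte addresses into 4 KiB page frame numbers.
printf 'iomem = ["0x%x,0x%x@0x%x"]\n' \
    $(( real_base >> 12 )) $(( size >> 12 )) $(( guest_base >> 12 ))
# prints: iomem = ["0x50000,0x8000@0x4020"]
```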
> So, as the hardware domain still has access to the PCI configuration
> space, we can potentially have a condition where Dom0 accesses the
> device. AFAIU, if we have pci front/back, then before assigning the
> device to the guest we unbind it from the real driver and bind it to the
> back. Are we going to do something similar here?
Yes, we have to unbind the driver from the hardware domain before assigning
the device to the guest.
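For comparison, with pciback on Linux that rebind is usually done through
sysfs. The sketch below only prints the steps (a dry run); the BDF is a
placeholder, and it assumes the xen-pciback driver is loaded in the
hardware domain:

```shell
# Dry run: print the sysfs writes that would move a device from its
# hardware-domain driver to pciback before passthrough. Remove the outer
# 'echo's and run as root to perform them for real.
BDF="0000:03:00.0"   # placeholder device address
echo "echo $BDF > /sys/bus/pci/devices/$BDF/driver/unbind"
echo "echo $BDF > /sys/bus/pci/drivers/pciback/new_slot"
echo "echo $BDF > /sys/bus/pci/drivers/pciback/bind"
```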
Also, as soon as Xen has done its PCI enumeration (either on boot or after
an
aGRHVW9NVEF4TUM0d01EQXdNREFzSURJNU5pNHdNREF3TURBcElqNEtJQ0FnSUNBZ0lDQWdJQ0Fn
SUNBZ0lEeHdZWFJvSUdROUlrMHdMRE1nUXpFdU1qVXNNeUF4TGpJMUxERWdNaTQxTERFZ1F6TXVO
elVzTVNBekxqYzFMRE1nTlN3eklpQnBaRDBpVUdGMGFDSWdjM1J5YjJ0bFBTSWpSVUl3TURBd0lp
QnpkSEp2YTJVdGQybGtkR2c5SWpFaVBqd3ZjR0YwYUQ0S0lDQWdJQ0FnSUNBZ0lDQWdJQ0FnSUR4
eVpXTjBJR2xrUFNKU1pXTjBZVzVuYkdVaUlIZzlJakFpSUhrOUlqQWlJSGRwWkhSb1BTSTFJaUJv
WldsbmFIUTlJalFpUGp3dmNtVmpkRDRLSUNBZ0lDQWdJQ0FnSUNBZ1BDOW5QZ29nSUNBZ0lDQWdJ
RHd2Wno0S0lDQWdJRHd2Wno0S1BDOXpkbWMmIzQzOyZxdW90Oyk7IGJvcmRlci1ib3R0b206IDFw
eCBzb2xpZCB0cmFuc3BhcmVudDsgYmFja2dyb3VuZC1jb2xvcjogaW5oZXJpdDsiPmh5cGVyY2Fs
bDwvc3Bhbj48c3BhbiBjbGFzcz0iU0NYVzQzNzgwNzExIE5vcm1hbFRleHRSdW4gQkNYMCIgc3R5
bGU9Im1hcmdpbjogMHB4OyBwYWRkaW5nOiAwcHg7IHVzZXItc2VsZWN0OiB0ZXh0OyAtd2Via2l0
LXVzZXItZHJhZzogbm9uZTsgLXdlYmtpdC10YXAtaGlnaGxpZ2h0LWNvbG9yOiB0cmFuc3BhcmVu
dDsgYmFja2dyb3VuZC1jb2xvcjogaW5oZXJpdDsiPiZuYnNwO2Zyb20NCiB0aGUgaGFyZHdhcmUg
ZG9tYWluKSwgb25seSBYZW4gd2lsbCBhY2Nlc3MgdGhlIHBoeXNpY2FsIFBDSSBidXMsIGV2ZXJ5
Ym9keSBlbHNlIHdpbGwgZ28gdGhyb3VnaCBWUENJLjwvc3Bhbj48L3NwYW4+PHNwYW4gY2xhc3M9
IkVPUCBTQ1hXNDM3ODA3MTEgQkNYMCIgZGF0YS1jY3AtcHJvcHM9InsmcXVvdDsxMzQyMzMyNzkm
cXVvdDs6dHJ1ZSwmcXVvdDszMzU1NTk2ODUmcXVvdDs6NzIwfSIgc3R5bGU9Im1hcmdpbjogMHB4
OyBwYWRkaW5nOiAwcHg7IC13ZWJraXQtdXNlci1kcmFnOiBub25lOyBmb250LXZhcmlhbnQtbGln
YXR1cmVzOiBub3JtYWw7IG9ycGhhbnM6IDI7IHdpZG93czogMjsgYmFja2dyb3VuZC1jb2xvcjog
cmdiKDI1NSwgMjU1LCAyNTUpOyBmb250LXNpemU6IDEwcHQ7IGxpbmUtaGVpZ2h0OiAxNy4yNjY3
cHg7IGZvbnQtZmFtaWx5OiAmcXVvdDtDYWxpYnJpIExpZ2h0JnF1b3Q7LCAmcXVvdDtDYWxpYnJp
IExpZ2h0X0VtYmVkZGVkRm9udCZxdW90OywgJnF1b3Q7Q2FsaWJyaSBMaWdodF9NU0ZvbnRTZXJ2
aWNlJnF1b3Q7LCBzYW5zLXNlcmlmOyI+Jm5ic3A7PC9zcGFuPjwvZGl2Pg0KPGRpdj4NCjxkaXYg
c3R5bGU9Im9ycGhhbnM6IDI7IHdpZG93czogMjsiIGNsYXNzPSIiPjxmb250IGZhY2U9IkNhbGli
cmkgTGlnaHQsIENhbGlicmkgTGlnaHRfRW1iZWRkZWRGb250LCBDYWxpYnJpIExpZ2h0X01TRm9u
dFNlcnZpY2UsIHNhbnMtc2VyaWYiIHNpemU9IjIiIGNsYXNzPSIiPjxzcGFuIHN0eWxlPSJiYWNr
Z3JvdW5kLWNvbG9yOiByZ2IoMjU1LCAyNTUsIDI1NSk7IiBjbGFzcz0iIj48YnIgY2xhc3M9IiI+
DQo8L3NwYW4+PC9mb250PjwvZGl2Pg0KPGRpdiBzdHlsZT0ib3JwaGFuczogMjsgd2lkb3dzOiAy
OyIgY2xhc3M9IiI+PGZvbnQgZmFjZT0iQ2FsaWJyaSBMaWdodCwgQ2FsaWJyaSBMaWdodF9FbWJl
ZGRlZEZvbnQsIENhbGlicmkgTGlnaHRfTVNGb250U2VydmljZSwgc2Fucy1zZXJpZiIgc2l6ZT0i
MiIgY2xhc3M9IiI+PHNwYW4gc3R5bGU9ImJhY2tncm91bmQtY29sb3I6IHJnYigyNTUsIDI1NSwg
MjU1KTsiIGNsYXNzPSIiPi0gUmFodWw8L3NwYW4+PC9mb250PjwvZGl2Pg0KPGRpdiBzdHlsZT0i
b3JwaGFuczogMjsgd2lkb3dzOiAyOyIgY2xhc3M9IiI+PGZvbnQgZmFjZT0iQ2FsaWJyaSBMaWdo
dCwgQ2FsaWJyaSBMaWdodF9FbWJlZGRlZEZvbnQsIENhbGlicmkgTGlnaHRfTVNGb250U2Vydmlj
ZSwgc2Fucy1zZXJpZiIgc2l6ZT0iMiIgY2xhc3M9IiI+PHNwYW4gc3R5bGU9ImJhY2tncm91bmQt
Y29sb3I6IHJnYigyNTUsIDI1NSwgMjU1KTsiIGNsYXNzPSIiPjxiciBjbGFzcz0iIj4NCjwvc3Bh
bj48L2ZvbnQ+PC9kaXY+DQo8YmxvY2txdW90ZSB0eXBlPSJjaXRlIiBjbGFzcz0iIj4NCjxkaXYg
Y2xhc3M9IiI+PGJyIHN0eWxlPSJjYXJldC1jb2xvcjogcmdiKDAsIDAsIDApOyBmb250LWZhbWls
eTogSGVsdmV0aWNhOyBmb250LXNpemU6IDEycHg7IGZvbnQtc3R5bGU6IG5vcm1hbDsgZm9udC12
YXJpYW50LWNhcHM6IG5vcm1hbDsgZm9udC13ZWlnaHQ6IG5vcm1hbDsgbGV0dGVyLXNwYWNpbmc6
IG5vcm1hbDsgdGV4dC1hbGlnbjogc3RhcnQ7IHRleHQtaW5kZW50OiAwcHg7IHRleHQtdHJhbnNm
b3JtOiBub25lOyB3aGl0ZS1zcGFjZTogbm9ybWFsOyB3b3JkLXNwYWNpbmc6IDBweDsgLXdlYmtp
dC10ZXh0LXN0cm9rZS13aWR0aDogMHB4OyB0ZXh0LWRlY29yYXRpb246IG5vbmU7IiBjbGFzcz0i
Ij4NCjxiciBzdHlsZT0iY2FyZXQtY29sb3I6IHJnYigwLCAwLCAwKTsgZm9udC1mYW1pbHk6IEhl
bHZldGljYTsgZm9udC1zaXplOiAxMnB4OyBmb250LXN0eWxlOiBub3JtYWw7IGZvbnQtdmFyaWFu
dC1jYXBzOiBub3JtYWw7IGZvbnQtd2VpZ2h0OiBub3JtYWw7IGxldHRlci1zcGFjaW5nOiBub3Jt
YWw7IHRleHQtYWxpZ246IHN0YXJ0OyB0ZXh0LWluZGVudDogMHB4OyB0ZXh0LXRyYW5zZm9ybTog
bm9uZTsgd2hpdGUtc3BhY2U6IG5vcm1hbDsgd29yZC1zcGFjaW5nOiAwcHg7IC13ZWJraXQtdGV4
dC1zdHJva2Utd2lkdGg6IDBweDsgdGV4dC1kZWNvcmF0aW9uOiBub25lOyIgY2xhc3M9IiI+DQo8
c3BhbiBzdHlsZT0iY2FyZXQtY29sb3I6IHJnYigwLCAwLCAwKTsgZm9udC1mYW1pbHk6IEhlbHZl
dGljYTsgZm9udC1zaXplOiAxMnB4OyBmb250LXN0eWxlOiBub3JtYWw7IGZvbnQtdmFyaWFudC1j
YXBzOiBub3JtYWw7IGZvbnQtd2VpZ2h0OiBub3JtYWw7IGxldHRlci1zcGFjaW5nOiBub3JtYWw7
IHRleHQtYWxpZ246IHN0YXJ0OyB0ZXh0LWluZGVudDogMHB4OyB0ZXh0LXRyYW5zZm9ybTogbm9u
ZTsgd2hpdGUtc3BhY2U6IG5vcm1hbDsgd29yZC1zcGFjaW5nOiAwcHg7IC13ZWJraXQtdGV4dC1z
dHJva2Utd2lkdGg6IDBweDsgdGV4dC1kZWNvcmF0aW9uOiBub25lOyBmbG9hdDogbm9uZTsgZGlz
cGxheTogaW5saW5lICFpbXBvcnRhbnQ7IiBjbGFzcz0iIj5UaGFuaw0KIHlvdSw8L3NwYW4+PGJy
IHN0eWxlPSJjYXJldC1jb2xvcjogcmdiKDAsIDAsIDApOyBmb250LWZhbWlseTogSGVsdmV0aWNh
OyBmb250LXNpemU6IDEycHg7IGZvbnQtc3R5bGU6IG5vcm1hbDsgZm9udC12YXJpYW50LWNhcHM6
IG5vcm1hbDsgZm9udC13ZWlnaHQ6IG5vcm1hbDsgbGV0dGVyLXNwYWNpbmc6IG5vcm1hbDsgdGV4
dC1hbGlnbjogc3RhcnQ7IHRleHQtaW5kZW50OiAwcHg7IHRleHQtdHJhbnNmb3JtOiBub25lOyB3
aGl0ZS1zcGFjZTogbm9ybWFsOyB3b3JkLXNwYWNpbmc6IDBweDsgLXdlYmtpdC10ZXh0LXN0cm9r
ZS13aWR0aDogMHB4OyB0ZXh0LWRlY29yYXRpb246IG5vbmU7IiBjbGFzcz0iIj4NCjxiciBzdHls
ZT0iY2FyZXQtY29sb3I6IHJnYigwLCAwLCAwKTsgZm9udC1mYW1pbHk6IEhlbHZldGljYTsgZm9u
dC1zaXplOiAxMnB4OyBmb250LXN0eWxlOiBub3JtYWw7IGZvbnQtdmFyaWFudC1jYXBzOiBub3Jt
YWw7IGZvbnQtd2VpZ2h0OiBub3JtYWw7IGxldHRlci1zcGFjaW5nOiBub3JtYWw7IHRleHQtYWxp
Z246IHN0YXJ0OyB0ZXh0LWluZGVudDogMHB4OyB0ZXh0LXRyYW5zZm9ybTogbm9uZTsgd2hpdGUt
c3BhY2U6IG5vcm1hbDsgd29yZC1zcGFjaW5nOiAwcHg7IC13ZWJraXQtdGV4dC1zdHJva2Utd2lk
dGg6IDBweDsgdGV4dC1kZWNvcmF0aW9uOiBub25lOyIgY2xhc3M9IiI+DQo8c3BhbiBzdHlsZT0i
Y2FyZXQtY29sb3I6IHJnYigwLCAwLCAwKTsgZm9udC1mYW1pbHk6IEhlbHZldGljYTsgZm9udC1z
aXplOiAxMnB4OyBmb250LXN0eWxlOiBub3JtYWw7IGZvbnQtdmFyaWFudC1jYXBzOiBub3JtYWw7
IGZvbnQtd2VpZ2h0OiBub3JtYWw7IGxldHRlci1zcGFjaW5nOiBub3JtYWw7IHRleHQtYWxpZ246
IHN0YXJ0OyB0ZXh0LWluZGVudDogMHB4OyB0ZXh0LXRyYW5zZm9ybTogbm9uZTsgd2hpdGUtc3Bh
Y2U6IG5vcm1hbDsgd29yZC1zcGFjaW5nOiAwcHg7IC13ZWJraXQtdGV4dC1zdHJva2Utd2lkdGg6
IDBweDsgdGV4dC1kZWNvcmF0aW9uOiBub25lOyBmbG9hdDogbm9uZTsgZGlzcGxheTogaW5saW5l
ICFpbXBvcnRhbnQ7IiBjbGFzcz0iIj5PbGVrc2FuZHI8L3NwYW4+PGJyIHN0eWxlPSJjYXJldC1j
b2xvcjogcmdiKDAsIDAsIDApOyBmb250LWZhbWlseTogSGVsdmV0aWNhOyBmb250LXNpemU6IDEy
cHg7IGZvbnQtc3R5bGU6IG5vcm1hbDsgZm9udC12YXJpYW50LWNhcHM6IG5vcm1hbDsgZm9udC13
ZWlnaHQ6IG5vcm1hbDsgbGV0dGVyLXNwYWNpbmc6IG5vcm1hbDsgdGV4dC1hbGlnbjogc3RhcnQ7
IHRleHQtaW5kZW50OiAwcHg7IHRleHQtdHJhbnNmb3JtOiBub25lOyB3aGl0ZS1zcGFjZTogbm9y
bWFsOyB3b3JkLXNwYWNpbmc6IDBweDsgLXdlYmtpdC10ZXh0LXN0cm9rZS13aWR0aDogMHB4OyB0
ZXh0LWRlY29yYXRpb246IG5vbmU7IiBjbGFzcz0iIj4NCjxiciBzdHlsZT0iY2FyZXQtY29sb3I6
IHJnYigwLCAwLCAwKTsgZm9udC1mYW1pbHk6IEhlbHZldGljYTsgZm9udC1zaXplOiAxMnB4OyBm
b250LXN0eWxlOiBub3JtYWw7IGZvbnQtdmFyaWFudC1jYXBzOiBub3JtYWw7IGZvbnQtd2VpZ2h0
OiBub3JtYWw7IGxldHRlci1zcGFjaW5nOiBub3JtYWw7IHRleHQtYWxpZ246IHN0YXJ0OyB0ZXh0
LWluZGVudDogMHB4OyB0ZXh0LXRyYW5zZm9ybTogbm9uZTsgd2hpdGUtc3BhY2U6IG5vcm1hbDsg
d29yZC1zcGFjaW5nOiAwcHg7IC13ZWJraXQtdGV4dC1zdHJva2Utd2lkdGg6IDBweDsgdGV4dC1k
ZWNvcmF0aW9uOiBub25lOyIgY2xhc3M9IiI+DQo8YmxvY2txdW90ZSB0eXBlPSJjaXRlIiBzdHls
ZT0iZm9udC1mYW1pbHk6IEhlbHZldGljYTsgZm9udC1zaXplOiAxMnB4OyBmb250LXN0eWxlOiBu
b3JtYWw7IGZvbnQtdmFyaWFudC1jYXBzOiBub3JtYWw7IGZvbnQtd2VpZ2h0OiBub3JtYWw7IGxl
dHRlci1zcGFjaW5nOiBub3JtYWw7IG9ycGhhbnM6IGF1dG87IHRleHQtYWxpZ246IHN0YXJ0OyB0
ZXh0LWluZGVudDogMHB4OyB0ZXh0LXRyYW5zZm9ybTogbm9uZTsgd2hpdGUtc3BhY2U6IG5vcm1h
bDsgd2lkb3dzOiBhdXRvOyB3b3JkLXNwYWNpbmc6IDBweDsgLXdlYmtpdC10ZXh0LXNpemUtYWRq
dXN0OiBhdXRvOyAtd2Via2l0LXRleHQtc3Ryb2tlLXdpZHRoOiAwcHg7IHRleHQtZGVjb3JhdGlv
bjogbm9uZTsiIGNsYXNzPSIiPg0KPGJsb2NrcXVvdGUgdHlwZT0iY2l0ZSIgY2xhc3M9IiI+DQo8
YmxvY2txdW90ZSB0eXBlPSJjaXRlIiBjbGFzcz0iIj4mbmJzcDthbmQgYWRkPGJyIGNsYXNzPSIi
Pg0KY29uZmlndXJhdGlvbiB0byBoYXZlIGFjY2VzcyB0byB0aGUgZ3Vlc3QgbWVtb3J5IHdpdGgg
dGhlIHByb3BlciBhZGRyZXNzIHRyYW5zbGF0aW9uIHNvIHRoYXQgdGhlIGRldmljZSBjYW4gZG8g
RE1BIG9wZXJhdGlvbnMgZnJvbSBhbmQgdG8gdGhlIGd1ZXN0IG1lbW9yeSBvbmx5LjxiciBjbGFz
cz0iIj4NCjxiciBjbGFzcz0iIj4NCiMgTVNJL01TSS1YIHN1cHBvcnQ6PGJyIGNsYXNzPSIiPg0K
Tm90IGltcGxlbWVudCBhbmQgdGVzdGVkIGFzIG9mIG5vdy48YnIgY2xhc3M9IiI+DQo8YnIgY2xh
c3M9IiI+DQojIElUUyBzdXBwb3J0OjxiciBjbGFzcz0iIj4NCk5vdCBpbXBsZW1lbnQgYW5kIHRl
c3RlZCBhcyBvZiBub3cuPGJyIGNsYXNzPSIiPg0KPC9ibG9ja3F1b3RlPg0KPC9ibG9ja3F1b3Rl
Pg0KPC9ibG9ja3F1b3RlPg0KPHNwYW4gc3R5bGU9ImNhcmV0LWNvbG9yOiByZ2IoMCwgMCwgMCk7
IGZvbnQtZmFtaWx5OiBIZWx2ZXRpY2E7IGZvbnQtc2l6ZTogMTJweDsgZm9udC1zdHlsZTogbm9y
bWFsOyBmb250LXZhcmlhbnQtY2Fwczogbm9ybWFsOyBmb250LXdlaWdodDogbm9ybWFsOyBsZXR0
ZXItc3BhY2luZzogbm9ybWFsOyB0ZXh0LWFsaWduOiBzdGFydDsgdGV4dC1pbmRlbnQ6IDBweDsg
dGV4dC10cmFuc2Zvcm06IG5vbmU7IHdoaXRlLXNwYWNlOiBub3JtYWw7IHdvcmQtc3BhY2luZzog
MHB4OyAtd2Via2l0LXRleHQtc3Ryb2tlLXdpZHRoOiAwcHg7IHRleHQtZGVjb3JhdGlvbjogbm9u
ZTsgZmxvYXQ6IG5vbmU7IGRpc3BsYXk6IGlubGluZSAhaW1wb3J0YW50OyIgY2xhc3M9IiI+WzFd
PHNwYW4gY2xhc3M9IkFwcGxlLWNvbnZlcnRlZC1zcGFjZSI+Jm5ic3A7PC9zcGFuPjwvc3Bhbj48
YSBocmVmPSJodHRwczovL2xpc3RzLnhlbi5vcmcvYXJjaGl2ZXMvaHRtbC94ZW4tZGV2ZWwvMjAx
Ny0wNS9tc2cwMjY3NC5odG1sIiBzdHlsZT0iZm9udC1mYW1pbHk6IEhlbHZldGljYTsgZm9udC1z
aXplOiAxMnB4OyBmb250LXN0eWxlOiBub3JtYWw7IGZvbnQtdmFyaWFudC1jYXBzOiBub3JtYWw7
IGZvbnQtd2VpZ2h0OiBub3JtYWw7IGxldHRlci1zcGFjaW5nOiBub3JtYWw7IG9ycGhhbnM6IGF1
dG87IHRleHQtYWxpZ246IHN0YXJ0OyB0ZXh0LWluZGVudDogMHB4OyB0ZXh0LXRyYW5zZm9ybTog
bm9uZTsgd2hpdGUtc3BhY2U6IG5vcm1hbDsgd2lkb3dzOiBhdXRvOyB3b3JkLXNwYWNpbmc6IDBw
eDsgLXdlYmtpdC10ZXh0LXNpemUtYWRqdXN0OiBhdXRvOyAtd2Via2l0LXRleHQtc3Ryb2tlLXdp
ZHRoOiAwcHg7IiBjbGFzcz0iIj5odHRwczovL2xpc3RzLnhlbi5vcmcvYXJjaGl2ZXMvaHRtbC94
ZW4tZGV2ZWwvMjAxNy0wNS9tc2cwMjY3NC5odG1sPC9hPjwvZGl2Pg0KPC9ibG9ja3F1b3RlPg0K
PC9kaXY+DQo8YnIgY2xhc3M9IiI+DQo8L2JvZHk+DQo8L2h0bWw+DQo=

--_000_E4755A88798C42FF8DADDC4FD3C7B571armcom_--


From xen-devel-bounces@lists.xenproject.org Fri Jul 17 12:55:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jul 2020 12:55:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jwPu2-00069V-2M; Fri, 17 Jul 2020 12:55:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=1N/p=A4=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jwPu0-00069K-TA
 for xen-devel@lists.xenproject.org; Fri, 17 Jul 2020 12:55:48 +0000
X-Inumbo-ID: d4e27358-c82c-11ea-bb8b-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d4e27358-c82c-11ea-bb8b-bc764e2007e4;
 Fri, 17 Jul 2020 12:55:46 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 728EAAD17;
 Fri, 17 Jul 2020 12:55:50 +0000 (UTC)
Subject: Re: RFC: PCI devices passthrough on Arm design proposal
To: Rahul Singh <Rahul.Singh@arm.com>
References: <3F6E40FB-79C5-4AE8-81CA-E16CA37BB298@arm.com>
 <BD475825-10F6-4538-8294-931E370A602C@arm.com>
 <E9CBAA57-5EF3-47F9-8A40-F5D7816DB2A4@arm.com>
 <alpine.DEB.2.21.2007161258160.3886@sstabellini-ThinkPad-T480s>
 <BB4645DF-A040-4912-AC35-C98105917FD5@arm.com>
 <f69f86dc-7a8c-4c25-c059-0e391de51d7f@epam.com>
 <E4755A88-798C-42FF-8DAD-DC4FD3C7B571@arm.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <0047a1f0-0a0f-f11d-ddba-3fdb877c3eb4@suse.com>
Date: Fri, 17 Jul 2020 14:55:48 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <E4755A88-798C-42FF-8DAD-DC4FD3C7B571@arm.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien.grall.oss@gmail.com>,
 Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 nd <nd@arm.com>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 17.07.2020 14:46, Rahul Singh wrote:
> Sorry for the formatting issue in my previous mail. Replying again so that the comment history is not missed.

I'm sorry, but from a plain text view I cannot determine which parts
are your replies (in fact all nesting of prior replies is lost).
Please can you arrange for suitable reply quoting in your mail
client, using plain text mails? (Leaving all prior text in place
below, for you to see what it is that at least some people got to
see.)

Jan

> On 17 Jul 2020, at 8:41 am, Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com<mailto:Oleksandr_Andrushchenko@epam.com>> wrote:
> 
> 
> On 7/17/20 9:53 AM, Bertrand Marquis wrote:
> 
> On 16 Jul 2020, at 22:51, Stefano Stabellini <sstabellini@kernel.org<mailto:sstabellini@kernel.org>> wrote:
> 
> On Thu, 16 Jul 2020, Rahul Singh wrote:
> Hello All,
> 
> Following up on the discussion on PCI passthrough support on Arm that we had at the Xen summit, we are submitting a request for comments and a design proposal for PCI passthrough support on Arm. Feel free to give your feedback.
> 
> The following describes the high-level design proposal for PCI passthrough support and how the different modules within the system interact with each other to assign a particular PCI device to the guest.
> I think the proposal is good and I only have a couple of thoughts to
> share below.
> 
> 
> # Title:
> 
> PCI devices passthrough on Arm design proposal
> 
> # Problem statement:
> 
> On Arm there is no support for assigning a PCI device to a guest. The PCI device passthrough capability allows guests to have full access to some PCI devices: passthrough allows PCI devices to appear and behave as if they were physically attached to the guest operating system, while providing full isolation of the PCI devices.
> 
> The goal of this work is to also support Dom0less configurations, so the PCI backend/frontend drivers used on x86 shall not be used on Arm. It will use the existing VPCI concept from x86 and implement the virtual PCI bus through I/O emulation, such that only assigned devices are visible to the guest and the guest can use a standard PCI driver.
> 
> Only Dom0 and Xen will have access to the real PCI bus,
> 
> So, in this case how is the access serialization going to work?
> 
> I mean that if both Xen and Dom0 are about to access the bus at the same time?
> 
> There was a discussion on the same before [1] and IMO it was not decided on
> 
> how to deal with that.
> 
> Dom0 also accesses the real PCI hardware via the MMIO config space trap in Xen. We will take care of serializing config space accesses with a lock in Xen.
> 
> The guest will have direct access to the assigned device itself: IOMEM regions will be mapped into the guest and interrupts will be redirected to the guest. The SMMU has to be configured correctly for DMA transactions to work.
> 
> ## Current state: Draft version
> 
> # Proposer(s): Rahul Singh, Bertrand Marquis
> 
> # Proposal:
> 
> This section describes the different subsystems needed to support PCI device passthrough and how these subsystems interact with each other to assign a device to the guest.
> 
> # PCI Terminology:
> 
> Host Bridge: The host bridge allows the PCI devices to talk to the rest of the system.
> ECAM: ECAM (Enhanced Configuration Access Mechanism) is a mechanism developed to allow access to the PCIe configuration space. The space available per function is 4KB.
> 
> # Discovering PCI Host Bridge in XEN:
> 
> In order to support PCI passthrough, Xen should be aware of all the PCI host bridges available on the system and should be able to access the PCI configuration space. Only ECAM configuration access is supported as of now. During boot, Xen will read the PCI device tree node’s “reg” property and map the ECAM space into Xen memory using the “ioremap_nocache()” function.
> 
> If there is more than one segment on the system, Xen will read the “linux,pci-domain” property from the device tree node and configure the host bridge segment number accordingly. All the PCI device tree nodes should have the “linux,pci-domain” property so that there are no conflicts. During hardware domain boot, Linux will also use the same “linux,pci-domain” property to assign the domain number to the host bridge.
> 
> When Dom0 tries to access the PCI config space of a device, Xen will find the corresponding host bridge based on the segment number and access the config space assigned to that bridge.
> 
> Limitation:
> * Only PCI ECAM configuration space access is supported.
> 
> This is really the limitation which we have to think of now as there are lots of
> 
> HW w/o ECAM support and not providing a way to use PCI(e) on those boards
> 
> would render them useless wrt PCI. I don't suggest to have some real code for
> 
> that, but I would suggest we design some interfaces from day 0.
> 
> At the same time I do understand that supporting non-ECAM bridges is a pain
> 
> Adding any type of host bridge is supported; we put the ECAM-specific code in an identified source file so that other types can be implemented. As of now we have implemented ECAM support, and we are currently implementing support for N1SDP, which requires specific quirks that will be handled in a separate source file.
> 
> 
> * Device tree binding is supported as of now, ACPI is not supported.
> * Need to port the PCI host bridge access code to Xen to access the configuration space (the generic one works, but lots of platforms will require some specific code or quirks).
> 
> # Discovering PCI devices:
> 
> PCI/PCIe enumeration is the process of detecting the devices connected to a host. It is the responsibility of the hardware domain or the boot firmware to do the PCI enumeration and configure
> Great, so we assume here that the bootloader can do the enumeration and configuration...
>  the BARs, PCI capabilities, and MSI/MSI-X configuration.
> 
> Doing the configuration part of PCI/PCIe enumeration in Xen is not feasible, as it would require a lot of code inside Xen and hence a lot of maintenance. On top of this, many platforms require quirks in that part of the PCI code, which would greatly increase Xen's complexity. Once the hardware domain enumerates a device, it communicates it to Xen via the hypercall below.
> 
> #define PHYSDEVOP_pci_device_add        25
> struct physdev_pci_device_add {
>     uint16_t seg;
>     uint8_t bus;
>     uint8_t devfn;
>     uint32_t flags;
>     struct {
>         uint8_t bus;
>         uint8_t devfn;
>     } physfn;
>     /*
>      * Optional parameters array.
>      * First element ([0]) is PXM domain associated with the device
>      * (if XEN_PCI_DEV_PXM is set).
>      */
>     uint32_t optarr[XEN_FLEX_ARRAY_DIM];
> };
> 
> As the hypercall argument has the PCI segment number, Xen will access the PCI config space based on this segment number and find the host bridge corresponding to it. At this stage the host bridge is fully initialized, so there will be no issue accessing the config space.
> 
> Xen will add the PCI devices to the linked list maintained in Xen using the function pci_add_device(). Xen will be aware of all the PCI devices on the system, and all the devices will be added to the hardware domain.
> 
> Limitations:
> * When PCI devices are added to Xen, the MSI capability is not initialized inside Xen and not supported as of now.
> * The ACS capability is disabled on Arm as of now, as devices are not accessible after enabling it.
> * A Dom0less implementation will require the capability inside Xen to discover the PCI devices (without depending on Dom0 to declare them to Xen).
> I think it is fine to assume that for dom0less the "firmware" has taken
> care of setting up the BARs correctly. Starting with that assumption, it
> looks like it should be "easy" to walk the PCI topology in Xen when/if
> there is no dom0?
> Yes as we discussed during the design session, we currently think that it is the way to go.
> We are for now relying on Dom0 to get the list of PCI devices but this is definitely the strategy we would like to use to have Dom0 support.
> If this is working well, I even think we could get rid of the hypercall all together.
> ...and this is the same way of configuring if enumeration happens in the bootloader?
> 
> I do support the idea we go away from PHYSDEVOP_pci_device_add, but driver domain
> 
> just signals Xen that the enumeration is done and Xen can traverse the bus by that time.
> 
> Please also note, that there are actually 3 cases possible wrt where the enumeration and
> 
> configuration happens: boot firmware, Dom0, Xen. So, it seems we
> 
> are going to have different approaches for the first two (see my comment above on
> 
> the hypercall use in Dom0). So, walking the bus ourselves in Xen seems to be good for all
> 
> the use-cases above
> 
> 
> In that case we may have to implement a new hypercall to inform Xen that enumeration is complete so that it can then scan the devices. We could tell Xen to delay its enumeration until this hypercall is called, using a Xen command line parameter. That way, when this is not required because the firmware did the enumeration, we can properly support Dom0less.
> 
> 
> 
> 
> # Enable the existing x86 virtual PCI support for ARM:
> 
> The existing VPCI support available for x86 is adapted for Arm. When a device is added to Xen via the hypercall “PHYSDEVOP_pci_device_add”, a VPCI handler for config space accesses is added to the PCI device to emulate it.
> 
> An MMIO trap handler for the PCI ECAM space is registered in Xen so that when a guest tries to access the PCI config space, Xen will trap the access and emulate the read/write using VPCI rather than the real PCI hardware.
> Just to make it clear: Dom0 still accesses the bus directly w/o emulation, right?
> 
> No. Once Xen has done its PCI enumeration (either at boot or after a hypercall from the hardware domain), only Xen will access the physical PCI bus; everybody else will go through VPCI.
> 
> 
> Limitation:
> * No handler is registered for the MSI configuration.
> * Only legacy interrupts are supported and tested as of now; MSI is not implemented or tested.
> 
> # Assign the device to the guest:
> 
> Assigning a PCI device from the hardware domain to the guest is done using the guest config option below. When the xl tool creates the domain, PCI devices will be assigned to the guest's VPCI bus.
> pci=[ "PCI_SPEC_STRING", "PCI_SPEC_STRING", ...]
> 
> The guest will only be able to access the assigned devices and see the bridges. The guest will not be able to access or see the devices that are not assigned to it.
> Does this mean that we do not need to configure the bridges as those are exposed to the guest implicitly?
> 
> Limitation:
> * As of now all the bridges in the PCI bus are seen by the guest on the VPCI bus.
> 
> So, what happens if a guest tries to access the bridge that doesn't have the assigned
> 
> PCI device? E.g. we pass PCIe_dev0 which is behind Bridge0 and the guest also sees
> 
> Bridge1 and tries to access devices behind it during the enumeration.
> 
> Could you please clarify?
> 
> The bridges are only accessible in read-only and cannot be modified. Even though a guest would see the bridge, the VPCI will only show the assigned devices behind it. If there is no device behind that bridge assigned to the guest, the guest will see an empty bus behind that bridge.
> 
> 
> We need to come up with something similar for dom0less too. It could be
> exactly the same thing (a list of BDFs as strings as a device tree
> property) or something else if we can come up with a better idea.
> Fully agree.
> Maybe a tree topology could allow more possibilities (like giving BAR values) in the future.
> 
> # Emulated PCI device tree node in libxl:
> 
> Libxl creates a virtual PCI device tree node in the device tree to enable the guest OS to discover the virtual PCI bus during guest boot. We introduced the new config option [vpci="pci_ecam"] for guests. When this config option is enabled in a guest configuration, a PCI device tree node will be created in the guest device tree.
> 
> A new area has been reserved in the Arm guest physical map, at which the VPCI bus is declared in the device tree (reg and ranges parameters of the node). A trap handler for PCI ECAM accesses from the guest has been registered at the defined address and redirects requests to the VPCI driver in Xen.
> 
> Limitation:
> * Only one PCI device tree node is supported as of now.
> I think vpci="pci_ecam" should be optional: if pci=[ "PCI_SPEC_STRING",
> ...] is specified, then vpci="pci_ecam" is implied.
> 
> vpci="pci_ecam" is only useful one day in the future when we want to be
> able to emulate other non-ecam host bridges. For now we could even skip
> it.
> This would create a problem if xl is used to add a PCI device as we need the PCI node to be in the DTB when the guest is created.
> I agree this is not needed but removing it might create more complexity in the code.
> 
> I would suggest we have it from day 0 as there are plenty of HW available which is not ECAM.
> 
> Having vpci allows other bridges to be supported
> 
> Yes we agree.
> 
> 
> 
> Bertrand
> 
> 
> # BAR value and IOMEM mapping:
> 
> The Linux guest will do the PCI enumeration based on the area reserved for ECAM and the IOMEM ranges in the VPCI device tree node. Once a PCI device is assigned to the guest, Xen will map the guest PCI IOMEM region to the real physical IOMEM region, only for the assigned devices.
> 
> As of now we have not modified the existing VPCI code to map the guest PCI IOMEM region to the real physical IOMEM region; we used the existing guest “iomem” config option to map the region.
> For example:
>     Guest reserved IOMEM region: 0x04020000
>     Real physical IOMEM region:  0x50000000
>     IOMEM size:                  128MB
>     iomem config will be:        iomem = ["0x50000,0x8000@0x4020"]
> 
> There is no need to map the ECAM space, as Xen already has access to it; Xen will trap ECAM accesses from the guest and perform the read/write on the VPCI bus.
> 
> IOMEM access will not be trapped and the guest will directly access the IOMEM region of the assigned device via stage-2 translation.
> 
> In the same way, we mapped the assigned device's IRQs to the guest using the config option below.
> irqs= [ NUMBER, NUMBER, ...]
> 
> Limitation:
> * Need to avoid the “iomem” and “irqs” guest config options and instead map the IOMEM region and IRQs when the device is assigned to the guest via the “pci” guest config option, when xl creates the domain.
> * Emulated BAR values on the VPCI bus should reflect the IOMEM mapped address.
> * The x86 mapping code should be ported to Arm so that the stage-2 translation is adapted when the guest modifies the BAR register values (to map the address requested by the guest for a specific IOMEM region to the address actually contained in the real BAR register of the corresponding device).
> 
> # SMMU configuration for guest:
> 
> When assigning PCI devices to a guest, the SMMU configuration should be updated to remove access to the hardware domain memory
> 
> So, as the hardware domain still has access to the PCI configuration space, we
> 
> can potentially have a condition when Dom0 accesses the device. AFAIU, if we have
> 
> pci front/back then before assigning the device to the guest we unbind it from the
> 
> real driver and bind to the back. Are we going to do something similar here?
> 
> Yes, we have to unbind the driver from the hardware domain before assigning the device to the guest. Also, as soon as Xen has done its PCI enumeration (either at boot or after a hypercall from the hardware domain), only Xen will access the physical PCI bus; everybody else will go through VPCI.
> 
> - Rahul
> 
> 
> 
> Thank you,
> 
> Oleksandr
> 
>  and add
> configuration to have access to the guest memory with the proper address translation so that the device can do DMA operations from and to the guest memory only.
> 
> # MSI/MSI-X support:
> Not implemented or tested as of now.
> 
> # ITS support:
> Not implemented or tested as of now.
> [1] https://lists.xen.org/archives/html/xen-devel/2017-05/msg02674.html
> 



From xen-devel-bounces@lists.xenproject.org Fri Jul 17 13:10:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jul 2020 13:10:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jwQ8R-0007or-Df; Fri, 17 Jul 2020 13:10:43 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=1N/p=A4=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jwQ8Q-0007om-BU
 for xen-devel@lists.xenproject.org; Fri, 17 Jul 2020 13:10:42 +0000
X-Inumbo-ID: ea270a6a-c82e-11ea-8496-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ea270a6a-c82e-11ea-8496-bc764e2007e4;
 Fri, 17 Jul 2020 13:10:41 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 2D6CCB6AB;
 Fri, 17 Jul 2020 13:10:45 +0000 (UTC)
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] x86: guard against port I/O overlapping the RTC/CMOS range
Message-ID: <38c73e17-30b8-27b4-bc7c-e6ef7817fa1e@suse.com>
Date: Fri, 17 Jul 2020 15:10:43 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Since we intercept RTC/CMOS port accesses, let's do so consistently in
all cases, i.e. also for e.g. a dword access to [006E,0071]. To avoid
the risk of unintended impact on Dom0 code actually doing so (despite
the belief that none ought to exist), also extend
guest_io_{read,write}() to decompose accesses where some ports are
allowed to be directly accessed and some aren't.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/pv/emul-priv-op.c
+++ b/xen/arch/x86/pv/emul-priv-op.c
@@ -210,7 +210,7 @@ static bool admin_io_okay(unsigned int p
         return false;
 
     /* We also never permit direct access to the RTC/CMOS registers. */
-    if ( ((port & ~1) == RTC_PORT(0)) )
+    if ( port <= RTC_PORT(1) && port + bytes > RTC_PORT(0) )
         return false;
 
     return ioports_access_permitted(d, port, port + bytes - 1);
@@ -297,6 +297,17 @@ static uint32_t guest_io_read(unsigned i
             if ( pci_cfg_ok(currd, port & 3, size, NULL) )
                 sub_data = pci_conf_read(currd->arch.pci_cf8, port & 3, size);
         }
+        else if ( ioports_access_permitted(currd, port, port) )
+        {
+            if ( bytes > 1 && !(port & 1) &&
+                 ioports_access_permitted(currd, port, port + 1) )
+            {
+                sub_data = inw(port);
+                size = 2;
+            }
+            else
+                sub_data = inb(port);
+        }
 
         if ( size == 4 )
             return sub_data;
@@ -373,25 +384,31 @@ static int read_io(unsigned int port, un
     return X86EMUL_OKAY;
 }
 
+static void _guest_io_write(unsigned int port, unsigned int bytes,
+                            uint32_t data)
+{
+    switch ( bytes )
+    {
+    case 1:
+        outb((uint8_t)data, port);
+        if ( amd_acpi_c1e_quirk )
+            amd_check_disable_c1e(port, (uint8_t)data);
+        break;
+    case 2:
+        outw((uint16_t)data, port);
+        break;
+    case 4:
+        outl(data, port);
+        break;
+    }
+}
+
 static void guest_io_write(unsigned int port, unsigned int bytes,
                            uint32_t data, struct domain *currd)
 {
     if ( admin_io_okay(port, bytes, currd) )
     {
-        switch ( bytes )
-        {
-        case 1:
-            outb((uint8_t)data, port);
-            if ( amd_acpi_c1e_quirk )
-                amd_check_disable_c1e(port, (uint8_t)data);
-            break;
-        case 2:
-            outw((uint16_t)data, port);
-            break;
-        case 4:
-            outl(data, port);
-            break;
-        }
+        _guest_io_write(port, bytes, data);
         return;
     }
 
@@ -420,6 +437,13 @@ static void guest_io_write(unsigned int
             if ( pci_cfg_ok(currd, port & 3, size, &data) )
                 pci_conf_write(currd->arch.pci_cf8, port & 3, size, data);
         }
+        else if ( ioports_access_permitted(currd, port, port) )
+        {
+            if ( bytes > 1 && !(port & 1) &&
+                 ioports_access_permitted(currd, port, port + 1) )
+                size = 2;
+            _guest_io_write(port, size, data);
+        }
 
         if ( size == 4 )
             return;


From xen-devel-bounces@lists.xenproject.org Fri Jul 17 13:12:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jul 2020 13:12:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jwQAC-0007vT-Q7; Fri, 17 Jul 2020 13:12:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zdYj=A4=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1jwQAB-0007vN-4E
 for xen-devel@lists.xenproject.org; Fri, 17 Jul 2020 13:12:31 +0000
X-Inumbo-ID: 2aaebb8c-c82f-11ea-8496-bc764e2007e4
Received: from EUR05-DB8-obe.outbound.protection.outlook.com (unknown
 [40.107.20.73]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2aaebb8c-c82f-11ea-8496-bc764e2007e4;
 Fri, 17 Jul 2020 13:12:29 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com; 
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ZCTxn/F6Q8jKbtpoiMsZeapOySHgmubhSYKVzSpV1a8=;
 b=R7GESLTRGyD3ygbqPgHWqomFyrD02YroHV7OtPYfjupy/bW0Std8mxJ1ohzjNOZq2QTGnwR0qmazgdV/zX6iBAa4X7wpXErFyEQIoie/uvBpNy1YIziWWa1Q8uqgwn//PmcNNXs6BUx1DHO5eWsLBm7D5ZYfv1p52JDK5FwEItc=
Received: from MR2P264CA0129.FRAP264.PROD.OUTLOOK.COM (2603:10a6:500:30::21)
 by AM6PR08MB3077.eurprd08.prod.outlook.com (2603:10a6:209:48::29) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3174.22; Fri, 17 Jul
 2020 13:12:27 +0000
Received: from VE1EUR03FT063.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:500:30:cafe::d9) by MR2P264CA0129.outlook.office365.com
 (2603:10a6:500:30::21) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3195.17 via Frontend
 Transport; Fri, 17 Jul 2020 13:12:27 +0000
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org;
 dmarc=bestguesspass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT063.mail.protection.outlook.com (10.152.18.236) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3195.18 via Frontend Transport; Fri, 17 Jul 2020 13:12:27 +0000
Received: ("Tessian outbound 1c27ecaec3d6:v62");
 Fri, 17 Jul 2020 13:12:26 +0000
X-CheckRecipientChecked: true
X-CR-MTA-CID: f8b7ddff94ac4b3f
X-CR-MTA-TID: 64aa7808
Received: from ca23257b1d67.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 243351EE-730A-40E1-BF7C-EDF97DB0B461.1; 
 Fri, 17 Jul 2020 13:12:21 +0000
Received: from EUR01-HE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id ca23257b1d67.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 17 Jul 2020 13:12:21 +0000
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=nuxqLCemg0bFTxGczhUaeXJ62k6B+GnaIuUMoCPrhG52I++El+a6SYn+7paumOIjHQFnYi8YcuNLDl/aTOPMe6Sgt3auZcNaV+yTRkkHM9HnxxeDzCqMI2wiNTn4PWiUeBT6bJUMy+K5JkizeuQDdAT4Qyp4p/GAOpMvl+ma0wbAZFAewl5irNQYfmOPQ4UPN4RLN0XrpvzLXwoKZAHA/mOtJtKMIcz16W7I6fltwsxHlDYeB9q6j0YUDxf4oVF3n3V0teU4Ue0GH5XVrAWeJJWJ/3A0QBPzWjpfPIKbqA9WbYnXrl1XpGZjUHcNQsb/ivfKpBvoydQRXNDFTjJwzg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; 
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ZCTxn/F6Q8jKbtpoiMsZeapOySHgmubhSYKVzSpV1a8=;
 b=JtKVCw061IIS5EKZhChxOuO4pA00fdLE/ZRR4OqnJeJz3mpOudoQaDA+vbFyopYcU07Xp29oUJPa3eeuRRPU4tdO19QrJF3LulnPBlpfnqk+57epkp7vnb4tAv06aEjEJWHTat56ZbW1EQDNgOyZIneIzbOvT0CH1sqoGrfalUWApPf9siqZps37xmBSk7KMAhb5WSCoR1PhlxM1O9sPmNbKQo9SexpaReC7jiZq7FqVyTalF31sqL4tUsV/9dmhV3bQhjUrXDOF8eHk/rNIJm15dJ68KMuivmqjYO0sEGyl2orbQ8G9Bi9bWH6EcOaMnltqMwEo+/OcoCt7u7xl0Q==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com; 
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ZCTxn/F6Q8jKbtpoiMsZeapOySHgmubhSYKVzSpV1a8=;
 b=R7GESLTRGyD3ygbqPgHWqomFyrD02YroHV7OtPYfjupy/bW0Std8mxJ1ohzjNOZq2QTGnwR0qmazgdV/zX6iBAa4X7wpXErFyEQIoie/uvBpNy1YIziWWa1Q8uqgwn//PmcNNXs6BUx1DHO5eWsLBm7D5ZYfv1p52JDK5FwEItc=
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DB6PR0801MB2006.eurprd08.prod.outlook.com (2603:10a6:4:79::13) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3195.18; Fri, 17 Jul
 2020 13:12:18 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::4001:43ad:d113:46a8]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::4001:43ad:d113:46a8%5]) with mapi id 15.20.3174.028; Fri, 17 Jul 2020
 13:12:18 +0000
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
Subject: Re: RFC: PCI devices passthrough on Arm design proposal
Thread-Topic: RFC: PCI devices passthrough on Arm design proposal
Thread-Index: AQHWW4kYTVU0hTDyYEitKlUuU5vZlKkKf2uAgAACLICAAC0rgIAAqC0AgAANcgCAAFxWgA==
Date: Fri, 17 Jul 2020 13:12:18 +0000
Message-ID: <C09D50EB-2C77-447D-89B0-5903D5DD3296@arm.com>
References: <3F6E40FB-79C5-4AE8-81CA-E16CA37BB298@arm.com>
 <BD475825-10F6-4538-8294-931E370A602C@arm.com>
 <E9CBAA57-5EF3-47F9-8A40-F5D7816DB2A4@arm.com>
 <alpine.DEB.2.21.2007161258160.3886@sstabellini-ThinkPad-T480s>
 <BB4645DF-A040-4912-AC35-C98105917FD5@arm.com>
 <f69f86dc-7a8c-4c25-c059-0e391de51d7f@epam.com>
In-Reply-To: <f69f86dc-7a8c-4c25-c059-0e391de51d7f@epam.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
Authentication-Results-Original: epam.com; dkim=none (message not signed)
 header.d=none;epam.com; dmarc=none action=none header.from=arm.com;
x-originating-ip: [2a01:e0a:13:6f10:d5e3:98:5df0:fb15]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: b1250485-3f94-4688-bd34-08d82a530dc1
x-ms-traffictypediagnostic: DB6PR0801MB2006:|AM6PR08MB3077:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS: <AM6PR08MB307714D5352B95BBC341CC569D7C0@AM6PR08MB3077.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:10000;OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original: d4CiQT+IEn5zTtpGs13RbKKjpZDIACwM9oHLAQYzK5GhBDSPvLthGNJJV/FyxkcHrVDw1TML/E84Di0LE0yiVZSnB2n/z0BxtCtgQnR0Mfv9a5rxBkSTpyAh6SCgLW9ORz/faO8hJdp2yjan9PxG1zXrgEiknrVqlGmXUwl0YAPUbqKlqdW5Y8uzmmVRowssiUmN3oNr9UVR4nlOZpkW3wgfDY7ZmW6tsBuFpxbzhG9rCF5cL7re71PwOVhl7yGFPyn6vncnGlUT49zIk/3E5ROZLrFZgVAz0AvbbbuK14OQIhWlPqeRPq90S6+Qbzg48v9oGnyfxM1OgH0BOXZqkm0c0jKHE4Gx8nknnhDWBbgqOqfpmSGYXZZUlqd8Zct/z8tlBxczPNqoiqwLdsM0pA==
X-Forefront-Antispam-Report-Untrusted: CIP:255.255.255.255; CTRY:; LANG:en;
 SCL:1; SRV:; IPV:NLI; SFV:NSPM; H:DB7PR08MB3689.eurprd08.prod.outlook.com;
 PTR:; CAT:NONE; SFTY:;
 SFS:(4636009)(346002)(39860400002)(136003)(396003)(376002)(366004)(8676002)(83380400001)(6486002)(8936002)(966005)(6506007)(186003)(66556008)(66476007)(76116006)(6512007)(91956017)(64756008)(6916009)(53546011)(66446008)(478600001)(66946007)(2906002)(30864003)(2616005)(36756003)(86362001)(54906003)(71200400001)(5660300002)(316002)(4326008)(33656002);
 DIR:OUT; SFP:1101; 
x-ms-exchange-antispam-messagedata: Gg0cbe6aTisz7W5DIYzaO1BJMIDX8+a/C81UOEDG8jXTbhHb+Wbq+fv2+4Rl6YAVZIb55cdfVwdPjlZZBQOYXOKqOKPTSPqAT9Ph7BQ1ij1YxWGCr76R+b4MlAbfY9eeeMtrlBElRR93CQ0QaiXGaHmstCAfw98KJWGU3JqtrxG8hQ/NjkEejqYa1EsrGZ4MCtZuHsoz25W5vZyjTHfWTGTbxjrqM03V+M4KO8k0T8p376dkZ4uNhSun7tlSnscqsPsH/6K8/ibD+VOT0weirWVmwfc8I7qNC9sld4QeueLOOeRaWXDzP4K1UDZ06qngGsTXYiLhTPXtFUxpI31sytkCfV3l8n2h1W4BhTvSX6uustTlMKwSNHTyQNXvxU//ktfGI+VFGIDWTsbNckekrBqtbmasbyzYARGPVB+cL+/aqQ91R5ARPQWMCgCWFVIilVhmO+thVsGjo6Sb+KW9E3H5a+tXCUWzF9saWGyNjiY066RFd6NNmhKOoAncD0Tyoacn9uvgc+kKYE1T2QZa5pnnmNGxN2qbptOB5iTRwHo=
Content-Type: text/plain; charset="utf-8"
Content-ID: <A7DE935EE9BBC643A1E0E126D9518661@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB6PR0801MB2006
Original-Authentication-Results: epam.com; dkim=none (message not signed)
 header.d=none;epam.com; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped: VE1EUR03FT063.eop-EUR03.prod.protection.outlook.com
X-Forefront-Antispam-Report: CIP:63.35.35.123; CTRY:IE; LANG:en; SCL:1; SRV:;
 IPV:CAL; SFV:NSPM; H:64aa7808-outbound-1.mta.getcheckrecipient.com;
 PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com; CAT:NONE; SFTY:;
 SFS:(4636009)(396003)(39860400002)(376002)(136003)(346002)(46966005)(8676002)(54906003)(316002)(8936002)(36906005)(86362001)(356005)(53546011)(81166007)(26005)(36756003)(82740400003)(186003)(2906002)(6506007)(82310400002)(30864003)(70206006)(336012)(47076004)(2616005)(70586007)(83380400001)(4326008)(107886003)(6862004)(478600001)(5660300002)(966005)(6512007)(6486002)(33656002);
 DIR:OUT; SFP:1101; 
X-MS-Office365-Filtering-Correlation-Id-Prvs: c6860fe2-19d0-439b-bf3c-08d82a53088b
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: N1dzo1JWsvpyMAlNGSbmG08ZD4c6YP/noNwVTPD7K6k//UyhALbGwPGYn5J2zMWQylg+6unmgW+BIqEz3SvgxqkHltn5uaNMY6sNCBjmnMZkVVFYvpy4tCgnsraS2YSOD2VTuVCnBgiXGlGhTUj+cWnDBX+BUsz3eZukLkihHOAvv91alv9+fTEZYXTaN7fLBFt4kpQI8sJ0gdhFt1lqZuUXuBXpuJhGs06FjYIZtvlqvIkP9pF9+wkqokfMGYPrGvioSdQ6xMyzy12sfPSzfi0iDy455GYdcYG7Tx+zFExwkpaGBT9Ir/+mVEIiCjZBxr84W/mENx0miNPXSsS7XjSet947+4iY7c6s0CO4x1Tvjdy/Rt2xzovSoSUKLPDX/wqwjujCzQ+Fki9PXdFiiR51juwoubXQAd7HPzqmb9lDU4K2N1CfHjGgA6SulTCaEJIRvG1hkH+4UQBa8GUDN4qsgXuXXgYQjjG+R3uk0y4=
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Jul 2020 13:12:27.2828 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: b1250485-3f94-4688-bd34-08d82a530dc1
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d; Ip=[63.35.35.123];
 Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource: VE1EUR03FT063.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM6PR08MB3077
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien.grall.oss@gmail.com>, Rahul Singh <Rahul.Singh@arm.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 nd <nd@arm.com>, =?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

SGksDQoNCkkgd2lsbCByZXBseSBmb3IgUmFodWwgdW50aWwgaGUgZ2V0cyBoaXMgbWFpbCBjbGll
bnQgZml4ZWQuDQoNCj4gT24gMTcgSnVsIDIwMjAsIGF0IDA5OjQxLCBPbGVrc2FuZHIgQW5kcnVz
aGNoZW5rbyA8T2xla3NhbmRyX0FuZHJ1c2hjaGVua29AZXBhbS5jb20+IHdyb3RlOg0KPiANCj4g
DQo+IE9uIDcvMTcvMjAgOTo1MyBBTSwgQmVydHJhbmQgTWFycXVpcyB3cm90ZToNCj4+IA0KPj4+
IE9uIDE2IEp1bCAyMDIwLCBhdCAyMjo1MSwgU3RlZmFubyBTdGFiZWxsaW5pIDxzc3RhYmVsbGlu
aUBrZXJuZWwub3JnPiB3cm90ZToNCj4+PiANCj4+PiBPbiBUaHUsIDE2IEp1bCAyMDIwLCBSYWh1
bCBTaW5naCB3cm90ZToNCj4+Pj4gSGVsbG8gQWxsLA0KPj4+PiANCj4+Pj4gRm9sbG93aW5nIHVw
IG9uIGRpc2N1c3Npb24gb24gUENJIFBhc3N0aHJvdWdoIHN1cHBvcnQgb24gQVJNIHRoYXQgd2Ug
aGFkIGF0IHRoZSBYRU4gc3VtbWl0LCB3ZSBhcmUgc3VibWl0dGluZyBhIFJldmlldyBGb3IgQ29t
bWVudCBhbmQgYSBkZXNpZ24gcHJvcG9zYWwgZm9yIFBDSSBwYXNzdGhyb3VnaCBzdXBwb3J0IG9u
IEFSTS4gRmVlbCBmcmVlIHRvIGdpdmUgeW91ciBmZWVkYmFjay4NCj4+Pj4gDQo+Pj4+IFRoZSBm
b2xsb3dpbmdzIGRlc2NyaWJlIHRoZSBoaWdoLWxldmVsIGRlc2lnbiBwcm9wb3NhbCBvZiB0aGUg
UENJIHBhc3N0aHJvdWdoIHN1cHBvcnQgYW5kIGhvdyB0aGUgZGlmZmVyZW50IG1vZHVsZXMgd2l0
aGluIHRoZSBzeXN0ZW0gaW50ZXJhY3RzIHdpdGggZWFjaCBvdGhlciB0byBhc3NpZ24gYSBwYXJ0
aWN1bGFyIFBDSSBkZXZpY2UgdG8gdGhlIGd1ZXN0Lg0KPj4+IEkgdGhpbmsgdGhlIHByb3Bvc2Fs
IGlzIGdvb2QgYW5kIEkgb25seSBoYXZlIGEgY291cGxlIG9mIHRob3VnaHRzIHRvDQo+Pj4gc2hh
cmUgYmVsb3cuDQo+Pj4gDQo+Pj4gDQo+Pj4+ICMgVGl0bGU6DQo+Pj4+IA0KPj4+PiBQQ0kgZGV2
aWNlcyBwYXNzdGhyb3VnaCBvbiBBcm0gZGVzaWduIHByb3Bvc2FsDQo+Pj4+IA0KPj4+PiAjIFBy
b2JsZW0gc3RhdGVtZW50Og0KPj4+PiANCj4+Pj4gT24gQVJNIHRoZXJlIGluIG5vIHN1cHBvcnQg
dG8gYXNzaWduIGEgUENJIGRldmljZSB0byBhIGd1ZXN0LiBQQ0kgZGV2aWNlIHBhc3N0aHJvdWdo
IGNhcGFiaWxpdHkgYWxsb3dzIGd1ZXN0cyB0byBoYXZlIGZ1bGwgYWNjZXNzIHRvIHNvbWUgUENJ
IGRldmljZXMuIFBDSSBkZXZpY2UgcGFzc3Rocm91Z2ggYWxsb3dzIFBDSSBkZXZpY2VzIHRvIGFw
cGVhciBhbmQgYmVoYXZlIGFzIGlmIHRoZXkgd2VyZSBwaHlzaWNhbGx5IGF0dGFjaGVkIHRvIHRo
ZSBndWVzdCBvcGVyYXRpbmcgc3lzdGVtIGFuZCBwcm92aWRlIGZ1bGwgaXNvbGF0aW9uIG9mIHRo
ZSBQQ0kgZGV2aWNlcy4NCj4+Pj4gDQo+Pj4+IEdvYWwgb2YgdGhpcyB3b3JrIGlzIHRvIGFsc28g
c3VwcG9ydCBEb20wTGVzcyBjb25maWd1cmF0aW9uIHNvIHRoZSBQQ0kgYmFja2VuZC9mcm9udGVu
ZCBkcml2ZXJzIHVzZWQgb24geDg2IHNoYWxsIG5vdCBiZSB1c2VkIG9uIEFybS4gSXQgd2lsbCB1
c2UgdGhlIGV4aXN0aW5nIFZQQ0kgY29uY2VwdCBmcm9tIFg4NiBhbmQgaW1wbGVtZW50IHRoZSB2
aXJ0dWFsIFBDSSBidXMgdGhyb3VnaCBJTyBlbXVsYXRpb27igIsgc3VjaCB0aGF0IG9ubHkgYXNz
aWduZWQgZGV2aWNlcyBhcmUgdmlzaWJsZeKAiyB0byB0aGUgZ3Vlc3QgYW5kIGd1ZXN0IGNhbiB1
c2UgdGhlIHN0YW5kYXJkIFBDSSBkcml2ZXIuDQo+Pj4+IA0KPj4+PiBPbmx5IERvbTAgYW5kIFhl
biB3aWxsIGhhdmUgYWNjZXNzIHRvIHRoZSByZWFsIFBDSSBidXMsDQo+IA0KPiBTbywgaW4gdGhp
cyBjYXNlIGhvdyBpcyB0aGUgYWNjZXNzIHNlcmlhbGl6YXRpb24gZ29pbmcgdG8gd29yaz8NCj4g
DQo+IEkgbWVhbiB0aGF0IGlmIGJvdGggWGVuIGFuZCBEb20wIGFyZSBhYm91dCB0byBhY2Nlc3Mg
dGhlIGJ1cyBhdCB0aGUgc2FtZSB0aW1lPw0KPiANCj4gVGhlcmUgd2FzIGEgZGlzY3Vzc2lvbiBv
biB0aGUgc2FtZSBiZWZvcmUgWzFdIGFuZCBJTU8gaXQgd2FzIG5vdCBkZWNpZGVkIG9uDQo+IA0K
PiBob3cgdG8gZGVhbCB3aXRoIHRoYXQuDQoNCkRPTTAgYWxzbyBhY2Nlc3MgdGhlIHJlYWwgUENJ
IGhhcmR3YXJlIHZpYSBNTUlPIGNvbmZpZyBzcGFjZSB0cmFwIGluIFhFTi4gV2Ugd2lsbCB0YWtl
IGNhcmUgb2YgbG9ja2luZyBhY2Nlc3MgdG8gdGhlIGNvbmZpZyBzcGFjZSBpbiBYRU4uIA0KDQo+
IA0KPj4+PiDigIsgZ3Vlc3Qgd2lsbCBoYXZlIGEgZGlyZWN0IGFjY2VzcyB0byB0aGUgYXNzaWdu
ZWQgZGV2aWNlIGl0c2VsZuKAiy4gSU9NRU0gbWVtb3J5IHdpbGwgYmUgbWFwcGVkIHRvIHRoZSBn
dWVzdCDigIthbmQgaW50ZXJydXB0IHdpbGwgYmUgcmVkaXJlY3RlZCB0byB0aGUgZ3Vlc3QuIFNN
TVUgaGFzIHRvIGJlIGNvbmZpZ3VyZWQgY29ycmVjdGx5IHRvIGhhdmUgRE1BIHRyYW5zYWN0aW9u
Lg0KPj4+PiANCj4+Pj4gIyMgQ3VycmVudCBzdGF0ZTrigK9EcmFmdCB2ZXJzaW9uDQo+Pj4+IA0K
Pj4+PiAjIFByb3Bvc2VyKHMpOiBSYWh1bCBTaW5naCwgQmVydHJhbmQgTWFycXVpcw0KPj4+PiAN
Cj4+Pj4gIyBQcm9wb3NhbDoNCj4+Pj4gDQo+Pj4+IFRoaXMgc2VjdGlvbiB3aWxsIGRlc2NyaWJl
IHRoZSBkaWZmZXJlbnQgc3Vic3lzdGVtIHRvIHN1cHBvcnQgdGhlIFBDSSBkZXZpY2UgcGFzc3Ro
cm91Z2ggYW5kIGhvdyB0aGVzZSBzdWJzeXN0ZW1zIGludGVyYWN0IHdpdGggZWFjaCBvdGhlciB0
byBhc3NpZ24gYSBkZXZpY2UgdG8gdGhlIGd1ZXN0Lg0KPj4+PiANCj4+Pj4gIyBQQ0kgVGVybWlu
b2xvZ3k6DQo+Pj4+IA0KPj4+PiBIb3N0IEJyaWRnZTogSG9zdCBicmlkZ2UgYWxsb3dzIHRoZSBQ
Q0kgZGV2aWNlcyB0byB0YWxrIHRvIHRoZSByZXN0IG9mIHRoZSBjb21wdXRlci4NCj4+Pj4gRUNB
TTogRUNBTSAoRW5oYW5jZWQgQ29uZmlndXJhdGlvbiBBY2Nlc3MgTWVjaGFuaXNtKSBpcyBhIG1l
Y2hhbmlzbSBkZXZlbG9wZWQgdG8gYWxsb3cgUENJZSB0byBhY2Nlc3MgY29uZmlndXJhdGlvbiBz
cGFjZS4gVGhlIHNwYWNlIGF2YWlsYWJsZSBwZXIgZnVuY3Rpb24gaXMgNEtCLg0KPj4+PiANCj4+
Pj4gIyBEaXNjb3ZlcmluZyBQQ0kgSG9zdCBCcmlkZ2UgaW4gWEVOOg0KPj4+PiANCj4+Pj4gSW4g
b3JkZXIgdG8gc3VwcG9ydCB0aGUgUENJIHBhc3N0aHJvdWdoIFhFTiBzaG91bGQgYmUgYXdhcmUg
b2YgYWxsIHRoZSBQQ0kgaG9zdCBicmlkZ2VzIGF2YWlsYWJsZSBvbiB0aGUgc3lzdGVtIGFuZCBz
aG91bGQgYmUgYWJsZSB0byBhY2Nlc3MgdGhlIFBDSSBjb25maWd1cmF0aW9uIHNwYWNlLiBFQ0FN
IGNvbmZpZ3VyYXRpb24gYWNjZXNzIGlzIHN1cHBvcnRlZCBhcyBvZiBub3cuIFhFTiBkdXJpbmcg
Ym9vdCB3aWxsIHJlYWQgdGhlIFBDSSBkZXZpY2UgdHJlZSBub2RlIOKAnHJlZ+KAnSBwcm9wZXJ0
eSBhbmQgd2lsbCBtYXAgdGhlIEVDQU0gc3BhY2UgdG8gdGhlIFhFTiBtZW1vcnkgdXNpbmcgdGhl
IOKAnGlvcmVtYXBfbm9jYWNoZSAoKeKAnSBmdW5jdGlvbi4NCj4+Pj4gDQo+Pj4+IElmIHRoZXJl
IGFyZSBtb3JlIHRoYW4gb25lIHNlZ21lbnQgb24gdGhlIHN5c3RlbSwgWEVOIHdpbGwgcmVhZCB0
aGUg4oCcbGludXgsIHBjaS1kb21haW7igJ0gcHJvcGVydHkgZnJvbSB0aGUgZGV2aWNlIHRyZWUg
bm9kZSBhbmQgY29uZmlndXJlIHRoZSBob3N0IGJyaWRnZSBzZWdtZW50IG51bWJlciBhY2NvcmRp
bmdseS4gQWxsIHRoZSBQQ0kgZGV2aWNlIHRyZWUgbm9kZXMgc2hvdWxkIGhhdmUgdGhlIOKAnGxp
bnV4LHBjaS1kb21haW7igJ0gcHJvcGVydHkgc28gdGhhdCB0aGVyZSB3aWxsIGJlIG5vIGNvbmZs
aWN0cy4gRHVyaW5nIGhhcmR3YXJlIGRvbWFpbiBib290IExpbnV4IHdpbGwgYWxzbyB1c2UgdGhl
IHNhbWUg4oCcbGludXgscGNpLWRvbWFpbuKAnSBwcm9wZXJ0eSBhbmQgYXNzaWduIHRoZSBkb21h
aW4gbnVtYmVyIHRvIHRoZSBob3N0IGJyaWRnZS4NCj4+Pj4gDQo+Pj4+IFdoZW4gRG9tMCB0cmll
cyB0byBhY2Nlc3MgdGhlIFBDSSBjb25maWcgc3BhY2Ugb2YgdGhlIGRldmljZSwgWEVOIHdpbGwg
ZmluZCB0aGUgY29ycmVzcG9uZGluZyBob3N0IGJyaWRnZSBiYXNlZCBvbiBzZWdtZW50IG51bWJl
ciBhbmQgYWNjZXNzIHRoZSBjb3JyZXNwb25kaW5nIGNvbmZpZyBzcGFjZSBhc3NpZ25lZCB0byB0
aGF0IGJyaWRnZS4NCj4+Pj4gDQo+Pj4+IExpbWl0YXRpb246DQo+Pj4+ICogT25seSBQQ0kgRUNB
TSBjb25maWd1cmF0aW9uIHNwYWNlIGFjY2VzcyBpcyBzdXBwb3J0ZWQuDQo+IA0KPiBUaGlzIGlz
IHJlYWxseSB0aGUgbGltaXRhdGlvbiB3aGljaCB3ZSBoYXZlIHRvIHRoaW5rIG9mIG5vdyBhcyB0
aGVyZSBhcmUgbG90cyBvZg0KPiANCj4gSFcgdy9vIEVDQU0gc3VwcG9ydCBhbmQgbm90IHByb3Zp
ZGluZyBhIHdheSB0byB1c2UgUENJKGUpIG9uIHRob3NlIGJvYXJkcw0KPiANCj4gd291bGQgcmVu
ZGVyIHRoZW0gdXNlbGVzcyB3cnQgUENJLiBJIGRvbid0IHN1Z2dlc3QgdG8gaGF2ZSBzb21lIHJl
YWwgY29kZSBmb3INCj4gDQo+IHRoYXQsIGJ1dCBJIHdvdWxkIHN1Z2dlc3Qgd2UgZGVzaWduIHNv
bWUgaW50ZXJmYWNlcyBmcm9tIGRheSAwLg0KPiANCj4gQXQgdGhlIHNhbWUgdGltZSBJIGRvIHVu
ZGVyc3RhbmQgdGhhdCBzdXBwb3J0aW5nIG5vbi1FQ0FNIGJyaWRnZXMgaXMgYSBwYWluDQoNCkFk
ZGluZyBhbnkgdHlwZSBvZiBob3N0IGJyaWRnZSBpcyBzdXBwb3J0ZWQsIHdlIGRpZCBwdXQgdGhl
IEVDQU0gc3BlY2lmaWMgY29kZSBpbiBhbiBpZGVudGlmZWQgc291cmNlIGZpbGUgc28gdGhhdCBv
dGhlciB0eXBlcyBjYW4gYmUgaW1wbGVtZW50ZWQuIEFzIG9mIG5vdyB3ZSBoYXZlIGltcGxlbWVu
dGVkIHRoZSBFQ0FNIHN1cHBvcnQgYW5kIHdlIGFyZSBpbXBsZW1lbnRpbmcgcmlnaHQgbm93IHN1
cHBvcnQgZm9yIE4xU0RQIHdoaWNoIHJlcXVpcmVzIHNwZWNpZmljIHF1aXJrcyB3aGljaCB3aWxs
IGJlIGRvbmUgaW4gYSBzZXBhcmF0ZSBzb3VyY2UgZmlsZS4NCg0KPiANCj4+Pj4gKiBEZXZpY2Ug
dHJlZSBiaW5kaW5nIGlzIHN1cHBvcnRlZCBhcyBvZiBub3csIEFDUEkgaXMgbm90IHN1cHBvcnRl
ZC4NCj4+Pj4gKiBOZWVkIHRvIHBvcnQgdGhlIFBDSSBob3N0IGJyaWRnZSBhY2Nlc3MgY29kZSB0
byBYRU4gdG8gYWNjZXNzIHRoZSBjb25maWd1cmF0aW9uIHNwYWNlIChnZW5lcmljIG9uZSB3b3Jr
cyBidXQgbG90cyBvZiBwbGF0Zm9ybXMgd2lsbCByZXF1aXJlZCBzb21lIHNwZWNpZmljIGNvZGUg
b3IgcXVpcmtzKS4NCj4+Pj4gDQo+Pj4+ICMgRGlzY292ZXJpbmcgUENJIGRldmljZXM6DQo+Pj4+
IA0KPj4+PiBQQ0ktUENJZSBlbnVtZXJhdGlvbiBpcyBhIHByb2Nlc3Mgb2YgZGV0ZWN0aW5nIGRl
dmljZXMgY29ubmVjdGVkIHRvIGl0cyBob3N0LiBJdCBpcyB0aGUgcmVzcG9uc2liaWxpdHkgb2Yg
dGhlIGhhcmR3YXJlIGRvbWFpbiBvciBib290IGZpcm13YXJlIHRvIGRvIHRoZSBQQ0kgZW51bWVy
YXRpb24gYW5kIGNvbmZpZ3VyZQ0KPiBHcmVhdCwgc28gd2UgYXNzdW1lIGhlcmUgdGhhdCB0aGUg
Ym9vdGxvYWRlciBjYW4gZG8gdGhlIGVudW1lcmF0aW9uIGFuZCBjb25maWd1cmF0aW9uLi4uDQoN
CkluIGNhc2Ugd2hlcmUgbm9ib2R5IGJlZm9yZSBYZW4gZG9lcywgd2Ugd2lsbCBuZWVkIHRoZSBo
YXJkd2FyZSBkb21haW4gdG8gZG8gaXQgYW5kIG1heWJlIGhhdmUgYW4gaHlwZXJjYWxsIHRvIHNp
Z25hbCB0byBYZW4gd2hlbiBpdCBjYW4gZGV0ZWN0IHRoZSBQQ0kgZGV2aWNlcy4NCg0KPj4+PiAg
dGhlIEJBUiwgUENJIGNhcGFiaWxpdGllcywgYW5kIE1TSS9NU0ktWCBjb25maWd1cmF0aW9uLg0K
Pj4+PiANCj4+Pj4gUENJLVBDSWUgZW51bWVyYXRpb24gaW4gWEVOIGlzIG5vdCBmZWFzaWJsZSBm
b3IgdGhlIGNvbmZpZ3VyYXRpb24gcGFydCBhcyBpdCB3b3VsZCByZXF1aXJlIGEgbG90IG9mIGNv
ZGUgaW5zaWRlIFhlbiB3aGljaCB3b3VsZCByZXF1aXJlIGEgbG90IG9mIG1haW50ZW5hbmNlLiBB
ZGRlZCB0byB0aGlzIG1hbnkgcGxhdGZvcm1zIHJlcXVpcmUgc29tZSBxdWlya3MgaW4gdGhhdCBw
YXJ0IG9mIHRoZSBQQ0kgY29kZSB3aGljaCB3b3VsZCBncmVhdGx5IGltcHJvdmUgWGVuIGNvbXBs
ZXhpdHkuIE9uY2UgaGFyZHdhcmUgZG9tYWluIGVudW1lcmF0ZXMgdGhlIGRldmljZSB0aGVuIGl0
IHdpbGwgY29tbXVuaWNhdGUgdG8gWEVOIHZpYSB0aGUgYmVsb3cgaHlwZXJjYWxsLg0KPj4+PiAN
Cj4+Pj4gI2RlZmluZSBQSFlTREVWT1BfcGNpX2RldmljZV9hZGQgICAgICAgIDI1DQo+Pj4+IHN0
cnVjdCBwaHlzZGV2X3BjaV9kZXZpY2VfYWRkIHsNCj4+Pj4gICAgdWludDE2X3Qgc2VnOw0KPj4+
PiAgICB1aW50OF90IGJ1czsNCj4+Pj4gICAgdWludDhfdCBkZXZmbjsNCj4+Pj4gICAgdWludDMy
X3QgZmxhZ3M7DQo+Pj4+ICAgIHN0cnVjdCB7DQo+Pj4+ICAgIAl1aW50OF90IGJ1czsNCj4+Pj4g
ICAgCXVpbnQ4X3QgZGV2Zm47DQo+Pj4+ICAgIH0gcGh5c2ZuOw0KPj4+PiAgICAvKg0KPj4+PiAg
ICAqIE9wdGlvbmFsIHBhcmFtZXRlcnMgYXJyYXkuDQo+Pj4+ICAgICogRmlyc3QgZWxlbWVudCAo
WzBdKSBpcyBQWE0gZG9tYWluIGFzc29jaWF0ZWQgd2l0aCB0aGUgZGV2aWNlIChpZiAqIFhFTl9Q
Q0lfREVWX1BYTSBpcyBzZXQpDQo+Pj4+ICAgICovDQo+Pj4+ICAgIHVpbnQzMl90IG9wdGFycltY
RU5fRkxFWF9BUlJBWV9ESU1dOw0KPj4+PiAgICB9Ow0KPj4+PiANCj4+Pj4gQXMgdGhlIGh5cGVy
Y2FsbCBhcmd1bWVudCBoYXMgdGhlIFBDSSBzZWdtZW50IG51bWJlciwgWEVOIHdpbGwgYWNjZXNz
IHRoZSBQQ0kgY29uZmlnIHNwYWNlIGJhc2VkIG9uIHRoaXMgc2VnbWVudCBudW1iZXIgYW5kIGZp
bmQgdGhlIGhvc3QtYnJpZGdlIGNvcnJlc3BvbmRpbmcgdG8gdGhpcyBzZWdtZW50IG51bWJlci4g
QXQgdGhpcyBzdGFnZSBob3N0IGJyaWRnZSBpcyBmdWxseSBpbml0aWFsaXplZCBzbyB0aGVyZSB3
aWxsIGJlIG5vIGlzc3VlIHRvIGFjY2VzcyB0aGUgY29uZmlnIHNwYWNlLg0KPj4+PiANCj4+Pj4g
WEVOIHdpbGwgYWRkIHRoZSBQQ0kgZGV2aWNlcyBpbiB0aGUgbGlua2VkIGxpc3QgbWFpbnRhaW4g
aW4gWEVOIHVzaW5nIHRoZSBmdW5jdGlvbiBwY2lfYWRkX2RldmljZSgpLiBYRU4gd2lsbCBiZSBh
d2FyZSBvZiBhbGwgdGhlIFBDSSBkZXZpY2VzIG9uIHRoZSBzeXN0ZW0gYW5kIGFsbCB0aGUgZGV2
aWNlIHdpbGwgYmUgYWRkZWQgdG8gdGhlIGhhcmR3YXJlIGRvbWFpbi4NCj4+Pj4gDQo+Pj4+IExp
bWl0YXRpb25zOg0KPj4+PiAqIFdoZW4gUENJIGRldmljZXMgYXJlIGFkZGVkIHRvIFhFTiwgTVNJ
IGNhcGFiaWxpdHkgaXMgbm90IGluaXRpYWxpemVkIGluc2lkZSBYRU4gYW5kIG5vdCBzdXBwb3J0
ZWQgYXMgb2Ygbm93Lg0KPj4+PiAqIEFDUyBjYXBhYmlsaXR5IGlzIGRpc2FibGUgZm9yIEFSTSBh
cyBvZiBub3cgYXMgYWZ0ZXIgZW5hYmxpbmcgaXQgZGV2aWNlcyBhcmUgbm90IGFjY2Vzc2libGUu
DQo+Pj4+ICogRG9tMExlc3MgaW1wbGVtZW50YXRpb24gd2lsbCByZXF1aXJlIHRvIGhhdmUgdGhl
IGNhcGFjaXR5IGluc2lkZSBYZW4gdG8gZGlzY292ZXIgdGhlIFBDSSBkZXZpY2VzICh3aXRob3V0
IGRlcGVuZGluZyBvbiBEb20wIHRvIGRlY2xhcmUgdGhlbSB0byBYZW4pLg0KPj4+IEkgdGhpbmsg
aXQgaXMgZmluZSB0byBhc3N1bWUgdGhhdCBmb3IgZG9tMGxlc3MgdGhlICJmaXJtd2FyZSIgaGFz
IHRha2VuDQo+Pj4gY2FyZSBvZiBzZXR0aW5nIHVwIHRoZSBCQVJzIGNvcnJlY3RseS4gU3RhcnRp
bmcgd2l0aCB0aGF0IGFzc3VtcHRpb24sIGl0DQo+Pj4gbG9va3MgbGlrZSBpdCBzaG91bGQgYmUg
ImVhc3kiIHRvIHdhbGsgdGhlIFBDSSB0b3BvbG9neSBpbiBYZW4gd2hlbi9pZg0KPj4+IHRoZXJl
IGlzIG5vIGRvbTA/DQo+PiBZZXMgYXMgd2UgZGlzY3Vzc2VkIGR1cmluZyB0aGUgZGVzaWduIHNl
c3Npb24sIHdlIGN1cnJlbnRseSB0aGluayB0aGF0IGl0IGlzIHRoZSB3YXkgdG8gZ28uDQo+PiBX
ZSBhcmUgZm9yIG5vdyByZWx5aW5nIG9uIERvbTAgdG8gZ2V0IHRoZSBsaXN0IG9mIFBDSSBkZXZp
Y2VzIGJ1dCB0aGlzIGlzIGRlZmluaXRlbHkgdGhlIHN0cmF0ZWd5IHdlIHdvdWxkIGxpa2UgdG8g
dXNlIHRvIGhhdmUgRG9tMCBzdXBwb3J0Lg0KPj4gSWYgdGhpcyBpcyB3b3JraW5nIHdlbGwsIEkg
ZXZlbiB0aGluayB3ZSBjb3VsZCBnZXQgcmlkIG9mIHRoZSBoeXBlcmNhbGwgYWxsIHRvZ2V0aGVy
Lg0KPiAuLi5hbmQgdGhpcyBpcyB0aGUgc2FtZSB3YXkgb2YgY29uZmlndXJpbmcgaWYgZW51bWVy
YXRpb24gaGFwcGVucyBpbiB0aGUgYm9vdGxvYWRlcj8NCj4gDQo+IEkgZG8gc3VwcG9ydCB0aGUg
aWRlYSB3ZSBnbyBhd2F5IGZyb20gUEhZU0RFVk9QX3BjaV9kZXZpY2VfYWRkLCBidXQgZHJpdmVy
IGRvbWFpbg0KPiANCj4ganVzdCBzaWduYWxzIFhlbiB0aGF0IHRoZSBlbnVtZXJhdGlvbiBpcyBk
b25lIGFuZCBYZW4gY2FuIHRyYXZlcnNlIHRoZSBidXMgYnkgdGhhdCB0aW1lLg0KPiANCj4gUGxl
YXNlIGFsc28gbm90ZSwgdGhhdCB0aGVyZSBhcmUgYWN0dWFsbHkgMyBjYXNlcyBwb3NzaWJsZSB3
cnQgd2hlcmUgdGhlIGVudW1lcmF0aW9uIGFuZA0KPiANCj4gY29uZmlndXJhdGlvbiBoYXBwZW5z
OiBib290IGZpcm13YXJlLCBEb20wLCBYZW4uIFNvLCBpdCBzZWVtcyB3ZQ0KPiANCj4gYXJlIGdv
aW5nIHRvIGhhdmUgZGlmZmVyZW50IGFwcHJvYWNoZXMgZm9yIHRoZSBmaXJzdCB0d28gKHNlZSBt
eSBjb21tZW50IGFib3ZlIG9uDQo+IA0KPiB0aGUgaHlwZXJjYWxsIHVzZSBpbiBEb20wKS4gU28s
IHdhbGtpbmcgdGhlIGJ1cyBvdXJzZWx2ZXMgaW4gWGVuIHNlZW1zIHRvIGJlIGdvb2QgZm9yIGFs
bA0KPiANCj4gdGhlIHVzZS1jYXNlcyBhYm92ZQ0KDQpJbiB0aGF0IGNhc2Ugd2UgbWF5IGhhdmUg
dG8gaW1wbGVtZW50IGEgbmV3IGh5cGVyY2FsbCB0byBpbmZvcm0gWEVOIHRoYXQgZW51bWVyYXRp
b24gaXMgY29tcGxldGUgYW5kIG5vdyBzY2FuIHRoZSBkZXZpY2VzLiBXZSBjb3VsZCB0ZWxsIFhl
biB0byBkZWxheSBpdHMgZW51bWVyYXRpb24gdW50aWwgdGhpcyBoeXBlcmNhbGwgaXMgY2FsbGVk
IHVzaW5nIGEgeGVuIGNvbW1hbmQgbGluZSBwYXJhbWV0ZXIuIFRoaXMgd2F5IHdoZW4gdGhpcyBp
cyBub3QgcmVxdWlyZWQgYmVjYXVzZSB0aGUgZmlybXdhcmUgZGlkIHRoZSBlbnVtZXJhdGlvbiwg
d2UgY2FuIHByb3Blcmx5IHN1cHBvcnQgRG9tMExlc3MuDQoNCj4gDQo+PiANCj4+IA0KPj4+IA0K
Pj4+PiAjIEVuYWJsZSB0aGUgZXhpc3RpbmcgeDg2IHZpcnR1YWwgUENJIHN1cHBvcnQgZm9yIEFS
TToNCj4+Pj4gDQo+Pj4+IFRoZSBleGlzdGluZyBWUENJIHN1cHBvcnQgYXZhaWxhYmxlIGZvciBY
ODYgaXMgYWRhcHRlZCBmb3IgQXJtLiBXaGVuIHRoZSBkZXZpY2UgaXMgYWRkZWQgdG8gWEVOIHZp
YSB0aGUgaHlwZXIgY2FsbCDigJxQSFlTREVWT1BfcGNpX2RldmljZV9hZGTigJ0sIFZQQ0kgaGFu
ZGxlciBmb3IgdGhlIGNvbmZpZyBzcGFjZSBhY2Nlc3MgaXMgYWRkZWQgdG8gdGhlIFBDSSBkZXZp
Y2UgdG8gZW11bGF0ZSB0aGUgUENJIGRldmljZXMuDQo+Pj4+IA0KPj4+PiBBIE1NSU8gdHJhcCBo
YW5kbGVyIGZvciB0aGUgUENJIEVDQU0gc3BhY2UgaXMgcmVnaXN0ZXJlZCBpbiBYRU4gc28gdGhh
dCB3aGVuIGd1ZXN0IGlzIHRyeWluZyB0byBhY2Nlc3MgdGhlIFBDSSBjb25maWcgc3BhY2UsIFhF
TiB3aWxsIHRyYXAgdGhlIGFjY2VzcyBhbmQgZW11bGF0ZSByZWFkL3dyaXRlIHVzaW5nIHRoZSBW
UENJIGFuZCBub3QgdGhlIHJlYWwgUENJIGhhcmR3YXJlLg0KPiBKdXN0IHRvIG1ha2UgaXQgY2xl
YXI6IERvbTAgc3RpbGwgYWNjZXNzIHRoZSBidXMgZGlyZWN0bHkgdy9vIGVtdWxhdGlvbiwgcmln
aHQ/DQoNCk9ubHkgaWYgRG9tMCBkb2VzIHRoZSBlbnVtZXJhdGlvbi4gSWYgbm90IG9yIGFmdGVy
IGl0IGhhcyBkb25lIHRoZSBlbnVtZXJhdGlvbiwgZXZlcnlib2R5IHdpbGwgdXNlIHRoZSBWUENJ
IGFuZCBvbmx5IFhlbiB3aWxsIGFjY2VzcyB0aGUgcmVhbCBidXMuIA0KDQo+Pj4+IA0KPj4+PiBM
aW1pdGF0aW9uOg0KPj4+PiAqIE5vIGhhbmRsZXIgaXMgcmVnaXN0ZXIgZm9yIHRoZSBNU0kgY29u
ZmlndXJhdGlvbi4NCj4+Pj4gKiBPbmx5IGxlZ2FjeSBpbnRlcnJ1cHQgaXMgc3VwcG9ydGVkIGFu
ZCB0ZXN0ZWQgYXMgb2Ygbm93LCBNU0kgaXMgbm90IGltcGxlbWVudGVkIGFuZCB0ZXN0ZWQuDQo+
Pj4+IA0KPj4+PiAjIEFzc2lnbiB0aGUgZGV2aWNlIHRvIHRoZSBndWVzdDoNCj4+Pj4gDQo+Pj4+
IEFzc2lnbiB0aGUgUENJIGRldmljZSBmcm9tIHRoZSBoYXJkd2FyZSBkb21haW4gdG8gdGhlIGd1
ZXN0IGlzIGRvbmUgdXNpbmcgdGhlIGJlbG93IGd1ZXN0IGNvbmZpZyBvcHRpb24uIFdoZW4geGwg
dG9vbCBjcmVhdGUgdGhlIGRvbWFpbiwgUENJIGRldmljZXMgd2lsbCBiZSBhc3NpZ25lZCB0byB0
aGUgZ3Vlc3QgVlBDSSBidXMuDQo+Pj4+IAlwY2k9WyAiUENJX1NQRUNfU1RSSU5HIiwgIlBDSV9T
UEVDX1NUUklORyIsIC4uLl0NCj4+Pj4gDQo+Pj4+IEd1ZXN0IHdpbGwgYmUgb25seSBhYmxlIHRv
IGFjY2VzcyB0aGUgYXNzaWduZWQgZGV2aWNlcyBhbmQgc2VlIHRoZSBicmlkZ2VzLiBHdWVzdCB3
aWxsIG5vdCBiZSBhYmxlIHRvIGFjY2VzcyBvciBzZWUgdGhlIGRldmljZXMgdGhhdCBhcmUgbm8g
YXNzaWduZWQgdG8gaGltLg0KPiBEb2VzIHRoaXMgbWVhbiB0aGF0IHdlIGRvIG5vdCBuZWVkIHRv
IGNvbmZpZ3VyZSB0aGUgYnJpZGdlcyBhcyB0aG9zZSBhcmUgZXhwb3NlZCB0byB0aGUgZ3Vlc3Qg
aW1wbGljaXRseT8NCg0KWWVzDQoNCj4+Pj4gDQo+Pj4+IExpbWl0YXRpb246DQo+Pj4+ICogQXMg
b2Ygbm93IGFsbCB0aGUgYnJpZGdlcyBpbiB0aGUgUENJIGJ1cyBhcmUgc2VlbiBieSB0aGUgZ3Vl
c3Qgb24gdGhlIFZQQ0kgYnVzLg0KPiANCj4gU28sIHdoYXQgaGFwcGVucyBpZiBhIGd1ZXN0IHRy
aWVzIHRvIGFjY2VzcyB0aGUgYnJpZGdlIHRoYXQgZG9lc24ndCBoYXZlIHRoZSBhc3NpZ25lZA0K
PiANCj4gUENJIGRldmljZT8gRS5nLiB3ZSBwYXNzIFBDSWVfZGV2MCB3aGljaCBpcyBiZWhpbmQg
QnJpZGdlMCBhbmQgdGhlIGd1ZXN0IGFsc28gc2Vlcw0KPiANCj4gQnJpZGdlMSBhbmQgdHJpZXMg
dG8gYWNjZXNzIGRldmljZXMgYmVoaW5kIGl0IGR1cmluZyB0aGUgZW51bWVyYXRpb24uDQo+IA0K
PiBDb3VsZCB5b3UgcGxlYXNlIGNsYXJpZnk/DQoNClRoZSBicmlkZ2VzIGFyZSBvbmx5IGFjY2Vz
c2libGUgaW4gcmVhZC1vbmx5IGFuZCBjYW5ub3QgYmUgbW9kaWZpZWQuIEV2ZW4gdGhvdWdoIGEg
Z3Vlc3Qgd291bGQgc2VlIHRoZSBicmlkZ2UsIHRoZSBWUENJIHdpbGwgb25seSBzaG93IHRoZSBh
c3NpZ25lZCBkZXZpY2VzIGJlaGluZCBpdC4gSWYgdGhlcmUgaXMgbm8gZGV2aWNlIGJlaGluZCB0
aGF0IGJyaWRnZSBhc3NpZ25lZCB0byB0aGUgZ3Vlc3QsIHRoZSBndWVzdCB3aWxsIHNlZSBhbiBl
bXB0eSBidXMgYmVoaW5kIHRoYXQgYnJpZGdlLiANCg0KPiANCj4+PiBXZSBuZWVkIHRvIGNvbWUg
dXAgd2l0aCBzb21ldGhpbmcgc2ltaWxhciBmb3IgZG9tMGxlc3MgdG9vLiBJdCBjb3VsZCBiZQ0K
Pj4+IGV4YWN0bHkgdGhlIHNhbWUgdGhpbmcgKGEgbGlzdCBvZiBCREZzIGFzIHN0cmluZ3MgYXMg
YSBkZXZpY2UgdHJlZQ0KPj4+IHByb3BlcnR5KSBvciBzb21ldGhpbmcgZWxzZSBpZiB3ZSBjYW4g
Y29tZSB1cCB3aXRoIGEgYmV0dGVyIGlkZWEuDQo+PiBGdWxseSBhZ3JlZS4NCj4+IE1heWJlIGEg
dHJlZSB0b3BvbG9neSBjb3VsZCBhbGxvdyBtb3JlIHBvc3NpYmlsaXRpZXMgKGxpa2UgZ2l2aW5n
IEJBUiB2YWx1ZXMpIGluIHRoZSBmdXR1cmUuDQo+Pj4gDQo+Pj4+ICMgRW11bGF0ZWQgUENJIGRl
dmljZSB0cmVlIG5vZGUgaW4gbGlieGw6DQo+Pj4+IA0KPj4+PiBMaWJ4bCBpcyBjcmVhdGluZyBh
IHZpcnR1YWwgUENJIGRldmljZSB0cmVlIG5vZGUgaW4gdGhlIGRldmljZSB0cmVlIHRvIGVuYWJs
ZSB0aGUgZ3Vlc3QgT1MgdG8gZGlzY292ZXIgdGhlIHZpcnR1YWwgUENJIGR1cmluZyBndWVzdCBi
b290LiBXZSBpbnRyb2R1Y2VkIHRoZSBuZXcgY29uZmlnIG9wdGlvbiBbdnBjaT0icGNpX2VjYW0i
XSBmb3IgZ3Vlc3RzLiBXaGVuIHRoaXMgY29uZmlnIG9wdGlvbiBpcyBlbmFibGVkIGluIGEgZ3Vl
c3QgY29uZmlndXJhdGlvbiwgYSBQQ0kgZGV2aWNlIHRyZWUgbm9kZSB3aWxsIGJlIGNyZWF0ZWQg
aW4gdGhlIGd1ZXN0IGRldmljZSB0cmVlLg0KPj4+PiANCj4+Pj4gQSBuZXcgYXJlYSBoYXMgYmVl
biByZXNlcnZlZCBpbiB0aGUgYXJtIGd1ZXN0IHBoeXNpY2FsIG1hcCBhdCB3aGljaCB0aGUgVlBD
SSBidXMgaXMgZGVjbGFyZWQgaW4gdGhlIGRldmljZSB0cmVlIChyZWcgYW5kIHJhbmdlcyBwYXJh
bWV0ZXJzIG9mIHRoZSBub2RlKS4gQSB0cmFwIGhhbmRsZXIgZm9yIHRoZSBQQ0kgRUNBTSBhY2Nl
c3MgZnJvbSBndWVzdCBoYXMgYmVlbiByZWdpc3RlcmVkIGF0IHRoZSBkZWZpbmVkIGFkZHJlc3Mg
YW5kIHJlZGlyZWN0cyByZXF1ZXN0cyB0byB0aGUgVlBDSSBkcml2ZXIgaW4gWGVuLg0KPj4+PiAN
Cj4+Pj4gTGltaXRhdGlvbjoNCj4+Pj4gKiBPbmx5IG9uZSBQQ0kgZGV2aWNlIHRyZWUgbm9kZSBp
cyBzdXBwb3J0ZWQgYXMgb2Ygbm93Lg0KPj4+IEkgdGhpbmsgdnBjaT0icGNpX2VjYW0iIHNob3Vs
ZCBiZSBvcHRpb25hbDogaWYgcGNpPVsgIlBDSV9TUEVDX1NUUklORyIsDQo+Pj4gLi4uXSBpcyBz
cGVjaWZpZmVkLCB0aGVuIHZwY2k9InBjaV9lY2FtIiBpcyBpbXBsaWVkLg0KPj4+IA0KPj4+IHZw
Y2k9InBjaV9lY2FtIiBpcyBvbmx5IHVzZWZ1bCBvbmUgZGF5IGluIHRoZSBmdXR1cmUgd2hlbiB3
ZSB3YW50IHRvIGJlDQo+Pj4gYWJsZSB0byBlbXVsYXRlIG90aGVyIG5vbi1lY2FtIGhvc3QgYnJp
ZGdlcy4gRm9yIG5vdyB3ZSBjb3VsZCBldmVuIHNraXANCj4+PiBpdC4NCj4+IFRoaXMgd291bGQg
Y3JlYXRlIGEgcHJvYmxlbSBpZiB4bCBpcyB1c2VkIHRvIGFkZCBhIFBDSSBkZXZpY2UgYXMgd2Ug
bmVlZCB0aGUgUENJIG5vZGUgdG8gYmUgaW4gdGhlIERUQiB3aGVuIHRoZSBndWVzdCBpcyBjcmVh
dGVkLg0KPj4gSSBhZ3JlZSB0aGlzIGlzIG5vdCBuZWVkZWQgYnV0IHJlbW92aW5nIGl0IG1pZ2h0
IGNyZWF0ZSBtb3JlIGNvbXBsZXhpdHkgaW4gdGhlIGNvZGUuDQo+IA0KPiBJIHdvdWxkIHN1Z2dl
c3Qgd2UgaGF2ZSBpdCBmcm9tIGRheSAwIGFzIHRoZXJlIGFyZSBwbGVudHkgb2YgSFcgYXZhaWxh
YmxlIHdoaWNoIGlzIG5vdCBFQ0FNLg0KPiANCj4gSGF2aW5nIHZwY2kgYWxsb3dzIG90aGVyIGJy
aWRnZXMgdG8gYmUgc3VwcG9ydGVkDQoNClllcyB3ZSBhZ3JlZS4NCg0KPiANCj4+IA0KPj4gQmVy
dHJhbmQNCj4+IA0KPj4+IA0KPj4+PiBCQVIgdmFsdWUgYW5kIElPTUVNIG1hcHBpbmc6DQo+Pj4+
IA0KPj4+PiBMaW51eCBndWVzdCB3aWxsIGRvIHRoZSBQQ0kgZW51bWVyYXRpb24gYmFzZWQgb24g
dGhlIGFyZWEgcmVzZXJ2ZWQgZm9yIEVDQU0gYW5kIElPTUVNIHJhbmdlcyBpbiB0aGUgVlBDSSBk
ZXZpY2UgdHJlZSBub2RlLiBPbmNlIFBDSQlkZXZpY2UgaXMgYXNzaWduZWQgdG8gdGhlIGd1ZXN0
LCBYRU4gd2lsbCBtYXAgdGhlIGd1ZXN0IFBDSSBJT01FTSByZWdpb24gdG8gdGhlIHJlYWwgcGh5
c2ljYWwgSU9NRU0gcmVnaW9uIG9ubHkgZm9yIHRoZSBhc3NpZ25lZCBkZXZpY2VzLg0KPj4+PiAN
Cj4+Pj4gQXMgb2Ygbm93IHdlIGhhdmUgbm90IG1vZGlmaWVkIHRoZSBleGlzdGluZyBWUENJIGNv
ZGUgdG8gbWFwIHRoZSBndWVzdCBQQ0kgSU9NRU0gcmVnaW9uIHRvIHRoZSByZWFsIHBoeXNpY2Fs
IElPTUVNIHJlZ2lvbi4gV2UgdXNlZCB0aGUgZXhpc3RpbmcgZ3Vlc3Qg4oCcaW9tZW3igJ0gY29u
ZmlnIG9wdGlvbiB0byBtYXAgdGhlIHJlZ2lvbi4NCj4+Pj4gRm9yIGV4YW1wbGU6DQo+Pj4+IAlH
dWVzdCByZXNlcnZlZCBJT01FTSByZWdpb246ICAweDA0MDIwMDAwDQo+Pj4+ICAgIAlSZWFsIHBo
eXNpY2FsIElPTUVNIHJlZ2lvbjoweDUwMDAwMDAwDQo+Pj4+ICAgIAlJT01FTSBzaXplOjEyOE1C
DQo+Pj4+ICAgIAlpb21lbSBjb25maWcgd2lsbCBiZTogICBpb21lbSA9IFsiMHg1MDAwMCwweDgw
MDBAMHg0MDIwIl0NCj4+Pj4gDQo+Pj4+IFRoZXJlIGlzIG5vIG5lZWQgdG8gbWFwIHRoZSBFQ0FN
IHNwYWNlIGFzIFhFTiBhbHJlYWR5IGhhdmUgYWNjZXNzIHRvIHRoZSBFQ0FNIHNwYWNlIGFuZCBY
RU4gd2lsbCB0cmFwIEVDQU0gYWNjZXNzZXMgZnJvbSB0aGUgZ3Vlc3QgYW5kIHdpbGwgcGVyZm9y
bSByZWFkL3dyaXRlIG9uIHRoZSBWUENJIGJ1cy4NCj4+Pj4gDQo+Pj4+IElPTUVNIGFjY2VzcyB3
aWxsIG5vdCBiZSB0cmFwcGVkIGFuZCB0aGUgZ3Vlc3Qgd2lsbCBkaXJlY3RseSBhY2Nlc3MgdGhl
IElPTUVNIHJlZ2lvbiBvZiB0aGUgYXNzaWduZWQgZGV2aWNlIHZpYSBzdGFnZS0yIHRyYW5zbGF0
aW9uLg0KPj4+PiANCj4+Pj4gSW4gdGhlIHNhbWUsIHdlIG1hcHBlZCB0aGUgYXNzaWduZWQgZGV2
aWNlcyBJUlEgdG8gdGhlIGd1ZXN0IHVzaW5nIGJlbG93IGNvbmZpZyBvcHRpb25zLg0KPj4+PiAJ
aXJxcz0gWyBOVU1CRVIsIE5VTUJFUiwgLi4uXQ0KPj4+PiANCj4+Pj4gTGltaXRhdGlvbjoNCj4+
Pj4gKiBOZWVkIHRvIGF2b2lkIHRoZSDigJxpb21lbeKAnSBhbmQg4oCcaXJx4oCdIGd1ZXN0IGNv
bmZpZyBvcHRpb25zIGFuZCBtYXAgdGhlIElPTUVNIHJlZ2lvbiBhbmQgSVJRIGF0IHRoZSBzYW1l
IHRpbWUgd2hlbiBkZXZpY2UgaXMgYXNzaWduZWQgdG8gdGhlIGd1ZXN0IHVzaW5nIHRoZSDigJxw
Y2nigJ0gZ3Vlc3QgY29uZmlnIG9wdGlvbnMgd2hlbiB4bCBjcmVhdGVzIHRoZSBkb21haW4uDQo+
Pj4+ICogRW11bGF0ZWQgQkFSIHZhbHVlcyBvbiB0aGUgVlBDSSBidXMgc2hvdWxkIHJlZmxlY3Qg
dGhlIElPTUVNIG1hcHBlZCBhZGRyZXNzLg0KPj4+PiAqIFg4NiBtYXBwaW5nIGNvZGUgc2hvdWxk
IGJlIHBvcnRlZCBvbiBBcm0gc28gdGhhdCB0aGUgc3RhZ2UtMiB0cmFuc2xhdGlvbiBpcyBhZGFw
dGVkIHdoZW4gdGhlIGd1ZXN0IGlzIGRvaW5nIGEgbW9kaWZpY2F0aW9uIG9mIHRoZSBCQVIgcmVn
aXN0ZXJzIHZhbHVlcyAodG8gbWFwIHRoZSBhZGRyZXNzIHJlcXVlc3RlZCBieSB0aGUgZ3Vlc3Qg
Zm9yIGEgc3BlY2lmaWMgSU9NRU0gdG8gdGhlIGFkZHJlc3MgYWN0dWFsbHkgY29udGFpbmVkIGlu
IHRoZSByZWFsIEJBUiByZWdpc3RlciBvZiB0aGUgY29ycmVzcG9uZGluZyBkZXZpY2UpLg0KPj4+
PiANCj4+Pj4gIyBTTU1VIGNvbmZpZ3VyYXRpb24gZm9yIGd1ZXN0Og0KPj4+PiANCj4+Pj4gV2hl
biBhc3NpZ25pbmcgUENJIGRldmljZXMgdG8gYSBndWVzdCwgdGhlIFNNTVUgY29uZmlndXJhdGlv
biBzaG91bGQgYmUgdXBkYXRlZCB0byByZW1vdmUgYWNjZXNzIHRvIHRoZSBoYXJkd2FyZSBkb21h
aW4gbWVtb3J5DQo+IA0KPiBTbywgYXMgdGhlIGhhcmR3YXJlIGRvbWFpbiBzdGlsbCBoYXMgYWNj
ZXNzIHRvIHRoZSBQQ0kgY29uZmlndXJhdGlvbiBzcGFjZSwgd2UNCj4gDQo+IGNhbiBwb3RlbnRp
YWxseSBoYXZlIGEgY29uZGl0aW9uIHdoZW4gRG9tMCBhY2Nlc3NlcyB0aGUgZGV2aWNlLiBBRkFJ
VSwgaWYgd2UgaGF2ZQ0KPiANCj4gcGNpIGZyb250L2JhY2sgdGhlbiBiZWZvcmUgYXNzaWduaW5n
IHRoZSBkZXZpY2UgdG8gdGhlIGd1ZXN0IHdlIHVuYmluZCBpdCBmcm9tIHRoZQ0KPiANCj4gcmVh
bCBkcml2ZXIgYW5kIGJpbmQgdG8gdGhlIGJhY2suIEFyZSB3ZSBnb2luZyB0byBkbyBzb21ldGhp
bmcgc2ltaWxhciBoZXJlPw0KDQpZZXMgd2UgaGF2ZSB0byB1bmJpbmQgdGhlIGRyaXZlciBmcm9t
IHRoZSBoYXJkd2FyZSBkb21haW4gYmVmb3JlIGFzc2lnbmluZyB0aGUgZGV2aWNlIHRvIHRoZSBn
dWVzdC4gQWxzbyBhcyBzb29uIGFzIFhlbiBoYXMgZG9uZSBoaXMgUENJIGVudW1lcmF0aW9uIChl
aXRoZXIgb24gYm9vdCBvciBhZnRlciBhbiBoeXBlcmNhbGwgZnJvbSB0aGUgaGFyZHdhcmUgZG9t
YWluKSwgb25seSBYZW4gd2lsbCBhY2Nlc3MgdGhlIHBoeXNpY2FsIFBDSSBidXMsIGV2ZXJ5Ym9k
eSBlbHNlIHdpbGwgZ28gdGhyb3VnaCBWUENJLiANCg0KQmVydHJhbmQgKGFuZCBSYWh1bCkNCg0K
PiANCj4gDQo+IFRoYW5rIHlvdSwNCj4gDQo+IE9sZWtzYW5kcg0KPiANCj4+Pj4gIGFuZCBhZGQN
Cj4+Pj4gY29uZmlndXJhdGlvbiB0byBoYXZlIGFjY2VzcyB0byB0aGUgZ3Vlc3QgbWVtb3J5IHdp
dGggdGhlIHByb3BlciBhZGRyZXNzIHRyYW5zbGF0aW9uIHNvIHRoYXQgdGhlIGRldmljZSBjYW4g
ZG8gRE1BIG9wZXJhdGlvbnMgZnJvbSBhbmQgdG8gdGhlIGd1ZXN0IG1lbW9yeSBvbmx5Lg0KPj4+
PiANCj4+Pj4gIyBNU0kvTVNJLVggc3VwcG9ydDoNCj4+Pj4gTm90IGltcGxlbWVudCBhbmQgdGVz
dGVkIGFzIG9mIG5vdy4NCj4+Pj4gDQo+Pj4+ICMgSVRTIHN1cHBvcnQ6DQo+Pj4+IE5vdCBpbXBs
ZW1lbnQgYW5kIHRlc3RlZCBhcyBvZiBub3cuDQo+IFsxXSBodHRwczovL2xpc3RzLnhlbi5vcmcv
YXJjaGl2ZXMvaHRtbC94ZW4tZGV2ZWwvMjAxNy0wNS9tc2cwMjY3NC5odG1sDQoNCg==


From xen-devel-bounces@lists.xenproject.org Fri Jul 17 13:14:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jul 2020 13:14:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jwQCD-00084l-Bt; Fri, 17 Jul 2020 13:14:37 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zdYj=A4=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1jwQCC-00084g-QA
 for xen-devel@lists.xenproject.org; Fri, 17 Jul 2020 13:14:36 +0000
X-Inumbo-ID: 75ac27e6-c82f-11ea-b7bb-bc764e2007e4
Received: from EUR05-VI1-obe.outbound.protection.outlook.com (unknown
 [40.107.21.88]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 75ac27e6-c82f-11ea-b7bb-bc764e2007e4;
 Fri, 17 Jul 2020 13:14:35 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com; 
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=fGlWvLunnNEPD3F17cNmd24B3IWMU6l8zKXPOJpOy4c=;
 b=44XBSM3Wa6VjJ7JdhrY47XlqGj7JZWmmRCJw3trt9DYgw+m1t3uHlIc0FOQse+SNHFjp2APB79UXBry4fa884RRuQu/7u8cULxIh4sGU0NrvcZ7iyaA6XV/RaZuV3GlcIhX2p0Yzk2/T2aZXpVfLx38rzs7TRp1v8Guy2A7Gv5Y=
Received: from AM6PR10CA0032.EURPRD10.PROD.OUTLOOK.COM (2603:10a6:209:89::45)
 by AM6PR08MB5045.eurprd08.prod.outlook.com (2603:10a6:20b:e4::16)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3174.22; Fri, 17 Jul
 2020 13:14:34 +0000
Received: from VE1EUR03FT050.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:209:89:cafe::3b) by AM6PR10CA0032.outlook.office365.com
 (2603:10a6:209:89::45) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3195.17 via Frontend
 Transport; Fri, 17 Jul 2020 13:14:34 +0000
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org;
 dmarc=bestguesspass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT050.mail.protection.outlook.com (10.152.19.209) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3195.18 via Frontend Transport; Fri, 17 Jul 2020 13:14:33 +0000
Received: ("Tessian outbound 7de93d801f24:v62");
 Fri, 17 Jul 2020 13:14:33 +0000
X-CheckRecipientChecked: true
X-CR-MTA-CID: 637464a434bbbee5
X-CR-MTA-TID: 64aa7808
Received: from c0f236bb3e6e.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 741FE4C6-8D78-4DFF-A975-D5A554EBA72F.1; 
 Fri, 17 Jul 2020 13:14:27 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id c0f236bb3e6e.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 17 Jul 2020 13:14:27 +0000
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=B4W89KzXozt+dp0K8D6k+A1VviZCAzLtsdeGYi0QclUukekfsnuNpoMu6husxI+k5KmmjEJ70fhlAuTGO99tx2tJ87XYlHqNW7gSXyA7OTwQq44sTbV0y7eSmr9reXLrf0Q3PdMCW2wIy/3WEWW1LAc2skqmHh2Lrc9XbPMGAVV1SNzAdyWv0wxyO1ClpV0irzrbd9kl1vNfqn8xu6MPn89fJ8aAh0AZJm1Qo1Jmc2LYNDcOBNq29gyk5zndChiG2qtWdjSASMXUy9bebE3mM9yO4vV/AVmukkBFdTL7jNNrrPp1faWnjQdA5daDROU7arh+7plb3tPWeTL0ryJEUw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; 
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=fGlWvLunnNEPD3F17cNmd24B3IWMU6l8zKXPOJpOy4c=;
 b=ck3RF6traEq3h0MygKJ3UvUdoGZJC2Ic/XzmV9Qcnhv72GJ+1WPxA/Rkyr88d0SasUVlHvRCRMaBAuX9NlAa5wuNIyEefQe0Es+oM4KTgXX1TPiWOhDsViV3IrXwJXhMzwkMmeomWUuuPE2w8tzcl1dckiL90EH13xR0HiDmhV0XHZ38eZ8qPGk+6d4ptg+R0JU6f8HRFUsXBwqHs3R1jS5dq/qbgfp08zEkUWJ1IG3NqApnHIqGFrSbFJSL/p0dJOaPo6D4ySNbYwI9X2doD5ifDUonoJBVt4nZ3IrsMcyr/gnLKN4kyPfpD/lJDxBKxhTen8cmBJUdIF7TzfapGw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com; 
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=fGlWvLunnNEPD3F17cNmd24B3IWMU6l8zKXPOJpOy4c=;
 b=44XBSM3Wa6VjJ7JdhrY47XlqGj7JZWmmRCJw3trt9DYgw+m1t3uHlIc0FOQse+SNHFjp2APB79UXBry4fa884RRuQu/7u8cULxIh4sGU0NrvcZ7iyaA6XV/RaZuV3GlcIhX2p0Yzk2/T2aZXpVfLx38rzs7TRp1v8Guy2A7Gv5Y=
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DB7PR08MB3515.eurprd08.prod.outlook.com (2603:10a6:10:50::24) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3195.18; Fri, 17 Jul
 2020 13:14:25 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::4001:43ad:d113:46a8]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::4001:43ad:d113:46a8%5]) with mapi id 15.20.3174.028; Fri, 17 Jul 2020
 13:14:25 +0000
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Jan Beulich <jbeulich@suse.com>
Subject: Re: RFC: PCI devices passthrough on Arm design proposal
Thread-Topic: RFC: PCI devices passthrough on Arm design proposal
Thread-Index: AQHWW4kYTVU0hTDyYEitKlUuU5vZlKkKf2uAgAACLICAAOrEgIAAVPWA
Date: Fri, 17 Jul 2020 13:14:25 +0000
Message-ID: <28899FEF-9DA7-4513-8283-1AC5EFFC6E92@arm.com>
References: <3F6E40FB-79C5-4AE8-81CA-E16CA37BB298@arm.com>
 <BD475825-10F6-4538-8294-931E370A602C@arm.com>
 <E9CBAA57-5EF3-47F9-8A40-F5D7816DB2A4@arm.com>
 <a50c714c-1642-0354-3f19-5a6f7278d8aa@suse.com>
In-Reply-To: <a50c714c-1642-0354-3f19-5a6f7278d8aa@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
Authentication-Results-Original: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=arm.com;
x-originating-ip: [2a01:e0a:13:6f10:d5e3:98:5df0:fb15]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 8b3d741c-b10f-4463-6c35-08d82a5358df
x-ms-traffictypediagnostic: DB7PR08MB3515:|AM6PR08MB5045:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS: <AM6PR08MB5045D1D7E033625A53B281DD9D7C0@AM6PR08MB5045.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:10000;OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original: QD1AU8fybUrO2/OvhxnnWUQiMNS6xAfWhV5iGZg+2dyCrDYtpzNgZxx4Ss/eLy0LWzBd8TlRurjIn+78hY2qq1i5ZsC1Kyzb/AFoPwdPKSW8ppSOcVXuUxceuhVU94Pgkl0PlylZtowPSbyMmOolMqFsbp34Ysg417qc9odOCgv4rogZShyf54CMT7TkJEC4ngbHiGwyqC65zm22xCSH5m2wSpaKKVoyhEj7G+voN22XG6FCSNL61hjk4zEU1Yn2HGp58LYmVakZ8hangIdyFX7srhhsacTKPYVlyy5+CQ8NSzlcT0slclK49mehbSSpj3+/No4WmDQFHg5r8nVQuQ==
X-Forefront-Antispam-Report-Untrusted: CIP:255.255.255.255; CTRY:; LANG:en;
 SCL:1; SRV:; IPV:NLI; SFV:NSPM; H:DB7PR08MB3689.eurprd08.prod.outlook.com;
 PTR:; CAT:NONE; SFTY:;
 SFS:(4636009)(366004)(33656002)(54906003)(8936002)(8676002)(66476007)(66556008)(64756008)(66946007)(66446008)(186003)(498600001)(6506007)(53546011)(4326008)(2906002)(2616005)(76116006)(91956017)(6916009)(36756003)(6486002)(6512007)(86362001)(5660300002)(71200400001)(83380400001);
 DIR:OUT; SFP:1101; 
x-ms-exchange-antispam-messagedata: Y9TSmBLfSaFlSJhC0DLv6mG/M4ElwgSD6R4r1rtz301CC2nE9IZpyKBC9e5fK2u6BqCAgAklq6YQ/JSHexlbQUZAwDWRC6W7pv7qymn/4Mxoq/zAYhKSJsTUd06gASBOtdZL+13Az7bIcgltuS2pmVCzcGJ/57bOR/hBPEFvLKn3B9dBuuEZZSNFJHKgY7Rfci/ROrYGzdOTv/D9oapPyNuOkgFVD1PKVBKSNWI6HvOKeweI4VARdc9kiqNQTX8aPVKlsVFHi0JjtAEqbVmXUDVLZNe+D4pIxZLAF51t4CiPgb8DMITfvFFZkENrFDwthTn9uMYi6OohKIOdBr4V5c2p+jdAButdOvA4wxaXtGQnAVaJWTa5LOTTe7563U/e07r6qvdQki5E5bRgtTG7geyBeJ5o6EoRjFDzcV9dYLHUXFOd6HgOMX5hV9gaLk3gtBtibhtv4IuWkziIOEoRWRyvALaWaqT/oAoDk8jrl0EsPms4+0K1neDyhq9ZscVq1J1IpK3zZFly9YW4fyfnU6euftaQUYsnYIjr6CGCWQA=
Content-Type: text/plain; charset="utf-8"
Content-ID: <E631FCF5DA73FB468672B83581ABFE65@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB7PR08MB3515
Original-Authentication-Results: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped: VE1EUR03FT050.eop-EUR03.prod.protection.outlook.com
X-Forefront-Antispam-Report: CIP:63.35.35.123; CTRY:IE; LANG:en; SCL:1; SRV:;
 IPV:CAL; SFV:NSPM; H:64aa7808-outbound-1.mta.getcheckrecipient.com;
 PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com; CAT:NONE; SFTY:;
 SFS:(4636009)(376002)(136003)(396003)(346002)(39860400002)(46966005)(107886003)(70586007)(8936002)(6506007)(5660300002)(2616005)(4326008)(82740400003)(2906002)(36756003)(186003)(8676002)(53546011)(70206006)(336012)(6862004)(26005)(47076004)(33656002)(83380400001)(316002)(82310400002)(356005)(6512007)(81166007)(6486002)(478600001)(54906003)(86362001);
 DIR:OUT; SFP:1101; 
X-MS-Office365-Filtering-Correlation-Id-Prvs: 3966fed9-2f39-45d2-9973-08d82a535429
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: AMMK21H4Gda01ubTRpqjaAaMK5SuFSzSSxqo+bd5E85IKqhoL971Vj483+sq+d0ghltRFdskZ/LyUcPXSgHXgHKMX1LYoY4+2/IkW6ykiPppdw+shkWuavqgZt7DZ21a6avZpP/eLRlEHgYn6A2infi4SMG0Aq5a45G0OWrYImXwEa0iA/HM8LBcbELN2TPD9n0YPyn3NVV9I8O/z3xIpGzAFHsGJSg40Ii7D2D5o5K/HRbUjnZqnH8PszB4zG1woi81xZ6Sqm9TgvOi/VZbqHudeEqR9xW3vS9okUhbkiOgrV+RCXBpcLkUfg4OxT1cR7J6/6V2iqpoJdE1uIFLGrxEAQN1MzaB0VIQOX0OFDyVMv1ySOauWTEYVPm4tv4IyySPFGusuAkORDiKmf5nVw==
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Jul 2020 13:14:33.3135 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 8b3d741c-b10f-4463-6c35-08d82a5358df
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d; Ip=[63.35.35.123];
 Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource: VE1EUR03FT050.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM6PR08MB5045
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Rahul Singh <Rahul.Singh@arm.com>,
 Julien Grall <julien.grall.oss@gmail.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 nd <nd@arm.com>, =?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

DQoNCj4gT24gMTcgSnVsIDIwMjAsIGF0IDEwOjEwLCBKYW4gQmV1bGljaCA8amJldWxpY2hAc3Vz
ZS5jb20+IHdyb3RlOg0KPiANCj4gT24gMTYuMDcuMjAyMCAxOToxMCwgUmFodWwgU2luZ2ggd3Jv
dGU6DQo+PiAjIERpc2NvdmVyaW5nIFBDSSBkZXZpY2VzOg0KPj4gDQo+PiBQQ0ktUENJZSBlbnVt
ZXJhdGlvbiBpcyBhIHByb2Nlc3Mgb2YgZGV0ZWN0aW5nIGRldmljZXMgY29ubmVjdGVkIHRvIGl0
cyBob3N0LiBJdCBpcyB0aGUgcmVzcG9uc2liaWxpdHkgb2YgdGhlIGhhcmR3YXJlIGRvbWFpbiBv
ciBib290IGZpcm13YXJlIHRvIGRvIHRoZSBQQ0kgZW51bWVyYXRpb24gYW5kIGNvbmZpZ3VyZSB0
aGUgQkFSLCBQQ0kgY2FwYWJpbGl0aWVzLCBhbmQgTVNJL01TSS1YIGNvbmZpZ3VyYXRpb24uDQo+
PiANCj4+IFBDSS1QQ0llIGVudW1lcmF0aW9uIGluIFhFTiBpcyBub3QgZmVhc2libGUgZm9yIHRo
ZSBjb25maWd1cmF0aW9uIHBhcnQgYXMgaXQgd291bGQgcmVxdWlyZSBhIGxvdCBvZiBjb2RlIGlu
c2lkZSBYZW4gd2hpY2ggd291bGQgcmVxdWlyZSBhIGxvdCBvZiBtYWludGVuYW5jZS4gQWRkZWQg
dG8gdGhpcyBtYW55IHBsYXRmb3JtcyByZXF1aXJlIHNvbWUgcXVpcmtzIGluIHRoYXQgcGFydCBv
ZiB0aGUgUENJIGNvZGUgd2hpY2ggd291bGQgZ3JlYXRseSBpbXByb3ZlIFhlbiBjb21wbGV4aXR5
LiBPbmNlIGhhcmR3YXJlIGRvbWFpbiBlbnVtZXJhdGVzIHRoZSBkZXZpY2UgdGhlbiBpdCB3aWxs
IGNvbW11bmljYXRlIHRvIFhFTiB2aWEgdGhlIGJlbG93IGh5cGVyY2FsbC4NCj4+IA0KPj4gI2Rl
ZmluZSBQSFlTREVWT1BfcGNpX2RldmljZV9hZGQgICAgICAgIDI1DQo+PiBzdHJ1Y3QgcGh5c2Rl
dl9wY2lfZGV2aWNlX2FkZCB7DQo+PiAgICB1aW50MTZfdCBzZWc7DQo+PiAgICB1aW50OF90IGJ1
czsNCj4+ICAgIHVpbnQ4X3QgZGV2Zm47DQo+PiAgICB1aW50MzJfdCBmbGFnczsNCj4+ICAgIHN0
cnVjdCB7DQo+PiAgICAJdWludDhfdCBidXM7DQo+PiAgICAJdWludDhfdCBkZXZmbjsNCj4+ICAg
IH0gcGh5c2ZuOw0KPj4gICAgLyoNCj4+ICAgICogT3B0aW9uYWwgcGFyYW1ldGVycyBhcnJheS4N
Cj4+ICAgICogRmlyc3QgZWxlbWVudCAoWzBdKSBpcyBQWE0gZG9tYWluIGFzc29jaWF0ZWQgd2l0
aCB0aGUgZGV2aWNlIChpZiAqIFhFTl9QQ0lfREVWX1BYTSBpcyBzZXQpDQo+PiAgICAqLw0KPj4g
ICAgdWludDMyX3Qgb3B0YXJyW1hFTl9GTEVYX0FSUkFZX0RJTV07DQo+PiAgICB9Ow0KPj4gDQo+
PiBBcyB0aGUgaHlwZXJjYWxsIGFyZ3VtZW50IGhhcyB0aGUgUENJIHNlZ21lbnQgbnVtYmVyLCBY
RU4gd2lsbCBhY2Nlc3MgdGhlIFBDSSBjb25maWcgc3BhY2UgYmFzZWQgb24gdGhpcyBzZWdtZW50
IG51bWJlciBhbmQgZmluZCB0aGUgaG9zdC1icmlkZ2UgY29ycmVzcG9uZGluZyB0byB0aGlzIHNl
Z21lbnQgbnVtYmVyLiBBdCB0aGlzIHN0YWdlIGhvc3QgYnJpZGdlIGlzIGZ1bGx5IGluaXRpYWxp
emVkIHNvIHRoZXJlIHdpbGwgYmUgbm8gaXNzdWUgdG8gYWNjZXNzIHRoZSBjb25maWcgc3BhY2Uu
DQo+PiANCj4+IFhFTiB3aWxsIGFkZCB0aGUgUENJIGRldmljZXMgaW4gdGhlIGxpbmtlZCBsaXN0
IG1haW50YWluIGluIFhFTiB1c2luZyB0aGUgZnVuY3Rpb24gcGNpX2FkZF9kZXZpY2UoKS4gWEVO
IHdpbGwgYmUgYXdhcmUgb2YgYWxsIHRoZSBQQ0kgZGV2aWNlcyBvbiB0aGUgc3lzdGVtIGFuZCBh
bGwgdGhlIGRldmljZSB3aWxsIGJlIGFkZGVkIHRvIHRoZSBoYXJkd2FyZSBkb21haW4uDQo+IA0K
PiBIYXZlIHlvdSBoYWQgYW55IHRob3VnaHRzIGFib3V0IERvbTAgcmUtYXJyYW5naW5nIHRoZSBi
dXMgbnVtYmVyaW5nPw0KPiBUaGlzIGlzLCBhZmFpY3QsIGEgc3RpbGwgb3BlbiBpc3N1ZSBvbiB4
ODYgYXMgd2VsbC4NCg0Kbm8gdGhhdOKAmXMgbm90IHNvbWV0aGluZyB3ZSBsb29rZWQgaW50by4g
QnV0IGluIHRoZW9yeSBpZiB0aGlzIGlzIGRvbmUgYnkgTGludXggYmVmb3JlIFhlbiBlbnVtZXJh
dGlvbiB0aGlzIHdpbGwgd29yay4gSWYgYSBkb21haW4gaXMgdHJ5aW5nIHRvIGRvIHRoaXMgd2Ug
aGF2ZSB0byBsb29rIGlmIHdlIGNhbiBzb21laG93IHN1cHBvcnQgdGhpcyBpbiBWUENJIGJ1dCB0
aGF0IGlzIHNvbWV0aGluZyB3ZSBkaWQgbm90IGNvbnNpZGVyIHNvIGZhci4gDQoNCj4gDQo+PiBM
aW1pdGF0aW9uczoNCj4+ICogV2hlbiBQQ0kgZGV2aWNlcyBhcmUgYWRkZWQgdG8gWEVOLCBNU0kg
Y2FwYWJpbGl0eSBpcyBub3QgaW5pdGlhbGl6ZWQgaW5zaWRlIFhFTiBhbmQgbm90IHN1cHBvcnRl
ZCBhcyBvZiBub3cuDQo+IA0KPiBJIHRoaW5rIHRoaXMgaXMgYSBwcmV0dHkgc2V2ZXJlIGxpbWl0
YXRpb24sIGFzIG1vZGVybiBkZXZpY2VzIHRlbmQgdG8NCj4gbm90IHN1cHBvcnQgcGluIGJhc2Vk
IGludGVycnVwdHMgYW55bW9yZS4NCg0KU29ycnkgdGhpcyBpcyBub3Qgd2hhdCB3ZSBtZWFudC4g
V2Ugd2lsbCBhZGQgTVNJIHN1cHBvcnQgYnV0IGFzIG9mIG5vdyB0aGlzIGlzIG5vdCBpbXBsZW1l
bnRlZCBvciBkZXNpZ25lZC4g4oCcTGltaXRhdGlvbnPigJ0gbWVhbnMgY3VycmVudGx5IG5vdCBz
dXBwb3J0ZWQgYnV0IHdlIHdpbGwgd29yayBvbiBpdCBvbiBhIHNlY29uZCBzdGVwLg0KDQo+IA0K
Pj4gIyBFbXVsYXRlZCBQQ0kgZGV2aWNlIHRyZWUgbm9kZSBpbiBsaWJ4bDoNCj4+IA0KPj4gTGli
eGwgaXMgY3JlYXRpbmcgYSB2aXJ0dWFsIFBDSSBkZXZpY2UgdHJlZSBub2RlIGluIHRoZSBkZXZp
Y2UgdHJlZSB0byBlbmFibGUgdGhlIGd1ZXN0IE9TIHRvIGRpc2NvdmVyIHRoZSB2aXJ0dWFsIFBD
SSBkdXJpbmcgZ3Vlc3QgYm9vdC4gV2UgaW50cm9kdWNlZCB0aGUgbmV3IGNvbmZpZyBvcHRpb24g
W3ZwY2k9InBjaV9lY2FtIl0gZm9yIGd1ZXN0cy4gV2hlbiB0aGlzIGNvbmZpZyBvcHRpb24gaXMg
ZW5hYmxlZCBpbiBhIGd1ZXN0IGNvbmZpZ3VyYXRpb24sIGEgUENJIGRldmljZSB0cmVlIG5vZGUg
d2lsbCBiZSBjcmVhdGVkIGluIHRoZSBndWVzdCBkZXZpY2UgdHJlZS4NCj4gDQo+IEkgc3VwcG9y
dCBTdGVmYW5vJ3Mgc3VnZ2VzdGlvbiBmb3IgdGhpcyB0byBiZSBhbiBvcHRpb25hbCB0aGluZywg
aS5lLg0KPiB0aGVyZSB0byBiZSBubyBuZWVkIGZvciBpdCB3aGVuIHRoZXJlIGFyZSBQQ0kgZGV2
aWNlcyBhc3NpZ25lZCB0byB0aGUNCj4gZ3Vlc3QgYW55d2F5LiBJIGFsc28gd29uZGVyIGFib3V0
IHRoZSBwY2lfIHByZWZpeCBoZXJlIC0gaXNuJ3QNCj4gdnBjaT0iZWNhbSIgYXMgdW5hbWJpZ3Vv
dXM/DQoNClRoaXMgY291bGQgYmUgYSBwcm9ibGVtIGFzIHdlIG5lZWQgdG8ga25vdyB0aGF0IHRo
aXMgaXMgcmVxdWlyZWQgZm9yIGEgZ3Vlc3QgdXBmcm9udCBzbyB0aGF0IFBDSSBkZXZpY2VzIGNh
biBiZSBhc3NpZ25lZCBhZnRlciB1c2luZyB4bC4gDQpSZWdhcmRpbmcgdGhlIG5hbWluZywgSSBh
Z3JlZS4gV2Ugd2lsbCByZW1vdmUgdGhlIHBjaV8gcHJlZml4IGhlcmUuIA0KPiANCj4+IEEgbmV3
IGFyZWEgaGFzIGJlZW4gcmVzZXJ2ZWQgaW4gdGhlIGFybSBndWVzdCBwaHlzaWNhbCBtYXAgYXQg
d2hpY2ggdGhlIFZQQ0kgYnVzIGlzIGRlY2xhcmVkIGluIHRoZSBkZXZpY2UgdHJlZSAocmVnIGFu
ZCByYW5nZXMgcGFyYW1ldGVycyBvZiB0aGUgbm9kZSkuIEEgdHJhcCBoYW5kbGVyIGZvciB0aGUg
UENJIEVDQU0gYWNjZXNzIGZyb20gZ3Vlc3QgaGFzIGJlZW4gcmVnaXN0ZXJlZCBhdCB0aGUgZGVm
aW5lZCBhZGRyZXNzIGFuZCByZWRpcmVjdHMgcmVxdWVzdHMgdG8gdGhlIFZQQ0kgZHJpdmVyIGlu
IFhlbi4NCj4+IA0KPj4gTGltaXRhdGlvbjoNCj4+ICogT25seSBvbmUgUENJIGRldmljZSB0cmVl
IG5vZGUgaXMgc3VwcG9ydGVkIGFzIG9mIG5vdy4NCj4+IA0KPj4gQkFSIHZhbHVlIGFuZCBJT01F
TSBtYXBwaW5nOg0KPj4gDQo+PiBMaW51eCBndWVzdCB3aWxsIGRvIHRoZSBQQ0kgZW51bWVyYXRp
b24gYmFzZWQgb24gdGhlIGFyZWEgcmVzZXJ2ZWQgZm9yIEVDQU0gYW5kIElPTUVNIHJhbmdlcyBp
biB0aGUgVlBDSSBkZXZpY2UgdHJlZSBub2RlLiBPbmNlIFBDSQlkZXZpY2UgaXMgYXNzaWduZWQg
dG8gdGhlIGd1ZXN0LCBYRU4gd2lsbCBtYXAgdGhlIGd1ZXN0IFBDSSBJT01FTSByZWdpb24gdG8g
dGhlIHJlYWwgcGh5c2ljYWwgSU9NRU0gcmVnaW9uIG9ubHkgZm9yIHRoZSBhc3NpZ25lZCBkZXZp
Y2VzLg0KPj4gDQo+PiBBcyBvZiBub3cgd2UgaGF2ZSBub3QgbW9kaWZpZWQgdGhlIGV4aXN0aW5n
IFZQQ0kgY29kZSB0byBtYXAgdGhlIGd1ZXN0IFBDSSBJT01FTSByZWdpb24gdG8gdGhlIHJlYWwg
cGh5c2ljYWwgSU9NRU0gcmVnaW9uLiBXZSB1c2VkIHRoZSBleGlzdGluZyBndWVzdCDigJxpb21l
beKAnSBjb25maWcgb3B0aW9uIHRvIG1hcCB0aGUgcmVnaW9uLg0KPj4gRm9yIGV4YW1wbGU6DQo+
PiAJR3Vlc3QgcmVzZXJ2ZWQgSU9NRU0gcmVnaW9uOiAgMHgwNDAyMDAwMA0KPj4gICAgCVJlYWwg
cGh5c2ljYWwgSU9NRU0gcmVnaW9uOjB4NTAwMDAwMDANCj4+ICAgIAlJT01FTSBzaXplOjEyOE1C
DQo+PiAgICAJaW9tZW0gY29uZmlnIHdpbGwgYmU6ICAgaW9tZW0gPSBbIjB4NTAwMDAsMHg4MDAw
QDB4NDAyMCJdDQo+IA0KPiBUaGlzIHN1cmVseSBpcyBwbGFubmVkIHRvIGdvIGF3YXkgYmVmb3Jl
IHRoZSBjb2RlIGhpdHMgdXBzdHJlYW0/IFRoZQ0KPiByYW5nZXMgcmVhbGx5IHNob3VsZCBiZSBy
ZWFkIG91dCBvZiB0aGUgQkFScywgYXMgSSBzZWUgdGhlDQo+ICJsaW1pdGF0aW9ucyIgc2VjdGlv
biBmdXJ0aGVyIGRvd24gc3VnZ2VzdHMsIGJ1dCBpdCdzIG5vdCBjbGVhcg0KPiB3aGV0aGVyICJs
aW1pdGF0aW9ucyIgYXJlIGl0ZW1zIHRoYXQgeW91IHBsYW4gdG8gdGFrZSBjYXJlIG9mIGJlZm9y
ZQ0KPiBzdWJtaXR0aW5nIHlvdXIgY29kZSBmb3IgcmV2aWV3Lg0KDQpkZWZpbml0ZWx5IHllcy4g
QXMgc2FpZCBiZWZvcmUgdGhlIGxpbWl0YXRpb25zIGFyZSBpbiB0aGUgUkZDIHdlIHdpbGwgc3Vi
bWl0IGJ1dCB3ZSB3aWxsIHdvcmsgb24gdGhhdCBhbmQgcmVtb3ZlIHRob3NlIGxpbWl0YXRpb25z
IGJlZm9yZSBzdWJtaXR0aW5nIHRoZSBmaW5hbCBjb2RlIGZvciByZXZpZXcuIA0KDQpCZXJ0cmFu
ZCBhbmQgUmFodWwNCg0KPiANCj4gSmFuDQo+IA0KDQo=


From xen-devel-bounces@lists.xenproject.org Fri Jul 17 13:19:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jul 2020 13:19:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jwQH8-0008G4-Vv; Fri, 17 Jul 2020 13:19:42 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=1N/p=A4=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jwQH8-0008Fw-C4
 for xen-devel@lists.xenproject.org; Fri, 17 Jul 2020 13:19:42 +0000
X-Inumbo-ID: 2aefe192-c830-11ea-95fc-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2aefe192-c830-11ea-95fc-12813bfff9fa;
 Fri, 17 Jul 2020 13:19:39 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 556FAB7CA;
 Fri, 17 Jul 2020 13:19:43 +0000 (UTC)
Subject: Re: RFC: PCI devices passthrough on Arm design proposal
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
References: <3F6E40FB-79C5-4AE8-81CA-E16CA37BB298@arm.com>
 <BD475825-10F6-4538-8294-931E370A602C@arm.com>
 <E9CBAA57-5EF3-47F9-8A40-F5D7816DB2A4@arm.com>
 <a50c714c-1642-0354-3f19-5a6f7278d8aa@suse.com>
 <28899FEF-9DA7-4513-8283-1AC5EFFC6E92@arm.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <1dd5db2d-98c7-7738-c3d4-d3f098dfe674@suse.com>
Date: Fri, 17 Jul 2020 15:19:41 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <28899FEF-9DA7-4513-8283-1AC5EFFC6E92@arm.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Rahul Singh <Rahul.Singh@arm.com>,
 Julien Grall <julien.grall.oss@gmail.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 nd <nd@arm.com>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 17.07.2020 15:14, Bertrand Marquis wrote:
>> On 17 Jul 2020, at 10:10, Jan Beulich <jbeulich@suse.com> wrote:
>> On 16.07.2020 19:10, Rahul Singh wrote:
>>> # Emulated PCI device tree node in libxl:
>>>
>>> Libxl is creating a virtual PCI device tree node in the device tree to enable the guest OS to discover the virtual PCI during guest boot. We introduced the new config option [vpci="pci_ecam"] for guests. When this config option is enabled in a guest configuration, a PCI device tree node will be created in the guest device tree.
>>
>> I support Stefano's suggestion for this to be an optional thing, i.e.
>> there to be no need for it when there are PCI devices assigned to the
>> guest anyway. I also wonder about the pci_ prefix here - isn't
>> vpci="ecam" as unambiguous?
> 
> This could be a problem as we need to know that this is required for a guest upfront so that PCI devices can be assigned after using xl. 

I'm afraid I don't understand: When there are no PCI device that get
handed to a guest when it gets created, but it is supposed to be able
to have some assigned while already running, then we agree the option
is needed (afaict). When PCI devices get handed to the guest while it
gets constructed, where's the problem to infer this option from the
presence of PCI devices in the guest configuration?

Jan
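
[Editorial note: for reference, a minimal sketch of the xl guest configuration
being discussed in this thread, using the option names and the iomem example
quoted from the RFC (`vpci`, `pci`, `iomem`, `irqs`); the BDF and IRQ values
are hypothetical placeholders, and the exact syntax may change before the
code is upstreamed.]

```
# xl guest config sketch (names per the RFC; not a final interface)
vpci  = "pci_ecam"                    # create the emulated PCI host bridge node in the guest DT
pci   = [ "0000:00:1f.0" ]            # BDF of a passed-through device (hypothetical example)
iomem = [ "0x50000,0x8000@0x4020" ]   # real IOMEM 0x50000000 mapped to guest 0x04020000, 128MB (0x8000 4K pages)
irqs  = [ 112 ]                       # SPI of the assigned device (hypothetical example)
```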


From xen-devel-bounces@lists.xenproject.org Fri Jul 17 13:22:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jul 2020 13:22:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jwQJL-0000ab-DO; Fri, 17 Jul 2020 13:21:59 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zdYj=A4=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1jwQJK-0000aW-EQ
 for xen-devel@lists.xenproject.org; Fri, 17 Jul 2020 13:21:58 +0000
X-Inumbo-ID: 7ceb49e6-c830-11ea-95fc-12813bfff9fa
Received: from EUR04-VI1-obe.outbound.protection.outlook.com (unknown
 [40.107.8.50]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7ceb49e6-c830-11ea-95fc-12813bfff9fa;
 Fri, 17 Jul 2020 13:21:57 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com; 
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=+4HPXVczpd//tOGvgos9oXwkWTw2F29F1hkpbXCE9uo=;
 b=Q7rwXYI6+TBrPsQelz8uZk+S19FNpayB4Q2IDsSWZF4PKuk4c3xihOkpIBKfhzvZX2SnLwS+FCwmdwai0ganxR2LGZhiHm0gGqAnalvmtCsFW9wdLQkehBmxmOOQgEy1rqPr0xPIgWp1O9GB6hrqh0NR95Od5dEa7NdsfrNvnW0=
Received: from DB6PR0201CA0017.eurprd02.prod.outlook.com (2603:10a6:4:3f::27)
 by AM0PR08MB4499.eurprd08.prod.outlook.com (2603:10a6:208:140::10)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3174.22; Fri, 17 Jul
 2020 13:21:55 +0000
Received: from DB5EUR03FT043.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:4:3f:cafe::fe) by DB6PR0201CA0017.outlook.office365.com
 (2603:10a6:4:3f::27) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3195.18 via Frontend
 Transport; Fri, 17 Jul 2020 13:21:55 +0000
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org;
 dmarc=bestguesspass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DB5EUR03FT043.mail.protection.outlook.com (10.152.20.236) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3195.18 via Frontend Transport; Fri, 17 Jul 2020 13:21:55 +0000
Received: ("Tessian outbound 73b502bf693a:v62");
 Fri, 17 Jul 2020 13:21:54 +0000
X-CheckRecipientChecked: true
X-CR-MTA-CID: a3085b6a01fdfa74
X-CR-MTA-TID: 64aa7808
Received: from f0792d98ed35.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 C7694120-DBD6-4863-9265-6EA85CA1E2B8.1; 
 Fri, 17 Jul 2020 13:21:49 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id f0792d98ed35.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 17 Jul 2020 13:21:49 +0000
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=b6rDlskKZyyWWsWOWMsQo172apwC8LGcbZnkr2bduWp42N2oJyDjm4Ijad4IPfbni1cI9WZCmqwtfb6DBPhJJS3XIfH84b62oFnkU+sIuVpfQ08PsSMuRmlF3JAqf6LyKWvNXkwkH0FZY06/oqzm7m3WMjAOA8/rEl51mBITNBmU2AaZ2fRX4v8nmUld8jZjf9yHJnmDo/+ssOyIvrD0OzIdV6se4Yr4vZ3IavprjPU8ZXWV0A7PJJWo76QZKETdLdrF53j9x96IgDCKXMs6KxA35aJOuMlR7XON/dquICx+o2skTJnvIPW614px/sfPt5FNMy0SaeTrThETB6WT9A==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; 
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=+4HPXVczpd//tOGvgos9oXwkWTw2F29F1hkpbXCE9uo=;
 b=jfu4zUTA/hGtyA6aGz7hWhP/ZF0D7vhMf5gL8qXNTIbmw8eAJlGCyjIJiEnIhhel6Zl+5JyM97j0tkvoDrr0C6cbX/tEyJWIbdnb5hMOQ8AqI+Iev3sXUyYHcLPvVb3wjY2Hx4qzMcge5m4iAA4WYStR2KcLG2T6czejkOkGNg/QU2iofF+bOeEXIeEzcsFcM7dU82D6DAMNiWmksZIQWyKmJ9gDvb+ZYw8o4IBlKbiaKVrVv2Wf+8FhMwjxf71UEPrYT3bbXE3fsMNRrMtjsIjgPORGPUTBwKZsFGktVWk8lZCh8ORX+2s372oSpYJUOkluFjN7E+6XSOxpxcVfnA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com; 
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=+4HPXVczpd//tOGvgos9oXwkWTw2F29F1hkpbXCE9uo=;
 b=Q7rwXYI6+TBrPsQelz8uZk+S19FNpayB4Q2IDsSWZF4PKuk4c3xihOkpIBKfhzvZX2SnLwS+FCwmdwai0ganxR2LGZhiHm0gGqAnalvmtCsFW9wdLQkehBmxmOOQgEy1rqPr0xPIgWp1O9GB6hrqh0NR95Od5dEa7NdsfrNvnW0=
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DB8PR08MB4138.eurprd08.prod.outlook.com (2603:10a6:10:a4::22) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3174.21; Fri, 17 Jul
 2020 13:21:47 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::4001:43ad:d113:46a8]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::4001:43ad:d113:46a8%5]) with mapi id 15.20.3174.028; Fri, 17 Jul 2020
 13:21:47 +0000
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
Subject: Re: RFC: PCI devices passthrough on Arm design proposal
Thread-Topic: RFC: PCI devices passthrough on Arm design proposal
Thread-Index: AQHWW4kYTVU0hTDyYEitKlUuU5vZlKkKf2uAgAACLICAAC0rgIAAqC0AgAANcgCAAD7ogIAABC2AgAAb6YA=
Date: Fri, 17 Jul 2020 13:21:47 +0000
Message-ID: <0EB29B08-E36F-4577-9140-30B70834D5D4@arm.com>
References: <3F6E40FB-79C5-4AE8-81CA-E16CA37BB298@arm.com>
 <BD475825-10F6-4538-8294-931E370A602C@arm.com>
 <E9CBAA57-5EF3-47F9-8A40-F5D7816DB2A4@arm.com>
 <alpine.DEB.2.21.2007161258160.3886@sstabellini-ThinkPad-T480s>
 <BB4645DF-A040-4912-AC35-C98105917FD5@arm.com>
 <f69f86dc-7a8c-4c25-c059-0e391de51d7f@epam.com>
 <547d91e8-a6fe-6430-b020-f9c550bfc22b@xen.org>
 <0cfe750e-2213-d6d3-80c5-494ede727304@epam.com>
In-Reply-To: <0cfe750e-2213-d6d3-80c5-494ede727304@epam.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
Authentication-Results-Original: epam.com; dkim=none (message not signed)
 header.d=none;epam.com; dmarc=none action=none header.from=arm.com;
x-originating-ip: [2a01:e0a:13:6f10:d5e3:98:5df0:fb15]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 07d7b76f-96b3-4ad3-975f-08d82a54600f
x-ms-traffictypediagnostic: DB8PR08MB4138:|AM0PR08MB4499:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS: <AM0PR08MB4499ED8C01626E14B0C8D6669D7C0@AM0PR08MB4499.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:10000;OLM:10000;
X-MS-Exchange-SenderADCheck: 1
Content-Type: text/plain; charset="iso-8859-1"
Content-ID: <22D716B97F15E34A933CB9B1F04585E3@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB8PR08MB4138
Original-Authentication-Results: epam.com; dkim=none (message not signed)
 header.d=none;epam.com; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped: DB5EUR03FT043.eop-EUR03.prod.protection.outlook.com
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Jul 2020 13:21:55.0119 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 07d7b76f-96b3-4ad3-975f-08d82a54600f
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d; Ip=[63.35.35.123];
 Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource: DB5EUR03FT043.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR08MB4499
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Julien Grall <julien.grall.oss@gmail.com>, Rahul Singh <Rahul.Singh@arm.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 nd <nd@arm.com>, =?iso-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>



> On 17 Jul 2020, at 13:41, Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com> wrote:
> 
> 
> On 7/17/20 2:26 PM, Julien Grall wrote:
>> 
>> 
>> On 17/07/2020 08:41, Oleksandr Andrushchenko wrote:
>>>>> We need to come up with something similar for dom0less too. It could be
>>>>> exactly the same thing (a list of BDFs as strings as a device tree
>>>>> property) or something else if we can come up with a better idea.
>>>> Fully agree.
>>>> Maybe a tree topology could allow more possibilities (like giving BAR values) in the future.
>>>>> 
>>>>>> # Emulated PCI device tree node in libxl:
>>>>>> 
>>>>>> Libxl is creating a virtual PCI device tree node in the device tree to enable the guest OS to discover the virtual PCI during guest boot. We introduced the new config option [vpci="pci_ecam"] for guests. When this config option is enabled in a guest configuration, a PCI device tree node will be created in the guest device tree.
>>>>>> 
>>>>>> A new area has been reserved in the arm guest physical map at which the VPCI bus is declared in the device tree (reg and ranges parameters of the node). A trap handler for the PCI ECAM access from guest has been registered at the defined address and redirects requests to the VPCI driver in Xen.
>>>>>> 
>>>>>> Limitation:
>>>>>> * Only one PCI device tree node is supported as of now.
>>>>> I think vpci="pci_ecam" should be optional: if pci=[ "PCI_SPEC_STRING",
>>>>> ...] is specified, then vpci="pci_ecam" is implied.
>>>>> 
>>>>> vpci="pci_ecam" is only useful one day in the future when we want to be
>>>>> able to emulate other non-ecam host bridges. For now we could even skip
>>>>> it.
>>>> This would create a problem if xl is used to add a PCI device, as we need the PCI node to be in the DTB when the guest is created.
>>>> I agree this is not needed, but removing it might create more complexity in the code.
>>> 
>>> I would suggest we have it from day 0, as there is plenty of HW available which is not ECAM.
>>> 
>>> Having vpci allows other bridges to be supported
>> 
>> So I can understand why you would want to have a driver for a non-ECAM host PCI controller. However, why would you want to emulate a non-ECAM PCI controller to a guest?
> Indeed. No need to emulate non-ECAM

If someone wants to implement something other than ECAM in the future, there
will be nothing preventing it from being done.
But indeed I do not really see a need for that.

Cheers
Bertrand

>> 
>> Cheers,



From xen-devel-bounces@lists.xenproject.org Fri Jul 17 13:22:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jul 2020 13:22:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jwQJs-0000dl-TG; Fri, 17 Jul 2020 13:22:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zdYj=A4=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1jwQJr-0000da-51
 for xen-devel@lists.xenproject.org; Fri, 17 Jul 2020 13:22:31 +0000
X-Inumbo-ID: 903def94-c830-11ea-b7bb-bc764e2007e4
Received: from EUR01-VE1-obe.outbound.protection.outlook.com (unknown
 [2a01:111:f400:fe1f::614])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 903def94-c830-11ea-b7bb-bc764e2007e4;
 Fri, 17 Jul 2020 13:22:30 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com; 
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=hvkgAIJsk+ABmn5vn5Yco54M0XKKWrn+K1LL18PqNLA=;
 b=32X3cfC7ySWK5c5HIWIt/Mdcwq0BoollSzcjnrDId/2sZYOEwIJBZP8DDgS1XqdXdj2J7ABJEDRnBqNb1/rmTWfxPw4csYfVlQlqSkgJSSXqQ6DP9OPsb3wrdpLCZSicETaPZmo5NRpeYXaS8x6OPDuha2+5R+rnyac6fIA8tcI=
Received: from DB8PR03CA0021.eurprd03.prod.outlook.com (2603:10a6:10:be::34)
 by HE1PR0801MB1947.eurprd08.prod.outlook.com (2603:10a6:3:4d::8) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3174.24; Fri, 17 Jul
 2020 13:22:27 +0000
Received: from DB5EUR03FT005.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:be:cafe::fd) by DB8PR03CA0021.outlook.office365.com
 (2603:10a6:10:be::34) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3195.18 via Frontend
 Transport; Fri, 17 Jul 2020 13:22:26 +0000
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org;
 dmarc=bestguesspass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DB5EUR03FT005.mail.protection.outlook.com (10.152.20.122) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3195.18 via Frontend Transport; Fri, 17 Jul 2020 13:22:26 +0000
Received: ("Tessian outbound 8f45de5545d6:v62");
 Fri, 17 Jul 2020 13:22:26 +0000
X-CheckRecipientChecked: true
X-CR-MTA-CID: 51c71de033baa0e7
X-CR-MTA-TID: 64aa7808
Received: from 830d997d7344.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 9C1C6B9D-A532-4476-8D37-E6A6D4C1586E.1; 
 Fri, 17 Jul 2020 13:22:21 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 830d997d7344.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 17 Jul 2020 13:22:21 +0000
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=AU50xmYMBGAoKocptu0fZa12rIUSYdU5ni20CGSlrHA/qtLGGuNKEkxDerrTbzW94Ebhw+S3BVVAtUaG7z1uQ6g+PQfy608c+sjS7ZNQgrzZd2arpuv/1S8RZXPnadvnjO8GXN/+p6wsP3QjFdE9zcGJIrLYMyLNIeBXZMMXVQP+o1uXJVMxSd0G3h4DFysZtzW/GbrG+Hg/8HW8Awyrn3xA4r5gW+lOjJyC638e8Lk1Knq28lC7nG7HtIf4wxESWdmcsXqoxzNODvSEO5STQRBr/LOGxyoDS4C88yIbxrcGubcSLdCQKGkAb6hCPMZ7KDa94J6FcyI9LjI3QegdbQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; 
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=hvkgAIJsk+ABmn5vn5Yco54M0XKKWrn+K1LL18PqNLA=;
 b=MWi+DLqEhAQYJWfFWCVz7DSSpvVeHcMWoZsd/yOfJNpUvizHnZkqX8m4e7xkfwQkHBNwdLiJEyLYJdNxu+GHc4as9lLwtJkd7QtJaSbKenHDJjKG3BhV6f3KX21wPQ56TSKV/42ftbhsfi1mMmgENlO674nCy0QMLi/AlQaI1omRNV3zshu7wywbmEgkfqql5VI7PcgH8z6QudTbIdHdXxIZ431mC6tGwvQ3bf9RA3pAtoaKGxJjbj4HggoNraLH9843ETRp2J1EllA4t1qzvkEn6FTKJRYSeXxHcqRogeH10bjWnHPPnKNV9zKOa5ItvkOdhXWnrBO7S8F3GbdGxQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com; 
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=hvkgAIJsk+ABmn5vn5Yco54M0XKKWrn+K1LL18PqNLA=;
 b=32X3cfC7ySWK5c5HIWIt/Mdcwq0BoollSzcjnrDId/2sZYOEwIJBZP8DDgS1XqdXdj2J7ABJEDRnBqNb1/rmTWfxPw4csYfVlQlqSkgJSSXqQ6DP9OPsb3wrdpLCZSicETaPZmo5NRpeYXaS8x6OPDuha2+5R+rnyac6fIA8tcI=
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DB8PR08MB4138.eurprd08.prod.outlook.com (2603:10a6:10:a4::22) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3174.21; Fri, 17 Jul
 2020 13:22:19 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::4001:43ad:d113:46a8]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::4001:43ad:d113:46a8%5]) with mapi id 15.20.3174.028; Fri, 17 Jul 2020
 13:22:19 +0000
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: =?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>
Subject: Re: RFC: PCI devices passthrough on Arm design proposal
Thread-Topic: RFC: PCI devices passthrough on Arm design proposal
Thread-Index: AQHWW4kYTVU0hTDyYEitKlUuU5vZlKkKf2uAgAACLICAAR7YAIAAIxaA
Date: Fri, 17 Jul 2020 13:22:19 +0000
Message-ID: <3B8A1B9D-A101-4937-AC42-4F62BE7E677C@arm.com>
References: <3F6E40FB-79C5-4AE8-81CA-E16CA37BB298@arm.com>
 <BD475825-10F6-4538-8294-931E370A602C@arm.com>
 <E9CBAA57-5EF3-47F9-8A40-F5D7816DB2A4@arm.com>
 <20200717111644.GS7191@Air-de-Roger>
In-Reply-To: <20200717111644.GS7191@Air-de-Roger>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
Authentication-Results-Original: citrix.com; dkim=none (message not signed)
 header.d=none;citrix.com; dmarc=none action=none header.from=arm.com;
x-originating-ip: [2a01:e0a:13:6f10:d5e3:98:5df0:fb15]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 314ab100-b4b9-486f-bcd5-08d82a5472d8
x-ms-traffictypediagnostic: DB8PR08MB4138:|HE1PR0801MB1947:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS: <HE1PR0801MB19473E303CFDA4A999CF63509D7C0@HE1PR0801MB1947.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:8882;OLM:8882;
X-MS-Exchange-SenderADCheck: 1
Content-Type: text/plain; charset="utf-8"
Content-ID: <FC114C99F89CD84AA9025AEFA9DB8A59@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB8PR08MB4138
Original-Authentication-Results: citrix.com; dkim=none (message not signed)
 header.d=none;citrix.com; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped: DB5EUR03FT005.eop-EUR03.prod.protection.outlook.com
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Jul 2020 13:22:26.5271 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 314ab100-b4b9-486f-bcd5-08d82a5472d8
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d; Ip=[63.35.35.123];
 Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource: DB5EUR03FT005.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: HE1PR0801MB1947
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 nd <nd@arm.com>, Rahul Singh <Rahul.Singh@arm.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien.grall.oss@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>



> On 17 Jul 2020, at 13:16, Roger Pau Monné <roger.pau@citrix.com> wrote:
> 
> I've wrapped the email to 80 columns in order to make it easier to
> reply.
> 
> Thanks for doing this, I think the design is good, I have some
> questions below so that I understand the full picture.
> 
> On Thu, Jul 16, 2020 at 05:10:05PM +0000, Rahul Singh wrote:
>> Hello All,
>> 
>> Following up on discussion on PCI Passthrough support on ARM that we
>> had at the XEN summit, we are submitting a Review For Comment and a
>> design proposal for PCI passthrough support on ARM. Feel free to
>> give your feedback.
>> 
>> The followings describe the high-level design proposal of the PCI
>> passthrough support and how the different modules within the system
>> interacts with each other to assign a particular PCI device to the
>> guest.
>> 
>> # Title:
>> 
>> PCI devices passthrough on Arm design proposal
>> 
>> # Problem statement:
>> 
>> On ARM there in no support to assign a PCI device to a guest. PCI
>> device passthrough capability allows guests to have full access to
>> some PCI devices. PCI device passthrough allows PCI devices to
>> appear and behave as if they were physically attached to the guest
>> operating system and provide full isolation of the PCI devices.
>> 
>> Goal of this work is to also support Dom0Less configuration so the
>> PCI backend/frontend drivers used on x86 shall not be used on Arm.
>> It will use the existing VPCI concept from X86 and implement the
>> virtual PCI bus through IO emulation such that only assigned devices
>> are visible to the guest and guest can use the standard PCI
>> driver.
>> 
>> Only Dom0 and Xen will have access to the real PCI bus, guest will
>> have a direct access to the assigned device itself. IOMEM memory
>> will be mapped to the guest and interrupt will be redirected to the
>> guest. SMMU has to be configured correctly to have DMA
>> transaction.
>> 
>> ## Current state: Draft version
>> 
>> # Proposer(s): Rahul Singh, Bertrand Marquis
>> 
>> # Proposal:
>> 
>> This section will describe the different subsystem to support the
>> PCI device passthrough and how these subsystems interact with each
>> other to assign a device to the guest.
>> 
>> # PCI Terminology:
>> 
>> Host Bridge: Host bridge allows the PCI devices to talk to the rest
>> of the computer.  ECAM: ECAM (Enhanced Configuration Access
>> Mechanism) is a mechanism developed to allow PCIe to access
>> configuration space. The space available per function is 4KB.
>> 
>> # Discovering PCI Host Bridge in XEN:
>> 
>> In order to support the PCI passthrough XEN should be aware of all
>> the PCI host bridges available on the system and should be able to
>> access the PCI configuration space. ECAM configuration access is
>> supported as of now. XEN during boot will read the PCI device tree
>> node “reg” property and will map the ECAM space to the XEN memory
>> using the “ioremap_nocache ()” function.
> 
> What about ACPI? I think you should also mention the MMCFG table,
> which should contain the information about the ECAM region(s) (or at
> least that's how it works on x86). Just realized that you don't
> support ACPI ATM, so you can ignore this comment.

Yes for now we did not consider ACPI support.

> 
>> 
>> If there are more than one segment on the system, XEN will read the
>> “linux, pci-domain” property from the device tree node and configure
>> the host bridge segment number accordingly. All the PCI device tree
>> nodes should have the “linux,pci-domain” property so that there will
>> be no conflicts. During hardware domain boot Linux will also use the
>> same “linux,pci-domain” property and assign the domain number to the
>> host bridge.
> 
> So it's my understanding that the PCI domain (or segment) is just an
> abstract concept to differentiate all the Root Complex present on
> the system, but the host bridge itself it's not aware of the segment
> assigned to it in any way.
> 
> I'm not sure Xen and the hardware domain having matching segments is a
> requirement, if you use vPCI you can match the segment (from Xen's
> PoV) by just checking from which ECAM region the access has been
> performed.
> 
> The only reason to require matching segment values between Xen and the
> hardware domain is to allow using hypercalls against the PCI devices,
> ie: to be able to use hypercalls to assign a device to a domain from
> the hardware domain.
> 
> I have 0 understanding of DT or it's spec, but why does this have a
> 'linux,' prefix? The segment number is part of the PCI spec, and not
> something specific to Linux IMO.

This is exact that this is only needed for the hypercall when Dom0 is
doing the full enumeration and communicating the devices to Xen. 
On all other cases this can be deduced from the address of the access. 
Regarding the DT entry, this is not coming from us and this is already
defined this way in existing DTBs, we just reuse the existing entry. 

> 
>> 
>> When Dom0 tries to access the PCI config space of the device, XEN
>> will find the corresponding host bridge based on segment number and
>> access the corresponding config space assigned to that bridge.
>> 
>> Limitation:
>> * Only PCI ECAM configuration space access is supported.
>> * Device tree binding is supported as of now, ACPI is not supported.
>> * Need to port the PCI host bridge access code to XEN to access the
>>  configuration space (generic one works but lots of platforms will
>>  required  some specific code or quirks).
>> 
>> # Discovering PCI devices:
>> 
>> PCI-PCIe enumeration is a process of detecting devices connected to
>> its host. It is the responsibility of the hardware domain or boot
>> firmware to do the PCI enumeration and configure the BAR, PCI
>> capabilities, and MSI/MSI-X configuration.
>> 
>> PCI-PCIe enumeration in XEN is not feasible for the configuration
>> part as it would require a lot of code inside Xen which would
>> require a lot of maintenance. Added to this many platforms require
>> some quirks in that part of the PCI code which would greatly improve
>> Xen complexity. Once hardware domain enumerates the device then it
>> will communicate to XEN via the below hypercall.
>> 
>> #define PHYSDEVOP_pci_device_add        25 struct
>> physdev_pci_device_add {
>>    uint16_t seg;
>>    uint8_t bus;
>>    uint8_t devfn;
>>    uint32_t flags;
>>    struct {
>>        uint8_t bus;
>>        uint8_t devfn;
>>    } physfn;
>>    /*
>>     * Optional parameters array.
>>     * First element ([0]) is PXM domain associated with the device (if
>>     * XEN_PCI_DEV_PXM is set)
>>     */
>>    uint32_t optarr[XEN_FLEX_ARRAY_DIM];
>> };
>> 
>> As the hypercall argument has the PCI segment number, XEN will
>> access the PCI config space based on this segment number and find
>> the host-bridge corresponding to this segment number. At this stage
>> host bridge is fully initialized so there will be no issue to access
>> the config space.
>> 
>> XEN will add the PCI devices in the linked list maintain in XEN
>> using the function pci_add_device(). XEN will be aware of all the
>> PCI devices on the system and all the device will be added to the
>> hardware domain.
>> 
>> Limitations:
>> * When PCI devices are added to XEN, MSI capability is
>>  not initialized inside XEN and not supported as of now.
> 
> I assume you will mask such capability and will prevent the guest (or
> hardware domain) from interacting with it?

No we will actually implement that part but later. This is not supported in
the RFC that we will submit. 

> 
>> * ACS capability is disable for ARM as of now as after enabling it
>>  devices are not accessible.
>> * Dom0Less imp
bGVtZW50YXRpb24gd2lsbCByZXF1aXJlIHRvIGhhdmUgdGhlIGNhcGFjaXR5IGluc2lkZSBYZW4N
Cj4+ICB0byBkaXNjb3ZlciB0aGUgUENJIGRldmljZXMgKHdpdGhvdXQgZGVwZW5kaW5nIG9uIERv
bTAgdG8gZGVjbGFyZSB0aGVtDQo+PiAgdG8gWGVuKS4NCj4gDQo+IEkgYXNzdW1lIHRoZSBmaXJt
d2FyZSB3aWxsIHByb3Blcmx5IGluaXRpYWxpemUgdGhlIGhvc3QgYnJpZGdlIGFuZA0KPiBjb25m
aWd1cmUgdGhlIHJlc291cmNlcyBmb3IgZWFjaCBkZXZpY2UsIHNvIHRoYXQgWGVuIGp1c3QgaGFz
IHRvIHdhbGsNCj4gdGhlIFBDSSBzcGFjZSBhbmQgZmluZCB0aGUgZGV2aWNlcy4NCj4gDQo+IFRC
SCB0aGF0IHdvdWxkIGJlIG15IHByZWZlcnJlZCBtZXRob2QsIGJlY2F1c2UgdGhlbiB5b3UgY2Fu
IGdldCByaWQgb2YNCj4gdGhlIGh5cGVyY2FsbC4NCj4gDQo+IElzIHRoZXJlIGFueXdheSBmb3Ig
WGVuIHRvIGtub3cgd2hldGhlciB0aGUgaG9zdCBicmlkZ2UgaXMgcHJvcGVybHkNCj4gc2V0dXAg
YW5kIHRodXMgdGhlIFBDSSBidXMgY2FuIGJlIHNjYW5uZWQ/DQo+IA0KPiBUaGF0IHdheSBBcm0g
Y291bGQgZG8gc29tZXRoaW5nIHNpbWlsYXIgdG8geDg2LCB3aGVyZSBYZW4gd2lsbCBzY2FuDQo+
IHRoZSBidXMgYW5kIGRpc2NvdmVyIGRldmljZXMsIGJ1dCB5b3UgY291bGQgc3RpbGwgcHJvdmlk
ZSB0aGUNCj4gaHlwZXJjYWxsIGluIGNhc2UgdGhlIGJ1cyBjYW5ub3QgYmUgc2Nhbm5lZCBieSBY
ZW4gKGJlY2F1c2UgaXQgaGFzbid0DQo+IGJlZW4gc2V0dXApLg0KDQpUaGF0IGlzIGRlZmluaXRl
bHkgdGhlIGlkZWEgdG8gcmVseSBieSBkZWZhdWx0IG9uIGEgZmlybXdhcmUgZG9pbmcgdGhpcyBw
cm9wZXJseS4NCkkgYW0gbm90IHN1cmUgd2V0aGVyIGEgcHJvcGVyIGVudW1lcmF0aW9uIGNvdWxk
IGJlIGRldGVjdGVkIHByb3Blcmx5IGluIGFsbA0KY2FzZXMgc28gaXQgd291bGQgbWFrZSBzZW5z
IHRvIHJlbHkgb24gRG9tMCBlbnVtZXJhdGlvbiB3aGVuIGEgWGVuDQpjb21tYW5kIGxpbmUgYXJn
dW1lbnQgaXMgcGFzc2VkIGFzIGV4cGxhaW5lZCBpbiBvbmUgb2YgUmFodWzigJlzIG1haWxzLg0K
DQo+IA0KPj4gDQo+PiAjIEVuYWJsZSB0aGUgZXhpc3RpbmcgeDg2IHZpcnR1YWwgUENJIHN1cHBv
cnQgZm9yIEFSTToNCj4+IA0KPj4gVGhlIGV4aXN0aW5nIFZQQ0kgc3VwcG9ydCBhdmFpbGFibGUg
Zm9yIFg4NiBpcyBhZGFwdGVkIGZvciBBcm0uIFdoZW4NCj4+IHRoZSBkZXZpY2UgaXMgYWRkZWQg
dG8gWEVOIHZpYSB0aGUgaHlwZXIgY2FsbA0KPj4g4oCcUEhZU0RFVk9QX3BjaV9kZXZpY2VfYWRk
4oCdLCBWUENJIGhhbmRsZXIgZm9yIHRoZSBjb25maWcgc3BhY2UgYWNjZXNzDQo+PiBpcyBhZGRl
ZCB0byB0aGUgUENJIGRldmljZSB0byBlbXVsYXRlIHRoZSBQQ0kgZGV2aWNlcy4NCj4+IA0KPj4g
QSBNTUlPIHRyYXAgaGFuZGxlciBmb3IgdGhlIFBDSSBFQ0FNIHNwYWNlIGlzIHJlZ2lzdGVyZWQg
aW4gWEVOIHNvDQo+PiB0aGF0IHdoZW4gZ3Vlc3QgaXMgdHJ5aW5nIHRvIGFjY2VzcyB0aGUgUENJ
IGNvbmZpZyBzcGFjZSwgWEVOIHdpbGwNCj4+IHRyYXAgdGhlIGFjY2VzcyBhbmQgZW11bGF0ZSBy
ZWFkL3dyaXRlIHVzaW5nIHRoZSBWUENJIGFuZCBub3QgdGhlDQo+PiByZWFsIFBDSSBoYXJkd2Fy
ZS4NCj4+IA0KPj4gTGltaXRhdGlvbjoNCj4+ICogTm8gaGFuZGxlciBpcyByZWdpc3RlciBmb3Ig
dGhlIE1TSSBjb25maWd1cmF0aW9uLg0KPiANCj4gQnV0IHlvdSBuZWVkIHRvIG1hc2sgTVNJL01T
SS1YIGNhcGFiaWxpdGllcyBpbiB0aGUgY29uZmlnIHNwYWNlIGluDQo+IG9yZGVyIHRvIHByZXZl
bnQgYWNjZXNzIGZyb20gZG9tYWlucz8gKGFuZCBieSBtYXNrIEkgbWVhbiByZW1vdmUgZnJvbQ0K
PiB0aGUgbGlzdCBvZiBjYXBhYmlsaXRpZXMgYW5kIHByZXZlbnQgcmVhZHMvd3JpdGVzIHRvIHRo
YXQNCj4gY29uZmlndXJhdGlvbiBzcGFjZSkuDQo+IA0KPiBOb3RlIHRoaXMgaXMgYWxyZWFkeSBp
bXBsZW1lbnRlZCBmb3IgeDg2LCBhbmQgSSd2ZSB0cmllZCB0byBhZGQgYXJjaF8NCj4gaG9va3Mg
Zm9yIGFyY2ggc3BlY2lmaWMgc3R1ZmYgc28gdGhhdCBpdCBjb3VsZCBiZSByZXVzZWQgYnkgQXJt
LiBCdXQNCj4gbWF5YmUgdGhpcyB3b3VsZCByZXF1aXJlIGEgZGlmZmVyZW50IGRlc2lnbiBkb2N1
bWVudD8NCg0KYXMgc2FpZCwgd2Ugd2lsbCBoYW5kbGUgTVNJIHN1cHBvcnQgaW4gYSBzZXBhcmF0
ZSBkb2N1bWVudC9zdGVwLg0KDQo+IA0KPj4gKiBPbmx5IGxlZ2FjeSBpbnRlcnJ1cHQgaXMgc3Vw
cG9ydGVkIGFuZCB0ZXN0ZWQgYXMgb2Ygbm93LCBNU0kgaXMgbm90DQo+PiAgaW1wbGVtZW50ZWQg
YW5kIHRlc3RlZC4NCj4+IA0KPj4gIyBBc3NpZ24gdGhlIGRldmljZSB0byB0aGUgZ3Vlc3Q6DQo+
PiANCj4+IEFzc2lnbiB0aGUgUENJIGRldmljZSBmcm9tIHRoZSBoYXJkd2FyZSBkb21haW4gdG8g
dGhlIGd1ZXN0IGlzIGRvbmUNCj4+IHVzaW5nIHRoZSBiZWxvdyBndWVzdCBjb25maWcgb3B0aW9u
LiBXaGVuIHhsIHRvb2wgY3JlYXRlIHRoZSBkb21haW4sDQo+PiBQQ0kgZGV2aWNlcyB3aWxsIGJl
IGFzc2lnbmVkIHRvIHRoZSBndWVzdCBWUENJIGJ1cy4NCj4+IA0KPj4gcGNpPVsgIlBDSV9TUEVD
X1NUUklORyIsICJQQ0lfU1BFQ19TVFJJTkciLCAuLi5dDQo+PiANCj4+IEd1ZXN0IHdpbGwgYmUg
b25seSBhYmxlIHRvIGFjY2VzcyB0aGUgYXNzaWduZWQgZGV2aWNlcyBhbmQgc2VlIHRoZQ0KPj4g
YnJpZGdlcy4gR3Vlc3Qgd2lsbCBub3QgYmUgYWJsZSB0byBhY2Nlc3Mgb3Igc2VlIHRoZSBkZXZp
Y2VzIHRoYXQNCj4+IGFyZSBubyBhc3NpZ25lZCB0byBoaW0uDQo+PiANCj4+IExpbWl0YXRpb246
DQo+PiAqIEFzIG9mIG5vdyBhbGwgdGhlIGJyaWRnZXMgaW4gdGhlIFBDSSBidXMgYXJlIHNlZW4g
YnkNCj4+ICB0aGUgZ3Vlc3Qgb24gdGhlIFZQQ0kgYnVzLg0KPiANCj4gSSBkb24ndCB0aGluayB5
b3UgbmVlZCBhbGwgb2YgdGhlbSwganVzdCB0aGUgb25lcyB0aGF0IGFyZSBoaWdoZXIgdXANCj4g
b24gdGhlIGhpZXJhcmNoeSBvZiB0aGUgZGV2aWNlIHlvdSBhcmUgdHJ5aW5nIHRvIHBhc3N0aHJv
dWdoPw0KPiANCj4gV2hpY2gga2luZCBvZiBhY2Nlc3MgZG8gZ3Vlc3QgaGF2ZSB0byBQQ0kgYnJp
ZGdlcyBjb25maWcgc3BhY2U/DQoNCkZvciBub3cgdGhlIGJyaWRnZXMgYXJlIHJlYWQgb25seSwg
bm8gc3BlY2lmaWMgYWNjZXNzIGlzIHJlcXVpcmVkIGJ5IGd1ZXN0cy4gDQoNCj4gDQo+IFRoaXMg
c2hvdWxkIGJlIGxpbWl0ZWQgdG8gcmVhZC1vbmx5IGFjY2Vzc2VzIGluIG9yZGVyIHRvIGJlIHNh
ZmUuDQo+IA0KPiBFbXVsYXRpbmcgYSBQQ0kgYnJpZGdlIGluIFhlbiB1c2luZyB2UENJIHNob3Vs
ZG4ndCBiZSB0aGF0DQo+IGNvbXBsaWNhdGVkLCBzbyB5b3UgY291bGQgbGlrZWx5IHJlcGxhY2Ug
dGhlIHJlYWwgYnJpZGdlcyB3aXRoDQo+IGVtdWxhdGVkIG9uZXMuIE9yIGV2ZW4gcHJvdmlkZSBh
IGZha2UgdG9wb2xvZ3kgdG8gdGhlIGd1ZXN0IHVzaW5nIGFuDQo+IGVtdWxhdGVkIGJyaWRnZS4N
Cg0KSnVzdCBzaG93aW5nIGFsbCBicmlkZ2VzIGFuZCBrZWVwaW5nIHRoZSBoYXJkd2FyZSB0b3Bv
bG9neSBpcyB0aGUgc2ltcGxlc3QNCnNvbHV0aW9uIGZvciBub3cuIEJ1dCBtYXliZSBzaG93aW5n
IGEgZGlmZmVyZW50IHRvcG9sb2d5IGFuZCBvbmx5IGZha2UNCmJyaWRnZXMgY291bGQgbWFrZSBz
ZW5zZSBhbmQgYmUgaW1wbGVtZW50ZWQgaW4gdGhlIGZ1dHVyZS4NCg0KPiANCj4+IA0KPj4gIyBF
bXVsYXRlZCBQQ0kgZGV2aWNlIHRyZWUgbm9kZSBpbiBsaWJ4bDoNCj4+IA0KPj4gTGlieGwgaXMg
Y3JlYXRpbmcgYSB2aXJ0dWFsIFBDSSBkZXZpY2UgdHJlZSBub2RlIGluIHRoZSBkZXZpY2UgdHJl
ZQ0KPj4gdG8gZW5hYmxlIHRoZSBndWVzdCBPUyB0byBkaXNjb3ZlciB0aGUgdmlydHVhbCBQQ0kg
ZHVyaW5nIGd1ZXN0DQo+PiBib290LiBXZSBpbnRyb2R1Y2VkIHRoZSBuZXcgY29uZmlnIG9wdGlv
biBbdnBjaT0icGNpX2VjYW0iXSBmb3INCj4+IGd1ZXN0cy4gV2hlbiB0aGlzIGNvbmZpZyBvcHRp
b24gaXMgZW5hYmxlZCBpbiBhIGd1ZXN0IGNvbmZpZ3VyYXRpb24sDQo+PiBhIFBDSSBkZXZpY2Ug
dHJlZSBub2RlIHdpbGwgYmUgY3JlYXRlZCBpbiB0aGUgZ3Vlc3QgZGV2aWNlIHRyZWUuDQo+PiAN
Cj4+IEEgbmV3IGFyZWEgaGFzIGJlZW4gcmVzZXJ2ZWQgaW4gdGhlIGFybSBndWVzdCBwaHlzaWNh
bCBtYXAgYXQgd2hpY2gNCj4+IHRoZSBWUENJIGJ1cyBpcyBkZWNsYXJlZCBpbiB0aGUgZGV2aWNl
IHRyZWUgKHJlZyBhbmQgcmFuZ2VzDQo+PiBwYXJhbWV0ZXJzIG9mIHRoZSBub2RlKS4gQSB0cmFw
IGhhbmRsZXIgZm9yIHRoZSBQQ0kgRUNBTSBhY2Nlc3MgZnJvbQ0KPj4gZ3Vlc3QgaGFzIGJlZW4g
cmVnaXN0ZXJlZCBhdCB0aGUgZGVmaW5lZCBhZGRyZXNzIGFuZCByZWRpcmVjdHMNCj4+IHJlcXVl
c3RzIHRvIHRoZSBWUENJIGRyaXZlciBpbiBYZW4uDQo+IA0KPiBDYW4ndCB5b3UgZGVkdWNlIHRo
ZSByZXF1aXJlbWVudCBvZiBzdWNoIERUIG5vZGUgYmFzZWQgb24gdGhlIHByZXNlbmNlDQo+IG9m
IGEgJ3BjaT0nIG9wdGlvbiBpbiB0aGUgc2FtZSBjb25maWcgZmlsZT8NCj4gDQo+IEFsc28gSSB3
b3VsZG4ndCBkaXNjYXJkIHRoYXQgaW4gdGhlIGZ1dHVyZSB5b3UgbWlnaHQgd2FudCB0byB1c2UN
Cj4gZGlmZmVyZW50IGVtdWxhdG9ycyBmb3IgZGlmZmVyZW50IGRldmljZXMsIHNvIGl0IG1pZ2h0
IGJlIGhlbHBmdWwgdG8NCj4gaW50cm9kdWNlIHNvbWV0aGluZyBsaWtlOg0KPiANCj4gcGNpID0g
WyAnMDg6MDAuMCxiYWNrZW5kPXZwY2knLCAnMDk6MDAuMCxiYWNrZW5kPXhlbnB0JywgJzBhOjAw
LjAsYmFja2VuZD1xZW11JywgLi4uIF0NCj4gDQo+IEZvciB0aGUgdGltZSBiZWluZyBBcm0gd2ls
bCByZXF1aXJlIGJhY2tlbmQ9dnBjaSBmb3IgYWxsIHRoZSBwYXNzZWQNCj4gdGhyb3VnaCBkZXZp
Y2VzLCBidXQgSSB3b3VsZG4ndCBydWxlIG91dCB0aGlzIGNoYW5naW5nIGluIHRoZSBmdXR1cmUu
DQoNCldlIG5lZWQgaXQgZm9yIHRoZSBjYXNlIHdoZXJlIG5vIGRldmljZSBpcyBkZWNsYXJlZCBp
biB0aGUgY29uZmlnIGZpbGUgYW5kIHRoZSB1c2VyDQp3YW50cyB0byBhZGQgZGV2aWNlcyB1c2lu
ZyB4bCBsYXRlci4gSW4gdGhpcyBjYXNlIHdlIG11c3QgaGF2ZSB0aGUgRFQgbm9kZSBmb3IgaXQN
CnRvIHdvcmsuIA0KDQpSZWdhcmRpbmcgcG9zc2libGVzIGJhY2tlbmQgdGhpcyBjb3VsZCBiZSBh
ZGRlZCBpbiB0aGUgZnV0dXJlIGlmIHJlcXVpcmVkLiANCg0KPiANCj4+IExpbWl0YXRpb246DQo+
PiAqIE9ubHkgb25lIFBDSSBkZXZpY2UgdHJlZSBub2RlIGlzIHN1cHBvcnRlZCBhcyBvZiBub3cu
DQo+PiANCj4+IEJBUiB2YWx1ZSBhbmQgSU9NRU0gbWFwcGluZzoNCj4+IA0KPj4gTGludXggZ3Vl
c3Qgd2lsbCBkbyB0aGUgUENJIGVudW1lcmF0aW9uIGJhc2VkIG9uIHRoZSBhcmVhIHJlc2VydmVk
DQo+PiBmb3IgRUNBTSBhbmQgSU9NRU0gcmFuZ2VzIGluIHRoZSBWUENJIGRldmljZSB0cmVlIG5v
ZGUuIE9uY2UgUENJDQo+PiBkZXZpY2UgaXMgYXNzaWduZWQgdG8gdGhlIGd1ZXN0LCBYRU4gd2ls
bCBtYXAgdGhlIGd1ZXN0IFBDSSBJT01FTQ0KPj4gcmVnaW9uIHRvIHRoZSByZWFsIHBoeXNpY2Fs
IElPTUVNIHJlZ2lvbiBvbmx5IGZvciB0aGUgYXNzaWduZWQNCj4+IGRldmljZXMuDQo+IA0KPiBQ
Q0kgSU9NRU0gPT0gQkFScz8gT3IgYXJlIHlvdSByZWZlcnJpbmcgdG8gdGhlIEVDQU0gYWNjZXNz
IHdpbmRvdz8NCg0KSGVyZSBieSBQQ0kgSU9NRU0gd2UgbWVhbiB0aGUgSU9NRU0gc3BhY2VzIHJl
ZmVycmVkIHRvIGJ5IHRoZSBCQVJzDQpvZiB0aGUgUENJIGRldmljZQ0KDQo+IA0KPj4gQXMgb2Yg
bm93IHdlIGhhdmUgbm90IG1vZGlmaWVkIHRoZSBleGlzdGluZyBWUENJIGNvZGUgdG8gbWFwIHRo
ZQ0KPj4gZ3Vlc3QgUENJIElPTUVNIHJlZ2lvbiB0byB0aGUgcmVhbCBwaHlzaWNhbCBJT01FTSBy
ZWdpb24uIFdlIHVzZWQNCj4+IHRoZSBleGlzdGluZyBndWVzdCDigJxpb21lbeKAnSBjb25maWcg
b3B0aW9uIHRvIG1hcCB0aGUgcmVnaW9uLiAgRm9yDQo+PiBleGFtcGxlOiBHdWVzdCByZXNlcnZl
ZCBJT01FTSByZWdpb246ICAweDA0MDIwMDAwIFJlYWwgcGh5c2ljYWwNCj4+IElPTUVNIHJlZ2lv
bjoweDUwMDAwMDAwIElPTUVNIHNpemU6MTI4TUIgaW9tZW0gY29uZmlnIHdpbGwgYmU6DQo+PiBp
b21lbSA9IFsiMHg1MDAwMCwweDgwMDBAMHg0MDIwIl0NCj4+IA0KPj4gVGhlcmUgaXMgbm8gbmVl
ZCB0byBtYXAgdGhlIEVDQU0gc3BhY2UgYXMgWEVOIGFscmVhZHkgaGF2ZSBhY2Nlc3MgdG8NCj4+
IHRoZSBFQ0FNIHNwYWNlIGFuZCBYRU4gd2lsbCB0cmFwIEVDQU0gYWNjZXNzZXMgZnJvbSB0aGUg
Z3Vlc3QgYW5kDQo+PiB3aWxsIHBlcmZvcm0gcmVhZC93cml0ZSBvbiB0aGUgVlBDSSBidXMuDQo+
PiANCj4+IElPTUVNIGFjY2VzcyB3aWxsIG5vdCBiZSB0cmFwcGVkIGFuZCB0aGUgZ3Vlc3Qgd2ls
bCBkaXJlY3RseSBhY2Nlc3MNCj4+IHRoZSBJT01FTSByZWdpb24gb2YgdGhlIGFzc2lnbmVkIGRl
dmljZSB2aWEgc3RhZ2UtMiB0cmFuc2xhdGlvbi4NCj4+IA0KPj4gSW4gdGhlIHNhbWUsIHdlIG1h
cHBlZCB0aGUgYXNzaWduZWQgZGV2aWNlcyBJUlEgdG8gdGhlIGd1ZXN0IHVzaW5nDQo+PiBiZWxv
dyBjb25maWcgb3B0aW9ucy4gIGlycXM9IFsgTlVNQkVSLCBOVU1CRVIsIC4uLl0NCj4gDQo+IEFy
ZSB5b3UgcHJvdmlkaW5nIHRoaXMgZm9yIHRoZSBoYXJkd2FyZSBkb21haW4gYWxzbz8gT3IgYXJl
IGlycXMNCj4gZmV0Y2hlZCBmcm9tIHRoZSBEVCBpbiB0aGF0IGNhc2U/DQoNClRoaXMgd2lsbCBv
bmx5IGJlIHVzZWQgdGVtcG9yYXJpbHkgdW50aWwgd2UgaGF2ZSBwcm9wZXIgc3VwcG9ydCB0byBk
byB0aGlzDQphdXRvbWF0aWNhbGx5IHdoZW4gYSBkZXZpY2UgaXMgYXNzaWduZWQuIFJpZ2h0IG5v
dyBvdXIgY3VycmVudCBpbXBsZW1lbnRhdGlvbg0Kc3RhdHVzIHJlcXVpcmVzIHRoZSB1c2VyIHRv
IGV4cGxpY2l0ZWx5IHJlZGlyZWN0IHRoZSBpbnRlcnJ1cHRzIHJlcXVpcmVkIGJ5IHRoZSBQQ0kN
CmRldmljZXMgYXNzaWduZWQgYnV0IGluIHRoZSBmaW5hbCB2ZXJzaW9uIHRoaXMgZW50cnkgd2ls
bCBub3QgYmUgbmVlZGVkLg0KDQpEb20wIHJlbGllcyBvbiB0aGUgZW50cmllcyBkZWNsYXJlZCBp
biB0aGUgRFQuDQoNCj4gDQo+PiBMaW1pdGF0aW9uOg0KPj4gKiBOZWVkIHRvIGF2b2lkIHRoZSDi
gJxpb21lbeKAnSBhbmQg4oCcaXJx4oCdIGd1ZXN0IGNvbmZpZw0KPj4gIG9wdGlvbnMgYW5kIG1h
cCB0aGUgSU9NRU0gcmVnaW9uIGFuZCBJUlEgYXQgdGhlIHNhbWUgdGltZSB3aGVuDQo+PiAgZGV2
aWNlIGlzIGFzc2lnbmVkIHRvIHRoZSBndWVzdCB1c2luZyB0aGUg4oCccGNp4oCdIGd1ZXN0IGNv
bmZpZyBvcHRpb25zDQo+PiAgd2hlbiB4bCBjcmVhdGVzIHRoZSBkb21haW4uDQo+PiAqIEVtdWxh
dGVkIEJBUiB2YWx1ZXMgb24gdGhlIFZQQ0kgYnVzIHNob3VsZCByZWZsZWN0IHRoZSBJT01FTSBt
YXBwZWQNCj4+ICBhZGRyZXNzLg0KPiANCj4gSXQgd2FzIG15IHVuZGVyc3RhbmRpbmcgdGhhdCB5
b3Ugd291bGQgaWRlbnRpdHkgbWFwIHRoZSBCQVIgaW50byB0aGUNCj4gZG9tVSBzdGFnZS0yIHRy
YW5zbGF0aW9uLCBhbmQgdGhhdCBjaGFuZ2VzIGJ5IHRoZSBndWVzdCB3b24ndCBiZQ0KPiBhbGxv
d2VkLg0KDQpJbiBmYWN0IHRoaXMgaXMgbm90IHBvc3NpYmxlIHRvIGRvIGFuZCB3ZSBoYXZlIHRv
IHJlbWFwIGF0IGEgZGlmZmVyZW50IGFkZHJlc3MNCmJlY2F1c2UgdGhlIGd1ZXN0IHBoeXNpY2Fs
IG1hcHBpbmcgaXMgZml4ZWQgYnkgWGVuIG9uIEFybSBzbyB3ZSBtdXN0IGZvbGxvdw0KdGhlIHNh
bWUgZGVzaWduIG90aGVyd2lzZSB0aGlzIHdvdWxkIG9ubHkgd29yayBpZiB0aGUgQkFScyBhcmUg
cG9pbnRpbmcgdG8gYW4NCmFkZHJlc3MgdW51c2VkIGFuZCBvbiBKdW5vIHRoaXMgaXMgZm9yIGV4
YW1wbGUgY29uZmxpY3Rpbmcgd2l0aCB0aGUgZ3Vlc3QNClJBTSBhZGRyZXNzLg0KDQo+IA0KPj4g
KiBYODYgbWFwcGluZyBjb2RlIHNob3VsZCBiZSBwb3J0ZWQgb24gQXJtIHNvIHRoYXQgdGhlIHN0
YWdlLTINCj4+ICB0cmFuc2xhdGlvbiBpcyBhZGFwdGVkIHdoZW4gdGhlIGd1ZXN0IGlzIGRvaW5n
IGEgbW9kaWZpY2F0aW9uIG9mIHRoZQ0KPj4gIEJBUiByZWdpc3RlcnMgdmFsdWVzICh0byBtYXAg
dGhlIGFkZHJlc3MgcmVxdWVzdGVkIGJ5IHRoZSBndWVzdCBmb3INCj4+ICBhIHNwZWNpZmljIElP
TUVNIHRvIHRoZSBhZGRyZXNzIGFjdHVhbGx5IGNvbnRhaW5lZCBpbiB0aGUgcmVhbCBCQVINCj4+
ICByZWdpc3RlciBvZiB0aGUgY29ycmVzcG9uZGluZyBkZXZpY2UpLg0KPiANCj4gSSB0aGluayB0
aGUgYWJvdmUgbWVhbnMgdGhhdCB5b3Ugd2FudCB0byBhbGxvdyB0aGUgZ3Vlc3QgdG8gY2hhbmdl
IHRoZQ0KPiBwb3NpdGlvbiBvZiB0aGUgQkFSIGluIHRoZSBzdGFnZS0yIHRyYW5zbGF0aW9uIF93
aXRob3V0XyBhbGxvd2luZyBpdA0KPiB0byBjaGFuZ2UgdGhlIHBvc2l0aW9uIG9mIHRoZSBCQVIg
aW4gdGhlIHBoeXNpY2FsIG1lbW9yeSBtYXAsIGlzIHRoYXQNCj4gY29ycmVjdD8NCg0KeWVzIHRo
aXMgaXMgY29ycmVjdC4gVGhpcyBpcyBub3QgdmVyeSBjb21wbGV4IGFuZCBtYWtlIGl0IGVhc2ll
ciB0byB1c2UNCnVubW9kaWZpZWQgZ3Vlc3RzIGFzIFZQQ0kgd291bGQgYmVoYXZlIGFzIGFuIGhh
cmR3YXJlIFBDSS4NCg0KQmVydHJhbmQNCg0KPiANCj4gVGhhbmtzLCBSb2dlci4NCg0K


From xen-devel-bounces@lists.xenproject.org Fri Jul 17 13:28:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jul 2020 13:28:32 +0000
From: Rahul Singh <Rahul.Singh@arm.com>
To: Oleksandr Andrushchenko <andr2000@gmail.com>
Subject: Re: RFC: PCI devices passthrough on Arm design proposal
Thread-Topic: RFC: PCI devices passthrough on Arm design proposal
Thread-Index: AQHWW4kYTVU0hTDyYEitKlUuU5vZlKkKf2uAgAACLICAAOrEgIAACn+AgABOT4A=
Date: Fri, 17 Jul 2020 13:28:12 +0000
Message-ID: <B16BB290-7BE6-49C0-8705-9F21257E6B6B@arm.com>
References: <3F6E40FB-79C5-4AE8-81CA-E16CA37BB298@arm.com>
 <BD475825-10F6-4538-8294-931E370A602C@arm.com>
 <E9CBAA57-5EF3-47F9-8A40-F5D7816DB2A4@arm.com>
 <a50c714c-1642-0354-3f19-5a6f7278d8aa@suse.com>
 <6b4cadff-ffdc-848a-2b57-be55f61f5bc7@gmail.com>
In-Reply-To: <6b4cadff-ffdc-848a-2b57-be55f61f5bc7@gmail.com>
Accept-Language: en-US
Content-Language: en-US
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien.grall.oss@gmail.com>, Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 nd <nd@arm.com>, Roger Pau Monné <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

DQoNCj4gT24gMTcgSnVsIDIwMjAsIGF0IDk6NDcgYW0sIE9sZWtzYW5kciBBbmRydXNoY2hlbmtv
IDxhbmRyMjAwMEBnbWFpbC5jb20+IHdyb3RlOg0KPiANCj4gDQo+IE9uIDcvMTcvMjAgMTE6MTAg
QU0sIEphbiBCZXVsaWNoIHdyb3RlOg0KPj4gT24gMTYuMDcuMjAyMCAxOToxMCwgUmFodWwgU2lu
Z2ggd3JvdGU6DQo+Pj4gIyBEaXNjb3ZlcmluZyBQQ0kgZGV2aWNlczoNCj4+PiANCj4+PiBQQ0kt
UENJZSBlbnVtZXJhdGlvbiBpcyBhIHByb2Nlc3Mgb2YgZGV0ZWN0aW5nIGRldmljZXMgY29ubmVj
dGVkIHRvIGl0cyBob3N0LiBJdCBpcyB0aGUgcmVzcG9uc2liaWxpdHkgb2YgdGhlIGhhcmR3YXJl
IGRvbWFpbiBvciBib290IGZpcm13YXJlIHRvIGRvIHRoZSBQQ0kgZW51bWVyYXRpb24gYW5kIGNv
bmZpZ3VyZSB0aGUgQkFSLCBQQ0kgY2FwYWJpbGl0aWVzLCBhbmQgTVNJL01TSS1YIGNvbmZpZ3Vy
YXRpb24uDQo+Pj4gDQo+Pj4gUENJLVBDSWUgZW51bWVyYXRpb24gaW4gWEVOIGlzIG5vdCBmZWFz
aWJsZSBmb3IgdGhlIGNvbmZpZ3VyYXRpb24gcGFydCBhcyBpdCB3b3VsZCByZXF1aXJlIGEgbG90
IG9mIGNvZGUgaW5zaWRlIFhlbiB3aGljaCB3b3VsZCByZXF1aXJlIGEgbG90IG9mIG1haW50ZW5h
bmNlLiBBZGRlZCB0byB0aGlzIG1hbnkgcGxhdGZvcm1zIHJlcXVpcmUgc29tZSBxdWlya3MgaW4g
dGhhdCBwYXJ0IG9mIHRoZSBQQ0kgY29kZSB3aGljaCB3b3VsZCBncmVhdGx5IGltcHJvdmUgWGVu
IGNvbXBsZXhpdHkuIE9uY2UgaGFyZHdhcmUgZG9tYWluIGVudW1lcmF0ZXMgdGhlIGRldmljZSB0
aGVuIGl0IHdpbGwgY29tbXVuaWNhdGUgdG8gWEVOIHZpYSB0aGUgYmVsb3cgaHlwZXJjYWxsLg0K
Pj4+IA0KPj4+ICNkZWZpbmUgUEhZU0RFVk9QX3BjaV9kZXZpY2VfYWRkICAgICAgICAyNQ0KPj4+
IHN0cnVjdCBwaHlzZGV2X3BjaV9kZXZpY2VfYWRkIHsNCj4+PiAgICAgdWludDE2X3Qgc2VnOw0K
Pj4+ICAgICB1aW50OF90IGJ1czsNCj4+PiAgICAgdWludDhfdCBkZXZmbjsNCj4+PiAgICAgdWlu
dDMyX3QgZmxhZ3M7DQo+Pj4gICAgIHN0cnVjdCB7DQo+Pj4gICAgIAl1aW50OF90IGJ1czsNCj4+
PiAgICAgCXVpbnQ4X3QgZGV2Zm47DQo+Pj4gICAgIH0gcGh5c2ZuOw0KPj4+ICAgICAvKg0KPj4+
ICAgICAqIE9wdGlvbmFsIHBhcmFtZXRlcnMgYXJyYXkuDQo+Pj4gICAgICogRmlyc3QgZWxlbWVu
dCAoWzBdKSBpcyBQWE0gZG9tYWluIGFzc29jaWF0ZWQgd2l0aCB0aGUgZGV2aWNlIChpZiAqIFhF
Tl9QQ0lfREVWX1BYTSBpcyBzZXQpDQo+Pj4gICAgICovDQo+Pj4gICAgIHVpbnQzMl90IG9wdGFy
cltYRU5fRkxFWF9BUlJBWV9ESU1dOw0KPj4+ICAgICB9Ow0KPj4+IA0KPj4+IEFzIHRoZSBoeXBl
cmNhbGwgYXJndW1lbnQgaGFzIHRoZSBQQ0kgc2VnbWVudCBudW1iZXIsIFhFTiB3aWxsIGFjY2Vz
cyB0aGUgUENJIGNvbmZpZyBzcGFjZSBiYXNlZCBvbiB0aGlzIHNlZ21lbnQgbnVtYmVyIGFuZCBm
aW5kIHRoZSBob3N0LWJyaWRnZSBjb3JyZXNwb25kaW5nIHRvIHRoaXMgc2VnbWVudCBudW1iZXIu
IEF0IHRoaXMgc3RhZ2UgaG9zdCBicmlkZ2UgaXMgZnVsbHkgaW5pdGlhbGl6ZWQgc28gdGhlcmUg
d2lsbCBiZSBubyBpc3N1ZSB0byBhY2Nlc3MgdGhlIGNvbmZpZyBzcGFjZS4NCj4+PiANCj4+PiBY
RU4gd2lsbCBhZGQgdGhlIFBDSSBkZXZpY2VzIGluIHRoZSBsaW5rZWQgbGlzdCBtYWludGFpbiBp
biBYRU4gdXNpbmcgdGhlIGZ1bmN0aW9uIHBjaV9hZGRfZGV2aWNlKCkuIFhFTiB3aWxsIGJlIGF3
YXJlIG9mIGFsbCB0aGUgUENJIGRldmljZXMgb24gdGhlIHN5c3RlbSBhbmQgYWxsIHRoZSBkZXZp
Y2Ugd2lsbCBiZSBhZGRlZCB0byB0aGUgaGFyZHdhcmUgZG9tYWluLg0KPj4gSGF2ZSB5b3UgaGFk
IGFueSB0aG91Z2h0cyBhYm91dCBEb20wIHJlLWFycmFuZ2luZyB0aGUgYnVzIG51bWJlcmluZz8N
Cj4+IFRoaXMgaXMsIGFmYWljdCwgYSBzdGlsbCBvcGVuIGlzc3VlIG9uIHg4NiBhcyB3ZWxsLg0K
PiANCj4gVGhpcyBjYW4gZ2V0IGV2ZW4gdHJpY2tpZXIgYXMgd2UgbWF5IGhhdmUgUENJIGVudW1l
cmF0ZWQgYXQgYm9vdCB0aW1lDQo+IA0KPiBieSB0aGUgZmlybXdhcmUgYW5kIHRoZW4gRG9tMCBt
YXkgcGVyZm9ybSB0aGUgZW51bWVyYXRpb24gZGlmZmVyZW50bHkuDQo+IA0KPiBTbywgWGVuIG5l
ZWRzIHRvIGJlIGF3YXJlIG9mIHdoYXQgaXMgZ29pbmcgdG8gYmUgdXNlZCBhcyB0aGUgc291cmNl
IG9mIHRoZQ0KPiANCj4gZW51bWVyYXRpb24gZGF0YSBhbmQgYmUgcmVhZHkgdG8gcmUtYnVpbGQg
aXRzIGludGVybmFsIHN0cnVjdHVyZXMgaW4gb3JkZXINCj4gDQo+IHRvIGJlIGFsaWduZWQgd2l0
aCB0aGF0IGVudGl0eTogZS5nLiBjb21wYXJlIERvbTAgYW5kIERvbTBsZXNzIHVzZS1jYXNlcw0K
PiANCg0KVGhlIGlkZWEgaXMgdGhhdCBhcyBzb29uIGFzIFhlbiBoYXMgZG9uZSBoaXMgZW51bWVy
YXRpb24gKGl0IGJlaW5nIG9uIGJvb3Qgb3IgYWZ0ZXIgRG9tMCBzaWduYWwpLCBubyBkb21haW4g
d2lsbCBiZSBhYmxlIHRvIG1vZGlmeSB0aGUgcGh5c2ljYWwgUENJIGJ1cyBhbnltb3JlLiANCi0g
UmFodWwNCj4+IA0KPj4+IExpbWl0YXRpb25zOg0KPj4+ICogV2hlbiBQQ0kgZGV2aWNlcyBhcmUg
YWRkZWQgdG8gWEVOLCBNU0kgY2FwYWJpbGl0eSBpcyBub3QgaW5pdGlhbGl6ZWQgaW5zaWRlIFhF
TiBhbmQgbm90IHN1cHBvcnRlZCBhcyBvZiBub3cuDQo+PiBJIHRoaW5rIHRoaXMgaXMgYSBwcmV0
dHkgc2V2ZXJlIGxpbWl0YXRpb24sIGFzIG1vZGVybiBkZXZpY2VzIHRlbmQgdG8NCj4+IG5vdCBz
dXBwb3J0IHBpbiBiYXNlZCBpbnRlcnJ1cHRzIGFueW1vcmUuDQo+PiANCj4+PiAjIEVtdWxhdGVk
IFBDSSBkZXZpY2UgdHJlZSBub2RlIGluIGxpYnhsOg0KPj4+IA0KPj4+IExpYnhsIGlzIGNyZWF0
aW5nIGEgdmlydHVhbCBQQ0kgZGV2aWNlIHRyZWUgbm9kZSBpbiB0aGUgZGV2aWNlIHRyZWUgdG8g
ZW5hYmxlIHRoZSBndWVzdCBPUyB0byBkaXNjb3ZlciB0aGUgdmlydHVhbCBQQ0kgZHVyaW5nIGd1
ZXN0IGJvb3QuIFdlIGludHJvZHVjZWQgdGhlIG5ldyBjb25maWcgb3B0aW9uIFt2cGNpPSJwY2lf
ZWNhbSJdIGZvciBndWVzdHMuIFdoZW4gdGhpcyBjb25maWcgb3B0aW9uIGlzIGVuYWJsZWQgaW4g
YSBndWVzdCBjb25maWd1cmF0aW9uLCBhIFBDSSBkZXZpY2UgdHJlZSBub2RlIHdpbGwgYmUgY3Jl
YXRlZCBpbiB0aGUgZ3Vlc3QgZGV2aWNlIHRyZWUuDQo+PiBJIHN1cHBvcnQgU3RlZmFubydzIHN1
Z2dlc3Rpb24gZm9yIHRoaXMgdG8gYmUgYW4gb3B0aW9uYWwgdGhpbmcsIGkuZS4NCj4+IHRoZXJl
 to be no need for it when there are PCI devices assigned to the
>> guest anyway. I also wonder about the pci_ prefix here - isn't
>> vpci="ecam" as unambiguous?
>> 
>>> A new area has been reserved in the arm guest physical map at which the VPCI bus is declared in the device tree (reg and ranges parameters of the node). A trap handler for the PCI ECAM access from guest has been registered at the defined address and redirects requests to the VPCI driver in Xen.
>>> 
>>> Limitation:
>>> * Only one PCI device tree node is supported as of now.
>>> 
>>> BAR value and IOMEM mapping:
>>> 
>>> Linux guest will do the PCI enumeration based on the area reserved for ECAM and IOMEM ranges in the VPCI device tree node. Once a PCI device is assigned to the guest, XEN will map the guest PCI IOMEM region to the real physical IOMEM region only for the assigned devices.
>>> 
>>> As of now we have not modified the existing VPCI code to map the guest PCI IOMEM region to the real physical IOMEM region. We used the existing guest "iomem" config option to map the region.
>>> For example:
>>> 	Guest reserved IOMEM region:  0x04020000
>>> 	Real physical IOMEM region:   0x50000000
>>> 	IOMEM size:                   128MB
>>> 	iomem config will be:         iomem = ["0x50000,0x8000@0x4020"]
>> This surely is planned to go away before the code hits upstream? The
>> ranges really should be read out of the BARs, as I see the
>> "limitations" section further down suggests, but it's not clear
>> whether "limitations" are items that you plan to take care of before
>> submitting your code for review.
>> 
>> Jan


From xen-devel-bounces@lists.xenproject.org Fri Jul 17 13:29:07 2020
Subject: Re: RFC: PCI devices passthrough on Arm design proposal
To: Bertrand Marquis <Bertrand.Marquis@arm.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <3F6E40FB-79C5-4AE8-81CA-E16CA37BB298@arm.com>
 <BD475825-10F6-4538-8294-931E370A602C@arm.com>
 <E9CBAA57-5EF3-47F9-8A40-F5D7816DB2A4@arm.com>
 <20200717111644.GS7191@Air-de-Roger>
 <3B8A1B9D-A101-4937-AC42-4F62BE7E677C@arm.com>
From: Julien Grall <julien@xen.org>
Message-ID: <339df023-7844-b998-81bd-8c00baad3b04@xen.org>
Date: Fri, 17 Jul 2020 14:29:01 +0100
In-Reply-To: <3B8A1B9D-A101-4937-AC42-4F62BE7E677C@arm.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 nd <nd@arm.com>, Rahul Singh <Rahul.Singh@arm.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien.grall.oss@gmail.com>



On 17/07/2020 14:22, Bertrand Marquis wrote:
>>> # Emulated PCI device tree node in libxl:
>>>
>>> Libxl is creating a virtual PCI device tree node in the device tree
>>> to enable the guest OS to discover the virtual PCI during guest
>>> boot. We introduced the new config option [vpci="pci_ecam"] for
>>> guests. When this config option is enabled in a guest configuration,
>>> a PCI device tree node will be created in the guest device tree.
>>>
>>> A new area has been reserved in the arm guest physical map at which
>>> the VPCI bus is declared in the device tree (reg and ranges
>>> parameters of the node). A trap handler for the PCI ECAM access from
>>> guest has been registered at the defined address and redirects
>>> requests to the VPCI driver in Xen.
>>
>> Can't you deduce the requirement of such DT node based on the presence
>> of a 'pci=' option in the same config file?
>>
>> Also I wouldn't discard that in the future you might want to use
>> different emulators for different devices, so it might be helpful to
>> introduce something like:
>>
>> pci = [ '08:00.0,backend=vpci', '09:00.0,backend=xenpt', '0a:00.0,backend=qemu', ... ]

I like this idea :).
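[Editorial note: the per-device backend syntax quoted above could be parsed along these lines. A sketch only; the backend names come from Roger's example, the helper and the default are mine:]

```python
def parse_pci_entry(entry: str):
    """Parse an xl pci entry like '08:00.0,backend=vpci' into (bdf, options)."""
    bdf, *opts = entry.split(",")
    # Assumed default: Arm would use vpci when no backend is given,
    # per the discussion above.
    options = {"backend": "vpci"}
    for opt in opts:
        key, _, value = opt.partition("=")
        options[key.strip()] = value.strip()
    return bdf.strip(), options


pci = ["08:00.0,backend=vpci", "09:00.0,backend=xenpt", "0a:00.0,backend=qemu"]
for bdf, options in map(parse_pci_entry, pci):
    print(bdf, options["backend"])
```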

>>
>> For the time being Arm will require backend=vpci for all the passed
>> through devices, but I wouldn't rule out this changing in the future.
> 
> We need it for the case where no device is declared in the config file and the user
> wants to add devices using xl later. In this case we must have the DT node for it
> to work.

Are you suggesting that you plan to implement PCI hotplug?

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Jul 17 13:44:40 2020
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Julien Grall <julien@xen.org>
Subject: Re: RFC: PCI devices passthrough on Arm design proposal
Date: Fri, 17 Jul 2020 13:44:15 +0000
Message-ID: <F91FCC13-D591-4A57-9840-220614174F02@arm.com>
References: <3F6E40FB-79C5-4AE8-81CA-E16CA37BB298@arm.com>
 <BD475825-10F6-4538-8294-931E370A602C@arm.com>
 <E9CBAA57-5EF3-47F9-8A40-F5D7816DB2A4@arm.com>
 <20200717111644.GS7191@Air-de-Roger>
 <3B8A1B9D-A101-4937-AC42-4F62BE7E677C@arm.com>
 <339df023-7844-b998-81bd-8c00baad3b04@xen.org>
In-Reply-To: <339df023-7844-b998-81bd-8c00baad3b04@xen.org>
Cc: Rahul Singh <Rahul.Singh@arm.com>,
 =?iso-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 nd <nd@arm.com>, Julien Grall <julien.grall.oss@gmail.com>



> On 17 Jul 2020, at 15:29, Julien Grall <julien@xen.org> wrote:
> 
> On 17/07/2020 14:22, Bertrand Marquis wrote:
>>>> # Emulated PCI device tree node in libxl:
>>>>
>>>> Libxl is creating a virtual PCI device tree node in the device tree
>>>> to enable the guest OS to discover the virtual PCI during guest
>>>> boot. We introduced the new config option [vpci="pci_ecam"] for
>>>> guests. When this config option is enabled in a guest configuration,
>>>> a PCI device tree node will be created in the guest device tree.
>>>>
>>>> A new area has been reserved in the arm guest physical map at which
>>>> the VPCI bus is declared in the device tree (reg and ranges
>>>> parameters of the node). A trap handler for the PCI ECAM access from
>>>> guest has been registered at the defined address and redirects
>>>> requests to the VPCI driver in Xen.
>>>
>>> Can't you deduce the requirement of such DT node based on the presence
>>> of a 'pci=' option in the same config file?
>>>
>>> Also I wouldn't discard that in the future you might want to use
>>> different emulators for different devices, so it might be helpful to
>>> introduce something like:
>>>
>>> pci = [ '08:00.0,backend=vpci', '09:00.0,backend=xenpt', '0a:00.0,backend=qemu', ... ]
> 
> I like this idea :).
> 
>>>
>>> For the time being Arm will require backend=vpci for all the passed
>>> through devices, but I wouldn't rule out this changing in the future.
>> We need it for the case where no device is declared in the config file and the user
>> wants to add devices using xl later. In this case we must have the DT node for it
>> to work.
> 
> Are you suggesting that you plan to implement PCI hotplug?

No, this is not in the current plan, but we should not prevent this from being supported some day :-)

Bertrand

> 
> Cheers,
> 
> -- 
> Julien Grall



From xen-devel-bounces@lists.xenproject.org Fri Jul 17 13:49:25 2020
Subject: Re: RFC: PCI devices passthrough on Arm design proposal
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
References: <3F6E40FB-79C5-4AE8-81CA-E16CA37BB298@arm.com>
 <BD475825-10F6-4538-8294-931E370A602C@arm.com>
 <E9CBAA57-5EF3-47F9-8A40-F5D7816DB2A4@arm.com>
 <20200717111644.GS7191@Air-de-Roger>
 <3B8A1B9D-A101-4937-AC42-4F62BE7E677C@arm.com>
 <339df023-7844-b998-81bd-8c00baad3b04@xen.org>
 <F91FCC13-D591-4A57-9840-220614174F02@arm.com>
From: Julien Grall <julien@xen.org>
Message-ID: <239b5114-9e23-ab55-41b9-c02a2018e4ab@xen.org>
Date: Fri, 17 Jul 2020 14:49:09 +0100
In-Reply-To: <F91FCC13-D591-4A57-9840-220614174F02@arm.com>
Cc: Rahul Singh <Rahul.Singh@arm.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 nd <nd@arm.com>, Julien Grall <julien.grall.oss@gmail.com>



On 17/07/2020 14:44, Bertrand Marquis wrote:
> 
> 
>> On 17 Jul 2020, at 15:29, Julien Grall <julien@xen.org> wrote:
>>
>>
>>
>> On 17/07/2020 14:22, Bertrand Marquis wrote:
>>>>> # Emulated PCI device tree node in libxl:
>>>>>
>>>>> Libxl is creating a virtual PCI device tree node in the device tree
>>>>> to enable the guest OS to discover the virtual PCI during guest
>>>>> boot. We introduced the new config option [vpci="pci_ecam"] for
>>>>> guests. When this config option is enabled in a guest configuration,
>>>>> a PCI device tree node will be created in the guest device tree.
>>>>>
>>>>> A new area has been reserved in the arm guest physical map at which
>>>>> the VPCI bus is declared in the device tree (reg and ranges
>>>>> parameters of the node). A trap handler for the PCI ECAM access from
>>>>> guest has been registered at the defined address and redirects
>>>>> requests to the VPCI driver in Xen.
>>>>
>>>> Can't you deduce the requirement of such DT node based on the presence
>>>> of a 'pci=' option in the same config file?
>>>>
>>>> Also I wouldn't discard that in the future you might want to use
>>>> different emulators for different devices, so it might be helpful to
>>>> introduce something like:
>>>>
>>>> pci = [ '08:00.0,backend=vpci', '09:00.0,backend=xenpt', '0a:00.0,backend=qemu', ... ]
>>
>> I like this idea :).
>>
>>>>
>>>> For the time being Arm will require backend=vpci for all the passed
>>>> through devices, but I wouldn't rule out this changing in the future.
>>> We need it for the case where no device is declared in the config file and the user
>>> wants to add devices using xl later. In this case we must have the DT node for it
>>> to work.
>>
>> Are you suggesting that you plan to implement PCI hotplug?
> 
> No this is not in the current plan but we should not prevent this to be supported some day :-)

I agree that we don't want to prevent extensions. But I fail to see why 
this would be an issue if we don't introduce the option "vpci" today.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Jul 17 13:51:06 2020
Subject: Re: PCI devices passthrough on Arm design proposal
From: Julien Grall <julien@xen.org>
To: Rahul Singh <Rahul.Singh@arm.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <3F6E40FB-79C5-4AE8-81CA-E16CA37BB298@arm.com>
 <BD475825-10F6-4538-8294-931E370A602C@arm.com>
 <8ac91a1b-e6b3-0f2b-0f23-d7aff100936d@xen.org>
Message-ID: <c7d5a084-8111-9f43-57e1-bcf2bd822f5b@xen.org>
Date: Fri, 17 Jul 2020 14:50:56 +0100
In-Reply-To: <8ac91a1b-e6b3-0f2b-0f23-d7aff100936d@xen.org>
Cc: nd <nd@arm.com>, Stefano Stabellini <sstabellini@kernel.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Julien Grall <julien.grall.oss@gmail.com>

(Resending to the correct ML)

On 17/07/2020 14:23, Julien Grall wrote:
> 
> 
> On 16/07/2020 18:02, Rahul Singh wrote:
>> Hello All,
> 
> Hi,
> 
>> Following up on the discussion on PCI passthrough support on Arm that we 
>> had at the Xen summit, we are submitting a Request For Comments and a 
>> design proposal for PCI passthrough support on Arm. Feel free to give 
>> your feedback.
>>
>> The following describes the high-level design proposal of the PCI 
>> passthrough support and how the different modules within the system 
>> interact with each other to assign a particular PCI device to the guest.
> 
> There was an attempt a few years ago to get a design document for PCI 
> passthrough (see [1]). I would suggest having a look at the thread, as I 
> think it would help to have an overview of all the components (e.g. MSI 
> controllers...) even if they will not be implemented at the beginning.
> 
>>
>> # Title:
>>
>> PCI devices passthrough on Arm design proposal
>>
>> # Problem statement:
>>
>> On Arm there is no support to assign a PCI device to a guest. PCI 
>> device passthrough capability allows guests to have full access to 
>> some PCI devices. PCI device passthrough allows PCI devices to appear 
>> and behave as if they were physically attached to the guest operating 
>> system, while providing full isolation of the PCI devices.
>>
>> A goal of this work is to also support Dom0less configurations, so the PCI 
>> backend/frontend drivers used on x86 shall not be used on Arm. It will 
>> use the existing VPCI concept from x86 and implement the virtual PCI 
>> bus through IO emulation such that only assigned devices are visible 
>> to the guest and the guest can use the standard PCI driver.
>>
>> Only Dom0 and Xen will have access to the real PCI bus; the guest will 
>> have direct access to the assigned device itself. IOMEM will 
>> be mapped to the guest and interrupts will be redirected to the guest. 
>> The SMMU has to be configured correctly to allow DMA transactions.
>>
>> ## Current state: Draft version
>>
>> # Proposer(s): Rahul Singh, Bertrand Marquis
>>
>> # Proposal:
>>
>> This section will describe the different subsystems needed to support PCI 
>> device passthrough and how these subsystems interact with each other 
>> to assign a device to the guest.
>>
>> # PCI Terminology:
>>
>> Host Bridge: Host bridge allows the PCI devices to talk to the rest of 
>> the computer.
>> ECAM: ECAM (Enhanced Configuration Access Mechanism) is a mechanism 
>> developed to allow PCIe to access configuration space. The space 
>> available per function is 4KB.
>>
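[Editorial note: the quoted ECAM layout (4 KB of configuration space per function) implies the standard offset decomposition bus/device/function/register. A sketch of that calculation, for reference only; it is not code from the proposal:]

```python
def ecam_offset(bus: int, dev: int, fn: int, reg: int) -> int:
    """Standard PCIe ECAM offset: 4 KiB (12 bits) of config space per
    function, 8 functions per device, 32 devices per bus."""
    assert 0 <= bus < 256 and 0 <= dev < 32 and 0 <= fn < 8 and 0 <= reg < 4096
    return (bus << 20) | (dev << 15) | (fn << 12) | reg


# Vendor ID register (offset 0) of function 09:00.0:
print(hex(ecam_offset(0x09, 0, 0, 0)))  # 0x900000
```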
>> # Discovering PCI Host Bridge in XEN:
>>
>> In order to support PCI passthrough, XEN should be aware of all the 
>> PCI host bridges available on the system and should be able to access 
>> the PCI configuration space. ECAM configuration access is supported as 
>> of now. XEN during boot will read the PCI device tree node “reg” 
>> property and will map the ECAM space into XEN memory using the 
>> “ioremap_nocache()” function.
>>
>> If there is more than one segment on the system, XEN will read the
>> “linux,pci-domain” property from the device tree node and configure
>> the host bridge segment number accordingly. All the PCI device tree
>> nodes should have the “linux,pci-domain” property so that there will
>> be no conflicts. During hardware domain boot, Linux will also use
>> the same “linux,pci-domain” property and assign the same domain
>> number to the host bridge.
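
As an illustration, a host bridge node carrying these properties might
look like the following device tree fragment (addresses and sizes here
are made up for the example, not taken from any real platform):

```dts
pcie@40000000 {
    compatible = "pci-host-ecam-generic";
    device_type = "pci";
    /* ECAM window: 256MB covers 256 buses at 4KB per function */
    reg = <0x0 0x40000000 0x0 0x10000000>;
    /* Segment (domain) number picked up by both Xen and Linux */
    linux,pci-domain = <0>;
};
```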
> 
> AFAICT, "linux,pci-domain" is not a mandatory option and mostly tie to 
> Linux. What would happen with other OS?
> 
> But I would rather avoid trying to mandate a user to modifying his/her 
> device-tree in order to support PCI passthrough. It would be better to 
> consider Xen to assign the number if it is not present.
> 
>>
>> When Dom0 tries to access the PCI config space of a device, XEN will
>> find the corresponding host bridge based on the segment number and
>> access the config space assigned to that bridge.
>>
>> Limitation:
>> * Only PCI ECAM configuration space access is supported.
>> * Device tree binding is supported as of now, ACPI is not supported.
> 
> We want to differentiate the high-level design from the actual 
> implementation. While you may not yet implement ACPI, we still need to 
> keep it in mind to avoid incompatibilities in long term.
> 
>> * The PCI host bridge access code needs to be ported to XEN to
>> access the configuration space (the generic driver works, but lots
>> of platforms will require some specific code or quirks).
>>
>> # Discovering PCI devices:
>>
>> PCI-PCIe enumeration is the process of detecting devices connected
>> to a host. It is the responsibility of the hardware domain or boot
>> firmware to do the PCI enumeration and configure the BARs, PCI
>> capabilities, and MSI/MSI-X configuration.
>>
>> Doing the configuration part of PCI-PCIe enumeration in XEN is not
>> feasible, as it would require a lot of code inside Xen which would
>> in turn require a lot of maintenance. Added to this, many platforms
>> require quirks in that part of the PCI code, which would greatly
>> increase Xen's complexity. Once the hardware domain enumerates a
>> device, it will communicate it to XEN via the below hypercall.
>>
>> #define PHYSDEVOP_pci_device_add        25
>> struct physdev_pci_device_add {
>>     uint16_t seg;
>>     uint8_t bus;
>>     uint8_t devfn;
>>     uint32_t flags;
>>     struct {
>>         uint8_t bus;
>>         uint8_t devfn;
>>     } physfn;
>>     /*
>>      * Optional parameters array.
>>      * First element ([0]) is PXM domain associated with the device
>>      * (if XEN_PCI_DEV_PXM is set)
>>      */
>>     uint32_t optarr[XEN_FLEX_ARRAY_DIM];
>> };
>>
>> As the hypercall argument has the PCI segment number, XEN will
>> access the PCI config space based on this segment number and find
>> the host bridge corresponding to it. At this stage the host bridge
>> is fully initialized, so there will be no issue accessing the config
>> space.
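
The segment-to-bridge lookup described above can be sketched as
follows. This is an illustrative sketch only: the structure and
function names are made up and do not match Xen's internal code.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/*
 * Minimal list of host bridges keyed by segment number, as the text
 * describes. Each bridge keeps the ECAM window mapped at boot.
 */
struct host_bridge {
    uint16_t segment;        /* from "linux,pci-domain" */
    void *ecam;              /* ECAM window mapped at boot */
    struct host_bridge *next;
};

static struct host_bridge *find_bridge(struct host_bridge *head,
                                       uint16_t segment)
{
    for ( ; head; head = head->next )
        if ( head->segment == segment )
            return head;
    return NULL;             /* unknown segment: reject the access */
}
```

A config space access for an unknown segment returns NULL and can then
be failed cleanly instead of touching the wrong bridge.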
>>
>> XEN will add the PCI devices to the linked list maintained in XEN
>> using the function pci_add_device(). XEN will be aware of all the
>> PCI devices on the system and all the devices will be added to the
>> hardware domain.
> I understand this what x86 does. However, may I ask why we would want it 
> for Arm?
> 
>>
>> Limitations:
>> * When PCI devices are added to XEN, the MSI capability is not
>> initialized inside XEN and not supported as of now.
>> * The ACS capability is disabled for ARM as of now, as devices are
>> not accessible after enabling it.
> 
> I am not sure to understand this. Can you expand?
> 
>> * A Dom0less implementation will require Xen to be able to discover
>> the PCI devices itself (without depending on Dom0 to declare them to
>> Xen).
>>
>> # Enable the existing x86 virtual PCI support for ARM:
>>
>> The existing VPCI support available for x86 is adapted for Arm. When
>> a device is added to XEN via the hypercall
>> “PHYSDEVOP_pci_device_add”, a VPCI handler for config space accesses
>> is added to the PCI device to emulate it.
>>
>> An MMIO trap handler for the PCI ECAM space is registered in XEN so
>> that when a guest tries to access the PCI config space, XEN will
>> trap the access and emulate the read/write using VPCI, not the real
>> PCI hardware.
>>
>> Limitation:
>> * No handler is registered for the MSI configuration.
>> * Only legacy interrupts are supported and tested as of now; MSI is
>> not implemented or tested.
> 
> IIRC, legacy interrupt may be shared between two PCI devices. How do you 
> plan to handle this on Arm?
> 
>>
>> # Assign the device to the guest:
>>
>> Assigning a PCI device from the hardware domain to a guest is done
>> using the below guest config option. When the xl tool creates the
>> domain, the PCI devices will be assigned to the guest VPCI bus.
> 
> Above, you suggest that device will be assigned to the hardware domain 
> at boot. I am assuming this also means that all the interrupts/MMIOs 
> will be routed/mapped, is that correct?
> 
> If so, can you provide a rough sketch how assign/deassign will work?
> 
>>     pci=[ "PCI_SPEC_STRING", "PCI_SPEC_STRING", ...]
>>
>> The guest will only be able to access the assigned devices and see
>> the bridges. The guest will not be able to access or see the devices
>> that are not assigned to it.
>>
>> Limitation:
>> * As of now, all the bridges on the PCI bus are seen by the guest on
>> the VPCI bus.
> 
> Why do you want to expose all the bridges to a guest? Does this mean 
> that the BDF should always match between the host and the guest?
> 
>>
>> # Emulated PCI device tree node in libxl:
>>
>> Libxl creates a virtual PCI device tree node in the device tree to
>> enable the guest OS to discover the virtual PCI bus during guest
>> boot. We introduced a new config option [vpci="pci_ecam"] for
>> guests. When this option is enabled in a guest configuration, a PCI
>> device tree node will be created in the guest device tree.
>>
>> A new area has been reserved in the Arm guest physical map at which
>> the VPCI bus is declared in the device tree (reg and ranges
>> parameters of the node). A trap handler for PCI ECAM accesses from
>> the guest has been registered at the defined address; it redirects
>> requests to the VPCI driver in Xen.
>>
>> Limitation:
>> * Only one PCI device tree node is supported as of now.
>>
>> # BAR value and IOMEM mapping:
>>
>> A Linux guest will do the PCI enumeration based on the areas
>> reserved for ECAM and IOMEM ranges in the VPCI device tree node.
>> Once a PCI device is assigned to the guest, XEN will map the guest
>> PCI IOMEM region to the real physical IOMEM region, but only for the
>> assigned devices.
>>
>> As of now we have not modified the existing VPCI code to map the
>> guest PCI IOMEM region to the real physical IOMEM region. We used
>> the existing guest “iomem” config option to map the region.
>> For example:
>>     Guest reserved IOMEM region:  0x04020000
>>     Real physical IOMEM region:   0x50000000
>>     IOMEM size:                   128MB
>>     iomem config will be:         iomem = ["0x50000,0x8000@0x4020"]
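
The numbers in the “iomem” string are 4KB page frame numbers, not byte
addresses. The example above can be reproduced with the following
arithmetic (the helper name is made up for illustration):

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT 12  /* 4KB pages */

/* Build an xl "iomem" entry from byte addresses: MFN,NR_FRAMES@GFN. */
static void iomem_entry(char *buf, size_t len, uint64_t host_addr,
                        uint64_t size, uint64_t guest_addr)
{
    snprintf(buf, len, "0x%llx,0x%llx@0x%llx",
             (unsigned long long)(host_addr >> PAGE_SHIFT),
             (unsigned long long)(size >> PAGE_SHIFT),
             (unsigned long long)(guest_addr >> PAGE_SHIFT));
}
```

With host address 0x50000000, size 128MB and guest address 0x04020000
this yields exactly the "0x50000,0x8000@0x4020" string of the example.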
>>
>> There is no need to map the ECAM space, as XEN already has access to
>> it; XEN will trap ECAM accesses from the guest and perform the
>> read/write on the VPCI bus.
>>
>> IOMEM access will not be trapped and the guest will directly access 
>> the IOMEM region of the assigned device via stage-2 translation.
>>
>> In the same way, we mapped the assigned devices' IRQs to the guest
>> using the below config option.
>>     irqs = [ NUMBER, NUMBER, ...]
>>
>> Limitation:
>> * Need to avoid the “iomem” and “irqs” guest config options and
>> instead map the IOMEM region and IRQ at the time the device is
>> assigned to the guest via the “pci” guest config option, when xl
>> creates the domain.
>> * Emulated BAR values on the VPCI bus should reflect the mapped
>> IOMEM addresses.
>> * The x86 mapping code should be ported to Arm so that the stage-2
>> translation is adapted when the guest modifies the BAR register
>> values (to map the address requested by the guest for a specific
>> IOMEM region to the address actually contained in the real BAR
>> register of the corresponding device).
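
On the BAR emulation point: guests size a BAR by writing all-ones and
reading the value back, so the VPCI trap handler has to emulate that
probe and, on a real write, update the stage-2 mapping. The classic
size calculation for a 32-bit memory BAR can be sketched as:

```c
#include <assert.h>
#include <stdint.h>

/*
 * Size of a 32-bit memory BAR from the value read back after writing
 * 0xffffffff to it. Bits 3:0 are flag bits and must be masked off
 * before inverting.
 */
static inline uint32_t mem_bar_size(uint32_t readback)
{
    return ~(readback & ~0xfu) + 1;
}
```

This is the standard PCI BAR-sizing arithmetic, not Xen's actual
implementation; 64-bit and prefetchable BARs need additional handling.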
>>
>> # SMMU configuration for guest:
>>
>> When assigning a PCI device to a guest, the SMMU configuration
>> should be updated to remove the device's access to the hardware
>> domain's memory and to add a configuration giving access to the
>> guest's memory with the proper address translation, so that the
>> device can do DMA operations from and to the guest memory only.
> 
> There are a few more questions to answer here:
>     - When a guest is destroyed, who will be the owner of the PCI 
> devices? Depending on the answer, how do you make sure the device is 
> quiescent?
>     - Is there any memory access that can bypassed the IOMMU (e.g 
> doorbell)?
> 
> Cheers,
> 
> [1] 
> https://lists.xenproject.org/archives/html/xen-devel/2017-05/msg02520.html
> 

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Jul 17 13:57:50 2020
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151952-mainreport@xen.org>
Subject: [qemu-mainline test] 151952: regressions - FAIL
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 17 Jul 2020 13:57:38 +0000

flight 151952 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151952/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemuu-ovmf-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-qemuu-nested-intel 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-ovmf-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-debianhvm-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-qemuu-rhel6hvm-intel 10 redhat-install   fail REGR. vs. 151065
 test-amd64-i386-freebsd10-i386 11 guest-start            fail REGR. vs. 151065
 test-amd64-i386-freebsd10-amd64 11 guest-start           fail REGR. vs. 151065
 test-amd64-amd64-qemuu-nested-amd 10 debian-hvm-install  fail REGR. vs. 151065
 test-armhf-armhf-xl-vhd      18 leak-check/check         fail REGR. vs. 151065
 test-amd64-i386-qemuu-rhel6hvm-amd 10 redhat-install     fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-win7-amd64 10 windows-install  fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-win7-amd64 10 windows-install   fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-ws16-amd64 10 windows-install  fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-ws16-amd64 10 windows-install   fail REGR. vs. 151065

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 151065
 test-armhf-armhf-xl-rtds     16 guest-start/debian.repeat    fail  like 151065
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 151065
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                95d1fbabae0cd44156ac4b96d512d143ca7dfd5e
baseline version:
 qemuu                9e3903136d9acde2fb2dd9e967ba928050a6cb4a

Last test of basis   151065  2020-06-12 22:27:51 Z   34 days
Failing since        151101  2020-06-14 08:32:51 Z   33 days   45 attempts
Testing same since   151952  2020-07-16 22:36:32 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Ahmed Karaman <ahmedkhaledkaraman@gmail.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.m.mail@gmail.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Richardson <Alexander.Richardson@cl.cam.ac.uk>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Boettcher <alexander.boettcher@genode-labs.com>
  Alexander Bulekov <alxndr@bu.edu>
  Alexandre Mergnat <amergnat@baylibre.com>
  Alexey Krasikov <alex-krasikov@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Allan Peramaki <aperamak@pp1.inet.fi>
  Andrew <andrew@daynix.com>
  Andrew Jones <drjones@redhat.com>
  Andrew Melnychenko <andrew@daynix.com>
  Ani Sinha <ani.sinha@nutanix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Ard Biesheuvel <ardb@kernel.org>
  Artyom Tarasenko <atar4qemu@gmail.com>
  Atish Patra <atish.patra@wdc.com>
  Aurelien Jarno <aurelien@aurel32.net>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Basil Salman <basil@daynix.com>
  Basil Salman <bsalman@redhat.com>
  Beata Michalska <beata.michalska@linaro.org>
  Bin Meng <bin.meng@windriver.com>
  Bin Meng <bmeng.cn@gmail.com>
  Cameron Esfahani <dirty@apple.com>
  Catherine A. Frederick <chocola@animebitch.es>
  Cathy Zhang <cathy.zhang@intel.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christophe de Dinechin <dinechin@redhat.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Cindy Lu <lulu@redhat.com>
  Claudio Fontana <cfontana@suse.de>
  Cleber Rosa <crosa@redhat.com>
  Colin Xu <colin.xu@intel.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniele Buono <dbuono@linux.vnet.ibm.com>
  David Carlier <devnexen@gmail.com>
  David Edmondson <david.edmondson@oracle.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Denis V. Lunev <den@openvz.org>
  Derek Su <dereksu@qnap.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Ed Robbins <E.J.C.Robbins@kent.ac.uk>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Emilio G. Cota <cota@braap.org>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  erik-smit <erik.lucas.smit@gmail.com>
  Evgeny Yakovlev <eyakovlev@virtuozzo.com>
  fangying <fangying1@huawei.com>
  Farhan Ali <alifm@linux.ibm.com>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frank Chang <frank.chang@sifive.com>
  Geoffrey McRae <geoff@hostfission.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Giuseppe Musacchio <thatlemon@gmail.com>
  Greg Kurz <groug@kaod.org>
  Guenter Roeck <linux@roeck-us.net>
  Gustavo Romero <gromero@linux.ibm.com>
  Halil Pasic <pasic@linux.ibm.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Howard Spoelstra <hsp.cat7@gmail.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Ian Jiang <ianjiang.ict@gmail.com>
  Igor Mammedov <imammedo@redhat.com>
  Jan Kiszka <jan.kiszka@siemens.com>
  Janne Grunau <j@jannau.net>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason Wang <jasowang@redhat.com>
  Jay Zhou <jianjay.zhou@huawei.com>
  Jean-Christophe Dubois <jcd@tribudubois.net>
  Jessica Clarke <jrtc27@jrtc27.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jingqi Liu <jingqi.liu@intel.com>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Joseph Myers <joseph@codesourcery.com>
  Josh Kunz <jkz@google.com>
  Juan Quintela <quintela@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Keqian Zhu <zhukeqian1@huawei.com>
  Kevin Wolf <kwolf@redhat.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Leif Lindholm <leif@nuviainc.com>
  Leonid Bloch <lb.workbox@gmail.com>
  Leonid Bloch <lbloch@janustech.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@gmail.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  lichun <lichun@ruijie.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  Like Xu <like.xu@linux.intel.com>
  Lingfeng Yang <lfy@google.com>
  Lingshan zhu <lingshan.zhu@intel.com>
  Liran Alon <liran.alon@oracle.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Lukas Straub <lukasstraub2@web.de>
  Luwei Kang <luwei.kang@intel.com>
  Maciej S. Szmigiero <maciej.szmigiero@oracle.com>
  Magnus Damm <magnus.damm@gmail.com>
  Mao Zhongyi <maozhongyi@cmss.chinamobile.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marcelo Tosatti <mtosatti@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Mario Smarduch <msmarduch@digitalocean.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Masahiro Yamada <masahiroy@kernel.org>
  Matus Kysel <mkysel@tachyum.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Maxime Coquelin <maxime.coquelin@redhat.com>
  Menno Lageman <menno.lageman@oracle.com>
  Michael Rolnik <mrolnik@gmail.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michele Denber <denber@mindspring.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Olaf Hering <olaf@aepfle.de>
  Pan Nengyuan <pannengyuan@huawei.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Radoslaw Biernacki <rad@semihalf.com>
  Raphael Norwitz <raphael.norwitz@nutanix.com>
  Richard Henderson <richard.henderson@linaro.org>
  Richard W.M. Jones <rjones@redhat.com>
  Riku Voipio <riku.voipio@linaro.org>
  Robert Foley <robert.foley@linaro.org>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Roman Kagan <rkagan@virtuozzo.com>
  Roman Kagan <rvkagan@yandex-team.ru>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Sarah Harris <S.E.Harris@kent.ac.uk>
  Sebastian Rasmussen <sebras@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Tao Xu <tao3.xu@intel.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tiwei Bie <tiwei.bie@intel.com>
  Tong Ho <tong.ho@xilinx.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  WangBowen <bowen.wang@intel.com>
  Wei Huang <wei.huang2@amd.com>
  Wei Wang <wei.w.wang@intel.com>
  Wentong Wu <wentong.wu@intel.com>
  Xie Yongji <xieyongji@bytedance.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Ying Fang <fangying1@huawei.com>
  Yoshinori Sato <ysato@users.sourceforge.jp>
  Yuri Benditovich <yuri.benditovich@daynix.com>
  Zhang Chen <chen.zhang@intel.com>
  Zheng Chuan <zhengchuan@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 28936 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Jul 17 13:59:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jul 2020 13:59:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jwQta-0003wU-Pn; Fri, 17 Jul 2020 13:59:26 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=1N/p=A4=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jwQtZ-0003wL-Jk
 for xen-devel@lists.xenproject.org; Fri, 17 Jul 2020 13:59:25 +0000
X-Inumbo-ID: b89649c8-c835-11ea-bca7-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b89649c8-c835-11ea-bca7-bc764e2007e4;
 Fri, 17 Jul 2020 13:59:24 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 7A369AC20;
 Fri, 17 Jul 2020 13:59:28 +0000 (UTC)
Subject: Re: PCI devices passthrough on Arm design proposal
To: Julien Grall <julien@xen.org>
References: <3F6E40FB-79C5-4AE8-81CA-E16CA37BB298@arm.com>
 <BD475825-10F6-4538-8294-931E370A602C@arm.com>
 <8ac91a1b-e6b3-0f2b-0f23-d7aff100936d@xen.org>
 <c7d5a084-8111-9f43-57e1-bcf2bd822f5b@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <8cae3706-7171-3f29-7b68-b5e6f26bc2b7@suse.com>
Date: Fri, 17 Jul 2020 15:59:27 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <c7d5a084-8111-9f43-57e1-bcf2bd822f5b@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Rahul Singh <Rahul.Singh@arm.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 nd <nd@arm.com>, Julien Grall <julien.grall.oss@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 17.07.2020 15:50, Julien Grall wrote:
> (Resending to the correct ML)
> On 17/07/2020 14:23, Julien Grall wrote:
>> On 16/07/2020 18:02, Rahul Singh wrote:
>>> # Discovering PCI devices:
>>>
>>> PCI/PCIe enumeration is the process of detecting the devices connected 
>>> to a host bridge. It is the responsibility of the hardware domain or the 
>>> boot firmware to do the PCI enumeration and configure the BARs, PCI 
>>> capabilities, and the MSI/MSI-X configuration.
>>>
>>> Doing the configuration part of PCI/PCIe enumeration in Xen is not 
>>> feasible, as it would require a lot of code inside Xen, which in turn 
>>> would require a lot of maintenance. On top of this, many platforms 
>>> require quirks in that part of the PCI code, which would greatly 
>>> increase Xen's complexity. Once the hardware domain has enumerated a 
>>> device, it will communicate it to Xen via the hypercall below.
>>>
>>> #define PHYSDEVOP_pci_device_add        25
>>> struct physdev_pci_device_add {
>>>      uint16_t seg;
>>>      uint8_t bus;
>>>      uint8_t devfn;
>>>      uint32_t flags;
>>>      struct {
>>>          uint8_t bus;
>>>          uint8_t devfn;
>>>      } physfn;
>>>      /*
>>>       * Optional parameters array.
>>>       * First element ([0]) is PXM domain associated with the device
>>>       * (if XEN_PCI_DEV_PXM is set)
>>>       */
>>>      uint32_t optarr[XEN_FLEX_ARRAY_DIM];
>>> };
>>>
>>> As the hypercall argument carries the PCI segment number, Xen will 
>>> access the PCI config space based on this segment number and find the 
>>> host bridge corresponding to it. At this stage the host bridge is fully 
>>> initialized, so there will be no issue accessing the config space.
>>>
>>> Xen will add the PCI devices to the linked list maintained in Xen using 
>>> the function pci_add_device(). Xen will then be aware of all the PCI 
>>> devices on the system, and all the devices will be added to the hardware 
>>> domain.
>> I understand this is what x86 does. However, may I ask why we would want 
>> it for Arm?

Isn't it the normal thing to follow what is already there, and instead
provide reasons why a different approach should be followed? Personally I'd
much prefer it if we didn't have two fundamentally different PCI
implementations in the tree. Perhaps some of what Arm wants or needs can
actually benefit x86 as well, but that requires a sufficient amount of code
sharing.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Jul 17 13:59:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jul 2020 13:59:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jwQu4-00040E-31; Fri, 17 Jul 2020 13:59:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zdYj=A4=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1jwQu2-000402-K9
 for xen-devel@lists.xenproject.org; Fri, 17 Jul 2020 13:59:54 +0000
X-Inumbo-ID: c9951bfa-c835-11ea-bca7-bc764e2007e4
Received: from EUR02-HE1-obe.outbound.protection.outlook.com (unknown
 [2a01:111:f400:fe05::620])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c9951bfa-c835-11ea-bca7-bc764e2007e4;
 Fri, 17 Jul 2020 13:59:53 +0000 (UTC)
Received: from DB3PR06CA0013.eurprd06.prod.outlook.com (2603:10a6:8:1::26) by
 VI1PR08MB3885.eurprd08.prod.outlook.com (2603:10a6:803:c1::32) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3195.18; Fri, 17 Jul 2020 13:59:51 +0000
Received: from DB5EUR03FT062.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:8:1:cafe::ae) by DB3PR06CA0013.outlook.office365.com
 (2603:10a6:8:1::26) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3195.17 via Frontend
 Transport; Fri, 17 Jul 2020 13:59:50 +0000
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org;
 dmarc=bestguesspass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DB5EUR03FT062.mail.protection.outlook.com (10.152.20.197) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3195.18 via Frontend Transport; Fri, 17 Jul 2020 13:59:50 +0000
Received: ("Tessian outbound 8f45de5545d6:v62");
 Fri, 17 Jul 2020 13:59:50 +0000
X-CheckRecipientChecked: true
X-CR-MTA-CID: 0c33ac93980ba219
X-CR-MTA-TID: 64aa7808
Received: from 9d394a7510c8.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 E80E0610-9462-438E-97AD-2E8442A848C0.1; 
 Fri, 17 Jul 2020 13:59:44 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 9d394a7510c8.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 17 Jul 2020 13:59:44 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DB7PR08MB3386.eurprd08.prod.outlook.com (2603:10a6:10:46::13) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3174.21; Fri, 17 Jul
 2020 13:59:41 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::4001:43ad:d113:46a8]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::4001:43ad:d113:46a8%5]) with mapi id 15.20.3174.028; Fri, 17 Jul 2020
 13:59:41 +0000
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Jan Beulich <jbeulich@suse.com>
Subject: Re: RFC: PCI devices passthrough on Arm design proposal
Thread-Topic: RFC: PCI devices passthrough on Arm design proposal
Thread-Index: AQHWW4kYTVU0hTDyYEitKlUuU5vZlKkKf2uAgAACLICAAOrEgIAAVPWAgAABeYCAAAssAA==
Date: Fri, 17 Jul 2020 13:59:41 +0000
Message-ID: <F09F9354-EC9B-4D76-809B-A25AF4F7D863@arm.com>
References: <3F6E40FB-79C5-4AE8-81CA-E16CA37BB298@arm.com>
 <BD475825-10F6-4538-8294-931E370A602C@arm.com>
 <E9CBAA57-5EF3-47F9-8A40-F5D7816DB2A4@arm.com>
 <a50c714c-1642-0354-3f19-5a6f7278d8aa@suse.com>
 <28899FEF-9DA7-4513-8283-1AC5EFFC6E92@arm.com>
 <1dd5db2d-98c7-7738-c3d4-d3f098dfe674@suse.com>
In-Reply-To: <1dd5db2d-98c7-7738-c3d4-d3f098dfe674@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
Authentication-Results-Original: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=arm.com;
x-originating-ip: [91.160.77.188]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: d0082458-0c83-442a-ec5a-08d82a59ac7a
x-ms-traffictypediagnostic: DB7PR08MB3386:|VI1PR08MB3885:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS: <VI1PR08MB388503CAFA2054F7D881CA6E9D7C0@VI1PR08MB3885.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:10000;OLM:10000;
X-MS-Exchange-SenderADCheck: 1
Content-Type: text/plain; charset="iso-8859-1"
Content-ID: <B927026E51A75345896C176F13B61140@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB7PR08MB3386
Original-Authentication-Results: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped: DB5EUR03FT062.eop-EUR03.prod.protection.outlook.com
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Jul 2020 13:59:50.7048 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: d0082458-0c83-442a-ec5a-08d82a59ac7a
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d; Ip=[63.35.35.123];
 Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource: DB5EUR03FT062.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR08MB3885
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Rahul Singh <Rahul.Singh@arm.com>,
 Julien Grall <julien.grall.oss@gmail.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 nd <nd@arm.com>, =?iso-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>



> On 17 Jul 2020, at 15:19, Jan Beulich <jbeulich@suse.com> wrote:
>
> On 17.07.2020 15:14, Bertrand Marquis wrote:
>>> On 17 Jul 2020, at 10:10, Jan Beulich <jbeulich@suse.com> wrote:
>>> On 16.07.2020 19:10, Rahul Singh wrote:
>>>> # Emulated PCI device tree node in libxl:
>>>>
>>>> Libxl creates a virtual PCI device tree node in the device tree to
>>>> enable the guest OS to discover the virtual PCI bus during guest boot.
>>>> We introduced the new config option [vpci="pci_ecam"] for guests. When
>>>> this config option is enabled in a guest configuration, a PCI device
>>>> tree node will be created in the guest device tree.
>>>
>>> I support Stefano's suggestion for this to be an optional thing, i.e.
>>> there to be no need for it when there are PCI devices assigned to the
>>> guest anyway. I also wonder about the pci_ prefix here - isn't
>>> vpci="ecam" as unambiguous?
>>
>> This could be a problem, as we need to know upfront that this is required
>> for a guest, so that PCI devices can be assigned later using xl.
>
> I'm afraid I don't understand: When there are no PCI device that get
> handed to a guest when it gets created, but it is supposed to be able
> to have some assigned while already running, then we agree the option
> is needed (afaict). When PCI devices get handed to the guest while it
> gets constructed, where's the problem to infer this option from the
> presence of PCI devices in the guest configuration?

If the user wants to use xl pci-attach to attach a device to a guest at
runtime, this guest must have a vPCI bus (even with no devices).
If we do not have the vpci parameter in the configuration, this use case
will not work anymore.

@julien: in fact this can be considered hotplug from the guest's point of
view, and we do support this :-)
We will definitely not support physical PCI hotplug (i.e. adding a new
device to the PCI bus at runtime).

Bertrand

>
> Jan
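The runtime-attach flow Bertrand describes can be sketched as the following
config and command sequence. This is a hedged illustration: the vpci option
is the one proposed in this thread (not an established xl.cfg option), and
the config file name, guest name, and BDF are made-up examples; only the
xl pci-attach/pci-detach subcommands themselves are existing xl tooling.

```shell
# In the guest's xl configuration file (option name from this proposal),
# create the emulated PCI host bridge even though no device is assigned yet:
#
#   vpci = "pci_ecam"

# Create the guest, then attach a passthrough PCI device at runtime:
xl create guest.cfg
xl pci-attach guest 0000:03:00.0

# Detach it again when done:
xl pci-detach guest 0000:03:00.0
```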



From xen-devel-bounces@lists.xenproject.org Fri Jul 17 14:01:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jul 2020 14:01:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jwQvV-0004qw-FZ; Fri, 17 Jul 2020 14:01:25 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zdYj=A4=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1jwQvU-0004qo-9T
 for xen-devel@lists.xenproject.org; Fri, 17 Jul 2020 14:01:24 +0000
X-Inumbo-ID: fe92d900-c835-11ea-bb8b-bc764e2007e4
Received: from EUR02-HE1-obe.outbound.protection.outlook.com (unknown
 [2a01:111:f400:fe05::627])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id fe92d900-c835-11ea-bb8b-bc764e2007e4;
 Fri, 17 Jul 2020 14:01:22 +0000 (UTC)
Received: from AM6PR04CA0011.eurprd04.prod.outlook.com (2603:10a6:20b:92::24)
 by VE1PR08MB4781.eurprd08.prod.outlook.com (2603:10a6:802:ab::10)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3174.21; Fri, 17 Jul
 2020 14:01:18 +0000
Received: from AM5EUR03FT004.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:92:cafe::df) by AM6PR04CA0011.outlook.office365.com
 (2603:10a6:20b:92::24) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3195.17 via Frontend
 Transport; Fri, 17 Jul 2020 14:01:18 +0000
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org;
 dmarc=bestguesspass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT004.mail.protection.outlook.com (10.152.16.163) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3195.18 via Frontend Transport; Fri, 17 Jul 2020 14:01:18 +0000
Received: ("Tessian outbound 2ae7cfbcc26c:v62");
 Fri, 17 Jul 2020 14:01:18 +0000
X-CheckRecipientChecked: true
X-CR-MTA-CID: 1002281a3852c0af
X-CR-MTA-TID: 64aa7808
Received: from bde09bfd03ea.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 7253E12D-E095-4A7D-914A-3E34F5783049.1; 
 Fri, 17 Jul 2020 14:01:13 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id bde09bfd03ea.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 17 Jul 2020 14:01:12 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DB7PR08MB3386.eurprd08.prod.outlook.com (2603:10a6:10:46::13) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3174.21; Fri, 17 Jul
 2020 14:01:10 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::4001:43ad:d113:46a8]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::4001:43ad:d113:46a8%5]) with mapi id 15.20.3174.028; Fri, 17 Jul 2020
 14:01:10 +0000
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Julien Grall <julien@xen.org>
Subject: Re: RFC: PCI devices passthrough on Arm design proposal
Thread-Topic: RFC: PCI devices passthrough on Arm design proposal
Thread-Index: AQHWW4kYTVU0hTDyYEitKlUuU5vZlKkKf2uAgAACLICAAR7YAIAAIxaAgAAB34CAAARBAIAAAV+AgAADXAA=
Date: Fri, 17 Jul 2020 14:01:10 +0000
Message-ID: <73106C3C-E7BD-4692-943F-BE3CAB9CCEC9@arm.com>
References: <3F6E40FB-79C5-4AE8-81CA-E16CA37BB298@arm.com>
 <BD475825-10F6-4538-8294-931E370A602C@arm.com>
 <E9CBAA57-5EF3-47F9-8A40-F5D7816DB2A4@arm.com>
 <20200717111644.GS7191@Air-de-Roger>
 <3B8A1B9D-A101-4937-AC42-4F62BE7E677C@arm.com>
 <339df023-7844-b998-81bd-8c00baad3b04@xen.org>
 <F91FCC13-D591-4A57-9840-220614174F02@arm.com>
 <239b5114-9e23-ab55-41b9-c02a2018e4ab@xen.org>
In-Reply-To: <239b5114-9e23-ab55-41b9-c02a2018e4ab@xen.org>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
Authentication-Results-Original: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [217.140.99.251]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 53a7ef17-d37f-4e7a-dc7d-08d82a59e0c1
x-ms-traffictypediagnostic: DB7PR08MB3386:|VE1PR08MB4781:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS: <VE1PR08MB47811DA98810BD6A5F4B2F129D7C0@VE1PR08MB4781.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:9508;OLM:9508;
X-MS-Exchange-SenderADCheck: 1
Content-Type: text/plain; charset="iso-8859-1"
Content-ID: <57C38B24F6C3B7489CA11E1F57153DA3@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB7PR08MB3386
Original-Authentication-Results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped: AM5EUR03FT004.eop-EUR03.prod.protection.outlook.com
X-Forefront-Antispam-Report: CIP:63.35.35.123; CTRY:IE; LANG:en; SCL:1; SRV:;
 IPV:CAL; SFV:NSPM; H:64aa7808-outbound-1.mta.getcheckrecipient.com;
 PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com; CAT:NONE; SFTY:;
 SFS:(4636009)(346002)(39860400002)(376002)(396003)(136003)(46966005)(316002)(356005)(2906002)(54906003)(82740400003)(86362001)(8676002)(6862004)(81166007)(83380400001)(47076004)(5660300002)(82310400002)(4326008)(36756003)(2616005)(53546011)(6486002)(478600001)(6506007)(6512007)(107886003)(26005)(70586007)(70206006)(336012)(36906005)(8936002)(186003)(33656002);
 DIR:OUT; SFP:1101; 
X-MS-Office365-Filtering-Correlation-Id-Prvs: 1af5859c-ed7e-489f-59a7-08d82a59dc45
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: 3VCLZaymK1cnf8ibSGCUqm6QhtAo7n8pgVDrG/72nDR5VZux1JKeJEiCm70XLkKAnphQsKJgMhcrRpgBW0llvGQASmRlH/oqQ4BfZjzP0RfhX+/kVjAgtTmF3gCF/lSS87IO3TMEmZxJ8AdYw0dHmzP2Tr2jwVrHTRJJUEvj77QpLxi4EhV9S0hMpWDMm/wFdTkt+OdCKbwFVguJkdIyk7AZoKWqSPCJWnMF0RVLxjnWWnNsno0iXWK49sVOafamMFuWEA38JOZYqMHJ1zzXEkxOri7dgqDD0ZXSMmETHC/+lN49VIMJ9rnHo63nF/p4q9Uk7WegIf4HsMOQ5H6riq8IN9Dnb2WIM+SwwmzffGzYX2hNsQ/r0WD9Cyh11UR3Wxss6VjIc+dUKJ7xzHYsdw==
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Jul 2020 14:01:18.3383 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 53a7ef17-d37f-4e7a-dc7d-08d82a59e0c1
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d; Ip=[63.35.35.123];
 Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource: AM5EUR03FT004.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VE1PR08MB4781
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Rahul Singh <Rahul.Singh@arm.com>,
 =?iso-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 nd <nd@arm.com>, Julien Grall <julien.grall.oss@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>



> On 17 Jul 2020, at 15:49, Julien Grall <julien@xen.org> wrote:
>
>
> On 17/07/2020 14:44, Bertrand Marquis wrote:
>>> On 17 Jul 2020, at 15:29, Julien Grall <julien@xen.org> wrote:
>>>
>>>
>>> On 17/07/2020 14:22, Bertrand Marquis wrote:
>>>>>> # Emulated PCI device tree node in libxl:
>>>>>>
>>>>>> Libxl is creating a virtual PCI device tree node in the device tree
>>>>>> to enable the guest OS to discover the virtual PCI during guest
>>>>>> boot. We introduced the new config option [vpci="pci_ecam"] for
>>>>>> guests. When this config option is enabled in a guest configuration,
>>>>>> a PCI device tree node will be created in the guest device tree.
>>>>>>
>>>>>> A new area has been reserved in the arm guest physical map at which
>>>>>> the VPCI bus is declared in the device tree (reg and ranges
>>>>>> parameters of the node). A trap handler for the PCI ECAM access from
>>>>>> guest has been registered at the defined address and redirects
>>>>>> requests to the VPCI driver in Xen.
>>>>>
>>>>> Can't you deduce the requirement of such a DT node based on the
>>>>> presence of a 'pci=' option in the same config file?
>>>>>
>>>>> Also I wouldn't rule out that in the future you might want to use
>>>>> different emulators for different devices, so it might be helpful to
>>>>> introduce something like:
>>>>>
>>>>> pci = [ '08:00.0,backend=vpci', '09:00.0,backend=xenpt', '0a:00.0,backend=qemu', ... ]
>>>
>>> I like this idea :).
>>>
>>>>>
>>>>> For the time being Arm will require backend=vpci for all the passed
>>>>> through devices, but I wouldn't rule out this changing in the future.
>>>> We need it for the case where no device is declared in the config file
>>>> and the user wants to add devices using xl later. In this case we must
>>>> have the DT node for it to work.
>>>
>>> Are you suggesting that you plan to implement PCI hotplug?
>> No, this is not in the current plan, but we should not prevent it from
>> being supported some day :-)
>
> I agree that we don't want to prevent extension. But I fail to see why
> this would be an issue if we don't introduce the "vpci" option today.

I answered that in parallel while replying to Jan.
This is needed so that a guest can start with no PCI device assigned and
have devices assigned later using xl pci-attach.

Bertrand

>
> Cheers,
>
> --
> Julien Grall



From xen-devel-bounces@lists.xenproject.org Fri Jul 17 14:06:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jul 2020 14:06:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jwR03-00053g-5Y; Fri, 17 Jul 2020 14:06:07 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=1N/p=A4=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jwR02-00053b-35
 for xen-devel@lists.xenproject.org; Fri, 17 Jul 2020 14:06:06 +0000
X-Inumbo-ID: a646a4c4-c836-11ea-9612-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a646a4c4-c836-11ea-9612-12813bfff9fa;
 Fri, 17 Jul 2020 14:06:03 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 46A83AD89;
 Fri, 17 Jul 2020 14:06:07 +0000 (UTC)
Subject: Re: RFC: PCI devices passthrough on Arm design proposal
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
References: <3F6E40FB-79C5-4AE8-81CA-E16CA37BB298@arm.com>
 <BD475825-10F6-4538-8294-931E370A602C@arm.com>
 <E9CBAA57-5EF3-47F9-8A40-F5D7816DB2A4@arm.com>
 <a50c714c-1642-0354-3f19-5a6f7278d8aa@suse.com>
 <28899FEF-9DA7-4513-8283-1AC5EFFC6E92@arm.com>
 <1dd5db2d-98c7-7738-c3d4-d3f098dfe674@suse.com>
 <F09F9354-EC9B-4D76-809B-A25AF4F7D863@arm.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <a5007a6c-bdfe-04d4-8107-53cb222b95e8@suse.com>
Date: Fri, 17 Jul 2020 16:06:05 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <F09F9354-EC9B-4D76-809B-A25AF4F7D863@arm.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Rahul Singh <Rahul.Singh@arm.com>,
 Julien Grall <julien.grall.oss@gmail.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 nd <nd@arm.com>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 17.07.2020 15:59, Bertrand Marquis wrote:
> 
> 
>> On 17 Jul 2020, at 15:19, Jan Beulich <jbeulich@suse.com> wrote:
>>
>> On 17.07.2020 15:14, Bertrand Marquis wrote:
>>>> On 17 Jul 2020, at 10:10, Jan Beulich <jbeulich@suse.com> wrote:
>>>> On 16.07.2020 19:10, Rahul Singh wrote:
>>>>> # Emulated PCI device tree node in libxl:
>>>>>
>>>>> Libxl is creating a virtual PCI device tree node in the device tree to enable the guest OS to discover the virtual PCI during guest boot. We introduced the new config option [vpci="pci_ecam"] for guests. When this config option is enabled in a guest configuration, a PCI device tree node will be created in the guest device tree.
>>>>
>>>> I support Stefano's suggestion for this to be an optional thing, i.e.
>>>> there to be no need for it when there are PCI devices assigned to the
>>>> guest anyway. I also wonder about the pci_ prefix here - isn't
>>>> vpci="ecam" as unambiguous?
>>>
>>> This could be a problem, as we need to know upfront that this is required for a guest, so that PCI devices can be assigned later using xl.
>>
>> I'm afraid I don't understand: When there are no PCI devices that get
>> handed to a guest when it gets created, but it is supposed to be able
>> to have some assigned while already running, then we agree the option
>> is needed (afaict). When PCI devices get handed to the guest while it
>> gets constructed, where's the problem with inferring this option from
>> the presence of PCI devices in the guest configuration?
> 
> If the user wants to use xl pci-attach to attach a device to a guest at runtime, this guest must have a VPCI bus (even with no devices).
> If we do not have the vpci parameter in the configuration, this use case will not work anymore.

That's what everyone seems to agree with. Yet why is the parameter needed
when there _are_ PCI devices anyway? That's the "optional" that Stefano
was suggesting, aiui.
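To make the two cases being discussed concrete, a guest configuration might look like the sketch below. This is illustrative only: the `vpci` option and its `pci_ecam` value come from this thread's proposal, not from committed xl syntax.

```
# Case 1: devices assigned at boot. The need for a virtual PCI bus (and
# hence the guest DT node) can be inferred from the presence of the pci
# list, so no explicit vpci option should be needed:
pci = [ '08:00.0' ]

# Case 2: no devices at boot, but hotplug via "xl pci-attach" is wanted
# later. The bus cannot be inferred from an empty/absent pci list, so
# the option has to be explicit:
vpci = "pci_ecam"
```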

Jan


From xen-devel-bounces@lists.xenproject.org Fri Jul 17 14:11:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jul 2020 14:11:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jwR52-0005uY-Pp; Fri, 17 Jul 2020 14:11:16 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Z3hM=A4=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1jwR51-0005uT-GA
 for xen-devel@lists.xenproject.org; Fri, 17 Jul 2020 14:11:15 +0000
X-Inumbo-ID: 5f7895b0-c837-11ea-bb8b-bc764e2007e4
Received: from mail-wm1-x341.google.com (unknown [2a00:1450:4864:20::341])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5f7895b0-c837-11ea-bb8b-bc764e2007e4;
 Fri, 17 Jul 2020 14:11:14 +0000 (UTC)
Received: by mail-wm1-x341.google.com with SMTP id q15so15048169wmj.2
 for <xen-devel@lists.xenproject.org>; Fri, 17 Jul 2020 07:11:14 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=mime-version:from:date:message-id:subject:to:cc;
 bh=U2sdQPnze1MLKuOyuvhgbGRrHIX0NCpYikz7yBYrP2s=;
 b=veMcm7MPqDZno/rNed+Uxve/K3wX3zKeS9iLg0IUHnzbidZF8Y8lL3Jvf04KyqPqlN
 bjCZma7N628NLHjPxOpY76Uk4zKJDO4T6MfEmpj7zGkoixjr2132cHATW0hRajD9aYr7
 DLTdXIyK/pWyJmrBPCx9jrCfz7L4YC3lqgC5NstdaavS0aYwzWD4wSB5GcyQIqsOrWL2
 fpnJssjbCSRAlGZsRBK+o3XpiZ9cCEymOLdGr85UdPntq5Y4BPt3mEAYIXmh1CxUc7SZ
 USJcFAy6oAjD7QXIPVoZk0HmGOlCjOXruD9+PSWA8dRR5Zv3WX2GSEfC3DNPC2sEnYg2
 Iawg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:from:date:message-id:subject:to:cc;
 bh=U2sdQPnze1MLKuOyuvhgbGRrHIX0NCpYikz7yBYrP2s=;
 b=eHPBcEVBGJyAfZi8CJTAG1fhbCUu78PmeRkjne/R7ASdFdalzBcQmuu4TIdXIZUuP/
 qfbb/jFqE1NzNsNt/VxKq/bU4bT15S7XpqRTcycNdhzhwsH7LChpLUhl4QVXFdq30fP6
 LryrvNoF7J4uSKXY1uV3BckUjWy/7SMAcrh2CVg7kPxYcXgQ/1d/6vvK3bt8PCxsoqVQ
 HuxwIkisG/tMRvY5/Lb0B7qCJSWwFmhcnGEhSE4MhNeZmCL5MIEnFKdRjNR4QBvZlATf
 WAV+93OvgtiIzQnjUxdyoAXW+WzgvB44mnJJ3Izt2ptSnCHwCF/m3HFQRE0ooZEZLhh2
 416A==
X-Gm-Message-State: AOAM530xuNHBogg73pv11z0mGjen47NCwr1329Fhhiv96ll1bjWWdsAc
 gxpClsD27VT1bB4iVcBHgq+mCxogy/G84QqFoRKGvY2/3nQ=
X-Google-Smtp-Source: ABdhPJyDkKUC2h2JgdO2TeX6ielZxqfsAEnsEafrae3Ug4c8EwPNRyINL55G9FaJvOQK9hbdyCZIdB2rrBi0x3O10Wk=
X-Received: by 2002:a05:600c:218f:: with SMTP id
 e15mr8915485wme.63.1594995073104; 
 Fri, 17 Jul 2020 07:11:13 -0700 (PDT)
MIME-Version: 1.0
From: Oleksandr Tyshchenko <olekstysh@gmail.com>
Date: Fri, 17 Jul 2020 17:11:02 +0300
Message-ID: <CAPD2p-nthLq5NaU32u8pVaa-ub=a9-LOPenupntTYdS-cu31jQ@mail.gmail.com>
Subject: Virtio in Xen on Arm (based on IOREQ concept)
To: xen-devel <xen-devel@lists.xenproject.org>
Content-Type: multipart/alternative; boundary="000000000000c0409f05aaa3ba28"
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Oleksandr Andrushchenko <andr2000@gmail.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Artem Mygaiev <joculator@gmail.com>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

--000000000000c0409f05aaa3ba28
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

Hello all.

We would like to resume the Virtio in Xen on Arm activities. You can find
some background at [1] and the Virtio specification at [2].

*A few words about importance:*
There is an increasing interest, I would even say a requirement, to have
a flexible, generic and standardized cross-hypervisor solution for I/O
virtualization in the automotive and embedded areas. The target is quite
clear here. By providing a standardized interface and device models for
device para-virtualization in hypervisor environments, Virtio allows us
to move guest domains among different hypervisor systems without further
modification on the guest side. What is more, Virtio support is available
in Linux, Android and many other operating systems, and there are a lot
of existing Virtio drivers (frontends) which could simply be reused
without reinventing the wheel. Many organisations are pushing Virtio as a
common interface. To summarize, Virtio support would be a great feature
in Xen on Arm, in addition to the traditional Xen PV drivers, letting the
user choose which one to use.

*A few words about the solution:*
As mentioned at [1], in order to implement virtio-mmio, Xen on Arm
requires some implementation to forward guest MMIO accesses to a device
model. As it turned out, Xen on x86 contains most of the pieces needed to
use that transport (via the existing IOREQ concept). Julien has already
done a big amount of work in his PoC (xen/arm: Add support for Guest IO
forwarding to a device emulator). Using that code as a base, we managed
to create a completely functional PoC with a DomU running on a virtio
block device instead of a traditional Xen PV driver, without
modifications to the DomU Linux. Our work is mostly about rebasing
Julien's code on the current codebase (Xen 4.14-rc4), various tweaks to
be able to run the emulator (virtio-disk backend) in a domain other than
Dom0 (in our system we have a thin Dom0 and keep all backends in a driver
domain), misc fixes for our use-cases, and tool support for the
configuration. Unfortunately, Julien doesn't have much time to allocate
to this work anymore, so we would like to step in and continue.
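The trap-and-forward flow described above can be sketched in miniature. This is an illustration only: every struct and function name below is invented for the example and is not Xen's actual IOREQ interface; in the real series the request crosses from Xen to an emulator running in another domain rather than being a plain function call.

```c
#include <assert.h>
#include <stdint.h>

enum io_dir { IO_READ, IO_WRITE };

/* Simplified stand-in for an IOREQ: the trapped guest MMIO access,
 * packaged so an out-of-hypervisor device model can complete it. */
struct fake_ioreq {
    uint64_t addr;   /* guest-physical address of the access */
    uint64_t data;   /* value written, or value returned by a read */
    enum io_dir dir;
};

/* Toy device model: a single 64-bit "register" backing the region. */
static uint64_t virtio_reg;

static void device_model_handle(struct fake_ioreq *req)
{
    if (req->dir == IO_WRITE)
        virtio_reg = req->data;
    else
        req->data = virtio_reg;
}

/* What the hypervisor-side trap handler conceptually does: package the
 * faulting access, hand it to the emulator, and resume the guest with
 * the result. */
uint64_t trap_mmio(uint64_t addr, enum io_dir dir, uint64_t val)
{
    struct fake_ioreq req = { .addr = addr, .data = val, .dir = dir };
    device_model_handle(&req);  /* in Xen: notify + wait for the backend */
    return req.data;
}
```

A guest write to the region followed by a read returns the value the device model stored, without the "guest" knowing an emulator was involved.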

*A few words about the Xen code:*
You can find the whole Xen series at [5]. The patches are in RFC state
because some actions in the series should be reconsidered and implemented
properly. Before submitting the final code for review, the first IOREQ
patch (which is quite big) will be split into x86, Arm and common parts.
Please note that the x86 part hasn't even been build-tested so far and
could be broken by that series. Also, the series probably wants splitting
into adding IOREQ on Arm (which should be the focus first) and tool
support for the virtio-disk configuration (virtio-disk is going to be the
first Virtio driver) before going to the mailing list.

What I would like to add here: the IOREQ feature on Arm could be used not
only for implementing Virtio, but also for other use-cases which require
some emulator entity outside Xen, such as a custom PCI emulator
(non-ECAM compatible), for example.

*A few words about the backend(s):*
One of the main problems with Virtio in Xen on Arm is the absence of
"ready-to-use" and "out-of-QEMU" Virtio backends (at least I am not aware
of any). We managed to create a virtio-disk backend based on demu [3] and
kvmtool [4] using that series. It is worth mentioning that although
Xenbus/Xenstore is not supposed to be used with native Virtio, that
interface was chosen just to pass configuration from the toolstack to the
backend and to notify it about guest domain creation/destruction (I think
this is not bad, since backends are usually tied to the hypervisor and
can use the services it provides); the most important thing here is that
the whole Virtio subsystem in the guest was left unmodified. The backend
wants some cleanup and, probably, refactoring. We plan to publish it in a
while.

Our next plan is to start preparing the series for review. Any feedback
would be highly appreciated.

[1]
https://lists.xenproject.org/archives/html/xen-devel/2019-07/msg01746.html
[2]
https://docs.oasis-open.org/virtio/virtio/v1.1/cs01/virtio-v1.1-cs01.html
[3] https://xenbits.xen.org/gitweb/?p=people/pauldu/demu.git;a=summary
[4] https://git.kernel.org/pub/scm/linux/kernel/git/will/kvmtool.git/
[5] https://github.com/xen-troops/xen/commits/ioreq_4.14_ml

-- 
Regards,

Oleksandr Tyshchenko

--000000000000c0409f05aaa3ba28
Content-Type: text/html; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

<div dir=3D"ltr">Hello all.<br><br>We would like to resume Virtio in Xen on=
 Arm activities. You can find some<br>background at [1] and Virtio specific=
ation at [2].<br><br>*A few words about importance:*<br>There is an increas=
ing interest, I would even say, the requirement to have<br>flexible, generi=
c and standardized cross-hypervisor solution for I/O virtualization<br>in t=
he automotive and embedded areas. The target is quite clear here.<br>Provid=
ing a standardized interface and device models for device para-virtualizati=
on<br>in hypervisor environments, Virtio interface allows us to move Guest =
domains<br>among different hypervisor systems without further modification =
at the Guest side.<br>What is more that Virtio support is available in Linu=
x, Android and many other<br>operating systems and there are a lot of exist=
ing Virtio drivers (frontends)<br>which could be just reused without reinve=
nting the wheel. Many organisations push<br>Virtio direction as a common in=
terface. To summarize, Virtio support would be<br>the great feature in Xen =
on Arm in addition to traditional Xen PV drivers for<br>the user to be able=
 to choose which one to use.<br><br>*A few word about solution:*<br>As it w=
as mentioned at [1], in order to implement virtio-mmio Xen on Arm requires<=
br>some implementation to forward guest MMIO access to a device model. And =
as it<br>turned out the Xen on x86 contains most of the pieces to be able t=
o use that<br>transport (via existing IOREQ concept). Julien has already do=
ne a big amount<br>of work in his PoC (xen/arm: Add support for Guest IO fo=
rwarding to a device emulator).<br>Using that code as a base we managed to =
create a completely functional PoC with DomU<br>running on virtio block dev=
ice instead of a traditional Xen PV driver without<br>modifications to DomU=
 Linux. Our work is mostly about rebasing Julien&#39;s code on the actual<b=
r>codebase (Xen 4.14-rc4), various tweeks to be able to run emulator (virti=
o-disk backend)<br>in other than Dom0 domain (in our system we have thin Do=
m0 and keep all backends<br>in driver domain), misc fixes for our use-cases=
 and tool support for the configuration.<br>Unfortunately, Julien doesn=E2=
=80=99t have much time to allocate on the work anymore,<br>so we would like=
 to step in and continue.<br><br>*A few word about the Xen code:*<br>You ca=
n find the whole Xen series at [5]. The patches are in RFC state because<br=
>some actions in the series should be reconsidered and implemented properly=
. <br>Before submitting the final code for the review the first IOREQ patch=
 (which is quite<br>big) will be split into x86, Arm and common parts. Plea=
se note, x86 part wasn=E2=80=99t<br>even build-tested so far and could be b=
roken with that series. Also the series probably<br>wants splitting into ad=
ding IOREQ on Arm (should be focused first) and tools support<br>for the vi=
rtio-disk (which is going to be the first Virtio driver) configuration befo=
re going<div>into the mailing list.<div><br><div>What I would like to add h=
ere, the IOREQ feature on Arm could be used not only</div><div>for implemen=
ting Virtio, but for other use-cases which require some emulator entity</di=
v><div>outside Xen such as custom PCI emulator (non-ECAM compatible) for ex=
ample.<br><div><br>*A few word about the backend(s):*<br>One of the main pr=
oblems with Virtio in Xen on Arm is the absence of<br>=E2=80=9Cready-to-use=
=E2=80=9D and =E2=80=9Cout-of-Qemu=E2=80=9D Virtio backends (I least am not=
 aware of).<br>We managed to create virtio-disk backend based on demu [3] a=
nd kvmtool [4] using<br>that series. It is worth mentioning that although X=
enbus/Xenstore is not supposed<br>to be used with native Virtio, that inter=
face was chosen to just pass configuration from toolstack<br>to the backend=
 and notify it about creating/destroying Guest domain (I think it is<br>not=
 bad since backends are usually tied to the hypervisor and can use services=
<br>provided by hypervisor), the most important thing here is that all Virt=
io subsystem<br>in the Guest was left unmodified. Backend wants some cleanu=
p and, probably,<br>refactoring. We have a plan to publish it in a while.</=
div><div><br>Our next plan is to start preparing series for the review. Any=
 feedback and would be<br>highly appreciated.<br><br>[1] <a href=3D"https:/=
/lists.xenproject.org/archives/html/xen-devel/2019-07/msg01746.html">https:=
//lists.xenproject.org/archives/html/xen-devel/2019-07/msg01746.html</a><br=
>[2] <a href=3D"https://docs.oasis-open.org/virtio/virtio/v1.1/cs01/virtio-=
v1.1-cs01.html">https://docs.oasis-open.org/virtio/virtio/v1.1/cs01/virtio-=
v1.1-cs01.html</a><br>[3] <a href=3D"https://xenbits.xen.org/gitweb/?p=3Dpe=
ople/pauldu/demu.git;a=3Dsummary">https://xenbits.xen.org/gitweb/?p=3Dpeopl=
e/pauldu/demu.git;a=3Dsummary</a><br>[4] <a href=3D"https://git.kernel.org/=
pub/scm/linux/kernel/git/will/kvmtool.git/">https://git.kernel.org/pub/scm/=
linux/kernel/git/will/kvmtool.git/</a><br>[5] <a href=3D"https://github.com=
/xen-troops/xen/commits/ioreq_4.14_ml">https://github.com/xen-troops/xen/co=
mmits/ioreq_4.14_ml</a><br clear=3D"all"><div><br></div>-- <br><div dir=3D"=
ltr" class=3D"gmail_signature" data-smartmail=3D"gmail_signature"><div dir=
=3D"ltr"><div><div dir=3D"ltr"><div><div dir=3D"ltr"><span style=3D"backgro=
und-color:rgb(255,255,255)"><font size=3D"2"><span style=3D"color:rgb(51,51=
,51);font-family:Arial,sans-serif">Regards,</span></font></span></div><div =
dir=3D"ltr"><br></div><div dir=3D"ltr"><div><span style=3D"background-color=
:rgb(255,255,255)"><font size=3D"2">Oleksandr Tyshchenko</font></span></div=
></div></div></div></div></div></div></div></div></div></div></div>

--000000000000c0409f05aaa3ba28--


From xen-devel-bounces@lists.xenproject.org Fri Jul 17 14:12:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jul 2020 14:12:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jwR6H-0005zD-4V; Fri, 17 Jul 2020 14:12:33 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=/fKj=A4=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jwR6F-0005z7-In
 for xen-devel@lists.xenproject.org; Fri, 17 Jul 2020 14:12:31 +0000
X-Inumbo-ID: 8d4117b0-c837-11ea-9615-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 8d4117b0-c837-11ea-9615-12813bfff9fa;
 Fri, 17 Jul 2020 14:12:30 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=OnFkgLxGwZ0rE9p/KTllMhA3UhQSmPfLpDB0tPklAZg=; b=k8fmz3joJNyByZBNdpogXcrfsD
 YLJMqFW/3gz4GzCQb2Lhx9Ds0aDiW4HnmZybe9Ou+Ocnjbsz8nCQdZ9Po1bbHAo2672fuAAc59hGa
 E2vQv+qiTQvF0QfQKIz7W7lixYOxubVMlWM+B/42NSJSnFPXjayUkV4qjOK5g/6tSeiY=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jwR6C-0007ZY-Uy; Fri, 17 Jul 2020 14:12:28 +0000
Received: from 54-240-197-227.amazon.com ([54.240.197.227]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jwR6C-0000aG-Ls; Fri, 17 Jul 2020 14:12:28 +0000
Subject: Re: PCI devices passthrough on Arm design proposal
To: Jan Beulich <jbeulich@suse.com>
References: <3F6E40FB-79C5-4AE8-81CA-E16CA37BB298@arm.com>
 <BD475825-10F6-4538-8294-931E370A602C@arm.com>
 <8ac91a1b-e6b3-0f2b-0f23-d7aff100936d@xen.org>
 <c7d5a084-8111-9f43-57e1-bcf2bd822f5b@xen.org>
 <8cae3706-7171-3f29-7b68-b5e6f26bc2b7@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <a8e4c68b-0ba8-4f35-6a07-98a9e68b4a6f@xen.org>
Date: Fri, 17 Jul 2020 15:12:26 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:68.0)
 Gecko/20100101 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <8cae3706-7171-3f29-7b68-b5e6f26bc2b7@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Rahul Singh <Rahul.Singh@arm.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 nd <nd@arm.com>, Julien Grall <julien.grall.oss@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi,

On 17/07/2020 14:59, Jan Beulich wrote:
> On 17.07.2020 15:50, Julien Grall wrote:
>> (Resending to the correct ML)
>> On 17/07/2020 14:23, Julien Grall wrote:
>>> On 16/07/2020 18:02, Rahul Singh wrote:
>>>> # Discovering PCI devices:
>>>>
>>>> PCI-PCIe enumeration is the process of detecting the devices
>>>> connected to a host. It is the responsibility of the hardware domain
>>>> or boot firmware to do the PCI enumeration and configure the BARs,
>>>> PCI capabilities, and MSI/MSI-X configuration.
>>>>
>>>> PCI-PCIe enumeration in Xen is not feasible for the configuration
>>>> part, as it would require a lot of code inside Xen, which would in
>>>> turn require a lot of maintenance. Added to this, many platforms
>>>> require quirks in that part of the PCI code, which would greatly
>>>> increase Xen's complexity. Once the hardware domain has enumerated
>>>> the devices, it will communicate them to Xen via the hypercall below.
>>>>
>>>> #define PHYSDEVOP_pci_device_add        25
>>>> struct physdev_pci_device_add {
>>>>     uint16_t seg;
>>>>     uint8_t bus;
>>>>     uint8_t devfn;
>>>>     uint32_t flags;
>>>>     struct {
>>>>         uint8_t bus;
>>>>         uint8_t devfn;
>>>>     } physfn;
>>>>     /*
>>>>      * Optional parameters array.
>>>>      * First element ([0]) is the PXM domain associated with the
>>>>      * device (if XEN_PCI_DEV_PXM is set).
>>>>      */
>>>>     uint32_t optarr[XEN_FLEX_ARRAY_DIM];
>>>> };
>>>>
>>>> As the hypercall argument has the PCI segment number, Xen will access
>>>> the PCI config space based on this segment number and find the host
>>>> bridge corresponding to it. At this stage the host bridge is fully
>>>> initialized, so there will be no issue accessing the config space.
>>>>
>>>> Xen will add the PCI devices to the linked list maintained in Xen
>>>> using the function pci_add_device(). Xen will be aware of all the
>>>> PCI devices on the system, and all the devices will be added to the
>>>> hardware domain.
>>> I understand this is what x86 does. However, may I ask why we would
>>> want it for Arm?
> 
> Isn't it the normal thing to follow what there is, and instead provide
> reasons why a different approach is to be followed?

Not all the decisions on x86 have been great, and this is an opportunity 
to make things better rather than blindly following. For instance, platform 
devices were not assigned (back) to dom0 by default. Thanks to this 
decision, we were not affected by XSA-306.

> Personally I'd much
> prefer if we didn't have two fundamentally different PCI implementations
> in the tree. Perhaps some of what Arm wants or needs can actually
> benefit x86 as well, but this requires sufficiently much code sharing
> then.

Well, it would be nice to have similar implementations. But at the same 
time, we have different constraints. For instance, dom0 may disappear in 
the future on Arm.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Jul 17 14:16:29 2020
Subject: Re: [PATCH v6 01/11] memory: batch processing in acquire_resource()
To: Jan Beulich <jbeulich@suse.com>,
 Michał Leszczyński <michal.leszczynski@cert.pl>
From: Julien Grall <julien@xen.org>
Message-ID: <81a874b6-d64c-24e6-d7d0-a6ff57a2ba86@xen.org>
Date: Fri, 17 Jul 2020 15:16:13 +0100
In-Reply-To: <83100b6c-3a06-e379-bef0-fcbd8fdcce98@suse.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>, tamas.lengyel@intel.com,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, luwei.kang@intel.com,
 xen-devel@lists.xenproject.org

Hi,

On 15/07/2020 13:28, Jan Beulich wrote:
> May I ask that you drop the if() around this block of code?
> 
> Also, looking at this, I wonder whether it's a good idea to use the
> "start extent" model here anyway: You could as well update the
> struct (saying that it may be clobbered in the public header) and
> copy the whole thing back to the original guest struct. This would
> then remove the pretty arbitrary "UINT_MAX >> MEMOP_EXTENT_SHIFT"
> limit you currently need to enforce. The main question is whether
> we'd consider such an adjustment to an existing interface
> acceptable; there's an at least theoretical risk that it may break
> existing callers. Then again no existing caller can sensibly have
> specified a count above 32, and when the copying back would be
> limited to just the continuation case, no such caller would be
> affected in any way afaict.

I can see two risks with this approach:
    1) There is no requirement for the 'arg' buffer to be in read-write 
memory.
    2) A guest may expect to re-use the values within the structure after 
the hypercall.

So I think the "start extent" model is better. This is already used in 
other memory sub-hypercalls, so the risk of breaking existing users is more 
limited.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Jul 17 14:23:20 2020
Subject: Re: PCI devices passthrough on Arm design proposal
To: Julien Grall <julien@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <ab8b12cb-5172-47da-3f31-67affcac6621@suse.com>
Date: Fri, 17 Jul 2020 16:23:14 +0200
In-Reply-To: <a8e4c68b-0ba8-4f35-6a07-98a9e68b4a6f@xen.org>
Cc: Rahul Singh <Rahul.Singh@arm.com>,
 Roger Pau Monné <roger.pau@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 nd <nd@arm.com>, Julien Grall <julien.grall.oss@gmail.com>

On 17.07.2020 16:12, Julien Grall wrote:
> On 17/07/2020 14:59, Jan Beulich wrote:
>> Personally I'd much
>> prefer if we didn't have two fundamentally different PCI implementations
>> in the tree. Perhaps some of what Arm wants or needs can actually
>> benefit x86 as well, but this requires sufficiently much code sharing
>> then.
> 
> Well, it would be nice to have similar implementations. But at the same 
> time, we have different constraints. For instance, dom0 may disappear in 
> the future on Arm.

And becoming independent of Dom0 in this regard would be a benefit to
x86 as well, I think, irrespective of whether dom0less is to become a
thing there, too.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Jul 17 14:31:38 2020
Date: Fri, 17 Jul 2020 16:31:20 +0200
From: Roger Pau Monné <roger.pau@citrix.com>
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
Subject: Re: RFC: PCI devices passthrough on Arm design proposal
Message-ID: <20200717143120.GT7191@Air-de-Roger>
In-Reply-To: <3B8A1B9D-A101-4937-AC42-4F62BE7E677C@arm.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 nd <nd@arm.com>, Rahul Singh <Rahul.Singh@arm.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien.grall.oss@gmail.com>

On Fri, Jul 17, 2020 at 01:22:19PM +0000, Bertrand Marquis wrote:
> 
> 
> > On 17 Jul 2020, at 13:16, Roger Pau Monné <roger.pau@citrix.com> wrote:
> > 
> > I've wrapped the email to 80 columns in order to make it easier to
> > reply.
> > 
> > Thanks for doing this, I think the design is good, I have some
> > questions below so that I understand the full picture.
> > 
> > On Thu, Jul 16, 2020 at 05:10:05PM +0000, Rahul Singh wrote:
> >> Hello All,
> >> 
> >> Following up on discussion on PCI Passthrough support on ARM that we
> >> had at the XEN summit, we are submitting a Review For Comment and a
> >> design proposal for PCI passthrough support on ARM. Feel free to
> >> give your feedback.
> >> 
> >> The following describes the high-level design proposal of the PCI
> >> passthrough support and how the different modules within the system
> >> interact with each other to assign a particular PCI device to the
> >> guest.
> >> 
> >> # Title:
> >> 
> >> PCI devices passthrough on Arm design proposal
> >> 
> >> # Problem statement:
> >> 
> >> On Arm there is no support for assigning a PCI device to a guest. PCI
> >> device passthrough capability allows guests to have full access to
> >> some PCI devices, which then appear and behave as if they were
> >> physically attached to the guest operating system, while providing
> >> full isolation of the PCI devices.
> >> 
> >> A goal of this work is to also support Dom0less configurations, so the
> >> PCI backend/frontend drivers used on x86 shall not be used on Arm.
> >> Instead, the existing vPCI concept from x86 will be reused to implement
> >> a virtual PCI bus through I/O emulation, such that only assigned devices
> >> are visible to the guest and the guest can use the standard PCI
> >> driver.
> >> 
> >> Only Dom0 and Xen will have access to the real PCI bus; a guest will
> >> have direct access to the assigned device itself. IOMEM will be
> >> mapped to the guest, and interrupts will be redirected to the
> >> guest. The SMMU has to be configured correctly to allow DMA
> >> transactions.
> >> 
> >> ## Current state: Draft version
> >> 
> >> # Proposer(s): Rahul Singh, Bertrand Marquis
> >> 
> >> # Proposal:
> >> 
> >> This section describes the different subsystems needed to support
> >> PCI device passthrough and how these subsystems interact with each
> >> other to assign a device to the guest.
> >> 
> >> # PCI Terminology:
> >> 
> >> Host Bridge: The host bridge allows the PCI devices to talk to the
> >> rest of the computer.
> >> 
> >> ECAM: ECAM (Enhanced Configuration Access Mechanism) is a mechanism
> >> developed to allow PCIe configuration space access. The space
> >> available per function is 4KB.
> >> 
> >> # Discovering PCI Host Bridge in XEN:
> >> 
> >> In order to support PCI passthrough, Xen should be aware of all
> >> the PCI host bridges available on the system and should be able to
> >> access the PCI configuration space. Only ECAM configuration access is
> >> supported as of now. During boot, Xen will read the PCI device tree
> >> node “reg” property and will map the ECAM space into Xen memory
> >> using the “ioremap_nocache()” function.
> > 
> > What about ACPI? I think you should also mention the MMCFG table,
> > which should contain the information about the ECAM region(s) (or at
> > least that's how it works on x86). Just realized that you don't
> > support ACPI ATM, so you can ignore this comment.
> 
> Yes for now we did not consider ACPI support.

I have 0 knowledge of ACPI on Arm, but I would assume it's also using
the MCFG table in order to report ECAM regions to the OSPM. This is a
static table that's very simple to parse, and it contains the ECAM
IOMEM area and the segment assigned to that ECAM region.

This is better than DT because ACPI already assigns segment numbers to
each ECAM region.

Even if not currently supported in the code implemented so far,
describing the plan for its implementation here seems fine IMO, as
it's going to be slightly different from what you need to do when
using DT.

> > 
> >> 
> >> If there is more than one segment on the system, Xen will read the
> >> “linux,pci-domain” property from the device tree node and configure
> >> the host bridge segment number accordingly. All the PCI device tree
> >> nodes should have the “linux,pci-domain” property so that there will
> >> be no conflicts. During hardware domain boot, Linux will also use the
> >> same “linux,pci-domain” property and assign the domain number to the
> >> host bridge.
> > 
> > So it's my understanding that the PCI domain (or segment) is just an
> > abstract concept to differentiate all the Root Complexes present on
> > the system, but the host bridge itself is not aware of the segment
> > assigned to it in any way.
> > 
> > I'm not sure Xen and the hardware domain having matching segments is a
> > requirement, if you use vPCI you can match the segment (from Xen's
> > PoV) by just checking from which ECAM region the access has been
> > performed.
> > 
> > The only reason to require matching segment values between Xen and the
> > hardware domain is to allow using hypercalls against the PCI devices,
> > ie: to be able to use hypercalls to assign a device to a domain from
> > the hardware domain.
> > 
> > I have 0 understanding of DT or its spec, but why does this have a
> > 'linux,' prefix? The segment number is part of the PCI spec, and not
> > something specific to Linux IMO.
> 
> It is exactly right that this is only needed for the hypercall, when Dom0
> is doing the full enumeration and communicating the devices to Xen.
> In all other cases this can be deduced from the address of the access.

You also need the SBDF nomenclature in order to assign devices to
guests from the control domain, so at least there needs to be some
consensus between the hardware domain and Xen on the segment numbering
in that regard.

Same applies to dom0less mode: there needs to be some consensus about
the segment numbers used, so Xen can identify the devices assigned to
each guest without confusion.

> Regarding the DT entry, this is not coming from us; it is already
> defined this way in existing DTBs, and we just reuse the existing entry.

Is it possible to standardize the property and drop the linux prefix?

> > 
> >> 
> >> When Dom0 tries to access the PCI config space of a device, Xen
> >> will find the corresponding host bridge based on the segment number
> >> and access the config space assigned to that bridge.
> >> 
> >> Limitation:
> >> * Only PCI ECAM configuration space access is supported.
> >> * Device tree binding is supported as of now, ACPI is not supported.
> >> * Need to port the PCI host bridge access code to Xen to access the
> >>  configuration space (the generic one works, but lots of platforms will
> >>  require some specific code or quirks).
> >> 
> >> # Discovering PCI devices:
> >> 
> >> PCI-PCIe enumeration is the process of detecting devices connected
> >> to the host. It is the responsibility of the hardware domain or boot
> >> firmware to do the PCI enumeration and configure the BARs, PCI
> >> capabilities, and MSI/MSI-X configuration.
> >> 
> >> PCI-PCIe enumeration in Xen is not feasible for the configuration
> >> part as it would require a lot of code inside Xen, which would in
> >> turn require a lot of maintenance. On top of this, many platforms
> >> require quirks in that part of the PCI code, which would greatly
> >> increase Xen's complexity. Once the hardware domain enumerates the
> >> devices, it will communicate them to Xen via the hypercall below.
> >> 
> >> #define PHYSDEVOP_pci_device_add        25
> >> struct physdev_pci_device_add {
> >>    uint16_t seg;
> >>    uint8_t bus;
> >>    uint8_t devfn;
> >>    uint32_t flags;
> >>    struct {
> >>        uint8_t bus;
> >>        uint8_t devfn;
> >>    } physfn;
> >>    /*
> >>     * Optional parameters array.
> >>     * First element ([0]) is PXM domain associated with the device (if
> >>     * XEN_PCI_DEV_PXM is set)
> >>     */
> >>    uint32_t optarr[XEN_FLEX_ARRAY_DIM];
> >> };
> >> 
> >> As the hypercall argument has the PCI segment number, Xen will
> >> access the PCI config space based on this segment number and find
> >> the host bridge corresponding to it. At this stage the host bridge
> >> is fully initialized, so there will be no issue accessing the
> >> config space.
> >> 
> >> Xen will add the PCI devices to the linked list maintained in Xen
> >> using the function pci_add_device(). Xen will be aware of all the
> >> PCI devices on the system, and all the devices will be added to the
> >> hardware domain.
> >> 
> >> Limitations:
> >> * When PCI devices are added to Xen, the MSI capability is
> >>  not initialized inside Xen and not supported as of now.
> > 
> > I assume you will mask such capability and will prevent the guest (or
> > hardware domain) from interacting with it?
> 
> No, we will actually implement that part, but later. It is not supported
> in the RFC that we will submit.

OK, might be nice to note this somewhere, even if it's not implemented
right now. It might also be relevant to start thinking about which
capabilities you have to expose to guests, and how you will make those
safe. This could even be in a separate document, but ideally a design
document (or set of documents) should try to cover all the
implementation that will be done in order to support a feature.

> > 
> >> * The ACS capability is disabled for Arm as of now, as after enabling
> >>  it devices are not accessible.
> >> * A Dom0less implementation will require the capability inside Xen
> >>  to discover the PCI devices (without depending on Dom0 to declare
> >>  them to Xen).
> > 
> > I assume the firmware will properly initialize the host bridge and
> > configure the resources for each device, so that Xen just has to walk
> > the PCI space and find the devices.
> > 
> > TBH that would be my preferred method, because then you can get rid of
> > the hypercall.
> > 
> > Is there anyway for Xen to know whether the host bridge is properly
> > setup and thus the PCI bus can be scanned?
> > 
> > That way Arm could do something similar to x86, where Xen will scan
> > the bus and discover devices, but you could still provide the
> > hypercall in case the bus cannot be scanned by Xen (because it hasn't
> > been setup).
> 
> That is definitely the idea: to rely by default on the firmware doing this
> properly. I am not sure whether a proper enumeration could be detected
> reliably in all cases, so it would make sense to rely on Dom0 enumeration
> when a Xen command line argument is passed, as explained in one of Rahul's
> mails.

I assume Linux somehow knows when it needs to initialize the PCI root
complex before attempting to access the bus. Would it be possible to
add this logic to Xen so it can figure out on its own whether it is
safe to scan the PCI bus or whether it needs to wait for the hardware
domain to report the devices present?

> > 
> >> 
> >> # Enable the existing x86 virtual PCI support for ARM:
> >> 
> >> The existing vPCI support available for x86 is adapted for Arm. When
> >> a device is added to Xen via the hypercall
> >> “PHYSDEVOP_pci_device_add”, a vPCI handler for config space accesses
> >> is added to the PCI device to emulate it.
> >> 
> >> An MMIO trap handler for the PCI ECAM space is registered in Xen so
> >> that when a guest tries to access the PCI config space, Xen will
> >> trap the access and emulate reads/writes using vPCI instead of the
> >> real PCI hardware.
> >> 
> >> Limitation:
> >> * No handler is registered for the MSI configuration.
> > 
> > But you need to mask MSI/MSI-X capabilities in the config space in
> > order to prevent access from domains? (and by mask I mean remove from
> > the list of capabilities and prevent reads/writes to that
> > configuration space).
> > 
> > Note this is already implemented for x86, and I've tried to add arch_
> > hooks for arch specific stuff so that it could be reused by Arm. But
> > maybe this would require a different design document?
> 
> as said, we will handle MSI support in a separate document/step.
> 
> > 
> >> * Only legacy interrupts are supported and tested as of now; MSI is
> >>  not implemented or tested.
> >> 
> >> # Assign the device to the guest:
> >> 
> >> Assigning a PCI device from the hardware domain to the guest is done
> >> using the guest config option below. When the xl tool creates the
> >> domain, PCI devices will be assigned to the guest vPCI bus.
> >> 
> >> pci=[ "PCI_SPEC_STRING", "PCI_SPEC_STRING", ...]
> >> 
> >> The guest will only be able to access the assigned devices and see
> >> the bridges. The guest will not be able to access or see the devices
> >> that are not assigned to it.
> >> 
> >> Limitation:
> >> * As of now all the bridges in the PCI bus are seen by
> >>  the guest on the VPCI bus.
> > 
> > I don't think you need all of them, just the ones that are higher up
> > on the hierarchy of the device you are trying to passthrough?
> > 
> > Which kind of access do guest have to PCI bridges config space?
> 
> For now the bridges are read-only; no specific access is required by guests. 
> 
> > 
> > This should be limited to read-only accesses in order to be safe.
> > 
> > Emulating a PCI bridge in Xen using vPCI shouldn't be that
> > complicated, so you could likely replace the real bridges with
> > emulated ones. Or even provide a fake topology to the guest using an
> > emulated bridge.
> 
> Just showing all the bridges and keeping the hardware topology is the
> simplest solution for now. But maybe showing a different topology with
> only fake bridges could make sense and be implemented in the future.

Ack. I've also heard rumors of Xen on Arm people being very interested
in VirtIO support, in which case you might expose both fully emulated
VirtIO devices and PCI passthrough devices on the PCI bus, so it would
be good to spend some time thinking how those will fit together.

Will you allocate a separate segment unused by hardware to expose the
fully emulated PCI devices (VirtIO)?

Will OSes support having several segments?

If not you likely need to have emulated bridges so that you can adjust
the bridge window accordingly to fit the passthrough and the emulated
MMIO space, and likely be able to expose passthrough devices using a
different topology than the host one.

> > 
> >> 
> >> # Emulated PCI device tree node in libxl:
> >> 
> >> Libxl creates a virtual PCI device tree node in the device tree
> >> to enable the guest OS to discover the virtual PCI bus during guest
> >> boot. We introduced the new config option [vpci="pci_ecam"] for
> >> guests. When this config option is enabled in a guest configuration,
> >> a PCI device tree node will be created in the guest device tree.
> >> 
> >> A new area has been reserved in the Arm guest physical map at which
> >> the vPCI bus is declared in the device tree (reg and ranges
> >> parameters of the node). A trap handler for PCI ECAM accesses from
> >> the guest has been registered at the defined address and redirects
> >> requests to the vPCI driver in Xen.
> > 
> > Can't you deduce the requirement of such DT node based on the presence
> > of a 'pci=' option in the same config file?
> > 
> > Also I wouldn't discard that in the future you might want to use
> > different emulators for different devices, so it might be helpful to
> > introduce something like:
> > 
> > pci = [ '08:00.0,backend=vpci', '09:00.0,backend=xenpt', '0a:00.0,backend=qemu', ... ]
> > 
> > For the time being Arm will require backend=vpci for all the passed
> > through devices, but I wouldn't rule out this changing in the future.
> 
> We need it for the case where no device is declared in the config file and the user
> wants to add devices using xl later. In this case we must have the DT node for it
> to work. 

There's a passthrough xl.cfg option for that already, so that if you
don't want to add any PCI passthrough devices at creation time but
rather hotplug them you can set:

passthrough=enabled

And it should setup the domain to be prepared to support hot
passthrough, including the IOMMU [0].

> Regarding possible backends, this could be added in the future if required. 
> 
> > 
> >> Limitation:
> >> * Only one PCI device tree node is supported as of now.
> >> 
> >> # BAR value and IOMEM mapping:
> >> 
> >> A Linux guest will do the PCI enumeration based on the areas reserved
> >> for ECAM and IOMEM ranges in the vPCI device tree node. Once a PCI
> >> device is assigned to the guest, Xen will map the guest PCI IOMEM
> >> region to the real physical IOMEM region, only for the assigned
> >> devices.
> > 
> > PCI IOMEM == BARs? Or are you referring to the ECAM access window?
> 
> Here by PCI IOMEM we mean the IOMEM spaces referred to by the BARs
> of the PCI device

OK, it might be worth using PCI BARs explicitly rather than PCI IOMEM,
as I think the latter is likely to be confused with the config space IOMEM.

> > 
> >> As of now we have not modified the existing VPCI code to map the
> >> guest PCI IOMEM region to the real physical IOMEM region. We used
> >> the existing guest “iomem” config option to map the region. For
> >> example:
> >>   Guest reserved IOMEM region: 0x04020000
> >>   Real physical IOMEM region:  0x50000000
> >>   IOMEM size:                  128MB
> >>   iomem config will be: iomem = ["0x50000,0x8000@0x4020"]
> >> 
> >> There is no need to map the ECAM space as XEN already have access to
> >> the ECAM space and XEN will trap ECAM accesses from the guest and
> >> will perform read/write on the VPCI bus.
> >> 
> >> IOMEM access will not be trapped and the guest will directly access
> >> the IOMEM region of the assigned device via stage-2 translation.
> >> 
> >> In the same way, we mapped the assigned device's IRQs to the guest
> >> using the config option below: irqs = [ NUMBER, NUMBER, ...]
> > 
> > Are you providing this for the hardware domain also? Or are irqs
> > fetched from the DT in that case?
> 
> This will only be used temporarily until we have proper support to do this
> automatically when a device is assigned. Right now our implementation
> requires the user to explicitly redirect the interrupts required by the
> assigned PCI devices, but in the final version this entry will not be
> needed.

Right, I'm not sure whether this should be marked somehow as **
WORKAROUND ** or ** TEMPORARY ** in the document, since it's not
supposed to be part of the final implementation.

> Dom0 relies on the entries declared in the DT.
> 
> > 
> >> Limitation:
> >> * Need to avoid the “iomem” and “irq” guest config
> >>  options and map the IOMEM region and IRQ at the same time when a
> >>  device is assigned to the guest using the “pci” guest config option
> >>  when xl creates the domain.
> >> * Emulated BAR values on the VPCI bus should reflect the IOMEM mapped
> >>  address.
> > 
> > It was my understanding that you would identity map the BAR into the
> > domU stage-2 translation, and that changes by the guest won't be
> > allowed.
> 
> In fact this is not possible, and we have to remap at a different address,
> because the guest physical memory map is fixed by Xen on Arm and we must
> follow the same design. Otherwise this would only work if the BARs point
> to an unused address; on Juno, for example, they conflict with the guest
> RAM address.

This was not clear from my reading of the document, could you please
clarify on the next version that the guest physical memory map is
always the same, and that BARs from PCI devices cannot be identity
mapped to the stage-2 translation and instead are relocated somewhere
else?
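
For illustration, the kind of relocation described above could be sketched
as below. This is purely a sketch under assumptions: the window constants
and function name are made up, and this is not Xen's actual guest memory
map or vPCI code.

```python
# Hypothetical: relocate host BARs into a fixed guest MMIO window.
# GUEST_MMIO_BASE/GUEST_MMIO_SIZE are made-up values for this sketch.
GUEST_MMIO_BASE = 0x0400_0000
GUEST_MMIO_SIZE = 0x1000_0000

def relocate_bars(host_bars):
    """Map each (host_addr, size) BAR to a guest address in the window.

    BARs are size-aligned by the PCI spec, so natural alignment is kept;
    placing the largest BARs first minimises alignment padding.
    """
    mapping = {}
    cursor = GUEST_MMIO_BASE
    for host_addr, size in sorted(host_bars, key=lambda b: -b[1]):
        cursor = (cursor + size - 1) & ~(size - 1)  # align up to BAR size
        if cursor + size > GUEST_MMIO_BASE + GUEST_MMIO_SIZE:
            raise ValueError("guest MMIO window exhausted")
        mapping[host_addr] = cursor
        cursor += size
    return mapping
```

The emulated BAR values exposed on the VPCI bus would then report the
relocated guest addresses rather than the host ones.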

I'm then confused about what you do with bridge windows, do you also
trap and adjust them to report a different IOMEM region?

Above you mentioned that read-only access was given to bridge
registers, but I guess some are also emulated in order to report
matching IOMEM regions?

Roger.

[0] https://xenbits.xen.org/docs/unstable/man/xl.cfg.5.html#Other-Options


From xen-devel-bounces@lists.xenproject.org Fri Jul 17 14:35:09 2020
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Rahul Singh <Rahul.Singh@arm.com>,
 Julien Grall <julien.grall.oss@gmail.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 nd <nd@arm.com>, Roger Pau Monné <roger.pau@citrix.com>
Subject: Re: RFC: PCI devices passthrough on Arm design proposal
Date: Fri, 17 Jul 2020 14:34:55 +0000
Message-ID: <DA19A9EC-A828-4EBC-BCAA-D1D9E4F222BB@arm.com>
In-Reply-To: <a5007a6c-bdfe-04d4-8107-53cb222b95e8@suse.com>



> On 17 Jul 2020, at 16:06, Jan Beulich <jbeulich@suse.com> wrote:
> 
> On 17.07.2020 15:59, Bertrand Marquis wrote:
>> 
>> 
>>> On 17 Jul 2020, at 15:19, Jan Beulich <jbeulich@suse.com> wrote:
>>> 
>>> On 17.07.2020 15:14, Bertrand Marquis wrote:
>>>>> On 17 Jul 2020, at 10:10, Jan Beulich <jbeulich@suse.com> wrote:
>>>>> On 16.07.2020 19:10, Rahul Singh wrote:
>>>>>> # Emulated PCI device tree node in libxl:
>>>>>> 
>>>>>> Libxl is creating a virtual PCI device tree node in the device tree to enable the guest OS to discover the virtual PCI during guest boot. We introduced the new config option [vpci="pci_ecam"] for guests. When this config option is enabled in a guest configuration, a PCI device tree node will be created in the guest device tree.
>>>>> 
>>>>> I support Stefano's suggestion for this to be an optional thing, i.e.
>>>>> there to be no need for it when there are PCI devices assigned to the
>>>>> guest anyway. I also wonder about the pci_ prefix here - isn't
>>>>> vpci="ecam" as unambiguous?
>>>> 
>>>> This could be a problem as we need to know that this is required for a guest upfront so that PCI devices can be assigned afterwards using xl.
>>> 
>>> I'm afraid I don't understand: When there are no PCI devices that get
>>> handed to a guest when it gets created, but it is supposed to be able
>>> to have some assigned while already running, then we agree the option
>>> is needed (afaict). When PCI devices get handed to the guest while it
>>> gets constructed, where's the problem to infer this option from the
>>> presence of PCI devices in the guest configuration?
>> 
>> If the user wants to use xl pci-attach to attach a device to a guest at runtime, this guest must have a VPCI bus (even with no devices).
>> If we do not have the vpci parameter in the configuration, this use case will no longer work.
> 
> That's what everyone looks to agree with. Yet why is the parameter needed
> when there _are_ PCI devices anyway? That's the "optional" that Stefano
> was suggesting, aiui.

I agree that in this case the parameter could be optional and only required
if no PCI device is assigned directly in the guest configuration.

Bertrand

> 
> Jan
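
To make the agreed behaviour concrete, a guest configuration combining the
options discussed in this thread might look like the fragment below. This is
hypothetical: the vpci option is the proposal's new flag, and its exact name
and semantics were still under discussion at this point.

```
vpci = "pci_ecam"          # emulated PCI bus; needed for xl pci-attach hotplug
pci  = [ "0000:03:00.0" ]  # a device assigned at creation could imply vpci
```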



From xen-devel-bounces@lists.xenproject.org Fri Jul 17 14:42:01 2020
Date: Fri, 17 Jul 2020 16:41:39 +0200
From: Roger Pau Monné <roger.pau@citrix.com>
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
Cc: Rahul Singh <Rahul.Singh@arm.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 nd <nd@arm.com>, Julien Grall <julien.grall.oss@gmail.com>
Subject: Re: RFC: PCI devices passthrough on Arm design proposal
Message-ID: <20200717144139.GU7191@Air-de-Roger>
In-Reply-To: <DA19A9EC-A828-4EBC-BCAA-D1D9E4F222BB@arm.com>

On Fri, Jul 17, 2020 at 02:34:55PM +0000, Bertrand Marquis wrote:
> 
> 
> > On 17 Jul 2020, at 16:06, Jan Beulich <jbeulich@suse.com> wrote:
> > 
> > On 17.07.2020 15:59, Bertrand Marquis wrote:
> >> 
> >> 
> >>> On 17 Jul 2020, at 15:19, Jan Beulich <jbeulich@suse.com> wrote:
> >>> 
> >>> On 17.07.2020 15:14, Bertrand Marquis wrote:
> >>>>> On 17 Jul 2020, at 10:10, Jan Beulich <jbeulich@suse.com> wrote:
> >>>>> On 16.07.2020 19:10, Rahul Singh wrote:
> >>>>>> # Emulated PCI device tree node in libxl:
> >>>>>> 
> >>>>>> Libxl is creating a virtual PCI device tree node in the device tree to enable the guest OS to discover the virtual PCI during guest boot. We introduced the new config option [vpci="pci_ecam"] for guests. When this config option is enabled in a guest configuration, a PCI device tree node will be created in the guest device tree.
> >>>>> 
> >>>>> I support Stefano's suggestion for this to be an optional thing, i.e.
> >>>>> there to be no need for it when there are PCI devices assigned to the
> >>>>> guest anyway. I also wonder about the pci_ prefix here - isn't
> >>>>> vpci="ecam" as unambiguous?
> >>>> 
> >>>> This could be a problem as we need to know that this is required for a guest upfront so that PCI devices can be assigned after using xl. 
> >>> 
> >>> I'm afraid I don't understand: When there are no PCI devices that get
> >>> handed to a guest when it gets created, but it is supposed to be able
> >>> to have some assigned while already running, then we agree the option
> >>> is needed (afaict). When PCI devices get handed to the guest while it
> >>> gets constructed, where's the problem to infer this option from the
> >>> presence of PCI devices in the guest configuration?
> >> 
> >> If the user wants to use xl pci-attach to attach a device to a guest at runtime, this guest must have a VPCI bus (even with no devices).
> >> If we do not have the vpci parameter in the configuration, this use case will no longer work.
> > 
> > That's what everyone looks to agree with. Yet why is the parameter needed
> > when there _are_ PCI devices anyway? That's the "optional" that Stefano
> > was suggesting, aiui.
> 
> I agree that in this case the parameter could be optional and only required if no PCI device is assigned directly in the guest configuration.

Where will the ECAM region(s) appear on the guest physmap?

Are you going to re-use the same locations as on the physical
hardware, or will they appear somewhere else?

Roger.


From xen-devel-bounces@lists.xenproject.org Fri Jul 17 14:44:31 2020
Subject: Re: [PATCH 4/8] Arm: prune #include-s needed by domain.h
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>
From: Julien Grall <julien@xen.org>
Message-ID: <d836dc7f-017b-5048-02de-d1cb291fbc3b@xen.org>
Date: Fri, 17 Jul 2020 15:44:25 +0100
In-Reply-To: <150525bb-1c48-c331-3212-eff18bc4c13d@suse.com>



On 15/07/2020 11:39, Jan Beulich wrote:
> asm/domain.h is a dependency of xen/sched.h, and hence should not itself
> include xen/sched.h. Nor should any of the other #include-s used by it.
> While at it, also drop two other #include-s that aren't needed by this
> particular header.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> --- a/xen/include/asm-arm/domain.h
> +++ b/xen/include/asm-arm/domain.h
> @@ -2,7 +2,7 @@
>   #define __ASM_DOMAIN_H__
>   
>   #include <xen/cache.h>
> -#include <xen/sched.h>
> +#include <xen/timer.h>
>   #include <asm/page.h>
>   #include <asm/p2m.h>
>   #include <asm/vfp.h>
> @@ -11,8 +11,6 @@
>   #include <asm/vgic.h>
>   #include <asm/vpl011.h>
>   #include <public/hvm/params.h>
> -#include <xen/serial.h>

While we don't need rbtree.h, we technically need serial.h for using
vuart_info.

I would rather prefer that headers not be included implicitly whenever it
can be avoided.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Jul 17 14:48:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jul 2020 14:48:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jwReX-0000nP-Uz; Fri, 17 Jul 2020 14:47:57 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zdYj=A4=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1jwReW-0000nJ-8M
 for xen-devel@lists.xenproject.org; Fri, 17 Jul 2020 14:47:56 +0000
X-Inumbo-ID: 7f83ef9e-c83c-11ea-961f-12813bfff9fa
Received: from EUR05-AM6-obe.outbound.protection.outlook.com (unknown
 [40.107.22.85]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7f83ef9e-c83c-11ea-961f-12813bfff9fa;
 Fri, 17 Jul 2020 14:47:55 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com; 
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=xT/wGwwoOzWVt+0lVfy7eq7m1fOUmHHNnveMSLal9Ak=;
 b=qVWTQ25go3wQpUjiFug91zZuh/e358U8Kbqdfn7DpK/8BgEyY/Da0NiNPdZKZ/D56XnBRvCL1RXWdD1xwv5iBI2f4EOfRM1A08I4X8UI30ZgdCYuZ7rk01Ks8QFMYOVh9ck5FE9aZIvpAEOyjvE/UhDInLE7VTz9+lJC1YA6yXg=
Received: from AM6PR0202CA0058.eurprd02.prod.outlook.com
 (2603:10a6:20b:3a::35) by AM6PR08MB3237.eurprd08.prod.outlook.com
 (2603:10a6:209:4c::19) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3195.18; Fri, 17 Jul
 2020 14:47:53 +0000
Received: from AM5EUR03FT014.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:3a:cafe::1f) by AM6PR0202CA0058.outlook.office365.com
 (2603:10a6:20b:3a::35) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3195.18 via Frontend
 Transport; Fri, 17 Jul 2020 14:47:53 +0000
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org;
 dmarc=bestguesspass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT014.mail.protection.outlook.com (10.152.16.130) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3195.18 via Frontend Transport; Fri, 17 Jul 2020 14:47:53 +0000
Received: ("Tessian outbound 8f45de5545d6:v62");
 Fri, 17 Jul 2020 14:47:52 +0000
X-CheckRecipientChecked: true
X-CR-MTA-CID: 2993e295d1b03370
X-CR-MTA-TID: 64aa7808
Received: from aec9d02da0f5.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 EA6E9535-35BE-4E88-AA24-ED607C662FEF.1; 
 Fri, 17 Jul 2020 14:47:47 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id aec9d02da0f5.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 17 Jul 2020 14:47:47 +0000
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=iNYSPn6jnIiajt6EPIwYs9mbSE5Cgj3WhuOz8o2GQT2Gf5PD958nXQLsBuvDBihjZ/vZfralqyV8X3QV6Go0pQtguw07rMr2MkoJbErs9vgrqGVAoKwjayAa0hwSHC7lRJ08ZDRL7CRcNS4zzV9e7NFy5SbcOvMIEjrLTEP+PxY0tyBeHBvYHJwI00aIUJ4sIv4aHKRwgKzkOSx6gJVbIb9iVr75QhOMGeWyJbdhMfYGDMsMikfJqCSApj+wh8YlU6WE0DfHEYl3YitpQEWZxekBh6Os3T/Mu1/HAfuUuNRCV2OLPANw7kQ42SS7OIzIrw9SikJjStEwdlFImxfgJg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; 
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=xT/wGwwoOzWVt+0lVfy7eq7m1fOUmHHNnveMSLal9Ak=;
 b=W7tQSu1LE2d5NXPwYFBrxW2Cd+Fw43wSIFFRGp7gb5G8LF7V1oI5WgDKqb8V4Z7Bsz5Xs6afwR6W+WW2vPRc0HqM+wNdYWaXkGMnsYtIujG9Wx/aM+k1ZzJmV4uf2WtjvqJGOMV4+IZ2PwwLLk97F1ZiUh7JIM1rB7Uh2LWaFzXA29sF1d0r4yfvRDESV2NPsrXpiM53qPZpcrK48UBG7nOIqcsf0cjLzHDzHMcIs73CpfYL9x/N+0QfcK6vdeLLGBfDMiMPRffkyqrChI8UGDT6kdBWZJ1IxYER5EeXldm+P6SbmDp2UZuQ9XPRMA8xMwJbI+KElHCcxWW/wR/1XQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com; 
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=xT/wGwwoOzWVt+0lVfy7eq7m1fOUmHHNnveMSLal9Ak=;
 b=qVWTQ25go3wQpUjiFug91zZuh/e358U8Kbqdfn7DpK/8BgEyY/Da0NiNPdZKZ/D56XnBRvCL1RXWdD1xwv5iBI2f4EOfRM1A08I4X8UI30ZgdCYuZ7rk01Ks8QFMYOVh9ck5FE9aZIvpAEOyjvE/UhDInLE7VTz9+lJC1YA6yXg=
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DB6PR0802MB2167.eurprd08.prod.outlook.com (2603:10a6:4:83::12) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3174.22; Fri, 17 Jul
 2020 14:47:45 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::4001:43ad:d113:46a8]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::4001:43ad:d113:46a8%5]) with mapi id 15.20.3174.028; Fri, 17 Jul 2020
 14:47:45 +0000
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Julien Grall <julien@xen.org>
Subject: Re: PCI devices passthrough on Arm design proposal
Thread-Topic: PCI devices passthrough on Arm design proposal
Thread-Index: AQHWW4kYTVU0hTDyYEitKlUuU5vZlKkLzBPIgAAPUAA=
Date: Fri, 17 Jul 2020 14:47:45 +0000
Message-ID: <865D5A77-85D4-4A88-A228-DDB70BDB3691@arm.com>
References: <3F6E40FB-79C5-4AE8-81CA-E16CA37BB298@arm.com>
 <BD475825-10F6-4538-8294-931E370A602C@arm.com>
 <8ac91a1b-e6b3-0f2b-0f23-d7aff100936d@xen.org>
 <c7d5a084-8111-9f43-57e1-bcf2bd822f5b@xen.org>
In-Reply-To: <c7d5a084-8111-9f43-57e1-bcf2bd822f5b@xen.org>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
Authentication-Results-Original: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [91.160.77.188]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: bbd5d4ea-21dd-4551-bd3a-08d82a606289
x-ms-traffictypediagnostic: DB6PR0802MB2167:|AM6PR08MB3237:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS: <AM6PR08MB3237F40ED0236644DA07E2089D7C0@AM6PR08MB3237.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:10000;OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
Content-Type: text/plain; charset="utf-8"
Content-ID: <6DDC15243A1CBD4D82779930B49FC560@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB6PR0802MB2167
Original-Authentication-Results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped: AM5EUR03FT014.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs: 03efb1c2-46a0-4718-303b-08d82a605dd6
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Jul 2020 14:47:53.0516 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: bbd5d4ea-21dd-4551-bd3a-08d82a606289
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d; Ip=[63.35.35.123];
 Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource: AM5EUR03FT014.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM6PR08MB3237
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Rahul Singh <Rahul.Singh@arm.com>,
 =?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 nd <nd@arm.com>, Julien Grall <julien.grall.oss@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> On 17 Jul 2020, at 15:50, Julien Grall <julien@xen.org> wrote:
>
> (Resending to the correct ML)
>
> On 17/07/2020 14:23, Julien Grall wrote:
>> On 16/07/2020 18:02, Rahul Singh wrote:
>>> Hello All,
>> Hi,
>>> Following up on the discussion on PCI Passthrough support on ARM that we had at the XEN summit, we are submitting a Review For Comment and a design proposal for PCI passthrough support on ARM. Feel free to give your feedback.
>>>
>>> The following describes the high-level design proposal of the PCI passthrough support and how the different modules within the system interact with each other to assign a particular PCI device to the guest.
>> There was an attempt a few years ago to get a design document for PCI passthrough (see [1]). I would suggest to have a look at the thread as I think it would help to have an overview of all the components (e.g. MSI controllers...) even if they will not be implemented at the beginning.

Thanks for the pointer. This design is a first draft that we will improve and complete along the way.

>>>
>>> # Title:
>>>
>>> PCI devices passthrough on Arm design proposal
>>>
>>> # Problem statement:
>>>
>>> On ARM there is no support to assign a PCI device to a guest. PCI device passthrough capability allows guests to have full access to some PCI devices. PCI device passthrough allows PCI devices to appear and behave as if they were physically attached to the guest operating system and provides full isolation of the PCI devices.
>>>
>>> A goal of this work is to also support Dom0Less configurations, so the PCI backend/frontend drivers used on x86 shall not be used on Arm. It will use the existing VPCI concept from x86 and implement the virtual PCI bus through IO emulation such that only assigned devices are visible to the guest and the guest can use the standard PCI driver.
>>>
>>> Only Dom0 and Xen will have access to the real PCI bus; a guest will have direct access to the assigned device itself. IOMEM memory will be mapped to the guest and interrupts will be redirected to the guest. The SMMU has to be configured correctly to have DMA transactions.
>>>
>>> ## Current state: Draft version
>>>
>>> # Proposer(s): Rahul Singh, Bertrand Marquis
>>>
>>> # Proposal:
>>>
>>> This section will describe the different subsystems to support PCI device passthrough and how these subsystems interact with each other to assign a device to the guest.
>>>
>>> # PCI Terminology:
>>>
>>> Host Bridge: The host bridge allows the PCI devices to talk to the rest of the computer.
>>> ECAM: ECAM (Enhanced Configuration Access Mechanism) is a mechanism developed to allow PCIe to access configuration space. The space available per function is 4KB.
>>>
>>> # Discovering PCI Host Bridge in XEN:
>>>
>>> In order to support PCI passthrough, XEN should be aware of all the PCI host bridges available on the system and should be able to access the PCI configuration space. ECAM configuration access is supported as of now. XEN during boot will read the PCI device tree node "reg" property and will map the ECAM space to the XEN memory using the "ioremap_nocache()" function.
>>>
>>> If there is more than one segment on the system, XEN will read the "linux,pci-domain" property from the device tree node and configure the host bridge segment number accordingly. All the PCI device tree nodes should have the "linux,pci-domain" property so that there will be no conflicts. During hardware domain boot, Linux will also use the same "linux,pci-domain" property and assign the domain number to the host bridge.
>> AFAICT, "linux,pci-domain" is not a mandatory option and mostly tied to Linux. What would happen with other OSes?
>> But I would rather avoid trying to mandate a user to modify his/her device-tree in order to support PCI passthrough. It would be better to consider Xen to assign the number if it is not present.

So you would suggest here that if this entry is not present in the configuration, we just assign a value inside Xen? How should this information be passed to the guest?
This number is required for the current hypercall to declare devices to Xen, so those could end up being different.

>>>
>>> When Dom0 tries to access the PCI config space of the device, XEN will find the corresponding host bridge based on the segment number and access the corresponding config space assigned to that bridge.
>>>
>>> Limitation:
>>> * Only PCI ECAM configuration space access is supported.
>>> * Device tree binding is supported as of now, ACPI is not supported.
>> We want to differentiate the high-level design from the actual implementation. While you may not yet implement ACPI, we still need to keep it in mind to avoid incompatibilities in the long term.

For sure we do not want to make anything which would not be possible to implement with ACPI.
I hope the community will help us during review to find those possible problems if we do not see them.

>>> * Need to port the PCI host bridge access code to XEN to access the configuration space (the generic one works but lots of platforms will require some specific code or quirks).
>>>
>>> # Discovering PCI devices:
>>>
>>> PCI-PCIe enumeration is a process of detecting devices connected to its host. It is the responsibility of the hardware domain or boot firmware to do the PCI enumeration and configure the BAR, PCI capabilities, and MSI/MSI-X configuration.
>>>
>>> PCI-PCIe enumeration in XEN is not feasible for the configuration part as it would require a lot of code inside Xen which would require a lot of maintenance. Added to this, many platforms require some quirks in that part of the PCI code which would greatly increase Xen complexity. Once the hardware domain enumerates the device then it will communicate to XEN via the below hypercall.
>>>
>>> #define PHYSDEVOP_pci_device_add        25
>>> struct physdev_pci_device_add {
>>>      uint16_t seg;
>>>      uint8_t bus;
>>>      uint8_t devfn;
>>>      uint32_t flags;
>>>      struct {
>>>          uint8_t bus;
>>>          uint8_t devfn;
>>>      } physfn;
>>>      /*
>>>       * Optional parameters array.
>>>       * First element ([0]) is PXM domain associated with the device
>>>       * (if XEN_PCI_DEV_PXM is set)
>>>       */
>>>      uint32_t optarr[XEN_FLEX_ARRAY_DIM];
>>> };
>>>
>>> As the hypercall argument has the PCI segment number, XEN will access the PCI config space based on this segment number and find the host bridge corresponding to this segment number. At this stage the host bridge is fully initialized so there will be no issue to access the config space.
>>>
>>> XEN will add the PCI devices to the linked list maintained in XEN using the function pci_add_device(). XEN will be aware of all the PCI devices on the system and all the devices will be added to the hardware domain.
>> I understand this is what x86 does. However, may I ask why we would want it for Arm?

We wanted to be as close as possible to the x86 implementation and design.
But if you have another idea here we are fully open to discuss it.

>>>
>>> Limitations:
>>> * When PCI devices are added to XEN, MSI capability is not initialized inside XEN and not supported as of now.
>>> * ACS capability is disabled for ARM as of now, as after enabling it devices are not accessible.
>> I am not sure to understand this. Can you expand?

As a temporary workaround we turned that feature off in the code for now but we will fix that later.

>>> * A Dom0Less implementation will require the capacity inside Xen to discover the PCI devices (without depending on Dom0 to declare them to Xen).
>>>
>>> # Enable the existing x86 virtual PCI support for ARM:
>>>
>>> The existing VPCI support available for x86 is adapted for Arm. When the device is added to XEN via the hypercall "PHYSDEVOP_pci_device_add", a VPCI handler for the config space access is added to the PCI device to emulate the PCI devices.
>>>
>>> An MMIO trap handler for the PCI ECAM space is registered in XEN so that when a guest is trying to access the PCI config space, XEN will trap the access and emulate read/write using the VPCI and not the real PCI hardware.
>>>
>>> Limitation:
>>> * No handler is registered for the MSI configuration.
>>> * Only legacy interrupt is supported and tested as of now, MSI is not implemented and tested.
>> IIRC, legacy interrupts may be shared between two PCI devices. How do you plan to handle this on Arm?

We plan to fix this by adding proper support for MSI in the long term.
For the use case where MSI is not supported or not wanted we might have to find a way to forward the hardware interrupt to several guests to emulate some kind of shared interrupt.

>>>
>>> # Assign the device to the guest:
>>>
>>> Assigning the PCI device from the hardware domain to the guest is done using the below guest config option. When the xl tool creates the domain, PCI devices will be assigned to the guest VPCI bus.
>> Above, you suggest that devices will be assigned to the hardware domain at boot. I am assuming this also means that all the interrupts/MMIOs will be routed/mapped, is that correct?
>> If so, can you provide a rough sketch how assign/deassign will work?

Yes this is correct. We will improve the design and add a more detailed description on that in the next version.
To make it short, we remove the resources from the hardware domain first and assign them to the guest the device has been assigned to. There are still some parts of this where we are in investigation mode.

>>>     pci=[ "PCI_SPEC_STRING", "PCI_SPEC_STRING", ...]
>>>
>>> The guest will only be able to access the assigned devices and see the bridges. The guest will not be able to access or see the devices that are not assigned to it.
>>>
>>> Limitation:
>>> * As of now all the bridges in the PCI bus are seen by the guest on the VPCI bus.
>> Why do you want to expose all the bridges to a guest? Does this mean that the BDF should always match between the host and the guest?

That's not really something that we wanted but this was the easiest way to go.
As said in a previous mail we could build a VPCI bus with a completely different topology but I am not sure of the advantages this would have.
Do you see some reason to do this?

>>>
>>> # Emulated PCI device tree node in libxl:
>>>
>>> Libxl is creating a virtual PCI device tree node in the device tree to enable the guest OS to discover the virtual PCI during guest boot. We introduced the new config option [vpci="pci_ecam"] for guests. When this config option is enabled in a guest configuration, a PCI device tree node will be created in the guest device tree.
>>>
>>> A new area has been reserved in the arm guest physical map at which the VPCI bus is declared in the device tree (reg and ranges parameters of the node). A trap handler for the PCI ECAM access from the guest has been registered at the defined address and redirects requests to the VPCI driver in Xen.
>>>
>>> Limitation:
>>> * Only one PCI device tree node is supported as of now.
>>>
>>> BAR value and IOMEM mapping:
>>>
>>> A Linux guest will do the PCI enumeration based on the area reserved for ECAM and IOMEM ranges in the VPCI device tree node. Once a PCI device is assigned to the guest, XEN will map the guest PCI IOMEM region to the real physical IOMEM region only for the assigned devices.
>>>
>>> As of now we have not modified the existing VPCI code to map the guest PCI IOMEM region to the real physical IOMEM region. We used the existing guest "iomem" config option to map the region.
>>> For example:
>>>     Guest reserved IOMEM region:  0x04020000
>>>     Real physical IOMEM region:   0x50000000
>>>     IOMEM size:                   128MB
>>>     iomem config will be:   iomem = ["0x50000,0x8000@0x4020"]
>>>
>>> There is no need to map the ECAM space as XEN already has access to the ECAM space; XEN will trap ECAM accesses from the guest and will perform read/write on the VPCI bus.
>>>
>>> IOMEM access will not be trapped and the guest will directly access the IOMEM region of the assigned device via stage-2 translation.
>>>
>>> In the same way, we mapped the assigned devices' IRQs to the guest using the below config option.
>>>     irqs= [ NUMBER, NUMBER, ...]
>>>
>>> Limitation:
>>> * Need to avoid the "iomem" and "irq" guest config options and map the IOMEM region and IRQ at the same time when a device is assigned to the guest using the "pci" guest config options when xl creates the domain.
>>> * Emulated BAR values on the VPCI bus should reflect the IOMEM mapped address.
>>> * The x86 mapping code should be ported on Arm so that the stage-2 translation is adapted when the guest is doing a modification of the BAR register values (to map the address requested by the guest for a specific IOMEM to the address actually contained in the real BAR register of the corresponding device).
>>>
>>> # SMMU configuration for guest:
>>>
>>> When assigning PCI devices to a guest, the SMMU configuration should be updated to remove access to the hardware domain memory and add configuration to have access to the guest memory with the proper address translation so that the device can do DMA operations from and to the guest memory only.
>> There are a few more questions to answer here:
>>    - When a guest is destroyed, who will be the owner of the PCI devices? Depending on the answer, how do you make sure the device is quiescent?

I would say the hardware domain if there is one, otherwise nobody.
On the quiescent part this is definitely something for which I have no answer for now and any suggestion is more than welcome.

>>    - Is there any memory access that can bypass the IOMMU (e.g. doorbell)?

This is still something to be investigated as part of the MSI implementation.
If you have any idea here, feel free to tell us.

We are submitting all this as requested during the Xen Summit to have some first feedback but this is a huge work package and there are still lots of areas that we have to dig into :-)

Cheers
Bertrand

>> Cheers,
>> [1] https://lists.xenproject.org/archives/html/xen-devel/2017-05/msg02520.html
>
> --
> Julien Grall
>


From xen-devel-bounces@lists.xenproject.org Fri Jul 17 14:49:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jul 2020 14:49:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jwRg3-0000uF-Eq; Fri, 17 Jul 2020 14:49:31 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zdYj=A4=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1jwRg2-0000uA-Nt
 for xen-devel@lists.xenproject.org; Fri, 17 Jul 2020 14:49:30 +0000
X-Inumbo-ID: b79a9ba8-c83c-11ea-8496-bc764e2007e4
Received: from EUR05-DB8-obe.outbound.protection.outlook.com (unknown
 [2a01:111:f400:7e1a::612])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b79a9ba8-c83c-11ea-8496-bc764e2007e4;
 Fri, 17 Jul 2020 14:49:29 +0000 (UTC)
Received: from MR2P264CA0064.FRAP264.PROD.OUTLOOK.COM (2603:10a6:500:31::28)
 by AM6PR08MB3317.eurprd08.prod.outlook.com (2603:10a6:209:42::28) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3174.22; Fri, 17 Jul
 2020 14:49:28 +0000
Received: from VE1EUR03FT010.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:500:31:cafe::ee) by MR2P264CA0064.outlook.office365.com
 (2603:10a6:500:31::28) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3195.18 via Frontend
 Transport; Fri, 17 Jul 2020 14:49:28 +0000
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org;
 dmarc=bestguesspass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT010.mail.protection.outlook.com (10.152.18.113) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3195.18 via Frontend Transport; Fri, 17 Jul 2020 14:49:27 +0000
Received: ("Tessian outbound 1dc58800d5dd:v62");
 Fri, 17 Jul 2020 14:49:27 +0000
X-CheckRecipientChecked: true
X-CR-MTA-CID: c181b4099ff1cfce
X-CR-MTA-TID: 64aa7808
Received: from 792995244d14.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 DA540EB2-B182-496B-B0DA-37AC616A7616.1; 
 Fri, 17 Jul 2020 14:49:21 +0000
Received: from EUR02-AM5-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 792995244d14.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 17 Jul 2020 14:49:21 +0000
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DB8PR08MB5179.eurprd08.prod.outlook.com (2603:10a6:10:e7::31) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3195.17; Fri, 17 Jul
 2020 14:49:20 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::4001:43ad:d113:46a8]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::4001:43ad:d113:46a8%5]) with mapi id 15.20.3174.028; Fri, 17 Jul 2020
 14:49:20 +0000
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: =?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>
Subject: Re: RFC: PCI devices passthrough on Arm design proposal
Thread-Topic: RFC: PCI devices passthrough on Arm design proposal
Thread-Index: AQHWW4kYTVU0hTDyYEitKlUuU5vZlKkKf2uAgAACLICAAOrEgIAAVPWAgAABeYCAAAssAIAAAcuAgAAIDQCAAAHigIAAAiYA
Date: Fri, 17 Jul 2020 14:49:20 +0000
Message-ID: <90AE8DAB-2223-46DC-A263-D78365E5435E@arm.com>
References: <3F6E40FB-79C5-4AE8-81CA-E16CA37BB298@arm.com>
 <BD475825-10F6-4538-8294-931E370A602C@arm.com>
 <E9CBAA57-5EF3-47F9-8A40-F5D7816DB2A4@arm.com>
 <a50c714c-1642-0354-3f19-5a6f7278d8aa@suse.com>
 <28899FEF-9DA7-4513-8283-1AC5EFFC6E92@arm.com>
 <1dd5db2d-98c7-7738-c3d4-d3f098dfe674@suse.com>
 <F09F9354-EC9B-4D76-809B-A25AF4F7D863@arm.com>
 <a5007a6c-bdfe-04d4-8107-53cb222b95e8@suse.com>
 <DA19A9EC-A828-4EBC-BCAA-D1D9E4F222BB@arm.com>
 <20200717144139.GU7191@Air-de-Roger>
In-Reply-To: <20200717144139.GU7191@Air-de-Roger>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
Authentication-Results-Original: citrix.com; dkim=none (message not signed)
 header.d=none;citrix.com; dmarc=none action=none header.from=arm.com;
x-originating-ip: [91.160.77.188]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: e09dab6e-077d-4fcb-0041-08d82a609ae5
x-ms-traffictypediagnostic: DB8PR08MB5179:|AM6PR08MB3317:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS: <AM6PR08MB33178581CAD71442AF81C2059D7C0@AM6PR08MB3317.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:10000;OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original: nZQUz3DMZJq84WVJDz161Gi+jKG4DSksKnROs/ro/UoztzwGGzqb/bWlyMyRDAk6wcYCMBIIWLdkpoAncietb8Ma37wOoohhh15NoJxi+cguI/hnmgZIhbXBsqQ61fC+lDjgYZe0IdefSQEaYQDEa2A4DUT+M/Ih1flNufCkpclJ94VtB1FpH7P+nl311YjaCjgnjufWj79mDHyHiEpYgexZf+smfB4aR6eRshxaWQNTmuZVxlEeolWnvmspfZgi02dedNiF4ZVAPVcjBq3/smlX/ajh852kX0UWIsLK6k8CKRmPuY94K3NvFa0O/r3HVn6g0nHNZmYO1FDeMFbyXQ==
X-Forefront-Antispam-Report-Untrusted: CIP:255.255.255.255; CTRY:; LANG:en;
 SCL:1; SRV:; IPV:NLI; SFV:NSPM; H:DB7PR08MB3689.eurprd08.prod.outlook.com;
 PTR:; CAT:NONE; SFTY:;
 SFS:(4636009)(346002)(366004)(396003)(136003)(39860400002)(376002)(316002)(6512007)(5660300002)(4326008)(36756003)(71200400001)(478600001)(2616005)(2906002)(33656002)(6506007)(186003)(53546011)(8936002)(86362001)(26005)(54906003)(8676002)(6916009)(66476007)(64756008)(66556008)(66946007)(66446008)(76116006)(91956017)(6486002);
 DIR:OUT; SFP:1101; 
x-ms-exchange-antispam-messagedata: CC0Jd8tPtrorAQpg6ONoYQ6EydMMO6rvEqQFVrJqQO91N+yURyofFIkzJq4aBaeRTeGrcBYTqwaw+0MhoHYeejFRIFzmp0ntKZQGTLZw3dPiFUK5tTxnD31lgzLIjI5Nj4Vz/4y67uWhVFvJl86DM1CkbhU5mqeJUfvAUiMna94hnaJJlL9s+Q7CuxkRPOCs3zgS0KHN9Cj9XigL0y2bRs5BlUh/lrvsWjP6lodfmphokCpETHVY0OIc4t0z8bLGEmnVOhwJZZaAlNGpzZVzIBpi4CZ91CnQVFq2yMfGkSYWEp+ilzyABlgF5Vw3QFacCuKvVCdNat3Y+baqIStItmk37t/HmJXXaOpPB7AkVnYnYfcCMt5yYkDI5z4FVBlh2HrNw85PIJ+jid8C/KbgbaJ+uhbKIIAUbDVSsgKXRSlI/fg/8yPLB8D0STEncFgMJ+8UUwYF54ybHyQJrYFquukWllP5FUWIz5B9lMq5uluLiBoS23dfE12663tv2y+o
Content-Type: text/plain; charset="utf-8"
Content-ID: <8C544AC97E6FE24D882346475A78198B@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB8PR08MB5179
Original-Authentication-Results: citrix.com; dkim=none (message not signed)
 header.d=none;citrix.com; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped: VE1EUR03FT010.eop-EUR03.prod.protection.outlook.com
X-Forefront-Antispam-Report: CIP:63.35.35.123; CTRY:IE; LANG:en; SCL:1; SRV:;
 IPV:CAL; SFV:NSPM; H:64aa7808-outbound-1.mta.getcheckrecipient.com;
 PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com; CAT:NONE; SFTY:;
 SFS:(4636009)(346002)(376002)(39860400002)(396003)(136003)(46966005)(26005)(186003)(6862004)(2906002)(53546011)(6506007)(4326008)(86362001)(107886003)(70206006)(336012)(70586007)(54906003)(6486002)(36756003)(5660300002)(2616005)(356005)(81166007)(8936002)(316002)(6512007)(36906005)(82310400002)(478600001)(33656002)(8676002)(82740400003)(47076004);
 DIR:OUT; SFP:1101; 
X-MS-Office365-Filtering-Correlation-Id-Prvs: 248e502d-9a52-4acb-4e1c-08d82a6096b3
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: imNnrvEzC7yiPZBIL75bC1RsvukfWigmDiMcIhI56qM98BZiuYwULBHbrWNk8b6BSFGtj9ILjHL7WY5yUBoTB7qdOnkreeQT2oycF8bP2kGBzlU3wuPyMTnZ1hd1yqg2fh9IzAmnHhJVL/P4Uby7bSdHuU+J5kfqw58cgkgRtC8TXGE/OrYtjpZT6gaK2WST3ydZ4c4OKYCVpA/zpw5St2IeHatXHqgvLeoEb1NblejOFNLiQQvyW0YjYUy25jM4U+7IYiFD0upWDkwuqVp/aAOsJLyRw3BueHWLnBP2Oii3zfXHRJVMSdvFFVCq+uWsCpW5A/J7qLjBO5wzOs8i3XntC+k/uoqsYfJT3F9QXog09Av62wHbl/R+AClCKdpRSeJgP+rHn6BVWgGdaSpmzw==
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Jul 2020 14:49:27.5312 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: e09dab6e-077d-4fcb-0041-08d82a609ae5
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d; Ip=[63.35.35.123];
 Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource: VE1EUR03FT010.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM6PR08MB3317
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Rahul Singh <Rahul.Singh@arm.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 nd <nd@arm.com>, Julien Grall <julien.grall.oss@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>



> On 17 Jul 2020, at 16:41, Roger Pau Monné <roger.pau@citrix.com> wrote:
> 
> On Fri, Jul 17, 2020 at 02:34:55PM +0000, Bertrand Marquis wrote:
>> 
>> 
>>> On 17 Jul 2020, at 16:06, Jan Beulich <jbeulich@suse.com> wrote:
>>> 
>>> On 17.07.2020 15:59, Bertrand Marquis wrote:
>>>> 
>>>> 
>>>>> On 17 Jul 2020, at 15:19, Jan Beulich <jbeulich@suse.com> wrote:
>>>>> 
>>>>> On 17.07.2020 15:14, Bertrand Marquis wrote:
>>>>>>> On 17 Jul 2020, at 10:10, Jan Beulich <jbeulich@suse.com> wrote:
>>>>>>> On 16.07.2020 19:10, Rahul Singh wrote:
>>>>>>>> # Emulated PCI device tree node in libxl:
>>>>>>>> 
>>>>>>>> Libxl is creating a virtual PCI device tree node in the device tree to enable the guest OS to discover the virtual PCI during guest boot. We introduced the new config option [vpci="pci_ecam"] for guests. When this config option is enabled in a guest configuration, a PCI device tree node will be created in the guest device tree.
>>>>>>> 
>>>>>>> I support Stefano's suggestion for this to be an optional thing, i.e.
>>>>>>> there to be no need for it when there are PCI devices assigned to the
>>>>>>> guest anyway. I also wonder about the pci_ prefix here - isn't
>>>>>>> vpci="ecam" as unambiguous?
>>>>>> 
>>>>>> This could be a problem as we need to know that this is required for a guest upfront so that PCI devices can be assigned after using xl. 
>>>>> 
>>>>> I'm afraid I don't understand: When there are no PCI device that get
>>>>> handed to a guest when it gets created, but it is supposed to be able
>>>>> to have some assigned while already running, then we agree the option
>>>>> is needed (afaict). When PCI devices get handed to the guest while it
>>>>> gets constructed, where's the problem to infer this option from the
>>>>> presence of PCI devices in the guest configuration?
>>>> 
>>>> If the user wants to use xl pci-attach to attach in runtime a device to a guest, this guest must have a VPCI bus (even with no devices).
>>>> If we do not have the vpci parameter in the configuration this use case will not work anymore.
>>> 
>>> That's what everyone looks to agree with. Yet why is the parameter needed
>>> when there _are_ PCI devices anyway? That's the "optional" that Stefano
>>> was suggesting, aiui.
>> 
>> I agree in this case the parameter could be optional and only required if not PCI device is assigned directly in the guest configuration.
> 
> Where will the ECAM region(s) appear on the guest physmap?
> 
> Are you going to re-use the same locations as on the physical
> hardware, or will they appear somewhere else?

We will add some new definitions for the ECAM regions in the guest physmap declared in xen (include/asm-arm/config.h)
So they will appear at a different address than the hardware.

Bertrand

> 
> Roger.
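
[Editorial note: to make the option under discussion concrete, here is a sketch of the relevant part of an xl guest configuration as the RFC proposes it. The `vpci` value and the optionality semantics are exactly what this thread is debating, so treat the snippet as illustrative, not final syntax; the PCI BDF is invented.]

```
# Proposed (RFC) option: create the emulated PCI host bridge (and the PCI
# device tree node) even when no device is assigned at boot, so that
# "xl pci-attach" can hot-plug devices later:
vpci = "pci_ecam"

# When a device is assigned at creation time, the toolstack could infer the
# need for the virtual PCI bus, making vpci optional (per Stefano/Jan):
# pci = [ "0000:03:00.0" ]
```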


From xen-devel-bounces@lists.xenproject.org Fri Jul 17 15:00:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jul 2020 15:00:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jwRr0-0002Ym-K3; Fri, 17 Jul 2020 15:00:50 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Lz8P=A4=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jwRqy-0002Ya-S4
 for xen-devel@lists.xenproject.org; Fri, 17 Jul 2020 15:00:48 +0000
X-Inumbo-ID: 4b504b8a-c83e-11ea-9622-12813bfff9fa
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4b504b8a-c83e-11ea-9622-12813bfff9fa;
 Fri, 17 Jul 2020 15:00:47 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1594998047;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:content-transfer-encoding:in-reply-to;
 bh=6taC/Ie7M8Gq+JWvdtEOr5MJlc6+b45lxyY1V4fbMo4=;
 b=YWSHHjflJEeSl4jZ3hvM/yCBAsG4YZIHSlbPcp+gOYWbDaUqYi3vBdHU
 mrSKspgrYDjNM9BplcNGphto9Jk1kzbLXaLdX5nCRUHNGtGg6WKX+QNQu
 ACDfXr2YFMqqiyGX0Ck7k/B/rGUeoVPj4j5kvmzlq1uqXfQpSKzGcvLZs Y=;
Authentication-Results: esa6.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: BK9ktkuQrB6scE79g2FzZnIirusEq0ov5RvCfyy3LBP33FHQmhKqWU5AN5dmjv/+x59JUOEzx1
 Avi/FzZS48l06rE8sTzyWyWVKx8Q1FUhZOVh1g7K2xXwmZIje9bzi5KS/q86frqSlz4lFvLLfP
 ssSu3ML9QPi4q39D7GioAUXwyQrMjFGCSjo7UmaWK6tYNUUNVSegeCSU6J/Q6xKysioQpR0WRq
 a2Eqj6i+ymgqE6nENA8dDrwtCQabS3KKDARReuMxKe1SEJjJzib4Ra5d0oYKTAKzu4zkRAyb+6
 aSM=
X-SBRS: 2.7
X-MesageID: 22954211
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,362,1589256000"; d="scan'208";a="22954211"
Date: Fri, 17 Jul 2020 17:00:39 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Oleksandr Tyshchenko <olekstysh@gmail.com>
Subject: Re: Virtio in Xen on Arm (based on IOREQ concept)
Message-ID: <20200717150039.GV7191@Air-de-Roger>
References: <CAPD2p-nthLq5NaU32u8pVaa-ub=a9-LOPenupntTYdS-cu31jQ@mail.gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <CAPD2p-nthLq5NaU32u8pVaa-ub=a9-LOPenupntTYdS-cu31jQ@mail.gmail.com>
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien
 Grall <julien@xen.org>, Oleksandr Andrushchenko <andr2000@gmail.com>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>,
 xen-devel <xen-devel@lists.xenproject.org>,
 Artem Mygaiev <joculator@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hello,

I'm very happy to see this proposal, as I think having proper (1st
class) VirtIO support on Xen is crucial to our survival. Almost all
OSes have VirtIO frontends, while the same can't be said about Xen PV
frontends. It would also allow us to piggyback on any new VirtIO
devices without having to re-invent the wheel by creating a clone Xen
PV device.

On Fri, Jul 17, 2020 at 05:11:02PM +0300, Oleksandr Tyshchenko wrote:
> Hello all.
> 
> We would like to resume Virtio in Xen on Arm activities. You can find some
> background at [1] and Virtio specification at [2].
> 
> *A few words about importance:*
> There is an increasing interest, I would even say a requirement, to have a
> flexible, generic and standardized cross-hypervisor solution for I/O
> virtualization in the automotive and embedded areas. The target is quite
> clear here. By providing a standardized interface and device models for
> device para-virtualization in hypervisor environments, the Virtio
> interface allows us to move guest domains among different hypervisor
> systems without further modification on the guest side.
> What is more, Virtio support is available in Linux, Android and many other
> operating systems, and there are a lot of existing Virtio drivers
> (frontends) which could simply be reused without reinventing the wheel.
> Many organisations push the Virtio direction as a common interface. To
> summarize, Virtio support would be a great feature in Xen on Arm, in
> addition to the traditional Xen PV drivers, letting the user choose which
> one to use.

I think most of the above also applies to x86, and I fully agree.

> 
> *A few words about the solution:*
> As was mentioned at [1], implementing virtio-mmio in Xen on Arm

Any plans for virtio-pci? Arm seems to be moving to the PCI bus, and
it would be very interesting from an x86 PoV, as I don't think
virtio-mmio is something that you can easily use on x86 (or even use
at all).
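
[Editorial note: for readers unfamiliar with the transport named above, virtio-mmio devices are discovered on Arm via a device tree node. The "virtio,mmio" compatible string is the standard Linux binding (Documentation/devicetree/bindings/virtio/mmio.txt); the address, size and interrupt below are purely illustrative, not values from this series.]

```dts
/* Sketch of a virtio-mmio node as a guest would see it. In the Xen design
 * discussed here, accesses to the reg window trap to Xen and are forwarded
 * to the backend via IOREQ. Numbers are made up for illustration. */
virtio@2000000 {
        compatible = "virtio,mmio";
        reg = <0x0 0x2000000 0x0 0x200>;  /* MMIO region trapped by Xen */
        interrupts = <0 42 4>;            /* guest notification interrupt */
};
```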

> requires some mechanism to forward guest MMIO accesses to a device model.
> As it turned out, Xen on x86 contains most of the pieces needed to use
> that transport (via the existing IOREQ concept). Julien has already done
> a big amount of work in his PoC (xen/arm: Add support for Guest IO
> forwarding to a device emulator).
> Using that code as a base, we managed to create a completely functional
> PoC with a DomU running on a virtio block device instead of a traditional
> Xen PV driver, without modifications to the DomU Linux. Our work is
> mostly about rebasing Julien's code on the current codebase (Xen
> 4.14-rc4), various tweaks to be able to run the emulator (virtio-disk
> backend) in a domain other than Dom0 (in our system we have a thin Dom0
> and keep all backends in a driver domain),

How do you handle this use-case? Are you using grants in the VirtIO
ring, or rather allowing the driver domain to map all the guest memory
and then placing gfn on the ring like it's commonly done with VirtIO?

Do you have any plans to try to upstream a modification to the VirtIO
spec so that grants (ie: abstract references to memory addresses) can
be used on the VirtIO ring?

> misc fixes for our use-cases, and tool support for the configuration.
> Unfortunately, Julien doesn’t have much time to allocate to this work
> anymore, so we would like to step in and continue.
> 
> *A few words about the Xen code:*
> You can find the whole Xen series at [5]. The patches are in RFC state
> because some parts of the series should be reconsidered and implemented
> properly. Before submitting the final code for review, the first IOREQ
> patch (which is quite big) will be split into x86, Arm and common parts.
> Please note that the x86 part wasn’t even build-tested so far and could
> be broken by that series. Also, the series probably wants splitting into
> adding IOREQ on Arm (which should be the first focus) and tools support
> for configuring virtio-disk (which is going to be the first Virtio
> driver) before going to the mailing list.

Sending first a patch series to enable IOREQs on Arm seems perfectly
fine, and it doesn't have to come with the VirtIO backend. In fact I
would recommend that you send that ASAP, so that you don't spend time
working on the backend that would likely need to be modified
according to the review received on the IOREQ series.

> 
> What I would like to add here is that the IOREQ feature on Arm could be
> used not only for implementing Virtio, but also for other use-cases which
> require some emulator entity outside Xen, such as a custom (non-ECAM
> compatible) PCI emulator, for example.
> 
> *A few words about the backend(s):*
> One of the main problems with Virtio in Xen on Arm is the absence of
> “ready-to-use” and “out-of-Qemu” Virtio backends (at least I am not aware
> of any). We managed to create a virtio-disk backend based on demu [3] and
> kvmtool [4] using that series. It is worth mentioning that although
> Xenbus/Xenstore is not supposed to be used with native Virtio, that
> interface was chosen just to pass configuration from the toolstack to the
> backend and notify it about creating/destroying the Guest domain (I
> think it is

I would prefer if a single instance was launched to handle each
backend, and that the configuration was passed on the command line.
Killing the user-space backend from the toolstack is fine I think,
there's no need to notify the backend using xenstore or any other
out-of-band methods.

xenstore has proven to be a bottleneck in terms of performance, and it
would be better if we could avoid using it when possible, especially here
where you have to do this from scratch anyway.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Fri Jul 17 15:05:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jul 2020 15:05:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jwRvJ-0002kC-6w; Fri, 17 Jul 2020 15:05:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Lz8P=A4=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jwRvH-0002k7-TN
 for xen-devel@lists.xenproject.org; Fri, 17 Jul 2020 15:05:15 +0000
X-Inumbo-ID: eac9434c-c83e-11ea-bb8b-bc764e2007e4
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id eac9434c-c83e-11ea-bb8b-bc764e2007e4;
 Fri, 17 Jul 2020 15:05:14 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1594998314;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:content-transfer-encoding:in-reply-to;
 bh=qeKYxU/qAPmP/+3y2CDuFld1p0oEGofhn/mkO0VtoQQ=;
 b=RPVM8JH+kV1u/Qj4t8yqPdBTxyqW0/U9vzTHwnyDpgHRsm2K4Cb+kUfq
 iEs+H5x4yhmdkFfWaWHZDUyt9ykyHZdlhtA2yGpJ3EV2FT2Dab6ZjjKKS
 TWePTqTRfghQ1daJ/MGu8wdgBX6G+HFG5xRQrDJglInN9L0NoDvlUavA7 g=;
Authentication-Results: esa4.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: j8B/zuANs8Cnw4peTCHQ/h9a1jb3Oesznh23iAiwnbwY/rJIn1MQvsWFUCsrJSmr5Z1Tk6cVgY
 ttbcB69kSifadlBhpYYKfxuXaJWMPbSqFVdQNBSxgR4sSbMx7cMeeS/6rLldOCuj2ntRPHjM4V
 9Umq3LOyucQmPdMWgY6bYhtwAYRqHc1nvyGts/2ShMdK/0Bdee3Hr0Zza0neoTyJKLRXbiwc80
 Iovt+dUpxXWuR9+2YuNBlFQ+OGDKFccyHwhdb4eFtWNxUvAfLLp+AM3rJEHuJ8tiueimsMH3WB
 M0U=
X-SBRS: 2.7
X-MesageID: 23477445
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,362,1589256000"; d="scan'208";a="23477445"
Date: Fri, 17 Jul 2020 17:05:07 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
Subject: Re: RFC: PCI devices passthrough on Arm design proposal
Message-ID: <20200717150507.GW7191@Air-de-Roger>
References: <BD475825-10F6-4538-8294-931E370A602C@arm.com>
 <E9CBAA57-5EF3-47F9-8A40-F5D7816DB2A4@arm.com>
 <a50c714c-1642-0354-3f19-5a6f7278d8aa@suse.com>
 <28899FEF-9DA7-4513-8283-1AC5EFFC6E92@arm.com>
 <1dd5db2d-98c7-7738-c3d4-d3f098dfe674@suse.com>
 <F09F9354-EC9B-4D76-809B-A25AF4F7D863@arm.com>
 <a5007a6c-bdfe-04d4-8107-53cb222b95e8@suse.com>
 <DA19A9EC-A828-4EBC-BCAA-D1D9E4F222BB@arm.com>
 <20200717144139.GU7191@Air-de-Roger>
 <90AE8DAB-2223-46DC-A263-D78365E5435E@arm.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <90AE8DAB-2223-46DC-A263-D78365E5435E@arm.com>
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Rahul Singh <Rahul.Singh@arm.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 nd <nd@arm.com>, Julien Grall <julien.grall.oss@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Fri, Jul 17, 2020 at 02:49:20PM +0000, Bertrand Marquis wrote:
> 
> 
> > On 17 Jul 2020, at 16:41, Roger Pau Monné <roger.pau@citrix.com> wrote:
> > 
> > On Fri, Jul 17, 2020 at 02:34:55PM +0000, Bertrand Marquis wrote:
> >> 
> >> 
> >>> On 17 Jul 2020, at 16:06, Jan Beulich <jbeulich@suse.com> wrote:
> >>> 
> >>> On 17.07.2020 15:59, Bertrand Marquis wrote:
> >>>> 
> >>>> 
> >>>>> On 17 Jul 2020, at 15:19, Jan Beulich <jbeulich@suse.com> wrote:
> >>>>> 
> >>>>> On 17.07.2020 15:14, Bertrand Marquis wrote:
> >>>>>>> On 17 Jul 2020, at 10:10, Jan Beulich <jbeulich@suse.com> wrote:
> >>>>>>> On 16.07.2020 19:10, Rahul Singh wrote:
> >>>>>>>> # Emulated PCI device tree node in libxl:
> >>>>>>>> 
> >>>>>>>> Libxl is creating a virtual PCI device tree node in the device tree to enable the guest OS to discover the virtual PCI during guest boot. We introduced the new config option [vpci="pci_ecam"] for guests. When this config option is enabled in a guest configuration, a PCI device tree node will be created in the guest device tree.
> >>>>>>> 
> >>>>>>> I support Stefano's suggestion for this to be an optional thing, i.e.
> >>>>>>> there to be no need for it when there are PCI devices assigned to the
> >>>>>>> guest anyway. I also wonder about the pci_ prefix here - isn't
> >>>>>>> vpci="ecam" as unambiguous?
> >>>>>> 
> >>>>>> This could be a problem as we need to know that this is required for a guest upfront so that PCI devices can be assigned after using xl. 
> >>>>> 
> >>>>> I'm afraid I don't understand: When there are no PCI device that get
> >>>>> handed to a guest when it gets created, but it is supposed to be able
> >>>>> to have some assigned while already running, then we agree the option
> >>>>> is needed (afaict). When PCI devices get handed to the guest while it
> >>>>> gets constructed, where's the problem to infer this option from the
> >>>>> presence of PCI devices in the guest configuration?
> >>>> 
> >>>> If the user wants to use xl pci-attach to attach in runtime a device to a guest, this guest must have a VPCI bus (even with no devices).
> >>>> If we do not have the vpci parameter in the configuration this use case will not work anymore.
> >>> 
> >>> That's what everyone looks to agree with. Yet why is the parameter needed
> >>> when there _are_ PCI devices anyway? That's the "optional" that Stefano
> >>> was suggesting, aiui.
> >> 
> >> I agree in this case the parameter could be optional and only required if not PCI device is assigned directly in the guest configuration.
> > 
> > Where will the ECAM region(s) appear on the guest physmap?
> > 
> > Are you going to re-use the same locations as on the physical
> > hardware, or will they appear somewhere else?
> 
> We will add some new definitions for the ECAM regions in the guest physmap declared in xen (include/asm-arm/config.h)

I think I'm confused, but that file doesn't contain anything related
to the guest physmap; that's the Xen virtual memory layout on Arm,
AFAICT?

Does this somehow relate to the physical memory map exposed to guests
on Arm?

Roger.
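
[Editorial note: for context on Roger's question, the guest physical memory layout on Arm is described by the GUEST_* definitions in xen/include/public/arch-arm.h, whereas include/asm-arm/config.h describes Xen's own virtual memory layout. New ECAM windows for guests would presumably be additions of the former kind; the sketch below shows the style meant, with names and addresses purely illustrative, not values from this proposal.]

```
/* Hypothetical additions to the Arm guest memory map, in the style of
 * xen/include/public/arch-arm.h -- names and addresses are illustrative. */
#define GUEST_VPCI_ECAM_BASE   xen_mk_ullong(0x10000000)
#define GUEST_VPCI_ECAM_SIZE   xen_mk_ullong(0x10000000)
```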


From xen-devel-bounces@lists.xenproject.org Fri Jul 17 15:22:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jul 2020 15:22:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jwSBf-0004TM-MZ; Fri, 17 Jul 2020 15:22:11 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zdYj=A4=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1jwSBe-0004TH-4t
 for xen-devel@lists.xenproject.org; Fri, 17 Jul 2020 15:22:10 +0000
X-Inumbo-ID: 47390746-c841-11ea-8496-bc764e2007e4
Received: from EUR05-VI1-obe.outbound.protection.outlook.com (unknown
 [40.107.21.64]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 47390746-c841-11ea-8496-bc764e2007e4;
 Fri, 17 Jul 2020 15:22:08 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com; 
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=y9DiLOqt87A5Hx0z03kaMd5WEZZnJNLefdwvAe6g4co=;
 b=XLFfazqzBsw8tSQri19h0mA5Q76sMJx6/XM+LO6QmfKqL7AG2VEU1keZOwgrPfBfHIUTPCiUem/X+yZZL8mSDU1S+ireXEiNLcB4ZyjtsHvexvDczBK+9EspHyH4F2n04OcytUK/1OC2HyTHlXVIV0WS+Se2jdq4yosmw8r475o=
Received: from DB8PR06CA0064.eurprd06.prod.outlook.com (2603:10a6:10:120::38)
 by DB8PR08MB5052.eurprd08.prod.outlook.com (2603:10a6:10:e8::20) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3174.22; Fri, 17 Jul
 2020 15:22:05 +0000
Received: from DB5EUR03FT043.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:120:cafe::6e) by DB8PR06CA0064.outlook.office365.com
 (2603:10a6:10:120::38) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3195.17 via Frontend
 Transport; Fri, 17 Jul 2020 15:22:05 +0000
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org;
 dmarc=bestguesspass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DB5EUR03FT043.mail.protection.outlook.com (10.152.20.236) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3195.18 via Frontend Transport; Fri, 17 Jul 2020 15:22:05 +0000
Received: ("Tessian outbound c83312565ef4:v62");
 Fri, 17 Jul 2020 15:22:05 +0000
X-CheckRecipientChecked: true
X-CR-MTA-CID: 95245ba5534dd4d1
X-CR-MTA-TID: 64aa7808
Received: from a026754b42c8.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 3FA4D6F9-FAC5-4F53-83DF-F3728A471B99.1; 
 Fri, 17 Jul 2020 15:21:59 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id a026754b42c8.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 17 Jul 2020 15:21:59 +0000
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=P9YsdlQaiazQV6wumXRE6xCteshIurTQ1gge0e4io0HhS7w9BMb7CbZhGRN9qo5s/BshMF1dccyfb7NuYmwx/sbSSGrpYlPSW/drN+spoPnYDSBOzP8sOLuBsP+8W4kf0uUJA6Z4Il97+tRiJhH7jlzUOXyec6tIiei8/UtIFbxcKoGs6E55uLzXMvpa5pBebxYg1gwgwz1kTGHM3PVyFVRpfWRAKpAlk1PXrtcmklaVokEMgEjGH1Hf3qFwYLlkaw7KM35NFroVGD8skMSufoQj0Y58/4WL6knw9sqcuhPfIr+gvxWc2Zuad8Xnqws6kJ79de/ge9R/RoRxcA+pzQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; 
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=y9DiLOqt87A5Hx0z03kaMd5WEZZnJNLefdwvAe6g4co=;
 b=eZy7qAb4Lf7jCTzz/Bkid88eDR8EEwUeKNueS0FwweHx6jXqU2a2xbfddmLa5v+BhE1jZzgEJ8p+fnUhnu+z1gyfe8hbWZwQ4HusEnsz4qyga/4mrp1opS7sQaTV83Svr2w4gU3QeCD/XLWtTCk5gWEGY6nqAABBOUDJUM7RrYHmvqFVFduVRmv2czQzJZsuIL+JilcmcgAiDnOnHqqO51sQKcO2DlIlaFtrWC6gjYoN/vClm9/k4cQBP4tfD1aRNvh5DRHSq5eTYI68G+7z/Nr3vXOYC3cfASUeOxUaOS3Hle9qyfUVmt+FDwCOiNIGVDyRKtx+Gs8OSTOSXtW7MA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com; 
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=y9DiLOqt87A5Hx0z03kaMd5WEZZnJNLefdwvAe6g4co=;
 b=XLFfazqzBsw8tSQri19h0mA5Q76sMJx6/XM+LO6QmfKqL7AG2VEU1keZOwgrPfBfHIUTPCiUem/X+yZZL8mSDU1S+ireXEiNLcB4ZyjtsHvexvDczBK+9EspHyH4F2n04OcytUK/1OC2HyTHlXVIV0WS+Se2jdq4yosmw8r475o=
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DBBPR08MB4252.eurprd08.prod.outlook.com (2603:10a6:10:c2::23) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3195.18; Fri, 17 Jul
 2020 15:21:57 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::4001:43ad:d113:46a8]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::4001:43ad:d113:46a8%5]) with mapi id 15.20.3174.028; Fri, 17 Jul 2020
 15:21:57 +0000
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: =?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>
Subject: Re: RFC: PCI devices passthrough on Arm design proposal
Thread-Topic: RFC: PCI devices passthrough on Arm design proposal
Thread-Index: AQHWW4kYTVU0hTDyYEitKlUuU5vZlKkKf2uAgAACLICAAR7YAIAAIxaAgAATSQCAAA4jAA==
Date: Fri, 17 Jul 2020 15:21:57 +0000
Message-ID: <8AF78FF1-C389-44D8-896B-B95C1A0560E2@arm.com>
References: <3F6E40FB-79C5-4AE8-81CA-E16CA37BB298@arm.com>
 <BD475825-10F6-4538-8294-931E370A602C@arm.com>
 <E9CBAA57-5EF3-47F9-8A40-F5D7816DB2A4@arm.com>
 <20200717111644.GS7191@Air-de-Roger>
 <3B8A1B9D-A101-4937-AC42-4F62BE7E677C@arm.com>
 <20200717143120.GT7191@Air-de-Roger>
In-Reply-To: <20200717143120.GT7191@Air-de-Roger>
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 nd <nd@arm.com>, Rahul Singh <Rahul.Singh@arm.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien.grall.oss@gmail.com>

> On 17 Jul 2020, at 16:31, Roger Pau Monné <roger.pau@citrix.com> wrote:
> 
> On Fri, Jul 17, 2020 at 01:22:19PM +0000, Bertrand Marquis wrote:
>> 
>> 
>>> On 17 Jul 2020, at 13:16, Roger Pau Monné <roger.pau@citrix.com> wrote:
>>> 
>>> I've wrapped the email to 80 columns in order to make it easier to
>>> reply.
>>> 
>>> Thanks for doing this, I think the design is good, I have some
>>> questions below so that I understand the full picture.
>>> 
>>> On Thu, Jul 16, 2020 at 05:10:05PM +0000, Rahul Singh wrote:
>>>> Hello All,
>>>> 
>>>> Following up on discussion on PCI Passthrough support on ARM that we
>>>> had at the XEN summit, we are submitting a Review For Comment and a
>>>> design proposal for PCI passthrough support on ARM. Feel free to
>>>> give your feedback.
>>>> 
>>>> The followings describe the high-level design proposal of the PCI
>>>> passthrough support and how the different modules within the system
>>>> interacts with each other to assign a particular PCI device to the
>>>> guest.
>>>> 
>>>> # Title:
>>>> 
>>>> PCI devices passthrough on Arm design proposal
>>>> 
>>>> # Problem statement:
>>>> 
>>>> On ARM there in no support to assign a PCI device to a guest. PCI
>>>> device passthrough capability allows guests to have full access to
>>>> some PCI devices. PCI device passthrough allows PCI devices to
>>>> appear and behave as if they were physically attached to the guest
>>>> operating system and provide full isolation of the PCI devices.
>>>> 
>>>> Goal of this work is to also support Dom0Less configuration so the
>>>> PCI backend/frontend drivers used on x86 shall not be used on Arm.
>>>> It will use the existing VPCI concept from X86 and implement the
>>>> virtual PCI bus through IO emulation such that only assigned devices
>>>> are visible to the guest and guest can use the standard PCI
>>>> driver.
>>>> 
>>>> Only Dom0 and Xen will have access to the real PCI bus, guest will
>>>> have a direct access to the assigned device itself. IOMEM memory
>>>> will be mapped to the guest and interrupt will be redirected to the
>>>> guest. SMMU has to be configured correctly to have DMA
>>>> transaction.
>>>> 
>>>> ## Current state: Draft version
>>>> 
>>>> # Proposer(s): Rahul Singh, Bertrand Marquis
>>>> 
>>>> # Proposal:
>>>> 
>>>> This section will describe the different subsystem to support the
>>>> PCI device passthrough and how these subsystems interact with each
>>>> other to assign a device to the guest.
>>>> 
>>>> # PCI Terminology:
>>>> 
>>>> Host Bridge: Host bridge allows the PCI devices to talk to the rest
>>>> of the computer.  ECAM: ECAM (Enhanced Configuration Access
>>>> Mechanism) is a mechanism developed to allow PCIe to access
>>>> configuration space. The space available per function is 4KB.
>>>> 
>>>> # Discovering PCI Host Bridge in XEN:
>>>> 
>>>> In order to support the PCI passthrough XEN should be aware of all
>>>> the PCI host bridges available on the system and should be able to
>>>> access the PCI configuration space. ECAM configuration access is
>>>> supported as of now. XEN during boot will read the PCI device tree
>>>> node “reg” property and will map the ECAM space to the XEN memory
>>>> using the “ioremap_nocache ()” function.
>>> 
>>> What about ACPI? I think you should also mention the MMCFG table,
>>> which should contain the information about the ECAM region(s) (or at
>>> least that's how it works on x86). Just realized that you don't
>>> support ACPI ATM, so you can ignore this comment.
>> 
>> Yes for now we did not consider ACPI support.
> 
> I have 0 knowledge of ACPI on Arm, but I would assume it's also using
> the MCFG table in order to report ECAM regions to the OSPM. This is a
> static table that's very simple to parse, and it contains the ECAM
> IOMEM area and the segment assigned to that ECAM region.
> 
> This is better than DT because ACPI already assigns segment numbers to
> each ECAM region.
> 
> Even if not currently supported in the code implemented so far
> describing the plan for it's implementation here seems fine IMO, as
> it's going to be slightly different from what you need to do when
> using DT.

That should be fairly straightforward, I agree, and it makes sense to
spend some time to make sure the design would allow adding ACPI support.
I will note that down to add this in the next design version.

> 
>>> 
>>>> 
>>>> If there are more than one segment on the system, XEN will read the
>>>> “linux, pci-domain” property from the device tree node and configure
>>>> the host bridge segment number accordingly. All the PCI device tree
>>>> nodes should have the “linux,pci-domain” property so that there will
>>>> be no conflicts. During hardware domain boot Linux will also use the
>>>> same “linux,pci-domain” property and assign the domain number to the
>>>> host bridge.
>>> 
>>> So it's my understanding that the PCI domain (or segment) is just an
>>> abstract concept to differentiate all the Root Complex present on
>>> the system, but the host bridge itself it's not aware of the segment
>>> assigned to it in any way.
>>> 
>>> I'm not sure Xen and the hardware domain having matching segments is a
>>> requirement, if you use vPCI you can match the segment (from Xen's
>>> PoV) by just checking from which ECAM region the access has been
>>> performed.
>>> 
>>> The only reason to require matching segment values between Xen and the
>>> hardware domain is to allow using hypercalls against the PCI devices,
>>> ie: to be able to use hypercalls to assign a device to a domain from
>>> the hardware domain.
>>> 
>>> I have 0 understanding of DT or it's spec, but why does this have a
>>> 'linux,' prefix? The segment number is part of the PCI spec, and not
>>> something specific to Linux IMO.
>> 
>> This is exact that this is only needed for the hypercall when Dom0 is
>> doing the full enumeration and communicating the devices to Xen.
>> On all other cases this can be deduced from the address of the access.
> 
> You also need the SBDF nomenclature in order to assign deices to
> guests from the control domain, so at least there needs to be some
> consensus from the hardware domain and Xen on the segment numbering in
> that regard.
> 
> Same applies to dom0less mode, there needs to be some consensus about
> the segment numbers used, so Xen can identify the devices assigned to
> each guests without confusion.

Agree.

> 
>> Regarding the DT entry, this is not coming from us and this is already
>> defined this way in existing DTBs, we just reuse the existing entry.
> 
> Is it possible to standardize the property and drop the linux prefix?

Honestly I do not know. This was there in the DT examples we checked, so
we planned to use that. But it might be possible to standardize this.
@stefano: You are the device tree expert :-) any idea on this?

> 
>>> 
>>>> 
>>>> When Dom0 tries to access the PCI config space of the device, XEN
>>>> will find the corresponding host bridge based on segment number and
>>>> access the corresponding config space assigned to that bridge.
>>>> 
>>>> Limitation:
>>>> * Only PCI ECAM configuration space access is supported.
>>>> * Device tree binding is supported as of now, ACPI is not supported.
>>>> * Need to port the PCI host bridge access code to XEN to access the
>>>> configuration space (generic one works but lots of platforms will
>>>> required  some specific code or quirks).
>>>> 
>>>> # Discovering PCI devices:
>>>> 
>>>> PCI-PCIe enumeration is a process of detecting devices connected to
>>>> its host. It is the responsibility of the hardware domain or boot
>>>> firmware to do the PCI enumeration and configure the BAR, PCI
>>>> capabilities, and MSI/MSI-X configuration.
>>>> 
>>>> PCI-PCIe enumeration in XEN is not feasible for the configuration
>>>> part as it would require a lot of code inside Xen which would
>>>> require a lot of maintenance. Added to this many platforms require
>>>> some quirks in that part of the PCI code which would greatly improve
>>>> Xen complexity. Once hardware domain enumerates the device then it
>>>> will communicate to XEN via the below hypercall.
>>>> 
>>>> #define PHYSDEVOP_pci_device_add        25 struct
>>>> physdev_pci_device_add {
>>>>   uint16_t seg;
>>>>   uint8_t bus;
>>>>   uint8_t devfn;
>>>>   uint32_t flags;
>>>>   struct {
>>>>       uint8_t bus;
>>>>       uint8_t devfn;
>>>>   } physfn;
>>>>   /*
>>>>    * Optional parameters array.
>>>>    * First element ([0]) is PXM domain associated with the device (if
>>>>    * XEN_PCI_DEV_PXM is set)
>>>>    */
>>>>   uint32_t optarr[XEN_FLEX_ARRAY_DIM];
>>>> };
>>>> 
>>>> As the hypercall argument has the PCI segment number, XEN will
>>>> access the PCI config space based on this segment number and find
>>>> the host-bridge corresponding to this segment number. At this stage
>>>> host bridge is fully initialized so there will be no issue to access
>>>> the config space.
>>>> 
>>>> XEN will add the PCI devices in the linked list maintain in XEN
>>>> using the function pci_add_device(). XEN will be aware of all the
>>>> PCI devices on the system and all the device will be added to the
>>>> hardware domain.
>>>> 
>>>> Limitations:
>>>> * When PCI devices are added to XEN, MSI capability is
>>>> not initialized inside XEN and not supported as of now.
>>> 
>>> I assume you will mask such capability and will prevent the guest (or
>>> hardware domain) from interacting with it?
>> 
>> No we will actually implement that part but later. This is not supported in
>> the RFC that we will submit.
> 
> OK, might be nice to note this somewhere, even if it's not implemented
> right now. It might also be relevant to start thinking about which
> capabilities you have to expose to guests, and how you will make those
> safe. This could even be in a separate document, but ideally a design
> document (or set of documents) should try to cover all the
> implementation that will be done in order to support a feature.

I added that as points we need to clear up in the next design version.

> 
>>> 
>>>> * ACS capability is disable for ARM as of now as after enabling it
>>>> devices are not accessible.
>>>> * Dom0Less implementation will require to have the capacity inside Xen
>>>> to discover the PCI devices (without depending on Dom0 to declare them
>>>> to Xen).
>>> 
>>> I assume the firmware will properly initialize the host bridge and
>>> configure the resources for each device, so that Xen just has to walk
>>> the PCI space and find the devices.
>>> 
>>> TBH that would be my preferred method, because then you can get rid of
>>> the hypercall.
>>> 
>>> Is there anyway for Xen to know whether the host bridge is properly
>>> setup and thus the PCI bus can be scanned?
>>> 
>>> That way Arm could do something similar to x86, where Xen will scan
>>> the bus and discover devices, but you could still provide the
>>> hypercall in case the bus cannot be scanned by Xen (because it hasn't
>>> been setup).
>> 
>> That is definitely the idea to rely by default on a firmware doing this properly.
>> I am not sure wether a proper enumeration could be detected properly in all
>> cases so it would make sens to rely on Dom0 enumeration when a Xen
>> command line argument is passed as explained in one of Rahul’s mails.
> 
> I assume Linux somehow knows when it needs to initialize the PCI root
> complex before attempting to access the bus. Would it be possible to
> add this logic to Xen so it can figure out on it's own whether it's
> safe to scan the PCI bus or whether it needs to wait for the hardware
> domain to report the devices present?

That might be possible to do, but it would anyway require a command line
argument to force Xen to let the hardware domain do the initialization
in case Xen's detection does not work properly.
In the case where there is a Dom0, I would rather expect that we let it
do the initialization all the time, unless the user states via a command
line argument that the current one is correct and shall be used.
In the Dom0Less case it must be correct, as nobody will do it anyway.

I fear a bit that any logic to detect whether the initialization is
correct will have more exceptions than working cases, and we might end
up with lots of platform-specific quirks.
But I might be wrong; does anybody have an idea here?

> 
>>> 
>>>> 
>>>> # Enable the existing x86 virtual PCI support for ARM:
>>>> 
>>>> The existing VPCI support available for X86 is adapted for Arm. When
>>>> the device is added to XEN via the hyper call
>>>> “PHYSDEVOP_pci_device_add”, VPCI handler for the config space access
>>>> is added to the PCI device to emulate the PCI devices.
>>>> 
>>>> A MMIO trap handler for the PCI ECAM space is registered in XEN so
>>>> that when guest is trying to access the PCI config space, XEN will
>>>> trap the access and emulate read/write using the VPCI and not the
>>>> real PCI hardware.
>>>> 
>>>> Limitation:
>>>> * No handler is register for the MSI configuration.
>>> 
>>> But you need to mask MSI/MSI-X capabilities in the config space in
>>> order to prevent access from domains? (and by mask I mean remove from
>>> the list of capabilities and prevent reads/writes to that
>>> configuration space).
>>> 
>>> Note this is already implemented for x86, and I've tried to add arch_
>>> hooks for arch specific stuff so that it could be reused by Arm. But
>>> maybe this would require a different design document?
>> 
>> As said, we will handle MSI support in a separate document/step.
>> 
>>> 
>>>> * Only legacy interrupt is supported and tested as of now, MSI is not
>>>> implemented and tested.
>>>> 
>>>> # Assign the device to the guest:
>>>> 
>>>> Assign the PCI device from the hardware domain to the guest is done
>>>> using the below guest config option. When xl tool create the domain,
>>>> PCI devices will be assigned to the guest VPCI bus.
>>>> 
>>>> pci=[ "PCI_SPEC_STRING", "PCI_SPEC_STRING", ...]
>>>> 
>>>> Guest will be only able to access the assigned devices and see the
>>>> bridges. Guest will not be able to access or see the devices that
>>>> are no assigned to him.
>>>> 
>>>> Limitation:
>>>> * As of now all the bridges in the PCI bus are seen by
>>>> the guest on the VPCI bus.
>>> 
>>> I don't think you need all of them, just the ones that are higher up
>>> on the hierarchy of the device you are trying to passthrough?
>>> 
>>> Which kind of access do guest have to PCI bridges config space?
>> 
>> For now the bridges are read only, no specific access is required by guests.
>> 
>>> 
>>> This should be limited to read-only accesses in order to be safe.
>>> 
>>> Emulating a PCI bridge in Xen using vPCI shouldn't be that
>>> complicated, so you could likely replace the real bridges with
>>> emulated ones. Or even provide a fake topology to the guest using an
>>> emulated bridge.
>> 
>> Just showing all bridges and keeping the hardware topology is the simplest
>> solution for now. But maybe showing a different topology and only fake
>> bridges could make sense and be implemented in the future.
> 
> Ack. I've also heard rumors of Xen on Arm people being very interested
> in VirtIO support, in which case you might expose both fully emulated
> VirtIO devices and PCI passthrough devices on the PCI bus, so it would
> be good to spend some time thinking how those will fit together.
> 
> Will you allocate a separate segment unused by hardware to expose the
> fully emulated PCI devices (VirtIO)?
> 
> Will OSes support having several segments?
> 
> If not you likely need to have emulated bridges so that you can adjust
> the bridge window accordingly to fit the passthrough and the emulated
> MMIO space, and likely be able to expose passthrough devices using a
> different topology than the host one.

Honestly this is not something we considered. I was more thinking that
this use case would be handled by creating another VPCI bus dedicated
to those kinds of devices instead of mixing physical and virtual devices.

> 
>>> 
>>>> 
>>>> # Emulated PCI device tree node in libxl:
>>>> 
>>>> Libxl is creating a virtual PCI device tree node in the device tree
>>>> to enable the guest OS to discover the virtual PCI during guest
>>>> boot. We introduced the new config option [vpci="pci_ecam"] for
>>>> guests. When this config option is enabled in a guest configuration,
>>>> a PCI device tree node will be created in the guest device tree.
>>>> 
>>>> A new area has been reserved in the arm guest physical map at which
>>>> the VPCI bus is declared in the device tree (reg and ranges
>>>> parameters of the node). A trap handler for the PCI ECAM access from
>>>> guest has been registered at the defined address and redirects
>>>> requests to the VPCI driver in Xen.
>>> 
>>> Can't you deduce the requirement of such DT node based on the presence
>>> of a 'pci=' option in the same config file?
>>> 
>>> Also I wouldn't discard that in the future you might want to use
>>> different emulators for different devices, so it might be helpful to
>>> introduce something like:
>>> 
>>> pci = [ '08:00.0,backend=vpci', '09:00.0,backend=xenpt', '0a:00.0,backend=qemu', ... ]
>>> 
>>> For the time being Arm will require backend=vpci for all the passed
>>> through devices, but I wouldn't rule out this changing in the future.
>> 
>> We need it for the case where no device is declared in the config file and the user
>> wants to add devices using xl later. In this case we must have the DT node for it
>> to work.
> 
> There's a passthrough xl.cfg option for that already, so that if you
> don't want to add any PCI passthrough devices at creation time but
> rather hotplug them you can set:
> 
> passthrough=enabled
> 
> And it should setup the domain to be prepared to support hot
> passthrough, including the IOMMU [0].

Isn’t this option covering more than PCI passthrough?

Lots of Arm platforms do not have a PCI bus at all, so for those
creating a VPCI bus would be pointless. But you might need to activate
this to pass devices which are not on the PCI bus.

> 
>> Regarding possible backends, this could be added in the future if required.
>> 
>>> 
>>>> Limitation:
>>>> * Only one PCI device tree node is supported as of now.
>>>> 
>>>> BAR value and IOMEM mapping:
>>>> 
>>>> Linux guest will do the PCI enumeration based on the area reserved
>>>> for ECAM and IOMEM ranges in the VPCI device tree node. Once PCI
>>>> device is assigned to the guest, XEN will map the guest PCI IOMEM
>>>> region to the real physical IOMEM region only for the assigned
>>>> devices.
>>> 
>>> PCI IOMEM == BARs? Or are you referring to the ECAM access window?
>> 
>> Here by PCI IOMEM we mean the IOMEM spaces referred to by the BARs
>> of the PCI device
> 
> OK, might be worth to use PCI BARs explicitly rather than PCI IOMEM as
> I think that's likely to be confused with the config space IOMEM.

Good point, we will rephrase that in the next design version.

> 
>>> 
>>>> As of now we have not modified the existing VPCI code to map the
>>>> guest PCI IOMEM region to the real physical IOMEM region. We used
>>>> the existing guest “iomem” config option to map the region.  For
>>>> example: Guest reserved IOMEM region:  0x04020000 Real physical
>>>> IOMEM region:0x50000000 IOMEM size:128MB iomem config will be:
>>>> iomem = ["0x50000,0x8000@0x4020"]
>>>> 
>>>> There is no need to map the ECAM space as XEN already have access to
>>>> the ECAM space and XEN will trap ECAM accesses from the guest and
>>>> will perform read/write on the VPCI bus.
>>>> 
>>>> IOMEM access will not be trapped and the guest will directly access
>>>> the IOMEM region of the assigned device via stage-2 translation.
>>>> 
>>>> In the same, we mapped the assigned devices IRQ to the guest using
>>>> below config options.  irqs= [ NUMBER, NUMBER, ...]
>>> 
>>> Are you providing this for the hardware domain also? Or are irqs
>>> fetched from the DT in that case?
>> 
>> This will only be used temporarily until we have proper support to do this
>> automatically when a device is assigned. Right now our current implementation
>> status requires the user to explicitely redirect the interrupts required by the PCI
>> devices assigned but in the final version this entry will not be needed.
> 
> Right, I'm not sure whether this should be marked somehow as **
> WORKAROUND ** or ** TEMPORARY ** in the document, since it's not
> supposed to be part of the final implementation.

We added those to make clear that the implementation in the RFC we will
push was this way. Those will disappear with time and versions.
We will add a big TEMPORARY in the next version.
DQoNCj4gDQo+PiBEb20wIHJlbGllcyBvbiB0aGUgZW50cmllcyBkZWNsYXJlZCBpbiB0aGUgRFQu
DQo+PiANCj4+PiANCj4+Pj4gTGltaXRhdGlvbjoNCj4+Pj4gKiBOZWVkIHRvIGF2b2lkIHRoZSDi
gJxpb21lbeKAnSBhbmQg4oCcaXJx4oCdIGd1ZXN0IGNvbmZpZw0KPj4+PiBvcHRpb25zIGFuZCBt
YXAgdGhlIElPTUVNIHJlZ2lvbiBhbmQgSVJRIGF0IHRoZSBzYW1lIHRpbWUgd2hlbg0KPj4+PiBk
ZXZpY2UgaXMgYXNzaWduZWQgdG8gdGhlIGd1ZXN0IHVzaW5nIHRoZSDigJxwY2nigJ0gZ3Vlc3Qg
Y29uZmlnIG9wdGlvbnMNCj4+Pj4gd2hlbiB4bCBjcmVhdGVzIHRoZSBkb21haW4uDQo+Pj4+ICog
RW11bGF0ZWQgQkFSIHZhbHVlcyBvbiB0aGUgVlBDSSBidXMgc2hvdWxkIHJlZmxlY3QgdGhlIElP
TUVNIG1hcHBlZA0KPj4+PiBhZGRyZXNzLg0KPj4+IA0KPj4+IEl0IHdhcyBteSB1bmRlcnN0YW5k
aW5nIHRoYXQgeW91IHdvdWxkIGlkZW50aXR5IG1hcCB0aGUgQkFSIGludG8gdGhlDQo+Pj4gZG9t
VSBzdGFnZS0yIHRyYW5zbGF0aW9uLCBhbmQgdGhhdCBjaGFuZ2VzIGJ5IHRoZSBndWVzdCB3b24n
dCBiZQ0KPj4+IGFsbG93ZWQuDQo+PiANCj4+IEluIGZhY3QgdGhpcyBpcyBub3QgcG9zc2libGUg
dG8gZG8gYW5kIHdlIGhhdmUgdG8gcmVtYXAgYXQgYSBkaWZmZXJlbnQgYWRkcmVzcw0KPj4gYmVj
YXVzZSB0aGUgZ3Vlc3QgcGh5c2ljYWwgbWFwcGluZyBpcyBmaXhlZCBieSBYZW4gb24gQXJtIHNv
IHdlIG11c3QgZm9sbG93DQo+PiB0aGUgc2FtZSBkZXNpZ24gb3RoZXJ3aXNlIHRoaXMgd291bGQg
b25seSB3b3JrIGlmIHRoZSBCQVJzIGFyZSBwb2ludGluZyB0byBhbg0KPj4gYWRkcmVzcyB1bnVz
ZWQgYW5kIG9uIEp1bm8gdGhpcyBpcyBmb3IgZXhhbXBsZSBjb25mbGljdGluZyB3aXRoIHRoZSBn
dWVzdA0KPj4gUkFNIGFkZHJlc3MuDQo+IA0KPiBUaGlzIHdhcyBub3QgY2xlYXIgZnJvbSBteSBy
ZWFkaW5nIG9mIHRoZSBkb2N1bWVudCwgY291bGQgeW91IHBsZWFzZQ0KPiBjbGFyaWZ5IG9uIHRo
ZSBuZXh0IHZlcnNpb24gdGhhdCB0aGUgZ3Vlc3QgcGh5c2ljYWwgbWVtb3J5IG1hcCBpcw0KPiBh
bHdheXMgdGhlIHNhbWUsIGFuZCB0aGF0IEJBUnMgZnJvbSBQQ0kgZGV2aWNlcyBjYW5ub3QgYmUg
aWRlbnRpdHkNCj4gbWFwcGVkIHRvIHRoZSBzdGFnZS0yIHRyYW5zbGF0aW9uIGFuZCBpbnN0ZWFk
IGFyZSByZWxvY2F0ZWQgc29tZXdoZXJlDQo+IGVsc2U/DQoNCldlIHdpbGwuDQoNCj4gDQo+IEkn
bSB0aGVuIGNvbmZ1c2VkIGFib3V0IHdoYXQgeW91IGRvIHdpdGggYnJpZGdlIHdpbmRvd3MsIGRv
IHlvdSBhbHNvDQo+IHRyYXAgYW5kIGFkanVzdCB0aGVtIHRvIHJlcG9ydCBhIGRpZmZlcmVudCBJ
T01FTSByZWdpb24/DQoNClllcyB0aGlzIGlzIHdoYXQgd2Ugd2lsbCBoYXZlIHRvIGRvIHNvIHRo
YXQgdGhlIHJlZ2lvbnMgcmVmbGVjdCB0aGUgVlBDSSBtYXBwaW5ncw0KYW5kIG5vdCB0aGUgaGFy
ZHdhcmUgb25lLg0KDQo+IA0KPiBBYm92ZSB5b3UgbWVudGlvbmVkIHRoYXQgcmVhZC1vbmx5IGFj
Y2VzcyB3YXMgZ2l2ZW4gdG8gYnJpZGdlDQo+IHJlZ2lzdGVycywgYnV0IEkgZ3Vlc3Mgc29tZSBh
cmUgYWxzbyBlbXVsYXRlZCBpbiBvcmRlciB0byByZXBvcnQNCj4gbWF0Y2hpbmcgSU9NRU0gcmVn
aW9ucz8NCg0KeWVzIHRoYXTigJlzIGV4YWN0LiBXZSB3aWxsIGNsZWFyIHRoaXMgaW4gdGhlIG5l
eHQgdmVyc2lvbi4NCg0KQmVydHJhbmQNCg0KPiANCj4gUm9nZXIuDQo+IA0KPiBbMF0gaHR0cHM6
Ly94ZW5iaXRzLnhlbi5vcmcvZG9jcy91bnN0YWJsZS9tYW4veGwuY2ZnLjUuaHRtbCNPdGhlci1P
cHRpb25zDQoNCg==


From xen-devel-bounces@lists.xenproject.org Fri Jul 17 15:24:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jul 2020 15:24:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jwSDa-0004bB-8X; Fri, 17 Jul 2020 15:24:10 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zdYj=A4=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1jwSDZ-0004b5-2L
 for xen-devel@lists.xenproject.org; Fri, 17 Jul 2020 15:24:09 +0000
X-Inumbo-ID: 8d84c9b1-c841-11ea-9628-12813bfff9fa
Received: from EUR01-DB5-obe.outbound.protection.outlook.com (unknown
 [40.107.15.51]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 8d84c9b1-c841-11ea-9628-12813bfff9fa;
 Fri, 17 Jul 2020 15:24:08 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com; 
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=dQXfMS2KW1AADi+PJHjBZrW/DpIYPlu2qSbWsw15YUc=;
 b=ZD1sbznfnZw6H8At2GYHK/O9GS8gQznPT7pCg/ZMeyGGGpScf9qf2DX36lED41SO1a/MOUqL+WHs0AWtkoRWsCF4No0toyHYosn6Bu/L2iuWvofRa48CETlgRv+CGhHDADjG0aX+lKYDsFbX+ksPyimzOL0QWHVTuzP7sqRrk5E=
Received: from AM6P192CA0012.EURP192.PROD.OUTLOOK.COM (2603:10a6:209:83::25)
 by AM6PR08MB5096.eurprd08.prod.outlook.com (2603:10a6:20b:ee::26) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3174.23; Fri, 17 Jul
 2020 15:24:06 +0000
Received: from VE1EUR03FT028.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:209:83:cafe::38) by AM6P192CA0012.outlook.office365.com
 (2603:10a6:209:83::25) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3195.17 via Frontend
 Transport; Fri, 17 Jul 2020 15:24:06 +0000
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org;
 dmarc=bestguesspass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT028.mail.protection.outlook.com (10.152.18.88) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3195.18 via Frontend Transport; Fri, 17 Jul 2020 15:24:05 +0000
Received: ("Tessian outbound 1dc58800d5dd:v62");
 Fri, 17 Jul 2020 15:24:05 +0000
X-CheckRecipientChecked: true
X-CR-MTA-CID: 052dad7aa9d28e7a
X-CR-MTA-TID: 64aa7808
Received: from c2e377230dd1.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 690F5022-0D11-44D5-87D4-A278D6501B51.1; 
 Fri, 17 Jul 2020 15:23:59 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id c2e377230dd1.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 17 Jul 2020 15:23:59 +0000
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=ERgZECdtXeE/ev1npsA0KXvLGHkT2vVVEOJefmth1OJdjNbAuujXCGkXAJBh/DAR64wab5jpo2UuRzgj3gRRcEr8cZhV1ZBZRC7T8owzppJ43HhLzx2RMv0Wa8ydlhN9+OcL4hvdZSMTV0fFy/CDHicBBhdjV1CY17x2klhf85cPn/X+UPMMZyZ96CNyp4i2nmJXtv+vYBRnbF5vqqu+6W01azkfRWibHaIu6fYPWUmfwuuHThItVswrKAUaXblEBTHIXhcT3NTqkiy0GGvd0s4BsksA7H2lxqLAcaS9MNwc7T+3tFjh/rdkyeFn7DhzXpyIjvVrpqDpW46BwkGMxg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; 
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=dQXfMS2KW1AADi+PJHjBZrW/DpIYPlu2qSbWsw15YUc=;
 b=avt25V3DMCtlznBUv3mEyYiAHOLLGo7fAjMT0MfLWoV/Cs20j+cLj+RgThsLHt+iDArgH+I6ZR9VOyGxViITrMsvrrp0qIgK0ikC3ccnQI4+OZtdQtgqQxSVLD6xms3ynY2AhyyhVC4+bC8JN0AUm1aNe+VxlDx8th+UhEF9NhX+tAZ56D2cJpbnqAeYNBPgT7HW7EaF+ev3ejL5wlkRMbMuDHKpN7qk8Et+gj4emT92naZ4LWhXoixyycdInw2hutUFj0sK2pcut2IlgJ4dupOPyQLkqZhoR26rqh+2n2pwoPj2oNtJ4IW0REuDTDEv+TF4+/5Whbwt3i3PaAjvHQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com; 
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=dQXfMS2KW1AADi+PJHjBZrW/DpIYPlu2qSbWsw15YUc=;
 b=ZD1sbznfnZw6H8At2GYHK/O9GS8gQznPT7pCg/ZMeyGGGpScf9qf2DX36lED41SO1a/MOUqL+WHs0AWtkoRWsCF4No0toyHYosn6Bu/L2iuWvofRa48CETlgRv+CGhHDADjG0aX+lKYDsFbX+ksPyimzOL0QWHVTuzP7sqRrk5E=
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DBBPR08MB4252.eurprd08.prod.outlook.com (2603:10a6:10:c2::23) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3195.18; Fri, 17 Jul
 2020 15:23:58 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::4001:43ad:d113:46a8]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::4001:43ad:d113:46a8%5]) with mapi id 15.20.3174.028; Fri, 17 Jul 2020
 15:23:57 +0000
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: =?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>
Subject: Re: RFC: PCI devices passthrough on Arm design proposal
Thread-Topic: RFC: PCI devices passthrough on Arm design proposal
Thread-Index: AQHWW4kYTVU0hTDyYEitKlUuU5vZlKkKf2uAgAACLICAAOrEgIAAVPWAgAABeYCAAAssAIAAAcuAgAAIDQCAAAHigIAAAiYAgAAEaYCAAAVDgA==
Date: Fri, 17 Jul 2020 15:23:57 +0000
Message-ID: <FBE040A9-D088-43D6-8929-FFEDE9DDDE34@arm.com>
References: <BD475825-10F6-4538-8294-931E370A602C@arm.com>
 <E9CBAA57-5EF3-47F9-8A40-F5D7816DB2A4@arm.com>
 <a50c714c-1642-0354-3f19-5a6f7278d8aa@suse.com>
 <28899FEF-9DA7-4513-8283-1AC5EFFC6E92@arm.com>
 <1dd5db2d-98c7-7738-c3d4-d3f098dfe674@suse.com>
 <F09F9354-EC9B-4D76-809B-A25AF4F7D863@arm.com>
 <a5007a6c-bdfe-04d4-8107-53cb222b95e8@suse.com>
 <DA19A9EC-A828-4EBC-BCAA-D1D9E4F222BB@arm.com>
 <20200717144139.GU7191@Air-de-Roger>
 <90AE8DAB-2223-46DC-A263-D78365E5435E@arm.com>
 <20200717150507.GW7191@Air-de-Roger>
In-Reply-To: <20200717150507.GW7191@Air-de-Roger>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
Authentication-Results-Original: citrix.com; dkim=none (message not signed)
 header.d=none;citrix.com; dmarc=none action=none header.from=arm.com;
x-originating-ip: [91.160.77.188]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 07502eee-cd8e-4c1c-5ae8-08d82a657181
x-ms-traffictypediagnostic: DBBPR08MB4252:|AM6PR08MB5096:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS: <AM6PR08MB50966C65CCCA836BCE7C7DB79D7C0@AM6PR08MB5096.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:10000;OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original: 7D56bkVGwbp/gVbS47FZu0owaOGKIK/a/MZvNK9w7X698cPtF/OeqMQ8OTuq/TwMWTwV0vvwtrghGaOYMrhOee6hoOOuTswlFeN+QfxEIzV6bw1iNrWRUBYiTsPQW1DiN2hUaet/O2aNTpqdkeR3ZNx5aeyhFNa6yH6uWPtU0A5byX1Rhukuiaj1VNMViVGSbuk6THjTUx1fa0VS/mQ5s9T2Ap++n/a8zwjyfFbl9azWHkKb02ibPkBqPeuZZgLibfLPCScGdV5O/e5WNLqEIGYQxwfU3cyZKiMo9p3fE9X/RkU+Y8tJRQYx359g/8jrvmTdL1qGylo0PWVTe4MO9w==
X-Forefront-Antispam-Report-Untrusted: CIP:255.255.255.255; CTRY:; LANG:en;
 SCL:1; SRV:; IPV:NLI; SFV:NSPM; H:DB7PR08MB3689.eurprd08.prod.outlook.com;
 PTR:; CAT:NONE; SFTY:;
 SFS:(4636009)(366004)(346002)(376002)(396003)(136003)(39860400002)(71200400001)(5660300002)(86362001)(6512007)(8676002)(36756003)(6486002)(76116006)(2906002)(4326008)(66446008)(64756008)(66556008)(66476007)(91956017)(478600001)(6916009)(186003)(26005)(53546011)(6506007)(33656002)(8936002)(66946007)(316002)(54906003)(2616005);
 DIR:OUT; SFP:1101; 
x-ms-exchange-antispam-messagedata: mmLppnV5t43n6CKx2u/JF+H4qCnzD/VKXT87IoZ+6JeWiZg863r7eqBdzZk19bNq7RO52eyW/gaAdBR6fwOeA20Lnrb7vUgRKgUgTKHup5OKnlyP+FdrKu4314vfsVpWcNgw8Bt8+AWJOvYRSffB8T2fEeLobw9vGMfnXCqQNCQXhZZgUewYcIqjh50Y9DMRgGMQ6E7Ali7nEOnXKrZhLTY+R1OiBFwbYiz94bj6Zs8S2MQcAzbpnVJ4ucscaST1n0ZFKE9Hs2vEhttNiIJDx9dlOaUrmAkIO+/j+IHsXpR2AhDgrbw9dEYHb/sprarPZkvrfkZ4syhft9nBoHqz9b+iBIS6thelC8fbwHgNAej1EIXrOsRtC0+Zum0tGwtxesa8Ukk2HnZJEeUjpnaIjlfnpyW5nANgUzOirf1cLa25PRGjt/NhJseMyOQHTcYdpl2/TmO70TCnDCepTM4wwdT3ks2G/pQPde9WtKi5VMHo20QYMAFZ4IxJ7DfMH0Yq
Content-Type: text/plain; charset="utf-8"
Content-ID: <CF45369944101648BFB587DE01F25047@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBBPR08MB4252
Original-Authentication-Results: citrix.com; dkim=none (message not signed)
 header.d=none;citrix.com; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped: VE1EUR03FT028.eop-EUR03.prod.protection.outlook.com
X-Forefront-Antispam-Report: CIP:63.35.35.123; CTRY:IE; LANG:en; SCL:1; SRV:;
 IPV:CAL; SFV:NSPM; H:64aa7808-outbound-1.mta.getcheckrecipient.com;
 PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com; CAT:NONE; SFTY:;
 SFS:(4636009)(39860400002)(136003)(346002)(396003)(376002)(46966005)(82740400003)(4326008)(47076004)(316002)(6512007)(36906005)(36756003)(82310400002)(81166007)(356005)(54906003)(86362001)(70586007)(70206006)(8936002)(8676002)(107886003)(6862004)(186003)(2616005)(6486002)(53546011)(6506007)(5660300002)(478600001)(26005)(336012)(2906002)(33656002);
 DIR:OUT; SFP:1101; 
X-MS-Office365-Filtering-Correlation-Id-Prvs: d7db6088-ab93-4653-7b51-08d82a656ce1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: cbU2yaLOG0+MGfIq0yx4TIGgVUuzHQIuq4k5YhiTiLK0dMSvCKuy34u0caWQEtP09tuNaMp8sdaSHiYV+vJKr+Yo2soHfdBNj7VMOHl92ILaKgWLpzF9avYb1OjffRuIgA99ALv+Vxfxkz1K2LTTAR3+dMNF3HB+cHOrJwQALQamyDgN1evInhLeMvg1haxaN/udvX0pD/fpoxIOYbGFwh09E7jSC5hDfJxqLezhkhPJSIU1vbtphb/gz+bLxayrwQ+wgXcAFIUzsIq7r3j5m3yKFgxodj+qeAqqT/8x13VNKBQDlEJcUmV9WEcDFxe5mTHRC+4b0sjHvU3hINHD5A+0UWR9zTx4istu9k8TT8z57bzzMIRByAHhTrG/S2ao3trU75Nt9qM2biL4/3UH7A==
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Jul 2020 15:24:05.5832 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 07502eee-cd8e-4c1c-5ae8-08d82a657181
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d; Ip=[63.35.35.123];
 Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource: VE1EUR03FT028.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM6PR08MB5096
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Rahul Singh <Rahul.Singh@arm.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 nd <nd@arm.com>, Julien Grall <julien.grall.oss@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

DQoNCj4gT24gMTcgSnVsIDIwMjAsIGF0IDE3OjA1LCBSb2dlciBQYXUgTW9ubsOpIDxyb2dlci5w
YXVAY2l0cml4LmNvbT4gd3JvdGU6DQo+IA0KPiBPbiBGcmksIEp1bCAxNywgMjAyMCBhdCAwMjo0
OToyMFBNICswMDAwLCBCZXJ0cmFuZCBNYXJxdWlzIHdyb3RlOg0KPj4gDQo+PiANCj4+PiBPbiAx
NyBKdWwgMjAyMCwgYXQgMTY6NDEsIFJvZ2VyIFBhdSBNb25uw6kgPHJvZ2VyLnBhdUBjaXRyaXgu
Y29tPiB3cm90ZToNCj4+PiANCj4+PiBPbiBGcmksIEp1bCAxNywgMjAyMCBhdCAwMjozNDo1NVBN
ICswMDAwLCBCZXJ0cmFuZCBNYXJxdWlzIHdyb3RlOg0KPj4+PiANCj4+Pj4gDQo+Pj4+PiBPbiAx
NyBKdWwgMjAyMCwgYXQgMTY6MDYsIEphbiBCZXVsaWNoIDxqYmV1bGljaEBzdXNlLmNvbT4gd3Jv
dGU6DQo+Pj4+PiANCj4+Pj4+IE9uIDE3LjA3LjIwMjAgMTU6NTksIEJlcnRyYW5kIE1hcnF1aXMg
d3JvdGU6DQo+Pj4+Pj4gDQo+Pj4+Pj4gDQo+Pj4+Pj4+IE9uIDE3IEp1bCAyMDIwLCBhdCAxNTox
OSwgSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPiB3cm90ZToNCj4+Pj4+Pj4gDQo+Pj4+
Pj4+IE9uIDE3LjA3LjIwMjAgMTU6MTQsIEJlcnRyYW5kIE1hcnF1aXMgd3JvdGU6DQo+Pj4+Pj4+
Pj4gT24gMTcgSnVsIDIwMjAsIGF0IDEwOjEwLCBKYW4gQmV1bGljaCA8amJldWxpY2hAc3VzZS5j
b20+IHdyb3RlOg0KPj4+Pj4+Pj4+IE9uIDE2LjA3LjIwMjAgMTk6MTAsIFJhaHVsIFNpbmdoIHdy
b3RlOg0KPj4+Pj4+Pj4+PiAjIEVtdWxhdGVkIFBDSSBkZXZpY2UgdHJlZSBub2RlIGluIGxpYnhs
Og0KPj4+Pj4+Pj4+PiANCj4+Pj4+Pj4+Pj4gTGlieGwgaXMgY3JlYXRpbmcgYSB2aXJ0dWFsIFBD
SSBkZXZpY2UgdHJlZSBub2RlIGluIHRoZSBkZXZpY2UgdHJlZSB0byBlbmFibGUgdGhlIGd1ZXN0
IE9TIHRvIGRpc2NvdmVyIHRoZSB2aXJ0dWFsIFBDSSBkdXJpbmcgZ3Vlc3QgYm9vdC4gV2UgaW50
cm9kdWNlZCB0aGUgbmV3IGNvbmZpZyBvcHRpb24gW3ZwY2k9InBjaV9lY2FtIl0gZm9yIGd1ZXN0
cy4gV2hlbiB0aGlzIGNvbmZpZyBvcHRpb24gaXMgZW5hYmxlZCBpbiBhIGd1ZXN0IGNvbmZpZ3Vy
YXRpb24sIGEgUENJIGRldmljZSB0cmVlIG5vZGUgd2lsbCBiZSBjcmVhdGVkIGluIHRoZSBndWVz
dCBkZXZpY2UgdHJlZS4NCj4+Pj4+Pj4+PiANCj4+Pj4+Pj4+PiBJIHN1cHBvcnQgU3RlZmFubydz
IHN1Z2dlc3Rpb24gZm9yIHRoaXMgdG8gYmUgYW4gb3B0aW9uYWwgdGhpbmcsIGkuZS4NCj4+Pj4+
Pj4+PiB0aGVyZSB0byBiZSBubyBuZWVkIGZvciBpdCB3aGVuIHRoZXJlIGFyZSBQQ0kgZGV2aWNl
cyBhc3NpZ25lZCB0byB0aGUNCj4+Pj4+Pj4+PiBndWVzdCBhbnl3YXkuIEkgYWxzbyB3b25kZXIg
YWJvdXQgdGhlIHBjaV8gcHJlZml4IGhlcmUgLSBpc24ndA0KPj4+Pj4+Pj4+IHZwY2k9ImVjYW0i
IGFzIHVuYW1iaWd1b3VzPw0KPj4+Pj4+Pj4gDQo+Pj4+Pj4+PiBUaGlzIGNvdWxkIGJlIGEgcHJv
YmxlbSBhcyB3ZSBuZWVkIHRvIGtub3cgdGhhdCB0aGlzIGlzIHJlcXVpcmVkIGZvciBhIGd1ZXN0
IHVwZnJvbnQgc28gdGhhdCBQQ0kgZGV2aWNlcyBjYW4gYmUgYXNzaWduZWQgYWZ0ZXIgdXNpbmcg
eGwuIA0KPj4+Pj4+PiANCj4+Pj4+Pj4gSSdtIGFmcmFpZCBJIGRvbid0IHVuZGVyc3RhbmQ6IFdo
ZW4gdGhlcmUgYXJlIG5vIFBDSSBkZXZpY2UgdGhhdCBnZXQNCj4+Pj4+Pj4gaGFuZGVkIHRvIGEg
Z3Vlc3Qgd2hlbiBpdCBnZXRzIGNyZWF0ZWQsIGJ1dCBpdCBpcyBzdXBwb3NlZCB0byBiZSBhYmxl
DQo+Pj4+Pj4+IHRvIGhhdmUgc29tZSBhc3NpZ25lZCB3aGlsZSBhbHJlYWR5IHJ1bm5pbmcsIHRo
ZW4gd2UgYWdyZWUgdGhlIG9wdGlvbg0KPj4+Pj4+PiBpcyBuZWVkZWQgKGFmYWljdCkuIFdoZW4g
UENJIGRldmljZXMgZ2V0IGhhbmRlZCB0byB0aGUgZ3Vlc3Qgd2hpbGUgaXQNCj4+Pj4+Pj4gZ2V0
cyBjb25zdHJ1Y3RlZCwgd2hlcmUncyB0aGUgcHJvYmxlbSB0byBpbmZlciB0aGlzIG9wdGlvbiBm
cm9tIHRoZQ0KPj4+Pj4+PiBwcmVzZW5jZSBvZiBQQ0kgZGV2aWNlcyBpbiB0aGUgZ3Vlc3QgY29u
ZmlndXJhdGlvbj8NCj4+Pj4+PiANCj4+Pj4+PiBJZiB0aGUgdXNlciB3YW50cyB0byB1c2UgeGwg
cGNpLWF0dGFjaCB0byBhdHRhY2ggaW4gcnVudGltZSBhIGRldmljZSB0byBhIGd1ZXN0LCB0aGlz
IGd1ZXN0IG11c3QgaGF2ZSBhIFZQQ0kgYnVzIChldmVuIHdpdGggbm8gZGV2aWNlcykuDQo+Pj4+
Pj4gSWYgd2UgZG8gbm90IGhhdmUgdGhlIHZwY2kgcGFyYW1ldGVyIGluIHRoZSBjb25maWd1cmF0
aW9uIHRoaXMgdXNlIGNhc2Ugd2lsbCBub3Qgd29yayBhbnltb3JlLg0KPj4+Pj4gDQo+Pj4+PiBU
aGF0J3Mgd2hhdCBldmVyeW9uZSBsb29rcyB0byBhZ3JlZSB3aXRoLiBZZXQgd2h5IGlzIHRoZSBw
YXJhbWV0ZXIgbmVlZGVkDQo+Pj4+PiB3aGVuIHRoZXJlIF9hcmVfIFBDSSBkZXZpY2VzIGFueXdh
eT8gVGhhdCdzIHRoZSAib3B0aW9uYWwiIHRoYXQgU3RlZmFubw0KPj4+Pj4gd2FzIHN1Z2dlc3Rp
bmcsIGFpdWkuDQo+Pj4+IA0KPj4+PiBJIGFncmVlIGluIHRoaXMgY2FzZSB0aGUgcGFyYW1ldGVy
IGNvdWxkIGJlIG9wdGlvbmFsIGFuZCBvbmx5IHJlcXVpcmVkIGlmIG5vdCBQQ0kgZGV2aWNlIGlz
IGFzc2lnbmVkIGRpcmVjdGx5IGluIHRoZSBndWVzdCBjb25maWd1cmF0aW9uLg0KPj4+IA0KPj4+
IFdoZXJlIHdpbGwgdGhlIEVDQU0gcmVnaW9uKHMpIGFwcGVhciBvbiB0aGUgZ3Vlc3QgcGh5c21h
cD8NCj4+PiANCj4+PiBBcmUgeW91IGdvaW5nIHRvIHJlLXVzZSB0aGUgc2FtZSBsb2NhdGlvbnMg
YXMgb24gdGhlIHBoeXNpY2FsDQo+Pj4gaGFyZHdhcmUsIG9yIHdpbGwgdGhleSBhcHBlYXIgc29t
ZXdoZXJlIGVsc2U/DQo+PiANCj4+IFdlIHdpbGwgYWRkIHNvbWUgbmV3IGRlZmluaXRpb25zIGZv
ciB0aGUgRUNBTSByZWdpb25zIGluIHRoZSBndWVzdCBwaHlzbWFwIGRlY2xhcmVkIGluIHhlbiAo
aW5jbHVkZS9hc20tYXJtL2NvbmZpZy5oKQ0KPiANCj4gSSB0aGluayBJJ20gY29uZnVzZWQsIGJ1
dCB0aGF0IGZpbGUgZG9lc24ndCBjb250YWluIGFueXRoaW5nIHJlbGF0ZWQNCj4gdG8gdGhlIGd1
ZXN0IHBoeXNtYXAsIHRoYXQncyB0aGUgWGVuIHZpcnR1YWwgbWVtb3J5IGxheW91dCBvbiBBcm0N
Cj4gQUZBSUNUPw0KPiANCj4gRG9lcyB0aGlzIHNvbWVob3cgcmVsYXRlIHRvIHRoZSBwaHlzaWNh
bCBtZW1vcnkgbWFwIGV4cG9zZWQgdG8gZ3Vlc3RzDQo+IG9uIEFybT8NCg0KWWVzIGl0IGRvZXMu
DQpXZSB3aWxsIGFkZCBuZXcgZGVmaW5pdGlvbnMgdGhlcmUgcmVsYXRlZCB0byBWUENJIHRvIHJl
c2VydmUgc29tZSBhcmVhcyBmb3IgdGhlIFZQQ0kgRUNBTSBhbmQgdGhlIElPTUVNIGFyZWFzLg0K
DQpCZXJ0cmFuZA0KDQo+IA0KPiBSb2dlci4NCg0K


From xen-devel-bounces@lists.xenproject.org Fri Jul 17 15:26:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jul 2020 15:26:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jwSFx-0004it-Ne; Fri, 17 Jul 2020 15:26:37 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=/fKj=A4=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jwSFw-0004io-M6
 for xen-devel@lists.xenproject.org; Fri, 17 Jul 2020 15:26:36 +0000
X-Inumbo-ID: e668eb9c-c841-11ea-bca7-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e668eb9c-c841-11ea-bca7-bc764e2007e4;
 Fri, 17 Jul 2020 15:26:35 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=Vu2Y0wvFphqWMvee/7Lu18rER6wOiwx5S/QJuqHMZuk=; b=FzmENLdA+LN53jaPjVSHmv//9F
 IdDy9+gZCkD7G8pxM9ZycMl6hT1rQdsTrrPYqSpCjAczjFnuLdcCSKYHZq+e+WLjEYwJ6swZvINsw
 WQHkXQc2kEGRLP8F1KSH7/MicMbhyrjo4AkY1WvleT3OHKIJbSlJ+mNrEwb02BfwgcSU=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jwSFt-0000iU-TZ; Fri, 17 Jul 2020 15:26:33 +0000
Received: from 54-240-197-227.amazon.com ([54.240.197.227]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jwSFt-00051Y-CE; Fri, 17 Jul 2020 15:26:33 +0000
Subject: Re: PCI devices passthrough on Arm design proposal
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
References: <3F6E40FB-79C5-4AE8-81CA-E16CA37BB298@arm.com>
 <BD475825-10F6-4538-8294-931E370A602C@arm.com>
 <8ac91a1b-e6b3-0f2b-0f23-d7aff100936d@xen.org>
 <c7d5a084-8111-9f43-57e1-bcf2bd822f5b@xen.org>
 <865D5A77-85D4-4A88-A228-DDB70BDB3691@arm.com>
From: Julien Grall <julien@xen.org>
Message-ID: <972c0c81-6595-7c41-baa5-8882f5d1c0ff@xen.org>
Date: Fri, 17 Jul 2020 16:26:31 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:68.0)
 Gecko/20100101 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <865D5A77-85D4-4A88-A228-DDB70BDB3691@arm.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Rahul Singh <Rahul.Singh@arm.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 nd <nd@arm.com>, Julien Grall <julien.grall.oss@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>



On 17/07/2020 15:47, Bertrand Marquis wrote:
>>>> # Title:
>>>>
>>>> PCI devices passthrough on Arm design proposal
>>>>
>>>> # Problem statement:
>>>>
>>>> On Arm there is no support to assign a PCI device to a guest. PCI device passthrough allows guests to have full access to some PCI devices: the devices appear and behave as if they were physically attached to the guest operating system, while remaining fully isolated from the other domains.
>>>>
>>>> The goal of this work is also to support Dom0less configurations, so the PCI backend/frontend drivers used on x86 shall not be used on Arm. Instead, it will reuse the existing VPCI concept from x86 and implement the virtual PCI bus through I/O emulation, such that only assigned devices are visible to the guest and the guest can use a standard PCI driver.
>>>>
>>>> Only Dom0 and Xen will have access to the real PCI bus; a guest will have direct access to the assigned device itself. IOMEM regions will be mapped to the guest and interrupts will be redirected to it. The SMMU has to be configured correctly for DMA transactions to work.
>>>>
>>>> ## Current state: Draft version
>>>>
>>>> # Proposer(s): Rahul Singh, Bertrand Marquis
>>>>
>>>> # Proposal:
>>>>
>>>> This section will describe the different subsystems needed to support PCI device passthrough and how these subsystems interact with each other to assign a device to a guest.
>>>>
>>>> # PCI Terminology:
>>>>
>>>> Host Bridge: Host bridge allows the PCI devices to talk to the rest of the computer.
>>>> ECAM: ECAM (Enhanced Configuration Access Mechanism) is the standard mechanism for memory-mapped access to the PCIe configuration space. The space available per function is 4KB.
>>>>
>>>> # Discovering PCI Host Bridge in XEN:
>>>>
>>>> In order to support PCI passthrough, Xen should be aware of all the PCI host bridges available on the system and should be able to access the PCI configuration space. Only ECAM configuration access is supported as of now. During boot, Xen will read the “reg” property of the PCI device tree node and will map the ECAM space into Xen memory using the “ioremap_nocache()” function.
>>>>
>>>> If there is more than one segment on the system, Xen will read the “linux,pci-domain” property from the device tree node and configure the host bridge segment number accordingly. All the PCI device tree nodes should have the “linux,pci-domain” property so that there are no conflicts. During hardware domain boot, Linux will also use the same “linux,pci-domain” property and assign the domain number to the host bridge.
>>> AFAICT, "linux,pci-domain" is not a mandatory property and is mostly tied to Linux. What would happen with other OSes?
>>> But I would rather avoid mandating that users modify their device tree in order to support PCI passthrough. It would be better for Xen to assign the number if it is not present.
> 
> So you would suggest here that if this entry is not present in the configuration, we just assign a value inside Xen? How should this information be passed to the guest?
> This number is required by the current hypercall used to declare devices to Xen, so those could end up being different.

I am guessing you mean passing it to the hardware domain? If so, Xen is 
already rewriting the device tree for the hardware domain, so it would 
be easy to add more properties.

Now the question is whether other OSes use "linux,pci-domain". I would 
suggest having a look at a *BSD to see how it deals with PCI 
controllers.
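For reference, a host-bridge node carrying this property typically looks like the following device tree fragment (the addresses and sizes here are purely illustrative):

```
pcie@40000000 {
    compatible = "pci-host-ecam-generic";
    device_type = "pci";
    #address-cells = <3>;
    #size-cells = <2>;
    reg = <0x0 0x40000000 0x0 0x10000000>; /* ECAM window */
    bus-range = <0x0 0xff>;
    linux,pci-domain = <0>;
};
```

If "linux,pci-domain" is absent, Xen would have to pick a segment number itself and, as noted above, rewrite it into the device tree handed to the hardware domain so both agree.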

> 
>>>>
>>>> When Dom0 tries to access the PCI config space of a device, Xen will find the corresponding host bridge based on the segment number and access the config space assigned to that bridge.
>>>>
>>>> Limitation:
>>>> * Only PCI ECAM configuration space access is supported.
>>>> * Device tree binding is supported as of now, ACPI is not supported.
>>> We want to differentiate the high-level design from the actual implementation. While you may not yet implement ACPI, we still need to keep it in mind to avoid incompatibilities in the long term.
> 
> We certainly do not want to design anything that would be impossible to implement with ACPI.
> I hope the community will help us during review to spot such problems if we do not see them ourselves.

Have a look at the design document I pointed out in my previous answer. 
It already contains a lot of information about ACPI :).

> 
>>>> * The PCI host bridge access code needs to be ported to Xen to access the configuration space (the generic one works, but lots of platforms will require some specific code or quirks).
>>>>
>>>> # Discovering PCI devices:
>>>>
>>>> PCI/PCIe enumeration is the process of detecting the devices connected to a host. It is the responsibility of the hardware domain or the boot firmware to do the PCI enumeration and configure the BARs, PCI capabilities, and MSI/MSI-X configuration.
>>>>
>>>> Doing the configuration part of PCI/PCIe enumeration in Xen is not feasible, as it would require a lot of code inside Xen and therefore a lot of maintenance. In addition, many platforms require quirks in that part of the PCI code, which would greatly increase Xen's complexity. Once the hardware domain enumerates a device, it will communicate it to Xen via the hypercall below.
>>>>
>>>> #define PHYSDEVOP_pci_device_add        25
>>>> struct physdev_pci_device_add {
>>>>     uint16_t seg;
>>>>     uint8_t bus;
>>>>     uint8_t devfn;
>>>>     uint32_t flags;
>>>>     struct {
>>>>         uint8_t bus;
>>>>         uint8_t devfn;
>>>>     } physfn;
>>>>     /*
>>>>      * Optional parameters array.
>>>>      * First element ([0]) is the PXM domain associated with the
>>>>      * device (if XEN_PCI_DEV_PXM is set).
>>>>      */
>>>>     uint32_t optarr[XEN_FLEX_ARRAY_DIM];
>>>> };
>>>>
>>>> As the hypercall argument carries the PCI segment number, Xen will find the host bridge corresponding to this segment number and access the matching PCI config space. At this stage the host bridge is fully initialized, so there will be no issue accessing the config space.
>>>>
>>>> Xen will add the PCI devices to the linked list maintained in Xen using the function pci_add_device(). Xen will be aware of all the PCI devices on the system, and all the devices will be added to the hardware domain.
>>> I understand this is what x86 does. However, may I ask why we would want it for Arm?
> 
> We wanted to stay as close as possible to the x86 implementation and design.
> But if you have another idea here, we are fully open to discussing it.

In the case of platform device passthrough, we leave the device 
unassigned when it is not used by a guest. This makes sure the device 
can't do any harm if somehow it wasn't reset correctly.

I would prefer to consider the same approach for PCI devices if there is 
no plan to use them in dom0. We would, however, need to figure out how 
PCI devices will be reset.

>>>> * A Dom0less implementation will require the ability inside Xen to discover the PCI devices (without depending on Dom0 to declare them to Xen).
>>>>
>>>> # Enable the existing x86 virtual PCI support for ARM:
>>>>
>>>> The existing VPCI support available for x86 is adapted for Arm. When a device is added to Xen via the hypercall “PHYSDEVOP_pci_device_add”, VPCI handlers for config space accesses are added to the PCI device so that it can be emulated.
>>>>
>>>> An MMIO trap handler for the PCI ECAM space is registered in Xen, so that when a guest tries to access the PCI config space, Xen traps the access and emulates the read/write using VPCI instead of the real PCI hardware.
>>>>
>>>> Limitation:
>>>> * No handler is registered for the MSI configuration.
>>>> * Only legacy interrupts are supported and tested as of now; MSI is not implemented or tested.
>>> IIRC, legacy interrupt may be shared between two PCI devices. How do you plan to handle this on Arm?
> 
> We plan to fix this by adding proper support for MSI in the long term.
> For the use case where MSI is not supported or not wanted we might have to find a way to forward the hardware interrupt to several guests to emulate some kind of shared interrupt.

Sharing interrupts is a bit of a pain because you can't take advantage 
of the direct EOI in HW and have to be careful if one guest doesn't EOI 
in a timely manner.

This is something I would rather avoid unless there is a real use case 
for it.

> 
>>>>
>>>> # Assign the device to the guest:
>>>>
>>>> Assigning a PCI device from the hardware domain to the guest is done using the guest config option below. When the xl tool creates the domain, PCI devices will be assigned to the guest VPCI bus.
>>> Above, you suggest that device will be assigned to the hardware domain at boot. I am assuming this also means that all the interrupts/MMIOs will be routed/mapped, is that correct?
>>> If so, can you provide a rough sketch how assign/deassign will work?
> 
> Yes this is correct. We will improve the design and add a more detailed description of that in the next version.
> To make it short: we remove the resources from the hardware domain first and assign them to the guest the device has been assigned to. Some parts of this are still under investigation.

Hmmm... Does this mean you modified the code to allow an interrupt to be 
removed while the domain is still running?

> 
>>>>      pci=[ "PCI_SPEC_STRING", "PCI_SPEC_STRING", ...]
>>>>
>>>> The guest will only be able to access the assigned devices and see the bridges. It will not be able to access or see devices that are not assigned to it.
>>>>
>>>> Limitation:
>>>> * As of now all the bridges in the PCI bus are seen by the guest on the VPCI bus.
>>> Why do you want to expose all the bridges to a guest? Does this mean that the BDF should always match between the host and the guest?
> 
> That’s not really something that we wanted, but this was the easiest way to go.
> As said in a previous mail, we could build a VPCI bus with a completely different topology, but I am not sure of the advantages this would have.
> Do you see some reason to do this?

Yes :):
   1) If a platform has two host controllers (IIRC Thunder-X has this), 
then you would need to expose two host controllers to your guest. I 
think this is undesirable if your guest is only using a couple of PCI 
devices on each host controller.
   2) In the case of migration (live or not), you may want to use a 
different PCI card on the target platform. So your BDFs and bridges may 
be different.

Therefore I think the virtual topology can be beneficial.

> 
>>>>
>>>> # Emulated PCI device tree node in libxl:
>>>>
>>>> Libxl creates a virtual PCI device tree node in the guest device tree to enable the guest OS to discover the virtual PCI bus during boot. We introduced the new config option [vpci="pci_ecam"] for guests. When this config option is enabled in a guest configuration, a PCI device tree node will be created in the guest device tree.
>>>>
>>>> A new area has been reserved in the Arm guest physical map, at which the VPCI bus is declared in the device tree (the reg and ranges parameters of the node). A trap handler for PCI ECAM accesses from the guest has been registered at the defined address and redirects requests to the VPCI driver in Xen.
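As a hedged illustration only — the addresses and exact node layout below are made up for the example, not the values libxl actually emits — such a node could follow the generic ECAM host-bridge device-tree binding:

```dts
/* Illustrative only: real addresses come from Xen's guest memory map. */
pcie@10000000 {
    compatible = "pci-host-ecam-generic";
    device_type = "pci";
    reg = <0x0 0x10000000 0x0 0x10000000>;  /* ECAM window (example) */
    bus-range = <0 255>;
    #address-cells = <3>;
    #size-cells = <2>;
    ranges = <0x02000000 0x0 0x23000000     /* 32-bit MMIO (example) */
              0x0 0x23000000 0x0 0x10000000>;
};
```

The `reg` property here is what the ECAM trap handler would cover, while `ranges` describes the guest IOMEM window that stage-2 maps to the real device BARs.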
>>>>
>>>> Limitation:
>>>> * Only one PCI device tree node is supported as of now.
>>>>
>>>> BAR value and IOMEM mapping:
>>>>
>>>> The Linux guest will do the PCI enumeration based on the areas reserved for ECAM and IOMEM ranges in the VPCI device tree node. Once a PCI device is assigned to the guest, Xen will map the guest PCI IOMEM region to the real physical IOMEM region, but only for the assigned devices.
>>>>
>>>> As of now we have not modified the existing VPCI code to map the guest PCI IOMEM region to the real physical IOMEM region; we used the existing guest “iomem” config option to map the region.
>>>> For example:
>>>>      Guest reserved IOMEM region: 0x04020000
>>>>      Real physical IOMEM region:  0x50000000
>>>>      IOMEM size:                  128MB
>>>>      iomem config will be:        iomem = ["0x50000,0x8000@0x4020"]
>>>>
>>>> There is no need to map the ECAM space, as Xen already has access to it; Xen will trap ECAM accesses from the guest and perform the read/write on the VPCI bus.
>>>>
>>>> IOMEM access will not be trapped and the guest will directly access the IOMEM region of the assigned device via stage-2 translation.
>>>>
>>>> In the same way, we mapped the assigned devices' IRQs to the guest using the config option below.
>>>>      irqs= [ NUMBER, NUMBER, ...]
>>>>
>>>> Limitation:
>>>> * We need to avoid the “iomem” and “irq” guest config options and instead map the IOMEM regions and IRQs when the device is assigned to the guest through the “pci” config option, as xl creates the domain.
>>>> * Emulated BAR values on the VPCI bus should reflect the mapped IOMEM addresses.
>>>> * The x86 mapping code should be ported to Arm so that the stage-2 translation is adapted when the guest modifies the BAR register values (to map the address requested by the guest for a specific IOMEM region to the address actually contained in the real BAR register of the corresponding device).
>>>>
>>>> # SMMU configuration for guest:
>>>>
>>>> When assigning PCI devices to a guest, the SMMU configuration should be updated to remove access to the hardware domain memory and add a
>>>> configuration giving access to the guest memory with the proper address translation, so that the device can do DMA operations from and to the guest memory only.
>>> There are a few more questions to answer here:
>>>     - When a guest is destroyed, who will be the owner of the PCI devices? Depending on the answer, how do you make sure the device is quiescent?
> 
> I would say the hardware domain if there is one otherwise nobody.

This is risky, in particular if your device is not quiescent (e.g. 
because the reset failed). This would mean your device may be able to 
rewrite part of Dom0's memory.

> On the quiescent part this is definitely something for which I have no answer for now, and any suggestion is more than welcome.

Usually you will have to reset a device, but I am not sure this can 
always work properly. Hence, I think assigning the PCI devices to nobody 
would be more sensible. Note this is what XSA-306 aimed to do on x86 
(not yet implemented on Arm).

> 
>>>     - Is there any memory access that can bypass the IOMMU (e.g. doorbell)?
> 
> This is still something to be investigated as part of the MSI implementation.
> If you have any idea here, feel free to tell us.

My memory is a bit fuzzy here. I am sure that the doorbell can bypass 
the IOMMU on some platforms, but I also vaguely remember that accesses 
to the PCI host controller memory window may also bypass the IOMMU. A 
good read might be [2].

IIRC, I came to the conclusion that we may want to use the host memory 
map in the guest when using the PCI passthrough. But maybe not on all 
the platforms.

Cheers,

>>> [1] https://lists.xenproject.org/archives/html/xen-devel/2017-05/msg02520.html

[2] https://www.spinics.net/lists/kvm/msg140116.html
>>
>> -- 
>> Julien Grall
>>
> 

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Jul 17 15:30:58 2020
Date: Fri, 17 Jul 2020 17:30:43 +0200
From: Roger Pau Monné <roger.pau@citrix.com>
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
Cc: Rahul Singh <Rahul.Singh@arm.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 nd <nd@arm.com>, Julien Grall <julien.grall.oss@gmail.com>
Subject: Re: RFC: PCI devices passthrough on Arm design proposal
Message-ID: <20200717153043.GX7191@Air-de-Roger>
In-Reply-To: <FBE040A9-D088-43D6-8929-FFEDE9DDDE34@arm.com>
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Fri, Jul 17, 2020 at 03:23:57PM +0000, Bertrand Marquis wrote:
> 
> 
> > On 17 Jul 2020, at 17:05, Roger Pau Monné <roger.pau@citrix.com> wrote:
> > 
> > On Fri, Jul 17, 2020 at 02:49:20PM +0000, Bertrand Marquis wrote:
> >> 
> >> 
> >>> On 17 Jul 2020, at 16:41, Roger Pau Monné <roger.pau@citrix.com> wrote:
> >>> 
> >>> On Fri, Jul 17, 2020 at 02:34:55PM +0000, Bertrand Marquis wrote:
> >>>> 
> >>>> 
> >>>>> On 17 Jul 2020, at 16:06, Jan Beulich <jbeulich@suse.com> wrote:
> >>>>> 
> >>>>> On 17.07.2020 15:59, Bertrand Marquis wrote:
> >>>>>> 
> >>>>>> 
> >>>>>>> On 17 Jul 2020, at 15:19, Jan Beulich <jbeulich@suse.com> wrote:
> >>>>>>> 
> >>>>>>> On 17.07.2020 15:14, Bertrand Marquis wrote:
> >>>>>>>>> On 17 Jul 2020, at 10:10, Jan Beulich <jbeulich@suse.com> wrote:
> >>>>>>>>> On 16.07.2020 19:10, Rahul Singh wrote:
> >>>>>>>>>> # Emulated PCI device tree node in libxl:
> >>>>>>>>>> 
> >>>>>>>>>> Libxl is creating a virtual PCI device tree node in the device tree to enable the guest OS to discover the virtual PCI during guest boot. We introduced the new config option [vpci="pci_ecam"] for guests. When this config option is enabled in a guest configuration, a PCI device tree node will be created in the guest device tree.
> >>>>>>>>> 
> >>>>>>>>> I support Stefano's suggestion for this to be an optional thing, i.e.
> >>>>>>>>> there to be no need for it when there are PCI devices assigned to the
> >>>>>>>>> guest anyway. I also wonder about the pci_ prefix here - isn't
> >>>>>>>>> vpci="ecam" as unambiguous?
> >>>>>>>> 
> >>>>>>>> This could be a problem as we need to know that this is required for a guest upfront so that PCI devices can be assigned after using xl. 
> >>>>>>> 
> >>>>>>> I'm afraid I don't understand: When there are no PCI device that get
> >>>>>>> handed to a guest when it gets created, but it is supposed to be able
> >>>>>>> to have some assigned while already running, then we agree the option
> >>>>>>> is needed (afaict). When PCI devices get handed to the guest while it
> >>>>>>> gets constructed, where's the problem to infer this option from the
> >>>>>>> presence of PCI devices in the guest configuration?
> >>>>>> 
> >>>>>> If the user wants to use xl pci-attach to attach in runtime a device to a guest, this guest must have a VPCI bus (even with no devices).
> >>>>>> If we do not have the vpci parameter in the configuration this use case will not work anymore.
> >>>>> 
> >>>>> That's what everyone looks to agree with. Yet why is the parameter needed
> >>>>> when there _are_ PCI devices anyway? That's the "optional" that Stefano
> >>>>> was suggesting, aiui.
> >>>> 
> >>>> I agree in this case the parameter could be optional and only required if not PCI device is assigned directly in the guest configuration.
> >>> 
> >>> Where will the ECAM region(s) appear on the guest physmap?
> >>> 
> >>> Are you going to re-use the same locations as on the physical
> >>> hardware, or will they appear somewhere else?
> >> 
> >> We will add some new definitions for the ECAM regions in the guest physmap declared in xen (include/asm-arm/config.h)
> > 
> > I think I'm confused, but that file doesn't contain anything related
> > to the guest physmap, that's the Xen virtual memory layout on Arm
> > AFAICT?
> > 
> > Does this somehow relate to the physical memory map exposed to guests
> > on Arm?
> 
> Yes it does.
> We will add new definitions there related to VPCI to reserve some areas for the VPCI ECAM and the IOMEM areas.

Yes, that's completely fine and is what's done on x86, but again I
feel like I'm lost here, this is the Xen virtual memory map, how does
this relate to the guest physical memory map?

Roger.


From xen-devel-bounces@lists.xenproject.org Fri Jul 17 15:47:46 2020
Date: Fri, 17 Jul 2020 15:47:25 +0000
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Julien Grall <julien@xen.org>
Cc: Rahul Singh <Rahul.Singh@arm.com>,
 Roger Pau Monné <roger.pau@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 nd <nd@arm.com>, Julien Grall <julien.grall.oss@gmail.com>
Subject: Re: PCI devices passthrough on Arm design proposal
Message-ID: <4E6B793C-2E0A-4999-9842-24CDCDE43903@arm.com>
In-Reply-To: <972c0c81-6595-7c41-baa5-8882f5d1c0ff@xen.org>
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

DQoNCj4gT24gMTcgSnVsIDIwMjAsIGF0IDE3OjI2LCBKdWxpZW4gR3JhbGwgPGp1bGllbkB4ZW4u
b3JnPiB3cm90ZToNCj4gDQo+IA0KPiANCj4gT24gMTcvMDcvMjAyMCAxNTo0NywgQmVydHJhbmQg
TWFycXVpcyB3cm90ZToNCj4+Pj4+ICMgVGl0bGU6DQo+Pj4+PiANCj4+Pj4+IFBDSSBkZXZpY2Vz
IHBhc3N0aHJvdWdoIG9uIEFybSBkZXNpZ24gcHJvcG9zYWwNCj4+Pj4+IA0KPj4+Pj4gIyBQcm9i
bGVtIHN0YXRlbWVudDoNCj4+Pj4+IA0KPj4+Pj4gT24gQVJNIHRoZXJlIGluIG5vIHN1cHBvcnQg
dG8gYXNzaWduIGEgUENJIGRldmljZSB0byBhIGd1ZXN0LiBQQ0kgZGV2aWNlIHBhc3N0aHJvdWdo
IGNhcGFiaWxpdHkgYWxsb3dzIGd1ZXN0cyB0byBoYXZlIGZ1bGwgYWNjZXNzIHRvIHNvbWUgUENJ
IGRldmljZXMuIFBDSSBkZXZpY2UgcGFzc3Rocm91Z2ggYWxsb3dzIFBDSSBkZXZpY2VzIHRvIGFw
cGVhciBhbmQgYmVoYXZlIGFzIGlmIHRoZXkgd2VyZSBwaHlzaWNhbGx5IGF0dGFjaGVkIHRvIHRo
ZSBndWVzdCBvcGVyYXRpbmcgc3lzdGVtIGFuZCBwcm92aWRlIGZ1bGwgaXNvbGF0aW9uIG9mIHRo
ZSBQQ0kgZGV2aWNlcy4NCj4+Pj4+IA0KPj4+Pj4gR29hbCBvZiB0aGlzIHdvcmsgaXMgdG8gYWxz
byBzdXBwb3J0IERvbTBMZXNzIGNvbmZpZ3VyYXRpb24gc28gdGhlIFBDSSBiYWNrZW5kL2Zyb250
ZW5kIGRyaXZlcnMgdXNlZCBvbiB4ODYgc2hhbGwgbm90IGJlIHVzZWQgb24gQXJtLiBJdCB3aWxs
IHVzZSB0aGUgZXhpc3RpbmcgVlBDSSBjb25jZXB0IGZyb20gWDg2IGFuZCBpbXBsZW1lbnQgdGhl
IHZpcnR1YWwgUENJIGJ1cyB0aHJvdWdoIElPIGVtdWxhdGlvbuKAiyBzdWNoIHRoYXQgb25seSBh
c3NpZ25lZCBkZXZpY2VzIGFyZSB2aXNpYmxl4oCLIHRvIHRoZSBndWVzdCBhbmQgZ3Vlc3QgY2Fu
IHVzZSB0aGUgc3RhbmRhcmQgUENJIGRyaXZlci4NCj4+Pj4+IA0KPj4+Pj4gT25seSBEb20wIGFu
ZCBYZW4gd2lsbCBoYXZlIGFjY2VzcyB0byB0aGUgcmVhbCBQQ0kgYnVzLOKAiyBndWVzdCB3aWxs
IGhhdmUgYSBkaXJlY3QgYWNjZXNzIHRvIHRoZSBhc3NpZ25lZCBkZXZpY2UgaXRzZWxm4oCLLiBJ
T01FTSBtZW1vcnkgd2lsbCBiZSBtYXBwZWQgdG8gdGhlIGd1ZXN0IOKAi2FuZCBpbnRlcnJ1cHQg
d2lsbCBiZSByZWRpcmVjdGVkIHRvIHRoZSBndWVzdC4gU01NVSBoYXMgdG8gYmUgY29uZmlndXJl
ZCBjb3JyZWN0bHkgdG8gaGF2ZSBETUEgdHJhbnNhY3Rpb24uDQo+Pj4+PiANCj4+Pj4+ICMjIEN1
cnJlbnQgc3RhdGU64oCvRHJhZnQgdmVyc2lvbg0KPj4+Pj4gDQo+Pj4+PiAjIFByb3Bvc2VyKHMp
OiBSYWh1bCBTaW5naCwgQmVydHJhbmQgTWFycXVpcw0KPj4+Pj4gDQo+Pj4+PiAjIFByb3Bvc2Fs
Og0KPj4+Pj4gDQo+Pj4+PiBUaGlzIHNlY3Rpb24gd2lsbCBkZXNjcmliZSB0aGUgZGlmZmVyZW50
IHN1YnN5c3RlbSB0byBzdXBwb3J0IHRoZSBQQ0kgZGV2aWNlIHBhc3N0aHJvdWdoIGFuZCBob3cg
dGhlc2Ugc3Vic3lzdGVtcyBpbnRlcmFjdCB3aXRoIGVhY2ggb3RoZXIgdG8gYXNzaWduIGEgZGV2
aWNlIHRvIHRoZSBndWVzdC4NCj4+Pj4+IA0KPj4+Pj4gIyBQQ0kgVGVybWlub2xvZ3k6DQo+Pj4+
PiANCj4+Pj4+IEhvc3QgQnJpZGdlOiBIb3N0IGJyaWRnZSBhbGxvd3MgdGhlIFBDSSBkZXZpY2Vz
IHRvIHRhbGsgdG8gdGhlIHJlc3Qgb2YgdGhlIGNvbXB1dGVyLg0KPj4+Pj4gRUNBTTogRUNBTSAo
RW5oYW5jZWQgQ29uZmlndXJhdGlvbiBBY2Nlc3MgTWVjaGFuaXNtKSBpcyBhIG1lY2hhbmlzbSBk
ZXZlbG9wZWQgdG8gYWxsb3cgUENJZSB0byBhY2Nlc3MgY29uZmlndXJhdGlvbiBzcGFjZS4gVGhl
IHNwYWNlIGF2YWlsYWJsZSBwZXIgZnVuY3Rpb24gaXMgNEtCLg0KPj4+Pj4gDQo+Pj4+PiAjIERp
c2NvdmVyaW5nIFBDSSBIb3N0IEJyaWRnZSBpbiBYRU46DQo+Pj4+PiANCj4+Pj4+IEluIG9yZGVy
IHRvIHN1cHBvcnQgdGhlIFBDSSBwYXNzdGhyb3VnaCBYRU4gc2hvdWxkIGJlIGF3YXJlIG9mIGFs
bCB0aGUgUENJIGhvc3QgYnJpZGdlcyBhdmFpbGFibGUgb24gdGhlIHN5c3RlbSBhbmQgc2hvdWxk
IGJlIGFibGUgdG8gYWNjZXNzIHRoZSBQQ0kgY29uZmlndXJhdGlvbiBzcGFjZS4gRUNBTSBjb25m
aWd1cmF0aW9uIGFjY2VzcyBpcyBzdXBwb3J0ZWQgYXMgb2Ygbm93LiBYRU4gZHVyaW5nIGJvb3Qg
d2lsbCByZWFkIHRoZSBQQ0kgZGV2aWNlIHRyZWUgbm9kZSDigJxyZWfigJ0gcHJvcGVydHkgYW5k
IHdpbGwgbWFwIHRoZSBFQ0FNIHNwYWNlIHRvIHRoZSBYRU4gbWVtb3J5IHVzaW5nIHRoZSDigJxp
b3JlbWFwX25vY2FjaGUgKCnigJ0gZnVuY3Rpb24uDQo+Pj4+PiANCj4+Pj4+IElmIHRoZXJlIGFy
>>>>>
>>>>> If there are more than one segment on the system, XEN will read the “linux,pci-domain” property from the device tree node and configure the host bridge segment number accordingly. All the PCI device tree nodes should have the “linux,pci-domain” property so that there will be no conflicts. During hardware domain boot Linux will also use the same “linux,pci-domain” property and assign the domain number to the host bridge.
>>>> AFAICT, "linux,pci-domain" is not a mandatory option and is mostly tied to Linux. What would happen with another OS?
>>>> But I would rather avoid trying to mandate a user to modify his/her device-tree in order to support PCI passthrough. It would be better to consider Xen to assign the number if it is not present.
>> so you would suggest here that if this entry is not present in the configuration, we just assign a value inside xen ? How should this information be passed to the guest ?
>> This number is required for the current hypercall to declare devices to xen so those could end up being different.
>
> I am guessing you mean passing to the hardware domain? If so, Xen is already rewriting the device-tree for the hardware domain. So it would be easy to add more properties.

True this can be done :-)
We will add this to the design.

>
> Now the question is whether other OSes are using "linux,pci-domain". I would suggest to have a look at a *BSD to see how they deal with PCI controllers.

Good idea, we will check how BSD is using the hypercall to declare PCI devices and what value is used there for the domain id.

>
>>>>>
>>>>> When Dom0 tries to access the PCI config space of the device, XEN will find the corresponding host bridge based on segment number and access the corresponding config space assigned to that bridge.
>>>>>
>>>>> Limitation:
>>>>> * Only PCI ECAM configuration space access is supported.
>>>>> * Device tree binding is supported as of now, ACPI is not supported.
>>>> We want to differentiate the high-level design from the actual implementation. While you may not yet implement ACPI, we still need to keep it in mind to avoid incompatibilities in the long term.
>> For sure we do not want to make anything which would not be possible to implement with ACPI.
>> I hope the community will help us during review to find those possible problems if we do not see them.
>
> Have a look at the design document I pointed out in my previous answer. It should contain a lot of information already for ACPI :).

Thanks for the pointer, we will go through that.

>
>>>>> * Need to port the PCI host bridge access code to XEN to access the configuration space (the generic one works but lots of platforms will require some specific code or quirks).
>>>>>
>>>>> # Discovering PCI devices:
>>>>>
>>>>> PCI-PCIe enumeration is a process of detecting devices connected to its host. It is the responsibility of the hardware domain or boot firmware to do the PCI enumeration and configure the BAR, PCI capabilities, and MSI/MSI-X configuration.
>>>>>
>>>>> PCI-PCIe enumeration in XEN is not feasible for the configuration part as it would require a lot of code inside Xen which would require a lot of maintenance. Added to this, many platforms require some quirks in that part of the PCI code which would greatly increase Xen complexity. Once the hardware domain enumerates the device it will communicate it to XEN via the below hypercall.
>>>>>
>>>>> #define PHYSDEVOP_pci_device_add        25
>>>>> struct physdev_pci_device_add {
>>>>>      uint16_t seg;
>>>>>      uint8_t bus;
>>>>>      uint8_t devfn;
>>>>>      uint32_t flags;
>>>>>      struct {
>>>>>          uint8_t bus;
>>>>>          uint8_t devfn;
>>>>>      } physfn;
>>>>>      /*
>>>>>      * Optional parameters array.
>>>>>      * First element ([0]) is PXM domain associated with the device (if
>>>>>      * XEN_PCI_DEV_PXM is set)
>>>>>      */
>>>>>      uint32_t optarr[XEN_FLEX_ARRAY_DIM];
>>>>> };
>>>>>
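For reference, the devfn field above packs the PCI slot (device) and function numbers into one byte, with the slot in the upper 5 bits and the function in the lower 3 — the same encoding Linux exposes through its PCI_DEVFN/PCI_SLOT/PCI_FUNC macros. A sketch (helper names are illustrative):

```c
#include <stdint.h>

/* Pack/unpack the 5-bit slot and 3-bit function into a devfn byte. */
static inline uint8_t pci_devfn(uint8_t slot, uint8_t fn)
{
    return (uint8_t)(((slot & 0x1f) << 3) | (fn & 0x07));
}

static inline uint8_t pci_slot(uint8_t devfn) { return devfn >> 3; }
static inline uint8_t pci_func(uint8_t devfn) { return devfn & 0x07; }
```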
>>>>> As the hypercall argument has the PCI segment number, XEN will access the PCI config space based on this segment number and find the host bridge corresponding to this segment number. At this stage the host bridge is fully initialized so there will be no issue to access the config space.
>>>>>
>>>>> XEN will add the PCI devices to the linked list maintained in XEN using the function pci_add_device(). XEN will be aware of all the PCI devices on the system and all the devices will be added to the hardware domain.
>>>> I understand this is what x86 does. However, may I ask why we would want it for Arm?
>> We wanted to be as near as possible from the x86 implementation and design.
>> But if you have another idea here we are fully open to discuss it.
>
> In the case of platform device passthrough, we are leaving the device unassigned when not used by a guest. This makes sure the device can't do any harm if somehow it wasn't reset correctly.
>
> I would prefer to consider the same approach for PCI devices if there is no plan to use it in dom0. Although, we need to figure out how PCI devices will be reset.

Definitely we cannot rely on a guest to reset the device properly if it is killed and I doubt there is a “standard” way to do a reset of a PCI device that works all the time.
So I agree that leaving it unassigned is better and more secure.
We will modify our design accordingly.

>
>>>>> * Dom0Less implementation will require to have the capacity inside Xen to discover the PCI devices (without depending on Dom0 to declare them to Xen).
>>>>>
>>>>> # Enable the existing x86 virtual PCI support for ARM:
>>>>>
>>>>> The existing VPCI support available for X86 is adapted for Arm. When the device is added to XEN via the hypercall “PHYSDEVOP_pci_device_add”, a VPCI handler for the config space access is added to the PCI device to emulate the PCI devices.
>>>>>
>>>>> A MMIO trap handler for the PCI ECAM space is registered in XEN so that when a guest is trying to access the PCI config space, XEN will trap the access and emulate read/write using the VPCI and not the real PCI hardware.
>>>>>
>>>>> Limitation:
>>>>> * No handler is registered for the MSI configuration.
>>>>> * Only legacy interrupt is supported and tested as of now, MSI is not implemented and tested.
>>>> IIRC, legacy interrupt may be shared between two PCI devices. How do you plan to handle this on Arm?
>> We plan to fix this by adding proper support for MSI in the long term.
>> For the use case where MSI is not supported or not wanted we might have to find a way to forward the hardware interrupt to several guests to emulate some kind of shared interrupt.
>
> Sharing interrupts is a bit of a pain because you couldn't take advantage of the direct EOI in HW and have to be careful if one guest doesn't EOI in a timely manner.
>
> This is something I would rather avoid unless there is a real use case for it.

I would expect that most recent hardware will support MSI and this will not be needed.
When MSI is not used, the only solution would be to enforce that devices assigned to different guests are using different interrupts, which would limit the number of domains being able to use PCI devices on a bus to 4 (if the enumeration can be modified correctly to assign the interrupts properly).
If we all agree that this is an acceptable limitation then we would not need the “interrupt sharing”.
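For context on the limit of 4 above: conventional PCI only has four legacy interrupt lines (INTA#..INTD#), and bridges route a device's pin using the standard swizzle, so devices behind a bus collapse onto those four lines. A sketch of the swizzle (illustrative helper, pin is 0-based here):

```c
/* Standard PCI-to-PCI bridge interrupt swizzle: the parent pin is
 * determined by the device (slot) number and the device's own pin. */
static inline unsigned int intx_swizzle(unsigned int slot, unsigned int pin)
{
    return (slot + pin) % 4;
}
```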

>
>>>>>
>>>>> # Assign the device to the guest:
>>>>>
>>>>> Assigning the PCI device from the hardware domain to the guest is done using the below guest config option. When the xl tool creates the domain, PCI devices will be assigned to the guest VPCI bus.
>>>> Above, you suggest that device will be assigned to the hardware domain at boot. I am assuming this also means that all the interrupts/MMIOs will be routed/mapped, is that correct?
>>>> If so, can you provide a rough sketch how assign/deassign will work?
>> Yes this is correct. We will improve the design and add a more detailed description on that in the next version.
>> To make it short we remove the resources from the hardware domain first and assign them to the guest the device has been assigned to. There are still some parts in there where we are still in investigation mode.
>
> Hmmm... Does this mean you modified the code to allow an interrupt to be removed while the domain is still running?

For now we are not doing this automatically so this is done by explicitly assigning an interrupt to the guest in the configuration of the guest.
So we did not modify the code for that so far as this is part of the implementation using workarounds right now.

>
>>>>>     pci=[ "PCI_SPEC_STRING", "PCI_SPEC_STRING", ...]
>>>>>
>>>>> Guest will be only able to access the assigned devices and see the bridges. Guest will not be able to access or see the devices that are not assigned to it.
>>>>>
>>>>> Limitation:
>>>>> * As of now all the bridges in the PCI bus are seen by the guest on the VPCI bus.
>>>> Why do you want to expose all the bridges to a guest? Does this mean that the BDF should always match between the host and the guest?
>> That’s not really something that we wanted but this was the easiest way to go.
>> As said in a previous mail we could build a VPCI bus with a completely different topology but I am not sure of the advantages this would have.
>> Do you see some reason to do this ?
>
> Yes :):
>  1) If a platform has two host controllers (IIRC Thunder-X has it) then you would need to expose two host controllers to your guest. I think this is undesirable if your guest is only using a couple of PCI devices on each host controller.
>  2) In the case of migration (live or not), you may want to use a different PCI card on the target platform. So your BDF and bridges may be different.
>
> Therefore I think the virtual topology can be beneficial.

I would definitely see a big advantage to have only one VPCI bus per guest and put all devices in there independently of the hardware domain the device is on.
But this will probably make the VPCI BARs value computation a bit more complex as we might end up with no space on the guest physical map for it.
This might make the implementation a lot more complex.

>
>>>>>
>>>>> # Emulated PCI device tree node in libxl:
>>>>>
>>>>> Libxl is creating a virtual PCI device tree node in the device tree to enable the guest OS to discover the virtual PCI during guest boot. We introduced the new config option [vpci="pci_ecam"] for guests. When this config option is enabled in a guest configuration, a PCI device tree node will be created in the guest device tree.
>>>>>
>>>>> A new area has been reserved in the arm guest physical map at which the VPCI bus is declared in the device tree (reg and ranges parameters of the node). A trap handler for the PCI ECAM access from the guest has been registered at the defined address and redirects requests to the VPCI driver in Xen.
>>>>>
>>>>> Limitation:
>>>>> * Only one PCI device tree node is supported as of now.
>>>>>
>>>>> BAR value and IOMEM mapping:
>>>>>
>>>>> Linux guest will do the PCI enumeration based on the area reserved for ECAM and IOMEM ranges in the VPCI device tree node. Once a PCI device is assigned to the guest, XEN will map the guest PCI IOMEM region to the real physical IOMEM region only for the assigned devices.
>>>>>
>>>>> As of now we have not modified the existing VPCI code to map the guest PCI IOMEM region to the real physical IOMEM region. We used the existing guest “iomem” config option to map the region.
>>>>> For example:
>>>>>     Guest reserved IOMEM region: 0x04020000
>>>>>     Real physical IOMEM region:  0x50000000
>>>>>     IOMEM size:                  128MB
>>>>>     iomem config will be:        iomem = ["0x50000,0x8000@0x4020"]
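The iomem option takes page frame numbers rather than addresses, in the form NUM_PFN,NUM_PAGES@GFN. With 4KB pages the values of the example work out as in this sketch (helper names are illustrative, not actual Xen or libxl code):

```c
#include <stdint.h>

/* With 4KB pages, addresses and sizes convert to page frames by a
 * 12-bit right shift. */
#define VPCI_PAGE_SHIFT 12

static inline uint64_t addr_to_pfn(uint64_t addr)
{
    return addr >> VPCI_PAGE_SHIFT;
}

static inline uint64_t size_to_pages(uint64_t size)
{
    return size >> VPCI_PAGE_SHIFT;
}
```

0x50000000 >> 12 = 0x50000, 128MB is 0x8000 pages, and the guest address 0x04020000 >> 12 = 0x4020, giving the string "0x50000,0x8000@0x4020".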
>>>>>
>>>>> There is no need to map the ECAM space as XEN already has access to the ECAM space and XEN will trap ECAM accesses from the guest and will perform read/write on the VPCI bus.
>>>>>
>>>>> IOMEM access will not be trapped and the guest will directly access the IOMEM region of the assigned device via stage-2 translation.
>>>>>
>>>>> In the same way, we mapped the assigned devices' IRQ to the guest using the below config options.
>>>>>     irqs= [ NUMBER, NUMBER, ...]
>>>>>
>>>>> Limitation:
>>>>> * Need to avoid the “iomem” and “irq” guest config options and map the IOMEM region and IRQ at the same time when the device is assigned to the guest using the “pci” guest config options when xl creates the domain.
>>>>> * Emulated BAR values on the VPCI bus should reflect the IOMEM mapped address.
>>>>> * X86 mapping code should be ported on Arm so that the stage-2 translation is adapted when the guest is doing a modification of the BAR registers values (to map the address requested by the guest for a specific IOMEM to the address actually contained in the real BAR register of the corresponding device).
>>>>>
>>>>> # SMMU configuration for guest:
>>>>>
>>>>> When assigning PCI devices to a guest, the SMMU configuration should be updated to remove access to the hardware domain memory and add
>>>>> configuration to have access to the guest memory with the proper address translation so that the device can do DMA operations from and to the guest memory only.
>>>> There are a few more questions to answer here:
>>>>    - When a guest is destroyed, who will be the owner of the PCI devices? Depending on the answer, how do you make sure the device is quiescent?
>> I would say the hardware domain if there is one otherwise nobody.
>
> This is risky, in particular if your device is not quiescent (e.g because the reset failed). This would mean your device may be able to rewrite part of Dom0.

Agree. We should not reassign the device to Dom0 and always leave it unassigned.
We will modify the design accordingly.

>
>> On the quiescent part this is definitely something for which I have no answer for now and any suggestion is more than welcome.
>
> Usually you will have to reset a device, but I am not sure this can always work properly. Hence, I think assigning the PCI devices to nobody would be more sensible. Note this is what XSA-306 aimed to do on x86 (not yet implemented on Arm).

Ack

>
>>>>    - Is there any memory access that can bypass the IOMMU (e.g doorbell)?
>> This is still something to be investigated as part of the MSI implementation.
>> If you have any idea here, feel free to tell us.
>
> My memory is a bit fuzzy here. I am sure that the doorbell can bypass the IOMMU on some platform, but I also vaguely remember that accesses to the PCI host controller memory window may also bypass the IOMMU. A good reading might be [2].
>
> IIRC, I came to the conclusion that we may want to use the host memory map in the guest when using PCI passthrough. But maybe not on all the platforms.

Definitely a lot of this would be easier if we could use 1:1 mapping.
We will keep that in mind when we start to investigate the MSI part.

Cheers
Bertrand

>
> Cheers,
>
>>>> [1] https://lists.xenproject.org/archives/html/xen-devel/2017-05/msg02520.html
>
> [2] https://www.spinics.net/lists/kvm/msg140116.html
>>>
>>> --
>>> Julien Grall
>>>
>
> --
> Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Jul 17 15:52:09 2020
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Roger Pau Monné <roger.pau@citrix.com>
Subject: Re: RFC: PCI devices passthrough on Arm design proposal
Date: Fri, 17 Jul 2020 15:51:47 +0000
Message-ID: <C5B2BDD5-E504-4871-8542-5BA8C051F699@arm.com>
References: <a50c714c-1642-0354-3f19-5a6f7278d8aa@suse.com>
 <28899FEF-9DA7-4513-8283-1AC5EFFC6E92@arm.com>
 <1dd5db2d-98c7-7738-c3d4-d3f098dfe674@suse.com>
 <F09F9354-EC9B-4D76-809B-A25AF4F7D863@arm.com>
 <a5007a6c-bdfe-04d4-8107-53cb222b95e8@suse.com>
 <DA19A9EC-A828-4EBC-BCAA-D1D9E4F222BB@arm.com>
 <20200717144139.GU7191@Air-de-Roger>
 <90AE8DAB-2223-46DC-A263-D78365E5435E@arm.com>
 <20200717150507.GW7191@Air-de-Roger>
 <FBE040A9-D088-43D6-8929-FFEDE9DDDE34@arm.com>
 <20200717153043.GX7191@Air-de-Roger>
In-Reply-To: <20200717153043.GX7191@Air-de-Roger>
Cc: Rahul Singh <Rahul.Singh@arm.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 nd <nd@arm.com>, Julien Grall <julien.grall.oss@gmail.com>

> On 17 Jul 2020, at 17:30, Roger Pau Monné <roger.pau@citrix.com> wrote:
> 
> On Fri, Jul 17, 2020 at 03:23:57PM +0000, Bertrand Marquis wrote:
>> 
>> 
>>> On 17 Jul 2020, at 17:05, Roger Pau Monné <roger.pau@citrix.com> wrote:
>>> 
>>> On Fri, Jul 17, 2020 at 02:49:20PM +0000, Bertrand Marquis wrote:
>>>> 
>>>> 
>>>>> On 17 Jul 2020, at 16:41, Roger Pau Monné <roger.pau@citrix.com> wrote:
>>>>> 
>>>>> On Fri, Jul 17, 2020 at 02:34:55PM +0000, Bertrand Marquis wrote:
>>>>>> 
>>>>>> 
>>>>>>> On 17 Jul 2020, at 16:06, Jan Beulich <jbeulich@suse.com> wrote:
>>>>>>> 
>>>>>>> On 17.07.2020 15:59, Bertrand Marquis wrote:
>>>>>>>> 
>>>>>>>> 
>>>>>>>>> On 17 Jul 2020, at 15:19, Jan Beulich <jbeulich@suse.com> wrote:
>>>>>>>>> 
>>>>>>>>> On 17.07.2020 15:14, Bertrand Marquis wrote:
>>>>>>>>>>> On 17 Jul 2020, at 10:10, Jan Beulich <jbeulich@suse.com> wrote:
>>>>>>>>>>> On 16.07.2020 19:10, Rahul Singh wrote:
>>>>>>>>>>>> # Emulated PCI device tree node in libxl:
>>>>>>>>>>>> 
>>>>>>>>>>>> Libxl is creating a virtual PCI device tree node in the device tree to enable the guest OS to discover the virtual PCI during guest boot. We introduced the new config option [vpci="pci_ecam"] for guests. When this config option is enabled in a guest configuration, a PCI device tree node will be created in the guest device tree.
>>>>>>>>>>> 
>>>>>>>>>>> I support Stefano's suggestion for this to be an optional thing, i.e.
>>>>>>>>>>> there to be no need for it when there are PCI devices assigned to the
>>>>>>>>>>> guest anyway. I also wonder about the pci_ prefix here - isn't
>>>>>>>>>>> vpci="ecam" as unambiguous?
>>>>>>>>>> 
>>>>>>>>>> This could be a problem as we need to know that this is required for a guest upfront so that PCI devices can be assigned after using xl.
>>>>>>>>> 
>>>>>>>>> I'm afraid I don't understand: When there are no PCI device that get
>>>>>>>>> handed to a guest when it gets created, but it is supposed to be able
>>>>>>>>> to have some assigned while already running, then we agree the option
>>>>>>>>> is needed (afaict). When PCI devices get handed to the guest while it
>>>>>>>>> gets constructed, where's the problem to infer this option from the
>>>>>>>>> presence of PCI devices in the guest configuration?
>>>>>>>> 
>>>>>>>> If the user wants to use xl pci-attach to attach in runtime a device to a guest, this guest must have a VPCI bus (even with no devices).
>>>>>>>> If we do not have the vpci parameter in the configuration this use case will not work anymore.
>>>>>>> 
>>>>>>> That's what everyone looks to agree with. Yet why is the parameter needed
>>>>>>> when there _are_ PCI devices anyway? That's the "optional" that Stefano
>>>>>>> was suggesting, aiui.
>>>>>> 
>>>>>> I agree in this case the parameter could be optional and only required if not PCI device is assigned directly in the guest configuration.
>>>>> 
>>>>> Where will the ECAM region(s) appear on the guest physmap?
>>>>> 
>>>>> Are you going to re-use the same locations as on the physical
>>>>> hardware, or will they appear somewhere else?
>>>> 
>>>> We will add some new definitions for the ECAM regions in the guest physmap declared in xen (include/asm-arm/config.h)
>>> 
>>> I think I'm confused, but that file doesn't contain anything related
>>> to the guest physmap, that's the Xen virtual memory layout on Arm
>>> AFAICT?
>>> 
>>> Does this somehow relate to the physical memory map exposed to guests
>>> on Arm?
>> 
>> Yes it does.
>> We will add new definitions there related to VPCI to reserve some areas for the VPCI ECAM and the IOMEM areas.
> 
> Yes, that's completely fine and is what's done on x86, but again I
> feel like I'm lost here, this is the Xen virtual memory map, how does
> this relate to the guest physical memory map?

Sorry my bad, we will add values in include/public/arch-arm.h, wrong header :-)

Bertrand


> 
> Roger.


From xen-devel-bounces@lists.xenproject.org Fri Jul 17 15:55:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jul 2020 15:55:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jwShz-0007dz-BK; Fri, 17 Jul 2020 15:55:35 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Lz8P=A4=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jwShx-0007dh-Qt
 for xen-devel@lists.xenproject.org; Fri, 17 Jul 2020 15:55:33 +0000
X-Inumbo-ID: f2065ab2-c845-11ea-9634-12813bfff9fa
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f2065ab2-c845-11ea-9634-12813bfff9fa;
 Fri, 17 Jul 2020 15:55:33 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595001333;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:content-transfer-encoding:in-reply-to;
 bh=kKIOm/tLpKkO/RVdK+qmAj3dALPw+NZ9MyRrdA1DLO0=;
 b=Ri38g88KCU/RENntFXHbyr5aKEXXRSKd+sgccQn134GCcEGS7gm+Eu+F
 Dbwz5+/kdSBTiwTCmBHiIIBeAYQN47ENIUfzeV9SuRkUYVXyQT4l21WqR
 MtJ1ht1mce+C/7Br0Be5lZwnq6aWnRMOEfyJSmp5tfV2awJyBvusfXS8P s=;
Authentication-Results: esa3.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: Uk5dgXX6t9+WlglZ22k8Jzk7xLoljM4XJQahP791WhUUnZxnRm/ENxJaYiiHRBXXD0cdaq1Urv
 IMexfZZ9dZNymctlfxs1XqiTE3XhLVDkGDXfxDeL/B0d6gHD2VuDgFPMa0e0P6LIxVpFf0lAMk
 FM1X+Swji/dVQJ9ZtqpIEPwlhwgYiqlHRR8Ox4syoD9khzB0KEseGMMMRoih55p53QNGw+xZc0
 CyuupfvhKJSX9TWFAOdOcz4aCN/JCE8Cag6t65e/ivMrMX00oQLdSiAcJUSDJJ1qAsI8ST1YXZ
 r80=
X-SBRS: 2.7
X-MesageID: 22631661
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,362,1589256000"; d="scan'208";a="22631661"
Date: Fri, 17 Jul 2020 17:55:25 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
Subject: Re: RFC: PCI devices passthrough on Arm design proposal
Message-ID: <20200717155525.GY7191@Air-de-Roger>
References: <3F6E40FB-79C5-4AE8-81CA-E16CA37BB298@arm.com>
 <BD475825-10F6-4538-8294-931E370A602C@arm.com>
 <E9CBAA57-5EF3-47F9-8A40-F5D7816DB2A4@arm.com>
 <20200717111644.GS7191@Air-de-Roger>
 <3B8A1B9D-A101-4937-AC42-4F62BE7E677C@arm.com>
 <20200717143120.GT7191@Air-de-Roger>
 <8AF78FF1-C389-44D8-896B-B95C1A0560E2@arm.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <8AF78FF1-C389-44D8-896B-B95C1A0560E2@arm.com>
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 nd <nd@arm.com>, Rahul Singh <Rahul.Singh@arm.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien.grall.oss@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Fri, Jul 17, 2020 at 03:21:57PM +0000, Bertrand Marquis wrote:
> > On 17 Jul 2020, at 16:31, Roger Pau Monné <roger.pau@citrix.com> wrote:
> > On Fri, Jul 17, 2020 at 01:22:19PM +0000, Bertrand Marquis wrote:
> >>> On 17 Jul 2020, at 13:16, Roger Pau Monné <roger.pau@citrix.com> wrote:
> >>>> * The ACS capability is disabled for ARM as of now, as after enabling it
> >>>> devices are not accessible.
> >>>> * A Dom0less implementation will require Xen to have the ability
> >>>> to discover the PCI devices itself (without depending on Dom0 to declare them
> >>>> to Xen).
> >>> 
> >>> I assume the firmware will properly initialize the host bridge and
> >>> configure the resources for each device, so that Xen just has to walk
> >>> the PCI space and find the devices.
> >>> 
> >>> TBH that would be my preferred method, because then you can get rid of
> >>> the hypercall.
> >>> 
> >>> Is there anyway for Xen to know whether the host bridge is properly
> >>> setup and thus the PCI bus can be scanned?
> >>> 
> >>> That way Arm could do something similar to x86, where Xen will scan
> >>> the bus and discover devices, but you could still provide the
> >>> hypercall in case the bus cannot be scanned by Xen (because it hasn't
> >>> been setup).
> >> 
> >> That is definitely the idea, to rely by default on a firmware doing this properly.
> >> I am not sure whether a proper enumeration could be detected reliably in all
> >> cases, so it would make sense to rely on Dom0 enumeration when a Xen
> >> command line argument is passed, as explained in one of Rahul’s mails.
> > 
> > I assume Linux somehow knows when it needs to initialize the PCI root
> > complex before attempting to access the bus. Would it be possible to
> > add this logic to Xen so it can figure out on its own whether it's
> > safe to scan the PCI bus or whether it needs to wait for the hardware
> > domain to report the devices present?
> 
> That might be possible to do, but it will anyway require a command line argument
> to be able to force Xen to let the hardware domain do the initialization in
> case Xen's detection does not work properly.
> In the case where there is a Dom0, I would rather expect that we let it do the
> initialization all the time, unless the user states via a command line argument
> that the current one is correct and shall be used.

FTR, on x86 we let dom0 enumerate and probe the PCI devices as it
feels like, but vPCI traps have already been set on all the detected
devices, and vPCI already supports letting dom0 size the BARs, or even
change their position (theoretically; I haven't seen a dom0 change the
position of the BARs yet).

So on Arm you could also let dom0 do all of this, the question is
whether vPCI traps could be set earlier (when dom0 is created) if the
PCI bus has been initialized and can be scanned.

I have no idea however how bare metal Linux on Arm figures out the
state of the PCI bus, or if it's something that's passed on the DT, or
signaled somehow from the firmware/bootloader.

> >>> This should be limited to read-only accesses in order to be safe.
> >>> 
> >>> Emulating a PCI bridge in Xen using vPCI shouldn't be that
> >>> complicated, so you could likely replace the real bridges with
> >>> emulated ones. Or even provide a fake topology to the guest using an
> >>> emulated bridge.
> >> 
> >> Just showing all bridges and keeping the hardware topology is the simplest
> >> solution for now. But maybe showing a different topology and only fake
> >> bridges could make sense and be implemented in the future.
> > 
> > Ack. I've also heard rumors of Xen on Arm people being very interested
> > in VirtIO support, in which case you might expose both fully emulated
> > VirtIO devices and PCI passthrough devices on the PCI bus, so it would
> > be good to spend some time thinking how those will fit together.
> > 
> > Will you allocate a separate segment unused by hardware to expose the
> > fully emulated PCI devices (VirtIO)?
> > 
> > Will OSes support having several segments?
> > 
> > If not you likely need to have emulated bridges so that you can adjust
> > the bridge window accordingly to fit the passthrough and the emulated
> > MMIO space, and likely be able to expose passthrough devices using a
> > different topology than the host one.
> 
> Honestly this is not something we considered. I was more thinking that
> this use case would be handled by creating another VPCI bus dedicated
> to those kinds of devices instead of mixing physical and virtual devices.

Just mentioning it; knowing your plans for when guests might also have fully
emulated devices on the PCI bus would be relevant, I think.

Anyway, I don't think it's something mandatory here, as from a guest
PoV how we expose PCI devices shouldn't matter that much, as long as
it's done in a spec compliant way.

So you can start with this approach if it's easier, I just wanted to
make sure you have in mind that at some point Arm guests might also
require fully emulated PCI devices so that you don't paint yourselves
in a corner.

> > 
> >>> 
> >>>> 
> >>>> # Emulated PCI device tree node in libxl:
> >>>> 
> >>>> Libxl is creating a virtual PCI device tree node in the device tree
> >>>> to enable the guest OS to discover the virtual PCI during guest
> >>>> boot. We introduced the new config option [vpci="pci_ecam"] for
> >>>> guests. When this config option is enabled in a guest configuration,
> >>>> a PCI device tree node will be created in the guest device tree.
> >>>> 
> >>>> A new area has been reserved in the arm guest physical map at which
> >>>> the VPCI bus is declared in the device tree (reg and ranges
> >>>> parameters of the node). A trap handler for the PCI ECAM access from
> >>>> guest has been registered at the defined address and redirects
> >>>> requests to the VPCI driver in Xen.
> >>> 
> >>> Can't you deduce the requirement of such DT node based on the presence
> >>> of a 'pci=' option in the same config file?
> >>> 
> >>> Also I wouldn't discard that in the future you might want to use
> >>> different emulators for different devices, so it might be helpful to
> >>> introduce something like:
> >>> 
> >>> pci = [ '08:00.0,backend=vpci', '09:00.0,backend=xenpt', '0a:00.0,backend=qemu', ... ]
> >>> 
> >>> For the time being Arm will require backend=vpci for all the passed
> >>> through devices, but I wouldn't rule out this changing in the future.
> >> 
> >> We need it for the case where no device is declared in the config file and the user
> >> wants to add devices using xl later. In this case we must have the DT node for it
> >> to work. 
> > 
> > There's a passthrough xl.cfg option for that already, so that if you
> > don't want to add any PCI passthrough devices at creation time but
> > rather hotplug them you can set:
> > 
> > passthrough=enabled
> > 
> > And it should set up the domain to be prepared to support hot
> > passthrough, including the IOMMU [0].
> 
> Isn’t this option covering more than PCI passthrough?
> 
> Lots of Arm platforms do not have a PCI bus at all, so for those,
> creating a VPCI bus would be pointless. But you might need to
> activate this to pass through devices which are not on the PCI bus.

Well, you can check whether the host has PCI support and decide
whether to attach a virtual PCI bus to the guest or not?

Setting passthrough=enabled should prepare the guest to handle
passthrough, in whatever form is supported by the host IMO.

> >>>> Limitation:
> >>>> * Need to avoid the “iomem” and “irq” guest config
> >>>> options and map the IOMEM region and IRQ at the same time when
> >>>> device is assigned to the guest using the “pci” guest config options
> >>>> when xl creates the domain.
> >>>> * Emulated BAR values on the VPCI bus should reflect the IOMEM mapped
> >>>> address.
> >>> 
> >>> It was my understanding that you would identity map the BAR into the
> >>> domU stage-2 translation, and that changes by the guest won't be
> >>> allowed.
> >> 
> >> In fact this is not possible to do, and we have to remap at a different address,
> >> because the guest physical memory map is fixed by Xen on Arm, so we must follow
> >> the same design. Otherwise this would only work if the BARs are pointing to an
> >> unused address, and on Juno for example this conflicts with the guest
> >> RAM address.
> > 
> > This was not clear from my reading of the document, could you please
> > clarify on the next version that the guest physical memory map is
> > always the same, and that BARs from PCI devices cannot be identity
> > mapped to the stage-2 translation and instead are relocated somewhere
> > else?
> 
> We will.
> 
> > 
> > I'm then confused about what you do with bridge windows, do you also
> > trap and adjust them to report a different IOMEM region?
> 
> Yes, this is what we will have to do, so that the regions reflect the VPCI mappings
> and not the hardware ones.
> 
> > 
> > Above you mentioned that read-only access was given to bridge
> > registers, but I guess some are also emulated in order to report
> > matching IOMEM regions?
> 
> Yes, that's correct. We will clarify this in the next version.

If you have to go this route for domUs, it might be easier to just
fake a PCI host bridge and place all the devices there, even with
different SBDF addresses. Having to replicate all the bridges on the
physical PCI bus and fixing up their MMIO windows seems much more
complicated than just faking/emulating a single bridge?

Roger.
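For reference, the hot-passthrough flow discussed above (a domU created with no PCI devices, but prepared so devices can be attached later with xl) might look roughly like this; a sketch only, and the BDF and domain name below are made up:

```
# domU config: no "pci = [ ... ]" list at creation time, but ask the
# toolstack to prepare the domain (IOMMU contexts, etc.) for hotplug:
passthrough = "enabled"
```

Then at runtime something like `xl pci-assignable-add 08:00.0` followed by `xl pci-attach mydomu 08:00.0` would hand the device over, which is the point where the guest-side virtual PCI bus already has to exist.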


From xen-devel-bounces@lists.xenproject.org Fri Jul 17 16:06:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jul 2020 16:06:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jwSs4-0000k9-IF; Fri, 17 Jul 2020 16:06:00 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Lz8P=A4=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jwSs3-0000k4-IJ
 for xen-devel@lists.xenproject.org; Fri, 17 Jul 2020 16:05:59 +0000
X-Inumbo-ID: 66902204-c847-11ea-b7bb-bc764e2007e4
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 66902204-c847-11ea-b7bb-bc764e2007e4;
 Fri, 17 Jul 2020 16:05:58 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595001958;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:content-transfer-encoding:in-reply-to;
 bh=qPLkfG26nVJQE1VRNe7hHilAbOaWUvxJtkEm7+xAYT8=;
 b=WmO9bsPaRfY9b3hWXw5zu8nqHrzqKdd2QdMQQPKdamVZj0DP9qiQ3wdZ
 rPmikRiZ5WVb/AK+ww4A4yEvLDBK5DTqc7ED6Kg+g3gMVxb/cDsqp9SYL
 ppUlqCAhZoNzvUN4fZDsG/y6MoMOgvEhfHUqLODM4HGQ+EwFX/8itY5f7 k=;
Authentication-Results: esa4.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: EihqMnwZ2rXBRismD7B9rCDGE/8NctwbucE9w3ZE6Mr5PGYHbmVGrNM7DJ7dJQft/dq0TRPFfP
 bi2VKoGoRnpXc0tep4g+0cCvjg6mDCNCUE9omVRW24jnCrJZ7cQGMFOb9gWRh6XywOasyDWze0
 gzWetkEf6xHiKuAeARazKykf4ueBo4Brs39eqIPfZ/pnhorjDjspT3f5qCwX7FH2/OcfXEHR5X
 gmP7G2RzfgnDPLZs6cCNaTxQIGYzKB1MHtOv3xYoxv7fQca6nye3mcrFnzUbbHULvLklNJfg7m
 EuY=
X-SBRS: 2.7
X-MesageID: 23484045
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,362,1589256000"; d="scan'208";a="23484045"
Date: Fri, 17 Jul 2020 18:05:50 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
Subject: Re: PCI devices passthrough on Arm design proposal
Message-ID: <20200717160550.GZ7191@Air-de-Roger>
References: <3F6E40FB-79C5-4AE8-81CA-E16CA37BB298@arm.com>
 <BD475825-10F6-4538-8294-931E370A602C@arm.com>
 <8ac91a1b-e6b3-0f2b-0f23-d7aff100936d@xen.org>
 <c7d5a084-8111-9f43-57e1-bcf2bd822f5b@xen.org>
 <865D5A77-85D4-4A88-A228-DDB70BDB3691@arm.com>
 <972c0c81-6595-7c41-baa5-8882f5d1c0ff@xen.org>
 <4E6B793C-2E0A-4999-9842-24CDCDE43903@arm.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <4E6B793C-2E0A-4999-9842-24CDCDE43903@arm.com>
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Rahul Singh <Rahul.Singh@arm.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 nd <nd@arm.com>, Julien Grall <julien.grall.oss@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Fri, Jul 17, 2020 at 03:47:25PM +0000, Bertrand Marquis wrote:
> > On 17 Jul 2020, at 17:26, Julien Grall <julien@xen.org> wrote:
> > On 17/07/2020 15:47, Bertrand Marquis wrote:
> >>>>> * A Dom0less implementation will require Xen to have the ability to discover the PCI devices itself (without depending on Dom0 to declare them to Xen).
> >>>>> 
> >>>>> # Enable the existing x86 virtual PCI support for ARM:
> >>>>> 
> >>>>> The existing VPCI support available for X86 is adapted for Arm. When the device is added to XEN via the hyper call “PHYSDEVOP_pci_device_add”, VPCI handler for the config space access is added to the PCI device to emulate the PCI devices.
> >>>>> 
> >>>>> A MMIO trap handler for the PCI ECAM space is registered in XEN so that when guest is trying to access the PCI config space, XEN will trap the access and emulate read/write using the VPCI and not the real PCI hardware.
> >>>>> 
> >>>>> Limitation:
> >>>>> * No handler is registered for the MSI configuration.
> >>>>> * Only legacy interrupts are supported and tested as of now; MSI is not yet implemented or tested.
> >>>> IIRC, legacy interrupt may be shared between two PCI devices. How do you plan to handle this on Arm?
> >> We plan to fix this by adding proper support for MSI in the long term.
> >> For the use case where MSI is not supported or not wanted we might have to find a way to forward the hardware interrupt to several guests to emulate some kind of shared interrupt.
> > 
> > Sharing interrupts is a bit of a pain because you can't take advantage of the direct EOI in HW and have to be careful if one guest doesn't EOI in a timely manner.
> > 
> > This is something I would rather avoid unless there is a real use case for it.
> 
> I would expect that most recent hardware will support MSI and this
> will not be needed.

Well, PCI Express mandates MSI support, so while this is just a spec,
I would expect most (if not all) devices to support MSI (or MSI-X), as
Arm platforms haven't implemented legacy PCI anyway.

> When MSI is not used, the only solution would be to enforce that
> devices assigned to different guests use different interrupts,
> which would limit the number of domains able to use PCI
> devices on a bus to 4 (if the enumeration can be modified correctly
> to assign the interrupts properly).
> 
> If we all agree that this is an acceptable limitation then we would
> not need the “interrupt sharing”.

It might be easier to start by just supporting devices that have MSI
(or MSI-X) and then move to legacy interrupts if required?

You should have most of the pieces you require already implemented
since that's what x86 uses, and hence could reuse almost all of it?

IIRC Julien even said that Arm was likely to require far fewer traps
than x86 for accesses to MSI and MSI-X, since you could allow untrusted
guests to write directly to the registers, as there's another piece of
hardware that would already translate the interrupts?

I think it's fine to use this workaround while you don't have MSI
support in order to start testing and upstreaming stuff, but maybe
that shouldn't be committed?

Roger.


From xen-devel-bounces@lists.xenproject.org Fri Jul 17 16:08:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jul 2020 16:08:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jwSuj-0000tL-4o; Fri, 17 Jul 2020 16:08:45 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Lz8P=A4=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jwSuh-0000tG-BX
 for xen-devel@lists.xenproject.org; Fri, 17 Jul 2020 16:08:43 +0000
X-Inumbo-ID: c84bf6ee-c847-11ea-b7bb-bc764e2007e4
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c84bf6ee-c847-11ea-b7bb-bc764e2007e4;
 Fri, 17 Jul 2020 16:08:42 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595002121;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:content-transfer-encoding:in-reply-to;
 bh=4ugp0nuCggopPu9xvsiebu27ZkSjhyh/jUgO1t4HHRg=;
 b=G9Wq0IO00k///GVG63J7FvhPhGEtNwhLvOIZJnvF6DepExXXjrPgSoBP
 rrEFXhZlgQjodW+0EpET3Q2PL5tqXE8JH9RUooHyl7bWTrXTAhOzdL+gR
 LQpb/Mr0JeWP7pdlQceyYjkjXrAeq0RWy+OSaflbFNin3yx+rTWuh2WZs E=;
Authentication-Results: esa6.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: LasNCKAHIc0ypFYjyIbg/sA4ZD7GfwzcX70V8O1SPBdRPEKZzAPp2Z+LMJAsNAM82Nth268l9V
 Dj19Kv9ClUt9duPqwNgc7CXvyMNwR14OTkslEu/mz7U5fhKtU7iun1pBb4ri0eRp4gA27soIeH
 d8idJhOwXL2cCLB15WHMUj7Ybsv8zP1laeurgntgzL3tO93xBtPh91LVBirPMIQhNMoTeaFn/l
 O7lBj6InR6k0wpVqedZ/kExOkxIJiIv7L8wHzVPFag+PwpTp08wDUmcJkCACfH1CkIVJ3rtnj+
 jWY=
X-SBRS: 2.7
X-MesageID: 22960547
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,362,1589256000"; d="scan'208";a="22960547"
Date: Fri, 17 Jul 2020 18:08:34 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
Subject: Re: RFC: PCI devices passthrough on Arm design proposal
Message-ID: <20200717160834.GA7191@Air-de-Roger>
References: <1dd5db2d-98c7-7738-c3d4-d3f098dfe674@suse.com>
 <F09F9354-EC9B-4D76-809B-A25AF4F7D863@arm.com>
 <a5007a6c-bdfe-04d4-8107-53cb222b95e8@suse.com>
 <DA19A9EC-A828-4EBC-BCAA-D1D9E4F222BB@arm.com>
 <20200717144139.GU7191@Air-de-Roger>
 <90AE8DAB-2223-46DC-A263-D78365E5435E@arm.com>
 <20200717150507.GW7191@Air-de-Roger>
 <FBE040A9-D088-43D6-8929-FFEDE9DDDE34@arm.com>
 <20200717153043.GX7191@Air-de-Roger>
 <C5B2BDD5-E504-4871-8542-5BA8C051F699@arm.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <C5B2BDD5-E504-4871-8542-5BA8C051F699@arm.com>
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Rahul Singh <Rahul.Singh@arm.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 nd <nd@arm.com>, Julien Grall <julien.grall.oss@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Fri, Jul 17, 2020 at 03:51:47PM +0000, Bertrand Marquis wrote:
> 
> 
> > On 17 Jul 2020, at 17:30, Roger Pau Monné <roger.pau@citrix.com> wrote:
> > 
> > On Fri, Jul 17, 2020 at 03:23:57PM +0000, Bertrand Marquis wrote:
> >> 
> >> 
> >>> On 17 Jul 2020, at 17:05, Roger Pau Monné <roger.pau@citrix.com> wrote:
> >>> 
> >>> On Fri, Jul 17, 2020 at 02:49:20PM +0000, Bertrand Marquis wrote:
> >>>> 
> >>>> 
> >>>>> On 17 Jul 2020, at 16:41, Roger Pau Monné <roger.pau@citrix.com> wrote:
> >>>>> 
> >>>>> On Fri, Jul 17, 2020 at 02:34:55PM +0000, Bertrand Marquis wrote:
> >>>>>> 
> >>>>>> 
> >>>>>>> On 17 Jul 2020, at 16:06, Jan Beulich <jbeulich@suse.com> wrote:
> >>>>>>> 
> >>>>>>> On 17.07.2020 15:59, Bertrand Marquis wrote:
> >>>>>>>> 
> >>>>>>>> 
> >>>>>>>>> On 17 Jul 2020, at 15:19, Jan Beulich <jbeulich@suse.com> wrote:
> >>>>>>>>> 
> >>>>>>>>> On 17.07.2020 15:14, Bertrand Marquis wrote:
> >>>>>>>>>>> On 17 Jul 2020, at 10:10, Jan Beulich <jbeulich@suse.com> wrote:
> >>>>>>>>>>> On 16.07.2020 19:10, Rahul Singh wrote:
> >>>>>>>>>>>> # Emulated PCI device tree node in libxl:
> >>>>>>>>>>>> 
> >>>>>>>>>>>> Libxl is creating a virtual PCI device tree node in the device tree to enable the guest OS to discover the virtual PCI during guest boot. We introduced the new config option [vpci="pci_ecam"] for guests. When this config option is enabled in a guest configuration, a PCI device tree node will be created in the guest device tree.
> >>>>>>>>>>> 
> >>>>>>>>>>> I support Stefano's suggestion for this to be an optional thing, i.e.
> >>>>>>>>>>> there to be no need for it when there are PCI devices assigned to the
> >>>>>>>>>>> guest anyway. I also wonder about the pci_ prefix here - isn't
> >>>>>>>>>>> vpci="ecam" as unambiguous?
> >>>>>>>>>> 
> >>>>>>>>>> This could be a problem as we need to know that this is required for a guest upfront so that PCI devices can be assigned after using xl. 
> >>>>>>>>> 
> >>>>>>>>> I'm afraid I don't understand: When there are no PCI device that get
> >>>>>>>>> handed to a guest when it gets created, but it is supposed to be able
> >>>>>>>>> to have some assigned while already running, then we agree the option
> >>>>>>>>> is needed (afaict). When PCI devices get handed to the guest while it
> >>>>>>>>> gets constructed, where's the problem to infer this option from the
> >>>>>>>>> presence of PCI devices in the guest configuration?
> >>>>>>>> 
> >>>>>>>> If the user wants to use xl pci-attach to attach in runtime a device to a guest, this guest must have a VPCI bus (even with no devices).
> >>>>>>>> If we do not have the vpci parameter in the configuration this use case will not work anymore.
> >>>>>>> 
> >>>>>>> That's what everyone looks to agree with. Yet why is the parameter needed
> >>>>>>> when there _are_ PCI devices anyway? That's the "optional" that Stefano
> >>>>>>> was suggesting, aiui.
> >>>>>> 
> >>>>>> I agree in this case the parameter could be optional and only required if not PCI device is assigned directly in the guest configuration.
> >>>>> 
> >>>>> Where will the ECAM region(s) appear on the guest physmap?
> >>>>> 
> >>>>> Are you going to re-use the same locations as on the physical
> >>>>> hardware, or will they appear somewhere else?
> >>>> 
> >>>> We will add some new definitions for the ECAM regions in the guest physmap declared in xen (include/asm-arm/config.h)
> >>> 
> >>> I think I'm confused, but that file doesn't contain anything related
> >>> to the guest physmap, that's the Xen virtual memory layout on Arm
> >>> AFAICT?
> >>> 
> >>> Does this somehow relate to the physical memory map exposed to guests
> >>> on Arm?
> >> 
> >> Yes it does.
> >> We will add new definitions there related to VPCI to reserve some areas for the VPCI ECAM and the IOMEM areas.
> > 
> > Yes, that's completely fine and is what's done on x86, but again I
> > feel like I'm lost here, this is the Xen virtual memory map, how does
> > this relate to the guest physical memory map?
> 
> Sorry my bad, we will add values in include/public/arch-arm.h, wrong header :-)

Oh right, now I see it :).

Do you really need to specify the ECAM and MMIO regions there?

Wouldn't it be enough to specify the ECAM regions on the DT or the
ACPI MCFG table and get the MMIO regions directly from the BARs of the
devices?

Roger.
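
For reference, the use case discussed above in guest-configuration terms might look like the following fragment (a sketch only: the vpci="pci_ecam" option is the one proposed in this thread, while the guest name and BDF are invented examples):

```
# guest1.cfg (illustrative fragment)
name = "guest1"
# Create a vPCI bus even with no devices assigned at boot, so that
# "xl pci-attach guest1 <BDF>" can hot-plug a device at runtime:
vpci = "pci_ecam"
# Alternatively, assign a device at boot; per the discussion, the vPCI
# bus could then be inferred and vpci= left out:
#pci = [ "0000:03:00.0" ]
```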


From xen-devel-bounces@lists.xenproject.org Fri Jul 17 16:19:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jul 2020 16:19:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jwT4Y-0001p3-3N; Fri, 17 Jul 2020 16:18:54 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=/fKj=A4=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jwT4W-0001oy-Ek
 for xen-devel@lists.xenproject.org; Fri, 17 Jul 2020 16:18:52 +0000
X-Inumbo-ID: 33574c62-c849-11ea-963e-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 33574c62-c849-11ea-963e-12813bfff9fa;
 Fri, 17 Jul 2020 16:18:50 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=S8jfqjewAatkauV6GT2D9rqDr4BRXjpOEBtBAsaqiJc=; b=5vJyeCuaiV5NkiFElEhPVc0YiG
 PMnLH8z2ARAK1uPxxxDToSbbtCEYc8oEpBgTNjkz/SpOWFrF9+LT3u82Jl6twXn8JqR32Nq7Z37Bc
 cTDbItooEYV+1f/xXCRGyBe92DcJo4O6zYijp8fpyp/ju8zliRLxSDBg9JhzQ0ShJN74=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jwT4T-0002Jr-MA; Fri, 17 Jul 2020 16:18:49 +0000
Received: from 54-240-197-227.amazon.com ([54.240.197.227]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jwT4T-0008Bh-8f; Fri, 17 Jul 2020 16:18:49 +0000
Subject: Re: RFC: PCI devices passthrough on Arm design proposal
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>
References: <1dd5db2d-98c7-7738-c3d4-d3f098dfe674@suse.com>
 <F09F9354-EC9B-4D76-809B-A25AF4F7D863@arm.com>
 <a5007a6c-bdfe-04d4-8107-53cb222b95e8@suse.com>
 <DA19A9EC-A828-4EBC-BCAA-D1D9E4F222BB@arm.com>
 <20200717144139.GU7191@Air-de-Roger>
 <90AE8DAB-2223-46DC-A263-D78365E5435E@arm.com>
 <20200717150507.GW7191@Air-de-Roger>
 <FBE040A9-D088-43D6-8929-FFEDE9DDDE34@arm.com>
 <20200717153043.GX7191@Air-de-Roger>
 <C5B2BDD5-E504-4871-8542-5BA8C051F699@arm.com>
 <20200717160834.GA7191@Air-de-Roger>
From: Julien Grall <julien@xen.org>
Message-ID: <0c76b6a0-2242-3bbd-9740-75c5580e93e8@xen.org>
Date: Fri, 17 Jul 2020 17:18:46 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:68.0)
 Gecko/20100101 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20200717160834.GA7191@Air-de-Roger>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Rahul Singh <Rahul.Singh@arm.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 nd <nd@arm.com>, Julien Grall <julien.grall.oss@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>



On 17/07/2020 17:08, Roger Pau Monné wrote:
> On Fri, Jul 17, 2020 at 03:51:47PM +0000, Bertrand Marquis wrote:
>>
>>
>>> On 17 Jul 2020, at 17:30, Roger Pau Monné <roger.pau@citrix.com> wrote:
>>>
>>> On Fri, Jul 17, 2020 at 03:23:57PM +0000, Bertrand Marquis wrote:
>>>>
>>>>
>>>>> On 17 Jul 2020, at 17:05, Roger Pau Monné <roger.pau@citrix.com> wrote:
>>>>>
>>>>> On Fri, Jul 17, 2020 at 02:49:20PM +0000, Bertrand Marquis wrote:
>>>>>>
>>>>>>
>>>>>>> On 17 Jul 2020, at 16:41, Roger Pau Monné <roger.pau@citrix.com> wrote:
>>>>>>>
>>>>>>> On Fri, Jul 17, 2020 at 02:34:55PM +0000, Bertrand Marquis wrote:
>>>>>>>>
>>>>>>>>
>>>>>>>>> On 17 Jul 2020, at 16:06, Jan Beulich <jbeulich@suse.com> wrote:
>>>>>>>>>
>>>>>>>>> On 17.07.2020 15:59, Bertrand Marquis wrote:
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>> On 17 Jul 2020, at 15:19, Jan Beulich <jbeulich@suse.com> wrote:
>>>>>>>>>>>
>>>>>>>>>>> On 17.07.2020 15:14, Bertrand Marquis wrote:
>>>>>>>>>>>>> On 17 Jul 2020, at 10:10, Jan Beulich <jbeulich@suse.com> wrote:
>>>>>>>>>>>>> On 16.07.2020 19:10, Rahul Singh wrote:
>>>>>>>>>>>>>> # Emulated PCI device tree node in libxl:
>>>>>>>>>>>>>>
> >>>>>>>>>>>>>> Libxl is creating a virtual PCI device tree node in the device tree to enable the guest OS to discover the virtual PCI bus during guest boot. We introduced the new config option [vpci="pci_ecam"] for guests. When this config option is enabled in a guest configuration, a PCI device tree node will be created in the guest device tree.
>>>>>>>>>>>>>
>>>>>>>>>>>>> I support Stefano's suggestion for this to be an optional thing, i.e.
>>>>>>>>>>>>> there to be no need for it when there are PCI devices assigned to the
>>>>>>>>>>>>> guest anyway. I also wonder about the pci_ prefix here - isn't
>>>>>>>>>>>>> vpci="ecam" as unambiguous?
>>>>>>>>>>>>
> >>>>>>>>>>>> This could be a problem, as we need to know upfront that this is required for a guest so that PCI devices can be assigned later using xl.
>>>>>>>>>>>
> >>>>>>>>>>> I'm afraid I don't understand: When there are no PCI devices that get
>>>>>>>>>>> handed to a guest when it gets created, but it is supposed to be able
>>>>>>>>>>> to have some assigned while already running, then we agree the option
>>>>>>>>>>> is needed (afaict). When PCI devices get handed to the guest while it
>>>>>>>>>>> gets constructed, where's the problem to infer this option from the
>>>>>>>>>>> presence of PCI devices in the guest configuration?
>>>>>>>>>>
> >>>>>>>>>> If the user wants to use xl pci-attach to attach a device to a guest at runtime, this guest must have a VPCI bus (even with no devices).
> >>>>>>>>>> If we do not have the vpci parameter in the configuration, this use case will no longer work.
>>>>>>>>>
> >>>>>>>>> That's what everyone seems to agree with. Yet why is the parameter needed
>>>>>>>>> when there _are_ PCI devices anyway? That's the "optional" that Stefano
>>>>>>>>> was suggesting, aiui.
>>>>>>>>
> >>>>>>>> I agree that in this case the parameter could be optional and only required if no PCI device is assigned directly in the guest configuration.
>>>>>>>
>>>>>>> Where will the ECAM region(s) appear on the guest physmap?
>>>>>>>
>>>>>>> Are you going to re-use the same locations as on the physical
>>>>>>> hardware, or will they appear somewhere else?
>>>>>>
>>>>>> We will add some new definitions for the ECAM regions in the guest physmap declared in xen (include/asm-arm/config.h)
>>>>>
>>>>> I think I'm confused, but that file doesn't contain anything related
>>>>> to the guest physmap, that's the Xen virtual memory layout on Arm
>>>>> AFAICT?
>>>>>
>>>>> Does this somehow relate to the physical memory map exposed to guests
>>>>> on Arm?
>>>>
>>>> Yes it does.
>>>> We will add new definitions there related to VPCI to reserve some areas for the VPCI ECAM and the IOMEM areas.
>>>
>>> Yes, that's completely fine and is what's done on x86, but again I
>>> feel like I'm lost here, this is the Xen virtual memory map, how does
>>> this relate to the guest physical memory map?
>>
>> Sorry my bad, we will add values in include/public/arch-arm.h, wrong header :-)
> 
> Oh right, now I see it :).
> 
> Do you really need to specify the ECAM and MMIO regions there?

You need to define those values somewhere :). The layout is only shared 
between the tools and the hypervisor. I think it would be better if they 
are defined at the same place as the rest of the layout, so it is easier 
to rework the layout.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Jul 17 16:28:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jul 2020 16:28:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jwTDt-0002k7-2R; Fri, 17 Jul 2020 16:28:33 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=5Bgo=A4=citrix.com=ian.jackson@srs-us1.protection.inumbo.net>)
 id 1jwTDr-0002k2-Pm
 for xen-devel@lists.xenproject.org; Fri, 17 Jul 2020 16:28:31 +0000
X-Inumbo-ID: 8cec88ae-c84a-11ea-bca7-bc764e2007e4
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8cec88ae-c84a-11ea-bca7-bc764e2007e4;
 Fri, 17 Jul 2020 16:28:31 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595003312;
 h=from:mime-version:content-transfer-encoding:message-id:
 date:to:subject:in-reply-to:references;
 bh=2/3732gubJuYU2C+ir5CDPG+T3m5Zqv4+URMK8hhZtw=;
 b=VSVsyyr6sN2R+4UrBsabOlP2agYq7GpRur/ldULqQ7NKO9yMVPkLn/3e
 8SwThzOdKyE4Mjjfb+rFKF+r3egSEEKf15B16ctCB9+CdZ684vcSHz+fH
 rhjrZ2KRi8MNbg/+Y7eh9FDwfdpFlQrQG0chgHMtKr15T9geL7BMdPRbK k=;
Authentication-Results: esa3.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: n3INc9QJiNhGZs7foyxwrav/xL2wsV5IVzSUAfOz3WUcMHKoo6L0duiYhJFyyNSm+o7jjsCOQk
 Di32RkY1PMl+cOrwOs0XUDxH7BatPURmPDl8Mw5/iUQcT8LLfxTKPp3ugELGT+ObkVyo3nLei0
 hzLXwhw+zAwswkw2+pTCI6Kkqt2E7oBh694cdEw9ef5JsbiAHec1IRifnWgc04O4Gt1ni2YPDk
 Vvvi1lk1VikcQ6O+0xJLHfgzBcWVGZZjfJwrrCbZvQo2k5VaFdMmNBvP9gTnQ+wlLoHPPomHaA
 xSE=
X-SBRS: 2.7
X-MesageID: 22634716
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,362,1589256000"; d="scan'208";a="22634716"
From: Ian Jackson <ian.jackson@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Message-ID: <24337.53671.899413.925330@mariner.uk.xensource.com>
Date: Fri, 17 Jul 2020 17:28:23 +0100
To: Roger Pau Monne <roger.pau@citrix.com>, "xen-devel@lists.xenproject.org"
 <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH 0/2] osstest: update FreeBSD guest tests
In-Reply-To: <24279.29385.267956.941601@mariner.uk.xensource.com>
References: <20200528102648.8724-1-roger.pau@citrix.com>
 <24279.29385.267956.941601@mariner.uk.xensource.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Ian Jackson writes ("Re: [PATCH 0/2] osstest: update FreeBSD guest tests"):
> Roger Pau Monne writes ("[PATCH 0/2] osstest: update FreeBSD guest tests"):
> > The following series adds FreeBSD 11 and 12 guests tests to osstest.
> > ATM this is only tested on amd64, since the i386 versions had a bug.
> > 
> > The result can be seen at:
> > 
> > http://logs.test-lab.xenproject.org/osstest/logs/150428/
> 
> Oh, I forgot to say, both patches
> 
> Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

Thanks, pushed.

Ian.


From xen-devel-bounces@lists.xenproject.org Fri Jul 17 16:34:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jul 2020 16:34:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jwTJs-0003ck-RW; Fri, 17 Jul 2020 16:34:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=sKPa=A4=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jwTJr-0003bp-DF
 for xen-devel@lists.xenproject.org; Fri, 17 Jul 2020 16:34:43 +0000
X-Inumbo-ID: 67b9f700-c84b-11ea-bb8b-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 67b9f700-c84b-11ea-bb8b-bc764e2007e4;
 Fri, 17 Jul 2020 16:34:37 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=4VjPjn8qfXMtSsoqX73HwKkuKt3VtGVlqW/8L0ivwZg=; b=uK1xhcm048AcHI0nJ3iJ2jRMj
 yWsByFJba+lMOgAI7L1yDH90f5ABkhNdXdQZIc1eIU+erSVBniycBbRo8+WShXs6OMqcaWzGUslSI
 +Fr30r/gdhtxSsGv4Pjdhi2HVlm7Co3Y79dEy52u7x2za4KQFhEE6TQbsu+E0nKOQzlnM=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jwTJl-0002cp-6Y; Fri, 17 Jul 2020 16:34:37 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jwTJk-0001Kl-MG; Fri, 17 Jul 2020 16:34:36 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jwTJk-0006sI-Lk; Fri, 17 Jul 2020 16:34:36 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151959-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 151959: all pass - PUSHED
X-Osstest-Versions-This: ovmf=21a23e6966c2eb597a8db98d6837a4c01b3cad4a
X-Osstest-Versions-That: ovmf=d9269d69138860edb1ec9796ed48549dc6ba5735
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 17 Jul 2020 16:34:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151959 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151959/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 21a23e6966c2eb597a8db98d6837a4c01b3cad4a
baseline version:
 ovmf                 d9269d69138860edb1ec9796ed48549dc6ba5735

Last test of basis   151937  2020-07-16 04:25:27 Z    1 days
Testing same since   151946  2020-07-16 17:27:21 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Dandan Bi <dandan.bi@intel.com>
  Vin Xue <vinxue@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   d9269d6913..21a23e6966  21a23e6966c2eb597a8db98d6837a4c01b3cad4a -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Fri Jul 17 18:30:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jul 2020 18:30:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jwV7Y-0005L1-Rv; Fri, 17 Jul 2020 18:30:08 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=sKPa=A4=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jwV7Y-0004tF-6C
 for xen-devel@lists.xenproject.org; Fri, 17 Jul 2020 18:30:08 +0000
X-Inumbo-ID: 868e3578-c85b-11ea-b7bb-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 868e3578-c85b-11ea-b7bb-bc764e2007e4;
 Fri, 17 Jul 2020 18:30:01 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=zIj6EjbFSdrxo4DC3d8a7fcIvDyCLD1tfgLKzvTLvEU=; b=6uKjIgF5qhRylvA1nXte3WwLS
 1izqQx1rKAK0WaC9MWLIGOA8dAOGngWXhh3fA6DTU5uVEHJqyNuRIRgKs0GKYJX41zSYw6i1NLdxH
 7S+6YTB1PoPf6NYoOLuBBntZNHGBZxEs4ZiuJSfVCZaj7aqn9fyJVs7eIjA8xga4x0d/o=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jwV7Q-00054i-RC; Fri, 17 Jul 2020 18:30:00 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jwV7Q-0000Tj-Hu; Fri, 17 Jul 2020 18:30:00 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jwV7Q-0005O3-EP; Fri, 17 Jul 2020 18:30:00 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151957-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 151957: tolerable FAIL
X-Osstest-Failures: xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
X-Osstest-Versions-This: xen=f8fe3c07363d11fc81d8e7382dbcaa357c861569
X-Osstest-Versions-That: xen=f8fe3c07363d11fc81d8e7382dbcaa357c861569
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 17 Jul 2020 18:30:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151957 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151957/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 151942
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 151942
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 151942
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 151942
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 151942
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 151942
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 151942
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 151942
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop             fail like 151942
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass

version targeted for testing:
 xen                  f8fe3c07363d11fc81d8e7382dbcaa357c861569
baseline version:
 xen                  f8fe3c07363d11fc81d8e7382dbcaa357c861569

Last test of basis   151957  2020-07-17 04:42:18 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Fri Jul 17 18:35:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jul 2020 18:35:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jwVCH-0005Wp-Kq; Fri, 17 Jul 2020 18:35:01 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Z3hM=A4=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1jwVCF-0005Wk-Qt
 for xen-devel@lists.xenproject.org; Fri, 17 Jul 2020 18:34:59 +0000
X-Inumbo-ID: 3738f3c2-c85c-11ea-bca7-bc764e2007e4
Received: from mail-lj1-x22e.google.com (unknown [2a00:1450:4864:20::22e])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3738f3c2-c85c-11ea-bca7-bc764e2007e4;
 Fri, 17 Jul 2020 18:34:58 +0000 (UTC)
Received: by mail-lj1-x22e.google.com with SMTP id q4so13853116lji.2
 for <xen-devel@lists.xenproject.org>; Fri, 17 Jul 2020 11:34:58 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=subject:to:cc:references:from:message-id:date:user-agent
 :mime-version:in-reply-to:content-transfer-encoding:content-language;
 bh=cjn/+HtkHu6VGY1bWiMHWEzCHRCWeG+Zqp25hnjNy2M=;
 b=qmjQ287Uuh+mNnpVHob8PUaQurZOaUWY0tJxgFIG8itAQgN2+acyUGvxZnnkn3j6V3
 hahTQ01txrVa4OpTGDuLiH4CzDurauczezQ5hWMkICvRmqY29+kurDvp6MZo8V8dSidR
 QmCO3cGbk1AYBKYwayJzOBcbPVaxIA1+tl0gJhWyQtjY5zbuDtpQAdh78H0TfKjJJwUC
 if+UECkBnZ31gP8MHgDocpspbKUqD062VOwXS6SYRVc1s6DDcg633aKtOITpfFdODJZ/
 86hFS15p8VQGxSAZsBwveDXKdnfRGnqHF/JQSZPUagX5ZOsjRJsmuRR7rPhfdiWVwOJw
 O3zA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:subject:to:cc:references:from:message-id:date
 :user-agent:mime-version:in-reply-to:content-transfer-encoding
 :content-language;
 bh=cjn/+HtkHu6VGY1bWiMHWEzCHRCWeG+Zqp25hnjNy2M=;
 b=jeQ3pG3mYMzCE+/clPfauG1B7+bnuyXe989MQ7jRWFcABNOO4cPk4MUWwKxftffM1X
 SvNnz0hHYkyrToiiamsIJwF2r5U/6MWKcIVo4ZCcFozyg439DumnYYrDLJFUOC5FAkKG
 LtNLqMhJBRwL64lJqyYSh1NR94ZKrExy/egmyoBlnHF+LM+xn3Z3KbJdIEZQTipPBV6g
 GcY/Cne0pZNFGO0Stw2g5zrbI8K/GyTje8bBFqeOBjMnX8xZQV/4xAToZT5JUBU/zb68
 v1121f1kxuJGy1y6PpfeJCEA14zNrPmgMkoYPiZj2b4zBwKQHIfDxiZl5lzIu/xTQ+M/
 l1FQ==
X-Gm-Message-State: AOAM532UY4JDONdlNPgAIto9TiESQ3h48fJ79YaGTxDRR85o4/IlxI85
 yiHZB6KCEJWKtwpBUxn+omQ=
X-Google-Smtp-Source: ABdhPJw8LAuh+vN1Ky/FEp8eKnoZEvPCfkxxaB2nn0RJIW5hlREZ8PyEBHS/WCKOFdUtvC283+EAIQ==
X-Received: by 2002:a2e:8e8a:: with SMTP id z10mr4808032ljk.351.1595010897044; 
 Fri, 17 Jul 2020 11:34:57 -0700 (PDT)
Received: from [192.168.1.2] ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id a8sm1773197ljk.138.2020.07.17.11.34.55
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 17 Jul 2020 11:34:56 -0700 (PDT)
Subject: Re: Virtio in Xen on Arm (based on IOREQ concept)
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <CAPD2p-nthLq5NaU32u8pVaa-ub=a9-LOPenupntTYdS-cu31jQ@mail.gmail.com>
 <20200717150039.GV7191@Air-de-Roger>
From: Oleksandr <olekstysh@gmail.com>
Message-ID: <8f4e0c0d-b3d4-9dd3-ce20-639539321968@gmail.com>
Date: Fri, 17 Jul 2020 21:34:14 +0300
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20200717150039.GV7191@Air-de-Roger>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit
Content-Language: en-US
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Oleksandr Andrushchenko <andr2000@gmail.com>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>,
 xen-devel <xen-devel@lists.xenproject.org>,
 Artem Mygaiev <joculator@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>


On 17.07.20 18:00, Roger Pau Monné wrote:
> Hello,

Hello Roger


> I'm very happy to see this proposal, as I think having proper (1st
> class) VirtIO support on Xen is crucial to our survival. Almost all
> OSes have VirtIO frontends, while the same can't be said about Xen PV
> frontends. It would also allow us to piggyback on any new VirtIO
> devices without having to re-invent the wheel by creating a clone Xen
> PV device.

Thank you.


>
> On Fri, Jul 17, 2020 at 05:11:02PM +0300, Oleksandr Tyshchenko wrote:
>> Hello all.
>>
>> We would like to resume Virtio in Xen on Arm activities. You can find some
>> background at [1] and Virtio specification at [2].
>>
>> *A few words about importance:*
>> There is increasing interest in, I would even say a requirement for, a
>> flexible, generic and standardized cross-hypervisor solution for I/O
>> virtualization in the automotive and embedded areas. The target is
>> quite clear here. By providing a standardized interface and device
>> models for device para-virtualization in hypervisor environments, the
>> Virtio interface allows us to move guest domains among different
>> hypervisor systems without further modification on the guest side.
>> What is more, Virtio support is available in Linux, Android and many
>> other operating systems, and there are a lot of existing Virtio
>> drivers (frontends) which could simply be reused without reinventing
>> the wheel. Many organisations are pushing Virtio as a common
>> interface. To summarize, Virtio support would be a great feature in
>> Xen on Arm, in addition to the traditional Xen PV drivers, letting
>> the user choose which one to use.
> I think most of the above also applies to x86, and fully agree.
>
>> *A few words about the solution:*
>> As was mentioned at [1], in order to implement virtio-mmio, Xen on Arm
> Any plans for virtio-pci? Arm seems to be moving to the PCI bus, and
> it would be very interesting from a x86 PoV, as I don't think
> virtio-mmio is something that you can easily use on x86 (or even use
> at all).

To be honest, I haven't considered virtio-pci so far. Julien's PoC 
(which we are based on) provides support for the virtio-mmio transport, 
which is enough to start working with VirtIO and is not as complex as 
virtio-pci. But that doesn't mean there is no way to support virtio-pci 
in Xen; I think it could be added in a next step. The nearest target, 
however, is the virtio-mmio approach (assuming, of course, the 
community agrees on that).


>> requires some mechanism to forward guest MMIO accesses to a device
>> model. As it turned out, Xen on x86 contains most of the pieces
>> needed to use that transport (via the existing IOREQ concept). Julien
>> has already done a large amount of work in his PoC (xen/arm: Add
>> support for Guest IO forwarding to a device emulator). Using that
>> code as a base, we managed to create a completely functional PoC with
>> a DomU running on a virtio block device instead of a traditional Xen
>> PV driver, without modifications to the DomU Linux. Our work is
>> mostly about rebasing Julien's code on the current codebase (Xen
>> 4.14-rc4), various tweaks to be able to run the emulator (the
>> virtio-disk backend) in a domain other than Dom0 (in our system we
>> have a thin Dom0 and keep all backends in a driver domain),
> How do you handle this use-case? Are you using grants in the VirtIO
> ring, or rather allowing the driver domain to map all the guest memory
> and then placing gfn on the ring like it's commonly done with VirtIO?

The second option. Xen grants are not used at all, and neither are 
event channels or Xenbus. That allows us to keep the guest 
*unmodified*, which is one of the main goals. Yes, this may sound (or 
even is) insecure, but the backend which runs in the driver domain is 
allowed to map all guest memory.

In the current backend implementation, a part of guest memory is mapped 
just to process a guest request and then unmapped again; there are no 
mappings set up in advance. The xenforeignmemory_map call is used for 
that purpose. As an experiment, I tried mapping all guest memory in 
advance and simply calculating the pointer at runtime. Of course, that 
logic performed better.
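The premapped experiment essentially replaces the per-request map/unmap 
pair with a single up-front mapping plus pointer arithmetic. A minimal, 
self-contained sketch of that address calculation (the struct and names 
here are illustrative, not the real backend's; in practice `base` would 
come from one xenforeignmemory_map call covering the whole RAM bank):

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical layout: one contiguous guest RAM bank, premapped at 'base'. */
struct guest_ram {
    uint64_t gpa_start;   /* guest-physical start of the RAM bank */
    size_t   size;        /* bank size in bytes */
    uint8_t *base;        /* backend-virtual address of the premapped bank */
};

/* Translate a guest-physical address found in a virtio descriptor into a
 * pointer the backend can dereference directly, with bounds checking so a
 * malicious descriptor cannot point outside the mapped bank. */
static void *guest_to_backend(const struct guest_ram *ram,
                              uint64_t gpa, size_t len)
{
    if (len == 0 || len > ram->size ||
        gpa < ram->gpa_start ||
        gpa - ram->gpa_start > ram->size - len)
        return NULL;  /* descriptor points outside guest RAM */
    return ram->base + (gpa - ram->gpa_start);
}
```

The bounds check matters even in this "trusted backend" model, since the 
descriptor contents are guest-controlled.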

I was also thinking about static guest memory regions, forcing the 
guest to allocate descriptors from them (so that only a predefined 
region needs to be mapped rather than all guest memory). But that 
implies modifying the guest...


>
> Do you have any plans to try to upstream a modification to the VirtIO
> spec so that grants (ie: abstract references to memory addresses) can
> be used on the VirtIO ring?

The VirtIO spec hasn't been modified, nor has the VirtIO infrastructure 
in the guest, so there is nothing to upstream. :)


>
>> misc fixes for our use-cases and tool support for the configuration.
>> Unfortunately, Julien doesn’t have much time to allocate to this work
>> anymore, so we would like to step in and continue.
>>
>> *A few words about the Xen code:*
>> You can find the whole Xen series at [5]. The patches are in RFC
>> state because some actions in the series should be reconsidered and
>> implemented properly. Before submitting the final code for review,
>> the first IOREQ patch (which is quite big) will be split into x86,
>> Arm and common parts. Please note that the x86 part hasn't even been
>> build-tested so far and could be broken by this series. Also, the
>> series probably wants splitting into adding IOREQ on Arm (to be
>> focused on first) and tool support for configuring the virtio-disk
>> (which is going to be the first Virtio driver) before going to the
>> mailing list.
> Sending first a patch series to enable IOREQs on Arm seems perfectly
> fine, and it doesn't have to come with the VirtIO backend. In fact I
> would recommend that you send that ASAP, so that you don't spend time
> working on the backend that would likely need to be modified
> according to the review received on the IOREQ series.

I completely agree with you; I will send it after splitting the IOREQ 
patch and performing some cleanup. However, it is going to take some 
time to do this properly, taking into account that I personally won't 
be able to test it on x86.


>
>> What I would like to add here: the IOREQ feature on Arm could be
>> used not only for implementing Virtio, but also for other use-cases
>> which require some emulator entity outside Xen, such as a custom
>> (non-ECAM compatible) PCI emulator, for example.
>>
>> *A few words about the backend(s):*
>> One of the main problems with Virtio in Xen on Arm is the absence of
>> “ready-to-use” and “out-of-QEMU” Virtio backends (at least I am not
>> aware of any). We managed to create a virtio-disk backend based on
>> demu [3] and kvmtool [4] using this series. It is worth mentioning
>> that although Xenbus/Xenstore is not supposed to be used with native
>> Virtio, that interface was chosen just to pass configuration from
>> the toolstack to the backend and to notify it about
>> creating/destroying a guest domain (I
>> think it is
>> think it is
> I would prefer if a single instance was launched to handle each
> backend, and that the configuration was passed on the command line.
> Killing the user-space backend from the toolstack is fine I think,
> there's no need to notify the backend using xenstore or any other
> out-of-band methods.
>
> xenstore has proven to be a bottleneck in terms of performance, and it
> would be better if we can avoid using it when possible, specially here
> that you have to do this from scratch anyway.

Let me elaborate a bit more on this.

In the current backend implementation, Xenstore is *not* used for 
communication between the backend (VirtIO device) and the frontend 
(VirtIO driver); the frontend knows nothing about it.

Xenstore was chosen as the interface to pass configuration from the 
toolstack in Dom0 to a backend which may reside in a domain other than 
Dom0 (DomD in our case). Also, by watching the Xenstore entries, the 
backend always knows when the intended guest has been created or 
destroyed.

I may be mistaken, but I don't think we can avoid using Xenstore (or 
some other interface provided by the toolstack), for several reasons.

Besides the virtio-disk configuration (the disk to be assigned to the 
guest, R/O mode, etc.), for each virtio-mmio device instance a pair 
(MMIO range + IRQ) is allocated by the toolstack at guest construction 
time and inserted into the virtio-mmio node in the guest device tree. 
For the backend to operate properly, these variable parameters are 
also passed to it via Xenstore.
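For reference, a virtio-mmio node following the standard "virtio,mmio" 
device tree binding carries exactly this (MMIO range + IRQ) pair; the 
addresses and interrupt number below are illustrative, not the values 
our toolstack actually allocates:

```dts
/* Example virtio-mmio transport node (illustrative values). */
virtio@a000000 {
        compatible = "virtio,mmio";
        reg = <0x0 0x0a000000 0x0 0x200>;  /* MMIO range of device registers */
        interrupts = <0x0 0x10 0x1>;       /* SPI 16, edge rising (GIC format) */
};
```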

The other reasons are:

1. Automation. With the current backend implementation we don't need 
to pause the guest right after creating it, then go to the driver 
domain and spawn the backend, and after that go back to Dom0 and 
unpause the guest.

2. The ability to detect when a guest with an involved frontend has 
gone away and properly release resources (guest destroy/reboot).

3. The ability to (re)connect to a newly created guest with an 
involved frontend (guest create/reboot).

4. What is more, with Xenstore support the backend is able to detect 
the dom_id it runs in and the guest's dom_id; there is no need to pass 
them via the command line.


I will be happy to explain in detail after publishing the backend 
code. :)


-- 
Regards,

Oleksandr Tyshchenko



From xen-devel-bounces@lists.xenproject.org Fri Jul 17 19:10:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jul 2020 19:10:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jwVkg-0000Ov-IA; Fri, 17 Jul 2020 19:10:34 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7cXX=A4=amazon.com=prvs=460612801=anchalag@srs-us1.protection.inumbo.net>)
 id 1jwVke-0000Oq-E8
 for xen-devel@lists.xenproject.org; Fri, 17 Jul 2020 19:10:32 +0000
X-Inumbo-ID: 2f471c8e-c861-11ea-b7bb-bc764e2007e4
Received: from smtp-fw-4101.amazon.com (unknown [72.21.198.25])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2f471c8e-c861-11ea-b7bb-bc764e2007e4;
 Fri, 17 Jul 2020 19:10:31 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=amazon.com; i=@amazon.com; q=dns/txt; s=amazon201209;
 t=1595013032; x=1626549032;
 h=date:from:to:cc:message-id:references:mime-version:
 content-transfer-encoding:in-reply-to:subject;
 bh=JXEp3fu4ueKSPTL/aTTxogwWbfIdE4zKSLg5x2ylncA=;
 b=iTeU5yMSfyY0qrPHBs17EooslgyBlCgdmlmeT2ZUH+96WKSG0B69wDjE
 tptTdI2lUNEXXY6D46SPGNexJauLdEYTm0l33gG8GlbfZy2boCC1HybDU
 mJN4i9+j3Y3/TCnZ0a4zMyrQSCQtg4w8qM4tikhP0kMTXRTK7PdTouGgD s=;
IronPort-SDR: B7g8C+O+44k2rWyS1+qLZ7Ril88L7euUS5PHylv/wSxXaBC1vnDAauj2J020dgzo+5pOznIGT3
 9bsyqNana+Aw==
X-IronPort-AV: E=Sophos;i="5.75,364,1589241600"; d="scan'208";a="42640067"
Subject: Re: [PATCH v2 01/11] xen/manage: keep track of the on-going suspend
 mode
Received: from iad12-co-svc-p1-lb1-vlan3.amazon.com (HELO
 email-inbound-relay-1e-303d0b0e.us-east-1.amazon.com) ([10.43.8.6])
 by smtp-border-fw-out-4101.iad4.amazon.com with ESMTP;
 17 Jul 2020 19:10:32 +0000
Received: from EX13MTAUEE002.ant.amazon.com
 (iad55-ws-svc-p15-lb9-vlan2.iad.amazon.com [10.40.159.162])
 by email-inbound-relay-1e-303d0b0e.us-east-1.amazon.com (Postfix) with ESMTPS
 id D73BFA20E1; Fri, 17 Jul 2020 19:10:24 +0000 (UTC)
Received: from EX13D08UEE003.ant.amazon.com (10.43.62.118) by
 EX13MTAUEE002.ant.amazon.com (10.43.62.24) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Fri, 17 Jul 2020 19:10:09 +0000
Received: from EX13MTAUEA002.ant.amazon.com (10.43.61.77) by
 EX13D08UEE003.ant.amazon.com (10.43.62.118) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Fri, 17 Jul 2020 19:10:09 +0000
Received: from dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com
 (172.22.96.68) by mail-relay.amazon.com (10.43.61.169) with Microsoft SMTP
 Server id 15.0.1497.2 via Frontend Transport; Fri, 17 Jul 2020 19:10:09 +0000
Received: by dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com (Postfix,
 from userid 4335130)
 id 3826A56980; Fri, 17 Jul 2020 19:10:09 +0000 (UTC)
Date: Fri, 17 Jul 2020 19:10:09 +0000
From: Anchal Agarwal <anchalag@amazon.com>
To: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Message-ID: <20200717191009.GA3387@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
References: <cover.1593665947.git.anchalag@amazon.com>
 <20200702182136.GA3511@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
 <50298859-0d0e-6eb0-029b-30df2a4ecd63@oracle.com>
 <20200715204943.GB17938@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
 <0ca3c501-e69a-d2c9-a24c-f83afd4bdb8c@oracle.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <0ca3c501-e69a-d2c9-a24c-f83afd4bdb8c@oracle.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Precedence: Bulk
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: eduval@amazon.com, len.brown@intel.com, peterz@infradead.org,
 benh@kernel.crashing.org, x86@kernel.org, linux-mm@kvack.org, pavel@ucw.cz,
 hpa@zytor.com, sstabellini@kernel.org, kamatam@amazon.com, mingo@redhat.com,
 xen-devel@lists.xenproject.org, sblbir@amazon.com, axboe@kernel.dk,
 konrad.wilk@oracle.com, bp@alien8.de, tglx@linutronix.de, jgross@suse.com,
 netdev@vger.kernel.org, linux-pm@vger.kernel.org, rjw@rjwysocki.net,
 linux-kernel@vger.kernel.org, vkuznets@redhat.com, davem@davemloft.net,
 dwmw@amazon.co.uk, roger.pau@citrix.com
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Wed, Jul 15, 2020 at 05:18:08PM -0400, Boris Ostrovsky wrote:
> CAUTION: This email originated from outside of the organization. Do not click links or open attachments unless you can confirm the sender and know the content is safe.
> 
> 
> 
> On 7/15/20 4:49 PM, Anchal Agarwal wrote:
> > On Mon, Jul 13, 2020 at 11:52:01AM -0400, Boris Ostrovsky wrote:
> >>
> >> On 7/2/20 2:21 PM, Anchal Agarwal wrote:
> >>> +
> >>> +bool xen_is_xen_suspend(void)
> >>
> >> Weren't you going to call this pv suspend? (And also --- is this suspend
> >> or hibernation? Your commit messages and cover letter talk about fixing
> >> hibernation).
> >>
> >>
> > This is for hibernation of pvhvm/hvm/pv-on-hvm guests, as you may call them.
> > The method is just there to check whether a "xen suspend" is in progress.
> > I do not see "xen_suspend" differentiating between pv and hvm
> > domains until later in the code; hence, I abstracted it to xen_is_xen_suspend.
> 
> 
> I meant "pv suspend" in the sense that this is paravirtual suspend, not
> suspend for paravirtual guests. Just like pv drivers are for both pv and
> hvm guests.
> 
> 
> And then --- should it be pv suspend or pv hibernation?
> 
>
Ok, so I think I am quite confused by this question. Here is what this
function is for: xen_is_xen_suspend() just tells us whether
the guest is in the "SHUTDOWN_SUSPEND" state or not. This check is needed
for correct invocation of the syscore_ops callbacks registered for guest
hibernation, and for xenbus to invoke the respective callbacks
[suspend/resume vs freeze/thaw/restore].
Since the "shutting_down" state is declared static and is not directly
available to other parts of the code, this function serves that purpose.

I am having a hard time understanding why this should be called pv
suspend/hibernation, unless you are suggesting something else?
Am I missing your point here?
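The pattern being discussed can be sketched in plain C: the suspend 
state lives in a file-private static variable, and a small accessor 
exposes a boolean check to the rest of the code. The function name and 
XEN_SUSPEND follow the quoted patch; the exact set of enum values here 
is an assumption for illustration:

```c
#include <stdbool.h>

/* Illustrative suspend modes; the series may define a different set. */
enum suspend_modes {
    NO_SUSPEND = 0,
    XEN_SUSPEND,      /* Xen suspend (save/restore, live migration) */
    PM_SUSPEND,       /* PM suspend (S3) */
    PM_HIBERNATION,   /* PM hibernation (S4) */
};

/* Private to this file; other parts of the code cannot read it directly. */
static enum suspend_modes suspend_mode = NO_SUSPEND;

/* Accessor under discussion: reports whether a Xen suspend (as opposed
 * to a PM suspend/hibernation) is currently in progress. */
bool xen_is_xen_suspend(void)
{
    return suspend_mode == XEN_SUSPEND;
}
```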
> 
> >>> +{
> >>> +     return suspend_mode == XEN_SUSPEND;
> >>> +}
> >>> +
> >>
> >> +static int xen_setup_pm_notifier(void)
> >> +{
> >> +     if (!xen_hvm_domain())
> >> +             return -ENODEV;
> >>
> >> I forgot --- what did we decide about non-x86 (i.e. ARM)?
> > It would be great to support that; however, it's out of
> > scope for this patch set.
> > I’ll be happy to discuss it separately.
> 
> 
> I wasn't implying that this *should* work on ARM but rather whether this
> will break ARM somehow (because xen_hvm_domain() is true there).
> 
> 
Ok, makes sense. TBH, I haven't tested this part of the code on ARM, and
the series was only meant to support x86 guest hibernation.
Moreover, this notifier is there to distinguish between the two PM
events, PM suspend and PM hibernation. Since we only care about PM
hibernation, I may just remove this code and rely on the
"SHUTDOWN_SUSPEND" state. However, I may have to fix other patches in
the series where this check appears and cater it only for x86, right?
> 
> >>
> >> And PVH dom0.
> > That's another good use case to make this work with; however, I still
> > think it should be tested/worked on separately, as the feature itself
> > (PVH Dom0) is very new.
> 
> 
> Same question here --- will this break PVH dom0?
> 
I haven't tested it as a part of this series. Is that a blocker here?
> 
Thanks,
Anchal
> -boris
> 
> 


From xen-devel-bounces@lists.xenproject.org Fri Jul 17 19:18:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jul 2020 19:18:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jwVry-0000dt-DI; Fri, 17 Jul 2020 19:18:06 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Z3hM=A4=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1jwVrx-0000do-PX
 for xen-devel@lists.xenproject.org; Fri, 17 Jul 2020 19:18:05 +0000
X-Inumbo-ID: 3cd7848c-c862-11ea-bb8b-bc764e2007e4
Received: from mail-lj1-x242.google.com (unknown [2a00:1450:4864:20::242])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3cd7848c-c862-11ea-bb8b-bc764e2007e4;
 Fri, 17 Jul 2020 19:18:04 +0000 (UTC)
Received: by mail-lj1-x242.google.com with SMTP id q7so13948045ljm.1
 for <xen-devel@lists.xenproject.org>; Fri, 17 Jul 2020 12:18:04 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=subject:to:cc:references:from:message-id:date:user-agent
 :mime-version:in-reply-to:content-transfer-encoding:content-language;
 bh=tek3JAtI4UfaW/A2eL59/Ptyl2onTOkEiF9Ywg80Zuo=;
 b=VKwjNyfAi9uBgj3CUaizKB5JJUx2tcZN1Fk+0zawzZus3yOGgg0tbi/nKQqrz2hAkv
 in0kvuki4yIlgO7p/A+NcZg1pwQnbha0oisoRDpzb/QfHdQtu/vBWvduaU9ASEbaAUqd
 aBnEm6AgNwzftUgUARKUDBd5XuXj4nJ9cUCCbOTcZ278Q1G//15XwI0MGdzAXoBJy2aS
 DBxsHpc23POsRihyDXJktwJByJHBqwe0KmwEGvXTYgpt/uox9flmHXYpMl+R55eX/LFY
 gaeLwzbAOwM0V5CnvhugCxTCVcTUr5GNRRJ0LKvkeOvKqcevmvdBLAz/0IfZ2abqqSFg
 VIvg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:subject:to:cc:references:from:message-id:date
 :user-agent:mime-version:in-reply-to:content-transfer-encoding
 :content-language;
 bh=tek3JAtI4UfaW/A2eL59/Ptyl2onTOkEiF9Ywg80Zuo=;
 b=d3eRQtE1J/AWOQUpy3MpTNh3MG54tZQw6rPy88EOiFNoJBI9GopElEafMqZYxRmBMo
 YpT8piiIjEoEscBor8NZKSM3EYQ0ctk0oK5i8ievjwUl3YpOmqEH0jxAEQ3DVWKrmFb0
 MYPtBYPWnTsG1iJmWFdd+vrVSfEa00PzAlOgrGD1WzFAtG/yDmcx0gLi2GQELGxxmGY9
 2H96/k6WZRpA78Exg4gn6UzgJfnW5ABumAXo4y2k7t5T73fnd52lRT+zZbu7ROHEunaN
 I6p3UNNNGPv480SEoEcTSo0PWJeXTpUdIsHLvU4eBvBU7M/P5ErKT/DMyk/HEr+LzXQy
 /UXA==
X-Gm-Message-State: AOAM533BpTuRh/KPDT15jYmTgJbNRyNHQusSxiklkNYPjXsbEfe1I66f
 DYQMqoyWAHMFG4v6FnaMrSc=
X-Google-Smtp-Source: ABdhPJxv5hxAar917ot786jMFqi3vH3DEBw69x47qqM4IJaaFSbuBrXC/nqtm+C2o3fhFffPWoBd6w==
X-Received: by 2002:a2e:8047:: with SMTP id p7mr5353973ljg.414.1595013483480; 
 Fri, 17 Jul 2020 12:18:03 -0700 (PDT)
Received: from [192.168.1.2] ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id y22sm1801557ljn.2.2020.07.17.12.18.02
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 17 Jul 2020 12:18:02 -0700 (PDT)
Subject: Re: RFC: PCI devices passthrough on Arm design proposal
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
References: <1dd5db2d-98c7-7738-c3d4-d3f098dfe674@suse.com>
 <F09F9354-EC9B-4D76-809B-A25AF4F7D863@arm.com>
 <a5007a6c-bdfe-04d4-8107-53cb222b95e8@suse.com>
 <DA19A9EC-A828-4EBC-BCAA-D1D9E4F222BB@arm.com>
 <20200717144139.GU7191@Air-de-Roger>
 <90AE8DAB-2223-46DC-A263-D78365E5435E@arm.com>
 <20200717150507.GW7191@Air-de-Roger>
 <FBE040A9-D088-43D6-8929-FFEDE9DDDE34@arm.com>
 <20200717153043.GX7191@Air-de-Roger>
 <C5B2BDD5-E504-4871-8542-5BA8C051F699@arm.com>
 <20200717160834.GA7191@Air-de-Roger>
 <0c76b6a0-2242-3bbd-9740-75c5580e93e8@xen.org>
From: Oleksandr <olekstysh@gmail.com>
Message-ID: <1dea1217-f884-0fe1-d339-95c5b473ae23@gmail.com>
Date: Fri, 17 Jul 2020 22:17:56 +0300
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <0c76b6a0-2242-3bbd-9740-75c5580e93e8@xen.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit
Content-Language: en-US
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: nd <nd@arm.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien@xen.org>, Julien Grall <julien.grall.oss@gmail.com>,
 Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Rahul Singh <Rahul.Singh@arm.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>


On 17.07.20 19:18, Julien Grall wrote:

Hello Bertrand

[two threads with the same name are shown in my mail client, so I am not 
completely sure I am asking in the correct one]

>
>
> On 17/07/2020 17:08, Roger Pau Monné wrote:
>> On Fri, Jul 17, 2020 at 03:51:47PM +0000, Bertrand Marquis wrote:
>>>
>>>
>>>> On 17 Jul 2020, at 17:30, Roger Pau Monné <roger.pau@citrix.com> 
>>>> wrote:
>>>>
>>>> On Fri, Jul 17, 2020 at 03:23:57PM +0000, Bertrand Marquis wrote:
>>>>>
>>>>>
>>>>>> On 17 Jul 2020, at 17:05, Roger Pau Monné <roger.pau@citrix.com> 
>>>>>> wrote:
>>>>>>
>>>>>> On Fri, Jul 17, 2020 at 02:49:20PM +0000, Bertrand Marquis wrote:
>>>>>>>
>>>>>>>
>>>>>>>> On 17 Jul 2020, at 16:41, Roger Pau Monné 
>>>>>>>> <roger.pau@citrix.com> wrote:
>>>>>>>>
>>>>>>>> On Fri, Jul 17, 2020 at 02:34:55PM +0000, Bertrand Marquis wrote:
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>> On 17 Jul 2020, at 16:06, Jan Beulich <jbeulich@suse.com> wrote:
>>>>>>>>>>
>>>>>>>>>> On 17.07.2020 15:59, Bertrand Marquis wrote:
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>> On 17 Jul 2020, at 15:19, Jan Beulich <jbeulich@suse.com> 
>>>>>>>>>>>> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>> On 17.07.2020 15:14, Bertrand Marquis wrote:
>>>>>>>>>>>>>> On 17 Jul 2020, at 10:10, Jan Beulich <jbeulich@suse.com> 
>>>>>>>>>>>>>> wrote:
>>>>>>>>>>>>>> On 16.07.2020 19:10, Rahul Singh wrote:
>>>>>>>>>>>>>>> # Emulated PCI device tree node in libxl:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Libxl is creating a virtual PCI device tree node in the 
>>>>>>>>>>>>>>> device tree to enable the guest OS to discover the 
>>>>>>>>>>>>>>> virtual PCI during guest boot. We introduced the new 
>>>>>>>>>>>>>>> config option [vpci="pci_ecam"] for guests. When this 
>>>>>>>>>>>>>>> config option is enabled in a guest configuration, a PCI 
>>>>>>>>>>>>>>> device tree node will be created in the guest device tree.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> I support Stefano's suggestion for this to be an optional 
>>>>>>>>>>>>>> thing, i.e.
>>>>>>>>>>>>>> there to be no need for it when there are PCI devices 
>>>>>>>>>>>>>> assigned to the
>>>>>>>>>>>>>> guest anyway. I also wonder about the pci_ prefix here - 
>>>>>>>>>>>>>> isn't
>>>>>>>>>>>>>> vpci="ecam" as unambiguous?
>>>>>>>>>>>>>
>>>>>>>>>>>>> This could be a problem as we need to know upfront that 
>>>>>>>>>>>>> this is required for a guest, so that PCI devices can be 
>>>>>>>>>>>>> assigned later using xl.
>>>>>>>>>>>>
>>>>>>>>>>>> I'm afraid I don't understand: When there are no PCI devices 
>>>>>>>>>>>> that get
>>>>>>>>>>>> handed to a guest when it gets created, but it is supposed 
>>>>>>>>>>>> to be able
>>>>>>>>>>>> to have some assigned while already running, then we agree 
>>>>>>>>>>>> the option
>>>>>>>>>>>> is needed (afaict). When PCI devices get handed to the 
>>>>>>>>>>>> guest while it
>>>>>>>>>>>> gets constructed, where's the problem to infer this option 
>>>>>>>>>>>> from the
>>>>>>>>>>>> presence of PCI devices in the guest configuration?
>>>>>>>>>>>
>>>>>>>>>>> If the user wants to use xl pci-attach to attach in runtime 
>>>>>>>>>>> a device to a guest, this guest must have a VPCI bus (even 
>>>>>>>>>>> with no devices).
>>>>>>>>>>> If we do not have the vpci parameter in the configuration 
>>>>>>>>>>> this use case will not work anymore.
>>>>>>>>>>
>>>>>>>>>> That's what everyone looks to agree with. Yet why is the 
>>>>>>>>>> parameter needed
>>>>>>>>>> when there _are_ PCI devices anyway? That's the "optional" 
>>>>>>>>>> that Stefano
>>>>>>>>>> was suggesting, aiui.
>>>>>>>>>
>>>>>>>>> I agree that in this case the parameter could be optional and 
>>>>>>>>> only required if no PCI device is assigned directly in the 
>>>>>>>>> guest configuration.
>>>>>>>>
>>>>>>>> Where will the ECAM region(s) appear on the guest physmap?
>>>>>>>>
>>>>>>>> Are you going to re-use the same locations as on the physical
>>>>>>>> hardware, or will they appear somewhere else?
>>>>>>>
>>>>>>> We will add some new definitions for the ECAM regions in the 
>>>>>>> guest physmap declared in xen (include/asm-arm/config.h)
>>>>>>
>>>>>> I think I'm confused, but that file doesn't contain anything related
>>>>>> to the guest physmap, that's the Xen virtual memory layout on Arm
>>>>>> AFAICT?
>>>>>>
>>>>>> Does this somehow relate to the physical memory map exposed to 
>>>>>> guests
>>>>>> on Arm?
>>>>>
>>>>> Yes it does.
>>>>> We will add new definitions there related to VPCI to reserve some 
>>>>> areas for the VPCI ECAM and the IOMEM areas.
>>>>
>>>> Yes, that's completely fine and is what's done on x86, but again I
>>>> feel like I'm lost here, this is the Xen virtual memory map, how does
>>>> this relate to the guest physical memory map?
>>>
>>> Sorry my bad, we will add values in include/public/arch-arm.h, wrong 
>>> header :-)
>>
>> Oh right, now I see it :).
>>
>> Do you really need to specify the ECAM and MMIO regions there?
>
> You need to define those values somewhere :). The layout is only 
> shared between the tools and the hypervisor. I think it would be 
> better if they are defined at the same place as the rest of the 
> layout, so it is easier to rework the layout.
>
> Cheers,


I would like to clarify which changes should be done to an IOMMU driver 
in order to support PCI pass-through properly.

The design document mentions the SMMU, but Xen also supports the 
IPMMU-VMSA (currently in tech preview). It would be really nice if the 
required support were extended to that kind of IOMMU as well.

May I ask what should be implemented in the Xen driver in order to 
support the PCI pass-through feature on Arm? Should the IOMMU H/W be 
"PCI-aware" for that purpose?
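For context, the vpci="pci_ecam" option discussed earlier in the thread would sit in an xl guest configuration roughly as follows. This is an illustrative sketch of the proposal under discussion, not a released interface; the device BDF and the interaction with the pci= list are assumptions:

```
# Guest configuration sketch (proposal under discussion, not a released API).
name = "guest-pci"
vpci = "pci_ecam"           # expose a virtual PCI host bridge (ECAM) to the guest
# When a device is assigned at creation time, the option could arguably
# be inferred from the presence of the pci= list:
pci  = [ "0000:01:00.0" ]
```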


-- 
Regards,

Oleksandr Tyshchenko



From xen-devel-bounces@lists.xenproject.org Fri Jul 17 19:22:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jul 2020 19:22:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jwVwH-0001Rb-4C; Fri, 17 Jul 2020 19:22:33 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=sKPa=A4=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jwVwG-0001RW-22
 for xen-devel@lists.xenproject.org; Fri, 17 Jul 2020 19:22:32 +0000
X-Inumbo-ID: dc13e84c-c862-11ea-bca7-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id dc13e84c-c862-11ea-bca7-bc764e2007e4;
 Fri, 17 Jul 2020 19:22:31 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=+M64DRq/0PlzBnS+B8Ref4MQblsCDYz95Tvtj5b+5qg=; b=i2xNBoxAj9lPZZblSjwx8IzcX
 WB2OzXLUX4yukowO78XY82YbjR+qiM+txLTktSBQq30btZagjvOFotz5IcI3YnnbovtrIDDA9UhvL
 gUef/dYaq/Y8wvWD7de4gXMOxogrrj3F90cpjbaeH8QXK8U6rtMMYuQtzb6eTXOIAom1U=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jwVwE-0006A6-Rg; Fri, 17 Jul 2020 19:22:30 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jwVwE-0003SP-G3; Fri, 17 Jul 2020 19:22:30 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jwVwE-0000sZ-FO; Fri, 17 Jul 2020 19:22:30 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151970-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 151970: tolerable all pass - PUSHED
X-Osstest-Failures: xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=fb024b779336a0f73b3aee885b2ce082e812881f
X-Osstest-Versions-That: xen=f8fe3c07363d11fc81d8e7382dbcaa357c861569
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 17 Jul 2020 19:22:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151970 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151970/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  fb024b779336a0f73b3aee885b2ce082e812881f
baseline version:
 xen                  f8fe3c07363d11fc81d8e7382dbcaa357c861569

Last test of basis   151921  2020-07-15 14:09:51 Z    2 days
Testing same since   151970  2020-07-17 16:00:26 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   f8fe3c0736..fb024b7793  fb024b779336a0f73b3aee885b2ce082e812881f -> smoke


From xen-devel-bounces@lists.xenproject.org Fri Jul 17 22:22:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 17 Jul 2020 22:22:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jwYju-0007ze-0z; Fri, 17 Jul 2020 22:21:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=sKPa=A4=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jwYjt-0007zZ-4v
 for xen-devel@lists.xenproject.org; Fri, 17 Jul 2020 22:21:57 +0000
X-Inumbo-ID: eb851404-c87b-11ea-bca7-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id eb851404-c87b-11ea-bca7-bc764e2007e4;
 Fri, 17 Jul 2020 22:21:54 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=+pX21fXiNOpwyve2kpAGb8GkeZL/1+Oi2BXKVzuS/Uc=; b=2C6XjP30JViI6NY3B4m7y9S3r
 BDWpX1dbmHQ7nf16Mx54/bqn3Er3i1oSIe9KhD1aOQ22fKl9syFZc2MVJyNPccJ3QtQdh1luklzmh
 aYyoDHhDYEKyFG8jWA6R+l+7LziascK7oGwO06fj2NYmIpmzj1W0uStmm9GHyz5YvAeHE=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jwYjq-0001RO-0P; Fri, 17 Jul 2020 22:21:54 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jwYjp-0004Dt-KJ; Fri, 17 Jul 2020 22:21:53 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jwYjp-000751-Ji; Fri, 17 Jul 2020 22:21:53 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151962-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 151962: regressions - FAIL
X-Osstest-Failures: linux-linus:test-arm64-arm64-libvirt-xsm:guest-start/debian.repeat:fail:regression
 linux-linus:build-i386-pvops:kernel-build:fail:regression
 linux-linus:test-armhf-armhf-xl-vhd:leak-check/check:fail:regression
 linux-linus:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-examine:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-pair:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: linux=8882572675c1bb1cc544f4e229a11661f1fc52e4
X-Osstest-Versions-That: linux=1b5044021070efa3259f3e9548dc35d1eb6aa844
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 17 Jul 2020 22:21:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151962 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151962/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-libvirt-xsm 16 guest-start/debian.repeat fail REGR. vs. 151214
 build-i386-pvops              6 kernel-build             fail REGR. vs. 151214
 test-armhf-armhf-xl-vhd      18 leak-check/check         fail REGR. vs. 151214

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemut-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-examine       1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemut-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 151214
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 151214
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 151214
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 151214
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 151214
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 151214
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                8882572675c1bb1cc544f4e229a11661f1fc52e4
baseline version:
 linux                1b5044021070efa3259f3e9548dc35d1eb6aa844

Last test of basis   151214  2020-06-18 02:27:46 Z   29 days
Failing since        151236  2020-06-19 19:10:35 Z   28 days   44 attempts
Testing same since   151962  2020-07-17 08:38:34 Z    0 days    1 attempts

------------------------------------------------------------
770 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             fail    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         blocked 
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 41098 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Jul 18 02:35:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 18 Jul 2020 02:35:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jwchC-0005do-05; Sat, 18 Jul 2020 02:35:26 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OolH=A5=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jwchB-0005dj-27
 for xen-devel@lists.xenproject.org; Sat, 18 Jul 2020 02:35:25 +0000
X-Inumbo-ID: 53ea0bd0-c89f-11ea-96e2-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 53ea0bd0-c89f-11ea-96e2-12813bfff9fa;
 Sat, 18 Jul 2020 02:35:22 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=6YZ4PaY/GwDYgB99Nzvrdl5h7M7Lmm100ZAnzVhqeqY=; b=BHqwHAPywpyNr6kuVj2+Ds7yO
 hABhqOvVhSlrM1IgVH2H3Pks1/c2IFa2YayS5RzgVWsdYRuQnm0shmKvE5Ky5qiBPGtFvLIMuBVg6
 30xQlFkEmGv6cGBKOoEQrOgjlmmgPlKra9hw24gTVERu/WiI6VO1vbpqw9phgPaBMZQ6M=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jwch7-0000T9-6n; Sat, 18 Jul 2020 02:35:21 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jwch6-0004BI-SY; Sat, 18 Jul 2020 02:35:20 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jwch6-000589-Ru; Sat, 18 Jul 2020 02:35:20 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151972-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 151972: all pass - PUSHED
X-Osstest-Versions-This: ovmf=6ff53d2a13740e39dea110d6b3509c156c659586
X-Osstest-Versions-That: ovmf=21a23e6966c2eb597a8db98d6837a4c01b3cad4a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 18 Jul 2020 02:35:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151972 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151972/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 6ff53d2a13740e39dea110d6b3509c156c659586
baseline version:
 ovmf                 21a23e6966c2eb597a8db98d6837a4c01b3cad4a

Last test of basis   151959  2020-07-17 05:19:56 Z    0 days
Testing same since   151972  2020-07-17 16:51:16 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jian J Wang <jian.j.wang@intel.com>
  Laszlo Ersek <lersek@redhat.com>
  Yuwei Chen <yuwei.chen@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   21a23e6966..6ff53d2a13  6ff53d2a13740e39dea110d6b3509c156c659586 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Sat Jul 18 03:31:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 18 Jul 2020 03:31:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jwdZX-0002Cy-DY; Sat, 18 Jul 2020 03:31:35 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=7JXE=A5=m5p.com=ehem@srs-us1.protection.inumbo.net>)
 id 1jwdZW-0002Ct-54
 for xen-devel@lists.xen.org; Sat, 18 Jul 2020 03:31:34 +0000
X-Inumbo-ID: 2cda321a-c8a7-11ea-b7bb-bc764e2007e4
Received: from mailhost.m5p.com (unknown [74.104.188.4])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2cda321a-c8a7-11ea-b7bb-bc764e2007e4;
 Sat, 18 Jul 2020 03:31:33 +0000 (UTC)
Received: from m5p.com (mailhost.m5p.com [IPv6:2001:470:1f07:15ff:0:0:0:f7])
 by mailhost.m5p.com (8.15.2/8.15.2) with ESMTPS id 06I3VLlr088879
 (version=TLSv1.2 cipher=DHE-RSA-AES128-GCM-SHA256 bits=128 verify=NO);
 Fri, 17 Jul 2020 23:31:27 -0400 (EDT) (envelope-from ehem@m5p.com)
Received: (from ehem@localhost)
 by m5p.com (8.15.2/8.15.2/Submit) id 06I3VLqL088878;
 Fri, 17 Jul 2020 20:31:21 -0700 (PDT) (envelope-from ehem)
Date: Fri, 17 Jul 2020 20:31:21 -0700
From: Elliott Mitchell <ehem+xen@m5p.com>
To: xen-devel@lists.xen.org
Subject: [PATCH 1/2] Partially revert "Cross-compilation fixes."
Message-ID: <20200718033121.GA88869@mattapan.m5p.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
X-Spam-Status: No, score=0.4 required=10.0 tests=KHOP_HELO_FCRDNS autolearn=no
 autolearn_force=no version=3.4.4
X-Spam-Checker-Version: SpamAssassin 3.4.4 (2020-01-24) on mattapan.m5p.com
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: ian.jackson@eu.citrix.com, christian.lindig@citrix.com, wl@xen.org,
 dave@recoil.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

This partially reverts commit 16504669c5cbb8b195d20412aadc838da5c428f7.

Signed-off-by: Elliott Mitchell <ehem+xen@m5p.com>
---
Doesn't look like much of 16504669c5cbb8b195d20412aadc838da5c428f7
actually remains; most of it has been eroded by the passage of time.

Of the three subdirectories, Python and pygrub appear to build mostly
fine when cross-compiling.  The OCaml portion is being troublesome and
is going to cause bug reports elsewhere soon; however, it can already
be disabled by setting OCAML_TOOLS=n, so it shouldn't need this extra
form of disabling.
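[Editor's sketch, not part of the original patch: the guard being
removed compared the build architecture against the target architecture
and only enabled these subdirectories for native builds.  Re-expressed
in shell, with hypothetical XEN_COMPILE_ARCH / XEN_TARGET_ARCH values:]

```shell
# Illustrative sketch only: the cross-compile guard this patch removes,
# rewritten as shell.  The architecture values below are hypothetical,
# chosen to show the cross-compiling case.
XEN_COMPILE_ARCH=amd64   # architecture of the build host
XEN_TARGET_ARCH=arm64    # architecture we are building for

if [ "$XEN_COMPILE_ARCH" = "$XEN_TARGET_ARCH" ]; then
    echo "native build: python pygrub ocaml subdirs enabled"
else
    echo "cross build: subdirs skipped by the old guard"
fi
```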
---
 tools/Makefile | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/tools/Makefile b/tools/Makefile
index 7b1f6c4d28..930a533724 100644
--- a/tools/Makefile
+++ b/tools/Makefile
@@ -40,12 +40,9 @@ SUBDIRS-$(CONFIG_X86) += debugger/gdbsx
 SUBDIRS-$(CONFIG_X86) += debugger/kdd
 SUBDIRS-$(CONFIG_TESTS) += tests
 
-# These don't cross-compile
-ifeq ($(XEN_COMPILE_ARCH),$(XEN_TARGET_ARCH))
 SUBDIRS-y += python
 SUBDIRS-y += pygrub
 SUBDIRS-$(OCAML_TOOLS) += ocaml
-endif
 
 ifeq ($(CONFIG_RUMP),y)
 SUBDIRS-y := libs libxc xenstore
-- 
2.20.1



-- 
(\___(\___(\______          --=> 8-) EHM <=--          ______/)___/)___/)
 \BS (    |         ehem+sigmsg@m5p.com  PGP 87145445         |    )   /
  \_CS\   |  _____  -O #include <stddisclaimer.h> O-   _____  |   /  _/
8A19\___\_|_/58D2 7E3D DDF4 7BA6 <-PGP-> 41D1 B375 37D0 8714\_|_/___/5445




From xen-devel-bounces@lists.xenproject.org Sat Jul 18 04:18:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 18 Jul 2020 04:18:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jweIJ-0005k5-VM; Sat, 18 Jul 2020 04:17:51 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=7JXE=A5=m5p.com=ehem@srs-us1.protection.inumbo.net>)
 id 1jweIJ-0005k0-1K
 for xen-devel@lists.xen.org; Sat, 18 Jul 2020 04:17:51 +0000
X-Inumbo-ID: a34d57aa-c8ad-11ea-96f0-12813bfff9fa
Received: from mailhost.m5p.com (unknown [74.104.188.4])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a34d57aa-c8ad-11ea-96f0-12813bfff9fa;
 Sat, 18 Jul 2020 04:17:48 +0000 (UTC)
Received: from m5p.com (mailhost.m5p.com [IPv6:2001:470:1f07:15ff:0:0:0:f7])
 by mailhost.m5p.com (8.15.2/8.15.2) with ESMTPS id 06I3WgY1088890
 (version=TLSv1.2 cipher=DHE-RSA-AES128-GCM-SHA256 bits=128 verify=NO);
 Fri, 17 Jul 2020 23:32:48 -0400 (EDT) (envelope-from ehem@m5p.com)
Received: (from ehem@localhost)
 by m5p.com (8.15.2/8.15.2/Submit) id 06I3Wgfh088889;
 Fri, 17 Jul 2020 20:32:42 -0700 (PDT) (envelope-from ehem)
Date: Fri, 17 Jul 2020 20:32:42 -0700
From: Elliott Mitchell <ehem+xen@m5p.com>
To: xen-devel@lists.xen.org
Subject: [PATCH 2/2] tools/ocaml: Default to useful build output
Message-ID: <20200718033242.GB88869@mattapan.m5p.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
X-Spam-Status: No, score=0.4 required=10.0 tests=KHOP_HELO_FCRDNS autolearn=no
 autolearn_force=no version=3.4.4
X-Spam-Checker-Version: SpamAssassin 3.4.4 (2020-01-24) on mattapan.m5p.com
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: ian.jackson@eu.citrix.com, christian.lindig@citrix.com, wl@xen.org,
 dave@recoil.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

While hiding details of build output looks pretty to some, defaulting to
doing so deviates from the rest of Xen.  Switch the OCAML tools to match
everything else.

Signed-off-by: Elliott Mitchell <ehem+xen@m5p.com>
---

Time for a bit of controversy.

Presently the OCAML tools build mismatches the rest of the Xen build.
My choice is to default to verbose output.  While some may like beauty
in their build output, function is far more important.

If someone wants to take on the task of making Xen's build output
consistently beautiful, I invite them to do so.  Then call the police
and tell them you're being robbed.
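[Editor's sketch, not part of the original patch: the V=/BUILD_VERBOSE
idiom the diff below changes, re-expressed as shell.  With the patch,
verbose output becomes the default, so the real command line is shown
instead of a prettified summary:]

```shell
# Illustrative shell sketch of the make verbosity idiom (not the actual
# make logic).  E prints the pretty summary line; Q hides or shows the
# real command.  ':' here stands in for make's '@' echo suppression.
V=${V:-1}                      # with the patch: default to verbose
if [ "$V" = "1" ]; then
    E=true                     # 'true' swallows the pretty summary line
    Q=""                       # empty prefix: show the real command
else
    E=echo
    Q=":"                      # ':' discards the real command line
fi
$E "  OCAMLC   foo.cmo"        # summary line (silent when verbose)
$Q echo "ocamlc -c foo.ml"     # real command line (shown when verbose)
```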
---
 tools/ocaml/Makefile.rules | 19 +++++++++++--------
 1 file changed, 11 insertions(+), 8 deletions(-)

diff --git a/tools/ocaml/Makefile.rules b/tools/ocaml/Makefile.rules
index a893c42b43..abfbc64ce0 100644
--- a/tools/ocaml/Makefile.rules
+++ b/tools/ocaml/Makefile.rules
@@ -1,17 +1,20 @@
 ifdef V
-  ifeq ("$(origin V)", "command line")
-    BUILD_VERBOSE = $(V)
-  endif
+	ifeq ("$(origin V)", "command line")
+		BUILD_VERBOSE = $(V)
+	endif
+else
+	V := 1
+	BUILD_VERBOSE := 1
 endif
 ifndef BUILD_VERBOSE
-  BUILD_VERBOSE = 0
+	BUILD_VERBOSE := 0
 endif
 ifeq ($(BUILD_VERBOSE),1)
-  E = @true
-  Q =
+	E := @true
+	Q :=
 else
-  E = @echo
-  Q = @
+	E := @echo
+	Q := @
 endif
 
 .NOTPARALLEL:
-- 
2.20.1



-- 
(\___(\___(\______          --=> 8-) EHM <=--          ______/)___/)___/)
 \BS (    |         ehem+sigmsg@m5p.com  PGP 87145445         |    )   /
  \_CS\   |  _____  -O #include <stddisclaimer.h> O-   _____  |   /  _/
8A19\___\_|_/58D2 7E3D DDF4 7BA6 <-PGP-> 41D1 B375 37D0 8714\_|_/___/5445




From xen-devel-bounces@lists.xenproject.org Sat Jul 18 08:14:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 18 Jul 2020 08:14:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jwhzF-0001NR-M5; Sat, 18 Jul 2020 08:14:25 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OolH=A5=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jwhzE-0001N1-AP
 for xen-devel@lists.xenproject.org; Sat, 18 Jul 2020 08:14:24 +0000
X-Inumbo-ID: ab5b9d28-c8ce-11ea-9708-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ab5b9d28-c8ce-11ea-9708-12813bfff9fa;
 Sat, 18 Jul 2020 08:14:15 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=itrypWBhsiIdhCsU5q9eFfxQOXGLQgWPvu6oKmDO6gM=; b=My5DzPoZFUVe6KLdQ+EFkNYa+
 BVr2wsIqLCRW3b+DZO2clxjSuC9kU4mdLo8f9sYvIeBvkTXUDwzq/TkabYSMoXKUjMSdm7nM90+Wr
 JtU36fC4NKsT050/QRXrLHdWlLBaiHtlV3WUJ6HgXdtqHQuv0CDKZ8tfNP5oxEDLW8hJQ=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jwhz4-0008Of-Sc; Sat, 18 Jul 2020 08:14:14 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jwhz4-0000o8-Gr; Sat, 18 Jul 2020 08:14:14 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jwhz4-0008Gy-GF; Sat, 18 Jul 2020 08:14:14 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151968-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 151968: regressions - FAIL
X-Osstest-Failures: qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:redhat-install:fail:regression
 qemu-mainline:test-amd64-i386-freebsd10-i386:guest-start:fail:regression
 qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-start:fail:regression
 qemu-mainline:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
 qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install:fail:regression
 qemu-mainline:test-armhf-armhf-xl-vhd:leak-check/check:fail:regression
 qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:redhat-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:windows-install:fail:regression
 qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: qemuu=b7bda69c4ef46c57480f6e378923f5215b122778
X-Osstest-Versions-That: qemuu=9e3903136d9acde2fb2dd9e967ba928050a6cb4a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 18 Jul 2020 08:14:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151968 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151968/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemuu-ovmf-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-qemuu-nested-intel 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-ovmf-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-debianhvm-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-qemuu-rhel6hvm-intel 10 redhat-install   fail REGR. vs. 151065
 test-amd64-i386-freebsd10-i386 11 guest-start            fail REGR. vs. 151065
 test-amd64-i386-freebsd10-amd64 11 guest-start           fail REGR. vs. 151065
 test-arm64-arm64-libvirt-xsm  7 xen-boot                 fail REGR. vs. 151065
 test-amd64-amd64-qemuu-nested-amd 10 debian-hvm-install  fail REGR. vs. 151065
 test-armhf-armhf-xl-vhd      18 leak-check/check         fail REGR. vs. 151065
 test-amd64-i386-qemuu-rhel6hvm-amd 10 redhat-install     fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-win7-amd64 10 windows-install  fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-win7-amd64 10 windows-install   fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-ws16-amd64 10 windows-install  fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-ws16-amd64 10 windows-install   fail REGR. vs. 151065

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 151065
 test-armhf-armhf-xl-rtds     16 guest-start/debian.repeat    fail  like 151065
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 151065
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                b7bda69c4ef46c57480f6e378923f5215b122778
baseline version:
 qemuu                9e3903136d9acde2fb2dd9e967ba928050a6cb4a

Last test of basis   151065  2020-06-12 22:27:51 Z   35 days
Failing since        151101  2020-06-14 08:32:51 Z   33 days   46 attempts
Testing same since   151968  2020-07-17 14:52:32 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Ahmed Karaman <ahmedkhaledkaraman@gmail.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.m.mail@gmail.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Richardson <Alexander.Richardson@cl.cam.ac.uk>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Boettcher <alexander.boettcher@genode-labs.com>
  Alexander Bulekov <alxndr@bu.edu>
  Alexandre Mergnat <amergnat@baylibre.com>
  Alexey Krasikov <alex-krasikov@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Allan Peramaki <aperamak@pp1.inet.fi>
  Andrew <andrew@daynix.com>
  Andrew Jones <drjones@redhat.com>
  Andrew Melnychenko <andrew@daynix.com>
  Ani Sinha <ani.sinha@nutanix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Ard Biesheuvel <ardb@kernel.org>
  Artyom Tarasenko <atar4qemu@gmail.com>
  Atish Patra <atish.patra@wdc.com>
  Aurelien Jarno <aurelien@aurel32.net>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Basil Salman <basil@daynix.com>
  Basil Salman <bsalman@redhat.com>
  Beata Michalska <beata.michalska@linaro.org>
  Bin Meng <bin.meng@windriver.com>
  Bin Meng <bmeng.cn@gmail.com>
  Cameron Esfahani <dirty@apple.com>
  Catherine A. Frederick <chocola@animebitch.es>
  Cathy Zhang <cathy.zhang@intel.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chenyi Qiang <chenyi.qiang@intel.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christophe de Dinechin <dinechin@redhat.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Cindy Lu <lulu@redhat.com>
  Claudio Fontana <cfontana@suse.de>
  Cleber Rosa <crosa@redhat.com>
  Colin Xu <colin.xu@intel.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniele Buono <dbuono@linux.vnet.ibm.com>
  David Carlier <devnexen@gmail.com>
  David Edmondson <david.edmondson@oracle.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Denis V. Lunev <den@openvz.org>
  Derek Su <dereksu@qnap.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Ed Robbins <E.J.C.Robbins@kent.ac.uk>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Emilio G. Cota <cota@braap.org>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  erik-smit <erik.lucas.smit@gmail.com>
  Evgeny Yakovlev <eyakovlev@virtuozzo.com>
  fangying <fangying1@huawei.com>
  Farhan Ali <alifm@linux.ibm.com>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frank Chang <frank.chang@sifive.com>
  Geoffrey McRae <geoff@hostfission.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Giuseppe Musacchio <thatlemon@gmail.com>
  Greg Kurz <groug@kaod.org>
  Guenter Roeck <linux@roeck-us.net>
  Gustavo Romero <gromero@linux.ibm.com>
  Halil Pasic <pasic@linux.ibm.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Howard Spoelstra <hsp.cat7@gmail.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Ian Jiang <ianjiang.ict@gmail.com>
  Igor Mammedov <imammedo@redhat.com>
  Jan Kiszka <jan.kiszka@siemens.com>
  Janne Grunau <j@jannau.net>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason Wang <jasowang@redhat.com>
  Jay Zhou <jianjay.zhou@huawei.com>
  Jean-Christophe Dubois <jcd@tribudubois.net>
  Jessica Clarke <jrtc27@jrtc27.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jingqi Liu <jingqi.liu@intel.com>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Joseph Myers <joseph@codesourcery.com>
  Josh Kunz <jkz@google.com>
  Juan Quintela <quintela@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Keqian Zhu <zhukeqian1@huawei.com>
  Kevin Wolf <kwolf@redhat.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Leif Lindholm <leif@nuviainc.com>
  Leonid Bloch <lb.workbox@gmail.com>
  Leonid Bloch <lbloch@janustech.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Qiang <liq3ea@gmail.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  lichun <lichun@ruijie.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  Like Xu <like.xu@linux.intel.com>
  Lingfeng Yang <lfy@google.com>
  Lingshan zhu <lingshan.zhu@intel.com>
  Liran Alon <liran.alon@oracle.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Lukas Straub <lukasstraub2@web.de>
  Luwei Kang <luwei.kang@intel.com>
  Maciej S. Szmigiero <maciej.szmigiero@oracle.com>
  Magnus Damm <magnus.damm@gmail.com>
  Mao Zhongyi <maozhongyi@cmss.chinamobile.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marcelo Tosatti <mtosatti@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Mario Smarduch <msmarduch@digitalocean.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Masahiro Yamada <masahiroy@kernel.org>
  Matus Kysel <mkysel@tachyum.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Maxime Coquelin <maxime.coquelin@redhat.com>
  Menno Lageman <menno.lageman@oracle.com>
  Michael Rolnik <mrolnik@gmail.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michele Denber <denber@mindspring.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Olaf Hering <olaf@aepfle.de>
  Pan Nengyuan <pannengyuan@huawei.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Radoslaw Biernacki <rad@semihalf.com>
  Raphael Norwitz <raphael.norwitz@nutanix.com>
  Richard Henderson <richard.henderson@linaro.org>
  Richard W.M. Jones <rjones@redhat.com>
  Riku Voipio <riku.voipio@linaro.org>
  Robert Foley <robert.foley@linaro.org>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Roman Kagan <rkagan@virtuozzo.com>
  Roman Kagan <rvkagan@yandex-team.ru>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Sarah Harris <S.E.Harris@kent.ac.uk>
  Sebastian Rasmussen <sebras@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Tao Xu <tao3.xu@intel.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tiwei Bie <tiwei.bie@intel.com>
  Tong Ho <tong.ho@xilinx.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  WangBowen <bowen.wang@intel.com>
  Wei Huang <wei.huang2@amd.com>
  Wei Wang <wei.w.wang@intel.com>
  Wentong Wu <wentong.wu@intel.com>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xie Yongji <xieyongji@bytedance.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Ying Fang <fangying1@huawei.com>
  Yoshinori Sato <ysato@users.sourceforge.jp>
  Yuri Benditovich <yuri.benditovich@daynix.com>
  Zhang Chen <chen.zhang@intel.com>
  Zheng Chuan <zhengchuan@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 29212 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Jul 18 09:50:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 18 Jul 2020 09:50:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jwjTk-0000Xc-S5; Sat, 18 Jul 2020 09:50:00 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tMbn=A5=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1jwjTi-0000XX-VF
 for xen-devel@lists.xenproject.org; Sat, 18 Jul 2020 09:49:59 +0000
X-Inumbo-ID: 08d51a62-c8dc-11ea-bca7-bc764e2007e4
Received: from EUR04-HE1-obe.outbound.protection.outlook.com (unknown
 [40.107.7.84]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 08d51a62-c8dc-11ea-bca7-bc764e2007e4;
 Sat, 18 Jul 2020 09:49:56 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com; 
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ayHNGyMcSChVd48UGuW8lKhQv9KuZrlATPwlK5jI0eY=;
 b=fowoEiVVE5tPkGWq/e04rQl6c/J1N9lfJzwWWOc+eXILXV8F3V9nnhRKZX9gGCsuzfCLLxrksfkTAAmS3PXzFapRUroepib0vUMzOyuDOHuVcnX2MUkF5JXdmlDDJJoUUwgLnxhhAWtZbjomPFbe/Ku/QXbXW0TYms78bS2YQgc=
Received: from DB7PR03CA0080.eurprd03.prod.outlook.com (2603:10a6:10:72::21)
 by DBBPR08MB4362.eurprd08.prod.outlook.com (2603:10a6:10:ca::23) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3195.18; Sat, 18 Jul
 2020 09:49:54 +0000
Received: from DB5EUR03FT061.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:72:cafe::44) by DB7PR03CA0080.outlook.office365.com
 (2603:10a6:10:72::21) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3195.17 via Frontend
 Transport; Sat, 18 Jul 2020 09:49:54 +0000
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org;
 dmarc=bestguesspass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DB5EUR03FT061.mail.protection.outlook.com (10.152.21.234) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3195.18 via Frontend Transport; Sat, 18 Jul 2020 09:49:53 +0000
Received: ("Tessian outbound 2ae7cfbcc26c:v62");
 Sat, 18 Jul 2020 09:49:53 +0000
X-CheckRecipientChecked: true
X-CR-MTA-CID: 69eefec932568b59
X-CR-MTA-TID: 64aa7808
Received: from e1f9c8ece4aa.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 034F86B5-542A-4F5C-85A1-73E26A6ED3D8.1; 
 Sat, 18 Jul 2020 09:49:48 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id e1f9c8ece4aa.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Sat, 18 Jul 2020 09:49:48 +0000
Received: from AM0PR08MB3682.eurprd08.prod.outlook.com (2603:10a6:208:fb::27)
 by AM0PR08MB4561.eurprd08.prod.outlook.com (2603:10a6:208:12d::29)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3174.22; Sat, 18 Jul
 2020 09:49:45 +0000
Received: from AM0PR08MB3682.eurprd08.prod.outlook.com
 ([fe80::b572:771:2750:14ed]) by AM0PR08MB3682.eurprd08.prod.outlook.com
 ([fe80::b572:771:2750:14ed%5]) with mapi id 15.20.3174.029; Sat, 18 Jul 2020
 09:49:44 +0000
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: =?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>
Subject: Re: RFC: PCI devices passthrough on Arm design proposal
Thread-Topic: RFC: PCI devices passthrough on Arm design proposal
Thread-Index: AQHWW4kYTVU0hTDyYEitKlUuU5vZlKkKf2uAgAACLICAAR7YAIAAIxaAgAATSQCAAA4jAIAACVuAgAEsKIA=
Date: Sat, 18 Jul 2020 09:49:43 +0000
Message-ID: <C150EBE5-5687-4C7D-9EDB-5E4B52782A45@arm.com>
References: <3F6E40FB-79C5-4AE8-81CA-E16CA37BB298@arm.com>
 <BD475825-10F6-4538-8294-931E370A602C@arm.com>
 <E9CBAA57-5EF3-47F9-8A40-F5D7816DB2A4@arm.com>
 <20200717111644.GS7191@Air-de-Roger>
 <3B8A1B9D-A101-4937-AC42-4F62BE7E677C@arm.com>
 <20200717143120.GT7191@Air-de-Roger>
 <8AF78FF1-C389-44D8-896B-B95C1A0560E2@arm.com>
 <20200717155525.GY7191@Air-de-Roger>
In-Reply-To: <20200717155525.GY7191@Air-de-Roger>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
Authentication-Results-Original: citrix.com; dkim=none (message not signed)
 header.d=none;citrix.com; dmarc=none action=none header.from=arm.com;
x-originating-ip: [2a01:e0a:13:6f10:f1a2:5155:728:f8e7]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 37629de2-53aa-4c88-e403-08d82affec1b
x-ms-traffictypediagnostic: AM0PR08MB4561:|DBBPR08MB4362:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS: <DBBPR08MB43623C284606CF617B3E1A4E9D7D0@DBBPR08MB4362.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:10000;OLM:10000;
X-MS-Exchange-SenderADCheck: 1
Content-Type: text/plain; charset="utf-8"
Content-ID: <71CA4FD7B3F0CA40B01E2A4428CEE149@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR08MB4561
Original-Authentication-Results: citrix.com; dkim=none (message not signed)
 header.d=none;citrix.com; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped: DB5EUR03FT061.eop-EUR03.prod.protection.outlook.com
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 Jul 2020 09:49:53.9123 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 37629de2-53aa-4c88-e403-08d82affec1b
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d; Ip=[63.35.35.123];
 Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource: DB5EUR03FT061.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBBPR08MB4362
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 nd <nd@arm.com>, Rahul Singh <Rahul.Singh@arm.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien.grall.oss@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> On 17 Jul 2020, at 17:55, Roger Pau Monné <roger.pau@citrix.com> wrote:
> 
> On Fri, Jul 17, 2020 at 03:21:57PM +0000, Bertrand Marquis wrote:
>>> On 17 Jul 2020, at 16:31, Roger Pau Monné <roger.pau@citrix.com> wrote:
>>> On Fri, Jul 17, 2020 at 01:22:19PM +0000, Bertrand Marquis wrote:
>>>>> On 17 Jul 2020, at 13:16, Roger Pau Monné <roger.pau@citrix.com> wrote:
>>>>>> * The ACS capability is disabled for ARM as of now, as devices are
>>>>>> not accessible after enabling it.
>>>>>> * A Dom0less implementation will require the capacity inside Xen
>>>>>> to discover the PCI devices (without depending on Dom0 to declare
>>>>>> them to Xen).
>>>>> 
>>>>> I assume the firmware will properly initialize the host bridge and
>>>>> configure the resources for each device, so that Xen just has to
>>>>> walk the PCI space and find the devices.
>>>>> 
>>>>> TBH that would be my preferred method, because then you can get rid
>>>>> of the hypercall.
>>>>> 
>>>>> Is there any way for Xen to know whether the host bridge is properly
>>>>> set up and thus the PCI bus can be scanned?
>>>>> 
>>>>> That way Arm could do something similar to x86, where Xen will scan
>>>>> the bus and discover devices, but you could still provide the
>>>>> hypercall in case the bus cannot be scanned by Xen (because it
>>>>> hasn't been set up).
>>>> 
>>>> That is definitely the idea: rely by default on the firmware doing
>>>> this properly. I am not sure whether a proper enumeration could be
>>>> detected reliably in all cases, so it would make sense to rely on
>>>> Dom0 enumeration when a Xen command line argument is passed, as
>>>> explained in one of Rahul's mails.
>>> 
>>> I assume Linux somehow knows when it needs to initialize the PCI root
>>> complex before attempting to access the bus. Would it be possible to
>>> add this logic to Xen so it can figure out on its own whether it's
>>> safe to scan the PCI bus, or whether it needs to wait for the
>>> hardware domain to report the devices present?
>> 
>> That might be possible to do, but it will anyway require a command
>> line argument to force Xen to let the hardware domain do the
>> initialization in case Xen's detection does not work properly.
>> In the case where there is a Dom0, I would rather expect that we let
>> it do the initialization all the time, unless the user tells us via a
>> command line argument that the current one is correct and shall be
>> used.
> 
> FTR, on x86 we let dom0 enumerate and probe the PCI devices as it sees
> fit, but vPCI traps have already been set on all the detected devices,
> and vPCI already supports letting dom0 size the BARs, or even change
> their position (theoretically; I haven't seen a dom0 change the
> position of the BARs yet).
> 
> So on Arm you could also let dom0 do all of this; the question is
> whether vPCI traps could be set earlier (when dom0 is created) if the
> PCI bus has been initialized and can be scanned.
> 
> I have no idea however how bare metal Linux on Arm figures out the stat
ZSBvZiB0aGUgUENJIGJ1cywgb3IgaWYgaXQncyBzb21ldGhpbmcgdGhhdCdzIHBhc3NlZCBvbiB0
aGUgRFQsIG9yDQo+IHNpZ25hbGVkIHNvbWVob3cgZnJvbSB0aGUgZmlybXdhcmUvYm9vdGxvYWRl
ci4NCg0KVGhpcyBpcyBkZWZpbml0ZWx5IHNvbWV0aGluZyB3ZSB3aWxsIGNoZWNrIGFuZCB3ZSB3
aWxsIGFsc28gdHJ5IHRvIGtlZXAgdGhlIHNhbWUNCmJlaGF2aW91ciBhcyB4ODYgdW5sZXNzIHRo
aXMgaXMgbm90IHBvc3NpYmxlLiBJIHdvdWxkIG5vdCBzZWUgd2h5IHdlIGNvdWxkIG5vdCANCnNl
dCB0aGUgdlBDSSB0cmFwcyBlYXJsaWVyIGFuZCBqdXN0IHJlbGF5IHRoZSB3cml0ZXMgdG8gdGhl
IGhhcmR3YXJlIGJ1dCBkZXRlY3QNCmlmIEJBUnMgYXJlIGNoYW5nZWQuDQoNCj4gDQo+Pj4+PiBU
aGlzIHNob3VsZCBiZSBsaW1pdGVkIHRvIHJlYWQtb25seSBhY2Nlc3NlcyBpbiBvcmRlciB0byBi
ZSBzYWZlLg0KPj4+Pj4gDQo+Pj4+PiBFbXVsYXRpbmcgYSBQQ0kgYnJpZGdlIGluIFhlbiB1c2lu
ZyB2UENJIHNob3VsZG4ndCBiZSB0aGF0DQo+Pj4+PiBjb21wbGljYXRlZCwgc28geW91IGNvdWxk
IGxpa2VseSByZXBsYWNlIHRoZSByZWFsIGJyaWRnZXMgd2l0aA0KPj4+Pj4gZW11bGF0ZWQgb25l
cy4gT3IgZXZlbiBwcm92aWRlIGEgZmFrZSB0b3BvbG9neSB0byB0aGUgZ3Vlc3QgdXNpbmcgYW4N
Cj4+Pj4+IGVtdWxhdGVkIGJyaWRnZS4NCj4+Pj4gDQo+Pj4+IEp1c3Qgc2hvd2luZyBhbGwgYnJp
ZGdlcyBhbmQga2VlcGluZyB0aGUgaGFyZHdhcmUgdG9wb2xvZ3kgaXMgdGhlIHNpbXBsZXN0DQo+
Pj4+IHNvbHV0aW9uIGZvciBub3cuIEJ1dCBtYXliZSBzaG93aW5nIGEgZGlmZmVyZW50IHRvcG9s
b2d5IGFuZCBvbmx5IGZha2UNCj4+Pj4gYnJpZGdlcyBjb3VsZCBtYWtlIHNlbnNlIGFuZCBiZSBp
bXBsZW1lbnRlZCBpbiB0aGUgZnV0dXJlLg0KPj4+IA0KPj4+IEFjay4gSSd2ZSBhbHNvIGhlYXJk
IHJ1bW9ycyBvZiBYZW4gb24gQXJtIHBlb3BsZSBiZWluZyB2ZXJ5IGludGVyZXN0ZWQNCj4+PiBp
biBWaXJ0SU8gc3VwcG9ydCwgaW4gd2hpY2ggY2FzZSB5b3UgbWlnaHQgZXhwb3NlIGJvdGggZnVs
bHkgZW11bGF0ZWQNCj4+PiBWaXJ0SU8gZGV2aWNlcyBhbmQgUENJIHBhc3N0aHJvdWdoIGRldmlj
ZXMgb24gdGhlIFBDSSBidXMsIHNvIGl0IHdvdWxkDQo+Pj4gYmUgZ29vZCB0byBzcGVuZCBzb21l
IHRpbWUgdGhpbmtpbmcgaG93IHRob3NlIHdpbGwgZml0IHRvZ2V0aGVyLg0KPj4+IA0KPj4+IFdp
bGwgeW91IGFsbG9jYXRlIGEgc2VwYXJhdGUgc2VnbWVudCB1bnVzZWQgYnkgaGFyZHdhcmUgdG8g
ZXhwb3NlIHRoZQ0KPj4+IGZ1bGx5IGVtdWxhdGVkIFBDSSBkZXZpY2VzIChWaXJ0SU8pPw0KPj4+
IA0KPj4+IFdpbGwgT1NlcyBzdXBwb3J0IGhhdmluZyBzZXZlcmFsIHNlZ21lbnRzPw0KPj4+IA0K
Pj4+IElmIG5vdCB5b3UgbGlrZWx5IG5lZWQgdG8gaGF2ZSBlbXVsYXRlZCBicmlkZ2VzIHNvIHRo
YXQgeW91IGNhbiBhZGp1c3QNCj4+PiB0aGUgYnJpZGdlIHdpbmRvdyBhY2NvcmRpbmdseSB0byBm
aXQgdGhlIHBhc3N0aHJvdWdoIGFuZCB0aGUgZW11bGF0ZWQNCj4+PiBNTUlPIHNwYWNlLCBhbmQg
bGlrZWx5IGJlIGFibGUgdG8gZXhwb3NlIHBhc3N0aHJvdWdoIGRldmljZXMgdXNpbmcgYQ0KPj4+
IGRpZmZlcmVudCB0b3BvbG9neSB0aGFuIHRoZSBob3N0IG9uZS4NCj4+IA0KPj4gSG9uZXN0bHkg
dGhpcyBpcyBub3Qgc29tZXRoaW5nIHdlIGNvbnNpZGVyZWQuIEkgd2FzIG1vcmUgdGhpbmtpbmcg
dGhhdA0KPj4gdGhpcyB1c2UgY2FzZSB3b3VsZCBiZSBoYW5kbGVkIGJ5IGNyZWF0aW5nIGFuIG90
aGVyIFZQQ0kgYnVzIGRlZGljYXRlZA0KPj4gdG8gdGhvc2Uga2luZCBvZiBkZXZpY2VzIGluc3Rl
YWQgb2YgbWl4aW5nIHBoeXNpY2FsIGFuZCB2aXJ0dWFsIGRldmljZXMuDQo+IA0KPiBKdXN0IG1l
bnRpb25pbmcgaXQgYW5kIHlvdXIgcGxhbnMgd2hlbiBndWVzdHMgbWlnaHQgYWxzbyBoYXZlIGZ1
bGx5DQo+IGVtdWxhdGVkIGRldmljZXMgb24gdGhlIFBDSSBidXMgd291bGQgYmUgcmVsZXZhbnQg
SSB0aGluay4NCg0KV2Ugd2lsbCBhZGQgdGhpcy4NCg0KPiANCj4gQW55d2F5LCBJIGRvbid0IHRo
aW5rIGl0J3Mgc29tZXRoaW5nIG1hbmRhdG9yeSBoZXJlLCBhcyBmcm9tIGEgZ3Vlc3QNCj4gUG9W
IGhvdyB3ZSBleHBvc2UgUENJIGRldmljZXMgc2hvdWxkbid0IG1hdHRlciB0aGF0IG11Y2gsIGFz
IGxvbmcgYXMNCj4gaXQncyBkb25lIGluIGEgc3BlYyBjb21wbGlhbnQgd2F5Lg0KPiANCj4gU28g
eW91IGNhbiBzdGFydCB3aXRoIHRoaXMgYXBwcm9hY2ggaWYgaXQncyBlYXNpZXIsIEkganVzdCB3
YW50ZWQgdG8NCj4gbWFrZSBzdXJlIHlvdSBoYXZlIGluIG1pbmQgdGhhdCBhdCBzb21lIHBvaW50
IEFybSBndWVzdHMgbWlnaHQgYWxzbw0KPiByZXF1aXJlIGZ1bGx5IGVtdWxhdGVkIFBDSSBkZXZp
Y2VzIHNvIHRoYXQgeW91IGRvbid0IHBhaW50IHlvdXJzZWx2ZXMNCj4gaW4gYSBjb3JuZXIuDQoN
CkRlZmluaXRlbHkgdGhhdOKAmXMgbm90IHNvbWV0aGluZyB3ZSBkaWQgdGhpbmsgb2YgYW5kIHRo
YW5rcyBmb3IgdGhlIHJlbWFyaw0KYXMgd2UgbmVlZCB0byBrZWVwIHRoaXMgaW4gbWluZC4NCg0K
PiANCj4+PiANCj4+Pj4+IA0KPj4+Pj4+IA0KPj4+Pj4+ICMgRW11bGF0ZWQgUENJIGRldmljZSB0
cmVlIG5vZGUgaW4gbGlieGw6DQo+Pj4+Pj4gDQo+Pj4+Pj4gTGlieGwgaXMgY3JlYXRpbmcgYSB2
aXJ0dWFsIFBDSSBkZXZpY2UgdHJlZSBub2RlIGluIHRoZSBkZXZpY2UgdHJlZQ0KPj4+Pj4+IHRv
IGVuYWJsZSB0aGUgZ3Vlc3QgT1MgdG8gZGlzY292ZXIgdGhlIHZpcnR1YWwgUENJIGR1cmluZyBn
dWVzdA0KPj4+Pj4+IGJvb3QuIFdlIGludHJvZHVjZWQgdGhlIG5ldyBjb25maWcgb3B0aW9uIFt2
cGNpPSJwY2lfZWNhbSJdIGZvcg0KPj4+Pj4+IGd1ZXN0cy4gV2hlbiB0aGlzIGNvbmZpZyBvcHRp
b24gaXMgZW5hYmxlZCBpbiBhIGd1ZXN0IGNvbmZpZ3VyYXRpb24sDQo+Pj4+Pj4gYSBQQ0kgZGV2
aWNlIHRyZWUgbm9kZSB3aWxsIGJlIGNyZWF0ZWQgaW4gdGhlIGd1ZXN0IGRldmljZSB0cmVlLg0K
Pj4+Pj4+IA0KPj4+Pj4+IEEgbmV3IGFyZWEgaGFzIGJlZW4gcmVzZXJ2ZWQgaW4gdGhlIGFybSBn
dWVzdCBwaHlzaWNhbCBtYXAgYXQgd2hpY2gNCj4+Pj4+PiB0aGUgVlBDSSBidXMgaXMgZGVjbGFy
ZWQgaW4gdGhlIGRldmljZSB0cmVlIChyZWcgYW5kIHJhbmdlcw0KPj4+Pj4+IHBhcmFtZXRlcnMg
b2YgdGhlIG5vZGUpLiBBIHRyYXAgaGFuZGxlciBmb3IgdGhlIFBDSSBFQ0FNIGFjY2VzcyBmcm9t
DQo+Pj4+Pj4gZ3Vlc3QgaGFzIGJlZW4gcmVnaXN0ZXJlZCBhdCB0aGUgZGVmaW5lZCBhZGRyZXNz
IGFuZCByZWRpcmVjdHMNCj4+Pj4+PiByZXF1ZXN0cyB0byB0aGUgVlBDSSBkcml2ZXIgaW4gWGVu
Lg0KPj4+Pj4gDQo+Pj4+PiBDYW4ndCB5b3UgZGVkdWNlIHRoZSByZXF1aXJlbWVudCBvZiBzdWNo
IERUIG5vZGUgYmFzZWQgb24gdGhlIHByZXNlbmNlDQo+Pj4+PiBvZiBhICdwY2k9JyBvcHRpb24g
aW4gdGhlIHNhbWUgY29uZmlnIGZpbGU/DQo+Pj4+PiANCj4+Pj4+IEFsc28gSSB3b3VsZG4ndCBk
aXNjYXJkIHRoYXQgaW4gdGhlIGZ1dHVyZSB5b3UgbWlnaHQgd2FudCB0byB1c2UNCj4+Pj4+IGRp
ZmZlcmVudCBlbXVsYXRvcnMgZm9yIGRpZmZlcmVudCBkZXZpY2VzLCBzbyBpdCBtaWdodCBiZSBo
ZWxwZnVsIHRvDQo+Pj4+PiBpbnRyb2R1Y2Ugc29tZXRoaW5nIGxpa2U6DQo+Pj4+PiANCj4+Pj4+
IHBjaSA9IFsgJzA4OjAwLjAsYmFja2VuZD12cGNpJywgJzA5OjAwLjAsYmFja2VuZD14ZW5wdCcs
ICcwYTowMC4wLGJhY2tlbmQ9cWVtdScsIC4uLiBdDQo+Pj4+PiANCj4+Pj4+IEZvciB0aGUgdGlt
ZSBiZWluZyBBcm0gd2lsbCByZXF1aXJlIGJhY2tlbmQ9dnBjaSBmb3IgYWxsIHRoZSBwYXNzZWQN
Cj4+Pj4+IHRocm91Z2ggZGV2aWNlcywgYnV0IEkgd291bGRuJ3QgcnVsZSBvdXQgdGhpcyBjaGFu
Z2luZyBpbiB0aGUgZnV0dXJlLg0KPj4+PiANCj4+Pj4gV2UgbmVlZCBpdCBmb3IgdGhlIGNhc2Ug
d2hlcmUgbm8gZGV2aWNlIGlzIGRlY2xhcmVkIGluIHRoZSBjb25maWcgZmlsZSBhbmQgdGhlIHVz
ZXINCj4+Pj4gd2FudHMgdG8gYWRkIGRldmljZXMgdXNpbmcgeGwgbGF0ZXIuIEluIHRoaXMgY2Fz
ZSB3ZSBtdXN0IGhhdmUgdGhlIERUIG5vZGUgZm9yIGl0DQo+Pj4+IHRvIHdvcmsuIA0KPj4+IA0K
Pj4+IFRoZXJlJ3MgYSBwYXNzdGhyb3VnaCB4bC5jZmcgb3B0aW9uIGZvciB0aGF0IGFscmVhZHks
IHNvIHRoYXQgaWYgeW91DQo+Pj4gZG9uJ3Qgd2FudCB0byBhZGQgYW55IFBDSSBwYXNzdGhyb3Vn
aCBkZXZpY2VzIGF0IGNyZWF0aW9uIHRpbWUgYnV0DQo+Pj4gcmF0aGVyIGhvdHBsdWcgdGhlbSB5
b3UgY2FuIHNldDoNCj4+PiANCj4+PiBwYXNzdGhyb3VnaD1lbmFibGVkDQo+Pj4gDQo+Pj4gQW5k
IGl0IHNob3VsZCBzZXR1cCB0aGUgZG9tYWluIHRvIGJlIHByZXBhcmVkIHRvIHN1cHBvcnQgaG90
DQo+Pj4gcGFzc3Rocm91Z2gsIGluY2x1ZGluZyB0aGUgSU9NTVUgWzBdLg0KPj4gDQo+PiBJc27i
gJl0IHRoaXMgb3B0aW9uIGNvdmVyaW5nIG1vcmUgdGhlbiBQQ0kgcGFzc3Rocm91Z2ggPw0KPj4g
DQo+PiBMb3RzIG9mIEFybSBwbGF0Zm9ybSBkbyBub3QgaGF2ZSBhIFBDSSBidXMgYXQgYWxsLCBz
byBmb3IgdGhvc2UNCj4+IGNyZWF0aW5nIGEgVlBDSSBidXMgd291bGQgYmUgcG9pbnRsZXNzLiBC
dXQgeW91IG1pZ2h0IG5lZWQgdG8NCj4+IGFjdGl2YXRlIHRoaXMgdG8gcGFzcyBkZXZpY2VzIHdo
aWNoIGFyZSBub3Qgb24gdGhlIFBDSSBidXMuDQo+IA0KPiBXZWxsLCB5b3UgY2FuIGNoZWNrIHdo
ZXRoZXIgdGhlIGhvc3QgaGFzIFBDSSBzdXBwb3J0IGFuZCBkZWNpZGUNCj4gd2hldGhlciB0byBh
dHRhY2ggYSB2aXJ0dWFsIFBDSSBidXMgdG8gdGhlIGd1ZXN0IG9yIG5vdD8NCj4gDQo+IFNldHRp
bmcgcGFzc3Rocm91Z2g9ZW5hYmxlZCBzaG91bGQgcHJlcGFyZSB0aGUgZ3Vlc3QgdG8gaGFuZGxl
DQo+IHBhc3N0aHJvdWdoLCBpbiB3aGF0ZXZlciBmb3JtIGlzIHN1cHBvcnRlZCBieSB0aGUgaG9z
dCBJTU8uDQoNClRydWUsIHdlIGNvdWxkIGp1c3Qgc2F5IHRoYXQgd2UgY3JlYXRlIGEgUENJIGJ1
cyBpZiB0aGUgaG9zdCBoYXMgb25lIGFuZA0KcGFzc3Rocm91Z2ggaXMgYWN0aXZhdGVkLg0KQnV0
IHdpdGggdmlydHVhbCBkZXZpY2UgcG9pbnQsIHdlIG1pZ2h0IGV2ZW4gbmVlZCBvbmUgb24gZ3Vl
c3Qgd2l0aG91dA0KUENJIHN1cHBvcnQgb24gdGhlIGhhcmR3YXJlIDotKQ0KDQo+IA0KPj4+Pj4+
IExpbWl0YXRpb246DQo+Pj4+Pj4gKiBOZWVkIHRvIGF2b2lkIHRoZSDigJxpb21lbeKAnSBhbmQg
4oCcaXJx4oCdIGd1ZXN0IGNvbmZpZw0KPj4+Pj4+IG9wdGlvbnMgYW5kIG1hcCB0aGUgSU9NRU0g
cmVnaW9uIGFuZCBJUlEgYXQgdGhlIHNhbWUgdGltZSB3aGVuDQo+Pj4+Pj4gZGV2aWNlIGlzIGFz
c2lnbmVkIHRvIHRoZSBndWVzdCB1c2luZyB0aGUg4oCccGNp4oCdIGd1ZXN0IGNvbmZpZyBvcHRp
b25zDQo+Pj4+Pj4gd2hlbiB4bCBjcmVhdGVzIHRoZSBkb21haW4uDQo+Pj4+Pj4gKiBFbXVsYXRl
ZCBCQVIgdmFsdWVzIG9uIHRoZSBWUENJIGJ1cyBzaG91bGQgcmVmbGVjdCB0aGUgSU9NRU0gbWFw
cGVkDQo+Pj4+Pj4gYWRkcmVzcy4NCj4+Pj4+IA0KPj4+Pj4gSXQgd2FzIG15IHVuZGVyc3RhbmRp
bmcgdGhhdCB5b3Ugd291bGQgaWRlbnRpdHkgbWFwIHRoZSBCQVIgaW50byB0aGUNCj4+Pj4+IGRv
bVUgc3RhZ2UtMiB0cmFuc2xhdGlvbiwgYW5kIHRoYXQgY2hhbmdlcyBieSB0aGUgZ3Vlc3Qgd29u
J3QgYmUNCj4+Pj4+IGFsbG93ZWQuDQo+Pj4+IA0KPj4+PiBJbiBmYWN0IHRoaXMgaXMgbm90IHBv
c3NpYmxlIHRvIGRvIGFuZCB3ZSBoYXZlIHRvIHJlbWFwIGF0IGEgZGlmZmVyZW50IGFkZHJlc3MN
Cj4+Pj4gYmVjYXVzZSB0aGUgZ3Vlc3QgcGh5c2ljYWwgbWFwcGluZyBpcyBmaXhlZCBieSBYZW4g
b24gQXJtIHNvIHdlIG11c3QgZm9sbG93DQo+Pj4+IHRoZSBzYW1lIGRlc2lnbiBvdGhlcndpc2Ug
dGhpcyB3b3VsZCBvbmx5IHdvcmsgaWYgdGhlIEJBUnMgYXJlIHBvaW50aW5nIHRvIGFuDQo+Pj4+
IGFkZHJlc3MgdW51c2VkIGFuZCBvbiBKdW5vIHRoaXMgaXMgZm9yIGV4YW1wbGUgY29uZmxpY3Rp
bmcgd2l0aCB0aGUgZ3Vlc3QNCj4+Pj4gUkFNIGFkZHJlc3MuDQo+Pj4gDQo+Pj4gVGhpcyB3YXMg
bm90IGNsZWFyIGZyb20gbXkgcmVhZGluZyBvZiB0aGUgZG9jdW1lbnQsIGNvdWxkIHlvdSBwbGVh
c2UNCj4+PiBjbGFyaWZ5IG9uIHRoZSBuZXh0IHZlcnNpb24gdGhhdCB0aGUgZ3Vlc3QgcGh5c2lj
YWwgbWVtb3J5IG1hcCBpcw0KPj4+IGFsd2F5cyB0aGUgc2FtZSwgYW5kIHRoYXQgQkFScyBmcm9t
IFBDSSBkZXZpY2VzIGNhbm5vdCBiZSBpZGVudGl0eQ0KPj4+IG1hcHBlZCB0byB0aGUgc3RhZ2Ut
MiB0cmFuc2xhdGlvbiBhbmQgaW5zdGVhZCBhcmUgcmVsb2NhdGVkIHNvbWV3aGVyZQ0KPj4+IGVs
c2U/DQo+PiANCj4+IFdlIHdpbGwuDQo+PiANCj4+PiANCj4+PiBJJ20gdGhlbiBjb25mdXNlZCBh
Ym91dCB3aGF0IHlvdSBkbyB3aXRoIGJyaWRnZSB3aW5kb3dzLCBkbyB5b3UgYWxzbw0KPj4+IHRy
YXAgYW5kIGFkanVzdCB0aGVtIHRvIHJlcG9ydCBhIGRpZmZlcmVudCBJT01FTSByZWdpb24/DQo+
PiANCj4+IFllcyB0aGlzIGlzIHdoYXQgd2Ugd2lsbCBoYXZlIHRvIGRvIHNvIHRoYXQgdGhlIHJl
Z2lvbnMgcmVmbGVjdCB0aGUgVlBDSSBtYXBwaW5ncw0KPj4gYW5kIG5vdCB0aGUgaGFyZHdhcmUg
b25lLg0KPj4gDQo+Pj4gDQo+Pj4gQWJvdmUgeW91IG1lbnRpb25lZCB0aGF0IHJlYWQtb25seSBh
Y2Nlc3Mgd2FzIGdpdmVuIHRvIGJyaWRnZQ0KPj4+IHJlZ2lzdGVycywgYnV0IEkgZ3Vlc3Mgc29t
ZSBhcmUgYWxzbyBlbXVsYXRlZCBpbiBvcmRlciB0byByZXBvcnQNCj4+PiBtYXRjaGluZyBJT01F
TSByZWdpb25zPw0KPj4gDQo+PiB5ZXMgdGhhdOKAmXMgZXhhY3QuIFdlIHdpbGwgY2xlYXIgdGhp
cyBpbiB0aGUgbmV4dCB2ZXJzaW9uLg0KPiANCj4gSWYgeW91IGhhdmUgdG8gZ28gdGhpcyByb3V0
ZSBmb3IgZG9tVXMsIGl0IG1pZ2h0IGJlIGVhc2llciB0byBqdXN0DQo+IGZha2UgYSBQQ0kgaG9z
dCBicmlkZ2UgYW5kIHBsYWNlIGFsbCB0aGUgZGV2aWNlcyB0aGVyZSBldmVuIHdpdGgNCj4gZGlm
ZmVyZW50IFNCREYgYWRkcmVzc2VzLiBIYXZpbmcgdG8gcmVwbGljYXRlIGFsbCB0aGUgYnJpZGdl
cyBvbiB0aGUNCj4gcGh5c2ljYWwgUENJIGJ1cyBhbmQgZml4aW5nIHVwIGl0J3MgTU1JTyB3aW5k
b3dzIHNlZW1zIG11Y2ggbW9yZQ0KPiBjb21wbGljYXRlZCB0aGFuIGp1c3QgZmFraW5nL2VtdWxh
dGluZyBhIHNpbmdsZSBicmlkZ2U/DQoNClRoYXTigJlzIGRlZmluaXRlbHkgc29tZXRoaW5nIHdl
IGhhdmUgdG8gZGlnIG1vcmUgb24uIFRoZSB3aG9sZSBwcm9ibGVtYXRpYw0Kb2YgUENJIGVudW1l
cmF0aW9uIGFuZCBCQVIgdmFsdWUgYXNzaWduYXRpb24gaW4gWGVuIG1pZ2h0IGJlIHB1c2hlZCB0
bw0KZWl0aGVyIERvbTAgb3IgdGhlIGZpcm13YXJlIGJ1dCB3ZSBtaWdodCBpbiBmYWN0IGZpbmQg
b3Vyc2VsZiB3aXRoIGV4YWN0bHkgdGhlDQpzYW1lIHByb2JsZW0gb24gdGhlIFZQQ0kgYnVzLg0K
DQpCZXJ0cmFuZA0KDQo+IA0KPiBSb2dlci4NCg0K


From xen-devel-bounces@lists.xenproject.org Sat Jul 18 09:56:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 18 Jul 2020 09:56:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jwjZY-0001NZ-JK; Sat, 18 Jul 2020 09:56:00 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tMbn=A5=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1jwjZY-0001NU-2f
 for xen-devel@lists.xenproject.org; Sat, 18 Jul 2020 09:56:00 +0000
X-Inumbo-ID: e109176c-c8dc-11ea-9718-12813bfff9fa
Received: from EUR05-AM6-obe.outbound.protection.outlook.com (unknown
 [40.107.22.51]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e109176c-c8dc-11ea-9718-12813bfff9fa;
 Sat, 18 Jul 2020 09:55:58 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com; 
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=WY/TSg6DlnEj3Nv0jz4q3YEU60b+O33VE7wWj0RRGC0=;
 b=eHeu1ahpsnqZH5EHKC0CcOstTauIejVBG/ZWFhYhU2b2hyhw5IK1sCbZg43kq4c+3A6dvWUN/4lLg/9VIVmBMTHTldi0oib8FiQgmJz1Ppy7A+sXInuFb/eAbTsUKbQ6ox3GOF6UZiYAVNhVmY2dZgohBqYX3ovc29xCCsj5CqI=
Received: from AM5PR04CA0027.eurprd04.prod.outlook.com (2603:10a6:206:1::40)
 by VI1PR0802MB2302.eurprd08.prod.outlook.com (2603:10a6:800:9e::17) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3174.21; Sat, 18 Jul
 2020 09:55:55 +0000
Received: from AM5EUR03FT029.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:206:1:cafe::bc) by AM5PR04CA0027.outlook.office365.com
 (2603:10a6:206:1::40) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3195.17 via Frontend
 Transport; Sat, 18 Jul 2020 09:55:55 +0000
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org;
 dmarc=bestguesspass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT029.mail.protection.outlook.com (10.152.16.150) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3195.18 via Frontend Transport; Sat, 18 Jul 2020 09:55:55 +0000
Received: ("Tessian outbound 8f45de5545d6:v62");
 Sat, 18 Jul 2020 09:55:55 +0000
X-CheckRecipientChecked: true
X-CR-MTA-CID: 245270a263ebef62
X-CR-MTA-TID: 64aa7808
Received: from 1b3cd3d2aaca.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 77C8CFFF-41B6-4FC9-89E4-AC1DD17CD3D3.1; 
 Sat, 18 Jul 2020 09:55:49 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 1b3cd3d2aaca.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Sat, 18 Jul 2020 09:55:49 +0000
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=ZBfuo87swT6+wZQb+I9M03YQz0bQF43HBoNcunK74OHIr7nUAWDEM+CbhrJ1Kv+AcLOamnmG9PhHtGUVyom7h2Jkq4nmLZiKNcprn4RHf7UKVCb2zsLGWz4gnPv+51SpcZwuaLU+yLNv3mB8gh4BPKXq5AxFNxSYmvDC0TnZqV/aYbhoxSSLVybxOFzOmgcyPgBE5ftHH1bu4ducRNNwGzF7luUMzvYYELkBZlP37QoTU1rFUQQHVpSk7PuzbFIhqV5UJ+wLtVIPLQex6Cg/dS16yT2Dq5THpoJ0xwJM9RgsT/2k+FxMvR0nEiOyLl2eby0o7wfQj6fNPWyAfyMYdA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; 
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=WY/TSg6DlnEj3Nv0jz4q3YEU60b+O33VE7wWj0RRGC0=;
 b=jsd58WcT/TnKk/7pzbJJcTe1J+N7JvUBDf70kFPJ1Bg/DWWeWTmCH1azOcckxtPi5HUtSXBH7aE2bYW2mHVqp6KkNl4KjbDbnemIoIQziDIJa7ywURZ0I1Ku1kh8mpb48PPTdCb61p4Meu+TWLwEtjWChnWeJDN9TFOEnoldWQcW5uOlYj9sw2fnPwsssHyBtS2VkUyW15sNZpERKHvVQ237QXjKxnlKRqkfpp4vMJErSa/HwE3xdct+sdWJpQiOGZgtuRhLTpXst/eXzd8e4Cq5DSBRYLnCNlP00O9PRF20Z3y4lLQwE2AUIRmh3vbYm6wX9N9N7D1B2ua5ymBcLA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com; 
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=WY/TSg6DlnEj3Nv0jz4q3YEU60b+O33VE7wWj0RRGC0=;
 b=eHeu1ahpsnqZH5EHKC0CcOstTauIejVBG/ZWFhYhU2b2hyhw5IK1sCbZg43kq4c+3A6dvWUN/4lLg/9VIVmBMTHTldi0oib8FiQgmJz1Ppy7A+sXInuFb/eAbTsUKbQ6ox3GOF6UZiYAVNhVmY2dZgohBqYX3ovc29xCCsj5CqI=
Received: from AM0PR08MB3682.eurprd08.prod.outlook.com (2603:10a6:208:fb::27)
 by AM0PR08MB3684.eurprd08.prod.outlook.com (2603:10a6:208:106::18)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3195.18; Sat, 18 Jul
 2020 09:55:42 +0000
Received: from AM0PR08MB3682.eurprd08.prod.outlook.com
 ([fe80::b572:771:2750:14ed]) by AM0PR08MB3682.eurprd08.prod.outlook.com
 ([fe80::b572:771:2750:14ed%5]) with mapi id 15.20.3174.029; Sat, 18 Jul 2020
 09:55:42 +0000
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: =?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>
Subject: Re: PCI devices passthrough on Arm design proposal
Thread-Topic: PCI devices passthrough on Arm design proposal
Thread-Index: AQHWW4kYTVU0hTDyYEitKlUuU5vZlKkLzBPIgAAPUACAAArWgIAABdaAgAAFJgCAASrpgA==
Date: Sat, 18 Jul 2020 09:55:42 +0000
Message-ID: <C86FE34B-4587-4895-8001-D8CD3F9D44F0@arm.com>
References: <3F6E40FB-79C5-4AE8-81CA-E16CA37BB298@arm.com>
 <BD475825-10F6-4538-8294-931E370A602C@arm.com>
 <8ac91a1b-e6b3-0f2b-0f23-d7aff100936d@xen.org>
 <c7d5a084-8111-9f43-57e1-bcf2bd822f5b@xen.org>
 <865D5A77-85D4-4A88-A228-DDB70BDB3691@arm.com>
 <972c0c81-6595-7c41-baa5-8882f5d1c0ff@xen.org>
 <4E6B793C-2E0A-4999-9842-24CDCDE43903@arm.com>
 <20200717160550.GZ7191@Air-de-Roger>
In-Reply-To: <20200717160550.GZ7191@Air-de-Roger>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
Authentication-Results-Original: citrix.com; dkim=none (message not signed)
 header.d=none;citrix.com; dmarc=none action=none header.from=arm.com;
x-originating-ip: [2a01:e0a:13:6f10:f1a2:5155:728:f8e7]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: da20b6bc-fc19-48bf-ade8-08d82b00c39f
x-ms-traffictypediagnostic: AM0PR08MB3684:|VI1PR0802MB2302:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS: <VI1PR0802MB230242AE2A314A49D44F0C299D7D0@VI1PR0802MB2302.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:10000;OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original: FT2wY3yl9rQYX8dYeo6fD0/m3BftBLiSBjHPfMp3XsAFM12y03sN06P8cbUMZK+loUls0l+YBWmUn42equH3ll0GjygG3tDxVWg8HHu48xm4sAv01DCIj05wqVbesLlgQM/SrwNxLbpVJT5Hdyeb9R8eprOo0gKiouaq/bN8dN69Jd2jGwXuLvp2LyUI9JOGs/Bpx/mJE4i+jV8EsrH53nkmAGk861ux93Hsxs/9D988zKCrkWT25tV2co3lFfzJolkZVNEA5cj6W1UqE23Pch4k0N/B5Kllw7qWYl4V7OZQcI7tzYeGB4lkexl9/egF
X-Forefront-Antispam-Report-Untrusted: CIP:255.255.255.255; CTRY:; LANG:en;
 SCL:1; SRV:; IPV:NLI; SFV:NSPM; H:AM0PR08MB3682.eurprd08.prod.outlook.com;
 PTR:; CAT:NONE; SFTY:;
 SFS:(4636009)(39860400002)(136003)(396003)(366004)(346002)(376002)(66946007)(86362001)(66476007)(64756008)(76116006)(53546011)(66446008)(6486002)(6916009)(66556008)(6506007)(91956017)(478600001)(36756003)(2906002)(2616005)(8936002)(4326008)(33656002)(5660300002)(71200400001)(8676002)(186003)(316002)(6512007)(83380400001)(54906003);
 DIR:OUT; SFP:1101; 
x-ms-exchange-antispam-messagedata: a2+yHcVBjSPQbCAOp2IviaXLY+49nJHjfMa8PY86LuIjNiDS7o6eZy9tmkm6ja2k2HvklZByxHzwOJRgCn5ukUgOV7IeY6zQIUK6PT1XdkhTpBSSUsMQSzu6xmjIj2Ikb2kJ+KF4nx8G92zF4bdF1AnxKbiPxIhJ/5KSMupp3xfAFSS4t0Jio7aTx+v3nJ8X4wdNFks0R6vKZl0IpYpvjqVOSnrM/+dhqhfpzAkUuJmv5fOYgUcxSA5d/tzrlv7gTRQiuarWKZnphafLSDDwTh4E0Y+k4l2YeeTLbRLIvbi59b6yC7ka7uKApa70huG9s40ROLamBlWAHRZ4ws5PBbUHpNzhZOHWigJMMhFCNsFf7JyWUVSMh64GSgnKIyRyJyj8qn1t+gN7gdYF10WXZg25+hqV3yA2gwBf2pCK1pefUXmqCv8LZGaof806px47KCOAzPiC6WFYLiJ01blm2CE+cy/mRr8v8JIWt0tda7spfzmiGfxgqQzQyv2CAOj9JFFyMDbUlL7zre6dazA97ZCZ1jcCUXeR0Ecrxw+cOXo=
Content-Type: text/plain; charset="utf-8"
Content-ID: <3AEBCB336CEFEE45A49AD395B592D244@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR08MB3684
Original-Authentication-Results: citrix.com; dkim=none (message not signed)
 header.d=none;citrix.com; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped: AM5EUR03FT029.eop-EUR03.prod.protection.outlook.com
X-Forefront-Antispam-Report: CIP:63.35.35.123; CTRY:IE; LANG:en; SCL:1; SRV:;
 IPV:CAL; SFV:NSPM; H:64aa7808-outbound-1.mta.getcheckrecipient.com;
 PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com; CAT:NONE; SFTY:;
 SFS:(4636009)(39860400002)(396003)(376002)(346002)(136003)(46966005)(2906002)(47076004)(26005)(54906003)(186003)(6862004)(316002)(82740400003)(53546011)(86362001)(36906005)(336012)(70206006)(107886003)(70586007)(4326008)(2616005)(6506007)(356005)(8936002)(36756003)(81166007)(6486002)(33656002)(478600001)(8676002)(5660300002)(83380400001)(82310400002)(6512007);
 DIR:OUT; SFP:1101; 
X-MS-Office365-Filtering-Correlation-Id-Prvs: 34e427be-faad-4cdf-25f1-08d82b00bbc8
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: cHnRDfy+jj4em55tVRv/WOsX0NHhhsz4mOHGGa0V7On5RxVp9lei8Qi7k4kobjzVjjZJbaJwQGVBdGS07CLUzPrKiKtqy2ViW654hzTeq7Iv4zOqXGPNPgAuv2vNFa8v4euQ621+WDgGIMBr+uPal0kuKJPFdsF77/Nn6KwhoZYarL8kosFAkCR/2NG0ceVwIh7ctahLRRkpCTLXZbUcP4bsfb/4qDbCXVftnkmDS+5zGR0joF39buVlgobY3hJ8oAMn6vDEJhDhra4rZTEcjG5IAD7wMUCvlk3ddhKGisU+1KkMviYxWI4ovQN+EpmbWKP43rJ0kODalpBybIGOqcYbs+sujbsDMRnNBboKMJkIppwgsDwt4ZSEU3PtEsF+EMPbjsICDoIBWhhYz+OFJQ==
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 Jul 2020 09:55:55.4371 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: da20b6bc-fc19-48bf-ade8-08d82b00c39f
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d; Ip=[63.35.35.123];
 Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource: AM5EUR03FT029.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR0802MB2302
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Rahul Singh <Rahul.Singh@arm.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 nd <nd@arm.com>, Julien Grall <julien.grall.oss@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

DQoNCj4gT24gMTcgSnVsIDIwMjAsIGF0IDE4OjA1LCBSb2dlciBQYXUgTW9ubsOpIDxyb2dlci5w
YXVAY2l0cml4LmNvbT4gd3JvdGU6DQo+IA0KPiBPbiBGcmksIEp1bCAxNywgMjAyMCBhdCAwMzo0
NzoyNVBNICswMDAwLCBCZXJ0cmFuZCBNYXJxdWlzIHdyb3RlOg0KPj4+IE9uIDE3IEp1bCAyMDIw
LCBhdCAxNzoyNiwgSnVsaWVuIEdyYWxsIDxqdWxpZW5AeGVuLm9yZz4gd3JvdGU6DQo+Pj4gT24g
MTcvMDcvMjAyMCAxNTo0NywgQmVydHJhbmQgTWFycXVpcyB3cm90ZToNCj4+Pj4+Pj4gKiBEb20w
TGVzcyBpbXBsZW1lbnRhdGlvbiB3aWxsIHJlcXVpcmUgdG8gaGF2ZSB0aGUgY2FwYWNpdHkgaW5z
aWRlIFhlbiB0byBkaXNjb3ZlciB0aGUgUENJIGRldmljZXMgKHdpdGhvdXQgZGVwZW5kaW5nIG9u
IERvbTAgdG8gZGVjbGFyZSB0aGVtIHRvIFhlbikuDQo+Pj4+Pj4+IA0KPj4+Pj4+PiAjIEVuYWJs
ZSB0aGUgZXhpc3RpbmcgeDg2IHZpcnR1YWwgUENJIHN1cHBvcnQgZm9yIEFSTToNCj4+Pj4+Pj4g
DQo+Pj4+Pj4+IFRoZSBleGlzdGluZyBWUENJIHN1cHBvcnQgYXZhaWxhYmxlIGZvciBYODYgaXMg
YWRhcHRlZCBmb3IgQXJtLiBXaGVuIHRoZSBkZXZpY2UgaXMgYWRkZWQgdG8gWEVOIHZpYSB0aGUg
aHlwZXIgY2FsbCDigJxQSFlTREVWT1BfcGNpX2RldmljZV9hZGTigJ0sIFZQQ0kgaGFuZGxlciBm
b3IgdGhlIGNvbmZpZyBzcGFjZSBhY2Nlc3MgaXMgYWRkZWQgdG8gdGhlIFBDSSBkZXZpY2UgdG8g
ZW11bGF0ZSB0aGUgUENJIGRldmljZXMuDQo+Pj4+Pj4+IA0KPj4+Pj4+PiBBIE1NSU8gdHJhcCBo
YW5kbGVyIGZvciB0aGUgUENJIEVDQU0gc3BhY2UgaXMgcmVnaXN0ZXJlZCBpbiBYRU4gc28gdGhh
dCB3aGVuIGd1ZXN0IGlzIHRyeWluZyB0byBhY2Nlc3MgdGhlIFBDSSBjb25maWcgc3BhY2UsIFhF
TiB3aWxsIHRyYXAgdGhlIGFjY2VzcyBhbmQgZW11bGF0ZSByZWFkL3dyaXRlIHVzaW5nIHRoZSBW
UENJIGFuZCBub3QgdGhlIHJlYWwgUENJIGhhcmR3YXJlLg0KPj4+Pj4+PiANCj4+Pj4+Pj4gTGlt
aXRhdGlvbjoNCj4+Pj4+Pj4gKiBObyBoYW5kbGVyIGlzIHJlZ2lzdGVyIGZvciB0aGUgTVNJIGNv
bmZpZ3VyYXRpb24uDQo+Pj4+Pj4+ICogT25seSBsZWdhY3kgaW50ZXJydXB0IGlzIHN1cHBvcnRl
ZCBhbmQgdGVzdGVkIGFzIG9mIG5vdywgTVNJIGlzIG5vdCBpbXBsZW1lbnRlZCBhbmQgdGVzdGVk
Lg0KPj4+Pj4+IElJUkMsIGxlZ2FjeSBpbnRlcnJ1cHQgbWF5IGJlIHNoYXJlZCBiZXR3ZWVuIHR3
byBQQ0kgZGV2aWNlcy4gSG93IGRvIHlvdSBwbGFuIHRvIGhhbmRsZSB0aGlzIG9uIEFybT8NCj4+
Pj4gV2UgcGxhbiB0byBmaXggdGhpcyBieSBhZGRpbmcgcHJvcGVyIHN1cHBvcnQgZm9yIE1TSSBp
biB0aGUgbG9uZyB0ZXJtLg0KPj4+PiBGb3IgdGhlIHVzZSBjYXNlIHdoZXJlIE1TSSBpcyBub3Qg
c3VwcG9ydGVkIG9yIG5vdCB3YW50ZWQgd2UgbWlnaHQgaGF2ZSB0byBmaW5kIGEgd2F5IHRvIGZv
cndhcmQgdGhlIGhhcmR3YXJlIGludGVycnVwdCB0byBzZXZlcmFsIGd1ZXN0cyB0byBlbXVsYXRl
IHNvbWUga2luZCBvZiBzaGFyZWQgaW50ZXJydXB0Lg0KPj4+IA0KPj4+IFNoYXJpbmcgaW50ZXJy
dXB0cyBhcmUgYSBiaXQgcGFpbiBiZWNhdXNlIHlvdSBjb3VsZG4ndCB0YWtlIGFkdmFudGFnZSBv
ZiB0aGUgZGlyZWN0IEVPSSBpbiBIVyBhbmQgaGF2ZSB0byBiZSBjYXJlZnVsIGlmIG9uZSBndWVz
dCBkb2Vzbid0IEVPSSBpbiB0aW1lbHkgbWFuZWVyLg0KPj4+IA0KPj4+IFRoaXMgaXMgc29tZXRo
aW5nIEkgd291bGQgcmF0aGVyIGF2b2lkIHVubGVzcyB0aGVyZSBpcyBhIHJlYWwgdXNlIGNhc2Ug
Zm9yIGl0Lg0KPj4gDQo+PiBJIHdvdWxkIGV4cGVjdCB0aGF0IG1vc3QgcmVjZW50IGhhcmR3YXJl
IHdpbGwgc3VwcG9ydCBNU0kgYW5kIHRoaXMNCj4+IHdpbGwgbm90IGJlIG5lZWRlZC4NCj4gDQo+
IFdlbGwsIFBDSSBFeHByZXNzIG1hbmRhdGVzIE1TSSBzdXBwb3J0LCBzbyB3aGlsZSB0aGlzIGlz
IGp1c3QgYSBzcGVjLA0KPiBJIHdvdWxkIGV4cGVjdCBtb3N0IChpZiBub3QgYWxsKSBkZXZpY2Vz
IHRvIHN1cHBvcnQgTVNJIChvciBNU0ktWCksIGFzDQo+IEFybSBwbGF0Zm9ybXMgaGF2ZW4ndCBp
bXBsZW1lbnRlZCBsZWdhY3kgUENJIGFueXdheS4NCg0KWWVzIHRoYXTigJlzIG91ciBhc3N1bXB0
aW9uIHRvLiBCdXQgd2UgaGF2ZSB0byBzdGFydCBzb21ld2hlcmUgc28gTVNJIGlzDQpwbGFubmVk
IGJ1dCBpbiBhIGZ1dHVyZSBzdGVwLiBJIHdvdWxkIHRoaW5rIHRoYXQgc3VwcG9ydGluZyBub24g
TVNJIGlmIG5vdA0KaW1wb3NzaWJsZSB3aWxsIGJlIGEgbG90IG1vcmUgY29tcGxleCBkdWUgdG8g
dGhlIGludGVycnVwdCBzaGFyaW5nLg0KSSBkbyB0aGluayB0aGF0IG5vdCBzdXBwb3J0aW5nIG5v
biBNU0kgc2hvdWxkIGJlIG9rIG9uIEFybS4NCg0KPiANCj4+IFdoZW4gTVNJIGlzIG5vdCB1c2Vk
LCB0aGUgb25seSBzb2x1dGlvbiB3b3VsZCBiZSB0byBlbmZvcmNlIHRoYXQNCj4+IGRldmljZXMg
YXNzaWduZWQgdG8gZGlmZmVyZW50IGd1ZXN0IGFyZSB1c2luZyBkaWZmZXJlbnQgaW50ZXJydXB0
cw0KPj4gd2hpY2ggd291bGQgbGltaXQgdGhlIG51bWJlciBvZiBkb21haW5zIGJlaW5nIGFibGUg
dG8gdXNlIFBDSQ0KPj4gZGV2aWNlcyBvbiBhIGJ1cyB0byA0IChpZiB0aGUgZW51bWVyYXRpb24g
Y2FuIGJlIG1vZGlmaWVkIGNvcnJlY3RseQ0KPj4gdG8gYXNzaWduIHRoZSBpbnRlcnJ1cHRzIHBy
b3Blcmx5KS4NCj4+IA0KPj4gSWYgd2UgYWxsIGFncmVlIHRoYXQgdGhpcyBpcyBhbiBhY2NlcHRh
YmxlIGxpbWl0YXRpb24gdGhlbiB3ZSB3b3VsZA0KPj4gbm90IG5lZWQgdGhlIOKAnGludGVycnVw
dCBzaGFyaW5n4oCdLg0KPiANCj4gSSBtaWdodCBiZSBlYXNpZXIgdG8gc3RhcnQgYnkganVzdCBz
dXBwb3J0aW5nIGRldmljZXMgdGhhdCBoYXZlIE1TSQ0KPiAob3IgTVNJLVgpIGFuZCB0aGVuIG1v
dmUgdG8gbGVnYWN5IGludGVycnVwdHMgaWYgcmVxdWlyZWQ/DQoNCk1TSSBzdXBwb3J0IHJlcXVp
cmVzIGFsc28gc29tZSBzdXBwb3J0IGluIHRoZSBpbnRlcnJ1cHQgY29udHJvbGxlciBwYXJ0DQpv
biBhcm0uIFNvIHRoZXJlIGlzIHNvbWUgd29yayB0byBhY2hpZXZlIHRoYXQuDQoNCj4gDQo+IFlv
dSBzaG91bGQgaGF2ZSBtb3N0IG9mIHRoZSBwaWVjZXMgeW91IHJlcXVpcmUgYWxyZWFkeSBpbXBs
ZW1lbnRlZA0KPiBzaW5jZSB0aGF0J3Mgd2hhdCB4ODYgdXNlcywgYW5kIGhlbmNlIGNvdWxkIHJl
dXNlIGFsbW9zdCBhbGwgb2YgaXQ/DQoNCkluc2lkZSBQQ0kgcHJvYmFibHkgYnV0IHRoZSBHSUMg
cGFydCB3aWxsIHJlcXVpcmUgc29tZSB3b3JrLg0KDQo+IA0KPiBJSVJDIEp1bGllbiBldmVuIHNh
aWQgdGhhdCBBcm0gd2FzIGxpa2VseSB0byByZXF1aXJlIG11Y2ggbGVzcyB0cmFwcw0KPiB0aGFu
IHg4NiBmb3IgYWNjZXNzZXMgdG8gTVNJIGFuZCBNU0ktWCBzaW5jZSB5b3UgY291bGQgYWxsb3cg
dW50cnVzdGVkDQo+IGd1ZXN0cyB0byB3cml0ZSBkaXJlY3RseSB0byB0aGUgcmVnaXN0ZXJzIGFz
IHRoZXJlJ3MgYW5vdGhlciBwaWVjZSBvZg0KPiBoYXJkd2FyZSB0aGF0IHdvdWxkIGFscmVhZHkg
dHJhbnNsYXRlIHRoZSBpbnRlcnJ1cHRzPw0KDQpZZXMgdGhpcyBpcyBkZWZpbml0ZWx5IHRoZSBj
YXNlLiBUaGUgSVRTIHBhcnQgb2YgdGhlIEdJQyBpbnRlcnJ1cHQgY29udHJvbGxlcg0Kd2lsbCBo
ZWxwIGEgbG90IGFuZCByZWR1Y2UgdGhlIG51bWJlciBvZiB0cmFwcy4NCg0KPiANCj4gSSB0aGlu
ayBpdCdzIGZpbmUgdG8gdXNlIHRoaXMgd29ya2Fyb3VuZCB3aGlsZSB5b3UgZG9uJ3QgaGF2ZSBN
U0kNCj4gc3VwcG9ydCBpbiBvcmRlciB0byBzdGFydCB0ZXN0aW5nIGFuZCB1cHN0cmVhbWluZyBz
dHVmZiwgYnV0IG1heWJlDQo+IHRoYXQgc2hvdWxkbid0IGJlIGNvbW1pdHRlZD8NCg0KVGhhdCB3
YXMgZGVmaW5pdGVseSBub3Qgb3VyIHBsYW4gdG8gY29tbWl0IHRoZSBjb2RlIHdpdGhvdXQgTVNJ
Lg0KQnV0IGFzIHJlcXVlc3RlZCBkdXJpbmcgdGhlIFhlbiBTdW1taXQsIHdlIHRyeSB0byBwdWJs
aXNoIHNvbWUgY29kZQ0KZm9yIGFuIFJGQyBhbmQgYSBkZXNpZ24gZWFybHkgdG8gZ2V0IGNvbW1l
bnQgZnJvbSB0aGUgY29tbXVuaXR5IGFuZA0Kd2UgdHJ5IHRvIGRvIHRoYXQgd2l0aCBzb21ldGhp
bmcgd29ya2luZywgZXZlbiBwYXJ0aWFsbHkgYW5kIHdpdGggbG90cyBvZg0KbGltaXRhdGlvbnMu
DQoNCkJlcnRyYW5kDQoNCg==


From xen-devel-bounces@lists.xenproject.org Sat Jul 18 09:58:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 18 Jul 2020 09:58:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jwjcR-0001Wy-8R; Sat, 18 Jul 2020 09:58:59 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tMbn=A5=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1jwjcP-0001Wt-P9
 for xen-devel@lists.xenproject.org; Sat, 18 Jul 2020 09:58:57 +0000
X-Inumbo-ID: 4b0e9fb0-c8dd-11ea-bb8b-bc764e2007e4
Received: from EUR05-VI1-obe.outbound.protection.outlook.com (unknown
 [40.107.21.73]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4b0e9fb0-c8dd-11ea-bb8b-bc764e2007e4;
 Sat, 18 Jul 2020 09:58:56 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com; 
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=2s0tH+pup/xz/4DK8QhSXxnJtPEAg9Ps42z57ZPsKYA=;
 b=8Y3Ammtw4XIDax8ChOtAu17c4m+xdP1x01j7dlJFSB2bZEH+Fq5bGgQhhHYB3hhCrofBToy/AzXKwa0t4Hnhw1P4M8spnxILaU/KuTtg0OcGgfy7S7TuD8NaD/XmreNU2tYc+E+Gs/XLwY3aCnLgN9gvCKZZH4Ldab64Z9IOeR4=
Received: from AM7PR02CA0005.eurprd02.prod.outlook.com (2603:10a6:20b:100::15)
 by VI1PR0801MB1662.eurprd08.prod.outlook.com (2603:10a6:800:52::18)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3174.20; Sat, 18 Jul
 2020 09:58:54 +0000
Received: from VE1EUR03FT003.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:100:cafe::c2) by AM7PR02CA0005.outlook.office365.com
 (2603:10a6:20b:100::15) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3195.18 via Frontend
 Transport; Sat, 18 Jul 2020 09:58:54 +0000
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org;
 dmarc=bestguesspass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT003.mail.protection.outlook.com (10.152.18.108) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3195.18 via Frontend Transport; Sat, 18 Jul 2020 09:58:53 +0000
Received: ("Tessian outbound 73b502bf693a:v62");
 Sat, 18 Jul 2020 09:58:53 +0000
X-CheckRecipientChecked: true
X-CR-MTA-CID: 4cd7f1c1245bc840
X-CR-MTA-TID: 64aa7808
Received: from 0609c9818d35.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 FC71D507-F3BC-496B-8D78-9D7C91C60BBB.1; 
 Sat, 18 Jul 2020 09:58:47 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 0609c9818d35.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Sat, 18 Jul 2020 09:58:47 +0000
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=DUiVuJioMUDB4dPMF42QAagdgmqB1mWKy0rHuP8oVS6mkqvxWD0lkQsXBW3mMePPGL5lJLKjp1/Y90Tteboesei9pXUo8H5UZyRFdwXPb9f1MAlCWy0gKF699rypOQRgpIPs7iap2VSqCMXg33egsY2iffIFlDHGVoMNAY2IjxHO8Ds6nisfnGykkkj2slKgDKbGehE2zQeYzlcezeYqgnkzp4Cfuv5pvXA50RGMaPeF/oTMNzqCawF8k7BtHfL+iVrpsIpn+ORY3jxyxJItKZ4a7p1oPfLFF3Di1yDnDD2KCLk++sQkqEtQIAjVcyUGjIldthZ7C/jgwRcej/ANcQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; 
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=2s0tH+pup/xz/4DK8QhSXxnJtPEAg9Ps42z57ZPsKYA=;
 b=mIk9UNqtcRcGsF87Wx8K7Qjru9P1Mch25WhIHZTQn7UrGyOWHzFO0d86haksU+zIV25zONn4zpHrupylSvwB2JSoqYTUl/sotkoaU823wknt2aBHkbarO+6ncuIZgC2AtVaLerYrzQBz7MY/fevLAngisIgH/McmnvwTM3nTi5JxO5ue/ubCd7niNN3ylEiRD62+JOH3N52zgVTgV5Or6Fb2GvjycfViPSpA9ISLrfsigkhmjJ08qREQPUCseNA/JghxtzpIGCof1EYdoWYQOADw5MhGe2Zrtpvn6FFBRoM83jm05COdZVCtMG2JVyUY3IzmbSmuuAY5lKt350ymFA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com; 
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=2s0tH+pup/xz/4DK8QhSXxnJtPEAg9Ps42z57ZPsKYA=;
 b=8Y3Ammtw4XIDax8ChOtAu17c4m+xdP1x01j7dlJFSB2bZEH+Fq5bGgQhhHYB3hhCrofBToy/AzXKwa0t4Hnhw1P4M8spnxILaU/KuTtg0OcGgfy7S7TuD8NaD/XmreNU2tYc+E+Gs/XLwY3aCnLgN9gvCKZZH4Ldab64Z9IOeR4=
Received: from AM0PR08MB3682.eurprd08.prod.outlook.com (2603:10a6:208:fb::27)
 by AM0PR08MB3684.eurprd08.prod.outlook.com (2603:10a6:208:106::18)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3195.18; Sat, 18 Jul
 2020 09:58:45 +0000
Received: from AM0PR08MB3682.eurprd08.prod.outlook.com
 ([fe80::b572:771:2750:14ed]) by AM0PR08MB3682.eurprd08.prod.outlook.com
 ([fe80::b572:771:2750:14ed%5]) with mapi id 15.20.3174.029; Sat, 18 Jul 2020
 09:58:45 +0000
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Oleksandr <olekstysh@gmail.com>
Subject: Re: RFC: PCI devices passthrough on Arm design proposal
Thread-Topic: RFC: PCI devices passthrough on Arm design proposal
Thread-Index: AQHWW4kYTVU0hTDyYEitKlUuU5vZlKkKf2uAgAACLICAAOrEgIAAVPWAgAABeYCAAAssAIAAAcuAgAAIDQCAAAHigIAAAiYAgAAEaYCAAAVDgIAAAeSAgAAF4gCAAASxAIAAAtoAgAAyDwCAAPYZgA==
Date: Sat, 18 Jul 2020 09:58:45 +0000
Message-ID: <92495B3F-CCF4-40BE-9414-02DA2F338E64@arm.com>
References: <1dd5db2d-98c7-7738-c3d4-d3f098dfe674@suse.com>
 <F09F9354-EC9B-4D76-809B-A25AF4F7D863@arm.com>
 <a5007a6c-bdfe-04d4-8107-53cb222b95e8@suse.com>
 <DA19A9EC-A828-4EBC-BCAA-D1D9E4F222BB@arm.com>
 <20200717144139.GU7191@Air-de-Roger>
 <90AE8DAB-2223-46DC-A263-D78365E5435E@arm.com>
 <20200717150507.GW7191@Air-de-Roger>
 <FBE040A9-D088-43D6-8929-FFEDE9DDDE34@arm.com>
 <20200717153043.GX7191@Air-de-Roger>
 <C5B2BDD5-E504-4871-8542-5BA8C051F699@arm.com>
 <20200717160834.GA7191@Air-de-Roger>
 <0c76b6a0-2242-3bbd-9740-75c5580e93e8@xen.org>
 <1dea1217-f884-0fe1-d339-95c5b473ae23@gmail.com>
In-Reply-To: <1dea1217-f884-0fe1-d339-95c5b473ae23@gmail.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
Authentication-Results-Original: gmail.com; dkim=none (message not signed)
 header.d=none;gmail.com; dmarc=none action=none header.from=arm.com;
x-originating-ip: [2a01:e0a:13:6f10:f1a2:5155:728:f8e7]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 7f33773f-ded9-4fbf-bbf3-08d82b012de0
x-ms-traffictypediagnostic: AM0PR08MB3684:|VI1PR0801MB1662:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS: <VI1PR0801MB166292D8FFD7D4323DAE08BA9D7D0@VI1PR0801MB1662.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:10000;OLM:10000;
X-MS-Exchange-SenderADCheck: 1
Content-Type: text/plain; charset="utf-8"
Content-ID: <9674439EC461FC409BFC2A3119C21105@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR08MB3684
Original-Authentication-Results: gmail.com; dkim=none (message not signed)
 header.d=none;gmail.com; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped: VE1EUR03FT003.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs: 28a1bdbe-9247-4641-8c98-08d82b01290e
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 Jul 2020 09:58:53.5920 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 7f33773f-ded9-4fbf-bbf3-08d82b012de0
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d; Ip=[63.35.35.123];
 Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource: VE1EUR03FT003.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR0801MB1662
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: nd <nd@arm.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien@xen.org>, Julien Grall <julien.grall.oss@gmail.com>,
 Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Rahul Singh <Rahul.Singh@arm.com>,
 =?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>


> On 17 Jul 2020, at 21:17, Oleksandr <olekstysh@gmail.com> wrote:
> 
> 
> On 17.07.20 19:18, Julien Grall wrote:
> 
> Hello Bertrand
> 
> [two threads with the same name are shown in my mail client, so I am not completely sure I am asking in the correct one]
> 
>> 
>> 
>> On 17/07/2020 17:08, Roger Pau Monné wrote:
>>> On Fri, Jul 17, 2020 at 03:51:47PM +0000, Bertrand Marquis wrote:
>>>> 
>>>> 
>>>>> On 17 Jul 2020, at 17:30, Roger Pau Monné <roger.pau@citrix.com> wrote:
>>>>> 
>>>>> On Fri, Jul 17, 2020 at 03:23:57PM +0000, Bertrand Marquis wrote:
>>>>>> 
>>>>>> 
>>>>>>> On 17 Jul 2020, at 17:05, Roger Pau Monné <roger.pau@citrix.com> wrote:
>>>>>>> 
>>>>>>> On Fri, Jul 17, 2020 at 02:49:20PM +0000, Bertrand Marquis wrote:
>>>>>>>> 
>>>>>>>> 
>>>>>>>>> On 17 Jul 2020, at 16:41, Roger Pau Monné <roger.pau@citrix.com> wrote:
>>>>>>>>> 
>>>>>>>>> On Fri, Jul 17, 2020 at 02:34:55PM +0000, Bertrand Marquis wrote:
>>>>>>>>>> 
>>>>>>>>>> 
>>>>>>>>>>> On 17 Jul 2020, at 16:06, Jan Beulich <jbeulich@suse.com> wrote:
>>>>>>>>>>> 
>>>>>>>>>>> On 17.07.2020 15:59, Bertrand Marquis wrote:
>>>>>>>>>>>> 
>>>>>>>>>>>> 
>>>>>>>>>>>>> On 17 Jul 2020, at 15:19, Jan Beulich <jbeulich@suse.com> wrote:
>>>>>>>>>>>>> 
>>>>>>>>>>>>> On 17.07.2020 15:14, Bertrand Marquis wrote:
>>>>>>>>>>>>>>> On 17 Jul 2020, at 10:10, Jan Beulich <jbeulich@suse.com> wrote:
>>>>>>>>>>>>>>> On 16.07.2020 19:10, Rahul Singh wrote:
>>>>>>>>>>>>>>>> # Emulated PCI device tree node in libxl:
>>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>>> Libxl creates a virtual PCI device tree node in the device tree to enable the guest OS to discover the virtual PCI bus during guest boot. We introduced the new config option [vpci="pci_ecam"] for guests. When this config option is enabled in a guest configuration, a PCI device tree node will be created in the guest device tree.
>>>>>>>>>>>>>>> 
>>>>>>>>>>>>>>> I support Stefano's suggestion for this to be an optional thing, i.e. there to be no need for it when there are PCI devices assigned to the guest anyway. I also wonder about the pci_ prefix here - isn't vpci="ecam" as unambiguous?
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> This could be a problem as we need to know that this is required for a guest upfront so that PCI devices can be assigned afterwards using xl.
>>>>>>>>>>>>> 
>>>>>>>>>>>>> I'm afraid I don't understand: When there are no PCI devices that get handed to a guest when it gets created, but it is supposed to be able to have some assigned while already running, then we agree the option is needed (afaict). When PCI devices get handed to the guest while it gets constructed, where's the problem to infer this option from the presence of PCI devices in the guest configuration?
>>>>>>>>>>>> 
>>>>>>>>>>>> If the user wants to use xl pci-attach to attach a device to a guest at runtime, this guest must have a VPCI bus (even with no devices). If we do not have the vpci parameter in the configuration, this use case will not work anymore.
>>>>>>>>>>> 
>>>>>>>>>>> That's what everyone looks to agree with. Yet why is the parameter needed when there _are_ PCI devices anyway? That's the "optional" that Stefano was suggesting, aiui.
>>>>>>>>>> 
>>>>>>>>>> I agree, in this case the parameter could be optional and only required if no PCI device is assigned directly in the guest configuration.
>>>>>>>>> 
>>>>>>>>> Where will the ECAM region(s) appear on the guest physmap?
>>>>>>>>> 
>>>>>>>>> Are you going to re-use the same locations as on the physical hardware, or will they appear somewhere else?
>>>>>>>> 
>>>>>>>> We will add some new definitions for the ECAM regions in the guest physmap declared in Xen (include/asm-arm/config.h)
>>>>>>> 
>>>>>>> I think I'm confused, but that file doesn't contain anything related to the guest physmap, that's the Xen virtual memory layout on Arm AFAICT?
>>>>>>> 
>>>>>>> Does this somehow relate to the physical memory map exposed to guests on Arm?
>>>>>> 
>>>>>> Yes it does.
>>>>>> We will add new definitions there related to VPCI to reserve some areas for the VPCI ECAM and the IOMEM areas.
>>>>> 
>>>>> Yes, that's completely fine and is what's done on x86, but again I feel like I'm lost here, this is the Xen virtual memory map, how does this relate to the guest physical memory map?
>>>> 
>>>> Sorry, my bad, we will add values in include/public/arch-arm.h, wrong header :-)
>>> 
>>> Oh right, now I see it :).
>>> 
>>> Do you really need to specify the ECAM and MMIO regions there?
>> 
>> You need to define those values somewhere :). The layout is only shared between the tools and the hypervisor. I think it would be better if they are defined at the same place as the rest of the layout, so it is easier to rework the layout.
>> 
>> Cheers,
> 
> 
> I would like to clarify regarding the IOMMU driver changes which should be done to support PCI pass-through properly.
> 
> The design document mentions SMMU, but Xen also supports IPMMU-VMSA (under tech preview now). It would be really nice if the required support were extended to that kind of IOMMU as well.

We will try to make the code as generic as possible. For now SMMU is the only hardware we have (and it is a standard Arm one) so we will start with that.
But we welcome others to improve it and add support for more different hardware.

> 
> May I clarify what should be implemented in the Xen driver in order to support the PCI pass-through feature on Arm? Should the IOMMU H/W be "PCI-aware" for that purpose?

We are still not at the SMMU implementation part, but it should be our next step.
Feel free to explain to us what would be required so that we can take that into account.

Regards
Bertrand
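[Editor's note: for illustration, the flow discussed in this thread — the proposed vpci guest option plus runtime attach through xl — would look roughly like the sketch below. The exact option syntax was still under discussion at this point, and the BDF shown is hypothetical.]

```
# Guest configuration: ask libxl to create the emulated PCI host
# bridge (ECAM node in the guest device tree) even when no device
# is assigned at boot, so hotplug via xl can work later:
vpci = "pci_ecam"

# Devices may also be assigned directly at guest creation:
# pci = [ "PCI_SPEC_STRING", ... ]

# Runtime attach from the control domain (hypothetical BDF):
#   xl pci-assignable-add 0000:01:00.0
#   xl pci-attach <domid> 0000:01:00.0
```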


From xen-devel-bounces@lists.xenproject.org Sat Jul 18 11:08:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 18 Jul 2020 11:08:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jwkhV-0007Rl-Fe; Sat, 18 Jul 2020 11:08:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=BrZA=A5=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jwkhU-0007Rg-2T
 for xen-devel@lists.xenproject.org; Sat, 18 Jul 2020 11:08:16 +0000
X-Inumbo-ID: f9fb42cc-c8e6-11ea-bca7-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f9fb42cc-c8e6-11ea-bca7-bc764e2007e4;
 Sat, 18 Jul 2020 11:08:15 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=gxAXvakRWmFJPHfNrjY/xVHbrDJHm/doqu61Vdar1Pc=; b=v0gHPfXZqNuUwp5/m5u+a7U/e2
 j/KScSJhXDjIepDbsgDMKyn12gPQwLcOThDLzR32yz1UX/l2wkNqH47UJC1SVZPQP3L6oyO1V85wA
 z6zfmLxNrD80nu4y3lr20E/B9zYPboe0Qf/6huCYnIDIlYdWRb5jccFOKdrqFNT0vzG8=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jwkhR-0003Z6-FG; Sat, 18 Jul 2020 11:08:13 +0000
Received: from cpc91186-cmbg18-2-0-cust22.5-4.cable.virginm.net ([80.1.50.23]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jwkhR-0005P6-8R; Sat, 18 Jul 2020 11:08:13 +0000
Subject: Re: PCI devices passthrough on Arm design proposal
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
References: <3F6E40FB-79C5-4AE8-81CA-E16CA37BB298@arm.com>
 <BD475825-10F6-4538-8294-931E370A602C@arm.com>
 <8ac91a1b-e6b3-0f2b-0f23-d7aff100936d@xen.org>
 <c7d5a084-8111-9f43-57e1-bcf2bd822f5b@xen.org>
 <865D5A77-85D4-4A88-A228-DDB70BDB3691@arm.com>
 <972c0c81-6595-7c41-baa5-8882f5d1c0ff@xen.org>
 <4E6B793C-2E0A-4999-9842-24CDCDE43903@arm.com>
From: Julien Grall <julien@xen.org>
Message-ID: <22df2406-c4d3-1d06-0736-51ebea5581ea@xen.org>
Date: Sat, 18 Jul 2020 12:08:09 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:68.0)
 Gecko/20100101 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <4E6B793C-2E0A-4999-9842-24CDCDE43903@arm.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Rahul Singh <Rahul.Singh@arm.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 nd <nd@arm.com>, Julien Grall <julien.grall.oss@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi,

On 17/07/2020 16:47, Bertrand Marquis wrote:
>> On 17 Jul 2020, at 17:26, Julien Grall <julien@xen.org> wrote:
>> On 17/07/2020 15:47, Bertrand Marquis wrote:
>>>>>>      pci=[ "PCI_SPEC_STRING", "PCI_SPEC_STRING", ...]
>>>>>>
>>>>>> The guest will only be able to access the assigned devices and see the bridges. The guest will not be able to access or see the devices that are not assigned to it.
>>>>>>
>>>>>> Limitation:
>>>>>> * As of now all the bridges in the PCI bus are seen by the guest on the VPCI bus.
>>>>> Why do you want to expose all the bridges to a guest? Does this mean that the BDF should always match between the host and the guest?
>>> That’s not really something that we wanted but this was the easiest way to go.
>>> As said in a previous mail we could build a VPCI bus with a completely different topology but I am not sure of the advantages this would have.
>>> Do you see some reason to do this ?
>>
>> Yes :):
>>   1) If a platform has two host controllers (IIRC Thunder-X does) then you would need to expose two host controllers to your guest. I think this is undesirable if your guest is only using a couple of PCI devices on each host controller.
>>   2) In the case of migration (live or not), you may want to use a different PCI card on the target platform. So your BDF and bridges may be different.
>>
>> Therefore I think the virtual topology can be beneficial.
> 
> I would definitely see a big advantage in having only one VPCI bus per guest and putting all devices in it, independently of the physical host controller each device is on.
> But this will probably make the VPCI BAR value computation a bit more complex, as we might end up with no space for it on the guest physical map.
> This might make the implementation a lot more complex.

I am not sure I understand your argument about the space... You should 
be able to find out the size of each BAR, so you can size the MMIO 
window correctly. This shouldn't add a lot of complexity.
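[Editor's note: Julien's point above — that BAR sizes are discoverable, so the guest MMIO window can be sized up front — can be sketched as follows. Illustrative only, not Xen code; it relies on PCI BARs being power-of-two sized and naturally aligned, so placing them largest-first wastes no space.]

```python
def size_mmio_window(bar_sizes):
    """Return (total_window_size, offsets) for a set of BARs.

    bar_sizes: list of BAR sizes in bytes (each a power of two).
    Each BAR must be naturally aligned to its own size; allocating
    largest-first leaves no alignment padding between BARs.
    """
    offsets = {}
    cursor = 0
    # Sort by descending size; remember the original BAR index.
    for idx, size in sorted(enumerate(bar_sizes), key=lambda t: -t[1]):
        assert size > 0 and size & (size - 1) == 0, "BARs are powers of two"
        cursor = (cursor + size - 1) & ~(size - 1)  # align up to size
        offsets[idx] = cursor
        cursor += size
    return cursor, offsets
```

A 1 MiB BAR, a 16 KiB BAR and a 4 KiB BAR then pack into a window of exactly 0x105000 bytes, with each BAR naturally aligned.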

I am not asking for any implementation of this, but we need to make sure 
the design can easily be extended to other use cases. In the case of 
servers, we will likely want to expose a single vPCI bus to the guest.
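[Editor's note: as a sketch of the kind of virtual topology Julien describes — one vPCI bus regardless of how many physical host controllers the devices come from — a hypothetical host-to-virtual BDF mapping could look like this. Illustrative only, not part of the proposal; a real implementation would at least need to keep the functions of a multi-function device grouped.]

```python
def assign_virtual_bdfs(host_devices):
    """Map host (segment, bus, dev, fn) tuples, possibly from several
    host controllers, onto slots of a single virtual bus 0.

    Returns {host_sbdf: (vseg, vbus, vdev, vfn)} in assignment order.
    """
    vmap = {}
    for vdev, sbdf in enumerate(host_devices):
        if vdev >= 32:
            raise ValueError("a single PCI bus holds at most 32 device slots")
        vmap[sbdf] = (0, 0, vdev, 0)  # all on virtual segment 0, bus 0
    return vmap
```

With this scheme a device behind a second host controller (a different segment) still appears to the guest as just another slot on the single virtual bus.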

>>
>>>>>     - Is there any memory access that can bypass the IOMMU (e.g. a doorbell)?
>>> This is still something to be investigated as part of the MSI implementation.
>>> If you have any idea here, feel free to tell us.
>>
>> My memory is a bit fuzzy here. I am sure that the doorbell can bypass the IOMMU on some platforms, but I also vaguely remember that accesses to the PCI host controller memory window may also bypass the IOMMU. A good read might be [2].
>>
>> IIRC, I came to the conclusion that we may want to use the host memory map in the guest when using the PCI passthrough. But maybe not on all the platforms.
> 
> Definitely, a lot of this would be easier if we could use a 1:1 mapping.
> We will keep that in mind when we start to investigate the MSI part.

Hmmm... Maybe I wasn't clear enough, but the problem is not only 
happening with MSI doorbells. It also happens with P2P transactions.

Again, I am not asking to implement it at the beginning. However, it 
would be good to outline the potential limitations of the approach in 
your design.

Cheers,


-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Sat Jul 18 11:14:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 18 Jul 2020 11:14:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jwknT-0008ID-6p; Sat, 18 Jul 2020 11:14:27 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=BrZA=A5=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jwknS-0008I8-2u
 for xen-devel@lists.xenproject.org; Sat, 18 Jul 2020 11:14:26 +0000
X-Inumbo-ID: d61b685e-c8e7-11ea-9723-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d61b685e-c8e7-11ea-9723-12813bfff9fa;
 Sat, 18 Jul 2020 11:14:24 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=lUNFPxOXlpwjXpbLc667sJKTqQx9dTl0ncXjEF1pc+A=; b=t9A7SIZLDJh+tPGP1j+mHhViDa
 bUQuoLi5XVh6pPH9XZq6v5f5vlqKbgJzs7qof5YCb1HUJ5+svect1cNH+Li2GuEWOkzQhZybBqVey
 UT+FZyT/FWWPpGyi1s6FxRwCDeU9dvShkHmxXNbgh7hgPoJjXA6l4CpLe0xlCgxINLBU=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jwknP-0003gb-EC; Sat, 18 Jul 2020 11:14:23 +0000
Received: from cpc91186-cmbg18-2-0-cust22.5-4.cable.virginm.net ([80.1.50.23]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jwknP-0005oX-6H; Sat, 18 Jul 2020 11:14:23 +0000
Subject: Re: PCI devices passthrough on Arm design proposal
To: Bertrand Marquis <Bertrand.Marquis@arm.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <3F6E40FB-79C5-4AE8-81CA-E16CA37BB298@arm.com>
 <BD475825-10F6-4538-8294-931E370A602C@arm.com>
 <8ac91a1b-e6b3-0f2b-0f23-d7aff100936d@xen.org>
 <c7d5a084-8111-9f43-57e1-bcf2bd822f5b@xen.org>
 <865D5A77-85D4-4A88-A228-DDB70BDB3691@arm.com>
 <972c0c81-6595-7c41-baa5-8882f5d1c0ff@xen.org>
 <4E6B793C-2E0A-4999-9842-24CDCDE43903@arm.com>
 <20200717160550.GZ7191@Air-de-Roger>
 <C86FE34B-4587-4895-8001-D8CD3F9D44F0@arm.com>
From: Julien Grall <julien@xen.org>
Message-ID: <f6a0da85-6d44-b0fa-abe6-6839d88c3578@xen.org>
Date: Sat, 18 Jul 2020 12:14:20 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:68.0)
 Gecko/20100101 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <C86FE34B-4587-4895-8001-D8CD3F9D44F0@arm.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 nd <nd@arm.com>, Rahul Singh <Rahul.Singh@arm.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien.grall.oss@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>



On 18/07/2020 10:55, Bertrand Marquis wrote:
> 
> 
>> On 17 Jul 2020, at 18:05, Roger Pau Monné <roger.pau@citrix.com> wrote:
>>
>> On Fri, Jul 17, 2020 at 03:47:25PM +0000, Bertrand Marquis wrote:
>>>> On 17 Jul 2020, at 17:26, Julien Grall <julien@xen.org> wrote:
>>>> On 17/07/2020 15:47, Bertrand Marquis wrote:
>>>>>>>> * The Dom0less implementation will require Xen to have the ability to discover PCI devices itself (without depending on Dom0 to declare them to Xen).
>>>>>>>>
>>>>>>>> # Enable the existing x86 virtual PCI support for ARM:
>>>>>>>>
>>>>>>>> The existing VPCI support available for x86 is adapted for Arm. When a device is added to Xen via the hypercall “PHYSDEVOP_pci_device_add”, a VPCI handler for config space accesses is attached to the PCI device to emulate it.
>>>>>>>>
>>>>>>>> An MMIO trap handler for the PCI ECAM space is registered in Xen so that when a guest tries to access the PCI config space, Xen traps the access and emulates the read/write using VPCI instead of the real PCI hardware.
>>>>>>>>
>>>>>>>> Limitation:
>>>>>>>> * No handler is registered for the MSI configuration.
>>>>>>>> * Only legacy interrupts are supported and tested as of now; MSI is not implemented or tested.
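[Editor's note: the ECAM trap-and-emulate scheme quoted above relies on an ECAM window encoding the target device in the offset of the access. A small illustrative sketch of the standard PCIe ECAM layout — not Xen's actual code:]

```python
def ecam_offset(bus: int, dev: int, fn: int, reg: int) -> int:
    """Byte offset of a config register within an ECAM window.

    ECAM encodes bus in bits [27:20], device in [19:15],
    function in [14:12] and the register offset in [11:0].
    """
    assert 0 <= bus < 256 and 0 <= dev < 32 and 0 <= fn < 8 and 0 <= reg < 4096
    return (bus << 20) | (dev << 15) | (fn << 12) | reg

def ecam_decode(offset: int):
    """Inverse: recover (bus, dev, fn, reg) from a trapped offset."""
    return ((offset >> 20) & 0xFF, (offset >> 15) & 0x1F,
            (offset >> 12) & 0x7, offset & 0xFFF)
```

A trap handler only needs the offset of the faulting access relative to the window base to know exactly which device's emulated config space to consult.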
>>>>>>> IIRC, a legacy interrupt may be shared between two PCI devices. How do you plan to handle this on Arm?
>>>>> We plan to fix this by adding proper support for MSI in the long term.
>>>>> For the use case where MSI is not supported or not wanted we might have to find a way to forward the hardware interrupt to several guests to emulate some kind of shared interrupt.
>>>>
>>>> Sharing interrupts is a bit of a pain because you can't take advantage of the direct EOI in HW and have to be careful if one guest doesn't EOI in a timely manner.
>>>>
>>>> This is something I would rather avoid unless there is a real use case for it.
>>>
>>> I would expect that most recent hardware will support MSI and this
>>> will not be needed.
>>
>> Well, PCI Express mandates MSI support, so while this is just a spec,
>> I would expect most (if not all) devices to support MSI (or MSI-X), as
>> Arm platforms haven't implemented legacy PCI anyway.
> 
> Yes, that's our assumption too. But we have to start somewhere, so MSI is
> planned, but in a future step. I would think that supporting non-MSI, if
> not impossible, will be a lot more complex due to the interrupt sharing.
> I do think that not supporting non-MSI should be OK on Arm.
> 
>>
>>> When MSI is not used, the only solution would be to enforce that
>>> devices assigned to different guests use different interrupts,
>>> which would limit the number of domains able to use PCI
>>> devices on a bus to 4 (if the enumeration can be modified correctly
>>> to assign the interrupts properly).
>>>
>>> If we all agree that this is an acceptable limitation then we would
>>> not need the “interrupt sharing”.
>>
>> It might be easier to start by just supporting devices that have MSI
>> (or MSI-X) and then move to legacy interrupts if required?
> 
> MSI support also requires some support in the interrupt controller part
> on Arm. So there is some work to achieve that.
> 
>>
>> You should have most of the pieces you require already implemented
>> since that's what x86 uses, and hence could reuse almost all of it?
> 
> Inside PCI probably but the GIC part will require some work.

We already have an ITS implementation in Xen. This is required in order 
to use PCI devices in DOM0 on thunder-x (legacy interrupts are not 
supported there).

It wasn't yet exposed to the guest because we didn't fully investigate 
the security aspect of the implementation. However, for a tech preview 
this should be sufficient.


-- 
Julien Grall



From xen-devel-bounces@lists.xenproject.org Sat Jul 18 11:24:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 18 Jul 2020 11:24:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jwkwu-0000mH-7y; Sat, 18 Jul 2020 11:24:12 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=BrZA=A5=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jwkws-0000lY-SY
 for xen-devel@lists.xenproject.org; Sat, 18 Jul 2020 11:24:10 +0000
X-Inumbo-ID: 3340c10e-c8e9-11ea-9725-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 3340c10e-c8e9-11ea-9725-12813bfff9fa;
 Sat, 18 Jul 2020 11:24:10 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=KUxh8u1efSR2RaHD48Qz7dVl3CuiL7daNy2JgMIh/Z4=; b=TqxfXhmP2tt+atMq167bFy2HMS
 KkA9csHDB27Cb0ULygNICwu12JSxVAIOt573LUjD6q21DJqP+wGwqEpKS6/DYk6MTRsLuB0jPRlhb
 CyC2nQ7jbpiv5j86t6ZvHcjieaUXQtRTBTKXvcRn9G7tAJBtrJa8ZAD+RAFUQzt3qdvs=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jwkwr-0003tL-3f; Sat, 18 Jul 2020 11:24:09 +0000
Received: from cpc91186-cmbg18-2-0-cust22.5-4.cable.virginm.net ([80.1.50.23]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jwkwq-0006Gi-Oy; Sat, 18 Jul 2020 11:24:08 +0000
Subject: Re: RFC: PCI devices passthrough on Arm design proposal
To: Oleksandr <olekstysh@gmail.com>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>
References: <1dd5db2d-98c7-7738-c3d4-d3f098dfe674@suse.com>
 <F09F9354-EC9B-4D76-809B-A25AF4F7D863@arm.com>
 <a5007a6c-bdfe-04d4-8107-53cb222b95e8@suse.com>
 <DA19A9EC-A828-4EBC-BCAA-D1D9E4F222BB@arm.com>
 <20200717144139.GU7191@Air-de-Roger>
 <90AE8DAB-2223-46DC-A263-D78365E5435E@arm.com>
 <20200717150507.GW7191@Air-de-Roger>
 <FBE040A9-D088-43D6-8929-FFEDE9DDDE34@arm.com>
 <20200717153043.GX7191@Air-de-Roger>
 <C5B2BDD5-E504-4871-8542-5BA8C051F699@arm.com>
 <20200717160834.GA7191@Air-de-Roger>
 <0c76b6a0-2242-3bbd-9740-75c5580e93e8@xen.org>
 <1dea1217-f884-0fe1-d339-95c5b473ae23@gmail.com>
From: Julien Grall <julien@xen.org>
Message-ID: <2fd6c418-db41-8070-5644-344fefd8128d@xen.org>
Date: Sat, 18 Jul 2020 12:24:04 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:68.0)
 Gecko/20100101 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <1dea1217-f884-0fe1-d339-95c5b473ae23@gmail.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Rahul Singh <Rahul.Singh@arm.com>,
 Julien Grall <julien.grall.oss@gmail.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 nd <nd@arm.com>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>



On 17/07/2020 20:17, Oleksandr wrote:
> I would like to clarify the IOMMU driver changes which should 
> be done to support PCI pass-through properly.
> 
> The design document mentions the SMMU, but Xen also supports IPMMU-VMSA 
> (under tech preview now). It would be really nice if the required 
> support were extended to that kind of IOMMU as well.
> 
> May I clarify what should be implemented in the Xen driver in order to 
> support the PCI pass-through feature on Arm? 

I would expect callbacks to:
     - add a PCI device
     - remove a PCI device
     - assign a PCI device
     - deassign a PCI device

AFAICT, they already exist, so it is a matter of plumbing. It 
would then be up to the driver to configure the IOMMU correctly.

> Should the IOMMU H/W be 
> "PCI-aware" for that purpose?

The only requirement is that your PCI devices are behind an IOMMU :). 
Other than that, the IOMMU can mostly be configured the same way as you 
would for non-PCI devices. The main difference would be how you 
find the master ID.

I am aware that on some platforms the master ID may be shared between 
multiple PCI devices. In that case, we would need a way to 
assign all the devices to the same guest (maybe using groups?).

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Sat Jul 18 11:32:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 18 Jul 2020 11:32:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jwl4e-0001dg-1k; Sat, 18 Jul 2020 11:32:12 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=BrZA=A5=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jwl4c-0001db-JS
 for xen-devel@lists.xenproject.org; Sat, 18 Jul 2020 11:32:10 +0000
X-Inumbo-ID: 5149b3bc-c8ea-11ea-bca7-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5149b3bc-c8ea-11ea-bca7-bc764e2007e4;
 Sat, 18 Jul 2020 11:32:10 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=ssvoD9JZruOrEm0SPhSB2xinvmxMn18+8nq81L5UHRQ=; b=xN8WCjH7Ok5UIpSd8U/8uCQUuI
 VSmi3uF5qNksGIdLSZt3GwEGvyShV9aYzOB24EUjoMgL8Opw/haqTWd2OE4dU5YrQmAeVwblgI4ye
 VEtCo0hq9E8/uD5zRM4sKUOiNBDasbrgEOkt0ShFrq04/Yr9ckEOzA5Dm9nJczw/Zi5g=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jwl4a-000443-S9; Sat, 18 Jul 2020 11:32:08 +0000
Received: from cpc91186-cmbg18-2-0-cust22.5-4.cable.virginm.net ([80.1.50.23]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jwl4a-0006Su-Gp; Sat, 18 Jul 2020 11:32:08 +0000
Subject: Re: PCI devices passthrough on Arm design proposal
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>
References: <3F6E40FB-79C5-4AE8-81CA-E16CA37BB298@arm.com>
 <BD475825-10F6-4538-8294-931E370A602C@arm.com>
 <8ac91a1b-e6b3-0f2b-0f23-d7aff100936d@xen.org>
 <c7d5a084-8111-9f43-57e1-bcf2bd822f5b@xen.org>
 <865D5A77-85D4-4A88-A228-DDB70BDB3691@arm.com>
 <972c0c81-6595-7c41-baa5-8882f5d1c0ff@xen.org>
 <4E6B793C-2E0A-4999-9842-24CDCDE43903@arm.com>
 <20200717160550.GZ7191@Air-de-Roger>
From: Julien Grall <julien@xen.org>
Message-ID: <36d5f1a0-bfd6-45af-662e-2820e2bea08b@xen.org>
Date: Sat, 18 Jul 2020 12:32:06 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:68.0)
 Gecko/20100101 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20200717160550.GZ7191@Air-de-Roger>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 nd <nd@arm.com>, Rahul Singh <Rahul.Singh@arm.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien.grall.oss@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi,

On 17/07/2020 17:05, Roger Pau Monné wrote:
> IIRC Julien even said that Arm was likely to require far fewer traps
> than x86 for accesses to MSI and MSI-X, since you could allow untrusted
> guests to write directly to the registers as there's another piece of
> hardware that would already translate the interrupts?

This is correct in the case of the ITS. This is because the hardware 
will tag the message with the device ID. So there is no way to spoof it.

However, this may not be the case for other MSI controllers. For 
instance, in the case of the GICv2m, I think we will need to trap and 
sanitize the MSI message (see [1]).

[1] https://www.linaro.org/blog/kvm-pciemsi-passthrough-armarm64/

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Sat Jul 18 11:34:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 18 Jul 2020 11:34:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jwl6z-0001sI-Nq; Sat, 18 Jul 2020 11:34:37 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OolH=A5=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jwl6y-0001sC-TD
 for xen-devel@lists.xenproject.org; Sat, 18 Jul 2020 11:34:36 +0000
X-Inumbo-ID: a7d0987c-c8ea-11ea-b7bb-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a7d0987c-c8ea-11ea-b7bb-bc764e2007e4;
 Sat, 18 Jul 2020 11:34:35 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=umVMc/h0d7iJNWXq5d2mzPeLGZiw+XgiYKEVfP+VgXk=; b=BbykmLx2p8AP416nKTQs1Sydy
 pzQcQGvCOJKl59HeKGIlRDeQNSHgzx/GoZ6uv8fpZrgn6dfDm/80zsD6FYRuxXlPd8KE55DFiQCKN
 1aMiRuganawO6t30bzQ12HguLZq0H+67Mi/is1+abXz8loITkoIdqbidE49woRJR4d2OU=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jwl6w-000477-UQ; Sat, 18 Jul 2020 11:34:34 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jwl6w-0003Hp-JY; Sat, 18 Jul 2020 11:34:34 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jwl6w-0005Ni-Il; Sat, 18 Jul 2020 11:34:34 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151975-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 151975: tolerable FAIL - PUSHED
X-Osstest-Failures: xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
X-Osstest-Versions-This: xen=fb024b779336a0f73b3aee885b2ce082e812881f
X-Osstest-Versions-That: xen=f8fe3c07363d11fc81d8e7382dbcaa357c861569
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 18 Jul 2020 11:34:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151975 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151975/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 151957
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 151957
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 151957
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 151957
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 151957
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 151957
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 151957
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 151957
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop             fail like 151957
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass

version targeted for testing:
 xen                  fb024b779336a0f73b3aee885b2ce082e812881f
baseline version:
 xen                  f8fe3c07363d11fc81d8e7382dbcaa357c861569

Last test of basis   151957  2020-07-17 04:42:18 Z    1 days
Testing same since   151975  2020-07-17 19:36:41 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   f8fe3c0736..fb024b7793  fb024b779336a0f73b3aee885b2ce082e812881f -> master


From xen-devel-bounces@lists.xenproject.org Sat Jul 18 13:17:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 18 Jul 2020 13:17:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jwmiH-0001ts-MQ; Sat, 18 Jul 2020 13:17:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OolH=A5=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jwmiG-0001tW-OG
 for xen-devel@lists.xenproject.org; Sat, 18 Jul 2020 13:17:12 +0000
X-Inumbo-ID: f9ceb77c-c8f8-11ea-8496-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f9ceb77c-c8f8-11ea-8496-bc764e2007e4;
 Sat, 18 Jul 2020 13:17:05 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=Zr8kXQyMcHXTnbu8JgSloBZ9tJZJcJ/Fd1tXJW6pzOo=; b=YFwCbe1P3EuaTI6Wxht6HFX14
 tnkf9BEqScw+/DIQWs+lSTYwiMRG7xyM1yTNa2IHWAgeVKGwbMn/G3/CXth9C/81wt9EW5pwM5YxI
 suE1hKAxEi6uNMTzdtw7qjTJEdHk+FFyjRXunazm75Yt+AFUZ+t5a4Are4UpKEBoGkSFA=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jwmi9-0006CA-0v; Sat, 18 Jul 2020 13:17:05 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jwmi8-0000yc-MY; Sat, 18 Jul 2020 13:17:04 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jwmi8-0000IZ-M3; Sat, 18 Jul 2020 13:17:04 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151978-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 151978: regressions - FAIL
X-Osstest-Failures: linux-linus:test-arm64-arm64-libvirt-xsm:guest-start/debian.repeat:fail:regression
 linux-linus:build-i386-pvops:kernel-build:fail:regression
 linux-linus:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:allowable
 linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-pair:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-examine:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: linux=4ebf8d7649cd86c41c41bf48da4b7761da2d5009
X-Osstest-Versions-That: linux=1b5044021070efa3259f3e9548dc35d1eb6aa844
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 18 Jul 2020 13:17:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151978 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151978/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-libvirt-xsm 16 guest-start/debian.repeat fail REGR. vs. 151214
 build-i386-pvops              6 kernel-build             fail REGR. vs. 151214

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds    16 guest-start/debian.repeat fail REGR. vs. 151214

Tests which did not succeed, but are not blocking:
 test-amd64-i386-qemut-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl-qemut-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-examine       1 build-check(1)               blocked  n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 151214
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 151214
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 151214
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 151214
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 151214
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 151214
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                4ebf8d7649cd86c41c41bf48da4b7761da2d5009
baseline version:
 linux                1b5044021070efa3259f3e9548dc35d1eb6aa844

Last test of basis   151214  2020-06-18 02:27:46 Z   30 days
Failing since        151236  2020-06-19 19:10:35 Z   28 days   45 attempts
Testing same since   151978  2020-07-17 22:40:42 Z    0 days    1 attempts

------------------------------------------------------------
794 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             fail    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         blocked 
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 42236 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Jul 18 15:34:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 18 Jul 2020 15:34:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jwoqj-00056a-0y; Sat, 18 Jul 2020 15:34:05 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OolH=A5=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jwoqi-00056V-B3
 for xen-devel@lists.xenproject.org; Sat, 18 Jul 2020 15:34:04 +0000
X-Inumbo-ID: 1c06397e-c90c-11ea-b7bb-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1c06397e-c90c-11ea-b7bb-bc764e2007e4;
 Sat, 18 Jul 2020 15:34:03 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=lO34XNcHggXHlIFLwysBX3g8kSgUxskgXLfbdncN/UM=; b=jsg5OeOhaC2Tt2oLee/ATLTqX
 ifhyMs1uFWH93J+hrQt1sd7k+9u5AOOtomE453SR16bY3lhGoEb1F1ks8j1a7Du+A9J9jWMxgmm8y
 bPizsXAKFWhlFWtludbz46dewacjRn7p6kIDoqy7YrlyIzmfiKIFjqn/dmfn6uXWyUu3Y=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jwoqh-0000Yk-8S; Sat, 18 Jul 2020 15:34:03 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jwoqg-0005ua-MQ; Sat, 18 Jul 2020 15:34:02 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jwoqg-00049q-L3; Sat, 18 Jul 2020 15:34:02 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151982-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 151982: all pass - PUSHED
X-Osstest-Versions-This: ovmf=3d8327496762b4f2a54c9bafd7a214314ec28e9e
X-Osstest-Versions-That: ovmf=6ff53d2a13740e39dea110d6b3509c156c659586
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 18 Jul 2020 15:34:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151982 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151982/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 3d8327496762b4f2a54c9bafd7a214314ec28e9e
baseline version:
 ovmf                 6ff53d2a13740e39dea110d6b3509c156c659586

Last test of basis   151972  2020-07-17 16:51:16 Z    0 days
Testing same since   151982  2020-07-18 02:36:30 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Gary Lin <glin@suse.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   6ff53d2a13..3d83274967  3d8327496762b4f2a54c9bafd7a214314ec28e9e -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Sat Jul 18 15:37:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 18 Jul 2020 15:37:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jwou7-0005EZ-J6; Sat, 18 Jul 2020 15:37:35 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OolH=A5=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jwou6-0005EU-S2
 for xen-devel@lists.xenproject.org; Sat, 18 Jul 2020 15:37:34 +0000
X-Inumbo-ID: 9944135c-c90c-11ea-b7bb-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9944135c-c90c-11ea-b7bb-bc764e2007e4;
 Sat, 18 Jul 2020 15:37:33 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=E2wTkjnKt5ytcvoLx2V+4jieuF2RsNvKIfNI4pGZfBo=; b=qmh8jkonVrIUFxcnD1K690X8w
 4hw6ScDI/iVVsvRxBkwmAk2jXbBg2Rn6MyZIs1XeYZek8BDoyHD+K1/2GbiuklDi344Ye3HA+9iYP
 XwpMYWM+FITs+HRZ3gnPN/vzrzs0IGhC+wPw5+BPWl3VlQv+BssbOgKlpxoDrSyIsKNM0=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jwou5-0000dz-3K; Sat, 18 Jul 2020 15:37:33 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jwou4-00063C-OZ; Sat, 18 Jul 2020 15:37:32 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jwou4-00017i-Nw; Sat, 18 Jul 2020 15:37:32 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151984-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 151984: regressions - FAIL
X-Osstest-Failures: libvirt:build-amd64-libvirt:libvirt-build:fail:regression
 libvirt:build-i386-libvirt:libvirt-build:fail:regression
 libvirt:build-arm64-libvirt:libvirt-build:fail:regression
 libvirt:build-armhf-libvirt:libvirt-build:fail:regression
 libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
X-Osstest-Versions-This: libvirt=a4da0b2ac65a9625134e7c953c38dbe1a6ca4053
X-Osstest-Versions-That: libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 18 Jul 2020 15:37:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151984 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151984/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              a4da0b2ac65a9625134e7c953c38dbe1a6ca4053
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z    8 days
Failing since        151818  2020-07-11 04:18:52 Z    7 days    8 attempts
Testing same since   151984  2020-07-18 04:18:56 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrea Bolognani <abologna@redhat.com>
  Bastien Orivel <bastien.orivel@diateam.net>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Jin Yan <jinyan12@huawei.com>
  Laine Stump <laine@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Stefan Berger <stefanb@linux.ibm.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1691 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Jul 18 18:18:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 18 Jul 2020 18:18:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jwrPv-0002JU-EG; Sat, 18 Jul 2020 18:18:35 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+0x8=A5=ens-lyon.org=samuel.thibault@srs-us1.protection.inumbo.net>)
 id 1jwrPt-0002JP-Tj
 for xen-devel@lists.xenproject.org; Sat, 18 Jul 2020 18:18:33 +0000
X-Inumbo-ID: 155b6196-c923-11ea-b7bb-bc764e2007e4
Received: from hera.aquilenet.fr (unknown [2a0c:e300::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 155b6196-c923-11ea-b7bb-bc764e2007e4;
 Sat, 18 Jul 2020 18:18:31 +0000 (UTC)
Received: from localhost (localhost [127.0.0.1])
 by hera.aquilenet.fr (Postfix) with ESMTP id 64CC32939;
 Sat, 18 Jul 2020 20:18:29 +0200 (CEST)
X-Virus-Scanned: Debian amavisd-new at aquilenet.fr
Received: from hera.aquilenet.fr ([127.0.0.1])
 by localhost (hera.aquilenet.fr [127.0.0.1]) (amavisd-new, port 10024)
 with ESMTP id 9Zsg89KWktZu; Sat, 18 Jul 2020 20:18:28 +0200 (CEST)
Received: from function.home (unknown
 [IPv6:2a01:cb19:956:1b00:9eb6:d0ff:fe88:c3c7])
 by hera.aquilenet.fr (Postfix) with ESMTPSA id 1F3C21A5D;
 Sat, 18 Jul 2020 20:18:28 +0200 (CEST)
Received: from samy by function.home with local (Exim 4.94)
 (envelope-from <samuel.thibault@ens-lyon.org>)
 id 1jwrPn-009BGe-23; Sat, 18 Jul 2020 20:18:27 +0200
Date: Sat, 18 Jul 2020 20:18:27 +0200
From: Samuel Thibault <samuel.thibault@ens-lyon.org>
To: Juergen Gross <jgross@suse.com>
Subject: Re: [PATCH v2] mini-os: don't hard-wire xen internal paths
Message-ID: <20200718181827.7jrs5ilutt3jzp4i@function>
Mail-Followup-To: Samuel Thibault <samuel.thibault@ens-lyon.org>,
 Juergen Gross <jgross@suse.com>, minios-devel@lists.xenproject.org,
 xen-devel@lists.xenproject.org, wl@xen.org
References: <20200713084230.18177-1-jgross@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20200713084230.18177-1-jgross@suse.com>
Organization: I am not organized
User-Agent: NeoMutt/20170609 (1.8.3)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: minios-devel@lists.xenproject.org, xen-devel@lists.xenproject.org,
 wl@xen.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Juergen Gross, on Mon. 13 Jul 2020 at 10:42:30 +0200, wrote:
> Mini-OS shouldn't use Xen internal paths for building. Import the
> needed paths from Xen and fall back to the current values only if
> the import was not possible.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>

Reviewed-by: Samuel Thibault <samuel.thibault@ens-lyon.org>

> ---
> V2: correct typo (XCALL_APTH -> CALL_PATH)
> ---
>  Config.mk | 15 ++++++++++++++-
>  Makefile  | 35 ++++++++++++++++++-----------------
>  2 files changed, 32 insertions(+), 18 deletions(-)
> 
> diff --git a/Config.mk b/Config.mk
> index f6a2afa..cb823c2 100644
> --- a/Config.mk
> +++ b/Config.mk
> @@ -33,6 +33,19 @@ endif
>  #
>  ifneq ($(XEN_ROOT),)
>  MINIOS_ROOT=$(XEN_ROOT)/extras/mini-os
> +
> +-include $(XEN_ROOT)/stubdom/mini-os.mk
> +
> +XENSTORE_CPPFLAGS ?= -isystem $(XEN_ROOT)/tools/xenstore/include
> +TOOLCORE_PATH ?= $(XEN_ROOT)/stubdom/libs-$(MINIOS_TARGET_ARCH)/toolcore
> +TOOLLOG_PATH ?= $(XEN_ROOT)/stubdom/libs-$(MINIOS_TARGET_ARCH)/toollog
> +EVTCHN_PATH ?= $(XEN_ROOT)/stubdom/libs-$(MINIOS_TARGET_ARCH)/evtchn
> +GNTTAB_PATH ?= $(XEN_ROOT)/stubdom/libs-$(MINIOS_TARGET_ARCH)/gnttab
> +CALL_PATH ?= $(XEN_ROOT)/stubdom/libs-$(MINIOS_TARGET_ARCH)/call
> +FOREIGNMEMORY_PATH ?= $(XEN_ROOT)/stubdom/libs-$(MINIOS_TARGET_ARCH)/foreignmemory
> +DEVICEMODEL_PATH ?= $(XEN_ROOT)/stubdom/libs-$(MINIOS_TARGET_ARCH)/devicemodel
> +CTRL_PATH ?= $(XEN_ROOT)/stubdom/libxc-$(MINIOS_TARGET_ARCH)
> +GUEST_PATH ?= $(XEN_ROOT)/stubdom/libxc-$(MINIOS_TARGET_ARCH)
>  else
>  MINIOS_ROOT=$(TOPLEVEL_DIR)
>  endif
> @@ -93,7 +106,7 @@ DEF_CPPFLAGS += -D__MINIOS__
>  ifeq ($(libc),y)
>  DEF_CPPFLAGS += -DHAVE_LIBC
>  DEF_CPPFLAGS += -isystem $(MINIOS_ROOT)/include/posix
> -DEF_CPPFLAGS += -isystem $(XEN_ROOT)/tools/xenstore/include
> +DEF_CPPFLAGS += $(XENSTORE_CPPFLAGS)
>  endif
>  
>  ifneq ($(LWIPDIR),)
> diff --git a/Makefile b/Makefile
> index be640cd..4b76b55 100644
> --- a/Makefile
> +++ b/Makefile
> @@ -125,23 +125,24 @@ OBJS := $(filter-out $(OBJ_DIR)/lwip%.o $(LWO), $(OBJS))
>  
>  ifeq ($(libc),y)
>  ifeq ($(CONFIG_XC),y)
> -APP_LDLIBS += -L$(XEN_ROOT)/stubdom/libs-$(MINIOS_TARGET_ARCH)/toolcore -whole-archive -lxentoolcore -no-whole-archive
> -LIBS += $(XEN_ROOT)/stubdom/libs-$(MINIOS_TARGET_ARCH)/toolcore/libxentoolcore.a
> -APP_LDLIBS += -L$(XEN_ROOT)/stubdom/libs-$(MINIOS_TARGET_ARCH)/toollog -whole-archive -lxentoollog -no-whole-archive
> -LIBS += $(XEN_ROOT)/stubdom/libs-$(MINIOS_TARGET_ARCH)/toollog/libxentoollog.a
> -APP_LDLIBS += -L$(XEN_ROOT)/stubdom/libs-$(MINIOS_TARGET_ARCH)/evtchn -whole-archive -lxenevtchn -no-whole-archive
> -LIBS += $(XEN_ROOT)/stubdom/libs-$(MINIOS_TARGET_ARCH)/evtchn/libxenevtchn.a
> -APP_LDLIBS += -L$(XEN_ROOT)/stubdom/libs-$(MINIOS_TARGET_ARCH)/gnttab -whole-archive -lxengnttab -no-whole-archive
> -LIBS += $(XEN_ROOT)/stubdom/libs-$(MINIOS_TARGET_ARCH)/gnttab/libxengnttab.a
> -APP_LDLIBS += -L$(XEN_ROOT)/stubdom/libs-$(MINIOS_TARGET_ARCH)/call -whole-archive -lxencall -no-whole-archive
> -LIBS += $(XEN_ROOT)/stubdom/libs-$(MINIOS_TARGET_ARCH)/call/libxencall.a
> -APP_LDLIBS += -L$(XEN_ROOT)/stubdom/libs-$(MINIOS_TARGET_ARCH)/foreignmemory -whole-archive -lxenforeignmemory -no-whole-archive
> -LIBS += $(XEN_ROOT)/stubdom/libs-$(MINIOS_TARGET_ARCH)/foreignmemory/libxenforeignmemory.a
> -APP_LDLIBS += -L$(XEN_ROOT)/stubdom/libs-$(MINIOS_TARGET_ARCH)/devicemodel -whole-archive -lxendevicemodel -no-whole-archive
> -LIBS += $(XEN_ROOT)/stubdom/libs-$(MINIOS_TARGET_ARCH)/devicemodel/libxendevicemodel.a
> -APP_LDLIBS += -L$(XEN_ROOT)/stubdom/libxc-$(MINIOS_TARGET_ARCH) -whole-archive -lxenguest -lxenctrl -no-whole-archive
> -LIBS += $(XEN_ROOT)/stubdom/libxc-$(MINIOS_TARGET_ARCH)/libxenctrl.a
> -LIBS += $(XEN_ROOT)/stubdom/libxc-$(MINIOS_TARGET_ARCH)/libxenguest.a
> +APP_LDLIBS += -L$(TOOLCORE_PATH) -whole-archive -lxentoolcore -no-whole-archive
> +LIBS += $(TOOLCORE_PATH)/libxentoolcore.a
> +APP_LDLIBS += -L$(TOOLLOG_PATH) -whole-archive -lxentoollog -no-whole-archive
> +LIBS += $(TOOLLOG_PATH)/libxentoollog.a
> +APP_LDLIBS += -L$(EVTCHN_PATH) -whole-archive -lxenevtchn -no-whole-archive
> +LIBS += $(EVTCHN_PATH)/libxenevtchn.a
> +APP_LDLIBS += -L$(GNTTAB_PATH) -whole-archive -lxengnttab -no-whole-archive
> +LIBS += $(GNTTAB_PATH)/libxengnttab.a
> +APP_LDLIBS += -L$(CALL_PATH) -whole-archive -lxencall -no-whole-archive
> +LIBS += $(CALL_PATH)/libxencall.a
> +APP_LDLIBS += -L$(FOREIGNMEMORY_PATH) -whole-archive -lxenforeignmemory -no-whole-archive
> +LIBS += $(FOREIGNMEMORY_PATH)/libxenforeignmemory.a
> +APP_LDLIBS += -L$(DEVICEMODEL_PATH) -whole-archive -lxendevicemodel -no-whole-archive
> +LIBS += $(DEVICEMODEL_PATH)/libxendevicemodel.a
> +APP_LDLIBS += -L$(GUEST_PATH) -whole-archive -lxenguest -no-whole-archive
> +LIBS += $(GUEST_PATH)/libxenguest.a
> +APP_LDLIBS += -L$(CTRL_PATH) -whole-archive -lxenctrl -no-whole-archive
> +LIBS += $(CTRL_PATH)/libxenctrl.a
>  endif
>  APP_LDLIBS += -lpci
>  APP_LDLIBS += -lz
> -- 
> 2.26.2
> 
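[Editorial note: the `-include` + `?=` pattern in the Config.mk hunk above can be exercised standalone. The sketch below uses made-up file and path names (`paths.mk`, `/fallback/call`), not the real stubdom/mini-os.mk contents; it only demonstrates that a missing include is ignored and that `?=` yields to an imported value.]

```shell
# Minimal sketch of optional include with ?= fallback, as in the patch.
set -e
tmp=$(mktemp -d)
cd "$tmp"

cat > Makefile <<'EOF'
-include paths.mk
CALL_PATH ?= /fallback/call
$(info $(CALL_PATH))
all: ;
EOF

# No paths.mk yet: -include silently skips it, ?= supplies the default.
no_import=$(make -s)

# Provide the file: its value is set before ?=, which then does nothing.
printf 'CALL_PATH = /imported/call\n' > paths.mk
with_import=$(make -s)

echo "$no_import $with_import"   # prints "/fallback/call /imported/call"
```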

-- 
Samuel
 Moral: the modem and the cable router are like the girls, they
 chatter all day long.
 -+- RB in NPC: And what's more, they only talk about bits -+-


From xen-devel-bounces@lists.xenproject.org Sat Jul 18 18:18:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 18 Jul 2020 18:18:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jwrQH-0002Kx-Oc; Sat, 18 Jul 2020 18:18:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+0x8=A5=ens-lyon.org=samuel.thibault@srs-us1.protection.inumbo.net>)
 id 1jwrQG-0002Ka-6j
 for xen-devel@lists.xenproject.org; Sat, 18 Jul 2020 18:18:56 +0000
X-Inumbo-ID: 23d12cb0-c923-11ea-bb8b-bc764e2007e4
Received: from hera.aquilenet.fr (unknown [2a0c:e300::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 23d12cb0-c923-11ea-bb8b-bc764e2007e4;
 Sat, 18 Jul 2020 18:18:55 +0000 (UTC)
Received: from localhost (localhost [127.0.0.1])
 by hera.aquilenet.fr (Postfix) with ESMTP id C768B2939;
 Sat, 18 Jul 2020 20:18:54 +0200 (CEST)
X-Virus-Scanned: Debian amavisd-new at aquilenet.fr
Received: from hera.aquilenet.fr ([127.0.0.1])
 by localhost (hera.aquilenet.fr [127.0.0.1]) (amavisd-new, port 10024)
 with ESMTP id 7D08i3-8rTd8; Sat, 18 Jul 2020 20:18:54 +0200 (CEST)
Received: from function.home (unknown
 [IPv6:2a01:cb19:956:1b00:9eb6:d0ff:fe88:c3c7])
 by hera.aquilenet.fr (Postfix) with ESMTPSA id 12AAC1A5D;
 Sat, 18 Jul 2020 20:18:54 +0200 (CEST)
Received: from samy by function.home with local (Exim 4.94)
 (envelope-from <samuel.thibault@ens-lyon.org>)
 id 1jwrQD-009BGv-5H; Sat, 18 Jul 2020 20:18:53 +0200
Date: Sat, 18 Jul 2020 20:18:53 +0200
From: Samuel Thibault <samuel.thibault@ens-lyon.org>
To: incoming+61544a64d0c2dc4555813e58f3810dd7@incoming.gitlab.com,
 xen-devel@lists.xenproject.org
Subject: Re: [PATCH 01/12] stubdom: add stubdom/mini-os.mk for Xen paths used
 by Mini-OS
Message-ID: <20200718181853.t4zkpvnngrfcs2r4@function>
Mail-Followup-To: Samuel Thibault <samuel.thibault@ens-lyon.org>,
 incoming+61544a64d0c2dc4555813e58f3810dd7@incoming.gitlab.com,
 xen-devel@lists.xenproject.org, ian.jackson@eu.citrix.com,
 Juergen Gross <jgross@suse.com>
References: <20200715162511.5941-1-ian.jackson@eu.citrix.com>
 <20200715162511.5941-3-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20200715162511.5941-3-ian.jackson@eu.citrix.com>
Organization: I am not organized
User-Agent: NeoMutt/20170609 (1.8.3)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>, ian.jackson@eu.citrix.com
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Ian Jackson, on Wed. 15 Jul 2020 at 17:25:00 +0100, wrote:
> From: Juergen Gross <jgross@suse.com>
> 
> stubdom/mini-os.mk should contain paths used by Mini-OS when built as
> stubdom.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>

Reviewed-by: Samuel Thibault <samuel.thibault@ens-lyon.org>

> ---
>  stubdom/mini-os.mk | 17 +++++++++++++++++
>  1 file changed, 17 insertions(+)
>  create mode 100644 stubdom/mini-os.mk
> 
> diff --git a/stubdom/mini-os.mk b/stubdom/mini-os.mk
> new file mode 100644
> index 0000000000..32528bb91f
> --- /dev/null
> +++ b/stubdom/mini-os.mk
> @@ -0,0 +1,17 @@
> +# Included by Mini-OS stubdom builds to set variables depending on Xen
> +# internal paths.
> +#
> +# Input variables are:
> +# XEN_ROOT
> +# MINIOS_TARGET_ARCH
> +
> +XENSTORE_CPPFLAGS = -isystem $(XEN_ROOT)/tools/xenstore/include
> +TOOLCORE_PATH = $(XEN_ROOT)/stubdom/libs-$(MINIOS_TARGET_ARCH)/toolcore
> +TOOLLOG_PATH = $(XEN_ROOT)/stubdom/libs-$(MINIOS_TARGET_ARCH)/toollog
> +EVTCHN_PATH = $(XEN_ROOT)/stubdom/libs-$(MINIOS_TARGET_ARCH)/evtchn
> +GNTTAB_PATH = $(XEN_ROOT)/stubdom/libs-$(MINIOS_TARGET_ARCH)/gnttab
> +CALL_PATH = $(XEN_ROOT)/stubdom/libs-$(MINIOS_TARGET_ARCH)/call
> +FOREIGNMEMORY_PATH = $(XEN_ROOT)/stubdom/libs-$(MINIOS_TARGET_ARCH)/foreignmemory
> +DEVICEMODEL_PATH = $(XEN_ROOT)/stubdom/libs-$(MINIOS_TARGET_ARCH)/devicemodel
> +CTRL_PATH = $(XEN_ROOT)/stubdom/libxc-$(MINIOS_TARGET_ARCH)
> +GUEST_PATH = $(XEN_ROOT)/stubdom/libxc-$(MINIOS_TARGET_ARCH)
> -- 
> 2.20.1
> 
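[Editorial note: the mini-os.mk above assigns its paths with recursive `=` rather than `:=`, so expansion of `$(XEN_ROOT)` and `$(MINIOS_TARGET_ARCH)` is deferred to the point of use. The sketch below demonstrates that property with made-up values (`/src/xen`, `x86_64`); it is not the actual stubdom build.]

```shell
# Sketch: recursive '=' lets the input variables be assigned after the include.
set -e
tmp=$(mktemp -d)
cd "$tmp"

# Same shape as one line of the patch; values expand lazily.
printf 'CTRL_PATH = $(XEN_ROOT)/stubdom/libxc-$(MINIOS_TARGET_ARCH)\n' > mini-os.mk

cat > Makefile <<'EOF'
include mini-os.mk
XEN_ROOT = /src/xen
MINIOS_TARGET_ARCH = x86_64
$(info $(CTRL_PATH))
all: ;
EOF

ctrl=$(make -s)
echo "$ctrl"   # prints "/src/xen/stubdom/libxc-x86_64"
```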

-- 
Samuel
 Creating an additional hierarchy to remedy a (?) problem of
 dispersion is logic worthy of the Shadoks.
 * BT in: Guide du Cabaliste Usenet - The Cabal votes yes (the Shadoks too) *


From xen-devel-bounces@lists.xenproject.org Sat Jul 18 18:20:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 18 Jul 2020 18:20:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jwrRx-00038Z-5q; Sat, 18 Jul 2020 18:20:41 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=++/D=A5=xen.org=tim@srs-us1.protection.inumbo.net>)
 id 1jwrRv-00038R-UD
 for xen-devel@lists.xenproject.org; Sat, 18 Jul 2020 18:20:39 +0000
X-Inumbo-ID: 619f2510-c923-11ea-b7bb-bc764e2007e4
Received: from deinos.phlegethon.org (unknown [2001:41d0:8:b1d7::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 619f2510-c923-11ea-b7bb-bc764e2007e4;
 Sat, 18 Jul 2020 18:20:39 +0000 (UTC)
Received: from tjd by deinos.phlegethon.org with local (Exim 4.92.3 (FreeBSD))
 (envelope-from <tim@xen.org>)
 id 1jwrRt-000Cou-3S; Sat, 18 Jul 2020 18:20:37 +0000
Date: Sat, 18 Jul 2020 19:20:37 +0100
From: Tim Deegan <tim@xen.org>
To: Jan Beulich <jbeulich@suse.com>
Subject: Re: [PATCH 5/5] x86/shadow: l3table[] and gl3e[] are HVM only
Message-ID: <20200718182037.GA48915@deinos.phlegethon.org>
References: <a4dc8db4-0388-a922-838e-42c6f4635639@suse.com>
 <a3b9b496-e860-e657-2afc-c0658871fa3f@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
In-Reply-To: <a3b9b496-e860-e657-2afc-c0658871fa3f@suse.com>
User-Agent: Mutt/1.11.1 (2018-12-01)
X-SA-Known-Good: Yes
X-SA-Exim-Connect-IP: <locally generated>
X-SA-Exim-Mail-From: tim@xen.org
X-SA-Exim-Scanned: No (on deinos.phlegethon.org);
 SAEximRunCond expanded to false
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Roger Pau Monné <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

At 12:00 +0200 on 15 Jul (1594814409), Jan Beulich wrote:
> ... by the very fact that they're 3-level specific, while PV always gets
> run in 4-level mode. This requires adding some seemingly redundant
> #ifdef-s - some of them will be possible to drop again once 2- and
> 3-level guest code doesn't get built anymore in !HVM configs, but I'm
> afraid there's still quite a bit of disentangling work to be done to
> make this possible.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Looks good.  It seems like the new code for '3-level non-HVM' in
guest-walks ought to have some sort of assert-unreachable in it too
- or is there a reason not to?

Cheers,

Tim.



From xen-devel-bounces@lists.xenproject.org Sat Jul 18 18:21:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 18 Jul 2020 18:21:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jwrT0-0003Ee-Gm; Sat, 18 Jul 2020 18:21:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+0x8=A5=ens-lyon.org=samuel.thibault@srs-us1.protection.inumbo.net>)
 id 1jwrSz-0003EY-Ri
 for xen-devel@lists.xenproject.org; Sat, 18 Jul 2020 18:21:45 +0000
X-Inumbo-ID: 89008842-c923-11ea-bb8b-bc764e2007e4
Received: from hera.aquilenet.fr (unknown [2a0c:e300::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 89008842-c923-11ea-bb8b-bc764e2007e4;
 Sat, 18 Jul 2020 18:21:45 +0000 (UTC)
Received: from localhost (localhost [127.0.0.1])
 by hera.aquilenet.fr (Postfix) with ESMTP id 640521F66;
 Sat, 18 Jul 2020 20:21:44 +0200 (CEST)
X-Virus-Scanned: Debian amavisd-new at aquilenet.fr
Received: from hera.aquilenet.fr ([127.0.0.1])
 by localhost (hera.aquilenet.fr [127.0.0.1]) (amavisd-new, port 10024)
 with ESMTP id uaEEI_3M08tS; Sat, 18 Jul 2020 20:21:43 +0200 (CEST)
Received: from function (unknown [IPv6:2a01:cb19:956:1b00:9eb6:d0ff:fe88:c3c7])
 by hera.aquilenet.fr (Postfix) with ESMTPSA id 91BA71E09;
 Sat, 18 Jul 2020 20:21:43 +0200 (CEST)
Received: from samy by function with local (Exim 4.94)
 (envelope-from <samuel.thibault@ens-lyon.org>)
 id 1jwrSw-009BHR-Hz; Sat, 18 Jul 2020 20:21:42 +0200
Date: Sat, 18 Jul 2020 20:21:42 +0200
From: Samuel Thibault <samuel.thibault@ens-lyon.org>
To: incoming+61544a64d0c2dc4555813e58f3810dd7@incoming.gitlab.com,
 xen-devel@lists.xenproject.org
Subject: Re: [PATCH 08/12] tools: move libxenctrl below tools/libs
Message-ID: <20200718182142.fgxmhayj6hxp26h2@function>
Mail-Followup-To: Samuel Thibault <samuel.thibault@ens-lyon.org>,
 incoming+61544a64d0c2dc4555813e58f3810dd7@incoming.gitlab.com,
 xen-devel@lists.xenproject.org, xen-devel@dornerworks.com,
 ian.jackson@eu.citrix.com, Juergen Gross <jgross@suse.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Jan Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Josh Whitehead <josh.whitehead@dornerworks.com>,
 Stewart Hildebrand <stewart.hildebrand@dornerworks.com>,
 Anthony PERARD <anthony.perard@citrix.com>,
 Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
References: <20200715162511.5941-1-ian.jackson@eu.citrix.com>
 <20200715162511.5941-10-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20200715162511.5941-10-ian.jackson@eu.citrix.com>
Organization: I am not organized
User-Agent: NeoMutt/20170609 (1.8.3)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@dornerworks.com, Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 ian.jackson@eu.citrix.com, George Dunlap <george.dunlap@citrix.com>,
 Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>,
 Anthony PERARD <anthony.perard@citrix.com>,
 Josh Whitehead <josh.whitehead@dornerworks.com>,
 Jan Beulich <jbeulich@suse.com>,
 Stewart Hildebrand <stewart.hildebrand@dornerworks.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Ian Jackson, on Wed. 15 Jul 2020 at 17:25:07 +0100, wrote:
> From: Juergen Gross <jgross@suse.com>
> 
> Today tools/libxc needs to be built after tools/libs as libxenctrl is
> depending on some libraries in tools/libs. This in turn blocks moving
> other libraries depending on libxenctrl below tools/libs.
> 
> So carve out libxenctrl from tools/libxc and move it into
> tools/libs/ctrl.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>

>  stubdom/Makefile                              | 29 +++++-
>  stubdom/mini-os.mk                            |  2 +-

For that part,

Reviewed-by: Samuel Thibault <samuel.thibault@ens-lyon.org>


From xen-devel-bounces@lists.xenproject.org Sat Jul 18 18:21:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 18 Jul 2020 18:21:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jwrT5-0003Fh-Qn; Sat, 18 Jul 2020 18:21:51 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=++/D=A5=xen.org=tim@srs-us1.protection.inumbo.net>)
 id 1jwrT4-0003EY-Qf
 for xen-devel@lists.xenproject.org; Sat, 18 Jul 2020 18:21:50 +0000
X-Inumbo-ID: 8acb993c-c923-11ea-b7bb-bc764e2007e4
Received: from deinos.phlegethon.org (unknown [2001:41d0:8:b1d7::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8acb993c-c923-11ea-b7bb-bc764e2007e4;
 Sat, 18 Jul 2020 18:21:48 +0000 (UTC)
Received: from tjd by deinos.phlegethon.org with local (Exim 4.92.3 (FreeBSD))
 (envelope-from <tim@xen.org>)
 id 1jwrT1-000CpL-IE; Sat, 18 Jul 2020 18:21:47 +0000
Date: Sat, 18 Jul 2020 19:21:47 +0100
From: Tim Deegan <tim@xen.org>
To: Jan Beulich <jbeulich@suse.com>
Subject: Re: [PATCH 0/5] x86: mostly shadow related XSA-319 follow-up
Message-ID: <20200718182147.GB48915@deinos.phlegethon.org>
References: <a4dc8db4-0388-a922-838e-42c6f4635639@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
In-Reply-To: <a4dc8db4-0388-a922-838e-42c6f4635639@suse.com>
User-Agent: Mutt/1.11.1 (2018-12-01)
X-SA-Known-Good: Yes
X-SA-Exim-Connect-IP: <locally generated>
X-SA-Exim-Mail-From: tim@xen.org
X-SA-Exim-Scanned: No (on deinos.phlegethon.org);
 SAEximRunCond expanded to false
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

At 11:56 +0200 on 15 Jul (1594814214), Jan Beulich wrote:
> This in particular goes a few small steps further towards proper
> !HVM and !PV config handling (i.e. no carrying of unnecessary
> baggage).
> 
> 1: x86/shadow: dirty VRAM tracking is needed for HVM only
> 2: x86/shadow: shadow_table[] needs only one entry for PV-only configs
> 3: x86/PV: drop a few misleading paging_mode_refcounts() checks
> 4: x86/shadow: have just a single instance of sh_set_toplevel_shadow()
> 5: x86/shadow: l3table[] and gl3e[] are HVM only

I sent a question on #5 separately; otherwise these all seem good to
me, thank you!

Acked-by: Tim Deegan <tim@xen.org>


From xen-devel-bounces@lists.xenproject.org Sat Jul 18 21:46:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 18 Jul 2020 21:46:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jwuew-0003CI-Se; Sat, 18 Jul 2020 21:46:18 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OolH=A5=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jwuev-0003CD-4M
 for xen-devel@lists.xenproject.org; Sat, 18 Jul 2020 21:46:17 +0000
X-Inumbo-ID: 19eca4a0-c940-11ea-97d3-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 19eca4a0-c940-11ea-97d3-12813bfff9fa;
 Sat, 18 Jul 2020 21:46:13 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=bfXd5xgl4RMZVGWPyXqJJQfLT7WsNeujGjr0Nbsb51Y=; b=e3QWtoqEBHhoDYhdNFSSv+1hO
 635o2KfIW9skaJRGEMR39bhHQBxNdrVgS7Z8iSuhQAzGtNxIwmq3awcS1KLxNO3ouyaB2NYbHkVZp
 Ug0w86aEs3OX14Cd38zu6KUOqljJbDrwuNbVacDBe7kuCe8z850VbVBolk+o6Aaphrmsk=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jwuer-0000IM-Co; Sat, 18 Jul 2020 21:46:13 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jwuer-0001uZ-14; Sat, 18 Jul 2020 21:46:13 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jwuer-0002aB-0P; Sat, 18 Jul 2020 21:46:13 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151988-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 151988: regressions - FAIL
X-Osstest-Failures: qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:redhat-install:fail:regression
 qemu-mainline:test-amd64-i386-freebsd10-i386:guest-start:fail:regression
 qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-start:fail:regression
 qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:redhat-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:windows-install:fail:regression
 qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: qemuu=b7bda69c4ef46c57480f6e378923f5215b122778
X-Osstest-Versions-That: qemuu=9e3903136d9acde2fb2dd9e967ba928050a6cb4a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 18 Jul 2020 21:46:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151988 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151988/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemuu-ovmf-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-qemuu-nested-intel 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-ovmf-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-debianhvm-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-qemuu-rhel6hvm-intel 10 redhat-install   fail REGR. vs. 151065
 test-amd64-i386-freebsd10-i386 11 guest-start            fail REGR. vs. 151065
 test-amd64-i386-freebsd10-amd64 11 guest-start           fail REGR. vs. 151065
 test-amd64-amd64-qemuu-nested-amd 10 debian-hvm-install  fail REGR. vs. 151065
 test-amd64-i386-qemuu-rhel6hvm-amd 10 redhat-install     fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-win7-amd64 10 windows-install  fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-win7-amd64 10 windows-install   fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-ws16-amd64 10 windows-install  fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-ws16-amd64 10 windows-install   fail REGR. vs. 151065

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 151065
 test-armhf-armhf-xl-rtds     16 guest-start/debian.repeat    fail  like 151065
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 151065
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                b7bda69c4ef46c57480f6e378923f5215b122778
baseline version:
 qemuu                9e3903136d9acde2fb2dd9e967ba928050a6cb4a

Last test of basis   151065  2020-06-12 22:27:51 Z   35 days
Failing since        151101  2020-06-14 08:32:51 Z   34 days   47 attempts
Testing same since   151968  2020-07-17 14:52:32 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Ahmed Karaman <ahmedkhaledkaraman@gmail.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.m.mail@gmail.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Richardson <Alexander.Richardson@cl.cam.ac.uk>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Boettcher <alexander.boettcher@genode-labs.com>
  Alexander Bulekov <alxndr@bu.edu>
  Alexandre Mergnat <amergnat@baylibre.com>
  Alexey Krasikov <alex-krasikov@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Allan Peramaki <aperamak@pp1.inet.fi>
  Andrew <andrew@daynix.com>
  Andrew Jones <drjones@redhat.com>
  Andrew Melnychenko <andrew@daynix.com>
  Ani Sinha <ani.sinha@nutanix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Ard Biesheuvel <ardb@kernel.org>
  Artyom Tarasenko <atar4qemu@gmail.com>
  Atish Patra <atish.patra@wdc.com>
  Aurelien Jarno <aurelien@aurel32.net>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Basil Salman <basil@daynix.com>
  Basil Salman <bsalman@redhat.com>
  Beata Michalska <beata.michalska@linaro.org>
  Bin Meng <bin.meng@windriver.com>
  Bin Meng <bmeng.cn@gmail.com>
  Cameron Esfahani <dirty@apple.com>
  Catherine A. Frederick <chocola@animebitch.es>
  Cathy Zhang <cathy.zhang@intel.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chenyi Qiang <chenyi.qiang@intel.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christophe de Dinechin <dinechin@redhat.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Cindy Lu <lulu@redhat.com>
  Claudio Fontana <cfontana@suse.de>
  Cleber Rosa <crosa@redhat.com>
  Colin Xu <colin.xu@intel.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniele Buono <dbuono@linux.vnet.ibm.com>
  David Carlier <devnexen@gmail.com>
  David Edmondson <david.edmondson@oracle.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Denis V. Lunev <den@openvz.org>
  Derek Su <dereksu@qnap.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Ed Robbins <E.J.C.Robbins@kent.ac.uk>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Emilio G. Cota <cota@braap.org>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  erik-smit <erik.lucas.smit@gmail.com>
  Evgeny Yakovlev <eyakovlev@virtuozzo.com>
  fangying <fangying1@huawei.com>
  Farhan Ali <alifm@linux.ibm.com>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frank Chang <frank.chang@sifive.com>
  Geoffrey McRae <geoff@hostfission.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Giuseppe Musacchio <thatlemon@gmail.com>
  Greg Kurz <groug@kaod.org>
  Guenter Roeck <linux@roeck-us.net>
  Gustavo Romero <gromero@linux.ibm.com>
  Halil Pasic <pasic@linux.ibm.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Howard Spoelstra <hsp.cat7@gmail.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Ian Jiang <ianjiang.ict@gmail.com>
  Igor Mammedov <imammedo@redhat.com>
  Jan Kiszka <jan.kiszka@siemens.com>
  Janne Grunau <j@jannau.net>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason Wang <jasowang@redhat.com>
  Jay Zhou <jianjay.zhou@huawei.com>
  Jean-Christophe Dubois <jcd@tribudubois.net>
  Jessica Clarke <jrtc27@jrtc27.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jingqi Liu <jingqi.liu@intel.com>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Joseph Myers <joseph@codesourcery.com>
  Josh Kunz <jkz@google.com>
  Juan Quintela <quintela@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Keqian Zhu <zhukeqian1@huawei.com>
  Kevin Wolf <kwolf@redhat.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Leif Lindholm <leif@nuviainc.com>
  Leonid Bloch <lb.workbox@gmail.com>
  Leonid Bloch <lbloch@janustech.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Qiang <liq3ea@gmail.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  lichun <lichun@ruijie.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  Like Xu <like.xu@linux.intel.com>
  Lingfeng Yang <lfy@google.com>
  Lingshan zhu <lingshan.zhu@intel.com>
  Liran Alon <liran.alon@oracle.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Lukas Straub <lukasstraub2@web.de>
  Luwei Kang <luwei.kang@intel.com>
  Maciej S. Szmigiero <maciej.szmigiero@oracle.com>
  Magnus Damm <magnus.damm@gmail.com>
  Mao Zhongyi <maozhongyi@cmss.chinamobile.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marcelo Tosatti <mtosatti@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Mario Smarduch <msmarduch@digitalocean.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Masahiro Yamada <masahiroy@kernel.org>
  Matus Kysel <mkysel@tachyum.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Maxime Coquelin <maxime.coquelin@redhat.com>
  Menno Lageman <menno.lageman@oracle.com>
  Michael Rolnik <mrolnik@gmail.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michele Denber <denber@mindspring.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Olaf Hering <olaf@aepfle.de>
  Pan Nengyuan <pannengyuan@huawei.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Radoslaw Biernacki <rad@semihalf.com>
  Raphael Norwitz <raphael.norwitz@nutanix.com>
  Richard Henderson <richard.henderson@linaro.org>
  Richard W.M. Jones <rjones@redhat.com>
  Riku Voipio <riku.voipio@linaro.org>
  Robert Foley <robert.foley@linaro.org>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Roman Kagan <rkagan@virtuozzo.com>
  Roman Kagan <rvkagan@yandex-team.ru>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Sarah Harris <S.E.Harris@kent.ac.uk>
  Sebastian Rasmussen <sebras@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Tao Xu <tao3.xu@intel.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tiwei Bie <tiwei.bie@intel.com>
  Tong Ho <tong.ho@xilinx.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  WangBowen <bowen.wang@intel.com>
  Wei Huang <wei.huang2@amd.com>
  Wei Wang <wei.w.wang@intel.com>
  Wentong Wu <wentong.wu@intel.com>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xie Yongji <xieyongji@bytedance.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Ying Fang <fangying1@huawei.com>
  Yoshinori Sato <ysato@users.sourceforge.jp>
  Yuri Benditovich <yuri.benditovich@daynix.com>
  Zhang Chen <chen.zhang@intel.com>
  Zheng Chuan <zhengchuan@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 29212 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Jul 19 00:33:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 19 Jul 2020 00:33:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jwxGm-00010K-3X; Sun, 19 Jul 2020 00:33:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UExf=A6=infradead.org=rdunlap@srs-us1.protection.inumbo.net>)
 id 1jwxGj-00010F-Kn
 for xen-devel@lists.xenproject.org; Sun, 19 Jul 2020 00:33:30 +0000
X-Inumbo-ID: 756fb076-c957-11ea-8496-bc764e2007e4
Received: from merlin.infradead.org (unknown [2001:8b0:10b:1231::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 756fb076-c957-11ea-8496-bc764e2007e4;
 Sun, 19 Jul 2020 00:33:26 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=infradead.org; s=merlin.20170209; h=Content-Transfer-Encoding:MIME-Version:
 Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:Content-Type:Content-ID:
 Content-Description:In-Reply-To:References;
 bh=ejhbUXRyUGcuxfHilbyRZHXTEvE6s41ebx55gOa3sr4=; b=JJAwQuBaSok0dLnO98Bcgkor9C
 vO8HkSOx6ZPvOn2dOFVTiS7E/yFQrdon8KXfn+O7hZPYPgtzRYOynx4TltF5G58lKDBX0BhhmDFo9
 16ZcOjrcvXVq6Bctnmah5JiGWDpOm2Fau3FS2JFaefCHUaJPIZEw8a2eaWmyH7RX/4svnfUt/u8ER
 RR5HjemBftX0T/6hfbHxIwW3imWhK3BX+xdbKoh7qUKaHo40cpKY41HebV/MjKGf2B5SIoDSFHPog
 NTo1eDaNsnNjtapMe96euBDZgE8QujB2v0rKE07Mx4X3DhhkipZ4ktiThra3yb7FmZgToVztSWbTa
 MLsiEENg==;
Received: from [2601:1c0:6280:3f0::19c2] (helo=smtpauth.infradead.org)
 by merlin.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1jwxGb-0003C7-Oe; Sun, 19 Jul 2020 00:33:22 +0000
From: Randy Dunlap <rdunlap@infradead.org>
To: linux-kernel@vger.kernel.org
Subject: [PATCH] xen/gntdev: gntdev.h: drop a duplicated word
Date: Sat, 18 Jul 2020 17:33:17 -0700
Message-Id: <20200719003317.21454-1-rdunlap@infradead.org>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Randy Dunlap <rdunlap@infradead.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Drop the repeated word "of" in a comment.

Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: xen-devel@lists.xenproject.org
---
 include/uapi/xen/gntdev.h |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

--- linux-next-20200717.orig/include/uapi/xen/gntdev.h
+++ linux-next-20200717/include/uapi/xen/gntdev.h
@@ -66,7 +66,7 @@ struct ioctl_gntdev_map_grant_ref {
 
 /*
  * Removes the grant references from the mapping table of an instance of
- * of gntdev. N.B. munmap() must be called on the relevant virtual address(es)
+ * gntdev. N.B. munmap() must be called on the relevant virtual address(es)
  * before this ioctl is called, or an error will result.
  */
 #define IOCTL_GNTDEV_UNMAP_GRANT_REF \


From xen-devel-bounces@lists.xenproject.org Sun Jul 19 01:50:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 19 Jul 2020 01:50:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jwyTJ-0000Ef-Vy; Sun, 19 Jul 2020 01:50:33 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DOXs=A6=oracle.com=boris.ostrovsky@srs-us1.protection.inumbo.net>)
 id 1jwyTI-0000EZ-Rn
 for xen-devel@lists.xenproject.org; Sun, 19 Jul 2020 01:50:32 +0000
X-Inumbo-ID: 3a5d080c-c962-11ea-b7bb-bc764e2007e4
Received: from userp2130.oracle.com (unknown [156.151.31.86])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3a5d080c-c962-11ea-b7bb-bc764e2007e4;
 Sun, 19 Jul 2020 01:50:31 +0000 (UTC)
Received: from pps.filterd (userp2130.oracle.com [127.0.0.1])
 by userp2130.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 06J1gVUj104805;
 Sun, 19 Jul 2020 01:49:54 GMT
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com;
 h=subject : to : cc :
 references : from : message-id : date : mime-version : in-reply-to :
 content-type : content-transfer-encoding; s=corp-2020-01-29;
 bh=EaYL3Q9q6QNFTsvFEQW1LskL6r73jcbI3lEAanfcPTk=;
 b=E2fwWQu2Q5dkhtuwVH5Y2Z2unfGCUoCMV6BXjJKvXnmibNWZcABC11FQYejv+oyjHLBN
 vrCAM6xBhwmK0c4Eo5V/E3ai6h9XLmBBywUUBX9U1y6dkiPLqgXW6uJqCFiQTZ7q7/wJ
 sLuE1aPoSPEL2prhFotkVJZFdjR/zINp5hdp0h7a7UNAvVPR6lS+Cxxs0T/bxiaBtpbc
 lbuJaZH2oyM8kOSWfMQlBZRnQodmyksJRBR3rKOvs1jfKcvjcYGhsgMh3fcJMcmOq1mH
 G7bLP72vcVlBMIUipDMLYu4iNnu21ffwfzh9zrFCro+tL5JubkUUXrDQtZ85qOvxs2Qr Ow== 
Received: from aserp3030.oracle.com (aserp3030.oracle.com [141.146.126.71])
 by userp2130.oracle.com with ESMTP id 32brgr258a-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=FAIL);
 Sun, 19 Jul 2020 01:49:54 +0000
Received: from pps.filterd (aserp3030.oracle.com [127.0.0.1])
 by aserp3030.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 06J1loiU070561;
 Sun, 19 Jul 2020 01:47:54 GMT
Received: from userv0122.oracle.com (userv0122.oracle.com [156.151.31.75])
 by aserp3030.oracle.com with ESMTP id 32canj2wcp-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Sun, 19 Jul 2020 01:47:54 +0000
Received: from abhmp0019.oracle.com (abhmp0019.oracle.com [141.146.116.25])
 by userv0122.oracle.com (8.14.4/8.14.4) with ESMTP id 06J1lAXW028549;
 Sun, 19 Jul 2020 01:47:14 GMT
Received: from [10.39.198.189] (/10.39.198.189)
 by default (Oracle Beehive Gateway v4.0)
 with ESMTP ; Sun, 19 Jul 2020 01:47:09 +0000
Subject: Re: [PATCH v2 01/11] xen/manage: keep track of the on-going suspend
 mode
To: Anchal Agarwal <anchalag@amazon.com>
References: <cover.1593665947.git.anchalag@amazon.com>
 <20200702182136.GA3511@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
 <50298859-0d0e-6eb0-029b-30df2a4ecd63@oracle.com>
 <20200715204943.GB17938@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
 <0ca3c501-e69a-d2c9-a24c-f83afd4bdb8c@oracle.com>
 <20200717191009.GA3387@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Autocrypt: addr=boris.ostrovsky@oracle.com; keydata=
 xsFNBFH8CgsBEAC0KiOi9siOvlXatK2xX99e/J3OvApoYWjieVQ9232Eb7GzCWrItCzP8FUV
 PQg8rMsSd0OzIvvjbEAvaWLlbs8wa3MtVLysHY/DfqRK9Zvr/RgrsYC6ukOB7igy2PGqZd+M
 MDnSmVzik0sPvB6xPV7QyFsykEgpnHbvdZAUy/vyys8xgT0PVYR5hyvhyf6VIfGuvqIsvJw5
 C8+P71CHI+U/IhsKrLrsiYHpAhQkw+Zvyeml6XSi5w4LXDbF+3oholKYCkPwxmGdK8MUIdkM
 d7iYdKqiP4W6FKQou/lC3jvOceGupEoDV9botSWEIIlKdtm6C4GfL45RD8V4B9iy24JHPlom
 woVWc0xBZboQguhauQqrBFooHO3roEeM1pxXjLUbDtH4t3SAI3gt4dpSyT3EvzhyNQVVIxj2
 FXnIChrYxR6S0ijSqUKO0cAduenhBrpYbz9qFcB/GyxD+ZWY7OgQKHUZMWapx5bHGQ8bUZz2
 SfjZwK+GETGhfkvNMf6zXbZkDq4kKB/ywaKvVPodS1Poa44+B9sxbUp1jMfFtlOJ3AYB0WDS
 Op3d7F2ry20CIf1Ifh0nIxkQPkTX7aX5rI92oZeu5u038dHUu/dO2EcuCjl1eDMGm5PLHDSP
 0QUw5xzk1Y8MG1JQ56PtqReO33inBXG63yTIikJmUXFTw6lLJwARAQABzTNCb3JpcyBPc3Ry
 b3Zza3kgKFdvcmspIDxib3Jpcy5vc3Ryb3Zza3lAb3JhY2xlLmNvbT7CwXgEEwECACIFAlH8
 CgsCGwMGCwkIBwMCBhUIAgkKCwQWAgMBAh4BAheAAAoJEIredpCGysGyasEP/j5xApopUf4g
 9Fl3UxZuBx+oduuw3JHqgbGZ2siA3EA4bKwtKq8eT7ekpApn4c0HA8TWTDtgZtLSV5IdH+9z
 JimBDrhLkDI3Zsx2CafL4pMJvpUavhc5mEU8myp4dWCuIylHiWG65agvUeFZYK4P33fGqoaS
 VGx3tsQIAr7MsQxilMfRiTEoYH0WWthhE0YVQzV6kx4wj4yLGYPPBtFqnrapKKC8yFTpgjaK
 jImqWhU9CSUAXdNEs/oKVR1XlkDpMCFDl88vKAuJwugnixjbPFTVPyoC7+4Bm/FnL3iwlJVE
 qIGQRspt09r+datFzPqSbp5Fo/9m4JSvgtPp2X2+gIGgLPWp2ft1NXHHVWP19sPgEsEJXSr9
 tskM8ScxEkqAUuDs6+x/ISX8wa5Pvmo65drN+JWA8EqKOHQG6LUsUdJolFM2i4Z0k40BnFU/
 kjTARjrXW94LwokVy4x+ZYgImrnKWeKac6fMfMwH2aKpCQLlVxdO4qvJkv92SzZz4538az1T
 m+3ekJAimou89cXwXHCFb5WqJcyjDfdQF857vTn1z4qu7udYCuuV/4xDEhslUq1+GcNDjAhB
 nNYPzD+SvhWEsrjuXv+fDONdJtmLUpKs4Jtak3smGGhZsqpcNv8nQzUGDQZjuCSmDqW8vn2o
 hWwveNeRTkxh+2x1Qb3GT46uzsFNBFH8CgsBEADGC/yx5ctcLQlB9hbq7KNqCDyZNoYu1HAB
 Hal3MuxPfoGKObEktawQPQaSTB5vNlDxKihezLnlT/PKjcXC2R1OjSDinlu5XNGc6mnky03q
 yymUPyiMtWhBBftezTRxWRslPaFWlg/h/Y1iDuOcklhpr7K1h1jRPCrf1yIoxbIpDbffnuyz
 kuto4AahRvBU4Js4sU7f/btU+h+e0AcLVzIhTVPIz7PM+Gk2LNzZ3/on4dnEc/qd+ZZFlOQ4
 KDN/hPqlwA/YJsKzAPX51L6Vv344pqTm6Z0f9M7YALB/11FO2nBB7zw7HAUYqJeHutCwxm7i
 BDNt0g9fhviNcJzagqJ1R7aPjtjBoYvKkbwNu5sWDpQ4idnsnck4YT6ctzN4I+6lfkU8zMzC
 gM2R4qqUXmxFIS4Bee+gnJi0Pc3KcBYBZsDK44FtM//5Cp9DrxRQOh19kNHBlxkmEb8kL/pw
 XIDcEq8MXzPBbxwHKJ3QRWRe5jPNpf8HCjnZz0XyJV0/4M1JvOua7IZftOttQ6KnM4m6WNIZ
 2ydg7dBhDa6iv1oKdL7wdp/rCulVWn8R7+3cRK95SnWiJ0qKDlMbIN8oGMhHdin8cSRYdmHK
 kTnvSGJNlkis5a+048o0C6jI3LozQYD/W9wq7MvgChgVQw1iEOB4u/3FXDEGulRVko6xCBU4
 SQARAQABwsFfBBgBAgAJBQJR/AoLAhsMAAoJEIredpCGysGyfvMQAIywR6jTqix6/fL0Ip8G
 jpt3uk//QNxGJE3ZkUNLX6N786vnEJvc1beCu6EwqD1ezG9fJKMl7F3SEgpYaiKEcHfoKGdh
 30B3Hsq44vOoxR6zxw2B/giADjhmWTP5tWQ9548N4VhIZMYQMQCkdqaueSL+8asp8tBNP+TJ
 PAIIANYvJaD8xA7sYUXGTzOXDh2THWSvmEWWmzok8er/u6ZKdS1YmZkUy8cfzrll/9hiGCTj
 u3qcaOM6i/m4hqtvsI1cOORMVwjJF4+IkC5ZBoeRs/xW5zIBdSUoC8L+OCyj5JETWTt40+lu
 qoqAF/AEGsNZTrwHJYu9rbHH260C0KYCNqmxDdcROUqIzJdzDKOrDmebkEVnxVeLJBIhYZUd
 t3Iq9hdjpU50TA6sQ3mZxzBdfRgg+vaj2DsJqI5Xla9QGKD+xNT6v14cZuIMZzO7w0DoojM4
 ByrabFsOQxGvE0w9Dch2BDSI2Xyk1zjPKxG1VNBQVx3flH37QDWpL2zlJikW29Ws86PHdthh
 Fm5PY8YtX576DchSP6qJC57/eAAe/9ztZdVAdesQwGb9hZHJc75B+VNm4xrh/PJO6c1THqdQ
 19WVJ+7rDx3PhVncGlbAOiiiE3NOFPJ1OQYxPKtpBUukAlOTnkKE6QcA4zckFepUkfmBV1wM
 Jg6OxFYd01z+a+oL
Message-ID: <5464f384-d4b4-73f0-d39e-60ba9800d804@oracle.com>
Date: Sat, 18 Jul 2020 21:47:04 -0400
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20200717191009.GA3387@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
Content-Language: en-US
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9686
 signatures=668680
X-Proofpoint-Spam-Details: rule=notspam policy=default score=100
 suspectscore=0 malwarescore=0
 phishscore=0 spamscore=100 mlxscore=100 bulkscore=0 adultscore=0
 mlxlogscore=-1000 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2006250000 definitions=main-2007190011
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9686
 signatures=668680
X-Proofpoint-Spam-Details: rule=notspam policy=default score=100
 malwarescore=0 bulkscore=0
 spamscore=100 impostorscore=0 suspectscore=0 adultscore=0 clxscore=1015
 mlxlogscore=-1000 priorityscore=1501 phishscore=0 lowpriorityscore=0
 mlxscore=100 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2006250000 definitions=main-2007190010
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: eduval@amazon.com, len.brown@intel.com, peterz@infradead.org,
 benh@kernel.crashing.org, x86@kernel.org, linux-mm@kvack.org, pavel@ucw.cz,
 hpa@zytor.com, sstabellini@kernel.org, kamatam@amazon.com, mingo@redhat.com,
 xen-devel@lists.xenproject.org, sblbir@amazon.com, axboe@kernel.dk,
 konrad.wilk@oracle.com, bp@alien8.de, tglx@linutronix.de, jgross@suse.com,
 netdev@vger.kernel.org, linux-pm@vger.kernel.org, rjw@rjwysocki.net,
 linux-kernel@vger.kernel.org, vkuznets@redhat.com, davem@davemloft.net,
 dwmw@amazon.co.uk, roger.pau@citrix.com
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

(Roger, question for you at the very end)

On 7/17/20 3:10 PM, Anchal Agarwal wrote:
> On Wed, Jul 15, 2020 at 05:18:08PM -0400, Boris Ostrovsky wrote:
>> CAUTION: This email originated from outside of the organization. Do not click links or open attachments unless you can confirm the sender and know the content is safe.
>>
>>
>>
>> On 7/15/20 4:49 PM, Anchal Agarwal wrote:
>>> On Mon, Jul 13, 2020 at 11:52:01AM -0400, Boris Ostrovsky wrote:
>>>> CAUTION: This email originated from outside of the organization. Do not click links or open attachments unless you can confirm the sender and know the content is safe.
>>>>
>>>>
>>>>
>>>> On 7/2/20 2:21 PM, Anchal Agarwal wrote:
>>>>> +
>>>>> +bool xen_is_xen_suspend(void)
>>>> Weren't you going to call this pv suspend? (And also --- is this suspend
>>>> or hibernation? Your commit messages and cover letter talk about fixing
>>>> hibernation).
>>>>
>>>>
>>> This is for hibernation for pvhvm/hvm/pv-on-hvm guests, as you may call it.
>>> The method is just there to check if "xen suspend" is in progress.
>>> I do not see "xen_suspend" differentiating between pv or hvm
>>> domain until later in the code; hence, I abstracted it to xen_is_xen_suspend.
>>
>> I meant "pv suspend" in the sense that this is paravirtual suspend, not
>> suspend for paravirtual guests. Just like pv drivers are for both pv and
>> hvm guests.
>>
>>
>> And then --- should it be pv suspend or pv hibernation?
>>
>>
> Ok, so I think I am a lot confused by this question. Here is what this
> function is for: xen_is_xen_suspend() just tells us whether
> the guest is in "SHUTDOWN_SUSPEND" state or not. This check is needed
> for correct invocation of syscore_ops callbacks registered for guest's
> hibernation and for xenbus to invoke respective callbacks [suspend/resume
> vs freeze/thaw/restore].
> Since "shutting_down" state is defined static and is not directly available
> to other parts of the code, the function solves the purpose.
>
> I am having a hard time understanding why this should be called pv
> suspend/hibernation unless you are suggesting something else?
> Am I missing your point here?



I think I understand now what you are trying to say --- it's whether we
are going to use the xen_suspend() routine, right? If that's the case then
sure, you can use the "xen_suspend" term. (I'd probably still change
xen_is_xen_suspend() to is_xen_suspend().)


>>>>> +{
>>>>> +     return suspend_mode == XEN_SUSPEND;
>>>>> +}
>>>>> +
>>>> +static int xen_setup_pm_notifier(void)
>>>> +{
>>>> +     if (!xen_hvm_domain())
>>>> +             return -ENODEV;
>>>>
>>>> I forgot --- what did we decide about non-x86 (i.e. ARM)?
>>> It would be great to support that; however, it's out of
>>> scope for this patch set.
>>> I'll be happy to discuss it separately.
>>
>> I wasn't implying that this *should* work on ARM but rather whether this
>> will break ARM somehow (because xen_hvm_domain() is true there).
>>
>>
> Ok, makes sense. TBH, I haven't tested this part of the code on ARM, and the series
> was only to support x86 guest hibernation.
> Moreover, this notifier is there to distinguish between 2 PM
> events: PM SUSPEND and PM hibernation. Now since we only care about PM
> HIBERNATION I may just remove this code and rely on "SHUTDOWN_SUSPEND" state.
> However, I may have to fix other patches in the series where this check may
> appear and cater it only for x86, right?


I don't know what would happen if an ARM guest tries to handle hibernation
callbacks. The only ones that you are introducing are in the block and net
fronts, and those are arch-independent.


You do add a bunch of x86-specific code though (syscore ops), would
something similar be needed for ARM?


>>>> And PVH dom0.
>>> That's another good use case to make it work with; however, I still
>>> think that should be tested/worked upon separately as the feature itself
>>> (PVH Dom0) is very new.
>>
>> Same question here --- will this break PVH dom0?
>>
> I haven't tested it as part of this series. Is that a blocker here?


I suspect dom0 will not do well now as far as hibernation goes, in which
case you are not breaking anything.


Roger?


-boris





From xen-devel-bounces@lists.xenproject.org Sun Jul 19 01:59:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 19 Jul 2020 01:59:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jwycM-0000WA-Vu; Sun, 19 Jul 2020 01:59:54 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DOXs=A6=oracle.com=boris.ostrovsky@srs-us1.protection.inumbo.net>)
 id 1jwycL-0000W5-Tk
 for xen-devel@lists.xenproject.org; Sun, 19 Jul 2020 01:59:53 +0000
X-Inumbo-ID: 88c54e5e-c963-11ea-9804-12813bfff9fa
Received: from userp2130.oracle.com (unknown [156.151.31.86])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 88c54e5e-c963-11ea-9804-12813bfff9fa;
 Sun, 19 Jul 2020 01:59:52 +0000 (UTC)
Received: from pps.filterd (userp2130.oracle.com [127.0.0.1])
 by userp2130.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 06J1xYxZ127625;
 Sun, 19 Jul 2020 01:59:39 GMT
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com;
 h=subject : to : cc :
 references : from : message-id : date : mime-version : in-reply-to :
 content-type : content-transfer-encoding; s=corp-2020-01-29;
 bh=f7OFTiNijTKWVwpU9cqPB4oaEZccBuJ7BYyIc/fNeCU=;
 b=HvKtPjmLuioiOgn8JAnm/l1msmBm1vWWhlNhO3omxy4cQCWKQmNB2eH4nK28FEEZvyPp
 GkPLQg5OIOClBiv09k7Z4GI7EsNbtTGpCsCdaSGa5ANHX995gwAOR6pJbz0y6uSgImWL
 yWh611XojpPHJ1TNlITD3azJ65G27e5FRPYoAZgrP6JfGTLsqU7Ais2y99DUKD2ynbZU
 0CtCqY8MlYPK8vM+PJr+paTCOtA/GfhGBNg27dHPBLFQSKSuCyjoQY6KXUptfxB1NI42
 S155C5YZ5/sases25GWMnmkYDDdGaNTQH7fzQ6Z2w6tYzQQt3Rc8TnwUdW8ocX/dzYm8 Bw== 
Received: from userp3020.oracle.com (userp3020.oracle.com [156.151.31.79])
 by userp2130.oracle.com with ESMTP id 32brgr25kk-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=FAIL);
 Sun, 19 Jul 2020 01:59:39 +0000
Received: from pps.filterd (userp3020.oracle.com [127.0.0.1])
 by userp3020.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 06J1w2li152820;
 Sun, 19 Jul 2020 01:59:39 GMT
Received: from aserv0121.oracle.com (aserv0121.oracle.com [141.146.126.235])
 by userp3020.oracle.com with ESMTP id 32cbq74ys6-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Sun, 19 Jul 2020 01:59:39 +0000
Received: from abhmp0010.oracle.com (abhmp0010.oracle.com [141.146.116.16])
 by aserv0121.oracle.com (8.14.4/8.13.8) with ESMTP id 06J1xa6G011053;
 Sun, 19 Jul 2020 01:59:37 GMT
Received: from [10.39.198.189] (/10.39.198.189)
 by default (Oracle Beehive Gateway v4.0)
 with ESMTP ; Sat, 18 Jul 2020 18:59:36 -0700
Subject: Re: [PATCH -next] x86/xen: Convert to DEFINE_SHOW_ATTRIBUTE
To: Qinglang Miao <miaoqinglang@huawei.com>,
 Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
 Juergen Gross <jgross@suse.com>, Chen-Yu Tsai <wens@csie.org>,
 Thomas Gleixner <tglx@linutronix.de>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <20200716090641.14184-1-miaoqinglang@huawei.com>
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Message-ID: <a75ff99a-bb2e-6470-6c47-e7089c0fc8b4@oracle.com>
Date: Sat, 18 Jul 2020 21:59:34 -0400
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20200716090641.14184-1-miaoqinglang@huawei.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-US
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9686
 signatures=668680
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 mlxlogscore=999
 adultscore=0
 malwarescore=0 spamscore=0 suspectscore=0 mlxscore=0 phishscore=0
 bulkscore=0 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2006250000 definitions=main-2007190012
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9686
 signatures=668680
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 malwarescore=0
 bulkscore=0 spamscore=0
 impostorscore=0 suspectscore=0 adultscore=0 clxscore=1011 mlxlogscore=999
 priorityscore=1501 phishscore=0 lowpriorityscore=0 mlxscore=0
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2006250000
 definitions=main-2007190012
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 7/16/20 5:06 AM, Qinglang Miao wrote:
> From: Chen Huang <chenhuang5@huawei.com>
>
> Use DEFINE_SHOW_ATTRIBUTE macro to simplify the code.
>
> Signed-off-by: Chen Huang <chenhuang5@huawei.com>


Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>




From xen-devel-bounces@lists.xenproject.org Sun Jul 19 03:23:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 19 Jul 2020 03:23:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jwzuO-0008GG-5C; Sun, 19 Jul 2020 03:22:36 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=x0Gu=A6=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jwzuN-0008Fw-2k
 for xen-devel@lists.xenproject.org; Sun, 19 Jul 2020 03:22:35 +0000
X-Inumbo-ID: 10075d02-c96f-11ea-980c-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 10075d02-c96f-11ea-980c-12813bfff9fa;
 Sun, 19 Jul 2020 03:22:23 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=6BLqCismMvBX0fi9tkkv/3Se9OQzCIUSxFZsWHUd8E0=; b=Xy91P0og0QufcCq3lOPAtvnzJ
 IbOSu59t0vf4oQ7sKgjJy9RDMCeiO7z/+VB2aTJ7NkbWmYWICNu2Mfnqpy6yHnjsNFSKnfsA3/v2A
 TCsy3YAyXK5Lr5CYYaJw5/Xw5k0DqEI1SLR1N2j+ORuEpJtjL/fkOKFlNRIKK4y23kmTI=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jwzuB-00012F-2O; Sun, 19 Jul 2020 03:22:23 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jwzuA-00040X-MJ; Sun, 19 Jul 2020 03:22:22 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jwzuA-0005Vn-Lj; Sun, 19 Jul 2020 03:22:22 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151990-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 151990: tolerable FAIL
X-Osstest-Failures: xen-unstable:test-amd64-amd64-xl-shadow:guest-localmigrate/x10:fail:heisenbug
 xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
X-Osstest-Versions-This: xen=fb024b779336a0f73b3aee885b2ce082e812881f
X-Osstest-Versions-That: xen=fb024b779336a0f73b3aee885b2ce082e812881f
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 19 Jul 2020 03:22:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151990 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151990/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-shadow   18 guest-localmigrate/x10     fail pass in 151975

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 151975
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 151975
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 151975
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 151975
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 151975
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 151975
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 151975
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 151975
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop             fail like 151975
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass

version targeted for testing:
 xen                  fb024b779336a0f73b3aee885b2ce082e812881f
baseline version:
 xen                  fb024b779336a0f73b3aee885b2ce082e812881f

Last test of basis   151990  2020-07-18 11:37:19 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   fail    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Sun Jul 19 05:30:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 19 Jul 2020 05:30:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jx1th-00027T-Rs; Sun, 19 Jul 2020 05:30:01 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=x0Gu=A6=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jx1tg-00024P-8n
 for xen-devel@lists.xenproject.org; Sun, 19 Jul 2020 05:30:00 +0000
X-Inumbo-ID: e1a3779a-c980-11ea-981d-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e1a3779a-c980-11ea-981d-12813bfff9fa;
 Sun, 19 Jul 2020 05:29:57 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=HIzBWddazCwVt6lrKnz7zWYFvZRDOhIIrtgjmZtcnV4=; b=iVZvlmtbjQeKLERo2icqVJsXi
 7qiMxsI/1G1/4JifrAQ2cBldSRv6vC4pcAqvPBQrSREcbJd3HHuPkdNFnZiEfgDb7YwGRJtBQ2gLD
 zw7SrD/X+hh+d5lWBeB/g0+QtaGK4u+3031/B7YaefYb72yGl5bHQiPlPrp6eNwPA+54s=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jx1tb-00046n-Rw; Sun, 19 Jul 2020 05:29:55 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jx1tb-0002n0-Jc; Sun, 19 Jul 2020 05:29:55 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jx1tb-00040g-J3; Sun, 19 Jul 2020 05:29:55 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151992-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 151992: regressions - FAIL
X-Osstest-Failures: linux-linus:build-i386-pvops:kernel-build:fail:regression
 linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-pair:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-examine:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: linux=6a70f89cc58f2368efa055cbcbd8b37384f6c588
X-Osstest-Versions-That: linux=1b5044021070efa3259f3e9548dc35d1eb6aa844
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 19 Jul 2020 05:29:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151992 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151992/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-pvops              6 kernel-build             fail REGR. vs. 151214

Tests which did not succeed, but are not blocking:
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-examine       1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 151214
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 151214
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 151214
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 151214
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 151214
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 151214
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                6a70f89cc58f2368efa055cbcbd8b37384f6c588
baseline version:
 linux                1b5044021070efa3259f3e9548dc35d1eb6aa844

Last test of basis   151214  2020-06-18 02:27:46 Z   31 days
Failing since        151236  2020-06-19 19:10:35 Z   29 days   46 attempts
Testing same since   151992  2020-07-18 13:18:34 Z    0 days    1 attempts

------------------------------------------------------------
803 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             fail    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         blocked 
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 43329 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Jul 19 07:19:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 19 Jul 2020 07:19:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jx3b6-0002vR-UM; Sun, 19 Jul 2020 07:18:56 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=x0Gu=A6=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jx3b5-0002vM-O2
 for xen-devel@lists.xenproject.org; Sun, 19 Jul 2020 07:18:55 +0000
X-Inumbo-ID: 1a29111a-c990-11ea-9826-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1a29111a-c990-11ea-9826-12813bfff9fa;
 Sun, 19 Jul 2020 07:18:54 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=GGNl5TLS2xFqnHw2atLbFLD0HxjX6pJF+IRLzlLOBdM=; b=dGhQ+1xEu18bXRECUibf36XPR
 0JMhvodIgLFQEdU/LQqZFSt31m1uU5HR9lUDSCyBFZGP3yXTJNOgxpo7UoCDvGBkkKZXhFGlHO0TK
 V8CCEC+hPlMu2XEY1iklcdmAGI2vn2Xb4CoyW04Ny201ER6sj+3QZHJy4811ELt06ZqD0=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jx3b3-0006Oy-LP; Sun, 19 Jul 2020 07:18:53 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jx3b3-0005N5-Ap; Sun, 19 Jul 2020 07:18:53 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jx3b3-0004Gf-9q; Sun, 19 Jul 2020 07:18:53 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-152006-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 152006: regressions - FAIL
X-Osstest-Failures: libvirt:build-amd64-libvirt:libvirt-build:fail:regression
 libvirt:build-i386-libvirt:libvirt-build:fail:regression
 libvirt:build-arm64-libvirt:libvirt-build:fail:regression
 libvirt:build-armhf-libvirt:libvirt-build:fail:regression
 libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
X-Osstest-Versions-This: libvirt=747ba4ed98af063bc6f5485850adc4225ca69d53
X-Osstest-Versions-That: libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 19 Jul 2020 07:18:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 152006 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/152006/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              747ba4ed98af063bc6f5485850adc4225ca69d53
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z    9 days
Failing since        151818  2020-07-11 04:18:52 Z    8 days    9 attempts
Testing same since   152006  2020-07-19 04:18:59 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrea Bolognani <abologna@redhat.com>
  Bastien Orivel <bastien.orivel@diateam.net>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Jin Yan <jinyan12@huawei.com>
  Laine Stump <laine@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Stefan Berger <stefanb@linux.ibm.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1702 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Jul 19 09:28:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 19 Jul 2020 09:28:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jx5c6-0005qv-WF; Sun, 19 Jul 2020 09:28:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=x0Gu=A6=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jx5c5-0005qX-BL
 for xen-devel@lists.xenproject.org; Sun, 19 Jul 2020 09:28:05 +0000
X-Inumbo-ID: 21a3da30-c9a2-11ea-8496-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 21a3da30-c9a2-11ea-8496-bc764e2007e4;
 Sun, 19 Jul 2020 09:27:57 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=1ETHLV7Qc+AiLbVbk7WsNucFhbECTShWY+JpgtIo9/I=; b=Sk8Mudd6O/rD0WamFrKmn3RHP
 w0vRUUFPxQfa5TCDqDaEs+E2FyIOcsbKJBeJE7FoxxD0VKEa9vb3FyXN4jXQQVjy+UDnAHc+9Ow1c
 WYSAKKXRPO/WBY1sidZ5qcOLlGQUyRVqUvTa4Rw3Ny2lzwMuYr3OV1MquslQtPrHsSz94=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jx5bw-00018O-K9; Sun, 19 Jul 2020 09:27:56 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jx5bw-0004RA-Bz; Sun, 19 Jul 2020 09:27:56 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jx5bw-00026l-BD; Sun, 19 Jul 2020 09:27:56 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-151999-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 151999: regressions - trouble: fail/pass/starved
X-Osstest-Failures: qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:redhat-install:fail:regression
 qemu-mainline:test-amd64-i386-freebsd10-i386:guest-start:fail:regression
 qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-start:fail:regression
 qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:redhat-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:windows-install:fail:regression
 qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:hosts-allocate:starved:nonblocking
 qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This: qemuu=97f750becac33e3d3e446d3ff4ae9af2577b7877
X-Osstest-Versions-That: qemuu=9e3903136d9acde2fb2dd9e967ba928050a6cb4a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 19 Jul 2020 09:27:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 151999 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/151999/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemuu-ovmf-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-qemuu-nested-intel 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-ovmf-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-debianhvm-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-qemuu-rhel6hvm-intel 10 redhat-install   fail REGR. vs. 151065
 test-amd64-i386-freebsd10-i386 11 guest-start            fail REGR. vs. 151065
 test-amd64-i386-freebsd10-amd64 11 guest-start           fail REGR. vs. 151065
 test-amd64-amd64-qemuu-nested-amd 10 debian-hvm-install  fail REGR. vs. 151065
 test-amd64-i386-qemuu-rhel6hvm-amd 10 redhat-install     fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-win7-amd64 10 windows-install  fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-win7-amd64 10 windows-install   fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-ws16-amd64 10 windows-install  fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-ws16-amd64 10 windows-install   fail REGR. vs. 151065

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 151065
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 151065
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-freebsd11-amd64  2 hosts-allocate           starved n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  2 hosts-allocate           starved n/a

version targeted for testing:
 qemuu                97f750becac33e3d3e446d3ff4ae9af2577b7877
baseline version:
 qemuu                9e3903136d9acde2fb2dd9e967ba928050a6cb4a

Last test of basis   151065  2020-06-12 22:27:51 Z   36 days
Failing since        151101  2020-06-14 08:32:51 Z   35 days   48 attempts
Testing same since   151999  2020-07-18 22:07:14 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Ahmed Karaman <ahmedkhaledkaraman@gmail.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.m.mail@gmail.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Richardson <Alexander.Richardson@cl.cam.ac.uk>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Boettcher <alexander.boettcher@genode-labs.com>
  Alexander Bulekov <alxndr@bu.edu>
  Alexandre Mergnat <amergnat@baylibre.com>
  Alexey Krasikov <alex-krasikov@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Allan Peramaki <aperamak@pp1.inet.fi>
  Andrew <andrew@daynix.com>
  Andrew Jones <drjones@redhat.com>
  Andrew Melnychenko <andrew@daynix.com>
  Ani Sinha <ani.sinha@nutanix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Ard Biesheuvel <ardb@kernel.org>
  Artyom Tarasenko <atar4qemu@gmail.com>
  Atish Patra <atish.patra@wdc.com>
  Aurelien Jarno <aurelien@aurel32.net>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Basil Salman <basil@daynix.com>
  Basil Salman <bsalman@redhat.com>
  Beata Michalska <beata.michalska@linaro.org>
  Bin Meng <bin.meng@windriver.com>
  Bin Meng <bmeng.cn@gmail.com>
  Cameron Esfahani <dirty@apple.com>
  Catherine A. Frederick <chocola@animebitch.es>
  Cathy Zhang <cathy.zhang@intel.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chenyi Qiang <chenyi.qiang@intel.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christophe de Dinechin <dinechin@redhat.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Cindy Lu <lulu@redhat.com>
  Claudio Fontana <cfontana@suse.de>
  Cleber Rosa <crosa@redhat.com>
  Colin Xu <colin.xu@intel.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniele Buono <dbuono@linux.vnet.ibm.com>
  David Carlier <devnexen@gmail.com>
  David Edmondson <david.edmondson@oracle.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Denis V. Lunev <den@openvz.org>
  Derek Su <dereksu@qnap.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Ed Robbins <E.J.C.Robbins@kent.ac.uk>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Emilio G. Cota <cota@braap.org>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  erik-smit <erik.lucas.smit@gmail.com>
  Evgeny Yakovlev <eyakovlev@virtuozzo.com>
  fangying <fangying1@huawei.com>
  Farhan Ali <alifm@linux.ibm.com>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frank Chang <frank.chang@sifive.com>
  Geoffrey McRae <geoff@hostfission.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Giuseppe Musacchio <thatlemon@gmail.com>
  Greg Kurz <groug@kaod.org>
  Guenter Roeck <linux@roeck-us.net>
  Gustavo Romero <gromero@linux.ibm.com>
  Halil Pasic <pasic@linux.ibm.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Howard Spoelstra <hsp.cat7@gmail.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Ian Jiang <ianjiang.ict@gmail.com>
  Igor Mammedov <imammedo@redhat.com>
  Jan Kiszka <jan.kiszka@siemens.com>
  Janne Grunau <j@jannau.net>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason Wang <jasowang@redhat.com>
  Jay Zhou <jianjay.zhou@huawei.com>
  Jean-Christophe Dubois <jcd@tribudubois.net>
  Jessica Clarke <jrtc27@jrtc27.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jingqi Liu <jingqi.liu@intel.com>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Joseph Myers <joseph@codesourcery.com>
  Josh Kunz <jkz@google.com>
  Juan Quintela <quintela@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Keqian Zhu <zhukeqian1@huawei.com>
  Kevin Wolf <kwolf@redhat.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Leif Lindholm <leif@nuviainc.com>
  Leonid Bloch <lb.workbox@gmail.com>
  Leonid Bloch <lbloch@janustech.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Qiang <liq3ea@gmail.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  lichun <lichun@ruijie.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  Like Xu <like.xu@linux.intel.com>
  Lingfeng Yang <lfy@google.com>
  Lingshan zhu <lingshan.zhu@intel.com>
  Liran Alon <liran.alon@oracle.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Lukas Straub <lukasstraub2@web.de>
  Luwei Kang <luwei.kang@intel.com>
  Maciej S. Szmigiero <maciej.szmigiero@oracle.com>
  Magnus Damm <magnus.damm@gmail.com>
  Mao Zhongyi <maozhongyi@cmss.chinamobile.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marcelo Tosatti <mtosatti@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Mario Smarduch <msmarduch@digitalocean.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Masahiro Yamada <masahiroy@kernel.org>
  Matus Kysel <mkysel@tachyum.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Maxime Coquelin <maxime.coquelin@redhat.com>
  Menno Lageman <menno.lageman@oracle.com>
  Michael Rolnik <mrolnik@gmail.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michele Denber <denber@mindspring.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Olaf Hering <olaf@aepfle.de>
  Pan Nengyuan <pannengyuan@huawei.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Radoslaw Biernacki <rad@semihalf.com>
  Raphael Norwitz <raphael.norwitz@nutanix.com>
  Richard Henderson <richard.henderson@linaro.org>
  Richard W.M. Jones <rjones@redhat.com>
  Riku Voipio <riku.voipio@linaro.org>
  Robert Foley <robert.foley@linaro.org>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Roman Kagan <rkagan@virtuozzo.com>
  Roman Kagan <rvkagan@yandex-team.ru>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Sarah Harris <S.E.Harris@kent.ac.uk>
  Sebastian Rasmussen <sebras@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Tao Xu <tao3.xu@intel.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tiwei Bie <tiwei.bie@intel.com>
  Tong Ho <tong.ho@xilinx.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  WangBowen <bowen.wang@intel.com>
  Wei Huang <wei.huang2@amd.com>
  Wei Wang <wei.w.wang@intel.com>
  Wentong Wu <wentong.wu@intel.com>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xie Yongji <xieyongji@bytedance.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Ying Fang <fangying1@huawei.com>
  Yoshinori Sato <ysato@users.sourceforge.jp>
  Yuri Benditovich <yuri.benditovich@daynix.com>
  Zhang Chen <chen.zhang@intel.com>
  Zheng Chuan <zhengchuan@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       starved 
 test-amd64-amd64-qemuu-freebsd12-amd64                       starved 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 29536 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Jul 19 10:28:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 19 Jul 2020 10:28:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jx6YW-0002Uq-4r; Sun, 19 Jul 2020 10:28:28 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=x0Gu=A6=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jx6YU-0002Ul-SB
 for xen-devel@lists.xenproject.org; Sun, 19 Jul 2020 10:28:26 +0000
X-Inumbo-ID: 941eaae2-c9aa-11ea-b7bb-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 941eaae2-c9aa-11ea-b7bb-bc764e2007e4;
 Sun, 19 Jul 2020 10:28:25 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=Kt3iKyjd7c8cMIs/gkP5BTH9EuCYy8+SxObpEVdH6Wg=; b=LhjCrL9Gi/6flgFsJuOcHkDDS
 7OIfyNYZhHCta1i3Xld1Xhov9WwKYt1HZ12worcMzvDxQeWl+hSvsUhKc3V+Gve2jcOSzy16i26a1
 MI0PEIpUrZ1HDyclCT13dI8OCkSFcpB9Q9xqSVO6hbAhyAbFB3AjF//jn9nyBElmwW7Uk=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jx6YS-0002PO-RY; Sun, 19 Jul 2020 10:28:24 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jx6YS-0000Ip-F9; Sun, 19 Jul 2020 10:28:24 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jx6YS-0002Ns-EK; Sun, 19 Jul 2020 10:28:24 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-152012-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-coverity test] 152012: all pass - PUSHED
X-Osstest-Versions-This: xen=fb024b779336a0f73b3aee885b2ce082e812881f
X-Osstest-Versions-That: xen=1969576661f3e34318e9b0a61a1a38f9a5aee16f
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 19 Jul 2020 10:28:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 152012 xen-unstable-coverity real [real]
http://logs.test-lab.xenproject.org/osstest/logs/152012/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 xen                  fb024b779336a0f73b3aee885b2ce082e812881f
baseline version:
 xen                  1969576661f3e34318e9b0a61a1a38f9a5aee16f

Last test of basis   151916  2020-07-15 09:18:25 Z    4 days
Testing same since   152012  2020-07-19 09:18:28 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>

jobs:
 coverity-amd64                                               pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   1969576661..fb024b7793  fb024b779336a0f73b3aee885b2ce082e812881f -> coverity-tested/smoke


From xen-devel-bounces@lists.xenproject.org Sun Jul 19 12:15:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 19 Jul 2020 12:15:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jx8Dv-0003A2-9f; Sun, 19 Jul 2020 12:15:19 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=x0Gu=A6=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jx8Du-00039x-51
 for xen-devel@lists.xenproject.org; Sun, 19 Jul 2020 12:15:18 +0000
X-Inumbo-ID: 804de348-c9b9-11ea-9855-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 804de348-c9b9-11ea-9855-12813bfff9fa;
 Sun, 19 Jul 2020 12:15:14 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Message-Id:Subject:To:Sender:
 Reply-To:Cc:MIME-Version:Content-Type:Content-Transfer-Encoding:Content-ID:
 Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc
 :Resent-Message-ID:In-Reply-To:References:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=NP8nZBzsiEUxIeP83x0jiTgzJFbxpZX4OWAIhwTfHxA=; b=it2sBXRShQL/75VQWQwlnKE1/E
 QCNf9hX9BeMdufYcihEghkjioizAC72H06KcDrWt+29IY+hPwBE4t8zHmGsHjGkPcBRC6i7F7PV0R
 c9JyNf79vdjK0r4nDz4KXRtkiXDLefqqIH8LoBp77GuHic+UTW9akBMcJq71ZT7A8xKM=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jx8Dq-0004Zj-Fr; Sun, 19 Jul 2020 12:15:14 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jx8Dq-0005Un-2X; Sun, 19 Jul 2020 12:15:14 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jx8Dq-0007fk-1o; Sun, 19 Jul 2020 12:15:14 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Subject: [qemu-mainline bisection] complete
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict
Message-Id: <E1jx8Dq-0007fk-1o@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 19 Jul 2020 12:15:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

branch xen-unstable
xenbranch xen-unstable
job test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict
testid debian-hvm-install

Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://git.qemu.org/qemu.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  qemuu git://git.qemu.org/qemu.git
  Bug introduced:  81cb05732efb36971901c515b007869cc1d3a532
  Bug not present: d6b78ac8ecf94f56dbfbecc23fb4365d8772a41a
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/152015/


  commit 81cb05732efb36971901c515b007869cc1d3a532
  Author: Markus Armbruster <armbru@redhat.com>
  Date:   Tue Jun 9 14:23:37 2020 +0200
  
      qdev: Assert devices are plugged into a bus that can take them
      
      This would have caught some of the bugs I just fixed.
      
      Signed-off-by: Markus Armbruster <armbru@redhat.com>
      Reviewed-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
      Message-Id: <20200609122339.937862-23-armbru@redhat.com>


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/qemu-mainline/test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict.debian-hvm-install.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/qemu-mainline/test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict.debian-hvm-install --summary-out=tmp/152015.bisection-summary --basis-template=151065 --blessings=real,real-bisect qemu-mainline test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict debian-hvm-install
Searching for failure / basis pass:
 151999 fail [host=pinot1] / 151149 [host=chardonnay1] 151101 [host=huxelrebe0] 151065 [host=albana0] 151047 [host=huxelrebe1] 150970 [host=chardonnay0] 150930 [host=fiano1] 150916 [host=pinot0] 150909 [host=elbling0] 150895 [host=albana1] 150831 [host=fiano0] 150694 [host=debina0] 150631 ok.
Failure / basis pass flights: 151999 / 150631
(tree with no url: minios)
Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://git.qemu.org/qemu.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git
Latest c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d8327496762b4f2a54c9bafd7a214314ec28e9e 3c659044118e34603161457db9934a34f816d78b 97f750becac33e3d3e446d3ff4ae9af2577b7877 6ada2285d9918859699c92e09540e023e0a16054 fb024b779336a0f73b3aee885b2ce082e812881f
Basis pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ca407c7246bf405da6d9b1b9d93e5e7f17b4b1f9 3c659044118e34603161457db9934a34f816d78b 5cc7a54c2e91d82cb6a52e4921325c511fd90712 2e3de6253422112ae43e608661ba94ea6b345694 1497e78068421d83956f8e82fb6e1bf1fc3b1199
Generating revisions with ./adhoc-revtuple-generator  git://xenbits.xen.org/linux-pvops.git#c3038e718a19fc596f7b1baba0f83d5146dc7784-c3038e718a19fc596f7b1baba0f83d5146dc7784 git://xenbits.xen.org/osstest/linux-firmware.git#c530a75c1e6a472b0eb9558310b518f0dfcd8860-c530a75c1e6a472b0eb9558310b518f0dfcd8860 git://xenbits.xen.org/osstest/ovmf.git#ca407c7246bf405da6d9b1b9d93e5e7f17b4b1f9-3d8327496762b4f2a54c9bafd7a214314ec28e9e git://xenbits.xen.org/qemu-xen-traditional.git#3c659044118e34603161457db9934a34f816d78b-3c659044118e34603161457db9934a34f816d78b git://git.qemu.org/qemu.git#5cc7a54c2e91d82cb6a52e4921325c511fd90712-97f750becac33e3d3e446d3ff4ae9af2577b7877 git://xenbits.xen.org/osstest/seabios.git#2e3de6253422112ae43e608661ba94ea6b345694-6ada2285d9918859699c92e09540e023e0a16054 git://xenbits.xen.org/xen.git#1497e78068421d83956f8e82fb6e1bf1fc3b1199-fb024b779336a0f73b3aee885b2ce082e812881f
Use of uninitialized value $parents in array dereference at ./adhoc-revtuple-generator line 465.
Use of uninitialized value in concatenation (.) or string at ./adhoc-revtuple-generator line 465.
Use of uninitialized value $parents in array dereference at ./adhoc-revtuple-generator line 465.
Use of uninitialized value in concatenation (.) or string at ./adhoc-revtuple-generator line 465.
Use of uninitialized value $parents in array dereference at ./adhoc-revtuple-generator line 465.
Use of uninitialized value in concatenation (.) or string at ./adhoc-revtuple-generator line 465.
Use of uninitialized value $parents in array dereference at ./adhoc-revtuple-generator line 465.
Use of uninitialized value in concatenation (.) or string at ./adhoc-revtuple-generator line 465.
Use of uninitialized value $parents in array dereference at ./adhoc-revtuple-generator line 465.
Use of uninitialized value in concatenation (.) or string at ./adhoc-revtuple-generator line 465.
Loaded 67665 nodes in revision graph
Searching for test results:
 150631 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ca407c7246bf405da6d9b1b9d93e5e7f17b4b1f9 3c659044118e34603161457db9934a34f816d78b 5cc7a54c2e91d82cb6a52e4921325c511fd90712 2e3de6253422112ae43e608661ba94ea6b345694 1497e78068421d83956f8e82fb6e1bf1fc3b1199
 150694 [host=debina0]
 150831 [host=fiano0]
 150909 [host=elbling0]
 150930 [host=fiano1]
 150916 [host=pinot0]
 150895 [host=albana1]
 150899 []
 150970 [host=chardonnay0]
 151047 [host=huxelrebe1]
 151101 [host=huxelrebe0]
 151065 [host=albana0]
 151149 [host=chardonnay1]
 151221 fail irrelevant
 151175 fail irrelevant
 151241 fail irrelevant
 151286 fail irrelevant
 151269 fail irrelevant
 151328 fail irrelevant
 151304 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 322969adf1fb3d6cfbd613f35121315715aff2ed 3c659044118e34603161457db9934a34f816d78b 171199f56f5f9bdf1e5d670d09ef1351d8f01bae 2e3de6253422112ae43e608661ba94ea6b345694 fde76f895d0aa817a1207d844d793239c6639bc6
 151377 fail irrelevant
 151353 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1a992030522622c42aa063788b3276789c56c1e1 3c659044118e34603161457db9934a34f816d78b d4b78317b7cf8c0c635b70086503813f79ff21ec 2e3de6253422112ae43e608661ba94ea6b345694 fde76f895d0aa817a1207d844d793239c6639bc6
 151399 fail irrelevant
 151414 fail irrelevant
 151435 fail irrelevant
 151459 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 0f01cec52f4794777feb067e4fa0bfcedfdc124e 3c659044118e34603161457db9934a34f816d78b e7651153a8801dad6805d450ea8bef9b46c1adf5 88ab0c15525ced2eefe39220742efe4769089ad8 88cfd062e8318dfeb67c7d2eb50b6cd224b0738a
 151471 fail irrelevant
 151485 fail irrelevant
 151500 fail irrelevant
 151518 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 00217f1919270007d7a911f89b32e39b9dcaa907 3c659044118e34603161457db9934a34f816d78b fc1bff958998910ec8d25db86cd2f53ff125f7ab 88ab0c15525ced2eefe39220742efe4769089ad8 23ca7ec0ba620db52a646d80e22f9703a6589f66
 151547 fail irrelevant
 151598 fail irrelevant
 151577 fail irrelevant
 151622 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 627d1d6693b0594d257dbe1a3363a8d4bd4d8307 3c659044118e34603161457db9934a34f816d78b 7b7515702012219410802a168ae4aa45b72a44df 88ab0c15525ced2eefe39220742efe4769089ad8 be63d9d47f571a60d70f8fb630c03871312d9655
 151656 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 627d1d6693b0594d257dbe1a3363a8d4bd4d8307 3c659044118e34603161457db9934a34f816d78b eb6490f544388dd24c0d054a96dd304bc7284450 88ab0c15525ced2eefe39220742efe4769089ad8 be63d9d47f571a60d70f8fb630c03871312d9655
 151634 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 627d1d6693b0594d257dbe1a3363a8d4bd4d8307 3c659044118e34603161457db9934a34f816d78b eb6490f544388dd24c0d054a96dd304bc7284450 88ab0c15525ced2eefe39220742efe4769089ad8 be63d9d47f571a60d70f8fb630c03871312d9655
 151645 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 627d1d6693b0594d257dbe1a3363a8d4bd4d8307 3c659044118e34603161457db9934a34f816d78b eb6490f544388dd24c0d054a96dd304bc7284450 88ab0c15525ced2eefe39220742efe4769089ad8 be63d9d47f571a60d70f8fb630c03871312d9655
 151669 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 627d1d6693b0594d257dbe1a3363a8d4bd4d8307 3c659044118e34603161457db9934a34f816d78b eb6490f544388dd24c0d054a96dd304bc7284450 88ab0c15525ced2eefe39220742efe4769089ad8 be63d9d47f571a60d70f8fb630c03871312d9655
 151685 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 627d1d6693b0594d257dbe1a3363a8d4bd4d8307 3c659044118e34603161457db9934a34f816d78b eb6490f544388dd24c0d054a96dd304bc7284450 88ab0c15525ced2eefe39220742efe4769089ad8 be63d9d47f571a60d70f8fb630c03871312d9655
 151704 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 627d1d6693b0594d257dbe1a3363a8d4bd4d8307 3c659044118e34603161457db9934a34f816d78b eb6490f544388dd24c0d054a96dd304bc7284450 88ab0c15525ced2eefe39220742efe4769089ad8 be63d9d47f571a60d70f8fb630c03871312d9655
 151778 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 bdafda8c457eb90c65f37026589b54258300f05c 3c659044118e34603161457db9934a34f816d78b aff2caf6b3fbab1062e117a47b66d27f7fd2f272 88ab0c15525ced2eefe39220742efe4769089ad8 3fdc211b01b29f252166937238efe02d15cb5780
 151721 fail irrelevant
 151763 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 bdafda8c457eb90c65f37026589b54258300f05c 3c659044118e34603161457db9934a34f816d78b 48f22ad04ead83e61b4b35871ec6f6109779b791 88ab0c15525ced2eefe39220742efe4769089ad8 3fdc211b01b29f252166937238efe02d15cb5780
 151744 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 bdafda8c457eb90c65f37026589b54258300f05c 3c659044118e34603161457db9934a34f816d78b 8796c64ecdfd34be394ea277aaaaa53df0c76996 88ab0c15525ced2eefe39220742efe4769089ad8 3fdc211b01b29f252166937238efe02d15cb5780
 151804 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 bdafda8c457eb90c65f37026589b54258300f05c 3c659044118e34603161457db9934a34f816d78b 45db94cc90c286a9965a285ba19450f448760a09 88ab0c15525ced2eefe39220742efe4769089ad8 3fdc211b01b29f252166937238efe02d15cb5780
 151816 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 bdafda8c457eb90c65f37026589b54258300f05c 3c659044118e34603161457db9934a34f816d78b 45db94cc90c286a9965a285ba19450f448760a09 88ab0c15525ced2eefe39220742efe4769089ad8 3fdc211b01b29f252166937238efe02d15cb5780
 151833 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f45e3a4afa65a45ea1a956a7c5e7410ff40190d1 3c659044118e34603161457db9934a34f816d78b 827937158b72ce2265841ff528bba3c44a1bfbc8 88ab0c15525ced2eefe39220742efe4769089ad8 02d69864b51a4302a148c28d6d391238a6778b4b
 151855 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f45e3a4afa65a45ea1a956a7c5e7410ff40190d1 3c659044118e34603161457db9934a34f816d78b d34498309cff7560ac90c422c56e3137e6a64b19 88ab0c15525ced2eefe39220742efe4769089ad8 02d69864b51a4302a148c28d6d391238a6778b4b
 151841 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f45e3a4afa65a45ea1a956a7c5e7410ff40190d1 3c659044118e34603161457db9934a34f816d78b 2033cc6efa98b831d7839e367aa7d5aa74d0750f 88ab0c15525ced2eefe39220742efe4769089ad8 02d69864b51a4302a148c28d6d391238a6778b4b
 151849 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f45e3a4afa65a45ea1a956a7c5e7410ff40190d1 3c659044118e34603161457db9934a34f816d78b d34498309cff7560ac90c422c56e3137e6a64b19 88ab0c15525ced2eefe39220742efe4769089ad8 02d69864b51a4302a148c28d6d391238a6778b4b
 151874 fail irrelevant
 151895 fail irrelevant
 151914 fail irrelevant
 151934 fail irrelevant
 151977 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 e1d24410da356731da70b3334f86343e11e207d2 3c659044118e34603161457db9934a34f816d78b 470dd165d152ff7ceac61c7b71c2b89220b3aad7 2e3de6253422112ae43e608661ba94ea6b345694 6a49b9a7920c82015381740905582b666160d955
 151979 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 567bc4b4ae8a975791382dd30ac413bc0d3ce88c 3c659044118e34603161457db9934a34f816d78b eea8f5df4ecc607d64f091b8d916fcc11a697541 2e3de6253422112ae43e608661ba94ea6b345694 2995d0afdf2d3fb44d07eada088db3613741db1e
 152011 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ca407c7246bf405da6d9b1b9d93e5e7f17b4b1f9 3c659044118e34603161457db9934a34f816d78b 5cc7a54c2e91d82cb6a52e4921325c511fd90712 2e3de6253422112ae43e608661ba94ea6b345694 1497e78068421d83956f8e82fb6e1bf1fc3b1199
 151998 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ca407c7246bf405da6d9b1b9d93e5e7f17b4b1f9 3c659044118e34603161457db9934a34f816d78b 5cc7a54c2e91d82cb6a52e4921325c511fd90712 2e3de6253422112ae43e608661ba94ea6b345694 1497e78068421d83956f8e82fb6e1bf1fc3b1199
 151981 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 6ff7c838d09224dd4e4c9b5b93152d8db1b19740 3c659044118e34603161457db9934a34f816d78b 86e8c353f705f14f2f2fd7a6195cefa431aa24d9 2e3de6253422112ae43e608661ba94ea6b345694 058023b343d4e366864831db46e9b438e9e3a178
 152000 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 6ff53d2a13740e39dea110d6b3509c156c659586 3c659044118e34603161457db9934a34f816d78b b7bda69c4ef46c57480f6e378923f5215b122778 6ada2285d9918859699c92e09540e023e0a16054 f8fe3c07363d11fc81d8e7382dbcaa357c861569
 151960 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ca407c7246bf405da6d9b1b9d93e5e7f17b4b1f9 3c659044118e34603161457db9934a34f816d78b 5cc7a54c2e91d82cb6a52e4921325c511fd90712 2e3de6253422112ae43e608661ba94ea6b345694 1497e78068421d83956f8e82fb6e1bf1fc3b1199
 151983 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 567bc4b4ae8a975791382dd30ac413bc0d3ce88c 3c659044118e34603161457db9934a34f816d78b 3575b0aea983ad57804c9af739ed8ff7bc168393 2e3de6253422112ae43e608661ba94ea6b345694 b91825f628c9a62cf2a3a0d972ea81484a8b7fce
 151961 fail irrelevant
 151968 fail irrelevant
 151985 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 6ff7c838d09224dd4e4c9b5b93152d8db1b19740 3c659044118e34603161457db9934a34f816d78b 49ee11555262a256afec592dfed7c5902d5eefd2 2e3de6253422112ae43e608661ba94ea6b345694 726c78d14dfe6ec76f5e4c7756821a91f0a04b34
 152001 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 3c659044118e34603161457db9934a34f816d78b 75a6ed875ff0a2eb6b2971ae2098ed09963d7329 2e3de6253422112ae43e608661ba94ea6b345694 1251402caf8685f45d9d580f01583370f7e2d272
 151963 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 239b50a863704f7960525799eda82de061c7c458 3c659044118e34603161457db9934a34f816d78b eefe34ea4b82c2b47abe28af4cc7247d51553626 2e3de6253422112ae43e608661ba94ea6b345694 25636ed707cf1211ce846c7ec58f8643e435d7a7
 151965 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 239b50a863704f7960525799eda82de061c7c458 3c659044118e34603161457db9934a34f816d78b 3f429a3400822141651486193d6af625eeab05a5 2e3de6253422112ae43e608661ba94ea6b345694 71ca0e0ad000e690899936327eb09709ab182ade
 152014 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d8327496762b4f2a54c9bafd7a214314ec28e9e 3c659044118e34603161457db9934a34f816d78b 97f750becac33e3d3e446d3ff4ae9af2577b7877 6ada2285d9918859699c92e09540e023e0a16054 fb024b779336a0f73b3aee885b2ce082e812881f
 151986 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ca407c7246bf405da6d9b1b9d93e5e7f17b4b1f9 3c659044118e34603161457db9934a34f816d78b 5cc7a54c2e91d82cb6a52e4921325c511fd90712 2e3de6253422112ae43e608661ba94ea6b345694 1497e78068421d83956f8e82fb6e1bf1fc3b1199
 151952 fail irrelevant
 151966 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 58ae92a993687d913aa6dd00ef3497a1bc5f6c40 3c659044118e34603161457db9934a34f816d78b 54cdfe511219b8051046be55a6e156c4f08ff7ff 2e3de6253422112ae43e608661ba94ea6b345694 71ca0e0ad000e690899936327eb09709ab182ade
 152002 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 3c659044118e34603161457db9934a34f816d78b 007d1dbf72536ec1b847a944832e4de1546af2ac 2e3de6253422112ae43e608661ba94ea6b345694 1251402caf8685f45d9d580f01583370f7e2d272
 151987 fail irrelevant
 151967 fail irrelevant
 152015 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 3c659044118e34603161457db9934a34f816d78b 81cb05732efb36971901c515b007869cc1d3a532 2e3de6253422112ae43e608661ba94ea6b345694 1251402caf8685f45d9d580f01583370f7e2d272
 151971 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 a2433243fbe471c250d7eddc2c7da325d91265fd 3c659044118e34603161457db9934a34f816d78b 6675a653d2e57ab09c32c0ea7b44a1d6c40a7f58 2e3de6253422112ae43e608661ba94ea6b345694 3625b04991b4d6affadd99d377ab84bac48dfff4
 151973 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 3c659044118e34603161457db9934a34f816d78b 53550e81e2cafe7c03a39526b95cd21b5194d9b1 2e3de6253422112ae43e608661ba94ea6b345694 3625b04991b4d6affadd99d377ab84bac48dfff4
 151989 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8035edbe12f0f2a58e8fa9b06d05c8ee1c69ffae 3c659044118e34603161457db9934a34f816d78b 5d2f557b47dfbf8f23277a5bdd8473d4607c681a 2e3de6253422112ae43e608661ba94ea6b345694 51ca66c37371b10b378513af126646de22eddb17
 152003 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 3c659044118e34603161457db9934a34f816d78b d6b78ac8ecf94f56dbfbecc23fb4365d8772a41a 2e3de6253422112ae43e608661ba94ea6b345694 1251402caf8685f45d9d580f01583370f7e2d272
 151974 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 3c659044118e34603161457db9934a34f816d78b 250bc43a406f7d46e319abe87c19548d4f027828 2e3de6253422112ae43e608661ba94ea6b345694 3371ced37ced359167b5a71abee2062854371323
 151976 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3ee4f6cb360a877d171f2f9bb76b0d46d2cfa985 3c659044118e34603161457db9934a34f816d78b 9f1f264edbdf5516d6f208497310b3eedbc7b74c 2e3de6253422112ae43e608661ba94ea6b345694 2995d0afdf2d3fb44d07eada088db3613741db1e
 151991 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8035edbe12f0f2a58e8fa9b06d05c8ee1c69ffae 3c659044118e34603161457db9934a34f816d78b 7d2410cea154bf915fb30179ebda3b17ac36e70e 2e3de6253422112ae43e608661ba94ea6b345694 780aba2779b834f19b2a6f0dcdea0e7e0b5e1622
 151993 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8035edbe12f0f2a58e8fa9b06d05c8ee1c69ffae 3c659044118e34603161457db9934a34f816d78b 175198ad91d8bac540159705873b4ffe4fb94eab 2e3de6253422112ae43e608661ba94ea6b345694 51ca66c37371b10b378513af126646de22eddb17
 152005 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 3c659044118e34603161457db9934a34f816d78b 81cb05732efb36971901c515b007869cc1d3a532 2e3de6253422112ae43e608661ba94ea6b345694 1251402caf8685f45d9d580f01583370f7e2d272
 151994 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 567bc4b4ae8a975791382dd30ac413bc0d3ce88c 3c659044118e34603161457db9934a34f816d78b 6b0eff1a4ea47c835a7d8bee88c05c47ada37495 2e3de6253422112ae43e608661ba94ea6b345694 2995d0afdf2d3fb44d07eada088db3613741db1e
 151995 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 3c659044118e34603161457db9934a34f816d78b da9630c57ee386f8beb571ba6bb4a98d546c42ca 2e3de6253422112ae43e608661ba94ea6b345694 1251402caf8685f45d9d580f01583370f7e2d272
 151996 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 3c659044118e34603161457db9934a34f816d78b 7d3660e79830a069f1848bb4fa1cdf8f666424fb 2e3de6253422112ae43e608661ba94ea6b345694 fec6a7af5c5760b9bccd9e7c3eaf29f0401af264
 151988 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 6ff53d2a13740e39dea110d6b3509c156c659586 3c659044118e34603161457db9934a34f816d78b b7bda69c4ef46c57480f6e378923f5215b122778 6ada2285d9918859699c92e09540e023e0a16054 f8fe3c07363d11fc81d8e7382dbcaa357c861569
 151997 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 3c659044118e34603161457db9934a34f816d78b 157ed954e2dc8c2a4230d38058ca7f1fe50902e0 2e3de6253422112ae43e608661ba94ea6b345694 1251402caf8685f45d9d580f01583370f7e2d272
 152007 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 3c659044118e34603161457db9934a34f816d78b d6b78ac8ecf94f56dbfbecc23fb4365d8772a41a 2e3de6253422112ae43e608661ba94ea6b345694 1251402caf8685f45d9d580f01583370f7e2d272
 152009 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 3c659044118e34603161457db9934a34f816d78b 81cb05732efb36971901c515b007869cc1d3a532 2e3de6253422112ae43e608661ba94ea6b345694 1251402caf8685f45d9d580f01583370f7e2d272
 151999 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d8327496762b4f2a54c9bafd7a214314ec28e9e 3c659044118e34603161457db9934a34f816d78b 97f750becac33e3d3e446d3ff4ae9af2577b7877 6ada2285d9918859699c92e09540e023e0a16054 fb024b779336a0f73b3aee885b2ce082e812881f
 152010 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 3c659044118e34603161457db9934a34f816d78b d6b78ac8ecf94f56dbfbecc23fb4365d8772a41a 2e3de6253422112ae43e608661ba94ea6b345694 1251402caf8685f45d9d580f01583370f7e2d272
Searching for interesting versions
 Result found: flight 150631 (pass), for basis pass
 Result found: flight 151999 (fail), for basis failure
 Repro found: flight 152011 (pass), for basis pass
 Repro found: flight 152014 (fail), for basis failure
 0 revisions at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 3c659044118e34603161457db9934a34f816d78b d6b78ac8ecf94f56dbfbecc23fb4365d8772a41a 2e3de6253422112ae43e608661ba94ea6b345694 1251402caf8685f45d9d580f01583370f7e2d272
No revisions left to test, checking graph state.
 Result found: flight 152003 (pass), for last pass
 Result found: flight 152005 (fail), for first failure
 Repro found: flight 152007 (pass), for last pass
 Repro found: flight 152009 (fail), for first failure
 Repro found: flight 152010 (pass), for last pass
 Repro found: flight 152015 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  qemuu git://git.qemu.org/qemu.git
  Bug introduced:  81cb05732efb36971901c515b007869cc1d3a532
  Bug not present: d6b78ac8ecf94f56dbfbecc23fb4365d8772a41a
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/152015/


  commit 81cb05732efb36971901c515b007869cc1d3a532
  Author: Markus Armbruster <armbru@redhat.com>
  Date:   Tue Jun 9 14:23:37 2020 +0200
  
      qdev: Assert devices are plugged into a bus that can take them
      
      This would have caught some of the bugs I just fixed.
      
      Signed-off-by: Markus Armbruster <armbru@redhat.com>
      Reviewed-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
      Message-Id: <20200609122339.937862-23-armbru@redhat.com>

dot: graph is too large for cairo-renderer bitmaps. Scaling by 0.473936 to fit
pnmtopng: 209 colors found
Revision graph left in /home/logs/results/bisect/qemu-mainline/test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict.debian-hvm-install.{dot,ps,png,html,svg}.
----------------------------------------
152015: tolerable ALL FAIL

flight 152015 qemu-mainline real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/152015/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail baseline untested


jobs:
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Sun Jul 19 14:06:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 19 Jul 2020 14:06:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jx9xB-00046r-FZ; Sun, 19 Jul 2020 14:06:09 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=x0Gu=A6=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jx9x9-00046X-Io
 for xen-devel@lists.xenproject.org; Sun, 19 Jul 2020 14:06:07 +0000
X-Inumbo-ID: f9ac288a-c9c8-11ea-b7bb-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f9ac288a-c9c8-11ea-b7bb-bc764e2007e4;
 Sun, 19 Jul 2020 14:06:00 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=ivSehOvguzmbyo8s/GuYBQYYIFXznTiYQrJFSS/f3y8=; b=DxNX9rQ7xOLyIJ+PfnlCDjL1c
 brbe9hm//1XBaQXB9UF4iSPpr0oaIU7/Lzk2QYgkXDnfjpPBafWUkY2rgLcxT2dFa/T48wZUEENwg
 OjjRkgC/b4sm36CoY9iqD0IjrpnaHG5cTy6qKMLKHtk3uHmcxT/o/d0KmKK+K3Mqaixfc=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jx9x2-0006xx-92; Sun, 19 Jul 2020 14:06:00 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jx9x2-00053A-0f; Sun, 19 Jul 2020 14:06:00 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jx9x2-0001Hk-00; Sun, 19 Jul 2020 14:05:59 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-152004-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 152004: tolerable trouble: fail/pass/starved
X-Osstest-Failures: xen-unstable:test-amd64-amd64-xl-shadow:guest-localmigrate/x10:fail:heisenbug
 xen-unstable:test-amd64-amd64-examine:memdisk-try-append:fail:heisenbug
 xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 xen-unstable:test-amd64-amd64-qemuu-freebsd11-amd64:hosts-allocate:starved:nonblocking
 xen-unstable:test-amd64-amd64-qemuu-freebsd12-amd64:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This: xen=fb024b779336a0f73b3aee885b2ce082e812881f
X-Osstest-Versions-That: xen=fb024b779336a0f73b3aee885b2ce082e812881f
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 19 Jul 2020 14:05:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 152004 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/152004/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-shadow 18 guest-localmigrate/x10 fail in 151990 pass in 152004
 test-amd64-amd64-examine      4 memdisk-try-append         fail pass in 151990

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 151990
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 151990
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 151990
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 151990
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 151990
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 151990
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 151990
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 151990
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop             fail like 151990
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-amd64-amd64-qemuu-freebsd11-amd64  2 hosts-allocate           starved n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  2 hosts-allocate           starved n/a

version targeted for testing:
 xen                  fb024b779336a0f73b3aee885b2ce082e812881f
baseline version:
 xen                  fb024b779336a0f73b3aee885b2ce082e812881f

Last test of basis   152004  2020-07-19 03:23:52 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       starved 
 test-amd64-amd64-qemuu-freebsd12-amd64                       starved 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Sun Jul 19 15:55:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 19 Jul 2020 15:55:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxBeD-0004o4-Q0; Sun, 19 Jul 2020 15:54:41 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=O9sL=A6=gmail.com=jaromir.dolecek@srs-us1.protection.inumbo.net>)
 id 1jxBeC-0004nz-1U
 for xen-devel@lists.xenproject.org; Sun, 19 Jul 2020 15:54:40 +0000
X-Inumbo-ID: 272e4856-c9d8-11ea-bca7-bc764e2007e4
Received: from mail-vs1-xe2d.google.com (unknown [2607:f8b0:4864:20::e2d])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 272e4856-c9d8-11ea-bca7-bc764e2007e4;
 Sun, 19 Jul 2020 15:54:39 +0000 (UTC)
Received: by mail-vs1-xe2d.google.com with SMTP id b77so7264412vsd.8
 for <xen-devel@lists.xenproject.org>; Sun, 19 Jul 2020 08:54:39 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=mime-version:from:date:message-id:subject:to;
 bh=xILTcvzTtUapDEiFu/hLuAqy10TR7gKRIOs8YhGb9wY=;
 b=PXHa+Flb6n8e5VdOfxjv1gjhittsD4+xpa7MjqP2iD0fWUzRgxohsTmDJ+OmA250tJ
 7BUihXG1CMVGSouXBys3gz1ILOLcjCbwKRNQCI+oxCrRqT5/Boj5S/GXXNwnDNfGtK2y
 UZRdA6FQM3nEA7bKW+5RiQqK9hL+OKS6GTzGWWEn+8dhD5L9phktXVMXTrAeoMv/f9V1
 CTVJINVhElgxuQV9sm33GRset2RvGppdrqNkIqbJMUxfkCRz2w5LDgVxxB55ag0ofTHL
 qJfVC/ric3vREa6pFA+TLI6TZNWmi1Oyud4wQR8dCY2jCIqmdPqtYpjvermhPQC1SWRh
 Gjcg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:from:date:message-id:subject:to;
 bh=xILTcvzTtUapDEiFu/hLuAqy10TR7gKRIOs8YhGb9wY=;
 b=kw+E+caL3bDxrJypOnq0wg1nWpA99JvuCnTBigqJH6RQH6Kh2SLP1cnkN5ueLxT1T3
 RzWZXFRbYuNtSHM9cRoc/B2mcJPeley7mCXJgH1/KEyKJrAi4yAZJOG1KKsPdY5uOz9N
 rJ5x1jky6huLHFziyXc0h7QB7AUh4/aR48ptfA1C+irmbt4WM3vlQ+cES4sQYxJOM18+
 DjOQpAyYMzs1fKfbU/hTVsH0EIDrK+zE3dqS4YLsJzeIP64f/Iore6VH0ZEL8O76/y0S
 SKUAivDuMEr6htgzrm+5aIc5QM2NO7u8mgmGo/uXwLc0BqZ0iZa5/P7rp2Hu/ZnZvRTu
 3h8g==
X-Gm-Message-State: AOAM5316J2mTKYr0h9raHuoRDet8h6epREFFucZOKVOqoxxQIeN6FUZZ
 /0NOhKbjGScseboTyLfot9Zer5qQPeOtg38FyOqH02b9100=
X-Google-Smtp-Source: ABdhPJwjfNUih4z2lYQE6CEtAT2OsJfX2eQgo73kubtaAbz5VI1WugeMoci3SVrYoZVRdGX9uhl0hBKHkzSdpXw1scI=
X-Received: by 2002:a67:b641:: with SMTP id e1mr12519961vsm.19.1595174079148; 
 Sun, 19 Jul 2020 08:54:39 -0700 (PDT)
MIME-Version: 1.0
From: =?UTF-8?B?SmFyb23DrXIgRG9sZcSNZWs=?= <jaromir.dolecek@gmail.com>
Date: Sun, 19 Jul 2020 17:54:28 +0200
Message-ID: <CAMnsW5542gmBLpKBsW5pnm=2VXmaDVHzg=OXXvBdu1BsYLdDvQ@mail.gmail.com>
Subject: Advice for implementing MSI-X support on NetBSD Xen 4.11?
To: xen-devel@lists.xenproject.org
Content-Type: text/plain; charset="UTF-8"
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hello,

I've implemented support for using MSI under NetBSD Dom0 with 4.11.
This works well.

I'm now having trouble getting MSI-X to work under Xen.
The PHYSDEVOP_map_pirq hypercall succeeds just as it does for MSI, but
interrupts don't seem to be delivered.

MSI-X interrupts work with NetBSD for the same devices when booted
natively, without Xen.

Can you give me some advice on where to start looking to get this
working? Is there perhaps something special that needs to be done in the
PCI subsystem to let Xen take over?

Thank you.

Jaromir


From xen-devel-bounces@lists.xenproject.org Sun Jul 19 17:18:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 19 Jul 2020 17:18:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxCwV-0003bT-6q; Sun, 19 Jul 2020 17:17:39 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=x0Gu=A6=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jxCwU-0003bO-3O
 for xen-devel@lists.xenproject.org; Sun, 19 Jul 2020 17:17:38 +0000
X-Inumbo-ID: bd4349ee-c9e3-11ea-987c-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id bd4349ee-c9e3-11ea-987c-12813bfff9fa;
 Sun, 19 Jul 2020 17:17:36 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=WU4l8K31kBBFY1ZZQgye93NctKXWAPhKYWkXgjcbjPM=; b=kWDiy97oUcHlj6l+qxDdcJ9SX
 vrvsFY22SkA0U2E/LaZZSFiIpLnogggtY1sNOcvSfxlCgiu7a4cvbg7k9Z8q0ozlooSBFdhprRj6R
 ALGeVu26yVFU/PUiO4Syr00XEzGOSPYZUgN5e08ZUXzYDV4pHkOvqnuYoVXorl97/B5dc=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jxCwR-0002ww-0r; Sun, 19 Jul 2020 17:17:35 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jxCwQ-0001KP-HM; Sun, 19 Jul 2020 17:17:34 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jxCwQ-0002sQ-Gj; Sun, 19 Jul 2020 17:17:34 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-152008-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 152008: regressions - trouble:
 blocked/fail/pass/starved
X-Osstest-Failures: linux-linus:test-arm64-arm64-libvirt-xsm:guest-start/debian.repeat:fail:regression
 linux-linus:build-i386-pvops:kernel-build:fail:regression
 linux-linus:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-pair:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-examine:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-qemuu-freebsd11-amd64:hosts-allocate:starved:nonblocking
 linux-linus:test-amd64-amd64-qemuu-freebsd12-amd64:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This: linux=f932d58abc38c898d7d3fe635ecb2b821a256f54
X-Osstest-Versions-That: linux=1b5044021070efa3259f3e9548dc35d1eb6aa844
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 19 Jul 2020 17:17:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 152008 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/152008/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-libvirt-xsm 16 guest-start/debian.repeat fail REGR. vs. 151214
 build-i386-pvops              6 kernel-build             fail REGR. vs. 151214

Tests which did not succeed, but are not blocking:
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemut-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-amd64-i386-examine       1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemut-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 151214
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 151214
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 151214
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 151214
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 151214
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 151214
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-freebsd11-amd64  2 hosts-allocate           starved n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  2 hosts-allocate           starved n/a

version targeted for testing:
 linux                f932d58abc38c898d7d3fe635ecb2b821a256f54
baseline version:
 linux                1b5044021070efa3259f3e9548dc35d1eb6aa844

Last test of basis   151214  2020-06-18 02:27:46 Z   31 days
Failing since        151236  2020-06-19 19:10:35 Z   29 days   47 attempts
Testing same since   152008  2020-07-19 05:32:20 Z    0 days    1 attempts

------------------------------------------------------------
812 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             fail    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         blocked 
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       starved 
 test-amd64-amd64-qemuu-freebsd12-amd64                       starved 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 43813 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Jul 19 21:20:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 19 Jul 2020 21:20:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxGj3-00070l-En; Sun, 19 Jul 2020 21:20:01 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=x0Gu=A6=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jxGj1-0006wc-EE
 for xen-devel@lists.xenproject.org; Sun, 19 Jul 2020 21:19:59 +0000
X-Inumbo-ID: 9830c4a2-ca05-11ea-8496-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9830c4a2-ca05-11ea-8496-bc764e2007e4;
 Sun, 19 Jul 2020 21:19:56 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=Dlj46sm0g3mCk6s7f4Fxl02NQzKE1Wh25ynIBWNA9X8=; b=eN90lHZmaKurMFOhJc5JaHeUS
 rnEVKq+NSbe9XcHM14JgvLLgDABGSRsIQW7pNjYRq2B+6wx0gpPVeeElxINv9NasTBb5LwadiAbIS
 b/wwpBVbb4olq34cwF7Rx1RQjxY2zyfALlR3rXjE39VMwFCpBb0pFOcHqwKM4b1yGqZsY=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jxGix-00082E-VM; Sun, 19 Jul 2020 21:19:56 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jxGix-0004Rt-Jq; Sun, 19 Jul 2020 21:19:55 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jxGix-0003U2-JK; Sun, 19 Jul 2020 21:19:55 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-152013-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 152013: regressions - trouble: fail/pass/starved
X-Osstest-Failures: qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:redhat-install:fail:regression
 qemu-mainline:test-amd64-i386-freebsd10-i386:guest-start:fail:regression
 qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-start:fail:regression
 qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:redhat-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:guest-localmigrate:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:hosts-allocate:starved:nonblocking
 qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This: qemuu=939ab64b400b9bec4b59795a87817784093e1acd
X-Osstest-Versions-That: qemuu=9e3903136d9acde2fb2dd9e967ba928050a6cb4a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 19 Jul 2020 21:19:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 152013 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/152013/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemuu-ovmf-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-qemuu-nested-intel 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-ovmf-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-debianhvm-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-qemuu-rhel6hvm-intel 10 redhat-install   fail REGR. vs. 151065
 test-amd64-i386-freebsd10-i386 11 guest-start            fail REGR. vs. 151065
 test-amd64-i386-freebsd10-amd64 11 guest-start           fail REGR. vs. 151065
 test-amd64-amd64-qemuu-nested-amd 10 debian-hvm-install  fail REGR. vs. 151065
 test-amd64-i386-qemuu-rhel6hvm-amd 10 redhat-install     fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-win7-amd64 10 windows-install  fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-win7-amd64 10 windows-install   fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-ws16-amd64 10 windows-install  fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-ws16-amd64 10 windows-install   fail REGR. vs. 151065

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-dom0pvh-xl-amd 16 guest-localmigrate   fail baseline untested
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 151065
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 151065
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-freebsd11-amd64  2 hosts-allocate           starved n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  2 hosts-allocate           starved n/a

version targeted for testing:
 qemuu                939ab64b400b9bec4b59795a87817784093e1acd
baseline version:
 qemuu                9e3903136d9acde2fb2dd9e967ba928050a6cb4a

Last test of basis   151065  2020-06-12 22:27:51 Z   36 days
Failing since        151101  2020-06-14 08:32:51 Z   35 days   49 attempts
Testing same since   152013  2020-07-19 09:29:36 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Ahmed Karaman <ahmedkhaledkaraman@gmail.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.m.mail@gmail.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Richardson <Alexander.Richardson@cl.cam.ac.uk>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Boettcher <alexander.boettcher@genode-labs.com>
  Alexander Bulekov <alxndr@bu.edu>
  Alexandre Mergnat <amergnat@baylibre.com>
  Alexey Krasikov <alex-krasikov@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Allan Peramaki <aperamak@pp1.inet.fi>
  Andrew <andrew@daynix.com>
  Andrew Jones <drjones@redhat.com>
  Andrew Melnychenko <andrew@daynix.com>
  Ani Sinha <ani.sinha@nutanix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Ard Biesheuvel <ardb@kernel.org>
  Artyom Tarasenko <atar4qemu@gmail.com>
  Atish Patra <atish.patra@wdc.com>
  Aurelien Jarno <aurelien@aurel32.net>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Basil Salman <basil@daynix.com>
  Basil Salman <bsalman@redhat.com>
  Beata Michalska <beata.michalska@linaro.org>
  Bin Meng <bin.meng@windriver.com>
  Bin Meng <bmeng.cn@gmail.com>
  Cameron Esfahani <dirty@apple.com>
  Catherine A. Frederick <chocola@animebitch.es>
  Cathy Zhang <cathy.zhang@intel.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chenyi Qiang <chenyi.qiang@intel.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christophe de Dinechin <dinechin@redhat.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Cindy Lu <lulu@redhat.com>
  Claudio Fontana <cfontana@suse.de>
  Cleber Rosa <crosa@redhat.com>
  Colin Xu <colin.xu@intel.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniele Buono <dbuono@linux.vnet.ibm.com>
  David Carlier <devnexen@gmail.com>
  David Edmondson <david.edmondson@oracle.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Denis V. Lunev <den@openvz.org>
  Derek Su <dereksu@qnap.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Ed Robbins <E.J.C.Robbins@kent.ac.uk>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Emilio G. Cota <cota@braap.org>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  erik-smit <erik.lucas.smit@gmail.com>
  Evgeny Yakovlev <eyakovlev@virtuozzo.com>
  fangying <fangying1@huawei.com>
  Farhan Ali <alifm@linux.ibm.com>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frank Chang <frank.chang@sifive.com>
  Geoffrey McRae <geoff@hostfission.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Giuseppe Musacchio <thatlemon@gmail.com>
  Greg Kurz <groug@kaod.org>
  Guenter Roeck <linux@roeck-us.net>
  Gustavo Romero <gromero@linux.ibm.com>
  Halil Pasic <pasic@linux.ibm.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Howard Spoelstra <hsp.cat7@gmail.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Ian Jiang <ianjiang.ict@gmail.com>
  Igor Mammedov <imammedo@redhat.com>
  Jan Kiszka <jan.kiszka@siemens.com>
  Janne Grunau <j@jannau.net>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason Wang <jasowang@redhat.com>
  Jay Zhou <jianjay.zhou@huawei.com>
  Jean-Christophe Dubois <jcd@tribudubois.net>
  Jessica Clarke <jrtc27@jrtc27.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jingqi Liu <jingqi.liu@intel.com>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Joseph Myers <joseph@codesourcery.com>
  Josh Kunz <jkz@google.com>
  Juan Quintela <quintela@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Keqian Zhu <zhukeqian1@huawei.com>
  Kevin Wolf <kwolf@redhat.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Leif Lindholm <leif@nuviainc.com>
  Leonid Bloch <lb.workbox@gmail.com>
  Leonid Bloch <lbloch@janustech.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Qiang <liq3ea@gmail.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  lichun <lichun@ruijie.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  Like Xu <like.xu@linux.intel.com>
  Lingfeng Yang <lfy@google.com>
  Lingshan zhu <lingshan.zhu@intel.com>
  Liran Alon <liran.alon@oracle.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Lukas Straub <lukasstraub2@web.de>
  Luwei Kang <luwei.kang@intel.com>
  Maciej S. Szmigiero <maciej.szmigiero@oracle.com>
  Magnus Damm <magnus.damm@gmail.com>
  Mao Zhongyi <maozhongyi@cmss.chinamobile.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marcelo Tosatti <mtosatti@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Mario Smarduch <msmarduch@digitalocean.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Masahiro Yamada <masahiroy@kernel.org>
  Matus Kysel <mkysel@tachyum.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Maxime Coquelin <maxime.coquelin@redhat.com>
  Menno Lageman <menno.lageman@oracle.com>
  Michael Rolnik <mrolnik@gmail.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michele Denber <denber@mindspring.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Olaf Hering <olaf@aepfle.de>
  Pan Nengyuan <pannengyuan@huawei.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Radoslaw Biernacki <rad@semihalf.com>
  Raphael Norwitz <raphael.norwitz@nutanix.com>
  Richard Henderson <richard.henderson@linaro.org>
  Richard W.M. Jones <rjones@redhat.com>
  Riku Voipio <riku.voipio@linaro.org>
  Robert Foley <robert.foley@linaro.org>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Roman Kagan <rkagan@virtuozzo.com>
  Roman Kagan <rvkagan@yandex-team.ru>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Sarah Harris <S.E.Harris@kent.ac.uk>
  Sebastian Rasmussen <sebras@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Tao Xu <tao3.xu@intel.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tiwei Bie <tiwei.bie@intel.com>
  Tong Ho <tong.ho@xilinx.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  WangBowen <bowen.wang@intel.com>
  Wei Huang <wei.huang2@amd.com>
  Wei Wang <wei.w.wang@intel.com>
  Wentong Wu <wentong.wu@intel.com>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xie Yongji <xieyongji@bytedance.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Ying Fang <fangying1@huawei.com>
  Yoshinori Sato <ysato@users.sourceforge.jp>
  Yuri Benditovich <yuri.benditovich@daynix.com>
  Zhang Chen <chen.zhang@intel.com>
  Zheng Chuan <zhengchuan@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       starved 
 test-amd64-amd64-qemuu-freebsd12-amd64                       starved 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 29577 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Jul 20 02:20:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jul 2020 02:20:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxLPu-0000OY-RX; Mon, 20 Jul 2020 02:20:34 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=wjZm=A7=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jxLPt-0000OT-1P
 for xen-devel@lists.xenproject.org; Mon, 20 Jul 2020 02:20:33 +0000
X-Inumbo-ID: 953507b6-ca2f-11ea-98d7-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 953507b6-ca2f-11ea-98d7-12813bfff9fa;
 Mon, 20 Jul 2020 02:20:30 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=LAJovw70A43tjr9zJy/13RbhjIeqIobVa8yBVTuitvk=; b=r35Xh3BfbkBNH8zUHJpZIKMD3
 ezuHjD3tqi+yN3a6KhbEPK1l4xh4ifAVIKUGo5agkPK4dwaHvytvP38Rw5YEp0Ti6YYuy+bvVMfsP
 s+1nJukEPFqFdwiWBKncLq68ZPQTjNj2NE2rVczh85pzUSI6IyKvqch6Vl9PmgN+/8VsY=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jxLPp-00081l-PK; Mon, 20 Jul 2020 02:20:29 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jxLPp-0003zG-Gc; Mon, 20 Jul 2020 02:20:29 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jxLPp-0004Os-Fz; Mon, 20 Jul 2020 02:20:29 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-152022-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 152022: regressions - trouble:
 blocked/fail/pass/starved
X-Osstest-Failures: linux-linus:test-arm64-arm64-libvirt-xsm:guest-start/debian.repeat:fail:regression
 linux-linus:build-i386-pvops:kernel-build:fail:regression
 linux-linus:test-armhf-armhf-xl-vhd:guest-start:fail:heisenbug
 linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-pair:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-examine:build-check(1):blocked:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-qemuu-freebsd11-amd64:hosts-allocate:starved:nonblocking
 linux-linus:test-amd64-amd64-qemuu-freebsd12-amd64:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This: linux=f932d58abc38c898d7d3fe635ecb2b821a256f54
X-Osstest-Versions-That: linux=1b5044021070efa3259f3e9548dc35d1eb6aa844
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 20 Jul 2020 02:20:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 152022 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/152022/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-libvirt-xsm 16 guest-start/debian.repeat fail REGR. vs. 151214
 build-i386-pvops              6 kernel-build             fail REGR. vs. 151214

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl-vhd      11 guest-start                fail pass in 152008

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemut-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-examine       1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd     12 migrate-support-check fail in 152008 never pass
 test-armhf-armhf-xl-vhd 13 saverestore-support-check fail in 152008 never pass
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 151214
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 151214
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 151214
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 151214
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 151214
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 151214
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-freebsd11-amd64  2 hosts-allocate           starved n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  2 hosts-allocate           starved n/a

version targeted for testing:
 linux                f932d58abc38c898d7d3fe635ecb2b821a256f54
baseline version:
 linux                1b5044021070efa3259f3e9548dc35d1eb6aa844

Last test of basis   151214  2020-06-18 02:27:46 Z   31 days
Failing since        151236  2020-06-19 19:10:35 Z   30 days   48 attempts
Testing same since   152008  2020-07-19 05:32:20 Z    0 days    2 attempts

------------------------------------------------------------
812 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             fail    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         blocked 
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       starved 
 test-amd64-amd64-qemuu-freebsd12-amd64                       starved 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 43813 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Jul 20 06:46:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jul 2020 06:46:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxPZL-0005rp-ST; Mon, 20 Jul 2020 06:46:35 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UosC=A7=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jxPZK-0005rk-JW
 for xen-devel@lists.xenproject.org; Mon, 20 Jul 2020 06:46:34 +0000
X-Inumbo-ID: bfc5b85c-ca54-11ea-98e4-12813bfff9fa
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id bfc5b85c-ca54-11ea-98e4-12813bfff9fa;
 Mon, 20 Jul 2020 06:46:33 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595227593;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:in-reply-to;
 bh=NtlnAFJyO621aeblpWvmd5JeyG0HHI28kn0Wz3kJP5Y=;
 b=ZRnLz8k2nqEzUjyZvZeG+3893Tn897AjiW9rgf5rSrsB25o13NhOr0nj
 dZNUBaAFsarWChcMo9s7KzzwhCBQuxlPPVtg5nEDH2SQaFhpUj+5JpC9I
 C/GP02jh2a1jJu0fGaKEjRM9GGK8Y9Bi9Sgb64s3M+rrrNGn4XfQ8QGaN Y=;
Authentication-Results: esa5.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: VFr09NkDrYS42i9obUHyjr82PKMu92rZUtpouukdHRu+cQmC3VxLMeQ1thqx9s2Y9kRE4+NXy3
 aEadU6+6RJcMaPRVUgp5o12jfXPbGOPt0lKeF04QyIr7PDYzvfqH2o/NQtcuQ8BV89e2zCvDz5
 Zn+aspZMSj7WYKaCCUmT0mgDRIKmytRFi5FSTF6M1e7SQ5TS4CLR2pTbTu6AzjmG4JxFAmuqtx
 IKZQ6kjvFbKqMPvXgftGe7IizuBMgy6vJ+rIa4COpn6nt/62zhDJyIpOr9RMGs7CSf8vq9m0gw
 7io=
X-SBRS: 2.7
X-MesageID: 22920836
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,374,1589256000"; d="scan'208";a="22920836"
Date: Mon, 20 Jul 2020 08:45:17 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: osstest service owner <osstest-admin@xenproject.org>
Subject: Re: [xen-unstable test] 152004: tolerable trouble: fail/pass/starved
Message-ID: <20200720064517.GB7191@Air-de-Roger>
References: <osstest-152004-mainreport@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
In-Reply-To: <osstest-152004-mainreport@xen.org>
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Sun, Jul 19, 2020 at 02:05:59PM +0000, osstest service owner wrote:
> flight 152004 xen-unstable real [real]
> http://logs.test-lab.xenproject.org/osstest/logs/152004/
[...]
>  test-amd64-amd64-dom0pvh-xl-amd                              pass    
>  test-amd64-amd64-dom0pvh-xl-intel                            pass    

First pass of the PVH dom0 tests after the XSA fixes :).

Roger.


From xen-devel-bounces@lists.xenproject.org Mon Jul 20 07:00:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jul 2020 07:00:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxPmf-0007WU-4m; Mon, 20 Jul 2020 07:00:21 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UosC=A7=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jxPmd-0007Vy-7s
 for xen-devel@lists.xenproject.org; Mon, 20 Jul 2020 07:00:19 +0000
X-Inumbo-ID: ab20566c-ca56-11ea-b7bb-bc764e2007e4
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ab20566c-ca56-11ea-b7bb-bc764e2007e4;
 Mon, 20 Jul 2020 07:00:18 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595228417;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:content-transfer-encoding:in-reply-to;
 bh=iocUiLLkBYzmKPC2eRpkZG7AYNH60rAT4JdmVZHSVYI=;
 b=P54pA7JTa5AVxoGo31xLDSpAaP1U6i6Mlikf0HrY97+fulHjdbVnWBST
 Ki80AzOadTllL0nbcRrpxHXeWq58oF8y/LCsky1SOKl4MJ6seKgUlrm2u
 qZ1N+78M3hMF0tOhBIC/mK0/WlKFV8uE/VJg6dEUysiyUDExTZdfTznqQ o=;
Authentication-Results: esa6.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: /gN7WqqKMJp6DphdnbH3646GayPWugXN2z7lX3DaOu/rqxip3pi0Rt6q+l8nYseSfImDaP042B
 GLnm/tXMDfhycdNLyKczfOd0N3dyLZVUzJKLqTlnKZWd+Uisbjzv4THhAFvmQWc6eYraS60VLv
 5c8YpjAgOkzktrR47dWhYgWPn/DVTTx0McYd3/np8rm0jK73Ox+dlq/Jao8VxoN8CSl6Q/T2kC
 Yy8IS/tIxEVbKlepzr8TukNUHa7ZWmKOEPs9dAmVM6FDSLnueJxgfsuYnLEB85lZuHmJvjbjGy
 vKo=
X-SBRS: 2.7
X-MesageID: 23056731
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,374,1589256000"; d="scan'208";a="23056731"
Date: Mon, 20 Jul 2020 09:00:10 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: =?utf-8?B?SmFyb23DrXIgRG9sZcSNZWs=?= <jaromir.dolecek@gmail.com>
Subject: Re: Advice for implementing MSI-X support on NetBSD Xen 4.11?
Message-ID: <20200720070010.GC7191@Air-de-Roger>
References: <CAMnsW5542gmBLpKBsW5pnm=2VXmaDVHzg=OXXvBdu1BsYLdDvQ@mail.gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <CAMnsW5542gmBLpKBsW5pnm=2VXmaDVHzg=OXXvBdu1BsYLdDvQ@mail.gmail.com>
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Sun, Jul 19, 2020 at 05:54:28PM +0200, Jaromír Doleček wrote:
> Hello,
> 
> I've implemented support for using MSI under NetBSD Dom0 with 4.11.
> This works well.
> 
> I am now having trouble getting MSI-X to work under Xen.
> The PHYSDEVOP_map_pirq hypercall succeeds, just as it does for MSI,
> but interrupts don't seem to get delivered.

How are you filling physdev_map_pirq for MSI-X?

You need to set entry_nr and table_base.

> MSI-X interrupts work with NetBSD for the same devices when booted
> natively, without Xen.
> 
> Can you give me some advice on where to start looking to get this
> working? Is there perhaps something special to be done within the PCI
> subsystem to allow Xen to take over?

Are you enabling the capability and unmasking the interrupt in the
MSI-X table?

IIRC the OS also needs to unmask the entry in the MSI-X table in MMIO
space, as done natively.

The Xen debug keys can also be helpful; take a look at
'i' and 'M'.

Roger.


From xen-devel-bounces@lists.xenproject.org Mon Jul 20 07:38:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jul 2020 07:38:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxQN6-0001lU-2i; Mon, 20 Jul 2020 07:38:00 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=wjZm=A7=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jxQN5-0001kr-JN
 for xen-devel@lists.xenproject.org; Mon, 20 Jul 2020 07:37:59 +0000
X-Inumbo-ID: ef07dc06-ca5b-11ea-98ec-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ef07dc06-ca5b-11ea-98ec-12813bfff9fa;
 Mon, 20 Jul 2020 07:37:59 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=/z7jMx2OGlFEaAVyOR254/Uz3vG+pn6alj42pRT5HRM=; b=xTuCrnCp9KlnITDXPRQU+BSkC
 xr7w41OqJ5JT/7Mjgx11wueOePuYyE6CaTfTjxOjMlPlc1vbMSHWIUDEqqA2HCPycIpSy1q0BVu6H
 4xk3ePy2mxF0mIRNEbVFBiPNpAGwZ/xDPZdcsPxyuL1u6B3phujpKSl/aRi8zvSNFQVi8=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jxQN4-00076T-J3; Mon, 20 Jul 2020 07:37:58 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jxQN4-0000D9-7C; Mon, 20 Jul 2020 07:37:58 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jxQN4-000662-6Y; Mon, 20 Jul 2020 07:37:58 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-152034-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 152034: regressions - FAIL
X-Osstest-Failures: libvirt:build-amd64-libvirt:libvirt-build:fail:regression
 libvirt:build-i386-libvirt:libvirt-build:fail:regression
 libvirt:build-arm64-libvirt:libvirt-build:fail:regression
 libvirt:build-armhf-libvirt:libvirt-build:fail:regression
 libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This: libvirt=747ba4ed98af063bc6f5485850adc4225ca69d53
X-Osstest-Versions-That: libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 20 Jul 2020 07:37:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 152034 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/152034/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              747ba4ed98af063bc6f5485850adc4225ca69d53
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z   10 days
Failing since        151818  2020-07-11 04:18:52 Z    9 days   10 attempts
Testing same since   152006  2020-07-19 04:18:59 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrea Bolognani <abologna@redhat.com>
  Bastien Orivel <bastien.orivel@diateam.net>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Jin Yan <jinyan12@huawei.com>
  Laine Stump <laine@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Stefan Berger <stefanb@linux.ibm.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1702 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Jul 20 07:51:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jul 2020 07:51:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxQaM-0003M5-A9; Mon, 20 Jul 2020 07:51:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=wjZm=A7=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jxQaL-0003M0-9i
 for xen-devel@lists.xenproject.org; Mon, 20 Jul 2020 07:51:41 +0000
X-Inumbo-ID: d76f8f24-ca5d-11ea-8496-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d76f8f24-ca5d-11ea-8496-bc764e2007e4;
 Mon, 20 Jul 2020 07:51:38 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=dNSO/R2BlB2gcynHJ3ew+eI8E5GIg63Sra2vz3EijrA=; b=vJrDlxp44xBuA9AbVPJGvKAF8
 lcESGB6qsNN9NznBLE5bAJRaaZJoA4GBg3hQ449GFGt4Hh9UqvAhRQk6+N4VDT2qIpbk/XcYxqb8r
 8G/rSZlBBmxWVoC+VxhuH1XQocfYZ4OaO9Ic/e/CK4KQ2IUxqJ7yiQRt+RTQWGrJwRzb4=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jxQaH-0007Nx-TV; Mon, 20 Jul 2020 07:51:37 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jxQaH-0001ae-EQ; Mon, 20 Jul 2020 07:51:37 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jxQaH-0005TQ-Do; Mon, 20 Jul 2020 07:51:37 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-152026-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 152026: regressions - trouble: fail/pass/starved
X-Osstest-Failures: qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:redhat-install:fail:regression
 qemu-mainline:test-amd64-i386-freebsd10-i386:guest-start:fail:regression
 qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-start:fail:regression
 qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:redhat-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:guest-localmigrate:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:hosts-allocate:starved:nonblocking
 qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This: qemuu=9fc87111005e8903785db40819af66b8f85b8b96
X-Osstest-Versions-That: qemuu=9e3903136d9acde2fb2dd9e967ba928050a6cb4a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 20 Jul 2020 07:51:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 152026 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/152026/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemuu-ovmf-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-qemuu-nested-intel 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-ovmf-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-debianhvm-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-qemuu-rhel6hvm-intel 10 redhat-install   fail REGR. vs. 151065
 test-amd64-i386-freebsd10-i386 11 guest-start            fail REGR. vs. 151065
 test-amd64-i386-freebsd10-amd64 11 guest-start           fail REGR. vs. 151065
 test-amd64-amd64-qemuu-nested-amd 10 debian-hvm-install  fail REGR. vs. 151065
 test-amd64-i386-qemuu-rhel6hvm-amd 10 redhat-install     fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-win7-amd64 10 windows-install  fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-win7-amd64 10 windows-install   fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-ws16-amd64 10 windows-install  fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-ws16-amd64 10 windows-install   fail REGR. vs. 151065

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-dom0pvh-xl-amd 16 guest-localmigrate   fail baseline untested
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 151065
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 151065
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-freebsd11-amd64  2 hosts-allocate           starved n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  2 hosts-allocate           starved n/a

version targeted for testing:
 qemuu                9fc87111005e8903785db40819af66b8f85b8b96
baseline version:
 qemuu                9e3903136d9acde2fb2dd9e967ba928050a6cb4a

Last test of basis   151065  2020-06-12 22:27:51 Z   37 days
Failing since        151101  2020-06-14 08:32:51 Z   35 days   50 attempts
Testing same since   152026  2020-07-19 21:37:31 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Ahmed Karaman <ahmedkhaledkaraman@gmail.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.m.mail@gmail.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Richardson <Alexander.Richardson@cl.cam.ac.uk>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Boettcher <alexander.boettcher@genode-labs.com>
  Alexander Bulekov <alxndr@bu.edu>
  Alexandre Mergnat <amergnat@baylibre.com>
  Alexey Krasikov <alex-krasikov@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Allan Peramaki <aperamak@pp1.inet.fi>
  Andrew <andrew@daynix.com>
  Andrew Jones <drjones@redhat.com>
  Andrew Melnychenko <andrew@daynix.com>
  Ani Sinha <ani.sinha@nutanix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Ard Biesheuvel <ardb@kernel.org>
  Artyom Tarasenko <atar4qemu@gmail.com>
  Atish Patra <atish.patra@wdc.com>
  Aurelien Jarno <aurelien@aurel32.net>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Basil Salman <basil@daynix.com>
  Basil Salman <bsalman@redhat.com>
  Beata Michalska <beata.michalska@linaro.org>
  Bin Meng <bin.meng@windriver.com>
  Bin Meng <bmeng.cn@gmail.com>
  Cameron Esfahani <dirty@apple.com>
  Catherine A. Frederick <chocola@animebitch.es>
  Cathy Zhang <cathy.zhang@intel.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chenyi Qiang <chenyi.qiang@intel.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christophe de Dinechin <dinechin@redhat.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Cindy Lu <lulu@redhat.com>
  Claudio Fontana <cfontana@suse.de>
  Cleber Rosa <crosa@redhat.com>
  Colin Xu <colin.xu@intel.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniele Buono <dbuono@linux.vnet.ibm.com>
  David Carlier <devnexen@gmail.com>
  David Edmondson <david.edmondson@oracle.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Denis V. Lunev <den@openvz.org>
  Derek Su <dereksu@qnap.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Ed Robbins <E.J.C.Robbins@kent.ac.uk>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Emilio G. Cota <cota@braap.org>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  erik-smit <erik.lucas.smit@gmail.com>
  Evgeny Yakovlev <eyakovlev@virtuozzo.com>
  fangying <fangying1@huawei.com>
  Farhan Ali <alifm@linux.ibm.com>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frank Chang <frank.chang@sifive.com>
  Geoffrey McRae <geoff@hostfission.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Giuseppe Musacchio <thatlemon@gmail.com>
  Greg Kurz <groug@kaod.org>
  Guenter Roeck <linux@roeck-us.net>
  Gustavo Romero <gromero@linux.ibm.com>
  Halil Pasic <pasic@linux.ibm.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Howard Spoelstra <hsp.cat7@gmail.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Ian Jiang <ianjiang.ict@gmail.com>
  Igor Mammedov <imammedo@redhat.com>
  Jan Kiszka <jan.kiszka@siemens.com>
  Janne Grunau <j@jannau.net>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason Wang <jasowang@redhat.com>
  Jay Zhou <jianjay.zhou@huawei.com>
  Jean-Christophe Dubois <jcd@tribudubois.net>
  Jessica Clarke <jrtc27@jrtc27.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jingqi Liu <jingqi.liu@intel.com>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Joseph Myers <joseph@codesourcery.com>
  Josh Kunz <jkz@google.com>
  Juan Quintela <quintela@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Keqian Zhu <zhukeqian1@huawei.com>
  Kevin Wolf <kwolf@redhat.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Leif Lindholm <leif@nuviainc.com>
  Leonid Bloch <lb.workbox@gmail.com>
  Leonid Bloch <lbloch@janustech.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Qiang <liq3ea@gmail.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  lichun <lichun@ruijie.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  Like Xu <like.xu@linux.intel.com>
  Lingfeng Yang <lfy@google.com>
  Lingshan zhu <lingshan.zhu@intel.com>
  Liran Alon <liran.alon@oracle.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Luc Michel <luc.michel@greensocs.com>
  Lukas Straub <lukasstraub2@web.de>
  Luwei Kang <luwei.kang@intel.com>
  Maciej S. Szmigiero <maciej.szmigiero@oracle.com>
  Magnus Damm <magnus.damm@gmail.com>
  Mao Zhongyi <maozhongyi@cmss.chinamobile.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marcelo Tosatti <mtosatti@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Mario Smarduch <msmarduch@digitalocean.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Masahiro Yamada <masahiroy@kernel.org>
  Matus Kysel <mkysel@tachyum.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Maxime Coquelin <maxime.coquelin@redhat.com>
  Menno Lageman <menno.lageman@oracle.com>
  Michael Rolnik <mrolnik@gmail.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michele Denber <denber@mindspring.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Olaf Hering <olaf@aepfle.de>
  Pan Nengyuan <pannengyuan@huawei.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Radoslaw Biernacki <rad@semihalf.com>
  Raphael Norwitz <raphael.norwitz@nutanix.com>
  Richard Henderson <richard.henderson@linaro.org>
  Richard W.M. Jones <rjones@redhat.com>
  Riku Voipio <riku.voipio@linaro.org>
  Robert Foley <robert.foley@linaro.org>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Roman Kagan <rkagan@virtuozzo.com>
  Roman Kagan <rvkagan@yandex-team.ru>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Sarah Harris <S.E.Harris@kent.ac.uk>
  Sebastian Rasmussen <sebras@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Tao Xu <tao3.xu@intel.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tiwei Bie <tiwei.bie@intel.com>
  Tong Ho <tong.ho@xilinx.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  WangBowen <bowen.wang@intel.com>
  Wei Huang <wei.huang2@amd.com>
  Wei Wang <wei.w.wang@intel.com>
  Wentong Wu <wentong.wu@intel.com>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xie Yongji <xieyongji@bytedance.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Ying Fang <fangying1@huawei.com>
  Yoshinori Sato <ysato@users.sourceforge.jp>
  Yuri Benditovich <yuri.benditovich@daynix.com>
  Zhang Chen <chen.zhang@intel.com>
  Zheng Chuan <zhengchuan@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       starved 
 test-amd64-amd64-qemuu-freebsd12-amd64                       starved 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 29745 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Jul 20 08:17:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jul 2020 08:17:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxQzS-0005jY-29; Mon, 20 Jul 2020 08:17:38 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=WIjz=A7=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jxQzR-0005jT-AX
 for xen-devel@lists.xenproject.org; Mon, 20 Jul 2020 08:17:37 +0000
X-Inumbo-ID: 77fde9b0-ca61-11ea-98f0-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 77fde9b0-ca61-11ea-98f0-12813bfff9fa;
 Mon, 20 Jul 2020 08:17:36 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id EB186ADC1;
 Mon, 20 Jul 2020 08:17:41 +0000 (UTC)
Subject: Re: [PATCH 4/8] Arm: prune #include-s needed by domain.h
To: Julien Grall <julien@xen.org>
References: <3375cacd-d3b7-9f06-44a7-4b684b6a77d6@suse.com>
 <150525bb-1c48-c331-3212-eff18bc4c13d@suse.com>
 <d836dc7f-017b-5048-02de-d1cb291fbc3b@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <931149db-2daf-6d72-0330-c938b5084eb6@suse.com>
Date: Mon, 20 Jul 2020 10:17:34 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <d836dc7f-017b-5048-02de-d1cb291fbc3b@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 17.07.2020 16:44, Julien Grall wrote:
> On 15/07/2020 11:39, Jan Beulich wrote:
>> --- a/xen/include/asm-arm/domain.h
>> +++ b/xen/include/asm-arm/domain.h
>> @@ -2,7 +2,7 @@
>>   #define __ASM_DOMAIN_H__
>>   
>>   #include <xen/cache.h>
>> -#include <xen/sched.h>
>> +#include <xen/timer.h>
>>   #include <asm/page.h>
>>   #include <asm/p2m.h>
>>   #include <asm/vfp.h>
>> @@ -11,8 +11,6 @@
>>   #include <asm/vgic.h>
>>   #include <asm/vpl011.h>
>>   #include <public/hvm/params.h>
>> -#include <xen/serial.h>
> 
> While we don't need the rbtree.h, we technically need serial.h for using 
> vuart_info.

The only reference to it is

        const struct vuart_info     *info;

which requires neither a definition nor even a forward declaration
of struct vuart_info. Only source files instantiating such a struct
or de-referencing pointers to one actually need to see the full
definition. The only source file doing so (vuart.c) already includes
xen/serial.h. (In fact, since it is just a single source file doing
so, the struct definition could [and imo should] be moved there. The
type can be entirely opaque to the rest of the code base, as - over
time - we've been doing for other structs.)

> I would rather prefer if headers are not implicitly included whenever it 
> is possible.

I agree with this principle, where it applies.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jul 20 08:25:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jul 2020 08:25:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxR7G-0006cj-TB; Mon, 20 Jul 2020 08:25:42 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=TdB1=A7=citrix.com=christian.lindig@srs-us1.protection.inumbo.net>)
 id 1jxR7F-0006ce-H2
 for xen-devel@lists.xen.org; Mon, 20 Jul 2020 08:25:41 +0000
X-Inumbo-ID: 98637124-ca62-11ea-98f0-12813bfff9fa
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 98637124-ca62-11ea-98f0-12813bfff9fa;
 Mon, 20 Jul 2020 08:25:40 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595233541;
 h=from:to:cc:subject:date:message-id:references:
 in-reply-to:content-transfer-encoding:mime-version;
 bh=49QGHBHrZRjI696+RtSO5XA7ajoUpU7QjWNgVJCHooY=;
 b=HxyrxGPk4R7HlrBYu3byoyLpP+icu39O+U1JwEmye/yE2w8e8ddYWiR1
 sPVjaTbXE3X+r9JXPRHLc0wVw66GFNsLhaLL5Gk8jRod7j4jfIo5gFoaB
 IwJcwKfq0IcJ+50fVn68BglmmTdw3W7aUUuAWVaQbLAWdbFxTbh3Kjgjw w=;
Authentication-Results: esa1.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: 75N3jJDKQ8VkZh15pqalXUltCYUKJFpfnL5iGz9RbtVQTXvKU1gvTf8kH9NZFGoIZ8lqAJg7Ko
 Yl8e7cd7riFRjQPBk1JV4TFoKMUtgZXVyU0mlMF6qOTEmGzr4Mo1pNnbKRUikfNbYf5Ad+SuHk
 0iOYp51nbh45vWYKTofsH6jPYMoDYX5hxpqMt7BAxQoMdMaHBoZxs2Qs+meUC/d4FhhfwRuDf6
 iohbT20W+ivBQU/ND69IsqDUeZ3zrfgvRCSpwb4gTqaarFC4fvLTc8QEmKD6N/3M7YQJSpLiom
 QhQ=
X-SBRS: 2.7
X-MesageID: 23056332
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,374,1589256000"; d="scan'208";a="23056332"
From: Christian Lindig <christian.lindig@citrix.com>
To: Elliott Mitchell <ehem+xen@m5p.com>, "xen-devel@lists.xen.org"
 <xen-devel@lists.xen.org>
Subject: Re: [PATCH 1/2] Partially revert "Cross-compilation fixes."
Thread-Topic: [PATCH 1/2] Partially revert "Cross-compilation fixes."
Thread-Index: AQHWXLPw2WXAlvhnbky26RJ8a7SJFqkQJQKp
Date: Mon, 20 Jul 2020 08:25:36 +0000
Message-ID: <1595233536312.59307@citrix.com>
References: <20200718033121.GA88869@mattapan.m5p.com>
In-Reply-To: <20200718033121.GA88869@mattapan.m5p.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-ms-exchange-transport-fromentityheader: Hosted
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <Ian.Jackson@citrix.com>, "wl@xen.org" <wl@xen.org>,
 "dave@recoil.org" <dave@recoil.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>


________________________________________
From: Elliott Mitchell <ehem+xen@m5p.com>
Sent: 18 July 2020 04:31
To: xen-devel@lists.xen.org
Cc: Ian Jackson; wl@xen.org; Christian Lindig; dave@recoil.org
Subject: [PATCH 1/2] Partially revert "Cross-compilation fixes."

This partially reverts commit 16504669c5cbb8b195d20412aadc838da5c428f7.

Signed-off-by: Elliott Mitchell <ehem+xen@m5p.com>
---
Doesn't look like much of 16504669c5cbb8b195d20412aadc838da5c428f7
actually remains due to the passage of time.

Of the 3, both Python and pygrub appear to mostly be building just fine
when cross-compiling.  The OCAML portion is being troublesome; this is
going to cause bug reports elsewhere soon.  The OCAML portion though can
already be disabled by setting OCAML_TOOLS=n and shouldn't have this
extra form of disabling.
---
 tools/Makefile | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/tools/Makefile b/tools/Makefile
index 7b1f6c4d28..930a533724 100644
--- a/tools/Makefile
+++ b/tools/Makefile
@@ -40,12 +40,9 @@ SUBDIRS-$(CONFIG_X86) += debugger/gdbsx
 SUBDIRS-$(CONFIG_X86) += debugger/kdd
 SUBDIRS-$(CONFIG_TESTS) += tests

-# These don't cross-compile
-ifeq ($(XEN_COMPILE_ARCH),$(XEN_TARGET_ARCH))
 SUBDIRS-y += python
 SUBDIRS-y += pygrub
 SUBDIRS-$(OCAML_TOOLS) += ocaml
-endif

 ifeq ($(CONFIG_RUMP),y)
 SUBDIRS-y := libs libxc xenstore
--
2.20.1

--
Acked-by: Christian Lindig <christian.lindig@citrix.com>



From xen-devel-bounces@lists.xenproject.org Mon Jul 20 08:26:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jul 2020 08:26:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxR7f-0006ec-5m; Mon, 20 Jul 2020 08:26:07 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=TdB1=A7=citrix.com=christian.lindig@srs-us1.protection.inumbo.net>)
 id 1jxR7d-0006eR-Fb
 for xen-devel@lists.xen.org; Mon, 20 Jul 2020 08:26:05 +0000
X-Inumbo-ID: a60c8a4a-ca62-11ea-98f0-12813bfff9fa
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a60c8a4a-ca62-11ea-98f0-12813bfff9fa;
 Mon, 20 Jul 2020 08:26:03 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595233563;
 h=from:to:cc:subject:date:message-id:references:
 in-reply-to:content-transfer-encoding:mime-version;
 bh=guNi4ozQsd81IcNwzgCs24VYy4hVc9xWBLSEsepPGAw=;
 b=VCSNM1eLchfnWRzSpjaWeGW3K9YmHcz1qMeDhYJO24h7N1fOfEdTfYUv
 1w/3slnbhgNu6uSU/VXhp0IebDdsuvYdFb3FauLVumHKND6ajYlTt0pP9
 94fW2NwYyeYf6aQHXsi8Y+uQ+qYdjTqCIZ3qiz+nf2UbpNCTuIRd3OoUM A=;
Authentication-Results: esa5.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: Su7dpSSVRAR/UfzOeBg1WbNzJuw6nJzBsmnvhCxsuU+bizGeoACTzsP1bpNb89Ofx72pMP+hi1
 qdpg+WDo2DOuk0KaFWALa6yn6PMvCfx79q53+1Go7AOyLFn/uqCedGCIRYpS6gTlzpaUpngz13
 tOPpa6g3cj99fVH9L432HMrH80+3m6xJL3WSK8LZJ0KOHzTsZ6fQNZ+51nKixpc2KAKKwyb9nq
 6y1osa5XAGimHlsL/rh8doiAlY1zolagx4jfW9q8xoYx4IQuqk5c0DXmsjEFJQQOgn9Mcx31Ct
 IkI=
X-SBRS: 2.7
X-MesageID: 22925651
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,374,1589256000"; d="scan'208";a="22925651"
From: Christian Lindig <christian.lindig@citrix.com>
To: Elliott Mitchell <ehem+xen@m5p.com>, "xen-devel@lists.xen.org"
 <xen-devel@lists.xen.org>
Subject: Re: [PATCH 2/2] tools/ocaml: Default to useful build output
Thread-Topic: [PATCH 2/2] tools/ocaml: Default to useful build output
Thread-Index: AQHWXLQfZqk/9n2Ez0iUq9XU56BN2qkQJVSq
Date: Mon, 20 Jul 2020 08:25:59 +0000
Message-ID: <1595233559536.460@citrix.com>
References: <20200718033242.GB88869@mattapan.m5p.com>
In-Reply-To: <20200718033242.GB88869@mattapan.m5p.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-ms-exchange-transport-fromentityheader: Hosted
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <Ian.Jackson@citrix.com>, "wl@xen.org" <wl@xen.org>,
 "dave@recoil.org" <dave@recoil.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>


________________________________________
From: Elliott Mitchell <ehem+xen@m5p.com>
Sent: 18 July 2020 04:32
To: xen-devel@lists.xen.org
Cc: Ian Jackson; wl@xen.org; Christian Lindig; dave@recoil.org
Subject: [PATCH 2/2] tools/ocaml: Default to useful build output

While hiding details of build output looks pretty to some, defaulting to
doing so deviates from the rest of Xen.  Switch the OCAML tools to match
everything else.

Signed-off-by: Elliott Mitchell <ehem+xen@m5p.com>
---

Time for a bit of controversy.

Presently the OCAML tools build mismatches the rest of the Xen build.
My choice is to default to verbose output.  While some may like beauty
in their build output, function is far more important.

If someone wants to take on the task of making Xen's build output
consistently beautiful, I invite them to do so.  Then call the police
and tell them you're being robbed.
---
 tools/ocaml/Makefile.rules | 19 +++++++++++--------
 1 file changed, 11 insertions(+), 8 deletions(-)

diff --git a/tools/ocaml/Makefile.rules b/tools/ocaml/Makefile.rules
index a893c42b43..abfbc64ce0 100644
--- a/tools/ocaml/Makefile.rules
+++ b/tools/ocaml/Makefile.rules
@@ -1,17 +1,20 @@
 ifdef V
-  ifeq ("$(origin V)", "command line")
-    BUILD_VERBOSE = $(V)
-  endif
+       ifeq ("$(origin V)", "command line")
+               BUILD_VERBOSE = $(V)
+       endif
+else
+       V := 1
+       BUILD_VERBOSE := 1
 endif
 ifndef BUILD_VERBOSE
-  BUILD_VERBOSE = 0
+       BUILD_VERBOSE := 0
 endif
 ifeq ($(BUILD_VERBOSE),1)
-  E = @true
-  Q =
+       E := @true
+       Q :=
 else
-  E = @echo
-  Q = @
+       E := @echo
+       Q := @
 endif

 .NOTPARALLEL:
--
2.20.1

--
Acked-by: Christian Lindig <christian.lindig@citrix.com>



From xen-devel-bounces@lists.xenproject.org Mon Jul 20 08:39:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jul 2020 08:39:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxRKH-0007g7-Db; Mon, 20 Jul 2020 08:39:09 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=TdB1=A7=citrix.com=christian.lindig@srs-us1.protection.inumbo.net>)
 id 1jxRKF-0007g2-Ph
 for xen-devel@lists.xen.org; Mon, 20 Jul 2020 08:39:07 +0000
X-Inumbo-ID: 78daeed4-ca64-11ea-b7bb-bc764e2007e4
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 78daeed4-ca64-11ea-b7bb-bc764e2007e4;
 Mon, 20 Jul 2020 08:39:06 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595234348;
 h=from:to:cc:subject:date:message-id:references:
 in-reply-to:content-transfer-encoding:mime-version;
 bh=ZJ2relG/T/voq29vQ/pBa6hwLPIx7xF5qelCiLq7CkA=;
 b=VYrjWjRseTwKSHutgOHKltHPs/qJ4kGWvtY3WQzIRiO/1SNMqiBEzq4a
 RyspP8+iizRhynfVJgHAHLP8W093V/7j6XD5wwFn1gp90XGWUzGx+1ekS
 8L0aRQzr504S7lya2om8uG2sP5Yfkjf336aBuSFDyleGryYAZjaOSLOfK E=;
Authentication-Results: esa3.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: YCz2V4wlFImdGbXhZA4JfiqRPt2c6cTIQAlD3kvWFtF9ReQGom9KCXD8j41if8SK4OKG8v9s13
 SbD1QOp2y7SV2K8D0OHzS7QfdwtZAaXYnYtOIwODV1O+bDZDyWhTDR6c7zohjwCeYplLmAeXvt
 lL+SC8hx9N+TK5N2J2A4fAnnCSpAO8q/zo8CPwRBplv413oFX357zJl+xvOWHcpxk9LrzbAHAC
 9xl1/wSEDQ1mGMpiNgd3vDbw7oe0wiRuzt1OI7Raw/pJZYSr5pUMfpYaJAuJ0Ud6A1oN8iYi9t
 yFI=
X-SBRS: 2.7
X-MesageID: 22733367
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,374,1589256000"; d="scan'208";a="22733367"
From: Christian Lindig <christian.lindig@citrix.com>
To: Elliott Mitchell <ehem+xen@m5p.com>, "xen-devel@lists.xen.org"
 <xen-devel@lists.xen.org>
Subject: Re: [PATCH 2/2] tools/ocaml: Default to useful build output
Thread-Topic: [PATCH 2/2] tools/ocaml: Default to useful build output
Thread-Index: AQHWXLQfZqk/9n2Ez0iUq9XU56BN2qkQJXHN
Date: Mon, 20 Jul 2020 08:38:40 +0000
Message-ID: <1595234320493.39632@citrix.com>
References: <20200718033242.GB88869@mattapan.m5p.com>
In-Reply-To: <20200718033242.GB88869@mattapan.m5p.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-ms-exchange-transport-fromentityheader: Hosted
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <Ian.Jackson@citrix.com>, Edwin Torok <edvin.torok@citrix.com>,
 "wl@xen.org" <wl@xen.org>, "dave@recoil.org" <dave@recoil.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>



> Time for a bit of controversy.

OCaml outside Xen has moved to a different model of building based on
dune, which is fast, declarative and reliable. The OCaml xenstore is
stagnating because nobody with OCaml experience wants to touch it
anymore. It would be beneficial for the health of the OCaml xenstore to
split it out such that it could be worked on independently. You might
argue that Make is still appropriate for building OCaml projects, but
the OCaml community has moved through several build systems, starting
from Make, and learned the hard way that this is not an easy problem.
After years of more-or-less successful build systems, the consensus is
that dune is the right one; in combination with the Opam package
manager it has allowed the ecosystem to flourish. Alternatively, it
would be possible to move the OCaml xenstore to dune within the Xen
tree, but that would create a dependency on it.

-- C


From xen-devel-bounces@lists.xenproject.org Mon Jul 20 08:45:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jul 2020 08:45:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxRQB-0008VJ-3g; Mon, 20 Jul 2020 08:45:15 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UosC=A7=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jxRQA-0008VD-B2
 for xen-devel@lists.xenproject.org; Mon, 20 Jul 2020 08:45:14 +0000
X-Inumbo-ID: 532b2388-ca65-11ea-8496-bc764e2007e4
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 532b2388-ca65-11ea-8496-bc764e2007e4;
 Mon, 20 Jul 2020 08:45:12 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595234712;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:content-transfer-encoding:in-reply-to;
 bh=PbeuKBiVc1w7ssIV3A/JAuBmHVDgu/hAvFTol+2DZ/k=;
 b=ewNm4LQLtUE+xEVKWn3Hn4/tNr02k2DFTntOmq/JqpIgGB5dn+rZGOSk
 fIUXB+1LORtWu/DRw8alclNIWzBLlEovlBTrl4xZjqQJ/Az718D36UgCu
 C1gENvlUdQAqtnaBBeoB7fPbrz8myfL3lj9syGwUmDPHt/Xxo5kfHn3VZ g=;
Authentication-Results: esa6.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: mT/1Pu38k6EbUUh1OWyaUzNO0NSdqkEjarumAdty/HboMThPar3UlfVr6nGqT8BimzAU0/E80n
 RkJRoinWm3+toV2QAecv5r54uBfjm42/1udGIPI2zDdOqN55vBA0AGPxpQSSKYGLGRC+h3/QAE
 u9KNzerX1jGtWX73FiaoY6Kdeeebher7QPA45oP7SpV4lRYbZaMOz81gmphXTIuFKyd6ML7tMd
 jh5WR4H8QFO46Mp6OjDjEEEgEGXOudRZTAf+55qZu8AoZQUFR/5adNxAfg2X17cPaxuNEXwagM
 Us0=
X-SBRS: 2.7
X-MesageID: 23062239
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,374,1589256000"; d="scan'208";a="23062239"
Date: Mon, 20 Jul 2020 10:45:05 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
Subject: Re: RFC: PCI devices passthrough on Arm design proposal
Message-ID: <20200720084505.GD7191@Air-de-Roger>
References: <3F6E40FB-79C5-4AE8-81CA-E16CA37BB298@arm.com>
 <BD475825-10F6-4538-8294-931E370A602C@arm.com>
 <E9CBAA57-5EF3-47F9-8A40-F5D7816DB2A4@arm.com>
 <20200717111644.GS7191@Air-de-Roger>
 <3B8A1B9D-A101-4937-AC42-4F62BE7E677C@arm.com>
 <20200717143120.GT7191@Air-de-Roger>
 <8AF78FF1-C389-44D8-896B-B95C1A0560E2@arm.com>
 <20200717155525.GY7191@Air-de-Roger>
 <C150EBE5-5687-4C7D-9EDB-5E4B52782A45@arm.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <C150EBE5-5687-4C7D-9EDB-5E4B52782A45@arm.com>
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 nd <nd@arm.com>, Rahul Singh <Rahul.Singh@arm.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien.grall.oss@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Sat, Jul 18, 2020 at 09:49:43AM +0000, Bertrand Marquis wrote:
> 
> 
> > On 17 Jul 2020, at 17:55, Roger Pau Monné <roger.pau@citrix.com> wrote:
> > 
> > On Fri, Jul 17, 2020 at 03:21:57PM +0000, Bertrand Marquis wrote:
> >>> On 17 Jul 2020, at 16:31, Roger Pau Monné <roger.pau@citrix.com> wrote:
> >>> On Fri, Jul 17, 2020 at 01:22:19PM +0000, Bertrand Marquis wrote:
> >>>>> On 17 Jul 2020, at 13:16, Roger Pau Monné <roger.pau@citrix.com> wrote:
> >>>>>> # Emulated PCI device tree node in libxl:
> >>>>>> 
> >>>>>> Libxl is creating a virtual PCI device tree node in the device tree
> >>>>>> to enable the guest OS to discover the virtual PCI during guest
> >>>>>> boot. We introduced the new config option [vpci="pci_ecam"] for
> >>>>>> guests. When this config option is enabled in a guest configuration,
> >>>>>> a PCI device tree node will be created in the guest device tree.
> >>>>>> 
> >>>>>> A new area has been reserved in the arm guest physical map at which
> >>>>>> the VPCI bus is declared in the device tree (reg and ranges
> >>>>>> parameters of the node). A trap handler for PCI ECAM accesses from
> >>>>>> the guest has been registered at the defined address and redirects
> >>>>>> requests to the VPCI driver in Xen.
> >>>>> 
> >>>>> Can't you deduce the requirement of such a DT node based on the presence
> >>>>> of a 'pci=' option in the same config file?
> >>>>> 
> >>>>> Also I wouldn't discard that in the future you might want to use
> >>>>> different emulators for different devices, so it might be helpful to
> >>>>> introduce something like:
> >>>>> 
> >>>>> pci = [ '08:00.0,backend=vpci', '09:00.0,backend=xenpt', '0a:00.0,backend=qemu', ... ]
> >>>>> 
> >>>>> For the time being Arm will require backend=vpci for all the passed
> >>>>> through devices, but I wouldn't rule out this changing in the future.
> >>>> 
> >>>> We need it for the case where no device is declared in the config file and the user
> >>>> wants to add devices using xl later. In this case we must have the DT node for it
> >>>> to work. 
> >>> 
> >>> There's a passthrough xl.cfg option for that already, so that if you
> >>> don't want to add any PCI passthrough devices at creation time but
> >>> rather hotplug them you can set:
> >>> 
> >>> passthrough=enabled
> >>> 
> >>> And it should set up the domain to be prepared to support hot
> >>> passthrough, including the IOMMU [0].
> >> 
> >> Isn’t this option covering more than PCI passthrough?
> >> 
> >> Lots of Arm platforms do not have a PCI bus at all, so for those
> >> creating a VPCI bus would be pointless. But you might need to
> >> activate this to pass devices which are not on the PCI bus.
> > 
> > Well, you can check whether the host has PCI support and decide
> > whether to attach a virtual PCI bus to the guest or not?
> > 
> > Setting passthrough=enabled should prepare the guest to handle
> > passthrough, in whatever form is supported by the host IMO.
> 
> True, we could just say that we create a PCI bus if the host has one and
> passthrough is activated.
> But from a virtual device point of view, we might even need one on guests
> without PCI support in the hardware :-)

Sure, but at that point you might want to consider unconditionally
adding an emulated PCI bus to guests anyway.

You will always have time to add new options to xl, but I would start
by trying to make use of the existing ones.

Are you planning to add the logic in Xen to enable hot-plug of devices
right away?

If the implementation hasn't been considered yet I wouldn't mind
leaving all this for later and just focusing on non-hotplug
passthrough using pci = [ ... ] for the time being.
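For clarity, the two existing knobs being discussed look like this in an xl guest configuration (the `backend=` sub-option is only the hypothetical extension suggested above, not existing syntax):

```
# Devices assigned at creation time; backend= is the suggested future syntax.
pci = [ '08:00.0', '09:00.0,backend=vpci' ]

# Or: no devices at boot, but keep the domain prepared for hot passthrough.
passthrough = "enabled"
```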

> > 
> >>>>>> Limitation:
> >>>>>> * Need to avoid the “iomem” and “irq” guest config
> >>>>>> options and map the IOMEM region and IRQ at the same time when
> >>>>>> a device is assigned to the guest using the “pci” guest config option
> >>>>>> when xl creates the domain.
> >>>>>> * Emulated BAR values on the VPCI bus should reflect the IOMEM mapped
> >>>>>> address.
> >>>>> 
> >>>>> It was my understanding that you would identity map the BAR into the
> >>>>> domU stage-2 translation, and that changes by the guest won't be
> >>>>> allowed.
> >>>> 
> >>>> In fact this is not possible, and we have to remap at a different address,
> >>>> because the guest physical memory map is fixed by Xen on Arm and we must
> >>>> follow the same layout. Otherwise this would only work if the BARs point
> >>>> to an unused address, and on Juno, for example, they conflict with the
> >>>> guest RAM address.
> >>> 
> >>> This was not clear from my reading of the document, could you please
> >>> clarify on the next version that the guest physical memory map is
> >>> always the same, and that BARs from PCI devices cannot be identity
> >>> mapped to the stage-2 translation and instead are relocated somewhere
> >>> else?
> >> 
> >> We will.
> >> 
> >>> 
> >>> I'm then confused about what you do with bridge windows, do you also
> >>> trap and adjust them to report a different IOMEM region?
> >> 
> >> Yes, this is what we will have to do so that the regions reflect the VPCI mappings
> >> and not the hardware ones.
> >> 
> >>> 
> >>> Above you mentioned that read-only access was given to bridge
> >>> registers, but I guess some are also emulated in order to report
> >>> matching IOMEM regions?
> >> 
> >> Yes, that’s correct. We will clarify this in the next version.
> > 
> > If you have to go this route for domUs, it might be easier to just
> > fake a PCI host bridge and place all the devices there even with
> > different SBDF addresses. Having to replicate all the bridges on the
> > physical PCI bus and fixing up its MMIO windows seems much more
> > complicated than just faking/emulating a single bridge?
> 
> That’s definitely something we have to dig into more. The whole problem
> of PCI enumeration and BAR value assignment in Xen might be pushed to
> either Dom0 or the firmware, but we might in fact find ourselves with exactly
> the same problem on the VPCI bus.

Not really, in order for Xen to do passthrough to a guest it must know
the SBDF of a device, the resources it's using and the memory map of
the guest, or else passthrough can't be done.

At that point Xen has the whole picture and can decide where the
resources of the device should appear on the stage-2 translation, and
hence the IOMEM windows required on the bridge(s).

What I'm trying to say is that I'm not convinced that exposing all the
host PCI bridges with adjusted IOMEM windows is easier than just
completely faking (and emulating) a PCI bridge inside of Xen.
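Either way, the emulated bridge's ECAM trap handler has to decode guest config-space offsets back into bus/device/function/register form. A minimal sketch of the standard ECAM offset layout (plain C for illustration, not the actual Xen vpci code; assumes segment 0):

```c
#include <stdint.h>

/* Decode a PCIe ECAM offset. Standard layout: bus[27:20], device[19:15],
 * function[14:12], register[11:0] -- one 4 KiB config space per function. */
typedef struct {
    uint8_t  bus;   /* 0-255 */
    uint8_t  dev;   /* 0-31 */
    uint8_t  fn;    /* 0-7 */
    uint16_t reg;   /* 0-4095 */
} ecam_addr_t;

static ecam_addr_t ecam_decode(uint32_t offset)
{
    ecam_addr_t a;

    a.bus = (offset >> 20) & 0xff;
    a.dev = (offset >> 15) & 0x1f;
    a.fn  = (offset >> 12) & 0x7;
    a.reg = offset & 0xfff;
    return a;
}
```

The trap handler would apply this to the fault offset within the emulated ECAM window and route the access to the corresponding vpci handler.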

Roger.


From xen-devel-bounces@lists.xenproject.org Mon Jul 20 08:47:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jul 2020 08:47:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxRSZ-0000BW-IB; Mon, 20 Jul 2020 08:47:43 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UosC=A7=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jxRSY-0000BQ-Il
 for xen-devel@lists.xenproject.org; Mon, 20 Jul 2020 08:47:42 +0000
X-Inumbo-ID: abd99078-ca65-11ea-b7bb-bc764e2007e4
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id abd99078-ca65-11ea-b7bb-bc764e2007e4;
 Mon, 20 Jul 2020 08:47:41 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595234861;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:content-transfer-encoding:in-reply-to;
 bh=mreCaNd/mlst++G9ORStuno43ieESymaTilNMLMFBSs=;
 b=eYY3wf/0lgXPCSa26G+ionIYKI5JQzWrmWwGCBOGk9qeVQ7Mlvt8kNoJ
 jd4O39LV8XLGvRcDgPcUCi9HYQxiJ4ga1D0rHIbM8i3zyg9OwpKbNd2iC
 pAsC0o5TLJj8rwLrkjnXMfCmUOR/0lQ64m9WJdJy9ptVQ8R5I76ZYjATb Y=;
Authentication-Results: esa4.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: +V+DLRZwkdZX79jGfBPxCUZdjWECpdtsTqMOnq2pFShQUZreEcmERWipS5ef6teRiAIj69d968
 stYzf50x+Br2a9nhK4QYNqLfbQHQroWabOC+WqIqHrJB/EU52+AizGzCsfRi8tLpOTBP3210xq
 etuOFWeyYYRvCF9anUV1tnNsdw0hFOe6UvaBmLD1LxhVAklMX4gOJwXBpRYwY/bbQHGY8fEnck
 jOpieMw33hoUnytawMAjkjb06Pxn4YaGtzIpIctJunFLCxwTV240+8sgY6N7iyIAUgMge3VtMY
 3VY=
X-SBRS: 2.7
X-MesageID: 23583603
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,374,1589256000"; d="scan'208";a="23583603"
Date: Mon, 20 Jul 2020 10:47:34 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Julien Grall <julien@xen.org>
Subject: Re: RFC: PCI devices passthrough on Arm design proposal
Message-ID: <20200720084734.GE7191@Air-de-Roger>
References: <a5007a6c-bdfe-04d4-8107-53cb222b95e8@suse.com>
 <DA19A9EC-A828-4EBC-BCAA-D1D9E4F222BB@arm.com>
 <20200717144139.GU7191@Air-de-Roger>
 <90AE8DAB-2223-46DC-A263-D78365E5435E@arm.com>
 <20200717150507.GW7191@Air-de-Roger>
 <FBE040A9-D088-43D6-8929-FFEDE9DDDE34@arm.com>
 <20200717153043.GX7191@Air-de-Roger>
 <C5B2BDD5-E504-4871-8542-5BA8C051F699@arm.com>
 <20200717160834.GA7191@Air-de-Roger>
 <0c76b6a0-2242-3bbd-9740-75c5580e93e8@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <0c76b6a0-2242-3bbd-9740-75c5580e93e8@xen.org>
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Rahul Singh <Rahul.Singh@arm.com>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 nd <nd@arm.com>, Julien Grall <julien.grall.oss@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Fri, Jul 17, 2020 at 05:18:46PM +0100, Julien Grall wrote:
> 
> 
> On 17/07/2020 17:08, Roger Pau Monné wrote:
> > On Fri, Jul 17, 2020 at 03:51:47PM +0000, Bertrand Marquis wrote:
> > > 
> > > 
> > > > On 17 Jul 2020, at 17:30, Roger Pau Monné <roger.pau@citrix.com> wrote:
> > > > 
> > > > On Fri, Jul 17, 2020 at 03:23:57PM +0000, Bertrand Marquis wrote:
> > > > > 
> > > > > 
> > > > > > On 17 Jul 2020, at 17:05, Roger Pau Monné <roger.pau@citrix.com> wrote:
> > > > > > 
> > > > > > On Fri, Jul 17, 2020 at 02:49:20PM +0000, Bertrand Marquis wrote:
> > > > > > > 
> > > > > > > 
> > > > > > > > On 17 Jul 2020, at 16:41, Roger Pau Monné <roger.pau@citrix.com> wrote:
> > > > > > > > 
> > > > > > > > On Fri, Jul 17, 2020 at 02:34:55PM +0000, Bertrand Marquis wrote:
> > > > > > > > > 
> > > > > > > > > 
> > > > > > > > > > On 17 Jul 2020, at 16:06, Jan Beulich <jbeulich@suse.com> wrote:
> > > > > > > > > > 
> > > > > > > > > > On 17.07.2020 15:59, Bertrand Marquis wrote:
> > > > > > > > > > > 
> > > > > > > > > > > 
> > > > > > > > > > > > On 17 Jul 2020, at 15:19, Jan Beulich <jbeulich@suse.com> wrote:
> > > > > > > > > > > > 
> > > > > > > > > > > > On 17.07.2020 15:14, Bertrand Marquis wrote:
> > > > > > > > > > > > > > On 17 Jul 2020, at 10:10, Jan Beulich <jbeulich@suse.com> wrote:
> > > > > > > > > > > > > > On 16.07.2020 19:10, Rahul Singh wrote:
> > > > > > > > > > > > > > > # Emulated PCI device tree node in libxl:
> > > > > > > > > > > > > > > 
> > > > > > > > > > > > > > > Libxl is creating a virtual PCI device tree node in the device tree to enable the guest OS to discover the virtual PCI bus during guest boot. We introduced the new config option [vpci="pci_ecam"] for guests. When this config option is enabled in a guest configuration, a PCI device tree node will be created in the guest device tree.
> > > > > > > > > > > > > > 
> > > > > > > > > > > > > > I support Stefano's suggestion for this to be an optional thing, i.e.
> > > > > > > > > > > > > > there to be no need for it when there are PCI devices assigned to the
> > > > > > > > > > > > > > guest anyway. I also wonder about the pci_ prefix here - isn't
> > > > > > > > > > > > > > vpci="ecam" as unambiguous?
> > > > > > > > > > > > > 
> > > > > > > > > > > > > This could be a problem as we need to know upfront that this is required for a guest so that PCI devices can be assigned later using xl.
> > > > > > > > > > > > 
> > > > > > > > > > > > I'm afraid I don't understand: When there are no PCI devices that get
> > > > > > > > > > > > handed to a guest when it gets created, but it is supposed to be able
> > > > > > > > > > > > to have some assigned while already running, then we agree the option
> > > > > > > > > > > > is needed (afaict). When PCI devices get handed to the guest while it
> > > > > > > > > > > > gets constructed, where's the problem to infer this option from the
> > > > > > > > > > > > presence of PCI devices in the guest configuration?
> > > > > > > > > > > 
> > > > > > > > > > > If the user wants to use xl pci-attach to attach a device to a guest at runtime, this guest must have a VPCI bus (even with no devices).
> > > > > > > > > > > If we do not have the vpci parameter in the configuration this use case will not work anymore.
> > > > > > > > > > 
> > > > > > > > > > That's what everyone seems to agree with. Yet why is the parameter needed
> > > > > > > > > > when there _are_ PCI devices anyway? That's the "optional" that Stefano
> > > > > > > > > > was suggesting, aiui.
> > > > > > > > > 
> > > > > > > > > I agree in this case the parameter could be optional and only required if no PCI device is assigned directly in the guest configuration.
> > > > > > > > 
> > > > > > > > Where will the ECAM region(s) appear on the guest physmap?
> > > > > > > > 
> > > > > > > > Are you going to re-use the same locations as on the physical
> > > > > > > > hardware, or will they appear somewhere else?
> > > > > > > 
> > > > > > > We will add some new definitions for the ECAM regions in the guest physmap declared in xen (include/asm-arm/config.h)
> > > > > > 
> > > > > > I think I'm confused, but that file doesn't contain anything related
> > > > > > to the guest physmap, that's the Xen virtual memory layout on Arm
> > > > > > AFAICT?
> > > > > > 
> > > > > > Does this somehow relate to the physical memory map exposed to guests
> > > > > > on Arm?
> > > > > 
> > > > > Yes it does.
> > > > > We will add new definitions there related to VPCI to reserve some areas for the VPCI ECAM and the IOMEM areas.
> > > > 
> > > > Yes, that's completely fine and is what's done on x86, but again I
> > > > feel like I'm lost here, this is the Xen virtual memory map, how does
> > > > this relate to the guest physical memory map?
> > > 
> > > Sorry my bad, we will add values in include/public/arch-arm.h, wrong header :-)
> > 
> > Oh right, now I see it :).
> > 
> > Do you really need to specify the ECAM and MMIO regions there?
> 
> You need to define those values somewhere :). The layout is only shared
> between the tools and the hypervisor. I think it would be better if they are
> defined in the same place as the rest of the layout, so it is easier to
> rework the layout.

OK, that's certainly a different approach from what x86 uses, where
the guest memory layout is not defined in the public headers.

On x86 my plan would be to add a hypercall that would set the
position of the ECAM region in the guest physmap, and that would be
called by the toolstack during domain construction.
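For the Arm approach discussed above, the public-header entries would presumably look something like the sketch below. The values are purely illustrative (hypothetical, not the proposal's actual choices), picked only to show the sizing arithmetic: ECAM provides one 4 KiB config space per function.

```c
#include <stdint.h>

/* Illustrative only: hypothetical guest-physmap constants in the style of
 * Xen's include/public/arch-arm.h. Real values must be chosen to avoid
 * the existing guest RAM and MMIO regions. */
#define GUEST_VPCI_ECAM_BASE  UINT64_C(0x10000000)
#define GUEST_VPCI_ECAM_SIZE  UINT64_C(0x10000000)

/* ECAM needs 32 devices * 8 functions * 4096 bytes per bus, so a
 * 256 MiB window covers a full 256-bus segment. */
static inline uint64_t ecam_size_for_buses(uint64_t buses)
{
    return buses * 32 * 8 * 4096;
}
```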

Roger.


From xen-devel-bounces@lists.xenproject.org Mon Jul 20 08:47:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jul 2020 08:47:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxRSf-0000Cj-0g; Mon, 20 Jul 2020 08:47:49 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=WIjz=A7=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jxRSd-0000CU-MR
 for xen-devel@lists.xenproject.org; Mon, 20 Jul 2020 08:47:47 +0000
X-Inumbo-ID: ae851bda-ca65-11ea-98f2-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ae851bda-ca65-11ea-98f2-12813bfff9fa;
 Mon, 20 Jul 2020 08:47:46 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 72FE2ADC2;
 Mon, 20 Jul 2020 08:47:51 +0000 (UTC)
Subject: Re: Advice for implementing MSI-X support on NetBSD Xen 4.11?
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 =?UTF-8?B?SmFyb23DrXIgRG9sZcSNZWs=?= <jaromir.dolecek@gmail.com>
References: <CAMnsW5542gmBLpKBsW5pnm=2VXmaDVHzg=OXXvBdu1BsYLdDvQ@mail.gmail.com>
 <20200720070010.GC7191@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <0a389f69-2c6b-e564-c6b5-c8f09ed66de0@suse.com>
Date: Mon, 20 Jul 2020 10:47:44 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20200720070010.GC7191@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 20.07.2020 09:00, Roger Pau Monné wrote:
> On Sun, Jul 19, 2020 at 05:54:28PM +0200, Jaromír Doleček wrote:
>> I've implemented support for using MSI under NetBSD Dom0 with 4.11.
>> This works well.
>>
>> I have some trouble now getting MSI-X to work under Xen.
>> PHYSDEVOP_map_pirq hypercall succeeds just as well as for MSI, but
>> interrupts don't seem to get delivered.
> 
> How are you filling physdev_map_pirq for MSI-X?
> 
> You need to set entry_nr and table_base.
> 
>> MSI-X interrupts work with NetBSD for the same devices when booted
>> natively, without Xen.
>>
>> Can you give me some advice on where to start looking to get this
>> working? Is there perhaps something special to be done within the PCI
>> subsystem to allow Xen to take over?
> 
> Are you enabling the capability and unmasking the interrupt in the
> MSI-X table?
> 
> IIRC the OS also needs to unmask the entry in the MSI-X table in MMIO
> space, as done natively.

Is this effort for PV or PVH? If the former, I don't think Dom0 is
supposed to write directly to any of these structures. This is all
intended to be hypercall based, despite us intercepting and trying
to emulate direct accesses.
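For reference, the map_pirq advice quoted above amounts to filling the op roughly as follows. The struct layout mirrors Xen's xen/include/public/physdev.h (re-declared here so the sketch stands alone); issuing the actual HYPERVISOR_physdev_op() call is platform code and omitted:

```c
#include <stdint.h>
#include <string.h>

typedef uint16_t domid_t;

#define MAP_PIRQ_TYPE_MSI 0x0   /* value as in xen/include/public/physdev.h */

/* Field layout mirrors struct physdev_map_pirq from Xen's public
 * physdev.h, re-declared so this sketch compiles standalone. */
struct physdev_map_pirq {
    domid_t domid;
    int type;            /* MAP_PIRQ_TYPE_* */
    int index;           /* -1: let Xen pick */
    int pirq;            /* in: requested pirq; out: allocated pirq */
    int bus;
    int devfn;
    int entry_nr;        /* MSI-X: entry number within the table */
    uint64_t table_base; /* MSI-X: physical address of the MSI-X table */
};

/* Fill the op for MSI-X entry `entry` of device bus:devfn, whose MSI-X
 * table lives at `table_base` (the address derived from the BAR named by
 * the MSI-X capability). */
static void map_pirq_msix(struct physdev_map_pirq *op, domid_t domid,
                          int bus, int devfn, int entry,
                          uint64_t table_base)
{
    memset(op, 0, sizeof(*op));
    op->domid = domid;
    op->type = MAP_PIRQ_TYPE_MSI;
    op->index = -1;              /* no specific vector requested */
    op->pirq = -1;               /* let Xen allocate the pirq */
    op->bus = bus;
    op->devfn = devfn;
    op->entry_nr = entry;
    op->table_base = table_base;
}
```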

Jaromír - are you making use of PHYSDEVOP_prepare_msix?

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jul 20 08:55:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jul 2020 08:55:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxRa2-0001Dj-7v; Mon, 20 Jul 2020 08:55:26 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=WIjz=A7=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jxRa0-0001Db-Qs
 for xen-devel@lists.xenproject.org; Mon, 20 Jul 2020 08:55:24 +0000
X-Inumbo-ID: bf0251c0-ca66-11ea-9f71-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id bf0251c0-ca66-11ea-9f71-12813bfff9fa;
 Mon, 20 Jul 2020 08:55:23 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 95AE0B145;
 Mon, 20 Jul 2020 08:55:28 +0000 (UTC)
Subject: Re: [PATCH 5/5] x86/shadow: l3table[] and gl3e[] are HVM only
To: Tim Deegan <tim@xen.org>
References: <a4dc8db4-0388-a922-838e-42c6f4635639@suse.com>
 <a3b9b496-e860-e657-2afc-c0658871fa3f@suse.com>
 <20200718182037.GA48915@deinos.phlegethon.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <1baa0d50-86a4-b0ba-d43a-ad0c0446b54b@suse.com>
Date: Mon, 20 Jul 2020 10:55:21 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20200718182037.GA48915@deinos.phlegethon.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 18.07.2020 20:20, Tim Deegan wrote:
> At 12:00 +0200 on 15 Jul (1594814409), Jan Beulich wrote:
>> ... by the very fact that they're 3-level specific, while PV always gets
>> run in 4-level mode. This requires adding some seemingly redundant
>> #ifdef-s - some of them will be possible to drop again once 2- and
>> 3-level guest code doesn't get built anymore in !HVM configs, but I'm
>> afraid there's still quite a bit of disentangling work to be done to
>> make this possible.
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> Looks good.  It seems like the new code for '3-level non-HVM' in
> guest-walks ought to have some sort of assert-unreachable in it too
> - or is there a reason not to?

You mean this piece of code

+#elif !defined(CONFIG_HVM)
+    (void)root_gfn;
+    memset(gw, 0, sizeof(*gw));
+    return false;
+#else /* PAE */

If so - sure, ASSERT_UNREACHABLE() could be added there. It simply
didn't occur to me. I take it your ack for the entire series holds
here with this addition.
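Spelled out, the quoted fragment with that addition would read:

```c
#elif !defined(CONFIG_HVM)
    ASSERT_UNREACHABLE();
    (void)root_gfn;
    memset(gw, 0, sizeof(*gw));
    return false;
#else /* PAE */
```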

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jul 20 09:00:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jul 2020 09:00:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxRfB-00029r-3O; Mon, 20 Jul 2020 09:00:45 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ezcM=A7=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1jxRf9-00029l-7t
 for xen-devel@lists.xen.org; Mon, 20 Jul 2020 09:00:43 +0000
X-Inumbo-ID: 7d008372-ca67-11ea-847e-bc764e2007e4
Received: from mail-ej1-x631.google.com (unknown [2a00:1450:4864:20::631])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7d008372-ca67-11ea-847e-bc764e2007e4;
 Mon, 20 Jul 2020 09:00:42 +0000 (UTC)
Received: by mail-ej1-x631.google.com with SMTP id br7so17236837ejb.5
 for <xen-devel@lists.xen.org>; Mon, 20 Jul 2020 02:00:42 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
 :mime-version:content-transfer-encoding:thread-index
 :content-language;
 bh=hbOqN9Eu9KHRkwvTw7eiGMP6+MkBBOekO76C1WlkNjs=;
 b=X11Y3aq7H7u954nX2xKi81ziIpcT8yrbyM7ttyu+gXlFvRd4etDcEqMO7eLYhsPxLX
 ZF4ZgCNdeua7Tgr6Pu1tPF8vn00gZvTz9rJRqifo/ah7fD6W8S7A6a1vOfgbnTy8K43X
 xWQNkoZxASc5QabNtAPV1qk2qyhMSjuQJlhTwWm8oc9J5E4Ab19P8nTJBWRcJiocx4o7
 advOFkApsDvQJEtZlL7sb9PdWy8iuF3TFzB+X6qL0HVprdgGvqevmLjRXKOWnhX/IOvz
 gB4bl4lXp7C+xI0lp94eClgruW5KV1iZn7BGRQuZyOhM8OPYAHNqcYRAk31SCrOUabcy
 Vf/Q==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
 :subject:date:message-id:mime-version:content-transfer-encoding
 :thread-index:content-language;
 bh=hbOqN9Eu9KHRkwvTw7eiGMP6+MkBBOekO76C1WlkNjs=;
 b=Dt6jRM1xehaQDAvtabvsXteFE+OcDT626PfQDiNRN5vfZxPXL36PidobALLUe/h/Q5
 6YX97w3KZt2ubTDElFqwn0ydd7jN0G0ljv/ribAh345InH3pxGvOPgw8bz881HiDX+zx
 sEsgm9kVldtzUP+UYAVQBEAxh1BiEFIpGkIbZ561obiCFO4sFjUTf88FVIjigJMiERFY
 CXEFxEZknIj42teEJE7fEK2jVPJwXgyEbgxRf8Fkekfvo9E6wJVIYRDsioUjvmit7Q1M
 HzwVEEaGiIKNJq0a5NAd4nUJit3cZgoi8PQhx609R2IpkD4pZxiZKUHO3KzIzAeMiMpR
 MCow==
X-Gm-Message-State: AOAM531yLoHshHfvyJhsLqB8RusWqtDpeBvUfcKK2MzosZy4sEEdxhjo
 2jpYIVl6RLQzxIVD+ffaH0s=
X-Google-Smtp-Source: ABdhPJz6estHlZJ//fvF/dNmWSvvGwzecLIizhvwYek7CsHCq+opErEAEddkETaBkxEJEpsp2LFI1g==
X-Received: by 2002:a17:906:191a:: with SMTP id
 a26mr20579085eje.315.1595235641169; 
 Mon, 20 Jul 2020 02:00:41 -0700 (PDT)
Received: from CBGR90WXYV0 (54-240-197-226.amazon.com. [54.240.197.226])
 by smtp.gmail.com with ESMTPSA id m6sm14097990ejq.85.2020.07.20.02.00.39
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Mon, 20 Jul 2020 02:00:40 -0700 (PDT)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
To: "'Christian Lindig'" <christian.lindig@citrix.com>,
 "'Elliott Mitchell'" <ehem+xen@m5p.com>, <xen-devel@lists.xen.org>
References: <20200718033242.GB88869@mattapan.m5p.com>
 <1595234320493.39632@citrix.com>
In-Reply-To: <1595234320493.39632@citrix.com>
Subject: RE: [PATCH 2/2] tools/ocaml: Default to useful build output
Date: Mon, 20 Jul 2020 10:00:38 +0100
Message-ID: <000d01d65e74$3deda4d0$b9c8ee70$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="us-ascii"
Content-Transfer-Encoding: 7bit
X-Mailer: Microsoft Outlook 16.0
Thread-Index: AQKVcH3Jn67g37z/wcsWrRp91TLbtgHVzaUjp4NWtpA=
Content-Language: en-gb
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Reply-To: paul@xen.org
Cc: 'Ian Jackson' <Ian.Jackson@citrix.com>,
 'Edwin Torok' <edvin.torok@citrix.com>, wl@xen.org, dave@recoil.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> -----Original Message-----
> From: Xen-devel <xen-devel-bounces@lists.xenproject.org> On Behalf Of Christian Lindig
> Sent: 20 July 2020 09:39
> To: Elliott Mitchell <ehem+xen@m5p.com>; xen-devel@lists.xen.org
> Cc: Ian Jackson <Ian.Jackson@citrix.com>; Edwin Torok <edvin.torok@citrix.com>; wl@xen.org;
> dave@recoil.org
> Subject: Re: [PATCH 2/2] tools/ocaml: Default to useful build output
> 
> 
> 
> > Time for a bit of controversy.
> 
> OCaml outside Xen has moved to a different model of building based on dune which is fast, declarative
> and reliable. The OCaml xenstore is stagnating because nobody with OCaml experience wants to touch it
> anymore.

It is still the default. Would you suggest that we change this and make C xenstored the default for 4.15, deprecating oxenstored
with a view to subsequently purging it from the tree in the 4.16 dev cycle?

  Paul

> It would be beneficial for the health of the OCaml xenstore to split it out such that it
> could be worked on independently. You might argue that Make is still appropriate for building OCaml
> projects but the OCaml community has moved through several build systems, starting from Make, and
> learned the hard way that this is not an easy problem. After years of more-or-less successful build
> systems, the consensus is that dune is the right one, and in combination with the Opam
> package manager it has allowed the ecosystem to flourish. Alternatively, it would be possible to move OCaml xenstore
> to dune within the Xen tree but it would create a dependency on it.
> 
> -- C




From xen-devel-bounces@lists.xenproject.org Mon Jul 20 09:09:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jul 2020 09:09:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxRnu-0002SV-1Q; Mon, 20 Jul 2020 09:09:46 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Eely=A7=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jxRnt-0002SQ-Hn
 for xen-devel@lists.xenproject.org; Mon, 20 Jul 2020 09:09:45 +0000
X-Inumbo-ID: c0389eb2-ca68-11ea-9f71-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c0389eb2-ca68-11ea-9f71-12813bfff9fa;
 Mon, 20 Jul 2020 09:09:43 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=bcI8mRWFsA9Ny4ZI1pt3kOXvbWUpdq41ifnQuWxnWuU=; b=Njpvx/oizMNfaP04/G+K93HQa6
 eOwjNFqisi3Zqaus9GL3wVhW7+qcgkuu2YBD9fuqhQmDwwZO2N/KgNiyOjN/YqUOjw+BEJzOXeVn+
 n8uJ6bQfNhJJb1AEJqqYzSGYmFX5I3Wnq+Jv2VZfaXdHq1MT/O0wXnw9szyicZFa+cAY=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jxRnr-0001Ay-MF; Mon, 20 Jul 2020 09:09:43 +0000
Received: from 54-240-197-231.amazon.com ([54.240.197.231]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jxRnr-0005YL-GA; Mon, 20 Jul 2020 09:09:43 +0000
Subject: Re: [PATCH 4/8] Arm: prune #include-s needed by domain.h
To: Jan Beulich <jbeulich@suse.com>
References: <3375cacd-d3b7-9f06-44a7-4b684b6a77d6@suse.com>
 <150525bb-1c48-c331-3212-eff18bc4c13d@suse.com>
 <d836dc7f-017b-5048-02de-d1cb291fbc3b@xen.org>
 <931149db-2daf-6d72-0330-c938b5084eb6@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <2cc66fdb-1da2-16cd-717a-3248d136821c@xen.org>
Date: Mon, 20 Jul 2020 10:09:41 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:68.0)
 Gecko/20100101 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <931149db-2daf-6d72-0330-c938b5084eb6@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>



On 20/07/2020 09:17, Jan Beulich wrote:
> On 17.07.2020 16:44, Julien Grall wrote:
>> On 15/07/2020 11:39, Jan Beulich wrote:
>>> --- a/xen/include/asm-arm/domain.h
>>> +++ b/xen/include/asm-arm/domain.h
>>> @@ -2,7 +2,7 @@
>>>    #define __ASM_DOMAIN_H__
>>>    
>>>    #include <xen/cache.h>
>>> -#include <xen/sched.h>
>>> +#include <xen/timer.h>
>>>    #include <asm/page.h>
>>>    #include <asm/p2m.h>
>>>    #include <asm/vfp.h>
>>> @@ -11,8 +11,6 @@
>>>    #include <asm/vgic.h>
>>>    #include <asm/vpl011.h>
>>>    #include <public/hvm/params.h>
>>> -#include <xen/serial.h>
>>
>> While we don't need the rbtree.h, we technically need serial.h for using
>> vuart_info.
> 
> The only reference to it is
> 
>          const struct vuart_info     *info;
> 
> which doesn't require a definition nor even a forward declaration
> of struct vuart_info. It should just be source files instantiating
> a struct or de-referencing pointers to one that actually need to
> see the full declaration. 

Ah yes. I got confused because you introduced a forward declaration of 
struct vcpu. But this is because you need it to declare the function 
prototype.

> The only source file doing so (vuart.c)
> already includes xen/serial.h. (In fact, it being just a single
> source file doing so, the struct definition could [and imo should]
> be moved there. The type can be entirely opaque to the rest of the
> code base, as - over time - we've been doing for other structs.)

There are definitely more uses of vuart_info within the code base. All 
the UART drivers on Arm will fill in the structure (see 
drivers/char/pl011.c for instance).

So the definition is in the correct place.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Jul 20 09:17:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jul 2020 09:17:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxRvR-0003Iw-Rn; Mon, 20 Jul 2020 09:17:33 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UosC=A7=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jxRvQ-0003Ir-ER
 for xen-devel@lists.xenproject.org; Mon, 20 Jul 2020 09:17:32 +0000
X-Inumbo-ID: d5ce2732-ca69-11ea-9f71-12813bfff9fa
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d5ce2732-ca69-11ea-9f71-12813bfff9fa;
 Mon, 20 Jul 2020 09:17:30 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595236650;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:content-transfer-encoding:in-reply-to;
 bh=761O2jMMTizDRhBkzusMXoTQ7elMaVRw7s3TgsyAkFw=;
 b=R2BpURaALpFkA6OdzqgxcuHIoonpIKH/5IrBNa2kYXHKEDKIzyRw/jkZ
 9CpTbC+IWhRcFqRMRsQR/HOQZjAvY8ad2ZWtQyCTHywjCAUQRQO8KjVtB
 sQ2dwkeNDHIg008qxKqqcjbXf/4fZSh+6FrKVd4LtQ1AJ4rGy+2AaOgQp 8=;
Authentication-Results: esa6.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: jsEwx8eF/OmsbTwmK8MGM8UqP2uVjfJDPI9CDm/4dCbbbhEE2F5Amjfr3YFy9TjJbqFh9Wc/Eb
 BKULP9pAKLZyLOB7ukILJ0UlkCCdc2+UFLxwFqc56dXi0At3zGRrKbaBLHBirsliwvcMWsgaFb
 kEzud1fTOXrH3b40gckDOxyI71FItiMPl7OftAX5XJdl3ex2WospZk2CJAzl0SWItu+JYyFq4W
 ON6XtNlCXZrwiYRKiUuo7NFlBMUXT4kbPmSgngGpZ8mgnrZi1cVbHZ+Jb2k9Gkx8NudB7xpuYb
 qkw=
X-SBRS: 2.7
X-MesageID: 23064256
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,374,1589256000"; d="scan'208";a="23064256"
Date: Mon, 20 Jul 2020 11:17:22 +0200
From: Roger Pau Monné <roger.pau@citrix.com>
To: Oleksandr <olekstysh@gmail.com>
Subject: Re: Virtio in Xen on Arm (based on IOREQ concept)
Message-ID: <20200720091722.GF7191@Air-de-Roger>
References: <CAPD2p-nthLq5NaU32u8pVaa-ub=a9-LOPenupntTYdS-cu31jQ@mail.gmail.com>
 <20200717150039.GV7191@Air-de-Roger>
 <8f4e0c0d-b3d4-9dd3-ce20-639539321968@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <8f4e0c0d-b3d4-9dd3-ce20-639539321968@gmail.com>
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien
 Grall <julien@xen.org>, Oleksandr Andrushchenko <andr2000@gmail.com>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>,
 xen-devel <xen-devel@lists.xenproject.org>,
 Artem Mygaiev <joculator@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Fri, Jul 17, 2020 at 09:34:14PM +0300, Oleksandr wrote:
> On 17.07.20 18:00, Roger Pau Monné wrote:
> > On Fri, Jul 17, 2020 at 05:11:02PM +0300, Oleksandr Tyshchenko wrote:
> > > requires
> > > some implementation to forward guest MMIO access to a device model. And as
> > > it
> > > turned out, Xen on x86 contains most of the pieces to be able to use that
> > > transport (via existing IOREQ concept). Julien has already done a big amount
> > > of work in his PoC (xen/arm: Add support for Guest IO forwarding to a
> > > device emulator).
> > > Using that code as a base we managed to create a completely functional PoC
> > > with DomU
> > > running on a virtio block device instead of a traditional Xen PV driver
> > > without
> > > modifications to DomU Linux. Our work is mostly about rebasing Julien's
> > > code on the current
> > > codebase (Xen 4.14-rc4), various tweaks to be able to run the emulator
> > > (virtio-disk backend)
> > > in other than Dom0 domain (in our system we have thin Dom0 and keep all
> > > backends
> > > in driver domain),
> > How do you handle this use-case? Are you using grants in the VirtIO
> > ring, or rather allowing the driver domain to map all the guest memory
> > and then placing gfn on the ring like it's commonly done with VirtIO?
> 
> Second option. Xen grants are not used at all, nor are event channels and
> Xenbus. That allows us to have the guest
> 
> *unmodified*, which is one of the main goals. Yes, this may sound (or even
> sound) non-secure, but the backend which runs in the driver domain is
> allowed to map all guest memory.

Supporting unmodified guests is certainly a fine goal, but I don't
think it's incompatible with also trying to expand the spec in
parallel in order to support grants in a negotiated way (see below).

That way you could (long term) regain some of the lost security.

> > Do you have any plans to try to upstream a modification to the VirtIO
> > spec so that grants (ie: abstract references to memory addresses) can
> > be used on the VirtIO ring?
> 
> But the VirtIO spec hasn't been modified, nor has the VirtIO infrastructure
> in the guest. Nothing to upstream.

OK, so there's no intention to add grants (or a similar interface) to
the spec?

I understand that you want to support unmodified VirtIO frontends, but
I also think that long term frontends could negotiate with backends on
the usage of grants in the shared ring, like any other VirtIO feature
negotiated between the frontend and the backend.

This of course needs to be on the spec first before we can start
implementing it, and hence my question whether a modification to the
spec in order to add grants has been considered.

It's fine to say that you don't have any plans in this regard.

> 
> > 
> > > misc fixes for our use-cases and tool support for the
> > > configuration.
> > > Unfortunately, Julien doesn’t have much time to allocate to the work
> > > anymore,
> > > so we would like to step in and continue.
> > > 
> > > *A few word about the Xen code:*
> > > You can find the whole Xen series at [5]. The patches are in RFC state
> > > because
> > > some actions in the series should be reconsidered and implemented properly.
> > > Before submitting the final code for the review the first IOREQ patch
> > > (which is quite
> > > big) will be split into x86, Arm and common parts. Please note, x86 part
> > > wasn’t
> > > even build-tested so far and could be broken with that series. Also the
> > > series probably
> > > wants splitting into adding IOREQ on Arm (should be focused first) and
> > > tools support
> > > for the virtio-disk (which is going to be the first Virtio driver)
> > > configuration before going
> > > into the mailing list.
> > Sending first a patch series to enable IOREQs on Arm seems perfectly
> > fine, and it doesn't have to come with the VirtIO backend. In fact I
> > would recommend that you send that ASAP, so that you don't spend time
> > working on the backend that would likely need to be modified
> > according to the review received on the IOREQ series.
> 
> Completely agree with you, I will send it after splitting IOREQ patch and
> performing some cleanup.
> 
> However, it is going to take some time to do it properly, taking into
> account
> 
> that I personally won't be able to test on x86.

We have gitlab and the osstest CI loop (plus all the reviewers) so we
should be able to spot any regressions. Build testing on x86 would be
nice so that you don't need to resend to fix build issues.

> 
> > 
> > > What I would like to add here: the IOREQ feature on Arm could be used not
> > > only
> > > for implementing Virtio, but also for other use-cases which require some
> > > emulator entity
> > > outside Xen, such as a custom PCI emulator (non-ECAM compatible) for example.
> > > 
> > > *A few word about the backend(s):*
> > > One of the main problems with Virtio in Xen on Arm is the absence of
> > > “ready-to-use” and “out-of-Qemu” Virtio backends (at least I am not aware of any).
> > > We managed to create virtio-disk backend based on demu [3] and kvmtool [4]
> > > using
> > > that series. It is worth mentioning that although Xenbus/Xenstore is not
> > > supposed
> > > to be used with native Virtio, that interface was chosen to just pass
> > > configuration from toolstack
> > > to the backend and notify it about creating/destroying Guest domain (I
> > > think it is
> > I would prefer if a single instance was launched to handle each
> > backend, and that the configuration was passed on the command line.
> > Killing the user-space backend from the toolstack is fine I think,
> > there's no need to notify the backend using xenstore or any other
> > out-of-band methods.
> > 
> > xenstore has proven to be a bottleneck in terms of performance, and it
> > would be better if we can avoid using it when possible, specially here
> > that you have to do this from scratch anyway.
> 
> Let me elaborate a bit more on this.
> 
> In current backend implementation, the Xenstore is *not* used for
> communication between backend (VirtIO device) and frontend (VirtIO driver),
> frontend knows nothing about it.
> 
> Xenstore was chosen as an interface in order to be able to pass
> configuration from toolstack in Dom0 to backend which may reside in other
> than Dom0 domain (DomD in our case),

There's 'xl devd' which can be used on the driver domain to spawn
backends, maybe you could add the logic there so that 'xl devd' calls
the backend executable with the required command line parameters, so
that the backend itself doesn't need to interact with xenstore in any
way?

That way in the future we could use something else instead of
xenstore, like Argo for instance in order to pass the backend data
from the control domain to the driver domain.

> also, by watching the Xenstore entries, the backend always knows when the
> intended guest has been created/destroyed.

xl devd should also do the killing of backends anyway when a domain is
destroyed, or else malfunctioning user-space backends could keep
running after the domain they are serving is destroyed.

> I may be mistaken, but I don't think we can avoid using Xenstore (or another
> interface provided by the toolstack), for several reasons.
> 
> Besides the virtio-disk configuration (a disk to be assigned to the guest,
> R/O mode, etc), for each virtio-mmio device instance
> 
> a pair (MMIO range + IRQ) is allocated by the toolstack at guest
> construction time and inserted into a virtio-mmio device tree node
> 
> in the guest device tree. And for the backend to operate properly these
> variable parameters are also passed to the backend via Xenstore.

I think you could pass all these parameters as command line arguments
to the backend?

> The other reasons are:
> 
> 1. Automation. With the current backend implementation we don't need to
> pause the guest right after creating it, then go to the driver domain and
> spawn the backend and
> 
> after that go back to Dom0 and unpause the guest.

xl devd should be capable of handling this for you on the driver
domain.

> 2. Ability to detect when a guest with the frontend involved has gone away
> and properly release resources (guest destroy/reboot).
> 
> 3. Ability to (re)connect to a newly created guest with the frontend
> involved (guest create/reboot).
> 
> 4. What is more, with Xenstore support the backend is able to detect the
> dom_id it runs in and the guest dom_id; there is no need to pass them via
> the command line.
> 
> 
> I will be happy to explain in detail after publishing the backend code.

As I'm not the one doing the work I certainly won't stop you from
using xenstore on the backend. I would certainly prefer if the backend
gets all the information it needs from the command line so that the
configuration data is completely agnostic to the transport layer used
to convey it.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Mon Jul 20 09:25:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jul 2020 09:25:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxS2c-0004CR-NM; Mon, 20 Jul 2020 09:24:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Eely=A7=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jxS2b-0004CM-9f
 for xen-devel@lists.xenproject.org; Mon, 20 Jul 2020 09:24:57 +0000
X-Inumbo-ID: dfd99b66-ca6a-11ea-8480-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id dfd99b66-ca6a-11ea-8480-bc764e2007e4;
 Mon, 20 Jul 2020 09:24:56 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=xizyXMNDeLs4ZLnctySaz18NXv/aXzwDcBu6IFSNWNo=; b=LQpMvRz5l4IfQD5mNrzLdxLyL7
 tv8D4lxepWNYRUYdLLwdfq/4Yc9CtBG6bSPwN7GM2ysjCzzl/Hdi0dSljCkUrY9BXDWWbuW13AfUs
 2tIYs4tjyNhSD5TjYEigA9NhUr+oiKkwZLA5mp9uorc1SwGZOlEY12xk8516Z3pUheZk=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jxS2Y-0001Tr-FH; Mon, 20 Jul 2020 09:24:54 +0000
Received: from 54-240-197-239.amazon.com ([54.240.197.239]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jxS2Y-0006Si-72; Mon, 20 Jul 2020 09:24:54 +0000
Subject: Re: RFC: PCI devices passthrough on Arm design proposal
To: Roger Pau Monné <roger.pau@citrix.com>
References: <a5007a6c-bdfe-04d4-8107-53cb222b95e8@suse.com>
 <DA19A9EC-A828-4EBC-BCAA-D1D9E4F222BB@arm.com>
 <20200717144139.GU7191@Air-de-Roger>
 <90AE8DAB-2223-46DC-A263-D78365E5435E@arm.com>
 <20200717150507.GW7191@Air-de-Roger>
 <FBE040A9-D088-43D6-8929-FFEDE9DDDE34@arm.com>
 <20200717153043.GX7191@Air-de-Roger>
 <C5B2BDD5-E504-4871-8542-5BA8C051F699@arm.com>
 <20200717160834.GA7191@Air-de-Roger>
 <0c76b6a0-2242-3bbd-9740-75c5580e93e8@xen.org>
 <20200720084734.GE7191@Air-de-Roger>
From: Julien Grall <julien@xen.org>
Message-ID: <f28cd954-fecf-e223-e393-24d64a9161f7@xen.org>
Date: Mon, 20 Jul 2020 10:24:52 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:68.0)
 Gecko/20100101 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20200720084734.GE7191@Air-de-Roger>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Rahul Singh <Rahul.Singh@arm.com>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 nd <nd@arm.com>, Julien Grall <julien.grall.oss@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi Roger,

On 20/07/2020 09:47, Roger Pau Monné wrote:
> On Fri, Jul 17, 2020 at 05:18:46PM +0100, Julien Grall wrote:
>>> Do you really need to specify the ECAM and MMIO regions there?
>>
>> You need to define those values somewhere :). The layout is only shared
>> between the tools and the hypervisor. I think it would be better if they are
>> defined at the same place as the rest of the layout, so it is easier to
>> rework the layout.
> 
> OK, that's certainly a different approach from what x86 uses, where
> the guest memory layout is not defined in the public headers.

It is mostly a convenience as some addresses are used by both the 
hypervisor and tools. A guest should use the firmware tables (ACPI/DT) 
to detect the MMIO regions.

> 
> On x86 my plan would be to add an hypercall that would set the
> position of the ECAM region in the guest physmap, and that would be
> called by the toolstack during domain construction.

It would be possible to use the same on Arm so the hypervisor doesn't 
use hardcoded values for the ECAM.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Jul 20 09:25:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jul 2020 09:25:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxS2h-0004D0-3r; Mon, 20 Jul 2020 09:25:03 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=TdB1=A7=citrix.com=christian.lindig@srs-us1.protection.inumbo.net>)
 id 1jxS2f-0004Cp-El
 for xen-devel@lists.xen.org; Mon, 20 Jul 2020 09:25:01 +0000
X-Inumbo-ID: e23b1fec-ca6a-11ea-9f74-12813bfff9fa
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e23b1fec-ca6a-11ea-9f74-12813bfff9fa;
 Mon, 20 Jul 2020 09:25:00 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595237100;
 h=from:to:cc:subject:date:message-id:references:
 in-reply-to:content-transfer-encoding:mime-version;
 bh=SLZ3JUl1GKSwslnsYU09bBRQlWFBZdz0wz8wzZsV5Ms=;
 b=NOdr5PPXTGgkzRZBTkLJpn7Jt4X/QVHcQ2XEBzq97cxy7JNZlhhpOnbd
 gOYsRhSzIbRlwJUb2lJInU9HOwvAsxDs552Frz9JoAkvMxHW8JYeGGr7H
 HqILXzNEeFkLOlhSdWFrSL5R9z2H8fM1SX0cD9yKcInzm6dWwyDICjAVM k=;
Authentication-Results: esa2.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: gvsyrebKPOEOE096nZOmBT0ZW4O/maVmc7xSQfdsRmJPMjervanTLUy4Oh/9dDf4jwN6D8oEm3
 pF3hI13kWj7ghzh9lTqQq9hPNWcUbix+ATkdm3e/SR62y21LTHJQmjCSvqe5JjXFIMpgzsb6ms
 ZufstZR9sTEbP62rKRBvpCjfYc8W0L4gcCYBNpi+dbDJ1QPXCHQsT/N33XLBsZKFouhQQlehtq
 Nio0e8/40v0ZtTMSl34SHas4mD6uz7yLnrQaNwdjM6lFT55GaTlBl+AAjzWUkq/Og7CNakavL+
 IjQ=
X-SBRS: 2.7
X-MesageID: 22739429
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,374,1589256000"; d="scan'208";a="22739429"
From: Christian Lindig <christian.lindig@citrix.com>
To: 'Elliott Mitchell' <ehem+xen@m5p.com>, "xen-devel@lists.xen.org"
 <xen-devel@lists.xen.org>, "paul@xen.org" <paul@xen.org>
Subject: Re: [PATCH 2/2] tools/ocaml: Default to useful build output
Thread-Topic: [PATCH 2/2] tools/ocaml: Default to useful build output
Thread-Index: AQHWXLQfZqk/9n2Ez0iUq9XU56BN2qkQJXHN///oGACAACWpQQ==
Date: Mon, 20 Jul 2020 09:24:56 +0000
Message-ID: <1595237096474.23865@citrix.com>
References: <20200718033242.GB88869@mattapan.m5p.com>
 <1595234320493.39632@citrix.com>,<000d01d65e74$3deda4d0$b9c8ee70$@xen.org>
In-Reply-To: <000d01d65e74$3deda4d0$b9c8ee70$@xen.org>
Accept-Language: en-GB, en-US
Content-Language: en-GB
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-ms-exchange-transport-fromentityheader: Hosted
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <Ian.Jackson@citrix.com>, Edwin Torok <edvin.torok@citrix.com>,
 "wl@xen.org" <wl@xen.org>, "dave@recoil.org" <dave@recoil.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

I think this would at least force a clean-up and open the project to a
wider set of OCaml developers. This might lead to a situation where the
OCaml xenstore is not readily available for the consumers of Xen, and I
don't know who wants it and how much. But I would prefer a situation
where the OCaml xenstore can be built against a system with Xen
libraries installed rather than only within the Xen tree. This would
help to modernise the OCaml xenstore code base, not just in terms of
the build system, but also tackle long-standing problems like improving
the code around select/poll, which is inefficient.

-- C

________________________________________
From: Paul Durrant <xadimgnik@gmail.com>
Sent: 20 July 2020 10:00
To: Christian Lindig; 'Elliott Mitchell'; xen-devel@lists.xen.org
Cc: Ian Jackson; Edwin Torok; wl@xen.org; dave@recoil.org
Subject: RE: [PATCH 2/2] tools/ocaml: Default to useful build output

> -----Original Message-----
> From: Xen-devel <xen-devel-bounces@lists.xenproject.org> On Behalf Of Christian Lindig
> Sent: 20 July 2020 09:39
> To: Elliott Mitchell <ehem+xen@m5p.com>; xen-devel@lists.xen.org
> Cc: Ian Jackson <Ian.Jackson@citrix.com>; Edwin Torok <edvin.torok@citrix.com>; wl@xen.org;
> dave@recoil.org
> Subject: Re: [PATCH 2/2] tools/ocaml: Default to useful build output
>
> > Time for a bit of controversy.
>
> OCaml outside Xen has moved to a different model of building based on
> dune which is fast, declarative and reliable. The OCaml xenstore is
> stagnating because nobody with OCaml experience wants to touch it
> anymore.

It is still the default. Would you suggest that we change this and make
C xenstored the default for 4.15, deprecating oxenstored with a view to
subsequently purging it from the tree in the 4.16 dev cycle?

  Paul

> It would be beneficial for the health of the OCaml xenstore to split
> it out such that it could be worked on independently. You might argue
> that Make is still appropriate for building OCaml projects, but the
> OCaml community has moved through several build systems, starting
> from Make, and learned the hard way that this is not an easy problem.
> After years of more-or-less successful build systems the consensus is
> that dune is the right one, and in combination with the Opam package
> manager it has made the ecosystem flourish. Alternatively, it would
> be possible to move the OCaml xenstore to dune within the Xen tree,
> but it would create a dependency on it.
>
> -- C


From xen-devel-bounces@lists.xenproject.org Mon Jul 20 09:36:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jul 2020 09:36:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxSDe-0005D8-3H; Mon, 20 Jul 2020 09:36:22 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WdHU=A7=citrix.com=edvin.torok@srs-us1.protection.inumbo.net>)
 id 1jxSDb-0005D3-Ue
 for xen-devel@lists.xen.org; Mon, 20 Jul 2020 09:36:19 +0000
X-Inumbo-ID: 76ee6292-ca6c-11ea-9f74-12813bfff9fa
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 76ee6292-ca6c-11ea-9f74-12813bfff9fa;
 Mon, 20 Jul 2020 09:36:19 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595237780;
 h=from:to:cc:subject:date:message-id:references:
 in-reply-to:content-id:content-transfer-encoding: mime-version;
 bh=JuuOUPIbdc96C0yQXLEYT6jO3VJLVFmxpWCcRtyTcik=;
 b=aIocrVwF3Bb9svjBoWyEsgwRsIEmQYMPOSPMhdOjNaXg5iMTWHKc0hG4
 lUPFpKSvlLcR/la91X0+H2aMVwNnBRgSKW+uc3x905LdfG8FezA/RbOIY
 9LwkvIHAxPWWuhdm7vozbIKOnwo7oSysy3+kDreA/QYnmEW+OHHvJRiwr M=;
Authentication-Results: esa3.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: ixA9MUFesw9e9JnxG/CkniUjmcHiGLcnbO9NrXMot3BiY8LZTWoKCRWPSlLFrkcnBQZ60StMw2
 0ipFci8hipUoPcvSw/WnSGqLLhyFHxLU4MHGqB7FTdT288adl+26QVgHVdX89oWQUlb/uycXhh
 X/KnsvTH+SNBByVODdEq6sfdGmABAmgjwaf2NwzIMJFnC+TByu72baAKIscYmsALb5rtEtM3xW
 9AbMwpT/YlqnHwk2IXk6m2kyFqAIdibcJwheOrJ5qqQIkWHFsNQ405cQmxWIXkk3wjqHgNHIqs
 aTI=
X-SBRS: 2.7
X-MesageID: 22736990
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,374,1589256000"; d="scan'208";a="22736990"
From: Edwin Torok <edvin.torok@citrix.com>
To: "ehem+xen@m5p.com" <ehem+xen@m5p.com>, Christian Lindig
 <christian.lindig@citrix.com>, "xen-devel@lists.xen.org"
 <xen-devel@lists.xen.org>
Subject: Re: [PATCH 2/2] tools/ocaml: Default to useful build output
Thread-Topic: [PATCH 2/2] tools/ocaml: Default to useful build output
Thread-Index: AQHWXLQfZqk/9n2Ez0iUq9XU56BN2qkQJXHN///yCwA=
Date: Mon, 20 Jul 2020 09:36:14 +0000
Message-ID: <db4207051ac4ff18ab876e55bca9041b729daba6.camel@citrix.com>
References: <20200718033242.GB88869@mattapan.m5p.com>
 <1595234320493.39632@citrix.com>
In-Reply-To: <1595234320493.39632@citrix.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-ms-exchange-messagesentrepresentingtype: 1
x-ms-exchange-transport-fromentityheader: Hosted
Content-Type: text/plain; charset="utf-8"
Content-ID: <55135C5AD82C224D8EA57CB5896094F3@citrix.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <Ian.Jackson@citrix.com>,
 Andrew Cooper <Andrew.Cooper3@citrix.com>, "wl@xen.org" <wl@xen.org>,
 "dave@recoil.org" <dave@recoil.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Mon, 2020-07-20 at 10:38 +0200, Christian Lindig wrote:
> > Time for a bit of controversy.
> 
> OCaml outside Xen has moved to a different model of building based on
> dune which is fast, declarative and reliable. The OCaml xenstore is
> stagnating because nobody with OCaml experience wants to touch it
> anymore. It would be beneficial for the health of the OCaml xenstore
> to split it out such that it could be worked on independently.

AFAIK there are 2 unstable interfaces used by oxenstored; decoupling it
would make the version of oxenstored more independent from the version
of the hypervisor:
https://andrewcoop-xen.readthedocs.io/en/docs-devel/misc/tech-debt.html#remove-xenstored-s-dependencies-on-unstable-interfaces
IIUC this would also allow some code to be dropped from the hypervisor
where oxenstored is the last user of the unstable interface.

> You might argue that Make is still appropriate for building OCaml
> projects, but the OCaml community has moved through several build
> systems, starting from Make, and learned the hard way that this is
> not an easy problem. After years of more-or-less successful build
> systems the consensus is that dune is the right one, and in
> combination with the Opam package manager it has made the ecosystem
> flourish. Alternatively, it would be possible to move the OCaml
> xenstore to dune within the Xen tree, but it would create a
> dependency on it.

I'd welcome a Dune based build-system.

The current Makefile based build-system doesn't handle dependencies
correctly for incremental development: I often have to run 'make clean'
in order to successfully build xenstored after changing an .ml file,
otherwise the linker fails with 'inconsistent assumptions over
interface', indicating that Make hasn't rebuilt something that it
should have. (For those unfamiliar with this issue, see the
'Motivation' section in
https://nicolaspouillard.fr/ocamlbuild/ocamlbuild-user-guide.html)
It also lacks generation of .merlin files (for editor integration, e.g.
Vim or Emacs), which you get for free with Dune.

We could still retain a Makefile as an entrypoint that launches Dune
with appropriate flags, which aside from adding a build requirement on
Dune wouldn't require changes to package building.

I think a nice way forward here would be to try to write a minimal
binding to gnttab to replicate
https://xenbits.xen.org/gitweb/?p=xen.git;a=commitdiff;h=38eeb3864d in
OCaml; this would both demonstrate the benefits of Dune (making
contribution easier), and reduce technical debt within Xen.

Best regards,
--Edwin


From xen-devel-bounces@lists.xenproject.org Mon Jul 20 09:37:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jul 2020 09:37:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxSEW-0005GH-EI; Mon, 20 Jul 2020 09:37:16 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UosC=A7=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jxSEV-0005G7-En
 for xen-devel@lists.xenproject.org; Mon, 20 Jul 2020 09:37:15 +0000
X-Inumbo-ID: 97c2b400-ca6c-11ea-9f74-12813bfff9fa
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 97c2b400-ca6c-11ea-9f74-12813bfff9fa;
 Mon, 20 Jul 2020 09:37:14 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595237834;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:in-reply-to;
 bh=nwPJa3NUFyEUiPP7p8oQoME8KchN1Curaq49nRiJSr0=;
 b=VQ73fMuqmBPOA8o5mFaRv6pB1eXZNN8kMoR9vi282MuqFKIOXiPNamQu
 5zQ4E9EVEGH9f7TtpGMQE3Jwzj/2XvzllMxQbMDi40PjXHG6yMMz06/qi
 yV90KDqbMHyBdy12D84mgjFdhkWq7imujJXztbWYOoBQN0MV4KpoDSFdP c=;
Authentication-Results: esa4.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: 4hmgLSfnkocyYHfNLmECegraULYWfvgO+2ir3ZYwO5eOasBTEYVWmVfPwW4mzeVVL17AAtcZVd
 bUVOns9d+KznUSTWedAW/WhA4LDQ6fI22ZVr7E0i6W1qt7UqqwxosEHno3/M1gQyUdpcDOjoqF
 Nj70/QuhTm+JOMOwc9sysRtdWdhiTtL0p7pcoHDnQAPgaCsxfhMhr0r+0WqS9xxrS3GSrFbPO0
 NyjgT8OXTOod2Cdp5jysswqlD0JX8QMphFc9cBJwZ90Kl5ucVrm3e0q6KBg1OKyPj5+tG+RRay
 t54=
X-SBRS: 2.7
X-MesageID: 23586900
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,374,1589256000"; d="scan'208";a="23586900"
Date: Mon, 20 Jul 2020 11:37:05 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Subject: Re: [PATCH v2 01/11] xen/manage: keep track of the on-going suspend
 mode
Message-ID: <20200720093705.GG7191@Air-de-Roger>
References: <cover.1593665947.git.anchalag@amazon.com>
 <20200702182136.GA3511@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
 <50298859-0d0e-6eb0-029b-30df2a4ecd63@oracle.com>
 <20200715204943.GB17938@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
 <0ca3c501-e69a-d2c9-a24c-f83afd4bdb8c@oracle.com>
 <20200717191009.GA3387@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
 <5464f384-d4b4-73f0-d39e-60ba9800d804@oracle.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
In-Reply-To: <5464f384-d4b4-73f0-d39e-60ba9800d804@oracle.com>
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: eduval@amazon.com, len.brown@intel.com, peterz@infradead.org,
 benh@kernel.crashing.org, x86@kernel.org, linux-mm@kvack.org, pavel@ucw.cz,
 hpa@zytor.com, sstabellini@kernel.org, kamatam@amazon.com, mingo@redhat.com,
 xen-devel@lists.xenproject.org, sblbir@amazon.com, axboe@kernel.dk,
 konrad.wilk@oracle.com, Anchal Agarwal <anchalag@amazon.com>, bp@alien8.de,
 tglx@linutronix.de, jgross@suse.com, netdev@vger.kernel.org,
 linux-pm@vger.kernel.org, rjw@rjwysocki.net, linux-kernel@vger.kernel.org,
 vkuznets@redhat.com, davem@davemloft.net, dwmw@amazon.co.uk
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Sat, Jul 18, 2020 at 09:47:04PM -0400, Boris Ostrovsky wrote:
> (Roger, question for you at the very end)
> 
> On 7/17/20 3:10 PM, Anchal Agarwal wrote:
> > On Wed, Jul 15, 2020 at 05:18:08PM -0400, Boris Ostrovsky wrote:
> >> CAUTION: This email originated from outside of the organization. Do not click links or open attachments unless you can confirm the sender and know the content is safe.
> >>
> >>
> >>
> >> On 7/15/20 4:49 PM, Anchal Agarwal wrote:
> >>> On Mon, Jul 13, 2020 at 11:52:01AM -0400, Boris Ostrovsky wrote:
> >>>> CAUTION: This email originated from outside of the organization. Do not click links or open attachments unless you can confirm the sender and know the content is safe.
> >>>>
> >>>>
> >>>>
> >>>> On 7/2/20 2:21 PM, Anchal Agarwal wrote:
> >>>> And PVH dom0.
> >>> That's another good use case to make it work with however, I still
> >>> think that should be tested/worked upon separately as the feature itself
> >>> (PVH Dom0) is very new.
> >>
> >> Same question here --- will this break PVH dom0?
> >>
> > I haven't tested it as a part of this series. Is that a blocker here?
> 
> 
> I suspect dom0 will not do well now as far as hibernation goes, in which
> case you are not breaking anything.
> 
> 
> Roger?

I sadly don't have any box ATM that supports hibernation on which I
could test this. We have hibernation support for PV dom0, so while I
haven't done anything specific to support or test hibernation on PVH
dom0, I would at least aim not to make it any worse, and hence the
check should at least also fail for a PVH dom0?

if (!xen_hvm_domain() || xen_initial_domain())
    return -ENODEV;

Ie: none of this should be applied to a PVH dom0, as it doesn't have
PV devices and hence should follow the bare-metal device suspend path.

Also I would contact the QubesOS guys: they rely heavily on the
suspend feature for dom0, and that's something not currently tested by
osstest, so any breakages there go unnoticed.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Mon Jul 20 09:39:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jul 2020 09:39:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxSGq-0005RM-U4; Mon, 20 Jul 2020 09:39:40 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Ibwy=A7=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jxSGq-0005RH-1D
 for xen-devel@lists.xenproject.org; Mon, 20 Jul 2020 09:39:40 +0000
X-Inumbo-ID: eda41a94-ca6c-11ea-9f74-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id eda41a94-ca6c-11ea-9f74-12813bfff9fa;
 Mon, 20 Jul 2020 09:39:38 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id CDB19AC85;
 Mon, 20 Jul 2020 09:39:43 +0000 (UTC)
Subject: Re: [PATCH] xen/gntdev: gntdev.h: drop a duplicated word
To: Randy Dunlap <rdunlap@infradead.org>, linux-kernel@vger.kernel.org
References: <20200719003317.21454-1-rdunlap@infradead.org>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <50c0c93a-3c50-e3f2-8b73-c907424e4e7c@suse.com>
Date: Mon, 20 Jul 2020 11:39:37 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20200719003317.21454-1-rdunlap@infradead.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 19.07.20 02:33, Randy Dunlap wrote:
> Drop the repeated word "of" in a comment.
> 
> Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
> Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
> Cc: Juergen Gross <jgross@suse.com>
> Cc: xen-devel@lists.xenproject.org

Reviewed-by: Juergen Gross <jgross@suse.com>


Juergen


From xen-devel-bounces@lists.xenproject.org Mon Jul 20 09:40:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jul 2020 09:40:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxSHu-0006Ai-83; Mon, 20 Jul 2020 09:40:46 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Eely=A7=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jxSHt-0006AZ-0p
 for xen-devel@lists.xenproject.org; Mon, 20 Jul 2020 09:40:45 +0000
X-Inumbo-ID: 14e0ce7c-ca6d-11ea-9f74-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 14e0ce7c-ca6d-11ea-9f74-12813bfff9fa;
 Mon, 20 Jul 2020 09:40:44 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=7jekAShMJES6zG0w+MwjXR5yipns1iTsnrBv9YJulBY=; b=2S36o/82rdqdAQkC2GUmoowq2T
 RTUVseoPXt8k4laC8e8yC6bKY9b8T/4eBRtD/6uzMzs2SO69RoYezYEu20fchy+R42/VBp608H8cg
 dimP44SIuGL3nl3W0SoFFNIXflnJb7fKCL00/5UJjxyXEqyXoDi/q9DGFXQfUhqCoo40=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jxSHq-0001oO-HZ; Mon, 20 Jul 2020 09:40:42 +0000
Received: from 54-240-197-231.amazon.com ([54.240.197.231]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jxSHq-0007kB-Al; Mon, 20 Jul 2020 09:40:42 +0000
Subject: Re: Virtio in Xen on Arm (based on IOREQ concept)
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Oleksandr <olekstysh@gmail.com>
References: <CAPD2p-nthLq5NaU32u8pVaa-ub=a9-LOPenupntTYdS-cu31jQ@mail.gmail.com>
 <20200717150039.GV7191@Air-de-Roger>
 <8f4e0c0d-b3d4-9dd3-ce20-639539321968@gmail.com>
 <20200720091722.GF7191@Air-de-Roger>
From: Julien Grall <julien@xen.org>
Message-ID: <10eaec62-2c48-52ae-d113-1681c87e3d59@xen.org>
Date: Mon, 20 Jul 2020 10:40:40 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:68.0)
 Gecko/20100101 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20200720091722.GF7191@Air-de-Roger>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Oleksandr Andrushchenko <andr2000@gmail.com>,
 xen-devel <xen-devel@lists.xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Artem Mygaiev <joculator@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>



On 20/07/2020 10:17, Roger Pau Monné wrote:
> On Fri, Jul 17, 2020 at 09:34:14PM +0300, Oleksandr wrote:
>> On 17.07.20 18:00, Roger Pau Monné wrote:
>>> On Fri, Jul 17, 2020 at 05:11:02PM +0300, Oleksandr Tyshchenko wrote:
>>>> requires
>>>> some implementation to forward guest MMIO access to a device model. And as
>>>> it
>>>> turned out the Xen on x86 contains most of the pieces to be able to use that
>>>> transport (via existing IOREQ concept). Julien has already done a big amount
>>>> of work in his PoC (xen/arm: Add support for Guest IO forwarding to a
>>>> device emulator).
>>>> Using that code as a base we managed to create a completely functional PoC
>>>> with DomU
>>>> running on virtio block device instead of a traditional Xen PV driver
>>>> without
>>>> modifications to DomU Linux. Our work is mostly about rebasing Julien's
>>>> code on the actual
>>>> codebase (Xen 4.14-rc4), various tweeks to be able to run emulator
>>>> (virtio-disk backend)
>>>> in other than Dom0 domain (in our system we have thin Dom0 and keep all
>>>> backends
>>>> in driver domain),
>>> How do you handle this use-case? Are you using grants in the VirtIO
>>> ring, or rather allowing the driver domain to map all the guest memory
>>> and then placing gfn on the ring like it's commonly done with VirtIO?
>>
>> Second option. Xen grants are not used at all as well as event channel and
>> Xenbus. That allows us to have guest
>>
>> *unmodified* which one of the main goals. Yes, this may sound (or even
>> sounds) non-secure, but backend which runs in driver domain is allowed to
>> map all guest memory.
> 
> Supporting unmodified guests is certainly a fine goal, but I don't
> think it's incompatible with also trying to expand the spec in
> parallel in order to support grants in a negotiated way (see below).
> 
> That way you could (long term) regain some of the lost security.

FWIW, Xen is not the only hypervisor/community interested in creating 
"less privileged" backends.

> 
>>> Do you have any plans to try to upstream a modification to the VirtIO
>>> spec so that grants (ie: abstract references to memory addresses) can
>>> be used on the VirtIO ring?
>>
>> But VirtIO spec hasn't been modified as well as VirtIO infrastructure in the
>> guest. Nothing to upsteam)
> 
> OK, so there's no intention to add grants (or a similar interface) to
> the spec?
> 
> I understand that you want to support unmodified VirtIO frontends, but
> I also think that long term frontends could negotiate with backends on
> the usage of grants in the shared ring, like any other VirtIO feature
> negotiated between the frontend and the backend.
> 
> This of course needs to be on the spec first before we can start
> implementing it, and hence my question whether a modification to the
> spec in order to add grants has been considered.
The problem is not really the specification but the adoption in the 
ecosystem. A protocol based on grant tables would mostly only be used by 
Xen, therefore:
    - It may be difficult to convince a proprietary OS vendor to invest 
resources in implementing the protocol
    - It would be more difficult to move in/out of the Xen ecosystem.

Both may slow the adoption of Xen in some areas.

If one is interested in security, then it would be better to work with 
the other interested parties. I think it would be possible to use a 
virtual IOMMU for this purpose.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Jul 20 10:19:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jul 2020 10:19:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxStF-0000W2-9v; Mon, 20 Jul 2020 10:19:21 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vvUF=A7=citrix.com=george.dunlap@srs-us1.protection.inumbo.net>)
 id 1jxStE-0000Vu-0p
 for xen-devel@lists.xenproject.org; Mon, 20 Jul 2020 10:19:20 +0000
X-Inumbo-ID: 7842e66d-ca72-11ea-8487-bc764e2007e4
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7842e66d-ca72-11ea-8487-bc764e2007e4;
 Mon, 20 Jul 2020 10:19:19 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595240359;
 h=from:to:cc:subject:date:message-id:references:
 in-reply-to:content-id:content-transfer-encoding: mime-version;
 bh=g1USrYgf6iLGRJ6Y6JipbkWFUsP6pUvHuvebTM15GUc=;
 b=X8+9YoonqynbTyWJ2BhWdTHmCnAb0yGMXXtzomFTsSM/dMUd7R7zUaNQ
 iQ5A2cKLe4tpiv9/HzN6r0RbOmD4axhVc739+3Fohf5flvSCj8wcdPuVs
 xnJdP4/7E4R/eFxkiGuDiesidsue3wPqT6rUpCGn8sAOOsCZMKbrRlIz+ 0=;
Authentication-Results: esa3.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: tA6PgaaShCYl/opkuLCMopA0zOujg7/WOsTNlI5gz8CDya78f96EjEooevZOkIZ50w+WYPFsvw
 dCaJzcrfb4Xwu+XskMndHoSfF/E3GXd3cJMbVF2S4l4q+nbnQXzFEkRzrAVLQNErQ2jGCHg1Li
 NpbAvc01eJu/ZrFeY2mMnDnJG9ar7ZKHboag2zt6dw3ftDzZ3NqZRMhCL4pyd/0FQgJ54oyBie
 vV+sGm+tTqwvC2mXHG60dLzG33d3D34RYkcWf0K3GGFcoh+VVL+kXjlB1rua2YfX+MU3z/6JVS
 jiU=
X-SBRS: 2.7
X-MesageID: 22739368
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,374,1589256000"; d="scan'208";a="22739368"
From: George Dunlap <George.Dunlap@citrix.com>
To: =?utf-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Subject: Re: [PATCH v2] docs: specify stability of hypfs path documentation
Thread-Topic: [PATCH v2] docs: specify stability of hypfs path documentation
Thread-Index: AQHWWR59fvtpmEuro06JsoMrFTmX86kFdV+AgAMhtYCAAUgfAIAABYeAgAAO0gCAAALcAIAALlYAgAAD6oCABgIDAA==
Date: Mon, 20 Jul 2020 10:19:14 +0000
Message-ID: <041963E3-304B-4F1D-BCA0-387057DFC7FE@citrix.com>
References: <20200713140338.16172-1-jgross@suse.com>
 <8a96b1b9-cbcb-557a-5b82-661bbe40fe25@suse.com>
 <68F727A8-29B8-4846-8BE9-BD4F6E0DC60D@citrix.com>
 <9f5e86cc-4f64-982b-d84b-1de6b2739a2b@suse.com>
 <4c681c7c-be69-7e1a-4cd9-c9e05fe85372@suse.com>
 <2567da9b-be43-3f0d-e213-562b5454f4b7@suse.com>
 <757f5f78-6ec9-c740-18bf-a01105d552d7@suse.com>
 <A8D7C0A3-BA48-40F2-B290-C73BC1CDEBD1@citrix.com>
 <fd8902f6-0172-2f1f-aae8-fec096d4bff5@suse.com>
In-Reply-To: <fd8902f6-0172-2f1f-aae8-fec096d4bff5@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-mailer: Apple Mail (2.3608.80.23.2.2)
x-ms-exchange-messagesentrepresentingtype: 1
x-ms-exchange-transport-fromentityheader: Hosted
Content-Type: text/plain; charset="utf-8"
Content-ID: <FFD6D947570B684886038B11B1833383@citrix.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien
 Grall <julien@xen.org>, Wei Liu <wl@xen.org>, Paul Durrant <paul@xen.org>,
 Andrew Cooper <Andrew.Cooper3@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Ian Jackson <Ian.Jackson@citrix.com>,
 "open list:X86" <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>



> On Jul 16, 2020, at 3:34 PM, Jürgen Groß <jgross@suse.com> wrote:
> 
> On 16.07.20 16:20, George Dunlap wrote:
>>> On Jul 16, 2020, at 12:34 PM, Jürgen Groß <jgross@suse.com> wrote:
>>> 
>>> On 16.07.20 13:24, Jan Beulich wrote:
>>>> On 16.07.2020 12:31, Jürgen Groß wrote:
>>>>> On 16.07.20 12:11, Jan Beulich wrote:
>>>>>> On 15.07.2020 16:37, George Dunlap wrote:
>>>>>>> IT sounds like you’re saying:
>>>>>>> 
>>>>>>> 1. Paths listed without conditions will always be available
>>>>>>> 
>>>>>>> 2. Paths listed with conditions may be extended: i.e., a node currently listed as PV might also become available for HVM guests
>>>>>>> 
>>>>>>> 3. Paths listed with conditions might have those conditions reduced, but will never entirely disappear.  So something currently listed as PV might be reduced to CONFIG_HAS_FOO, but won’t be completely removed.
>>>>>>> 
>>>>>>> Is that what you meant?
>>>>>> 
>>>>>> I see Jürgen replied "yes" to this, but I'm not sure about 1.
>>>>>> above: I think it's quite reasonable to expect that paths without
>>>>>> condition may gain a condition. Just like paths now having a
>>>>>> condition and (perhaps temporarily) losing it shouldn't all of
>>>>>> the sudden become "always available" when they weren't meant to
>>>>>> be.
>>>>>> 
>>>>>> As far a 3. goes, I'm also unsure in how far this is any better
>>>>>> stability wise (from a consumer pov) than allowing paths to
>>>>>> entirely disappear.
>>>>> 
>>>>> The idea is that any user tool using hypfs can rely on paths under 1 to
>>>>> exist, while the ones under 3 might not be there due to the hypervisor
>>>>> config or the used system.
>>>>> 
>>>>> A path not being allowed to entirely disappear ensures that it remains
>>>>> in the documentation, so the same path can't be reused for something
>>>>> different in future.
>>>> And then how do you deal with a condition getting dropped, and
>>>> later wanting to get re-added? Do we need a placeholder condition
>>>> like [ALWAYS] or [TRUE]?
>>> 
>>> Dropping a condition has to be considered very carefully, same as
>>> introducing a new path without any condition.
>>> 
>>> In worst case you can still go with [CONFIG_HYPFS].
>> Couldn’t we just have a section of the document for dead paths that aren’t allowed to be used?
>> Alternately, we could have a tag for entries we don’t want used anymore; [DEAD] or [OBSOLETE] maybe? [DEFUNCT]? [REMOVED]?
>> So I think I’d write a separate section, like this:
>> ~~
>> # Stability
>> Path *presence* is not stable, but path *meaning* is always stable: if a tool you write finds a path present, it can rely on behavior in future versions of the hypervisors, and in different configurations.  Specifically:
>> 1. Conditions under which paths are used may be extended, restricted, or removed.  For example, a path that’s always available only on ARM systems may become available on x86; or a path available on both systems may be restricted to only appearing on ARM systems.  Paths may also disappear entirely.
>> 2. However, the meaning of a path will never change.  If a path is present, it will always have exactly the meaning that it always had.  In order to maintain this, removed paths should be retained with the tag [REMOVED].  The path may be restored *only* if the restored version of the path is compatible with the previous functionality.
>> ~~~
>> Thoughts?
> 
> Would work for me, too.

So whose court is the ball in now?  Are you going to send another patch, or would you prefer I do it?

 -George


From xen-devel-bounces@lists.xenproject.org Mon Jul 20 10:20:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jul 2020 10:20:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxSuP-0001FV-QP; Mon, 20 Jul 2020 10:20:33 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UosC=A7=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jxSuO-0001FO-N4
 for xen-devel@lists.xenproject.org; Mon, 20 Jul 2020 10:20:32 +0000
X-Inumbo-ID: a2ce3168-ca72-11ea-8487-bc764e2007e4
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a2ce3168-ca72-11ea-8487-bc764e2007e4;
 Mon, 20 Jul 2020 10:20:31 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595240431;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:content-transfer-encoding:in-reply-to;
 bh=p3Bd5pXAMYU/Ly5ta3QImfVnRY3RDMqxFMLQdBS3Mtc=;
 b=RatfbJ+Ns7/nW4qJ1o5ax/9/zO4ZTVFNt4SW243koM2uz4Y2uOvBdWXN
 WFzt22DppXRD4v4d/SAIYx+jswQuCTyf57VglcHaBOZ7ulBUGDyZ0V5NB
 xm5ualVT8+bjjJJkYpw/Hrsmy/0BQdv2iDNDlSpRcKYqHhkDH10BiZQXh 0=;
Authentication-Results: esa1.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: VrbH58Xm5nPOl7D0WJ6Go5erLMQnBS8bsCYxSgyUPYiZ+RAM99Pmt/n4jGOPeCCbqD9Vo/lH1E
 BpWqLoHuxhWw6ISWSdEMC2+peP0T9TobNDmuzQ2x1GZV3wosymwycTHvTkqB7/0ZRV6pecQbjB
 koFf7neSUmbQda8/V4aPUPWZK9TV1ssFZ1PY3Ek01D2M4jroS/d3UA2bDcDaXpuZVybggVCFT+
 +9WPNE+tT7CZGyVgY2KJE9Dc7KR66YuzKstFyt0VjSbDCOvaFqFSMWLbX6N+RXCukYVT25EtqA
 rl8=
X-SBRS: 2.7
X-MesageID: 23063696
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,374,1589256000"; d="scan'208";a="23063696"
Date: Mon, 20 Jul 2020 12:20:23 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Julien Grall <julien@xen.org>
Subject: Re: Virtio in Xen on Arm (based on IOREQ concept)
Message-ID: <20200720102023.GH7191@Air-de-Roger>
References: <CAPD2p-nthLq5NaU32u8pVaa-ub=a9-LOPenupntTYdS-cu31jQ@mail.gmail.com>
 <20200717150039.GV7191@Air-de-Roger>
 <8f4e0c0d-b3d4-9dd3-ce20-639539321968@gmail.com>
 <20200720091722.GF7191@Air-de-Roger>
 <10eaec62-2c48-52ae-d113-1681c87e3d59@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <10eaec62-2c48-52ae-d113-1681c87e3d59@xen.org>
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Oleksandr Andrushchenko <andr2000@gmail.com>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>, Oleksandr <olekstysh@gmail.com>,
 xen-devel <xen-devel@lists.xenproject.org>, Artem
 Mygaiev <joculator@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Mon, Jul 20, 2020 at 10:40:40AM +0100, Julien Grall wrote:
> 
> 
> On 20/07/2020 10:17, Roger Pau Monné wrote:
> > On Fri, Jul 17, 2020 at 09:34:14PM +0300, Oleksandr wrote:
> > > On 17.07.20 18:00, Roger Pau Monné wrote:
> > > > On Fri, Jul 17, 2020 at 05:11:02PM +0300, Oleksandr Tyshchenko wrote:
> > > > Do you have any plans to try to upstream a modification to the VirtIO
> > > > spec so that grants (ie: abstract references to memory addresses) can
> > > > be used on the VirtIO ring?
> > > 
> > > But VirtIO spec hasn't been modified as well as VirtIO infrastructure in the
> > > guest. Nothing to upsteam)
> > 
> > OK, so there's no intention to add grants (or a similar interface) to
> > the spec?
> > 
> > I understand that you want to support unmodified VirtIO frontends, but
> > I also think that long term frontends could negotiate with backends on
> > the usage of grants in the shared ring, like any other VirtIO feature
> > negotiated between the frontend and the backend.
> > 
> > This of course needs to be on the spec first before we can start
> > implementing it, and hence my question whether a modification to the
> > spec in order to add grants has been considered.
> The problem is not really the specification but the adoption in the
> ecosystem. A protocol based on grant-tables would mostly only be used by Xen
> therefore:
>    - It may be difficult to convince a proprietary OS vendor to invest
> resource on implementing the protocol
>    - It would be more difficult to move in/out of Xen ecosystem.
> 
> Both may slow the adoption of Xen in some areas.

Right, just to be clear, my suggestion wasn't to force the usage of
grants, but to ask whether adding something along these lines was on the
roadmap; see below.
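For reference, VirtIO feature negotiation of the kind mentioned above boils
down to the driver accepting a subset of the bits the device offers, so a
grant capability could ride on the same mechanism. A minimal sketch in C;
VIRTIO_F_XEN_GRANTS and its bit number are hypothetical, not part of the
spec:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical feature bit: NOT in the VirtIO spec, purely illustrative. */
#define VIRTIO_F_XEN_GRANTS (1ULL << 40)

/* A feature is in effect only if the device offered it AND the driver
 * accepted it -- the usual VirtIO negotiation rule. */
static bool feature_negotiated(uint64_t device_features,
                               uint64_t driver_features,
                               uint64_t feature)
{
    return (device_features & driver_features & feature) != 0;
}
```

An unmodified frontend would simply never accept such a bit and would keep
putting plain guest-physical addresses on the ring.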

> If one is interested in security, then it would be better to work with the
> other interested parties. I think it would be possible to use a virtual
> IOMMU for this purpose.

Yes, I've also heard rumors about using the (I assume VirtIO) IOMMU to
restrict what backends can map. This seems like a fine idea, and would
let us recover the lost security without having to do all of the work
ourselves.

Do you know if there's anything published about this? I'm curious
about how and where in the system the VirtIO IOMMU is/should be
implemented.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Mon Jul 20 10:50:48 2020
From: "Srinivas Bangalore" <srini@yujala.com>
To: <xen-devel@lists.xenproject.org>
Subject: Porting Xen to Jetson Nano
Date: Mon, 20 Jul 2020 03:50:30 -0700
Message-ID: <004f01d65e83$967c65f0$c37531d0$@yujala.com>

Hello,

I am trying to get Xen working on a Jetson Nano board (which is based on
NVIDIA's Tegra210 SoC). After some searching through the Xen-devel archives,
I learnt that there was a set of patches developed in 2017 to port Xen to
Tegra
(https://lists.xenproject.org/archives/html/xen-devel/2017-04/msg00991.html).
However, these patches don't appear in the main source repository, so I
applied them manually to Xen 4.8.5. With these changes, Xen now boots
successfully on the Jetson Nano, but there is no Dom0 output on the console.
I can switch between Xen and Dom0 with CTRL-a-a-a.

I am using Linux kernel version 5.7 for Dom0. I also tried the native Linux
kernel that ships with the Nano board, but that doesn't help.

Here's the console capture:

## Flattened Device Tree blob at e3000000
   Booting using the fdt blob at 0xe3000000
   reserving fdt memory region: addr=80000000 size=20000
   reserving fdt memory region: addr=e3000000 size=35000
   Loading Device Tree to 00000000fc7f8000, end 00000000fc82ffff ... OK

Starting kernel ...

- UART enabled -
- CPU 00000000 booting -
- Current EL 00000008 -
- Xen starting at EL2 -
- Zero BSS -
- Setting up control registers -
- Turning on paging -
- Ready -
(XEN) Checking for initrd in /chosen
(XEN) linux,initrd limits invalid: 0000000084100000 >= 0000000084100000
(XEN) RAM: 0000000080000000 - 00000000fedfffff
(XEN) RAM: 0000000100000000 - 000000017f1fffff
(XEN)
(XEN) MODULE[0]: 00000000fc7f8000 - 00000000fc82d000 Device Tree
(XEN) MODULE[1]: 00000000e1000000 - 00000000e2cbe200 Kernel
console=hvc0 earlyprintk=uart8250-32bit,0x70006000 rootfstype=ext4 rw
rootwait root=/dev/mmcblk0p1
(XEN)  RESVD[0]: 0000000080000000 - 0000000080020000
(XEN)  RESVD[1]: 00000000e3000000 - 00000000e3035000
(XEN)  RESVD[2]: 00000000fc7f8000 - 00000000fc82d000
(XEN)
(XEN) Command line: console=dtuart earlyprintk=xen earlycon=xenboot
dom0_mem=512M loglevel=all
(XEN) Placing Xen at 0x00000000fec00000-0x00000000fee00000
(XEN) Update BOOTMOD_XEN from 0000000080080000-0000000080188e01 =>
00000000fec00000-00000000fed08e01
(XEN) Domain heap initialised
(XEN) Taking dtuart configuration from /chosen/stdout-path
(XEN) Looking for dtuart at "/serial@70 Xen 4.8.5
(XEN) Xen version 4.8.5 (srinivas@) (aarch64-linux-gnu-gcc (Ubuntu/Linaro
7.5.0-3ubuntu1~18.04) 7.5.0) debug=n  Sun Jul 19 07:44:00 PDT 2020
(XEN) Latest ChangeSet:
(XEN) Processor: 411fd071: "ARM Limited", variant: 0x1, part 0xd07, rev 0x1
(XEN) 64-bit Execution:
(XEN)   Processor Features: 0000000000002222 0000000000000000
(XEN)     Exception Levels: EL3:64+32 EL2:64+32 EL1:64+32 EL0:64+32
(XEN)     Extensions: FloatingPoint AdvancedSIMD
(XEN)   Debug Features: 0000000010305106 0000000000000000
(XEN)   Auxiliary Features: 0000000000000000 0000000000000000
(XEN)   Memory Model Features: 0000000000001124 0000000000000000
(XEN)   ISA Features:  0000000000011120 0000000000000000
(XEN) 32-bit Execution:
(XEN)   Processor Features: 00000131:00011011
(XEN)     Instruction Sets: AArch32 A32 Thumb Thumb-2 Jazelle
(XEN)     Extensions: GenericTimer Security
(XEN)   Debug Features: 03010066
(XEN)   Auxiliary Features: 00000000
(XEN)   Memory Model Features: 10101105 40000000 01260000 02102211
(XEN)  ISA Features: 02101110 13112111 21232042 01112131 00011142 00011121
(XEN) Generic Timer IRQ: phys=30 hyp=26 virt=27 Freq: 19200 KHz
(XEN) GICv2 initialization:
(XEN)         gic_dist_addr=0000000050041000
(XEN)         gic_cpu_addr=0000000050042000
(XEN)         gic_hyp_addr=0000000050044000
(XEN)         gic_vcpu_addr=0000000050046000
(XEN)         gic_maintenance_irq=25
(XEN) GICv2: 224 lines, 4 cpus, secure (IID 0200143b).
(XEN) Using scheduler: SMP Credit Scheduler (credit)
(XEN) Allocated console ring of 16 KiB.
(XEN) Bringing up CPU1
- CPU 00000001 booting -
- Current EL 00000008 -
- Xen starting at EL2 -
- Setting up control registers -
- Turning on paging -
- Ready -
(XEN) Bringing up CPU2
- CPU 00000002 booting -
- Current EL 00000008 -
- Xen starting at EL2 -
- Setting up control registers -
- Turning on paging -
- Ready -
(XEN) Bringing up CPU3
- CPU 00000003 booting -
- Current EL 00000008 -
- Xen starting at EL2 -
- Setting up control registers -
- Turning on paging -
- Ready -
(XEN) Brought up 4 CPUs
(XEN) P2M: 44-bit IPA with 44-bit PA
(XEN) P2M: 4 levels with order-0 root, VTCR 0x80043594
(XEN) I/O virtualisation disabled
(XEN) *** LOADING DOMAIN 0 ***
(XEN) Loading kernel from boot module @ 00000000e1000000
(XEN) Allocating 1:1 mappings totalling 512MB for dom0:
(XEN) BANK[0] 0x000000a0000000-0x000000c0000000 (512MB)
(XEN) Grant table range: 0x000000fec00000-0x000000fec60000
(XEN) Loading zImage from 00000000e1000000 to
00000000a0080000-00000000a1d3e200
(XEN) Allocating PPI 16 for event channel interrupt
(XEN) Loading dom0 DTB to 0x00000000a8000000-0x00000000a8034354
(XEN) Scrubbing Free RAM on 1 nodes using 4 CPUs
(XEN) ........done.
(XEN) Initial low memory virq threshold set at 0x4000 pages.
(XEN) Std. Loglevel: Errors and warnings
(XEN) Guest Loglevel: Nothing (Rate-limited: Errors and warnings)
(XEN) *** Serial input -> DOM0 (type 'CTRL-a' three times to switch input to
Xen)
(XEN) Freed 300kB init memory.

Any suggestions/pointers to move forward would be much appreciated.

Thanks,

Srini
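The "Taking dtuart configuration from /chosen/stdout-path" line in the log
shows where Xen picks its console UART. For anyone reproducing this setup, a
sketch of the /chosen node such a configuration typically uses, assuming the
Tegra UART at 0x70006000 that appears in the earlyprintk argument; all
values are illustrative, not taken from Srini's actual tree:

```dts
/ {
    chosen {
        /* Xen resolves its console ("console=dtuart") from this path. */
        stdout-path = "/serial@70006000";
        /* Hypervisor and Dom0 command lines, per Xen's DT bindings. */
        xen,xen-bootargs = "console=dtuart dom0_mem=512M";
        xen,dom0-bootargs = "console=hvc0 root=/dev/mmcblk0p1 rootwait";
    };
};
```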





From xen-devel-bounces@lists.xenproject.org Mon Jul 20 10:52:24 2020
Date: Mon, 20 Jul 2020 12:52:13 +0200
From: Roger Pau Monné <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Subject: Re: [PATCH] x86: guard against port I/O overlapping the RTC/CMOS range
Message-ID: <20200720105213.GI7191@Air-de-Roger>
References: <38c73e17-30b8-27b4-bc7c-e6ef7817fa1e@suse.com>
In-Reply-To: <38c73e17-30b8-27b4-bc7c-e6ef7817fa1e@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>

On Fri, Jul 17, 2020 at 03:10:43PM +0200, Jan Beulich wrote:
> Since we intercept RTC/CMOS port accesses, let's do so consistently in
> all cases, i.e. also for e.g. a dword access to [006E,0071]. To avoid
> the risk of unintended impact on Dom0 code actually doing so (despite
> the belief that none ought to exist), also extend
> guest_io_{read,write}() to decompose accesses where some ports are
> allowed to be directly accessed and some aren't.

Wouldn't the same apply to displaced accesses to port 0xcf8?

> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> --- a/xen/arch/x86/pv/emul-priv-op.c
> +++ b/xen/arch/x86/pv/emul-priv-op.c
> @@ -210,7 +210,7 @@ static bool admin_io_okay(unsigned int p
>          return false;
>  
>      /* We also never permit direct access to the RTC/CMOS registers. */
> -    if ( ((port & ~1) == RTC_PORT(0)) )
> +    if ( port <= RTC_PORT(1) && port + bytes > RTC_PORT(0) )
>          return false;
>  
>      return ioports_access_permitted(d, port, port + bytes - 1);
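(As an aside, the new condition treats the access as the half-open range
[port, port + bytes) and tests it for overlap with [RTC_PORT(0),
RTC_PORT(1)]. A standalone sketch of the same predicate, with RTC_PORT
mirroring Xen's 0x70-based macro:)

```c
#include <assert.h>
#include <stdbool.h>

/* Mirrors Xen's RTC_PORT() macro: the RTC/CMOS index/data ports are
 * 0x70/0x71. */
#define RTC_PORT(x) (0x70 + (x))

/* True iff [port, port + bytes) overlaps the RTC/CMOS port pair: the
 * access starts no later than the last RTC port and ends past the first. */
static bool overlaps_rtc(unsigned int port, unsigned int bytes)
{
    return port <= RTC_PORT(1) && port + bytes > RTC_PORT(0);
}
```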
> @@ -297,6 +297,17 @@ static uint32_t guest_io_read(unsigned i
>              if ( pci_cfg_ok(currd, port & 3, size, NULL) )
>                  sub_data = pci_conf_read(currd->arch.pci_cf8, port & 3, size);
>          }
> +        else if ( ioports_access_permitted(currd, port, port) )
> +        {
> +            if ( bytes > 1 && !(port & 1) &&
> +                 ioports_access_permitted(currd, port, port + 1) )
> +            {
> +                sub_data = inw(port);
> +                size = 2;
> +            }
> +            else
> +                sub_data = inb(port);
> +        }
>  
>          if ( size == 4 )
>              return sub_data;
> @@ -373,25 +384,31 @@ static int read_io(unsigned int port, un
>      return X86EMUL_OKAY;
>  }
>  
> +static void _guest_io_write(unsigned int port, unsigned int bytes,
> +                            uint32_t data)

There's nothing guest-specific about this function, I think? If so, you
could drop the _guest_ prefix and just name it io_write.

> +{
> +    switch ( bytes )
> +    {
> +    case 1:
> +        outb((uint8_t)data, port);
> +        if ( amd_acpi_c1e_quirk )
> +            amd_check_disable_c1e(port, (uint8_t)data);
> +        break;
> +    case 2:
> +        outw((uint16_t)data, port);
> +        break;
> +    case 4:
> +        outl(data, port);
> +        break;
> +    }

Newlines after break statements would be nice, and maybe add a
default: ASSERT_UNREACHABLE() case to be on the safe side?
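Concretely, the shape being suggested would look something like this sketch
(port I/O and ASSERT_UNREACHABLE() are stubbed here so the fragment stands
alone; in Xen they are the real accessors and macro, and the
amd_acpi_c1e_quirk handling from case 1 is elided):

```c
#include <assert.h>
#include <stdint.h>

/* Stubs standing in for Xen's port accessors; they just record the value
 * that would have been written so the behaviour is observable. */
static uint32_t last_data;
static void outb(uint8_t v, unsigned int port)  { last_data = v; (void)port; }
static void outw(uint16_t v, unsigned int port) { last_data = v; (void)port; }
static void outl(uint32_t v, unsigned int port) { last_data = v; (void)port; }
#define ASSERT_UNREACHABLE() assert(!"unreachable")

static void io_write(unsigned int port, unsigned int bytes, uint32_t data)
{
    switch ( bytes )
    {
    case 1:
        outb((uint8_t)data, port);
        break;

    case 2:
        outw((uint16_t)data, port);
        break;

    case 4:
        outl(data, port);
        break;

    default:
        /* Catches a caller passing a width other than 1, 2 or 4. */
        ASSERT_UNREACHABLE();
        break;
    }
}
```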

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Mon Jul 20 10:57:04 2020
Subject: Re: Virtio in Xen on Arm (based on IOREQ concept)
To: Roger Pau Monné <roger.pau@citrix.com>
References: <CAPD2p-nthLq5NaU32u8pVaa-ub=a9-LOPenupntTYdS-cu31jQ@mail.gmail.com>
 <20200717150039.GV7191@Air-de-Roger>
 <8f4e0c0d-b3d4-9dd3-ce20-639539321968@gmail.com>
 <20200720091722.GF7191@Air-de-Roger>
From: Oleksandr <olekstysh@gmail.com>
Message-ID: <be3fc8de-5582-8fd0-52cd-0cbfbfa96859@gmail.com>
Date: Mon, 20 Jul 2020 13:56:51 +0300
In-Reply-To: <20200720091722.GF7191@Air-de-Roger>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Oleksandr Andrushchenko <andr2000@gmail.com>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>,
 xen-devel <xen-devel@lists.xenproject.org>,
 Artem Mygaiev <joculator@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>


On 20.07.20 12:17, Roger Pau Monné wrote:

Hello Roger

> On Fri, Jul 17, 2020 at 09:34:14PM +0300, Oleksandr wrote:
>> On 17.07.20 18:00, Roger Pau Monné wrote:
>>> On Fri, Jul 17, 2020 at 05:11:02PM +0300, Oleksandr Tyshchenko wrote:
>>>> requires
>>>> some implementation to forward guest MMIO access to a device model. And
>>>> as it turned out, Xen on x86 contains most of the pieces needed to use
>>>> that transport (via the existing IOREQ concept). Julien has already done
>>>> a big amount of work in his PoC (xen/arm: Add support for Guest IO
>>>> forwarding to a device emulator).
>>>> Using that code as a base we managed to create a completely functional
>>>> PoC with a DomU running on a virtio block device instead of a traditional
>>>> Xen PV driver, without modifications to DomU Linux. Our work is mostly
>>>> about rebasing Julien's code on the current codebase (Xen 4.14-rc4),
>>>> various tweaks to be able to run the emulator (virtio-disk backend) in a
>>>> domain other than Dom0 (in our system we have a thin Dom0 and keep all
>>>> backends in the driver domain),
>>> How do you handle this use-case? Are you using grants in the VirtIO
>>> ring, or rather allowing the driver domain to map all the guest memory
>>> and then placing gfn on the ring like it's commonly done with VirtIO?
>> Second option. Xen grants are not used at all, nor are event channels and
>> Xenbus. That allows us to have the guest *unmodified*, which is one of the
>> main goals. Yes, this may sound (or even sounds) insecure, but the backend
>> which runs in the driver domain is allowed to map all guest memory.
> Supporting unmodified guests is certainly a fine goal, but I don't
> think it's incompatible with also trying to expand the spec in
> parallel in order to support grants in a negotiated way (see below).
>
> That way you could (long term) regain some of the lost security.
>
>>> Do you have any plans to try to upstream a modification to the VirtIO
>>> spec so that grants (ie: abstract references to memory addresses) can
>>> be used on the VirtIO ring?
>> But the VirtIO spec hasn't been modified, nor has the VirtIO infrastructure
>> in the guest. Nothing to upstream)
> OK, so there's no intention to add grants (or a similar interface) to
> the spec?
>
> I understand that you want to support unmodified VirtIO frontends, but
> I also think that long term frontends could negotiate with backends on
> the usage of grants in the shared ring, like any other VirtIO feature
> negotiated between the frontend and the backend.
>
> This of course needs to be on the spec first before we can start
> implementing it, and hence my question whether a modification to the
> spec in order to add grants has been considered.
>
> It's fine to say that you don't have any plans in this regard.
Adding grants (or a similar interface) to the spec hasn't been 
considered so far.

But I understand and completely agree that some solution should be found 
in order not to reduce security.


>>>> misc fixes for our use-cases and tool support for the configuration.
>>>> Unfortunately, Julien doesn’t have much time to allocate to this work
>>>> anymore, so we would like to step in and continue.
>>>>
>>>> *A few words about the Xen code:*
>>>> You can find the whole Xen series at [5]. The patches are in RFC state
>>>> because some actions in the series should be reconsidered and implemented
>>>> properly. Before submitting the final code for review, the first IOREQ
>>>> patch (which is quite big) will be split into x86, Arm and common parts.
>>>> Please note, the x86 part wasn’t even build-tested so far and could be
>>>> broken by that series. Also the series probably wants splitting into
>>>> adding IOREQ on Arm (which should be the first focus) and tools support
>>>> for configuring the virtio-disk (which is going to be the first VirtIO
>>>> driver) before going to the mailing list.
>>> Sending first a patch series to enable IOREQs on Arm seems perfectly
>>> fine, and it doesn't have to come with the VirtIO backend. In fact I
>>> would recommend that you send that ASAP, so that you don't spend time
>>> working on the backend that would likely need to be modified
>>> according to the review received on the IOREQ series.
>> Completely agree with you, I will send it after splitting the IOREQ patch
>> and performing some cleanup.
>>
>> However, it is going to take some time to do it properly, taking into
>> account that personally I won't be able to test on x86.
> We have gitlab and the osstest CI loop (plus all the reviewers) so we
> should be able to spot any regressions. Build testing on x86 would be
> nice so that you don't need to resend to fix build issues.

Of course, before sending the series to the ML I will definitely perform 
a build test on x86.


>>>> What I would like to add here: the IOREQ feature on Arm could be used
>>>> not only for implementing VirtIO, but for other use-cases which require
>>>> some emulator entity outside Xen, such as a custom (non-ECAM-compatible)
>>>> PCI emulator, for example.
>>>>
>>>> *A few words about the backend(s):*
>>>> One of the main problems with VirtIO in Xen on Arm is the absence of
>>>> “ready-to-use” and “out-of-QEMU” VirtIO backends (at least I am not
>>>> aware of any). We managed to create a virtio-disk backend based on demu
>>>> [3] and kvmtool [4] using that series. It is worth mentioning that
>>>> although Xenbus/Xenstore is not supposed to be used with native VirtIO,
>>>> that interface was chosen just to pass configuration from the toolstack
>>>> to the backend and to notify it about creating/destroying the guest
>>>> domain (I
>>>> think it is
>>> I would prefer if a single instance was launched to handle each
>>> backend, and that the configuration was passed on the command line.
>>> Killing the user-space backend from the toolstack is fine I think,
>>> there's no need to notify the backend using xenstore or any other
>>> out-of-band methods.
>>>
>>> xenstore has proven to be a bottleneck in terms of performance, and it
>>> would be better if we can avoid using it when possible, especially here
>>> that you have to do this from scratch anyway.
>> Let me elaborate a bit more on this.
>>
>> In the current backend implementation, Xenstore is *not* used for
>> communication between the backend (VirtIO device) and the frontend (VirtIO
>> driver); the frontend knows nothing about it.
>>
>> Xenstore was chosen as an interface in order to be able to pass
>> configuration from the toolstack in Dom0 to a backend which may reside in
>> a domain other than Dom0 (DomD in our case),
> There's 'xl devd' which can be used on the driver domain to spawn
> backends, maybe you could add the logic there so that 'xl devd' calls
> the backend executable with the required command line parameters, so
> that the backend itself doesn't need to interact with xenstore in any
> way?
>
> That way in the future we could use something else instead of
> xenstore, like Argo for instance in order to pass the backend data
> from the control domain to the driver domain.
>
>> also, by watching the Xenstore entries, the backend always knows when the
>> intended guest has been created/destroyed.
> xl devd should also do the killing of backends anyway when a domain is
> destroyed, or else malfunctioning user-space backends could keep
> running after the domain they are serving is destroyed.
>
>> I may be mistaken, but I don't think we can avoid using Xenstore (or
>> another interface provided by the toolstack), for several reasons.
>>
>> Besides the virtio-disk configuration (a disk to be assigned to the guest,
>> R/O mode, etc), for each virtio-mmio device instance a pair (MMIO range +
>> IRQ) is allocated by the toolstack at guest construction time and inserted
>> into the virtio-mmio node in the guest device tree. And for the backend to
>> operate properly, these variable parameters are also passed to it via
>> Xenstore.
> I think you could pass all these parameters as command line arguments
> to the backend?
>
>> The other reasons are:
>>
>> 1. Automation. With the current backend implementation we don't need to
>> pause the guest right after creating it, then go to the driver domain and
>> spawn the backend, and after that go back to Dom0 and unpause the guest.
> xl devd should be capable of handling this for you on the driver
> domain.
>
>> 2. Ability to detect when a guest with an involved frontend has gone away
>> and properly release resources (guest destroy/reboot).
>>
>> 3. Ability to (re)connect to a newly created guest with an involved
>> frontend (guest create/reboot).
>>
>> 4. What is more, with Xenstore support the backend is able to detect the
>> dom_id it runs in and the guest's dom_id; there is no need to pass them
>> via the command line.
>>
>>
>> I will be happy to explain in detail after publishing the backend code).
> As I'm not the one doing the work I certainly won't stop you from
> using xenstore on the backend. I would certainly prefer if the backend
> gets all the information it needs from the command line so that the
> configuration data is completely agnostic to the transport layer used
> to convey it.
>
> Thanks, Roger.

Thank you for pointing out another possible way. I feel I need to 
investigate what "xl devd" (+ Argo?) is and how it works. If it is able 
to provide the backend with the support/information it needs, and 
xenstore is not welcome, then I would be absolutely OK to consider 
using another solution.

I propose to get back to that discussion after I prepare and send out 
the proper IOREQ series.
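For reference, the virtio-mmio node that the toolstack inserts into the guest device tree (the "MMIO range + IRQ pair" discussed above) follows the standard "virtio,mmio" binding and looks roughly like the fragment below; the unit address, region size and interrupt number are illustrative values, not what the toolstack actually allocates:

```dts
virtio@2000000 {
        compatible = "virtio,mmio";
        /* MMIO range allocated by the toolstack at guest construction */
        reg = <0x0 0x2000000 0x0 0x200>;
        /* GIC SPI 33, level-triggered; also allocated by the toolstack */
        interrupts = <0 33 4>;
};
```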


-- 
Regards,

Oleksandr Tyshchenko



From xen-devel-bounces@lists.xenproject.org Mon Jul 20 11:00:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jul 2020 11:00:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxTWn-0004yn-0G; Mon, 20 Jul 2020 11:00:13 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Ibwy=A7=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jxTWl-0004yi-O4
 for xen-devel@lists.xenproject.org; Mon, 20 Jul 2020 11:00:11 +0000
X-Inumbo-ID: 2d3da2dc-ca78-11ea-9f82-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2d3da2dc-ca78-11ea-9f82-12813bfff9fa;
 Mon, 20 Jul 2020 11:00:09 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 05FFCAC1D;
 Mon, 20 Jul 2020 11:00:15 +0000 (UTC)
Subject: Re: [PATCH v2] docs: specify stability of hypfs path documentation
To: George Dunlap <George.Dunlap@citrix.com>
References: <20200713140338.16172-1-jgross@suse.com>
 <8a96b1b9-cbcb-557a-5b82-661bbe40fe25@suse.com>
 <68F727A8-29B8-4846-8BE9-BD4F6E0DC60D@citrix.com>
 <9f5e86cc-4f64-982b-d84b-1de6b2739a2b@suse.com>
 <4c681c7c-be69-7e1a-4cd9-c9e05fe85372@suse.com>
 <2567da9b-be43-3f0d-e213-562b5454f4b7@suse.com>
 <757f5f78-6ec9-c740-18bf-a01105d552d7@suse.com>
 <A8D7C0A3-BA48-40F2-B290-C73BC1CDEBD1@citrix.com>
 <fd8902f6-0172-2f1f-aae8-fec096d4bff5@suse.com>
 <041963E3-304B-4F1D-BCA0-387057DFC7FE@citrix.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <3209a519-d7b7-e77a-4abf-7a5dcd476d6d@suse.com>
Date: Mon, 20 Jul 2020 13:00:08 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <041963E3-304B-4F1D-BCA0-387057DFC7FE@citrix.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Paul Durrant <paul@xen.org>,
 Andrew Cooper <Andrew.Cooper3@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Ian Jackson <Ian.Jackson@citrix.com>,
 "open list:X86" <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 20.07.20 12:19, George Dunlap wrote:
> 
> 
>> On Jul 16, 2020, at 3:34 PM, Jürgen Groß <jgross@suse.com> wrote:
>>
>> On 16.07.20 16:20, George Dunlap wrote:
>>>> On Jul 16, 2020, at 12:34 PM, Jürgen Groß <jgross@suse.com> wrote:
>>>>
>>>> On 16.07.20 13:24, Jan Beulich wrote:
>>>>> On 16.07.2020 12:31, Jürgen Groß wrote:
>>>>>> On 16.07.20 12:11, Jan Beulich wrote:
>>>>>>> On 15.07.2020 16:37, George Dunlap wrote:
>>>>>>>> It sounds like you’re saying:
>>>>>>>>
>>>>>>>> 1. Paths listed without conditions will always be available
>>>>>>>>
>>>>>>>> 2. Paths listed with conditions may be extended: i.e., a node currently listed as PV might also become available for HVM guests
>>>>>>>>
>>>>>>>> 3. Paths listed with conditions might have those conditions reduced, but will never entirely disappear.  So something currently listed as PV might be reduced to CONFIG_HAS_FOO, but won’t be completely removed.
>>>>>>>>
>>>>>>>> Is that what you meant?
>>>>>>>
>>>>>>> I see Jürgen replied "yes" to this, but I'm not sure about 1.
>>>>>>> above: I think it's quite reasonable to expect that paths without
>>>>>>> condition may gain a condition. Just like paths now having a
>>>>>>> condition and (perhaps temporarily) losing it shouldn't all of
>>>>>>> the sudden become "always available" when they weren't meant to
>>>>>>> be.
>>>>>>>
>>>>>>> As far as 3. goes, I'm also unsure how far this is any better
>>>>>>> stability wise (from a consumer pov) than allowing paths to
>>>>>>> entirely disappear.
>>>>>>
>>>>>> The idea is that any user tool using hypfs can rely on paths under 1 to
>>>>>> exist, while the ones under 3 might not be there due to the hypervisor
>>>>>> config or the used system.
>>>>>>
>>>>>> A path not being allowed to entirely disappear ensures that it remains
>>>>>> in the documentation, so the same path can't be reused for something
>>>>>> different in future.
>>>>> And then how do you deal with a condition getting dropped, and
>>>>> later wanting to get re-added? Do we need a placeholder condition
>>>>> like [ALWAYS] or [TRUE]?
>>>>
>>>> Dropping a condition has to be considered very carefully, same as
>>>> introducing a new path without any condition.
>>>>
>>>> In worst case you can still go with [CONFIG_HYPFS].
>>> Couldn’t we just have a section of the document for dead paths that aren’t allowed to be used?
>>> Alternately, we could have a tag for entries we don’t want used anymore; [DEAD] or [OBSOLETE] maybe? [DEFUNCT]? [REMOVED]?
>>> So I think I’d write a separate section, like this:
>>> ~~
>>> # Stability
>>> Path *presence* is not stable, but path *meaning* is always stable: if a tool you write finds a path present, it can rely on behavior in future versions of the hypervisor, and in different configurations.  Specifically:
>>> 1. Conditions under which paths are used may be extended, restricted, or removed.  For example, a path that’s always available only on ARM systems may become available on x86; or a path available on both systems may be restricted to only appearing on ARM systems.  Paths may also disappear entirely.
>>> 2. However, the meaning of a path will never change.  If a path is present, it will always have exactly the meaning that it always had.  In order to maintain this, removed paths should be retained with the tag [REMOVED].  The path may be restored *only* if the restored version of the path is compatible with the previous functionality.
>>> ~~~
>>> Thoughts?
>>
>> Would work for me, too.
> 
> So whose court is the ball in now?  Are you going to send another patch, or would you prefer I do it?

I can do it. I just hoped that maybe someone else would agree to this
approach.


Juergen


From xen-devel-bounces@lists.xenproject.org Mon Jul 20 11:10:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jul 2020 11:10:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxTgY-0005u2-0O; Mon, 20 Jul 2020 11:10:18 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UosC=A7=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jxTgX-0005tx-29
 for xen-devel@lists.xenproject.org; Mon, 20 Jul 2020 11:10:17 +0000
X-Inumbo-ID: 96897620-ca79-11ea-9f84-12813bfff9fa
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 96897620-ca79-11ea-9f84-12813bfff9fa;
 Mon, 20 Jul 2020 11:10:15 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595243417;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:content-transfer-encoding:in-reply-to;
 bh=0uY2iAGgQD/jTVNZ02tjyg9+PVJ6iVv1+aMXh5h7mv4=;
 b=G2oLYpaiKtgu8nq2PNqXJhIkR4SZVuinh/bqfn7hF5AR96+1VSCHeoEu
 B8fGYj5YMlBM2IZKWba5pEQldUPiqZZHtkaolMpecs35wlhnbzqYGfscW
 b15G2LQPkKfj9gvHZOyLLA5Lqon4TEvyCKMowU22LZhuDSADxuJPKMXaT I=;
Authentication-Results: esa1.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: d/9RX8s/l3kWpPI01lNkPjuqD1aCI7h9a+pqnZQ6piT5cn3wv5f75X4vtxfd2b9ILqf4TEcWTA
 GR4Cs59sV7u/5O64zLToBz8IjvueaLL7j8JL5Nvsk/lqmdnDAbhDh/uVrxZz4ZvcNjqBNvKBCX
 NkjyUruM3YGVy1QzqcxvNBk6ggOTWWRtFMGr1xJtOe3S1y0OnYif5sv9GRdpRqBAAXJQ2xZCNY
 NhyIRm1BLWJOpxioMDUn/5Gj9rhtP5KiQAdOcCpAlW07CpV5aPOXgfxbA+3t6CWzgyUWu6zkjf
 Z3c=
X-SBRS: 2.7
X-MesageID: 23066242
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,374,1589256000"; d="scan'208";a="23066242"
Date: Mon, 20 Jul 2020 13:09:50 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Oleksandr <olekstysh@gmail.com>
Subject: Re: Virtio in Xen on Arm (based on IOREQ concept)
Message-ID: <20200720110950.GJ7191@Air-de-Roger>
References: <CAPD2p-nthLq5NaU32u8pVaa-ub=a9-LOPenupntTYdS-cu31jQ@mail.gmail.com>
 <20200717150039.GV7191@Air-de-Roger>
 <8f4e0c0d-b3d4-9dd3-ce20-639539321968@gmail.com>
 <20200720091722.GF7191@Air-de-Roger>
 <be3fc8de-5582-8fd0-52cd-0cbfbfa96859@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <be3fc8de-5582-8fd0-52cd-0cbfbfa96859@gmail.com>
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien
 Grall <julien@xen.org>, Oleksandr Andrushchenko <andr2000@gmail.com>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>,
 xen-devel <xen-devel@lists.xenproject.org>,
 Artem Mygaiev <joculator@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Mon, Jul 20, 2020 at 01:56:51PM +0300, Oleksandr wrote:
> On 20.07.20 12:17, Roger Pau Monné wrote:
> > On Fri, Jul 17, 2020 at 09:34:14PM +0300, Oleksandr wrote:
> > > On 17.07.20 18:00, Roger Pau Monné wrote:
> > > > On Fri, Jul 17, 2020 at 05:11:02PM +0300, Oleksandr Tyshchenko wrote:
> > > The other reasons are:
> > > 
> > > 1. Automation. With the current backend implementation we don't need to
> > > pause the guest right after creating it, then go to the driver domain and
> > > spawn the backend, and after that go back to Dom0 and unpause the guest.
> > > 
> > > 2. Ability to detect when a guest with an involved frontend has gone away
> > > and properly release resources (guest destroy/reboot).
> > > 
> > > 3. Ability to (re)connect to a newly created guest with an involved
> > > frontend (guest create/reboot).
> > > 
> > > 4. What is more, with Xenstore support the backend is able to detect the
> > > dom_id it runs in and the guest's dom_id; there is no need to pass them
> > > via the command line.
> > > 
> > > 
> > > I will be happy to explain in detail after publishing the backend code).
> > As I'm not the one doing the work I certainly won't stop you from
> > using xenstore on the backend. I would certainly prefer if the backend
> > gets all the information it needs from the command line so that the
> > configuration data is completely agnostic to the transport layer used
> > to convey it.
> > 
> > Thanks, Roger.
> 
> Thank you for pointing out another possible way. I feel I need to
> investigate what "xl devd" (+ Argo?) is and how it works. If it is able to
> provide the backend with

That's what x86 at least uses to manage backends on driver domains: xl
devd will for example launch the QEMU instance required to handle a
Xen PV disk backend in user-space.

Note that there's currently no support for Argo or any communication
channel other than xenstore, but I think it would be cleaner to place
the fetching of data from xenstore in xl devd and just pass it as
command line arguments to the VirtIO backend if possible. I would
prefer the VirtIO backend to be fully decoupled from xenstore.

Note that for a backend running on dom0 there would be no need to
pass any data on xenstore, as the backend would be launched directly
from xl with the appropriate command line arguments.

> the support/information it needs, and xenstore is not welcome, then I would
> be absolutely OK to consider using another solution.
> 
> I propose to get back to that discussion after I prepare and send out the
> proper IOREQ series.

Sure, that's fine.

Roger.


From xen-devel-bounces@lists.xenproject.org Mon Jul 20 11:11:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jul 2020 11:11:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxThh-0005zO-BH; Mon, 20 Jul 2020 11:11:29 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UosC=A7=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jxThf-0005zJ-QJ
 for xen-devel@lists.xenproject.org; Mon, 20 Jul 2020 11:11:27 +0000
X-Inumbo-ID: c0e868ae-ca79-11ea-9f84-12813bfff9fa
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c0e868ae-ca79-11ea-9f84-12813bfff9fa;
 Mon, 20 Jul 2020 11:11:26 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595243486;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:in-reply-to;
 bh=NgdXZgD4/oOplL8ZdiaferWZLpooW7c0ZvYNTygkpLs=;
 b=Jr88qgDhwrCdhSwv+aklgCCuQXJ9My8hq/ZSlfpvNAAiNoo9iIxef7f4
 73/7LpxIpUGj7wxAsXVKDJFXCrAIJge0+hFSm6J/DULoXZpyrUjzbr2Pf
 iL3s9L+NUqsmQno97v2hmnRIj+LrSBF3yaKDn+uW+dE/qmiDQFtgAKmXE w=;
Authentication-Results: esa5.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: GhtyAOaJMikZbOLCzzJ3mcxUnkNWZya3IfjXqGviO5V9LiOT7rY98o0mJb9Yef7L2Mal1fUbGI
 fFbQDDOLpGBoMlgGpStYiCvhxgQqi2VomzYOLXBCyjGOkwdbSU3N2AnVbDB6C0kvE6Qek47HBT
 x2XBRJj0X7L0V7kOHI8O79bFpHXJB/jclVv0ukXH7chWtXZcM3noLjm/QbgsUumi3Y9h3CeX8p
 YPzH8Hemx0Qr6Skd2rF44FvH2FROENr8n+O3i6TjmzfspNmNKhyM1wbazTo/mTgCrO105iADmc
 +kg=
X-SBRS: 2.7
X-MesageID: 22935191
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,374,1589256000"; d="scan'208";a="22935191"
Date: Mon, 20 Jul 2020 13:11:18 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Subject: Re: [PATCH v3 2/2] x86: detect CMOS aliasing on ports other than
 0x70/0x71
Message-ID: <20200720111118.GK7191@Air-de-Roger>
References: <5426dd6f-50cd-dc23-5c6b-0ab631d98d38@suse.com>
 <441327cd-a7d6-8cb6-bf90-69df8e509425@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
In-Reply-To: <441327cd-a7d6-8cb6-bf90-69df8e509425@suse.com>
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Paul Durrant <paul@xen.org>, Wei Liu <wl@xen.org>, Andrew
 Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Wed, Jul 15, 2020 at 01:57:07PM +0200, Jan Beulich wrote:
> ... in order to also intercept accesses through the alias ports.
> 
> Also stop intercepting accesses to the CMOS ports if we won't ourselves
> use the CMOS RTC.

Will wait for v4 with the fixed additions to PVH dom0.

Roger.


From xen-devel-bounces@lists.xenproject.org Mon Jul 20 11:24:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jul 2020 11:24:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxTtx-0006yP-HP; Mon, 20 Jul 2020 11:24:09 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Ibwy=A7=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jxTtw-0006yK-BX
 for xen-devel@lists.xenproject.org; Mon, 20 Jul 2020 11:24:08 +0000
X-Inumbo-ID: 861d1629-ca7b-11ea-9f85-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 861d1629-ca7b-11ea-9f85-12813bfff9fa;
 Mon, 20 Jul 2020 11:24:07 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id C1930AB7A;
 Mon, 20 Jul 2020 11:24:12 +0000 (UTC)
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v3] docs: specify stability of hypfs path documentation
Date: Mon, 20 Jul 2020 13:21:37 +0200
Message-Id: <20200720112137.27327-1-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, paul@xen.org, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

In docs/misc/hypfs-paths.pandoc the supported paths in the hypervisor
file system are specified. Make it clearer that path availability
might change, e.g. due to its scope widening or narrowing (such as
being limited to a specific architecture).

Signed-off-by: Juergen Gross <jgross@suse.com>
Release-acked-by: Paul Durrant <paul@xen.org>
---
V2: reworded as requested by Jan Beulich
V3: reworded again as suggested by George Dunlap
---
 docs/misc/hypfs-paths.pandoc | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)

diff --git a/docs/misc/hypfs-paths.pandoc b/docs/misc/hypfs-paths.pandoc
index a111c6f25c..68d83d9245 100644
--- a/docs/misc/hypfs-paths.pandoc
+++ b/docs/misc/hypfs-paths.pandoc
@@ -5,6 +5,9 @@ in the Xen hypervisor file system (hypfs).
 
 The hypervisor file system can be accessed via the xenhypfs tool.
 
+The availability of the hypervisor file system depends on the hypervisor
+config option CONFIG_HYPFS, which is enabled by default.
+
 ## Notation
 
 The hypervisor file system is similar to the Linux kernel's sysfs.
@@ -64,6 +67,23 @@ the list elements separated by spaces, e.g. "dom0 PCID-on".
 The entry would be writable and it would exist on X86 only and only if the
 hypervisor is configured to support PV guests.
 
+# Stability
+
+Path *presence* is not stable, but path *meaning* is always stable: if a tool
+you write finds a path present, it can rely on its behavior in future versions
+of the hypervisor and in different configurations.  Specifically:
+
+1. Conditions under which paths are used may be extended, restricted, or
+   removed.  For example, a path that’s always available only on ARM systems
+   may become available on x86; or a path available on both systems may be
+   restricted to only appearing on ARM systems.  Paths may also disappear
+   entirely.
+2. However, the meaning of a path will never change.  If a path is present,
+   it will always have exactly the meaning that it always had.  In order to
+   maintain this, removed paths should be retained with the tag [REMOVED].
+   The path may be restored *only* if the restored version of the path is
+   compatible with the previous functionality.
+
 ## Example
 
 A populated Xen hypervisor file system might look like the following example:
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Mon Jul 20 11:26:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jul 2020 11:26:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxTwT-00075n-3p; Mon, 20 Jul 2020 11:26:45 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=JfNH=A7=arm.com=rahul.singh@srs-us1.protection.inumbo.net>)
 id 1jxTwS-00075g-3a
 for xen-devel@lists.xenproject.org; Mon, 20 Jul 2020 11:26:44 +0000
X-Inumbo-ID: e203a902-ca7b-11ea-848b-bc764e2007e4
Received: from EUR04-VI1-obe.outbound.protection.outlook.com (unknown
 [40.107.8.59]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e203a902-ca7b-11ea-848b-bc764e2007e4;
 Mon, 20 Jul 2020 11:26:41 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com; 
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=2xPcbbCTlUA8K7vBRakfxZyiXhrnH5soaXfyZJi+64k=;
 b=Anl1w0Cs7aYBJRgO7vEcTdasZGletEEAcKzE9jIhXbd21hFzlchupOhO3pgUXnBIMYBuaEMpKjrM6jPSmJfE4Giexoav59dQmE+/Appx3mKbAv+9KW35HWrSy/3RSNrN8uK7OmJGCEKqb32Xy4R6hXPiQCerTzk2Cp+R2sf6Yj0=
Received: from AM7PR03CA0018.eurprd03.prod.outlook.com (2603:10a6:20b:130::28)
 by AM5PR0801MB1713.eurprd08.prod.outlook.com (2603:10a6:203:34::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3195.18; Mon, 20 Jul
 2020 11:26:39 +0000
Received: from VE1EUR03FT051.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:130:cafe::4d) by AM7PR03CA0018.outlook.office365.com
 (2603:10a6:20b:130::28) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3195.18 via Frontend
 Transport; Mon, 20 Jul 2020 11:26:39 +0000
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org;
 dmarc=bestguesspass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT051.mail.protection.outlook.com (10.152.19.75) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3195.18 via Frontend Transport; Mon, 20 Jul 2020 11:26:38 +0000
Received: ("Tessian outbound 7de93d801f24:v62");
 Mon, 20 Jul 2020 11:26:38 +0000
X-CheckRecipientChecked: true
X-CR-MTA-CID: 726e1fb8b60ae5ac
X-CR-MTA-TID: 64aa7808
Received: from dbc2a88e0618.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 DC1CDCF4-CF91-446B-A171-D97E9786923C.1; 
 Mon, 20 Jul 2020 11:26:33 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id dbc2a88e0618.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Mon, 20 Jul 2020 11:26:33 +0000
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=UnIMQIe/2l/xgsrHS3cdunRlZH9BXUrbMGt47ZCyAamlqMaeVuwFpVYYjCSoQ0tY87vMmTqkh/aay372rM7hn+5mM0EpExhdyTVlHF9NUJGuimB/Jp8zGrFH3kYmLKe8IAuojFTaffvIBNlmGkZXiWeuZ5odI6qzkcpVOuU6pdEj2EFsPVFDl5bZt/uCo0W4wOjrj5dIOU22wKNfk4zdkgW9XNt2NOg1JRAS9bb5By69tNWDn/JTvIENe8fkXDMz/QWbBzXULJNprEHIJdEooZs1m6IdsAFM49IOclgIde7YN5IF3OS9Y227qnNc0RUIUn6Z5eNQUwnTr9xUbMTi5g==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; 
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=2xPcbbCTlUA8K7vBRakfxZyiXhrnH5soaXfyZJi+64k=;
 b=LQ/9y6kd4PJonS02w60bep+d6YzEHVvf7lm30rCJI2ILCVAyrrt+QEbJg8JW5n0PT0I7BD6oFJEgY+D0ttddc3xsEOOwywInT8tqsQbuS7chz++2fTjOLFBReweKQDeLmUn9DsgsUGKTB24EShI/eHSNxT4FEw7ZuYooRzDGXIfePfEgndizgJhxXd17anyFJY0/rvJiqrsZD9I4/onZNzm5x4kd94NtoWpr3G/Jtf8JDfrUp9gpPwn6IU1dR1UuvhFhSDmo7mnAV9chHIoyaYOP0dnv1watR41+/loy0oi5zV2PwQmMcNinJfaxsYZww3uRLC1GBP7mp7Uy9ekihw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com; 
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=2xPcbbCTlUA8K7vBRakfxZyiXhrnH5soaXfyZJi+64k=;
 b=Anl1w0Cs7aYBJRgO7vEcTdasZGletEEAcKzE9jIhXbd21hFzlchupOhO3pgUXnBIMYBuaEMpKjrM6jPSmJfE4Giexoav59dQmE+/Appx3mKbAv+9KW35HWrSy/3RSNrN8uK7OmJGCEKqb32Xy4R6hXPiQCerTzk2Cp+R2sf6Yj0=
Received: from AM6PR08MB3560.eurprd08.prod.outlook.com (2603:10a6:20b:4c::19)
 by AM6PR08MB4086.eurprd08.prod.outlook.com (2603:10a6:20b:a8::32)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3195.23; Mon, 20 Jul
 2020 11:26:32 +0000
Received: from AM6PR08MB3560.eurprd08.prod.outlook.com
 ([fe80::e891:3b33:766:cad5]) by AM6PR08MB3560.eurprd08.prod.outlook.com
 ([fe80::e891:3b33:766:cad5%6]) with mapi id 15.20.3195.025; Mon, 20 Jul 2020
 11:26:32 +0000
From: Rahul Singh <Rahul.Singh@arm.com>
To: Julien Grall <julien@xen.org>
Subject: Re: PCI devices passthrough on Arm design proposal
Thread-Topic: PCI devices passthrough on Arm design proposal
Thread-Index: AQHWW4kYTVU0hTDyYEitKlUuU5vZlKkKf2uAgAFEeACAAAehAIAAD+CAgAAK1YCAAAXWgIABRE+AgAMpy4A=
Date: Mon, 20 Jul 2020 11:26:31 +0000
Message-ID: <9C341B21-6D11-428D-9C73-272DB8674998@arm.com>
References: <3F6E40FB-79C5-4AE8-81CA-E16CA37BB298@arm.com>
 <BD475825-10F6-4538-8294-931E370A602C@arm.com>
 <8ac91a1b-e6b3-0f2b-0f23-d7aff100936d@xen.org>
 <c7d5a084-8111-9f43-57e1-bcf2bd822f5b@xen.org>
 <865D5A77-85D4-4A88-A228-DDB70BDB3691@arm.com>
 <972c0c81-6595-7c41-baa5-8882f5d1c0ff@xen.org>
 <4E6B793C-2E0A-4999-9842-24CDCDE43903@arm.com>
 <22df2406-c4d3-1d06-0736-51ebea5581ea@xen.org>
In-Reply-To: <22df2406-c4d3-1d06-0736-51ebea5581ea@xen.org>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
Authentication-Results-Original: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [86.26.38.125]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 49a45f0a-ced2-4ad3-cb12-08d82c9fc4f3
x-ms-traffictypediagnostic: AM6PR08MB4086:|AM5PR0801MB1713:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS: <AM5PR0801MB1713C9BFF79FF374C98F4204FC7B0@AM5PR0801MB1713.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:10000;OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original: 5x3BZyGQ7236Dd0BxOGq0EY6vdPoCX1BrX3CB7gVu8cpRCDKwuzpVa7UaVLLphudc+oJ00ojxYzWdWZTWUE8qibiPEZ4bdelbr9X0StdL3A8O3kWOsPGQzHgigseFTMrXZMEH3i/fAGTiArHXT2cXoqjahXnqUHxxiL40hKK6ravZuMHFK1/tMYlsAVCpOcJMBJ+Rjhih9hptaR6r/rh0nVcrHADsno3nyUFfPfCrxbQEp/MMV75tK3dqdyFHttyp5/hCWmnIwariurjJp9bbw5OXZMaoIVeFn/6yTxZI1mrJabGiRqJioJdlFyKkTmAD6APiZnW2H5F5o4yRINrYg==
X-Forefront-Antispam-Report-Untrusted: CIP:255.255.255.255; CTRY:; LANG:en;
 SCL:1; SRV:; IPV:NLI; SFV:NSPM; H:AM6PR08MB3560.eurprd08.prod.outlook.com;
 PTR:; CAT:NONE; SFTY:;
 SFS:(4636009)(39860400002)(376002)(346002)(136003)(396003)(366004)(66446008)(66556008)(8936002)(66476007)(76116006)(64756008)(2906002)(5660300002)(33656002)(91956017)(4326008)(66946007)(186003)(316002)(6916009)(36756003)(71200400001)(26005)(2616005)(83380400001)(6506007)(53546011)(55236004)(8676002)(54906003)(6512007)(6486002)(478600001)(86362001);
 DIR:OUT; SFP:1101; 
x-ms-exchange-antispam-messagedata: Lzpio+/VZOdqJh19Z59o78424JzbrgKgvQsq+VfmRIO7tBcnpzI5xvTwqj73LNQ9PHmt7SLol/ElaPsuYRzXf7GvB9iQGHaKWT2k1mGFeVhh1otz5fCiz5T2cxBjEPpLgpvtCBMzsKrfXgJ49/ARVCsujqrk/ga80RObqGj5Yo8b5jpqL1x31gO4wlQm8jTLDJw1GESd4RT2tH3jOrrhdoIu94DQzAPCXC1QOfeVpTOpqLGtyKoj1QQmTjfk1T0jPaEv8y8Htf/PBG6IpKB8swcK3SWT2vhpa5/Cyyykg4vKxcPiZl99YTYOtX+NaesUOD8B3tCdA02bUoE4gxexlAMfIpe6zF2yOMX77TySWJTmu1Aeod2oaBVy86hyWKcr/dUFCu6j8x54ljfRfaDZgg569stnCz1fKPLOEvCu9WeN+lTf+1tPralrevt2Ki2Ime6/1wP+NrT4LO0tYq3YUx4Hh0giGNMn8Shp9WrLAEw=
Content-Type: text/plain; charset="utf-8"
Content-ID: <CCFBDAA2CE113C42B527D65BC38FFE12@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM6PR08MB4086
Original-Authentication-Results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped: VE1EUR03FT051.eop-EUR03.prod.protection.outlook.com
X-Forefront-Antispam-Report: CIP:63.35.35.123; CTRY:IE; LANG:en; SCL:1; SRV:;
 IPV:CAL; SFV:NSPM; H:64aa7808-outbound-1.mta.getcheckrecipient.com;
 PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com; CAT:NONE; SFTY:;
 SFS:(4636009)(376002)(346002)(39860400002)(396003)(136003)(46966005)(26005)(33656002)(8936002)(6486002)(70586007)(70206006)(316002)(107886003)(356005)(6512007)(6862004)(2616005)(8676002)(54906003)(82310400002)(82740400003)(83380400001)(4326008)(36906005)(5660300002)(86362001)(6506007)(47076004)(53546011)(336012)(186003)(36756003)(2906002)(81166007)(478600001);
 DIR:OUT; SFP:1101; 
X-MS-Office365-Filtering-Correlation-Id-Prvs: e7d5edda-c3bf-4a7f-7d28-08d82c9fc0e7
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: aUfSHRa2a2iaq0imulGpdRuzih3WKiAgFAkaicUq8LjEsYKZDc3rdyrppYCpaIYL5Y98vgnlMLDaSPYxvzDoZEQFcRo8kxDHoL6JKYKkMft7mibH65ivD+PLCoNrpt5Fok5ySHTnM5Z+VFOE9GwghT3gspT/j7xZjZ9SQo68MmgBSI/IewR3CTKrPuQSORxDl0TeS8iGNTd6cSokku16L0YC4tMxzBJMeGVUuigYAJidUFYKQ9iadjVvnjO8iUC/eYACavLQTCr72jLqdZO/2oYb7S9lLO5hoqYcRXT/9Hpb3oOR4DXxaojfyGy2a9cwMdlRtGu25ropEDg859HippDykUb0/EwxHWu1Q9w8ridjrmt9a/igqqZWYhgFf0sY3D6iFxAVXbiHU6pVSrR0RQ==
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 20 Jul 2020 11:26:38.7535 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 49a45f0a-ced2-4ad3-cb12-08d82c9fc4f3
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d; Ip=[63.35.35.123];
 Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource: VE1EUR03FT051.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM5PR0801MB1713
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien.grall.oss@gmail.com>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 nd <nd@arm.com>, =?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

DQoNCj4gT24gMTggSnVsIDIwMjAsIGF0IDEyOjA4IHBtLCBKdWxpZW4gR3JhbGwgPGp1bGllbkB4
ZW4ub3JnPiB3cm90ZToNCj4gDQo+IEhpLA0KPiANCj4gT24gMTcvMDcvMjAyMCAxNjo0NywgQmVy
dHJhbmQgTWFycXVpcyB3cm90ZToNCj4+PiBPbiAxNyBKdWwgMjAyMCwgYXQgMTc6MjYsIEp1bGll
biBHcmFsbCA8anVsaWVuQHhlbi5vcmc+IHdyb3RlOg0KPj4+IE9uIDE3LzA3LzIwMjAgMTU6NDcs
IEJlcnRyYW5kIE1hcnF1aXMgd3JvdGU6DQo+Pj4+Pj4+ICAgICBwY2k9WyAiUENJX1NQRUNfU1RS
SU5HIiwgIlBDSV9TUEVDX1NUUklORyIsIC4uLl0NCj4+Pj4+Pj4gDQo+Pj4+Pj4+IEd1ZXN0IHdp
bGwgYmUgb25seSBhYmxlIHRvIGFjY2VzcyB0aGUgYXNzaWduZWQgZGV2aWNlcyBhbmQgc2VlIHRo
ZSBicmlkZ2VzLiBHdWVzdCB3aWxsIG5vdCBiZSBhYmxlIHRvIGFjY2VzcyBvciBzZWUgdGhlIGRl
dmljZXMgdGhhdCBhcmUgbm8gYXNzaWduZWQgdG8gaGltLg0KPj4+Pj4+PiANCj4+Pj4+Pj4gTGlt
aXRhdGlvbjoNCj4+Pj4+Pj4gKiBBcyBvZiBub3cgYWxsIHRoZSBicmlkZ2VzIGluIHRoZSBQQ0kg
YnVzIGFyZSBzZWVuIGJ5IHRoZSBndWVzdCBvbiB0aGUgVlBDSSBidXMuDQo+Pj4+Pj4gV2h5IGRv
IHlvdSB3YW50IHRvIGV4cG9zZSBhbGwgdGhlIGJyaWRnZXMgdG8gYSBndWVzdD8gRG9lcyB0aGlz
IG1lYW4gdGhhdCB0aGUgQkRGIHNob3VsZCBhbHdheXMgbWF0Y2ggYmV0d2VlbiB0aGUgaG9zdCBh
bmQgdGhlIGd1ZXN0Pw0KPj4+PiBUaGF04oCZcyBub3QgcmVhbGx5IHNvbWV0aGluZyB0aGF0IHdl
IHdhbnRlZCBidXQgdGhpcyB3YXMgdGhlIGVhc2llc3Qgd2F5IHRvIGdvLg0KPj4+PiBBcyBzYWlk
IGluIGEgcHJldmlvdXMgbWFpbCB3ZSBjb3VsZCBidWlsZCBhIFZQQ0kgYnVzIHdpdGggYSBjb21w
bGV0ZWx5IGRpZmZlcmVudCB0b3BvbG9neSBidXQgSSBhbSBub3Qgc3VyZSBvZiB0aGUgYWR2YW50
YWdlcyB0aGlzIHdvdWxkIGhhdmUuDQo+Pj4+IERvIHlvdSBzZWUgc29tZSByZWFzb24gdG8gZG8g
dGhpcyA/DQo+Pj4gDQo+Pj4gWWVzIDopOg0KPj4+ICAxKSBJZiBhIHBsYXRmb3JtIGhhcyB0d28g
aG9zdCBjb250cm9sbGVycyAoSUlSQyBUaHVuZGVyLVggaGFzIGl0KSB0aGVuIHlvdSB3b3VsZCBu
ZWVkIHRvIGV4cG9zZSB0d28gaG9zdCBjb250cm9sbGVycyB0byB5b3VyIGd1ZXN0LiBJIHRoaW5r
IHRoaXMgaXMgdW5kZXNpcmFibGUgaWYgeW91ciBndWVzdCBpcyBvbmx5IHVzaW5nIGEgY291cGxl
IG9mIFBDSSBkZXZpY2VzIG9uIGVhY2ggaG9zdCBjb250cm9sbGVycy4NCj4+PiAgMikgSW4gdGhl
IGNhc2Ugb2YgbWlncmF0aW9uIChsaXZlIG9yIG5vdCksIHlvdSBtYXkgd2FudCB0byB1c2UgYSBk
aWZmZXJlbmNlIFBDSSBjYXJkIG9uIHRoZSB0YXJnZXQgcGxhdGZvcm0uIFNvIHlvdXIgQkRGIGFu
ZCBicmlkZ2VzIG1heSBiZSBkaWZmZXJlbnQuDQo+Pj4gDQo+Pj4gVGhlcmVmb3JlIEkgdGhpbmsg
dGhlIHZpcnR1YWwgdG9wb2xvZ3kgY2FuIGJlIGJlbmVmaWNpYWwuDQo+PiBJIHdvdWxkIHNlZSBh
IGJpZyBhZHZhbnRhZ2UgZGVmaW5pdGVseSB0byBoYXZlIG9ubHkgb25lIFZQQ0kgYnVzIHBlciBn
dWVzdCBhbmQgcHV0IGFsbCBkZXZpY2VzIGluIHRoZWlyIGluZGVwZW5kZW50bHkgb2YgdGhlIGhh
cmR3YXJlIGRvbWFpbiB0aGUgZGV2aWNlIGlzIG9uLg0KPj4gQnV0IHRoaXMgd2lsbCBwcm9iYWJs
eSBtYWtlIHRoZSBWUENJIEJBUnMgdmFsdWUgY29tcHV0YXRpb24gYSBiaXQgbW9yZSBjb21wbGV4
IGFzIHdlIG1pZ2h0IGVuZCB1cCB3aXRoIG5vIHNwYWNlIG9uIHRoZSBndWVzdCBwaHlzaWNhbCBt
YXAgZm9yIGl0Lg0KPj4gVGhpcyBtaWdodCBtYWtlIHRoZSBpbXBsZW1lbnRhdGlvbiBhIGxvdCBt
b3JlIGNvbXBsZXguDQo+IA0KPiBJIGFtIG5vdCBzdXJlIHRvIHVuZGVyc3RhbmQgeW91ciBhcmd1
bWVudCBhYm91dCB0aGUgc3BhY2UuLi4gWW91IHNob3VsZCBiZSBhYmxlIHRvIGZpbmQgb3V0IHRo
ZSBzaXplIG9mIGVhY2ggQkFScywgc28geW91IGNhbiBzaXplIHRoZSBNTUlPIHdpbmRvdyBjb3Jy
ZWN0bHkuIFRoaXMgc2hvdWxkbid0IGFkZCBhIGxvdCBvZiBjb21wbGV4aXR5Lg0KPiANCj4gSSBh
bSBub3QgYXNraW5nIGFueSBpbXBsZW1lbnRhdGlvbiBmb3IgdGhpcywgYnV0IHdlIG5lZWQgdG8g
bWFrZSBzdXJlIHRoZSBkZXNpZ24gY2FuIGVhc2lseSBiZSBleHRlbmRlZCBmb3Igb3RoZXIgdXNl
IGNhc2VzLiBJbiB0aGUgY2FzZSBvZiBzZXJ2ZXIsIHdlIHdpbGwgbGlrZWx5IHdhbnQgdG8gZXhw
b3NlIGEgc2luZ2xlIHZQQ0kgdG8gdGhlIGd1ZXN0Lg0KDQpUaGlzIGlzIHNvbWV0aGluZyB3ZSBo
YXZlIHRvIHdvcmsgb24gaG93IHRvIGltcGxlbWVudCB0aGUgdmlydHVhbCB0b3BvbG9neSBmb3Ig
dGhlIGd1ZXN0LiANCg0KPiANCj4+PiANCj4+Pj4+PiAgICAtIElzIHRoZXJlIGFueSBtZW1vcnkg
YWNjZXNzIHRoYXQgY2FuIGJ5cGFzc2VkIHRoZSBJT01NVSAoZS5nIGRvb3JiZWxsKT8NCj4+Pj4g
VGhpcyBpcyBzdGlsbCBzb21ldGhpbmcgdG8gYmUgaW52ZXN0aWdhdGVkIGFzIHBhcnQgb2YgdGhl
IE1TSSBpbXBsZW1lbnRhdGlvbi4NCj4+Pj4gSWYgeW91IGhhdmUgYW55IGlkZWEgaGVyZSwgZmVl
bCBmcmVlIHRvIHRlbGwgdXMuDQo+Pj4gDQo+Pj4gTXkgbWVtb3J5IGlzIGEgYml0IGZ1enp5IGhl
cmUuIEkgYW0gc3VyZSB0aGF0IHRoZSBkb29yYmVsbCBjYW4gYnlwYXNzIHRoZSBJT01NVSBvbiBz
b21lIHBsYXRmb3JtLCBidXQgSSBhbHNvIHZhZ3VlbHkgcmVtZW1iZXIgdGhhdCBhY2Nlc3NlcyB0
byB0aGUgUENJIGhvc3QgY29udHJvbGxlciBtZW1vcnkgd2luZG93IG1heSBhbHNvIGJ5cGFzcyB0
aGUgSU9NTVUuIEEgZ29vZCByZWFkaW5nIG1pZ2h0IGJlIFsyXS4NCj4+PiANCj4+PiBJSVJDLCBJ
IGNhbWUgdG8gdGhlIGNvbmNsdXNpb24gdGhhdCB3ZSBtYXkgd2FudCB0byB1c2UgdGhlIGhvc3Qg
bWVtb3J5IG1hcCBpbiB0aGUgZ3Vlc3Qgd2hlbiB1c2luZyB0aGUgUENJIHBhc3N0aHJvdWdoLiBC
dXQgbWF5YmUgbm90IG9uIGFsbCB0aGUgcGxhdGZvcm1zLg0KPj4gRGVmaW5pdGVseSBhIGxvdCBv
ZiB0aGlzIHdvdWxkIGJlIGVhc2llciBpZiBjb3VsZCB1c2UgMToxIG1hcHBpbmcuDQo+PiBXZSB3
aWxsIGtlZXAgdGhhdCBpbiBtaW5kIHdoZW4gd2Ugd2lsbCBzdGFydCB0byBpbnZlc3RpZ2F0ZSBv
biB0aGUgTVNJIHBhcnQuDQo+IA0KPiBIbW1tLi4uIE1heWJlIEkgd2Fzbid0IGNsZWFyIGVub3Vn
aCBidXQgdGhlIHByb2JsZW0gaXMgbm90IG9ubHkgaGFwcGVuaW5nIHdpdGggTVNJcyBkb29yYmVs
bHMuIEl0IGlzIGFsc28gd2l0aCB0aGUgUDJQIHRyYW5zYWN0aW9ucy4NCj4gDQo+IEFnYWluLCBJ
IGFtIG5vdCBhc2tpbmcgdG8gaW1wbGVtZW50IGl0IGF0IHRoZSBiZWdpbm5pbmcuIEhvd2V2ZXIs
IGl0IHdvdWxkIGJlIGdvb2QgdG8gb3V0bGluZSB0aGUgcG90ZW50aWFsIGxpbWl0YXRpb25zIG9m
IHRoZSBhcHByb2FjaCBpbiB5b3VyIGRlc2lnbi4NCg0KQXMgQmVydHJhbmQgbWVudGlvbiBvbmNl
IHdlIHN0YXJ0IGludmVzdGlnYXRpbmcgb24gdGhlIE1TSSBzdXBwb3J0IHdlIHdpbGwgaGF2ZSB0
aGlzIGluIG1pbmQgdG8gcHJvY2VlZC4NCj4gDQo+IENoZWVycywNCj4gDQo+IA0KPiAtLSANCj4g
SnVsaWVuIEdyYWxsDQoNCg==


From xen-devel-bounces@lists.xenproject.org Mon Jul 20 11:27:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jul 2020 11:27:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxTwp-00077n-CZ; Mon, 20 Jul 2020 11:27:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=bPHy=A7=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1jxTwn-00077Z-Qn
 for xen-devel@lists.xenproject.org; Mon, 20 Jul 2020 11:27:05 +0000
X-Inumbo-ID: eff20e78-ca7b-11ea-848b-bc764e2007e4
Received: from mail-lf1-x129.google.com (unknown [2a00:1450:4864:20::129])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id eff20e78-ca7b-11ea-848b-bc764e2007e4;
 Mon, 20 Jul 2020 11:27:05 +0000 (UTC)
Received: by mail-lf1-x129.google.com with SMTP id y18so9477366lfh.11
 for <xen-devel@lists.xenproject.org>; Mon, 20 Jul 2020 04:27:04 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=subject:to:cc:references:from:message-id:date:user-agent
 :mime-version:in-reply-to:content-transfer-encoding:content-language;
 bh=Q33zzfdqXxr2MgVpFLd414bKy58NQoCmmszfR9Nkzlg=;
 b=cm+/09mD19sinas2m0E5kWIQmbgktijwG6O445gAOOZuiqZ7BIIocqnX1ym9/AGj/R
 iuqnt4bs2t2HvsPxv3jZ/r6b71pSnwoTr+lNVHcXvJTF+PH/G32dboT/hAbz1h+BqYJc
 UAAf/q/H/0f9aXfJYGj/d9VMCfw1v1x9gb1blVgpyFVJAGdPBJzDmwRl1aRfXxOwPHQf
 j0ZgRpLXaOhvk8y3NPv35YeLsl7chXVaTsCaJnnNWWRHGmYIgO+r7lN7dbth+BytIEuE
 vf+pPa8qRrn3NvC0MwdoesHL4thUa/ajteiF+0eLXjOqDj39F181jR4A0Hmqo6iEuNli
 ejHQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:subject:to:cc:references:from:message-id:date
 :user-agent:mime-version:in-reply-to:content-transfer-encoding
 :content-language;
 bh=Q33zzfdqXxr2MgVpFLd414bKy58NQoCmmszfR9Nkzlg=;
 b=FW7gqqHl0WpomgoL7+bMKJEcU9qDh9Pp2rqKhBmB6gf80foVfXwWvAc3Znn7bn2gRF
 WbYpbMgoizjr/sNyHPr7RTU/tsb/jJR1dKJkeAWvlbH/4Mh/a4ShIyF5R4ifiUMNAv7J
 JKgCWK9+deE8CFZSRNOXSHfC425niyeC7dmX/C68Tpn/Ygw3KaPqeTETN58Y6wlGfEey
 q2SMDAJen7Jc5kctTvlHtOktEOkSyEZXI+SqzZ7TgOcGnhvRxF7MIzZcxjU59uuOjhw+
 kcZWACi21CcQv13o3AA82rCfZpD4gFUAbAzWekqFU4jnonVXMhHItIuQKAiWNeJGJqIv
 X+rA==
X-Gm-Message-State: AOAM531tJZKpGKEFBe9dVmKLcC3p25PghPhvTWAOqOeVDtXNmReCbIS3
 jMSPDbJoEuFxHdSkBrMxlgA=
X-Google-Smtp-Source: ABdhPJwes+yMzBs3u5ZxjgSvho9kpfiQM3alPMDR05EtPhob0gU4tGvl1hWgU9gsr+Id7s6ibxsUxQ==
X-Received: by 2002:ac2:5691:: with SMTP id 17mr10870019lfr.209.1595244423749; 
 Mon, 20 Jul 2020 04:27:03 -0700 (PDT)
Received: from [192.168.1.2] ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id s28sm3196852ljm.24.2020.07.20.04.27.02
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 20 Jul 2020 04:27:03 -0700 (PDT)
Subject: Re: RFC: PCI devices passthrough on Arm design proposal
To: Julien Grall <julien@xen.org>
References: <1dd5db2d-98c7-7738-c3d4-d3f098dfe674@suse.com>
 <F09F9354-EC9B-4D76-809B-A25AF4F7D863@arm.com>
 <a5007a6c-bdfe-04d4-8107-53cb222b95e8@suse.com>
 <DA19A9EC-A828-4EBC-BCAA-D1D9E4F222BB@arm.com>
 <20200717144139.GU7191@Air-de-Roger>
 <90AE8DAB-2223-46DC-A263-D78365E5435E@arm.com>
 <20200717150507.GW7191@Air-de-Roger>
 <FBE040A9-D088-43D6-8929-FFEDE9DDDE34@arm.com>
 <20200717153043.GX7191@Air-de-Roger>
 <C5B2BDD5-E504-4871-8542-5BA8C051F699@arm.com>
 <20200717160834.GA7191@Air-de-Roger>
 <0c76b6a0-2242-3bbd-9740-75c5580e93e8@xen.org>
 <1dea1217-f884-0fe1-d339-95c5b473ae23@gmail.com>
 <2fd6c418-db41-8070-5644-344fefd8128d@xen.org>
From: Oleksandr <olekstysh@gmail.com>
Message-ID: <a9951d1d-d858-de65-284f-3b604ec102e1@gmail.com>
Date: Mon, 20 Jul 2020 14:27:02 +0300
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <2fd6c418-db41-8070-5644-344fefd8128d@xen.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit
Content-Language: en-US
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: nd <nd@arm.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien.grall.oss@gmail.com>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>, Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Rahul Singh <Rahul.Singh@arm.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>


On 18.07.20 14:24, Julien Grall wrote:

Hello Julien

>
>
> On 17/07/2020 20:17, Oleksandr wrote:
>> I would like to clarify which IOMMU driver changes should be done to 
>> support PCI pass-through properly.
>>
>> The design document mentions SMMU, but Xen also supports IPMMU-VMSA 
>> (currently in tech preview). It would be really nice if the required 
>> support were extended to that kind of IOMMU as well.
>>
>> May I clarify what should be implemented in the Xen driver in order 
>> to support the PCI pass-through feature on Arm? 
>
> I would expect callbacks to:
>     - add a PCI device
>     - remove a PCI device
>     - assign a PCI device
>     - deassign a PCI device
>
> AFAICT, they are already existing. So it is a matter of plumbing. This 
> would then be up to the driver to configure the IOMMU correctly.


Got it.
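The four callbacks could be gathered into a per-driver ops table, as Xen does for other driver interfaces. A hypothetical sketch, where the type and field names are illustrative and do not reflect the real `iommu_ops` layout:

```c
/* Hypothetical sketch of the four callbacks above as a per-driver ops
 * table; names are illustrative stand-ins, not actual Xen types. */
#include <assert.h>

typedef unsigned short domid_sketch_t;      /* stand-in for domid_t */

struct pci_dev_sketch {
    unsigned int seg, bus, devfn;
};

struct pci_iommu_ops_sketch {
    int (*add_device)(const struct pci_dev_sketch *pdev);
    int (*remove_device)(const struct pci_dev_sketch *pdev);
    int (*assign_device)(domid_sketch_t d, const struct pci_dev_sketch *pdev);
    int (*deassign_device)(domid_sketch_t d, const struct pci_dev_sketch *pdev);
};

/* A driver (e.g. the IPMMU-VMSA one) would fill the table with its own
 * handlers that program the IOMMU; these stubs just succeed. */
static int stub_dev_ok(const struct pci_dev_sketch *pdev)
{ (void)pdev; return 0; }
static int stub_dom_ok(domid_sketch_t d, const struct pci_dev_sketch *pdev)
{ (void)d; (void)pdev; return 0; }

static const struct pci_iommu_ops_sketch ipmmu_ops_sketch = {
    .add_device      = stub_dev_ok,
    .remove_device   = stub_dev_ok,
    .assign_device   = stub_dom_ok,
    .deassign_device = stub_dom_ok,
};
```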

>
>> Should the IOMMU H/W be "PCI-aware" for that purpose?
>
> The only requirement is that your PCI devices are behind an IOMMU :). 
> Other than that the IOMMU can mostly be configured the same way as you 
> would do for the non-PCI devices. 

That's good.


> The main difference would be how you find the master ID.
>
> I am aware that on some platforms, the masterID may be shared between 
> multiple PCI devices. In that case, we would need to have a way to 
> assign all the devices to the same guest (maybe using group?).

Or just prevent these devices from being assigned to different guests? 
When assigning a device to a newly created guest, check whether its 
masterID is already in use by any existing guest, and deny the 
operation in such a case.
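The denial check proposed here can be sketched as follows; all names are hypothetical, and a real implementation would hook into the device-assignment path:

```c
/* Hypothetical sketch: before assigning a device, make sure its
 * (possibly shared) master ID is not already owned by another guest. */
#include <assert.h>

#define INVALID_DOMID_SKETCH 0xffffU

struct master_id_owner {
    unsigned int master_id;
    unsigned int domid;          /* INVALID_DOMID_SKETCH if unowned */
};

/* Return 1 if assigning master_id to domid is allowed, 0 to deny. */
static int can_assign(const struct master_id_owner *table, unsigned int n,
                      unsigned int master_id, unsigned int domid)
{
    for ( unsigned int i = 0; i < n; i++ )
        if ( table[i].master_id == master_id &&
             table[i].domid != INVALID_DOMID_SKETCH &&
             table[i].domid != domid )
            return 0;            /* owned by a different guest: deny */
    return 1;
}
```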


-- 
Regards,

Oleksandr Tyshchenko



From xen-devel-bounces@lists.xenproject.org Mon Jul 20 11:29:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jul 2020 11:29:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxTyg-0007KF-Pp; Mon, 20 Jul 2020 11:29:02 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=WIjz=A7=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jxTyf-0007K7-0O
 for xen-devel@lists.xenproject.org; Mon, 20 Jul 2020 11:29:01 +0000
X-Inumbo-ID: 34bdd1a4-ca7c-11ea-848b-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 34bdd1a4-ca7c-11ea-848b-bc764e2007e4;
 Mon, 20 Jul 2020 11:29:00 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id A90EFAB7A;
 Mon, 20 Jul 2020 11:29:05 +0000 (UTC)
Subject: Re: [PATCH 4/8] Arm: prune #include-s needed by domain.h
To: Julien Grall <julien@xen.org>
References: <3375cacd-d3b7-9f06-44a7-4b684b6a77d6@suse.com>
 <150525bb-1c48-c331-3212-eff18bc4c13d@suse.com>
 <d836dc7f-017b-5048-02de-d1cb291fbc3b@xen.org>
 <931149db-2daf-6d72-0330-c938b5084eb6@suse.com>
 <2cc66fdb-1da2-16cd-717a-3248d136821c@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <66a90945-0d3e-beee-4128-bfc3a06a7cf2@suse.com>
Date: Mon, 20 Jul 2020 13:28:58 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <2cc66fdb-1da2-16cd-717a-3248d136821c@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 20.07.2020 11:09, Julien Grall wrote:
> 
> 
> On 20/07/2020 09:17, Jan Beulich wrote:
>> On 17.07.2020 16:44, Julien Grall wrote:
>>> On 15/07/2020 11:39, Jan Beulich wrote:
>>>> --- a/xen/include/asm-arm/domain.h
>>>> +++ b/xen/include/asm-arm/domain.h
>>>> @@ -2,7 +2,7 @@
>>>>    #define __ASM_DOMAIN_H__
>>>>    
>>>>    #include <xen/cache.h>
>>>> -#include <xen/sched.h>
>>>> +#include <xen/timer.h>
>>>>    #include <asm/page.h>
>>>>    #include <asm/p2m.h>
>>>>    #include <asm/vfp.h>
>>>> @@ -11,8 +11,6 @@
>>>>    #include <asm/vgic.h>
>>>>    #include <asm/vpl011.h>
>>>>    #include <public/hvm/params.h>
>>>> -#include <xen/serial.h>
>>>
>>> While we don't need the rbtree.h, we technically need serial.h for using
>>> vuart_info.
>>
>> The only reference to it is
>>
>>          const struct vuart_info     *info;
>>
>> which doesn't require a definition nor even a forward declaration
>> of struct vuart_info. It should just be source files instantiating
>> a struct or de-referencing pointers to one that actually need to
>> see the full declaration. 
> 
> Ah yes. I got confused because you introduced a forward declaration of 
> struct vcpu. But this is because you need it to declare the function 
> prototype.

As a result - are you happy for the change to go in with Stefano's
ack then?

>> The only source file doing so (vuart.c)
>> already includes xen/serial.h. (In fact, it being just a single
>> source file doing so, the struct definition could [and imo should]
>> be moved there. The type can be entirely opaque to the rest of the
>> code base, as - over time - we've been doing for other structs.)
> 
> There are definitely more uses of vuart_info within the code base. All 
> the UART drivers on Arm will fill in the structure (see drivers/char/pl011.c) 
> for instance.
> 
> So the definition is in the correct place.

Hmm, I will admit I judged from the uses of ->arch.vuart.info alone.

Jan
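The incomplete-type point discussed above can be sketched outside Xen; the structure names below mirror, but heavily simplify, the real ones:

```c
/* Sketch: a header that only stores a pointer can get by with an
 * incomplete ("opaque") struct type; only code that dereferences the
 * pointer needs the full definition. Simplified stand-ins for the
 * real Xen structures. */
#include <assert.h>
#include <stddef.h>

/* domain.h-style header: a forward declaration is sufficient. */
struct vuart_info;
struct arch_domain_sketch {
    const struct vuart_info *info;   /* pointer to incomplete type: OK */
};

/* vuart.c-style user: the one place needing the full definition
 * (normally obtained by including xen/serial.h). */
struct vuart_info {
    unsigned long base_addr;
};

static unsigned long vuart_base(const struct arch_domain_sketch *d)
{
    return d->info ? d->info->base_addr : 0;
}
```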


From xen-devel-bounces@lists.xenproject.org Mon Jul 20 11:32:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jul 2020 11:32:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxU1r-00087A-9v; Mon, 20 Jul 2020 11:32:19 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=JfNH=A7=arm.com=rahul.singh@srs-us1.protection.inumbo.net>)
 id 1jxU1p-000875-Kx
 for xen-devel@lists.xenproject.org; Mon, 20 Jul 2020 11:32:17 +0000
X-Inumbo-ID: a8d8e66f-ca7c-11ea-9f89-12813bfff9fa
Received: from EUR05-DB8-obe.outbound.protection.outlook.com (unknown
 [40.107.20.84]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a8d8e66f-ca7c-11ea-9f89-12813bfff9fa;
 Mon, 20 Jul 2020 11:32:15 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com; 
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=vtK0mFLmn1WxwOxjjXutOXLAiAPq+X+utAToKjDl/eY=;
 b=k24rIdUTc7YmcsPY7KqmDDM/ewXK9i9bHD18yL3Z2Bhyga8INi23W9ROZbPz8HolqbK1Joz5vgYir+3Jt+3s5a1nxuqYF5sMBK2kHwzi5wlnF7wUF47ebd3ia2cNnYErx8h3ZPx8JhsbI9/2uTiXbLQZHJ/4/swqEldUqM4vyus=
Received: from DB6PR0801CA0048.eurprd08.prod.outlook.com (2603:10a6:4:2b::16)
 by VI1PR08MB4046.eurprd08.prod.outlook.com (2603:10a6:803:e4::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3195.18; Mon, 20 Jul
 2020 11:32:12 +0000
Received: from DB5EUR03FT022.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:4:2b:cafe::30) by DB6PR0801CA0048.outlook.office365.com
 (2603:10a6:4:2b::16) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3195.17 via Frontend
 Transport; Mon, 20 Jul 2020 11:32:12 +0000
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org;
 dmarc=bestguesspass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DB5EUR03FT022.mail.protection.outlook.com (10.152.20.171) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3195.18 via Frontend Transport; Mon, 20 Jul 2020 11:32:12 +0000
Received: ("Tessian outbound 2ae7cfbcc26c:v62");
 Mon, 20 Jul 2020 11:32:12 +0000
X-CheckRecipientChecked: true
X-CR-MTA-CID: 011a4ef4075f4727
X-CR-MTA-TID: 64aa7808
Received: from 7069426d3480.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 073052BB-6CCA-4126-8E75-6F4E0B1B887A.1; 
 Mon, 20 Jul 2020 11:32:07 +0000
Received: from EUR01-VE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 7069426d3480.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Mon, 20 Jul 2020 11:32:07 +0000
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=UbOb7/vj4e941f2Nmt4U5g2hDae9v596cJwuWvQUeRHaGLx5APFrIkAYRzMrsT15W/Ozw3todP+puKsSwyQ5P3Vo8uMpWUTnCZZCxdbFNuTCnQtEzgUeJzAH/vwAYy4jCqyEV1yJ0pa3brmTRTNo0vZQzTCiwivXZ8odzBL7PBmT14Q9jfXyo6sR06UL3m4vnFJa5Lrf+zzJzMGIXymTDtnt2qdUfVi6o/xkxUtx2WViDcEeOT7/1fpY6TyeLaJIp/eEssJf8AajWB6gAPRwszYAEm5+a9bJ6QMtGXLBeaW0Qh9qQXfN8HeS3ROim961rA9Fe6Rqnw3fqGikG0MXow==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; 
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=vtK0mFLmn1WxwOxjjXutOXLAiAPq+X+utAToKjDl/eY=;
 b=Svww4EFHV/VVLmcBAp9dYuvqDgv9S6jXMYYkvQ7Tg7f8qneob5+KjlgwnMhIltbslk2b4WZJv83Eq/+KcwlhelIjFMT3GLECt67SCEolu4LwzNH3EfWwsMasGdJ8GFKzA64cuWgVcZeXGrTWM9+ytYAdAY/D4myIQzQui2AbCZRvUaly8AlWcd2IZQ+rC2K8hd+YznrJ8rVZ662XfF1mT5ov9vPLmPPS6GHglOkZ84MiZvWY+mG+UFePiNC8z/cSRygiT5LwZCUp2Vmw4DqpyZ8Z9Dk90zWsoLStLEqYNnCUYwkTUQEcHvU44uJYha1VgnJdLZCbwB5AjU3ZbCpwpA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com; 
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=vtK0mFLmn1WxwOxjjXutOXLAiAPq+X+utAToKjDl/eY=;
 b=k24rIdUTc7YmcsPY7KqmDDM/ewXK9i9bHD18yL3Z2Bhyga8INi23W9ROZbPz8HolqbK1Joz5vgYir+3Jt+3s5a1nxuqYF5sMBK2kHwzi5wlnF7wUF47ebd3ia2cNnYErx8h3ZPx8JhsbI9/2uTiXbLQZHJ/4/swqEldUqM4vyus=
Received: from AM6PR08MB3560.eurprd08.prod.outlook.com (2603:10a6:20b:4c::19)
 by AM6PR08MB4086.eurprd08.prod.outlook.com (2603:10a6:20b:a8::32)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3195.23; Mon, 20 Jul
 2020 11:32:05 +0000
Received: from AM6PR08MB3560.eurprd08.prod.outlook.com
 ([fe80::e891:3b33:766:cad5]) by AM6PR08MB3560.eurprd08.prod.outlook.com
 ([fe80::e891:3b33:766:cad5%6]) with mapi id 15.20.3195.025; Mon, 20 Jul 2020
 11:32:05 +0000
From: Rahul Singh <Rahul.Singh@arm.com>
To: Julien Grall <julien@xen.org>
Subject: Re: PCI devices passthrough on Arm design proposal
Thread-Topic: PCI devices passthrough on Arm design proposal
Thread-Index: AQHWW4kYTVU0hTDyYEitKlUuU5vZlKkKf2uAgAFEeACAAAehAIAAD+CAgAAK1YCAAAXWgIAABSYAgAEq6wCAABX4AIADKZ4A
Date: Mon, 20 Jul 2020 11:32:05 +0000
Message-ID: <D89E89FD-D7E4-4C1D-B3B9-21CA981881E5@arm.com>
References: <3F6E40FB-79C5-4AE8-81CA-E16CA37BB298@arm.com>
 <BD475825-10F6-4538-8294-931E370A602C@arm.com>
 <8ac91a1b-e6b3-0f2b-0f23-d7aff100936d@xen.org>
 <c7d5a084-8111-9f43-57e1-bcf2bd822f5b@xen.org>
 <865D5A77-85D4-4A88-A228-DDB70BDB3691@arm.com>
 <972c0c81-6595-7c41-baa5-8882f5d1c0ff@xen.org>
 <4E6B793C-2E0A-4999-9842-24CDCDE43903@arm.com>
 <20200717160550.GZ7191@Air-de-Roger>
 <C86FE34B-4587-4895-8001-D8CD3F9D44F0@arm.com>
 <f6a0da85-6d44-b0fa-abe6-6839d88c3578@xen.org>
In-Reply-To: <f6a0da85-6d44-b0fa-abe6-6839d88c3578@xen.org>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
Authentication-Results-Original: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [217.140.99.251]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: fb426184-c19e-4223-13c5-08d82ca08bf2
x-ms-traffictypediagnostic: AM6PR08MB4086:|VI1PR08MB4046:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS: <VI1PR08MB40466B3CCD7B4B2DFC6348D6FC7B0@VI1PR08MB4046.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:10000;OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original: 1N/+NIcHOuegqJJ+rhJIeowFCUK0wfa2RvD/GTdw1KIKkva6tpHop8kEFHg18HYjYuBP/PpjyjsQNjCN3RPo9aLwBCLuYIIc1gY4Z517FxHcdSVG8BKDoDAdygoSMLSf+j5oGHxY9ecO6uaXLM9ukLfoiVM1x7+asMFxW4xT3pDXzXIdoiEEslPP1fxuNhAOqar9eJguKhSL9qSNk0q/rKQ99bhezbqHw4M5Qnb518fL1vRpPRRGEbr7RHbOhJn5cymFsXNU4bmUk36HkjhPgmk63uwDFmC5QGRZmFGaaIMnOty65gBy4v9JsDbI3UZ/vIEdV2x1fC1XOc7iuUxBwg==
X-Forefront-Antispam-Report-Untrusted: CIP:255.255.255.255; CTRY:; LANG:en;
 SCL:1; SRV:; IPV:NLI; SFV:NSPM; H:AM6PR08MB3560.eurprd08.prod.outlook.com;
 PTR:; CAT:NONE; SFTY:;
 SFS:(4636009)(39860400002)(376002)(346002)(136003)(396003)(366004)(66446008)(66556008)(8936002)(66476007)(76116006)(64756008)(2906002)(5660300002)(33656002)(91956017)(4326008)(66946007)(186003)(316002)(6916009)(36756003)(71200400001)(26005)(2616005)(83380400001)(6506007)(53546011)(8676002)(54906003)(6512007)(6486002)(478600001)(86362001);
 DIR:OUT; SFP:1101; 
x-ms-exchange-antispam-messagedata: NbXF0SlOvPTrn3nDOLuVvPnoBojC4/GzbGrFWzcmqM1ErCawfCnRleL5tHHQhvyIDayhQ36RbR/2PsaVMhBZkEoeUXA5/Z86FYQnawuNJfsHNdYQ2AKFy3t3J3q8ocDPoKs9rNIDSZPnz6y+e/xVKr4nrYj/gWkXXwDUIIDs9vVqwIEJ7wiVIwpeop19/Bzr7DCBZcsWSu+RwxOmPLhHEnC29M1DT01KoVys0qfX/MiVfJQBHynuSjRfixwq01Y6BL/ND1S4LsnYbssuqDjZqDzQKxOO3ou6DQBMuxLcOe1gkMXN5iFS5pXtFT7J1ENdmmYx8tqhLft3TtAbF/q3ySWCDu5kioFviFYR2iyyzeTol9siEAROlolOKMR8SXbjIqOgM7NxiSBLUVUcDIYn00M5PRzvNbeK5+A1QF65yYByRW8+lWaUwnjfHMODzARx9wUMiS1sHRmf19UNdws9IWX1IY6uHHygxI0DpJ1L5bbwC0VWJso8l+KnIHXmuXgX
Content-Type: text/plain; charset="utf-8"
Content-ID: <B29409658179B348ABC47F543F6A222F@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM6PR08MB4086
Original-Authentication-Results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped: DB5EUR03FT022.eop-EUR03.prod.protection.outlook.com
X-Forefront-Antispam-Report: CIP:63.35.35.123; CTRY:IE; LANG:en; SCL:1; SRV:;
 IPV:CAL; SFV:NSPM; H:64aa7808-outbound-1.mta.getcheckrecipient.com;
 PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com; CAT:NONE; SFTY:;
 SFS:(4636009)(39860400002)(376002)(396003)(136003)(346002)(46966005)(47076004)(70586007)(70206006)(6512007)(4326008)(356005)(83380400001)(8676002)(478600001)(33656002)(2616005)(5660300002)(81166007)(82310400002)(82740400003)(186003)(6506007)(6862004)(6486002)(36756003)(316002)(8936002)(26005)(336012)(53546011)(54906003)(107886003)(2906002)(86362001);
 DIR:OUT; SFP:1101; 
X-MS-Office365-Filtering-Correlation-Id-Prvs: 3c263cd5-6c9f-4153-0be3-08d82ca08781
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: Cw9pEVjf/jMZC7Ilz8w1ZcuhRi3X2pwkjDWhQxydnCL7tqI1YiBKTMrMsOh6g8rf3cE4i1jP25vif322YrC/0wbdMePi7ZI5tBZQ0vVKuAB3dzes+/JrwNNWfMcTluDYEpLRWBdFyXoupRRYxGG6l6GxdVzEcfJAa/X9KhIA6X6IrGLorru8IG9lhP3RYTkyx+ppGps52hpblEU6W6CsbPqoBD6ULWuD4xwwtfvCg65U+vCLfmZOwgHtHkEjwDI6KXgVOQN4mGmJxHk5WIblE0eDEz7QUI14Ro7ZlYNkdlo5iGXMaDnTBPPjOD4OVruPQNTYtPK8H0gMlEW3doc5Qf3nDSSYBa2MDMMajX3slgpdIQRMlEFUKoJlCOiiJbIPrQoV7dBgBk8v9OlLGDY4dA==
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 20 Jul 2020 11:32:12.7142 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: fb426184-c19e-4223-13c5-08d82ca08bf2
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d; Ip=[63.35.35.123];
 Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource: DB5EUR03FT022.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR08MB4046
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien.grall.oss@gmail.com>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 nd <nd@arm.com>, =?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> On 18 Jul 2020, at 12:14 pm, Julien Grall <julien@xen.org> wrote:
> 
> 
> 
> On 18/07/2020 10:55, Bertrand Marquis wrote:
>>> On 17 Jul 2020, at 18:05, Roger Pau Monné <roger.pau@citrix.com> wrote:
>>> 
>>> On Fri, Jul 17, 2020 at 03:47:25PM +0000, Bertrand Marquis wrote:
>>>>> On 17 Jul 2020, at 17:26, Julien Grall <julien@xen.org> wrote:
>>>>> On 17/07/2020 15:47, Bertrand Marquis wrote:
>>>>>>>>> * Dom0Less implementation will require to have the capacity inside Xen to discover the PCI devices (without depending on Dom0 to declare them to Xen).
>>>>>>>>> 
>>>>>>>>> # Enable the existing x86 virtual PCI support for ARM:
>>>>>>>>> 
>>>>>>>>> The existing VPCI support available for X86 is adapted for Arm. When the device is added to XEN via the hyper call “PHYSDEVOP_pci_device_add”, VPCI handler for the config space access is added to the PCI device to emulate the PCI devices.
>>>>>>>>> 
>>>>>>>>> A MMIO trap handler for the PCI ECAM space is registered in XEN so that when guest is trying to access the PCI config space, XEN will trap the access and emulate read/write using the VPCI and not the real PCI hardware.
>>>>>>>>> 
>>>>>>>>> Limitation:
>>>>>>>>> * No handler is register for the MSI configuration.
>>>>>>>>> * Only legacy interrupt is supported and tested as of now, MSI is not implemented and tested.
>>>>>>>> IIRC, legacy interrupt may be shared between two PCI devices. How do you plan to handle this on Arm?
>>>>>> We plan to fix this by adding proper support for MSI in the long term.
>>>>>> For the use case where MSI is not supported or not wanted we might have to find a way to forward the hardware interrupt to several guests to emulate some kind of shared interrupt.
>>>>> 
>>>>> Sharing interrupts are a bit pain because you couldn't take advantage of the direct EOI in HW and have to be careful if one guest doesn't EOI in timely maneer.
>>>>> 
>>>>> This is something I would rather avoid unless there is a real use case for it.
>>>> 
>>>> I would expect that most recent hardware will support MSI and this
>>>> will not be needed.
>>> 
>>> Well, PCI Express mandates MSI support, so while this is just a spec,
>>> I would expect most (if not all) devices to support MSI (or MSI-X), as
>>> Arm platforms haven't implemented legacy PCI anyway.
>> Yes that’s our assumption to. But we have to start somewhere so MSI is
>> planned but in a future step. I would think that supporting non MSI if not
>> impossible will be a lot more complex due to the interrupt sharing.
>> I do think that not supporting non MSI should be ok on Arm.
>>> 
>>>> When MSI is not used, the only solution would be to enforce that
>>>> devices assigned to different guest are using different interrupts
>>>> which would limit the number of domains being able to use PCI
>>>> devices on a bus to 4 (if the enumeration can be modified correctly
>>>> to assign the interrupts properly).
>>>> 
>>>> If we all agree that this is an acceptable limitation then we would
>>>> not need the “interrupt sharing”.
>>> 
>>> I might be easier to start by just supporting devices that have MSI
>>> (or MSI-X) and then move to legacy interrupts if required?
>> MSI support requires also some support in the interrupt controller part
>> on arm. So there is some work to achieve that.
>>> 
>>> You should have most of the pieces you require already implemented
>>> since that's what x86 uses, and hence could reuse almost all of it?
>> Inside PCI probably but the GIC part will require some work.
> 
> We already have an ITS implementation in Xen. This is required in order to use PCI devices in DOM0 on thunder-x (there is no legacy interrupts supported).
> 
> It wasn't yet exposed to the guest because we didn't fully investigate the security aspect of the implementation. However, for a tech preview this should be sufficient.
> 

Ok We will have a look for the ITS implementation once we will start working on the MSI support. Thanks for the pointer.
> 
> -- 
> Julien Grall
> 


From xen-devel-bounces@lists.xenproject.org Mon Jul 20 11:32:35 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jul 2020 11:32:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxU27-00089U-Nx; Mon, 20 Jul 2020 11:32:35 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ezcM=A7=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1jxU26-00089G-N7
 for xen-devel@lists.xenproject.org; Mon, 20 Jul 2020 11:32:34 +0000
X-Inumbo-ID: b344822c-ca7c-11ea-848b-bc764e2007e4
Received: from mail-wm1-x343.google.com (unknown [2a00:1450:4864:20::343])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b344822c-ca7c-11ea-848b-bc764e2007e4;
 Mon, 20 Jul 2020 11:32:33 +0000 (UTC)
Received: by mail-wm1-x343.google.com with SMTP id o2so24876059wmh.2
 for <xen-devel@lists.xenproject.org>; Mon, 20 Jul 2020 04:32:33 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
 :mime-version:content-transfer-encoding:thread-index
 :content-language;
 bh=2Ws+19cyfyQaVImvalOepHUtg1lkzW9AF73Zc3H5Vio=;
 b=TC9HFvgP+O2oEmRnaDuwc6aQNRtcAgQMOf8gQkAB7g//4P0YaMiuwo4me0i9imhv2r
 ZSyYQmlsC9eHU4/EWm8Y3kdA/NmFWOv9vY64DGsa/5AVxDiRHwJ8WbdSfXW0iqxM4Mns
 BUobRz1wR6c+o0kho35RNdZQn/UL80IMcfi23e6H9eOMqSOQNZboXOpwm34WHu5CuzsV
 1vH/zwiH3F7Usp9JJHI+UzA2nGUpPQl1/Ph3MmxWAQ9ud8AaB49nsbk6gblaTPiknTJI
 lqkoKxk6WHfJrHiU3yGZEoYt2vfKMpjG5ZKyuFNaSr2HS3uutMwdzLekoNhn2Stc5xN9
 QggA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
 :subject:date:message-id:mime-version:content-transfer-encoding
 :thread-index:content-language;
 bh=2Ws+19cyfyQaVImvalOepHUtg1lkzW9AF73Zc3H5Vio=;
 b=ZMmHBNXSEoDJuaX+ah6WBUwIkhwsX+3EuClgIyekq07E4hqh/ORTKXZH5J2tus/7O5
 rUTEq3WgRnG4stIXqbYz+vaxLIkU4HngiE4v2OOk4rdo0SP54ibs0qOyuh5LyRvAM7Hi
 9zRXFsl6y/ZB8niS6c3WvAnTOMx0Bzv2ytl3CHdVv3nYed47pDjouK1qOVAnWdAPFNHd
 PEe5kNIFcYlY6ukGx3sNKc6nZG9pqVcOK19AGCnv18gotSSaW89iGg///NWuUYQ2RIhx
 7zsGd5IA+b5sEItG/faGO40BBEXsjfPdg/DFP8dEZgUGdu3mwHu5+1oS6GIDY92tE5ZC
 SIqA==
X-Gm-Message-State: AOAM5319qORbpu9CcS5tlsGgYPhrUYV04/CWXYYv95GVstJy7oqvl53E
 YG5Zwofma8CvOQEw6+ah0T4=
X-Google-Smtp-Source: ABdhPJy81zhpuzTr+RIXsx3WBtvxcP+7u1GkQHOiLzZ+3VoZGANAsCLxwRnoNKBZt5ocHpsbsKEElA==
X-Received: by 2002:a1c:2349:: with SMTP id j70mr20712469wmj.22.1595244752955; 
 Mon, 20 Jul 2020 04:32:32 -0700 (PDT)
Received: from CBGR90WXYV0 (54-240-197-226.amazon.com. [54.240.197.226])
 by smtp.gmail.com with ESMTPSA id o7sm18801953wrv.50.2020.07.20.04.32.31
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Mon, 20 Jul 2020 04:32:32 -0700 (PDT)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
To: "'Juergen Gross'" <jgross@suse.com>,
	<xen-devel@lists.xenproject.org>
References: <20200720112137.27327-1-jgross@suse.com>
In-Reply-To: <20200720112137.27327-1-jgross@suse.com>
Subject: RE: [PATCH v3] docs: specify stability of hypfs path documentation
Date: Mon, 20 Jul 2020 12:32:31 +0100
Message-ID: <002101d65e89$75344a60$5f9cdf20$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="utf-8"
Content-Transfer-Encoding: quoted-printable
X-Mailer: Microsoft Outlook 16.0
Thread-Index: AQM5wuAhs7ejpzf9sg3QCNvYrfsi+6ZJjOjw
Content-Language: en-gb
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Reply-To: paul@xen.org
Cc: 'Stefano Stabellini' <sstabellini@kernel.org>,
 'Julien Grall' <julien@xen.org>, 'Wei Liu' <wl@xen.org>,
 'Andrew Cooper' <andrew.cooper3@citrix.com>,
 'Ian Jackson' <ian.jackson@eu.citrix.com>,
 'George Dunlap' <george.dunlap@citrix.com>, 'Jan Beulich' <jbeulich@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> -----Original Message-----
> From: Juergen Gross <jgross@suse.com>
> Sent: 20 July 2020 12:22
> To: xen-devel@lists.xenproject.org
> Cc: paul@xen.org; Juergen Gross <jgross@suse.com>; Andrew Cooper <andrew.cooper3@citrix.com>; George
> Dunlap <george.dunlap@citrix.com>; Ian Jackson <ian.jackson@eu.citrix.com>; Jan Beulich
> <jbeulich@suse.com>; Julien Grall <julien@xen.org>; Stefano Stabellini <sstabellini@kernel.org>; Wei
> Liu <wl@xen.org>
> Subject: [PATCH v3] docs: specify stability of hypfs path documentation
> 
> In docs/misc/hypfs-paths.pandoc the supported paths in the hypervisor
> file system are specified. Make it more clear that path availability
> might change, e.g. due to scope widening or narrowing (e.g. being
> limited to a specific architecture).
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>
> Release-acked-by: Paul Durrant <paul@xen.org>

TBC, this is also exempt from the commit moratorium as it really needs to be in 4.14.

  Paul

> ---
> V2: reworded as requested by Jan Beulich
> V3: reworded again as suggested by George Dunlap
> ---
>  docs/misc/hypfs-paths.pandoc | 20 ++++++++++++++++++++
>  1 file changed, 20 insertions(+)
> 
> diff --git a/docs/misc/hypfs-paths.pandoc b/docs/misc/hypfs-paths.pandoc
> index a111c6f25c..68d83d9245 100644
> --- a/docs/misc/hypfs-paths.pandoc
> +++ b/docs/misc/hypfs-paths.pandoc
> @@ -5,6 +5,9 @@ in the Xen hypervisor file system (hypfs).
> 
>  The hypervisor file system can be accessed via the xenhypfs tool.
> 
> +The availability of the hypervisor file system depends on the hypervisor
> +config option CONFIG_HYPFS, which is on per default.
> +
>  ## Notation
> 
>  The hypervisor file system is similar to the Linux kernel's sysfs.
> @@ -64,6 +67,23 @@ the list elements separated by spaces, e.g. "dom0 PCID-on".
>  The entry would be writable and it would exist on X86 only and only if the
>  hypervisor is configured to support PV guests.
> 
> +# Stability
> +
> +Path *presence* is not stable, but path *meaning* is always stable: if a tool
> +you write finds a path present, it can rely on behavior in future versions of
> +the hypervisors, and in different configurations.  Specifically:
> +
> +1. Conditions under which paths are used may be extended, restricted, or
> +   removed.  For example, a path that’s always available only on ARM systems
> +   may become available on x86; or a path available on both systems may be
> +   restricted to only appearing on ARM systems.  Paths may also disappear
> +   entirely.
> +2. However, the meaning of a path will never change.  If a path is present,
> +   it will always have exactly the meaning that it always had.  In order to
> +   maintain this, removed paths should be retained with the tag [REMOVED].
> +   The path may be restored *only* if the restored version of the path is
> +   compatible with the previous functionality.
> +
>  ## Example
> 
>  A populated Xen hypervisor file system might look like the following example:
> --
> 2.26.2




From xen-devel-bounces@lists.xenproject.org Mon Jul 20 11:33:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jul 2020 11:33:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxU2i-0008FT-2B; Mon, 20 Jul 2020 11:33:12 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=WIjz=A7=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jxU2g-0008FI-Vh
 for xen-devel@lists.xenproject.org; Mon, 20 Jul 2020 11:33:11 +0000
X-Inumbo-ID: c9d4333c-ca7c-11ea-848b-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c9d4333c-ca7c-11ea-848b-bc764e2007e4;
 Mon, 20 Jul 2020 11:33:10 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id B37A5AB7A;
 Mon, 20 Jul 2020 11:33:15 +0000 (UTC)
Subject: Re: [PATCH v3] docs: specify stability of hypfs path documentation
To: Juergen Gross <jgross@suse.com>, paul@xen.org
References: <20200720112137.27327-1-jgross@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <0463f87c-2139-7f17-02d8-94c59ea39434@suse.com>
Date: Mon, 20 Jul 2020 13:33:08 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20200720112137.27327-1-jgross@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 20.07.2020 13:21, Juergen Gross wrote:
> In docs/misc/hypfs-paths.pandoc the supported paths in the hypervisor
> file system are specified. Make it more clear that path availability
> might change, e.g. due to scope widening or narrowing (e.g. being
> limited to a specific architecture).
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>
> Release-acked-by: Paul Durrant <paul@xen.org>

Acked-by: Jan Beulich <jbeulich@suse.com>

Paul - should I throw this in right away, or has it now rather missed
the train?

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jul 20 11:35:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jul 2020 11:35:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxU4N-0008QD-ER; Mon, 20 Jul 2020 11:34:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ezcM=A7=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1jxU4L-0008Q6-Pk
 for xen-devel@lists.xenproject.org; Mon, 20 Jul 2020 11:34:53 +0000
X-Inumbo-ID: 0708c574-ca7d-11ea-848b-bc764e2007e4
Received: from mail-wm1-x32b.google.com (unknown [2a00:1450:4864:20::32b])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0708c574-ca7d-11ea-848b-bc764e2007e4;
 Mon, 20 Jul 2020 11:34:53 +0000 (UTC)
Received: by mail-wm1-x32b.google.com with SMTP id j18so21924683wmi.3
 for <xen-devel@lists.xenproject.org>; Mon, 20 Jul 2020 04:34:53 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
 :mime-version:content-transfer-encoding:thread-index
 :content-language;
 bh=wY/v6COBEdLlhFrSj6EK10rS7gl/yZlsTCzXbIgs50g=;
 b=uyWxZHDNJmdDF9M37Q9NvveQ8HxZpCSApk25kds1j80t8481I3oDVUzeM/TNaK1WT9
 lFERB5/sJwhRn+31k0Cmz8eVj9IZVhJm0oDaK6krUNNcSxgMBdLqKgZiZERN1il7FE7Z
 pToPJLgD7F7JzgiFLIWc1EJTUlMA7KPzcOi2LJibxOsVTI/hRE9cZig7vxZ5Xn9x1JY4
 /MMp9Zv8+l3CuekrMZZ8zcheG6+896wmvDlIuUkagZcdEz31ihozw7c/1NbnEYgLkzm0
 JP4mvRsF9LRSFxDntkJhaGq4ZIdA7xSK5yYhbmkzgYwxFbRv1euY+IAx3JvL9Qf07jlm
 bN2w==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
 :subject:date:message-id:mime-version:content-transfer-encoding
 :thread-index:content-language;
 bh=wY/v6COBEdLlhFrSj6EK10rS7gl/yZlsTCzXbIgs50g=;
 b=WOYSqtDISn1y2lCiYTKN+CvpX0tlnwFqHiqJKhPQXEJn4Bs09sqYhNdioz+UyySO6M
 VNBE2IjxUKaK+yKcLP7ClVwHD5u/B69wNiQ8mnNfI2lFXxcjB66wknfodmKqjRe+UYPU
 yKocMszHLZWjWx1vB6pp9PNXHOSpviLF7KSnDnteVN10QcK79AE090kCTKQEL9lM583+
 53yOfeUQC3Mw2j2byOlF66BfRXRi4KT/nLCK0el+O/HZmL1R4s69nQGOVvXHAQcAdGtY
 XuoU5qQC2ynKTVgeqyKWyJY9wYeQm4g02BaEYqE9rrOF0WsshYmu7q8tvUBMcgpHKjSu
 5XwQ==
X-Gm-Message-State: AOAM533ZIQXMGCyvofWqM1ceDP6947SzIzFEnpi+8XKCFR7YSsiqYGy5
 oFyPLeXTp+wLDnyLL/QStRo=
X-Google-Smtp-Source: ABdhPJzFTiP70xsg3+d7aQMhPpGWQnzpYQoU1Ct20yY13yolFPvaYN3U73LFYbW1myuQ81r4o7zvHg==
X-Received: by 2002:a7b:c250:: with SMTP id b16mr21111475wmj.30.1595244892322; 
 Mon, 20 Jul 2020 04:34:52 -0700 (PDT)
Received: from CBGR90WXYV0 (54-240-197-226.amazon.com. [54.240.197.226])
 by smtp.gmail.com with ESMTPSA id u2sm29223671wml.16.2020.07.20.04.34.51
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Mon, 20 Jul 2020 04:34:51 -0700 (PDT)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
To: "'Jan Beulich'" <jbeulich@suse.com>,
	"'Juergen Gross'" <jgross@suse.com>
References: <20200720112137.27327-1-jgross@suse.com>
 <0463f87c-2139-7f17-02d8-94c59ea39434@suse.com>
In-Reply-To: <0463f87c-2139-7f17-02d8-94c59ea39434@suse.com>
Subject: RE: [PATCH v3] docs: specify stability of hypfs path documentation
Date: Mon, 20 Jul 2020 12:34:50 +0100
Message-ID: <002201d65e89$c84eb460$58ec1d20$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="utf-8"
Content-Transfer-Encoding: 7bit
X-Mailer: Microsoft Outlook 16.0
Thread-Index: AQM5wuAhs7ejpzf9sg3QCNvYrfsi+wIyFHM7pjf899A=
Content-Language: en-gb
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Reply-To: paul@xen.org
Cc: 'Stefano Stabellini' <sstabellini@kernel.org>,
 'Julien Grall' <julien@xen.org>, 'Wei Liu' <wl@xen.org>,
 'Andrew Cooper' <andrew.cooper3@citrix.com>,
 'Ian Jackson' <ian.jackson@eu.citrix.com>,
 'George Dunlap' <george.dunlap@citrix.com>, xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Sent: 20 July 2020 12:33
> To: Juergen Gross <jgross@suse.com>; paul@xen.org
> Cc: xen-devel@lists.xenproject.org; Andrew Cooper <andrew.cooper3@citrix.com>; George Dunlap
> <george.dunlap@citrix.com>; Ian Jackson <ian.jackson@eu.citrix.com>; Julien Grall <julien@xen.org>;
> Stefano Stabellini <sstabellini@kernel.org>; Wei Liu <wl@xen.org>
> Subject: Re: [PATCH v3] docs: specify stability of hypfs path documentation
> 
> On 20.07.2020 13:21, Juergen Gross wrote:
> > In docs/misc/hypfs-paths.pandoc the supported paths in the hypervisor
> > file system are specified. Make it more clear that path availability
> > might change, e.g. due to scope widening or narrowing (e.g. being
> > limited to a specific architecture).
> >
> > Signed-off-by: Juergen Gross <jgross@suse.com>
> > Release-acked-by: Paul Durrant <paul@xen.org>
> 
> Acked-by: Jan Beulich <jbeulich@suse.com>
> 
> Paul - should I throw this in right away, or has it now rather missed
> the train?

I guess our emails raced. Throw it in now.

  Paul

> 
> Jan



From xen-devel-bounces@lists.xenproject.org Mon Jul 20 11:40:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jul 2020 11:40:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxUA5-0000pe-3U; Mon, 20 Jul 2020 11:40:49 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=WIjz=A7=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jxUA3-0000pY-Ik
 for xen-devel@lists.xenproject.org; Mon, 20 Jul 2020 11:40:47 +0000
X-Inumbo-ID: d99063ee-ca7d-11ea-9f8a-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d99063ee-ca7d-11ea-9f8a-12813bfff9fa;
 Mon, 20 Jul 2020 11:40:46 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id A13D9AB7A;
 Mon, 20 Jul 2020 11:40:51 +0000 (UTC)
Subject: Re: [PATCH v3] docs: specify stability of hypfs path documentation
To: paul@xen.org
References: <20200720112137.27327-1-jgross@suse.com>
 <0463f87c-2139-7f17-02d8-94c59ea39434@suse.com>
 <002201d65e89$c84eb460$58ec1d20$@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <f4f5a43e-dd48-d853-1cbf-5505795d5c0a@suse.com>
Date: Mon, 20 Jul 2020 13:40:44 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <002201d65e89$c84eb460$58ec1d20$@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: 'Juergen Gross' <jgross@suse.com>,
 'Stefano Stabellini' <sstabellini@kernel.org>, 'Julien Grall' <julien@xen.org>,
 'Wei Liu' <wl@xen.org>, 'Andrew Cooper' <andrew.cooper3@citrix.com>,
 'Ian Jackson' <ian.jackson@eu.citrix.com>,
 'George Dunlap' <george.dunlap@citrix.com>, xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 20.07.2020 13:34, Paul Durrant wrote:
>> -----Original Message-----
>> From: Jan Beulich <jbeulich@suse.com>
>> Sent: 20 July 2020 12:33
>> To: Juergen Gross <jgross@suse.com>; paul@xen.org
>> Cc: xen-devel@lists.xenproject.org; Andrew Cooper <andrew.cooper3@citrix.com>; George Dunlap
>> <george.dunlap@citrix.com>; Ian Jackson <ian.jackson@eu.citrix.com>; Julien Grall <julien@xen.org>;
>> Stefano Stabellini <sstabellini@kernel.org>; Wei Liu <wl@xen.org>
>> Subject: Re: [PATCH v3] docs: specify stability of hypfs path documentation
>>
>> On 20.07.2020 13:21, Juergen Gross wrote:
>>> In docs/misc/hypfs-paths.pandoc the supported paths in the hypervisor
>>> file system are specified. Make it more clear that path availability
>>> might change, e.g. due to scope widening or narrowing (e.g. being
>>> limited to a specific architecture).
>>>
>>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>> Release-acked-by: Paul Durrant <paul@xen.org>
>>
>> Acked-by: Jan Beulich <jbeulich@suse.com>
>>
>> Paul - should I throw this in right away, or has it now rather missed
>> the train?
> 
> I guess our emails raced. Throw it in now.

Indeed they did - I saw yours come in right after sending. Change is
in now.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jul 20 11:58:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jul 2020 11:58:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxURR-0001vz-Ou; Mon, 20 Jul 2020 11:58:45 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=WIjz=A7=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jxURQ-0001vu-LN
 for xen-devel@lists.xenproject.org; Mon, 20 Jul 2020 11:58:44 +0000
X-Inumbo-ID: 5b5a9f51-ca80-11ea-9f8c-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 5b5a9f51-ca80-11ea-9f8c-12813bfff9fa;
 Mon, 20 Jul 2020 11:58:43 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 71345AB3D;
 Mon, 20 Jul 2020 11:58:48 +0000 (UTC)
Subject: Re: [PATCH] x86: guard against port I/O overlapping the RTC/CMOS range
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <38c73e17-30b8-27b4-bc7c-e6ef7817fa1e@suse.com>
 <20200720105213.GI7191@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <c82b9985-fd4e-fbd6-afe1-7bdbf395d426@suse.com>
Date: Mon, 20 Jul 2020 13:58:40 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20200720105213.GI7191@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 20.07.2020 12:52, Roger Pau Monné wrote:
> On Fri, Jul 17, 2020 at 03:10:43PM +0200, Jan Beulich wrote:
>> Since we intercept RTC/CMOS port accesses, let's do so consistently in
>> all cases, i.e. also for e.g. a dword access to [006E,0071]. To avoid
>> the risk of unintended impact on Dom0 code actually doing so (despite
>> the belief that none ought to exist), also extend
>> guest_io_{read,write}() to decompose accesses where some ports are
>> allowed to be directly accessed and some aren't.
> 
> Wouldn't the same apply to displaced accesses to port 0xcf8?

No, CF8 is special - partial accesses have no meaning as to the
index selection for subsequent CFC accesses. Or else CF9
couldn't be a standalone port with entirely different
functionality.

>> @@ -373,25 +384,31 @@ static int read_io(unsigned int port, un
>>      return X86EMUL_OKAY;
>>  }
>>  
>> +static void _guest_io_write(unsigned int port, unsigned int bytes,
>> +                            uint32_t data)
> 
> There's nothing guest specific about this function I think? If so you
> could drop the _guest_ prefix and just name it io_write?

Hmm, when choosing the name I decided that (a) it's a helper of
the other function and (b) it's still guest-driven data that we
output.

>> +{
>> +    switch ( bytes )
>> +    {
>> +    case 1:
>> +        outb((uint8_t)data, port);
>> +        if ( amd_acpi_c1e_quirk )
>> +            amd_check_disable_c1e(port, (uint8_t)data);
>> +        break;
>> +    case 2:
>> +        outw((uint16_t)data, port);
>> +        break;
>> +    case 4:
>> +        outl(data, port);
>> +        break;
>> +    }
> 
> Newlines after break statements would be nice, and maybe add a
> default: ASSERT_UNREACHABLE() case to be on the safe side?

Well, yes, I guess I should. But then if I edit this moved code,
I guess I'll also get rid of the stray casts.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jul 20 13:00:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jul 2020 13:00:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxVOb-00071a-U0; Mon, 20 Jul 2020 12:59:53 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=wjZm=A7=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jxVOZ-000718-Ux
 for xen-devel@lists.xenproject.org; Mon, 20 Jul 2020 12:59:51 +0000
X-Inumbo-ID: e1fbedb8-ca88-11ea-848e-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e1fbedb8-ca88-11ea-848e-bc764e2007e4;
 Mon, 20 Jul 2020 12:59:44 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=7oxRHvGBMPOeIwf/kZBIkXfc18lWlEZzbrpGHWIBvZg=; b=K9lO4f1dv806PJaei7+6fpiSC
 hWTTpBFgmXyDLO/6/DEsH/WyuPInc3KWBXEM51ZY0qS/SycJvvgLSHa3N+kgiOmGp19vCW6sSbhTz
 SQSCU+ve90X+5fIlBpyRskhvj2JDSEESDI3G0BHRRifDoMoE+2zTLyS0EYfBWCxKD0LWw=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jxVOR-00062A-RB; Mon, 20 Jul 2020 12:59:43 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jxVOR-0001Yt-H4; Mon, 20 Jul 2020 12:59:43 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jxVOR-0004jF-GJ; Mon, 20 Jul 2020 12:59:43 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-152031-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 152031: tolerable trouble: fail/pass/starved
X-Osstest-Failures: xen-unstable:test-amd64-amd64-examine:memdisk-try-append:fail:heisenbug
 xen-unstable:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:heisenbug
 xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
 xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 xen-unstable:test-amd64-amd64-qemuu-freebsd11-amd64:hosts-allocate:starved:nonblocking
 xen-unstable:test-amd64-amd64-qemuu-freebsd12-amd64:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This: xen=fb024b779336a0f73b3aee885b2ce082e812881f
X-Osstest-Versions-That: xen=fb024b779336a0f73b3aee885b2ce082e812881f
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 20 Jul 2020 12:59:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 152031 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/152031/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-examine    4 memdisk-try-append fail in 152004 pass in 152031
 test-amd64-amd64-xl-rtds     18 guest-localmigrate/x10     fail pass in 152004
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail pass in 152004

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 152004
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 152004
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 152004
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 152004
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 152004
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 152004
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 152004
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 152004
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop             fail like 152004
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-amd64-amd64-qemuu-freebsd11-amd64  2 hosts-allocate           starved n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  2 hosts-allocate           starved n/a

version targeted for testing:
 xen                  fb024b779336a0f73b3aee885b2ce082e812881f
baseline version:
 xen                  fb024b779336a0f73b3aee885b2ce082e812881f

Last test of basis   152031  2020-07-20 01:57:34 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       starved 
 test-amd64-amd64-qemuu-freebsd12-amd64                       starved 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Mon Jul 20 13:22:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jul 2020 13:22:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxVkU-00012K-PT; Mon, 20 Jul 2020 13:22:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UosC=A7=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jxVkT-00011o-H3
 for xen-devel@lists.xenproject.org; Mon, 20 Jul 2020 13:22:29 +0000
X-Inumbo-ID: 0e2f93c9-ca8c-11ea-848e-bc764e2007e4
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0e2f93c9-ca8c-11ea-848e-bc764e2007e4;
 Mon, 20 Jul 2020 13:22:27 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595251348;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:content-transfer-encoding:in-reply-to;
 bh=rWX7UgQLWphzCpDeE9TH91eyyN4ef7p3oiKbTx8bXGQ=;
 b=EPz9JdkPznULBDIRL1KZ8SndXh3QlflXtdu4uoG509t7Sxg0iqfQS3ys
 auJIse6qVU5TMjd3jIL2XKiNR9IKC0rmlTr8drfgDJ3dNnwGyPGhwKV2b
 dB6V1cJ9O4Gc1DhMzGojAoP9geufdhwofKBlMao8wZjmSNl1BqvfQegMp A=;
Authentication-Results: esa1.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: JSc1KxCDMqwgE350jYVkc/thdFnuFjAul1kIhutLHrFGFEDANkexuSOA36gQXjddgIDy0KW4Qf
 XcmQzqw6iz0blI6b0dZYR3gxZK5HcOdY1G0FbNBP/7jEUqfz0vv3J5SEfXWZKFq8ytyhYpeh8t
 ha8iTqLjkQ0hC9Y/7omqNNQ+cwlvQ2p/4xgkpSBIcWFBkCVdlkUqGDXlywxgrDr0psZtvmzCpA
 raNj+4vvEpmOGoK+M0SAgXwQsgnhzrWNk047SNaw3GW8PFX8TKaHmmpTVk9yB/Hds1mGe6Qnl8
 uAE=
X-SBRS: 2.7
X-MesageID: 23077979
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,375,1589256000"; d="scan'208";a="23077979"
Date: Mon, 20 Jul 2020 15:22:19 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Subject: Re: [PATCH] x86: guard against port I/O overlapping the RTC/CMOS range
Message-ID: <20200720132219.GL7191@Air-de-Roger>
References: <38c73e17-30b8-27b4-bc7c-e6ef7817fa1e@suse.com>
 <20200720105213.GI7191@Air-de-Roger>
 <c82b9985-fd4e-fbd6-afe1-7bdbf395d426@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <c82b9985-fd4e-fbd6-afe1-7bdbf395d426@suse.com>
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Mon, Jul 20, 2020 at 01:58:40PM +0200, Jan Beulich wrote:
> On 20.07.2020 12:52, Roger Pau Monné wrote:
> > On Fri, Jul 17, 2020 at 03:10:43PM +0200, Jan Beulich wrote:
> >> Since we intercept RTC/CMOS port accesses, let's do so consistently in
> >> all cases, i.e. also for e.g. a dword access to [006E,0071]. To avoid
> >> the risk of unintended impact on Dom0 code actually doing so (despite
> >> the belief that none ought to exist), also extend
> >> guest_io_{read,write}() to decompose accesses where some ports are
> >> allowed to be directly accessed and some aren't.
> > 
> > Wouldn't the same apply to displaced accesses to port 0xcf8?
> 
> No, CF8 is special - partial accesses have no meaning as to the
> index selection for subsequent CFC accesses. Or else CF9
> couldn't be a standalone port with entirely different
> functionality.

Right:

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

See below.

> >> @@ -373,25 +384,31 @@ static int read_io(unsigned int port, un
> >>      return X86EMUL_OKAY;
> >>  }
> >>  
> >> +static void _guest_io_write(unsigned int port, unsigned int bytes,
> >> +                            uint32_t data)
> > 
> > There's nothing guest specific about this function I think? If so you
> > could drop the _guest_ prefix and just name it io_write?
> 
> Hmm, when choosing the name I decided that (a) it's a helper of
> the other function and (b) it's still guest driven data that we
> output.

Well, the fact that it's guest driven data shouldn't matter much,
because there are no guest-specific checks in the function anyway - it
might as well be used for non-guest driven data AFAICT? (even if it's
not the case ATM).

It's likely that if I have to change this code in the future I will
drop such a prefix, but the change is correct regardless of the
naming, so I'm not going to insist.

> >> +{
> >> +    switch ( bytes )
> >> +    {
> >> +    case 1:
> >> +        outb((uint8_t)data, port);
> >> +        if ( amd_acpi_c1e_quirk )
> >> +            amd_check_disable_c1e(port, (uint8_t)data);
> >> +        break;
> >> +    case 2:
> >> +        outw((uint16_t)data, port);
> >> +        break;
> >> +    case 4:
> >> +        outl(data, port);
> >> +        break;
> >> +    }
> > 
> > Newlines after break statements would be nice, and maybe add a
> > default: ASSERT_UNREACHABLE() case to be on the safe side?
> 
> Well, yes, I guess I should. But then if I edit this moved code,
> I guess I'll also get rid of the stray casts.

I was going to ask for that as well, but I assumed there might be some
value in making the truncations explicit here. Feel free to drop those
too if you end up making the above adjustments.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Mon Jul 20 13:24:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jul 2020 13:24:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxVmG-000191-5f; Mon, 20 Jul 2020 13:24:20 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=wjZm=A7=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jxVmE-00018v-V1
 for xen-devel@lists.xenproject.org; Mon, 20 Jul 2020 13:24:18 +0000
X-Inumbo-ID: 50057cf4-ca8c-11ea-848e-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 50057cf4-ca8c-11ea-848e-bc764e2007e4;
 Mon, 20 Jul 2020 13:24:17 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=Q/xJWot8dH7yudlJSL5d+YcAy/hTlrUXhilxlbQTqU8=; b=4hTuOG9WJq8zyof+jS8W5fVQr
 xT//tY23ia9RcCUb9SaBgfXZQY4Tq5laEvEjWo2Jy093dYlw1m0LzPDzqDam2NWLhUlvuaMx+qkwU
 RbB0Al8SUwmdqNmPvhFhIxCYTZ6IvCdB3FX/z/kyM93cghSp+8/pm8oPRLNul/zRrar5Y=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jxVmD-0006YF-31; Mon, 20 Jul 2020 13:24:17 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jxVmC-000360-EN; Mon, 20 Jul 2020 13:24:16 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jxVmC-0005DX-Di; Mon, 20 Jul 2020 13:24:16 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-152041-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 152041: tolerable all pass - PUSHED
X-Osstest-Failures: xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=8c4532f19d6925538fb0c938f7de9a97da8c5c3b
X-Osstest-Versions-That: xen=fb024b779336a0f73b3aee885b2ce082e812881f
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 20 Jul 2020 13:24:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 152041 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/152041/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  8c4532f19d6925538fb0c938f7de9a97da8c5c3b
baseline version:
 xen                  fb024b779336a0f73b3aee885b2ce082e812881f

Last test of basis   151970  2020-07-17 16:00:26 Z    2 days
Testing same since   152041  2020-07-20 11:00:34 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Michal Leszczynski <michal.leszczynski@cert.pl>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision:

To xenbits.xen.org:/home/xen/git/xen.git
   fb024b7793..8c4532f19d  8c4532f19d6925538fb0c938f7de9a97da8c5c3b -> smoke


From xen-devel-bounces@lists.xenproject.org Mon Jul 20 13:45:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jul 2020 13:45:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxW6M-0002ua-Kj; Mon, 20 Jul 2020 13:45:06 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gz2F=A7=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jxW6L-0002uB-Jn
 for xen-devel@lists.xen.org; Mon, 20 Jul 2020 13:45:05 +0000
X-Inumbo-ID: 2ff209ac-ca8f-11ea-9fa7-12813bfff9fa
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2ff209ac-ca8f-11ea-9fa7-12813bfff9fa;
 Mon, 20 Jul 2020 13:44:53 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595252693;
 h=subject:to:cc:references:from:message-id:date:
 mime-version:in-reply-to:content-transfer-encoding;
 bh=OuiEP3D65JBIDm2R1AzKgKM4VFlaTnShCJIfjbHA2Dg=;
 b=LkgaOjcLyiChOPc4jVy8esxPqsRJmHJ7LKxIsxxh6DKd1II1cn6kUVdA
 kC4xN0czVrluqNGZd/1URpkMpnVY5PzcYplqL62qyKR+osQVFVTDLh/+S
 RRGFO5WK8KoQnczx1Yb6uqX1lKmsDBa6rrPrndOAQ7pYwPAc/Jtk9TSX/ Q=;
Authentication-Results: esa5.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: sNyUv7VO9pjGKXyswLM1TpeV6PWBf4YBhqlfyQbFFMeziSp3EwpWsVcfqABUCpw1FTCaHzUAiw
 ij+9alrRXwiPyLEBu6RgqkIuhM2WIGLbZSOWCqIIbO68a5oGUBt368oGDD0ovD2gTxmpgexUKM
 IxtU0mpQjgJv2B64F5+kc99RKPJavAYpwe3+EpvVgVOYKPcCJjIYnxqg0a86mOx6MBgv01FDqk
 pSc2vBe4DWoIs8K1Aw7NS9AJkkc8+b9OvYFvBzQr6fG4PvAmAMDLFn1LLnSNvZarYUi58i0etn
 H6c=
X-SBRS: 2.7
X-MesageID: 22950362
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,375,1589256000"; d="scan'208";a="22950362"
Subject: Re: [oss-security] Xen Security Advisory 329 v2 - Linux ioperm bitmap
 context switching issues
To: Mauro Matteo Cascella <mcascell@redhat.com>,
 <oss-security@lists.openwall.com>
References: <E1jw3ms-0006i6-Se@xenbits.xenproject.org>
 <CAA8xKjVib9UERsMrAy3nNdVssNxLciXTmmhmXqq1gvhO16URew@mail.gmail.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <57e20b43-53cb-acc6-2634-4fc3b29e2312@citrix.com>
Date: Mon, 20 Jul 2020 14:44:48 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <CAA8xKjVib9UERsMrAy3nNdVssNxLciXTmmhmXqq1gvhO16URew@mail.gmail.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-users@lists.xen.org, xen-announce@lists.xen.org,
 "Xen.org security team" <security-team-members@xen.org>,
 xen-devel@lists.xen.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

/sigh - it seems that stuff like this doesn't get done when I'm on holiday.

I'll get one sorted.

~Andrew

On 17/07/2020 08:54, Mauro Matteo Cascella wrote:
> Hello,
>
> Will a CVE be assigned to this flaw?
>
> Thanks,
>
> On Thu, Jul 16, 2020 at 3:21 PM Xen.org security team
<security@xen.org <mailto:security@xen.org>> wrote:
>
>                     Xen Security Advisory XSA-329
>                               version 2
>
>              Linux ioperm bitmap context switching issues
>
> UPDATES IN VERSION 2
> ====================
>
> Public release.
>
> ISSUE DESCRIPTION
> =================
>
> Linux 5.5 overhauled the internal state handling for the iopl() and
> ioperm() system calls.  Unfortunately, one aspect on context switch
> wasn't wired up correctly for the Xen PVOps case.
>
> IMPACT
> ======
>
> IO port permissions don't get rescinded when context switching to an
> unprivileged task.  Therefore, all userspace can use the IO ports
> granted to the most recently scheduled task with IO port permissions.
>
> VULNERABLE SYSTEMS
> ==================
>
> Only x86 guests are vulnerable.
>
> All versions of Linux from 5.5 are potentially vulnerable.
>
> Linux is only vulnerable when running as an x86 PV guest.  Linux is
> not vulnerable when running as an x86 HVM/PVH guest.
>
> The vulnerability can only be exploited in domains which have been
> granted access to IO ports by Xen.  This is typically only the
> hardware domain, and guests configured with PCI Passthrough.
>
> MITIGATION
> ==========
>
> Running only HVM/PVH guests avoids the vulnerability.
>
> CREDITS
> =======
>
> This issue was discovered by Andy Lutomirski.
>
> RESOLUTION
> ==========
>
> Applying the appropriate attached patch resolves this issue.
>
> xsa329.patch           Linux 5.5 and later
>
> $ sha256sum xsa329*
> cdb5ac9bfd21192b5965e8ec0a1c4fcf12d0a94a962a8158cd27810e6aa362f0  xsa329.patch
> $
>
> DEPLOYMENT DURING EMBARGO
> =========================
>
> Deployment of the patches and/or mitigations described above (or
> others which are substantially similar) is permitted during the
> embargo, even on public-facing systems with untrusted guest users and
> administrators.
>
> But: Distribution of updated software is prohibited (except to other
> members of the predisclosure list).
>
> Predisclosure list members who wish to deploy significantly different
> patches and/or mitigations, please contact the Xen Project Security
> Team.
>
>
> (Note: this during-embargo deployment notice is retained in
> post-embargo publicly released Xen Project advisories, even though it
> is then no longer applicable.  This is to enable the community to have
> oversight of the Xen Project Security Team's decisionmaking.)
>
> For more information about permissible uses of embargoed information,
> consult the Xen Project community's agreed Security Policy:
>   http://www.xenproject.org/security-policy.html
>
>
>
> --
> Mauro Matteo Cascella, Red Hat Product Security
> 6F78 E20B 5935 928C F0A8  1A9D 4E55 23B8 BB34 10B0




From xen-devel-bounces@lists.xenproject.org Mon Jul 20 14:07:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jul 2020 14:07:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxWRj-0004vT-7A; Mon, 20 Jul 2020 14:07:11 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=wjZm=A7=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jxWRh-0004v7-TL
 for xen-devel@lists.xenproject.org; Mon, 20 Jul 2020 14:07:09 +0000
X-Inumbo-ID: 497e73c6-ca92-11ea-8494-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 497e73c6-ca92-11ea-8494-bc764e2007e4;
 Mon, 20 Jul 2020 14:07:03 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=Ezi5X1KWTHKJzPwcY30nxBgiKQFsZjCPZ4Kpb4ZZ+KA=; b=vx+ejDTm/hN1oC+y7WvxpI9ZR
 q7LrZtvT+jrAnyVCVkbCNUHJlFRoQjuU3sdXEFYPp8EDkpFVRvhELMcN9p+XLSZSTiUY5ZjFqVN7u
 lNa11aVEAuouQT+Ox1iZLXKEYajAowGrWAO8A1VSv0SlZXDIjQj/Fb2Ed3fk+dLsFJRzE=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jxWRb-0007X9-Cc; Mon, 20 Jul 2020 14:07:03 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jxWRa-0006Qt-Tk; Mon, 20 Jul 2020 14:07:03 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jxWRa-0002aI-Sx; Mon, 20 Jul 2020 14:07:02 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-152037-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 152037: all pass - PUSHED
X-Osstest-Versions-This: ovmf=3d9d66ad760b67bfdfb5b4b8e9b34f6af6c45935
X-Osstest-Versions-That: ovmf=3d8327496762b4f2a54c9bafd7a214314ec28e9e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 20 Jul 2020 14:07:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 152037 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/152037/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 3d9d66ad760b67bfdfb5b4b8e9b34f6af6c45935
baseline version:
 ovmf                 3d8327496762b4f2a54c9bafd7a214314ec28e9e

Last test of basis   151982  2020-07-18 02:36:30 Z    2 days
Testing same since   152037  2020-07-20 07:09:58 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Shenglei Zhang <shenglei.zhang@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision:

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   3d83274967..3d9d66ad76  3d9d66ad760b67bfdfb5b4b8e9b34f6af6c45935 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Mon Jul 20 14:54:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jul 2020 14:54:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxXBD-0000b9-7O; Mon, 20 Jul 2020 14:54:11 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=pLMr=A7=gmail.com=alejandro.gonzalez.correo@srs-us1.protection.inumbo.net>)
 id 1jxXBB-0000b4-5o
 for xen-devel@lists.xenproject.org; Mon, 20 Jul 2020 14:54:09 +0000
X-Inumbo-ID: dd112ba0-ca98-11ea-849c-bc764e2007e4
Received: from mail-ot1-x344.google.com (unknown [2607:f8b0:4864:20::344])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id dd112ba0-ca98-11ea-849c-bc764e2007e4;
 Mon, 20 Jul 2020 14:54:08 +0000 (UTC)
Received: by mail-ot1-x344.google.com with SMTP id c25so12402181otf.7
 for <xen-devel@lists.xenproject.org>; Mon, 20 Jul 2020 07:54:08 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=mime-version:from:date:message-id:subject:to;
 bh=QBB9o1cZ8g3+RFyyMGo7jZrS0UDti7SFSFebEliZB+c=;
 b=YMEZ+gm32iwx0IJ8U54KGaXMO1AcwsDDqSUrpGVbA00LcDQeSK319XcgPAxzC1aXLk
 0OgrQiN+RUJDWb0KRZyJXtfwC1w0DGzcLjb2EJQdNO5IvoWmpEKBopWWMt4ngswGUE1r
 YtrVisRRnOTPPQ8FE+2to8E6BVw4J5icGcORPpP6T3+8TLDkvysWWkbLa2k2wHUH1T1u
 Uok+VudT0FgpmkUn87b1TIkvT0YeG8zlp2+uJOWZRB7o2IdhlA2/rFVqHocnKOAddlWI
 JIRkK0jJnINEs0RBTibFNdOxgzNu4iTGBQicPQh+zSHDHFbz9lBhKhOcnH0S5/QMyiAC
 b+xg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:from:date:message-id:subject:to;
 bh=QBB9o1cZ8g3+RFyyMGo7jZrS0UDti7SFSFebEliZB+c=;
 b=P/vXOK2Rd0iwD3qVTxI3fZGcCCumCwAdMclP5THJEVBmj4OqpRCiAl4yfImdsmR9tm
 /xaKssT+wSjgX6trey0IjR1+ev4j/nUjnSSPnCiU3cJMBAqz53yVk4zE+sp2bpquSZqO
 U51Xls2DFy4BZJNYvsSYe4IFB8mFJfuyE3JmvwkI67L+C+/tR8ZnzkHESEgGDXHIrDk0
 nLl4WffK/H+86T1SvWLk0F+Vx/9eh2wX5PCu8JQyz2j/faRp/Ra/p0E08ibOoLiraduS
 0Fj5vOlz/9EYDutpjxtLSIb5AP+IzKn4oLZ2T3yww79MHm6RUc8j2Ub/62xI4rC4rSUs
 qjCg==
X-Gm-Message-State: AOAM533kqH3j2LjlGNHhSuhrnxreutez4xae8NDhzD/INJa/lTDCVn+P
 3U/i7SiVKf5VudDsV14mYQqmuYKwGy6sXslcfjST1IrK
X-Google-Smtp-Source: ABdhPJxqN5+nJ5pRQByNhQP7e17jnpQoeVfh5alWWX8XxaUd0ZCpgHJh8WXx06COjx/Ssm8r60iZVp+UbhSzwhndwNI=
X-Received: by 2002:a9d:640b:: with SMTP id h11mr20925612otl.92.1595256847440; 
 Mon, 20 Jul 2020 07:54:07 -0700 (PDT)
MIME-Version: 1.0
From: Alejandro <alejandro.gonzalez.correo@gmail.com>
Date: Mon, 20 Jul 2020 16:53:56 +0200
Message-ID: <CA+wirGqXMoRkS-aJmfFLipUv8SdY5LKV1aMrF0yKRJQaMvzs6Q@mail.gmail.com>
Subject: dom0 Linux 5.8-rc5 kernel failing to initialize cooling maps for
 Allwinner H6 SoC
To: xen-devel@lists.xenproject.org
Content-Type: text/plain; charset="UTF-8"
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hello all.

I'm new to this community, and firstly I'd like to thank you all for
your efforts on supporting Xen in ARM devices.

I'm trying Xen 4.13.1 on an Allwinner H6 SoC (more precisely a Pine
H64 model B, with an ARM Cortex-A53 CPU). I managed to get a dom0
Linux 5.8-rc5 kernel running fine, unpatched, using the upstream
device tree for my board. However, the dom0 kernel has trouble reading
some DT nodes related to the CPUs, and it can't initialize the thermal
subsystem properly. That is a showstopper for me, because I'm
concerned that letting the CPU run at its maximum frequency without
watching its temperature may cause overheating. The relevant kernel
messages are:

[  +0.001959] sun50i-cpufreq-nvmem: probe of sun50i-cpufreq-nvmem
failed with error -2
...
[  +0.003053] hw perfevents: failed to parse interrupt-affinity[0] for pmu
[  +0.000043] hw perfevents: /pmu: failed to register PMU devices!
[  +0.000037] armv8-pmu: probe of pmu failed with error -22
...
[  +0.000163] OF: /thermal-zones/cpu-thermal/cooling-maps/map0: could
not find phandle
[  +0.000063] thermal_sys: failed to build thermal zone cpu-thermal: -22

I've searched for issues, code, or commits that may be related to this
problem. The most relevant things I found are:

- A patch that blacklists the A53 PMU:
https://patchwork.kernel.org/patch/10899881/
- The handle_node function in xen/arch/arm/domain_build.c:
https://github.com/xen-project/xen/blob/master/xen/arch/arm/domain_build.c#L1427

I've thought about removing "/cpus" from the skip_matches array in the
handle_node function, but I'm not sure that would be a good fix.

I'd appreciate any tips for fixing this issue. Don't hesitate to
contact me if you need any more information about the problem.


From xen-devel-bounces@lists.xenproject.org Mon Jul 20 15:05:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jul 2020 15:05:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxXLl-0001Wy-8x; Mon, 20 Jul 2020 15:05:05 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Eely=A7=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jxXLk-0001Wt-LO
 for xen-devel@lists.xenproject.org; Mon, 20 Jul 2020 15:05:04 +0000
X-Inumbo-ID: 63db1438-ca9a-11ea-849d-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 63db1438-ca9a-11ea-849d-bc764e2007e4;
 Mon, 20 Jul 2020 15:05:03 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=sXjziw0/SXeIJ1COJ+9Lak3CJRjc5LGSwcGlCIPp4QA=; b=MGBjdM8z9ubDR46+dZlEoTJT0o
 T1YzcPl3U7gU/J1wfxR6Rqh6czEyWW5lBbdgf9viJXdSYp4sIlikJue70ZbhTIPHrb1xhk/w56XQt
 lxEYuuZtkKqQhkfVvGKgdJpiGHL/KWgXvFk8rvQCQdska8J+t7i6d2iBdMTJ12L+614w=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jxXLh-0000IZ-55; Mon, 20 Jul 2020 15:05:01 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jxXLg-00024o-UI; Mon, 20 Jul 2020 15:05:01 +0000
Subject: Re: Proposal: rename xen.git#master (to #trunk, perhaps)
To: Stefano Stabellini <sstabellini@kernel.org>,
 Ian Jackson <ian.jackson@citrix.com>
References: <24307.31637.214096.240023@mariner.uk.xensource.com>
 <alpine.DEB.2.21.2006241033210.8121@sstabellini-ThinkPad-T480s>
From: Julien Grall <julien@xen.org>
Message-ID: <077b1dfa-bcc5-76bc-47f1-1a2bc207cece@xen.org>
Date: Mon, 20 Jul 2020 16:04:59 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:68.0)
 Gecko/20100101 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2006241033210.8121@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, committers@xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi,

On 24/06/2020 18:38, Stefano Stabellini wrote:
> On Wed, 24 Jun 2020, Ian Jackson wrote:
>> I think it would be a good idea to rename this branch name.  This name
>> has unfortunate associations[1], even if it can be argued[2] that the
>> etymology is not as bad as in some uses of the word.
>>
>> This is relatively straightforward on a technical level and will
>> involve a minimum of inconvenience.  Since only osstest ever pushes to
>> xen.git#master, we could easily make a new branch name and also keep
>> #master for compatibility as long as we like.
>>
>> The effects[1] would be:
>>
>> Users who did "git clone https://xenbits.xen.org/git-http/xen.git"
>> would find themselves on a branch called "trunk" which tracked
>> "origin/trunk", by default.  (Some users with old versions of git
>> using old protocols would still end up on "master".)
>>
>> Everyone who currently tracks "master" would be able to switch to
>> tracking "trunk" at their leisure.
>>
>> Presumably at some future point (a year or two from now, say) we would
>> abolish the name "master".
>>
>> Comments ?  In particular, comments on:
>>
>> 1. What the new branch name should be called.  Suggestions I have seen
>> include "trunk" and "main".  I suggest "trunk" because this was used
>> by SVN, CVS, RCS, CSSC (and therefore probably SCCS) for this same
>> purpose.
> 
> Github seems to be about to make a similar change. I wonder if we should
> wait just a couple of weeks to see what name they are going to choose.

I have just tried to create a new repo on github. It looks like the 
default is still 'master' so far.

> 
> https://www.theregister.com/2020/06/15/github_replaces_master_with_main/
> 
> 
> Of course I don't particularly care one way or the other, but it would
> be good if we end up using the same name as everybody else. It is not
> that we have to choose the name Github is going to choose, but their
> user base is massive -- whatever they are going to pick is very likely
> going to stick.

+1

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Jul 20 15:15:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jul 2020 15:15:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxXVn-0002QM-8c; Mon, 20 Jul 2020 15:15:27 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Eely=A7=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jxXVm-0002QH-3t
 for xen-devel@lists.xenproject.org; Mon, 20 Jul 2020 15:15:26 +0000
X-Inumbo-ID: d6044b32-ca9b-11ea-9fc1-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d6044b32-ca9b-11ea-9fc1-12813bfff9fa;
 Mon, 20 Jul 2020 15:15:25 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=Tznd19kl5Eo5HR+nffoBBu8NVwtfaZD1L6bx4e80cgg=; b=TqnAXciAybt4Kw/WnLr3bEQTNP
 SqNzPVJ3elvtYyu+/2LBteI/FplAe51jjKi6ioXrE4pf9AIN2z38oiI70oqp3sVDEPn9TxDWl/NNk
 zmP1J/icvs7v8Qnd2MjwTc/U9XMYAdqukFvY8PJUS8wLE4dAK+UBdd+QENFQADO5DxDw=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jxXVk-0000Vz-Is; Mon, 20 Jul 2020 15:15:24 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jxXVk-0002fG-Ao; Mon, 20 Jul 2020 15:15:24 +0000
Subject: Re: [PATCH 4/8] Arm: prune #include-s needed by domain.h
To: Jan Beulich <jbeulich@suse.com>
References: <3375cacd-d3b7-9f06-44a7-4b684b6a77d6@suse.com>
 <150525bb-1c48-c331-3212-eff18bc4c13d@suse.com>
 <d836dc7f-017b-5048-02de-d1cb291fbc3b@xen.org>
 <931149db-2daf-6d72-0330-c938b5084eb6@suse.com>
 <2cc66fdb-1da2-16cd-717a-3248d136821c@xen.org>
 <66a90945-0d3e-beee-4128-bfc3a06a7cf2@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <765f976a-e52d-d3e5-4481-32aaffb66db1@xen.org>
Date: Mon, 20 Jul 2020 16:15:22 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:68.0)
 Gecko/20100101 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <66a90945-0d3e-beee-4128-bfc3a06a7cf2@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi Jan,

On 20/07/2020 12:28, Jan Beulich wrote:
> On 20.07.2020 11:09, Julien Grall wrote:
>>
>>
>> On 20/07/2020 09:17, Jan Beulich wrote:
>>> On 17.07.2020 16:44, Julien Grall wrote:
>>>> On 15/07/2020 11:39, Jan Beulich wrote:
>>>>> --- a/xen/include/asm-arm/domain.h
>>>>> +++ b/xen/include/asm-arm/domain.h
>>>>> @@ -2,7 +2,7 @@
>>>>>     #define __ASM_DOMAIN_H__
>>>>>     
>>>>>     #include <xen/cache.h>
>>>>> -#include <xen/sched.h>
>>>>> +#include <xen/timer.h>
>>>>>     #include <asm/page.h>
>>>>>     #include <asm/p2m.h>
>>>>>     #include <asm/vfp.h>
>>>>> @@ -11,8 +11,6 @@
>>>>>     #include <asm/vgic.h>
>>>>>     #include <asm/vpl011.h>
>>>>>     #include <public/hvm/params.h>
>>>>> -#include <xen/serial.h>
>>>>
>>>> While we don't need the rbtree.h, we technically need serial.h for using
>>>> vuart_info.
>>>
>>> The only reference to it is
>>>
>>>           const struct vuart_info     *info;
>>>
>>> which doesn't require a definition nor even a forward declaration
>>> of struct vuart_info. It should just be source files instantiating
>>> a struct or de-referencing pointers to one that actually need to
>>> see the full declaration.
>>
>> Ah yes. I got confused because you introduced a forward declaration of
>> struct vcpu. But this is because you need it to declare the function
>> prototype.
> 
> As a result - are you happy for the change to go in with Stefano's
> ack then?

Yes. Sorry I should have been clearer in my previous answer.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Jul 20 15:20:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jul 2020 15:20:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxXal-0003F1-TK; Mon, 20 Jul 2020 15:20:35 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=WIjz=A7=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jxXak-0003Ew-Pv
 for xen-devel@lists.xenproject.org; Mon, 20 Jul 2020 15:20:34 +0000
X-Inumbo-ID: 8de45cbb-ca9c-11ea-9fc2-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 8de45cbb-ca9c-11ea-9fc2-12813bfff9fa;
 Mon, 20 Jul 2020 15:20:33 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 0A0AEB893;
 Mon, 20 Jul 2020 15:20:23 +0000 (UTC)
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] x86/S3: put data segment registers into known state upon
 resume
Message-ID: <3cad2798-1a01-7d5e-ea55-ddb9ba6388d9@suse.com>
Date: Mon, 20 Jul 2020 17:20:15 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 "M. Vefa Bicakci" <m.v.b@runbox.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

wakeup_32 sets %ds and %es to BOOT_DS, while leaving %fs at what
wakeup_start did set it to, and %gs at whatever BIOS did load into it.
All of this may end up confusing the first load_segments() to run on
the BSP after resume, in particular allowing a non-null selector value
to be left in %fs.

Alongside %ss, also put all other data segment registers into the same
state that the boot and CPU bringup paths put them in.

Reported-by: M. Vefa Bicakci <m.v.b@runbox.com>
Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/acpi/wakeup_prot.S
+++ b/xen/arch/x86/acpi/wakeup_prot.S
@@ -52,6 +52,16 @@ ENTRY(s3_resume)
         mov     %eax, %ss
         mov     saved_rsp(%rip), %rsp
 
+        /*
+         * Also put other segment registers into known state, like would
+         * be done on the boot path. This is in particular necessary for
+         * the first load_segments() to work as intended.
+         */
+        mov     %eax, %ds
+        mov     %eax, %es
+        mov     %eax, %fs
+        mov     %eax, %gs
+
         /* Reload code selector */
         pushq   $__HYPERVISOR_CS
         leaq    1f(%rip),%rax


From xen-devel-bounces@lists.xenproject.org Mon Jul 20 15:28:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jul 2020 15:28:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxXil-0003T8-Od; Mon, 20 Jul 2020 15:28:51 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gz2F=A7=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jxXij-0003T3-UC
 for xen-devel@lists.xenproject.org; Mon, 20 Jul 2020 15:28:49 +0000
X-Inumbo-ID: b502edba-ca9d-11ea-9fc5-12813bfff9fa
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b502edba-ca9d-11ea-9fc5-12813bfff9fa;
 Mon, 20 Jul 2020 15:28:48 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595258928;
 h=subject:to:cc:references:from:message-id:date:
 mime-version:in-reply-to:content-transfer-encoding;
 bh=dHkoChAb7fbL9I2v1zRaA2LTj9xG8/LBS67LHiUSLVU=;
 b=EuDAFR34NnElUE6ynL4NU4mlzfN3IV+ek6ZcgRjdA1aNngosI2d5EIDF
 qUwgD36JjSgyV6K2H2R+g8ynCdGGLXLsJ8FoWRWQx5vkwjAljwjeERRqX
 YDTCzvskwSmKJZtgrqvmml/bOJV3zx2yVwYbdbGBG63JSGYFx0OilSiSt s=;
Authentication-Results: esa5.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: E/KP/Tzve8NqTkwIRxA2gIUDMfLET6xYCGtDmozU41iJ0P858ydhiU13QJiNDAel4nTItjGPAv
 KSz1dy+D2e3eX1peVF+JbYKBj1iAluMtVqWEi3wK0xXO2mna0xq70n4oIfJkMiCPmr46Hret+T
 udbdAZmDP2lq4LMEQgw8rD/WKPac5TcUH1prg0XF4wgqB1x5bpCkPWFpxK+kFNC7ObtMgeihiS
 A+jAqIZO3jOeF3uTRYMFCKB0EKJBgEg+UrqbtZi3b0Ud5y4wvNSsdYZshGSSChv7D23znxLVe0
 ZGI=
X-SBRS: 2.7
X-MesageID: 22960947
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,375,1589256000"; d="scan'208";a="22960947"
Subject: Re: [PATCH v3 1/2] x86: restore pv_rtc_handler() invocation
To: Jan Beulich <jbeulich@suse.com>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>
References: <5426dd6f-50cd-dc23-5c6b-0ab631d98d38@suse.com>
 <7dd4b668-06ca-807a-9cc1-77430b2376a8@suse.com>
 <20200715121347.GY7191@Air-de-Roger>
 <2b9de0fd-5973-ed66-868c-ffadca83edf3@suse.com>
 <20200715133217.GZ7191@Air-de-Roger>
 <cd08f928-2be9-314b-56e6-bb414247caff@suse.com>
 <20200715145144.GA7191@Air-de-Roger>
 <ff1926c7-cc21-03ad-1dff-53c703450151@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <01509d7d-4cf3-7f3f-4aa1-eaa3b1d3b95b@citrix.com>
Date: Mon, 20 Jul 2020 16:28:24 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <ff1926c7-cc21-03ad-1dff-53c703450151@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>, Paul Durrant <paul@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 16/07/2020 11:06, Jan Beulich wrote:
> ACCESS_ONCE() guarantees single access, but doesn't guarantee that
> the compiler wouldn't split this single access into multiple insns.

ACCESS_ONCE() does guarantee single accesses for any natural integer size.

There is a section about this specifically in Linux's
memory-barriers.txt, and this isn't the first time I've pointed it out...

~Andrew


From xen-devel-bounces@lists.xenproject.org Mon Jul 20 16:14:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jul 2020 16:14:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxYR8-000872-PQ; Mon, 20 Jul 2020 16:14:42 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=wjZm=A7=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jxYR6-00086u-WE
 for xen-devel@lists.xenproject.org; Mon, 20 Jul 2020 16:14:41 +0000
X-Inumbo-ID: 1c4ffa34-caa4-11ea-9fce-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1c4ffa34-caa4-11ea-9fce-12813bfff9fa;
 Mon, 20 Jul 2020 16:14:38 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=0UMaA5tLeBtylROceV9amZWXXij9FfoUg9FV3+eghsg=; b=yguuOitXir6tiZTFtHYfzF1yM
 Xhe+gNfHTtnns6vvch0jB1pZXzUlCaTSmMYSETlu1BLNpQL/ga0wPwwJBByNc3pJ3i/np8Fh08ejE
 BFHjQ4L/f7pnhaj+rWM2cIFTyHjVhMpJkD3xbE3nKJaC2GZYDr+DCtUl+Pi1V8S1UJB1I=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jxYR4-0002Ez-E4; Mon, 20 Jul 2020 16:14:38 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jxYR3-00044S-UK; Mon, 20 Jul 2020 16:14:37 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jxYR3-00045E-ST; Mon, 20 Jul 2020 16:14:37 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-152032-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 152032: regressions - trouble: fail/pass/starved
X-Osstest-Failures: linux-linus:test-arm64-arm64-libvirt-xsm:guest-start/debian.repeat:fail:regression
 linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-qemuu-freebsd11-amd64:hosts-allocate:starved:nonblocking
 linux-linus:test-amd64-amd64-qemuu-freebsd12-amd64:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This: linux=5714ee50bb4375bd586858ad800b1d9772847452
X-Osstest-Versions-That: linux=1b5044021070efa3259f3e9548dc35d1eb6aa844
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 20 Jul 2020 16:14:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 152032 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/152032/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-libvirt-xsm 16 guest-start/debian.repeat fail REGR. vs. 151214

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 151214
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 151214
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 151214
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 151214
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 151214
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 151214
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 151214
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 151214
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-freebsd11-amd64  2 hosts-allocate           starved n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  2 hosts-allocate           starved n/a

version targeted for testing:
 linux                5714ee50bb4375bd586858ad800b1d9772847452
baseline version:
 linux                1b5044021070efa3259f3e9548dc35d1eb6aa844

Last test of basis   151214  2020-06-18 02:27:46 Z   32 days
Failing since        151236  2020-06-19 19:10:35 Z   30 days   49 attempts
Testing same since   152032  2020-07-20 02:22:49 Z    0 days    1 attempts

------------------------------------------------------------
829 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       starved 
 test-amd64-amd64-qemuu-freebsd12-amd64                       starved 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 45172 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Jul 20 16:17:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jul 2020 16:17:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxYTj-0008Do-AP; Mon, 20 Jul 2020 16:17:23 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Eely=A7=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jxYTi-0008Dj-0p
 for xen-devel@lists.xenproject.org; Mon, 20 Jul 2020 16:17:22 +0000
X-Inumbo-ID: 7b5816f6-caa4-11ea-9fcf-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7b5816f6-caa4-11ea-9fcf-12813bfff9fa;
 Mon, 20 Jul 2020 16:17:18 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:References:Cc:To:From:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=V4D91uulccwgXlZcaSMMNZR4kQIiJ11ee8tCH+aVQA0=; b=f5fs/yDPjLV8VPCJh1aduaAyNm
 juEd0EfPJnTEX9ki2purXTT70IQ8MUAXqzOSAbhD3Yhj+Tx5e3k7B31tQ+XbRZC+K2aUDxYZiHs3m
 Dc2NV7V3LD9R6O7l1cChuIMZr8rTnxNCKe25MmmUvedXRiLTbZFI+qLbEwkIWFhV2HOo=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jxYTd-0002Jp-W2; Mon, 20 Jul 2020 16:17:17 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jxYTd-0005cd-NS; Mon, 20 Jul 2020 16:17:17 +0000
Subject: Re: [PATCH 4/8] Arm: prune #include-s needed by domain.h
From: Julien Grall <julien@xen.org>
To: Jan Beulich <jbeulich@suse.com>
References: <3375cacd-d3b7-9f06-44a7-4b684b6a77d6@suse.com>
 <150525bb-1c48-c331-3212-eff18bc4c13d@suse.com>
 <d836dc7f-017b-5048-02de-d1cb291fbc3b@xen.org>
 <931149db-2daf-6d72-0330-c938b5084eb6@suse.com>
 <2cc66fdb-1da2-16cd-717a-3248d136821c@xen.org>
 <66a90945-0d3e-beee-4128-bfc3a06a7cf2@suse.com>
 <765f976a-e52d-d3e5-4481-32aaffb66db1@xen.org>
Message-ID: <c9ed723d-82ec-2fa9-a098-e67811809d61@xen.org>
Date: Mon, 20 Jul 2020 17:17:16 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:68.0)
 Gecko/20100101 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <765f976a-e52d-d3e5-4481-32aaffb66db1@xen.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>



On 20/07/2020 16:15, Julien Grall wrote:
> Hi Jan,
> 
> On 20/07/2020 12:28, Jan Beulich wrote:
>> On 20.07.2020 11:09, Julien Grall wrote:
>>>
>>>
>>> On 20/07/2020 09:17, Jan Beulich wrote:
>>>> On 17.07.2020 16:44, Julien Grall wrote:
>>>>> On 15/07/2020 11:39, Jan Beulich wrote:
>>>>>> --- a/xen/include/asm-arm/domain.h
>>>>>> +++ b/xen/include/asm-arm/domain.h
>>>>>> @@ -2,7 +2,7 @@
>>>>>>     #define __ASM_DOMAIN_H__
>>>>>>     #include <xen/cache.h>
>>>>>> -#include <xen/sched.h>
>>>>>> +#include <xen/timer.h>
>>>>>>     #include <asm/page.h>
>>>>>>     #include <asm/p2m.h>
>>>>>>     #include <asm/vfp.h>
>>>>>> @@ -11,8 +11,6 @@
>>>>>>     #include <asm/vgic.h>
>>>>>>     #include <asm/vpl011.h>
>>>>>>     #include <public/hvm/params.h>
>>>>>> -#include <xen/serial.h>
>>>>>
>>>>> While we don't need rbtree.h, we technically need serial.h for
>>>>> using vuart_info.
>>>>
>>>> The only reference to it is
>>>>
>>>>           const struct vuart_info     *info;
>>>>
>>>> which doesn't require a definition nor even a forward declaration
>>>> of struct vuart_info. It should just be source files instantiating
>>>> a struct or de-referencing pointers to one that actually need to
>>>> see the full declaration.
>>>
>>> Ah yes. I got confused because you introduced a forward declaration of
>>> struct vcpu. But this is because you need it to declare the function
>>> prototype.
>>
>> As a result - are you happy for the change to go in with Stefano's
>> ack then?
> 
> Yes. Sorry I should have been clearer in my previous answer.

I have committed now.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Jul 20 16:20:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jul 2020 16:20:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxYWy-0000as-Ul; Mon, 20 Jul 2020 16:20:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Eely=A7=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jxYWx-0000an-IJ
 for xen-devel@lists.xenproject.org; Mon, 20 Jul 2020 16:20:43 +0000
X-Inumbo-ID: f54ee9c6-caa4-11ea-84a5-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f54ee9c6-caa4-11ea-84a5-bc764e2007e4;
 Mon, 20 Jul 2020 16:20:42 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=Ony7tRUDmnp2M52H4Ead1BQvn2JkncG8N5OVB64bMsY=; b=HOKayEJ1n057vGkk1yfnFbgCPP
 oowFgCkZRD+PT0r4IUz6MpShYfuIRqyLEQpZQWdIiOelj+Wd37Ti1h1VJE0hW6/7+hgxWuVvLJFhl
 UKampjDMDUiXG1MmIByyVn9YMllfQ2QoY2ihshFTcHSm5RDkAe5f2mpTRFVRP6vmTgxg=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jxYWv-0002N1-Uk; Mon, 20 Jul 2020 16:20:41 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jxYWv-0005lr-NQ; Mon, 20 Jul 2020 16:20:41 +0000
Subject: Re: [PATCH 5/8] bitmap: move to/from xenctl_bitmap conversion helpers
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <3375cacd-d3b7-9f06-44a7-4b684b6a77d6@suse.com>
 <5835147f-8428-1d74-7d6e-bbb5522289c7@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <fef25c94-a3ce-c17e-966c-a7e479566fc5@xen.org>
Date: Mon, 20 Jul 2020 17:20:39 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:68.0)
 Gecko/20100101 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <5835147f-8428-1d74-7d6e-bbb5522289c7@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Ian Jackson <ian.jackson@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi Jan,

On 15/07/2020 11:40, Jan Beulich wrote:
> A subsequent change will exclude domctl.c from getting built for a
> particular configuration, yet the two functions get used from elsewhere.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> --- a/xen/common/bitmap.c
> +++ b/xen/common/bitmap.c
> @@ -9,6 +9,9 @@
>   #include <xen/errno.h>
>   #include <xen/bitmap.h>
>   #include <xen/bitops.h>
> +#include <xen/cpumask.h>
> +#include <xen/domain.h>

The inclusion of xen/domain.h in common/bitmap.c seems a bit odd to me. 
Would it make sense to move the prototypes of 
bitmap_to_xenctl_bitmap()/xenctl_bitmap_to_bitmap() to bitmap.h?
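The suggested reorganisation would look roughly like this (an interface sketch only; the parameter lists are assumed from how the helpers are used, not copied from the tree):

```c
/* xen/include/xen/bitmap.h (hypothetical excerpt): the conversion
 * helpers can sit next to the other bitmap operations.  A forward
 * declaration of the tool-interface type suffices, since only
 * pointers to it appear in the prototypes. */
struct xenctl_bitmap;

int bitmap_to_xenctl_bitmap(struct xenctl_bitmap *xenctl_bitmap,
                            const unsigned long *bitmap,
                            unsigned int nbits);
int xenctl_bitmap_to_bitmap(unsigned long *bitmap,
                            const struct xenctl_bitmap *xenctl_bitmap,
                            unsigned int nbits);
```

That would let common/bitmap.c include only bitmap.h (plus cpumask.h as needed) rather than xen/domain.h.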

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Jul 20 16:27:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jul 2020 16:27:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxYd8-0000nb-Qf; Mon, 20 Jul 2020 16:27:06 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=WIjz=A7=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jxYd7-0000nW-O6
 for xen-devel@lists.xenproject.org; Mon, 20 Jul 2020 16:27:05 +0000
X-Inumbo-ID: d8c650fe-caa5-11ea-84a5-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d8c650fe-caa5-11ea-84a5-bc764e2007e4;
 Mon, 20 Jul 2020 16:27:04 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 515B8B662;
 Mon, 20 Jul 2020 16:27:10 +0000 (UTC)
Subject: Re: [PATCH v3 1/2] x86: restore pv_rtc_handler() invocation
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <5426dd6f-50cd-dc23-5c6b-0ab631d98d38@suse.com>
 <7dd4b668-06ca-807a-9cc1-77430b2376a8@suse.com>
 <20200715121347.GY7191@Air-de-Roger>
 <2b9de0fd-5973-ed66-868c-ffadca83edf3@suse.com>
 <20200715133217.GZ7191@Air-de-Roger>
 <cd08f928-2be9-314b-56e6-bb414247caff@suse.com>
 <20200715145144.GA7191@Air-de-Roger>
 <ff1926c7-cc21-03ad-1dff-53c703450151@suse.com>
 <01509d7d-4cf3-7f3f-4aa1-eaa3b1d3b95b@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <e15eb2d0-800f-4fbd-6d58-8bceb408593f@suse.com>
Date: Mon, 20 Jul 2020 18:27:02 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <01509d7d-4cf3-7f3f-4aa1-eaa3b1d3b95b@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Paul Durrant <paul@xen.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 20.07.2020 17:28, Andrew Cooper wrote:
> On 16/07/2020 11:06, Jan Beulich wrote:
>> ACCESS_ONCE() guarantees single access, but doesn't guarantee that
>> the compiler wouldn't split this single access into multiple insns.
> 
> ACCESS_ONCE() does guarantee single accesses for any natural integer size.
> 
> There is a section about this specifically in Linux's
> memory-barriers.txt, and this isn't the first time I've pointed it out...

There indeed is text stating this, but I can't find any word on
why they believe this is the case. My understanding of volatile
is that it guarantees no more (and also no less) accesses to
any single storage location than indicated by the source. But
it doesn't prevent "tearing" of accesses. And really, how could
it, considering that volatile can also be applied to types that
aren't basic ones, and hence in particular to ones that can't
possibly be accessed by a single insn?

I'd be glad to see you point out the aspect I'm missing here.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jul 20 16:32:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jul 2020 16:32:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxYiA-0001cv-Ll; Mon, 20 Jul 2020 16:32:18 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2q/L=A7=gmail.com=jaromir.dolecek@srs-us1.protection.inumbo.net>)
 id 1jxYiA-0001cq-1m
 for xen-devel@lists.xenproject.org; Mon, 20 Jul 2020 16:32:18 +0000
X-Inumbo-ID: 9338a892-caa6-11ea-84a5-bc764e2007e4
Received: from mail-vs1-xe2f.google.com (unknown [2607:f8b0:4864:20::e2f])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9338a892-caa6-11ea-84a5-bc764e2007e4;
 Mon, 20 Jul 2020 16:32:17 +0000 (UTC)
Received: by mail-vs1-xe2f.google.com with SMTP id q15so8821596vso.9
 for <xen-devel@lists.xenproject.org>; Mon, 20 Jul 2020 09:32:17 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc:content-transfer-encoding;
 bh=wfkWNTGr7rI1M7TmzCRQ5/XZvg9x2RK3EJGaSSXIjwc=;
 b=h2CVDI0UrkjbNx6BHtZc+AfRHn3d9eD0t5LW0+BGdlDxgqZHXxkr5jJm/mcmoy5BlB
 E34N+LkXAZ3/o1/gaFc3NRDprxrhAKUC/pfQcnJh/OVcpdYsxopA5LZnRccVH7792mKu
 Yp1srZGmaM3BrHfXywBZXR6CFicSOvdsfwfyGCq3gBHBo+EpBiSIzk4cfdsTBWn1fXek
 YufuZbup7NQbwnQwX/Vgm8VDAcwU47naeaB6p2pGp1PDGj8lKyhnB94vNv2adLUT7HEN
 wL0jiIzZ4b/SsCOks4G/OMh42oxxLDhqLAo6j0EH97CfYbvKQwFoDF3SCIXa0QONp4Lo
 TbSg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc:content-transfer-encoding;
 bh=wfkWNTGr7rI1M7TmzCRQ5/XZvg9x2RK3EJGaSSXIjwc=;
 b=jwZqyvbbK9ZaJvxWPFJZL/JYE1rwC8r/bhxv97LeAz/QXIxgJgL85sU+3Hy5BAhQ76
 GZh5cCCHo08zgqJMchu6r7SNAKS2k/8xY+8MuRCkArboLyUDWgX033QEHj1ySUJnr3ik
 Kaq9mrMl8BS09NAiyZ/HFx+Bh0Usg9ged+e7qz20zSNfTdNJiBQg9G8t+Ddyyk0fVj9N
 Cwp4NLu/jGXbMUq9Zy+oBmkzAvUXqWvrkw18BKoobXy736pZZur/aFREHtscpKidJrSw
 yN1kbati50ujx2g1QjItKTu6Dfr1WWK7n8qNcRZvD1Lhy683x4Q86X8I1tiGawGq3R8e
 ZVlQ==
X-Gm-Message-State: AOAM532vnq7rRKFf49nQfR88HNBBHurL7zZRKbghDcUl2wld5014Ho0a
 H9i7vVntMXb0/ystT5TJHCBFArp9IxO2fPRHi5c=
X-Google-Smtp-Source: ABdhPJzOTheOnAr0eEUHM/ZlqAgCvRlFJEanTPVC07ExvSJ9ujL0f6oBlRqxMnmovrezxud2+gVAJmf57TMm/uofcvg=
X-Received: by 2002:a05:6102:3188:: with SMTP id
 c8mr17832075vsh.61.1595262736913; 
 Mon, 20 Jul 2020 09:32:16 -0700 (PDT)
MIME-Version: 1.0
References: <CAMnsW5542gmBLpKBsW5pnm=2VXmaDVHzg=OXXvBdu1BsYLdDvQ@mail.gmail.com>
 <20200720070010.GC7191@Air-de-Roger>
 <0a389f69-2c6b-e564-c6b5-c8f09ed66de0@suse.com>
In-Reply-To: <0a389f69-2c6b-e564-c6b5-c8f09ed66de0@suse.com>
From: =?UTF-8?B?SmFyb23DrXIgRG9sZcSNZWs=?= <jaromir.dolecek@gmail.com>
Date: Mon, 20 Jul 2020 18:32:06 +0200
Message-ID: <CAMnsW55Cnw54_P=_Np4h1siaCPgDjwRFGQ_pUAJEkLELTSEW+Q@mail.gmail.com>
Subject: Re: Advice for implementing MSI-X support on NetBSD Xen 4.11?
To: Jan Beulich <jbeulich@suse.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Mon, 20 Jul 2020 at 09:00, Roger Pau Monné <roger.pau@citrix.com> wrote:
> You need to set entry_nr and table_base.

Yes, I do that. I use the table_base set to what would be written to
the register for "native".

> Are you enabling the capability and unmasking the interrupt in the
> MSI-X table?

Yes, I'm doing that.

> There are also the Xen debug keys which can be helpful, take a look at
> 'i' and 'M'.

OK, I'll check that.

On Mon, 20 Jul 2020 at 10:47, Jan Beulich <jbeulich@suse.com> wrote:
> Is this effort for PV or PVH? If the former, I don't think Dom0 is
> supposed to write directly to any of these structures. This is all
> intended to be hypercall based, despite us intercepting and trying
> to emulate direct accesses.
>
> Jaromír - are you making use of PHYSDEVOP_prepare_msix?

It's PV for now. I already skip the step to actually write the table
vectors when setting this up, same as I do for MSI. I still write the
registers to enable MSI-X.

I was not aware of PHYSDEVOP_prepare_msix; I did not notice it when
looking at what the Linux kernel does - I'll check it.

Thanks.

Jaromir


From xen-devel-bounces@lists.xenproject.org Mon Jul 20 16:59:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jul 2020 16:59:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxZ85-0003Zc-6b; Mon, 20 Jul 2020 16:59:05 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gz2F=A7=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jxZ83-0003ZX-Mi
 for xen-devel@lists.xenproject.org; Mon, 20 Jul 2020 16:59:03 +0000
X-Inumbo-ID: 4f39d1f8-caaa-11ea-9fe1-12813bfff9fa
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4f39d1f8-caaa-11ea-9fe1-12813bfff9fa;
 Mon, 20 Jul 2020 16:59:02 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595264343;
 h=from:to:cc:subject:date:message-id:mime-version:
 content-transfer-encoding;
 bh=CwfgH+SVS7p5qhpGDrc3g/YyX6iBPChRKZUui8AeAvo=;
 b=aPpdi1MR/XZISNACKIqMZguDGPr1GKwhtzk5Ynq8UfQq9JclKqqONTQA
 h16mGASifodnSqWubz2UpcLjmIXBF1HdiS1vSv73ZRhfJJY90EKmfSYmA
 5r55a9Au9srK+GHgFDicoMuE8cNlFc1yjQSXKzHjl+foE3RAysigEWoXv c=;
Authentication-Results: esa1.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: OAp5d3Yvfz4GcMFx61wDzJy+QB4s9MoSlndlhDqXgfPhZ2JSJSpa/+j2eBdFRIOY0tlUdpUlCa
 6EOlnhxCcFQVxjr2OWAy9/QhDJ0fuEwBTiJ2NdVcSJpwmVXkgVcIFfh4j+OEe8AyPqTqDxo5Pj
 aUgB85aEtsmJMfaeNC2dDwoqqapnUqbY3alH0yBz3LzZmvIzOjGa66p3ii4IzPh1uY/GapmRtW
 o2IHZ/6234bjgKY+IuUMhVvdX3PVcQ9It8vb2yyMcWg2lu9W0nWUrtRFDAllrpD2mcy8QtpBHZ
 qN0=
X-SBRS: 2.7
X-MesageID: 23105255
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,375,1589256000"; d="scan'208,223";a="23105255"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Subject: [PATCH] docs: Replace non-UTF-8 character in hypfs-paths.pandoc
Date: Mon, 20 Jul 2020 17:58:33 +0100
Message-ID: <20200720165833.14209-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, George Dunlap <George.Dunlap@eu.citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich <JBeulich@suse.com>,
 Ian Jackson <ian.jackson@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From the docs cronjob on xenbits:

  /usr/bin/pandoc --number-sections --toc --standalone misc/hypfs-paths.pandoc --output html/misc/hypfs-paths.html
  pandoc: Cannot decode byte '\x92': Data.Text.Internal.Encoding.decodeUtf8: Invalid UTF-8 stream
  make: *** [Makefile:236: html/misc/hypfs-paths.html] Error 1

Fixes: 5a4a411bde4 ("docs: specify stability of hypfs path documentation")
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Juergen Gross <jgross@suse.com>
CC: George Dunlap <George.Dunlap@eu.citrix.com>
CC: Ian Jackson <ian.jackson@citrix.com>
CC: Jan Beulich <JBeulich@suse.com>
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Wei Liu <wl@xen.org>
CC: Julien Grall <julien@xen.org>
---
 docs/misc/hypfs-paths.pandoc | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/misc/hypfs-paths.pandoc b/docs/misc/hypfs-paths.pandoc
index 81d70bb80c..dddb592bc5 100644
--- a/docs/misc/hypfs-paths.pandoc
+++ b/docs/misc/hypfs-paths.pandoc
@@ -74,7 +74,7 @@ you write finds a path present, it can rely on behavior in future versions of
 the hypervisors, and in different configurations.  Specifically:
 
 1. Conditions under which paths are used may be extended, restricted, or
-   removed.  For example, a path thats always available only on ARM systems
+   removed.  For example, a path that's always available only on ARM systems
    may become available on x86; or a path available on both systems may be
    restricted to only appearing on ARM systems.  Paths may also disappear
    entirely.
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Mon Jul 20 17:07:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jul 2020 17:07:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxZFu-0004cr-Ej; Mon, 20 Jul 2020 17:07:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ezcM=A7=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1jxZFt-0004cm-V5
 for xen-devel@lists.xenproject.org; Mon, 20 Jul 2020 17:07:10 +0000
X-Inumbo-ID: 71e092c2-caab-11ea-84aa-bc764e2007e4
Received: from mail-wr1-x442.google.com (unknown [2a00:1450:4864:20::442])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 71e092c2-caab-11ea-84aa-bc764e2007e4;
 Mon, 20 Jul 2020 17:07:09 +0000 (UTC)
Received: by mail-wr1-x442.google.com with SMTP id b6so18575411wrs.11
 for <xen-devel@lists.xenproject.org>; Mon, 20 Jul 2020 10:07:09 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
 :mime-version:content-transfer-encoding:thread-index
 :content-language;
 bh=BXp16PYZ88DKP4IeAclMUujGE9G/rGnyLqmlCZUuTIU=;
 b=OmCweoUSkMDNDsRxKUmPLNnKezVSuiYvUHDjj5RnntG/FEYpR8V8r2LkFIiXE0J/O7
 DNX54VuZo9fK/ayuC1b8mkR2sX6sv3EKySXLvrCseJ2k3qATcU+h0uJrh61LFVrBBf5d
 Qv4g3BF88SXDX+HxYllBMNsgNkzMRjAJHwDiUANorhXlFnu7akEio950qWsxjHe1LYOP
 /d7PW0DjBILoXNB6fuE4+Q/FtyfJiCxhcWA16T+vgA/2wRvu2j/zBBcw25OtpwWfWgig
 l+khMnf302wxWbvksFa4buSnJdG88LreNabODrxKvClbUlM8Q0L6oqxXtgkMAzgskOic
 RARA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
 :subject:date:message-id:mime-version:content-transfer-encoding
 :thread-index:content-language;
 bh=BXp16PYZ88DKP4IeAclMUujGE9G/rGnyLqmlCZUuTIU=;
 b=uEiF2RPe2S73A9lHK7blUP4OfXBY8GGc7EN93O2m901G8LlSqHBoKtsr1WTDMF6CP6
 5vU4Qfecp6QjGI2HwJupsbNYyFJxbQNZ9actupweQ8EkG9UXP86QM3H9Y7wzQaEb3LPa
 ZvF55OeCZn8OKawa+9Rm3np4ZYUovc9MVGI92BDxkQI/RqoywgJpHLWtb/MNzCrSzogC
 iHb4YRn52IOFDrre9Fz69rCReLHfPcbR+bdF+YxcECR5mmqnXTElp7X2hULwe3RuifWe
 wKutYfQ1D/Udz85iWiwtGFgN0OqWqgApZJBbbzemsA9wfB1tIFhDq5WoycJ2JX5M6sY9
 qUMw==
X-Gm-Message-State: AOAM533kd5Pthm7LmIFVElrjtPpOHYDuHL8dSFfwg1QrhZ9adjSA7XBM
 HiXPLGl0zJP5nARgRYieMaE=
X-Google-Smtp-Source: ABdhPJzCHtqpPB43LV5YDtuJ+ZWnXDRQdkntDQeRNOw3Eo9FfWKFAsN71w8jhdS+SWu9govVzxdxCg==
X-Received: by 2002:a05:6000:120c:: with SMTP id
 e12mr6095116wrx.354.1595264828299; 
 Mon, 20 Jul 2020 10:07:08 -0700 (PDT)
Received: from CBGR90WXYV0 (54-240-197-226.amazon.com. [54.240.197.226])
 by smtp.gmail.com with ESMTPSA id j16sm35331107wrt.7.2020.07.20.10.07.06
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Mon, 20 Jul 2020 10:07:07 -0700 (PDT)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
To: "'Andrew Cooper'" <andrew.cooper3@citrix.com>,
 "'Xen-devel'" <xen-devel@lists.xenproject.org>
References: <20200720165833.14209-1-andrew.cooper3@citrix.com>
In-Reply-To: <20200720165833.14209-1-andrew.cooper3@citrix.com>
Subject: RE: [PATCH] docs: Replace non-UTF-8 character in hypfs-paths.pandoc
Date: Mon, 20 Jul 2020 18:07:06 +0100
Message-ID: <003101d65eb8$32dc7370$98955a50$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="utf-8"
Content-Transfer-Encoding: quoted-printable
X-Mailer: Microsoft Outlook 16.0
Thread-Index: AQHn4cDTNHEvrMr7/+cssb2RABXLIajtrKoQ
Content-Language: en-gb
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Reply-To: paul@xen.org
Cc: 'Juergen Gross' <jgross@suse.com>,
 'Stefano Stabellini' <sstabellini@kernel.org>, 'Julien Grall' <julien@xen.org>,
 'Wei Liu' <wl@xen.org>, 'George Dunlap' <George.Dunlap@eu.citrix.com>,
 'Jan Beulich' <JBeulich@suse.com>, 'Ian Jackson' <ian.jackson@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> -----Original Message-----
> From: Xen-devel <xen-devel-bounces@lists.xenproject.org> On Behalf Of Andrew Cooper
> Sent: 20 July 2020 17:59
> To: Xen-devel <xen-devel@lists.xenproject.org>
> Cc: Juergen Gross <jgross@suse.com>; Stefano Stabellini <sstabellini@kernel.org>; Julien Grall
> <julien@xen.org>; Wei Liu <wl@xen.org>; George Dunlap <George.Dunlap@eu.citrix.com>; Andrew Cooper
> <andrew.cooper3@citrix.com>; Jan Beulich <JBeulich@suse.com>; Ian Jackson <ian.jackson@citrix.com>
> Subject: [PATCH] docs: Replace non-UTF-8 character in hypfs-paths.pandoc
>
> From the docs cronjob on xenbits:
>
>   /usr/bin/pandoc --number-sections --toc --standalone misc/hypfs-paths.pandoc --output
> html/misc/hypfs-paths.html
>   pandoc: Cannot decode byte '\x92': Data.Text.Internal.Encoding.decodeUtf8: Invalid UTF-8 stream
>   make: *** [Makefile:236: html/misc/hypfs-paths.html] Error 1
>
> Fixes: 5a4a411bde4 ("docs: specify stability of hypfs path documentation")
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Release-acked-by: Paul Durrant <paul@xen.org>

...and please commit to staging-4.14 a.s.a.p.

> ---
> CC: Juergen Gross <jgross@suse.com>
> CC: George Dunlap <George.Dunlap@eu.citrix.com>
> CC: Ian Jackson <ian.jackson@citrix.com>
> CC: Jan Beulich <JBeulich@suse.com>
> CC: Stefano Stabellini <sstabellini@kernel.org>
> CC: Wei Liu <wl@xen.org>
> CC: Julien Grall <julien@xen.org>
> ---
>  docs/misc/hypfs-paths.pandoc | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/docs/misc/hypfs-paths.pandoc b/docs/misc/hypfs-paths.pandoc
> index 81d70bb80c..dddb592bc5 100644
> --- a/docs/misc/hypfs-paths.pandoc
> +++ b/docs/misc/hypfs-paths.pandoc
> @@ -74,7 +74,7 @@ you write finds a path present, it can rely on behavior in future versions of
>  the hypervisors, and in different configurations.  Specifically:
>
>  1. Conditions under which paths are used may be extended, restricted, or
> -   removed.  For example, a path that�s always available only on ARM systems
> +   removed.  For example, a path that's always available only on ARM systems
>     may become available on x86; or a path available on both systems may be
>     restricted to only appearing on ARM systems.  Paths may also disappear
>     entirely.
> --
> 2.11.0
>
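[Editor's note: the offending byte in a patch like the above can be located with a short script. This is an illustrative sketch only, not part of the thread; the sample string mirrors the 0x92 (cp1252 right single quote) case the patch fixes.]

```python
def find_invalid_utf8(data: bytes):
    """Return the offsets of bytes that break UTF-8 decoding."""
    bad_offsets = []
    pos = 0
    while pos < len(data):
        try:
            data[pos:].decode("utf-8")
            break  # remainder of the buffer decodes cleanly
        except UnicodeDecodeError as e:
            # e.start is relative to the slice; convert to an absolute offset
            bad_offsets.append(pos + e.start)
            pos = pos + e.start + 1  # skip past the bad byte and keep scanning
    return bad_offsets

# A lone 0x92 in otherwise-ASCII text, as in the hypfs-paths.pandoc case:
sample = b"a path that\x92s always available"
print(find_invalid_utf8(sample))  # -> [11]
```

Running this over a whole file (`open(path, "rb").read()`) reports every offset pandoc would choke on.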




From xen-devel-bounces@lists.xenproject.org Mon Jul 20 17:08:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jul 2020 17:08:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxZHe-0004l8-Rt; Mon, 20 Jul 2020 17:08:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gz2F=A7=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jxZHd-0004l3-Hf
 for xen-devel@lists.xenproject.org; Mon, 20 Jul 2020 17:08:57 +0000
X-Inumbo-ID: b1c86eb4-caab-11ea-84aa-bc764e2007e4
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b1c86eb4-caab-11ea-84aa-bc764e2007e4;
 Mon, 20 Jul 2020 17:08:56 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595264937;
 h=subject:to:cc:references:from:message-id:date:
 mime-version:in-reply-to:content-transfer-encoding;
 bh=Z3qFN0lLsStvYXDINuO1I0grHoWYEzgxBmOYzHIzosM=;
 b=V09sHkTPOEAGadz5I22Mp6/1DslVUoSodApPq4RUn4cdTJK+JYri/4m5
 Vqle/P1Bfo+7lcWkK8dCHTnalqmHqlg/X2voEmVNWcbrWu2UaFFSx+res
 DCPh4/IZ+4awuoH4MV7Z54o5hoRC8/M3PQjHIFUHesZCokCZtpm9m8IV4 Y=;
Authentication-Results: esa3.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: Bk28VWkLRWalT2SnJf5/zCvJK5Srvu4pCjMM+yf/qCXopsamlJqyxXXFSdAeTNxqVpVjjzEFe3
 AuRtSF3dZOKBG5aDwlQ5aKAH+ed5pBGvmcEAOa35PSzMD6jsIS1xv0t5KI++xy9h4ahYkkpgPB
 ndN+ZPgpid9AUwUCqdgt/bR23x8XTWnOA0O+C8s0KpGCsoeXJ3S74bIbCseqXta+KVmVKrw+q3
 t+sf2joEULODHCBUU/DuWUGW9tTryQPzWLnCWh06uwiPmpPywKnZi48MRbAZYXUeNfbhvGa4PM
 YLg=
X-SBRS: 2.7
X-MesageID: 22776539
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,375,1589256000"; d="scan'208";a="22776539"
Subject: Re: [PATCH] docs: Replace non-UTF-8 character in hypfs-paths.pandoc
To: <paul@xen.org>, 'Xen-devel' <xen-devel@lists.xenproject.org>
References: <20200720165833.14209-1-andrew.cooper3@citrix.com>
 <003101d65eb8$32dc7370$98955a50$@xen.org>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <22659eaa-78b4-dc41-d607-e09b75efc97f@citrix.com>
Date: Mon, 20 Jul 2020 18:08:51 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <003101d65eb8$32dc7370$98955a50$@xen.org>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: 'Juergen Gross' <jgross@suse.com>,
 'Stefano Stabellini' <sstabellini@kernel.org>, 'Julien Grall' <julien@xen.org>,
 'Wei Liu' <wl@xen.org>, 'George Dunlap' <George.Dunlap@eu.citrix.com>,
 'Jan Beulich' <JBeulich@suse.com>, 'Ian Jackson' <ian.jackson@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 20/07/2020 18:07, Paul Durrant wrote:
>> -----Original Message-----
>> From: Xen-devel <xen-devel-bounces@lists.xenproject.org> On Behalf Of Andrew Cooper
>> Sent: 20 July 2020 17:59
>> To: Xen-devel <xen-devel@lists.xenproject.org>
>> Cc: Juergen Gross <jgross@suse.com>; Stefano Stabellini <sstabellini@kernel.org>; Julien Grall
>> <julien@xen.org>; Wei Liu <wl@xen.org>; George Dunlap <George.Dunlap@eu.citrix.com>; Andrew Cooper
>> <andrew.cooper3@citrix.com>; Jan Beulich <JBeulich@suse.com>; Ian Jackson <ian.jackson@citrix.com>
>> Subject: [PATCH] docs: Replace non-UTF-8 character in hypfs-paths.pandoc
>>
>> From the docs cronjob on xenbits:
>>
>>   /usr/bin/pandoc --number-sections --toc --standalone misc/hypfs-paths.pandoc --output
>> html/misc/hypfs-paths.html
>>   pandoc: Cannot decode byte '\x92': Data.Text.Internal.Encoding.decodeUtf8: Invalid UTF-8 stream
>>   make: *** [Makefile:236: html/misc/hypfs-paths.html] Error 1
>>
>> Fixes: 5a4a411bde4 ("docs: specify stability of hypfs path documentation")
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Release-acked-by: Paul Durrant <paul@xen.org>
>
> ...and please commit to staging-4.14 a.s.a.p.

Thanks, and done.

~Andrew


From xen-devel-bounces@lists.xenproject.org Mon Jul 20 17:22:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jul 2020 17:22:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxZUZ-0006Pp-5d; Mon, 20 Jul 2020 17:22:19 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=wjZm=A7=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jxZUY-0006Pk-45
 for xen-devel@lists.xenproject.org; Mon, 20 Jul 2020 17:22:18 +0000
X-Inumbo-ID: 8ed5d818-caad-11ea-9fe8-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 8ed5d818-caad-11ea-9fe8-12813bfff9fa;
 Mon, 20 Jul 2020 17:22:16 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=H5uFQ7+XftbpFVRaonzZwpPpefwsj6C8+Ais5R1otv4=; b=ID/SNN0WagwNyvNLuEAufOPrc
 VMDEMdGFbRNvUc7ESrSjA26iHppMx7Acyqhl81dswbx4i5vxRY2YtkIKUuxpy2RNRl7brjHnS3V9I
 8xyTIQr6MB6S4XJm/A2aV4kXTkj+fh1Iff1Y3Ai2qzCcK88El/OuUO9Q/TnrDGMbgvZBg=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jxZUW-0003hS-25; Mon, 20 Jul 2020 17:22:16 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jxZUV-0007j7-KH; Mon, 20 Jul 2020 17:22:15 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jxZUV-0007Or-Jd; Mon, 20 Jul 2020 17:22:15 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-152046-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 152046: tolerable all pass - PUSHED
X-Osstest-Failures: xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=5a4a411bde4f73ff8ce43d6e52b77302973e8f68
X-Osstest-Versions-That: xen=8c4532f19d6925538fb0c938f7de9a97da8c5c3b
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 20 Jul 2020 17:22:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 152046 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/152046/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  5a4a411bde4f73ff8ce43d6e52b77302973e8f68
baseline version:
 xen                  8c4532f19d6925538fb0c938f7de9a97da8c5c3b

Last test of basis   152041  2020-07-20 11:00:34 Z    0 days
Testing same since   152046  2020-07-20 14:00:33 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   8c4532f19d..5a4a411bde  5a4a411bde4f73ff8ce43d6e52b77302973e8f68 -> smoke


From xen-devel-bounces@lists.xenproject.org Mon Jul 20 17:36:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jul 2020 17:36:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxZiW-0007ZA-Sa; Mon, 20 Jul 2020 17:36:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Eely=A7=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jxZiW-0007Z5-97
 for xen-devel@lists.xenproject.org; Mon, 20 Jul 2020 17:36:44 +0000
X-Inumbo-ID: 93c6e14e-caaf-11ea-84ac-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 93c6e14e-caaf-11ea-84ac-bc764e2007e4;
 Mon, 20 Jul 2020 17:36:43 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
 MIME-Version:Content-Type:Content-Transfer-Encoding:Content-ID:
 Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc
 :Resent-Message-ID:In-Reply-To:References:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=xTT211vu9tMK586W15vIDXiAhMSFIMbKUszxNIgH8mo=; b=p+qpisSlQLSIjGsF/xirQ/iTdk
 b6gnMD+m4WvNXvR9H27RH/FNZEdF2DnsTh+D19GiZZQ93Eevh4CPrM8VQWlXqAZ/dZgZGeA1YBIOa
 Ex6Ssmf5gEkOcG97iRAAOaUdwqUSztjyQwPpKopHErG6g8PdvUEBFJVSi0GfAWg7eWYM=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jxZiV-0003zT-4o; Mon, 20 Jul 2020 17:36:43 +0000
Received: from 54-240-197-227.amazon.com ([54.240.197.227]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jxZiU-000568-OL; Mon, 20 Jul 2020 17:36:43 +0000
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [PATCH] SUPPORT.md: Spell correctly Experimental
Date: Mon, 20 Jul 2020 18:36:35 +0100
Message-Id: <20200720173635.1571-1-julien@xen.org>
X-Mailer: git-send-email 2.17.1
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, paul@xen.org, Andrew Cooper <andrew.cooper3@citrix.com>,
 Julien Grall <jgrall@amazon.com>, Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Julien Grall <jgrall@amazon.com>

Signed-off-by: Julien Grall <jgrall@amazon.com>
---
 SUPPORT.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/SUPPORT.md b/SUPPORT.md
index b81d36eea541..1479055c450d 100644
--- a/SUPPORT.md
+++ b/SUPPORT.md
@@ -249,13 +249,13 @@ to boot with memory < maxmem.
 
 Allow sharing of identical pages between guests
 
-    Status, x86 HVM: Expermental
+    Status, x86 HVM: Experimental
 
 ### Memory Paging
 
 Allow pages belonging to guests to be paged to disk
 
-    Status, x86 HVM: Experimenal
+    Status, x86 HVM: Experimental
 
 ### Alternative p2m
 
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Mon Jul 20 17:37:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jul 2020 17:37:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxZjN-0007br-6X; Mon, 20 Jul 2020 17:37:37 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=dhL3=A7=xen.org=tim@srs-us1.protection.inumbo.net>)
 id 1jxZjM-0007bi-89
 for xen-devel@lists.xenproject.org; Mon, 20 Jul 2020 17:37:36 +0000
X-Inumbo-ID: b228c418-caaf-11ea-84ac-bc764e2007e4
Received: from deinos.phlegethon.org (unknown [2001:41d0:8:b1d7::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b228c418-caaf-11ea-84ac-bc764e2007e4;
 Mon, 20 Jul 2020 17:37:35 +0000 (UTC)
Received: from tjd by deinos.phlegethon.org with local (Exim 4.92.3 (FreeBSD))
 (envelope-from <tim@xen.org>)
 id 1jxZjJ-000ILK-Oo; Mon, 20 Jul 2020 17:37:33 +0000
Date: Mon, 20 Jul 2020 18:37:33 +0100
From: Tim Deegan <tim@xen.org>
To: Jan Beulich <jbeulich@suse.com>
Subject: Re: [PATCH 5/5] x86/shadow: l3table[] and gl3e[] are HVM only
Message-ID: <20200720173733.GA70485@deinos.phlegethon.org>
References: <a4dc8db4-0388-a922-838e-42c6f4635639@suse.com>
 <a3b9b496-e860-e657-2afc-c0658871fa3f@suse.com>
 <20200718182037.GA48915@deinos.phlegethon.org>
 <1baa0d50-86a4-b0ba-d43a-ad0c0446b54b@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
In-Reply-To: <1baa0d50-86a4-b0ba-d43a-ad0c0446b54b@suse.com>
User-Agent: Mutt/1.11.1 (2018-12-01)
X-SA-Known-Good: Yes
X-SA-Exim-Connect-IP: <locally generated>
X-SA-Exim-Mail-From: tim@xen.org
X-SA-Exim-Scanned: No (on deinos.phlegethon.org);
 SAEximRunCond expanded to false
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

At 10:55 +0200 on 20 Jul (1595242521), Jan Beulich wrote:
> On 18.07.2020 20:20, Tim Deegan wrote:
> > At 12:00 +0200 on 15 Jul (1594814409), Jan Beulich wrote:
> >> ... by the very fact that they're 3-level specific, while PV always gets
> >> run in 4-level mode. This requires adding some seemingly redundant
> >> #ifdef-s - some of them will be possible to drop again once 2- and
> >> 3-level guest code doesn't get built anymore in !HVM configs, but I'm
> >> afraid there's still quite a bit of disentangling work to be done to
> >> make this possible.
> >>
> >> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> > 
> > Looks good.  It seems like the new code for '3-level non-HVM' in
> > guest-walks ought to have some sort of assert-unreachable in them too
> > - or is there a reason not to?
> 
> You mean this piece of code
> 
> +#elif !defined(CONFIG_HVM)
> +    (void)root_gfn;
> +    memset(gw, 0, sizeof(*gw));
> +    return false;
> +#else /* PAE */
> 
> If so - sure, ASSERT_UNREACHABLE() could be added there. It simply
> didn't occur to me. I take it your ack for the entire series holds
> here with this addition.

Yes, it does.  Thanks!

Tim.


From xen-devel-bounces@lists.xenproject.org Mon Jul 20 17:45:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jul 2020 17:45:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxZrC-00008Q-7Y; Mon, 20 Jul 2020 17:45:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gz2F=A7=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jxZrA-00008K-Jx
 for xen-devel@lists.xenproject.org; Mon, 20 Jul 2020 17:45:40 +0000
X-Inumbo-ID: d2be73e9-cab0-11ea-84ad-bc764e2007e4
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d2be73e9-cab0-11ea-84ad-bc764e2007e4;
 Mon, 20 Jul 2020 17:45:39 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595267140;
 h=subject:to:cc:references:from:message-id:date:
 mime-version:in-reply-to:content-transfer-encoding;
 bh=J/7Onn3GuDlUNA0Iy0qUui8mUX7sy04p5t+kCtGbrTE=;
 b=DxshoKnXMEfzd+NuZvkUU6iPAIdJxr9dSyg7xHMDcdHWdDaAJXPk1L1C
 zdQk9O7qQt5D6wQavETVugPtucJQ8RjaXZjX1HkocTIZE39rTyJtfm2pS
 hqFMbeVmc0nnHlW96rO8xPVjevp9lSVABGYDfLgDtKONsRJHUMYakhBcE M=;
Authentication-Results: esa1.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: 2lqrqTiqghNNiw1IFkaWVeL/esvQ/II7RuDvyAHmDfGNrtHh7A7JQ+BfCB3m9QNOuMhRv+QWfJ
 m8XDUZHbs+Vm8ZtQLRnACBt/iOlfX5YPk9R5GozPYRN9r7IUwNYqpHvFnon38161Ph9Om7bOdd
 20/oYi3ePNUBFnn64rr/O+hK8VtFNSse+/FwPINdqO3R7JjHpDkik6U9gqppg041MX5bzxr2im
 Srx9iBhrI2yWaQX85+eiks1W0Xcl1JkBYz0fftRDe2cFJ161RdZuLIHwUpqHu4/fW6s2gsswBz
 nAc=
X-SBRS: 2.7
X-MesageID: 23109873
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,375,1589256000"; d="scan'208";a="23109873"
Subject: Re: [PATCH] SUPPORT.md: Spell correctly Experimental
To: Julien Grall <julien@xen.org>, <xen-devel@lists.xenproject.org>
References: <20200720173635.1571-1-julien@xen.org>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <4cc580c5-146f-6f83-bd91-a798763c261b@citrix.com>
Date: Mon, 20 Jul 2020 18:45:33 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20200720173635.1571-1-julien@xen.org>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 paul@xen.org, Julien Grall <jgrall@amazon.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 20/07/2020 18:36, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
>
> Signed-off-by: Julien Grall <jgrall@amazon.com>

Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

Although I'd suggest the subject be rearranged to "Spell
Experimentally correctly".

~Andrew


From xen-devel-bounces@lists.xenproject.org Mon Jul 20 17:48:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jul 2020 17:48:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxZtb-0000Gl-M9; Mon, 20 Jul 2020 17:48:11 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Eely=A7=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jxZtb-0000Gg-7G
 for xen-devel@lists.xenproject.org; Mon, 20 Jul 2020 17:48:11 +0000
X-Inumbo-ID: 2d2e0eb0-cab1-11ea-84ad-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2d2e0eb0-cab1-11ea-84ad-bc764e2007e4;
 Mon, 20 Jul 2020 17:48:10 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=I1vOqkTQHp54/DiklsAqjbZ1wXq/tOMFHzPhPDEN418=; b=AckKINPz56Fc7aKYqpNvvnmZNb
 8AzGV20m0DcHEwiSEnKscO6A4BHuX81q6CI0RMtMa7D4LzE1YMcNTlWTrIysSqXYng8mWeY8uttUJ
 PirYVrKfd05yhEWayXj4uXkC0lxHYQolrUJrpiHJrVctAhkBDEMLSPUaRU7JKERaOeY8=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jxZtW-0004EI-JU; Mon, 20 Jul 2020 17:48:06 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jxZtW-0005va-97; Mon, 20 Jul 2020 17:48:06 +0000
Subject: Re: [PATCH] SUPPORT.md: Spell correctly Experimental
To: Andrew Cooper <andrew.cooper3@citrix.com>, xen-devel@lists.xenproject.org
References: <20200720173635.1571-1-julien@xen.org>
 <4cc580c5-146f-6f83-bd91-a798763c261b@citrix.com>
From: Julien Grall <julien@xen.org>
Message-ID: <627851f2-d28e-5c3b-6f1f-882e9eb02ed4@xen.org>
Date: Mon, 20 Jul 2020 18:48:03 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:68.0)
 Gecko/20100101 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <4cc580c5-146f-6f83-bd91-a798763c261b@citrix.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 paul@xen.org, Julien Grall <jgrall@amazon.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>



On 20/07/2020 18:45, Andrew Cooper wrote:
> On 20/07/2020 18:36, Julien Grall wrote:
>> From: Julien Grall <jgrall@amazon.com>
>>
>> Signed-off-by: Julien Grall <jgrall@amazon.com>
> 
> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
> 
> Although I'd suggest the subject be rearranged to "Spell
> Experimentally correctly".

Did you intend to write "Experimental" rather than "Experimentally"?

Other than that, I will fix it on commit.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Jul 20 17:49:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jul 2020 17:49:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxZua-0000LW-0d; Mon, 20 Jul 2020 17:49:12 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gz2F=A7=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jxZuY-0000LM-Oa
 for xen-devel@lists.xenproject.org; Mon, 20 Jul 2020 17:49:10 +0000
X-Inumbo-ID: 5017c786-cab1-11ea-9fef-12813bfff9fa
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 5017c786-cab1-11ea-9fef-12813bfff9fa;
 Mon, 20 Jul 2020 17:49:09 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595267349;
 h=subject:to:cc:references:from:message-id:date:
 mime-version:in-reply-to:content-transfer-encoding;
 bh=k+bJkAMdPWA0Q/lTdghw8iuqQC/R+Y88+G/voLuqVcQ=;
 b=haK3oIPG4IEmvweoEUiDq8eVF6MfVW4KTu3EAjtWULEzXgjLwrHm7dXr
 Pp2jfOCVcQRI4K8ASPAq/qxL7oSPOI2MQBO0bnN+3nIq6XxJbAbMQZ2Ir
 DFP3OIB9Q1kCIa/NNwJsXLeCBPzPbCqPi2m1IdweMLjLIkxM6NkleTMDr 4=;
Authentication-Results: esa3.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: PKEEKZcPq6vtaXXp5ZpvhfdN5hesgMfujB3y4twFBpeLqdoLRgo1QrC4ygLVSY/bHkwnoQS91P
 2AaCdrYIEsu7oknyxKTVtNb79DvmvTihUiF8o7S3ewKFJ/8K3MWEv2YfD32+OBxAlcHUVl/3Za
 bHkDHFAf/gnx+Awvir2fS2a44X5QQb5R43mVIUTB5yTr5BGFNBvnIzyb2pe3ATphUW9BmfAhiB
 pEKVRQJyxfEj68v9fW3bsD5Iku+FklPOmJeNzDXqsBP48TOeCHROFxby06sgxa9k71nozcalWE
 nik=
X-SBRS: 2.7
X-MesageID: 22780157
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,375,1589256000"; d="scan'208";a="22780157"
Subject: Re: [PATCH] SUPPORT.md: Spell correctly Experimental
To: Julien Grall <julien@xen.org>, <xen-devel@lists.xenproject.org>
References: <20200720173635.1571-1-julien@xen.org>
 <4cc580c5-146f-6f83-bd91-a798763c261b@citrix.com>
 <627851f2-d28e-5c3b-6f1f-882e9eb02ed4@xen.org>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <aae69fa5-4aee-781d-2f52-291d8fa948bd@citrix.com>
Date: Mon, 20 Jul 2020 18:49:04 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <627851f2-d28e-5c3b-6f1f-882e9eb02ed4@xen.org>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 paul@xen.org, Julien Grall <jgrall@amazon.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 20/07/2020 18:48, Julien Grall wrote:
>
> On 20/07/2020 18:45, Andrew Cooper wrote:
>> On 20/07/2020 18:36, Julien Grall wrote:
>>> From: Julien Grall <jgrall@amazon.com>
>>>
>>> Signed-off-by: Julien Grall <jgrall@amazon.com>
>>
>> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
>>
>> Although I'd suggest the subject be rearranged to "Spell
>> Experimentally correctly".
>
> Did you intend to write "Experimental" rather than "Experimentally"?

Erm, yes :)

~Andrew


From xen-devel-bounces@lists.xenproject.org Mon Jul 20 20:37:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jul 2020 20:37:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxcXa-0005zo-N3; Mon, 20 Jul 2020 20:37:38 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Rt7f=A7=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1jxcXZ-0005zj-3X
 for xen-devel@lists.xenproject.org; Mon, 20 Jul 2020 20:37:37 +0000
X-Inumbo-ID: d7f332c8-cac8-11ea-84c1-bc764e2007e4
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d7f332c8-cac8-11ea-84c1-bc764e2007e4;
 Mon, 20 Jul 2020 20:37:35 +0000 (UTC)
Received: from localhost (c-67-164-102-47.hsd1.ca.comcast.net [67.164.102.47])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256
 bits)) (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 9052E22482;
 Mon, 20 Jul 2020 20:37:34 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
 s=default; t=1595277454;
 bh=be6FQzh1KnqLJW+x73v/0Hei6pafsExlT9HyVvCbvvs=;
 h=Date:From:To:cc:Subject:In-Reply-To:References:From;
 b=oGTcyKChqRht+ul3g8KH29Y8WM43EqeMzvaknHF/t/buri/QzU8xaV7JHDL9aQjDf
 YIaUL2/X6ivp7y49iCbJWUlDQFnt3nVyc0F8ugsH+qohHNRGXCrw5Kqwkik+HyU4O1
 HdfS9lr1ztz7SqfyswYo5oL8CvUaU0ENRRlgjJpc=
Date: Mon, 20 Jul 2020 13:37:28 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
Subject: Re: Virtio in Xen on Arm (based on IOREQ concept)
In-Reply-To: <20200720102023.GH7191@Air-de-Roger>
Message-ID: <alpine.DEB.2.21.2007201322060.32544@sstabellini-ThinkPad-T480s>
References: <CAPD2p-nthLq5NaU32u8pVaa-ub=a9-LOPenupntTYdS-cu31jQ@mail.gmail.com>
 <20200717150039.GV7191@Air-de-Roger>
 <8f4e0c0d-b3d4-9dd3-ce20-639539321968@gmail.com>
 <20200720091722.GF7191@Air-de-Roger>
 <10eaec62-2c48-52ae-d113-1681c87e3d59@xen.org>
 <20200720102023.GH7191@Air-de-Roger>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: multipart/mixed; BOUNDARY="8323329-1748832064-1595276590=:32544"
Content-ID: <alpine.DEB.2.21.2007201323180.32544@sstabellini-ThinkPad-T480s>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Oleksandr Andrushchenko <andr2000@gmail.com>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>, Oleksandr <olekstysh@gmail.com>,
 xen-devel <xen-devel@lists.xenproject.org>, alex.bennee@linaro.org,
 Artem Mygaiev <joculator@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--8323329-1748832064-1595276590=:32544
Content-Type: text/plain; CHARSET=UTF-8
Content-Transfer-Encoding: 8BIT
Content-ID: <alpine.DEB.2.21.2007201323181.32544@sstabellini-ThinkPad-T480s>

On Mon, 20 Jul 2020, Roger Pau Monné wrote:
> On Mon, Jul 20, 2020 at 10:40:40AM +0100, Julien Grall wrote:
> > 
> > 
> > On 20/07/2020 10:17, Roger Pau Monné wrote:
> > > On Fri, Jul 17, 2020 at 09:34:14PM +0300, Oleksandr wrote:
> > > > On 17.07.20 18:00, Roger Pau Monné wrote:
> > > > > On Fri, Jul 17, 2020 at 05:11:02PM +0300, Oleksandr Tyshchenko wrote:
> > > > > Do you have any plans to try to upstream a modification to the VirtIO
> > > > > spec so that grants (ie: abstract references to memory addresses) can
> > > > > be used on the VirtIO ring?
> > > > 
> > > > But the VirtIO spec hasn't been modified, nor has the VirtIO
> > > > infrastructure in the guest. Nothing to upstream.)
> > > 
> > > OK, so there's no intention to add grants (or a similar interface) to
> > > the spec?
> > > 
> > > I understand that you want to support unmodified VirtIO frontends, but
> > > I also think that long term frontends could negotiate with backends on
> > > the usage of grants in the shared ring, like any other VirtIO feature
> > > negotiated between the frontend and the backend.
> > > 
> > > This of course needs to be on the spec first before we can start
> > > implementing it, and hence my question whether a modification to the
> > > spec in order to add grants has been considered.
> > The problem is not really the specification but the adoption in the
> > ecosystem. A protocol based on grant tables would mostly only be used by Xen,
> > therefore:
> >    - It may be difficult to convince a proprietary OS vendor to invest
> > resources in implementing the protocol
> >    - It would be more difficult to move in/out of the Xen ecosystem.
> > 
> > Both may slow the adoption of Xen in some areas.
> 
> Right, just to be clear my suggestion wasn't to force the usage of
> grants, but to ask whether adding something along these lines was on the
> roadmap, see below.
> 
> > If one is interested in security, then it would be better to work with the
> > other interested parties. I think it would be possible to use a virtual
> > IOMMU for this purpose.
> 
> Yes, I've also heard rumors about using the (I assume VirtIO) IOMMU in
> order to protect what backends can map. This seems like a fine idea,
> and would allow us to regain the lost security without having to do
> all the work ourselves.
> 
> Do you know if there's anything published about this? I'm curious
> about how and where in the system the VirtIO IOMMU is/should be
> implemented.

Not yet (as far as I know), but we have just started some discussions on
this topic within Linaro.


You should also be aware that there is another proposal based on
pre-shared-memory and memcpys to solve the virtio security issue:

https://marc.info/?l=linux-kernel&m=158807398403549

It would certainly be slower than the "virtio IOMMU" solution, but it
would take far less time to develop and could work as a short-term
stop-gap. (In my view the "virtio IOMMU" is the only clean solution
to the problem long term.)
--8323329-1748832064-1595276590=:32544--


From xen-devel-bounces@lists.xenproject.org Mon Jul 20 20:38:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jul 2020 20:38:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxcYA-00063U-W9; Mon, 20 Jul 2020 20:38:14 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Rt7f=A7=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1jxcY8-00063H-US
 for xen-devel@lists.xenproject.org; Mon, 20 Jul 2020 20:38:12 +0000
X-Inumbo-ID: edd60930-cac8-11ea-84c1-bc764e2007e4
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id edd60930-cac8-11ea-84c1-bc764e2007e4;
 Mon, 20 Jul 2020 20:38:12 +0000 (UTC)
Received: from localhost (c-67-164-102-47.hsd1.ca.comcast.net [67.164.102.47])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256
 bits)) (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 75CA622482;
 Mon, 20 Jul 2020 20:38:11 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
 s=default; t=1595277491;
 bh=E5pXKGj6NwMiolJPAeCWbRq/DyEArML96G7+tV3RcJw=;
 h=Date:From:To:cc:Subject:In-Reply-To:References:From;
 b=LgoHjCVQjP9b7nfMNy3b+vVIhfwnhUtFIeLENig3E5XtceIIPsPws8z/IDOnsufxn
 mGTGL/ZP8+tsJ0HtfUL7SH6zuijv6OmAYBiZDpYqH2fSr2J+FuI3Dgc9MTeCnZzdlJ
 LprPBxwIDez/eRSWgoUOvvQqxay3u5PJrSLYuTE8=
Date: Mon, 20 Jul 2020 13:38:11 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Oleksandr <olekstysh@gmail.com>
Subject: Re: Virtio in Xen on Arm (based on IOREQ concept)
In-Reply-To: <8f4e0c0d-b3d4-9dd3-ce20-639539321968@gmail.com>
Message-ID: <alpine.DEB.2.21.2007201326060.32544@sstabellini-ThinkPad-T480s>
References: <CAPD2p-nthLq5NaU32u8pVaa-ub=a9-LOPenupntTYdS-cu31jQ@mail.gmail.com>
 <20200717150039.GV7191@Air-de-Roger>
 <8f4e0c0d-b3d4-9dd3-ce20-639539321968@gmail.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Oleksandr Andrushchenko <andr2000@gmail.com>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>,
 xen-devel <xen-devel@lists.xenproject.org>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 alex.bennee@linaro.org, Artem Mygaiev <joculator@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Fri, 17 Jul 2020, Oleksandr wrote:
> > > *A few words about the solution:*
> > > As it was mentioned at [1], in order to implement virtio-mmio Xen on Arm
> > Any plans for virtio-pci? Arm seems to be moving to the PCI bus, and
> > it would be very interesting from a x86 PoV, as I don't think
> > virtio-mmio is something that you can easily use on x86 (or even use
> > at all).
> 
> To be honest, I haven't considered virtio-pci so far. Julien's PoC (which we
> are based on) provides support for the virtio-mmio transport, which is
> enough to start working with VirtIO and is not as complex as virtio-pci.
> But it doesn't mean there is no way for virtio-pci in Xen.
> 
> I think this could be added in a later step. But the nearest target is the
> virtio-mmio approach (of course, if the community agrees on that).

Hi Julien, Oleksandr,

Aside from complexity and ease of development, are there any other
architectural reasons for using virtio-mmio?

I am not asking because I intend to suggest doing something different
(virtio-mmio is fine as far as I can tell.) I am asking because there
was a virtio-pci/virtio-mmio discussion recently in Linaro and I
would like to understand if there are any implications from a Xen point
of view that I don't yet know.

For instance, what's your take on notifications with virtio-mmio? How
are they modelled today? Are they good enough or do we need MSIs?


From xen-devel-bounces@lists.xenproject.org Mon Jul 20 20:40:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jul 2020 20:40:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxcaJ-0006qr-E0; Mon, 20 Jul 2020 20:40:27 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Rt7f=A7=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1jxcaI-0006qm-7m
 for xen-devel@lists.xenproject.org; Mon, 20 Jul 2020 20:40:26 +0000
X-Inumbo-ID: 3cebab7e-cac9-11ea-a014-12813bfff9fa
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 3cebab7e-cac9-11ea-a014-12813bfff9fa;
 Mon, 20 Jul 2020 20:40:25 +0000 (UTC)
Received: from localhost (c-67-164-102-47.hsd1.ca.comcast.net [67.164.102.47])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256
 bits)) (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 3C89320773;
 Mon, 20 Jul 2020 20:40:24 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
 s=default; t=1595277624;
 bh=sBujR0/o2tldXcRP6P/xpFAxQtgn/vbpQ9/+T2YeIbY=;
 h=Date:From:To:cc:Subject:In-Reply-To:References:From;
 b=VzHKr3n7r/a5HuA45R3Ho296xVOwkVQ2PklXCEWpxWb3gjJzie5xQ/x6djZpqW4oZ
 EpCudGoqyEprtxbOEb4e5A0LvD1Q/LWqJ3BCKUliY++z7q7gQ7BmnDC/2LeWcYpNYw
 v22336ccSuZLkFlrqljRVZDD6BpCsd8b47B9XjxE=
Date: Mon, 20 Jul 2020 13:40:23 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
Subject: Re: Virtio in Xen on Arm (based on IOREQ concept)
In-Reply-To: <20200720110950.GJ7191@Air-de-Roger>
Message-ID: <alpine.DEB.2.21.2007201330070.32544@sstabellini-ThinkPad-T480s>
References: <CAPD2p-nthLq5NaU32u8pVaa-ub=a9-LOPenupntTYdS-cu31jQ@mail.gmail.com>
 <20200717150039.GV7191@Air-de-Roger>
 <8f4e0c0d-b3d4-9dd3-ce20-639539321968@gmail.com>
 <20200720091722.GF7191@Air-de-Roger>
 <be3fc8de-5582-8fd0-52cd-0cbfbfa96859@gmail.com>
 <20200720110950.GJ7191@Air-de-Roger>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: multipart/mixed; BOUNDARY="8323329-405729398-1595277306=:32544"
Content-ID: <alpine.DEB.2.21.2007201338140.32544@sstabellini-ThinkPad-T480s>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Oleksandr Andrushchenko <andr2000@gmail.com>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>, Oleksandr <olekstysh@gmail.com>,
 xen-devel <xen-devel@lists.xenproject.org>,
 Artem Mygaiev <joculator@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--8323329-405729398-1595277306=:32544
Content-Type: text/plain; CHARSET=UTF-8
Content-Transfer-Encoding: 8BIT
Content-ID: <alpine.DEB.2.21.2007201338141.32544@sstabellini-ThinkPad-T480s>

On Mon, 20 Jul 2020, Roger Pau Monné wrote:
> On Mon, Jul 20, 2020 at 01:56:51PM +0300, Oleksandr wrote:
> > On 20.07.20 12:17, Roger Pau Monné wrote:
> > > On Fri, Jul 17, 2020 at 09:34:14PM +0300, Oleksandr wrote:
> > > > On 17.07.20 18:00, Roger Pau Monné wrote:
> > > > > On Fri, Jul 17, 2020 at 05:11:02PM +0300, Oleksandr Tyshchenko wrote:
> > > > The other reasons are:
> > > > 
> > > > 1. Automation. With the current backend implementation we don't need to
> > > > pause the guest right after creating it, then go to the driver domain and
> > > > spawn the backend, and after that go back to dom0 and unpause the guest.
> > > xl devd should be capable of handling this for you on the driver
> > > domain.
> > > 
> > > > 2. Ability to detect when a guest with the involved frontend has gone
> > > > away and properly release resources (guest destroy/reboot).
> > > > 
> > > > 3. Ability to (re)connect to a newly created guest with the involved
> > > > frontend (guest create/reboot).
> > > > 
> > > > 4. What is more, with Xenstore support the backend is able to detect the
> > > > dom_id it runs in and the guest dom_id, so there is no need to pass them
> > > > via the command line.
> > > > 
> > > > 
> > > > I will be happy to explain in detail after publishing the backend code.)
> > > As I'm not the one doing the work I certainly won't stop you from
> > > using xenstore on the backend. I would certainly prefer if the backend
> > > gets all the information it needs from the command line so that the
> > > configuration data is completely agnostic to the transport layer used
> > > to convey it.
> > > 
> > > Thanks, Roger.
> > 
> > Thank you for pointing out another possible way. I feel I need to
> > investigate what "xl devd" (+ Argo?) is and how it works. If it is able to
> > provide the backend with
> 
> That's what x86 at least uses to manage backends on driver domains: xl
> devd will for example launch the QEMU instance required to handle a
> Xen PV disk backend in user-space.
> 
> Note that there's currently no support for Argo or any communication
> channel other than xenstore, but I think it would be cleaner to
> place the fetching of data from xenstore in xl devd and just pass
> the values as command line arguments to the VirtIO backend if possible. I
> would prefer the VirtIO backend to be fully decoupled from xenstore.
> 
> Note that for a backend running on dom0 there would be no need to
> pass any data on xenstore, as the backend would be launched directly
> from xl with the appropriate command line arguments.

If I can paraphrase Roger's point, I think we all agree that xenstore is
very convenient to use and great to get something up and running
quickly. But it has several limitations, so it would be fantastic if we
could kill two birds with one stone and find a way to deploy the system
without xenstore, given that with virtio it is not actually needed except
for some very limited initial configuration. It would certainly be a big
win. However, it is fair to say that the xenstore alternative, whatever
that might be, needs work.
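For reference, the `xl devd` flow Roger describes can be sketched as follows. The disk options follow the xl disk configuration syntax, but the domain name and paths are invented for illustration:

```shell
# In dom0, the guest config points the PV disk backend at a driver
# domain instead of dom0 (names/paths here are illustrative):
#
#   disk = [ 'format=raw, vdev=xvda, access=rw,
#             backendtype=qdisk, backend=drvdom,
#             target=/dev/vg0/guest-disk' ]
#
# Inside the driver domain "drvdom", the backend daemon watches
# xenstore and spawns the QEMU qdisk backend when a guest appears:
xl devd

# Then create the guest from dom0 as usual; no manual
# pause/spawn/unpause sequence is needed:
xl create guest.cfg
```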
--8323329-405729398-1595277306=:32544--


From xen-devel-bounces@lists.xenproject.org Mon Jul 20 21:45:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jul 2020 21:45:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxdaV-0003VS-AP; Mon, 20 Jul 2020 21:44:43 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=wjZm=A7=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jxdaT-0003VJ-Ri
 for xen-devel@lists.xenproject.org; Mon, 20 Jul 2020 21:44:41 +0000
X-Inumbo-ID: 371108da-cad2-11ea-84c6-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 371108da-cad2-11ea-84c6-bc764e2007e4;
 Mon, 20 Jul 2020 21:44:40 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=vWyh6gQMVbZYWid9CRK/dY1dwnnXHbP8iSwHAKuipfc=; b=DmybX6mHajaaSRfM7xMxo1dwy
 hkbQ1tLQreQcS03l7l0n1yAAKofooZ1UeHrcl1EH6g5tWNN6SBEfjOhQE76CSrCOuEgal8ApgILeD
 OJY6MDQIBpw06R1O/OCyfZp7dHJmFgWB7gimij2TozLDHIiRbTmXLG9zHZsV68R7sy2I8=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jxdaS-0000k8-8q; Mon, 20 Jul 2020 21:44:40 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jxdaR-0004eH-NX; Mon, 20 Jul 2020 21:44:39 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jxdaR-0000kp-Mv; Mon, 20 Jul 2020 21:44:39 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-152054-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 152054: tolerable all pass - PUSHED
X-Osstest-Failures: xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=9ffdda96d9e7c3d9c7a5bbe2df6ab30f63927542
X-Osstest-Versions-That: xen=5a4a411bde4f73ff8ce43d6e52b77302973e8f68
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 20 Jul 2020 21:44:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 152054 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/152054/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  9ffdda96d9e7c3d9c7a5bbe2df6ab30f63927542
baseline version:
 xen                  5a4a411bde4f73ff8ce43d6e52b77302973e8f68

Last test of basis   152046  2020-07-20 14:00:33 Z    0 days
Testing same since   152054  2020-07-20 18:12:43 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Stefano Stabellini <sstabellini@kernel.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   5a4a411bde..9ffdda96d9  9ffdda96d9e7c3d9c7a5bbe2df6ab30f63927542 -> smoke


From xen-devel-bounces@lists.xenproject.org Mon Jul 20 23:04:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jul 2020 23:04:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxepb-0001s0-Lq; Mon, 20 Jul 2020 23:04:23 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=wjZm=A7=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jxepb-0001rv-4Z
 for xen-devel@lists.xenproject.org; Mon, 20 Jul 2020 23:04:23 +0000
X-Inumbo-ID: 576f05cc-cadd-11ea-a025-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 576f05cc-cadd-11ea-a025-12813bfff9fa;
 Mon, 20 Jul 2020 23:04:19 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=WcwZlMunHmAKhIylIWDq8JW0iVP3dCXltl4GT39aMvY=; b=jtyA0pmLRnYRTZX6MKcoBm1qR
 PU1Wh7zs+NonyHd6dFMmd/5Hf5qOy7S3x3B6DMhPnCIIPW2zoEs5+lAfM4mAgvxUVslx1qUt+ZO73
 l2XfKJXLo/1/vu3cWsYTQYqdsjjuXLVEuJEOH4xcmXcbU2kbIbsQtVFVbRLbLcnWpHdg8=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jxepW-0002OF-Tz; Mon, 20 Jul 2020 23:04:18 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jxepW-0000Ge-Kz; Mon, 20 Jul 2020 23:04:18 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jxepW-0004ci-KO; Mon, 20 Jul 2020 23:04:18 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-152039-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 152039: regressions - trouble: fail/pass/starved
X-Osstest-Failures: qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:redhat-install:fail:regression
 qemu-mainline:test-amd64-i386-freebsd10-i386:guest-start:fail:regression
 qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-start:fail:regression
 qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:redhat-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:windows-install:fail:regression
 qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:hosts-allocate:starved:nonblocking
 qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This: qemuu=9fc87111005e8903785db40819af66b8f85b8b96
X-Osstest-Versions-That: qemuu=9e3903136d9acde2fb2dd9e967ba928050a6cb4a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 20 Jul 2020 23:04:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 152039 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/152039/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemuu-ovmf-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-qemuu-nested-intel 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-ovmf-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-debianhvm-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-qemuu-rhel6hvm-intel 10 redhat-install   fail REGR. vs. 151065
 test-amd64-i386-freebsd10-i386 11 guest-start            fail REGR. vs. 151065
 test-amd64-i386-freebsd10-amd64 11 guest-start           fail REGR. vs. 151065
 test-amd64-amd64-qemuu-nested-amd 10 debian-hvm-install  fail REGR. vs. 151065
 test-amd64-i386-qemuu-rhel6hvm-amd 10 redhat-install     fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-win7-amd64 10 windows-install  fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-win7-amd64 10 windows-install   fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-ws16-amd64 10 windows-install  fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-ws16-amd64 10 windows-install   fail REGR. vs. 151065

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 151065
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 151065
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-freebsd11-amd64  2 hosts-allocate           starved n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  2 hosts-allocate           starved n/a

version targeted for testing:
 qemuu                9fc87111005e8903785db40819af66b8f85b8b96
baseline version:
 qemuu                9e3903136d9acde2fb2dd9e967ba928050a6cb4a

Last test of basis   151065  2020-06-12 22:27:51 Z   38 days
Failing since        151101  2020-06-14 08:32:51 Z   36 days   51 attempts
Testing same since   152026  2020-07-19 21:37:31 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Ahmed Karaman <ahmedkhaledkaraman@gmail.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.m.mail@gmail.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Richardson <Alexander.Richardson@cl.cam.ac.uk>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Boettcher <alexander.boettcher@genode-labs.com>
  Alexander Bulekov <alxndr@bu.edu>
  Alexandre Mergnat <amergnat@baylibre.com>
  Alexey Krasikov <alex-krasikov@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Allan Peramaki <aperamak@pp1.inet.fi>
  Andrew <andrew@daynix.com>
  Andrew Jones <drjones@redhat.com>
  Andrew Melnychenko <andrew@daynix.com>
  Ani Sinha <ani.sinha@nutanix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Ard Biesheuvel <ardb@kernel.org>
  Artyom Tarasenko <atar4qemu@gmail.com>
  Atish Patra <atish.patra@wdc.com>
  Aurelien Jarno <aurelien@aurel32.net>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Basil Salman <basil@daynix.com>
  Basil Salman <bsalman@redhat.com>
  Beata Michalska <beata.michalska@linaro.org>
  Bin Meng <bin.meng@windriver.com>
  Bin Meng <bmeng.cn@gmail.com>
  Cameron Esfahani <dirty@apple.com>
  Catherine A. Frederick <chocola@animebitch.es>
  Cathy Zhang <cathy.zhang@intel.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chenyi Qiang <chenyi.qiang@intel.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christophe de Dinechin <dinechin@redhat.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Cindy Lu <lulu@redhat.com>
  Claudio Fontana <cfontana@suse.de>
  Cleber Rosa <crosa@redhat.com>
  Colin Xu <colin.xu@intel.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniele Buono <dbuono@linux.vnet.ibm.com>
  David Carlier <devnexen@gmail.com>
  David Edmondson <david.edmondson@oracle.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Denis V. Lunev <den@openvz.org>
  Derek Su <dereksu@qnap.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Ed Robbins <E.J.C.Robbins@kent.ac.uk>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Emilio G. Cota <cota@braap.org>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  erik-smit <erik.lucas.smit@gmail.com>
  Evgeny Yakovlev <eyakovlev@virtuozzo.com>
  fangying <fangying1@huawei.com>
  Farhan Ali <alifm@linux.ibm.com>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frank Chang <frank.chang@sifive.com>
  Geoffrey McRae <geoff@hostfission.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Giuseppe Musacchio <thatlemon@gmail.com>
  Greg Kurz <groug@kaod.org>
  Guenter Roeck <linux@roeck-us.net>
  Gustavo Romero <gromero@linux.ibm.com>
  Halil Pasic <pasic@linux.ibm.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Howard Spoelstra <hsp.cat7@gmail.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Ian Jiang <ianjiang.ict@gmail.com>
  Igor Mammedov <imammedo@redhat.com>
  Jan Kiszka <jan.kiszka@siemens.com>
  Janne Grunau <j@jannau.net>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason Wang <jasowang@redhat.com>
  Jay Zhou <jianjay.zhou@huawei.com>
  Jean-Christophe Dubois <jcd@tribudubois.net>
  Jessica Clarke <jrtc27@jrtc27.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jingqi Liu <jingqi.liu@intel.com>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Joseph Myers <joseph@codesourcery.com>
  Josh Kunz <jkz@google.com>
  Juan Quintela <quintela@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Keqian Zhu <zhukeqian1@huawei.com>
  Kevin Wolf <kwolf@redhat.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Leif Lindholm <leif@nuviainc.com>
  Leonid Bloch <lb.workbox@gmail.com>
  Leonid Bloch <lbloch@janustech.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Qiang <liq3ea@gmail.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  lichun <lichun@ruijie.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  Like Xu <like.xu@linux.intel.com>
  Lingfeng Yang <lfy@google.com>
  Lingshan zhu <lingshan.zhu@intel.com>
  Liran Alon <liran.alon@oracle.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Luc Michel <luc.michel@greensocs.com>
  Lukas Straub <lukasstraub2@web.de>
  Luwei Kang <luwei.kang@intel.com>
  Maciej S. Szmigiero <maciej.szmigiero@oracle.com>
  Magnus Damm <magnus.damm@gmail.com>
  Mao Zhongyi <maozhongyi@cmss.chinamobile.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marcelo Tosatti <mtosatti@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Mario Smarduch <msmarduch@digitalocean.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Masahiro Yamada <masahiroy@kernel.org>
  Matus Kysel <mkysel@tachyum.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Maxime Coquelin <maxime.coquelin@redhat.com>
  Menno Lageman <menno.lageman@oracle.com>
  Michael Rolnik <mrolnik@gmail.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michele Denber <denber@mindspring.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Olaf Hering <olaf@aepfle.de>
  Pan Nengyuan <pannengyuan@huawei.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Radoslaw Biernacki <rad@semihalf.com>
  Raphael Norwitz <raphael.norwitz@nutanix.com>
  Richard Henderson <richard.henderson@linaro.org>
  Richard W.M. Jones <rjones@redhat.com>
  Riku Voipio <riku.voipio@linaro.org>
  Robert Foley <robert.foley@linaro.org>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Roman Kagan <rkagan@virtuozzo.com>
  Roman Kagan <rvkagan@yandex-team.ru>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Sarah Harris <S.E.Harris@kent.ac.uk>
  Sebastian Rasmussen <sebras@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Tao Xu <tao3.xu@intel.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tiwei Bie <tiwei.bie@intel.com>
  Tong Ho <tong.ho@xilinx.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  WangBowen <bowen.wang@intel.com>
  Wei Huang <wei.huang2@amd.com>
  Wei Wang <wei.w.wang@intel.com>
  Wentong Wu <wentong.wu@intel.com>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xie Yongji <xieyongji@bytedance.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Ying Fang <fangying1@huawei.com>
  Yoshinori Sato <ysato@users.sourceforge.jp>
  Yuri Benditovich <yuri.benditovich@daynix.com>
  Zhang Chen <chen.zhang@intel.com>
  Zheng Chuan <zhengchuan@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       starved 
 test-amd64-amd64-qemuu-freebsd12-amd64                       starved 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 29745 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Jul 20 23:23:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 20 Jul 2020 23:23:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxf89-0003ao-Cd; Mon, 20 Jul 2020 23:23:33 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Rt7f=A7=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1jxf87-0003aj-M1
 for xen-devel@lists.xenproject.org; Mon, 20 Jul 2020 23:23:31 +0000
X-Inumbo-ID: 05938482-cae0-11ea-84d0-bc764e2007e4
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 05938482-cae0-11ea-84d0-bc764e2007e4;
 Mon, 20 Jul 2020 23:23:30 +0000 (UTC)
Received: from localhost (c-67-164-102-47.hsd1.ca.comcast.net [67.164.102.47])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256
 bits)) (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id B905A2080D;
 Mon, 20 Jul 2020 23:23:29 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
 s=default; t=1595287410;
 bh=YCX2kVb8OMjeEDbNJaboAW7lEFLWf9Pi5pJJtP+Mr6s=;
 h=Date:From:To:cc:Subject:In-Reply-To:References:From;
 b=o/gcXCGXKPYiuxta9p3NqUMaJbyq2+oFYwCYsT10P4ZnmibWNzzFCgQCnxHCUvWab
 9ER5oiTVIw1hYCYwkkGi+iJre426xvbuyru+tOh41E4WVedbpfTJd/QXwFiE67oj4/
 ZV4tgvkd98ruidLCpbBwSzQ7ctCUZ62w2bfqkYqc=
Date: Mon, 20 Jul 2020 16:23:29 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
Subject: Re: RFC: PCI devices passthrough on Arm design proposal
In-Reply-To: <DA19A9EC-A828-4EBC-BCAA-D1D9E4F222BB@arm.com>
Message-ID: <alpine.DEB.2.21.2007201509350.32544@sstabellini-ThinkPad-T480s>
References: <3F6E40FB-79C5-4AE8-81CA-E16CA37BB298@arm.com>
 <BD475825-10F6-4538-8294-931E370A602C@arm.com>
 <E9CBAA57-5EF3-47F9-8A40-F5D7816DB2A4@arm.com>
 <a50c714c-1642-0354-3f19-5a6f7278d8aa@suse.com>
 <28899FEF-9DA7-4513-8283-1AC5EFFC6E92@arm.com>
 <1dd5db2d-98c7-7738-c3d4-d3f098dfe674@suse.com>
 <F09F9354-EC9B-4D76-809B-A25AF4F7D863@arm.com>
 <a5007a6c-bdfe-04d4-8107-53cb222b95e8@suse.com>
 <DA19A9EC-A828-4EBC-BCAA-D1D9E4F222BB@arm.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Rahul Singh <Rahul.Singh@arm.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 nd <nd@arm.com>, Julien Grall <julien.grall.oss@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Fri, 17 Jul 2020, Bertrand Marquis wrote:
> > On 17 Jul 2020, at 16:06, Jan Beulich <jbeulich@suse.com> wrote:
> > 
> > On 17.07.2020 15:59, Bertrand Marquis wrote:
> >> 
> >> 
> >>> On 17 Jul 2020, at 15:19, Jan Beulich <jbeulich@suse.com> wrote:
> >>> 
> >>> On 17.07.2020 15:14, Bertrand Marquis wrote:
> >>>>> On 17 Jul 2020, at 10:10, Jan Beulich <jbeulich@suse.com> wrote:
> >>>>> On 16.07.2020 19:10, Rahul Singh wrote:
> >>>>>> # Emulated PCI device tree node in libxl:
> >>>>>> 
> >>>>>> Libxl creates a virtual PCI device tree node in the device tree to enable the guest OS to discover the virtual PCI bus during guest boot. We introduced the new config option [vpci="pci_ecam"] for guests. When this config option is enabled in a guest configuration, a PCI device tree node is created in the guest device tree.
> >>>>> 
> >>>>> I support Stefano's suggestion for this to be an optional thing, i.e.
> >>>>> there to be no need for it when there are PCI devices assigned to the
> >>>>> guest anyway. I also wonder about the pci_ prefix here - isn't
> >>>>> vpci="ecam" as unambiguous?
> >>>> 
> >>>> This could be a problem, as we need to know upfront that this is required for a guest so that PCI devices can be assigned later using xl.
> >>> 
> >>> I'm afraid I don't understand: When there are no PCI device that get
> >>> handed to a guest when it gets created, but it is supposed to be able
> >>> to have some assigned while already running, then we agree the option
> >>> is needed (afaict). When PCI devices get handed to the guest while it
> >>> gets constructed, where's the problem to infer this option from the
> >>> presence of PCI devices in the guest configuration?
> >> 
> >> If the user wants to use xl pci-attach to attach a device to a guest at runtime, that guest must have a vPCI bus (even one with no devices).
> >> If we do not have the vpci parameter in the configuration, this use case will no longer work.
> > 
> > That's what everyone appears to agree with. Yet why is the parameter needed
> > when there _are_ PCI devices anyway? That's the "optional" that Stefano
> > was suggesting, as I understand it.
> 
> I agree that in this case the parameter could be optional, and only required if no PCI device is assigned directly in the guest configuration.

Great!

Moreover, we might also be able to get rid of the vpci parameter in
cases where no devices are assigned at boot time but we still want to
create a vpci host bridge in the domU anyway. In those cases we could
use the following:

  pci = [];

otherwise, worse, but perhaps easier to implement in xl:

  pci = [""];
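
To illustrate how the pieces discussed in this thread would fit
together, a domU config fragment might look like the sketch below.
This is illustrative only: vpci="pci_ecam" is the option name
proposed in the design, and the device BDF is a made-up placeholder.

  # Hot-plug-only guest: create the vPCI bus, assign no devices
  # at boot; devices come later via "xl pci-attach".
  vpci = "pci_ecam"
  pci = []

  # Guest with a device assigned at boot (BDF is made up); per the
  # discussion above, the vpci parameter could be inferred from the
  # presence of the pci list and thus left out.
  pci = [ "0000:00:04.0" ]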


From xen-devel-bounces@lists.xenproject.org Mon Jul 20 23:24:19 2020
Date: Mon, 20 Jul 2020 16:24:15 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Bertrand Marquis <Bertrand.Marquis@arm.com>, robh@kernel.org
Subject: Re: RFC: PCI devices passthrough on Arm design proposal
In-Reply-To: <8AF78FF1-C389-44D8-896B-B95C1A0560E2@arm.com>
Message-ID: <alpine.DEB.2.21.2007201520370.32544@sstabellini-ThinkPad-T480s>
References: <3F6E40FB-79C5-4AE8-81CA-E16CA37BB298@arm.com>
 <BD475825-10F6-4538-8294-931E370A602C@arm.com>
 <E9CBAA57-5EF3-47F9-8A40-F5D7816DB2A4@arm.com>
 <20200717111644.GS7191@Air-de-Roger>
 <3B8A1B9D-A101-4937-AC42-4F62BE7E677C@arm.com>
 <20200717143120.GT7191@Air-de-Roger>
 <8AF78FF1-C389-44D8-896B-B95C1A0560E2@arm.com>
Cc: Rahul Singh <Rahul.Singh@arm.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 nd <nd@arm.com>, Julien Grall <julien.grall.oss@gmail.com>

+ Rob Herring

On Fri, 17 Jul 2020, Bertrand Marquis wrote:
> >> Regarding the DT entry, this is not coming from us and this is already
> >> defined this way in existing DTBs, we just reuse the existing entry. 
> > 
> > Is it possible to standardize the property and drop the linux prefix?
> 
> Honestly I do not know. This was there in the DT examples we checked, so
> we planned to use that. But it might be possible to standardize this.

We could certainly start a discussion about it. It looks like
linux,pci-domain is used beyond purely the Linux kernel. I think that it
is worth getting Rob's advice on this.


Rob, for context we are trying to get Linux and Xen to agree on a
numbering scheme to identify PCI host bridges correctly. We already have
an existing hypercall from the old x86 days that passes a segment number
to Xen as a parameter, see drivers/xen/pci.c:xen_add_device.
(xen_add_device assumes that a Linux domain and a PCI segment are the
same thing, which I understand is not the case.)


There is an existing device tree property called "linux,pci-domain"
which would solve the problem (ignoring the difference in the definition
of domain and segment) but it is clearly marked as a Linux-specific
property. Is there anything more "standard" that we can use?
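
For context, the property typically appears on PCI host bridge nodes,
along these lines (illustrative device tree fragment; the node name,
unit address and cell values are made up):

```
pcie@40000000 {
        compatible = "pci-host-ecam-generic";
        device_type = "pci";
        #address-cells = <3>;
        #size-cells = <2>;
        linux,pci-domain = <0>;
};
```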

I can find PCI domains being mentioned a few times in the Device Tree
PCI specification but can't find any associated IDs, and I couldn't find
segments at all.

What's your take on this? In general, what's your suggestion on getting
Xen and Linux (and other OSes which could be used as dom0 one day like
Zephyr) to agree on a simple numbering scheme to identify PCI host
bridges?

Should we just use "linux,pci-domain" as-is because it is already the de
facto standard? It looks like the property appears in both QEMU and
U-Boot already.


From xen-devel-bounces@lists.xenproject.org Mon Jul 20 23:55:02 2020
From: Nick Rosbrook <rosbrookn@gmail.com>
X-Google-Original-From: Nick Rosbrook <rosbrookn@ainfosec.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH for-4.14] golang/xenlight: fix code generation for python 2.6
Date: Mon, 20 Jul 2020 19:54:40 -0400
Message-Id: <d406ae82e0cdde2dc33a92d2685ffb77bacab7ee.1595289055.git.rosbrookn@ainfosec.com>
Cc: Nick Rosbrook <rosbrookn@ainfosec.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>, paul@xen.org

Before python 2.7, str.format() calls required that the format fields
were explicitly enumerated, e.g.:

  '{0} {1}'.format(foo, bar)

  vs.

  '{} {}'.format(foo, bar)

Currently, gengotypes.py uses the latter pattern everywhere, which means
the Go bindings do not build on python 2.6. Use the 2.6 syntax for
format() in order to support python 2.6 for now.
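
As a quick illustration (runnable on any modern Python; the failure of
the auto-numbered form can of course only be reproduced on an actual
2.6 interpreter):

```python
# Explicit field numbers are accepted from Python 2.6 onwards;
# auto-numbering ('{} {}') only became valid in Python 2.7.
foo, bar = 'xen', 'devel'

explicit = '{0} {1}'.format(foo, bar)  # works on 2.6 and later
auto = '{} {}'.format(foo, bar)        # ValueError on 2.6

assert explicit == auto == 'xen devel'
print(explicit)  # -> xen devel
```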

Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com>
---
 tools/golang/xenlight/gengotypes.py | 204 ++++++++++++++--------------
 1 file changed, 102 insertions(+), 102 deletions(-)

diff --git a/tools/golang/xenlight/gengotypes.py b/tools/golang/xenlight/gengotypes.py
index 557fecd07b..ebec938224 100644
--- a/tools/golang/xenlight/gengotypes.py
+++ b/tools/golang/xenlight/gengotypes.py
@@ -3,7 +3,7 @@
 import os
 import sys
 
-sys.path.append('{}/tools/libxl'.format(os.environ['XEN_ROOT']))
+sys.path.append('{0}/tools/libxl'.format(os.environ['XEN_ROOT']))
 import idl
 
 # Go versions of some builtin types.
@@ -73,14 +73,14 @@ def xenlight_golang_define_enum(ty = None):
 
     if ty.typename is not None:
         typename = xenlight_golang_fmt_name(ty.typename)
-        s += 'type {} int\n'.format(typename)
+        s += 'type {0} int\n'.format(typename)
 
     # Start const block
     s += 'const(\n'
 
     for v in ty.values:
         name = xenlight_golang_fmt_name(v.name)
-        s += '{} {} = {}\n'.format(name, typename, v.value)
+        s += '{0} {1} = {2}\n'.format(name, typename, v.value)
 
     # End const block
     s += ')\n'
@@ -99,9 +99,9 @@ def xenlight_golang_define_struct(ty = None, typename = None, nested = False):
 
     # Begin struct definition
     if nested:
-        s += '{} struct {{\n'.format(name)
+        s += '{0} struct {{\n'.format(name)
     else:
-        s += 'type {} struct {{\n'.format(name)
+        s += 'type {0} struct {{\n'.format(name)
 
     # Write struct fields
     for f in ty.fields:
@@ -111,13 +111,13 @@ def xenlight_golang_define_struct(ty = None, typename = None, nested = False):
                 typename = xenlight_golang_fmt_name(typename)
                 name     = xenlight_golang_fmt_name(f.name)
 
-                s += '{} []{}\n'.format(name, typename)
+                s += '{0} []{1}\n'.format(name, typename)
             else:
                 typename = f.type.typename
                 typename = xenlight_golang_fmt_name(typename)
                 name     = xenlight_golang_fmt_name(f.name)
 
-                s += '{} {}\n'.format(name, typename)
+                s += '{0} {1}\n'.format(name, typename)
 
         elif isinstance(f.type, idl.Struct):
             r = xenlight_golang_define_struct(f.type, typename=f.name, nested=True)
@@ -132,7 +132,7 @@ def xenlight_golang_define_struct(ty = None, typename = None, nested = False):
             extras.extend(r[1])
 
         else:
-            raise Exception('type {} not supported'.format(f.type))
+            raise Exception('type {0} not supported'.format(f.type))
 
     # End struct definition
     s += '}\n'
@@ -151,11 +151,11 @@ def xenlight_golang_define_union(ty = None, struct_name = '', union_name = ''):
     s = ''
     extras = []
 
-    interface_name = '{}_{}_union'.format(struct_name, ty.keyvar.name)
+    interface_name = '{0}_{1}_union'.format(struct_name, ty.keyvar.name)
     interface_name = xenlight_golang_fmt_name(interface_name, exported=False)
 
-    s += 'type {} interface {{\n'.format(interface_name)
-    s += 'is{}()\n'.format(interface_name)
+    s += 'type {0} interface {{\n'.format(interface_name)
+    s += 'is{0}()\n'.format(interface_name)
     s += '}\n'
 
     extras.append(s)
@@ -165,7 +165,7 @@ def xenlight_golang_define_union(ty = None, struct_name = '', union_name = ''):
             continue
 
         # Define struct
-        name = '{}_{}_union_{}'.format(struct_name, ty.keyvar.name, f.name)
+        name = '{0}_{1}_union_{2}'.format(struct_name, ty.keyvar.name, f.name)
         r = xenlight_golang_define_struct(f.type, typename=name)
         extras.append(r[0])
         extras.extend(r[1])
@@ -173,21 +173,21 @@ def xenlight_golang_define_union(ty = None, struct_name = '', union_name = ''):
         # This typeof trick ensures that the fields used in the cgo struct
         # used for marshaling are the same as the fields of the union in the
         # actual C type, and avoids re-defining all of those fields.
-        s = 'typedef typeof(((struct {} *)NULL)->{}.{}){};'
+        s = 'typedef typeof(((struct {0} *)NULL)->{1}.{2}){3};'
         s = s.format(struct_name, union_name, f.name, name)
         cgo_helpers_preamble.append(s)
 
         # Define function to implement 'union' interface
         name = xenlight_golang_fmt_name(name)
-        s = 'func (x {}) is{}(){{}}\n'.format(name, interface_name)
+        s = 'func (x {0}) is{1}(){{}}\n'.format(name, interface_name)
         extras.append(s)
 
     fname = xenlight_golang_fmt_name(ty.keyvar.name)
     ftype = xenlight_golang_fmt_name(ty.keyvar.type.typename)
-    s = '{} {}\n'.format(fname, ftype)
+    s = '{0} {1}\n'.format(fname, ftype)
 
-    fname = xenlight_golang_fmt_name('{}_union'.format(ty.keyvar.name))
-    s += '{} {}\n'.format(fname, interface_name)
+    fname = xenlight_golang_fmt_name('{0}_union'.format(ty.keyvar.name))
+    s += '{0} {1}\n'.format(fname, interface_name)
 
     return (s,extras)
 
@@ -243,7 +243,7 @@ def xenlight_golang_define_from_C(ty = None):
     Define the fromC marshaling function for the type
     represented by ty.
     """
-    func = 'func (x *{}) fromC(xc *C.{}) error {{\n {}\n return nil}}\n'
+    func = 'func (x *{0}) fromC(xc *C.{1}) error {{\n {2}\n return nil}}\n'
 
     goname = xenlight_golang_fmt_name(ty.typename)
     cname  = ty.typename
@@ -271,7 +271,7 @@ def xenlight_golang_define_from_C(ty = None):
             extras.extend(r[1])
 
         else:
-            raise Exception('type {} not supported'.format(f.type))
+            raise Exception('type {0} not supported'.format(f.type))
 
     return (func.format(goname, cname, body), extras)
 
@@ -300,8 +300,8 @@ def xenlight_golang_convert_from_C(ty = None, outer_name = None, cvarname = None
 
     # If outer_name is set, treat this as nested.
     if outer_name is not None:
-        goname = '{}.{}'.format(xenlight_golang_fmt_name(outer_name), goname)
-        cname  = '{}.{}'.format(outer_name, cname)
+        goname = '{0}.{1}'.format(xenlight_golang_fmt_name(outer_name), goname)
+        cname  = '{0}.{1}'.format(outer_name, cname)
 
     # Types that satisfy this condition can be easily casted or
     # converted to a Go builtin type.
@@ -312,15 +312,15 @@ def xenlight_golang_convert_from_C(ty = None, outer_name = None, cvarname = None
     if not is_castable:
         # If the type is not castable, we need to call its fromC
         # function.
-        s += 'if err := x.{}.fromC(&{}.{});'.format(goname,cvarname,cname)
-        s += 'err != nil {{\nreturn fmt.Errorf("converting field {}: %v", err)\n}}\n'.format(goname)
+        s += 'if err := x.{0}.fromC(&{1}.{2});'.format(goname,cvarname,cname)
+        s += 'err != nil {{\nreturn fmt.Errorf("converting field {0}: %v", err)\n}}\n'.format(goname)
 
     elif gotypename == 'string':
         # Use the cgo helper for converting C strings.
-        s += 'x.{} = C.GoString({}.{})\n'.format(goname,cvarname,cname)
+        s += 'x.{0} = C.GoString({1}.{2})\n'.format(goname,cvarname,cname)
 
     else:
-        s += 'x.{} = {}({}.{})\n'.format(goname,gotypename,cvarname,cname)
+        s += 'x.{0} = {1}({2}.{3})\n'.format(goname,gotypename,cvarname,cname)
 
     return s
 
@@ -331,9 +331,9 @@ def xenlight_golang_union_from_C(ty = None, union_name = '', struct_name = ''):
     gokeyname = xenlight_golang_fmt_name(keyname)
     keytype   = ty.keyvar.type.typename
     gokeytype = xenlight_golang_fmt_name(keytype)
-    field_name = xenlight_golang_fmt_name('{}_union'.format(keyname))
+    field_name = xenlight_golang_fmt_name('{0}_union'.format(keyname))
 
-    interface_name = '{}_{}_union'.format(struct_name, keyname)
+    interface_name = '{0}_{1}_union'.format(struct_name, keyname)
     interface_name = xenlight_golang_fmt_name(interface_name, exported=False)
 
     cgo_keyname = keyname
@@ -343,7 +343,7 @@ def xenlight_golang_union_from_C(ty = None, union_name = '', struct_name = ''):
     cases = {}
 
     for f in ty.fields:
-        val = '{}_{}'.format(keytype, f.name)
+        val = '{0}_{1}'.format(keytype, f.name)
         val = xenlight_golang_fmt_name(val)
 
         # Add to list of cases to make for the switch
@@ -354,17 +354,17 @@ def xenlight_golang_union_from_C(ty = None, union_name = '', struct_name = ''):
             continue
 
         # Define fromC func for 'union' struct.
-        typename   = '{}_{}_union_{}'.format(struct_name,keyname,f.name)
+        typename   = '{0}_{1}_union_{2}'.format(struct_name,keyname,f.name)
         gotypename = xenlight_golang_fmt_name(typename)
 
         # Define the function here. The cases for keyed unions are a little
         # different.
-        s = 'func (x *{}) fromC(xc *C.{}) error {{\n'.format(gotypename,struct_name)
-        s += 'if {}(xc.{}) != {} {{\n'.format(gokeytype,cgo_keyname,val)
-        err_string = '"expected union key {}"'.format(val)
-        s += 'return errors.New({})\n'.format(err_string)
+        s = 'func (x *{0}) fromC(xc *C.{1}) error {{\n'.format(gotypename,struct_name)
+        s += 'if {0}(xc.{1}) != {2} {{\n'.format(gokeytype,cgo_keyname,val)
+        err_string = '"expected union key {0}"'.format(val)
+        s += 'return errors.New({0})\n'.format(err_string)
         s += '}\n\n'
-        s += 'tmp := (*C.{})(unsafe.Pointer(&xc.{}[0]))\n'.format(typename,union_name)
+        s += 'tmp := (*C.{0})(unsafe.Pointer(&xc.{1}[0]))\n'.format(typename,union_name)
 
         for nf in f.type.fields:
             s += xenlight_golang_convert_from_C(nf,cvarname='tmp')
@@ -374,35 +374,35 @@ def xenlight_golang_union_from_C(ty = None, union_name = '', struct_name = ''):
 
         extras.append(s)
 
-    s = 'x.{} = {}(xc.{})\n'.format(gokeyname,gokeytype,cgo_keyname)
-    s += 'switch x.{}{{\n'.format(gokeyname)
+    s = 'x.{0} = {1}(xc.{2})\n'.format(gokeyname,gokeytype,cgo_keyname)
+    s += 'switch x.{0}{{\n'.format(gokeyname)
 
     # Create switch statement to determine which 'union element'
     # to populate in the Go struct.
     for case_name, case_tuple in sorted(cases.items()):
         (case_val, case_type) = case_tuple
 
-        s += 'case {}:\n'.format(case_val)
+        s += 'case {0}:\n'.format(case_val)
 
         if case_type is None:
-            s += "x.{} = nil\n".format(field_name)
+            s += "x.{0} = nil\n".format(field_name)
             continue
 
-        gotype = '{}_{}_union_{}'.format(struct_name,keyname,case_name)
+        gotype = '{0}_{1}_union_{2}'.format(struct_name,keyname,case_name)
         gotype = xenlight_golang_fmt_name(gotype)
-        goname = '{}_{}'.format(keyname,case_name)
+        goname = '{0}_{1}'.format(keyname,case_name)
         goname = xenlight_golang_fmt_name(goname,exported=False)
 
-        s += 'var {} {}\n'.format(goname, gotype)
-        s += 'if err := {}.fromC(xc);'.format(goname)
-        s += 'err != nil {{\n return fmt.Errorf("converting field {}: %v", err)\n}}\n'.format(goname)
+        s += 'var {0} {1}\n'.format(goname, gotype)
+        s += 'if err := {0}.fromC(xc);'.format(goname)
+        s += 'err != nil {{\n return fmt.Errorf("converting field {0}: %v", err)\n}}\n'.format(goname)
 
-        s += 'x.{} = {}\n'.format(field_name, goname)
+        s += 'x.{0} = {1}\n'.format(field_name, goname)
 
     # End switch statement
     s += 'default:\n'
-    err_string = '"invalid union key \'%v\'", x.{}'.format(gokeyname)
-    s += 'return fmt.Errorf({})'.format(err_string)
+    err_string = '"invalid union key \'%v\'", x.{0}'.format(gokeyname)
+    s += 'return fmt.Errorf({0})'.format(err_string)
     s += '}\n'
 
     return (s,extras)
@@ -420,22 +420,22 @@ def xenlight_golang_array_from_C(ty = None):
     goname     = xenlight_golang_fmt_name(ty.name)
     ctypename  = ty.type.elem_type.typename
     cname      = ty.name
-    cslice     = 'c{}'.format(goname)
+    cslice     = 'c{0}'.format(goname)
     clenvar    = ty.type.lenvar.name
 
-    s += 'x.{} = nil\n'.format(goname)
-    s += 'if n := int(xc.{}); n > 0 {{\n'.format(clenvar)
-    s += '{} := '.format(cslice)
-    s +='(*[1<<28]C.{})(unsafe.Pointer(xc.{}))[:n:n]\n'.format(ctypename, cname)
-    s += 'x.{} = make([]{}, n)\n'.format(goname, gotypename)
-    s += 'for i, v := range {} {{\n'.format(cslice)
+    s += 'x.{0} = nil\n'.format(goname)
+    s += 'if n := int(xc.{0}); n > 0 {{\n'.format(clenvar)
+    s += '{0} := '.format(cslice)
+    s +='(*[1<<28]C.{0})(unsafe.Pointer(xc.{1}))[:n:n]\n'.format(ctypename, cname)
+    s += 'x.{0} = make([]{1}, n)\n'.format(goname, gotypename)
+    s += 'for i, v := range {0} {{\n'.format(cslice)
 
     is_enum = isinstance(ty.type.elem_type,idl.Enumeration)
     if gotypename in go_builtin_types or is_enum:
-        s += 'x.{}[i] = {}(v)\n'.format(goname, gotypename)
+        s += 'x.{0}[i] = {1}(v)\n'.format(goname, gotypename)
     else:
-        s += 'if err := x.{}[i].fromC(&v); err != nil {{\n'.format(goname)
-        s += 'return fmt.Errorf("converting field {}: %v", err) }}\n'.format(goname)
+        s += 'if err := x.{0}[i].fromC(&v); err != nil {{\n'.format(goname)
+        s += 'return fmt.Errorf("converting field {0}: %v", err) }}\n'.format(goname)
 
     s += '}\n}\n'
 
@@ -446,11 +446,11 @@ def xenlight_golang_define_to_C(ty = None, typename = None, nested = False):
     Define the toC marshaling function for the type
     represented by ty.
     """
-    func = 'func (x *{}) toC(xc *C.{}) (err error){{{}\n return nil\n }}\n'
+    func = 'func (x *{0}) toC(xc *C.{1}) (err error){{{2}\n return nil\n }}\n'
     body = ''
 
     if ty.dispose_fn is not None:
-        body += 'defer func(){{\nif err != nil{{\nC.{}(xc)}}\n}}()\n\n'.format(ty.dispose_fn)
+        body += 'defer func(){{\nif err != nil{{\nC.{0}(xc)}}\n}}()\n\n'.format(ty.dispose_fn)
 
     goname = xenlight_golang_fmt_name(ty.typename)
     cname  = ty.typename
@@ -471,7 +471,7 @@ def xenlight_golang_define_to_C(ty = None, typename = None, nested = False):
             body += xenlight_golang_union_to_C(f.type, f.name, ty.typename)
 
         else:
-            raise Exception('type {} not supported'.format(f.type))
+            raise Exception('type {0} not supported'.format(f.type))
 
     return func.format(goname, cname, body)
 
@@ -506,26 +506,26 @@ def xenlight_golang_convert_to_C(ty = None, outer_name = None,
 
     # If outer_name is set, treat this as nested.
     if outer_name is not None:
-        goname = '{}.{}'.format(xenlight_golang_fmt_name(outer_name), goname)
-        cname  = '{}.{}'.format(outer_name, cname)
+        goname = '{0}.{1}'.format(xenlight_golang_fmt_name(outer_name), goname)
+        cname  = '{0}.{1}'.format(outer_name, cname)
 
     is_castable = (ty.type.json_parse_type == 'JSON_INTEGER' or
                    isinstance(ty.type, idl.Enumeration) or
                    gotypename in go_builtin_types)
 
     if not is_castable:
-        s += 'if err := {}.{}.toC(&{}.{}); err != nil {{\n'.format(govarname,goname,
+        s += 'if err := {0}.{1}.toC(&{2}.{3}); err != nil {{\n'.format(govarname,goname,
                                                                    cvarname,cname)
-        s += 'return fmt.Errorf("converting field {}: %v", err)\n}}\n'.format(goname)
+        s += 'return fmt.Errorf("converting field {0}: %v", err)\n}}\n'.format(goname)
 
     elif gotypename == 'string':
         # Use the cgo helper for converting C strings.
-        s += 'if {}.{} != "" {{\n'.format(govarname,goname)
-        s += '{}.{} = C.CString({}.{})}}\n'.format(cvarname,cname,
+        s += 'if {0}.{1} != "" {{\n'.format(govarname,goname)
+        s += '{0}.{1} = C.CString({2}.{3})}}\n'.format(cvarname,cname,
                                                    govarname,goname)
 
     else:
-        s += '{}.{} = C.{}({}.{})\n'.format(cvarname,cname,ctypename,
+        s += '{0}.{1} = C.{2}({3}.{4})\n'.format(cvarname,cname,ctypename,
                                             govarname,goname)
 
     return s
@@ -537,7 +537,7 @@ def xenlight_golang_union_to_C(ty = None, union_name = '',
     keytype   = ty.keyvar.type.typename
     gokeytype = xenlight_golang_fmt_name(keytype)
 
-    interface_name = '{}_{}_union'.format(struct_name, keyname)
+    interface_name = '{0}_{1}_union'.format(struct_name, keyname)
     interface_name = xenlight_golang_fmt_name(interface_name, exported=False)
 
     cgo_keyname = keyname
@@ -545,44 +545,44 @@ def xenlight_golang_union_to_C(ty = None, union_name = '',
         cgo_keyname = '_' + cgo_keyname
 
 
-    s = 'xc.{} = C.{}(x.{})\n'.format(cgo_keyname,keytype,gokeyname)
-    s += 'switch x.{}{{\n'.format(gokeyname)
+    s = 'xc.{0} = C.{1}(x.{2})\n'.format(cgo_keyname,keytype,gokeyname)
+    s += 'switch x.{0}{{\n'.format(gokeyname)
 
     # Create switch statement to determine how to populate the C union.
     for f in ty.fields:
-        key_val = '{}_{}'.format(keytype, f.name)
+        key_val = '{0}_{1}'.format(keytype, f.name)
         key_val = xenlight_golang_fmt_name(key_val)
 
-        s += 'case {}:\n'.format(key_val)
+        s += 'case {0}:\n'.format(key_val)
 
         if f.type is None:
             s += "break\n"
             continue
 
-        cgotype = '{}_{}_union_{}'.format(struct_name,keyname,f.name)
+        cgotype = '{0}_{1}_union_{2}'.format(struct_name,keyname,f.name)
         gotype  = xenlight_golang_fmt_name(cgotype)
 
-        field_name = xenlight_golang_fmt_name('{}_union'.format(keyname))
-        s += 'tmp, ok := x.{}.({})\n'.format(field_name,gotype)
+        field_name = xenlight_golang_fmt_name('{0}_union'.format(keyname))
+        s += 'tmp, ok := x.{0}.({1})\n'.format(field_name,gotype)
         s += 'if !ok {\n'
-        s += 'return errors.New("wrong type for union key {}")\n'.format(keyname)
+        s += 'return errors.New("wrong type for union key {0}")\n'.format(keyname)
         s += '}\n'
 
-        s += 'var {} C.{}\n'.format(f.name,cgotype)
+        s += 'var {0} C.{1}\n'.format(f.name,cgotype)
         for uf in f.type.fields:
             s += xenlight_golang_convert_to_C(uf,cvarname=f.name,
                                               govarname='tmp')
 
         # The union is still represented as Go []byte.
-        s += '{}Bytes := C.GoBytes(unsafe.Pointer(&{}),C.sizeof_{})\n'.format(f.name,
+        s += '{0}Bytes := C.GoBytes(unsafe.Pointer(&{1}),C.sizeof_{2})\n'.format(f.name,
                                                                               f.name,
                                                                               cgotype)
-        s += 'copy(xc.{}[:],{}Bytes)\n'.format(union_name,f.name)
+        s += 'copy(xc.{0}[:],{1}Bytes)\n'.format(union_name,f.name)
 
     # End switch statement
     s += 'default:\n'
-    err_string = '"invalid union key \'%v\'", x.{}'.format(gokeyname)
-    s += 'return fmt.Errorf({})'.format(err_string)
+    err_string = '"invalid union key \'%v\'", x.{0}'.format(gokeyname)
+    s += 'return fmt.Errorf({0})'.format(err_string)
     s += '}\n'
 
     return s
@@ -599,29 +599,29 @@ def xenlight_golang_array_to_C(ty = None):
 
     is_enum = isinstance(ty.type.elem_type,idl.Enumeration)
     if gotypename in go_builtin_types or is_enum:
-        s += 'if {} := len(x.{}); {} > 0 {{\n'.format(golenvar,goname,golenvar)
-        s += 'xc.{} = (*C.{})(C.malloc(C.size_t({}*{})))\n'.format(cname,ctypename,
+        s += 'if {0} := len(x.{1}); {2} > 0 {{\n'.format(golenvar,goname,golenvar)
+        s += 'xc.{0} = (*C.{1})(C.malloc(C.size_t({2}*{3})))\n'.format(cname,ctypename,
                                                                    golenvar,golenvar)
-        s += 'xc.{} = C.int({})\n'.format(clenvar,golenvar)
-        s += 'c{} := (*[1<<28]C.{})(unsafe.Pointer(xc.{}))[:{}:{}]\n'.format(goname,
+        s += 'xc.{0} = C.int({1})\n'.format(clenvar,golenvar)
+        s += 'c{0} := (*[1<<28]C.{1})(unsafe.Pointer(xc.{2}))[:{3}:{4}]\n'.format(goname,
                                                                       ctypename,cname,
                                                                       golenvar,golenvar)
-        s += 'for i,v := range x.{} {{\n'.format(goname)
-        s += 'c{}[i] = C.{}(v)\n'.format(goname,ctypename)
+        s += 'for i,v := range x.{0} {{\n'.format(goname)
+        s += 'c{0}[i] = C.{1}(v)\n'.format(goname,ctypename)
         s += '}\n}\n'
 
         return s
 
-    s += 'if {} := len(x.{}); {} > 0 {{\n'.format(golenvar,goname,golenvar)
-    s += 'xc.{} = (*C.{})(C.malloc(C.ulong({})*C.sizeof_{}))\n'.format(cname,ctypename,
+    s += 'if {0} := len(x.{1}); {2} > 0 {{\n'.format(golenvar,goname,golenvar)
+    s += 'xc.{0} = (*C.{1})(C.malloc(C.ulong({2})*C.sizeof_{3}))\n'.format(cname,ctypename,
                                                                    golenvar,ctypename)
-    s += 'xc.{} = C.int({})\n'.format(clenvar,golenvar)
-    s += 'c{} := (*[1<<28]C.{})(unsafe.Pointer(xc.{}))[:{}:{}]\n'.format(goname,
+    s += 'xc.{0} = C.int({1})\n'.format(clenvar,golenvar)
+    s += 'c{0} := (*[1<<28]C.{1})(unsafe.Pointer(xc.{2}))[:{3}:{4}]\n'.format(goname,
                                                                          ctypename,cname,
                                                                          golenvar,golenvar)
-    s += 'for i,v := range x.{} {{\n'.format(goname)
-    s += 'if err := v.toC(&c{}[i]); err != nil {{\n'.format(goname)
-    s += 'return fmt.Errorf("converting field {}: %v", err)\n'.format(goname)
+    s += 'for i,v := range x.{0} {{\n'.format(goname)
+    s += 'if err := v.toC(&c{0}[i]); err != nil {{\n'.format(goname)
+    s += 'return fmt.Errorf("converting field {0}: %v", err)\n'.format(goname)
     s += '}\n}\n}\n'
 
     return s
@@ -633,7 +633,7 @@ def xenlight_golang_define_constructor(ty = None):
     gotypename = xenlight_golang_fmt_name(ctypename)
 
     # Since this func is exported, add a comment as per Go conventions.
-    s += '// New{} returns an instance of {}'.format(gotypename,gotypename)
+    s += '// New{0} returns an instance of {1}'.format(gotypename,gotypename)
     s += ' initialized with defaults.\n'
 
     # If a struct has a keyed union, an extra argument is
@@ -643,7 +643,7 @@ def xenlight_golang_define_constructor(ty = None):
     init_fns = []
 
     # Add call to parent init_fn first.
-    init_fns.append('C.{}(&xc)'.format(ty.init_fn))
+    init_fns.append('C.{0}(&xc)'.format(ty.init_fn))
 
     for f in ty.fields:
         if not isinstance(f.type, idl.KeyedUnion):
@@ -658,24 +658,24 @@ def xenlight_golang_define_constructor(ty = None):
         # Serveral keyed unions use 'type' as the key variable name. In
         # that case, prepend the first letter of the Go type name.
         if param_goname == 'type':
-            param_goname = '{}type'.format(param_gotype.lower()[0])
+            param_goname = '{0}type'.format(param_gotype.lower()[0])
 
         # Add call to keyed union's init_fn.
-        init_fns.append('C.{}_{}(&xc, C.{}({}))'.format(ty.init_fn,
+        init_fns.append('C.{0}_{1}(&xc, C.{2}({3}))'.format(ty.init_fn,
                                                         param.name,
                                                         param_ctype,
                                                         param_goname))
 
         # Add to params list.
-        params.append('{} {}'.format(param_goname, param_gotype))
+        params.append('{0} {1}'.format(param_goname, param_gotype))
 
     # Define function
-    s += 'func New{}({}) (*{}, error) {{\n'.format(gotypename,
+    s += 'func New{0}({1}) (*{2}, error) {{\n'.format(gotypename,
                                                    ','.join(params),
                                                    gotypename)
 
     # Declare variables.
-    s += 'var (\nx {}\nxc C.{})\n\n'.format(gotypename, ctypename)
+    s += 'var (\nx {0}\nxc C.{1})\n\n'.format(gotypename, ctypename)
 
     # Write init_fn calls.
     s += '\n'.join(init_fns)
@@ -684,7 +684,7 @@ def xenlight_golang_define_constructor(ty = None):
     # Make sure dispose_fn get's called when constructor
     # returns.
     if ty.dispose_fn is not None:
-        s += 'defer C.{}(&xc)\n'.format(ty.dispose_fn)
+        s += 'defer C.{0}(&xc)\n'.format(ty.dispose_fn)
 
     s += '\n'
 
@@ -727,7 +727,7 @@ if __name__ == '__main__':
     header_comment="""// DO NOT EDIT.
 //
 // This file is generated by:
-// {}
+// {0}
 //
 
 """.format(' '.join(sys.argv))
-- 
2.17.1
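As a standalone illustration (not part of the patch, and using a hypothetical field name `Devices`), the doubled braces and explicit indices in the generator's format strings expand like this:

```python
# Doubled braces ('{{') emit a literal '{'; explicit indices ('{0}')
# are accepted by Python 2.6 as well as 2.7+ and 3.x.
goname = 'Devices'  # hypothetical field name, for illustration only
line = 'for i,v := range x.{0} {{\n'.format(goname)
assert line == 'for i,v := range x.Devices {\n'
```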



From xen-devel-bounces@lists.xenproject.org Tue Jul 21 00:04:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jul 2020 00:04:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxflX-0007k7-1m; Tue, 21 Jul 2020 00:04:15 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=EbhO=BA=amazon.com=prvs=46490858e=anchalag@srs-us1.protection.inumbo.net>)
 id 1jxflU-0007k2-UO
 for xen-devel@lists.xenproject.org; Tue, 21 Jul 2020 00:04:13 +0000
X-Inumbo-ID: b493d1bc-cae5-11ea-a036-12813bfff9fa
Received: from smtp-fw-9101.amazon.com (unknown [207.171.184.25])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b493d1bc-cae5-11ea-a036-12813bfff9fa;
 Tue, 21 Jul 2020 00:04:11 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=amazon.com; i=@amazon.com; q=dns/txt; s=amazon201209;
 t=1595289851; x=1626825851;
 h=date:from:to:cc:message-id:references:mime-version:
 content-transfer-encoding:in-reply-to:subject;
 bh=UsYCfB4vUXuGXUfz5NputLMtDzKXoyLRAcqjcS/oV1U=;
 b=EPr5X1FakCSP6fTwJXDPetnCGx/ufkrJ1vh3jhtR4X9D30jTM0JUjcTn
 C7qk/dcrnQBOoB+o0dsOj83R5D4UFv4dv8NcG345Yti99/gMqT+xJr3zV
 dYR/aBZwjF8ci9z2lJ0/aX2TLwmcWhTeXGJhLYhS01xwl6eHxrioX4yJr M=;
IronPort-SDR: tZK50MAqoODz1iBYxa4vWL5exodTTwRmcIhymTRjjXO2FMSE1DvQe+l0hCyjOlwb8BWlXHn/1m
 DLjLzCD/UXHg==
X-IronPort-AV: E=Sophos;i="5.75,375,1589241600"; d="scan'208";a="53134259"
Subject: Re: [PATCH v2 01/11] xen/manage: keep track of the on-going suspend
 mode
Received: from sea32-co-svc-lb4-vlan3.sea.corp.amazon.com (HELO
 email-inbound-relay-1a-821c648d.us-east-1.amazon.com) ([10.47.23.38])
 by smtp-border-fw-out-9101.sea19.amazon.com with ESMTP;
 21 Jul 2020 00:04:07 +0000
Received: from EX13MTAUEE002.ant.amazon.com
 (iad55-ws-svc-p15-lb9-vlan2.iad.amazon.com [10.40.159.162])
 by email-inbound-relay-1a-821c648d.us-east-1.amazon.com (Postfix) with ESMTPS
 id EECCBA1D1D; Tue, 21 Jul 2020 00:04:00 +0000 (UTC)
Received: from EX13D08UEE004.ant.amazon.com (10.43.62.182) by
 EX13MTAUEE002.ant.amazon.com (10.43.62.24) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Tue, 21 Jul 2020 00:03:48 +0000
Received: from EX13MTAUEA002.ant.amazon.com (10.43.61.77) by
 EX13D08UEE004.ant.amazon.com (10.43.62.182) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Tue, 21 Jul 2020 00:03:48 +0000
Received: from dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com
 (172.22.96.68) by mail-relay.amazon.com (10.43.61.169) with Microsoft SMTP
 Server id 15.0.1497.2 via Frontend Transport; Tue, 21 Jul 2020 00:03:48 +0000
Received: by dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com (Postfix,
 from userid 4335130)
 id 16C9240844; Tue, 21 Jul 2020 00:03:48 +0000 (UTC)
Date: Tue, 21 Jul 2020 00:03:48 +0000
From: Anchal Agarwal <anchalag@amazon.com>
To: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Message-ID: <20200721000348.GA19610@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
References: <cover.1593665947.git.anchalag@amazon.com>
 <20200702182136.GA3511@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
 <50298859-0d0e-6eb0-029b-30df2a4ecd63@oracle.com>
 <20200715204943.GB17938@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
 <0ca3c501-e69a-d2c9-a24c-f83afd4bdb8c@oracle.com>
 <20200717191009.GA3387@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
 <5464f384-d4b4-73f0-d39e-60ba9800d804@oracle.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <5464f384-d4b4-73f0-d39e-60ba9800d804@oracle.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Precedence: Bulk
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: eduval@amazon.com, len.brown@intel.com, peterz@infradead.org,
 benh@kernel.crashing.org, x86@kernel.org, linux-mm@kvack.org, pavel@ucw.cz,
 hpa@zytor.com, sstabellini@kernel.org, kamatam@amazon.com, mingo@redhat.com,
 xen-devel@lists.xenproject.org, sblbir@amazon.com, axboe@kernel.dk,
 konrad.wilk@oracle.com, anchalag@amazon.com, bp@alien8.de, tglx@linutronix.de,
 jgross@suse.com, netdev@vger.kernel.org, linux-pm@vger.kernel.org,
 rjw@rjwysocki.net, linux-kernel@vger.kernel.org, vkuznets@redhat.com,
 davem@davemloft.net, dwmw@amazon.co.uk, roger.pau@citrix.com
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Sat, Jul 18, 2020 at 09:47:04PM -0400, Boris Ostrovsky wrote:
> CAUTION: This email originated from outside of the organization. Do not click links or open attachments unless you can confirm the sender and know the content is safe.
> 
> 
> 
> (Roger, question for you at the very end)
> 
> On 7/17/20 3:10 PM, Anchal Agarwal wrote:
> > On Wed, Jul 15, 2020 at 05:18:08PM -0400, Boris Ostrovsky wrote:
> >>
> >> On 7/15/20 4:49 PM, Anchal Agarwal wrote:
> >>> On Mon, Jul 13, 2020 at 11:52:01AM -0400, Boris Ostrovsky wrote:
> >>>>
> >>>> On 7/2/20 2:21 PM, Anchal Agarwal wrote:
> >>>>> +
> >>>>> +bool xen_is_xen_suspend(void)
> >>>> Weren't you going to call this pv suspend? (And also --- is this suspend
> >>>> or hibernation? Your commit messages and cover letter talk about fixing
> >>>> hibernation).
> >>>>
> >>>>
> >>> This is hibernation for pvhvm/hvm/pv-on-hvm guests, as you may call it.
> >>> The method is just there to check if "xen suspend" is in progress.
> >>> I do not see "xen_suspend" differentiating between a pv or hvm
> >>> domain until later in the code; hence, I abstracted it to xen_is_xen_suspend.
> >>
> >> I meant "pv suspend" in the sense that this is paravirtual suspend, not
> >> suspend for paravirtual guests. Just like pv drivers are for both pv and
> >> hvm guests.
> >>
> >>
> >> And then --- should it be pv suspend or pv hibernation?
> >>
> >>
> > Ok, I think I am rather confused by this question. Here is what this
> > function is for: xen_is_xen_suspend() just tells us whether
> > the guest is in the "SHUTDOWN_SUSPEND" state or not. This check is needed
> > for correct invocation of the syscore_ops callbacks registered for guest
> > hibernation, and for xenbus to invoke the respective callbacks [suspend/resume
> > vs freeze/thaw/restore].
> > Since the "shutting_down" state is declared static and is not directly available
> > to other parts of the code, this function serves that purpose.
> >
> > I am having a hard time understanding why this should be called pv
> > suspend/hibernation unless you are suggesting something else?
> > Am I missing your point here?
> 
> 
> 
> I think I understand now what you are trying to say --- it's whether we
> are going to use the xen_suspend() routine, right? If that's the case then
> sure, you can use the "xen_suspend" term. (I'd probably still change
> xen_is_xen_suspend() to is_xen_suspend().)
>
I think so too. Will change it.
> 
> >>>>> +{
> >>>>> +     return suspend_mode == XEN_SUSPEND;
> >>>>> +}
> >>>>> +
> >>>> +static int xen_setup_pm_notifier(void)
> >>>> +{
> >>>> +     if (!xen_hvm_domain())
> >>>> +             return -ENODEV;
> >>>>
> >>>> I forgot --- what did we decide about non-x86 (i.e. ARM)?
> >>> It would be great to support that; however, it's out of
> >>> scope for this patch set.
> >>> I’ll be happy to discuss it separately.
> >>
> >> I wasn't implying that this *should* work on ARM but rather whether this
> >> will break ARM somehow (because xen_hvm_domain() is true there).
> >>
> >>
> > Ok, makes sense. TBH, I haven't tested this part of the code on ARM, and the
> > series was only meant to support x86 guest hibernation.
> > Moreover, this notifier is there to distinguish between two PM
> > events, PM suspend and PM hibernation. Since we only care about PM
> > hibernation, I may just remove this code and rely on the "SHUTDOWN_SUSPEND" state.
> > However, I may have to fix other patches in the series where this check may
> > appear and make it apply only to x86, right?
> 
> 
> I don't know what would happen if ARM guest tries to handle hibernation
> callbacks. The only ones that you are introducing are in block and net
> fronts and that's arch-independent.
> 
> 
> You do add a bunch of x86-specific code though (syscore ops), would
> something similar be needed for ARM?
> 
> 
I don't expect this to work out of the box on ARM. To start with, something
similar will be needed for ARM too.
We may still want to keep the driver code as-is.

I understand the concern wrt ARM; however, the support is currently only
proposed for x86 guests, and similar work could be carried out for ARM.
Also, if regular hibernation works correctly on ARM, then all that is needed is
to fix the Xen side of things.

I am not sure what could be done to achieve any assurances on the ARM side as
far as this series is concerned.
> >>>> And PVH dom0.
> >>> That's another good use case to make it work with however, I still
> >>> think that should be tested/worked upon separately as the feature itself
> >>> (PVH Dom0) is very new.
> >>
> >> Same question here --- will this break PVH dom0?
> >>
> > I haven't tested it as a part of this series. Is that a blocker here?
> 
> 
> I suspect dom0 will not do well now as far as hibernation goes, in which
> case you are not breaking anything.
> 
> 
> Roger?
> 
> 
> -boris

Thanks,
Anchal
> 
> 
> 


From xen-devel-bounces@lists.xenproject.org Tue Jul 21 00:07:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jul 2020 00:07:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxfow-0007re-J2; Tue, 21 Jul 2020 00:07:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Ynct=BA=gmail.com=rosbrookn@srs-us1.protection.inumbo.net>)
 id 1jxfov-0007rZ-Ls
 for xen-devel@lists.xenproject.org; Tue, 21 Jul 2020 00:07:45 +0000
X-Inumbo-ID: 336759b4-cae6-11ea-84d6-bc764e2007e4
Received: from mail-lf1-x143.google.com (unknown [2a00:1450:4864:20::143])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 336759b4-cae6-11ea-84d6-bc764e2007e4;
 Tue, 21 Jul 2020 00:07:44 +0000 (UTC)
Received: by mail-lf1-x143.google.com with SMTP id b11so8265586lfe.10
 for <xen-devel@lists.xenproject.org>; Mon, 20 Jul 2020 17:07:44 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc; bh=bsRCsxPKxqmfoZl2ZkEOgQ+GHjuR5idgvQo41XSoLas=;
 b=HOZdVsUjVcmbXCkvbbIt9xkgdZ4I032/VDgaNgJ3yuYJzrToV/UQuL72mnxw0d1vy0
 BITS3v1ooyzwWGfTXqhg1wX16OwdXFvU62wcRwAYCWauF1balW68/HPL1GiB+a1tjttA
 RVPF9Q8/rboWNKugvBV045KWrFCBu2ffOlTi2Ib5BBAjklIdVjy/onSLKW1oCgi2RDPA
 Zc7agrVQFi9YUvbXq4PDmHZwjCycop7rbPx9dkN0ojIi1cP/Pp9KCSwkmIwJlrZ/iW60
 RC/mifOrnGZaJWZSUvWeu3ZPsfFps9Sq7eX81aTlvtsXDSu0IGAidNe4CLv9LDo7tAEE
 IYBA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc;
 bh=bsRCsxPKxqmfoZl2ZkEOgQ+GHjuR5idgvQo41XSoLas=;
 b=k3omFUXTlak2r2VVAmloB0/wZhNYl0pVeKY8YTBHVshcfDoHIGvt3NjXGAwpK7lZ8z
 k1VxQBsGxY0r1e1LziHAv3lpxwaXZorSEjjxeHmaLxdK9Wug7/DFvC7YpziSRRH/XL8f
 ovVxNb2V5dtmKQODWRzrqs77+Fi7iBdfxF4QU109usKPzt5JfxlRnkkKndOm171WeqTV
 FWujQjADDdofgvakNhHyxMIe94N+etQdxwNtkTA/gakl/eSZW1nasfSfqMJ6ePdcPgxo
 TsYzQ12/w2aWVRqDjoqmXL9/RCIpZtuOqhpl6+0yjde2jfFIwxZAJpttNDl1RYvMKnIl
 WIBg==
X-Gm-Message-State: AOAM5324EKLG8itKMJ5gVC56MXVAijJSGDTe1krzPmcmw3igsgHN1OJM
 uMJffW6YxVsSrT/vfDCg3wfN9/0JsdHLn4RUijyjPE21
X-Google-Smtp-Source: ABdhPJywF2VBX4oNr4SDVyY5a3PAhQgTrqr84xlouybBfbOD5nQyhzuw9s+XGlohrthmEtcvvNvktMYyRvauDlVEYaY=
X-Received: by 2002:ac2:4422:: with SMTP id w2mr11880421lfl.152.1595290063120; 
 Mon, 20 Jul 2020 17:07:43 -0700 (PDT)
MIME-Version: 1.0
References: <d406ae82e0cdde2dc33a92d2685ffb77bacab7ee.1595289055.git.rosbrookn@ainfosec.com>
In-Reply-To: <d406ae82e0cdde2dc33a92d2685ffb77bacab7ee.1595289055.git.rosbrookn@ainfosec.com>
From: Nick Rosbrook <rosbrookn@gmail.com>
Date: Mon, 20 Jul 2020 20:07:31 -0400
Message-ID: <CAEBZRSffdZUWweDvZ9ZDMiemO4BGj10M4rj2Qmz3yFkgQhrn+g@mail.gmail.com>
Subject: Re: [PATCH for-4.14] golang/xenlight: fix code generation for python
 2.6
To: Xen-devel <xen-devel@lists.xenproject.org>
Content-Type: text/plain; charset="UTF-8"
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Nick Rosbrook <rosbrookn@ainfosec.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>, paul@xen.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> Before python 2.7, str.format() calls required that the format fields
> were explicitly enumerated, e.g.:
>
>   '{0} {1}'.format(foo, bar)
>
>   vs.
>
>   '{} {}'.format(foo, bar)
>
> Currently, gengotypes.py uses the latter pattern everywhere, which means
> the Go bindings do not build on python 2.6. Use the 2.6 syntax for
> format() in order to support python 2.6 for now.
>
> Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com>

I should add that I tested this with CONTAINER=centos6
./automation/scripts/containerize for Python 2.6, and on my Ubuntu
system with both Python 2.7 and 3.6.

-NR
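The incompatibility described in the commit message can be sketched in a few lines (auto-numbered fields were added in Python 2.7/3.1; on 2.6 they raise ValueError):

```python
# Explicit field indices work on Python 2.6 and later; auto-numbered
# fields ('{} {}') only work on Python 2.7+/3.1+, raising ValueError
# on Python 2.6.
explicit = '{0} {1}'.format('foo', 'bar')
auto = '{} {}'.format('foo', 'bar')  # would raise ValueError on 2.6
assert explicit == auto == 'foo bar'
```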


From xen-devel-bounces@lists.xenproject.org Tue Jul 21 00:18:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jul 2020 00:18:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxfyr-0000NJ-Ko; Tue, 21 Jul 2020 00:18:01 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=EbhO=BA=amazon.com=prvs=46490858e=anchalag@srs-us1.protection.inumbo.net>)
 id 1jxfyq-0000NE-Q9
 for xen-devel@lists.xenproject.org; Tue, 21 Jul 2020 00:18:00 +0000
X-Inumbo-ID: a2a0ddd6-cae7-11ea-a038-12813bfff9fa
Received: from smtp-fw-2101.amazon.com (unknown [72.21.196.25])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a2a0ddd6-cae7-11ea-a038-12813bfff9fa;
 Tue, 21 Jul 2020 00:18:00 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=amazon.com; i=@amazon.com; q=dns/txt; s=amazon201209;
 t=1595290680; x=1626826680;
 h=date:from:to:cc:message-id:references:mime-version:
 content-transfer-encoding:in-reply-to:subject;
 bh=q1CNBzpRhN4Mc4ypaHePuk74XZdBFhC4bG3j6hSGqx4=;
 b=SW1XdfWTRyXAlrapx6hAdhAFNUiQnu/5goOz0hhhqg1fChchui/fo1bO
 OS2brwdGQxIauPJGbJIAjL2hf7+5sVTkdIg892J56gPaKvaY750lOyB+B
 p+EiLL1nz62IaOlcs+GzEwr09dkcVUFbKyDJ0eMTMWUoBfH7I0hzyyTQ6 0=;
IronPort-SDR: G2GXoAb+f4TN0NyhO0C8Og8NIHK2YxFNqEXDyAHtEtY+mbXOxHH+4fC2M/p5JlcPh2j5NAouPF
 /BfqN6WyTTjw==
X-IronPort-AV: E=Sophos;i="5.75,375,1589241600"; d="scan'208";a="42970481"
Subject: Re: [PATCH v2 01/11] xen/manage: keep track of the on-going suspend
 mode
Received: from iad12-co-svc-p1-lb1-vlan2.amazon.com (HELO
 email-inbound-relay-1e-303d0b0e.us-east-1.amazon.com) ([10.43.8.2])
 by smtp-border-fw-out-2101.iad2.amazon.com with ESMTP;
 21 Jul 2020 00:18:00 +0000
Received: from EX13MTAUEB002.ant.amazon.com
 (iad55-ws-svc-p15-lb9-vlan2.iad.amazon.com [10.40.159.162])
 by email-inbound-relay-1e-303d0b0e.us-east-1.amazon.com (Postfix) with ESMTPS
 id 7324CA1F8C; Tue, 21 Jul 2020 00:17:53 +0000 (UTC)
Received: from EX13D08UEB004.ant.amazon.com (10.43.60.142) by
 EX13MTAUEB002.ant.amazon.com (10.43.60.12) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Tue, 21 Jul 2020 00:17:36 +0000
Received: from EX13MTAUEA002.ant.amazon.com (10.43.61.77) by
 EX13D08UEB004.ant.amazon.com (10.43.60.142) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Tue, 21 Jul 2020 00:17:36 +0000
Received: from dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com
 (172.22.96.68) by mail-relay.amazon.com (10.43.61.169) with Microsoft SMTP
 Server id 15.0.1497.2 via Frontend Transport; Tue, 21 Jul 2020 00:17:36 +0000
Received: by dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com (Postfix,
 from userid 4335130)
 id 192FC40712; Tue, 21 Jul 2020 00:17:36 +0000 (UTC)
Date: Tue, 21 Jul 2020 00:17:36 +0000
From: Anchal Agarwal <anchalag@amazon.com>
To: Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>
Message-ID: <20200721001736.GB19610@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
References: <cover.1593665947.git.anchalag@amazon.com>
 <20200702182136.GA3511@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
 <50298859-0d0e-6eb0-029b-30df2a4ecd63@oracle.com>
 <20200715204943.GB17938@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
 <0ca3c501-e69a-d2c9-a24c-f83afd4bdb8c@oracle.com>
 <20200717191009.GA3387@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
 <5464f384-d4b4-73f0-d39e-60ba9800d804@oracle.com>
 <20200720093705.GG7191@Air-de-Roger>
MIME-Version: 1.0
Content-Type: text/plain; charset="iso-8859-1"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20200720093705.GG7191@Air-de-Roger>
User-Agent: Mutt/1.5.21 (2010-09-15)
Precedence: Bulk
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: eduval@amazon.com, len.brown@intel.com, peterz@infradead.org,
 benh@kernel.crashing.org, x86@kernel.org, linux-mm@kvack.org, pavel@ucw.cz,
 hpa@zytor.com, tglx@linutronix.de, sstabellini@kernel.org, kamatam@amazon.com,
 mingo@redhat.com, xen-devel@lists.xenproject.org, sblbir@amazon.com,
 axboe@kernel.dk, konrad.wilk@oracle.com, bp@alien8.de,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>, jgross@suse.com,
 netdev@vger.kernel.org, linux-pm@vger.kernel.org, rjw@rjwysocki.net,
 linux-kernel@vger.kernel.org, vkuznets@redhat.com, davem@davemloft.net,
 dwmw@amazon.co.uk
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Mon, Jul 20, 2020 at 11:37:05AM +0200, Roger Pau Monné wrote:
> 
> On Sat, Jul 18, 2020 at 09:47:04PM -0400, Boris Ostrovsky wrote:
> > (Roger, question for you at the very end)
> >
> > On 7/17/20 3:10 PM, Anchal Agarwal wrote:
> > > On Wed, Jul 15, 2020 at 05:18:08PM -0400, Boris Ostrovsky wrote:
> > >>
> > >> On 7/15/20 4:49 PM, Anchal Agarwal wrote:
> > >>> On Mon, Jul 13, 2020 at 11:52:01AM -0400, Boris Ostrovsky wrote:
> > >>>>
> > >>>> On 7/2/20 2:21 PM, Anchal Agarwal wrote:
> > >>>> And PVH dom0.
> > >>> That's another good use case to make it work with however, I still
> > >>> think that should be tested/worked upon separately as the feature itself
> > >>> (PVH Dom0) is very new.
> > >>
> > >> Same question here --- will this break PVH dom0?
> > >>
> > > I haven't tested it as a part of this series. Is that a blocker here?
> >
> >
> > I suspect dom0 will not do well now as far as hibernation goes, in which
> > case you are not breaking anything.
> >
> >
> > Roger?
> 
> I sadly don't have any box ATM that supports hibernation where I
> could test it. We have hibernation support for PV dom0, so while I
> haven't done anything specific to support or test hibernation on PVH
> dom0, I would at least aim to not make this any worse, and hence the
> check should at least also fail for a PVH dom0?
> 
> if (!xen_hvm_domain() || xen_initial_domain())
>     return -ENODEV;
> 
> Ie: none of this should be applied to a PVH dom0, as it doesn't have
> PV devices and hence should follow the bare metal device suspend.
>
So, from what I understand, you mean that any guest running on a PVH dom0 should
not hibernate if hibernation is triggered from within the guest, or should it?

> Also I would contact the QubesOS guys, they rely heavily on the
> suspend feature for dom0, and that's something not currently tested by
> osstest so any breakages there go unnoticed.
> 
Was this for me or Boris? If it's the former, then I have no idea how to contact them.
> Thanks, Roger.

Thanks,
Anchal


From xen-devel-bounces@lists.xenproject.org Tue Jul 21 00:36:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jul 2020 00:36:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxgGB-00022E-7o; Tue, 21 Jul 2020 00:35:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Ynct=BA=gmail.com=rosbrookn@srs-us1.protection.inumbo.net>)
 id 1jxgGA-000229-29
 for xen-devel@lists.xenproject.org; Tue, 21 Jul 2020 00:35:54 +0000
X-Inumbo-ID: 21fdb0ac-caea-11ea-84d6-bc764e2007e4
Received: from mail-il1-x12b.google.com (unknown [2607:f8b0:4864:20::12b])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 21fdb0ac-caea-11ea-84d6-bc764e2007e4;
 Tue, 21 Jul 2020 00:35:53 +0000 (UTC)
Received: by mail-il1-x12b.google.com with SMTP id e18so14936120ilr.7
 for <xen-devel@lists.xenproject.org>; Mon, 20 Jul 2020 17:35:53 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=date:from:to:cc:subject:message-id:mime-version:content-disposition
 :user-agent; bh=uTQPS0B/n4Yh4+Qk46RF1Bz/u49XaDIC+VFbdKpKK1c=;
 b=A4XF5nTMvJxQ6C1Ph0NgG9veoLwV5SXUcvpvR9nrVozcsXqoPWA8W8a653kgRW7Z37
 MSiykVktkkV8hpQyln2DE8i1Z0mrkaxrsdBs507SrbHB87UUnD6M4K0R/14p5CEu+ROe
 GyyesWoviNpQkDtZI8azFi1V99IMmPlGYpgg55j63I8AG+VsQqaW5iO3D4URfT8RRIpU
 wmQW4wEXW07U/7mE9vmXDjEKihxUt5himc4OGtgqpPeKUikUUxFRJL6reqPeaABkfYlf
 bCqtkSzA1msM++0cZOGEl/INYmythaf1tN3KN9M6fdAkcr8IYlWcCbQnwQ0W9FMhveT4
 EgaA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:date:from:to:cc:subject:message-id:mime-version
 :content-disposition:user-agent;
 bh=uTQPS0B/n4Yh4+Qk46RF1Bz/u49XaDIC+VFbdKpKK1c=;
 b=RNr6H6rahHl+1eeaZj/lwN95GBbgXo48QVoQn6sfAtOcawal38dkBj1nqJZtt2ALQu
 P6quLZX0Dwv0osgOfPMXEmYrEwZtMrIh9dDIMcrxJRcRj4/tuVwaJaqhT13q1ibmh8zC
 KJx1T4G0dgwtxb6RvEo8C0Z4MRAURkQIXzCDsLGFUWxAW5+pRjRTv8JGAUnsIEakHyyd
 kRld5SSmUBA7RJONgMWAMvk+AmPuSC+Isry03DEV+lKNtpszbUMUqpZjxTiBsjqwDq/c
 ztWRZ+6xJQGRnVefe0pNaGIFPmKom4t3y6AixNtXmaOoZSBXTzmnBAznH0yfKlVSqWk9
 3OOw==
X-Gm-Message-State: AOAM533FRlRtQCVXG4UKttxTWFO3XWWEitTD0uLCc5RJNnZLjHuBnZF2
 3X/ADOyaq8x9pt4oDoMcWOBuZTMHEPA=
X-Google-Smtp-Source: ABdhPJyP3lG+aoAc09hvlGxI22W14Jp4RcKLPImmlW5rQIRp7ET8iVoyJh9yXIKDSrH3hvsXk9c3YQ==
X-Received: by 2002:a92:bac4:: with SMTP id t65mr27348514ill.138.1595291752280; 
 Mon, 20 Jul 2020 17:35:52 -0700 (PDT)
Received: from six (cpe-67-241-56-252.twcny.res.rr.com. [67.241.56.252])
 by smtp.gmail.com with ESMTPSA id v4sm9618467ilo.44.2020.07.20.17.35.50
 (version=TLS1_2 cipher=ECDHE-ECDSA-CHACHA20-POLY1305 bits=256/256);
 Mon, 20 Jul 2020 17:35:51 -0700 (PDT)
Date: Mon, 20 Jul 2020 20:35:48 -0400
From: Nick Rosbrook <rosbrookn@gmail.com>
To: xen-devel@lists.xenproject.org
Subject: Golang design session follow-up
Message-ID: <20200721003548.GA9581@six>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
User-Agent: Mutt/1.9.4 (2018-02-28)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: rosbrookn@ainfosec.com, ian.jackson@eu.citrix.com, george.dunlap@citrix.com
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hello,

Here are the notes from the golang bindings design session at Xen Summit
earlier this month:

  Possible improvements: 
    - Expanding the IDL to have information about device add / remove functions
    
    Ian: It's important that the IDL generates the C prototypes as well.

    We could generate some man pages as well.
    
    "AO" functions need to be a separate thing, not just listed as arguments.
    
    How to distinguish between "easy to wrap" things vs "this is internal infrastructure"
  
    Idea: "Classes" in IDL: "Device add/remove function", &c

    Rust: Probably want two versions of the bindings; sync and async. Same thing for python.
  
  # Notification of events

    We don't want a golang wrapper for the sigchld callback

    Helper function to "do the domain shutdown thing"
  
  # Long-term home of the package
  
    Ian: Autogenerated stuff is becoming more annoying.

    Delete all the libxl auto-generated stuff from staging & master, and have an "output branch".

    The reason we have these in-tree is that otherwise you can't build *from git* if you don't 
    have new enough versions of the right tools.

    Distribution: Make a repo on xenbits!

Our main focus for "improvements" was on expanding the libxl IDL to support
function prototypes so that wrappers could be easily generated. I
volunteered to work on this front, and have started some patches that I
will send in an RFC state soon (likely after 4.14 is released).

On "notification of events", I think we were just discussing what needs
to be done for the xenlight package to support domain
destruction/reaping. This is also something I will work on.

Finally, we came to the decision that the golang bindings should be
pushed to their own repo and tagged independently of xen.git. I think
tasks still need to be divvied up for this one.

If I missed anything important or made any mistakes, please let me know.

Thanks,
NR


From xen-devel-bounces@lists.xenproject.org Tue Jul 21 01:14:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jul 2020 01:14:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxgr3-0006RV-3D; Tue, 21 Jul 2020 01:14:01 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tByU=BA=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jxgr1-0006RA-C0
 for xen-devel@lists.xenproject.org; Tue, 21 Jul 2020 01:13:59 +0000
X-Inumbo-ID: 7084990c-caef-11ea-84de-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7084990c-caef-11ea-84de-bc764e2007e4;
 Tue, 21 Jul 2020 01:13:52 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=7vEqOgzB0nppiR6F4uprrgcZGCiB0/J2d1WES9HqlUk=; b=NLPRDdBE1M7le75a3F/vWrhA6
 OB/rgrMooa+8Rura+47ZADjjHWgYRq46fn/LSiuGZEpV6ziGyCZ8ZfHKxT289zvLQCKNeogyUAyoz
 0sYubzbKtX1uXFt5gJ2+MwEw8lO8Ce6B+b3JmMkTnrTCblqu0oo2Arp/OvBp1fHMSCVgw=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jxgqt-00073s-Gx; Tue, 21 Jul 2020 01:13:51 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jxgqt-0006Ok-7y; Tue, 21 Jul 2020 01:13:51 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jxgqt-0003AU-7E; Tue, 21 Jul 2020 01:13:51 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-152043-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.14-testing test] 152043: tolerable trouble: fail/pass/starved
 - PUSHED
X-Osstest-Failures: xen-4.14-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 xen-4.14-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-4.14-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 xen-4.14-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-4.14-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-4.14-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-4.14-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-4.14-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-4.14-testing:test-amd64-amd64-qemuu-freebsd12-amd64:hosts-allocate:starved:nonblocking
 xen-4.14-testing:test-amd64-amd64-qemuu-freebsd11-amd64:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This: xen=23fe1b8d5170dfd5039c39181e82bfd5e20f3c18
X-Osstest-Versions-That: xen=d820391d2fba67566c52d5e0a047e70483265b6e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 21 Jul 2020 01:13:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 152043 xen-4.14-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/152043/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop             fail never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop             fail never pass
 test-amd64-amd64-qemuu-freebsd12-amd64  2 hosts-allocate           starved n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  2 hosts-allocate           starved n/a

version targeted for testing:
 xen                  23fe1b8d5170dfd5039c39181e82bfd5e20f3c18
baseline version:
 xen                  d820391d2fba67566c52d5e0a047e70483265b6e

Last test of basis   151922  2020-07-15 14:10:44 Z    5 days
Testing same since   152043  2020-07-20 12:10:42 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       starved 
 test-amd64-amd64-qemuu-freebsd12-amd64                       starved 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   d820391d2f..23fe1b8d51  23fe1b8d5170dfd5039c39181e82bfd5e20f3c18 -> stable-4.14


From xen-devel-bounces@lists.xenproject.org Tue Jul 21 01:39:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jul 2020 01:39:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxhG1-0008GU-CN; Tue, 21 Jul 2020 01:39:49 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=GVZ9=BA=kernel.org=robh@srs-us1.protection.inumbo.net>)
 id 1jxhG0-0008GP-9J
 for xen-devel@lists.xenproject.org; Tue, 21 Jul 2020 01:39:48 +0000
X-Inumbo-ID: 0f2c746e-caf3-11ea-84e1-bc764e2007e4
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0f2c746e-caf3-11ea-84e1-bc764e2007e4;
 Tue, 21 Jul 2020 01:39:47 +0000 (UTC)
Received: from mail-ot1-f52.google.com (mail-ot1-f52.google.com
 [209.85.210.52])
 (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 6F3A5207DD
 for <xen-devel@lists.xenproject.org>; Tue, 21 Jul 2020 01:39:46 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
 s=default; t=1595295586;
 bh=wkMDppvxheK7knOtfEWWPRMhnK8PPAk2YBD2HCxlKhE=;
 h=References:In-Reply-To:From:Date:Subject:To:Cc:From;
 b=UXFcC2U0nNAVpRFgRbwiPtHR7SJFnfEFkA0x9iHbezH7z/fRgb8G1++uj4z3/xppd
 ndGBAHpMe0ejLmoXngqwCPFk934z6h+1hvaIh263uoeP4nmH5zK5hJq5MZ7cHE/AlH
 MdMHU9zELyY5pqXmoGgtelAg3UQH+XyaNVnWKmiE=
Received: by mail-ot1-f52.google.com with SMTP id 72so13863760otc.3
 for <xen-devel@lists.xenproject.org>; Mon, 20 Jul 2020 18:39:46 -0700 (PDT)
X-Gm-Message-State: AOAM533XAwMLbFsKnSYxsUMvMQfjw7FP+d0JdgY+wQ6lzMEKUIjykIK4
 NvtyLsYAc4KWMa85zo6lRGBqSboNzfvQ+qRizg==
X-Google-Smtp-Source: ABdhPJwO/WazbcSglxyFjKKKgr7bq/843UxR0OTgXJZQvEIli96lZjKbtPdj37XycEj9xB+L8MaIuxKQnZ2M3XK2RKc=
X-Received: by 2002:a9d:46c:: with SMTP id 99mr3123658otc.192.1595295585833;
 Mon, 20 Jul 2020 18:39:45 -0700 (PDT)
MIME-Version: 1.0
References: <3F6E40FB-79C5-4AE8-81CA-E16CA37BB298@arm.com>
 <BD475825-10F6-4538-8294-931E370A602C@arm.com>
 <E9CBAA57-5EF3-47F9-8A40-F5D7816DB2A4@arm.com>
 <20200717111644.GS7191@Air-de-Roger>
 <3B8A1B9D-A101-4937-AC42-4F62BE7E677C@arm.com>
 <20200717143120.GT7191@Air-de-Roger>
 <8AF78FF1-C389-44D8-896B-B95C1A0560E2@arm.com>
 <alpine.DEB.2.21.2007201520370.32544@sstabellini-ThinkPad-T480s>
In-Reply-To: <alpine.DEB.2.21.2007201520370.32544@sstabellini-ThinkPad-T480s>
From: Rob Herring <robh@kernel.org>
Date: Mon, 20 Jul 2020 19:39:34 -0600
X-Gmail-Original-Message-ID: <CAL_JsqKiaSNsKxqenVtgfk-_5=im73CHfEM3YqiVTFvRBbKsJA@mail.gmail.com>
Message-ID: <CAL_JsqKiaSNsKxqenVtgfk-_5=im73CHfEM3YqiVTFvRBbKsJA@mail.gmail.com>
Subject: Re: RFC: PCI devices passthrough on Arm design proposal
To: Stefano Stabellini <sstabellini@kernel.org>
Content-Type: text/plain; charset="UTF-8"
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Rahul Singh <Rahul.Singh@arm.com>,
 Julien Grall <julien.grall.oss@gmail.com>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 nd <nd@arm.com>, =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Mon, Jul 20, 2020 at 5:24 PM Stefano Stabellini
<sstabellini@kernel.org> wrote:
>
> + Rob Herring
>
> On Fri, 17 Jul 2020, Bertrand Marquis wrote:
> > >> Regarding the DT entry, this is not coming from us and this is already
> > >> defined this way in existing DTBs, we just reuse the existing entry.
> > >
> > > Is it possible to standardize the property and drop the linux prefix?
> >
> > Honestly, I do not know. This was there in the DT examples we checked,
> > so we planned to use that. But it might be possible to standardize this.
>
> We could certainly start a discussion about it. It looks like
> linux,pci-domain is used beyond purely the Linux kernel. I think that it
> is worth getting Rob's advice on this.
>
>
> Rob, for context we are trying to get Linux and Xen to agree on a
> numbering scheme to identify PCI host bridges correctly. We already have
> an existing hypercall from the old x86 days that passes a segment number
> to Xen as a parameter, see drivers/xen/pci.c:xen_add_device.
> (xen_add_device assumes that a Linux domain and a PCI segment are the
> same thing, which I understand is not the case.)
>
>
> There is an existing device tree property called "linux,pci-domain"
> which would solve the problem (ignoring the difference in the definition
> of domain and segment) but it is clearly marked as a Linux-specific
> property. Is there anything more "standard" that we can use?
>
> I can find PCI domains being mentioned a few times in the Device Tree
> PCI specification but can't find any associated IDs, and I couldn't find
> segments at all.
>
> What's your take on this? In general, what's your suggestion on getting
> Xen and Linux (and other OSes which could be used as dom0 one day like
> Zephyr) to agree on a simple numbering scheme to identify PCI host
> bridges?
>
> Should we just use "linux,pci-domain" as-is because it is already the de
> facto standard? It looks like the property appears in both QEMU and
> U-Boot already.

Sounds good to me. We could drop the 'linux' part, but based on other
places where that has happened, it just means we end up supporting both
strings forever.

Rob
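
[Editorial note: for readers unfamiliar with the property under discussion, the sketch below shows how "linux,pci-domain" typically appears on a PCI host-bridge node in a device tree source. This example is not from the thread; the node name, addresses, and window sizes are hypothetical placeholders, not from any real board.]

```dts
/* Hypothetical generic-ECAM host bridge illustrating "linux,pci-domain".
 * The property assigns the bridge a domain (segment) number, which is
 * the value Xen and Linux would need to agree on per the discussion above. */
pcie0: pcie@40000000 {
	compatible = "pci-host-ecam-generic";
	device_type = "pci";
	#address-cells = <3>;
	#size-cells = <2>;
	reg = <0x0 0x40000000 0x0 0x10000000>;	/* ECAM config window */
	ranges = <0x02000000 0x0 0x50000000
		  0x0 0x50000000 0x0 0x10000000>;	/* 32-bit MMIO window */
	linux,pci-domain = <0>;	/* domain/segment number for this bridge */
};
```

A second host bridge on the same system would carry a distinct value (e.g. `linux,pci-domain = <1>;`), giving each bridge a stable, firmware-assigned identifier independent of probe order.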


From xen-devel-bounces@lists.xenproject.org Tue Jul 21 02:09:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jul 2020 02:09:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxhj1-0002jQ-SZ; Tue, 21 Jul 2020 02:09:47 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tByU=BA=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jxhj1-0002j3-3a
 for xen-devel@lists.xenproject.org; Tue, 21 Jul 2020 02:09:47 +0000
X-Inumbo-ID: 3c516b26-caf7-11ea-a04f-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 3c516b26-caf7-11ea-a04f-12813bfff9fa;
 Tue, 21 Jul 2020 02:09:40 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=XTEX+IFpcOHiRncNwbfPVucJWVTHRAkOqEos2JgRwpY=; b=kITfKN1BMTAxQAfkIgQRU0oK/
 cHGLBzLERI6QJzXPuIxHewZDWapzkU4hFkuzn0UhM1zCx6F2s6i3J6eEUUB/VsSJNXH4J+3oR3BP4
 eYtHWdncwMDknO4zCF/a/Ofusu2ofc61yq0ROwGxPuPEdEIxtXmAc56UgDKB/W3a0Imp8=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jxhiu-0000C1-7p; Tue, 21 Jul 2020 02:09:40 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jxhiu-0000eU-0F; Tue, 21 Jul 2020 02:09:40 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jxhit-0004aS-Vp; Tue, 21 Jul 2020 02:09:39 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-152049-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xtf test] 152049: all pass - PUSHED
X-Osstest-Versions-This: xtf=ba5923110c2f562170b82f955d9ace70f6a4a8e2
X-Osstest-Versions-That: xtf=f645a19115e666ce6401ca63b7d7388571463b55
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 21 Jul 2020 02:09:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 152049 xtf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/152049/

Perfect :-)
All tests in this flight passed as required

version targeted for testing:
 xtf                  ba5923110c2f562170b82f955d9ace70f6a4a8e2
baseline version:
 xtf                  f645a19115e666ce6401ca63b7d7388571463b55

Last test of basis   151789  2020-07-10 11:12:38 Z   10 days
Testing same since   152049  2020-07-20 15:10:26 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Julien Grall <jgrall@amazon.com>

jobs:
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-amd64-pvops                                            pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xtf.git
   f645a19..ba59231  ba5923110c2f562170b82f955d9ace70f6a4a8e2 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Tue Jul 21 06:27:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jul 2020 06:27:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxljl-0007dN-7r; Tue, 21 Jul 2020 06:26:49 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tByU=BA=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jxljj-0007d3-Ud
 for xen-devel@lists.xenproject.org; Tue, 21 Jul 2020 06:26:47 +0000
X-Inumbo-ID: 22b113aa-cb1b-11ea-a084-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 22b113aa-cb1b-11ea-a084-12813bfff9fa;
 Tue, 21 Jul 2020 06:26:39 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Message-Id:Subject:To:Sender:
 Reply-To:Cc:MIME-Version:Content-Type:Content-Transfer-Encoding:Content-ID:
 Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc
 :Resent-Message-ID:In-Reply-To:References:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=7/6DbP9neqbYbMWViYEmJIKLej7cZ65MKcZiqyP456k=; b=sRRcHsYFJGw8MaoI8vsA5tHZ5Y
 HrlCRXJ1z/f9A6laWpn6/vZHR92SDRZqPJl5erijwuT9Oyw4ZR/Mb8C8Su73cdTvYCKQ3EX/p4XJP
 HkWke3mLGkC3UC2pAdgkaKlXNFfBDKNR2ZyH2E8uwffzSm7kfV1tkrppZmJ0977zQ2ns=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jxljb-00060S-9O; Tue, 21 Jul 2020 06:26:39 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jxlja-0005th-T7; Tue, 21 Jul 2020 06:26:39 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jxlja-0005dB-SL; Tue, 21 Jul 2020 06:26:38 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Subject: [qemu-mainline bisection] complete test-amd64-i386-xl-qemuu-ovmf-amd64
Message-Id: <E1jxlja-0005dB-SL@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 21 Jul 2020 06:26:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

branch xen-unstable
xenbranch xen-unstable
job test-amd64-i386-xl-qemuu-ovmf-amd64
testid debian-hvm-install

Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://git.qemu.org/qemu.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  qemuu git://git.qemu.org/qemu.git
  Bug introduced:  81cb05732efb36971901c515b007869cc1d3a532
  Bug not present: d6b78ac8ecf94f56dbfbecc23fb4365d8772a41a
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/152065/


  commit 81cb05732efb36971901c515b007869cc1d3a532
  Author: Markus Armbruster <armbru@redhat.com>
  Date:   Tue Jun 9 14:23:37 2020 +0200
  
      qdev: Assert devices are plugged into a bus that can take them
      
      This would have caught some of the bugs I just fixed.
      
      Signed-off-by: Markus Armbruster <armbru@redhat.com>
      Reviewed-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
      Message-Id: <20200609122339.937862-23-armbru@redhat.com>


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/qemu-mainline/test-amd64-i386-xl-qemuu-ovmf-amd64.debian-hvm-install.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/qemu-mainline/test-amd64-i386-xl-qemuu-ovmf-amd64.debian-hvm-install --summary-out=tmp/152065.bisection-summary --basis-template=151065 --blessings=real,real-bisect qemu-mainline test-amd64-i386-xl-qemuu-ovmf-amd64 debian-hvm-install
Searching for failure / basis pass:
 152039 fail [host=pinot1] / 151149 [host=albana1] 151101 [host=huxelrebe0] 151065 [host=elbling0] 151047 [host=chardonnay1] 150970 [host=huxelrebe1] 150930 [host=chardonnay0] 150916 [host=fiano1] 150909 [host=italia0] 150895 [host=elbling1] 150831 [host=debina1] 150694 [host=pinot0] 150631 ok.
Failure / basis pass flights: 152039 / 150631
(tree with no url: minios)
Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://git.qemu.org/qemu.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git
Latest c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d8327496762b4f2a54c9bafd7a214314ec28e9e 3c659044118e34603161457db9934a34f816d78b 9fc87111005e8903785db40819af66b8f85b8b96 6ada2285d9918859699c92e09540e023e0a16054 fb024b779336a0f73b3aee885b2ce082e812881f
Basis pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ca407c7246bf405da6d9b1b9d93e5e7f17b4b1f9 3c659044118e34603161457db9934a34f816d78b 5cc7a54c2e91d82cb6a52e4921325c511fd90712 2e3de6253422112ae43e608661ba94ea6b345694 1497e78068421d83956f8e82fb6e1bf1fc3b1199
Generating revisions with ./adhoc-revtuple-generator  git://xenbits.xen.org/linux-pvops.git#c3038e718a19fc596f7b1baba0f83d5146dc7784-c3038e718a19fc596f7b1baba0f83d5146dc7784 git://xenbits.xen.org/osstest/linux-firmware.git#c530a75c1e6a472b0eb9558310b518f0dfcd8860-c530a75c1e6a472b0eb9558310b518f0dfcd8860 git://xenbits.xen.org/osstest/ovmf.git#ca407c7246bf405da6d9b1b9d93e5e7f17b4b1f9-3d8327496762b4f2a54c9bafd7a214314ec28e9e git://xenbits.xen.org/qemu-xen-traditional.git#3c659044118e34603161457db99\
 34a34f816d78b-3c659044118e34603161457db9934a34f816d78b git://git.qemu.org/qemu.git#5cc7a54c2e91d82cb6a52e4921325c511fd90712-9fc87111005e8903785db40819af66b8f85b8b96 git://xenbits.xen.org/osstest/seabios.git#2e3de6253422112ae43e608661ba94ea6b345694-6ada2285d9918859699c92e09540e023e0a16054 git://xenbits.xen.org/xen.git#1497e78068421d83956f8e82fb6e1bf1fc3b1199-fb024b779336a0f73b3aee885b2ce082e812881f
Use of uninitialized value $parents in array dereference at ./adhoc-revtuple-generator line 465.
Use of uninitialized value in concatenation (.) or string at ./adhoc-revtuple-generator line 465.
Use of uninitialized value $parents in array dereference at ./adhoc-revtuple-generator line 465.
Use of uninitialized value in concatenation (.) or string at ./adhoc-revtuple-generator line 465.
Use of uninitialized value $parents in array dereference at ./adhoc-revtuple-generator line 465.
Use of uninitialized value in concatenation (.) or string at ./adhoc-revtuple-generator line 465.
Use of uninitialized value $parents in array dereference at ./adhoc-revtuple-generator line 465.
Use of uninitialized value in concatenation (.) or string at ./adhoc-revtuple-generator line 465.
Use of uninitialized value $parents in array dereference at ./adhoc-revtuple-generator line 465.
Use of uninitialized value in concatenation (.) or string at ./adhoc-revtuple-generator line 465.
Loaded 67718 nodes in revision graph
Searching for test results:
 150631 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ca407c7246bf405da6d9b1b9d93e5e7f17b4b1f9 3c659044118e34603161457db9934a34f816d78b 5cc7a54c2e91d82cb6a52e4921325c511fd90712 2e3de6253422112ae43e608661ba94ea6b345694 1497e78068421d83956f8e82fb6e1bf1fc3b1199
 150694 [host=pinot0]
 150831 [host=debina1]
 150909 [host=italia0]
 150930 [host=chardonnay0]
 150916 [host=fiano1]
 150895 [host=elbling1]
 150899 []
 150970 [host=huxelrebe1]
 151047 [host=chardonnay1]
 151101 [host=huxelrebe0]
 151065 [host=elbling0]
 151149 [host=albana1]
 151221 fail irrelevant
 151175 fail irrelevant
 151241 fail irrelevant
 151286 fail irrelevant
 151269 fail irrelevant
 151328 fail irrelevant
 151304 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 322969adf1fb3d6cfbd613f35121315715aff2ed 3c659044118e34603161457db9934a34f816d78b 171199f56f5f9bdf1e5d670d09ef1351d8f01bae 2e3de6253422112ae43e608661ba94ea6b345694 fde76f895d0aa817a1207d844d793239c6639bc6
 151377 fail irrelevant
 151353 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1a992030522622c42aa063788b3276789c56c1e1 3c659044118e34603161457db9934a34f816d78b d4b78317b7cf8c0c635b70086503813f79ff21ec 2e3de6253422112ae43e608661ba94ea6b345694 fde76f895d0aa817a1207d844d793239c6639bc6
 151399 fail irrelevant
 151414 fail irrelevant
 151435 fail irrelevant
 151459 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 0f01cec52f4794777feb067e4fa0bfcedfdc124e 3c659044118e34603161457db9934a34f816d78b e7651153a8801dad6805d450ea8bef9b46c1adf5 88ab0c15525ced2eefe39220742efe4769089ad8 88cfd062e8318dfeb67c7d2eb50b6cd224b0738a
 151471 fail irrelevant
 151485 fail irrelevant
 151500 fail irrelevant
 151518 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 00217f1919270007d7a911f89b32e39b9dcaa907 3c659044118e34603161457db9934a34f816d78b fc1bff958998910ec8d25db86cd2f53ff125f7ab 88ab0c15525ced2eefe39220742efe4769089ad8 23ca7ec0ba620db52a646d80e22f9703a6589f66
 151547 fail irrelevant
 151598 fail irrelevant
 151577 fail irrelevant
 151622 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 627d1d6693b0594d257dbe1a3363a8d4bd4d8307 3c659044118e34603161457db9934a34f816d78b 7b7515702012219410802a168ae4aa45b72a44df 88ab0c15525ced2eefe39220742efe4769089ad8 be63d9d47f571a60d70f8fb630c03871312d9655
 151656 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 627d1d6693b0594d257dbe1a3363a8d4bd4d8307 3c659044118e34603161457db9934a34f816d78b eb6490f544388dd24c0d054a96dd304bc7284450 88ab0c15525ced2eefe39220742efe4769089ad8 be63d9d47f571a60d70f8fb630c03871312d9655
 151634 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 627d1d6693b0594d257dbe1a3363a8d4bd4d8307 3c659044118e34603161457db9934a34f816d78b eb6490f544388dd24c0d054a96dd304bc7284450 88ab0c15525ced2eefe39220742efe4769089ad8 be63d9d47f571a60d70f8fb630c03871312d9655
 151645 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 627d1d6693b0594d257dbe1a3363a8d4bd4d8307 3c659044118e34603161457db9934a34f816d78b eb6490f544388dd24c0d054a96dd304bc7284450 88ab0c15525ced2eefe39220742efe4769089ad8 be63d9d47f571a60d70f8fb630c03871312d9655
 151669 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 627d1d6693b0594d257dbe1a3363a8d4bd4d8307 3c659044118e34603161457db9934a34f816d78b eb6490f544388dd24c0d054a96dd304bc7284450 88ab0c15525ced2eefe39220742efe4769089ad8 be63d9d47f571a60d70f8fb630c03871312d9655
 151685 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 627d1d6693b0594d257dbe1a3363a8d4bd4d8307 3c659044118e34603161457db9934a34f816d78b eb6490f544388dd24c0d054a96dd304bc7284450 88ab0c15525ced2eefe39220742efe4769089ad8 be63d9d47f571a60d70f8fb630c03871312d9655
 151704 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 627d1d6693b0594d257dbe1a3363a8d4bd4d8307 3c659044118e34603161457db9934a34f816d78b eb6490f544388dd24c0d054a96dd304bc7284450 88ab0c15525ced2eefe39220742efe4769089ad8 be63d9d47f571a60d70f8fb630c03871312d9655
 151778 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 bdafda8c457eb90c65f37026589b54258300f05c 3c659044118e34603161457db9934a34f816d78b aff2caf6b3fbab1062e117a47b66d27f7fd2f272 88ab0c15525ced2eefe39220742efe4769089ad8 3fdc211b01b29f252166937238efe02d15cb5780
 151721 fail irrelevant
 151763 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 bdafda8c457eb90c65f37026589b54258300f05c 3c659044118e34603161457db9934a34f816d78b 48f22ad04ead83e61b4b35871ec6f6109779b791 88ab0c15525ced2eefe39220742efe4769089ad8 3fdc211b01b29f252166937238efe02d15cb5780
 151744 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 bdafda8c457eb90c65f37026589b54258300f05c 3c659044118e34603161457db9934a34f816d78b 8796c64ecdfd34be394ea277aaaaa53df0c76996 88ab0c15525ced2eefe39220742efe4769089ad8 3fdc211b01b29f252166937238efe02d15cb5780
 151804 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 bdafda8c457eb90c65f37026589b54258300f05c 3c659044118e34603161457db9934a34f816d78b 45db94cc90c286a9965a285ba19450f448760a09 88ab0c15525ced2eefe39220742efe4769089ad8 3fdc211b01b29f252166937238efe02d15cb5780
 151816 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 bdafda8c457eb90c65f37026589b54258300f05c 3c659044118e34603161457db9934a34f816d78b 45db94cc90c286a9965a285ba19450f448760a09 88ab0c15525ced2eefe39220742efe4769089ad8 3fdc211b01b29f252166937238efe02d15cb5780
 151833 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f45e3a4afa65a45ea1a956a7c5e7410ff40190d1 3c659044118e34603161457db9934a34f816d78b 827937158b72ce2265841ff528bba3c44a1bfbc8 88ab0c15525ced2eefe39220742efe4769089ad8 02d69864b51a4302a148c28d6d391238a6778b4b
 151855 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f45e3a4afa65a45ea1a956a7c5e7410ff40190d1 3c659044118e34603161457db9934a34f816d78b d34498309cff7560ac90c422c56e3137e6a64b19 88ab0c15525ced2eefe39220742efe4769089ad8 02d69864b51a4302a148c28d6d391238a6778b4b
 151841 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f45e3a4afa65a45ea1a956a7c5e7410ff40190d1 3c659044118e34603161457db9934a34f816d78b 2033cc6efa98b831d7839e367aa7d5aa74d0750f 88ab0c15525ced2eefe39220742efe4769089ad8 02d69864b51a4302a148c28d6d391238a6778b4b
 151849 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f45e3a4afa65a45ea1a956a7c5e7410ff40190d1 3c659044118e34603161457db9934a34f816d78b d34498309cff7560ac90c422c56e3137e6a64b19 88ab0c15525ced2eefe39220742efe4769089ad8 02d69864b51a4302a148c28d6d391238a6778b4b
 151874 fail irrelevant
 151895 fail irrelevant
 151914 fail irrelevant
 151934 fail irrelevant
 152060 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 3c659044118e34603161457db9934a34f816d78b d6b78ac8ecf94f56dbfbecc23fb4365d8772a41a 2e3de6253422112ae43e608661ba94ea6b345694 1251402caf8685f45d9d580f01583370f7e2d272
 151968 fail irrelevant
 151952 fail irrelevant
 152023 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 3c659044118e34603161457db9934a34f816d78b 53550e81e2cafe7c03a39526b95cd21b5194d9b1 2e3de6253422112ae43e608661ba94ea6b345694 3625b04991b4d6affadd99d377ab84bac48dfff4
 152016 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ca407c7246bf405da6d9b1b9d93e5e7f17b4b1f9 3c659044118e34603161457db9934a34f816d78b 5cc7a54c2e91d82cb6a52e4921325c511fd90712 2e3de6253422112ae43e608661ba94ea6b345694 1497e78068421d83956f8e82fb6e1bf1fc3b1199
 152030 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 6ff7c838d09224dd4e4c9b5b93152d8db1b19740 3c659044118e34603161457db9934a34f816d78b 86e8c353f705f14f2f2fd7a6195cefa431aa24d9 2e3de6253422112ae43e608661ba94ea6b345694 058023b343d4e366864831db46e9b438e9e3a178
 152017 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d8327496762b4f2a54c9bafd7a214314ec28e9e 3c659044118e34603161457db9934a34f816d78b 97f750becac33e3d3e446d3ff4ae9af2577b7877 6ada2285d9918859699c92e09540e023e0a16054 fb024b779336a0f73b3aee885b2ce082e812881f
 152013 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d8327496762b4f2a54c9bafd7a214314ec28e9e 3c659044118e34603161457db9934a34f816d78b 939ab64b400b9bec4b59795a87817784093e1acd 6ada2285d9918859699c92e09540e023e0a16054 fb024b779336a0f73b3aee885b2ce082e812881f
 151988 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 6ff53d2a13740e39dea110d6b3509c156c659586 3c659044118e34603161457db9934a34f816d78b b7bda69c4ef46c57480f6e378923f5215b122778 6ada2285d9918859699c92e09540e023e0a16054 f8fe3c07363d11fc81d8e7382dbcaa357c861569
 152024 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 3c659044118e34603161457db9934a34f816d78b 250bc43a406f7d46e319abe87c19548d4f027828 2e3de6253422112ae43e608661ba94ea6b345694 3371ced37ced359167b5a71abee2062854371323
 152018 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 239b50a863704f7960525799eda82de061c7c458 3c659044118e34603161457db9934a34f816d78b eefe34ea4b82c2b47abe28af4cc7247d51553626 2e3de6253422112ae43e608661ba94ea6b345694 25636ed707cf1211ce846c7ec58f8643e435d7a7
 151999 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d8327496762b4f2a54c9bafd7a214314ec28e9e 3c659044118e34603161457db9934a34f816d78b 97f750becac33e3d3e446d3ff4ae9af2577b7877 6ada2285d9918859699c92e09540e023e0a16054 fb024b779336a0f73b3aee885b2ce082e812881f
 152036 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8035edbe12f0f2a58e8fa9b06d05c8ee1c69ffae 3c659044118e34603161457db9934a34f816d78b 5d2f557b47dfbf8f23277a5bdd8473d4607c681a 2e3de6253422112ae43e608661ba94ea6b345694 51ca66c37371b10b378513af126646de22eddb17
 152025 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d8327496762b4f2a54c9bafd7a214314ec28e9e 3c659044118e34603161457db9934a34f816d78b 939ab64b400b9bec4b59795a87817784093e1acd 6ada2285d9918859699c92e09540e023e0a16054 fb024b779336a0f73b3aee885b2ce082e812881f
 152019 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 239b50a863704f7960525799eda82de061c7c458 3c659044118e34603161457db9934a34f816d78b 3f429a3400822141651486193d6af625eeab05a5 2e3de6253422112ae43e608661ba94ea6b345694 71ca0e0ad000e690899936327eb09709ab182ade
 152033 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 567bc4b4ae8a975791382dd30ac413bc0d3ce88c 3c659044118e34603161457db9934a34f816d78b 3575b0aea983ad57804c9af739ed8ff7bc168393 2e3de6253422112ae43e608661ba94ea6b345694 b91825f628c9a62cf2a3a0d972ea81484a8b7fce
 152020 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 58ae92a993687d913aa6dd00ef3497a1bc5f6c40 3c659044118e34603161457db9934a34f816d78b 54cdfe511219b8051046be55a6e156c4f08ff7ff 2e3de6253422112ae43e608661ba94ea6b345694 71ca0e0ad000e690899936327eb09709ab182ade
 152027 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3ee4f6cb360a877d171f2f9bb76b0d46d2cfa985 3c659044118e34603161457db9934a34f816d78b 9f1f264edbdf5516d6f208497310b3eedbc7b74c 2e3de6253422112ae43e608661ba94ea6b345694 2995d0afdf2d3fb44d07eada088db3613741db1e
 152021 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 a2433243fbe471c250d7eddc2c7da325d91265fd 3c659044118e34603161457db9934a34f816d78b 6675a653d2e57ab09c32c0ea7b44a1d6c40a7f58 2e3de6253422112ae43e608661ba94ea6b345694 3625b04991b4d6affadd99d377ab84bac48dfff4
 152028 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 e1d24410da356731da70b3334f86343e11e207d2 3c659044118e34603161457db9934a34f816d78b 470dd165d152ff7ceac61c7b71c2b89220b3aad7 2e3de6253422112ae43e608661ba94ea6b345694 6a49b9a7920c82015381740905582b666160d955
 152029 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 567bc4b4ae8a975791382dd30ac413bc0d3ce88c 3c659044118e34603161457db9934a34f816d78b eea8f5df4ecc607d64f091b8d916fcc11a697541 2e3de6253422112ae43e608661ba94ea6b345694 2995d0afdf2d3fb44d07eada088db3613741db1e
 152040 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d8327496762b4f2a54c9bafd7a214314ec28e9e 3c659044118e34603161457db9934a34f816d78b 9fc87111005e8903785db40819af66b8f85b8b96 6ada2285d9918859699c92e09540e023e0a16054 fb024b779336a0f73b3aee885b2ce082e812881f
 152035 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 6ff7c838d09224dd4e4c9b5b93152d8db1b19740 3c659044118e34603161457db9934a34f816d78b 49ee11555262a256afec592dfed7c5902d5eefd2 2e3de6253422112ae43e608661ba94ea6b345694 726c78d14dfe6ec76f5e4c7756821a91f0a04b34
 152026 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d8327496762b4f2a54c9bafd7a214314ec28e9e 3c659044118e34603161457db9934a34f816d78b 9fc87111005e8903785db40819af66b8f85b8b96 6ada2285d9918859699c92e09540e023e0a16054 fb024b779336a0f73b3aee885b2ce082e812881f
 152038 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ca407c7246bf405da6d9b1b9d93e5e7f17b4b1f9 3c659044118e34603161457db9934a34f816d78b 5cc7a54c2e91d82cb6a52e4921325c511fd90712 2e3de6253422112ae43e608661ba94ea6b345694 1497e78068421d83956f8e82fb6e1bf1fc3b1199
 152059 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 3c659044118e34603161457db9934a34f816d78b 81cb05732efb36971901c515b007869cc1d3a532 2e3de6253422112ae43e608661ba94ea6b345694 1251402caf8685f45d9d580f01583370f7e2d272
 152042 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8035edbe12f0f2a58e8fa9b06d05c8ee1c69ffae 3c659044118e34603161457db9934a34f816d78b 7d2410cea154bf915fb30179ebda3b17ac36e70e 2e3de6253422112ae43e608661ba94ea6b345694 780aba2779b834f19b2a6f0dcdea0e7e0b5e1622
 152044 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8035edbe12f0f2a58e8fa9b06d05c8ee1c69ffae 3c659044118e34603161457db9934a34f816d78b 175198ad91d8bac540159705873b4ffe4fb94eab 2e3de6253422112ae43e608661ba94ea6b345694 51ca66c37371b10b378513af126646de22eddb17
 152053 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 3c659044118e34603161457db9934a34f816d78b 157ed954e2dc8c2a4230d38058ca7f1fe50902e0 2e3de6253422112ae43e608661ba94ea6b345694 1251402caf8685f45d9d580f01583370f7e2d272
 152047 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 567bc4b4ae8a975791382dd30ac413bc0d3ce88c 3c659044118e34603161457db9934a34f816d78b 6b0eff1a4ea47c835a7d8bee88c05c47ada37495 2e3de6253422112ae43e608661ba94ea6b345694 2995d0afdf2d3fb44d07eada088db3613741db1e
 152050 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 3c659044118e34603161457db9934a34f816d78b da9630c57ee386f8beb571ba6bb4a98d546c42ca 2e3de6253422112ae43e608661ba94ea6b345694 1251402caf8685f45d9d580f01583370f7e2d272
 152051 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 3c659044118e34603161457db9934a34f816d78b 7d3660e79830a069f1848bb4fa1cdf8f666424fb 2e3de6253422112ae43e608661ba94ea6b345694 fec6a7af5c5760b9bccd9e7c3eaf29f0401af264
 152055 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 3c659044118e34603161457db9934a34f816d78b 75a6ed875ff0a2eb6b2971ae2098ed09963d7329 2e3de6253422112ae43e608661ba94ea6b345694 1251402caf8685f45d9d580f01583370f7e2d272
 152039 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d8327496762b4f2a54c9bafd7a214314ec28e9e 3c659044118e34603161457db9934a34f816d78b 9fc87111005e8903785db40819af66b8f85b8b96 6ada2285d9918859699c92e09540e023e0a16054 fb024b779336a0f73b3aee885b2ce082e812881f
 152056 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 3c659044118e34603161457db9934a34f816d78b 007d1dbf72536ec1b847a944832e4de1546af2ac 2e3de6253422112ae43e608661ba94ea6b345694 1251402caf8685f45d9d580f01583370f7e2d272
 152057 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 3c659044118e34603161457db9934a34f816d78b d6b78ac8ecf94f56dbfbecc23fb4365d8772a41a 2e3de6253422112ae43e608661ba94ea6b345694 1251402caf8685f45d9d580f01583370f7e2d272
 152062 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 3c659044118e34603161457db9934a34f816d78b 81cb05732efb36971901c515b007869cc1d3a532 2e3de6253422112ae43e608661ba94ea6b345694 1251402caf8685f45d9d580f01583370f7e2d272
 152063 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 3c659044118e34603161457db9934a34f816d78b d6b78ac8ecf94f56dbfbecc23fb4365d8772a41a 2e3de6253422112ae43e608661ba94ea6b345694 1251402caf8685f45d9d580f01583370f7e2d272
 152065 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 3c659044118e34603161457db9934a34f816d78b 81cb05732efb36971901c515b007869cc1d3a532 2e3de6253422112ae43e608661ba94ea6b345694 1251402caf8685f45d9d580f01583370f7e2d272
Searching for interesting versions
 Result found: flight 150631 (pass), for basis pass
 Result found: flight 152026 (fail), for basis failure
 Repro found: flight 152038 (pass), for basis pass
 Repro found: flight 152039 (fail), for basis failure
 0 revisions at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 3c659044118e34603161457db9934a34f816d78b d6b78ac8ecf94f56dbfbecc23fb4365d8772a41a 2e3de6253422112ae43e608661ba94ea6b345694 1251402caf8685f45d9d580f01583370f7e2d272
No revisions left to test, checking graph state.
 Result found: flight 152057 (pass), for last pass
 Result found: flight 152059 (fail), for first failure
 Repro found: flight 152060 (pass), for last pass
 Repro found: flight 152062 (fail), for first failure
 Repro found: flight 152063 (pass), for last pass
 Repro found: flight 152065 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  qemuu git://git.qemu.org/qemu.git
  Bug introduced:  81cb05732efb36971901c515b007869cc1d3a532
  Bug not present: d6b78ac8ecf94f56dbfbecc23fb4365d8772a41a
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/152065/


  commit 81cb05732efb36971901c515b007869cc1d3a532
  Author: Markus Armbruster <armbru@redhat.com>
  Date:   Tue Jun 9 14:23:37 2020 +0200
  
      qdev: Assert devices are plugged into a bus that can take them
      
      This would have caught some of the bugs I just fixed.
      
      Signed-off-by: Markus Armbruster <armbru@redhat.com>
      Reviewed-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
      Message-Id: <20200609122339.937862-23-armbru@redhat.com>

dot: graph is too large for cairo-renderer bitmaps. Scaling by 0.472733 to fit
pnmtopng: 219 colors found
Revision graph left in /home/logs/results/bisect/qemu-mainline/test-amd64-i386-xl-qemuu-ovmf-amd64.debian-hvm-install.{dot,ps,png,html,svg}.
----------------------------------------
152065: tolerable ALL FAIL

flight 152065 qemu-mainline real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/152065/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ovmf-amd64 10 debian-hvm-install fail baseline untested


jobs:
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Tue Jul 21 06:36:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jul 2020 06:36:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxltQ-00006H-DF; Tue, 21 Jul 2020 06:36:48 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=CrlD=BA=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jxltP-00006C-35
 for xen-devel@lists.xenproject.org; Tue, 21 Jul 2020 06:36:47 +0000
X-Inumbo-ID: 8b919e48-cb1c-11ea-a087-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 8b919e48-cb1c-11ea-a087-12813bfff9fa;
 Tue, 21 Jul 2020 06:36:45 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 40851B861;
 Tue, 21 Jul 2020 06:36:51 +0000 (UTC)
Subject: Re: [PATCH v3 1/2] x86: restore pv_rtc_handler() invocation
From: Jan Beulich <jbeulich@suse.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <5426dd6f-50cd-dc23-5c6b-0ab631d98d38@suse.com>
 <7dd4b668-06ca-807a-9cc1-77430b2376a8@suse.com>
 <20200715121347.GY7191@Air-de-Roger>
 <2b9de0fd-5973-ed66-868c-ffadca83edf3@suse.com>
 <20200715133217.GZ7191@Air-de-Roger>
 <cd08f928-2be9-314b-56e6-bb414247caff@suse.com>
 <20200715145144.GA7191@Air-de-Roger>
 <ff1926c7-cc21-03ad-1dff-53c703450151@suse.com>
 <01509d7d-4cf3-7f3f-4aa1-eaa3b1d3b95b@citrix.com>
 <e15eb2d0-800f-4fbd-6d58-8bceb408593f@suse.com>
Message-ID: <9d5c7a0d-e34a-9fe4-c24d-871c4b5cb3d8@suse.com>
Date: Tue, 21 Jul 2020 08:36:44 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <e15eb2d0-800f-4fbd-6d58-8bceb408593f@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu <wl@xen.org>,
 Paul Durrant <paul@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 20.07.2020 18:27, Jan Beulich wrote:
> On 20.07.2020 17:28, Andrew Cooper wrote:
>> On 16/07/2020 11:06, Jan Beulich wrote:
>>> ACCESS_ONCE() guarantees single access, but doesn't guarantee that
>>> the compiler wouldn't split this single access into multiple insns.
>>
>> ACCESS_ONCE() does guarantee single accesses for any natural integer size.
>>
>> There is a section about this specifically in Linux's
>> memory-barriers.txt, and this isn't the first time I've pointed it out...
> 
> There indeed is text stating this, but I can't find any word on
> why they believe this is the case. My understanding of volatile
> is that it guarantees no more (and also no less) accesses to
> any single storage location than indicated by the source. But
> it doesn't prevent "tearing" of accesses. And really, how could
> it, considering that volatile can also be applied to types that
> aren't basic ones, and hence in particular to ones that can't
> possibly be accessed by a single insn?

To pre-empt a possible objection that *_ONCE() only accepts scalar
types: even the more explicit logic in the Linux constructs
permits "long long". Yet (of course, I'm inclined to say) the
compiler makes no effort at all to carry out such a 64-bit
access as a single (atomic) insn on a 32-bit arch (i.e. cmpxchg8b
on ix86, if available). If there really were such a guarantee, it
surely would need to, or else diagnose that it can't.

Furthermore, I've looked at the current implementation of their
macros:

/*
 * Use __READ_ONCE() instead of READ_ONCE() if you do not require any
 * atomicity or dependency ordering guarantees. Note that this may result
 * in tears!
 */
#define __READ_ONCE(x)	(*(const volatile __unqual_scalar_typeof(x) *)&(x))

#define __READ_ONCE_SCALAR(x)						\
({									\
	__unqual_scalar_typeof(x) __x = __READ_ONCE(x);			\
	smp_read_barrier_depends();					\
	(typeof(x))__x;							\
})

#define READ_ONCE(x)							\
({									\
	compiletime_assert_rwonce_type(x);				\
	__READ_ONCE_SCALAR(x);						\
})

The difference between __READ_ONCE() and READ_ONCE() is
effectively just the smp_read_barrier_depends(), afaics. Hence,
to me, the "tears" in the comment can only refer to "tear
drops", not to "torn accesses". The comment ahead of
compiletime_assert_rwonce_type() is also "interesting":

/*
 * Yes, this permits 64-bit accesses on 32-bit architectures. These will
 * actually be atomic in some cases (namely Armv7 + LPAE), but for others we
 * rely on the access being split into 2x32-bit accesses for a 32-bit quantity
 * (e.g. a virtual address) and a strong prevailing wind.
 */

(I'm struggling to see what extra effects this construct has over
the type enforcement by __unqual_scalar_typeof().)

Jan


From xen-devel-bounces@lists.xenproject.org Tue Jul 21 06:56:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jul 2020 06:56:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxmCZ-0001sL-6D; Tue, 21 Jul 2020 06:56:35 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tByU=BA=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jxmCY-0001s1-0Q
 for xen-devel@lists.xenproject.org; Tue, 21 Jul 2020 06:56:34 +0000
X-Inumbo-ID: 4bea96a2-cb1f-11ea-a088-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4bea96a2-cb1f-11ea-a088-12813bfff9fa;
 Tue, 21 Jul 2020 06:56:26 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=Cn1DNHk1ba32Loq7dCtlkEU50r7AYRwB+HtIcLxi+UE=; b=5Q/OvnoRHzgPthFAxOQZVwpuF
 CmOjArECo5bsgEMyB/8fIqiNRJIRAJQNn/bWDii6Tk3OEnK8vLdvxr1gASKfL9XvPI1gOdxTMlG6Z
 YGe6NlUI+mGjMEBcP+3SBQfSrVuzNoujFCrJilPyw9/ALz/LFnaEOR4WQKVeTvuhY6qrQ=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jxmCQ-0006cL-1s; Tue, 21 Jul 2020 06:56:26 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jxmCP-00088s-P6; Tue, 21 Jul 2020 06:56:25 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jxmCP-0007gD-OL; Tue, 21 Jul 2020 06:56:25 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-152045-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 152045: tolerable trouble: fail/pass/starved -
 PUSHED
X-Osstest-Failures: xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 xen-unstable:test-amd64-amd64-qemuu-freebsd11-amd64:hosts-allocate:starved:nonblocking
 xen-unstable:test-amd64-amd64-qemuu-freebsd12-amd64:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This: xen=8c4532f19d6925538fb0c938f7de9a97da8c5c3b
X-Osstest-Versions-That: xen=fb024b779336a0f73b3aee885b2ce082e812881f
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 21 Jul 2020 06:56:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 152045 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/152045/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 152031
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 152031
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 152031
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 152031
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 152031
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 152031
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 152031
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 152031
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop             fail like 152031
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-amd64-amd64-qemuu-freebsd11-amd64  2 hosts-allocate           starved n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  2 hosts-allocate           starved n/a

version targeted for testing:
 xen                  8c4532f19d6925538fb0c938f7de9a97da8c5c3b
baseline version:
 xen                  fb024b779336a0f73b3aee885b2ce082e812881f

Last test of basis   152031  2020-07-20 01:57:34 Z    1 days
Testing same since   152045  2020-07-20 13:36:39 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Michal Leszczynski <michal.leszczynski@cert.pl>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       starved 
 test-amd64-amd64-qemuu-freebsd12-amd64                       starved 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   fb024b7793..8c4532f19d  8c4532f19d6925538fb0c938f7de9a97da8c5c3b -> master


From xen-devel-bounces@lists.xenproject.org Tue Jul 21 07:09:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jul 2020 07:09:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxmOl-0002sc-Bv; Tue, 21 Jul 2020 07:09:11 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tByU=BA=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jxmOj-0002sI-Fx
 for xen-devel@lists.xenproject.org; Tue, 21 Jul 2020 07:09:09 +0000
X-Inumbo-ID: 0e67a0ca-cb21-11ea-a08a-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0e67a0ca-cb21-11ea-a08a-12813bfff9fa;
 Tue, 21 Jul 2020 07:09:02 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=LiUN3/0udrFNW6f+LPagfn1VM+6YUkVoN89sqRuCPUs=; b=bQaX548DXBy15oaUnk58J93rL
 SMY+xFpaSwfvfogkhGoGAn0bWeHmk8Lva+Gg8TA364guHLcwtoCjNqAHGzPOp1xTS+n+nbXfPcXsg
 kSIaRmpKMH0bQ7/bO7cIFlUYE7RAHpLan3C86+LDg0PbVpEmNCpyH4FnWxBp+XkTSNLNs=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jxmOc-0006t3-6q; Tue, 21 Jul 2020 07:09:02 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jxmOb-0000IR-HL; Tue, 21 Jul 2020 07:09:01 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jxmOb-0001c0-Ge; Tue, 21 Jul 2020 07:09:01 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-152048-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 152048: all pass - PUSHED
X-Osstest-Versions-This: ovmf=cb38ace647231076acfc0c5bdd21d3aff43e4f83
X-Osstest-Versions-That: ovmf=3d9d66ad760b67bfdfb5b4b8e9b34f6af6c45935
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 21 Jul 2020 07:09:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 152048 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/152048/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 cb38ace647231076acfc0c5bdd21d3aff43e4f83
baseline version:
 ovmf                 3d9d66ad760b67bfdfb5b4b8e9b34f6af6c45935

Last test of basis   152037  2020-07-20 07:09:58 Z    0 days
Testing same since   152048  2020-07-20 15:10:26 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  KrishnadasX Veliyathuparambil Prakashan <krishnadasx.veliyathuparambil.prakashan@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   3d9d66ad76..cb38ace647  cb38ace647231076acfc0c5bdd21d3aff43e4f83 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Tue Jul 21 07:09:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jul 2020 07:09:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxmP4-0002ud-Pj; Tue, 21 Jul 2020 07:09:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=JDR4=BA=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1jxmP3-0002uT-VW
 for xen-devel@lists.xenproject.org; Tue, 21 Jul 2020 07:09:30 +0000
X-Inumbo-ID: 1de5cd89-cb21-11ea-84f2-bc764e2007e4
Received: from mail-wm1-x32b.google.com (unknown [2a00:1450:4864:20::32b])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1de5cd89-cb21-11ea-84f2-bc764e2007e4;
 Tue, 21 Jul 2020 07:09:28 +0000 (UTC)
Received: by mail-wm1-x32b.google.com with SMTP id f18so1762934wml.3
 for <xen-devel@lists.xenproject.org>; Tue, 21 Jul 2020 00:09:28 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
 :mime-version:content-transfer-encoding:thread-index
 :content-language;
 bh=HzrjnWe/4mTstvWu1GALcbzpv4WHCbnOVotodTJVkCM=;
 b=WwbLMiyjAj5o7tOzLMsEG7Icofdsxm5YgoUf0FNdCOz7n+IkFEhZ5qO6BSo7dBJwpH
 nf+j7NlAPlLV3dyFhiOPvBR96OMoUREx8wGe09Q1wtNvJHcT+UmlhDwxKRoepNgyuyh4
 7A5vnGBy6gvHqtcZMdCaOV6e0wTCxcDTyKST5pqQ+kpKgdmMlydKIHoFNS/B3Ms41Az5
 4r1H4f4uFjbDak9fbYUl0knIrYTyAbm7WXPmucKrb+OldzHAHpBJH6lcmTS2veE6uX38
 67F7/HDjpmOFer2UymyMdJNVYFmOKdtP9vHsajbtqUmMvWBL0LElJ9TMi7ZuAHvztgFc
 SPkw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
 :subject:date:message-id:mime-version:content-transfer-encoding
 :thread-index:content-language;
 bh=HzrjnWe/4mTstvWu1GALcbzpv4WHCbnOVotodTJVkCM=;
 b=YtQv7+Un+2AvZm/dx9A4Z16tyBAT6BQ7u9KJWwGJxvfcyKUzkd/9yaZWKvHVgJArSl
 e+ZrRMZ9hMAVmwskISYtBcWRLRQ4gkpNXacV/7SgIokLvzzRu6tT2AR8xoqP3qWWWfXV
 SF8uY+gHojRDJ3fDaUBXkABJdFPxmPzqIpgrnMZDNTXEs30+zSC7iY2bqW7yr11SwxWZ
 iHMg/wjTzDzEt45A11Ea5+bRSCbZWTWULo1wmnlOTfa4sB4Wb4JsErpL0VRSC/R6wU3G
 CyfRzhpFpH4imqSd8o050i89G1Wq0Pemekd8l0GL2UMoVLJ1WzHnaWpZBxgXe2tzw978
 SmPg==
X-Gm-Message-State: AOAM5307IfdV7ldZ9ix52BAvMeyP7xFJdL3SvREPZJwqnraTEMUNxu4B
 HrrkoianJgpJ6A8NAJvDZ90=
X-Google-Smtp-Source: ABdhPJzIVeblSd0ad0JtzAaGVDQgtGzIwl6Al+M/ZW2KRDoZO7m2p7OjNuYzfdX5DzV7vuNzGekxwQ==
X-Received: by 2002:a1c:9c0b:: with SMTP id f11mr2640108wme.0.1595315368039;
 Tue, 21 Jul 2020 00:09:28 -0700 (PDT)
Received: from CBGR90WXYV0 (54-240-197-235.amazon.com. [54.240.197.235])
 by smtp.gmail.com with ESMTPSA id y16sm37353587wro.71.2020.07.21.00.09.26
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Tue, 21 Jul 2020 00:09:27 -0700 (PDT)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
To: "'Andrew Cooper'" <andrew.cooper3@citrix.com>,
 "'Julien Grall'" <julien@xen.org>, <xen-devel@lists.xenproject.org>
References: <20200720173635.1571-1-julien@xen.org>
 <4cc580c5-146f-6f83-bd91-a798763c261b@citrix.com>
 <627851f2-d28e-5c3b-6f1f-882e9eb02ed4@xen.org>
 <aae69fa5-4aee-781d-2f52-291d8fa948bd@citrix.com>
In-Reply-To: <aae69fa5-4aee-781d-2f52-291d8fa948bd@citrix.com>
Subject: RE: [PATCH] SUPPORT.md: Spell correctly Experimental
Date: Tue, 21 Jul 2020 08:09:26 +0100
Message-ID: <003801d65f2d$df17bc10$9d473430$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="utf-8"
Content-Transfer-Encoding: 7bit
X-Mailer: Microsoft Outlook 16.0
Thread-Index: AQIJA8r+zryR5zyBmuKSqGVsBFT/MgKv6aA7Adqhqd8Burtywah6KbBQ
Content-Language: en-gb
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Reply-To: paul@xen.org
Cc: 'Stefano Stabellini' <sstabellini@kernel.org>, 'Wei Liu' <wl@xen.org>,
 'Julien Grall' <jgrall@amazon.com>, 'Ian Jackson' <ian.jackson@eu.citrix.com>,
 'George Dunlap' <george.dunlap@citrix.com>, 'Jan Beulich' <jbeulich@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> -----Original Message-----
> From: Andrew Cooper <andrew.cooper3@citrix.com>
> Sent: 20 July 2020 18:49
> To: Julien Grall <julien@xen.org>; xen-devel@lists.xenproject.org
> Cc: paul@xen.org; Julien Grall <jgrall@amazon.com>; George Dunlap <george.dunlap@citrix.com>; Ian
> Jackson <ian.jackson@eu.citrix.com>; Jan Beulich <jbeulich@suse.com>; Stefano Stabellini
> <sstabellini@kernel.org>; Wei Liu <wl@xen.org>
> Subject: Re: [PATCH] SUPPORT.md: Spell correctly Experimental
> 
> On 20/07/2020 18:48, Julien Grall wrote:
> >
> > On 20/07/2020 18:45, Andrew Cooper wrote:
> >> On 20/07/2020 18:36, Julien Grall wrote:
> >>> From: Julien Grall <jgrall@amazon.com>
> >>>
> >>> Signed-off-by: Julien Grall <jgrall@amazon.com>
> >>
> >> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
> >>
> >> Although I'd suggest the subject be rearranged to "Spell
> >> Experimentally correctly".
> >
> > Did you intend to write "Experimental" rather than "Experimentally"?
> 
> Erm, yes :)
> 

Since this is a small docs change...

Release-acked-by: Paul Durrant <paul@xen.org>

...and please commit to staging-4.14 a.s.a.p.



From xen-devel-bounces@lists.xenproject.org Tue Jul 21 07:13:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jul 2020 07:13:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxmT1-0003mp-Bo; Tue, 21 Jul 2020 07:13:35 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=JDR4=BA=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1jxmSz-0003mk-RJ
 for xen-devel@lists.xenproject.org; Tue, 21 Jul 2020 07:13:33 +0000
X-Inumbo-ID: aea436ca-cb21-11ea-84f2-bc764e2007e4
Received: from mail-wm1-x343.google.com (unknown [2a00:1450:4864:20::343])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id aea436ca-cb21-11ea-84f2-bc764e2007e4;
 Tue, 21 Jul 2020 07:13:31 +0000 (UTC)
Received: by mail-wm1-x343.google.com with SMTP id a6so1536203wmm.0
 for <xen-devel@lists.xenproject.org>; Tue, 21 Jul 2020 00:13:31 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
 :mime-version:content-transfer-encoding:thread-index
 :content-language;
 bh=1zOyzlvQdyxWgnZZQ0hywFqWArqZleRc5bXhT9qSuS4=;
 b=RL2gAME52lKeIPtlTRzlWDqiY7xt3XTA9FJi1ETsJLsQFU24HqouvwQ8D9HdX3pz/0
 i4gRftwszHj8Cc9ZD1fiGVyLocNB93LuwkF2UKE6LmcrinJVo2Vk+iBFuIhl2qILbg6j
 0/imc3QFxQbfynZNhtSrXBlEPbU1PFhhE5LtOyZt3z3L++anV2xu6/VE5vgSrcaTYPvZ
 EKXZuH3vztthCZ/81cvPMuh8Z5j/nVA0xrr8C5GCuv9RsXRHn6lRxSgY7HjeEGh3cNpJ
 uG4vJFuxoqtYKdZ0rMPtwE2DlQFEQId0CUfENhHbGz5boA983woB6g/1wRQEYtAfghqd
 Z1SA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
 :subject:date:message-id:mime-version:content-transfer-encoding
 :thread-index:content-language;
 bh=1zOyzlvQdyxWgnZZQ0hywFqWArqZleRc5bXhT9qSuS4=;
 b=m1cCHxNctKci15azsIGDoD/KzvEqG+IzfPzmnrDwS8NEhlJA644kwmcxpuwBH6POb1
 /LdqyvV4R8vwqrXFX1a5RrV0/wf9Je+GogbTYIrz+z9oTjS6IypyVimnrDzB/3Gig06X
 gQgS3A7H6yW0vqB5Iag1AI0sfOCfF0ZhpEpcAK7jKmpWiJng1MHbYEF54xcDsXITs2JF
 tjNUUnv5OishiCCdgxPm5ZbCq0uznkk9GKnzfv2hxG6EW05PU+V9JhZSyz4ARnnlEWGX
 rN7Hnx3SvEOMjF4b8g2LEK17x2TzbyHxbvfHwYiR5r80BW6awnNXyeTvcXruTFKXrvGw
 DbRg==
X-Gm-Message-State: AOAM533fpAYJ8ddE4sJED5ceCGcdlcYpAnlnm2JrI5hB9e1n68GcP61r
 tNPmQumXlq5gUYVzt2J8ZKU=
X-Google-Smtp-Source: ABdhPJwIa9D5h99Pspxx6sqWFt9kA4aSWCLXq959QPvtkxSLAafIrJ6L3O2MxRJC79oag3KW0c9OWw==
X-Received: by 2002:a05:600c:22d3:: with SMTP id
 19mr2665661wmg.48.1595315610767; 
 Tue, 21 Jul 2020 00:13:30 -0700 (PDT)
Received: from CBGR90WXYV0 (54-240-197-235.amazon.com. [54.240.197.235])
 by smtp.gmail.com with ESMTPSA id b10sm2153053wmj.30.2020.07.21.00.13.29
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Tue, 21 Jul 2020 00:13:30 -0700 (PDT)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
To: "'Nick Rosbrook'" <rosbrookn@gmail.com>, <xen-devel@lists.xenproject.org>
References: <d406ae82e0cdde2dc33a92d2685ffb77bacab7ee.1595289055.git.rosbrookn@ainfosec.com>
In-Reply-To: <d406ae82e0cdde2dc33a92d2685ffb77bacab7ee.1595289055.git.rosbrookn@ainfosec.com>
Subject: RE: [PATCH for-4.14] golang/xenlight: fix code generation for python
 2.6
Date: Tue, 21 Jul 2020 08:13:28 +0100
Message-ID: <003901d65f2e$6faab0c0$4f001240$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="us-ascii"
Content-Transfer-Encoding: 7bit
X-Mailer: Microsoft Outlook 16.0
Thread-Index: AQJO5yDcExPNZQbG/t5S46VRX2lP66ggjaDw
Content-Language: en-gb
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Reply-To: paul@xen.org
Cc: 'Nick Rosbrook' <rosbrookn@ainfosec.com>,
 'Ian Jackson' <ian.jackson@eu.citrix.com>,
 'George Dunlap' <george.dunlap@citrix.com>, 'Wei Liu' <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> -----Original Message-----
> From: Nick Rosbrook <rosbrookn@gmail.com>
> Sent: 21 July 2020 00:55
> To: xen-devel@lists.xenproject.org
> Cc: paul@xen.org; Nick Rosbrook <rosbrookn@ainfosec.com>; George Dunlap <george.dunlap@citrix.com>;
> Ian Jackson <ian.jackson@eu.citrix.com>; Wei Liu <wl@xen.org>
> Subject: [PATCH for-4.14] golang/xenlight: fix code generation for python 2.6
> 
> Before python 2.7, str.format() calls required that the format fields
> were explicitly enumerated, e.g.:
> 
>   '{0} {1}'.format(foo, bar)
> 
>   vs.
> 
>   '{} {}'.format(foo, bar)
> 
> Currently, gengotypes.py uses the latter pattern everywhere, which means
> the Go bindings do not build on python 2.6. Use the 2.6 syntax for
> format() in order to support python 2.6 for now.
> 
> Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com>

I'm afraid this is too late for 4.14 now. We are in hard freeze, so only minor docs changes or critical bug fixes are being taken at
this point.

  Paul
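
[Archive note: the incompatibility the commit message describes can be illustrated with a minimal sketch. The path literal below is only an example; on Python 2.7 and later both forms work, while on 2.6 the auto-numbered form raises "ValueError: zero length field name in format".]

```python
# Explicitly numbered fields are accepted by Python 2.6 and later.
explicit = '{0}/tools/libxl'.format('/path/to/xen')

# Auto-numbered fields were only added in Python 2.7; on 2.6 this
# line raises ValueError at runtime.
auto = '{}/tools/libxl'.format('/path/to/xen')

assert explicit == auto == '/path/to/xen/tools/libxl'
```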

> ---
>  tools/golang/xenlight/gengotypes.py | 204 ++++++++++++++--------------
>  1 file changed, 102 insertions(+), 102 deletions(-)
> 
> diff --git a/tools/golang/xenlight/gengotypes.py b/tools/golang/xenlight/gengotypes.py
> index 557fecd07b..ebec938224 100644
> --- a/tools/golang/xenlight/gengotypes.py
> +++ b/tools/golang/xenlight/gengotypes.py
> @@ -3,7 +3,7 @@
>  import os
>  import sys
> 
> -sys.path.append('{}/tools/libxl'.format(os.environ['XEN_ROOT']))
> +sys.path.append('{0}/tools/libxl'.format(os.environ['XEN_ROOT']))
>  import idl
> 
>  # Go versions of some builtin types.
> @@ -73,14 +73,14 @@ def xenlight_golang_define_enum(ty = None):
> 
>      if ty.typename is not None:
>          typename = xenlight_golang_fmt_name(ty.typename)
> -        s += 'type {} int\n'.format(typename)
> +        s += 'type {0} int\n'.format(typename)
> 
>      # Start const block
>      s += 'const(\n'
> 
>      for v in ty.values:
>          name = xenlight_golang_fmt_name(v.name)
> -        s += '{} {} = {}\n'.format(name, typename, v.value)
> +        s += '{0} {1} = {2}\n'.format(name, typename, v.value)
> 
>      # End const block
>      s += ')\n'
> @@ -99,9 +99,9 @@ def xenlight_golang_define_struct(ty = None, typename = None, nested = False):
> 
>      # Begin struct definition
>      if nested:
> -        s += '{} struct {{\n'.format(name)
> +        s += '{0} struct {{\n'.format(name)
>      else:
> -        s += 'type {} struct {{\n'.format(name)
> +        s += 'type {0} struct {{\n'.format(name)
> 
>      # Write struct fields
>      for f in ty.fields:
> @@ -111,13 +111,13 @@ def xenlight_golang_define_struct(ty = None, typename = None, nested = False):
>                  typename = xenlight_golang_fmt_name(typename)
>                  name     = xenlight_golang_fmt_name(f.name)
> 
> -                s += '{} []{}\n'.format(name, typename)
> +                s += '{0} []{1}\n'.format(name, typename)
>              else:
>                  typename = f.type.typename
>                  typename = xenlight_golang_fmt_name(typename)
>                  name     = xenlight_golang_fmt_name(f.name)
> 
> -                s += '{} {}\n'.format(name, typename)
> +                s += '{0} {1}\n'.format(name, typename)
> 
>          elif isinstance(f.type, idl.Struct):
>              r = xenlight_golang_define_struct(f.type, typename=f.name, nested=True)
> @@ -132,7 +132,7 @@ def xenlight_golang_define_struct(ty = None, typename = None, nested = False):
>              extras.extend(r[1])
> 
>          else:
> -            raise Exception('type {} not supported'.format(f.type))
> +            raise Exception('type {0} not supported'.format(f.type))
> 
>      # End struct definition
>      s += '}\n'
> @@ -151,11 +151,11 @@ def xenlight_golang_define_union(ty = None, struct_name = '', union_name = ''):
>      s = ''
>      extras = []
> 
> -    interface_name = '{}_{}_union'.format(struct_name, ty.keyvar.name)
> +    interface_name = '{0}_{1}_union'.format(struct_name, ty.keyvar.name)
>      interface_name = xenlight_golang_fmt_name(interface_name, exported=False)
> 
> -    s += 'type {} interface {{\n'.format(interface_name)
> -    s += 'is{}()\n'.format(interface_name)
> +    s += 'type {0} interface {{\n'.format(interface_name)
> +    s += 'is{0}()\n'.format(interface_name)
>      s += '}\n'
> 
>      extras.append(s)
> @@ -165,7 +165,7 @@ def xenlight_golang_define_union(ty = None, struct_name = '', union_name = ''):
>              continue
> 
>          # Define struct
> -        name = '{}_{}_union_{}'.format(struct_name, ty.keyvar.name, f.name)
> +        name = '{0}_{1}_union_{2}'.format(struct_name, ty.keyvar.name, f.name)
>          r = xenlight_golang_define_struct(f.type, typename=name)
>          extras.append(r[0])
>          extras.extend(r[1])
> @@ -173,21 +173,21 @@ def xenlight_golang_define_union(ty = None, struct_name = '', union_name = ''):
>          # This typeof trick ensures that the fields used in the cgo struct
>          # used for marshaling are the same as the fields of the union in the
>          # actual C type, and avoids re-defining all of those fields.
> -        s = 'typedef typeof(((struct {} *)NULL)->{}.{}){};'
> +        s = 'typedef typeof(((struct {0} *)NULL)->{1}.{2}){3};'
>          s = s.format(struct_name, union_name, f.name, name)
>          cgo_helpers_preamble.append(s)
> 
>          # Define function to implement 'union' interface
>          name = xenlight_golang_fmt_name(name)
> -        s = 'func (x {}) is{}(){{}}\n'.format(name, interface_name)
> +        s = 'func (x {0}) is{1}(){{}}\n'.format(name, interface_name)
>          extras.append(s)
> 
>      fname = xenlight_golang_fmt_name(ty.keyvar.name)
>      ftype = xenlight_golang_fmt_name(ty.keyvar.type.typename)
> -    s = '{} {}\n'.format(fname, ftype)
> +    s = '{0} {1}\n'.format(fname, ftype)
> 
> -    fname = xenlight_golang_fmt_name('{}_union'.format(ty.keyvar.name))
> -    s += '{} {}\n'.format(fname, interface_name)
> +    fname = xenlight_golang_fmt_name('{0}_union'.format(ty.keyvar.name))
> +    s += '{0} {1}\n'.format(fname, interface_name)
> 
>      return (s,extras)
> 
> @@ -243,7 +243,7 @@ def xenlight_golang_define_from_C(ty = None):
>      Define the fromC marshaling function for the type
>      represented by ty.
>      """
> -    func = 'func (x *{}) fromC(xc *C.{}) error {{\n {}\n return nil}}\n'
> +    func = 'func (x *{0}) fromC(xc *C.{1}) error {{\n {2}\n return nil}}\n'
> 
>      goname = xenlight_golang_fmt_name(ty.typename)
>      cname  = ty.typename
> @@ -271,7 +271,7 @@ def xenlight_golang_define_from_C(ty = None):
>              extras.extend(r[1])
> 
>          else:
> -            raise Exception('type {} not supported'.format(f.type))
> +            raise Exception('type {0} not supported'.format(f.type))
> 
>      return (func.format(goname, cname, body), extras)
> 
> @@ -300,8 +300,8 @@ def xenlight_golang_convert_from_C(ty = None, outer_name = None, cvarname = None
> 
>      # If outer_name is set, treat this as nested.
>      if outer_name is not None:
> -        goname = '{}.{}'.format(xenlight_golang_fmt_name(outer_name), goname)
> -        cname  = '{}.{}'.format(outer_name, cname)
> +        goname = '{0}.{1}'.format(xenlight_golang_fmt_name(outer_name), goname)
> +        cname  = '{0}.{1}'.format(outer_name, cname)
> 
>      # Types that satisfy this condition can be easily casted or
>      # converted to a Go builtin type.
> @@ -312,15 +312,15 @@ def xenlight_golang_convert_from_C(ty = None, outer_name = None, cvarname = None
>      if not is_castable:
>          # If the type is not castable, we need to call its fromC
>          # function.
> -        s += 'if err := x.{}.fromC(&{}.{});'.format(goname,cvarname,cname)
> -        s += 'err != nil {{\nreturn fmt.Errorf("converting field {}: %v", err)\n}}\n'.format(goname)
> +        s += 'if err := x.{0}.fromC(&{1}.{2});'.format(goname,cvarname,cname)
> +        s += 'err != nil {{\nreturn fmt.Errorf("converting field {0}: %v", err)\n}}\n'.format(goname)
> 
>      elif gotypename == 'string':
>          # Use the cgo helper for converting C strings.
> -        s += 'x.{} = C.GoString({}.{})\n'.format(goname,cvarname,cname)
> +        s += 'x.{0} = C.GoString({1}.{2})\n'.format(goname,cvarname,cname)
> 
>      else:
> -        s += 'x.{} = {}({}.{})\n'.format(goname,gotypename,cvarname,cname)
> +        s += 'x.{0} = {1}({2}.{3})\n'.format(goname,gotypename,cvarname,cname)
> 
>      return s
> 
> @@ -331,9 +331,9 @@ def xenlight_golang_union_from_C(ty = None, union_name = '', struct_name = ''):
>      gokeyname = xenlight_golang_fmt_name(keyname)
>      keytype   = ty.keyvar.type.typename
>      gokeytype = xenlight_golang_fmt_name(keytype)
> -    field_name = xenlight_golang_fmt_name('{}_union'.format(keyname))
> +    field_name = xenlight_golang_fmt_name('{0}_union'.format(keyname))
> 
> -    interface_name = '{}_{}_union'.format(struct_name, keyname)
> +    interface_name = '{0}_{1}_union'.format(struct_name, keyname)
>      interface_name = xenlight_golang_fmt_name(interface_name, exported=False)
> 
>      cgo_keyname = keyname
> @@ -343,7 +343,7 @@ def xenlight_golang_union_from_C(ty = None, union_name = '', struct_name = ''):
>      cases = {}
> 
>      for f in ty.fields:
> -        val = '{}_{}'.format(keytype, f.name)
> +        val = '{0}_{1}'.format(keytype, f.name)
>          val = xenlight_golang_fmt_name(val)
> 
>          # Add to list of cases to make for the switch
> @@ -354,17 +354,17 @@ def xenlight_golang_union_from_C(ty = None, union_name = '', struct_name = ''):
>              continue
> 
>          # Define fromC func for 'union' struct.
> -        typename   = '{}_{}_union_{}'.format(struct_name,keyname,f.name)
> +        typename   = '{0}_{1}_union_{2}'.format(struct_name,keyname,f.name)
>          gotypename = xenlight_golang_fmt_name(typename)
> 
>          # Define the function here. The cases for keyed unions are a little
>          # different.
> -        s = 'func (x *{}) fromC(xc *C.{}) error {{\n'.format(gotypename,struct_name)
> -        s += 'if {}(xc.{}) != {} {{\n'.format(gokeytype,cgo_keyname,val)
> -        err_string = '"expected union key {}"'.format(val)
> -        s += 'return errors.New({})\n'.format(err_string)
> +        s = 'func (x *{0}) fromC(xc *C.{1}) error {{\n'.format(gotypename,struct_name)
> +        s += 'if {0}(xc.{1}) != {2} {{\n'.format(gokeytype,cgo_keyname,val)
> +        err_string = '"expected union key {0}"'.format(val)
> +        s += 'return errors.New({0})\n'.format(err_string)
>          s += '}\n\n'
> -        s += 'tmp := (*C.{})(unsafe.Pointer(&xc.{}[0]))\n'.format(typename,union_name)
> +        s += 'tmp := (*C.{0})(unsafe.Pointer(&xc.{1}[0]))\n'.format(typename,union_name)
> 
>          for nf in f.type.fields:
>              s += xenlight_golang_convert_from_C(nf,cvarname='tmp')
> @@ -374,35 +374,35 @@ def xenlight_golang_union_from_C(ty = None, union_name = '', struct_name = ''):
> 
>          extras.append(s)
> 
> -    s = 'x.{} = {}(xc.{})\n'.format(gokeyname,gokeytype,cgo_keyname)
> -    s += 'switch x.{}{{\n'.format(gokeyname)
> +    s = 'x.{0} = {1}(xc.{2})\n'.format(gokeyname,gokeytype,cgo_keyname)
> +    s += 'switch x.{0}{{\n'.format(gokeyname)
> 
>      # Create switch statement to determine which 'union element'
>      # to populate in the Go struct.
>      for case_name, case_tuple in sorted(cases.items()):
>          (case_val, case_type) = case_tuple
> 
> -        s += 'case {}:\n'.format(case_val)
> +        s += 'case {0}:\n'.format(case_val)
> 
>          if case_type is None:
> -            s += "x.{} = nil\n".format(field_name)
> +            s += "x.{0} = nil\n".format(field_name)
>              continue
> 
> -        gotype = '{}_{}_union_{}'.format(struct_name,keyname,case_name)
> +        gotype = '{0}_{1}_union_{2}'.format(struct_name,keyname,case_name)
>          gotype = xenlight_golang_fmt_name(gotype)
> -        goname = '{}_{}'.format(keyname,case_name)
> +        goname = '{0}_{1}'.format(keyname,case_name)
>          goname = xenlight_golang_fmt_name(goname,exported=False)
> 
> -        s += 'var {} {}\n'.format(goname, gotype)
> -        s += 'if err := {}.fromC(xc);'.format(goname)
> -        s += 'err != nil {{\n return fmt.Errorf("converting field {}: %v", err)\n}}\n'.format(goname)
> +        s += 'var {0} {1}\n'.format(goname, gotype)
> +        s += 'if err := {0}.fromC(xc);'.format(goname)
> +        s += 'err != nil {{\n return fmt.Errorf("converting field {0}: %v", err)\n}}\n'.format(goname)
> 
> -        s += 'x.{} = {}\n'.format(field_name, goname)
> +        s += 'x.{0} = {1}\n'.format(field_name, goname)
> 
>      # End switch statement
>      s += 'default:\n'
> -    err_string = '"invalid union key \'%v\'", x.{}'.format(gokeyname)
> -    s += 'return fmt.Errorf({})'.format(err_string)
> +    err_string = '"invalid union key \'%v\'", x.{0}'.format(gokeyname)
> +    s += 'return fmt.Errorf({0})'.format(err_string)
>      s += '}\n'
> 
>      return (s,extras)
> @@ -420,22 +420,22 @@ def xenlight_golang_array_from_C(ty = None):
>      goname     = xenlight_golang_fmt_name(ty.name)
>      ctypename  = ty.type.elem_type.typename
>      cname      = ty.name
> -    cslice     = 'c{}'.format(goname)
> +    cslice     = 'c{0}'.format(goname)
>      clenvar    = ty.type.lenvar.name
> 
> -    s += 'x.{} = nil\n'.format(goname)
> -    s += 'if n := int(xc.{}); n > 0 {{\n'.format(clenvar)
> -    s += '{} := '.format(cslice)
> -    s +='(*[1<<28]C.{})(unsafe.Pointer(xc.{}))[:n:n]\n'.format(ctypename, cname)
> -    s += 'x.{} = make([]{}, n)\n'.format(goname, gotypename)
> -    s += 'for i, v := range {} {{\n'.format(cslice)
> +    s += 'x.{0} = nil\n'.format(goname)
> +    s += 'if n := int(xc.{0}); n > 0 {{\n'.format(clenvar)
> +    s += '{0} := '.format(cslice)
> +    s +='(*[1<<28]C.{0})(unsafe.Pointer(xc.{1}))[:n:n]\n'.format(ctypename, cname)
> +    s += 'x.{0} = make([]{1}, n)\n'.format(goname, gotypename)
> +    s += 'for i, v := range {0} {{\n'.format(cslice)
> 
>      is_enum = isinstance(ty.type.elem_type,idl.Enumeration)
>      if gotypename in go_builtin_types or is_enum:
> -        s += 'x.{}[i] = {}(v)\n'.format(goname, gotypename)
> +        s += 'x.{0}[i] = {1}(v)\n'.format(goname, gotypename)
>      else:
> -        s += 'if err := x.{}[i].fromC(&v); err != nil {{\n'.format(goname)
> -        s += 'return fmt.Errorf("converting field {}: %v", err) }}\n'.format(goname)
> +        s += 'if err := x.{0}[i].fromC(&v); err != nil {{\n'.format(goname)
> +        s += 'return fmt.Errorf("converting field {0}: %v", err) }}\n'.format(goname)
> 
>      s += '}\n}\n'
> 
> @@ -446,11 +446,11 @@ def xenlight_golang_define_to_C(ty = None, typename = None, nested = False):
>      Define the toC marshaling function for the type
>      represented by ty.
>      """
> -    func = 'func (x *{}) toC(xc *C.{}) (err error){{{}\n return nil\n }}\n'
> +    func = 'func (x *{0}) toC(xc *C.{1}) (err error){{{2}\n return nil\n }}\n'
>      body = ''
> 
>      if ty.dispose_fn is not None:
> -        body += 'defer func(){{\nif err != nil{{\nC.{}(xc)}}\n}}()\n\n'.format(ty.dispose_fn)
> +        body += 'defer func(){{\nif err != nil{{\nC.{0}(xc)}}\n}}()\n\n'.format(ty.dispose_fn)
> 
>      goname = xenlight_golang_fmt_name(ty.typename)
>      cname  = ty.typename
> @@ -471,7 +471,7 @@ def xenlight_golang_define_to_C(ty = None, typename = None, nested = False):
>              body += xenlight_golang_union_to_C(f.type, f.name, ty.typename)
> 
>          else:
> -            raise Exception('type {} not supported'.format(f.type))
> +            raise Exception('type {0} not supported'.format(f.type))
> 
>      return func.format(goname, cname, body)
> 
> @@ -506,26 +506,26 @@ def xenlight_golang_convert_to_C(ty = None, outer_name = None,
> 
>      # If outer_name is set, treat this as nested.
>      if outer_name is not None:
> -        goname = '{}.{}'.format(xenlight_golang_fmt_name(outer_name), goname)
> -        cname  = '{}.{}'.format(outer_name, cname)
> +        goname = '{0}.{1}'.format(xenlight_golang_fmt_name(outer_name), goname)
> +        cname  = '{0}.{1}'.format(outer_name, cname)
> 
>      is_castable = (ty.type.json_parse_type == 'JSON_INTEGER' or
>                     isinstance(ty.type, idl.Enumeration) or
>                     gotypename in go_builtin_types)
> 
>      if not is_castable:
> -        s += 'if err := {}.{}.toC(&{}.{}); err != nil {{\n'.format(govarname,goname,
> +        s += 'if err := {0}.{1}.toC(&{2}.{3}); err != nil {{\n'.format(govarname,goname,
>                                                                     cvarname,cname)
> -        s += 'return fmt.Errorf("converting field {}: %v", err)\n}}\n'.format(goname)
> +        s += 'return fmt.Errorf("converting field {0}: %v", err)\n}}\n'.format(goname)
> 
>      elif gotypename == 'string':
>          # Use the cgo helper for converting C strings.
> -        s += 'if {}.{} != "" {{\n'.format(govarname,goname)
> -        s += '{}.{} = C.CString({}.{})}}\n'.format(cvarname,cname,
> +        s += 'if {0}.{1} != "" {{\n'.format(govarname,goname)
> +        s += '{0}.{1} = C.CString({2}.{3})}}\n'.format(cvarname,cname,
>                                                     govarname,goname)
> 
>      else:
> -        s += '{}.{} = C.{}({}.{})\n'.format(cvarname,cname,ctypename,
> +        s += '{0}.{1} = C.{2}({3}.{4})\n'.format(cvarname,cname,ctypename,
>                                              govarname,goname)
> 
>      return s
> @@ -537,7 +537,7 @@ def xenlight_golang_union_to_C(ty = None, union_name = '',
>      keytype   = ty.keyvar.type.typename
>      gokeytype = xenlight_golang_fmt_name(keytype)
> 
> -    interface_name = '{}_{}_union'.format(struct_name, keyname)
> +    interface_name = '{0}_{1}_union'.format(struct_name, keyname)
>      interface_name = xenlight_golang_fmt_name(interface_name, exported=False)
> 
>      cgo_keyname = keyname
> @@ -545,44 +545,44 @@ def xenlight_golang_union_to_C(ty = None, union_name = '',
>          cgo_keyname = '_' + cgo_keyname
> 
> 
> -    s = 'xc.{} = C.{}(x.{})\n'.format(cgo_keyname,keytype,gokeyname)
> -    s += 'switch x.{}{{\n'.format(gokeyname)
> +    s = 'xc.{0} = C.{1}(x.{2})\n'.format(cgo_keyname,keytype,gokeyname)
> +    s += 'switch x.{0}{{\n'.format(gokeyname)
> 
>      # Create switch statement to determine how to populate the C union.
>      for f in ty.fields:
> -        key_val = '{}_{}'.format(keytype, f.name)
> +        key_val = '{0}_{1}'.format(keytype, f.name)
>          key_val = xenlight_golang_fmt_name(key_val)
> 
> -        s += 'case {}:\n'.format(key_val)
> +        s += 'case {0}:\n'.format(key_val)
> 
>          if f.type is None:
>              s += "break\n"
>              continue
> 
> -        cgotype = '{}_{}_union_{}'.format(struct_name,keyname,f.name)
> +        cgotype = '{0}_{1}_union_{2}'.format(struct_name,keyname,f.name)
>          gotype  = xenlight_golang_fmt_name(cgotype)
> 
> -        field_name = xenlight_golang_fmt_name('{}_union'.format(keyname))
> -        s += 'tmp, ok := x.{}.({})\n'.format(field_name,gotype)
> +        field_name = xenlight_golang_fmt_name('{0}_union'.format(keyname))
> +        s += 'tmp, ok := x.{0}.({1})\n'.format(field_name,gotype)
>          s += 'if !ok {\n'
> -        s += 'return errors.New("wrong type for union key {}")\n'.format(keyname)
> +        s += 'return errors.New("wrong type for union key {0}")\n'.format(keyname)
>          s += '}\n'
> 
> -        s += 'var {} C.{}\n'.format(f.name,cgotype)
> +        s += 'var {0} C.{1}\n'.format(f.name,cgotype)
>          for uf in f.type.fields:
>              s += xenlight_golang_convert_to_C(uf,cvarname=f.name,
>                                                govarname='tmp')
> 
>          # The union is still represented as Go []byte.
> -        s += '{}Bytes := C.GoBytes(unsafe.Pointer(&{}),C.sizeof_{})\n'.format(f.name,
> +        s += '{0}Bytes := C.GoBytes(unsafe.Pointer(&{1}),C.sizeof_{2})\n'.format(f.name,
>                                                                                f.name,
>                                                                                cgotype)
> -        s += 'copy(xc.{}[:],{}Bytes)\n'.format(union_name,f.name)
> +        s += 'copy(xc.{0}[:],{1}Bytes)\n'.format(union_name,f.name)
> 
>      # End switch statement
>      s += 'default:\n'
> -    err_string = '"invalid union key \'%v\'", x.{}'.format(gokeyname)
> -    s += 'return fmt.Errorf({})'.format(err_string)
> +    err_string = '"invalid union key \'%v\'", x.{0}'.format(gokeyname)
> +    s += 'return fmt.Errorf({0})'.format(err_string)
>      s += '}\n'
> 
>      return s
> @@ -599,29 +599,29 @@ def xenlight_golang_array_to_C(ty = None):
> 
>      is_enum = isinstance(ty.type.elem_type,idl.Enumeration)
>      if gotypename in go_builtin_types or is_enum:
> -        s += 'if {} := len(x.{}); {} > 0 {{\n'.format(golenvar,goname,golenvar)
> -        s += 'xc.{} = (*C.{})(C.malloc(C.size_t({}*{})))\n'.format(cname,ctypename,
> +        s += 'if {0} := len(x.{1}); {2} > 0 {{\n'.format(golenvar,goname,golenvar)
> +        s += 'xc.{0} = (*C.{1})(C.malloc(C.size_t({2}*{3})))\n'.format(cname,ctypename,
>                                                                     golenvar,golenvar)
> -        s += 'xc.{} = C.int({})\n'.format(clenvar,golenvar)
> -        s += 'c{} := (*[1<<28]C.{})(unsafe.Pointer(xc.{}))[:{}:{}]\n'.format(goname,
> +        s += 'xc.{0} = C.int({1})\n'.format(clenvar,golenvar)
> +        s += 'c{0} := (*[1<<28]C.{1})(unsafe.Pointer(xc.{2}))[:{3}:{4}]\n'.format(goname,
>                                                                        ctypename,cname,
>                                                                        golenvar,golenvar)
> -        s += 'for i,v := range x.{} {{\n'.format(goname)
> -        s += 'c{}[i] = C.{}(v)\n'.format(goname,ctypename)
> +        s += 'for i,v := range x.{0} {{\n'.format(goname)
> +        s += 'c{0}[i] = C.{1}(v)\n'.format(goname,ctypename)
>          s += '}\n}\n'
> 
>          return s
> 
> -    s += 'if {} := len(x.{}); {} > 0 {{\n'.format(golenvar,goname,golenvar)
> -    s += 'xc.{} = (*C.{})(C.malloc(C.ulong({})*C.sizeof_{}))\n'.format(cname,ctypename,
> +    s += 'if {0} := len(x.{1}); {2} > 0 {{\n'.format(golenvar,goname,golenvar)
> +    s += 'xc.{0} = (*C.{1})(C.malloc(C.ulong({2})*C.sizeof_{3}))\n'.format(cname,ctypename,
>                                                                     golenvar,ctypename)
> -    s += 'xc.{} = C.int({})\n'.format(clenvar,golenvar)
> -    s += 'c{} := (*[1<<28]C.{})(unsafe.Pointer(xc.{}))[:{}:{}]\n'.format(goname,
> +    s += 'xc.{0} = C.int({1})\n'.format(clenvar,golenvar)
> +    s += 'c{0} := (*[1<<28]C.{1})(unsafe.Pointer(xc.{2}))[:{3}:{4}]\n'.format(goname,
>                                                                           ctypename,cname,
>                                                                           golenvar,golenvar)
> -    s += 'for i,v := range x.{} {{\n'.format(goname)
> -    s += 'if err := v.toC(&c{}[i]); err != nil {{\n'.format(goname)
> -    s += 'return fmt.Errorf("converting field {}: %v", err)\n'.format(goname)
> +    s += 'for i,v := range x.{0} {{\n'.format(goname)
> +    s += 'if err := v.toC(&c{0}[i]); err != nil {{\n'.format(goname)
> +    s += 'return fmt.Errorf("converting field {0}: %v", err)\n'.format(goname)
>      s += '}\n}\n}\n'
> 
>      return s
> @@ -633,7 +633,7 @@ def xenlight_golang_define_constructor(ty = None):
>      gotypename = xenlight_golang_fmt_name(ctypename)
> 
>      # Since this func is exported, add a comment as per Go conventions.
> -    s += '// New{} returns an instance of {}'.format(gotypename,gotypename)
> +    s += '// New{0} returns an instance of {1}'.format(gotypename,gotypename)
>      s += ' initialized with defaults.\n'
> 
>      # If a struct has a keyed union, an extra argument is
> @@ -643,7 +643,7 @@ def xenlight_golang_define_constructor(ty = None):
>      init_fns = []
> 
>      # Add call to parent init_fn first.
> -    init_fns.append('C.{}(&xc)'.format(ty.init_fn))
> +    init_fns.append('C.{0}(&xc)'.format(ty.init_fn))
> 
>      for f in ty.fields:
>          if not isinstance(f.type, idl.KeyedUnion):
> @@ -658,24 +658,24 @@ def xenlight_golang_define_constructor(ty = None):
>          # Several keyed unions use 'type' as the key variable name. In
>          # that case, prepend the first letter of the Go type name.
>          if param_goname == 'type':
> -            param_goname = '{}type'.format(param_gotype.lower()[0])
> +            param_goname = '{0}type'.format(param_gotype.lower()[0])
> 
>          # Add call to keyed union's init_fn.
> -        init_fns.append('C.{}_{}(&xc, C.{}({}))'.format(ty.init_fn,
> +        init_fns.append('C.{0}_{1}(&xc, C.{2}({3}))'.format(ty.init_fn,
>                                                          param.name,
>                                                          param_ctype,
>                                                          param_goname))
> 
>          # Add to params list.
> -        params.append('{} {}'.format(param_goname, param_gotype))
> +        params.append('{0} {1}'.format(param_goname, param_gotype))
> 
>      # Define function
> -    s += 'func New{}({}) (*{}, error) {{\n'.format(gotypename,
> +    s += 'func New{0}({1}) (*{2}, error) {{\n'.format(gotypename,
>                                                     ','.join(params),
>                                                     gotypename)
> 
>      # Declare variables.
> -    s += 'var (\nx {}\nxc C.{})\n\n'.format(gotypename, ctypename)
> +    s += 'var (\nx {0}\nxc C.{1})\n\n'.format(gotypename, ctypename)
> 
>      # Write init_fn calls.
>      s += '\n'.join(init_fns)
> @@ -684,7 +684,7 @@ def xenlight_golang_define_constructor(ty = None):
>      # Make sure dispose_fn gets called when the constructor
>      # returns.
>      if ty.dispose_fn is not None:
> -        s += 'defer C.{}(&xc)\n'.format(ty.dispose_fn)
> +        s += 'defer C.{0}(&xc)\n'.format(ty.dispose_fn)
> 
>      s += '\n'
> 
> @@ -727,7 +727,7 @@ if __name__ == '__main__':
>      header_comment="""// DO NOT EDIT.
>  //
>  // This file is generated by:
> -// {}
> +// {0}
>  //
> 
>  """.format(' '.join(sys.argv))
> --
> 2.17.1
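
For reference, the two format() spellings this patch switches between behave identically on Python 2.7 and later; only the explicitly numbered form is accepted by Python 2.6. A minimal illustration (plain standard-library Python, no xen code involved):

```python
# Explicitly numbered replacement fields: accepted by Python 2.6, 2.7 and 3.x.
numbered = '{0} {1}'.format('hello', 'world')

# Auto-numbered fields: added in Python 2.7; on 2.6 this line raises
# ValueError ("zero length field name in format").
auto = '{} {}'.format('hello', 'world')

# On any interpreter that accepts both spellings, the results are identical.
assert numbered == auto == 'hello world'
print(numbered)  # -> hello world
```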




From xen-devel-bounces@lists.xenproject.org Tue Jul 21 07:31:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jul 2020 07:31:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxmkC-0005W3-5L; Tue, 21 Jul 2020 07:31:20 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=CrlD=BA=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jxmkA-0005Vv-Cf
 for xen-devel@lists.xenproject.org; Tue, 21 Jul 2020 07:31:18 +0000
X-Inumbo-ID: 2a051d96-cb24-11ea-84f6-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2a051d96-cb24-11ea-84f6-bc764e2007e4;
 Tue, 21 Jul 2020 07:31:17 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 803A2AD1A;
 Tue, 21 Jul 2020 07:31:23 +0000 (UTC)
Subject: Re: [PATCH 5/8] bitmap: move to/from xenctl_bitmap conversion helpers
To: Julien Grall <julien@xen.org>
References: <3375cacd-d3b7-9f06-44a7-4b684b6a77d6@suse.com>
 <5835147f-8428-1d74-7d6e-bbb5522289c7@suse.com>
 <fef25c94-a3ce-c17e-966c-a7e479566fc5@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <3e2b3be4-d11c-3fa7-2ffe-8db5ebdb91b8@suse.com>
Date: Tue, 21 Jul 2020 09:31:16 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <fef25c94-a3ce-c17e-966c-a7e479566fc5@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 George Dunlap <George.Dunlap@eu.citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 20.07.2020 18:20, Julien Grall wrote:
> On 15/07/2020 11:40, Jan Beulich wrote:
>> A subsequent change will exclude domctl.c from getting built for a
>> particular configuration, yet the two functions get used from elsewhere.
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>
>> --- a/xen/common/bitmap.c
>> +++ b/xen/common/bitmap.c
>> @@ -9,6 +9,9 @@
>>   #include <xen/errno.h>
>>   #include <xen/bitmap.h>
>>   #include <xen/bitops.h>
>> +#include <xen/cpumask.h>
>> +#include <xen/domain.h>
> 
> The inclusion of xen/domain.h in common/bitmap.c seems a bit odd to me. 
> Would it make sense to move the prototype of 
> bitmap_to_xenctl_bitmap()/xenctl_bitmap_to_bitmap() to bitmap.h?

Ah yes, no idea why it didn't occur to me to do it this way; like you,
I didn't really like the domain.h inclusion here. (The Arm-side cleanup
was needed for this approach, so I guess it remains a nice side effect,
even though it's now no longer strictly needed. You've committed that
one already anyway.)

Jan


From xen-devel-bounces@lists.xenproject.org Tue Jul 21 08:30:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jul 2020 08:30:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxnfV-0002ab-01; Tue, 21 Jul 2020 08:30:33 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gK6X=BA=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jxnfT-0002aW-LR
 for xen-devel@lists.xenproject.org; Tue, 21 Jul 2020 08:30:31 +0000
X-Inumbo-ID: 6f5ac2e4-cb2c-11ea-84f6-bc764e2007e4
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6f5ac2e4-cb2c-11ea-84f6-bc764e2007e4;
 Tue, 21 Jul 2020 08:30:30 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595320230;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:content-transfer-encoding:in-reply-to;
 bh=BzY2QweQF/rQiDbWv7Jn9Mmq9ZfmwWs8Ov87vMZGyUU=;
 b=NMatKYSKM8Qv9ope82+19TQxsi+Vc+EtzyekOsRyuuE1K49xkzQRo3VU
 6425hCIOVRGhPLvvY0tTQmQoBdj4wXeYpRDpr3J54zTz4T2HYonILqKyQ
 tlYfe5Nh9nNQ0Kw+kNG+nJ/GVs/dXG/P4DxP6DhKffsMY0F/99KD19ZbK E=;
Authentication-Results: esa1.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: x0OiNrk4e0Kv/geF7/GsMS3ajUnY+7gsSFFEXbjmHoWrn0Lxe6gtvxgEn0GR/cqaG+YYQcjcmB
 y9cXZDaFN18y2oIa9h8b9NAh7AkUkC7xN6pUY0g7UBLeXlTrlJ4gdZ2eS3wMmI4P/wL7ISLp8w
 /rH69WDGB4HRSyD8Mi1RPaLwy67UwO/DxY+phNhaDtpaI/XDgb+g5Z7PhX+c3q5qRqoauEaBxR
 Eblv/h6LWWnHOFZ3YXG0BABZ9rg/RT1Z7xPJGXBV02waTGeDtwxItb0TozFEIVA2hEWlnwbpcU
 zt4=
X-SBRS: 2.7
X-MesageID: 23153757
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,378,1589256000"; d="scan'208";a="23153757"
Date: Tue, 21 Jul 2020 10:30:18 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Anchal Agarwal <anchalag@amazon.com>, <marmarek@invisiblethingslab.com>
Subject: Re: [PATCH v2 01/11] xen/manage: keep track of the on-going suspend
 mode
Message-ID: <20200721083018.GM7191@Air-de-Roger>
References: <cover.1593665947.git.anchalag@amazon.com>
 <20200702182136.GA3511@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
 <50298859-0d0e-6eb0-029b-30df2a4ecd63@oracle.com>
 <20200715204943.GB17938@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
 <0ca3c501-e69a-d2c9-a24c-f83afd4bdb8c@oracle.com>
 <20200717191009.GA3387@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
 <5464f384-d4b4-73f0-d39e-60ba9800d804@oracle.com>
 <20200720093705.GG7191@Air-de-Roger>
 <20200721001736.GB19610@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20200721001736.GB19610@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: eduval@amazon.com, len.brown@intel.com, peterz@infradead.org,
 benh@kernel.crashing.org, x86@kernel.org, linux-mm@kvack.org, pavel@ucw.cz,
 hpa@zytor.com, tglx@linutronix.de, sstabellini@kernel.org, kamatam@amazon.com,
 mingo@redhat.com, xen-devel@lists.xenproject.org, sblbir@amazon.com,
 axboe@kernel.dk, konrad.wilk@oracle.com, bp@alien8.de,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>, jgross@suse.com,
 netdev@vger.kernel.org, linux-pm@vger.kernel.org, rjw@rjwysocki.net,
 linux-kernel@vger.kernel.org, vkuznets@redhat.com, davem@davemloft.net,
 dwmw@amazon.co.uk
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Marek: I'm adding you in case you are able to give this a try and
make sure it doesn't break suspend for dom0.

On Tue, Jul 21, 2020 at 12:17:36AM +0000, Anchal Agarwal wrote:
> On Mon, Jul 20, 2020 at 11:37:05AM +0200, Roger Pau Monné wrote:
> > CAUTION: This email originated from outside of the organization. Do not click links or open attachments unless you can confirm the sender and know the content is safe.
> > 
> > 
> > 
> > On Sat, Jul 18, 2020 at 09:47:04PM -0400, Boris Ostrovsky wrote:
> > > (Roger, question for you at the very end)
> > >
> > > On 7/17/20 3:10 PM, Anchal Agarwal wrote:
> > > > On Wed, Jul 15, 2020 at 05:18:08PM -0400, Boris Ostrovsky wrote:
> > > >> On 7/15/20 4:49 PM, Anchal Agarwal wrote:
> > > >>> On Mon, Jul 13, 2020 at 11:52:01AM -0400, Boris Ostrovsky wrote:
> > > >>>> On 7/2/20 2:21 PM, Anchal Agarwal wrote:
> > > >>>> And PVH dom0.
> > > >>> That's another good use case to make it work with; however, I still
> > > >>> think that should be tested/worked on separately, as the feature itself
> > > >>> (PVH Dom0) is very new.
> > > >>
> > > >> Same question here --- will this break PVH dom0?
> > > >>
> > > > I haven't tested it as a part of this series. Is that a blocker here?
> > >
> > >
> > > I suspect dom0 will not do well now as far as hibernation goes, in which
> > > case you are not breaking anything.
> > >
> > >
> > > Roger?
> > 
> > I sadly don't have any box ATM that supports hibernation where I
> > could test it. We have hibernation support for PV dom0, so while I
> > haven't done anything specific to support or test hibernation on PVH
> > dom0 I would at least aim to not make this any worse, and hence the
> > check should at least also fail for a PVH dom0?
> > 
> > if (!xen_hvm_domain() || xen_initial_domain())
> >     return -ENODEV;
> > 
> > Ie: none of this should be applied to a PVH dom0, as it doesn't have
> > PV devices and hence should follow the bare metal device suspend.
> >
> So from what I understand, you meant that any guest running on a PVH dom0 should not
> hibernate if hibernation is triggered from within the guest - or should they?

Er, no to both, I think. What I meant is that a PVH dom0 should be able
to properly suspend, and we should make sure this work doesn't make
that any harder (or break it if it's currently working).

Or at least that's how I understood the question raised by Boris.

You are adding code to the generic suspend path that's also used by dom0
to perform bare-metal suspend. This is fine for a PV dom0 for now,
because the code is gated on xen_hvm_domain, but you should also
take into account that a PVH dom0 is considered an HVM domain, and
hence will get the notifier registered.

> > Also I would contact the QubesOS guys, they rely heavily on the
> > suspend feature for dom0, and that's something not currently tested by
> > osstest so any breakages there go unnoticed.
> > 
> Was this for me or Boris? If it's the former, I have no idea how to.

I've now added Marek.

Roger.


From xen-devel-bounces@lists.xenproject.org Tue Jul 21 09:21:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jul 2020 09:21:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxoS7-0006pS-2X; Tue, 21 Jul 2020 09:20:47 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=bQ5W=BA=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jxoS5-0006pN-7m
 for xen-devel@lists.xenproject.org; Tue, 21 Jul 2020 09:20:45 +0000
X-Inumbo-ID: 737cf175-cb33-11ea-84fc-bc764e2007e4
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 737cf175-cb33-11ea-84fc-bc764e2007e4;
 Tue, 21 Jul 2020 09:20:44 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595323244;
 h=subject:to:cc:references:from:message-id:date:
 mime-version:in-reply-to:content-transfer-encoding;
 bh=7q50Xwsj+ljEirvyYc69VOA10kw2w40I9IPu2T3IlSY=;
 b=P0as67hJfJhJLIzYOQfk3c3TwEoQNq909P5BmodNy+8/NZ1RPG0+WjxC
 mE92nlh2m9VTRy3G20zVopfJLnuEOYaos5cnfzqkd/CZzL4/G0YVIGZWm
 udH0fcgzbkr+XMN5BSMM0EBTh2iHRa5+m2Ti6H75+jmyMQSj+p3Zge1ZV s=;
Authentication-Results: esa6.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: EIFYKv/CbPa0FveuWYks6Mxcspe/ZgyvW95ZOrgEXJzpNl9EXMwgk6BEc9SFlPrAkfsCeYnYMW
 k7maOgHqpVtdHtqp2IOBOIa43JK8mb0vd9COyxJuBIcdB/dowGiPZ5ncHx4uXnDoYbwRM6I+bH
 JAOOfr/yhb/bjFDVcnV1vlU8qADA4BOFryHf+tvzFMVsfrYmT2O5QPfE+Lo/e6LFRtDf3hIGKF
 ETRqeiFxLDNENGWdJG/61ge2akYxxXF8TTSrffsdVqqFHKyTC7+pp6lNc9OUyKXLSQUHwXqUvR
 +Ac=
X-SBRS: 2.7
X-MesageID: 23151962
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,378,1589256000"; d="scan'208";a="23151962"
Subject: Re: [PATCH for-4.14] golang/xenlight: fix code generation for python
 2.6
To: <paul@xen.org>, 'Nick Rosbrook' <rosbrookn@gmail.com>,
 <xen-devel@lists.xenproject.org>
References: <d406ae82e0cdde2dc33a92d2685ffb77bacab7ee.1595289055.git.rosbrookn@ainfosec.com>
 <003901d65f2e$6faab0c0$4f001240$@xen.org>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <66dc2e79-e899-1d94-c0f2-d834b55cd859@citrix.com>
Date: Tue, 21 Jul 2020 10:20:36 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <003901d65f2e$6faab0c0$4f001240$@xen.org>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: 'Nick Rosbrook' <rosbrookn@ainfosec.com>,
 'Ian Jackson' <ian.jackson@eu.citrix.com>,
 'George Dunlap' <george.dunlap@citrix.com>, 'Wei Liu' <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 21/07/2020 08:13, Paul Durrant wrote:
>> -----Original Message-----
>> From: Nick Rosbrook <rosbrookn@gmail.com>
>> Sent: 21 July 2020 00:55
>> To: xen-devel@lists.xenproject.org
>> Cc: paul@xen.org; Nick Rosbrook <rosbrookn@ainfosec.com>; George Dunlap <george.dunlap@citrix.com>;
>> Ian Jackson <ian.jackson@eu.citrix.com>; Wei Liu <wl@xen.org>
>> Subject: [PATCH for-4.14] golang/xenlight: fix code generation for python 2.6
>>
>> Before python 2.7, str.format() calls required that the format fields
>> were explicitly enumerated, e.g.:
>>
>>   '{0} {1}'.format(foo, bar)
>>
>>   vs.
>>
>>   '{} {}'.format(foo, bar)
>>
>> Currently, gengotypes.py uses the latter pattern everywhere, which means
>> the Go bindings do not build on python 2.6. Use the 2.6 syntax for
>> format() in order to support python 2.6 for now.
>>
>> Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com>
> I'm afraid this is too late for 4.14 now. We are in hard freeze, so only minor docs changes or critical bug fixes are being taken at
> this point.

This is Reported-by me, and is breaking gitlab CI on the master and 4.14
branches (because apparently no one else cares to look at the results...)

The alternative is to pull support for CentOS 6 from the 4.14 release,
which would best be done by a commit taking out the C6 containers from CI.

~Andrew
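
As an aside, remaining auto-numbered fields in a generator script can be spotted mechanically. A rough standard-library sketch (the regex is approximate - it can also flag literal braces in doubled-brace escapes like '{{}}' - and the helper name find_auto_fields is made up here):

```python
import re

# Matches '{}' and auto-numbered fields carrying a conversion or format
# spec, e.g. '{!r}' or '{:>8}', but not explicitly numbered ones like '{0}'.
AUTO_FIELD = re.compile(r'\{(?:![rsa])?(?::[^{}]*)?\}')

def find_auto_fields(source):
    """Return (line_number, line) pairs that use auto-numbered fields."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        if '.format(' in line and AUTO_FIELD.search(line):
            hits.append((lineno, line.strip()))
    return hits
```

Run against gengotypes.py, anything it reports would need the explicit-numbering treatment from the patch above.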


From xen-devel-bounces@lists.xenproject.org Tue Jul 21 09:23:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jul 2020 09:23:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxoUz-0006y6-I8; Tue, 21 Jul 2020 09:23:45 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=JDR4=BA=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1jxoUy-0006y1-Fa
 for xen-devel@lists.xenproject.org; Tue, 21 Jul 2020 09:23:44 +0000
X-Inumbo-ID: def41234-cb33-11ea-84fc-bc764e2007e4
Received: from mail-wr1-x431.google.com (unknown [2a00:1450:4864:20::431])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id def41234-cb33-11ea-84fc-bc764e2007e4;
 Tue, 21 Jul 2020 09:23:43 +0000 (UTC)
Received: by mail-wr1-x431.google.com with SMTP id a14so5608858wra.5
 for <xen-devel@lists.xenproject.org>; Tue, 21 Jul 2020 02:23:43 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
 :mime-version:content-transfer-encoding:thread-index
 :content-language;
 bh=OlvW3xbvdzsZzjeUiLqigemg/exIGFMp6fI8py9pDoo=;
 b=JDklvqbu+podRlAKOwhFEiEbjDzEuGdWWqmNfkzs18dCNwBDINGWZ4wh2zHjuu9z5b
 raJDAXsNskWbXC9Ol4fQkmOyMUsvQ7mRDHbXJRBnhxF39gDIbWJAhJc2E8Eaz5ApDyNC
 /ZX6W/xwGVT9Qu1VpvyqatME8d/140wQo34hXX8uyEgmjJKuR0oiisr0Twup8i8XwNrm
 55L6CKNNx3h3ZFNQMneezgMhb7Wf304G/Y+oRli2rWcIq6SyEg+nJTZfPYB7xsj84POS
 z0uCa8pxYnPQ7o14SVT1TqNV2EJSkaiHcwES2qFWaRI5QKuSoA+Xh25gBNDLy3ZSw4UX
 bIlA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
 :subject:date:message-id:mime-version:content-transfer-encoding
 :thread-index:content-language;
 bh=OlvW3xbvdzsZzjeUiLqigemg/exIGFMp6fI8py9pDoo=;
 b=XgAXthWCF55/eRP+obo202iYE5va761eDuJKSizTIWIKLu+IwCXQI2eSBavWhSFqQT
 5RlbsQHF4ifwqov5N4tObTRhRjxPisH66xA641SKExx69fuj68LrnPtRHrbd3eEPGqS5
 qrUXTGXZZREwlhdGHpLnqEONCfEXMdHS7wVS3nSEpMIathQkc0vhGekqy9uPqtOBJwk9
 AWenMkEXJXq87t8u0KJfurUwONVaSEYhtdfa6p3Uln9yIG9+/Tvxk98hhf1uVPB+ezpe
 OTG6YiuEovnyp3hctRT3iRxLv0M5kYWzmu/YDIXB6jQig/wndknkAXzT0A8oQnN0qatB
 JMJw==
X-Gm-Message-State: AOAM5336T5NOTqdBaW+qmGEMEeRoW6/jbXubU04ARpnrA/qXCQdwD3cU
 k57/q2tdFpl4w0LUDoh+oTE=
X-Google-Smtp-Source: ABdhPJy1FccZoSGC4ujMAJaT39575N9yzIHrqVgvTrIYj0R8L/2sI6m2+F//D7j7iLXOIuNWgCXcVQ==
X-Received: by 2002:adf:f449:: with SMTP id f9mr1556571wrp.416.1595323422924; 
 Tue, 21 Jul 2020 02:23:42 -0700 (PDT)
Received: from CBGR90WXYV0 (54-240-197-235.amazon.com. [54.240.197.235])
 by smtp.gmail.com with ESMTPSA id k126sm2734171wmf.3.2020.07.21.02.23.41
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Tue, 21 Jul 2020 02:23:42 -0700 (PDT)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
To: "'Andrew Cooper'" <andrew.cooper3@citrix.com>,
 "'Nick Rosbrook'" <rosbrookn@gmail.com>, <xen-devel@lists.xenproject.org>
References: <d406ae82e0cdde2dc33a92d2685ffb77bacab7ee.1595289055.git.rosbrookn@ainfosec.com>
 <003901d65f2e$6faab0c0$4f001240$@xen.org>
 <66dc2e79-e899-1d94-c0f2-d834b55cd859@citrix.com>
In-Reply-To: <66dc2e79-e899-1d94-c0f2-d834b55cd859@citrix.com>
Subject: RE: [PATCH for-4.14] golang/xenlight: fix code generation for python
 2.6
Date: Tue, 21 Jul 2020 10:23:41 +0100
Message-ID: <004001d65f40$a0348330$e09d8990$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="utf-8"
Content-Transfer-Encoding: 7bit
X-Mailer: Microsoft Outlook 16.0
Thread-Index: AQJO5yDcExPNZQbG/t5S46VRX2lP6wJ2S7dHAu7JLWan9YorIA==
Content-Language: en-gb
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Reply-To: paul@xen.org
Cc: 'Nick Rosbrook' <rosbrookn@ainfosec.com>,
 'Ian Jackson' <ian.jackson@eu.citrix.com>,
 'George Dunlap' <george.dunlap@citrix.com>, 'Wei Liu' <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> -----Original Message-----
> From: Andrew Cooper <andrew.cooper3@citrix.com>
> Sent: 21 July 2020 10:21
> To: paul@xen.org; 'Nick Rosbrook' <rosbrookn@gmail.com>; xen-devel@lists.xenproject.org
> Cc: 'Nick Rosbrook' <rosbrookn@ainfosec.com>; 'Ian Jackson' <ian.jackson@eu.citrix.com>; 'George
> Dunlap' <george.dunlap@citrix.com>; 'Wei Liu' <wl@xen.org>
> Subject: Re: [PATCH for-4.14] golang/xenlight: fix code generation for python 2.6
> 
> On 21/07/2020 08:13, Paul Durrant wrote:
> >> -----Original Message-----
> >> From: Nick Rosbrook <rosbrookn@gmail.com>
> >> Sent: 21 July 2020 00:55
> >> To: xen-devel@lists.xenproject.org
> >> Cc: paul@xen.org; Nick Rosbrook <rosbrookn@ainfosec.com>; George Dunlap <george.dunlap@citrix.com>;
> >> Ian Jackson <ian.jackson@eu.citrix.com>; Wei Liu <wl@xen.org>
> >> Subject: [PATCH for-4.14] golang/xenlight: fix code generation for python 2.6
> >>
> >> Before python 2.7, str.format() calls required that the format fields
> >> were explicitly enumerated, e.g.:
> >>
> >>   '{0} {1}'.format(foo, bar)
> >>
> >>   vs.
> >>
> >>   '{} {}'.format(foo, bar)
> >>
> >> Currently, gengotypes.py uses the latter pattern everywhere, which means
> >> the Go bindings do not build on python 2.6. Use the 2.6 syntax for
> >> format() in order to support python 2.6 for now.
> >>
> >> Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com>
> > I'm afraid this is too late for 4.14 now. We are in hard freeze, so only minor docs changes or
> critical bug fixes are being taken at
> > this point.
> 
> This is Reported-by me, and breaking gitlab CI on the master and 4.14
> branches (because apparently noone else cares to look at the results...)
> 
> The alternative is to pull support for CentOS 6 from the 4.14 release,
> which would best be done by a commit taking out the C6 containers from CI.
> 

At this late stage I'd rather we did that.

  Paul

> ~Andrew



From xen-devel-bounces@lists.xenproject.org Tue Jul 21 09:33:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jul 2020 09:33:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxoeQ-0007sH-Hl; Tue, 21 Jul 2020 09:33:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tByU=BA=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jxoeP-0007sC-7R
 for xen-devel@lists.xenproject.org; Tue, 21 Jul 2020 09:33:29 +0000
X-Inumbo-ID: 3ac067ce-cb35-11ea-84fd-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3ac067ce-cb35-11ea-84fd-bc764e2007e4;
 Tue, 21 Jul 2020 09:33:26 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=0a/PoavfSvEr1yoz/3gtwqRbsLY0a5D4kDkK8yg80BE=; b=nCc6+RKYShkhLZi7DQAtjQcTb
 4xgt6G+u4Ocqkxiao3tVQFKJs3dN9aUwDdDCupAjbYWJMkBIlQSHALWRzJ764n78nZFG/my6cuE7w
 pzCBTRKjPfX8T+JuP58P8YE9Jq5AacL+aeqxJZM8rMdXh+2SPkLvhw5W0ri8Ofdrd5Ps0=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jxoeM-0001yg-9e; Tue, 21 Jul 2020 09:33:26 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jxoeM-000855-24; Tue, 21 Jul 2020 09:33:26 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jxoeM-0007SH-1M; Tue, 21 Jul 2020 09:33:26 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-152052-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 152052: regressions - trouble:
 blocked/fail/pass/starved
X-Osstest-Failures: linux-linus:test-arm64-arm64-libvirt-xsm:guest-start/debian.repeat:fail:regression
 linux-linus:build-armhf-pvops:kernel-build:fail:regression
 linux-linus:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
 linux-linus:test-armhf-armhf-examine:build-check(1):blocked:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
 linux-linus:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
 linux-linus:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-qemuu-freebsd12-amd64:hosts-allocate:starved:nonblocking
 linux-linus:test-amd64-amd64-qemuu-freebsd11-amd64:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This: linux=5714ee50bb4375bd586858ad800b1d9772847452
X-Osstest-Versions-That: linux=1b5044021070efa3259f3e9548dc35d1eb6aa844
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 21 Jul 2020 09:33:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 152052 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/152052/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-libvirt-xsm 16 guest-start/debian.repeat fail REGR. vs. 151214
 build-armhf-pvops             6 kernel-build             fail REGR. vs. 151214

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-examine      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 151214
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 151214
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 151214
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 151214
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 151214
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 151214
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-amd64-amd64-qemuu-freebsd12-amd64  2 hosts-allocate           starved n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  2 hosts-allocate           starved n/a

version targeted for testing:
 linux                5714ee50bb4375bd586858ad800b1d9772847452
baseline version:
 linux                1b5044021070efa3259f3e9548dc35d1eb6aa844

Last test of basis   151214  2020-06-18 02:27:46 Z   33 days
Failing since        151236  2020-06-19 19:10:35 Z   31 days   50 attempts
Testing same since   152032  2020-07-20 02:22:49 Z    1 days    2 attempts

------------------------------------------------------------
829 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            fail    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       starved 
 test-amd64-amd64-qemuu-freebsd12-amd64                       starved 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     blocked 
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 45172 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Jul 21 09:53:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jul 2020 09:53:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxoy1-0001Ar-90; Tue, 21 Jul 2020 09:53:45 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=qJsm=BA=citrix.com=george.dunlap@srs-us1.protection.inumbo.net>)
 id 1jxoxz-0001Am-Sp
 for xen-devel@lists.xenproject.org; Tue, 21 Jul 2020 09:53:43 +0000
X-Inumbo-ID: 0f22c168-cb38-11ea-84fd-bc764e2007e4
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0f22c168-cb38-11ea-84fd-bc764e2007e4;
 Tue, 21 Jul 2020 09:53:42 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595325222;
 h=from:to:cc:subject:date:message-id:references:
 in-reply-to:content-id:content-transfer-encoding: mime-version;
 bh=GvsOY8nL9hru4HOZs3I0bKp4alvd43EjwcfVQIXsRdM=;
 b=TbSAELeeI2EoIUHZmxv/w/TP7A6BAQJGslDMkLp8BceKARDonAgSKHFg
 20R0pqBFxcALYsGBxlDcghsTQlP6mQrUPV/um2CEddA21GtIZ1CMbncRl
 bd6SrSwCe+tMU5Q9hnly4eAbICgvtWJr11CwfAKSbh73Tq7YUSz72DkX4 o=;
Authentication-Results: esa4.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
X-SBRS: 2.7
X-MesageID: 23680999
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,378,1589256000"; d="scan'208";a="23680999"
From: George Dunlap <George.Dunlap@citrix.com>
To: "paul@xen.org" <paul@xen.org>
Subject: Re: [PATCH for-4.14] golang/xenlight: fix code generation for python
 2.6
Thread-Topic: [PATCH for-4.14] golang/xenlight: fix code generation for python
 2.6
Thread-Index: AQHWXvEozBxH2vl03UmqM8eBIsELzakRfXMAgAAjhQCAAADdgIAACFcA
Date: Tue, 21 Jul 2020 09:53:32 +0000
Message-ID: <44537ECC-E301-46BD-8B8E-F3B522A18FEC@citrix.com>
References: <d406ae82e0cdde2dc33a92d2685ffb77bacab7ee.1595289055.git.rosbrookn@ainfosec.com>
 <003901d65f2e$6faab0c0$4f001240$@xen.org>
 <66dc2e79-e899-1d94-c0f2-d834b55cd859@citrix.com>
 <004001d65f40$a0348330$e09d8990$@xen.org>
In-Reply-To: <004001d65f40$a0348330$e09d8990$@xen.org>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-mailer: Apple Mail (2.3608.80.23.2.2)
x-ms-exchange-messagesentrepresentingtype: 1
x-ms-exchange-transport-fromentityheader: Hosted
Content-Type: text/plain; charset="utf-8"
Content-ID: <5939C594A45DC04E8E97F6EF6D5EB097@citrix.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Wei Liu <wl@xen.org>, Andrew Cooper <Andrew.Cooper3@citrix.com>,
 Nick Rosbrook <rosbrookn@ainfosec.com>, Nick Rosbrook <rosbrookn@gmail.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Ian Jackson <Ian.Jackson@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> On Jul 21, 2020, at 10:23 AM, Paul Durrant <xadimgnik@gmail.com> wrote:
> 
>> -----Original Message-----
>> From: Andrew Cooper <andrew.cooper3@citrix.com>
>> Sent: 21 July 2020 10:21
>> To: paul@xen.org; 'Nick Rosbrook' <rosbrookn@gmail.com>; xen-devel@lists.xenproject.org
>> Cc: 'Nick Rosbrook' <rosbrookn@ainfosec.com>; 'Ian Jackson' <ian.jackson@eu.citrix.com>; 'George
>> Dunlap' <george.dunlap@citrix.com>; 'Wei Liu' <wl@xen.org>
>> Subject: Re: [PATCH for-4.14] golang/xenlight: fix code generation for python 2.6
>> 
>> On 21/07/2020 08:13, Paul Durrant wrote:
>>>> -----Original Message-----
>>>> From: Nick Rosbrook <rosbrookn@gmail.com>
>>>> Sent: 21 July 2020 00:55
>>>> To: xen-devel@lists.xenproject.org
>>>> Cc: paul@xen.org; Nick Rosbrook <rosbrookn@ainfosec.com>; George Dunlap <george.dunlap@citrix.com>;
>>>> Ian Jackson <ian.jackson@eu.citrix.com>; Wei Liu <wl@xen.org>
>>>> Subject: [PATCH for-4.14] golang/xenlight: fix code generation for python 2.6
>>>> 
>>>> Before python 2.7, str.format() calls required that the format fields
>>>> were explicitly enumerated, e.g.:
>>>> 
>>>>  '{0} {1}'.format(foo, bar)
>>>> 
>>>>  vs.
>>>> 
>>>>  '{} {}'.format(foo, bar)
>>>> 
>>>> Currently, gengotypes.py uses the latter pattern everywhere, which means
>>>> the Go bindings do not build on python 2.6. Use the 2.6 syntax for
>>>> format() in order to support python 2.6 for now.
>>>> 
>>>> Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com>
>>> I'm afraid this is too late for 4.14 now. We are in hard freeze, so only minor docs changes or
>> critical bug fixes are being taken at
>>> this point.
>> 
>> This is Reported-by me, and breaking gitlab CI on the master and 4.14
>> branches (because apparently noone else cares to look at the results...)
>> 
>> The alternative is to pull support for CentOS 6 from the 4.14 release,
>> which would best be done by a commit taking out the C6 containers from CI.
>> 
> 
> At this late stage I'd rather we did that.

We should probably add a release note saying that there’s a known intermittent build issue on CentOS 6.

 -George
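The patch discussed in this thread ("golang/xenlight: fix code generation for python 2.6") hinges on a small Python detail: auto-numbered `str.format()` fields were only added in Python 2.7. A standalone sketch of the two spellings, not taken from the patch itself (the 'hello'/'world' values are illustrative only):

```python
# Explicitly numbered fields: accepted by Python 2.6, 2.7, and 3.x.
old_style = '{0} {1}'.format('hello', 'world')

# Auto-numbered fields: added in Python 2.7. On Python 2.6 this line
# raises "ValueError: zero length field name in format".
new_style = '{} {}'.format('hello', 'world')

# On 2.7+ both produce the same result.
assert old_style == new_style == 'hello world'
print(old_style)
```

Rewriting every `'{} {}'`-style call in gengotypes.py to the numbered form is therefore a backwards-compatible change: the numbered spelling works on every interpreter the auto-numbered one does.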


From xen-devel-bounces@lists.xenproject.org Tue Jul 21 09:55:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jul 2020 09:55:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxozG-0001Ev-LK; Tue, 21 Jul 2020 09:55:02 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=rLXd=BA=arm.com=rahul.singh@srs-us1.protection.inumbo.net>)
 id 1jxozE-0001Eo-8O
 for xen-devel@lists.xenproject.org; Tue, 21 Jul 2020 09:55:00 +0000
X-Inumbo-ID: 3ae22e10-cb38-11ea-a09c-12813bfff9fa
Received: from EUR01-VE1-obe.outbound.protection.outlook.com (unknown
 [40.107.14.48]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 3ae22e10-cb38-11ea-a09c-12813bfff9fa;
 Tue, 21 Jul 2020 09:54:56 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com; 
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=zeo4narOoP4gCUzDyeyuRzuZ+8SpPZLGTzSvlzCpGmo=;
 b=XdHb0lR0s7truYm3I75ShqzE0O7xSXscK820GEaZk5ZRVYtnDoCRrdb652w611R/+GKAa1rvNbMWYg5GZ79hAb5g5HTyoC/3njMqaqJAaiUXLTRioVzZDe4UAa6FvgSXdNvtVrZe0fv4ArAZX9Q8nzMB5ANaKKRBVvLn2FODrbA=
Received: from AM6P192CA0101.EURP192.PROD.OUTLOOK.COM (2603:10a6:209:8d::42)
 by DBBPR08MB4712.eurprd08.prod.outlook.com (2603:10a6:10:f4::11) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3216.20; Tue, 21 Jul
 2020 09:54:54 +0000
Received: from VE1EUR03FT007.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:209:8d:cafe::d2) by AM6P192CA0101.outlook.office365.com
 (2603:10a6:209:8d::42) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3195.18 via Frontend
 Transport; Tue, 21 Jul 2020 09:54:54 +0000
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org;
 dmarc=bestguesspass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT007.mail.protection.outlook.com (10.152.18.114) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3195.18 via Frontend Transport; Tue, 21 Jul 2020 09:54:53 +0000
Received: ("Tessian outbound 73b502bf693a:v62");
 Tue, 21 Jul 2020 09:54:53 +0000
X-CheckRecipientChecked: true
X-CR-MTA-CID: 05fc36d0b3cbec33
X-CR-MTA-TID: 64aa7808
Received: from 20554f2f5632.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 18601C7B-55A2-4228-BFA9-348C7F463016.1; 
 Tue, 21 Jul 2020 09:54:48 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 20554f2f5632.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Tue, 21 Jul 2020 09:54:48 +0000
Received: from AM6PR08MB3560.eurprd08.prod.outlook.com (2603:10a6:20b:4c::19)
 by AM6PR08MB3654.eurprd08.prod.outlook.com (2603:10a6:20b:4d::14)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3195.18; Tue, 21 Jul
 2020 09:54:47 +0000
Received: from AM6PR08MB3560.eurprd08.prod.outlook.com
 ([fe80::e891:3b33:766:cad5]) by AM6PR08MB3560.eurprd08.prod.outlook.com
 ([fe80::e891:3b33:766:cad5%6]) with mapi id 15.20.3195.026; Tue, 21 Jul 2020
 09:54:47 +0000
From: Rahul Singh <Rahul.Singh@arm.com>
To: Stefano Stabellini <sstabellini@kernel.org>
Subject: Re: RFC: PCI devices passthrough on Arm design proposal
Thread-Topic: RFC: PCI devices passthrough on Arm design proposal
Thread-Index: AQHWW4kYTVU0hTDyYEitKlUuU5vZlKkKf2uAgAACLICAAOrEgIAAVPWAgAABeYCAAAssAIAAAcuAgAAIDQCABUqtgIAAsGEA
Date: Tue, 21 Jul 2020 09:54:47 +0000
Message-ID: <DDBD069B-CB7A-4050-A726-FBB622875B9A@arm.com>
References: <3F6E40FB-79C5-4AE8-81CA-E16CA37BB298@arm.com>
 <BD475825-10F6-4538-8294-931E370A602C@arm.com>
 <E9CBAA57-5EF3-47F9-8A40-F5D7816DB2A4@arm.com>
 <a50c714c-1642-0354-3f19-5a6f7278d8aa@suse.com>
 <28899FEF-9DA7-4513-8283-1AC5EFFC6E92@arm.com>
 <1dd5db2d-98c7-7738-c3d4-d3f098dfe674@suse.com>
 <F09F9354-EC9B-4D76-809B-A25AF4F7D863@arm.com>
 <a5007a6c-bdfe-04d4-8107-53cb222b95e8@suse.com>
 <DA19A9EC-A828-4EBC-BCAA-D1D9E4F222BB@arm.com>
 <alpine.DEB.2.21.2007201509350.32544@sstabellini-ThinkPad-T480s>
In-Reply-To: <alpine.DEB.2.21.2007201509350.32544@sstabellini-ThinkPad-T480s>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
Authentication-Results-Original: kernel.org; dkim=none (message not signed)
 header.d=none;kernel.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [86.26.38.125]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 57a8d093-dfb8-40b9-7a4e-08d82d5c1e0b
x-ms-traffictypediagnostic: AM6PR08MB3654:|DBBPR08MB4712:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS: <DBBPR08MB47120ECB03B30004BDB68F55FC780@DBBPR08MB4712.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:10000;OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
Content-Type: text/plain; charset="iso-8859-1"
Content-ID: <6BDC42D8EB17A64EBF7713A55966300F@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM6PR08MB3654
Original-Authentication-Results: kernel.org; dkim=none (message not signed)
 header.d=none;kernel.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped: VE1EUR03FT007.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs: e57ed6e6-85c8-4c64-1c0c-08d82d5c1a3b
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 21 Jul 2020 09:54:53.6165 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 57a8d093-dfb8-40b9-7a4e-08d82d5c1e0b
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d; Ip=[63.35.35.123];
 Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource: VE1EUR03FT007.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBBPR08MB4712
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: =?iso-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>, Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 nd <nd@arm.com>, Julien Grall <julien.grall.oss@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>



> On 21 Jul 2020, at 12:23 am, Stefano Stabellini <sstabellini@kernel.org> wrote:
>
> On Fri, 17 Jul 2020, Bertrand Marquis wrote:
>>> On 17 Jul 2020, at 16:06, Jan Beulich <jbeulich@suse.com> wrote:
>>>
>>> On 17.07.2020 15:59, Bertrand Marquis wrote:
>>>>
>>>>
>>>>> On 17 Jul 2020, at 15:19, Jan Beulich <jbeulich@suse.com> wrote:
>>>>>
>>>>> On 17.07.2020 15:14, Bertrand Marquis wrote:
>>>>>>> On 17 Jul 2020, at 10:10, Jan Beulich <jbeulich@suse.com> wrote:
>>>>>>> On 16.07.2020 19:10, Rahul Singh wrote:
>>>>>>>> # Emulated PCI device tree node in libxl:
>>>>>>>>
>>>>>>>> Libxl is creating a virtual PCI device tree node in the device tree to enable the guest OS to discover the virtual PCI during guest boot. We introduced the new config option [vpci="pci_ecam"] for guests. When this config option is enabled in a guest configuration, a PCI device tree node will be created in the guest device tree.
>>>>>>>
>>>>>>> I support Stefano's suggestion for this to be an optional thing, i.e.
>>>>>>> there to be no need for it when there are PCI devices assigned to the
>>>>>>> guest anyway. I also wonder about the pci_ prefix here - isn't
>>>>>>> vpci="ecam" as unambiguous?
>>>>>>
>>>>>> This could be a problem as we need to know that this is required for a guest upfront so that PCI devices can be assigned later using xl.
>>>>>
>>>>> I'm afraid I don't understand: When there are no PCI devices that get
>>>>> handed to a guest when it gets created, but it is supposed to be able
>>>>> to have some assigned while already running, then we agree the option
>>>>> is needed (afaict). When PCI devices get handed to the guest while it
>>>>> gets constructed, where's the problem to infer this option from the
>>>>> presence of PCI devices in the guest configuration?
>>>>
>>>> If the user wants to use xl pci-attach to attach a device to a guest at runtime, this guest must have a vPCI bus (even with no devices).
>>>> If we do not have the vpci parameter in the configuration, this use case will not work anymore.
>>>
>>> That's what everyone looks to agree with. Yet why is the parameter needed
>>> when there _are_ PCI devices anyway? That's the "optional" that Stefano
>>> was suggesting, aiui.
>>
>> I agree that in this case the parameter could be optional and only required if no PCI device is assigned directly in the guest configuration.
>
> Great!
>
> Moreover, we might also be able to get rid of the vpci parameter in
> cases where there are no devices assigned at boot time but we still want to
> create a vpci host bridge in the domU anyway. In those cases we could use
> the following:
>
>  pci = [];
>
> otherwise, worse but it might be easier to implement in xl:
>
>  pci = [""];

pci = []; is a great idea to avoid a new config option for creating a device tree node when there is no device assigned. We will check this and update the design spec accordingly.
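
For illustration, a guest configuration fragment along the lines discussed above (a sketch only; the exact syntax depends on the final design) would request an empty vPCI bus so that devices can be hot-plugged later with `xl pci-attach`:

```
# Illustrative xl guest configuration fragment (assumed syntax, per the
# discussion above): an empty pci list creates the vPCI host bridge with
# no devices assigned at boot.
name = "guest0"
pci = []
```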



From xen-devel-bounces@lists.xenproject.org Tue Jul 21 10:02:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jul 2020 10:02:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxp6V-0002CZ-HS; Tue, 21 Jul 2020 10:02:31 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+Pac=BA=canonical.com=colin.king@srs-us1.protection.inumbo.net>)
 id 1jxp6U-0002CU-4L
 for xen-devel@lists.xenproject.org; Tue, 21 Jul 2020 10:02:30 +0000
X-Inumbo-ID: 491d8a46-cb39-11ea-a09c-12813bfff9fa
Received: from youngberry.canonical.com (unknown [91.189.89.112])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 491d8a46-cb39-11ea-a09c-12813bfff9fa;
 Tue, 21 Jul 2020 10:02:29 +0000 (UTC)
Received: from 1.general.cking.uk.vpn ([10.172.193.212] helo=localhost)
 by youngberry.canonical.com with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.86_2)
 (envelope-from <colin.king@canonical.com>)
 id 1jxp6H-0000Kf-Mg; Tue, 21 Jul 2020 10:02:17 +0000
From: Colin King <colin.king@canonical.com>
To: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
 Borislav Petkov <bp@alien8.de>, x86@kernel.org,
 "H . Peter Anvin" <hpa@zytor.com>, xen-devel@lists.xenproject.org
Subject: [PATCH][next] x86/ioperm: initialize pointer bitmap with NULL rather
 than 0
Date: Tue, 21 Jul 2020 11:02:17 +0100
Message-Id: <20200721100217.407975-1-colin.king@canonical.com>
X-Mailer: git-send-email 2.27.0
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: kernel-janitors@vger.kernel.org, linux-kernel@vger.kernel.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Colin Ian King <colin.king@canonical.com>

The pointer bitmap is being initialized with a plain integer 0;
fix this by initializing it with NULL instead.

Cleans up sparse warning:
arch/x86/xen/enlighten_pv.c:876:27: warning: Using plain integer
as NULL pointer

Signed-off-by: Colin Ian King <colin.king@canonical.com>
---
 arch/x86/xen/enlighten_pv.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/xen/enlighten_pv.c b/arch/x86/xen/enlighten_pv.c
index c46b9f2e732f..2aab43a13a8c 100644
--- a/arch/x86/xen/enlighten_pv.c
+++ b/arch/x86/xen/enlighten_pv.c
@@ -873,7 +873,7 @@ static void xen_load_sp0(unsigned long sp0)
 static void xen_invalidate_io_bitmap(void)
 {
 	struct physdev_set_iobitmap iobitmap = {
-		.bitmap = 0,
+		.bitmap = NULL,
 		.nr_ports = 0,
 	};
 
-- 
2.27.0



From xen-devel-bounces@lists.xenproject.org Tue Jul 21 10:23:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jul 2020 10:23:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxpQI-0003uS-Ac; Tue, 21 Jul 2020 10:22:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=JDR4=BA=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1jxpQH-0003uN-Ea
 for xen-devel@lists.xenproject.org; Tue, 21 Jul 2020 10:22:57 +0000
X-Inumbo-ID: 24820268-cb3c-11ea-84fe-bc764e2007e4
Received: from mail-wm1-x32c.google.com (unknown [2a00:1450:4864:20::32c])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 24820268-cb3c-11ea-84fe-bc764e2007e4;
 Tue, 21 Jul 2020 10:22:56 +0000 (UTC)
Received: by mail-wm1-x32c.google.com with SMTP id 184so2345084wmb.0
 for <xen-devel@lists.xenproject.org>; Tue, 21 Jul 2020 03:22:56 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
 :mime-version:content-transfer-encoding:thread-index
 :content-language;
 bh=PtRuMxz8lwN5YHPl8WDFCQqWqb7YEMSMdiP2r+2UXBg=;
 b=oPS90EO/t1oZvfKL1OtgQs1ZOB98f38Q7SwqEnuIIDWROkhXCGThSO0gs/QB7cZJki
 rxIuYOrmF2nf1hiGhK90LG5xV2eQwoThZxgzFNhG8EgVntT/JpAuJd0jZoC6FQUnFipT
 gas/dlyorfeyDGcZSB9FrpAw1iWPNwclkwWMYKJ5W+F8CZomvr5YTRhbORFXSizmXPUD
 /15QFsW5HH4SoAqP4/g6CzkM1kTsxZ/vCXuf1bMc2Fzq7yU3mTqUnUgzGnwVgEQPivqy
 UjLY8BtP57+JpuEPtV6DQ3KzKRdlrclpK8gsV3wlpJzIFfLGl0LP3QQTVTSghV7pZClS
 4cVQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
 :subject:date:message-id:mime-version:content-transfer-encoding
 :thread-index:content-language;
 bh=PtRuMxz8lwN5YHPl8WDFCQqWqb7YEMSMdiP2r+2UXBg=;
 b=dkun6MF9UUMX+s89Cb/Rh6aI20KbKtrNBfBda30rxT4qB1H0bLcNq3llaw/LqTw9W2
 20vIPWPvwL7ACF/2erWtgIRJu2i4iBfFWOv53tDu2znXofmY4cZ0adXAWFX7x6ZVJHgn
 J15ayywYxnSY1xPno2ubKZQ1UcFzu1kBER/z09Xs6MCntJfiSQE6Viwdc+ndQA1T6+v6
 aBK0yjWgdpkutBIfqwJbXmDjXe6yjhabsn+Z6krp15fvvJNmfkqyLxFAO5H13L1RdTqz
 gwwuBvuHNSN+I1si4H/jwhr7wmXqkqIN3NrbsGUw8BJmx4VlX8FiPinUCyXsNmWevAay
 uvsw==
X-Gm-Message-State: AOAM531dCxNsBKBV4J4GoXFqQfJionhcBE/jeXv+vHO/iebTVgyM0eSw
 T6+/1gQt1mBzNAjH7S8l5mo=
X-Google-Smtp-Source: ABdhPJwtyyzWTQz3/bjbnQPssfxQM0UQXhHzAKkPQc0fuPWnFHqHTtrDHfLkn2+kxbTLga5/+8v5lQ==
X-Received: by 2002:a1c:a513:: with SMTP id o19mr3338085wme.119.1595326975550; 
 Tue, 21 Jul 2020 03:22:55 -0700 (PDT)
Received: from CBGR90WXYV0 (54-240-197-233.amazon.com. [54.240.197.233])
 by smtp.gmail.com with ESMTPSA id b23sm3029368wmd.37.2020.07.21.03.22.54
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Tue, 21 Jul 2020 03:22:55 -0700 (PDT)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
To: "'George Dunlap'" <George.Dunlap@citrix.com>
References: <d406ae82e0cdde2dc33a92d2685ffb77bacab7ee.1595289055.git.rosbrookn@ainfosec.com>
 <003901d65f2e$6faab0c0$4f001240$@xen.org>
 <66dc2e79-e899-1d94-c0f2-d834b55cd859@citrix.com>
 <004001d65f40$a0348330$e09d8990$@xen.org>
 <44537ECC-E301-46BD-8B8E-F3B522A18FEC@citrix.com>
In-Reply-To: <44537ECC-E301-46BD-8B8E-F3B522A18FEC@citrix.com>
Subject: RE: [PATCH for-4.14] golang/xenlight: fix code generation for python
 2.6
Date: Tue, 21 Jul 2020 11:22:53 +0100
Message-ID: <004101d65f48$e5acf610$b106e230$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="utf-8"
Content-Transfer-Encoding: quoted-printable
X-Mailer: Microsoft Outlook 16.0
Thread-Index: AQJO5yDcExPNZQbG/t5S46VRX2lP6wJ2S7dHAu7JLWYBiCecnADZLF4vp+KP/nA=
Content-Language: en-gb
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Reply-To: paul@xen.org
Cc: 'Wei Liu' <wl@xen.org>, 'Andrew Cooper' <Andrew.Cooper3@citrix.com>,
 'Nick Rosbrook' <rosbrookn@ainfosec.com>,
 'Nick Rosbrook' <rosbrookn@gmail.com>, xen-devel@lists.xenproject.org,
 'Ian Jackson' <Ian.Jackson@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> -----Original Message-----
> From: George Dunlap <George.Dunlap@citrix.com>
> Sent: 21 July 2020 10:54
> To: paul@xen.org
> Cc: Andrew Cooper <Andrew.Cooper3@citrix.com>; Nick Rosbrook <rosbrookn@gmail.com>; xen-devel@lists.xenproject.org; Nick Rosbrook <rosbrookn@ainfosec.com>; Ian Jackson <Ian.Jackson@citrix.com>; Wei Liu <wl@xen.org>
> Subject: Re: [PATCH for-4.14] golang/xenlight: fix code generation for python 2.6
>
>
>
> > On Jul 21, 2020, at 10:23 AM, Paul Durrant <xadimgnik@gmail.com> wrote:
> >
> >> -----Original Message-----
> >> From: Andrew Cooper <andrew.cooper3@citrix.com>
> >> Sent: 21 July 2020 10:21
> >> To: paul@xen.org; 'Nick Rosbrook' <rosbrookn@gmail.com>; xen-devel@lists.xenproject.org
> >> Cc: 'Nick Rosbrook' <rosbrookn@ainfosec.com>; 'Ian Jackson' <ian.jackson@eu.citrix.com>; 'George Dunlap' <george.dunlap@citrix.com>; 'Wei Liu' <wl@xen.org>
> >> Subject: Re: [PATCH for-4.14] golang/xenlight: fix code generation for python 2.6
> >>
> >> On 21/07/2020 08:13, Paul Durrant wrote:
> >>>> -----Original Message-----
> >>>> From: Nick Rosbrook <rosbrookn@gmail.com>
> >>>> Sent: 21 July 2020 00:55
> >>>> To: xen-devel@lists.xenproject.org
> >>>> Cc: paul@xen.org; Nick Rosbrook <rosbrookn@ainfosec.com>; George Dunlap <george.dunlap@citrix.com>;
> >>>> Ian Jackson <ian.jackson@eu.citrix.com>; Wei Liu <wl@xen.org>
> >>>> Subject: [PATCH for-4.14] golang/xenlight: fix code generation for python 2.6
> >>>>
> >>>> Before python 2.7, str.format() calls required that the format fields
> >>>> were explicitly enumerated, e.g.:
> >>>>
> >>>>  '{0} {1}'.format(foo, bar)
> >>>>
> >>>>  vs.
> >>>>
> >>>>  '{} {}'.format(foo, bar)
> >>>>
> >>>> Currently, gengotypes.py uses the latter pattern everywhere, which means
> >>>> the Go bindings do not build on python 2.6. Use the 2.6 syntax for
> >>>> format() in order to support python 2.6 for now.
> >>>>
> >>>> Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com>
> >>> I'm afraid this is too late for 4.14 now. We are in hard freeze, so only minor docs changes or
> >>> critical bug fixes are being taken at this point.
> >>
> >> This is Reported-by me, and breaking gitlab CI on the master and 4.14
> >> branches (because apparently no one else cares to look at the results...)
> >>
> >> The alternative is to pull support for CentOS 6 from the 4.14 release,
> >> which would best be done by a commit taking out the C6 containers from CI.
> >>
> >
> > At this late stage I'd rather we did that.
>
> We should probably add a release note saying that there's a known intermittent build issue on CentOS 6.
>

Ok, that sounds fine.

  Paul

>  -George
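
The limitation the patch above describes can be sketched in a standalone snippet. On python >= 2.7 both forms produce the same result; on 2.6 the auto-numbered form raises ValueError ("zero length field name in format"), which is exactly what breaks the gengotypes.py build there.

```python
# Sketch of the python 2.6 str.format() limitation (run on python >= 2.7;
# on 2.6 only the explicitly numbered form works).
foo, bar = "hello", "world"

explicit = '{0} {1}'.format(foo, bar)   # accepted by 2.6 and later
auto = '{} {}'.format(foo, bar)         # accepted only by 2.7 and later

assert explicit == auto == "hello world"
```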



From xen-devel-bounces@lists.xenproject.org Tue Jul 21 10:23:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jul 2020 10:23:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxpQW-0003xN-PI; Tue, 21 Jul 2020 10:23:12 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=CrlD=BA=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jxpQV-0003xG-Kw
 for xen-devel@lists.xenproject.org; Tue, 21 Jul 2020 10:23:11 +0000
X-Inumbo-ID: 2d1208ce-cb3c-11ea-a09d-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2d1208ce-cb3c-11ea-a09d-12813bfff9fa;
 Tue, 21 Jul 2020 10:23:10 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id A7BB9AC12;
 Tue, 21 Jul 2020 10:23:16 +0000 (UTC)
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] x86emul: support AVX512_VP2INTERSECT insns
Message-ID: <08083899-7348-63d2-1f28-0932e2295d64@suse.com>
Date: Tue, 21 Jul 2020 12:23:09 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

The standard memory access pattern once again should allow us to go
without a test harness addition beyond the EVEX Disp8-scaling one.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
(SDE: -tgl)

--- a/tools/libxl/libxl_cpuid.c
+++ b/tools/libxl/libxl_cpuid.c
@@ -214,6 +214,7 @@ int libxl_cpuid_parse_config(libxl_cpuid
 
         {"avx512-4vnniw",0x00000007,  0, CPUID_REG_EDX,  2,  1},
         {"avx512-4fmaps",0x00000007,  0, CPUID_REG_EDX,  3,  1},
+        {"avx512-vp2intersect",0x00000007,0,CPUID_REG_EDX,8, 1},
         {"srbds-ctrl",   0x00000007,  0, CPUID_REG_EDX,  9,  1},
         {"md-clear",     0x00000007,  0, CPUID_REG_EDX, 10,  1},
         {"serialize",    0x00000007,  0, CPUID_REG_EDX, 14,  1},
--- a/tools/misc/xen-cpuid.c
+++ b/tools/misc/xen-cpuid.c
@@ -160,7 +160,7 @@ static const char *const str_7d0[32] =
     [ 2] = "avx512_4vnniw", [ 3] = "avx512_4fmaps",
     [ 4] = "fsrm",
 
-    /*  8 */                [ 9] = "srbds-ctrl",
+    [ 8] = "avx512_vp2intersect", [ 9] = "srbds-ctrl",
     [10] = "md-clear",
     /* 12 */                [13] = "tsx-force-abort",
     [14] = "serialize",
--- a/tools/tests/x86_emulator/evex-disp8.c
+++ b/tools/tests/x86_emulator/evex-disp8.c
@@ -593,6 +593,10 @@ static const struct test avx512_vnni_all
     INSN(pdpwssds, 66, 0f38, 53, vl, d, vl),
 };
 
+static const struct test avx512_vp2intersect_all[] = {
+    INSN(p2intersect, f2, 0f38, 68, vl, dq, vl)
+};
+
 static const struct test avx512_vpopcntdq_all[] = {
     INSN(popcnt, 66, 0f38, 55, vl, dq, vl)
 };
@@ -996,6 +1000,7 @@ void evex_disp8_test(void *instr, struct
     RUN(avx512_vbmi, all);
     RUN(avx512_vbmi2, all);
     RUN(avx512_vnni, all);
+    RUN(avx512_vp2intersect, all);
     RUN(avx512_vpopcntdq, all);
 
     if ( cpu_has_avx512f )
--- a/tools/tests/x86_emulator/x86-emulate.h
+++ b/tools/tests/x86_emulator/x86-emulate.h
@@ -168,6 +168,7 @@ static inline bool xcr0_mask(uint64_t ma
 #define cpu_has_movdir64b  cp.feat.movdir64b
 #define cpu_has_avx512_4vnniw (cp.feat.avx512_4vnniw && xcr0_mask(0xe6))
 #define cpu_has_avx512_4fmaps (cp.feat.avx512_4fmaps && xcr0_mask(0xe6))
+#define cpu_has_avx512_vp2intersect (cp.feat.avx512_vp2intersect && xcr0_mask(0xe6))
 #define cpu_has_serialize  cp.feat.serialize
 #define cpu_has_avx512_bf16 (cp.feat.avx512_bf16 && xcr0_mask(0xe6))
 
--- a/xen/arch/x86/x86_emulate/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate/x86_emulate.c
@@ -488,6 +488,7 @@ static const struct ext0f38_table {
     [0x62] = { .simd_size = simd_packed_int, .two_op = 1, .d8s = d8s_bw },
     [0x63] = { .simd_size = simd_packed_int, .to_mem = 1, .two_op = 1, .d8s = d8s_bw },
     [0x64 ... 0x66] = { .simd_size = simd_packed_int, .d8s = d8s_vl },
+    [0x68] = { .simd_size = simd_packed_int, .d8s = d8s_vl },
     [0x70 ... 0x73] = { .simd_size = simd_packed_int, .d8s = d8s_vl },
     [0x75 ... 0x76] = { .simd_size = simd_packed_int, .d8s = d8s_vl },
     [0x77] = { .simd_size = simd_packed_fp, .d8s = d8s_vl },
@@ -2005,6 +2006,7 @@ amd_like(const struct x86_emulate_ctxt *
 #define vcpu_has_enqcmd()      (ctxt->cpuid->feat.enqcmd)
 #define vcpu_has_avx512_4vnniw() (ctxt->cpuid->feat.avx512_4vnniw)
 #define vcpu_has_avx512_4fmaps() (ctxt->cpuid->feat.avx512_4fmaps)
+#define vcpu_has_avx512_vp2intersect() (ctxt->cpuid->feat.avx512_vp2intersect)
 #define vcpu_has_serialize()   (ctxt->cpuid->feat.serialize)
 #define vcpu_has_avx512_bf16() (ctxt->cpuid->feat.avx512_bf16)
 
@@ -9545,6 +9547,12 @@ x86_emulate(
         }
         goto simd_zmm;
 
+    case X86EMUL_OPC_EVEX_F2(0x0f38, 0x68): /* vp2intersect{d,q} [xyz]mm/mem,[xyz]mm,k+1 */
+        host_and_vcpu_must_have(avx512_vp2intersect);
+        generate_exception_if(evex.opmsk || !evex.r || !evex.R, EXC_UD);
+        op_bytes = 16 << evex.lr;
+        goto avx512f_no_sae;
+
     case X86EMUL_OPC_EVEX_66(0x0f38, 0x70): /* vpshldvw [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
     case X86EMUL_OPC_EVEX_66(0x0f38, 0x72): /* vpshrdvw [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
         generate_exception_if(!evex.w, EXC_UD);
--- a/xen/include/asm-x86/cpufeature.h
+++ b/xen/include/asm-x86/cpufeature.h
@@ -128,6 +128,7 @@
 /* CPUID level 0x00000007:0.edx */
 #define cpu_has_avx512_4vnniw   boot_cpu_has(X86_FEATURE_AVX512_4VNNIW)
 #define cpu_has_avx512_4fmaps   boot_cpu_has(X86_FEATURE_AVX512_4FMAPS)
+#define cpu_has_avx512_vp2intersect boot_cpu_has(X86_FEATURE_AVX512_VP2INTERSECT)
 #define cpu_has_tsx_force_abort boot_cpu_has(X86_FEATURE_TSX_FORCE_ABORT)
 #define cpu_has_serialize       boot_cpu_has(X86_FEATURE_SERIALIZE)
 
--- a/xen/include/public/arch-x86/cpufeatureset.h
+++ b/xen/include/public/arch-x86/cpufeatureset.h
@@ -259,6 +259,7 @@ XEN_CPUFEATURE(SSB_NO,        8*32+26) /
 /* Intel-defined CPU features, CPUID level 0x00000007:0.edx, word 9 */
 XEN_CPUFEATURE(AVX512_4VNNIW, 9*32+ 2) /*A  AVX512 Neural Network Instructions */
 XEN_CPUFEATURE(AVX512_4FMAPS, 9*32+ 3) /*A  AVX512 Multiply Accumulation Single Precision */
+XEN_CPUFEATURE(AVX512_VP2INTERSECT, 9*32+8) /*a  VP2INTERSECT{D,Q} insns */
 XEN_CPUFEATURE(SRBDS_CTRL,    9*32+ 9) /*   MSR_MCU_OPT_CTRL and RNGDS_MITG_DIS. */
 XEN_CPUFEATURE(MD_CLEAR,      9*32+10) /*A  VERW clears microarchitectural buffers */
 XEN_CPUFEATURE(TSX_FORCE_ABORT, 9*32+13) /* MSR_TSX_FORCE_ABORT.RTM_ABORT */
--- a/xen/tools/gen-cpuid.py
+++ b/xen/tools/gen-cpuid.py
@@ -260,7 +260,7 @@ def crunch_numbers(state):
         # AVX512 features are built on top of AVX512F
         AVX512F: [AVX512DQ, AVX512_IFMA, AVX512PF, AVX512ER, AVX512CD,
                   AVX512BW, AVX512VL, AVX512_4VNNIW, AVX512_4FMAPS,
-                  AVX512_VNNI, AVX512_VPOPCNTDQ],
+                  AVX512_VNNI, AVX512_VPOPCNTDQ, AVX512_VP2INTERSECT],
 
         # AVX512 extensions acting on vectors of bytes/words are made
         # dependents of AVX512BW (as to requiring wider than 16-bit mask


From xen-devel-bounces@lists.xenproject.org Tue Jul 21 10:29:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jul 2020 10:29:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxpWL-0004De-HV; Tue, 21 Jul 2020 10:29:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UXjz=BA=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1jxpWK-0004DZ-AG
 for xen-devel@lists.xenproject.org; Tue, 21 Jul 2020 10:29:12 +0000
X-Inumbo-ID: 04042376-cb3d-11ea-84fe-bc764e2007e4
Received: from mail-wm1-f67.google.com (unknown [209.85.128.67])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 04042376-cb3d-11ea-84fe-bc764e2007e4;
 Tue, 21 Jul 2020 10:29:11 +0000 (UTC)
Received: by mail-wm1-f67.google.com with SMTP id 184so2362798wmb.0
 for <xen-devel@lists.xenproject.org>; Tue, 21 Jul 2020 03:29:11 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:date:from:to:cc:subject:message-id:references
 :mime-version:content-disposition:content-transfer-encoding
 :in-reply-to:user-agent;
 bh=H3X2ZwtUY79YQvzuxeH6DYPwazZR85JEKqa9Q2HMgH8=;
 b=BnDblM4MUR9QdgJZYKD49dzQ8UzAYj4lbW5VsMrW7e21oiEMWZGY2xc95c5CB1JuOs
 ajwb4IjT8t4W8d2+JVEkrp8BdGQioGcebFVRPY7FubEhI28Q8wf7+UVhREqXT5NTgUz8
 DGOkfgSCCxwQUKTUtJWdFntpwntSHD1IX24QXs8P4IHIL7h3CJsK4MnEJ0PjOKOx8qJ0
 UU+BeSJSctONB/hrjS6hbTPvsmLk6mi5DxZjdFSGZoXi4EOac/CbkM1VW4q0O5G8Uwor
 kZ544l6YKkj7ZZGOMz1OdNzjmbkspXqc1ezctr8ju67Kd6W/pkqiIgzwcV+gii25hXQS
 p58w==
X-Gm-Message-State: AOAM5323hwrj0LZVPOkr2qzH6RrKhO1ywcOg0uzeg+gSArN2AY+J9APM
 vaXrPTAsRWuLFZ5OalHJNsI=
X-Google-Smtp-Source: ABdhPJxA2c5PvSEcxR26leW5NWXedE+r+FeCC9HFuqAdMNZD5TXA2KLhQOSBYnfLZuA75oi1bvnLIQ==
X-Received: by 2002:a7b:c246:: with SMTP id b6mr3628079wmj.161.1595327350615; 
 Tue, 21 Jul 2020 03:29:10 -0700 (PDT)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id c7sm38230408wrq.58.2020.07.21.03.29.09
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 21 Jul 2020 03:29:10 -0700 (PDT)
Date: Tue, 21 Jul 2020 10:29:08 +0000
From: Wei Liu <wl@xen.org>
To: Christian Lindig <christian.lindig@citrix.com>
Subject: Re: [PATCH v1 1/1] oxenstored: fix ABI breakage introduced in Xen
 4.9.0
Message-ID: <20200721102908.ci5wcg2ylurxoyd5@liuwe-devbox-debian-v2>
References: <cover.1594825512.git.edvin.torok@citrix.com>
 <6fcfdb706cc2f666069c1d0bbc59d22f660fc81d.1594825512.git.edvin.torok@citrix.com>
 <1594826510774.33560@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <1594826510774.33560@citrix.com>
User-Agent: NeoMutt/20180716
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Igor Druzhinin <igor.druzhinin@citrix.com>, Wei Liu <wl@xen.org>,
 David Scott <dave@recoil.org>, Edwin Torok <edvin.torok@citrix.com>,
 Ian Jackson <Ian.Jackson@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Wed, Jul 15, 2020 at 03:21:50PM +0000, Christian Lindig wrote:
> 
> ________________________________________
> From: Edwin Török <edvin.torok@citrix.com>
> Sent: 15 July 2020 16:10
> To: xen-devel@lists.xenproject.org
> Cc: Edwin Torok; Christian Lindig; David Scott; Ian Jackson; Wei Liu; Igor Druzhinin
> Subject: [PATCH v1 1/1] oxenstored: fix ABI breakage introduced in Xen 4.9.0
> 
> dbc84d2983969bb47d294131ed9e6bbbdc2aec49 (Xen >= 4.9.0) deleted XS_RESTRICT
> from oxenstored, which caused all the following opcodes to be shifted by 1:
> reset_watches became off-by-one compared to the C version of xenstored.
> 

I guess this needs

Backport: 4.9+

(Ian FYI)

> Looking at the C code the opcode for reset watches needs:
> XS_RESET_WATCHES = XS_SET_TARGET + 2
> 
> So add the placeholder `Invalid` in the OCaml<->C mapping list.
> (Note that the code here doesn't simply convert the OCaml constructor to
>  an integer, so we don't need to introduce a dummy constructor).
> 
> Igor says that with a suitably patched xenopsd to enable watch reset,
> we now see `reset watches` during kdump of a guest in xenstored-access.log.
> 
> Signed-off-by: Edwin Török <edvin.torok@citrix.com>
> Tested-by: Igor Druzhinin <igor.druzhinin@citrix.com>
> ---
>  tools/ocaml/libs/xb/op.ml | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/tools/ocaml/libs/xb/op.ml b/tools/ocaml/libs/xb/op.ml
> index d4f1f08185..9bcab0f38c 100644
> --- a/tools/ocaml/libs/xb/op.ml
> +++ b/tools/ocaml/libs/xb/op.ml
> @@ -28,7 +28,7 @@ let operation_c_mapping =
>             Transaction_end; Introduce; Release;
>             Getdomainpath; Write; Mkdir; Rm;
>             Setperms; Watchevent; Error; Isintroduced;
> -           Resume; Set_target; Reset_watches |]
> +           Resume; Set_target; Invalid; Reset_watches |]
>  let size = Array.length operation_c_mapping
> 
>  let array_search el a =
> --
> 2.25.1
> 
> -- 
> Acked-by: Christian Lindig <christian.lindig@citrix.com>
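
As a hedged illustration of the off-by-one the patch fixes (plain Python rather than the OCaml source): the C xenstored numbers its opcodes so that XS_RESET_WATCHES sits two slots after XS_SET_TARGET, with an unused slot in between. Dropping XS_RESTRICT from the OCaml mapping array shifted everything after it by one, which the `Invalid` placeholder restores.

```python
# Positional opcode mapping, modeled as lists (names follow the OCaml
# constructors; only the tail of the real mapping is shown).
broken = ["Resume", "Set_target", "Reset_watches"]            # pre-patch
fixed = ["Resume", "Set_target", "Invalid", "Reset_watches"]  # post-patch

set_target = fixed.index("Set_target")
# Post-patch layout matches the C side: XS_RESET_WATCHES = XS_SET_TARGET + 2.
assert fixed.index("Reset_watches") == set_target + 2
# Pre-patch layout was off by one relative to C xenstored.
assert broken.index("Reset_watches") == set_target + 1
```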


From xen-devel-bounces@lists.xenproject.org Tue Jul 21 10:30:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jul 2020 10:30:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxpXu-0004xG-Uh; Tue, 21 Jul 2020 10:30:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tByU=BA=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jxpXt-0004x6-Mc
 for xen-devel@lists.xenproject.org; Tue, 21 Jul 2020 10:30:49 +0000
X-Inumbo-ID: 3e03a434-cb3d-11ea-84fe-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3e03a434-cb3d-11ea-84fe-bc764e2007e4;
 Tue, 21 Jul 2020 10:30:48 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=HcMfLZI+tO4dRQl/+8C/JoTqB4Ca1TBYEVhQTtrsvps=; b=mUV+i5PTYoqLcUA1vl3I5XnCs
 zZNGpfMNvl8BW8AjjijTMpDP/L1Em6bTEafAgW7/rUO1JNhxnsD5ithVIhDkpfTiK7BRgpZ7tblBN
 I8DUg18P+kuwZz4MA5cn4C+q5S976M75itdI6bvlQDdmCFrI/xSGzFpDM/upSQUbnXXIU=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jxpXr-0003DP-No; Tue, 21 Jul 2020 10:30:47 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jxpXr-0005GR-Fj; Tue, 21 Jul 2020 10:30:47 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jxpXr-0001Az-FI; Tue, 21 Jul 2020 10:30:47 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-152064-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 152064: regressions - FAIL
X-Osstest-Failures: libvirt:build-amd64-libvirt:libvirt-build:fail:regression
 libvirt:build-i386-libvirt:libvirt-build:fail:regression
 libvirt:build-arm64-libvirt:libvirt-build:fail:regression
 libvirt:build-armhf-libvirt:libvirt-build:fail:regression
 libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
 libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
X-Osstest-Versions-This: libvirt=bb4e0596d911d8c516e55529f869a257c2178328
X-Osstest-Versions-That: libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 21 Jul 2020 10:30:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 152064 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/152064/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              bb4e0596d911d8c516e55529f869a257c2178328
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z   11 days
Failing since        151818  2020-07-11 04:18:52 Z   10 days   11 attempts
Testing same since   152064  2020-07-21 04:22:43 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrea Bolognani <abologna@redhat.com>
  Bastien Orivel <bastien.orivel@diateam.net>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Jin Yan <jinyan12@huawei.com>
  Laine Stump <laine@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Ryan Schmidt <git@ryandesign.com>
  Stefan Berger <stefanb@linux.ibm.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1937 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Jul 21 10:33:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jul 2020 10:33:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxpaN-00056r-DR; Tue, 21 Jul 2020 10:33:23 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=bQ5W=BA=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jxpaM-00056m-2z
 for xen-devel@lists.xenproject.org; Tue, 21 Jul 2020 10:33:22 +0000
X-Inumbo-ID: 98303ea4-cb3d-11ea-a09d-12813bfff9fa
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 98303ea4-cb3d-11ea-a09d-12813bfff9fa;
 Tue, 21 Jul 2020 10:33:20 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595327600;
 h=subject:to:cc:references:from:message-id:date:
 mime-version:in-reply-to:content-transfer-encoding;
 bh=QCzr/dm7szt2McrHFnYkDUuaTcGnxpCxhc8NWR5qIF8=;
 b=YTeC5gb0CRFrL5cvjpTpm2BpB6QxcBgs17pRiCmD1Vxr/eGBoIsISBdk
 16g5/RY9E7vuVFX8GbUyue7+030Gu4wG4clBNlUyvlHJ0OYe5noKzaRcU
 4a6oqVUofIBtC6KE7w3kAtnUWWE/Km1wHie01HXwOnSdOiA28ILVPfUty w=;
Authentication-Results: esa1.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: sRNp8pp8E0NRvLaym25bW5MavIq1VeggUU5eatcRvv+yzBzArIOoM/ylorEPqrCcY2ydlFl5vx
 MIXK36TrcuCk8FnZT6n/CBHartvr7isb2i1oF8x2SvBGXyMpsUwVNmK/5+s949nYZTUAbtjshT
 zO7YHWJYDcvrKOW9mDqaNAZLq9WpePNjqGpf++oylKqqpYcvKHVi+gsLBSrHK7hjUCwg3scLLo
 M6zhKIz560yoV3cz2RWJgBury+G+bFPvD6Zv35nieBstdAy9fE2ebXSZb1/146XthRvYNdjBHN
 1Ng=
X-SBRS: 2.7
X-MesageID: 23160836
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,378,1589256000"; d="scan'208";a="23160836"
Subject: Re: [PATCH] x86emul: support AVX512_VP2INTERSECT insns
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
 <xen-devel@lists.xenproject.org>
References: <08083899-7348-63d2-1f28-0932e2295d64@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <120bdf92-15b6-3616-5cdb-75b9c38155d4@citrix.com>
Date: Tue, 21 Jul 2020 11:32:59 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <08083899-7348-63d2-1f28-0932e2295d64@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 21/07/2020 11:23, Jan Beulich wrote:
> --- a/tools/misc/xen-cpuid.c
> +++ b/tools/misc/xen-cpuid.c
> @@ -160,7 +160,7 @@ static const char *const str_7d0[32] =
>      [ 2] = "avx512_4vnniw", [ 3] = "avx512_4fmaps",
>      [ 4] = "fsrm",
>  
> -    /*  8 */                [ 9] = "srbds-ctrl",
> +    [ 8] = "avx512_vp2intersect", [ 9] = "srbds-ctrl",
>      [10] = "md-clear",
>      /* 12 */                [13] = "tsx-force-abort",
>      [14] = "serialize",

Are we using underscores or dashes?  I realise it is already
inconsistent, but this is a debugging tool only, and we can change our
minds.

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue Jul 21 10:37:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jul 2020 10:37:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxpdy-0005FQ-12; Tue, 21 Jul 2020 10:37:06 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UXjz=BA=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1jxpdw-0005Eu-BG
 for xen-devel@lists.xenproject.org; Tue, 21 Jul 2020 10:37:04 +0000
X-Inumbo-ID: 1d5778d6-cb3e-11ea-84fe-bc764e2007e4
Received: from mail-wr1-f67.google.com (unknown [209.85.221.67])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1d5778d6-cb3e-11ea-84fe-bc764e2007e4;
 Tue, 21 Jul 2020 10:37:03 +0000 (UTC)
Received: by mail-wr1-f67.google.com with SMTP id z15so20662278wrl.8
 for <xen-devel@lists.xenproject.org>; Tue, 21 Jul 2020 03:37:03 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:date:from:to:cc:subject:message-id:references
 :mime-version:content-disposition:in-reply-to:user-agent;
 bh=65aF1t5dIfPi4ml0lc9y3/i3TqsvxnPdI2i3ZT7PdDs=;
 b=IGQEAixVVYxYA9LfxNP5++SxsVrIRI1+Yoddw/gGx8Mo2Q2F6DFNU2/iAJg4feOZMx
 m9KouV8abTUolFn3qoh21nb/Swc0uSNOKZkYs61+rmGkrtW3GgWuVLQ/jOkSU3iHTexx
 HkaiJzVKI2q8nxokoynzLc0zBr1qPqWISJp8dE4ekOuX835/7V0azgRnL9KR6qbfEFPX
 yuV/BvWTve7onN7uh+r0c7rpNHesmmoTAwM5Ho1/aBoMdGT8JcWSLhFfKnO6QEJkOh1Z
 gtAZPhjQ4fpvjVFGpIrA37UuPsh5GXHuXm6duq6hpJ9qfMzI7Y8FRHz7O4ijXznTg2kX
 pq6g==
X-Gm-Message-State: AOAM531JhCA7qfIFBY5QCQleCt+Df3mTrX6R/vXDkMULVa7ZECD1IwgK
 vuuB6fclX+CVztvo74nfMsE=
X-Google-Smtp-Source: ABdhPJzTwhn0r2E7vhybGbyFemuG8JNOsgtqqI/fvD6hfrYhzjHhLSxO7uyLOpPXqVBhMtbrKmpPcg==
X-Received: by 2002:adf:9e8b:: with SMTP id a11mr3683820wrf.309.1595327822588; 
 Tue, 21 Jul 2020 03:37:02 -0700 (PDT)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id c7sm38259204wrq.58.2020.07.21.03.37.01
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 21 Jul 2020 03:37:02 -0700 (PDT)
Date: Tue, 21 Jul 2020 10:37:00 +0000
From: Wei Liu <wl@xen.org>
To: paul@xen.org
Subject: Re: [PATCH for-4.14] golang/xenlight: fix code generation for python
 2.6
Message-ID: <20200721103700.xmgtcyszlux4vahf@liuwe-devbox-debian-v2>
References: <d406ae82e0cdde2dc33a92d2685ffb77bacab7ee.1595289055.git.rosbrookn@ainfosec.com>
 <003901d65f2e$6faab0c0$4f001240$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <003901d65f2e$6faab0c0$4f001240$@xen.org>
User-Agent: NeoMutt/20180716
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: 'Wei Liu' <wl@xen.org>, 'Ian Jackson' <ian.jackson@eu.citrix.com>,
 'George Dunlap' <george.dunlap@citrix.com>,
 'Nick Rosbrook' <rosbrookn@ainfosec.com>,
 'Nick Rosbrook' <rosbrookn@gmail.com>, xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Tue, Jul 21, 2020 at 08:13:28AM +0100, Paul Durrant wrote:
> > -----Original Message-----
> > From: Nick Rosbrook <rosbrookn@gmail.com>
> > Sent: 21 July 2020 00:55
> > To: xen-devel@lists.xenproject.org
> > Cc: paul@xen.org; Nick Rosbrook <rosbrookn@ainfosec.com>; George Dunlap <george.dunlap@citrix.com>;
> > Ian Jackson <ian.jackson@eu.citrix.com>; Wei Liu <wl@xen.org>
> > Subject: [PATCH for-4.14] golang/xenlight: fix code generation for python 2.6
> > 
> > Before python 2.7, str.format() calls required that the format fields
> > were explicitly enumerated, e.g.:
> > 
> >   '{0} {1}'.format(foo, bar)
> > 
> >   vs.
> > 
> >   '{} {}'.format(foo, bar)
> > 
> > Currently, gengotypes.py uses the latter pattern everywhere, which means
> > the Go bindings do not build on python 2.6. Use the 2.6 syntax for
> > format() in order to support python 2.6 for now.
> > 
> > Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com>
> 
> I'm afraid this is too late for 4.14 now. We are in hard freeze, so only minor docs changes or critical bug fixes are being taken at
> this point.


I will apply this to staging. This can be backported later if we care
about CentOS 6 (EOL Nov this year) or any other old distros that still
run python 2.6.

FAOD Nick is one of the maintainers of the golang code, so I think his SoB is
good enough for this simple patch. It looks obviously correct to me.

Wei.


From xen-devel-bounces@lists.xenproject.org Tue Jul 21 10:38:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jul 2020 10:38:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxpfV-0005NP-IK; Tue, 21 Jul 2020 10:38:41 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UXjz=BA=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1jxpfU-0005NJ-Ad
 for xen-devel@lists.xenproject.org; Tue, 21 Jul 2020 10:38:40 +0000
X-Inumbo-ID: 56b014ee-cb3e-11ea-84fe-bc764e2007e4
Received: from mail-wm1-f66.google.com (unknown [209.85.128.66])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 56b014ee-cb3e-11ea-84fe-bc764e2007e4;
 Tue, 21 Jul 2020 10:38:39 +0000 (UTC)
Received: by mail-wm1-f66.google.com with SMTP id q15so2317465wmj.2
 for <xen-devel@lists.xenproject.org>; Tue, 21 Jul 2020 03:38:39 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:date:from:to:cc:subject:message-id:references
 :mime-version:content-disposition:content-transfer-encoding
 :in-reply-to:user-agent;
 bh=eixWLu67VU8LUPqETPCzqjgNnJL3BeZahrEuSfq3FmU=;
 b=GoySYz8zStWZ3l62G8wY3LD3N83fXRwBPLQuG7zODDV7TviHXydO7aLEAYntBIUDeL
 VasJWTw96iDBwInHKLYWItd7bIH/meu0ur2q+Ol5BW6DP/72RHAarv1eOeONxXuKYhsY
 vi04/R6HOA/o/9dE/qCz7b2huQgTd8MSrJigh70XMJ5OGqzDPl0mkMddO2Gn9QnN2ABm
 J3lsT0UrXz6iaL1IF/cLiIzz1XXmurvbD8bsyhfCDKP9QE/lXRZYOthOBl5cEa+SEt9D
 jHBzuIwWhku3LCoBAfagSGPOWmUBZyVn0JY/um1gnHlmQyvEtEC2U20hDOazaDLrGCr9
 0iHw==
X-Gm-Message-State: AOAM531Qy5O5uTKKjo+Df5j+ASSv9ZpM/td+k45JzEfWj61+fuq8KoJR
 djp99nYvNnNhkiFEFGYX1yE=
X-Google-Smtp-Source: ABdhPJxiFTII/CEwGTG5E+eqNo78WE3VAW1b0Gbu84hSLPspMcYZYPBjHvgSZ0UvzcuHxiPiKcP67g==
X-Received: by 2002:a7b:ce97:: with SMTP id q23mr3492831wmj.89.1595327918759; 
 Tue, 21 Jul 2020 03:38:38 -0700 (PDT)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id x9sm2890366wmk.45.2020.07.21.03.38.38
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 21 Jul 2020 03:38:38 -0700 (PDT)
Date: Tue, 21 Jul 2020 10:38:36 +0000
From: Wei Liu <wl@xen.org>
To: Jan Beulich <jbeulich@suse.com>
Subject: Re: [PATCH] compat: add a little bit of description to xlat.lst
Message-ID: <20200721103836.ogdcvbcuknuxcf32@liuwe-devbox-debian-v2>
References: <d7d95acc-11b0-278b-373e-0115cfa99b51@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <d7d95acc-11b0-278b-373e-0115cfa99b51@suse.com>
User-Agent: NeoMutt/20180716
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, George Dunlap <George.Dunlap@eu.citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Thu, Jul 16, 2020 at 02:21:33PM +0200, Jan Beulich wrote:
> Requested-by: Roger Pau Monné <roger.pau@citrix.com>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Wei Liu <wl@xen.org>

This is much appreciated.

> 
> --- a/xen/include/xlat.lst
> +++ b/xen/include/xlat.lst
> @@ -1,3 +1,22 @@
> +# There are a few fundamentally different strategies for handling compat
> +# (sub-)hypercalls:
> +#
> +# 1) Wrap a translation layer around the native hypercall. Structures involved
> +# in this model should use translation (xlat) macros generated by adding
> +# !-prefixed lines here.
> +#
> +# 2) Compile the entire hypercall function a second time, arranging for the
> +# compat structures to get used in place of the native ones. There are no xlat
> +# macros involved here, all that's needed are correctly translated structures.
> +#
> +# 3) Adhoc translation, which may or may not involve adding entries here.
> +#
> +# 4) Any mixture of the above.
> +#
> +# In all models any structures re-used in their native form should have
> +# ?-mark prefixed lines added here, with the resulting checking macros invoked
> +# somewhere in the code handling the hypercall or its translation.
> +#
>  # First column indicator:
>  # ! - needs translation
>  # ? - needs checking


From xen-devel-bounces@lists.xenproject.org Tue Jul 21 10:47:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jul 2020 10:47:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxpoG-0006Fo-GQ; Tue, 21 Jul 2020 10:47:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=CrlD=BA=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jxpoE-0006FU-VR
 for xen-devel@lists.xenproject.org; Tue, 21 Jul 2020 10:47:42 +0000
X-Inumbo-ID: 9a28d8c2-cb3f-11ea-84fe-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9a28d8c2-cb3f-11ea-84fe-bc764e2007e4;
 Tue, 21 Jul 2020 10:47:42 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 3A532AB3D;
 Tue, 21 Jul 2020 10:47:48 +0000 (UTC)
Subject: Re: [PATCH] x86emul: support AVX512_VP2INTERSECT insns
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <08083899-7348-63d2-1f28-0932e2295d64@suse.com>
 <120bdf92-15b6-3616-5cdb-75b9c38155d4@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <98c74f0a-86d0-41d2-2aa3-f6b2c3e5ed68@suse.com>
Date: Tue, 21 Jul 2020 12:47:40 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <120bdf92-15b6-3616-5cdb-75b9c38155d4@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 21.07.2020 12:32, Andrew Cooper wrote:
> On 21/07/2020 11:23, Jan Beulich wrote:
>> --- a/tools/misc/xen-cpuid.c
>> +++ b/tools/misc/xen-cpuid.c
>> @@ -160,7 +160,7 @@ static const char *const str_7d0[32] =
>>      [ 2] = "avx512_4vnniw", [ 3] = "avx512_4fmaps",
>>      [ 4] = "fsrm",
>>  
>> -    /*  8 */                [ 9] = "srbds-ctrl",
>> +    [ 8] = "avx512_vp2intersect", [ 9] = "srbds-ctrl",
>>      [10] = "md-clear",
>>      /* 12 */                [13] = "tsx-force-abort",
>>      [14] = "serialize",
> 
> Are we using underscores or dashes?  I realise it is already
> inconsistent, but this is a debugging tool only, and we can change our
> minds.

I've switched this one to use a dash. Want me to also switch others
(in a separate patch)?

Jan


From xen-devel-bounces@lists.xenproject.org Tue Jul 21 10:47:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jul 2020 10:47:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxpoR-0006GU-R9; Tue, 21 Jul 2020 10:47:55 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UXjz=BA=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1jxpoQ-0006GD-73
 for xen-devel@lists.xenproject.org; Tue, 21 Jul 2020 10:47:54 +0000
X-Inumbo-ID: a08c6e86-cb3f-11ea-a09d-12813bfff9fa
Received: from mail-wm1-f65.google.com (unknown [209.85.128.65])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a08c6e86-cb3f-11ea-a09d-12813bfff9fa;
 Tue, 21 Jul 2020 10:47:53 +0000 (UTC)
Received: by mail-wm1-f65.google.com with SMTP id j18so2345470wmi.3
 for <xen-devel@lists.xenproject.org>; Tue, 21 Jul 2020 03:47:53 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:date:from:to:cc:subject:message-id:references
 :mime-version:content-disposition:in-reply-to:user-agent;
 bh=0HXvjZ8pm71TLoeB9M4HW/CVr4atdMl4DEn66RZi/Dc=;
 b=hH4oBpkG5IJGoLTOG8q+qJm0rnG6lwHdlqyiGoay7t93qudUVETH3ghmahQje1SH9q
 Xb5g89UJoZzDrdNNpaJ9yPlWWpoIPxvAAhm97x5ytm9YiVpx02HvEnSqAW0ZRcNUnHQ1
 jLxVCnOIjWMvqcMmbh3uUttq8BFo26f3LaBgtD+T04+Q4zQrhudqFXNbX23MVPu4iUja
 jZn2aWH7lOzo3hgWqdjgII3iSJpYrWlwKr5opKUvIXCY7OB/04LMyTrd9lJ8Ftaem9v2
 vWJTF68YvWl2L0wMLnjbiiF8RTKuJoALeFrKcZVCIHz4NNGt7OCbyfR+q4bBnMG8AKII
 gRbw==
X-Gm-Message-State: AOAM530diZrOX3uRKoQRKIxWmXv17lHES5hlGagnQ56BXWVeY+8LFOdC
 vffPosR0rh7Zm50LYiTcS5Y=
X-Google-Smtp-Source: ABdhPJxeNk9Wuj1AFGQlP/TXlc5pnEchhfH9z+29B+8E7qsm2xT1csN8v7U3DhKRv32HWNh1fyBJIw==
X-Received: by 2002:a7b:c154:: with SMTP id z20mr3711386wmi.118.1595328472143; 
 Tue, 21 Jul 2020 03:47:52 -0700 (PDT)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id t13sm13388492wru.65.2020.07.21.03.47.51
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 21 Jul 2020 03:47:51 -0700 (PDT)
Date: Tue, 21 Jul 2020 10:47:50 +0000
From: Wei Liu <wl@xen.org>
To: Julien Grall <julien@xen.org>
Subject: Re: Kexec and libxenctrl.so
Message-ID: <20200721104750.l4zsqlzq4vsee7yv@liuwe-devbox-debian-v2>
References: <7a88218d-981e-6583-15a5-3fcaffb05294@amazon.com>
 <20200626110812.hxeoomagamkdceu7@liuwe-devbox-debian-v2>
 <aa5ad259-5848-e8c4-61e8-6649bb65ece5@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <aa5ad259-5848-e8c4-61e8-6649bb65ece5@xen.org>
User-Agent: NeoMutt/20180716
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Wei Liu <wl@xen.org>, "paul@xen.org" <paul@xen.org>,
 "andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>,
 Julien Grall <jgrall@amazon.com>,
 "ian.jackson@eu.citrix.com" <ian.jackson@eu.citrix.com>,
 Anthony Perard <anthony.perard@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 daniel.kiper@oracle.com
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Thu, Jul 02, 2020 at 06:34:48PM +0100, Julien Grall wrote:
> Hi Wei,
> 
> On 26/06/2020 12:08, Wei Liu wrote:
> > On Thu, Jun 11, 2020 at 03:57:37PM +0100, Julien Grall wrote:
> > > Hi all,
> > > 
> > > kexec-tools has an option to dynamically load libxenctrl.so (not .so.4.x)
> > > (see [1]).
> > > 
> > > Given that the library has never been considered stable, it is probably a
> > > disaster waiting to happen.
> > > 
> > > Looking at the tree, kexec uses the following libxc functions:
> > >     - xc_kexec_get_range()
> > >     - xc_kexec_load()
> > >     - xc_kexec_unload()
> > >     - xc_kexec_status()
> > >     - xc_kexec_exec()
> > >     - xc_version()
> > >     - xc_interface_open()
> > >     - xc_interface_close()
> > >     - xc_get_max_cpus()
> > >     - xc_get_machine_memory_map()
> > > 
> > > I think it is uncontroversial that we want a new stable library for all the
> > > xc_kexec_* functions (maybe libxenexec)?
> > 
> > That sounds fine to me.
> > 
> > Looking at the list of functions, all the xc_kexec_* ones are probably
> > already rather stable.
> 
> That's my understanding as well.
> 
> Although, we may want to rethink some of the hypercalls (such as
> KEXEC_cmd_kexec_get_range) in the future, as they have different layouts
> between 32-bit and 64-bit. Thankfully this wasn't exposed outside of libxc,
> so it shouldn't be an issue to have a stable library.
> 

Oh, that's good to hear.

> > 
> > For xc_interface_open / close, they are perhaps used only to obtain an
> > xc handle such that it can be used to make hypercalls. Your new kexec
> > library is going to expose its own handle with a xencall handle wrapped
> > inside, so you can do away with an xc handle.
> 
> I already have a PoC for the new library. I had to tweak the list of
> helpers a bit, as some use hypercall arguments directly. Below is the
> proposed interface:
> 
> /* Callers who don't care don't need to #include <xentoollog.h> */
> struct xentoollog_logger;
> 
> typedef struct xenkexec_handle xenkexec_handle;
> 
> typedef struct xenkexec_segments xenkexec_segments;
> 
> xenkexec_handle *xenkexec_open(struct xentoollog_logger *logger,
>                                unsigned int open_flags);
> int xenkexec_close(xenkexec_handle *khdl);
> 
> int xenkexec_exec(xenkexec_handle *khdl, int type);
> int xenkexec_get_range(xenkexec_handle *khdl, int range, int nr,
>                        uint64_t *size, uint64_t *start);
> int xenkexec_load(xenkexec_handle *khdl, uint8_t type, uint16_t arch,
>                   uint64_t entry_maddr, uint32_t nr_segments,
>                   xenkexec_segments *segments);
> int xenkexec_unload(xenkexec_handle *khdl, int type);
> int xenkexec_status(xenkexec_handle *khdl, int type);
> 
> xenkexec_segments *xenkexec_allocate_segments(xenkexec_handle *khdl,
>                                               unsigned int nr);
> void xenkexec_free_segments(xenkexec_handle *khdl, xenkexec_segments *segs);
> 
> int xenkexec_update_segment(xenkexec_handle *khdl, xenkexec_segments *segs,
>                             unsigned int idx, void *buffer, size_t buffer_size,
>                             uint64_t dest_maddr, size_t dest_size);
> 

You definitely have more experience in kexec than I do. This list looks
sensible.

> 
> > 
> > > 
> > > However I am not entirely sure where to put the others.
> > > 
> > > I am thinking of introducing libxensysctl for xc_get_max_cpus(), as it
> > > is a XEN_SYSCTL. We could possibly include xc_get_machine_memory_map()
> > > (despite it being a XENMEM_ operation).
> > > 
> > 
> > Introducing a libxensysctl before we stabilise the sysctl interface seems
> > wrong to me. We can bury the call inside libxenkexec itself for the time
> > being.
> 
> That would work for me.
> 
> > 
> > > For xc_version(), I am thinking of extending libxentoolcore to also include
> > > "stable xen API".
> > > 
> > 
> > If you can do without an xc handle, do you still need to call
> > xc_version?
> 
> Looking at kexec, xc_version() is used by crashdump to determine which
> architecture is used by Xen (in this case 32-bit x86 vs 64-bit x86).
> 
> That was introduced by commit:
> 
> commit cdbc9b011fe43407908632d842e3a39e495e48d9
> Author: Ian Campbell <ian.campbell@xensource.com>
> Date:   Fri Mar 16 10:10:24 2007 +0000
> 
>     Set crash dump ELF header e_machine field based on underlying
>     hypervisor architecture.
> 
>     This is necessary when running Xen with a 64 bit hypervisor and 32 bit
>     domain 0 since the CPU crash notes will be 64 bit.
> 
>     Detecting the hypervisor architecture requires libxenctrl and therefore
>     this support is optional and disabled by default.
> 
>     Signed-off-by: Ian Campbell <ian.campbell@xensource.com>
>     Acked-by: Magnus Damm <magnus@valinux.co.jp>
>     Signed-off-by: Simon Horman <horms@verge.net.au>
> 
> As we dropped support for 32-bit Xen quite a long time ago, we may be able to
> remove the call to xc_version().
> 

Does Arm care about the bitness of the hypervisor?

Wei.

> Cheers,
> 
> -- 
> Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Jul 21 10:48:45 2020
Subject: Re: [PATCH] x86emul: support AVX512_VP2INTERSECT insns
To: Jan Beulich <jbeulich@suse.com>
References: <08083899-7348-63d2-1f28-0932e2295d64@suse.com>
 <120bdf92-15b6-3616-5cdb-75b9c38155d4@citrix.com>
 <98c74f0a-86d0-41d2-2aa3-f6b2c3e5ed68@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <772177be-eaff-bd2d-b6f4-676359166275@citrix.com>
Date: Tue, 21 Jul 2020 11:48:39 +0100
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>

On 21/07/2020 11:47, Jan Beulich wrote:
> On 21.07.2020 12:32, Andrew Cooper wrote:
>> On 21/07/2020 11:23, Jan Beulich wrote:
>>> --- a/tools/misc/xen-cpuid.c
>>> +++ b/tools/misc/xen-cpuid.c
>>> @@ -160,7 +160,7 @@ static const char *const str_7d0[32] =
>>>      [ 2] = "avx512_4vnniw", [ 3] = "avx512_4fmaps",
>>>      [ 4] = "fsrm",
>>>  
>>> -    /*  8 */                [ 9] = "srbds-ctrl",
>>> +    [ 8] = "avx512_vp2intersect", [ 9] = "srbds-ctrl",
>>>      [10] = "md-clear",
>>>      /* 12 */                [13] = "tsx-force-abort",
>>>      [14] = "serialize",
>> Are we using underscores or dashes?  I realise it is already
>> inconsistent, but this is a debugging tool only, and we can change our
>> minds.
> I've switched this one to use a dash.

Ok.  Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

> Want me to also switch others (in a separate patch)?

Probably best, yes.

Thanks,

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue Jul 21 10:52:30 2020
Date: Tue, 21 Jul 2020 10:52:23 +0000
From: Wei Liu <wl@xen.org>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH v4 10/10] tools/proctrace: add proctrace tool
Message-ID: <20200721105223.ao3mlpabk77vufh6@liuwe-devbox-debian-v2>
References: <cover.1593519420.git.michal.leszczynski@cert.pl>
 <0ab003238e4e666d3847024b8917dbc11c40fecb.1593519420.git.michal.leszczynski@cert.pl>
 <241285fc-f8be-575f-8b2a-f5aa44b77d47@citrix.com>
Cc: tamas.lengyel@intel.com, Wei Liu <wl@xen.org>,
 =?utf-8?Q?Micha=C5=82_Leszczy=C5=84ski?= <michal.leszczynski@cert.pl>,
 Ian Jackson <ian.jackson@eu.citrix.com>, luwei.kang@intel.com,
 xen-devel@lists.xenproject.org

On Thu, Jul 02, 2020 at 04:10:57PM +0100, Andrew Cooper wrote:
[...]
> 
> > +#include <stdlib.h>
> > +#include <stdio.h>
> > +#include <sys/mman.h>
> > +#include <signal.h>
> > +
> > +#include <xenctrl.h>
> > +#include <xen/xen.h>
> > +#include <xenforeignmemory.h>
> > +
> > +#define BUF_SIZE (16384 * XC_PAGE_SIZE)
> 
> This hardcodes the size of the buffer which is configurable per VM.
> Mapping the buffer fails when it is smaller than this.
> 
> It appears there is still an outstanding bug from the acquire_resource work
> which never got fixed. The guest_handle_is_null(xmar.frame_list) path
> in Xen is supposed to report the size of the resource, not the size of
> Xen's local buffer, so userspace can ask "how large is this resource".
> 
> I'll try and find some time to fix this and arrange for backports, but
> the current behaviour is nonsense, and problematic for new users.

I can't quite figure out if this is a blocking comment for accepting this
tool. Can you clarify?

Wei.

> 
> ~Andrew


From xen-devel-bounces@lists.xenproject.org Tue Jul 21 10:53:06 2020
Date: Tue, 21 Jul 2020 10:52:59 +0000
From: Wei Liu <wl@xen.org>
To: =?utf-8?Q?Micha=C5=82_Leszczy=C5=84ski?= <michal.leszczynski@cert.pl>
Subject: Re: [PATCH v4 09/10] tools/libxc: add xc_vmtrace_* functions
Message-ID: <20200721105259.kh4lmtsrjtxaul3v@liuwe-devbox-debian-v2>
References: <cover.1593519420.git.michal.leszczynski@cert.pl>
 <03c751efa273bf2a2b1575b0175219577da42e39.1593519420.git.michal.leszczynski@cert.pl>
Cc: xen-devel@lists.xenproject.org, tamas.lengyel@intel.com,
 luwei.kang@intel.com, Ian Jackson <ian.jackson@eu.citrix.com>,
 Wei Liu <wl@xen.org>

On Tue, Jun 30, 2020 at 02:33:52PM +0200, Michał Leszczyński wrote:
> From: Michal Leszczynski <michal.leszczynski@cert.pl>
> 
> Add functions in libxc that use the new XEN_DOMCTL_vmtrace interface.
> 
> Signed-off-by: Michal Leszczynski <michal.leszczynski@cert.pl>

Acked-by: Wei Liu <wl@xen.org>

(Subject to acceptance of hypervisor patches)


From xen-devel-bounces@lists.xenproject.org Tue Jul 21 11:02:14 2020
Content-Type: multipart/mixed; boundary="=separator"; charset="utf-8"
Content-Transfer-Encoding: binary
MIME-Version: 1.0
X-Mailer: MIME-tools 5.509 (Entity 5.509)
To: xen-announce@lists.xen.org, xen-devel@lists.xen.org,
 xen-users@lists.xen.org, oss-security@lists.openwall.com
From: Xen.org security team <security@xen.org>
Subject: Xen Security Advisory 329 v3 (CVE-2020-15852) - Linux ioperm
 bitmap context switching issues
Message-Id: <E1jxq1y-0007MA-2r@xenbits.xenproject.org>
Date: Tue, 21 Jul 2020 11:01:54 +0000

--=separator
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

            Xen Security Advisory CVE-2020-15852 / XSA-329
                              version 3

             Linux ioperm bitmap context switching issues

UPDATES IN VERSION 3
====================

CVE assigned.

ISSUE DESCRIPTION
=================

Linux 5.5 overhauled the internal state handling for the iopl() and ioperm()
system calls.  Unfortunately, one aspect of context switching wasn't wired up
correctly for the Xen PVOps case.

IMPACT
======

IO port permissions don't get rescinded when context switching to an
unprivileged task.  Therefore, all userspace can use the IO ports granted to
the most recently scheduled task with IO port permissions.

VULNERABLE SYSTEMS
==================

Only x86 guests are vulnerable.

All versions of Linux from 5.5 are potentially vulnerable.

Linux is only vulnerable when running as an x86 PV guest.  Linux is not
vulnerable when running as an x86 HVM/PVH guest.

The vulnerability can only be exploited in domains which have been granted
access to IO ports by Xen.  This is typically only the hardware domain, and
guests configured with PCI Passthrough.

MITIGATION
==========

Running only HVM/PVH guests avoids the vulnerability.

CREDITS
=======

This issue was discovered by Andy Lutomirski.

RESOLUTION
==========

Applying the appropriate attached patch resolves this issue.

xsa329.patch           Linux 5.5 and later

$ sha256sum xsa329*
cdb5ac9bfd21192b5965e8ec0a1c4fcf12d0a94a962a8158cd27810e6aa362f0  xsa329.patch
$

DEPLOYMENT DURING EMBARGO
=========================

Deployment of the patches and/or mitigations described above (or
others which are substantially similar) is permitted during the
embargo, even on public-facing systems with untrusted guest users and
administrators.

But: Distribution of updated software is prohibited (except to other
members of the predisclosure list).

Predisclosure list members who wish to deploy significantly different
patches and/or mitigations, please contact the Xen Project Security
Team.


(Note: this during-embargo deployment notice is retained in
post-embargo publicly released Xen Project advisories, even though it
is then no longer applicable.  This is to enable the community to have
oversight of the Xen Project Security Team's decisionmaking.)

For more information about permissible uses of embargoed information,
consult the Xen Project community's agreed Security Policy:
  http://www.xenproject.org/security-policy.html
-----BEGIN PGP SIGNATURE-----

iQFABAEBCAAqFiEEI+MiLBRfRHX6gGCng/4UyVfoK9kFAl8WytoMHHBncEB4ZW4u
b3JnAAoJEIP+FMlX6CvZ4wsH/0/2AMv2kb/Q6rfwlNLSrnDbK2b6bb/QUE+0GcHO
vrJ7Su53xrt7mllk/P4jYmtXfyUeJzfsahdb5GQVh4GBxOA3YGgS5T4pdpnwNoFi
NFZV35qOT0muwpjE/zoefKsESuvqWjd28Vssm4HrllJ4YqcGik9clo6Y5qWMFcFH
rlgchZinl5RtqAzMnuOdirWir7Xika6KdkXWi56CjKZBB5ozoqfH5JKi/XbWbwrz
ZoFHXwKRuckuQSxUlvdpmI7MZDyggii3OhdvA6fIMDWq58EjSVVatrvDxYsGRL8x
4PXmFPBp+871GjLQuQZ294fZH3DaZLWSrzvmwC8uZJr5uds=
=Wdnv
-----END PGP SIGNATURE-----

--=separator
Content-Type: application/octet-stream; name="xsa329.patch"
Content-Disposition: attachment; filename="xsa329.patch"
Content-Transfer-Encoding: base64

RnJvbTogQW5keSBMdXRvbWlyc2tpIDxsdXRvQGtlcm5lbC5vcmc+ClN1Ympl
Y3Q6IHg4Ni9pb3Blcm06IEZpeCBpbyBiaXRtYXAgaW52YWxpZGF0aW9uIG9u
IFhlbiBQVgoKdHNzX2ludmFsaWRhdGVfaW9fYml0bWFwKCkgd2Fzbid0IHdp
cmVkIHVwIHByb3Blcmx5IHRocm91Z2ggdGhlIHB2b3AKbWFjaGluZXJ5LCBz
byB0aGUgVFNTIGFuZCBYZW4ncyBpbyBiaXRtYXAgd291bGQgZ2V0IG91dCBv
ZiBzeW5jCndoZW5ldmVyIGRpc2FibGluZyBhIHZhbGlkIGlvIGJpdG1hcC4K
CkFkZCBhIG5ldyBwdm9wIGZvciB0c3NfaW52YWxpZGF0ZV9pb19iaXRtYXAo
KSB0byBmaXggaXQuCgpUaGlzIGlzIFhTQS0zMjkuCgpGaXhlczogMjJmZTVi
MDQzOWRkICgieDg2L2lvcGVybTogTW92ZSBUU1MgYml0bWFwIHVwZGF0ZSB0
byBleGl0IHRvIHVzZXIgd29yayIpClNpZ25lZC1vZmYtYnk6IEFuZHkgTHV0
b21pcnNraSA8bHV0b0BrZXJuZWwub3JnPgpSZXZpZXdlZC1ieTogSnVlcmdl
biBHcm9zcyA8amdyb3NzQHN1c2UuY29tPgpSZXZpZXdlZC1ieTogVGhvbWFz
IEdsZWl4bmVyIDx0Z2x4QGxpbnV0cm9uaXguZGU+CgpkaWZmIC0tZ2l0IGEv
YXJjaC94ODYvaW5jbHVkZS9hc20vaW9fYml0bWFwLmggYi9hcmNoL3g4Ni9p
bmNsdWRlL2FzbS9pb19iaXRtYXAuaAppbmRleCBhYzFhOTlmZmJkOGQuLjdm
MDgwZjVjN2RlZiAxMDA2NDQKLS0tIGEvYXJjaC94ODYvaW5jbHVkZS9hc20v
aW9fYml0bWFwLmgKKysrIGIvYXJjaC94ODYvaW5jbHVkZS9hc20vaW9fYml0
bWFwLmgKQEAgLTE5LDEyICsxOSwyOCBAQCBzdHJ1Y3QgdGFza19zdHJ1Y3Q7
CiB2b2lkIGlvX2JpdG1hcF9zaGFyZShzdHJ1Y3QgdGFza19zdHJ1Y3QgKnRz
ayk7CiB2b2lkIGlvX2JpdG1hcF9leGl0KHN0cnVjdCB0YXNrX3N0cnVjdCAq
dHNrKTsKIAorc3RhdGljIGlubGluZSB2b2lkIG5hdGl2ZV90c3NfaW52YWxp
ZGF0ZV9pb19iaXRtYXAodm9pZCkKK3sKKwkvKgorCSAqIEludmFsaWRhdGUg
dGhlIEkvTyBiaXRtYXAgYnkgbW92aW5nIGlvX2JpdG1hcF9iYXNlIG91dHNp
ZGUgdGhlCisJICogVFNTIGxpbWl0IHNvIGFueSBzdWJzZXF1ZW50IEkvTyBh
Y2Nlc3MgZnJvbSB1c2VyIHNwYWNlIHdpbGwKKwkgKiB0cmlnZ2VyIGEgI0dQ
LgorCSAqCisJICogVGhpcyBpcyBjb3JyZWN0IGV2ZW4gd2hlbiBWTUVYSVQg
cmV3cml0ZXMgdGhlIFRTUyBsaW1pdAorCSAqIHRvIDB4NjcgYXMgdGhlIG9u
bHkgcmVxdWlyZW1lbnQgaXMgdGhhdCB0aGUgYmFzZSBwb2ludHMKKwkgKiBv
dXRzaWRlIHRoZSBsaW1pdC4KKwkgKi8KKwl0aGlzX2NwdV93cml0ZShjcHVf
dHNzX3J3Lng4Nl90c3MuaW9fYml0bWFwX2Jhc2UsCisJCSAgICAgICBJT19C
SVRNQVBfT0ZGU0VUX0lOVkFMSUQpOworfQorCiB2b2lkIG5hdGl2ZV90c3Nf
dXBkYXRlX2lvX2JpdG1hcCh2b2lkKTsKIAogI2lmZGVmIENPTkZJR19QQVJB
VklSVF9YWEwKICNpbmNsdWRlIDxhc20vcGFyYXZpcnQuaD4KICNlbHNlCiAj
ZGVmaW5lIHRzc191cGRhdGVfaW9fYml0bWFwIG5hdGl2ZV90c3NfdXBkYXRl
X2lvX2JpdG1hcAorI2RlZmluZSB0c3NfaW52YWxpZGF0ZV9pb19iaXRtYXAg
bmF0aXZlX3Rzc19pbnZhbGlkYXRlX2lvX2JpdG1hcAogI2VuZGlmCiAKICNl
bHNlCmRpZmYgLS1naXQgYS9hcmNoL3g4Ni9pbmNsdWRlL2FzbS9wYXJhdmly
dC5oIGIvYXJjaC94ODYvaW5jbHVkZS9hc20vcGFyYXZpcnQuaAppbmRleCA1
Y2E1ZDI5N2RmNzUuLjNkMmFmZWNkZTUwYyAxMDA2NDQKLS0tIGEvYXJjaC94
ODYvaW5jbHVkZS9hc20vcGFyYXZpcnQuaAorKysgYi9hcmNoL3g4Ni9pbmNs
dWRlL2FzbS9wYXJhdmlydC5oCkBAIC0zMDIsNiArMzAyLDExIEBAIHN0YXRp
YyBpbmxpbmUgdm9pZCB3cml0ZV9pZHRfZW50cnkoZ2F0ZV9kZXNjICpkdCwg
aW50IGVudHJ5LCBjb25zdCBnYXRlX2Rlc2MgKmcpCiB9CiAKICNpZmRlZiBD
T05GSUdfWDg2X0lPUExfSU9QRVJNCitzdGF0aWMgaW5saW5lIHZvaWQgdHNz
X2ludmFsaWRhdGVfaW9fYml0bWFwKHZvaWQpCit7CisJUFZPUF9WQ0FMTDAo
Y3B1LmludmFsaWRhdGVfaW9fYml0bWFwKTsKK30KKwogc3RhdGljIGlubGlu
ZSB2b2lkIHRzc191cGRhdGVfaW9fYml0bWFwKHZvaWQpCiB7CiAJUFZPUF9W
Q0FMTDAoY3B1LnVwZGF0ZV9pb19iaXRtYXApOwpkaWZmIC0tZ2l0IGEvYXJj
aC94ODYvaW5jbHVkZS9hc20vcGFyYXZpcnRfdHlwZXMuaCBiL2FyY2gveDg2
L2luY2x1ZGUvYXNtL3BhcmF2aXJ0X3R5cGVzLmgKaW5kZXggNzMyZjYyZTA0
ZGRiLi44ZGZjYjI1MDhlNmQgMTAwNjQ0Ci0tLSBhL2FyY2gveDg2L2luY2x1
ZGUvYXNtL3BhcmF2aXJ0X3R5cGVzLmgKKysrIGIvYXJjaC94ODYvaW5jbHVk
ZS9hc20vcGFyYXZpcnRfdHlwZXMuaApAQCAtMTQxLDYgKzE0MSw3IEBAIHN0
cnVjdCBwdl9jcHVfb3BzIHsKIAl2b2lkICgqbG9hZF9zcDApKHVuc2lnbmVk
IGxvbmcgc3AwKTsKIAogI2lmZGVmIENPTkZJR19YODZfSU9QTF9JT1BFUk0K
Kwl2b2lkICgqaW52YWxpZGF0ZV9pb19iaXRtYXApKHZvaWQpOwogCXZvaWQg
KCp1cGRhdGVfaW9fYml0bWFwKSh2b2lkKTsKICNlbmRpZgogCmRpZmYgLS1n
aXQgYS9hcmNoL3g4Ni9rZXJuZWwvcGFyYXZpcnQuYyBiL2FyY2gveDg2L2tl
cm5lbC9wYXJhdmlydC5jCmluZGV4IDY3NGE3ZDY2ZDk2MC4uZGUyMTM4YmEz
OGU1IDEwMDY0NAotLS0gYS9hcmNoL3g4Ni9rZXJuZWwvcGFyYXZpcnQuYwor
KysgYi9hcmNoL3g4Ni9rZXJuZWwvcGFyYXZpcnQuYwpAQCAtMzI0LDcgKzMy
NCw4IEBAIHN0cnVjdCBwYXJhdmlydF9wYXRjaF90ZW1wbGF0ZSBwdl9vcHMg
PSB7CiAJLmNwdS5zd2FwZ3MJCT0gbmF0aXZlX3N3YXBncywKIAogI2lmZGVm
IENPTkZJR19YODZfSU9QTF9JT1BFUk0KLQkuY3B1LnVwZGF0ZV9pb19iaXRt
YXAJPSBuYXRpdmVfdHNzX3VwZGF0ZV9pb19iaXRtYXAsCisJLmNwdS5pbnZh
bGlkYXRlX2lvX2JpdG1hcAk9IG5hdGl2ZV90c3NfaW52YWxpZGF0ZV9pb19i
aXRtYXAsCisJLmNwdS51cGRhdGVfaW9fYml0bWFwCQk9IG5hdGl2ZV90c3Nf
dXBkYXRlX2lvX2JpdG1hcCwKICNlbmRpZgogCiAJLmNwdS5zdGFydF9jb250
ZXh0X3N3aXRjaAk9IHBhcmF2aXJ0X25vcCwKZGlmZiAtLWdpdCBhL2FyY2gv
eDg2L2tlcm5lbC9wcm9jZXNzLmMgYi9hcmNoL3g4Ni9rZXJuZWwvcHJvY2Vz
cy5jCmluZGV4IGYzNjJjZTBkNWFjMC4uZmU2N2RiZDc2ZTUxIDEwMDY0NAot
LS0gYS9hcmNoL3g4Ni9rZXJuZWwvcHJvY2Vzcy5jCisrKyBiL2FyY2gveDg2
L2tlcm5lbC9wcm9jZXNzLmMKQEAgLTMyMiwyMCArMzIyLDYgQEAgdm9pZCBh
cmNoX3NldHVwX25ld19leGVjKHZvaWQpCiB9CiAKICNpZmRlZiBDT05GSUdf
WDg2X0lPUExfSU9QRVJNCi1zdGF0aWMgaW5saW5lIHZvaWQgdHNzX2ludmFs
aWRhdGVfaW9fYml0bWFwKHN0cnVjdCB0c3Nfc3RydWN0ICp0c3MpCi17Ci0J
LyoKLQkgKiBJbnZhbGlkYXRlIHRoZSBJL08gYml0bWFwIGJ5IG1vdmluZyBp
b19iaXRtYXBfYmFzZSBvdXRzaWRlIHRoZQotCSAqIFRTUyBsaW1pdCBzbyBh
bnkgc3Vic2VxdWVudCBJL08gYWNjZXNzIGZyb20gdXNlciBzcGFjZSB3aWxs
Ci0JICogdHJpZ2dlciBhICNHUC4KLQkgKgotCSAqIFRoaXMgaXMgY29ycmVj
dCBldmVuIHdoZW4gVk1FWElUIHJld3JpdGVzIHRoZSBUU1MgbGltaXQKLQkg
KiB0byAweDY3IGFzIHRoZSBvbmx5IHJlcXVpcmVtZW50IGlzIHRoYXQgdGhl
IGJhc2UgcG9pbnRzCi0JICogb3V0c2lkZSB0aGUgbGltaXQuCi0JICovCi0J
dHNzLT54ODZfdHNzLmlvX2JpdG1hcF9iYXNlID0gSU9fQklUTUFQX09GRlNF
VF9JTlZBTElEOwotfQotCiBzdGF0aWMgaW5saW5lIHZvaWQgc3dpdGNoX3Rv
X2JpdG1hcCh1bnNpZ25lZCBsb25nIHRpZnApCiB7CiAJLyoKQEAgLTM0Niw3
ICszMzIsNyBAQCBzdGF0aWMgaW5saW5lIHZvaWQgc3dpdGNoX3RvX2JpdG1h
cCh1bnNpZ25lZCBsb25nIHRpZnApCiAJICogdXNlciBtb2RlLgogCSAqLwog
CWlmICh0aWZwICYgX1RJRl9JT19CSVRNQVApCi0JCXRzc19pbnZhbGlkYXRl
X2lvX2JpdG1hcCh0aGlzX2NwdV9wdHIoJmNwdV90c3NfcncpKTsKKwkJdHNz
X2ludmFsaWRhdGVfaW9fYml0bWFwKCk7CiB9CiAKIHN0YXRpYyB2b2lkIHRz
c19jb3B5X2lvX2JpdG1hcChzdHJ1Y3QgdHNzX3N0cnVjdCAqdHNzLCBzdHJ1
Y3QgaW9fYml0bWFwICppb2JtKQpAQCAtMzgwLDcgKzM2Niw3IEBAIHZvaWQg
bmF0aXZlX3Rzc191cGRhdGVfaW9fYml0bWFwKHZvaWQpCiAJdTE2ICpiYXNl
ID0gJnRzcy0+eDg2X3Rzcy5pb19iaXRtYXBfYmFzZTsKIAogCWlmICghdGVz
dF90aHJlYWRfZmxhZyhUSUZfSU9fQklUTUFQKSkgewotCQl0c3NfaW52YWxp
ZGF0ZV9pb19iaXRtYXAodHNzKTsKKwkJbmF0aXZlX3Rzc19pbnZhbGlkYXRl
X2lvX2JpdG1hcCgpOwogCQlyZXR1cm47CiAJfQogCmRpZmYgLS1naXQgYS9h
cmNoL3g4Ni94ZW4vZW5saWdodGVuX3B2LmMgYi9hcmNoL3g4Ni94ZW4vZW5s
aWdodGVuX3B2LmMKaW5kZXggYWNjNDlmYTZhMDk3Li5jNDc1YTExYzY2MjAg
MTAwNjQ0Ci0tLSBhL2FyY2gveDg2L3hlbi9lbmxpZ2h0ZW5fcHYuYworKysg
Yi9hcmNoL3g4Ni94ZW4vZW5saWdodGVuX3B2LmMKQEAgLTg1MCw2ICs4NTAs
MTcgQEAgc3RhdGljIHZvaWQgeGVuX2xvYWRfc3AwKHVuc2lnbmVkIGxvbmcg
c3AwKQogfQogCiAjaWZkZWYgQ09ORklHX1g4Nl9JT1BMX0lPUEVSTQorc3Rh
dGljIHZvaWQgeGVuX2ludmFsaWRhdGVfaW9fYml0bWFwKHZvaWQpCit7CisJ
c3RydWN0IHBoeXNkZXZfc2V0X2lvYml0bWFwIGlvYml0bWFwID0geworCQku
Yml0bWFwID0gMCwKKwkJLm5yX3BvcnRzID0gMCwKKwl9OworCisJbmF0aXZl
X3Rzc19pbnZhbGlkYXRlX2lvX2JpdG1hcCgpOworCUhZUEVSVklTT1JfcGh5
c2Rldl9vcChQSFlTREVWT1Bfc2V0X2lvYml0bWFwLCAmaW9iaXRtYXApOwor
fQorCiBzdGF0aWMgdm9pZCB4ZW5fdXBkYXRlX2lvX2JpdG1hcCh2b2lkKQog
ewogCXN0cnVjdCBwaHlzZGV2X3NldF9pb2JpdG1hcCBpb2JpdG1hcDsKQEAg
LTEwNzksNiArMTA5MCw3IEBAIHN0YXRpYyBjb25zdCBzdHJ1Y3QgcHZfY3B1
X29wcyB4ZW5fY3B1X29wcyBfX2luaXRjb25zdCA9IHsKIAkubG9hZF9zcDAg
PSB4ZW5fbG9hZF9zcDAsCiAKICNpZmRlZiBDT05GSUdfWDg2X0lPUExfSU9Q
RVJNCisJLmludmFsaWRhdGVfaW9fYml0bWFwID0geGVuX2ludmFsaWRhdGVf
aW9fYml0bWFwLAogCS51cGRhdGVfaW9fYml0bWFwID0geGVuX3VwZGF0ZV9p
b19iaXRtYXAsCiAjZW5kaWYKIAkuaW9fZGVsYXkgPSB4ZW5faW9fZGVsYXks
Cg==

--=separator--


From xen-devel-bounces@lists.xenproject.org Tue Jul 21 11:21:00 2020
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xenproject.org>
Subject: [osstest PATCH] freebsd: remove freebsd- hostflags request from guest
 tests
Date: Tue, 21 Jul 2020 13:20:16 +0200
Message-ID: <20200721112016.30133-1-roger.pau@citrix.com>
Cc: ian.jackson@eu.citrix.com, Roger Pau Monne <roger.pau@citrix.com>

Guest tests shouldn't care about the capabilities or firmware of the
underlying hosts, so drop the request for specific freebsd-<version>
hostflags for FreeBSD guest tests.

While there, request the presence of the hvm hostflag, since the FreeBSD
guest tests are run in HVM mode.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
 make-flight | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/make-flight b/make-flight
index b8942c1c..2ea9ad29 100755
--- a/make-flight
+++ b/make-flight
@@ -241,7 +241,7 @@ do_freebsd_tests () {
       job_create_test test-$xenarch$kern-$dom0arch$qemuu_suffix-freebsd10-$freebsdarch \
                       test-freebsd xl $xenarch $dom0arch freebsd_arch=$freebsdarch \
  freebsd_image=${FREEBSD_IMAGE_PREFIX-FreeBSD-10.1-CUSTOM-}$freebsdarch${FREEBSD_IMAGE_SUFFIX--20150525.raw.xz} \
-                      all_hostflags=$most_hostflags,freebsd-10
+                      all_hostflags=$most_hostflags,hvm
     done
     return
   fi
@@ -251,11 +251,11 @@ do_freebsd_tests () {
     job_create_test test-$xenarch$kern-$dom0arch$qemuu_suffix-freebsd11-$freebsdarch \
                     test-freebsd xl $xenarch $dom0arch freebsd_arch=$freebsdarch \
  freebsd_image=${FREEBSD_IMAGE_PREFIX-FreeBSD-11.3-RELEASE-}$freebsdarch${FREEBSD_IMAGE_SUFFIX-.raw.xz} \
-                    all_hostflags=$most_hostflags,freebsd-11
+                    all_hostflags=$most_hostflags,hvm
     job_create_test test-$xenarch$kern-$dom0arch$qemuu_suffix-freebsd12-$freebsdarch \
                     test-freebsd xl $xenarch $dom0arch freebsd_arch=$freebsdarch \
  freebsd_image=${FREEBSD_IMAGE_PREFIX-FreeBSD-12.1-RELEASE-}$freebsdarch${FREEBSD_IMAGE_SUFFIX-.raw.xz} \
-                    all_hostflags=$most_hostflags,freebsd-12
+                    all_hostflags=$most_hostflags,hvm
   done
 }
 
-- 
2.27.0



From xen-devel-bounces@lists.xenproject.org Tue Jul 21 11:26:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jul 2020 11:26:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxqPb-0002BS-8P; Tue, 21 Jul 2020 11:26:19 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5NSl=BA=wunner.de=lukas@srs-us1.protection.inumbo.net>)
 id 1jxqHT-0001Hw-9k
 for xen-devel@lists.xenproject.org; Tue, 21 Jul 2020 11:17:55 +0000
X-Inumbo-ID: d1355972-cb43-11ea-8500-bc764e2007e4
Received: from bmailout3.hostsharing.net (unknown
 [2a01:4f8:150:2161:1:b009:f23e:0])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d1355972-cb43-11ea-8500-bc764e2007e4;
 Tue, 21 Jul 2020 11:17:52 +0000 (UTC)
Received: from h08.hostsharing.net (h08.hostsharing.net
 [IPv6:2a01:37:1000::53df:5f1c:0])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (Client CN "*.hostsharing.net",
 Issuer "COMODO RSA Domain Validation Secure Server CA" (not verified))
 by bmailout3.hostsharing.net (Postfix) with ESMTPS id 6CD1D1009FD55;
 Tue, 21 Jul 2020 13:17:51 +0200 (CEST)
Received: by h08.hostsharing.net (Postfix, from userid 100393)
 id 139D72304F; Tue, 21 Jul 2020 13:17:50 +0200 (CEST)
Message-Id: <908047f7699d9de9ec2efd6b79aa752d73dab4b6.1595329748.git.lukas@wunner.de>
From: Lukas Wunner <lukas@wunner.de>
Date: Tue, 21 Jul 2020 13:17:50 +0200
Subject: [PATCH] PCI: pciehp: Fix AB-BA deadlock between reset_lock and
 device_lock
To: Bjorn Helgaas <bhelgaas@google.com>,
 Alex Williamson <alex.williamson@redhat.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>, Juergen Gross <jgross@suse.com>
X-Mailman-Approved-At: Tue, 21 Jul 2020 11:26:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Derek Chickles <dchickles@marvell.com>, xen-devel@lists.xenproject.org,
 kvm@vger.kernel.org, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 linux-pci@vger.kernel.org, Satanand Burla <sburla@marvell.com>,
 Cornelia Huck <cohuck@redhat.com>, Felix Manlunas <fmanlunas@marvell.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Govinda Tatti <govinda.tatti@oracle.com>,
 Rick Farrington <ricardo.farrington@cavium.com>,
 Keith Busch <kbusch@kernel.org>, Michael Haeuptle <michael.haeuptle@hpe.com>,
 Ian May <ian.may@canonical.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Back in 2013, commits

  2e35afaefe64 ("PCI: pciehp: Add reset_slot() method")
  608c388122c7 ("PCI: Add slot reset option to pci_dev_reset()")

introduced the callback pciehp_reset_slot() to the PCIe hotplug driver
and amended __pci_dev_reset() (today __pci_reset_function_locked()) to
invoke it when resetting a hotplug port's child.  The callback performs
a Secondary Bus Reset and ensures that an ensuing link or presence flap
is ignored by pciehp.

However the commits did not perform any locking, in particular:

* No precautions were taken to prevent concurrent execution of the new
  callback with pciehp's IRQ handler or a sysfs request to bring the
  slot up or down.  These code paths may see flapping link or presence
  bits during a slot reset.

* pciehp is not prevented from unbinding while the new callback accesses
  its struct controller.  Commit 608c388122c7 did take a reference on
  pciehp's module, but that's not sufficient.  It only keeps pciehp's
  code in memory, but doesn't prevent unbinding.

* In pci_dev_reset_slot_function(), commit 608c388122c7 iterates over
  the devices on a bus without holding pci_bus_sem.

In 2018, commit

  5b3f7b7d062b ("PCI: pciehp: Avoid slot access during reset")

sought to address the first of these locking issues:  It introduced a
reset_lock which serializes a slot reset with other parts of pciehp.

But Michael Haeuptle reports that deadlocks now occur on simultaneous
hot-removal and reset of vfio devices because pciehp acquires the
reset_lock and the device_lock in a different order than
pci_try_reset_function():

pciehp_ist()                                    # down_read(reset_lock)
  pciehp_handle_presence_or_link_change()
    pciehp_disable_slot()
      __pciehp_disable_slot()
        remove_board()
          pciehp_unconfigure_device()
            pci_stop_and_remove_bus_device()
              pci_stop_bus_device()
                pci_stop_dev()
                  device_release_driver()
                    device_release_driver_internal()
                      __device_driver_lock()    # device_lock()

SYS_munmap()
  vfio_device_fops_release()
    vfio_pci_release()
      vfio_pci_disable()
        pci_try_reset_function()                # device_lock()
          __pci_reset_function_locked()
            pci_reset_hotplug_slot()
              pciehp_reset_slot()               # down_write(reset_lock)

Ian May reports the same deadlock on simultaneous hot-removal and AER
reset:

aer_recover_work_func()
  pcie_do_recovery()
    aer_root_reset()
      pci_bus_error_reset()
        pci_slot_reset()
          pci_slot_lock()                       # device_lock()
            pci_reset_hotplug_slot()
              pciehp_reset_slot()               # down_write(reset_lock)

Fix by pushing the reset_lock out of pciehp's struct controller and into
struct pci_slot such that it can be taken by the PCI core before taking
the device lock.

There's a catch though:  Some drivers call __pci_reset_function_locked()
directly and the function expects that all necessary locks, including
the reset_lock, have been acquired by the caller.  There are callers
which already hold the device_lock, so they can't acquire the reset_lock
without re-introducing the AB-BA deadlock:

* drivers/net/ethernet/cavium/liquidio/lio_main.c: octeon_pci_flr()
* drivers/xen/xen-pciback/pci_stub.c: pcistub_device_release()
* drivers/xen/xen-pciback/pci_stub.c: pcistub_init_device() (if called
  from pcistub_seize())

In the case of octeon_pci_flr(), the device is reset on driver unbind,
which is why the device_lock is already held.  A possible solution might
be to add a flag in struct pci_dev with which drivers tell the PCI core
that the device is handed back in an unclean state and needs a reset.
The PCI core would then perform the reset on behalf of the driver after
it has unbound and before any new driver is bound.

As for xen, this patch (which was never applied) explains that a reset
is performed on bind, unbind and on un-assigning a device from a guest:

  https://lore.kernel.org/patchwork/patch/848180/

The unbind code path could be solved in the same way as for octeon,
and it may also be possible to make that work on bind, though it's
unclear why a reset on bind is necessary at all.  The un-assigning
code path is, I think, fixed by the present commit.

For now, the three functions do not acquire the reset_lock.  I'm
inserting a lockdep_assert_held_write() so that a lockdep splat is shown
as a reminder that liquidio and xen require fixing.

Fixes: 5b3f7b7d062b ("PCI: pciehp: Avoid slot access during reset")
Link: https://lore.kernel.org/linux-pci/CS1PR8401MB0728FC6FDAB8A35C22BD90EC95F10@CS1PR8401MB0728.NAMPRD84.PROD.OUTLOOK.COM
Link: https://lore.kernel.org/linux-pci/20200615143250.438252-1-ian.may@canonical.com
Reported-and-tested-by: Michael Haeuptle <michael.haeuptle@hpe.com>
Reported-and-tested-by: Ian May <ian.may@canonical.com>
Signed-off-by: Lukas Wunner <lukas@wunner.de>
Cc: stable@vger.kernel.org # v4.19+
Cc: Alex Williamson <alex.williamson@redhat.com>
---
 drivers/pci/hotplug/pciehp.h          |  5 -----
 drivers/pci/hotplug/pciehp_core.c     |  4 ++--
 drivers/pci/hotplug/pciehp_hpc.c      | 12 ++++++------
 drivers/pci/pci.c                     | 17 +++++++++++++++++
 drivers/pci/slot.c                    |  2 ++
 drivers/vfio/pci/vfio_pci.c           | 19 +++++++++++++------
 drivers/xen/xen-pciback/passthrough.c | 14 ++++++++++++--
 drivers/xen/xen-pciback/pci_stub.c    |  6 ++++++
 drivers/xen/xen-pciback/vpci.c        | 14 ++++++++++++--
 include/linux/pci.h                   |  8 +++++++-
 10 files changed, 77 insertions(+), 24 deletions(-)

diff --git a/drivers/pci/hotplug/pciehp.h b/drivers/pci/hotplug/pciehp.h
index 4fd200d..676e579 100644
--- a/drivers/pci/hotplug/pciehp.h
+++ b/drivers/pci/hotplug/pciehp.h
@@ -20,7 +20,6 @@
 #include <linux/pci_hotplug.h>
 #include <linux/delay.h>
 #include <linux/mutex.h>
-#include <linux/rwsem.h>
 #include <linux/workqueue.h>
 
 #include "../pcie/portdrv.h"
@@ -69,9 +68,6 @@
  * @button_work: work item to turn the slot on or off after 5 seconds
  *	in response to an Attention Button press
  * @hotplug_slot: structure registered with the PCI hotplug core
- * @reset_lock: prevents access to the Data Link Layer Link Active bit in the
- *	Link Status register and to the Presence Detect State bit in the Slot
- *	Status register during a slot reset which may cause them to flap
  * @ist_running: flag to keep user request waiting while IRQ thread is running
  * @request_result: result of last user request submitted to the IRQ thread
  * @requester: wait queue to wake up on completion of user request,
@@ -102,7 +98,6 @@ struct controller {
 	struct delayed_work button_work;
 
 	struct hotplug_slot hotplug_slot;	/* hotplug core interface */
-	struct rw_semaphore reset_lock;
 	unsigned int ist_running;
 	int request_result;
 	wait_queue_head_t requester;
diff --git a/drivers/pci/hotplug/pciehp_core.c b/drivers/pci/hotplug/pciehp_core.c
index bf779f2..cdb241b 100644
--- a/drivers/pci/hotplug/pciehp_core.c
+++ b/drivers/pci/hotplug/pciehp_core.c
@@ -165,7 +165,7 @@ static void pciehp_check_presence(struct controller *ctrl)
 {
 	int occupied;
 
-	down_read(&ctrl->reset_lock);
+	down_read(&ctrl->hotplug_slot.pci_slot->reset_lock);
 	mutex_lock(&ctrl->state_lock);
 
 	occupied = pciehp_card_present_or_link_active(ctrl);
@@ -176,7 +176,7 @@ static void pciehp_check_presence(struct controller *ctrl)
 		pciehp_request(ctrl, PCI_EXP_SLTSTA_PDC);
 
 	mutex_unlock(&ctrl->state_lock);
-	up_read(&ctrl->reset_lock);
+	up_read(&ctrl->hotplug_slot.pci_slot->reset_lock);
 }
 
 static int pciehp_probe(struct pcie_device *dev)
diff --git a/drivers/pci/hotplug/pciehp_hpc.c b/drivers/pci/hotplug/pciehp_hpc.c
index 53433b3..a1c9072 100644
--- a/drivers/pci/hotplug/pciehp_hpc.c
+++ b/drivers/pci/hotplug/pciehp_hpc.c
@@ -706,13 +706,17 @@ static irqreturn_t pciehp_ist(int irq, void *dev_id)
 	/*
 	 * Disable requests have higher priority than Presence Detect Changed
 	 * or Data Link Layer State Changed events.
+	 *
+	 * A slot reset may cause flaps of the Presence Detect State bit in the
+	 * Slot Status register and the Data Link Layer Link Active bit in the
+	 * Link Status register.  Prevent by holding the reset lock.
 	 */
-	down_read(&ctrl->reset_lock);
+	down_read(&ctrl->hotplug_slot.pci_slot->reset_lock);
 	if (events & DISABLE_SLOT)
 		pciehp_handle_disable_request(ctrl);
 	else if (events & (PCI_EXP_SLTSTA_PDC | PCI_EXP_SLTSTA_DLLSC))
 		pciehp_handle_presence_or_link_change(ctrl, events);
-	up_read(&ctrl->reset_lock);
+	up_read(&ctrl->hotplug_slot.pci_slot->reset_lock);
 
 	ret = IRQ_HANDLED;
 out:
@@ -841,8 +845,6 @@ int pciehp_reset_slot(struct hotplug_slot *hotplug_slot, int probe)
 	if (probe)
 		return 0;
 
-	down_write(&ctrl->reset_lock);
-
 	if (!ATTN_BUTTN(ctrl)) {
 		ctrl_mask |= PCI_EXP_SLTCTL_PDCE;
 		stat_mask |= PCI_EXP_SLTSTA_PDC;
@@ -861,7 +863,6 @@ int pciehp_reset_slot(struct hotplug_slot *hotplug_slot, int probe)
 	ctrl_dbg(ctrl, "%s: SLOTCTRL %x write cmd %x\n", __func__,
 		 pci_pcie_cap(ctrl->pcie->port) + PCI_EXP_SLTCTL, ctrl_mask);
 
-	up_write(&ctrl->reset_lock);
 	return rc;
 }
 
@@ -925,7 +926,6 @@ struct controller *pcie_init(struct pcie_device *dev)
 	ctrl->slot_cap = slot_cap;
 	mutex_init(&ctrl->ctrl_lock);
 	mutex_init(&ctrl->state_lock);
-	init_rwsem(&ctrl->reset_lock);
 	init_waitqueue_head(&ctrl->requester);
 	init_waitqueue_head(&ctrl->queue);
 	INIT_DELAYED_WORK(&ctrl->button_work, pciehp_queue_pushbutton_work);
diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
index 45c51af..455da72 100644
--- a/drivers/pci/pci.c
+++ b/drivers/pci/pci.c
@@ -4902,6 +4902,8 @@ static int pci_reset_hotplug_slot(struct hotplug_slot *hotplug, int probe)
 	if (!hotplug || !try_module_get(hotplug->owner))
 		return rc;
 
+	lockdep_assert_held_write(&hotplug->pci_slot->reset_lock);
+
 	if (hotplug->ops->reset_slot)
 		rc = hotplug->ops->reset_slot(hotplug, probe);
 
@@ -5110,6 +5112,8 @@ int pci_reset_function(struct pci_dev *dev)
 	if (!dev->reset_fn)
 		return -ENOTTY;
 
+	if (dev->slot)
+		down_write(&dev->slot->reset_lock);
 	pci_dev_lock(dev);
 	pci_dev_save_and_disable(dev);
 
@@ -5117,6 +5121,8 @@ int pci_reset_function(struct pci_dev *dev)
 
 	pci_dev_restore(dev);
 	pci_dev_unlock(dev);
+	if (dev->slot)
+		up_write(&dev->slot->reset_lock);
 
 	return rc;
 }
@@ -5169,6 +5175,9 @@ int pci_try_reset_function(struct pci_dev *dev)
 	if (!dev->reset_fn)
 		return -ENOTTY;
 
+	if (dev->slot && !down_write_trylock(&dev->slot->reset_lock))
+		return -EAGAIN;
+
 	if (!pci_dev_trylock(dev))
 		return -EAGAIN;
 
@@ -5176,6 +5185,8 @@ int pci_try_reset_function(struct pci_dev *dev)
 	rc = __pci_reset_function_locked(dev);
 	pci_dev_restore(dev);
 	pci_dev_unlock(dev);
+	if (dev->slot)
+		up_write(&dev->slot->reset_lock);
 
 	return rc;
 }
@@ -5274,6 +5285,7 @@ static void pci_slot_lock(struct pci_slot *slot)
 {
 	struct pci_dev *dev;
 
+	down_write(&slot->reset_lock);
 	list_for_each_entry(dev, &slot->bus->devices, bus_list) {
 		if (!dev->slot || dev->slot != slot)
 			continue;
@@ -5295,6 +5307,7 @@ static void pci_slot_unlock(struct pci_slot *slot)
 			pci_bus_unlock(dev->subordinate);
 		pci_dev_unlock(dev);
 	}
+	up_write(&slot->reset_lock);
 }
 
 /* Return 1 on successful lock, 0 on contention */
@@ -5302,6 +5315,9 @@ static int pci_slot_trylock(struct pci_slot *slot)
 {
 	struct pci_dev *dev;
 
+	if (!down_write_trylock(&slot->reset_lock))
+		return 0;
+
 	list_for_each_entry(dev, &slot->bus->devices, bus_list) {
 		if (!dev->slot || dev->slot != slot)
 			continue;
@@ -5325,6 +5341,7 @@ static int pci_slot_trylock(struct pci_slot *slot)
 			pci_bus_unlock(dev->subordinate);
 		pci_dev_unlock(dev);
 	}
+	up_write(&slot->reset_lock);
 	return 0;
 }
 
diff --git a/drivers/pci/slot.c b/drivers/pci/slot.c
index cc386ef..e8e7d09 100644
--- a/drivers/pci/slot.c
+++ b/drivers/pci/slot.c
@@ -279,6 +279,8 @@ struct pci_slot *pci_create_slot(struct pci_bus *parent, int slot_nr,
 	INIT_LIST_HEAD(&slot->list);
 	list_add(&slot->list, &parent->slots);
 
+	init_rwsem(&slot->reset_lock);
+
 	down_read(&pci_bus_sem);
 	list_for_each_entry(dev, &parent->devices, bus_list)
 		if (PCI_SLOT(dev->devfn) == slot_nr)
diff --git a/drivers/vfio/pci/vfio_pci.c b/drivers/vfio/pci/vfio_pci.c
index f634c81..260650e 100644
--- a/drivers/vfio/pci/vfio_pci.c
+++ b/drivers/vfio/pci/vfio_pci.c
@@ -454,13 +454,20 @@ static void vfio_pci_disable(struct vfio_pci_device *vdev)
 	 * We can not use the "try" reset interface here, which will
 	 * overwrite the previously restored configuration information.
 	 */
-	if (vdev->reset_works && pci_cfg_access_trylock(pdev)) {
-		if (device_trylock(&pdev->dev)) {
-			if (!__pci_reset_function_locked(pdev))
-				vdev->needs_reset = false;
-			device_unlock(&pdev->dev);
+	if (vdev->reset_works) {
+		if (!pdev->slot ||
+		    down_write_trylock(&pdev->slot->reset_lock)) {
+			if (pci_cfg_access_trylock(pdev)) {
+				if (device_trylock(&pdev->dev)) {
+					if (!__pci_reset_function_locked(pdev))
+						vdev->needs_reset = false;
+					device_unlock(&pdev->dev);
+				}
+				pci_cfg_access_unlock(pdev);
+			}
+			if (pdev->slot)
+				up_write(&pdev->slot->reset_lock);
 		}
-		pci_cfg_access_unlock(pdev);
 	}
 
 	pci_restore_state(pdev);
diff --git a/drivers/xen/xen-pciback/passthrough.c b/drivers/xen/xen-pciback/passthrough.c
index 66e9b81..98a9ec8 100644
--- a/drivers/xen/xen-pciback/passthrough.c
+++ b/drivers/xen/xen-pciback/passthrough.c
@@ -89,11 +89,17 @@ static void __xen_pcibk_release_pci_dev(struct xen_pcibk_device *pdev,
 	mutex_unlock(&dev_data->lock);
 
 	if (found_dev) {
-		if (lock)
+		if (lock) {
+			if (found_dev->slot)
+				down_write(&found_dev->slot->reset_lock);
 			device_lock(&found_dev->dev);
+		}
 		pcistub_put_pci_dev(found_dev);
-		if (lock)
+		if (lock) {
 			device_unlock(&found_dev->dev);
+			if (found_dev->slot)
+				up_write(&found_dev->slot->reset_lock);
+		}
 	}
 }
 
@@ -164,9 +170,13 @@ static void __xen_pcibk_release_devices(struct xen_pcibk_device *pdev)
 	list_for_each_entry_safe(dev_entry, t, &dev_data->dev_list, list) {
 		struct pci_dev *dev = dev_entry->dev;
 		list_del(&dev_entry->list);
+		if (dev->slot)
+			down_write(&dev->slot->reset_lock);
 		device_lock(&dev->dev);
 		pcistub_put_pci_dev(dev);
 		device_unlock(&dev->dev);
+		if (dev->slot)
+			up_write(&dev->slot->reset_lock);
 		kfree(dev_entry);
 	}
 
diff --git a/drivers/xen/xen-pciback/pci_stub.c b/drivers/xen/xen-pciback/pci_stub.c
index e876c3d..91779a2 100644
--- a/drivers/xen/xen-pciback/pci_stub.c
+++ b/drivers/xen/xen-pciback/pci_stub.c
@@ -463,6 +463,9 @@ static int __init pcistub_init_devices_late(void)
 
 		spin_unlock_irqrestore(&pcistub_devices_lock, flags);
 
+		if (psdev->dev->slot)
+			down_write(&psdev->dev->slot->reset_lock);
+		device_lock(&psdev->dev->dev);
 		err = pcistub_init_device(psdev->dev);
 		if (err) {
 			dev_err(&psdev->dev->dev,
@@ -470,6 +473,9 @@ static int __init pcistub_init_devices_late(void)
 			kfree(psdev);
 			psdev = NULL;
 		}
+		device_unlock(&psdev->dev->dev);
+		if (psdev->dev->slot)
+			up_write(&psdev->dev->slot->reset_lock);
 
 		spin_lock_irqsave(&pcistub_devices_lock, flags);
 
diff --git a/drivers/xen/xen-pciback/vpci.c b/drivers/xen/xen-pciback/vpci.c
index 5447b5a..d157b1d 100644
--- a/drivers/xen/xen-pciback/vpci.c
+++ b/drivers/xen/xen-pciback/vpci.c
@@ -171,11 +171,17 @@ static void __xen_pcibk_release_pci_dev(struct xen_pcibk_device *pdev,
 	mutex_unlock(&vpci_dev->lock);
 
 	if (found_dev) {
-		if (lock)
+		if (lock) {
+			if (found_dev->slot)
+				down_write(&found_dev->slot->reset_lock);
 			device_lock(&found_dev->dev);
+		}
 		pcistub_put_pci_dev(found_dev);
-		if (lock)
+		if (lock) {
 			device_unlock(&found_dev->dev);
+			if (found_dev->slot)
+				up_write(&found_dev->slot->reset_lock);
+		}
 	}
 }
 
@@ -216,9 +222,13 @@ static void __xen_pcibk_release_devices(struct xen_pcibk_device *pdev)
 					 list) {
 			struct pci_dev *dev = e->dev;
 			list_del(&e->list);
+			if (dev->slot)
+				down_write(&dev->slot->reset_lock);
 			device_lock(&dev->dev);
 			pcistub_put_pci_dev(dev);
 			device_unlock(&dev->dev);
+			if (dev->slot)
+				up_write(&dev->slot->reset_lock);
 			kfree(e);
 		}
 	}
diff --git a/include/linux/pci.h b/include/linux/pci.h
index 2a2d00e..12869bd 100644
--- a/include/linux/pci.h
+++ b/include/linux/pci.h
@@ -38,6 +38,7 @@
 #include <linux/interrupt.h>
 #include <linux/io.h>
 #include <linux/resource_ext.h>
+#include <linux/rwsem.h>
 #include <uapi/linux/pci.h>
 
 #include <linux/pci_ids.h>
@@ -65,11 +66,16 @@
 /* return bus from PCI devid = ((u16)bus_number) << 8) | devfn */
 #define PCI_BUS_NUM(x) (((x) >> 8) & 0xff)
 
-/* pci_slot represents a physical slot */
+/**
+ * struct pci_slot - represents a physical slot
+ * @reset_lock: held for writing during a slot reset; acquire for reading to
+ *	protect access to register bits which may flap upon a reset
+ */
 struct pci_slot {
 	struct pci_bus		*bus;		/* Bus this slot is on */
 	struct list_head	list;		/* Node in list of slots */
 	struct hotplug_slot	*hotplug;	/* Hotplug info (move here) */
+	struct rw_semaphore	reset_lock;
 	unsigned char		number;		/* PCI_SLOT(pci_dev->devfn) */
 	struct kobject		kobj;
 };
-- 
2.27.0



From xen-devel-bounces@lists.xenproject.org Tue Jul 21 11:27:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jul 2020 11:27:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxqQO-0002EQ-IG; Tue, 21 Jul 2020 11:27:08 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gK6X=BA=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jxqQO-0002EK-4L
 for xen-devel@lists.xenproject.org; Tue, 21 Jul 2020 11:27:08 +0000
X-Inumbo-ID: 1bb2517a-cb45-11ea-a0a0-12813bfff9fa
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1bb2517a-cb45-11ea-a0a0-12813bfff9fa;
 Tue, 21 Jul 2020 11:27:07 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595330827;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:content-transfer-encoding:in-reply-to;
 bh=knBH4Wq7fTbG0g3Y0Gjxljh4vvw18WcLtVeKIt6upK0=;
 b=OG55sptUenZeYLccEXcHZ/w3FIwFc6vCnYU9p2d3sEkXYOtwbXQrN8Ir
 ES4KthaHTcAv8drdk/VfhalgLnGwDceUMdwZv/m/65tyI5B0N+L9IPBp2
 VS9U3qWUgrKtSWs1BfsGou7+S9PpCkb4a0Q8AxV+CkeHc5lD4LZ3Wdppr Y=;
Authentication-Results: esa6.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: qga5tWolNgSk0o0DOp7bexDU1g8wpC7XCDftGL5zwn7Qytwf7ufpAoY6kaDmhiLnXcsx8Gwbio
 2lH00iQKt3ymcwe12AP6Y0NOxx94nACWFZnjvygio8VwZ4hp3bUFOOuoAAIIURkdDHZF0VKn6l
 KmQzrNJ7OKd+PBIdyX0aZb+yVHvvTgG+MwjE9sytNoltIBI8Lko8YXQlDBa/mVokYSWrjmmqV9
 eDDlBayy7jdE57rkSatYIW+26w+E08z/pzilR6eoQDbVmYjjf+s2Da0FdFRLQqV6B/Tvr/hX6u
 Bu0=
X-SBRS: 2.7
X-MesageID: 23158730
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,378,1589256000"; d="scan'208";a="23158730"
Date: Tue, 21 Jul 2020 13:27:00 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Subject: Re: [PATCH] x86/S3: put data segment registers into known state upon
 resume
Message-ID: <20200721112700.GN7191@Air-de-Roger>
References: <3cad2798-1a01-7d5e-ea55-ddb9ba6388d9@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <3cad2798-1a01-7d5e-ea55-ddb9ba6388d9@suse.com>
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 "M. Vefa Bicakci" <m.v.b@runbox.com>, Wei Liu <wl@xen.org>, Andrew
 Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Mon, Jul 20, 2020 at 05:20:15PM +0200, Jan Beulich wrote:
> wakeup_32 sets %ds and %es to BOOT_DS, while leaving %fs at what
> wakeup_start did set it to, and %gs at whatever BIOS did load into it.
> All of this may end up confusing the first load_segments() to run on
> the BSP after resume, in particular allowing a non-nul selector value
> to be left in %fs.
> 
> Alongside %ss, also put all other data segment registers into the same
> state that the boot and CPU bringup paths put them in.
> 
> Reported-by: M. Vefa Bicakci <m.v.b@runbox.com>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

I wouldn't mind if the added chunk was placed before loading %ss, so
that the context of what's in %eax would be clearer.

Roger.


From xen-devel-bounces@lists.xenproject.org Tue Jul 21 11:53:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jul 2020 11:53:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxqq2-00059T-Ho; Tue, 21 Jul 2020 11:53:38 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gK6X=BA=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jxqq1-00059O-7U
 for xen-devel@lists.xenproject.org; Tue, 21 Jul 2020 11:53:37 +0000
X-Inumbo-ID: ce1484d4-cb48-11ea-a0a1-12813bfff9fa
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ce1484d4-cb48-11ea-a0a1-12813bfff9fa;
 Tue, 21 Jul 2020 11:53:34 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595332414;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:content-transfer-encoding:in-reply-to;
 bh=OOa+A3QfUb/+6qL6XytZeqbMeTi37eM1OfzxLLRcWaA=;
 b=Vwe9SNSZElLF8f0RAqIn3sQtcXZRjO1czz8wYB18YLWS3JApY3Ok9fMk
 oeJ/6maBPViXQDcAGxhD68Ka74zQFPVuSERl5zT71FjjK1VL48YHXCjc0
 jpZByi9RRhhTKwcbms78r0K/VdLjf8/XPsbORUkjC3/UkvIoFSBzbgQqb k=;
Authentication-Results: esa6.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: KO5WVOrqDpuBC6xmtpaQupVrPLCkbWZrnuQJazzZZl5TqX8oxNBo54pyKkIF2e2SrBB65Ib7/n
 2KIka1TIAnCGy5bDSVub2k/r1vhBp31cH/A61Ulzi3EcVFvEERgu6XuIxntv8dAAOn/LG6cbip
 Ry0c+Q7ffu3DB5h4OPpjJ9Uc2i4GfiPQjpuCWhx6rCuK4zKRDGFsUDYDMZxroP3bTU0rzsOgGN
 j9A8Tg7GqXxBkLX7i4j0TlK/dRPNeZGyE676+RlzlNVBuU1iXyw7oIVGShd/QnroPQqJnyqSj0
 0j0=
X-SBRS: 2.7
X-MesageID: 23160386
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,378,1589256000"; d="scan'208";a="23160386"
Date: Tue, 21 Jul 2020 13:53:27 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: <paul@xen.org>
Subject: Re: vPT rework (and timer mode)
Message-ID: <20200721115327.GO7191@Air-de-Roger>
References: <20200701090210.GN735@Air-de-Roger>
 <f89a158a-416e-1939-597a-075ff97f2b02@suse.com>
 <af13fa01-db36-784d-dfaf-b9905defc7fd@citrix.com>
 <007a01d65363$9ab7c1d0$d0274570$@xen.org>
 <20200706083131.GA735@Air-de-Roger>
 <007c01d65373$ad3c4140$07b4c3c0$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <007c01d65373$ad3c4140$07b4c3c0$@xen.org>
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: 'Andrew Cooper' <andrew.cooper3@citrix.com>,
 Igor Druzhinin <igor.druzhinin@citrix.com>, 'Wei Liu' <wl@xen.org>,
 'Jan Beulich' <jbeulich@suse.com>, xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Mon, Jul 06, 2020 at 09:58:53AM +0100, Paul Durrant wrote:
> > -----Original Message-----
> > From: Roger Pau Monné <roger.pau@citrix.com>
> > Sent: 06 July 2020 09:32
> > To: paul@xen.org
> > Cc: 'Andrew Cooper' <andrew.cooper3@citrix.com>; 'Jan Beulich' <jbeulich@suse.com>; xen-
> > devel@lists.xenproject.org; 'Wei Liu' <wl@xen.org>
> > Subject: Re: vPT rework (and timer mode)
> > 
> > On Mon, Jul 06, 2020 at 08:03:50AM +0100, Paul Durrant wrote:
> > > > -----Original Message-----
> > > > From: Andrew Cooper <andrew.cooper3@citrix.com>
> > > > Sent: 03 July 2020 16:03
> > > > To: Jan Beulich <jbeulich@suse.com>; Roger Pau Monné <roger.pau@citrix.com>
> > > > Cc: xen-devel@lists.xenproject.org; Wei Liu <wl@xen.org>; Paul Durrant <paul@xen.org>
> > > > Subject: Re: vPT rework (and timer mode)
> > > >
> > > > On 03/07/2020 15:50, Jan Beulich wrote:
> > > > > On 01.07.2020 11:02, Roger Pau Monné wrote:
> > > > >> It's my understanding that the purpose of pt_update_irq and
> > > > >> pt_intr_post is to attempt to implement the "delay for missed ticks"
> > > > >> mode, where Xen will accumulate timer interrupts if they cannot be
> > > > >> injected. As shown by the patch above, this is all broken when the
> > > > >> timer is added to a vCPU (pt->vcpu) different than the actual target
> > > > >> vCPU where the interrupt gets delivered (note this can also be a list
> > > > >> of vCPUs if routed from the IO-APIC using Fixed mode).
> > > > >>
> > > > >> I'm at lost at how to fix this so that virtual timers work properly
> > > > >> and we also keep the "delay for missed ticks" mode without doing a
> > > > >> massive rework and somehow keeping track of where injected interrupts
> > > > >> originated, which seems an overly complicated solution.
> > > > >>
> > > > >> My proposal hence would be to completely remove the timer_mode, and
> > > > >> just treat virtual timer interrupts as other interrupts, ie: they will
> > > > >> be injected from the callback (pt_timer_fn) and the vCPU(s) would be
> > > > >> kicked. Whether interrupts would get lost (ie: injected when a
> > > > >> previous one is still pending) depends on the contention on the
> > > > >> system. I'm not aware of any current OS that uses timer interrupts as
> > > > >> a way to track time. I think current OSes know the differences between
> > > > >> a timer counter and an event timer, and will use them appropriately.
> > > > > Fundamentally - why not, especially as this promises to be a
> > > > > simplification. The question we need to answer up front is whether
> > > > > we're happy to possibly break old OSes (presumably ones no-one
> > > > > ought to be using anymore these days, their support life cycles
> > > > > having long since ended).
> > > >
> > > > The various timer modes were all for compatibility and, IIRC, mostly
> > > > for Windows XP and older, which told time by counting the number of
> > > > timer interrupts.
> > > >
> > > > Paul - you might remember better than me?
> > >
> > > I think it is only quite recently that Windows has started favouring
> > > enlightened time sources rather than counting ticks, but an admin may
> > > still turn all the viridian enlightenments off, so just dropping ticks
> > > will probably still cause time to drift backwards.
> > 
> > Even when not using the viridian enlightenments, shouldn't Windows rely
> > on emulated time counters (or the TSC) rather than counting ticks?
> 
> Microsoft implementations... sensible... two different things.
> 
> > 
> > I guess I could give it a try with one of the emulated Windows versions
> > that we test on osstest.
> > 
> 
> Pick an old-ish version. I think osstest has a copy of Windows 7.

I tried on Windows 7 (with viridian disabled), setting
timer_mode="one_missed_tick_pending" and limiting the capacity of the
domain to 1 (1% CPU utilization) so that ticks start being missed, and
the clock does indeed start lagging behind.

When not using one_missed_tick_pending mode and limiting the capacity
to 1, the clock also lags a bit (I guess that at 1% CPU utilization
delayed ticks accumulate too much), but the clock doesn't seem to be
skewed as badly.

Both modes will catch up at some point; I assume Windows syncs time
periodically with the wallclock, but I don't think we want to resort
to that.

I will draft a plan for how to fix emulated timer event delivery while
keeping the accumulated-ticks mode, and send it to the list, as I
would like to get this fixed.

Roger.


From xen-devel-bounces@lists.xenproject.org Tue Jul 21 12:05:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jul 2020 12:05:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxr15-0006Dc-AB; Tue, 21 Jul 2020 12:05:03 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=CrlD=BA=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jxr14-0006DX-IE
 for xen-devel@lists.xenproject.org; Tue, 21 Jul 2020 12:05:02 +0000
X-Inumbo-ID: 66e35fae-cb4a-11ea-a0a5-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 66e35fae-cb4a-11ea-a0a5-12813bfff9fa;
 Tue, 21 Jul 2020 12:05:00 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id AF104B118;
 Tue, 21 Jul 2020 12:05:06 +0000 (UTC)
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] tools/xen-cpuid: use dashes consistently in feature names
Message-ID: <2bd92eaf-a29d-3fbf-e505-af118937cdda@suse.com>
Date: Tue, 21 Jul 2020 14:04:59 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

We've accumulated a mix of dashes and underscores - switch to
consistent naming, in the hope that future additions will follow suit.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/tools/misc/xen-cpuid.c
+++ b/tools/misc/xen-cpuid.c
@@ -75,7 +75,7 @@ static const char *const str_e1d[32] =
 
 static const char *const str_e1c[32] =
 {
-    [ 0] = "lahf_lm",    [ 1] = "cmp",
+    [ 0] = "lahf-lm",    [ 1] = "cmp",
     [ 2] = "svm",        [ 3] = "extapic",
     [ 4] = "cr8d",       [ 5] = "lzcnt",
     [ 6] = "sse4a",      [ 7] = "msse",
@@ -86,10 +86,10 @@ static const char *const str_e1c[32] =
     [16] = "fma4",       [17] = "tce",
     /* [18] */           [19] = "nodeid",
     /* [20] */           [21] = "tbm",
-    [22] = "topoext",    [23] = "perfctr_core",
-    [24] = "perfctr_nb", /* [25] */
+    [22] = "topoext",    [23] = "perfctr-core",
+    [24] = "perfctr-nb", /* [25] */
     [26] = "dbx",        [27] = "perftsc",
-    [28] = "pcx_l2i",    [29] = "monitorx",
+    [28] = "pcx-l2i",    [29] = "monitorx",
 };
 
 static const char *const str_7b0[32] =
@@ -97,7 +97,7 @@ static const char *const str_7b0[32] =
     [ 0] = "fsgsbase", [ 1] = "tsc-adj",
     [ 2] = "sgx",      [ 3] = "bmi1",
     [ 4] = "hle",      [ 5] = "avx2",
-    [ 6] = "fdp_exn",  [ 7] = "smep",
+    [ 6] = "fdp-exn",  [ 7] = "smep",
     [ 8] = "bmi2",     [ 9] = "erms",
     [10] = "invpcid",  [11] = "rtm",
     [12] = "pqm",      [13] = "depfpp",
@@ -120,21 +120,21 @@ static const char *const str_Da1[32] =
 
 static const char *const str_7c0[32] =
 {
-    [ 0] = "prefetchwt1",      [ 1] = "avx512_vbmi",
+    [ 0] = "prefetchwt1",      [ 1] = "avx512-vbmi",
     [ 2] = "umip",             [ 3] = "pku",
     [ 4] = "ospke",            [ 5] = "waitpkg",
-    [ 6] = "avx512_vbmi2",     [ 7] = "cet-ss",
+    [ 6] = "avx512-vbmi2",     [ 7] = "cet-ss",
     [ 8] = "gfni",             [ 9] = "vaes",
-    [10] = "vpclmulqdq",       [11] = "avx512_vnni",
-    [12] = "avx512_bitalg",
-    [14] = "avx512_vpopcntdq",
+    [10] = "vpclmulqdq",       [11] = "avx512-vnni",
+    [12] = "avx512-bitalg",
+    [14] = "avx512-vpopcntdq",
     [16] = "tsxldtrk",
 
     [22] = "rdpid",
     /* 24 */                   [25] = "cldemote",
     /* 26 */                   [27] = "movdiri",
     [28] = "movdir64b",
-    [30] = "sgx_lc",
+    [30] = "sgx-lc",
 };
 
 static const char *const str_e7d[32] =
@@ -157,7 +157,7 @@ static const char *const str_e8b[32] =
 
 static const char *const str_7d0[32] =
 {
-    [ 2] = "avx512_4vnniw", [ 3] = "avx512_4fmaps",
+    [ 2] = "avx512-4vnniw", [ 3] = "avx512-4fmaps",
     [ 4] = "fsrm",
 
     [ 8] = "avx512-vp2intersect", [ 9] = "srbds-ctrl",
@@ -169,13 +169,13 @@ static const char *const str_7d0[32] =
     [20] = "cet-ibt",
 
     [26] = "ibrsb",         [27] = "stibp",
-    [28] = "l1d_flush",     [29] = "arch_caps",
-    [30] = "core_caps",     [31] = "ssbd",
+    [28] = "l1d-flush",     [29] = "arch-caps",
+    [30] = "core-caps",     [31] = "ssbd",
 };
 
 static const char *const str_7a1[32] =
 {
-    /* 4 */                 [ 5] = "avx512_bf16",
+    /* 4 */                 [ 5] = "avx512-bf16",
 };
 
 static const struct {


From xen-devel-bounces@lists.xenproject.org Tue Jul 21 12:08:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jul 2020 12:08:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxr4d-0006NU-Sp; Tue, 21 Jul 2020 12:08:43 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=CrlD=BA=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jxr4d-0006NP-0G
 for xen-devel@lists.xenproject.org; Tue, 21 Jul 2020 12:08:43 +0000
X-Inumbo-ID: eaba306e-cb4a-11ea-a0a7-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id eaba306e-cb4a-11ea-a0a7-12813bfff9fa;
 Tue, 21 Jul 2020 12:08:41 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id E4BEEB118;
 Tue, 21 Jul 2020 12:08:47 +0000 (UTC)
Subject: Re: [PATCH v2 0/7] xen: credit2: limit the number of CPUs per runqueue
To: Dario Faggioli <dfaggioli@suse.com>
References: <159070133878.12060.13318432301910522647.stgit@Palanthas>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <fe24d520-7ef8-7dd7-6aa8-64df3a55b0bb@suse.com>
Date: Tue, 21 Jul 2020 14:08:40 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <159070133878.12060.13318432301910522647.stgit@Palanthas>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 28.05.2020 23:29, Dario Faggioli wrote:
> Dario Faggioli (7):
>       xen: credit2: factor cpu to runqueue matching in a function
>       xen: credit2: factor runqueue initialization in its own function.
>       xen: cpupool: add a back-pointer from a scheduler to its pool
>       xen: credit2: limit the max number of CPUs in a runqueue
>       xen: credit2: compute cpus per-runqueue more dynamically.
>       cpupool: create the 'cpupool sync' infrastructure
>       xen: credit2: rebalance the number of CPUs in the scheduler runqueues

I still have the last three patches here as well as "xen: credit2:
document that min_rqd is valid and ok to use" in my "waiting for
sufficient acks" folder. Would you mind indicating whether they should
stay there (and you will take care of pinging or whatever is
needed), or whether I may drop them (and you'll eventually resubmit)?

Thanks, Jan


From xen-devel-bounces@lists.xenproject.org Tue Jul 21 12:16:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jul 2020 12:16:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxrBm-0007Jt-2x; Tue, 21 Jul 2020 12:16:06 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UXjz=BA=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1jxrBk-0007Jo-T8
 for xen-devel@lists.xenproject.org; Tue, 21 Jul 2020 12:16:04 +0000
X-Inumbo-ID: f0f54991-cb4b-11ea-a0ab-12813bfff9fa
Received: from mail-wr1-f65.google.com (unknown [209.85.221.65])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f0f54991-cb4b-11ea-a0ab-12813bfff9fa;
 Tue, 21 Jul 2020 12:16:02 +0000 (UTC)
Received: by mail-wr1-f65.google.com with SMTP id 88so10699442wrh.3
 for <xen-devel@lists.xenproject.org>; Tue, 21 Jul 2020 05:16:02 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:date:from:to:cc:subject:message-id:references
 :mime-version:content-disposition:in-reply-to:user-agent;
 bh=qKIoboWBkto8IvigdE+OP1JMbIIchmAvhLpreVrtlpM=;
 b=dsJAAs1e7PsCn8kEdn2SOtMnCY+VwfZXgYJp0lxcYr5pt+gfNWoZqFUi3hSvSXJ80L
 gERN8UcxEbYmA3rSgxuUXXiiYxKlLSGzPUJJV/UgKsa4TVvNLk+MGaK2gBL0KQ/42+mS
 rJKPuDOf+vdZlG7RK/y7rY7ApKXXTjIhKaPLzc+29tzL5nMi+X+g3l1AhJTxXrIeGzLs
 Mm1y+91nWPjKIR/DRgMRgmqqu2XYyBm8YXb98quILWVlOEKuPy9BiRqR9gtGarUE4WjP
 pVxOW2sALD7H1hxuBQsKF/7EhhEaxc/O9STEhIkwC4TVk6TEeddQwf3wYH2A4uwklLgM
 FKfA==
X-Gm-Message-State: AOAM530ceCYT1wItpb3TYuuD4cGofngmsXb5pMd/1hD6MprS3dRdZQIV
 etEaig23XqPxXxmXu6qknrk=
X-Google-Smtp-Source: ABdhPJwZda6uAmwqdM1t1VxXAXTRW6NtUBm0eIZwU1D8TmAj6HqCmdfqU8TRn+jAeA9JraDZdHeOSw==
X-Received: by 2002:adf:de8d:: with SMTP id w13mr25877177wrl.129.1595333761872; 
 Tue, 21 Jul 2020 05:16:01 -0700 (PDT)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id v5sm3044835wmh.12.2020.07.21.05.16.01
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 21 Jul 2020 05:16:01 -0700 (PDT)
Date: Tue, 21 Jul 2020 12:16:00 +0000
From: Wei Liu <wl@xen.org>
To: Jan Beulich <jbeulich@suse.com>
Subject: Re: [PATCH] tools/xen-cpuid: use dashes consistently in feature names
Message-ID: <20200721121600.vdglmcv3m74qfnhw@liuwe-devbox-debian-v2>
References: <2bd92eaf-a29d-3fbf-e505-af118937cdda@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <2bd92eaf-a29d-3fbf-e505-af118937cdda@suse.com>
User-Agent: NeoMutt/20180716
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Tue, Jul 21, 2020 at 02:04:59PM +0200, Jan Beulich wrote:
> We've accumulated a mix of dashes and underscores - switch to
> consistent naming, in the hope that future additions will follow suit.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Wei Liu <wl@xen.org>


From xen-devel-bounces@lists.xenproject.org Tue Jul 21 12:21:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jul 2020 12:21:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxrGy-00088s-OY; Tue, 21 Jul 2020 12:21:28 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UXjz=BA=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1jxrGx-00088n-8g
 for xen-devel@lists.xenproject.org; Tue, 21 Jul 2020 12:21:27 +0000
X-Inumbo-ID: b1f13745-cb4c-11ea-a0ac-12813bfff9fa
Received: from mail-wr1-f67.google.com (unknown [209.85.221.67])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b1f13745-cb4c-11ea-a0ac-12813bfff9fa;
 Tue, 21 Jul 2020 12:21:25 +0000 (UTC)
Received: by mail-wr1-f67.google.com with SMTP id f18so21043614wrs.0;
 Tue, 21 Jul 2020 05:21:25 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:date:from:to:subject:message-id:references
 :mime-version:content-disposition:in-reply-to:user-agent;
 bh=SNfWVX5QtpauRhZzF7YTPhpX/9TOAv34XYBcKG6bEMA=;
 b=W+TxOR4CJVPuTDasmFIIOFR4VZ1FsR441fbrjihVH/hF6IA+UlI2s0aTtLNsrrImQW
 KWwxokhCALIbA+d7fTykjFsd2tl4JuE/IJMAAPhQ6EKfEby+KJWnQEPnwk57Vtpprsvt
 PJmarwc5ZsQn0yHX3sIv6x7ixl4qXuCgoPKS7ytgTqkTd+j1F6J6o5WA09xTEjx3BX0r
 rgCLNMzv52Mxz8CkjxCI5/ybURzZHYKROKQNk29xe60Ro5XK1bijk6f9Qhk2bYjjbzwl
 1ES/eNuOHEzNTS8YmLLvtqoE89s5Tp2Uj0N9Vd6jekU7oJPTp/LZtffHGDKPcUt8UZ4N
 /Acg==
X-Gm-Message-State: AOAM533dZUOFORa+/mJx9j785miPTIsr+RMf64Pg2aA9AgIi4ZvhWhuq
 YUNiC5yiTYMWC6RKv6nTtE0=
X-Google-Smtp-Source: ABdhPJyFSP1Hn9ZbFFG9cK9p8i7a/w3/TND65B/M0cuembEqoOaWJ9HSDLMQ2FkNKeZZWnjfkX9C3Q==
X-Received: by 2002:adf:e801:: with SMTP id o1mr27048159wrm.54.1595334084808; 
 Tue, 21 Jul 2020 05:21:24 -0700 (PDT)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id u2sm3102387wml.16.2020.07.21.05.21.23
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 21 Jul 2020 05:21:23 -0700 (PDT)
Date: Tue, 21 Jul 2020 12:21:22 +0000
From: Wei Liu <wl@xen.org>
To: Samuel Thibault <samuel.thibault@ens-lyon.org>,
 Juergen Gross <jgross@suse.com>, minios-devel@lists.xenproject.org,
 xen-devel@lists.xenproject.org, wl@xen.org
Subject: Re: [PATCH v2] mini-os: don't hard-wire xen internal paths
Message-ID: <20200721122122.ypuumlnwn4djwevw@liuwe-devbox-debian-v2>
References: <20200713084230.18177-1-jgross@suse.com>
 <20200718181827.7jrs5ilutt3jzp4i@function>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20200718181827.7jrs5ilutt3jzp4i@function>
User-Agent: NeoMutt/20180716
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Sat, Jul 18, 2020 at 08:18:27PM +0200, Samuel Thibault wrote:
> Juergen Gross, le lun. 13 juil. 2020 10:42:30 +0200, a ecrit:
> > Mini-OS shouldn't use Xen internal paths for building. Import the
> > needed paths from Xen and fall back to the current values only if
> > the import was not possible.
> > 
> > Signed-off-by: Juergen Gross <jgross@suse.com>
> 
> Reviewed-by: Samuel Thibault <samuel.thibault@ens-lyon.org>

Unfortunately this doesn't apply to staging.

Juergen, can you rebase?

Wei.


From xen-devel-bounces@lists.xenproject.org Tue Jul 21 12:22:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jul 2020 12:22:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxrIK-0008Gs-8A; Tue, 21 Jul 2020 12:22:52 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=8Oq7=BA=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jxrIJ-0008Gl-8P
 for xen-devel@lists.xenproject.org; Tue, 21 Jul 2020 12:22:51 +0000
X-Inumbo-ID: e4ae5799-cb4c-11ea-a0ac-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e4ae5799-cb4c-11ea-a0ac-12813bfff9fa;
 Tue, 21 Jul 2020 12:22:50 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=K1FgHKhAy02tHNnmHMmauO/aQZlmT1eGifFxe8fALEs=; b=ioKycbit0dQcPPT48h4XxnHZP+
 nIv8R0kEgry76tPnCRdNuOUfmZVF2cEvmKaK+F4XBgO37kTsT/j0cqIwpjWdQP1oImChkKyMZ8/Qr
 4IdtYrggNcmhOC7ksziFkoCI/x7/YYXhuIg8eUQLpUkYwdi19FLn2Dluu/IvbNO5ZyIk=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jxrIE-0005dF-M9; Tue, 21 Jul 2020 12:22:46 +0000
Received: from 54-240-197-232.amazon.com ([54.240.197.232]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jxrIE-0005Af-F6; Tue, 21 Jul 2020 12:22:46 +0000
Subject: Re: [PATCH] SUPPORT.md: Spell correctly Experimental
To: paul@xen.org, 'Andrew Cooper' <andrew.cooper3@citrix.com>,
 xen-devel@lists.xenproject.org
References: <20200720173635.1571-1-julien@xen.org>
 <4cc580c5-146f-6f83-bd91-a798763c261b@citrix.com>
 <627851f2-d28e-5c3b-6f1f-882e9eb02ed4@xen.org>
 <aae69fa5-4aee-781d-2f52-291d8fa948bd@citrix.com>
 <003801d65f2d$df17bc10$9d473430$@xen.org>
From: Julien Grall <julien@xen.org>
Message-ID: <042c7d69-a790-4f5d-56ba-fb64afefa4b8@xen.org>
Date: Tue, 21 Jul 2020 13:22:44 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:68.0)
 Gecko/20100101 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <003801d65f2d$df17bc10$9d473430$@xen.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: 'Stefano Stabellini' <sstabellini@kernel.org>, 'Wei Liu' <wl@xen.org>,
 'Julien Grall' <jgrall@amazon.com>, 'Ian Jackson' <ian.jackson@eu.citrix.com>,
 'George Dunlap' <george.dunlap@citrix.com>, 'Jan Beulich' <jbeulich@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi Paul,

On 21/07/2020 08:09, Paul Durrant wrote:
>> -----Original Message-----
>> From: Andrew Cooper <andrew.cooper3@citrix.com>
>> Sent: 20 July 2020 18:49
>> To: Julien Grall <julien@xen.org>; xen-devel@lists.xenproject.org
>> Cc: paul@xen.org; Julien Grall <jgrall@amazon.com>; George Dunlap <george.dunlap@citrix.com>; Ian
>> Jackson <ian.jackson@eu.citrix.com>; Jan Beulich <jbeulich@suse.com>; Stefano Stabellini
>> <sstabellini@kernel.org>; Wei Liu <wl@xen.org>
>> Subject: Re: [PATCH] SUPPORT.md: Spell correctly Experimental
>>
>> On 20/07/2020 18:48, Julien Grall wrote:
>>>
>>> On 20/07/2020 18:45, Andrew Cooper wrote:
>>>> On 20/07/2020 18:36, Julien Grall wrote:
>>>>> From: Julien Grall <jgrall@amazon.com>
>>>>>
>>>>> Signed-off-by: Julien Grall <jgrall@amazon.com>
>>>>
>>>> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
>>>>
>>>> Although I'd suggest the subject be changed rearranged to "Spell
>>>> Experimentally correctly".
>>>
>>> Did you intend to write "Experimental" rather than "Experimentally"?
>>
>> Erm, yes :)
>>
> 
> Since this is a small docs change...
> 
> Release-acked-by: Paul Durrant <paul@xen.org>
> 
> ...and please commit to staging-4.14 a.s.a.p.

Thanks. I have committed it to staging and staging-4.14.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Jul 21 12:24:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jul 2020 12:24:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxrKK-0008Pj-MJ; Tue, 21 Jul 2020 12:24:56 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=d7zm=BA=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jxrKI-0008Pd-Vg
 for xen-devel@lists.xenproject.org; Tue, 21 Jul 2020 12:24:55 +0000
X-Inumbo-ID: 2da11378-cb4d-11ea-a0ac-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2da11378-cb4d-11ea-a0ac-12813bfff9fa;
 Tue, 21 Jul 2020 12:24:53 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 22E6BB947;
 Tue, 21 Jul 2020 12:24:59 +0000 (UTC)
Subject: Re: [PATCH v2] mini-os: don't hard-wire xen internal paths
To: Wei Liu <wl@xen.org>, Samuel Thibault <samuel.thibault@ens-lyon.org>,
 minios-devel@lists.xenproject.org, xen-devel@lists.xenproject.org
References: <20200713084230.18177-1-jgross@suse.com>
 <20200718181827.7jrs5ilutt3jzp4i@function>
 <20200721122122.ypuumlnwn4djwevw@liuwe-devbox-debian-v2>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <3f8f2da2-552c-c651-5744-dfa01bd9821c@suse.com>
Date: Tue, 21 Jul 2020 14:24:51 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20200721122122.ypuumlnwn4djwevw@liuwe-devbox-debian-v2>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 21.07.20 14:21, Wei Liu wrote:
> On Sat, Jul 18, 2020 at 08:18:27PM +0200, Samuel Thibault wrote:
>> Juergen Gross, le lun. 13 juil. 2020 10:42:30 +0200, a ecrit:
>>> Mini-OS shouldn't use Xen internal paths for building. Import the
>>> needed paths from Xen and fall back to the current values only if
>>> the import was not possible.
>>>
>>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>
>> Reviewed-by: Samuel Thibault <samuel.thibault@ens-lyon.org>
> 
> Unfortunately this doesn't apply to staging.

Since when does mini-os.git have a staging branch?

> Juergen, can you rebase?

To what?


Juergen


From xen-devel-bounces@lists.xenproject.org Tue Jul 21 12:26:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jul 2020 12:26:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxrMA-00006W-Cm; Tue, 21 Jul 2020 12:26:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UXjz=BA=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1jxrM9-00006K-4x
 for xen-devel@lists.xen.org; Tue, 21 Jul 2020 12:26:49 +0000
X-Inumbo-ID: 722ee146-cb4d-11ea-850b-bc764e2007e4
Received: from mail-wr1-f65.google.com (unknown [209.85.221.65])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 722ee146-cb4d-11ea-850b-bc764e2007e4;
 Tue, 21 Jul 2020 12:26:48 +0000 (UTC)
Received: by mail-wr1-f65.google.com with SMTP id r12so20915771wrj.13
 for <xen-devel@lists.xen.org>; Tue, 21 Jul 2020 05:26:48 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:date:from:to:cc:subject:message-id:references
 :mime-version:content-disposition:in-reply-to:user-agent;
 bh=atZwIHpY/sgLajW2nfxSRAQzRfTgPG8P0iRMDyO5FOA=;
 b=TUFtV7VtGyCz2kZwOlC6nsuwma5lYG/5a9E9lfC7vJsfuZE6/toFTEc8rbsedagLIW
 fY5FBzBAsdzaMsWPOmNI93sfxl/DDNYIOszAWjd/lC30ivxjRbHFPVk0rJ3jaQ06/N6g
 ho7SPDHV7A3ZHeWt0ypWNJvKtuZho1HYie64QBhFZow6ADwh7fABkysG6517UJeiQaZw
 Yml6AaY3Z/JDaIpClKqdcCTyrpaaj2snT3bI2Vc7sNWnsU3TE/odsebUBud7WkWds0rF
 y+1DmNP7RfOK1n2orHzBak3Ra+9f5ytHfBH+zSwh5/TC/zDy8XMnD3jtgm1zUAZHLXN+
 dKTg==
X-Gm-Message-State: AOAM531T8tR0iyx4h831xCua3CDNK2/SXwNdNnGLw2gmTByf5uonhMQz
 QHot5SCoIBx00KpkSeBqxbI=
X-Google-Smtp-Source: ABdhPJyIgHV0sGwD2jTmshExbYkeQJPANEAT6pilLxmZTT0bGH2L2e+r1gwDG/PuEyaauqQM/CeJ6w==
X-Received: by 2002:adf:f542:: with SMTP id j2mr26558225wrp.61.1595334407315; 
 Tue, 21 Jul 2020 05:26:47 -0700 (PDT)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id u10sm3156403wml.29.2020.07.21.05.26.46
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 21 Jul 2020 05:26:46 -0700 (PDT)
Date: Tue, 21 Jul 2020 12:26:45 +0000
From: Wei Liu <wl@xen.org>
To: Elliott Mitchell <ehem+xen@m5p.com>
Subject: Re: [PATCH 1/2] Partially revert "Cross-compilation fixes."
Message-ID: <20200721122645.qcens4lqq5vcnmz4@liuwe-devbox-debian-v2>
References: <20200718033121.GA88869@mattapan.m5p.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20200718033121.GA88869@mattapan.m5p.com>
User-Agent: NeoMutt/20180716
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: dave@recoil.org, ian.jackson@eu.citrix.com, christian.lindig@citrix.com,
 wl@xen.org, xen-devel@lists.xen.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Fri, Jul 17, 2020 at 08:31:21PM -0700, Elliott Mitchell wrote:
> This partially reverts commit 16504669c5cbb8b195d20412aadc838da5c428f7.

Ok, so this commit is really old.

> 
> Signed-off-by: Elliott Mitchell <ehem+xen@m5p.com>
> ---
> Doesn't look like much of 16504669c5cbb8b195d20412aadc838da5c428f7
> actually remains, due to the passage of time.
> 
> Of the three, both Python and pygrub appear to build just fine when
> cross-compiling.  The OCaml portion is being troublesome, and this is
> going to cause bug reports elsewhere soon.  The OCaml portion, though,
> can already be disabled by setting OCAML_TOOLS=n, so it shouldn't need
> this extra form of disabling.

The reasoning here is fine by me, and it should be part of the commit
message.

I would also like to add a "tools:" prefix to the subject line:

  tools: Partially revert "Cross-compilation fixes."

If you agree with these changes, no action is required from you. I can
handle everything while committing.

Wei.

> ---
>  tools/Makefile | 3 ---
>  1 file changed, 3 deletions(-)
> 
> diff --git a/tools/Makefile b/tools/Makefile
> index 7b1f6c4d28..930a533724 100644
> --- a/tools/Makefile
> +++ b/tools/Makefile
> @@ -40,12 +40,9 @@ SUBDIRS-$(CONFIG_X86) += debugger/gdbsx
>  SUBDIRS-$(CONFIG_X86) += debugger/kdd
>  SUBDIRS-$(CONFIG_TESTS) += tests
>  
> -# These don't cross-compile
> -ifeq ($(XEN_COMPILE_ARCH),$(XEN_TARGET_ARCH))
>  SUBDIRS-y += python
>  SUBDIRS-y += pygrub
>  SUBDIRS-$(OCAML_TOOLS) += ocaml
> -endif
>  
>  ifeq ($(CONFIG_RUMP),y)
>  SUBDIRS-y := libs libxc xenstore
> -- 
> 2.20.1
> 
> 
> 
> -- 
> (\___(\___(\______          --=> 8-) EHM <=--          ______/)___/)___/)
>  \BS (    |         ehem+sigmsg@m5p.com  PGP 87145445         |    )   /
>   \_CS\   |  _____  -O #include <stddisclaimer.h> O-   _____  |   /  _/
> 8A19\___\_|_/58D2 7E3D DDF4 7BA6 <-PGP-> 41D1 B375 37D0 8714\_|_/___/5445
> 
> 


From xen-devel-bounces@lists.xenproject.org Tue Jul 21 12:26:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jul 2020 12:26:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxrLz-00005o-3b; Tue, 21 Jul 2020 12:26:39 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=bQ5W=BA=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jxrLx-00005i-K6
 for xen-devel@lists.xenproject.org; Tue, 21 Jul 2020 12:26:37 +0000
X-Inumbo-ID: 6b6e5238-cb4d-11ea-850b-bc764e2007e4
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6b6e5238-cb4d-11ea-850b-bc764e2007e4;
 Tue, 21 Jul 2020 12:26:36 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595334396;
 h=subject:to:cc:references:from:message-id:date:
 mime-version:in-reply-to:content-transfer-encoding;
 bh=+bTnkqjqrHPNyCrpDXVqs8mLvK7TcjZqSh+/Vw4wTqA=;
 b=XJPKZebmX9juLlohGLWNRVB2K0gO7OeQykHacLyds0Fl5+CJP/CeWnvq
 I1WbXkhRoXBaHdj1I0XIVXTxPWHon8Ggg7IOCCZW2nrOFWEBdPJelq5EA
 RWQR7auoyb2cmU3IXaxJnogr8oZK1FPfrJdYDToF9yW1kUkRuHvTB/WCs k=;
Authentication-Results: esa6.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: j4o6ulCCUFkkjEKHlstF7yy7CzgW0jl0cxriiYjKjK7oj52QcFB1qgv9nW/ABvuml/d3fr16OI
 BhyCn0f18lldNLoyskFBr072jEA2mFaGiykhobLnRVXBWaRXNtzETIZIUafmsWf1ECNq3L6hYX
 9jq7o51ZrfvcfynwXnSbM4+lZ1UPlQHe1rCJ6aHzDfknaAwf2vf0vS9MkGjSk1k6M1dFze0sML
 mYD9P8IwECnvEaEW46Mq2YTVbWEbcDWrQsPV/IjI5d1HVvOFCL5SHHyc8PaGvG1ES4m/apFsga
 Itw=
X-SBRS: 2.7
X-MesageID: 23162858
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,378,1589256000"; d="scan'208";a="23162858"
Subject: Re: [PATCH] tools/xen-cpuid: use dashes consistently in feature names
To: Wei Liu <wl@xen.org>, Jan Beulich <jbeulich@suse.com>
References: <2bd92eaf-a29d-3fbf-e505-af118937cdda@suse.com>
 <20200721121600.vdglmcv3m74qfnhw@liuwe-devbox-debian-v2>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <1c9369f4-1e89-ce44-dd39-94548b134ad0@citrix.com>
Date: Tue, 21 Jul 2020 13:26:10 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20200721121600.vdglmcv3m74qfnhw@liuwe-devbox-debian-v2>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 21/07/2020 13:16, Wei Liu wrote:
> On Tue, Jul 21, 2020 at 02:04:59PM +0200, Jan Beulich wrote:
>> We've grown to a mix of dashes and underscores - switch to consistent
>> naming in the hope that future additions will play by this.
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> Acked-by: Wei Liu <wl@xen.org>

Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>


From xen-devel-bounces@lists.xenproject.org Tue Jul 21 12:26:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jul 2020 12:26:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxrMC-00007s-Rc; Tue, 21 Jul 2020 12:26:52 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=FhFK=BA=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1jxrMB-00006K-9S
 for xen-devel@lists.xenproject.org; Tue, 21 Jul 2020 12:26:51 +0000
X-Inumbo-ID: 737aa13e-cb4d-11ea-850b-bc764e2007e4
Received: from mail-lj1-x231.google.com (unknown [2a00:1450:4864:20::231])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 737aa13e-cb4d-11ea-850b-bc764e2007e4;
 Tue, 21 Jul 2020 12:26:50 +0000 (UTC)
Received: by mail-lj1-x231.google.com with SMTP id h22so23831018lji.9
 for <xen-devel@lists.xenproject.org>; Tue, 21 Jul 2020 05:26:50 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=subject:to:cc:references:from:message-id:date:user-agent
 :mime-version:in-reply-to:content-transfer-encoding:content-language;
 bh=2Ln5ZtR9WdMPrbDe6t/l/6q1wU4V/hxBQeF49jsssf0=;
 b=UafC2psq0akwJM4oDjkNSdG1wfOdeXAFcy+Dn2R4xr47DVc3DPO1BsAjBMuZLRNrI8
 PA21WJekE9jzoVtmzNOGAP7+r5taZYvyZsqOKVn7+Wsskma5NeIdL7zWM1e+0QgIbsxJ
 YmzDoXndddm5EpuZI93LTBmUmFsZSluxTUquzOMwwXVNI7miTbG8Qevzm6oQ5vT8K0L8
 FjKRfGJsCHlaT/YswPTZSvq0WLujeniSEcmiXkhnz6ApCrpRNWaQdQ0smgQgpQ3huHhf
 SwY1A5ENgRR0EO6xbKaCObVfmTl7loezu4ybLxMpNW4K2cSJ7mURkFGzAOuOyXvwpzHl
 /u4w==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:subject:to:cc:references:from:message-id:date
 :user-agent:mime-version:in-reply-to:content-transfer-encoding
 :content-language;
 bh=2Ln5ZtR9WdMPrbDe6t/l/6q1wU4V/hxBQeF49jsssf0=;
 b=GcGnyxKg+oRKWRsMlXH2j1+jYJcBTHBYCGjvywyC+gkEuw5kgSdnY+lr5Lje0Skeor
 BsrOTRfmIac5XrdbbyAtsMld7Mm9y6S1wKlBfXmf2DAteddzuZ9qBoIb8OjZi8AFxnEV
 Ijz8JChL9FoymhCdQF+Sp0+mVbQXqVouyysNxTy3CCunyKtJf6GPVc8FcOm+r7gpQvMY
 8cSbyTVVsRKwcBxlNbvwxE7b9LuNToR3HOS+vqGLWR8Oh55HoVrvMHEtvlTzeIc+Bvzx
 mxDb6euQ7tNBWhPvXRhd1Np/6V+Q9wNv4FmDZLTpBN880T06qDU/dgEnmwXuOMNKdCP/
 0NCQ==
X-Gm-Message-State: AOAM533f+vpRSomQrJxOmQ8CyZC68zFpo0faTpkkf5I7y70RDDNvvPLl
 87BPSSGH97IsIrMNVr6h/FY=
X-Google-Smtp-Source: ABdhPJzVyRRe1SQBiA9ephoVzxBUoNlmt/84lr+A1kn7IDvcF/w9tkC3TyyChLFYt4Tm4Qyr0fP5kw==
X-Received: by 2002:a2e:b00a:: with SMTP id y10mr1984553ljk.266.1595334409315; 
 Tue, 21 Jul 2020 05:26:49 -0700 (PDT)
Received: from [192.168.1.2] ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id j19sm1689407lfe.91.2020.07.21.05.26.48
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 21 Jul 2020 05:26:48 -0700 (PDT)
Subject: Re: Virtio in Xen on Arm (based on IOREQ concept)
To: Stefano Stabellini <sstabellini@kernel.org>
References: <CAPD2p-nthLq5NaU32u8pVaa-ub=a9-LOPenupntTYdS-cu31jQ@mail.gmail.com>
 <20200717150039.GV7191@Air-de-Roger>
 <8f4e0c0d-b3d4-9dd3-ce20-639539321968@gmail.com>
 <alpine.DEB.2.21.2007201326060.32544@sstabellini-ThinkPad-T480s>
From: Oleksandr <olekstysh@gmail.com>
Message-ID: <4454c70e-47fa-46e8-90bf-1904b11318b1@gmail.com>
Date: Tue, 21 Jul 2020 15:26:42 +0300
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2007201326060.32544@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 7bit
Content-Language: en-US
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Julien Grall <julien@xen.org>, Oleksandr Andrushchenko <andr2000@gmail.com>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>,
 xen-devel <xen-devel@lists.xenproject.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 alex.bennee@linaro.org, Artem Mygaiev <joculator@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>


On 20.07.20 23:38, Stefano Stabellini wrote:

Hello Stefano

> On Fri, 17 Jul 2020, Oleksandr wrote:
>>>> *A few words about the solution:*
>>>> As it was mentioned at [1], in order to implement virtio-mmio Xen on Arm
>>> Any plans for virtio-pci? Arm seems to be moving to the PCI bus, and
>>> it would be very interesting from a x86 PoV, as I don't think
>>> virtio-mmio is something that you can easily use on x86 (or even use
>>> at all).
>> To be honest, I didn't consider virtio-pci so far. Julien's PoC (which we
>> are based on) provides support for the virtio-mmio transport,
>>
>> which is enough to start working with VirtIO and is not as complex as
>> virtio-pci. But that doesn't mean there is no way to support virtio-pci in Xen.
>>
>> I think this could be added in later steps. But the nearest target is the
>> virtio-mmio approach (of course, if the community agrees on that).
> Hi Julien, Oleksandr,
>
> Aside from complexity and ease of development, are there any other
> architectural reasons for using virtio-mmio?
>
> I am not asking because I intend to suggest doing something different
> (virtio-mmio is fine as far as I can tell). I am asking because there
> was a virtio-pci/virtio-mmio discussion recently in Linaro, and I
> would like to understand whether there are any implications from a Xen
> point of view that I don't yet know about.
Unfortunately, I can't say anything regarding virtio-pci/MSI. Could 
virtio-pci work in a virtual environment without PCI support (e.g. on 
various embedded platforms)?

It feels to me that both transports (the easy and lightweight virtio-mmio 
and the complex and powerful virtio-pci) will each have their consumers 
and are worth implementing in Xen.


>
> For instance, what's your take on notifications with virtio-mmio? How
> are they modelled today? Are they good enough or do we need MSIs?

Notifications are sent from the device (backend) to the driver (frontend) 
using interrupts. An additional DM function, 
xendevicemodel_set_irq_level(), was introduced for that purpose; it 
results in a vgic_inject_irq() call.

Currently, if the device wants to notify a driver, it should trigger the 
interrupt by calling that function twice (first with a high level, then 
with a low level).


-- 
Regards,

Oleksandr Tyshchenko



From xen-devel-bounces@lists.xenproject.org Tue Jul 21 12:32:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jul 2020 12:32:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxrR4-00017X-Fx; Tue, 21 Jul 2020 12:31:54 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=8Oq7=BA=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jxrR2-00017S-OJ
 for xen-devel@lists.xenproject.org; Tue, 21 Jul 2020 12:31:52 +0000
X-Inumbo-ID: 27787102-cb4e-11ea-850b-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 27787102-cb4e-11ea-850b-bc764e2007e4;
 Tue, 21 Jul 2020 12:31:51 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=pThINDEi4USEYZjLqHdBiARtlHIpPF8j4miktMKagvs=; b=5AJSDYVVkPu49P5yJVRd1Whxoq
 SA3vOwrUhaDlkA1tq298ipQSNBfNYBZIqGft4ENptT/QSEQd9vmtpPDCgDHBowbqPX/udx+COXCLn
 YjkLykQoFnMlhm2jT/rcrHa1NXrFBUkLKdTB2tdlEUPMQY8Q3q0XgWhNF1z6J/azV+6E=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jxrR0-0005q8-Kr; Tue, 21 Jul 2020 12:31:50 +0000
Received: from 54-240-197-232.amazon.com ([54.240.197.232]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jxrR0-0005gj-AE; Tue, 21 Jul 2020 12:31:50 +0000
Subject: Re: Virtio in Xen on Arm (based on IOREQ concept)
To: Stefano Stabellini <sstabellini@kernel.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <CAPD2p-nthLq5NaU32u8pVaa-ub=a9-LOPenupntTYdS-cu31jQ@mail.gmail.com>
 <20200717150039.GV7191@Air-de-Roger>
 <8f4e0c0d-b3d4-9dd3-ce20-639539321968@gmail.com>
 <20200720091722.GF7191@Air-de-Roger>
 <10eaec62-2c48-52ae-d113-1681c87e3d59@xen.org>
 <20200720102023.GH7191@Air-de-Roger>
 <alpine.DEB.2.21.2007201322060.32544@sstabellini-ThinkPad-T480s>
From: Julien Grall <julien@xen.org>
Message-ID: <390f3a67-5ca5-d9bd-f13a-2c5920bad45a@xen.org>
Date: Tue, 21 Jul 2020 13:31:48 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:68.0)
 Gecko/20100101 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2007201322060.32544@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Oleksandr Andrushchenko <andr2000@gmail.com>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>, Oleksandr <olekstysh@gmail.com>,
 xen-devel <xen-devel@lists.xenproject.org>, alex.bennee@linaro.org,
 Artem Mygaiev <joculator@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi Stefano,

On 20/07/2020 21:37, Stefano Stabellini wrote:
> On Mon, 20 Jul 2020, Roger Pau Monné wrote:
>> On Mon, Jul 20, 2020 at 10:40:40AM +0100, Julien Grall wrote:
>>>
>>>
>>> On 20/07/2020 10:17, Roger Pau Monné wrote:
>>>> On Fri, Jul 17, 2020 at 09:34:14PM +0300, Oleksandr wrote:
>>>>> On 17.07.20 18:00, Roger Pau Monné wrote:
>>>>>> On Fri, Jul 17, 2020 at 05:11:02PM +0300, Oleksandr Tyshchenko wrote:
>>>>>> Do you have any plans to try to upstream a modification to the VirtIO
>>>>>> spec so that grants (ie: abstract references to memory addresses) can
>>>>>> be used on the VirtIO ring?
>>>>>
>>>>> But VirtIO spec hasn't been modified as well as VirtIO infrastructure in the
>>>>> guest. Nothing to upstream.)
>>>>
>>>> OK, so there's no intention to add grants (or a similar interface) to
>>>> the spec?
>>>>
>>>> I understand that you want to support unmodified VirtIO frontends, but
>>>> I also think that long term frontends could negotiate with backends on
>>>> the usage of grants in the shared ring, like any other VirtIO feature
>>>> negotiated between the frontend and the backend.
>>>>
>>>> This of course needs to be on the spec first before we can start
>>>> implementing it, and hence my question whether a modification to the
>>>> spec in order to add grants has been considered.
>>> The problem is not really the specification but the adoption in the
>>> ecosystem. A protocol based on grant tables would mostly only be used by
>>> Xen; therefore:
>>>     - It may be difficult to convince a proprietary OS vendor to invest
>>> resources in implementing the protocol
>>>     - It would be more difficult to move in/out of the Xen ecosystem.
>>>
>>> Both may slow the adoption of Xen in some areas.
>>
>> Right, just to be clear my suggestion wasn't to force the usage of
>> grants, but whether adding something along this lines was in the
>> roadmap, see below.
>>
>>> If one is interested in security, then it would be better to work with the
>>> other interested parties. I think it would be possible to use a virtual
>>> IOMMU for this purpose.
>>
>> Yes, I've also heard rumors about using the (I assume VirtIO) IOMMU in
>> order to protect what backends can map. This seems like a fine idea,
>> and would allow us to gain the lost security without having to do the
>> whole work ourselves.
>>
>> Do you know if there's anything published about this? I'm curious
>> about how and where in the system the VirtIO IOMMU is/should be
>> implemented.
> 
> Not yet (as far as I know), but we have just started some discussons on
> this topic within Linaro.
> 
> 
> You should also be aware that there is another proposal based on
> pre-shared-memory and memcpys to solve the virtio security issue:
> 
> https://marc.info/?l=linux-kernel&m=158807398403549
> 
> It would certainly be slower than the "virtio IOMMU" solution, but it
> would take far less time to develop and could work as a short-term
> stop-gap.

I don't think I agree with this blanket statement. In the case of "virtio 
IOMMU", you would potentially need to map/unmap pages on every request, 
which would result in a lot of back and forth to the hypervisor.

So it may turn out that pre-shared memory is faster on some setups.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Jul 21 12:39:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jul 2020 12:39:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxrYk-0001P0-Hv; Tue, 21 Jul 2020 12:39:50 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UXjz=BA=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1jxrYk-0001OZ-27
 for xen-devel@lists.xenproject.org; Tue, 21 Jul 2020 12:39:50 +0000
X-Inumbo-ID: 40565e04-cb4f-11ea-a0ae-12813bfff9fa
Received: from mail-wr1-f65.google.com (unknown [209.85.221.65])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 40565e04-cb4f-11ea-a0ae-12813bfff9fa;
 Tue, 21 Jul 2020 12:39:43 +0000 (UTC)
Received: by mail-wr1-f65.google.com with SMTP id y3so3697350wrl.4;
 Tue, 21 Jul 2020 05:39:43 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:date:from:to:cc:subject:message-id:references
 :mime-version:content-disposition:content-transfer-encoding
 :in-reply-to:user-agent;
 bh=yYk/n9bSJ9lpUEb/G+eXAQy/jJCRVpTKTHSuMovV7kY=;
 b=R4pKLxpzy5LZTjYYA67Jxg9l28RnXRsWOJwfNGtlQPS0AG8PZ3XFL8ENudop4UreCE
 WKjiLCItKpQ0mVGO6nqO38m0GXSBoyqr8R9XINTkTbSft7wq3/NOQ52mFizKDRU2YmOc
 rP2ZIEXWfGo9UWXfgclCdNZKtSx6FAocz0szVFW//5z1BGiC8iqIZBvCry4nfzQj2Al7
 D7sMqsSNrbn2xGG6lgiBx3/aTXanmad9Q5vW+4rLKo0Eulj4L1jUcOK0/NSsHhUntuQF
 iPASDnD5MTShcXy1IZ0APqIwlCpw/q0Zb14yZClseFH1SMLxpPYj9E/QrrNh+o8a7v6x
 Iv6Q==
X-Gm-Message-State: AOAM53295sUiV6mlLnVoAD5agYYio7ljrErSf/gT+2c/b9t3xITL18ky
 en5cFdSUNKqsXpRWTrWLtyU=
X-Google-Smtp-Source: ABdhPJxQFBI+NVd6cWdmZezxXmsyzRI73/sZyomF7mD/wbR25S6xZBigGFS8km4RSDSPly+vrl2J8g==
X-Received: by 2002:adf:9e8b:: with SMTP id a11mr4100912wrf.309.1595335182708; 
 Tue, 21 Jul 2020 05:39:42 -0700 (PDT)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id v15sm3239420wmh.24.2020.07.21.05.39.42
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 21 Jul 2020 05:39:42 -0700 (PDT)
Date: Tue, 21 Jul 2020 12:39:40 +0000
From: Wei Liu <wl@xen.org>
To: =?utf-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Subject: Re: [PATCH v2] mini-os: don't hard-wire xen internal paths
Message-ID: <20200721123940.blw3njlbpzbd5iia@liuwe-devbox-debian-v2>
References: <20200713084230.18177-1-jgross@suse.com>
 <20200718181827.7jrs5ilutt3jzp4i@function>
 <20200721122122.ypuumlnwn4djwevw@liuwe-devbox-debian-v2>
 <3f8f2da2-552c-c651-5744-dfa01bd9821c@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <3f8f2da2-552c-c651-5744-dfa01bd9821c@suse.com>
User-Agent: NeoMutt/20180716
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: minios-devel@lists.xenproject.org,
 Samuel Thibault <samuel.thibault@ens-lyon.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Tue, Jul 21, 2020 at 02:24:51PM +0200, Jürgen Groß wrote:
> On 21.07.20 14:21, Wei Liu wrote:
> > On Sat, Jul 18, 2020 at 08:18:27PM +0200, Samuel Thibault wrote:
> > > Juergen Gross, on Mon, 13 Jul 2020 10:42:30 +0200, wrote:
> > > > Mini-OS shouldn't use Xen internal paths for building. Import the
> > > > needed paths from Xen and fall back to the current values only if
> > > > the import was not possible.
> > > > 
> > > > Signed-off-by: Juergen Gross <jgross@suse.com>
> > > 
> > > Reviewed-by: Samuel Thibault <samuel.thibault@ens-lyon.org>
> > 
> > Unfortunately this doesn't apply to staging.
> 
> Since when does mini-os.git have a staging branch?
> 
> > Juergen, can you rebase?
> 
> To what?

Urgh, my bad! I thought this was a patch for xen.git. :-p

Wei.
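The import-with-fallback pattern described in the quoted patch summary can be sketched as a make fragment (a hedged sketch: the included file name and the variable names below are assumptions for illustration, not taken from the actual patch):

```make
# Pull path definitions from the Xen build system when available,
# and only fall back to built-in defaults otherwise.
-include $(XEN_ROOT)/config/Paths.mk
XEN_LIB_DIR    ?= /usr/local/lib/xen
XEN_SCRIPT_DIR ?= /usr/local/etc/xen/scripts
```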

> 
> 
> Juergen


From xen-devel-bounces@lists.xenproject.org Tue Jul 21 12:42:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jul 2020 12:42:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxrbV-0002AW-1T; Tue, 21 Jul 2020 12:42:41 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UXjz=BA=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1jxrbU-0002AR-4q
 for xen-devel@lists.xenproject.org; Tue, 21 Jul 2020 12:42:40 +0000
X-Inumbo-ID: a81425da-cb4f-11ea-a0ae-12813bfff9fa
Received: from mail-wm1-f68.google.com (unknown [209.85.128.68])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a81425da-cb4f-11ea-a0ae-12813bfff9fa;
 Tue, 21 Jul 2020 12:42:37 +0000 (UTC)
Received: by mail-wm1-f68.google.com with SMTP id w3so2717337wmi.4;
 Tue, 21 Jul 2020 05:42:37 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:date:from:to:cc:subject:message-id:references
 :mime-version:content-disposition:content-transfer-encoding
 :in-reply-to:user-agent;
 bh=tELJWANfqx4mlHBXTpXLWhk9GZg3z4LZLshklfe6VAQ=;
 b=Cp1vjdAsUVVDpjtVKHpUGBe4r/3z59eFIqn5WpGDzZOVy3rAQ5oI4DwqFrsPwoVByD
 svVEzmE4AT+2q56oMyVMYiqdUSpr1m5m3xqrt5BxaznpgGuAgHwZtHeG3YXFnWKf5+2L
 DreUH5L4OBpOmGgX8Is1fRCc0BUIr3D0xzwPFI9IauZ4qa8Tr7MN4C4JxWxuDV32+mZj
 p6QdUCVT6s/vivE2zBAUkS+bumazk1zUQxO/mUUAf3YgllDSxLJlLaOtNjrbJdxymK1D
 qCr2pbblpH6oXw1taiVVxHAIhhmInjDRcj0YJNo1yCj1HkGxmPCD0PJ6SW3SWoTNM+jl
 QKyw==
X-Gm-Message-State: AOAM530NLl0g+stKhvWx82gOvLRFTdbpuvzNwY5MOtOrFmFIiIHNBcw2
 8urQy1b/CcruULTefItveZY=
X-Google-Smtp-Source: ABdhPJysXW0HJF3DUf8MVNwlk735cjCrddyvm8JU58tgCSG0CcXRlr232Icr/zmIqv6CUSFjbNtfuA==
X-Received: by 2002:a1c:7ecb:: with SMTP id z194mr3552455wmc.12.1595335356790; 
 Tue, 21 Jul 2020 05:42:36 -0700 (PDT)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id y6sm37664262wrr.74.2020.07.21.05.42.36
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 21 Jul 2020 05:42:36 -0700 (PDT)
Date: Tue, 21 Jul 2020 12:42:34 +0000
From: Wei Liu <wl@xen.org>
To: =?utf-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Subject: Re: [PATCH v2] mini-os: don't hard-wire xen internal paths
Message-ID: <20200721124234.y2ly2xgxmqwhobdv@liuwe-devbox-debian-v2>
References: <20200713084230.18177-1-jgross@suse.com>
 <20200718181827.7jrs5ilutt3jzp4i@function>
 <20200721122122.ypuumlnwn4djwevw@liuwe-devbox-debian-v2>
 <3f8f2da2-552c-c651-5744-dfa01bd9821c@suse.com>
 <20200721123940.blw3njlbpzbd5iia@liuwe-devbox-debian-v2>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20200721123940.blw3njlbpzbd5iia@liuwe-devbox-debian-v2>
User-Agent: NeoMutt/20180716
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: minios-devel@lists.xenproject.org,
 Samuel Thibault <samuel.thibault@ens-lyon.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Tue, Jul 21, 2020 at 12:39:40PM +0000, Wei Liu wrote:
> On Tue, Jul 21, 2020 at 02:24:51PM +0200, Jürgen Groß wrote:
> > On 21.07.20 14:21, Wei Liu wrote:
> > > On Sat, Jul 18, 2020 at 08:18:27PM +0200, Samuel Thibault wrote:
> > > > Juergen Gross, on Mon, 13 Jul 2020 10:42:30 +0200, wrote:
> > > > > Mini-OS shouldn't use Xen internal paths for building. Import the
> > > > > needed paths from Xen and fall back to the current values only if
> > > > > the import was not possible.
> > > > > 
> > > > > Signed-off-by: Juergen Gross <jgross@suse.com>
> > > > 
> > > > Reviewed-by: Samuel Thibault <samuel.thibault@ens-lyon.org>
> > > 
> > > Unfortunately this doesn't apply to staging.
> > 
> > Since when does mini-os.git have a staging branch?
> > 
> > > Juergen, can you rebase?
> > 
> > To what?
> 
> Urgh, my bad! I thought this was a patch for xen.git. :-p

Applied to mini-os.git.

Wei.


From xen-devel-bounces@lists.xenproject.org Tue Jul 21 12:43:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jul 2020 12:43:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxrcO-0002HV-FA; Tue, 21 Jul 2020 12:43:36 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=FhFK=BA=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1jxrcM-0002HN-PB
 for xen-devel@lists.xenproject.org; Tue, 21 Jul 2020 12:43:34 +0000
X-Inumbo-ID: c89278a3-cb4f-11ea-850b-bc764e2007e4
Received: from mail-lf1-x12e.google.com (unknown [2a00:1450:4864:20::12e])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c89278a3-cb4f-11ea-850b-bc764e2007e4;
 Tue, 21 Jul 2020 12:43:33 +0000 (UTC)
Received: by mail-lf1-x12e.google.com with SMTP id 140so1196643lfi.5
 for <xen-devel@lists.xenproject.org>; Tue, 21 Jul 2020 05:43:33 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=subject:to:cc:references:from:message-id:date:user-agent
 :mime-version:in-reply-to:content-transfer-encoding:content-language;
 bh=3mR0ctUuSSRgosKIoD17ZxqHuQMrC4nwz7Ef6VXJ+nQ=;
 b=rdXf+svpvwuEI/CuJCw3nI6LayiudL8PqiyAzywoyvWk7gFlJ7v1EfZjvL601y+X3Z
 0wwpVilMMry5KWmNrav1utFP0DPO9kDoSk2ETYe0DpN85HYO1F1LGKsRPAJ5w5c4kbTF
 8AID2hZykKOBl/UlHAotkH5jqkiAHnM0lFGPnCpVeh3bTmoeuA0NkHLbmbpL/rOHOtIm
 n40BIxe8UmVWoyC0CREq532tUc/a7Pe7sK50b2yLwz7kfvei+FwDSfJtQc5qmAK1rKk3
 frgalfpVtXWMelLRds4J0IZFdoDJIh/XGboeQtvWP5dyib2FdA0T2/hSkpWKv5EeDdg5
 yCbw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:subject:to:cc:references:from:message-id:date
 :user-agent:mime-version:in-reply-to:content-transfer-encoding
 :content-language;
 bh=3mR0ctUuSSRgosKIoD17ZxqHuQMrC4nwz7Ef6VXJ+nQ=;
 b=N1q0ZSgA3LqNkqq1Z87gIijxIdsJdRw13yA3oZj8sC8iwuyCWWXCdZKXeCfg2L3yW3
 H9z1nJ8ATFFuERxOSQl0UNLQQgG9fTeU67vXjXWKsyDtZ12s5IjPKcn+pT2clO6ndsH5
 lRdf9ZAC8qlkJiMt+TRFjB86hWTYTEDUSF7ibcgK/LF8m7hUMj/TtWMt/at+dQhc+sb7
 KZwNyXYX0jd+ah//05sImi8oFR2bUfgoTMQUzfajFFc90p5cY4vd3Yw8LhMX0V+3gDnp
 GUf3iXc2lJ5L2UZSNl3WtLJ9/3LfcC+yux0rLvB5yARZMTlOBxoFZv+SBMJGOi2Lfb1I
 SGQw==
X-Gm-Message-State: AOAM533z3e/Ec1LA3DIOyPXE+4YyMmoxaXDIDTLvEdFZ0K18OYi9ezUl
 oDidILxP6U7jfpuESc0/hGE=
X-Google-Smtp-Source: ABdhPJw6gLy02g+vtXhYCE9nq1p1RuFoMhPZvd+pKzSKLl1LuveCY5q0ATbhB5AaQfAMeQcHZdDOhA==
X-Received: by 2002:a19:7404:: with SMTP id v4mr13412036lfe.93.1595335412068; 
 Tue, 21 Jul 2020 05:43:32 -0700 (PDT)
Received: from [192.168.1.2] ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id q3sm4382632ljm.22.2020.07.21.05.43.31
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 21 Jul 2020 05:43:31 -0700 (PDT)
Subject: Re: Virtio in Xen on Arm (based on IOREQ concept)
To: Stefano Stabellini <sstabellini@kernel.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <CAPD2p-nthLq5NaU32u8pVaa-ub=a9-LOPenupntTYdS-cu31jQ@mail.gmail.com>
 <20200717150039.GV7191@Air-de-Roger>
 <8f4e0c0d-b3d4-9dd3-ce20-639539321968@gmail.com>
 <20200720091722.GF7191@Air-de-Roger>
 <be3fc8de-5582-8fd0-52cd-0cbfbfa96859@gmail.com>
 <20200720110950.GJ7191@Air-de-Roger>
 <alpine.DEB.2.21.2007201330070.32544@sstabellini-ThinkPad-T480s>
From: Oleksandr <olekstysh@gmail.com>
Message-ID: <7f3a558f-e539-17bb-c8da-2d95d5578221@gmail.com>
Date: Tue, 21 Jul 2020 15:43:30 +0300
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2007201330070.32544@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit
Content-Language: en-US
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Oleksandr Andrushchenko <andr2000@gmail.com>,
 xen-devel <xen-devel@lists.xenproject.org>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>, Julien Grall <julien@xen.org>,
 Artem Mygaiev <joculator@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>


On 20.07.20 23:40, Stefano Stabellini wrote:

Hello Stefano

> On Mon, 20 Jul 2020, Roger Pau Monné wrote:
>> On Mon, Jul 20, 2020 at 01:56:51PM +0300, Oleksandr wrote:
>>> On 20.07.20 12:17, Roger Pau Monné wrote:
>>>> On Fri, Jul 17, 2020 at 09:34:14PM +0300, Oleksandr wrote:
>>>>> On 17.07.20 18:00, Roger Pau Monné wrote:
>>>>>> On Fri, Jul 17, 2020 at 05:11:02PM +0300, Oleksandr Tyshchenko wrote:
>>>>> The other reasons are:
>>>>>
>>>>> 1. Automation. With the current backend implementation we don't need to
>>>>> pause the guest right after creating it, then go to the driver domain and
>>>>> spawn the backend, and after that go back to dom0 and unpause the guest.
>>>> xl devd should be capable of handling this for you on the driver
>>>> domain.
>>>>
>>>>> 2. Ability to detect when a guest with the involved frontend has gone away
>>>>> and properly release resources (guest destroy/reboot).
>>>>>
>>>>> 3. Ability to (re)connect to a newly created guest with the involved
>>>>> frontend (guest create/reboot).
>>>>>
>>>>> 4. What is more, with Xenstore support the backend is able to detect the
>>>>> dom_id it runs in and the guest dom_id, so there is no need to pass them
>>>>> via the command line.
>>>>>
>>>>>
>>>>> I will be happy to explain in detail after publishing the backend code.
>>>> As I'm not the one doing the work I certainly won't stop you from
>>>> using xenstore on the backend. I would certainly prefer if the backend
>>>> gets all the information it needs from the command line so that the
>>>> configuration data is completely agnostic to the transport layer used
>>>> to convey it.
>>>>
>>>> Thanks, Roger.
>>> Thank you for pointing out another possible way. I feel I need to investigate
>>> what "xl devd" (+ Argo?) is and how it works. If it is able to provide the
>>> backend with
>> That's what x86 at least uses to manage backends on driver domains: xl
>> devd will for example launch the QEMU instance required to handle a
>> Xen PV disk backend in user-space.
>>
>> Note that there's currently no support for Argo or any communication
>> channel different than xenstore, but I think it would be cleaner to
>> place the fetching of data from xenstore in xl devd and just pass
>> those as command line arguments to the VirtIO backend if possible. I
>> would prefer the VirtIO backend to be fully decoupled from xenstore.
>>
>> Note that for a backend running on dom0 there would be no need to
>> pass any data on xenstore, as the backend would be launched directly
>> from xl with the appropriate command line arguments.
> If I can paraphrase Roger's point, I think we all agree that xenstore is
> very convenient to use and great to get something up and running
> quickly. But it has several limitations, so it would be fantastic if we
> could kill two birds with one stone and find a way to deploy the system
> without xenstore, given that with virtio it is not actually needed except
> for very limited initial configurations. It would certainly be a big
> win. However, it is fair to say that the xenstore alternative, whatever
> that might be, needs work.

Well, why not, actually?

For example, the idea "to place the fetching of data from xenstore in xl 
devd and just pass those as command line arguments to the VirtIO backend 
if possible" sounds fine to me. But this needs additional investigation.
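
As a concrete illustration of the flow Roger describes (the disk syntax
below is illustrative and the device names are hypothetical; the exact
spelling would need checking against the xl documentation):

```
# In the driver domain: run the daemon that watches xenstore and
# launches/tears down backends (e.g. a QEMU qdisk instance) as guests
# with devices hosted here are created and destroyed.
xl devd

# In dom0: a guest config can then point a device at the driver domain,
# e.g. (hypothetical domain name "drvdom" and volume path):
#   disk = [ 'backend=drvdom,vdev=xvda,format=raw,target=/dev/vg/guest' ]
xl create guest.cfg
```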

-- 
Regards,

Oleksandr Tyshchenko



From xen-devel-bounces@lists.xenproject.org Tue Jul 21 13:14:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jul 2020 13:14:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxs6J-00051l-9O; Tue, 21 Jul 2020 13:14:31 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=JDR4=BA=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1jxs6H-00051g-AZ
 for xen-devel@lists.xenproject.org; Tue, 21 Jul 2020 13:14:29 +0000
X-Inumbo-ID: 1aa4952c-cb54-11ea-8510-bc764e2007e4
Received: from mail-wr1-x429.google.com (unknown [2a00:1450:4864:20::429])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1aa4952c-cb54-11ea-8510-bc764e2007e4;
 Tue, 21 Jul 2020 13:14:27 +0000 (UTC)
Received: by mail-wr1-x429.google.com with SMTP id b6so21141522wrs.11
 for <xen-devel@lists.xenproject.org>; Tue, 21 Jul 2020 06:14:27 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
 :mime-version:content-transfer-encoding:thread-index
 :content-language;
 bh=Oe3hdXJ/7JYTMaU8YwiS32I4gQHoup1kwePMxKDuzqc=;
 b=gSQeeNWxkmrCIaXDf8RlIdEkg3RuheSVYJGZvZUUOcE9V9+ng6of0buRkiKBjxdHT1
 q9FtejgTgeL/ogpC+YbS16YnhAEbJAz0ZhZ4ZSu1E89ovlbD2xC6IJwHtoOZ/BPT89rX
 PO+KQs6tjFFnrJR9vUUCleIjsiEjHp44l2lvQQZ3OK49JS/G/nIOOsxtpB5YuzX34njd
 8HahN/i3+DiUNm927YfGaQjwn98aRPAhBkcf8ZIUtokp7O+7tTR3DT72SivcTWpPKlHK
 VzuipiWHkfSYC+LXu8LnnDw9nlqCX7sds1+zeprbp/yJcj31uK8zxrjxW72hsMmbvaSm
 5rVA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
 :subject:date:message-id:mime-version:content-transfer-encoding
 :thread-index:content-language;
 bh=Oe3hdXJ/7JYTMaU8YwiS32I4gQHoup1kwePMxKDuzqc=;
 b=dV5PEHbjHfkTiEO8oSFsCSIFH6EE/R1r77P1hRam1nJxRWJaJO3jWMXxUDHB7hJN1z
 Qwm7Dgh2MnbAVXwGZUhRzmtu9+5IuVsSqV3OPpR4DmZ0svD3RERCMWg5Dyb4ivEzhoea
 AEfaJyu3NZT9m2kZvqXlu6d2FP8+vGgnKKEg/o1Qso82wNqUJJPTD7WC3yZkVTq6Fipo
 RNEwUC0aGDXQyeaYFVjHqkd3bWyUgJbcIh1I7FMNsj9SQ1QEXo8rlGf77izkKhv9Jwh3
 GmUoC1B1TxIZYT4w9jydjKAyJYu0ufcKAJZhzNxa5rjR5UjUkTuJBR+1wwVfIJ4YjJrb
 I5lg==
X-Gm-Message-State: AOAM531e7gI+0TJ1UGbYhKfSVUx9kTS+ftp6vtnSs5cGLJeaSeF08ixr
 mBS6nIboDSmTRsbsi13smsY=
X-Google-Smtp-Source: ABdhPJyxLQ+dyHO3WYqONEWDuQLNHOB4CU/sS93NdZMDlnMqbXntGQWDyVCkJsuLp4W04LU4WabMqg==
X-Received: by 2002:adf:f3cb:: with SMTP id g11mr9131227wrp.268.1595337266926; 
 Tue, 21 Jul 2020 06:14:26 -0700 (PDT)
Received: from CBGR90WXYV0 (54-240-197-233.amazon.com. [54.240.197.233])
 by smtp.gmail.com with ESMTPSA id l15sm36325555wro.33.2020.07.21.06.14.25
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Tue, 21 Jul 2020 06:14:25 -0700 (PDT)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
To: =?utf-8?Q?'Roger_Pau_Monn=C3=A9'?= <roger.pau@citrix.com>
References: <20200701090210.GN735@Air-de-Roger>
 <f89a158a-416e-1939-597a-075ff97f2b02@suse.com>
 <af13fa01-db36-784d-dfaf-b9905defc7fd@citrix.com>
 <007a01d65363$9ab7c1d0$d0274570$@xen.org> <20200706083131.GA735@Air-de-Roger>
 <007c01d65373$ad3c4140$07b4c3c0$@xen.org>
 <20200721115327.GO7191@Air-de-Roger>
In-Reply-To: <20200721115327.GO7191@Air-de-Roger>
Subject: RE: vPT rework (and timer mode)
Date: Tue, 21 Jul 2020 14:14:24 +0100
Message-ID: <004801d65f60$db70f300$9252d900$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="utf-8"
Content-Transfer-Encoding: quoted-printable
X-Mailer: Microsoft Outlook 16.0
Thread-Index: AQG+XT3UL7mtyhdM6X8x294LjZ1FmQGlQU2IApzoD8QCa8XKyQFu21K2ALaG9p4BeJJRUajvp4nQ
Content-Language: en-gb
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Reply-To: paul@xen.org
Cc: 'Andrew Cooper' <andrew.cooper3@citrix.com>,
 'Igor Druzhinin' <igor.druzhinin@citrix.com>, 'Wei Liu' <wl@xen.org>,
 'Jan Beulich' <jbeulich@suse.com>, xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> -----Original Message-----
> From: Roger Pau Monné <roger.pau@citrix.com>
> Sent: 21 July 2020 12:53
> To: paul@xen.org
> Cc: 'Andrew Cooper' <andrew.cooper3@citrix.com>; 'Jan Beulich'
> <jbeulich@suse.com>; xen-devel@lists.xenproject.org; 'Wei Liu'
> <wl@xen.org>; Igor Druzhinin <igor.druzhinin@citrix.com>
> Subject: Re: vPT rework (and timer mode)
> 
> On Mon, Jul 06, 2020 at 09:58:53AM +0100, Paul Durrant wrote:
> > > -----Original Message-----
> > > From: Roger Pau Monné <roger.pau@citrix.com>
> > > Sent: 06 July 2020 09:32
> > > To: paul@xen.org
> > > Cc: 'Andrew Cooper' <andrew.cooper3@citrix.com>; 'Jan Beulich'
> > > <jbeulich@suse.com>; xen-devel@lists.xenproject.org; 'Wei Liu'
> > > <wl@xen.org>
> > > Subject: Re: vPT rework (and timer mode)
> > >
> > > On Mon, Jul 06, 2020 at 08:03:50AM +0100, Paul Durrant wrote:
> > > > > -----Original Message-----
> > > > > From: Andrew Cooper <andrew.cooper3@citrix.com>
> > > > > Sent: 03 July 2020 16:03
> > > > > To: Jan Beulich <jbeulich@suse.com>; Roger Pau Monné
> > > > > <roger.pau@citrix.com>
> > > > > Cc: xen-devel@lists.xenproject.org; Wei Liu <wl@xen.org>; Paul
> > > > > Durrant <paul@xen.org>
> > > > > Subject: Re: vPT rework (and timer mode)
> > > > >
> > > > > On 03/07/2020 15:50, Jan Beulich wrote:
> > > > > > On 01.07.2020 11:02, Roger Pau Monné wrote:
> > > > > >> It's my understanding that the purpose of pt_update_irq and
> > > > > >> pt_intr_post is to attempt to implement the "delay for missed ticks"
> > > > > >> mode, where Xen will accumulate timer interrupts if they cannot be
> > > > > >> injected. As shown by the patch above, this is all broken when the
> > > > > >> timer is added to a vCPU (pt->vcpu) different than the actual target
> > > > > >> vCPU where the interrupt gets delivered (note this can also be a list
> > > > > >> of vCPUs if routed from the IO-APIC using Fixed mode).
> > > > > >>
> > > > > >> I'm at a loss at how to fix this so that virtual timers work properly
> > > > > >> and we also keep the "delay for missed ticks" mode without doing a
> > > > > >> massive rework and somehow keeping track of where injected interrupts
> > > > > >> originated, which seems an overly complicated solution.
> > > > > >>
> > > > > >> My proposal hence would be to completely remove the timer_mode, and
> > > > > >> just treat virtual timer interrupts as other interrupts, ie: they will
> > > > > >> be injected from the callback (pt_timer_fn) and the vCPU(s) would be
> > > > > >> kicked. Whether interrupts would get lost (ie: injected when a
> > > > > >> previous one is still pending) depends on the contention on the
> > > > > >> system. I'm not aware of any current OS that uses timer interrupts as
> > > > > >> a way to track time. I think current OSes know the differences between
> > > > > >> a timer counter and an event timer, and will use them appropriately.
> > > > > > Fundamentally - why not, the more that this promises to be a
> > > > > > simplification. The question we need to answer up front is whether
> > > > > > we're happy to possibly break old OSes (presumably ones no-one
> > > > > > ought to be using anymore these days, due to their support life
> > > > > > cycles long having ended).
> > > > >
> > > > > The various timer modes were all compatibility, and IIRC, mostly for
> > > > > Windows XP and older which told time by counting the number of timer
> > > > > interrupts.
> > > > >
> > > > > Paul - you might remember better than me?
> > > >
> > > > I think it is only quite recently that Windows has started favouring
> > > > enlightened time sources rather than counting ticks, but an admin may
> > > > still turn all the viridian enlightenments off, so just dropping ticks
> > > > will probably still cause time to drift backwards.
> > >
> > > Even when not using the viridian enlightenments, Windows should rely
> > > on emulated time counters (or the TSC) rather than counting ticks?
> >
> > Microsoft implementations... sensible... two different things.
> >
> > >
> > > I guess I could give it a try with one of the emulated Windows versions
> > > that we test on osstest.
> > >
> >
> > Pick an old-ish version. I think osstest has a copy of Windows 7.
> 
> Tried on Windows 7 (with viridian disabled) setting
> timer_mode="one_missed_tick_pending" and limiting the capacity of the
> domain to 1 (1% CPU utilization) in order to start missing ticks, and
> the clock does indeed start lagging behind.
> 
> When not using one_missed_tick_pending mode and limiting the capacity
> to 1 the clock also lags a bit (I guess with 1% CPU utilization
> delayed ticks accumulate too much), but the clock doesn't seem to be
> skewed that much.
> 
> Both modes will catch up at some point, I assume Windows does sync time
> periodically with the wallclock, but I don't think we want to resort
> to that.
> 

IIRC it normally syncs once an hour or thereabouts. PV drivers will
force a re-sync every 10 mins if they are installed.

> I will draft a plan about how to proceed in order to fix the emulated
> timers event delivery while keeping the accumulated ticks mode and
> send it to the list, as I would like to fix this.

Ok.

Cheers,

  Paul

> 
> Roger.
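
For reference, the setup Roger describes is plain xl guest configuration;
an illustrative HVM config fragment (example values only, not taken from
the thread) might look like:

```
# Illustrative xl guest config fragment for reproducing the experiment
type = "hvm"
name = "win7-test"
viridian = 0                             # disable the viridian enlightenments
timer_mode = "one_missed_tick_pending"   # collapse missed ticks into one pending tick
```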



From xen-devel-bounces@lists.xenproject.org Tue Jul 21 13:22:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jul 2020 13:22:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxsE6-0005tH-9M; Tue, 21 Jul 2020 13:22:34 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=8Oq7=BA=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jxsE4-0005tC-PH
 for xen-devel@lists.xenproject.org; Tue, 21 Jul 2020 13:22:32 +0000
X-Inumbo-ID: 3b582cba-cb55-11ea-8515-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3b582cba-cb55-11ea-8515-bc764e2007e4;
 Tue, 21 Jul 2020 13:22:31 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=J42ay2v2PlHuRzncOzUEnEmIxuRBBDaDZG2DhQxBNDM=; b=wS2jJ5E9CauENOggMRU8UwFAw3
 BlZT+C6W1sAI3g+c8m2LqqlMpXoIHfHnOK3J/IUXBL+ej9lI1JNBik6uz71JVALjYrfAw+QHjNcA/
 bwChoPEj/8mQbD4GPpCRoZF/x1lres2XBeyH4BAmKKF1kXffz8FzXxjg01BbRik/hT/o=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jxsE2-0006tn-V2; Tue, 21 Jul 2020 13:22:30 +0000
Received: from 54-240-197-224.amazon.com ([54.240.197.224]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jxsE2-0001Cc-Mu; Tue, 21 Jul 2020 13:22:30 +0000
Subject: Re: Virtio in Xen on Arm (based on IOREQ concept)
To: Stefano Stabellini <sstabellini@kernel.org>,
 Oleksandr <olekstysh@gmail.com>
References: <CAPD2p-nthLq5NaU32u8pVaa-ub=a9-LOPenupntTYdS-cu31jQ@mail.gmail.com>
 <20200717150039.GV7191@Air-de-Roger>
 <8f4e0c0d-b3d4-9dd3-ce20-639539321968@gmail.com>
 <alpine.DEB.2.21.2007201326060.32544@sstabellini-ThinkPad-T480s>
From: Julien Grall <julien@xen.org>
Message-ID: <56e512af-993b-1364-be56-fc4be5d88519@xen.org>
Date: Tue, 21 Jul 2020 14:22:28 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:68.0)
 Gecko/20100101 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2007201326060.32544@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Oleksandr Andrushchenko <andr2000@gmail.com>,
 Andre Przywara <andre.przywara@arm.com>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>,
 xen-devel <xen-devel@lists.xenproject.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 alex.bennee@linaro.org, Artem Mygaiev <joculator@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

(+ Andre for the vGIC).

Hi Stefano,

On 20/07/2020 21:38, Stefano Stabellini wrote:
> On Fri, 17 Jul 2020, Oleksandr wrote:
>>>> *A few word about solution:*
>>>> As it was mentioned at [1], in order to implement virtio-mmio Xen on Arm
>>> Any plans for virtio-pci? Arm seems to be moving to the PCI bus, and
>>> it would be very interesting from a x86 PoV, as I don't think
>>> virtio-mmio is something that you can easily use on x86 (or even use
>>> at all).
>>
>> Being honest I didn't consider virtio-pci so far. Julien's PoC (we are based
>> on) provides support for the virtio-mmio transport
>>
>> which is enough to start working around VirtIO and is not as complex as
>> virtio-pci. But it doesn't mean there is no way for virtio-pci in Xen.
>>
>> I think, this could be added in next steps. But the nearest target is
>> virtio-mmio approach (of course if the community agrees on that).

> Aside from complexity and easy-of-development, are there any other
> architectural reasons for using virtio-mmio?

 From the hypervisor PoV, the main/only difference between virtio-mmio 
and virtio-pci is that in the latter we need to forward PCI config space 
accesses to the device emulator. IOW, we would need to add support for 
vPCI. This shouldn't require much more work, but I didn't want to invest 
in it for the PoC.

Long term, I don't think we should tie Xen to any of the virtio 
protocols. We just need to offer facilities so users can easily build 
virtio backends for Xen.

> 
> I am not asking because I intend to suggest to do something different
> (virtio-mmio is fine as far as I can tell.) I am asking because recently
> there was a virtio-pci/virtio-mmio discussion recently in Linaro and I
> would like to understand if there are any implications from a Xen point
> of view that I don't yet know.

virtio-mmio is going to require more work in the toolstack because we 
would need to do the memory/interrupt allocation ourselves. In the case 
of virtio-pci, we only need to pass a range of memory/interrupts to the 
guest and let it decide the allocation.

Regarding virtio-pci vs virtio-mmio:
      - flexibility: virtio-mmio is a good fit when you know all your 
devices at boot. If you want to hotplug disk/network, then virtio-pci is 
going to be a better fit.
      - interrupts: I would expect each virtio-mmio device to have its 
own SPI interrupt. In the case of virtio-pci, legacy interrupts would be 
shared between all the PCI devices on the same host controller. This 
could possibly lead to performance issues if you have many devices. So 
for virtio-pci, we should consider MSIs.

> 
> For instance, what's your take on notifications with virtio-mmio? How
> are they modelled today?

The backend will notify the frontend using an SPI. The other way around 
(frontend -> backend) is based on an MMIO write.
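
A minimal sketch of that frontend-to-backend kick (this is illustrative code,
not Xen or Linux source; only the register offset comes from the virtio-mmio
layout, where a 32-bit write of the queue index to QueueNotify at offset 0x50
tells the backend to process the ring):

```c
/*
 * Illustrative virtio-mmio frontend "kick". In the Xen model discussed
 * here, the store below is the trapping MMIO access that the hypervisor
 * forwards to the IOREQ server running the backend.
 */
#include <stdint.h>

#define VIRTIO_MMIO_QUEUE_NOTIFY 0x050

static void virtio_mmio_kick(volatile uint8_t *base, uint32_t queue_index)
{
    /* 32-bit write of the queue index to the QueueNotify register. */
    *(volatile uint32_t *)(base + VIRTIO_MMIO_QUEUE_NOTIFY) = queue_index;
}
```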

We have an interface to allow the backend to control the interrupt 
level (i.e. low or high). However, the "old" vGIC doesn't handle 
level interrupts properly, so we would end up treating level interrupts 
as edge-triggered.

Technically, the problem already exists with HW interrupts, but the HW 
should fire again if the interrupt line is still asserted. Another 
issue is that the interrupt may fire even if the interrupt line was 
deasserted (IIRC this caused some interesting problems with the Arch timer).

I am a bit concerned that the issue will be more prominent for virtual 
interrupts. I know that we have some gross hack in the vpl011 to handle 
level interrupts. So maybe it is time to switch to the new vGIC?

> Are they good enough or do we need MSIs?

I am not sure whether virtio-mmio supports MSIs. However, for virtio-pci, 
MSIs are going to be useful to improve performance. This may mean 
exposing an ITS, so we would need to add guest support for it.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Jul 21 13:25:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jul 2020 13:25:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxsGf-000626-PC; Tue, 21 Jul 2020 13:25:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gK6X=BA=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jxsGe-000621-4o
 for xen-devel@lists.xenproject.org; Tue, 21 Jul 2020 13:25:12 +0000
X-Inumbo-ID: 9a269998-cb55-11ea-8517-bc764e2007e4
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9a269998-cb55-11ea-8517-bc764e2007e4;
 Tue, 21 Jul 2020 13:25:11 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595337911;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:content-transfer-encoding:in-reply-to;
 bh=kvMnBnD637r5MfENtMcCSipa5lRsSaOLZcF2+w597V0=;
 b=dWlokJki+fhh3ldXG9prEVSflquBLi+7nhzoGyuce2D6GWOpMOMup0Uw
 69Emt/pjBtbV9z7xx6M3BONTUPOixLqyEUyhgz9ZIsuHzs1L//wf5z3vN
 RAfj+PGJx2EimMsW4r7TlQgqiNvx2MvTFJ4dhdMkYXgZTv+Bl1z/0Jk5n 0=;
Authentication-Results: esa6.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: E6Y5e49RnceC1Th0BNuo4byoXceFDgylbC198bznDpKHvSBwYB8M2l/r/x0+2aWmWktBpLg0k/
 U2vyfCJG/F+erCpD3dQFAqIVy5vuOTv4iluf5eucB/FumSWp6rtlB47EpgRe3JmHHdnFY2FTbO
 ccCkytRbenOmF5mZiExRzcMYQzzkKRnnaXqXEpMr9C90tp93ouGAYNFnvvd4eyL+darwYZbb4M
 dDBzWlNxvdrvE771xlleihap7KzeDs+hwa0+LB6Sr+qYsJguQwM043PreDz+zuOEvFQ5zO+vSb
 uog=
X-SBRS: 2.7
X-MesageID: 23168896
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,378,1589256000"; d="scan'208";a="23168896"
Date: Tue, 21 Jul 2020 15:25:03 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Julien Grall <julien@xen.org>
Subject: Re: Virtio in Xen on Arm (based on IOREQ concept)
Message-ID: <20200721132503.GP7191@Air-de-Roger>
References: <CAPD2p-nthLq5NaU32u8pVaa-ub=a9-LOPenupntTYdS-cu31jQ@mail.gmail.com>
 <20200717150039.GV7191@Air-de-Roger>
 <8f4e0c0d-b3d4-9dd3-ce20-639539321968@gmail.com>
 <20200720091722.GF7191@Air-de-Roger>
 <10eaec62-2c48-52ae-d113-1681c87e3d59@xen.org>
 <20200720102023.GH7191@Air-de-Roger>
 <alpine.DEB.2.21.2007201322060.32544@sstabellini-ThinkPad-T480s>
 <390f3a67-5ca5-d9bd-f13a-2c5920bad45a@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <390f3a67-5ca5-d9bd-f13a-2c5920bad45a@xen.org>
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Oleksandr
 Andrushchenko <andr2000@gmail.com>, Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Oleksandr <olekstysh@gmail.com>, xen-devel <xen-devel@lists.xenproject.org>,
 alex.bennee@linaro.org, Artem Mygaiev <joculator@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Tue, Jul 21, 2020 at 01:31:48PM +0100, Julien Grall wrote:
> Hi Stefano,
> 
> On 20/07/2020 21:37, Stefano Stabellini wrote:
> > On Mon, 20 Jul 2020, Roger Pau Monné wrote:
> > > On Mon, Jul 20, 2020 at 10:40:40AM +0100, Julien Grall wrote:
> > > > 
> > > > 
> > > > On 20/07/2020 10:17, Roger Pau Monné wrote:
> > > > > On Fri, Jul 17, 2020 at 09:34:14PM +0300, Oleksandr wrote:
> > > > > > On 17.07.20 18:00, Roger Pau Monné wrote:
> > > > > > > On Fri, Jul 17, 2020 at 05:11:02PM +0300, Oleksandr Tyshchenko wrote:
> > > > > > > Do you have any plans to try to upstream a modification to the VirtIO
> > > > > > > spec so that grants (ie: abstract references to memory addresses) can
> > > > > > > be used on the VirtIO ring?
> > > > > > 
> > > > > > But VirtIO spec hasn't been modified as well as VirtIO infrastructure in the
> > > > > > guest. Nothing to upstream)
> > > > > 
> > > > > OK, so there's no intention to add grants (or a similar interface) to
> > > > > the spec?
> > > > > 
> > > > > I understand that you want to support unmodified VirtIO frontends, but
> > > > > I also think that long term frontends could negotiate with backends on
> > > > > the usage of grants in the shared ring, like any other VirtIO feature
> > > > > negotiated between the frontend and the backend.
> > > > > 
> > > > > This of course needs to be on the spec first before we can start
> > > > > implementing it, and hence my question whether a modification to the
> > > > > spec in order to add grants has been considered.
> > > > The problem is not really the specification but the adoption in the
> > > > ecosystem. A protocol based on grant-tables would mostly only be used by Xen
> > > > therefore:
> > > >     - It may be difficult to convince a proprietary OS vendor to invest
> > > > resource on implementing the protocol
> > > >     - It would be more difficult to move in/out of Xen ecosystem.
> > > > 
> > > > Both, may slow the adoption of Xen in some areas.
> > > 
> > > Right, just to be clear my suggestion wasn't to force the usage of
> > > grants, but whether adding something along this lines was in the
> > > roadmap, see below.
> > > 
> > > > If one is interested in security, then it would be better to work with the
> > > > other interested parties. I think it would be possible to use a virtual
> > > > IOMMU for this purpose.
> > > 
> > > Yes, I've also heard rumors about using the (I assume VirtIO) IOMMU in
> > > order to protect what backends can map. This seems like a fine idea,
> > > and would allow us to gain the lost security without having to do the
> > > whole work ourselves.
> > > 
> > > Do you know if there's anything published about this? I'm curious
> > > about how and where in the system the VirtIO IOMMU is/should be
> > > implemented.
> > 
> > Not yet (as far as I know), but we have just started some discussons on
> > this topic within Linaro.
> > 
> > 
> > You should also be aware that there is another proposal based on
> > pre-shared-memory and memcpys to solve the virtio security issue:
> > 
> > https://marc.info/?l=linux-kernel&m=158807398403549
> > 
> > It would be certainly slower than the "virtio IOMMU" solution but it
> > would take far less time to develop and could work as a short-term
> > stop-gap.
> 
> I don't think I agree with this blank statement. In the case of "virtio
> IOMMU", you would need to potentially map/unmap pages every request which
> would result to a lot of back and forth to the hypervisor.
> 
> So it may turn out that pre-shared-memory may be faster on some setup.

AFAICT you could achieve the same with an IOMMU: pre-share (ie: add to
the device IOMMU page tables) a bunch of pages and keep bouncing data
to/from them in order to interact with the device; that way you could
avoid the maps and unmaps (this is effectively how persistent grants
work in the blkif protocol).
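
The pre-share-and-bounce pattern Roger describes can be sketched as follows
(all names here are illustrative, not from blkif or any real backend: a pool
of pages is assumed to be granted/IOMMU-mapped to the device once, and request
data is copied through it instead of mapping guest pages per request):

```c
/* Sketch of bounce buffers over a pre-shared page pool. */
#include <stddef.h>
#include <string.h>

#define POOL_PAGES 16
#define PAGE_SIZE  4096u

/* Pages assumed to be permanently mapped into the device's IOMMU context. */
static unsigned char bounce_pool[POOL_PAGES][PAGE_SIZE];

/* Stage a request payload into a pre-shared page; returns bytes staged. */
static size_t bounce_stage(unsigned int slot, const void *data, size_t len)
{
    if (slot >= POOL_PAGES || len > PAGE_SIZE)
        return 0;
    memcpy(bounce_pool[slot], data, len);
    return len;
}

/* Copy a completed response back out of a pre-shared page. */
static size_t bounce_collect(unsigned int slot, void *out, size_t len)
{
    if (slot >= POOL_PAGES || len > PAGE_SIZE)
        return 0;
    memcpy(out, bounce_pool[slot], len);
    return len;
}
```

The copies trade CPU time for the hypercalls that per-request map/unmap
would otherwise cost, which is why the relative performance depends on
the setup, as Julien points out above.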

The thread referenced by Stefano seems to point out this shared memory
model is targeted for very limited hypervisors that don't have the
capacity to trap, decode and emulate accesses to memory?

I certainly don't know much about it.

Roger.


From xen-devel-bounces@lists.xenproject.org Tue Jul 21 13:31:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jul 2020 13:31:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxsMJ-0006r3-Ec; Tue, 21 Jul 2020 13:31:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UXjz=BA=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1jxsMH-0006qs-Sv
 for xen-devel@lists.xenproject.org; Tue, 21 Jul 2020 13:31:01 +0000
X-Inumbo-ID: 6a5c8411-cb56-11ea-851a-bc764e2007e4
Received: from mail-wr1-f68.google.com (unknown [209.85.221.68])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6a5c8411-cb56-11ea-851a-bc764e2007e4;
 Tue, 21 Jul 2020 13:31:01 +0000 (UTC)
Received: by mail-wr1-f68.google.com with SMTP id a15so6272520wrh.10
 for <xen-devel@lists.xenproject.org>; Tue, 21 Jul 2020 06:31:00 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:date:from:to:cc:subject:message-id:references
 :mime-version:content-disposition:in-reply-to:user-agent;
 bh=wA4StiJY3M6ySPkaaiLbbLONdObdxHPnE+eyooizazk=;
 b=FDdjsOvaHIpuu1paumcnsMndmyNmXefjEluwRRPYc2/ww2AcFUGh4vpkl6yHieiuWv
 9vz0sZP5nYzpm5gns0zYpdOBAoCqDHgvq7MeY8hUOaagqz1f5a7txJdbd/0Q1RzRZS/J
 u8oXadyCiiXGH0Eyhle2fN7WsQLp1dUzBgsnK7IwWJULbLzNpCvgUScIPZSB6Tc9xORB
 sNQb1rNC+RKZsNJw9MAGwzxct648APq9A3DGv7pU8pQ0wW04CESq5iE5MUjcYJexFw4l
 O9RdKzzG2vfNaYwtmL8m18KzsVOiCt8gatntI3ChRbraK039n8BWd6EQEeYKC9dffpg2
 FA8g==
X-Gm-Message-State: AOAM531JiGEykw4ttDJ+XfvUzsw6F824cILFsOBYWIq2p07moYDZV5yM
 Mf/fIHuZFJ6O6RMXiQEYQ30=
X-Google-Smtp-Source: ABdhPJyItXrtXExsO0etBJYKiKWT5KbujiZmvT39sRe/VNkGrSJyAh112MFfEZgvWC00SvHdJUsbOg==
X-Received: by 2002:a05:6000:1288:: with SMTP id
 f8mr18496045wrx.62.1595338260188; 
 Tue, 21 Jul 2020 06:31:00 -0700 (PDT)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id n189sm2974451wmf.38.2020.07.21.06.30.58
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 21 Jul 2020 06:30:59 -0700 (PDT)
Date: Tue, 21 Jul 2020 13:30:57 +0000
From: Wei Liu <wl@xen.org>
To: incoming+61544a64d0c2dc4555813e58f3810dd7@incoming.gitlab.com,
 xen-devel@lists.xenproject.org
Subject: Re: [PATCH 02/12] tools: switch XEN_LIBXEN* make variables to lower
 case (XEN_libxen*)
Message-ID: <20200721133057.mpwelpyut6nruxcp@liuwe-devbox-debian-v2>
References: <20200715162511.5941-1-ian.jackson@eu.citrix.com>
 <20200715162511.5941-4-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20200715162511.5941-4-ian.jackson@eu.citrix.com>
User-Agent: NeoMutt/20180716
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>, ian.jackson@eu.citrix.com,
 George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Wed, Jul 15, 2020 at 05:25:01PM +0100, Ian Jackson wrote:
> From: Juergen Gross <jgross@suse.com>
> 
> In order to harmonize names of library related make variables switch
> XEN_LIBXEN* names to XEN_libxen*, as all other related variables (e.g.
> CFLAGS_libxen*, SHDEPS_libxen*, ...) already use this pattern.
> 
> Rename XEN_LIBXC to XEN_libxenctrl, XEN_XENSTORE to XEN_libxenstore,
> XEN_XENLIGHT to XEN_libxenlight, XEN_XLUTIL to XEN_libxlutil, and
> XEN_LIBVCHAN to XEN_libxenvchan for the same reason.
> 
> Introduce XEN_libxenguest with the same value as XEN_libxenctrl.
> 
> No functional change.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>

Looks fine to me.

(Experiment to see how this mail shows up in Gitlab)

Wei.


From xen-devel-bounces@lists.xenproject.org Tue Jul 21 13:32:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jul 2020 13:32:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxsNw-0006we-QO; Tue, 21 Jul 2020 13:32:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=8Oq7=BA=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jxsNv-0006wX-HW
 for xen-devel@lists.xenproject.org; Tue, 21 Jul 2020 13:32:43 +0000
X-Inumbo-ID: a781b036-cb56-11ea-851a-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a781b036-cb56-11ea-851a-bc764e2007e4;
 Tue, 21 Jul 2020 13:32:42 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=g/oQMUvLsWaA3Azjhg0EuKclaN5GGnHC85kr1bPK/DM=; b=GMhIYrSNoGS3HeVAasn/mQT2r6
 0dmSZVxRsmPA3RoNRriDlwRArsH6XeBb8dH/Ycp9BQgomVZEYHSS96t2zVYan0TOQ3s7MK8sAB8zd
 wK9pBD+GTcRcpwMfBTyWZcJY+mqyNSvzsLFaQfI4Yf+ZMkGAL/8VlP0JSFuwpFV6xR4I=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jxsNt-00076z-HO; Tue, 21 Jul 2020 13:32:41 +0000
Received: from 54-240-197-232.amazon.com ([54.240.197.232]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jxsNt-0001bW-4E; Tue, 21 Jul 2020 13:32:41 +0000
Subject: Re: Virtio in Xen on Arm (based on IOREQ concept)
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <CAPD2p-nthLq5NaU32u8pVaa-ub=a9-LOPenupntTYdS-cu31jQ@mail.gmail.com>
 <20200717150039.GV7191@Air-de-Roger>
 <8f4e0c0d-b3d4-9dd3-ce20-639539321968@gmail.com>
 <20200720091722.GF7191@Air-de-Roger>
 <10eaec62-2c48-52ae-d113-1681c87e3d59@xen.org>
 <20200720102023.GH7191@Air-de-Roger>
 <alpine.DEB.2.21.2007201322060.32544@sstabellini-ThinkPad-T480s>
 <390f3a67-5ca5-d9bd-f13a-2c5920bad45a@xen.org>
 <20200721132503.GP7191@Air-de-Roger>
From: Julien Grall <julien@xen.org>
Message-ID: <606db7cd-123a-f824-7eb3-689d74215f47@xen.org>
Date: Tue, 21 Jul 2020 14:32:38 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:68.0)
 Gecko/20100101 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20200721132503.GP7191@Air-de-Roger>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Oleksandr Andrushchenko <andr2000@gmail.com>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>, Oleksandr <olekstysh@gmail.com>,
 xen-devel <xen-devel@lists.xenproject.org>, alex.bennee@linaro.org,
 Artem Mygaiev <joculator@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi Roger,

On 21/07/2020 14:25, Roger Pau Monné wrote:
> On Tue, Jul 21, 2020 at 01:31:48PM +0100, Julien Grall wrote:
>> Hi Stefano,
>>
>> On 20/07/2020 21:37, Stefano Stabellini wrote:
>>> On Mon, 20 Jul 2020, Roger Pau Monné wrote:
>>>> On Mon, Jul 20, 2020 at 10:40:40AM +0100, Julien Grall wrote:
>>>>>
>>>>>
>>>>> On 20/07/2020 10:17, Roger Pau Monné wrote:
>>>>>> On Fri, Jul 17, 2020 at 09:34:14PM +0300, Oleksandr wrote:
>>>>>>> On 17.07.20 18:00, Roger Pau Monné wrote:
>>>>>>>> On Fri, Jul 17, 2020 at 05:11:02PM +0300, Oleksandr Tyshchenko wrote:
>>>>>>>> Do you have any plans to try to upstream a modification to the VirtIO
>>>>>>>> spec so that grants (ie: abstract references to memory addresses) can
>>>>>>>> be used on the VirtIO ring?
>>>>>>>
>>>>>>> But VirtIO spec hasn't been modified as well as VirtIO infrastructure in the
>>>>>>> guest. Nothing to upsteam)
>>>>>>
>>>>>> OK, so there's no intention to add grants (or a similar interface) to
>>>>>> the spec?
>>>>>>
>>>>>> I understand that you want to support unmodified VirtIO frontends, but
>>>>>> I also think that long term frontends could negotiate with backends on
>>>>>> the usage of grants in the shared ring, like any other VirtIO feature
>>>>>> negotiated between the frontend and the backend.
>>>>>>
>>>>>> This of course needs to be on the spec first before we can start
>>>>>> implementing it, and hence my question whether a modification to the
>>>>>> spec in order to add grants has been considered.
>>>>> The problem is not really the specification but the adoption in the
>>>>> ecosystem. A protocol based on grant-tables would mostly only be used by Xen
>>>>> therefore:
>>>>>      - It may be difficult to convince a proprietary OS vendor to invest
>>>>> resource on implementing the protocol
>>>>>      - It would be more difficult to move in/out of Xen ecosystem.
>>>>>
>>>>> Both, may slow the adoption of Xen in some areas.
>>>>
>>>> Right, just to be clear my suggestion wasn't to force the usage of
>>>> grants, but whether adding something along this lines was in the
>>>> roadmap, see below.
>>>>
>>>>> If one is interested in security, then it would be better to work with the
>>>>> other interested parties. I think it would be possible to use a virtual
>>>>> IOMMU for this purpose.
>>>>
>>>> Yes, I've also heard rumors about using the (I assume VirtIO) IOMMU in
>>>> order to protect what backends can map. This seems like a fine idea,
>>>> and would allow us to gain the lost security without having to do the
>>>> whole work ourselves.
>>>>
>>>> Do you know if there's anything published about this? I'm curious
>>>> about how and where in the system the VirtIO IOMMU is/should be
>>>> implemented.
>>>
>>> Not yet (as far as I know), but we have just started some discussons on
>>> this topic within Linaro.
>>>
>>>
>>> You should also be aware that there is another proposal based on
>>> pre-shared-memory and memcpys to solve the virtio security issue:
>>>
>>> https://marc.info/?l=linux-kernel&m=158807398403549
>>>
>>> It would be certainly slower than the "virtio IOMMU" solution but it
>>> would take far less time to develop and could work as a short-term
>>> stop-gap.
>>
>> I don't think I agree with this blank statement. In the case of "virtio
>> IOMMU", you would need to potentially map/unmap pages every request which
>> would result to a lot of back and forth to the hypervisor.
>>
>> So it may turn out that pre-shared-memory may be faster on some setup.
> 
> AFAICT you could achieve the same with an IOMMU: pre-share (ie: add to
> the device IOMMU page tables) a bunch of pages and keep bouncing data
> to/from them in order to interact with the device, that way you could
> avoid the map and unmaps (and is effectively how persistent grants
> work in the blkif protocol).

Yes it is possible to do the same with the virtio IOMMU. I was rather 
taking issue with the statement that pre-shared-memory is going to be 
slower than the IOMMU case.

> 
> The thread referenced by Stefano seems to point out this shared memory
> model is targeted for very limited hypervisors that don't have the
> capacity to trap, decode and emulate accesses to memory?

Technically we are in the same case for Xen on Arm as we don't have the 
IOREQ support yet. But I think IOREQ is worthwhile as it would enable 
existing unmodified Linux with virtio driver to boot on Xen.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Jul 21 13:40:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jul 2020 13:40:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxsVW-0007vx-15; Tue, 21 Jul 2020 13:40:34 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gK6X=BA=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jxsVU-0007vs-Uq
 for xen-devel@lists.xenproject.org; Tue, 21 Jul 2020 13:40:32 +0000
X-Inumbo-ID: bdde83d0-cb57-11ea-a0c3-12813bfff9fa
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id bdde83d0-cb57-11ea-a0c3-12813bfff9fa;
 Tue, 21 Jul 2020 13:40:30 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595338831;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:content-transfer-encoding:in-reply-to;
 bh=SPIB3N7Zji4orJZ72+01jsbssZSW0KLgQQ/1UzQzNXU=;
 b=Lf0HFzg/Ebs8CfGE55xfJV6DOrLFwftG4zjSvGa8rXqjfS5a60yY2YCc
 YqgsRbHLxMeNYM0RMJ/QH41hUrYOGZXkIyw8qFd3M55Xp1kT6qmSTQX+U
 ULI5ySc7zog0vs7J8p4+4PHjD2gHI7c/vLOkakqGjhuh/e2uT+uwsmBPk Q=;
Authentication-Results: esa1.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: ZHyEnOfxKZU4xD+9/YrY5MSe8mDx8SlJyRPQYqCwoqqMrJSLYHwqrWwomPw3B5MxjCHQB/LJm4
 b5+Oksxpa80yc2X7lZy+ESDFb6adbHdRd50o5iVxvm0Q0zYN15iP9btZFHsgKFfMGj2jRu0hO/
 53ghvXTVJMq6TZxLBsCe82n9fDklAFi1DKFr+q5NDUoUWRSXCQkQDWuUhDQoaZnkxFaC54mlYq
 dmZl1FKxfNY+U+HYVDTn9EVd/ftgw1wTByXhKcI0aN4gYV2FuU7DP0YBcWUKL157VfA/eQ2hWx
 NFQ=
X-SBRS: 2.7
X-MesageID: 23175498
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,379,1589256000"; d="scan'208";a="23175498"
Date: Tue, 21 Jul 2020 15:40:20 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Julien Grall <julien@xen.org>
Subject: Re: Virtio in Xen on Arm (based on IOREQ concept)
Message-ID: <20200721134020.GQ7191@Air-de-Roger>
References: <CAPD2p-nthLq5NaU32u8pVaa-ub=a9-LOPenupntTYdS-cu31jQ@mail.gmail.com>
 <20200717150039.GV7191@Air-de-Roger>
 <8f4e0c0d-b3d4-9dd3-ce20-639539321968@gmail.com>
 <20200720091722.GF7191@Air-de-Roger>
 <10eaec62-2c48-52ae-d113-1681c87e3d59@xen.org>
 <20200720102023.GH7191@Air-de-Roger>
 <alpine.DEB.2.21.2007201322060.32544@sstabellini-ThinkPad-T480s>
 <390f3a67-5ca5-d9bd-f13a-2c5920bad45a@xen.org>
 <20200721132503.GP7191@Air-de-Roger>
 <606db7cd-123a-f824-7eb3-689d74215f47@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <606db7cd-123a-f824-7eb3-689d74215f47@xen.org>
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Oleksandr
 Andrushchenko <andr2000@gmail.com>, Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Oleksandr <olekstysh@gmail.com>, xen-devel <xen-devel@lists.xenproject.org>,
 alex.bennee@linaro.org, Artem Mygaiev <joculator@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Tue, Jul 21, 2020 at 02:32:38PM +0100, Julien Grall wrote:
> Hi Roger,
> 
> On 21/07/2020 14:25, Roger Pau Monné wrote:
> > On Tue, Jul 21, 2020 at 01:31:48PM +0100, Julien Grall wrote:
> > > Hi Stefano,
> > > 
> > > On 20/07/2020 21:37, Stefano Stabellini wrote:
> > > > On Mon, 20 Jul 2020, Roger Pau Monné wrote:
> > > > > On Mon, Jul 20, 2020 at 10:40:40AM +0100, Julien Grall wrote:
> > > > > > 
> > > > > > 
> > > > > > On 20/07/2020 10:17, Roger Pau Monné wrote:
> > > > > > > On Fri, Jul 17, 2020 at 09:34:14PM +0300, Oleksandr wrote:
> > > > > > > > On 17.07.20 18:00, Roger Pau Monné wrote:
> > > > > > > > > On Fri, Jul 17, 2020 at 05:11:02PM +0300, Oleksandr Tyshchenko wrote:
> > > > > > > > > Do you have any plans to try to upstream a modification to the VirtIO
> > > > > > > > > spec so that grants (ie: abstract references to memory addresses) can
> > > > > > > > > be used on the VirtIO ring?
> > > > > > > > 
> > > > > > > > But VirtIO spec hasn't been modified as well as VirtIO infrastructure in the
> > > > > > > > guest. Nothing to upsteam)
> > > > > > > 
> > > > > > > OK, so there's no intention to add grants (or a similar interface) to
> > > > > > > the spec?
> > > > > > > 
> > > > > > > I understand that you want to support unmodified VirtIO frontends, but
> > > > > > > I also think that long term frontends could negotiate with backends on
> > > > > > > the usage of grants in the shared ring, like any other VirtIO feature
> > > > > > > negotiated between the frontend and the backend.
> > > > > > > 
> > > > > > > This of course needs to be on the spec first before we can start
> > > > > > > implementing it, and hence my question whether a modification to the
> > > > > > > spec in order to add grants has been considered.
> > > > > > The problem is not really the specification but the adoption in the
> > > > > > ecosystem. A protocol based on grant-tables would mostly only be used by Xen
> > > > > > therefore:
> > > > > >      - It may be difficult to convince a proprietary OS vendor to invest
> > > > > > resource on implementing the protocol
> > > > > >      - It would be more difficult to move in/out of Xen ecosystem.
> > > > > > 
> > > > > > Both, may slow the adoption of Xen in some areas.
> > > > > 
> > > > > Right, just to be clear my suggestion wasn't to force the usage of
> > > > > grants, but whether adding something along this lines was in the
> > > > > roadmap, see below.
> > > > > 
> > > > > > If one is interested in security, then it would be better to work with the
> > > > > > other interested parties. I think it would be possible to use a virtual
> > > > > > IOMMU for this purpose.
> > > > > 
> > > > > Yes, I've also heard rumors about using the (I assume VirtIO) IOMMU in
> > > > > order to protect what backends can map. This seems like a fine idea,
> > > > > and would allow us to gain the lost security without having to do the
> > > > > whole work ourselves.
> > > > > 
> > > > > Do you know if there's anything published about this? I'm curious
> > > > > about how and where in the system the VirtIO IOMMU is/should be
> > > > > implemented.
> > > > 
> > > > Not yet (as far as I know), but we have just started some discussons on
> > > > this topic within Linaro.
> > > > 
> > > > 
> > > > You should also be aware that there is another proposal based on
> > > > pre-shared-memory and memcpys to solve the virtio security issue:
> > > > 
> > > > https://marc.info/?l=linux-kernel&m=158807398403549
> > > > 
> > > > It would be certainly slower than the "virtio IOMMU" solution but it
> > > > would take far less time to develop and could work as a short-term
> > > > stop-gap.
> > > 
> > > I don't think I agree with this blank statement. In the case of "virtio
> > > IOMMU", you would need to potentially map/unmap pages every request which
> > > would result to a lot of back and forth to the hypervisor.
> > > 
> > > So it may turn out that pre-shared-memory may be faster on some setup.
> > 
> > AFAICT you could achieve the same with an IOMMU: pre-share (ie: add to
> > the device IOMMU page tables) a bunch of pages and keep bouncing data
> > to/from them in order to interact with the device, that way you could
> > avoid the map and unmaps (and is effectively how persistent grants
> > work in the blkif protocol).
> 
> Yes it is possible to do the same with the virtio IOMMU. I was rather
> taking issue with the statement that pre-shared-memory is going to be
> slower than the IOMMU case.
> 
> > 
> > The thread referenced by Stefano seems to point out this shared memory
> > model is targeted for very limited hypervisors that don't have the
> > capacity to trap, decode and emulate accesses to memory?
> 
> Technically we are in the same case for Xen on Arm as we don't have the
> IOREQ support yet. But I think IOREQ is worthwhile as it would enable
> existing unmodified Linux with virtio driver to boot on Xen.

Yes, I fully agree.

Roger.


From xen-devel-bounces@lists.xenproject.org Tue Jul 21 13:44:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jul 2020 13:44:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxsYs-000865-QY; Tue, 21 Jul 2020 13:44:02 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=8Oq7=BA=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jxsYq-00085z-Vl
 for xen-devel@lists.xenproject.org; Tue, 21 Jul 2020 13:44:01 +0000
X-Inumbo-ID: 3b3d28fe-cb58-11ea-851c-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3b3d28fe-cb58-11ea-851c-bc764e2007e4;
 Tue, 21 Jul 2020 13:44:00 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=9q6mbDPCI6+EaVu7J0shuHZRkcB9oUqM/9IXzbH5U0Y=; b=No7xAmViNMWgrC6qpMrkHeyXWA
 /bigG+K8A910yUlxHQgjGMRPrvytc6i8ysDni8uOsIqcGli1wXPB/YFw00maGQ1OejGqERr2arMQM
 zHXBt+xZYxw8s3DOAL91WnKhy3kZbopEIH/gZZwAsdOHQ/mMPeL5DEKcHbs4pS+yXyvM=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jxsYo-0007Kv-Ub; Tue, 21 Jul 2020 13:43:58 +0000
Received: from 54-240-197-232.amazon.com ([54.240.197.232]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jxsYo-0002Py-KG; Tue, 21 Jul 2020 13:43:58 +0000
Subject: Re: Virtio in Xen on Arm (based on IOREQ concept)
To: Oleksandr <olekstysh@gmail.com>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <CAPD2p-nthLq5NaU32u8pVaa-ub=a9-LOPenupntTYdS-cu31jQ@mail.gmail.com>
 <20200717150039.GV7191@Air-de-Roger>
 <8f4e0c0d-b3d4-9dd3-ce20-639539321968@gmail.com>
 <alpine.DEB.2.21.2007201326060.32544@sstabellini-ThinkPad-T480s>
 <4454c70e-47fa-46e8-90bf-1904b11318b1@gmail.com>
From: Julien Grall <julien@xen.org>
Message-ID: <048c27bf-a9ab-054c-8955-6e75fb6c6ea5@xen.org>
Date: Tue, 21 Jul 2020 14:43:56 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:68.0)
 Gecko/20100101 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <4454c70e-47fa-46e8-90bf-1904b11318b1@gmail.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Oleksandr Andrushchenko <andr2000@gmail.com>,
 Andre Przywara <andre.przywara@arm.com>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>,
 xen-devel <xen-devel@lists.xenproject.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 alex.bennee@linaro.org, Artem Mygaiev <joculator@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

(+ Andre)

Hi Oleksandr,

On 21/07/2020 13:26, Oleksandr wrote:
> On 20.07.20 23:38, Stefano Stabellini wrote:
>> For instance, what's your take on notifications with virtio-mmio? How
>> are they modelled today? Are they good enough or do we need MSIs?
> 
> Notifications are sent from device (backend) to the driver (frontend) 
> using interrupts. Additional DM function was introduced for that purpose 
> xendevicemodel_set_irq_level() which results in vgic_inject_irq() call.
> 
> Currently, if device wants to notify a driver it should trigger the 
> interrupt by calling that function twice (high level at first, then low 
> level).

This doesn't look right to me. Assuming the interrupt is triggered when 
the line is high, the backend should only issue the hypercall once to 
set the level to high. Once the guest has finished processing all the 
notifications, the backend would then call the hypercall again to lower 
the interrupt line.

This means the interrupt should keep firing as long as the interrupt 
line is high.

It is quite possible that I took some shortcuts when implementing the 
hypercall, so this should be corrected before anyone starts to rely on it.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Jul 21 14:09:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jul 2020 14:09:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxsxX-0001ZS-24; Tue, 21 Jul 2020 14:09:31 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GCg/=BA=linaro.org=alex.bennee@srs-us1.protection.inumbo.net>)
 id 1jxsxW-0001ZN-8f
 for xen-devel@lists.xenproject.org; Tue, 21 Jul 2020 14:09:30 +0000
X-Inumbo-ID: ca0fee43-cb5b-11ea-8527-bc764e2007e4
Received: from mail-wr1-x441.google.com (unknown [2a00:1450:4864:20::441])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ca0fee43-cb5b-11ea-8527-bc764e2007e4;
 Tue, 21 Jul 2020 14:09:29 +0000 (UTC)
Received: by mail-wr1-x441.google.com with SMTP id y3so4001615wrl.4
 for <xen-devel@lists.xenproject.org>; Tue, 21 Jul 2020 07:09:29 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google;
 h=references:user-agent:from:to:cc:subject:in-reply-to:date
 :message-id:mime-version:content-transfer-encoding;
 bh=r0HmWCt4nxblC25GgHh9UAjjMnHc5Kaa2vfETZQ0jtY=;
 b=ZiPQdq0NiKf4KpFSUl36kX1FqYOz9aKk5UACfGo2BMMjG+UsCczan7gvWlr5pV4kHC
 94rxAbUqsyRUgFwoL3iO4C3zgH3wLB+asOliu434CI4qug3w64LMQlfpTitPHHrhKvU/
 sgnpfLJAS2H/HLGZyoDy9GNQs0S8k+Im9zoT9if5xU5i8jP5BNIOENPugA42YWrQJ1Bg
 4pCRW4B4nmDu2oOmHmq30kOkkTh3o6U3svDVULV4Lo6F2F7lnWgbYwcH8h4gHvAvY4BI
 l+PzIyYy2Vq4gDC+D3I1fJvWewq0Cy4I/22NrHTJLjtIhm7CHb3e/KSG+yqkexrtZYbv
 8i4A==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:references:user-agent:from:to:cc:subject
 :in-reply-to:date:message-id:mime-version:content-transfer-encoding;
 bh=r0HmWCt4nxblC25GgHh9UAjjMnHc5Kaa2vfETZQ0jtY=;
 b=MIfnLB2lNrATYppJOd+6rhrVjMT23RLjcHCrFUTwpPq9Bs/P5CJw6sp+w0mrPJITJm
 kUtlVWzQNkcT7himwxFNCQOAZyTU/qthOwhBsQQkaU33YrAZIp/qbS4gqLP/rcpG0k/I
 qGuIbmSJ3VcvF1fbojRfP8SMxL+Iv+mzG7M1M5UTP8j89w7Kza9KESHsmtiUuoQX1VBo
 cXcxAprX3p79YIt+xgEeqQ0AZfhOSaNY1vrRY8m2vO5/Wpcbu7Y6YyIwoFLbgQjRo25R
 gxQbv0bi7jgshPPGq/Vs6nUtlU1MxkUnLZ/+ctsH3f5xrxgN7uv+QgUpgxEikKQY64DY
 /WiQ==
X-Gm-Message-State: AOAM530Yn0BfhtydPX85a6F8P7+0I+JVR03ch2RwyDKaFKrbHuguhEHZ
 m6BH8g6M1BEWeoOHQ8j8vrGYTQ==
X-Google-Smtp-Source: ABdhPJz5RN1uk8UtJcFycycR+/K00IxcTYZ65Bwe7K/G0ZOQka56MWGo8j8bvWCDGNuzincfUXDw7w==
X-Received: by 2002:adf:bb14:: with SMTP id r20mr12296049wrg.366.1595340568432; 
 Tue, 21 Jul 2020 07:09:28 -0700 (PDT)
Received: from zen.linaroharston ([51.148.130.216])
 by smtp.gmail.com with ESMTPSA id c24sm12681373wrb.11.2020.07.21.07.09.26
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 21 Jul 2020 07:09:27 -0700 (PDT)
Received: from zen (localhost [127.0.0.1])
 by zen.linaroharston (Postfix) with ESMTP id 07C161FF7E;
 Tue, 21 Jul 2020 15:09:25 +0100 (BST)
References: <CAPD2p-nthLq5NaU32u8pVaa-ub=a9-LOPenupntTYdS-cu31jQ@mail.gmail.com>
 <20200717150039.GV7191@Air-de-Roger>
 <8f4e0c0d-b3d4-9dd3-ce20-639539321968@gmail.com>
 <20200720091722.GF7191@Air-de-Roger>
 <10eaec62-2c48-52ae-d113-1681c87e3d59@xen.org>
 <20200720102023.GH7191@Air-de-Roger>
 <alpine.DEB.2.21.2007201322060.32544@sstabellini-ThinkPad-T480s>
 <390f3a67-5ca5-d9bd-f13a-2c5920bad45a@xen.org>
User-agent: mu4e 1.5.5; emacs 28.0.50
From: Alex =?utf-8?Q?Benn=C3=A9e?= <alex.bennee@linaro.org>
To: Julien Grall <julien@xen.org>
Subject: Re: Virtio in Xen on Arm (based on IOREQ concept)
In-reply-to: <390f3a67-5ca5-d9bd-f13a-2c5920bad45a@xen.org>
Date: Tue, 21 Jul 2020 15:09:24 +0100
Message-ID: <87mu3tufhn.fsf@linaro.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Oleksandr Andrushchenko <andr2000@gmail.com>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>, Oleksandr <olekstysh@gmail.com>,
 xen-devel <xen-devel@lists.xenproject.org>,
 Roger Pau =?utf-8?Q?Monn?= =?utf-8?Q?=C3=A9?= <roger.pau@citrix.com>,
 Artem Mygaiev <joculator@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>


Julien Grall <julien@xen.org> writes:

> Hi Stefano,
>
> On 20/07/2020 21:37, Stefano Stabellini wrote:
>> On Mon, 20 Jul 2020, Roger Pau Monn=C3=A9 wrote:
>>> On Mon, Jul 20, 2020 at 10:40:40AM +0100, Julien Grall wrote:
>>>>
>>>>
>>>> On 20/07/2020 10:17, Roger Pau Monn=C3=A9 wrote:
>>>>> On Fri, Jul 17, 2020 at 09:34:14PM +0300, Oleksandr wrote:
>>>>>> On 17.07.20 18:00, Roger Pau Monn=C3=A9 wrote:
>>>>>>> On Fri, Jul 17, 2020 at 05:11:02PM +0300, Oleksandr Tyshchenko wrot=
e:
>>>>>>> Do you have any plans to try to upstream a modification to the Virt=
IO
>>>>>>> spec so that grants (ie: abstract references to memory addresses) c=
an
>>>>>>> be used on the VirtIO ring?
>>>>>>
>>>>>> But VirtIO spec hasn't been modified as well as VirtIO infrastructur=
e in the
>>>>>> guest. Nothing to upsteam)
>>>>>
>>>>> OK, so there's no intention to add grants (or a similar interface) to
>>>>> the spec?
>>>>>
>>>>> I understand that you want to support unmodified VirtIO frontends, but
>>>>> I also think that long term frontends could negotiate with backends on
>>>>> the usage of grants in the shared ring, like any other VirtIO feature
>>>>> negotiated between the frontend and the backend.
>>>>>
>>>>> This of course needs to be on the spec first before we can start
>>>>> implementing it, and hence my question whether a modification to the
>>>>> spec in order to add grants has been considered.
>>>> The problem is not really the specification but the adoption in the
>>>> ecosystem. A protocol based on grant-tables would mostly only be used =
by Xen
>>>> therefore:
>>>>     - It may be difficult to convince a proprietary OS vendor to invest
>>>> resource on implementing the protocol
>>>>     - It would be more difficult to move in/out of Xen ecosystem.
>>>>
>>>> Both, may slow the adoption of Xen in some areas.
>>>
>>> Right, just to be clear my suggestion wasn't to force the usage of
>>> grants, but to ask whether adding something along these lines was on the
>>> roadmap, see below.
>>>
>>>> If one is interested in security, then it would be better to work with the
>>>> other interested parties. I think it would be possible to use a virtual
>>>> IOMMU for this purpose.
>>>
>>> Yes, I've also heard rumors about using the (I assume VirtIO) IOMMU in
>>> order to protect what backends can map. This seems like a fine idea,
>>> and would allow us to gain the lost security without having to do the
>>> whole work ourselves.
>>>
>>> Do you know if there's anything published about this? I'm curious
>>> about how and where in the system the VirtIO IOMMU is/should be
>>> implemented.
>>
>> Not yet (as far as I know), but we have just started some discussions on
>> this topic within Linaro.
>>
>>
>> You should also be aware that there is another proposal based on
>> pre-shared-memory and memcpys to solve the virtio security issue:
>>
>> https://marc.info/?l=linux-kernel&m=158807398403549
>>
>> It would certainly be slower than the "virtio IOMMU" solution but it
>> would take far less time to develop and could work as a short-term
>> stop-gap.
>
> I don't think I agree with this blanket statement. In the case of "virtio
> IOMMU", you would potentially need to map/unmap pages on every request,
> which would result in a lot of back and forth to the hypervisor.

Can a virtio-iommu just set bounds, when a device is initialised, on
where its memory will be in the kernel address space?
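As a sketch of what I have in mind (an entirely hypothetical interface, not anything from the current virtio-iommu spec; the names `dma_window` and `access_allowed` are made up for illustration), the check would reduce to validating accesses against a window recorded once at init:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical sketch: the window the backend may touch is fixed once,
 * when the device is initialised, rather than updated per transaction. */
struct dma_window {
    uint64_t base;
    uint64_t size;
};

static struct dma_window window_for_device(uint64_t base, uint64_t size)
{
    struct dma_window w = { .base = base, .size = size };
    return w;
}

/* Later accesses are validated against the static window, so no
 * per-request map/unmap call to the hypervisor would be needed. */
static bool access_allowed(const struct dma_window *w,
                           uint64_t addr, uint64_t len)
{
    /* addr within the window, and [addr, addr+len) does not run past
     * its end; written to avoid unsigned overflow. */
    return addr >= w->base && len <= w->size &&
           addr - w->base <= w->size - len;
}
```

That would keep enforcement cheap on the fast path, at the cost of coarser protection than per-request mappings.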

> So it may turn out that pre-shared memory is faster on some setups.

Certainly, having to update the page permissions on every transaction is
going to be too slow for something that wants to avoid the performance
penalty of a bounce buffer.
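A back-of-the-envelope model of the trade-off (illustrative only; it assumes one map and one unmap hypercall per request for the grant approach, and a plain memcpy into a once-shared region for the stop-gap):

```c
#include <assert.h>
#include <string.h>

/* Toy model, not real Xen code: compare hypervisor transitions per
 * request for per-request grant mapping vs. a pre-shared bounce buffer. */

enum { MAP_HCALL = 1, UNMAP_HCALL = 1 };

/* Grant-per-request: map, use, unmap -> two hypercalls per request. */
static int grant_per_request_hcalls(int requests)
{
    return requests * (MAP_HCALL + UNMAP_HCALL);
}

/* Pre-shared region: the guest copies each request into memory shared
 * once at setup, so the steady-state hypercall count is zero and the
 * cost is the memcpy itself. */
static int preshared_bounce(char *shared, const char *req, size_t len)
{
    memcpy(shared, req, len);   /* the bounce-buffer copy */
    return 0;                   /* no hypercalls per transaction */
}
```

So which approach wins depends on whether the per-request hypercall overhead or the per-request copy dominates on a given setup.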

-- 
Alex Bennée


From xen-devel-bounces@lists.xenproject.org Tue Jul 21 14:15:14 2020
References: <CAPD2p-nthLq5NaU32u8pVaa-ub=a9-LOPenupntTYdS-cu31jQ@mail.gmail.com>
 <20200717150039.GV7191@Air-de-Roger>
 <8f4e0c0d-b3d4-9dd3-ce20-639539321968@gmail.com>
 <alpine.DEB.2.21.2007201326060.32544@sstabellini-ThinkPad-T480s>
 <56e512af-993b-1364-be56-fc4be5d88519@xen.org>
User-agent: mu4e 1.5.5; emacs 28.0.50
From: Alex Bennée <alex.bennee@linaro.org>
To: Julien Grall <julien@xen.org>
Subject: Re: Virtio in Xen on Arm (based on IOREQ concept)
In-reply-to: <56e512af-993b-1364-be56-fc4be5d88519@xen.org>
Date: Tue, 21 Jul 2020 15:15:02 +0100
Message-ID: <87k0yxuf89.fsf@linaro.org>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Oleksandr Andrushchenko <andr2000@gmail.com>,
 Andre Przywara <andre.przywara@arm.com>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>, Oleksandr <olekstysh@gmail.com>,
 xen-devel <xen-devel@lists.xenproject.org>,
 Roger Pau Monné <roger.pau@citrix.com>,
 Artem Mygaiev <joculator@gmail.com>


Julien Grall <julien@xen.org> writes:

> (+ Andre for the vGIC).
>
> Hi Stefano,
>
> On 20/07/2020 21:38, Stefano Stabellini wrote:
>> On Fri, 17 Jul 2020, Oleksandr wrote:
>>>>> *A few word about solution:*
>>>>> As it was mentioned at [1], in order to implement virtio-mmio Xen on Arm
>>>> Any plans for virtio-pci? Arm seems to be moving to the PCI bus, and
>>>> it would be very interesting from a x86 PoV, as I don't think
>>>> virtio-mmio is something that you can easily use on x86 (or even use
>>>> at all).
>>>
>>> To be honest, I haven't considered virtio-pci so far. Julien's PoC (which we
>>> are based on) provides support for the virtio-mmio transport,

>>> which is enough to start working with VirtIO and is not as complex as
>>> virtio-pci. But that doesn't mean there is no way to do virtio-pci in Xen.
>>>
>>> I think this could be added in later steps. But the nearest target is
>>> the virtio-mmio approach (of course, if the community agrees on that).
>
>> Aside from complexity and ease of development, are there any other
>> architectural reasons for using virtio-mmio?
>
<snip>
>>
>> For instance, what's your take on notifications with virtio-mmio? How
>> are they modelled today?
>
> The backend will notify the frontend using an SPI. The other way around
> (frontend -> backend) is based on an MMIO write.
>
> We have an interface to allow the backend to control the interrupt
> level (i.e. low, high). However, the "old" vGIC doesn't properly handle
> level-triggered interrupts. So we would end up treating level interrupts
> as edge-triggered.
>
> Technically, the problem already exists with HW interrupts, but the
> HW should fire again if the interrupt line is still asserted. Another
> issue is that the interrupt may fire even if the interrupt line was
> deasserted (IIRC this caused some interesting problems with the Arch timer).
>
> I am a bit concerned that the issue will be more prominent for virtual
> interrupts. I know that we have some gross hacks in the vpl011 to handle
> level interrupts. So maybe it is time to switch to the new vGIC?
>
>> Are they good enough or do we need MSIs?
>
> I am not sure whether virtio-mmio supports MSIs. However, for virtio-pci,
> MSIs are going to be useful to improve performance. This may mean
> exposing an ITS, so we would need to add support for it in the guest.

virtio-mmio doesn't support MSIs at the moment, although there have been
proposals to update the spec to allow them. At the moment the cost of
reading the ISR value and then writing an ack in vm_interrupt:

	/* Read and acknowledge interrupts */
	status = readl(vm_dev->base + VIRTIO_MMIO_INTERRUPT_STATUS);
	writel(status, vm_dev->base + VIRTIO_MMIO_INTERRUPT_ACK);

adds an extra vmexit cost to trap and emulate each access. Getting an MSI
via an exitless access to the GIC would be better, I think. I'm not quite
sure what the path for IRQs from Xen is.
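To make that cost concrete, here is a toy model of the sequence above (the register offsets match the virtio-mmio layout; the vmexit counter and the trapped_readl/trapped_writel helpers are purely illustrative, not real kernel or hypervisor code):

```c
#include <assert.h>
#include <stdint.h>

/* virtio-mmio interrupt register offsets, as in the spec's MMIO layout. */
#define VIRTIO_MMIO_INTERRUPT_STATUS 0x060
#define VIRTIO_MMIO_INTERRUPT_ACK    0x064

static int vmexits;          /* counts accesses that trap for emulation */
static uint32_t isr_status;  /* latched interrupt status in the device model */

static uint32_t trapped_readl(uint32_t offset)
{
    vmexits++;               /* an MMIO read traps to the hypervisor */
    return offset == VIRTIO_MMIO_INTERRUPT_STATUS ? isr_status : 0;
}

static void trapped_writel(uint32_t val, uint32_t offset)
{
    vmexits++;               /* the MMIO write traps too */
    if (offset == VIRTIO_MMIO_INTERRUPT_ACK)
        isr_status &= ~val;  /* acked bits are cleared */
}

/* The read-and-acknowledge sequence from the handler quoted above:
 * two trapped accesses, hence two vmexits, per interrupt. */
static uint32_t vm_interrupt_model(void)
{
    uint32_t status = trapped_readl(VIRTIO_MMIO_INTERRUPT_STATUS);
    trapped_writel(status, VIRTIO_MMIO_INTERRUPT_ACK);
    return status;
}
```

An exitless MSI ack would remove both of those traps from the interrupt path.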


>
> Cheers,


-- 
Alex Bennée


From xen-devel-bounces@lists.xenproject.org Tue Jul 21 14:22:06 2020
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-152058-mainreport@xen.org>
Subject: [qemu-mainline test] 152058: regressions - trouble: fail/pass/starved
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 21 Jul 2020 14:21:46 +0000

flight 152058 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/152058/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemuu-ovmf-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-qemuu-nested-intel 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-ovmf-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-debianhvm-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-qemuu-rhel6hvm-intel 10 redhat-install   fail REGR. vs. 151065
 test-amd64-i386-freebsd10-i386 11 guest-start            fail REGR. vs. 151065
 test-amd64-i386-freebsd10-amd64 11 guest-start           fail REGR. vs. 151065
 test-amd64-amd64-qemuu-nested-amd 10 debian-hvm-install  fail REGR. vs. 151065
 test-amd64-i386-qemuu-rhel6hvm-amd 10 redhat-install     fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-win7-amd64 10 windows-install  fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-win7-amd64 10 windows-install   fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-ws16-amd64 10 windows-install  fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-ws16-amd64 10 windows-install   fail REGR. vs. 151065

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 151065
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 151065
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-amd64-amd64-qemuu-freebsd11-amd64  2 hosts-allocate           starved n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  2 hosts-allocate           starved n/a

version targeted for testing:
 qemuu                af3d69058e09bede9900f266a618ed11f76f49f3
baseline version:
 qemuu                9e3903136d9acde2fb2dd9e967ba928050a6cb4a

Last test of basis   151065  2020-06-12 22:27:51 Z   38 days
Failing since        151101  2020-06-14 08:32:51 Z   37 days   52 attempts
Testing same since   152058  2020-07-20 23:07:24 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Ahmed Karaman <ahmedkhaledkaraman@gmail.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.m.mail@gmail.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Richardson <Alexander.Richardson@cl.cam.ac.uk>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Boettcher <alexander.boettcher@genode-labs.com>
  Alexander Bulekov <alxndr@bu.edu>
  Alexandre Mergnat <amergnat@baylibre.com>
  Alexey Kardashevskiy <aik@ozlabs.ru>
  Alexey Krasikov <alex-krasikov@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Allan Peramaki <aperamak@pp1.inet.fi>
  Andrew <andrew@daynix.com>
  Andrew Jones <drjones@redhat.com>
  Andrew Melnychenko <andrew@daynix.com>
  Ani Sinha <ani.sinha@nutanix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Ard Biesheuvel <ardb@kernel.org>
  Artyom Tarasenko <atar4qemu@gmail.com>
  Atish Patra <atish.patra@wdc.com>
  Aurelien Jarno <aurelien@aurel32.net>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Basil Salman <basil@daynix.com>
  Basil Salman <bsalman@redhat.com>
  Beata Michalska <beata.michalska@linaro.org>
  Bin Meng <bin.meng@windriver.com>
  Bin Meng <bmeng.cn@gmail.com>
  Cameron Esfahani <dirty@apple.com>
  Catherine A. Frederick <chocola@animebitch.es>
  Cathy Zhang <cathy.zhang@intel.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chenyi Qiang <chenyi.qiang@intel.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christophe de Dinechin <dinechin@redhat.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Cindy Lu <lulu@redhat.com>
  Claudio Fontana <cfontana@suse.de>
  Cleber Rosa <crosa@redhat.com>
  Colin Xu <colin.xu@intel.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniele Buono <dbuono@linux.vnet.ibm.com>
  David Carlier <devnexen@gmail.com>
  David Edmondson <david.edmondson@oracle.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Denis V. Lunev <den@openvz.org>
  Derek Su <dereksu@qnap.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Ed Robbins <E.J.C.Robbins@kent.ac.uk>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Emilio G. Cota <cota@braap.org>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  erik-smit <erik.lucas.smit@gmail.com>
  Evgeny Yakovlev <eyakovlev@virtuozzo.com>
  fangying <fangying1@huawei.com>
  Farhan Ali <alifm@linux.ibm.com>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frank Chang <frank.chang@sifive.com>
  Geoffrey McRae <geoff@hostfission.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Giuseppe Musacchio <thatlemon@gmail.com>
  Greg Kurz <groug@kaod.org>
  Guenter Roeck <linux@roeck-us.net>
  Gustavo Romero <gromero@linux.ibm.com>
  Halil Pasic <pasic@linux.ibm.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Howard Spoelstra <hsp.cat7@gmail.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Ian Jiang <ianjiang.ict@gmail.com>
  Igor Mammedov <imammedo@redhat.com>
  Jan Kiszka <jan.kiszka@siemens.com>
  Janne Grunau <j@jannau.net>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason Wang <jasowang@redhat.com>
  Jay Zhou <jianjay.zhou@huawei.com>
  Jean-Christophe Dubois <jcd@tribudubois.net>
  Jessica Clarke <jrtc27@jrtc27.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jingqi Liu <jingqi.liu@intel.com>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Joseph Myers <joseph@codesourcery.com>
  Josh Kunz <jkz@google.com>
  Juan Quintela <quintela@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Keqian Zhu <zhukeqian1@huawei.com>
  Kevin Wolf <kwolf@redhat.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Leif Lindholm <leif@nuviainc.com>
  Leonid Bloch <lb.workbox@gmail.com>
  Leonid Bloch <lbloch@janustech.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Qiang <liq3ea@gmail.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  lichun <lichun@ruijie.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  Like Xu <like.xu@linux.intel.com>
  Lingfeng Yang <lfy@google.com>
  Lingshan zhu <lingshan.zhu@intel.com>
  Liran Alon <liran.alon@oracle.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Luc Michel <luc.michel@greensocs.com>
  Lukas Straub <lukasstraub2@web.de>
  Luwei Kang <luwei.kang@intel.com>
  Maciej S. Szmigiero <maciej.szmigiero@oracle.com>
  Magnus Damm <magnus.damm@gmail.com>
  Mao Zhongyi <maozhongyi@cmss.chinamobile.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marcelo Tosatti <mtosatti@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Mario Smarduch <msmarduch@digitalocean.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Masahiro Yamada <masahiroy@kernel.org>
  Matus Kysel <mkysel@tachyum.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Maxime Coquelin <maxime.coquelin@redhat.com>
  Menno Lageman <menno.lageman@oracle.com>
  Michael Rolnik <mrolnik@gmail.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michele Denber <denber@mindspring.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Olaf Hering <olaf@aepfle.de>
  Pan Nengyuan <pannengyuan@huawei.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Radoslaw Biernacki <rad@semihalf.com>
  Raphael Norwitz <raphael.norwitz@nutanix.com>
  Reza Arbab <arbab@linux.ibm.com>
  Richard Henderson <richard.henderson@linaro.org>
  Richard W.M. Jones <rjones@redhat.com>
  Riku Voipio <riku.voipio@linaro.org>
  Robert Foley <robert.foley@linaro.org>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Roman Kagan <rkagan@virtuozzo.com>
  Roman Kagan <rvkagan@yandex-team.ru>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Sarah Harris <S.E.Harris@kent.ac.uk>
  Sebastian Rasmussen <sebras@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Tao Xu <tao3.xu@intel.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tiwei Bie <tiwei.bie@intel.com>
  Tong Ho <tong.ho@xilinx.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  WangBowen <bowen.wang@intel.com>
  Wei Huang <wei.huang2@amd.com>
  Wei Wang <wei.w.wang@intel.com>
  Wentong Wu <wentong.wu@intel.com>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xie Yongji <xieyongji@bytedance.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Ying Fang <fangying1@huawei.com>
  Yoshinori Sato <ysato@users.sourceforge.jp>
  Yuri Benditovich <yuri.benditovich@daynix.com>
  Zhang Chen <chen.zhang@intel.com>
  Zheng Chuan <zhengchuan@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       starved 
 test-amd64-amd64-qemuu-freebsd12-amd64                       starved 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 30328 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Jul 21 14:27:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jul 2020 14:27:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxtEe-0003SK-Aq; Tue, 21 Jul 2020 14:27:12 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=8Oq7=BA=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jxtEd-0003SF-Pk
 for xen-devel@lists.xenproject.org; Tue, 21 Jul 2020 14:27:11 +0000
X-Inumbo-ID: 437e9af6-cb5e-11ea-a0d8-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 437e9af6-cb5e-11ea-a0d8-12813bfff9fa;
 Tue, 21 Jul 2020 14:27:10 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=any1ZldPkLv8QiB89izz51569QuLfWLRrlaCbIsPb3I=; b=HLZrh25PcdjTCclsROzCEB07S/
 tmGNzj2TtCeLnvWy8BXdBHQTm/1Bj9F9JecuK4txWGsxgEyMYAaI97NUQq73Gequ5YhR2c5UfOLdk
 sbBPmprJ7+/KGgs2mv8AUdNBzPhil5S4ipFxdD36cnKli9bHZKCNnFSSkJs8YPiDcgdA=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jxtEb-0008JF-RT; Tue, 21 Jul 2020 14:27:09 +0000
Received: from 54-240-197-224.amazon.com ([54.240.197.224]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jxtEb-0005AJ-Fq; Tue, 21 Jul 2020 14:27:09 +0000
Subject: Re: Virtio in Xen on Arm (based on IOREQ concept)
To: Oleksandr <olekstysh@gmail.com>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>
References: <CAPD2p-nthLq5NaU32u8pVaa-ub=a9-LOPenupntTYdS-cu31jQ@mail.gmail.com>
 <20200717150039.GV7191@Air-de-Roger>
 <8f4e0c0d-b3d4-9dd3-ce20-639539321968@gmail.com>
From: Julien Grall <julien@xen.org>
Message-ID: <3dcab37d-0d60-f1cc-1d59-b5497f0fa95f@xen.org>
Date: Tue, 21 Jul 2020 15:27:07 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:68.0)
 Gecko/20100101 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <8f4e0c0d-b3d4-9dd3-ce20-639539321968@gmail.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Oleksandr Andrushchenko <andr2000@gmail.com>,
 xen-devel <xen-devel@lists.xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Artem Mygaiev <joculator@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi,

On 17/07/2020 19:34, Oleksandr wrote:
> 
> On 17.07.20 18:00, Roger Pau Monné wrote:
>>> requires some implementation to forward guest MMIO accesses to a
>>> device model. And as it turned out, Xen on x86 contains most of the
>>> pieces needed to use that transport (via the existing IOREQ concept).
>>> Julien has already done a great deal of work in his PoC (xen/arm: Add
>>> support for Guest IO forwarding to a device emulator).
>>> Using that code as a base, we managed to create a completely
>>> functional PoC with a DomU running on a virtio block device instead
>>> of a traditional Xen PV driver, without modifications to DomU Linux.
>>> Our work is mostly about rebasing Julien's code on the current
>>> codebase (Xen 4.14-rc4), various tweaks to be able to run the
>>> emulator (virtio-disk backend) in a domain other than Dom0 (in our
>>> system we have a thin Dom0 and keep all backends in a driver domain),
>> How do you handle this use-case? Are you using grants in the VirtIO
>> ring, or rather allowing the driver domain to map all the guest memory
>> and then placing gfns on the ring, as is commonly done with VirtIO?
> 
> The second option. Xen grants are not used at all, nor are event
> channels or Xenbus. That allows us to keep the guest *unmodified*,
> which is one of the main goals. Yes, this may sound (or even is)
> non-secure, but the backend, which runs in the driver domain, is
> allowed to map all guest memory.
> 
> In the current backend implementation, a part of guest memory is
> mapped just to process a guest request and then unmapped again; there
> are no mappings set up in advance. The xenforeignmemory_map call is
> used for that purpose. As an experiment, I tried mapping all guest
> memory in advance and just calculating the pointer at runtime. Of
> course, that logic performed better.

That works well for a PoC; however, I am not sure you can rely on it 
long term, as a guest is free to modify its memory layout. For instance, 
Linux may balloon memory in and out. You probably want to consider 
something similar to the mapcache in QEMU.
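
The mapcache idea can be sketched roughly as below. This is a toy for
illustration only, not QEMU's actual implementation: the direct-mapped
bucket layout, the eviction policy, and the simulated map_frame() are
all assumptions. A real backend would map frames with
xenforeignmemory_map() and would also need to invalidate entries when
the guest changes its memory layout.

```c
#include <stdint.h>
#include <stdlib.h>

/* Hypothetical toy: a direct-mapped cache of per-gfn mappings. In a
 * real backend the "map" step would be a (comparatively expensive)
 * xenforeignmemory_map() call; here it is simulated with calloc so the
 * caching behaviour itself is testable. */

#define MC_BUCKETS 64

struct mc_entry {
    uint64_t gfn;   /* guest frame number cached in this bucket */
    void *va;       /* "mapping" for that frame (simulated)     */
    int valid;
};

static struct mc_entry cache[MC_BUCKETS];
static unsigned long map_ops;   /* counts the expensive map operations */

static void *map_frame(uint64_t gfn)
{
    (void)gfn;
    map_ops++;
    return calloc(1, 4096);     /* stand-in for a foreign mapping */
}

/* Translate a gfn to a local pointer, mapping on demand and evicting
 * whatever previously occupied the bucket. */
void *mapcache_translate(uint64_t gfn)
{
    struct mc_entry *e = &cache[gfn % MC_BUCKETS];

    if (!e->valid || e->gfn != gfn) {
        if (e->valid)
            free(e->va);        /* "unmap" the evicted frame */
        e->va = map_frame(gfn);
        e->gfn = gfn;
        e->valid = 1;
    }
    return e->va;
}

unsigned long mapcache_map_ops(void) { return map_ops; }
```

Repeated accesses to a hot frame then cost one map instead of a
map/unmap pair per request, which is the win measured in the experiment
above.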

On a similar topic, I am a bit surprised you didn't encounter memory 
exhaustion when trying to use virtio. Because of how Linux currently 
works (see XSA-300), the backend domain has to have at least as much RAM 
as the domains it serves. For instance, if you serve two domains with 
1GB of RAM each, then your backend would need at least 2GB plus some for 
its own purposes.

This probably wants to be resolved by allowing foreign mappings to be 
paged out, as you would for memory assigned to userspace.

> I was thinking about static guest memory regions and forcing the guest
> to allocate descriptors from them (in order not to map all guest
> memory, but only a predefined region). But that implies modifying the
> guest...

[...]

>>> misc fixes for our use-cases, and tool support for the configuration.
>>> Unfortunately, Julien doesn't have much time to spend on this work
>>> anymore, so we would like to step in and continue.
>>>
>>> *A few words about the Xen code:*
>>> You can find the whole Xen series at [5]. The patches are in RFC
>>> state because some actions in the series should be reconsidered and
>>> implemented properly. Before submitting the final code for review,
>>> the first IOREQ patch (which is quite big) will be split into x86,
>>> Arm and common parts. Please note that the x86 part hasn't even been
>>> build-tested so far and could be broken by this series. Also, the
>>> series probably wants splitting into adding IOREQ on Arm (which
>>> should be the first focus) and tool support for the virtio-disk
>>> (which is going to be the first VirtIO driver) configuration before
>>> going to the mailing list.
>> Sending first a patch series to enable IOREQs on Arm seems perfectly
>> fine, and it doesn't have to come with the VirtIO backend. In fact I
>> would recommend that you send that ASAP, so that you don't spend time
>> working on the backend that would likely need to be modified
>> according to the review received on the IOREQ series.
> 
> Completely agree with you. I will send it after splitting the IOREQ
> patch and performing some cleanup.
> 
> However, it is going to take some time to do it properly, taking into
> account that I personally won't be able to test on x86.
I think other members of the community should be able to help here. 
However, nowadays testing Xen on x86 is pretty easy with QEMU :).

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Jul 21 14:34:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jul 2020 14:34:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxtLD-0004JB-2p; Tue, 21 Jul 2020 14:33:59 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=H3aL=BA=arm.com=andre.przywara@srs-us1.protection.inumbo.net>)
 id 1jxtLC-0004J6-A0
 for xen-devel@lists.xenproject.org; Tue, 21 Jul 2020 14:33:58 +0000
X-Inumbo-ID: 356d9b46-cb5f-11ea-8536-bc764e2007e4
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 356d9b46-cb5f-11ea-8536-bc764e2007e4;
 Tue, 21 Jul 2020 14:33:56 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 75A31101E;
 Tue, 21 Jul 2020 07:33:56 -0700 (PDT)
Received: from [192.168.2.22] (unknown [172.31.20.19])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id D80A43F718;
 Tue, 21 Jul 2020 07:33:54 -0700 (PDT)
Subject: Re: Virtio in Xen on Arm (based on IOREQ concept)
To: Julien Grall <julien@xen.org>, Oleksandr <olekstysh@gmail.com>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <CAPD2p-nthLq5NaU32u8pVaa-ub=a9-LOPenupntTYdS-cu31jQ@mail.gmail.com>
 <20200717150039.GV7191@Air-de-Roger>
 <8f4e0c0d-b3d4-9dd3-ce20-639539321968@gmail.com>
 <alpine.DEB.2.21.2007201326060.32544@sstabellini-ThinkPad-T480s>
 <4454c70e-47fa-46e8-90bf-1904b11318b1@gmail.com>
 <048c27bf-a9ab-054c-8955-6e75fb6c6ea5@xen.org>
From: =?UTF-8?Q?Andr=c3=a9_Przywara?= <andre.przywara@arm.com>
Autocrypt: addr=andre.przywara@arm.com; prefer-encrypt=mutual; keydata=
 xsFNBFNPCKMBEAC+6GVcuP9ri8r+gg2fHZDedOmFRZPtcrMMF2Cx6KrTUT0YEISsqPoJTKld
 tPfEG0KnRL9CWvftyHseWTnU2Gi7hKNwhRkC0oBL5Er2hhNpoi8x4VcsxQ6bHG5/dA7ctvL6
 kYvKAZw4X2Y3GTbAZIOLf+leNPiF9175S8pvqMPi0qu67RWZD5H/uT/TfLpvmmOlRzNiXMBm
 kGvewkBpL3R2clHquv7pB6KLoY3uvjFhZfEedqSqTwBVu/JVZZO7tvYCJPfyY5JG9+BjPmr+
 REe2gS6w/4DJ4D8oMWKoY3r6ZpHx3YS2hWZFUYiCYovPxfj5+bOr78sg3JleEd0OB0yYtzTT
 esiNlQpCo0oOevwHR+jUiaZevM4xCyt23L2G+euzdRsUZcK/M6qYf41Dy6Afqa+PxgMEiDto
 ITEH3Dv+zfzwdeqCuNU0VOGrQZs/vrKOUmU/QDlYL7G8OIg5Ekheq4N+Ay+3EYCROXkstQnf
 YYxRn5F1oeVeqoh1LgGH7YN9H9LeIajwBD8OgiZDVsmb67DdF6EQtklH0ycBcVodG1zTCfqM
 AavYMfhldNMBg4vaLh0cJ/3ZXZNIyDlV372GmxSJJiidxDm7E1PkgdfCnHk+pD8YeITmSNyb
 7qeU08Hqqh4ui8SSeUp7+yie9zBhJB5vVBJoO5D0MikZAODIDwARAQABzS1BbmRyZSBQcnp5
 d2FyYSAoQVJNKSA8YW5kcmUucHJ6eXdhcmFAYXJtLmNvbT7CwXsEEwECACUCGwMGCwkIBwMC
 BhUIAgkKCwQWAgMBAh4BAheABQJTWSV8AhkBAAoJEAL1yD+ydue63REP/1tPqTo/f6StS00g
 NTUpjgVqxgsPWYWwSLkgkaUZn2z9Edv86BLpqTY8OBQZ19EUwfNehcnvR+Olw+7wxNnatyxo
 D2FG0paTia1SjxaJ8Nx3e85jy6l7N2AQrTCFCtFN9lp8Pc0LVBpSbjmP+Peh5Mi7gtCBNkpz
 KShEaJE25a/+rnIrIXzJHrsbC2GwcssAF3bd03iU41J1gMTalB6HCtQUwgqSsbG8MsR/IwHW
 XruOnVp0GQRJwlw07e9T3PKTLj3LWsAPe0LHm5W1Q+euoCLsZfYwr7phQ19HAxSCu8hzp43u
 zSw0+sEQsO+9wz2nGDgQCGepCcJR1lygVn2zwRTQKbq7Hjs+IWZ0gN2nDajScuR1RsxTE4WR
 lj0+Ne6VrAmPiW6QqRhliDO+e82riI75ywSWrJb9TQw0+UkIQ2DlNr0u0TwCUTcQNN6aKnru
 ouVt3qoRlcD5MuRhLH+ttAcmNITMg7GQ6RQajWrSKuKFrt6iuDbjgO2cnaTrLbNBBKPTG4oF
 D6kX8Zea0KvVBagBsaC1CDTDQQMxYBPDBSlqYCb/b2x7KHTvTAHUBSsBRL6MKz8wwruDodTM
 4E4ToV9URl4aE/msBZ4GLTtEmUHBh4/AYwk6ACYByYKyx5r3PDG0iHnJ8bV0OeyQ9ujfgBBP
 B2t4oASNnIOeGEEcQ2rjzsFNBFNPCKMBEACm7Xqafb1Dp1nDl06aw/3O9ixWsGMv1Uhfd2B6
 it6wh1HDCn9HpekgouR2HLMvdd3Y//GG89irEasjzENZPsK82PS0bvkxxIHRFm0pikF4ljIb
 6tca2sxFr/H7CCtWYZjZzPgnOPtnagN0qVVyEM7L5f7KjGb1/o5EDkVR2SVSSjrlmNdTL2Rd
 zaPqrBoxuR/y/n856deWqS1ZssOpqwKhxT1IVlF6S47CjFJ3+fiHNjkljLfxzDyQXwXCNoZn
 BKcW9PvAMf6W1DGASoXtsMg4HHzZ5fW+vnjzvWiC4pXrcP7Ivfxx5pB+nGiOfOY+/VSUlW/9
 GdzPlOIc1bGyKc6tGREH5lErmeoJZ5k7E9cMJx+xzuDItvnZbf6RuH5fg3QsljQy8jLlr4S6
 8YwxlObySJ5K+suPRzZOG2+kq77RJVqAgZXp3Zdvdaov4a5J3H8pxzjj0yZ2JZlndM4X7Msr
 P5tfxy1WvV4Km6QeFAsjcF5gM+wWl+mf2qrlp3dRwniG1vkLsnQugQ4oNUrx0ahwOSm9p6kM
 CIiTITo+W7O9KEE9XCb4vV0ejmLlgdDV8ASVUekeTJkmRIBnz0fa4pa1vbtZoi6/LlIdAEEt
 PY6p3hgkLLtr2GRodOW/Y3vPRd9+rJHq/tLIfwc58ZhQKmRcgrhtlnuTGTmyUqGSiMNfpwAR
 AQABwsFfBBgBAgAJBQJTTwijAhsMAAoJEAL1yD+ydue64BgP/33QKczgAvSdj9XTC14wZCGE
 U8ygZwkkyNf021iNMj+o0dpLU48PIhHIMTXlM2aiiZlPWgKVlDRjlYuc9EZqGgbOOuR/pNYA
 JX9vaqszyE34JzXBL9DBKUuAui8z8GcxRcz49/xtzzP0kH3OQbBIqZWuMRxKEpRptRT0wzBL
 O31ygf4FRxs68jvPCuZjTGKELIo656/Hmk17cmjoBAJK7JHfqdGkDXk5tneeHCkB411p9WJU
 vMO2EqsHjobjuFm89hI0pSxlUoiTL0Nuk9Edemjw70W4anGNyaQtBq+qu1RdjUPBvoJec7y/
 EXJtoGxq9Y+tmm22xwApSiIOyMwUi9A1iLjQLmngLeUdsHyrEWTbEYHd2sAM2sqKoZRyBDSv
 ejRvZD6zwkY/9nRqXt02H1quVOP42xlkwOQU6gxm93o/bxd7S5tEA359Sli5gZRaucpNQkwd
 KLQdCvFdksD270r4jU/rwR2R/Ubi+txfy0dk2wGBjl1xpSf0Lbl/KMR5TQntELfLR4etizLq
 Xpd2byn96Ivi8C8u9zJruXTueHH8vt7gJ1oax3yKRGU5o2eipCRiKZ0s/T7fvkdq+8beg9ku
 fDO4SAgJMIl6H5awliCY2zQvLHysS/Wb8QuB09hmhLZ4AifdHyF1J5qeePEhgTA+BaUbiUZf
 i4aIXCH3Wv6K
Organization: ARM Ltd.
Message-ID: <2c249585-aaba-1065-95df-be772861e9a8@arm.com>
Date: Tue, 21 Jul 2020 15:32:25 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <048c27bf-a9ab-054c-8955-6e75fb6c6ea5@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Oleksandr Andrushchenko <andr2000@gmail.com>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>,
 xen-devel <xen-devel@lists.xenproject.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 alex.bennee@linaro.org, Artem Mygaiev <joculator@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 21/07/2020 14:43, Julien Grall wrote:
> (+ Andre)
> 
> Hi Oleksandr,
> 
> On 21/07/2020 13:26, Oleksandr wrote:
>> On 20.07.20 23:38, Stefano Stabellini wrote:
>>> For instance, what's your take on notifications with virtio-mmio? How
>>> are they modelled today? Are they good enough or do we need MSIs?
>>
>> Notifications are sent from the device (backend) to the driver
>> (frontend) using interrupts. An additional DM function,
>> xendevicemodel_set_irq_level(), was introduced for that purpose; it
>> results in a vgic_inject_irq() call.
>>
>> Currently, if the device wants to notify the driver, it has to
>> trigger the interrupt by calling that function twice (high level
>> first, then low level).
> 
> This doesn't look right to me. Assuming the interrupt is triggered
> when the line is at high level, the backend should only issue the
> hypercall once, to set the level to high. Once the guest has finished
> processing all the notifications, the backend would then call the
> hypercall to lower the interrupt line.
> 
> This means the interrupt should keep firing as long as the interrupt
> line is high.
> 
> It is quite possible that I took some shortcuts when implementing the
> hypercall, so this should be corrected before anyone starts to rely on
> it.

So I think the key question is: are virtio interrupts level or edge
triggered? Both QEMU and kvmtool advertise virtio-mmio interrupts as
edge-triggered.
>From skimming through the virtio spec I can't find any explicit
mention of the IRQ type, but the usage of MSIs indeed hints at
using an edge property. Apparently reading the PCI ISR status register
clears it, which again sounds like edge. For virtio-mmio the driver
needs to explicitly clear the interrupt status register, which again
says: edge (as it's not the device clearing the status).

So the device should just notify the driver once, which would cause one
vgic_inject_irq() call. It would then be up to the driver to clear that
status, by reading the PCI ISR status or writing to virtio-mmio's
interrupt-acknowledge register.
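
That edge-like status/acknowledge protocol can be modelled as below.
This is a toy sketch for illustration, not the Xen or QEMU code: only
the VIRTIO_MMIO_INT_VRING bit value and the readl/writel pattern come
from the quoted driver snippet, the rest is an assumption.

```c
#include <stdint.h>

/* Toy model of virtio-mmio interrupt semantics: the device raises a
 * status bit and asserts the interrupt exactly once; the driver clears
 * the bit through the interrupt-acknowledge register. */

#define VIRTIO_MMIO_INT_VRING  0x1   /* "used buffer" notification bit */

struct vm_dev {
    uint32_t interrupt_status;
    unsigned injections;   /* how often the device asserted the IRQ */
};

/* Device side: set the status bit and notify the guest once (the single
 * vgic_inject_irq() call described above). */
void device_notify(struct vm_dev *d)
{
    d->interrupt_status |= VIRTIO_MMIO_INT_VRING;
    d->injections++;
}

/* Driver side, mirroring the quoted readl/writel pair: read the status,
 * then write it back to the acknowledge register to clear it. */
uint32_t driver_isr(struct vm_dev *d)
{
    uint32_t status = d->interrupt_status;   /* readl(...INTERRUPT_STATUS) */
    d->interrupt_status &= ~status;          /* writel(...INTERRUPT_ACK)   */
    return status;
}
```

Note the device never lowers the line itself; the driver's acknowledge
is what clears the status, which is why this reads as edge rather than
level behaviour.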

Does that make sense?

Cheers,
Andre


From xen-devel-bounces@lists.xenproject.org Tue Jul 21 14:40:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jul 2020 14:40:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxtRp-00059H-S7; Tue, 21 Jul 2020 14:40:49 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=8Oq7=BA=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jxtRo-000596-5Q
 for xen-devel@lists.xenproject.org; Tue, 21 Jul 2020 14:40:48 +0000
X-Inumbo-ID: 28f02bb6-cb60-11ea-a0db-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 28f02bb6-cb60-11ea-a0db-12813bfff9fa;
 Tue, 21 Jul 2020 14:40:45 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=y9tm5lKlnZ6rC4ngXZhk6UMCYwArwfuMcegxqyYsUfU=; b=inUnpQ/CGDT8CAgzYoQZEMPMSk
 /ZZSztThLui9j/7g54+4rbnto9gH3ccM1ds1KH31YrXZXqxXL/pivlZFGHkuRBILO1zsDjzufAd+i
 vSKMox6/cOptjlHa9WxP2hwwKsWvNRpEikhQU7YofYJUT2/bj8iD/7vMp/EMho9MuDqg=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jxtRl-00007s-6h; Tue, 21 Jul 2020 14:40:45 +0000
Received: from 54-240-197-224.amazon.com ([54.240.197.224]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jxtRk-0006Ah-UP; Tue, 21 Jul 2020 14:40:45 +0000
Subject: Re: Virtio in Xen on Arm (based on IOREQ concept)
To: =?UTF-8?Q?Alex_Benn=c3=a9e?= <alex.bennee@linaro.org>
References: <CAPD2p-nthLq5NaU32u8pVaa-ub=a9-LOPenupntTYdS-cu31jQ@mail.gmail.com>
 <20200717150039.GV7191@Air-de-Roger>
 <8f4e0c0d-b3d4-9dd3-ce20-639539321968@gmail.com>
 <alpine.DEB.2.21.2007201326060.32544@sstabellini-ThinkPad-T480s>
 <56e512af-993b-1364-be56-fc4be5d88519@xen.org> <87k0yxuf89.fsf@linaro.org>
From: Julien Grall <julien@xen.org>
Message-ID: <8f125464-a0c2-dd71-6d51-eaf13259e727@xen.org>
Date: Tue, 21 Jul 2020 15:40:42 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:68.0)
 Gecko/20100101 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <87k0yxuf89.fsf@linaro.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Oleksandr Andrushchenko <andr2000@gmail.com>,
 Andre Przywara <andre.przywara@arm.com>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>, Oleksandr <olekstysh@gmail.com>,
 xen-devel <xen-devel@lists.xenproject.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Artem Mygaiev <joculator@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi Alex,

Thank you for your feedback!

On 21/07/2020 15:15, Alex Bennée wrote:
> Julien Grall <julien@xen.org> writes:
> 
>> (+ Andree for the vGIC).
>>
>> Hi Stefano,
>>
>> On 20/07/2020 21:38, Stefano Stabellini wrote:
>>> On Fri, 17 Jul 2020, Oleksandr wrote:
>>>>>> *A few word about solution:*
>>>>>> As it was mentioned at [1], in order to implement virtio-mmio Xen on Arm
>>>>> Any plans for virtio-pci? Arm seems to be moving to the PCI bus, and
>>>>> it would be very interesting from an x86 PoV, as I don't think
>>>>> virtio-mmio is something that you can easily use on x86 (or even use
>>>>> at all).
>>>>
>>>> To be honest, I haven't considered virtio-pci so far. Julien's PoC
>>>> (which we are based on) provides support for the virtio-mmio
>>>> transport,
>>>>
>>>> which is enough to start working on VirtIO and is not as complex as
>>>> virtio-pci. But that doesn't mean there is no way to support
>>>> virtio-pci in Xen.
>>>>
>>>> I think this could be added in later steps. But the nearest target
>>>> is the virtio-mmio approach (of course, if the community agrees on
>>>> that).
>>
>>> Aside from complexity and easy-of-development, are there any other
>>> architectural reasons for using virtio-mmio?
>>
> <snip>
>>>
>>> For instance, what's your take on notifications with virtio-mmio? How
>>> are they modelled today?
>>
>> The backend will notify the frontend using an SPI. The other way around
>> (frontend -> backend) is based on an MMIO write.
>>
>> We have an interface to allow the backend to control the interrupt
>> level (i.e. low or high). However, the "old" vGIC doesn't handle
>> level interrupts properly, so we would end up treating level
>> interrupts as edge.
>>
>> Technically, the problem already exists with HW interrupts, but the
>> HW should fire again if the interrupt line is still asserted. Another
>> issue is that the interrupt may fire even if the interrupt line was
>> deasserted (IIRC this caused some interesting problems with the Arch
>> timer).
>>
>> I am a bit concerned that the issue will be more prominent for
>> virtual interrupts. I know that we have some gross hack in the vpl011
>> to handle level interrupts. So maybe it is time to switch to the new
>> vGIC?
>>
>>> Are they good enough or do we need MSIs?
>>
>> I am not sure whether virtio-mmio supports MSIs. However, for
>> virtio-pci, MSIs are going to be useful to improve performance. This
>> may mean exposing an ITS, so we would need to add support for it in
>> the guest.
> 
> virtio-mmio doesn't support MSIs at the moment, although there have been
> proposals to update the spec to allow them. At the moment the cost of
> reading the ISR value and then writing an ack in vm_interrupt:
> 
> 	/* Read and acknowledge interrupts */
> 	status = readl(vm_dev->base + VIRTIO_MMIO_INTERRUPT_STATUS);
> 	writel(status, vm_dev->base + VIRTIO_MMIO_INTERRUPT_ACK);
>

Hmmmm, the current way to handle MMIO is the following:
     * pause the vCPU
     * forward the access to the backend domain
     * schedule the backend domain
     * wait for the access to be handled
     * unpause the vCPU

So the sequence is going to be fairly expensive on Xen.
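
The handshake behind that sequence can be sketched as a small state
machine. The state names below match Xen's public ioreq interface;
everything else is a simplified illustrative model, not the hypervisor
code.

```c
/* Simplified model of the per-vCPU ioreq handshake. */

enum ioreq_state {
    STATE_IOREQ_NONE,      /* slot free, vCPU running            */
    STATE_IOREQ_READY,     /* vCPU paused, request posted        */
    STATE_IOREQ_INPROCESS, /* backend (device model) handling it */
    STATE_IORESP_READY     /* response posted, vCPU may resume   */
};

struct ioreq_slot {
    enum ioreq_state state;
};

/* Guest vCPU traps on an MMIO access: the vCPU is paused and the
 * request posted for the backend domain. */
void vcpu_post_mmio(struct ioreq_slot *s)
{
    if (s->state == STATE_IOREQ_NONE)
        s->state = STATE_IOREQ_READY;
}

/* Backend is scheduled, picks up the request, emulates the access
 * (e.g. a virtio-mmio register), and posts the response. */
void backend_handle(struct ioreq_slot *s)
{
    if (s->state == STATE_IOREQ_READY) {
        s->state = STATE_IOREQ_INPROCESS;
        /* ... emulate the access here ... */
        s->state = STATE_IORESP_READY;
    }
}

/* Hypervisor completes the access and unpauses the vCPU. */
void vcpu_complete(struct ioreq_slot *s)
{
    if (s->state == STATE_IORESP_READY)
        s->state = STATE_IOREQ_NONE;
}
```

Every MMIO trap walks the full cycle, including a domain switch to the
backend and back, which is where the cost discussed here comes from.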

> puts an extra vmexit cost to trap and emulate each exit. Getting an
> MSI via an exitless access to the GIC would be better, I think. I'm
> not quite sure what the path for IRQs from Xen is.

vmexit on Xen on Arm is pretty cheap compared to KVM, as we don't save a 
lot of things. In this situation, handling an extra trap for the 
interrupt is likely to be negligible compared to the sequence above.

I am assuming the sequence is also going to be used by the MSIs, right?

It feels to me that it would be worth spending time investigating the 
cost of that sequence. It might be possible to optimize the ACK and 
avoid waiting for the backend to handle the access.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Jul 21 14:44:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jul 2020 14:44:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxtVJ-0005J1-Bg; Tue, 21 Jul 2020 14:44:25 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=RsL2=BA=m5p.com=ehem@srs-us1.protection.inumbo.net>)
 id 1jxtVH-0005Iv-Da
 for xen-devel@lists.xen.org; Tue, 21 Jul 2020 14:44:23 +0000
X-Inumbo-ID: aa2f1918-cb60-11ea-8542-bc764e2007e4
Received: from mailhost.m5p.com (unknown [74.104.188.4])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id aa2f1918-cb60-11ea-8542-bc764e2007e4;
 Tue, 21 Jul 2020 14:44:22 +0000 (UTC)
Received: from m5p.com (mailhost.m5p.com [IPv6:2001:470:1f07:15ff:0:0:0:f7])
 by mailhost.m5p.com (8.15.2/8.15.2) with ESMTPS id 06LEiAVK023881
 (version=TLSv1.2 cipher=DHE-RSA-AES128-GCM-SHA256 bits=128 verify=NO);
 Tue, 21 Jul 2020 10:44:16 -0400 (EDT) (envelope-from ehem@m5p.com)
Received: (from ehem@localhost)
 by m5p.com (8.15.2/8.15.2/Submit) id 06LEiA67023880;
 Tue, 21 Jul 2020 07:44:10 -0700 (PDT) (envelope-from ehem)
Date: Tue, 21 Jul 2020 07:44:10 -0700
From: Elliott Mitchell <ehem+xen@m5p.com>
To: Wei Liu <wl@xen.org>
Subject: Re: [PATCH 1/2] Partially revert "Cross-compilation fixes."
Message-ID: <20200721144410.GA23640@mattapan.m5p.com>
References: <20200718033121.GA88869@mattapan.m5p.com>
 <20200721122645.qcens4lqq5vcnmz4@liuwe-devbox-debian-v2>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20200721122645.qcens4lqq5vcnmz4@liuwe-devbox-debian-v2>
X-Spam-Status: No, score=0.4 required=10.0 tests=KHOP_HELO_FCRDNS autolearn=no
 autolearn_force=no version=3.4.4
X-Spam-Checker-Version: SpamAssassin 3.4.4 (2020-01-24) on mattapan.m5p.com
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: dave@recoil.org, ian.jackson@eu.citrix.com, christian.lindig@citrix.com,
 xen-devel@lists.xen.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Tue, Jul 21, 2020 at 12:26:45PM +0000, Wei Liu wrote:
> On Fri, Jul 17, 2020 at 08:31:21PM -0700, Elliott Mitchell wrote:
> > This partially reverts commit 16504669c5cbb8b195d20412aadc838da5c428f7.
> 
> Ok, so this commit is really old.

Yup.  It will still be visible in `git blame tools/examples/Makefile`,
but everywhere else has had commits stacked on top.

> > Signed-off-by: Elliott Mitchell <ehem+xen@m5p.com>
> > ---
> > Doesn't look like much of 16504669c5cbb8b195d20412aadc838da5c428f7
> > actually remains due to passage of time.
> > 
> > Of the 3, both Python and pygrub appear to mostly build just fine when
> > cross-compiling.  The OCAML portion is being troublesome; this is going
> > to cause bug reports elsewhere soon.  The OCAML portion, though, can
> > already be disabled by setting OCAML_TOOLS=n and shouldn't need this
> > extra form of disabling.
> 
> The reasoning here is fine by me. And it should be part of the commit
> message.
> 
> I would like to also add a "tools:" prefix to the subject line:
> 
>   tools: Partially revert "Cross-compilation fixes."
> 
> If you agree with these changes, no action is required from you. I can
> handle everything while committing.

Fine by me.


-- 
(\___(\___(\______          --=> 8-) EHM <=--          ______/)___/)___/)
 \BS (    |         ehem+sigmsg@m5p.com  PGP 87145445         |    )   /
  \_CS\   |  _____  -O #include <stddisclaimer.h> O-   _____  |   /  _/
8A19\___\_|_/58D2 7E3D DDF4 7BA6 <-PGP-> 41D1 B375 37D0 8714\_|_/___/5445




From xen-devel-bounces@lists.xenproject.org Tue Jul 21 14:52:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jul 2020 14:52:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxtcn-00068u-6L; Tue, 21 Jul 2020 14:52:09 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=FhFK=BA=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1jxtcm-00068p-PW
 for xen-devel@lists.xenproject.org; Tue, 21 Jul 2020 14:52:08 +0000
X-Inumbo-ID: bf59c0e4-cb61-11ea-854b-bc764e2007e4
Received: from mail-lj1-x241.google.com (unknown [2a00:1450:4864:20::241])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id bf59c0e4-cb61-11ea-854b-bc764e2007e4;
 Tue, 21 Jul 2020 14:52:07 +0000 (UTC)
Received: by mail-lj1-x241.google.com with SMTP id h22so24374360lji.9
 for <xen-devel@lists.xenproject.org>; Tue, 21 Jul 2020 07:52:07 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=subject:to:cc:references:from:message-id:date:user-agent
 :mime-version:in-reply-to:content-transfer-encoding:content-language;
 bh=5L6mWnfWczvZIDlfcDmsAQ4aNqg8ljvSp66nbmLeIiQ=;
 b=TxnJzhM4/oxQ3rFRSjVWdu29lrpmFSac4RCywh2ch2fSCUc8QXLJPnTL1k3lCbdavq
 8V7u9Ax4F+5uPyWq8mFM/YsrKjEQd+2eRxgMIj2ZdTLEx3tjzQkYhro3Q5O9cys27FpQ
 eMKGyQBLh0MwjvI/4GcpTr3RVETpu/yo3Q4tbYYJR6eM04hC7890b8qnZ+pAD9DhKC7M
 z5UrnGlKjWOUHYHTs2bZE2aOveVc4gByPgYgryRBuY6v8sTEjOTZUobq6hrGHZt6nnB+
 X3xPWNdpaoppk6dPDkkH+oWsoWNtbcL5NUcsjFM0iJ7vrJbBSWPAzCyAT0Bx2Rq5H7SL
 c4ow==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:subject:to:cc:references:from:message-id:date
 :user-agent:mime-version:in-reply-to:content-transfer-encoding
 :content-language;
 bh=5L6mWnfWczvZIDlfcDmsAQ4aNqg8ljvSp66nbmLeIiQ=;
 b=nyklji8CNWLc+5gYk37DFuik8XMd1ZWeL7ZOPp1lKMdp25sfs4kgjWhLpyv4voZhsa
 B9pKNJf70a1qvFDrST7PdG4iImzY1OstYWL/c1nS/rv7rXCuhwpJ/+/T+aBX+WPNwnY+
 EFQ2fpGakizR1S6m3PCIRRy3H+4CfmLcvS7QIPYG9FSb5sHxGZEnPZzBkgJ0sjRJ6Hx8
 QHPO2oFKrNQCD4wJM0/55RgXPMU80Xs1QG/5pW8RP6/QzOq50fStqxSE+IO43dCU3ht9
 xp956G3puw7PvR7SoRof7LtpG54Rn+3XRe4qCfy11KrcPCK1+gvhXz8CiOzpNS4vx6lX
 lK3g==
X-Gm-Message-State: AOAM530La2NeQHoXGhkG1d/3wG7JhRNn6YfBkeJmVdvxhVpe8N+/kwXQ
 YMUsRzRzTavp7AJttXMf7vY=
X-Google-Smtp-Source: ABdhPJz1NzNOJplgrkOHOVmskypLkyIfl0UeHUB05GBpabzm0LV3utn89PzGBEiIpMgjkcNKQdrdQQ==
X-Received: by 2002:a2e:7c07:: with SMTP id x7mr13224484ljc.166.1595343126526; 
 Tue, 21 Jul 2020 07:52:06 -0700 (PDT)
Received: from [192.168.1.2] ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id e18sm1352907ljn.135.2020.07.21.07.52.05
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 21 Jul 2020 07:52:06 -0700 (PDT)
Subject: Re: Virtio in Xen on Arm (based on IOREQ concept)
To: =?UTF-8?Q?Andr=c3=a9_Przywara?= <andre.przywara@arm.com>,
 Julien Grall <julien@xen.org>
References: <CAPD2p-nthLq5NaU32u8pVaa-ub=a9-LOPenupntTYdS-cu31jQ@mail.gmail.com>
 <20200717150039.GV7191@Air-de-Roger>
 <8f4e0c0d-b3d4-9dd3-ce20-639539321968@gmail.com>
 <alpine.DEB.2.21.2007201326060.32544@sstabellini-ThinkPad-T480s>
 <4454c70e-47fa-46e8-90bf-1904b11318b1@gmail.com>
 <048c27bf-a9ab-054c-8955-6e75fb6c6ea5@xen.org>
 <2c249585-aaba-1065-95df-be772861e9a8@arm.com>
From: Oleksandr <olekstysh@gmail.com>
Message-ID: <e44d6826-643f-77c6-a821-77dc0abf4cbc@gmail.com>
Date: Tue, 21 Jul 2020 17:52:00 +0300
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <2c249585-aaba-1065-95df-be772861e9a8@arm.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit
Content-Language: en-US
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Oleksandr Andrushchenko <andr2000@gmail.com>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>,
 xen-devel <xen-devel@lists.xenproject.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 alex.bennee@linaro.org, Artem Mygaiev <joculator@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>


On 21.07.20 17:32, André Przywara wrote:
> On 21/07/2020 14:43, Julien Grall wrote:

Hello Andre, Julien


>> (+ Andre)
>>
>> Hi Oleksandr,
>>
>> On 21/07/2020 13:26, Oleksandr wrote:
>>> On 20.07.20 23:38, Stefano Stabellini wrote:
>>>> For instance, what's your take on notifications with virtio-mmio? How
>>>> are they modelled today? Are they good enough or do we need MSIs?
>>> Notifications are sent from the device (backend) to the driver
>>> (frontend) using interrupts. An additional DM function,
>>> xendevicemodel_set_irq_level(), was introduced for that purpose; it
>>> results in a vgic_inject_irq() call.
>>>
>>> Currently, if the device wants to notify a driver, it should trigger
>>> the interrupt by calling that function twice (high level at first,
>>> then low level).
>> This doesn't look right to me. Assuming the interrupt is triggered when
>> the line is high, the backend should only issue the hypercall once
>> to set the level to high. Once the guest has finished processing all the
>> notifications, the backend would then call the hypercall to lower the
>> interrupt line.
>>
>> This means the interrupt should keep firing as long as the interrupt
>> line is high.
>>
>> It is quite possible that I took some shortcuts when implementing the
>> hypercall, so this should be corrected before anyone starts to rely on it.
> So I think the key question is: are virtio interrupts level or edge
> triggered? Both QEMU and kvmtool advertise virtio-mmio interrupts as
> edge-triggered.
>  From skimming through the virtio spec I can't find any explicit
> mention of the type of IRQ, but the usage of MSIs indeed hints at
> using an edge property. Apparently reading the PCI ISR status register
> clears it, which again sounds like edge. For virtio-mmio the driver
> needs to explicitly clear the interrupt status register, which again
> says: edge (as it's not the device clearing the status).
>
> So the device should just notify the driver once, which would cause one
> vgic_inject_irq() call. It would be then up to the driver to clear up
> that status, by reading PCI ISR status or writing to virtio-mmio's
> interrupt-acknowledge register.
>
> Does that make sense?
When implementing the Xen backend, I didn't have a working example, so I 
could only guess. I looked at how kvmtool behaves when actually triggering 
the interrupt on Arm [1].

Taking into account that the Xen PoC on Arm advertises [2] the same IRQ 
type (TYPE_EDGE_RISING) as kvmtool [3], I decided to follow its model of 
triggering an interrupt. Could you please explain whether this is wrong?


[1] 
https://git.kernel.org/pub/scm/linux/kernel/git/will/kvmtool.git/tree/arm/gic.c#n418

[2] 
https://github.com/xen-troops/xen/blob/ioreq_4.14_ml/tools/libxl/libxl_arm.c#L727

[3] 
https://git.kernel.org/pub/scm/linux/kernel/git/will/kvmtool.git/tree/virtio/mmio.c#n270

-- 
Regards,

Oleksandr Tyshchenko



From xen-devel-bounces@lists.xenproject.org Tue Jul 21 14:57:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jul 2020 14:57:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxthm-0006Kf-VO; Tue, 21 Jul 2020 14:57:18 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UXjz=BA=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1jxthl-0006Ka-M8
 for xen-devel@lists.xen.org; Tue, 21 Jul 2020 14:57:17 +0000
X-Inumbo-ID: 773fe0e5-cb62-11ea-854e-bc764e2007e4
Received: from mail-wm1-f66.google.com (unknown [209.85.128.66])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 773fe0e5-cb62-11ea-854e-bc764e2007e4;
 Tue, 21 Jul 2020 14:57:16 +0000 (UTC)
Received: by mail-wm1-f66.google.com with SMTP id f18so3189247wml.3
 for <xen-devel@lists.xen.org>; Tue, 21 Jul 2020 07:57:16 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:date:from:to:cc:subject:message-id:references
 :mime-version:content-disposition:in-reply-to:user-agent;
 bh=LOlGQAylj/bktB5kZkda1IhNG5IUhgBwPGrWWAm6nI0=;
 b=fx78ZDlihtvtUHjiURg7h+i8yxMJ2ozp1t84UnA+oIQ123lbxIMG4d1141qOvrwu6C
 FaLvxpn4diG42+6mg6Ujuzs5XGNbLJfjF/5KK6qVAVPa2CSYBNKh7F40nvEDr95Bn7dA
 SQptRKrW4Q4xnXKmXk0kqAZxKlogBvEiitSMnlQcPwbngPrCqcTWuQWhPZbKbZiVU3jr
 MB6Oaaaek7C65dVM6E3eHpgWnvmG76cYSNA1JBcHmeM1ZnI+7bLd9cuNIxYCHWbej2hi
 E2C2xZEU4vb6TBbvGma20Gi6lHPKfC9PJQkI7/4e/7Ognxt4vUf+3fm5RTD4+o7wEQBi
 k3sA==
X-Gm-Message-State: AOAM530fi8TfBYEbOyuPLxczmMk4G9GMYRYELODwb2iYUxFlnMK7CCmB
 A5cvp2wiZQvJenjoW/ku5ZY=
X-Google-Smtp-Source: ABdhPJxzgjx8qg9R+/Yi7q0fFZYZgSloya9pLO9rbJ3vN91ZpoodgTA6OS3qd8Zx7juC04TOJw+YyQ==
X-Received: by 2002:a1c:bcd4:: with SMTP id m203mr4377232wmf.124.1595343435808; 
 Tue, 21 Jul 2020 07:57:15 -0700 (PDT)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id x13sm2248179wro.64.2020.07.21.07.57.15
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 21 Jul 2020 07:57:15 -0700 (PDT)
Date: Tue, 21 Jul 2020 14:57:14 +0000
From: Wei Liu <wl@xen.org>
To: Elliott Mitchell <ehem+xen@m5p.com>
Subject: Re: [PATCH 1/2] Partially revert "Cross-compilation fixes."
Message-ID: <20200721145714.bkvhu4meuhrwqcnj@liuwe-devbox-debian-v2>
References: <20200718033121.GA88869@mattapan.m5p.com>
 <20200721122645.qcens4lqq5vcnmz4@liuwe-devbox-debian-v2>
 <20200721144410.GA23640@mattapan.m5p.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20200721144410.GA23640@mattapan.m5p.com>
User-Agent: NeoMutt/20180716
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: dave@recoil.org, ian.jackson@eu.citrix.com, christian.lindig@citrix.com,
 Wei Liu <wl@xen.org>, xen-devel@lists.xen.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Tue, Jul 21, 2020 at 07:44:10AM -0700, Elliott Mitchell wrote:
> On Tue, Jul 21, 2020 at 12:26:45PM +0000, Wei Liu wrote:
> > On Fri, Jul 17, 2020 at 08:31:21PM -0700, Elliott Mitchell wrote:
> > > This partially reverts commit 16504669c5cbb8b195d20412aadc838da5c428f7.
> > 
> > Ok, so this commit is really old.
> 
> Yup.  It will still be visible in `git blame tools/examples/Makefile`,
> but everywhere else has had commits stacked on top.
> 
> > > Signed-off-by: Elliott Mitchell <ehem+xen@m5p.com>
> > > ---
> > > Doesn't look like much of 16504669c5cbb8b195d20412aadc838da5c428f7
> > > actually remains due to passage of time.
> > > 
> > > Of the 3, both Python and pygrub appear to mostly build just fine when
> > > cross-compiling.  The OCAML portion is being troublesome; this is going
> > > to cause bug reports elsewhere soon.  The OCAML portion, though, can
> > > already be disabled by setting OCAML_TOOLS=n and shouldn't need this
> > > extra form of disabling.
> > 
> > The reasoning here is fine by me. And it should be part of the commit
> > message.
> > 
> > I would like to also add a "tools:" prefix to the subject line:
> > 
> >   tools: Partially revert "Cross-compilation fixes."
> > 
> > If you agree with these changes, no action is required from you. I can
> > handle everything while committing.
> 
> Fine by me.

Your two patches have been applied to staging. Thanks.

Wei.


From xen-devel-bounces@lists.xenproject.org Tue Jul 21 15:00:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jul 2020 15:00:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxtkY-000793-G7; Tue, 21 Jul 2020 15:00:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=H3aL=BA=arm.com=andre.przywara@srs-us1.protection.inumbo.net>)
 id 1jxtkX-00078s-9N
 for xen-devel@lists.xenproject.org; Tue, 21 Jul 2020 15:00:09 +0000
X-Inumbo-ID: ddcae048-cb62-11ea-854f-bc764e2007e4
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id ddcae048-cb62-11ea-854f-bc764e2007e4;
 Tue, 21 Jul 2020 15:00:07 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 68AB6101E;
 Tue, 21 Jul 2020 08:00:07 -0700 (PDT)
Received: from [192.168.2.22] (unknown [172.31.20.19])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 2823D3F718;
 Tue, 21 Jul 2020 08:00:06 -0700 (PDT)
Subject: Re: Virtio in Xen on Arm (based on IOREQ concept)
To: Oleksandr <olekstysh@gmail.com>, Julien Grall <julien@xen.org>
References: <CAPD2p-nthLq5NaU32u8pVaa-ub=a9-LOPenupntTYdS-cu31jQ@mail.gmail.com>
 <20200717150039.GV7191@Air-de-Roger>
 <8f4e0c0d-b3d4-9dd3-ce20-639539321968@gmail.com>
 <alpine.DEB.2.21.2007201326060.32544@sstabellini-ThinkPad-T480s>
 <4454c70e-47fa-46e8-90bf-1904b11318b1@gmail.com>
 <048c27bf-a9ab-054c-8955-6e75fb6c6ea5@xen.org>
 <2c249585-aaba-1065-95df-be772861e9a8@arm.com>
 <e44d6826-643f-77c6-a821-77dc0abf4cbc@gmail.com>
From: =?UTF-8?Q?Andr=c3=a9_Przywara?= <andre.przywara@arm.com>
Organization: ARM Ltd.
Message-ID: <1811dd15-4009-f78c-674c-177709cf2a22@arm.com>
Date: Tue, 21 Jul 2020 15:58:42 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <e44d6826-643f-77c6-a821-77dc0abf4cbc@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Oleksandr Andrushchenko <andr2000@gmail.com>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>,
 xen-devel <xen-devel@lists.xenproject.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 alex.bennee@linaro.org, Artem Mygaiev <joculator@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 21/07/2020 15:52, Oleksandr wrote:
> 
> On 21.07.20 17:32, André Przywara wrote:
>> On 21/07/2020 14:43, Julien Grall wrote:
> 
> Hello Andre, Julien
> 
> 
>>> (+ Andre)
>>>
>>> Hi Oleksandr,
>>>
>>> On 21/07/2020 13:26, Oleksandr wrote:
>>>> On 20.07.20 23:38, Stefano Stabellini wrote:
>>>>> For instance, what's your take on notifications with virtio-mmio? How
>>>>> are they modelled today? Are they good enough or do we need MSIs?
>>>> Notifications are sent from the device (backend) to the driver
>>>> (frontend) using interrupts. An additional DM function,
>>>> xendevicemodel_set_irq_level(), was introduced for that purpose; it
>>>> results in a vgic_inject_irq() call.
>>>>
>>>> Currently, if the device wants to notify a driver, it should trigger
>>>> the interrupt by calling that function twice (high level at first,
>>>> then low level).
>>> This doesn't look right to me. Assuming the interrupt is triggered when
>>> the line is high, the backend should only issue the hypercall once
>>> to set the level to high. Once the guest has finished processing all the
>>> notifications, the backend would then call the hypercall to lower the
>>> interrupt line.
>>>
>>> This means the interrupt should keep firing as long as the interrupt
>>> line is high.
>>>
>>> It is quite possible that I took some shortcuts when implementing the
>>> hypercall, so this should be corrected before anyone starts to rely on
>>> it.
>> So I think the key question is: are virtio interrupts level or edge
>> triggered? Both QEMU and kvmtool advertise virtio-mmio interrupts as
>> edge-triggered.
>>  From skimming through the virtio spec I can't find any explicit
>> mention of the type of IRQ, but the usage of MSIs indeed hints at
>> using an edge property. Apparently reading the PCI ISR status register
>> clears it, which again sounds like edge. For virtio-mmio the driver
>> needs to explicitly clear the interrupt status register, which again
>> says: edge (as it's not the device clearing the status).
>>
>> So the device should just notify the driver once, which would cause one
>> vgic_inject_irq() call. It would be then up to the driver to clear up
>> that status, by reading PCI ISR status or writing to virtio-mmio's
>> interrupt-acknowledge register.
>>
>> Does that make sense?
> When implementing the Xen backend, I didn't have a working example, so I
> could only guess. I looked at how kvmtool behaves when actually triggering
> the interrupt on Arm [1].
> 
> Taking into account that the Xen PoC on Arm advertises [2] the same IRQ
> type (TYPE_EDGE_RISING) as kvmtool [3], I decided to follow its model of
> triggering an interrupt. Could you please explain whether this is wrong?

Yes, kvmtool needlessly does a double call (on x86, ppc and arm; mips is
correct).
I just chased it down in the kernel: a KVM_IRQ_LINE ioctl with level=low
is ignored when the target IRQ is configured as edge (which it is,
because the DT says so); check vgic_validate_injection() in the kernel.

So you should only ever need one call to set the line "high" (actually:
trigger the edge pulse).
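A tiny illustrative model of that behaviour (hypothetical names; it mirrors the edge check in vgic_validate_injection() rather than being the real vGIC code):

```c
#include <assert.h>
#include <stdbool.h>

/* One emulated interrupt line configured as edge-triggered, as the DT
 * advertises for virtio-mmio in QEMU, kvmtool and the Xen PoC. */
struct virq {
    bool line;     /* last level reported by the backend      */
    int  pending;  /* interrupts queued for the guest to take */
};

/* Injecting an edge-configured IRQ: only a rising edge (low -> high)
 * queues an interrupt; lowering the line is a no-op, which is why the
 * second "level=low" call is needless. */
static void set_irq_level(struct virq *irq, bool level)
{
    if (level && !irq->line)
        irq->pending++;
    irq->line = level;
}
```

So the backend's first call already delivers the interrupt, and the follow-up low call changes nothing the guest can observe.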

Cheers,
Andre.

> 
> 
> [1]
> https://git.kernel.org/pub/scm/linux/kernel/git/will/kvmtool.git/tree/arm/gic.c#n418
> 
> 
> [2]
> https://github.com/xen-troops/xen/blob/ioreq_4.14_ml/tools/libxl/libxl_arm.c#L727
> 
> 
> [3]
> https://git.kernel.org/pub/scm/linux/kernel/git/will/kvmtool.git/tree/virtio/mmio.c#n270
> 
> 



From xen-devel-bounces@lists.xenproject.org Tue Jul 21 15:20:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jul 2020 15:20:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxu3s-0000SA-1H; Tue, 21 Jul 2020 15:20:08 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tByU=BA=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jxu3q-0000OT-Eu
 for xen-devel@lists.xenproject.org; Tue, 21 Jul 2020 15:20:06 +0000
X-Inumbo-ID: a7793afa-cb65-11ea-8558-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a7793afa-cb65-11ea-8558-bc764e2007e4;
 Tue, 21 Jul 2020 15:20:05 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=pd44pDfqi8HSr15PJy4Cke/2LTL8KBf80JGL9BdkTYU=; b=JKr2528lkta74TXPdsWn7aZla
 jWwr9Pa1Y4g+O3PipYVUfNzbjx3qpmc7AJMUpPP2/kjiv6lRqkWVlT6LwZfawyiYxXNy5gUHdnsF1
 qc3o2mjaKDnrXtNh8EPj5ptZrDF6HEXdlsXPa9s3zl9z5oFu2ChZI70/el6tyoye4647A=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jxu3o-0000zl-PZ; Tue, 21 Jul 2020 15:20:04 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jxu3o-0002h3-Db; Tue, 21 Jul 2020 15:20:04 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jxu3o-0004P1-Cx; Tue, 21 Jul 2020 15:20:04 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-152074-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 152074: tolerable all pass - PUSHED
X-Osstest-Failures: xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=057cfa258ca554013178c5aaf6f80db47fb184fc
X-Osstest-Versions-That: xen=9ffdda96d9e7c3d9c7a5bbe2df6ab30f63927542
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 21 Jul 2020 15:20:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 152074 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/152074/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  057cfa258ca554013178c5aaf6f80db47fb184fc
baseline version:
 xen                  9ffdda96d9e7c3d9c7a5bbe2df6ab30f63927542

Last test of basis   152054  2020-07-20 18:12:43 Z    0 days
Testing same since   152074  2020-07-21 13:00:38 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Christian Lindig <christian.lindig@citrix.com>
  Edwin Török <edvin.torok@citrix.com>
  George Dunlap <george.dunlap@citrix.com>
  Igor Druzhinin <igor.druzhinin@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Nick Rosbrook <rosbrookn@ainfosec.com>
  Nick Rosbrook <rosbrookn@gmail.com>
  Tim Deegan <tim@xen.org>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   9ffdda96d9..057cfa258c  057cfa258ca554013178c5aaf6f80db47fb184fc -> smoke


From xen-devel-bounces@lists.xenproject.org Tue Jul 21 15:26:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jul 2020 15:26:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxu9l-0000eb-L0; Tue, 21 Jul 2020 15:26:13 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=d7zm=BA=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jxu9k-0000ds-Bh
 for xen-devel@lists.xenproject.org; Tue, 21 Jul 2020 15:26:12 +0000
X-Inumbo-ID: 8151b7de-cb66-11ea-a0f3-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 8151b7de-cb66-11ea-a0f3-12813bfff9fa;
 Tue, 21 Jul 2020 15:26:11 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id F240FAE57;
 Tue, 21 Jul 2020 15:26:16 +0000 (UTC)
Subject: Re: [PATCH][next] x86/ioperm: initialize pointer bitmap with NULL
 rather than 0
To: Colin King <colin.king@canonical.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
 Borislav Petkov <bp@alien8.de>, x86@kernel.org,
 "H . Peter Anvin" <hpa@zytor.com>, xen-devel@lists.xenproject.org
References: <20200721100217.407975-1-colin.king@canonical.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <46011f9b-a5a6-b91c-f8c0-1c106ff5e60e@suse.com>
Date: Tue, 21 Jul 2020 17:26:09 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20200721100217.407975-1-colin.king@canonical.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: kernel-janitors@vger.kernel.org, linux-kernel@vger.kernel.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 21.07.20 12:02, Colin King wrote:
> From: Colin Ian King <colin.king@canonical.com>
> 
> The pointer bitmap is being initialized with a plain integer 0;
> fix this by initializing it with NULL instead.
> 
> Cleans up sparse warning:
> arch/x86/xen/enlighten_pv.c:876:27: warning: Using plain integer
> as NULL pointer
> 
> Signed-off-by: Colin Ian King <colin.king@canonical.com>

Reviewed-by: Juergen Gross <jgross@suse.com>


Juergen


From xen-devel-bounces@lists.xenproject.org Tue Jul 21 16:10:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jul 2020 16:10:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxupq-0004ua-54; Tue, 21 Jul 2020 16:09:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=FhFK=BA=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1jxupn-0004uV-VS
 for xen-devel@lists.xenproject.org; Tue, 21 Jul 2020 16:09:40 +0000
X-Inumbo-ID: 936607ef-cb6c-11ea-856b-bc764e2007e4
Received: from mail-lj1-x244.google.com (unknown [2a00:1450:4864:20::244])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 936607ef-cb6c-11ea-856b-bc764e2007e4;
 Tue, 21 Jul 2020 16:09:38 +0000 (UTC)
Received: by mail-lj1-x244.google.com with SMTP id r19so24598501ljn.12
 for <xen-devel@lists.xenproject.org>; Tue, 21 Jul 2020 09:09:38 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=subject:to:cc:references:from:message-id:date:user-agent
 :mime-version:in-reply-to:content-transfer-encoding:content-language;
 bh=lWQom+B3YtchGFEgygjEdlRRiIvq78MqoOwISWJ7CTc=;
 b=Fdyxv1+usZFnher7O4ZpbX7ZMj+sVeKhNalikZdeK9IpRZylW0vlzKJm8664Ky9woq
 wuLDKJ2wLD6E9I1leXUfv8mjzViYE2oahWuOi1AmxLPNk2c8uWaAfXOct81sM98ijigX
 w0BquUi3Bl2Q6DaxiakuoT+453xT4F7Zo0bZ4PGuWOZ1mcI+zQdyyPxtfY94LBP7OvPL
 kgGCq6N4tqx1vsN74iF20fqvD6oMch/1lpBd5dBPAXhsAVLBEY07imU8WVZKCoegtmpY
 bF8Sd0uk2f7WW4IdKI3uGeIj1gfhJdhNUvkL0p9zWBReIwU/rd0nkp5LY8OzZGuzoaSX
 zDkA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:subject:to:cc:references:from:message-id:date
 :user-agent:mime-version:in-reply-to:content-transfer-encoding
 :content-language;
 bh=lWQom+B3YtchGFEgygjEdlRRiIvq78MqoOwISWJ7CTc=;
 b=UNI9lIiRo9X4UENu8D1+C8ne/XMwpH8Y8Rh6jRwQ82p3C8Dx5L4HWLyfujHZ63Iw9h
 2ivKpX4fpqDNdduK0xvC4DZ+GwK1AOaj4bN+KE0f6c0APPhS7S/u9O4LCVm4lcgvoZjd
 u/BWXTPpW1aurO4fqWvU2rAYnMEk8BpMKjxj09jioShuyeHjkOEFk+G5XL7BwkEvLQJB
 bN/8+YM3t+v/B6yniNGv3x1jYPgLjHWEQ7FmXVJp/3zfJzptpy0+Vyx3759yadfJhKp0
 Rm7bxuLgE+sISqWjaKUU3n6r2hhhdTkxoXVEsnHLtz33S3jPpPvn63bgRfiF2w7n70FB
 MwzQ==
X-Gm-Message-State: AOAM530KkWavB/pK1JMjmJppCdF27YPLcQTdZAVlhqJc/BwTt7El/guH
 jA8HmoMeA0yFZUNrQ7yivhU=
X-Google-Smtp-Source: ABdhPJzFbjSZXGlgCPNabTWcMPAgK37S3srZJxHj5+jkWwnpAb23e33U4K0OXf3Sg3URv46BEzah1A==
X-Received: by 2002:a2e:b5b7:: with SMTP id f23mr11764896ljn.380.1595347777269; 
 Tue, 21 Jul 2020 09:09:37 -0700 (PDT)
Received: from [192.168.1.2] ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id l14sm5450304lfj.13.2020.07.21.09.09.36
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 21 Jul 2020 09:09:36 -0700 (PDT)
Subject: Re: Virtio in Xen on Arm (based on IOREQ concept)
To: =?UTF-8?Q?Andr=c3=a9_Przywara?= <andre.przywara@arm.com>,
 Julien Grall <julien@xen.org>
References: <CAPD2p-nthLq5NaU32u8pVaa-ub=a9-LOPenupntTYdS-cu31jQ@mail.gmail.com>
 <20200717150039.GV7191@Air-de-Roger>
 <8f4e0c0d-b3d4-9dd3-ce20-639539321968@gmail.com>
 <alpine.DEB.2.21.2007201326060.32544@sstabellini-ThinkPad-T480s>
 <4454c70e-47fa-46e8-90bf-1904b11318b1@gmail.com>
 <048c27bf-a9ab-054c-8955-6e75fb6c6ea5@xen.org>
 <2c249585-aaba-1065-95df-be772861e9a8@arm.com>
 <e44d6826-643f-77c6-a821-77dc0abf4cbc@gmail.com>
 <1811dd15-4009-f78c-674c-177709cf2a22@arm.com>
From: Oleksandr <olekstysh@gmail.com>
Message-ID: <e7bbc9d6-648e-4d2a-e981-15743a628b1f@gmail.com>
Date: Tue, 21 Jul 2020 19:09:30 +0300
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <1811dd15-4009-f78c-674c-177709cf2a22@arm.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit
Content-Language: en-US
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Oleksandr Andrushchenko <andr2000@gmail.com>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>,
 xen-devel <xen-devel@lists.xenproject.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 alex.bennee@linaro.org, Artem Mygaiev <joculator@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>


On 21.07.20 17:58, André Przywara wrote:
> On 21/07/2020 15:52, Oleksandr wrote:
>> On 21.07.20 17:32, André Przywara wrote:
>>> On 21/07/2020 14:43, Julien Grall wrote:
>> Hello Andre, Julien
>>
>>
>>>> (+ Andre)
>>>>
>>>> Hi Oleksandr,
>>>>
>>>> On 21/07/2020 13:26, Oleksandr wrote:
>>>>> On 20.07.20 23:38, Stefano Stabellini wrote:
>>>>>> For instance, what's your take on notifications with virtio-mmio? How
>>>>>> are they modelled today? Are they good enough or do we need MSIs?
>>>>> Notifications are sent from the device (backend) to the driver
>>>>> (frontend) using interrupts. An additional DM function,
>>>>> xendevicemodel_set_irq_level(), was introduced for that purpose;
>>>>> it results in a vgic_inject_irq() call.
>>>>>
>>>>> Currently, if the device wants to notify a driver it triggers the
>>>>> interrupt by calling that function twice (high level first, then
>>>>> low level).
>>>> This doesn't look right to me. Assuming the interrupt is triggered when
>>>> the line is at high level, the backend should only issue the hypercall once
>>>> to set the level to high. Once the guest has finished processing all the
>>>> notifications, the backend would then call the hypercall to lower the
>>>> interrupt line.
>>>>
>>>> This means the interrupt should keep firing as long as the interrupt
>>>> line is high.
>>>>
>>>> It is quite possible that I took some shortcuts when implementing the
>>>> hypercall, so this should be corrected before anyone starts to rely on
>>>> it.
>>> So I think the key question is: are virtio interrupts level or edge
>>> triggered? Both QEMU and kvmtool advertise virtio-mmio interrupts as
>>> edge-triggered.
>>>   From skimming through the virtio spec I can't find any explicit
>>> mentioning of the type of IRQ, but the usage of MSIs indeed hints at
>>> using an edge property. Apparently reading the PCI ISR status register
>>> clears it, which again sounds like edge. For virtio-mmio the driver
>>> needs to explicitly clear the interrupt status register, which again
>>> says: edge (as it's not the device clearing the status).
>>>
>>> So the device should just notify the driver once, which would cause one
>>> vgic_inject_irq() call. It would be then up to the driver to clear up
>>> that status, by reading PCI ISR status or writing to virtio-mmio's
>>> interrupt-acknowledge register.
>>>
>>> Does that make sense?
>> When implementing the Xen backend, I didn't have an already working
>> example, so I only guessed. I looked at how kvmtool behaved when actually
>> triggering the interrupt on Arm [1].
>>
>> Taking into account that the Xen PoC on Arm advertises [2] the same IRQ
>> type (TYPE_EDGE_RISING) as kvmtool [3], I decided to follow its model of
>> triggering an interrupt. Could you please explain: is this wrong?
> Yes, kvmtool does a double call needlessly (on x86, ppc and arm; mips is
> correct).
> I just chased it down in the kernel: a KVM_IRQ_LINE ioctl with level=low
> is ignored when the target IRQ is configured as edge (which it is,
> because the DT says so); check vgic_validate_injection() in the kernel.
>
> So you should only ever need one call to set the line "high" (actually:
> trigger the edge pulse).

Got it, thanks for the explanation. I have just removed the extra action
(setting the level low) and checked.


-- 
Regards,

Oleksandr Tyshchenko



From xen-devel-bounces@lists.xenproject.org Tue Jul 21 16:14:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jul 2020 16:14:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxuuE-0005jG-R3; Tue, 21 Jul 2020 16:14:14 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Eoy0=BA=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1jxuuD-0005iW-EV
 for xen-devel@lists.xenproject.org; Tue, 21 Jul 2020 16:14:13 +0000
X-Inumbo-ID: 35b4c1fd-cb6d-11ea-a104-12813bfff9fa
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 35b4c1fd-cb6d-11ea-a104-12813bfff9fa;
 Tue, 21 Jul 2020 16:14:11 +0000 (UTC)
Received: from localhost (c-67-164-102-47.hsd1.ca.comcast.net [67.164.102.47])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256
 bits)) (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 1974E2073A;
 Tue, 21 Jul 2020 16:14:10 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
 s=default; t=1595348050;
 bh=o37mSLsBpei9M8OpR/mn+lg5+r6LO58MP71JMJFXW8s=;
 h=Date:From:To:cc:Subject:In-Reply-To:References:From;
 b=uuwy9Q+0XIt3ZFQPA9Km5xNRxjFNYWzh+GQf6rJ86Z+l/OJSlu+zm9X2c/cpEjv/8
 9NcNUtS5cqJwNwr7XE1+l2E99TheM8EPjSJgSeN7R4MgndB/LOUuGEVv1bkGCJbFev
 4Lb0BLrByLIrpNaPqTxabblrQMk+8eFNvF4TdABg=
Date: Tue, 21 Jul 2020 09:14:08 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: =?UTF-8?Q?Alex_Benn=C3=A9e?= <alex.bennee@linaro.org>
Subject: Re: Virtio in Xen on Arm (based on IOREQ concept)
In-Reply-To: <87mu3tufhn.fsf@linaro.org>
Message-ID: <alpine.DEB.2.21.2007210901480.32544@sstabellini-ThinkPad-T480s>
References: <CAPD2p-nthLq5NaU32u8pVaa-ub=a9-LOPenupntTYdS-cu31jQ@mail.gmail.com>
 <20200717150039.GV7191@Air-de-Roger>
 <8f4e0c0d-b3d4-9dd3-ce20-639539321968@gmail.com>
 <20200720091722.GF7191@Air-de-Roger>
 <10eaec62-2c48-52ae-d113-1681c87e3d59@xen.org>
 <20200720102023.GH7191@Air-de-Roger>
 <alpine.DEB.2.21.2007201322060.32544@sstabellini-ThinkPad-T480s>
 <390f3a67-5ca5-d9bd-f13a-2c5920bad45a@xen.org> <87mu3tufhn.fsf@linaro.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="8323329-76670381-1595348050=:32544"
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Oleksandr Andrushchenko <andr2000@gmail.com>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>, Oleksandr <olekstysh@gmail.com>,
 xen-devel <xen-devel@lists.xenproject.org>,
 Artem Mygaiev <joculator@gmail.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--8323329-76670381-1595348050=:32544
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8BIT

On Tue, 21 Jul 2020, Alex Bennée wrote:
> Julien Grall <julien@xen.org> writes:
> 
> > Hi Stefano,
> >
> > On 20/07/2020 21:37, Stefano Stabellini wrote:
> >> On Mon, 20 Jul 2020, Roger Pau Monné wrote:
> >>> On Mon, Jul 20, 2020 at 10:40:40AM +0100, Julien Grall wrote:
> >>>>
> >>>>
> >>>> On 20/07/2020 10:17, Roger Pau Monné wrote:
> >>>>> On Fri, Jul 17, 2020 at 09:34:14PM +0300, Oleksandr wrote:
> >>>>>> On 17.07.20 18:00, Roger Pau Monné wrote:
> >>>>>>> On Fri, Jul 17, 2020 at 05:11:02PM +0300, Oleksandr Tyshchenko wrote:
> >>>>>>> Do you have any plans to try to upstream a modification to the VirtIO
> >>>>>>> spec so that grants (ie: abstract references to memory addresses) can
> >>>>>>> be used on the VirtIO ring?
> >>>>>>
> >>>>>> But the VirtIO spec hasn't been modified, nor has the VirtIO infrastructure in the
> >>>>>> guest. Nothing to upstream)
> >>>>>
> >>>>> OK, so there's no intention to add grants (or a similar interface) to
> >>>>> the spec?
> >>>>>
> >>>>> I understand that you want to support unmodified VirtIO frontends, but
> >>>>> I also think that long term frontends could negotiate with backends on
> >>>>> the usage of grants in the shared ring, like any other VirtIO feature
> >>>>> negotiated between the frontend and the backend.
> >>>>>
> >>>>> This of course needs to be on the spec first before we can start
> >>>>> implementing it, and hence my question whether a modification to the
> >>>>> spec in order to add grants has been considered.
> >>>> The problem is not really the specification but the adoption in the
> >>>> ecosystem. A protocol based on grant-tables would mostly only be used by Xen
> >>>> therefore:
> >>>>     - It may be difficult to convince a proprietary OS vendor to invest
> >>>> resources in implementing the protocol
> >>>>     - It would be more difficult to move in/out of the Xen ecosystem.
> >>>>
> >>>> Both may slow the adoption of Xen in some areas.
> >>>
> >>> Right, just to be clear my suggestion wasn't to force the usage of
> >>> grants, but whether adding something along this lines was in the
> >>> roadmap, see below.
> >>>
> >>>> If one is interested in security, then it would be better to work with the
> >>>> other interested parties. I think it would be possible to use a virtual
> >>>> IOMMU for this purpose.
> >>>
> >>> Yes, I've also heard rumors about using the (I assume VirtIO) IOMMU in
> >>> order to protect what backends can map. This seems like a fine idea,
> >>> and would allow us to gain the lost security without having to do the
> >>> whole work ourselves.
> >>>
> >>> Do you know if there's anything published about this? I'm curious
> >>> about how and where in the system the VirtIO IOMMU is/should be
> >>> implemented.
> >> 
> >> Not yet (as far as I know), but we have just started some discussons on
> >> this topic within Linaro.
> >> 
> >> 
> >> You should also be aware that there is another proposal based on
> >> pre-shared-memory and memcpys to solve the virtio security issue:
> >> 
> >> https://marc.info/?l=linux-kernel&m=158807398403549
> >> 
> >> It would certainly be slower than the "virtio IOMMU" solution, but it
> >> would take far less time to develop and could work as a short-term
> >> stop-gap.
> >
> > I don't think I agree with this blanket statement. In the case of "virtio
> > IOMMU", you would potentially need to map/unmap pages on every request,
> > which would result in a lot of back and forth to the hypervisor.

Yes, that's true.


> Can a virtio-iommu just set bounds when a device is initialised as to
> where memory will be in the kernel address space?

First, to avoid possible miscommunication, let me clarify that what Julien
and I are calling "virtio IOMMU" is not an existing virtio-iommu driver
of some sort, but an idea for a cross-domain virtual IOMMU that lets the
frontends explicitly permit memory to be accessed by the
backends. Hopefully it was clear already, but better to be sure :-)


If you are asking whether it would be possible to use the virtual IOMMU
just to set up memory at startup time, then it certainly could, but
effectively we would end up with one of the following scenarios:

1) one pre-shared bounce buffer
Effectively the same as https://marc.info/?l=linux-kernel&m=158807398403549;
it still requires memcpys, but it could still be nicer than Qualcomm's
proposal because it is easier to configure?

2) all domU memory made accessible to the backend
Not actually any more secure than placing the backends in dom0.


Otherwise we need the dynamic maps/unmaps.

For completeness, if we could write the whole software stack from
scratch, it would also be possible to architect a protocol (like
virtio-net) and the software stack above it to always allocate memory
from a given buffer (the pre-shared buffer), hence greatly reducing the
amount of required memcpys, maybe even down to zero. In reality, most
interfaces in Linux and POSIX userspace expect the application to be the
one providing the buffer, hence they would require memcpys in the kernel
to move data between the user-provided buffers and the pre-shared buffers.
--8323329-76670381-1595348050=:32544--


From xen-devel-bounces@lists.xenproject.org Tue Jul 21 16:42:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jul 2020 16:42:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxvLE-0008Cj-DA; Tue, 21 Jul 2020 16:42:08 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Eoy0=BA=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1jxvLC-0008Ce-LP
 for xen-devel@lists.xenproject.org; Tue, 21 Jul 2020 16:42:06 +0000
X-Inumbo-ID: 1c544436-cb71-11ea-8576-bc764e2007e4
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1c544436-cb71-11ea-8576-bc764e2007e4;
 Tue, 21 Jul 2020 16:42:05 +0000 (UTC)
Received: from localhost (c-67-164-102-47.hsd1.ca.comcast.net [67.164.102.47])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256
 bits)) (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id BF9EA207BB;
 Tue, 21 Jul 2020 16:42:04 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
 s=default; t=1595349725;
 bh=lykWmdVeGNWnPl+dJlt+xFuRfHrrA3E3hZtNRghIMnU=;
 h=Date:From:To:cc:Subject:In-Reply-To:References:From;
 b=mHqCXNbfrQZ5Hx9t09buaTY0xsGdbEXP6E893JZL8VU1sEgkd4FtLW4lne6yxa5it
 qn+ciGXE2fPNsrgzuHihD8VtZOtIu6tsQf7dDf+A5u/sfx5BV5aUHIYMotSue23/kt
 uVKkp7h+qL1N/ghozCiwDSgXMulD46eYV32l8SrQ=
Date: Tue, 21 Jul 2020 09:42:04 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien@xen.org>
Subject: Re: Virtio in Xen on Arm (based on IOREQ concept)
In-Reply-To: <8f125464-a0c2-dd71-6d51-eaf13259e727@xen.org>
Message-ID: <alpine.DEB.2.21.2007210939510.32544@sstabellini-ThinkPad-T480s>
References: <CAPD2p-nthLq5NaU32u8pVaa-ub=a9-LOPenupntTYdS-cu31jQ@mail.gmail.com>
 <20200717150039.GV7191@Air-de-Roger>
 <8f4e0c0d-b3d4-9dd3-ce20-639539321968@gmail.com>
 <alpine.DEB.2.21.2007201326060.32544@sstabellini-ThinkPad-T480s>
 <56e512af-993b-1364-be56-fc4be5d88519@xen.org>
 <87k0yxuf89.fsf@linaro.org> <8f125464-a0c2-dd71-6d51-eaf13259e727@xen.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="8323329-1593658400-1595349725=:32544"
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Oleksandr Andrushchenko <andr2000@gmail.com>,
 Andre Przywara <andre.przywara@arm.com>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>, Oleksandr <olekstysh@gmail.com>,
 xen-devel <xen-devel@lists.xenproject.org>,
 Artem Mygaiev <joculator@gmail.com>,
 =?UTF-8?Q?Alex_Benn=C3=A9e?= <alex.bennee@linaro.org>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--8323329-1593658400-1595349725=:32544
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8BIT

On Tue, 21 Jul 2020, Julien Grall wrote:
> Hi Alex,
> 
> Thank you for your feedback!
> 
> On 21/07/2020 15:15, Alex Bennée wrote:
> > Julien Grall <julien@xen.org> writes:
> > 
> > > (+ Andree for the vGIC).
> > > 
> > > Hi Stefano,
> > > 
> > > On 20/07/2020 21:38, Stefano Stabellini wrote:
> > > > On Fri, 17 Jul 2020, Oleksandr wrote:
> > > > > > > *A few word about solution:*
> > > > > > > As it was mentioned at [1], in order to implement virtio-mmio Xen
> > > > > > > on Arm
> > > > > > Any plans for virtio-pci? Arm seems to be moving to the PCI bus, and
> > > > > > it would be very interesting from a x86 PoV, as I don't think
> > > > > > virtio-mmio is something that you can easily use on x86 (or even use
> > > > > > at all).
> > > > > 
> > > > > To be honest, I didn't consider virtio-pci so far. Julien's PoC (which
> > > > > we are based
> > > > > on) provides support for the virtio-mmio transport,
> > > > > 
> > > > > which is enough to start working on VirtIO and is not as complex as
> > > > > virtio-pci. But that doesn't mean there is no way for virtio-pci in Xen.
> > > > > 
> > > > > I think this could be added in the next steps. But the nearest target is
> > > > > the virtio-mmio approach (of course, if the community agrees on that).
> > > 
> > > > Aside from complexity and ease of development, are there any other
> > > > architectural reasons for using virtio-mmio?
> > > 
> > <snip>
> > > > 
> > > > For instance, what's your take on notifications with virtio-mmio? How
> > > > are they modelled today?
> > > 
> > > The backend will notify the frontend using an SPI. The other way around
> > > (frontend -> backend) is based on an MMIO write.
> > > 
> > > We have an interface to allow the backend to control the
> > > interrupt level (i.e. low, high). However, the "old" vGIC doesn't
> > > properly handle level interrupts. So we would end up treating level
> > > interrupts as edge.
> > > 
> > > Technically, the problem already exists with HW interrupts, but the
> > > HW should fire again if the interrupt line is still asserted. Another
> > > issue is that the interrupt may fire even if the interrupt line was
> > > deasserted (IIRC this caused some interesting problems with the arch
> > > timer).
> > > 
> > > I am a bit concerned that the issue will be more prominent for virtual
> > > interrupts. I know that we have some gross hack in the vpl011 to handle
> > > level interrupts. So maybe it is time to switch to the new vGIC?
> > > 
> > > > Are they good enough or do we need MSIs?
> > > 
> > > I am not sure whether virtio-mmio supports MSIs. However, for virtio-pci,
> > > MSIs are going to be useful to improve performance. This may mean
> > > exposing an ITS, so we would need to add guest support for it.
> > 
> > virtio-mmio doesn't support MSI's at the moment although there have been
> > proposals to update the spec to allow them. At the moment the cost of
> > reading the ISR value and then writing an ack in vm_interrupt:
> > 
> > 	/* Read and acknowledge interrupts */
> > 	status = readl(vm_dev->base + VIRTIO_MMIO_INTERRUPT_STATUS);
> > 	writel(status, vm_dev->base + VIRTIO_MMIO_INTERRUPT_ACK);
> > 
> 
> Hmmmm, the current way to handle MMIO is the following:
>     * pause the vCPU
>     * Forward the access to the backend domain
>     * Schedule the backend domain
>     * Wait for the access to be handled
>     * unpause the vCPU
> 
> So the sequence is going to be fairly expensive on Xen.
> 
> > puts an extra vmexit cost on trapping and emulating each access. Getting an
> > MSI via an exitless access to the GIC would be better, I think.
> > I'm not quite
> > sure what the path for IRQs from Xen is.
> 
> A vmexit on Xen on Arm is pretty cheap compared to KVM, as we don't save a
> lot of things. In this situation, handling an extra trap for the interrupt
> is likely to be negligible compared to the sequence above.

+1


> I am assuming the sequence is also going to be used by the MSIs, right?
> 
> It feels to me that it would be worth spending time investigating the cost of
> that sequence. It might be possible to optimize the ACK and avoid waiting for
> the backend to handle the access.

+1
--8323329-1593658400-1595349725=:32544--


From xen-devel-bounces@lists.xenproject.org Tue Jul 21 16:49:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jul 2020 16:49:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxvSZ-0008QV-Af; Tue, 21 Jul 2020 16:49:43 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tByU=BA=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jxvSY-0008QN-5T
 for xen-devel@lists.xenproject.org; Tue, 21 Jul 2020 16:49:42 +0000
X-Inumbo-ID: 2b10c444-cb72-11ea-8577-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2b10c444-cb72-11ea-8577-bc764e2007e4;
 Tue, 21 Jul 2020 16:49:39 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=9nHJKCpAzpDVPrqvgCdh1klddWnXxg2WWDw3DYyViBg=; b=2MslN0HxSt177eHg1m684spmY
 B9pLVI+/l8NWGFslK+0+8o8Bv/TOafUidLgQ4b1CR+oPGNxVmQJJsg3yKMjvpNNhPyRT5KW6X1Hcw
 URBYxs8UKq1wH99bfU59E5Obar/8tuSE+eHztsF0dURy1dFAhcqwqpP4tAMPgB9mkjz9w=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jxvSV-0003Mz-Ie; Tue, 21 Jul 2020 16:49:39 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jxvSV-0005Wr-Af; Tue, 21 Jul 2020 16:49:39 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jxvSV-0003pK-A0; Tue, 21 Jul 2020 16:49:39 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-152061-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.14-testing test] 152061: regressions - trouble:
 fail/pass/starved
X-Osstest-Failures: xen-4.14-testing:test-amd64-amd64-dom0pvh-xl-amd:guest-localmigrate:fail:regression
 xen-4.14-testing:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:regression
 xen-4.14-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 xen-4.14-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-4.14-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-4.14-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-4.14-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 xen-4.14-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-4.14-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-4.14-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-4.14-testing:test-amd64-amd64-qemuu-freebsd12-amd64:hosts-allocate:starved:nonblocking
 xen-4.14-testing:test-amd64-amd64-qemuu-freebsd11-amd64:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This: xen=312e5be7ce751c35b79dff3aca4fb660610913e9
X-Osstest-Versions-That: xen=23fe1b8d5170dfd5039c39181e82bfd5e20f3c18
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 21 Jul 2020 16:49:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 152061 xen-4.14-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/152061/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-dom0pvh-xl-amd 16 guest-localmigrate    fail REGR. vs. 152043
 test-armhf-armhf-xl-vhd     15 guest-start/debian.repeat fail REGR. vs. 152043

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop             fail never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop             fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop             fail never pass
 test-amd64-amd64-qemuu-freebsd12-amd64  2 hosts-allocate           starved n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  2 hosts-allocate           starved n/a

version targeted for testing:
 xen                  312e5be7ce751c35b79dff3aca4fb660610913e9
baseline version:
 xen                  23fe1b8d5170dfd5039c39181e82bfd5e20f3c18

Last test of basis   152043  2020-07-20 12:10:42 Z    1 days
Testing same since   152061  2020-07-21 01:41:43 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              fail    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       starved 
 test-amd64-amd64-qemuu-freebsd12-amd64                       starved 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 312e5be7ce751c35b79dff3aca4fb660610913e9
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Jul 20 17:54:52 2020 +0100

    docs: Replace non-UTF-8 character in hypfs-paths.pandoc
    
    From the docs cronjob on xenbits:
    
      /usr/bin/pandoc --number-sections --toc --standalone misc/hypfs-paths.pandoc --output html/misc/hypfs-paths.html
      pandoc: Cannot decode byte '\x92': Data.Text.Internal.Encoding.decodeUtf8: Invalid UTF-8 stream
      make: *** [Makefile:236: html/misc/hypfs-paths.html] Error 1
    
    Fixes: 5a4a411bde4 ("docs: specify stability of hypfs path documentation")
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Release-acked-by: Paul Durrant <paul@xen.org>
    (cherry picked from commit 9ffdda96d9e7c3d9c7a5bbe2df6ab30f63927542)
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Tue Jul 21 17:04:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jul 2020 17:04:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxvgl-0001gF-Ph; Tue, 21 Jul 2020 17:04:23 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=H3aL=BA=arm.com=andre.przywara@srs-us1.protection.inumbo.net>)
 id 1jxvgk-0001gA-Al
 for xen-devel@lists.xenproject.org; Tue, 21 Jul 2020 17:04:22 +0000
X-Inumbo-ID: 380153ba-cb74-11ea-a10a-12813bfff9fa
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 380153ba-cb74-11ea-a10a-12813bfff9fa;
 Tue, 21 Jul 2020 17:04:20 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 252EA1045;
 Tue, 21 Jul 2020 10:04:20 -0700 (PDT)
Received: from [192.168.2.22] (unknown [172.31.20.19])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id DFD3A3F66E;
 Tue, 21 Jul 2020 10:04:18 -0700 (PDT)
Subject: Re: Virtio in Xen on Arm (based on IOREQ concept)
To: Oleksandr <olekstysh@gmail.com>, Julien Grall <julien@xen.org>
References: <CAPD2p-nthLq5NaU32u8pVaa-ub=a9-LOPenupntTYdS-cu31jQ@mail.gmail.com>
 <20200717150039.GV7191@Air-de-Roger>
 <8f4e0c0d-b3d4-9dd3-ce20-639539321968@gmail.com>
 <alpine.DEB.2.21.2007201326060.32544@sstabellini-ThinkPad-T480s>
 <4454c70e-47fa-46e8-90bf-1904b11318b1@gmail.com>
 <048c27bf-a9ab-054c-8955-6e75fb6c6ea5@xen.org>
 <2c249585-aaba-1065-95df-be772861e9a8@arm.com>
 <e44d6826-643f-77c6-a821-77dc0abf4cbc@gmail.com>
 <1811dd15-4009-f78c-674c-177709cf2a22@arm.com>
 <e7bbc9d6-648e-4d2a-e981-15743a628b1f@gmail.com>
From: =?UTF-8?Q?Andr=c3=a9_Przywara?= <andre.przywara@arm.com>
Autocrypt: addr=andre.przywara@arm.com; prefer-encrypt=mutual; keydata=
 xsFNBFNPCKMBEAC+6GVcuP9ri8r+gg2fHZDedOmFRZPtcrMMF2Cx6KrTUT0YEISsqPoJTKld
 tPfEG0KnRL9CWvftyHseWTnU2Gi7hKNwhRkC0oBL5Er2hhNpoi8x4VcsxQ6bHG5/dA7ctvL6
 kYvKAZw4X2Y3GTbAZIOLf+leNPiF9175S8pvqMPi0qu67RWZD5H/uT/TfLpvmmOlRzNiXMBm
 kGvewkBpL3R2clHquv7pB6KLoY3uvjFhZfEedqSqTwBVu/JVZZO7tvYCJPfyY5JG9+BjPmr+
 REe2gS6w/4DJ4D8oMWKoY3r6ZpHx3YS2hWZFUYiCYovPxfj5+bOr78sg3JleEd0OB0yYtzTT
 esiNlQpCo0oOevwHR+jUiaZevM4xCyt23L2G+euzdRsUZcK/M6qYf41Dy6Afqa+PxgMEiDto
 ITEH3Dv+zfzwdeqCuNU0VOGrQZs/vrKOUmU/QDlYL7G8OIg5Ekheq4N+Ay+3EYCROXkstQnf
 YYxRn5F1oeVeqoh1LgGH7YN9H9LeIajwBD8OgiZDVsmb67DdF6EQtklH0ycBcVodG1zTCfqM
 AavYMfhldNMBg4vaLh0cJ/3ZXZNIyDlV372GmxSJJiidxDm7E1PkgdfCnHk+pD8YeITmSNyb
 7qeU08Hqqh4ui8SSeUp7+yie9zBhJB5vVBJoO5D0MikZAODIDwARAQABzS1BbmRyZSBQcnp5
 d2FyYSAoQVJNKSA8YW5kcmUucHJ6eXdhcmFAYXJtLmNvbT7CwXsEEwECACUCGwMGCwkIBwMC
 BhUIAgkKCwQWAgMBAh4BAheABQJTWSV8AhkBAAoJEAL1yD+ydue63REP/1tPqTo/f6StS00g
 NTUpjgVqxgsPWYWwSLkgkaUZn2z9Edv86BLpqTY8OBQZ19EUwfNehcnvR+Olw+7wxNnatyxo
 D2FG0paTia1SjxaJ8Nx3e85jy6l7N2AQrTCFCtFN9lp8Pc0LVBpSbjmP+Peh5Mi7gtCBNkpz
 KShEaJE25a/+rnIrIXzJHrsbC2GwcssAF3bd03iU41J1gMTalB6HCtQUwgqSsbG8MsR/IwHW
 XruOnVp0GQRJwlw07e9T3PKTLj3LWsAPe0LHm5W1Q+euoCLsZfYwr7phQ19HAxSCu8hzp43u
 zSw0+sEQsO+9wz2nGDgQCGepCcJR1lygVn2zwRTQKbq7Hjs+IWZ0gN2nDajScuR1RsxTE4WR
 lj0+Ne6VrAmPiW6QqRhliDO+e82riI75ywSWrJb9TQw0+UkIQ2DlNr0u0TwCUTcQNN6aKnru
 ouVt3qoRlcD5MuRhLH+ttAcmNITMg7GQ6RQajWrSKuKFrt6iuDbjgO2cnaTrLbNBBKPTG4oF
 D6kX8Zea0KvVBagBsaC1CDTDQQMxYBPDBSlqYCb/b2x7KHTvTAHUBSsBRL6MKz8wwruDodTM
 4E4ToV9URl4aE/msBZ4GLTtEmUHBh4/AYwk6ACYByYKyx5r3PDG0iHnJ8bV0OeyQ9ujfgBBP
 B2t4oASNnIOeGEEcQ2rjzsFNBFNPCKMBEACm7Xqafb1Dp1nDl06aw/3O9ixWsGMv1Uhfd2B6
 it6wh1HDCn9HpekgouR2HLMvdd3Y//GG89irEasjzENZPsK82PS0bvkxxIHRFm0pikF4ljIb
 6tca2sxFr/H7CCtWYZjZzPgnOPtnagN0qVVyEM7L5f7KjGb1/o5EDkVR2SVSSjrlmNdTL2Rd
 zaPqrBoxuR/y/n856deWqS1ZssOpqwKhxT1IVlF6S47CjFJ3+fiHNjkljLfxzDyQXwXCNoZn
 BKcW9PvAMf6W1DGASoXtsMg4HHzZ5fW+vnjzvWiC4pXrcP7Ivfxx5pB+nGiOfOY+/VSUlW/9
 GdzPlOIc1bGyKc6tGREH5lErmeoJZ5k7E9cMJx+xzuDItvnZbf6RuH5fg3QsljQy8jLlr4S6
 8YwxlObySJ5K+suPRzZOG2+kq77RJVqAgZXp3Zdvdaov4a5J3H8pxzjj0yZ2JZlndM4X7Msr
 P5tfxy1WvV4Km6QeFAsjcF5gM+wWl+mf2qrlp3dRwniG1vkLsnQugQ4oNUrx0ahwOSm9p6kM
 CIiTITo+W7O9KEE9XCb4vV0ejmLlgdDV8ASVUekeTJkmRIBnz0fa4pa1vbtZoi6/LlIdAEEt
 PY6p3hgkLLtr2GRodOW/Y3vPRd9+rJHq/tLIfwc58ZhQKmRcgrhtlnuTGTmyUqGSiMNfpwAR
 AQABwsFfBBgBAgAJBQJTTwijAhsMAAoJEAL1yD+ydue64BgP/33QKczgAvSdj9XTC14wZCGE
 U8ygZwkkyNf021iNMj+o0dpLU48PIhHIMTXlM2aiiZlPWgKVlDRjlYuc9EZqGgbOOuR/pNYA
 JX9vaqszyE34JzXBL9DBKUuAui8z8GcxRcz49/xtzzP0kH3OQbBIqZWuMRxKEpRptRT0wzBL
 O31ygf4FRxs68jvPCuZjTGKELIo656/Hmk17cmjoBAJK7JHfqdGkDXk5tneeHCkB411p9WJU
 vMO2EqsHjobjuFm89hI0pSxlUoiTL0Nuk9Edemjw70W4anGNyaQtBq+qu1RdjUPBvoJec7y/
 EXJtoGxq9Y+tmm22xwApSiIOyMwUi9A1iLjQLmngLeUdsHyrEWTbEYHd2sAM2sqKoZRyBDSv
 ejRvZD6zwkY/9nRqXt02H1quVOP42xlkwOQU6gxm93o/bxd7S5tEA359Sli5gZRaucpNQkwd
 KLQdCvFdksD270r4jU/rwR2R/Ubi+txfy0dk2wGBjl1xpSf0Lbl/KMR5TQntELfLR4etizLq
 Xpd2byn96Ivi8C8u9zJruXTueHH8vt7gJ1oax3yKRGU5o2eipCRiKZ0s/T7fvkdq+8beg9ku
 fDO4SAgJMIl6H5awliCY2zQvLHysS/Wb8QuB09hmhLZ4AifdHyF1J5qeePEhgTA+BaUbiUZf
 i4aIXCH3Wv6K
Organization: ARM Ltd.
Message-ID: <b5faad02-34a5-eb58-1da9-f7852a817281@arm.com>
Date: Tue, 21 Jul 2020 18:02:55 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <e7bbc9d6-648e-4d2a-e981-15743a628b1f@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Oleksandr Andrushchenko <andr2000@gmail.com>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>,
 xen-devel <xen-devel@lists.xenproject.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 alex.bennee@linaro.org, Artem Mygaiev <joculator@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 21/07/2020 17:09, Oleksandr wrote:
> 
> On 21.07.20 17:58, André Przywara wrote:
>> On 21/07/2020 15:52, Oleksandr wrote:
>>> On 21.07.20 17:32, André Przywara wrote:
>>>> On 21/07/2020 14:43, Julien Grall wrote:
>>> Hello Andre, Julien
>>>
>>>
>>>>> (+ Andre)
>>>>>
>>>>> Hi Oleksandr,
>>>>>
>>>>> On 21/07/2020 13:26, Oleksandr wrote:
>>>>>> On 20.07.20 23:38, Stefano Stabellini wrote:
>>>>>>> For instance, what's your take on notifications with virtio-mmio?
>>>>>>> How
>>>>>>> are they modelled today? Are they good enough or do we need MSIs?
>>>>>> Notifications are sent from device (backend) to the driver (frontend)
>>>>>> using interrupts. An additional DM function,
>>>>>> xendevicemodel_set_irq_level(), was introduced for that purpose;
>>>>>> it results in a vgic_inject_irq() call.
>>>>>>
>>>>>> Currently, if a device wants to notify a driver, it should trigger
>>>>>> the interrupt by calling that function twice (high level at first,
>>>>>> then low level).
>>>>> This doesn't look right to me. Assuming the interrupt is triggered
>>>>> when the line is high-level, the backend should only issue the
>>>>> hypercall once, to set the level to high. Once the guest has finished
>>>>> processing all the notifications, the backend would then call the
>>>>> hypercall to lower the interrupt line.
>>>>>
>>>>> This means the interrupts should keep firing as long as the interrupt
>>>>> line is high.
>>>>>
>>>>> It is quite possible that I took some shortcuts when implementing
>>>>> the hypercall, so this should be corrected before anyone starts to
>>>>> rely on it.
>>>> So I think the key question is: are virtio interrupts level or edge
>>>> triggered? Both QEMU and kvmtool advertise virtio-mmio interrupts as
>>>> edge-triggered.
>>>>   From skimming through the virtio spec I can't find any explicit
>>>> mention of the IRQ type, but the usage of MSIs indeed hints at
>>>> using an edge property. Apparently reading the PCI ISR status register
>>>> clears it, which again sounds like edge. For virtio-mmio the driver
>>>> needs to explicitly clear the interrupt status register, which again
>>>> says: edge (as it's not the device clearing the status).
>>>>
>>>> So the device should just notify the driver once, which would cause one
>>>> vgic_inject_irq() call. It would be then up to the driver to clear up
>>>> that status, by reading PCI ISR status or writing to virtio-mmio's
>>>> interrupt-acknowledge register.
>>>>
>>>> Does that make sense?
>>> When implementing the Xen backend, I didn't have an already working
>>> example, so I could only guess. I looked at how kvmtool behaved when
>>> actually triggering the interrupt on Arm [1].
>>>
>>> Taking into account that the Xen PoC on Arm advertises [2] the same irq
>>> type (TYPE_EDGE_RISING) as kvmtool [3], I decided to follow its model of
>>> triggering an interrupt. Could you please explain, is this wrong?
>> Yes, kvmtool does a double call needlessly (on x86, ppc and arm, mips is
>> correct).
>> I just chased it down in the kernel, a KVM_IRQ_LINE ioctl with level=low
>> is ignored when the target IRQ is configured as edge (which it is,
>> because the DT says so), check vgic_validate_injection() in the kernel.
>>
>> So you should only ever need one call to set the line "high" (actually:
>> trigger the edge pulse).
> 
> Got it, thanks for the explanation. Have just removed an extra action
> (setting low level) and checked.
> 

Just for the record: the KVM API documentation explicitly mentions:
"Note that edge-triggered interrupts require the level to be set to 1
and then back to 0." So kvmtool is just following the book.

Setting it to 0 still does nothing *on ARM*, and the x86 IRQ code is far
too convoluted to easily judge what's really happening here. For MSIs at
least it's equally ignored.

So I guess a clean implementation in Xen does not need two calls, but
some folks with understanding of x86 IRQ handling in Xen should confirm.

Cheers,
Andre.


From xen-devel-bounces@lists.xenproject.org Tue Jul 21 17:22:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jul 2020 17:22:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxvyQ-0003MU-Kb; Tue, 21 Jul 2020 17:22:38 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=bQ5W=BA=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jxvyP-0003MP-HE
 for xen-devel@lists.xenproject.org; Tue, 21 Jul 2020 17:22:37 +0000
X-Inumbo-ID: c48d8004-cb76-11ea-a110-12813bfff9fa
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c48d8004-cb76-11ea-a110-12813bfff9fa;
 Tue, 21 Jul 2020 17:22:35 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595352155;
 h=from:to:cc:subject:date:message-id:mime-version:
 content-transfer-encoding;
 bh=U9TFSiMPyXpVYVZMejGmia8JMd9i23k5xSQc2PVJ5+Q=;
 b=QeLcFQZen2EDU7lLxnAUVZa7d4JkphtLpPoA+prKL0Ng6V1nAEOk0ecm
 FaAhNA34xl0HIuGBAXxSswHqnYySXojnXH5mc1htWx789zHown9TVKJrp
 LHY8lOFN/Dx+xHkrtCvVsrjtJ/spIFbUhssBo1Pp14F/m0et/eP/nMHGh Y=;
Authentication-Results: esa5.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: 4JDADQ/sytG4ozsWr7yIeK7GkrOMB6CevNqwqE5rav/DcjQYI9gvEU6aoeAvqmntPFkur9u+bH
 ma409rlwRPmVD7noc2pT5LTeslcZOWqOhBjrg55k0/YYu1m+3VvWlIE2waX1PFrJ9I8pclf+zq
 z9nRcXDFZP6Yq9zxciOo26Tupvv8+AlgaEZg9xMYVQTdo5YfDD0ds60rI1B+8U6V7AxIg5otQk
 Wb4FkN8VeWkx+bE6mlPRF9UD0+SDVafoRZu1tSsRjPZSgHu1GbhDEEqxe4pFkNPp6/OY7aGlQM
 5WQ=
X-SBRS: 2.7
X-MesageID: 23065430
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,379,1589256000"; d="scan'208";a="23065430"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Subject: [PATCH] x86/svm: Fold nsvm_{wr, rd}msr() into svm_msr_{read,
 write}_intercept()
Date: Tue, 21 Jul 2020 18:22:08 +0100
Message-ID: <20200721172208.12176-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Jan Beulich <JBeulich@suse.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

... to simplify the default cases.

There are multiple errors with the handling of these three MSRs, but they are
deliberately not addressed at this point.

This removes the dance converting -1/0/1 into X86EMUL_*, allowing for the
removal of the 'ret' variable.

While cleaning this up, drop the gdprintk()'s for #GP conditions, and the
'result' variable from svm_msr_write_intercept() as it is never modified.

No functional change.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Wei Liu <wl@xen.org>
CC: Roger Pau Monné <roger.pau@citrix.com>
---
 xen/arch/x86/hvm/svm/nestedsvm.c        | 77 ---------------------------------
 xen/arch/x86/hvm/svm/svm.c              | 46 +++++++++++++-------
 xen/include/asm-x86/hvm/svm/nestedsvm.h |  4 --
 3 files changed, 31 insertions(+), 96 deletions(-)

diff --git a/xen/arch/x86/hvm/svm/nestedsvm.c b/xen/arch/x86/hvm/svm/nestedsvm.c
index 11dc9c089c..a193d9de45 100644
--- a/xen/arch/x86/hvm/svm/nestedsvm.c
+++ b/xen/arch/x86/hvm/svm/nestedsvm.c
@@ -47,22 +47,6 @@ nestedsvm_vcpu_stgi(struct vcpu *v)
     local_event_delivery_enable(); /* unmask events for PV drivers */
 }
 
-static int
-nestedsvm_vmcb_isvalid(struct vcpu *v, uint64_t vmcxaddr)
-{
-    /* Address must be 4k aligned */
-    if ( (vmcxaddr & ~PAGE_MASK) != 0 )
-        return 0;
-
-    /* Maximum valid physical address.
-     * See AMD BKDG for HSAVE_PA MSR.
-     */
-    if ( vmcxaddr > 0xfd00000000ULL )
-        return 0;
-
-    return 1;
-}
-
 int nestedsvm_vmcb_map(struct vcpu *v, uint64_t vmcbaddr)
 {
     struct nestedvcpu *nv = &vcpu_nestedhvm(v);
@@ -1263,67 +1247,6 @@ enum hvm_intblk nsvm_intr_blocked(struct vcpu *v)
     return hvm_intblk_none;
 }
 
-/* MSR handling */
-int nsvm_rdmsr(struct vcpu *v, unsigned int msr, uint64_t *msr_content)
-{
-    struct nestedsvm *svm = &vcpu_nestedsvm(v);
-    int ret = 1;
-
-    *msr_content = 0;
-
-    switch (msr) {
-    case MSR_K8_VM_CR:
-        break;
-    case MSR_K8_VM_HSAVE_PA:
-        *msr_content = svm->ns_msr_hsavepa;
-        break;
-    case MSR_AMD64_TSC_RATIO:
-        *msr_content = svm->ns_tscratio;
-        break;
-    default:
-        ret = 0;
-        break;
-    }
-
-    return ret;
-}
-
-int nsvm_wrmsr(struct vcpu *v, unsigned int msr, uint64_t msr_content)
-{
-    int ret = 1;
-    struct nestedsvm *svm = &vcpu_nestedsvm(v);
-
-    switch (msr) {
-    case MSR_K8_VM_CR:
-        /* ignore write. handle all bits as read-only. */
-        break;
-    case MSR_K8_VM_HSAVE_PA:
-        if (!nestedsvm_vmcb_isvalid(v, msr_content)) {
-            gdprintk(XENLOG_ERR,
-                "MSR_K8_VM_HSAVE_PA value invalid %#"PRIx64"\n", msr_content);
-            ret = -1; /* inject #GP */
-            break;
-        }
-        svm->ns_msr_hsavepa = msr_content;
-        break;
-    case MSR_AMD64_TSC_RATIO:
-        if ((msr_content & ~TSC_RATIO_RSVD_BITS) != msr_content) {
-            gdprintk(XENLOG_ERR,
-                "reserved bits set in MSR_AMD64_TSC_RATIO %#"PRIx64"\n",
-                msr_content);
-            ret = -1; /* inject #GP */
-            break;
-        }
-        svm->ns_tscratio = msr_content;
-        break;
-    default:
-        ret = 0;
-        break;
-    }
-
-    return ret;
-}
-
 /* VMEXIT emulation */
 void
 nestedsvm_vmexit_defer(struct vcpu *v,
diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
index 4eb41792e2..bbe73744b8 100644
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -1788,10 +1788,10 @@ static void svm_dr_access(struct vcpu *v, struct cpu_user_regs *regs)
 
 static int svm_msr_read_intercept(unsigned int msr, uint64_t *msr_content)
 {
-    int ret;
     struct vcpu *v = current;
     const struct domain *d = v->domain;
     struct vmcb_struct *vmcb = v->arch.hvm.svm.vmcb;
+    const struct nestedsvm *nsvm = &vcpu_nestedsvm(v);
 
     switch ( msr )
     {
@@ -1914,6 +1914,18 @@ static int svm_msr_read_intercept(unsigned int msr, uint64_t *msr_content)
             goto gpf;
         break;
 
+    case MSR_K8_VM_CR:
+        *msr_content = 0;
+        break;
+
+    case MSR_K8_VM_HSAVE_PA:
+        *msr_content = nsvm->ns_msr_hsavepa;
+        break;
+
+    case MSR_AMD64_TSC_RATIO:
+        *msr_content = nsvm->ns_tscratio;
+        break;
+
     case MSR_AMD_OSVW_ID_LENGTH:
     case MSR_AMD_OSVW_STATUS:
         if ( !d->arch.cpuid->extd.osvw )
@@ -1922,12 +1934,6 @@ static int svm_msr_read_intercept(unsigned int msr, uint64_t *msr_content)
         break;
 
     default:
-        ret = nsvm_rdmsr(v, msr, msr_content);
-        if ( ret < 0 )
-            goto gpf;
-        else if ( ret )
-            break;
-
         if ( rdmsr_safe(msr, *msr_content) == 0 )
             break;
 
@@ -1956,10 +1962,10 @@ static int svm_msr_read_intercept(unsigned int msr, uint64_t *msr_content)
 
 static int svm_msr_write_intercept(unsigned int msr, uint64_t msr_content)
 {
-    int ret, result = X86EMUL_OKAY;
     struct vcpu *v = current;
     struct domain *d = v->domain;
     struct vmcb_struct *vmcb = v->arch.hvm.svm.vmcb;
+    struct nestedsvm *nsvm = &vcpu_nestedsvm(v);
 
     switch ( msr )
     {
@@ -2085,6 +2091,22 @@ static int svm_msr_write_intercept(unsigned int msr, uint64_t msr_content)
             goto gpf;
         break;
 
+    case MSR_K8_VM_CR:
+        /* ignore write. handle all bits as read-only. */
+        break;
+
+    case MSR_K8_VM_HSAVE_PA:
+        if ( (msr_content & ~PAGE_MASK) || msr_content > 0xfd00000000ULL )
+            goto gpf;
+        nsvm->ns_msr_hsavepa = msr_content;
+        break;
+
+    case MSR_AMD64_TSC_RATIO:
+        if ( msr_content & TSC_RATIO_RSVD_BITS )
+            goto gpf;
+        nsvm->ns_tscratio = msr_content;
+        break;
+
     case MSR_IA32_MCx_MISC(4): /* Threshold register */
     case MSR_F10_MC4_MISC1 ... MSR_F10_MC4_MISC3:
         /*
@@ -2102,12 +2124,6 @@ static int svm_msr_write_intercept(unsigned int msr, uint64_t msr_content)
         break;
 
     default:
-        ret = nsvm_wrmsr(v, msr, msr_content);
-        if ( ret < 0 )
-            goto gpf;
-        else if ( ret )
-            break;
-
         /* Match up with the RDMSR side; ultimately this should go away. */
         if ( rdmsr_safe(msr, msr_content) == 0 )
             break;
@@ -2115,7 +2131,7 @@ static int svm_msr_write_intercept(unsigned int msr, uint64_t msr_content)
         goto gpf;
     }
 
-    return result;
+    return X86EMUL_OKAY;
 
  gpf:
     return X86EMUL_EXCEPTION;
diff --git a/xen/include/asm-x86/hvm/svm/nestedsvm.h b/xen/include/asm-x86/hvm/svm/nestedsvm.h
index 31fb4bfeb4..0873698457 100644
--- a/xen/include/asm-x86/hvm/svm/nestedsvm.h
+++ b/xen/include/asm-x86/hvm/svm/nestedsvm.h
@@ -118,10 +118,6 @@ bool_t nsvm_vmcb_guest_intercepts_event(
 bool_t nsvm_vmcb_hap_enabled(struct vcpu *v);
 enum hvm_intblk nsvm_intr_blocked(struct vcpu *v);
 
-/* MSRs */
-int nsvm_rdmsr(struct vcpu *v, unsigned int msr, uint64_t *msr_content);
-int nsvm_wrmsr(struct vcpu *v, unsigned int msr, uint64_t msr_content);
-
 /* Interrupts, vGIF */
 void svm_vmexit_do_clgi(struct cpu_user_regs *regs, struct vcpu *v);
 void svm_vmexit_do_stgi(struct cpu_user_regs *regs, struct vcpu *v);
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Tue Jul 21 17:25:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jul 2020 17:25:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxw0p-0003U8-2Y; Tue, 21 Jul 2020 17:25:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=FhFK=BA=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1jxw0o-0003U3-78
 for xen-devel@lists.xenproject.org; Tue, 21 Jul 2020 17:25:06 +0000
X-Inumbo-ID: 1d4f7594-cb77-11ea-8584-bc764e2007e4
Received: from mail-lj1-x22e.google.com (unknown [2a00:1450:4864:20::22e])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1d4f7594-cb77-11ea-8584-bc764e2007e4;
 Tue, 21 Jul 2020 17:25:04 +0000 (UTC)
Received: by mail-lj1-x22e.google.com with SMTP id d17so24954976ljl.3
 for <xen-devel@lists.xenproject.org>; Tue, 21 Jul 2020 10:25:04 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=subject:to:cc:references:from:message-id:date:user-agent
 :mime-version:in-reply-to:content-transfer-encoding:content-language;
 bh=cwFmheLwP7XUTScLQASuc1Rn/JAgGwsxwDrCkfQpiQY=;
 b=q1WxdZXZ8boc4Md6AiV0GyO/7pD4k+GptMLYvxZiDF1RfBmSnW2N9yQM/rJk0zc7r9
 7EXWd1808ixzLFGFEGpWso36ObdtzRs9aOh6PjRRiy2GlXbb5hWnXWmSxcnNf1T42P/Q
 vGwvZitp0CltohLEWrDwHl7A8ft4jqiquHlvLx0HqrEw28a8/bw41IEBoRAvMhNmsF6n
 Rd4X+67YBC4yJ3THD6IXEijlwAx3++5hdqf0FRQKtudSjpTKCrC2N13tPE83Kaj4Tm9N
 6kUxkpii0rDwjb9y/+6FbzmMtRWvwz3WM2RGA0R+uAfrX479QiwuT7pUbXDfO8Z5WCSA
 ociw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:subject:to:cc:references:from:message-id:date
 :user-agent:mime-version:in-reply-to:content-transfer-encoding
 :content-language;
 bh=cwFmheLwP7XUTScLQASuc1Rn/JAgGwsxwDrCkfQpiQY=;
 b=KbUzlXqjDHfrTNv6HgB+dv/ST4ezVnvFsWncbgY3uE5ZXeRkwYueCenLa0mlZ1hb2K
 w1TLPkj9uKPQF1dXgXV7V5ociozaIiH47eUREJvfGIZ5P36s6NXKkj5gXb9fILa2q8pK
 bVntOIFr/Y1/Rea9+Gu/opAMUeC0nCxhvLBApNPgL5bH+kWShgXaMx9iW69Mi3M34f5m
 2vlKexd0gAlTiUGaD5cBE3DOGnLcygJKkftRqEvOe/1lzh3hXK4HFB1JeTfCBCnUZddV
 l8sfE6aSE2w6JwQ107EPCcd+sv8vxTgQ3DgAGO9IWuXFl6ajh7sF+3CkBpjyBPbh/tx0
 9dXg==
X-Gm-Message-State: AOAM533dg7ek58mNKs2A2XPSbbddt6jbOVF0+/oN2qUKzMHNJO+FyrYw
 NcbknMrA/JVTa0hhltNZK3s=
X-Google-Smtp-Source: ABdhPJzeF8Eow23dHn4U1iwWcosv4ox4PvH0/Ozno+/UlHy6AVKK2KNswyVaxQIfLreadNTaL6oVHw==
X-Received: by 2002:a2e:98c8:: with SMTP id s8mr3127222ljj.368.1595352303624; 
 Tue, 21 Jul 2020 10:25:03 -0700 (PDT)
Received: from [192.168.1.2] ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id d23sm5598940lfm.85.2020.07.21.10.25.02
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 21 Jul 2020 10:25:03 -0700 (PDT)
Subject: Re: Virtio in Xen on Arm (based on IOREQ concept)
To: Julien Grall <julien@xen.org>
References: <CAPD2p-nthLq5NaU32u8pVaa-ub=a9-LOPenupntTYdS-cu31jQ@mail.gmail.com>
 <20200717150039.GV7191@Air-de-Roger>
 <8f4e0c0d-b3d4-9dd3-ce20-639539321968@gmail.com>
 <3dcab37d-0d60-f1cc-1d59-b5497f0fa95f@xen.org>
From: Oleksandr <olekstysh@gmail.com>
Message-ID: <4a66d39f-dece-672a-5ad3-7801f2583d07@gmail.com>
Date: Tue, 21 Jul 2020 20:24:57 +0300
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <3dcab37d-0d60-f1cc-1d59-b5497f0fa95f@xen.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit
Content-Language: en-US
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Oleksandr Andrushchenko <andr2000@gmail.com>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>,
 xen-devel <xen-devel@lists.xenproject.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Artem Mygaiev <joculator@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>


On 21.07.20 17:27, Julien Grall wrote:
> Hi,

Hello Julien


>
> On 17/07/2020 19:34, Oleksandr wrote:
>>
>> On 17.07.20 18:00, Roger Pau Monné wrote:
>>>> requires
>>>> some implementation to forward guest MMIO access to a device model. 
>>>> And as
>>>> it
>>>> turned out the Xen on x86 contains most of the pieces to be able to 
>>>> use that
>>>> transport (via existing IOREQ concept). Julien has already done a 
>>>> big amount
>>>> of work in his PoC (xen/arm: Add support for Guest IO forwarding to a
>>>> device emulator).
>>>> Using that code as a base we managed to create a completely 
>>>> functional PoC
>>>> with DomU
>>>> running on virtio block device instead of a traditional Xen PV driver
>>>> without
>>>> modifications to DomU Linux. Our work is mostly about rebasing 
>>>> Julien's
>>>> code on the actual
>>>> codebase (Xen 4.14-rc4), various tweeks to be able to run emulator
>>>> (virtio-disk backend)
>>>> in other than Dom0 domain (in our system we have thin Dom0 and keep 
>>>> all
>>>> backends
>>>> in driver domain),
>>> How do you handle this use-case? Are you using grants in the VirtIO
>>> ring, or rather allowing the driver domain to map all the guest memory
>>> and then placing gfn on the ring like it's commonly done with VirtIO?
>>
>> Second option. Xen grants are not used at all as well as event 
>> channel and Xenbus. That allows us to have guest
>>
>> *unmodified* which one of the main goals. Yes, this may sound (or 
>> even sounds) non-secure, but backend which runs in driver domain is 
>> allowed to map all guest memory.
>>
>> In current backend implementation a part of guest memory is mapped 
>> just to process guest request then unmapped back, there is no 
>> mappings in advance. The xenforeignmemory_map
>>
>> call is used for that purpose. For experiment I tried to map all 
>> guest memory in advance and just calculated pointer at runtime. Of 
>> course that logic performed better.
>
> That works well for a PoC, however I am not sure you can rely on it 
> long term as a guest is free to modify its memory layout. For 
> instance, Linux may balloon in/out memory. You probably want to 
> consider something similar to mapcache in QEMU.
Yes, that was considered and even tried.
The current backend implementation maps/unmaps only the needed part of 
guest memory for each request, with a simple mapcache on top. I borrowed 
the x86 logic on Arm to invalidate the mapcache on 
XENMEM_decrease_reservation, so if the mapcache is in use it will be 
cleared. Hopefully a DomU without backends running is not going to 
balloon memory in/out often.
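A minimal sketch of such a mapcache (hypothetical code, not the actual virtio-disk backend; map_guest_frame() stands in for xenforeignmemory_map()): lookups reuse an existing per-gfn mapping, and the whole cache is flushed when the guest changes its layout, e.g. on XENMEM_decrease_reservation.

```c
#include <string.h>

/* Direct-mapped gfn -> mapping cache; gfn 0 doubles as "empty slot"
 * in this toy version. */
#define MC_SLOTS 64

struct mc_entry {
    unsigned long gfn;   /* guest frame number cached in this slot */
    void *va;            /* backend-local mapping of that frame */
};

static struct mc_entry mapcache[MC_SLOTS];
static unsigned long nr_map_calls;   /* counts "real" map operations */

/* Stand-in for xenforeignmemory_map(); fabricates a distinct token
 * per gfn so cache hits can be observed. */
static void *map_guest_frame(unsigned long gfn)
{
    nr_map_calls++;
    return (void *)(gfn << 12);
}

static void *mapcache_lookup(unsigned long gfn)
{
    struct mc_entry *e = &mapcache[gfn % MC_SLOTS];

    if (e->gfn != gfn) {             /* miss (or slot reused): remap */
        e->gfn = gfn;
        e->va = map_guest_frame(gfn);
    }
    return e->va;                    /* hit: no map/unmap round trip */
}

/* Called when the guest balloons memory out: drop all cached mappings
 * so stale ones cannot be dereferenced. */
static void mapcache_invalidate(void)
{
    memset(mapcache, 0, sizeof(mapcache));
}
```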


>
> On a similar topic, I am a bit surprised you didn't encounter memory 
> exhaustion when trying to use virtio. Because on how Linux currently 
> works (see XSA-300), the backend domain as to have a least as much RAM 
> as the domain it serves. For instance, you have serve two domains with 
> 1GB of RAM each, then your backend would need at least 2GB + some for 
> its own purpose.
I understand these bits. You have already warned me about that. When 
playing with mapping the whole guest memory in advance, I gave the DomU 
only 512MB, which was enough to avoid memory exhaustion in my 
environment. Then I switched to the "map/unmap at runtime" model.


>>>>
>>>> *A few word about the Xen code:*
>>>> You can find the whole Xen series at [5]. The patches are in RFC state
>>>> because
>>>> some actions in the series should be reconsidered and implemented 
>>>> properly.
>>>> Before submitting the final code for the review the first IOREQ patch
>>>> (which is quite
>>>> big) will be split into x86, Arm and common parts. Please note, x86 
>>>> part
>>>> wasn’t
>>>> even build-tested so far and could be broken with that series. Also 
>>>> the
>>>> series probably
>>>> wants splitting into adding IOREQ on Arm (should be focused first) and
>>>> tools support
>>>> for the virtio-disk (which is going to be the first Virtio driver)
>>>> configuration before going
>>>> into the mailing list.
>>> Sending first a patch series to enable IOREQs on Arm seems perfectly
>>> fine, and it doesn't have to come with the VirtIO backend. In fact I
>>> would recommend that you send that ASAP, so that you don't spend time
>>> working on the backend that would likely need to be modified
>>> according to the review received on the IOREQ series.
>>
>> Completely agree with you, I will send it after splitting IOREQ patch 
>> and performing some cleanup.
>>
>> However, it is going to take some time to make it properly taking 
>> into the account
>>
>> that personally I won't be able to test on x86.
> I think other member of the community should be able to help here. 
> However, nowadays testing Xen on x86 is pretty easy with QEMU :).

That's good.


-- 
Regards,

Oleksandr Tyshchenko



From xen-devel-bounces@lists.xenproject.org Tue Jul 21 18:16:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jul 2020 18:16:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxwoZ-0007qV-Ap; Tue, 21 Jul 2020 18:16:31 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=FhFK=BA=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1jxwoX-0007qQ-VR
 for xen-devel@lists.xenproject.org; Tue, 21 Jul 2020 18:16:30 +0000
X-Inumbo-ID: 4bafde9b-cb7e-11ea-859b-bc764e2007e4
Received: from mail-lj1-x234.google.com (unknown [2a00:1450:4864:20::234])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4bafde9b-cb7e-11ea-859b-bc764e2007e4;
 Tue, 21 Jul 2020 18:16:29 +0000 (UTC)
Received: by mail-lj1-x234.google.com with SMTP id h22so25091909lji.9
 for <xen-devel@lists.xenproject.org>; Tue, 21 Jul 2020 11:16:29 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=subject:to:cc:references:from:message-id:date:user-agent
 :mime-version:in-reply-to:content-transfer-encoding:content-language;
 bh=ghgcnfJ7kfPCIJeWCLlLLzJzBObkZ6CFvEmgHiR5fpU=;
 b=FHoj2GBFieqignXTbxCB+0wkPeEL+vka6roLWyC0UrwI8So9NsftB0Se9hWh45v88a
 nOKYri7Zbv9iBdKl2tXG6Upyl411svPay1tgDHPw7ZpcQYerdF13++lqVgn+0ccqrX4h
 6aScQGJeeozOJsX7eU1XgfTQ+SLYHWwLYGYxcB8VIfTj4wgxwf+h+4/Yb/oa95lrm8ih
 vYqcTyLk/sLtfmJiSeguhame9/gwDGNhP0PpT89D53pacjbWhQAdKxSf9MacNnlX0X7K
 q3C7z/6c7050lwKeVNXvOIAhIiAI6P+ERmNsbQTMOr7P7xbTsTRrwyvs218ptHYpT41i
 ucGQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:subject:to:cc:references:from:message-id:date
 :user-agent:mime-version:in-reply-to:content-transfer-encoding
 :content-language;
 bh=ghgcnfJ7kfPCIJeWCLlLLzJzBObkZ6CFvEmgHiR5fpU=;
 b=IRcprdl5a0yfWOD8oDqOpd6J4JjMmXQDj+jn3aifkmBjAwknovmc/AdqEC/lbuy2Z8
 hY3jraCFFcr38fu+BkGO7fhr0wPja5kPWkqUBISqUw2hwDmje9eRRQsQguXMg1TWuGN0
 R7smI9kIurlJnLD1BJSHNRXiiXReN5QSr6OJKjG3+usF+tb2DPuiZViUkphnCMTgjAbj
 pljpPJJdovxIiYOkMoI+pXo3KJZdd2AbcxsbrI21TOfkNw8ncgFpUoijzjDGJ4Y6uM1+
 ptuj5czzj/e8l/554kQTKldeEpxbQIaQ6Ki61Q3vI93qN+9suMNIeYcFOJxS67T7R1GG
 1Xqg==
X-Gm-Message-State: AOAM533d/dXNr0HI+ooIVewXgpCtd81wJFdLmsu9fUw4PkNk/8Gq9mC2
 DjnXblSYPeDpjjOZRH639eU=
X-Google-Smtp-Source: ABdhPJz36GWHMGv84VN14p7ikCiTsMkejxPKOC8xAHrfxJX8r9Dbh3AIonidQGZz3pK6x6YM78OICA==
X-Received: by 2002:a2e:50b:: with SMTP id 11mr13921574ljf.458.1595355387971; 
 Tue, 21 Jul 2020 11:16:27 -0700 (PDT)
Received: from [192.168.1.2] ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id r14sm2074466lfe.29.2020.07.21.11.16.27
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 21 Jul 2020 11:16:27 -0700 (PDT)
Subject: Re: Virtio in Xen on Arm (based on IOREQ concept)
To: Julien Grall <julien@xen.org>
References: <CAPD2p-nthLq5NaU32u8pVaa-ub=a9-LOPenupntTYdS-cu31jQ@mail.gmail.com>
 <20200717150039.GV7191@Air-de-Roger>
 <8f4e0c0d-b3d4-9dd3-ce20-639539321968@gmail.com>
 <3dcab37d-0d60-f1cc-1d59-b5497f0fa95f@xen.org>
From: Oleksandr <olekstysh@gmail.com>
Message-ID: <b6cf0931-c31e-b03b-3995-688536de391a@gmail.com>
Date: Tue, 21 Jul 2020 21:16:21 +0300
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <3dcab37d-0d60-f1cc-1d59-b5497f0fa95f@xen.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 7bit
Content-Language: en-US
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Oleksandr Andrushchenko <andr2000@gmail.com>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>,
 xen-devel <xen-devel@lists.xenproject.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Artem Mygaiev <joculator@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>


On 21.07.20 17:27, Julien Grall wrote:
> Hi,

Hello


>
> On a similar topic, I am a bit surprised you didn't encounter memory 
> exhaustion when trying to use virtio. Because on how Linux currently 
> works (see XSA-300), the backend domain as to have a least as much RAM 
> as the domain it serves. For instance, you have serve two domains with 
> 1GB of RAM each, then your backend would need at least 2GB + some for 
> its own purpose.
>
> This probably wants to be resolved by allowing foreign mapping to be 
> "paging" out as you would for memory assigned to a userspace. 

I didn't notice the last sentence initially. Could you please explain 
your idea in more detail if possible? Does it mean that, if implemented, 
it would be feasible to map all guest memory regardless of how much 
memory the guest has? Avoiding mapping/unmapping memory for each guest 
request would give us better performance (of course, taking care of the 
fact that the guest memory layout could change)... Actually, what I 
understand from looking at kvmtool is that it does not map/unmap memory 
dynamically, it just calculates virtual addresses according to the gfn 
provided.
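The kvmtool-style address calculation mentioned above can be sketched as follows (assumed layout, not kvmtool's actual code): with all guest RAM foreign-mapped once at a single base address, turning a gfn into a backend virtual address is pure arithmetic, with a bounds check against the premapped region.

```c
#include <stdint.h>
#include <stddef.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE  (1UL << PAGE_SHIFT)

static uint8_t *ram_base;        /* one big foreign mapping of guest RAM */
static uint64_t ram_start_gfn;   /* first gfn of guest RAM */
static uint64_t ram_nr_frames;   /* size of the mapping, in frames */

/* Translate (gfn, offset) into a backend virtual address, or NULL if
 * the access falls outside the premapped region. */
static void *gfn_to_va(uint64_t gfn, uint64_t offset)
{
    if (gfn < ram_start_gfn || gfn >= ram_start_gfn + ram_nr_frames ||
        offset >= PAGE_SIZE)
        return NULL;

    return ram_base + ((gfn - ram_start_gfn) << PAGE_SHIFT) + offset;
}
```

Note this only works while the guest layout is static; ballooning would require refreshing ram_base/ram_nr_frames or falling back to a mapcache.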


-- 
Regards,

Oleksandr Tyshchenko



From xen-devel-bounces@lists.xenproject.org Tue Jul 21 18:42:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jul 2020 18:42:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxxDb-0001yT-0S; Tue, 21 Jul 2020 18:42:23 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=8efX=BA=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jxxDZ-0001xV-Lz
 for xen-devel@lists.xenproject.org; Tue, 21 Jul 2020 18:42:21 +0000
X-Inumbo-ID: e7aef92c-cb81-11ea-85a2-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e7aef92c-cb81-11ea-85a2-bc764e2007e4;
 Tue, 21 Jul 2020 18:42:19 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jxxDW-0001u7-8s; Tue, 21 Jul 2020 19:42:18 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH 01/14] sg-report-flight: Add a comment re same-flight
 search narrowing
Date: Tue, 21 Jul 2020 19:41:52 +0100
Message-Id: <20200721184205.15232-2-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200721184205.15232-1-ian.jackson@eu.citrix.com>
References: <20200721184205.15232-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

In afe851ca1771e5da6395b596afa69e509dbbc278
  sg-report-flight: When justifying, disregard out-of-flight build jobs
we narrowed sg-report-flight's search algorithm.

An extensive justification is in the commit message.  I think much of
this information belongs in-tree, so c&p it (with slight edits) here.

No code change.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 sg-report-flight | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

diff --git a/sg-report-flight b/sg-report-flight
index 6c481f6f..927ea37d 100755
--- a/sg-report-flight
+++ b/sg-report-flight
@@ -242,9 +242,27 @@ END
 	# jobs.  We start with all jobs in $tflight, and for each job
 	# we also process any other jobs it refers to in *buildjob runvars.
 	#
+	# The real thing we want to check is that the build jobs *in
+	# the same flight as the justifying job* used the right revisions.
+	# Build jobs from other flights were either (i) build jobs for
+	# components not being targeted for testing by this branch, but
+	# which were necessary for the justifying job and for which we
+	# decided to reuse another build job (in which case we don't
+	# really care what versions they used, even if underlying it
+	# all there might be a different version of a tree we are
+	# actually interested in), or (ii) the kind of continuous update
+	# thing seen with freebsdbuildjob.
+	#
+	# (This is rather different to cs-bisection-step, which is
+	# less focused on changes in a particular set of trees.)
+	#
+	# So we limit the scope of our recursive descent into build
+	# jobs, to jobs in the same flight.
+	#
 	# We don't actually use a recursive algorithm because that
 	# would involve recursive use of the same sql query object;
 	# hence the @binfos_todo queue.
+
 	my @binfos_todo;
 	my $binfos_queue = sub {
 	    my ($inflight,$q,$why) = @_;
-- 
2.20.1
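The queue-based traversal the comment describes (one loop draining a to-do list instead of recursion, so only a single query object is ever live, as with @binfos_todo) can be sketched in C, with a toy job-reference graph standing in for the *buildjob runvars:

```c
/* Hypothetical sketch: each job may reference up to two other jobs
 * (-1 = no reference).  Instead of recursing into references, newly
 * discovered jobs are appended to an explicit to-do queue and one
 * loop drains it; a 'seen' set prevents reprocessing. */

#define MAX_JOBS 16

static int refs[MAX_JOBS][2];   /* jobs referenced by each job */
static int seen[MAX_JOBS];

static int process_all(int start)
{
    int todo[2 * MAX_JOBS + 1];
    int head = 0, tail = 0, processed = 0;

    todo[tail++] = start;
    while (head < tail) {
        int job = todo[head++];

        if (seen[job])
            continue;            /* already handled via another path */
        seen[job] = 1;
        processed++;             /* "run the query" for this job */

        for (int i = 0; i < 2; i++)
            if (refs[job][i] >= 0 && !seen[refs[job][i]])
                todo[tail++] = refs[job][i];
    }
    return processed;
}
```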



From xen-devel-bounces@lists.xenproject.org Tue Jul 21 18:42:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jul 2020 18:42:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxxDW-0001yG-Lk; Tue, 21 Jul 2020 18:42:18 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=8efX=BA=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jxxDU-0001xV-SX
 for xen-devel@lists.xenproject.org; Tue, 21 Jul 2020 18:42:16 +0000
X-Inumbo-ID: e58f2176-cb81-11ea-85a2-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e58f2176-cb81-11ea-85a2-bc764e2007e4;
 Tue, 21 Jul 2020 18:42:15 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jxxDS-0001u7-MV; Tue, 21 Jul 2020 19:42:14 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH 00/14] Flight report performance improvements
Date: Tue, 21 Jul 2020 19:41:51 +0100
Message-Id: <20200721184205.15232-1-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <George.Dunlap@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

osstest was taking far too long calculating what test failures were
regressions, and generating the email and web reports.  The slow part
was analysing the test history, mostly because it ended up doing a lot
of dumb scans of large tables.

In this series I fix this problem for sg-report-flight: I add some
indexes, and reorganise some of the queries so that they can make good
use of them.

I suspect there may still be problems with sg-report-host-history and
cs-bisection-step.  I haven't investigated those yet.

George: you volunteered to review my SQL.  I hope the information in
the commit messages is useful for that.  Thanks!

Ian Jackson (14):
  sg-report-flight: Add a comment re same-flight search narrowing
  sg-report-flight: Sort failures by job name as last resort
  schema: Provide indices for sg-report-flight
  sg-report-flight: Ask the db for flights of interest
  sg-report-flight: Use WITH to use best index use for $flightsq
  sg-report-flight: Use WITH clause to use index for $anypassq
  sg-report-flight: Use the job row from the initial query
  Executive: Use index for report__find_test
  duration_estimator: Ignore truncated jobs unless we know the step
  duration_estimator: Introduce some _qtxt variables
  duration_estimator: Explicitly provide null in general host q
  duration_estimator: Return job column in first query
  duration_estimator: Move $uptincl_testid to separate @x_params
  duration_estimator: Move duration query loop into database

 Osstest/Executive.pm              |  70 ++++++++++------
 schema/runvars-built-index.sql    |   7 ++
 schema/runvars-revision-index.sql |   7 ++
 schema/steps-job-index.sql        |   7 ++
 sg-report-flight                  | 127 +++++++++++++++++++++++++-----
 5 files changed, 174 insertions(+), 44 deletions(-)
 create mode 100644 schema/runvars-built-index.sql
 create mode 100644 schema/runvars-revision-index.sql
 create mode 100644 schema/steps-job-index.sql

-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue Jul 21 18:42:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jul 2020 18:42:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxxDg-0001z7-JS; Tue, 21 Jul 2020 18:42:28 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=8efX=BA=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jxxDe-0001xV-ME
 for xen-devel@lists.xenproject.org; Tue, 21 Jul 2020 18:42:26 +0000
X-Inumbo-ID: e8816efc-cb81-11ea-85a2-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e8816efc-cb81-11ea-85a2-bc764e2007e4;
 Tue, 21 Jul 2020 18:42:20 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jxxDX-0001u7-IL; Tue, 21 Jul 2020 19:42:19 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH 02/14] sg-report-flight: Sort failures by job name as
 last resort
Date: Tue, 21 Jul 2020 19:41:53 +0100
Message-Id: <20200721184205.15232-3-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200721184205.15232-1-ian.jackson@eu.citrix.com>
References: <20200721184205.15232-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

This removes some nondeterminism from the output.
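The tie-breaking idea can be sketched in Python (a hypothetical illustration of the sort's structure; the real code is Perl in sg-report-flight and uses different keys):

```python
# Hypothetical stand-in for the failure records sorted by sg-report-flight.
failures = [
    {"finished": 100, "step": 3, "job": "test-amd64-amd64-xl"},
    {"finished": 100, "step": 3, "job": "test-armhf-armhf-xl"},
    {"finished": 90,  "step": 1, "job": "build-amd64"},
]

# Compare by finish time, then step number, then - as a last resort -
# job name, so records equal on the earlier keys still get a stable order.
ordered = sorted(failures,
                 key=lambda f: (f["finished"], f["step"], f["job"]))
print([f["job"] for f in ordered])
# ['build-amd64', 'test-amd64-amd64-xl', 'test-armhf-armhf-xl']
```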

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 sg-report-flight | 1 +
 1 file changed, 1 insertion(+)

diff --git a/sg-report-flight b/sg-report-flight
index 927ea37d..70def778 100755
--- a/sg-report-flight
+++ b/sg-report-flight
@@ -813,6 +813,7 @@ END
 	# they finished in the same second, we pick the lower-numbered
 	# step, which is the earlier one (if they are sequential at
 	# all).
+	or $a->{Job} cmp $b->{Job}
     }
         @failures;
 
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue Jul 21 18:42:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jul 2020 18:42:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxxDk-00020c-VM; Tue, 21 Jul 2020 18:42:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=8efX=BA=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jxxDj-0001xV-MS
 for xen-devel@lists.xenproject.org; Tue, 21 Jul 2020 18:42:31 +0000
X-Inumbo-ID: e7f1f129-cb81-11ea-85a2-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e7f1f129-cb81-11ea-85a2-bc764e2007e4;
 Tue, 21 Jul 2020 18:42:20 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jxxDX-0001u7-TP; Tue, 21 Jul 2020 19:42:19 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH 03/14] schema: Provide indices for sg-report-flight
Date: Tue, 21 Jul 2020 19:41:54 +0100
Message-Id: <20200721184205.15232-4-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200721184205.15232-1-ian.jackson@eu.citrix.com>
References: <20200721184205.15232-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <George.Dunlap@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

These indexes allow very fast lookup of "relevant" flights, e.g. when
trying to justify failures.

In my ad-hoc test case, these indices (along with the subsequent
changes to sg-report-flight and Executive.pm) reduce the runtime of
sg-report-flight from 2-3ks (unacceptably long!) to as little as
5-7 seconds - a speedup of about 500x.

(Getting the database snapshot may take a while first, but deploying
this code should help with that too by reducing long-running
transactions.  Quoted perf timings are from after snapshot acquisition.)
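The effect of a partial index like the ones added here can be sketched with Python's sqlite3 module (SQLite also supports `CREATE INDEX ... WHERE`; this toy table is an illustration, not the production PostgreSQL schema):

```python
import sqlite3

# Toy stand-in for the runvars table (hypothetical, not the osstest schema).
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE runvars (flight INTEGER, job TEXT, name TEXT, val TEXT)")

# A partial index in the style of runvars_built_revision_idx: it only
# covers rows whose name matches the predicate, so it stays small and
# answers "which flights built this revision?" directly.
conn.execute("""
    CREATE INDEX runvars_built_revision_idx
        ON runvars (val)
     WHERE name LIKE 'built_revision_%'
""")
conn.executemany("INSERT INTO runvars VALUES (?, ?, ?, ?)", [
    (1, "build-amd64", "built_revision_xen", "abc123"),
    (1, "test-amd64",  "revision_xen",       "abc123"),
    (2, "build-amd64", "built_revision_xen", "def456"),
])

# Repeating the index predicate in the WHERE clause lets the planner
# prove that the partial index is applicable.
rows = conn.execute("""
    SELECT flight FROM runvars
     WHERE name LIKE 'built_revision_%'
       AND name = 'built_revision_xen' AND val = 'abc123'
""").fetchall()
print(rows)  # [(1,)]
```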

Without these new indexes there may be a performance change from the
query changes.  I haven't benchmarked this, so I am setting the schema
updates to be Preparatory/Needed (i.e., "Schema first" as
schema/README.updates has it), to say that the indexes should be
created before the new code is deployed.

Testing: I have tested this series by creating experimental indices
"trial_..." in the actual production instance.  (Transactional DDL was
very helpful with this.)  I have verified with \d that the schema
update instructions in this commit generate indexes equivalent to the
trial indices.

Deployment: After these schema updates are applied, the trial indices
are redundant duplicates and should be deleted.

CC: George Dunlap <George.Dunlap@citrix.com>
Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 schema/runvars-built-index.sql    | 7 +++++++
 schema/runvars-revision-index.sql | 7 +++++++
 schema/steps-job-index.sql        | 7 +++++++
 3 files changed, 21 insertions(+)
 create mode 100644 schema/runvars-built-index.sql
 create mode 100644 schema/runvars-revision-index.sql
 create mode 100644 schema/steps-job-index.sql

diff --git a/schema/runvars-built-index.sql b/schema/runvars-built-index.sql
new file mode 100644
index 00000000..94f85ed8
--- /dev/null
+++ b/schema/runvars-built-index.sql
@@ -0,0 +1,7 @@
+-- ##OSSTEST## 007 Preparatory
+--
+-- This index helps sg-report-flight find relevant flights.
+
+CREATE INDEX runvars_built_revision_idx
+    ON runvars (val)
+ WHERE name LIKE 'built_revision_%';
diff --git a/schema/runvars-revision-index.sql b/schema/runvars-revision-index.sql
new file mode 100644
index 00000000..a2e3be13
--- /dev/null
+++ b/schema/runvars-revision-index.sql
@@ -0,0 +1,7 @@
+-- ##OSSTEST## 008 Preparatory
+--
+-- This index helps Executive::report__find_test find relevant flights.
+
+CREATE INDEX runvars_revision_idx
+    ON runvars (val)
+ WHERE name LIKE 'revision_%';
diff --git a/schema/steps-job-index.sql b/schema/steps-job-index.sql
new file mode 100644
index 00000000..07dc5a30
--- /dev/null
+++ b/schema/steps-job-index.sql
@@ -0,0 +1,7 @@
+-- ##OSSTEST## 006 Preparatory
+--
+-- This index helps sg-report-flight find if a test ever passed.
+
+CREATE INDEX steps_job_testid_status_idx
+    ON steps (job, testid, status);
+
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue Jul 21 18:42:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jul 2020 18:42:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxxDp-00022F-D0; Tue, 21 Jul 2020 18:42:37 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=8efX=BA=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jxxDo-0001xV-MZ
 for xen-devel@lists.xenproject.org; Tue, 21 Jul 2020 18:42:36 +0000
X-Inumbo-ID: e8d90018-cb81-11ea-85a2-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e8d90018-cb81-11ea-85a2-bc764e2007e4;
 Tue, 21 Jul 2020 18:42:21 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jxxDY-0001u7-6O; Tue, 21 Jul 2020 19:42:20 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH 04/14] sg-report-flight: Ask the db for flights of
 interest
Date: Tue, 21 Jul 2020 19:41:55 +0100
Message-Id: <20200721184205.15232-5-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200721184205.15232-1-ian.jackson@eu.citrix.com>
References: <20200721184205.15232-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <George.Dunlap@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Specifically, we narrow the initial query to flights which have at
least some job with the built_revision_foo we are looking for.

This condition is strictly broader than that implemented inside the
flight search loop, so there is no functional change.

Perf: runtime of my test case now ~300s-500s.

Example query before (from the Perl DBI trace):

      SELECT * FROM (
        SELECT flight, blessing FROM flights
            WHERE (branch='xen-unstable')
              AND                   EXISTS (SELECT 1
                            FROM jobs
                           WHERE jobs.flight = flights.flight
                             AND jobs.job = ?)

              AND ( (TRUE AND flight <= 151903) AND (blessing='real') )
            ORDER BY flight DESC
            LIMIT 1000
      ) AS sub
      ORDER BY blessing ASC, flight DESC

With these bind variables:

    "test-armhf-armhf-libvirt"

After:

      SELECT * FROM (
        SELECT DISTINCT flight, blessing
             FROM flights
             JOIN runvars r1 USING (flight)

            WHERE (branch='xen-unstable')
              AND ( (TRUE AND flight <= 151903) AND (blessing='real') )
                  AND EXISTS (SELECT 1
                            FROM jobs
                           WHERE jobs.flight = flights.flight
                             AND jobs.job = ?)

              AND r1.name LIKE 'built_revision_%'
              AND r1.name = ?
              AND r1.val= ?

            ORDER BY flight DESC
            LIMIT 1000
      ) AS sub
      ORDER BY blessing ASC, flight DESC

With these bind variables:

      "test-armhf-armhf-libvirt"
      'built_revision_xen'
      '165f3afbfc3db70fcfdccad07085cde0a03c858b'

Diff to the query:

      SELECT * FROM (
-        SELECT flight, blessing FROM flights
+        SELECT DISTINCT flight, blessing
+             FROM flights
+             JOIN runvars r1 USING (flight)
+
             WHERE (branch='xen-unstable')
+              AND ( (TRUE AND flight <= 151903) AND (blessing='real') )
               AND                   EXISTS (SELECT 1
                             FROM jobs
                            WHERE jobs.flight = flights.flight
                              AND jobs.job = ?)

-              AND ( (TRUE AND flight <= 151903) AND (blessing='real') )
+              AND r1.name LIKE 'built_revision_%'
+              AND r1.name = ?
+              AND r1.val= ?
+
             ORDER BY flight DESC
             LIMIT 1000
       ) AS sub

CC: George Dunlap <George.Dunlap@citrix.com>
Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 schema/runvars-built-index.sql |  2 +-
 sg-report-flight               | 64 ++++++++++++++++++++++++++++++++--
 2 files changed, 62 insertions(+), 4 deletions(-)

diff --git a/schema/runvars-built-index.sql b/schema/runvars-built-index.sql
index 94f85ed8..8582227e 100644
--- a/schema/runvars-built-index.sql
+++ b/schema/runvars-built-index.sql
@@ -1,4 +1,4 @@
--- ##OSSTEST## 007 Preparatory
+-- ##OSSTEST## 007 Needed
 --
 -- This index helps sg-report-flight find relevant flights.
 
diff --git a/sg-report-flight b/sg-report-flight
index 70def778..61aec7a8 100755
--- a/sg-report-flight
+++ b/sg-report-flight
@@ -185,19 +185,77 @@ END
     if (defined $job) {
 	push @flightsq_params, $job;
 	$flightsq_jobcond = <<END;
-                  EXISTS (SELECT 1
+                  AND EXISTS (SELECT 1
 			    FROM jobs
 			   WHERE jobs.flight = flights.flight
 			     AND jobs.job = ?)
 END
     }
 
+    # We build a slightly complicated query to find possibly-relevant
+    # flights.  A "possibly-relevant" flight is one which the main
+    # flight categorisation algorithm below (the loop over $tflight)
+    # *might* decide is of interest.
+    #
+    # That algorithm produces a table of which revision(s) of what
+    # %specver trees the build jobs for the relevant test job used.
+    # And then it insists (amongst other things) that for each such
+    # tree the revision in question appears.
+    #
+    # It only looks at build jobs within the flight.  So any flight
+    # that the main algorithm finds interesting will have *some* job
+    # (in the same flight) mentioning that revision in a built
+    # revision runvar.  So we can search the runvars table by its
+    # index on the revision.
+    #
+    # So we look for flights that have an appropriate entry in runvars
+    # for each %specver tree.  We can do this by joining the runvar
+    # table once for each tree.
+    #
+    # The "osstest" tree is handled specially, as ever.  (We use
+    # "r$ri" there too for orthogonality of the code, not because
+    # there could be multiple specifications for the osstest revision.)
+    #
+    # This complex query is an optimisation: for correctness, we must
+    # still execute the full job-specific recursive examination, for
+    # each possibly-relevant flight - that's the $tflight loop body.
+
+    my $runvars_joins = '';
+    my $runvars_conds = '';
+    my $ri=0;
+    foreach my $tree (sort keys %{ $specver{$thisthat} }) {
+      $ri++;
+      if ($tree ne 'osstest') {
+	  $runvars_joins .= <<END;
+             JOIN runvars r$ri USING (flight)
+END
+	  $runvars_conds .= <<END;
+              AND r$ri.name LIKE 'built_revision_%' 
+              AND r$ri.name = ?
+              AND r$ri.val= ?
+END
+	  push @flightsq_params, "built_revision_$tree",
+	                     $specver{$thisthat}{$tree};
+      } else {
+	  $runvars_joins .= <<END;
+             JOIN flights_harness_touched r$ri USING (flight)
+END
+	  $runvars_conds .= <<END;
+              AND r$ri.harness= ?
+END
+	  push @flightsq_params, $specver{$thisthat}{$tree};
+      }
+    }
+
     my $flightsq= <<END;
       SELECT * FROM (
-        SELECT flight, blessing FROM flights
+        SELECT DISTINCT flight, blessing
+             FROM flights
+$runvars_joins
             WHERE $branches_cond_q
-              AND $flightsq_jobcond
               AND $blessingscond
+$flightsq_jobcond
+$runvars_conds
             ORDER BY flight DESC
             LIMIT 1000
       ) AS sub
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue Jul 21 18:42:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jul 2020 18:42:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxxDu-00024h-PK; Tue, 21 Jul 2020 18:42:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=8efX=BA=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jxxDt-0001xV-Mo
 for xen-devel@lists.xenproject.org; Tue, 21 Jul 2020 18:42:41 +0000
X-Inumbo-ID: e7f1f12a-cb81-11ea-85a2-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e7f1f12a-cb81-11ea-85a2-bc764e2007e4;
 Tue, 21 Jul 2020 18:42:21 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jxxDY-0001u7-F0; Tue, 21 Jul 2020 19:42:20 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH 05/14] sg-report-flight: Use WITH to get best index
 use for $flightsq
Date: Tue, 21 Jul 2020 19:41:56 +0100
Message-Id: <20200721184205.15232-6-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200721184205.15232-1-ian.jackson@eu.citrix.com>
References: <20200721184205.15232-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <George.Dunlap@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

While we're here, convert this EXISTS subquery to a JOIN.
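The EXISTS-to-JOIN equivalence can be checked on a toy schema (a hypothetical miniature via Python's sqlite3, not the real osstest tables):

```python
import sqlite3

# Minimal hypothetical flights/jobs tables, just to show that the EXISTS
# subquery and the JOIN formulation select the same flights.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE flights (flight INTEGER PRIMARY KEY, blessing TEXT);
    CREATE TABLE jobs (flight INTEGER, job TEXT);
    INSERT INTO flights VALUES (1, 'real'), (2, 'real'), (3, 'play');
    INSERT INTO jobs VALUES
        (1, 'test-armhf-armhf-libvirt'),
        (1, 'build-armhf'),
        (3, 'test-armhf-armhf-libvirt');
""")
job = "test-armhf-armhf-libvirt"

exists_q = conn.execute("""
    SELECT flight FROM flights
     WHERE EXISTS (SELECT 1 FROM jobs
                    WHERE jobs.flight = flights.flight AND jobs.job = ?)
     ORDER BY flight
""", (job,)).fetchall()

# DISTINCT guards against duplicate flights if a join key ever matched
# more than one jobs row.
join_q = conn.execute("""
    SELECT DISTINCT flights.flight FROM flights
      JOIN jobs ON jobs.flight = flights.flight
     WHERE jobs.job = ?
     ORDER BY flights.flight
""", (job,)).fetchall()

print(exists_q, join_q)  # both [(1,), (3,)]
```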

Perf: runtime of my test case now ~200-300s.

Example query before (from the Perl DBI trace):

      SELECT * FROM (
        SELECT DISTINCT flight, blessing
             FROM flights
             JOIN runvars r1 USING (flight)

            WHERE (branch='xen-unstable')
              AND ( (TRUE AND flight <= 151903) AND (blessing='real') )
                  AND EXISTS (SELECT 1
                            FROM jobs
                           WHERE jobs.flight = flights.flight
                             AND jobs.job = ?)

              AND r1.name LIKE 'built_revision_%'
              AND r1.name = ?
              AND r1.val= ?

            ORDER BY flight DESC
            LIMIT 1000
      ) AS sub
      ORDER BY blessing ASC, flight DESC

With bind variables:

     "test-armhf-armhf-libvirt"
     'built_revision_xen'
     '165f3afbfc3db70fcfdccad07085cde0a03c858b'

After:

      WITH sub AS (
        SELECT DISTINCT flight, blessing
             FROM flights
             JOIN runvars r1 USING (flight)

            WHERE (branch='xen-unstable')
              AND ( (TRUE AND flight <= 151903) AND (blessing='real') )
              AND r1.name LIKE 'built_revision_%'
              AND r1.name = ?
              AND r1.val= ?

            ORDER BY flight DESC
            LIMIT 1000
      )
      SELECT *
        FROM sub
        JOIN jobs USING (flight)

       WHERE (1=1)
                  AND jobs.job = ?

      ORDER BY blessing ASC, flight DESC

With bind variables:

    'built_revision_xen'
    '165f3afbfc3db70fcfdccad07085cde0a03c858b'
    "test-armhf-armhf-libvirt"

Diff to the query:

-      SELECT * FROM (
+      WITH sub AS (
         SELECT DISTINCT flight, blessing
              FROM flights
              JOIN runvars r1 USING (flight)

             WHERE (branch='xen-unstable')
               AND ( (TRUE AND flight <= 151903) AND (blessing='real') )
-                  AND EXISTS (SELECT 1
-                            FROM jobs
-                           WHERE jobs.flight = flights.flight
-                             AND jobs.job = ?)
-
               AND r1.name LIKE 'built_revision_%'
               AND r1.name = ?
               AND r1.val= ?

             ORDER BY flight DESC
             LIMIT 1000
-      ) AS sub
+      )
+      SELECT *
+        FROM sub
+        JOIN jobs USING (flight)
+
+       WHERE (1=1)
+                  AND jobs.job = ?
+
       ORDER BY blessing ASC, flight DESC

CC: George Dunlap <George.Dunlap@citrix.com>
Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 sg-report-flight | 39 ++++++++++++++++++++++++---------------
 1 file changed, 24 insertions(+), 15 deletions(-)

diff --git a/sg-report-flight b/sg-report-flight
index 61aec7a8..b5398573 100755
--- a/sg-report-flight
+++ b/sg-report-flight
@@ -180,18 +180,6 @@ END
         return undef;
     }
 
-    my @flightsq_params;
-    my $flightsq_jobcond='(1=1)';
-    if (defined $job) {
-	push @flightsq_params, $job;
-	$flightsq_jobcond = <<END;
-                  AND EXISTS (SELECT 1
-			    FROM jobs
-			   WHERE jobs.flight = flights.flight
-			     AND jobs.job = ?)
-END
-    }
-
     # We build a slightly complicated query to find possibly-relevant
     # flights.  A "possibly-relevant" flight is one which the main
     # flight categorisation algorithm below (the loop over $tflight)
@@ -220,6 +208,7 @@ END
     # still execute the full job-specific recursive examination, for
     # each possibly-relevant flight - that's the $tflight loop body.
 
+    my @flightsq_params;
     my $runvars_joins = '';
     my $runvars_conds = '';
     my $ri=0;
@@ -247,18 +236,38 @@ END
       }
     }
 
+    my $flightsq_jobs_join = '';
+    my $flightsq_jobcond = '';
+    if (defined $job) {
+	push @flightsq_params, $job;
+	$flightsq_jobs_join = <<END;
+        JOIN jobs USING (flight)
+END
+	$flightsq_jobcond = <<END;
+                  AND jobs.job = ?
+END
+    }
+
+    # In psql 9.6 this WITH clause makes postgresql do the flights
+    # query first.  This is good because our built revision index finds
+    # relevant flights very quickly.  Without this, postgresql seems
+    # to like to scan the jobs table.
     my $flightsq= <<END;
-      SELECT * FROM (
+      WITH sub AS (
         SELECT DISTINCT flight, blessing
              FROM flights
 $runvars_joins
             WHERE $branches_cond_q
               AND $blessingscond
-$flightsq_jobcond
 $runvars_conds
             ORDER BY flight DESC
             LIMIT 1000
-      ) AS sub
+      )
+      SELECT *
+        FROM sub
+$flightsq_jobs_join
+       WHERE (1=1)
+$flightsq_jobcond
       ORDER BY blessing ASC, flight DESC
 END
     $flightsq= db_prepare($flightsq);
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue Jul 21 18:42:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jul 2020 18:42:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxxDz-00026f-4m; Tue, 21 Jul 2020 18:42:47 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=8efX=BA=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jxxDy-0001xV-N5
 for xen-devel@lists.xenproject.org; Tue, 21 Jul 2020 18:42:46 +0000
X-Inumbo-ID: e926bf38-cb81-11ea-85a2-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e926bf38-cb81-11ea-85a2-bc764e2007e4;
 Tue, 21 Jul 2020 18:42:21 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jxxDY-0001u7-Nz; Tue, 21 Jul 2020 19:42:20 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH 06/14] sg-report-flight: Use WITH clause to use index
 for $anypassq
Date: Tue, 21 Jul 2020 19:41:57 +0100
Message-Id: <20200721184205.15232-7-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200721184205.15232-1-ian.jackson@eu.citrix.com>
References: <20200721184205.15232-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <George.Dunlap@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Perf: runtime of my test case now ~11s

Example query before (from the Perl DBI trace):

        SELECT * FROM flights JOIN steps USING (flight)
            WHERE (branch='xen-unstable')
              AND job=? and testid=? and status='pass'
              AND ( (TRUE AND flight <= 151903) AND (blessing='real') )
            LIMIT 1

After:

        WITH s AS
        (
        SELECT * FROM steps
         WHERE job=? and testid=? and status='pass'
        )
        SELECT * FROM flights JOIN s USING (flight)
            WHERE (branch='xen-unstable')
              AND ( (TRUE AND flight <= 151903) AND (blessing='real') )
            LIMIT 1

In both cases with bind vars:

   "test-amd64-i386-xl-pvshim"
   "guest-start"

Diff to the query:

-        SELECT * FROM flights JOIN steps USING (flight)
+        WITH s AS
+        (
+        SELECT * FROM steps
+         WHERE job=? and testid=? and status='pass'
+        )
+        SELECT * FROM flights JOIN s USING (flight)
             WHERE (branch='xen-unstable')
-              AND job=? and testid=? and status='pass'
               AND ( (TRUE AND flight <= 151903) AND (blessing='real') )
             LIMIT 1

CC: George Dunlap <George.Dunlap@citrix.com>
Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 schema/steps-job-index.sql |  2 +-
 sg-report-flight           | 14 ++++++++++++--
 2 files changed, 13 insertions(+), 3 deletions(-)

diff --git a/schema/steps-job-index.sql b/schema/steps-job-index.sql
index 07dc5a30..2c33af72 100644
--- a/schema/steps-job-index.sql
+++ b/schema/steps-job-index.sql
@@ -1,4 +1,4 @@
--- ##OSSTEST## 006 Preparatory
+-- ##OSSTEST## 006 Needed
 --
 -- This index helps sg-report-flight find if a test ever passed.
 
diff --git a/sg-report-flight b/sg-report-flight
index b5398573..b8d948da 100755
--- a/sg-report-flight
+++ b/sg-report-flight
@@ -849,10 +849,20 @@ sub justifyfailures ($;$) {
 
     my @failures= values %{ $fi->{Failures} };
 
+    # In psql 9.6 this WITH clause makes postgresql do the steps query
+    # first.  This is good because if this test never passed we can
+    # determine that really quickly using the new index, without
+    # having to scan the flights table.  (If the test passed we will
+    # probably not have to look at many flights to find one, so in
+    # that case this is not much worse.)
     my $anypassq= <<END;
-        SELECT * FROM flights JOIN steps USING (flight)
+        WITH s AS
+        (
+        SELECT * FROM steps
+         WHERE job=? and testid=? and status='pass'
+        )
+        SELECT * FROM flights JOIN s USING (flight)
             WHERE $branches_cond_q
-              AND job=? and testid=? and status='pass'
               AND $blessingscond
             LIMIT 1
 END
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue Jul 21 18:42:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jul 2020 18:42:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxxE4-0002AF-H7; Tue, 21 Jul 2020 18:42:52 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=8efX=BA=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jxxE3-0001xV-NL
 for xen-devel@lists.xenproject.org; Tue, 21 Jul 2020 18:42:51 +0000
X-Inumbo-ID: e8d90019-cb81-11ea-85a2-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e8d90019-cb81-11ea-85a2-bc764e2007e4;
 Tue, 21 Jul 2020 18:42:21 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jxxDZ-0001u7-0G; Tue, 21 Jul 2020 19:42:21 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH 07/14] sg-report-flight: Use the job row from the
 initial query
Date: Tue, 21 Jul 2020 19:41:58 +0100
Message-Id: <20200721184205.15232-8-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200721184205.15232-1-ian.jackson@eu.citrix.com>
References: <20200721184205.15232-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

$jcheckq is redundant: we looked this up right at the start.

This is not expected to speed things up very much, but it makes things
somewhat cleaner and clearer.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 sg-report-flight | 11 +++--------
 1 file changed, 3 insertions(+), 8 deletions(-)

diff --git a/sg-report-flight b/sg-report-flight
index b8d948da..bcb0d427 100755
--- a/sg-report-flight
+++ b/sg-report-flight
@@ -160,10 +160,6 @@ sub findaflight ($$$$$) {
         return undef;
     }
 
-    my $jcheckq= db_prepare(<<END);
-        SELECT status FROM jobs WHERE flight=? AND job=?
-END
-
     my $checkq= db_prepare(<<END);
         SELECT status FROM steps WHERE flight=? AND job=? AND testid=?
                                    AND status!='skip'
@@ -263,7 +259,7 @@ $runvars_conds
             ORDER BY flight DESC
             LIMIT 1000
       )
-      SELECT *
+      SELECT flight, jobs.status
         FROM sub
 $flightsq_jobs_join
        WHERE (1=1)
@@ -304,7 +300,7 @@ END
                 WHERE flight=?
 END
 
-    while (my ($tflight) = $flightsq->fetchrow_array) {
+    while (my ($tflight, $tjstatus) = $flightsq->fetchrow_array) {
 	# Recurse from the starting flight looking for relevant build
 	# jobs.  We start with all jobs in $tflight, and for each job
 	# we also process any other jobs it refers to in *buildjob runvars.
@@ -407,8 +403,7 @@ END
             $checkq->execute($tflight, $job, $testid);
             ($chkst) = $checkq->fetchrow_array();
 	    if (!defined $chkst) {
-		$jcheckq->execute($tflight, $job);
-		my ($jchkst) = $jcheckq->fetchrow_array();
+		my $jchkst = $tjstatus;
 		$chkst = $jchkst if $jchkst eq 'starved';
 	    }
         }
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue Jul 21 18:42:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jul 2020 18:42:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxxE9-0002Dp-TO; Tue, 21 Jul 2020 18:42:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=8efX=BA=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jxxE8-0001xV-NN
 for xen-devel@lists.xenproject.org; Tue, 21 Jul 2020 18:42:56 +0000
X-Inumbo-ID: e8d9001a-cb81-11ea-85a2-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e8d9001a-cb81-11ea-85a2-bc764e2007e4;
 Tue, 21 Jul 2020 18:42:22 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jxxDZ-0001u7-A6; Tue, 21 Jul 2020 19:42:21 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH 08/14] Executive: Use index for report__find_test
Date: Tue, 21 Jul 2020 19:41:59 +0100
Message-Id: <20200721184205.15232-9-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200721184205.15232-1-ian.jackson@eu.citrix.com>
References: <20200721184205.15232-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <George.Dunlap@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

After we refactor this query, we can enable use of the index.
(Both changes are in this commit together because I haven't
perf-tested the version with just the refactoring.)

(We have provided an index that can answer this question very quickly
when a version is specified.  But the query planner couldn't see
that: it plans the query without seeing the bind variables, so it
doesn't know that the value of name is going to be suitable for this
index.)

* Convert the two EXISTS subqueries into JOIN/AND with a DISTINCT
  clause naming the fields on flights, so as to replicate the previous
  result rows.  Then apply the $selection field list last, in the
  outer query.  The subquery is a convenient way to make this behave
  as before for all the values of $selection (including, notably, *).

* Add an additional AND clause on r.name.  It has no logical effect
  given the actual values of name, but it enables the query planner
  to use this index.
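The planner interaction being worked around can be sketched as
follows (the index definition here is illustrative; the real one is
in schema/runvars-revision-index.sql):

```sql
-- Assumed partial-index shape (illustrative, not the exact DDL):
CREATE INDEX runvars_revision_idx
    ON runvars (name, val)
 WHERE name LIKE 'revision_%';

-- The planner plans "name = $1" before it sees the bind value, so it
-- cannot prove the partial-index predicate and skips the index:
--   ... WHERE name = $1 AND val = $2 ...
-- Adding a literal clause that is redundant for the values we
-- actually pass makes the predicate provable at plan time:
--   ... WHERE name = $1 AND name LIKE 'revision_%' AND val = $2 ...
```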

Perf: In my test case the sg-report-flight runtime is now ~8s.  I am
reasonably confident that this will not make other use cases of this
code worse.

Perf: runtime of my test case now ~11s

Example query before (from the Perl DBI trace):

        SELECT *
         FROM flights f
        WHERE
                EXISTS (
                   SELECT 1
                    FROM runvars r
                   WHERE name=?
                     AND val=?
                     AND r.flight=f.flight
                     AND (      (CASE
       WHEN (r.job) LIKE 'build-%-prev' THEN 'xprev'
       WHEN ((r.job) LIKE 'build-%-freebsd'
             AND 'x' = 'freebsdbuildjob') THEN 'DISCARD'
       ELSE                                      ''
       END)
 = '')
                 )
          AND ( (TRUE AND flight <= 151903) AND (blessing='real') )
          AND (branch=?)
        ORDER BY flight DESC
        LIMIT 1

After:

        SELECT *
          FROM ( SELECT DISTINCT
                      flight, started, blessing, branch, intended
                 FROM flights f
                    JOIN runvars r USING (flight)
                   WHERE name=?
                     AND name LIKE 'revision_%'
                     AND val=?
                     AND r.flight=f.flight
                     AND (      (CASE
       WHEN (r.job) LIKE 'build-%-prev' THEN 'xprev'
       WHEN ((r.job) LIKE 'build-%-freebsd'
             AND 'x' = 'freebsdbuildjob') THEN 'DISCARD'
       ELSE                                      ''
       END)
 = '')
          AND ( (TRUE AND flight <= 151903) AND (blessing='real') )
          AND (branch=?)
) AS sub WHERE TRUE
        ORDER BY flight DESC
        LIMIT 1

In both cases with bind vars:

   'revision_xen'
   '165f3afbfc3db70fcfdccad07085cde0a03c858b'
   "xen-unstable"

Diff to the example query:

@@ -1,10 +1,10 @@
         SELECT *
+          FROM ( SELECT DISTINCT
+                      flight, started, blessing, branch, intended
          FROM flights f
-        WHERE
-                EXISTS (
-                   SELECT 1
-                    FROM runvars r
+                    JOIN runvars r USING (flight)
                    WHERE name=?
+                     AND name LIKE 'revision_%'
                      AND val=?
                      AND r.flight=f.flight
                      AND (      (CASE
@@ -14,8 +14,8 @@
        ELSE                                      ''
        END)
  = '')
-                 )
           AND ( (TRUE AND flight <= 151903) AND (blessing='real') )
           AND (branch=?)
+) AS sub WHERE TRUE
         ORDER BY flight DESC
         LIMIT 1

CC: George Dunlap <George.Dunlap@citrix.com>
Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 Osstest/Executive.pm              | 20 ++++++++------------
 schema/runvars-revision-index.sql |  2 +-
 2 files changed, 9 insertions(+), 13 deletions(-)

diff --git a/Osstest/Executive.pm b/Osstest/Executive.pm
index c3dc1261..c272e9f2 100644
--- a/Osstest/Executive.pm
+++ b/Osstest/Executive.pm
@@ -415,37 +415,32 @@ sub report__find_test ($$$$$$$) {
 
     my $querytext = <<END;
         SELECT $selection
-	 FROM flights f
-	WHERE
+          FROM ( SELECT DISTINCT
+                      flight, started, blessing, branch, intended
+   	         FROM flights f
 END
 
     if (defined $revision) {
 	if ($tree eq 'osstest') {
 	    $querytext .= <<END;
-		EXISTS (
-		   SELECT 1
-		    FROM flights_harness_touched t
+		    JOIN flights_harness_touched t USING (flight)
 		   WHERE t.harness=?
-		     AND t.flight=f.flight
-		 )
 END
             push @params, $revision;
 	} else {
 	    $querytext .= <<END;
-		EXISTS (
-		   SELECT 1
-		    FROM runvars r
+		    JOIN runvars r USING (flight)
 		   WHERE name=?
+                     AND name LIKE 'revision_%'
 		     AND val=?
 		     AND r.flight=f.flight
                      AND ${\ main_revision_job_cond('r.job') }
-		 )
 END
             push @params, "revision_$tree", $revision;
         }
     } else {
 	$querytext .= <<END;
-	    TRUE
+	    WHERE TRUE
 END
     }
 
@@ -460,6 +455,7 @@ END
 END
     push @params, @$branches;
 
+    $querytext .= ") AS sub WHERE TRUE\n";
     $querytext .= $extracond;
     $querytext .= $sortlimit;
 
diff --git a/schema/runvars-revision-index.sql b/schema/runvars-revision-index.sql
index a2e3be13..4c1aea66 100644
--- a/schema/runvars-revision-index.sql
+++ b/schema/runvars-revision-index.sql
@@ -1,4 +1,4 @@
--- ##OSSTEST## 008 Preparatory
+-- ##OSSTEST## 008 Needed
 --
 -- This index helps Executive::report__find_test find relevant flights.
 
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue Jul 21 18:43:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jul 2020 18:43:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxxEE-0002Ih-EB; Tue, 21 Jul 2020 18:43:02 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=8efX=BA=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jxxED-0001xV-Nj
 for xen-devel@lists.xenproject.org; Tue, 21 Jul 2020 18:43:01 +0000
X-Inumbo-ID: e9bcdb94-cb81-11ea-85a2-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e9bcdb94-cb81-11ea-85a2-bc764e2007e4;
 Tue, 21 Jul 2020 18:42:22 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jxxDZ-0001u7-MD; Tue, 21 Jul 2020 19:42:21 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH 09/14] duration_estimator: Ignore truncated jobs
 unless we know the step
Date: Tue, 21 Jul 2020 19:42:00 +0100
Message-Id: <20200721184205.15232-10-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200721184205.15232-1-ian.jackson@eu.citrix.com>
References: <20200721184205.15232-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

If we are looking for a particular step, we will ignore jobs without
that step, so any job which was truncated before reaching it is
already ignored.

Otherwise we are looking for the whole job duration and a truncated
job is not a good representative.

This is a bugfix (to duration estimation), not a performance
improvement like the preceding and subsequent changes.
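The resulting status filter, sketched in its two modes (column names
follow the query in the diff below):

```sql
-- With a target step (testid known): truncated jobs are usable,
-- because a job truncated before that step has no step row and is
-- filtered out by the step lookup anyway.
... AND (j.status='pass' OR j.status='fail' OR j.status='truncated')

-- Whole-job estimate: a truncated job understates the duration, so
-- only completed jobs are representative.
... AND (j.status='pass' OR j.status='fail')
```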

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 Osstest/Executive.pm | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/Osstest/Executive.pm b/Osstest/Executive.pm
index c272e9f2..3cd37c14 100644
--- a/Osstest/Executive.pm
+++ b/Osstest/Executive.pm
@@ -1142,6 +1142,10 @@ sub duration_estimator ($$;$$) {
     # estimated (and only jobs which contained that step will be
     # considered).
 
+    my $or_status_truncated = '';
+    if ($will_uptoincl_testid) {
+	$or_status_truncated = "OR j.status='truncated'!";
+    }
     my $recentflights_q= $dbh_tests->prepare(<<END);
             SELECT f.flight AS flight,
 		   f.started AS started,
@@ -1156,8 +1160,8 @@ sub duration_estimator ($$;$$) {
                       AND  f.branch=?
                       AND  j.job=?
                       AND  r.val=?
-		      AND  (j.status='pass' OR j.status='fail' OR
-                            j.status='truncated')
+		      AND  (j.status='pass' OR j.status='fail'
+                           $or_status_truncated)
                       AND  f.started IS NOT NULL
                       AND  f.started >= ?
                  ORDER BY f.started DESC
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue Jul 21 19:06:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jul 2020 19:06:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxxbB-0004aY-He; Tue, 21 Jul 2020 19:06:45 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=8efX=BA=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jxxbA-0004aT-LH
 for xen-devel@lists.xenproject.org; Tue, 21 Jul 2020 19:06:44 +0000
X-Inumbo-ID: 50ae2986-cb85-11ea-85a6-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 50ae2986-cb85-11ea-85a6-bc764e2007e4;
 Tue, 21 Jul 2020 19:06:43 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jxxDZ-0001u7-UT; Tue, 21 Jul 2020 19:42:22 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH 10/14] duration_estimator: Introduce some _qtxt
 variables
Date: Tue, 21 Jul 2020 19:42:01 +0100
Message-Id: <20200721184205.15232-11-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200721184205.15232-1-ian.jackson@eu.citrix.com>
References: <20200721184205.15232-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

No functional change.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 Osstest/Executive.pm | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/Osstest/Executive.pm b/Osstest/Executive.pm
index 3cd37c14..c966a1be 100644
--- a/Osstest/Executive.pm
+++ b/Osstest/Executive.pm
@@ -1146,7 +1146,7 @@ sub duration_estimator ($$;$$) {
     if ($will_uptoincl_testid) {
 	$or_status_truncated = "OR j.status='truncated'!";
     }
-    my $recentflights_q= $dbh_tests->prepare(<<END);
+    my $recentflights_qtxt= <<END;
             SELECT f.flight AS flight,
 		   f.started AS started,
                    j.status AS status
@@ -1167,7 +1167,7 @@ sub duration_estimator ($$;$$) {
                  ORDER BY f.started DESC
 END
 
-    my $duration_anyref_q= $dbh_tests->prepare(<<END);
+    my $duration_anyref_qtxt= <<END;
             SELECT f.flight AS flight,
                    max(s.finished) AS max_finished
 		      FROM steps s JOIN flights f
@@ -1212,6 +1212,8 @@ END_UPTOINCL
                 AS duration
 END_ALWAYS
 	
+    my $recentflights_q= $dbh_tests->prepare($recentflights_qtxt);
+    my $duration_anyref_q= $dbh_tests->prepare($duration_anyref_qtxt);
     my $duration_duration_q = $dbh_tests->prepare($duration_duration_qtxt);
 
     return sub {
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue Jul 21 19:06:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jul 2020 19:06:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxxbG-0004ap-Qw; Tue, 21 Jul 2020 19:06:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=8efX=BA=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jxxbF-0004aT-GF
 for xen-devel@lists.xenproject.org; Tue, 21 Jul 2020 19:06:49 +0000
X-Inumbo-ID: 51f94f28-cb85-11ea-85a6-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 51f94f28-cb85-11ea-85a6-bc764e2007e4;
 Tue, 21 Jul 2020 19:06:45 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jxxDb-0001u7-6c; Tue, 21 Jul 2020 19:42:23 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH 14/14] duration_estimator: Move duration query loop
 into database
Date: Tue, 21 Jul 2020 19:42:05 +0100
Message-Id: <20200721184205.15232-15-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200721184205.15232-1-ian.jackson@eu.citrix.com>
References: <20200721184205.15232-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <George.Dunlap@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Combine the two queries: we use the first query as a WITH clause.
This is significantly faster, perhaps because the query optimiser
does a better job, but probably just because it saves on round trips.

No functional change.

Perf: subjectively this seemed to help when the cache was cold.  Now I
have a warm cache and it doesn't seem to make much difference.

Perf: runtime of my test case now ~5-7s.

Example queries before (from the debugging output):

 Query A part I:

            SELECT f.flight AS flight,
                   j.job AS job,
                   f.started AS started,
                   j.status AS status
                     FROM flights f
                     JOIN jobs j USING (flight)
                     JOIN runvars r
                             ON  f.flight=r.flight
                            AND  r.name=?
                    WHERE  j.job=r.job
                      AND  f.blessing=?
                      AND  f.branch=?
                      AND  j.job=?
                      AND  r.val=?
                      AND  (j.status='pass' OR j.status='fail'
                           OR j.status='truncated'!)
                      AND  f.started IS NOT NULL
                      AND  f.started >= ?
                 ORDER BY f.started DESC

 With bind variables:
     "test-amd64-i386-xl-pvshim"
     "guest-start"

 Query B part I:

            SELECT f.flight AS flight,
                   s.job AS job,
                   NULL as started,
                   NULL as status,
                   max(s.finished) AS max_finished
                      FROM steps s JOIN flights f
                        ON s.flight=f.flight
                     WHERE s.job=? AND f.blessing=? AND f.branch=?
                       AND s.finished IS NOT NULL
                       AND f.started IS NOT NULL
                       AND f.started >= ?
                     GROUP BY f.flight, s.job
                     ORDER BY max_finished DESC

 With bind variables:
    "test-armhf-armhf-libvirt"
    'real'
    "xen-unstable"
    1594144469

 Query common part II:

        WITH tsteps AS
        (
            SELECT *
              FROM steps
             WHERE flight=? AND job=?
        )
        , tsteps2 AS
        (
            SELECT *
              FROM tsteps
             WHERE finished <=
                     (SELECT finished
                        FROM tsteps
                       WHERE tsteps.testid = ?)
        )
        SELECT (
            SELECT max(finished)-min(started)
              FROM tsteps2
          ) - (
            SELECT sum(finished-started)
              FROM tsteps2
             WHERE step = 'ts-hosts-allocate'
          )
                AS duration

 With bind variables from previous query, eg:
     152045
     "test-armhf-armhf-libvirt"
     "guest-start.2"

After:

 Query A (combined):

            WITH f AS (
            SELECT f.flight AS flight,
                   j.job AS job,
                   f.started AS started,
                   j.status AS status
                     FROM flights f
                     JOIN jobs j USING (flight)
                     JOIN runvars r
                             ON  f.flight=r.flight
                            AND  r.name=?
                    WHERE  j.job=r.job
                      AND  f.blessing=?
                      AND  f.branch=?
                      AND  j.job=?
                      AND  r.val=?
                      AND  (j.status='pass' OR j.status='fail'
                           OR j.status='truncated'!)
                      AND  f.started IS NOT NULL
                      AND  f.started >= ?
                 ORDER BY f.started DESC

            )
            SELECT flight, max_finished, job, started, status,
            (
        WITH tsteps AS
        (
            SELECT *
              FROM steps
             WHERE flight=f.flight AND job=f.job
        )
        , tsteps2 AS
        (
            SELECT *
              FROM tsteps
             WHERE finished <=
                     (SELECT finished
                        FROM tsteps
                       WHERE tsteps.testid = ?)
        )
        SELECT (
            SELECT max(finished)-min(started)
              FROM tsteps2
          ) - (
            SELECT sum(finished-started)
              FROM tsteps2
             WHERE step = 'ts-hosts-allocate'
          )
                AS duration

            ) FROM f

 Query B (combined):

            WITH f AS (
            SELECT f.flight AS flight,
                   s.job AS job,
                   NULL as started,
                   NULL as status,
                   max(s.finished) AS max_finished
                      FROM steps s JOIN flights f
                        ON s.flight=f.flight
                     WHERE s.job=? AND f.blessing=? AND f.branch=?
                       AND s.finished IS NOT NULL
                       AND f.started IS NOT NULL
                       AND f.started >= ?
                     GROUP BY f.flight, s.job
                     ORDER BY max_finished DESC

            )
            SELECT flight, max_finished, job, started, status,
            (
        WITH tsteps AS
        (
            SELECT *
              FROM steps
             WHERE flight=f.flight AND job=f.job
        )
        , tsteps2 AS
        (
            SELECT *
              FROM tsteps
             WHERE finished <=
                     (SELECT finished
                        FROM tsteps
                       WHERE tsteps.testid = ?)
        )
        SELECT (
            SELECT max(finished)-min(started)
              FROM tsteps2
          ) - (
            SELECT sum(finished-started)
              FROM tsteps2
             WHERE step = 'ts-hosts-allocate'
          )
                AS duration

            ) FROM f

Diff for query A:

@@ -1,3 +1,4 @@
+            WITH f AS (
             SELECT f.flight AS flight,
                    j.job AS job,
                    f.started AS started,
@@ -18,11 +19,14 @@
                       AND  f.started >= ?
                  ORDER BY f.started DESC

+            )
+            SELECT flight, max_finished, job, started, status,
+            (
        WITH tsteps AS
         (
             SELECT *
               FROM steps
-             WHERE flight=? AND job=?
+             WHERE flight=f.flight AND job=f.job
         )
         , tsteps2 AS
         (
@@ -42,3 +46,5 @@
              WHERE step = 'ts-hosts-allocate'
           )
                 AS duration
+
+            ) FROM f

Diff for query B:

@@ -1,3 +1,4 @@
+            WITH f AS (
             SELECT f.flight AS flight,
                    s.job AS job,
                    NULL as started,
@@ -12,11 +13,14 @@
                      GROUP BY f.flight, s.job
                      ORDER BY max_finished DESC

+            )
+            SELECT flight, max_finished, job, started, status,
+            (
         WITH tsteps AS
         (
             SELECT *
               FROM steps
-             WHERE flight=? AND job=?
+             WHERE flight=f.flight AND job=f.job
         )
         , tsteps2 AS
         (
@@ -36,3 +40,5 @@
              WHERE step = 'ts-hosts-allocate'
           )
                 AS duration
+
+            ) FROM f

CC: George Dunlap <George.Dunlap@citrix.com>
Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 Osstest/Executive.pm | 31 ++++++++++++++++++++-----------
 1 file changed, 20 insertions(+), 11 deletions(-)

diff --git a/Osstest/Executive.pm b/Osstest/Executive.pm
index 621153ee..66c93ab9 100644
--- a/Osstest/Executive.pm
+++ b/Osstest/Executive.pm
@@ -1192,7 +1192,7 @@ END
         (
             SELECT *
               FROM steps
-             WHERE flight=? AND job=?
+             WHERE flight=f.flight AND job=f.job
         )
 END_ALWAYS
         , tsteps2 AS
@@ -1216,9 +1216,20 @@ END_UPTOINCL
                 AS duration
 END_ALWAYS
 	
-    my $recentflights_q= $dbh_tests->prepare($recentflights_qtxt);
-    my $duration_anyref_q= $dbh_tests->prepare($duration_anyref_qtxt);
-    my $duration_duration_q = $dbh_tests->prepare($duration_duration_qtxt);
+    my $prepare_combi = sub {
+	db_prepare(<<END);
+            WITH f AS (
+$_[0]
+            )
+            SELECT flight, max_finished, job, started, status,
+            (
+$duration_duration_qtxt
+            ) FROM f
+END
+    };
+
+    my $recentflights_q= $prepare_combi->($recentflights_qtxt);
+    my $duration_anyref_q= $prepare_combi->($duration_anyref_qtxt);
 
     return sub {
         my ($job, $hostidname, $onhost, $uptoincl_testid) = @_;
@@ -1239,14 +1250,16 @@ END_ALWAYS
                                       $branch,
                                       $job,
                                       $onhost,
-                                      $limit);
+                                      $limit,
+				      @x_params);
             $refs= $recentflights_q->fetchall_arrayref({});
             $recentflights_q->finish();
             $dbg->("SAME-HOST GOT ".scalar(@$refs));
         }
 
         if (!@$refs) {
-            $duration_anyref_q->execute($job, $blessing, $branch, $limit);
+            $duration_anyref_q->execute($job, $blessing, $branch, $limit,
+					@x_params);
             $refs= $duration_anyref_q->fetchall_arrayref({});
             $duration_anyref_q->finish();
             $dbg->("ANY-HOST GOT ".scalar(@$refs));
@@ -1259,11 +1272,7 @@ END_ALWAYS
 
         my $duration_max= 0;
         foreach my $ref (@$refs) {
-	    my @d_d_args = ($ref->{flight}, $job);
-	    push @d_d_args, @x_params;
-            $duration_duration_q->execute(@d_d_args);
-            my ($duration) = $duration_duration_q->fetchrow_array();
-            $duration_duration_q->finish();
+            my ($duration) = $ref->{duration};
             if ($duration) {
                 $dbg->("REF $ref->{flight} DURATION $duration ".
 		       ($ref->{status} // ''));
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue Jul 21 19:06:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jul 2020 19:06:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxxbL-0004bd-4x; Tue, 21 Jul 2020 19:06:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=8efX=BA=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jxxbK-0004aT-GG
 for xen-devel@lists.xenproject.org; Tue, 21 Jul 2020 19:06:54 +0000
X-Inumbo-ID: 55929932-cb85-11ea-85a6-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 55929932-cb85-11ea-85a6-bc764e2007e4;
 Tue, 21 Jul 2020 19:06:51 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jxxDa-0001u7-QU; Tue, 21 Jul 2020 19:42:22 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH 13/14] duration_estimator: Move $uptincl_testid to
 separate @x_params
Date: Tue, 21 Jul 2020 19:42:04 +0100
Message-Id: <20200721184205.15232-14-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200721184205.15232-1-ian.jackson@eu.citrix.com>
References: <20200721184205.15232-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

This is going to be useful soon.

No functional change.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 Osstest/Executive.pm | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/Osstest/Executive.pm b/Osstest/Executive.pm
index 8e8b3d33..621153ee 100644
--- a/Osstest/Executive.pm
+++ b/Osstest/Executive.pm
@@ -1223,6 +1223,9 @@ END_ALWAYS
     return sub {
         my ($job, $hostidname, $onhost, $uptoincl_testid) = @_;
 
+	my @x_params;
+	push @x_params, $uptoincl_testid if $will_uptoincl_testid;
+
         my $dbg= $debug ? sub {
             $debug->("DUR $branch $blessing $job $hostidname $onhost @_");
         } : sub { };
@@ -1257,7 +1260,7 @@ END_ALWAYS
         my $duration_max= 0;
         foreach my $ref (@$refs) {
 	    my @d_d_args = ($ref->{flight}, $job);
-	    push @d_d_args, $uptoincl_testid if $will_uptoincl_testid;
+	    push @d_d_args, @x_params;
             $duration_duration_q->execute(@d_d_args);
             my ($duration) = $duration_duration_q->fetchrow_array();
             $duration_duration_q->finish();
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue Jul 21 19:07:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jul 2020 19:07:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxxbS-0004dJ-Ez; Tue, 21 Jul 2020 19:07:02 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=8efX=BA=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jxxbR-0004d0-7u
 for xen-devel@lists.xenproject.org; Tue, 21 Jul 2020 19:07:01 +0000
X-Inumbo-ID: 5ab65f34-cb85-11ea-85a6-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5ab65f34-cb85-11ea-85a6-bc764e2007e4;
 Tue, 21 Jul 2020 19:07:00 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jxxDa-0001u7-7T; Tue, 21 Jul 2020 19:42:22 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH 11/14] duration_estimator: Explicitly provide null in
 general host q
Date: Tue, 21 Jul 2020 19:42:02 +0100
Message-Id: <20200721184205.15232-12-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200721184205.15232-1-ian.jackson@eu.citrix.com>
References: <20200721184205.15232-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Our spec. says we return nulls for started and status if we don't find
a job matching the host spec.

The way this works right now is that we look up the nonexistent
entries in $refs->[0].  This is not really brilliant and is going to
be troublesome as we continue to refactor.

Provide these values explicitly.  No functional change.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 Osstest/Executive.pm | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/Osstest/Executive.pm b/Osstest/Executive.pm
index c966a1be..ee1bf07e 100644
--- a/Osstest/Executive.pm
+++ b/Osstest/Executive.pm
@@ -1169,6 +1169,8 @@ END
 
     my $duration_anyref_qtxt= <<END;
             SELECT f.flight AS flight,
+                   NULL as started,
+                   NULL as status,
                    max(s.finished) AS max_finished
 		      FROM steps s JOIN flights f
 		        ON s.flight=f.flight
-- 
2.20.1

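[Archive note] The technique in the patch above, padding the fallback query with explicit `NULL AS started` / `NULL AS status` columns so every result row carries the same keys, can be sketched with an in-memory SQLite database (hypothetical schema, loosely modelled on osstest's `steps` table):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.row_factory = sqlite3.Row  # rows become name-addressable
con.execute("CREATE TABLE steps (flight INTEGER, finished INTEGER)")
con.executemany("INSERT INTO steps VALUES (?, ?)", [(42, 100), (42, 250)])

# Fallback query: there is no matching job, so 'started' and 'status'
# are provided explicitly as NULL rather than being absent from the
# row, and consumers need no special case for the fallback shape.
row = con.execute("""
    SELECT flight,
           NULL AS started,
           NULL AS status,
           max(finished) AS max_finished
      FROM steps GROUP BY flight
""").fetchone()
print(dict(row))
# {'flight': 42, 'started': None, 'status': None, 'max_finished': 250}
```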


From xen-devel-bounces@lists.xenproject.org Tue Jul 21 19:07:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jul 2020 19:07:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxxbV-0004eV-Pe; Tue, 21 Jul 2020 19:07:05 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=8efX=BA=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jxxbU-0004d0-GC
 for xen-devel@lists.xenproject.org; Tue, 21 Jul 2020 19:07:04 +0000
X-Inumbo-ID: 5c74bf78-cb85-11ea-85a6-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5c74bf78-cb85-11ea-85a6-bc764e2007e4;
 Tue, 21 Jul 2020 19:07:03 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jxxDa-0001u7-I0; Tue, 21 Jul 2020 19:42:22 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH 12/14] duration_estimator: Return job column in first
 query
Date: Tue, 21 Jul 2020 19:42:03 +0100
Message-Id: <20200721184205.15232-13-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200721184205.15232-1-ian.jackson@eu.citrix.com>
References: <20200721184205.15232-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <George.Dunlap@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Right now this is pointless since the Perl code doesn't need it.  But
this row is going to be part of a WITH clause soon.

No functional change.

Diffs to two example queries (from the Perl DBI trace):

            SELECT f.flight AS flight,
+                   j.job AS job,
                   f.started AS started,
                    j.status AS status
                     FROM flights f
                     JOIN jobs j USING (flight)
                     JOIN runvars r
                             ON  f.flight=r.flight
                            AND  r.name=?
                    WHERE  j.job=r.job
                      AND  f.blessing=?
                      AND  f.branch=?
                      AND  j.job=?
                      AND  r.val=?
                      AND  (j.status='pass' OR j.status='fail'
                           OR j.status='truncated')
                      AND  f.started IS NOT NULL
                       AND  f.started >= ?
                  ORDER BY f.started DESC

            SELECT f.flight AS flight,
+                   s.job AS job,
                    NULL as started,
                    NULL as status,
                    max(s.finished) AS max_finished
                      FROM steps s JOIN flights f
                        ON s.flight=f.flight
                     WHERE s.job=? AND f.blessing=? AND f.branch=?
                        AND s.finished IS NOT NULL
                        AND f.started IS NOT NULL
                        AND f.started >= ?
-                     GROUP BY f.flight
+                     GROUP BY f.flight, s.job
                      ORDER BY max_finished DESC

CC: George Dunlap <George.Dunlap@citrix.com>
Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 Osstest/Executive.pm | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/Osstest/Executive.pm b/Osstest/Executive.pm
index ee1bf07e..8e8b3d33 100644
--- a/Osstest/Executive.pm
+++ b/Osstest/Executive.pm
@@ -1148,6 +1148,7 @@ sub duration_estimator ($$;$$) {
     }
     my $recentflights_qtxt= <<END;
             SELECT f.flight AS flight,
+                   j.job AS job,
 		   f.started AS started,
                    j.status AS status
 		     FROM flights f
@@ -1169,6 +1170,7 @@ END
 
     my $duration_anyref_qtxt= <<END;
             SELECT f.flight AS flight,
+                   s.job AS job,
                    NULL as started,
                    NULL as status,
                    max(s.finished) AS max_finished
@@ -1178,7 +1180,7 @@ END
                        AND s.finished IS NOT NULL
                        AND f.started IS NOT NULL
                        AND f.started >= ?
-                     GROUP BY f.flight
+                     GROUP BY f.flight, s.job
                      ORDER BY max_finished DESC
 END
     # s J J J # fix perl-mode
-- 
2.20.1

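[Archive note] The `GROUP BY f.flight, s.job` change in the patch above follows a standard SQL rule: once a non-aggregated column such as `s.job` joins a SELECT list containing an aggregate, it must also appear in the GROUP BY clause. SQLite happens to tolerate ungrouped columns, but stricter engines such as PostgreSQL reject them. A small sketch with a hypothetical `steps` table:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE steps (flight INTEGER, job TEXT, finished INTEGER)")
con.executemany("INSERT INTO steps VALUES (?, ?, ?)",
                [(1, "build-amd64", 10), (1, "build-amd64", 30),
                 (1, "test-amd64", 20)])

# Selecting 'job' alongside max(finished) widens the grouping key to
# (flight, job): one row per flight/job pair, latest finish first.
rows = con.execute("""
    SELECT flight, job, max(finished) AS max_finished
      FROM steps
     GROUP BY flight, job
     ORDER BY max_finished DESC
""").fetchall()
print(rows)
# [(1, 'build-amd64', 30), (1, 'test-amd64', 20)]
```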


From xen-devel-bounces@lists.xenproject.org Tue Jul 21 19:22:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jul 2020 19:22:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxxqE-0006ZY-6w; Tue, 21 Jul 2020 19:22:18 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tByU=BA=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jxxqC-0006ZE-UH
 for xen-devel@lists.xenproject.org; Tue, 21 Jul 2020 19:22:16 +0000
X-Inumbo-ID: 79568264-cb87-11ea-a12b-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 79568264-cb87-11ea-a12b-12813bfff9fa;
 Tue, 21 Jul 2020 19:22:10 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=dSHfSqEz24zvUFmAywOle41TbLdc1AXmqoJ8Lsgd/U8=; b=CAKA3AeDoC932aYM81OfCKYwC
 qb4OqVls3PJS2UN1rfAngxbDWUCwzy80SAzdXxbasgzujwCfE+mGpfHvSTkV9NE0vY0m9AlWwyQpa
 c1jyCwEEQ0znM09rRgjVMb7Az4Zf4nrIPtjCe/DCA/jNaNrZn6/wBP4PnDVAkOIMnAU9s=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jxxq6-0006ds-B0; Tue, 21 Jul 2020 19:22:10 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jxxq5-0003Gb-Vi; Tue, 21 Jul 2020 19:22:10 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jxxq5-00032B-V3; Tue, 21 Jul 2020 19:22:09 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-152077-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 152077: tolerable all pass - PUSHED
X-Osstest-Failures: xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=f3885e8c3ceaef101e466466e879e97103ecce18
X-Osstest-Versions-That: xen=057cfa258ca554013178c5aaf6f80db47fb184fc
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 21 Jul 2020 19:22:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 152077 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/152077/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  f3885e8c3ceaef101e466466e879e97103ecce18
baseline version:
 xen                  057cfa258ca554013178c5aaf6f80db47fb184fc

Last test of basis   152074  2020-07-21 13:00:38 Z    0 days
Testing same since   152077  2020-07-21 16:02:01 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Christian Lindig <christian.lindig@citrix.com>
  Elliott Mitchell <ehem+xen@m5p.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   057cfa258c..f3885e8c3c  f3885e8c3ceaef101e466466e879e97103ecce18 -> smoke


From xen-devel-bounces@lists.xenproject.org Tue Jul 21 19:36:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jul 2020 19:36:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxy3X-0007ZM-N3; Tue, 21 Jul 2020 19:36:03 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Eoy0=BA=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1jxy3W-0007ZH-9F
 for xen-devel@lists.xenproject.org; Tue, 21 Jul 2020 19:36:02 +0000
X-Inumbo-ID: 6873ac9a-cb89-11ea-a134-12813bfff9fa
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6873ac9a-cb89-11ea-a134-12813bfff9fa;
 Tue, 21 Jul 2020 19:36:01 +0000 (UTC)
Received: from localhost (c-67-164-102-47.hsd1.ca.comcast.net [67.164.102.47])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256
 bits)) (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 76CE7206C1;
 Tue, 21 Jul 2020 19:36:00 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
 s=default; t=1595360160;
 bh=OZpuLvVIoSMSHp1G5qIFDzv3VQbS1nfra1gbWBsZOrY=;
 h=Date:From:To:cc:Subject:In-Reply-To:References:From;
 b=E7KhPjPKjghUwuXVAKhuK5HiwgUUP093orNvf7mePQndiH6Z6ZLc6pgjU1lwUbFey
 TnGuW4QsnBoo1SbVSDmXmedhpDSFamGQwdWhYA3YF1VO0z0azNRnsRzpAwF0urCagS
 +2qfqip/sR/XerKQWIRosIbzmzxt4laq9EbQXxOw=
Date: Tue, 21 Jul 2020 12:35:57 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Rob Herring <robh@kernel.org>
Subject: Re: RFC: PCI devices passthrough on Arm design proposal
In-Reply-To: <CAL_JsqKiaSNsKxqenVtgfk-_5=im73CHfEM3YqiVTFvRBbKsJA@mail.gmail.com>
Message-ID: <alpine.DEB.2.21.2007211235340.32544@sstabellini-ThinkPad-T480s>
References: <3F6E40FB-79C5-4AE8-81CA-E16CA37BB298@arm.com>
 <BD475825-10F6-4538-8294-931E370A602C@arm.com>
 <E9CBAA57-5EF3-47F9-8A40-F5D7816DB2A4@arm.com>
 <20200717111644.GS7191@Air-de-Roger>
 <3B8A1B9D-A101-4937-AC42-4F62BE7E677C@arm.com>
 <20200717143120.GT7191@Air-de-Roger>
 <8AF78FF1-C389-44D8-896B-B95C1A0560E2@arm.com>
 <alpine.DEB.2.21.2007201520370.32544@sstabellini-ThinkPad-T480s>
 <CAL_JsqKiaSNsKxqenVtgfk-_5=im73CHfEM3YqiVTFvRBbKsJA@mail.gmail.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien.grall.oss@gmail.com>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>, Rahul Singh <Rahul.Singh@arm.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 nd <nd@arm.com>, =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Mon, 20 Jul 2020, Rob Herring wrote:
> On Mon, Jul 20, 2020 at 5:24 PM Stefano Stabellini
> <sstabellini@kernel.org> wrote:
> >
> > + Rob Herring
> >
> > On Fri, 17 Jul 2020, Bertrand Marquis wrote:
> > > >> Regarding the DT entry, this is not coming from us and this is already
> > > >> defined this way in existing DTBs, we just reuse the existing entry.
> > > >
> > > > Is it possible to standardize the property and drop the linux prefix?
> > >
> > > Honestly i do not know. This was there in the DT examples we checked so
> > > we planned to use that. But it might be possible to standardize this.
> >
> > We could certainly start a discussion about it. It looks like
> > linux,pci-domain is used beyond purely the Linux kernel. I think that it
> > is worth getting Rob's advice on this.
> >
> >
> > Rob, for context we are trying to get Linux and Xen to agree on a
> > numbering scheme to identify PCI host bridges correctly. We already have
> > an existing hypercall from the old x86 days that passes a segment number
> > to Xen as a parameter, see drivers/xen/pci.c:xen_add_device.
> > (xen_add_device assumes that a Linux domain and a PCI segment are the
> > same thing which I understand is not the case.)
> >
> >
> > There is an existing device tree property called "linux,pci-domain"
> > which would solve the problem (ignoring the difference in the definition
> > of domain and segment) but it is clearly marked as a Linux-specific
> > property. Is there anything more "standard" that we can use?
> >
> > I can find PCI domains being mentioned a few times in the Device Tree
> > PCI specification but can't find any associated IDs, and I couldn't find
> > segments at all.
> >
> > What's your take on this? In general, what's your suggestion on getting
> > Xen and Linux (and other OSes which could be used as dom0 one day like
> > Zephyr) to agree on a simple numbering scheme to identify PCI host
> > bridges?
> >
> > Should we just use "linux,pci-domain" as-is because it is already the de
> > facto standard? It looks like the property appears in both QEMU and
> > UBoot already.
> 
> Sounds good to me. We could drop the 'linux' part, but based on other
> places that has happened it just means we end up supporting both
> strings forever.

OK, thank you!


From xen-devel-bounces@lists.xenproject.org Tue Jul 21 19:55:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jul 2020 19:55:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxyMY-0000r6-BE; Tue, 21 Jul 2020 19:55:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=EbhO=BA=amazon.com=prvs=46490858e=anchalag@srs-us1.protection.inumbo.net>)
 id 1jxyMX-0000r1-4x
 for xen-devel@lists.xenproject.org; Tue, 21 Jul 2020 19:55:41 +0000
X-Inumbo-ID: 2733a318-cb8c-11ea-85b6-bc764e2007e4
Received: from smtp-fw-2101.amazon.com (unknown [72.21.196.25])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2733a318-cb8c-11ea-85b6-bc764e2007e4;
 Tue, 21 Jul 2020 19:55:40 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=amazon.com; i=@amazon.com; q=dns/txt; s=amazon201209;
 t=1595361340; x=1626897340;
 h=date:from:to:cc:message-id:references:mime-version:
 content-transfer-encoding:in-reply-to:subject;
 bh=5vreiwo8fMlL1xtpYFrbID02dZ/n0SD7ehepl0EtZ6c=;
 b=R4LPQiyfhyWjdwbHOUSAKz2huhmE0SICxVM2roiMCqqAjDItCFtMUEJI
 mQnJ5PYNAz/BGmIeTFmJpBLptzqsdEHq+vJ17ibCIHT+325mLBrbN8+xw
 HwXTQVdVFWAm29c5jAgWf8Llvx8GQ8Fxc/sKUZnL84JzhDKU3jf//yt+v c=;
IronPort-SDR: qkBMVLCMhOgpZ4WzlwZUhSxPSGeQdV7KpMOrhhIruU5PoAtatjY3hzMicEpmWdiM7OgXDrZMKk
 oyFa+lhnjHgA==
X-IronPort-AV: E=Sophos;i="5.75,379,1589241600"; d="scan'208";a="43212392"
Subject: Re: [PATCH v2 01/11] xen/manage: keep track of the on-going suspend
 mode
Received: from iad12-co-svc-p1-lb1-vlan2.amazon.com (HELO
 email-inbound-relay-1d-37fd6b3d.us-east-1.amazon.com) ([10.43.8.2])
 by smtp-border-fw-out-2101.iad2.amazon.com with ESMTP;
 21 Jul 2020 19:55:38 +0000
Received: from EX13MTAUWC001.ant.amazon.com
 (iad55-ws-svc-p15-lb9-vlan3.iad.amazon.com [10.40.159.166])
 by email-inbound-relay-1d-37fd6b3d.us-east-1.amazon.com (Postfix) with ESMTPS
 id 094CE2826E7; Tue, 21 Jul 2020 19:55:31 +0000 (UTC)
Received: from EX13D05UWC003.ant.amazon.com (10.43.162.226) by
 EX13MTAUWC001.ant.amazon.com (10.43.162.135) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Tue, 21 Jul 2020 19:55:09 +0000
Received: from EX13MTAUWC001.ant.amazon.com (10.43.162.135) by
 EX13D05UWC003.ant.amazon.com (10.43.162.226) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Tue, 21 Jul 2020 19:55:09 +0000
Received: from dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com
 (172.22.96.68) by mail-relay.amazon.com (10.43.162.232) with Microsoft SMTP
 Server id 15.0.1497.2 via Frontend Transport; Tue, 21 Jul 2020 19:55:09 +0000
Received: by dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com (Postfix,
 from userid 4335130)
 id 813E040839; Tue, 21 Jul 2020 19:55:09 +0000 (UTC)
Date: Tue, 21 Jul 2020 19:55:09 +0000
From: Anchal Agarwal <anchalag@amazon.com>
To: Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>
Message-ID: <20200721195509.GA14682@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
References: <cover.1593665947.git.anchalag@amazon.com>
 <20200702182136.GA3511@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
 <50298859-0d0e-6eb0-029b-30df2a4ecd63@oracle.com>
 <20200715204943.GB17938@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
 <0ca3c501-e69a-d2c9-a24c-f83afd4bdb8c@oracle.com>
 <20200717191009.GA3387@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
 <5464f384-d4b4-73f0-d39e-60ba9800d804@oracle.com>
 <20200720093705.GG7191@Air-de-Roger>
 <20200721001736.GB19610@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
 <20200721083018.GM7191@Air-de-Roger>
MIME-Version: 1.0
Content-Type: text/plain; charset="iso-8859-1"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20200721083018.GM7191@Air-de-Roger>
User-Agent: Mutt/1.5.21 (2010-09-15)
Precedence: Bulk
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: eduval@amazon.com, len.brown@intel.com, peterz@infradead.org,
 benh@kernel.crashing.org, x86@kernel.org, linux-mm@kvack.org, pavel@ucw.cz,
 hpa@zytor.com, tglx@linutronix.de, sstabellini@kernel.org, kamatam@amazon.com,
 marmarek@invisiblethingslab.com, mingo@redhat.com,
 xen-devel@lists.xenproject.org, sblbir@amazon.com, axboe@kernel.dk,
 konrad.wilk@oracle.com, anchalag@amazon.com, bp@alien8.de,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>, jgross@suse.com,
 netdev@vger.kernel.org, linux-pm@vger.kernel.org, rjw@rjwysocki.net,
 linux-kernel@vger.kernel.org, vkuznets@redhat.com, davem@davemloft.net,
 dwmw@amazon.co.uk
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Tue, Jul 21, 2020 at 10:30:18AM +0200, Roger Pau Monné wrote:
> CAUTION: This email originated from outside of the organization. Do not click links or open attachments unless you can confirm the sender and know the content is safe.
> 
> 
> 
> Marek: I'm adding you in case you could be able to give this a try and
> make sure it doesn't break suspend for dom0.
> 
> On Tue, Jul 21, 2020 at 12:17:36AM +0000, Anchal Agarwal wrote:
> > On Mon, Jul 20, 2020 at 11:37:05AM +0200, Roger Pau Monné wrote:
> > >
> > > On Sat, Jul 18, 2020 at 09:47:04PM -0400, Boris Ostrovsky wrote:
> > > > (Roger, question for you at the very end)
> > > >
> > > > On 7/17/20 3:10 PM, Anchal Agarwal wrote:
> > > > > On Wed, Jul 15, 2020 at 05:18:08PM -0400, Boris Ostrovsky wrote:
> > > > >>
> > > > >> On 7/15/20 4:49 PM, Anchal Agarwal wrote:
> > > > >>> On Mon, Jul 13, 2020 at 11:52:01AM -0400, Boris Ostrovsky wrote:
> > > > >>>>
> > > > >>>> On 7/2/20 2:21 PM, Anchal Agarwal wrote:
> > > > >>>> And PVH dom0.
> > > > >>> That's another good use case to make it work with however, I still
> > > > >>> think that should be tested/worked upon separately as the feature itself
> > > > >>> (PVH Dom0) is very new.
> > > > >>
> > > > >> Same question here --- will this break PVH dom0?
> > > > >>
> > > > > I haven't tested it as a part of this series. Is that a blocker here?
> > > >
> > > >
> > > > I suspect dom0 will not do well now as far as hibernation goes, in which
> > > > case you are not breaking anything.
> > > >
> > > >
> > > > Roger?
> > >
> > > I sadly don't have any box ATM that supports hibernation where I
> > > could test it. We have hibernation support for PV dom0, so while I
> > > haven't done anything specific to support or test hibernation on PVH
> > > dom0 I would at least aim to not make this any worse, and hence the
> > > check should at least also fail for a PVH dom0?
> > >
> > > if (!xen_hvm_domain() || xen_initial_domain())
> > >     return -ENODEV;
> > >
> > > Ie: none of this should be applied to a PVH dom0, as it doesn't have
> > > PV devices and hence should follow the bare metal device suspend.
> > >
> > So from what I understand, you meant that guests running on a PVH dom0 should
> > not hibernate if hibernation is triggered from within the guest, or should they?
> 
> Er no to both I think. What I meant is that a PVH dom0 should be able
> to properly suspend, and we should make sure this work doesn't make
> this any harder (or breaks it if it's currently working).
> 
> Or at least that's how I understood the question raised by Boris.
> 
> You are adding code to the generic suspend path that's also used by dom0
> in order to perform bare metal suspension. This is fine now for a PV
> dom0 because the code is gated on xen_hvm_domain, but you should also
> take into account that a PVH dom0 is considered a HVM domain, and
> hence will get the notifier registered.
>
OK, that makes sense now. This is good to be safe, but my patch series is only to
support domU hibernation, so I am not sure whether this will affect PVH dom0.
However, since I do not have a good way of testing it, I will add the check.

Moreover, in Patch-0004 I register the suspend/resume syscore_ops specifically for domU
hibernation, and only if it's a xen_hvm_domain. I don't see any reason they should not
be registered for domUs running on a PVH dom0. Those suspend/resume callbacks will
only be invoked in case of hibernation and will be skipped if the generic suspend path
is in progress. Do you see any issue with that?

> > > Also I would contact the QubesOS guys, they rely heavily on the
> > > suspend feature for dom0, and that's something not currently tested by
> > > osstest so any breakages there go unnoticed.
> > >
> > Was this for me or Boris? If its the former then I have no idea how to?
> 
> I've now added Marek.
> 
> Roger.
Anchal


From xen-devel-bounces@lists.xenproject.org Tue Jul 21 21:13:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jul 2020 21:13:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jxzZ9-0007XZ-7j; Tue, 21 Jul 2020 21:12:47 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=8Oq7=BA=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jxzZ8-0007XU-9p
 for xen-devel@lists.xenproject.org; Tue, 21 Jul 2020 21:12:46 +0000
X-Inumbo-ID: eb13d9ba-cb96-11ea-a151-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id eb13d9ba-cb96-11ea-a151-12813bfff9fa;
 Tue, 21 Jul 2020 21:12:43 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=WWbAlpS5UkAiKMlHO0S+8mbb3NFJuOWLX01zCrTB4w4=; b=reG3RG2X4kORvKyzghVA5tpvdB
 D264W/HedP+BLwu/99j+VkJFj5yoTI1p97TLVRYcRPpFN/3Uh6scD8QZGJ7xcDQeed8EuUaIE4Lzf
 OVsUShIQ1kafYhaJ3B3geqXPmcN3rDkWwdAbuxL7ys4geueqvQ8F6khSNqCIDhqNv64k=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jxzZ4-0000ZO-Bg; Tue, 21 Jul 2020 21:12:42 +0000
Received: from 54-240-197-236.amazon.com ([54.240.197.236]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jxzZ4-0001Md-2K; Tue, 21 Jul 2020 21:12:42 +0000
Subject: Re: Virtio in Xen on Arm (based on IOREQ concept)
To: Oleksandr <olekstysh@gmail.com>
References: <CAPD2p-nthLq5NaU32u8pVaa-ub=a9-LOPenupntTYdS-cu31jQ@mail.gmail.com>
 <20200717150039.GV7191@Air-de-Roger>
 <8f4e0c0d-b3d4-9dd3-ce20-639539321968@gmail.com>
 <3dcab37d-0d60-f1cc-1d59-b5497f0fa95f@xen.org>
 <b6cf0931-c31e-b03b-3995-688536de391a@gmail.com>
From: Julien Grall <julien@xen.org>
Message-ID: <05acce61-5b29-76f7-5664-3438361caf82@xen.org>
Date: Tue, 21 Jul 2020 22:12:40 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:68.0)
 Gecko/20100101 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <b6cf0931-c31e-b03b-3995-688536de391a@gmail.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Oleksandr Andrushchenko <andr2000@gmail.com>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>,
 xen-devel <xen-devel@lists.xenproject.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Artem Mygaiev <joculator@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi Oleksandr,

On 21/07/2020 19:16, Oleksandr wrote:
> 
> On 21.07.20 17:27, Julien Grall wrote:
>> On a similar topic, I am a bit surprised you didn't encounter memory 
>> exhaustion when trying to use virtio. Because of how Linux currently 
>> works (see XSA-300), the backend domain has to have at least as much 
>> RAM as the domains it serves. For instance, if you serve two domains 
>> with 1GB of RAM each, then your backend would need at least 2GB plus 
>> some for its own purposes.
>>
>> This probably wants to be resolved by allowing foreign mappings to be 
>> "paged" out as you would for memory assigned to a userspace process. 
> 
> Didn't notice the last sentence initially. Could you please explain your 
> idea in detail if possible? Does it mean that, if implemented, it would be 
> feasible to map all guest memory regardless of how much memory the guest 
> has?
>
> Avoiding map/unmap of memory on each guest request would allow us to have 
> better performance (of course, taking care of the fact that the guest 
> memory layout could change)...

I will explain that below. But first, let me comment on KVM.

> Actually, what I understand from looking at 
> kvmtool is that it does not map/unmap memory dynamically; it just 
> calculates virtual addresses according to the gfn provided.

Memory management in KVM and Xen is quite different. In the 
case of KVM, the guest RAM is effectively memory from userspace 
(allocated via mmap) that is then shared with the guest.

 From the userspace PoV, the guest memory will always be accessible from 
the same virtual region. However, behind the scenes, the pages may not 
always reside in memory. They are basically managed the same way as 
"normal" userspace memory.

In the case of Xen, we are basically stealing a guest physical page 
allocated via kmalloc() and providing no facility for Linux to reclaim 
the page if it needs to do so before the userspace decides to unmap the 
foreign mapping.

I think it would be good to handle foreign mappings the same way as 
userspace memory. By that I mean that Linux could reclaim the physical 
page used by the foreign mapping if it needs to.

The process for reclaiming the page would look like:
     1) Unmap the foreign page
     2) Balloon in the backend domain physical address used by the 
foreign mapping (allocate the page in the physmap)

The next time the userspace tries to access the foreign page, Linux 
will receive a data abort that would result in:
     1) Allocate a backend domain physical page
     2) Balloon out the physical address (remove the page from the physmap)
     3) Map the foreign mapping at the new guest physical address
     4) Map the guest physical page in the userspace address space

With this approach, we should be able to have a backend domain that can 
serve frontend domains without requiring a lot of memory.

Note that I haven't looked at the Linux code yet, so I don't know the 
complexity of implementing it or all the pitfalls.

One pitfall I can think of right now is that the frontend guest may have 
removed the page from its physmap. In that case the backend domain wouldn't 
be able to re-map the page. We definitely don't want to crash the 
backend app in this case. However, I am not entirely sure what the 
correct action would be.

Long term, we may want to consider using a separate region in the backend 
domain physical address space. This may remove the pressure on the backend 
domain's RAM and reduce the number of pages that may be "swapped out".

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Jul 21 21:49:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jul 2020 21:49:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jy08q-0001nb-8g; Tue, 21 Jul 2020 21:49:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Z8Xg=BA=oracle.com=boris.ostrovsky@srs-us1.protection.inumbo.net>)
 id 1jy08p-0001nW-TK
 for xen-devel@lists.xenproject.org; Tue, 21 Jul 2020 21:49:39 +0000
X-Inumbo-ID: 131a3e72-cb9c-11ea-85e0-bc764e2007e4
Received: from userp2120.oracle.com (unknown [156.151.31.85])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 131a3e72-cb9c-11ea-85e0-bc764e2007e4;
 Tue, 21 Jul 2020 21:49:38 +0000 (UTC)
Received: from pps.filterd (userp2120.oracle.com [127.0.0.1])
 by userp2120.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 06LLbH1D053011;
 Tue, 21 Jul 2020 21:49:01 GMT
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com;
 h=subject : to : cc :
 references : from : message-id : date : mime-version : in-reply-to :
 content-type : content-transfer-encoding; s=corp-2020-01-29;
 bh=Wt6D8IQ+tsBZP6vphxS0dWXDmx3gxPrZp48cft8vOXU=;
 b=ZR4VdSvpet26tZX92/sCTy6vjJRFNCoCf79j4Li/q7cKsN4RA+SEgcehrgeXzYghI39g
 GYS4x0QEr3/UcchZE2HhK91MqIhJMoSV8kIuz9KF8yxpE3GFJORL/0kdVBy5cxXq/z3V
 E6eHSlwLWYh61MdbHNBMQ27s290U0bffq8ULeNHLH+JxD3rM5d9R3CEUibv0B65VdOIW
 oGKqsSKrFuSP3ym/tIxRphgdMC+k3Osf/01ZiVqNeBUfMOySghsQKwssHv2XWAZAR3f0
 BK03rso9V7f/kosi6JqKalp1nm3gcCxcWL+QfhK0sP7cBEhVkWUki/cpJQ6rw7fbBV1k Vg== 
Received: from userp3020.oracle.com (userp3020.oracle.com [156.151.31.79])
 by userp2120.oracle.com with ESMTP id 32d6ksm436-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=FAIL);
 Tue, 21 Jul 2020 21:49:01 +0000
Received: from pps.filterd (userp3020.oracle.com [127.0.0.1])
 by userp3020.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 06LLYBKO109659;
 Tue, 21 Jul 2020 21:49:01 GMT
Received: from aserv0121.oracle.com (aserv0121.oracle.com [141.146.126.235])
 by userp3020.oracle.com with ESMTP id 32e83g9p8c-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Tue, 21 Jul 2020 21:49:01 +0000
Received: from abhmp0019.oracle.com (abhmp0019.oracle.com [141.146.116.25])
 by aserv0121.oracle.com (8.14.4/8.13.8) with ESMTP id 06LLmuIO030407;
 Tue, 21 Jul 2020 21:48:56 GMT
Received: from [10.39.225.136] (/10.39.225.136)
 by default (Oracle Beehive Gateway v4.0)
 with ESMTP ; Tue, 21 Jul 2020 21:48:56 +0000
Subject: Re: [PATCH v2 01/11] xen/manage: keep track of the on-going suspend
 mode
To: Anchal Agarwal <anchalag@amazon.com>
References: <cover.1593665947.git.anchalag@amazon.com>
 <20200702182136.GA3511@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
 <50298859-0d0e-6eb0-029b-30df2a4ecd63@oracle.com>
 <20200715204943.GB17938@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
 <0ca3c501-e69a-d2c9-a24c-f83afd4bdb8c@oracle.com>
 <20200717191009.GA3387@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
 <5464f384-d4b4-73f0-d39e-60ba9800d804@oracle.com>
 <20200721000348.GA19610@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Autocrypt: addr=boris.ostrovsky@oracle.com; keydata=
 xsFNBFH8CgsBEAC0KiOi9siOvlXatK2xX99e/J3OvApoYWjieVQ9232Eb7GzCWrItCzP8FUV
 PQg8rMsSd0OzIvvjbEAvaWLlbs8wa3MtVLysHY/DfqRK9Zvr/RgrsYC6ukOB7igy2PGqZd+M
 MDnSmVzik0sPvB6xPV7QyFsykEgpnHbvdZAUy/vyys8xgT0PVYR5hyvhyf6VIfGuvqIsvJw5
 C8+P71CHI+U/IhsKrLrsiYHpAhQkw+Zvyeml6XSi5w4LXDbF+3oholKYCkPwxmGdK8MUIdkM
 d7iYdKqiP4W6FKQou/lC3jvOceGupEoDV9botSWEIIlKdtm6C4GfL45RD8V4B9iy24JHPlom
 woVWc0xBZboQguhauQqrBFooHO3roEeM1pxXjLUbDtH4t3SAI3gt4dpSyT3EvzhyNQVVIxj2
 FXnIChrYxR6S0ijSqUKO0cAduenhBrpYbz9qFcB/GyxD+ZWY7OgQKHUZMWapx5bHGQ8bUZz2
 SfjZwK+GETGhfkvNMf6zXbZkDq4kKB/ywaKvVPodS1Poa44+B9sxbUp1jMfFtlOJ3AYB0WDS
 Op3d7F2ry20CIf1Ifh0nIxkQPkTX7aX5rI92oZeu5u038dHUu/dO2EcuCjl1eDMGm5PLHDSP
 0QUw5xzk1Y8MG1JQ56PtqReO33inBXG63yTIikJmUXFTw6lLJwARAQABzTNCb3JpcyBPc3Ry
 b3Zza3kgKFdvcmspIDxib3Jpcy5vc3Ryb3Zza3lAb3JhY2xlLmNvbT7CwXgEEwECACIFAlH8
 CgsCGwMGCwkIBwMCBhUIAgkKCwQWAgMBAh4BAheAAAoJEIredpCGysGyasEP/j5xApopUf4g
 9Fl3UxZuBx+oduuw3JHqgbGZ2siA3EA4bKwtKq8eT7ekpApn4c0HA8TWTDtgZtLSV5IdH+9z
 JimBDrhLkDI3Zsx2CafL4pMJvpUavhc5mEU8myp4dWCuIylHiWG65agvUeFZYK4P33fGqoaS
 VGx3tsQIAr7MsQxilMfRiTEoYH0WWthhE0YVQzV6kx4wj4yLGYPPBtFqnrapKKC8yFTpgjaK
 jImqWhU9CSUAXdNEs/oKVR1XlkDpMCFDl88vKAuJwugnixjbPFTVPyoC7+4Bm/FnL3iwlJVE
 qIGQRspt09r+datFzPqSbp5Fo/9m4JSvgtPp2X2+gIGgLPWp2ft1NXHHVWP19sPgEsEJXSr9
 tskM8ScxEkqAUuDs6+x/ISX8wa5Pvmo65drN+JWA8EqKOHQG6LUsUdJolFM2i4Z0k40BnFU/
 kjTARjrXW94LwokVy4x+ZYgImrnKWeKac6fMfMwH2aKpCQLlVxdO4qvJkv92SzZz4538az1T
 m+3ekJAimou89cXwXHCFb5WqJcyjDfdQF857vTn1z4qu7udYCuuV/4xDEhslUq1+GcNDjAhB
 nNYPzD+SvhWEsrjuXv+fDONdJtmLUpKs4Jtak3smGGhZsqpcNv8nQzUGDQZjuCSmDqW8vn2o
 hWwveNeRTkxh+2x1Qb3GT46uzsFNBFH8CgsBEADGC/yx5ctcLQlB9hbq7KNqCDyZNoYu1HAB
 Hal3MuxPfoGKObEktawQPQaSTB5vNlDxKihezLnlT/PKjcXC2R1OjSDinlu5XNGc6mnky03q
 yymUPyiMtWhBBftezTRxWRslPaFWlg/h/Y1iDuOcklhpr7K1h1jRPCrf1yIoxbIpDbffnuyz
 kuto4AahRvBU4Js4sU7f/btU+h+e0AcLVzIhTVPIz7PM+Gk2LNzZ3/on4dnEc/qd+ZZFlOQ4
 KDN/hPqlwA/YJsKzAPX51L6Vv344pqTm6Z0f9M7YALB/11FO2nBB7zw7HAUYqJeHutCwxm7i
 BDNt0g9fhviNcJzagqJ1R7aPjtjBoYvKkbwNu5sWDpQ4idnsnck4YT6ctzN4I+6lfkU8zMzC
 gM2R4qqUXmxFIS4Bee+gnJi0Pc3KcBYBZsDK44FtM//5Cp9DrxRQOh19kNHBlxkmEb8kL/pw
 XIDcEq8MXzPBbxwHKJ3QRWRe5jPNpf8HCjnZz0XyJV0/4M1JvOua7IZftOttQ6KnM4m6WNIZ
 2ydg7dBhDa6iv1oKdL7wdp/rCulVWn8R7+3cRK95SnWiJ0qKDlMbIN8oGMhHdin8cSRYdmHK
 kTnvSGJNlkis5a+048o0C6jI3LozQYD/W9wq7MvgChgVQw1iEOB4u/3FXDEGulRVko6xCBU4
 SQARAQABwsFfBBgBAgAJBQJR/AoLAhsMAAoJEIredpCGysGyfvMQAIywR6jTqix6/fL0Ip8G
 jpt3uk//QNxGJE3ZkUNLX6N786vnEJvc1beCu6EwqD1ezG9fJKMl7F3SEgpYaiKEcHfoKGdh
 30B3Hsq44vOoxR6zxw2B/giADjhmWTP5tWQ9548N4VhIZMYQMQCkdqaueSL+8asp8tBNP+TJ
 PAIIANYvJaD8xA7sYUXGTzOXDh2THWSvmEWWmzok8er/u6ZKdS1YmZkUy8cfzrll/9hiGCTj
 u3qcaOM6i/m4hqtvsI1cOORMVwjJF4+IkC5ZBoeRs/xW5zIBdSUoC8L+OCyj5JETWTt40+lu
 qoqAF/AEGsNZTrwHJYu9rbHH260C0KYCNqmxDdcROUqIzJdzDKOrDmebkEVnxVeLJBIhYZUd
 t3Iq9hdjpU50TA6sQ3mZxzBdfRgg+vaj2DsJqI5Xla9QGKD+xNT6v14cZuIMZzO7w0DoojM4
 ByrabFsOQxGvE0w9Dch2BDSI2Xyk1zjPKxG1VNBQVx3flH37QDWpL2zlJikW29Ws86PHdthh
 Fm5PY8YtX576DchSP6qJC57/eAAe/9ztZdVAdesQwGb9hZHJc75B+VNm4xrh/PJO6c1THqdQ
 19WVJ+7rDx3PhVncGlbAOiiiE3NOFPJ1OQYxPKtpBUukAlOTnkKE6QcA4zckFepUkfmBV1wM
 Jg6OxFYd01z+a+oL
Message-ID: <408d3ce9-2510-2950-d28d-fdfe8ee41a54@oracle.com>
Date: Tue, 21 Jul 2020 17:48:45 -0400
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20200721000348.GA19610@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9689
 signatures=668680
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 mlxlogscore=999
 suspectscore=0
 bulkscore=0 malwarescore=0 adultscore=0 spamscore=0 mlxscore=0
 phishscore=0 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2006250000 definitions=main-2007210138
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9689
 signatures=668680
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 malwarescore=0
 suspectscore=0
 bulkscore=0 mlxscore=0 mlxlogscore=999 impostorscore=0 priorityscore=1501
 lowpriorityscore=0 phishscore=0 spamscore=0 adultscore=0 clxscore=1015
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2006250000
 definitions=main-2007210138
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: eduval@amazon.com, len.brown@intel.com, peterz@infradead.org,
 benh@kernel.crashing.org, x86@kernel.org, linux-mm@kvack.org, pavel@ucw.cz,
 hpa@zytor.com, sstabellini@kernel.org, kamatam@amazon.com, mingo@redhat.com,
 xen-devel@lists.xenproject.org, sblbir@amazon.com, axboe@kernel.dk,
 konrad.wilk@oracle.com, bp@alien8.de, tglx@linutronix.de, jgross@suse.com,
 netdev@vger.kernel.org, linux-pm@vger.kernel.org, rjw@rjwysocki.net,
 linux-kernel@vger.kernel.org, vkuznets@redhat.com, davem@davemloft.net,
 dwmw@amazon.co.uk, roger.pau@citrix.com
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>


>>>>>> +static int xen_setup_pm_notifier(void)
>>>>>> +{
>>>>>> +     if (!xen_hvm_domain())
>>>>>> +             return -ENODEV;
>>>>>>
>>>>>> I forgot --- what did we decide about non-x86 (i.e. ARM)?
>>>>> It would be great to support that; however, it's out of
>>>>> scope for this patch set.
>>>>> I’ll be happy to discuss it separately.
>>>>
>>>> I wasn't implying that this *should* work on ARM but rather whether this
>>>> will break ARM somehow (because xen_hvm_domain() is true there).
>>>>
>>>>
>>> Ok, makes sense. TBH, I haven't tested this part of the code on ARM, and
>>> the series only supports x86 guest hibernation.
>>> Moreover, this notifier is there to distinguish between the 2 PM
>>> events, PM SUSPEND and PM hibernation. Now, since we only care about PM
>>> HIBERNATION, I may just remove this code and rely on the "SHUTDOWN_SUSPEND"
>>> state. However, I may have to fix other patches in the series where this
>>> check may appear and cater them only to x86, right?
>>
>>
>> I don't know what would happen if ARM guest tries to handle hibernation
>> callbacks. The only ones that you are introducing are in block and net
>> fronts and that's arch-independent.
>>
>>
>> You do add a bunch of x86-specific code though (syscore ops), would
>> something similar be needed for ARM?
>>
>>
> I don't expect this to work out of the box on ARM. To start with, something
> similar will be needed for ARM too.
> We may still want to keep the driver code as-is.
> 
> I understand the concern here wrt ARM; however, the support is currently only
> proposed for x86 guests, and similar work could be carried out for ARM.
> Also, if regular hibernation works correctly on ARM, then all that is needed
> is to fix the Xen side of things.
> 
> I am not sure what could be done to achieve any assurances on the ARM side as
> far as this series is concerned.


If you are not sure what the effects are on ARM (or are sure that it won't
work), then I'd add an IS_ENABLED(CONFIG_X86) check, i.e.


if (!IS_ENABLED(CONFIG_X86) || !xen_hvm_domain())
	return -ENODEV;


(plus '|| xen_initial_domain()' for PVH dom0 as Roger suggested.)

-boris


From xen-devel-bounces@lists.xenproject.org Tue Jul 21 23:26:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jul 2020 23:26:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jy1eO-0001fF-R4; Tue, 21 Jul 2020 23:26:20 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tByU=BA=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jy1eN-0001es-D3
 for xen-devel@lists.xenproject.org; Tue, 21 Jul 2020 23:26:19 +0000
X-Inumbo-ID: 91179f9c-cba9-11ea-85f0-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 91179f9c-cba9-11ea-85f0-bc764e2007e4;
 Tue, 21 Jul 2020 23:26:13 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=al3zr13s4ap9TCuTT30xKSNOKZIGN3eHcSdajgbYUP4=; b=BazIFIAc0FM9f/knjOWXX8s6n
 NBsCvVfVSeFHTzr+QyRBQ9maCFKDuTW+//vhoo85dfBVWeXq305picYsE7eiHlJjfUOIUK2GmBtbl
 lFP5KYeIBDEbYtZNyM2Sn7flQvuDhZ/Qu03OZIkADupV411lstx6b2kGbDQlq8ai8lkBs=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jy1eG-0003J1-PQ; Tue, 21 Jul 2020 23:26:12 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jy1eG-0001M4-Dx; Tue, 21 Jul 2020 23:26:12 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jy1eG-0003Y6-D2; Tue, 21 Jul 2020 23:26:12 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-152068-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 152068: all pass - PUSHED
X-Osstest-Versions-This: ovmf=02539e900854488343a1efa435d4dded1ddd66a2
X-Osstest-Versions-That: ovmf=cb38ace647231076acfc0c5bdd21d3aff43e4f83
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 21 Jul 2020 23:26:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 152068 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/152068/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 02539e900854488343a1efa435d4dded1ddd66a2
baseline version:
 ovmf                 cb38ace647231076acfc0c5bdd21d3aff43e4f83

Last test of basis   152048  2020-07-20 15:10:26 Z    1 days
Testing same since   152068  2020-07-21 07:11:01 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Bob Feng <bob.c.feng@intel.com>
  Liu, Zhiguang <Zhiguang.Liu@intel.com>
  Pierre Gondois <pierre.gondois@arm.com>
  Rebecca Cran <rebecca@bsdio.com>
  Zhiguang Liu <zhiguang.liu@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   cb38ace647..02539e9008  02539e900854488343a1efa435d4dded1ddd66a2 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Tue Jul 21 23:35:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 21 Jul 2020 23:35:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jy1n5-0002a3-O7; Tue, 21 Jul 2020 23:35:19 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Z8Xg=BA=oracle.com=boris.ostrovsky@srs-us1.protection.inumbo.net>)
 id 1jy1n5-0002Zy-02
 for xen-devel@lists.xenproject.org; Tue, 21 Jul 2020 23:35:19 +0000
X-Inumbo-ID: d5392083-cbaa-11ea-85f0-bc764e2007e4
Received: from userp2130.oracle.com (unknown [156.151.31.86])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d5392083-cbaa-11ea-85f0-bc764e2007e4;
 Tue, 21 Jul 2020 23:35:18 +0000 (UTC)
Received: from pps.filterd (userp2130.oracle.com [127.0.0.1])
 by userp2130.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 06LNWjTD096546;
 Tue, 21 Jul 2020 23:34:11 GMT
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com;
 h=subject : to : cc :
 references : from : message-id : date : mime-version : in-reply-to :
 content-type : content-transfer-encoding; s=corp-2020-01-29;
 bh=Pcd5dDkRDUtwujHYEQEieCl683hZWEmAo2jUxumqU9Q=;
 b=w2Hl9QHW/V3rL6mzikOEy749JWLQY+15j1ohWGF2hxcqIijJvVQPZg6MwZYQR4DzTlMA
 W/Wc3lzR/P2WLdlX4JuSEZ8ivDTemYxLlERZLZpHCRL5oeTGXZBsAoClRH7zDCYvSxp9
 Benicduz3uwVxl1AiF8PxQJPU6HP1rBJcyEyz0IdNnhr4Mc15R5RV4/u47bBwgITJI0t
 dcst0MrfZwDTLUgxJ3+5YtW1S028oc21sOc3cuxdK/1xPvTzHb+1tx7FEGozbVZf+cQm
 6AAGCwzDmP8YwthKQRKtI8WigEHKx7U85dt5J0xccjOIwAJSTeDE78frRNx0lCcPZU6z VA== 
Received: from aserp3030.oracle.com (aserp3030.oracle.com [141.146.126.71])
 by userp2130.oracle.com with ESMTP id 32brgrg6xv-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=FAIL);
 Tue, 21 Jul 2020 23:34:11 +0000
Received: from pps.filterd (aserp3030.oracle.com [127.0.0.1])
 by aserp3030.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 06LNY8AF108165;
 Tue, 21 Jul 2020 23:34:11 GMT
Received: from userv0121.oracle.com (userv0121.oracle.com [156.151.31.72])
 by aserp3030.oracle.com with ESMTP id 32e9us8fc4-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Tue, 21 Jul 2020 23:34:10 +0000
Received: from abhmp0004.oracle.com (abhmp0004.oracle.com [141.146.116.10])
 by userv0121.oracle.com (8.14.4/8.13.8) with ESMTP id 06LNXtwI026828;
 Tue, 21 Jul 2020 23:33:55 GMT
Received: from [10.39.225.136] (/10.39.225.136)
 by default (Oracle Beehive Gateway v4.0)
 with ESMTP ; Tue, 21 Jul 2020 16:33:55 -0700
Subject: Re: [PATCH] x86/xen/time: set the X86_FEATURE_TSC_KNOWN_FREQ flag in
 xen_tsc_khz()
To: Hayato Ohhashi <o8@vmm.dev>, tglx@linutronix.de, mingo@redhat.com,
 bp@alien8.de, x86@kernel.org
References: <20200721161231.6019-1-o8@vmm.dev>
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Autocrypt: addr=boris.ostrovsky@oracle.com; keydata=
 xsFNBFH8CgsBEAC0KiOi9siOvlXatK2xX99e/J3OvApoYWjieVQ9232Eb7GzCWrItCzP8FUV
 PQg8rMsSd0OzIvvjbEAvaWLlbs8wa3MtVLysHY/DfqRK9Zvr/RgrsYC6ukOB7igy2PGqZd+M
 MDnSmVzik0sPvB6xPV7QyFsykEgpnHbvdZAUy/vyys8xgT0PVYR5hyvhyf6VIfGuvqIsvJw5
 C8+P71CHI+U/IhsKrLrsiYHpAhQkw+Zvyeml6XSi5w4LXDbF+3oholKYCkPwxmGdK8MUIdkM
 d7iYdKqiP4W6FKQou/lC3jvOceGupEoDV9botSWEIIlKdtm6C4GfL45RD8V4B9iy24JHPlom
 woVWc0xBZboQguhauQqrBFooHO3roEeM1pxXjLUbDtH4t3SAI3gt4dpSyT3EvzhyNQVVIxj2
 FXnIChrYxR6S0ijSqUKO0cAduenhBrpYbz9qFcB/GyxD+ZWY7OgQKHUZMWapx5bHGQ8bUZz2
 SfjZwK+GETGhfkvNMf6zXbZkDq4kKB/ywaKvVPodS1Poa44+B9sxbUp1jMfFtlOJ3AYB0WDS
 Op3d7F2ry20CIf1Ifh0nIxkQPkTX7aX5rI92oZeu5u038dHUu/dO2EcuCjl1eDMGm5PLHDSP
 0QUw5xzk1Y8MG1JQ56PtqReO33inBXG63yTIikJmUXFTw6lLJwARAQABzTNCb3JpcyBPc3Ry
 b3Zza3kgKFdvcmspIDxib3Jpcy5vc3Ryb3Zza3lAb3JhY2xlLmNvbT7CwXgEEwECACIFAlH8
 CgsCGwMGCwkIBwMCBhUIAgkKCwQWAgMBAh4BAheAAAoJEIredpCGysGyasEP/j5xApopUf4g
 9Fl3UxZuBx+oduuw3JHqgbGZ2siA3EA4bKwtKq8eT7ekpApn4c0HA8TWTDtgZtLSV5IdH+9z
 JimBDrhLkDI3Zsx2CafL4pMJvpUavhc5mEU8myp4dWCuIylHiWG65agvUeFZYK4P33fGqoaS
 VGx3tsQIAr7MsQxilMfRiTEoYH0WWthhE0YVQzV6kx4wj4yLGYPPBtFqnrapKKC8yFTpgjaK
 jImqWhU9CSUAXdNEs/oKVR1XlkDpMCFDl88vKAuJwugnixjbPFTVPyoC7+4Bm/FnL3iwlJVE
 qIGQRspt09r+datFzPqSbp5Fo/9m4JSvgtPp2X2+gIGgLPWp2ft1NXHHVWP19sPgEsEJXSr9
 tskM8ScxEkqAUuDs6+x/ISX8wa5Pvmo65drN+JWA8EqKOHQG6LUsUdJolFM2i4Z0k40BnFU/
 kjTARjrXW94LwokVy4x+ZYgImrnKWeKac6fMfMwH2aKpCQLlVxdO4qvJkv92SzZz4538az1T
 m+3ekJAimou89cXwXHCFb5WqJcyjDfdQF857vTn1z4qu7udYCuuV/4xDEhslUq1+GcNDjAhB
 nNYPzD+SvhWEsrjuXv+fDONdJtmLUpKs4Jtak3smGGhZsqpcNv8nQzUGDQZjuCSmDqW8vn2o
 hWwveNeRTkxh+2x1Qb3GT46uzsFNBFH8CgsBEADGC/yx5ctcLQlB9hbq7KNqCDyZNoYu1HAB
 Hal3MuxPfoGKObEktawQPQaSTB5vNlDxKihezLnlT/PKjcXC2R1OjSDinlu5XNGc6mnky03q
 yymUPyiMtWhBBftezTRxWRslPaFWlg/h/Y1iDuOcklhpr7K1h1jRPCrf1yIoxbIpDbffnuyz
 kuto4AahRvBU4Js4sU7f/btU+h+e0AcLVzIhTVPIz7PM+Gk2LNzZ3/on4dnEc/qd+ZZFlOQ4
 KDN/hPqlwA/YJsKzAPX51L6Vv344pqTm6Z0f9M7YALB/11FO2nBB7zw7HAUYqJeHutCwxm7i
 BDNt0g9fhviNcJzagqJ1R7aPjtjBoYvKkbwNu5sWDpQ4idnsnck4YT6ctzN4I+6lfkU8zMzC
 gM2R4qqUXmxFIS4Bee+gnJi0Pc3KcBYBZsDK44FtM//5Cp9DrxRQOh19kNHBlxkmEb8kL/pw
 XIDcEq8MXzPBbxwHKJ3QRWRe5jPNpf8HCjnZz0XyJV0/4M1JvOua7IZftOttQ6KnM4m6WNIZ
 2ydg7dBhDa6iv1oKdL7wdp/rCulVWn8R7+3cRK95SnWiJ0qKDlMbIN8oGMhHdin8cSRYdmHK
 kTnvSGJNlkis5a+048o0C6jI3LozQYD/W9wq7MvgChgVQw1iEOB4u/3FXDEGulRVko6xCBU4
 SQARAQABwsFfBBgBAgAJBQJR/AoLAhsMAAoJEIredpCGysGyfvMQAIywR6jTqix6/fL0Ip8G
 jpt3uk//QNxGJE3ZkUNLX6N786vnEJvc1beCu6EwqD1ezG9fJKMl7F3SEgpYaiKEcHfoKGdh
 30B3Hsq44vOoxR6zxw2B/giADjhmWTP5tWQ9548N4VhIZMYQMQCkdqaueSL+8asp8tBNP+TJ
 PAIIANYvJaD8xA7sYUXGTzOXDh2THWSvmEWWmzok8er/u6ZKdS1YmZkUy8cfzrll/9hiGCTj
 u3qcaOM6i/m4hqtvsI1cOORMVwjJF4+IkC5ZBoeRs/xW5zIBdSUoC8L+OCyj5JETWTt40+lu
 qoqAF/AEGsNZTrwHJYu9rbHH260C0KYCNqmxDdcROUqIzJdzDKOrDmebkEVnxVeLJBIhYZUd
 t3Iq9hdjpU50TA6sQ3mZxzBdfRgg+vaj2DsJqI5Xla9QGKD+xNT6v14cZuIMZzO7w0DoojM4
 ByrabFsOQxGvE0w9Dch2BDSI2Xyk1zjPKxG1VNBQVx3flH37QDWpL2zlJikW29Ws86PHdthh
 Fm5PY8YtX576DchSP6qJC57/eAAe/9ztZdVAdesQwGb9hZHJc75B+VNm4xrh/PJO6c1THqdQ
 19WVJ+7rDx3PhVncGlbAOiiiE3NOFPJ1OQYxPKtpBUukAlOTnkKE6QcA4zckFepUkfmBV1wM
 Jg6OxFYd01z+a+oL
Message-ID: <cd9ce52e-6026-a115-7f3e-405e45c3e20b@oracle.com>
Date: Tue, 21 Jul 2020 19:33:44 -0400
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20200721161231.6019-1-o8@vmm.dev>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
Content-Language: en-US
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9689
 signatures=668680
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 adultscore=0
 mlxscore=0 phishscore=0
 bulkscore=0 malwarescore=0 suspectscore=0 mlxlogscore=999 spamscore=0
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2006250000
 definitions=main-2007210147
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9689
 signatures=668680
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 malwarescore=0
 bulkscore=0 spamscore=0
 impostorscore=0 suspectscore=0 adultscore=0 clxscore=1011 mlxlogscore=999
 priorityscore=1501 phishscore=0 lowpriorityscore=0 mlxscore=0
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2006250000
 definitions=main-2007210147
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: jgross@suse.com, xen-devel@lists.xenproject.org, sstabellini@kernel.org,
 linux-kernel@vger.kernel.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 7/21/20 12:12 PM, Hayato Ohhashi wrote:
> If the TSC frequency is known from the pvclock page,
> the TSC frequency does not need to be recalibrated.
> We can avoid recalibration by setting X86_FEATURE_TSC_KNOWN_FREQ.
>
> Signed-off-by: Hayato Ohhashi <o8@vmm.dev>


Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>




From xen-devel-bounces@lists.xenproject.org Wed Jul 22 00:18:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jul 2020 00:18:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jy2T1-0006hn-Ni; Wed, 22 Jul 2020 00:18:39 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=0kdp=BB=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1jy2Sz-0006hi-Nv
 for xen-devel@lists.xenproject.org; Wed, 22 Jul 2020 00:18:37 +0000
X-Inumbo-ID: e296814c-cbb0-11ea-85f8-bc764e2007e4
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e296814c-cbb0-11ea-85f8-bc764e2007e4;
 Wed, 22 Jul 2020 00:18:36 +0000 (UTC)
Received: from localhost (c-67-164-102-47.hsd1.ca.comcast.net [67.164.102.47])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256
 bits)) (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 04D5F206F2;
 Wed, 22 Jul 2020 00:18:34 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
 s=default; t=1595377116;
 bh=VJtScx2bVukP/mlcrClwkoR4ksqhOSgVTh+yFH7IWqM=;
 h=Date:From:To:cc:Subject:In-Reply-To:References:From;
 b=q/rbNlzMTu2d7h5yLJQ8MI4vWYi2LX7ozhDoYhAUPEoDZ8904PyP/6DGkZPlaa7h+
 o8Fhe9tLoc59rwBt81rS8ZNb2x+kajIYiSDt6wqdK85Qfy3Jg1VyVaAQUbxbsTZyIp
 xLurlKmXOP87yAI9egGiQP25yNvM/MDfjaezUFq0=
Date: Tue, 21 Jul 2020 17:18:34 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Subject: Re: [PATCH v2 01/11] xen/manage: keep track of the on-going suspend
 mode
In-Reply-To: <408d3ce9-2510-2950-d28d-fdfe8ee41a54@oracle.com>
Message-ID: <alpine.DEB.2.21.2007211640500.17562@sstabellini-ThinkPad-T480s>
References: <cover.1593665947.git.anchalag@amazon.com>
 <20200702182136.GA3511@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
 <50298859-0d0e-6eb0-029b-30df2a4ecd63@oracle.com>
 <20200715204943.GB17938@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
 <0ca3c501-e69a-d2c9-a24c-f83afd4bdb8c@oracle.com>
 <20200717191009.GA3387@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
 <5464f384-d4b4-73f0-d39e-60ba9800d804@oracle.com>
 <20200721000348.GA19610@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
 <408d3ce9-2510-2950-d28d-fdfe8ee41a54@oracle.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: multipart/mixed; BOUNDARY="8323329-219213542-1595374972=:17562"
Content-ID: <alpine.DEB.2.21.2007211643430.17562@sstabellini-ThinkPad-T480s>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: eduval@amazon.com, len.brown@intel.com, peterz@infradead.org,
 benh@kernel.crashing.org, x86@kernel.org, linux-mm@kvack.org, pavel@ucw.cz,
 hpa@zytor.com, sstabellini@kernel.org, kamatam@amazon.com, mingo@redhat.com,
 xen-devel@lists.xenproject.org, sblbir@amazon.com, axboe@kernel.dk,
 konrad.wilk@oracle.com, Anchal Agarwal <anchalag@amazon.com>, bp@alien8.de,
 tglx@linutronix.de, jgross@suse.com, netdev@vger.kernel.org,
 linux-pm@vger.kernel.org, rjw@rjwysocki.net, linux-kernel@vger.kernel.org,
 vkuznets@redhat.com, davem@davemloft.net, dwmw@amazon.co.uk,
 roger.pau@citrix.com
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--8323329-219213542-1595374972=:17562
Content-Type: text/plain; CHARSET=UTF-8
Content-Transfer-Encoding: 8BIT
Content-ID: <alpine.DEB.2.21.2007211643431.17562@sstabellini-ThinkPad-T480s>

On Tue, 21 Jul 2020, Boris Ostrovsky wrote:
> >>>>>> +static int xen_setup_pm_notifier(void)
> >>>>>> +{
> >>>>>> +     if (!xen_hvm_domain())
> >>>>>> +             return -ENODEV;
> >>>>>>
> >>>>>> I forgot --- what did we decide about non-x86 (i.e. ARM)?
> >>>>> It would be great to support that however, its  out of
> >>>>> scope for this patch set.
> >>>>> I’ll be happy to discuss it separately.
> >>>>
> >>>> I wasn't implying that this *should* work on ARM but rather whether this
> >>>> will break ARM somehow (because xen_hvm_domain() is true there).
> >>>>
> >>>>
> >>> Ok makes sense. TBH, I haven't tested this part of the code on ARM, and the
> >>> series was only meant to support x86 guest hibernation.
> >>> Moreover, this notifier is there to distinguish between 2 PM
> >>> events PM SUSPEND and PM hibernation. Now since we only care about PM
> >>> HIBERNATION I may just remove this code and rely on "SHUTDOWN_SUSPEND" state.
> >>> However, I may have to fix other patches in the series where this check may
> >>> appear and cater it only for x86 right?
> >>
> >>
> >> I don't know what would happen if ARM guest tries to handle hibernation
> >> callbacks. The only ones that you are introducing are in block and net
> >> fronts and that's arch-independent.
> >>
> >>
> >> You do add a bunch of x86-specific code though (syscore ops), would
> >> something similar be needed for ARM?
> >>
> >>
> > I don't expect this to work out of the box on ARM. To start with something
> > similar will be needed for ARM too.
> > We may still want to keep the driver code as-is.
> > 
> > I understand the concern here wrt ARM, however, currently the support is only
> > proposed for x86 guests here and similar work could be carried out for ARM.
> > Also, if regular hibernation works correctly on ARM, then all that is needed
> > is to fix the Xen side of things.
> > 
> > I am not sure what could be done to achieve any assurances on arm side as far as
> > this series is concerned.

Just to clarify: new features don't need to work on ARM or cause any
additional effort on your part to make them work on ARM. The patch series
only needs to avoid breaking existing code paths (on ARM and on any other
platform). It should also not make it overly difficult to implement the
ARM side of things (if there is one) at some point in the future.

FYI drivers/xen/manage.c is compiled and working on ARM today; however,
Xen suspend/resume is not supported there. I don't know for sure whether
guest-initiated hibernation works because I have not tested it.


 
> If you are not sure what the effects are (or sure that it won't work) on
> ARM then I'd add IS_ENABLED(CONFIG_X86) check, i.e.
> 
> 
> if (!IS_ENABLED(CONFIG_X86) || !xen_hvm_domain())
> 	return -ENODEV;

That is a good principle, and thanks for suggesting it. However, in this
specific case there is nothing in the patch that doesn't work on ARM.
From an ARM perspective I think we should enable it and register
&xen_pm_notifier_block.

Given that all guests on ARM are HVM guests, it should work fine as is.


I took a quick look at the rest of the series and everything looks fine
to me from an ARM perspective. I cannot imagine that the new freeze,
thaw, and restore callbacks for net and block are going to cause any
trouble on ARM. The two main x86-specific functions are
xen_syscore_suspend/resume, and they look trivial to implement on ARM (in
the sense that they are likely going to look exactly the same).


One question for Anchal: what happens if you trigger a hibernation with
the new callbacks in place but without xen_syscore_suspend/resume?

Is it any worse than having no freeze, thaw, and restore callbacks at all
and trying to hibernate?
--8323329-219213542-1595374972=:17562--


From xen-devel-bounces@lists.xenproject.org Wed Jul 22 00:38:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jul 2020 00:38:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jy2ld-0008Ow-Cc; Wed, 22 Jul 2020 00:37:53 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=q/Qh=BB=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jy2lc-0008Or-A9
 for xen-devel@lists.xenproject.org; Wed, 22 Jul 2020 00:37:52 +0000
X-Inumbo-ID: 90c6b9c4-cbb3-11ea-a16c-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 90c6b9c4-cbb3-11ea-a16c-12813bfff9fa;
 Wed, 22 Jul 2020 00:37:47 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=/5G9zVl9rHXX7uG3OC0agCvkkT8vTAd+uTtjxjA9y7w=; b=UosSSBje1futip4S2KoLhQJur
 GefDJFJKwOiBtpE6OCE3MZ7jbKd24okn0VYZuWzh60Z3uvs46EbqYY/zfLpLOK0Uk87BD5qD0cfWB
 BU8sBJhTKCbUe0IlOshPpXDKOig40+3L00Ko5urAZ+fgVvAfZeg61iimXIib0M71YbjQ8=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jy2lX-0005Ny-7s; Wed, 22 Jul 2020 00:37:47 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jy2lW-0005Om-Rd; Wed, 22 Jul 2020 00:37:46 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jy2lW-0006xx-R0; Wed, 22 Jul 2020 00:37:46 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-152067-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 152067: regressions - trouble: fail/pass/starved
X-Osstest-Failures: xen-unstable:test-amd64-amd64-dom0pvh-xl-intel:guest-localmigrate/x10:fail:regression
 xen-unstable:test-amd64-amd64-examine:memdisk-try-append:fail:regression
 xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 xen-unstable:test-amd64-amd64-qemuu-freebsd11-amd64:hosts-allocate:starved:nonblocking
 xen-unstable:test-amd64-amd64-qemuu-freebsd12-amd64:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This: xen=9ffdda96d9e7c3d9c7a5bbe2df6ab30f63927542
X-Osstest-Versions-That: xen=8c4532f19d6925538fb0c938f7de9a97da8c5c3b
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 22 Jul 2020 00:37:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 152067 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/152067/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-dom0pvh-xl-intel 18 guest-localmigrate/x10 fail REGR. vs. 152045
 test-amd64-amd64-examine      4 memdisk-try-append       fail REGR. vs. 152045

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 152045
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 152045
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 152045
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 152045
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 152045
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 152045
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 152045
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 152045
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop             fail like 152045
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-amd64-amd64-qemuu-freebsd11-amd64  2 hosts-allocate           starved n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  2 hosts-allocate           starved n/a

version targeted for testing:
 xen                  9ffdda96d9e7c3d9c7a5bbe2df6ab30f63927542
baseline version:
 xen                  8c4532f19d6925538fb0c938f7de9a97da8c5c3b

Last test of basis   152045  2020-07-20 13:36:39 Z    1 days
Testing same since   152067  2020-07-21 06:59:07 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Stefano Stabellini <sstabellini@kernel.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       starved 
 test-amd64-amd64-qemuu-freebsd12-amd64                       starved 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            fail    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 9ffdda96d9e7c3d9c7a5bbe2df6ab30f63927542
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Jul 20 17:54:52 2020 +0100

    docs: Replace non-UTF-8 character in hypfs-paths.pandoc
    
    From the docs cronjob on xenbits:
    
      /usr/bin/pandoc --number-sections --toc --standalone misc/hypfs-paths.pandoc --output html/misc/hypfs-paths.html
      pandoc: Cannot decode byte '\x92': Data.Text.Internal.Encoding.decodeUtf8: Invalid UTF-8 stream
      make: *** [Makefile:236: html/misc/hypfs-paths.html] Error 1
    
    Fixes: 5a4a411bde4 ("docs: specify stability of hypfs path documentation")
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Release-acked-by: Paul Durrant <paul@xen.org>

commit 6720345aaf82fc76dca084f3f7a577062f5ff0f3
Author: Jan Beulich <jbeulich@suse.com>
Date:   Wed Jul 15 12:39:06 2020 +0200

    Arm: prune #include-s needed by domain.h
    
    asm/domain.h is a dependency of xen/sched.h, and hence should not itself
    include xen/sched.h. Nor should any of the other #include-s used by it.
    While at it, also drop two other #include-s that aren't needed by this
    particular header.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Stefano Stabellini <sstabellini@kernel.org>

commit 5a4a411bde4f73ff8ce43d6e52b77302973e8f68
Author: Juergen Gross <jgross@suse.com>
Date:   Mon Jul 20 13:38:00 2020 +0200

    docs: specify stability of hypfs path documentation
    
    In docs/misc/hypfs-paths.pandoc the supported paths in the hypervisor
    file system are specified. Make it more clear that path availability
    might change, e.g. due to scope widening or narrowing (e.g. being
    limited to a specific architecture).
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>
    Release-acked-by: Paul Durrant <paul@xen.org>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Wed Jul 22 04:01:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jul 2020 04:01:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jy5wB-0001kG-Gy; Wed, 22 Jul 2020 04:00:59 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=SEGq=BB=xilinx.com=woods@srs-us1.protection.inumbo.net>)
 id 1jy5wA-0001k6-BC
 for xen-devel@lists.xenproject.org; Wed, 22 Jul 2020 04:00:58 +0000
X-Inumbo-ID: f1ee473c-cbcf-11ea-a17c-12813bfff9fa
Received: from NAM11-CO1-obe.outbound.protection.outlook.com (unknown
 [40.107.220.63]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f1ee473c-cbcf-11ea-a17c-12813bfff9fa;
 Wed, 22 Jul 2020 04:00:57 +0000 (UTC)
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=h2f08OYIqR80XD+kfUIXmKKDiMxbomLiJJUOuvbBF6qLawwVhEEfkHdxGKzT2HiUfPEzBjxE/TPuZAitRKhepZX5mKJzHCa4t33a+ngOPtCZ2otRbOe/6/FDTHrVNziqDrdrBXWZDUV9iiIKBnjpLXtwn5dZzOd8J+T71wUORFWYgvWY0fOczoyRsmLGz6TJJY2oQd5alvsf613bfDm6E1AlMCB03hjbMuHnA/lzyzXzA3tDG4Qoln1fCPC0Xht4AUZ2gQnzDsyMHUvUuIVgVPsN3/eKHcR8h1n530rJKcKCiSCOiNGEqUwuqN5XsUZXRKtglm9HHRNi4ZzXyUSGVQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; 
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=EVUvECXV5L4nQYMNSDz3dm1kjTIFTD5CSE0XcToz2fQ=;
 b=mp6iMU4TdD/zV4vIBq8NtOTK7UIyadv2mPTRr7XQmv0+KN7afW63Wo83r8AHCjT/g3trQ1RGdDv9QpoGGB0zSP3T0DYDJnhzPlaVM8asARGXHaxg96hbouT5QxzVuH2oVyIjt7ckjvsScNEWctlZTn8LRfDtZ36zsyIdloAtckkkiLINj3sa6cBV1XdU8QPAAV2xaAkxbnYZiwIYP+H9mQDDVlhMwPbmqS2q93SEJ+Wmlb6Rrzrsaniye9zJfBGJ6efiFN92RFNuNQz1DkAaD1HtFSAqkZsB2DMI190wFwdWV84POqXlg6AMopsMw+xNy8egO6MYWmQopPHSjTiUdA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 149.199.60.83) smtp.rcpttodomain=epam.com smtp.mailfrom=xilinx.com;
 dmarc=bestguesspass action=none header.from=xilinx.com; dkim=none (message
 not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=xilinx.onmicrosoft.com; s=selector2-xilinx-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=EVUvECXV5L4nQYMNSDz3dm1kjTIFTD5CSE0XcToz2fQ=;
 b=JlacmhF3T1R0x7sfklGmGj73mtkaorE1OsKaiZR95Qi/8qk8rakFSvU2N4KMtmz9NuFwQKXqYEGA0FFsZAXgj4+CHqL022Dq3z1FeKlMyGtemGYQGt5HToJmcXuu+XZ7IfoRkGOnULb2HjOnAbSMgSHynpixe61gAZzSkkV7Y+o=
Received: from CY4PR20CA0042.namprd20.prod.outlook.com (2603:10b6:903:cb::28)
 by DM6PR02MB5849.namprd02.prod.outlook.com (2603:10b6:5:156::27) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3195.17; Wed, 22 Jul
 2020 04:00:54 +0000
Received: from CY1NAM02FT030.eop-nam02.prod.protection.outlook.com
 (2603:10b6:903:cb:cafe::73) by CY4PR20CA0042.outlook.office365.com
 (2603:10b6:903:cb::28) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3216.21 via Frontend
 Transport; Wed, 22 Jul 2020 04:00:54 +0000
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 149.199.60.83)
 smtp.mailfrom=xilinx.com; epam.com; dkim=none (message not signed)
 header.d=none;epam.com; dmarc=bestguesspass action=none
 header.from=xilinx.com;
Received-SPF: Pass (protection.outlook.com: domain of xilinx.com designates
 149.199.60.83 as permitted sender) receiver=protection.outlook.com;
 client-ip=149.199.60.83; helo=xsj-pvapsmtpgw01;
Received: from xsj-pvapsmtpgw01 (149.199.60.83) by
 CY1NAM02FT030.mail.protection.outlook.com (10.152.75.163) with Microsoft SMTP
 Server id 15.20.3216.10 via Frontend Transport; Wed, 22 Jul 2020 04:00:54
 +0000
Received: from [149.199.38.66] (port=50306 helo=smtp.xilinx.com)
 by xsj-pvapsmtpgw01 with esmtp (Exim 4.90)
 (envelope-from <brian.woods@xilinx.com>)
 id 1jy5uH-0007kn-TK; Tue, 21 Jul 2020 20:59:01 -0700
Received: from [127.0.0.1] (helo=localhost)
 by xsj-pvapsmtp01 with smtp (Exim 4.63)
 (envelope-from <brian.woods@xilinx.com>)
 id 1jy5w5-0002zS-T1; Tue, 21 Jul 2020 21:00:53 -0700
Received: from xsj-pvapsmtp01 (smtp-fallback.xilinx.com [149.199.38.66] (may
 be forged))
 by xsj-smtp-dlp1.xlnx.xilinx.com (8.13.8/8.13.1) with ESMTP id 06M40lJX012872; 
 Tue, 21 Jul 2020 21:00:47 -0700
Received: from [172.19.2.62] (helo=xsjwoods50.xilinx.com)
 by xsj-pvapsmtp01 with esmtp (Exim 4.63)
 (envelope-from <brian.woods@xilinx.com>)
 id 1jy5vz-0002u2-Mf; Tue, 21 Jul 2020 21:00:47 -0700
From: Brian Woods <brian.woods@xilinx.com>
To: xen-devel <xen-devel@lists.xenproject.org>
Subject: [RFC v2 2/2] arm,smmu: add support for generic DT bindings
Date: Tue, 21 Jul 2020 21:00:31 -0700
Message-Id: <1595390431-24805-3-git-send-email-brian.woods@xilinx.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1595390431-24805-1-git-send-email-brian.woods@xilinx.com>
References: <1595390431-24805-1-git-send-email-brian.woods@xilinx.com>
X-RCIS-Action: ALLOW
X-TM-AS-Product-Ver: IMSS-7.1.0.1224-8.2.0.1013-23620.005
X-TM-AS-User-Approved-Sender: Yes;Yes
X-EOPAttributedMessage: 0
X-MS-Office365-Filtering-HT: Tenant
X-Forefront-Antispam-Report: CIP:149.199.60.83; CTRY:US; LANG:en; SCL:1; SRV:;
 IPV:NLI; SFV:NSPM; H:xsj-pvapsmtpgw01; PTR:unknown-60-83.xilinx.com; CAT:NONE;
 SFTY:;
 SFS:(376002)(346002)(39850400004)(396003)(136003)(46966005)(7696005)(316002)(426003)(9786002)(4326008)(107886003)(83380400001)(186003)(44832011)(336012)(47076004)(54906003)(2616005)(8676002)(5660300002)(70206006)(26005)(6916009)(36756003)(478600001)(2906002)(8936002)(86362001)(82740400003)(82310400002)(81166007)(6666004)(70586007)(356005)(41533002)(142933001)(42866002)(21314003);
 DIR:OUT; SFP:1101; 
X-MS-PublicTrafficType: Email
MIME-Version: 1.0
Content-Type: text/plain
X-MS-Office365-Filtering-Correlation-Id: 100c8d46-da75-4a23-5ae1-08d82df3d4ab
X-MS-TrafficTypeDiagnostic: DM6PR02MB5849:
X-Microsoft-Antispam-PRVS: <DM6PR02MB58498124EA817EAF686ED9BAD7790@DM6PR02MB5849.namprd02.prod.outlook.com>
X-Auto-Response-Suppress: DR, RN, NRN, OOF, AutoReply
X-MS-Oob-TLC-OOBClassifiers: OLM:1091;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: azwg7/hIzbt5KF3sK4i/FARUUAtfxD7R27j2LBDdT7GqIzgwuP3hzgD3QDzjm2rb+zSWmEA0QPwBC1b57/olVZi1GKMI9Ejjrad1UbF8O+fqgcPE0o+U6xEf3LALZFZ77zGp+pjFcy2yUTvPUXadrHlKB9vyZjNDGcQCyP/vqc4fPae3WUa+X2GM1Bg5S9zLK3anZn/4fEBjJ6oWlCn4ET1MhSozJgoj2cm3oNtHR+J0n+d4y6La4ZkQzdkUAIQoLWYD0xoaSJQAyi+VngPVkYIOAhTuH/QZUChzPTTi9ZPP+Jgm5829iNYRIwILRsGrni/25PeWD/msqhM7WcdHwl0N1T4lSq0/5JCM3gODrjdh966eQzGppQZWO9mqWksZizh99CdnIduL8LlGVnZaagflmXx4PC4JJ1z486dn2RZP1kCiUkjgHCfVkLOOf+XH3GBZlPxopbjdT87lSpPe/BEJdcYMeZ2HppShkuei7FZksx/HdOUONo7NNO5QIDu/
X-OriginatorOrg: xilinx.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 22 Jul 2020 04:00:54.1157 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 100c8d46-da75-4a23-5ae1-08d82df3d4ab
X-MS-Exchange-CrossTenant-Id: 657af505-d5df-48d0-8300-c31994686c5c
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=657af505-d5df-48d0-8300-c31994686c5c; Ip=[149.199.60.83];
 Helo=[xsj-pvapsmtpgw01]
X-MS-Exchange-CrossTenant-AuthSource: CY1NAM02FT030.eop-nam02.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR02MB5849
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Brian Woods <brian.woods@xilinx.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Restructure some of the code and add supporting functions for generic
device tree (DT) binding support.  This allows current Linux device
trees to be used unmodified, apart from editing the chosen node to
enable Xen.

Signed-off-by: Brian Woods <brian.woods@xilinx.com>
---

Just realized that I'm fairly sure some further work is needed on the
SMRs.  Other than that, I think things should be okay.

v1 -> v2
    - Corrected how reading of DT is done with generic bindings
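For context, the generic binding this series consumes looks like the
following.  This is a hedged sketch: node names, compatibles, addresses
and the stream ID are made up for illustration, not taken from the
patch.

```dts
/* Generic IOMMU binding: the master references the SMMU directly. */
smmu: iommu@fd800000 {
	compatible = "arm,mmu-500";
	reg = <0x0 0xfd800000 0x0 0x20000>;
	#iommu-cells = <1>;
};

ethernet@ff0b0000 {
	compatible = "cdns,zynqmp-gem";
	reg = <0x0 0xff0b0000 0x0 0x1000>;
	iommus = <&smmu 0x874>;	/* SMMU phandle + stream ID */
};
```

The legacy binding instead lists masters on the SMMU node via
"mmu-masters"; supporting the "iommus" form above is what lets current
Linux device trees work with only the chosen node edited.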


 xen/drivers/passthrough/arm/smmu.c    | 102 +++++++++++++++++++++++++---------
 xen/drivers/passthrough/device_tree.c |  17 +-----
 2 files changed, 78 insertions(+), 41 deletions(-)

diff --git a/xen/drivers/passthrough/arm/smmu.c b/xen/drivers/passthrough/arm/smmu.c
index 7a5c6cd..25c090a 100644
--- a/xen/drivers/passthrough/arm/smmu.c
+++ b/xen/drivers/passthrough/arm/smmu.c
@@ -251,6 +251,8 @@ struct iommu_group
 	atomic_t ref;
 };
 
+static const struct arm_smmu_device *find_smmu(const struct device *dev);
+
 static struct iommu_group *iommu_group_alloc(void)
 {
 	struct iommu_group *group = xzalloc(struct iommu_group);
@@ -772,56 +774,104 @@ static int insert_smmu_master(struct arm_smmu_device *smmu,
 	return 0;
 }
 
-static int register_smmu_master(struct arm_smmu_device *smmu,
-				struct device *dev,
-				struct of_phandle_args *masterspec)
+static int arm_smmu_dt_add_device_legacy(struct arm_smmu_device *smmu,
+					 struct device *dev,
+					 struct iommu_fwspec *fwspec)
 {
-	int i, ret = 0;
+	int i;
 	struct arm_smmu_master *master;
+	struct device_node *dev_node = dev_get_dev_node(dev);
 
-	master = find_smmu_master(smmu, masterspec->np);
+	master = find_smmu_master(smmu, dev_node);
 	if (master) {
 		dev_err(dev,
 			"rejecting multiple registrations for master device %s\n",
-			masterspec->np->name);
+			dev_node->name);
 		return -EBUSY;
 	}
 
 	master = devm_kzalloc(dev, sizeof(*master), GFP_KERNEL);
 	if (!master)
 		return -ENOMEM;
-	master->of_node = masterspec->np;
 
-	ret = iommu_fwspec_init(&master->of_node->dev, smmu->dev);
-	if (ret) {
-		kfree(master);
-		return ret;
-	}
-	master->cfg.fwspec = dev_iommu_fwspec_get(&master->of_node->dev);
-
-	/* adding the ids here */
-	ret = iommu_fwspec_add_ids(&masterspec->np->dev,
-				   masterspec->args,
-				   masterspec->args_count);
-	if (ret)
-		return ret;
+	master->of_node = dev_node;
+	master->cfg.fwspec = fwspec;
 
 	/* Xen: Let Xen know that the device is protected by an SMMU */
-	dt_device_set_protected(masterspec->np);
+	dt_device_set_protected(dev_node);
 
 	if (!(smmu->features & ARM_SMMU_FEAT_STREAM_MATCH)) {
-		for (i = 0; i < master->cfg.fwspec->num_ids; ++i) {
-			if (masterspec->args[i] >= smmu->num_mapping_groups) {
+		for (i = 0; i < fwspec->num_ids; ++i) {
+			if (fwspec->ids[i] >= smmu->num_mapping_groups) {
 				dev_err(dev,
 					"stream ID for master device %s greater than maximum allowed (%d)\n",
-					masterspec->np->name, smmu->num_mapping_groups);
+					dev_node->name, smmu->num_mapping_groups);
 				return -ERANGE;
 			}
 		}
 	}
+
 	return insert_smmu_master(smmu, master);
 }
 
+static int arm_smmu_dt_add_device_generic(u8 devfn, struct device *dev)
+{
+	struct arm_smmu_device *smmu;
+	struct iommu_fwspec *fwspec;
+
+	fwspec = dev_iommu_fwspec_get(dev);
+	if (fwspec == NULL)
+		return -ENXIO;
+
+	smmu = (struct arm_smmu_device *) find_smmu(fwspec->iommu_dev);
+	if (smmu == NULL)
+		return -ENXIO;
+
+	return arm_smmu_dt_add_device_legacy(smmu, dev, fwspec);
+}
+
+static int arm_smmu_dt_xlate_generic(struct device *dev,
+				    const struct of_phandle_args *spec)
+{
+	uint32_t mask, fwid = 0;
+
+	if (spec->args_count > 0)
+		fwid |= ((SMR_ID_MASK << SMR_ID_SHIFT) & spec->args[0]) >> SMR_ID_SHIFT;
+
+	if (spec->args_count > 1)
+		fwid |= ((SMR_MASK_MASK << SMR_MASK_SHIFT) & spec->args[1]) >> SMR_MASK_SHIFT;
+	else if (!of_property_read_u32(spec->np, "stream-match-mask", &mask))
+		fwid |= ((SMR_MASK_MASK << SMR_MASK_SHIFT) & mask) >> SMR_MASK_SHIFT;
+
+	return iommu_fwspec_add_ids(dev,
+				    &fwid,
+				    1);
+}
+
+static int register_smmu_master(struct arm_smmu_device *smmu,
+				struct device *dev,
+				struct of_phandle_args *masterspec)
+{
+	int ret = 0;
+	struct iommu_fwspec *fwspec;
+
+	ret = iommu_fwspec_init(&masterspec->np->dev, smmu->dev);
+	if (ret)
+		return ret;
+
+	fwspec = dev_iommu_fwspec_get(&masterspec->np->dev);
+
+	ret = iommu_fwspec_add_ids(&masterspec->np->dev,
+				   masterspec->args,
+				   masterspec->args_count);
+	if (ret)
+		return ret;
+
+	return arm_smmu_dt_add_device_legacy(smmu,
+					     &masterspec->np->dev,
+					     fwspec);
+}
+
 static struct arm_smmu_device *find_smmu_for_device(struct device *dev)
 {
 	struct arm_smmu_device *smmu;
@@ -2743,6 +2793,7 @@ static void arm_smmu_iommu_domain_teardown(struct domain *d)
 static const struct iommu_ops arm_smmu_iommu_ops = {
     .init = arm_smmu_iommu_domain_init,
     .hwdom_init = arm_smmu_iommu_hwdom_init,
+    .add_device = arm_smmu_dt_add_device_generic,
     .teardown = arm_smmu_iommu_domain_teardown,
     .iotlb_flush = arm_smmu_iotlb_flush,
     .iotlb_flush_all = arm_smmu_iotlb_flush_all,
@@ -2750,9 +2801,10 @@ static const struct iommu_ops arm_smmu_iommu_ops = {
     .reassign_device = arm_smmu_reassign_dev,
     .map_page = arm_iommu_map_page,
     .unmap_page = arm_iommu_unmap_page,
+    .dt_xlate = arm_smmu_dt_xlate_generic,
 };
 
-static __init const struct arm_smmu_device *find_smmu(const struct device *dev)
+static const struct arm_smmu_device *find_smmu(const struct device *dev)
 {
 	struct arm_smmu_device *smmu;
 	bool found = false;
diff --git a/xen/drivers/passthrough/device_tree.c b/xen/drivers/passthrough/device_tree.c
index acf6b62..dd9cf65 100644
--- a/xen/drivers/passthrough/device_tree.c
+++ b/xen/drivers/passthrough/device_tree.c
@@ -158,22 +158,7 @@ int iommu_add_dt_device(struct dt_device_node *np)
          * these callback implemented.
          */
         if ( !ops->add_device || !ops->dt_xlate )
-        {
-            /*
-             * Some Device Trees may expose both legacy SMMU and generic
-             * IOMMU bindings together. However, the SMMU driver is only
-             * supporting the former and will protect them during the
-             * initialization. So we need to skip them and not return
-             * error here.
-             *
-             * XXX: This can be dropped when the SMMU is able to deal
-             * with generic bindings.
-             */
-            if ( dt_device_is_protected(np) )
-                return 0;
-            else
-                return -EINVAL;
-        }
+            return -EINVAL;
 
         if ( !dt_device_is_available(iommu_spec.np) )
             break;
-- 
2.7.4



From xen-devel-bounces@lists.xenproject.org Wed Jul 22 04:01:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jul 2020 04:01:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jy5wB-0001kO-Pn; Wed, 22 Jul 2020 04:00:59 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=SEGq=BB=xilinx.com=woods@srs-us1.protection.inumbo.net>)
 id 1jy5wA-0001kB-Sv
 for xen-devel@lists.xenproject.org; Wed, 22 Jul 2020 04:00:58 +0000
X-Inumbo-ID: f23470a4-cbcf-11ea-8613-bc764e2007e4
Received: from NAM10-MW2-obe.outbound.protection.outlook.com (unknown
 [40.107.94.67]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f23470a4-cbcf-11ea-8613-bc764e2007e4;
 Wed, 22 Jul 2020 04:00:57 +0000 (UTC)
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Ce2MugMj5Gi+e++p25NcYhkNyz+cJiWkYQhiEJx8Wkw3JKYLCCvIIROAxGF6wuMbcJ66UmDEiaSzoDClw3M5WAgk8W/1r6w+MAfuof/43qayzw4M/aPn+pWdtqlNxIpfJsX9woXS1chU0lf208qJN4a3tm+hf722J9/T84mgNcOkXhDmjpIrwRwjhWuzoWWh97OwJfP2g9C9fXeOLtF0DdcVDGJts3Z/uHePfgKe2oIXSKHcZc7qFjpNWC8pB8bXHBqyWbajEXHjqfA8f5m/kzOwXBSTmdBPkjr0m14pYi/UNjnN4VgyuT5dJYjCMLZCGsyOy3Xoy/+6ptZxrjzpIg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; 
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=lu6/sGI2fQO/GKysq+3npebONed5SiclapckpPkFhvQ=;
 b=UGqr855hhUf8FB47Rlkuhwm6DRwGcWrlfTajdO9gB/8RtrnP64ENiVFpj3do2ITz5hQqSuxF000LQmI//x8Q7tGj1F7xqHW1zP0BLCWenHxF1UqeD+hy0E9HPAHBnbAvK9ZJbimk8yW5nRk1+qSbnvjtb8r84YYHYdoXs8q89fv0th7ORtQbkMn8fxEtItbi9AqPoeIccjiKs2J4/hFHmvBjuxZjBxBkeIe2JDhUNVqNUOufwo0wE+GkM/OFc7PhLDyBbLUvVAhaZxyJg0ROmbuwY4zE3DMN3+nrMdTBo3AmqbFWMy6LMmpUP0mnFVXPMaGqx06I4N13sLOPFuCjCw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 149.199.60.83) smtp.rcpttodomain=epam.com smtp.mailfrom=xilinx.com;
 dmarc=bestguesspass action=none header.from=xilinx.com; dkim=none (message
 not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=xilinx.onmicrosoft.com; s=selector2-xilinx-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=lu6/sGI2fQO/GKysq+3npebONed5SiclapckpPkFhvQ=;
 b=rCAXgyTpl084xUzdaV54SjN/pHqGBHb/s53ZRHMS3bgkt/lt7dG8NbfyI5KEvobO3aSQk03zxOkXU4cQLTuhWtVQOoGSkoSEmYplcay8uHqunthpZ4cXoO/xo9Y1PrT1r9VKm1PPNcV4h/4DheqsclmUCX+PXtMYRKokLa0Naks=
Received: from CY4PR03CA0023.namprd03.prod.outlook.com (2603:10b6:903:33::33)
 by BYAPR02MB5717.namprd02.prod.outlook.com (2603:10b6:a03:122::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3174.20; Wed, 22 Jul
 2020 04:00:54 +0000
Received: from CY1NAM02FT038.eop-nam02.prod.protection.outlook.com
 (2603:10b6:903:33:cafe::8a) by CY4PR03CA0023.outlook.office365.com
 (2603:10b6:903:33::33) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3216.22 via Frontend
 Transport; Wed, 22 Jul 2020 04:00:54 +0000
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 149.199.60.83)
 smtp.mailfrom=xilinx.com; epam.com; dkim=none (message not signed)
 header.d=none;epam.com; dmarc=bestguesspass action=none
 header.from=xilinx.com;
Received-SPF: Pass (protection.outlook.com: domain of xilinx.com designates
 149.199.60.83 as permitted sender) receiver=protection.outlook.com;
 client-ip=149.199.60.83; helo=xsj-pvapsmtpgw01;
Received: from xsj-pvapsmtpgw01 (149.199.60.83) by
 CY1NAM02FT038.mail.protection.outlook.com (10.152.74.217) with Microsoft SMTP
 Server id 15.20.3216.10 via Frontend Transport; Wed, 22 Jul 2020 04:00:54
 +0000
Received: from [149.199.38.66] (port=50304 helo=smtp.xilinx.com)
 by xsj-pvapsmtpgw01 with esmtp (Exim 4.90)
 (envelope-from <brian.woods@xilinx.com>)
 id 1jy5uH-0007kk-Rs; Tue, 21 Jul 2020 20:59:01 -0700
Received: from [127.0.0.1] (helo=localhost)
 by xsj-pvapsmtp01 with smtp (Exim 4.63)
 (envelope-from <brian.woods@xilinx.com>)
 id 1jy5w5-0002zS-RT; Tue, 21 Jul 2020 21:00:53 -0700
Received: from xsj-pvapsmtp01 (mailhub.xilinx.com [149.199.38.66])
 by xsj-smtp-dlp1.xlnx.xilinx.com (8.13.8/8.13.1) with ESMTP id 06M40jFj012848; 
 Tue, 21 Jul 2020 21:00:45 -0700
Received: from [172.19.2.62] (helo=xsjwoods50.xilinx.com)
 by xsj-pvapsmtp01 with esmtp (Exim 4.63)
 (envelope-from <brian.woods@xilinx.com>)
 id 1jy5vx-0002u2-CN; Tue, 21 Jul 2020 21:00:45 -0700
From: Brian Woods <brian.woods@xilinx.com>
To: xen-devel <xen-devel@lists.xenproject.org>
Subject: [RFC v2 1/2] arm,smmu: switch to using iommu_fwspec functions
Date: Tue, 21 Jul 2020 21:00:30 -0700
Message-Id: <1595390431-24805-2-git-send-email-brian.woods@xilinx.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1595390431-24805-1-git-send-email-brian.woods@xilinx.com>
References: <1595390431-24805-1-git-send-email-brian.woods@xilinx.com>
X-RCIS-Action: ALLOW
X-TM-AS-Product-Ver: IMSS-7.1.0.1224-8.2.0.1013-23620.005
X-TM-AS-User-Approved-Sender: Yes;Yes
X-EOPAttributedMessage: 0
X-MS-Office365-Filtering-HT: Tenant
X-Forefront-Antispam-Report: CIP:149.199.60.83; CTRY:US; LANG:en; SCL:1; SRV:;
 IPV:NLI; SFV:NSPM; H:xsj-pvapsmtpgw01; PTR:unknown-60-83.xilinx.com; CAT:NONE;
 SFTY:;
 SFS:(376002)(39850400004)(136003)(346002)(396003)(46966005)(8936002)(83380400001)(356005)(336012)(9786002)(70206006)(81166007)(8676002)(86362001)(7696005)(70586007)(54906003)(186003)(2906002)(82740400003)(316002)(82310400002)(107886003)(44832011)(36756003)(5660300002)(426003)(47076004)(4326008)(6916009)(6666004)(26005)(2616005)(478600001)(142933001);
 DIR:OUT; SFP:1101; 
X-MS-PublicTrafficType: Email
MIME-Version: 1.0
Content-Type: text/plain
X-MS-Office365-Filtering-Correlation-Id: 8eb859d8-7d45-41d4-6a7a-08d82df3d4a4
X-MS-TrafficTypeDiagnostic: BYAPR02MB5717:
X-Microsoft-Antispam-PRVS: <BYAPR02MB57172D8E22BAD52F2CD08CC9D7790@BYAPR02MB5717.namprd02.prod.outlook.com>
X-Auto-Response-Suppress: DR, RN, NRN, OOF, AutoReply
X-MS-Oob-TLC-OOBClassifiers: OLM:773;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: UfJ+Z5jvDffqDkebW77K0D8lMwezIGgAwMF0Gf1X1vwe2qkGXqrxzFJOAsAhi5Unx+oR3f1tNLpDDkBO8ck8Tjn/6NNyPtuQeS4mB8lDRPEqeKM7XvjOWtoEBhvOP+rqak/UuBmxPbarF9B/5I+yrcKTPLMKXJh4BE+3Ls8gK/XCJxo73ELjgGtFTpGsAWRv8yIv3EJvwllqjqsUGXqu8yISO2GyytqyFbeqzUKNe2c8VIVxqbkU45JNcFtv+XpgkihHTasjBtRaHiRVhnTA/fZ+n+8PTrr2RqMYg9HPjgdUwfXPmcHw9DbwNIFiofTmO40doWi57MPsSvQ5nplFjqQi06RjeMcVKEoljB8FO6eb8JAwQGwEedXbJDrA3N84swoWxBUHJhJD/FbxQJR2CGnsxF0uiSbllUADOJOp+gc=
X-OriginatorOrg: xilinx.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 22 Jul 2020 04:00:54.0703 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 8eb859d8-7d45-41d4-6a7a-08d82df3d4a4
X-MS-Exchange-CrossTenant-Id: 657af505-d5df-48d0-8300-c31994686c5c
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=657af505-d5df-48d0-8300-c31994686c5c; Ip=[149.199.60.83];
 Helo=[xsj-pvapsmtpgw01]
X-MS-Exchange-CrossTenant-AuthSource: CY1NAM02FT038.eop-nam02.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BYAPR02MB5717
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Brian Woods <brian.woods@xilinx.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Modify the SMMU driver so that it uses the iommu_fwspec helper
functions.  This means both ARM IOMMU drivers will use the
iommu_fwspec helpers.

Signed-off-by: Brian Woods <brian.woods@xilinx.com>
---

I'm interested in whether combining the legacy and generic bindings
paths is worthwhile, or whether Xen plans to deprecate the legacy
bindings at some point.

v1 -> v2
    - removed MAX_MASTER_STREAMIDS
    - removed unneeded curly brackets

 xen/drivers/passthrough/arm/smmu.c    | 81 +++++++++++++++++++----------------
 xen/drivers/passthrough/device_tree.c |  3 ++
 2 files changed, 47 insertions(+), 37 deletions(-)

diff --git a/xen/drivers/passthrough/arm/smmu.c b/xen/drivers/passthrough/arm/smmu.c
index 94662a8..7a5c6cd 100644
--- a/xen/drivers/passthrough/arm/smmu.c
+++ b/xen/drivers/passthrough/arm/smmu.c
@@ -49,6 +49,7 @@
 #include <asm/atomic.h>
 #include <asm/device.h>
 #include <asm/io.h>
+#include <asm/iommu_fwspec.h>
 #include <asm/platform.h>
 
 /* Xen: The below defines are redefined within the file. Undef it */
@@ -302,9 +303,6 @@ static struct iommu_group *iommu_group_get(struct device *dev)
 
 /***** Start of Linux SMMU code *****/
 
-/* Maximum number of stream IDs assigned to a single device */
-#define MAX_MASTER_STREAMIDS		MAX_PHANDLE_ARGS
-
 /* Maximum number of context banks per SMMU */
 #define ARM_SMMU_MAX_CBS		128
 
@@ -597,8 +595,7 @@ struct arm_smmu_smr {
 };
 
 struct arm_smmu_master_cfg {
-	int				num_streamids;
-	u16				streamids[MAX_MASTER_STREAMIDS];
+	struct iommu_fwspec		*fwspec;
 	struct arm_smmu_smr		*smrs;
 };
 
@@ -779,7 +776,7 @@ static int register_smmu_master(struct arm_smmu_device *smmu,
 				struct device *dev,
 				struct of_phandle_args *masterspec)
 {
-	int i;
+	int i, ret = 0;
 	struct arm_smmu_master *master;
 
 	master = find_smmu_master(smmu, masterspec->np);
@@ -790,34 +787,37 @@ static int register_smmu_master(struct arm_smmu_device *smmu,
 		return -EBUSY;
 	}
 
-	if (masterspec->args_count > MAX_MASTER_STREAMIDS) {
-		dev_err(dev,
-			"reached maximum number (%d) of stream IDs for master device %s\n",
-			MAX_MASTER_STREAMIDS, masterspec->np->name);
-		return -ENOSPC;
-	}
-
 	master = devm_kzalloc(dev, sizeof(*master), GFP_KERNEL);
 	if (!master)
 		return -ENOMEM;
+	master->of_node = masterspec->np;
 
-	master->of_node			= masterspec->np;
-	master->cfg.num_streamids	= masterspec->args_count;
+	ret = iommu_fwspec_init(&master->of_node->dev, smmu->dev);
+	if (ret) {
+		kfree(master);
+		return ret;
+	}
+	master->cfg.fwspec = dev_iommu_fwspec_get(&master->of_node->dev);
+
+	/* adding the ids here */
+	ret = iommu_fwspec_add_ids(&masterspec->np->dev,
+				   masterspec->args,
+				   masterspec->args_count);
+	if (ret)
+		return ret;
 
 	/* Xen: Let Xen know that the device is protected by an SMMU */
 	dt_device_set_protected(masterspec->np);
 
-	for (i = 0; i < master->cfg.num_streamids; ++i) {
-		u16 streamid = masterspec->args[i];
-
-		if (!(smmu->features & ARM_SMMU_FEAT_STREAM_MATCH) &&
-		     (streamid >= smmu->num_mapping_groups)) {
-			dev_err(dev,
-				"stream ID for master device %s greater than maximum allowed (%d)\n",
-				masterspec->np->name, smmu->num_mapping_groups);
-			return -ERANGE;
+	if (!(smmu->features & ARM_SMMU_FEAT_STREAM_MATCH)) {
+		for (i = 0; i < master->cfg.fwspec->num_ids; ++i) {
+			if (masterspec->args[i] >= smmu->num_mapping_groups) {
+				dev_err(dev,
+					"stream ID for master device %s greater than maximum allowed (%d)\n",
+					masterspec->np->name, smmu->num_mapping_groups);
+				return -ERANGE;
+			}
 		}
-		master->cfg.streamids[i] = streamid;
 	}
 	return insert_smmu_master(smmu, master);
 }
@@ -1397,15 +1397,15 @@ static int arm_smmu_master_configure_smrs(struct arm_smmu_device *smmu,
 	if (cfg->smrs)
 		return -EEXIST;
 
-	smrs = kmalloc_array(cfg->num_streamids, sizeof(*smrs), GFP_KERNEL);
+	smrs = kmalloc_array(cfg->fwspec->num_ids, sizeof(*smrs), GFP_KERNEL);
 	if (!smrs) {
 		dev_err(smmu->dev, "failed to allocate %d SMRs\n",
-			cfg->num_streamids);
+			cfg->fwspec->num_ids);
 		return -ENOMEM;
 	}
 
 	/* Allocate the SMRs on the SMMU */
-	for (i = 0; i < cfg->num_streamids; ++i) {
+	for (i = 0; i < cfg->fwspec->num_ids; ++i) {
 		int idx = __arm_smmu_alloc_bitmap(smmu->smr_map, 0,
 						  smmu->num_mapping_groups);
 		if (IS_ERR_VALUE(idx)) {
@@ -1416,12 +1416,12 @@ static int arm_smmu_master_configure_smrs(struct arm_smmu_device *smmu,
 		smrs[i] = (struct arm_smmu_smr) {
 			.idx	= idx,
 			.mask	= 0, /* We don't currently share SMRs */
-			.id	= cfg->streamids[i],
+			.id	= cfg->fwspec->ids[i],
 		};
 	}
 
 	/* It worked! Now, poke the actual hardware */
-	for (i = 0; i < cfg->num_streamids; ++i) {
+	for (i = 0; i < cfg->fwspec->num_ids; ++i) {
 		u32 reg = SMR_VALID | smrs[i].id << SMR_ID_SHIFT |
 			  smrs[i].mask << SMR_MASK_SHIFT;
 		writel_relaxed(reg, gr0_base + ARM_SMMU_GR0_SMR(smrs[i].idx));
@@ -1448,7 +1448,7 @@ static void arm_smmu_master_free_smrs(struct arm_smmu_device *smmu,
 		return;
 
 	/* Invalidate the SMRs before freeing back to the allocator */
-	for (i = 0; i < cfg->num_streamids; ++i) {
+	for (i = 0; i < cfg->fwspec->num_ids; ++i) {
 		u8 idx = smrs[i].idx;
 
 		writel_relaxed(~SMR_VALID, gr0_base + ARM_SMMU_GR0_SMR(idx));
@@ -1471,10 +1471,10 @@ static int arm_smmu_domain_add_master(struct arm_smmu_domain *smmu_domain,
 	if (ret)
 		return ret == -EEXIST ? 0 : ret;
 
-	for (i = 0; i < cfg->num_streamids; ++i) {
+	for (i = 0; i < cfg->fwspec->num_ids; ++i) {
 		u32 idx, s2cr;
 
-		idx = cfg->smrs ? cfg->smrs[i].idx : cfg->streamids[i];
+		idx = cfg->smrs ? cfg->smrs[i].idx : cfg->fwspec->ids[i];
 		s2cr = S2CR_TYPE_TRANS |
 		       (smmu_domain->cfg.cbndx << S2CR_CBNDX_SHIFT);
 		writel_relaxed(s2cr, gr0_base + ARM_SMMU_GR0_S2CR(idx));
@@ -1499,8 +1499,8 @@ static void arm_smmu_domain_remove_master(struct arm_smmu_domain *smmu_domain,
 	 * that it can be re-allocated immediately.
 	 * Xen: Unlike Linux, any access to non-configured stream will fault.
 	 */
-	for (i = 0; i < cfg->num_streamids; ++i) {
-		u32 idx = cfg->smrs ? cfg->smrs[i].idx : cfg->streamids[i];
+	for (i = 0; i < cfg->fwspec->num_ids; ++i) {
+		u32 idx = cfg->smrs ? cfg->smrs[i].idx : cfg->fwspec->ids[i];
 
 		writel_relaxed(S2CR_TYPE_FAULT,
 			       gr0_base + ARM_SMMU_GR0_S2CR(idx));
@@ -1924,14 +1924,21 @@ static int arm_smmu_add_device(struct device *dev)
 			ret = -ENOMEM;
 			goto out_put_group;
 		}
+		cfg->fwspec = kzalloc(sizeof(struct iommu_fwspec), GFP_KERNEL);
+		if (!cfg->fwspec) {
+			kfree(cfg);
+			ret = -ENOMEM;
+			goto out_put_group;
+		}
+		iommu_fwspec_init(dev, smmu->dev);
 
-		cfg->num_streamids = 1;
+		cfg->fwspec->num_ids = 1;
 		/*
 		 * Assume Stream ID == Requester ID for now.
 		 * We need a way to describe the ID mappings in FDT.
 		 */
 		pci_for_each_dma_alias(pdev, __arm_smmu_get_pci_sid,
-				       &cfg->streamids[0]);
+				       &cfg->fwspec->ids[0]);
 		releasefn = __arm_smmu_release_pci_iommudata;
 	} else {
 		struct arm_smmu_master *master;
diff --git a/xen/drivers/passthrough/device_tree.c b/xen/drivers/passthrough/device_tree.c
index 999b831..acf6b62 100644
--- a/xen/drivers/passthrough/device_tree.c
+++ b/xen/drivers/passthrough/device_tree.c
@@ -140,6 +140,9 @@ int iommu_add_dt_device(struct dt_device_node *np)
     if ( !ops )
         return -EINVAL;
 
+    if ( dt_device_is_protected(np) )
+        return 0;
+
     if ( dev_iommu_fwspec_get(dev) )
         return -EEXIST;
 
-- 
2.7.4



From xen-devel-bounces@lists.xenproject.org Wed Jul 22 04:01:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jul 2020 04:01:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jy5w0-0001jk-88; Wed, 22 Jul 2020 04:00:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=SEGq=BB=xilinx.com=woods@srs-us1.protection.inumbo.net>)
 id 1jy5vy-0001je-HG
 for xen-devel@lists.xenproject.org; Wed, 22 Jul 2020 04:00:46 +0000
X-Inumbo-ID: ead26da3-cbcf-11ea-8613-bc764e2007e4
Received: from NAM11-BN8-obe.outbound.protection.outlook.com (unknown
 [40.107.236.86]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ead26da3-cbcf-11ea-8613-bc764e2007e4;
 Wed, 22 Jul 2020 04:00:45 +0000 (UTC)
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=VT1oRzAdGKkb51iBxvW1zfuuPciEuRCBTB+/EhqZuivualIlH3AJWCBCtE3my58VqUn+eenIXmHEv9XsjPrJoiwR4MSDyZ3qg+C9wSK/BuQdvE9jWk30ZoC9u372DQ2XN+bOwmRV/11AsEXD91L9vuVIYTiuRBRBCerbnVIWdqP+Cj6HC3ftsVf3p7dKTbrn9yvpzCRCIIp8deUcQP+l+jat2vJpWA3wdwOGEbmy5eYIHfe7gDtYTlEn8WUtKYC6SbMC+jA0Ouq6MXDVvnvPZXhK1Dn5IM/qLABD3bBmiKvBtEdxbsp/LJLgIrc+Fx6azwERHl7nM3e/uwNID1Qsow==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; 
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=0uM6KTWH/QLynHj5webgiJtYpsdqo/HTl87VcEalPow=;
 b=lZj3fcLPNbVPq5VJSwzABFwR5Xr4Dug2O5aH2w179WXVbdph+6+/jDbN56TnaY3s4xg65NK8DlRMlPKTBIyoDwFhDtlmDzhWEFrgdYm8X8R+CPwS6UJO4Eq9GrgoLpaWACx7xKtYn6Wj7Rb296hZ0xpf2alFz91I7ONJq8MhMYPAvN71A+M/8nBnqsNI/Qs5Jxn3B5nQM4XppNQZB/a4+AsUo24kJhcfqhYRypE8MIIiK+6bvRFJaXBEGclLH73ff64Dxbg1E/WvGxFuXXJ9ZsC643gvrtIgAOeVyhOL+/Fx5UyAbZwUmax4+ZLjHYhucAbp+IOoxG5I2p97ML3Eqg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 149.199.60.83) smtp.rcpttodomain=epam.com smtp.mailfrom=xilinx.com;
 dmarc=bestguesspass action=none header.from=xilinx.com; dkim=none (message
 not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=xilinx.onmicrosoft.com; s=selector2-xilinx-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=0uM6KTWH/QLynHj5webgiJtYpsdqo/HTl87VcEalPow=;
 b=Qnjw70VgbdQT7xOKqyFPwV3iWD7642M5q5iTKDVlg6P0qIeNHFIEFBplVDmhustJhmzEt2cfhB73MaEQUrmUCrRhm2ldCz+bgHhnlfk7OrzZmznMTywHaOG2WXwlPNBA4DXTvao5ne58uqUALP76HB4rCUr3sWdqwU3S8Y22+7U=
Received: from DM5PR06CA0031.namprd06.prod.outlook.com (2603:10b6:3:5d::17) by
 SN6PR02MB5582.namprd02.prod.outlook.com (2603:10b6:805:eb::18) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3195.18; Wed, 22 Jul 2020 04:00:44 +0000
Received: from CY1NAM02FT042.eop-nam02.prod.protection.outlook.com
 (2603:10b6:3:5d:cafe::75) by DM5PR06CA0031.outlook.office365.com
 (2603:10b6:3:5d::17) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3216.20 via Frontend
 Transport; Wed, 22 Jul 2020 04:00:44 +0000
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 149.199.60.83)
 smtp.mailfrom=xilinx.com; epam.com; dkim=none (message not signed)
 header.d=none;epam.com; dmarc=bestguesspass action=none
 header.from=xilinx.com;
Received-SPF: Pass (protection.outlook.com: domain of xilinx.com designates
 149.199.60.83 as permitted sender) receiver=protection.outlook.com;
 client-ip=149.199.60.83; helo=xsj-pvapsmtpgw01;
Received: from xsj-pvapsmtpgw01 (149.199.60.83) by
 CY1NAM02FT042.mail.protection.outlook.com (10.152.75.136) with Microsoft SMTP
 Server id 15.20.3216.10 via Frontend Transport; Wed, 22 Jul 2020 04:00:43
 +0000
Received: from [149.199.38.66] (port=50077 helo=smtp.xilinx.com)
 by xsj-pvapsmtpgw01 with esmtp (Exim 4.90)
 (envelope-from <brian.woods@xilinx.com>)
 id 1jy5u7-0007kZ-LT; Tue, 21 Jul 2020 20:58:51 -0700
Received: from [127.0.0.1] (helo=localhost)
 by xsj-pvapsmtp01 with smtp (Exim 4.63)
 (envelope-from <brian.woods@xilinx.com>)
 id 1jy5vv-0002vJ-LF; Tue, 21 Jul 2020 21:00:43 -0700
Received: from xsj-pvapsmtp01 (smtp-fallback.xilinx.com [149.199.38.66] (may
 be forged))
 by xsj-smtp-dlp1.xlnx.xilinx.com (8.13.8/8.13.1) with ESMTP id 06M40eUD012816; 
 Tue, 21 Jul 2020 21:00:40 -0700
Received: from [172.19.2.62] (helo=xsjwoods50.xilinx.com)
 by xsj-pvapsmtp01 with esmtp (Exim 4.63)
 (envelope-from <brian.woods@xilinx.com>)
 id 1jy5vs-0002u2-Mh; Tue, 21 Jul 2020 21:00:40 -0700
From: Brian Woods <brian.woods@xilinx.com>
To: xen-devel <xen-devel@lists.xenproject.org>
Subject: [RFC v2 0/2] Generic SMMU Bindings
Date: Tue, 21 Jul 2020 21:00:29 -0700
Message-Id: <1595390431-24805-1-git-send-email-brian.woods@xilinx.com>
X-Mailer: git-send-email 2.7.4
X-RCIS-Action: ALLOW
X-TM-AS-Product-Ver: IMSS-7.1.0.1224-8.2.0.1013-23620.005
X-TM-AS-User-Approved-Sender: Yes;Yes
X-EOPAttributedMessage: 0
X-MS-Office365-Filtering-HT: Tenant
X-Forefront-Antispam-Report: CIP:149.199.60.83; CTRY:US; LANG:en; SCL:1; SRV:;
 IPV:NLI; SFV:NSPM; H:xsj-pvapsmtpgw01; PTR:unknown-60-83.xilinx.com; CAT:NONE;
 SFTY:;
 SFS:(39850400004)(376002)(396003)(136003)(346002)(46966005)(81166007)(83380400001)(6916009)(82310400002)(36756003)(4326008)(356005)(54906003)(8936002)(8676002)(82740400003)(478600001)(316002)(9786002)(6666004)(70206006)(70586007)(47076004)(26005)(2616005)(336012)(107886003)(426003)(2906002)(7696005)(4744005)(86362001)(5660300002)(44832011)(186003)(41533002)(42866002);
 DIR:OUT; SFP:1101; 
X-MS-PublicTrafficType: Email
MIME-Version: 1.0
Content-Type: text/plain
X-MS-Office365-Filtering-Correlation-Id: fb10d107-ab50-4916-7294-08d82df3ce8e
X-MS-TrafficTypeDiagnostic: SN6PR02MB5582:
X-Microsoft-Antispam-PRVS: <SN6PR02MB558224B544556DDA86A61747D7790@SN6PR02MB5582.namprd02.prod.outlook.com>
X-Auto-Response-Suppress: DR, RN, NRN, OOF, AutoReply
X-MS-Oob-TLC-OOBClassifiers: OLM:1850;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: iFlxy1JT7JnI37VlO9TA4Mu9lcS+1bnFKYMS6GQQtpRgBZCo+kqlHDDnWXuwz01/RdeTk4ckM6MWBh3ow+M38JfSMrd1rnpflDULdofefKN4CU06bAc1b8kSFkLLlFH6Zb5Fngqu+wNc9hS3wPuNoDsUVJWrO19y+UeILj+/sbWyCJijVzS6ES2wtuOcCcg+IVN3WQ0FHERmjuIZbsqax91297CR6Praon8oVLU1ir30VxdrYNRxQm9dM2ulUv//zX9d3h6ls0rD5g5dCvvynnkGqrSjFqcp3qKXwAcp51gsGbior7AO0kpc1iVw1iphO6mOE+yz0+q8NnkYOzV/l7bFeE+afIzT3BtMDpmmPPZYNoAWcYUWfjklRZoPenifKXdXWAEQXQxlJ9OMOPTEmFmCgKRuXLGOAFzCbyAeIELslE8UL36cMLZUe3euOkAJ
X-OriginatorOrg: xilinx.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 22 Jul 2020 04:00:43.8600 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: fb10d107-ab50-4916-7294-08d82df3ce8e
X-MS-Exchange-CrossTenant-Id: 657af505-d5df-48d0-8300-c31994686c5c
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=657af505-d5df-48d0-8300-c31994686c5c; Ip=[149.199.60.83];
 Helo=[xsj-pvapsmtpgw01]
X-MS-Exchange-CrossTenant-AuthSource: CY1NAM02FT042.eop-nam02.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SN6PR02MB5582
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Brian Woods <brian.woods@xilinx.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Finally had time to do a v2.  Changes are in each patch.

Brian Woods (2):
  arm,smmu: switch to using iommu_fwspec functions
  arm,smmu: add support for generic DT bindings

 xen/drivers/passthrough/arm/smmu.c    | 147 ++++++++++++++++++++++++----------
 xen/drivers/passthrough/device_tree.c |  20 +----
 2 files changed, 107 insertions(+), 60 deletions(-)

-- 
2.7.4



From xen-devel-bounces@lists.xenproject.org Wed Jul 22 06:34:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jul 2020 06:34:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jy8K3-0006Mj-Vp; Wed, 22 Jul 2020 06:33:47 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=q/Qh=BB=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jy8K2-0006MP-Fc
 for xen-devel@lists.xenproject.org; Wed, 22 Jul 2020 06:33:46 +0000
X-Inumbo-ID: 47083e02-cbe5-11ea-a183-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 47083e02-cbe5-11ea-a183-12813bfff9fa;
 Wed, 22 Jul 2020 06:33:39 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=1HpAY6hto9GobSh7pQPkEYLBxFo9u2OhPW0mhPNXBz8=; b=a1xzHUlbPSHsXdKV0GrXtQ43v
 ssGz0+cAm9vIFoszl3iHVSv+vmJiVkQRW+WurBlWXPir4uaMZR4rS6u6FpmkO3phST0Jw/CaDAr2i
 aI77rp9L6Kuqkiq3RMEbL3ufAo55vHJd+/IK+PWsQvd+RbexPxcYiFxABEkRPsdCcfrJ4=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jy8Ju-0006Xc-59; Wed, 22 Jul 2020 06:33:38 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jy8Js-0005HJ-V5; Wed, 22 Jul 2020 06:33:37 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jy8Js-00078O-UV; Wed, 22 Jul 2020 06:33:36 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-152070-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 152070: regressions - trouble: fail/pass/starved
X-Osstest-Failures: linux-linus:test-arm64-arm64-libvirt-xsm:guest-start/debian.repeat:fail:regression
 linux-linus:test-armhf-armhf-xl-vhd:guest-start:fail:regression
 linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-qemuu-freebsd12-amd64:hosts-allocate:starved:nonblocking
 linux-linus:test-amd64-amd64-qemuu-freebsd11-amd64:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This: linux=4fa640dc52302b5e62b01b05c755b055549633ae
X-Osstest-Versions-That: linux=1b5044021070efa3259f3e9548dc35d1eb6aa844
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 22 Jul 2020 06:33:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 152070 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/152070/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-libvirt-xsm 16 guest-start/debian.repeat fail REGR. vs. 151214
 test-armhf-armhf-xl-vhd      11 guest-start              fail REGR. vs. 151214

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 151214
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 151214
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 151214
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 151214
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 151214
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 151214
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 151214
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 151214
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-freebsd12-amd64  2 hosts-allocate           starved n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  2 hosts-allocate           starved n/a

version targeted for testing:
 linux                4fa640dc52302b5e62b01b05c755b055549633ae
baseline version:
 linux                1b5044021070efa3259f3e9548dc35d1eb6aa844

Last test of basis   151214  2020-06-18 02:27:46 Z   34 days
Failing since        151236  2020-06-19 19:10:35 Z   32 days   51 attempts
Testing same since   152070  2020-07-21 09:35:59 Z    0 days    1 attempts

------------------------------------------------------------
830 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       starved 
 test-amd64-amd64-qemuu-freebsd12-amd64                       starved 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 45273 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Jul 22 06:52:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jul 2020 06:52:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jy8bz-00081f-FO; Wed, 22 Jul 2020 06:52:19 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=oDH3=BB=canonical.com=andrea.righi@srs-us1.protection.inumbo.net>)
 id 1jy8by-00081a-8s
 for xen-devel@lists.xenproject.org; Wed, 22 Jul 2020 06:52:18 +0000
X-Inumbo-ID: e17134c4-cbe7-11ea-a183-12813bfff9fa
Received: from youngberry.canonical.com (unknown [91.189.89.112])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e17134c4-cbe7-11ea-a183-12813bfff9fa;
 Wed, 22 Jul 2020 06:52:17 +0000 (UTC)
Received: from mail-wr1-f70.google.com ([209.85.221.70])
 by youngberry.canonical.com with esmtps
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.86_2)
 (envelope-from <andrea.righi@canonical.com>) id 1jy8bw-0005y6-F8
 for xen-devel@lists.xenproject.org; Wed, 22 Jul 2020 06:52:16 +0000
Received: by mail-wr1-f70.google.com with SMTP id z1so240511wrn.18
 for <xen-devel@lists.xenproject.org>; Tue, 21 Jul 2020 23:52:16 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:date:from:to:cc:subject:message-id:mime-version
 :content-disposition;
 bh=uhBXqoms6Y2BJ+DRWgv1UXrLfuoV1AP2/YI2uOM0BIc=;
 b=VKuZA0gLSArZIZzP65NX/w+lRZXLiqbqUeLxvLOCQDaCsBoQNLjmwjEWDVk2gjatcG
 055x5H0Kj4+ERMlm+KInqvSZvTUSrxwyE6UhON9MaDugEHAVhyvLMXmepX2yTsZGxA3n
 U9yJufnHOw81xq40AFH+svD9gGgBB27SU/5vSjnJMB2WSzMO8Q0zASq5vv8cJ47TBIuR
 TygCY+zlGvhfpIov1eFmUojAj8hKdHB3Yzm4ge5iToxvaydpFgKz1Esh1rmrJ+QMN+kj
 Cc1AxqiyeVcWY5C9jdBR/4njZrJYtZZIYPSugpvCC/iI0FUXwZR3auWNYdmOBdwIjsGo
 xdGg==
X-Gm-Message-State: AOAM532SHylvegz0dr1zLXEpE2wJSq8VmlSNQWFLhehuAVxj5ja8Q18m
 39bN/SzkkzhIBzfEKvMC60cVnhaHbAz4cAK59xmPZDzwMpZyOSvqRxyjI0GIZ25GVR7OswUQREw
 ijsOJqq9v96xn1Hasc9wNQ+SZMUqtmBLo0NlK7bk6SNye
X-Received: by 2002:a7b:c05a:: with SMTP id u26mr1897720wmc.73.1595400733676; 
 Tue, 21 Jul 2020 23:52:13 -0700 (PDT)
X-Google-Smtp-Source: ABdhPJyhC1gnk9TlQyUIEJ3tOszObyW4FaQs/e6/6fGBY61IX5lMt1lAPzd6jSpjxL2aHTCAHJKvaQ==
X-Received: by 2002:a7b:c05a:: with SMTP id u26mr1897700wmc.73.1595400733297; 
 Tue, 21 Jul 2020 23:52:13 -0700 (PDT)
Received: from localhost (host-87-11-131-192.retail.telecomitalia.it.
 [87.11.131.192])
 by smtp.gmail.com with ESMTPSA id g70sm2426599wmg.24.2020.07.21.23.52.12
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 21 Jul 2020 23:52:12 -0700 (PDT)
Date: Wed, 22 Jul 2020 08:52:11 +0200
From: Andrea Righi <andrea.righi@canonical.com>
To: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH] xen-netfront: fix potential deadlock in xennet_remove()
Message-ID: <20200722065211.GA841369@xps-13>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Jakub Kicinski <kuba@kernel.org>, netdev@vger.kernel.org,
 "David S. Miller" <davem@davemloft.net>, linux-kernel@vger.kernel.org,
 xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

There's a potential race in xennet_remove(); this is what the driver is
doing upon unregistering a network device:

  1. state = read bus state
  2. if state is not "Closed":
  3.    request to set state to "Closing"
  4.    wait for state to be set to "Closing"
  5.    request to set state to "Closed"
  6.    wait for state to be set to "Closed"

If the state changes to "Closed" immediately after step 1, we are stuck
forever in step 4, because the state will never go back from "Closed" to
"Closing".

Also check for state == "Closed" in step 4 to prevent the deadlock.

Also add a 5-second timeout any time we wait for the bus state to change,
to avoid getting stuck forever in wait_event(), and add a debug message
to help track down potential similar issues.

Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
---
 drivers/net/xen-netfront.c | 79 +++++++++++++++++++++++++++-----------
 1 file changed, 57 insertions(+), 22 deletions(-)

diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
index 482c6c8b0fb7..e09caba93dd9 100644
--- a/drivers/net/xen-netfront.c
+++ b/drivers/net/xen-netfront.c
@@ -63,6 +63,8 @@ module_param_named(max_queues, xennet_max_queues, uint, 0644);
 MODULE_PARM_DESC(max_queues,
 		 "Maximum number of queues per virtual interface");
 
+#define XENNET_TIMEOUT  (5 * HZ)
+
 static const struct ethtool_ops xennet_ethtool_ops;
 
 struct netfront_cb {
@@ -1334,12 +1336,20 @@ static struct net_device *xennet_create_dev(struct xenbus_device *dev)
 
 	netif_carrier_off(netdev);
 
-	xenbus_switch_state(dev, XenbusStateInitialising);
-	wait_event(module_wq,
-		   xenbus_read_driver_state(dev->otherend) !=
-		   XenbusStateClosed &&
-		   xenbus_read_driver_state(dev->otherend) !=
-		   XenbusStateUnknown);
+	do {
+		dev_dbg(&dev->dev,
+			"%s: switching to XenbusStateInitialising\n",
+			dev->nodename);
+		xenbus_switch_state(dev, XenbusStateInitialising);
+		err = wait_event_timeout(module_wq,
+				 xenbus_read_driver_state(dev->otherend) !=
+				 XenbusStateClosed &&
+				 xenbus_read_driver_state(dev->otherend) !=
+				 XenbusStateUnknown, XENNET_TIMEOUT);
+		dev_dbg(&dev->dev, "%s: state = %d\n", dev->nodename,
+			xenbus_read_driver_state(dev->otherend));
+	} while (!err);
+
 	return netdev;
 
  exit:
@@ -2139,28 +2149,53 @@ static const struct attribute_group xennet_dev_group = {
 };
 #endif /* CONFIG_SYSFS */
 
-static int xennet_remove(struct xenbus_device *dev)
+static void xennet_bus_close(struct xenbus_device *dev)
 {
-	struct netfront_info *info = dev_get_drvdata(&dev->dev);
-
-	dev_dbg(&dev->dev, "%s\n", dev->nodename);
+	int ret;
 
-	if (xenbus_read_driver_state(dev->otherend) != XenbusStateClosed) {
+	if (xenbus_read_driver_state(dev->otherend) == XenbusStateClosed)
+		return;
+	do {
+		dev_dbg(&dev->dev, "%s: switching to XenbusStateClosing\n",
+			dev->nodename);
 		xenbus_switch_state(dev, XenbusStateClosing);
-		wait_event(module_wq,
-			   xenbus_read_driver_state(dev->otherend) ==
-			   XenbusStateClosing ||
-			   xenbus_read_driver_state(dev->otherend) ==
-			   XenbusStateUnknown);
+		ret = wait_event_timeout(module_wq,
+				   xenbus_read_driver_state(dev->otherend) ==
+				   XenbusStateClosing ||
+				   xenbus_read_driver_state(dev->otherend) ==
+				   XenbusStateClosed ||
+				   xenbus_read_driver_state(dev->otherend) ==
+				   XenbusStateUnknown,
+				   XENNET_TIMEOUT);
+		dev_dbg(&dev->dev, "%s: state = %d\n", dev->nodename,
+			xenbus_read_driver_state(dev->otherend));
+	} while (!ret);
+
+	if (xenbus_read_driver_state(dev->otherend) == XenbusStateClosed)
+		return;
 
+	do {
+		dev_dbg(&dev->dev, "%s: switching to XenbusStateClosed\n",
+			dev->nodename);
 		xenbus_switch_state(dev, XenbusStateClosed);
-		wait_event(module_wq,
-			   xenbus_read_driver_state(dev->otherend) ==
-			   XenbusStateClosed ||
-			   xenbus_read_driver_state(dev->otherend) ==
-			   XenbusStateUnknown);
-	}
+		ret = wait_event_timeout(module_wq,
+				   xenbus_read_driver_state(dev->otherend) ==
+				   XenbusStateClosed ||
+				   xenbus_read_driver_state(dev->otherend) ==
+				   XenbusStateUnknown,
+				   XENNET_TIMEOUT);
+		dev_dbg(&dev->dev, "%s: state = %d\n", dev->nodename,
+			xenbus_read_driver_state(dev->otherend));
+	} while (!ret);
+}
+
+static int xennet_remove(struct xenbus_device *dev)
+{
+	struct netfront_info *info = dev_get_drvdata(&dev->dev);
+
+	dev_dbg(&dev->dev, "%s\n", dev->nodename);
 
+	xennet_bus_close(dev);
 	xennet_disconnect_backend(info);
 
 	if (info->netdev->reg_state == NETREG_REGISTERED)
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Wed Jul 22 07:42:35 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jul 2020 07:42:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jy9ON-0003ng-Gt; Wed, 22 Jul 2020 07:42:19 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=G4i8=BA=vmm.dev=o8@srs-us1.protection.inumbo.net>)
 id 1jxut4-0005hS-B8
 for xen-devel@lists.xenproject.org; Tue, 21 Jul 2020 16:13:02 +0000
X-Inumbo-ID: 0cac372c-cb6d-11ea-856d-bc764e2007e4
Received: from mail-pl1-x641.google.com (unknown [2607:f8b0:4864:20::641])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0cac372c-cb6d-11ea-856d-bc764e2007e4;
 Tue, 21 Jul 2020 16:13:01 +0000 (UTC)
Received: by mail-pl1-x641.google.com with SMTP id l6so10454565plt.7
 for <xen-devel@lists.xenproject.org>; Tue, 21 Jul 2020 09:13:01 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=vmm.dev; s=google;
 h=from:to:cc:subject:date:message-id:mime-version
 :content-transfer-encoding;
 bh=wDitsdDHDoETDTxDKluPrWhU2D8DE6u8BnPSmRmrGh4=;
 b=fy7L/4Re2TIFmm12k2i3bXju6ZZlkbEA1354CbMIGv3i6xktgqXThaI1n9+yLvVeEO
 C6XotX0xy43lCHKadVKvJu6as164ENvY1aZYC+Yyk6hrIbMWtMK5qbbY/oAzlC1QJc/Q
 8TrLwbEZwu3d5Il0QNTGzHqDr1Wi7ruxTjCJbyOBHnxKClmMl1qVy2GQASFp3s9u8YP4
 VSo4vbTiSmKkp48DBE9pCIvmYDvC2JPmk5mrF9XSkbyYnR9veMcyfz8PlL4ZXKXSqov7
 yfHhV8BL0GsYeU64KxJy8smQviZ1pKkO7sM+BGXTVcqsJVnmyRO6wSLGb235TJ1J3hxc
 UCEw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:to:cc:subject:date:message-id:mime-version
 :content-transfer-encoding;
 bh=wDitsdDHDoETDTxDKluPrWhU2D8DE6u8BnPSmRmrGh4=;
 b=mgAoC4zhQtSLjy6vq3PzpF4LEO7OHGbCH4dHDffbiBlRKB9U6/WCuESxKDtcz6axVX
 b2C9twM0aN/sdrF/qGANdX/5Myoo9S8cCN7XyvunGMcLLONdcP/QUaEk6pcJSP39P4Vd
 tQ6qgM23kUdS0iu0qdFqRjtzamg7VnoaS+t9HUZ+yOoaPOAtP8iQkqU0iazzKNLQ9YST
 lBW/cRyhAaxvQ9/bgOqjsqp3JUTPht8Z8hHlQTBOlafrff0pJNoP6MELRm+VpGDhNj0N
 5+RsdSj5BeE1vrepo8fvuJkjhi+j/vU6jQHQdS9WxYUvTfczcoG445Gt+CEfijoJmm7X
 gVjw==
X-Gm-Message-State: AOAM530y+GIBAsdjEsDQmBJso0/6e16hnk1G0xFbTq/MH06OO3Os8vgE
 eLw3RP+Fu9TbuxtzlfARG1qagw==
X-Google-Smtp-Source: ABdhPJz/4Rls+u9+PMZSs3OFi3HIkImvb038W2RxX4ZvHHXgVr6fOs32UJtvHWbBcLF6bDLtkM4tvg==
X-Received: by 2002:a17:902:8ecb:: with SMTP id
 x11mr17341190plo.123.1595347980879; 
 Tue, 21 Jul 2020 09:13:00 -0700 (PDT)
Received: from ip-172-31-28-103.ap-northeast-1.compute.internal
 (ec2-18-183-109-148.ap-northeast-1.compute.amazonaws.com. [18.183.109.148])
 by smtp.gmail.com with ESMTPSA id y80sm20467763pfb.165.2020.07.21.09.12.57
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Tue, 21 Jul 2020 09:13:00 -0700 (PDT)
From: Hayato Ohhashi <o8@vmm.dev>
To: tglx@linutronix.de,
	mingo@redhat.com,
	bp@alien8.de,
	x86@kernel.org
Subject: [PATCH] x86/xen/time: set the X86_FEATURE_TSC_KNOWN_FREQ flag in
 xen_tsc_khz()
Date: Tue, 21 Jul 2020 16:12:31 +0000
Message-Id: <20200721161231.6019-1-o8@vmm.dev>
X-Mailer: git-send-email 2.23.3
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Mailman-Approved-At: Wed, 22 Jul 2020 07:42:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: jgross@suse.com, Hayato Ohhashi <o8@vmm.dev>, sstabellini@kernel.org,
 linux-kernel@vger.kernel.org, xen-devel@lists.xenproject.org,
 boris.ostrovsky@oracle.com
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

If the TSC frequency is known from the pvclock page, it does not need
to be recalibrated. We can avoid the recalibration by setting
X86_FEATURE_TSC_KNOWN_FREQ.

Signed-off-by: Hayato Ohhashi <o8@vmm.dev>
---
 arch/x86/xen/time.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/x86/xen/time.c b/arch/x86/xen/time.c
index c8897aad13cd..91f5b330dcc6 100644
--- a/arch/x86/xen/time.c
+++ b/arch/x86/xen/time.c
@@ -39,6 +39,7 @@ static unsigned long xen_tsc_khz(void)
 	struct pvclock_vcpu_time_info *info =
 		&HYPERVISOR_shared_info->vcpu_info[0].time;
 
+	setup_force_cpu_cap(X86_FEATURE_TSC_KNOWN_FREQ);
 	return pvclock_tsc_khz(info);
 }
 
-- 
2.23.3



From xen-devel-bounces@lists.xenproject.org Wed Jul 22 08:21:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jul 2020 08:21:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyA0J-0007go-1k; Wed, 22 Jul 2020 08:21:31 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=bhkO=BB=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jyA0I-0007gj-3E
 for xen-devel@lists.xenproject.org; Wed, 22 Jul 2020 08:21:30 +0000
X-Inumbo-ID: 55cc478a-cbf4-11ea-8620-bc764e2007e4
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 55cc478a-cbf4-11ea-8620-bc764e2007e4;
 Wed, 22 Jul 2020 08:21:28 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595406089;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:in-reply-to;
 bh=iI4sRVjlvh9gEQqxjsfWwqL+VLacm6rDZcDKOJBuX1Y=;
 b=RqGfSa5Lkp203CVAGlS9SK8Xe96DiJhflFa5lLi7oYEGgbKAC+j6WSvT
 ueTH3zSWHvoD0jhLPwYiIhWtJ1ThlKfYCBQFxWan/+G+6MZPOMHZV1hz2
 h41fvdvYk94tC6wWXDEaTo7XqFW+B4uc4OF8t7jt6azQYmUeIvrJ4dGU5 8=;
Authentication-Results: esa3.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: eT357Zb1Z/BIKmkwgYYRc7oFLXkcrCBm+9MIOhkUi2JgLVSxLhGhwpsHfhtEOlqTubOtqbM2b3
 dY2SM1MdM6S0oANXeDKPsh3gJ0K6t+ASIM/k05660uLtn4n4IpNPl07TPvp4A+/JJwQGAifF9P
 w2euzvW5GqLAiV9jI92H3JEY2DOf44P1CsGNXyZQS/tBTJfaF9jxaIvQwZBdH3Btj+CtSE6xqF
 C1idOf4v2f7rUXAAUcnfm7M4V7AC6/Zxj1A6m8h8dXsz6NfBF/sVAjwjh51C9RLX9mZuhls1U6
 o6g=
X-SBRS: 2.7
X-MesageID: 22915674
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,381,1589256000"; d="scan'208";a="22915674"
Date: Wed, 22 Jul 2020 10:21:15 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Julien Grall <julien@xen.org>
Subject: Re: Virtio in Xen on Arm (based on IOREQ concept)
Message-ID: <20200722082115.GR7191@Air-de-Roger>
References: <CAPD2p-nthLq5NaU32u8pVaa-ub=a9-LOPenupntTYdS-cu31jQ@mail.gmail.com>
 <20200717150039.GV7191@Air-de-Roger>
 <8f4e0c0d-b3d4-9dd3-ce20-639539321968@gmail.com>
 <3dcab37d-0d60-f1cc-1d59-b5497f0fa95f@xen.org>
 <b6cf0931-c31e-b03b-3995-688536de391a@gmail.com>
 <05acce61-5b29-76f7-5664-3438361caf82@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
In-Reply-To: <05acce61-5b29-76f7-5664-3438361caf82@xen.org>
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Oleksandr Andrushchenko <andr2000@gmail.com>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>, Oleksandr <olekstysh@gmail.com>,
 xen-devel <xen-devel@lists.xenproject.org>, Artem
 Mygaiev <joculator@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Tue, Jul 21, 2020 at 10:12:40PM +0100, Julien Grall wrote:
> Hi Oleksandr,
> 
> On 21/07/2020 19:16, Oleksandr wrote:
> > 
> > On 21.07.20 17:27, Julien Grall wrote:
> > > On a similar topic, I am a bit surprised you didn't encounter memory
> > > exhaustion when trying to use virtio. Because of how Linux currently
> > > works (see XSA-300), the backend domain has to have at least as much
> > > RAM as the domains it serves. For instance, if you serve two
> > > domains with 1GB of RAM each, then your backend would need at least
> > > 2GB + some for its own purposes.
> > > 
> > > This probably wants to be resolved by allowing foreign mappings to be
> > > "paged" out as you would for memory assigned to userspace.
> > 
> > Didn't notice the last sentence initially. Could you please explain your
> > idea in detail if possible? Does it mean that, if implemented, it would be
> > feasible to map all guest memory regardless of how much memory the guest
> > has?
> >
> > Avoiding mapping/unmapping memory on each guest request would allow us to
> > have better performance (of course, taking care of the fact that the guest
> > memory layout could change)...
> 
> I will explain that below. But first, let me comment on KVM.
> 
> > Actually, what I understand looking at kvmtool is that it does not
> > map/unmap memory dynamically; it just calculates virt addresses according
> > to the gfn provided.
> 
> The memory management between KVM and Xen is quite different. In the case of
> KVM, the guest RAM is effectively memory from the userspace (allocated via
> mmap) and then shared with the guest.
> 
> From the userspace PoV, the guest memory will always be accessible from the
> same virtual region. However, behind the scenes, the pages may not always
> reside in memory. They are basically managed the same way as "normal"
> userspace memory.
> 
> In the case of Xen, we are basically stealing a guest physical page
> allocated via kmalloc() and provide no facilities for Linux to reclaim the
> page if it needs to do so before the userspace decides to unmap the foreign
> mapping.
> 
> I think it would be good to handle the foreign mapping the same way as
> userspace memory. By that I mean that Linux could reclaim the physical page
> used by the foreign mapping if it needs to.
> 
> The process for reclaiming the page would look like:
>     1) Unmap the foreign page
>     2) Balloon in the backend domain physical address used by the foreign
> mapping (allocate the page in the physmap)
> 
> The next time the userspace tries to access the foreign page, Linux will
> receive a data abort that would result in:
>     1) Allocate a backend domain physical page
>     2) Balloon out the physical address (remove the page from the physmap)
>     3) Map the foreign mapping at the new guest physical address
>     4) Map the guest physical page in the userspace address space

This is going to shatter all the super pages in the stage-2
translation.

> With this approach, we should be able to have a backend domain that can
> handle frontend domains without requiring a lot of memory.

Linux on x86 has the option to use empty hotplug memory ranges to map
foreign memory: the balloon driver hotplugs an unpopulated physical
memory range that's not made available to the OS free memory allocator
and is just used as scratch space to map foreign memory. Not sure
whether Arm has something similar, or if it could be implemented.

You can still use the map-on-fault behaviour as above, but I would
recommend that you try to limit the number of hypercalls issued.
Having to issue a single hypercall for each page fault is going to
be slow, so I would instead use mmap batch to map the whole range in
unpopulated physical memory, so that the OS fault handler just needs to
fill the page tables with the corresponding addresses.

Roger.


From xen-devel-bounces@lists.xenproject.org Wed Jul 22 08:25:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jul 2020 08:25:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyA46-0007q9-J5; Wed, 22 Jul 2020 08:25:26 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=+HSf=BB=redhat.com=philmd@srs-us1.protection.inumbo.net>)
 id 1jyA44-0007q4-PO
 for xen-devel@lists.xenproject.org; Wed, 22 Jul 2020 08:25:25 +0000
X-Inumbo-ID: e3756300-cbf4-11ea-8620-bc764e2007e4
Received: from us-smtp-delivery-1.mimecast.com (unknown [205.139.110.61])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id e3756300-cbf4-11ea-8620-bc764e2007e4;
 Wed, 22 Jul 2020 08:25:23 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
 s=mimecast20190719; t=1595406323;
 h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
 content-transfer-encoding:content-transfer-encoding;
 bh=iMxQatL9RPRr52uoa/cQsIi+/qvfqC+NGO6O/Z/trAk=;
 b=ELpFr0TSwHtbJhlGuvPPyiR4Rv6JBARFNyjN/7mtmxphARaHrlMw2YePFITgKznRGzEn4j
 qx75Hf+Nx67Cx1TU9p/A60YcDjw+/+RvOC7pTTqIQZzbMmqVZyZNBb6+3KLl5Lodygv7T4
 lOmBPCwCh/8l7qm7RFF7c2O1dWJTnaU=
Received: from mail-ej1-f70.google.com (mail-ej1-f70.google.com
 [209.85.218.70]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-402-jdFd6FPHM4i93iT34d2hRw-1; Wed, 22 Jul 2020 04:25:21 -0400
X-MC-Unique: jdFd6FPHM4i93iT34d2hRw-1
Received: by mail-ej1-f70.google.com with SMTP id cf15so714085ejb.6
 for <xen-devel@lists.xenproject.org>; Wed, 22 Jul 2020 01:25:20 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:to:cc:subject:date:message-id:mime-version
 :content-transfer-encoding;
 bh=iMxQatL9RPRr52uoa/cQsIi+/qvfqC+NGO6O/Z/trAk=;
 b=EwWWK7OJBR3V5Gv9gtCjvDCyQV3iz5unkBALUUfugG+LqGmU/QoNyChSC7UH5QzuQx
 qp69im05n9l/Bzwa+obGJtML/gOCSfgofcsP86TjfaSyBXfiWHUe51ELw4Y6rnShJrv2
 obVH3HR5vZec7Ux/UXU4MBOOAPbjdhwq/qPLu081hA7ZHQa1vtGP6XR7+pdGS+MhrYkV
 TA4Aad6fjDGNFAqzAmp5eDSPqhU2YPtCPLbxtDfX8oWcy3wo1RMgO1hLFtUAE0vtzDJN
 3ajXjlGN4rV0glKdjluRDhXNqgRCxJD7hxZjVOJmQTghIKCRhegOWXYc70YPtN7qJIdX
 7SUA==
X-Gm-Message-State: AOAM532QCCnIdIiRGZ0K5/2fPQa+ixtbm6vN08REXHh9A3eWPFBJlumj
 ZNZ1pwbIAziafS0aJElzdNNpH61prEy9X3MRWPbeEfvvJ9+56ybM7xiR9WOb9tC9nRLVqhdgONa
 67YHVzuJOfD6J17UuQwycsk/wmJE=
X-Received: by 2002:a05:6402:cb9:: with SMTP id
 cn25mr30469507edb.247.1595406319987; 
 Wed, 22 Jul 2020 01:25:19 -0700 (PDT)
X-Google-Smtp-Source: ABdhPJyjYRzzJTOzYTRVfwqd/zbj7wdZv8GzJF0hCcZfQHNO6k/Ak91x0TPiYTcKtYKYbO1SqY50dg==
X-Received: by 2002:a05:6402:cb9:: with SMTP id
 cn25mr30469485edb.247.1595406319749; 
 Wed, 22 Jul 2020 01:25:19 -0700 (PDT)
Received: from x1w.redhat.com (138.red-83-57-170.dynamicip.rima-tde.net.
 [83.57.170.138])
 by smtp.gmail.com with ESMTPSA id m6sm18125273eja.87.2020.07.22.01.25.17
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 22 Jul 2020 01:25:19 -0700 (PDT)
From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>
To: qemu-devel@nongnu.org
Subject: [PATCH-for-5.2] hw/i386/q35: Remove unreachable Xen code on Q35
 machine
Date: Wed, 22 Jul 2020 10:25:17 +0200
Message-Id: <20200722082517.18708-1-philmd@redhat.com>
X-Mailer: git-send-email 2.21.3
MIME-Version: 1.0
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=UTF-8;
	text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Eduardo Habkost <ehabkost@redhat.com>, Paul Durrant <paul@xen.org>,
 "Michael S. Tsirkin" <mst@redhat.com>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Anthony Perard <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org,
 Richard Henderson <rth@twiddle.net>,
 =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

The Xen accelerator requires specific changes to a machine to be
usable. See for example how the 'Xen PC' machine configures its PCI
bus by calling pc_xen_hvm_init_pci(). There is no 'Xen Q35' machine
declared. This code was probably added when the Q35 machine was
introduced, based on the existing PC machine (see commit df2d8b3ed4
"Introduce q35 pc based chipset emulator"). Remove the unreachable
code to simplify this file.

Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
---
 hw/i386/pc_q35.c | 13 ++-----------
 1 file changed, 2 insertions(+), 11 deletions(-)

diff --git a/hw/i386/pc_q35.c b/hw/i386/pc_q35.c
index a3e607a544..12f5934241 100644
--- a/hw/i386/pc_q35.c
+++ b/hw/i386/pc_q35.c
@@ -34,9 +34,7 @@
 #include "sysemu/arch_init.h"
 #include "hw/i2c/smbus_eeprom.h"
 #include "hw/rtc/mc146818rtc.h"
-#include "hw/xen/xen.h"
 #include "sysemu/kvm.h"
-#include "sysemu/xen.h"
 #include "hw/kvm/clock.h"
 #include "hw/pci-host/q35.h"
 #include "hw/qdev-properties.h"
@@ -179,10 +177,6 @@ static void pc_q35_init(MachineState *machine)
         x86ms->below_4g_mem_size = machine->ram_size;
     }
 
-    if (xen_enabled()) {
-        xen_hvm_init(pcms, &ram_memory);
-    }
-
     x86_cpus_init(x86ms, pcmc->default_cpu_version);
 
     kvmclock_create();
@@ -208,10 +202,7 @@ static void pc_q35_init(MachineState *machine)
     }
 
     /* allocate ram and load rom/bios */
-    if (!xen_enabled()) {
-        pc_memory_init(pcms, get_system_memory(),
-                       rom_memory, &ram_memory);
-    }
+    pc_memory_init(pcms, get_system_memory(), rom_memory, &ram_memory);
 
     /* create pci host bus */
     q35_host = Q35_HOST_DEVICE(qdev_new(TYPE_Q35_HOST_DEVICE));
@@ -271,7 +262,7 @@ static void pc_q35_init(MachineState *machine)
 
     assert(pcms->vmport != ON_OFF_AUTO__MAX);
     if (pcms->vmport == ON_OFF_AUTO_AUTO) {
-        pcms->vmport = xen_enabled() ? ON_OFF_AUTO_OFF : ON_OFF_AUTO_ON;
+        pcms->vmport = ON_OFF_AUTO_ON;
     }
 
     /* init basic PC hardware */
-- 
2.21.3



From xen-devel-bounces@lists.xenproject.org Wed Jul 22 08:28:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jul 2020 08:28:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyA6Z-0007yQ-21; Wed, 22 Jul 2020 08:27:59 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=bhkO=BB=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jyA6X-0007xa-0w
 for xen-devel@lists.xenproject.org; Wed, 22 Jul 2020 08:27:57 +0000
X-Inumbo-ID: 3dc8f07e-cbf5-11ea-8620-bc764e2007e4
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3dc8f07e-cbf5-11ea-8620-bc764e2007e4;
 Wed, 22 Jul 2020 08:27:55 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595406476;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:content-transfer-encoding:in-reply-to;
 bh=FN40+KHzDRqn99k21tmlYB+H9TCBGBR1w3tvJcN9Tgs=;
 b=cYGQklK5OJcdqGfhVG/S/3cufajSRbOFRUfQQLa3TCpzos4ukgypGEBn
 Eg1L30ggqyBKuSKpzlhqE//n0ii5AgdcqF+pJBVd5SRYPzD7JuxukI+78
 HCQIezx4M+cWLF4AfkwJ2AiwIcsWAOnwqh2ec3qVNTRjI8INjRN/z8JEr s=;
Authentication-Results: esa3.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: mlqzl0cWqd5gabQyWtfyst+GqUDqk2plDTUvwmrYJ8XALOOL3c4pwsZ14wIgTuqSOrWdAsqabU
 U3z4rBnJN46Le58mC9XaMMoAN3S3s0bqLbgiivLfDLVwkpF/RX2WQDULvW4T8zVE5XCVH++AU3
 OUWlO6ngvCzlLeLZixzTsry09fylayuaS5boFcXYZna62kTmuNMGKttp0dIuWif7hkABJgwyxK
 ikPkcGIF9TKekbeq9FWw+LIYJMaFgyhyMnNs+N/qmuLzuxVJUkXnzCjftz5DjOIOBeOWFolL+4
 yw4=
X-SBRS: 2.7
X-MesageID: 22916032
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,381,1589256000"; d="scan'208";a="22916032"
Date: Wed, 22 Jul 2020 10:27:46 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Anchal Agarwal <anchalag@amazon.com>
Subject: Re: [PATCH v2 01/11] xen/manage: keep track of the on-going suspend
 mode
Message-ID: <20200722082746.GS7191@Air-de-Roger>
References: <20200702182136.GA3511@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
 <50298859-0d0e-6eb0-029b-30df2a4ecd63@oracle.com>
 <20200715204943.GB17938@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
 <0ca3c501-e69a-d2c9-a24c-f83afd4bdb8c@oracle.com>
 <20200717191009.GA3387@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
 <5464f384-d4b4-73f0-d39e-60ba9800d804@oracle.com>
 <20200720093705.GG7191@Air-de-Roger>
 <20200721001736.GB19610@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
 <20200721083018.GM7191@Air-de-Roger>
 <20200721195509.GA14682@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20200721195509.GA14682@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: eduval@amazon.com, len.brown@intel.com, peterz@infradead.org,
 benh@kernel.crashing.org, x86@kernel.org, linux-mm@kvack.org, pavel@ucw.cz,
 hpa@zytor.com, tglx@linutronix.de, sstabellini@kernel.org, kamatam@amazon.com,
 marmarek@invisiblethingslab.com, mingo@redhat.com,
 xen-devel@lists.xenproject.org, sblbir@amazon.com, axboe@kernel.dk,
 konrad.wilk@oracle.com, bp@alien8.de,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>, jgross@suse.com,
 netdev@vger.kernel.org, linux-pm@vger.kernel.org, rjw@rjwysocki.net,
 linux-kernel@vger.kernel.org, vkuznets@redhat.com, davem@davemloft.net,
 dwmw@amazon.co.uk
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Tue, Jul 21, 2020 at 07:55:09PM +0000, Anchal Agarwal wrote:
> On Tue, Jul 21, 2020 at 10:30:18AM +0200, Roger Pau Monné wrote:
> > CAUTION: This email originated from outside of the organization. Do not click links or open attachments unless you can confirm the sender and know the content is safe.
> > 
> > 
> > 
> > Marek: I'm adding you in case you are able to give this a try and
> > make sure it doesn't break suspend for dom0.
> > 
> > On Tue, Jul 21, 2020 at 12:17:36AM +0000, Anchal Agarwal wrote:
> > > On Mon, Jul 20, 2020 at 11:37:05AM +0200, Roger Pau Monné wrote:
> > > >
> > > > On Sat, Jul 18, 2020 at 09:47:04PM -0400, Boris Ostrovsky wrote:
> > > > > (Roger, question for you at the very end)
> > > > >
> > > > > On 7/17/20 3:10 PM, Anchal Agarwal wrote:
> > > > > > On Wed, Jul 15, 2020 at 05:18:08PM -0400, Boris Ostrovsky wrote:
> > > > > >>
> > > > > >> On 7/15/20 4:49 PM, Anchal Agarwal wrote:
> > > > > >>> On Mon, Jul 13, 2020 at 11:52:01AM -0400, Boris Ostrovsky wrote:
> > > > > >>>>
> > > > > >>>> On 7/2/20 2:21 PM, Anchal Agarwal wrote:
> > > > > >>>> And PVH dom0.
> > > > > >>> That's another good use case to make it work with; however, I still
> > > > > >>> think that should be tested/worked on separately, as the feature itself
> > > > > >>> (PVH Dom0) is very new.
> > > > > >>
> > > > > >> Same question here --- will this break PVH dom0?
> > > > > >>
> > > > > > I haven't tested it as a part of this series. Is that a blocker here?
> > > > >
> > > > >
> > > > > I suspect dom0 will not do well now as far as hibernation goes, in which
> > > > > case you are not breaking anything.
> > > > >
> > > > >
> > > > > Roger?
> > > >
> > > > I sadly don't have any box ATM that supports hibernation where I
> > > > could test it. We have hibernation support for PV dom0, so while I
> > > > haven't done anything specific to support or test hibernation on PVH
> > > > dom0 I would at least aim to not make this any worse, and hence the
> > > > check should at least also fail for a PVH dom0?
> > > >
> > > > if (!xen_hvm_domain() || xen_initial_domain())
> > > >     return -ENODEV;
> > > >
> > > > Ie: none of this should be applied to a PVH dom0, as it doesn't have
> > > > PV devices and hence should follow the bare metal device suspend.
> > > >
> > > So from what I understand, you meant that any guest running on a PVH dom0
> > > should not hibernate if hibernation is triggered from within the guest, or
> > > should they?
> > 
> > Er, no to both I think. What I meant is that a PVH dom0 should be able
> > to properly suspend, and we should make sure this work doesn't make
> > that any harder (or break it if it's currently working).
> > 
> > Or at least that's how I understood the question raised by Boris.
> > 
> > You are adding code to the generic suspend path that's also used by dom0
> > in order to perform bare metal suspension. This is fine now for a PV
> > dom0 because the code is gated on xen_hvm_domain, but you should also
> > take into account that a PVH dom0 is considered an HVM domain, and
> > hence will get the notifier registered.
> >
> Ok, that makes sense now. This is good to be safe, but my patch series is only
> meant to support domU hibernation, so I am not sure whether this will affect
> PVH dom0. However, since I do not have a good way of testing it, I will add
> the check.
> 
> Moreover, in Patch-0004, I do register suspend/resume syscore_ops specifically
> for domU hibernation, and only if it is a xen_hvm_domain.

So if the hooks are only registered for domU, do you still need this
xen_hvm_domain check here?

I have to admit I'm not familiar with Linux PM suspend.

> I don't see any reason why they should not
> be registered for domUs running on a PVH dom0.

To be clear: it should be registered for all HVM domUs, regardless of
whether they are running on a PV or a PVH dom0. My intention was never
to suggest otherwise. It should be enabled for all HVM domUs, but
shouldn't be enabled for HVM dom0.

> Those suspend/resume callbacks will
> only be invoked in case of hibernation, and will be skipped if the generic
> suspend path is in progress. Do you see any issue with that?

No, I think it's fine.

Roger.


From xen-devel-bounces@lists.xenproject.org Wed Jul 22 08:34:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jul 2020 08:34:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyACh-0000PC-PX; Wed, 22 Jul 2020 08:34:19 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=mY6V=BB=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jyACh-0000P7-4D
 for xen-devel@lists.xenproject.org; Wed, 22 Jul 2020 08:34:19 +0000
X-Inumbo-ID: 21c44422-cbf6-11ea-8620-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 21c44422-cbf6-11ea-8620-bc764e2007e4;
 Wed, 22 Jul 2020 08:34:18 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id A6FB2AD4A;
 Wed, 22 Jul 2020 08:34:24 +0000 (UTC)
Subject: Re: [xen-unstable test] 152067: regressions - trouble:
 fail/pass/starved
To: osstest service owner <osstest-admin@xenproject.org>,
 xen-devel@lists.xenproject.org
References: <osstest-152067-mainreport@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <62b87ab7-1f1e-0ef8-0ff7-3b6fb55837dd@suse.com>
Date: Wed, 22 Jul 2020 10:34:18 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <osstest-152067-mainreport@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 22.07.2020 02:37, osstest service owner wrote:
> flight 152067 xen-unstable real [real]
> http://logs.test-lab.xenproject.org/osstest/logs/152067/
> 
> Regressions :-(
> 
> Tests which did not succeed and are blocking,
> including tests which could not be run:
>  test-amd64-amd64-dom0pvh-xl-intel 18 guest-localmigrate/x10 fail REGR. vs. 152045

Jul 21 16:20:58.985209 [  530.412043] libxl-save-help: page allocation failure: order:4, mode:0x60c0c0(GFP_KERNEL|__GFP_COMP|__GFP_ZERO), nodemask=(null)

My first reaction to this would be to ask whether Dom0 was given too little
memory here. Or of course there could be a memory leak somewhere. But
the system isn't entirely out of memory (about 7MB left), so perhaps
the "order:4" aspect here also plays a meaningful role. Hence ...

Jul 21 16:21:00.390810 [  530.412448] Call Trace:
Jul 21 16:21:00.402721 [  530.412499]  dump_stack+0x72/0x8c
Jul 21 16:21:00.402801 [  530.412541]  warn_alloc.cold.140+0x68/0xe8
Jul 21 16:21:00.402841 [  530.412585]  __alloc_pages_slowpath+0xc73/0xcb0
Jul 21 16:21:00.414737 [  530.412640]  ? __do_page_fault+0x249/0x4d0
Jul 21 16:21:00.414786 [  530.412681]  __alloc_pages_nodemask+0x235/0x250
Jul 21 16:21:00.426555 [  530.412734]  kmalloc_order+0x13/0x60
Jul 21 16:21:00.426619 [  530.412774]  kmalloc_order_trace+0x18/0xa0
Jul 21 16:21:00.426671 [  530.412816]  alloc_empty_pages.isra.15+0x24/0x60
Jul 21 16:21:00.438447 [  530.412867]  privcmd_ioctl_mmap_batch.isra.18+0x303/0x320
Jul 21 16:21:00.438507 [  530.412918]  ? vmacache_find+0xb0/0xb0
Jul 21 16:21:00.450475 [  530.412957]  privcmd_ioctl+0x253/0xa9b

... perhaps we ought to consider re-working this code path to avoid
order > 0 allocations (may be as simple as switching to vmalloc(),
but I say this without having looked at the code).

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jul 22 08:38:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jul 2020 08:38:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyAGW-0000Yr-Ai; Wed, 22 Jul 2020 08:38:16 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=bhkO=BB=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jyAGU-0000Ym-O1
 for xen-devel@lists.xenproject.org; Wed, 22 Jul 2020 08:38:14 +0000
X-Inumbo-ID: ae02b9dc-cbf6-11ea-8620-bc764e2007e4
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ae02b9dc-cbf6-11ea-8620-bc764e2007e4;
 Wed, 22 Jul 2020 08:38:13 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595407093;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:in-reply-to;
 bh=TpwMOPbfwXPVBO8RljRcamkWDptfk3WiGzSDlhn7fT8=;
 b=QpaLvVYO7vDdtqL9mfVUCnHSsh/f45Z1p7tQzE7pVX012S7r9rejM9X8
 1r+Hb+W0AETMEi1DHScN9azEDNqfGmwtGhLF1FZgvRi9ZimOCmu35Ys51
 jMnb+repW4UAhk4k/oxXLGfKDx3SSozPJjdeu93dkwNbtw1eqLQrthTYt w=;
Authentication-Results: esa4.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: nI6XQcCJHIy/AAwzoSZYhEtwPOmBQSBC+bhCr+7Zc7kYgiE6oefmfDuL6jrrDmOVj+sFaMefl8
 ijGcDFmSQ1HhNkGfCEvwLKyYHiCkIzmzGdFX+282nBAcK4gZ5tuaOioIr3a2/9QrNd6Cb5BBdE
 Sau/6xOV5u4viFjX1Ojj9bZIyEqNYITjCZLDtNTs+QHnUQfSTP5dVHViO8FNRFX1JlCBIso/IF
 O+a6BoJJ+gt0BcCDW1M/PtT+MBcDmA1VHjcAsaGOHkEwb/AJx9ZviAmPSqY9jTSkD/NMK+n/oL
 PpU=
X-SBRS: 2.7
X-MesageID: 23777533
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,381,1589256000"; d="scan'208";a="23777533"
Date: Wed, 22 Jul 2020 10:38:05 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: osstest service owner <osstest-admin@xenproject.org>, <jgross@suse.com>,
 <boris.ostrovsky@oracle.com>
Subject: Re: [xen-unstable test] 152067: regressions - trouble:
 fail/pass/starved
Message-ID: <20200722083805.GT7191@Air-de-Roger>
References: <osstest-152067-mainreport@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
In-Reply-To: <osstest-152067-mainreport@xen.org>
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Wed, Jul 22, 2020 at 12:37:46AM +0000, osstest service owner wrote:
> flight 152067 xen-unstable real [real]
> http://logs.test-lab.xenproject.org/osstest/logs/152067/
> 
> Regressions :-(
> 
> Tests which did not succeed and are blocking,
> including tests which could not be run:
>  test-amd64-amd64-dom0pvh-xl-intel 18 guest-localmigrate/x10 fail REGR. vs. 152045

Failure was caused by:

Jul 21 16:20:58.985209 [  530.412043] libxl-save-help: page allocation failure: order:4, mode:0x60c0c0(GFP_KERNEL|__GFP_COMP|__GFP_ZERO), nodemask=(null)
Jul 21 16:21:00.378548 [  530.412261] libxl-save-help cpuset=/ mems_allowed=0
Jul 21 16:21:00.378622 [  530.412318] CPU: 1 PID: 15485 Comm: libxl-save-help Not tainted 4.19.80+ #1
Jul 21 16:21:00.390740 [  530.412377] Hardware name: Dell Inc. PowerEdge R420/0K29HN, BIOS 2.4.2 01/29/2015
Jul 21 16:21:00.390810 [  530.412448] Call Trace:
Jul 21 16:21:00.402721 [  530.412499]  dump_stack+0x72/0x8c
Jul 21 16:21:00.402801 [  530.412541]  warn_alloc.cold.140+0x68/0xe8
Jul 21 16:21:00.402841 [  530.412585]  __alloc_pages_slowpath+0xc73/0xcb0
Jul 21 16:21:00.414737 [  530.412640]  ? __do_page_fault+0x249/0x4d0
Jul 21 16:21:00.414786 [  530.412681]  __alloc_pages_nodemask+0x235/0x250
Jul 21 16:21:00.426555 [  530.412734]  kmalloc_order+0x13/0x60
Jul 21 16:21:00.426619 [  530.412774]  kmalloc_order_trace+0x18/0xa0
Jul 21 16:21:00.426671 [  530.412816]  alloc_empty_pages.isra.15+0x24/0x60
Jul 21 16:21:00.438447 [  530.412867]  privcmd_ioctl_mmap_batch.isra.18+0x303/0x320
Jul 21 16:21:00.438507 [  530.412918]  ? vmacache_find+0xb0/0xb0
Jul 21 16:21:00.450475 [  530.412957]  privcmd_ioctl+0x253/0xa9b
Jul 21 16:21:00.450540 [  530.412996]  ? mmap_region+0x226/0x630
Jul 21 16:21:00.450592 [  530.413043]  ? selinux_mmap_file+0xb0/0xb0
Jul 21 16:21:00.462757 [  530.413084]  ? selinux_file_ioctl+0x15c/0x200
Jul 21 16:21:00.462823 [  530.413136]  do_vfs_ioctl+0x9f/0x630
Jul 21 16:21:00.474698 [  530.413177]  ksys_ioctl+0x5b/0x90
Jul 21 16:21:00.474762 [  530.413224]  __x64_sys_ioctl+0x11/0x20
Jul 21 16:21:00.474813 [  530.413264]  do_syscall_64+0x57/0x130
Jul 21 16:21:00.486480 [  530.413305]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
Jul 21 16:21:00.486548 [  530.413357] RIP: 0033:0x7f4f7ecde427
Jul 21 16:21:00.486600 [  530.413395] Code: 00 00 90 48 8b 05 69 aa 0c 00 64 c7 00 26 00 00 00 48 c7 c0 ff ff ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 b8 10 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d 39 aa 0c 00 f7 d8 64 89 01 48
Jul 21 16:21:00.510766 [  530.413556] RSP: 002b:00007ffc1ef6eb38 EFLAGS: 00000213 ORIG_RAX: 0000000000000010
Jul 21 16:21:00.522758 [  530.413629] RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007f4f7ecde427
Jul 21 16:21:00.534632 [  530.413699] RDX: 00007ffc1ef6eb90 RSI: 0000000000205004 RDI: 0000000000000007
Jul 21 16:21:00.534702 [  530.413810] RBP: 00007ffc1ef6ebe0 R08: 0000000000000007 R09: 0000000000000000
Jul 21 16:21:00.547013 [  530.413881] R10: 0000000000000001 R11: 0000000000000213 R12: 000055d754136200
Jul 21 16:21:00.558751 [  530.413951] R13: 00007ffc1ef6f340 R14: 0000000000000000 R15: 0000000000000000
Jul 21 16:21:00.558846 [  530.414079] Mem-Info:
Jul 21 16:21:00.558928 [  530.414123] active_anon:1724 inactive_anon:3931 isolated_anon:0
Jul 21 16:21:00.570481 [  530.414123]  active_file:7862 inactive_file:86530 isolated_file:0
Jul 21 16:21:00.582599 [  530.414123]  unevictable:0 dirty:18 writeback:0 unstable:0
Jul 21 16:21:00.582668 [  530.414123]  slab_reclaimable:4704 slab_unreclaimable:4036
Jul 21 16:21:00.594782 [  530.414123]  mapped:3461 shmem:124 pagetables:372 bounce:0
Jul 21 16:21:00.594849 [  530.414123]  free:1863 free_pcp:16 free_cma:0
Jul 21 16:21:00.606733 [  530.414579] Node 0 active_anon:6896kB inactive_anon:15724kB active_file:31448kB inactive_file:346120kB unevictable:0kB isolated(anon):0kB isolated(file):0kB mapped:13844kB dirty:72kB writeback:0kB shmem:496kB writeback_tmp:0kB unstable:0kB all_unreclaimable? no
Jul 21 16:21:00.630626 [  530.414870] DMA free:1816kB min:92kB low:112kB high:132kB active_anon:0kB inactive_anon:0kB active_file:76kB inactive_file:9988kB unevictable:0kB writepending:0kB present:15980kB managed:14328kB mlocked:0kB kernel_stack:0kB pagetables:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
Jul 21 16:21:00.658448 [  530.415329] lowmem_reserve[]: 0 431 431 431
Jul 21 16:21:00.658513 [  530.415404] DMA32 free:5512kB min:2608kB low:3260kB high:3912kB active_anon:6896kB inactive_anon:15724kB active_file:31372kB inactive_file:336132kB unevictable:0kB writepending:72kB present:508300kB managed:451760kB mlocked:0kB kernel_stack:2848kB pagetables:1488kB bounce:0kB free_pcp:184kB local_pcp:0kB free_cma:0kB
Jul 21 16:21:00.694702 [  530.415742] lowmem_reserve[]: 0 0 0 0
Jul 21 16:21:00.694778 [  530.415806] DMA: 8*4kB (UM) 3*8kB (UM) 4*16kB (UM) 3*32kB (M) 5*64kB (UM) 2*128kB (UM) 4*256kB (UM) 0*512kB 0*1024kB 0*2048kB 0*4096kB = 1816kB
Jul 21 16:21:00.706798 [  530.416015] DMA32: 4*4kB (UH) 459*8kB (MH) 2*16kB (H) 6*32kB (H) 5*64kB (H) 4*128kB (H) 3*256kB (H) 0*512kB 0*1024kB 0*2048kB 0*4096kB = 5512kB
Jul 21 16:21:00.718789 [  530.416287] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=1048576kB
Jul 21 16:21:00.730785 [  530.416413] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
Jul 21 16:21:00.742847 [  530.416538] 94608 total pagecache pages
Jul 21 16:21:00.742881 [  530.416598] 79 pages in swap cache
Jul 21 16:21:00.754859 [  530.416670] Swap cache stats: add 702, delete 623, find 948/1025
Jul 21 16:21:00.754924 [  530.416759] Free swap  = 1947124kB
Jul 21 16:21:00.766880 [  530.416822] Total swap = 1949692kB
Jul 21 16:21:00.766960 [  530.416924] 131070 pages RAM
Jul 21 16:21:00.767021 [  530.416988] 0 pages HighMem/MovableOnly
Jul 21 16:21:00.778697 [  530.417051] 14548 pages reserved

AFAICT from the kernel config used for the test [0]
CONFIG_XEN_BALLOON_MEMORY_HOTPLUG is enabled, so I'm not sure where
the memory exhaustion is coming from. Maybe 512M is too low for a PVH
dom0, even when using hotplug balloon memory?

Roger.

[0] http://logs.test-lab.xenproject.org/osstest/logs/152067/build-amd64-pvops/godello0--kconfig
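
If 512M does turn out to be too little for a PVH dom0, the allocation is
controlled by the dom0_mem option on the Xen command line; a hypothetical
grub fragment raising it might look like (file path and value are
illustrative, not taken from the osstest configuration):

```shell
# /etc/default/grub (illustrative)
GRUB_CMDLINE_XEN_DEFAULT="dom0_mem=1024M,max:1024M"
```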


From xen-devel-bounces@lists.xenproject.org Wed Jul 22 08:40:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jul 2020 08:40:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyAIf-0001K5-Sj; Wed, 22 Jul 2020 08:40:29 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=mY6V=BB=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jyAIe-0001K0-CE
 for xen-devel@lists.xenproject.org; Wed, 22 Jul 2020 08:40:28 +0000
X-Inumbo-ID: fda90306-cbf6-11ea-8620-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id fda90306-cbf6-11ea-8620-bc764e2007e4;
 Wed, 22 Jul 2020 08:40:27 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 9AB2CAD4A;
 Wed, 22 Jul 2020 08:40:33 +0000 (UTC)
From: Jan Beulich <jbeulich@suse.com>
Subject: Xen 4.11.4 released
To: xen-announce@lists.xenproject.org
Message-ID: <bd86a400-7e5c-60cd-d25a-a0c5cfa3ad43@suse.com>
Date: Wed, 22 Jul 2020 10:40:27 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

All,

better late than never, I am pleased to announce the release of Xen 4.11.4.
It is available immediately from its git repository
http://xenbits.xen.org/gitweb/?p=xen.git;a=shortlog;h=refs/heads/stable-4.11
(tag RELEASE-4.11.4) or from the XenProject download page
https://xenproject.org/downloads/xen-project-archives/xen-project-4-11-series/xen-project-4-11-4/
(where a list of changes can also be found).

We recommend that all users of the 4.11 stable series update to this release,
which is the last point release to be made by the XenProject team from this
stable branch.

Apologies for the much delayed announcement.

Regards, Jan


From xen-devel-bounces@lists.xenproject.org Wed Jul 22 09:00:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jul 2020 09:00:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyAbR-0002PM-KS; Wed, 22 Jul 2020 08:59:53 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=E/5f=BB=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jyAbQ-0002PH-FR
 for xen-devel@lists.xenproject.org; Wed, 22 Jul 2020 08:59:52 +0000
X-Inumbo-ID: b35c8a72-cbf9-11ea-a18a-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b35c8a72-cbf9-11ea-a18a-12813bfff9fa;
 Wed, 22 Jul 2020 08:59:50 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 70A24AC2E;
 Wed, 22 Jul 2020 08:59:57 +0000 (UTC)
Subject: Re: [xen-unstable test] 152067: regressions - trouble:
 fail/pass/starved
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 osstest service owner <osstest-admin@xenproject.org>,
 boris.ostrovsky@oracle.com
References: <osstest-152067-mainreport@xen.org>
 <20200722083805.GT7191@Air-de-Roger>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <d3ba53f3-aae0-b8f1-4205-fedcd59e2243@suse.com>
Date: Wed, 22 Jul 2020 10:59:48 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20200722083805.GT7191@Air-de-Roger>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 22.07.20 10:38, Roger Pau Monné wrote:
> On Wed, Jul 22, 2020 at 12:37:46AM +0000, osstest service owner wrote:
>> flight 152067 xen-unstable real [real]
>> http://logs.test-lab.xenproject.org/osstest/logs/152067/
>>
>> Regressions :-(
>>
>> Tests which did not succeed and are blocking,
>> including tests which could not be run:
>>   test-amd64-amd64-dom0pvh-xl-intel 18 guest-localmigrate/x10 fail REGR. vs. 152045
> 
> Failure was caused by:
> 
> Jul 21 16:20:58.985209 [  530.412043] libxl-save-help: page allocation failure: order:4, mode:0x60c0c0(GFP_KERNEL|__GFP_COMP|__GFP_ZERO), nodemask=(null)
> Jul 21 16:21:00.378548 [  530.412261] libxl-save-help cpuset=/ mems_allowed=0
> Jul 21 16:21:00.378622 [  530.412318] CPU: 1 PID: 15485 Comm: libxl-save-help Not tainted 4.19.80+ #1
> Jul 21 16:21:00.390740 [  530.412377] Hardware name: Dell Inc. PowerEdge R420/0K29HN, BIOS 2.4.2 01/29/2015
> Jul 21 16:21:00.390810 [  530.412448] Call Trace:
> Jul 21 16:21:00.402721 [  530.412499]  dump_stack+0x72/0x8c
> Jul 21 16:21:00.402801 [  530.412541]  warn_alloc.cold.140+0x68/0xe8
> Jul 21 16:21:00.402841 [  530.412585]  __alloc_pages_slowpath+0xc73/0xcb0
> Jul 21 16:21:00.414737 [  530.412640]  ? __do_page_fault+0x249/0x4d0
> Jul 21 16:21:00.414786 [  530.412681]  __alloc_pages_nodemask+0x235/0x250
> Jul 21 16:21:00.426555 [  530.412734]  kmalloc_order+0x13/0x60
> Jul 21 16:21:00.426619 [  530.412774]  kmalloc_order_trace+0x18/0xa0
> Jul 21 16:21:00.426671 [  530.412816]  alloc_empty_pages.isra.15+0x24/0x60
> Jul 21 16:21:00.438447 [  530.412867]  privcmd_ioctl_mmap_batch.isra.18+0x303/0x320
> Jul 21 16:21:00.438507 [  530.412918]  ? vmacache_find+0xb0/0xb0
> Jul 21 16:21:00.450475 [  530.412957]  privcmd_ioctl+0x253/0xa9b
> Jul 21 16:21:00.450540 [  530.412996]  ? mmap_region+0x226/0x630
> Jul 21 16:21:00.450592 [  530.413043]  ? selinux_mmap_file+0xb0/0xb0
> Jul 21 16:21:00.462757 [  530.413084]  ? selinux_file_ioctl+0x15c/0x200
> Jul 21 16:21:00.462823 [  530.413136]  do_vfs_ioctl+0x9f/0x630
> Jul 21 16:21:00.474698 [  530.413177]  ksys_ioctl+0x5b/0x90
> Jul 21 16:21:00.474762 [  530.413224]  __x64_sys_ioctl+0x11/0x20
> Jul 21 16:21:00.474813 [  530.413264]  do_syscall_64+0x57/0x130
> Jul 21 16:21:00.486480 [  530.413305]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
> Jul 21 16:21:00.486548 [  530.413357] RIP: 0033:0x7f4f7ecde427
> Jul 21 16:21:00.486600 [  530.413395] Code: 00 00 90 48 8b 05 69 aa 0c 00 64 c7 00 26 00 00 00 48 c7 c0 ff ff ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 b8 10 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d 39 aa 0c 00 f7 d8 64 89 01 48
> Jul 21 16:21:00.510766 [  530.413556] RSP: 002b:00007ffc1ef6eb38 EFLAGS: 00000213 ORIG_RAX: 0000000000000010
> Jul 21 16:21:00.522758 [  530.413629] RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007f4f7ecde427
> Jul 21 16:21:00.534632 [  530.413699] RDX: 00007ffc1ef6eb90 RSI: 0000000000205004 RDI: 0000000000000007
> Jul 21 16:21:00.534702 [  530.413810] RBP: 00007ffc1ef6ebe0 R08: 0000000000000007 R09: 0000000000000000
> Jul 21 16:21:00.547013 [  530.413881] R10: 0000000000000001 R11: 0000000000000213 R12: 000055d754136200
> Jul 21 16:21:00.558751 [  530.413951] R13: 00007ffc1ef6f340 R14: 0000000000000000 R15: 0000000000000000
> Jul 21 16:21:00.558846 [  530.414079] Mem-Info:
> Jul 21 16:21:00.558928 [  530.414123] active_anon:1724 inactive_anon:3931 isolated_anon:0
> Jul 21 16:21:00.570481 [  530.414123]  active_file:7862 inactive_file:86530 isolated_file:0
> Jul 21 16:21:00.582599 [  530.414123]  unevictable:0 dirty:18 writeback:0 unstable:0
> Jul 21 16:21:00.582668 [  530.414123]  slab_reclaimable:4704 slab_unreclaimable:4036
> Jul 21 16:21:00.594782 [  530.414123]  mapped:3461 shmem:124 pagetables:372 bounce:0
> Jul 21 16:21:00.594849 [  530.414123]  free:1863 free_pcp:16 free_cma:0
> Jul 21 16:21:00.606733 [  530.414579] Node 0 active_anon:6896kB inactive_anon:15724kB active_file:31448kB inactive_file:346120kB unevictable:0kB isolated(anon):0kB isolated(file):0kB mapped:13844kB dirty:72kB writeback:0kB shmem:496kB writeback_tmp:0kB unstable:0kB all_unreclaimable? no
> Jul 21 16:21:00.630626 [  530.414870] DMA free:1816kB min:92kB low:112kB high:132kB active_anon:0kB inactive_anon:0kB active_file:76kB inactive_file:9988kB unevictable:0kB writepending:0kB present:15980kB managed:14328kB mlocked:0kB kernel_stack:0kB pagetables:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
> Jul 21 16:21:00.658448 [  530.415329] lowmem_reserve[]: 0 431 431 431
> Jul 21 16:21:00.658513 [  530.415404] DMA32 free:5512kB min:2608kB low:3260kB high:3912kB active_anon:6896kB inactive_anon:15724kB active_file:31372kB inactive_file:336132kB unevictable:0kB writepending:72kB present:508300kB managed:451760kB mlocked:0kB kernel_stack:2848kB pagetables:1488kB bounce:0kB free_pcp:184kB local_pcp:0kB free_cma:0kB
> Jul 21 16:21:00.694702 [  530.415742] lowmem_reserve[]: 0 0 0 0
> Jul 21 16:21:00.694778 [  530.415806] DMA: 8*4kB (UM) 3*8kB (UM) 4*16kB (UM) 3*32kB (M) 5*64kB (UM) 2*128kB (UM) 4*256kB (UM) 0*512kB 0*1024kB 0*2048kB 0*4096kB = 1816kB
> Jul 21 16:21:00.706798 [  530.416015] DMA32: 4*4kB (UH) 459*8kB (MH) 2*16kB (H) 6*32kB (H) 5*64kB (H) 4*128kB (H) 3*256kB (H) 0*512kB 0*1024kB 0*2048kB 0*4096kB = 5512kB
> Jul 21 16:21:00.718789 [  530.416287] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=1048576kB
> Jul 21 16:21:00.730785 [  530.416413] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
> Jul 21 16:21:00.742847 [  530.416538] 94608 total pagecache pages
> Jul 21 16:21:00.742881 [  530.416598] 79 pages in swap cache
> Jul 21 16:21:00.754859 [  530.416670] Swap cache stats: add 702, delete 623, find 948/1025
> Jul 21 16:21:00.754924 [  530.416759] Free swap  = 1947124kB
> Jul 21 16:21:00.766880 [  530.416822] Total swap = 1949692kB
> Jul 21 16:21:00.766960 [  530.416924] 131070 pages RAM
> Jul 21 16:21:00.767021 [  530.416988] 0 pages HighMem/MovableOnly
> Jul 21 16:21:00.778697 [  530.417051] 14548 pages reserved
> 
> AFAICT from the kernel config used for the test [0]
> CONFIG_XEN_BALLOON_MEMORY_HOTPLUG is enabled, so I'm not sure where
> the memory exhaustion is coming from. Maybe 512M is too low for a PVH
> dom0, even when using hotplug balloon memory?

I don't see how CONFIG_XEN_BALLOON_MEMORY_HOTPLUG would help here, as it
is used for real memory hotplug only. Well, you _can_ use it for
mapping foreign pages, but you'd have to:

echo 1 > /proc/sys/xen/balloon/hotplug_unpopulated
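
(To make that persistent across reboots — assuming the /proc/sys path maps
to the sysctl key xen.balloon.hotplug_unpopulated as usual — a sysctl.d
fragment could be used; the file name below is illustrative:)

```shell
# /etc/sysctl.d/90-xen-balloon.conf (illustrative file name)
# Allow unpopulated hotplug memory ranges to be used for foreign mappings
xen.balloon.hotplug_unpopulated = 1
```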


Juergen


From xen-devel-bounces@lists.xenproject.org Wed Jul 22 09:03:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jul 2020 09:03:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyAec-0003EB-4n; Wed, 22 Jul 2020 09:03:10 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=bhkO=BB=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jyAea-0003E5-SW
 for xen-devel@lists.xenproject.org; Wed, 22 Jul 2020 09:03:08 +0000
X-Inumbo-ID: 282ff564-cbfa-11ea-a18b-12813bfff9fa
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 282ff564-cbfa-11ea-a18b-12813bfff9fa;
 Wed, 22 Jul 2020 09:03:06 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595408586;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:content-transfer-encoding:in-reply-to;
 bh=aJt824IVbf+hA/vQVw7LywK/efgxhYNHKxqks+Cm4TQ=;
 b=d8OSSSmynezrJpFvOENTWyj+W8h9+RvGmLjXXRt4W4uX7E3m6C0oQJh6
 vycJ5EFSbsAsm+qCQzrqyZV6YJVUgKrCi7m+GLyB63dQ447/SwBClh9yv
 C3QQkVGVnhHgCnD5XnpUiocI00JKhPwVyWDBBeuKBCjN6pXc0mwlAhgm2 E=;
Authentication-Results: esa6.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: b52Q5mOaG4l26cX+eL/q8cdu/pRmB2UgGYkgbhQ+HtFsOGvt0JiGKLlceKdwUEACDqFFV+e14e
 MrQNF7QicQXE08gB9q1A4ySJrtNksw1ABDic/cL3wbH18r7NWmJ4auVwxCGOmTsw+RxzeIdH4y
 7PCQENV0kDMkgTBjMgXlTT6sgI+4wZMi42+dmRpOCZY1Bv17O+zIqNRuuAonrW30Kq8JeGvjcd
 Ir4C1iMBrYZJSM1JMsbpCR44KgveWicfswonMGGkDu81HXaOL1PhHkXoIHBNRhrBPIJfTNU2Eb
 ao0=
X-SBRS: 2.7
X-MesageID: 23247865
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,381,1589256000"; d="scan'208";a="23247865"
Date: Wed, 22 Jul 2020 11:02:59 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: =?utf-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Subject: Re: [xen-unstable test] 152067: regressions - trouble:
 fail/pass/starved
Message-ID: <20200722090259.GU7191@Air-de-Roger>
References: <osstest-152067-mainreport@xen.org>
 <20200722083805.GT7191@Air-de-Roger>
 <d3ba53f3-aae0-b8f1-4205-fedcd59e2243@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <d3ba53f3-aae0-b8f1-4205-fedcd59e2243@suse.com>
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com,
 osstest service owner <osstest-admin@xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Wed, Jul 22, 2020 at 10:59:48AM +0200, Jürgen Groß wrote:
> On 22.07.20 10:38, Roger Pau Monné wrote:
> > On Wed, Jul 22, 2020 at 12:37:46AM +0000, osstest service owner wrote:
> > > flight 152067 xen-unstable real [real]
> > > http://logs.test-lab.xenproject.org/osstest/logs/152067/
> > > 
> > > Regressions :-(
> > > 
> > > Tests which did not succeed and are blocking,
> > > including tests which could not be run:
> > >   test-amd64-amd64-dom0pvh-xl-intel 18 guest-localmigrate/x10 fail REGR. vs. 152045
> > 
> > Failure was caused by:
> > 
> > Jul 21 16:20:58.985209 [  530.412043] libxl-save-help: page allocation failure: order:4, mode:0x60c0c0(GFP_KERNEL|__GFP_COMP|__GFP_ZERO), nodemask=(null)
> > Jul 21 16:21:00.378548 [  530.412261] libxl-save-help cpuset=/ mems_allowed=0
> > Jul 21 16:21:00.378622 [  530.412318] CPU: 1 PID: 15485 Comm: libxl-save-help Not tainted 4.19.80+ #1
> > Jul 21 16:21:00.390740 [  530.412377] Hardware name: Dell Inc. PowerEdge R420/0K29HN, BIOS 2.4.2 01/29/2015
> > Jul 21 16:21:00.390810 [  530.412448] Call Trace:
> > Jul 21 16:21:00.402721 [  530.412499]  dump_stack+0x72/0x8c
> > Jul 21 16:21:00.402801 [  530.412541]  warn_alloc.cold.140+0x68/0xe8
> > Jul 21 16:21:00.402841 [  530.412585]  __alloc_pages_slowpath+0xc73/0xcb0
> > Jul 21 16:21:00.414737 [  530.412640]  ? __do_page_fault+0x249/0x4d0
> > Jul 21 16:21:00.414786 [  530.412681]  __alloc_pages_nodemask+0x235/0x250
> > Jul 21 16:21:00.426555 [  530.412734]  kmalloc_order+0x13/0x60
> > Jul 21 16:21:00.426619 [  530.412774]  kmalloc_order_trace+0x18/0xa0
> > Jul 21 16:21:00.426671 [  530.412816]  alloc_empty_pages.isra.15+0x24/0x60
> > Jul 21 16:21:00.438447 [  530.412867]  privcmd_ioctl_mmap_batch.isra.18+0x303/0x320
> > Jul 21 16:21:00.438507 [  530.412918]  ? vmacache_find+0xb0/0xb0
> > Jul 21 16:21:00.450475 [  530.412957]  privcmd_ioctl+0x253/0xa9b
> > Jul 21 16:21:00.450540 [  530.412996]  ? mmap_region+0x226/0x630
> > Jul 21 16:21:00.450592 [  530.413043]  ? selinux_mmap_file+0xb0/0xb0
> > Jul 21 16:21:00.462757 [  530.413084]  ? selinux_file_ioctl+0x15c/0x200
> > Jul 21 16:21:00.462823 [  530.413136]  do_vfs_ioctl+0x9f/0x630
> > Jul 21 16:21:00.474698 [  530.413177]  ksys_ioctl+0x5b/0x90
> > Jul 21 16:21:00.474762 [  530.413224]  __x64_sys_ioctl+0x11/0x20
> > Jul 21 16:21:00.474813 [  530.413264]  do_syscall_64+0x57/0x130
> > Jul 21 16:21:00.486480 [  530.413305]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
> > Jul 21 16:21:00.486548 [  530.413357] RIP: 0033:0x7f4f7ecde427
> > Jul 21 16:21:00.486600 [  530.413395] Code: 00 00 90 48 8b 05 69 aa 0c 00 64 c7 00 26 00 00 00 48 c7 c0 ff ff ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 b8 10 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d 39 aa 0c 00 f7 d8 64 89 01 48
> > Jul 21 16:21:00.510766 [  530.413556] RSP: 002b:00007ffc1ef6eb38 EFLAGS: 00000213 ORIG_RAX: 0000000000000010
> > Jul 21 16:21:00.522758 [  530.413629] RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007f4f7ecde427
> > Jul 21 16:21:00.534632 [  530.413699] RDX: 00007ffc1ef6eb90 RSI: 0000000000205004 RDI: 0000000000000007
> > Jul 21 16:21:00.534702 [  530.413810] RBP: 00007ffc1ef6ebe0 R08: 0000000000000007 R09: 0000000000000000
> > Jul 21 16:21:00.547013 [  530.413881] R10: 0000000000000001 R11: 0000000000000213 R12: 000055d754136200
> > Jul 21 16:21:00.558751 [  530.413951] R13: 00007ffc1ef6f340 R14: 0000000000000000 R15: 0000000000000000
> > Jul 21 16:21:00.558846 [  530.414079] Mem-Info:
> > Jul 21 16:21:00.558928 [  530.414123] active_anon:1724 inactive_anon:3931 isolated_anon:0
> > Jul 21 16:21:00.570481 [  530.414123]  active_file:7862 inactive_file:86530 isolated_file:0
> > Jul 21 16:21:00.582599 [  530.414123]  unevictable:0 dirty:18 writeback:0 unstable:0
> > Jul 21 16:21:00.582668 [  530.414123]  slab_reclaimable:4704 slab_unreclaimable:4036
> > Jul 21 16:21:00.594782 [  530.414123]  mapped:3461 shmem:124 pagetables:372 bounce:0
> > Jul 21 16:21:00.594849 [  530.414123]  free:1863 free_pcp:16 free_cma:0
> > Jul 21 16:21:00.606733 [  530.414579] Node 0 active_anon:6896kB inactive_anon:15724kB active_file:31448kB inactive_file:346120kB unevictable:0kB isolated(anon):0kB isolated(file):0kB mapped:13844kB dirty:72kB writeback:0kB shmem:496kB writeback_tmp:0kB unstable:0kB all_unreclaimable? no
> > Jul 21 16:21:00.630626 [  530.414870] DMA free:1816kB min:92kB low:112kB high:132kB active_anon:0kB inactive_anon:0kB active_file:76kB inactive_file:9988kB unevictable:0kB writepending:0kB present:15980kB managed:14328kB mlocked:0kB kernel_stack:0kB pagetables:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
> > Jul 21 16:21:00.658448 [  530.415329] lowmem_reserve[]: 0 431 431 431
> > Jul 21 16:21:00.658513 [  530.415404] DMA32 free:5512kB min:2608kB low:3260kB high:3912kB active_anon:6896kB inactive_anon:15724kB active_file:31372kB inactive_file:336132kB unevictable:0kB writepending:72kB present:508300kB managed:451760kB mlocked:0kB kernel_stack:2848kB pagetables:1488kB bounce:0kB free_pcp:184kB local_pcp:0kB free_cma:0kB
> > Jul 21 16:21:00.694702 [  530.415742] lowmem_reserve[]: 0 0 0 0
> > Jul 21 16:21:00.694778 [  530.415806] DMA: 8*4kB (UM) 3*8kB (UM) 4*16kB (UM) 3*32kB (M) 5*64kB (UM) 2*128kB (UM) 4*256kB (UM) 0*512kB 0*1024kB 0*2048kB 0*4096kB = 1816kB
> > Jul 21 16:21:00.706798 [  530.416015] DMA32: 4*4kB (UH) 459*8kB (MH) 2*16kB (H) 6*32kB (H) 5*64kB (H) 4*128kB (H) 3*256kB (H) 0*512kB 0*1024kB 0*2048kB 0*4096kB = 5512kB
> > Jul 21 16:21:00.718789 [  530.416287] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=1048576kB
> > Jul 21 16:21:00.730785 [  530.416413] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
> > Jul 21 16:21:00.742847 [  530.416538] 94608 total pagecache pages
> > Jul 21 16:21:00.742881 [  530.416598] 79 pages in swap cache
> > Jul 21 16:21:00.754859 [  530.416670] Swap cache stats: add 702, delete 623, find 948/1025
> > Jul 21 16:21:00.754924 [  530.416759] Free swap  = 1947124kB
> > Jul 21 16:21:00.766880 [  530.416822] Total swap = 1949692kB
> > Jul 21 16:21:00.766960 [  530.416924] 131070 pages RAM
> > Jul 21 16:21:00.767021 [  530.416988] 0 pages HighMem/MovableOnly
> > Jul 21 16:21:00.778697 [  530.417051] 14548 pages reserved
> > 
> > AFAICT from the kernel config used for the test [0]
> > CONFIG_XEN_BALLOON_MEMORY_HOTPLUG is enabled, so I'm not sure where
> > the memory exhaustion is coming from. Maybe 512M is too low for a PVH
> > dom0, even when using hotplug balloon memory?
> 
> I don't see how CONFIG_XEN_BALLOON_MEMORY_HOTPLUG would help here, as it
> will be used for real memory hotplug only. Well, you _can_ use it for
> mapping of foreign pages, but you'd have to:
> 
> echo 1 > /proc/sys/xen/balloon/hotplug_unpopulated

Uh, I've completely missed the point then. I assume there's some
reason for not doing it by default then? (using empty hotplug ranges
to map foreign memory)

Roger.


From xen-devel-bounces@lists.xenproject.org Wed Jul 22 09:08:29 2020
Subject: Re: [PATCH v2 04/11] x86/xen: add system core suspend and resume
 callbacks
To: Anchal Agarwal <anchalag@amazon.com>, tglx@linutronix.de,
 mingo@redhat.com, bp@alien8.de, hpa@zytor.com, x86@kernel.org,
 boris.ostrovsky@oracle.com, jgross@suse.com, linux-pm@vger.kernel.org,
 linux-mm@kvack.org, kamatam@amazon.com, sstabellini@kernel.org,
 konrad.wilk@oracle.com, roger.pau@citrix.com, axboe@kernel.dk,
 davem@davemloft.net, rjw@rjwysocki.net, len.brown@intel.com, pavel@ucw.cz,
 peterz@infradead.org, eduval@amazon.com, sblbir@amazon.com,
 xen-devel@lists.xenproject.org, vkuznets@redhat.com, netdev@vger.kernel.org,
 linux-kernel@vger.kernel.org, dwmw@amazon.co.uk, benh@kernel.crashing.org
References: <cover.1593665947.git.anchalag@amazon.com>
 <20200702182205.GA3531@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
From: Julien Grall <julien@xen.org>
Message-ID: <b8445e93-deed-1a28-cd3b-993d42c78251@xen.org>
Date: Wed, 22 Jul 2020 10:08:05 +0100
In-Reply-To: <20200702182205.GA3531@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>

Hi,

On 02/07/2020 19:22, Anchal Agarwal wrote:
> diff --git a/include/xen/xen-ops.h b/include/xen/xen-ops.h
> index 2521d6a306cd..9fa8a4082d68 100644
> --- a/include/xen/xen-ops.h
> +++ b/include/xen/xen-ops.h
> @@ -41,6 +41,8 @@ u64 xen_steal_clock(int cpu);
>   int xen_setup_shutdown_event(void);
>   
>   bool xen_is_xen_suspend(void);
> +void xen_setup_syscore_ops(void);

The function is only implemented and used by x86. So shouldn't this be 
declared in an x86 header?

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Jul 22 09:16:22 2020
Subject: Re: [PATCH] x86/svm: Fold nsvm_{wr,rd}msr() into
 svm_msr_{read,write}_intercept()
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <20200721172208.12176-1-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <b3c3dfa9-d2b8-1042-ecf1-8f51351807e1@suse.com>
Date: Wed, 22 Jul 2020 11:16:11 +0200
In-Reply-To: <20200721172208.12176-1-andrew.cooper3@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 Roger Pau Monné <roger.pau@citrix.com>

On 21.07.2020 19:22, Andrew Cooper wrote:
> ... to simplify the default cases.
> 
> There are multiple errors with the handling of these three MSRs, but they are
> deliberately not addressed this point.
> 
> This removes the dance converting -1/0/1 into X86EMUL_*, allowing for the
> removal of the 'ret' variable.
> 
> While cleaning this up, drop the gdprintk()'s for #GP conditions, and the
> 'result' variable from svm_msr_write_intercept() is it is never modified.
> 
> No functional change.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>

However, ...

> @@ -2085,6 +2091,22 @@ static int svm_msr_write_intercept(unsigned int msr, uint64_t msr_content)
>              goto gpf;
>          break;
>  
> +    case MSR_K8_VM_CR:
> +        /* ignore write. handle all bits as read-only. */
> +        break;
> +
> +    case MSR_K8_VM_HSAVE_PA:
> +        if ( (msr_content & ~PAGE_MASK) || msr_content > 0xfd00000000ULL )

... while you're moving this code here, wouldn't it be worthwhile
to at least fix the > to be >= ?

Jan
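
[Editorial illustration, not part of the original mail: the boundary Jan is pointing at can be seen with a standalone model of the quoted check. PAGE_MASK and the 0xfd00000000 limit are reproduced here for a 4k-page build; this is a sketch, not Xen code.]

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define PAGE_MASK       (~UINT64_C(0xfff))      /* 4k pages, as on x86 */
#define HSAVE_PA_LIMIT  UINT64_C(0xfd00000000)  /* limit used in the quoted patch */

/* The check as written in the patch: '>' lets the limit itself through. */
static bool hsave_pa_invalid_gt(uint64_t msr_content)
{
    return (msr_content & ~PAGE_MASK) || msr_content > HSAVE_PA_LIMIT;
}

/* The check with Jan's suggested fix: '>=' also rejects the limit itself. */
static bool hsave_pa_invalid_ge(uint64_t msr_content)
{
    return (msr_content & ~PAGE_MASK) || msr_content >= HSAVE_PA_LIMIT;
}
```

With '>', a write of exactly 0xfd00000000 (page-aligned, not strictly greater than the limit) is accepted; with '>=' it is rejected.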


From xen-devel-bounces@lists.xenproject.org Wed Jul 22 09:23:33 2020
Subject: Re: [xen-unstable test] 152067: regressions - trouble:
 fail/pass/starved
To: Roger Pau Monné <roger.pau@citrix.com>
References: <osstest-152067-mainreport@xen.org>
 <20200722083805.GT7191@Air-de-Roger>
 <d3ba53f3-aae0-b8f1-4205-fedcd59e2243@suse.com>
 <20200722090259.GU7191@Air-de-Roger>
From: Jürgen Groß <jgross@suse.com>
Message-ID: <c69e8dd0-97ba-545c-fc58-748012513cd7@suse.com>
Date: Wed, 22 Jul 2020 11:23:20 +0200
In-Reply-To: <20200722090259.GU7191@Air-de-Roger>
Cc: xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com,
 osstest service owner <osstest-admin@xenproject.org>

On 22.07.20 11:02, Roger Pau Monné wrote:
> On Wed, Jul 22, 2020 at 10:59:48AM +0200, Jürgen Groß wrote:
>> On 22.07.20 10:38, Roger Pau Monné wrote:
>>> On Wed, Jul 22, 2020 at 12:37:46AM +0000, osstest service owner wrote:
>>>> flight 152067 xen-unstable real [real]
>>>> http://logs.test-lab.xenproject.org/osstest/logs/152067/
>>>>
>>>> Regressions :-(
>>>>
>>>> Tests which did not succeed and are blocking,
>>>> including tests which could not be run:
>>>>    test-amd64-amd64-dom0pvh-xl-intel 18 guest-localmigrate/x10 fail REGR. vs. 152045
>>>
>>> Failure was caused by:
>>>
>>> Jul 21 16:20:58.985209 [  530.412043] libxl-save-help: page allocation failure: order:4, mode:0x60c0c0(GFP_KERNEL|__GFP_COMP|__GFP_ZERO), nodemask=(null)
>>> Jul 21 16:21:00.378548 [  530.412261] libxl-save-help cpuset=/ mems_allowed=0
>>> Jul 21 16:21:00.378622 [  530.412318] CPU: 1 PID: 15485 Comm: libxl-save-help Not tainted 4.19.80+ #1
>>> Jul 21 16:21:00.390740 [  530.412377] Hardware name: Dell Inc. PowerEdge R420/0K29HN, BIOS 2.4.2 01/29/2015
>>> Jul 21 16:21:00.390810 [  530.412448] Call Trace:
>>> [... stack trace, register dump and Mem-Info identical to the log quoted above ...]
>>>
>>> AFAICT from the kernel config used for the test [0]
>>> CONFIG_XEN_BALLOON_MEMORY_HOTPLUG is enabled, so I'm not sure where
>>> the memory exhaustion is coming from. Maybe 512M is too low for a PVH
>>> dom0, even when using hotplug balloon memory?
>>
>> I don't see how CONFIG_XEN_BALLOON_MEMORY_HOTPLUG would help here, as it
>> will be used for real memory hotplug only. Well, you _can_ use it for
>> mapping of foreign pages, but you'd have to:
>>
>> echo 1 > /proc/sys/xen/balloon/hotplug_unpopulated
> 
> Uh, I've completely missed the point then. I assume there's some
> reason for not doing it by default then? (using empty hotplug ranges
> to map foreign memory)

This dates back to 2015. See commit 1cf6a6c82918c9aad.

I guess we could initialize xen_hotplug_unpopulated to 1 for a PVH
dom0.


Juergen
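
[Editorial illustration, not part of the original mail: a standalone sketch of the default Juergen suggests. `is_pvh_dom0` is a hypothetical stand-in for the kernel's PVH detection; the real change would assign the `xen_hotplug_unpopulated` sysctl variable during balloon driver init.]

```c
#include <assert.h>
#include <stdbool.h>

/* Standalone model (not kernel code): on a PVH dom0 the sysctl would
 * default to 1, so foreign pages are mapped into unpopulated hotplug
 * ranges instead of pages ballooned out of dom0's own allocation --
 * the exhaustion seen with the 512M dom0 in this thread. */
static int default_hotplug_unpopulated(bool is_pvh_dom0)
{
    return is_pvh_dom0 ? 1 : 0;
}
```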



From xen-devel-bounces@lists.xenproject.org Wed Jul 22 09:27:04 2020
Date: Wed, 22 Jul 2020 11:26:53 +0200
From: Roger Pau Monné <roger.pau@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH] x86/svm: Fold nsvm_{wr,rd}msr() into
 svm_msr_{read,write}_intercept()
Message-ID: <20200722092653.GV7191@Air-de-Roger>
References: <20200721172208.12176-1-andrew.cooper3@citrix.com>
In-Reply-To: <20200721172208.12176-1-andrew.cooper3@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 Jan Beulich <JBeulich@suse.com>

On Tue, Jul 21, 2020 at 06:22:08PM +0100, Andrew Cooper wrote:
> ... to simplify the default cases.
> 
> There are multiple errors with the handling of these three MSRs, but they are
> deliberately not addressed this point.
                            ^ at
> 
> This removes the dance converting -1/0/1 into X86EMUL_*, allowing for the
> removal of the 'ret' variable.
> 
> While cleaning this up, drop the gdprintk()'s for #GP conditions, and the
> 'result' variable from svm_msr_write_intercept() is it is never modified.
                                                   ^ extra is
> 
> No functional change.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

I've got one question (not really patch related).

> @@ -1956,10 +1962,10 @@ static int svm_msr_read_intercept(unsigned int msr, uint64_t *msr_content)
>  
>  static int svm_msr_write_intercept(unsigned int msr, uint64_t msr_content)
>  {
> -    int ret, result = X86EMUL_OKAY;
>      struct vcpu *v = current;
>      struct domain *d = v->domain;
>      struct vmcb_struct *vmcb = v->arch.hvm.svm.vmcb;
> +    struct nestedsvm *nsvm = &vcpu_nestedsvm(v);
>  
>      switch ( msr )
>      {
> @@ -2085,6 +2091,22 @@ static int svm_msr_write_intercept(unsigned int msr, uint64_t msr_content)
>              goto gpf;
>          break;
>  
> +    case MSR_K8_VM_CR:
> +        /* ignore write. handle all bits as read-only. */
> +        break;
> +
> +    case MSR_K8_VM_HSAVE_PA:
> +        if ( (msr_content & ~PAGE_MASK) || msr_content > 0xfd00000000ULL )

Regarding the address check, the PM states "the maximum supported
physical address for this implementation", but I don't seem to be able
to find where this is actually announced.

Roger.


From xen-devel-bounces@lists.xenproject.org Wed Jul 22 09:31:00 2020
Date: Wed, 22 Jul 2020 11:30:41 +0200
From: Roger Pau Monné <roger.pau@citrix.com>
To: Jürgen Groß <jgross@suse.com>
Subject: Re: [xen-unstable test] 152067: regressions - trouble:
 fail/pass/starved
Message-ID: <20200722093041.GW7191@Air-de-Roger>
References: <osstest-152067-mainreport@xen.org>
 <20200722083805.GT7191@Air-de-Roger>
 <d3ba53f3-aae0-b8f1-4205-fedcd59e2243@suse.com>
 <20200722090259.GU7191@Air-de-Roger>
 <c69e8dd0-97ba-545c-fc58-748012513cd7@suse.com>
In-Reply-To: <c69e8dd0-97ba-545c-fc58-748012513cd7@suse.com>
Cc: xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com,
 osstest service owner <osstest-admin@xenproject.org>

On Wed, Jul 22, 2020 at 11:23:20AM +0200, Jürgen Groß wrote:
> On 22.07.20 11:02, Roger Pau Monné wrote:
> > On Wed, Jul 22, 2020 at 10:59:48AM +0200, Jürgen Groß wrote:
> > > On 22.07.20 10:38, Roger Pau Monné wrote:
> > > > On Wed, Jul 22, 2020 at 12:37:46AM +0000, osstest service owner wrote:
> > > > > flight 152067 xen-unstable real [real]
> > > > > http://logs.test-lab.xenproject.org/osstest/logs/152067/
> > > > > 
> > > > > Regressions :-(
> > > > > 
> > > > > Tests which did not succeed and are blocking,
> > > > > including tests which could not be run:
> > > > >    test-amd64-amd64-dom0pvh-xl-intel 18 guest-localmigrate/x10 fail REGR. vs. 152045
> > > > 
> > > > Failure was caused by:
> > > > 
> > > > Jul 21 16:20:58.985209 [  530.412043] libxl-save-help: page allocation failure: order:4, mode:0x60c0c0(GFP_KERNEL|__GFP_COMP|__GFP_ZERO), nodemask=(null)
> > > > Jul 21 16:21:00.378548 [  530.412261] libxl-save-help cpuset=/ mems_allowed=0
> > > > Jul 21 16:21:00.378622 [  530.412318] CPU: 1 PID: 15485 Comm: libxl-save-help Not tainted 4.19.80+ #1
> > > > Jul 21 16:21:00.390740 [  530.412377] Hardware name: Dell Inc. PowerEdge R420/0K29HN, BIOS 2.4.2 01/29/2015
> > > > Jul 21 16:21:00.390810 [  530.412448] Call Trace:
> > > > [... stack trace and register dump identical to the log quoted above ...]
> > > > Jul 21 16:21:00.558751 [  530.413951] R13: 00007ffc1ef6f340 R14: 0000000000000000 R15: 0000000000000000
> > > > Jul 21 16:21:00.558846 [  530.414079] Mem-Info:
> > > > Jul 21 16:21:00.558928 [  530.414123] active_anon:1724 inactive_anon:3931 isolated_anon:0
> > > > Jul 21 16:21:00.570481 [  530.414123]  active_file:7862 inactive_file:86530 isolated_file:0
> > > > Jul 21 16:21:00.582599 [  530.414123]  unevictable:0 dirty:18 writeback:0 unstable:0
> > > > Jul 21 16:21:00.582668 [  530.414123]  slab_reclaimable:4704 slab_unreclaimable:4036
> > > > Jul 21 16:21:00.594782 [  530.414123]  mapped:3461 shmem:124 pagetables:372 bounce:0
> > > > Jul 21 16:21:00.594849 [  530.414123]  free:1863 free_pcp:16 free_cma:0
> > > > Jul 21 16:21:00.606733 [  530.414579] Node 0 active_anon:6896kB inactive_anon:15724kB active_file:31448kB inactive_file:346120kB unevictable:0kB isolated(anon):0kB isolated(file):0kB mapped:13844kB dirty:72kB writeback:0kB shmem:496kB writeback_tmp:0kB unstable:0kB all_unreclaimable? no
> > > > Jul 21 16:21:00.630626 [  530.414870] DMA free:1816kB min:92kB low:112kB high:132kB active_anon:0kB inactive_anon:0kB active_file:76kB inactive_file:9988kB unevictable:0kB writepending:0kB present:15980kB managed:14328kB mlocked:0kB kernel_stack:0kB pagetables:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
> > > > Jul 21 16:21:00.658448 [  530.415329] lowmem_reserve[]: 0 431 431 431
> > > > Jul 21 16:21:00.658513 [  530.415404] DMA32 free:5512kB min:2608kB low:3260kB high:3912kB active_anon:6896kB inactive_anon:15724kB active_file:31372kB inactive_file:336132kB unevictable:0kB writepending:72kB present:508300kB managed:451760kB mlocked:0kB kernel_stack:2848kB pagetables:1488kB bounce:0kB free_pcp:184kB local_pcp:0kB free_cma:0kB
> > > > Jul 21 16:21:00.694702 [  530.415742] lowmem_reserve[]: 0 0 0 0
> > > > Jul 21 16:21:00.694778 [  530.415806] DMA: 8*4kB (UM) 3*8kB (UM) 4*16kB (UM) 3*32kB (M) 5*64kB (UM) 2*128kB (UM) 4*256kB (UM) 0*512kB 0*1024kB 0*2048kB 0*4096kB = 1816kB
> > > > Jul 21 16:21:00.706798 [  530.416015] DMA32: 4*4kB (UH) 459*8kB (MH) 2*16kB (H) 6*32kB (H) 5*64kB (H) 4*128kB (H) 3*256kB (H) 0*512kB 0*1024kB 0*2048kB 0*4096kB = 5512kB
> > > > Jul 21 16:21:00.718789 [  530.416287] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=1048576kB
> > > > Jul 21 16:21:00.730785 [  530.416413] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
> > > > Jul 21 16:21:00.742847 [  530.416538] 94608 total pagecache pages
> > > > Jul 21 16:21:00.742881 [  530.416598] 79 pages in swap cache
> > > > Jul 21 16:21:00.754859 [  530.416670] Swap cache stats: add 702, delete 623, find 948/1025
> > > > Jul 21 16:21:00.754924 [  530.416759] Free swap  = 1947124kB
> > > > Jul 21 16:21:00.766880 [  530.416822] Total swap = 1949692kB
> > > > Jul 21 16:21:00.766960 [  530.416924] 131070 pages RAM
> > > > Jul 21 16:21:00.767021 [  530.416988] 0 pages HighMem/MovableOnly
> > > > Jul 21 16:21:00.778697 [  530.417051] 14548 pages reserved
> > > > 
> > > > AFAICT from the kernel config used for the test [0]
> > > > CONFIG_XEN_BALLOON_MEMORY_HOTPLUG is enabled, so I'm not sure where
> > > > the memory exhaustion is coming from. Maybe 512M is too low for a PVH
> > > > dom0, even when using hotplug balloon memory?
> > > 
> > > I don't see how CONFIG_XEN_BALLOON_MEMORY_HOTPLUG would help here, as it
> > > will be used for real memory hotplug only. Well, you _can_ use it for
> > > mapping of foreign pages, but you'd have to:
> > > 
> > > echo 1 > /proc/sys/xen/balloon/hotplug_unpopulated
> > 
> > Uh, I've completely missed the point then. I assume there's some
> > reason for not doing it by default then? (using empty hotplug ranges
> > to map foreign memory)
> 
> This dates back to 2015. See commit 1cf6a6c82918c9aad.
> 
> I guess we could initialize xen_hotplug_unpopulated with 1 for PVH
> dom0.

Would you like me to enable it in osstest first, and then we can see
about enabling it by default?

Roger.
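For reference, the hotplug_unpopulated toggle under discussion is a one-line write to a sysctl; the helper below is a hypothetical sketch of it. The knob path is the real one quoted in the thread, while the function name and the optional path-override argument are illustrative assumptions (added so the sketch can be exercised on a machine without a Xen dom0).

```shell
#!/bin/sh
# Sketch: allow the balloon driver to satisfy foreign-page mappings
# from unpopulated hotplug ranges, as discussed above.
set -eu

enable_hotplug_unpopulated() {
    # Default to the real sysctl; accept an override path purely so
    # the helper can be tested outside a dom0 (illustrative only).
    knob="${1:-/proc/sys/xen/balloon/hotplug_unpopulated}"
    if [ -e "$knob" ] && [ -w "$knob" ]; then
        printf '1\n' > "$knob"
    else
        echo "hotplug_unpopulated knob not writable: $knob" >&2
        return 1
    fi
}
```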


From xen-devel-bounces@lists.xenproject.org Wed Jul 22 09:34:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jul 2020 09:34:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyB9H-0006GS-6K; Wed, 22 Jul 2020 09:34:51 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=mY6V=BB=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jyB9G-0006GN-3e
 for xen-devel@lists.xenproject.org; Wed, 22 Jul 2020 09:34:50 +0000
X-Inumbo-ID: 95968525-cbfe-11ea-a18b-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 95968525-cbfe-11ea-a18b-12813bfff9fa;
 Wed, 22 Jul 2020 09:34:48 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 049E6ADAD;
 Wed, 22 Jul 2020 09:34:55 +0000 (UTC)
Subject: Re: [PATCH] x86/svm: Fold nsvm_{wr,rd}msr() into
 svm_msr_{read,write}_intercept()
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <20200721172208.12176-1-andrew.cooper3@citrix.com>
 <20200722092653.GV7191@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <d57ec557-3b6a-3571-3c63-08166e40af75@suse.com>
Date: Wed, 22 Jul 2020 11:34:48 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20200722092653.GV7191@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Xen-devel <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 22.07.2020 11:26, Roger Pau Monné wrote:
> On Tue, Jul 21, 2020 at 06:22:08PM +0100, Andrew Cooper wrote:
>> @@ -2085,6 +2091,22 @@ static int svm_msr_write_intercept(unsigned int msr, uint64_t msr_content)
>>              goto gpf;
>>          break;
>>  
>> +    case MSR_K8_VM_CR:
>> +        /* ignore write. handle all bits as read-only. */
>> +        break;
>> +
>> +    case MSR_K8_VM_HSAVE_PA:
>> +        if ( (msr_content & ~PAGE_MASK) || msr_content > 0xfd00000000ULL )
> 
> Regarding the address check, the PM states "the maximum supported
> physical address for this implementation", but I don't seem to be able
> to find where this is actually announced.

I think you'd typically find this information in the BKDG or PPR only.
The PM is generic, while the named two are specific to particular
families or even just models.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jul 22 09:41:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jul 2020 09:41:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyBF6-00075o-Sw; Wed, 22 Jul 2020 09:40:52 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=E/5f=BB=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jyBF6-00075j-4C
 for xen-devel@lists.xenproject.org; Wed, 22 Jul 2020 09:40:52 +0000
X-Inumbo-ID: 6d86f5a4-cbff-11ea-8623-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6d86f5a4-cbff-11ea-8623-bc764e2007e4;
 Wed, 22 Jul 2020 09:40:50 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 468C6B11F;
 Wed, 22 Jul 2020 09:40:57 +0000 (UTC)
Subject: Re: [xen-unstable test] 152067: regressions - trouble:
 fail/pass/starved
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <osstest-152067-mainreport@xen.org>
 <20200722083805.GT7191@Air-de-Roger>
 <d3ba53f3-aae0-b8f1-4205-fedcd59e2243@suse.com>
 <20200722090259.GU7191@Air-de-Roger>
 <c69e8dd0-97ba-545c-fc58-748012513cd7@suse.com>
 <20200722093041.GW7191@Air-de-Roger>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <9dcd984d-a471-7f65-d0e3-47f2cea18a6a@suse.com>
Date: Wed, 22 Jul 2020 11:40:49 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20200722093041.GW7191@Air-de-Roger>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com,
 osstest service owner <osstest-admin@xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 22.07.20 11:30, Roger Pau Monné wrote:
> On Wed, Jul 22, 2020 at 11:23:20AM +0200, Jürgen Groß wrote:
>> On 22.07.20 11:02, Roger Pau Monné wrote:
>>> On Wed, Jul 22, 2020 at 10:59:48AM +0200, Jürgen Groß wrote:
>>>> On 22.07.20 10:38, Roger Pau Monné wrote:
>>>>> On Wed, Jul 22, 2020 at 12:37:46AM +0000, osstest service owner wrote:
>>>>>> flight 152067 xen-unstable real [real]
>>>>>> http://logs.test-lab.xenproject.org/osstest/logs/152067/
>>>>>>
>>>>>> Regressions :-(
>>>>>>
>>>>>> Tests which did not succeed and are blocking,
>>>>>> including tests which could not be run:
>>>>>>     test-amd64-amd64-dom0pvh-xl-intel 18 guest-localmigrate/x10 fail REGR. vs. 152045
>>>>>
>>>>> Failure was caused by:
>>>>>
>>>>> Jul 21 16:20:58.985209 [  530.412043] libxl-save-help: page allocation failure: order:4, mode:0x60c0c0(GFP_KERNEL|__GFP_COMP|__GFP_ZERO), nodemask=(null)
>>>>> Jul 21 16:21:00.378548 [  530.412261] libxl-save-help cpuset=/ mems_allowed=0
>>>>> Jul 21 16:21:00.378622 [  530.412318] CPU: 1 PID: 15485 Comm: libxl-save-help Not tainted 4.19.80+ #1
>>>>> Jul 21 16:21:00.390740 [  530.412377] Hardware name: Dell Inc. PowerEdge R420/0K29HN, BIOS 2.4.2 01/29/2015
>>>>> Jul 21 16:21:00.390810 [  530.412448] Call Trace:
>>>>> Jul 21 16:21:00.402721 [  530.412499]  dump_stack+0x72/0x8c
>>>>> Jul 21 16:21:00.402801 [  530.412541]  warn_alloc.cold.140+0x68/0xe8
>>>>> Jul 21 16:21:00.402841 [  530.412585]  __alloc_pages_slowpath+0xc73/0xcb0
>>>>> Jul 21 16:21:00.414737 [  530.412640]  ? __do_page_fault+0x249/0x4d0
>>>>> Jul 21 16:21:00.414786 [  530.412681]  __alloc_pages_nodemask+0x235/0x250
>>>>> Jul 21 16:21:00.426555 [  530.412734]  kmalloc_order+0x13/0x60
>>>>> Jul 21 16:21:00.426619 [  530.412774]  kmalloc_order_trace+0x18/0xa0
>>>>> Jul 21 16:21:00.426671 [  530.412816]  alloc_empty_pages.isra.15+0x24/0x60
>>>>> Jul 21 16:21:00.438447 [  530.412867]  privcmd_ioctl_mmap_batch.isra.18+0x303/0x320
>>>>> Jul 21 16:21:00.438507 [  530.412918]  ? vmacache_find+0xb0/0xb0
>>>>> Jul 21 16:21:00.450475 [  530.412957]  privcmd_ioctl+0x253/0xa9b
>>>>> Jul 21 16:21:00.450540 [  530.412996]  ? mmap_region+0x226/0x630
>>>>> Jul 21 16:21:00.450592 [  530.413043]  ? selinux_mmap_file+0xb0/0xb0
>>>>> Jul 21 16:21:00.462757 [  530.413084]  ? selinux_file_ioctl+0x15c/0x200
>>>>> Jul 21 16:21:00.462823 [  530.413136]  do_vfs_ioctl+0x9f/0x630
>>>>> Jul 21 16:21:00.474698 [  530.413177]  ksys_ioctl+0x5b/0x90
>>>>> Jul 21 16:21:00.474762 [  530.413224]  __x64_sys_ioctl+0x11/0x20
>>>>> Jul 21 16:21:00.474813 [  530.413264]  do_syscall_64+0x57/0x130
>>>>> Jul 21 16:21:00.486480 [  530.413305]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
>>>>> Jul 21 16:21:00.486548 [  530.413357] RIP: 0033:0x7f4f7ecde427
>>>>> Jul 21 16:21:00.486600 [  530.413395] Code: 00 00 90 48 8b 05 69 aa 0c 00 64 c7 00 26 00 00 00 48 c7 c0 ff ff ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 b8 10 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d 39 aa 0c 00 f7 d8 64 89 01 48
>>>>> Jul 21 16:21:00.510766 [  530.413556] RSP: 002b:00007ffc1ef6eb38 EFLAGS: 00000213 ORIG_RAX: 0000000000000010
>>>>> Jul 21 16:21:00.522758 [  530.413629] RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007f4f7ecde427
>>>>> Jul 21 16:21:00.534632 [  530.413699] RDX: 00007ffc1ef6eb90 RSI: 0000000000205004 RDI: 0000000000000007
>>>>> Jul 21 16:21:00.534702 [  530.413810] RBP: 00007ffc1ef6ebe0 R08: 0000000000000007 R09: 0000000000000000
>>>>> Jul 21 16:21:00.547013 [  530.413881] R10: 0000000000000001 R11: 0000000000000213 R12: 000055d754136200
>>>>> Jul 21 16:21:00.558751 [  530.413951] R13: 00007ffc1ef6f340 R14: 0000000000000000 R15: 0000000000000000
>>>>> Jul 21 16:21:00.558846 [  530.414079] Mem-Info:
>>>>> Jul 21 16:21:00.558928 [  530.414123] active_anon:1724 inactive_anon:3931 isolated_anon:0
>>>>> Jul 21 16:21:00.570481 [  530.414123]  active_file:7862 inactive_file:86530 isolated_file:0
>>>>> Jul 21 16:21:00.582599 [  530.414123]  unevictable:0 dirty:18 writeback:0 unstable:0
>>>>> Jul 21 16:21:00.582668 [  530.414123]  slab_reclaimable:4704 slab_unreclaimable:4036
>>>>> Jul 21 16:21:00.594782 [  530.414123]  mapped:3461 shmem:124 pagetables:372 bounce:0
>>>>> Jul 21 16:21:00.594849 [  530.414123]  free:1863 free_pcp:16 free_cma:0
>>>>> Jul 21 16:21:00.606733 [  530.414579] Node 0 active_anon:6896kB inactive_anon:15724kB active_file:31448kB inactive_file:346120kB unevictable:0kB isolated(anon):0kB isolated(file):0kB mapped:13844kB dirty:72kB writeback:0kB shmem:496kB writeback_tmp:0kB unstable:0kB all_unreclaimable? no
>>>>> Jul 21 16:21:00.630626 [  530.414870] DMA free:1816kB min:92kB low:112kB high:132kB active_anon:0kB inactive_anon:0kB active_file:76kB inactive_file:9988kB unevictable:0kB writepending:0kB present:15980kB managed:14328kB mlocked:0kB kernel_stack:0kB pagetables:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
>>>>> Jul 21 16:21:00.658448 [  530.415329] lowmem_reserve[]: 0 431 431 431
>>>>> Jul 21 16:21:00.658513 [  530.415404] DMA32 free:5512kB min:2608kB low:3260kB high:3912kB active_anon:6896kB inactive_anon:15724kB active_file:31372kB inactive_file:336132kB unevictable:0kB writepending:72kB present:508300kB managed:451760kB mlocked:0kB kernel_stack:2848kB pagetables:1488kB bounce:0kB free_pcp:184kB local_pcp:0kB free_cma:0kB
>>>>> Jul 21 16:21:00.694702 [  530.415742] lowmem_reserve[]: 0 0 0 0
>>>>> Jul 21 16:21:00.694778 [  530.415806] DMA: 8*4kB (UM) 3*8kB (UM) 4*16kB (UM) 3*32kB (M) 5*64kB (UM) 2*128kB (UM) 4*256kB (UM) 0*512kB 0*1024kB 0*2048kB 0*4096kB = 1816kB
>>>>> Jul 21 16:21:00.706798 [  530.416015] DMA32: 4*4kB (UH) 459*8kB (MH) 2*16kB (H) 6*32kB (H) 5*64kB (H) 4*128kB (H) 3*256kB (H) 0*512kB 0*1024kB 0*2048kB 0*4096kB = 5512kB
>>>>> Jul 21 16:21:00.718789 [  530.416287] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=1048576kB
>>>>> Jul 21 16:21:00.730785 [  530.416413] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
>>>>> Jul 21 16:21:00.742847 [  530.416538] 94608 total pagecache pages
>>>>> Jul 21 16:21:00.742881 [  530.416598] 79 pages in swap cache
>>>>> Jul 21 16:21:00.754859 [  530.416670] Swap cache stats: add 702, delete 623, find 948/1025
>>>>> Jul 21 16:21:00.754924 [  530.416759] Free swap  = 1947124kB
>>>>> Jul 21 16:21:00.766880 [  530.416822] Total swap = 1949692kB
>>>>> Jul 21 16:21:00.766960 [  530.416924] 131070 pages RAM
>>>>> Jul 21 16:21:00.767021 [  530.416988] 0 pages HighMem/MovableOnly
>>>>> Jul 21 16:21:00.778697 [  530.417051] 14548 pages reserved
>>>>>
>>>>> AFAICT from the kernel config used for the test [0]
>>>>> CONFIG_XEN_BALLOON_MEMORY_HOTPLUG is enabled, so I'm not sure where
>>>>> the memory exhaustion is coming from. Maybe 512M is too low for a PVH
>>>>> dom0, even when using hotplug balloon memory?
>>>>
>>>> I don't see how CONFIG_XEN_BALLOON_MEMORY_HOTPLUG would help here, as it
>>>> will be used for real memory hotplug only. Well, you _can_ use it for
>>>> mapping of foreign pages, but you'd have to:
>>>>
>>>> echo 1 > /proc/sys/xen/balloon/hotplug_unpopulated
>>>
>>> Uh, I've completely missed the point then. I assume there's some
>>> reason for not doing it by default then? (using empty hotplug ranges
>>> to map foreign memory)
>>
>> This dates back to 2015. See commit 1cf6a6c82918c9aad.
>>
>> I guess we could initialize xen_hotplug_unpopulated with 1 for PVH
>> dom0.
> 
> Would you like me to enable it in osstest first, and then we can see
> about enabling it by default?

Yes, good idea.


Juergen



From xen-devel-bounces@lists.xenproject.org Wed Jul 22 09:52:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jul 2020 09:52:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyBQR-00081Y-1s; Wed, 22 Jul 2020 09:52:35 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dvI5=BB=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jyBQQ-00081T-3n
 for xen-devel@lists.xenproject.org; Wed, 22 Jul 2020 09:52:34 +0000
X-Inumbo-ID: 0f956672-cc01-11ea-8623-bc764e2007e4
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0f956672-cc01-11ea-8623-bc764e2007e4;
 Wed, 22 Jul 2020 09:52:32 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595411552;
 h=subject:to:cc:references:from:message-id:date:
 mime-version:in-reply-to:content-transfer-encoding;
 bh=32/HzdDyda4+/2821/7cORIOl16CkvgJ3RaKlAC/9DM=;
 b=eu8VYf6K6WbS44KE8KkWk9k9njGhrFgL2WvxnQ5fd8JkxJMOPVLZZet0
 52hRo8z+gUs85fSTcUFrQQjYs0Kv0W5BHDhldjQU+SLmnl67WJjpkHwiV
 Yj86DMIiZXihoD2LYdmPsg3PIxj6YNs1roLd7Aki6iZiS4U9WU2Yuc7Gz o=;
Authentication-Results: esa4.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: lEf3Luoy6OykSDTxGeKIQyTWwn5iT5zWPdTPwiM4ebwvQPqJyOht4i/dTneeiToVqJbX+wrU4F
 SE55uaaqzFPwE9iVQ+zzK8wCBhr2yFr4k/+56+6D4HZiBD4p5kgGbJnvUsd1m0QhIT0G6sWxuo
 U8rx2W/mzTm5Lz6x5LFsV5okKyAuGsX5h+J36wkJcftRKVBpXsS4qHxT1d/9QmhId31FGii83U
 xAGi8UE+9Im097ad0MsM1GiiaF2/XbjAOrv7ImtWF1lMZ0gkKfHY4GwSkVngdNeW0eLNW7Gulj
 9tM=
X-SBRS: 2.7
X-MesageID: 23782057
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,381,1589256000"; d="scan'208";a="23782057"
Subject: Re: [PATCH] x86/svm: Fold nsvm_{wr,rd}msr() into
 svm_msr_{read,write}_intercept()
To: Jan Beulich <jbeulich@suse.com>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>
References: <20200721172208.12176-1-andrew.cooper3@citrix.com>
 <20200722092653.GV7191@Air-de-Roger>
 <d57ec557-3b6a-3571-3c63-08166e40af75@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <b6cec319-95d7-2389-16c8-570b7402055c@citrix.com>
Date: Wed, 22 Jul 2020 10:52:27 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <d57ec557-3b6a-3571-3c63-08166e40af75@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 22/07/2020 10:34, Jan Beulich wrote:
> On 22.07.2020 11:26, Roger Pau Monné wrote:
>> On Tue, Jul 21, 2020 at 06:22:08PM +0100, Andrew Cooper wrote:
>>> @@ -2085,6 +2091,22 @@ static int svm_msr_write_intercept(unsigned int msr, uint64_t msr_content)
>>>              goto gpf;
>>>          break;
>>>  
>>> +    case MSR_K8_VM_CR:
>>> +        /* ignore write. handle all bits as read-only. */
>>> +        break;
>>> +
>>> +    case MSR_K8_VM_HSAVE_PA:
>>> +        if ( (msr_content & ~PAGE_MASK) || msr_content > 0xfd00000000ULL )
>> Regarding the address check, the PM states "the maximum supported
>> physical address for this implementation", but I don't seem to be able
>> to find where this is actually announced.
> I think you'd typically find this information in the BKDG or PPR only.
> The PM is generic, while the named two are specific to particular
> families or even just models.

Furthermore, the BKDG/PPRs are misleading/wrong.

For pre Fam17h, it is MAXPHYSADDR - 12G, which gives a limit lower than
0xfd00000000 on various SoC and embedded platforms.

On Fam17h, it is also lowered dynamically by how much memory encryption
is turned on (and therefore steals bits from the upper end of MAXPHYSADDR).


However, neither of these points is relevant in the slightest to
nested-svm because we don't ever map the HyperTransport range into
guests to start with - we'd get #PF[Rsvd] if we ever tried to use these
mappings.

Last time I presented this patch (nearly 2 years ago, in the middle
of a series), it got very bogged down in a swamp of nested virt work,
which is why this time I've gone for no functional change, punting
all the nested virt work to some future point where I've got time to
deal with it and it's not blocking the improvement wanted here.

~Andrew
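As a quick sanity check of the "MAXPHYSADDR - 12G" figure above: with a 40-bit physical address space, subtracting a 12 GiB HyperTransport hole lands exactly on the 0xfd00000000 constant in the quoted hunk. The macro names below are ours, purely for illustration.

```c
#include <assert.h>
#include <stdint.h>

/* A 40-bit physical address space spans 2^40 bytes; the HyperTransport
 * hole at its top is 12 GiB.  Their difference is the bottom of the HT
 * range used in the quoted VM_HSAVE_PA check. */
#define MAXPHYSADDR_40  (1ULL << 40)
#define HT_HOLE_SIZE    (12ULL << 30)   /* 12 GiB */
#define HT_BASE         (MAXPHYSADDR_40 - HT_HOLE_SIZE)
```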


From xen-devel-bounces@lists.xenproject.org Wed Jul 22 10:10:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jul 2020 10:10:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyBht-0001ef-SK; Wed, 22 Jul 2020 10:10:37 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dvI5=BB=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jyBhs-0001eX-A7
 for xen-devel@lists.xenproject.org; Wed, 22 Jul 2020 10:10:36 +0000
X-Inumbo-ID: 953654ec-cc03-11ea-8624-bc764e2007e4
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 953654ec-cc03-11ea-8624-bc764e2007e4;
 Wed, 22 Jul 2020 10:10:35 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595412635;
 h=subject:to:cc:references:from:message-id:date:
 mime-version:in-reply-to:content-transfer-encoding;
 bh=UGfBbfqXa3WXsAAtVp2FenTiXCWIaqeavpt7/jw6NqY=;
 b=AAyTLt5oT5SkOz5zaO7XHO2wOcMU6G5kCIWDKmye9IoR+ROCg2HLYyec
 NHKblxokCMKrfuokO9NphMlP5Tdf7Ti6ulvk6FXiwrYnS+HLn9sXj00S3
 vvP5s+kusffi2W49d9ov25BBX8PN47WaKTE5U5gV4gzF2dR4nufwM+Fy1 k=;
Authentication-Results: esa4.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: vGPfRNGf89t5C0y6RFfWnkADcWMfJQoOgNOR62KHeuC/U1WsgeLA6Z7m69CT45NyZW/ZZHDTmI
 x2rLgHuRGXmG9wrg9LRQIm+GKekSPsjYMo7ZX3UiAauIbzrqrnu57zgAhQZdiXOCBRIneAj50w
 LMZ2nLudQ39Fm2FVYEs36sw4Gin7A55aOZbdgQNush4bU1ZlSPApZENm/iGsnl3ztsQB1W2wko
 Bh21HyrPQ794GIvxGqAdorBeH44w+2pWDtSykGQnsxaZD4qyD41W5ug2+n5YUKsxJ4tbrhXs30
 0Ig=
X-SBRS: 2.7
X-MesageID: 23783273
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,381,1589256000"; d="scan'208";a="23783273"
Subject: Re: [PATCH] x86/svm: Fold nsvm_{wr,rd}msr() into
 svm_msr_{read,write}_intercept()
To: Jan Beulich <jbeulich@suse.com>
References: <20200721172208.12176-1-andrew.cooper3@citrix.com>
 <b3c3dfa9-d2b8-1042-ecf1-8f51351807e1@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <96e0cb7b-c597-075c-f142-6b35aae1a881@citrix.com>
Date: Wed, 22 Jul 2020 11:10:30 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <b3c3dfa9-d2b8-1042-ecf1-8f51351807e1@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 22/07/2020 10:16, Jan Beulich wrote:
> On 21.07.2020 19:22, Andrew Cooper wrote:
>> ... to simplify the default cases.
>>
>> There are multiple errors with the handling of these three MSRs, but they are
>> deliberately not addressed at this point.
>>
>> This removes the dance converting -1/0/1 into X86EMUL_*, allowing for the
>> removal of the 'ret' variable.
>>
>> While cleaning this up, drop the gdprintk()'s for #GP conditions, and the
>> 'result' variable from svm_msr_write_intercept(), as it is never modified.
>>
>> No functional change.
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Reviewed-by: Jan Beulich <jbeulich@suse.com>
>
> However, ...
>
>> @@ -2085,6 +2091,22 @@ static int svm_msr_write_intercept(unsigned int msr, uint64_t msr_content)
>>              goto gpf;
>>          break;
>>  
>> +    case MSR_K8_VM_CR:
>> +        /* ignore write. handle all bits as read-only. */
>> +        break;
>> +
>> +    case MSR_K8_VM_HSAVE_PA:
>> +        if ( (msr_content & ~PAGE_MASK) || msr_content > 0xfd00000000ULL )
> ... while you're moving this code here, wouldn't it be worthwhile
> to at least fix the > to be >= ?

I'd prefer not to, to avoid breaking the "No Functional Change" aspect.

In reality, this needs to be a path which takes an extra ref on the
nominated frame and globally maps it, seeing as we memcpy to/from it on
every virtual vmentry/exit.  The check against the HT range is quite bogus.

~Andrew
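The check being debated here can be expressed as a standalone predicate. This is an illustrative stand-in, not the Xen code: the helper name and test values are ours, and the page mask is the usual x86 4k one.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Usual x86 4k page mask (illustrative local definition). */
#define XPAGE_MASK (~(uint64_t)0xfffULL)

/*
 * Stand-in for the quoted hunk: a VM_HSAVE_PA write faults (#GP) when
 * the value is not page-aligned, or lies above the bottom of the
 * HyperTransport range.  The thread debates whether the comparison
 * should be '>' (as posted) or '>=' at 0xfd00000000.
 */
static bool vm_hsave_pa_faults(uint64_t msr_content)
{
    return (msr_content & ~XPAGE_MASK) || msr_content > 0xfd00000000ULL;
}
```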


From xen-devel-bounces@lists.xenproject.org Wed Jul 22 10:18:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jul 2020 10:18:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyBpZ-0002Am-OK; Wed, 22 Jul 2020 10:18:33 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dvI5=BB=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jyBpY-0002Ah-AO
 for xen-devel@lists.xenproject.org; Wed, 22 Jul 2020 10:18:32 +0000
X-Inumbo-ID: b0df24ac-cc04-11ea-8624-bc764e2007e4
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b0df24ac-cc04-11ea-8624-bc764e2007e4;
 Wed, 22 Jul 2020 10:18:31 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595413111;
 h=from:to:cc:subject:date:message-id:mime-version:
 content-transfer-encoding;
 bh=Lb9jIpjmcOCTg3+LJ4DI+MzgIAmtiRCi98XCw3Lkdbg=;
 b=BIAoTc7BtemA2RbZ8PpnKJnW18TLCy9kacFeWkyy2Lz7UG03cqi3x8Q9
 ZPFkC6qcBB8C4/JGFKbVCXvXzLFwLaoG/yJPFsQf0vWyNPPDa38WizBbB
 7aFHDhVV6TGQI0G8Kap04Ckr6lQ3Y6MMQQlYRssaFRXQnXb8M+rUODPYV 8=;
Authentication-Results: esa1.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: 0uIXKQLE8Q5nGHnzSjLwJGGdDIEwujHwChbQK748QFkbxa/EBkT9POv3cMXta191qg4gQRFjmQ
 49PzZEsqle76O8ebdfRW9PBqv/fP3lvB4TBv/Lyw/nsuR0GT2YZNzxEgoSzCPWgOlso0SYwyF9
 9bu1/ELyvcAXffS7hmbE0oIjwt/GBaKuBVmkmMK+n3gi9CjcjAnTPhpGXl0NVHanFAV3cjcjGn
 mF/w5V1SHp/7OjX+M1s+jZl5Z1hokw+wfMhMiVAoZ0FT/Y8ZNS5nHqn0u0EnhdiJUNz4CxON8I
 8oI=
X-SBRS: 2.7
X-MesageID: 23258370
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,381,1589256000"; d="scan'208";a="23258370"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Subject: [PATCH] x86/vmce: Dispatch vmce_{rd,wr}msr() from guest_{rd,wr}msr()
Date: Wed, 22 Jul 2020 11:18:09 +0100
Message-ID: <20200722101809.8389-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Jan Beulich <JBeulich@suse.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

... rather than from the default clauses of the PV and HVM MSR handlers.

This means that we no longer take the vmce lock for any unknown MSR, and
accesses to architectural MCE banks outside of the subset implemented for the
guest no longer fall further through the unknown MSR path.

With the vmce calls removed, the HVM alternative_call() expressions can be
simplified substantially.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Wei Liu <wl@xen.org>
CC: Roger Pau Monné <roger.pau@citrix.com>
---
 xen/arch/x86/hvm/hvm.c         | 16 ++--------------
 xen/arch/x86/msr.c             | 16 ++++++++++++++++
 xen/arch/x86/pv/emul-priv-op.c | 15 ---------------
 3 files changed, 18 insertions(+), 29 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 5bb47583b3..a9d1685549 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -3560,13 +3560,7 @@ int hvm_msr_read_intercept(unsigned int msr, uint64_t *msr_content)
          break;
 
     default:
-        if ( (ret = vmce_rdmsr(msr, msr_content)) < 0 )
-            goto gp_fault;
-        /* If ret == 0 then this is not an MCE MSR, see other MSRs. */
-        ret = ((ret == 0)
-               ? alternative_call(hvm_funcs.msr_read_intercept,
-                                  msr, msr_content)
-               : X86EMUL_OKAY);
+        ret = alternative_call(hvm_funcs.msr_read_intercept, msr, msr_content);
         break;
     }
 
@@ -3696,13 +3690,7 @@ int hvm_msr_write_intercept(unsigned int msr, uint64_t msr_content,
         break;
 
     default:
-        if ( (ret = vmce_wrmsr(msr, msr_content)) < 0 )
-            goto gp_fault;
-        /* If ret == 0 then this is not an MCE MSR, see other MSRs. */
-        ret = ((ret == 0)
-               ? alternative_call(hvm_funcs.msr_write_intercept,
-                                  msr, msr_content)
-               : X86EMUL_OKAY);
+        ret = alternative_call(hvm_funcs.msr_write_intercept, msr, msr_content);
         break;
     }
 
diff --git a/xen/arch/x86/msr.c b/xen/arch/x86/msr.c
index 22f921cc71..ca4307e19f 100644
--- a/xen/arch/x86/msr.c
+++ b/xen/arch/x86/msr.c
@@ -227,6 +227,14 @@ int guest_rdmsr(struct vcpu *v, uint32_t msr, uint64_t *val)
         *val = msrs->misc_features_enables.raw;
         break;
 
+    case MSR_IA32_MCG_CAP     ... MSR_IA32_MCG_CTL:      /* 0x179 -> 0x17b */
+    case MSR_IA32_MCx_CTL2(0) ... MSR_IA32_MCx_CTL2(31): /* 0x280 -> 0x29f */
+    case MSR_IA32_MCx_CTL(0)  ... MSR_IA32_MCx_MISC(31): /* 0x400 -> 0x47f */
+    case MSR_IA32_MCG_EXT_CTL:                           /* 0x4d0 */
+        if ( vmce_rdmsr(msr, val) < 0 )
+            goto gp_fault;
+        break;
+
     case MSR_X2APIC_FIRST ... MSR_X2APIC_LAST:
         if ( !is_hvm_domain(d) || v != curr )
             goto gp_fault;
@@ -436,6 +444,14 @@ int guest_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
         break;
     }
 
+    case MSR_IA32_MCG_CAP     ... MSR_IA32_MCG_CTL:      /* 0x179 -> 0x17b */
+    case MSR_IA32_MCx_CTL2(0) ... MSR_IA32_MCx_CTL2(31): /* 0x280 -> 0x29f */
+    case MSR_IA32_MCx_CTL(0)  ... MSR_IA32_MCx_MISC(31): /* 0x400 -> 0x47f */
+    case MSR_IA32_MCG_EXT_CTL:                           /* 0x4d0 */
+        if ( vmce_wrmsr(msr, val) < 0 )
+            goto gp_fault;
+        break;
+
     case MSR_X2APIC_FIRST ... MSR_X2APIC_LAST:
         if ( !is_hvm_domain(d) || v != curr )
             goto gp_fault;
diff --git a/xen/arch/x86/pv/emul-priv-op.c b/xen/arch/x86/pv/emul-priv-op.c
index 254da2b849..f14552cb4b 100644
--- a/xen/arch/x86/pv/emul-priv-op.c
+++ b/xen/arch/x86/pv/emul-priv-op.c
@@ -855,8 +855,6 @@ static int read_msr(unsigned int reg, uint64_t *val,
 
     switch ( reg )
     {
-        int rc;
-
     case MSR_FS_BASE:
         if ( is_pv_32bit_domain(currd) )
             break;
@@ -955,12 +953,6 @@ static int read_msr(unsigned int reg, uint64_t *val,
         }
         /* fall through */
     default:
-        rc = vmce_rdmsr(reg, val);
-        if ( rc < 0 )
-            break;
-        if ( rc )
-            return X86EMUL_OKAY;
-        /* fall through */
     normal:
         /* Everyone can read the MSR space. */
         /* gdprintk(XENLOG_WARNING, "Domain attempted RDMSR %08x\n", reg); */
@@ -991,7 +983,6 @@ static int write_msr(unsigned int reg, uint64_t val,
     switch ( reg )
     {
         uint64_t temp;
-        int rc;
 
     case MSR_FS_BASE:
         if ( is_pv_32bit_domain(currd) || !is_canonical_address(val) )
@@ -1122,12 +1113,6 @@ static int write_msr(unsigned int reg, uint64_t val,
         }
         /* fall through */
     default:
-        rc = vmce_wrmsr(reg, val);
-        if ( rc < 0 )
-            break;
-        if ( rc )
-            return X86EMUL_OKAY;
-
         if ( (rdmsr_safe(reg, temp) != 0) || (val != temp) )
     invalid:
             gdprintk(XENLOG_WARNING,
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Wed Jul 22 10:39:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jul 2020 10:39:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyC9y-0003uj-Gr; Wed, 22 Jul 2020 10:39:38 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=q/Qh=BB=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jyC9x-0003ue-KF
 for xen-devel@lists.xenproject.org; Wed, 22 Jul 2020 10:39:37 +0000
X-Inumbo-ID: a2cefc2d-cc07-11ea-8625-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a2cefc2d-cc07-11ea-8625-bc764e2007e4;
 Wed, 22 Jul 2020 10:39:35 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=rnIQZIYAi8mmNQEv2neSr12PLQMrmpMFuLr32rOwwX4=; b=4iTOdNafhNdA3NAy7WTT+kJYh
 lZvC+I7T4Uve6Ghao406EQT5tquPHUqyIopUKaIBBUneRhI7wDtpEvVHcMb0FsfEj+WpIbXSrR0jU
 wwdMPEbSpMcgM97ygMtpBbX+LNlzsLOYGZv0+iUG+aIT0qv4M5BZLe5Ri6XeF+K+jYEY0=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jyC9v-0003t7-Kl; Wed, 22 Jul 2020 10:39:35 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jyC9v-00058B-7q; Wed, 22 Jul 2020 10:39:35 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jyC9u-0001re-Vq; Wed, 22 Jul 2020 10:39:35 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-152103-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-coverity test] 152103: all pass - PUSHED
X-Osstest-Versions-This: xen=f3885e8c3ceaef101e466466e879e97103ecce18
X-Osstest-Versions-That: xen=fb024b779336a0f73b3aee885b2ce082e812881f
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 22 Jul 2020 10:39:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 152103 xen-unstable-coverity real [real]
http://logs.test-lab.xenproject.org/osstest/logs/152103/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 xen                  f3885e8c3ceaef101e466466e879e97103ecce18
baseline version:
 xen                  fb024b779336a0f73b3aee885b2ce082e812881f

Last test of basis   152012  2020-07-19 09:18:28 Z    3 days
Testing same since   152103  2020-07-22 09:24:23 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Christian Lindig <christian.lindig@citrix.com>
  Edwin Török <edvin.torok@citrix.com>
  Elliott Mitchell <ehem+xen@m5p.com>
  George Dunlap <george.dunlap@citrix.com>
  Igor Druzhinin <igor.druzhinin@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Michal Leszczynski <michal.leszczynski@cert.pl>
  Nick Rosbrook <rosbrookn@ainfosec.com>
  Nick Rosbrook <rosbrookn@gmail.com>
  Stefano Stabellini <sstabellini@kernel.org>
  Tim Deegan <tim@xen.org>
  Wei Liu <wl@xen.org>

jobs:
 coverity-amd64                                               pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   fb024b7793..f3885e8c3c  f3885e8c3ceaef101e466466e879e97103ecce18 -> coverity-tested/smoke


From xen-devel-bounces@lists.xenproject.org Wed Jul 22 10:47:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jul 2020 10:47:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyCHU-0004m3-B7; Wed, 22 Jul 2020 10:47:24 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=u0rb=BB=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jyCHS-0004ly-Dt
 for xen-devel@lists.xenproject.org; Wed, 22 Jul 2020 10:47:22 +0000
X-Inumbo-ID: b88720ca-cc08-11ea-a191-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b88720ca-cc08-11ea-a191-12813bfff9fa;
 Wed, 22 Jul 2020 10:47:21 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=KWDTHJr/LCAySk1UJbZ6Tw3X9tpPHj2uVJa31UaiGwU=; b=UAtgZviP4Or09t1X38/hHvf4eL
 r3xB8BbMaWtLE6uC46z+4b45e70Mf4vvmEdVvMkZZY95JudYQZth3COc9O9+ycHrinuXPfpwzygr+
 1j0ae2n+IKPbJ0oI4SiHiMIX63CSGxKLbypFk1cxHW/lkDKt4hTwQ9gbuyE1ByRhSjy8=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jyCHQ-00043g-Et; Wed, 22 Jul 2020 10:47:20 +0000
Received: from 54-240-197-239.amazon.com ([54.240.197.239]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jyCHQ-0006zu-2z; Wed, 22 Jul 2020 10:47:20 +0000
Subject: Re: Virtio in Xen on Arm (based on IOREQ concept)
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <CAPD2p-nthLq5NaU32u8pVaa-ub=a9-LOPenupntTYdS-cu31jQ@mail.gmail.com>
 <20200717150039.GV7191@Air-de-Roger>
 <8f4e0c0d-b3d4-9dd3-ce20-639539321968@gmail.com>
 <3dcab37d-0d60-f1cc-1d59-b5497f0fa95f@xen.org>
 <b6cf0931-c31e-b03b-3995-688536de391a@gmail.com>
 <05acce61-5b29-76f7-5664-3438361caf82@xen.org>
 <20200722082115.GR7191@Air-de-Roger>
From: Julien Grall <julien@xen.org>
Message-ID: <f3c54a7e-4352-7591-73c2-14215bd3ad34@xen.org>
Date: Wed, 22 Jul 2020 11:47:18 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:68.0)
 Gecko/20100101 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20200722082115.GR7191@Air-de-Roger>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Oleksandr Andrushchenko <andr2000@gmail.com>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>, Oleksandr <olekstysh@gmail.com>,
 xen-devel <xen-devel@lists.xenproject.org>,
 Artem Mygaiev <joculator@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi Roger,

On 22/07/2020 09:21, Roger Pau Monné wrote:
> On Tue, Jul 21, 2020 at 10:12:40PM +0100, Julien Grall wrote:
>> Hi Oleksandr,
>>
>> On 21/07/2020 19:16, Oleksandr wrote:
>>>
>>> On 21.07.20 17:27, Julien Grall wrote:
>>>> On a similar topic, I am a bit surprised you didn't encounter memory
>>>> exhaustion when trying to use virtio. Because of how Linux currently
>>>> works (see XSA-300), the backend domain has to have at least as much
>>>> RAM as the domain it serves. For instance, if you serve two
>>>> domains with 1GB of RAM each, then your backend would need at least
>>>> 2GB + some for its own purpose.
>>>>
>>>> This probably wants to be resolved by allowing foreign mappings to be
>>>> "paged" out as you would for memory assigned to userspace.
>>>
>>> Didn't notice the last sentence initially. Could you please explain your
>>> idea in detail if possible. Does it mean if implemented it would be
>>> feasible to map all guest memory regardless of how much memory the guest
>>> has?
>>>
>>> Avoiding map/unmap memory each guest request would allow us to have
>>> better performance (of course with taking care of the fact that guest
>>> memory layout could be changed)...
>>
>> I will explain that below. Before let me comment on KVM first.
>>
>>> Actually what I understand looking at kvmtool is the fact it does not
>>> map/unmap memory dynamically, just calculate virt addresses according to
>>> the gfn provided.
>>
>> The memory management between KVM and Xen is quite different. In the case of
>> KVM, the guest RAM is effectively memory from the userspace (allocated via
>> mmap) and then shared with the guest.
>>
>>  From the userspace PoV, the guest memory will always be accessible from the
>> same virtual region. However, behind the scene, the pages may not always
>> reside in memory. They are basically managed the same way as "normal"
>> userspace memory.
>>
>> In the case of Xen, we are basically stealing a guest physical page
>> allocated via kmalloc() and provide no facilities for Linux to reclaim the
>> page if it needs to do so before the userspace decides to unmap the foreign
>> mapping.
>>
>> I think it would be good to handle the foreign mapping the same way as
>> userspace memory. By that I mean that Linux could reclaim the physical page
>> used by the foreign mapping if it needs to.
>>
>> The process for reclaiming the page would look like:
>>      1) Unmap the foreign page
>>      2) Balloon in the backend domain physical address used by the foreign
>> mapping (allocate the page in the physmap)
>>
>> The next time the userspace is trying to access the foreign page, Linux will
>> receive a data abort that would result to:
>>      1) Allocate a backend domain physical page
>>      2) Balloon out the physical address (remove the page from the physmap)
>>      3) Map the foreign mapping at the new guest physical address
>>      4) Map the guest physical page in the userspace address space
> 
> This is going to shatter all the super pages in the stage-2
> translation.

Yes, but this is nothing really new, as ballooning would result in
(AFAICT) the same behavior on Linux.

> 
>> With this approach, we should be able to have a backend domain that can
>> handle frontend domains without requiring a lot of memory.
> 
> Linux on x86 has the option to use empty hotplug memory ranges to map
> foreign memory: the balloon driver hotplugs an unpopulated physical
> memory range that's not made available to the OS free memory allocator
> and is just used as scratch space to map foreign memory. Not sure
> whether Arm has something similar, or if it could be implemented.

We already discussed that last year :). This was attempted in the past 
(I was still at Citrix) and indefinitely paused for Arm.

/proc/iomem can be incomplete on Linux if we didn't load a driver for
all the devices. This means that Linux doesn't have the full view of
which physical ranges are free.

Additionally, in the case of Dom0, all the regions corresponding to the 
host RAM are unusable when using the SMMU. This is because we would do 
1:1 mapping for the foreign mapping as well.

It might be possible to take advantage of the direct mapping property if
Linux does some bookkeeping. However, this wouldn't work for a 32-bit Dom0
using short page tables (e.g. some versions of Debian do), as it may not
be able to access all the host RAM. Whether we still care about that is a
different question :).

For all the other domains, I think we would want the toolstack to 
provide a region that can be safely used for foreign mapping (similar to 
what we already do for the grant-table).

> 
> You can still use the map-on-fault behaviour as above, but I would
> recommend that you try to limit the number of hypercalls issued.
> Having to issue a single hypercall for each page fault is going to
> be slow, so I would instead use mmap batch to map the whole range in
> unpopulated physical memory and then the OS fault handler just needs to
> fill the page tables with the corresponding address.

IIUC your proposal, you are assuming that you will have enough free
space in the physical address space to map the foreign mapping.

However, that amount of free space is not unlimited and may be quite
small (see above). It would be fairly easy to exhaust it, given that a
userspace application can map the same guest physical address many times.

So I still think we need to be able to allow Linux to swap a foreign 
page with another page.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Jul 22 10:56:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jul 2020 10:56:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyCPg-0005fa-5o; Wed, 22 Jul 2020 10:55:52 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dvI5=BB=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jyCPe-0005fU-3R
 for xen-devel@lists.xenproject.org; Wed, 22 Jul 2020 10:55:50 +0000
X-Inumbo-ID: e67e2f90-cc09-11ea-862a-bc764e2007e4
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e67e2f90-cc09-11ea-862a-bc764e2007e4;
 Wed, 22 Jul 2020 10:55:48 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595415348;
 h=from:to:cc:subject:date:message-id:mime-version:
 content-transfer-encoding;
 bh=x2el/5EGZvCwoyblN7Bu8AExlZvxktoPFUqYFtbqQWM=;
 b=g9Yg3LN8P79EzLPaOfx1H1BWDuEj0pi6HnMVKh9IueAhEKKrlBAOM1v2
 qw+b8cRsfociAi6A8Ggxqhhg2VmpnZsUx+388rxVWNyVRlMLxxxtYuUgR
 ipdFBe13YI66lNQQtsPQ830TE5tELSg1ZxcF8KjB001LKdr4EqMEwAmaK A=;
Authentication-Results: esa5.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: 4wrEHuZaELtgtMyor6/YShnhaRx1zPs+IMD2f1mUljS5Lph4bmbVAdzaKKbfaFBKaqQphSSVv9
 QB9wff8/6J4G1lh7QB7rDCKv/Dbpm7V2p2I3Or9xmVNiGd1XSrQlV3Nih1z62tSjxjsHx9v3fA
 ibCQ2zIdP4O5JXrgnaEYJFLpzVN+12R15gVAfCKXiD77XPsfBZtEo+yEQlU1eAYpp/fKAusarb
 LRhw6uV1wazzj0jM0h2NnJDaCzkxvFSkut2XiV27oqh1Ls1pefU1RSuQ+7Zz++YhhXqn+6WNAg
 gxw=
X-SBRS: 2.7
X-MesageID: 23120953
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,381,1589256000"; d="scan'208";a="23120953"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Subject: [PATCH] x86/msr: Unify the real {rd, wr}msr() paths in guest_{rd,
 wr}msr()
Date: Wed, 22 Jul 2020 11:55:29 +0100
Message-ID: <20200722105529.12177-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Jan Beulich <JBeulich@suse.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Both the read and write side have commonalities which can be abstracted away.
This also allows for additional safety in release builds, and slightly more
helpful diagnostics in debug builds.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Wei Liu <wl@xen.org>
CC: Roger Pau Monné <roger.pau@citrix.com>

I'm not a massive fan of the global scope want_rdmsr_safe boolean, but I can't
think of a reasonable way to fix it without starting to use other
flexibilities offered to us by C99.  (And to preempt the other question, an
extra set of braces makes the logic extremely confusing to read.)
---
 xen/arch/x86/msr.c | 54 ++++++++++++++++++++++++++++++++++++++++++------------
 1 file changed, 42 insertions(+), 12 deletions(-)

diff --git a/xen/arch/x86/msr.c b/xen/arch/x86/msr.c
index 22f921cc71..68f3aadeab 100644
--- a/xen/arch/x86/msr.c
+++ b/xen/arch/x86/msr.c
@@ -154,6 +154,7 @@ int guest_rdmsr(struct vcpu *v, uint32_t msr, uint64_t *val)
     const struct cpuid_policy *cp = d->arch.cpuid;
     const struct msr_policy *mp = d->arch.msr;
     const struct vcpu_msrs *msrs = v->arch.msrs;
+    bool want_rdmsr_safe = false;
     int ret = X86EMUL_OKAY;
 
     switch ( msr )
@@ -204,10 +205,9 @@ int guest_rdmsr(struct vcpu *v, uint32_t msr, uint64_t *val)
          */
         if ( !(cp->x86_vendor & (X86_VENDOR_INTEL | X86_VENDOR_AMD)) ||
              !(boot_cpu_data.x86_vendor &
-               (X86_VENDOR_INTEL | X86_VENDOR_AMD)) ||
-             rdmsr_safe(MSR_AMD_PATCHLEVEL, *val) )
+               (X86_VENDOR_INTEL | X86_VENDOR_AMD)) )
             goto gp_fault;
-        break;
+        goto read_from_hw_safe;
 
     case MSR_SPEC_CTRL:
         if ( !cp->feat.ibrsb )
@@ -278,7 +278,7 @@ int guest_rdmsr(struct vcpu *v, uint32_t msr, uint64_t *val)
          */
 #ifdef CONFIG_HVM
         if ( v == current && is_hvm_domain(d) && v->arch.hvm.flag_dr_dirty )
-            rdmsrl(msr, *val);
+            goto read_from_hw;
         else
 #endif
             *val = msrs->dr_mask[
@@ -303,6 +303,23 @@ int guest_rdmsr(struct vcpu *v, uint32_t msr, uint64_t *val)
 
     return ret;
 
+ read_from_hw_safe:
+    want_rdmsr_safe = true;
+ read_from_hw:
+    if ( !rdmsr_safe(msr, *val) )
+        return X86EMUL_OKAY;
+
+    /*
+     * Paths which didn't want rdmsr_safe() and get here took a #GP fault.
+     * Something is broken with the above logic, so make it obvious in debug
+     * builds, and fail safe by handing #GP back to guests in release builds.
+     */
+    if ( !want_rdmsr_safe )
+    {
+        gprintk(XENLOG_ERR, "Bad rdmsr %#x\n", msr);
+        ASSERT_UNREACHABLE();
+    }
+
  gp_fault:
     return X86EMUL_EXCEPTION;
 }
@@ -402,9 +419,7 @@ int guest_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
         if ( val & ~PRED_CMD_IBPB )
             goto gp_fault; /* Rsvd bit set? */
 
-        if ( v == curr )
-            wrmsrl(MSR_PRED_CMD, val);
-        break;
+        goto maybe_write_to_hw;
 
     case MSR_FLUSH_CMD:
         if ( !cp->feat.l1d_flush )
@@ -413,9 +428,7 @@ int guest_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
         if ( val & ~FLUSH_CMD_L1D )
             goto gp_fault; /* Rsvd bit set? */
 
-        if ( v == curr )
-            wrmsrl(MSR_FLUSH_CMD, val);
-        break;
+        goto maybe_write_to_hw;
 
     case MSR_INTEL_MISC_FEATURES_ENABLES:
     {
@@ -493,8 +506,8 @@ int guest_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
                                ? 0 : (msr - MSR_AMD64_DR1_ADDRESS_MASK + 1),
                                ARRAY_SIZE(msrs->dr_mask))] = val;
 
-        if ( v == curr && (curr->arch.dr7 & DR7_ACTIVE_MASK) )
-            wrmsrl(msr, val);
+        if ( curr->arch.dr7 & DR7_ACTIVE_MASK )
+            goto maybe_write_to_hw;
         break;
 
     default:
@@ -509,6 +522,23 @@ int guest_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
 
     return ret;
 
+ maybe_write_to_hw:
+    /*
+     * All paths potentially updating a value in hardware need to check
+     * whether the call is in current context or not, so the logic is
+     * implemented here.  Remote context need do nothing more.
+     */
+    if ( v != curr || !wrmsr_safe(msr, val) )
+        return X86EMUL_OKAY;
+
+    /*
+     * Paths which end up here took a #GP fault in wrmsr_safe().  Something is
+     * broken with the logic above, so make it obvious in debug builds, and
+     * fail safe by handing #GP back to the guests in release builds.
+     */
+    gprintk(XENLOG_ERR, "Bad wrmsr %#x val %016"PRIx64"\n", msr, val);
+    ASSERT_UNREACHABLE();
+
  gp_fault:
     return X86EMUL_EXCEPTION;
 }
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Wed Jul 22 11:01:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jul 2020 11:01:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyCUn-0006XM-V4; Wed, 22 Jul 2020 11:01:09 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=q/Qh=BB=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jyCUn-0006XH-Cc
 for xen-devel@lists.xenproject.org; Wed, 22 Jul 2020 11:01:09 +0000
X-Inumbo-ID: a55b8dae-cc0a-11ea-a191-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a55b8dae-cc0a-11ea-a191-12813bfff9fa;
 Wed, 22 Jul 2020 11:01:08 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=elgyL+EOKCQprDAfX5nfVNeWCmbbdzf1HfxyqHTDUaQ=; b=2boJhiLx2mLPDX7ydCffhRJv8
 Pnh7Bv+q3ZGzC2SeDxl2l2hLLl4BfbA0Fs0X3kxNuZig1EZzfHIkbszhZMTH0beQewfHDbYEAZotD
 CGtWETBrS6ycCDGZ+L8gdBqbjLl2xMOPeHwzHt6LDlCzGq6zRFvI7jmivVEbKEPCRqIFc=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jyCUl-0004M2-LX; Wed, 22 Jul 2020 11:01:07 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jyCUk-0006JR-W8; Wed, 22 Jul 2020 11:01:07 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jyCUk-0005PX-VL; Wed, 22 Jul 2020 11:01:06 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-152088-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 152088: all pass - PUSHED
X-Osstest-Versions-This: ovmf=9132a31b9c8381197eee75eb66c809182b264110
X-Osstest-Versions-That: ovmf=02539e900854488343a1efa435d4dded1ddd66a2
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 22 Jul 2020 11:01:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 152088 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/152088/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 9132a31b9c8381197eee75eb66c809182b264110
baseline version:
 ovmf                 02539e900854488343a1efa435d4dded1ddd66a2

Last test of basis   152068  2020-07-21 07:11:01 Z    1 days
Testing same since   152088  2020-07-21 23:41:49 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jeff Brasen <jbrasen@nvidia.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   02539e9008..9132a31b9c  9132a31b9c8381197eee75eb66c809182b264110 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Wed Jul 22 11:10:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jul 2020 11:10:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyCdh-0007Pn-U8; Wed, 22 Jul 2020 11:10:21 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=bhkO=BB=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jyCdh-0007Pf-7f
 for xen-devel@lists.xenproject.org; Wed, 22 Jul 2020 11:10:21 +0000
X-Inumbo-ID: ee096322-cc0b-11ea-a192-12813bfff9fa
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ee096322-cc0b-11ea-a192-12813bfff9fa;
 Wed, 22 Jul 2020 11:10:20 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595416220;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:content-transfer-encoding:in-reply-to;
 bh=r5qfBwq3dtkrKeJog+XoxKlp8SM9Zmz/NJM3CCWHS0Y=;
 b=cS6PRdB7j/jCTpN7AI/TRoBHvMjXrsoxnn7id58zF8fEzVq5Pk1Q3Uy8
 LKsIF6Q24x+aC8N3yhtLwfzOx7Dt7clcg7xLXJiniDaYE3ImAe1OtEgX7
 XauDmqwkLnrDbDzapdPBhCtRuDYIrybEberrQP6wckOYKXePUq9LK/V73 U=;
Authentication-Results: esa5.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: ZG31tbawB5/Eq/9daThEF2uzrwsb7VITyweO1+27mKyi/iZDQ9Y6YSjPn/1/N/iURG3cmoYYk4
 TAS1xw7iMwBvyiaXsCo72ZBFEcdTfr/gpmC7OtNCCta/hJ4I0uAnBHJSG4bQc9Q119kKFRFEyr
 7EdrxVbD8qgJEtrMyRLJ127SpHEWirl2nUbmCWgLQyG6E1mY0RBd+mbP4iamrb5n98eAn3D9dj
 z9h2AFcXEsPFErCPBBxFSPvHQElZw6d7rhrnIMhPWw07P92R2Sxl0UVIZnBVpM1sF8rvOcmQO6
 NbY=
X-SBRS: 2.7
X-MesageID: 23121989
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,381,1589256000"; d="scan'208";a="23121989"
Date: Wed, 22 Jul 2020 13:10:12 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Julien Grall <julien@xen.org>
Subject: Re: Virtio in Xen on Arm (based on IOREQ concept)
Message-ID: <20200722111012.GX7191@Air-de-Roger>
References: <CAPD2p-nthLq5NaU32u8pVaa-ub=a9-LOPenupntTYdS-cu31jQ@mail.gmail.com>
 <20200717150039.GV7191@Air-de-Roger>
 <8f4e0c0d-b3d4-9dd3-ce20-639539321968@gmail.com>
 <3dcab37d-0d60-f1cc-1d59-b5497f0fa95f@xen.org>
 <b6cf0931-c31e-b03b-3995-688536de391a@gmail.com>
 <05acce61-5b29-76f7-5664-3438361caf82@xen.org>
 <20200722082115.GR7191@Air-de-Roger>
 <f3c54a7e-4352-7591-73c2-14215bd3ad34@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <f3c54a7e-4352-7591-73c2-14215bd3ad34@xen.org>
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Oleksandr Andrushchenko <andr2000@gmail.com>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>, Oleksandr <olekstysh@gmail.com>,
 xen-devel <xen-devel@lists.xenproject.org>, Artem
 Mygaiev <joculator@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Wed, Jul 22, 2020 at 11:47:18AM +0100, Julien Grall wrote:
> Hi Roger,
> 
> On 22/07/2020 09:21, Roger Pau Monné wrote:
> > On Tue, Jul 21, 2020 at 10:12:40PM +0100, Julien Grall wrote:
> > > Hi Oleksandr,
> > > 
> > > On 21/07/2020 19:16, Oleksandr wrote:
> > > > 
> > > > On 21.07.20 17:27, Julien Grall wrote:
> > > > > On a similar topic, I am a bit surprised you didn't encounter memory
> > > > > exhaustion when trying to use virtio. Because of how Linux currently
> > > > > works (see XSA-300), the backend domain has to have at least as much
> > > > > RAM as the domains it serves. For instance, if you serve two
> > > > > domains with 1GB of RAM each, then your backend would need at least
> > > > > 2GB + some for its own purposes.
> > > > > 
> > > > > This probably wants to be resolved by allowing foreign mappings to be
> > > > > "paged" out as you would for memory assigned to userspace.
> > > > 
> > > > I didn't notice the last sentence initially. Could you please explain your
> > > > idea in detail if possible? Does it mean that, if implemented, it would be
> > > > feasible to map all guest memory regardless of how much memory the guest
> > > > has?
> > > > 
> > > > Avoiding mapping/unmapping memory on each guest request would give us
> > > > better performance (of course, taking care of the fact that the guest
> > > > memory layout could change)...
> > > 
> > > I will explain that below. But first, let me comment on KVM.
> > > 
> > > > Actually, what I understand looking at kvmtool is that it does not
> > > > map/unmap memory dynamically; it just calculates virtual addresses
> > > > according to the gfn provided.
> > > 
> > > The memory management between KVM and Xen is quite different. In the case of
> > > KVM, the guest RAM is effectively memory from the userspace (allocated via
> > > mmap) and then shared with the guest.
> > > 
> > >  From the userspace PoV, the guest memory will always be accessible from the
> > > same virtual region. However, behind the scene, the pages may not always
> > > reside in memory. They are basically managed the same way as "normal"
> > > userspace memory.
> > > 
> > > In the case of Xen, we are basically stealing a guest physical page
> > > allocated via kmalloc() and providing no facility for Linux to reclaim the
> > > page if it needs to do so before the userspace decides to unmap the foreign
> > > mapping.
> > > 
> > > I think it would be good to handle foreign mappings the same way as
> > > userspace memory. By that I mean that Linux could reclaim the physical page
> > > used by the foreign mapping if it needs to.
> > > 
> > > The process for reclaiming the page would look like:
> > >      1) Unmap the foreign page
> > >      2) Balloon in the backend domain physical address used by the foreign
> > > mapping (allocate the page in the physmap)
> > > 
> > > The next time the userspace tries to access the foreign page, Linux will
> > > receive a data abort that would result in:
> > >      1) Allocate a backend domain physical page
> > >      2) Balloon out the physical address (remove the page from the physmap)
> > >      3) Map the foreign page at the new guest physical address
> > >      4) Map the guest physical page in the userspace address space
> > 
> > This is going to shatter all the super pages in the stage-2
> > translation.
> 
> Yes, but this is nothing really new, as ballooning would result in (AFAICT)
> the same behavior on Linux.
> 
> > 
> > > With this approach, we should be able to have a backend domain that can
> > > handle frontend domains without requiring a lot of memory.
> > 
> > Linux on x86 has the option to use empty hotplug memory ranges to map
> > foreign memory: the balloon driver hotplugs an unpopulated physical
> > memory range that's not made available to the OS free memory allocator
> > and is just used as scratch space to map foreign memory. Not sure
> > whether Arm has something similar, or if it could be implemented.
> 
> We already discussed that last year :). This was attempted in the past (I
> was still at Citrix) and indefinitely paused for Arm.
> 
> /proc/iomem can be incomplete on Linux if we didn't load a driver for all
> the devices. This means that Linux doesn't have a full view of which
> physical ranges are free.
> 
> Additionally, in the case of Dom0, all the regions corresponding to the host
> RAM are unusable when using the SMMU. This is because we would do 1:1
> mapping for the foreign mapping as well.

Right, that's a PITA, because on x86 PVH dom0 I was planning to use
those RAM regions as scratch space for foreign mappings, lacking a
better alternative ATM.

> It might be possible to take advantage of the direct mapping property if
> Linux does some bookkeeping. Although, this wouldn't work for 32-bit Dom0 using
> short page tables (e.g. some versions of Debian do) as it may not be able to
> access all the host RAM. Whether we still care about that is a different
> question :).
> 
> For all the other domains, I think we would want the toolstack to provide a
> region that can be safely used for foreign mapping (similar to what we
> already do for the grant-table).

Yes, that would be the plan on x86 also - have some way for the
hypervisor to report safe ranges where a domU can create foreign
mappings.

> > 
> > You can still use the map-on-fault behaviour as above, but I would
> > recommend that you try to limit the number of hypercalls issued.
> > Having to issue a single hypercall for each page fault is going to
> > be slow, so I would instead use mmap batch to map the whole range in
> > unpopulated physical memory and then the OS fault handler just needs to
> > fill the page tables with the corresponding address.
> IIUC your proposal, you are assuming that you will have enough free space in
> the physical address space to map the foreign mapping.
> 
> However that amount of free space is not unlimited and may be quite small
> (see above). It would be fairly easy to exhaust it given that a userspace
> application can map many times the same guest physical address.
> 
> So I still think we need to allow Linux to swap a foreign page
> with another page.

Right, but you will have to be careful to make sure physical addresses
are not swapped while being used for IO with devices, as in that case
you won't get a recoverable fault. This is safe now because physical
mappings created by privcmd are never swapped out, but if you go the
route you propose you will have to figure out a way to correctly populate
physical ranges used for IO with devices, even when the CPU hasn't
accessed them.

Relying solely on CPU page faults to populate them will not be enough,
as the CPU won't necessarily access all the pages that would be sent
to devices for IO.

Roger.
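
The two-phase reclaim/restore flow Julien outlines above (unmap and balloon in
under memory pressure; on the next access fault, balloon out a fresh address
and re-establish the foreign mapping) can be sketched as a toy model. This is
purely illustrative plain Python; the `Backend`, `physmap`, and method names
are invented for the sketch and do not correspond to any real Xen or Linux
interface:

```python
# Toy model of the proposed foreign-mapping reclaim scheme. A backend
# "physmap" maps guest-physical frame numbers (gfns) to either a local
# RAM page or a foreign mapping; reclaim swaps a foreign entry out, and
# a later access fault re-maps it at a fresh gfn.

class Backend:
    def __init__(self):
        self.physmap = {}        # gfn -> ("ram", page) | ("foreign", ref)
        self.next_gfn = 0

    def map_foreign(self, ref):
        # Balloon out a guest physical address and install the foreign page.
        gfn = self.next_gfn
        self.next_gfn += 1
        self.physmap[gfn] = ("foreign", ref)
        return gfn

    def reclaim(self, gfn):
        # Memory pressure: unmap the foreign page, then balloon a local
        # page back in so the address becomes ordinary reclaimable RAM.
        kind, ref = self.physmap[gfn]
        assert kind == "foreign"
        self.physmap[gfn] = ("ram", object())
        return ref               # remembered so a later fault can redo the map

    def fault(self, ref):
        # Userspace touches the page again: pick a new gfn and re-map.
        return self.map_foreign(ref)

backend = Backend()
gfn = backend.map_foreign(ref=42)      # initial foreign mapping
ref = backend.reclaim(gfn)             # Linux reclaims the physical page
new_gfn = backend.fault(ref)           # data abort on next userspace access
assert backend.physmap[new_gfn] == ("foreign", 42)
```

Note how the restored mapping lands at a different guest physical address
than the original, which is why the userspace virtual address, not the gfn,
has to be the stable handle.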


From xen-devel-bounces@lists.xenproject.org Wed Jul 22 11:17:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jul 2020 11:17:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyCke-0007d8-Pq; Wed, 22 Jul 2020 11:17:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=u0rb=BB=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jyCkc-0007d3-TF
 for xen-devel@lists.xenproject.org; Wed, 22 Jul 2020 11:17:30 +0000
X-Inumbo-ID: ee805b84-cc0c-11ea-8631-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ee805b84-cc0c-11ea-8631-bc764e2007e4;
 Wed, 22 Jul 2020 11:17:30 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=OolPR9fxPy209fVnQYEahm8Kor0kPOiihLICyEuQJeQ=; b=fOF1mCZ10HvrVMbkQu/9B/r3RZ
 i91y2VX5I8Idrv1fRhVVsWjlICgLjLVsrO+hanvSnE9nDFRtRL6ySRp7Wp65a+srQjMxcIrLJyVbX
 Kwrkd9n6GPOB3SoVLZCrLeghwjZH7Lh1+cxbxtXahqdHkjjdxFStbmhHyOO1tOUBRE3s=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jyCka-0004ij-T9; Wed, 22 Jul 2020 11:17:28 +0000
Received: from 54-240-197-239.amazon.com ([54.240.197.239]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jyCka-0000BF-Kr; Wed, 22 Jul 2020 11:17:28 +0000
Subject: Re: Virtio in Xen on Arm (based on IOREQ concept)
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <CAPD2p-nthLq5NaU32u8pVaa-ub=a9-LOPenupntTYdS-cu31jQ@mail.gmail.com>
 <20200717150039.GV7191@Air-de-Roger>
 <8f4e0c0d-b3d4-9dd3-ce20-639539321968@gmail.com>
 <3dcab37d-0d60-f1cc-1d59-b5497f0fa95f@xen.org>
 <b6cf0931-c31e-b03b-3995-688536de391a@gmail.com>
 <05acce61-5b29-76f7-5664-3438361caf82@xen.org>
 <20200722082115.GR7191@Air-de-Roger>
 <f3c54a7e-4352-7591-73c2-14215bd3ad34@xen.org>
 <20200722111012.GX7191@Air-de-Roger>
From: Julien Grall <julien@xen.org>
Message-ID: <d28f7cff-53dc-6a63-c681-16bd90b50436@xen.org>
Date: Wed, 22 Jul 2020 12:17:26 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:68.0)
 Gecko/20100101 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20200722111012.GX7191@Air-de-Roger>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Oleksandr Andrushchenko <andr2000@gmail.com>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>, Oleksandr <olekstysh@gmail.com>,
 xen-devel <xen-devel@lists.xenproject.org>,
 Artem Mygaiev <joculator@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>



On 22/07/2020 12:10, Roger Pau Monné wrote:
> On Wed, Jul 22, 2020 at 11:47:18AM +0100, Julien Grall wrote:
>>>
>>> You can still use the map-on-fault behaviour as above, but I would
>>> recommend that you try to limit the number of hypercalls issued.
>>> Having to issue a single hypercall for each page fault is going to
>>> be slow, so I would instead use mmap batch to map the whole range in
>>> unpopulated physical memory and then the OS fault handler just needs to
>>> fill the page tables with the corresponding address.
>> IIUC your proposal, you are assuming that you will have enough free space in
>> the physical address space to map the foreign mapping.
>>
>> However that amount of free space is not unlimited and may be quite small
>> (see above). It would be fairly easy to exhaust it given that a userspace
>> application can map many times the same guest physical address.
>>
>> So I still think we need to allow Linux to swap a foreign page
>> with another page.
> 
> Right, but you will have to be careful to make sure physical addresses
> are not swapped while being used for IO with devices, as in that case
> you won't get a recoverable fault. This is safe now because physical
> mappings created by privcmd are never swapped out, but if you go the
> route you propose you will have to figure out a way to correctly populate
> physical ranges used for IO with devices, even when the CPU hasn't
> accessed them.
> 
> Relying solely on CPU page faults to populate them will not be enough,
> as the CPU won't necessarily access all the pages that would be sent
> to devices for IO.

The problem you describe here doesn't seem to be specific to foreign
mappings, so I would really be surprised if Linux doesn't already have
a generic mechanism to deal with this.

Hence why I suggested earlier dealing with foreign mappings the same way
as Linux deals with user memory.

Cheers,

-- 
Julien Grall
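
The exhaustion concern raised above (a userspace application can map the same
guest physical address many times into a bounded safe range) can be sketched
with a toy counter. The slot count and the `SafeRange` abstraction are
hypothetical, chosen only for illustration; nothing here reflects a real Xen
or privcmd interface:

```python
# Toy sketch of the exhaustion concern: every foreign mapping consumes a
# slot in a fixed "safe" guest-physical range, even when userspace maps
# the same remote page again, so without swap-out the range runs dry.

SAFE_SLOTS = 4                  # hypothetical range size, tiny on purpose

class SafeRange:
    def __init__(self, slots):
        self.free = slots

    def map_foreign(self, gfn):
        # Each call claims a fresh slot, even for an already-mapped gfn.
        if self.free == 0:
            raise MemoryError("safe foreign-mapping range exhausted")
        self.free -= 1

rng = SafeRange(SAFE_SLOTS)
for _ in range(SAFE_SLOTS):
    rng.map_foreign(gfn=7)      # same guest page, four separate mappings
try:
    rng.map_foreign(gfn=7)      # fifth mapping: no slots left
    exhausted = False
except MemoryError:
    exhausted = True
assert exhausted
```

The point of the sketch is that the limit is hit by repeated mappings of one
page, not by the size of the guest, which is why pre-sizing the safe range
alone cannot solve the problem.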


From xen-devel-bounces@lists.xenproject.org Wed Jul 22 11:33:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jul 2020 11:33:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyCzl-0000tp-8z; Wed, 22 Jul 2020 11:33:09 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=0IpC=BB=citrix.com=george.dunlap@srs-us1.protection.inumbo.net>)
 id 1jyCzk-0000tk-JU
 for xen-devel@lists.xenproject.org; Wed, 22 Jul 2020 11:33:08 +0000
X-Inumbo-ID: 1c4a5a41-cc0f-11ea-8634-bc764e2007e4
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1c4a5a41-cc0f-11ea-8634-bc764e2007e4;
 Wed, 22 Jul 2020 11:33:07 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595417586;
 h=from:to:cc:subject:date:message-id:references:
 in-reply-to:content-id:content-transfer-encoding: mime-version;
 bh=FdaxwzmkB9ZWieOTZghY6131G37CtVZDjeBxVrfYhS8=;
 b=CpZ1ySfVM/MW7nMEcVx1NiOrzGCpvqEnMROny7WvtcHGWyqsPWB3W7Fd
 isoUDkJF/qnOQKxSFCHaSxlaMzXYxd3hIRidAjzIsqGOBlkjdfwTBuNc3
 alICthH+HvlJZXS7E66iWNIoZ1sG6ATamVxaeu+NJLin7ng9xNwdRgtJm s=;
Authentication-Results: esa6.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: ekTm3/qI8RXzXk83yaDTej46M1xn2o1cIDzjOc1HhXzYYKn/nKIpOPh88J/6vdcHLRscNVwtll
 VZ50KyIw7ZRUDsFZBtRIhAcRP6y4JdDeNQP4JZUym2FVrVXlQjuow6qHxxGH/fVkIhm2tsBHlz
 ZDvE4+HLW0hpjoPHvDGO/MesC3YlCD6sWYUv1DBFY2YatFGt1LyTD057xV29c6T/2PMKnN6k3F
 JVfaHY4YlNaBM/qIdvdIiRrLMmOSXfFVf3sLIgzQ06Ni4AdNTHgdQ3mhl1ISfwJ7yNdPfrsN9K
 ob0=
X-SBRS: 2.7
X-MesageID: 23256595
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,381,1589256000"; d="scan'208";a="23256595"
From: George Dunlap <George.Dunlap@citrix.com>
To: Ian Jackson <Ian.Jackson@citrix.com>
Subject: Re: [OSSTEST PATCH 08/14] Executive: Use index for report__find_test
Thread-Topic: [OSSTEST PATCH 08/14] Executive: Use index for report__find_test
Thread-Index: AQHWX46tfAvUjKxsQUGNt2ymzIKIvqkTVxGA
Date: Wed, 22 Jul 2020 11:33:01 +0000
Message-ID: <3ACBEEA3-C17D-48AE-8AE5-52C9D92C8C46@citrix.com>
References: <20200721184205.15232-1-ian.jackson@eu.citrix.com>
 <20200721184205.15232-9-ian.jackson@eu.citrix.com>
In-Reply-To: <20200721184205.15232-9-ian.jackson@eu.citrix.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-mailer: Apple Mail (2.3608.80.23.2.2)
x-ms-exchange-messagesentrepresentingtype: 1
x-ms-exchange-transport-fromentityheader: Hosted
Content-Type: text/plain; charset="utf-8"
Content-ID: <7C790B3B5B6FE44D862957E0A5F42FD0@citrix.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

DQoNCj4gT24gSnVsIDIxLCAyMDIwLCBhdCA3OjQxIFBNLCBJYW4gSmFja3NvbiA8aWFuLmphY2tz
b25AZXUuY2l0cml4LmNvbT4gd3JvdGU6DQo+IA0KPiBBZnRlciB3ZSByZWZhY3RvciB0aGlzIHF1
ZXJ5IHRoZW4gd2UgY2FuIGVuYWJsZSB0aGUgaW5kZXggdXNlLg0KPiAoQm90aCBvZiB0aGVzZSB0
aGluZ3MgdG9nZXRoZXIgaW4gdGhpcyBjb21taXQgYmVjYXVzZSBJIGhhdmVuJ3QgcGVyZg0KPiB0
ZXN0ZWQgdGhlIHZlcnNpb24gd2l0aCBqdXN0IHRoZSByZWZhY3RvcmluZy4pDQo+IA0KPiAoV2Ug
aGF2ZSBwcm92aWRlZCBhbiBpbmRleCB0aGF0IGNhbiBhbnN3ZXIgdGhpcyBxdWVzdGlvbiByZWFs
bHkNCj4gcXVpY2tseSBpZiBhIHZlcnNpb24gaXMgc3BlY2lmaWVkLiAgQnV0IHRoZSBxdWVyeSBw
bGFubmVyIGNvdWxkbid0IHNlZQ0KPiB0aGF0IGJlY2F1c2UgaXQgd29ya3Mgd2l0aG91dCBzZWVp
bmcgdGhlIGJpbmQgdmFyaWFibGVzLCBzbyBkb2Vzbid0DQo+IGtub3cgdGhhdCB0aGUgdmFsdWUg
b2YgbmFtZSBpcyBnb2luZyB0byBiZSBzdWl0YWJsZSBmb3IgdGhpcyBpbmRleC4pDQo+IA0KPiAq
IENvbnZlcnQgdGhlIHR3byBFWElTVFMgc3VicXVlcmllcyBpbnRvIEpPSU4vQU5EIHdpdGggYSBE
SVNUSU5DVA0KPiAgY2xhdXNlIG5hbWluZyB0aGUgZmllbGRzIG9uIGZsaWdodHMsIHNvIGFzIHRv
IHJlcGxpY2F0ZSB0aGUgcHJldmlvdXMNCj4gIHJlc3VsdCByb3dzLiAgVGhlbiBkbyAkc2VsZWN0
aW9uIGZpZWxkIGxhc3QuICBUaGUgc3VicXVlcnkgaXMgYQ0KPiAgY29udmVuaWVudCB3YXkgdG8g
bGV0IHRoaXMgZG8gdGhlIHByZXZpb3VzIHRoaW5nIGZvciBhbGwgdGhlIHZhbHVlcw0KPiAgb2Yg
JHNlbGVjdGlvbiAoaW5jbHVkaW5nLCBub3RhYmx5LCAqKS4NCj4gDQo+ICogQWRkIHRoZSBhZGRp
dGlvbmFsIEFORCBjbGF1c2UgZm9yIHIubmFtZSwgd2hpY2ggaGFzIG5vIGxvZ2ljYWwNCj4gIGVm
ZmVjdCBnaXZlbiB0aGUgYWN0dWFsIHZhbHVlcyBvZiBuYW1lLCBlbmFibGluZyB0aGUgcXVlcnkg
cGxhbm5lcg0KPiAgdG8gdXNlIHRoaXMgaW5kZXguDQo+IA0KPiBQZXJmOiBJbiBteSB0ZXN0IGNh
c2UgdGhlIHNnLXJlcG9ydC1mbGlnaHQgcnVudGltZSBpcyBub3cgfjhzLiAgSSBhbQ0KPiByZWFz
b25hYmx5IGNvbmZpZGVudCB0aGF0IHRoaXMgd2lsbCBub3QgbWFrZSBvdGhlciB1c2UgY2FzZXMg
b2YgdGhpcw0KPiBjb2RlIHdvcnNlLg0KPiANCj4gUGVyZjogcnVudGltZSBvZiBteSB0ZXN0IGNh
c2Ugbm93IH4xMXMNCj4gDQo+IEV4YW1wbGUgcXVlcnkgYmVmb3JlIChmcm9tIHRoZSBQZXJsIERC
SSB0cmFjZSk6DQo+IA0KPiAgICAgICAgU0VMRUNUICoNCj4gICAgICAgICBGUk9NIGZsaWdodHMg
Zg0KPiAgICAgICAgV0hFUkUNCj4gICAgICAgICAgICAgICAgRVhJU1RTICgNCj4gICAgICAgICAg
ICAgICAgICAgU0VMRUNUIDENCj4gICAgICAgICAgICAgICAgICAgIEZST00gcnVudmFycyByDQo+
ICAgICAgICAgICAgICAgICAgIFdIRVJFIG5hbWU9Pw0KPiAgICAgICAgICAgICAgICAgICAgIEFO
RCB2YWw9Pw0KPiAgICAgICAgICAgICAgICAgICAgIEFORCByLmZsaWdodD1mLmZsaWdodA0KPiAg
ICAgICAgICAgICAgICAgICAgIEFORCAoICAgICAgKENBU0UNCj4gICAgICAgV0hFTiAoci5qb2Ip
IExJS0UgJ2J1aWxkLSUtcHJldicgVEhFTiAneHByZXYnDQo+ICAgICAgIFdIRU4gKChyLmpvYikg
TElLRSAnYnVpbGQtJS1mcmVlYnNkJw0KPiAgICAgICAgICAgICBBTkQgJ3gnID0gJ2ZyZWVic2Ri
dWlsZGpvYicpIFRIRU4gJ0RJU0NBUkQnDQo+ICAgICAgIEVMU0UgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICcnDQo+ICAgICAgIEVORCkNCj4gPSAnJykNCj4gICAgICAgICAg
ICAgICAgICkNCj4gICAgICAgICAgQU5EICggKFRSVUUgQU5EIGZsaWdodCA8PSAxNTE5MDMpIEFO
RCAoYmxlc3Npbmc9J3JlYWwnKSApDQo+ICAgICAgICAgIEFORCAoYnJhbmNoPT8pDQo+ICAgICAg
ICBPUkRFUiBCWSBmbGlnaHQgREVTQw0KPiAgICAgICAgTElNSVQgMQ0KDQpTbyB0aGlzIHNheXM6
DQoNCkdldCBtZSBhbGwgdGhlIGNvbHVtbnMNCmZvciB0aGUgaGlnaGVzdC1udW1iZXJlZCBmbGln
aHQNCldoZXJlOg0KICBUaGVyZSBpcyBhdCBsZWFzdCBvbmUgcnVudmFyIGZvciB0aGF0IGZsaWdo
dCBoYXMgdGhlIHNwZWNpZmllZCAkbmFtZSBhbmQgJHZhbHVlDQogIEFuZCB0aGUgam9iIGlzICpu
b3QqIGxpa2UgYnVpbGQtJS1wcmV2IG9yIGJ1aWxkLSUtZnJlZWJzZA0KICBUaGUgZmxpZ2h0IG51
bWJlciAoPykgaXMgPD0gMTUxOTAzLCBhbmQgYmxlc3NpbmcgPSByZWFsDQogIEZvciB0aGUgc3Bl
Y2lmaWVkICRicmFuY2gNCg0KV2hhdOKAmXMgdGhlIOKAnFRSVUUgYW5kIGZsaWdodCA8PSAxNTE5
MDPigJ0gZm9yPw0KDQo+IA0KPiBBZnRlcjoNCj4gDQo+ICAgICAgICBTRUxFQ1QgKg0KPiAgICAg
ICAgICBGUk9NICggU0VMRUNUIERJU1RJTkNUDQo+ICAgICAgICAgICAgICAgICAgICAgIGZsaWdo
dCwgc3RhcnRlZCwgYmxlc3NpbmcsIGJyYW5jaCwgaW50ZW5kZWQNCj4gICAgICAgICAgICAgICAg
IEZST00gZmxpZ2h0cyBmDQo+ICAgICAgICAgICAgICAgICAgICBKT0lOIHJ1bnZhcnMgciBVU0lO
RyAoZmxpZ2h0KQ0KPiAgICAgICAgICAgICAgICAgICBXSEVSRSBuYW1lPT8NCj4gICAgICAgICAg
ICAgICAgICAgICBBTkQgbmFtZSBMSUtFICdyZXZpc2lvbl8lJw0KPiAgICAgICAgICAgICAgICAg
ICAgIEFORCB2YWw9Pw0KPiAgICAgICAgICAgICAgICAgICAgIEFORCByLmZsaWdodD1mLmZsaWdo
dA0KPiAgICAgICAgICAgICAgICAgICAgIEFORCAoICAgICAgKENBU0UNCj4gICAgICAgV0hFTiAo
ci5qb2IpIExJS0UgJ2J1aWxkLSUtcHJldicgVEhFTiAneHByZXYnDQo+ICAgICAgIFdIRU4gKChy
LmpvYikgTElLRSAnYnVpbGQtJS1mcmVlYnNkJw0KPiAgICAgICAgICAgICBBTkQgJ3gnID0gJ2Zy
ZWVic2RidWlsZGpvYicpIFRIRU4gJ0RJU0NBUkQnDQo+ICAgICAgIEVMU0UgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICcnDQo+ICAgICAgIEVORCkNCj4gPSAnJykNCj4gICAg
ICAgICAgQU5EICggKFRSVUUgQU5EIGZsaWdodCA8PSAxNTE5MDMpIEFORCAoYmxlc3Npbmc9J3Jl
YWwnKSApDQo+ICAgICAgICAgIEFORCAoYnJhbmNoPT8pDQo+ICkgQVMgc3ViIFdIRVJFIFRSVUUN
Cj4gICAgICAgIE9SREVSIEJZIGZsaWdodCBERVNDDQo+ICAgICAgICBMSU1JVCAxDQoNCkFuZCB0
aGlzIHNheXMgKGVmZmVjdGl2ZWx5KQ0KDQpHZXQgbWUgPGZsaWdodCwgc3RhcnRlZCwgYmxlc3Np
bmcsIGJyYW5jaCwgaW50ZW5kZWQ+DQpGcm9tIHRoZSBoaWdoZXN0LW51bWJlcmVkIGZsaWdodA0K
V2hlcmUNCiAgVGhhdCBmbGlnaHQgaGFzIGEgcnVudmFyIHdpdGggc3BlY2lmaWVkIG5hbWUgYW5k
IHZhbHVlDQogIFRoZSBqb2IgKmRvZXNu4oCZdCogbG9vayBsaWtlIOKAnGJ1aWxkLSUtcHJlduKA
nSBvciDigJxidWlsZC0lLWZyZWVic2TigJ0NCiAgZmxpZ2h0ICYgYmxlc3NpbmcgYXMgYXBwcm9w
cmlhdGUNCiAgYnJhbmNoIGFzIHNwZWNpZmllZC4NCiAgDQpJc27igJl0IHRoZSByLmZsaWdodCA9
IGYuZmxpZ2h0IHJlZHVuZGFudCBpZiB3ZeKAmXJlIGpvaW5pbmcgb24gZmxpZ2h0Pw0KDQpBbHNv
LCBpbiBzcGl0ZSBvZiB0aGUgcGFyYWdyYXBoIGF0dGVtcHRpbmcgdG8gZXhwbGFpbiBpdCwgSeKA
mW0gYWZyYWlkIEkgZG9u4oCZdCB1bmRlcnN0YW5kIHdoYXQgdGhlIOKAnEFTIHN1YiBXSEVSRSBU
UlVF4oCdIGlzIGZvci4NCg0KQnV0IGl0IGxvb2tzIGxpa2UgdGhlIG5ldyBxdWVyeSBzaG91bGQg
ZG8gdGhlIHNhbWUgdGhpbmcgYXMgdGhlIG9sZCBxdWVyeSwgYXNzdW1pbmcgdGhhdCB0aGUgY29s
dW1ucyBmcm9tIHRoZSBzdWJxdWVyeSBhcmUgYWxsIHRoZSBjb2x1bW5zIHRoYXQgeW91IG5lZWQg
aW4gdGhlIGNvcnJlY3Qgb3JkZXIuDQoNCiAtR2VvcmdlDQoNCg==


From xen-devel-bounces@lists.xenproject.org Wed Jul 22 11:33:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jul 2020 11:33:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyCzy-0000ud-IU; Wed, 22 Jul 2020 11:33:22 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dvI5=BB=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jyCzx-0000uK-Ci
 for xen-devel@lists.xenproject.org; Wed, 22 Jul 2020 11:33:21 +0000
X-Inumbo-ID: 2360078a-cc0f-11ea-a195-12813bfff9fa
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2360078a-cc0f-11ea-a195-12813bfff9fa;
 Wed, 22 Jul 2020 11:33:18 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595417598;
 h=from:to:cc:subject:date:message-id:mime-version;
 bh=A9t0+R7n35AICnXzvZVgQOgQ4H3rGcr7mFvRi2qJNRU=;
 b=CK7g5yxAQSS0kmap45lwMmDeHycSv7Pwhsu5fBkS6LiD0LSXhO+gULLo
 s8pZOqJ3SS93R/9gv3qrWbGtwm9S7GFhkgXw/Y6yTbjy7mupht6bHEFe9
 Zom9oLvdNXSfAMKWZEYXsHmJLlQO4ZBu0ykh7HNHRuuwpqUXqeqNYr7D8 Q=;
Authentication-Results: esa4.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: azuF+de7qNehpaOspaS+gw8SmO/Ig6kDLeW4xXeiUzrg23CW/kZfXjPXcQET2A1JZXOyJoCaZX
 KEPxajkUJZnjFmGIxjCB2ZoxacrWusdz6sz3pKzzcFQO6AK5OWYYWNZJeYFkmPKzb1miQqSSRX
 V8Mpd76qPru5UdSw4uHtZ5T5t0Hs09yHOF1wja8SYg0HLAA6f9AzGz0VNyiLBMNjwiPprzGUf8
 8EvPk33XrNeLjbWpscs5px0Lclq4vo3/X+uaIfkC59LeFzCbOUzw29rVyjTuf6wp5hWaAJU8gz
 Ipo=
X-SBRS: 2.7
X-MesageID: 23787972
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,381,1589256000"; d="scan'208";a="23787972"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Subject: [PATCH v2] tools/configure: drop BASH configure variable
Date: Wed, 22 Jul 2020 12:32:58 +0100
Message-ID: <20200722113258.3673-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
MIME-Version: 1.0
Content-Type: text/plain
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>, Wei Liu <wl@xen.org>,
 Ian Jackson <Ian.Jackson@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

This is a weird variable to have in the first place.  The only user of it is
XSM's CONFIG_SHELL, which open-codes a fallback to sh, and the only two scripts
run with it are shebang sh anyway, so they don't need bash at all.

Make the mkflask.sh and mkaccess_vector.sh scripts executable, drop the
CONFIG_SHELL, and drop the $BASH variable to prevent further use.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Ian Jackson <Ian.Jackson@citrix.com>
CC: Wei Liu <wl@xen.org>
CC: Daniel De Graaf <dgdegra@tycho.nsa.gov>

./autogen.sh needs rerunning on commit.

v2:
 * Use $(SHELL)

There is a separate check for [BASH] in the configure script, which checks
the requirement for the Linux hotplug scripts.  Really, that is a runtime
requirement, not a build-time requirement, and it is rude to require bash
in a build environment on this basis.  IMO, that check wants dropping as
well.
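As a minimal sketch of why $(SHELL) suffices (this is not the Xen build
system, and /tmp/demo.mk is just a throwaway name): GNU make uses /bin/sh
for recipes unless a makefile overrides SHELL, so anything invoked via
$(SHELL) only needs POSIX sh, not bash.

```shell
# Hypothetical demo makefile: $(SHELL) is left in single quotes so make,
# not the shell, expands it when the recipe runs.
printf 'all:\n\t@echo "SHELL is $(SHELL)"\n' > /tmp/demo.mk
make -f /tmp/demo.mk
```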
---
 config/Tools.mk.in                      | 1 -
 tools/configure.ac                      | 1 -
 xen/xsm/flask/Makefile                  | 8 ++------
 xen/xsm/flask/policy/mkaccess_vector.sh | 0
 xen/xsm/flask/policy/mkflask.sh         | 0
 5 files changed, 2 insertions(+), 8 deletions(-)
 mode change 100644 => 100755 xen/xsm/flask/policy/mkaccess_vector.sh
 mode change 100644 => 100755 xen/xsm/flask/policy/mkflask.sh

diff --git a/config/Tools.mk.in b/config/Tools.mk.in
index 23df47af8d..48bd9ab731 100644
--- a/config/Tools.mk.in
+++ b/config/Tools.mk.in
@@ -12,7 +12,6 @@ PYTHON              := @PYTHON@
 PYTHON_PATH         := @PYTHONPATH@
 PY_NOOPT_CFLAGS     := @PY_NOOPT_CFLAGS@
 PERL                := @PERL@
-BASH                := @BASH@
 XGETTTEXT           := @XGETTEXT@
 AS86                := @AS86@
 LD86                := @LD86@
diff --git a/tools/configure.ac b/tools/configure.ac
index 9d126b7a14..6614a4f130 100644
--- a/tools/configure.ac
+++ b/tools/configure.ac
@@ -297,7 +297,6 @@ AC_ARG_VAR([PYTHON], [Path to the Python parser])
 AC_ARG_VAR([PERL], [Path to Perl parser])
 AC_ARG_VAR([BISON], [Path to Bison parser generator])
 AC_ARG_VAR([FLEX], [Path to Flex lexical analyser generator])
-AC_ARG_VAR([BASH], [Path to bash shell])
 AC_ARG_VAR([XGETTEXT], [Path to xgetttext tool])
 AC_ARG_VAR([AS86], [Path to as86 tool])
 AC_ARG_VAR([LD86], [Path to ld86 tool])
diff --git a/xen/xsm/flask/Makefile b/xen/xsm/flask/Makefile
index 07f36d075d..50bec20a1e 100644
--- a/xen/xsm/flask/Makefile
+++ b/xen/xsm/flask/Makefile
@@ -8,10 +8,6 @@ CFLAGS-y += -I./include
 
 AWK = awk
 
-CONFIG_SHELL := $(shell if [ -x "$$BASH" ]; then echo $$BASH; \
-          else if [ -x /bin/bash ]; then echo /bin/bash; \
-          else echo sh; fi ; fi)
-
 FLASK_H_DEPEND = policy/security_classes policy/initial_sids
 AV_H_DEPEND = policy/access_vectors
 
@@ -24,14 +20,14 @@ extra-y += $(ALL_H_FILES)
 
 mkflask := policy/mkflask.sh
 quiet_cmd_mkflask = MKFLASK $@
-cmd_mkflask = $(CONFIG_SHELL) $(mkflask) $(AWK) include $(FLASK_H_DEPEND)
+cmd_mkflask = $(SHELL) $(mkflask) $(AWK) include $(FLASK_H_DEPEND)
 
 $(subst include/,%/,$(FLASK_H_FILES)): $(FLASK_H_DEPEND) $(mkflask) FORCE
 	$(call if_changed,mkflask)
 
 mkaccess := policy/mkaccess_vector.sh
 quiet_cmd_mkaccess = MKACCESS VECTOR $@
-cmd_mkaccess = $(CONFIG_SHELL) $(mkaccess) $(AWK) $(AV_H_DEPEND)
+cmd_mkaccess = $(SHELL) $(mkaccess) $(AWK) $(AV_H_DEPEND)
 
 $(subst include/,%/,$(AV_H_FILES)): $(AV_H_DEPEND) $(mkaccess) FORCE
 	$(call if_changed,mkaccess)
diff --git a/xen/xsm/flask/policy/mkaccess_vector.sh b/xen/xsm/flask/policy/mkaccess_vector.sh
old mode 100644
new mode 100755
diff --git a/xen/xsm/flask/policy/mkflask.sh b/xen/xsm/flask/policy/mkflask.sh
old mode 100644
new mode 100755
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Wed Jul 22 11:38:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jul 2020 11:38:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyD4S-0001AR-BH; Wed, 22 Jul 2020 11:38:00 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=bhkO=BB=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jyD4R-00019a-7k
 for xen-devel@lists.xenproject.org; Wed, 22 Jul 2020 11:37:59 +0000
X-Inumbo-ID: ca438cad-cc0f-11ea-8635-bc764e2007e4
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ca438cad-cc0f-11ea-8635-bc764e2007e4;
 Wed, 22 Jul 2020 11:37:58 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595417878;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:content-transfer-encoding:in-reply-to;
 bh=rFikLpDoTwN+lb+5qA356sNYf4XfvDSl0K1Rs3d0IPE=;
 b=NRU1twIiwmhTKfJU7HGfj8Jgvjnl3mmZXni0uKsX4uxpEy73+gOpMpC3
 B8LU91m6Ify7hv33K+uaKpUI3PWVIGRb3RghWKGi20GPduoPu5JZqmOZA
 +FLPAKxJMTSbVSsG3BOoL9U5tPd5TywCazDu9MX5a/VUHIQL86o8oYfGC I=;
Authentication-Results: esa5.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: 8iq/QQgwV7PDLNqmG2OEWqIzTiKctg2JRYeC6uH+7fIHrnT5u+7hjGHM0jhXsPw09Py1enmyqK
 /CcALuKPBw2XMAU2IayTbpLi0HbAHTASeTiv0V6Nl83OCfMbbHGvaFoMHRu1r92uCEpwqkbYyo
 C9uKc3hVIe9tMmtoi+/SaqOOhxyVwt2ejHa9Y0oGdmY04Xk1zehwG2KjHtxyTRHh0Xcu8acLVu
 +ZuRDKlNjb5aNzq4w+i6l2b6XJJ7v2D2EOJnfBbtghVewFmM6f3lxKoFeqAfSITIIamLPs6wZv
 ej4=
X-SBRS: 2.7
X-MesageID: 23123594
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,381,1589256000"; d="scan'208";a="23123594"
Date: Wed, 22 Jul 2020 13:37:51 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Julien Grall <julien@xen.org>
Subject: Re: Virtio in Xen on Arm (based on IOREQ concept)
Message-ID: <20200722113751.GY7191@Air-de-Roger>
References: <CAPD2p-nthLq5NaU32u8pVaa-ub=a9-LOPenupntTYdS-cu31jQ@mail.gmail.com>
 <20200717150039.GV7191@Air-de-Roger>
 <8f4e0c0d-b3d4-9dd3-ce20-639539321968@gmail.com>
 <3dcab37d-0d60-f1cc-1d59-b5497f0fa95f@xen.org>
 <b6cf0931-c31e-b03b-3995-688536de391a@gmail.com>
 <05acce61-5b29-76f7-5664-3438361caf82@xen.org>
 <20200722082115.GR7191@Air-de-Roger>
 <f3c54a7e-4352-7591-73c2-14215bd3ad34@xen.org>
 <20200722111012.GX7191@Air-de-Roger>
 <d28f7cff-53dc-6a63-c681-16bd90b50436@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <d28f7cff-53dc-6a63-c681-16bd90b50436@xen.org>
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Oleksandr Andrushchenko <andr2000@gmail.com>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>, Oleksandr <olekstysh@gmail.com>,
 xen-devel <xen-devel@lists.xenproject.org>, Artem
 Mygaiev <joculator@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Wed, Jul 22, 2020 at 12:17:26PM +0100, Julien Grall wrote:
> 
> 
> On 22/07/2020 12:10, Roger Pau Monné wrote:
> > On Wed, Jul 22, 2020 at 11:47:18AM +0100, Julien Grall wrote:
> > > > 
> > > > You can still use the map-on-fault behaviour as above, but I would
> > > > recommend that you try to limit the number of hypercalls issued.
> > > > Having to issue a single hypercall for each page fault is going to
> > > > be slow, so I would instead use mmap batch to map the whole range in
> > > > unpopulated physical memory and then the OS fault handler just needs to
> > > > fill the page tables with the corresponding address.
> > > IIUC your proposal, you are assuming that you will have enough free space in
> > > the physical address space to map the foreign mapping.
> > > 
> > > However that amount of free space is not unlimited and may be quite small
> > > (see above). It would be fairly easy to exhaust it given that a userspace
> > > application can map many times the same guest physical address.
> > > 
> > > So I still think we need to be able to allow Linux to swap a foreign page
> > > with another page.
> > 
> > Right, but you will have to be careful to make sure physical addresses
> > are not swapped while being used for IO with devices, as in that case
> > you won't get a recoverable fault. This is safe now because physical
> > mappings created by privcmd are never swapped out, but if you go the
> > route you propose you will have to figure out a way to correctly populate
> > physical ranges used for IO with devices, even when the CPU hasn't
> > accessed them.
> > 
> > Relying solely on CPU page faults to populate them will not be enough,
> > as the CPU won't necessarily access all the pages that would be sent
> > to devices for IO.
> 
> The problem you described here doesn't seem to be specific to foreign
> mapping. So I would really be surprised if Linux doesn't already have a
> generic mechanism to deal with this.

Right, Linux will pre-fault and lock the pages before using them for
IO, and unlock them afterwards, in which case it should be safe.

> Hence why I suggested before to deal with foreign mapping the same way as
> Linux would do with user memory.

Should work; on FreeBSD privcmd I also populate the pages in the page
fault handler, but the hypercall to create the foreign mappings is
executed only once, when the ioctl is issued.

Roger.


From xen-devel-bounces@lists.xenproject.org Wed Jul 22 11:38:35 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jul 2020 11:38:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyD4y-0001D6-KG; Wed, 22 Jul 2020 11:38:32 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=yOZ1=BB=citrix.com=ian.jackson@srs-us1.protection.inumbo.net>)
 id 1jyD4x-0001Cp-1T
 for xen-devel@lists.xenproject.org; Wed, 22 Jul 2020 11:38:31 +0000
X-Inumbo-ID: dd590448-cc0f-11ea-a196-12813bfff9fa
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id dd590448-cc0f-11ea-a196-12813bfff9fa;
 Wed, 22 Jul 2020 11:38:30 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595417910;
 h=from:mime-version:content-transfer-encoding:message-id:
 date:to:cc:subject:in-reply-to:references;
 bh=rzSFlW3z1WhVH1s2HOv3/cvhHo2ouO98o7Ju+GOiKno=;
 b=eQsAzxoozNeswbfrjM1hWi6lEfk39KQiRlfDWGgRDRyq21s1T5GTxp7u
 M63AnTMcaJx80XEdc9EteAHhp79asyOJQlYQ3So8e5r4m7hhXZbBNkvnU
 KYYLchhjlCQxxWGNQxOsSLrRcLQ1fClStvH3CfuYaKw+9T0LRbrcXOwuf U=;
Authentication-Results: esa5.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: 3FTV9IGtXD/SOKIH4RL8X3Vp2lyd15ChPJ0S9+RRgPyR0KBDgohKXMhlj7kDUsttGOp0fudb7h
 c8Iz3z3nYNuwi/zf94Zl12mhKth+jPcijeYdjNnZZsZx6PzwS3pvB6+Fj4ED7gT49FnymKpzo5
 KbB37HKnnPzbgZi8iavgchAa9rrgywaBAM035ZIgQCCxV8snT4rPNyATT554Yg5AEAqD1wkfrI
 8Y/jRZdX6cyG0eyX/oopofILQLxHEAHID98+XiqxaK7u9UW/T1E8rHXYZf7N/cKjmcQn7gEHOJ
 9FY=
X-SBRS: 2.7
X-MesageID: 23123604
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,381,1589256000"; d="scan'208";a="23123604"
From: Ian Jackson <ian.jackson@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: 8bit
Message-ID: <24344.9513.638028.351008@mariner.uk.xensource.com>
Date: Wed, 22 Jul 2020 12:38:17 +0100
To: Roger Pau Monne <roger.pau@citrix.com>
Subject: Re: [osstest PATCH] freebsd: remove freebsd- hostflags request from
 guest tests
In-Reply-To: <20200721112016.30133-1-roger.pau@citrix.com>
References: <20200721112016.30133-1-roger.pau@citrix.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Roger Pau Monne writes ("[osstest PATCH] freebsd: remove freebsd- hostflags request from guest tests"):
> Guest tests shouldn't care about the capabilities or firmware of the
> underlying hosts, so drop the request of specific freebsd-<version>
> hostflags for FreeBSD guest tests.
> 
> While there, request the presence of the hvm hostflag since the FreeBSD
> guest tests are run in HVM mode.
> 
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>

Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

I have queued this for the next push to pretest which I hope to do
some time today.

Thanks,
Ian.


From xen-devel-bounces@lists.xenproject.org Wed Jul 22 11:49:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jul 2020 11:49:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyDEz-0002BD-Mv; Wed, 22 Jul 2020 11:48:53 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=q/Qh=BB=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jyDEy-0002B8-Ip
 for xen-devel@lists.xenproject.org; Wed, 22 Jul 2020 11:48:52 +0000
X-Inumbo-ID: 4fe54c46-cc11-11ea-a197-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4fe54c46-cc11-11ea-a197-12813bfff9fa;
 Wed, 22 Jul 2020 11:48:51 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=EBdqWiSyuqyIyFS91rbDRU6W5BcF2CpXLioglgk54jg=; b=OlV1Sz6E++vxhwvidcesumS0M
 tAiGdykyn7JTRaAcy9v0O+QGsUcdmNWDiZcdQiTHqrw4PsUjAjSNIOFN9NQLULIKDs4B9UQdNx/Mg
 MHHE6bllWB7m9ZjkZeBROaXdEwV11bEhgbq0wZDNIm+EivI/C4hyKOVCP17oF+UoE/hxM=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jyDEx-0005Ls-BR; Wed, 22 Jul 2020 11:48:51 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jyDEw-0008G6-So; Wed, 22 Jul 2020 11:48:51 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jyDEw-0005jl-RW; Wed, 22 Jul 2020 11:48:50 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-152076-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 152076: regressions - trouble: fail/pass/starved
X-Osstest-Failures: qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:redhat-install:fail:regression
 qemu-mainline:test-amd64-i386-freebsd10-i386:guest-start:fail:regression
 qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-start:fail:regression
 qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:redhat-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:windows-install:fail:regression
 qemu-mainline:test-armhf-armhf-xl-rtds:guest-start:fail:allowable
 qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:hosts-allocate:starved:nonblocking
 qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This: qemuu=90218a9a393c7925f330e7dcc08658e2a01d3bd4
X-Osstest-Versions-That: qemuu=9e3903136d9acde2fb2dd9e967ba928050a6cb4a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 22 Jul 2020 11:48:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 152076 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/152076/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemuu-ovmf-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-qemuu-nested-intel 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-ovmf-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-debianhvm-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-qemuu-rhel6hvm-intel 10 redhat-install   fail REGR. vs. 151065
 test-amd64-i386-freebsd10-i386 11 guest-start            fail REGR. vs. 151065
 test-amd64-i386-freebsd10-amd64 11 guest-start           fail REGR. vs. 151065
 test-amd64-amd64-qemuu-nested-amd 10 debian-hvm-install  fail REGR. vs. 151065
 test-amd64-i386-qemuu-rhel6hvm-amd 10 redhat-install     fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-win7-amd64 10 windows-install  fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-win7-amd64 10 windows-install   fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-ws16-amd64 10 windows-install  fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-ws16-amd64 10 windows-install   fail REGR. vs. 151065

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds     12 guest-start              fail REGR. vs. 151065

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 151065
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 151065
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-amd64-amd64-qemuu-freebsd11-amd64  2 hosts-allocate           starved n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  2 hosts-allocate           starved n/a

version targeted for testing:
 qemuu                90218a9a393c7925f330e7dcc08658e2a01d3bd4
baseline version:
 qemuu                9e3903136d9acde2fb2dd9e967ba928050a6cb4a

Last test of basis   151065  2020-06-12 22:27:51 Z   39 days
Failing since        151101  2020-06-14 08:32:51 Z   38 days   53 attempts
Testing same since   152076  2020-07-21 14:23:24 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Ahmed Karaman <ahmedkhaledkaraman@gmail.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.m.mail@gmail.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Richardson <Alexander.Richardson@cl.cam.ac.uk>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Boettcher <alexander.boettcher@genode-labs.com>
  Alexander Bulekov <alxndr@bu.edu>
  Alexandre Mergnat <amergnat@baylibre.com>
  Alexey Kardashevskiy <aik@ozlabs.ru>
  Alexey Krasikov <alex-krasikov@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Allan Peramaki <aperamak@pp1.inet.fi>
  Andrew <andrew@daynix.com>
  Andrew Jones <drjones@redhat.com>
  Andrew Melnychenko <andrew@daynix.com>
  Ani Sinha <ani.sinha@nutanix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Ard Biesheuvel <ardb@kernel.org>
  Artyom Tarasenko <atar4qemu@gmail.com>
  Atish Patra <atish.patra@wdc.com>
  Aurelien Jarno <aurelien@aurel32.net>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Basil Salman <basil@daynix.com>
  Basil Salman <bsalman@redhat.com>
  Beata Michalska <beata.michalska@linaro.org>
  Bin Meng <bin.meng@windriver.com>
  Bin Meng <bmeng.cn@gmail.com>
  Cameron Esfahani <dirty@apple.com>
  Catherine A. Frederick <chocola@animebitch.es>
  Cathy Zhang <cathy.zhang@intel.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chenyi Qiang <chenyi.qiang@intel.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christophe de Dinechin <dinechin@redhat.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Cindy Lu <lulu@redhat.com>
  Claudio Fontana <cfontana@suse.de>
  Cleber Rosa <crosa@redhat.com>
  Colin Xu <colin.xu@intel.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniele Buono <dbuono@linux.vnet.ibm.com>
  David Carlier <devnexen@gmail.com>
  David Edmondson <david.edmondson@oracle.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Denis V. Lunev <den@openvz.org>
  Derek Su <dereksu@qnap.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Ed Robbins <E.J.C.Robbins@kent.ac.uk>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Emilio G. Cota <cota@braap.org>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  erik-smit <erik.lucas.smit@gmail.com>
  Evgeny Yakovlev <eyakovlev@virtuozzo.com>
  fangying <fangying1@huawei.com>
  Farhan Ali <alifm@linux.ibm.com>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frank Chang <frank.chang@sifive.com>
  Geoffrey McRae <geoff@hostfission.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Giuseppe Musacchio <thatlemon@gmail.com>
  Greg Kurz <groug@kaod.org>
  Guenter Roeck <linux@roeck-us.net>
  Gustavo Romero <gromero@linux.ibm.com>
  Halil Pasic <pasic@linux.ibm.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Howard Spoelstra <hsp.cat7@gmail.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Ian Jiang <ianjiang.ict@gmail.com>
  Igor Mammedov <imammedo@redhat.com>
  Jan Kiszka <jan.kiszka@siemens.com>
  Janne Grunau <j@jannau.net>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason Wang <jasowang@redhat.com>
  Jay Zhou <jianjay.zhou@huawei.com>
  Jean-Christophe Dubois <jcd@tribudubois.net>
  Jessica Clarke <jrtc27@jrtc27.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jingqi Liu <jingqi.liu@intel.com>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Joseph Myers <joseph@codesourcery.com>
  Josh Kunz <jkz@google.com>
  Juan Quintela <quintela@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Keqian Zhu <zhukeqian1@huawei.com>
  Kevin Wolf <kwolf@redhat.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Leif Lindholm <leif@nuviainc.com>
  Leonid Bloch <lb.workbox@gmail.com>
  Leonid Bloch <lbloch@janustech.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Qiang <liq3ea@gmail.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  lichun <lichun@ruijie.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  Like Xu <like.xu@linux.intel.com>
  Lingfeng Yang <lfy@google.com>
  Lingshan zhu <lingshan.zhu@intel.com>
  Liran Alon <liran.alon@oracle.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Luc Michel <luc.michel@greensocs.com>
  Lukas Straub <lukasstraub2@web.de>
  Luwei Kang <luwei.kang@intel.com>
  Maciej S. Szmigiero <maciej.szmigiero@oracle.com>
  Magnus Damm <magnus.damm@gmail.com>
  Mao Zhongyi <maozhongyi@cmss.chinamobile.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marcelo Tosatti <mtosatti@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Mario Smarduch <msmarduch@digitalocean.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Masahiro Yamada <masahiroy@kernel.org>
  Matus Kysel <mkysel@tachyum.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Maxime Coquelin <maxime.coquelin@redhat.com>
  Menno Lageman <menno.lageman@oracle.com>
  Michael Rolnik <mrolnik@gmail.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michele Denber <denber@mindspring.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Olaf Hering <olaf@aepfle.de>
  Pan Nengyuan <pannengyuan@huawei.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Radoslaw Biernacki <rad@semihalf.com>
  Raphael Norwitz <raphael.norwitz@nutanix.com>
  Reza Arbab <arbab@linux.ibm.com>
  Richard Henderson <richard.henderson@linaro.org>
  Richard W.M. Jones <rjones@redhat.com>
  Riku Voipio <riku.voipio@linaro.org>
  Robert Foley <robert.foley@linaro.org>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Roman Kagan <rkagan@virtuozzo.com>
  Roman Kagan <rvkagan@yandex-team.ru>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Sarah Harris <S.E.Harris@kent.ac.uk>
  Sebastian Rasmussen <sebras@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Tao Xu <tao3.xu@intel.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tiwei Bie <tiwei.bie@intel.com>
  Tong Ho <tong.ho@xilinx.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  WangBowen <bowen.wang@intel.com>
  Wei Huang <wei.huang2@amd.com>
  Wei Wang <wei.w.wang@intel.com>
  Wentong Wu <wentong.wu@intel.com>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xie Yongji <xieyongji@bytedance.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Ying Fang <fangying1@huawei.com>
  Yoshinori Sato <ysato@users.sourceforge.jp>
  Yuri Benditovich <yuri.benditovich@daynix.com>
  Zhang Chen <chen.zhang@intel.com>
  Zheng Chuan <zhengchuan@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       starved 
 test-amd64-amd64-qemuu-freebsd12-amd64                       starved 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 30530 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Jul 22 11:54:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jul 2020 11:54:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyDKJ-00031Z-IN; Wed, 22 Jul 2020 11:54:23 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=mY6V=BB=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jyDKH-00031U-Vm
 for xen-devel@lists.xenproject.org; Wed, 22 Jul 2020 11:54:22 +0000
X-Inumbo-ID: 142c80ce-cc12-11ea-a197-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 142c80ce-cc12-11ea-a197-12813bfff9fa;
 Wed, 22 Jul 2020 11:54:21 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id D2D71ABE2;
 Wed, 22 Jul 2020 11:54:27 +0000 (UTC)
Subject: Re: [PATCH] x86/svm: Fold nsvm_{wr,rd}msr() into
 svm_msr_{read,write}_intercept()
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <20200721172208.12176-1-andrew.cooper3@citrix.com>
 <b3c3dfa9-d2b8-1042-ecf1-8f51351807e1@suse.com>
 <96e0cb7b-c597-075c-f142-6b35aae1a881@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <37c1a0d2-e19c-88b9-cac3-5977b3951d25@suse.com>
Date: Wed, 22 Jul 2020 13:54:21 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <96e0cb7b-c597-075c-f142-6b35aae1a881@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 22.07.2020 12:10, Andrew Cooper wrote:
> On 22/07/2020 10:16, Jan Beulich wrote:
>> On 21.07.2020 19:22, Andrew Cooper wrote:
>>> ... to simplify the default cases.
>>>
>>> There are multiple errors with the handling of these three MSRs, but they are
>>> deliberately not addressed at this point.
>>>
>>> This removes the dance converting -1/0/1 into X86EMUL_*, allowing for the
>>> removal of the 'ret' variable.
>>>
>>> While cleaning this up, drop the gdprintk()'s for #GP conditions, and the
>>> 'result' variable from svm_msr_write_intercept(), as it is never modified.
>>>
>>> No functional change.
>>>
>>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>> Reviewed-by: Jan Beulich <jbeulich@suse.com>
>>
>> However, ...
>>
>>> @@ -2085,6 +2091,22 @@ static int svm_msr_write_intercept(unsigned int msr, uint64_t msr_content)
>>>              goto gpf;
>>>          break;
>>>  
>>> +    case MSR_K8_VM_CR:
>>> +        /* ignore write. handle all bits as read-only. */
>>> +        break;
>>> +
>>> +    case MSR_K8_VM_HSAVE_PA:
>>> +        if ( (msr_content & ~PAGE_MASK) || msr_content > 0xfd00000000ULL )
>> ... while you're moving this code here, wouldn't it be worthwhile
>> to at least fix the > to be >= ?
> 
> I'd prefer not to, to avoid breaking the "No Functional Change" aspect.

Well, so be it then.

> In reality, this needs to be a path which takes an extra ref on the
> nominated frame and globally maps it, seeing as we memcpy to/from it on
> every virtual vmentry/exit.  The check against the HT range is quite bogus.

I agree; I merely found the > so very obviously off-by-one that I
thought I'd at least inquire.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jul 22 11:55:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jul 2020 11:55:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyDLk-0003A3-3F; Wed, 22 Jul 2020 11:55:52 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=q/Qh=BB=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jyDLi-00038t-2L
 for xen-devel@lists.xenproject.org; Wed, 22 Jul 2020 11:55:50 +0000
X-Inumbo-ID: 4526a3ee-cc12-11ea-8636-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4526a3ee-cc12-11ea-8636-bc764e2007e4;
 Wed, 22 Jul 2020 11:55:43 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=UcsIlsf2TNh+0Ml02Lcqm26DIdZcGS8DBlZEz2l81wk=; b=58lc91yYrNEYpFyD8NpPs993p
 vUk6adj3CCVcszofodx8dlLy+BJEafTfsRUTQ4r+OUN+oJzu5eZeFIdFFIcV6YWiyJMGGnw7upuuU
 66gUexVgLdvkbSatK6olZwyj7SG2OBuuJPvW6W0MUBWR2mghFguP4uxNr1uPlaDsaHmOk=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jyDLa-0005U5-Ix; Wed, 22 Jul 2020 11:55:42 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jyDLa-0000Fl-7Q; Wed, 22 Jul 2020 11:55:42 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jyDLa-00020v-6q; Wed, 22 Jul 2020 11:55:42 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-152081-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.14-testing test] 152081: tolerable trouble: fail/pass/starved
 - PUSHED
X-Osstest-Failures: xen-4.14-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 xen-4.14-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-4.14-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-4.14-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-4.14-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 xen-4.14-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-4.14-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-4.14-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-4.14-testing:test-amd64-amd64-qemuu-freebsd12-amd64:hosts-allocate:starved:nonblocking
 xen-4.14-testing:test-amd64-amd64-qemuu-freebsd11-amd64:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This: xen=827031adfeb3c2656baa2156d3e7caaea8aec739
X-Osstest-Versions-That: xen=23fe1b8d5170dfd5039c39181e82bfd5e20f3c18
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 22 Jul 2020 11:55:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 152081 xen-4.14-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/152081/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop             fail never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop             fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop             fail never pass
 test-amd64-amd64-qemuu-freebsd12-amd64  2 hosts-allocate           starved n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  2 hosts-allocate           starved n/a

version targeted for testing:
 xen                  827031adfeb3c2656baa2156d3e7caaea8aec739
baseline version:
 xen                  23fe1b8d5170dfd5039c39181e82bfd5e20f3c18

Last test of basis   152043  2020-07-20 12:10:42 Z    1 days
Failing since        152061  2020-07-21 01:41:43 Z    1 days    2 attempts
Testing same since   152081  2020-07-21 16:52:47 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Julien Grall <jgrall@amazon.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       starved 
 test-amd64-amd64-qemuu-freebsd12-amd64                       starved 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   23fe1b8d51..827031adfe  827031adfeb3c2656baa2156d3e7caaea8aec739 -> stable-4.14


From xen-devel-bounces@lists.xenproject.org Wed Jul 22 12:10:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jul 2020 12:10:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyDZh-0004qQ-K3; Wed, 22 Jul 2020 12:10:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=0IpC=BB=citrix.com=george.dunlap@srs-us1.protection.inumbo.net>)
 id 1jyDZg-0004qL-4J
 for xen-devel@lists.xenproject.org; Wed, 22 Jul 2020 12:10:16 +0000
X-Inumbo-ID: 4c94de00-cc14-11ea-8637-bc764e2007e4
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4c94de00-cc14-11ea-8637-bc764e2007e4;
 Wed, 22 Jul 2020 12:10:14 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595419814;
 h=from:to:cc:subject:date:message-id:references:
 in-reply-to:content-id:content-transfer-encoding: mime-version;
 bh=GatcW2GReBb9SAiidFcPYWQ1tiOtO57bMcOdp2x5J+M=;
 b=gh6xHoRyM8zjBb2X1Mtn1ZRVjLYw9iFX7gN3L/ftH2iVxvkKeaDSLVPa
 iZT39EAsCPJ9TRuRgobYeepmMnClBH07h0/+fmarXYQou9DZ4sFFqVYqU
 D+PNou9ROtuGUtz1PPNRtXGBETbQ3VuY7fQ1rSMQk0A2p/SVw1Smc0dqi o=;
Authentication-Results: esa5.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: wMONTuKVxotu7eKAhnS/KMMe/Ha+QbI1HzN8xeQEHrdZ1+fqAXAtPqEkwIMsaOIIA/RrCfI2J/
 FFfKDy4pArZVFiPnIc3o+xqOQy64dLv6pblUN4SXmXZ+TRJiq8p0xBY0OpXhp3VGXSGJMSCUak
 W0MWn7xPU9T/mapSbi6nyHmjXkgX/C7Vk04oKZdFCIM1qbTWw+z8s3BLpXwg//E1DHS472dVPK
 TXzqUljkou0GskYcip3L/rFhz16S7t2n1UzUPCOMw7L5raGCi/tQmOlkW3nsngfWdUsGB+CDcx
 aWs=
X-SBRS: 2.7
X-MesageID: 23126084
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,381,1589256000"; d="scan'208";a="23126084"
From: George Dunlap <George.Dunlap@citrix.com>
To: Ian Jackson <Ian.Jackson@citrix.com>
Subject: Re: [OSSTEST PATCH 04/14] sg-report-flight: Ask the db for flights of
 interest
Thread-Topic: [OSSTEST PATCH 04/14] sg-report-flight: Ask the db for flights
 of interest
Thread-Index: AQHWX46tA1o7W5rW1EWGL27a+eUklakTYXSA
Date: Wed, 22 Jul 2020 12:10:11 +0000
Message-ID: <3966AFCB-7B7B-45BE-A3F1-7E04943EEEFA@citrix.com>
References: <20200721184205.15232-1-ian.jackson@eu.citrix.com>
 <20200721184205.15232-5-ian.jackson@eu.citrix.com>
In-Reply-To: <20200721184205.15232-5-ian.jackson@eu.citrix.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-mailer: Apple Mail (2.3608.80.23.2.2)
x-ms-exchange-messagesentrepresentingtype: 1
x-ms-exchange-transport-fromentityheader: Hosted
Content-Type: text/plain; charset="utf-8"
Content-ID: <8B4B6CA882925742A43405C357AD1A1D@citrix.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> On Jul 21, 2020, at 7:41 PM, Ian Jackson <ian.jackson@eu.citrix.com> wrote:
> 
> Specifically, we narrow the initial query to flights which have at
> least some job with the built_revision_foo we are looking for.
> 
> This condition is strictly broader than that implemented inside the
> flight search loop, so there is no functional change.
> 
> Perf: runtime of my test case now ~300s-500s.
> 
> Example query before (from the Perl DBI trace):
> 
>      SELECT * FROM (
>        SELECT flight, blessing FROM flights
>            WHERE (branch='xen-unstable')
>              AND                   EXISTS (SELECT 1
>                            FROM jobs
>                           WHERE jobs.flight = flights.flight
>                             AND jobs.job = ?)
> 
>              AND ( (TRUE AND flight <= 151903) AND (blessing='real') )
>            ORDER BY flight DESC
>            LIMIT 1000
>      ) AS sub
>      ORDER BY blessing ASC, flight DESC

This one says:

Find the 1000 most recent flights
Where
  branch is “xen-unstable”
  one of its jobs is $job
  And blessing is “real”

But why are we selecting ‘blessing’ from these, if we’ve specified that blessing = “real”? Isn’t that redundant?

> 
> With these bind variables:
> 
>    "test-armhf-armhf-libvirt"
> 
> After:
> 
>      SELECT * FROM (
>        SELECT DISTINCT flight, blessing
>             FROM flights
>             JOIN runvars r1 USING (flight)
> 
>            WHERE (branch='xen-unstable')
>              AND ( (TRUE AND flight <= 151903) AND (blessing='real') )
>                  AND EXISTS (SELECT 1
>                            FROM jobs
>                           WHERE jobs.flight = flights.flight
>                             AND jobs.job = ?)
> 
>              AND r1.name LIKE 'built_revision_%'
>              AND r1.name = ?
>              AND r1.val= ?
> 
>            ORDER BY flight DESC
>            LIMIT 1000
>      ) AS sub
>      ORDER BY blessing ASC, flight DESC

So this says:

Find me the most 1000 recent flights
Where:
  branch is “xen-unstable”
  flight <= 15903
  blessing is “real”
  One of its jobs is $job
  It has a runvar matching given $name and $val

And of course it uses the ’name LIKE ‘built_revision_%’ index.

Still don’t understand the ’TRUE AND’ and ‘AS sub’ bits, but it looks to me like it’s substantially the same query, with additional $name = $val runvar restriction.

And given that you say, "This condition is strictly broader than that implemented inside the flight search loop”, I take it that it’s again mainly to take advantage of the new index?

 -George


From xen-devel-bounces@lists.xenproject.org Wed Jul 22 12:48:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jul 2020 12:48:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyEA5-0007Xk-No; Wed, 22 Jul 2020 12:47:53 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=0IpC=BB=citrix.com=george.dunlap@srs-us1.protection.inumbo.net>)
 id 1jyEA4-0007Xf-BB
 for xen-devel@lists.xenproject.org; Wed, 22 Jul 2020 12:47:52 +0000
X-Inumbo-ID: 8d794ed8-cc19-11ea-a1a0-12813bfff9fa
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 8d794ed8-cc19-11ea-a1a0-12813bfff9fa;
 Wed, 22 Jul 2020 12:47:51 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595422071;
 h=from:to:cc:subject:date:message-id:references:
 in-reply-to:content-id:content-transfer-encoding: mime-version;
 bh=iXF3Nsj4JvxpIT4ZVYF2qgyPJ+WsQnz7naW3sYFT1p0=;
 b=KSRwySH8jouPjtXV5j0TSIpiae6QDmwAH9F6HOqo+gTvW+9XwSEs/NkK
 lZ/Xq0LwWU5j3vyC8YDMjRSlZ71W1KoZh46JF2z/d/1SPu96XBtuore3T
 ghui1DVgJAHrTekrDt9neP5piXSVGYWhKNV7gdrtAgbSzZKs7nPHiFJCQ Y=;
Authentication-Results: esa5.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: QuNFg5IDhwSM4Fm4v5++nQSUz5awR7wWVkEXIacXNyJgGN/aImaFdFG0c4s9UrWDnB0ffmtg8z
 dQ9NDbVLVu2C8yFHbUhTBQ6TLWDxdfU2G4PL1aGPhpYGa+7mlwEIqixPrbmDJ3KPljMaHN/XTB
 ScROH88yerC5Z4wGMT2zLIxS+dJQ52JlIFzlfcxDWoqK3qwFjQPmQEx6UGGWx6fRa6gFBoncAV
 VPp0vALMVog9QrrHUgK1bnWqy/CVv9YOaPq5uV8Lp/j/zRKw6C5YBcoQywp2yeBwRpcE2y4lEM
 8wI=
X-SBRS: 2.7
X-MesageID: 23128803
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,381,1589256000"; d="scan'208";a="23128803"
From: George Dunlap <George.Dunlap@citrix.com>
To: Ian Jackson <Ian.Jackson@citrix.com>
Subject: Re: [OSSTEST PATCH 05/14] sg-report-flight: Use WITH to use best
 index use for $flightsq
Thread-Topic: [OSSTEST PATCH 05/14] sg-report-flight: Use WITH to use best
 index use for $flightsq
Thread-Index: AQHWX46t5LGxeS+LX0KNSBltVaDWxqkTa/WA
Date: Wed, 22 Jul 2020 12:47:47 +0000
Message-ID: <12D6C675-582D-467A-A882-B779652AF635@citrix.com>
References: <20200721184205.15232-1-ian.jackson@eu.citrix.com>
 <20200721184205.15232-6-ian.jackson@eu.citrix.com>
In-Reply-To: <20200721184205.15232-6-ian.jackson@eu.citrix.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-mailer: Apple Mail (2.3608.80.23.2.2)
x-ms-exchange-messagesentrepresentingtype: 1
x-ms-exchange-transport-fromentityheader: Hosted
Content-Type: text/plain; charset="utf-8"
Content-ID: <CED72C3E85B75C47BFF6E173743CC6BB@citrix.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> On Jul 21, 2020, at 7:41 PM, Ian Jackson <ian.jackson@eu.citrix.com> wrote:
> 
> While we're here, convert this EXISTS subquery to a JOIN.
> 
> Perf: runtime of my test case now ~200-300s.
> 
> Example query before (from the Perl DBI trace):
> 
>      SELECT * FROM (
>        SELECT DISTINCT flight, blessing
>             FROM flights
>             JOIN runvars r1 USING (flight)
> 
>            WHERE (branch='xen-unstable')
>              AND ( (TRUE AND flight <= 151903) AND (blessing='real') )
>                  AND EXISTS (SELECT 1
>                            FROM jobs
>                           WHERE jobs.flight = flights.flight
>                             AND jobs.job = ?)
> 
>              AND r1.name LIKE 'built_revision_%'
>              AND r1.name = ?
>              AND r1.val= ?
> 
>            ORDER BY flight DESC
>            LIMIT 1000
>      ) AS sub
>      ORDER BY blessing ASC, flight DESC
> 
> With bind variables:
> 
>     "test-armhf-armhf-libvirt"
>     'built_revision_xen'
>     '165f3afbfc3db70fcfdccad07085cde0a03c858b'
> 
> After:
> 
>      WITH sub AS (
>        SELECT DISTINCT flight, blessing
>             FROM flights
>             JOIN runvars r1 USING (flight)
> 
>            WHERE (branch='xen-unstable')
>              AND ( (TRUE AND flight <= 151903) AND (blessing='real') )
>              AND r1.name LIKE 'built_revision_%'
>              AND r1.name = ?
>              AND r1.val= ?
> 
>            ORDER BY flight DESC
>            LIMIT 1000
>      )
>      SELECT *
>        FROM sub
>        JOIN jobs USING (flight)
> 
>       WHERE (1=1)
>                  AND jobs.job = ?
> 
>      ORDER BY blessing ASC, flight DESC

I was wondering if converting this to a join would be useful. :-)

Again, not sure what the “(1=1) AND” bit is for; something to poke the query planner somehow?

The main thing I see here is that there’s nothing *in the query* that guarantees you won’t get multiple flights if there are multiple jobs for that flight whose ‘job’ value matches; but given the naming scheme so far, I’m guessing job is unique…?  As long as there’s something else preventing duplication I think it’s fine.

 -George


From xen-devel-bounces@lists.xenproject.org Wed Jul 22 13:50:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jul 2020 13:50:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyF8C-0004HG-Dg; Wed, 22 Jul 2020 13:50:00 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=yOZ1=BB=citrix.com=ian.jackson@srs-us1.protection.inumbo.net>)
 id 1jyF8B-0004HB-4O
 for xen-devel@lists.xenproject.org; Wed, 22 Jul 2020 13:49:59 +0000
X-Inumbo-ID: 39f47586-cc22-11ea-8656-bc764e2007e4
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 39f47586-cc22-11ea-8656-bc764e2007e4;
 Wed, 22 Jul 2020 13:49:57 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595425797;
 h=from:mime-version:content-transfer-encoding:message-id:
 date:to:cc:subject:in-reply-to:references;
 bh=qJsAsTIlOWPMJ2E3fyLul1sYHDh13bYAVNY5PRBa3Fs=;
 b=AWhl+d89WKRpT6Ab0zHBP6qwO75Edyxf8pHxge46ZXNEyzcAJ0zzMsC3
 en18AU9aaMhkpwVc2uxlAv3fnqSXMNtbSVreTF3a4A6Xv75BWB9CktaG6
 CM81VUkwi1xP7C6FG4zMpx2tEpputKn/an9TzvFxh6ELVLO/EFixcW1Pi c=;
Authentication-Results: esa4.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: ipHFvHi5QGu2VadyFS6Fyf4ZelRNo3p2OSndFaUhQiEatTRftXeMQdhJiOflKEh76lRNMFxMhg
 MJOcnAPKn0n0Puf+fTY5eUqPCYPYGTaw8vFfV4Uc/gLDMbPhbmmwWySX4V9arhXcZzca9SHOWd
 jibS2bJB64MTRWZ3ohS0kQekQfP60xneokZRWqbqlB/lb9QuFQdQ/RjXTtV91y+F1+YypeQj/R
 zTC/m7edZFQ4Nt9R4PJR/FT1a5NBPpfmwKQPN9NLjXn8zJ97UlkAv3Rhb+cwPPzVVJkoG1gIeO
 k/c=
X-SBRS: 2.7
X-MesageID: 23801730
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,383,1589256000"; d="scan'208";a="23801730"
From: Ian Jackson <ian.jackson@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Message-ID: <24344.17407.490315.888745@mariner.uk.xensource.com>
Date: Wed, 22 Jul 2020 14:49:51 +0100
To: George Dunlap <George.Dunlap@citrix.com>
Subject: Re: [OSSTEST PATCH 08/14] Executive: Use index for report__find_test
In-Reply-To: <3ACBEEA3-C17D-48AE-8AE5-52C9D92C8C46@citrix.com>
References: <20200721184205.15232-1-ian.jackson@eu.citrix.com>
 <20200721184205.15232-9-ian.jackson@eu.citrix.com>
 <3ACBEEA3-C17D-48AE-8AE5-52C9D92C8C46@citrix.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

George Dunlap writes ("Re: [OSSTEST PATCH 08/14] Executive: Use index for report__find_test"):
> > On Jul 21, 2020, at 7:41 PM, Ian Jackson <ian.jackson@eu.citrix.com> wrote:
> > Example query before (from the Perl DBI trace):
...
> So this says:
> 
> Get me all the columns
> for the highest-numbered flight
> Where:
>   There is at least one runvar for that flight has the specified $name and $value
>   And the job is *not* like build-%-prev or build-%-freebsd
>   The flight number (?) is <= 151903, and blessing = real
>   For the specified $branch

Yes.

> What’s the “TRUE and flight <= 151903” for?

These queries are programmatically constructed.  In this case, the
flight condition is not always there.  My test case had a
--max-flight=151903 on the command line: this is a debugging option.
It avoids newly added stuff in the db confusing me and generally
disturbing things.  This is implemented with a condition variable
which contains either "" or "and flight <= 151903".  Doing it this way
simplifies the generation code.
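
The pattern described here can be sketched like this (an illustrative
Python fragment, not osstest's actual Perl; the function names are
hypothetical, and only the table/column names come from the queries above):

```python
# Sketch of programmatic query construction: each optional restriction is
# a string that is either a neutral stub ("TRUE") or a real condition, so
# the surrounding "AND" glue always yields valid SQL.

def flight_condition(max_flight=None):
    # With no --max-flight option the clause degenerates to just "TRUE",
    # producing the "(TRUE AND flight <= 151903)" shape seen in the trace
    # when the option is present.
    cond = "TRUE"
    if max_flight is not None:
        cond += f" AND flight <= {int(max_flight)}"
    return cond

def build_query(max_flight=None):
    return (
        "SELECT flight, blessing FROM flights\n"
        "  WHERE (branch=?)\n"
        f"    AND ( ({flight_condition(max_flight)}) AND (blessing='real') )\n"
        "  ORDER BY flight DESC LIMIT 1000"
    )
```

Generating the condition as a stub-or-clause string keeps every call site
identical, whether or not the debugging restriction is in effect.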

> And this says (effectively)
> 
> Get me <flight, started, blessing, branch, intended>
> From the highest-numbered flight
> Where
>   That flight has a runvar with specified name and value
>   The job *doesn’t* look like “build-%-prev” or “build-%-freebsd”
>   flight & blessing as appropriate
>   branch as specified.

I think so, yes.

> Isn’t the r.flight = f.flight redundant if we’re joining on flight?

Indeed it is.  I guess I can add a patch at the end to delete that.

> Also, in spite of the paragraph attempting to explain it, I’m afraid
> I don’t understand what the “AS sub WHERE TRUE” is for.

The reason for the subquery is not evident in the SQL.  It's because
of the Perl code which generates this query.  The same code is used to
generate queries that start with things like
   SELECT * ...
   SELECT COUNT(*) AS count ...
The perl code gets told "*" or "COUNT(*) AS count".  The call sites
that pass "*" expect to see fields from flights.  It would be
possible to change "*" to the explicit field list everywhere, but
it was much easier to do it this way.

(The WHERE TRUE is another one of these stubs where a condition might
appear.)
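
That reuse can be illustrated with a hypothetical Python sketch (not the
real Perl generator): the caller supplies the select-list, and wrapping
the inner query as "sub" keeps both spellings valid:

```python
def wrap_query(select_list, inner_sql, extra_cond="TRUE"):
    # Wrapping the inner query as "sub" lets the same generator serve
    # callers that pass "*" and callers that pass "COUNT(*) AS count".
    # "WHERE TRUE" is another stub slot where a condition may appear.
    return f"SELECT {select_list} FROM ({inner_sql}) AS sub WHERE {extra_cond}"

inner = "SELECT flight, blessing FROM flights ORDER BY flight DESC LIMIT 1000"
rows_sql = wrap_query("*", inner)
count_sql = wrap_query("COUNT(*) AS count", inner)
```

Both generated statements share one template, which is why the subquery
wrapper appears even when a plain "SELECT *" would have sufficed.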

> But it looks like the new query should do the same thing as the old
> query, assuming that the columns from the subquery are all the
> columns that you need in the correct order.

The subquery columns are precisely the columns currently existing in
the flights table.

Thanks,
Ian.


From xen-devel-bounces@lists.xenproject.org Wed Jul 22 14:03:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jul 2020 14:03:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyFLN-000614-PQ; Wed, 22 Jul 2020 14:03:37 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=yOZ1=BB=citrix.com=ian.jackson@srs-us1.protection.inumbo.net>)
 id 1jyFLM-00060z-9Q
 for xen-devel@lists.xenproject.org; Wed, 22 Jul 2020 14:03:36 +0000
X-Inumbo-ID: 21161d39-cc24-11ea-865f-bc764e2007e4
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 21161d39-cc24-11ea-865f-bc764e2007e4;
 Wed, 22 Jul 2020 14:03:35 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595426616;
 h=from:mime-version:content-transfer-encoding:message-id:
 date:to:cc:subject:in-reply-to:references;
 bh=RoY830mZQUlGRogyZSkEZerSmwZ3DbZmjbLDb8s4x/A=;
 b=KWxlmMW6tHQ/EzdJ7nkhd5R07xzxYOeKIDixlD5lzlk/Hmj+EmvVe4Et
 uZ+KX1wHGT1GVluzrRMPG53M/PabzzqqMO2DHTqlT3EGbOzxNLLkUuJvd
 znVo1Kw7IwhfNEgvTZAXcBgPcJKH7Vfg/8clNznMOWw3KDf+NII8m7P8h c=;
Authentication-Results: esa2.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: LDmwW1axsGR/U/YiRxozgv66iZwJOmkzlfxs2AorzEl66aTvDhzBSSume0gTtRS6lbxh8CO197
 kkNAQEPBdhrYf4MWBQC4VJROfDjRlUdype5BOWFzPwEw36HO+QFuB4LP02kh1KmhA1LcOtKbAJ
 1uNs82T/O8+/Ol3tG9hIWd+JmgY9oShblK2s5iMRDC56bOrRrHx4WokGS7MZdL+ebSj8UdOins
 3c+cjBkw8jFSTMUXSv5hvv2oreWAoi4KDADYL0sth+lPKVtERxc9VnerfjLpvlzBqF2yP7c+2T
 Tog=
X-SBRS: 2.7
X-MesageID: 22960915
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,383,1589256000"; d="scan'208";a="22960915"
From: Ian Jackson <ian.jackson@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Message-ID: <24344.18220.286848.935081@mariner.uk.xensource.com>
Date: Wed, 22 Jul 2020 15:03:24 +0100
To: George Dunlap <George.Dunlap@citrix.com>
Subject: Re: [OSSTEST PATCH 04/14] sg-report-flight: Ask the db for flights of
 interest
In-Reply-To: <3966AFCB-7B7B-45BE-A3F1-7E04943EEEFA@citrix.com>
References: <20200721184205.15232-1-ian.jackson@eu.citrix.com>
 <20200721184205.15232-5-ian.jackson@eu.citrix.com>
 <3966AFCB-7B7B-45BE-A3F1-7E04943EEEFA@citrix.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

George Dunlap writes ("Re: [OSSTEST PATCH 04/14] sg-report-flight: Ask the db for flights of interest"):
> > On Jul 21, 2020, at 7:41 PM, Ian Jackson <ian.jackson@eu.citrix.com> wrote:
> > Example query before (from the Perl DBI trace):
> > 
> >      SELECT * FROM (
> >        SELECT flight, blessing FROM flights
...
> >              AND ( (TRUE AND flight <= 151903) AND (blessing='real') )
...
> But why are we selecting ‘blessing’ from these, if we’ve specified that blessing = “real”? Isn’t that redundant?

That condition is programmatically constructed.  Sometimes it will ask
for multiple different blessings and then it wants to know which.

> > After:
...
> So this says:
> 
> Find me the most 1000 recent flights
> Where:
>   branch is “xen-unstable”
>   flight <= 15903
>   blessing is “real”
>   One of its jobs is $job
>   It has a runvar matching given $name and $val
> 
> And of course it uses the ’name LIKE ‘built_revision_%’ index.

Yes.

> Still don’t understand the ’TRUE AND’ and ‘AS sub’ bits, but it
> looks to me like it’s substantially the same query, with additional
> $name = $val runvar restriction.

That's my intent, yes.

> And given that you say, "This condition is strictly broader than
> that implemented inside the flight search loop”, I take it that it’s
> again mainly to take advantage of the new index?

Right.  The previous approach was "iterate over recent flights,
figure out precisely what they built, and decide if they meet the
(complex) requirements".

Now we only iterate over a subset of recent flights: those which have
at least one such runvar.  The big comment is meant to be a
demonstration that the "(complex) requirements" are a narrower
condition than the new condition on the initial flights query.
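
The invariant being relied on can be shown with a small hypothetical
Python sketch (not osstest code; the data and helper names are made up):
a prefilter may only ever pass a superset of what the precise check
accepts, so filtering first cannot change the result.

```python
# Broad SQL-side prefilter followed by the precise in-loop check.
# Correctness relies on the prefilter being strictly broader: every
# flight accepted by the precise check must also pass the prefilter.

flights = [
    {"flight": 1, "runvars": {"built_revision_xen": "abc"}},
    {"flight": 2, "runvars": {"built_revision_xen": "def"}},
    {"flight": 3, "runvars": {}},
]

def prefilter(f):
    # Broad condition: has at least one built_revision_* runvar at all.
    return any(n.startswith("built_revision_") for n in f["runvars"])

def precise(f):
    # Narrow condition: the specific revision we are looking for.
    return f["runvars"].get("built_revision_xen") == "abc"

old_way = [f["flight"] for f in flights if precise(f)]
new_way = [f["flight"] for f in flights if prefilter(f) and precise(f)]
assert old_way == new_way == [1]
```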

So I think the result is that it will look deeper into history, and be
faster, but not otherwise change its behaviour.

Ian.


From xen-devel-bounces@lists.xenproject.org Wed Jul 22 14:06:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jul 2020 14:06:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyFOB-000687-8M; Wed, 22 Jul 2020 14:06:31 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=yOZ1=BB=citrix.com=ian.jackson@srs-us1.protection.inumbo.net>)
 id 1jyFOA-000682-D7
 for xen-devel@lists.xenproject.org; Wed, 22 Jul 2020 14:06:30 +0000
X-Inumbo-ID: 8980ed31-cc24-11ea-8662-bc764e2007e4
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8980ed31-cc24-11ea-8662-bc764e2007e4;
 Wed, 22 Jul 2020 14:06:29 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595426790;
 h=from:mime-version:content-transfer-encoding:message-id:
 date:to:cc:subject:in-reply-to:references;
 bh=guJF1F+4rB+qPBfIVu2s2bC2RICGjU9i+5xaE5Cxt7Y=;
 b=IsCAeyIg21ASFDmf3lR5iH7M0JKWJtTNv80IZEkSNlysmhjMGpXuTW2d
 5yjsA7Lqg/WiG3BnaWj43+jJkcqawHNBw/uu+YEZks9opiZK3ppW/hyRy
 qJUBUqvFMuFxfdPBFliFQ5zfHFa5GPmgUc3Ke4Mp5LH+zCqiDi4c68+1P A=;
Authentication-Results: esa2.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: kCbtd7R0hAsXKJ6IpOtosQ9p/g76VluJOGu0DdK/lXBejCl+tEYxya64RFUCiRI0AqmRrjh0I2
 kVbpA63cLKEoJcKnZAkY4jnC451b4E8N7wEstqebmCOGFo3ZYZHOLWKGl3EvEfKw2dpG08YuEH
 UQqOHOne28GFwQg57yUWCCHgKc7C3n28RObBOPbOEeODTeO7HbDXPCRtC1YGdRS5+AdlAClLqy
 4R83/Ja9CmI+KuQkAh7hErii5EarIuFQLokQjLuPu0A9e2HmPqf5f5OqyBSDV5NpjjX5JSG+cA
 sWs=
X-SBRS: 2.7
X-MesageID: 22961238
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,383,1589256000"; d="scan'208";a="22961238"
From: Ian Jackson <ian.jackson@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Message-ID: <24344.18400.375008.553022@mariner.uk.xensource.com>
Date: Wed, 22 Jul 2020 15:06:24 +0100
To: George Dunlap <George.Dunlap@citrix.com>
Subject: Re: [OSSTEST PATCH 05/14] sg-report-flight: Use WITH to use best
 index use for $flightsq
In-Reply-To: <12D6C675-582D-467A-A882-B779652AF635@citrix.com>
References: <20200721184205.15232-1-ian.jackson@eu.citrix.com>
 <20200721184205.15232-6-ian.jackson@eu.citrix.com>
 <12D6C675-582D-467A-A882-B779652AF635@citrix.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <Ian.Jackson@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

George Dunlap writes ("Re: [OSSTEST PATCH 05/14] sg-report-flight: Use WITH to use best index use for $flightsq"):
> On Jul 21, 2020, at 7:41 PM, Ian Jackson <ian.jackson@eu.citrix.com> wrote:
> > After:
> >      WITH sub AS (
> >        SELECT DISTINCT flight, blessing
> >             FROM flights
> >             JOIN runvars r1 USING (flight)
> > 
> >            WHERE (branch='xen-unstable')
> >              AND ( (TRUE AND flight <= 151903) AND (blessing='real') )
> >              AND r1.name LIKE 'built_revision_%'
> >              AND r1.name = ?
> >              AND r1.val= ?
> > 
> >            ORDER BY flight DESC
> >            LIMIT 1000
> >      )
> >      SELECT *
> >        FROM sub
> >        JOIN jobs USING (flight)
> > 
> >       WHERE (1=1)
> >                  AND jobs.job = ?
> > 
> >      ORDER BY blessing ASC, flight DESC
> 
> I was wondering if converting this to a join would be useful. :-)
...
> The main thing I see here is that there’s nothing *in the query*
> that guarantees you won’t get multiple rows if there are multiple
> jobs for that flight matching the ‘job’ value; but given the naming
> scheme so far, I’m guessing job is unique…?  As long as there’s
> something else preventing duplication I think it’s fine.

(flight,job) is the primary key for the jobs table.
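
A minimal sketch of why that settles the duplication concern (hypothetical toy table, not the real osstest schema): with (flight, job) as the primary key, at most one jobs row can match any given (flight, job) pair, so the JOIN cannot multiply flights.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE jobs (flight INTEGER, job TEXT, status TEXT,
                   PRIMARY KEY (flight, job));
INSERT INTO jobs VALUES (151903, 'test-amd64-xl', 'pass');
""")
try:
    # A second row with the same (flight, job) is rejected outright.
    conn.execute("INSERT INTO jobs VALUES (151903, 'test-amd64-xl', 'fail')")
    duplicated = True
except sqlite3.IntegrityError:
    duplicated = False
print(duplicated)  # False: the key constraint forbids the duplicate
```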

I can probably produce a schema dump if that would make reading this
stuff easier.

Ian.


From xen-devel-bounces@lists.xenproject.org Wed Jul 22 14:13:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jul 2020 14:13:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyFUa-00070o-0e; Wed, 22 Jul 2020 14:13:08 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=JiSV=BB=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jyFUY-00070j-Im
 for xen-devel@lists.xenproject.org; Wed, 22 Jul 2020 14:13:06 +0000
X-Inumbo-ID: 75ddec3c-cc25-11ea-8662-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 75ddec3c-cc25-11ea-8662-bc764e2007e4;
 Wed, 22 Jul 2020 14:13:05 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jyFUW-000585-NJ; Wed, 22 Jul 2020 15:13:04 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH 15/14] Executive: Drop redundant AND clause
Date: Wed, 22 Jul 2020 15:13:02 +0100
Message-Id: <20200722141302.21345-1-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <George.Dunlap@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

In "Executive: Use index for report__find_test" we changed an EXISTS
subquery into a JOIN.

Now the condition r.flight=f.flight is redundant: flight is the
join column already supplied by USING.
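
A quick sketch of the equivalence (hypothetical two-column toy tables, not the real Executive schema): JOIN ... USING (flight) already constrains the two flight columns to be equal, so repeating the condition in WHERE filters nothing further.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE f (flight INTEGER, branch TEXT);
CREATE TABLE r (flight INTEGER, name TEXT, val TEXT);
INSERT INTO f VALUES (1, 'xen-unstable'), (2, 'xen-unstable');
INSERT INTO r VALUES (1, 'revision_xen', 'abc'), (2, 'revision_xen', 'def');
""")

# With the (redundant) explicit equality in WHERE...
with_clause = conn.execute("""
  SELECT f.flight FROM f JOIN r USING (flight)
   WHERE r.name = 'revision_xen' AND r.val = 'abc'
     AND r.flight = f.flight
""").fetchall()

# ...and without it: USING (flight) already implies the equality.
without_clause = conn.execute("""
  SELECT f.flight FROM f JOIN r USING (flight)
   WHERE r.name = 'revision_xen' AND r.val = 'abc'
""").fetchall()
print(with_clause == without_clause)  # True
```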

No functional change.

CC: George Dunlap <George.Dunlap@citrix.com>
Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 Osstest/Executive.pm | 1 -
 1 file changed, 1 deletion(-)

diff --git a/Osstest/Executive.pm b/Osstest/Executive.pm
index 66c93ab9..33de3708 100644
--- a/Osstest/Executive.pm
+++ b/Osstest/Executive.pm
@@ -433,7 +433,6 @@ END
 		   WHERE name=?
                      AND name LIKE 'revision_%'
 		     AND val=?
-		     AND r.flight=f.flight
                      AND ${\ main_revision_job_cond('r.job') }
 END
             push @params, "revision_$tree", $revision;
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Wed Jul 22 14:40:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jul 2020 14:40:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyFuU-0000TD-Kj; Wed, 22 Jul 2020 14:39:54 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dvI5=BB=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jyFuS-0000T8-LA
 for xen-devel@lists.xenproject.org; Wed, 22 Jul 2020 14:39:52 +0000
X-Inumbo-ID: 320f9d31-cc29-11ea-a1c1-12813bfff9fa
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 320f9d31-cc29-11ea-a1c1-12813bfff9fa;
 Wed, 22 Jul 2020 14:39:50 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595428790;
 h=from:to:cc:subject:date:message-id:mime-version:
 content-transfer-encoding;
 bh=jYMazXo2Zf70+I57knJORMg8aSlo9FA3v0p9AT+pwr0=;
 b=R7PGCKBO4ic1iBfXvwzAo2rA27nrUYy3KW4uiSbSgLCbQX09e0C2d4yp
 qvTFm8nNDmx1wwFpJ5WdugiidnR4RdrhLRtrgo7RdWTdbjPGMNZmz1ilh
 qTLYRVkdaeMkX3pBLnWvWECILc9qX08Se0NJp4NxWr4YoV92FRU9uYxOe E=;
Authentication-Results: esa4.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: mug06Q5xnHO6xDTKmPM+y8AnJvUqfBe9EeYjT7K5U3Duvx3uIoc8J6MqDkKMc3aUIqxNuIEPWy
 ugtNw/g9jpZElNGm7YiUgkTTXj7RuIGAenj0J86OUe4u8fDxlmkGymPvpdsXcL4r1ZFF2tSg0V
 S8oaGugcu8iM7/AbFk2sOUkcFEJTPcPoqVRKYMYYFqXN24MuOPeIMXxaGvlm1bDHJpEkEKL8AD
 PlOgV2CXVHKkibK57/jgR+NwJ3jteLyAl3y1r/vRthsp5KzE9vkySdMhOP6WvON/3UvYxXULel
 9ZI=
X-SBRS: 2.7
X-MesageID: 23808977
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,383,1589256000"; d="scan'208";a="23808977"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Subject: [PATCH] x86/svm: Misc coding style corrections
Date: Wed, 22 Jul 2020 15:39:29 +0100
Message-ID: <20200722143929.14191-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Jan Beulich <JBeulich@suse.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

No functional change.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Wei Liu <wl@xen.org>
CC: Roger Pau Monné <roger.pau@citrix.com>

These almost certainly aren't all the style issues, but the end result is
certainly far more consistent.
---
 xen/arch/x86/hvm/svm/intr.c      |  19 ++-
 xen/arch/x86/hvm/svm/nestedsvm.c | 291 +++++++++++++++++++++++----------------
 xen/arch/x86/hvm/svm/svm.c       |  76 +++++-----
 3 files changed, 225 insertions(+), 161 deletions(-)

diff --git a/xen/arch/x86/hvm/svm/intr.c b/xen/arch/x86/hvm/svm/intr.c
index 38011bd4e2..7f815d2307 100644
--- a/xen/arch/x86/hvm/svm/intr.c
+++ b/xen/arch/x86/hvm/svm/intr.c
@@ -1,6 +1,6 @@
 /*
  * intr.c: Interrupt handling for SVM.
- * Copyright (c) 2005, AMD Inc. 
+ * Copyright (c) 2005, AMD Inc.
  * Copyright (c) 2004, Intel Corporation.
  *
  * This program is free software; you can redistribute it and/or modify it
@@ -83,9 +83,12 @@ static void svm_enable_intr_window(struct vcpu *v, struct hvm_intack intack)
 
     ASSERT(intack.source != hvm_intsrc_none);
 
-    if ( nestedhvm_enabled(v->domain) ) {
+    if ( nestedhvm_enabled(v->domain) )
+    {
         struct nestedvcpu *nv = &vcpu_nestedhvm(v);
-        if ( nv->nv_vmentry_pending ) {
+
+        if ( nv->nv_vmentry_pending )
+        {
             struct vmcb_struct *gvmcb = nv->nv_vvmcx;
 
             /* check if l1 guest injects interrupt into l2 guest via vintr.
@@ -131,7 +134,7 @@ static void svm_enable_intr_window(struct vcpu *v, struct hvm_intack intack)
         vmcb, general1_intercepts | GENERAL1_INTERCEPT_VINTR);
 }
 
-void svm_intr_assist(void) 
+void svm_intr_assist(void)
 {
     struct vcpu *v = current;
     struct vmcb_struct *vmcb = v->arch.hvm.svm.vmcb;
@@ -151,7 +154,8 @@ void svm_intr_assist(void)
             return;
 
         intblk = hvm_interrupt_blocked(v, intack);
-        if ( intblk == hvm_intblk_svm_gif ) {
+        if ( intblk == hvm_intblk_svm_gif )
+        {
             ASSERT(nestedhvm_enabled(v->domain));
             return;
         }
@@ -167,10 +171,11 @@ void svm_intr_assist(void)
              * the l1 guest occurred.
              */
             rc = nestedsvm_vcpu_interrupt(v, intack);
-            switch (rc) {
+            switch ( rc )
+            {
             case NSVM_INTR_NOTINTERCEPTED:
                 /* Inject interrupt into 2nd level guest directly. */
-                break;	
+                break;
             case NSVM_INTR_NOTHANDLED:
             case NSVM_INTR_FORCEVMEXIT:
                 return;
diff --git a/xen/arch/x86/hvm/svm/nestedsvm.c b/xen/arch/x86/hvm/svm/nestedsvm.c
index a193d9de45..fcfccf75df 100644
--- a/xen/arch/x86/hvm/svm/nestedsvm.c
+++ b/xen/arch/x86/hvm/svm/nestedsvm.c
@@ -30,7 +30,7 @@
 
 #define NSVM_ERROR_VVMCB        1
 #define NSVM_ERROR_VMENTRY      2
- 
+
 static void
 nestedsvm_vcpu_clgi(struct vcpu *v)
 {
@@ -51,7 +51,8 @@ int nestedsvm_vmcb_map(struct vcpu *v, uint64_t vmcbaddr)
 {
     struct nestedvcpu *nv = &vcpu_nestedhvm(v);
 
-    if (nv->nv_vvmcx != NULL && nv->nv_vvmcxaddr != vmcbaddr) {
+    if ( nv->nv_vvmcx != NULL && nv->nv_vvmcxaddr != vmcbaddr )
+    {
         ASSERT(vvmcx_valid(v));
         hvm_unmap_guest_frame(nv->nv_vvmcx, 1);
         nv->nv_vvmcx = NULL;
@@ -87,24 +88,24 @@ int nsvm_vcpu_initialise(struct vcpu *v)
 
     msrpm = alloc_xenheap_pages(get_order_from_bytes(MSRPM_SIZE), 0);
     svm->ns_cached_msrpm = msrpm;
-    if (msrpm == NULL)
+    if ( msrpm == NULL )
         goto err;
     memset(msrpm, 0x0, MSRPM_SIZE);
 
     msrpm = alloc_xenheap_pages(get_order_from_bytes(MSRPM_SIZE), 0);
     svm->ns_merged_msrpm = msrpm;
-    if (msrpm == NULL)
+    if ( msrpm == NULL )
         goto err;
     memset(msrpm, 0x0, MSRPM_SIZE);
 
     nv->nv_n2vmcx = alloc_vmcb();
-    if (nv->nv_n2vmcx == NULL)
+    if ( nv->nv_n2vmcx == NULL )
         goto err;
     nv->nv_n2vmcx_pa = virt_to_maddr(nv->nv_n2vmcx);
 
     return 0;
 
-err:
+ err:
     nsvm_vcpu_destroy(v);
     return -ENOMEM;
 }
@@ -120,28 +121,33 @@ void nsvm_vcpu_destroy(struct vcpu *v)
      * in order to avoid double free of l2 vmcb and the possible memory leak
      * of l1 vmcb page.
      */
-    if (nv->nv_n1vmcx)
+    if ( nv->nv_n1vmcx )
         v->arch.hvm.svm.vmcb = nv->nv_n1vmcx;
 
-    if (svm->ns_cached_msrpm) {
+    if ( svm->ns_cached_msrpm )
+    {
         free_xenheap_pages(svm->ns_cached_msrpm,
                            get_order_from_bytes(MSRPM_SIZE));
         svm->ns_cached_msrpm = NULL;
     }
-    if (svm->ns_merged_msrpm) {
+
+    if ( svm->ns_merged_msrpm )
+    {
         free_xenheap_pages(svm->ns_merged_msrpm,
                            get_order_from_bytes(MSRPM_SIZE));
         svm->ns_merged_msrpm = NULL;
     }
+
     hvm_unmap_guest_frame(nv->nv_vvmcx, 1);
     nv->nv_vvmcx = NULL;
-    if (nv->nv_n2vmcx) {
+    if ( nv->nv_n2vmcx )
+    {
         free_vmcb(nv->nv_n2vmcx);
         nv->nv_n2vmcx = NULL;
         nv->nv_n2vmcx_pa = INVALID_PADDR;
     }
-    if (svm->ns_iomap)
-        svm->ns_iomap = NULL;
+
+    svm->ns_iomap = NULL;
 }
 
 int nsvm_vcpu_reset(struct vcpu *v)
@@ -168,8 +174,7 @@ int nsvm_vcpu_reset(struct vcpu *v)
     svm->ns_vmexit.exitinfo1 = 0;
     svm->ns_vmexit.exitinfo2 = 0;
 
-    if (svm->ns_iomap)
-        svm->ns_iomap = NULL;
+    svm->ns_iomap = NULL;
 
     nestedsvm_vcpu_stgi(v);
     return 0;
@@ -182,15 +187,21 @@ static uint64_t nestedsvm_fpu_vmentry(uint64_t n1cr0,
     uint64_t vcr0;
 
     vcr0 = vvmcb->_cr0;
-    if ( !(n1cr0 & X86_CR0_TS) && (n1vmcb->_cr0 & X86_CR0_TS) ) {
-        /* svm_fpu_leave() run while l1 guest was running.
+    if ( !(n1cr0 & X86_CR0_TS) && (n1vmcb->_cr0 & X86_CR0_TS) )
+    {
+        /*
+         * svm_fpu_leave() run while l1 guest was running.
          * Sync FPU state with l2 guest.
          */
         vcr0 |= X86_CR0_TS;
         n2vmcb->_exception_intercepts |= (1U << TRAP_no_device);
-    } else if ( !(vcr0 & X86_CR0_TS) && (n2vmcb->_cr0 & X86_CR0_TS) ) {
-        /* svm_fpu_enter() run while l1 guest was running.
-         * Sync FPU state with l2 guest. */
+    }
+    else if ( !(vcr0 & X86_CR0_TS) && (n2vmcb->_cr0 & X86_CR0_TS) )
+    {
+        /*
+         * svm_fpu_enter() run while l1 guest was running.
+         * Sync FPU state with l2 guest.
+         */
         vcr0 &= ~X86_CR0_TS;
         n2vmcb->_exception_intercepts &= ~(1U << TRAP_no_device);
     }
@@ -201,14 +212,21 @@ static uint64_t nestedsvm_fpu_vmentry(uint64_t n1cr0,
 static void nestedsvm_fpu_vmexit(struct vmcb_struct *n1vmcb,
     struct vmcb_struct *n2vmcb, uint64_t n1cr0, uint64_t guest_cr0)
 {
-    if ( !(guest_cr0 & X86_CR0_TS) && (n2vmcb->_cr0 & X86_CR0_TS) ) {
-        /* svm_fpu_leave() run while l2 guest was running.
-         * Sync FPU state with l1 guest. */
+    if ( !(guest_cr0 & X86_CR0_TS) && (n2vmcb->_cr0 & X86_CR0_TS) )
+    {
+        /*
+         * svm_fpu_leave() run while l2 guest was running.
+         * Sync FPU state with l1 guest.
+         */
         n1vmcb->_cr0 |= X86_CR0_TS;
         n1vmcb->_exception_intercepts |= (1U << TRAP_no_device);
-    } else if ( !(n1cr0 & X86_CR0_TS) && (n1vmcb->_cr0 & X86_CR0_TS) ) {
-        /* svm_fpu_enter() run while l2 guest was running.
-         * Sync FPU state with l1 guest. */
+    }
+    else if ( !(n1cr0 & X86_CR0_TS) && (n1vmcb->_cr0 & X86_CR0_TS) )
+    {
+        /*
+         * svm_fpu_enter() run while l2 guest was running.
+         * Sync FPU state with l1 guest.
+         */
         n1vmcb->_cr0 &= ~X86_CR0_TS;
         n1vmcb->_exception_intercepts &= ~(1U << TRAP_no_device);
     }
@@ -225,16 +243,17 @@ static int nsvm_vcpu_hostsave(struct vcpu *v, unsigned int inst_len)
 
     n1vmcb->rip += inst_len;
 
-    /* Save shadowed values. This ensures that the l1 guest
-     * cannot override them to break out. */
+    /*
+     * Save shadowed values. This ensures that the l1 guest
+     * cannot override them to break out.
+     */
     n1vmcb->_efer = v->arch.hvm.guest_efer;
     n1vmcb->_cr0 = v->arch.hvm.guest_cr[0];
     n1vmcb->_cr2 = v->arch.hvm.guest_cr[2];
     n1vmcb->_cr4 = v->arch.hvm.guest_cr[4];
 
     /* Remember the host interrupt flag */
-    svm->ns_hostflags.fields.rflagsif =
-        (n1vmcb->rflags & X86_EFLAGS_IF) ? 1 : 0;
+    svm->ns_hostflags.fields.rflagsif = !!(n1vmcb->rflags & X86_EFLAGS_IF);
 
     return 0;
 }
@@ -251,7 +270,8 @@ static int nsvm_vcpu_hostrestore(struct vcpu *v, struct cpu_user_regs *regs)
     ASSERT(n1vmcb != NULL);
     ASSERT(n2vmcb != NULL);
 
-    /* nsvm_vmcb_prepare4vmexit() already saved register values
+    /*
+     * nsvm_vmcb_prepare4vmexit() already saved register values
      * handled by VMSAVE/VMLOAD into n1vmcb directly.
      */
 
@@ -264,7 +284,7 @@ static int nsvm_vcpu_hostrestore(struct vcpu *v, struct cpu_user_regs *regs)
     rc = hvm_set_efer(n1vmcb->_efer);
     if ( rc == X86EMUL_EXCEPTION )
         hvm_inject_hw_exception(TRAP_gp_fault, 0);
-    if (rc != X86EMUL_OKAY)
+    if ( rc != X86EMUL_OKAY )
         gdprintk(XENLOG_ERR, "hvm_set_efer failed, rc: %u\n", rc);
 
     /* CR4 */
@@ -272,7 +292,7 @@ static int nsvm_vcpu_hostrestore(struct vcpu *v, struct cpu_user_regs *regs)
     rc = hvm_set_cr4(n1vmcb->_cr4, true);
     if ( rc == X86EMUL_EXCEPTION )
         hvm_inject_hw_exception(TRAP_gp_fault, 0);
-    if (rc != X86EMUL_OKAY)
+    if ( rc != X86EMUL_OKAY )
         gdprintk(XENLOG_ERR, "hvm_set_cr4 failed, rc: %u\n", rc);
 
     /* CR0 */
@@ -283,7 +303,7 @@ static int nsvm_vcpu_hostrestore(struct vcpu *v, struct cpu_user_regs *regs)
     rc = hvm_set_cr0(n1vmcb->_cr0 | X86_CR0_PE, true);
     if ( rc == X86EMUL_EXCEPTION )
         hvm_inject_hw_exception(TRAP_gp_fault, 0);
-    if (rc != X86EMUL_OKAY)
+    if ( rc != X86EMUL_OKAY )
         gdprintk(XENLOG_ERR, "hvm_set_cr0 failed, rc: %u\n", rc);
     svm->ns_cr0 = v->arch.hvm.guest_cr[0];
 
@@ -293,17 +313,22 @@ static int nsvm_vcpu_hostrestore(struct vcpu *v, struct cpu_user_regs *regs)
 
     /* CR3 */
     /* Nested paging mode */
-    if (nestedhvm_paging_mode_hap(v)) {
+    if ( nestedhvm_paging_mode_hap(v) )
+    {
         /* host nested paging + guest nested paging. */
         /* hvm_set_cr3() below sets v->arch.hvm.guest_cr[3] for us. */
-    } else if (paging_mode_hap(v->domain)) {
+    }
+    else if ( paging_mode_hap(v->domain) )
+    {
         /* host nested paging + guest shadow paging. */
         /* hvm_set_cr3() below sets v->arch.hvm.guest_cr[3] for us. */
-    } else {
+    }
+    else
+    {
         /* host shadow paging + guest shadow paging. */
 
         /* Reset MMU context  -- XXX (hostrestore) not yet working*/
-        if (!pagetable_is_null(v->arch.guest_table))
+        if ( !pagetable_is_null(v->arch.guest_table) )
             put_page(pagetable_get_page(v->arch.guest_table));
         v->arch.guest_table = pagetable_null();
         /* hvm_set_cr3() below sets v->arch.hvm.guest_cr[3] for us. */
@@ -311,7 +336,7 @@ static int nsvm_vcpu_hostrestore(struct vcpu *v, struct cpu_user_regs *regs)
     rc = hvm_set_cr3(n1vmcb->_cr3, false, true);
     if ( rc == X86EMUL_EXCEPTION )
         hvm_inject_hw_exception(TRAP_gp_fault, 0);
-    if (rc != X86EMUL_OKAY)
+    if ( rc != X86EMUL_OKAY )
         gdprintk(XENLOG_ERR, "hvm_set_cr3 failed, rc: %u\n", rc);
 
     regs->rax = n1vmcb->rax;
@@ -321,7 +346,8 @@ static int nsvm_vcpu_hostrestore(struct vcpu *v, struct cpu_user_regs *regs)
     n1vmcb->_dr7 = 0; /* disable all breakpoints */
     n1vmcb->_cpl = 0;
 
-    /* Clear exitintinfo to prevent a fault loop of re-injecting
+    /*
+     * Clear exitintinfo to prevent a fault loop of re-injecting
      * exceptions forever.
      */
     n1vmcb->exit_int_info.raw = 0;
@@ -375,13 +401,11 @@ static int nsvm_vmrun_permissionmap(struct vcpu *v, bool_t viopm)
     nv->nv_ioportED = ioport_ed;
 
     /* v->arch.hvm.svm.msrpm has type unsigned long, thus BYTES_PER_LONG. */
-    for (i = 0; i < MSRPM_SIZE / BYTES_PER_LONG; i++)
+    for ( i = 0; i < MSRPM_SIZE / BYTES_PER_LONG; i++ )
         svm->ns_merged_msrpm[i] = arch_svm->msrpm[i] | ns_msrpm_ptr[i];
 
-    host_vmcb->_iopm_base_pa =
-        (uint64_t)virt_to_maddr(svm->ns_iomap);
-    host_vmcb->_msrpm_base_pa =
-        (uint64_t)virt_to_maddr(svm->ns_merged_msrpm);
+    host_vmcb->_iopm_base_pa  = virt_to_maddr(svm->ns_iomap);
+    host_vmcb->_msrpm_base_pa = virt_to_maddr(svm->ns_merged_msrpm);
 
     return 0;
 }
@@ -438,7 +462,7 @@ static int nsvm_vmcb_prepare4vmrun(struct vcpu *v, struct cpu_user_regs *regs)
      * below. Those cleanbits would be tracked in an integer field
      * in struct nestedsvm.
      * But this effort is not worth doing because:
-     * - Only the intercepts bit of the n1vmcb can effectively be used here 
+     * - Only the intercepts bit of the n1vmcb can effectively be used here
      * - The CPU runs more instructions for the tracking than can be
      *   safed here.
      * The overhead comes from (ordered from highest to lowest):
@@ -462,7 +486,7 @@ static int nsvm_vmcb_prepare4vmrun(struct vcpu *v, struct cpu_user_regs *regs)
         n1vmcb->_general2_intercepts | ns_vmcb->_general2_intercepts;
 
     /* Nested Pause Filter */
-    if (ns_vmcb->_general1_intercepts & GENERAL1_INTERCEPT_PAUSE)
+    if ( ns_vmcb->_general1_intercepts & GENERAL1_INTERCEPT_PAUSE )
         n2vmcb->_pause_filter_count =
             min(n1vmcb->_pause_filter_count, ns_vmcb->_pause_filter_count);
     else
@@ -473,7 +497,7 @@ static int nsvm_vmcb_prepare4vmrun(struct vcpu *v, struct cpu_user_regs *regs)
 
     /* Nested IO permission bitmaps */
     rc = nsvm_vmrun_permissionmap(v, clean.iopm);
-    if (rc)
+    if ( rc )
         return rc;
 
     /* ASID - Emulation handled in hvm_asid_handle_vmenter() */
@@ -534,7 +558,7 @@ static int nsvm_vmcb_prepare4vmrun(struct vcpu *v, struct cpu_user_regs *regs)
     rc = hvm_set_efer(ns_vmcb->_efer);
     if ( rc == X86EMUL_EXCEPTION )
         hvm_inject_hw_exception(TRAP_gp_fault, 0);
-    if (rc != X86EMUL_OKAY)
+    if ( rc != X86EMUL_OKAY )
         gdprintk(XENLOG_ERR, "hvm_set_efer failed, rc: %u\n", rc);
 
     /* CR4 */
@@ -542,7 +566,7 @@ static int nsvm_vmcb_prepare4vmrun(struct vcpu *v, struct cpu_user_regs *regs)
     rc = hvm_set_cr4(ns_vmcb->_cr4, true);
     if ( rc == X86EMUL_EXCEPTION )
         hvm_inject_hw_exception(TRAP_gp_fault, 0);
-    if (rc != X86EMUL_OKAY)
+    if ( rc != X86EMUL_OKAY )
         gdprintk(XENLOG_ERR, "hvm_set_cr4 failed, rc: %u\n", rc);
 
     /* CR0 */
@@ -552,7 +576,7 @@ static int nsvm_vmcb_prepare4vmrun(struct vcpu *v, struct cpu_user_regs *regs)
     rc = hvm_set_cr0(cr0, true);
     if ( rc == X86EMUL_EXCEPTION )
         hvm_inject_hw_exception(TRAP_gp_fault, 0);
-    if (rc != X86EMUL_OKAY)
+    if ( rc != X86EMUL_OKAY )
         gdprintk(XENLOG_ERR, "hvm_set_cr0 failed, rc: %u\n", rc);
 
     /* CR2 */
@@ -560,7 +584,8 @@ static int nsvm_vmcb_prepare4vmrun(struct vcpu *v, struct cpu_user_regs *regs)
     hvm_update_guest_cr(v, 2);
 
     /* Nested paging mode */
-    if (nestedhvm_paging_mode_hap(v)) {
+    if ( nestedhvm_paging_mode_hap(v) )
+    {
         /* host nested paging + guest nested paging. */
         n2vmcb->_np_enable = 1;
 
@@ -570,9 +595,11 @@ static int nsvm_vmcb_prepare4vmrun(struct vcpu *v, struct cpu_user_regs *regs)
         rc = hvm_set_cr3(ns_vmcb->_cr3, false, true);
         if ( rc == X86EMUL_EXCEPTION )
             hvm_inject_hw_exception(TRAP_gp_fault, 0);
-        if (rc != X86EMUL_OKAY)
+        if ( rc != X86EMUL_OKAY )
             gdprintk(XENLOG_ERR, "hvm_set_cr3 failed, rc: %u\n", rc);
-    } else if (paging_mode_hap(v->domain)) {
+    }
+    else if ( paging_mode_hap(v->domain) )
+    {
         /* host nested paging + guest shadow paging. */
         n2vmcb->_np_enable = 1;
         /* Keep h_cr3 as it is. */
@@ -584,9 +611,11 @@ static int nsvm_vmcb_prepare4vmrun(struct vcpu *v, struct cpu_user_regs *regs)
         rc = hvm_set_cr3(ns_vmcb->_cr3, false, true);
         if ( rc == X86EMUL_EXCEPTION )
             hvm_inject_hw_exception(TRAP_gp_fault, 0);
-        if (rc != X86EMUL_OKAY)
+        if ( rc != X86EMUL_OKAY )
             gdprintk(XENLOG_ERR, "hvm_set_cr3 failed, rc: %u\n", rc);
-    } else {
+    }
+    else
+    {
         /* host shadow paging + guest shadow paging. */
         n2vmcb->_np_enable = 0;
         n2vmcb->_h_cr3 = 0x0;
@@ -640,13 +669,15 @@ static int nsvm_vmcb_prepare4vmrun(struct vcpu *v, struct cpu_user_regs *regs)
     n2vmcb->cleanbits.raw = 0;
 
     rc = svm_vmcb_isvalid(__func__, ns_vmcb, v, true);
-    if (rc) {
+    if ( rc )
+    {
         gdprintk(XENLOG_ERR, "virtual vmcb invalid\n");
         return NSVM_ERROR_VVMCB;
     }
 
     rc = svm_vmcb_isvalid(__func__, n2vmcb, v, true);
-    if (rc) {
+    if ( rc )
+    {
         gdprintk(XENLOG_ERR, "n2vmcb invalid\n");
         return NSVM_ERROR_VMENTRY;
     }
@@ -691,15 +722,15 @@ nsvm_vcpu_vmentry(struct vcpu *v, struct cpu_user_regs *regs,
     }
 
     /* nested paging for the guest */
-    svm->ns_hap_enabled = (ns_vmcb->_np_enable) ? 1 : 0;
+    svm->ns_hap_enabled = !!ns_vmcb->_np_enable;
 
     /* Remember the V_INTR_MASK in hostflags */
-    svm->ns_hostflags.fields.vintrmask =
-        (ns_vmcb->_vintr.fields.intr_masking) ? 1 : 0;
+    svm->ns_hostflags.fields.vintrmask = !!ns_vmcb->_vintr.fields.intr_masking;
 
     /* Save l1 guest state (= host state) */
     ret = nsvm_vcpu_hostsave(v, inst_len);
-    if (ret) {
+    if ( ret )
+    {
         gdprintk(XENLOG_ERR, "hostsave failed, ret = %i\n", ret);
         return ret;
     }
@@ -709,7 +740,8 @@ nsvm_vcpu_vmentry(struct vcpu *v, struct cpu_user_regs *regs,
     v->arch.hvm.svm.vmcb_pa = nv->nv_n2vmcx_pa;
 
     ret = nsvm_vmcb_prepare4vmrun(v, regs);
-    if (ret) {
+    if ( ret )
+    {
         gdprintk(XENLOG_ERR, "prepare4vmrun failed, ret = %i\n", ret);
         return ret;
     }
@@ -744,7 +776,8 @@ nsvm_vcpu_vmrun(struct vcpu *v, struct cpu_user_regs *regs)
      * and l1 guest keeps alive. */
     nestedhvm_vcpu_enter_guestmode(v);
 
-    switch (ret) {
+    switch ( ret )
+    {
     case 0:
         break;
     case NSVM_ERROR_VVMCB:
@@ -762,7 +795,7 @@ nsvm_vcpu_vmrun(struct vcpu *v, struct cpu_user_regs *regs)
     }
 
     /* If l1 guest uses shadow paging, update the paging mode. */
-    if (!nestedhvm_paging_mode_hap(v))
+    if ( !nestedhvm_paging_mode_hap(v) )
         paging_update_paging_modes(v);
 
     nv->nv_vmswitch_in_progress = 0;
@@ -785,9 +818,10 @@ nsvm_vcpu_vmexit_inject(struct vcpu *v, struct cpu_user_regs *regs,
 
     ns_vmcb = nv->nv_vvmcx;
 
-    if (nv->nv_vmexit_pending) {
-
-        switch (exitcode) {
+    if ( nv->nv_vmexit_pending )
+    {
+        switch ( exitcode )
+        {
         case VMEXIT_INTR:
             if ( unlikely(ns_vmcb->event_inj.v) && nv->nv_vmentry_pending &&
                  hvm_event_needs_reinjection(ns_vmcb->event_inj.type,
@@ -845,20 +879,20 @@ nsvm_vmcb_guest_intercepts_msr(unsigned long *msr_bitmap,
 
     msr_bit = svm_msrbit(msr_bitmap, msr);
 
-    if (msr_bit == NULL)
+    if ( msr_bit == NULL )
         /* MSR not in the permission map: Let the guest handle it. */
         return NESTEDHVM_VMEXIT_INJECT;
 
     msr &= 0x1fff;
 
-    if (write)
+    if ( write )
         /* write access */
         enabled = test_bit(msr * 2 + 1, msr_bit);
     else
         /* read access */
         enabled = test_bit(msr * 2, msr_bit);
 
-    if (!enabled)
+    if ( !enabled )
         return NESTEDHVM_VMEXIT_HOST;
 
     return NESTEDHVM_VMEXIT_INJECT;
@@ -921,41 +955,42 @@ nsvm_vmcb_guest_intercepts_exitcode(struct vcpu *v,
     struct vmcb_struct *ns_vmcb = nv->nv_vvmcx;
     enum nestedhvm_vmexits vmexits;
 
-    switch (exitcode) {
+    switch ( exitcode )
+    {
     case VMEXIT_CR0_READ ... VMEXIT_CR15_READ:
     case VMEXIT_CR0_WRITE ... VMEXIT_CR15_WRITE:
         exit_bits = 1ULL << (exitcode - VMEXIT_CR0_READ);
-        if (svm->ns_cr_intercepts & exit_bits)
+        if ( svm->ns_cr_intercepts & exit_bits )
             break;
         return 0;
 
     case VMEXIT_DR0_READ ... VMEXIT_DR7_READ:
     case VMEXIT_DR0_WRITE ... VMEXIT_DR7_WRITE:
         exit_bits = 1ULL << (exitcode - VMEXIT_DR0_READ);
-        if (svm->ns_dr_intercepts & exit_bits)
+        if ( svm->ns_dr_intercepts & exit_bits )
             break;
         return 0;
 
     case VMEXIT_EXCEPTION_DE ... VMEXIT_EXCEPTION_XF:
         exit_bits = 1ULL << (exitcode - VMEXIT_EXCEPTION_DE);
-        if (svm->ns_exception_intercepts & exit_bits)
+        if ( svm->ns_exception_intercepts & exit_bits )
             break;
         return 0;
 
     case VMEXIT_INTR ... VMEXIT_SHUTDOWN:
         exit_bits = 1ULL << (exitcode - VMEXIT_INTR);
-        if (svm->ns_general1_intercepts & exit_bits)
+        if ( svm->ns_general1_intercepts & exit_bits )
             break;
         return 0;
 
     case VMEXIT_VMRUN ... VMEXIT_XSETBV:
         exit_bits = 1ULL << (exitcode - VMEXIT_VMRUN);
-        if (svm->ns_general2_intercepts & exit_bits)
+        if ( svm->ns_general2_intercepts & exit_bits )
             break;
         return 0;
 
     case VMEXIT_NPF:
-        if (nestedhvm_paging_mode_hap(v))
+        if ( nestedhvm_paging_mode_hap(v) )
             break;
         return 0;
     case VMEXIT_INVALID:
@@ -969,7 +1004,8 @@ nsvm_vmcb_guest_intercepts_exitcode(struct vcpu *v,
     }
 
     /* Special cases: Do more detailed checks */
-    switch (exitcode) {
+    switch ( exitcode )
+    {
     case VMEXIT_MSR:
         ASSERT(regs != NULL);
         if ( !nestedsvm_vmcb_map(v, nv->nv_vvmcxaddr) )
@@ -977,7 +1013,7 @@ nsvm_vmcb_guest_intercepts_exitcode(struct vcpu *v,
         ns_vmcb = nv->nv_vvmcx;
         vmexits = nsvm_vmcb_guest_intercepts_msr(svm->ns_cached_msrpm,
             regs->ecx, ns_vmcb->exitinfo1 != 0);
-        if (vmexits == NESTEDHVM_VMEXIT_HOST)
+        if ( vmexits == NESTEDHVM_VMEXIT_HOST )
             return 0;
         break;
     case VMEXIT_IOIO:
@@ -986,7 +1022,7 @@ nsvm_vmcb_guest_intercepts_exitcode(struct vcpu *v,
         ns_vmcb = nv->nv_vvmcx;
         vmexits = nsvm_vmcb_guest_intercepts_ioio(ns_vmcb->_iopm_base_pa,
             ns_vmcb->exitinfo1);
-        if (vmexits == NESTEDHVM_VMEXIT_HOST)
+        if ( vmexits == NESTEDHVM_VMEXIT_HOST )
             return 0;
         break;
     }
@@ -1027,7 +1063,7 @@ nsvm_vmcb_prepare4vmexit(struct vcpu *v, struct cpu_user_regs *regs)
      */
 
     /* TSC offset */
-    /* Keep it. It's maintainted by the l1 guest. */ 
+    /* Keep it. It's maintainted by the l1 guest. */
 
     /* ASID */
     /* ns_vmcb->_guest_asid = n2vmcb->_guest_asid; */
@@ -1037,7 +1073,7 @@ nsvm_vmcb_prepare4vmexit(struct vcpu *v, struct cpu_user_regs *regs)
 
     /* Virtual Interrupts */
     ns_vmcb->_vintr = n2vmcb->_vintr;
-    if (!(svm->ns_hostflags.fields.vintrmask))
+    if ( !svm->ns_hostflags.fields.vintrmask )
         ns_vmcb->_vintr.fields.intr_masking = 0;
 
     /* Interrupt state */
@@ -1065,14 +1101,17 @@ nsvm_vmcb_prepare4vmexit(struct vcpu *v, struct cpu_user_regs *regs)
     ns_vmcb->event_inj.raw = 0;
 
     /* Nested paging mode */
-    if (nestedhvm_paging_mode_hap(v)) {
+    if ( nestedhvm_paging_mode_hap(v) )
+    {
         /* host nested paging + guest nested paging. */
         ns_vmcb->_np_enable = n2vmcb->_np_enable;
         ns_vmcb->_cr3 = n2vmcb->_cr3;
         /* The vmcb->h_cr3 is the shadowed h_cr3. The original
          * unshadowed guest h_cr3 is kept in ns_vmcb->h_cr3,
          * hence we keep the ns_vmcb->h_cr3 value. */
-    } else if (paging_mode_hap(v->domain)) {
+    }
+    else if ( paging_mode_hap(v->domain) )
+    {
         /* host nested paging + guest shadow paging. */
         ns_vmcb->_np_enable = 0;
         /* Throw h_cr3 away. Guest is not allowed to set it or
@@ -1081,7 +1120,9 @@ nsvm_vmcb_prepare4vmexit(struct vcpu *v, struct cpu_user_regs *regs)
         /* Stop intercepting #PF (already done above
          * by restoring cached intercepts). */
         ns_vmcb->_cr3 = n2vmcb->_cr3;
-    } else {
+    }
+    else
+    {
         /* host shadow paging + guest shadow paging. */
         ns_vmcb->_np_enable = 0;
         ns_vmcb->_h_cr3 = 0x0;
@@ -1211,12 +1252,13 @@ enum hvm_intblk nsvm_intr_blocked(struct vcpu *v)
     if ( !nestedsvm_gif_isset(v) )
         return hvm_intblk_svm_gif;
 
-    if ( nestedhvm_vcpu_in_guestmode(v) ) {
+    if ( nestedhvm_vcpu_in_guestmode(v) )
+    {
         struct vmcb_struct *n2vmcb = nv->nv_n2vmcx;
 
-        if ( svm->ns_hostflags.fields.vintrmask )
-            if ( !svm->ns_hostflags.fields.rflagsif )
-                return hvm_intblk_rflags_ie;
+        if ( svm->ns_hostflags.fields.vintrmask &&
+             !svm->ns_hostflags.fields.rflagsif )
+            return hvm_intblk_rflags_ie;
 
         /* when l1 guest passes its devices through to the l2 guest
          * and l2 guest does an MMIO access then we may want to
@@ -1237,12 +1279,11 @@ enum hvm_intblk nsvm_intr_blocked(struct vcpu *v)
         }
     }
 
-    if ( nv->nv_vmexit_pending ) {
+    if ( nv->nv_vmexit_pending )
         /* hvm_inject_hw_exception() must have run before.
          * exceptions have higher priority than interrupts.
          */
         return hvm_intblk_rflags_ie;
-    }
 
     return hvm_intblk_none;
 }
@@ -1275,9 +1316,10 @@ nestedsvm_check_intercepts(struct vcpu *v, struct cpu_user_regs *regs,
     ASSERT(vcpu_nestedhvm(v).nv_vmexit_pending == 0);
     is_intercepted = nsvm_vmcb_guest_intercepts_exitcode(v, regs, exitcode);
 
-    switch (exitcode) {
+    switch ( exitcode )
+    {
     case VMEXIT_INVALID:
-        if (is_intercepted)
+        if ( is_intercepted )
             return NESTEDHVM_VMEXIT_INJECT;
         return NESTEDHVM_VMEXIT_HOST;
 
@@ -1291,14 +1333,16 @@ nestedsvm_check_intercepts(struct vcpu *v, struct cpu_user_regs *regs,
         return NESTEDHVM_VMEXIT_HOST;
 
     case VMEXIT_NPF:
-        if (nestedhvm_paging_mode_hap(v)) {
-            if (!is_intercepted)
+        if ( nestedhvm_paging_mode_hap(v) )
+        {
+            if ( !is_intercepted )
                 return NESTEDHVM_VMEXIT_FATALERROR;
             /* host nested paging + guest nested paging */
             return NESTEDHVM_VMEXIT_HOST;
         }
-        if (paging_mode_hap(v->domain)) {
-            if (is_intercepted)
+        if ( paging_mode_hap(v->domain) )
+        {
+            if ( is_intercepted )
                 return NESTEDHVM_VMEXIT_FATALERROR;
             /* host nested paging + guest shadow paging */
             return NESTEDHVM_VMEXIT_HOST;
@@ -1306,20 +1350,21 @@ nestedsvm_check_intercepts(struct vcpu *v, struct cpu_user_regs *regs,
         /* host shadow paging + guest shadow paging */
         /* Can this happen? */
         BUG();
-        return NESTEDHVM_VMEXIT_FATALERROR;
+
     case VMEXIT_EXCEPTION_PF:
-        if (nestedhvm_paging_mode_hap(v)) {
+        if ( nestedhvm_paging_mode_hap(v) )
+        {
             /* host nested paging + guest nested paging */
-            if (!is_intercepted)
+            if ( !is_intercepted )
                 /* l1 guest intercepts #PF unnecessarily */
                 return NESTEDHVM_VMEXIT_HOST;
             /* l2 guest intercepts #PF unnecessarily */
             return NESTEDHVM_VMEXIT_INJECT;
         }
-        if (!paging_mode_hap(v->domain)) {
+        if ( !paging_mode_hap(v->domain) )
             /* host shadow paging + guest shadow paging */
             return NESTEDHVM_VMEXIT_HOST;
-        }
+
         /* host nested paging + guest shadow paging */
         return NESTEDHVM_VMEXIT_INJECT;
     case VMEXIT_VMMCALL:
@@ -1331,7 +1376,7 @@ nestedsvm_check_intercepts(struct vcpu *v, struct cpu_user_regs *regs,
         break;
     }
 
-    if (is_intercepted)
+    if ( is_intercepted )
         return NESTEDHVM_VMEXIT_INJECT;
     return NESTEDHVM_VMEXIT_HOST;
 }
@@ -1346,11 +1391,11 @@ nestedsvm_vmexit_n2n1(struct vcpu *v, struct cpu_user_regs *regs)
     ASSERT(nestedhvm_vcpu_in_guestmode(v));
 
     rc = nsvm_vmcb_prepare4vmexit(v, regs);
-    if (rc)
+    if ( rc )
         ret = NESTEDHVM_VMEXIT_ERROR;
 
     rc = nsvm_vcpu_hostrestore(v, regs);
-    if (rc)
+    if ( rc )
         ret = NESTEDHVM_VMEXIT_FATALERROR;
 
     nestedhvm_vcpu_exit_guestmode(v);
@@ -1374,17 +1419,19 @@ nestedsvm_vcpu_vmexit(struct vcpu *v, struct cpu_user_regs *regs,
     /* On special intercepts the host has to handle
      * the vcpu is still in guest mode here.
      */
-    if (nestedhvm_vcpu_in_guestmode(v)) {
+    if ( nestedhvm_vcpu_in_guestmode(v) )
+    {
         enum nestedhvm_vmexits ret;
 
         ret = nestedsvm_vmexit_n2n1(v, regs);
-        switch (ret) {
+        switch ( ret )
+        {
         case NESTEDHVM_VMEXIT_FATALERROR:
             gdprintk(XENLOG_ERR, "VMEXIT: fatal error\n");
             return ret;
         case NESTEDHVM_VMEXIT_HOST:
             BUG();
-            return ret;
+
         case NESTEDHVM_VMEXIT_ERROR:
             exitcode = VMEXIT_INVALID;
             break;
@@ -1404,12 +1451,12 @@ nestedsvm_vcpu_vmexit(struct vcpu *v, struct cpu_user_regs *regs,
     rc = nsvm_vcpu_vmexit_inject(v, regs, exitcode);
 
     /* If l1 guest uses shadow paging, update the paging mode. */
-    if (!nestedhvm_paging_mode_hap(v))
+    if ( !nestedhvm_paging_mode_hap(v) )
         paging_update_paging_modes(v);
 
     nv->nv_vmswitch_in_progress = 0;
 
-    if (rc)
+    if ( rc )
         return NESTEDHVM_VMEXIT_FATALERROR;
 
     return NESTEDHVM_VMEXIT_DONE;
@@ -1422,7 +1469,7 @@ void nsvm_vcpu_switch(struct cpu_user_regs *regs)
     struct nestedvcpu *nv;
     struct nestedsvm *svm;
 
-    if (!nestedhvm_enabled(v->domain))
+    if ( !nestedhvm_enabled(v->domain) )
         return;
 
     nv = &vcpu_nestedhvm(v);
@@ -1433,32 +1480,34 @@ void nsvm_vcpu_switch(struct cpu_user_regs *regs)
     ASSERT(nv->nv_n1vmcx_pa != INVALID_PADDR);
     ASSERT(nv->nv_n2vmcx_pa != INVALID_PADDR);
 
-    if (nv->nv_vmexit_pending) {
- vmexit:
+    if ( nv->nv_vmexit_pending )
+    {
+    vmexit:
         nestedsvm_vcpu_vmexit(v, regs, svm->ns_vmexit.exitcode);
         nv->nv_vmexit_pending = 0;
         nv->nv_vmentry_pending = 0;
         return;
     }
-    if (nv->nv_vmentry_pending) {
+
+    if ( nv->nv_vmentry_pending )
+    {
         int ret;
         ASSERT(!nv->nv_vmexit_pending);
         ret = nsvm_vcpu_vmrun(v, regs);
-        if (ret)
+        if ( ret )
             goto vmexit;
 
         ASSERT(nestedhvm_vcpu_in_guestmode(v));
         nv->nv_vmentry_pending = 0;
     }
 
-    if (nestedhvm_vcpu_in_guestmode(v)
-       && nestedhvm_paging_mode_hap(v))
+    if ( nestedhvm_vcpu_in_guestmode(v) && nestedhvm_paging_mode_hap(v) )
     {
         /* In case left the l2 guest due to a physical interrupt (e.g. IPI)
          * that is not for the l1 guest then we continue running the l2 guest
          * but check if the nestedp2m is still valid.
          */
-        if (nv->nv_p2m == NULL)
+        if ( nv->nv_p2m == NULL )
             nestedsvm_vmcb_set_nestedp2m(v, nv->nv_vvmcx, nv->nv_n2vmcx);
     }
 }
@@ -1477,7 +1526,8 @@ nestedsvm_vcpu_interrupt(struct vcpu *v, const struct hvm_intack intack)
     if ( intr != hvm_intblk_none )
         return NSVM_INTR_MASKED;
 
-    switch (intack.source) {
+    switch ( intack.source )
+    {
     case hvm_intsrc_pic:
     case hvm_intsrc_lapic:
     case hvm_intsrc_vector:
@@ -1500,7 +1550,8 @@ nestedsvm_vcpu_interrupt(struct vcpu *v, const struct hvm_intack intack)
 
     ret = nsvm_vmcb_guest_intercepts_exitcode(v,
                                      guest_cpu_user_regs(), exitcode);
-    if (ret) {
+    if ( ret )
+    {
         nestedsvm_vmexit_defer(v, exitcode, intack.source, exitinfo2);
         return NSVM_INTR_FORCEVMEXIT;
     }
diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
index bbe73744b8..ca3bbfcbb3 100644
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -326,7 +326,7 @@ static int svm_vmcb_restore(struct vcpu *v, struct hvm_hw_cpu *c)
     vmcb->sysenter_cs = v->arch.hvm.svm.guest_sysenter_cs = c->sysenter_cs;
     vmcb->sysenter_esp = v->arch.hvm.svm.guest_sysenter_esp = c->sysenter_esp;
     vmcb->sysenter_eip = v->arch.hvm.svm.guest_sysenter_eip = c->sysenter_eip;
-    
+
     if ( paging_mode_hap(v->domain) )
     {
         vmcb_set_np_enable(vmcb, 1);
@@ -386,7 +386,8 @@ static void svm_save_vmcb_ctxt(struct vcpu *v, struct hvm_hw_cpu *ctxt)
 static int svm_load_vmcb_ctxt(struct vcpu *v, struct hvm_hw_cpu *ctxt)
 {
     svm_load_cpu_state(v, ctxt);
-    if (svm_vmcb_restore(v, ctxt)) {
+    if ( svm_vmcb_restore(v, ctxt) )
+    {
         gdprintk(XENLOG_ERR, "svm_vmcb restore failed!\n");
         domain_crash(v->domain);
         return -EINVAL;
@@ -413,9 +414,9 @@ static void svm_fpu_leave(struct vcpu *v)
     ASSERT(read_cr0() & X86_CR0_TS);
 
     /*
-     * If the guest does not have TS enabled then we must cause and handle an 
-     * exception on first use of the FPU. If the guest *does* have TS enabled 
-     * then this is not necessary: no FPU activity can occur until the guest 
+     * If the guest does not have TS enabled then we must cause and handle an
+     * exception on first use of the FPU. If the guest *does* have TS enabled
+     * then this is not necessary: no FPU activity can occur until the guest
      * clears CR0.TS, and we will initialise the FPU when that happens.
      */
     if ( !(v->arch.hvm.guest_cr[0] & X86_CR0_TS) )
@@ -475,7 +476,8 @@ void svm_update_guest_cr(struct vcpu *v, unsigned int cr, unsigned int flags)
 
     switch ( cr )
     {
-    case 0: {
+    case 0:
+    {
         unsigned long hw_cr0_mask = 0;
 
         if ( !(v->arch.hvm.guest_cr[0] & X86_CR0_TS) )
@@ -821,7 +823,8 @@ static void svm_set_tsc_offset(struct vcpu *v, u64 offset, u64 at_tsc)
     uint64_t n2_tsc_offset = 0;
     struct domain *d = v->domain;
 
-    if ( !nestedhvm_enabled(d) ) {
+    if ( !nestedhvm_enabled(d) )
+    {
         vmcb_set_tsc_offset(vmcb, offset);
         return;
     }
@@ -829,12 +832,14 @@ static void svm_set_tsc_offset(struct vcpu *v, u64 offset, u64 at_tsc)
     n1vmcb = vcpu_nestedhvm(v).nv_n1vmcx;
     n2vmcb = vcpu_nestedhvm(v).nv_n2vmcx;
 
-    if ( nestedhvm_vcpu_in_guestmode(v) ) {
+    if ( nestedhvm_vcpu_in_guestmode(v) )
+    {
         struct nestedsvm *svm = &vcpu_nestedsvm(v);
 
         n2_tsc_offset = vmcb_get_tsc_offset(n2vmcb) -
                         vmcb_get_tsc_offset(n1vmcb);
-        if ( svm->ns_tscratio != DEFAULT_TSC_RATIO ) {
+        if ( svm->ns_tscratio != DEFAULT_TSC_RATIO )
+        {
             uint64_t guest_tsc = hvm_get_guest_tsc_fixed(v, at_tsc);
 
             n2_tsc_offset = svm_get_tsc_offset(guest_tsc,
@@ -930,7 +935,7 @@ static inline void svm_tsc_ratio_save(struct vcpu *v)
 
 static inline void svm_tsc_ratio_load(struct vcpu *v)
 {
-    if ( cpu_has_tsc_ratio && !v->domain->arch.vtsc ) 
+    if ( cpu_has_tsc_ratio && !v->domain->arch.vtsc )
         wrmsrl(MSR_AMD64_TSC_RATIO, hvm_tsc_scaling_ratio(v->domain));
 }
 
@@ -1111,7 +1116,7 @@ static void svm_host_osvw_init(void)
              rdmsr_safe(MSR_AMD_OSVW_STATUS, status) )
             len = status = 0;
 
-        if (len < osvw_length)
+        if ( len < osvw_length )
             osvw_length = len;
 
         osvw_status |= status;
@@ -1507,13 +1512,11 @@ static void svm_init_erratum_383(const struct cpuinfo_x86 *c)
         return;
 
     /* use safe methods to be compatible with nested virtualization */
-    if (rdmsr_safe(MSR_AMD64_DC_CFG, msr_content) == 0 &&
-        wrmsr_safe(MSR_AMD64_DC_CFG, msr_content | (1ULL << 47)) == 0)
-    {
+    if ( rdmsr_safe(MSR_AMD64_DC_CFG, msr_content) == 0 &&
+         wrmsr_safe(MSR_AMD64_DC_CFG, msr_content | (1ULL << 47)) == 0 )
         amd_erratum383_found = 1;
-    } else {
+    else
         printk("Failed to enable erratum 383\n");
-    }
 }
 
 #ifdef CONFIG_PV
@@ -1582,7 +1585,7 @@ static int _svm_cpu_up(bool bsp)
     int rc;
     unsigned int cpu = smp_processor_id();
     const struct cpuinfo_x86 *c = &cpu_data[cpu];
- 
+
     /* Check whether SVM feature is disabled in BIOS */
     rdmsrl(MSR_K8_VM_CR, msr_content);
     if ( msr_content & K8_VMCR_SVME_DISABLE )
@@ -1713,7 +1716,7 @@ static void svm_do_nested_pgfault(struct vcpu *v,
         _d.qualification = 0;
         _d.mfn = mfn_x(mfn);
         _d.p2mt = p2mt;
-        
+
         __trace_var(TRC_HVM_NPF, 0, sizeof(_d), &_d);
     }
 
@@ -2248,16 +2251,15 @@ nsvm_get_nvmcb_page(struct vcpu *v, uint64_t vmcbaddr)
         return NULL;
 
     /* Need to translate L1-GPA to MPA */
-    page = get_page_from_gfn(v->domain, 
-                            nv->nv_vvmcxaddr >> PAGE_SHIFT, 
-                            &p2mt, P2M_ALLOC | P2M_UNSHARE);
+    page = get_page_from_gfn(v->domain, nv->nv_vvmcxaddr >> PAGE_SHIFT,
+                             &p2mt, P2M_ALLOC | P2M_UNSHARE);
     if ( !page )
         return NULL;
 
     if ( !p2m_is_ram(p2mt) || p2m_is_readonly(p2mt) )
     {
         put_page(page);
-        return NULL; 
+        return NULL;
     }
 
     return  page;
@@ -2274,7 +2276,7 @@ svm_vmexit_do_vmload(struct vmcb_struct *vmcb,
     if ( (inst_len = svm_get_insn_len(v, INSTR_VMLOAD)) == 0 )
         return;
 
-    if ( !nsvm_efer_svm_enabled(v) ) 
+    if ( !nsvm_efer_svm_enabled(v) )
     {
         hvm_inject_hw_exception(TRAP_invalid_op, X86_EVENT_NO_EC);
         return;
@@ -2309,7 +2311,7 @@ svm_vmexit_do_vmsave(struct vmcb_struct *vmcb,
     if ( (inst_len = svm_get_insn_len(v, INSTR_VMSAVE)) == 0 )
         return;
 
-    if ( !nsvm_efer_svm_enabled(v) ) 
+    if ( !nsvm_efer_svm_enabled(v) )
     {
         hvm_inject_hw_exception(TRAP_invalid_op, X86_EVENT_NO_EC);
         return;
@@ -2344,11 +2346,11 @@ static int svm_is_erratum_383(struct cpu_user_regs *regs)
 
     if ( msr_content != 0xb600000000010015ULL )
         return 0;
-    
+
     /* Clear MCi_STATUS registers */
-    for (i = 0; i < this_cpu(nr_mce_banks); i++)
+    for ( i = 0; i < this_cpu(nr_mce_banks); i++ )
         wrmsrl(MSR_IA32_MCx_STATUS(i), 0ULL);
-    
+
     rdmsrl(MSR_IA32_MCG_STATUS, msr_content);
     wrmsrl(MSR_IA32_MCG_STATUS, msr_content & ~(1ULL << 2));
 
@@ -2535,7 +2537,8 @@ void svm_vmexit_handler(struct cpu_user_regs *regs)
                     1/*cycles*/, 2, exit_reason,
                     regs->eip, 0, 0, 0, 0);
 
-    if ( vcpu_guestmode ) {
+    if ( vcpu_guestmode )
+    {
         enum nestedhvm_vmexits nsret;
         struct nestedvcpu *nv = &vcpu_nestedhvm(v);
         struct vmcb_struct *ns_vmcb = nv->nv_vvmcx;
@@ -2550,7 +2553,8 @@ void svm_vmexit_handler(struct cpu_user_regs *regs)
         exitinfo1 = ns_vmcb->exitinfo1;
         ns_vmcb->exitinfo1 = vmcb->exitinfo1;
         nsret = nestedsvm_check_intercepts(v, regs, exit_reason);
-        switch (nsret) {
+        switch ( nsret )
+        {
         case NESTEDHVM_VMEXIT_CONTINUE:
             BUG();
             break;
@@ -2566,7 +2570,8 @@ void svm_vmexit_handler(struct cpu_user_regs *regs)
             nv->nv_vmswitch_in_progress = 1;
             nsret = nestedsvm_vmexit_n2n1(v, regs);
             nv->nv_vmswitch_in_progress = 0;
-            switch (nsret) {
+            switch ( nsret )
+            {
             case NESTEDHVM_VMEXIT_DONE:
                 /* defer VMEXIT injection */
                 nestedsvm_vmexit_defer(v, exit_reason, exitinfo1, exitinfo2);
@@ -2698,9 +2703,10 @@ void svm_vmexit_handler(struct cpu_user_regs *regs)
 
     case VMEXIT_EXCEPTION_NM:
         svm_fpu_dirty_intercept();
-        break;  
+        break;
 
-    case VMEXIT_EXCEPTION_PF: {
+    case VMEXIT_EXCEPTION_PF:
+    {
         unsigned long va;
         va = vmcb->exitinfo2;
         regs->error_code = vmcb->exitinfo1;
@@ -2744,7 +2750,8 @@ void svm_vmexit_handler(struct cpu_user_regs *regs)
         svm_vmexit_mce_intercept(v, regs);
         break;
 
-    case VMEXIT_VINTR: {
+    case VMEXIT_VINTR:
+    {
         u32 general1_intercepts = vmcb_get_general1_intercepts(vmcb);
         intr = vmcb_get_vintr(vmcb);
 
@@ -2952,7 +2959,8 @@ void svm_vmexit_handler(struct cpu_user_regs *regs)
         v->arch.hvm.svm.cached_insn_len = 0;
         break;
 
-    case VMEXIT_IRET: {
+    case VMEXIT_IRET:
+    {
         u32 general1_intercepts = vmcb_get_general1_intercepts(vmcb);
 
         /*
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Wed Jul 22 15:04:50 2020
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xenproject.org>
Subject: [osstest PATCH] dom0pvh: assign 1GB of memory to PVH dom0
Date: Wed, 22 Jul 2020 17:04:16 +0200
Message-ID: <20200722150416.36426-1-roger.pau@citrix.com>
X-Mailer: git-send-email 2.27.0
Cc: ian.jackson@eu.citrix.com, Roger Pau Monne <roger.pau@citrix.com>

Current tests use 512MB of memory for dom0, but that's too low for a
PVH dom0 on some hosts and will cause errors because memory is
ballooned out in order to obtain physical memory ranges to map foreign
pages.

Using ballooned out pages for foreign mappings also doesn't seem to
work properly with the current Linux kernel version, so increase the
memory assigned to dom0 to 1GB for PVH dom0 tests. We should see about
reverting this when using ballooned pages is fixed.

The runvar diff is:

+test-amd64-amd64-dom0pvh-xl-amd   dom0_mem 1024
+test-amd64-amd64-dom0pvh-xl-intel dom0_mem 1024
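As a sketch of the intended effect (the variable names and the runvar-to-boot-line plumbing here are illustrative assumptions, not taken from osstest internals), the new runvar is meant to surface as Xen's `dom0_mem` option on the hypervisor command line alongside the existing boot append:

```shell
# Illustrative only: compose the effective Xen boot line from the
# existing xen_boot_append value and the new dom0_mem runvar (in MiB).
xen_boot_append='dom0=pvh,verbose'
dom0_mem_mb=1024
boot_line="${xen_boot_append} dom0_mem=${dom0_mem_mb}M"
echo "${boot_line}"
```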

I've done a repro of the failed test on elbling0 with dom0_mem set to
1GB and it seems to prevent the issue; the flight is 152111.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
 make-flight | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/make-flight b/make-flight
index b8942c1c..85559c68 100755
--- a/make-flight
+++ b/make-flight
@@ -903,7 +903,7 @@ test_matrix_do_one () {
       job_create_test test-$xenarch$kern-$dom0arch-dom0pvh-xl-$cpuvendor \
                 test-debian xl $xenarch $dom0arch $debian_runvars \
                 all_hostflags=$most_hostflags,hvm-$cpuvendor,iommu \
-                xen_boot_append='dom0=pvh,verbose'
+                xen_boot_append='dom0=pvh,verbose' dom0_mem=1024
 
     done
 
-- 
2.27.0



From xen-devel-bounces@lists.xenproject.org Wed Jul 22 15:11:35 2020
From: Ian Jackson <ian.jackson@citrix.com>
Message-ID: <24344.22297.957336.615021@mariner.uk.xensource.com>
Date: Wed, 22 Jul 2020 16:11:21 +0100
To: Roger Pau Monne <roger.pau@citrix.com>
Subject: Re: [osstest PATCH] dom0pvh: assign 1GB of memory to PVH dom0
In-Reply-To: <20200722150416.36426-1-roger.pau@citrix.com>
References: <20200722150416.36426-1-roger.pau@citrix.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>

Roger Pau Monne writes ("[osstest PATCH] dom0pvh: assign 1GB of memory to PVH dom0"):
> Current tests use 512MB of memory for dom0, but that's too low for a
> PVH dom0 on some hosts and will cause errors because memory is
> ballooned out in order to obtain physical memory ranges to map foreign
> pages.
> 
> Using ballooned out pages for foreign mappings also doesn't seem to
> work properly with the current Linux kernel version, so increase the
> memory assigned to dom0 to 1GB for PVH dom0 tests. We should see about
> reverting this when using ballooned pages is fixed.
> 
> The runvar diff is:
> 
> +test-amd64-amd64-dom0pvh-xl-amd   dom0_mem 1024
> +test-amd64-amd64-dom0pvh-xl-intel dom0_mem 1024
> 
> I've done a repro of the failed test on elbling0 with dom0_mem set to
> 1GB and it seems to prevent the issue, the flight is 152111.
> 
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>

Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

And queued.

Ian.


From xen-devel-bounces@lists.xenproject.org Wed Jul 22 15:16:16 2020
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Subject: [PATCH] x86/hvm: Clean up track_dirty_vram() calltree
Date: Wed, 22 Jul 2020 16:15:48 +0100
Message-ID: <20200722151548.4000-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Tim Deegan <tim@xen.org>,
 Wei Liu <wl@xen.org>, Jan Beulich <JBeulich@suse.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>

 * Rename nr to nr_frames.  A plain 'nr' is confusing to follow in the
   lower levels.
 * Use DIV_ROUND_UP() rather than open-coding it in several different ways.
 * The hypercall input is capped at uint32_t, so there is no need for
   nr_frames to be unsigned long in the lower levels.

No functional change.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Wei Liu <wl@xen.org>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Tim Deegan <tim@xen.org>
---
 xen/arch/x86/hvm/dm.c        | 13 +++++++------
 xen/arch/x86/mm/hap/hap.c    | 21 +++++++++++----------
 xen/arch/x86/mm/shadow/hvm.c | 16 ++++++++--------
 xen/include/asm-x86/hap.h    |  2 +-
 xen/include/asm-x86/shadow.h |  2 +-
 5 files changed, 28 insertions(+), 26 deletions(-)

diff --git a/xen/arch/x86/hvm/dm.c b/xen/arch/x86/hvm/dm.c
index e3f845165d..9930d68860 100644
--- a/xen/arch/x86/hvm/dm.c
+++ b/xen/arch/x86/hvm/dm.c
@@ -62,9 +62,10 @@ static bool _raw_copy_from_guest_buf_offset(void *dst,
                                     sizeof(dst))
 
 static int track_dirty_vram(struct domain *d, xen_pfn_t first_pfn,
-                            unsigned int nr, const struct xen_dm_op_buf *buf)
+                            unsigned int nr_frames,
+                            const struct xen_dm_op_buf *buf)
 {
-    if ( nr > (GB(1) >> PAGE_SHIFT) )
+    if ( nr_frames > (GB(1) >> PAGE_SHIFT) )
         return -EINVAL;
 
     if ( d->is_dying )
@@ -73,12 +74,12 @@ static int track_dirty_vram(struct domain *d, xen_pfn_t first_pfn,
     if ( !d->max_vcpus || !d->vcpu[0] )
         return -EINVAL;
 
-    if ( ((nr + 7) / 8) > buf->size )
+    if ( DIV_ROUND_UP(nr_frames, BITS_PER_BYTE) > buf->size )
         return -EINVAL;
 
-    return shadow_mode_enabled(d) ?
-        shadow_track_dirty_vram(d, first_pfn, nr, buf->h) :
-        hap_track_dirty_vram(d, first_pfn, nr, buf->h);
+    return shadow_mode_enabled(d)
+        ? shadow_track_dirty_vram(d, first_pfn, nr_frames, buf->h)
+        :    hap_track_dirty_vram(d, first_pfn, nr_frames, buf->h);
 }
 
 static int set_pci_intx_level(struct domain *d, uint16_t domain,
diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
index 7f84d0c6ea..4eedd1a995 100644
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -58,16 +58,16 @@
 
 int hap_track_dirty_vram(struct domain *d,
                          unsigned long begin_pfn,
-                         unsigned long nr,
+                         unsigned int nr_frames,
                          XEN_GUEST_HANDLE(void) guest_dirty_bitmap)
 {
     long rc = 0;
     struct sh_dirty_vram *dirty_vram;
     uint8_t *dirty_bitmap = NULL;
 
-    if ( nr )
+    if ( nr_frames )
     {
-        int size = (nr + BITS_PER_BYTE - 1) / BITS_PER_BYTE;
+        unsigned int size = DIV_ROUND_UP(nr_frames, BITS_PER_BYTE);
 
         if ( !paging_mode_log_dirty(d) )
         {
@@ -97,13 +97,13 @@ int hap_track_dirty_vram(struct domain *d,
         }
 
         if ( begin_pfn != dirty_vram->begin_pfn ||
-             begin_pfn + nr != dirty_vram->end_pfn )
+             begin_pfn + nr_frames != dirty_vram->end_pfn )
         {
             unsigned long ostart = dirty_vram->begin_pfn;
             unsigned long oend = dirty_vram->end_pfn;
 
             dirty_vram->begin_pfn = begin_pfn;
-            dirty_vram->end_pfn = begin_pfn + nr;
+            dirty_vram->end_pfn = begin_pfn + nr_frames;
 
             paging_unlock(d);
 
@@ -115,7 +115,7 @@ int hap_track_dirty_vram(struct domain *d,
              * Switch vram to log dirty mode, either by setting l1e entries of
              * P2M table to be read-only, or via hardware-assisted log-dirty.
              */
-            p2m_change_type_range(d, begin_pfn, begin_pfn + nr,
+            p2m_change_type_range(d, begin_pfn, begin_pfn + nr_frames,
                                   p2m_ram_rw, p2m_ram_logdirty);
 
             guest_flush_tlb_mask(d, d->dirty_cpumask);
@@ -132,7 +132,7 @@ int hap_track_dirty_vram(struct domain *d,
             p2m_flush_hardware_cached_dirty(d);
 
             /* get the bitmap */
-            paging_log_dirty_range(d, begin_pfn, nr, dirty_bitmap);
+            paging_log_dirty_range(d, begin_pfn, nr_frames, dirty_bitmap);
 
             domain_unpause(d);
         }
@@ -153,14 +153,15 @@ int hap_track_dirty_vram(struct domain *d,
              * then stop tracking
              */
             begin_pfn = dirty_vram->begin_pfn;
-            nr = dirty_vram->end_pfn - dirty_vram->begin_pfn;
+            nr_frames = dirty_vram->end_pfn - dirty_vram->begin_pfn;
             xfree(dirty_vram);
             d->arch.hvm.dirty_vram = NULL;
         }
 
         paging_unlock(d);
-        if ( nr )
-            p2m_change_type_range(d, begin_pfn, begin_pfn + nr,
+
+        if ( nr_frames )
+            p2m_change_type_range(d, begin_pfn, begin_pfn + nr_frames,
                                   p2m_ram_logdirty, p2m_ram_rw);
     }
 out:
diff --git a/xen/arch/x86/mm/shadow/hvm.c b/xen/arch/x86/mm/shadow/hvm.c
index c5da7a071c..b832272c10 100644
--- a/xen/arch/x86/mm/shadow/hvm.c
+++ b/xen/arch/x86/mm/shadow/hvm.c
@@ -695,12 +695,12 @@ static void sh_emulate_unmap_dest(struct vcpu *v, void *addr,
 /* VRAM dirty tracking support */
 int shadow_track_dirty_vram(struct domain *d,
                             unsigned long begin_pfn,
-                            unsigned long nr,
+                            unsigned int nr_frames,
                             XEN_GUEST_HANDLE(void) guest_dirty_bitmap)
 {
     int rc = 0;
-    unsigned long end_pfn = begin_pfn + nr;
-    unsigned long dirty_size = (nr + 7) / 8;
+    unsigned long end_pfn = begin_pfn + nr_frames;
+    unsigned int dirty_size = DIV_ROUND_UP(nr_frames, BITS_PER_BYTE);
     int flush_tlb = 0;
     unsigned long i;
     p2m_type_t t;
@@ -717,7 +717,7 @@ int shadow_track_dirty_vram(struct domain *d,
 
     dirty_vram = d->arch.hvm.dirty_vram;
 
-    if ( dirty_vram && (!nr ||
+    if ( dirty_vram && (!nr_frames ||
              ( begin_pfn != dirty_vram->begin_pfn
             || end_pfn   != dirty_vram->end_pfn )) )
     {
@@ -729,7 +729,7 @@ int shadow_track_dirty_vram(struct domain *d,
         dirty_vram = d->arch.hvm.dirty_vram = NULL;
     }
 
-    if ( !nr )
+    if ( !nr_frames )
         goto out;
 
     dirty_bitmap = vzalloc(dirty_size);
@@ -759,9 +759,9 @@ int shadow_track_dirty_vram(struct domain *d,
         dirty_vram->end_pfn = end_pfn;
         d->arch.hvm.dirty_vram = dirty_vram;
 
-        if ( (dirty_vram->sl1ma = xmalloc_array(paddr_t, nr)) == NULL )
+        if ( (dirty_vram->sl1ma = xmalloc_array(paddr_t, nr_frames)) == NULL )
             goto out_dirty_vram;
-        memset(dirty_vram->sl1ma, ~0, sizeof(paddr_t) * nr);
+        memset(dirty_vram->sl1ma, ~0, sizeof(paddr_t) * nr_frames);
 
         if ( (dirty_vram->dirty_bitmap = xzalloc_array(uint8_t, dirty_size)) == NULL )
             goto out_sl1ma;
@@ -780,7 +780,7 @@ int shadow_track_dirty_vram(struct domain *d,
         void *map_sl1p = NULL;
 
         /* Iterate over VRAM to track dirty bits. */
-        for ( i = 0; i < nr; i++ )
+        for ( i = 0; i < nr_frames; i++ )
         {
             mfn_t mfn = get_gfn_query_unlocked(d, begin_pfn + i, &t);
             struct page_info *page;
diff --git a/xen/include/asm-x86/hap.h b/xen/include/asm-x86/hap.h
index faf856913a..d489df3812 100644
--- a/xen/include/asm-x86/hap.h
+++ b/xen/include/asm-x86/hap.h
@@ -40,7 +40,7 @@ void  hap_teardown(struct domain *d, bool *preempted);
 void  hap_vcpu_init(struct vcpu *v);
 int   hap_track_dirty_vram(struct domain *d,
                            unsigned long begin_pfn,
-                           unsigned long nr,
+                           unsigned int nr_frames,
                            XEN_GUEST_HANDLE(void) dirty_bitmap);
 
 extern const struct paging_mode *hap_paging_get_mode(struct vcpu *);
diff --git a/xen/include/asm-x86/shadow.h b/xen/include/asm-x86/shadow.h
index 224d1bc2f9..76e47f257f 100644
--- a/xen/include/asm-x86/shadow.h
+++ b/xen/include/asm-x86/shadow.h
@@ -64,7 +64,7 @@ int shadow_enable(struct domain *d, u32 mode);
 /* Enable VRAM dirty bit tracking. */
 int shadow_track_dirty_vram(struct domain *d,
                             unsigned long first_pfn,
-                            unsigned long nr,
+                            unsigned int nr_frames,
                             XEN_GUEST_HANDLE(void) dirty_bitmap);
 
 /* Handler for shadow control ops: operations from user-space to enable
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Wed Jul 22 15:23:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jul 2020 15:23:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyGaG-0004to-3c; Wed, 22 Jul 2020 15:23:04 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=q/Qh=BB=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jyGaE-0004tg-Mg
 for xen-devel@lists.xenproject.org; Wed, 22 Jul 2020 15:23:02 +0000
X-Inumbo-ID: 3accb538-cc2f-11ea-8678-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3accb538-cc2f-11ea-8678-bc764e2007e4;
 Wed, 22 Jul 2020 15:23:01 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=gYtkkmowsYUCPloFbUDBM23P4346DWfJlzdBRRKfmGA=; b=o2+A7GWjyuh5d6vevt1LGteDV
 SXpsiReA02vdT7pBSU1h5QEXp+NYJNFAybt/9GSF7Dzfhnm9Ny/b9TfkONRRpLglTWLK9ct3AbBrz
 smV+Vmfs+TsQcBO5AGx88B1YX+dSWv7cgUDjgoQaF4yGyKtHI58Bp+WCetKmiQEzKZcRg=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jyGaC-0001UU-HT; Wed, 22 Jul 2020 15:23:00 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jyGaC-0002H1-8U; Wed, 22 Jul 2020 15:23:00 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jyGaC-0004Jm-7u; Wed, 22 Jul 2020 15:23:00 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-152094-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 152094: regressions - FAIL
X-Osstest-Failures: libvirt:build-i386-libvirt:libvirt-build:fail:regression
 libvirt:build-arm64-libvirt:libvirt-build:fail:regression
 libvirt:build-amd64-libvirt:libvirt-build:fail:regression
 libvirt:build-armhf-libvirt:libvirt-build:fail:regression
 libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
X-Osstest-Versions-This: libvirt=6f59749e4e1ecaef0f7c252e5a57f5cd569f7ea3
X-Osstest-Versions-That: libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 22 Jul 2020 15:23:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 152094 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/152094/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              6f59749e4e1ecaef0f7c252e5a57f5cd569f7ea3
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z   12 days
Failing since        151818  2020-07-11 04:18:52 Z   11 days   12 attempts
Testing same since   152094  2020-07-22 04:18:47 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrea Bolognani <abologna@redhat.com>
  Bastien Orivel <bastien.orivel@diateam.net>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Jianan Gao <jgao@redhat.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  Ján Tomko <jtomko@redhat.com>
  Laine Stump <laine@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Michal Privoznik <mprivozn@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Ryan Schmidt <git@ryandesign.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Yi Wang <wang.yi59@zte.com.cn>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 2331 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Jul 22 15:29:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jul 2020 15:29:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyGgE-00055Y-Ri; Wed, 22 Jul 2020 15:29:14 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=bhkO=BB=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jyGgE-00055T-DJ
 for xen-devel@lists.xenproject.org; Wed, 22 Jul 2020 15:29:14 +0000
X-Inumbo-ID: 18996172-cc30-11ea-8679-bc764e2007e4
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 18996172-cc30-11ea-8679-bc764e2007e4;
 Wed, 22 Jul 2020 15:29:13 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595431753;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:content-transfer-encoding:in-reply-to;
 bh=fUXTGdT3jYCDdHtQHnKQ7lAjSUZB0Kbp4aNUH2MgNvg=;
 b=cXDT29ZH1j6nlZNmGVoCQhHk1akbbkMaIlQTzoLHOpsCUaP9FRLbL8uC
 jljXtJ0jEac2Qp7Rz9q6/Oo33iZr4f2fWzTr+48vJeRSklK68UoF6VahX
 Dk2Pi2Rz7hln1LzT4ePAkCmx7stVV3xV8Nmkrh+68MBkH0ye+Sn7eenzu k=;
Authentication-Results: esa5.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: tj7FnE76nwyjkNHNvDGhg9FUx3ZCH2j3UR70ozDh3ERe4jA0AqkXMyukwhajfoCLbkjFth+pvO
 zx7o8At4ERfr7cCHnAf/cwq+DqcsaGOYNyZ4F1bmw9Y735xyHXqMhctPqvAc6HohAl+AQCMfLT
 llNpcf9Ahi4emw8NQydokfB4fxDyDIeG16yeh6ICgGWZn2kDNfxpNJTGsEAykaPOxgssvxv7k5
 Jn0hswRa4lxkP1JdEacbpqETi6N3j3+7pt+lu4ES98Xwme9pAdWLJYrcu3ARpIo8WMv/5PGRp6
 kC0=
X-SBRS: 2.7
X-MesageID: 23150283
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,383,1589256000"; d="scan'208";a="23150283"
Date: Wed, 22 Jul 2020 17:29:06 +0200
From: Roger Pau Monné <roger.pau@citrix.com>
To: Ian Jackson <ian.jackson@citrix.com>
Subject: Re: [osstest PATCH] dom0pvh: assign 1GB of memory to PVH dom0
Message-ID: <20200722152906.GZ7191@Air-de-Roger>
References: <20200722150416.36426-1-roger.pau@citrix.com>
 <24344.22297.957336.615021@mariner.uk.xensource.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <24344.22297.957336.615021@mariner.uk.xensource.com>
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Wed, Jul 22, 2020 at 04:11:21PM +0100, Ian Jackson wrote:
> Roger Pau Monne writes ("[osstest PATCH] dom0pvh: assign 1GB of memory to PVH dom0"):
> > Current tests use 512MB of memory for dom0, but that's too low for a
> > PVH dom0 on some hosts and will cause errors because memory is
> > ballooned out in order to obtain physical memory ranges to map foreign
> > pages.
> > 
> > Using ballooned out pages for foreign mappings also doesn't seem to
> > work properly with the current Linux kernel version, so increase the
> > memory assigned to dom0 to 1GB for PVH dom0 tests. We should see about
> > reverting this when using ballooned pages is fixed.
> > 
> > The runvar diff is:
> > 
> > +test-amd64-amd64-dom0pvh-xl-amd   dom0_mem 1024
> > +test-amd64-amd64-dom0pvh-xl-intel dom0_mem 1024
> > 
> > I've done a repro of the failed test on elbling0 with dom0_mem set to
> > 1GB and it seems to prevent the issue, the flight is 152111.
> > 
> > Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> 
> Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
> 
> And queued.

Thanks! Forgot to add that I've checked x86 hosts and they all have at
least 8GB of RAM, so using 1GB for dom0 should be fine, as I don't
think osstest runs guests close to 7GB of RAM.

Roger.


From xen-devel-bounces@lists.xenproject.org Wed Jul 22 15:33:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jul 2020 15:33:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyGk0-0005vj-Gc; Wed, 22 Jul 2020 15:33:08 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=xSrN=BB=suse.com=dfaggioli@srs-us1.protection.inumbo.net>)
 id 1jyGjz-0005ve-Ek
 for xen-devel@lists.xenproject.org; Wed, 22 Jul 2020 15:33:07 +0000
X-Inumbo-ID: a36e6d93-cc30-11ea-867b-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a36e6d93-cc30-11ea-867b-bc764e2007e4;
 Wed, 22 Jul 2020 15:33:06 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 36D89AEBB;
 Wed, 22 Jul 2020 15:33:13 +0000 (UTC)
Message-ID: <d08102ed75c078fa189c6cd0067429026eb489ab.camel@suse.com>
Subject: Re: [PATCH v2 0/7] xen: credit2: limit the number of CPUs per runqueue
From: Dario Faggioli <dfaggioli@suse.com>
To: Jan Beulich <jbeulich@suse.com>
Date: Wed, 22 Jul 2020 17:33:04 +0200
In-Reply-To: <fe24d520-7ef8-7dd7-6aa8-64df3a55b0bb@suse.com>
References: <159070133878.12060.13318432301910522647.stgit@Palanthas>
 <fe24d520-7ef8-7dd7-6aa8-64df3a55b0bb@suse.com>
Content-Type: multipart/signed; micalg="pgp-sha256";
 protocol="application/pgp-signature"; boundary="=-lsm84saEy5QEvbtzOkhP"
User-Agent: Evolution 3.36.3 (by Flathub.org) 
MIME-Version: 1.0
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>


--=-lsm84saEy5QEvbtzOkhP
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Tue, 2020-07-21 at 14:08 +0200, Jan Beulich wrote:
> On 28.05.2020 23:29, Dario Faggioli wrote:
> > Dario Faggioli (7):
> >       xen: credit2: factor cpu to runqueue matching in a function
> >       xen: credit2: factor runqueue initialization in its own
> > function.
> >       xen: cpupool: add a back-pointer from a scheduler to its pool
> >       xen: credit2: limit the max number of CPUs in a runqueue
> >       xen: credit2: compute cpus per-runqueue more dynamically.
> >       cpupool: create an the 'cpupool sync' infrastructure
> >       xen: credit2: rebalance the number of CPUs in the scheduler
> > runqueues
>
> I still have the last three patches here as well as "xen: credit2:
> document that min_rqd is valid and ok to use" in my "waiting for
> sufficient acks" folder.
>
Ok.

> Would you mind indicating if they should
> stay there (and you will take care of pinging or whatever is
> needed), or whether I may drop them (and you'll eventually re-
> submit)?
>
So, about the last 3 patches of this series: their status is indeed
"waiting for sufficient acks", in the sense that they were not looked
at because the code freeze was imminent back then, and it would still
be fine for people to review them. That said, I'm happy for you to
drop them from your queue.

I will take care of resending just them in a new series and everything.
And thanks.

For the min_rqd one... That should be quite easy, in theory, so let me
ping the thread right now. Especially considering that I just looked
back at it, and noticed that I forgot to Cc George in the first place!
:-O

Regards
-- 
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUSE https://www.suse.com/
-------------------------------------------------------------------
<<This happens because _I_ choose it to happen!>> (Raistlin Majere)

--=-lsm84saEy5QEvbtzOkhP
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----

iQIzBAABCAAdFiEES5ssOj3Vhr0WPnOLFkJ4iaW4c+4FAl8YXDAACgkQFkJ4iaW4
c+4L6hAAjahFMb3/gHlWZaqqv2pshngpdpRgbI0i9LQOJKHvnxaTb4zeajxnMYs/
A1OJyAJQpKtYSHOVLc+t9slHpkBRjbw4JsmIkXbxwBA0CXH0aq2pmlWtOR4GrIY7
MHqgPrf/kKOA3ks1kHcY5PGplTDg0E0ai3l1AMdnlXon1+6xvEpo3EUSYAmqf737
YgTphdQCI3dqT+61+clwSb9/kmjac7UYR/z1C97ESEMDsNkJsHJivlmKKkf2HJDN
4LaVHsg/7qBeUYF+DOFIRuqaXGaeXfaEUFDJFuXWXl/uYYd6DFVYcVDHQ7GNj+yF
ylnsmZQT/jGiPlwx5u3k1+CR82DpGNtXx8ji4h6MsdBBRQecTUn0F1xkcJpYCbT7
mPmU9pPJi5EotVqGixeLBjo53Kdt8PNDxirSMJprl4k2l2bGf9WrzkfG9XN9bCxd
7yP4V1rk52T9A/MwhK12666+Dv7Un3VlpL+6MbaKJxnFhdk8NF54yr9O7hRY/FaV
ya3495Sbk8ipiTjFXyR4BTePjHuXDrUcZkAf8OFvD2dobUSDPnxiEKtrKkwWgSQk
Qx7+fC89/KmxaZOxQS2XT54fiCuZBF2gykG6nuvgmSbAiSPp739EM1ApWYfxn3iY
AMP5vI3jU6s1xAx7lvWiw+oU2nDmn+anUl1rO/3kDtS6clUeSjA=
=h63m
-----END PGP SIGNATURE-----

--=-lsm84saEy5QEvbtzOkhP--



From xen-devel-bounces@lists.xenproject.org Wed Jul 22 15:40:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jul 2020 15:40:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyGqx-0006lO-8i; Wed, 22 Jul 2020 15:40:19 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=xSrN=BB=suse.com=dfaggioli@srs-us1.protection.inumbo.net>)
 id 1jyGqw-0006lJ-74
 for xen-devel@lists.xenproject.org; Wed, 22 Jul 2020 15:40:18 +0000
X-Inumbo-ID: a3d971f4-cc31-11ea-a1d9-12813bfff9fa
Received: from de-smtp-delivery-102.mimecast.com (unknown [51.163.158.102])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a3d971f4-cc31-11ea-a1d9-12813bfff9fa;
 Wed, 22 Jul 2020 15:40:17 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com;
 s=mimecast20200619; t=1595432416;
 h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
 in-reply-to:in-reply-to:references:references;
 bh=NMIl8jnlh+N2cCXY8SNVvjBNlGZU9fZNkn1UDio7Vh0=;
 b=ESRZqq3KnOOjr5nmEtOG8sBIfb2mlqiW++GbnDGlGLjqjKGHejgSOd6uFte/PKF3o+cXSp
 p9fHxXAQZ3dIQQqBK+Mej8oUpwK5TONHVqT8sQGWfwM5x2gFTT1twDTkTjfiZ2LYapm7Nr
 RfMQoPpQYwqdg7Ul4WXawi6jUDg4W1k=
Received: from EUR01-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur01lp2052.outbound.protection.outlook.com [104.47.0.52]) (Using
 TLS) by relay.mimecast.com with ESMTP id
 de-mta-19-Oy6QlQZ_MC22eWWqpIMyEQ-2; Wed, 22 Jul 2020 17:40:14 +0200
X-MC-Unique: Oy6QlQZ_MC22eWWqpIMyEQ-2
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=kMJ5frpTDFfEhhas2sO0Slcxp+S0TebwJVTnlXDRvmYRpufPe5X8jAH+2ULJJpMT9UkGKXqDs0kt5wAax9uOXJhzhVtot+i8Jn1ulhCR559LpjVt4uMn39ghwQrtJPgGMSZnbjzcAtMkGQcyIMEXAv8Rjs5nM0zXZM2hFsYe4cBGR9uP0h7LfrnOhPXvEWRCDsrAbtJWuWi1USr97u0FNlhCex3NfsW9mYeRsPhh9JNlPR7jT86Bvd+jEYpnduSbRJQHk8h/K3eF2zd1YblL4pwhLlAoHRi5E+6jQ+qQ9F37sW7dwJKHdhBoxg9YnfJvEiIuMgxMJFpzHMqZe6OA/A==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; 
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=NMIl8jnlh+N2cCXY8SNVvjBNlGZU9fZNkn1UDio7Vh0=;
 b=RKNA1jZ7fVKKK1heeD+zdGfd8AiEAY/crBE5JuScHKKS16PhcxvPljQr+bhV1s0BsbptJXT8DvtesAeU9E1LN4JtJ0PQlopD+s1WtP/jm97saWaKa0lwbaOedL7i7jJhc6Q/jv9+nGOHINTjHUnq7ZklQpSW/z5hu3kHFW6zLeuQH/TVbC7nPpVgQvvfVNcW5gXn31ioDcDDVYa53F5BpFiNgdbk0EZUaROrESA6x8omIwZYZd6ztdI3XRlGJDr4MOaJLS19GzHtmzmmOF5hq0WoPkluhPtzrS6znl0N5ITjNj4ofycYEVKQIKMqZG3VWL2r+1vOMTteDlz+SB9bkg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Received: from DB8PR04MB5834.eurprd04.prod.outlook.com (2603:10a6:10:af::12)
 by DB7PR04MB5276.eurprd04.prod.outlook.com (2603:10a6:10:15::11) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3216.21; Wed, 22 Jul
 2020 15:40:11 +0000
Received: from DB8PR04MB5834.eurprd04.prod.outlook.com
 ([fe80::7d0f:f3c4:6a9b:4799]) by DB8PR04MB5834.eurprd04.prod.outlook.com
 ([fe80::7d0f:f3c4:6a9b:4799%5]) with mapi id 15.20.3195.026; Wed, 22 Jul 2020
 15:40:11 +0000
From: Dario Faggioli <dfaggioli@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH] xen: credit2: document that min_rqd is valid and ok to use
Thread-Topic: [PATCH] xen: credit2: document that min_rqd is valid and ok to
 use
Thread-Index: AQHWYD5i3CiuwX6Cr0uQclh88PJ90g==
Date: Wed, 22 Jul 2020 15:40:10 +0000
Message-ID: <1ec2e50b96263f6d4f4fd0d8de66c236b35c3101.camel@suse.com>
References: <158524252335.30595.3422322089286433323.stgit@Palanthas>
In-Reply-To: <158524252335.30595.3422322089286433323.stgit@Palanthas>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: yes
X-MS-TNEF-Correlator: 
user-agent: Evolution 3.36.3 (by Flathub.org) 
authentication-results: lists.xenproject.org; dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=none action=none
 header.from=suse.com;
x-originating-ip: [89.186.78.87]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 6949aa91-ad62-425c-0c04-08d82e55852c
x-ms-traffictypediagnostic: DB7PR04MB5276:
x-ld-processed: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba,ExtFwd
x-ms-exchange-transport-forked: True
x-microsoft-antispam-prvs: <DB7PR04MB52767D2DF02CBB776FE071BAC5790@DB7PR04MB5276.eurprd04.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:3383;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info: qnN4lKK70AgwT3PPYq/MtpaE+2YTKd7dyd0voSZ3m9DoIKPet3iHK12R9NjvsfVgaDLSMsC0HxPHgtmPJklB2GjLw6ebkq6vNn2lOASgbpLGDs04uhowBTcYBZt+u40WH9Sq4BkxSCYXYU+z6+iP8DLFrDbSiQxk79CRdMUOAznHlndrxeDlQDj+T+I9HHD9ddcOQg/0TGpDhzBjYlPTKy9I68Yei9ZrKBtwtsMqUkLDZSEZ44cxEn//vc2eZOfq4uL+fPWxh3YXAOcNCwFxlWbTOA1KCy+PEJw9x1ma1bn87kYVRzPevkY8FkD8wRbBm7pweW7ewEmDBAPIAdenxZw2XQGo6toc+gnMDM3ELDhk+jbmCH5UHYKC+ynPdk33+hmZtrOmrZyHdmHNEQGSVw==
x-forefront-antispam-report: CIP:255.255.255.255; CTRY:; LANG:en; SCL:1; SRV:;
 IPV:NLI; SFV:NSPM; H:DB8PR04MB5834.eurprd04.prod.outlook.com; PTR:; CAT:NONE;
 SFTY:;
 SFS:(39860400002)(136003)(376002)(366004)(396003)(346002)(2616005)(6506007)(26005)(8676002)(99936003)(36756003)(71200400001)(86362001)(4326008)(186003)(8936002)(4744005)(66446008)(64756008)(66556008)(6486002)(66476007)(66946007)(91956017)(6512007)(76116006)(5660300002)(966005)(478600001)(66616009)(2906002)(54906003)(316002)(6916009);
 DIR:OUT; SFP:1101; 
x-ms-exchange-antispam-messagedata: e0YO2Pvu57JOzgMYvjN6uSGkmig4XTURadd7qSpSFQRphKelbuOgRB+zsQqzhisoRu49cyxPN54mAWLzCAEPcjpCh3uAIPLaoI9kIH07mUp+THMBl/WTq2Jbru0+hMvMQ/3bjfmNOWlV/LV1FePjOmYekH2D1Hoc/NKQLhTq1mKCYDIncLcyAYGX+2VwH314p36+vxJMyZ8NVGP5QH3/PT+QaE5iEeODLa05LNHKjzkDB5Qoe8wmiQ0ua9zaPhjkbZp7JITZijkFv1kVpqtlsATAtkxD1Z4F2vqrgyTgBsdWq1yH4wN0bA30yRcTZgchxGMqndg5ZsalaBWfV++KTqK5Rf5GttvXIfBi3hj0RF4SNxlRz/hCthPHsE2x7uH4hM7RaKDSzED7f2LkE3FiRxJQMWbfVUSNVRMLk/9L3V6ombWnnqsv4xD2pgQs+IzsoPkynoMtVHPfSKO9liQoh6yxPrpD9RO2MdZXu4WwaJQ=
Content-Type: multipart/signed; micalg="pgp-sha256";
 protocol="application/pgp-signature"; boundary="=-4pbxLaLnMipw4hZnQyCG"
MIME-Version: 1.0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: DB8PR04MB5834.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 6949aa91-ad62-425c-0c04-08d82e55852c
X-MS-Exchange-CrossTenant-originalarrivaltime: 22 Jul 2020 15:40:11.2700 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: AcZg0FIVEGSae5AUUYRgjstZhfasAZTn3j61jSVzpjuLIfxVuqC3OApWCvobrfuwJ/sjhyv2PFxPjPz0WPn89w==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB7PR04MB5276
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <JGross@suse.com>,
 "george.dunlap@citrix.com" <george.dunlap@citrix.com>,
 Jan Beulich <JBeulich@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

--=-4pbxLaLnMipw4hZnQyCG
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Thu, 2020-03-26 at 18:08 +0100, Dario Faggioli wrote:
> Code is a bit involved, and it is not easy to tell that min_rqd,
> inside csched2_res_pick(), is actually pointing to a runqueue when
> it is dereferenced.
>=20
> Add a comment and an ASSERT() for that.
>=20
> Suggested-by: Jan Beulich <jbeulich@suse.com>
> Signed-off-by: Dario Faggioli <dfaggioli@suse.com>
>
Ping.

Actually, for George, this is more a:

<<Hey, it seems I forgot to Cc you when I sent this. Apologies for
that!>> :-)

Regards
--=20
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUSE https://www.suse.com/
-------------------------------------------------------------------
<<This happens because _I_ choose it to happen!>> (Raistlin Majere)

--=-4pbxLaLnMipw4hZnQyCG
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----

iQIzBAABCAAdFiEES5ssOj3Vhr0WPnOLFkJ4iaW4c+4FAl8YXdkACgkQFkJ4iaW4
c+5WHQ/8CDf0i7iJNjNM/9tqQpMMIGQN4Wvy8ycrHVMsoXpHb+UwTRceBslsd3UA
mGOKTFA39tvDdB8TgrpCNOQbeGljn+dh6gvN6FeE9M6URM3hfQE2oDaJJRtGMXlI
fF6VEL0ElA1N2z/7zL+yKDNQI6vIyikMOx4BAfoTeC7TdXFWKtanlwFv6S75dW/a
lt0MkTXUrrDJ1NDc1nQPcuKJdq/k/RYiFxSjEt0TXMRJwhz9qIPEkCsW2tXMOWW4
J8dqfBf6M6PGMYKe2baoYXY88+S7u9INpLOVhBBtML/cqov2gao27JxiZJovwEV8
NjFbp9LlpRA12swWMEK0lurnT/4EuQ+PdOcL1lE1X1602USbIl1aQLZmZMwARS6r
4tNcZ7is/3D6CAnAzJwUwAcA7wCtZTQfWAHMZ2BnLcQ/toH303ue8w4V830PrvCa
l0zA/+zLMF6PAM/NadRsdG97OOoodtpHzgp6oDFotbA1iulgcHFxc18Z7vT6zIul
2mktWkpRlHCMh7ZQT/J0F3crwar8OK53q9UEchVNqO0dD5sYCr16Gllb4feiZ9SO
v/3BIg17fUoTVIZpGcK/VJuH1gyHlviQIiWxCiPc0RbZkEGtdITH+LB2PYNx6V+r
T3rfejZJvoM67dZFD8h6e2rE5wu1N8MMT2/vhGirlVNSx1+n1fc=
=gREJ
-----END PGP SIGNATURE-----

--=-4pbxLaLnMipw4hZnQyCG--



From xen-devel-bounces@lists.xenproject.org Wed Jul 22 16:03:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jul 2020 16:03:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyHD0-0000iN-6V; Wed, 22 Jul 2020 16:03:06 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=mY6V=BB=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jyHCy-0000iH-G1
 for xen-devel@lists.xenproject.org; Wed, 22 Jul 2020 16:03:04 +0000
X-Inumbo-ID: d28c77f0-cc34-11ea-8680-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d28c77f0-cc34-11ea-8680-bc764e2007e4;
 Wed, 22 Jul 2020 16:03:03 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 426E3AC83;
 Wed, 22 Jul 2020 16:03:10 +0000 (UTC)
Subject: Re: [PATCH] x86/svm: Misc coding style corrections
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <20200722143929.14191-1-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <31de1112-c1ec-bbd6-691d-d992792740c9@suse.com>
Date: Wed, 22 Jul 2020 18:03:03 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20200722143929.14191-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 22.07.2020 16:39, Andrew Cooper wrote:
> No functional change.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Acked-by: Jan Beulich <jbeulich@suse.com>



From xen-devel-bounces@lists.xenproject.org Wed Jul 22 16:13:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jul 2020 16:13:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyHMf-0001dA-6L; Wed, 22 Jul 2020 16:13:05 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=mY6V=BB=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jyHMd-0001d2-Tw
 for xen-devel@lists.xenproject.org; Wed, 22 Jul 2020 16:13:03 +0000
X-Inumbo-ID: 36ad1540-cc36-11ea-a1de-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 36ad1540-cc36-11ea-a1de-12813bfff9fa;
 Wed, 22 Jul 2020 16:13:01 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id BC3ECABD2;
 Wed, 22 Jul 2020 16:13:07 +0000 (UTC)
Subject: Re: [PATCH] x86/hvm: Clean up track_dirty_vram() calltree
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <20200722151548.4000-1-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <07ecb7dd-c823-0c6a-2bcd-7fc22471af7a@suse.com>
Date: Wed, 22 Jul 2020 18:13:00 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20200722151548.4000-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Tim Deegan <tim@xen.org>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 22.07.2020 17:15, Andrew Cooper wrote:
>  * Rename nr to nr_frames.  A plain 'nr' is confusing to follow in the
>    lower levels.
>  * Use DIV_ROUND_UP() rather than opencoding it in several different ways
>  * The hypercall input is capped at uint32_t, so there is no need for
>    nr_frames to be unsigned long in the lower levels.
> 
> No functional change.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>

I'd like to note though that ...

> --- a/xen/arch/x86/mm/hap/hap.c
> +++ b/xen/arch/x86/mm/hap/hap.c
> @@ -58,16 +58,16 @@
>  
>  int hap_track_dirty_vram(struct domain *d,
>                           unsigned long begin_pfn,
> -                         unsigned long nr,
> +                         unsigned int nr_frames,
>                           XEN_GUEST_HANDLE(void) guest_dirty_bitmap)
>  {
>      long rc = 0;
>      struct sh_dirty_vram *dirty_vram;
>      uint8_t *dirty_bitmap = NULL;
>  
> -    if ( nr )
> +    if ( nr_frames )
>      {
> -        int size = (nr + BITS_PER_BYTE - 1) / BITS_PER_BYTE;
> +        unsigned int size = DIV_ROUND_UP(nr_frames, BITS_PER_BYTE);

... with the change from long to int this construct is no longer
correct for the (admittedly absurd) case of a hypercall input in the
range [0xfffffff9, 0xffffffff]: the addition inside DIV_ROUND_UP()
wraps around. We now fully depend on such inputs being properly
rejected by the top-level hypercall handler (which limits tracking to
1GB worth of space).

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jul 22 16:47:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jul 2020 16:47:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyHuE-0004Fu-1W; Wed, 22 Jul 2020 16:47:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=0kdp=BB=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1jyHuD-0004Fp-BJ
 for xen-devel@lists.xenproject.org; Wed, 22 Jul 2020 16:47:45 +0000
X-Inumbo-ID: 10a18b42-cc3b-11ea-8684-bc764e2007e4
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 10a18b42-cc3b-11ea-8684-bc764e2007e4;
 Wed, 22 Jul 2020 16:47:44 +0000 (UTC)
Received: from localhost (c-67-164-102-47.hsd1.ca.comcast.net [67.164.102.47])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256
 bits)) (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id B7525206C1;
 Wed, 22 Jul 2020 16:47:43 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
 s=default; t=1595436464;
 bh=fJ4H+s8bnXEuA7qbfhHl8boB7131WZ52qZ7G1ZDicJM=;
 h=Date:From:To:cc:Subject:In-Reply-To:References:From;
 b=TVk4EaBJE1p9iB5Rn/STCvaUhPYCnyeEcaTr9rcJFI5JtiWR43F9CJIszRpb40tZS
 OoZUu0fUFCCqIfMcS2sH8ACTy30eWojmmjl03VmxlqU+B5IMIi9m5f9VCfrDW/zkmI
 UT/q49pmSyA7Tb/f9l68/RKPsoToVZaXSQHRrslg=
Date: Wed, 22 Jul 2020 09:47:43 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Brian Woods <brian.woods@xilinx.com>
Subject: Re: [RFC v2 1/2] arm,smmu: switch to using iommu_fwspec functions
In-Reply-To: <1595390431-24805-2-git-send-email-brian.woods@xilinx.com>
Message-ID: <alpine.DEB.2.21.2007220938020.17562@sstabellini-ThinkPad-T480s>
References: <1595390431-24805-1-git-send-email-brian.woods@xilinx.com>
 <1595390431-24805-2-git-send-email-brian.woods@xilinx.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel <xen-devel@lists.xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Tue, 21 Jul 2020, Brian Woods wrote:
> Modify the smmu driver so that it uses the iommu_fwspec helper
> functions.  This means both ARM IOMMU drivers will use the
> iommu_fwspec helper functions.
> 
> Signed-off-by: Brian Woods <brian.woods@xilinx.com>

[...]

> @@ -1924,14 +1924,21 @@ static int arm_smmu_add_device(struct device *dev)
>  			ret = -ENOMEM;
>  			goto out_put_group;
>  		}
> +		cfg->fwspec = kzalloc(sizeof(struct iommu_fwspec), GFP_KERNEL);
> +		if (!cfg->fwspec) {
> +			kfree(cfg);
> +			ret = -ENOMEM;
> +			goto out_put_group;
> +		}
> +		iommu_fwspec_init(dev, smmu->dev);

Normally the fwspec structure is initialized in
xen/drivers/passthrough/device_tree.c:iommu_add_dt_device. However, here
we are trying to use it with the legacy bindings, which of course don't
get initialized in iommu_add_dt_device because they are handled
differently.

So I imagine this is the reason why we have to initialize iommu_fwspec
here independently from iommu_add_dt_device.

However, why are we allocating the struct iommu_fwspec twice?

We are calling kzalloc, then iommu_fwspec_init is calling xzalloc.

Similarly, we are storing the pointer to struct iommu_fwspec in
cfg->fwspec but actually there is already a pointer to struct
iommu_fwspec in struct device (the one set by dev_iommu_fwspec_set.)

Do we actually need both?


From xen-devel-bounces@lists.xenproject.org Wed Jul 22 16:53:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jul 2020 16:53:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyHzQ-00058H-NC; Wed, 22 Jul 2020 16:53:08 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=u0rb=BB=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jyHzO-00058B-Tq
 for xen-devel@lists.xenproject.org; Wed, 22 Jul 2020 16:53:06 +0000
X-Inumbo-ID: cfeed6d0-cc3b-11ea-a1e5-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id cfeed6d0-cc3b-11ea-a1e5-12813bfff9fa;
 Wed, 22 Jul 2020 16:53:05 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
 MIME-Version:Content-Type:Content-Transfer-Encoding:Content-ID:
 Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc
 :Resent-Message-ID:In-Reply-To:References:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=kFC2CwkBpCIOOYAIewqEA21Bqic7EPnr09gGzUQ+zKo=; b=zh5b3Eh5H7mVm64GK0S+/nExUZ
 p0SoLxv6JEug33D7AnCmJEEZOX7eefsqbVkOS8bZl8Mo+HFfZcMvmhnsJnVHhOG5dgpL6bOTHBe//
 T0rmURRS5NwqSU//5+82AKQ60uNCEDS3yRzQpfRl3QsmmH2DnZ2KhJYYHCV4pffpUR5s=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jyHzM-0003r7-Pe; Wed, 22 Jul 2020 16:53:04 +0000
Received: from 54-240-197-227.amazon.com ([54.240.197.227]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jyHzM-0007vp-Cq; Wed, 22 Jul 2020 16:53:04 +0000
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [PATCH] xen/x86: irq: Avoid a TOCTOU race in pirq_spin_lock_irq_desc()
Date: Wed, 22 Jul 2020 17:53:00 +0100
Message-Id: <20200722165300.22655-1-julien@xen.org>
X-Mailer: git-send-email 2.17.1
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: julien@xen.org, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Julien Grall <jgrall@amazon.com>,
 Jan Beulich <jbeulich@suse.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Julien Grall <jgrall@amazon.com>

Even if we assign pirq->arch.irq to a variable, the compiler is still
allowed to read pirq->arch.irq multiple times. This means that the value
checked may be different from the value used to get the desc.

Force the compiler to only do one read access by using read_atomic().

Signed-off-by: Julien Grall <jgrall@amazon.com>
---
 xen/arch/x86/irq.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/arch/x86/irq.c b/xen/arch/x86/irq.c
index a69937c840b9..25f2eb611692 100644
--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -1187,7 +1187,7 @@ struct irq_desc *pirq_spin_lock_irq_desc(
 
     for ( ; ; )
     {
-        int irq = pirq->arch.irq;
+        int irq = read_atomic(&pirq->arch.irq);
 
         if ( irq <= 0 )
             return NULL;
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Wed Jul 22 16:55:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jul 2020 16:55:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyI22-0005EX-5X; Wed, 22 Jul 2020 16:55:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=gpyo=BB=xen.org=paul@srs-us1.protection.inumbo.net>)
 id 1jyI20-0005EM-Q0
 for xen-devel@lists.xenproject.org; Wed, 22 Jul 2020 16:55:48 +0000
X-Inumbo-ID: 302a6353-cc3c-11ea-8684-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 302a6353-cc3c-11ea-8684-bc764e2007e4;
 Wed, 22 Jul 2020 16:55:48 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
 MIME-Version:Content-Type:Content-Transfer-Encoding:Content-ID:
 Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc
 :Resent-Message-ID:In-Reply-To:References:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=7N5eoqmd8x29AALJwHhphkKTisEuPHoh/xOtLC76bNk=; b=Sd0eE7QgxJa+9V+Ngzopu2WtWI
 HVgHlv0NtOIh0UvC65AUZ8AZE+lnDq0H+OLtBl7HliSLW8iBGmdFh2/tGIVaJRby3U8si8PxWe/Dz
 /H/sxsFJ6ScnWDiNeNhug7S58Au4A/FJilADAXJOOmUWH2O4FXwCWTzgkLqAFRRh9TgU=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1jyI1z-0003tc-Bw; Wed, 22 Jul 2020 16:55:47 +0000
Received: from host86-143-223-30.range86-143.btcentralplus.com
 ([86.143.223.30] helo=CBG-R90WXYV0.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1jyI1z-0000tn-3D; Wed, 22 Jul 2020 16:55:47 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [PATCH-for-4.14] SUPPORT.md: Set version and release/support dates
Date: Wed, 22 Jul 2020 17:55:44 +0100
Message-Id: <20200722165544.557-1-paul@xen.org>
X-Mailer: git-send-email 2.17.1
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Paul Durrant <pdurrant@amazon.com>, Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Paul Durrant <pdurrant@amazon.com>

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <george.dunlap@citrix.com>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Julien Grall <julien@xen.org>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Wei Liu <wl@xen.org>

Obviously this has my implicit Release-acked-by and is to be committed to
the staging-4.14 branch only.
---
 SUPPORT.md | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/SUPPORT.md b/SUPPORT.md
index efbcb26ddf..88a182ac31 100644
--- a/SUPPORT.md
+++ b/SUPPORT.md
@@ -9,10 +9,10 @@ for the definitions of the support status levels etc.
 
 # Release Support
 
-    Xen-Version: 4.14-rc
-    Initial-Release: n/a
-    Supported-Until: TBD
-    Security-Support-Until: Unreleased - not yet security-supported
+    Xen-Version: 4.14
+    Initial-Release: 2020-07-24
+    Supported-Until: 2022-01-24
+    Security-Support-Until: 2023-07-24
 
 Release Notes
 : <a href="https://wiki.xenproject.org/wiki/Xen_Project_4.14_Release_Notes">RN</a>
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Wed Jul 22 16:58:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jul 2020 16:58:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyI4k-0005O8-Q8; Wed, 22 Jul 2020 16:58:38 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=u0rb=BB=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jyI4k-0005O3-8l
 for xen-devel@lists.xenproject.org; Wed, 22 Jul 2020 16:58:38 +0000
X-Inumbo-ID: 9606fcc6-cc3c-11ea-8684-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9606fcc6-cc3c-11ea-8684-bc764e2007e4;
 Wed, 22 Jul 2020 16:58:37 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=udL5Yv2gUFU0tQQcIQ2s/epuPnsaDNAFz0eBRjhX+po=; b=31k8nW32I90lzWwA0RZcVvJM0E
 mG04UEP5Qm/hSP+sOyAje0OiGyTyyNZWFIBk63QAEIL88YOVUfTkX9+DTF90BXRJp9T6xOdyNlo+p
 iAnBMhVNgTWwMBMY7vltFhNpaAoCIN8LJHTtChJoKiHmqFejel9vOmuhtb2LR3gK2m3k=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jyI4g-0003yR-CE; Wed, 22 Jul 2020 16:58:34 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jyI4g-000127-3A; Wed, 22 Jul 2020 16:58:34 +0000
Subject: Re: [PATCH-for-4.14] SUPPORT.md: Set version and release/support dates
To: Paul Durrant <paul@xen.org>, xen-devel@lists.xenproject.org
References: <20200722165544.557-1-paul@xen.org>
From: Julien Grall <julien@xen.org>
Message-ID: <7d608d35-e373-07bf-81a4-16f3a4ee03c1@xen.org>
Date: Wed, 22 Jul 2020 17:58:31 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:68.0)
 Gecko/20100101 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20200722165544.557-1-paul@xen.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Paul Durrant <pdurrant@amazon.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>



On 22/07/2020 17:55, Paul Durrant wrote:
> From: Paul Durrant <pdurrant@amazon.com>
> 
> Signed-off-by: Paul Durrant <pdurrant@amazon.com>

Acked-by: Julien Grall <jgrall@amazon.com>

> ---
> Cc: Andrew Cooper <andrew.cooper3@citrix.com>
> Cc: George Dunlap <george.dunlap@citrix.com>
> Cc: Ian Jackson <ian.jackson@eu.citrix.com>
> Cc: Jan Beulich <jbeulich@suse.com>
> Cc: Julien Grall <julien@xen.org>
> Cc: Stefano Stabellini <sstabellini@kernel.org>
> Cc: Wei Liu <wl@xen.org>
> 
> Obviously this has my implicit Release-acked-by and is to be committed to
> the staging-4.14 branch only.

I will commit it.

> ---
>   SUPPORT.md | 8 ++++----
>   1 file changed, 4 insertions(+), 4 deletions(-)
> 
> diff --git a/SUPPORT.md b/SUPPORT.md
> index efbcb26ddf..88a182ac31 100644
> --- a/SUPPORT.md
> +++ b/SUPPORT.md
> @@ -9,10 +9,10 @@ for the definitions of the support status levels etc.
>   
>   # Release Support
>   
> -    Xen-Version: 4.14-rc
> -    Initial-Release: n/a
> -    Supported-Until: TBD
> -    Security-Support-Until: Unreleased - not yet security-supported
> +    Xen-Version: 4.14
> +    Initial-Release: 2020-07-24
> +    Supported-Until: 2022-01-24
> +    Security-Support-Until: 2023-07-24
>   
>   Release Notes
>   : <a href="https://wiki.xenproject.org/wiki/Xen_Project_4.14_Release_Notes">RN</a>
> 

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Jul 22 17:13:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jul 2020 17:13:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyIJI-00077J-57; Wed, 22 Jul 2020 17:13:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=j3fc=BB=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1jyIJG-00077D-It
 for xen-devel@lists.xenproject.org; Wed, 22 Jul 2020 17:13:38 +0000
X-Inumbo-ID: ae13d8aa-cc3e-11ea-8685-bc764e2007e4
Received: from mail-wr1-x432.google.com (unknown [2a00:1450:4864:20::432])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ae13d8aa-cc3e-11ea-8685-bc764e2007e4;
 Wed, 22 Jul 2020 17:13:37 +0000 (UTC)
Received: by mail-wr1-x432.google.com with SMTP id a15so2623637wrh.10
 for <xen-devel@lists.xenproject.org>; Wed, 22 Jul 2020 10:13:37 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
 :mime-version:content-transfer-encoding:content-language
 :thread-index; bh=eg+DRkQhY0ItKbPuycTxg6YkUMHgHt0fn6xg39BAEdg=;
 b=twHzVk8LsIAqzCP51yaQay2ZF3NwICO14rM8fMa05PI7taXnbJrrhAAQ8f8xVN3SWZ
 fBMO81qnufz/sf9cnibiOqQvyFWICkTjJuTLqS7/Op3C35chJx31oCKjAKnVSDDFXrVP
 AvRx4VQ4L6mWEYKl42yQ3vPWwMEhM91UqCYeNVaOWbplHB89CBg6IIOlzEnLIzV7Q38R
 H8koxRI0QL0KFTsQ5uiB39/QOeljvtrR2ahR5BGE4hIOFNPD9UDOT/3iPG0rzaYl+cPd
 Bcd3lKq8aGlXPxLwtzdUDMp+iDY/Y8d2KJ/IqlIddL/9ph5eCTo3kUxYrgzEdJGyHB9/
 MgwQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
 :subject:date:message-id:mime-version:content-transfer-encoding
 :content-language:thread-index;
 bh=eg+DRkQhY0ItKbPuycTxg6YkUMHgHt0fn6xg39BAEdg=;
 b=CmjEb9pRpBTwJi+lYhOUkv6JV7fIiqMXyzuHxXYiOAf9i2xAczAAAFb3Kdg+qZnxCq
 IRcUeidw4veSwAiM0ej9MDmYolyPkgOgWf2dbTku1JXZ1FrSJXKLidj1aWPXGafCcpJG
 LbiFj8PEzr0b6Am/pzeVMNMzJROGdcWVUgE+jtOkHJujsCrMGe5ydnuv2FGM1NpBdYT1
 FNmjKSWVZdJ5Y0h3k67lpxXT6KWjUl/x8la8EkrfwytrLlpPnBPri1XqKfsP9CycARTx
 8GXCyQNvKacQ0x4bXa+k2epPST6TVOhsjmmeCMnswuVRKc7JoNc5VgscLGGQzdpy9MEl
 aJsQ==
X-Gm-Message-State: AOAM531k7HrSS8C/pqBUr4XfAIZNpHQLmRyKb4oMtMcuNAA1+sMhQBY4
 CqMEOIW8v0rvDEGLO+mGwdU=
X-Google-Smtp-Source: ABdhPJyo9Ri6+VWgzfq46t5H+Tepp7GSb7Xei9uTt5g6lqVRrivGj2BXAhtx06ogl23/JFjIA4PbuQ==
X-Received: by 2002:a5d:66c7:: with SMTP id k7mr492550wrw.290.1595438016438;
 Wed, 22 Jul 2020 10:13:36 -0700 (PDT)
Received: from CBGR90WXYV0 ([2a00:23c5:5785:9a01:5de9:cd31:71fa:3a6f])
 by smtp.gmail.com with ESMTPSA id g3sm718835wrb.59.2020.07.22.10.13.35
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Wed, 22 Jul 2020 10:13:35 -0700 (PDT)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
To: "'Julien Grall'" <julien@xen.org>,
	<xen-devel@lists.xenproject.org>
References: <20200722165544.557-1-paul@xen.org>
 <7d608d35-e373-07bf-81a4-16f3a4ee03c1@xen.org>
In-Reply-To: <7d608d35-e373-07bf-81a4-16f3a4ee03c1@xen.org>
Subject: RE: [PATCH-for-4.14] SUPPORT.md: Set version and release/support dates
Date: Wed, 22 Jul 2020 18:13:35 +0100
Message-ID: <001c01d6604b$6fa953b0$4efbfb10$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="utf-8"
Content-Transfer-Encoding: 7bit
X-Mailer: Microsoft Outlook 16.0
Content-Language: en-gb
Thread-Index: AQFr5LSoFCjTNW2heY+JVDg0SR/0qQFrPA2nqd1ziGA=
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Reply-To: paul@xen.org
Cc: 'Stefano Stabellini' <sstabellini@kernel.org>, 'Wei Liu' <wl@xen.org>,
 'Andrew Cooper' <andrew.cooper3@citrix.com>,
 'Paul Durrant' <pdurrant@amazon.com>,
 'Ian Jackson' <ian.jackson@eu.citrix.com>,
 'George Dunlap' <george.dunlap@citrix.com>, 'Jan Beulich' <jbeulich@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> -----Original Message-----
> From: Julien Grall <julien@xen.org>
> Sent: 22 July 2020 17:59
> To: Paul Durrant <paul@xen.org>; xen-devel@lists.xenproject.org
> Cc: Paul Durrant <pdurrant@amazon.com>; Andrew Cooper <andrew.cooper3@citrix.com>; George Dunlap
> <george.dunlap@citrix.com>; Ian Jackson <ian.jackson@eu.citrix.com>; Jan Beulich <jbeulich@suse.com>;
> Stefano Stabellini <sstabellini@kernel.org>; Wei Liu <wl@xen.org>
> Subject: Re: [PATCH-for-4.14] SUPPORT.md: Set version and release/support dates
> 
> 
> 
> On 22/07/2020 17:55, Paul Durrant wrote:
> > From: Paul Durrant <pdurrant@amazon.com>
> >
> > Signed-off-by: Paul Durrant <pdurrant@amazon.com>
> 
> Acked-by: Julien Grall <jgrall@amazon.com>
> 
> > ---
> > Cc: Andrew Cooper <andrew.cooper3@citrix.com>
> > Cc: George Dunlap <george.dunlap@citrix.com>
> > Cc: Ian Jackson <ian.jackson@eu.citrix.com>
> > Cc: Jan Beulich <jbeulich@suse.com>
> > Cc: Julien Grall <julien@xen.org>
> > Cc: Stefano Stabellini <sstabellini@kernel.org>
> > Cc: Wei Liu <wl@xen.org>
> >
> > Obviously this has my implicit Release-acked-by and is to be committed to
> > the staging-4.14 branch only.
> 
> I will commit it.

Thanks,

  Paul

> 
> > ---
> >   SUPPORT.md | 8 ++++----
> >   1 file changed, 4 insertions(+), 4 deletions(-)
> >
> > diff --git a/SUPPORT.md b/SUPPORT.md
> > index efbcb26ddf..88a182ac31 100644
> > --- a/SUPPORT.md
> > +++ b/SUPPORT.md
> > @@ -9,10 +9,10 @@ for the definitions of the support status levels etc.
> >
> >   # Release Support
> >
> > -    Xen-Version: 4.14-rc
> > -    Initial-Release: n/a
> > -    Supported-Until: TBD
> > -    Security-Support-Until: Unreleased - not yet security-supported
> > +    Xen-Version: 4.14
> > +    Initial-Release: 2020-07-24
> > +    Supported-Until: 2022-01-24
> > +    Security-Support-Until: 2023-07-24
> >
> >   Release Notes
> >   : <a href="https://wiki.xenproject.org/wiki/Xen_Project_4.14_Release_Notes">RN</a>
> >
> 
> --
> Julien Grall
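For reference, the stanza being updated is the machine-readable block at the
top of SUPPORT.md, written as four-space-indented "Key: value" lines. A
throwaway sketch for pulling those fields out, hypothetical and not part of
the Xen tree:

```python
import re

# The "Release Support" block from SUPPORT.md after this patch, verbatim.
SUPPORT_BLOCK = """\
# Release Support

    Xen-Version: 4.14
    Initial-Release: 2020-07-24
    Supported-Until: 2022-01-24
    Security-Support-Until: 2023-07-24
"""

def parse_release_support(text):
    """Return the four-space-indented Key: value pairs as a dict."""
    return dict(re.findall(r"^ {4}([\w-]+):\s*(.+?)\s*$", text, re.M))

print(parse_release_support(SUPPORT_BLOCK))
```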



From xen-devel-bounces@lists.xenproject.org Wed Jul 22 17:24:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jul 2020 17:24:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyITp-00082f-2k; Wed, 22 Jul 2020 17:24:33 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=yOZ1=BB=citrix.com=ian.jackson@srs-us1.protection.inumbo.net>)
 id 1jyITn-00082a-Jw
 for xen-devel@lists.xenproject.org; Wed, 22 Jul 2020 17:24:31 +0000
X-Inumbo-ID: 333f0b8e-cc40-11ea-8685-bc764e2007e4
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 333f0b8e-cc40-11ea-8685-bc764e2007e4;
 Wed, 22 Jul 2020 17:24:30 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595438670;
 h=from:mime-version:content-transfer-encoding:message-id:
 date:to:cc:subject:in-reply-to:references;
 bh=IcWoqAeuOPX0NqK6CnpgbZTbdVuOgRpQ5eCh0w/ifE4=;
 b=Ju5hfX2hI9PImKjokgEg1EPoYjxhQMmhSGUHIu3ku2YKc3CQIe/FqBRF
 eQUUk7MYkhZxDIm8bTjB/fS14ur7p5re4KbHZRAraGlGlb0jiKGG3H8nE
 Wz42pcAmLIspaIe+ZE3OkWawY7QbOucBTOWFcjL5NWO6lJ+9IFkD6hoPO E=;
Authentication-Results: esa5.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: uymvW4qasOXPyCqG65QrBt7O/LoW9C0GADH0lriwwHG6FJrMYUgSkgIdvHMOJ/157xFdXsWHeZ
 xqzhVLzLl3IIP/k3uPH3Chz5uu6u11vbR0gwE2a5m1hm2VbwERmvQHEv/TY0FV2rPSr6HpCKJg
 fv8OLT2+wNJUQJAtB291/oOSXvZwYg64YD5fPFOEswKPFu8/LaqDf1c9YilXO0GTt7hpFrn7qG
 cJiNjLqm2wZqruw3T6KOt2/CSSi7PwJ713nJz6LlJbzYcvJt9KvvwuTspBLiBoIFMRl5/Bm3dt
 ILY=
X-SBRS: 2.7
X-MesageID: 23162212
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,383,1589256000"; d="scan'208";a="23162212"
From: Ian Jackson <ian.jackson@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Message-ID: <24344.30280.164326.834894@mariner.uk.xensource.com>
Date: Wed, 22 Jul 2020 18:24:24 +0100
To: Paul Durrant <paul@xen.org>
Subject: Re: [PATCH-for-4.14] SUPPORT.md: Set version and release/support dates
In-Reply-To: <20200722165544.557-1-paul@xen.org>
References: <20200722165544.557-1-paul@xen.org>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <Andrew.Cooper3@citrix.com>, Paul
 Durrant <pdurrant@amazon.com>, George Dunlap <George.Dunlap@citrix.com>,
 Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Paul Durrant writes ("[PATCH-for-4.14] SUPPORT.md: Set version and release/support dates"):
> From: Paul Durrant <pdurrant@amazon.com>
> 
> Signed-off-by: Paul Durrant <pdurrant@amazon.com>
> ---
> Cc: Andrew Cooper <andrew.cooper3@citrix.com>
> Cc: George Dunlap <george.dunlap@citrix.com>
> Cc: Ian Jackson <ian.jackson@eu.citrix.com>
> Cc: Jan Beulich <jbeulich@suse.com>
> Cc: Julien Grall <julien@xen.org>
> Cc: Stefano Stabellini <sstabellini@kernel.org>
> Cc: Wei Liu <wl@xen.org>
> 
> Obviously this has my implicit Release-acked-by and is to be committed to
> the staging-4.14 branch only.

Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

The plan is to commit this as part of the 4.14 release deliverables
prep.

Ian.


From xen-devel-bounces@lists.xenproject.org Wed Jul 22 17:50:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jul 2020 17:50:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyIsV-0001gO-6e; Wed, 22 Jul 2020 17:50:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=u0rb=BB=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jyIsT-0001VB-OY
 for xen-devel@lists.xenproject.org; Wed, 22 Jul 2020 17:50:01 +0000
X-Inumbo-ID: c3b30961-cc43-11ea-868a-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c3b30961-cc43-11ea-868a-bc764e2007e4;
 Wed, 22 Jul 2020 17:50:01 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=P2XOLKz2T5YZY3tlrY7kbX9WooD0WJy2qhtjbIWMzzU=; b=vL+p9JumPxUuryntrVnz/Lyc4R
 KUgIUQti0rRTtE4uvrNth8T4hm3VUJsSd7gpFpDvHjoTxQPLMTDgW6+vEXn3ggdlWK+DbUEY5Btve
 HDnbrCFifVdX46HWYCqrq62z+w88AOGGsnjaPKRo3E0uNZ2iROrsABfpvVXia1QFWAJg=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jyIsO-000512-UR; Wed, 22 Jul 2020 17:49:56 +0000
Received: from 54-240-197-227.amazon.com ([54.240.197.227]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jyIsO-0004NO-Lo; Wed, 22 Jul 2020 17:49:56 +0000
Subject: Re: [PATCH-for-4.14] SUPPORT.md: Set version and release/support dates
To: paul@xen.org, xen-devel@lists.xenproject.org
References: <20200722165544.557-1-paul@xen.org>
 <7d608d35-e373-07bf-81a4-16f3a4ee03c1@xen.org>
 <001c01d6604b$6fa953b0$4efbfb10$@xen.org>
From: Julien Grall <julien@xen.org>
Message-ID: <75d956f6-055d-b9eb-5128-d44a4005b2f3@xen.org>
Date: Wed, 22 Jul 2020 18:49:54 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:68.0)
 Gecko/20100101 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <001c01d6604b$6fa953b0$4efbfb10$@xen.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: 'Stefano Stabellini' <sstabellini@kernel.org>, 'Wei Liu' <wl@xen.org>,
 'Andrew Cooper' <andrew.cooper3@citrix.com>,
 'Paul Durrant' <pdurrant@amazon.com>,
 'Ian Jackson' <ian.jackson@eu.citrix.com>,
 'George Dunlap' <george.dunlap@citrix.com>, 'Jan Beulich' <jbeulich@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>



On 22/07/2020 18:13, Paul Durrant wrote:
>> -----Original Message-----
>> From: Julien Grall <julien@xen.org>
>> Sent: 22 July 2020 17:59
>> To: Paul Durrant <paul@xen.org>; xen-devel@lists.xenproject.org
>> Cc: Paul Durrant <pdurrant@amazon.com>; Andrew Cooper <andrew.cooper3@citrix.com>; George Dunlap
>> <george.dunlap@citrix.com>; Ian Jackson <ian.jackson@eu.citrix.com>; Jan Beulich <jbeulich@suse.com>;
>> Stefano Stabellini <sstabellini@kernel.org>; Wei Liu <wl@xen.org>
>> Subject: Re: [PATCH-for-4.14] SUPPORT.md: Set version and release/support dates
>>
>>
>>
>> On 22/07/2020 17:55, Paul Durrant wrote:
>>> From: Paul Durrant <pdurrant@amazon.com>
>>>
>>> Signed-off-by: Paul Durrant <pdurrant@amazon.com>
>>
>> Acked-by: Julien Grall <jgrall@amazon.com>
>>
>>> ---
>>> Cc: Andrew Cooper <andrew.cooper3@citrix.com>
>>> Cc: George Dunlap <george.dunlap@citrix.com>
>>> Cc: Ian Jackson <ian.jackson@eu.citrix.com>
>>> Cc: Jan Beulich <jbeulich@suse.com>
>>> Cc: Julien Grall <julien@xen.org>
>>> Cc: Stefano Stabellini <sstabellini@kernel.org>
>>> Cc: Wei Liu <wl@xen.org>
>>>
>>> Obviously this has my implicit Release-acked-by and is to be committed to
>>> the staging-4.14 branch only.
>>
>> I will commit it.
> 
> Thanks,

I ended up reverting the patch as there was some unhappiness on 
#xendevel about committing it.

I will let Ian do it as part of the release deliverables.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Jul 22 17:57:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jul 2020 17:57:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyIzq-0002Ls-2P; Wed, 22 Jul 2020 17:57:38 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=k9w/=BB=yujala.com=srini@srs-us1.protection.inumbo.net>)
 id 1jyIzp-0002Ln-90
 for xen-devel@lists.xenproject.org; Wed, 22 Jul 2020 17:57:37 +0000
X-Inumbo-ID: d088149b-cc44-11ea-868c-bc764e2007e4
Received: from gproxy1-pub.mail.unifiedlayer.com (unknown [69.89.25.95])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d088149b-cc44-11ea-868c-bc764e2007e4;
 Wed, 22 Jul 2020 17:57:32 +0000 (UTC)
Received: from cmgw10.unifiedlayer.com (unknown [10.9.0.10])
 by gproxy1.mail.unifiedlayer.com (Postfix) with ESMTP id 6A9FE84BD6190
 for <xen-devel@lists.xenproject.org>; Wed, 22 Jul 2020 11:57:31 -0600 (MDT)
Received: from md-71.webhostbox.net ([204.11.58.143]) by cmsmtp with ESMTP
 id yIzij0hEcDlydyIzijF67R; Wed, 22 Jul 2020 11:57:31 -0600
X-Authority-Reason: nr=8
X-Authority-Analysis: v=2.3 cv=X7F81lbe c=1 sm=1 tr=0
 a=yS0qNmEK8ed8yKyeR8R6rg==:117 a=yS0qNmEK8ed8yKyeR8R6rg==:17
 a=dLZJa+xiwSxG16/P+YVxDGlgEgI=:19 a=_RQrkK6FrEwA:10:nop_rcvd_month_year
 a=o-A10e_uY_YA:10:endurance_base64_authed_username_1 a=DAwyPP_o2Byb1YXLmDAA:9
 a=0f1Y9JmXAAAA:8 a=cWRNjhkoAAAA:8 a=5JJ0oef6weKNsmNsFb8A:9
 a=5tgyK3aFugTBrctB:21 a=spVqwRLfTmeiu3zE:21 a=CjuIK1q_8ugA:10:nop_charset_2
 a=UVKsufMBYgcA:10:demote_hacked_domain_1 a=yMhMjlubAAAA:8 a=SSmOFEACAAAA:8
 a=y8_R9gnzg0ak4JZGfxsA:9 a=CX5wSMZFaTPvz5or:21 a=xjF0CreuJmd9iHl0:21
 a=ZkcJNbYcqkxZVWdY:21 a=gKO2Hq4RSVkA:10:nop_mshtml
 a=UiCQ7L4-1S4A:10:nop_mshtml_css_classes a=hTZeC7Yk6K0A:10:nop_msword_html
 a=frz4AuCg-hUA:10:nop_css_in_html a=It28mvvgxjsq2WIeNnUB:22
 a=sVa6W5Aao32NNC1mekxh:22
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=yujala.com; 
 s=default;
 h=Content-Type:MIME-Version:Message-ID:Date:Subject:In-Reply-To:
 References:To:From:Sender:Reply-To:Cc:Content-Transfer-Encoding:Content-ID:
 Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc
 :Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:List-Subscribe:
 List-Post:List-Owner:List-Archive;
 bh=8kcf24TudZqoMYhspoMjkVi/XhjYybMqrFeOKJdCVdY=; b=vWpaEAx/QvMH+mj/CUdhMO0KkH
 wUceNTJ10mQ0piGt01fMjFcO9fKPpTV4EPVAmZjLkbCCjyMOeaptEZtPAr32L8bFEShv0b7aWkS48
 MpNkczLasm5vk/CJaSQBj+GxFCTdt0s1sK5gti5b+ou/o9HdhyWzuWOn9nR+fTLBrxCEfBoTQM/D0
 XhnRuCSR4ZSPm74JfMENP8p1Ddq5amR9KkfpxlUMFm3fl+TUflwdsz/ZjpaZC7tjMfB1BSiuuvrZ0
 WUn6+PliobB0X8bHWjY5vU64n4iAc3E00NCcuGsS7qi4UTjUeeSci+6U47KjO0k+nrSXyPgyw0oaD
 NspARMDw==;
Received: from 162-231-240-210.lightspeed.sntcca.sbcglobal.net
 ([162.231.240.210]:55464 helo=SRINIASUSLAPTOP)
 by md-71.webhostbox.net with esmtpsa (TLS1.2) tls
 TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 (Exim 4.93)
 (envelope-from <srini@yujala.com>) id 1jyIzi-0019V8-4D
 for xen-devel@lists.xenproject.org; Wed, 22 Jul 2020 17:57:30 +0000
From: "Srinivas Bangalore" <srini@yujala.com>
To: <xen-devel@lists.xenproject.org>
References: 
In-Reply-To: 
Subject: RE: Porting Xen to Jetson Nano 
Date: Wed, 22 Jul 2020 10:57:28 -0700
Message-ID: <002801d66051$90fe2300$b2fa6900$@yujala.com>
MIME-Version: 1.0
Content-Type: multipart/alternative;
 boundary="----=_NextPart_000_0029_01D66016.E4A1BC00"
X-Mailer: Microsoft Outlook 16.0
Thread-Index: AdZeg0x1DrNeeg9GRtKkTne7zCxl1gBzRCMQ
Content-Language: en-us
X-AntiAbuse: This header was added to track abuse,
 please include it with any abuse report
X-AntiAbuse: Primary Hostname - md-71.webhostbox.net
X-AntiAbuse: Original Domain - lists.xenproject.org
X-AntiAbuse: Originator/Caller UID/GID - [47 12] / [47 12]
X-AntiAbuse: Sender Address Domain - yujala.com
X-BWhitelist: no
X-Source-IP: 162.231.240.210
X-Source-L: No
X-Exim-ID: 1jyIzi-0019V8-4D
X-Source: 
X-Source-Args: 
X-Source-Dir: 
X-Source-Sender: 162-231-240-210.lightspeed.sntcca.sbcglobal.net
 (SRINIASUSLAPTOP) [162.231.240.210]:55464
X-Source-Auth: srini@yujala.com
X-Email-Count: 1
X-Source-Cap: c3JpbmlxbGw7c3JpbmlxbGw7bWQtNzEud2ViaG9zdGJveC5uZXQ=
X-Local-Domain: yes
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

This is a multipart message in MIME format.

------=_NextPart_000_0029_01D66016.E4A1BC00
Content-Type: text/plain;
	charset="us-ascii"
Content-Transfer-Encoding: 7bit

Dear Xen experts,

 

Would greatly appreciate some hints on how to move forward with this one.

I have included further details on Xen diagnostics below:

 

(XEN) *** LOADING DOMAIN 0 ***

(XEN) Loading kernel from boot module @ 00000000e1000000

(XEN) Allocating 1:1 mappings totalling 512MB for dom0:

(XEN) BANK[0] 0x000000a0000000-0x000000c0000000 (512MB)

(XEN) Grant table range: 0x000000fec00000-0x000000fec60000

(XEN) Loading zImage from 00000000e1000000 to
00000000a0080000-00000000a223c808

(XEN) Allocating PPI 16 for event channel interrupt

(XEN) Loading dom0 DTB to 0x00000000a8000000-0x00000000a803435c

(XEN) Scrubbing Free RAM on 1 nodes using 4 CPUs

(XEN) ........done.

(XEN) Initial low memory virq threshold set at 0x4000 pages.

(XEN) Std. Loglevel: Errors and warnings

(XEN) Guest Loglevel: All

(XEN) ***************************************************

(XEN) WARNING: CONSOLE OUTPUT IS SYNCHRONOUS

(XEN) This option is intended to aid debugging of Xen by ensuring

(XEN) that all output is synchronously delivered on the serial line.

(XEN) However it can introduce SIGNIFICANT latencies and affect

(XEN) timekeeping. It is NOT recommended for production use!

(XEN) ***************************************************

(XEN) 3... 2... 1...

(XEN) *** Serial input -> DOM0 (type 'CTRL-a' three times to switch input to
Xen)

(XEN) Freed 300kB init memory.

(XEN) *** Serial input -> Xen (type 'CTRL-a' three times to switch input to
DOM0)

(XEN) 'h' pressed -> showing installed handlers

(XEN)  key '%' (ascii '25') => trap to xendbg

(XEN)  key '*' (ascii '2a') => print all diagnostics

(XEN)  key '0' (ascii '30') => dump Dom0 registers

(XEN)  key 'A' (ascii '41') => toggle alternative key handling

(XEN)  key 'H' (ascii '48') => dump heap info

(XEN)  key 'R' (ascii '52') => reboot machine

(XEN)  key 'a' (ascii '61') => dump timer queues

(XEN)  key 'd' (ascii '64') => dump registers

(XEN)  key 'e' (ascii '65') => dump evtchn info

(XEN)  key 'g' (ascii '67') => print grant table usage

(XEN)  key 'h' (ascii '68') => show this message

(XEN)  key 'm' (ascii '6d') => memory info

(XEN)  key 'q' (ascii '71') => dump domain (and guest debug) info

(XEN)  key 'r' (ascii '72') => dump run queues

(XEN)  key 't' (ascii '74') => display multi-cpu clock info

(XEN)  key 'w' (ascii '77') => synchronously dump console ring buffer
(dmesg)

(XEN) '*' pressed -> firing all diagnostic keyhandlers

(XEN) [d: dump registers]

(XEN) 'd' pressed -> dumping registers

(XEN)

(XEN) *** Dumping CPU0 guest state (d0v0): ***

(XEN) ----[ Xen-4.8.5  arm64  debug=n   Tainted:  C   ]----

(XEN) CPU:    0

(XEN) PC:     00000000a0080000

(XEN) LR:     0000000000000000

(XEN) SP_EL0: 0000000000000000

(XEN) SP_EL1: 0000000000000000

(XEN) CPSR:   000001c5 MODE:64-bit EL1h (Guest Kernel, handler)

(XEN)      X0: 00000000a8000000  X1: 0000000000000000  X2: 0000000000000000

(XEN)      X3: 0000000000000000  X4: 0000000000000000  X5: 0000000000000000

(XEN)      X6: 0000000000000000  X7: 0000000000000000  X8: 0000000000000000

(XEN)      X9: 0000000000000000 X10: 0000000000000000 X11: 0000000000000000

(XEN)     X12: 0000000000000000 X13: 0000000000000000 X14: 0000000000000000

(XEN)     X15: 0000000000000000 X16: 0000000000000000 X17: 0000000000000000

(XEN)     X18: 0000000000000000 X19: 0000000000000000 X20: 0000000000000000

(XEN)     X21: 0000000000000000 X22: 0000000000000000 X23: 0000000000000000

(XEN)     X24: 0000000000000000 X25: 0000000000000000 X26: 0000000000000000

(XEN)     X27: 0000000000000000 X28: 0000000000000000  FP: 0000000000000000

(XEN)

(XEN)    ELR_EL1: 0000000000000000

(XEN)    ESR_EL1: 00000000

(XEN)    FAR_EL1: 0000000000000000

(XEN)

(XEN)  SCTLR_EL1: 00c50838

(XEN)    TCR_EL1: 00000000

(XEN)  TTBR0_EL1: 0000000000000000

(XEN)  TTBR1_EL1: 0000000000000000

(XEN)

(XEN)   VTCR_EL2: 80043594

(XEN)  VTTBR_EL2: 000100017f0f9000

(XEN)

(XEN)  SCTLR_EL2: 30cd183d

(XEN)    HCR_EL2: 000000008038663f

(XEN)  TTBR0_EL2: 00000000fecfc000

(XEN)

(XEN)    ESR_EL2: 8200000d

(XEN)  HPFAR_EL2: 0000000000000000

(XEN)    FAR_EL2: 00000000a0080000

(XEN)

(XEN) Guest stack trace from sp=0:

(XEN)   Failed to convert stack to physical address

(XEN)

(XEN) *** Dumping CPU1 host state: ***

(XEN) ----[ Xen-4.8.5  arm64  debug=n   Tainted:  C   ]----

(XEN) CPU:    1

(XEN) PC:     0000000000243ad8 idle_loop+0x74/0x11c

(XEN) LR:     0000000000243ae0

(XEN) SP:     00008000ff1bfe70

(XEN) CPSR:   20000249 MODE:64-bit EL2h (Hypervisor, handler)

(XEN)      X0: 0000000000000000  X1: 00008000feeb8680  X2: 0000000000000001

(XEN)      X3: fffffffffffffed4  X4: 00008000feeb8680  X5: 0000000000000000

(XEN)      X6: 00008000ff16dc40  X7: 00008000ff16dc58  X8: 00008000ff1bfe08

(XEN)      X9: 0000000000262458 X10: 000000000000000a X11: 00008000ff1bfbe9

(XEN)     X12: 0000000000000031 X13: 0000000000000001 X14: 00008000ff1bfbe8

(XEN)     X15: 0000000000000020 X16: 0000000000000000 X17: 0000000000000000

(XEN)     X18: 0000000000000000 X19: 0000000000302448 X20: 0000000000308d18

(XEN)     X21: 00000000002cbf80 X22: 0000000000308d18 X23: 0000000000308d18

(XEN)     X24: 0000000000000001 X25: 0000000000000001 X26: 0000000000000000

(XEN)     X27: 0000000000000000 X28: 0000000000000000  FP: 00008000ff1bfe70

(XEN)

(XEN)   VTCR_EL2: 80043594

(XEN)  VTTBR_EL2: 0000000000000000

(XEN)

(XEN)  SCTLR_EL2: 30cd183d

(XEN)    HCR_EL2: 000000000038663f

(XEN)  TTBR0_EL2: 00000000fecfc000

(XEN)

(XEN)    ESR_EL2: 00000000

(XEN)  HPFAR_EL2: 0000000000000000

(XEN)    FAR_EL2: 0000000000000000

(XEN)

(XEN) Xen stack trace from sp=00008000ff1bfe70:

(XEN)    00008000ff1bfe80 0000000000250f3c 0000000000308d18 0000000000000001

(XEN)    0000000000000001 0000000000000001 0000000000400000 0900494931070860

(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000

(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000

(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000

(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000

(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000

(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000

(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000

(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000

(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000

(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000

(XEN)    0000000000000000 0000000000000000

(XEN) Xen call trace:

(XEN)    [<0000000000243ad8>] idle_loop+0x74/0x11c (PC)

(XEN)    [<0000000000243ae0>] idle_loop+0x7c/0x11c (LR)

(XEN)    [<0000000000250f3c>] start_secondary+0xfc/0x10c

(XEN)    [<0000000000000001>] 0000000000000001

(XEN)

(XEN) *** Dumping CPU2 host state: ***

(XEN) ----[ Xen-4.8.5  arm64  debug=n   Tainted:  C   ]----

(XEN) CPU:    2

(XEN) PC:     0000000000243ad8 idle_loop+0x74/0x11c

(XEN) LR:     0000000000243ae0

(XEN) SP:     00008000ff1afe70

(XEN) CPSR:   20000249 MODE:64-bit EL2h (Hypervisor, handler)

(XEN)      X0: 0000000000000000  X1: 00008000feeae680  X2: 0000000000000002

(XEN)      X3: fffffffffffffed4  X4: 00008000feeae680  X5: 0000000000000000

(XEN)      X6: 00008000ff16df20  X7: 00008000ff16df38  X8: 00008000ff1afe08

(XEN)      X9: 0000000000262458 X10: 000000000000000a X11: 00008000ff1afbe9

(XEN)     X12: 0000000000000032 X13: 0000000000000001 X14: 00008000ff1afbe8

(XEN)     X15: 0000000000000020 X16: 0000000000000000 X17: 0000000000000000

(XEN)     X18: 0000000000000000 X19: 0000000000302448 X20: 0000000000308d18

(XEN)     X21: 00000000002cbf80 X22: 0000000000308d18 X23: 0000000000308d18

(XEN)     X24: 0000000000000002 X25: 0000000000000001 X26: 0000000000000000

(XEN)     X27: 0000000000000000 X28: 0000000000000000  FP: 00008000ff1afe70

(XEN)

(XEN)   VTCR_EL2: 80043594

(XEN)  VTTBR_EL2: 0000000000000000

(XEN)

(XEN)  SCTLR_EL2: 30cd183d

(XEN)    HCR_EL2: 000000000038663f

(XEN)  TTBR0_EL2: 00000000fecfc000

(XEN)

(XEN)    ESR_EL2: 00000000

(XEN)  HPFAR_EL2: 0000000000000000

(XEN)    FAR_EL2: 0000000000000000

(XEN)

(XEN) Xen stack trace from sp=00008000ff1afe70:

(XEN)    00008000ff1afe80 0000000000250f3c 0000000000308d18 0000000000000002

(XEN)    0000000000000002 0000000000000001 0000000000400000 6002c108894108ca

(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000

(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000

(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000

(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000

(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000

(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000

(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000

(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000

(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000

(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000

(XEN)    0000000000000000 0000000000000000

(XEN) Xen call trace:

(XEN)    [<0000000000243ad8>] idle_loop+0x74/0x11c (PC)

(XEN)    [<0000000000243ae0>] idle_loop+0x7c/0x11c (LR)

(XEN)    [<0000000000250f3c>] start_secondary+0xfc/0x10c

(XEN)    [<0000000000000002>] 0000000000000002

(XEN)

(XEN) *** Dumping CPU3 host state: ***

(XEN) ----[ Xen-4.8.5  arm64  debug=n   Tainted:  C   ]----

(XEN) CPU:    3

(XEN) PC:     0000000000243ad8 idle_loop+0x74/0x11c

(XEN) LR:     0000000000243ae0

(XEN) SP:     00008000ff1a7e70

(XEN) CPSR:   20000249 MODE:64-bit EL2h (Hypervisor, handler)

(XEN)      X0: 0000000000000000  X1: 00008000feeaa680  X2: 0000000000000003

(XEN)      X3: fffffffffffffed4  X4: 00008000feeaa680  X5: 0000000000000000

(XEN)      X6: 00008000ff1b4180  X7: 00008000ff1b4198  X8: 00008000ff1a7e08

(XEN)      X9: 0000000000262458 X10: 000000000000000a X11: 00008000ff1a7be9

(XEN)     X12: 0000000000000033 X13: 0000000000000001 X14: 00008000ff1a7be8

(XEN)     X15: 0000000000000020 X16: 0000000000000000 X17: 0000000000000000

(XEN)     X18: 0000000000000000 X19: 0000000000302448 X20: 0000000000308d18

(XEN)     X21: 00000000002cbf80 X22: 0000000000308d18 X23: 0000000000308d18

(XEN)     X24: 0000000000000003 X25: 0000000000000001 X26: 0000000000000000

(XEN)     X27: 0000000000000000 X28: 0000000000000000  FP: 00008000ff1a7e70

(XEN)

(XEN)   VTCR_EL2: 80043594

(XEN)  VTTBR_EL2: 0000000000000000

(XEN)

(XEN)  SCTLR_EL2: 30cd183d

(XEN)    HCR_EL2: 000000000038663f

(XEN)  TTBR0_EL2: 00000000fecfc000

(XEN)

(XEN)    ESR_EL2: 00000000

(XEN)  HPFAR_EL2: 0000000000000000

(XEN)    FAR_EL2: 0000000000000000

(XEN)

(XEN) Xen stack trace from sp=00008000ff1a7e70:

(XEN)    00008000ff1a7e80 0000000000250f3c 0000000000308d18 0000000000000003

(XEN)    0000000000000003 0000000000000001 0000000000400000 70c821138b0c9de0

(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000

(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000

(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000

(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000

(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000

(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000

(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000

(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000

(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000

(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000

(XEN)    0000000000000000 0000000000000000

(XEN) Xen call trace:

(XEN)    [<0000000000243ad8>] idle_loop+0x74/0x11c (PC)

(XEN)    [<0000000000243ae0>] idle_loop+0x7c/0x11c (LR)

(XEN)    [<0000000000250f3c>] start_secondary+0xfc/0x10c

(XEN)    [<0000000000000003>] 0000000000000003

(XEN)

(XEN)

 

It seems the DOM0 kernel did not get added to the task list.
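The ESR_EL2 value in the CPU0 guest-state dump (0x8200000d) can be decoded
with the standard AArch64 syndrome-register layout. A minimal illustrative
decoder, following the Arm ARM field encodings rather than Xen's own code:

```python
# Decode ESR_EL2 = 0x8200000d from the CPU0 guest-state dump above,
# following the AArch64 syndrome-register layout (illustrative only).
def decode_esr_el2(esr):
    return {
        "EC": (esr >> 26) & 0x3F,   # Exception Class, bits [31:26]
        "IL": (esr >> 25) & 0x1,    # Instruction Length bit [25]
        "FSC": esr & 0x3F,          # Fault Status Code, bits [5:0]
    }

syndrome = decode_esr_el2(0x8200000D)
# EC 0x20 is "Instruction Abort from a lower Exception level", and FSC 0x0d
# reads as a permission fault at level 1 -- which would be consistent with
# Dom0 faulting on its first instruction fetch at PC/FAR_EL2 = 0xa0080000.
print({k: hex(v) for k, v in syndrome.items()})
```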

 

Boot args for Xen and Dom0 are here:

(XEN) Checking for initrd in /chosen

(XEN) linux,initrd limits invalid: 0000000084100000 >= 0000000084100000

(XEN) RAM: 0000000080000000 - 00000000fedfffff

(XEN) RAM: 0000000100000000 - 000000017f1fffff

(XEN)

(XEN) MODULE[0]: 00000000fc7f8000 - 00000000fc82d000 Device Tree

(XEN) MODULE[1]: 00000000e1000000 - 00000000e31bc808 Kernel
console=hvc0 earlyprintk=xen earlycon=xen rootfstype=ext4 rw rootwait
root=/dev/mmcblk0p1 rdinit=/sbin/init

(XEN)  RESVD[0]: 0000000080000000 - 0000000080020000

(XEN)  RESVD[1]: 00000000e3500000 - 00000000e3535000

(XEN)  RESVD[2]: 00000000fc7f8000 - 00000000fc82d000

(XEN)

(XEN) Command line: console=dtuart earlyprintk=xen
earlycon=uart8250,mmio32,0x70006000 sync_console dom0_mem=512M log_lvl=all
guest_loglvl=all console_to_ring

(XEN) Placing Xen at 0x00000000fec00000-0x00000000fee00000

 

Thanks,
Srini

 

From: Srinivas Bangalore <srini@yujala.com> 
Sent: Monday, July 20, 2020 3:51 AM
To: 'xen-devel@lists.xenproject.org' <xen-devel@lists.xenproject.org>
Subject: Porting Xen to Jetson Nano 

 

Hello,

I am trying to get Xen working on a Jetson Nano board (based on NVIDIA's Tegra210 SoC). After some searching through the xen-devel archives, I learnt that a set of patches was developed in 2017 to port Xen to Tegra
(https://lists.xenproject.org/archives/html/xen-devel/2017-04/msg00991.html).
However, these patches do not appear in the main source repository, so I applied them manually to Xen 4.8.5. With these changes, Xen now boots successfully on the Jetson Nano, but there is no Dom0 output on the console. I can still switch between the Xen and Dom0 consoles with CTRL-a pressed three times.

 

I am using Linux kernel version 5.7 for Dom0. I also tried the native Linux kernel that ships with the Nano board, but that doesn't help either.
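As a sanity check on the Dom0 kernel build, these are the Kconfig options typically needed for an arm64 Xen Dom0 with a hypervisor console (option names taken from mainline Kconfig; the exact set can vary by kernel version, so treat this as a sketch rather than a definitive list):

```
# Xen guest/Dom0 support (arm64)
CONFIG_XEN=y
CONFIG_XEN_DOM0=y
# Hypervisor console, needed for console=hvc0 output
CONFIG_HVC_DRIVER=y
CONFIG_HVC_XEN=y
CONFIG_HVC_XEN_FRONTEND=y
# Backends, needed later for running guests (optional at this stage)
CONFIG_XEN_BLKDEV_BACKEND=y
CONFIG_XEN_NETDEV_BACKEND=y
```

Note that with console=hvc0, nothing appears until the hvc console driver initialises; earlycon output over the Xen console requires early-console support in the kernel as well.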

 

Here's the console screen capture:

 

## Flattened Device Tree blob at e3000000
   Booting using the fdt blob at 0xe3000000
   reserving fdt memory region: addr=80000000 size=20000
   reserving fdt memory region: addr=e3000000 size=35000
   Loading Device Tree to 00000000fc7f8000, end 00000000fc82ffff ... OK

Starting kernel ...

- UART enabled -
- CPU 00000000 booting -
- Current EL 00000008 -
- Xen starting at EL2 -
- Zero BSS -
- Setting up control registers -
- Turning on paging -
- Ready -

(XEN) Checking for initrd in /chosen
(XEN) linux,initrd limits invalid: 0000000084100000 >= 0000000084100000
(XEN) RAM: 0000000080000000 - 00000000fedfffff
(XEN) RAM: 0000000100000000 - 000000017f1fffff
(XEN)
(XEN) MODULE[0]: 00000000fc7f8000 - 00000000fc82d000 Device Tree
(XEN) MODULE[1]: 00000000e1000000 - 00000000e2cbe200 Kernel console=hvc0 earlyprintk=uart8250-32bit,0x70006000 rootfstype=ext4 rw rootwait root=/dev/mmcblk0p1
(XEN)  RESVD[0]: 0000000080000000 - 0000000080020000
(XEN)  RESVD[1]: 00000000e3000000 - 00000000e3035000
(XEN)  RESVD[2]: 00000000fc7f8000 - 00000000fc82d000
(XEN)
(XEN) Command line: console=dtuart earlyprintk=xen earlycon=xenboot dom0_mem=512M loglevel=all
(XEN) Placing Xen at 0x00000000fec00000-0x00000000fee00000
(XEN) Update BOOTMOD_XEN from 0000000080080000-0000000080188e01 => 00000000fec00000-00000000fed08e01
(XEN) Domain heap initialised
(XEN) Taking dtuart configuration from /chosen/stdout-path

(XEN) Looking for dtuart at "/serial@70 Xen 4.8.5
(XEN) Xen version 4.8.5 (srinivas@) (aarch64-linux-gnu-gcc (Ubuntu/Linaro 7.5.0-3ubuntu1~18.04) 7.5.0) debug=n  Sun Jul 19 07:44:00 PDT 2020
(XEN) Latest ChangeSet:
(XEN) Processor: 411fd071: "ARM Limited", variant: 0x1, part 0xd07, rev 0x1
(XEN) 64-bit Execution:
(XEN)   Processor Features: 0000000000002222 0000000000000000
(XEN)     Exception Levels: EL3:64+32 EL2:64+32 EL1:64+32 EL0:64+32
(XEN)     Extensions: FloatingPoint AdvancedSIMD
(XEN)   Debug Features: 0000000010305106 0000000000000000
(XEN)   Auxiliary Features: 0000000000000000 0000000000000000
(XEN)   Memory Model Features: 0000000000001124 0000000000000000
(XEN)   ISA Features:  0000000000011120 0000000000000000
(XEN) 32-bit Execution:
(XEN)   Processor Features: 00000131:00011011
(XEN)     Instruction Sets: AArch32 A32 Thumb Thumb-2 Jazelle
(XEN)     Extensions: GenericTimer Security
(XEN)   Debug Features: 03010066
(XEN)   Auxiliary Features: 00000000
(XEN)   Memory Model Features: 10101105 40000000 01260000 02102211
(XEN)  ISA Features: 02101110 13112111 21232042 01112131 00011142 00011121
(XEN) Generic Timer IRQ: phys=30 hyp=26 virt=27 Freq: 19200 KHz
(XEN) GICv2 initialization:
(XEN)         gic_dist_addr=0000000050041000
(XEN)         gic_cpu_addr=0000000050042000
(XEN)         gic_hyp_addr=0000000050044000
(XEN)         gic_vcpu_addr=0000000050046000
(XEN)         gic_maintenance_irq=25
(XEN) GICv2: 224 lines, 4 cpus, secure (IID 0200143b).
(XEN) Using scheduler: SMP Credit Scheduler (credit)
(XEN) Allocated console ring of 16 KiB.

(XEN) Bringing up CPU1
- CPU 00000001 booting -
- Current EL 00000008 -
- Xen starting at EL2 -
- Setting up control registers -
- Turning on paging -
- Ready -
(XEN) Bringing up CPU2
- CPU 00000002 booting -
- Current EL 00000008 -
- Xen starting at EL2 -
- Setting up control registers -
- Turning on paging -
- Ready -
(XEN) Bringing up CPU3
- CPU 00000003 booting -
- Current EL 00000008 -
- Xen starting at EL2 -
- Setting up control registers -
- Turning on paging -
- Ready -
(XEN) Brought up 4 CPUs

(XEN) P2M: 44-bit IPA with 44-bit PA
(XEN) P2M: 4 levels with order-0 root, VTCR 0x80043594
(XEN) I/O virtualisation disabled
(XEN) *** LOADING DOMAIN 0 ***
(XEN) Loading kernel from boot module @ 00000000e1000000
(XEN) Allocating 1:1 mappings totalling 512MB for dom0:
(XEN) BANK[0] 0x000000a0000000-0x000000c0000000 (512MB)
(XEN) Grant table range: 0x000000fec00000-0x000000fec60000
(XEN) Loading zImage from 00000000e1000000 to 00000000a0080000-00000000a1d3e200
(XEN) Allocating PPI 16 for event channel interrupt
(XEN) Loading dom0 DTB to 0x00000000a8000000-0x00000000a8034354
(XEN) Scrubbing Free RAM on 1 nodes using 4 CPUs
(XEN) ........done.
(XEN) Initial low memory virq threshold set at 0x4000 pages.
(XEN) Std. Loglevel: Errors and warnings
(XEN) Guest Loglevel: Nothing (Rate-limited: Errors and warnings)
(XEN) *** Serial input -> DOM0 (type 'CTRL-a' three times to switch input to Xen)
(XEN) Freed 300kB init memory.

 

Any suggestions or pointers on how to move forward would be much appreciated.

Thanks,
Srini


class=3DMsoNormal><o:p>&nbsp;</o:p></p><p class=3DMsoNormal>I am trying =
to get Xen working on a Jetson Nano board (which is based on =
NVIDIA&#8217;s Tegra210 SoC). After some searching through the Xen-devel =
archives, I learnt that there was a set of patches developed in 2017 to =
port Xen to Tegra (<a =
href=3D"https://lists.xenproject.org/archives/html/xen-devel/2017-04/msg0=
0991.html">https://lists.xenproject.org/archives/html/xen-devel/2017-04/m=
sg00991.html</a>). However these patches don&#8217;t appear in the main =
source repository. Therefore, I applied these manually to Xen-4.8.5. =
With these changes, Xen now boots up successfully on the Jetson Nano, =
but there is no Dom0 output on the console. I can switch between Xen and =
Dom0 with CTRL-a-a-a.<o:p></o:p></p><p =
class=3DMsoNormal><o:p>&nbsp;</o:p></p><p class=3DMsoNormal>I am using =
Linux kernel version 5.7 for Dom0. I also tried using the native Linux =
kernel that comes with the Nano board, but that doesn&#8217;t =
help.<o:p></o:p></p><p class=3DMsoNormal><o:p>&nbsp;</o:p></p><p =
class=3DMsoNormal>Here&#8217;s the console screen =
capture:<o:p></o:p></p><p class=3DMsoNormal><o:p>&nbsp;</o:p></p><p =
class=3DMsoNormal>## Flattened Device Tree blob at =
e3000000<o:p></o:p></p><p class=3DMsoNormal>&nbsp;&nbsp; Booting using =
the fdt blob at 0xe3000000<o:p></o:p></p><p =
class=3DMsoNormal>&nbsp;&nbsp; reserving fdt memory region: =
addr=3D80000000 size=3D20000<o:p></o:p></p><p =
class=3DMsoNormal>&nbsp;&nbsp; reserving fdt memory region: =
addr=3De3000000 size=3D35000<o:p></o:p></p><p =
class=3DMsoNormal>&nbsp;&nbsp; Loading Device Tree to 00000000fc7f8000, =
end 00000000fc82ffff ... OK<o:p></o:p></p><p =
class=3DMsoNormal><o:p>&nbsp;</o:p></p><p class=3DMsoNormal>Starting =
kernel ...<o:p></o:p></p><p class=3DMsoNormal><o:p>&nbsp;</o:p></p><p =
class=3DMsoNormal>- UART enabled -<o:p></o:p></p><p class=3DMsoNormal>- =
CPU 00000000 booting -<o:p></o:p></p><p class=3DMsoNormal>- Current EL =
00000008 -<o:p></o:p></p><p class=3DMsoNormal>- Xen starting at EL2 =
-<o:p></o:p></p><p class=3DMsoNormal>- Zero BSS -<o:p></o:p></p><p =
class=3DMsoNormal>- Setting up control registers -<o:p></o:p></p><p =
class=3DMsoNormal>- Turning on paging -<o:p></o:p></p><p =
class=3DMsoNormal>- Ready -<o:p></o:p></p><p class=3DMsoNormal>(XEN) =
Checking for initrd in /chosen<o:p></o:p></p><p class=3DMsoNormal>(XEN) =
linux,initrd limits invalid: 0000000084100000 &gt;=3D =
0000000084100000<o:p></o:p></p><p class=3DMsoNormal>(XEN) RAM: =
0000000080000000 - 00000000fedfffff<o:p></o:p></p><p =
class=3DMsoNormal>(XEN) RAM: 0000000100000000 - =
000000017f1fffff<o:p></o:p></p><p =
class=3DMsoNormal>(XEN)<o:p></o:p></p><p class=3DMsoNormal>(XEN) =
MODULE[0]: 00000000fc7f8000 - 00000000fc82d000 Device =
Tree<o:p></o:p></p><p class=3DMsoNormal>(XEN) MODULE[1]: =
00000000e1000000 - 00000000e2cbe200 =
Kernel&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; console=3Dhvc0 =
earlyprintk=3Duart8250-32bit,0x70006000 rootfstype=3Dext4 rw rootwait =
root=3D/dev/mmcblk0p1<o:p></o:p></p><p class=3DMsoNormal>(XEN)&nbsp; =
RESVD[0]: 0000000080000000 - 0000000080020000<o:p></o:p></p><p =
class=3DMsoNormal>(XEN)&nbsp; RESVD[1]: 00000000e3000000 - =
00000000e3035000<o:p></o:p></p><p class=3DMsoNormal>(XEN)&nbsp; =
RESVD[2]: 00000000fc7f8000 - 00000000fc82d000<o:p></o:p></p><p =
class=3DMsoNormal>(XEN)<o:p></o:p></p><p class=3DMsoNormal>(XEN) Command =
line: console=3Ddtuart earlyprintk=3Dxen earlycon=3Dxenboot =
dom0_mem=3D512M loglevel=3Dall<o:p></o:p></p><p class=3DMsoNormal>(XEN) =
Placing Xen at 0x00000000fec00000-0x00000000fee00000<o:p></o:p></p><p =
class=3DMsoNormal>(XEN) Update BOOTMOD_XEN from =
0000000080080000-0000000080188e01 =3D&gt; =
00000000fec00000-00000000fed08e01<o:p></o:p></p><p =
class=3DMsoNormal>(XEN) Domain heap initialised<o:p></o:p></p><p =
class=3DMsoNormal>(XEN) Taking dtuart configuration from =
/chosen/stdout-path<o:p></o:p></p><p class=3DMsoNormal>(XEN) Looking for =
dtuart at &quot;/serial@70 Xen 4.8.5<o:p></o:p></p><p =
class=3DMsoNormal>(XEN) Xen version 4.8.5 (srinivas@) =
(aarch64-linux-gnu-gcc (Ubuntu/Linaro 7.5.0-3ubuntu1~18.04) 7.5.0) =
debug=3Dn&nbsp; Sun Jul 19 07:44:00 PDT 2020<o:p></o:p></p><p =
class=3DMsoNormal>(XEN) Latest ChangeSet:<o:p></o:p></p><p =
class=3DMsoNormal>(XEN) Processor: 411fd071: &quot;ARM Limited&quot;, =
variant: 0x1, part 0xd07, rev 0x1<o:p></o:p></p><p =
class=3DMsoNormal>(XEN) 64-bit Execution:<o:p></o:p></p><p =
class=3DMsoNormal>(XEN)&nbsp;&nbsp; Processor Features: 0000000000002222 =
0000000000000000<o:p></o:p></p><p =
class=3DMsoNormal>(XEN)&nbsp;&nbsp;&nbsp;&nbsp; Exception Levels: =
EL3:64+32 EL2:64+32 EL1:64+32 EL0:64+32<o:p></o:p></p><p =
class=3DMsoNormal>(XEN)&nbsp;&nbsp;&nbsp;&nbsp; Extensions: =
FloatingPoint AdvancedSIMD<o:p></o:p></p><p =
class=3DMsoNormal>(XEN)&nbsp;&nbsp; Debug Features: 0000000010305106 =
0000000000000000<o:p></o:p></p><p class=3DMsoNormal>(XEN)&nbsp;&nbsp; =
Auxiliary Features: 0000000000000000 0000000000000000<o:p></o:p></p><p =
class=3DMsoNormal>(XEN)&nbsp;&nbsp; Memory Model Features: =
0000000000001124 0000000000000000<o:p></o:p></p><p =
class=3DMsoNormal>(XEN)&nbsp;&nbsp; ISA Features:&nbsp; 0000000000011120 =
0000000000000000<o:p></o:p></p><p class=3DMsoNormal>(XEN) 32-bit =
Execution:<o:p></o:p></p><p class=3DMsoNormal>(XEN)&nbsp;&nbsp; =
Processor Features: 00000131:00011011<o:p></o:p></p><p =
class=3DMsoNormal>(XEN)&nbsp;&nbsp;&nbsp;&nbsp; Instruction Sets: =
AArch32 A32 Thumb Thumb-2 Jazelle<o:p></o:p></p><p =
class=3DMsoNormal>(XEN)&nbsp;&nbsp;&nbsp;&nbsp; Extensions: GenericTimer =
Security<o:p></o:p></p><p class=3DMsoNormal>(XEN)&nbsp;&nbsp; Debug =
Features: 03010066<o:p></o:p></p><p class=3DMsoNormal>(XEN)&nbsp;&nbsp; =
Auxiliary Features: 00000000<o:p></o:p></p><p =
class=3DMsoNormal>(XEN)&nbsp;&nbsp; Memory Model Features: 10101105 =
40000000 01260000 02102211<o:p></o:p></p><p =
class=3DMsoNormal>(XEN)&nbsp; ISA Features: 02101110 13112111 21232042 =
01112131 00011142 00011121<o:p></o:p></p><p class=3DMsoNormal>(XEN) =
Generic Timer IRQ: phys=3D30 hyp=3D26 virt=3D27 Freq: 19200 =
KHz<o:p></o:p></p><p class=3DMsoNormal>(XEN) GICv2 =
initialization:<o:p></o:p></p><p =
class=3DMsoNormal>(XEN)&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; =
gic_dist_addr=3D0000000050041000<o:p></o:p></p><p =
class=3DMsoNormal>(XEN)&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; =
gic_cpu_addr=3D0000000050042000<o:p></o:p></p><p =
class=3DMsoNormal>(XEN)&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; =
gic_hyp_addr=3D0000000050044000<o:p></o:p></p><p =
class=3DMsoNormal>(XEN)&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; =
gic_vcpu_addr=3D0000000050046000<o:p></o:p></p><p =
class=3DMsoNormal>(XEN)&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; =
gic_maintenance_irq=3D25<o:p></o:p></p><p class=3DMsoNormal>(XEN) GICv2: =
224 lines, 4 cpus, secure (IID 0200143b).<o:p></o:p></p><p =
class=3DMsoNormal>(XEN) Using scheduler: SMP Credit Scheduler =
(credit)<o:p></o:p></p><p class=3DMsoNormal>(XEN) Allocated console ring =
of 16 KiB.<o:p></o:p></p><p class=3DMsoNormal>(XEN) Bringing up =
CPU1<o:p></o:p></p><p class=3DMsoNormal>- CPU 00000001 booting =
-<o:p></o:p></p><p class=3DMsoNormal>- Current EL 00000008 =
-<o:p></o:p></p><p class=3DMsoNormal>- Xen starting at EL2 =
-<o:p></o:p></p><p class=3DMsoNormal>- Setting up control registers =
-<o:p></o:p></p><p class=3DMsoNormal>- Turning on paging =
-<o:p></o:p></p><p class=3DMsoNormal>- Ready -<o:p></o:p></p><p =
class=3DMsoNormal>(XEN) Bringing up CPU2<o:p></o:p></p><p =
class=3DMsoNormal>- CPU 00000002 booting -<o:p></o:p></p><p =
class=3DMsoNormal>- Current EL 00000008 -<o:p></o:p></p><p =
class=3DMsoNormal>- Xen starting at EL2 -<o:p></o:p></p><p =
class=3DMsoNormal>- Setting up control registers -<o:p></o:p></p><p =
class=3DMsoNormal>- Turning on paging -<o:p></o:p></p><p =
class=3DMsoNormal>- Ready -<o:p></o:p></p><p class=3DMsoNormal>(XEN) =
Bringing up CPU3<o:p></o:p></p><p class=3DMsoNormal>- CPU 00000003 =
booting -<o:p></o:p></p><p class=3DMsoNormal>- Current EL 00000008 =
-<o:p></o:p></p><p class=3DMsoNormal>- Xen starting at EL2 =
-<o:p></o:p></p><p class=3DMsoNormal>- Setting up control registers =
-<o:p></o:p></p><p class=3DMsoNormal>- Turning on paging =
-<o:p></o:p></p><p class=3DMsoNormal>- Ready -<o:p></o:p></p><p =
class=3DMsoNormal>(XEN) Brought up 4 CPUs<o:p></o:p></p><p =
class=3DMsoNormal>(XEN) P2M: 44-bit IPA with 44-bit PA<o:p></o:p></p><p =
class=3DMsoNormal>(XEN) P2M: 4 levels with order-0 root, VTCR =
0x80043594<o:p></o:p></p><p class=3DMsoNormal>(XEN) I/O virtualisation =
disabled<o:p></o:p></p><p class=3DMsoNormal>(XEN) *** LOADING DOMAIN 0 =
***<o:p></o:p></p><p class=3DMsoNormal>(XEN) Loading kernel from boot =
module @ 00000000e1000000<o:p></o:p></p><p class=3DMsoNormal>(XEN) =
Allocating 1:1 mappings totalling 512MB for dom0:<o:p></o:p></p><p =
class=3DMsoNormal>(XEN) BANK[0] 0x000000a0000000-0x000000c0000000 =
(512MB)<o:p></o:p></p><p class=3DMsoNormal>(XEN) Grant table range: =
0x000000fec00000-0x000000fec60000<o:p></o:p></p><p =
class=3DMsoNormal>(XEN) Loading zImage from 00000000e1000000 to =
00000000a0080000-00000000a1d3e200<o:p></o:p></p><p =
class=3DMsoNormal>(XEN) Allocating PPI 16 for event channel =
interrupt<o:p></o:p></p><p class=3DMsoNormal>(XEN) Loading dom0 DTB to =
0x00000000a8000000-0x00000000a8034354<o:p></o:p></p><p =
class=3DMsoNormal>(XEN) Scrubbing Free RAM on 1 nodes using 4 =
CPUs<o:p></o:p></p><p class=3DMsoNormal>(XEN) =
........done.<o:p></o:p></p><p class=3DMsoNormal>(XEN) Initial low =
memory virq threshold set at 0x4000 pages.<o:p></o:p></p><p =
class=3DMsoNormal>(XEN) Std. Loglevel: Errors and =
warnings<o:p></o:p></p><p class=3DMsoNormal>(XEN) Guest Loglevel: =
Nothing (Rate-limited: Errors and warnings)<o:p></o:p></p><p =
class=3DMsoNormal>(XEN) *** Serial input -&gt; DOM0 (type 'CTRL-a' three =
times to switch input to Xen)<o:p></o:p></p><p class=3DMsoNormal>(XEN) =
Freed 300kB init memory.<o:p></o:p></p><p =
class=3DMsoNormal><o:p>&nbsp;</o:p></p><p class=3DMsoNormal>Any =
suggestions/pointers to move forward would be much =
appreciated.<o:p></o:p></p><p class=3DMsoNormal><o:p>&nbsp;</o:p></p><p =
class=3DMsoNormal>Thanks,<o:p></o:p></p><p =
class=3DMsoNormal>Srini<o:p></o:p></p></div></body></html>
------=_NextPart_000_0029_01D66016.E4A1BC00--



From xen-devel-bounces@lists.xenproject.org Wed Jul 22 18:02:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jul 2020 18:02:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyJ4v-0003HM-S0; Wed, 22 Jul 2020 18:02:53 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=x/9+=BB=amazon.com=prvs=4651d42d9=anchalag@srs-us1.protection.inumbo.net>)
 id 1jyJ4u-0003HH-E5
 for xen-devel@lists.xenproject.org; Wed, 22 Jul 2020 18:02:52 +0000
X-Inumbo-ID: 8e0f2e42-cc45-11ea-868d-bc764e2007e4
Received: from smtp-fw-33001.amazon.com (unknown [207.171.190.10])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8e0f2e42-cc45-11ea-868d-bc764e2007e4;
 Wed, 22 Jul 2020 18:02:51 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=amazon.com; i=@amazon.com; q=dns/txt; s=amazon201209;
 t=1595440971; x=1626976971;
 h=date:from:to:cc:message-id:references:mime-version:
 content-transfer-encoding:in-reply-to:subject;
 bh=7S9G0j6WNk2lHQ1bZXQAcCjIdbziJxspgdHUPMMA6gA=;
 b=n3fKOZmRaWlxBxz8OeIFS/hlDM2E87OppNvfThGepfcaJqkjR7iQT9Dj
 dRT2IAFeAXGhT3jKEnn5cRRKjiWMcWmnPjGGG2AJ8a/nBplCMM1EvxDzO
 D/VBmBlpV94ZQ0ycDkUre0pLYhLg6dXIYwH3uZUaKisYOKOQ4IGRTHJKo I=;
IronPort-SDR: gddspTBzn+Y+TFjJTeY9TdNlPFZ60AB2yNuYk5xuO/V+CC+LP+ZLXGVyt1ka9QWY+Qo+37Am9i
 sF1re8O1G5oQ==
X-IronPort-AV: E=Sophos;i="5.75,383,1589241600"; d="scan'208";a="60802713"
Subject: Re: [PATCH v2 01/11] xen/manage: keep track of the on-going suspend
 mode
Received: from sea32-co-svc-lb4-vlan3.sea.corp.amazon.com (HELO
 email-inbound-relay-1e-c7c08562.us-east-1.amazon.com) ([10.47.23.38])
 by smtp-border-fw-out-33001.sea14.amazon.com with ESMTP;
 22 Jul 2020 18:02:46 +0000
Received: from EX13MTAUEB002.ant.amazon.com
 (iad55-ws-svc-p15-lb9-vlan3.iad.amazon.com [10.40.159.166])
 by email-inbound-relay-1e-c7c08562.us-east-1.amazon.com (Postfix) with ESMTPS
 id 51994240ABB; Wed, 22 Jul 2020 18:02:40 +0000 (UTC)
Received: from EX13D08UEB004.ant.amazon.com (10.43.60.142) by
 EX13MTAUEB002.ant.amazon.com (10.43.60.12) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Wed, 22 Jul 2020 18:02:30 +0000
Received: from EX13MTAUEA001.ant.amazon.com (10.43.61.82) by
 EX13D08UEB004.ant.amazon.com (10.43.60.142) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Wed, 22 Jul 2020 18:02:29 +0000
Received: from dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com
 (172.22.96.68) by mail-relay.amazon.com (10.43.61.243) with Microsoft SMTP
 Server id 15.0.1497.2 via Frontend Transport; Wed, 22 Jul 2020 18:02:29 +0000
Received: by dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com (Postfix,
 from userid 4335130)
 id 8E7C44CA97; Wed, 22 Jul 2020 18:02:29 +0000 (UTC)
Date: Wed, 22 Jul 2020 18:02:29 +0000
From: Anchal Agarwal <anchalag@amazon.com>
To: Stefano Stabellini <sstabellini@kernel.org>
Message-ID: <20200722180229.GA32316@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
References: <cover.1593665947.git.anchalag@amazon.com>
 <20200702182136.GA3511@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
 <50298859-0d0e-6eb0-029b-30df2a4ecd63@oracle.com>
 <20200715204943.GB17938@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
 <0ca3c501-e69a-d2c9-a24c-f83afd4bdb8c@oracle.com>
 <20200717191009.GA3387@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
 <5464f384-d4b4-73f0-d39e-60ba9800d804@oracle.com>
 <20200721000348.GA19610@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
 <408d3ce9-2510-2950-d28d-fdfe8ee41a54@oracle.com>
 <alpine.DEB.2.21.2007211640500.17562@sstabellini-ThinkPad-T480s>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <alpine.DEB.2.21.2007211640500.17562@sstabellini-ThinkPad-T480s>
User-Agent: Mutt/1.5.21 (2010-09-15)
Precedence: Bulk
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: eduval@amazon.com, len.brown@intel.com, peterz@infradead.org,
 benh@kernel.crashing.org, x86@kernel.org, linux-mm@kvack.org, pavel@ucw.cz,
 hpa@zytor.com, tglx@linutronix.de, kamatam@amazon.com, mingo@redhat.com,
 xen-devel@lists.xenproject.org, sblbir@amazon.com, axboe@kernel.dk,
 konrad.wilk@oracle.com, bp@alien8.de,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>, jgross@suse.com,
 netdev@vger.kernel.org, linux-pm@vger.kernel.org, rjw@rjwysocki.net,
 linux-kernel@vger.kernel.org, vkuznets@redhat.com, davem@davemloft.net,
 dwmw@amazon.co.uk, roger.pau@citrix.com
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Tue, Jul 21, 2020 at 05:18:34PM -0700, Stefano Stabellini wrote:
> 
> On Tue, 21 Jul 2020, Boris Ostrovsky wrote:
> > >>>>>> +static int xen_setup_pm_notifier(void)
> > >>>>>> +{
> > >>>>>> +     if (!xen_hvm_domain())
> > >>>>>> +             return -ENODEV;
> > >>>>>>
> > >>>>>> I forgot --- what did we decide about non-x86 (i.e. ARM)?
> > >>>>> It would be great to support that; however, it's out of
> > >>>>> scope for this patch set.
> > >>>>> I’ll be happy to discuss it separately.
> > >>>>
> > >>>> I wasn't implying that this *should* work on ARM but rather whether this
> > >>>> will break ARM somehow (because xen_hvm_domain() is true there).
> > >>>>
> > >>>>
> > >>> Ok, makes sense. TBH, I haven't tested this part of the code on ARM, and
> > >>> the series only supports x86 guest hibernation.
> > >>> Moreover, this notifier is there to distinguish between 2 PM
> > >>> events PM SUSPEND and PM hibernation. Now since we only care about PM
> > >>> HIBERNATION I may just remove this code and rely on "SHUTDOWN_SUSPEND" state.
> > >>> However, I may have to fix other patches in the series where this check
> > >>> appears and restrict it to x86 only, right?
> > >>
> > >>
> > >> I don't know what would happen if ARM guest tries to handle hibernation
> > >> callbacks. The only ones that you are introducing are in block and net
> > >> fronts and that's arch-independent.
> > >>
> > >>
> > >> You do add a bunch of x86-specific code though (syscore ops), would
> > >> something similar be needed for ARM?
> > >>
> > >>
> > > I don't expect this to work out of the box on ARM. To start with something
> > > similar will be needed for ARM too.
> > > We may still want to keep the driver code as-is.
> > >
> > > I understand the concern here wrt ARM, however, currently the support is only
> > > proposed for x86 guests here and similar work could be carried out for ARM.
> > > Also, if regular hibernation works correctly on arm, then all is needed is to
> > > fix Xen side of things.
> > >
> > > I am not sure what could be done to achieve any assurances on arm side as far as
> > > this series is concerned.
> 
> Just to clarify: new features don't need to work on ARM or require any
> additional effort from you to make them work on ARM. The patch series only
> needs not to break existing code paths (on ARM and any other platforms).
> It should also not make it overly difficult to implement the ARM side of
> things (if there is one) at some point in the future.
> 
> FYI drivers/xen/manage.c is compiled and working on ARM today, however
> Xen suspend/resume is not supported. I don't know for sure if
> guest-initiated hibernation works because I have not tested it.
> 
> 
> 
> > If you are not sure what the effects are (or sure that it won't work) on
> > ARM then I'd add IS_ENABLED(CONFIG_X86) check, i.e.
> >
> >
> > if (!IS_ENABLED(CONFIG_X86) || !xen_hvm_domain())
> >       return -ENODEV;
> 
> That is a good principle to have and thanks for suggesting it. However,
> in this specific case there is nothing in this patch that doesn't work
> on ARM. From an ARM perspective I think we should enable it and
> &xen_pm_notifier_block should be registered.
> 
This question is for Boris: I think we decided to get rid of the notifier
in V3, as all we need to check is the SHUTDOWN_SUSPEND state, which sounds
plausible to me. So this check may go away. It may still be needed for the
syscore_ops callback registration.
> Given that all guests are HVM guests on ARM, it should work fine as is.
> 
> 
> I gave a quick look at the rest of the series and everything looks fine
> to me from an ARM perspective. I cannot imagine that the new freeze,
> thaw, and restore callbacks for net and block are going to cause any
> trouble on ARM. The two main x86-specific functions are
> xen_syscore_suspend/resume and they look trivial to implement on ARM (in
> the sense that they are likely going to look exactly the same.)
> 
Yes, but for now, since this is untested on ARM, I will put the
!IS_ENABLED(CONFIG_X86) check on the syscore_ops registration path, just to be
safe and not break anything.
> 
> One question for Anchal: what's going to happen if you trigger a
> hibernation, you have the new callbacks, but you are missing
> xen_syscore_suspend/resume?
> 
> Is it any worse than not having the new freeze, thaw and restore
> callbacks at all and try to do a hibernation?
If those callbacks are not there, I don't expect hibernation to work correctly.
They take care of Xen primitives like the shared_info page, grant table,
sched clock, and runstate time, which must be saved to capture the correct
state of the guest and bring it back up. Other patches in the series add all
the logic to these syscore callbacks. Freeze/thaw/restore are just there at the
driver level.

Thanks,
Anchal


From xen-devel-bounces@lists.xenproject.org Wed Jul 22 20:25:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jul 2020 20:25:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyLJ7-0006tq-JP; Wed, 22 Jul 2020 20:25:41 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=q/Qh=BB=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jyLJ5-0006tN-PX
 for xen-devel@lists.xenproject.org; Wed, 22 Jul 2020 20:25:39 +0000
X-Inumbo-ID: 7d7bff18-cc59-11ea-a212-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7d7bff18-cc59-11ea-a212-12813bfff9fa;
 Wed, 22 Jul 2020 20:25:31 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=Ihn2vSdLSXHd02Y9M4dW55OPEV2C4RFjiBd+Mf1w9VA=; b=G0XfLtGu87uWYTbhjqzRqxaES
 Zr/v7FYSgG9AJqT9K1EEx8Z/hItejGzb9+rCHUSThpOzMoqJyeP9WMVXdcDXlsiC2ls4+mOEcn2b8
 iBqIAXX0MRawMcm/debBlyPprqBMyHlnaQVLBhO618UhxgQKVYZ9vw10O6tH9rgdxN/eI=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jyLIx-0008Mg-F7; Wed, 22 Jul 2020 20:25:31 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jyLIx-0001Z7-2M; Wed, 22 Jul 2020 20:25:31 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jyLIx-0002Ee-1f; Wed, 22 Jul 2020 20:25:31 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-152123-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.14-testing test] 152123: trouble: preparing/queued
X-Osstest-Failures: xen-4.14-testing:build-i386-libvirt:<none
 executed>:queued:regression
 xen-4.14-testing:test-armhf-armhf-xl-vhd:<none executed>:queued:regression
 xen-4.14-testing:test-amd64-i386-libvirt-xsm:<none executed>:queued:regression
 xen-4.14-testing:test-amd64-i386-xl-qemuu-ws16-amd64:<none
 executed>:queued:regression
 xen-4.14-testing:test-amd64-amd64-xl:<none executed>:queued:regression
 xen-4.14-testing:test-amd64-amd64-xl-qemuu-win7-amd64:<none
 executed>:queued:regression
 xen-4.14-testing:test-amd64-i386-xl-qemut-debianhvm-amd64:<none
 executed>:queued:regression
 xen-4.14-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:<none
 executed>:queued:regression
 xen-4.14-testing:test-armhf-armhf-libvirt-raw:<none
 executed>:queued:regression
 xen-4.14-testing:test-arm64-arm64-xl-credit2:<none executed>:queued:regression
 xen-4.14-testing:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:<none
 executed>:queued:regression
 xen-4.14-testing:test-xtf-amd64-amd64-2:<none executed>:queued:regression
 xen-4.14-testing:test-amd64-amd64-xl-qemut-debianhvm-amd64:<none
 executed>:queued:regression
 xen-4.14-testing:test-amd64-i386-xl-shadow:<none executed>:queued:regression
 xen-4.14-testing:test-armhf-armhf-xl-rtds:<none executed>:queued:regression
 xen-4.14-testing:test-amd64-amd64-xl-rtds:<none executed>:queued:regression
 xen-4.14-testing:test-amd64-amd64-dom0pvh-xl-amd:<none
 executed>:queued:regression
 xen-4.14-testing:test-amd64-i386-qemuu-rhel6hvm-intel:<none
 executed>:queued:regression
 xen-4.14-testing:test-arm64-arm64-xl-xsm:<none executed>:queued:regression
 xen-4.14-testing:test-amd64-amd64-dom0pvh-xl-intel:<none
 executed>:queued:regression
 xen-4.14-testing:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:<none
 executed>:queued:regression
 xen-4.14-testing:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:<none
 executed>:queued:regression
 xen-4.14-testing:test-amd64-coresched-i386-xl:<none
 executed>:queued:regression
 xen-4.14-testing:test-amd64-amd64-qemuu-nested-amd:<none
 executed>:queued:regression
 xen-4.14-testing:test-amd64-i386-xl-xsm:<none executed>:queued:regression
 xen-4.14-testing:test-amd64-amd64-xl-qemuu-ovmf-amd64:<none
 executed>:queued:regression
 xen-4.14-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:<none
 executed>:queued:regression
 xen-4.14-testing:test-amd64-i386-freebsd10-amd64:<none
 executed>:queued:regression
 xen-4.14-testing:test-amd64-i386-qemut-rhel6hvm-amd:<none
 executed>:queued:regression
 xen-4.14-testing:build-arm64-libvirt:<none executed>:queued:regression
 xen-4.14-testing:test-amd64-i386-xl:<none executed>:queued:regression
 xen-4.14-testing:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:<none
 executed>:queued:regression
 xen-4.14-testing:test-amd64-i386-freebsd10-i386:<none
 executed>:queued:regression
 xen-4.14-testing:test-arm64-arm64-xl-thunderx:<none
 executed>:queued:regression
 xen-4.14-testing:test-amd64-amd64-amd64-pvgrub:<none
 executed>:queued:regression
 xen-4.14-testing:test-amd64-amd64-pair:<none executed>:queued:regression
 xen-4.14-testing:test-amd64-amd64-xl-credit2:<none executed>:queued:regression
 xen-4.14-testing:test-amd64-amd64-livepatch:<none executed>:queued:regression
 xen-4.14-testing:test-amd64-i386-migrupgrade:<none executed>:queued:regression
 xen-4.14-testing:test-armhf-armhf-xl-arndale:<none executed>:queued:regression
 xen-4.14-testing:test-arm64-arm64-xl-credit1:<none executed>:queued:regression
 xen-4.14-testing:test-amd64-amd64-xl-multivcpu:<none
 executed>:queued:regression
 xen-4.14-testing:test-amd64-i386-xl-raw:<none executed>:queued:regression
 xen-4.14-testing:test-amd64-i386-pair:<none executed>:queued:regression
 xen-4.14-testing:build-armhf-libvirt:<none executed>:queued:regression
 xen-4.14-testing:test-amd64-coresched-amd64-xl:<none
 executed>:queued:regression
 xen-4.14-testing:test-armhf-armhf-libvirt:<none executed>:queued:regression
 xen-4.14-testing:test-amd64-amd64-xl-qcow2:<none executed>:queued:regression
 xen-4.14-testing:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:<none
 executed>:queued:regression
 xen-4.14-testing:test-arm64-arm64-libvirt-xsm:<none
 executed>:queued:regression
 xen-4.14-testing:test-amd64-amd64-libvirt-pair:<none
 executed>:queued:regression
 xen-4.14-testing:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:<none
 executed>:queued:regression
 xen-4.14-testing:test-amd64-amd64-xl-pvshim:<none executed>:queued:regression
 xen-4.14-testing:test-amd64-i386-libvirt-pair:<none
 executed>:queued:regression
 xen-4.14-testing:test-amd64-i386-xl-qemuu-debianhvm-amd64:<none
 executed>:queued:regression
 xen-4.14-testing:test-amd64-amd64-libvirt-vhd:<none
 executed>:queued:regression
 xen-4.14-testing:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:<none
 executed>:queued:regression
 xen-4.14-testing:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:<none
 executed>:queued:regression
 xen-4.14-testing:build-amd64-libvirt:<none executed>:queued:regression
 xen-4.14-testing:test-amd64-i386-xl-qemut-win7-amd64:<none
 executed>:queued:regression
 xen-4.14-testing:test-amd64-i386-xl-pvshim:<none executed>:queued:regression
 xen-4.14-testing:test-amd64-i386-qemut-rhel6hvm-intel:<none
 executed>:queued:regression
 xen-4.14-testing:test-xtf-amd64-amd64-5:<none executed>:queued:regression
 xen-4.14-testing:test-armhf-armhf-xl-multivcpu:<none
 executed>:queued:regression
 xen-4.14-testing:test-amd64-amd64-i386-pvgrub:<none
 executed>:queued:regression
 xen-4.14-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:<none
 executed>:queued:regression
 xen-4.14-testing:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:<none
 executed>:queued:regression
 xen-4.14-testing:test-armhf-armhf-xl:<none executed>:queued:regression
 xen-4.14-testing:test-amd64-amd64-qemuu-nested-intel:<none
 executed>:queued:regression
 xen-4.14-testing:test-amd64-i386-xl-qemut-ws16-amd64:<none
 executed>:queued:regression
 xen-4.14-testing:test-arm64-arm64-xl-seattle:<none executed>:queued:regression
 xen-4.14-testing:test-amd64-amd64-xl-credit1:<none executed>:queued:regression
 xen-4.14-testing:test-xtf-amd64-amd64-3:<none executed>:queued:regression
 xen-4.14-testing:test-amd64-amd64-xl-xsm:<none executed>:queued:regression
 xen-4.14-testing:test-amd64-amd64-xl-qemut-ws16-amd64:<none
 executed>:queued:regression
 xen-4.14-testing:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:<none
 executed>:queued:regression
 xen-4.14-testing:test-amd64-amd64-libvirt:<none executed>:queued:regression
 xen-4.14-testing:test-amd64-amd64-libvirt-xsm:<none
 executed>:queued:regression
 xen-4.14-testing:test-amd64-amd64-qemuu-freebsd11-amd64:<none
 executed>:queued:regression
 xen-4.14-testing:test-xtf-amd64-amd64-4:<none executed>:queued:regression
 xen-4.14-testing:test-amd64-amd64-pygrub:<none executed>:queued:regression
 xen-4.14-testing:test-amd64-amd64-xl-qemuu-debianhvm-amd64:<none
 executed>:queued:regression
 xen-4.14-testing:test-arm64-arm64-xl:<none executed>:queued:regression
 xen-4.14-testing:test-amd64-i386-qemuu-rhel6hvm-amd:<none
 executed>:queued:regression
 xen-4.14-testing:test-armhf-armhf-xl-cubietruck:<none
 executed>:queued:regression
 xen-4.14-testing:test-xtf-amd64-amd64-1:<none executed>:queued:regression
 xen-4.14-testing:test-armhf-armhf-xl-credit1:<none executed>:queued:regression
 xen-4.14-testing:test-armhf-armhf-xl-credit2:<none executed>:queued:regression
 xen-4.14-testing:test-amd64-i386-libvirt:<none executed>:queued:regression
 xen-4.14-testing:test-amd64-amd64-qemuu-freebsd12-amd64:<none
 executed>:queued:regression
 xen-4.14-testing:test-amd64-amd64-migrupgrade:<none
 executed>:queued:regression
 xen-4.14-testing:test-amd64-i386-livepatch:<none executed>:queued:regression
 xen-4.14-testing:test-amd64-amd64-xl-qemut-win7-amd64:<none
 executed>:queued:regression
 xen-4.14-testing:test-amd64-i386-xl-qemuu-ovmf-amd64:<none
 executed>:queued:regression
 xen-4.14-testing:test-amd64-amd64-xl-shadow:<none executed>:queued:regression
 xen-4.14-testing:test-amd64-i386-xl-qemuu-win7-amd64:<none
 executed>:queued:regression
 xen-4.14-testing:test-amd64-amd64-xl-pvhv2-amd:<none
 executed>:queued:regression
 xen-4.14-testing:test-amd64-amd64-xl-pvhv2-intel:<none
 executed>:queued:regression
 xen-4.14-testing:build-amd64-xtf:hosts-allocate:running:regression
 xen-4.14-testing:build-amd64-xsm:hosts-allocate:running:regression
 xen-4.14-testing:build-i386-pvops:hosts-allocate:running:regression
 xen-4.14-testing:build-i386-prev:hosts-allocate:running:regression
 xen-4.14-testing:build-amd64-pvops:hosts-allocate:running:regression
 xen-4.14-testing:build-arm64:hosts-allocate:running:regression
 xen-4.14-testing:build-i386:hosts-allocate:running:regression
 xen-4.14-testing:build-arm64-xsm:hosts-allocate:running:regression
 xen-4.14-testing:build-amd64:hosts-allocate:running:regression
 xen-4.14-testing:build-i386-xsm:hosts-allocate:running:regression
 xen-4.14-testing:build-armhf:hosts-allocate:running:regression
 xen-4.14-testing:build-armhf-pvops:hosts-allocate:running:regression
 xen-4.14-testing:build-arm64-pvops:hosts-allocate:running:regression
 xen-4.14-testing:build-amd64-prev:hosts-allocate:running:regression
X-Osstest-Versions-This: xen=26984f2f432bb880f2bb4954e1248c9c2d1bbd54
X-Osstest-Versions-That: xen=827031adfeb3c2656baa2156d3e7caaea8aec739
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 22 Jul 2020 20:25:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 152123 xen-4.14-testing running [real]
http://logs.test-lab.xenproject.org/osstest/logs/152123/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-libvirt              <none executed>              queued
 test-armhf-armhf-xl-vhd         <none executed>              queued
 test-amd64-i386-libvirt-xsm     <none executed>              queued
 test-amd64-i386-xl-qemuu-ws16-amd64    <none executed>              queued
 test-amd64-amd64-xl             <none executed>              queued
 test-amd64-amd64-xl-qemuu-win7-amd64    <none executed>              queued
 test-amd64-i386-xl-qemut-debianhvm-amd64    <none executed>             queued
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm    <none executed>   queued
 test-armhf-armhf-libvirt-raw    <none executed>              queued
 test-arm64-arm64-xl-credit2     <none executed>              queued
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm    <none executed>          queued
 test-xtf-amd64-amd64-2          <none executed>              queued
 test-amd64-amd64-xl-qemut-debianhvm-amd64    <none executed>            queued
 test-amd64-i386-xl-shadow       <none executed>              queued
 test-armhf-armhf-xl-rtds        <none executed>              queued
 test-amd64-amd64-xl-rtds        <none executed>              queued
 test-amd64-amd64-dom0pvh-xl-amd    <none executed>              queued
 test-amd64-i386-qemuu-rhel6hvm-intel    <none executed>              queued
 test-arm64-arm64-xl-xsm         <none executed>              queued
 test-amd64-amd64-dom0pvh-xl-intel    <none executed>              queued
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm    <none executed> queued
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow    <none executed>      queued
 test-amd64-coresched-i386-xl    <none executed>              queued
 test-amd64-amd64-qemuu-nested-amd    <none executed>              queued
 test-amd64-i386-xl-xsm          <none executed>              queued
 test-amd64-amd64-xl-qemuu-ovmf-amd64    <none executed>              queued
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm    <none executed>    queued
 test-amd64-i386-freebsd10-amd64    <none executed>              queued
 test-amd64-i386-qemut-rhel6hvm-amd    <none executed>              queued
 build-arm64-libvirt             <none executed>              queued
 test-amd64-i386-xl              <none executed>              queued
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm    <none executed>          queued
 test-amd64-i386-freebsd10-i386    <none executed>              queued
 test-arm64-arm64-xl-thunderx    <none executed>              queued
 test-amd64-amd64-amd64-pvgrub    <none executed>              queued
 test-amd64-amd64-pair           <none executed>              queued
 test-amd64-amd64-xl-credit2     <none executed>              queued
 test-amd64-amd64-livepatch      <none executed>              queued
 test-amd64-i386-migrupgrade     <none executed>              queued
 test-armhf-armhf-xl-arndale     <none executed>              queued
 test-arm64-arm64-xl-credit1     <none executed>              queued
 test-amd64-amd64-xl-multivcpu    <none executed>              queued
 test-amd64-i386-xl-raw          <none executed>              queued
 test-amd64-i386-pair            <none executed>              queued
 build-armhf-libvirt             <none executed>              queued
 test-amd64-coresched-amd64-xl    <none executed>              queued
 test-armhf-armhf-libvirt        <none executed>              queued
 test-amd64-amd64-xl-qcow2       <none executed>              queued
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow    <none executed>     queued
 test-arm64-arm64-libvirt-xsm    <none executed>              queued
 test-amd64-amd64-libvirt-pair    <none executed>              queued
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict   <none executed> queued
 test-amd64-amd64-xl-pvshim      <none executed>              queued
 test-amd64-i386-libvirt-pair    <none executed>              queued
 test-amd64-i386-xl-qemuu-debianhvm-amd64    <none executed>             queued
 test-amd64-amd64-libvirt-vhd    <none executed>              queued
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm   <none executed> queued
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm    <none executed>         queued
 build-amd64-libvirt             <none executed>              queued
 test-amd64-i386-xl-qemut-win7-amd64    <none executed>              queued
 test-amd64-i386-xl-pvshim       <none executed>              queued
 test-amd64-i386-qemut-rhel6hvm-intel    <none executed>              queued
 test-xtf-amd64-amd64-5          <none executed>              queued
 test-armhf-armhf-xl-multivcpu    <none executed>              queued
 test-amd64-amd64-i386-pvgrub    <none executed>              queued
 test-amd64-amd64-xl-qemuu-ws16-amd64    <none executed>              queued
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict    <none executed> queued
 test-armhf-armhf-xl             <none executed>              queued
 test-amd64-amd64-qemuu-nested-intel    <none executed>              queued
 test-amd64-i386-xl-qemut-ws16-amd64    <none executed>              queued
 test-arm64-arm64-xl-seattle     <none executed>              queued
 test-amd64-amd64-xl-credit1     <none executed>              queued
 test-xtf-amd64-amd64-3          <none executed>              queued
 test-amd64-amd64-xl-xsm         <none executed>              queued
 test-amd64-amd64-xl-qemut-ws16-amd64    <none executed>              queued
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm    <none executed>         queued
 test-amd64-amd64-libvirt        <none executed>              queued
 test-amd64-amd64-libvirt-xsm    <none executed>              queued
 test-amd64-amd64-qemuu-freebsd11-amd64    <none executed>              queued
 test-xtf-amd64-amd64-4          <none executed>              queued
 test-amd64-amd64-pygrub         <none executed>              queued
 test-amd64-amd64-xl-qemuu-debianhvm-amd64    <none executed>            queued
 test-arm64-arm64-xl             <none executed>              queued
 test-amd64-i386-qemuu-rhel6hvm-amd    <none executed>              queued
 test-armhf-armhf-xl-cubietruck    <none executed>              queued
 test-xtf-amd64-amd64-1          <none executed>              queued
 test-armhf-armhf-xl-credit1     <none executed>              queued
 test-armhf-armhf-xl-credit2     <none executed>              queued
 test-amd64-i386-libvirt         <none executed>              queued
 test-amd64-amd64-qemuu-freebsd12-amd64    <none executed>              queued
 test-amd64-amd64-migrupgrade    <none executed>              queued
 test-amd64-i386-livepatch       <none executed>              queued
 test-amd64-amd64-xl-qemut-win7-amd64    <none executed>              queued
 test-amd64-i386-xl-qemuu-ovmf-amd64    <none executed>              queued
 test-amd64-amd64-xl-shadow      <none executed>              queued
 test-amd64-i386-xl-qemuu-win7-amd64    <none executed>              queued
 test-amd64-amd64-xl-pvhv2-amd    <none executed>              queued
 test-amd64-amd64-xl-pvhv2-intel    <none executed>              queued
 build-amd64-xtf               2 hosts-allocate               running
 build-amd64-xsm               2 hosts-allocate               running
 build-i386-pvops              2 hosts-allocate               running
 build-i386-prev               2 hosts-allocate               running
 build-amd64-pvops             2 hosts-allocate               running
 build-arm64                   2 hosts-allocate               running
 build-i386                    2 hosts-allocate               running
 build-arm64-xsm               2 hosts-allocate               running
 build-amd64                   2 hosts-allocate               running
 build-i386-xsm                2 hosts-allocate               running
 build-armhf                   2 hosts-allocate               running
 build-armhf-pvops             2 hosts-allocate               running
 build-arm64-pvops             2 hosts-allocate               running
 build-amd64-prev              2 hosts-allocate               running

version targeted for testing:
 xen                  26984f2f432bb880f2bb4954e1248c9c2d1bbd54
baseline version:
 xen                  827031adfeb3c2656baa2156d3e7caaea8aec739

Last test of basis   152081  2020-07-21 16:52:47 Z    1 days
Testing same since                          (not found)         0 attempts

------------------------------------------------------------
People who touched revisions under test:
  Julien Grall <jgrall@amazon.com>
  Paul Durrant <pdurrant@amazon.com>

jobs:
 build-amd64-xsm                                              preparing
 build-arm64-xsm                                              preparing
 build-i386-xsm                                               preparing
 build-amd64-xtf                                              preparing
 build-amd64                                                  preparing
 build-arm64                                                  preparing
 build-armhf                                                  preparing
 build-i386                                                   preparing
 build-amd64-libvirt                                          queued  
 build-arm64-libvirt                                          queued  
 build-armhf-libvirt                                          queued  
 build-i386-libvirt                                           queued  
 build-amd64-prev                                             preparing
 build-i386-prev                                              preparing
 build-amd64-pvops                                            preparing
 build-arm64-pvops                                            preparing
 build-armhf-pvops                                            preparing
 build-i386-pvops                                             preparing
 test-xtf-amd64-amd64-1                                       queued  
 test-xtf-amd64-amd64-2                                       queued  
 test-xtf-amd64-amd64-3                                       queued  
 test-xtf-amd64-amd64-4                                       queued  
 test-xtf-amd64-amd64-5                                       queued  
 test-amd64-amd64-xl                                          queued  
 test-amd64-coresched-amd64-xl                                queued  
 test-arm64-arm64-xl                                          queued  
 test-armhf-armhf-xl                                          queued  
 test-amd64-i386-xl                                           queued  
 test-amd64-coresched-i386-xl                                 queued  
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           queued  
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            queued  
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        queued  
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         queued  
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 queued  
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  queued  
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 queued  
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  queued  
 test-amd64-amd64-libvirt-xsm                                 queued  
 test-arm64-arm64-libvirt-xsm                                 queued  
 test-amd64-i386-libvirt-xsm                                  queued  
 test-amd64-amd64-xl-xsm                                      queued  
 test-arm64-arm64-xl-xsm                                      queued  
 test-amd64-i386-xl-xsm                                       queued  
 test-amd64-amd64-qemuu-nested-amd                            queued  
 test-amd64-amd64-xl-pvhv2-amd                                queued  
 test-amd64-i386-qemut-rhel6hvm-amd                           queued  
 test-amd64-i386-qemuu-rhel6hvm-amd                           queued  
 test-amd64-amd64-dom0pvh-xl-amd                              queued  
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    queued  
 test-amd64-i386-xl-qemut-debianhvm-amd64                     queued  
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    queued  
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     queued  
 test-amd64-i386-freebsd10-amd64                              queued  
 test-amd64-amd64-qemuu-freebsd11-amd64                       queued  
 test-amd64-amd64-qemuu-freebsd12-amd64                       queued  
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         queued  
 test-amd64-i386-xl-qemuu-ovmf-amd64                          queued  
 test-amd64-amd64-xl-qemut-win7-amd64                         queued  
 test-amd64-i386-xl-qemut-win7-amd64                          queued  
 test-amd64-amd64-xl-qemuu-win7-amd64                         queued  
 test-amd64-i386-xl-qemuu-win7-amd64                          queued  
 test-amd64-amd64-xl-qemut-ws16-amd64                         queued  
 test-amd64-i386-xl-qemut-ws16-amd64                          queued  
 test-amd64-amd64-xl-qemuu-ws16-amd64                         queued  
 test-amd64-i386-xl-qemuu-ws16-amd64                          queued  
 test-armhf-armhf-xl-arndale                                  queued  
 test-amd64-amd64-xl-credit1                                  queued  
 test-arm64-arm64-xl-credit1                                  queued  
 test-armhf-armhf-xl-credit1                                  queued  
 test-amd64-amd64-xl-credit2                                  queued  
 test-arm64-arm64-xl-credit2                                  queued  
 test-armhf-armhf-xl-credit2                                  queued  
 test-armhf-armhf-xl-cubietruck                               queued  
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        queued  
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         queued  
 test-amd64-i386-freebsd10-i386                               queued  
 test-amd64-amd64-qemuu-nested-intel                          queued  
 test-amd64-amd64-xl-pvhv2-intel                              queued  
 test-amd64-i386-qemut-rhel6hvm-intel                         queued  
 test-amd64-i386-qemuu-rhel6hvm-intel                         queued  
 test-amd64-amd64-dom0pvh-xl-intel                            queued  
 test-amd64-amd64-libvirt                                     queued  
 test-armhf-armhf-libvirt                                     queued  
 test-amd64-i386-libvirt                                      queued  
 test-amd64-amd64-livepatch                                   queued  
 test-amd64-i386-livepatch                                    queued  
 test-amd64-amd64-migrupgrade                                 queued  
 test-amd64-i386-migrupgrade                                  queued  
 test-amd64-amd64-xl-multivcpu                                queued  
 test-armhf-armhf-xl-multivcpu                                queued  
 test-amd64-amd64-pair                                        queued  
 test-amd64-i386-pair                                         queued  
 test-amd64-amd64-libvirt-pair                                queued  
 test-amd64-i386-libvirt-pair                                 queued  
 test-amd64-amd64-amd64-pvgrub                                queued  
 test-amd64-amd64-i386-pvgrub                                 queued  
 test-amd64-amd64-xl-pvshim                                   queued  
 test-amd64-i386-xl-pvshim                                    queued  
 test-amd64-amd64-pygrub                                      queued  
 test-amd64-amd64-xl-qcow2                                    queued  
 test-armhf-armhf-libvirt-raw                                 queued  
 test-amd64-i386-xl-raw                                       queued  
 test-amd64-amd64-xl-rtds                                     queued  
 test-armhf-armhf-xl-rtds                                     queued  
 test-arm64-arm64-xl-seattle                                  queued  
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             queued  
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              queued  
 test-amd64-amd64-xl-shadow                                   queued  
 test-amd64-i386-xl-shadow                                    queued  
 test-arm64-arm64-xl-thunderx                                 queued  
 test-amd64-amd64-libvirt-vhd                                 queued  
 test-armhf-armhf-xl-vhd                                      queued  


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-i386-libvirt queued
broken-job test-armhf-armhf-xl-vhd queued
broken-job test-amd64-i386-libvirt-xsm queued
broken-job test-amd64-i386-xl-qemuu-ws16-amd64 queued
broken-job test-amd64-amd64-xl queued
broken-job test-amd64-amd64-xl-qemuu-win7-amd64 queued
broken-job test-amd64-i386-xl-qemut-debianhvm-amd64 queued
broken-job test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm queued
broken-job test-armhf-armhf-libvirt-raw queued
broken-job test-arm64-arm64-xl-credit2 queued
broken-job test-amd64-i386-xl-qemuu-debianhvm-i386-xsm queued
broken-job test-xtf-amd64-amd64-2 queued
broken-job test-amd64-amd64-xl-qemut-debianhvm-amd64 queued
broken-job test-amd64-i386-xl-shadow queued
broken-job test-armhf-armhf-xl-rtds queued
broken-job test-amd64-amd64-xl-rtds queued
broken-job test-amd64-amd64-dom0pvh-xl-amd queued
broken-job test-amd64-i386-qemuu-rhel6hvm-intel queued
broken-job test-arm64-arm64-xl-xsm queued
broken-job test-amd64-amd64-dom0pvh-xl-intel queued
broken-job test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm queued
broken-job test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow queued
broken-job test-amd64-coresched-i386-xl queued
broken-job test-amd64-amd64-qemuu-nested-amd queued
broken-job test-amd64-i386-xl-xsm queued
broken-job test-amd64-amd64-xl-qemuu-ovmf-amd64 queued
broken-job test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm queued
broken-job test-amd64-i386-freebsd10-amd64 queued
broken-job test-amd64-i386-qemut-rhel6hvm-amd queued
broken-job build-arm64-libvirt queued
broken-job test-amd64-i386-xl queued
broken-job test-amd64-i386-xl-qemut-debianhvm-i386-xsm queued
broken-job test-amd64-i386-freebsd10-i386 queued
broken-job test-arm64-arm64-xl-thunderx queued
broken-job test-amd64-amd64-amd64-pvgrub queued
broken-job test-amd64-amd64-pair queued
broken-job test-amd64-amd64-xl-credit2 queued
broken-job test-amd64-amd64-livepatch queued
broken-job test-amd64-i386-migrupgrade queued
broken-job test-armhf-armhf-xl-arndale queued
broken-job test-arm64-arm64-xl-credit1 queued
broken-job test-amd64-amd64-xl-multivcpu queued
broken-job test-amd64-i386-xl-raw queued
broken-job test-amd64-i386-pair queued
broken-job build-armhf-libvirt queued
broken-job test-amd64-coresched-amd64-xl queued
broken-job test-armhf-armhf-libvirt queued
broken-job test-amd64-amd64-xl-qcow2 queued
broken-job test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow queued
broken-job test-arm64-arm64-libvirt-xsm queued
broken-job test-amd64-amd64-libvirt-pair queued
broken-job test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict queued
broken-job test-amd64-amd64-xl-pvshim queued
broken-job test-amd64-i386-libvirt-pair queued
broken-job test-amd64-i386-xl-qemuu-debianhvm-amd64 queued
broken-job test-amd64-amd64-libvirt-vhd queued
broken-job test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm queued
broken-job test-amd64-amd64-xl-qemut-debianhvm-i386-xsm queued
broken-job build-amd64-libvirt queued
broken-job test-amd64-i386-xl-qemut-win7-amd64 queued
broken-job test-amd64-i386-xl-pvshim queued
broken-job test-amd64-i386-qemut-rhel6hvm-intel queued
broken-job test-xtf-amd64-amd64-5 queued
broken-job test-armhf-armhf-xl-multivcpu queued
broken-job test-amd64-amd64-i386-pvgrub queued
broken-job test-amd64-amd64-xl-qemuu-ws16-amd64 queued
broken-job test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict queued
broken-job test-armhf-armhf-xl queued
broken-job test-amd64-amd64-qemuu-nested-intel queued
broken-job test-amd64-i386-xl-qemut-ws16-amd64 queued
broken-job test-arm64-arm64-xl-seattle queued
broken-job test-amd64-amd64-xl-credit1 queued
broken-job test-xtf-amd64-amd64-3 queued
broken-job test-amd64-amd64-xl-xsm queued
broken-job test-amd64-amd64-xl-qemut-ws16-amd64 queued
broken-job test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm queued
broken-job test-amd64-amd64-libvirt queued
broken-job test-amd64-amd64-libvirt-xsm queued
broken-job test-amd64-amd64-qemuu-freebsd11-amd64 queued
broken-job test-xtf-amd64-amd64-4 queued
broken-job test-amd64-amd64-pygrub queued
broken-job test-amd64-amd64-xl-qemuu-debianhvm-amd64 queued
broken-job test-arm64-arm64-xl queued
broken-job test-amd64-i386-qemuu-rhel6hvm-amd queued
broken-job test-armhf-armhf-xl-cubietruck queued
broken-job test-xtf-amd64-amd64-1 queued
broken-job test-armhf-armhf-xl-credit1 queued
broken-job test-armhf-armhf-xl-credit2 queued
broken-job test-amd64-i386-libvirt queued
broken-job test-amd64-amd64-qemuu-freebsd12-amd64 queued
broken-job test-amd64-amd64-migrupgrade queued
broken-job test-amd64-i386-livepatch queued
broken-job test-amd64-amd64-xl-qemut-win7-amd64 queued
broken-job test-amd64-i386-xl-qemuu-ovmf-amd64 queued
broken-job test-amd64-amd64-xl-shadow queued
broken-job test-amd64-i386-xl-qemuu-win7-amd64 queued
broken-job test-amd64-amd64-xl-pvhv2-amd queued
broken-job test-amd64-amd64-xl-pvhv2-intel queued

Not pushing.

------------------------------------------------------------
commit 26984f2f432bb880f2bb4954e1248c9c2d1bbd54
Author: Julien Grall <jgrall@amazon.com>
Date:   Wed Jul 22 18:47:10 2020 +0100

    Revert "SUPPORT.md: Set version and release/support dates"
    
    This reverts commit e4670f8b045b11a524171b119d9d4a20bf643367.

commit e4670f8b045b11a524171b119d9d4a20bf643367
Author: Paul Durrant <pdurrant@amazon.com>
Date:   Wed Jul 22 17:55:44 2020 +0100

    SUPPORT.md: Set version and release/support dates
    
    Signed-off-by: Paul Durrant <pdurrant@amazon.com>
    Acked-by: Julien Grall <jgrall@amazon.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Wed Jul 22 21:14:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jul 2020 21:14:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyM4A-0002oj-NE; Wed, 22 Jul 2020 21:14:18 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=q/Qh=BB=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jyM4A-0002oK-19
 for xen-devel@lists.xenproject.org; Wed, 22 Jul 2020 21:14:18 +0000
X-Inumbo-ID: 49e48aba-cc60-11ea-86ae-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 49e48aba-cc60-11ea-86ae-bc764e2007e4;
 Wed, 22 Jul 2020 21:14:11 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=/erIfwzehZfxQ7gf8wVxBB3iw8ThnwIlgM0EIq0n9hU=; b=K8bsSxn5lJ1iGjVjH43FbJc4S
 cEK3I3XSgVZAodJHi435CN/5sUriEAuYkLUKQCN6Fv0QXbxUZNtwDr2gyAcmTyjuhy0W4a4OasiAr
 TCogCKFL+t0y124oT+4w5n4lEFXhNPTgb7B+ktDr/8eZ4EZVx5pcJj6ORcN9Ri9KCmeKU=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jyM43-0000wR-Fr; Wed, 22 Jul 2020 21:14:11 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jyM42-0004Br-Nx; Wed, 22 Jul 2020 21:14:10 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jyM42-0007PK-NQ; Wed, 22 Jul 2020 21:14:10 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-152121-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 152121: tolerable all pass - PUSHED
X-Osstest-Failures: xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=26707b747feb5d707f659989c0f8f2e847e8020a
X-Osstest-Versions-That: xen=f3885e8c3ceaef101e466466e879e97103ecce18
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 22 Jul 2020 21:14:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 152121 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/152121/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  26707b747feb5d707f659989c0f8f2e847e8020a
baseline version:
 xen                  f3885e8c3ceaef101e466466e879e97103ecce18

Last test of basis   152077  2020-07-21 16:02:01 Z    1 days
Testing same since   152121  2020-07-22 15:07:52 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision:

To xenbits.xen.org:/home/xen/git/xen.git
   f3885e8c3c..26707b747f  26707b747feb5d707f659989c0f8f2e847e8020a -> smoke


From xen-devel-bounces@lists.xenproject.org Wed Jul 22 21:22:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jul 2020 21:22:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyMCT-0003gx-Jl; Wed, 22 Jul 2020 21:22:53 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=q/Qh=BB=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jyMCS-0003gs-8f
 for xen-devel@lists.xenproject.org; Wed, 22 Jul 2020 21:22:52 +0000
X-Inumbo-ID: 7d3a21db-cc61-11ea-86b2-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7d3a21db-cc61-11ea-86b2-bc764e2007e4;
 Wed, 22 Jul 2020 21:22:48 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=4vVT4u0COFDR9Qi30mFdFMsCzgbcpe3wxjY4IFgC7hg=; b=lDzdbq6sHmYJHyPvpThZeW5TE
 jAmb3z1u4Z0zMATR3JzgiOekBexsYqz8EWkpVZ3tT1DTP0+rxEZFtUG0gI2GVhpdg3ILlIf2RXD06
 m91oHnSJtvrWcCyYdJGF0nLJ3oLYfl/OxF0lD2mP4Js7oz1lRH/hFyyelqboVVr/9Yc5U=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jyMCO-00018Q-1s; Wed, 22 Jul 2020 21:22:48 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jyMCN-0004YX-Pt; Wed, 22 Jul 2020 21:22:47 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jyMCN-0002Yh-Ol; Wed, 22 Jul 2020 21:22:47 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-152091-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 152091: regressions - trouble:
 fail/pass/preparing/running/starved
X-Osstest-Failures: xen-unstable:test-amd64-amd64-xl-multivcpu:guest-localmigrate/x10:fail:regression
 xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
 xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:guest-start.2:fail:regression
 xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:hosts-allocate:running:regression
 xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:windows-install:running:regression
 xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:syslog-server:running:regression
 xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-amd64:guest-start/debianhvm.repeat:running:regression
 xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-amd64:syslog-server:running:regression
 xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 xen-unstable:test-amd64-amd64-qemuu-freebsd11-amd64:hosts-allocate:starved:nonblocking
 xen-unstable:test-amd64-amd64-qemuu-freebsd12-amd64:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This: xen=f3885e8c3ceaef101e466466e879e97103ecce18
X-Osstest-Versions-That: xen=8c4532f19d6925538fb0c938f7de9a97da8c5c3b
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 22 Jul 2020 21:22:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 152091 xen-unstable running [real]
http://logs.test-lab.xenproject.org/osstest/logs/152091/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-multivcpu 18 guest-localmigrate/x10  fail REGR. vs. 152045
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 152045
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 19 guest-start.2 fail REGR. vs. 152045
 test-amd64-amd64-xl-qemut-win7-amd64  2 hosts-allocate               running
 test-amd64-amd64-xl-qemut-ws16-amd64 10 windows-install              running
 test-amd64-amd64-xl-qemut-ws16-amd64  3 syslog-server                running
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 18 guest-start/debianhvm.repeat running
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  3 syslog-server             running

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 152045
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 152045
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 152045
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 152045
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 152045
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 152045
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop             fail like 152045
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-amd64-amd64-qemuu-freebsd11-amd64  2 hosts-allocate           starved n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  2 hosts-allocate           starved n/a

version targeted for testing:
 xen                  f3885e8c3ceaef101e466466e879e97103ecce18
baseline version:
 xen                  8c4532f19d6925538fb0c938f7de9a97da8c5c3b

Last test of basis   152045  2020-07-20 13:36:39 Z    2 days
Failing since        152067  2020-07-21 06:59:07 Z    1 days    1 attempts
Testing same since                          (not found)         0 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Christian Lindig <christian.lindig@citrix.com>
  Edwin Török <edvin.torok@citrix.com>
  Elliott Mitchell <ehem+xen@m5p.com>
  George Dunlap <george.dunlap@citrix.com>
  Igor Druzhinin <igor.druzhinin@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Nick Rosbrook <rosbrookn@ainfosec.com>
  Nick Rosbrook <rosbrookn@gmail.com>
  Stefano Stabellini <sstabellini@kernel.org>
  Tim Deegan <tim@xen.org>
  Wei Liu <wl@xen.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    running 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       starved 
 test-amd64-amd64-qemuu-freebsd12-amd64                       starved 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         preparing
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         running 
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                fail    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit f3885e8c3ceaef101e466466e879e97103ecce18
Author: Elliott Mitchell <ehem+xen@m5p.com>
Date:   Fri Jul 17 20:32:42 2020 -0700

    tools/ocaml: Default to useful build output
    
    While hiding details of build output looks pretty to some, defaulting to
    doing so deviates from the rest of Xen.  Switch the OCAML tools to match
    everything else.
    
    Signed-off-by: Elliott Mitchell <ehem+xen@m5p.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>

commit 69953e2856382274749b617125cc98ce38198463
Author: Elliott Mitchell <ehem+xen@m5p.com>
Date:   Fri Jul 17 20:31:21 2020 -0700

    tools: Partially revert "Cross-compilation fixes."
    
    This partially reverts commit 16504669c5cbb8b195d20412aadc838da5c428f7.
    
    Doesn't look like much of 16504669c5cbb8b195d20412aadc838da5c428f7
    actually remains due to passage of time.
    
    Of the 3, both Python and pygrub appear to mostly be building just
    fine when cross-compiling.  The OCAML portion is being troublesome;
    this is going to cause bug reports elsewhere soon.  The OCAML portion
    though can already be disabled by setting OCAML_TOOLS=n and shouldn't
    have this extra form of disabling.
    
    Signed-off-by: Elliott Mitchell <ehem+xen@m5p.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 057cfa258ca554013178c5aaf6f80db47fb184fc
Author: Jan Beulich <jbeulich@suse.com>
Date:   Tue Jul 21 14:04:59 2020 +0200

    tools/xen-cpuid: use dashes consistently in feature names
    
    We've grown to a mix of dashes and underscores - switch to consistent
    naming in the hope that future additions will play by this convention.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Wei Liu <wl@xen.org>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit a6ed77f1e0334c26e6e216aea45f8674d9284856
Author: Edwin Török <edvin.torok@citrix.com>
Date:   Wed Jul 15 16:10:56 2020 +0100

    oxenstored: fix ABI breakage introduced in Xen 4.9.0
    
    dbc84d2983969bb47d294131ed9e6bbbdc2aec49 (Xen >= 4.9.0) deleted XS_RESTRICT
    from oxenstored, which caused all the following opcodes to be shifted by 1:
    reset_watches became off-by-one compared to the C version of xenstored.
    
    Looking at the C code, the opcode for reset watches needs:
    XS_RESET_WATCHES = XS_SET_TARGET + 2
    
    So add the placeholder `Invalid` in the OCaml<->C mapping list.
    (Note that the code here doesn't simply convert the OCaml constructor to
     an integer, so we don't need to introduce a dummy constructor).
    
    Igor says that with a suitably patched xenopsd to enable watch reset,
    we now see `reset watches` during kdump of a guest in xenstored-access.log.
    
    Signed-off-by: Edwin Török <edvin.torok@citrix.com>
    Tested-by: Igor Druzhinin <igor.druzhinin@citrix.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
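
    [Editor's note: a minimal sketch of the numbering problem described
    above. This is not the real oxenstored OCaml code; the lists below
    are illustrative stand-ins for the ordered OCaml<->C opcode mapping,
    showing why deleting one entry shifts every later wire value by one
    and why a placeholder restores XS_RESET_WATCHES = XS_SET_TARGET + 2.]

```python
# Hypothetical model of an ordered opcode mapping, where each name's wire
# value is its position in the list (as in the C enum).
ops_c = ["XS_SET_TARGET", "XS_RESTRICT", "XS_RESET_WATCHES"]

# After XS_RESTRICT was deleted outright, every following opcode moved
# down one slot, so reset_watches no longer matched the C xenstored value:
ops_broken = ["XS_SET_TARGET", "XS_RESET_WATCHES"]
assert ops_broken.index("XS_RESET_WATCHES") != ops_c.index("XS_RESET_WATCHES")

# Inserting a placeholder ("Invalid", mirroring the commit's fix) keeps
# the required relationship XS_RESET_WATCHES = XS_SET_TARGET + 2:
ops_fixed = ["XS_SET_TARGET", "Invalid", "XS_RESET_WATCHES"]
assert ops_fixed.index("XS_RESET_WATCHES") == ops_fixed.index("XS_SET_TARGET") + 2
```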

commit 6d49fbdeab3e687a6818f809ca3d98ac7ced2c8d
Author: Nick Rosbrook <rosbrookn@gmail.com>
Date:   Mon Jul 20 19:54:40 2020 -0400

    golang/xenlight: fix code generation for python 2.6
    
    Before python 2.7, str.format() calls required that the format fields
    were explicitly enumerated, e.g.:
    
      '{0} {1}'.format(foo, bar)
    
      vs.
    
      '{} {}'.format(foo, bar)
    
    Currently, gengotypes.py uses the latter pattern everywhere, which means
    the Go bindings do not build on python 2.6. Use the 2.6 syntax for
    format() in order to support python 2.6 for now.
    
    Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com>
    Acked-by: Wei Liu <wl@xen.org>
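
    [Editor's note: the two format() spellings contrasted above, shown
    runnable. The variable names are illustrative, not taken from
    gengotypes.py. On Python 2.7+ both forms produce the same string;
    only the explicitly numbered form is accepted by Python 2.6, which
    rejects empty field names with a ValueError.]

```python
foo, bar = "hello", "world"

# Explicitly enumerated fields: valid on Python 2.6 and later.
explicit = '{0} {1}'.format(foo, bar)

# Auto-numbered fields: valid only on Python 2.7+ (raises
# "ValueError: zero length field name in format" on 2.6).
auto = '{} {}'.format(foo, bar)

assert explicit == auto == "hello world"
```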

commit af0584931c1b902577317dacff976bc4b4f3923d
Author: Nick Rosbrook <rosbrookn@gmail.com>
Date:   Thu Jul 16 12:00:26 2020 -0400

    MAINTAINERS: add myself as a golang bindings maintainer
    
    Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com>
    Acked-by: George Dunlap <george.dunlap@citrix.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 139ce42388c3fe7096a09b3d397250fe14906809
Author: Julien Grall <jgrall@amazon.com>
Date:   Mon Jul 20 18:35:55 2020 +0100

    SUPPORT.md: Spell Experimental correctly
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit fc7f700cf1845d80dee1f4075044a54645aec04e
Author: Jan Beulich <jbeulich@suse.com>
Date:   Tue Jul 21 14:00:25 2020 +0200

    x86emul: support AVX512_VP2INTERSECT insns
    
    The standard memory access pattern once again should allow us to go
    without a test harness addition beyond the EVEX Disp8-scaling one.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit f6b78aefea557e5fd58d1c1e1e314c25c0bacaef
Author: Jan Beulich <jbeulich@suse.com>
Date:   Tue Jul 21 13:59:28 2020 +0200

    x86/shadow: l3table[] and gl3e[] are HVM only
    
    ... by the very fact that they're 3-level specific, while PV always gets
    run in 4-level mode. This requires adding some seemingly redundant
    #ifdef-s - some of them will be possible to drop again once 2- and
    3-level guest code doesn't get built anymore in !HVM configs, but I'm
    afraid there's still quite a bit of disentangling work to be done to
    make this possible.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>

commit 5fd152ea7dfbd7e83c4f398bc8d7273466b88cbb
Author: Jan Beulich <jbeulich@suse.com>
Date:   Tue Jul 21 13:58:56 2020 +0200

    x86/shadow: have just a single instance of sh_set_toplevel_shadow()
    
    The only guest/shadow level dependent piece here is the call to
    sh_make_shadow(). Make a pointer to the respective function an
    argument of sh_set_toplevel_shadow(), allowing it to be moved to
    common.c.
    
    This implies making get_shadow_status() available to common.c; its set
    and delete counterparts are moved along with it.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>

commit ef3b0d8d2c3975c5cdd6a521896d85e97b74e924
Author: Jan Beulich <jbeulich@suse.com>
Date:   Tue Jul 21 13:58:15 2020 +0200

    x86/shadow: shadow_table[] needs only one entry for PV-only configs
    
    Furthermore the field isn't needed at all with shadow support disabled -
    move it into struct shadow_vcpu.
    
    Introduce for_each_shadow_table(), shortening loops for the 4-level case
    at the same time.
    
    Adjust loop variables and a function parameter to be "unsigned int"
    where applicable at the same time. Also move a comment that ended up
    misplaced due to incremental additions.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>

commit ded576ce07e9328f66842bef67d8cfc14c3088b7
Author: Jan Beulich <jbeulich@suse.com>
Date:   Tue Jul 21 13:57:06 2020 +0200

    x86/shadow: dirty VRAM tracking is needed for HVM only
    
    Move shadow_track_dirty_vram() into hvm.c (requiring two static
    functions to become non-static). More importantly though make sure we
    don't de-reference d->arch.hvm.dirty_vram for a non-HVM guest. This was
    a latent issue only just because the field lives far enough into struct
    hvm_domain to be outside the part overlapping with struct pv_domain.
    
    While moving shadow_track_dirty_vram() some purely typographic
    adjustments are being made, like inserting missing blanks or putting
    braces on their own lines.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Tim Deegan <tim@xen.org>

commit 9ffdda96d9e7c3d9c7a5bbe2df6ab30f63927542
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Jul 20 17:54:52 2020 +0100

    docs: Replace non-UTF-8 character in hypfs-paths.pandoc
    
    From the docs cronjob on xenbits:
    
      /usr/bin/pandoc --number-sections --toc --standalone misc/hypfs-paths.pandoc --output html/misc/hypfs-paths.html
      pandoc: Cannot decode byte '\x92': Data.Text.Internal.Encoding.decodeUtf8: Invalid UTF-8 stream
      make: *** [Makefile:236: html/misc/hypfs-paths.html] Error 1
    
    Fixes: 5a4a411bde4 ("docs: specify stability of hypfs path documentation")
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Release-acked-by: Paul Durrant <paul@xen.org>

commit 6720345aaf82fc76dca084f3f7a577062f5ff0f3
Author: Jan Beulich <jbeulich@suse.com>
Date:   Wed Jul 15 12:39:06 2020 +0200

    Arm: prune #include-s needed by domain.h
    
    asm/domain.h is a dependency of xen/sched.h, and hence should not itself
    include xen/sched.h. Nor should any of the other #include-s used by it.
    While at it, also drop two other #include-s that aren't needed by this
    particular header.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Stefano Stabellini <sstabellini@kernel.org>

commit 5a4a411bde4f73ff8ce43d6e52b77302973e8f68
Author: Juergen Gross <jgross@suse.com>
Date:   Mon Jul 20 13:38:00 2020 +0200

    docs: specify stability of hypfs path documentation
    
    In docs/misc/hypfs-paths.pandoc the supported paths in the hypervisor
    file system are specified. Make it more clear that path availability
    might change, e.g. due to scope widening or narrowing (e.g. being
    limited to a specific architecture).
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>
    Release-acked-by: Paul Durrant <paul@xen.org>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Wed Jul 22 22:47:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jul 2020 22:47:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyNVs-0002Dj-HY; Wed, 22 Jul 2020 22:47:00 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=JOgq=BB=oracle.com=boris.ostrovsky@srs-us1.protection.inumbo.net>)
 id 1jyNVq-0002De-SR
 for xen-devel@lists.xenproject.org; Wed, 22 Jul 2020 22:46:58 +0000
X-Inumbo-ID: 3ee7b1ca-cc6d-11ea-a232-12813bfff9fa
Received: from userp2120.oracle.com (unknown [156.151.31.85])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 3ee7b1ca-cc6d-11ea-a232-12813bfff9fa;
 Wed, 22 Jul 2020 22:46:57 +0000 (UTC)
Received: from pps.filterd (userp2120.oracle.com [127.0.0.1])
 by userp2120.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 06MMXMrT034544;
 Wed, 22 Jul 2020 22:46:16 GMT
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com;
 h=subject : to : cc :
 references : from : message-id : date : mime-version : in-reply-to :
 content-type : content-transfer-encoding; s=corp-2020-01-29;
 bh=apQwUwcSTA+SKE+KQnC+dlEnw4so1FZdMeemve7vc4Y=;
 b=0avgJq+Joe5tI3frUG9bCPv0J/wvQyatMpX5aC0JJnq3lYDFupa0dW47sFGCfAgi1r/q
 MIgsmeuz9n5h0NuhVoEldMQb5Q9sU/KAfevYE3L40x2QSAS5s3FmEqkXjjIHyEgGu3a2
 pum6T7Q/TDJ5npDuWK8hBKS0L0vlbyTn43EWPp/a+uNJakQocaV2DdM9UH47138Phn0F
 4MHv1gYQlCUoikVBmVbOEoxQX1aQ4lqB+R5bUfXEcdBfefB81VbFYOeXb8WpxmEfPsdJ
 r5TcJ8PbospwEGv099kfBn7tz2HGL/rtwseom2tkl7+zM2kKhI88K6sN9/KwRz1QYqE7 +Q== 
Received: from userp3020.oracle.com (userp3020.oracle.com [156.151.31.79])
 by userp2120.oracle.com with ESMTP id 32d6kstb1q-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=FAIL);
 Wed, 22 Jul 2020 22:46:16 +0000
Received: from pps.filterd (userp3020.oracle.com [127.0.0.1])
 by userp3020.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 06MMYHBg065331;
 Wed, 22 Jul 2020 22:46:15 GMT
Received: from userv0121.oracle.com (userv0121.oracle.com [156.151.31.72])
 by userp3020.oracle.com with ESMTP id 32ewejb2an-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Wed, 22 Jul 2020 22:46:15 +0000
Received: from abhmp0011.oracle.com (abhmp0011.oracle.com [141.146.116.17])
 by userv0121.oracle.com (8.14.4/8.13.8) with ESMTP id 06MMk0p3015871;
 Wed, 22 Jul 2020 22:46:00 GMT
Received: from [10.39.211.22] (/10.39.211.22)
 by default (Oracle Beehive Gateway v4.0)
 with ESMTP ; Wed, 22 Jul 2020 15:46:00 -0700
Subject: Re: [PATCH v2 01/11] xen/manage: keep track of the on-going suspend
 mode
To: Anchal Agarwal <anchalag@amazon.com>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <cover.1593665947.git.anchalag@amazon.com>
 <20200702182136.GA3511@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
 <50298859-0d0e-6eb0-029b-30df2a4ecd63@oracle.com>
 <20200715204943.GB17938@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
 <0ca3c501-e69a-d2c9-a24c-f83afd4bdb8c@oracle.com>
 <20200717191009.GA3387@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
 <5464f384-d4b4-73f0-d39e-60ba9800d804@oracle.com>
 <20200721000348.GA19610@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
 <408d3ce9-2510-2950-d28d-fdfe8ee41a54@oracle.com>
 <alpine.DEB.2.21.2007211640500.17562@sstabellini-ThinkPad-T480s>
 <20200722180229.GA32316@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Autocrypt: addr=boris.ostrovsky@oracle.com; keydata=
 xsFNBFH8CgsBEAC0KiOi9siOvlXatK2xX99e/J3OvApoYWjieVQ9232Eb7GzCWrItCzP8FUV
 PQg8rMsSd0OzIvvjbEAvaWLlbs8wa3MtVLysHY/DfqRK9Zvr/RgrsYC6ukOB7igy2PGqZd+M
 MDnSmVzik0sPvB6xPV7QyFsykEgpnHbvdZAUy/vyys8xgT0PVYR5hyvhyf6VIfGuvqIsvJw5
 C8+P71CHI+U/IhsKrLrsiYHpAhQkw+Zvyeml6XSi5w4LXDbF+3oholKYCkPwxmGdK8MUIdkM
 d7iYdKqiP4W6FKQou/lC3jvOceGupEoDV9botSWEIIlKdtm6C4GfL45RD8V4B9iy24JHPlom
 woVWc0xBZboQguhauQqrBFooHO3roEeM1pxXjLUbDtH4t3SAI3gt4dpSyT3EvzhyNQVVIxj2
 FXnIChrYxR6S0ijSqUKO0cAduenhBrpYbz9qFcB/GyxD+ZWY7OgQKHUZMWapx5bHGQ8bUZz2
 SfjZwK+GETGhfkvNMf6zXbZkDq4kKB/ywaKvVPodS1Poa44+B9sxbUp1jMfFtlOJ3AYB0WDS
 Op3d7F2ry20CIf1Ifh0nIxkQPkTX7aX5rI92oZeu5u038dHUu/dO2EcuCjl1eDMGm5PLHDSP
 0QUw5xzk1Y8MG1JQ56PtqReO33inBXG63yTIikJmUXFTw6lLJwARAQABzTNCb3JpcyBPc3Ry
 b3Zza3kgKFdvcmspIDxib3Jpcy5vc3Ryb3Zza3lAb3JhY2xlLmNvbT7CwXgEEwECACIFAlH8
 CgsCGwMGCwkIBwMCBhUIAgkKCwQWAgMBAh4BAheAAAoJEIredpCGysGyasEP/j5xApopUf4g
 9Fl3UxZuBx+oduuw3JHqgbGZ2siA3EA4bKwtKq8eT7ekpApn4c0HA8TWTDtgZtLSV5IdH+9z
 JimBDrhLkDI3Zsx2CafL4pMJvpUavhc5mEU8myp4dWCuIylHiWG65agvUeFZYK4P33fGqoaS
 VGx3tsQIAr7MsQxilMfRiTEoYH0WWthhE0YVQzV6kx4wj4yLGYPPBtFqnrapKKC8yFTpgjaK
 jImqWhU9CSUAXdNEs/oKVR1XlkDpMCFDl88vKAuJwugnixjbPFTVPyoC7+4Bm/FnL3iwlJVE
 qIGQRspt09r+datFzPqSbp5Fo/9m4JSvgtPp2X2+gIGgLPWp2ft1NXHHVWP19sPgEsEJXSr9
 tskM8ScxEkqAUuDs6+x/ISX8wa5Pvmo65drN+JWA8EqKOHQG6LUsUdJolFM2i4Z0k40BnFU/
 kjTARjrXW94LwokVy4x+ZYgImrnKWeKac6fMfMwH2aKpCQLlVxdO4qvJkv92SzZz4538az1T
 m+3ekJAimou89cXwXHCFb5WqJcyjDfdQF857vTn1z4qu7udYCuuV/4xDEhslUq1+GcNDjAhB
 nNYPzD+SvhWEsrjuXv+fDONdJtmLUpKs4Jtak3smGGhZsqpcNv8nQzUGDQZjuCSmDqW8vn2o
 hWwveNeRTkxh+2x1Qb3GT46uzsFNBFH8CgsBEADGC/yx5ctcLQlB9hbq7KNqCDyZNoYu1HAB
 Hal3MuxPfoGKObEktawQPQaSTB5vNlDxKihezLnlT/PKjcXC2R1OjSDinlu5XNGc6mnky03q
 yymUPyiMtWhBBftezTRxWRslPaFWlg/h/Y1iDuOcklhpr7K1h1jRPCrf1yIoxbIpDbffnuyz
 kuto4AahRvBU4Js4sU7f/btU+h+e0AcLVzIhTVPIz7PM+Gk2LNzZ3/on4dnEc/qd+ZZFlOQ4
 KDN/hPqlwA/YJsKzAPX51L6Vv344pqTm6Z0f9M7YALB/11FO2nBB7zw7HAUYqJeHutCwxm7i
 BDNt0g9fhviNcJzagqJ1R7aPjtjBoYvKkbwNu5sWDpQ4idnsnck4YT6ctzN4I+6lfkU8zMzC
 gM2R4qqUXmxFIS4Bee+gnJi0Pc3KcBYBZsDK44FtM//5Cp9DrxRQOh19kNHBlxkmEb8kL/pw
 XIDcEq8MXzPBbxwHKJ3QRWRe5jPNpf8HCjnZz0XyJV0/4M1JvOua7IZftOttQ6KnM4m6WNIZ
 2ydg7dBhDa6iv1oKdL7wdp/rCulVWn8R7+3cRK95SnWiJ0qKDlMbIN8oGMhHdin8cSRYdmHK
 kTnvSGJNlkis5a+048o0C6jI3LozQYD/W9wq7MvgChgVQw1iEOB4u/3FXDEGulRVko6xCBU4
 SQARAQABwsFfBBgBAgAJBQJR/AoLAhsMAAoJEIredpCGysGyfvMQAIywR6jTqix6/fL0Ip8G
 jpt3uk//QNxGJE3ZkUNLX6N786vnEJvc1beCu6EwqD1ezG9fJKMl7F3SEgpYaiKEcHfoKGdh
 30B3Hsq44vOoxR6zxw2B/giADjhmWTP5tWQ9548N4VhIZMYQMQCkdqaueSL+8asp8tBNP+TJ
 PAIIANYvJaD8xA7sYUXGTzOXDh2THWSvmEWWmzok8er/u6ZKdS1YmZkUy8cfzrll/9hiGCTj
 u3qcaOM6i/m4hqtvsI1cOORMVwjJF4+IkC5ZBoeRs/xW5zIBdSUoC8L+OCyj5JETWTt40+lu
 qoqAF/AEGsNZTrwHJYu9rbHH260C0KYCNqmxDdcROUqIzJdzDKOrDmebkEVnxVeLJBIhYZUd
 t3Iq9hdjpU50TA6sQ3mZxzBdfRgg+vaj2DsJqI5Xla9QGKD+xNT6v14cZuIMZzO7w0DoojM4
 ByrabFsOQxGvE0w9Dch2BDSI2Xyk1zjPKxG1VNBQVx3flH37QDWpL2zlJikW29Ws86PHdthh
 Fm5PY8YtX576DchSP6qJC57/eAAe/9ztZdVAdesQwGb9hZHJc75B+VNm4xrh/PJO6c1THqdQ
 19WVJ+7rDx3PhVncGlbAOiiiE3NOFPJ1OQYxPKtpBUukAlOTnkKE6QcA4zckFepUkfmBV1wM
 Jg6OxFYd01z+a+oL
Message-ID: <1e1f947e-ae16-33f4-435b-13d69c829029@oracle.com>
Date: Wed, 22 Jul 2020 18:45:47 -0400
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20200722180229.GA32316@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
Content-Language: en-US
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9690
 signatures=668680
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 malwarescore=0
 suspectscore=0
 mlxscore=0 spamscore=0 mlxlogscore=999 adultscore=0 bulkscore=0
 phishscore=0 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2006250000 definitions=main-2007220141
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9690
 signatures=668680
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 malwarescore=0
 suspectscore=0
 bulkscore=0 mlxscore=0 mlxlogscore=999 impostorscore=0 priorityscore=1501
 lowpriorityscore=0 phishscore=0 spamscore=0 adultscore=0 clxscore=1015
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2006250000
 definitions=main-2007220141
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: eduval@amazon.com, len.brown@intel.com, peterz@infradead.org,
 benh@kernel.crashing.org, x86@kernel.org, linux-mm@kvack.org, pavel@ucw.cz,
 hpa@zytor.com, kamatam@amazon.com, mingo@redhat.com,
 xen-devel@lists.xenproject.org, sblbir@amazon.com, axboe@kernel.dk,
 konrad.wilk@oracle.com, bp@alien8.de, tglx@linutronix.de, jgross@suse.com,
 netdev@vger.kernel.org, linux-pm@vger.kernel.org, rjw@rjwysocki.net,
 linux-kernel@vger.kernel.org, vkuznets@redhat.com, davem@davemloft.net,
 dwmw@amazon.co.uk, roger.pau@citrix.com
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 7/22/20 2:02 PM, Anchal Agarwal wrote:
> On Tue, Jul 21, 2020 at 05:18:34PM -0700, Stefano Stabellini wrote:
>>
>>
>>> If you are not sure what the effects are (or sure that it won't work) on
>>> ARM then I'd add IS_ENABLED(CONFIG_X86) check, i.e.
>>>
>>>
>>> if (!IS_ENABLED(CONFIG_X86) || !xen_hvm_domain())
>>>       return -ENODEV;
>> That is a good principle to have and thanks for suggesting it. However,
>> in this specific case there is nothing in this patch that doesn't work
>> on ARM. From an ARM perspective I think we should enable it and
>> &xen_pm_notifier_block should be registered.
>>
> This question is for Boris, I think we decided to get rid of the notifier
> in V3 as all we need to check is the SHUTDOWN_SUSPEND state, which sounds plausible
> to me. So this check may go away. It may still be needed for syscore_ops
> callbacks registration.


If this check is going away then I guess there is nothing to do here.


My concern isn't about this particular notifier but rather whether this
feature may affect existing functionality (ARM and PVH dom0). If Stefano
feels this should be fine for ARM then so be it.


-boris


>> Given that all guests are HVM guests on ARM, it should work fine as is.
>>
>>
>> I gave a quick look at the rest of the series and everything looks fine
>> to me from an ARM perspective. I cannot imagine that the new freeze,
>> thaw, and restore callbacks for net and block are going to cause any
>> trouble on ARM. The two main x86-specific functions are
>> xen_syscore_suspend/resume and they look trivial to implement on ARM (in
>> the sense that they are likely going to look exactly the same.)
>>
> Yes, but for now, since things are not tested, I will put this
> !IS_ENABLED(CONFIG_X86) check on the syscore_ops registration part just to be safe
> and not break anything.
>> One question for Anchal: what's going to happen if you trigger a
>> hibernation, you have the new callbacks, but you are missing
>> xen_syscore_suspend/resume?
>>
>> Is it any worse than not having the new freeze, thaw and restore
>> callbacks at all and try to do a hibernation?
> If callbacks are not there, I don't expect hibernation to work correctly.
> These callbacks take care of Xen primitives like shared_info_page,
> grant table, sched clock, and runstate time, which are important to save the correct
> state of the guest and bring it back up. Other patches in the series add all
> the logic to these syscore callbacks. Freeze/thaw/restore are just there at the driver
> level.
>
> Thanks,
> Anchal





From xen-devel-bounces@lists.xenproject.org Wed Jul 22 23:49:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 22 Jul 2020 23:49:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyOUD-0007Qe-Cm; Wed, 22 Jul 2020 23:49:21 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=0kdp=BB=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1jyOUB-0007QZ-7K
 for xen-devel@lists.xenproject.org; Wed, 22 Jul 2020 23:49:19 +0000
X-Inumbo-ID: f4e366b0-cc75-11ea-86ca-bc764e2007e4
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f4e366b0-cc75-11ea-86ca-bc764e2007e4;
 Wed, 22 Jul 2020 23:49:18 +0000 (UTC)
Received: from localhost (c-67-164-102-47.hsd1.ca.comcast.net [67.164.102.47])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256
 bits)) (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 99E9320825;
 Wed, 22 Jul 2020 23:49:16 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
 s=default; t=1595461757;
 bh=1IS+O/7YryqIBEkm5nkaKs05nC51c/pf8nsRFWQ6LdE=;
 h=Date:From:To:cc:Subject:In-Reply-To:References:From;
 b=FrgmIjvmA70VVMfZFUF0EKllD6JAd0T3wuzJkEXkBHd9mk68R94zOTAVb1VshR80o
 VoedozueZTQ79QnhYsBQuIWbKhzLCmG3xMwCZYv104TVVJWpntgJlkPZ9kXSd1yIxs
 xBrz8+QmXgyh1brARc/h9Lq6CDqDU8ZD0/QxaBss=
Date: Wed, 22 Jul 2020 16:49:16 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Anchal Agarwal <anchalag@amazon.com>
Subject: Re: [PATCH v2 01/11] xen/manage: keep track of the on-going suspend
 mode
In-Reply-To: <20200722180229.GA32316@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
Message-ID: <alpine.DEB.2.21.2007221645430.17562@sstabellini-ThinkPad-T480s>
References: <cover.1593665947.git.anchalag@amazon.com>
 <20200702182136.GA3511@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
 <50298859-0d0e-6eb0-029b-30df2a4ecd63@oracle.com>
 <20200715204943.GB17938@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
 <0ca3c501-e69a-d2c9-a24c-f83afd4bdb8c@oracle.com>
 <20200717191009.GA3387@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
 <5464f384-d4b4-73f0-d39e-60ba9800d804@oracle.com>
 <20200721000348.GA19610@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
 <408d3ce9-2510-2950-d28d-fdfe8ee41a54@oracle.com>
 <alpine.DEB.2.21.2007211640500.17562@sstabellini-ThinkPad-T480s>
 <20200722180229.GA32316@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="8323329-1520963972-1595461757=:17562"
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: x86@kernel.org, len.brown@intel.com, peterz@infradead.org,
 benh@kernel.crashing.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
 pavel@ucw.cz, hpa@zytor.com, tglx@linutronix.de,
 Stefano Stabellini <sstabellini@kernel.org>, eduval@amazon.com,
 mingo@redhat.com, xen-devel@lists.xenproject.org, sblbir@amazon.com,
 axboe@kernel.dk, konrad.wilk@oracle.com, bp@alien8.de,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>, jgross@suse.com,
 netdev@vger.kernel.org, linux-pm@vger.kernel.org, rjw@rjwysocki.net,
 kamatam@amazon.com, vkuznets@redhat.com, davem@davemloft.net,
 dwmw@amazon.co.uk, roger.pau@citrix.com
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--8323329-1520963972-1595461757=:17562
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8BIT

On Wed, 22 Jul 2020, Anchal Agarwal wrote:
> On Tue, Jul 21, 2020 at 05:18:34PM -0700, Stefano Stabellini wrote:
> > On Tue, 21 Jul 2020, Boris Ostrovsky wrote:
> > > >>>>>> +static int xen_setup_pm_notifier(void)
> > > >>>>>> +{
> > > >>>>>> +     if (!xen_hvm_domain())
> > > >>>>>> +             return -ENODEV;
> > > >>>>>>
> > > >>>>>> I forgot --- what did we decide about non-x86 (i.e. ARM)?
> > > >>>>> It would be great to support that; however, it's out of
> > > >>>>> scope for this patch set.
> > > >>>>> I’ll be happy to discuss it separately.
> > > >>>>
> > > >>>> I wasn't implying that this *should* work on ARM but rather whether this
> > > >>>> will break ARM somehow (because xen_hvm_domain() is true there).
> > > >>>>
> > > >>>>
> > > >>> Ok, makes sense. TBH, I haven't tested this part of the code on ARM, and the series
> > > >>> was only meant to support x86 guest hibernation.
> > > >>> Moreover, this notifier is there to distinguish between 2 PM
> > > >>> events PM SUSPEND and PM hibernation. Now since we only care about PM
> > > >>> HIBERNATION I may just remove this code and rely on "SHUTDOWN_SUSPEND" state.
> > > >>> However, I may have to fix other patches in the series where this check may
> > > >>> appear and restrict it to x86 only, right?
> > > >>
> > > >>
> > > >> I don't know what would happen if an ARM guest tries to handle hibernation
> > > >> callbacks. The only ones that you are introducing are in block and net
> > > >> fronts and that's arch-independent.
> > > >>
> > > >>
> > > >> You do add a bunch of x86-specific code though (syscore ops), would
> > > >> something similar be needed for ARM?
> > > >>
> > > >>
> > > > I don't expect this to work out of the box on ARM. To start with, something
> > > > similar will be needed for ARM too.
> > > > We may still want to keep the driver code as-is.
> > > >
> > > > I understand the concern here wrt ARM, however, currently the support is only
> > > > proposed for x86 guests here and similar work could be carried out for ARM.
> > > > Also, if regular hibernation works correctly on ARM, then all that is needed is to
> > > > fix the Xen side of things.
> > > >
> > > > I am not sure what could be done to achieve any assurances on the ARM side as far as
> > > > this series is concerned.
> > 
> > Just to clarify: new features don't need to work on ARM or cause any
> > additional effort on your part to make them work on ARM. The patch series only
> > needs not to break existing code paths (on ARM and any other platforms).
> > It should also not make it overly difficult to implement the ARM side of
> > things (if there is one) at some point in the future.
> > 
> > FYI drivers/xen/manage.c is compiled and working on ARM today, however
> > Xen suspend/resume is not supported. I don't know for sure if
> > guest-initiated hibernation works because I have not tested it.
> > 
> > 
> > 
> > > If you are not sure what the effects are (or sure that it won't work) on
> > > ARM then I'd add IS_ENABLED(CONFIG_X86) check, i.e.
> > >
> > >
> > > if (!IS_ENABLED(CONFIG_X86) || !xen_hvm_domain())
> > >       return -ENODEV;
> > 
> > That is a good principle to have and thanks for suggesting it. However,
> > in this specific case there is nothing in this patch that doesn't work
> > on ARM. From an ARM perspective I think we should enable it and
> > &xen_pm_notifier_block should be registered.
> > 
> This question is for Boris, I think we decided to get rid of the notifier
> in V3 as all we need to check is the SHUTDOWN_SUSPEND state, which sounds plausible
> to me. So this check may go away. It may still be needed for syscore_ops
> callbacks registration.
> > Given that all guests are HVM guests on ARM, it should work fine as is.
> > 
> > 
> > I gave a quick look at the rest of the series and everything looks fine
> > to me from an ARM perspective. I cannot imagine that the new freeze,
> > thaw, and restore callbacks for net and block are going to cause any
> > trouble on ARM. The two main x86-specific functions are
> > xen_syscore_suspend/resume and they look trivial to implement on ARM (in
> > the sense that they are likely going to look exactly the same.)
> > 
> Yes, but for now, since things are not tested, I will put this
> !IS_ENABLED(CONFIG_X86) check on the syscore_ops registration part just to be safe
> and not break anything.
> > 
> > One question for Anchal: what's going to happen if you trigger a
> > hibernation, you have the new callbacks, but you are missing
> > xen_syscore_suspend/resume?
> > 
> > Is it any worse than not having the new freeze, thaw and restore
> > callbacks at all and try to do a hibernation?
> If callbacks are not there, I don't expect hibernation to work correctly.
> These callbacks take care of Xen primitives like shared_info_page,
> grant table, sched clock, and runstate time, which are important to save the correct
> state of the guest and bring it back up. Other patches in the series add all
> the logic to these syscore callbacks. Freeze/thaw/restore are just there at the driver
> level.

I meant the other way around :-)  Let me rephrase the question.

Do you think that implementing freeze/thaw/restore at the driver level
without having xen_syscore_suspend/resume can potentially make things
worse compared to not having freeze/thaw/restore at the driver level at
all?
--8323329-1520963972-1595461757=:17562--


From xen-devel-bounces@lists.xenproject.org Thu Jul 23 03:29:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jul 2020 03:29:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyRuZ-0002oS-Vg; Thu, 23 Jul 2020 03:28:47 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=h/wE=BC=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jyRuX-0002o8-VB
 for xen-devel@lists.xenproject.org; Thu, 23 Jul 2020 03:28:46 +0000
X-Inumbo-ID: 993882fe-cc94-11ea-a255-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 993882fe-cc94-11ea-a255-12813bfff9fa;
 Thu, 23 Jul 2020 03:28:38 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=UG+tY0uKS1jmMOc42xcm/j2jzG1rmYXR/Xx7SsqdkB8=; b=wpX0YFExAtgHt+NXqB7RF+G2l
 uaV7g6sagXvJqMimbt5MBkv0X5Pi08Ry2uHuQToB7zg+8MelfuprseiDglHx4bw8OmP3NRw/pfUlj
 efC7ZlmnHYS3FMuXLW36h74C2XSTrXMPxdBAYsszGP2oyxeBgxrTdEY4RM7wCsYpqQby4=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jyRuP-0002Tq-Rx; Thu, 23 Jul 2020 03:28:37 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jyRuP-0006Sn-DT; Thu, 23 Jul 2020 03:28:37 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jyRuP-0007R6-Co; Thu, 23 Jul 2020 03:28:37 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-152097-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 152097: regressions - trouble: fail/pass/starved
X-Osstest-Failures: linux-linus:test-arm64-arm64-libvirt-xsm:guest-start/debian.repeat:fail:regression
 linux-linus:test-armhf-armhf-xl-vhd:guest-start:fail:heisenbug
 linux-linus:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:heisenbug
 linux-linus:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:heisenbug
 linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-qemuu-freebsd12-amd64:hosts-allocate:starved:nonblocking
 linux-linus:test-amd64-amd64-qemuu-freebsd11-amd64:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This: linux=4fa640dc52302b5e62b01b05c755b055549633ae
X-Osstest-Versions-That: linux=1b5044021070efa3259f3e9548dc35d1eb6aa844
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 23 Jul 2020 03:28:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 152097 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/152097/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-libvirt-xsm 16 guest-start/debian.repeat fail REGR. vs. 151214

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl-vhd      11 guest-start      fail in 152070 pass in 152097
 test-amd64-amd64-xl-rtds     18 guest-localmigrate/x10     fail pass in 152070
 test-armhf-armhf-xl-rtds     16 guest-start/debian.repeat  fail pass in 152070

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 151214
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 151214
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 151214
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 151214
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 151214
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 151214
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 151214
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 151214
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-freebsd12-amd64  2 hosts-allocate           starved n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  2 hosts-allocate           starved n/a

version targeted for testing:
 linux                4fa640dc52302b5e62b01b05c755b055549633ae
baseline version:
 linux                1b5044021070efa3259f3e9548dc35d1eb6aa844

Last test of basis   151214  2020-06-18 02:27:46 Z   35 days
Failing since        151236  2020-06-19 19:10:35 Z   33 days   52 attempts
Testing same since   152070  2020-07-21 09:35:59 Z    1 days    2 attempts

------------------------------------------------------------
830 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       starved 
 test-amd64-amd64-qemuu-freebsd12-amd64                       starved 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 45273 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Jul 23 04:26:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jul 2020 04:26:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jySoG-0007z2-82; Thu, 23 Jul 2020 04:26:20 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=g66O=BC=gmail.com=christopher.w.clark@srs-us1.protection.inumbo.net>)
 id 1jySoE-0007yu-CV
 for xen-devel@lists.xenproject.org; Thu, 23 Jul 2020 04:26:18 +0000
X-Inumbo-ID: a6cccaee-cc9c-11ea-86e1-bc764e2007e4
Received: from mail-oi1-x243.google.com (unknown [2607:f8b0:4864:20::243])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a6cccaee-cc9c-11ea-86e1-bc764e2007e4;
 Thu, 23 Jul 2020 04:26:17 +0000 (UTC)
Received: by mail-oi1-x243.google.com with SMTP id y22so3862355oie.8
 for <xen-devel@lists.xenproject.org>; Wed, 22 Jul 2020 21:26:17 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc:content-transfer-encoding;
 bh=1f3LnCo0UdaRCs4cl8+MGpB4N3LOBXDq4Dj60+sZTXM=;
 b=RUZEOzSj7C+2WImZe5BJ+O0BugpgkHuJT6ZN+4IuqoCglHBqswO/0jgHTfR66AHuLz
 LZyh5GifMojw9ufAJkC2u/Om2C9COFjIdcgwdcv8kxOaDXui2VpZm3QO0Ec9tbZ8TsjX
 3XF5xB6wrhzwWBTOH+IHTTbDTjZJyTv7t2T0rC6FhCMqjN0dVzAxGAOVypuX0jXoK34q
 fU2vqMgWxwcb0LDpTdZSuRO7P33TQM7YhS56SuBcO2xDycvPj4mlqzfxsKgkBtAcQMfl
 cN3pbOdmtEju1ZcfnBIS2w3Pt30XpAi5flfrhYZlktNitEk9dbN8pR0BGNyYGg86SDuS
 9fQA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc:content-transfer-encoding;
 bh=1f3LnCo0UdaRCs4cl8+MGpB4N3LOBXDq4Dj60+sZTXM=;
 b=OjiuWkI4JyKkLpbTqEFn0NPHyy4R+Kt4OEeic8alKN5v7rwM8162V7MtaeCaXa/QvG
 /D9C3Auu3/Hlr1MGb9z1k37/VorztAtD+bUdrfO71KU1h+FmkBpftJ+62u7RdIu3YhQb
 b6ePxHaII4zUCicGHS6Mzox663foWasPVja/SiR7MMHMQ7kyMGWpusnYZmJX+bh12qVU
 wqZMhB4mXJZhYLtCgrMd7tQMZxwcZhFPi0EFr14LmweWbW8rI6MjIjJsqS3UABV0D0rj
 p5P6zYx2UEypY9KMzZEjVi75kt+INlu/BDeZtpQIIo79Pjlz97azSJz/HsJk/9F4LrBx
 msvw==
X-Gm-Message-State: AOAM531ySCFhlDS6KrxjBQ65686u4Bit7i218MImI00cctAUZMJjDpCD
 yzMHYCO8hu3zM++uv1y6GNX4PWCgK5V4JsMdi2ttNgGL
X-Google-Smtp-Source: ABdhPJw/dkBaCi0vS0quFcCdCocLIISiQuR6XZ+YlvsS1497e2gXvBvLi1A4VMxntTnwAu62VVI3OEkSPvFhV6yyqWw=
X-Received: by 2002:aca:72ca:: with SMTP id p193mr2285208oic.20.1595478377022; 
 Wed, 22 Jul 2020 21:26:17 -0700 (PDT)
MIME-Version: 1.0
References: <002801d66051$90fe2300$b2fa6900$@yujala.com>
In-Reply-To: <002801d66051$90fe2300$b2fa6900$@yujala.com>
From: Christopher Clark <christopher.w.clark@gmail.com>
Date: Wed, 22 Jul 2020 21:26:05 -0700
Message-ID: <CACMJ4GYQUXNGrqq_6wFLX4actMgTat-i5ThhS21Bjy3HO52bUQ@mail.gmail.com>
Subject: Re: Porting Xen to Jetson Nano
To: Srinivas Bangalore <srini@yujala.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Wed, Jul 22, 2020 at 10:59 AM Srinivas Bangalore <srini@yujala.com> wrote:
> Dear Xen experts,
>
> Would greatly appreciate some hints on how to move forward with this one…

Hi Srini,

I don't have any strong recommendations for you, but I do want to say
that I'm very happy to see you taking this project on and I am hoping
for your success. I have a newly-arrived Jetson Nano sitting on my
desk here, purchased with the intention of getting Xen up and running
on it, that I just haven't got to work on yet. I'm also familiar with
Chris Patterson, Kyle Temkin and Ian Campbell's previous Tegra Jetson
patches and it would be great to see some further progress made from
those.

In my recent experience with the Raspberry Pi 4, one basic observation
with ARM kernel bringup is that if your device tree isn't good, your
dom0 kernel can be missing the configuration it needs to use the
serial port correctly, and you then get no diagnostics from it after
Xen attempts to launch it. So I would just patch the right serial port
config directly into your Linux kernel (eg. hardcode specific things
onto the kernel command line) so you're not messing about with that
any more.
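For illustration only, hardcoding the command line can be done via
Kconfig; the console device and options below are assumptions, not
values verified on the Jetson Nano, so substitute whatever your
board's UART actually needs:

```
# Illustrative sketch: console device name and baud rate are guesses,
# not tested Jetson Nano values.
CONFIG_CMDLINE="console=hvc0 earlycon loglevel=8"
CONFIG_CMDLINE_FORCE=y
```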

The other thing I would recommend is patching in some printks into the
earliest part of the Xen parts of the Dom0 Linux kernel start code.
Others who are more familiar with Xen on ARM may have some better
recommendations, but linux/arch/arm/xen/enlighten.c has a function
xen_guest_init that looks like a good place to stuff some extra
printks for some early proof-of-entry from your kernel, and that way
you'll have some indication whether execution has actually commenced
in there.
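As a sketch of what that might look like (illustrative only: the
exact shape of xen_guest_init() differs between kernel versions, and
the message text here is made up, so treat this as a pattern rather
than a patch):

```c
/* linux/arch/arm/xen/enlighten.c -- sketch, not a verbatim diff.
 * Placement at the very top of the function is the point: if this
 * message never appears, execution never reached the Xen guest
 * init path in dom0. */
static int __init xen_guest_init(void)
{
	/* KERN_EMERG maximises the chance the line escapes any
	 * console loglevel filtering during early bringup. */
	printk(KERN_EMERG "xen_guest_init: entered\n");

	/* ... existing function body unchanged ... */
}
```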

I don't think you're going to get a great deal of enthusiasm on this
list for Xen 4.8.5, unfortunately; most people around here work off
Xen's staging branch, and I'd be surprised to hear of anyone having
tried a 5.7 Linux kernel with Xen 4.8.5. I can understand why you
might start there from the existing patch series though.

Best of luck,

Christopher


From xen-devel-bounces@lists.xenproject.org Thu Jul 23 06:55:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jul 2020 06:55:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyV7o-0004Ln-7Y; Thu, 23 Jul 2020 06:54:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=h/wE=BC=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jyV7n-0004LT-PQ
 for xen-devel@lists.xenproject.org; Thu, 23 Jul 2020 06:54:39 +0000
X-Inumbo-ID: 5b539358-ccb1-11ea-86e4-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5b539358-ccb1-11ea-86e4-bc764e2007e4;
 Thu, 23 Jul 2020 06:54:30 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=7CCQRDDxvr6h60vTUjlUIjYjB5aIQXzldG9IRcjNUVc=; b=heQX2w/80AadmTLsGIka70A7G
 U7f57ZiMD16MLgkfwmZ++m7QZIp/OspfgaVzGhEVLt66lY4fy2U4HeuEinj/fPAi+sStVU4CAJSZF
 z+TdwkQWWE+ZvwRX+DIP+LL6GXyLWzBirFXPsfx1dvQhuiB8LOcn5bLudOv4qLnB/yKSY=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jyV7d-0007MN-Mk; Thu, 23 Jul 2020 06:54:29 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jyV7d-00019N-EZ; Thu, 23 Jul 2020 06:54:29 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jyV7d-000668-E0; Thu, 23 Jul 2020 06:54:29 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-152100-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 152100: regressions - trouble: fail/pass/starved
X-Osstest-Failures: linux-5.4:test-amd64-amd64-examine:memdisk-try-append:fail:regression
 linux-5.4:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:allowable
 linux-5.4:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:allowable
 linux-5.4:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 linux-5.4:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 linux-5.4:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 linux-5.4:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 linux-5.4:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-amd64-amd64-qemuu-freebsd11-amd64:hosts-allocate:starved:nonblocking
 linux-5.4:test-amd64-amd64-qemuu-freebsd12-amd64:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This: linux=d811d29517d1ea05bc159579231652d3ca1c2a01
X-Osstest-Versions-That: linux=c57b1153a58a6263863667296b5f00933fc46a4f
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 23 Jul 2020 06:54:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 152100 linux-5.4 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/152100/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-examine      4 memdisk-try-append       fail REGR. vs. 151939

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     18 guest-localmigrate/x10   fail REGR. vs. 151939
 test-armhf-armhf-xl-rtds    16 guest-start/debian.repeat fail REGR. vs. 151939

Tests which did not succeed, but are not blocking:
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop              fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop             fail never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop             fail never pass
 test-amd64-amd64-qemuu-freebsd11-amd64  2 hosts-allocate           starved n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  2 hosts-allocate           starved n/a

version targeted for testing:
 linux                d811d29517d1ea05bc159579231652d3ca1c2a01
baseline version:
 linux                c57b1153a58a6263863667296b5f00933fc46a4f

Last test of basis   151939  2020-07-16 06:40:22 Z    7 days
Testing same since   152100  2020-07-22 07:43:09 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  AceLan Kao <acelan.kao@canonical.com>
  Adrian Hunter <adrian.hunter@intel.com>
  Aharon Landau <aharonl@mellanox.com>
  Alex Deucher <alexander.deucher@amd.com>
  Alex Hung <alex.hung@canonical.com>
  Alexander Lobakin <alobakin@marvell.com>
  Alexander Lobakin <alobakin@pm.me>
  Alexander Shishkin <alexander.shishkin@linux.intel.com>
  Alexander Tsoy <alexander@tsoy.me>
  Alexander Usyskin <alexander.usyskin@intel.com>
  Alexandre Belloni <alexandre.belloni@bootlin.com>
  Alexandru Elisei <alexandru.elisei@arm.com>
  Ali Saidi <alisaidi@amazon.com>
  Amir Goldstein <amir73il@gmail.com>
  Ammy Yi <ammy.yi@intel.com>
  Andreas Schwab <schwab@suse.de>
  Andrew F. Davis <afd@ti.com>
  Andrey Lebedev <andrey@lebedev.lt>
  Andy Shevchenko <andriy.shevchenko@linux.intel.com>
  Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
  Angelo Dureghello <angelo.dureghello@timesys.com>
  Angelo Dureghello <angelo@sysam.it>
  Anna Schumaker <Anna.Schumaker@Netapp.com>
  Anson Huang <Anson.Huang@nxp.com>
  Ard Biesheuvel <ardb@kernel.org>
  Armas Spann <zappel@retarded.farm>
  Arnaldo Carvalho de Melo <acme@redhat.com>
  Arnd Bergmann <arnd@arndb.de>
  Bartosz Golaszewski <bgolaszewski@baylibre.com>
  Bernard Zhao <bernard@vivo.com>
  Bjorn Andersson <bjorn.andersson@linaro.org>
  Bjorn Helgaas <bhelgaas@google.com>
  Bjørn Mork <bjorn@mork.no>
  Bob Peterson <rpeterso@redhat.com>
  Borislav Petkov <bp@suse.de>
  Cameron Berkenpas <cam@neo-zeon.de>
  Chandrakanth Patil <chandrakanth.patil@broadcom.com>
  Chirantan Ekbote <chirantan@chromium.org>
  Chris Mason <clm@fb.com>
  Chris Wilson <chris@chris-wilson.co.uk>
  Chris Wulff <crwulff@gmail.com>
  Christoffer Nielsen <cn@obviux.dk>
  Christoph Paasch <cpaasch@apple.com>
  Christophe JAILLET <christophe.jaillet@wanadoo.fr>
  Chuhong Yuan <hslester96@gmail.com>
  Chunyan Zhang <chunyan.zhang@unisoc.com>
  Claudiu Beznea <claudiu.beznea@microchip.com>
  Codrin Ciubotariu <codrin.ciubotariu@microchip.com>
  Colin Ian King <colin.king@canonical.com>
  Colin Xu <colin.xu@intel.com>
  Cong Wang <xiyou.wangcong@gmail.com>
  Dan Carpenter <dan.carpenter@oracle.com>
  Daniel Lezcano <daniel.lezcano@linaro.org>
  Dave Wang <dave.wang@emc.com.tw>
  David Ahern <dsahern@kernel.org>
  David Howells <dhowells@redhat.com>
  David Pedersen <limero1337@gmail.com>
  David S. Miller <davem@davemloft.net>
  Diego Elio Pettenò <flameeyes@flameeyes.com>
  Dietmar Eggemann <dietmar.eggemann@arm.com>
  dillon min <dillon.minfei@gmail.com>
  Dinghao Liu <dinghao.liu@zju.edu.cn>
  Dinh Nguyen <dinguyen@kernel.org>
  Dmitry Bogdanov <dbogdanov@marvell.com>
  Dmitry Torokhov <dmitry.torokhov@gmail.com>
  Douglas Anderson <dianders@chromium.org>
  Eddie James <eajames@linux.ibm.com>
  Emmanuel Pescosta <emmanuelpescosta099@gmail.com>
  Enric Balletbo i Serra <enric.balletbo@collabora.com>
  Eric Dumazet <edumazet@google.com>
  Esben Haabendal <esben@geanix.com>
  Felipe Balbi <balbi@kernel.org>
  Finley Xiao <finley.xiao@rock-chips.com>
  Florian Fainelli <f.fainelli@gmail.com>
  Florian Weimer <fweimer@redhat.com>
  Frank Mori Hess <fmh6jj@gmail.com>
  Frederic Weisbecker <frederic@kernel.org>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Greg Ungerer <gerg@linux-m68k.org>
  Gregor Pintar <grpintar@gmail.com>
  Guenter Roeck <linux@roeck-us.net>
  Haibo Chen <haibo.chen@nxp.com>
  Hans de Goede <hdegoede@redhat.com>
  Herbert Xu <herbert@gondor.apana.org.au>
  Hou Tao <houtao1@huawei.com>
  Igor Moura <imphilippini@gmail.com>
  Ilya Dryomov <idryomov@gmail.com>
  Inki Dae <inki.dae@samsung.com>
  James Chapman <jchapman@katalix.com>
  James Hilliard <james.hilliard1@gmail.com>
  Jan Kiszka <jan.kiszka@siemens.com>
  Jani Nikula <jani.nikula@intel.com>
  Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
  Jason Gunthorpe <jgg@mellanox.com>
  Jens Axboe <axboe@kernel.dk>
  Jerome Brunet <jbrunet@baylibre.com>
  Jian-Hong Pan <jian-hong@endlessm.com>
  Jin Yao <yao.jin@linux.intel.com>
  Jiri Kosina <jkosina@suse.cz>
  Joel Stanley <joel@jms.id.au>
  Joerg Roedel <jroedel@suse.de>
  Johan Hovold <johan@kernel.org>
  Johannes Berg <johannes.berg@intel.com>
  John Johansen <john.johansen@canonical.com>
  Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Jonathan Toppins <jtoppins@redhat.com>
  Juri Lelli <juri.lelli@redhat.com>
  Jörgen Storvist <jorgen.storvist@gmail.com>
  Kailang Yang <kailang@realtek.com>
  Kangmin Park <l4stpr0gr4m@gmail.com>
  Kashyap Desai <kashyap.desai@broadcom.com>
  Kevin Buettner <kevinb@redhat.com>
  Kevin Hilman <khilman@baylibre.com>
  Krishna Manikandan <mkrishn@codeaurora.org>
  Krzysztof Kozlowski <krzk@kernel.org>
  Leon Romanovsky <leonro@mellanox.com>
  Lingling Xu <ling_ling.xu@unisoc.com>
  Linus Lüssing <linus.luessing@c0d3.blue>
  Linus Torvalds <torvalds@linux-foundation.org>
  Linus Walleij <linus.walleij@linaro.org>
  Lorenzo Bianconi <lorenzo@kernel.org>
  Lu Baolu <baolu.lu@linux.intel.com>
  Luis Machado <luis.machado@linaro.org>
  Maciej S. Szmigiero <mail@maciej.szmigiero.name>
  Marc Kleine-Budde <mkl@pengutronix.de>
  Marc Zyngier <maz@kernel.org>
  Marek Szyprowski <m.szyprowski@samsung.com>
  Mark Brown <broonie@kernel.org>
  Mark Rutland <mark.rutland@arm.com>
  Mark Starovoytov <mstarovoitov@marvell.com>
  Martin K. Petersen <martin.petersen@oracle.com>
  Martin Varghese <martin.varghese@nokia.com>
  Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
  Matt Ranostay <matt.ranostay@konsulko.com>
  Matthias Brugger <matthias.bgg@gmail.com>
  Maulik Shah <mkshah@codeaurora.org>
  Maxime Ripard <maxime@cerno.tech>
  Maxime Ripard <mripard@kernel.org>
  Michael Ellerman <mpe@ellerman.id.au>
  Michal Simek <michal.simek@xilinx.com>
  Michał Mirosław <mirq-linux@rere.qmqm.pl>
  Mike Rapoport <rppt@linux.ibm.com>
  Miklos Szeredi <mszeredi@redhat.com>
  Minas Harutyunyan <hminas@synopsys.com>
  Minas Harutyunyan <Minas.Harutyunyan@synopsys.com>
  Minchan Kim <minchan@kernel.org>
  Ming Lei <ming.lei@redhat.com>
  Miquel Raynal <miquel.raynal@bootlin.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Navid Emamdoost <navid.emamdoost@gmail.com>
  Neil Armstrong <narmstrong@baylibre.com>
  Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
  Oded Gabbay <oded.gabbay@gmail.com>
  Palmer Dabbelt <palmerdabbelt@google.com>
  Paul Menzel <pmenzel@molgen.mpg.de>
  Paul Wouters <pwouters@redhat.com>
  Peter Chen <peter.chen@nxp.com>
  Peter Geis <pgwipeout@gmail.com>
  Peter Ujfalusi <peter.ujfalusi@ti.com>
  Peter Zijlstra (Intel) <peterz@infradead.org>
  Petteri Aimonen <jpa@git.mail.kapsi.fi>
  Philippe Schenker <philippe.schenker@toradex.com>
  Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com>
  Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  Raju P.L.S.S.S.N <rplsssn@codeaurora.org>
  Renato Lui Geh <renatogeh@gmail.com>
  Rob Clark <robdclark@chromium.org>
  Rob Herring <robh@kernel.org>
  Robin Gong <yibin.gong@nxp.com>
  Ronnie Sahlberg <lsahlber@redhat.com>
  Russell King <rmk+kernel@armlinux.org.uk>
  Sabrina Dubroca <sd@queasysnail.net>
  Saravana Kannan <saravanak@google.com>
  Sascha Hauer <s.hauer@pengutronix.de>
  Sasha Levin <sashal@kernel.org>
  Satheesh Rajendran <sathnaga@linux.vnet.ibm.com>
  Sean Tranchetti <stranche@codeaurora.org>
  Sean Wang <sean.wang@mediatek.com>
  Sebastian Parschauer <s.parschauer@gmx.de>
  Sergei A. Trusov <sergei.a.trusov@ya.ru>
  Shannon Nelson <snelson@pensando.io>
  Srinivas Kandagatla <srinivas.kandagatla@linaro.org>
  Stephan Gerhold <stephan@gerhold.net>
  Stephen Boyd <sboyd@kernel.org>
  Steve French <stfrench@microsoft.com>
  Steven Rostedt (VMware) <rostedt@goodmis.org>
  Suman Anna <s-anna@ti.com>
  Taehee Yoo <ap420073@gmail.com>
  Takashi Iwai <tiwai@suse.de>
  Tero Kristo <t-kristo@ti.com>
  Thomas Gleixner <tglx@linutronix.de>
  Thomas Lamprecht <t.lamprecht@proxmox.com>
  Toke Høiland-Jørgensen <toke@redhat.com>
  Tom Rix <trix@redhat.com>
  Tomas Winkler <tomas.winkler@intel.com>
  Tomasz Duszynski <tomasz.duszynski@octakon.com>
  Tomer Tayar <ttayar@habana.ai>
  Tony Lindgren <tony@atomide.com>
  Tudor Ambarus <tudor.ambarus@microchip.com>
  Ulf Hansson <ulf.hansson@linaro.org>
  Vasily Averin <vvs@virtuozzo.com>
  Vincent Guittot <vincent.guittot@linaro.org>
  Vinod Koul <vkoul@kernel.org>
  Viresh Kumar <viresh.kumar@linaro.org>
  Vishwas M <vishwas.reddy.vr@gmail.com>
  Vladimir Oltean <vladimir.oltean@nxp.com>
  Wade Mealing <wmealing@redhat.com>
  Wei Yongjun <weiyongjun1@huawei.com>
  Will Deacon <will@kernel.org>
  Willem de Bruijn <willemb@google.com>
  Wolfram Sang <wsa@kernel.org>
  Xiaojie Yuan <xiaojie.yuan@amd.com>
  Xin Long <lucien.xin@gmail.com>
  Yariv <oigevald+kernel@gmail.com>
  Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
  youngjun <her0gyugyu@gmail.com>
  YueHaibing <yuehaibing@huawei.com>
  Zhang Qiang <qiang.zhang@windriver.com>
  Zhang Rui <rui.zhang@intel.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Álvaro Fernández Rojas <noltari@gmail.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       starved 
 test-amd64-amd64-qemuu-freebsd12-amd64                       starved 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 5921 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Jul 23 08:46:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jul 2020 08:46:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyWrM-000608-3A; Thu, 23 Jul 2020 08:45:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=0L1b=BC=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jyWrK-000601-BA
 for xen-devel@lists.xenproject.org; Thu, 23 Jul 2020 08:45:46 +0000
X-Inumbo-ID: e5b30b28-ccc0-11ea-86e9-bc764e2007e4
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e5b30b28-ccc0-11ea-86e9-bc764e2007e4;
 Thu, 23 Jul 2020 08:45:45 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595493945;
 h=from:to:cc:subject:date:message-id:in-reply-to:
 references:mime-version:content-transfer-encoding;
 bh=7Tw4G/V7kxbTklR4OkIgBDvRcTQey+xrJa3KZIIL0yE=;
 b=cEw9Qk0mB8sVOcLppdzc0VDIW4CjWqlAkd7PXOqAhHnIr3kitvmi6cDW
 vkcrwmmWMri0R/FSrwWrqnGuAF1D4A6Hene82kvGCHIbJ5j3e962UVXiK
 V3Fo1Q9DrVvM/ZdFBf865gwFLLBVNd2MeOp90RrnK/YGzeuMrPKnCi1mL k=;
Authentication-Results: esa4.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: XvT0C2bz7SBG/PGU4X8x9SKjeEDHKXC2DnQh4Ohf3AOgAn+fTf2GBFquaKPJBrljF3EVlbn2no
 Lzfc3CouKYANDaPfndD65nop2ySPLmK0BRz+SfUhqzhkTkvyl+hHr4Z0fdpAz1X2ZWwbByPwxE
 txf4kH+VPc2cNl/2s6fQcd2ImZSNihdbZ55qd9S6RkyMdKww02zAxd/G43mz0DnCOq5uctvKGf
 qm8BXn1hp7fmezUGGl0A0/xagekyzRUqjT8GjK/+LO+PDB2u/KfTu3f//hJBi5jthWymPxhGdP
 Ppc=
X-SBRS: 2.7
X-MesageID: 23880204
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,386,1589256000"; d="scan'208";a="23880204"
From: Roger Pau Monne <roger.pau@citrix.com>
To: <linux-kernel@vger.kernel.org>
Subject: [PATCH 2/3] xen/balloon: make the balloon wait interruptible
Date: Thu, 23 Jul 2020 10:45:22 +0200
Message-ID: <20200723084523.42109-3-roger.pau@citrix.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20200723084523.42109-1-roger.pau@citrix.com>
References: <20200723084523.42109-1-roger.pau@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, stable@vger.kernel.org,
 xen-devel@lists.xenproject.org, Boris
 Ostrovsky <boris.ostrovsky@oracle.com>, Roger Pau Monne <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Make the balloon wait killable; otherwise processes can hang
indefinitely waiting for balloon pages.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
Cc: stable@vger.kernel.org
---
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org
---
 drivers/xen/balloon.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/drivers/xen/balloon.c b/drivers/xen/balloon.c
index 3cb10ed32557..292413b27575 100644
--- a/drivers/xen/balloon.c
+++ b/drivers/xen/balloon.c
@@ -568,11 +568,13 @@ static int add_ballooned_pages(int nr_pages)
 	if (xen_hotplug_unpopulated) {
 		st = reserve_additional_memory();
 		if (st != BP_ECANCELED) {
+			int rc;
+
 			mutex_unlock(&balloon_mutex);
-			wait_event(balloon_wq,
+			rc = wait_event_interruptible(balloon_wq,
 				   !list_empty(&ballooned_pages));
 			mutex_lock(&balloon_mutex);
-			return 0;
+			return rc ? -ENOMEM : 0;
 		}
 	}
 
-- 
2.27.0



From xen-devel-bounces@lists.xenproject.org Thu Jul 23 08:46:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jul 2020 08:46:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyWrQ-00060e-Lg; Thu, 23 Jul 2020 08:45:52 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=0L1b=BC=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jyWrP-00060P-Kz
 for xen-devel@lists.xenproject.org; Thu, 23 Jul 2020 08:45:51 +0000
X-Inumbo-ID: e7f7ddb4-ccc0-11ea-a26b-12813bfff9fa
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e7f7ddb4-ccc0-11ea-a26b-12813bfff9fa;
 Thu, 23 Jul 2020 08:45:49 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595493949;
 h=from:to:cc:subject:date:message-id:in-reply-to:
 references:mime-version:content-transfer-encoding;
 bh=aDtTphrzz9VOwTJvjWtl31/5xlDEnj/EJHimWB6gA4o=;
 b=cYG1g6MNbDcE+RkR9QpcLmYcP656KCgk6rKsTHEwJllY/GeMMe/hPbBY
 uFaqKj/RyU3+q+35QZ9jgWQ9Kvku5d8IGOPA3VaEV791IxfNDZmXObc3X
 X4zaZmkl/nMNt+DReP8Bty0TPXM4HJrxiAszvZqIH6RfkHQU8uMYjlD/J A=;
Authentication-Results: esa5.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: FWgaZAI6RoWQ6RqerM/hehVavtzAH7kBf/6J3Itw9DAyS7asko2A0WA2QUDT8/3hNJsqelSZhx
 aGmvB+UAcjhoS8QAFOir6owrVHYiklIGssIJbF2bknAmpy3xPWKLguU8S9RRbMH3bf0+cva/3N
 uWbogJ0Gm1FH42uL71Ft0fG/1RAiD2Rw7MHlwe8wv+Cn7R/dGiTSxubqWMgwlXk5nExHTGmjb4
 pCwIbJGW/hdprKdJesLAO/DCcfjT/aOM9VhVGAf07YUdfu+XW0GCx/J8nODk0DNNXrGa4pMW2b
 11g=
X-SBRS: 2.7
X-MesageID: 23212752
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,386,1589256000"; d="scan'208";a="23212752"
From: Roger Pau Monne <roger.pau@citrix.com>
To: <linux-kernel@vger.kernel.org>
Subject: [PATCH 1/3] xen/balloon: fix accounting in alloc_xenballooned_pages
 error path
Date: Thu, 23 Jul 2020 10:45:21 +0200
Message-ID: <20200723084523.42109-2-roger.pau@citrix.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20200723084523.42109-1-roger.pau@citrix.com>
References: <20200723084523.42109-1-roger.pau@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, stable@vger.kernel.org,
 xen-devel@lists.xenproject.org, Boris
 Ostrovsky <boris.ostrovsky@oracle.com>, Roger Pau Monne <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

target_unpopulated is incremented by nr_pages at the start of the
function, but the call to free_xenballooned_pages only subtracts pgno
pages, so the remainder must be subtracted before returning or the
accounting will be skewed.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
Cc: stable@vger.kernel.org
---
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org
---
 drivers/xen/balloon.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/drivers/xen/balloon.c b/drivers/xen/balloon.c
index 77c57568e5d7..3cb10ed32557 100644
--- a/drivers/xen/balloon.c
+++ b/drivers/xen/balloon.c
@@ -630,6 +630,12 @@ int alloc_xenballooned_pages(int nr_pages, struct page **pages)
  out_undo:
 	mutex_unlock(&balloon_mutex);
 	free_xenballooned_pages(pgno, pages);
+	/*
+	 * NB: free_xenballooned_pages will only subtract pgno pages, but since
+	 * target_unpopulated is incremented with nr_pages at the start we need
+	 * to remove the remaining ones also, or accounting will be screwed.
+	 */
+	balloon_stats.target_unpopulated -= nr_pages - pgno;
 	return ret;
 }
 EXPORT_SYMBOL(alloc_xenballooned_pages);
-- 
2.27.0



From xen-devel-bounces@lists.xenproject.org Thu Jul 23 08:46:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jul 2020 08:46:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyWrQ-00060W-CY; Thu, 23 Jul 2020 08:45:52 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=0L1b=BC=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jyWrP-000601-9x
 for xen-devel@lists.xenproject.org; Thu, 23 Jul 2020 08:45:51 +0000
X-Inumbo-ID: e754dd1c-ccc0-11ea-86e9-bc764e2007e4
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e754dd1c-ccc0-11ea-86e9-bc764e2007e4;
 Thu, 23 Jul 2020 08:45:48 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595493947;
 h=from:to:cc:subject:date:message-id:in-reply-to:
 references:mime-version:content-transfer-encoding;
 bh=HV5BAB3Ru+vHQcQzkwyNhWTtfNyeOXATqykTtgvMic0=;
 b=FJa9WUs8X32R49Gny9LcDfi+OAAzS8rkILr5MSX2CHs6KNAJ1R/WLpQd
 hJ36omV7R8YdktvCBiVbEDfI8fNtgonGZtby1k/pzULG7qrtyljmUlNDz
 MB+f0qhwHclrZyI+eB/Mmya3IEflQY3Vmn04QIT1oeUwt62ab0GeI3eUq Q=;
Authentication-Results: esa6.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: /mIOgJ6nw15sfd0b279IBlNRDY5Nc/NmPHLXeuRpP2hUXdEHF/oClU3qjsdqiiOLB/BuhRSbNl
 nbjw/gt+W3wDj2jF6DT0XSktpe4Zqmlt2xmpB7AG4FyTFqXbTd2NY9fPEyULGuSGVDm1gFktgP
 oWYZD4McskbIE3NygSE93KjKBlC3y/FF1T0eCUAmq/8EKHHJkx05LMCudAbEgXxGSkpuyxJ3tc
 6R7bGAAfT8lxXca6nDVQi926JeyOCRMiPU56n91TlftuXTsC8Itt0sWuiP6ZibQ8knT3xBPR6Y
 e3c=
X-SBRS: 2.7
X-MesageID: 23346344
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,386,1589256000"; d="scan'208";a="23346344"
From: Roger Pau Monne <roger.pau@citrix.com>
To: <linux-kernel@vger.kernel.org>
Subject: [PATCH 3/3] memory: introduce an option to force onlining of hotplug
 memory
Date: Thu, 23 Jul 2020 10:45:23 +0200
Message-ID: <20200723084523.42109-4-roger.pau@citrix.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20200723084523.42109-1-roger.pau@citrix.com>
References: <20200723084523.42109-1-roger.pau@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>, Stefano
 Stabellini <sstabellini@kernel.org>, linux-mm@kvack.org,
 xen-devel@lists.xenproject.org, Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Andrew Morton <akpm@linux-foundation.org>,
 Roger Pau Monne <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Add an extra parameter to add_memory_resource that overrides the memory
hotplug online policy and forces the newly added memory to be onlined
unconditionally.

This is required by the Xen balloon driver, which must run the online
page callback in order to correctly process the newly added memory
region. Note that this is an unpopulated region used by Linux either to
hotplug RAM or to map foreign pages from other domains; hence, when
running on Xen, memory hotplug can be triggered without the user
explicitly requesting it, as part of the OS's normal operation when
mapping memory from a different domain.

Setting a different default value of memhp_default_online_type when
attaching the balloon driver is not a robust solution, as the user (or
distro init scripts) could still change it and thus break the Xen
balloon driver.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: xen-devel@lists.xenproject.org
Cc: linux-mm@kvack.org
---
 drivers/xen/balloon.c          |  2 +-
 include/linux/memory_hotplug.h |  3 ++-
 mm/memory_hotplug.c            | 16 ++++++++++------
 3 files changed, 13 insertions(+), 8 deletions(-)

diff --git a/drivers/xen/balloon.c b/drivers/xen/balloon.c
index 292413b27575..fe0e0c76834b 100644
--- a/drivers/xen/balloon.c
+++ b/drivers/xen/balloon.c
@@ -346,7 +346,7 @@ static enum bp_state reserve_additional_memory(void)
 	mutex_unlock(&balloon_mutex);
 	/* add_memory_resource() requires the device_hotplug lock */
 	lock_device_hotplug();
-	rc = add_memory_resource(nid, resource);
+	rc = add_memory_resource(nid, resource, true);
 	unlock_device_hotplug();
 	mutex_lock(&balloon_mutex);
 
diff --git a/include/linux/memory_hotplug.h b/include/linux/memory_hotplug.h
index 375515803cd8..1793619fe4a6 100644
--- a/include/linux/memory_hotplug.h
+++ b/include/linux/memory_hotplug.h
@@ -342,7 +342,8 @@ extern void clear_zone_contiguous(struct zone *zone);
 extern void __ref free_area_init_core_hotplug(int nid);
 extern int __add_memory(int nid, u64 start, u64 size);
 extern int add_memory(int nid, u64 start, u64 size);
-extern int add_memory_resource(int nid, struct resource *resource);
+extern int add_memory_resource(int nid, struct resource *resource,
+			       bool force_online);
 extern int add_memory_driver_managed(int nid, u64 start, u64 size,
 				     const char *resource_name);
 extern void move_pfn_range_to_zone(struct zone *zone, unsigned long start_pfn,
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index da374cd3d45b..2491588d3f86 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -1002,7 +1002,10 @@ static int check_hotplug_memory_range(u64 start, u64 size)
 
 static int online_memory_block(struct memory_block *mem, void *arg)
 {
-	mem->online_type = memhp_default_online_type;
+	bool force_online = arg;
+
+	mem->online_type = force_online ? MMOP_ONLINE
+					: memhp_default_online_type;
 	return device_online(&mem->dev);
 }
 
@@ -1012,7 +1015,7 @@ static int online_memory_block(struct memory_block *mem, void *arg)
  *
  * we are OK calling __meminit stuff here - we have CONFIG_MEMORY_HOTPLUG
  */
-int __ref add_memory_resource(int nid, struct resource *res)
+int __ref add_memory_resource(int nid, struct resource *res, bool force_online)
 {
 	struct mhp_params params = { .pgprot = PAGE_KERNEL };
 	u64 start, size;
@@ -1076,8 +1079,9 @@ int __ref add_memory_resource(int nid, struct resource *res)
 	mem_hotplug_done();
 
 	/* online pages if requested */
-	if (memhp_default_online_type != MMOP_OFFLINE)
-		walk_memory_blocks(start, size, NULL, online_memory_block);
+	if (memhp_default_online_type != MMOP_OFFLINE || force_online)
+		walk_memory_blocks(start, size, (void *)force_online,
+				   online_memory_block);
 
 	return ret;
 error:
@@ -1100,7 +1104,7 @@ int __ref __add_memory(int nid, u64 start, u64 size)
 	if (IS_ERR(res))
 		return PTR_ERR(res);
 
-	ret = add_memory_resource(nid, res);
+	ret = add_memory_resource(nid, res, false);
 	if (ret < 0)
 		release_memory_resource(res);
 	return ret;
@@ -1158,7 +1162,7 @@ int add_memory_driver_managed(int nid, u64 start, u64 size,
 		goto out_unlock;
 	}
 
-	rc = add_memory_resource(nid, res);
+	rc = add_memory_resource(nid, res, false);
 	if (rc < 0)
 		release_memory_resource(res);
 
-- 
2.27.0



From xen-devel-bounces@lists.xenproject.org Thu Jul 23 09:13:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jul 2020 09:13:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyXIR-0000P7-Ub; Thu, 23 Jul 2020 09:13:47 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=/BS1=BC=intel.com=lkp@srs-us1.protection.inumbo.net>)
 id 1jyXIQ-0000P2-JC
 for xen-devel@lists.xenproject.org; Thu, 23 Jul 2020 09:13:46 +0000
X-Inumbo-ID: ce6290ca-ccc4-11ea-a26d-12813bfff9fa
Received: from mga09.intel.com (unknown [134.134.136.24])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ce6290ca-ccc4-11ea-a26d-12813bfff9fa;
 Thu, 23 Jul 2020 09:13:44 +0000 (UTC)
IronPort-SDR: B699+CPLwT1VdFs0oJDoMX/cOIoZ/0/wYynE+Ej6O4fWklmMpfP6Dev+rUybWOqx6KaQOQYvfq
 fFpfUsJ5WQCg==
X-IronPort-AV: E=McAfee;i="6000,8403,9690"; a="151796624"
X-IronPort-AV: E=Sophos;i="5.75,386,1589266800"; 
 d="xz'?scan'208";a="151796624"
X-Amp-Result: UNKNOWN
X-Amp-Original-Verdict: FILE UNKNOWN
X-Amp-File-Uploaded: False
Received: from fmsmga005.fm.intel.com ([10.253.24.32])
 by orsmga102.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 23 Jul 2020 02:13:42 -0700
IronPort-SDR: pDqm4cgzM6LXushgG9DlWM393VvfmrSmET/1+/p+1kVhGJZts2Ead3HEKzByCUQ5C/uyIbDwNT
 d5saLzJh10aQ==
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="5.75,386,1589266800"; 
 d="xz'?scan'208";a="488766214"
Received: from shao2-debian.sh.intel.com (HELO localhost) ([10.239.13.3])
 by fmsmga005.fm.intel.com with ESMTP; 23 Jul 2020 02:13:36 -0700
Date: Thu, 23 Jul 2020 17:13:06 +0800
From: kernel test robot <lkp@intel.com>
To: Lukas Wunner <lukas@wunner.de>
Subject: [PCI] 3233e41d3e: WARNING:at_drivers/pci/pci.c:#pci_reset_hotplug_slot
Message-ID: <20200723091305.GJ19262@shao2-debian>
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="fKov5AqTsvseSZ0Z"
Content-Disposition: inline
In-Reply-To: <908047f7699d9de9ec2efd6b79aa752d73dab4b6.1595329748.git.lukas@wunner.de>
User-Agent: Mutt/1.10.1 (2018-07-13)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>, Derek Chickles <dchickles@marvell.com>,
 xen-devel@lists.xenproject.org, kvm@vger.kernel.org,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, linux-pci@vger.kernel.org,
 Satanand Burla <sburla@marvell.com>, Cornelia Huck <cohuck@redhat.com>,
 LKML <linux-kernel@vger.kernel.org>, Felix Manlunas <fmanlunas@marvell.com>,
 Keith Busch <kbusch@kernel.org>, Alex Williamson <alex.williamson@redhat.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Govinda Tatti <govinda.tatti@oracle.com>, lkp@lists.01.org,
 Rick Farrington <ricardo.farrington@cavium.com>,
 Bjorn Helgaas <bhelgaas@google.com>,
 Michael Haeuptle <michael.haeuptle@hpe.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>, Ian May <ian.may@canonical.com>,
 0day robot <lkp@intel.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>


--fKov5AqTsvseSZ0Z
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline

Greetings,

FYI, we noticed the following commit (built with gcc-9):

commit: 3233e41d3e8ebcd44e92da47ffed97fd49b84278 ("[PATCH] PCI: pciehp: Fix AB-BA deadlock between reset_lock and device_lock")
url: https://github.com/0day-ci/linux/commits/Lukas-Wunner/PCI-pciehp-Fix-AB-BA-deadlock-between-reset_lock-and-device_lock/20200721-192848
base: https://git.kernel.org/cgit/linux/kernel/git/helgaas/pci.git next

in testcase: boot

on test machine: qemu-system-x86_64 -enable-kvm -cpu SandyBridge -smp 8 -m 16G

caused the following changes (please refer to the attached dmesg/kmsg for the entire log/backtrace):


+------------------------------------------------------+------------+------------+
|                                                      | 8a445afd71 | 3233e41d3e |
+------------------------------------------------------+------------+------------+
| boot_successes                                       | 4          | 0          |
| boot_failures                                        | 0          | 4          |
| WARNING:at_drivers/pci/pci.c:#pci_reset_hotplug_slot | 0          | 4          |
| RIP:pci_reset_hotplug_slot                           | 0          | 4          |
+------------------------------------------------------+------------+------------+


If you fix the issue, kindly add the following tag:
Reported-by: kernel test robot <lkp@intel.com>


[    0.971752] WARNING: CPU: 0 PID: 1 at drivers/pci/pci.c:4905 pci_reset_hotplug_slot+0x70/0x80
[    0.971753] Modules linked in:
[    0.971755] CPU: 0 PID: 1 Comm: swapper/0 Not tainted 5.8.0-rc1-00053-g3233e41d3e8eb #1
[    0.971756] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.12.0-1 04/01/2014
[    0.971757] RIP: 0010:pci_reset_hotplug_slot+0x70/0x80
[    0.971759] Code: 41 89 c4 48 8b 7b 20 e8 4e 3a c1 ff 44 89 e0 5b 5d 41 5c c3 48 8b 43 18 31 f6 48 8d b8 90 00 00 00 e8 e4 80 70 00 85 c0 75 ba <0f> 0b eb b6 41 bc e7 ff ff ff eb d6 0f 1f 40 00 66 66 66 66 90 48
[    0.971759] RSP: 0000:ffffbd73c0013ab0 EFLAGS: 00010246
[    0.971761] RAX: 0000000000000000 RBX: ffff9700475024c0 RCX: ffff9701048c9900
[    0.971762] RDX: ffff970047e40000 RSI: ffff9701048c9990 RDI: 0000000000000246
[    0.971763] RBP: 0000000000000001 R08: 0000000000000000 R09: 0000000000000000
[    0.971763] R10: 0000000000000001 R11: 0000000000000000 R12: ffff9700477320b0
[    0.971764] R13: ffff970047732000 R14: ffff97004769c238 R15: 0000000000000000
[    0.971765] FS:  0000000000000000(0000) GS:ffff97036fc00000(0000) knlGS:0000000000000000
[    0.971766] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[    0.971767] CR2: 0000000000000000 CR3: 00000001c2c10000 CR4: 00000000000006f0
[    0.971767] Call Trace:
[    0.971768]  pci_probe_reset_function+0xc4/0xe0
[    0.971769]  pci_device_add+0x13f/0x2a0
[    0.971769]  pci_scan_single_device+0xa4/0xc0
[    0.971770]  pci_scan_slot+0x52/0x110
[    0.971771]  pci_scan_child_bus_extend+0x3a/0x2a0
[    0.971771]  acpi_pci_root_create+0x1f7/0x250
[    0.971772]  pci_acpi_scan_root+0x182/0x1b0
[    0.971773]  acpi_pci_root_add.cold+0x59/0x1b0
[    0.971773]  ? acpi_device_always_present+0x20/0x90
[    0.971774]  acpi_bus_attach+0xf6/0x200
[    0.971775]  acpi_bus_attach+0x6b/0x200
[    0.971776]  acpi_bus_scan+0x43/0x90
[    0.971776]  ? acpi_sleep_proc_init+0x24/0x24
[    0.971777]  acpi_scan_init+0x102/0x24b
[    0.971778]  acpi_init+0x2c7/0x329
[    0.971778]  do_one_initcall+0x5d/0x330
[    0.971779]  ? rcu_read_lock_sched_held+0x52/0x90
[    0.971780]  kernel_init_freeable+0x248/0x2c9
[    0.971780]  ? rest_init+0x23e/0x23e
[    0.971781]  kernel_init+0xa/0x112
[    0.971782]  ret_from_fork+0x22/0x30
[    0.971782] irq event stamp: 109115
[    0.971783] hardirqs last  enabled at (109115): [<ffffffffaa71a514>] _raw_spin_unlock_irqrestore+0x54/0x70
[    0.971784] hardirqs last disabled at (109114): [<ffffffffaa71ab01>] _raw_spin_lock_irqsave+0x21/0x60
[    0.971785] softirqs last  enabled at (109060): [<ffffffffaaa003aa>] __do_softirq+0x3aa/0x4af
[    0.971786] softirqs last disabled at (109053): [<ffffffffaa8010b2>] asm_call_on_stack+0x12/0x20
[    0.971787] ---[ end trace c3e4ce92dee5df5f ]---


To reproduce:

        # build kernel
        cd linux
        cp config-5.8.0-rc1-00053-g3233e41d3e8eb .config
        make HOSTCC=gcc-9 CC=gcc-9 ARCH=x86_64 olddefconfig prepare modules_prepare bzImage

        git clone https://github.com/intel/lkp-tests.git
        cd lkp-tests
        bin/lkp qemu -k <bzImage> job-script # job-script is attached in this email



Thanks,
lkp


--fKov5AqTsvseSZ0Z
Content-Type: text/plain; charset=us-ascii
Content-Disposition: attachment; filename="config-5.8.0-rc1-00053-g3233e41d3e8eb"

#
# Automatically generated file; DO NOT EDIT.
# Linux/x86_64 5.8.0-rc1 Kernel Configuration
#
CONFIG_CC_VERSION_TEXT="gcc-9 (Debian 9.3.0-14) 9.3.0"
CONFIG_CC_IS_GCC=y
CONFIG_GCC_VERSION=90300
CONFIG_LD_VERSION=234000000
CONFIG_CLANG_VERSION=0
CONFIG_CC_CAN_LINK=y
CONFIG_CC_CAN_LINK_STATIC=y
CONFIG_CC_HAS_ASM_GOTO=y
CONFIG_CC_HAS_ASM_INLINE=y
CONFIG_IRQ_WORK=y
CONFIG_BUILDTIME_TABLE_SORT=y
CONFIG_THREAD_INFO_IN_TASK=y

#
# General setup
#
CONFIG_INIT_ENV_ARG_LIMIT=32
# CONFIG_COMPILE_TEST is not set
CONFIG_LOCALVERSION=""
CONFIG_LOCALVERSION_AUTO=y
CONFIG_BUILD_SALT=""
CONFIG_HAVE_KERNEL_GZIP=y
CONFIG_HAVE_KERNEL_BZIP2=y
CONFIG_HAVE_KERNEL_LZMA=y
CONFIG_HAVE_KERNEL_XZ=y
CONFIG_HAVE_KERNEL_LZO=y
CONFIG_HAVE_KERNEL_LZ4=y
CONFIG_KERNEL_GZIP=y
# CONFIG_KERNEL_BZIP2 is not set
# CONFIG_KERNEL_LZMA is not set
# CONFIG_KERNEL_XZ is not set
# CONFIG_KERNEL_LZO is not set
# CONFIG_KERNEL_LZ4 is not set
CONFIG_DEFAULT_INIT=""
CONFIG_DEFAULT_HOSTNAME="(none)"
CONFIG_SWAP=y
CONFIG_SYSVIPC=y
CONFIG_SYSVIPC_SYSCTL=y
CONFIG_POSIX_MQUEUE=y
CONFIG_POSIX_MQUEUE_SYSCTL=y
# CONFIG_WATCH_QUEUE is not set
CONFIG_CROSS_MEMORY_ATTACH=y
CONFIG_USELIB=y
CONFIG_AUDIT=y
CONFIG_HAVE_ARCH_AUDITSYSCALL=y
CONFIG_AUDITSYSCALL=y

#
# IRQ subsystem
#
CONFIG_GENERIC_IRQ_PROBE=y
CONFIG_GENERIC_IRQ_SHOW=y
CONFIG_GENERIC_IRQ_EFFECTIVE_AFF_MASK=y
CONFIG_GENERIC_PENDING_IRQ=y
CONFIG_GENERIC_IRQ_MIGRATION=y
CONFIG_GENERIC_IRQ_INJECTION=y
CONFIG_HARDIRQS_SW_RESEND=y
CONFIG_IRQ_DOMAIN=y
CONFIG_IRQ_SIM=y
CONFIG_IRQ_DOMAIN_HIERARCHY=y
CONFIG_GENERIC_MSI_IRQ=y
CONFIG_GENERIC_MSI_IRQ_DOMAIN=y
CONFIG_IRQ_MSI_IOMMU=y
CONFIG_GENERIC_IRQ_MATRIX_ALLOCATOR=y
CONFIG_GENERIC_IRQ_RESERVATION_MODE=y
CONFIG_IRQ_FORCED_THREADING=y
CONFIG_SPARSE_IRQ=y
# CONFIG_GENERIC_IRQ_DEBUGFS is not set
# end of IRQ subsystem

CONFIG_CLOCKSOURCE_WATCHDOG=y
CONFIG_ARCH_CLOCKSOURCE_INIT=y
CONFIG_CLOCKSOURCE_VALIDATE_LAST_CYCLE=y
CONFIG_GENERIC_TIME_VSYSCALL=y
CONFIG_GENERIC_CLOCKEVENTS=y
CONFIG_GENERIC_CLOCKEVENTS_BROADCAST=y
CONFIG_GENERIC_CLOCKEVENTS_MIN_ADJUST=y
CONFIG_GENERIC_CMOS_UPDATE=y

#
# Timers subsystem
#
CONFIG_TICK_ONESHOT=y
CONFIG_NO_HZ_COMMON=y
# CONFIG_HZ_PERIODIC is not set
# CONFIG_NO_HZ_IDLE is not set
CONFIG_NO_HZ_FULL=y
CONFIG_CONTEXT_TRACKING=y
# CONFIG_CONTEXT_TRACKING_FORCE is not set
CONFIG_NO_HZ=y
CONFIG_HIGH_RES_TIMERS=y
# end of Timers subsystem

# CONFIG_PREEMPT_NONE is not set
# CONFIG_PREEMPT_VOLUNTARY is not set
CONFIG_PREEMPT=y
CONFIG_PREEMPT_COUNT=y
CONFIG_PREEMPTION=y

#
# CPU/Task time and stats accounting
#
CONFIG_VIRT_CPU_ACCOUNTING=y
CONFIG_VIRT_CPU_ACCOUNTING_GEN=y
# CONFIG_IRQ_TIME_ACCOUNTING is not set
CONFIG_HAVE_SCHED_AVG_IRQ=y
# CONFIG_SCHED_THERMAL_PRESSURE is not set
CONFIG_BSD_PROCESS_ACCT=y
CONFIG_BSD_PROCESS_ACCT_V3=y
CONFIG_TASKSTATS=y
CONFIG_TASK_DELAY_ACCT=y
CONFIG_TASK_XACCT=y
CONFIG_TASK_IO_ACCOUNTING=y
# CONFIG_PSI is not set
# end of CPU/Task time and stats accounting

CONFIG_CPU_ISOLATION=y

#
# RCU Subsystem
#
CONFIG_TREE_RCU=y
CONFIG_PREEMPT_RCU=y
# CONFIG_RCU_EXPERT is not set
CONFIG_SRCU=y
CONFIG_TREE_SRCU=y
CONFIG_TASKS_RCU_GENERIC=y
CONFIG_TASKS_RCU=y
CONFIG_TASKS_RUDE_RCU=y
CONFIG_RCU_STALL_COMMON=y
CONFIG_RCU_NEED_SEGCBLIST=y
CONFIG_RCU_NOCB_CPU=y
# end of RCU Subsystem

CONFIG_BUILD_BIN2C=y
CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y
# CONFIG_IKHEADERS is not set
CONFIG_LOG_BUF_SHIFT=20
CONFIG_LOG_CPU_MAX_BUF_SHIFT=12
CONFIG_PRINTK_SAFE_LOG_BUF_SHIFT=13
CONFIG_HAVE_UNSTABLE_SCHED_CLOCK=y

#
# Scheduler features
#
# CONFIG_UCLAMP_TASK is not set
# end of Scheduler features

CONFIG_ARCH_SUPPORTS_NUMA_BALANCING=y
CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH=y
CONFIG_CC_HAS_INT128=y
CONFIG_ARCH_SUPPORTS_INT128=y
CONFIG_NUMA_BALANCING=y
CONFIG_NUMA_BALANCING_DEFAULT_ENABLED=y
CONFIG_CGROUPS=y
CONFIG_PAGE_COUNTER=y
CONFIG_MEMCG=y
CONFIG_MEMCG_SWAP=y
CONFIG_MEMCG_KMEM=y
CONFIG_BLK_CGROUP=y
CONFIG_CGROUP_WRITEBACK=y
CONFIG_CGROUP_SCHED=y
CONFIG_FAIR_GROUP_SCHED=y
CONFIG_CFS_BANDWIDTH=y
CONFIG_RT_GROUP_SCHED=y
CONFIG_CGROUP_PIDS=y
# CONFIG_CGROUP_RDMA is not set
CONFIG_CGROUP_FREEZER=y
CONFIG_CGROUP_HUGETLB=y
CONFIG_CPUSETS=y
CONFIG_PROC_PID_CPUSET=y
CONFIG_CGROUP_DEVICE=y
CONFIG_CGROUP_CPUACCT=y
CONFIG_CGROUP_PERF=y
CONFIG_CGROUP_BPF=y
# CONFIG_CGROUP_DEBUG is not set
CONFIG_SOCK_CGROUP_DATA=y
CONFIG_NAMESPACES=y
CONFIG_UTS_NS=y
CONFIG_TIME_NS=y
CONFIG_IPC_NS=y
CONFIG_USER_NS=y
CONFIG_PID_NS=y
CONFIG_NET_NS=y
CONFIG_CHECKPOINT_RESTORE=y
CONFIG_SCHED_AUTOGROUP=y
# CONFIG_SYSFS_DEPRECATED is not set
CONFIG_RELAY=y
CONFIG_BLK_DEV_INITRD=y
CONFIG_INITRAMFS_SOURCE=""
CONFIG_RD_GZIP=y
CONFIG_RD_BZIP2=y
CONFIG_RD_LZMA=y
CONFIG_RD_XZ=y
CONFIG_RD_LZO=y
CONFIG_RD_LZ4=y
# CONFIG_BOOT_CONFIG is not set
CONFIG_CC_OPTIMIZE_FOR_PERFORMANCE=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
CONFIG_SYSCTL=y
CONFIG_HAVE_UID16=y
CONFIG_SYSCTL_EXCEPTION_TRACE=y
CONFIG_HAVE_PCSPKR_PLATFORM=y
CONFIG_BPF=y
CONFIG_EXPERT=y
CONFIG_UID16=y
CONFIG_MULTIUSER=y
CONFIG_SGETMASK_SYSCALL=y
CONFIG_SYSFS_SYSCALL=y
CONFIG_FHANDLE=y
CONFIG_POSIX_TIMERS=y
CONFIG_PRINTK=y
CONFIG_PRINTK_NMI=y
CONFIG_BUG=y
CONFIG_ELF_CORE=y
CONFIG_PCSPKR_PLATFORM=y
CONFIG_BASE_FULL=y
CONFIG_FUTEX=y
CONFIG_FUTEX_PI=y
CONFIG_EPOLL=y
CONFIG_SIGNALFD=y
CONFIG_TIMERFD=y
CONFIG_EVENTFD=y
CONFIG_SHMEM=y
CONFIG_AIO=y
CONFIG_IO_URING=y
CONFIG_ADVISE_SYSCALLS=y
CONFIG_HAVE_ARCH_USERFAULTFD_WP=y
CONFIG_MEMBARRIER=y
CONFIG_KALLSYMS=y
CONFIG_KALLSYMS_ALL=y
CONFIG_KALLSYMS_ABSOLUTE_PERCPU=y
CONFIG_KALLSYMS_BASE_RELATIVE=y
# CONFIG_BPF_LSM is not set
CONFIG_BPF_SYSCALL=y
CONFIG_ARCH_WANT_DEFAULT_BPF_JIT=y
CONFIG_BPF_JIT_ALWAYS_ON=y
CONFIG_BPF_JIT_DEFAULT_ON=y
CONFIG_USERFAULTFD=y
CONFIG_ARCH_HAS_MEMBARRIER_SYNC_CORE=y
CONFIG_RSEQ=y
# CONFIG_DEBUG_RSEQ is not set
CONFIG_EMBEDDED=y
CONFIG_HAVE_PERF_EVENTS=y
# CONFIG_PC104 is not set

#
# Kernel Performance Events And Counters
#
CONFIG_PERF_EVENTS=y
# CONFIG_DEBUG_PERF_USE_VMALLOC is not set
# end of Kernel Performance Events And Counters

CONFIG_VM_EVENT_COUNTERS=y
CONFIG_SLUB_DEBUG=y
# CONFIG_SLUB_MEMCG_SYSFS_ON is not set
# CONFIG_COMPAT_BRK is not set
# CONFIG_SLAB is not set
CONFIG_SLUB=y
# CONFIG_SLOB is not set
CONFIG_SLAB_MERGE_DEFAULT=y
# CONFIG_SLAB_FREELIST_RANDOM is not set
# CONFIG_SLAB_FREELIST_HARDENED is not set
# CONFIG_SHUFFLE_PAGE_ALLOCATOR is not set
CONFIG_SLUB_CPU_PARTIAL=y
CONFIG_SYSTEM_DATA_VERIFICATION=y
CONFIG_PROFILING=y
CONFIG_TRACEPOINTS=y
# end of General setup

CONFIG_64BIT=y
CONFIG_X86_64=y
CONFIG_X86=y
CONFIG_INSTRUCTION_DECODER=y
CONFIG_OUTPUT_FORMAT="elf64-x86-64"
CONFIG_LOCKDEP_SUPPORT=y
CONFIG_STACKTRACE_SUPPORT=y
CONFIG_MMU=y
CONFIG_ARCH_MMAP_RND_BITS_MIN=28
CONFIG_ARCH_MMAP_RND_BITS_MAX=32
CONFIG_ARCH_MMAP_RND_COMPAT_BITS_MIN=8
CONFIG_ARCH_MMAP_RND_COMPAT_BITS_MAX=16
CONFIG_GENERIC_ISA_DMA=y
CONFIG_GENERIC_BUG=y
CONFIG_GENERIC_BUG_RELATIVE_POINTERS=y
CONFIG_ARCH_MAY_HAVE_PC_FDC=y
CONFIG_GENERIC_CALIBRATE_DELAY=y
CONFIG_ARCH_HAS_CPU_RELAX=y
CONFIG_ARCH_HAS_CACHE_LINE_SIZE=y
CONFIG_ARCH_HAS_FILTER_PGPROT=y
CONFIG_HAVE_SETUP_PER_CPU_AREA=y
CONFIG_NEED_PER_CPU_EMBED_FIRST_CHUNK=y
CONFIG_NEED_PER_CPU_PAGE_FIRST_CHUNK=y
CONFIG_ARCH_HIBERNATION_POSSIBLE=y
CONFIG_ARCH_SUSPEND_POSSIBLE=y
CONFIG_ARCH_WANT_GENERAL_HUGETLB=y
CONFIG_ZONE_DMA32=y
CONFIG_AUDIT_ARCH=y
CONFIG_ARCH_SUPPORTS_DEBUG_PAGEALLOC=y
CONFIG_HAVE_INTEL_TXT=y
CONFIG_X86_64_SMP=y
CONFIG_ARCH_SUPPORTS_UPROBES=y
CONFIG_FIX_EARLYCON_MEM=y
CONFIG_DYNAMIC_PHYSICAL_MASK=y
CONFIG_PGTABLE_LEVELS=5
CONFIG_CC_HAS_SANE_STACKPROTECTOR=y

#
# Processor type and features
#
CONFIG_ZONE_DMA=y
CONFIG_SMP=y
CONFIG_X86_FEATURE_NAMES=y
CONFIG_X86_X2APIC=y
CONFIG_X86_MPPARSE=y
# CONFIG_GOLDFISH is not set
CONFIG_RETPOLINE=y
CONFIG_X86_CPU_RESCTRL=y
CONFIG_X86_EXTENDED_PLATFORM=y
# CONFIG_X86_NUMACHIP is not set
# CONFIG_X86_VSMP is not set
CONFIG_X86_UV=y
# CONFIG_X86_GOLDFISH is not set
# CONFIG_X86_INTEL_MID is not set
CONFIG_X86_INTEL_LPSS=y
CONFIG_X86_AMD_PLATFORM_DEVICE=y
CONFIG_IOSF_MBI=y
# CONFIG_IOSF_MBI_DEBUG is not set
CONFIG_X86_SUPPORTS_MEMORY_FAILURE=y
# CONFIG_SCHED_OMIT_FRAME_POINTER is not set
CONFIG_HYPERVISOR_GUEST=y
CONFIG_PARAVIRT=y
CONFIG_PARAVIRT_XXL=y
# CONFIG_PARAVIRT_DEBUG is not set
CONFIG_PARAVIRT_SPINLOCKS=y
CONFIG_X86_HV_CALLBACK_VECTOR=y
CONFIG_XEN=y
CONFIG_XEN_PV=y
CONFIG_XEN_PV_SMP=y
# CONFIG_XEN_DOM0 is not set
CONFIG_XEN_PVHVM=y
CONFIG_XEN_PVHVM_SMP=y
CONFIG_XEN_512GB=y
CONFIG_XEN_SAVE_RESTORE=y
# CONFIG_XEN_DEBUG_FS is not set
# CONFIG_XEN_PVH is not set
CONFIG_KVM_GUEST=y
CONFIG_ARCH_CPUIDLE_HALTPOLL=y
# CONFIG_PVH is not set
CONFIG_PARAVIRT_TIME_ACCOUNTING=y
CONFIG_PARAVIRT_CLOCK=y
# CONFIG_JAILHOUSE_GUEST is not set
# CONFIG_ACRN_GUEST is not set
# CONFIG_MK8 is not set
# CONFIG_MPSC is not set
# CONFIG_MCORE2 is not set
# CONFIG_MATOM is not set
CONFIG_GENERIC_CPU=y
CONFIG_X86_INTERNODE_CACHE_SHIFT=6
CONFIG_X86_L1_CACHE_SHIFT=6
CONFIG_X86_TSC=y
CONFIG_X86_CMPXCHG64=y
CONFIG_X86_CMOV=y
CONFIG_X86_MINIMUM_CPU_FAMILY=64
CONFIG_X86_DEBUGCTLMSR=y
CONFIG_IA32_FEAT_CTL=y
CONFIG_X86_VMX_FEATURE_NAMES=y
# CONFIG_PROCESSOR_SELECT is not set
CONFIG_CPU_SUP_INTEL=y
CONFIG_CPU_SUP_AMD=y
CONFIG_CPU_SUP_HYGON=y
CONFIG_CPU_SUP_CENTAUR=y
CONFIG_CPU_SUP_ZHAOXIN=y
CONFIG_HPET_TIMER=y
CONFIG_HPET_EMULATE_RTC=y
CONFIG_DMI=y
CONFIG_GART_IOMMU=y
CONFIG_MAXSMP=y
CONFIG_NR_CPUS_RANGE_BEGIN=8192
CONFIG_NR_CPUS_RANGE_END=8192
CONFIG_NR_CPUS_DEFAULT=8192
CONFIG_NR_CPUS=8192
CONFIG_SCHED_SMT=y
CONFIG_SCHED_MC=y
CONFIG_SCHED_MC_PRIO=y
CONFIG_X86_LOCAL_APIC=y
CONFIG_X86_IO_APIC=y
CONFIG_X86_REROUTE_FOR_BROKEN_BOOT_IRQS=y
CONFIG_X86_MCE=y
# CONFIG_X86_MCELOG_LEGACY is not set
CONFIG_X86_MCE_INTEL=y
CONFIG_X86_MCE_AMD=y
CONFIG_X86_MCE_THRESHOLD=y
CONFIG_X86_MCE_INJECT=m
CONFIG_X86_THERMAL_VECTOR=y

#
# Performance monitoring
#
CONFIG_PERF_EVENTS_INTEL_UNCORE=y
CONFIG_PERF_EVENTS_INTEL_RAPL=y
CONFIG_PERF_EVENTS_INTEL_CSTATE=y
# CONFIG_PERF_EVENTS_AMD_POWER is not set
# end of Performance monitoring

CONFIG_X86_16BIT=y
CONFIG_X86_ESPFIX64=y
CONFIG_X86_VSYSCALL_EMULATION=y
CONFIG_X86_IOPL_IOPERM=y
CONFIG_I8K=m
CONFIG_MICROCODE=y
CONFIG_MICROCODE_INTEL=y
CONFIG_MICROCODE_AMD=y
CONFIG_MICROCODE_OLD_INTERFACE=y
CONFIG_X86_MSR=y
CONFIG_X86_CPUID=y
CONFIG_X86_5LEVEL=y
CONFIG_X86_DIRECT_GBPAGES=y
# CONFIG_X86_CPA_STATISTICS is not set
CONFIG_AMD_MEM_ENCRYPT=y
# CONFIG_AMD_MEM_ENCRYPT_ACTIVE_BY_DEFAULT is not set
CONFIG_NUMA=y
CONFIG_AMD_NUMA=y
CONFIG_X86_64_ACPI_NUMA=y
# CONFIG_NUMA_EMU is not set
CONFIG_NODES_SHIFT=10
CONFIG_ARCH_SPARSEMEM_ENABLE=y
CONFIG_ARCH_SPARSEMEM_DEFAULT=y
CONFIG_ARCH_SELECT_MEMORY_MODEL=y
CONFIG_ARCH_MEMORY_PROBE=y
CONFIG_ARCH_PROC_KCORE_TEXT=y
CONFIG_ILLEGAL_POINTER_VALUE=0xdead000000000000
CONFIG_X86_PMEM_LEGACY_DEVICE=y
CONFIG_X86_PMEM_LEGACY=m
CONFIG_X86_CHECK_BIOS_CORRUPTION=y
# CONFIG_X86_BOOTPARAM_MEMORY_CORRUPTION_CHECK is not set
CONFIG_X86_RESERVE_LOW=64
CONFIG_MTRR=y
CONFIG_MTRR_SANITIZER=y
CONFIG_MTRR_SANITIZER_ENABLE_DEFAULT=1
CONFIG_MTRR_SANITIZER_SPARE_REG_NR_DEFAULT=1
CONFIG_X86_PAT=y
CONFIG_ARCH_USES_PG_UNCACHED=y
CONFIG_ARCH_RANDOM=y
CONFIG_X86_SMAP=y
CONFIG_X86_UMIP=y
CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS=y
CONFIG_X86_INTEL_TSX_MODE_OFF=y
# CONFIG_X86_INTEL_TSX_MODE_ON is not set
# CONFIG_X86_INTEL_TSX_MODE_AUTO is not set
CONFIG_EFI=y
CONFIG_EFI_STUB=y
CONFIG_EFI_MIXED=y
CONFIG_SECCOMP=y
# CONFIG_HZ_100 is not set
# CONFIG_HZ_250 is not set
# CONFIG_HZ_300 is not set
CONFIG_HZ_1000=y
CONFIG_HZ=1000
CONFIG_SCHED_HRTICK=y
CONFIG_KEXEC=y
CONFIG_KEXEC_FILE=y
CONFIG_ARCH_HAS_KEXEC_PURGATORY=y
# CONFIG_KEXEC_SIG is not set
CONFIG_CRASH_DUMP=y
CONFIG_KEXEC_JUMP=y
CONFIG_PHYSICAL_START=0x1000000
CONFIG_RELOCATABLE=y
CONFIG_RANDOMIZE_BASE=y
CONFIG_X86_NEED_RELOCS=y
CONFIG_PHYSICAL_ALIGN=0x200000
CONFIG_DYNAMIC_MEMORY_LAYOUT=y
CONFIG_RANDOMIZE_MEMORY=y
CONFIG_RANDOMIZE_MEMORY_PHYSICAL_PADDING=0xa
CONFIG_HOTPLUG_CPU=y
CONFIG_BOOTPARAM_HOTPLUG_CPU0=y
# CONFIG_DEBUG_HOTPLUG_CPU0 is not set
# CONFIG_COMPAT_VDSO is not set
CONFIG_LEGACY_VSYSCALL_EMULATE=y
# CONFIG_LEGACY_VSYSCALL_XONLY is not set
# CONFIG_LEGACY_VSYSCALL_NONE is not set
# CONFIG_CMDLINE_BOOL is not set
CONFIG_MODIFY_LDT_SYSCALL=y
CONFIG_HAVE_LIVEPATCH=y
CONFIG_LIVEPATCH=y
# end of Processor type and features

CONFIG_ARCH_HAS_ADD_PAGES=y
CONFIG_ARCH_ENABLE_MEMORY_HOTPLUG=y
CONFIG_ARCH_ENABLE_MEMORY_HOTREMOVE=y
CONFIG_USE_PERCPU_NUMA_NODE_ID=y
CONFIG_ARCH_ENABLE_SPLIT_PMD_PTLOCK=y
CONFIG_ARCH_ENABLE_HUGEPAGE_MIGRATION=y
CONFIG_ARCH_ENABLE_THP_MIGRATION=y

#
# Power management and ACPI options
#
CONFIG_ARCH_HIBERNATION_HEADER=y
CONFIG_SUSPEND=y
CONFIG_SUSPEND_FREEZER=y
# CONFIG_SUSPEND_SKIP_SYNC is not set
CONFIG_HIBERNATE_CALLBACKS=y
CONFIG_HIBERNATION=y
CONFIG_HIBERNATION_SNAPSHOT_DEV=y
CONFIG_PM_STD_PARTITION=""
CONFIG_PM_SLEEP=y
CONFIG_PM_SLEEP_SMP=y
# CONFIG_PM_AUTOSLEEP is not set
# CONFIG_PM_WAKELOCKS is not set
CONFIG_PM=y
CONFIG_PM_DEBUG=y
CONFIG_PM_ADVANCED_DEBUG=y
# CONFIG_PM_TEST_SUSPEND is not set
CONFIG_PM_SLEEP_DEBUG=y
# CONFIG_DPM_WATCHDOG is not set
CONFIG_PM_TRACE=y
CONFIG_PM_TRACE_RTC=y
CONFIG_PM_CLK=y
# CONFIG_WQ_POWER_EFFICIENT_DEFAULT is not set
# CONFIG_ENERGY_MODEL is not set
CONFIG_ARCH_SUPPORTS_ACPI=y
CONFIG_ACPI=y
CONFIG_ACPI_LEGACY_TABLES_LOOKUP=y
CONFIG_ARCH_MIGHT_HAVE_ACPI_PDC=y
CONFIG_ACPI_SYSTEM_POWER_STATES_SUPPORT=y
# CONFIG_ACPI_DEBUGGER is not set
CONFIG_ACPI_SPCR_TABLE=y
CONFIG_ACPI_LPIT=y
CONFIG_ACPI_SLEEP=y
# CONFIG_ACPI_PROCFS_POWER is not set
CONFIG_ACPI_REV_OVERRIDE_POSSIBLE=y
CONFIG_ACPI_EC_DEBUGFS=m
CONFIG_ACPI_AC=y
CONFIG_ACPI_BATTERY=y
CONFIG_ACPI_BUTTON=y
CONFIG_ACPI_VIDEO=m
CONFIG_ACPI_FAN=y
# CONFIG_ACPI_TAD is not set
CONFIG_ACPI_DOCK=y
CONFIG_ACPI_CPU_FREQ_PSS=y
CONFIG_ACPI_PROCESSOR_CSTATE=y
CONFIG_ACPI_PROCESSOR_IDLE=y
CONFIG_ACPI_CPPC_LIB=y
CONFIG_ACPI_PROCESSOR=y
CONFIG_ACPI_IPMI=m
CONFIG_ACPI_HOTPLUG_CPU=y
CONFIG_ACPI_PROCESSOR_AGGREGATOR=m
CONFIG_ACPI_THERMAL=y
CONFIG_ARCH_HAS_ACPI_TABLE_UPGRADE=y
CONFIG_ACPI_TABLE_UPGRADE=y
# CONFIG_ACPI_DEBUG is not set
CONFIG_ACPI_PCI_SLOT=y
CONFIG_ACPI_CONTAINER=y
CONFIG_ACPI_HOTPLUG_MEMORY=y
CONFIG_ACPI_HOTPLUG_IOAPIC=y
CONFIG_ACPI_SBS=m
CONFIG_ACPI_HED=y
CONFIG_ACPI_CUSTOM_METHOD=m
CONFIG_ACPI_BGRT=y
# CONFIG_ACPI_REDUCED_HARDWARE_ONLY is not set
CONFIG_ACPI_NFIT=m
# CONFIG_NFIT_SECURITY_DEBUG is not set
CONFIG_ACPI_NUMA=y
# CONFIG_ACPI_HMAT is not set
CONFIG_HAVE_ACPI_APEI=y
CONFIG_HAVE_ACPI_APEI_NMI=y
CONFIG_ACPI_APEI=y
CONFIG_ACPI_APEI_GHES=y
CONFIG_ACPI_APEI_PCIEAER=y
CONFIG_ACPI_APEI_MEMORY_FAILURE=y
CONFIG_ACPI_APEI_EINJ=m
# CONFIG_ACPI_APEI_ERST_DEBUG is not set
# CONFIG_DPTF_POWER is not set
CONFIG_ACPI_WATCHDOG=y
CONFIG_ACPI_EXTLOG=m
CONFIG_ACPI_ADXL=y
# CONFIG_PMIC_OPREGION is not set
# CONFIG_ACPI_CONFIGFS is not set
CONFIG_X86_PM_TIMER=y
CONFIG_SFI=y

#
# CPU Frequency scaling
#
CONFIG_CPU_FREQ=y
CONFIG_CPU_FREQ_GOV_ATTR_SET=y
CONFIG_CPU_FREQ_GOV_COMMON=y
CONFIG_CPU_FREQ_STAT=y
# CONFIG_CPU_FREQ_DEFAULT_GOV_PERFORMANCE is not set
# CONFIG_CPU_FREQ_DEFAULT_GOV_POWERSAVE is not set
# CONFIG_CPU_FREQ_DEFAULT_GOV_USERSPACE is not set
CONFIG_CPU_FREQ_DEFAULT_GOV_ONDEMAND=y
# CONFIG_CPU_FREQ_DEFAULT_GOV_CONSERVATIVE is not set
# CONFIG_CPU_FREQ_DEFAULT_GOV_SCHEDUTIL is not set
CONFIG_CPU_FREQ_GOV_PERFORMANCE=y
CONFIG_CPU_FREQ_GOV_POWERSAVE=y
CONFIG_CPU_FREQ_GOV_USERSPACE=y
CONFIG_CPU_FREQ_GOV_ONDEMAND=y
CONFIG_CPU_FREQ_GOV_CONSERVATIVE=y
CONFIG_CPU_FREQ_GOV_SCHEDUTIL=y

#
# CPU frequency scaling drivers
#
CONFIG_X86_INTEL_PSTATE=y
CONFIG_X86_PCC_CPUFREQ=m
CONFIG_X86_ACPI_CPUFREQ=m
CONFIG_X86_ACPI_CPUFREQ_CPB=y
CONFIG_X86_POWERNOW_K8=m
CONFIG_X86_AMD_FREQ_SENSITIVITY=m
# CONFIG_X86_SPEEDSTEP_CENTRINO is not set
CONFIG_X86_P4_CLOCKMOD=m

#
# shared options
#
CONFIG_X86_SPEEDSTEP_LIB=m
# end of CPU Frequency scaling

#
# CPU Idle
#
CONFIG_CPU_IDLE=y
# CONFIG_CPU_IDLE_GOV_LADDER is not set
CONFIG_CPU_IDLE_GOV_MENU=y
# CONFIG_CPU_IDLE_GOV_TEO is not set
# CONFIG_CPU_IDLE_GOV_HALTPOLL is not set
CONFIG_HALTPOLL_CPUIDLE=y
# end of CPU Idle

CONFIG_INTEL_IDLE=y
# end of Power management and ACPI options

#
# Bus options (PCI etc.)
#
CONFIG_PCI_DIRECT=y
CONFIG_PCI_MMCONFIG=y
CONFIG_PCI_XEN=y
CONFIG_MMCONF_FAM10H=y
# CONFIG_PCI_CNB20LE_QUIRK is not set
# CONFIG_ISA_BUS is not set
CONFIG_ISA_DMA_API=y
CONFIG_AMD_NB=y
# CONFIG_X86_SYSFB is not set
# end of Bus options (PCI etc.)

#
# Binary Emulations
#
CONFIG_IA32_EMULATION=y
# CONFIG_X86_X32 is not set
CONFIG_COMPAT_32=y
CONFIG_COMPAT=y
CONFIG_COMPAT_FOR_U64_ALIGNMENT=y
CONFIG_SYSVIPC_COMPAT=y
# end of Binary Emulations

#
# Firmware Drivers
#
CONFIG_EDD=m
# CONFIG_EDD_OFF is not set
CONFIG_FIRMWARE_MEMMAP=y
CONFIG_DMIID=y
CONFIG_DMI_SYSFS=y
CONFIG_DMI_SCAN_MACHINE_NON_EFI_FALLBACK=y
CONFIG_ISCSI_IBFT_FIND=y
CONFIG_ISCSI_IBFT=m
CONFIG_FW_CFG_SYSFS=y
# CONFIG_FW_CFG_SYSFS_CMDLINE is not set
# CONFIG_GOOGLE_FIRMWARE is not set

#
# EFI (Extensible Firmware Interface) Support
#
CONFIG_EFI_VARS=y
CONFIG_EFI_ESRT=y
CONFIG_EFI_VARS_PSTORE=y
CONFIG_EFI_VARS_PSTORE_DEFAULT_DISABLE=y
CONFIG_EFI_RUNTIME_MAP=y
# CONFIG_EFI_FAKE_MEMMAP is not set
CONFIG_EFI_RUNTIME_WRAPPERS=y
CONFIG_EFI_GENERIC_STUB_INITRD_CMDLINE_LOADER=y
# CONFIG_EFI_BOOTLOADER_CONTROL is not set
# CONFIG_EFI_CAPSULE_LOADER is not set
# CONFIG_EFI_TEST is not set
CONFIG_APPLE_PROPERTIES=y
# CONFIG_RESET_ATTACK_MITIGATION is not set
# CONFIG_EFI_RCI2_TABLE is not set
# CONFIG_EFI_DISABLE_PCI_DMA is not set
# end of EFI (Extensible Firmware Interface) Support

CONFIG_UEFI_CPER=y
CONFIG_UEFI_CPER_X86=y
CONFIG_EFI_DEV_PATH_PARSER=y
CONFIG_EFI_EARLYCON=y

#
# Tegra firmware driver
#
# end of Tegra firmware driver
# end of Firmware Drivers

CONFIG_HAVE_KVM=y
CONFIG_HAVE_KVM_IRQCHIP=y
CONFIG_HAVE_KVM_IRQFD=y
CONFIG_HAVE_KVM_IRQ_ROUTING=y
CONFIG_HAVE_KVM_EVENTFD=y
CONFIG_KVM_MMIO=y
CONFIG_KVM_ASYNC_PF=y
CONFIG_HAVE_KVM_MSI=y
CONFIG_HAVE_KVM_CPU_RELAX_INTERCEPT=y
CONFIG_KVM_VFIO=y
CONFIG_KVM_GENERIC_DIRTYLOG_READ_PROTECT=y
CONFIG_KVM_COMPAT=y
CONFIG_HAVE_KVM_IRQ_BYPASS=y
CONFIG_HAVE_KVM_NO_POLL=y
CONFIG_VIRTUALIZATION=y
CONFIG_KVM=y
# CONFIG_KVM_WERROR is not set
CONFIG_KVM_INTEL=y
CONFIG_KVM_AMD=y
CONFIG_KVM_AMD_SEV=y
CONFIG_KVM_MMU_AUDIT=y
CONFIG_AS_AVX512=y
CONFIG_AS_SHA1_NI=y
CONFIG_AS_SHA256_NI=y
CONFIG_AS_TPAUSE=y

#
# General architecture-dependent options
#
CONFIG_CRASH_CORE=y
CONFIG_KEXEC_CORE=y
CONFIG_HOTPLUG_SMT=y
CONFIG_OPROFILE=m
CONFIG_OPROFILE_EVENT_MULTIPLEX=y
CONFIG_HAVE_OPROFILE=y
CONFIG_OPROFILE_NMI_TIMER=y
CONFIG_KPROBES=y
CONFIG_JUMP_LABEL=y
# CONFIG_STATIC_KEYS_SELFTEST is not set
CONFIG_OPTPROBES=y
CONFIG_KPROBES_ON_FTRACE=y
CONFIG_UPROBES=y
CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS=y
CONFIG_ARCH_USE_BUILTIN_BSWAP=y
CONFIG_KRETPROBES=y
CONFIG_USER_RETURN_NOTIFIER=y
CONFIG_HAVE_IOREMAP_PROT=y
CONFIG_HAVE_KPROBES=y
CONFIG_HAVE_KRETPROBES=y
CONFIG_HAVE_OPTPROBES=y
CONFIG_HAVE_KPROBES_ON_FTRACE=y
CONFIG_HAVE_FUNCTION_ERROR_INJECTION=y
CONFIG_HAVE_NMI=y
CONFIG_HAVE_ARCH_TRACEHOOK=y
CONFIG_HAVE_DMA_CONTIGUOUS=y
CONFIG_GENERIC_SMP_IDLE_THREAD=y
CONFIG_ARCH_HAS_FORTIFY_SOURCE=y
CONFIG_ARCH_HAS_SET_MEMORY=y
CONFIG_ARCH_HAS_SET_DIRECT_MAP=y
CONFIG_HAVE_ARCH_THREAD_STRUCT_WHITELIST=y
CONFIG_ARCH_WANTS_DYNAMIC_TASK_STRUCT=y
CONFIG_HAVE_ASM_MODVERSIONS=y
CONFIG_HAVE_REGS_AND_STACK_ACCESS_API=y
CONFIG_HAVE_RSEQ=y
CONFIG_HAVE_FUNCTION_ARG_ACCESS_API=y
CONFIG_HAVE_HW_BREAKPOINT=y
CONFIG_HAVE_MIXED_BREAKPOINTS_REGS=y
CONFIG_HAVE_USER_RETURN_NOTIFIER=y
CONFIG_HAVE_PERF_EVENTS_NMI=y
CONFIG_HAVE_HARDLOCKUP_DETECTOR_PERF=y
CONFIG_HAVE_PERF_REGS=y
CONFIG_HAVE_PERF_USER_STACK_DUMP=y
CONFIG_HAVE_ARCH_JUMP_LABEL=y
CONFIG_HAVE_ARCH_JUMP_LABEL_RELATIVE=y
CONFIG_MMU_GATHER_TABLE_FREE=y
CONFIG_MMU_GATHER_RCU_TABLE_FREE=y
CONFIG_ARCH_HAVE_NMI_SAFE_CMPXCHG=y
CONFIG_HAVE_ALIGNED_STRUCT_PAGE=y
CONFIG_HAVE_CMPXCHG_LOCAL=y
CONFIG_HAVE_CMPXCHG_DOUBLE=y
CONFIG_ARCH_WANT_COMPAT_IPC_PARSE_VERSION=y
CONFIG_ARCH_WANT_OLD_COMPAT_IPC=y
CONFIG_HAVE_ARCH_SECCOMP_FILTER=y
CONFIG_SECCOMP_FILTER=y
CONFIG_HAVE_ARCH_STACKLEAK=y
CONFIG_HAVE_STACKPROTECTOR=y
CONFIG_CC_HAS_STACKPROTECTOR_NONE=y
CONFIG_STACKPROTECTOR=y
CONFIG_STACKPROTECTOR_STRONG=y
CONFIG_HAVE_ARCH_WITHIN_STACK_FRAMES=y
CONFIG_HAVE_CONTEXT_TRACKING=y
CONFIG_HAVE_VIRT_CPU_ACCOUNTING_GEN=y
CONFIG_HAVE_IRQ_TIME_ACCOUNTING=y
CONFIG_HAVE_MOVE_PMD=y
CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE=y
CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD=y
CONFIG_HAVE_ARCH_HUGE_VMAP=y
CONFIG_ARCH_WANT_HUGE_PMD_SHARE=y
CONFIG_HAVE_ARCH_SOFT_DIRTY=y
CONFIG_HAVE_MOD_ARCH_SPECIFIC=y
CONFIG_MODULES_USE_ELF_RELA=y
CONFIG_ARCH_HAS_ELF_RANDOMIZE=y
CONFIG_HAVE_ARCH_MMAP_RND_BITS=y
CONFIG_HAVE_EXIT_THREAD=y
CONFIG_ARCH_MMAP_RND_BITS=28
CONFIG_HAVE_ARCH_MMAP_RND_COMPAT_BITS=y
CONFIG_ARCH_MMAP_RND_COMPAT_BITS=8
CONFIG_HAVE_ARCH_COMPAT_MMAP_BASES=y
CONFIG_HAVE_COPY_THREAD_TLS=y
CONFIG_HAVE_STACK_VALIDATION=y
CONFIG_HAVE_RELIABLE_STACKTRACE=y
CONFIG_OLD_SIGSUSPEND3=y
CONFIG_COMPAT_OLD_SIGACTION=y
CONFIG_COMPAT_32BIT_TIME=y
CONFIG_HAVE_ARCH_VMAP_STACK=y
CONFIG_VMAP_STACK=y
CONFIG_ARCH_HAS_STRICT_KERNEL_RWX=y
CONFIG_STRICT_KERNEL_RWX=y
CONFIG_ARCH_HAS_STRICT_MODULE_RWX=y
CONFIG_STRICT_MODULE_RWX=y
CONFIG_HAVE_ARCH_PREL32_RELOCATIONS=y
CONFIG_ARCH_USE_MEMREMAP_PROT=y
# CONFIG_LOCK_EVENT_COUNTS is not set
CONFIG_ARCH_HAS_MEM_ENCRYPT=y

#
# GCOV-based kernel profiling
#
# CONFIG_GCOV_KERNEL is not set
CONFIG_ARCH_HAS_GCOV_PROFILE_ALL=y
# end of GCOV-based kernel profiling

CONFIG_HAVE_GCC_PLUGINS=y
# end of General architecture-dependent options

CONFIG_RT_MUTEXES=y
CONFIG_BASE_SMALL=0
CONFIG_MODULE_SIG_FORMAT=y
CONFIG_MODULES=y
CONFIG_MODULE_FORCE_LOAD=y
CONFIG_MODULE_UNLOAD=y
# CONFIG_MODULE_FORCE_UNLOAD is not set
# CONFIG_MODVERSIONS is not set
# CONFIG_MODULE_SRCVERSION_ALL is not set
CONFIG_MODULE_SIG=y
# CONFIG_MODULE_SIG_FORCE is not set
CONFIG_MODULE_SIG_ALL=y
# CONFIG_MODULE_SIG_SHA1 is not set
# CONFIG_MODULE_SIG_SHA224 is not set
CONFIG_MODULE_SIG_SHA256=y
# CONFIG_MODULE_SIG_SHA384 is not set
# CONFIG_MODULE_SIG_SHA512 is not set
CONFIG_MODULE_SIG_HASH="sha256"
# CONFIG_MODULE_COMPRESS is not set
# CONFIG_MODULE_ALLOW_MISSING_NAMESPACE_IMPORTS is not set
# CONFIG_UNUSED_SYMBOLS is not set
# CONFIG_TRIM_UNUSED_KSYMS is not set
CONFIG_MODULES_TREE_LOOKUP=y
CONFIG_BLOCK=y
CONFIG_BLK_SCSI_REQUEST=y
CONFIG_BLK_CGROUP_RWSTAT=y
CONFIG_BLK_DEV_BSG=y
CONFIG_BLK_DEV_BSGLIB=y
CONFIG_BLK_DEV_INTEGRITY=y
CONFIG_BLK_DEV_INTEGRITY_T10=m
# CONFIG_BLK_DEV_ZONED is not set
CONFIG_BLK_DEV_THROTTLING=y
# CONFIG_BLK_DEV_THROTTLING_LOW is not set
# CONFIG_BLK_CMDLINE_PARSER is not set
# CONFIG_BLK_WBT is not set
# CONFIG_BLK_CGROUP_IOLATENCY is not set
# CONFIG_BLK_CGROUP_IOCOST is not set
CONFIG_BLK_DEBUG_FS=y
# CONFIG_BLK_SED_OPAL is not set
# CONFIG_BLK_INLINE_ENCRYPTION is not set

#
# Partition Types
#
CONFIG_PARTITION_ADVANCED=y
# CONFIG_ACORN_PARTITION is not set
# CONFIG_AIX_PARTITION is not set
CONFIG_OSF_PARTITION=y
CONFIG_AMIGA_PARTITION=y
# CONFIG_ATARI_PARTITION is not set
CONFIG_MAC_PARTITION=y
CONFIG_MSDOS_PARTITION=y
CONFIG_BSD_DISKLABEL=y
CONFIG_MINIX_SUBPARTITION=y
CONFIG_SOLARIS_X86_PARTITION=y
CONFIG_UNIXWARE_DISKLABEL=y
# CONFIG_LDM_PARTITION is not set
CONFIG_SGI_PARTITION=y
# CONFIG_ULTRIX_PARTITION is not set
CONFIG_SUN_PARTITION=y
CONFIG_KARMA_PARTITION=y
CONFIG_EFI_PARTITION=y
# CONFIG_SYSV68_PARTITION is not set
# CONFIG_CMDLINE_PARTITION is not set
# end of Partition Types

CONFIG_BLOCK_COMPAT=y
CONFIG_BLK_MQ_PCI=y
CONFIG_BLK_MQ_VIRTIO=y
CONFIG_BLK_PM=y

#
# IO Schedulers
#
CONFIG_MQ_IOSCHED_DEADLINE=y
CONFIG_MQ_IOSCHED_KYBER=y
# CONFIG_IOSCHED_BFQ is not set
# end of IO Schedulers

CONFIG_PREEMPT_NOTIFIERS=y
CONFIG_PADATA=y
CONFIG_ASN1=y
CONFIG_UNINLINE_SPIN_UNLOCK=y
CONFIG_ARCH_SUPPORTS_ATOMIC_RMW=y
CONFIG_MUTEX_SPIN_ON_OWNER=y
CONFIG_RWSEM_SPIN_ON_OWNER=y
CONFIG_LOCK_SPIN_ON_OWNER=y
CONFIG_ARCH_USE_QUEUED_SPINLOCKS=y
CONFIG_QUEUED_SPINLOCKS=y
CONFIG_ARCH_USE_QUEUED_RWLOCKS=y
CONFIG_QUEUED_RWLOCKS=y
CONFIG_ARCH_HAS_NON_OVERLAPPING_ADDRESS_SPACE=y
CONFIG_ARCH_HAS_SYNC_CORE_BEFORE_USERMODE=y
CONFIG_ARCH_HAS_SYSCALL_WRAPPER=y
CONFIG_FREEZER=y

#
# Executable file formats
#
CONFIG_BINFMT_ELF=y
CONFIG_COMPAT_BINFMT_ELF=y
CONFIG_ELFCORE=y
CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS=y
CONFIG_BINFMT_SCRIPT=y
CONFIG_BINFMT_MISC=m
CONFIG_COREDUMP=y
# end of Executable file formats

#
# Memory Management options
#
CONFIG_SELECT_MEMORY_MODEL=y
CONFIG_SPARSEMEM_MANUAL=y
CONFIG_SPARSEMEM=y
CONFIG_NEED_MULTIPLE_NODES=y
CONFIG_HAVE_MEMORY_PRESENT=y
CONFIG_SPARSEMEM_EXTREME=y
CONFIG_SPARSEMEM_VMEMMAP_ENABLE=y
CONFIG_SPARSEMEM_VMEMMAP=y
CONFIG_HAVE_FAST_GUP=y
CONFIG_NUMA_KEEP_MEMINFO=y
CONFIG_MEMORY_ISOLATION=y
CONFIG_HAVE_BOOTMEM_INFO_NODE=y
CONFIG_MEMORY_HOTPLUG=y
CONFIG_MEMORY_HOTPLUG_SPARSE=y
# CONFIG_MEMORY_HOTPLUG_DEFAULT_ONLINE is not set
CONFIG_MEMORY_HOTREMOVE=y
CONFIG_SPLIT_PTLOCK_CPUS=4
CONFIG_MEMORY_BALLOON=y
CONFIG_BALLOON_COMPACTION=y
CONFIG_COMPACTION=y
CONFIG_PAGE_REPORTING=y
CONFIG_MIGRATION=y
CONFIG_CONTIG_ALLOC=y
CONFIG_PHYS_ADDR_T_64BIT=y
CONFIG_BOUNCE=y
CONFIG_VIRT_TO_BUS=y
CONFIG_MMU_NOTIFIER=y
CONFIG_KSM=y
CONFIG_DEFAULT_MMAP_MIN_ADDR=4096
CONFIG_ARCH_SUPPORTS_MEMORY_FAILURE=y
CONFIG_MEMORY_FAILURE=y
CONFIG_HWPOISON_INJECT=m
CONFIG_TRANSPARENT_HUGEPAGE=y
CONFIG_TRANSPARENT_HUGEPAGE_ALWAYS=y
# CONFIG_TRANSPARENT_HUGEPAGE_MADVISE is not set
CONFIG_ARCH_WANTS_THP_SWAP=y
CONFIG_THP_SWAP=y
CONFIG_CLEANCACHE=y
CONFIG_FRONTSWAP=y
CONFIG_CMA=y
# CONFIG_CMA_DEBUG is not set
# CONFIG_CMA_DEBUGFS is not set
CONFIG_CMA_AREAS=7
CONFIG_MEM_SOFT_DIRTY=y
CONFIG_ZSWAP=y
# CONFIG_ZSWAP_COMPRESSOR_DEFAULT_DEFLATE is not set
CONFIG_ZSWAP_COMPRESSOR_DEFAULT_LZO=y
# CONFIG_ZSWAP_COMPRESSOR_DEFAULT_842 is not set
# CONFIG_ZSWAP_COMPRESSOR_DEFAULT_LZ4 is not set
# CONFIG_ZSWAP_COMPRESSOR_DEFAULT_LZ4HC is not set
# CONFIG_ZSWAP_COMPRESSOR_DEFAULT_ZSTD is not set
CONFIG_ZSWAP_COMPRESSOR_DEFAULT="lzo"
CONFIG_ZSWAP_ZPOOL_DEFAULT_ZBUD=y
# CONFIG_ZSWAP_ZPOOL_DEFAULT_Z3FOLD is not set
# CONFIG_ZSWAP_ZPOOL_DEFAULT_ZSMALLOC is not set
CONFIG_ZSWAP_ZPOOL_DEFAULT="zbud"
# CONFIG_ZSWAP_DEFAULT_ON is not set
CONFIG_ZPOOL=y
CONFIG_ZBUD=y
# CONFIG_Z3FOLD is not set
CONFIG_ZSMALLOC=y
# CONFIG_ZSMALLOC_PGTABLE_MAPPING is not set
# CONFIG_ZSMALLOC_STAT is not set
CONFIG_GENERIC_EARLY_IOREMAP=y
CONFIG_DEFERRED_STRUCT_PAGE_INIT=y
CONFIG_IDLE_PAGE_TRACKING=y
CONFIG_ARCH_HAS_PTE_DEVMAP=y
CONFIG_ZONE_DEVICE=y
CONFIG_DEV_PAGEMAP_OPS=y
# CONFIG_DEVICE_PRIVATE is not set
CONFIG_FRAME_VECTOR=y
CONFIG_ARCH_USES_HIGH_VMA_FLAGS=y
CONFIG_ARCH_HAS_PKEYS=y
# CONFIG_PERCPU_STATS is not set
CONFIG_GUP_BENCHMARK=y
# CONFIG_READ_ONLY_THP_FOR_FS is not set
CONFIG_ARCH_HAS_PTE_SPECIAL=y
CONFIG_MAPPING_DIRTY_HELPERS=y
# end of Memory Management options

CONFIG_NET=y
CONFIG_COMPAT_NETLINK_MESSAGES=y
CONFIG_NET_INGRESS=y
CONFIG_NET_EGRESS=y
CONFIG_NET_REDIRECT=y
CONFIG_SKB_EXTENSIONS=y

#
# Networking options
#
CONFIG_PACKET=y
CONFIG_PACKET_DIAG=m
CONFIG_UNIX=y
CONFIG_UNIX_SCM=y
CONFIG_UNIX_DIAG=m
CONFIG_TLS=m
# CONFIG_TLS_DEVICE is not set
# CONFIG_TLS_TOE is not set
CONFIG_XFRM=y
CONFIG_XFRM_ALGO=y
CONFIG_XFRM_USER=y
# CONFIG_XFRM_INTERFACE is not set
CONFIG_XFRM_SUB_POLICY=y
CONFIG_XFRM_MIGRATE=y
CONFIG_XFRM_STATISTICS=y
CONFIG_XFRM_IPCOMP=m
CONFIG_NET_KEY=m
CONFIG_NET_KEY_MIGRATE=y
CONFIG_XDP_SOCKETS=y
# CONFIG_XDP_SOCKETS_DIAG is not set
CONFIG_INET=y
CONFIG_IP_MULTICAST=y
CONFIG_IP_ADVANCED_ROUTER=y
CONFIG_IP_FIB_TRIE_STATS=y
CONFIG_IP_MULTIPLE_TABLES=y
CONFIG_IP_ROUTE_MULTIPATH=y
CONFIG_IP_ROUTE_VERBOSE=y
CONFIG_IP_ROUTE_CLASSID=y
CONFIG_IP_PNP=y
CONFIG_IP_PNP_DHCP=y
# CONFIG_IP_PNP_BOOTP is not set
# CONFIG_IP_PNP_RARP is not set
CONFIG_NET_IPIP=y
CONFIG_NET_IPGRE_DEMUX=y
CONFIG_NET_IP_TUNNEL=y
CONFIG_NET_IPGRE=y
CONFIG_NET_IPGRE_BROADCAST=y
CONFIG_IP_MROUTE_COMMON=y
CONFIG_IP_MROUTE=y
CONFIG_IP_MROUTE_MULTIPLE_TABLES=y
CONFIG_IP_PIMSM_V1=y
CONFIG_IP_PIMSM_V2=y
CONFIG_SYN_COOKIES=y
CONFIG_NET_IPVTI=y
CONFIG_NET_UDP_TUNNEL=y
CONFIG_NET_FOU=y
CONFIG_NET_FOU_IP_TUNNELS=y
CONFIG_INET_AH=m
CONFIG_INET_ESP=m
# CONFIG_INET_ESP_OFFLOAD is not set
# CONFIG_INET_ESPINTCP is not set
CONFIG_INET_IPCOMP=m
CONFIG_INET_XFRM_TUNNEL=m
CONFIG_INET_TUNNEL=y
CONFIG_INET_DIAG=m
CONFIG_INET_TCP_DIAG=m
CONFIG_INET_UDP_DIAG=m
# CONFIG_INET_RAW_DIAG is not set
# CONFIG_INET_DIAG_DESTROY is not set
CONFIG_TCP_CONG_ADVANCED=y
CONFIG_TCP_CONG_BIC=m
CONFIG_TCP_CONG_CUBIC=y
CONFIG_TCP_CONG_WESTWOOD=m
CONFIG_TCP_CONG_HTCP=m
CONFIG_TCP_CONG_HSTCP=m
CONFIG_TCP_CONG_HYBLA=m
CONFIG_TCP_CONG_VEGAS=m
# CONFIG_TCP_CONG_NV is not set
CONFIG_TCP_CONG_SCALABLE=m
CONFIG_TCP_CONG_LP=m
CONFIG_TCP_CONG_VENO=m
CONFIG_TCP_CONG_YEAH=m
CONFIG_TCP_CONG_ILLINOIS=m
CONFIG_TCP_CONG_DCTCP=m
# CONFIG_TCP_CONG_CDG is not set
# CONFIG_TCP_CONG_BBR is not set
CONFIG_DEFAULT_CUBIC=y
# CONFIG_DEFAULT_RENO is not set
CONFIG_DEFAULT_TCP_CONG="cubic"
CONFIG_TCP_MD5SIG=y
CONFIG_IPV6=y
CONFIG_IPV6_ROUTER_PREF=y
CONFIG_IPV6_ROUTE_INFO=y
CONFIG_IPV6_OPTIMISTIC_DAD=y
CONFIG_INET6_AH=m
CONFIG_INET6_ESP=m
# CONFIG_INET6_ESP_OFFLOAD is not set
# CONFIG_INET6_ESPINTCP is not set
CONFIG_INET6_IPCOMP=m
CONFIG_IPV6_MIP6=m
# CONFIG_IPV6_ILA is not set
CONFIG_INET6_XFRM_TUNNEL=m
CONFIG_INET6_TUNNEL=y
CONFIG_IPV6_VTI=y
CONFIG_IPV6_SIT=m
CONFIG_IPV6_SIT_6RD=y
CONFIG_IPV6_NDISC_NODETYPE=y
CONFIG_IPV6_TUNNEL=y
CONFIG_IPV6_GRE=y
CONFIG_IPV6_FOU=y
CONFIG_IPV6_FOU_TUNNEL=y
CONFIG_IPV6_MULTIPLE_TABLES=y
# CONFIG_IPV6_SUBTREES is not set
CONFIG_IPV6_MROUTE=y
CONFIG_IPV6_MROUTE_MULTIPLE_TABLES=y
CONFIG_IPV6_PIMSM_V2=y
CONFIG_IPV6_SEG6_LWTUNNEL=y
# CONFIG_IPV6_SEG6_HMAC is not set
CONFIG_IPV6_SEG6_BPF=y
# CONFIG_IPV6_RPL_LWTUNNEL is not set
CONFIG_NETLABEL=y
CONFIG_MPTCP=y
CONFIG_MPTCP_IPV6=y
# CONFIG_MPTCP_HMAC_TEST is not set
CONFIG_NETWORK_SECMARK=y
CONFIG_NET_PTP_CLASSIFY=y
CONFIG_NETWORK_PHY_TIMESTAMPING=y
CONFIG_NETFILTER=y
CONFIG_NETFILTER_ADVANCED=y
CONFIG_BRIDGE_NETFILTER=m

#
# Core Netfilter Configuration
#
CONFIG_NETFILTER_INGRESS=y
CONFIG_NETFILTER_NETLINK=m
CONFIG_NETFILTER_FAMILY_BRIDGE=y
CONFIG_NETFILTER_FAMILY_ARP=y
CONFIG_NETFILTER_NETLINK_ACCT=m
CONFIG_NETFILTER_NETLINK_QUEUE=m
CONFIG_NETFILTER_NETLINK_LOG=m
CONFIG_NETFILTER_NETLINK_OSF=m
CONFIG_NF_CONNTRACK=m
CONFIG_NF_LOG_COMMON=m
# CONFIG_NF_LOG_NETDEV is not set
CONFIG_NETFILTER_CONNCOUNT=m
CONFIG_NF_CONNTRACK_MARK=y
CONFIG_NF_CONNTRACK_SECMARK=y
CONFIG_NF_CONNTRACK_ZONES=y
CONFIG_NF_CONNTRACK_PROCFS=y
CONFIG_NF_CONNTRACK_EVENTS=y
CONFIG_NF_CONNTRACK_TIMEOUT=y
CONFIG_NF_CONNTRACK_TIMESTAMP=y
CONFIG_NF_CONNTRACK_LABELS=y
CONFIG_NF_CT_PROTO_DCCP=y
CONFIG_NF_CT_PROTO_GRE=y
CONFIG_NF_CT_PROTO_SCTP=y
CONFIG_NF_CT_PROTO_UDPLITE=y
CONFIG_NF_CONNTRACK_AMANDA=m
CONFIG_NF_CONNTRACK_FTP=m
CONFIG_NF_CONNTRACK_H323=m
CONFIG_NF_CONNTRACK_IRC=m
CONFIG_NF_CONNTRACK_BROADCAST=m
CONFIG_NF_CONNTRACK_NETBIOS_NS=m
CONFIG_NF_CONNTRACK_SNMP=m
CONFIG_NF_CONNTRACK_PPTP=m
CONFIG_NF_CONNTRACK_SANE=m
CONFIG_NF_CONNTRACK_SIP=m
CONFIG_NF_CONNTRACK_TFTP=m
CONFIG_NF_CT_NETLINK=m
CONFIG_NF_CT_NETLINK_TIMEOUT=m
# CONFIG_NETFILTER_NETLINK_GLUE_CT is not set
CONFIG_NF_NAT=m
CONFIG_NF_NAT_AMANDA=m
CONFIG_NF_NAT_FTP=m
CONFIG_NF_NAT_IRC=m
CONFIG_NF_NAT_SIP=m
CONFIG_NF_NAT_TFTP=m
CONFIG_NF_NAT_REDIRECT=y
CONFIG_NF_NAT_MASQUERADE=y
CONFIG_NETFILTER_SYNPROXY=m
CONFIG_NF_TABLES=m
CONFIG_NF_TABLES_INET=y
CONFIG_NF_TABLES_NETDEV=y
# CONFIG_NFT_NUMGEN is not set
CONFIG_NFT_CT=m
CONFIG_NFT_FLOW_OFFLOAD=m
CONFIG_NFT_COUNTER=m
# CONFIG_NFT_CONNLIMIT is not set
CONFIG_NFT_LOG=m
CONFIG_NFT_LIMIT=m
CONFIG_NFT_MASQ=m
CONFIG_NFT_REDIR=m
CONFIG_NFT_NAT=m
# CONFIG_NFT_TUNNEL is not set
CONFIG_NFT_OBJREF=m
CONFIG_NFT_QUEUE=m
# CONFIG_NFT_QUOTA is not set
CONFIG_NFT_REJECT=m
CONFIG_NFT_REJECT_INET=m
CONFIG_NFT_COMPAT=m
CONFIG_NFT_HASH=m
# CONFIG_NFT_XFRM is not set
# CONFIG_NFT_SOCKET is not set
# CONFIG_NFT_OSF is not set
# CONFIG_NFT_TPROXY is not set
# CONFIG_NFT_SYNPROXY is not set
# CONFIG_NF_DUP_NETDEV is not set
# CONFIG_NFT_DUP_NETDEV is not set
# CONFIG_NFT_FWD_NETDEV is not set
CONFIG_NF_FLOW_TABLE_INET=m
CONFIG_NF_FLOW_TABLE=m
CONFIG_NETFILTER_XTABLES=y

#
# Xtables combined modules
#
CONFIG_NETFILTER_XT_MARK=m
CONFIG_NETFILTER_XT_CONNMARK=m
CONFIG_NETFILTER_XT_SET=m

#
# Xtables targets
#
CONFIG_NETFILTER_XT_TARGET_AUDIT=m
CONFIG_NETFILTER_XT_TARGET_CHECKSUM=m
CONFIG_NETFILTER_XT_TARGET_CLASSIFY=m
CONFIG_NETFILTER_XT_TARGET_CONNMARK=m
CONFIG_NETFILTER_XT_TARGET_CONNSECMARK=m
CONFIG_NETFILTER_XT_TARGET_CT=m
CONFIG_NETFILTER_XT_TARGET_DSCP=m
CONFIG_NETFILTER_XT_TARGET_HL=m
CONFIG_NETFILTER_XT_TARGET_HMARK=m
CONFIG_NETFILTER_XT_TARGET_IDLETIMER=m
CONFIG_NETFILTER_XT_TARGET_LED=m
CONFIG_NETFILTER_XT_TARGET_LOG=m
CONFIG_NETFILTER_XT_TARGET_MARK=m
CONFIG_NETFILTER_XT_NAT=m
CONFIG_NETFILTER_XT_TARGET_NETMAP=m
CONFIG_NETFILTER_XT_TARGET_NFLOG=m
CONFIG_NETFILTER_XT_TARGET_NFQUEUE=m
CONFIG_NETFILTER_XT_TARGET_NOTRACK=m
CONFIG_NETFILTER_XT_TARGET_RATEEST=m
CONFIG_NETFILTER_XT_TARGET_REDIRECT=m
CONFIG_NETFILTER_XT_TARGET_MASQUERADE=m
CONFIG_NETFILTER_XT_TARGET_TEE=m
CONFIG_NETFILTER_XT_TARGET_TPROXY=m
CONFIG_NETFILTER_XT_TARGET_TRACE=m
CONFIG_NETFILTER_XT_TARGET_SECMARK=m
CONFIG_NETFILTER_XT_TARGET_TCPMSS=m
CONFIG_NETFILTER_XT_TARGET_TCPOPTSTRIP=m

#
# Xtables matches
#
CONFIG_NETFILTER_XT_MATCH_ADDRTYPE=m
CONFIG_NETFILTER_XT_MATCH_BPF=m
CONFIG_NETFILTER_XT_MATCH_CGROUP=m
CONFIG_NETFILTER_XT_MATCH_CLUSTER=m
CONFIG_NETFILTER_XT_MATCH_COMMENT=m
CONFIG_NETFILTER_XT_MATCH_CONNBYTES=m
CONFIG_NETFILTER_XT_MATCH_CONNLABEL=m
CONFIG_NETFILTER_XT_MATCH_CONNLIMIT=m
CONFIG_NETFILTER_XT_MATCH_CONNMARK=m
CONFIG_NETFILTER_XT_MATCH_CONNTRACK=m
CONFIG_NETFILTER_XT_MATCH_CPU=m
CONFIG_NETFILTER_XT_MATCH_DCCP=m
CONFIG_NETFILTER_XT_MATCH_DEVGROUP=m
CONFIG_NETFILTER_XT_MATCH_DSCP=m
CONFIG_NETFILTER_XT_MATCH_ECN=m
CONFIG_NETFILTER_XT_MATCH_ESP=m
CONFIG_NETFILTER_XT_MATCH_HASHLIMIT=m
CONFIG_NETFILTER_XT_MATCH_HELPER=m
CONFIG_NETFILTER_XT_MATCH_HL=m
# CONFIG_NETFILTER_XT_MATCH_IPCOMP is not set
CONFIG_NETFILTER_XT_MATCH_IPRANGE=m
CONFIG_NETFILTER_XT_MATCH_IPVS=m
CONFIG_NETFILTER_XT_MATCH_L2TP=m
CONFIG_NETFILTER_XT_MATCH_LENGTH=m
CONFIG_NETFILTER_XT_MATCH_LIMIT=m
CONFIG_NETFILTER_XT_MATCH_MAC=m
CONFIG_NETFILTER_XT_MATCH_MARK=m
CONFIG_NETFILTER_XT_MATCH_MULTIPORT=m
CONFIG_NETFILTER_XT_MATCH_NFACCT=m
CONFIG_NETFILTER_XT_MATCH_OSF=m
CONFIG_NETFILTER_XT_MATCH_OWNER=m
CONFIG_NETFILTER_XT_MATCH_POLICY=m
CONFIG_NETFILTER_XT_MATCH_PHYSDEV=m
CONFIG_NETFILTER_XT_MATCH_PKTTYPE=m
CONFIG_NETFILTER_XT_MATCH_QUOTA=m
CONFIG_NETFILTER_XT_MATCH_RATEEST=m
CONFIG_NETFILTER_XT_MATCH_REALM=m
CONFIG_NETFILTER_XT_MATCH_RECENT=m
CONFIG_NETFILTER_XT_MATCH_SCTP=m
CONFIG_NETFILTER_XT_MATCH_SOCKET=m
CONFIG_NETFILTER_XT_MATCH_STATE=m
CONFIG_NETFILTER_XT_MATCH_STATISTIC=m
CONFIG_NETFILTER_XT_MATCH_STRING=m
CONFIG_NETFILTER_XT_MATCH_TCPMSS=m
CONFIG_NETFILTER_XT_MATCH_TIME=m
CONFIG_NETFILTER_XT_MATCH_U32=m
# end of Core Netfilter Configuration

CONFIG_IP_SET=m
CONFIG_IP_SET_MAX=256
CONFIG_IP_SET_BITMAP_IP=m
CONFIG_IP_SET_BITMAP_IPMAC=m
CONFIG_IP_SET_BITMAP_PORT=m
CONFIG_IP_SET_HASH_IP=m
CONFIG_IP_SET_HASH_IPMARK=m
CONFIG_IP_SET_HASH_IPPORT=m
CONFIG_IP_SET_HASH_IPPORTIP=m
CONFIG_IP_SET_HASH_IPPORTNET=m
CONFIG_IP_SET_HASH_IPMAC=m
CONFIG_IP_SET_HASH_MAC=m
CONFIG_IP_SET_HASH_NETPORTNET=m
CONFIG_IP_SET_HASH_NET=m
CONFIG_IP_SET_HASH_NETNET=m
CONFIG_IP_SET_HASH_NETPORT=m
CONFIG_IP_SET_HASH_NETIFACE=m
CONFIG_IP_SET_LIST_SET=m
CONFIG_IP_VS=m
CONFIG_IP_VS_IPV6=y
# CONFIG_IP_VS_DEBUG is not set
CONFIG_IP_VS_TAB_BITS=12

#
# IPVS transport protocol load balancing support
#
CONFIG_IP_VS_PROTO_TCP=y
CONFIG_IP_VS_PROTO_UDP=y
CONFIG_IP_VS_PROTO_AH_ESP=y
CONFIG_IP_VS_PROTO_ESP=y
CONFIG_IP_VS_PROTO_AH=y
CONFIG_IP_VS_PROTO_SCTP=y

#
# IPVS scheduler
#
CONFIG_IP_VS_RR=m
CONFIG_IP_VS_WRR=m
CONFIG_IP_VS_LC=m
CONFIG_IP_VS_WLC=m
# CONFIG_IP_VS_FO is not set
# CONFIG_IP_VS_OVF is not set
CONFIG_IP_VS_LBLC=m
CONFIG_IP_VS_LBLCR=m
CONFIG_IP_VS_DH=m
CONFIG_IP_VS_SH=m
# CONFIG_IP_VS_MH is not set
CONFIG_IP_VS_SED=m
CONFIG_IP_VS_NQ=m

#
# IPVS SH scheduler
#
CONFIG_IP_VS_SH_TAB_BITS=8

#
# IPVS MH scheduler
#
CONFIG_IP_VS_MH_TAB_INDEX=12

#
# IPVS application helper
#
CONFIG_IP_VS_FTP=m
CONFIG_IP_VS_NFCT=y
CONFIG_IP_VS_PE_SIP=m

#
# IP: Netfilter Configuration
#
CONFIG_NF_DEFRAG_IPV4=m
CONFIG_NF_SOCKET_IPV4=m
CONFIG_NF_TPROXY_IPV4=m
CONFIG_NF_TABLES_IPV4=y
CONFIG_NFT_REJECT_IPV4=m
# CONFIG_NFT_DUP_IPV4 is not set
# CONFIG_NFT_FIB_IPV4 is not set
# CONFIG_NF_TABLES_ARP is not set
CONFIG_NF_FLOW_TABLE_IPV4=m
CONFIG_NF_DUP_IPV4=m
# CONFIG_NF_LOG_ARP is not set
CONFIG_NF_LOG_IPV4=m
CONFIG_NF_REJECT_IPV4=m
CONFIG_NF_NAT_SNMP_BASIC=m
CONFIG_NF_NAT_PPTP=m
CONFIG_NF_NAT_H323=m
CONFIG_IP_NF_IPTABLES=m
CONFIG_IP_NF_MATCH_AH=m
CONFIG_IP_NF_MATCH_ECN=m
CONFIG_IP_NF_MATCH_RPFILTER=m
CONFIG_IP_NF_MATCH_TTL=m
CONFIG_IP_NF_FILTER=m
CONFIG_IP_NF_TARGET_REJECT=m
CONFIG_IP_NF_TARGET_SYNPROXY=m
CONFIG_IP_NF_NAT=m
CONFIG_IP_NF_TARGET_MASQUERADE=m
CONFIG_IP_NF_TARGET_NETMAP=m
CONFIG_IP_NF_TARGET_REDIRECT=m
CONFIG_IP_NF_MANGLE=m
CONFIG_IP_NF_TARGET_CLUSTERIP=m
CONFIG_IP_NF_TARGET_ECN=m
CONFIG_IP_NF_TARGET_TTL=m
CONFIG_IP_NF_RAW=m
CONFIG_IP_NF_SECURITY=m
CONFIG_IP_NF_ARPTABLES=m
CONFIG_IP_NF_ARPFILTER=m
CONFIG_IP_NF_ARP_MANGLE=m
# end of IP: Netfilter Configuration

#
# IPv6: Netfilter Configuration
#
CONFIG_NF_SOCKET_IPV6=m
CONFIG_NF_TPROXY_IPV6=m
CONFIG_NF_TABLES_IPV6=y
CONFIG_NFT_REJECT_IPV6=m
# CONFIG_NFT_DUP_IPV6 is not set
# CONFIG_NFT_FIB_IPV6 is not set
CONFIG_NF_FLOW_TABLE_IPV6=m
CONFIG_NF_DUP_IPV6=m
CONFIG_NF_REJECT_IPV6=m
CONFIG_NF_LOG_IPV6=m
CONFIG_IP6_NF_IPTABLES=m
CONFIG_IP6_NF_MATCH_AH=m
CONFIG_IP6_NF_MATCH_EUI64=m
CONFIG_IP6_NF_MATCH_FRAG=m
CONFIG_IP6_NF_MATCH_OPTS=m
CONFIG_IP6_NF_MATCH_HL=m
CONFIG_IP6_NF_MATCH_IPV6HEADER=m
CONFIG_IP6_NF_MATCH_MH=m
CONFIG_IP6_NF_MATCH_RPFILTER=m
CONFIG_IP6_NF_MATCH_RT=m
# CONFIG_IP6_NF_MATCH_SRH is not set
CONFIG_IP6_NF_TARGET_HL=m
CONFIG_IP6_NF_FILTER=m
CONFIG_IP6_NF_TARGET_REJECT=m
CONFIG_IP6_NF_TARGET_SYNPROXY=m
CONFIG_IP6_NF_MANGLE=m
CONFIG_IP6_NF_RAW=m
CONFIG_IP6_NF_SECURITY=m
CONFIG_IP6_NF_NAT=m
CONFIG_IP6_NF_TARGET_MASQUERADE=m
CONFIG_IP6_NF_TARGET_NPT=m
# end of IPv6: Netfilter Configuration

CONFIG_NF_DEFRAG_IPV6=m
# CONFIG_NF_TABLES_BRIDGE is not set
# CONFIG_NF_CONNTRACK_BRIDGE is not set
CONFIG_BRIDGE_NF_EBTABLES=m
CONFIG_BRIDGE_EBT_BROUTE=m
CONFIG_BRIDGE_EBT_T_FILTER=m
CONFIG_BRIDGE_EBT_T_NAT=m
CONFIG_BRIDGE_EBT_802_3=m
CONFIG_BRIDGE_EBT_AMONG=m
CONFIG_BRIDGE_EBT_ARP=m
CONFIG_BRIDGE_EBT_IP=m
CONFIG_BRIDGE_EBT_IP6=m
CONFIG_BRIDGE_EBT_LIMIT=m
CONFIG_BRIDGE_EBT_MARK=m
CONFIG_BRIDGE_EBT_PKTTYPE=m
CONFIG_BRIDGE_EBT_STP=m
CONFIG_BRIDGE_EBT_VLAN=m
CONFIG_BRIDGE_EBT_ARPREPLY=m
CONFIG_BRIDGE_EBT_DNAT=m
CONFIG_BRIDGE_EBT_MARK_T=m
CONFIG_BRIDGE_EBT_REDIRECT=m
CONFIG_BRIDGE_EBT_SNAT=m
CONFIG_BRIDGE_EBT_LOG=m
CONFIG_BRIDGE_EBT_NFLOG=m
# CONFIG_BPFILTER is not set
CONFIG_IP_DCCP=m
CONFIG_INET_DCCP_DIAG=m

#
# DCCP CCIDs Configuration
#
# CONFIG_IP_DCCP_CCID2_DEBUG is not set
CONFIG_IP_DCCP_CCID3=y
# CONFIG_IP_DCCP_CCID3_DEBUG is not set
CONFIG_IP_DCCP_TFRC_LIB=y
# end of DCCP CCIDs Configuration

#
# DCCP Kernel Hacking
#
# CONFIG_IP_DCCP_DEBUG is not set
# end of DCCP Kernel Hacking

CONFIG_IP_SCTP=m
# CONFIG_SCTP_DBG_OBJCNT is not set
# CONFIG_SCTP_DEFAULT_COOKIE_HMAC_MD5 is not set
CONFIG_SCTP_DEFAULT_COOKIE_HMAC_SHA1=y
# CONFIG_SCTP_DEFAULT_COOKIE_HMAC_NONE is not set
CONFIG_SCTP_COOKIE_HMAC_MD5=y
CONFIG_SCTP_COOKIE_HMAC_SHA1=y
CONFIG_INET_SCTP_DIAG=m
# CONFIG_RDS is not set
# CONFIG_TIPC is not set
CONFIG_ATM=m
CONFIG_ATM_CLIP=m
# CONFIG_ATM_CLIP_NO_ICMP is not set
CONFIG_ATM_LANE=m
# CONFIG_ATM_MPOA is not set
CONFIG_ATM_BR2684=m
# CONFIG_ATM_BR2684_IPFILTER is not set
CONFIG_L2TP=m
CONFIG_L2TP_DEBUGFS=m
CONFIG_L2TP_V3=y
CONFIG_L2TP_IP=m
CONFIG_L2TP_ETH=m
CONFIG_STP=y
CONFIG_GARP=y
CONFIG_MRP=y
CONFIG_BRIDGE=y
CONFIG_BRIDGE_IGMP_SNOOPING=y
CONFIG_BRIDGE_VLAN_FILTERING=y
# CONFIG_BRIDGE_MRP is not set
CONFIG_HAVE_NET_DSA=y
# CONFIG_NET_DSA is not set
CONFIG_VLAN_8021Q=y
CONFIG_VLAN_8021Q_GVRP=y
CONFIG_VLAN_8021Q_MVRP=y
# CONFIG_DECNET is not set
CONFIG_LLC=y
# CONFIG_LLC2 is not set
# CONFIG_ATALK is not set
# CONFIG_X25 is not set
# CONFIG_LAPB is not set
# CONFIG_PHONET is not set
CONFIG_6LOWPAN=m
# CONFIG_6LOWPAN_DEBUGFS is not set
CONFIG_6LOWPAN_NHC=m
CONFIG_6LOWPAN_NHC_DEST=m
CONFIG_6LOWPAN_NHC_FRAGMENT=m
CONFIG_6LOWPAN_NHC_HOP=m
CONFIG_6LOWPAN_NHC_IPV6=m
CONFIG_6LOWPAN_NHC_MOBILITY=m
CONFIG_6LOWPAN_NHC_ROUTING=m
CONFIG_6LOWPAN_NHC_UDP=m
# CONFIG_6LOWPAN_GHC_EXT_HDR_HOP is not set
# CONFIG_6LOWPAN_GHC_UDP is not set
# CONFIG_6LOWPAN_GHC_ICMPV6 is not set
# CONFIG_6LOWPAN_GHC_EXT_HDR_DEST is not set
# CONFIG_6LOWPAN_GHC_EXT_HDR_FRAG is not set
# CONFIG_6LOWPAN_GHC_EXT_HDR_ROUTE is not set
CONFIG_IEEE802154=m
# CONFIG_IEEE802154_NL802154_EXPERIMENTAL is not set
CONFIG_IEEE802154_SOCKET=m
CONFIG_IEEE802154_6LOWPAN=m
CONFIG_MAC802154=m
CONFIG_NET_SCHED=y

#
# Queueing/Scheduling
#
CONFIG_NET_SCH_CBQ=m
CONFIG_NET_SCH_HTB=m
CONFIG_NET_SCH_HFSC=m
CONFIG_NET_SCH_ATM=m
CONFIG_NET_SCH_PRIO=m
CONFIG_NET_SCH_MULTIQ=m
CONFIG_NET_SCH_RED=m
CONFIG_NET_SCH_SFB=m
CONFIG_NET_SCH_SFQ=m
CONFIG_NET_SCH_TEQL=m
CONFIG_NET_SCH_TBF=m
# CONFIG_NET_SCH_CBS is not set
CONFIG_NET_SCH_ETF=m
# CONFIG_NET_SCH_TAPRIO is not set
CONFIG_NET_SCH_GRED=m
CONFIG_NET_SCH_DSMARK=m
CONFIG_NET_SCH_NETEM=y
CONFIG_NET_SCH_DRR=m
CONFIG_NET_SCH_MQPRIO=m
# CONFIG_NET_SCH_SKBPRIO is not set
CONFIG_NET_SCH_CHOKE=m
CONFIG_NET_SCH_QFQ=m
CONFIG_NET_SCH_CODEL=m
CONFIG_NET_SCH_FQ_CODEL=m
# CONFIG_NET_SCH_CAKE is not set
CONFIG_NET_SCH_FQ=m
# CONFIG_NET_SCH_HHF is not set
# CONFIG_NET_SCH_PIE is not set
CONFIG_NET_SCH_INGRESS=y
CONFIG_NET_SCH_PLUG=m
CONFIG_NET_SCH_ETS=m
# CONFIG_NET_SCH_DEFAULT is not set

#
# Classification
#
CONFIG_NET_CLS=y
CONFIG_NET_CLS_BASIC=m
CONFIG_NET_CLS_TCINDEX=m
CONFIG_NET_CLS_ROUTE4=m
CONFIG_NET_CLS_FW=m
CONFIG_NET_CLS_U32=m
CONFIG_CLS_U32_PERF=y
CONFIG_CLS_U32_MARK=y
CONFIG_NET_CLS_RSVP=m
CONFIG_NET_CLS_RSVP6=m
CONFIG_NET_CLS_FLOW=m
CONFIG_NET_CLS_CGROUP=y
CONFIG_NET_CLS_BPF=m
CONFIG_NET_CLS_FLOWER=m
CONFIG_NET_CLS_MATCHALL=m
CONFIG_NET_EMATCH=y
CONFIG_NET_EMATCH_STACK=32
CONFIG_NET_EMATCH_CMP=m
CONFIG_NET_EMATCH_NBYTE=m
CONFIG_NET_EMATCH_U32=m
CONFIG_NET_EMATCH_META=m
CONFIG_NET_EMATCH_TEXT=m
CONFIG_NET_EMATCH_CANID=m
CONFIG_NET_EMATCH_IPSET=m
CONFIG_NET_EMATCH_IPT=m
CONFIG_NET_CLS_ACT=y
CONFIG_NET_ACT_POLICE=m
CONFIG_NET_ACT_GACT=m
CONFIG_GACT_PROB=y
CONFIG_NET_ACT_MIRRED=m
CONFIG_NET_ACT_SAMPLE=m
CONFIG_NET_ACT_IPT=m
CONFIG_NET_ACT_NAT=m
CONFIG_NET_ACT_PEDIT=m
CONFIG_NET_ACT_SIMP=m
CONFIG_NET_ACT_SKBEDIT=m
CONFIG_NET_ACT_CSUM=m
CONFIG_NET_ACT_MPLS=m
CONFIG_NET_ACT_VLAN=m
CONFIG_NET_ACT_BPF=m
CONFIG_NET_ACT_CONNMARK=m
CONFIG_NET_ACT_CTINFO=m
CONFIG_NET_ACT_SKBMOD=m
CONFIG_NET_ACT_IFE=m
CONFIG_NET_ACT_TUNNEL_KEY=m
CONFIG_NET_ACT_CT=m
# CONFIG_NET_ACT_GATE is not set
CONFIG_NET_IFE_SKBMARK=m
CONFIG_NET_IFE_SKBPRIO=m
CONFIG_NET_IFE_SKBTCINDEX=m
# CONFIG_NET_TC_SKB_EXT is not set
CONFIG_NET_SCH_FIFO=y
CONFIG_DCB=y
CONFIG_DNS_RESOLVER=m
# CONFIG_BATMAN_ADV is not set
CONFIG_OPENVSWITCH=m
CONFIG_OPENVSWITCH_GRE=m
CONFIG_OPENVSWITCH_VXLAN=m
CONFIG_OPENVSWITCH_GENEVE=m
CONFIG_VSOCKETS=m
CONFIG_VSOCKETS_DIAG=m
CONFIG_VSOCKETS_LOOPBACK=m
CONFIG_VMWARE_VMCI_VSOCKETS=m
CONFIG_VIRTIO_VSOCKETS=m
CONFIG_VIRTIO_VSOCKETS_COMMON=m
CONFIG_HYPERV_VSOCKETS=m
CONFIG_NETLINK_DIAG=m
CONFIG_MPLS=y
CONFIG_NET_MPLS_GSO=m
CONFIG_MPLS_ROUTING=m
CONFIG_MPLS_IPTUNNEL=m
CONFIG_NET_NSH=m
# CONFIG_HSR is not set
CONFIG_NET_SWITCHDEV=y
CONFIG_NET_L3_MASTER_DEV=y
# CONFIG_QRTR is not set
# CONFIG_NET_NCSI is not set
CONFIG_RPS=y
CONFIG_RFS_ACCEL=y
CONFIG_XPS=y
# CONFIG_CGROUP_NET_PRIO is not set
CONFIG_CGROUP_NET_CLASSID=y
CONFIG_NET_RX_BUSY_POLL=y
CONFIG_BQL=y
CONFIG_BPF_JIT=y
CONFIG_BPF_STREAM_PARSER=y
CONFIG_NET_FLOW_LIMIT=y

#
# Network testing
#
CONFIG_NET_PKTGEN=m
CONFIG_NET_DROP_MONITOR=y
# end of Network testing
# end of Networking options

# CONFIG_HAMRADIO is not set
CONFIG_CAN=m
CONFIG_CAN_RAW=m
CONFIG_CAN_BCM=m
CONFIG_CAN_GW=m
# CONFIG_CAN_J1939 is not set

#
# CAN Device Drivers
#
CONFIG_CAN_VCAN=m
# CONFIG_CAN_VXCAN is not set
CONFIG_CAN_SLCAN=m
CONFIG_CAN_DEV=m
CONFIG_CAN_CALC_BITTIMING=y
# CONFIG_CAN_KVASER_PCIEFD is not set
CONFIG_CAN_C_CAN=m
CONFIG_CAN_C_CAN_PLATFORM=m
CONFIG_CAN_C_CAN_PCI=m
CONFIG_CAN_CC770=m
# CONFIG_CAN_CC770_ISA is not set
CONFIG_CAN_CC770_PLATFORM=m
# CONFIG_CAN_IFI_CANFD is not set
# CONFIG_CAN_M_CAN is not set
# CONFIG_CAN_PEAK_PCIEFD is not set
CONFIG_CAN_SJA1000=m
CONFIG_CAN_EMS_PCI=m
# CONFIG_CAN_F81601 is not set
CONFIG_CAN_KVASER_PCI=m
CONFIG_CAN_PEAK_PCI=m
CONFIG_CAN_PEAK_PCIEC=y
CONFIG_CAN_PLX_PCI=m
# CONFIG_CAN_SJA1000_ISA is not set
CONFIG_CAN_SJA1000_PLATFORM=m
CONFIG_CAN_SOFTING=m

#
# CAN SPI interfaces
#
# CONFIG_CAN_HI311X is not set
# CONFIG_CAN_MCP251X is not set
# end of CAN SPI interfaces

#
# CAN USB interfaces
#
CONFIG_CAN_8DEV_USB=m
CONFIG_CAN_EMS_USB=m
CONFIG_CAN_ESD_USB2=m
# CONFIG_CAN_GS_USB is not set
CONFIG_CAN_KVASER_USB=m
# CONFIG_CAN_MCBA_USB is not set
CONFIG_CAN_PEAK_USB=m
# CONFIG_CAN_UCAN is not set
# end of CAN USB interfaces

# CONFIG_CAN_DEBUG_DEVICES is not set
# end of CAN Device Drivers

CONFIG_BT=m
CONFIG_BT_BREDR=y
CONFIG_BT_RFCOMM=m
CONFIG_BT_RFCOMM_TTY=y
CONFIG_BT_BNEP=m
CONFIG_BT_BNEP_MC_FILTER=y
CONFIG_BT_BNEP_PROTO_FILTER=y
CONFIG_BT_CMTP=m
CONFIG_BT_HIDP=m
CONFIG_BT_HS=y
CONFIG_BT_LE=y
# CONFIG_BT_6LOWPAN is not set
# CONFIG_BT_LEDS is not set
# CONFIG_BT_MSFTEXT is not set
CONFIG_BT_DEBUGFS=y
# CONFIG_BT_SELFTEST is not set

#
# Bluetooth device drivers
#
CONFIG_BT_INTEL=m
CONFIG_BT_BCM=m
CONFIG_BT_RTL=m
CONFIG_BT_HCIBTUSB=m
# CONFIG_BT_HCIBTUSB_AUTOSUSPEND is not set
CONFIG_BT_HCIBTUSB_BCM=y
# CONFIG_BT_HCIBTUSB_MTK is not set
CONFIG_BT_HCIBTUSB_RTL=y
CONFIG_BT_HCIBTSDIO=m
CONFIG_BT_HCIUART=m
CONFIG_BT_HCIUART_H4=y
CONFIG_BT_HCIUART_BCSP=y
CONFIG_BT_HCIUART_ATH3K=y
# CONFIG_BT_HCIUART_INTEL is not set
# CONFIG_BT_HCIUART_AG6XX is not set
CONFIG_BT_HCIBCM203X=m
CONFIG_BT_HCIBPA10X=m
CONFIG_BT_HCIBFUSB=m
CONFIG_BT_HCIVHCI=m
CONFIG_BT_MRVL=m
CONFIG_BT_MRVL_SDIO=m
CONFIG_BT_ATH3K=m
# CONFIG_BT_MTKSDIO is not set
# end of Bluetooth device drivers

# CONFIG_AF_RXRPC is not set
# CONFIG_AF_KCM is not set
CONFIG_STREAM_PARSER=y
CONFIG_FIB_RULES=y
CONFIG_WIRELESS=y
CONFIG_WIRELESS_EXT=y
CONFIG_WEXT_CORE=y
CONFIG_WEXT_PROC=y
CONFIG_WEXT_PRIV=y
CONFIG_CFG80211=m
# CONFIG_NL80211_TESTMODE is not set
# CONFIG_CFG80211_DEVELOPER_WARNINGS is not set
# CONFIG_CFG80211_CERTIFICATION_ONUS is not set
CONFIG_CFG80211_REQUIRE_SIGNED_REGDB=y
CONFIG_CFG80211_USE_KERNEL_REGDB_KEYS=y
CONFIG_CFG80211_DEFAULT_PS=y
# CONFIG_CFG80211_DEBUGFS is not set
CONFIG_CFG80211_CRDA_SUPPORT=y
CONFIG_CFG80211_WEXT=y
CONFIG_LIB80211=m
# CONFIG_LIB80211_DEBUG is not set
CONFIG_MAC80211=m
CONFIG_MAC80211_HAS_RC=y
CONFIG_MAC80211_RC_MINSTREL=y
CONFIG_MAC80211_RC_DEFAULT_MINSTREL=y
CONFIG_MAC80211_RC_DEFAULT="minstrel_ht"
# CONFIG_MAC80211_MESH is not set
CONFIG_MAC80211_LEDS=y
CONFIG_MAC80211_DEBUGFS=y
# CONFIG_MAC80211_MESSAGE_TRACING is not set
# CONFIG_MAC80211_DEBUG_MENU is not set
CONFIG_MAC80211_STA_HASH_MAX_SIZE=0
# CONFIG_WIMAX is not set
CONFIG_RFKILL=m
CONFIG_RFKILL_LEDS=y
CONFIG_RFKILL_INPUT=y
# CONFIG_RFKILL_GPIO is not set
CONFIG_NET_9P=y
CONFIG_NET_9P_VIRTIO=y
# CONFIG_NET_9P_XEN is not set
# CONFIG_NET_9P_DEBUG is not set
# CONFIG_CAIF is not set
CONFIG_CEPH_LIB=m
# CONFIG_CEPH_LIB_PRETTYDEBUG is not set
CONFIG_CEPH_LIB_USE_DNS_RESOLVER=y
# CONFIG_NFC is not set
CONFIG_PSAMPLE=m
CONFIG_NET_IFE=m
CONFIG_LWTUNNEL=y
CONFIG_LWTUNNEL_BPF=y
CONFIG_DST_CACHE=y
CONFIG_GRO_CELLS=y
CONFIG_NET_SOCK_MSG=y
CONFIG_NET_DEVLINK=y
CONFIG_PAGE_POOL=y
CONFIG_FAILOVER=m
CONFIG_ETHTOOL_NETLINK=y
CONFIG_HAVE_EBPF_JIT=y

#
# Device Drivers
#
CONFIG_HAVE_EISA=y
# CONFIG_EISA is not set
CONFIG_HAVE_PCI=y
CONFIG_PCI=y
CONFIG_PCI_DOMAINS=y
CONFIG_PCIEPORTBUS=y
CONFIG_HOTPLUG_PCI_PCIE=y
CONFIG_PCIEAER=y
CONFIG_PCIEAER_INJECT=m
CONFIG_PCIE_ECRC=y
CONFIG_PCIEASPM=y
CONFIG_PCIEASPM_DEFAULT=y
# CONFIG_PCIEASPM_POWERSAVE is not set
# CONFIG_PCIEASPM_POWER_SUPERSAVE is not set
# CONFIG_PCIEASPM_PERFORMANCE is not set
CONFIG_PCIE_PME=y
# CONFIG_PCIE_DPC is not set
# CONFIG_PCIE_PTM is not set
# CONFIG_PCIE_BW is not set
CONFIG_PCI_MSI=y
CONFIG_PCI_MSI_IRQ_DOMAIN=y
CONFIG_PCI_QUIRKS=y
# CONFIG_PCI_DEBUG is not set
# CONFIG_PCI_REALLOC_ENABLE_AUTO is not set
CONFIG_PCI_STUB=y
# CONFIG_PCI_PF_STUB is not set
# CONFIG_XEN_PCIDEV_FRONTEND is not set
CONFIG_PCI_ATS=y
CONFIG_PCI_LOCKLESS_CONFIG=y
CONFIG_PCI_IOV=y
CONFIG_PCI_PRI=y
CONFIG_PCI_PASID=y
# CONFIG_PCI_P2PDMA is not set
CONFIG_PCI_LABEL=y
CONFIG_PCI_HYPERV=m
CONFIG_HOTPLUG_PCI=y
CONFIG_HOTPLUG_PCI_ACPI=y
CONFIG_HOTPLUG_PCI_ACPI_IBM=m
# CONFIG_HOTPLUG_PCI_CPCI is not set
CONFIG_HOTPLUG_PCI_SHPC=y

#
# PCI controller drivers
#
CONFIG_VMD=y
CONFIG_PCI_HYPERV_INTERFACE=m

#
# DesignWare PCI Core Support
#
# CONFIG_PCIE_DW_PLAT_HOST is not set
# CONFIG_PCI_MESON is not set
# end of DesignWare PCI Core Support

#
# Mobiveil PCIe Core Support
#
# end of Mobiveil PCIe Core Support

#
# Cadence PCIe controllers support
#
# end of Cadence PCIe controllers support
# end of PCI controller drivers

#
# PCI Endpoint
#
# CONFIG_PCI_ENDPOINT is not set
# end of PCI Endpoint

#
# PCI switch controller drivers
#
# CONFIG_PCI_SW_SWITCHTEC is not set
# end of PCI switch controller drivers

CONFIG_PCCARD=y
# CONFIG_PCMCIA is not set
CONFIG_CARDBUS=y

#
# PC-card bridges
#
CONFIG_YENTA=m
CONFIG_YENTA_O2=y
CONFIG_YENTA_RICOH=y
CONFIG_YENTA_TI=y
CONFIG_YENTA_ENE_TUNE=y
CONFIG_YENTA_TOSHIBA=y
# CONFIG_RAPIDIO is not set

#
# Generic Driver Options
#
CONFIG_UEVENT_HELPER=y
CONFIG_UEVENT_HELPER_PATH=""
CONFIG_DEVTMPFS=y
CONFIG_DEVTMPFS_MOUNT=y
CONFIG_STANDALONE=y
CONFIG_PREVENT_FIRMWARE_BUILD=y

#
# Firmware loader
#
CONFIG_FW_LOADER=y
CONFIG_FW_LOADER_PAGED_BUF=y
CONFIG_EXTRA_FIRMWARE=""
CONFIG_FW_LOADER_USER_HELPER=y
# CONFIG_FW_LOADER_USER_HELPER_FALLBACK is not set
# CONFIG_FW_LOADER_COMPRESS is not set
CONFIG_FW_CACHE=y
# end of Firmware loader

CONFIG_WANT_DEV_COREDUMP=y
CONFIG_ALLOW_DEV_COREDUMP=y
CONFIG_DEV_COREDUMP=y
# CONFIG_DEBUG_DRIVER is not set
# CONFIG_DEBUG_DEVRES is not set
# CONFIG_DEBUG_TEST_DRIVER_REMOVE is not set
# CONFIG_TEST_ASYNC_DRIVER_PROBE is not set
CONFIG_SYS_HYPERVISOR=y
CONFIG_GENERIC_CPU_AUTOPROBE=y
CONFIG_GENERIC_CPU_VULNERABILITIES=y
CONFIG_REGMAP=y
CONFIG_REGMAP_I2C=m
CONFIG_REGMAP_SPI=m
CONFIG_REGMAP_IRQ=y
CONFIG_DMA_SHARED_BUFFER=y
# CONFIG_DMA_FENCE_TRACE is not set
# end of Generic Driver Options

#
# Bus devices
#
# CONFIG_MHI_BUS is not set
# end of Bus devices

CONFIG_CONNECTOR=y
CONFIG_PROC_EVENTS=y
# CONFIG_GNSS is not set
CONFIG_MTD=m
# CONFIG_MTD_TESTS is not set

#
# Partition parsers
#
# CONFIG_MTD_AR7_PARTS is not set
# CONFIG_MTD_CMDLINE_PARTS is not set
# CONFIG_MTD_REDBOOT_PARTS is not set
# end of Partition parsers

#
# User Modules And Translation Layers
#
CONFIG_MTD_BLKDEVS=m
CONFIG_MTD_BLOCK=m
# CONFIG_MTD_BLOCK_RO is not set
# CONFIG_FTL is not set
# CONFIG_NFTL is not set
# CONFIG_INFTL is not set
# CONFIG_RFD_FTL is not set
# CONFIG_SSFDC is not set
# CONFIG_SM_FTL is not set
# CONFIG_MTD_OOPS is not set
# CONFIG_MTD_SWAP is not set
# CONFIG_MTD_PARTITIONED_MASTER is not set

#
# RAM/ROM/Flash chip drivers
#
# CONFIG_MTD_CFI is not set
# CONFIG_MTD_JEDECPROBE is not set
CONFIG_MTD_MAP_BANK_WIDTH_1=y
CONFIG_MTD_MAP_BANK_WIDTH_2=y
CONFIG_MTD_MAP_BANK_WIDTH_4=y
CONFIG_MTD_CFI_I1=y
CONFIG_MTD_CFI_I2=y
# CONFIG_MTD_RAM is not set
# CONFIG_MTD_ROM is not set
# CONFIG_MTD_ABSENT is not set
# end of RAM/ROM/Flash chip drivers

#
# Mapping drivers for chip access
#
# CONFIG_MTD_COMPLEX_MAPPINGS is not set
# CONFIG_MTD_INTEL_VR_NOR is not set
# CONFIG_MTD_PLATRAM is not set
# end of Mapping drivers for chip access

#
# Self-contained MTD device drivers
#
# CONFIG_MTD_PMC551 is not set
# CONFIG_MTD_DATAFLASH is not set
# CONFIG_MTD_MCHP23K256 is not set
# CONFIG_MTD_SST25L is not set
# CONFIG_MTD_SLRAM is not set
# CONFIG_MTD_PHRAM is not set
# CONFIG_MTD_MTDRAM is not set
# CONFIG_MTD_BLOCK2MTD is not set

#
# Disk-On-Chip Device Drivers
#
# CONFIG_MTD_DOCG3 is not set
# end of Self-contained MTD device drivers

# CONFIG_MTD_ONENAND is not set
# CONFIG_MTD_RAW_NAND is not set
# CONFIG_MTD_SPI_NAND is not set

#
# LPDDR & LPDDR2 PCM memory drivers
#
# CONFIG_MTD_LPDDR is not set
# end of LPDDR & LPDDR2 PCM memory drivers

# CONFIG_MTD_SPI_NOR is not set
CONFIG_MTD_UBI=m
CONFIG_MTD_UBI_WL_THRESHOLD=4096
CONFIG_MTD_UBI_BEB_LIMIT=20
# CONFIG_MTD_UBI_FASTMAP is not set
# CONFIG_MTD_UBI_GLUEBI is not set
# CONFIG_MTD_UBI_BLOCK is not set
# CONFIG_MTD_HYPERBUS is not set
# CONFIG_OF is not set
CONFIG_ARCH_MIGHT_HAVE_PC_PARPORT=y
CONFIG_PARPORT=m
CONFIG_PARPORT_PC=m
CONFIG_PARPORT_SERIAL=m
# CONFIG_PARPORT_PC_FIFO is not set
# CONFIG_PARPORT_PC_SUPERIO is not set
# CONFIG_PARPORT_AX88796 is not set
CONFIG_PARPORT_1284=y
CONFIG_PARPORT_NOT_PC=y
CONFIG_PNP=y
# CONFIG_PNP_DEBUG_MESSAGES is not set

#
# Protocols
#
CONFIG_PNPACPI=y
CONFIG_BLK_DEV=y
CONFIG_BLK_DEV_NULL_BLK=m
CONFIG_BLK_DEV_FD=m
CONFIG_CDROM=m
# CONFIG_PARIDE is not set
CONFIG_BLK_DEV_PCIESSD_MTIP32XX=m
CONFIG_ZRAM=m
# CONFIG_ZRAM_WRITEBACK is not set
# CONFIG_ZRAM_MEMORY_TRACKING is not set
# CONFIG_BLK_DEV_UMEM is not set
CONFIG_BLK_DEV_LOOP=m
CONFIG_BLK_DEV_LOOP_MIN_COUNT=0
# CONFIG_BLK_DEV_CRYPTOLOOP is not set
# CONFIG_BLK_DEV_DRBD is not set
# CONFIG_BLK_DEV_NBD is not set
# CONFIG_BLK_DEV_SKD is not set
CONFIG_BLK_DEV_SX8=m
CONFIG_BLK_DEV_RAM=m
CONFIG_BLK_DEV_RAM_COUNT=16
CONFIG_BLK_DEV_RAM_SIZE=16384
CONFIG_CDROM_PKTCDVD=m
CONFIG_CDROM_PKTCDVD_BUFFERS=8
# CONFIG_CDROM_PKTCDVD_WCACHE is not set
CONFIG_ATA_OVER_ETH=m
CONFIG_XEN_BLKDEV_FRONTEND=m
CONFIG_VIRTIO_BLK=y
CONFIG_BLK_DEV_RBD=m
# CONFIG_BLK_DEV_RSXX is not set

#
# NVME Support
#
CONFIG_NVME_CORE=m
CONFIG_BLK_DEV_NVME=m
# CONFIG_NVME_MULTIPATH is not set
# CONFIG_NVME_HWMON is not set
CONFIG_NVME_FABRICS=m
CONFIG_NVME_FC=m
# CONFIG_NVME_TCP is not set
CONFIG_NVME_TARGET=m
CONFIG_NVME_TARGET_LOOP=m
CONFIG_NVME_TARGET_FC=m
CONFIG_NVME_TARGET_FCLOOP=m
# CONFIG_NVME_TARGET_TCP is not set
# end of NVME Support

#
# Misc devices
#
CONFIG_SENSORS_LIS3LV02D=m
# CONFIG_AD525X_DPOT is not set
# CONFIG_DUMMY_IRQ is not set
# CONFIG_IBM_ASM is not set
# CONFIG_PHANTOM is not set
CONFIG_TIFM_CORE=m
CONFIG_TIFM_7XX1=m
# CONFIG_ICS932S401 is not set
CONFIG_ENCLOSURE_SERVICES=m
CONFIG_SGI_XP=m
CONFIG_HP_ILO=m
CONFIG_SGI_GRU=m
# CONFIG_SGI_GRU_DEBUG is not set
CONFIG_APDS9802ALS=m
CONFIG_ISL29003=m
CONFIG_ISL29020=m
CONFIG_SENSORS_TSL2550=m
CONFIG_SENSORS_BH1770=m
CONFIG_SENSORS_APDS990X=m
# CONFIG_HMC6352 is not set
# CONFIG_DS1682 is not set
CONFIG_VMWARE_BALLOON=m
# CONFIG_LATTICE_ECP3_CONFIG is not set
# CONFIG_SRAM is not set
# CONFIG_PCI_ENDPOINT_TEST is not set
# CONFIG_XILINX_SDFEC is not set
CONFIG_PVPANIC=y
# CONFIG_C2PORT is not set

#
# EEPROM support
#
CONFIG_EEPROM_AT24=m
# CONFIG_EEPROM_AT25 is not set
CONFIG_EEPROM_LEGACY=m
CONFIG_EEPROM_MAX6875=m
CONFIG_EEPROM_93CX6=m
# CONFIG_EEPROM_93XX46 is not set
# CONFIG_EEPROM_IDT_89HPESX is not set
# CONFIG_EEPROM_EE1004 is not set
# end of EEPROM support

CONFIG_CB710_CORE=m
# CONFIG_CB710_DEBUG is not set
CONFIG_CB710_DEBUG_ASSUMPTIONS=y

#
# Texas Instruments shared transport line discipline
#
# CONFIG_TI_ST is not set
# end of Texas Instruments shared transport line discipline

CONFIG_SENSORS_LIS3_I2C=m
CONFIG_ALTERA_STAPL=m
CONFIG_INTEL_MEI=m
CONFIG_INTEL_MEI_ME=m
# CONFIG_INTEL_MEI_TXE is not set
# CONFIG_INTEL_MEI_HDCP is not set
CONFIG_VMWARE_VMCI=m

#
# Intel MIC & related support
#
# CONFIG_INTEL_MIC_BUS is not set
# CONFIG_SCIF_BUS is not set
# CONFIG_VOP_BUS is not set
# end of Intel MIC & related support

# CONFIG_GENWQE is not set
# CONFIG_ECHO is not set
# CONFIG_MISC_ALCOR_PCI is not set
# CONFIG_MISC_RTSX_PCI is not set
# CONFIG_MISC_RTSX_USB is not set
# CONFIG_HABANA_AI is not set
# CONFIG_UACCE is not set
# end of Misc devices

CONFIG_HAVE_IDE=y
# CONFIG_IDE is not set

#
# SCSI device support
#
CONFIG_SCSI_MOD=y
CONFIG_RAID_ATTRS=m
CONFIG_SCSI=y
CONFIG_SCSI_DMA=y
CONFIG_SCSI_NETLINK=y
CONFIG_SCSI_PROC_FS=y

#
# SCSI support type (disk, tape, CD-ROM)
#
CONFIG_BLK_DEV_SD=m
CONFIG_CHR_DEV_ST=m
CONFIG_BLK_DEV_SR=m
CONFIG_CHR_DEV_SG=m
CONFIG_CHR_DEV_SCH=m
CONFIG_SCSI_ENCLOSURE=m
CONFIG_SCSI_CONSTANTS=y
CONFIG_SCSI_LOGGING=y
CONFIG_SCSI_SCAN_ASYNC=y

#
# SCSI Transports
#
CONFIG_SCSI_SPI_ATTRS=m
CONFIG_SCSI_FC_ATTRS=m
CONFIG_SCSI_ISCSI_ATTRS=m
CONFIG_SCSI_SAS_ATTRS=m
CONFIG_SCSI_SAS_LIBSAS=m
CONFIG_SCSI_SAS_ATA=y
CONFIG_SCSI_SAS_HOST_SMP=y
CONFIG_SCSI_SRP_ATTRS=m
# end of SCSI Transports

CONFIG_SCSI_LOWLEVEL=y
CONFIG_ISCSI_TCP=m
CONFIG_ISCSI_BOOT_SYSFS=m
CONFIG_SCSI_CXGB3_ISCSI=m
CONFIG_SCSI_CXGB4_ISCSI=m
CONFIG_SCSI_BNX2_ISCSI=m
CONFIG_SCSI_BNX2X_FCOE=m
CONFIG_BE2ISCSI=m
# CONFIG_BLK_DEV_3W_XXXX_RAID is not set
CONFIG_SCSI_HPSA=m
CONFIG_SCSI_3W_9XXX=m
CONFIG_SCSI_3W_SAS=m
# CONFIG_SCSI_ACARD is not set
CONFIG_SCSI_AACRAID=m
# CONFIG_SCSI_AIC7XXX is not set
CONFIG_SCSI_AIC79XX=m
CONFIG_AIC79XX_CMDS_PER_DEVICE=4
CONFIG_AIC79XX_RESET_DELAY_MS=15000
# CONFIG_AIC79XX_DEBUG_ENABLE is not set
CONFIG_AIC79XX_DEBUG_MASK=0
# CONFIG_AIC79XX_REG_PRETTY_PRINT is not set
# CONFIG_SCSI_AIC94XX is not set
CONFIG_SCSI_MVSAS=m
# CONFIG_SCSI_MVSAS_DEBUG is not set
CONFIG_SCSI_MVSAS_TASKLET=y
CONFIG_SCSI_MVUMI=m
# CONFIG_SCSI_DPT_I2O is not set
# CONFIG_SCSI_ADVANSYS is not set
CONFIG_SCSI_ARCMSR=m
# CONFIG_SCSI_ESAS2R is not set
# CONFIG_MEGARAID_NEWGEN is not set
# CONFIG_MEGARAID_LEGACY is not set
CONFIG_MEGARAID_SAS=m
CONFIG_SCSI_MPT3SAS=m
CONFIG_SCSI_MPT2SAS_MAX_SGE=128
CONFIG_SCSI_MPT3SAS_MAX_SGE=128
CONFIG_SCSI_MPT2SAS=m
# CONFIG_SCSI_SMARTPQI is not set
CONFIG_SCSI_UFSHCD=m
CONFIG_SCSI_UFSHCD_PCI=m
# CONFIG_SCSI_UFS_DWC_TC_PCI is not set
# CONFIG_SCSI_UFSHCD_PLATFORM is not set
# CONFIG_SCSI_UFS_BSG is not set
CONFIG_SCSI_HPTIOP=m
# CONFIG_SCSI_BUSLOGIC is not set
# CONFIG_SCSI_MYRB is not set
# CONFIG_SCSI_MYRS is not set
CONFIG_VMWARE_PVSCSI=m
# CONFIG_XEN_SCSI_FRONTEND is not set
CONFIG_HYPERV_STORAGE=m
CONFIG_LIBFC=m
CONFIG_LIBFCOE=m
CONFIG_FCOE=m
CONFIG_FCOE_FNIC=m
# CONFIG_SCSI_SNIC is not set
# CONFIG_SCSI_DMX3191D is not set
# CONFIG_SCSI_FDOMAIN_PCI is not set
# CONFIG_SCSI_GDTH is not set
CONFIG_SCSI_ISCI=m
# CONFIG_SCSI_IPS is not set
CONFIG_SCSI_INITIO=m
# CONFIG_SCSI_INIA100 is not set
# CONFIG_SCSI_PPA is not set
# CONFIG_SCSI_IMM is not set
CONFIG_SCSI_STEX=m
# CONFIG_SCSI_SYM53C8XX_2 is not set
# CONFIG_SCSI_IPR is not set
# CONFIG_SCSI_QLOGIC_1280 is not set
CONFIG_SCSI_QLA_FC=m
CONFIG_TCM_QLA2XXX=m
# CONFIG_TCM_QLA2XXX_DEBUG is not set
CONFIG_SCSI_QLA_ISCSI=m
# CONFIG_QEDI is not set
# CONFIG_QEDF is not set
# CONFIG_SCSI_LPFC is not set
# CONFIG_SCSI_DC395x is not set
# CONFIG_SCSI_AM53C974 is not set
# CONFIG_SCSI_WD719X is not set
CONFIG_SCSI_DEBUG=m
CONFIG_SCSI_PMCRAID=m
CONFIG_SCSI_PM8001=m
# CONFIG_SCSI_BFA_FC is not set
CONFIG_SCSI_VIRTIO=m
# CONFIG_SCSI_CHELSIO_FCOE is not set
CONFIG_SCSI_DH=y
CONFIG_SCSI_DH_RDAC=y
CONFIG_SCSI_DH_HP_SW=y
CONFIG_SCSI_DH_EMC=y
CONFIG_SCSI_DH_ALUA=y
# end of SCSI device support

CONFIG_ATA=m
CONFIG_SATA_HOST=y
CONFIG_PATA_TIMINGS=y
CONFIG_ATA_VERBOSE_ERROR=y
CONFIG_ATA_FORCE=y
CONFIG_ATA_ACPI=y
# CONFIG_SATA_ZPODD is not set
CONFIG_SATA_PMP=y

#
# Controllers with non-SFF native interface
#
CONFIG_SATA_AHCI=m
CONFIG_SATA_MOBILE_LPM_POLICY=0
CONFIG_SATA_AHCI_PLATFORM=m
# CONFIG_SATA_INIC162X is not set
CONFIG_SATA_ACARD_AHCI=m
CONFIG_SATA_SIL24=m
CONFIG_ATA_SFF=y

#
# SFF controllers with custom DMA interface
#
CONFIG_PDC_ADMA=m
CONFIG_SATA_QSTOR=m
CONFIG_SATA_SX4=m
CONFIG_ATA_BMDMA=y

#
# SATA SFF controllers with BMDMA
#
CONFIG_ATA_PIIX=m
# CONFIG_SATA_DWC is not set
CONFIG_SATA_MV=m
CONFIG_SATA_NV=m
CONFIG_SATA_PROMISE=m
CONFIG_SATA_SIL=m
CONFIG_SATA_SIS=m
CONFIG_SATA_SVW=m
CONFIG_SATA_ULI=m
CONFIG_SATA_VIA=m
CONFIG_SATA_VITESSE=m

#
# PATA SFF controllers with BMDMA
#
CONFIG_PATA_ALI=m
CONFIG_PATA_AMD=m
CONFIG_PATA_ARTOP=m
CONFIG_PATA_ATIIXP=m
CONFIG_PATA_ATP867X=m
CONFIG_PATA_CMD64X=m
# CONFIG_PATA_CYPRESS is not set
# CONFIG_PATA_EFAR is not set
CONFIG_PATA_HPT366=m
CONFIG_PATA_HPT37X=m
CONFIG_PATA_HPT3X2N=m
CONFIG_PATA_HPT3X3=m
# CONFIG_PATA_HPT3X3_DMA is not set
CONFIG_PATA_IT8213=m
CONFIG_PATA_IT821X=m
CONFIG_PATA_JMICRON=m
CONFIG_PATA_MARVELL=m
CONFIG_PATA_NETCELL=m
CONFIG_PATA_NINJA32=m
# CONFIG_PATA_NS87415 is not set
CONFIG_PATA_OLDPIIX=m
# CONFIG_PATA_OPTIDMA is not set
CONFIG_PATA_PDC2027X=m
CONFIG_PATA_PDC_OLD=m
# CONFIG_PATA_RADISYS is not set
CONFIG_PATA_RDC=m
CONFIG_PATA_SCH=m
CONFIG_PATA_SERVERWORKS=m
CONFIG_PATA_SIL680=m
CONFIG_PATA_SIS=m
CONFIG_PATA_TOSHIBA=m
# CONFIG_PATA_TRIFLEX is not set
CONFIG_PATA_VIA=m
# CONFIG_PATA_WINBOND is not set

#
# PIO-only SFF controllers
#
# CONFIG_PATA_CMD640_PCI is not set
# CONFIG_PATA_MPIIX is not set
# CONFIG_PATA_NS87410 is not set
# CONFIG_PATA_OPTI is not set
# CONFIG_PATA_PLATFORM is not set
# CONFIG_PATA_RZ1000 is not set

#
# Generic fallback / legacy drivers
#
CONFIG_PATA_ACPI=m
CONFIG_ATA_GENERIC=m
# CONFIG_PATA_LEGACY is not set
CONFIG_MD=y
CONFIG_BLK_DEV_MD=y
CONFIG_MD_AUTODETECT=y
CONFIG_MD_LINEAR=m
CONFIG_MD_RAID0=m
CONFIG_MD_RAID1=m
CONFIG_MD_RAID10=m
CONFIG_MD_RAID456=m
# CONFIG_MD_MULTIPATH is not set
CONFIG_MD_FAULTY=m
# CONFIG_MD_CLUSTER is not set
# CONFIG_BCACHE is not set
CONFIG_BLK_DEV_DM_BUILTIN=y
CONFIG_BLK_DEV_DM=m
CONFIG_DM_DEBUG=y
CONFIG_DM_BUFIO=m
# CONFIG_DM_DEBUG_BLOCK_MANAGER_LOCKING is not set
CONFIG_DM_BIO_PRISON=m
CONFIG_DM_PERSISTENT_DATA=m
# CONFIG_DM_UNSTRIPED is not set
CONFIG_DM_CRYPT=m
CONFIG_DM_SNAPSHOT=m
CONFIG_DM_THIN_PROVISIONING=m
CONFIG_DM_CACHE=m
CONFIG_DM_CACHE_SMQ=m
# CONFIG_DM_WRITECACHE is not set
# CONFIG_DM_EBS is not set
CONFIG_DM_ERA=m
# CONFIG_DM_CLONE is not set
CONFIG_DM_MIRROR=m
CONFIG_DM_LOG_USERSPACE=m
CONFIG_DM_RAID=m
CONFIG_DM_ZERO=m
CONFIG_DM_MULTIPATH=m
CONFIG_DM_MULTIPATH_QL=m
CONFIG_DM_MULTIPATH_ST=m
# CONFIG_DM_MULTIPATH_HST is not set
CONFIG_DM_DELAY=m
# CONFIG_DM_DUST is not set
CONFIG_DM_UEVENT=y
CONFIG_DM_FLAKEY=m
CONFIG_DM_VERITY=m
# CONFIG_DM_VERITY_VERIFY_ROOTHASH_SIG is not set
# CONFIG_DM_VERITY_FEC is not set
CONFIG_DM_SWITCH=m
CONFIG_DM_LOG_WRITES=m
# CONFIG_DM_INTEGRITY is not set
CONFIG_TARGET_CORE=m
CONFIG_TCM_IBLOCK=m
CONFIG_TCM_FILEIO=m
CONFIG_TCM_PSCSI=m
CONFIG_TCM_USER2=m
CONFIG_LOOPBACK_TARGET=m
CONFIG_TCM_FC=m
CONFIG_ISCSI_TARGET=m
CONFIG_ISCSI_TARGET_CXGB4=m
# CONFIG_SBP_TARGET is not set
CONFIG_FUSION=y
CONFIG_FUSION_SPI=m
# CONFIG_FUSION_FC is not set
CONFIG_FUSION_SAS=m
CONFIG_FUSION_MAX_SGE=128
CONFIG_FUSION_CTL=m
CONFIG_FUSION_LOGGING=y

#
# IEEE 1394 (FireWire) support
#
CONFIG_FIREWIRE=m
CONFIG_FIREWIRE_OHCI=m
CONFIG_FIREWIRE_SBP2=m
CONFIG_FIREWIRE_NET=m
# CONFIG_FIREWIRE_NOSY is not set
# end of IEEE 1394 (FireWire) support

CONFIG_MACINTOSH_DRIVERS=y
CONFIG_MAC_EMUMOUSEBTN=y
CONFIG_NETDEVICES=y
CONFIG_MII=y
CONFIG_NET_CORE=y
CONFIG_BONDING=m
CONFIG_DUMMY=y
# CONFIG_WIREGUARD is not set
# CONFIG_EQUALIZER is not set
CONFIG_NET_FC=y
CONFIG_IFB=y
CONFIG_NET_TEAM=m
CONFIG_NET_TEAM_MODE_BROADCAST=m
CONFIG_NET_TEAM_MODE_ROUNDROBIN=m
CONFIG_NET_TEAM_MODE_RANDOM=m
CONFIG_NET_TEAM_MODE_ACTIVEBACKUP=m
CONFIG_NET_TEAM_MODE_LOADBALANCE=m
CONFIG_MACVLAN=m
CONFIG_MACVTAP=m
# CONFIG_IPVLAN is not set
CONFIG_VXLAN=y
CONFIG_GENEVE=y
# CONFIG_BAREUDP is not set
# CONFIG_GTP is not set
CONFIG_MACSEC=y
CONFIG_NETCONSOLE=m
CONFIG_NETCONSOLE_DYNAMIC=y
CONFIG_NETPOLL=y
CONFIG_NET_POLL_CONTROLLER=y
CONFIG_NTB_NETDEV=m
CONFIG_TUN=m
CONFIG_TAP=m
# CONFIG_TUN_VNET_CROSS_LE is not set
CONFIG_VETH=y
CONFIG_VIRTIO_NET=m
CONFIG_NLMON=m
CONFIG_NET_VRF=y
CONFIG_VSOCKMON=m
# CONFIG_ARCNET is not set
# CONFIG_ATM_DRIVERS is not set

#
# Distributed Switch Architecture drivers
#
# end of Distributed Switch Architecture drivers

CONFIG_ETHERNET=y
CONFIG_MDIO=y
# CONFIG_NET_VENDOR_3COM is not set
# CONFIG_NET_VENDOR_ADAPTEC is not set
CONFIG_NET_VENDOR_AGERE=y
# CONFIG_ET131X is not set
CONFIG_NET_VENDOR_ALACRITECH=y
# CONFIG_SLICOSS is not set
# CONFIG_NET_VENDOR_ALTEON is not set
# CONFIG_ALTERA_TSE is not set
CONFIG_NET_VENDOR_AMAZON=y
CONFIG_ENA_ETHERNET=m
CONFIG_NET_VENDOR_AMD=y
CONFIG_AMD8111_ETH=m
CONFIG_PCNET32=m
CONFIG_AMD_XGBE=m
# CONFIG_AMD_XGBE_DCB is not set
CONFIG_AMD_XGBE_HAVE_ECC=y
CONFIG_NET_VENDOR_AQUANTIA=y
CONFIG_AQTION=m
CONFIG_NET_VENDOR_ARC=y
CONFIG_NET_VENDOR_ATHEROS=y
CONFIG_ATL2=m
CONFIG_ATL1=m
CONFIG_ATL1E=m
CONFIG_ATL1C=m
CONFIG_ALX=m
CONFIG_NET_VENDOR_AURORA=y
# CONFIG_AURORA_NB8800 is not set
CONFIG_NET_VENDOR_BROADCOM=y
CONFIG_B44=m
CONFIG_B44_PCI_AUTOSELECT=y
CONFIG_B44_PCICORE_AUTOSELECT=y
CONFIG_B44_PCI=y
# CONFIG_BCMGENET is not set
CONFIG_BNX2=m
CONFIG_CNIC=m
CONFIG_TIGON3=y
CONFIG_TIGON3_HWMON=y
CONFIG_BNX2X=m
CONFIG_BNX2X_SRIOV=y
# CONFIG_SYSTEMPORT is not set
CONFIG_BNXT=m
CONFIG_BNXT_SRIOV=y
CONFIG_BNXT_FLOWER_OFFLOAD=y
CONFIG_BNXT_DCB=y
CONFIG_BNXT_HWMON=y
CONFIG_NET_VENDOR_BROCADE=y
CONFIG_BNA=m
CONFIG_NET_VENDOR_CADENCE=y
CONFIG_MACB=m
CONFIG_MACB_USE_HWSTAMP=y
# CONFIG_MACB_PCI is not set
CONFIG_NET_VENDOR_CAVIUM=y
# CONFIG_THUNDER_NIC_PF is not set
# CONFIG_THUNDER_NIC_VF is not set
# CONFIG_THUNDER_NIC_BGX is not set
# CONFIG_THUNDER_NIC_RGX is not set
CONFIG_CAVIUM_PTP=y
CONFIG_LIQUIDIO=m
CONFIG_LIQUIDIO_VF=m
CONFIG_NET_VENDOR_CHELSIO=y
# CONFIG_CHELSIO_T1 is not set
CONFIG_CHELSIO_T3=m
CONFIG_CHELSIO_T4=m
# CONFIG_CHELSIO_T4_DCB is not set
CONFIG_CHELSIO_T4VF=m
CONFIG_CHELSIO_LIB=m
CONFIG_NET_VENDOR_CISCO=y
CONFIG_ENIC=m
CONFIG_NET_VENDOR_CORTINA=y
# CONFIG_CX_ECAT is not set
CONFIG_DNET=m
CONFIG_NET_VENDOR_DEC=y
CONFIG_NET_TULIP=y
CONFIG_DE2104X=m
CONFIG_DE2104X_DSL=0
CONFIG_TULIP=y
# CONFIG_TULIP_MWI is not set
CONFIG_TULIP_MMIO=y
# CONFIG_TULIP_NAPI is not set
CONFIG_DE4X5=m
CONFIG_WINBOND_840=m
CONFIG_DM9102=m
CONFIG_ULI526X=m
CONFIG_PCMCIA_XIRCOM=m
# CONFIG_NET_VENDOR_DLINK is not set
CONFIG_NET_VENDOR_EMULEX=y
CONFIG_BE2NET=m
CONFIG_BE2NET_HWMON=y
CONFIG_BE2NET_BE2=y
CONFIG_BE2NET_BE3=y
CONFIG_BE2NET_LANCER=y
CONFIG_BE2NET_SKYHAWK=y
CONFIG_NET_VENDOR_EZCHIP=y
CONFIG_NET_VENDOR_GOOGLE=y
# CONFIG_GVE is not set
CONFIG_NET_VENDOR_HUAWEI=y
# CONFIG_HINIC is not set
# CONFIG_NET_VENDOR_I825XX is not set
CONFIG_NET_VENDOR_INTEL=y
# CONFIG_E100 is not set
CONFIG_E1000=y
CONFIG_E1000E=y
CONFIG_E1000E_HWTS=y
CONFIG_IGB=y
CONFIG_IGB_HWMON=y
CONFIG_IGBVF=m
# CONFIG_IXGB is not set
CONFIG_IXGBE=y
CONFIG_IXGBE_HWMON=y
CONFIG_IXGBE_DCB=y
CONFIG_IXGBEVF=m
CONFIG_I40E=y
CONFIG_I40E_DCB=y
CONFIG_IAVF=m
CONFIG_I40EVF=m
# CONFIG_ICE is not set
CONFIG_FM10K=m
# CONFIG_IGC is not set
CONFIG_JME=m
CONFIG_NET_VENDOR_MARVELL=y
CONFIG_MVMDIO=m
CONFIG_SKGE=y
# CONFIG_SKGE_DEBUG is not set
CONFIG_SKGE_GENESIS=y
CONFIG_SKY2=m
# CONFIG_SKY2_DEBUG is not set
CONFIG_NET_VENDOR_MELLANOX=y
CONFIG_MLX4_EN=m
CONFIG_MLX4_EN_DCB=y
CONFIG_MLX4_CORE=m
CONFIG_MLX4_DEBUG=y
CONFIG_MLX4_CORE_GEN2=y
# CONFIG_MLX5_CORE is not set
# CONFIG_MLXSW_CORE is not set
# CONFIG_MLXFW is not set
# CONFIG_NET_VENDOR_MICREL is not set
# CONFIG_NET_VENDOR_MICROCHIP is not set
CONFIG_NET_VENDOR_MICROSEMI=y
# CONFIG_MSCC_OCELOT_SWITCH is not set
CONFIG_NET_VENDOR_MYRI=y
CONFIG_MYRI10GE=m
CONFIG_MYRI10GE_DCA=y
# CONFIG_FEALNX is not set
# CONFIG_NET_VENDOR_NATSEMI is not set
CONFIG_NET_VENDOR_NETERION=y
# CONFIG_S2IO is not set
# CONFIG_VXGE is not set
CONFIG_NET_VENDOR_NETRONOME=y
CONFIG_NFP=m
CONFIG_NFP_APP_FLOWER=y
CONFIG_NFP_APP_ABM_NIC=y
# CONFIG_NFP_DEBUG is not set
CONFIG_NET_VENDOR_NI=y
# CONFIG_NI_XGE_MANAGEMENT_ENET is not set
# CONFIG_NET_VENDOR_NVIDIA is not set
CONFIG_NET_VENDOR_OKI=y
CONFIG_ETHOC=m
CONFIG_NET_VENDOR_PACKET_ENGINES=y
# CONFIG_HAMACHI is not set
CONFIG_YELLOWFIN=m
CONFIG_NET_VENDOR_PENSANDO=y
# CONFIG_IONIC is not set
CONFIG_NET_VENDOR_QLOGIC=y
CONFIG_QLA3XXX=m
CONFIG_QLCNIC=m
CONFIG_QLCNIC_SRIOV=y
CONFIG_QLCNIC_DCB=y
CONFIG_QLCNIC_HWMON=y
CONFIG_NETXEN_NIC=m
CONFIG_QED=m
CONFIG_QED_SRIOV=y
CONFIG_QEDE=m
CONFIG_NET_VENDOR_QUALCOMM=y
# CONFIG_QCOM_EMAC is not set
# CONFIG_RMNET is not set
# CONFIG_NET_VENDOR_RDC is not set
CONFIG_NET_VENDOR_REALTEK=y
# CONFIG_ATP is not set
CONFIG_8139CP=y
CONFIG_8139TOO=y
# CONFIG_8139TOO_PIO is not set
# CONFIG_8139TOO_TUNE_TWISTER is not set
CONFIG_8139TOO_8129=y
# CONFIG_8139_OLD_RX_RESET is not set
CONFIG_R8169=y
CONFIG_NET_VENDOR_RENESAS=y
CONFIG_NET_VENDOR_ROCKER=y
CONFIG_ROCKER=m
CONFIG_NET_VENDOR_SAMSUNG=y
# CONFIG_SXGBE_ETH is not set
# CONFIG_NET_VENDOR_SEEQ is not set
CONFIG_NET_VENDOR_SOLARFLARE=y
CONFIG_SFC=m
CONFIG_SFC_MTD=y
CONFIG_SFC_MCDI_MON=y
CONFIG_SFC_SRIOV=y
CONFIG_SFC_MCDI_LOGGING=y
CONFIG_SFC_FALCON=m
CONFIG_SFC_FALCON_MTD=y
# CONFIG_NET_VENDOR_SILAN is not set
# CONFIG_NET_VENDOR_SIS is not set
CONFIG_NET_VENDOR_SMSC=y
CONFIG_EPIC100=m
# CONFIG_SMSC911X is not set
CONFIG_SMSC9420=m
CONFIG_NET_VENDOR_SOCIONEXT=y
# CONFIG_NET_VENDOR_STMICRO is not set
# CONFIG_NET_VENDOR_SUN is not set
CONFIG_NET_VENDOR_SYNOPSYS=y
# CONFIG_DWC_XLGMAC is not set
# CONFIG_NET_VENDOR_TEHUTI is not set
CONFIG_NET_VENDOR_TI=y
# CONFIG_TI_CPSW_PHY_SEL is not set
CONFIG_TLAN=m
# CONFIG_NET_VENDOR_VIA is not set
# CONFIG_NET_VENDOR_WIZNET is not set
CONFIG_NET_VENDOR_XILINX=y
# CONFIG_XILINX_AXI_EMAC is not set
# CONFIG_XILINX_LL_TEMAC is not set
# CONFIG_FDDI is not set
# CONFIG_HIPPI is not set
# CONFIG_NET_SB1000 is not set
CONFIG_MDIO_DEVICE=y
CONFIG_MDIO_BUS=y
# CONFIG_MDIO_BCM_UNIMAC is not set
CONFIG_MDIO_BITBANG=m
# CONFIG_MDIO_GPIO is not set
# CONFIG_MDIO_MSCC_MIIM is not set
# CONFIG_MDIO_MVUSB is not set
# CONFIG_MDIO_THUNDER is not set
# CONFIG_MDIO_XPCS is not set
CONFIG_PHYLINK=m
CONFIG_PHYLIB=y
CONFIG_SWPHY=y
# CONFIG_LED_TRIGGER_PHY is not set

#
# MII PHY device drivers
#
# CONFIG_SFP is not set
# CONFIG_ADIN_PHY is not set
CONFIG_AMD_PHY=m
# CONFIG_AQUANTIA_PHY is not set
# CONFIG_AX88796B_PHY is not set
# CONFIG_BCM7XXX_PHY is not set
CONFIG_BCM87XX_PHY=m
CONFIG_BCM_NET_PHYLIB=m
CONFIG_BROADCOM_PHY=m
# CONFIG_BCM54140_PHY is not set
# CONFIG_BCM84881_PHY is not set
CONFIG_CICADA_PHY=m
# CONFIG_CORTINA_PHY is not set
CONFIG_DAVICOM_PHY=m
# CONFIG_DP83822_PHY is not set
# CONFIG_DP83TC811_PHY is not set
# CONFIG_DP83848_PHY is not set
# CONFIG_DP83867_PHY is not set
# CONFIG_DP83869_PHY is not set
CONFIG_FIXED_PHY=y
CONFIG_ICPLUS_PHY=m
# CONFIG_INTEL_XWAY_PHY is not set
CONFIG_LSI_ET1011C_PHY=m
CONFIG_LXT_PHY=m
CONFIG_MARVELL_PHY=m
# CONFIG_MARVELL_10G_PHY is not set
CONFIG_MICREL_PHY=m
# CONFIG_MICROCHIP_PHY is not set
# CONFIG_MICROCHIP_T1_PHY is not set
# CONFIG_MICROSEMI_PHY is not set
CONFIG_NATIONAL_PHY=m
# CONFIG_NXP_TJA11XX_PHY is not set
CONFIG_QSEMI_PHY=m
CONFIG_REALTEK_PHY=y
# CONFIG_RENESAS_PHY is not set
# CONFIG_ROCKCHIP_PHY is not set
CONFIG_SMSC_PHY=m
CONFIG_STE10XP=m
# CONFIG_TERANETICS_PHY is not set
CONFIG_VITESSE_PHY=m
# CONFIG_XILINX_GMII2RGMII is not set
# CONFIG_MICREL_KS8995MA is not set
# CONFIG_PLIP is not set
CONFIG_PPP=m
CONFIG_PPP_BSDCOMP=m
CONFIG_PPP_DEFLATE=m
CONFIG_PPP_FILTER=y
CONFIG_PPP_MPPE=m
CONFIG_PPP_MULTILINK=y
CONFIG_PPPOATM=m
CONFIG_PPPOE=m
CONFIG_PPTP=m
CONFIG_PPPOL2TP=m
CONFIG_PPP_ASYNC=m
CONFIG_PPP_SYNC_TTY=m
CONFIG_SLIP=m
CONFIG_SLHC=m
CONFIG_SLIP_COMPRESSED=y
CONFIG_SLIP_SMART=y
# CONFIG_SLIP_MODE_SLIP6 is not set
CONFIG_USB_NET_DRIVERS=y
CONFIG_USB_CATC=y
CONFIG_USB_KAWETH=y
CONFIG_USB_PEGASUS=y
CONFIG_USB_RTL8150=y
CONFIG_USB_RTL8152=m
# CONFIG_USB_LAN78XX is not set
CONFIG_USB_USBNET=y
CONFIG_USB_NET_AX8817X=y
CONFIG_USB_NET_AX88179_178A=m
CONFIG_USB_NET_CDCETHER=y
CONFIG_USB_NET_CDC_EEM=y
CONFIG_USB_NET_CDC_NCM=m
CONFIG_USB_NET_HUAWEI_CDC_NCM=m
CONFIG_USB_NET_CDC_MBIM=m
CONFIG_USB_NET_DM9601=y
# CONFIG_USB_NET_SR9700 is not set
# CONFIG_USB_NET_SR9800 is not set
CONFIG_USB_NET_SMSC75XX=y
CONFIG_USB_NET_SMSC95XX=y
CONFIG_USB_NET_GL620A=y
CONFIG_USB_NET_NET1080=y
CONFIG_USB_NET_PLUSB=y
CONFIG_USB_NET_MCS7830=y
CONFIG_USB_NET_RNDIS_HOST=y
CONFIG_USB_NET_CDC_SUBSET_ENABLE=y
CONFIG_USB_NET_CDC_SUBSET=y
CONFIG_USB_ALI_M5632=y
CONFIG_USB_AN2720=y
CONFIG_USB_BELKIN=y
CONFIG_USB_ARMLINUX=y
CONFIG_USB_EPSON2888=y
CONFIG_USB_KC2190=y
CONFIG_USB_NET_ZAURUS=y
CONFIG_USB_NET_CX82310_ETH=m
CONFIG_USB_NET_KALMIA=m
CONFIG_USB_NET_QMI_WWAN=m
CONFIG_USB_HSO=m
CONFIG_USB_NET_INT51X1=y
CONFIG_USB_IPHETH=y
CONFIG_USB_SIERRA_NET=y
CONFIG_USB_VL600=m
# CONFIG_USB_NET_CH9200 is not set
# CONFIG_USB_NET_AQC111 is not set
CONFIG_WLAN=y
# CONFIG_WIRELESS_WDS is not set
CONFIG_WLAN_VENDOR_ADMTEK=y
# CONFIG_ADM8211 is not set
CONFIG_ATH_COMMON=m
CONFIG_WLAN_VENDOR_ATH=y
# CONFIG_ATH_DEBUG is not set
# CONFIG_ATH5K is not set
# CONFIG_ATH5K_PCI is not set
CONFIG_ATH9K_HW=m
CONFIG_ATH9K_COMMON=m
CONFIG_ATH9K_BTCOEX_SUPPORT=y
# CONFIG_ATH9K is not set
CONFIG_ATH9K_HTC=m
# CONFIG_ATH9K_HTC_DEBUGFS is not set
# CONFIG_CARL9170 is not set
# CONFIG_ATH6KL is not set
# CONFIG_AR5523 is not set
# CONFIG_WIL6210 is not set
# CONFIG_ATH10K is not set
# CONFIG_WCN36XX is not set
CONFIG_WLAN_VENDOR_ATMEL=y
# CONFIG_ATMEL is not set
# CONFIG_AT76C50X_USB is not set
CONFIG_WLAN_VENDOR_BROADCOM=y
# CONFIG_B43 is not set
# CONFIG_B43LEGACY is not set
# CONFIG_BRCMSMAC is not set
# CONFIG_BRCMFMAC is not set
CONFIG_WLAN_VENDOR_CISCO=y
# CONFIG_AIRO is not set
CONFIG_WLAN_VENDOR_INTEL=y
# CONFIG_IPW2100 is not set
# CONFIG_IPW2200 is not set
CONFIG_IWLEGACY=m
CONFIG_IWL4965=m
CONFIG_IWL3945=m

#
# iwl3945 / iwl4965 Debugging Options
#
CONFIG_IWLEGACY_DEBUG=y
CONFIG_IWLEGACY_DEBUGFS=y
# end of iwl3945 / iwl4965 Debugging Options

CONFIG_IWLWIFI=m
CONFIG_IWLWIFI_LEDS=y
CONFIG_IWLDVM=m
CONFIG_IWLMVM=m
CONFIG_IWLWIFI_OPMODE_MODULAR=y
# CONFIG_IWLWIFI_BCAST_FILTERING is not set

#
# Debugging Options
#
# CONFIG_IWLWIFI_DEBUG is not set
CONFIG_IWLWIFI_DEBUGFS=y
# CONFIG_IWLWIFI_DEVICE_TRACING is not set
# end of Debugging Options

CONFIG_WLAN_VENDOR_INTERSIL=y
# CONFIG_HOSTAP is not set
# CONFIG_HERMES is not set
# CONFIG_P54_COMMON is not set
# CONFIG_PRISM54 is not set
CONFIG_WLAN_VENDOR_MARVELL=y
# CONFIG_LIBERTAS is not set
# CONFIG_LIBERTAS_THINFIRM is not set
# CONFIG_MWIFIEX is not set
# CONFIG_MWL8K is not set
CONFIG_WLAN_VENDOR_MEDIATEK=y
# CONFIG_MT7601U is not set
# CONFIG_MT76x0U is not set
# CONFIG_MT76x0E is not set
# CONFIG_MT76x2E is not set
# CONFIG_MT76x2U is not set
# CONFIG_MT7603E is not set
# CONFIG_MT7615E is not set
# CONFIG_MT7663U is not set
# CONFIG_MT7915E is not set
CONFIG_WLAN_VENDOR_RALINK=y
# CONFIG_RT2X00 is not set
CONFIG_WLAN_VENDOR_REALTEK=y
# CONFIG_RTL8180 is not set
# CONFIG_RTL8187 is not set
# CONFIG_RTL_CARDS is not set
# CONFIG_RTL8XXXU is not set
# CONFIG_RTW88 is not set
CONFIG_WLAN_VENDOR_RSI=y
# CONFIG_RSI_91X is not set
CONFIG_WLAN_VENDOR_ST=y
# CONFIG_CW1200 is not set
CONFIG_WLAN_VENDOR_TI=y
# CONFIG_WL1251 is not set
# CONFIG_WL12XX is not set
# CONFIG_WL18XX is not set
# CONFIG_WLCORE is not set
CONFIG_WLAN_VENDOR_ZYDAS=y
# CONFIG_USB_ZD1201 is not set
# CONFIG_ZD1211RW is not set
CONFIG_WLAN_VENDOR_QUANTENNA=y
# CONFIG_QTNFMAC_PCIE is not set
CONFIG_MAC80211_HWSIM=m
# CONFIG_USB_NET_RNDIS_WLAN is not set
# CONFIG_VIRT_WIFI is not set

#
# Enable WiMAX (Networking options) to see the WiMAX drivers
#
CONFIG_WAN=y
# CONFIG_LANMEDIA is not set
CONFIG_HDLC=m
CONFIG_HDLC_RAW=m
# CONFIG_HDLC_RAW_ETH is not set
CONFIG_HDLC_CISCO=m
CONFIG_HDLC_FR=m
CONFIG_HDLC_PPP=m

#
# X.25/LAPB support is disabled
#
# CONFIG_PCI200SYN is not set
# CONFIG_WANXL is not set
# CONFIG_PC300TOO is not set
# CONFIG_FARSYNC is not set
CONFIG_DLCI=m
CONFIG_DLCI_MAX=8
# CONFIG_SBNI is not set
CONFIG_IEEE802154_DRIVERS=m
CONFIG_IEEE802154_FAKELB=m
# CONFIG_IEEE802154_AT86RF230 is not set
# CONFIG_IEEE802154_MRF24J40 is not set
# CONFIG_IEEE802154_CC2520 is not set
# CONFIG_IEEE802154_ATUSB is not set
# CONFIG_IEEE802154_ADF7242 is not set
# CONFIG_IEEE802154_CA8210 is not set
# CONFIG_IEEE802154_MCR20A is not set
# CONFIG_IEEE802154_HWSIM is not set
CONFIG_XEN_NETDEV_FRONTEND=m
CONFIG_VMXNET3=m
CONFIG_FUJITSU_ES=m
CONFIG_HYPERV_NET=m
CONFIG_NETDEVSIM=m
CONFIG_NET_FAILOVER=m
CONFIG_ISDN=y
CONFIG_ISDN_CAPI=y
CONFIG_CAPI_TRACE=y
CONFIG_ISDN_CAPI_MIDDLEWARE=y
CONFIG_MISDN=m
CONFIG_MISDN_DSP=m
CONFIG_MISDN_L1OIP=m

#
# mISDN hardware drivers
#
CONFIG_MISDN_HFCPCI=m
CONFIG_MISDN_HFCMULTI=m
CONFIG_MISDN_HFCUSB=m
CONFIG_MISDN_AVMFRITZ=m
CONFIG_MISDN_SPEEDFAX=m
CONFIG_MISDN_INFINEON=m
CONFIG_MISDN_W6692=m
CONFIG_MISDN_NETJET=m
CONFIG_MISDN_HDLC=m
CONFIG_MISDN_IPAC=m
CONFIG_MISDN_ISAR=m
CONFIG_NVM=y
# CONFIG_NVM_PBLK is not set

#
# Input device support
#
CONFIG_INPUT=y
CONFIG_INPUT_LEDS=y
CONFIG_INPUT_FF_MEMLESS=y
CONFIG_INPUT_POLLDEV=m
CONFIG_INPUT_SPARSEKMAP=m
# CONFIG_INPUT_MATRIXKMAP is not set

#
# Userland interfaces
#
CONFIG_INPUT_MOUSEDEV=y
# CONFIG_INPUT_MOUSEDEV_PSAUX is not set
CONFIG_INPUT_MOUSEDEV_SCREEN_X=1024
CONFIG_INPUT_MOUSEDEV_SCREEN_Y=768
CONFIG_INPUT_JOYDEV=m
CONFIG_INPUT_EVDEV=y
# CONFIG_INPUT_EVBUG is not set

#
# Input Device Drivers
#
CONFIG_INPUT_KEYBOARD=y
# CONFIG_KEYBOARD_ADC is not set
# CONFIG_KEYBOARD_ADP5588 is not set
# CONFIG_KEYBOARD_ADP5589 is not set
# CONFIG_KEYBOARD_APPLESPI is not set
CONFIG_KEYBOARD_ATKBD=y
# CONFIG_KEYBOARD_QT1050 is not set
# CONFIG_KEYBOARD_QT1070 is not set
# CONFIG_KEYBOARD_QT2160 is not set
# CONFIG_KEYBOARD_DLINK_DIR685 is not set
# CONFIG_KEYBOARD_LKKBD is not set
# CONFIG_KEYBOARD_GPIO is not set
# CONFIG_KEYBOARD_GPIO_POLLED is not set
# CONFIG_KEYBOARD_TCA6416 is not set
# CONFIG_KEYBOARD_TCA8418 is not set
# CONFIG_KEYBOARD_MATRIX is not set
# CONFIG_KEYBOARD_LM8323 is not set
# CONFIG_KEYBOARD_LM8333 is not set
# CONFIG_KEYBOARD_MAX7359 is not set
# CONFIG_KEYBOARD_MCS is not set
# CONFIG_KEYBOARD_MPR121 is not set
# CONFIG_KEYBOARD_NEWTON is not set
# CONFIG_KEYBOARD_OPENCORES is not set
# CONFIG_KEYBOARD_SAMSUNG is not set
# CONFIG_KEYBOARD_STOWAWAY is not set
# CONFIG_KEYBOARD_SUNKBD is not set
# CONFIG_KEYBOARD_TM2_TOUCHKEY is not set
# CONFIG_KEYBOARD_XTKBD is not set
CONFIG_INPUT_MOUSE=y
CONFIG_MOUSE_PS2=y
CONFIG_MOUSE_PS2_ALPS=y
CONFIG_MOUSE_PS2_BYD=y
CONFIG_MOUSE_PS2_LOGIPS2PP=y
CONFIG_MOUSE_PS2_SYNAPTICS=y
CONFIG_MOUSE_PS2_SYNAPTICS_SMBUS=y
CONFIG_MOUSE_PS2_CYPRESS=y
CONFIG_MOUSE_PS2_LIFEBOOK=y
CONFIG_MOUSE_PS2_TRACKPOINT=y
CONFIG_MOUSE_PS2_ELANTECH=y
CONFIG_MOUSE_PS2_ELANTECH_SMBUS=y
CONFIG_MOUSE_PS2_SENTELIC=y
# CONFIG_MOUSE_PS2_TOUCHKIT is not set
CONFIG_MOUSE_PS2_FOCALTECH=y
CONFIG_MOUSE_PS2_VMMOUSE=y
CONFIG_MOUSE_PS2_SMBUS=y
CONFIG_MOUSE_SERIAL=m
CONFIG_MOUSE_APPLETOUCH=m
CONFIG_MOUSE_BCM5974=m
CONFIG_MOUSE_CYAPA=m
# CONFIG_MOUSE_ELAN_I2C is not set
CONFIG_MOUSE_VSXXXAA=m
# CONFIG_MOUSE_GPIO is not set
CONFIG_MOUSE_SYNAPTICS_I2C=m
CONFIG_MOUSE_SYNAPTICS_USB=m
# CONFIG_INPUT_JOYSTICK is not set
CONFIG_INPUT_TABLET=y
CONFIG_TABLET_USB_ACECAD=m
CONFIG_TABLET_USB_AIPTEK=m
CONFIG_TABLET_USB_GTCO=m
# CONFIG_TABLET_USB_HANWANG is not set
CONFIG_TABLET_USB_KBTAB=m
# CONFIG_TABLET_USB_PEGASUS is not set
# CONFIG_TABLET_SERIAL_WACOM4 is not set
CONFIG_INPUT_TOUCHSCREEN=y
CONFIG_TOUCHSCREEN_PROPERTIES=y
# CONFIG_TOUCHSCREEN_ADS7846 is not set
# CONFIG_TOUCHSCREEN_AD7877 is not set
# CONFIG_TOUCHSCREEN_AD7879 is not set
# CONFIG_TOUCHSCREEN_ADC is not set
# CONFIG_TOUCHSCREEN_ATMEL_MXT is not set
# CONFIG_TOUCHSCREEN_AUO_PIXCIR is not set
# CONFIG_TOUCHSCREEN_BU21013 is not set
# CONFIG_TOUCHSCREEN_BU21029 is not set
# CONFIG_TOUCHSCREEN_CHIPONE_ICN8505 is not set
# CONFIG_TOUCHSCREEN_CY8CTMA140 is not set
# CONFIG_TOUCHSCREEN_CY8CTMG110 is not set
# CONFIG_TOUCHSCREEN_CYTTSP_CORE is not set
# CONFIG_TOUCHSCREEN_CYTTSP4_CORE is not set
# CONFIG_TOUCHSCREEN_DYNAPRO is not set
# CONFIG_TOUCHSCREEN_HAMPSHIRE is not set
# CONFIG_TOUCHSCREEN_EETI is not set
# CONFIG_TOUCHSCREEN_EGALAX_SERIAL is not set
# CONFIG_TOUCHSCREEN_EXC3000 is not set
# CONFIG_TOUCHSCREEN_FUJITSU is not set
# CONFIG_TOUCHSCREEN_GOODIX is not set
# CONFIG_TOUCHSCREEN_HIDEEP is not set
# CONFIG_TOUCHSCREEN_ILI210X is not set
# CONFIG_TOUCHSCREEN_S6SY761 is not set
# CONFIG_TOUCHSCREEN_GUNZE is not set
# CONFIG_TOUCHSCREEN_EKTF2127 is not set
# CONFIG_TOUCHSCREEN_ELAN is not set
CONFIG_TOUCHSCREEN_ELO=m
CONFIG_TOUCHSCREEN_WACOM_W8001=m
CONFIG_TOUCHSCREEN_WACOM_I2C=m
# CONFIG_TOUCHSCREEN_MAX11801 is not set
# CONFIG_TOUCHSCREEN_MCS5000 is not set
# CONFIG_TOUCHSCREEN_MMS114 is not set
# CONFIG_TOUCHSCREEN_MELFAS_MIP4 is not set
# CONFIG_TOUCHSCREEN_MTOUCH is not set
# CONFIG_TOUCHSCREEN_INEXIO is not set
# CONFIG_TOUCHSCREEN_MK712 is not set
# CONFIG_TOUCHSCREEN_PENMOUNT is not set
# CONFIG_TOUCHSCREEN_EDT_FT5X06 is not set
# CONFIG_TOUCHSCREEN_TOUCHRIGHT is not set
# CONFIG_TOUCHSCREEN_TOUCHWIN is not set
# CONFIG_TOUCHSCREEN_PIXCIR is not set
# CONFIG_TOUCHSCREEN_WDT87XX_I2C is not set
# CONFIG_TOUCHSCREEN_WM97XX is not set
# CONFIG_TOUCHSCREEN_USB_COMPOSITE is not set
# CONFIG_TOUCHSCREEN_TOUCHIT213 is not set
# CONFIG_TOUCHSCREEN_TSC_SERIO is not set
# CONFIG_TOUCHSCREEN_TSC2004 is not set
# CONFIG_TOUCHSCREEN_TSC2005 is not set
# CONFIG_TOUCHSCREEN_TSC2007 is not set
# CONFIG_TOUCHSCREEN_RM_TS is not set
# CONFIG_TOUCHSCREEN_SILEAD is not set
# CONFIG_TOUCHSCREEN_SIS_I2C is not set
# CONFIG_TOUCHSCREEN_ST1232 is not set
# CONFIG_TOUCHSCREEN_STMFTS is not set
# CONFIG_TOUCHSCREEN_SUR40 is not set
# CONFIG_TOUCHSCREEN_SURFACE3_SPI is not set
# CONFIG_TOUCHSCREEN_SX8654 is not set
# CONFIG_TOUCHSCREEN_TPS6507X is not set
# CONFIG_TOUCHSCREEN_ZET6223 is not set
# CONFIG_TOUCHSCREEN_ZFORCE is not set
# CONFIG_TOUCHSCREEN_ROHM_BU21023 is not set
# CONFIG_TOUCHSCREEN_IQS5XX is not set
CONFIG_INPUT_MISC=y
# CONFIG_INPUT_AD714X is not set
# CONFIG_INPUT_BMA150 is not set
# CONFIG_INPUT_E3X0_BUTTON is not set
CONFIG_INPUT_PCSPKR=m
# CONFIG_INPUT_MMA8450 is not set
CONFIG_INPUT_APANEL=m
# CONFIG_INPUT_GPIO_BEEPER is not set
# CONFIG_INPUT_GPIO_DECODER is not set
# CONFIG_INPUT_GPIO_VIBRA is not set
CONFIG_INPUT_ATLAS_BTNS=m
CONFIG_INPUT_ATI_REMOTE2=m
CONFIG_INPUT_KEYSPAN_REMOTE=m
# CONFIG_INPUT_KXTJ9 is not set
CONFIG_INPUT_POWERMATE=m
CONFIG_INPUT_YEALINK=m
CONFIG_INPUT_CM109=m
CONFIG_INPUT_UINPUT=m
# CONFIG_INPUT_PCF8574 is not set
# CONFIG_INPUT_PWM_BEEPER is not set
# CONFIG_INPUT_PWM_VIBRA is not set
CONFIG_INPUT_GPIO_ROTARY_ENCODER=m
# CONFIG_INPUT_ADXL34X is not set
# CONFIG_INPUT_IMS_PCU is not set
# CONFIG_INPUT_IQS269A is not set
# CONFIG_INPUT_CMA3000 is not set
CONFIG_INPUT_XEN_KBDDEV_FRONTEND=m
# CONFIG_INPUT_IDEAPAD_SLIDEBAR is not set
# CONFIG_INPUT_DRV260X_HAPTICS is not set
# CONFIG_INPUT_DRV2665_HAPTICS is not set
# CONFIG_INPUT_DRV2667_HAPTICS is not set
CONFIG_RMI4_CORE=m
# CONFIG_RMI4_I2C is not set
# CONFIG_RMI4_SPI is not set
CONFIG_RMI4_SMB=m
CONFIG_RMI4_F03=y
CONFIG_RMI4_F03_SERIO=m
CONFIG_RMI4_2D_SENSOR=y
CONFIG_RMI4_F11=y
CONFIG_RMI4_F12=y
CONFIG_RMI4_F30=y
# CONFIG_RMI4_F34 is not set
# CONFIG_RMI4_F54 is not set
# CONFIG_RMI4_F55 is not set

#
# Hardware I/O ports
#
CONFIG_SERIO=y
CONFIG_ARCH_MIGHT_HAVE_PC_SERIO=y
CONFIG_SERIO_I8042=y
CONFIG_SERIO_SERPORT=y
# CONFIG_SERIO_CT82C710 is not set
# CONFIG_SERIO_PARKBD is not set
# CONFIG_SERIO_PCIPS2 is not set
CONFIG_SERIO_LIBPS2=y
CONFIG_SERIO_RAW=m
CONFIG_SERIO_ALTERA_PS2=m
# CONFIG_SERIO_PS2MULT is not set
CONFIG_SERIO_ARC_PS2=m
CONFIG_HYPERV_KEYBOARD=m
# CONFIG_SERIO_GPIO_PS2 is not set
# CONFIG_USERIO is not set
# CONFIG_GAMEPORT is not set
# end of Hardware I/O ports
# end of Input device support

#
# Character devices
#
CONFIG_TTY=y
CONFIG_VT=y
CONFIG_CONSOLE_TRANSLATIONS=y
CONFIG_VT_CONSOLE=y
CONFIG_VT_CONSOLE_SLEEP=y
CONFIG_HW_CONSOLE=y
CONFIG_VT_HW_CONSOLE_BINDING=y
CONFIG_UNIX98_PTYS=y
# CONFIG_LEGACY_PTYS is not set
CONFIG_LDISC_AUTOLOAD=y

#
# Serial drivers
#
CONFIG_SERIAL_EARLYCON=y
CONFIG_SERIAL_8250=y
# CONFIG_SERIAL_8250_DEPRECATED_OPTIONS is not set
CONFIG_SERIAL_8250_PNP=y
# CONFIG_SERIAL_8250_16550A_VARIANTS is not set
# CONFIG_SERIAL_8250_FINTEK is not set
CONFIG_SERIAL_8250_CONSOLE=y
CONFIG_SERIAL_8250_DMA=y
CONFIG_SERIAL_8250_PCI=y
CONFIG_SERIAL_8250_EXAR=y
CONFIG_SERIAL_8250_NR_UARTS=32
CONFIG_SERIAL_8250_RUNTIME_UARTS=4
CONFIG_SERIAL_8250_EXTENDED=y
CONFIG_SERIAL_8250_MANY_PORTS=y
CONFIG_SERIAL_8250_SHARE_IRQ=y
# CONFIG_SERIAL_8250_DETECT_IRQ is not set
CONFIG_SERIAL_8250_RSA=y
CONFIG_SERIAL_8250_DWLIB=y
CONFIG_SERIAL_8250_DW=y
# CONFIG_SERIAL_8250_RT288X is not set
CONFIG_SERIAL_8250_LPSS=y
CONFIG_SERIAL_8250_MID=y

#
# Non-8250 serial port support
#
# CONFIG_SERIAL_MAX3100 is not set
# CONFIG_SERIAL_MAX310X is not set
# CONFIG_SERIAL_UARTLITE is not set
CONFIG_SERIAL_CORE=y
CONFIG_SERIAL_CORE_CONSOLE=y
CONFIG_SERIAL_JSM=m
# CONFIG_SERIAL_LANTIQ is not set
# CONFIG_SERIAL_SCCNXP is not set
# CONFIG_SERIAL_SC16IS7XX is not set
# CONFIG_SERIAL_ALTERA_JTAGUART is not set
# CONFIG_SERIAL_ALTERA_UART is not set
# CONFIG_SERIAL_IFX6X60 is not set
CONFIG_SERIAL_ARC=m
CONFIG_SERIAL_ARC_NR_PORTS=1
# CONFIG_SERIAL_RP2 is not set
# CONFIG_SERIAL_FSL_LPUART is not set
# CONFIG_SERIAL_FSL_LINFLEXUART is not set
# CONFIG_SERIAL_SPRD is not set
# end of Serial drivers

CONFIG_SERIAL_MCTRL_GPIO=y
CONFIG_SERIAL_NONSTANDARD=y
# CONFIG_ROCKETPORT is not set
CONFIG_CYCLADES=m
# CONFIG_CYZ_INTR is not set
# CONFIG_MOXA_INTELLIO is not set
# CONFIG_MOXA_SMARTIO is not set
CONFIG_SYNCLINK=m
CONFIG_SYNCLINKMP=m
CONFIG_SYNCLINK_GT=m
# CONFIG_ISI is not set
CONFIG_N_HDLC=m
CONFIG_N_GSM=m
CONFIG_NOZOMI=m
# CONFIG_NULL_TTY is not set
# CONFIG_TRACE_SINK is not set
CONFIG_HVC_DRIVER=y
CONFIG_HVC_IRQ=y
CONFIG_HVC_XEN=y
CONFIG_HVC_XEN_FRONTEND=y
# CONFIG_SERIAL_DEV_BUS is not set
# CONFIG_TTY_PRINTK is not set
CONFIG_PRINTER=m
# CONFIG_LP_CONSOLE is not set
CONFIG_PPDEV=m
CONFIG_VIRTIO_CONSOLE=y
CONFIG_IPMI_HANDLER=m
CONFIG_IPMI_DMI_DECODE=y
CONFIG_IPMI_PLAT_DATA=y
# CONFIG_IPMI_PANIC_EVENT is not set
CONFIG_IPMI_DEVICE_INTERFACE=m
CONFIG_IPMI_SI=m
CONFIG_IPMI_SSIF=m
CONFIG_IPMI_WATCHDOG=m
CONFIG_IPMI_POWEROFF=m
CONFIG_HW_RANDOM=y
CONFIG_HW_RANDOM_TIMERIOMEM=m
CONFIG_HW_RANDOM_INTEL=m
CONFIG_HW_RANDOM_AMD=m
CONFIG_HW_RANDOM_VIA=m
CONFIG_HW_RANDOM_VIRTIO=y
# CONFIG_APPLICOM is not set
# CONFIG_MWAVE is not set
CONFIG_DEVMEM=y
# CONFIG_DEVKMEM is not set
CONFIG_NVRAM=y
CONFIG_RAW_DRIVER=y
CONFIG_MAX_RAW_DEVS=8192
CONFIG_DEVPORT=y
CONFIG_HPET=y
CONFIG_HPET_MMAP=y
# CONFIG_HPET_MMAP_DEFAULT is not set
CONFIG_HANGCHECK_TIMER=m
CONFIG_UV_MMTIMER=m
CONFIG_TCG_TPM=y
CONFIG_HW_RANDOM_TPM=y
CONFIG_TCG_TIS_CORE=y
CONFIG_TCG_TIS=y
# CONFIG_TCG_TIS_SPI is not set
CONFIG_TCG_TIS_I2C_ATMEL=m
CONFIG_TCG_TIS_I2C_INFINEON=m
CONFIG_TCG_TIS_I2C_NUVOTON=m
CONFIG_TCG_NSC=m
CONFIG_TCG_ATMEL=m
CONFIG_TCG_INFINEON=m
# CONFIG_TCG_XEN is not set
CONFIG_TCG_CRB=y
# CONFIG_TCG_VTPM_PROXY is not set
CONFIG_TCG_TIS_ST33ZP24=m
CONFIG_TCG_TIS_ST33ZP24_I2C=m
# CONFIG_TCG_TIS_ST33ZP24_SPI is not set
CONFIG_TELCLOCK=m
# CONFIG_XILLYBUS is not set
# end of Character devices

# CONFIG_RANDOM_TRUST_CPU is not set
# CONFIG_RANDOM_TRUST_BOOTLOADER is not set

#
# I2C support
#
CONFIG_I2C=y
CONFIG_ACPI_I2C_OPREGION=y
CONFIG_I2C_BOARDINFO=y
CONFIG_I2C_COMPAT=y
CONFIG_I2C_CHARDEV=m
CONFIG_I2C_MUX=m

#
# Multiplexer I2C Chip support
#
# CONFIG_I2C_MUX_GPIO is not set
# CONFIG_I2C_MUX_LTC4306 is not set
# CONFIG_I2C_MUX_PCA9541 is not set
# CONFIG_I2C_MUX_PCA954x is not set
# CONFIG_I2C_MUX_REG is not set
# CONFIG_I2C_MUX_MLXCPLD is not set
# end of Multiplexer I2C Chip support

CONFIG_I2C_HELPER_AUTO=y
CONFIG_I2C_SMBUS=m
CONFIG_I2C_ALGOBIT=y
CONFIG_I2C_ALGOPCA=m

#
# I2C Hardware Bus support
#

#
# PC SMBus host controller drivers
#
# CONFIG_I2C_ALI1535 is not set
# CONFIG_I2C_ALI1563 is not set
# CONFIG_I2C_ALI15X3 is not set
CONFIG_I2C_AMD756=m
CONFIG_I2C_AMD756_S4882=m
CONFIG_I2C_AMD8111=m
# CONFIG_I2C_AMD_MP2 is not set
CONFIG_I2C_I801=m
CONFIG_I2C_ISCH=m
CONFIG_I2C_ISMT=m
CONFIG_I2C_PIIX4=m
CONFIG_I2C_NFORCE2=m
CONFIG_I2C_NFORCE2_S4985=m
# CONFIG_I2C_NVIDIA_GPU is not set
# CONFIG_I2C_SIS5595 is not set
# CONFIG_I2C_SIS630 is not set
CONFIG_I2C_SIS96X=m
CONFIG_I2C_VIA=m
CONFIG_I2C_VIAPRO=m

#
# ACPI drivers
#
CONFIG_I2C_SCMI=m

#
# I2C system bus drivers (mostly embedded / system-on-chip)
#
# CONFIG_I2C_CBUS_GPIO is not set
CONFIG_I2C_DESIGNWARE_CORE=m
# CONFIG_I2C_DESIGNWARE_SLAVE is not set
CONFIG_I2C_DESIGNWARE_PLATFORM=m
# CONFIG_I2C_DESIGNWARE_BAYTRAIL is not set
# CONFIG_I2C_DESIGNWARE_PCI is not set
# CONFIG_I2C_EMEV2 is not set
# CONFIG_I2C_GPIO is not set
# CONFIG_I2C_OCORES is not set
CONFIG_I2C_PCA_PLATFORM=m
CONFIG_I2C_SIMTEC=m
# CONFIG_I2C_XILINX is not set

#
# External I2C/SMBus adapter drivers
#
CONFIG_I2C_DIOLAN_U2C=m
CONFIG_I2C_PARPORT=m
# CONFIG_I2C_ROBOTFUZZ_OSIF is not set
# CONFIG_I2C_TAOS_EVM is not set
CONFIG_I2C_TINY_USB=m
CONFIG_I2C_VIPERBOARD=m

#
# Other I2C/SMBus bus drivers
#
# CONFIG_I2C_MLXCPLD is not set
# end of I2C Hardware Bus support

CONFIG_I2C_STUB=m
# CONFIG_I2C_SLAVE is not set
# CONFIG_I2C_DEBUG_CORE is not set
# CONFIG_I2C_DEBUG_ALGO is not set
# CONFIG_I2C_DEBUG_BUS is not set
# end of I2C support

# CONFIG_I3C is not set
CONFIG_SPI=y
# CONFIG_SPI_DEBUG is not set
CONFIG_SPI_MASTER=y
# CONFIG_SPI_MEM is not set

#
# SPI Master Controller Drivers
#
# CONFIG_SPI_ALTERA is not set
# CONFIG_SPI_AXI_SPI_ENGINE is not set
# CONFIG_SPI_BITBANG is not set
# CONFIG_SPI_BUTTERFLY is not set
# CONFIG_SPI_CADENCE is not set
# CONFIG_SPI_DESIGNWARE is not set
# CONFIG_SPI_NXP_FLEXSPI is not set
# CONFIG_SPI_GPIO is not set
# CONFIG_SPI_LM70_LLP is not set
# CONFIG_SPI_OC_TINY is not set
CONFIG_SPI_PXA2XX=m
CONFIG_SPI_PXA2XX_PCI=m
# CONFIG_SPI_ROCKCHIP is not set
# CONFIG_SPI_SC18IS602 is not set
# CONFIG_SPI_SIFIVE is not set
# CONFIG_SPI_MXIC is not set
# CONFIG_SPI_XCOMM is not set
# CONFIG_SPI_XILINX is not set
# CONFIG_SPI_ZYNQMP_GQSPI is not set
# CONFIG_SPI_AMD is not set

#
# SPI Multiplexer support
#
# CONFIG_SPI_MUX is not set

#
# SPI Protocol Masters
#
# CONFIG_SPI_SPIDEV is not set
# CONFIG_SPI_LOOPBACK_TEST is not set
# CONFIG_SPI_TLE62X0 is not set
# CONFIG_SPI_SLAVE is not set
# CONFIG_SPMI is not set
# CONFIG_HSI is not set
CONFIG_PPS=y
# CONFIG_PPS_DEBUG is not set

#
# PPS clients support
#
# CONFIG_PPS_CLIENT_KTIMER is not set
CONFIG_PPS_CLIENT_LDISC=m
CONFIG_PPS_CLIENT_PARPORT=m
CONFIG_PPS_CLIENT_GPIO=m

#
# PPS generators support
#

#
# PTP clock support
#
CONFIG_PTP_1588_CLOCK=y
CONFIG_DP83640_PHY=m
# CONFIG_PTP_1588_CLOCK_INES is not set
CONFIG_PTP_1588_CLOCK_KVM=m
# CONFIG_PTP_1588_CLOCK_IDT82P33 is not set
# CONFIG_PTP_1588_CLOCK_IDTCM is not set
# CONFIG_PTP_1588_CLOCK_VMW is not set
# end of PTP clock support

CONFIG_PINCTRL=y
CONFIG_PINMUX=y
CONFIG_PINCONF=y
CONFIG_GENERIC_PINCONF=y
# CONFIG_DEBUG_PINCTRL is not set
CONFIG_PINCTRL_AMD=m
# CONFIG_PINCTRL_MCP23S08 is not set
# CONFIG_PINCTRL_SX150X is not set
CONFIG_PINCTRL_BAYTRAIL=y
# CONFIG_PINCTRL_CHERRYVIEW is not set
# CONFIG_PINCTRL_LYNXPOINT is not set
CONFIG_PINCTRL_INTEL=m
# CONFIG_PINCTRL_BROXTON is not set
CONFIG_PINCTRL_CANNONLAKE=m
# CONFIG_PINCTRL_CEDARFORK is not set
CONFIG_PINCTRL_DENVERTON=m
CONFIG_PINCTRL_GEMINILAKE=m
# CONFIG_PINCTRL_ICELAKE is not set
# CONFIG_PINCTRL_JASPERLAKE is not set
CONFIG_PINCTRL_LEWISBURG=m
CONFIG_PINCTRL_SUNRISEPOINT=m
# CONFIG_PINCTRL_TIGERLAKE is not set
CONFIG_GPIOLIB=y
CONFIG_GPIOLIB_FASTPATH_LIMIT=512
CONFIG_GPIO_ACPI=y
CONFIG_GPIOLIB_IRQCHIP=y
# CONFIG_DEBUG_GPIO is not set
CONFIG_GPIO_SYSFS=y
CONFIG_GPIO_GENERIC=m

#
# Memory mapped GPIO drivers
#
CONFIG_GPIO_AMDPT=m
# CONFIG_GPIO_DWAPB is not set
# CONFIG_GPIO_EXAR is not set
# CONFIG_GPIO_GENERIC_PLATFORM is not set
CONFIG_GPIO_ICH=m
# CONFIG_GPIO_MB86S7X is not set
# CONFIG_GPIO_VX855 is not set
# CONFIG_GPIO_XILINX is not set
# CONFIG_GPIO_AMD_FCH is not set
# end of Memory mapped GPIO drivers

#
# Port-mapped I/O GPIO drivers
#
# CONFIG_GPIO_F7188X is not set
# CONFIG_GPIO_IT87 is not set
# CONFIG_GPIO_SCH is not set
# CONFIG_GPIO_SCH311X is not set
# CONFIG_GPIO_WINBOND is not set
# CONFIG_GPIO_WS16C48 is not set
# end of Port-mapped I/O GPIO drivers

#
# I2C GPIO expanders
#
# CONFIG_GPIO_ADP5588 is not set
# CONFIG_GPIO_MAX7300 is not set
# CONFIG_GPIO_MAX732X is not set
# CONFIG_GPIO_PCA953X is not set
# CONFIG_GPIO_PCF857X is not set
# CONFIG_GPIO_TPIC2810 is not set
# end of I2C GPIO expanders

#
# MFD GPIO expanders
#
# end of MFD GPIO expanders

#
# PCI GPIO expanders
#
# CONFIG_GPIO_AMD8111 is not set
# CONFIG_GPIO_ML_IOH is not set
# CONFIG_GPIO_PCI_IDIO_16 is not set
# CONFIG_GPIO_PCIE_IDIO_24 is not set
# CONFIG_GPIO_RDC321X is not set
# end of PCI GPIO expanders

#
# SPI GPIO expanders
#
# CONFIG_GPIO_MAX3191X is not set
# CONFIG_GPIO_MAX7301 is not set
# CONFIG_GPIO_MC33880 is not set
# CONFIG_GPIO_PISOSR is not set
# CONFIG_GPIO_XRA1403 is not set
# end of SPI GPIO expanders

#
# USB GPIO expanders
#
CONFIG_GPIO_VIPERBOARD=m
# end of USB GPIO expanders

# CONFIG_GPIO_AGGREGATOR is not set
CONFIG_GPIO_MOCKUP=m
# CONFIG_W1 is not set
# CONFIG_POWER_AVS is not set
CONFIG_POWER_RESET=y
# CONFIG_POWER_RESET_RESTART is not set
CONFIG_POWER_SUPPLY=y
# CONFIG_POWER_SUPPLY_DEBUG is not set
CONFIG_POWER_SUPPLY_HWMON=y
# CONFIG_PDA_POWER is not set
# CONFIG_GENERIC_ADC_BATTERY is not set
# CONFIG_TEST_POWER is not set
# CONFIG_CHARGER_ADP5061 is not set
# CONFIG_BATTERY_CW2015 is not set
# CONFIG_BATTERY_DS2780 is not set
# CONFIG_BATTERY_DS2781 is not set
# CONFIG_BATTERY_DS2782 is not set
# CONFIG_BATTERY_SBS is not set
# CONFIG_CHARGER_SBS is not set
# CONFIG_MANAGER_SBS is not set
# CONFIG_BATTERY_BQ27XXX is not set
# CONFIG_BATTERY_MAX17040 is not set
# CONFIG_BATTERY_MAX17042 is not set
# CONFIG_CHARGER_MAX8903 is not set
# CONFIG_CHARGER_LP8727 is not set
# CONFIG_CHARGER_GPIO is not set
# CONFIG_CHARGER_LT3651 is not set
# CONFIG_CHARGER_BQ2415X is not set
# CONFIG_CHARGER_BQ24257 is not set
# CONFIG_CHARGER_BQ24735 is not set
# CONFIG_CHARGER_BQ25890 is not set
CONFIG_CHARGER_SMB347=m
# CONFIG_BATTERY_GAUGE_LTC2941 is not set
# CONFIG_CHARGER_RT9455 is not set
# CONFIG_CHARGER_BD99954 is not set
CONFIG_HWMON=y
CONFIG_HWMON_VID=m
# CONFIG_HWMON_DEBUG_CHIP is not set

#
# Native drivers
#
CONFIG_SENSORS_ABITUGURU=m
CONFIG_SENSORS_ABITUGURU3=m
# CONFIG_SENSORS_AD7314 is not set
CONFIG_SENSORS_AD7414=m
CONFIG_SENSORS_AD7418=m
CONFIG_SENSORS_ADM1021=m
CONFIG_SENSORS_ADM1025=m
CONFIG_SENSORS_ADM1026=m
CONFIG_SENSORS_ADM1029=m
CONFIG_SENSORS_ADM1031=m
# CONFIG_SENSORS_ADM1177 is not set
CONFIG_SENSORS_ADM9240=m
CONFIG_SENSORS_ADT7X10=m
# CONFIG_SENSORS_ADT7310 is not set
CONFIG_SENSORS_ADT7410=m
CONFIG_SENSORS_ADT7411=m
CONFIG_SENSORS_ADT7462=m
CONFIG_SENSORS_ADT7470=m
CONFIG_SENSORS_ADT7475=m
# CONFIG_SENSORS_AS370 is not set
CONFIG_SENSORS_ASC7621=m
# CONFIG_SENSORS_AXI_FAN_CONTROL is not set
CONFIG_SENSORS_K8TEMP=m
CONFIG_SENSORS_K10TEMP=m
CONFIG_SENSORS_FAM15H_POWER=m
# CONFIG_SENSORS_AMD_ENERGY is not set
CONFIG_SENSORS_APPLESMC=m
CONFIG_SENSORS_ASB100=m
# CONFIG_SENSORS_ASPEED is not set
CONFIG_SENSORS_ATXP1=m
# CONFIG_SENSORS_DRIVETEMP is not set
CONFIG_SENSORS_DS620=m
CONFIG_SENSORS_DS1621=m
CONFIG_SENSORS_DELL_SMM=m
CONFIG_SENSORS_I5K_AMB=m
CONFIG_SENSORS_F71805F=m
CONFIG_SENSORS_F71882FG=m
CONFIG_SENSORS_F75375S=m
CONFIG_SENSORS_FSCHMD=m
# CONFIG_SENSORS_FTSTEUTATES is not set
CONFIG_SENSORS_GL518SM=m
CONFIG_SENSORS_GL520SM=m
CONFIG_SENSORS_G760A=m
# CONFIG_SENSORS_G762 is not set
# CONFIG_SENSORS_HIH6130 is not set
CONFIG_SENSORS_IBMAEM=m
CONFIG_SENSORS_IBMPEX=m
# CONFIG_SENSORS_IIO_HWMON is not set
# CONFIG_SENSORS_I5500 is not set
CONFIG_SENSORS_CORETEMP=m
CONFIG_SENSORS_IT87=m
CONFIG_SENSORS_JC42=m
# CONFIG_SENSORS_POWR1220 is not set
CONFIG_SENSORS_LINEAGE=m
# CONFIG_SENSORS_LTC2945 is not set
# CONFIG_SENSORS_LTC2947_I2C is not set
# CONFIG_SENSORS_LTC2947_SPI is not set
# CONFIG_SENSORS_LTC2990 is not set
CONFIG_SENSORS_LTC4151=m
CONFIG_SENSORS_LTC4215=m
# CONFIG_SENSORS_LTC4222 is not set
CONFIG_SENSORS_LTC4245=m
# CONFIG_SENSORS_LTC4260 is not set
CONFIG_SENSORS_LTC4261=m
# CONFIG_SENSORS_MAX1111 is not set
CONFIG_SENSORS_MAX16065=m
CONFIG_SENSORS_MAX1619=m
CONFIG_SENSORS_MAX1668=m
CONFIG_SENSORS_MAX197=m
# CONFIG_SENSORS_MAX31722 is not set
# CONFIG_SENSORS_MAX31730 is not set
# CONFIG_SENSORS_MAX6621 is not set
CONFIG_SENSORS_MAX6639=m
CONFIG_SENSORS_MAX6642=m
CONFIG_SENSORS_MAX6650=m
CONFIG_SENSORS_MAX6697=m
# CONFIG_SENSORS_MAX31790 is not set
CONFIG_SENSORS_MCP3021=m
# CONFIG_SENSORS_TC654 is not set
# CONFIG_SENSORS_ADCXX is not set
CONFIG_SENSORS_LM63=m
# CONFIG_SENSORS_LM70 is not set
CONFIG_SENSORS_LM73=m
CONFIG_SENSORS_LM75=m
CONFIG_SENSORS_LM77=m
CONFIG_SENSORS_LM78=m
CONFIG_SENSORS_LM80=m
CONFIG_SENSORS_LM83=m
CONFIG_SENSORS_LM85=m
CONFIG_SENSORS_LM87=m
CONFIG_SENSORS_LM90=m
CONFIG_SENSORS_LM92=m
CONFIG_SENSORS_LM93=m
CONFIG_SENSORS_LM95234=m
CONFIG_SENSORS_LM95241=m
CONFIG_SENSORS_LM95245=m
CONFIG_SENSORS_PC87360=m
CONFIG_SENSORS_PC87427=m
CONFIG_SENSORS_NTC_THERMISTOR=m
# CONFIG_SENSORS_NCT6683 is not set
CONFIG_SENSORS_NCT6775=m
# CONFIG_SENSORS_NCT7802 is not set
# CONFIG_SENSORS_NCT7904 is not set
# CONFIG_SENSORS_NPCM7XX is not set
CONFIG_SENSORS_PCF8591=m
CONFIG_PMBUS=m
CONFIG_SENSORS_PMBUS=m
CONFIG_SENSORS_ADM1275=m
# CONFIG_SENSORS_BEL_PFE is not set
# CONFIG_SENSORS_IBM_CFFPS is not set
# CONFIG_SENSORS_INSPUR_IPSPS is not set
# CONFIG_SENSORS_IR35221 is not set
# CONFIG_SENSORS_IR38064 is not set
# CONFIG_SENSORS_IRPS5401 is not set
# CONFIG_SENSORS_ISL68137 is not set
CONFIG_SENSORS_LM25066=m
CONFIG_SENSORS_LTC2978=m
# CONFIG_SENSORS_LTC3815 is not set
CONFIG_SENSORS_MAX16064=m
# CONFIG_SENSORS_MAX16601 is not set
# CONFIG_SENSORS_MAX20730 is not set
# CONFIG_SENSORS_MAX20751 is not set
# CONFIG_SENSORS_MAX31785 is not set
CONFIG_SENSORS_MAX34440=m
CONFIG_SENSORS_MAX8688=m
# CONFIG_SENSORS_PXE1610 is not set
# CONFIG_SENSORS_TPS40422 is not set
# CONFIG_SENSORS_TPS53679 is not set
CONFIG_SENSORS_UCD9000=m
CONFIG_SENSORS_UCD9200=m
# CONFIG_SENSORS_XDPE122 is not set
CONFIG_SENSORS_ZL6100=m
CONFIG_SENSORS_SHT15=m
CONFIG_SENSORS_SHT21=m
# CONFIG_SENSORS_SHT3x is not set
# CONFIG_SENSORS_SHTC1 is not set
CONFIG_SENSORS_SIS5595=m
CONFIG_SENSORS_DME1737=m
CONFIG_SENSORS_EMC1403=m
# CONFIG_SENSORS_EMC2103 is not set
CONFIG_SENSORS_EMC6W201=m
CONFIG_SENSORS_SMSC47M1=m
CONFIG_SENSORS_SMSC47M192=m
CONFIG_SENSORS_SMSC47B397=m
CONFIG_SENSORS_SCH56XX_COMMON=m
CONFIG_SENSORS_SCH5627=m
CONFIG_SENSORS_SCH5636=m
# CONFIG_SENSORS_STTS751 is not set
# CONFIG_SENSORS_SMM665 is not set
# CONFIG_SENSORS_ADC128D818 is not set
CONFIG_SENSORS_ADS7828=m
# CONFIG_SENSORS_ADS7871 is not set
CONFIG_SENSORS_AMC6821=m
CONFIG_SENSORS_INA209=m
CONFIG_SENSORS_INA2XX=m
# CONFIG_SENSORS_INA3221 is not set
# CONFIG_SENSORS_TC74 is not set
CONFIG_SENSORS_THMC50=m
CONFIG_SENSORS_TMP102=m
# CONFIG_SENSORS_TMP103 is not set
# CONFIG_SENSORS_TMP108 is not set
CONFIG_SENSORS_TMP401=m
CONFIG_SENSORS_TMP421=m
# CONFIG_SENSORS_TMP513 is not set
CONFIG_SENSORS_VIA_CPUTEMP=m
CONFIG_SENSORS_VIA686A=m
CONFIG_SENSORS_VT1211=m
CONFIG_SENSORS_VT8231=m
# CONFIG_SENSORS_W83773G is not set
CONFIG_SENSORS_W83781D=m
CONFIG_SENSORS_W83791D=m
CONFIG_SENSORS_W83792D=m
CONFIG_SENSORS_W83793=m
CONFIG_SENSORS_W83795=m
# CONFIG_SENSORS_W83795_FANCTRL is not set
CONFIG_SENSORS_W83L785TS=m
CONFIG_SENSORS_W83L786NG=m
CONFIG_SENSORS_W83627HF=m
CONFIG_SENSORS_W83627EHF=m
# CONFIG_SENSORS_XGENE is not set

#
# ACPI drivers
#
CONFIG_SENSORS_ACPI_POWER=m
CONFIG_SENSORS_ATK0110=m
CONFIG_THERMAL=y
# CONFIG_THERMAL_STATISTICS is not set
CONFIG_THERMAL_EMERGENCY_POWEROFF_DELAY_MS=0
CONFIG_THERMAL_HWMON=y
CONFIG_THERMAL_WRITABLE_TRIPS=y
CONFIG_THERMAL_DEFAULT_GOV_STEP_WISE=y
# CONFIG_THERMAL_DEFAULT_GOV_FAIR_SHARE is not set
# CONFIG_THERMAL_DEFAULT_GOV_USER_SPACE is not set
CONFIG_THERMAL_GOV_FAIR_SHARE=y
CONFIG_THERMAL_GOV_STEP_WISE=y
CONFIG_THERMAL_GOV_BANG_BANG=y
CONFIG_THERMAL_GOV_USER_SPACE=y
# CONFIG_CLOCK_THERMAL is not set
# CONFIG_DEVFREQ_THERMAL is not set
# CONFIG_THERMAL_EMULATION is not set

#
# Intel thermal drivers
#
CONFIG_INTEL_POWERCLAMP=m
CONFIG_X86_PKG_TEMP_THERMAL=m
CONFIG_INTEL_SOC_DTS_IOSF_CORE=m
# CONFIG_INTEL_SOC_DTS_THERMAL is not set

#
# ACPI INT340X thermal drivers
#
CONFIG_INT340X_THERMAL=m
CONFIG_ACPI_THERMAL_REL=m
# CONFIG_INT3406_THERMAL is not set
CONFIG_PROC_THERMAL_MMIO_RAPL=y
# end of ACPI INT340X thermal drivers

# CONFIG_INTEL_PCH_THERMAL is not set
# end of Intel thermal drivers

# CONFIG_GENERIC_ADC_THERMAL is not set
CONFIG_WATCHDOG=y
CONFIG_WATCHDOG_CORE=y
# CONFIG_WATCHDOG_NOWAYOUT is not set
CONFIG_WATCHDOG_HANDLE_BOOT_ENABLED=y
CONFIG_WATCHDOG_OPEN_TIMEOUT=0
CONFIG_WATCHDOG_SYSFS=y

#
# Watchdog Pretimeout Governors
#
# CONFIG_WATCHDOG_PRETIMEOUT_GOV is not set

#
# Watchdog Device Drivers
#
CONFIG_SOFT_WATCHDOG=m
CONFIG_WDAT_WDT=m
# CONFIG_XILINX_WATCHDOG is not set
# CONFIG_ZIIRAVE_WATCHDOG is not set
# CONFIG_CADENCE_WATCHDOG is not set
# CONFIG_DW_WATCHDOG is not set
# CONFIG_MAX63XX_WATCHDOG is not set
# CONFIG_ACQUIRE_WDT is not set
# CONFIG_ADVANTECH_WDT is not set
CONFIG_ALIM1535_WDT=m
CONFIG_ALIM7101_WDT=m
# CONFIG_EBC_C384_WDT is not set
CONFIG_F71808E_WDT=m
CONFIG_SP5100_TCO=m
CONFIG_SBC_FITPC2_WATCHDOG=m
# CONFIG_EUROTECH_WDT is not set
CONFIG_IB700_WDT=m
CONFIG_IBMASR=m
# CONFIG_WAFER_WDT is not set
CONFIG_I6300ESB_WDT=y
CONFIG_IE6XX_WDT=m
CONFIG_ITCO_WDT=y
CONFIG_ITCO_VENDOR_SUPPORT=y
CONFIG_IT8712F_WDT=m
CONFIG_IT87_WDT=m
CONFIG_HP_WATCHDOG=m
CONFIG_HPWDT_NMI_DECODING=y
# CONFIG_SC1200_WDT is not set
# CONFIG_PC87413_WDT is not set
CONFIG_NV_TCO=m
# CONFIG_60XX_WDT is not set
# CONFIG_CPU5_WDT is not set
CONFIG_SMSC_SCH311X_WDT=m
# CONFIG_SMSC37B787_WDT is not set
# CONFIG_TQMX86_WDT is not set
CONFIG_VIA_WDT=m
CONFIG_W83627HF_WDT=m
CONFIG_W83877F_WDT=m
CONFIG_W83977F_WDT=m
CONFIG_MACHZ_WDT=m
# CONFIG_SBC_EPX_C3_WATCHDOG is not set
CONFIG_INTEL_MEI_WDT=m
# CONFIG_NI903X_WDT is not set
# CONFIG_NIC7018_WDT is not set
# CONFIG_MEN_A21_WDT is not set
CONFIG_XEN_WDT=m

#
# PCI-based Watchdog Cards
#
CONFIG_PCIPCWATCHDOG=m
CONFIG_WDTPCI=m

#
# USB-based Watchdog Cards
#
CONFIG_USBPCWATCHDOG=m
CONFIG_SSB_POSSIBLE=y
CONFIG_SSB=m
CONFIG_SSB_SPROM=y
CONFIG_SSB_PCIHOST_POSSIBLE=y
CONFIG_SSB_PCIHOST=y
CONFIG_SSB_SDIOHOST_POSSIBLE=y
CONFIG_SSB_SDIOHOST=y
CONFIG_SSB_DRIVER_PCICORE_POSSIBLE=y
CONFIG_SSB_DRIVER_PCICORE=y
CONFIG_SSB_DRIVER_GPIO=y
CONFIG_BCMA_POSSIBLE=y
CONFIG_BCMA=m
CONFIG_BCMA_HOST_PCI_POSSIBLE=y
CONFIG_BCMA_HOST_PCI=y
# CONFIG_BCMA_HOST_SOC is not set
CONFIG_BCMA_DRIVER_PCI=y
CONFIG_BCMA_DRIVER_GMAC_CMN=y
CONFIG_BCMA_DRIVER_GPIO=y
# CONFIG_BCMA_DEBUG is not set

#
# Multifunction device drivers
#
CONFIG_MFD_CORE=y
# CONFIG_MFD_AS3711 is not set
# CONFIG_PMIC_ADP5520 is not set
# CONFIG_MFD_AAT2870_CORE is not set
# CONFIG_MFD_BCM590XX is not set
# CONFIG_MFD_BD9571MWV is not set
# CONFIG_MFD_AXP20X_I2C is not set
# CONFIG_MFD_MADERA is not set
# CONFIG_PMIC_DA903X is not set
# CONFIG_MFD_DA9052_SPI is not set
# CONFIG_MFD_DA9052_I2C is not set
# CONFIG_MFD_DA9055 is not set
# CONFIG_MFD_DA9062 is not set
# CONFIG_MFD_DA9063 is not set
# CONFIG_MFD_DA9150 is not set
# CONFIG_MFD_DLN2 is not set
# CONFIG_MFD_MC13XXX_SPI is not set
# CONFIG_MFD_MC13XXX_I2C is not set
# CONFIG_MFD_MP2629 is not set
# CONFIG_HTC_PASIC3 is not set
# CONFIG_HTC_I2CPLD is not set
# CONFIG_MFD_INTEL_QUARK_I2C_GPIO is not set
CONFIG_LPC_ICH=m
CONFIG_LPC_SCH=m
# CONFIG_INTEL_SOC_PMIC_CHTDC_TI is not set
CONFIG_MFD_INTEL_LPSS=y
CONFIG_MFD_INTEL_LPSS_ACPI=y
CONFIG_MFD_INTEL_LPSS_PCI=y
# CONFIG_MFD_INTEL_PMC_BXT is not set
# CONFIG_MFD_IQS62X is not set
# CONFIG_MFD_JANZ_CMODIO is not set
# CONFIG_MFD_KEMPLD is not set
# CONFIG_MFD_88PM800 is not set
# CONFIG_MFD_88PM805 is not set
# CONFIG_MFD_88PM860X is not set
# CONFIG_MFD_MAX14577 is not set
# CONFIG_MFD_MAX77693 is not set
# CONFIG_MFD_MAX77843 is not set
# CONFIG_MFD_MAX8907 is not set
# CONFIG_MFD_MAX8925 is not set
# CONFIG_MFD_MAX8997 is not set
# CONFIG_MFD_MAX8998 is not set
# CONFIG_MFD_MT6360 is not set
# CONFIG_MFD_MT6397 is not set
# CONFIG_MFD_MENF21BMC is not set
# CONFIG_EZX_PCAP is not set
CONFIG_MFD_VIPERBOARD=m
# CONFIG_MFD_RETU is not set
# CONFIG_MFD_PCF50633 is not set
# CONFIG_UCB1400_CORE is not set
# CONFIG_MFD_RDC321X is not set
# CONFIG_MFD_RT5033 is not set
# CONFIG_MFD_RC5T583 is not set
# CONFIG_MFD_SEC_CORE is not set
# CONFIG_MFD_SI476X_CORE is not set
CONFIG_MFD_SM501=m
CONFIG_MFD_SM501_GPIO=y
# CONFIG_MFD_SKY81452 is not set
# CONFIG_MFD_SMSC is not set
# CONFIG_ABX500_CORE is not set
# CONFIG_MFD_SYSCON is not set
# CONFIG_MFD_TI_AM335X_TSCADC is not set
# CONFIG_MFD_LP3943 is not set
# CONFIG_MFD_LP8788 is not set
# CONFIG_MFD_TI_LMU is not set
# CONFIG_MFD_PALMAS is not set
# CONFIG_TPS6105X is not set
# CONFIG_TPS65010 is not set
# CONFIG_TPS6507X is not set
# CONFIG_MFD_TPS65086 is not set
# CONFIG_MFD_TPS65090 is not set
# CONFIG_MFD_TI_LP873X is not set
# CONFIG_MFD_TPS6586X is not set
# CONFIG_MFD_TPS65910 is not set
# CONFIG_MFD_TPS65912_I2C is not set
# CONFIG_MFD_TPS65912_SPI is not set
# CONFIG_MFD_TPS80031 is not set
# CONFIG_TWL4030_CORE is not set
# CONFIG_TWL6040_CORE is not set
# CONFIG_MFD_WL1273_CORE is not set
# CONFIG_MFD_LM3533 is not set
# CONFIG_MFD_TQMX86 is not set
CONFIG_MFD_VX855=m
# CONFIG_MFD_ARIZONA_I2C is not set
# CONFIG_MFD_ARIZONA_SPI is not set
# CONFIG_MFD_WM8400 is not set
# CONFIG_MFD_WM831X_I2C is not set
# CONFIG_MFD_WM831X_SPI is not set
# CONFIG_MFD_WM8350_I2C is not set
# CONFIG_MFD_WM8994 is not set
# end of Multifunction device drivers

# CONFIG_REGULATOR is not set
CONFIG_RC_CORE=m
CONFIG_RC_MAP=m
CONFIG_LIRC=y
CONFIG_RC_DECODERS=y
CONFIG_IR_NEC_DECODER=m
CONFIG_IR_RC5_DECODER=m
CONFIG_IR_RC6_DECODER=m
CONFIG_IR_JVC_DECODER=m
CONFIG_IR_SONY_DECODER=m
CONFIG_IR_SANYO_DECODER=m
CONFIG_IR_SHARP_DECODER=m
CONFIG_IR_MCE_KBD_DECODER=m
# CONFIG_IR_XMP_DECODER is not set
CONFIG_IR_IMON_DECODER=m
# CONFIG_IR_RCMM_DECODER is not set
CONFIG_RC_DEVICES=y
CONFIG_RC_ATI_REMOTE=m
CONFIG_IR_ENE=m
CONFIG_IR_IMON=m
# CONFIG_IR_IMON_RAW is not set
CONFIG_IR_MCEUSB=m
CONFIG_IR_ITE_CIR=m
CONFIG_IR_FINTEK=m
CONFIG_IR_NUVOTON=m
CONFIG_IR_REDRAT3=m
CONFIG_IR_STREAMZAP=m
CONFIG_IR_WINBOND_CIR=m
# CONFIG_IR_IGORPLUGUSB is not set
CONFIG_IR_IGUANA=m
CONFIG_IR_TTUSBIR=m
CONFIG_RC_LOOPBACK=m
# CONFIG_IR_SERIAL is not set
# CONFIG_IR_SIR is not set
# CONFIG_RC_XBOX_DVD is not set
# CONFIG_MEDIA_CEC_SUPPORT is not set
CONFIG_MEDIA_SUPPORT=m
# CONFIG_MEDIA_SUPPORT_FILTER is not set
CONFIG_MEDIA_SUBDRV_AUTOSELECT=y

#
# Media device types
#
CONFIG_MEDIA_CAMERA_SUPPORT=y
CONFIG_MEDIA_ANALOG_TV_SUPPORT=y
CONFIG_MEDIA_DIGITAL_TV_SUPPORT=y
CONFIG_MEDIA_RADIO_SUPPORT=y
CONFIG_MEDIA_SDR_SUPPORT=y
CONFIG_MEDIA_PLATFORM_SUPPORT=y
CONFIG_MEDIA_TEST_SUPPORT=y
# end of Media device types

#
# Media core support
#
CONFIG_VIDEO_DEV=m
CONFIG_MEDIA_CONTROLLER=y
CONFIG_DVB_CORE=m
# end of Media core support

#
# Video4Linux options
#
CONFIG_VIDEO_V4L2=m
CONFIG_VIDEO_V4L2_I2C=y
# CONFIG_VIDEO_V4L2_SUBDEV_API is not set
# CONFIG_VIDEO_ADV_DEBUG is not set
# CONFIG_VIDEO_FIXED_MINOR_RANGES is not set
CONFIG_VIDEO_TUNER=m
CONFIG_VIDEOBUF_GEN=m
CONFIG_VIDEOBUF_DMA_SG=m
CONFIG_VIDEOBUF_VMALLOC=m
# end of Video4Linux options

#
# Media controller options
#
CONFIG_MEDIA_CONTROLLER_DVB=y
# end of Media controller options

#
# Digital TV options
#
# CONFIG_DVB_MMAP is not set
CONFIG_DVB_NET=y
CONFIG_DVB_MAX_ADAPTERS=8
CONFIG_DVB_DYNAMIC_MINORS=y
# CONFIG_DVB_DEMUX_SECTION_LOSS_LOG is not set
# CONFIG_DVB_ULE_DEBUG is not set
# end of Digital TV options

#
# Media drivers
#
CONFIG_TTPCI_EEPROM=m
CONFIG_MEDIA_USB_SUPPORT=y

#
# Webcam devices
#
CONFIG_USB_VIDEO_CLASS=m
CONFIG_USB_VIDEO_CLASS_INPUT_EVDEV=y
CONFIG_USB_GSPCA=m
CONFIG_USB_M5602=m
CONFIG_USB_STV06XX=m
CONFIG_USB_GL860=m
CONFIG_USB_GSPCA_BENQ=m
CONFIG_USB_GSPCA_CONEX=m
CONFIG_USB_GSPCA_CPIA1=m
# CONFIG_USB_GSPCA_DTCS033 is not set
CONFIG_USB_GSPCA_ETOMS=m
CONFIG_USB_GSPCA_FINEPIX=m
CONFIG_USB_GSPCA_JEILINJ=m
CONFIG_USB_GSPCA_JL2005BCD=m
# CONFIG_USB_GSPCA_KINECT is not set
CONFIG_USB_GSPCA_KONICA=m
CONFIG_USB_GSPCA_MARS=m
CONFIG_USB_GSPCA_MR97310A=m
CONFIG_USB_GSPCA_NW80X=m
CONFIG_USB_GSPCA_OV519=m
CONFIG_USB_GSPCA_OV534=m
CONFIG_USB_GSPCA_OV534_9=m
CONFIG_USB_GSPCA_PAC207=m
CONFIG_USB_GSPCA_PAC7302=m
CONFIG_USB_GSPCA_PAC7311=m
CONFIG_USB_GSPCA_SE401=m
CONFIG_USB_GSPCA_SN9C2028=m
CONFIG_USB_GSPCA_SN9C20X=m
CONFIG_USB_GSPCA_SONIXB=m
CONFIG_USB_GSPCA_SONIXJ=m
CONFIG_USB_GSPCA_SPCA500=m
CONFIG_USB_GSPCA_SPCA501=m
CONFIG_USB_GSPCA_SPCA505=m
CONFIG_USB_GSPCA_SPCA506=m
CONFIG_USB_GSPCA_SPCA508=m
CONFIG_USB_GSPCA_SPCA561=m
CONFIG_USB_GSPCA_SPCA1528=m
CONFIG_USB_GSPCA_SQ905=m
CONFIG_USB_GSPCA_SQ905C=m
CONFIG_USB_GSPCA_SQ930X=m
CONFIG_USB_GSPCA_STK014=m
# CONFIG_USB_GSPCA_STK1135 is not set
CONFIG_USB_GSPCA_STV0680=m
CONFIG_USB_GSPCA_SUNPLUS=m
CONFIG_USB_GSPCA_T613=m
CONFIG_USB_GSPCA_TOPRO=m
# CONFIG_USB_GSPCA_TOUPTEK is not set
CONFIG_USB_GSPCA_TV8532=m
CONFIG_USB_GSPCA_VC032X=m
CONFIG_USB_GSPCA_VICAM=m
CONFIG_USB_GSPCA_XIRLINK_CIT=m
CONFIG_USB_GSPCA_ZC3XX=m
CONFIG_USB_PWC=m
# CONFIG_USB_PWC_DEBUG is not set
CONFIG_USB_PWC_INPUT_EVDEV=y
# CONFIG_VIDEO_CPIA2 is not set
CONFIG_USB_ZR364XX=m
CONFIG_USB_STKWEBCAM=m
CONFIG_USB_S2255=m
# CONFIG_VIDEO_USBTV is not set

#
# Analog TV USB devices
#
CONFIG_VIDEO_PVRUSB2=m
CONFIG_VIDEO_PVRUSB2_SYSFS=y
CONFIG_VIDEO_PVRUSB2_DVB=y
# CONFIG_VIDEO_PVRUSB2_DEBUGIFC is not set
CONFIG_VIDEO_HDPVR=m
# CONFIG_VIDEO_STK1160_COMMON is not set
# CONFIG_VIDEO_GO7007 is not set

#
# Analog/digital TV USB devices
#
CONFIG_VIDEO_AU0828=m
CONFIG_VIDEO_AU0828_V4L2=y
# CONFIG_VIDEO_AU0828_RC is not set
CONFIG_VIDEO_CX231XX=m
CONFIG_VIDEO_CX231XX_RC=y
CONFIG_VIDEO_CX231XX_ALSA=m
CONFIG_VIDEO_CX231XX_DVB=m
CONFIG_VIDEO_TM6000=m
CONFIG_VIDEO_TM6000_ALSA=m
CONFIG_VIDEO_TM6000_DVB=m

#
# Digital TV USB devices
#
CONFIG_DVB_USB=m
# CONFIG_DVB_USB_DEBUG is not set
CONFIG_DVB_USB_DIB3000MC=m
CONFIG_DVB_USB_A800=m
CONFIG_DVB_USB_DIBUSB_MB=m
# CONFIG_DVB_USB_DIBUSB_MB_FAULTY is not set
CONFIG_DVB_USB_DIBUSB_MC=m
CONFIG_DVB_USB_DIB0700=m
CONFIG_DVB_USB_UMT_010=m
CONFIG_DVB_USB_CXUSB=m
# CONFIG_DVB_USB_CXUSB_ANALOG is not set
CONFIG_DVB_USB_M920X=m
CONFIG_DVB_USB_DIGITV=m
CONFIG_DVB_USB_VP7045=m
CONFIG_DVB_USB_VP702X=m
CONFIG_DVB_USB_GP8PSK=m
CONFIG_DVB_USB_NOVA_T_USB2=m
CONFIG_DVB_USB_TTUSB2=m
CONFIG_DVB_USB_DTT200U=m
CONFIG_DVB_USB_OPERA1=m
CONFIG_DVB_USB_AF9005=m
CONFIG_DVB_USB_AF9005_REMOTE=m
CONFIG_DVB_USB_PCTV452E=m
CONFIG_DVB_USB_DW2102=m
CONFIG_DVB_USB_CINERGY_T2=m
CONFIG_DVB_USB_DTV5100=m
CONFIG_DVB_USB_AZ6027=m
CONFIG_DVB_USB_TECHNISAT_USB2=m
CONFIG_DVB_USB_V2=m
CONFIG_DVB_USB_AF9015=m
CONFIG_DVB_USB_AF9035=m
CONFIG_DVB_USB_ANYSEE=m
CONFIG_DVB_USB_AU6610=m
CONFIG_DVB_USB_AZ6007=m
CONFIG_DVB_USB_CE6230=m
CONFIG_DVB_USB_EC168=m
CONFIG_DVB_USB_GL861=m
CONFIG_DVB_USB_LME2510=m
CONFIG_DVB_USB_MXL111SF=m
CONFIG_DVB_USB_RTL28XXU=m
# CONFIG_DVB_USB_DVBSKY is not set
# CONFIG_DVB_USB_ZD1301 is not set
CONFIG_DVB_TTUSB_BUDGET=m
CONFIG_DVB_TTUSB_DEC=m
CONFIG_SMS_USB_DRV=m
CONFIG_DVB_B2C2_FLEXCOP_USB=m
# CONFIG_DVB_B2C2_FLEXCOP_USB_DEBUG is not set
# CONFIG_DVB_AS102 is not set

#
# Webcam, TV (analog/digital) USB devices
#
CONFIG_VIDEO_EM28XX=m
# CONFIG_VIDEO_EM28XX_V4L2 is not set
CONFIG_VIDEO_EM28XX_ALSA=m
CONFIG_VIDEO_EM28XX_DVB=m
CONFIG_VIDEO_EM28XX_RC=m

#
# Software defined radio USB devices
#
# CONFIG_USB_AIRSPY is not set
# CONFIG_USB_HACKRF is not set
# CONFIG_USB_MSI2500 is not set
CONFIG_MEDIA_PCI_SUPPORT=y

#
# Media capture support
#
# CONFIG_VIDEO_MEYE is not set
# CONFIG_VIDEO_SOLO6X10 is not set
# CONFIG_VIDEO_TW5864 is not set
# CONFIG_VIDEO_TW68 is not set
# CONFIG_VIDEO_TW686X is not set

#
# Media capture/analog TV support
#
CONFIG_VIDEO_IVTV=m
# CONFIG_VIDEO_IVTV_DEPRECATED_IOCTLS is not set
# CONFIG_VIDEO_IVTV_ALSA is not set
CONFIG_VIDEO_FB_IVTV=m
# CONFIG_VIDEO_FB_IVTV_FORCE_PAT is not set
# CONFIG_VIDEO_HEXIUM_GEMINI is not set
# CONFIG_VIDEO_HEXIUM_ORION is not set
# CONFIG_VIDEO_MXB is not set
# CONFIG_VIDEO_DT3155 is not set

#
# Media capture/analog/hybrid TV support
#
CONFIG_VIDEO_CX18=m
CONFIG_VIDEO_CX18_ALSA=m
CONFIG_VIDEO_CX23885=m
CONFIG_MEDIA_ALTERA_CI=m
# CONFIG_VIDEO_CX25821 is not set
CONFIG_VIDEO_CX88=m
CONFIG_VIDEO_CX88_ALSA=m
CONFIG_VIDEO_CX88_BLACKBIRD=m
CONFIG_VIDEO_CX88_DVB=m
CONFIG_VIDEO_CX88_ENABLE_VP3054=y
CONFIG_VIDEO_CX88_VP3054=m
CONFIG_VIDEO_CX88_MPEG=m
CONFIG_VIDEO_BT848=m
CONFIG_DVB_BT8XX=m
CONFIG_VIDEO_SAA7134=m
CONFIG_VIDEO_SAA7134_ALSA=m
CONFIG_VIDEO_SAA7134_RC=y
CONFIG_VIDEO_SAA7134_DVB=m
CONFIG_VIDEO_SAA7164=m

#
# Media digital TV PCI Adapters
#
CONFIG_DVB_AV7110_IR=y
CONFIG_DVB_AV7110=m
CONFIG_DVB_AV7110_OSD=y
CONFIG_DVB_BUDGET_CORE=m
CONFIG_DVB_BUDGET=m
CONFIG_DVB_BUDGET_CI=m
CONFIG_DVB_BUDGET_AV=m
CONFIG_DVB_BUDGET_PATCH=m
CONFIG_DVB_B2C2_FLEXCOP_PCI=m
# CONFIG_DVB_B2C2_FLEXCOP_PCI_DEBUG is not set
CONFIG_DVB_PLUTO2=m
CONFIG_DVB_DM1105=m
CONFIG_DVB_PT1=m
# CONFIG_DVB_PT3 is not set
CONFIG_MANTIS_CORE=m
CONFIG_DVB_MANTIS=m
CONFIG_DVB_HOPPER=m
CONFIG_DVB_NGENE=m
CONFIG_DVB_DDBRIDGE=m
# CONFIG_DVB_DDBRIDGE_MSIENABLE is not set
# CONFIG_DVB_SMIPCIE is not set
# CONFIG_DVB_NETUP_UNIDVB is not set
# CONFIG_VIDEO_IPU3_CIO2 is not set
# CONFIG_VIDEO_PCI_SKELETON is not set
CONFIG_RADIO_ADAPTERS=y
CONFIG_RADIO_TEA575X=m
# CONFIG_RADIO_SI470X is not set
# CONFIG_RADIO_SI4713 is not set
# CONFIG_USB_MR800 is not set
# CONFIG_USB_DSBR is not set
# CONFIG_RADIO_MAXIRADIO is not set
# CONFIG_RADIO_SHARK is not set
# CONFIG_RADIO_SHARK2 is not set
# CONFIG_USB_KEENE is not set
# CONFIG_USB_RAREMONO is not set
# CONFIG_USB_MA901 is not set
# CONFIG_RADIO_TEA5764 is not set
# CONFIG_RADIO_SAA7706H is not set
# CONFIG_RADIO_TEF6862 is not set
# CONFIG_RADIO_WL1273 is not set
CONFIG_MEDIA_COMMON_OPTIONS=y

#
# common driver options
#
CONFIG_VIDEO_CX2341X=m
CONFIG_VIDEO_TVEEPROM=m
CONFIG_CYPRESS_FIRMWARE=m
CONFIG_VIDEOBUF2_CORE=m
CONFIG_VIDEOBUF2_V4L2=m
CONFIG_VIDEOBUF2_MEMOPS=m
CONFIG_VIDEOBUF2_VMALLOC=m
CONFIG_VIDEOBUF2_DMA_SG=m
CONFIG_VIDEOBUF2_DVB=m
CONFIG_DVB_B2C2_FLEXCOP=m
CONFIG_VIDEO_SAA7146=m
CONFIG_VIDEO_SAA7146_VV=m
CONFIG_SMS_SIANO_MDTV=m
CONFIG_SMS_SIANO_RC=y
# CONFIG_SMS_SIANO_DEBUGFS is not set
# CONFIG_V4L_PLATFORM_DRIVERS is not set
# CONFIG_V4L_MEM2MEM_DRIVERS is not set
# CONFIG_DVB_PLATFORM_DRIVERS is not set
# CONFIG_SDR_PLATFORM_DRIVERS is not set

#
# MMC/SDIO DVB adapters
#
CONFIG_SMS_SDIO_DRV=m
# CONFIG_V4L_TEST_DRIVERS is not set

#
# FireWire (IEEE 1394) Adapters
#
CONFIG_DVB_FIREDTV=m
CONFIG_DVB_FIREDTV_INPUT=y
# end of Media drivers

#
# Media ancillary drivers
#
CONFIG_MEDIA_ATTACH=y

#
# IR I2C driver auto-selected by 'Autoselect ancillary drivers'
#
CONFIG_VIDEO_IR_I2C=m

#
# Audio decoders, processors and mixers
#
CONFIG_VIDEO_TVAUDIO=m
CONFIG_VIDEO_TDA7432=m
# CONFIG_VIDEO_TDA9840 is not set
# CONFIG_VIDEO_TDA1997X is not set
# CONFIG_VIDEO_TEA6415C is not set
# CONFIG_VIDEO_TEA6420 is not set
CONFIG_VIDEO_MSP3400=m
CONFIG_VIDEO_CS3308=m
CONFIG_VIDEO_CS5345=m
CONFIG_VIDEO_CS53L32A=m
# CONFIG_VIDEO_TLV320AIC23B is not set
# CONFIG_VIDEO_UDA1342 is not set
CONFIG_VIDEO_WM8775=m
CONFIG_VIDEO_WM8739=m
CONFIG_VIDEO_VP27SMPX=m
# CONFIG_VIDEO_SONY_BTF_MPX is not set
# end of Audio decoders, processors and mixers

#
# RDS decoders
#
CONFIG_VIDEO_SAA6588=m
# end of RDS decoders

#
# Video decoders
#
# CONFIG_VIDEO_ADV7180 is not set
# CONFIG_VIDEO_ADV7183 is not set
# CONFIG_VIDEO_ADV7604 is not set
# CONFIG_VIDEO_ADV7842 is not set
# CONFIG_VIDEO_BT819 is not set
# CONFIG_VIDEO_BT856 is not set
# CONFIG_VIDEO_BT866 is not set
# CONFIG_VIDEO_KS0127 is not set
# CONFIG_VIDEO_ML86V7667 is not set
# CONFIG_VIDEO_SAA7110 is not set
CONFIG_VIDEO_SAA711X=m
# CONFIG_VIDEO_TC358743 is not set
# CONFIG_VIDEO_TVP514X is not set
# CONFIG_VIDEO_TVP5150 is not set
# CONFIG_VIDEO_TVP7002 is not set
# CONFIG_VIDEO_TW2804 is not set
# CONFIG_VIDEO_TW9903 is not set
# CONFIG_VIDEO_TW9906 is not set
# CONFIG_VIDEO_TW9910 is not set
# CONFIG_VIDEO_VPX3220 is not set

#
# Video and audio decoders
#
CONFIG_VIDEO_SAA717X=m
CONFIG_VIDEO_CX25840=m
# end of Video decoders

#
# Video encoders
#
CONFIG_VIDEO_SAA7127=m
# CONFIG_VIDEO_SAA7185 is not set
# CONFIG_VIDEO_ADV7170 is not set
# CONFIG_VIDEO_ADV7175 is not set
# CONFIG_VIDEO_ADV7343 is not set
# CONFIG_VIDEO_ADV7393 is not set
# CONFIG_VIDEO_ADV7511 is not set
# CONFIG_VIDEO_AD9389B is not set
# CONFIG_VIDEO_AK881X is not set
# CONFIG_VIDEO_THS8200 is not set
# end of Video encoders

#
# Video improvement chips
#
CONFIG_VIDEO_UPD64031A=m
CONFIG_VIDEO_UPD64083=m
# end of Video improvement chips

#
# Audio/Video compression chips
#
CONFIG_VIDEO_SAA6752HS=m
# end of Audio/Video compression chips

#
# SDR tuner chips
#
# CONFIG_SDR_MAX2175 is not set
# end of SDR tuner chips

#
# Miscellaneous helper chips
#
# CONFIG_VIDEO_THS7303 is not set
CONFIG_VIDEO_M52790=m
# CONFIG_VIDEO_I2C is not set
# CONFIG_VIDEO_ST_MIPID02 is not set
# end of Miscellaneous helper chips

#
# Camera sensor devices
#
# CONFIG_VIDEO_HI556 is not set
# CONFIG_VIDEO_IMX219 is not set
# CONFIG_VIDEO_IMX258 is not set
# CONFIG_VIDEO_IMX274 is not set
# CONFIG_VIDEO_IMX290 is not set
# CONFIG_VIDEO_IMX319 is not set
# CONFIG_VIDEO_IMX355 is not set
# CONFIG_VIDEO_OV2640 is not set
# CONFIG_VIDEO_OV2659 is not set
# CONFIG_VIDEO_OV2680 is not set
# CONFIG_VIDEO_OV2685 is not set
# CONFIG_VIDEO_OV2740 is not set
# CONFIG_VIDEO_OV5647 is not set
# CONFIG_VIDEO_OV6650 is not set
# CONFIG_VIDEO_OV5670 is not set
# CONFIG_VIDEO_OV5675 is not set
# CONFIG_VIDEO_OV5695 is not set
# CONFIG_VIDEO_OV7251 is not set
# CONFIG_VIDEO_OV772X is not set
# CONFIG_VIDEO_OV7640 is not set
# CONFIG_VIDEO_OV7670 is not set
# CONFIG_VIDEO_OV7740 is not set
# CONFIG_VIDEO_OV8856 is not set
# CONFIG_VIDEO_OV9640 is not set
# CONFIG_VIDEO_OV9650 is not set
# CONFIG_VIDEO_OV13858 is not set
# CONFIG_VIDEO_VS6624 is not set
# CONFIG_VIDEO_MT9M001 is not set
# CONFIG_VIDEO_MT9M032 is not set
# CONFIG_VIDEO_MT9M111 is not set
# CONFIG_VIDEO_MT9P031 is not set
# CONFIG_VIDEO_MT9T001 is not set
# CONFIG_VIDEO_MT9T112 is not set
# CONFIG_VIDEO_MT9V011 is not set
# CONFIG_VIDEO_MT9V032 is not set
# CONFIG_VIDEO_MT9V111 is not set
# CONFIG_VIDEO_SR030PC30 is not set
# CONFIG_VIDEO_NOON010PC30 is not set
# CONFIG_VIDEO_M5MOLS is not set
# CONFIG_VIDEO_RJ54N1 is not set
# CONFIG_VIDEO_S5K6AA is not set
# CONFIG_VIDEO_S5K6A3 is not set
# CONFIG_VIDEO_S5K4ECGX is not set
# CONFIG_VIDEO_S5K5BAF is not set
# CONFIG_VIDEO_SMIAPP is not set
# CONFIG_VIDEO_ET8EK8 is not set
# CONFIG_VIDEO_S5C73M3 is not set
# end of Camera sensor devices

#
# Lens drivers
#
# CONFIG_VIDEO_AD5820 is not set
# CONFIG_VIDEO_AK7375 is not set
# CONFIG_VIDEO_DW9714 is not set
# CONFIG_VIDEO_DW9807_VCM is not set
# end of Lens drivers

#
# Flash devices
#
# CONFIG_VIDEO_ADP1653 is not set
# CONFIG_VIDEO_LM3560 is not set
# CONFIG_VIDEO_LM3646 is not set
# end of Flash devices

#
# SPI helper chips
#
# CONFIG_VIDEO_GS1662 is not set
# end of SPI helper chips

#
# Media SPI Adapters
#
# CONFIG_CXD2880_SPI_DRV is not set
# end of Media SPI Adapters

CONFIG_MEDIA_TUNER=m

#
# Customize TV tuners
#
CONFIG_MEDIA_TUNER_SIMPLE=m
CONFIG_MEDIA_TUNER_TDA18250=m
CONFIG_MEDIA_TUNER_TDA8290=m
CONFIG_MEDIA_TUNER_TDA827X=m
CONFIG_MEDIA_TUNER_TDA18271=m
CONFIG_MEDIA_TUNER_TDA9887=m
CONFIG_MEDIA_TUNER_TEA5761=m
CONFIG_MEDIA_TUNER_TEA5767=m
# CONFIG_MEDIA_TUNER_MSI001 is not set
CONFIG_MEDIA_TUNER_MT20XX=m
CONFIG_MEDIA_TUNER_MT2060=m
CONFIG_MEDIA_TUNER_MT2063=m
CONFIG_MEDIA_TUNER_MT2266=m
CONFIG_MEDIA_TUNER_MT2131=m
CONFIG_MEDIA_TUNER_QT1010=m
CONFIG_MEDIA_TUNER_XC2028=m
CONFIG_MEDIA_TUNER_XC5000=m
CONFIG_MEDIA_TUNER_XC4000=m
CONFIG_MEDIA_TUNER_MXL5005S=m
CONFIG_MEDIA_TUNER_MXL5007T=m
CONFIG_MEDIA_TUNER_MC44S803=m
CONFIG_MEDIA_TUNER_MAX2165=m
CONFIG_MEDIA_TUNER_TDA18218=m
CONFIG_MEDIA_TUNER_FC0011=m
CONFIG_MEDIA_TUNER_FC0012=m
CONFIG_MEDIA_TUNER_FC0013=m
CONFIG_MEDIA_TUNER_TDA18212=m
CONFIG_MEDIA_TUNER_E4000=m
CONFIG_MEDIA_TUNER_FC2580=m
CONFIG_MEDIA_TUNER_M88RS6000T=m
CONFIG_MEDIA_TUNER_TUA9001=m
CONFIG_MEDIA_TUNER_SI2157=m
CONFIG_MEDIA_TUNER_IT913X=m
CONFIG_MEDIA_TUNER_R820T=m
# CONFIG_MEDIA_TUNER_MXL301RF is not set
CONFIG_MEDIA_TUNER_QM1D1C0042=m
CONFIG_MEDIA_TUNER_QM1D1B0004=m
# end of Customize TV tuners

#
# Customise DVB Frontends
#

#
# Multistandard (satellite) frontends
#
CONFIG_DVB_STB0899=m
CONFIG_DVB_STB6100=m
CONFIG_DVB_STV090x=m
CONFIG_DVB_STV0910=m
CONFIG_DVB_STV6110x=m
CONFIG_DVB_STV6111=m
CONFIG_DVB_MXL5XX=m
CONFIG_DVB_M88DS3103=m

#
# Multistandard (cable + terrestrial) frontends
#
CONFIG_DVB_DRXK=m
CONFIG_DVB_TDA18271C2DD=m
CONFIG_DVB_SI2165=m
CONFIG_DVB_MN88472=m
CONFIG_DVB_MN88473=m

#
# DVB-S (satellite) frontends
#
CONFIG_DVB_CX24110=m
CONFIG_DVB_CX24123=m
CONFIG_DVB_MT312=m
CONFIG_DVB_ZL10036=m
CONFIG_DVB_ZL10039=m
CONFIG_DVB_S5H1420=m
CONFIG_DVB_STV0288=m
CONFIG_DVB_STB6000=m
CONFIG_DVB_STV0299=m
CONFIG_DVB_STV6110=m
CONFIG_DVB_STV0900=m
CONFIG_DVB_TDA8083=m
CONFIG_DVB_TDA10086=m
CONFIG_DVB_TDA8261=m
CONFIG_DVB_VES1X93=m
CONFIG_DVB_TUNER_ITD1000=m
CONFIG_DVB_TUNER_CX24113=m
CONFIG_DVB_TDA826X=m
CONFIG_DVB_TUA6100=m
CONFIG_DVB_CX24116=m
CONFIG_DVB_CX24117=m
CONFIG_DVB_CX24120=m
CONFIG_DVB_SI21XX=m
CONFIG_DVB_TS2020=m
CONFIG_DVB_DS3000=m
CONFIG_DVB_MB86A16=m
CONFIG_DVB_TDA10071=m

#
# DVB-T (terrestrial) frontends
#
CONFIG_DVB_SP8870=m
CONFIG_DVB_SP887X=m
CONFIG_DVB_CX22700=m
CONFIG_DVB_CX22702=m
# CONFIG_DVB_S5H1432 is not set
CONFIG_DVB_DRXD=m
CONFIG_DVB_L64781=m
CONFIG_DVB_TDA1004X=m
CONFIG_DVB_NXT6000=m
CONFIG_DVB_MT352=m
CONFIG_DVB_ZL10353=m
CONFIG_DVB_DIB3000MB=m
CONFIG_DVB_DIB3000MC=m
CONFIG_DVB_DIB7000M=m
CONFIG_DVB_DIB7000P=m
# CONFIG_DVB_DIB9000 is not set
CONFIG_DVB_TDA10048=m
CONFIG_DVB_AF9013=m
CONFIG_DVB_EC100=m
CONFIG_DVB_STV0367=m
CONFIG_DVB_CXD2820R=m
CONFIG_DVB_CXD2841ER=m
CONFIG_DVB_RTL2830=m
CONFIG_DVB_RTL2832=m
CONFIG_DVB_RTL2832_SDR=m
CONFIG_DVB_SI2168=m
# CONFIG_DVB_ZD1301_DEMOD is not set
CONFIG_DVB_GP8PSK_FE=m
# CONFIG_DVB_CXD2880 is not set

#
# DVB-C (cable) frontends
#
CONFIG_DVB_VES1820=m
CONFIG_DVB_TDA10021=m
CONFIG_DVB_TDA10023=m
CONFIG_DVB_STV0297=m

#
# ATSC (North American/Korean Terrestrial/Cable DTV) frontends
#
CONFIG_DVB_NXT200X=m
CONFIG_DVB_OR51211=m
CONFIG_DVB_OR51132=m
CONFIG_DVB_BCM3510=m
CONFIG_DVB_LGDT330X=m
CONFIG_DVB_LGDT3305=m
CONFIG_DVB_LGDT3306A=m
CONFIG_DVB_LG2160=m
CONFIG_DVB_S5H1409=m
CONFIG_DVB_AU8522=m
CONFIG_DVB_AU8522_DTV=m
CONFIG_DVB_AU8522_V4L=m
CONFIG_DVB_S5H1411=m

#
# ISDB-T (terrestrial) frontends
#
CONFIG_DVB_S921=m
CONFIG_DVB_DIB8000=m
CONFIG_DVB_MB86A20S=m

#
# ISDB-S (satellite) & ISDB-T (terrestrial) frontends
#
CONFIG_DVB_TC90522=m
# CONFIG_DVB_MN88443X is not set

#
# Digital terrestrial only tuners/PLL
#
CONFIG_DVB_PLL=m
CONFIG_DVB_TUNER_DIB0070=m
CONFIG_DVB_TUNER_DIB0090=m

#
# SEC control devices for DVB-S
#
CONFIG_DVB_DRX39XYJ=m
CONFIG_DVB_LNBH25=m
# CONFIG_DVB_LNBH29 is not set
CONFIG_DVB_LNBP21=m
CONFIG_DVB_LNBP22=m
CONFIG_DVB_ISL6405=m
CONFIG_DVB_ISL6421=m
CONFIG_DVB_ISL6423=m
CONFIG_DVB_A8293=m
# CONFIG_DVB_LGS8GL5 is not set
CONFIG_DVB_LGS8GXX=m
CONFIG_DVB_ATBM8830=m
CONFIG_DVB_TDA665x=m
CONFIG_DVB_IX2505V=m
CONFIG_DVB_M88RS2000=m
CONFIG_DVB_AF9033=m
# CONFIG_DVB_HORUS3A is not set
# CONFIG_DVB_ASCOT2E is not set
# CONFIG_DVB_HELENE is not set

#
# Common Interface (EN50221) controller drivers
#
CONFIG_DVB_CXD2099=m
# CONFIG_DVB_SP2 is not set
# end of Customise DVB Frontends

#
# Tools to develop new frontends
#
CONFIG_DVB_DUMMY_FE=m
# end of Media ancillary drivers

#
# Graphics support
#
CONFIG_AGP=y
CONFIG_AGP_AMD64=y
CONFIG_AGP_INTEL=y
CONFIG_AGP_SIS=y
CONFIG_AGP_VIA=y
CONFIG_INTEL_GTT=y
CONFIG_VGA_ARB=y
CONFIG_VGA_ARB_MAX_GPUS=64
CONFIG_VGA_SWITCHEROO=y
CONFIG_DRM=y
CONFIG_DRM_MIPI_DSI=y
CONFIG_DRM_DP_AUX_CHARDEV=y
# CONFIG_DRM_DEBUG_MM is not set
CONFIG_DRM_DEBUG_SELFTEST=m
CONFIG_DRM_KMS_HELPER=y
CONFIG_DRM_KMS_FB_HELPER=y
# CONFIG_DRM_DEBUG_DP_MST_TOPOLOGY_REFS is not set
CONFIG_DRM_FBDEV_EMULATION=y
CONFIG_DRM_FBDEV_OVERALLOC=100
# CONFIG_DRM_FBDEV_LEAK_PHYS_SMEM is not set
CONFIG_DRM_LOAD_EDID_FIRMWARE=y
# CONFIG_DRM_DP_CEC is not set
CONFIG_DRM_TTM=m
CONFIG_DRM_TTM_DMA_PAGE_POOL=y
CONFIG_DRM_VRAM_HELPER=m
CONFIG_DRM_TTM_HELPER=m
CONFIG_DRM_GEM_SHMEM_HELPER=y

#
# I2C encoder or helper chips
#
CONFIG_DRM_I2C_CH7006=m
CONFIG_DRM_I2C_SIL164=m
# CONFIG_DRM_I2C_NXP_TDA998X is not set
# CONFIG_DRM_I2C_NXP_TDA9950 is not set
# end of I2C encoder or helper chips

#
# ARM devices
#
# end of ARM devices

# CONFIG_DRM_RADEON is not set
# CONFIG_DRM_AMDGPU is not set
# CONFIG_DRM_NOUVEAU is not set
CONFIG_DRM_I915=m
CONFIG_DRM_I915_FORCE_PROBE=""
CONFIG_DRM_I915_CAPTURE_ERROR=y
CONFIG_DRM_I915_COMPRESS_ERROR=y
CONFIG_DRM_I915_USERPTR=y
CONFIG_DRM_I915_GVT=y
CONFIG_DRM_I915_GVT_KVMGT=m

#
# drm/i915 Debugging
#
# CONFIG_DRM_I915_WERROR is not set
# CONFIG_DRM_I915_DEBUG is not set
# CONFIG_DRM_I915_DEBUG_MMIO is not set
# CONFIG_DRM_I915_SW_FENCE_DEBUG_OBJECTS is not set
# CONFIG_DRM_I915_SW_FENCE_CHECK_DAG is not set
# CONFIG_DRM_I915_DEBUG_GUC is not set
# CONFIG_DRM_I915_SELFTEST is not set
# CONFIG_DRM_I915_LOW_LEVEL_TRACEPOINTS is not set
# CONFIG_DRM_I915_DEBUG_VBLANK_EVADE is not set
# CONFIG_DRM_I915_DEBUG_RUNTIME_PM is not set
# end of drm/i915 Debugging

#
# drm/i915 Profile Guided Optimisation
#
CONFIG_DRM_I915_FENCE_TIMEOUT=10000
CONFIG_DRM_I915_USERFAULT_AUTOSUSPEND=250
CONFIG_DRM_I915_HEARTBEAT_INTERVAL=2500
CONFIG_DRM_I915_PREEMPT_TIMEOUT=640
CONFIG_DRM_I915_MAX_REQUEST_BUSYWAIT=8000
CONFIG_DRM_I915_STOP_TIMEOUT=100
CONFIG_DRM_I915_TIMESLICE_DURATION=1
# end of drm/i915 Profile Guided Optimisation

CONFIG_DRM_VGEM=y
# CONFIG_DRM_VKMS is not set
CONFIG_DRM_VMWGFX=m
CONFIG_DRM_VMWGFX_FBCON=y
CONFIG_DRM_GMA500=m
CONFIG_DRM_GMA600=y
CONFIG_DRM_GMA3600=y
CONFIG_DRM_UDL=m
CONFIG_DRM_AST=m
CONFIG_DRM_MGAG200=m
CONFIG_DRM_QXL=m
CONFIG_DRM_BOCHS=m
CONFIG_DRM_VIRTIO_GPU=m
CONFIG_DRM_PANEL=y

#
# Display Panels
#
# CONFIG_DRM_PANEL_RASPBERRYPI_TOUCHSCREEN is not set
# end of Display Panels

CONFIG_DRM_BRIDGE=y
CONFIG_DRM_PANEL_BRIDGE=y

#
# Display Interface Bridges
#
# CONFIG_DRM_ANALOGIX_ANX78XX is not set
# end of Display Interface Bridges

# CONFIG_DRM_ETNAVIV is not set
CONFIG_DRM_CIRRUS_QEMU=m
# CONFIG_DRM_GM12U320 is not set
# CONFIG_TINYDRM_HX8357D is not set
# CONFIG_TINYDRM_ILI9225 is not set
# CONFIG_TINYDRM_ILI9341 is not set
# CONFIG_TINYDRM_ILI9486 is not set
# CONFIG_TINYDRM_MI0283QT is not set
# CONFIG_TINYDRM_REPAPER is not set
# CONFIG_TINYDRM_ST7586 is not set
# CONFIG_TINYDRM_ST7735R is not set
# CONFIG_DRM_XEN is not set
# CONFIG_DRM_VBOXVIDEO is not set
# CONFIG_DRM_LEGACY is not set
CONFIG_DRM_EXPORT_FOR_TESTS=y
CONFIG_DRM_PANEL_ORIENTATION_QUIRKS=y
CONFIG_DRM_LIB_RANDOM=y

#
# Frame buffer Devices
#
CONFIG_FB_CMDLINE=y
CONFIG_FB_NOTIFY=y
CONFIG_FB=y
# CONFIG_FIRMWARE_EDID is not set
CONFIG_FB_BOOT_VESA_SUPPORT=y
CONFIG_FB_CFB_FILLRECT=y
CONFIG_FB_CFB_COPYAREA=y
CONFIG_FB_CFB_IMAGEBLIT=y
CONFIG_FB_SYS_FILLRECT=y
CONFIG_FB_SYS_COPYAREA=y
CONFIG_FB_SYS_IMAGEBLIT=y
# CONFIG_FB_FOREIGN_ENDIAN is not set
CONFIG_FB_SYS_FOPS=y
CONFIG_FB_DEFERRED_IO=y
# CONFIG_FB_MODE_HELPERS is not set
CONFIG_FB_TILEBLITTING=y

#
# Frame buffer hardware drivers
#
# CONFIG_FB_CIRRUS is not set
# CONFIG_FB_PM2 is not set
# CONFIG_FB_CYBER2000 is not set
# CONFIG_FB_ARC is not set
# CONFIG_FB_ASILIANT is not set
# CONFIG_FB_IMSTT is not set
# CONFIG_FB_VGA16 is not set
# CONFIG_FB_UVESA is not set
CONFIG_FB_VESA=y
CONFIG_FB_EFI=y
# CONFIG_FB_N411 is not set
# CONFIG_FB_HGA is not set
# CONFIG_FB_OPENCORES is not set
# CONFIG_FB_S1D13XXX is not set
# CONFIG_FB_NVIDIA is not set
# CONFIG_FB_RIVA is not set
# CONFIG_FB_I740 is not set
# CONFIG_FB_LE80578 is not set
# CONFIG_FB_INTEL is not set
# CONFIG_FB_MATROX is not set
# CONFIG_FB_RADEON is not set
# CONFIG_FB_ATY128 is not set
# CONFIG_FB_ATY is not set
# CONFIG_FB_S3 is not set
# CONFIG_FB_SAVAGE is not set
# CONFIG_FB_SIS is not set
# CONFIG_FB_VIA is not set
# CONFIG_FB_NEOMAGIC is not set
# CONFIG_FB_KYRO is not set
# CONFIG_FB_3DFX is not set
# CONFIG_FB_VOODOO1 is not set
# CONFIG_FB_VT8623 is not set
# CONFIG_FB_TRIDENT is not set
# CONFIG_FB_ARK is not set
# CONFIG_FB_PM3 is not set
# CONFIG_FB_CARMINE is not set
# CONFIG_FB_SM501 is not set
# CONFIG_FB_SMSCUFX is not set
# CONFIG_FB_UDL is not set
# CONFIG_FB_IBM_GXT4500 is not set
# CONFIG_FB_VIRTUAL is not set
# CONFIG_XEN_FBDEV_FRONTEND is not set
# CONFIG_FB_METRONOME is not set
# CONFIG_FB_MB862XX is not set
CONFIG_FB_HYPERV=m
# CONFIG_FB_SIMPLE is not set
# CONFIG_FB_SM712 is not set
# end of Frame buffer Devices

#
# Backlight & LCD device support
#
CONFIG_LCD_CLASS_DEVICE=m
# CONFIG_LCD_L4F00242T03 is not set
# CONFIG_LCD_LMS283GF05 is not set
# CONFIG_LCD_LTV350QV is not set
# CONFIG_LCD_ILI922X is not set
# CONFIG_LCD_ILI9320 is not set
# CONFIG_LCD_TDO24M is not set
# CONFIG_LCD_VGG2432A4 is not set
CONFIG_LCD_PLATFORM=m
# CONFIG_LCD_AMS369FG06 is not set
# CONFIG_LCD_LMS501KF03 is not set
# CONFIG_LCD_HX8357 is not set
# CONFIG_LCD_OTM3225A is not set
CONFIG_BACKLIGHT_CLASS_DEVICE=y
# CONFIG_BACKLIGHT_GENERIC is not set
# CONFIG_BACKLIGHT_PWM is not set
CONFIG_BACKLIGHT_APPLE=m
# CONFIG_BACKLIGHT_QCOM_WLED is not set
# CONFIG_BACKLIGHT_SAHARA is not set
# CONFIG_BACKLIGHT_ADP8860 is not set
# CONFIG_BACKLIGHT_ADP8870 is not set
# CONFIG_BACKLIGHT_LM3630A is not set
# CONFIG_BACKLIGHT_LM3639 is not set
CONFIG_BACKLIGHT_LP855X=m
# CONFIG_BACKLIGHT_GPIO is not set
# CONFIG_BACKLIGHT_LV5207LP is not set
# CONFIG_BACKLIGHT_BD6107 is not set
# CONFIG_BACKLIGHT_ARCXCNN is not set
# end of Backlight & LCD device support

CONFIG_HDMI=y

#
# Console display driver support
#
CONFIG_VGA_CONSOLE=y
CONFIG_VGACON_SOFT_SCROLLBACK=y
CONFIG_VGACON_SOFT_SCROLLBACK_SIZE=64
# CONFIG_VGACON_SOFT_SCROLLBACK_PERSISTENT_ENABLE_BY_DEFAULT is not set
CONFIG_DUMMY_CONSOLE=y
CONFIG_DUMMY_CONSOLE_COLUMNS=80
CONFIG_DUMMY_CONSOLE_ROWS=25
CONFIG_FRAMEBUFFER_CONSOLE=y
CONFIG_FRAMEBUFFER_CONSOLE_DETECT_PRIMARY=y
CONFIG_FRAMEBUFFER_CONSOLE_ROTATION=y
# CONFIG_FRAMEBUFFER_CONSOLE_DEFERRED_TAKEOVER is not set
# end of Console display driver support

CONFIG_LOGO=y
# CONFIG_LOGO_LINUX_MONO is not set
# CONFIG_LOGO_LINUX_VGA16 is not set
CONFIG_LOGO_LINUX_CLUT224=y
# end of Graphics support

CONFIG_SOUND=m
CONFIG_SOUND_OSS_CORE=y
CONFIG_SOUND_OSS_CORE_PRECLAIM=y
CONFIG_SND=m
CONFIG_SND_TIMER=m
CONFIG_SND_PCM=m
CONFIG_SND_PCM_ELD=y
CONFIG_SND_HWDEP=m
CONFIG_SND_SEQ_DEVICE=m
CONFIG_SND_RAWMIDI=m
CONFIG_SND_COMPRESS_OFFLOAD=m
CONFIG_SND_JACK=y
CONFIG_SND_JACK_INPUT_DEV=y
CONFIG_SND_OSSEMUL=y
# CONFIG_SND_MIXER_OSS is not set
# CONFIG_SND_PCM_OSS is not set
CONFIG_SND_PCM_TIMER=y
CONFIG_SND_HRTIMER=m
CONFIG_SND_DYNAMIC_MINORS=y
CONFIG_SND_MAX_CARDS=32
# CONFIG_SND_SUPPORT_OLD_API is not set
CONFIG_SND_PROC_FS=y
CONFIG_SND_VERBOSE_PROCFS=y
# CONFIG_SND_VERBOSE_PRINTK is not set
# CONFIG_SND_DEBUG is not set
CONFIG_SND_VMASTER=y
CONFIG_SND_DMA_SGBUF=y
CONFIG_SND_SEQUENCER=m
CONFIG_SND_SEQ_DUMMY=m
CONFIG_SND_SEQUENCER_OSS=m
CONFIG_SND_SEQ_HRTIMER_DEFAULT=y
CONFIG_SND_SEQ_MIDI_EVENT=m
CONFIG_SND_SEQ_MIDI=m
CONFIG_SND_SEQ_MIDI_EMUL=m
CONFIG_SND_SEQ_VIRMIDI=m
CONFIG_SND_MPU401_UART=m
CONFIG_SND_OPL3_LIB=m
CONFIG_SND_OPL3_LIB_SEQ=m
CONFIG_SND_VX_LIB=m
CONFIG_SND_AC97_CODEC=m
CONFIG_SND_DRIVERS=y
CONFIG_SND_PCSP=m
CONFIG_SND_DUMMY=m
CONFIG_SND_ALOOP=m
CONFIG_SND_VIRMIDI=m
CONFIG_SND_MTPAV=m
# CONFIG_SND_MTS64 is not set
# CONFIG_SND_SERIAL_U16550 is not set
CONFIG_SND_MPU401=m
# CONFIG_SND_PORTMAN2X4 is not set
CONFIG_SND_AC97_POWER_SAVE=y
CONFIG_SND_AC97_POWER_SAVE_DEFAULT=5
CONFIG_SND_PCI=y
CONFIG_SND_AD1889=m
# CONFIG_SND_ALS300 is not set
# CONFIG_SND_ALS4000 is not set
CONFIG_SND_ALI5451=m
CONFIG_SND_ASIHPI=m
CONFIG_SND_ATIIXP=m
CONFIG_SND_ATIIXP_MODEM=m
CONFIG_SND_AU8810=m
CONFIG_SND_AU8820=m
CONFIG_SND_AU8830=m
# CONFIG_SND_AW2 is not set
# CONFIG_SND_AZT3328 is not set
CONFIG_SND_BT87X=m
# CONFIG_SND_BT87X_OVERCLOCK is not set
CONFIG_SND_CA0106=m
CONFIG_SND_CMIPCI=m
CONFIG_SND_OXYGEN_LIB=m
CONFIG_SND_OXYGEN=m
# CONFIG_SND_CS4281 is not set
CONFIG_SND_CS46XX=m
CONFIG_SND_CS46XX_NEW_DSP=y
CONFIG_SND_CTXFI=m
CONFIG_SND_DARLA20=m
CONFIG_SND_GINA20=m
CONFIG_SND_LAYLA20=m
CONFIG_SND_DARLA24=m
CONFIG_SND_GINA24=m
CONFIG_SND_LAYLA24=m
CONFIG_SND_MONA=m
CONFIG_SND_MIA=m
CONFIG_SND_ECHO3G=m
CONFIG_SND_INDIGO=m
CONFIG_SND_INDIGOIO=m
CONFIG_SND_INDIGODJ=m
CONFIG_SND_INDIGOIOX=m
CONFIG_SND_INDIGODJX=m
CONFIG_SND_EMU10K1=m
CONFIG_SND_EMU10K1_SEQ=m
CONFIG_SND_EMU10K1X=m
CONFIG_SND_ENS1370=m
CONFIG_SND_ENS1371=m
# CONFIG_SND_ES1938 is not set
CONFIG_SND_ES1968=m
CONFIG_SND_ES1968_INPUT=y
CONFIG_SND_ES1968_RADIO=y
# CONFIG_SND_FM801 is not set
CONFIG_SND_HDSP=m
CONFIG_SND_HDSPM=m
CONFIG_SND_ICE1712=m
CONFIG_SND_ICE1724=m
CONFIG_SND_INTEL8X0=m
CONFIG_SND_INTEL8X0M=m
CONFIG_SND_KORG1212=m
CONFIG_SND_LOLA=m
CONFIG_SND_LX6464ES=m
CONFIG_SND_MAESTRO3=m
CONFIG_SND_MAESTRO3_INPUT=y
CONFIG_SND_MIXART=m
# CONFIG_SND_NM256 is not set
CONFIG_SND_PCXHR=m
# CONFIG_SND_RIPTIDE is not set
CONFIG_SND_RME32=m
CONFIG_SND_RME96=m
CONFIG_SND_RME9652=m
# CONFIG_SND_SONICVIBES is not set
CONFIG_SND_TRIDENT=m
CONFIG_SND_VIA82XX=m
CONFIG_SND_VIA82XX_MODEM=m
CONFIG_SND_VIRTUOSO=m
CONFIG_SND_VX222=m
# CONFIG_SND_YMFPCI is not set

#
# HD-Audio
#
CONFIG_SND_HDA=m
CONFIG_SND_HDA_INTEL=m
CONFIG_SND_HDA_HWDEP=y
CONFIG_SND_HDA_RECONFIG=y
CONFIG_SND_HDA_INPUT_BEEP=y
CONFIG_SND_HDA_INPUT_BEEP_MODE=0
CONFIG_SND_HDA_PATCH_LOADER=y
CONFIG_SND_HDA_CODEC_REALTEK=m
CONFIG_SND_HDA_CODEC_ANALOG=m
CONFIG_SND_HDA_CODEC_SIGMATEL=m
CONFIG_SND_HDA_CODEC_VIA=m
CONFIG_SND_HDA_CODEC_HDMI=m
CONFIG_SND_HDA_CODEC_CIRRUS=m
CONFIG_SND_HDA_CODEC_CONEXANT=m
CONFIG_SND_HDA_CODEC_CA0110=m
CONFIG_SND_HDA_CODEC_CA0132=m
CONFIG_SND_HDA_CODEC_CA0132_DSP=y
CONFIG_SND_HDA_CODEC_CMEDIA=m
CONFIG_SND_HDA_CODEC_SI3054=m
CONFIG_SND_HDA_GENERIC=m
CONFIG_SND_HDA_POWER_SAVE_DEFAULT=0
# end of HD-Audio

CONFIG_SND_HDA_CORE=m
CONFIG_SND_HDA_DSP_LOADER=y
CONFIG_SND_HDA_COMPONENT=y
CONFIG_SND_HDA_I915=y
CONFIG_SND_HDA_EXT_CORE=m
CONFIG_SND_HDA_PREALLOC_SIZE=512
CONFIG_SND_INTEL_NHLT=y
CONFIG_SND_INTEL_DSP_CONFIG=m
# CONFIG_SND_SPI is not set
CONFIG_SND_USB=y
CONFIG_SND_USB_AUDIO=m
CONFIG_SND_USB_AUDIO_USE_MEDIA_CONTROLLER=y
CONFIG_SND_USB_UA101=m
CONFIG_SND_USB_USX2Y=m
CONFIG_SND_USB_CAIAQ=m
CONFIG_SND_USB_CAIAQ_INPUT=y
CONFIG_SND_USB_US122L=m
CONFIG_SND_USB_6FIRE=m
CONFIG_SND_USB_HIFACE=m
CONFIG_SND_BCD2000=m
CONFIG_SND_USB_LINE6=m
CONFIG_SND_USB_POD=m
CONFIG_SND_USB_PODHD=m
CONFIG_SND_USB_TONEPORT=m
CONFIG_SND_USB_VARIAX=m
CONFIG_SND_FIREWIRE=y
CONFIG_SND_FIREWIRE_LIB=m
# CONFIG_SND_DICE is not set
# CONFIG_SND_OXFW is not set
CONFIG_SND_ISIGHT=m
# CONFIG_SND_FIREWORKS is not set
# CONFIG_SND_BEBOB is not set
# CONFIG_SND_FIREWIRE_DIGI00X is not set
# CONFIG_SND_FIREWIRE_TASCAM is not set
# CONFIG_SND_FIREWIRE_MOTU is not set
# CONFIG_SND_FIREFACE is not set
CONFIG_SND_SOC=m
CONFIG_SND_SOC_COMPRESS=y
CONFIG_SND_SOC_TOPOLOGY=y
CONFIG_SND_SOC_ACPI=m
# CONFIG_SND_SOC_AMD_ACP is not set
# CONFIG_SND_SOC_AMD_ACP3x is not set
# CONFIG_SND_SOC_AMD_RENOIR is not set
# CONFIG_SND_ATMEL_SOC is not set
# CONFIG_SND_BCM63XX_I2S_WHISTLER is not set
# CONFIG_SND_DESIGNWARE_I2S is not set

#
# SoC Audio for Freescale CPUs
#

#
# Common SoC Audio options for Freescale CPUs:
#
# CONFIG_SND_SOC_FSL_ASRC is not set
# CONFIG_SND_SOC_FSL_SAI is not set
# CONFIG_SND_SOC_FSL_AUDMIX is not set
# CONFIG_SND_SOC_FSL_SSI is not set
# CONFIG_SND_SOC_FSL_SPDIF is not set
# CONFIG_SND_SOC_FSL_ESAI is not set
# CONFIG_SND_SOC_FSL_MICFIL is not set
# CONFIG_SND_SOC_IMX_AUDMUX is not set
# end of SoC Audio for Freescale CPUs

# CONFIG_SND_I2S_HI6210_I2S is not set
# CONFIG_SND_SOC_IMG is not set
CONFIG_SND_SOC_INTEL_SST_TOPLEVEL=y
CONFIG_SND_SST_IPC=m
CONFIG_SND_SST_IPC_ACPI=m
CONFIG_SND_SOC_INTEL_SST_ACPI=m
CONFIG_SND_SOC_INTEL_SST=m
CONFIG_SND_SOC_INTEL_SST_FIRMWARE=m
CONFIG_SND_SOC_INTEL_HASWELL=m
CONFIG_SND_SST_ATOM_HIFI2_PLATFORM=m
# CONFIG_SND_SST_ATOM_HIFI2_PLATFORM_PCI is not set
CONFIG_SND_SST_ATOM_HIFI2_PLATFORM_ACPI=m
CONFIG_SND_SOC_INTEL_SKYLAKE=m
CONFIG_SND_SOC_INTEL_SKL=m
CONFIG_SND_SOC_INTEL_APL=m
CONFIG_SND_SOC_INTEL_KBL=m
CONFIG_SND_SOC_INTEL_GLK=m
CONFIG_SND_SOC_INTEL_CNL=m
CONFIG_SND_SOC_INTEL_CFL=m
# CONFIG_SND_SOC_INTEL_CML_H is not set
# CONFIG_SND_SOC_INTEL_CML_LP is not set
CONFIG_SND_SOC_INTEL_SKYLAKE_FAMILY=m
CONFIG_SND_SOC_INTEL_SKYLAKE_SSP_CLK=m
# CONFIG_SND_SOC_INTEL_SKYLAKE_HDAUDIO_CODEC is not set
CONFIG_SND_SOC_INTEL_SKYLAKE_COMMON=m
CONFIG_SND_SOC_ACPI_INTEL_MATCH=m
CONFIG_SND_SOC_INTEL_MACH=y
# CONFIG_SND_SOC_INTEL_USER_FRIENDLY_LONG_NAMES is not set
CONFIG_SND_SOC_INTEL_HASWELL_MACH=m
# CONFIG_SND_SOC_INTEL_BDW_RT5650_MACH is not set
CONFIG_SND_SOC_INTEL_BDW_RT5677_MACH=m
CONFIG_SND_SOC_INTEL_BROADWELL_MACH=m
CONFIG_SND_SOC_INTEL_BYTCR_RT5640_MACH=m
CONFIG_SND_SOC_INTEL_BYTCR_RT5651_MACH=m
CONFIG_SND_SOC_INTEL_CHT_BSW_RT5672_MACH=m
CONFIG_SND_SOC_INTEL_CHT_BSW_RT5645_MACH=m
CONFIG_SND_SOC_INTEL_CHT_BSW_MAX98090_TI_MACH=m
# CONFIG_SND_SOC_INTEL_CHT_BSW_NAU8824_MACH is not set
# CONFIG_SND_SOC_INTEL_BYT_CHT_CX2072X_MACH is not set
CONFIG_SND_SOC_INTEL_BYT_CHT_DA7213_MACH=m
CONFIG_SND_SOC_INTEL_BYT_CHT_ES8316_MACH=m
CONFIG_SND_SOC_INTEL_BYT_CHT_NOCODEC_MACH=m
CONFIG_SND_SOC_INTEL_SKL_RT286_MACH=m
CONFIG_SND_SOC_INTEL_SKL_NAU88L25_SSM4567_MACH=m
CONFIG_SND_SOC_INTEL_SKL_NAU88L25_MAX98357A_MACH=m
CONFIG_SND_SOC_INTEL_DA7219_MAX98357A_GENERIC=m
CONFIG_SND_SOC_INTEL_BXT_DA7219_MAX98357A_COMMON=m
CONFIG_SND_SOC_INTEL_BXT_DA7219_MAX98357A_MACH=m
CONFIG_SND_SOC_INTEL_BXT_RT298_MACH=m
CONFIG_SND_SOC_INTEL_KBL_RT5663_MAX98927_MACH=m
CONFIG_SND_SOC_INTEL_KBL_RT5663_RT5514_MAX98927_MACH=m
# CONFIG_SND_SOC_INTEL_KBL_DA7219_MAX98357A_MACH is not set
# CONFIG_SND_SOC_INTEL_KBL_DA7219_MAX98927_MACH is not set
# CONFIG_SND_SOC_INTEL_KBL_RT5660_MACH is not set
# CONFIG_SND_SOC_MTK_BTCVSD is not set
# CONFIG_SND_SOC_SOF_TOPLEVEL is not set

#
# STMicroelectronics STM32 SOC audio support
#
# end of STMicroelectronics STM32 SOC audio support

# CONFIG_SND_SOC_XILINX_I2S is not set
# CONFIG_SND_SOC_XILINX_AUDIO_FORMATTER is not set
# CONFIG_SND_SOC_XILINX_SPDIF is not set
# CONFIG_SND_SOC_XTFPGA_I2S is not set
# CONFIG_ZX_TDM is not set
CONFIG_SND_SOC_I2C_AND_SPI=m

#
# CODEC drivers
#
# CONFIG_SND_SOC_AC97_CODEC is not set
# CONFIG_SND_SOC_ADAU1701 is not set
# CONFIG_SND_SOC_ADAU1761_I2C is not set
# CONFIG_SND_SOC_ADAU1761_SPI is not set
# CONFIG_SND_SOC_ADAU7002 is not set
# CONFIG_SND_SOC_ADAU7118_HW is not set
# CONFIG_SND_SOC_ADAU7118_I2C is not set
# CONFIG_SND_SOC_AK4104 is not set
# CONFIG_SND_SOC_AK4118 is not set
# CONFIG_SND_SOC_AK4458 is not set
# CONFIG_SND_SOC_AK4554 is not set
# CONFIG_SND_SOC_AK4613 is not set
# CONFIG_SND_SOC_AK4642 is not set
# CONFIG_SND_SOC_AK5386 is not set
# CONFIG_SND_SOC_AK5558 is not set
# CONFIG_SND_SOC_ALC5623 is not set
# CONFIG_SND_SOC_BD28623 is not set
# CONFIG_SND_SOC_BT_SCO is not set
# CONFIG_SND_SOC_CS35L32 is not set
# CONFIG_SND_SOC_CS35L33 is not set
# CONFIG_SND_SOC_CS35L34 is not set
# CONFIG_SND_SOC_CS35L35 is not set
# CONFIG_SND_SOC_CS35L36 is not set
# CONFIG_SND_SOC_CS42L42 is not set
# CONFIG_SND_SOC_CS42L51_I2C is not set
# CONFIG_SND_SOC_CS42L52 is not set
# CONFIG_SND_SOC_CS42L56 is not set
# CONFIG_SND_SOC_CS42L73 is not set
# CONFIG_SND_SOC_CS4265 is not set
# CONFIG_SND_SOC_CS4270 is not set
# CONFIG_SND_SOC_CS4271_I2C is not set
# CONFIG_SND_SOC_CS4271_SPI is not set
# CONFIG_SND_SOC_CS42XX8_I2C is not set
# CONFIG_SND_SOC_CS43130 is not set
# CONFIG_SND_SOC_CS4341 is not set
# CONFIG_SND_SOC_CS4349 is not set
# CONFIG_SND_SOC_CS53L30 is not set
# CONFIG_SND_SOC_CX2072X is not set
CONFIG_SND_SOC_DA7213=m
CONFIG_SND_SOC_DA7219=m
CONFIG_SND_SOC_DMIC=m
# CONFIG_SND_SOC_ES7134 is not set
# CONFIG_SND_SOC_ES7241 is not set
CONFIG_SND_SOC_ES8316=m
# CONFIG_SND_SOC_ES8328_I2C is not set
# CONFIG_SND_SOC_ES8328_SPI is not set
# CONFIG_SND_SOC_GTM601 is not set
CONFIG_SND_SOC_HDAC_HDMI=m
# CONFIG_SND_SOC_INNO_RK3036 is not set
# CONFIG_SND_SOC_MAX98088 is not set
CONFIG_SND_SOC_MAX98090=m
CONFIG_SND_SOC_MAX98357A=m
# CONFIG_SND_SOC_MAX98504 is not set
# CONFIG_SND_SOC_MAX9867 is not set
CONFIG_SND_SOC_MAX98927=m
# CONFIG_SND_SOC_MAX98373 is not set
# CONFIG_SND_SOC_MAX98390 is not set
# CONFIG_SND_SOC_MAX9860 is not set
# CONFIG_SND_SOC_MSM8916_WCD_DIGITAL is not set
# CONFIG_SND_SOC_PCM1681 is not set
# CONFIG_SND_SOC_PCM1789_I2C is not set
# CONFIG_SND_SOC_PCM179X_I2C is not set
# CONFIG_SND_SOC_PCM179X_SPI is not set
# CONFIG_SND_SOC_PCM186X_I2C is not set
# CONFIG_SND_SOC_PCM186X_SPI is not set
# CONFIG_SND_SOC_PCM3060_I2C is not set
# CONFIG_SND_SOC_PCM3060_SPI is not set
# CONFIG_SND_SOC_PCM3168A_I2C is not set
# CONFIG_SND_SOC_PCM3168A_SPI is not set
# CONFIG_SND_SOC_PCM512x_I2C is not set
# CONFIG_SND_SOC_PCM512x_SPI is not set
# CONFIG_SND_SOC_RK3328 is not set
CONFIG_SND_SOC_RL6231=m
CONFIG_SND_SOC_RL6347A=m
CONFIG_SND_SOC_RT286=m
CONFIG_SND_SOC_RT298=m
CONFIG_SND_SOC_RT5514=m
CONFIG_SND_SOC_RT5514_SPI=m
# CONFIG_SND_SOC_RT5616 is not set
# CONFIG_SND_SOC_RT5631 is not set
CONFIG_SND_SOC_RT5640=m
CONFIG_SND_SOC_RT5645=m
CONFIG_SND_SOC_RT5651=m
CONFIG_SND_SOC_RT5663=m
CONFIG_SND_SOC_RT5670=m
CONFIG_SND_SOC_RT5677=m
CONFIG_SND_SOC_RT5677_SPI=m
# CONFIG_SND_SOC_SGTL5000 is not set
# CONFIG_SND_SOC_SIMPLE_AMPLIFIER is not set
# CONFIG_SND_SOC_SIRF_AUDIO_CODEC is not set
# CONFIG_SND_SOC_SPDIF is not set
# CONFIG_SND_SOC_SSM2305 is not set
# CONFIG_SND_SOC_SSM2602_SPI is not set
# CONFIG_SND_SOC_SSM2602_I2C is not set
CONFIG_SND_SOC_SSM4567=m
# CONFIG_SND_SOC_STA32X is not set
# CONFIG_SND_SOC_STA350 is not set
# CONFIG_SND_SOC_STI_SAS is not set
# CONFIG_SND_SOC_TAS2552 is not set
# CONFIG_SND_SOC_TAS2562 is not set
# CONFIG_SND_SOC_TAS2770 is not set
# CONFIG_SND_SOC_TAS5086 is not set
# CONFIG_SND_SOC_TAS571X is not set
# CONFIG_SND_SOC_TAS5720 is not set
# CONFIG_SND_SOC_TAS6424 is not set
# CONFIG_SND_SOC_TDA7419 is not set
# CONFIG_SND_SOC_TFA9879 is not set
# CONFIG_SND_SOC_TLV320AIC23_I2C is not set
# CONFIG_SND_SOC_TLV320AIC23_SPI is not set
# CONFIG_SND_SOC_TLV320AIC31XX is not set
# CONFIG_SND_SOC_TLV320AIC32X4_I2C is not set
# CONFIG_SND_SOC_TLV320AIC32X4_SPI is not set
# CONFIG_SND_SOC_TLV320AIC3X is not set
# CONFIG_SND_SOC_TLV320ADCX140 is not set
CONFIG_SND_SOC_TS3A227E=m
# CONFIG_SND_SOC_TSCS42XX is not set
# CONFIG_SND_SOC_TSCS454 is not set
# CONFIG_SND_SOC_UDA1334 is not set
# CONFIG_SND_SOC_WM8510 is not set
# CONFIG_SND_SOC_WM8523 is not set
# CONFIG_SND_SOC_WM8524 is not set
# CONFIG_SND_SOC_WM8580 is not set
# CONFIG_SND_SOC_WM8711 is not set
# CONFIG_SND_SOC_WM8728 is not set
# CONFIG_SND_SOC_WM8731 is not set
# CONFIG_SND_SOC_WM8737 is not set
# CONFIG_SND_SOC_WM8741 is not set
# CONFIG_SND_SOC_WM8750 is not set
# CONFIG_SND_SOC_WM8753 is not set
# CONFIG_SND_SOC_WM8770 is not set
# CONFIG_SND_SOC_WM8776 is not set
# CONFIG_SND_SOC_WM8782 is not set
# CONFIG_SND_SOC_WM8804_I2C is not set
# CONFIG_SND_SOC_WM8804_SPI is not set
# CONFIG_SND_SOC_WM8903 is not set
# CONFIG_SND_SOC_WM8904 is not set
# CONFIG_SND_SOC_WM8960 is not set
# CONFIG_SND_SOC_WM8962 is not set
# CONFIG_SND_SOC_WM8974 is not set
# CONFIG_SND_SOC_WM8978 is not set
# CONFIG_SND_SOC_WM8985 is not set
# CONFIG_SND_SOC_ZL38060 is not set
# CONFIG_SND_SOC_ZX_AUD96P22 is not set
# CONFIG_SND_SOC_MAX9759 is not set
# CONFIG_SND_SOC_MT6351 is not set
# CONFIG_SND_SOC_MT6358 is not set
# CONFIG_SND_SOC_MT6660 is not set
# CONFIG_SND_SOC_NAU8540 is not set
# CONFIG_SND_SOC_NAU8810 is not set
# CONFIG_SND_SOC_NAU8822 is not set
CONFIG_SND_SOC_NAU8824=m
CONFIG_SND_SOC_NAU8825=m
# CONFIG_SND_SOC_TPA6130A2 is not set
# end of CODEC drivers

# CONFIG_SND_SIMPLE_CARD is not set
CONFIG_SND_X86=y
CONFIG_HDMI_LPE_AUDIO=m
CONFIG_SND_SYNTH_EMUX=m
# CONFIG_SND_XEN_FRONTEND is not set
CONFIG_AC97_BUS=m

#
# HID support
#
CONFIG_HID=y
CONFIG_HID_BATTERY_STRENGTH=y
CONFIG_HIDRAW=y
CONFIG_UHID=m
CONFIG_HID_GENERIC=y

#
# Special HID drivers
#
CONFIG_HID_A4TECH=y
# CONFIG_HID_ACCUTOUCH is not set
CONFIG_HID_ACRUX=m
# CONFIG_HID_ACRUX_FF is not set
CONFIG_HID_APPLE=y
CONFIG_HID_APPLEIR=m
# CONFIG_HID_ASUS is not set
CONFIG_HID_AUREAL=m
CONFIG_HID_BELKIN=y
# CONFIG_HID_BETOP_FF is not set
# CONFIG_HID_BIGBEN_FF is not set
CONFIG_HID_CHERRY=y
CONFIG_HID_CHICONY=y
# CONFIG_HID_CORSAIR is not set
# CONFIG_HID_COUGAR is not set
# CONFIG_HID_MACALLY is not set
CONFIG_HID_PRODIKEYS=m
# CONFIG_HID_CMEDIA is not set
# CONFIG_HID_CP2112 is not set
# CONFIG_HID_CREATIVE_SB0540 is not set
CONFIG_HID_CYPRESS=y
CONFIG_HID_DRAGONRISE=m
# CONFIG_DRAGONRISE_FF is not set
# CONFIG_HID_EMS_FF is not set
# CONFIG_HID_ELAN is not set
CONFIG_HID_ELECOM=m
# CONFIG_HID_ELO is not set
CONFIG_HID_EZKEY=y
# CONFIG_HID_GEMBIRD is not set
# CONFIG_HID_GFRM is not set
# CONFIG_HID_GLORIOUS is not set
CONFIG_HID_HOLTEK=m
# CONFIG_HOLTEK_FF is not set
# CONFIG_HID_GT683R is not set
CONFIG_HID_KEYTOUCH=m
CONFIG_HID_KYE=m
CONFIG_HID_UCLOGIC=m
CONFIG_HID_WALTOP=m
# CONFIG_HID_VIEWSONIC is not set
CONFIG_HID_GYRATION=m
CONFIG_HID_ICADE=m
CONFIG_HID_ITE=y
# CONFIG_HID_JABRA is not set
CONFIG_HID_TWINHAN=m
CONFIG_HID_KENSINGTON=y
CONFIG_HID_LCPOWER=m
CONFIG_HID_LED=m
# CONFIG_HID_LENOVO is not set
CONFIG_HID_LOGITECH=y
CONFIG_HID_LOGITECH_DJ=m
CONFIG_HID_LOGITECH_HIDPP=m
# CONFIG_LOGITECH_FF is not set
# CONFIG_LOGIRUMBLEPAD2_FF is not set
# CONFIG_LOGIG940_FF is not set
# CONFIG_LOGIWHEELS_FF is not set
CONFIG_HID_MAGICMOUSE=y
# CONFIG_HID_MALTRON is not set
# CONFIG_HID_MAYFLASH is not set
CONFIG_HID_REDRAGON=y
CONFIG_HID_MICROSOFT=y
CONFIG_HID_MONTEREY=y
CONFIG_HID_MULTITOUCH=m
# CONFIG_HID_NTI is not set
CONFIG_HID_NTRIG=y
CONFIG_HID_ORTEK=m
CONFIG_HID_PANTHERLORD=m
# CONFIG_PANTHERLORD_FF is not set
# CONFIG_HID_PENMOUNT is not set
CONFIG_HID_PETALYNX=m
CONFIG_HID_PICOLCD=m
CONFIG_HID_PICOLCD_FB=y
CONFIG_HID_PICOLCD_BACKLIGHT=y
CONFIG_HID_PICOLCD_LCD=y
CONFIG_HID_PICOLCD_LEDS=y
CONFIG_HID_PICOLCD_CIR=y
CONFIG_HID_PLANTRONICS=y
CONFIG_HID_PRIMAX=m
# CONFIG_HID_RETRODE is not set
CONFIG_HID_ROCCAT=m
CONFIG_HID_SAITEK=m
CONFIG_HID_SAMSUNG=m
CONFIG_HID_SONY=m
# CONFIG_SONY_FF is not set
CONFIG_HID_SPEEDLINK=m
# CONFIG_HID_STEAM is not set
CONFIG_HID_STEELSERIES=m
CONFIG_HID_SUNPLUS=m
CONFIG_HID_RMI=m
CONFIG_HID_GREENASIA=m
# CONFIG_GREENASIA_FF is not set
CONFIG_HID_HYPERV_MOUSE=m
CONFIG_HID_SMARTJOYPLUS=m
# CONFIG_SMARTJOYPLUS_FF is not set
CONFIG_HID_TIVO=m
CONFIG_HID_TOPSEED=m
CONFIG_HID_THINGM=m
CONFIG_HID_THRUSTMASTER=m
# CONFIG_THRUSTMASTER_FF is not set
# CONFIG_HID_UDRAW_PS3 is not set
# CONFIG_HID_U2FZERO is not set
CONFIG_HID_WACOM=m
CONFIG_HID_WIIMOTE=m
# CONFIG_HID_XINMO is not set
CONFIG_HID_ZEROPLUS=m
# CONFIG_ZEROPLUS_FF is not set
CONFIG_HID_ZYDACRON=m
CONFIG_HID_SENSOR_HUB=m
CONFIG_HID_SENSOR_CUSTOM_SENSOR=m
CONFIG_HID_ALPS=m
# CONFIG_HID_MCP2221 is not set
# end of Special HID drivers

#
# USB HID support
#
CONFIG_USB_HID=y
CONFIG_HID_PID=y
CONFIG_USB_HIDDEV=y
# end of USB HID support

#
# I2C HID support
#
CONFIG_I2C_HID=m
# end of I2C HID support

#
# Intel ISH HID support
#
CONFIG_INTEL_ISH_HID=y
# CONFIG_INTEL_ISH_FIRMWARE_DOWNLOADER is not set
# end of Intel ISH HID support
# end of HID support

CONFIG_USB_OHCI_LITTLE_ENDIAN=y
CONFIG_USB_SUPPORT=y
CONFIG_USB_COMMON=y
# CONFIG_USB_LED_TRIG is not set
# CONFIG_USB_ULPI_BUS is not set
# CONFIG_USB_CONN_GPIO is not set
CONFIG_USB_ARCH_HAS_HCD=y
CONFIG_USB=y
CONFIG_USB_PCI=y
CONFIG_USB_ANNOUNCE_NEW_DEVICES=y

#
# Miscellaneous USB options
#
CONFIG_USB_DEFAULT_PERSIST=y
# CONFIG_USB_DYNAMIC_MINORS is not set
# CONFIG_USB_OTG is not set
# CONFIG_USB_OTG_WHITELIST is not set
# CONFIG_USB_OTG_BLACKLIST_HUB is not set
CONFIG_USB_LEDS_TRIGGER_USBPORT=m
CONFIG_USB_AUTOSUSPEND_DELAY=2
CONFIG_USB_MON=y

#
# USB Host Controller Drivers
#
# CONFIG_USB_C67X00_HCD is not set
CONFIG_USB_XHCI_HCD=y
# CONFIG_USB_XHCI_DBGCAP is not set
CONFIG_USB_XHCI_PCI=y
# CONFIG_USB_XHCI_PCI_RENESAS is not set
# CONFIG_USB_XHCI_PLATFORM is not set
CONFIG_USB_EHCI_HCD=y
CONFIG_USB_EHCI_ROOT_HUB_TT=y
CONFIG_USB_EHCI_TT_NEWSCHED=y
CONFIG_USB_EHCI_PCI=y
# CONFIG_USB_EHCI_FSL is not set
# CONFIG_USB_EHCI_HCD_PLATFORM is not set
# CONFIG_USB_OXU210HP_HCD is not set
# CONFIG_USB_ISP116X_HCD is not set
# CONFIG_USB_FOTG210_HCD is not set
# CONFIG_USB_MAX3421_HCD is not set
CONFIG_USB_OHCI_HCD=y
CONFIG_USB_OHCI_HCD_PCI=y
# CONFIG_USB_OHCI_HCD_PLATFORM is not set
CONFIG_USB_UHCI_HCD=y
# CONFIG_USB_U132_HCD is not set
# CONFIG_USB_SL811_HCD is not set
# CONFIG_USB_R8A66597_HCD is not set
# CONFIG_USB_HCD_BCMA is not set
# CONFIG_USB_HCD_SSB is not set
# CONFIG_USB_HCD_TEST_MODE is not set

#
# USB Device Class drivers
#
CONFIG_USB_ACM=m
CONFIG_USB_PRINTER=m
CONFIG_USB_WDM=m
CONFIG_USB_TMC=m

#
# NOTE: USB_STORAGE depends on SCSI but BLK_DEV_SD may
# also be needed; see USB_STORAGE Help for more info
#
CONFIG_USB_STORAGE=m
# CONFIG_USB_STORAGE_DEBUG is not set
CONFIG_USB_STORAGE_REALTEK=m
CONFIG_REALTEK_AUTOPM=y
CONFIG_USB_STORAGE_DATAFAB=m
CONFIG_USB_STORAGE_FREECOM=m
CONFIG_USB_STORAGE_ISD200=m
CONFIG_USB_STORAGE_USBAT=m
CONFIG_USB_STORAGE_SDDR09=m
CONFIG_USB_STORAGE_SDDR55=m
CONFIG_USB_STORAGE_JUMPSHOT=m
CONFIG_USB_STORAGE_ALAUDA=m
CONFIG_USB_STORAGE_ONETOUCH=m
CONFIG_USB_STORAGE_KARMA=m
CONFIG_USB_STORAGE_CYPRESS_ATACB=m
CONFIG_USB_STORAGE_ENE_UB6250=m
CONFIG_USB_UAS=m

#
# USB Imaging devices
#
CONFIG_USB_MDC800=m
CONFIG_USB_MICROTEK=m
CONFIG_USBIP_CORE=m
# CONFIG_USBIP_VHCI_HCD is not set
# CONFIG_USBIP_HOST is not set
# CONFIG_USBIP_DEBUG is not set
# CONFIG_USB_CDNS3 is not set
# CONFIG_USB_MUSB_HDRC is not set
# CONFIG_USB_DWC3 is not set
# CONFIG_USB_DWC2 is not set
# CONFIG_USB_CHIPIDEA is not set
# CONFIG_USB_ISP1760 is not set

#
# USB port drivers
#
CONFIG_USB_USS720=m
CONFIG_USB_SERIAL=y
CONFIG_USB_SERIAL_CONSOLE=y
CONFIG_USB_SERIAL_GENERIC=y
# CONFIG_USB_SERIAL_SIMPLE is not set
CONFIG_USB_SERIAL_AIRCABLE=m
CONFIG_USB_SERIAL_ARK3116=m
CONFIG_USB_SERIAL_BELKIN=m
CONFIG_USB_SERIAL_CH341=m
CONFIG_USB_SERIAL_WHITEHEAT=m
CONFIG_USB_SERIAL_DIGI_ACCELEPORT=m
CONFIG_USB_SERIAL_CP210X=m
CONFIG_USB_SERIAL_CYPRESS_M8=m
CONFIG_USB_SERIAL_EMPEG=m
CONFIG_USB_SERIAL_FTDI_SIO=m
CONFIG_USB_SERIAL_VISOR=m
CONFIG_USB_SERIAL_IPAQ=m
CONFIG_USB_SERIAL_IR=m
CONFIG_USB_SERIAL_EDGEPORT=m
CONFIG_USB_SERIAL_EDGEPORT_TI=m
# CONFIG_USB_SERIAL_F81232 is not set
# CONFIG_USB_SERIAL_F8153X is not set
CONFIG_USB_SERIAL_GARMIN=m
CONFIG_USB_SERIAL_IPW=m
CONFIG_USB_SERIAL_IUU=m
CONFIG_USB_SERIAL_KEYSPAN_PDA=m
CONFIG_USB_SERIAL_KEYSPAN=m
CONFIG_USB_SERIAL_KLSI=m
CONFIG_USB_SERIAL_KOBIL_SCT=m
CONFIG_USB_SERIAL_MCT_U232=m
# CONFIG_USB_SERIAL_METRO is not set
CONFIG_USB_SERIAL_MOS7720=m
CONFIG_USB_SERIAL_MOS7715_PARPORT=y
CONFIG_USB_SERIAL_MOS7840=m
# CONFIG_USB_SERIAL_MXUPORT is not set
CONFIG_USB_SERIAL_NAVMAN=m
CONFIG_USB_SERIAL_PL2303=m
CONFIG_USB_SERIAL_OTI6858=m
CONFIG_USB_SERIAL_QCAUX=m
CONFIG_USB_SERIAL_QUALCOMM=m
CONFIG_USB_SERIAL_SPCP8X5=m
CONFIG_USB_SERIAL_SAFE=m
CONFIG_USB_SERIAL_SAFE_PADDED=y
CONFIG_USB_SERIAL_SIERRAWIRELESS=m
CONFIG_USB_SERIAL_SYMBOL=m
# CONFIG_USB_SERIAL_TI is not set
CONFIG_USB_SERIAL_CYBERJACK=m
CONFIG_USB_SERIAL_XIRCOM=m
CONFIG_USB_SERIAL_WWAN=m
CONFIG_USB_SERIAL_OPTION=m
CONFIG_USB_SERIAL_OMNINET=m
CONFIG_USB_SERIAL_OPTICON=m
CONFIG_USB_SERIAL_XSENS_MT=m
# CONFIG_USB_SERIAL_WISHBONE is not set
CONFIG_USB_SERIAL_SSU100=m
CONFIG_USB_SERIAL_QT2=m
# CONFIG_USB_SERIAL_UPD78F0730 is not set
CONFIG_USB_SERIAL_DEBUG=m

#
# USB Miscellaneous drivers
#
CONFIG_USB_EMI62=m
CONFIG_USB_EMI26=m
CONFIG_USB_ADUTUX=m
CONFIG_USB_SEVSEG=m
CONFIG_USB_LEGOTOWER=m
CONFIG_USB_LCD=m
# CONFIG_USB_CYPRESS_CY7C63 is not set
# CONFIG_USB_CYTHERM is not set
CONFIG_USB_IDMOUSE=m
CONFIG_USB_FTDI_ELAN=m
CONFIG_USB_APPLEDISPLAY=m
# CONFIG_APPLE_MFI_FASTCHARGE is not set
CONFIG_USB_SISUSBVGA=m
CONFIG_USB_SISUSBVGA_CON=y
CONFIG_USB_LD=m
# CONFIG_USB_TRANCEVIBRATOR is not set
CONFIG_USB_IOWARRIOR=m
# CONFIG_USB_TEST is not set
# CONFIG_USB_EHSET_TEST_FIXTURE is not set
CONFIG_USB_ISIGHTFW=m
# CONFIG_USB_YUREX is not set
CONFIG_USB_EZUSB_FX2=m
# CONFIG_USB_HUB_USB251XB is not set
CONFIG_USB_HSIC_USB3503=m
# CONFIG_USB_HSIC_USB4604 is not set
# CONFIG_USB_LINK_LAYER_TEST is not set
# CONFIG_USB_CHAOSKEY is not set
CONFIG_USB_ATM=m
CONFIG_USB_SPEEDTOUCH=m
CONFIG_USB_CXACRU=m
CONFIG_USB_UEAGLEATM=m
CONFIG_USB_XUSBATM=m

#
# USB Physical Layer drivers
#
# CONFIG_NOP_USB_XCEIV is not set
# CONFIG_USB_GPIO_VBUS is not set
# CONFIG_USB_ISP1301 is not set
# end of USB Physical Layer drivers

# CONFIG_USB_GADGET is not set
CONFIG_TYPEC=y
# CONFIG_TYPEC_TCPM is not set
CONFIG_TYPEC_UCSI=y
# CONFIG_UCSI_CCG is not set
CONFIG_UCSI_ACPI=y
# CONFIG_TYPEC_TPS6598X is not set

#
# USB Type-C Multiplexer/DeMultiplexer Switch support
#
# CONFIG_TYPEC_MUX_PI3USB30532 is not set
# end of USB Type-C Multiplexer/DeMultiplexer Switch support

#
# USB Type-C Alternate Mode drivers
#
# CONFIG_TYPEC_DP_ALTMODE is not set
# end of USB Type-C Alternate Mode drivers

# CONFIG_USB_ROLE_SWITCH is not set
CONFIG_MMC=m
CONFIG_MMC_BLOCK=m
CONFIG_MMC_BLOCK_MINORS=8
CONFIG_SDIO_UART=m
# CONFIG_MMC_TEST is not set

#
# MMC/SD/SDIO Host Controller Drivers
#
# CONFIG_MMC_DEBUG is not set
CONFIG_MMC_SDHCI=m
CONFIG_MMC_SDHCI_IO_ACCESSORS=y
CONFIG_MMC_SDHCI_PCI=m
CONFIG_MMC_RICOH_MMC=y
CONFIG_MMC_SDHCI_ACPI=m
CONFIG_MMC_SDHCI_PLTFM=m
# CONFIG_MMC_SDHCI_F_SDH30 is not set
# CONFIG_MMC_WBSD is not set
CONFIG_MMC_TIFM_SD=m
# CONFIG_MMC_SPI is not set
CONFIG_MMC_CB710=m
CONFIG_MMC_VIA_SDMMC=m
CONFIG_MMC_VUB300=m
CONFIG_MMC_USHC=m
# CONFIG_MMC_USDHI6ROL0 is not set
CONFIG_MMC_CQHCI=m
# CONFIG_MMC_HSQ is not set
# CONFIG_MMC_TOSHIBA_PCI is not set
# CONFIG_MMC_MTK is not set
# CONFIG_MMC_SDHCI_XENON is not set
CONFIG_MEMSTICK=m
# CONFIG_MEMSTICK_DEBUG is not set

#
# MemoryStick drivers
#
# CONFIG_MEMSTICK_UNSAFE_RESUME is not set
CONFIG_MSPRO_BLOCK=m
# CONFIG_MS_BLOCK is not set

#
# MemoryStick Host Controller Drivers
#
CONFIG_MEMSTICK_TIFM_MS=m
CONFIG_MEMSTICK_JMICRON_38X=m
CONFIG_MEMSTICK_R592=m
CONFIG_NEW_LEDS=y
CONFIG_LEDS_CLASS=y
# CONFIG_LEDS_CLASS_FLASH is not set
# CONFIG_LEDS_BRIGHTNESS_HW_CHANGED is not set

#
# LED drivers
#
# CONFIG_LEDS_APU is not set
CONFIG_LEDS_LM3530=m
# CONFIG_LEDS_LM3532 is not set
# CONFIG_LEDS_LM3642 is not set
# CONFIG_LEDS_PCA9532 is not set
# CONFIG_LEDS_GPIO is not set
CONFIG_LEDS_LP3944=m
# CONFIG_LEDS_LP3952 is not set
CONFIG_LEDS_LP55XX_COMMON=m
CONFIG_LEDS_LP5521=m
CONFIG_LEDS_LP5523=m
CONFIG_LEDS_LP5562=m
# CONFIG_LEDS_LP8501 is not set
CONFIG_LEDS_CLEVO_MAIL=m
# CONFIG_LEDS_PCA955X is not set
# CONFIG_LEDS_PCA963X is not set
# CONFIG_LEDS_DAC124S085 is not set
# CONFIG_LEDS_PWM is not set
# CONFIG_LEDS_BD2802 is not set
CONFIG_LEDS_INTEL_SS4200=m
# CONFIG_LEDS_TCA6507 is not set
# CONFIG_LEDS_TLC591XX is not set
# CONFIG_LEDS_LM355x is not set

#
# LED driver for blink(1) USB RGB LED is under Special HID drivers (HID_THINGM)
#
CONFIG_LEDS_BLINKM=m
# CONFIG_LEDS_MLXCPLD is not set
# CONFIG_LEDS_MLXREG is not set
# CONFIG_LEDS_USER is not set
# CONFIG_LEDS_NIC78BX is not set
# CONFIG_LEDS_TI_LMU_COMMON is not set

#
# LED Triggers
#
CONFIG_LEDS_TRIGGERS=y
CONFIG_LEDS_TRIGGER_TIMER=m
CONFIG_LEDS_TRIGGER_ONESHOT=m
# CONFIG_LEDS_TRIGGER_DISK is not set
# CONFIG_LEDS_TRIGGER_MTD is not set
CONFIG_LEDS_TRIGGER_HEARTBEAT=m
CONFIG_LEDS_TRIGGER_BACKLIGHT=m
# CONFIG_LEDS_TRIGGER_CPU is not set
# CONFIG_LEDS_TRIGGER_ACTIVITY is not set
CONFIG_LEDS_TRIGGER_GPIO=m
CONFIG_LEDS_TRIGGER_DEFAULT_ON=m

#
# iptables trigger is under Netfilter config (LED target)
#
CONFIG_LEDS_TRIGGER_TRANSIENT=m
CONFIG_LEDS_TRIGGER_CAMERA=m
# CONFIG_LEDS_TRIGGER_PANIC is not set
# CONFIG_LEDS_TRIGGER_NETDEV is not set
# CONFIG_LEDS_TRIGGER_PATTERN is not set
CONFIG_LEDS_TRIGGER_AUDIO=m
# CONFIG_ACCESSIBILITY is not set
# CONFIG_INFINIBAND is not set
CONFIG_EDAC_ATOMIC_SCRUB=y
CONFIG_EDAC_SUPPORT=y
CONFIG_EDAC=y
CONFIG_EDAC_LEGACY_SYSFS=y
# CONFIG_EDAC_DEBUG is not set
CONFIG_EDAC_DECODE_MCE=m
CONFIG_EDAC_GHES=y
CONFIG_EDAC_AMD64=m
# CONFIG_EDAC_AMD64_ERROR_INJECTION is not set
CONFIG_EDAC_E752X=m
CONFIG_EDAC_I82975X=m
CONFIG_EDAC_I3000=m
CONFIG_EDAC_I3200=m
CONFIG_EDAC_IE31200=m
CONFIG_EDAC_X38=m
CONFIG_EDAC_I5400=m
CONFIG_EDAC_I7CORE=m
CONFIG_EDAC_I5000=m
CONFIG_EDAC_I5100=m
CONFIG_EDAC_I7300=m
CONFIG_EDAC_SBRIDGE=m
CONFIG_EDAC_SKX=m
# CONFIG_EDAC_I10NM is not set
CONFIG_EDAC_PND2=m
CONFIG_RTC_LIB=y
CONFIG_RTC_MC146818_LIB=y
CONFIG_RTC_CLASS=y
CONFIG_RTC_HCTOSYS=y
CONFIG_RTC_HCTOSYS_DEVICE="rtc0"
# CONFIG_RTC_SYSTOHC is not set
# CONFIG_RTC_DEBUG is not set
CONFIG_RTC_NVMEM=y

#
# RTC interfaces
#
CONFIG_RTC_INTF_SYSFS=y
CONFIG_RTC_INTF_PROC=y
CONFIG_RTC_INTF_DEV=y
# CONFIG_RTC_INTF_DEV_UIE_EMUL is not set
# CONFIG_RTC_DRV_TEST is not set

#
# I2C RTC drivers
#
# CONFIG_RTC_DRV_ABB5ZES3 is not set
# CONFIG_RTC_DRV_ABEOZ9 is not set
# CONFIG_RTC_DRV_ABX80X is not set
CONFIG_RTC_DRV_DS1307=m
# CONFIG_RTC_DRV_DS1307_CENTURY is not set
CONFIG_RTC_DRV_DS1374=m
# CONFIG_RTC_DRV_DS1374_WDT is not set
CONFIG_RTC_DRV_DS1672=m
CONFIG_RTC_DRV_MAX6900=m
CONFIG_RTC_DRV_RS5C372=m
CONFIG_RTC_DRV_ISL1208=m
CONFIG_RTC_DRV_ISL12022=m
CONFIG_RTC_DRV_X1205=m
CONFIG_RTC_DRV_PCF8523=m
# CONFIG_RTC_DRV_PCF85063 is not set
# CONFIG_RTC_DRV_PCF85363 is not set
CONFIG_RTC_DRV_PCF8563=m
CONFIG_RTC_DRV_PCF8583=m
CONFIG_RTC_DRV_M41T80=m
CONFIG_RTC_DRV_M41T80_WDT=y
CONFIG_RTC_DRV_BQ32K=m
# CONFIG_RTC_DRV_S35390A is not set
CONFIG_RTC_DRV_FM3130=m
# CONFIG_RTC_DRV_RX8010 is not set
CONFIG_RTC_DRV_RX8581=m
CONFIG_RTC_DRV_RX8025=m
CONFIG_RTC_DRV_EM3027=m
# CONFIG_RTC_DRV_RV3028 is not set
# CONFIG_RTC_DRV_RV8803 is not set
# CONFIG_RTC_DRV_SD3078 is not set

#
# SPI RTC drivers
#
# CONFIG_RTC_DRV_M41T93 is not set
# CONFIG_RTC_DRV_M41T94 is not set
# CONFIG_RTC_DRV_DS1302 is not set
# CONFIG_RTC_DRV_DS1305 is not set
# CONFIG_RTC_DRV_DS1343 is not set
# CONFIG_RTC_DRV_DS1347 is not set
# CONFIG_RTC_DRV_DS1390 is not set
# CONFIG_RTC_DRV_MAX6916 is not set
# CONFIG_RTC_DRV_R9701 is not set
CONFIG_RTC_DRV_RX4581=m
# CONFIG_RTC_DRV_RX6110 is not set
# CONFIG_RTC_DRV_RS5C348 is not set
# CONFIG_RTC_DRV_MAX6902 is not set
# CONFIG_RTC_DRV_PCF2123 is not set
# CONFIG_RTC_DRV_MCP795 is not set
CONFIG_RTC_I2C_AND_SPI=y

#
# SPI and I2C RTC drivers
#
CONFIG_RTC_DRV_DS3232=m
CONFIG_RTC_DRV_DS3232_HWMON=y
# CONFIG_RTC_DRV_PCF2127 is not set
CONFIG_RTC_DRV_RV3029C2=m
CONFIG_RTC_DRV_RV3029_HWMON=y

#
# Platform RTC drivers
#
CONFIG_RTC_DRV_CMOS=y
CONFIG_RTC_DRV_DS1286=m
CONFIG_RTC_DRV_DS1511=m
CONFIG_RTC_DRV_DS1553=m
# CONFIG_RTC_DRV_DS1685_FAMILY is not set
CONFIG_RTC_DRV_DS1742=m
CONFIG_RTC_DRV_DS2404=m
CONFIG_RTC_DRV_STK17TA8=m
# CONFIG_RTC_DRV_M48T86 is not set
CONFIG_RTC_DRV_M48T35=m
CONFIG_RTC_DRV_M48T59=m
CONFIG_RTC_DRV_MSM6242=m
CONFIG_RTC_DRV_BQ4802=m
CONFIG_RTC_DRV_RP5C01=m
CONFIG_RTC_DRV_V3020=m

#
# on-CPU RTC drivers
#
# CONFIG_RTC_DRV_FTRTC010 is not set

#
# HID Sensor RTC drivers
#
# CONFIG_RTC_DRV_HID_SENSOR_TIME is not set
CONFIG_DMADEVICES=y
# CONFIG_DMADEVICES_DEBUG is not set

#
# DMA Devices
#
CONFIG_DMA_ENGINE=y
CONFIG_DMA_VIRTUAL_CHANNELS=y
CONFIG_DMA_ACPI=y
# CONFIG_ALTERA_MSGDMA is not set
# CONFIG_INTEL_IDMA64 is not set
# CONFIG_INTEL_IDXD is not set
CONFIG_INTEL_IOATDMA=m
# CONFIG_PLX_DMA is not set
# CONFIG_QCOM_HIDMA_MGMT is not set
# CONFIG_QCOM_HIDMA is not set
CONFIG_DW_DMAC_CORE=y
CONFIG_DW_DMAC=m
CONFIG_DW_DMAC_PCI=y
# CONFIG_DW_EDMA is not set
# CONFIG_DW_EDMA_PCIE is not set
CONFIG_HSU_DMA=y
# CONFIG_SF_PDMA is not set

#
# DMA Clients
#
CONFIG_ASYNC_TX_DMA=y
# CONFIG_DMATEST is not set
CONFIG_DMA_ENGINE_RAID=y

#
# DMABUF options
#
CONFIG_SYNC_FILE=y
CONFIG_SW_SYNC=y
# CONFIG_UDMABUF is not set
# CONFIG_DMABUF_MOVE_NOTIFY is not set
# CONFIG_DMABUF_SELFTESTS is not set
# CONFIG_DMABUF_HEAPS is not set
# end of DMABUF options

CONFIG_DCA=m
CONFIG_AUXDISPLAY=y
# CONFIG_HD44780 is not set
CONFIG_KS0108=m
CONFIG_KS0108_PORT=0x378
CONFIG_KS0108_DELAY=2
CONFIG_CFAG12864B=m
CONFIG_CFAG12864B_RATE=20
# CONFIG_IMG_ASCII_LCD is not set
# CONFIG_PARPORT_PANEL is not set
# CONFIG_CHARLCD_BL_OFF is not set
# CONFIG_CHARLCD_BL_ON is not set
CONFIG_CHARLCD_BL_FLASH=y
# CONFIG_PANEL is not set
CONFIG_UIO=m
CONFIG_UIO_CIF=m
CONFIG_UIO_PDRV_GENIRQ=m
# CONFIG_UIO_DMEM_GENIRQ is not set
CONFIG_UIO_AEC=m
CONFIG_UIO_SERCOS3=m
CONFIG_UIO_PCI_GENERIC=m
# CONFIG_UIO_NETX is not set
# CONFIG_UIO_PRUSS is not set
# CONFIG_UIO_MF624 is not set
CONFIG_UIO_HV_GENERIC=m
CONFIG_VFIO_IOMMU_TYPE1=m
CONFIG_VFIO_VIRQFD=m
CONFIG_VFIO=m
CONFIG_VFIO_NOIOMMU=y
CONFIG_VFIO_PCI=m
# CONFIG_VFIO_PCI_VGA is not set
CONFIG_VFIO_PCI_MMAP=y
CONFIG_VFIO_PCI_INTX=y
# CONFIG_VFIO_PCI_IGD is not set
CONFIG_VFIO_MDEV=m
CONFIG_VFIO_MDEV_DEVICE=m
CONFIG_IRQ_BYPASS_MANAGER=y
# CONFIG_VIRT_DRIVERS is not set
CONFIG_VIRTIO=y
CONFIG_VIRTIO_MENU=y
CONFIG_VIRTIO_PCI=y
CONFIG_VIRTIO_PCI_LEGACY=y
# CONFIG_VIRTIO_PMEM is not set
CONFIG_VIRTIO_BALLOON=y
CONFIG_VIRTIO_MEM=m
CONFIG_VIRTIO_INPUT=m
# CONFIG_VIRTIO_MMIO is not set
# CONFIG_VDPA is not set
CONFIG_VHOST_IOTLB=m
CONFIG_VHOST=m
CONFIG_VHOST_MENU=y
CONFIG_VHOST_NET=m
# CONFIG_VHOST_SCSI is not set
CONFIG_VHOST_VSOCK=m
# CONFIG_VHOST_CROSS_ENDIAN_LEGACY is not set

#
# Microsoft Hyper-V guest support
#
CONFIG_HYPERV=m
CONFIG_HYPERV_TIMER=y
CONFIG_HYPERV_UTILS=m
CONFIG_HYPERV_BALLOON=m
# end of Microsoft Hyper-V guest support

#
# Xen driver support
#
CONFIG_XEN_BALLOON=y
# CONFIG_XEN_BALLOON_MEMORY_HOTPLUG is not set
CONFIG_XEN_SCRUB_PAGES_DEFAULT=y
CONFIG_XEN_DEV_EVTCHN=m
# CONFIG_XEN_BACKEND is not set
CONFIG_XENFS=m
CONFIG_XEN_COMPAT_XENFS=y
CONFIG_XEN_SYS_HYPERVISOR=y
CONFIG_XEN_XENBUS_FRONTEND=y
# CONFIG_XEN_GNTDEV is not set
# CONFIG_XEN_GRANT_DEV_ALLOC is not set
# CONFIG_XEN_GRANT_DMA_ALLOC is not set
CONFIG_SWIOTLB_XEN=y
# CONFIG_XEN_PVCALLS_FRONTEND is not set
CONFIG_XEN_PRIVCMD=m
CONFIG_XEN_HAVE_PVMMU=y
CONFIG_XEN_EFI=y
CONFIG_XEN_AUTO_XLATE=y
CONFIG_XEN_ACPI=y
CONFIG_XEN_HAVE_VPMU=y
# end of Xen driver support

# CONFIG_GREYBUS is not set
CONFIG_STAGING=y
# CONFIG_PRISM2_USB is not set
# CONFIG_COMEDI is not set
# CONFIG_RTL8192U is not set
CONFIG_RTLLIB=m
CONFIG_RTLLIB_CRYPTO_CCMP=m
CONFIG_RTLLIB_CRYPTO_TKIP=m
CONFIG_RTLLIB_CRYPTO_WEP=m
CONFIG_RTL8192E=m
# CONFIG_RTL8723BS is not set
CONFIG_R8712U=m
# CONFIG_R8188EU is not set
# CONFIG_RTS5208 is not set
# CONFIG_VT6655 is not set
# CONFIG_VT6656 is not set

#
# IIO staging drivers
#

#
# Accelerometers
#
# CONFIG_ADIS16203 is not set
# CONFIG_ADIS16240 is not set
# end of Accelerometers

#
# Analog to digital converters
#
# CONFIG_AD7816 is not set
# CONFIG_AD7280 is not set
# end of Analog to digital converters

#
# Analog digital bi-direction converters
#
# CONFIG_ADT7316 is not set
# end of Analog digital bi-direction converters

#
# Capacitance to digital converters
#
# CONFIG_AD7150 is not set
# CONFIG_AD7746 is not set
# end of Capacitance to digital converters

#
# Direct Digital Synthesis
#
# CONFIG_AD9832 is not set
# CONFIG_AD9834 is not set
# end of Direct Digital Synthesis

#
# Network Analyzer, Impedance Converters
#
# CONFIG_AD5933 is not set
# end of Network Analyzer, Impedance Converters

#
# Active energy metering IC
#
# CONFIG_ADE7854 is not set
# end of Active energy metering IC

#
# Resolver to digital converters
#
# CONFIG_AD2S1210 is not set
# end of Resolver to digital converters
# end of IIO staging drivers

# CONFIG_FB_SM750 is not set

#
# Speakup console speech
#
# CONFIG_SPEAKUP is not set
# end of Speakup console speech

# CONFIG_STAGING_MEDIA is not set

#
# Android
#
# CONFIG_ASHMEM is not set
CONFIG_ION=y
CONFIG_ION_SYSTEM_HEAP=y
# CONFIG_ION_CMA_HEAP is not set
# end of Android

# CONFIG_LTE_GDM724X is not set
CONFIG_FIREWIRE_SERIAL=m
CONFIG_FWTTY_MAX_TOTAL_PORTS=64
CONFIG_FWTTY_MAX_CARD_PORTS=32
# CONFIG_GS_FPGABOOT is not set
# CONFIG_UNISYSSPAR is not set
# CONFIG_FB_TFT is not set
# CONFIG_WILC1000_SDIO is not set
# CONFIG_WILC1000_SPI is not set
# CONFIG_KS7010 is not set
# CONFIG_PI433 is not set

#
# Gasket devices
#
# CONFIG_STAGING_GASKET_FRAMEWORK is not set
# end of Gasket devices

# CONFIG_FIELDBUS_DEV is not set
# CONFIG_KPC2000 is not set
CONFIG_QLGE=m
# CONFIG_WFX is not set
CONFIG_X86_PLATFORM_DEVICES=y
CONFIG_ACPI_WMI=m
CONFIG_WMI_BMOF=m
# CONFIG_ALIENWARE_WMI is not set
# CONFIG_HUAWEI_WMI is not set
# CONFIG_INTEL_WMI_SBL_FW_UPDATE is not set
CONFIG_INTEL_WMI_THUNDERBOLT=m
CONFIG_MXM_WMI=m
# CONFIG_PEAQ_WMI is not set
# CONFIG_XIAOMI_WMI is not set
CONFIG_ACERHDF=m
# CONFIG_ACER_WIRELESS is not set
CONFIG_ACER_WMI=m
CONFIG_APPLE_GMUX=m
CONFIG_ASUS_LAPTOP=m
# CONFIG_ASUS_WIRELESS is not set
CONFIG_ASUS_WMI=m
CONFIG_ASUS_NB_WMI=m
CONFIG_EEEPC_LAPTOP=m
CONFIG_EEEPC_WMI=m
CONFIG_DCDBAS=m
CONFIG_DELL_SMBIOS=m
CONFIG_DELL_SMBIOS_WMI=y
CONFIG_DELL_SMBIOS_SMM=y
CONFIG_DELL_LAPTOP=m
CONFIG_DELL_RBTN=m
CONFIG_DELL_RBU=m
CONFIG_DELL_SMO8800=m
CONFIG_DELL_WMI=m
CONFIG_DELL_WMI_DESCRIPTOR=m
CONFIG_DELL_WMI_AIO=m
# CONFIG_DELL_WMI_LED is not set
CONFIG_AMILO_RFKILL=m
CONFIG_FUJITSU_LAPTOP=m
CONFIG_FUJITSU_TABLET=m
# CONFIG_GPD_POCKET_FAN is not set
CONFIG_HP_ACCEL=m
CONFIG_HP_WIRELESS=m
CONFIG_HP_WMI=m
# CONFIG_IBM_RTL is not set
CONFIG_IDEAPAD_LAPTOP=m
CONFIG_SENSORS_HDAPS=m
CONFIG_THINKPAD_ACPI=m
CONFIG_THINKPAD_ACPI_ALSA_SUPPORT=y
# CONFIG_THINKPAD_ACPI_DEBUGFACILITIES is not set
# CONFIG_THINKPAD_ACPI_DEBUG is not set
# CONFIG_THINKPAD_ACPI_UNSAFE_LEDS is not set
CONFIG_THINKPAD_ACPI_VIDEO=y
CONFIG_THINKPAD_ACPI_HOTKEY_POLL=y
# CONFIG_INTEL_ATOMISP2_PM is not set
CONFIG_INTEL_HID_EVENT=m
# CONFIG_INTEL_INT0002_VGPIO is not set
# CONFIG_INTEL_MENLOW is not set
CONFIG_INTEL_OAKTRAIL=m
CONFIG_INTEL_VBTN=m
# CONFIG_SURFACE3_WMI is not set
# CONFIG_SURFACE_3_POWER_OPREGION is not set
# CONFIG_SURFACE_PRO3_BUTTON is not set
CONFIG_MSI_LAPTOP=m
CONFIG_MSI_WMI=m
# CONFIG_PCENGINES_APU2 is not set
CONFIG_SAMSUNG_LAPTOP=m
CONFIG_SAMSUNG_Q10=m
CONFIG_ACPI_TOSHIBA=m
CONFIG_TOSHIBA_BT_RFKILL=m
# CONFIG_TOSHIBA_HAPS is not set
# CONFIG_TOSHIBA_WMI is not set
CONFIG_ACPI_CMPC=m
CONFIG_COMPAL_LAPTOP=m
# CONFIG_LG_LAPTOP is not set
CONFIG_PANASONIC_LAPTOP=m
CONFIG_SONY_LAPTOP=m
CONFIG_SONYPI_COMPAT=y
# CONFIG_SYSTEM76_ACPI is not set
CONFIG_TOPSTAR_LAPTOP=m
# CONFIG_I2C_MULTI_INSTANTIATE is not set
# CONFIG_MLX_PLATFORM is not set
CONFIG_INTEL_IPS=m
# CONFIG_INTEL_RST is not set
# CONFIG_INTEL_SMARTCONNECT is not set

#
# Intel Speed Select Technology interface support
#
# CONFIG_INTEL_SPEED_SELECT_INTERFACE is not set
# end of Intel Speed Select Technology interface support

# CONFIG_INTEL_TURBO_MAX_3 is not set
# CONFIG_INTEL_UNCORE_FREQ_CONTROL is not set
CONFIG_INTEL_PMC_CORE=m
# CONFIG_INTEL_PUNIT_IPC is not set
# CONFIG_INTEL_SCU_PCI is not set
# CONFIG_INTEL_SCU_PLATFORM is not set
CONFIG_PMC_ATOM=y
# CONFIG_MFD_CROS_EC is not set
# CONFIG_CHROME_PLATFORMS is not set
# CONFIG_MELLANOX_PLATFORM is not set
CONFIG_HAVE_CLK=y
CONFIG_CLKDEV_LOOKUP=y
CONFIG_HAVE_CLK_PREPARE=y
CONFIG_COMMON_CLK=y
# CONFIG_COMMON_CLK_MAX9485 is not set
# CONFIG_COMMON_CLK_SI5341 is not set
# CONFIG_COMMON_CLK_SI5351 is not set
# CONFIG_COMMON_CLK_SI544 is not set
# CONFIG_COMMON_CLK_CDCE706 is not set
# CONFIG_COMMON_CLK_CS2000_CP is not set
# CONFIG_COMMON_CLK_PWM is not set
# CONFIG_HWSPINLOCK is not set

#
# Clock Source drivers
#
CONFIG_CLKEVT_I8253=y
CONFIG_I8253_LOCK=y
CONFIG_CLKBLD_I8253=y
# end of Clock Source drivers

CONFIG_MAILBOX=y
CONFIG_PCC=y
# CONFIG_ALTERA_MBOX is not set
CONFIG_IOMMU_IOVA=y
CONFIG_IOASID=y
CONFIG_IOMMU_API=y
CONFIG_IOMMU_SUPPORT=y

#
# Generic IOMMU Pagetable Support
#
# end of Generic IOMMU Pagetable Support

# CONFIG_IOMMU_DEBUGFS is not set
# CONFIG_IOMMU_DEFAULT_PASSTHROUGH is not set
CONFIG_IOMMU_DMA=y
CONFIG_AMD_IOMMU=y
CONFIG_AMD_IOMMU_V2=m
CONFIG_DMAR_TABLE=y
CONFIG_INTEL_IOMMU=y
# CONFIG_INTEL_IOMMU_SVM is not set
# CONFIG_INTEL_IOMMU_DEFAULT_ON is not set
CONFIG_INTEL_IOMMU_FLOPPY_WA=y
# CONFIG_INTEL_IOMMU_SCALABLE_MODE_DEFAULT_ON is not set
CONFIG_IRQ_REMAP=y
CONFIG_HYPERV_IOMMU=y

#
# Remoteproc drivers
#
# CONFIG_REMOTEPROC is not set
# end of Remoteproc drivers

#
# Rpmsg drivers
#
# CONFIG_RPMSG_QCOM_GLINK_RPM is not set
# CONFIG_RPMSG_VIRTIO is not set
# end of Rpmsg drivers

# CONFIG_SOUNDWIRE is not set

#
# SOC (System On Chip) specific Drivers
#

#
# Amlogic SoC drivers
#
# end of Amlogic SoC drivers

#
# Aspeed SoC drivers
#
# end of Aspeed SoC drivers

#
# Broadcom SoC drivers
#
# end of Broadcom SoC drivers

#
# NXP/Freescale QorIQ SoC drivers
#
# end of NXP/Freescale QorIQ SoC drivers

#
# i.MX SoC drivers
#
# end of i.MX SoC drivers

#
# Qualcomm SoC drivers
#
# end of Qualcomm SoC drivers

# CONFIG_SOC_TI is not set

#
# Xilinx SoC drivers
#
# CONFIG_XILINX_VCU is not set
# end of Xilinx SoC drivers
# end of SOC (System On Chip) specific Drivers

CONFIG_PM_DEVFREQ=y

#
# DEVFREQ Governors
#
CONFIG_DEVFREQ_GOV_SIMPLE_ONDEMAND=m
# CONFIG_DEVFREQ_GOV_PERFORMANCE is not set
# CONFIG_DEVFREQ_GOV_POWERSAVE is not set
# CONFIG_DEVFREQ_GOV_USERSPACE is not set
# CONFIG_DEVFREQ_GOV_PASSIVE is not set

#
# DEVFREQ Drivers
#
# CONFIG_PM_DEVFREQ_EVENT is not set
# CONFIG_EXTCON is not set
# CONFIG_MEMORY is not set
CONFIG_IIO=y
CONFIG_IIO_BUFFER=y
CONFIG_IIO_BUFFER_CB=y
# CONFIG_IIO_BUFFER_HW_CONSUMER is not set
CONFIG_IIO_KFIFO_BUF=y
CONFIG_IIO_TRIGGERED_BUFFER=m
# CONFIG_IIO_CONFIGFS is not set
CONFIG_IIO_TRIGGER=y
CONFIG_IIO_CONSUMERS_PER_TRIGGER=2
# CONFIG_IIO_SW_DEVICE is not set
# CONFIG_IIO_SW_TRIGGER is not set

#
# Accelerometers
#
# CONFIG_ADIS16201 is not set
# CONFIG_ADIS16209 is not set
# CONFIG_ADXL345_I2C is not set
# CONFIG_ADXL345_SPI is not set
# CONFIG_ADXL372_SPI is not set
# CONFIG_ADXL372_I2C is not set
# CONFIG_BMA180 is not set
# CONFIG_BMA220 is not set
# CONFIG_BMA400 is not set
# CONFIG_BMC150_ACCEL is not set
# CONFIG_DA280 is not set
# CONFIG_DA311 is not set
# CONFIG_DMARD09 is not set
# CONFIG_DMARD10 is not set
CONFIG_HID_SENSOR_ACCEL_3D=m
# CONFIG_IIO_ST_ACCEL_3AXIS is not set
# CONFIG_KXSD9 is not set
# CONFIG_KXCJK1013 is not set
# CONFIG_MC3230 is not set
# CONFIG_MMA7455_I2C is not set
# CONFIG_MMA7455_SPI is not set
# CONFIG_MMA7660 is not set
# CONFIG_MMA8452 is not set
# CONFIG_MMA9551 is not set
# CONFIG_MMA9553 is not set
# CONFIG_MXC4005 is not set
# CONFIG_MXC6255 is not set
# CONFIG_SCA3000 is not set
# CONFIG_STK8312 is not set
# CONFIG_STK8BA50 is not set
# end of Accelerometers

#
# Analog to digital converters
#
# CONFIG_AD7091R5 is not set
# CONFIG_AD7124 is not set
# CONFIG_AD7192 is not set
# CONFIG_AD7266 is not set
# CONFIG_AD7291 is not set
# CONFIG_AD7292 is not set
# CONFIG_AD7298 is not set
# CONFIG_AD7476 is not set
# CONFIG_AD7606_IFACE_PARALLEL is not set
# CONFIG_AD7606_IFACE_SPI is not set
# CONFIG_AD7766 is not set
# CONFIG_AD7768_1 is not set
# CONFIG_AD7780 is not set
# CONFIG_AD7791 is not set
# CONFIG_AD7793 is not set
# CONFIG_AD7887 is not set
# CONFIG_AD7923 is not set
# CONFIG_AD7949 is not set
# CONFIG_AD799X is not set
# CONFIG_AD9467 is not set
# CONFIG_ADI_AXI_ADC is not set
# CONFIG_HI8435 is not set
# CONFIG_HX711 is not set
# CONFIG_INA2XX_ADC is not set
# CONFIG_LTC2471 is not set
# CONFIG_LTC2485 is not set
# CONFIG_LTC2496 is not set
# CONFIG_LTC2497 is not set
# CONFIG_MAX1027 is not set
# CONFIG_MAX11100 is not set
# CONFIG_MAX1118 is not set
# CONFIG_MAX1241 is not set
# CONFIG_MAX1363 is not set
# CONFIG_MAX9611 is not set
# CONFIG_MCP320X is not set
# CONFIG_MCP3422 is not set
# CONFIG_MCP3911 is not set
# CONFIG_NAU7802 is not set
# CONFIG_TI_ADC081C is not set
# CONFIG_TI_ADC0832 is not set
# CONFIG_TI_ADC084S021 is not set
# CONFIG_TI_ADC12138 is not set
# CONFIG_TI_ADC108S102 is not set
# CONFIG_TI_ADC128S052 is not set
# CONFIG_TI_ADC161S626 is not set
# CONFIG_TI_ADS1015 is not set
# CONFIG_TI_ADS7950 is not set
# CONFIG_TI_TLC4541 is not set
# CONFIG_VIPERBOARD_ADC is not set
# CONFIG_XILINX_XADC is not set
# end of Analog to digital converters

#
# Analog Front Ends
#
# end of Analog Front Ends

#
# Amplifiers
#
# CONFIG_AD8366 is not set
# CONFIG_HMC425 is not set
# end of Amplifiers

#
# Chemical Sensors
#
# CONFIG_ATLAS_PH_SENSOR is not set
# CONFIG_ATLAS_EZO_SENSOR is not set
# CONFIG_BME680 is not set
# CONFIG_CCS811 is not set
# CONFIG_IAQCORE is not set
# CONFIG_SENSIRION_SGP30 is not set
# CONFIG_SPS30 is not set
# CONFIG_VZ89X is not set
# end of Chemical Sensors

#
# Hid Sensor IIO Common
#
CONFIG_HID_SENSOR_IIO_COMMON=m
CONFIG_HID_SENSOR_IIO_TRIGGER=m
# end of Hid Sensor IIO Common

#
# SSP Sensor Common
#
# CONFIG_IIO_SSP_SENSORHUB is not set
# end of SSP Sensor Common

#
# Digital to analog converters
#
# CONFIG_AD5064 is not set
# CONFIG_AD5360 is not set
# CONFIG_AD5380 is not set
# CONFIG_AD5421 is not set
# CONFIG_AD5446 is not set
# CONFIG_AD5449 is not set
# CONFIG_AD5592R is not set
# CONFIG_AD5593R is not set
# CONFIG_AD5504 is not set
# CONFIG_AD5624R_SPI is not set
# CONFIG_AD5686_SPI is not set
# CONFIG_AD5696_I2C is not set
# CONFIG_AD5755 is not set
# CONFIG_AD5758 is not set
# CONFIG_AD5761 is not set
# CONFIG_AD5764 is not set
# CONFIG_AD5770R is not set
# CONFIG_AD5791 is not set
# CONFIG_AD7303 is not set
# CONFIG_AD8801 is not set
# CONFIG_DS4424 is not set
# CONFIG_LTC1660 is not set
# CONFIG_LTC2632 is not set
# CONFIG_M62332 is not set
# CONFIG_MAX517 is not set
# CONFIG_MCP4725 is not set
# CONFIG_MCP4922 is not set
# CONFIG_TI_DAC082S085 is not set
# CONFIG_TI_DAC5571 is not set
# CONFIG_TI_DAC7311 is not set
# CONFIG_TI_DAC7612 is not set
# end of Digital to analog converters

#
# IIO dummy driver
#
# end of IIO dummy driver

#
# Frequency Synthesizers DDS/PLL
#

#
# Clock Generator/Distribution
#
# CONFIG_AD9523 is not set
# end of Clock Generator/Distribution

#
# Phase-Locked Loop (PLL) frequency synthesizers
#
# CONFIG_ADF4350 is not set
# CONFIG_ADF4371 is not set
# end of Phase-Locked Loop (PLL) frequency synthesizers
# end of Frequency Synthesizers DDS/PLL

#
# Digital gyroscope sensors
#
# CONFIG_ADIS16080 is not set
# CONFIG_ADIS16130 is not set
# CONFIG_ADIS16136 is not set
# CONFIG_ADIS16260 is not set
# CONFIG_ADXRS450 is not set
# CONFIG_BMG160 is not set
# CONFIG_FXAS21002C is not set
CONFIG_HID_SENSOR_GYRO_3D=m
# CONFIG_MPU3050_I2C is not set
# CONFIG_IIO_ST_GYRO_3AXIS is not set
# CONFIG_ITG3200 is not set
# end of Digital gyroscope sensors

#
# Health Sensors
#

#
# Heart Rate Monitors
#
# CONFIG_AFE4403 is not set
# CONFIG_AFE4404 is not set
# CONFIG_MAX30100 is not set
# CONFIG_MAX30102 is not set
# end of Heart Rate Monitors
# end of Health Sensors

#
# Humidity sensors
#
# CONFIG_AM2315 is not set
# CONFIG_DHT11 is not set
# CONFIG_HDC100X is not set
# CONFIG_HID_SENSOR_HUMIDITY is not set
# CONFIG_HTS221 is not set
# CONFIG_HTU21 is not set
# CONFIG_SI7005 is not set
# CONFIG_SI7020 is not set
# end of Humidity sensors

#
# Inertial measurement units
#
# CONFIG_ADIS16400 is not set
# CONFIG_ADIS16460 is not set
# CONFIG_ADIS16475 is not set
# CONFIG_ADIS16480 is not set
# CONFIG_BMI160_I2C is not set
# CONFIG_BMI160_SPI is not set
# CONFIG_FXOS8700_I2C is not set
# CONFIG_FXOS8700_SPI is not set
# CONFIG_KMX61 is not set
# CONFIG_INV_MPU6050_I2C is not set
# CONFIG_INV_MPU6050_SPI is not set
# CONFIG_IIO_ST_LSM6DSX is not set
# end of Inertial measurement units

#
# Light sensors
#
# CONFIG_ACPI_ALS is not set
# CONFIG_ADJD_S311 is not set
# CONFIG_ADUX1020 is not set
# CONFIG_AL3010 is not set
# CONFIG_AL3320A is not set
# CONFIG_APDS9300 is not set
# CONFIG_APDS9960 is not set
# CONFIG_BH1750 is not set
# CONFIG_BH1780 is not set
# CONFIG_CM32181 is not set
# CONFIG_CM3232 is not set
# CONFIG_CM3323 is not set
# CONFIG_CM36651 is not set
# CONFIG_GP2AP002 is not set
# CONFIG_GP2AP020A00F is not set
# CONFIG_SENSORS_ISL29018 is not set
# CONFIG_SENSORS_ISL29028 is not set
# CONFIG_ISL29125 is not set
CONFIG_HID_SENSOR_ALS=m
CONFIG_HID_SENSOR_PROX=m
# CONFIG_JSA1212 is not set
# CONFIG_RPR0521 is not set
# CONFIG_LTR501 is not set
# CONFIG_LV0104CS is not set
# CONFIG_MAX44000 is not set
# CONFIG_MAX44009 is not set
# CONFIG_NOA1305 is not set
# CONFIG_OPT3001 is not set
# CONFIG_PA12203001 is not set
# CONFIG_SI1133 is not set
# CONFIG_SI1145 is not set
# CONFIG_STK3310 is not set
# CONFIG_ST_UVIS25 is not set
# CONFIG_TCS3414 is not set
# CONFIG_TCS3472 is not set
# CONFIG_SENSORS_TSL2563 is not set
# CONFIG_TSL2583 is not set
# CONFIG_TSL2772 is not set
# CONFIG_TSL4531 is not set
# CONFIG_US5182D is not set
# CONFIG_VCNL4000 is not set
# CONFIG_VCNL4035 is not set
# CONFIG_VEML6030 is not set
# CONFIG_VEML6070 is not set
# CONFIG_VL6180 is not set
# CONFIG_ZOPT2201 is not set
# end of Light sensors

#
# Magnetometer sensors
#
# CONFIG_AK8975 is not set
# CONFIG_AK09911 is not set
# CONFIG_BMC150_MAGN_I2C is not set
# CONFIG_BMC150_MAGN_SPI is not set
# CONFIG_MAG3110 is not set
CONFIG_HID_SENSOR_MAGNETOMETER_3D=m
# CONFIG_MMC35240 is not set
# CONFIG_IIO_ST_MAGN_3AXIS is not set
# CONFIG_SENSORS_HMC5843_I2C is not set
# CONFIG_SENSORS_HMC5843_SPI is not set
# CONFIG_SENSORS_RM3100_I2C is not set
# CONFIG_SENSORS_RM3100_SPI is not set
# end of Magnetometer sensors

#
# Multiplexers
#
# end of Multiplexers

#
# Inclinometer sensors
#
CONFIG_HID_SENSOR_INCLINOMETER_3D=m
CONFIG_HID_SENSOR_DEVICE_ROTATION=m
# end of Inclinometer sensors

#
# Triggers - standalone
#
# CONFIG_IIO_INTERRUPT_TRIGGER is not set
# CONFIG_IIO_SYSFS_TRIGGER is not set
# end of Triggers - standalone

#
# Linear and angular position sensors
#
# end of Linear and angular position sensors

#
# Digital potentiometers
#
# CONFIG_AD5272 is not set
# CONFIG_DS1803 is not set
# CONFIG_MAX5432 is not set
# CONFIG_MAX5481 is not set
# CONFIG_MAX5487 is not set
# CONFIG_MCP4018 is not set
# CONFIG_MCP4131 is not set
# CONFIG_MCP4531 is not set
# CONFIG_MCP41010 is not set
# CONFIG_TPL0102 is not set
# end of Digital potentiometers

#
# Digital potentiostats
#
# CONFIG_LMP91000 is not set
# end of Digital potentiostats

#
# Pressure sensors
#
# CONFIG_ABP060MG is not set
# CONFIG_BMP280 is not set
# CONFIG_DLHL60D is not set
# CONFIG_DPS310 is not set
CONFIG_HID_SENSOR_PRESS=m
# CONFIG_HP03 is not set
# CONFIG_ICP10100 is not set
# CONFIG_MPL115_I2C is not set
# CONFIG_MPL115_SPI is not set
# CONFIG_MPL3115 is not set
# CONFIG_MS5611 is not set
# CONFIG_MS5637 is not set
# CONFIG_IIO_ST_PRESS is not set
# CONFIG_T5403 is not set
# CONFIG_HP206C is not set
# CONFIG_ZPA2326 is not set
# end of Pressure sensors

#
# Lightning sensors
#
# CONFIG_AS3935 is not set
# end of Lightning sensors

#
# Proximity and distance sensors
#
# CONFIG_ISL29501 is not set
# CONFIG_LIDAR_LITE_V2 is not set
# CONFIG_MB1232 is not set
# CONFIG_PING is not set
# CONFIG_RFD77402 is not set
# CONFIG_SRF04 is not set
# CONFIG_SX9310 is not set
# CONFIG_SX9500 is not set
# CONFIG_SRF08 is not set
# CONFIG_VCNL3020 is not set
# CONFIG_VL53L0X_I2C is not set
# end of Proximity and distance sensors

#
# Resolver to digital converters
#
# CONFIG_AD2S90 is not set
# CONFIG_AD2S1200 is not set
# end of Resolver to digital converters

#
# Temperature sensors
#
# CONFIG_LTC2983 is not set
# CONFIG_MAXIM_THERMOCOUPLE is not set
# CONFIG_HID_SENSOR_TEMP is not set
# CONFIG_MLX90614 is not set
# CONFIG_MLX90632 is not set
# CONFIG_TMP006 is not set
# CONFIG_TMP007 is not set
# CONFIG_TSYS01 is not set
# CONFIG_TSYS02D is not set
# CONFIG_MAX31856 is not set
# end of Temperature sensors

CONFIG_NTB=m
# CONFIG_NTB_MSI is not set
CONFIG_NTB_AMD=m
# CONFIG_NTB_IDT is not set
# CONFIG_NTB_INTEL is not set
# CONFIG_NTB_SWITCHTEC is not set
# CONFIG_NTB_PINGPONG is not set
# CONFIG_NTB_TOOL is not set
CONFIG_NTB_PERF=m
CONFIG_NTB_TRANSPORT=m
# CONFIG_VME_BUS is not set
CONFIG_PWM=y
CONFIG_PWM_SYSFS=y
# CONFIG_PWM_DEBUG is not set
# CONFIG_PWM_LPSS_PCI is not set
# CONFIG_PWM_LPSS_PLATFORM is not set
# CONFIG_PWM_PCA9685 is not set

#
# IRQ chip support
#
# end of IRQ chip support

# CONFIG_IPACK_BUS is not set
# CONFIG_RESET_CONTROLLER is not set

#
# PHY Subsystem
#
CONFIG_GENERIC_PHY=y
# CONFIG_BCM_KONA_USB2_PHY is not set
# CONFIG_PHY_PXA_28NM_HSIC is not set
# CONFIG_PHY_PXA_28NM_USB2 is not set
# CONFIG_PHY_CPCAP_USB is not set
# CONFIG_PHY_INTEL_EMMC is not set
# end of PHY Subsystem

CONFIG_POWERCAP=y
CONFIG_INTEL_RAPL_CORE=m
CONFIG_INTEL_RAPL=m
# CONFIG_IDLE_INJECT is not set
# CONFIG_MCB is not set

#
# Performance monitor support
#
# end of Performance monitor support

CONFIG_RAS=y
# CONFIG_RAS_CEC is not set
# CONFIG_USB4 is not set

#
# Android
#
CONFIG_ANDROID=y
# CONFIG_ANDROID_BINDER_IPC is not set
# end of Android

CONFIG_LIBNVDIMM=m
CONFIG_BLK_DEV_PMEM=m
CONFIG_ND_BLK=m
CONFIG_ND_CLAIM=y
CONFIG_ND_BTT=m
CONFIG_BTT=y
CONFIG_ND_PFN=m
CONFIG_NVDIMM_PFN=y
CONFIG_NVDIMM_DAX=y
CONFIG_NVDIMM_KEYS=y
CONFIG_DAX_DRIVER=y
CONFIG_DAX=y
CONFIG_DEV_DAX=m
CONFIG_DEV_DAX_PMEM=m
CONFIG_DEV_DAX_KMEM=m
CONFIG_DEV_DAX_PMEM_COMPAT=m
CONFIG_NVMEM=y
CONFIG_NVMEM_SYSFS=y

#
# HW tracing support
#
# CONFIG_STM is not set
# CONFIG_INTEL_TH is not set
# end of HW tracing support

# CONFIG_FPGA is not set
# CONFIG_TEE is not set
CONFIG_PM_OPP=y
# CONFIG_UNISYS_VISORBUS is not set
# CONFIG_SIOX is not set
# CONFIG_SLIMBUS is not set
# CONFIG_INTERCONNECT is not set
# CONFIG_COUNTER is not set
# CONFIG_MOST is not set
# end of Device Drivers

#
# File systems
#
CONFIG_DCACHE_WORD_ACCESS=y
# CONFIG_VALIDATE_FS_PARSER is not set
CONFIG_FS_IOMAP=y
# CONFIG_EXT2_FS is not set
# CONFIG_EXT3_FS is not set
CONFIG_EXT4_FS=m
CONFIG_EXT4_USE_FOR_EXT2=y
CONFIG_EXT4_FS_POSIX_ACL=y
CONFIG_EXT4_FS_SECURITY=y
# CONFIG_EXT4_DEBUG is not set
CONFIG_JBD2=m
# CONFIG_JBD2_DEBUG is not set
CONFIG_FS_MBCACHE=m
# CONFIG_REISERFS_FS is not set
# CONFIG_JFS_FS is not set
CONFIG_XFS_FS=m
CONFIG_XFS_QUOTA=y
CONFIG_XFS_POSIX_ACL=y
# CONFIG_XFS_RT is not set
# CONFIG_XFS_ONLINE_SCRUB is not set
# CONFIG_XFS_WARN is not set
# CONFIG_XFS_DEBUG is not set
CONFIG_GFS2_FS=m
CONFIG_GFS2_FS_LOCKING_DLM=y
# CONFIG_OCFS2_FS is not set
CONFIG_BTRFS_FS=m
CONFIG_BTRFS_FS_POSIX_ACL=y
# CONFIG_BTRFS_FS_CHECK_INTEGRITY is not set
# CONFIG_BTRFS_FS_RUN_SANITY_TESTS is not set
# CONFIG_BTRFS_DEBUG is not set
# CONFIG_BTRFS_ASSERT is not set
# CONFIG_BTRFS_FS_REF_VERIFY is not set
# CONFIG_NILFS2_FS is not set
# CONFIG_F2FS_FS is not set
CONFIG_FS_DAX=y
CONFIG_FS_DAX_PMD=y
CONFIG_FS_POSIX_ACL=y
CONFIG_EXPORTFS=y
CONFIG_EXPORTFS_BLOCK_OPS=y
CONFIG_FILE_LOCKING=y
CONFIG_MANDATORY_FILE_LOCKING=y
# CONFIG_FS_ENCRYPTION is not set
# CONFIG_FS_VERITY is not set
CONFIG_FSNOTIFY=y
CONFIG_DNOTIFY=y
CONFIG_INOTIFY_USER=y
CONFIG_FANOTIFY=y
CONFIG_FANOTIFY_ACCESS_PERMISSIONS=y
CONFIG_QUOTA=y
CONFIG_QUOTA_NETLINK_INTERFACE=y
CONFIG_PRINT_QUOTA_WARNING=y
# CONFIG_QUOTA_DEBUG is not set
CONFIG_QUOTA_TREE=y
# CONFIG_QFMT_V1 is not set
CONFIG_QFMT_V2=y
CONFIG_QUOTACTL=y
CONFIG_QUOTACTL_COMPAT=y
CONFIG_AUTOFS4_FS=y
CONFIG_AUTOFS_FS=y
CONFIG_FUSE_FS=m
CONFIG_CUSE=m
# CONFIG_VIRTIO_FS is not set
CONFIG_OVERLAY_FS=m
# CONFIG_OVERLAY_FS_REDIRECT_DIR is not set
# CONFIG_OVERLAY_FS_REDIRECT_ALWAYS_FOLLOW is not set
# CONFIG_OVERLAY_FS_INDEX is not set
# CONFIG_OVERLAY_FS_XINO_AUTO is not set
# CONFIG_OVERLAY_FS_METACOPY is not set

#
# Caches
#
CONFIG_FSCACHE=m
CONFIG_FSCACHE_STATS=y
# CONFIG_FSCACHE_HISTOGRAM is not set
# CONFIG_FSCACHE_DEBUG is not set
# CONFIG_FSCACHE_OBJECT_LIST is not set
CONFIG_CACHEFILES=m
# CONFIG_CACHEFILES_DEBUG is not set
# CONFIG_CACHEFILES_HISTOGRAM is not set
# end of Caches

#
# CD-ROM/DVD Filesystems
#
CONFIG_ISO9660_FS=m
CONFIG_JOLIET=y
CONFIG_ZISOFS=y
CONFIG_UDF_FS=m
# end of CD-ROM/DVD Filesystems

#
# DOS/FAT/EXFAT/NT Filesystems
#
CONFIG_FAT_FS=m
CONFIG_MSDOS_FS=m
CONFIG_VFAT_FS=m
CONFIG_FAT_DEFAULT_CODEPAGE=437
CONFIG_FAT_DEFAULT_IOCHARSET="ascii"
# CONFIG_FAT_DEFAULT_UTF8 is not set
# CONFIG_EXFAT_FS is not set
# CONFIG_NTFS_FS is not set
# end of DOS/FAT/EXFAT/NT Filesystems

#
# Pseudo filesystems
#
CONFIG_PROC_FS=y
CONFIG_PROC_KCORE=y
CONFIG_PROC_VMCORE=y
# CONFIG_PROC_VMCORE_DEVICE_DUMP is not set
CONFIG_PROC_SYSCTL=y
CONFIG_PROC_PAGE_MONITOR=y
CONFIG_PROC_CHILDREN=y
CONFIG_PROC_PID_ARCH_STATUS=y
CONFIG_PROC_CPU_RESCTRL=y
CONFIG_KERNFS=y
CONFIG_SYSFS=y
CONFIG_TMPFS=y
CONFIG_TMPFS_POSIX_ACL=y
CONFIG_TMPFS_XATTR=y
CONFIG_HUGETLBFS=y
CONFIG_HUGETLB_PAGE=y
CONFIG_MEMFD_CREATE=y
CONFIG_ARCH_HAS_GIGANTIC_PAGE=y
CONFIG_CONFIGFS_FS=y
CONFIG_EFIVAR_FS=y
# end of Pseudo filesystems

CONFIG_MISC_FILESYSTEMS=y
# CONFIG_ORANGEFS_FS is not set
# CONFIG_ADFS_FS is not set
# CONFIG_AFFS_FS is not set
# CONFIG_ECRYPT_FS is not set
# CONFIG_HFS_FS is not set
# CONFIG_HFSPLUS_FS is not set
# CONFIG_BEFS_FS is not set
# CONFIG_BFS_FS is not set
# CONFIG_EFS_FS is not set
# CONFIG_JFFS2_FS is not set
# CONFIG_UBIFS_FS is not set
CONFIG_CRAMFS=m
CONFIG_CRAMFS_BLOCKDEV=y
# CONFIG_CRAMFS_MTD is not set
CONFIG_SQUASHFS=m
CONFIG_SQUASHFS_FILE_CACHE=y
# CONFIG_SQUASHFS_FILE_DIRECT is not set
CONFIG_SQUASHFS_DECOMP_SINGLE=y
# CONFIG_SQUASHFS_DECOMP_MULTI is not set
# CONFIG_SQUASHFS_DECOMP_MULTI_PERCPU is not set
CONFIG_SQUASHFS_XATTR=y
CONFIG_SQUASHFS_ZLIB=y
# CONFIG_SQUASHFS_LZ4 is not set
CONFIG_SQUASHFS_LZO=y
CONFIG_SQUASHFS_XZ=y
# CONFIG_SQUASHFS_ZSTD is not set
# CONFIG_SQUASHFS_4K_DEVBLK_SIZE is not set
# CONFIG_SQUASHFS_EMBEDDED is not set
CONFIG_SQUASHFS_FRAGMENT_CACHE_SIZE=3
# CONFIG_VXFS_FS is not set
# CONFIG_MINIX_FS is not set
# CONFIG_OMFS_FS is not set
# CONFIG_HPFS_FS is not set
# CONFIG_QNX4FS_FS is not set
# CONFIG_QNX6FS_FS is not set
# CONFIG_ROMFS_FS is not set
CONFIG_PSTORE=y
CONFIG_PSTORE_DEFLATE_COMPRESS=y
# CONFIG_PSTORE_LZO_COMPRESS is not set
# CONFIG_PSTORE_LZ4_COMPRESS is not set
# CONFIG_PSTORE_LZ4HC_COMPRESS is not set
# CONFIG_PSTORE_842_COMPRESS is not set
# CONFIG_PSTORE_ZSTD_COMPRESS is not set
CONFIG_PSTORE_COMPRESS=y
CONFIG_PSTORE_DEFLATE_COMPRESS_DEFAULT=y
CONFIG_PSTORE_COMPRESS_DEFAULT="deflate"
CONFIG_PSTORE_CONSOLE=y
CONFIG_PSTORE_PMSG=y
# CONFIG_PSTORE_FTRACE is not set
CONFIG_PSTORE_RAM=m
# CONFIG_PSTORE_BLK is not set
# CONFIG_SYSV_FS is not set
# CONFIG_UFS_FS is not set
# CONFIG_EROFS_FS is not set
CONFIG_NETWORK_FILESYSTEMS=y
CONFIG_NFS_FS=y
# CONFIG_NFS_V2 is not set
CONFIG_NFS_V3=y
CONFIG_NFS_V3_ACL=y
CONFIG_NFS_V4=m
# CONFIG_NFS_SWAP is not set
CONFIG_NFS_V4_1=y
CONFIG_NFS_V4_2=y
CONFIG_PNFS_FILE_LAYOUT=m
CONFIG_PNFS_BLOCK=m
CONFIG_PNFS_FLEXFILE_LAYOUT=m
CONFIG_NFS_V4_1_IMPLEMENTATION_ID_DOMAIN="kernel.org"
# CONFIG_NFS_V4_1_MIGRATION is not set
CONFIG_NFS_V4_SECURITY_LABEL=y
CONFIG_ROOT_NFS=y
# CONFIG_NFS_USE_LEGACY_DNS is not set
CONFIG_NFS_USE_KERNEL_DNS=y
CONFIG_NFS_DEBUG=y
CONFIG_NFS_DISABLE_UDP_SUPPORT=y
CONFIG_NFSD=m
CONFIG_NFSD_V2_ACL=y
CONFIG_NFSD_V3=y
CONFIG_NFSD_V3_ACL=y
CONFIG_NFSD_V4=y
CONFIG_NFSD_PNFS=y
# CONFIG_NFSD_BLOCKLAYOUT is not set
CONFIG_NFSD_SCSILAYOUT=y
# CONFIG_NFSD_FLEXFILELAYOUT is not set
# CONFIG_NFSD_V4_2_INTER_SSC is not set
CONFIG_NFSD_V4_SECURITY_LABEL=y
CONFIG_GRACE_PERIOD=y
CONFIG_LOCKD=y
CONFIG_LOCKD_V4=y
CONFIG_NFS_ACL_SUPPORT=y
CONFIG_NFS_COMMON=y
CONFIG_SUNRPC=y
CONFIG_SUNRPC_GSS=m
CONFIG_SUNRPC_BACKCHANNEL=y
CONFIG_RPCSEC_GSS_KRB5=m
# CONFIG_SUNRPC_DISABLE_INSECURE_ENCTYPES is not set
CONFIG_SUNRPC_DEBUG=y
CONFIG_CEPH_FS=m
# CONFIG_CEPH_FSCACHE is not set
CONFIG_CEPH_FS_POSIX_ACL=y
# CONFIG_CEPH_FS_SECURITY_LABEL is not set
CONFIG_CIFS=m
# CONFIG_CIFS_STATS2 is not set
CONFIG_CIFS_ALLOW_INSECURE_LEGACY=y
CONFIG_CIFS_WEAK_PW_HASH=y
CONFIG_CIFS_UPCALL=y
CONFIG_CIFS_XATTR=y
CONFIG_CIFS_POSIX=y
CONFIG_CIFS_DEBUG=y
# CONFIG_CIFS_DEBUG2 is not set
# CONFIG_CIFS_DEBUG_DUMP_KEYS is not set
CONFIG_CIFS_DFS_UPCALL=y
# CONFIG_CIFS_FSCACHE is not set
# CONFIG_CODA_FS is not set
# CONFIG_AFS_FS is not set
CONFIG_9P_FS=y
CONFIG_9P_FS_POSIX_ACL=y
# CONFIG_9P_FS_SECURITY is not set
CONFIG_NLS=y
CONFIG_NLS_DEFAULT="utf8"
CONFIG_NLS_CODEPAGE_437=y
CONFIG_NLS_CODEPAGE_737=m
CONFIG_NLS_CODEPAGE_775=m
CONFIG_NLS_CODEPAGE_850=m
CONFIG_NLS_CODEPAGE_852=m
CONFIG_NLS_CODEPAGE_855=m
CONFIG_NLS_CODEPAGE_857=m
CONFIG_NLS_CODEPAGE_860=m
CONFIG_NLS_CODEPAGE_861=m
CONFIG_NLS_CODEPAGE_862=m
CONFIG_NLS_CODEPAGE_863=m
CONFIG_NLS_CODEPAGE_864=m
CONFIG_NLS_CODEPAGE_865=m
CONFIG_NLS_CODEPAGE_866=m
CONFIG_NLS_CODEPAGE_869=m
CONFIG_NLS_CODEPAGE_936=m
CONFIG_NLS_CODEPAGE_950=m
CONFIG_NLS_CODEPAGE_932=m
CONFIG_NLS_CODEPAGE_949=m
CONFIG_NLS_CODEPAGE_874=m
CONFIG_NLS_ISO8859_8=m
CONFIG_NLS_CODEPAGE_1250=m
CONFIG_NLS_CODEPAGE_1251=m
CONFIG_NLS_ASCII=y
CONFIG_NLS_ISO8859_1=m
CONFIG_NLS_ISO8859_2=m
CONFIG_NLS_ISO8859_3=m
CONFIG_NLS_ISO8859_4=m
CONFIG_NLS_ISO8859_5=m
CONFIG_NLS_ISO8859_6=m
CONFIG_NLS_ISO8859_7=m
CONFIG_NLS_ISO8859_9=m
CONFIG_NLS_ISO8859_13=m
CONFIG_NLS_ISO8859_14=m
CONFIG_NLS_ISO8859_15=m
CONFIG_NLS_KOI8_R=m
CONFIG_NLS_KOI8_U=m
CONFIG_NLS_MAC_ROMAN=m
CONFIG_NLS_MAC_CELTIC=m
CONFIG_NLS_MAC_CENTEURO=m
CONFIG_NLS_MAC_CROATIAN=m
CONFIG_NLS_MAC_CYRILLIC=m
CONFIG_NLS_MAC_GAELIC=m
CONFIG_NLS_MAC_GREEK=m
CONFIG_NLS_MAC_ICELAND=m
CONFIG_NLS_MAC_INUIT=m
CONFIG_NLS_MAC_ROMANIAN=m
CONFIG_NLS_MAC_TURKISH=m
CONFIG_NLS_UTF8=m
CONFIG_DLM=m
CONFIG_DLM_DEBUG=y
# CONFIG_UNICODE is not set
CONFIG_IO_WQ=y
# end of File systems

#
# Security options
#
CONFIG_KEYS=y
# CONFIG_KEYS_REQUEST_CACHE is not set
CONFIG_PERSISTENT_KEYRINGS=y
CONFIG_TRUSTED_KEYS=y
CONFIG_ENCRYPTED_KEYS=y
# CONFIG_KEY_DH_OPERATIONS is not set
# CONFIG_SECURITY_DMESG_RESTRICT is not set
CONFIG_SECURITY=y
CONFIG_SECURITY_WRITABLE_HOOKS=y
CONFIG_SECURITYFS=y
CONFIG_SECURITY_NETWORK=y
CONFIG_PAGE_TABLE_ISOLATION=y
CONFIG_SECURITY_NETWORK_XFRM=y
CONFIG_SECURITY_PATH=y
CONFIG_INTEL_TXT=y
CONFIG_LSM_MMAP_MIN_ADDR=65535
CONFIG_HAVE_HARDENED_USERCOPY_ALLOCATOR=y
CONFIG_HARDENED_USERCOPY=y
CONFIG_HARDENED_USERCOPY_FALLBACK=y
# CONFIG_HARDENED_USERCOPY_PAGESPAN is not set
# CONFIG_FORTIFY_SOURCE is not set
# CONFIG_STATIC_USERMODEHELPER is not set
CONFIG_SECURITY_SELINUX=y
CONFIG_SECURITY_SELINUX_BOOTPARAM=y
CONFIG_SECURITY_SELINUX_DISABLE=y
CONFIG_SECURITY_SELINUX_DEVELOP=y
CONFIG_SECURITY_SELINUX_AVC_STATS=y
CONFIG_SECURITY_SELINUX_CHECKREQPROT_VALUE=1
CONFIG_SECURITY_SELINUX_SIDTAB_HASH_BITS=9
CONFIG_SECURITY_SELINUX_SID2STR_CACHE_SIZE=256
# CONFIG_SECURITY_SMACK is not set
# CONFIG_SECURITY_TOMOYO is not set
# CONFIG_SECURITY_APPARMOR is not set
# CONFIG_SECURITY_LOADPIN is not set
CONFIG_SECURITY_YAMA=y
# CONFIG_SECURITY_SAFESETID is not set
# CONFIG_SECURITY_LOCKDOWN_LSM is not set
CONFIG_INTEGRITY=y
CONFIG_INTEGRITY_SIGNATURE=y
CONFIG_INTEGRITY_ASYMMETRIC_KEYS=y
CONFIG_INTEGRITY_TRUSTED_KEYRING=y
# CONFIG_INTEGRITY_PLATFORM_KEYRING is not set
CONFIG_INTEGRITY_AUDIT=y
CONFIG_IMA=y
CONFIG_IMA_MEASURE_PCR_IDX=10
CONFIG_IMA_LSM_RULES=y
# CONFIG_IMA_TEMPLATE is not set
CONFIG_IMA_NG_TEMPLATE=y
# CONFIG_IMA_SIG_TEMPLATE is not set
CONFIG_IMA_DEFAULT_TEMPLATE="ima-ng"
CONFIG_IMA_DEFAULT_HASH_SHA1=y
# CONFIG_IMA_DEFAULT_HASH_SHA256 is not set
CONFIG_IMA_DEFAULT_HASH="sha1"
# CONFIG_IMA_WRITE_POLICY is not set
# CONFIG_IMA_READ_POLICY is not set
CONFIG_IMA_APPRAISE=y
CONFIG_IMA_ARCH_POLICY=y
# CONFIG_IMA_APPRAISE_BUILD_POLICY is not set
# CONFIG_IMA_APPRAISE_MODSIG is not set
CONFIG_IMA_TRUSTED_KEYRING=y
# CONFIG_IMA_BLACKLIST_KEYRING is not set
# CONFIG_IMA_LOAD_X509 is not set
CONFIG_IMA_MEASURE_ASYMMETRIC_KEYS=y
CONFIG_IMA_QUEUE_EARLY_BOOT_KEYS=y
CONFIG_IMA_SECURE_AND_OR_TRUSTED_BOOT=y
CONFIG_EVM=y
CONFIG_EVM_ATTR_FSUUID=y
# CONFIG_EVM_ADD_XATTRS is not set
# CONFIG_EVM_LOAD_X509 is not set
CONFIG_DEFAULT_SECURITY_SELINUX=y
# CONFIG_DEFAULT_SECURITY_DAC is not set
CONFIG_LSM="lockdown,yama,loadpin,safesetid,integrity,selinux,smack,tomoyo,apparmor,bpf"

#
# Kernel hardening options
#

#
# Memory initialization
#
CONFIG_INIT_STACK_NONE=y
# CONFIG_INIT_ON_ALLOC_DEFAULT_ON is not set
# CONFIG_INIT_ON_FREE_DEFAULT_ON is not set
# end of Memory initialization
# end of Kernel hardening options
# end of Security options

CONFIG_XOR_BLOCKS=m
CONFIG_ASYNC_CORE=m
CONFIG_ASYNC_MEMCPY=m
CONFIG_ASYNC_XOR=m
CONFIG_ASYNC_PQ=m
CONFIG_ASYNC_RAID6_RECOV=m
CONFIG_CRYPTO=y

#
# Crypto core or helper
#
CONFIG_CRYPTO_ALGAPI=y
CONFIG_CRYPTO_ALGAPI2=y
CONFIG_CRYPTO_AEAD=y
CONFIG_CRYPTO_AEAD2=y
CONFIG_CRYPTO_SKCIPHER=y
CONFIG_CRYPTO_SKCIPHER2=y
CONFIG_CRYPTO_HASH=y
CONFIG_CRYPTO_HASH2=y
CONFIG_CRYPTO_RNG=y
CONFIG_CRYPTO_RNG2=y
CONFIG_CRYPTO_RNG_DEFAULT=y
CONFIG_CRYPTO_AKCIPHER2=y
CONFIG_CRYPTO_AKCIPHER=y
CONFIG_CRYPTO_KPP2=y
CONFIG_CRYPTO_KPP=m
CONFIG_CRYPTO_ACOMP2=y
CONFIG_CRYPTO_MANAGER=y
CONFIG_CRYPTO_MANAGER2=y
CONFIG_CRYPTO_USER=m
CONFIG_CRYPTO_MANAGER_DISABLE_TESTS=y
CONFIG_CRYPTO_GF128MUL=y
CONFIG_CRYPTO_NULL=y
CONFIG_CRYPTO_NULL2=y
CONFIG_CRYPTO_PCRYPT=m
CONFIG_CRYPTO_CRYPTD=m
CONFIG_CRYPTO_AUTHENC=m
CONFIG_CRYPTO_TEST=m
CONFIG_CRYPTO_SIMD=m
CONFIG_CRYPTO_GLUE_HELPER_X86=m
CONFIG_CRYPTO_ENGINE=m

#
# Public-key cryptography
#
CONFIG_CRYPTO_RSA=y
CONFIG_CRYPTO_DH=m
CONFIG_CRYPTO_ECC=m
CONFIG_CRYPTO_ECDH=m
# CONFIG_CRYPTO_ECRDSA is not set
# CONFIG_CRYPTO_CURVE25519 is not set
# CONFIG_CRYPTO_CURVE25519_X86 is not set

#
# Authenticated Encryption with Associated Data
#
CONFIG_CRYPTO_CCM=m
CONFIG_CRYPTO_GCM=y
# CONFIG_CRYPTO_CHACHA20POLY1305 is not set
# CONFIG_CRYPTO_AEGIS128 is not set
# CONFIG_CRYPTO_AEGIS128_AESNI_SSE2 is not set
CONFIG_CRYPTO_SEQIV=y
CONFIG_CRYPTO_ECHAINIV=m

#
# Block modes
#
CONFIG_CRYPTO_CBC=y
# CONFIG_CRYPTO_CFB is not set
CONFIG_CRYPTO_CTR=y
CONFIG_CRYPTO_CTS=m
CONFIG_CRYPTO_ECB=y
CONFIG_CRYPTO_LRW=m
# CONFIG_CRYPTO_OFB is not set
CONFIG_CRYPTO_PCBC=m
CONFIG_CRYPTO_XTS=m
# CONFIG_CRYPTO_KEYWRAP is not set
# CONFIG_CRYPTO_NHPOLY1305_SSE2 is not set
# CONFIG_CRYPTO_NHPOLY1305_AVX2 is not set
# CONFIG_CRYPTO_ADIANTUM is not set
CONFIG_CRYPTO_ESSIV=m

#
# Hash modes
#
CONFIG_CRYPTO_CMAC=m
CONFIG_CRYPTO_HMAC=y
CONFIG_CRYPTO_XCBC=m
CONFIG_CRYPTO_VMAC=m

#
# Digest
#
CONFIG_CRYPTO_CRC32C=y
CONFIG_CRYPTO_CRC32C_INTEL=m
CONFIG_CRYPTO_CRC32=m
CONFIG_CRYPTO_CRC32_PCLMUL=m
CONFIG_CRYPTO_XXHASH=m
CONFIG_CRYPTO_BLAKE2B=m
# CONFIG_CRYPTO_BLAKE2S is not set
# CONFIG_CRYPTO_BLAKE2S_X86 is not set
CONFIG_CRYPTO_CRCT10DIF=y
CONFIG_CRYPTO_CRCT10DIF_PCLMUL=m
CONFIG_CRYPTO_GHASH=y
# CONFIG_CRYPTO_POLY1305 is not set
# CONFIG_CRYPTO_POLY1305_X86_64 is not set
CONFIG_CRYPTO_MD4=m
CONFIG_CRYPTO_MD5=y
CONFIG_CRYPTO_MICHAEL_MIC=m
CONFIG_CRYPTO_RMD128=m
CONFIG_CRYPTO_RMD160=m
CONFIG_CRYPTO_RMD256=m
CONFIG_CRYPTO_RMD320=m
CONFIG_CRYPTO_SHA1=y
CONFIG_CRYPTO_SHA1_SSSE3=y
CONFIG_CRYPTO_SHA256_SSSE3=y
CONFIG_CRYPTO_SHA512_SSSE3=m
CONFIG_CRYPTO_SHA256=y
CONFIG_CRYPTO_SHA512=m
# CONFIG_CRYPTO_SHA3 is not set
# CONFIG_CRYPTO_SM3 is not set
# CONFIG_CRYPTO_STREEBOG is not set
CONFIG_CRYPTO_TGR192=m
CONFIG_CRYPTO_WP512=m
CONFIG_CRYPTO_GHASH_CLMUL_NI_INTEL=m

#
# Ciphers
#
CONFIG_CRYPTO_AES=y
# CONFIG_CRYPTO_AES_TI is not set
CONFIG_CRYPTO_AES_NI_INTEL=m
CONFIG_CRYPTO_ANUBIS=m
CONFIG_CRYPTO_ARC4=m
CONFIG_CRYPTO_BLOWFISH=m
CONFIG_CRYPTO_BLOWFISH_COMMON=m
CONFIG_CRYPTO_BLOWFISH_X86_64=m
CONFIG_CRYPTO_CAMELLIA=m
CONFIG_CRYPTO_CAMELLIA_X86_64=m
CONFIG_CRYPTO_CAMELLIA_AESNI_AVX_X86_64=m
CONFIG_CRYPTO_CAMELLIA_AESNI_AVX2_X86_64=m
CONFIG_CRYPTO_CAST_COMMON=m
CONFIG_CRYPTO_CAST5=m
CONFIG_CRYPTO_CAST5_AVX_X86_64=m
CONFIG_CRYPTO_CAST6=m
CONFIG_CRYPTO_CAST6_AVX_X86_64=m
CONFIG_CRYPTO_DES=m
# CONFIG_CRYPTO_DES3_EDE_X86_64 is not set
CONFIG_CRYPTO_FCRYPT=m
CONFIG_CRYPTO_KHAZAD=m
CONFIG_CRYPTO_SALSA20=m
# CONFIG_CRYPTO_CHACHA20 is not set
# CONFIG_CRYPTO_CHACHA20_X86_64 is not set
CONFIG_CRYPTO_SEED=m
CONFIG_CRYPTO_SERPENT=m
CONFIG_CRYPTO_SERPENT_SSE2_X86_64=m
CONFIG_CRYPTO_SERPENT_AVX_X86_64=m
CONFIG_CRYPTO_SERPENT_AVX2_X86_64=m
# CONFIG_CRYPTO_SM4 is not set
CONFIG_CRYPTO_TEA=m
CONFIG_CRYPTO_TWOFISH=m
CONFIG_CRYPTO_TWOFISH_COMMON=m
CONFIG_CRYPTO_TWOFISH_X86_64=m
CONFIG_CRYPTO_TWOFISH_X86_64_3WAY=m
CONFIG_CRYPTO_TWOFISH_AVX_X86_64=m

#
# Compression
#
CONFIG_CRYPTO_DEFLATE=y
CONFIG_CRYPTO_LZO=y
# CONFIG_CRYPTO_842 is not set
# CONFIG_CRYPTO_LZ4 is not set
# CONFIG_CRYPTO_LZ4HC is not set
# CONFIG_CRYPTO_ZSTD is not set

#
# Random Number Generation
#
CONFIG_CRYPTO_ANSI_CPRNG=m
CONFIG_CRYPTO_DRBG_MENU=y
CONFIG_CRYPTO_DRBG_HMAC=y
CONFIG_CRYPTO_DRBG_HASH=y
CONFIG_CRYPTO_DRBG_CTR=y
CONFIG_CRYPTO_DRBG=y
CONFIG_CRYPTO_JITTERENTROPY=y
CONFIG_CRYPTO_USER_API=y
CONFIG_CRYPTO_USER_API_HASH=y
CONFIG_CRYPTO_USER_API_SKCIPHER=y
CONFIG_CRYPTO_USER_API_RNG=m
# CONFIG_CRYPTO_USER_API_AEAD is not set
# CONFIG_CRYPTO_STATS is not set
CONFIG_CRYPTO_HASH_INFO=y

#
# Crypto library routines
#
CONFIG_CRYPTO_LIB_AES=y
CONFIG_CRYPTO_LIB_ARC4=m
# CONFIG_CRYPTO_LIB_BLAKE2S is not set
# CONFIG_CRYPTO_LIB_CHACHA is not set
# CONFIG_CRYPTO_LIB_CURVE25519 is not set
CONFIG_CRYPTO_LIB_DES=m
CONFIG_CRYPTO_LIB_POLY1305_RSIZE=11
# CONFIG_CRYPTO_LIB_POLY1305 is not set
# CONFIG_CRYPTO_LIB_CHACHA20POLY1305 is not set
CONFIG_CRYPTO_LIB_SHA256=y
CONFIG_CRYPTO_HW=y
CONFIG_CRYPTO_DEV_PADLOCK=m
CONFIG_CRYPTO_DEV_PADLOCK_AES=m
CONFIG_CRYPTO_DEV_PADLOCK_SHA=m
# CONFIG_CRYPTO_DEV_ATMEL_ECC is not set
# CONFIG_CRYPTO_DEV_ATMEL_SHA204A is not set
CONFIG_CRYPTO_DEV_CCP=y
CONFIG_CRYPTO_DEV_CCP_DD=y
CONFIG_CRYPTO_DEV_SP_CCP=y
CONFIG_CRYPTO_DEV_CCP_CRYPTO=m
CONFIG_CRYPTO_DEV_SP_PSP=y
# CONFIG_CRYPTO_DEV_CCP_DEBUGFS is not set
CONFIG_CRYPTO_DEV_QAT=m
CONFIG_CRYPTO_DEV_QAT_DH895xCC=m
CONFIG_CRYPTO_DEV_QAT_C3XXX=m
CONFIG_CRYPTO_DEV_QAT_C62X=m
CONFIG_CRYPTO_DEV_QAT_DH895xCCVF=m
CONFIG_CRYPTO_DEV_QAT_C3XXXVF=m
CONFIG_CRYPTO_DEV_QAT_C62XVF=m
# CONFIG_CRYPTO_DEV_NITROX_CNN55XX is not set
CONFIG_CRYPTO_DEV_CHELSIO=m
CONFIG_CRYPTO_DEV_VIRTIO=m
# CONFIG_CRYPTO_DEV_SAFEXCEL is not set
# CONFIG_CRYPTO_DEV_AMLOGIC_GXL is not set
CONFIG_ASYMMETRIC_KEY_TYPE=y
CONFIG_ASYMMETRIC_PUBLIC_KEY_SUBTYPE=y
# CONFIG_ASYMMETRIC_TPM_KEY_SUBTYPE is not set
CONFIG_X509_CERTIFICATE_PARSER=y
# CONFIG_PKCS8_PRIVATE_KEY_PARSER is not set
CONFIG_PKCS7_MESSAGE_PARSER=y
# CONFIG_PKCS7_TEST_KEY is not set
CONFIG_SIGNED_PE_FILE_VERIFICATION=y

#
# Certificates for signature checking
#
CONFIG_MODULE_SIG_KEY="certs/signing_key.pem"
CONFIG_SYSTEM_TRUSTED_KEYRING=y
CONFIG_SYSTEM_TRUSTED_KEYS=""
# CONFIG_SYSTEM_EXTRA_CERTIFICATE is not set
# CONFIG_SECONDARY_TRUSTED_KEYRING is not set
CONFIG_SYSTEM_BLACKLIST_KEYRING=y
CONFIG_SYSTEM_BLACKLIST_HASH_LIST=""
# end of Certificates for signature checking

CONFIG_BINARY_PRINTF=y

#
# Library routines
#
CONFIG_RAID6_PQ=m
CONFIG_RAID6_PQ_BENCHMARK=y
# CONFIG_PACKING is not set
CONFIG_BITREVERSE=y
CONFIG_GENERIC_STRNCPY_FROM_USER=y
CONFIG_GENERIC_STRNLEN_USER=y
CONFIG_GENERIC_NET_UTILS=y
CONFIG_GENERIC_FIND_FIRST_BIT=y
CONFIG_CORDIC=m
CONFIG_PRIME_NUMBERS=m
CONFIG_RATIONAL=y
CONFIG_GENERIC_PCI_IOMAP=y
CONFIG_GENERIC_IOMAP=y
CONFIG_ARCH_USE_CMPXCHG_LOCKREF=y
CONFIG_ARCH_HAS_FAST_MULTIPLIER=y
CONFIG_ARCH_USE_SYM_ANNOTATIONS=y
CONFIG_CRC_CCITT=y
CONFIG_CRC16=y
CONFIG_CRC_T10DIF=y
CONFIG_CRC_ITU_T=m
CONFIG_CRC32=y
# CONFIG_CRC32_SELFTEST is not set
CONFIG_CRC32_SLICEBY8=y
# CONFIG_CRC32_SLICEBY4 is not set
# CONFIG_CRC32_SARWATE is not set
# CONFIG_CRC32_BIT is not set
# CONFIG_CRC64 is not set
# CONFIG_CRC4 is not set
# CONFIG_CRC7 is not set
CONFIG_LIBCRC32C=m
CONFIG_CRC8=m
CONFIG_XXHASH=y
# CONFIG_RANDOM32_SELFTEST is not set
CONFIG_ZLIB_INFLATE=y
CONFIG_ZLIB_DEFLATE=y
CONFIG_LZO_COMPRESS=y
CONFIG_LZO_DECOMPRESS=y
CONFIG_LZ4_DECOMPRESS=y
CONFIG_ZSTD_COMPRESS=m
CONFIG_ZSTD_DECOMPRESS=m
CONFIG_XZ_DEC=y
CONFIG_XZ_DEC_X86=y
CONFIG_XZ_DEC_POWERPC=y
CONFIG_XZ_DEC_IA64=y
CONFIG_XZ_DEC_ARM=y
CONFIG_XZ_DEC_ARMTHUMB=y
CONFIG_XZ_DEC_SPARC=y
CONFIG_XZ_DEC_BCJ=y
# CONFIG_XZ_DEC_TEST is not set
CONFIG_DECOMPRESS_GZIP=y
CONFIG_DECOMPRESS_BZIP2=y
CONFIG_DECOMPRESS_LZMA=y
CONFIG_DECOMPRESS_XZ=y
CONFIG_DECOMPRESS_LZO=y
CONFIG_DECOMPRESS_LZ4=y
CONFIG_GENERIC_ALLOCATOR=y
CONFIG_REED_SOLOMON=m
CONFIG_REED_SOLOMON_ENC8=y
CONFIG_REED_SOLOMON_DEC8=y
CONFIG_TEXTSEARCH=y
CONFIG_TEXTSEARCH_KMP=m
CONFIG_TEXTSEARCH_BM=m
CONFIG_TEXTSEARCH_FSM=m
CONFIG_BTREE=y
CONFIG_INTERVAL_TREE=y
CONFIG_XARRAY_MULTI=y
CONFIG_ASSOCIATIVE_ARRAY=y
CONFIG_HAS_IOMEM=y
CONFIG_HAS_IOPORT_MAP=y
CONFIG_HAS_DMA=y
CONFIG_NEED_SG_DMA_LENGTH=y
CONFIG_NEED_DMA_MAP_STATE=y
CONFIG_ARCH_DMA_ADDR_T_64BIT=y
CONFIG_ARCH_HAS_FORCE_DMA_UNENCRYPTED=y
CONFIG_SWIOTLB=y
CONFIG_DMA_NONCOHERENT_MMAP=y
CONFIG_DMA_REMAP=y
CONFIG_DMA_COHERENT_POOL=y
CONFIG_DMA_CMA=y

#
# Default contiguous memory area size:
#
CONFIG_CMA_SIZE_MBYTES=0
CONFIG_CMA_SIZE_SEL_MBYTES=y
# CONFIG_CMA_SIZE_SEL_PERCENTAGE is not set
# CONFIG_CMA_SIZE_SEL_MIN is not set
# CONFIG_CMA_SIZE_SEL_MAX is not set
CONFIG_CMA_ALIGNMENT=8
# CONFIG_DMA_API_DEBUG is not set
CONFIG_SGL_ALLOC=y
CONFIG_IOMMU_HELPER=y
CONFIG_CHECK_SIGNATURE=y
CONFIG_CPUMASK_OFFSTACK=y
CONFIG_CPU_RMAP=y
CONFIG_DQL=y
CONFIG_GLOB=y
# CONFIG_GLOB_SELFTEST is not set
CONFIG_NLATTR=y
CONFIG_CLZ_TAB=y
CONFIG_IRQ_POLL=y
CONFIG_MPILIB=y
CONFIG_SIGNATURE=y
CONFIG_DIMLIB=y
CONFIG_OID_REGISTRY=y
CONFIG_UCS2_STRING=y
CONFIG_HAVE_GENERIC_VDSO=y
CONFIG_GENERIC_GETTIMEOFDAY=y
CONFIG_GENERIC_VDSO_TIME_NS=y
CONFIG_FONT_SUPPORT=y
# CONFIG_FONTS is not set
CONFIG_FONT_8x8=y
CONFIG_FONT_8x16=y
CONFIG_SG_POOL=y
CONFIG_ARCH_HAS_PMEM_API=y
CONFIG_MEMREGION=y
CONFIG_ARCH_HAS_UACCESS_FLUSHCACHE=y
CONFIG_ARCH_HAS_UACCESS_MCSAFE=y
CONFIG_ARCH_STACKWALK=y
CONFIG_SBITMAP=y
# CONFIG_STRING_SELFTEST is not set
# end of Library routines

#
# Kernel hacking
#

#
# printk and dmesg options
#
CONFIG_PRINTK_TIME=y
# CONFIG_PRINTK_CALLER is not set
CONFIG_CONSOLE_LOGLEVEL_DEFAULT=7
CONFIG_CONSOLE_LOGLEVEL_QUIET=4
CONFIG_MESSAGE_LOGLEVEL_DEFAULT=4
CONFIG_BOOT_PRINTK_DELAY=y
CONFIG_DYNAMIC_DEBUG=y
CONFIG_DYNAMIC_DEBUG_CORE=y
CONFIG_SYMBOLIC_ERRNAME=y
CONFIG_DEBUG_BUGVERBOSE=y
# end of printk and dmesg options

#
# Compile-time checks and compiler options
#
CONFIG_DEBUG_INFO=y
# CONFIG_DEBUG_INFO_REDUCED is not set
# CONFIG_DEBUG_INFO_COMPRESSED is not set
# CONFIG_DEBUG_INFO_SPLIT is not set
# CONFIG_DEBUG_INFO_DWARF4 is not set
CONFIG_DEBUG_INFO_BTF=y
# CONFIG_GDB_SCRIPTS is not set
CONFIG_ENABLE_MUST_CHECK=y
CONFIG_FRAME_WARN=2048
CONFIG_STRIP_ASM_SYMS=y
# CONFIG_READABLE_ASM is not set
# CONFIG_HEADERS_INSTALL is not set
CONFIG_DEBUG_SECTION_MISMATCH=y
CONFIG_SECTION_MISMATCH_WARN_ONLY=y
CONFIG_STACK_VALIDATION=y
# CONFIG_DEBUG_FORCE_WEAK_PER_CPU is not set
# end of Compile-time checks and compiler options

#
# Generic Kernel Debugging Instruments
#
CONFIG_MAGIC_SYSRQ=y
CONFIG_MAGIC_SYSRQ_DEFAULT_ENABLE=0x1
CONFIG_MAGIC_SYSRQ_SERIAL=y
CONFIG_MAGIC_SYSRQ_SERIAL_SEQUENCE=""
CONFIG_DEBUG_FS=y
CONFIG_HAVE_ARCH_KGDB=y
# CONFIG_KGDB is not set
CONFIG_ARCH_HAS_UBSAN_SANITIZE_ALL=y
# CONFIG_UBSAN is not set
# end of Generic Kernel Debugging Instruments

CONFIG_DEBUG_KERNEL=y
CONFIG_DEBUG_MISC=y

#
# Memory Debugging
#
# CONFIG_PAGE_EXTENSION is not set
# CONFIG_DEBUG_PAGEALLOC is not set
# CONFIG_PAGE_OWNER is not set
# CONFIG_PAGE_POISONING is not set
# CONFIG_DEBUG_PAGE_REF is not set
CONFIG_DEBUG_RODATA_TEST=y
CONFIG_ARCH_HAS_DEBUG_WX=y
# CONFIG_DEBUG_WX is not set
CONFIG_GENERIC_PTDUMP=y
# CONFIG_PTDUMP_DEBUGFS is not set
# CONFIG_DEBUG_OBJECTS is not set
# CONFIG_SLUB_DEBUG_ON is not set
# CONFIG_SLUB_STATS is not set
CONFIG_HAVE_DEBUG_KMEMLEAK=y
# CONFIG_DEBUG_KMEMLEAK is not set
# CONFIG_DEBUG_STACK_USAGE is not set
# CONFIG_SCHED_STACK_END_CHECK is not set
CONFIG_ARCH_HAS_DEBUG_VM_PGTABLE=y
# CONFIG_DEBUG_VM is not set
# CONFIG_DEBUG_VM_PGTABLE is not set
CONFIG_ARCH_HAS_DEBUG_VIRTUAL=y
# CONFIG_DEBUG_VIRTUAL is not set
CONFIG_DEBUG_MEMORY_INIT=y
CONFIG_MEMORY_NOTIFIER_ERROR_INJECT=m
# CONFIG_DEBUG_PER_CPU_MAPS is not set
CONFIG_HAVE_ARCH_KASAN=y
CONFIG_HAVE_ARCH_KASAN_VMALLOC=y
CONFIG_CC_HAS_KASAN_GENERIC=y
# CONFIG_KASAN is not set
CONFIG_KASAN_STACK=1
# end of Memory Debugging

CONFIG_DEBUG_SHIRQ=y

#
# Debug Oops, Lockups and Hangs
#
CONFIG_PANIC_ON_OOPS=y
CONFIG_PANIC_ON_OOPS_VALUE=1
CONFIG_PANIC_TIMEOUT=0
CONFIG_LOCKUP_DETECTOR=y
CONFIG_SOFTLOCKUP_DETECTOR=y
# CONFIG_BOOTPARAM_SOFTLOCKUP_PANIC is not set
CONFIG_BOOTPARAM_SOFTLOCKUP_PANIC_VALUE=0
CONFIG_HARDLOCKUP_DETECTOR_PERF=y
CONFIG_HARDLOCKUP_CHECK_TIMESTAMP=y
CONFIG_HARDLOCKUP_DETECTOR=y
CONFIG_BOOTPARAM_HARDLOCKUP_PANIC=y
CONFIG_BOOTPARAM_HARDLOCKUP_PANIC_VALUE=1
# CONFIG_DETECT_HUNG_TASK is not set
# CONFIG_WQ_WATCHDOG is not set
# CONFIG_TEST_LOCKUP is not set
# end of Debug Oops, Lockups and Hangs

#
# Scheduler Debugging
#
CONFIG_SCHED_DEBUG=y
CONFIG_SCHED_INFO=y
CONFIG_SCHEDSTATS=y
# end of Scheduler Debugging

# CONFIG_DEBUG_TIMEKEEPING is not set
CONFIG_DEBUG_PREEMPT=y

#
# Lock Debugging (spinlocks, mutexes, etc...)
#
CONFIG_LOCK_DEBUGGING_SUPPORT=y
CONFIG_PROVE_LOCKING=y
# CONFIG_PROVE_RAW_LOCK_NESTING is not set
# CONFIG_LOCK_STAT is not set
CONFIG_DEBUG_RT_MUTEXES=y
CONFIG_DEBUG_SPINLOCK=y
CONFIG_DEBUG_MUTEXES=y
CONFIG_DEBUG_WW_MUTEX_SLOWPATH=y
CONFIG_DEBUG_RWSEMS=y
CONFIG_DEBUG_LOCK_ALLOC=y
CONFIG_LOCKDEP=y
# CONFIG_DEBUG_LOCKDEP is not set
CONFIG_DEBUG_ATOMIC_SLEEP=y
# CONFIG_DEBUG_LOCKING_API_SELFTESTS is not set
# CONFIG_LOCK_TORTURE_TEST is not set
CONFIG_WW_MUTEX_SELFTEST=m
# end of Lock Debugging (spinlocks, mutexes, etc...)

CONFIG_TRACE_IRQFLAGS=y
CONFIG_STACKTRACE=y
# CONFIG_WARN_ALL_UNSEEDED_RANDOM is not set
# CONFIG_DEBUG_KOBJECT is not set

#
# Debug kernel data structures
#
CONFIG_DEBUG_LIST=y
CONFIG_DEBUG_PLIST=y
# CONFIG_DEBUG_SG is not set
# CONFIG_DEBUG_NOTIFIERS is not set
# CONFIG_BUG_ON_DATA_CORRUPTION is not set
# end of Debug kernel data structures

# CONFIG_DEBUG_CREDENTIALS is not set

#
# RCU Debugging
#
CONFIG_PROVE_RCU=y
# CONFIG_RCU_PERF_TEST is not set
# CONFIG_RCU_TORTURE_TEST is not set
CONFIG_RCU_CPU_STALL_TIMEOUT=60
# CONFIG_RCU_TRACE is not set
# CONFIG_RCU_EQS_DEBUG is not set
# end of RCU Debugging

# CONFIG_DEBUG_WQ_FORCE_RR_CPU is not set
# CONFIG_DEBUG_BLOCK_EXT_DEVT is not set
# CONFIG_CPU_HOTPLUG_STATE_CONTROL is not set
CONFIG_LATENCYTOP=y
CONFIG_USER_STACKTRACE_SUPPORT=y
CONFIG_NOP_TRACER=y
CONFIG_HAVE_FUNCTION_TRACER=y
CONFIG_HAVE_FUNCTION_GRAPH_TRACER=y
CONFIG_HAVE_DYNAMIC_FTRACE=y
CONFIG_HAVE_DYNAMIC_FTRACE_WITH_REGS=y
CONFIG_HAVE_DYNAMIC_FTRACE_WITH_DIRECT_CALLS=y
CONFIG_HAVE_FTRACE_MCOUNT_RECORD=y
CONFIG_HAVE_SYSCALL_TRACEPOINTS=y
CONFIG_HAVE_FENTRY=y
CONFIG_HAVE_C_RECORDMCOUNT=y
CONFIG_TRACER_MAX_TRACE=y
CONFIG_TRACE_CLOCK=y
CONFIG_RING_BUFFER=y
CONFIG_EVENT_TRACING=y
CONFIG_CONTEXT_SWITCH_TRACER=y
CONFIG_RING_BUFFER_ALLOW_SWAP=y
CONFIG_PREEMPTIRQ_TRACEPOINTS=y
CONFIG_TRACING=y
CONFIG_GENERIC_TRACER=y
CONFIG_TRACING_SUPPORT=y
CONFIG_FTRACE=y
# CONFIG_BOOTTIME_TRACING is not set
CONFIG_FUNCTION_TRACER=y
CONFIG_FUNCTION_GRAPH_TRACER=y
CONFIG_DYNAMIC_FTRACE=y
CONFIG_DYNAMIC_FTRACE_WITH_REGS=y
CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS=y
CONFIG_FUNCTION_PROFILER=y
CONFIG_STACK_TRACER=y
CONFIG_TRACE_PREEMPT_TOGGLE=y
CONFIG_IRQSOFF_TRACER=y
CONFIG_PREEMPT_TRACER=y
CONFIG_SCHED_TRACER=y
CONFIG_HWLAT_TRACER=y
# CONFIG_MMIOTRACE is not set
CONFIG_FTRACE_SYSCALLS=y
CONFIG_TRACER_SNAPSHOT=y
CONFIG_TRACER_SNAPSHOT_PER_CPU_SWAP=y
CONFIG_BRANCH_PROFILE_NONE=y
# CONFIG_PROFILE_ANNOTATED_BRANCHES is not set
# CONFIG_PROFILE_ALL_BRANCHES is not set
CONFIG_BLK_DEV_IO_TRACE=y
CONFIG_KPROBE_EVENTS=y
# CONFIG_KPROBE_EVENTS_ON_NOTRACE is not set
CONFIG_UPROBE_EVENTS=y
CONFIG_BPF_EVENTS=y
CONFIG_DYNAMIC_EVENTS=y
CONFIG_PROBE_EVENTS=y
# CONFIG_BPF_KPROBE_OVERRIDE is not set
CONFIG_FTRACE_MCOUNT_RECORD=y
CONFIG_TRACING_MAP=y
CONFIG_SYNTH_EVENTS=y
CONFIG_HIST_TRIGGERS=y
# CONFIG_TRACE_EVENT_INJECT is not set
# CONFIG_TRACEPOINT_BENCHMARK is not set
CONFIG_RING_BUFFER_BENCHMARK=m
# CONFIG_TRACE_EVAL_MAP_FILE is not set
# CONFIG_FTRACE_STARTUP_TEST is not set
# CONFIG_RING_BUFFER_STARTUP_TEST is not set
CONFIG_PREEMPTIRQ_DELAY_TEST=m
# CONFIG_SYNTH_EVENT_GEN_TEST is not set
# CONFIG_KPROBE_EVENT_GEN_TEST is not set
# CONFIG_HIST_TRIGGERS_DEBUG is not set
CONFIG_PROVIDE_OHCI1394_DMA_INIT=y
CONFIG_SAMPLES=y
# CONFIG_SAMPLE_AUXDISPLAY is not set
# CONFIG_SAMPLE_TRACE_EVENTS is not set
CONFIG_SAMPLE_TRACE_PRINTK=m
CONFIG_SAMPLE_FTRACE_DIRECT=m
# CONFIG_SAMPLE_TRACE_ARRAY is not set
# CONFIG_SAMPLE_KOBJECT is not set
# CONFIG_SAMPLE_KPROBES is not set
# CONFIG_SAMPLE_HW_BREAKPOINT is not set
# CONFIG_SAMPLE_KFIFO is not set
# CONFIG_SAMPLE_LIVEPATCH is not set
# CONFIG_SAMPLE_CONFIGFS is not set
# CONFIG_SAMPLE_VFIO_MDEV_MTTY is not set
# CONFIG_SAMPLE_VFIO_MDEV_MDPY is not set
# CONFIG_SAMPLE_VFIO_MDEV_MDPY_FB is not set
# CONFIG_SAMPLE_VFIO_MDEV_MBOCHS is not set
# CONFIG_SAMPLE_WATCHDOG is not set
CONFIG_HAVE_ARCH_KCSAN=y
CONFIG_ARCH_HAS_DEVMEM_IS_ALLOWED=y
CONFIG_STRICT_DEVMEM=y
# CONFIG_IO_STRICT_DEVMEM is not set

#
# x86 Debugging
#
CONFIG_TRACE_IRQFLAGS_SUPPORT=y
CONFIG_EARLY_PRINTK_USB=y
CONFIG_X86_VERBOSE_BOOTUP=y
CONFIG_EARLY_PRINTK=y
CONFIG_EARLY_PRINTK_DBGP=y
# CONFIG_EARLY_PRINTK_USB_XDBC is not set
# CONFIG_EFI_PGT_DUMP is not set
# CONFIG_DEBUG_TLBFLUSH is not set
# CONFIG_IOMMU_DEBUG is not set
CONFIG_HAVE_MMIOTRACE_SUPPORT=y
CONFIG_X86_DECODER_SELFTEST=y
CONFIG_IO_DELAY_0X80=y
# CONFIG_IO_DELAY_0XED is not set
# CONFIG_IO_DELAY_UDELAY is not set
# CONFIG_IO_DELAY_NONE is not set
CONFIG_DEBUG_BOOT_PARAMS=y
# CONFIG_CPA_DEBUG is not set
# CONFIG_DEBUG_ENTRY is not set
# CONFIG_DEBUG_NMI_SELFTEST is not set
CONFIG_X86_DEBUG_FPU=y
# CONFIG_PUNIT_ATOM_DEBUG is not set
CONFIG_UNWINDER_ORC=y
# CONFIG_UNWINDER_FRAME_POINTER is not set
# CONFIG_UNWINDER_GUESS is not set
# end of x86 Debugging

#
# Kernel Testing and Coverage
#
# CONFIG_KUNIT is not set
CONFIG_NOTIFIER_ERROR_INJECTION=y
CONFIG_PM_NOTIFIER_ERROR_INJECT=m
# CONFIG_NETDEV_NOTIFIER_ERROR_INJECT is not set
CONFIG_FUNCTION_ERROR_INJECTION=y
# CONFIG_FAULT_INJECTION is not set
CONFIG_ARCH_HAS_KCOV=y
CONFIG_CC_HAS_SANCOV_TRACE_PC=y
# CONFIG_KCOV is not set
CONFIG_RUNTIME_TESTING_MENU=y
CONFIG_LKDTM=y
# CONFIG_TEST_LIST_SORT is not set
# CONFIG_TEST_MIN_HEAP is not set
# CONFIG_TEST_SORT is not set
# CONFIG_KPROBES_SANITY_TEST is not set
# CONFIG_BACKTRACE_SELF_TEST is not set
# CONFIG_RBTREE_TEST is not set
# CONFIG_REED_SOLOMON_TEST is not set
# CONFIG_INTERVAL_TREE_TEST is not set
# CONFIG_PERCPU_TEST is not set
CONFIG_ATOMIC64_SELFTEST=y
# CONFIG_ASYNC_RAID6_TEST is not set
# CONFIG_TEST_HEXDUMP is not set
# CONFIG_TEST_STRING_HELPERS is not set
CONFIG_TEST_STRSCPY=m
# CONFIG_TEST_KSTRTOX is not set
CONFIG_TEST_PRINTF=m
CONFIG_TEST_BITMAP=m
# CONFIG_TEST_BITFIELD is not set
# CONFIG_TEST_UUID is not set
# CONFIG_TEST_XARRAY is not set
# CONFIG_TEST_OVERFLOW is not set
# CONFIG_TEST_RHASHTABLE is not set
# CONFIG_TEST_HASH is not set
# CONFIG_TEST_IDA is not set
CONFIG_TEST_LKM=m
# CONFIG_TEST_BITOPS is not set
CONFIG_TEST_VMALLOC=m
CONFIG_TEST_USER_COPY=m
CONFIG_TEST_BPF=m
CONFIG_TEST_BLACKHOLE_DEV=m
# CONFIG_FIND_BIT_BENCHMARK is not set
CONFIG_TEST_FIRMWARE=m
CONFIG_TEST_SYSCTL=y
# CONFIG_TEST_UDELAY is not set
CONFIG_TEST_STATIC_KEYS=m
CONFIG_TEST_KMOD=m
# CONFIG_TEST_MEMCAT_P is not set
CONFIG_TEST_LIVEPATCH=m
# CONFIG_TEST_STACKINIT is not set
# CONFIG_TEST_MEMINIT is not set
# CONFIG_MEMTEST is not set
# CONFIG_HYPERV_TESTING is not set
# end of Kernel Testing and Coverage
# end of Kernel hacking

--fKov5AqTsvseSZ0Z
Content-Type: text/plain; charset=us-ascii
Content-Disposition: attachment; filename=job-script

#!/bin/sh

export_top_env()
{
	export suite='boot'
	export testcase='boot'
	export category='functional'
	export timeout='10m'
	export job_origin='/lkp-src/jobs/boot.yaml'
	export queue_cmdline_keys='branch
commit
queue_at_least_once'
	export queue='validate'
	export testbox='vm-snb-ssd-14'
	export tbox_group='vm-snb-ssd'
	export branch='linux-review/Lukas-Wunner/PCI-pciehp-Fix-AB-BA-deadlock-between-reset_lock-and-device_lock/20200721-192848'
	export commit='3233e41d3e8ebcd44e92da47ffed97fd49b84278'
	export kconfig='x86_64-rhel-7.6-kselftests'
	export repeat_to=4
	export nr_vm=64
	export submit_id='5f1773f69a967a258ec38213'
	export job_file='/lkp/jobs/scheduled/vm-snb-ssd-14/boot-1-aliyun-x86_64-20190626.cgz-3233e41d3e8ebcd44e92da47ffed97fd49b84278-20200722-9614-45avv-2.yaml'
	export id='b7e043a4a8afe6bc7d506aaf3bf9583d3f2a4e1b'
	export queuer_version='/lkp-src'
	export model='qemu-system-x86_64 -enable-kvm -cpu SandyBridge'
	export nr_cpu=2
	export memory='16G'
	export disk_type='virtio-scsi'
	export ssd_partitions='/dev/sda /dev/sdb /dev/sdc /dev/sdd'
	export hdd_partitions='/dev/sde /dev/sdf /dev/sdg /dev/sdh /dev/sdi /dev/sdj'
	export swap_partitions='/dev/sdk'
	export ssh_base_port=33000
	export need_kconfig='CONFIG_KVM_GUEST=y'
	export rootfs='aliyun-x86_64-20190626.cgz'
	export compiler='gcc-9'
	export enqueue_time='2020-07-22 07:02:15 +0800'
	export _id='5f1773f69a967a258ec38213'
	export _rt='/result/boot/1/vm-snb-ssd/aliyun-x86_64-20190626.cgz/x86_64-rhel-7.6-kselftests/gcc-9/3233e41d3e8ebcd44e92da47ffed97fd49b84278'
	export user='lkp'
	export result_root='/result/boot/1/vm-snb-ssd/aliyun-x86_64-20190626.cgz/x86_64-rhel-7.6-kselftests/gcc-9/3233e41d3e8ebcd44e92da47ffed97fd49b84278/3'
	export scheduler_version='/lkp/lkp/.src-20200721-154228'
	export LKP_SERVER='inn'
	export arch='x86_64'
	export max_uptime=600
	export initrd='/osimage/aliyun/aliyun-x86_64-20190626.cgz'
	export bootloader_append='root=/dev/ram0
user=lkp
job=/lkp/jobs/scheduled/vm-snb-ssd-14/boot-1-aliyun-x86_64-20190626.cgz-3233e41d3e8ebcd44e92da47ffed97fd49b84278-20200722-9614-45avv-2.yaml
ARCH=x86_64
kconfig=x86_64-rhel-7.6-kselftests
branch=linux-review/Lukas-Wunner/PCI-pciehp-Fix-AB-BA-deadlock-between-reset_lock-and-device_lock/20200721-192848
commit=3233e41d3e8ebcd44e92da47ffed97fd49b84278
BOOT_IMAGE=/pkg/linux/x86_64-rhel-7.6-kselftests/gcc-9/3233e41d3e8ebcd44e92da47ffed97fd49b84278/vmlinuz-5.8.0-rc1-00053-g3233e41d3e8eb
max_uptime=600
RESULT_ROOT=/result/boot/1/vm-snb-ssd/aliyun-x86_64-20190626.cgz/x86_64-rhel-7.6-kselftests/gcc-9/3233e41d3e8ebcd44e92da47ffed97fd49b84278/3
LKP_SERVER=inn
selinux=0
debug
apic=debug
sysrq_always_enabled
rcupdate.rcu_cpu_stall_timeout=100
net.ifnames=0
printk.devkmsg=on
panic=-1
softlockup_panic=1
nmi_watchdog=panic
oops=panic
load_ramdisk=2
prompt_ramdisk=0
drbd.minor_count=8
systemd.log_level=err
ignore_loglevel
console=tty0
earlyprintk=ttyS0,115200
console=ttyS0,115200
vga=normal
rw'
	export modules_initrd='/pkg/linux/x86_64-rhel-7.6-kselftests/gcc-9/3233e41d3e8ebcd44e92da47ffed97fd49b84278/modules.cgz'
	export lkp_initrd='/osimage/user/lkp/lkp-x86_64.cgz'
	export site='inn'
	export LKP_CGI_PORT=80
	export LKP_CIFS_PORT=139
	export schedule_notify_address=
	export queue_at_least_once=1
	export kernel='/pkg/linux/x86_64-rhel-7.6-kselftests/gcc-9/3233e41d3e8ebcd44e92da47ffed97fd49b84278/vmlinuz-5.8.0-rc1-00053-g3233e41d3e8eb'
	export dequeue_time='2020-07-22 07:02:34 +0800'
	export job_initrd='/lkp/jobs/scheduled/vm-snb-ssd-14/boot-1-aliyun-x86_64-20190626.cgz-3233e41d3e8ebcd44e92da47ffed97fd49b84278-20200722-9614-45avv-2.cgz'

	[ -n "$LKP_SRC" ] ||
	export LKP_SRC=/lkp/${user:-lkp}/src
}

run_job()
{
	echo $$ > $TMP/run-job.pid

	. $LKP_SRC/lib/http.sh
	. $LKP_SRC/lib/job.sh
	. $LKP_SRC/lib/env.sh

	export_top_env

	run_monitor $LKP_SRC/monitors/one-shot/wrapper boot-slabinfo
	run_monitor $LKP_SRC/monitors/one-shot/wrapper boot-meminfo
	run_monitor $LKP_SRC/monitors/one-shot/wrapper memmap
	run_monitor $LKP_SRC/monitors/no-stdout/wrapper boot-time
	run_monitor $LKP_SRC/monitors/wrapper kmsg
	run_monitor $LKP_SRC/monitors/wrapper heartbeat
	run_monitor $LKP_SRC/monitors/wrapper meminfo
	run_monitor $LKP_SRC/monitors/wrapper oom-killer
	run_monitor $LKP_SRC/monitors/plain/watchdog

	run_test $LKP_SRC/tests/wrapper sleep 1
}

extract_stats()
{
	export stats_part_begin=
	export stats_part_end=

	$LKP_SRC/stats/wrapper boot-slabinfo
	$LKP_SRC/stats/wrapper boot-meminfo
	$LKP_SRC/stats/wrapper memmap
	$LKP_SRC/stats/wrapper boot-memory
	$LKP_SRC/stats/wrapper boot-time
	$LKP_SRC/stats/wrapper kernel-size
	$LKP_SRC/stats/wrapper kmsg
	$LKP_SRC/stats/wrapper sleep
	$LKP_SRC/stats/wrapper meminfo

	$LKP_SRC/stats/wrapper time sleep.time
	$LKP_SRC/stats/wrapper dmesg
	$LKP_SRC/stats/wrapper kmsg
	$LKP_SRC/stats/wrapper last_state
	$LKP_SRC/stats/wrapper stderr
	$LKP_SRC/stats/wrapper time
}

"$@"

--fKov5AqTsvseSZ0Z
Content-Type: application/x-xz
Content-Disposition: attachment; filename="dmesg.xz"
Content-Transfer-Encoding: base64

[truncated base64-encoded binary attachment omitted]
Q4CaoxmkWSI66CGetc26/lxnmsK63LAQpHCSeBdjvBnv7ReAjK5y3Ox3mh1ij9zLHsNwfmN4
i3wh5ab4H0aOFM+TZehQWj1mjmBaZUxmSvQ7zOVjW6uokHe1bkamdab58avnm8O1WMB9h7be
+WjATR1JAY0tTqy1Woru9G5FoBqW1WDm8PTH9QBbAQKOmEymbGIifFyu+ofsjgTwVUxR74M9
6swpTJiBExeY0FTgjJ33+7H4e+SLYL1Ac9qDOT4Qk61TMD2t+l1agySn+UeR7TbbS71w4a6N
sPGFc/CBIB7IXAqEaALil6uSnfB6keTPGAFzWf9nsTYc+7xlaeh27UDGAv2Z5NWeRK2OLkW7
Pv3KkTwxo2GHD+2SghF5ZQsXDzwe5O6crq80Gq8l5xYxuW7qynmKLN0spWtCsk0RzYnlEZVC
Xa4YmA6CsvvmdrlYgimKvuvnU/5VK8iVDabGAOGgjGmwwsR9LNgigDh1rw4HZULsgbZusJmZ
4RZJEg3vQAEB3+ndyZTUg+xogqx2dh/DheAEzgMBiFbnc1AW91bwWGr3oHh9gessletk0qXL
a+rR+g11F4IjiIlxFepW/GiqJx9d/sPeNVe2j0Vd+JATOz/VfUMEej2fkH7/p4FLji+fxcrd
u/i8Mm7PdCQ20fSKV5KMVLT8QY0FV2g22afWf6vVnoIBGjqMMFuZbs5Wl5E4jEFpX0FkPnzp
kZlMAC9FOt3r9G3/tdsvLrPFk6VwUn2JOEbXxQjnCFF7YOvmrWp5UIo0g/Xjej/qcmKPF8YQ
rAINlUq72R5CEeQhB+xbH9lFD/hNGWz0RCk/ZMY929TCtbMnXXw2SGo1EnFWJv6kCZrJXMGV
9TLUnZ7kbjX6Nw21Ics+mQLVP7JfeTsQv251UQAG60pWLKbRHgPYhhbzZs0TvU3jRVr/kYKH
AXLVR0S/Q7X7rUZLbY6ZQqAdxJG3cm/S+8RyKvSMIA60S9YnqdsKDpq0G1FJoK3hFE97K8xC
uRYhdsD5wdSYl/vOxkFnUPQKN7vqH8TeAJxI2Cs5H9C6y53dl0Y7mfcC4TdtCTg8HRaKMuFp
hm/yVQP5TcWGKNV0JEvlrUgTdi1LA/skyYXwjdKM/8pdGnYAhxpFwQlvGAMAktNbYNAmoabI
HBF28/MZqHWlZ8qAuPbpT1LwyD2+RWwI8IZVtU3LeMqYnbQjq7FOJjNJpahfhe/VBwRVpEiK
vJ/JLF4gEsYeOjs7f13HriCzPs0KbiD1pQeQvuVWUKZl7l6WLF8sNe35OaMxBs2AtuMW56Nr
9Rg/izOSHHKB3RhMEHJcIxzd6xLAXsSpLXAr3Vg13BlJU/6JUtLgg43mTFbO51Sjfy8cLVx+
Q63AThvGxDEMNM8f+2TOpQ7ZMOJ6QQJCfwpqdYAl0y/golzJdVOpZjNKxdAPpGsQeZ7GG4JZ
atB6wko/KMdsTPn82xhJDKyqcrDFS1hDIe1Cc5XGoG7B+XHAVImmOHnKir1KnSOlMsM8RP0J
9i+D1zLWuXAGGUATBs+wtyR/4XXhb/PzqfRM0B7LSM+xet7eINESKVII9i83zXeOpagfb3oz
OxSrOYBLYLsVBVDolecj0xDTF5FIM+uUG3SwhXYQk6VL43dR6/QX/L03oImj4otdD8ABEVnv
rn+e/T4LL8jynewpVwToSobrOePdHWPqqcTxql4m4Yjz7uP6CrIYI+zID/Nl1AVBl/gbzPrD
aW4keYYY1gnRIrMopD5yKNqRRkTTz8mtxw/pf1rVZirDIYjB8jzjJcXQDcsLOMqtLaKt//kq
gP3FZuqTCeLtm0gPOq223DLysUnt3J8ioMyCVbMnh/rcinbZzxk/cm2hoNj+fyMfnUkJN7/C
FZ7hu3FEFIq+ijPrufyfgiADryIrOkt616Xfxq1JBcqqhRXbC64diMuI6gdOZwNLFWSZIx6p
tYIgccL3a1YrbL61dibKqigu59xoQ56YD42Ms8PjAyP2JYSzjWj5icBnIN2ZO2SubgsR/MeX
Vtz9LJqt1wj5XmeTpaswRuB8AFXcjsQi9/6c3HDpCgDX7HPM7DoE4vg9lONKutuG5iAmv8Zf
m4Blh6xD+p/woFL4bRh4oyZFjxRS+Vh3L9oz0E8PaDLnxJH67vPYKhhIOXkxPU/mFI45WszU
ZA9wM5FxMESJU0WCUee71D22afsFlKvcmsE+cwGrRWE4xlXCdDdnh6BAOqm0yL/GyByUgHmA
3NS84serPBFMVYmhYw/39pF7R+nf433fCRCtePwtNZ3+8KpXZ4jL8BLPiYCYALA7X2klxUwK
WJgNnoxwfEXiLBifOBC6c11nPoxBSTfyaA9RuObUVtq9hV9O3sxBc7QDVywVUfYKBzQUpknU
D8D1NekygzFCHxA01Ouu7F8/Tf+cYQDBPUzfxjIamN+xoooNaI1CNw5JVwoxOS43chwK8Jov
UeBMNZa473wrbgeBJ7JgN2FcOdXlceFSqmhfKkNbsi5ITNHZeKCrbZvQIEryxbgfFJIjJpk1
7syEj5wCQQ9IO0Rk4OUImxIx3QxtBayf0yP1eFsrgTUj88PleaY+LetSFss5lI9aztR0c05x
/MbDZSxZUvNrDUEd+uYs5p4YqdPBu8jr1e7UHgRvh8d5YJKFzizMw3bb+mkjnCBTorTuEuJo
JbwBIoVrKRIwiuHipKkcV26wVzDSjYFol8HGVWTU7riXuNAI04drZCgp9rjMntHFhHp5zBH+
79S+GRHEnNDqIZVOiBKtauhG9Sa5p02bPM/uHRO1xe/8gGAeiU+Em6FBOCNN0iiCPNrDNywg
jIzcRwrh7qA7+Zm3iBGIpVkroGyJCtzR89SCX4mistu8VZI9V7OJEggHw+LSUg0DS+m9/EhQ
BVBqkcD7PzosIvjMpXWaYdCZhQCuzy8SolP5nttp14qnxJwkc23yTwG01LEwghyXU6h54Au6
S+0SgDiB3R8CmTC9hfnsOHC5hhB2ISqQgp4PbcvZGhyj2XOYtR0+KY8D/FBYrrAEtMAfrNjK
ajKCO3zpBIO3cw2AsYdSAbLVvPXgAwr5Asd08LAhz2YSLJNSctZZuRt9BO3a0LWGOGQQt6sq
sPVXfAB3rllORrlG09wenEvHHIO8cH9rVuDkX5EY7cpYCjvipe7dGGGnrlpFcLpEcchnp3rP
iZTVJgP/nkCBxwr5BLSYXrJJUa1UhYFQw5p8c2Qi/pABiP5NS8gCXWxFz/JHHvZA5xDZk7QD
vSjR3++zfIhSQrSWRsIwPcirwFwy4AGf4e7ly1V65leFOM8tNDRwK3+7p9WZDtKkpmUUHd1C
xnsA9gNJWR4PJ0yCeRL4fQWZu9ZKPPgWap6SzOFNJ31W5esyCIT0CV/B3KRPeuY5+OPNjf/w
YveTrzzKb9wLT2l+0qsw9PSMyuVd3CWk4oxZIqn128XdHTaH8pI0Zs7kF20LEJW3+nACL0+g
RfKg2lQGsOHfppm47RR9fxhj+fU625GXMWI1Nz8sC0IVRfxm6alIEd73YMEfnBwaN2dxUQCk
mn7NWcWb/N7Zk4UVayEAjqkIoDCzQXRoik7Zi/CYxBqnXUf2RwrCE5uXILpRKO/sHwpb/op3
EzvN+/DsJWHa784IOZ5FgPLujPFg1IVtcRv7UX5K9aTDkbKT3+ACyVyTYuV+fS7qR4CTT8um
8MqLqmVK6BGqOJkYDJD9KZuX9UvZSC/K74eDGrvKdlN2buBsWa8IVtkIwU2H2QNC5DwIufSw
BpiMqVI7QnknHoCdjnQiB+A0fbvORyqg/ff5QsOMWkxlXu0IKUyeVsNegg5s79WTFst+7VpL
ZDKUG3jLHnhgdGqDy+oZUlCAt07uB9DPbtxKrewRX+2G6bpNeV7QfLPsxUNVmFgdCiHwA22y
iHEXi8vdnU+Sry9Q8YBC5RonKUej2m5L01p8SfL/pmsN0p7J4rcJmpIKtqSBsxCc9dVJEJDm
HG7kPtLaV1q4OXMpkwoecfWnb046nFLT2p0s8HAjxTxuDczBFQzGvkOc4pgFFNi32y4+WT7E
TEIbB+6mzgGZOhGgioGZCnHOZZxv5mkoc7l8S9rBYQXNMb/627z4vrNGVvcIOyqpNxbndhrO
RJjNOaJgNdqGPAZ3ItP0bI4+SVPBz/t51/qJ7ylB/rtpNsCxdDjfdbKjk2DwaV0O4Dv60Mfa
wpY08uFLWZPlroEFyOsDQtXcQwcu/U26lm1uD7MqKcZZLtkhEaKsrhNW/vHQkQOXnW3pX5/s
ouFApvoZv+Zmk/7kTRSfWQyEbCSEqAOwS3AthvY76bcQ8+/3mVBRdIgqzz2ZF92RqkJPKj9d
49QC1s/nIqPTo1Ffl33ATUCOCQ4ny/vgOj9n19BD5TN8lxfZHTbHhLiVJ7jfttKcvx0Ofx/m
k40KRjdp7/glSJwYOSBvDL7aHUw1boX7J7H3uYk5t5svfty/vVbEIZjogAuo73rzhu53IJqC
TtUnDZn7xySbSMSlktuwy7xVC2eYcAbBOqImr2vJYXtr2wk1T97E/Fy2UWi7IzP2r+lkz4n4
x4mETKcvTt6CW3h75HWTymi4CKNrWs9kkFo7Zz7oiuoBaGkiJ7L88jYBa9U1/cHcOhgW0ZGA
LJOC+zszEv/6Djvw7KRQrxS+z4DPYjlJo4YJnzYynttVZw9O2eT8h6sLzPXujg2KT7EnANLJ
mKKL4c/o0ZwgF2yjp6kSN0O0+l46VFLsozoCUZ7IRd0NKC+b7/6GukPq2kco/WsI08wJ40cc
d12fedY9Q/RyjaCp7IrFAiWNervHxE8P9Zpvd1lIUfdr2+iAVTBTKMnSB51EIWNLgAumd1Qi
TqKVO/xEjWPyrRHfXuakS+y60qt5hgRe0HZfYDne+rgj0PhUp0J0TI5+W3+eMiI9Y8JP0k1e
ycPoi4yrJAwoGNVpQ6rFPOfy9ANMIcLCrjNnRk3u79dGsY22nzIlVKmOAR07ORnBfquetn9u
0qTxyWfm9HNCYSGI+Xip41MUVLOiJeoSgZuOKgvLk/iT8KTOXaSB5ApJH9hRQGz/aoLUvvBn
ixVQTMOA1g3irJDmCG7gaUJ4GQpy2HAFOlAkHjX2fDY/nKC97MScSUU53lSJdJan7x11abnG
fTmK29K/fBs+q1XY+AgODg1NhCNSXFBL7PyhZEcvTIJRUvoPa8xMWC3IJd6j5WrIZ1j/D7eW
JAB392hcgDBNtFwrgdT3P5hAd/H9xRKic9ezgXTrqFnXRr7rvbkGN+4zVul9gFDfHhEDLcbM
px8hkA/l1oq0dy/CsbJ/BNDL66Skh957KLfco8aeXPx+6xtY6QxFSLa5C86IDgeKtYOQEkLM
/5IkygbuqeEHFU9kCDZVrdGPX197GJWQbSBmd3BjOyvzfHx0YC+JbskXchY+uhBf1mrwEzEN
lRNHFvBzDscRdSBbnCWQJPzJiwn77wy5d9OB0IXmYiw4B5tX45MiO1NZ4qhYOjvtHd26Nuxg
bM0RH7hIfLPuJMAoSqvV9kfxAyrgz/gOAfWt3XujZvxxVaA5zWlJLeOy37FveqTfniQAfYEy
a98y4dnIv+GXnHAIkXTXMgtXY8N2C6RGT4FmatfpK1f9OpGSGUXtkMnebE8OIbIeWREr6MLf
y+eC/z2UO4u+jWukb5LC5D1FMX7ZwDl3E7sAMMqpHOWQzXdtR/+TM6D/YqfY+E5zNC4reIwz
fzFk0CJI6gtbkqQxJ9wMj4ffxp9mkndkzLkMYw6wd5QrjC2afxp+RSaRs1cDu3jBwO5lVBil
wE1WpPPCbMewk/MrEsjNbCeNtlJ+BD/Et/0JA73K72+vpOSkAZDRhmJurL/l7yH8Rw5IF2pC
DaFeuXcihYK0ppjpDw9SY2uwZxjyoGGh5/rGbVQ5SalB3oODst3vTpHAfc5g+4ILBVJ8h5mC
vu2nlA2POXIZs7w5mYe2oIxuFa6q5hoDjukEBUosDAqp46j4xHZ90V5o6aFNX5zPvtEhwlWZ
RIDMEtVFezkd+SrkgCh7XDkSbKdnF0MjbxWqTRC01mgAw3TYVNbxYg4YYZ+3kQbINlCov4Ej
DV642EMfstLIa6npOXCWvSVT3CktejNk3aVXuanpv2rZvO97DFCsJWI+MLc8Oq++M/Aq+Y7Q
8jlNveXGouroCkJ1D8Ad1oqqEIN54Fuqrwi+5veCWAuZRl6aZJ/Po+H0D3Wlh9vLEFLeMoqi
bc5ROYwizHA7uP11qjsqnyQg/pVJ2f0kbxLtqF5GtyjrAcP/tQtsogutO3l7rpeFlv+ms/nN
tOQsDsQVSdAYpqDDxBqbkx46rhrgLS80UEKsbrDbupbzejQ4G8GnpRna++X9EpLwApisA6Xy
K/VYpmUQ/PrvEufWCAeEcyGLxb+qoDRDJhlylvHZOKKbHj8xFyB/HmdUmPVAyw3illqJYfIq
XyxMz/STk8gRYu8m+IHhxyqH/F5PkkevWkEwmhhb2slwgDYw5vKoU7ihXw2rVZzjKNQtlAAh
dbUX3r7zXlz2Nwns0tj3VzshNQLntSMhFlM+DhKrLRVW0updvBfNeznWSJ+CcSk3HYRUHspw
wJYhvIoZEOnVExb5pnOW/R7VUoMTNf3r73D1hxEcmuRpCJm7R45IkApeZkm4mu+UfECEuiAZ
6DP5l7eT/xzGiVKmFIhh6uzpqojl11Qd0sAUgg2pYcCdV4fu1j8EUYTDgCtS+YSaC8NGeiy9
Z/XEaZO8IxwU0M2quio3WKhj8Lrbuzzzw0fy0oiYsUheEju3opzt7s69oDL1AKMGp6VPaWD9
oIvFkTsoErkB5SGXhjWniP87wVmXUvGz57sBx026FJzmmjsvbmLreFgIwjhxB4fHP6jd0YDM
R8xJhVcP8xWiB3hF71I/76YpsS58R86rUeLGY1q16moXh9WWnTWPkZ7w2mtOuZsBjTr8kOtI
atPmI2gqPEFZjXkGxXv6yI1aEL8svYv1df6iV+jgoDqsSVMMD64Y9bN/3jI2+n1ET+CaPPp5
gfRekHUq6oWapf1cY+EohZgVDxE6rISYN/YWjY1OTY8M2pZdG1etwnf3kD3EB8ueUb6ZrXeK
PhfolIWW2Pp2FFTtzh5h2QgwN4mejX0SDoMujt5Y+53Ag5JKUTVZnSDOjqRsXbnhmKZ5HIem
+TYC7MoSEu2ZO5b/e/zRIC3JEO0NjbsoKAH6TiQjwJyLragUl3EA7ZK3nJnckceXKhuuGb6E
OApNWzb/oXUELiUeZE0H/OnL/4qDeSZEFubusXvFqxbzffwZBwO8OcKbH5Fr5VlmSpPoMzor
uoov9QGtD/lxpf8LX/VmT5FajhoE+bb2GpFs4GTxSZsK0u9KwNjxIIHKGRl29pjSuH+IYVy7
tTnYA4NLY12PMJEAfe+lLNrl5EARk7yLahSxJeGazFVpXPJUijDWSdwMif/k7ix/J5R30Z01
+zNGT9UJMQgkaj8cZf9ZUECb8CYJbyLmBLMcA6iR3DNE50pgfC6MUDo0LxFvgs1d6/ohkZV1
N4QBgpCOKHsLf4+xzdnI9hUGm7KYpNnAh8Um+2LU8TYtcZWDoQ3EAsf076NsRq0h16Dy4VTx
+u3xD+cY19zAGICGC40DGLjtXKM4qN31tbzfpDqoZFDlw5oSneKKtnqGylgSCH8wbQVohWrl
Giev/TU+bBbZRclXUfRJfaNxF3mEK3eUM8zCmU2iKijgfZ1wEbdS8CwKgR/LjWBvYF0rd/fj
KoO61qKh7ldaxXRFwqB4mJ/te05+3PVoNR287NFZe5pS4RiyMs6nv6YJUueZYfbjhdwg+XQ5
wH5N5JJgAGij+Ei9QKGufTh2YQMY+jA0UrX5W/l2RrV7+K9REMTvo1VbvfEDk4BHUujAhEP+
qQMMW315PvEX7N7noqJKhgmCXjgpsxCNK353x5lBVVtjs7tRfegUKGXWYlqMq4HoxDNd78yh
0fW5bcbZ49KoOzeqTAcLt0DvNEmvRpAXtH0TYmdNvyxwfngfz+cSTFOK8q7JeksgsGzAOpzs
IgDzWWDuGAEcdPpo5yCI6F+eB1vu0/5qbGAWyLy14TugtpTXZIovFtuDs8zPL7wqJ+9tPs6j
K2F2vJf5vyEKbulDbheig1NMxwUt3ENC+p5JYvKMKy7c6mP65v/u0VB9a4/8VmsazFsjXsfw
iSA2/qqIqXxtmj3yR1PIreiq/U+aImEEapkbUK1M/d7CIbmZ4JUzlpdOHy1lA6l8jRLCyjaB
p6MqhSfLXNaxhxxdLM+oiSK50HUziIv+sAUOBf7bC/E+8vN7SCyQBA9u1UTyV91lpDRbwzOl
cXoeT7bu1TzwD8CmF1sRU7Fodx368/kB6GD12K+HUi3MLzH+S6wOn7ihrm7j49UCj5wTW611
6I9Kh2UT9h6H79I9JqHqGoqQjWCAkedkxl4xaaDFBC24Scdd+DMKD7f7Pq9TgBj7OgOZEy3Y
/JsZcMwtTa6IGJWKEjFqE4ZLTY1Avpnk2V+RmaBNjIyhSf2pWG6YcqyHvzcYwz6onhGar36+
1Qb7mC/BMoae8GyqUGFZ7V0eS/SJWyX8p/mlN0z/6ZFO7KniybpQi2pBUJ/xioDO37S5MYT6
DRxHpGv5wd+GWVkj6t50cmvarQS9RvEuF/y3mwecncxXpGS3L5wdD3a464uUgqboChp8et8Y
FVLgeOprdmEMxaxKJKPPwGUPCKt+JZ8SbGa1Z8egn1E9nQ50cvDhxJxIYe49KQcmWhjI7oHq
IADUKKUevGbWsMUy+PpSQlly1jP8C+ESuNChTyBaPoLNUBwrEB7k8L6Fjv0R041gI3jJCb7t
D+2kf6yVlQOVCnp61D0Ch8gSHRfYB04Q2XA0kTkeUo+mZbAXwf8Ss/fu7E8dn+SXDLqef/Ij
krjqMGSw9PE5xsZtmOroLO1wQpv1Vn4kXlq3bjvqJn5jmR6+KOVl0KQKJbl1gfg/9W/H1akJ
eOTpbVsY7Hq6aL6zXevDrQ+NB5uwXrgjskS0roGLfJrvJP4nlvNTNz36lO8C/CFNUwbMGk5w
hIDSZSLb9pB43MizDHuxqd5oYVUVsyshn7enEGi2+RW2K2yPR84zbXz9X6K4W02yhzypU0yT
N46cVue4ZTnAW/eu3OlLhgoVOgeIuoF7s8Lo5LMMow6ghSx0y2JX3FKHgeSXjN9X7DW4Hef4
8aWl6JMr2KzJ6XzDtgmAtZgTuT/eiRuqQJoFnTMWboMa4SaehN1n3PIBGaZnR/cHd5tBgj7d
y9Rkz4wTaZFgDRUN2bP9tsW1UeFCEpOSUMH/JICAaR7K4c4x/sJOeFGU+1ynTyos/v9az8cm
E3ED4XLB8systCDMExzKavJYWHHK+igBg/S/adGDWA/tIFcK4JPt5vBU+j0fbkSBBWvKSLAm
URQ6TWz/7GWmt1DirA4DYuGbqmpJmuXJWLSMovc1dNOYbxn1eEFMSEuYSkfRtd8M0KG+Aayh
JLox44UBE4skbOdthichHvuKYZ92QZBrWtjhNjnJk0FywxS12tLOgm50lyxahjTbE83P/CXo
gMq5NZ+01ZSHnN/tB4xB/munU4tHVvHyhxhZ9Dr+hwnSAQzk4M0bYERONLTYGblHY1bnek4N
L+LRak3lL3Cu9WRyVJbsVVkSW687Fyx6NSoQygUyPqwBsV5FLGYe//Zsq8lgfOEkX9dxVUlq
eSrQEmp8fuhcKIWwkdbEFea4jBpNtLV2ghtakYftg8C3tnEB58H0wMTicw+54MTMm8DgynyB
eWfxuVJ/MqC21Et0nnQn13eX6Iw8tDWzdWB+9cpAOgmpSUtUwYVM8oKUGk4ONcHah7Bg9Cxr
eUEo2Uj9oqVHdXTsgOTlLfrei8s2cduoeP4PnlJGPCvjmjHbc/u1e+1otNGN0DIQKpoqNcuE
PIhPCmg4FmmyCb6BxbLVn/7CWSJYY+cvCx+S2foPZPrLI1yL94DwFlZr/KOuHNpAGXdERSfs
YDaImUYrh7OsYHl/67tBS60iI7szsDxkc0uoGSUXlUwHd6WmslMoBXhj1ZINDwiYvuEGYqH3
w63l+K4YIaVyq5szdiaNANceayF4to7J8bGprP+eSp5eHD/nBPR45g2tFLOCwX/dtse4jIOv
GjWbIs7uQp9k6znkCCjeflq1mDY0Mbkp3SmECY64GCBmGutn4hkoIlkBETbO0jnbjqn9Et+S
qlCROTbmDD4CoTm+OqCYLgIvci7FkX4p2ZpqIMyxXcxggFja4/hAy77utFYSVTabswnMf2If
y4tb3lIZclVXJ1dgEgqvCBfBn3eLxKLzAV4VP38+BqhqKmWo6O0KrWcxncWeuNvuMdE/aG95
n0PD+Lgx/1+8UENdJGo5S6wQOCNcGzPr0HDUaNySZeliwUWmYn4EE83U8nu5hixm76O9U+co
FKbS9vplPBS4IXk+dkQvUgEQeEirfs/NP2+iRoitdSrWGCEfCjf4UlYGzxUwFKfmqhoBsVOX
+eSBAzpzx2/gRUQ0kkslRvBO2AON39sdTWO2MXQczDYOCeM0Xn/u7Fc/shyXI7MOmoiHCXTN
8ApQFWM7YcbiYSk4wHj8QFNyOOZwPdCtwszPA4nHLuvcFg4j5gU2E4UAXQ/FUFzXsQfofZ+p
boodFAVtD4XGt16UYijCg54fQuIPoZWbdEIqidRyxvLDIdNk6wV1UxWNLph29ASWvwSuUn9c
KV1c3kKlvlwDbvk0M4I0oIpva3Phqv9fF9ZjDy42ONzh4bcIW7NO0xWlOPOZ1YsVk6YllYO7
J6qm5oGYIpbe5gNoW2Or6lLyXZ4xhFaKL1GxU5IcOIyu0zaN7cINLXjUqCmo2PBaR1uJYR2f
kTr2wt5FKjRLFrWZpUX8INNG0rThXF1ND4ypMOYR99ldI8MlY0A36l90+/EXVxIAgROmvJKw
+ea8hCEWsvXv4YUziKD/0FZYxjjC6ueCSk4agyO5gvhBEUcoGmfZy9lf9EAQbAIMUOoTAGEK
LiWlFhHDcP9Uw/hiF9HSFQvkQDhARhvzlldCQ3OuA420K2mZUlvATnmWbz4LLEpBSD7HevmA
2p5HAMCOjQFjnzEA6o2DTWX6IapEm3HRpxm5JSznWfcN+xqoH4PRcQZEGs87sNKGau31YMqv
PdICv65em48nnvtwetQZeY4p3bQA0WlL5UwbtacV5+ligNCUUEI93kZZwdGJq7RgLuzrlWEj
rsFncaGHqR9IIIJo9tKDE9oJKSRtb1IaNHbg4et7a64BcCnF/RpckAfhBrnq//u7HSqoLO38
d6p0NzhbXU6rYHUJzoIQ0DJpYsrLL1jLnBYH+PjTcKpX7YQDvoMvqygUnoW/dT70vFqaUKjx
u7qeFW/dF7f8RrrEPOx+Ne8RLLyHN67hkGG7wXY3Uf/McKBuEYG2hZ5g9wDSsDEIIP1foitZ
1rIqalNnZ3o6CjzjREKE2N1Nilj+LTZPn0yJJ6eLhraDDkvk436ArHkKv+LLQ8iwk2gxtKi/
vJ1PKrgFc0ucoDziE2kOCFVNI3MIUOdsTHWfx0IDs0qqNP/vSEtT7kkxSzmYz0T3sXzz4BFu
m3h78u3QUpvyvZaHvDUFpNp9IZNfqgcQ3Anv81EaLXt6tmKr89iCU/jhpBTuG2dZJa+c2D5/
gcDOhAw4QaorazocWfkoHJo35Nvunk2kzjpvy9B54Dgr/ZFjswnjWaMbIDhPdfFH4aWkz95Z
LXze1V8NMwOly7y16/owmZq8lcPjqwpvfw9m2mb3Cw4wmF7ZqZuMZMqSRL1P3lwbdzXor6Ub
etVZCAP95oftzN44lmmN7ZN0/RjBB2Scdrguasd4sctDoXTATJMH5x751CEhvfZeGm+2druY
rsdSc43M/wdO3s1RXKHQ2TjHz1DRe6Qbo1sUEcLjxbuXuEmusAQ7FjNagZ3VGxxdcQ7Mt9lR
p9UkfqVCfLzBsf0r8A5iJ1pvLF/zMguO8WCQEq6dqAdRLN6ivzOlAeUu0i5PClA8LZVJlgDe
Zqwkmb8kwrIX2tHtjF6Jx/HB1QKq/doacgWG9xZpQIbiLkI68A7AuzhCd/Em9YSExqLAojSW
5XnBCY1pwgqI29OmqntAeVzwjkqbdmQ6hhRdD4MpIJeYHmblF/Tf9fzs+1kb/Mbi4acrpPFp
vKMqFtSF/U/vYka17Tq5c1AN02PFQ/xKV7OPUV1Qvpddyn7G2EeHH/+9DvIMCFqMswjc8d/3
m7I4D7pzOdtBbw3bImcwSQypE2jVLZnc/Po6lpW+nRCDDp4X53kpgy1mMZO6g9DvvVQPypuN
CksTnfubc1MHwK/N8a3RX/MbCNIgTRwgtx2mx4k6OXpuKD/MryCiFiVj5YadjoktznGsqUaH
Tp132tgnPJEBa4mFuDolLNMPyFaMWfLCT1j0yRb5eZLGj+4dNv/MYbpnE1JgBqrc5kudmmO+
vlD6eHjDM0xzmT/dq67UsyjtDmVeYfL1UYSE+pDxqEU/oaCaSPYSfUEACKAOVW8KoOV5bbrU
yq19gZNO55CV4bzf+aj/HfreeslJCR8arB9m8dv5kEwqAlzSXNtgwFXesaN/4QTG7p4whCgv
N9qPEoWXqxeiMJrazr9zYwHdsTuOLavKrPKK8XQE6Z0dKFMU2uVRqX0L1iVdANSHVV3BDfR/
9Sjzxo3nxGUtz9h6pIRx6Nedsv50BO4IO26LWiwlFvvhkfl7EKMugeb/T6JAAqCyoqUrvugj
er7vq56z9Xv8PKBJr5xL6lWwGlCpTrPhYl6r/wjni0gmotVvE+Pwn5Xl1em6JV6SfRIUT1Nc
H2CiGP/4511SyZAkX5w+6j5bOeyTSPK752drXEkrWx6l8oRpGVqF+N14KkO3a2iNaRTb2Zp9
3ZqdWYo+WjSh89lEMXCUaGX8kr6hfPosBLxeZU8cJcPr1b+75kKWZ8VceR5DP/9oInu2fLPR
w1gKsvLsFzJjL7FdcYc437GK/FQbjntDoUyObXBUvoN0Vaib+3/xxDI9HDGTNf49tC3a+PQV
sppTaHM13Ackgvh8w3cSUOn0Pa4j9Wfiu+VV1Ft+qvUcQ5Kc6TJfqDYYG3B6ZngKc0WPaldF
MbclNH5kM/Ln1PaQp5WIO2gm2wBp9Flmaf4LUhxf+r1dsWZddiWHGc4TYRoy0vW7q42yxSck
qRMTuTsig7lKLk4WS0zpir+8/TUAZu3dM1/Wr6pW6ztn46tfCk/6IR4gVNlPi9gMpDUMDYhX
5BudzMvZa0sELqUtch5KFSPLEqj8mRP6hCWdkj5EMxW/5Pep6rV75zO3VvOzJuVz3QiHKAYj
K93QAeMTwdCTujtp0+gi5fl24l8lxUQ/7F8h5qsx9yFCZkHcxDnJwyYSHnuIGE+ENWQRZXWF
4swTRamJDncj5cVC67rfvmryQxAe5ldKQ/Ptw42m+P6iSKCKQgHeh8ZFCYoSRivxQNuwepDl
kM6Om0LgRZSxbOi2LPx5OV3KBrrysFRtDV7fJ3XDT/puHd1JvAbpA/FUKo24YWqTQpvdA4J0
iu+0P6T9lgPEMCJncUK3sXF5NELceY+BG8jhHwlxcYAgpu9M5P2F+Qf12lzmoaP+Vvyn/ARR
x2IQIO8J1VDJ+lR02eFKZEHUQTGuDZhgANuR6S6Hb4Yls+tF2Ob0TGbXrFf02O0RV+aOFhc5
gjpUfz8mgxaSVSmyUmaFc9JcF2vpyK6eXghJSJTh+XL+uNezUCl8s3LbYGN39EflYk8cGTuJ
0tPe6BMZ9B+CW/O5vMPVI1rUGNO9tY7fGFcB4tI59fYNF2SZlRvO5M96GDuU8xm95/IRNgPV
4wk6AfIiXWypQNvR+gMO5SpgSQ28OckaZsVwKPUGVr+KqKySB0vR1QdRzNXlKG+kCs8E4rLI
w8wplRjZ1LOyVP82f7Rn5VjmJmQ2KF//bPbOP6ymFZMPEi+XHEgJgZyw6v5s2l3zl9uxkMpw
npE2dWA/q3sanQBHuI6NjypnU9n9J1en40pDrbMxaBtu4xY0RaQ/I0rGbFOtKWq3f1ax/B/K
5pSiEX7Qc0QdDJNWjKWSLjtPTdJE4/FImKK7SDNEeC3tJN1QOCiDyOEJvsuwB/2T0RCWne/N
4jXmQXh4vhBCtaaZtDI0Vb9KcpcPHLMHPiaeLd2cV3BgF/8VHTJ97F0+8X/AH9kJUnIcDJb2
M8jeLhTozcxNNSFQnh1KR+tlPNEvmiJcGkiIlbxyh0Ssj8HKM7QvOfhxhM2DD25k0uJ0fNgt
i/VWAzqnJNMjW/TDWrcVNdSQTJh5IybZr++paM3sYr9bbPLJqzao70qkZ0NwFuLascESaoB8
dbqNzLLAYe+UWwHrkYRUvAdfIZ+zKAoFKXaOignSUyJXPIodmwLCa4WloRnMtXpqulUtVsMC
it+eB6YWMpX76M8jg7z7Dk9NFfODliUnb9KjLShZ0cJzm8U/qgalCEoRk5dft3Lr67yrYiJr
mtytTMgR0oJsf0a7LXUjhc8YSUqJ7GK8lOPjqBp/Tr6RQtHD6R8aYyYGAsQU4guJdyZBmGr2
KySu0DLScIEvJXbGSfUB4bAGr6XVl2MHopv0Gf3gUw9CAnjtBshb/pb9dOHAcmtX8on267PA
FGs3fkC9KH8LgsBc5cMmCXZNcxo/qOETTgIxEUbS7rD8RgG/Rkg8FQ8aLiqJ7r0WHWccSEPb
Puz0Fr9mlQrfjUpcbvE7xozq3iBPz8hK1Nj10PPEjhFfdsnGBP8AWAGr7a7Y8lW0Aq8XXGbl
38m7VtxpDAQ6QQ0UpgrQDo6UtvObVcNK2v5LzbPPSRyrHXd6xKHfuxG7bB5Kk8wP4sLGgiUX
KoVZ0cUkmbfQheBb3A+ancATYRfEeO1YP40KIVn8jENMtzFtfJ45C98pABMHDtGXFDMTjsG2
gvtUC12wRn2VExm4bJOGHboRM9qbI5wiOaOMmpqndOU7T3r0jgtY4wVgE3hwnOUSOtLUCBNb
7ZMYqVQ9xCPU8a4NRY1nYG8WX08kniuilAlEXVbieZpmdUUxHRzdZCihszT6viA2LbU819iq
DRtPBX4HlYZOBmIPQuOZ5Wz8/H67uU6nhMkSNJM7e4p9XjIagxUsh1t2he5JTAVIFx8msMK8
q+lV5BtHt2kmnouyZ1SrqO8D1EnjXnLBIglf66Yr1+CroxIi08lIursWAWqioL1HBRGj8oOH
dJ8hk6M019F1oKIpXnWqoaKfi4Vm3o2gAMnWI10g9si6ncgtyjIVlP7HP+TyLN6mPOYmw42S
bVeNrWGQawoI8evm2EDNG907Mxu9XLoOoExNCXGhgKUKjsr4KmpbYaqqHFY4iGQV1s6Sa9rE
cWtRfmjzuY8+bfdmn3otKwb2WzL/KP5UeeBjFr+G6WJGtvuBaJqMs8jGhiDUHsOCfWas+s5v
xGpFdV77KpjtGjeD7aM12QB8dAV7yd/8gbqIfDF49WUq1vmA1OkkLmWJM3PHwFP1gt/hyV2g
oqROttppHtsMi6DWB7CppBsbwlObz1igvagdmR6NyfyQbfktelI8rMbSxFWnJUjSRafDKV3f
YvjI3ACEA4C6o1GzYkb9cLvDOceP0ThnAae3ti7yKHxt9DOMqgSphT1Q4RXpwao916bOjJ5n
WbG/tElgoWdwJjjclKZ7SsPvpvSZw/Vyoq+hKssZ9MLZbdeJNByQhJ7r+05nC4BYANVkLab8
I5vSL4dVsmJiYDMDdreVUD2gfn6wvmjG+lC/TNaMmDI3/32D4dQE8/uiZnesaYhIEzKnDu3S
G3T8O6qK68buI6BjIYRkaxfFM4zimlxOj9XNjAJn++aD3h3PlculVU3fI+QcIa1pBB90+4jw
HQxGkt8hJlYFzHTIDjIEpwahQPxKzxD4CGWzOk6kLSGrh58vtbwjgNueDCtOiyTAPI/qoCA6
vccbgBz6zgbztRxldD2s1uNfRvd3HyLl6d51dtKlRup9Y4CpUhXbIBFkuaLUw3v/VfO7XRDu
aw18W13zK136z7CfvTmFJnMB+AlJNKgHhU+oo86tob2rqrMdceJ4Fq/233jCEKTJQ2c2QDVd
XZnj1owyq8U55ab3Y3JdD0NlAXcuGuIh07/z/wAo/lmjE2OXbUpDt1kQZAdn42RH+1bnUEd0
e8uRYjN0Y+UzYOVLX8t3qFwn0tYUxKmbppJRl7pvcBLR82IkxnOWUw7ZhrqaHolVG769EDcg
bknL7DELfqv82an+gul4vI755NAHCH24cEkTg3FB368+hfCNT2KT4v6P+nHjCoetkUEV+w4G
QSsZ3qDvef9NJCk7xqlzKBS5GfHLrco7e3ORNxaeGk0af45vIwgmmpkxYw9t2IIErGx8Gh2L
dP8DopSq/U7mwImbByqVZpwccatJEhknlbZhx3mrEqoF2BPNHmOmyA8xgnOp3KlLAYiev1bf
1puHzptDgbj9CY5V/vXdT0OwecHYnUpLg1anAj7sgzLo9tk28aIWbEfOHm6lLSku8XUjNAXw
elieoLW98n+9IDX0pNyytrp8OzcDMS0qNp6rUm8CPFdhfQmc6SONImj5iCse+VAOAgtqsi0H
f4iHwFzsr8rcxLlpBB6HfUADm0nhArjMrHYcC9X6rpLWk6KnoqpQ7bfGbxExvMO2o6w5+qvs
icl3XDXn5+6OIM6WAAvWL7BelHyXIid09ZSL+GaNqpGei1CHIK+d3WYk/TRaTKBR2DhKiIdn
WOFBhixWxmBhA9ineOFBJXt41/Vg8bJnYvReK6w9hxzlaZvTg3IFw9Bhp3tCeDHgScq5baoO
Zs9GBRlzwxEvF5QezvULb+3T/7xeSSnuQdGBLM2HSQYfscdlUBwDXMIR6g6VJ67dRZ9yY4Wj
Fkd1HSksfPy8J6R3G39NuLUeTO4yrPss0sZhgdVkvo8YtdjAJIxdsz6UMrIwFy7CtczkHs2y
wCNna7yO7n5SWgI852q24XL7bBNN6E6XfYTolg284SjMqrrynJNd7PfJzoidf1yH06BCx3dB
199775+n4IOgvBsYX4wjWaD7qy5wZUk5rUtcugNPO9qZoNWyVhlaWFPwiRRRpfycZyoRjujB
pcjnf3yXQs0G428sHK9kJ3ZNX3JmkzLigU9j96+J1Dumz/X/yP2EvTWRp+BYluUZCdSt9hBI
7fuEHmpJouHa37mRn7Bj9m/YmfA+/qVYTfMzC/zE8UScehWQGBXj6H5zppKzW6r0DB7cYUvr
nhHXOA7EwnL/oqHEiwx8x/F9bswhkWQYUzVdMPh/aSoyPYIURYGUgrjYdYdPT4f523z3l+RE
i5FcnTZr4DwpXQ2eNWjPazERGNKDMv99dvfMpALhrGd7Hroy7agSdxedQqIJN8i/GjtgWeE3
8+TqBe1DA9h1/Mpi22SkbBOvEGNzWeXcSNjn+tvpEG3+ZJvJinxJQEQlEfzcQIk7IT5xGyWj
hEOZkvMrlrkrjQqyVs+qa6eIhpnafRfFzeerrKzm7USQc8F16tbskjfgq+6AsZR9n0T4OdUM
jWWTcD/v6vj6FU77v1e2s1XIlR/ea/fYeNL/hE6CmwSJ3g7U0HYnsViHZvPzl7RB//uIdQWi
dme4e0vwneBY1a601taJpJE491UTm7FCVSrZSRWH3cI9YEKl7qOFS+mXUQ1YzqQbxfFiM2TB
rM8sAfnlO0zweZ29Bm1+a4WkdhUukWOn79DW8tUxycdK6pIilCPl6ovWTx+HFdb6hJx6TaVU
hn2varvbUE2CJsYX+VwKTo0/92T+KQH7JZzqprGsRs3MEXJ+YdtY8hnRQ0iS3Wz6gMkaROwe
H4NJyozQIMvKYCp0HrsQa1882GPlYxKWBMP3XCInwGM3+KExJCD78hdt7N9CtpbtvYXB6uii
H1EW6TObZtukQzbjkJQA1gHpVtYNQpYOFbhzDGh19dodeuglxvnO8vClR8vgonI/HxPSPgxq
YUAYFytAn95UoQnwA39l4lPWRUzpJ/I3pFXuWk7NxdmKBssXv1YVCd4KoOzvjmTJe+rj94oh
wHJVE9CogA1OUfVcbrr+AxHnL1ynUreULxW1Pw94mhR7GLNBAGOyIu3yaN0HRTsojfPVKkXd
ODAuZ2zhOBvxPEYTL51ByyLepTdbcepXTrnhMSAkE6xCnk8F4m5oxoExPKeswoeKlkGYIUN/
CyZzk9VjX2d4CN7EXXXcRSyEZnfI3CQ18EBfZEqLysZiXIpfiJkUXLKKd2UehYk6K52COvzB
rsL/dhu6iodWLn/DFhVZAM3SNhGsLoWPH9jKLG91q38xNQe81mXz7NtH8yptAp0MHbqprKNp
Ggdh+iLdL5ODCFB3oczWL9Sec+vsLCUs1y6+TAcLKZo3X+1FdwaPxJ0w0tGAiR8cIO7bdL5F
wqEOjwf+MivLQqA/H1qmEpLyA2u2i5fG1jxApOI4fnnp5Dupf55xotcbFrZktMXIQQfa/upO
XaUktAa1gI3pHC0vlwaGuSLrd/N+0UHpTzk86OdoSsVvtQANjcSfr8NviMSjjMg3dK3G3FcP
1mrxsZSCDnC/zN3Ako18+3ajhRlXDRP28MMdHEhHAg+4fNme5CEVzxLWk3lVR20ewXEjRA/p
eS2TJRJSiZFYNY/On3IgnMpsDbqIPTfdp/LD4I9beycKxwtT/jgBaV4V0ytdbCYH/CnQ4QeQ
vuAw7TPR23NI6FZjORm+UQ3eOYsmx6wpfubXQD14B0u92gmfWlzIGi+6cwQuaW5LWVxbXUBC
S2llI+c65u5j6RhrkK2gOfiDxYmWq0jEkg/qUkkc9mPhQcfxtTUcxZL60y4UBRLy8pv5izgE
zsImRmad2X1kICOad3mQNelh4Fme3zhhx+gxtmM3LH7MEbbz7sEiMe/tN2Fc2sIUfqh7x2PE
DUStY7w1qH8/fa+5dyUGwQqojmECe+cJycr1GIRjYZFkD09WLmW/xhY4++nB7+LQEj2z5Xxf
x8qDN3mh7UDiQ4LRYl6+AqM5q93iaqgdy0I/HE7/Sn/l7u4QqA2Khj0blx0VHiXPHhHIHJt3
W60JCk+XTm4piiOBsSsphHPKjzirnmoOp3aMjDxvxEFWdB96PIxuartVtqlHJ1P4nNtUajar
n85pIBkArCBy9yvQCa4FrEaLWI0ybeq239yQXLWYjSEu30vbI25I4FR6QjTT8BGb/KDUOs62
EwUE3jJVMNQjwDO3RGSSFC7noTz1kFkTWL1zWNCb3MWL9+oARpYd8nPBX4VrC4liSHLyngV+
xU6i71hVJtW6YeXahVNc3UWSZxOpB4dEr5/BtsOIMLQi4XaZdFyxWruIhFWbOnQeWf2UA+ip
uY9YMu8prRfUpFdTaisvnEEPVFYPuHZCPfYBoZEawSioRyHvLiuLzeZb6s4DeLVr80M5wyoW
T6f1skHDt66Qq133Ky1hHSn4VVTcZEA5CmWE5cYNK8E/ZWgScAzZlXVTovsXLhMG7tQA5PAY
enTub2TdPW+x39oA8+mDaqyWSOTGiAK08dRoebAzjVHH6+6U3pLuYMupf5zN29coR7jXcA11
kPVbSL0BCfOf/GX6/4iSmbSwkbMZnkewv6PmauWHRXQS3ckKDPaVEr+HMvmigqZ3VNMzK7uB
tIPI4EUl3SpEh3gqF0TXQc0oGphQBHlgCx6S7cG8TIoj4qLDsrDXBFcFqdzL56laZWrExu5Q
gJVjz8d41niVIEH50uHydZQpAY/pFfaEZDEmGXVU09hgab56SjzP3mMTcncEJs3oNfgNcxkr
PFmSI6Oma/92/lEwymiWCe9XhY/MlLz4/1LU7p9fuvIS/PS1YjO5YHumv6I9nGTPnK/B7xpj
PW1tzBFtGdGyXqL4U90DtBh2pGbFuuXQEmua5yg4fUElIBxR7eCHZnlhvQ8eq+iP85M4ek/D
Jy59s1j9gya28+Ld9uxwI/gMQc4xjniWaZ4tyHcSbY/l6DRb17bCUK5uQ0iJ1PMXzACwsQ65
TZtdQkb9BY5PgZfSg0EfnUeS+a8K3wunOA5HH8wqa12gopahYhT3GPDt6vR5prf2crl+IEwh
EooKwVJp46zDih2lPWfFzNMiTVYih6bDhrx5RRQwvQszP/8v7A4bMnkzUI8K2PQEBlWj70NH
fiO2eGJiPUX0V/BDM+MqtooN0Uq3cBYxoFUdKhlmvwQBD0XjX7YtY/AUHrwDZDZo5DnWCgCv
SfTqZFIAqDQvG43Uhrf46aruzzJqNe02QbdgSsgGqzhIXQS3FZ8oqaNl0g0o/Tz5WhgRyalG
nqQe1Mg3j7fgV6ynFKDpaF9DuCFcubV4ZPxy1Gqt8L401ACaFJ4uRljFEeBbJ8kqekBVx28x
siSH4akkhPzJblMa1vK8nv0doYo+jsZjaFaTlGECRiW3nKfBvT5WwwARZlewLkPs3X93f4Dg
pNfHjym6nmpv+86TBDboVJbpgzUKbR1I8yCwRC8S7riprwWXnmmYNcfFEMTElKN/vy62GPD2
vjZoNPkx1KTc1DKwm2j8VKDwAAAAAPn0v2Op3fZ7AAGBhQG9sQSEfTbgscRn+wIAAAAABFla

--fKov5AqTsvseSZ0Z--


From xen-devel-bounces@lists.xenproject.org Thu Jul 23 09:40:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jul 2020 09:40:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyXiU-0002yM-Eg; Thu, 23 Jul 2020 09:40:42 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=xWck=BC=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jyXiS-0002yH-PD
 for xen-devel@lists.xenproject.org; Thu, 23 Jul 2020 09:40:40 +0000
X-Inumbo-ID: 91656860-ccc8-11ea-a271-12813bfff9fa
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 91656860-ccc8-11ea-a271-12813bfff9fa;
 Thu, 23 Jul 2020 09:40:39 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595497241;
 h=subject:to:cc:references:from:message-id:date:
 mime-version:in-reply-to:content-transfer-encoding;
 bh=sWR6v3YmxJDTEcYe4oLHOI434b0aPK1j5k3mIryJvY8=;
 b=fE8ivjykTIe8QEPKE4QHUUMGFLu8a4I9sXVeSHA8FpT/p7ZujgX/162K
 N8MVsiZclZ/szBckrrhaBN9WnvV2hoESLCABU45FxVLuurZ/qYwntLWDZ
 Jr6e7KZcjyHp8TwxylpOc4zWFt4lXDZ+PBSv+kn7AnH7yCNNLS3MYnySF U=;
Authentication-Results: esa3.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: L9WZog4wf6i+XamkX28E32Tv+Ot8sBcUHjERZnc0jjB9ZvK2VHAMD4a/YMfSM1zrq4t8JscCDE
 mds309gp1QPlbiXfkVBeG1+QxuD9eMmYRV17iFBhEimKsPVi0KxSFbAtvoGivZ28bfpYXeaX9B
 iVqoo2tdWoqZU7hfZh3YdmNL/jP/aHURVk1BnTJco9ZASVxK8aLR7qbMIVG1L5JkKnR2K9u/9/
 rhxyYpGOA0XmNx8GxRL1n6rk0H0H2h4D+JRPOzAFnYILYqxY9a0XjnjuYppXULMiTtDnTRZlx5
 Qmc=
X-SBRS: 2.7
X-MesageID: 23019286
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,386,1589256000"; d="scan'208";a="23019286"
Subject: Re: [PATCH] x86/hvm: Clean up track_dirty_vram() calltree
To: Jan Beulich <jbeulich@suse.com>
References: <20200722151548.4000-1-andrew.cooper3@citrix.com>
 <07ecb7dd-c823-0c6a-2bcd-7fc22471af7a@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <822f6c64-0e63-1199-63b0-f27449fd79c6@citrix.com>
Date: Thu, 23 Jul 2020 10:40:35 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <07ecb7dd-c823-0c6a-2bcd-7fc22471af7a@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Tim Deegan <tim@xen.org>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 22/07/2020 17:13, Jan Beulich wrote:
> On 22.07.2020 17:15, Andrew Cooper wrote:
>>  * Rename nr to nr_frames.  A plain 'nr' is confusing to follow in the
>>    lower levels.
>>  * Use DIV_ROUND_UP() rather than opencoding it in several different ways
>>  * The hypercall input is capped at uint32_t, so there is no need for
>>    nr_frames to be unsigned long in the lower levels.
>>
>> No functional change.
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Reviewed-by: Jan Beulich <jbeulich@suse.com>
>
> I'd like to note though that ...
>
>> --- a/xen/arch/x86/mm/hap/hap.c
>> +++ b/xen/arch/x86/mm/hap/hap.c
>> @@ -58,16 +58,16 @@
>>  
>>  int hap_track_dirty_vram(struct domain *d,
>>                           unsigned long begin_pfn,
>> -                         unsigned long nr,
>> +                         unsigned int nr_frames,
>>                           XEN_GUEST_HANDLE(void) guest_dirty_bitmap)
>>  {
>>      long rc = 0;
>>      struct sh_dirty_vram *dirty_vram;
>>      uint8_t *dirty_bitmap = NULL;
>>  
>> -    if ( nr )
>> +    if ( nr_frames )
>>      {
>> -        int size = (nr + BITS_PER_BYTE - 1) / BITS_PER_BYTE;
>> +        unsigned int size = DIV_ROUND_UP(nr_frames, BITS_PER_BYTE);
> ... with the change from long to int this construct will now no
> longer be correct for the (admittedly absurd) case of a hypercall
> input in the range of [0xfffffff9,0xffffffff]. We now fully
> depend on this getting properly rejected at the top level hypercall
> handler (which limits to 1Gb worth of tracked space).

I don't see how this makes any difference at all.

Exactly the same would be true in the old code for an input in the range
[0xfffffffffffffff9,0xffffffffffffffff], where the aspect which protects
you is the fact that the hypercall ABI truncates to 32 bits.

If you want a non-overflowing DIV_ROUND_UP(), the appropriate expression
is (x / a) + !!(x % a), but I don't think it's reasonable to use the type
of this variable as a credible defence-in-depth argument against the
audit logic making a mistake, or that it is worth worrying about audit
mistakes in the first place.  If there are any audit mistakes, so much
more can potentially go wrong than this corner case.

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu Jul 23 09:52:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jul 2020 09:52:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyXtN-0003xG-HY; Thu, 23 Jul 2020 09:51:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=j4k2=BC=h08.hostsharing.net=foo00@srs-us1.protection.inumbo.net>)
 id 1jyXtM-0003xB-7a
 for xen-devel@lists.xenproject.org; Thu, 23 Jul 2020 09:51:56 +0000
X-Inumbo-ID: 2354592e-ccca-11ea-86eb-bc764e2007e4
Received: from bmailout1.hostsharing.net (unknown [83.223.95.100])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2354592e-ccca-11ea-86eb-bc764e2007e4;
 Thu, 23 Jul 2020 09:51:54 +0000 (UTC)
Received: from h08.hostsharing.net (h08.hostsharing.net
 [IPv6:2a01:37:1000::53df:5f1c:0])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (Client CN "*.hostsharing.net",
 Issuer "COMODO RSA Domain Validation Secure Server CA" (not verified))
 by bmailout1.hostsharing.net (Postfix) with ESMTPS id AF329300011A0;
 Thu, 23 Jul 2020 11:51:52 +0200 (CEST)
Received: by h08.hostsharing.net (Postfix, from userid 100393)
 id 68E6B36272; Thu, 23 Jul 2020 11:51:52 +0200 (CEST)
Date: Thu, 23 Jul 2020 11:51:52 +0200
From: Lukas Wunner <lukas@wunner.de>
To: kernel test robot <lkp@intel.com>
Subject: Re: [PCI] 3233e41d3e:
 WARNING:at_drivers/pci/pci.c:#pci_reset_hotplug_slot
Message-ID: <20200723095152.nf3fmfzrjlpoi35h@wunner.de>
References: <908047f7699d9de9ec2efd6b79aa752d73dab4b6.1595329748.git.lukas@wunner.de>
 <20200723091305.GJ19262@shao2-debian>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20200723091305.GJ19262@shao2-debian>
User-Agent: NeoMutt/20170113 (1.7.2)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>, Derek Chickles <dchickles@marvell.com>,
 xen-devel@lists.xenproject.org, kvm@vger.kernel.org,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, linux-pci@vger.kernel.org,
 Satanand Burla <sburla@marvell.com>, Cornelia Huck <cohuck@redhat.com>,
 LKML <linux-kernel@vger.kernel.org>, Felix Manlunas <fmanlunas@marvell.com>,
 Keith Busch <kbusch@kernel.org>, Alex Williamson <alex.williamson@redhat.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Govinda Tatti <govinda.tatti@oracle.com>, lkp@lists.01.org,
 Rick Farrington <ricardo.farrington@cavium.com>,
 Bjorn Helgaas <bhelgaas@google.com>,
 Michael Haeuptle <michael.haeuptle@hpe.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>, Ian May <ian.may@canonical.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Thu, Jul 23, 2020 at 05:13:06PM +0800, kernel test robot wrote:
> FYI, we noticed the following commit (built with gcc-9):
[...]
> commit: 3233e41d3e8ebcd44e92da47ffed97fd49b84278 ("[PATCH] PCI: pciehp: Fix AB-BA deadlock between reset_lock and device_lock")
[...]
> caused below changes (please refer to attached dmesg/kmsg for entire log/backtrace):
> [    0.971752] WARNING: CPU: 0 PID: 1 at drivers/pci/pci.c:4905 pci_reset_hotplug_slot+0x70/0x80

Thank you, trusty robot.

I botched the call to lockdep_assert_held_write(), it should have been
conditional on "if (probe)".
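[Editor's sketch: a minimal userspace model of the conditional assertion described above. The names slot_reset and lock_held are invented; the real code uses lockdep_assert_held_write() inside pci_reset_hotplug_slot(), and which branch the assertion belongs on follows the quoted wording, which only the actual driver locking rules can confirm.]

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical stand-in for the reset_lock state that lockdep tracks. */
static bool lock_held;

/* Sketch: the lock-held assertion only runs on the probe branch,
 * mirroring "conditional on 'if (probe)'" above.  assert() stands in
 * for lockdep_assert_held_write(). */
static void slot_reset(bool probe)
{
    if (probe)
        assert(lock_held);
    /* ... perform (or merely probe the ability to perform) the reset ... */
}
```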

Happy to respin the patch, but I'd like to hear opinions on the locking
issues surrounding xen and octeon (and the patch in general).

In particular, would a solution be entertained wherein the pci_dev is
reset by the PCI core after driver unbinding, contingent on a flag which
is set by a PCI driver to indicate that the pci_dev is returned to the
core in an unclean state?

Also, why does xen require a device reset on bind?

Thanks!

Lukas


From xen-devel-bounces@lists.xenproject.org Thu Jul 23 10:07:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jul 2020 10:07:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyY8i-00054S-2I; Thu, 23 Jul 2020 10:07:48 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=0L1b=BC=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jyY8g-00054N-I8
 for xen-devel@lists.xenproject.org; Thu, 23 Jul 2020 10:07:46 +0000
X-Inumbo-ID: 59a5baac-cccc-11ea-a271-12813bfff9fa
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 59a5baac-cccc-11ea-a271-12813bfff9fa;
 Thu, 23 Jul 2020 10:07:44 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595498865;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:content-transfer-encoding:in-reply-to;
 bh=bQJiS9xFZUELYjfJnG1ICjedNSkJwEqRKFaBuZQt8Ak=;
 b=A+JIpyKjrqlsK3lo4UtO1jvKoh3tbDID3TqQoOqaYuhvW5SG6WDN26Cq
 FSqmLsss4hVW5bstZhYNSG3e86XrWl9BNuLfW2Rowb+wIFwPPAXQCiXaB
 ryyEr9S7OT1FK8bVqepayZKxlPSVXefQxbREOBpgypg8Doao2bUwKPlm9 8=;
Authentication-Results: esa1.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: bZFXF6/WNJHCKqecgfm07qSYmr8C11I5LCeQYAmB2J6ejGf2BxnVqGad3IzBVSMT2qM/6xk2pb
 ZDtL6w5ZgkxccLhh7102R3+00n3DJpxz/5cS69zB4Mifr9LHVimxAlJaYuL2ZbTves9JGhmgfS
 hyY/zIBp7MqmE8LrRrMlRqmGJTRZ3yjVjnF22OwOCh8q7Mq/1ZszDTFU6HCnsfegVmI365A9Ci
 U8r8fC4kSdAqXQxnwYnpxOJWlj1y9sOn9zwWntIiUVEgJ3GagxBo2m1PoklSQqSPyxGf52wFWx
 /GY=
X-SBRS: 2.7
X-MesageID: 23357104
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,386,1589256000"; d="scan'208";a="23357104"
Date: Thu, 23 Jul 2020 12:07:27 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH] x86/vmce: Dispatch vmce_{rd,wr}msr() from
 guest_{rd,wr}msr()
Message-ID: <20200723100727.GA7191@Air-de-Roger>
References: <20200722101809.8389-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20200722101809.8389-1-andrew.cooper3@citrix.com>
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 Jan Beulich <JBeulich@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Wed, Jul 22, 2020 at 11:18:09AM +0100, Andrew Cooper wrote:
> ... rather than from the default clauses of the PV and HVM MSR handlers.
> 
> This means that we no longer take the vmce lock for any unknown MSR, and
> accesses to architectural MCE banks outside of the subset implemented for the
> guest no longer fall further through the unknown MSR path.
> 
> With the vmce calls removed, the hvm alternative_call()'s expression can be
> simplified substantially.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

LGTM, I just have one question below regarding the ranges.

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

> ---
> CC: Jan Beulich <JBeulich@suse.com>
> CC: Wei Liu <wl@xen.org>
> CC: Roger Pau Monné <roger.pau@citrix.com>
> ---
>  xen/arch/x86/hvm/hvm.c         | 16 ++--------------
>  xen/arch/x86/msr.c             | 16 ++++++++++++++++
>  xen/arch/x86/pv/emul-priv-op.c | 15 ---------------
>  3 files changed, 18 insertions(+), 29 deletions(-)
> 
> diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
> index 5bb47583b3..a9d1685549 100644
> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -3560,13 +3560,7 @@ int hvm_msr_read_intercept(unsigned int msr, uint64_t *msr_content)
>           break;
>  
>      default:
> -        if ( (ret = vmce_rdmsr(msr, msr_content)) < 0 )
> -            goto gp_fault;
> -        /* If ret == 0 then this is not an MCE MSR, see other MSRs. */
> -        ret = ((ret == 0)
> -               ? alternative_call(hvm_funcs.msr_read_intercept,
> -                                  msr, msr_content)
> -               : X86EMUL_OKAY);
> +        ret = alternative_call(hvm_funcs.msr_read_intercept, msr, msr_content);
>          break;
>      }
>  
> @@ -3696,13 +3690,7 @@ int hvm_msr_write_intercept(unsigned int msr, uint64_t msr_content,
>          break;
>  
>      default:
> -        if ( (ret = vmce_wrmsr(msr, msr_content)) < 0 )
> -            goto gp_fault;
> -        /* If ret == 0 then this is not an MCE MSR, see other MSRs. */
> -        ret = ((ret == 0)
> -               ? alternative_call(hvm_funcs.msr_write_intercept,
> -                                  msr, msr_content)
> -               : X86EMUL_OKAY);
> +        ret = alternative_call(hvm_funcs.msr_write_intercept, msr, msr_content);
>          break;
>      }
>  
> diff --git a/xen/arch/x86/msr.c b/xen/arch/x86/msr.c
> index 22f921cc71..ca4307e19f 100644
> --- a/xen/arch/x86/msr.c
> +++ b/xen/arch/x86/msr.c
> @@ -227,6 +227,14 @@ int guest_rdmsr(struct vcpu *v, uint32_t msr, uint64_t *val)
>          *val = msrs->misc_features_enables.raw;
>          break;
>  
> +    case MSR_IA32_MCG_CAP     ... MSR_IA32_MCG_CTL:      /* 0x179 -> 0x17b */
> +    case MSR_IA32_MCx_CTL2(0) ... MSR_IA32_MCx_CTL2(31): /* 0x280 -> 0x29f */
> +    case MSR_IA32_MCx_CTL(0)  ... MSR_IA32_MCx_MISC(31): /* 0x400 -> 0x47f */

Where do you get the ranges from 0 to 31? It seems like the count
field in the CAP register is 8 bits, which could allow for up to 256
banks?

I'm quite sure this would then overlap with other MSRs?
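[Editor's note: the arithmetic behind the quoted ranges can be checked in isolation. This is a standalone sketch using the architectural MCE MSR encodings from the Intel SDM, with shortened macro names, not the Xen headers themselves.]

```c
#include <assert.h>

/* Architectural MCE bank MSR layout: four MSRs per bank starting at
 * 0x400 (CTL, STATUS, ADDR, MISC), and one CTL2 MSR per bank
 * contiguously from 0x280. */
#define MC_CTL(x)   (0x400u + 4u * (x))   /* = MSR_IA32_MCx_CTL(x)  */
#define MC_MISC(x)  (0x403u + 4u * (x))   /* = MSR_IA32_MCx_MISC(x) */
#define MC_CTL2(x)  (0x280u + (x))        /* = MSR_IA32_MCx_CTL2(x) */

/* With banks 0..31: CTL..MISC spans 0x400-0x47f and CTL2 spans
 * 0x280-0x29f, matching the comments in the patch.  A hypothetical
 * 256-bank guest (8-bit count field) would push CTL up to 0x7fc and
 * CTL2 up to 0x37f, well past those ranges. */
```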

> +    case MSR_IA32_MCG_EXT_CTL:                           /* 0x4d0 */
> +        if ( vmce_rdmsr(msr, val) < 0 )
> +            goto gp_fault;
> +        break;
> +
>      case MSR_X2APIC_FIRST ... MSR_X2APIC_LAST:
>          if ( !is_hvm_domain(d) || v != curr )
>              goto gp_fault;
> @@ -436,6 +444,14 @@ int guest_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
>          break;
>      }
>  
> +    case MSR_IA32_MCG_CAP     ... MSR_IA32_MCG_CTL:      /* 0x179 -> 0x17b */
> +    case MSR_IA32_MCx_CTL2(0) ... MSR_IA32_MCx_CTL2(31): /* 0x280 -> 0x29f */
> +    case MSR_IA32_MCx_CTL(0)  ... MSR_IA32_MCx_MISC(31): /* 0x400 -> 0x47f */
> +    case MSR_IA32_MCG_EXT_CTL:                           /* 0x4d0 */
> +        if ( vmce_wrmsr(msr, val) < 0 )
> +            goto gp_fault;
> +        break;
> +
>      case MSR_X2APIC_FIRST ... MSR_X2APIC_LAST:
>          if ( !is_hvm_domain(d) || v != curr )
>              goto gp_fault;
> diff --git a/xen/arch/x86/pv/emul-priv-op.c b/xen/arch/x86/pv/emul-priv-op.c
> index 254da2b849..f14552cb4b 100644
> --- a/xen/arch/x86/pv/emul-priv-op.c
> +++ b/xen/arch/x86/pv/emul-priv-op.c
> @@ -855,8 +855,6 @@ static int read_msr(unsigned int reg, uint64_t *val,
>  
>      switch ( reg )
>      {
> -        int rc;
> -
>      case MSR_FS_BASE:
>          if ( is_pv_32bit_domain(currd) )
>              break;
> @@ -955,12 +953,6 @@ static int read_msr(unsigned int reg, uint64_t *val,
>          }
>          /* fall through */
>      default:
> -        rc = vmce_rdmsr(reg, val);
> -        if ( rc < 0 )
> -            break;
> -        if ( rc )
> -            return X86EMUL_OKAY;
> -        /* fall through */
>      normal:

We could remove the 'normal' label and just use the default one
instead.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Thu Jul 23 10:26:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jul 2020 10:26:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyYQE-0006p4-Ma; Thu, 23 Jul 2020 10:25:54 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=9kJt=BC=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jyYQD-0006oz-8g
 for xen-devel@lists.xenproject.org; Thu, 23 Jul 2020 10:25:53 +0000
X-Inumbo-ID: e2398b62-ccce-11ea-86ed-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e2398b62-ccce-11ea-86ed-bc764e2007e4;
 Thu, 23 Jul 2020 10:25:52 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 80A8CACDF;
 Thu, 23 Jul 2020 10:25:59 +0000 (UTC)
Subject: Re: [PATCH] x86/hvm: Clean up track_dirty_vram() calltree
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <20200722151548.4000-1-andrew.cooper3@citrix.com>
 <07ecb7dd-c823-0c6a-2bcd-7fc22471af7a@suse.com>
 <822f6c64-0e63-1199-63b0-f27449fd79c6@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <635385e7-81f4-138a-f8ba-269a6d2c7ddb@suse.com>
Date: Thu, 23 Jul 2020 12:25:53 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <822f6c64-0e63-1199-63b0-f27449fd79c6@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Tim Deegan <tim@xen.org>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 23.07.2020 11:40, Andrew Cooper wrote:
> On 22/07/2020 17:13, Jan Beulich wrote:
>> On 22.07.2020 17:15, Andrew Cooper wrote:
>>>  * Rename nr to nr_frames.  A plain 'nr' is confusing to follow in the
>>>    lower levels.
>>>  * Use DIV_ROUND_UP() rather than opencoding it in several different ways
>>>  * The hypercall input is capped at uint32_t, so there is no need for
>>>    nr_frames to be unsigned long in the lower levels.
>>>
>>> No functional change.
>>>
>>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>> Reviewed-by: Jan Beulich <jbeulich@suse.com>
>>
>> I'd like to note though that ...
>>
>>> --- a/xen/arch/x86/mm/hap/hap.c
>>> +++ b/xen/arch/x86/mm/hap/hap.c
>>> @@ -58,16 +58,16 @@
>>>  
>>>  int hap_track_dirty_vram(struct domain *d,
>>>                           unsigned long begin_pfn,
>>> -                         unsigned long nr,
>>> +                         unsigned int nr_frames,
>>>                           XEN_GUEST_HANDLE(void) guest_dirty_bitmap)
>>>  {
>>>      long rc = 0;
>>>      struct sh_dirty_vram *dirty_vram;
>>>      uint8_t *dirty_bitmap = NULL;
>>>  
>>> -    if ( nr )
>>> +    if ( nr_frames )
>>>      {
>>> -        int size = (nr + BITS_PER_BYTE - 1) / BITS_PER_BYTE;
>>> +        unsigned int size = DIV_ROUND_UP(nr_frames, BITS_PER_BYTE);
>> ... with the change from long to int this construct will now no
>> longer be correct for the (admittedly absurd) case of a hypercall
>> input in the range of [0xfffffff9,0xffffffff]. We now fully
>> depend on this getting properly rejected at the top level hypercall
>> handler (which limits to 1Gb worth of tracked space).
> 
> I don't see how this makes any difference at all.
> 
> Exactly the same would be true in the old code for an input in the range
> [0xfffffffffffffff9,0xffffffffffffffff], where the aspect which protects
> you is the fact that the hypercall ABI truncates to 32 bits.

Exactly: The hypercall ABI won't change. But the GB(1) check up the call
tree may go away, without the issue that then arises being noticed.
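[Editor's sketch: the truncation discussed above is easy to demonstrate in isolation. Standalone code, not Xen code; BITS_PER_BYTE is 8.]

```c
#include <assert.h>
#include <stdint.h>

/* Same open-coded rounding macro shape as in the patch.  With a 32-bit
 * nr_frames, (n + 7) wraps around for n >= 0xfffffff9, so the computed
 * bitmap size collapses to 0 instead of 0x20000000 -- exactly the input
 * range [0xfffffff9, 0xffffffff] mentioned in the review.  The former
 * unsigned long variant had the same wrap, only at the 64-bit boundary. */
#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))
```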

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jul 23 10:37:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jul 2020 10:37:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyYbX-0007lt-Pc; Thu, 23 Jul 2020 10:37:35 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=9kJt=BC=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jyYbX-0007lo-A9
 for xen-devel@lists.xenproject.org; Thu, 23 Jul 2020 10:37:35 +0000
X-Inumbo-ID: 831fb442-ccd0-11ea-a273-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 831fb442-ccd0-11ea-a273-12813bfff9fa;
 Thu, 23 Jul 2020 10:37:31 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id F0A5CAC52;
 Thu, 23 Jul 2020 10:37:38 +0000 (UTC)
Subject: Re: [PATCH] x86/vmce: Dispatch vmce_{rd,wr}msr() from
 guest_{rd,wr}msr()
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <20200722101809.8389-1-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <76851f30-2003-2fee-221a-df70907ee91c@suse.com>
Date: Thu, 23 Jul 2020 12:37:32 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20200722101809.8389-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 22.07.2020 12:18, Andrew Cooper wrote:
> --- a/xen/arch/x86/msr.c
> +++ b/xen/arch/x86/msr.c
> @@ -227,6 +227,14 @@ int guest_rdmsr(struct vcpu *v, uint32_t msr, uint64_t *val)
>          *val = msrs->misc_features_enables.raw;
>          break;
>  
> +    case MSR_IA32_MCG_CAP     ... MSR_IA32_MCG_CTL:      /* 0x179 -> 0x17b */
> +    case MSR_IA32_MCx_CTL2(0) ... MSR_IA32_MCx_CTL2(31): /* 0x280 -> 0x29f */
> +    case MSR_IA32_MCx_CTL(0)  ... MSR_IA32_MCx_MISC(31): /* 0x400 -> 0x47f */
> +    case MSR_IA32_MCG_EXT_CTL:                           /* 0x4d0 */
> +        if ( vmce_rdmsr(msr, val) < 0 )
> +            goto gp_fault;
> +        break;
> +
>      case MSR_X2APIC_FIRST ... MSR_X2APIC_LAST:
>          if ( !is_hvm_domain(d) || v != curr )
>              goto gp_fault;
> @@ -436,6 +444,14 @@ int guest_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
>          break;
>      }
>  
> +    case MSR_IA32_MCG_CAP     ... MSR_IA32_MCG_CTL:      /* 0x179 -> 0x17b */
> +    case MSR_IA32_MCx_CTL2(0) ... MSR_IA32_MCx_CTL2(31): /* 0x280 -> 0x29f */
> +    case MSR_IA32_MCx_CTL(0)  ... MSR_IA32_MCx_MISC(31): /* 0x400 -> 0x47f */
> +    case MSR_IA32_MCG_EXT_CTL:                           /* 0x4d0 */
> +        if ( vmce_wrmsr(msr, val) < 0 )
> +            goto gp_fault;
> +        break;
> +
>      case MSR_X2APIC_FIRST ... MSR_X2APIC_LAST:
>          if ( !is_hvm_domain(d) || v != curr )
>              goto gp_fault;

With this, the two functions' ability to also return 0 or 1 becomes
meaningless. Do you think you could make them return bool on this
occasion, or would you prefer to leave that to whenever someone gets
around to cleaning up the resulting anomaly? (I'm fine either way, but
would prefer not to see the then-meaningless tristate return values
left in place.)

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jul 23 10:50:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jul 2020 10:50:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyYnp-0000z8-U7; Thu, 23 Jul 2020 10:50:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=9kJt=BC=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jyYno-0000z3-Ht
 for xen-devel@lists.xenproject.org; Thu, 23 Jul 2020 10:50:16 +0000
X-Inumbo-ID: 4a7dd536-ccd2-11ea-86ee-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4a7dd536-ccd2-11ea-86ee-bc764e2007e4;
 Thu, 23 Jul 2020 10:50:15 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id E3009AF3A;
 Thu, 23 Jul 2020 10:50:22 +0000 (UTC)
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH 0/2] lockprof: eliminate a minor bug and a quirk
Message-ID: <47f5478d-2f46-656c-0882-121aebc77f39@suse.com>
Date: Thu, 23 Jul 2020 12:50:16 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, George Dunlap <George.Dunlap@eu.citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

1: don't leave locks uninitialized upon allocation failure
2: don't pass name into registration function

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jul 23 10:51:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jul 2020 10:51:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyYpO-00017L-9E; Thu, 23 Jul 2020 10:51:54 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=9kJt=BC=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jyYpM-00017D-V8
 for xen-devel@lists.xenproject.org; Thu, 23 Jul 2020 10:51:52 +0000
X-Inumbo-ID: 82f20e01-ccd2-11ea-a278-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 82f20e01-ccd2-11ea-a278-12813bfff9fa;
 Thu, 23 Jul 2020 10:51:52 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 50C07AF2D;
 Thu, 23 Jul 2020 10:51:59 +0000 (UTC)
Subject: [PATCH 1/2] lockprof: don't leave locks uninitialized upon allocation
 failure
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <47f5478d-2f46-656c-0882-121aebc77f39@suse.com>
Message-ID: <7c4f50ce-6212-2f16-c9c5-c9af450b10ba@suse.com>
Date: Thu, 23 Jul 2020 12:51:53 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <47f5478d-2f46-656c-0882-121aebc77f39@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, George Dunlap <George.Dunlap@eu.citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Even if a specific struct lock_profile instance can't be allocated, the
lock itself should still be functional. As this isn't a production-use
feature, also log a message in the event that the profiling struct can't
be allocated.

Fixes: d98feda5c756 ("Make lock profiling usable again")
Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/include/xen/spinlock.h
+++ b/xen/include/xen/spinlock.h
@@ -103,10 +103,16 @@ struct lock_profile_qhead {
     do {                                                                      \
         struct lock_profile *prof;                                            \
         prof = xzalloc(struct lock_profile);                                  \
-        if (!prof) break;                                                     \
+        (s)->l = (spinlock_t)_SPIN_LOCK_UNLOCKED(prof);                       \
+        if ( !prof )                                                          \
+        {                                                                     \
+            printk(XENLOG_WARNING                                             \
+                   "lock profiling unavailable for %p(%d)'s " #l "\n",        \
+                   s, (s)->profile_head.idx);                                 \
+            break;                                                            \
+        }                                                                     \
         prof->name = #l;                                                      \
         prof->lock = &(s)->l;                                                 \
-        (s)->l = (spinlock_t)_SPIN_LOCK_UNLOCKED(prof);                       \
         prof->next = (s)->profile_head.elem_q;                                \
         (s)->profile_head.elem_q = prof;                                      \
     } while(0)



From xen-devel-bounces@lists.xenproject.org Thu Jul 23 10:52:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jul 2020 10:52:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyYpn-0001AV-Mb; Thu, 23 Jul 2020 10:52:19 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=9kJt=BC=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jyYpl-0001AL-Of
 for xen-devel@lists.xenproject.org; Thu, 23 Jul 2020 10:52:17 +0000
X-Inumbo-ID: 92811fc8-ccd2-11ea-86ee-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 92811fc8-ccd2-11ea-86ee-bc764e2007e4;
 Thu, 23 Jul 2020 10:52:16 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id C752CAE69;
 Thu, 23 Jul 2020 10:52:23 +0000 (UTC)
Subject: [PATCH 2/2] lockprof: don't pass name into registration function
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <47f5478d-2f46-656c-0882-121aebc77f39@suse.com>
Message-ID: <d8eab983-9377-a519-3be8-6ef83fa96516@suse.com>
Date: Thu, 23 Jul 2020 12:52:17 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <47f5478d-2f46-656c-0882-121aebc77f39@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, George Dunlap <George.Dunlap@eu.citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

The type uniquely identifies the associated name, hence the name fields
can be statically initialized.

Also constify not just the involved struct field, but also struct
lock_profile's. Rather than specifying lock_profile_ancs[]'s dimension at
definition time, add a suitable build-time check, such that at least
missing tail additions to the initializer can be spotted easily.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -392,7 +392,7 @@ struct domain *domain_create(domid_t dom
         d->max_vcpus = config->max_vcpus;
     }
 
-    lock_profile_register_struct(LOCKPROF_TYPE_PERDOM, d, domid, "Domain");
+    lock_profile_register_struct(LOCKPROF_TYPE_PERDOM, d, domid);
 
     if ( (err = xsm_alloc_security_domain(d)) != 0 )
         goto fail;
--- a/xen/common/spinlock.c
+++ b/xen/common/spinlock.c
@@ -338,7 +338,7 @@ void _spin_unlock_recursive(spinlock_t *
 
 struct lock_profile_anc {
     struct lock_profile_qhead *head_q;   /* first head of this type */
-    char                      *name;     /* descriptive string for print */
+    const char                *name;     /* descriptive string for print */
 };
 
 typedef void lock_profile_subfunc(
@@ -348,7 +348,10 @@ extern struct lock_profile *__lock_profi
 extern struct lock_profile *__lock_profile_end;
 
 static s_time_t lock_profile_start;
-static struct lock_profile_anc lock_profile_ancs[LOCKPROF_TYPE_N];
+static struct lock_profile_anc lock_profile_ancs[] = {
+    [LOCKPROF_TYPE_GLOBAL] = { .name = "Global" },
+    [LOCKPROF_TYPE_PERDOM] = { .name = "Domain" },
+};
 static struct lock_profile_qhead lock_profile_glb_q;
 static spinlock_t lock_profile_lock = SPIN_LOCK_UNLOCKED;
 
@@ -473,13 +476,12 @@ int spinlock_profile_control(struct xen_
 }
 
 void _lock_profile_register_struct(
-    int32_t type, struct lock_profile_qhead *qhead, int32_t idx, char *name)
+    int32_t type, struct lock_profile_qhead *qhead, int32_t idx)
 {
     qhead->idx = idx;
     spin_lock(&lock_profile_lock);
     qhead->head_q = lock_profile_ancs[type].head_q;
     lock_profile_ancs[type].head_q = qhead;
-    lock_profile_ancs[type].name = name;
     spin_unlock(&lock_profile_lock);
 }
 
@@ -504,6 +506,8 @@ static int __init lock_prof_init(void)
 {
     struct lock_profile **q;
 
+    BUILD_BUG_ON(ARRAY_SIZE(lock_profile_ancs) != LOCKPROF_TYPE_N);
+
     for ( q = &__lock_profile_start; q < &__lock_profile_end; q++ )
     {
         (*q)->next = lock_profile_glb_q.elem_q;
@@ -511,9 +515,8 @@ static int __init lock_prof_init(void)
         (*q)->lock->profile = *q;
     }
 
-    _lock_profile_register_struct(
-        LOCKPROF_TYPE_GLOBAL, &lock_profile_glb_q,
-        0, "Global lock");
+    _lock_profile_register_struct(LOCKPROF_TYPE_GLOBAL,
+                                  &lock_profile_glb_q, 0);
 
     return 0;
 }
--- a/xen/include/xen/spinlock.h
+++ b/xen/include/xen/spinlock.h
@@ -72,7 +72,7 @@ struct spinlock;
 
 struct lock_profile {
     struct lock_profile *next;       /* forward link */
-    char                *name;       /* lock name */
+    const char          *name;       /* lock name */
     struct spinlock     *lock;       /* the lock itself */
     u64                 lock_cnt;    /* # of complete locking ops */
     u64                 block_cnt;   /* # of complete wait for lock */
@@ -118,11 +118,11 @@ struct lock_profile_qhead {
     } while(0)
 
 void _lock_profile_register_struct(
-    int32_t, struct lock_profile_qhead *, int32_t, char *);
+    int32_t, struct lock_profile_qhead *, int32_t);
 void _lock_profile_deregister_struct(int32_t, struct lock_profile_qhead *);
 
-#define lock_profile_register_struct(type, ptr, idx, print)                   \
-    _lock_profile_register_struct(type, &((ptr)->profile_head), idx, print)
+#define lock_profile_register_struct(type, ptr, idx)                          \
+    _lock_profile_register_struct(type, &((ptr)->profile_head), idx)
 #define lock_profile_deregister_struct(type, ptr)                             \
     _lock_profile_deregister_struct(type, &((ptr)->profile_head))
 
@@ -138,7 +138,7 @@ struct lock_profile_qhead { };
 #define DEFINE_SPINLOCK(l) spinlock_t l = SPIN_LOCK_UNLOCKED
 
 #define spin_lock_init_prof(s, l) spin_lock_init(&((s)->l))
-#define lock_profile_register_struct(type, ptr, idx, print)
+#define lock_profile_register_struct(type, ptr, idx)
 #define lock_profile_deregister_struct(type, ptr)
 #define spinlock_profile_printall(key)
 



From xen-devel-bounces@lists.xenproject.org Thu Jul 23 11:01:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jul 2020 11:01:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyYyB-000298-IS; Thu, 23 Jul 2020 11:00:59 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=xWck=BC=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jyYyA-000293-9e
 for xen-devel@lists.xenproject.org; Thu, 23 Jul 2020 11:00:58 +0000
X-Inumbo-ID: c87681b2-ccd3-11ea-86ee-bc764e2007e4
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c87681b2-ccd3-11ea-86ee-bc764e2007e4;
 Thu, 23 Jul 2020 11:00:56 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595502056;
 h=subject:to:cc:references:from:message-id:date:
 mime-version:in-reply-to:content-transfer-encoding;
 bh=DSZDdZHienvWk0F8a6G+gA3BfU9aam8tN7agsDbnLXY=;
 b=Sub+95/yRcHGRJPZ2iDSEiVQ3oYqi1yVU3Ev1ysizmYDrV72FSGEYfYx
 NXI6HqCOgxSwPA6T7tGkIxv34InvYTSAaU+BMhK3NRnm6gcE/TwzuXyWL
 du7tceX3pjWWA861SfMKGGhWmeRIEFzQLGVYgZfn0LOCdY1jsRRDIuzhQ o=;
Authentication-Results: esa6.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: cH32bdtPvs4M+ydkQkC3OtU0hTk0b5mtSjMMKo7QAtlEKnwcn7ZtBKVV+VSYfPgzHAyshuUoCQ
 p2TOqw9YJELwxAPTcBUut5XpIx0uiVtj7kgiemaQaXMBz+3OuYjF1y0RkkNtN7u8bYHhvtOMlG
 OTbisKJRBJGL8yJ58+kXxvx9XA5/LtCLUHaDHAMcd7GwkYhjBSyaWutbLDHbJUwUXetHcDFN24
 Zb43Cz2qUdUk7TnVQPf0XZADKY+GCmoUBZ20sjPrkFQ7BCzL4XYVXs7rlh+b/K1+4yIWk1i+Kw
 9HY=
X-SBRS: 2.7
X-MesageID: 23353359
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,386,1589256000"; d="scan'208";a="23353359"
Subject: Re: [PATCH] x86/vmce: Dispatch vmce_{rd,wr}msr() from
 guest_{rd,wr}msr()
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <20200722101809.8389-1-andrew.cooper3@citrix.com>
 <20200723100727.GA7191@Air-de-Roger>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <ccc153a5-cf65-c483-43ea-d6b864366e06@citrix.com>
Date: Thu, 23 Jul 2020 12:00:53 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20200723100727.GA7191@Air-de-Roger>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 Jan Beulich <JBeulich@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 23/07/2020 11:07, Roger Pau Monné wrote:
> On Wed, Jul 22, 2020 at 11:18:09AM +0100, Andrew Cooper wrote:
>> ... rather than from the default clauses of the PV and HVM MSR handlers.
>>
>> This means that we no longer take the vmce lock for any unknown MSR, and
>> accesses to architectural MCE banks outside of the subset implemented for the
>> guest no longer fall further through the unknown MSR path.
>>
>> With the vmce calls removed, the hvm alternative_call()'s expression can be
>> simplified substantially.
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> LGTM, I just have one question below regarding the ranges.
>
> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
>
>> ---
>> CC: Jan Beulich <JBeulich@suse.com>
>> CC: Wei Liu <wl@xen.org>
>> CC: Roger Pau Monné <roger.pau@citrix.com>
>> ---
>>  xen/arch/x86/hvm/hvm.c         | 16 ++--------------
>>  xen/arch/x86/msr.c             | 16 ++++++++++++++++
>>  xen/arch/x86/pv/emul-priv-op.c | 15 ---------------
>>  3 files changed, 18 insertions(+), 29 deletions(-)
>>
>> diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
>> index 5bb47583b3..a9d1685549 100644
>> --- a/xen/arch/x86/hvm/hvm.c
>> +++ b/xen/arch/x86/hvm/hvm.c
>> @@ -3560,13 +3560,7 @@ int hvm_msr_read_intercept(unsigned int msr, uint64_t *msr_content)
>>           break;
>>  
>>      default:
>> -        if ( (ret = vmce_rdmsr(msr, msr_content)) < 0 )
>> -            goto gp_fault;
>> -        /* If ret == 0 then this is not an MCE MSR, see other MSRs. */
>> -        ret = ((ret == 0)
>> -               ? alternative_call(hvm_funcs.msr_read_intercept,
>> -                                  msr, msr_content)
>> -               : X86EMUL_OKAY);
>> +        ret = alternative_call(hvm_funcs.msr_read_intercept, msr, msr_content);
>>          break;
>>      }
>>  
>> @@ -3696,13 +3690,7 @@ int hvm_msr_write_intercept(unsigned int msr, uint64_t msr_content,
>>          break;
>>  
>>      default:
>> -        if ( (ret = vmce_wrmsr(msr, msr_content)) < 0 )
>> -            goto gp_fault;
>> -        /* If ret == 0 then this is not an MCE MSR, see other MSRs. */
>> -        ret = ((ret == 0)
>> -               ? alternative_call(hvm_funcs.msr_write_intercept,
>> -                                  msr, msr_content)
>> -               : X86EMUL_OKAY);
>> +        ret = alternative_call(hvm_funcs.msr_write_intercept, msr, msr_content);
>>          break;
>>      }
>>  
>> diff --git a/xen/arch/x86/msr.c b/xen/arch/x86/msr.c
>> index 22f921cc71..ca4307e19f 100644
>> --- a/xen/arch/x86/msr.c
>> +++ b/xen/arch/x86/msr.c
>> @@ -227,6 +227,14 @@ int guest_rdmsr(struct vcpu *v, uint32_t msr, uint64_t *val)
>>          *val = msrs->misc_features_enables.raw;
>>          break;
>>  
>> +    case MSR_IA32_MCG_CAP     ... MSR_IA32_MCG_CTL:      /* 0x179 -> 0x17b */
>> +    case MSR_IA32_MCx_CTL2(0) ... MSR_IA32_MCx_CTL2(31): /* 0x280 -> 0x29f */
>> +    case MSR_IA32_MCx_CTL(0)  ... MSR_IA32_MCx_MISC(31): /* 0x400 -> 0x47f */
> Where do you get the ranges from 0 to 31? It seems like the count
> field in the CAP register is 8 bits, which could allow for up to 256
> banks?
>
> I'm quite sure this would then overlap with other MSRs?

Irritatingly, nothing I can find actually states an upper architectural
limit.

The closest is SDM Vol4, Table 2-2, which enumerates the architectural
MSRs.

0x280 through 0x29f are explicitly reserved for MCx_CTL2, which is a
limit of 32 banks.  There are gaps after this in the architectural
table, but IceLake has PRMRR_BASE_0 at 0x2a0.

The main banks of MCx_{CTL,STATUS,ADDR,MISC} start at 0x400 and are
listed in the table up to 0x473, which is a limit of 29 banks.  The
model-specific table for SandyBridge fills in the remaining 3 banks up
to MSR 0x47f, which is the previous limit of 32 banks.  (These MSRs have
package scope rather than core/thread scope, but they are still
enumerated architecturally, so I'm not sure why they are in the
model-specific tables.)

More importantly however, the VMX MSR range starts at 0x480, immediately
above bank 31, which puts an architectural hard limit on the number of
banks.

>> +    case MSR_IA32_MCG_EXT_CTL:                           /* 0x4d0 */
>> +        if ( vmce_rdmsr(msr, val) < 0 )
>> +            goto gp_fault;
>> +        break;
>> +
>>      case MSR_X2APIC_FIRST ... MSR_X2APIC_LAST:
>>          if ( !is_hvm_domain(d) || v != curr )
>>              goto gp_fault;
>> @@ -436,6 +444,14 @@ int guest_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
>>          break;
>>      }
>>  
>> +    case MSR_IA32_MCG_CAP     ... MSR_IA32_MCG_CTL:      /* 0x179 -> 0x17b */
>> +    case MSR_IA32_MCx_CTL2(0) ... MSR_IA32_MCx_CTL2(31): /* 0x280 -> 0x29f */
>> +    case MSR_IA32_MCx_CTL(0)  ... MSR_IA32_MCx_MISC(31): /* 0x400 -> 0x47f */
>> +    case MSR_IA32_MCG_EXT_CTL:                           /* 0x4d0 */
>> +        if ( vmce_wrmsr(msr, val) < 0 )
>> +            goto gp_fault;
>> +        break;
>> +
>>      case MSR_X2APIC_FIRST ... MSR_X2APIC_LAST:
>>          if ( !is_hvm_domain(d) || v != curr )
>>              goto gp_fault;
>> diff --git a/xen/arch/x86/pv/emul-priv-op.c b/xen/arch/x86/pv/emul-priv-op.c
>> index 254da2b849..f14552cb4b 100644
>> --- a/xen/arch/x86/pv/emul-priv-op.c
>> +++ b/xen/arch/x86/pv/emul-priv-op.c
>> @@ -855,8 +855,6 @@ static int read_msr(unsigned int reg, uint64_t *val,
>>  
>>      switch ( reg )
>>      {
>> -        int rc;
>> -
>>      case MSR_FS_BASE:
>>          if ( is_pv_32bit_domain(currd) )
>>              break;
>> @@ -955,12 +953,6 @@ static int read_msr(unsigned int reg, uint64_t *val,
>>          }
>>          /* fall through */
>>      default:
>> -        rc = vmce_rdmsr(reg, val);
>> -        if ( rc < 0 )
>> -            break;
>> -        if ( rc )
>> -            return X86EMUL_OKAY;
>> -        /* fall through */
>>      normal:
> We could remove the 'normal' label and just use the default one
> instead.

You can't "goto default;", which is what a number of paths between these
two hunks do.

I do however have a plan to clean this up in due course.

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu Jul 23 11:16:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jul 2020 11:16:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyZCi-0003FR-86; Thu, 23 Jul 2020 11:16:00 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=9kJt=BC=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jyZCg-0003FK-Ra
 for xen-devel@lists.xenproject.org; Thu, 23 Jul 2020 11:15:58 +0000
X-Inumbo-ID: e19a3e34-ccd5-11ea-a27f-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e19a3e34-ccd5-11ea-a27f-12813bfff9fa;
 Thu, 23 Jul 2020 11:15:57 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id F1D23ABCF;
 Thu, 23 Jul 2020 11:16:04 +0000 (UTC)
Subject: Re: [PATCH] x86/msr: Unify the real {rd,wr}msr() paths in
 guest_{rd,wr}msr()
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <20200722105529.12177-1-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <4e5f1d63-5f22-a43d-e025-21aa34345092@suse.com>
Date: Thu, 23 Jul 2020 13:15:58 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20200722105529.12177-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 22.07.2020 12:55, Andrew Cooper wrote:
> Both the read and write side have commonalities which can be abstracted away.
> This also allows for additional safety in release builds, and slightly more
> helpful diagnostics in debug builds.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> ---
> CC: Jan Beulich <JBeulich@suse.com>
> CC: Wei Liu <wl@xen.org>
> CC: Roger Pau Monné <roger.pau@citrix.com>
> 
> I'm not a massive fan of the global scope want_rdmsr_safe boolean, but I can't
> think of a reasonable way to fix it without starting to use other
> flexibilities offered to us by C99.

I can't guess which C99 feature(s) you mean.
If there are any that would help, why not use them?

> @@ -204,10 +205,9 @@ int guest_rdmsr(struct vcpu *v, uint32_t msr, uint64_t *val)
>           */
>          if ( !(cp->x86_vendor & (X86_VENDOR_INTEL | X86_VENDOR_AMD)) ||
>               !(boot_cpu_data.x86_vendor &
> -               (X86_VENDOR_INTEL | X86_VENDOR_AMD)) ||
> -             rdmsr_safe(MSR_AMD_PATCHLEVEL, *val) )
> +               (X86_VENDOR_INTEL | X86_VENDOR_AMD)) )
>              goto gp_fault;
> -        break;
> +        goto read_from_hw_safe;

Above from here is a read from MSR_IA32_PLATFORM_ID - any reason
it doesn't also get folded?

> @@ -278,7 +278,7 @@ int guest_rdmsr(struct vcpu *v, uint32_t msr, uint64_t *val)
>           */
>  #ifdef CONFIG_HVM
>          if ( v == current && is_hvm_domain(d) && v->arch.hvm.flag_dr_dirty )
> -            rdmsrl(msr, *val);
> +            goto read_from_hw;

In the write path you also abstract out the check for v being current.
Wouldn't this better be abstracted out here, too, as reading an actual
MSR when not current isn't generally very helpful?

> @@ -493,8 +506,8 @@ int guest_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
>                                 ? 0 : (msr - MSR_AMD64_DR1_ADDRESS_MASK + 1),
>                                 ARRAY_SIZE(msrs->dr_mask))] = val;
>  
> -        if ( v == curr && (curr->arch.dr7 & DR7_ACTIVE_MASK) )
> -            wrmsrl(msr, val);
> +        if ( curr->arch.dr7 & DR7_ACTIVE_MASK )
> +            goto maybe_write_to_hw;
>          break;

I have to admit that I'd find it more logical if v was now used
here instead of curr.

> @@ -509,6 +522,23 @@ int guest_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
>  
>      return ret;
>  
> + maybe_write_to_hw:
> +    /*
> +     * All paths potentially updating a value in hardware need to check
> +     * whether the call is in current context or not, so the logic is
> +     * implemented here.  Remote context need do nothing more.
> +     */
> +    if ( v != curr || !wrmsr_safe(msr, val) )
> +        return X86EMUL_OKAY;
> +
> +    /*
> +     * Paths which end up here took a #GP fault in wrmsr_safe().  Something is
> +     * broken with the logic above, so make it obvious in debug builds, and
> +     * fail safe by handing #GP back to the guests in release builds.
> +     */
> +    gprintk(XENLOG_ERR, "Bad wrmsr %#x val %016"PRIx64"\n", msr, val);

Didn't you indicate more than once that you dislike mixing 0x-
prefixed and non-prefixed hex values in a single message?
(Personally I'd simply drop the #, but I expect you to prefer it
the other way around.)

Also both here and in the read path I'm unconvinced of the
"by handing #GP back" wording: When v != curr, no #GP fault can
typically be handed anywhere. And even when v == curr it's still
up to the caller to decide what to do. IOW how about "by
suggesting to hand back #GP" or some such?

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jul 23 11:17:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jul 2020 11:17:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyZEO-0003Nx-Jp; Thu, 23 Jul 2020 11:17:44 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=xWck=BC=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jyZEM-0003Nq-UT
 for xen-devel@lists.xenproject.org; Thu, 23 Jul 2020 11:17:43 +0000
X-Inumbo-ID: 1f6aefe2-ccd6-11ea-a27f-12813bfff9fa
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1f6aefe2-ccd6-11ea-a27f-12813bfff9fa;
 Thu, 23 Jul 2020 11:17:41 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595503062;
 h=subject:to:cc:references:from:message-id:date:
 mime-version:in-reply-to:content-transfer-encoding;
 bh=z0Fi62eFfvWzl4FKaC8rv7RJqCwnE5cHtJaqlHI+aIM=;
 b=KEdJXvdto4tMn0QgBc+0Dd2y5RFpOQKj/YadFZF+LrVNEX3FgTA+sNQa
 AokohYmdVs7Wxy2Q/qjomjvxnMvBSOTvTgHhuahBn2P4uV6K8h8RJX1vf
 8QayS05Fn0mOhCraH+tiMI4CCQ7ToaJA47DThnzaOxf6zQ8Gj/ULZuPr/ 4=;
Authentication-Results: esa3.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: y/85bcAa3FC1Ch+hia4US6SjegNJX2bm1nOrAfHj5eQLDQCgL0L968L5FYLvhqeOy0adYLyxv8
 J3QJcjfWVncCjYliyenMP2Z088u8bX4PmeqV745oCOFWNJDpPhAOwfDwUcDfWTXYXf5qELEz1y
 w2zol5+mC2fS61sTGnMfIs0EwSxTMtUQwo5Zm+Aba61w72p0MgGnazhI1O2zhcA2Xa5s/XhkTk
 jGym3SjeHyPZRb7pzjZJbP4b+kzZd86lggdaif1gYWYwaS+YeKiyGs2YctCeRM6MDMGDDV1b5W
 fFE=
X-SBRS: 2.7
X-MesageID: 23024645
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,386,1589256000"; d="scan'208";a="23024645"
Subject: Re: [PATCH] x86/vmce: Dispatch vmce_{rd,wr}msr() from
 guest_{rd,wr}msr()
To: Jan Beulich <jbeulich@suse.com>
References: <20200722101809.8389-1-andrew.cooper3@citrix.com>
 <76851f30-2003-2fee-221a-df70907ee91c@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <4c5edb17-faf8-7b8b-a896-2d60696a3bf2@citrix.com>
Date: Thu, 23 Jul 2020 12:17:37 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <76851f30-2003-2fee-221a-df70907ee91c@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 23/07/2020 11:37, Jan Beulich wrote:
> On 22.07.2020 12:18, Andrew Cooper wrote:
>> --- a/xen/arch/x86/msr.c
>> +++ b/xen/arch/x86/msr.c
>> @@ -227,6 +227,14 @@ int guest_rdmsr(struct vcpu *v, uint32_t msr, uint64_t *val)
>>          *val = msrs->misc_features_enables.raw;
>>          break;
>>  
>> +    case MSR_IA32_MCG_CAP     ... MSR_IA32_MCG_CTL:      /* 0x179 -> 0x17b */
>> +    case MSR_IA32_MCx_CTL2(0) ... MSR_IA32_MCx_CTL2(31): /* 0x280 -> 0x29f */
>> +    case MSR_IA32_MCx_CTL(0)  ... MSR_IA32_MCx_MISC(31): /* 0x400 -> 0x47f */
>> +    case MSR_IA32_MCG_EXT_CTL:                           /* 0x4d0 */
>> +        if ( vmce_rdmsr(msr, val) < 0 )
>> +            goto gp_fault;
>> +        break;
>> +
>>      case MSR_X2APIC_FIRST ... MSR_X2APIC_LAST:
>>          if ( !is_hvm_domain(d) || v != curr )
>>              goto gp_fault;
>> @@ -436,6 +444,14 @@ int guest_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
>>          break;
>>      }
>>  
>> +    case MSR_IA32_MCG_CAP     ... MSR_IA32_MCG_CTL:      /* 0x179 -> 0x17b */
>> +    case MSR_IA32_MCx_CTL2(0) ... MSR_IA32_MCx_CTL2(31): /* 0x280 -> 0x29f */
>> +    case MSR_IA32_MCx_CTL(0)  ... MSR_IA32_MCx_MISC(31): /* 0x400 -> 0x47f */
>> +    case MSR_IA32_MCG_EXT_CTL:                           /* 0x4d0 */
>> +        if ( vmce_wrmsr(msr, val) < 0 )
>> +            goto gp_fault;
>> +        break;
>> +
>>      case MSR_X2APIC_FIRST ... MSR_X2APIC_LAST:
>>          if ( !is_hvm_domain(d) || v != curr )
>>              goto gp_fault;
> With this the two functions also possibly returning 0 or 1 becomes
> meaningless. Would you think you can make then return bool at this
> occasion, or would you prefer to leave this to whenever someone
> gets to clean up this resulting anomaly? (I'm fine either way, but
> would prefer to not see the then meaningless tristate return values
> left in place.)

The entire internals of vmce_{wr,rd}msr() need an overhaul.

I tried switching them to use X86EMUL_* (at which point they will match
all the other subsystems we hand off MSR blocks to), but that quickly
turned into a larger mess than I have time for right now.  I've still
got the partial work so far, and will finish it at some point.

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu Jul 23 11:22:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jul 2020 11:22:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyZIs-0004Lu-9M; Thu, 23 Jul 2020 11:22:22 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=0L1b=BC=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jyZIr-0004LO-AS
 for xen-devel@lists.xenproject.org; Thu, 23 Jul 2020 11:22:21 +0000
X-Inumbo-ID: c58a3a90-ccd6-11ea-86ef-bc764e2007e4
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c58a3a90-ccd6-11ea-86ef-bc764e2007e4;
 Thu, 23 Jul 2020 11:22:20 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595503340;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:content-transfer-encoding:in-reply-to;
 bh=XMCaNp2k6EeuVt7pt+FtqxDkl3YZlIt2Q1ItqH2ww4g=;
 b=RbZBx0GY5yacPl2kdsjTV+XA/3ki2FbnvghoLhMFQXPT7LAhVHi48avY
 Z2rptiotcFvIXofhhoBMpcCN5E6DMYw47p8biVcr96k2tT4FPGqTzvpln
 MIX8wfCDV9DIaIl/i8m4BHnOZ3PzvODayVKuYM/rUYBHdQrqN8Lba4fcY A=;
Authentication-Results: esa4.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: uM54ZEpV6NCrdOd4ejUXQ+6xfuOc/7tuvAbjurWE2maj1DZkXTZIvFA2aZBGqDgVe87KT4DAXL
 0tbwioETkhdXmqJeKUc9pVUmJo8NCpCaRhE8lAV50xNHBzhkIeRYRpupbDgrkuHCxR07R0Eccm
 9FG3K20vAaUTOu954/sROvEmhLY5w/zTCzLXRpZaDETa+3kVK9K4bWukR/WShEgCrGPxl0bJ8p
 sNtGbSJlfkIC/U2Si8n8/MYjgmqV1Bb8p+g3eJDp5uSsgdjt8NBuyxPSOjzP4tOItJKDoyYL0x
 aMQ=
X-SBRS: 2.7
X-MesageID: 23889141
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,386,1589256000"; d="scan'208";a="23889141"
Date: Thu, 23 Jul 2020 13:22:13 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH] x86/msr: Unify the real {rd,wr}msr() paths in
 guest_{rd,wr}msr()
Message-ID: <20200723112213.GB7191@Air-de-Roger>
References: <20200722105529.12177-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20200722105529.12177-1-andrew.cooper3@citrix.com>
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 Jan Beulich <JBeulich@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Wed, Jul 22, 2020 at 11:55:29AM +0100, Andrew Cooper wrote:
> Both the read and write side have commonalities which can be abstracted away.
> This also allows for additional safety in release builds, and slightly more
> helpful diagnostics in debug builds.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

> ---
> CC: Jan Beulich <JBeulich@suse.com>
> CC: Wei Liu <wl@xen.org>
> CC: Roger Pau Monné <roger.pau@citrix.com>
> 
> I'm not a massive fan of the global scope want_rdmsr_safe boolean, but I can't
> think of a reasonable way to fix it without starting to use other
> flexibilities offered to us by C99.  (And to preempt the other question, an
> extra set of braces makes for extremely confusing-to-read logic.)

The logic could be moved to a helper that takes an expected_safe or
some such parameter, but I think I prefer this approach.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Thu Jul 23 11:23:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jul 2020 11:23:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyZK0-0004UE-NK; Thu, 23 Jul 2020 11:23:32 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=9kJt=BC=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jyZJz-0004Sx-3A
 for xen-devel@lists.xenproject.org; Thu, 23 Jul 2020 11:23:31 +0000
X-Inumbo-ID: eb0046b6-ccd6-11ea-a27f-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id eb0046b6-ccd6-11ea-a27f-12813bfff9fa;
 Thu, 23 Jul 2020 11:23:23 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 2CCDDABCF;
 Thu, 23 Jul 2020 11:23:30 +0000 (UTC)
Subject: Re: [PATCH] xen/x86: irq: Avoid a TOCTOU race in
 pirq_spin_lock_irq_desc()
To: Julien Grall <julien@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>
References: <20200722165300.22655-1-julien@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <c9863243-0b5e-521f-80b8-bc5673f895a6@suse.com>
Date: Thu, 23 Jul 2020 13:23:24 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20200722165300.22655-1-julien@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, Julien Grall <jgrall@amazon.com>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 22.07.2020 18:53, Julien Grall wrote:
> --- a/xen/arch/x86/irq.c
> +++ b/xen/arch/x86/irq.c
> @@ -1187,7 +1187,7 @@ struct irq_desc *pirq_spin_lock_irq_desc(
>  
>      for ( ; ; )
>      {
> -        int irq = pirq->arch.irq;
> +        int irq = read_atomic(&pirq->arch.irq);

There we go - I'd be fine this way, but I'm pretty sure Andrew
would want this to be ACCESS_ONCE(). So I guess now is the time
to settle which one to prefer in new code (or which criteria
there are to prefer one over the other).

And this is of course besides the fact that I think we have many
more instances where guaranteeing a single access would be
needed, if we're afraid of the described permitted compiler
behavior. Which then makes me wonder whether this is really something
we should fix one by one, rather than by at least larger-scope
audits (in order to not suggest "throughout the code base").

As a minor remark, unless you've observed problematic behavior,
would you mind adding "potential" or "theoretical" to the title?

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jul 23 11:23:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jul 2020 11:23:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyZK4-0004VA-Vd; Thu, 23 Jul 2020 11:23:36 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=xWck=BC=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jyZK3-0004Un-8y
 for xen-devel@lists.xenproject.org; Thu, 23 Jul 2020 11:23:35 +0000
X-Inumbo-ID: f1b43260-ccd6-11ea-86ef-bc764e2007e4
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f1b43260-ccd6-11ea-86ef-bc764e2007e4;
 Thu, 23 Jul 2020 11:23:34 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595503415;
 h=subject:to:cc:references:from:message-id:date:
 mime-version:in-reply-to:content-transfer-encoding;
 bh=5043FSVGr2QCvh6VcUR41H4uVQ36RMQBykxDE6RLfnI=;
 b=XeWHtdWLWMVOqxmZ5DM9qLmwnQTyJ5Kk20bXvi92fhsEhZ59QtikqBjp
 0xXQbGvqqL/jzgVgH/CQjpkQmTMGHhsD5pP/bXMeonWfzB/KkFQ39KJEf
 hvnPkerjqlZWfhlFTdf3KtEmj/PHJW4xpK0mUOMi2rrMunxNlfA5ix44a k=;
Authentication-Results: esa3.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: vxqR4kR3NFdeXoQIdd/lZ2KjBYnYUtfy8k4I7QKFU0B5I6AHWNTGf6L9c9ox5XlFeQE3qfMcOR
 PUxsS5JL2/DAJnFJSkkQOpeCqtACTlU9A75yhk/CRFPELO/eOBEStkzm/p5LoqBTveg+p4hUTu
 d9TCbHgOZrcURSbVmt2miurQmHOb+83finivTHy/cIcsv8Jxyskrt5tmlxHX7ECMn7LeXMRJjN
 yKiafd6R963rapUhglBRw5lVM0Sgvw3lMLtrp3uKyqFbm8ckHz/lzBYzhA9ummZ8XI9D7ia1lI
 Mn8=
X-SBRS: 2.7
X-MesageID: 23024954
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,386,1589256000"; d="scan'208";a="23024954"
Subject: Re: [PATCH 1/2] lockprof: don't leave locks uninitialized upon
 allocation failure
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
 <xen-devel@lists.xenproject.org>
References: <47f5478d-2f46-656c-0882-121aebc77f39@suse.com>
 <7c4f50ce-6212-2f16-c9c5-c9af450b10ba@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <39e8063c-3e91-fb27-1160-13baa0a97849@citrix.com>
Date: Thu, 23 Jul 2020 12:23:11 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <7c4f50ce-6212-2f16-c9c5-c9af450b10ba@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
 Ian Jackson <ian.jackson@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 23/07/2020 11:51, Jan Beulich wrote:
> Even if a specific struct lock_profile instance can't be allocated, the
> lock itself should still be functional. As this isn't a production use
> feature, also log a message in the event that the profiling struct can't
> be allocated.
>
> Fixes: d98feda5c756 ("Make lock profiling usable again")
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>
> --- a/xen/include/xen/spinlock.h
> +++ b/xen/include/xen/spinlock.h
> @@ -103,10 +103,16 @@ struct lock_profile_qhead {
>      do {                                                                      \
>          struct lock_profile *prof;                                            \
>          prof = xzalloc(struct lock_profile);                                  \
> -        if (!prof) break;                                                     \
> +        (s)->l = (spinlock_t)_SPIN_LOCK_UNLOCKED(prof);                       \
> +        if ( !prof )                                                          \
> +        {                                                                     \
> +            printk(XENLOG_WARNING                                             \
> +                   "lock profiling unavailable for %p(%d)'s " #l "\n",        \
> +                   s, (s)->profile_head.idx);                                 \

You'll end up with far less .rodata if you use %s and pass #l in as a
parameter.

Either way, Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

> +            break;                                                            \
> +        }                                                                     \
>          prof->name = #l;                                                      \
>          prof->lock = &(s)->l;                                                 \
> -        (s)->l = (spinlock_t)_SPIN_LOCK_UNLOCKED(prof);                       \
>          prof->next = (s)->profile_head.elem_q;                                \
>          (s)->profile_head.elem_q = prof;                                      \
>      } while(0)
>



From xen-devel-bounces@lists.xenproject.org Thu Jul 23 11:26:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jul 2020 11:26:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyZMz-0004iW-Dv; Thu, 23 Jul 2020 11:26:37 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=9kJt=BC=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jyZMx-0004iM-Vv
 for xen-devel@lists.xenproject.org; Thu, 23 Jul 2020 11:26:36 +0000
X-Inumbo-ID: 5d87c59d-ccd7-11ea-86f3-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5d87c59d-ccd7-11ea-86f3-bc764e2007e4;
 Thu, 23 Jul 2020 11:26:35 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 6FD2FAD93;
 Thu, 23 Jul 2020 11:26:42 +0000 (UTC)
Subject: Re: [PATCH 1/2] lockprof: don't leave locks uninitialized upon
 allocation failure
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <47f5478d-2f46-656c-0882-121aebc77f39@suse.com>
 <7c4f50ce-6212-2f16-c9c5-c9af450b10ba@suse.com>
 <39e8063c-3e91-fb27-1160-13baa0a97849@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <48417b2c-2d09-a63f-f300-8eb725339285@suse.com>
Date: Thu, 23 Jul 2020 13:26:36 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <39e8063c-3e91-fb27-1160-13baa0a97849@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, George Dunlap <George.Dunlap@eu.citrix.com>,
 Ian Jackson <ian.jackson@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 23.07.2020 13:23, Andrew Cooper wrote:
> On 23/07/2020 11:51, Jan Beulich wrote:
>> Even if a specific struct lock_profile instance can't be allocated, the
>> lock itself should still be functional. As this isn't a production use
>> feature, also log a message in the event that the profiling struct can't
>> be allocated.
>>
>> Fixes: d98feda5c756 ("Make lock profiling usable again")
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>
>> --- a/xen/include/xen/spinlock.h
>> +++ b/xen/include/xen/spinlock.h
>> @@ -103,10 +103,16 @@ struct lock_profile_qhead {
>>      do {                                                                      \
>>          struct lock_profile *prof;                                            \
>>          prof = xzalloc(struct lock_profile);                                  \
>> -        if (!prof) break;                                                     \
>> +        (s)->l = (spinlock_t)_SPIN_LOCK_UNLOCKED(prof);                       \
>> +        if ( !prof )                                                          \
>> +        {                                                                     \
>> +            printk(XENLOG_WARNING                                             \
>> +                   "lock profiling unavailable for %p(%d)'s " #l "\n",        \
>> +                   s, (s)->profile_head.idx);                                 \
> 
> You'll end up with far less .rodata if you use %s and pass #l in as a
> parameter.

Well, "far less" perhaps goes a little far, as we currently have
just three use sites, but I see your point and hence will switch.

> Either way, Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

Thanks, Jan


From xen-devel-bounces@lists.xenproject.org Thu Jul 23 11:27:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jul 2020 11:27:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyZO9-0004on-OH; Thu, 23 Jul 2020 11:27:49 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=xWck=BC=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jyZO8-0004od-Kd
 for xen-devel@lists.xenproject.org; Thu, 23 Jul 2020 11:27:48 +0000
X-Inumbo-ID: 8875288a-ccd7-11ea-a280-12813bfff9fa
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 8875288a-ccd7-11ea-a280-12813bfff9fa;
 Thu, 23 Jul 2020 11:27:47 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595503668;
 h=subject:to:cc:references:from:message-id:date:
 mime-version:in-reply-to:content-transfer-encoding;
 bh=FoW/AC5n1ITQNf8Twa9bU5uOV7JIe+/XBRQgH5QqNHk=;
 b=TXbDXVvkTks9fFDSACWU3b9vWKTIdWd1jGvSoy7gJ8pX6k2xmVfNWYbP
 6oSz2+GlG5l0jEVlIpHHy2jm7o7HZx3iMoPjssCKgg1MKrJGA5VbsbEZM
 IuU8P7nODtIhMMXmd2UkOojqWeaZ1jkfIcpN7sohVMudmOGgER0fQjHy+ g=;
Authentication-Results: esa3.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: c5Un9MxD484NGlctdOuM/ZAQpvHLUdTWiWsVwLGZBfbRlLMc8gdy1G5Zn3CuS8/HhJP3FFADGW
 I+QAUARchKPdYdCRWsuxIN/ahj5AVNG6PAaFX/h3zC3xSkPP8wlkE2L2ZkvSaCpIMPd0wRQxwv
 xeSbJklwQ2/yLwvrvrZq/Ay3Gd1dS/apvP4kZR+Le+RxYWO/oJ9QOu9SDD9hmNY6zXNlB14D5T
 nj8UxQeducYJPUklHb7Fmi+tcBIh6GjytkaIP1C78NNrlmal5KhhL2IVfvfH8i6WSqd152uiVb
 eME=
X-SBRS: 2.7
X-MesageID: 23025143
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,386,1589256000"; d="scan'208";a="23025143"
Subject: Re: [PATCH 2/2] lockprof: don't pass name into registration function
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
 <xen-devel@lists.xenproject.org>
References: <47f5478d-2f46-656c-0882-121aebc77f39@suse.com>
 <d8eab983-9377-a519-3be8-6ef83fa96516@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <34904795-b1da-15d9-4525-aa1210b45d1f@citrix.com>
Date: Thu, 23 Jul 2020 12:27:42 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <d8eab983-9377-a519-3be8-6ef83fa96516@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
 Ian Jackson <ian.jackson@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 23/07/2020 11:52, Jan Beulich wrote:
> The type uniquely identifies the associated name, hence the name fields
> can be statically initialized.
>
> Also constify not just the involved struct field, but also struct
> lock_profile's. Rather than specifying lock_profile_ancs[]' dimension at
> definition time, add a suitable build time check, such that at least
> missing tail additions to the initializer can be spotted easily.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>


From xen-devel-bounces@lists.xenproject.org Thu Jul 23 11:30:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jul 2020 11:30:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyZQp-0005dB-5x; Thu, 23 Jul 2020 11:30:35 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=0L1b=BC=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jyZQo-0005d3-2B
 for xen-devel@lists.xenproject.org; Thu, 23 Jul 2020 11:30:34 +0000
X-Inumbo-ID: eb242685-ccd7-11ea-a281-12813bfff9fa
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id eb242685-ccd7-11ea-a281-12813bfff9fa;
 Thu, 23 Jul 2020 11:30:32 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595503832;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:content-transfer-encoding:in-reply-to;
 bh=AEGXJnYUL36sPmyfPejacPpdAzMnNyNFpcFfg/8Dcwc=;
 b=CrH+/rCGPfvvhoez4tPgWrRACfDX+t1qlVAwvPc0PuJ4DHBqZUJFmVBZ
 5b8IDALzoIkvg6OGVBV1btIdFZTONE7BAoNq4B3x/bBUUoUORTYX8cADd
 n5XtPaFA8PU3Wz9a8CzxW6tNb80uR0T5OHar+UShBO/xLJTq344S5Q2Gi M=;
Authentication-Results: esa4.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: WNh0sKpjmcuyb8eJEuv9zo7IX2gwMJjXI2cuvfu94C5HwfUCbokOJVcr/MO2vueW5qKaUlFLQ8
 xmgWrrEOKBTRAHbQ27aqdn5sSV7Lx0JfbtfRcb3i3drfcGRer67UJ933t98A3V19nH4CNbBV7f
 O9+wUmuD4E0Kpx2shfYKJ6vNJb9QB6/uotOc87vUQIYmAU3dESqSZSGE/PCZkZAZB+v6fFHpua
 zKeCSFpkh1R3UQv1g9q9p8BCyXvAEAsbvZKkj2VIZdcBfcWJ9Mwvn1HDhjezTnll4ZgEgbB08k
 FVg=
X-SBRS: 2.7
X-MesageID: 23889623
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,386,1589256000"; d="scan'208";a="23889623"
Date: Thu, 23 Jul 2020 13:30:25 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH] x86/vmce: Dispatch vmce_{rd,wr}msr() from
 guest_{rd,wr}msr()
Message-ID: <20200723113025.GC7191@Air-de-Roger>
References: <20200722101809.8389-1-andrew.cooper3@citrix.com>
 <20200723100727.GA7191@Air-de-Roger>
 <ccc153a5-cf65-c483-43ea-d6b864366e06@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <ccc153a5-cf65-c483-43ea-d6b864366e06@citrix.com>
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 Jan Beulich <JBeulich@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Thu, Jul 23, 2020 at 12:00:53PM +0100, Andrew Cooper wrote:
> On 23/07/2020 11:07, Roger Pau Monné wrote:
> > On Wed, Jul 22, 2020 at 11:18:09AM +0100, Andrew Cooper wrote:
> >> +    case MSR_IA32_MCG_CAP     ... MSR_IA32_MCG_CTL:      /* 0x179 -> 0x17b */
> >> +    case MSR_IA32_MCx_CTL2(0) ... MSR_IA32_MCx_CTL2(31): /* 0x280 -> 0x29f */
> >> +    case MSR_IA32_MCx_CTL(0)  ... MSR_IA32_MCx_MISC(31): /* 0x400 -> 0x47f */
> > Where do you get the ranges from 0 to 31? It seems like the count
> > field in the CAP register is 8 bits, which could allow for up to 256
> > banks?
> >
> > I'm quite sure this would then overlap with other MSRs?
> 
> Irritatingly, nothing I can find actually states an upper architectural
> limit.
> 
> SDM Vol4, Table 2-2 which enumerates the Architectural MSRs.
> 
> 0x280 thru 0x29f are explicitly reserved MCx_CTL2, which is a limit of
> 32 banks.  There are gaps after this in the architectural table, but
> IceLake has PRMRR_BASE_0 at 0x2a0.
> 
> The main bank of MCx_{CTL,STATUS,ADDR,MISC} start at 0x400 and are
> listed in the table up to 0x473, which is a limit of 29 banks.  The
> Model specific table for SandyBridge fills in the remaining 3 banks up
> to MSR 0x47f, which is the previous limit of 32 banks.  (These MSRs have
> package scope rather than core/thread scope, but they are still
> enumerated architecturally so I'm not sure why they are in the model
> specific tables.)
> 
> More importantly however, the VMX MSR range starts at 0x480, immediately
> above bank 31, which puts an architectural hard limit on the number of
> banks.

Yes, I realized the VMX MSRs start at 0x480, which limits the number
of banks. Maybe add a small comment noting that although the count in
the CAP register could go up to 256, 32 is the actual limit due to
how the MSRs are arranged?

Note there's also GUEST_MC_BANK_NUM, which is the actual implementation
limit in Xen AFAICT; maybe using it here would be clearer (and would
limit the ranges forwarded to vmce_rdmsr)?

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Thu Jul 23 11:37:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jul 2020 11:37:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyZXF-0005t4-UU; Thu, 23 Jul 2020 11:37:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=64jQ=BC=redhat.com=david@srs-us1.protection.inumbo.net>)
 id 1jyZXE-0005sz-0y
 for xen-devel@lists.xenproject.org; Thu, 23 Jul 2020 11:37:12 +0000
X-Inumbo-ID: d814ea1e-ccd8-11ea-86f7-bc764e2007e4
Received: from us-smtp-delivery-1.mimecast.com (unknown [207.211.31.81])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id d814ea1e-ccd8-11ea-86f7-bc764e2007e4;
 Thu, 23 Jul 2020 11:37:09 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
 s=mimecast20190719; t=1595504229;
 h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
 content-transfer-encoding:content-transfer-encoding:
 in-reply-to:in-reply-to:references:references:autocrypt:autocrypt;
 bh=cut4QnIcrkt41QqkMvdydzME5f2nRZ3+Ar1gb+kat8E=;
 b=VE/Rrow1Aa7Y8kr8VXn/frvPWpTL1TnI3kuusYXwz1M7X9/K3A4PGGxLAMrcsKcAaGgDdE
 tSEKRH3G+84cEUrlsAaJzyQvmvP7WNtFVao54OKswPX2gp7IdEzqvVF/1VxEEQY0WMQHmq
 n/s/4mvUbjcECXkPSZNwxvVy6iXMlao=
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-125-87lMikxXOdSzORwwInr4Ag-1; Thu, 23 Jul 2020 07:37:07 -0400
X-MC-Unique: 87lMikxXOdSzORwwInr4Ag-1
Received: from smtp.corp.redhat.com (int-mx04.intmail.prod.int.phx2.redhat.com
 [10.5.11.14])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id EA5A380BCA4;
 Thu, 23 Jul 2020 11:37:05 +0000 (UTC)
Received: from [10.36.114.90] (ovpn-114-90.ams2.redhat.com [10.36.114.90])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 233F65D9D3;
 Thu, 23 Jul 2020 11:37:03 +0000 (UTC)
Subject: Re: [PATCH 3/3] memory: introduce an option to force onlining of
 hotplug memory
To: Roger Pau Monne <roger.pau@citrix.com>, linux-kernel@vger.kernel.org
References: <20200723084523.42109-1-roger.pau@citrix.com>
 <20200723084523.42109-4-roger.pau@citrix.com>
From: David Hildenbrand <david@redhat.com>
Autocrypt: addr=david@redhat.com; prefer-encrypt=mutual; keydata=
 mQINBFXLn5EBEAC+zYvAFJxCBY9Tr1xZgcESmxVNI/0ffzE/ZQOiHJl6mGkmA1R7/uUpiCjJ
 dBrn+lhhOYjjNefFQou6478faXE6o2AhmebqT4KiQoUQFV4R7y1KMEKoSyy8hQaK1umALTdL
 QZLQMzNE74ap+GDK0wnacPQFpcG1AE9RMq3aeErY5tujekBS32jfC/7AnH7I0v1v1TbbK3Gp
 XNeiN4QroO+5qaSr0ID2sz5jtBLRb15RMre27E1ImpaIv2Jw8NJgW0k/D1RyKCwaTsgRdwuK
 Kx/Y91XuSBdz0uOyU/S8kM1+ag0wvsGlpBVxRR/xw/E8M7TEwuCZQArqqTCmkG6HGcXFT0V9
 PXFNNgV5jXMQRwU0O/ztJIQqsE5LsUomE//bLwzj9IVsaQpKDqW6TAPjcdBDPLHvriq7kGjt
 WhVhdl0qEYB8lkBEU7V2Yb+SYhmhpDrti9Fq1EsmhiHSkxJcGREoMK/63r9WLZYI3+4W2rAc
 UucZa4OT27U5ZISjNg3Ev0rxU5UH2/pT4wJCfxwocmqaRr6UYmrtZmND89X0KigoFD/XSeVv
 jwBRNjPAubK9/k5NoRrYqztM9W6sJqrH8+UWZ1Idd/DdmogJh0gNC0+N42Za9yBRURfIdKSb
 B3JfpUqcWwE7vUaYrHG1nw54pLUoPG6sAA7Mehl3nd4pZUALHwARAQABtCREYXZpZCBIaWxk
 ZW5icmFuZCA8ZGF2aWRAcmVkaGF0LmNvbT6JAlgEEwEIAEICGwMGCwkIBwMCBhUIAgkKCwQW
 AgMBAh4BAheAAhkBFiEEG9nKrXNcTDpGDfzKTd4Q9wD/g1oFAl8Ox4kFCRKpKXgACgkQTd4Q
 9wD/g1oHcA//a6Tj7SBNjFNM1iNhWUo1lxAja0lpSodSnB2g4FCZ4R61SBR4l/psBL73xktp
 rDHrx4aSpwkRP6Epu6mLvhlfjmkRG4OynJ5HG1gfv7RJJfnUdUM1z5kdS8JBrOhMJS2c/gPf
 wv1TGRq2XdMPnfY2o0CxRqpcLkx4vBODvJGl2mQyJF/gPepdDfcT8/PY9BJ7FL6Hrq1gnAo4
 3Iv9qV0JiT2wmZciNyYQhmA1V6dyTRiQ4YAc31zOo2IM+xisPzeSHgw3ONY/XhYvfZ9r7W1l
 pNQdc2G+o4Di9NPFHQQhDw3YTRR1opJaTlRDzxYxzU6ZnUUBghxt9cwUWTpfCktkMZiPSDGd
 KgQBjnweV2jw9UOTxjb4LXqDjmSNkjDdQUOU69jGMUXgihvo4zhYcMX8F5gWdRtMR7DzW/YE
 BgVcyxNkMIXoY1aYj6npHYiNQesQlqjU6azjbH70/SXKM5tNRplgW8TNprMDuntdvV9wNkFs
 9TyM02V5aWxFfI42+aivc4KEw69SE9KXwC7FSf5wXzuTot97N9Phj/Z3+jx443jo2NR34XgF
 89cct7wJMjOF7bBefo0fPPZQuIma0Zym71cP61OP/i11ahNye6HGKfxGCOcs5wW9kRQEk8P9
 M/k2wt3mt/fCQnuP/mWutNPt95w9wSsUyATLmtNrwccz63W5Ag0EVcufkQEQAOfX3n0g0fZz
 Bgm/S2zF/kxQKCEKP8ID+Vz8sy2GpDvveBq4H2Y34XWsT1zLJdvqPI4af4ZSMxuerWjXbVWb
 T6d4odQIG0fKx4F8NccDqbgHeZRNajXeeJ3R7gAzvWvQNLz4piHrO/B4tf8svmRBL0ZB5P5A
 2uhdwLU3NZuK22zpNn4is87BPWF8HhY0L5fafgDMOqnf4guJVJPYNPhUFzXUbPqOKOkL8ojk
 CXxkOFHAbjstSK5Ca3fKquY3rdX3DNo+EL7FvAiw1mUtS+5GeYE+RMnDCsVFm/C7kY8c2d0G
 NWkB9pJM5+mnIoFNxy7YBcldYATVeOHoY4LyaUWNnAvFYWp08dHWfZo9WCiJMuTfgtH9tc75
 7QanMVdPt6fDK8UUXIBLQ2TWr/sQKE9xtFuEmoQGlE1l6bGaDnnMLcYu+Asp3kDT0w4zYGsx
 5r6XQVRH4+5N6eHZiaeYtFOujp5n+pjBaQK7wUUjDilPQ5QMzIuCL4YjVoylWiBNknvQWBXS
 lQCWmavOT9sttGQXdPCC5ynI+1ymZC1ORZKANLnRAb0NH/UCzcsstw2TAkFnMEbo9Zu9w7Kv
 AxBQXWeXhJI9XQssfrf4Gusdqx8nPEpfOqCtbbwJMATbHyqLt7/oz/5deGuwxgb65pWIzufa
 N7eop7uh+6bezi+rugUI+w6DABEBAAGJAjwEGAEIACYCGwwWIQQb2cqtc1xMOkYN/MpN3hD3
 AP+DWgUCXw7HsgUJEqkpoQAKCRBN3hD3AP+DWrrpD/4qS3dyVRxDcDHIlmguXjC1Q5tZTwNB
 boaBTPHSy/Nksu0eY7x6HfQJ3xajVH32Ms6t1trDQmPx2iP5+7iDsb7OKAb5eOS8h+BEBDeq
 3ecsQDv0fFJOA9ag5O3LLNk+3x3q7e0uo06XMaY7UHS341ozXUUI7wC7iKfoUTv03iO9El5f
 XpNMx/YrIMduZ2+nd9Di7o5+KIwlb2mAB9sTNHdMrXesX8eBL6T9b+MZJk+mZuPxKNVfEQMQ
 a5SxUEADIPQTPNvBewdeI80yeOCrN+Zzwy/Mrx9EPeu59Y5vSJOx/z6OUImD/GhX7Xvkt3kq
 Er5KTrJz3++B6SH9pum9PuoE/k+nntJkNMmQpR4MCBaV/J9gIOPGodDKnjdng+mXliF3Ptu6
 3oxc2RCyGzTlxyMwuc2U5Q7KtUNTdDe8T0uE+9b8BLMVQDDfJjqY0VVqSUwImzTDLX9S4g/8
 kC4HRcclk8hpyhY2jKGluZO0awwTIMgVEzmTyBphDg/Gx7dZU1Xf8HFuE+UZ5UDHDTnwgv7E
 th6RC9+WrhDNspZ9fJjKWRbveQgUFCpe1sa77LAw+XFrKmBHXp9ZVIe90RMe2tRL06BGiRZr
 jPrnvUsUUsjRoRNJjKKA/REq+sAnhkNPPZ/NNMjaZ5b8Tovi8C0tmxiCHaQYqj7G2rgnT0kt
 WNyWQQ==
Organization: Red Hat GmbH
Message-ID: <21490d49-b2cf-a398-0609-8010bdb0b004@redhat.com>
Date: Thu, 23 Jul 2020 13:37:03 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.9.0
MIME-Version: 1.0
In-Reply-To: <20200723084523.42109-4-roger.pau@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.14
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, linux-mm@kvack.org,
 xen-devel@lists.xenproject.org, Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Andrew Morton <akpm@linux-foundation.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 23.07.20 10:45, Roger Pau Monne wrote:
> Add an extra option to add_memory_resource that overrides the memory
> hotplug online behavior in order to force onlining of memory from
> add_memory_resource unconditionally.
> 
> This is required for the Xen balloon driver, which must run the
> online page callback in order to correctly process the newly added
> memory region. Note this is an unpopulated region that is used by
> Linux either to hotplug RAM or to map foreign pages from other
> domains; hence memory hotplug when running on Xen can be used even
> without the user explicitly requesting it, as part of the normal
> operation of the OS when attempting to map memory from a different
> domain.
> 
> Setting a different default value of memhp_default_online_type when
> attaching the balloon driver is not a robust solution, as the user (or
> distro init scripts) could still change it and thus break the Xen
> balloon driver.

I think we discussed this a couple of times before (even triggered by my
request), and this is the responsibility of user space to configure.
Usually distros have udev rules to online memory automatically. In
particular, user space should be able to configure *how* to online
memory.

It's the admin's/distro's responsibility to configure this properly. In
case this doesn't happen (or, as you say, users change it), bad luck.
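The udev mechanism referred to is typically a one-line rule such as the following (an illustrative sketch; distros ship their own variants, and the rules file path varies):

```
# e.g. /etc/udev/rules.d/40-hotplug-memory.rules (path is illustrative)
SUBSYSTEM=="memory", ACTION=="add", ATTR{state}=="offline", ATTR{state}="online"
```

This onlines each hotplugged memory block as soon as the kernel announces it, without any in-kernel forcing.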

E.g., virtio-mem takes care not to add more memory in case it is not
getting onlined. I remember Hyper-V has similar code to at least wait a
bit for memory to get onlined.

Nacked-by: David Hildenbrand <david@redhat.com>

-- 
Thanks,

David / dhildenb



From xen-devel-bounces@lists.xenproject.org Thu Jul 23 11:52:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jul 2020 11:52:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyZlj-0007dX-E2; Thu, 23 Jul 2020 11:52:11 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=64jQ=BC=redhat.com=david@srs-us1.protection.inumbo.net>)
 id 1jyZli-0007dS-9i
 for xen-devel@lists.xenproject.org; Thu, 23 Jul 2020 11:52:10 +0000
X-Inumbo-ID: f01c7eb8-ccda-11ea-a284-12813bfff9fa
Received: from us-smtp-delivery-1.mimecast.com (unknown [205.139.110.61])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id f01c7eb8-ccda-11ea-a284-12813bfff9fa;
 Thu, 23 Jul 2020 11:52:09 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
 s=mimecast20190719; t=1595505128;
 h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
 content-transfer-encoding:content-transfer-encoding:
 in-reply-to:in-reply-to:references:references:autocrypt:autocrypt;
 bh=CIXSVu9nQP+8PDb0hzoMQSS9r8hDzc01kzYIvCbHTjQ=;
 b=dTUzxvGaXVwF4idwYU5Fs7LGI0/pFVwmH0jVGGXSzcFrUvlCZJLibJJ+h4lTWqCYodANXV
 YjpuyPzMjiY1qpqA8eluTZdQ7gsz+ZcWfAWDBqcGduTddR5bTqZMduMUqHViHIim3UL4CW
 9NQPsTIdDCwCHOlPbruk3E5IHX8oIs4=
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-442-wWfQow7ePFKPEJa9Ohw43Q-1; Thu, 23 Jul 2020 07:52:06 -0400
X-MC-Unique: wWfQow7ePFKPEJa9Ohw43Q-1
Received: from smtp.corp.redhat.com (int-mx06.intmail.prod.int.phx2.redhat.com
 [10.5.11.16])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 5A4961009440;
 Thu, 23 Jul 2020 11:52:05 +0000 (UTC)
Received: from [10.36.114.90] (ovpn-114-90.ams2.redhat.com [10.36.114.90])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 71FFB5C1BD;
 Thu, 23 Jul 2020 11:52:03 +0000 (UTC)
Subject: Re: [PATCH 3/3] memory: introduce an option to force onlining of
 hotplug memory
From: David Hildenbrand <david@redhat.com>
To: Roger Pau Monne <roger.pau@citrix.com>, linux-kernel@vger.kernel.org
References: <20200723084523.42109-1-roger.pau@citrix.com>
 <20200723084523.42109-4-roger.pau@citrix.com>
 <21490d49-b2cf-a398-0609-8010bdb0b004@redhat.com>
Organization: Red Hat GmbH
Message-ID: <18f3987f-d2ca-409b-951d-20381d96e3a8@redhat.com>
Date: Thu, 23 Jul 2020 13:52:02 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.9.0
MIME-Version: 1.0
In-Reply-To: <21490d49-b2cf-a398-0609-8010bdb0b004@redhat.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.16
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, linux-mm@kvack.org,
 xen-devel@lists.xenproject.org, Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Andrew Morton <akpm@linux-foundation.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 23.07.20 13:37, David Hildenbrand wrote:
> On 23.07.20 10:45, Roger Pau Monne wrote:
>> Add an extra option to add_memory_resource that overrides the memory
>> hotplug online behavior in order to force onlining of memory from
>> add_memory_resource unconditionally.
>>
>> This is required for the Xen balloon driver, which must run the
>> online page callback in order to correctly process the newly added
>> memory region. Note this is an unpopulated region that is used by Linux
>> to either hotplug RAM or to map foreign pages from other domains, and
>> hence memory hotplug when running on Xen can be used even without the
>> user explicitly requesting it, as part of the normal operations of the
>> OS when attempting to map memory from a different domain.
>>
>> Setting a different default value of memhp_default_online_type when
>> attaching the balloon driver is not a robust solution, as the user (or
>> distro init scripts) could still change it and thus break the Xen
>> balloon driver.
> 
> I think we discussed this a couple of times before (even triggered by my
> request), and this is responsibility of user space to configure. Usually
> distros have udev rules to online memory automatically. Especially, user
> space should be able to configure *how* to online memory.
> 
> It's the admin/distro responsibility to configure this properly. In case
> this doesn't happen (or as you say, users change it), bad luck.
> 
> E.g., virtio-mem takes care to not add more memory in case it is not
> getting onlined. I remember hyper-v has similar code to at least wait a
> bit for memory to get onlined.
> 
> Nacked-by: David Hildenbrand <david@redhat.com>
> 

Oh, BTW, I removed that "online" parameter in

commit f29d8e9c0191a2a02500945db505e5c89159c3f4
Author: David Hildenbrand <david@redhat.com>
Date:   Fri Dec 28 00:35:36 2018 -0800

    mm/memory_hotplug: drop "online" parameter from add_memory_resource()
    
    Userspace should always be in charge of how to online memory and if memory
    should be onlined automatically in the kernel.  Let's drop the parameter
    to overwrite this - XEN passes memhp_auto_online, just like add_memory(),
    so we can directly use that instead internally.


Xen was passing "memhp_auto_online" since

commit 703fc13a3f6615e29ce3eb862275d7b58a5d03ba
Author: Vitaly Kuznetsov <vkuznets@redhat.com>
Date:   Tue Mar 15 14:56:52 2016 -0700

    xen_balloon: support memory auto onlining policy
    
    Add support for the newly added kernel memory auto onlining policy to
    Xen balloon driver.


And before that I assume Xen was relying entirely on udev rules to
handle it. The parameter was introduced in

commit 31bc3858ea3ebcc3157b3f5f0e624c5962f5a7a6
Author: Vitaly Kuznetsov <vkuznets@redhat.com>
Date:   Tue Mar 15 14:56:48 2016 -0700

    memory-hotplug: add automatic onlining policy for the newly added memory
    
    Currently, all newly added memory blocks remain in 'offline' state
    unless someone onlines them; some Linux distributions carry special udev
    rules like:
    
      SUBSYSTEM=="memory", ACTION=="add", ATTR{state}=="offline", ATTR{state}="online"


-- 
Thanks,

David / dhildenb



From xen-devel-bounces@lists.xenproject.org Thu Jul 23 12:17:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jul 2020 12:17:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyaAE-0001DT-Sk; Thu, 23 Jul 2020 12:17:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=xWck=BC=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jyaAD-0001DO-Vi
 for xen-devel@lists.xenproject.org; Thu, 23 Jul 2020 12:17:30 +0000
X-Inumbo-ID: 795b688b-ccde-11ea-86ff-bc764e2007e4
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 795b688b-ccde-11ea-86ff-bc764e2007e4;
 Thu, 23 Jul 2020 12:17:28 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595506648;
 h=subject:to:cc:references:from:message-id:date:
 mime-version:in-reply-to:content-transfer-encoding;
 bh=P5669HyCFd48b48gsBzCgBvTK9seeGKXQRhXcpz/vVQ=;
 b=BsZ97IvorHQ1PTg/GLjU81P5eFZY70lXQfdtXeDnbwgO87LTWegVelk7
 c/RZBCnLLU6xFLmLPOoxLvqn8g/PrRFDcTgwS8TQm9Is9lyD2sFPrCQ9G
 e2VCp3uCFgsvM9UwhTvgldIAtbIb34l2pLWhpTH84ZtFv3Toiw4kFyW+o Q=;
Authentication-Results: esa4.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: Jm5aV3RJiMdNLdR3BaTuRlDZxYoECC0CKx4201Q+yidhKwdtXRx/hL46lPo7mTckxAqCnyIcWV
 5D9s0XvfvtQbarZTVYpl7xQsmRa4zbdT/boBSW5m7hpk037LoJyTpHm0EhQAPKNOYKq69dQ6sm
 V+N1ypeNZwCqMM1myOU2fdHsKf8yC+lQNqaX9CIiBsvHnU7EfQZGqWcpUQrOFIdr3l6M0qzyJ8
 SgaM3pKphSbmeEBWWmA5ykZwM1Wg1Pz5Px0cFijxVK4ctXMj3zNfUu89P9YWm5U3VY92DxaBbM
 VcY=
X-SBRS: 2.7
X-MesageID: 23892952
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,386,1589256000"; d="scan'208";a="23892952"
Subject: Re: [PATCH] x86/msr: Unify the real {rd,wr}msr() paths in
 guest_{rd,wr}msr()
To: Jan Beulich <jbeulich@suse.com>
References: <20200722105529.12177-1-andrew.cooper3@citrix.com>
 <4e5f1d63-5f22-a43d-e025-21aa34345092@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <4b6f6dad-a831-60f1-313c-d80ed442eed9@citrix.com>
Date: Thu, 23 Jul 2020 13:17:24 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <4e5f1d63-5f22-a43d-e025-21aa34345092@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 23/07/2020 12:15, Jan Beulich wrote:
> On 22.07.2020 12:55, Andrew Cooper wrote:
>> Both the read and write side have commonalities which can be abstracted away.
>> This also allows for additional safety in release builds, and slightly more
>> helpful diagnostics in debug builds.
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>> ---
>> CC: Jan Beulich <JBeulich@suse.com>
>> CC: Wei Liu <wl@xen.org>
>> CC: Roger Pau Monné <roger.pau@citrix.com>
>>
>> I'm not a massive fan of the global scope want_rdmsr_safe boolean, but I can't
>> think of a reasonable way to fix it without starting to use other
>> flexibilities offered to us by C99.
> I can't seem to guess what C99 feature(s) you mean.
> If there are any that would help, why not use them?

This hunk:

@@ -154,7 +154,6 @@ int guest_rdmsr(struct vcpu *v, uint32_t msr, uint64_t *val)
     const struct cpuid_policy *cp = d->arch.cpuid;
     const struct msr_policy *mp = d->arch.msr;
     const struct vcpu_msrs *msrs = v->arch.msrs;
-    bool want_rdmsr_safe = false;
     int ret = X86EMUL_OKAY;
 
     switch ( msr )
@@ -303,6 +302,8 @@ int guest_rdmsr(struct vcpu *v, uint32_t msr, uint64_t *val)
 
     return ret;
 
+    bool want_rdmsr_safe = false;
+
  read_from_hw_safe:
     want_rdmsr_safe = true;
  read_from_hw:


Except that in our root Config.mk, we pass $(call
cc-option-add,CFLAGS,CC,-Wdeclaration-after-statement)  (and then
various bits of tools/ override to -Wno-declaration-after-statement).

Perhaps this is something we want to generally permit across our
codebase, seeing as some pieces already depend on it.
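For clarity, the C99 flexibility at stake is the mid-block declaration
itself. A minimal standalone sketch of the pattern in that hunk
(function and label names are illustrative, not the real guest_rdmsr()
logic):

```c
#include <stdbool.h>

/* A declaration placed after statements is valid C99 but rejected by
 * C89, and warned about under -Wdeclaration-after-statement, which
 * Xen's root Config.mk passes. */
static int read_reg(int use_safe_path, unsigned int *val)
{
    int ret = 0;

    if (use_safe_path)
        goto read_safe;

    *val = 1;                     /* ordinary path */
    return ret;

    /* Mid-block declaration after statements.  Note the goto above
     * jumps past it, so the initializer never runs -- the variable is
     * assigned below before any use. */
    bool want_safe = false;

 read_safe:
    want_safe = true;
    *val = want_safe ? 2 : 1;
    return ret;
}
```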

>> @@ -204,10 +205,9 @@ int guest_rdmsr(struct vcpu *v, uint32_t msr, uint64_t *val)
>>           */
>>          if ( !(cp->x86_vendor & (X86_VENDOR_INTEL | X86_VENDOR_AMD)) ||
>>               !(boot_cpu_data.x86_vendor &
>> -               (X86_VENDOR_INTEL | X86_VENDOR_AMD)) ||
>> -             rdmsr_safe(MSR_AMD_PATCHLEVEL, *val) )
>> +               (X86_VENDOR_INTEL | X86_VENDOR_AMD)) )
>>              goto gp_fault;
>> -        break;
>> +        goto read_from_hw_safe;
> Above from here is a read from MSR_IA32_PLATFORM_ID - any reason
> it doesn't also get folded?

Oh - looks to be a rebasing error.  This patch is actually more than a
year old at this point.

>> @@ -278,7 +278,7 @@ int guest_rdmsr(struct vcpu *v, uint32_t msr, uint64_t *val)
>>           */
>>  #ifdef CONFIG_HVM
>>          if ( v == current && is_hvm_domain(d) && v->arch.hvm.flag_dr_dirty )
>> -            rdmsrl(msr, *val);
>> +            goto read_from_hw;
> In the write path you also abstract out the check for v being current.
> Wouldn't this better be abstracted out here, too, as reading an actual
> MSR when not current isn't generally very helpful?

This is rather complicated to answer.

Take the example of PLATFORM_ID above: it is consistent across the
entire system, and therefore it doesn't matter if we read it in
non-current context.

More generally however, the read and write paths truly are asymmetric
when it comes to their use in remote context.  Read is "I need this
value now", so always has to be of the form "if current do one thing,
else read from struct vcpu", whereas write is "always update struct
vcpu/etc, and let context switch handle getting it into hardware".
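That asymmetry can be sketched as follows (an illustrative model only,
not Xen code; all names here are made up):

```c
#include <stdint.h>

/* Read wants the value now, so it must branch on whether the target
 * vCPU is the one currently running; write can always go to the
 * in-memory copy and rely on the context switch path to load it. */
struct vcpu_model {
    int is_current;        /* stands in for v == current */
    uint64_t cached_msr;   /* stands in for the struct vcpu copy */
    uint64_t *hw_msr;      /* stands in for the real MSR */
};

static uint64_t model_rdmsr(const struct vcpu_model *v)
{
    /* "I need this value now": hardware if current, cache otherwise. */
    return v->is_current ? *v->hw_msr : v->cached_msr;
}

static void model_wrmsr(struct vcpu_model *v, uint64_t val)
{
    /* Always update the in-memory copy... */
    v->cached_msr = val;
    /* ...and only touch hardware in current context; a remote write is
     * picked up when the vCPU is next context-switched in. */
    if (v->is_current)
        *v->hw_msr = val;
}
```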


Then again, the more I think about this, the more I'm unsure if either
of the approaches here is ideal.

I think what this is going to need to morph into is a
get_reg()/set_reg() pair of helpers, which are first split between PV
and HVM, and then has further vmx/svm logic.  We're gaining an
increasing number of registers which might be RAM only (things emulated
for PV), or might be in the VMCB/VMCS (some even depending on hardware
generation), or might be in the MSR load lists (Intel Only) or might be
actually in hardware, or stale in hardware (VMLOAD/VMSAVE), and these
positions might vary on a per-VM or per context basis, and when we
finally get on to nested virt, might vary based on the settings of the
L1 hypervisor.
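One possible shape for such helpers, purely as a sketch of the idea
(the location set, struct layout, and names are hypothetical):

```c
#include <stdint.h>

/* Each register is tagged with where its current value lives, and the
 * accessor dispatches on that.  A real implementation would further
 * split PV/HVM and vmx/svm, and the location could change per VM, per
 * context, or with the L1 hypervisor's settings under nested virt. */
enum reg_location {
    REG_IN_RAM,        /* emulated in memory only, e.g. for PV */
    REG_IN_VMCX,       /* held in a VMCB/VMCS field */
    REG_IN_MSR_LIST,   /* in an MSR load/save list (Intel) */
    REG_IN_HW,         /* live (or possibly stale) in hardware */
};

struct reg_backing {
    enum reg_location where;
    uint64_t ram, vmcx, msr_list, hw;   /* stand-ins for the real stores */
};

static uint64_t get_reg(const struct reg_backing *r)
{
    switch (r->where) {
    case REG_IN_RAM:      return r->ram;
    case REG_IN_VMCX:     return r->vmcx;
    case REG_IN_MSR_LIST: return r->msr_list;
    case REG_IN_HW:       return r->hw;
    }
    return 0;   /* unreachable for valid input */
}
```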

I'm wondering whether I should in fact withdraw this patch, and wait
until we've implemented guest_{rd,wr}msr() for some of the more
interesting MSRs, and see how the logic looks at that point.

>> @@ -493,8 +506,8 @@ int guest_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
>>                                 ? 0 : (msr - MSR_AMD64_DR1_ADDRESS_MASK + 1),
>>                                 ARRAY_SIZE(msrs->dr_mask))] = val;
>>  
>> -        if ( v == curr && (curr->arch.dr7 & DR7_ACTIVE_MASK) )
>> -            wrmsrl(msr, val);
>> +        if ( curr->arch.dr7 & DR7_ACTIVE_MASK )
>> +            goto maybe_write_to_hw;
>>          break;
> I have to admit that I'd find it more logical if v was now used
> here instead of curr.

Hmm true.

>
>> @@ -509,6 +522,23 @@ int guest_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
>>  
>>      return ret;
>>  
>> + maybe_write_to_hw:
>> +    /*
>> +     * All paths potentially updating a value in hardware need to check
>> +     * whether the call is in current context or not, so the logic is
>> +     * implemented here.  Remote context need do nothing more.
>> +     */
>> +    if ( v != curr || !wrmsr_safe(msr, val) )
>> +        return X86EMUL_OKAY;
>> +
>> +    /*
>> +     * Paths which end up here took a #GP fault in wrmsr_safe().  Something is
>> +     * broken with the logic above, so make it obvious in debug builds, and
>> +     * fail safe by handing #GP back to the guests in release builds.
>> +     */
>> +    gprintk(XENLOG_ERR, "Bad wrmsr %#x val %016"PRIx64"\n", msr, val);
> Didn't you indicate more than once that you dislike mixing 0x-
> prefixed and non-prefixed hex values in a single message?

Yes - my mistake.

> (Personally I'd simply drop the #, but I expect you to prefer it
> the other way around.)

In this case, I'm not overly fussed about the 0x.  It is clear from
context (WRMSR in the message, and the two numbers of exact width) that
we're using only hex.

> Also both here and in the read path I'm unconvinced of the
> "by handing #GP back" wording: When v != curr, no #GP fault can
> typically be handed anywhere. And even when v == curr it's still
> up to the caller to decide what to do. IOW how about "by
> suggesting to hand back #GP" or some such?

The overwhelming majority of use cases are in current context, so I
suppose it is mostly true.

For the remote use case, if this were to go wrong, something on the
context switch path would explode.

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu Jul 23 12:23:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jul 2020 12:23:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyaFi-000269-HW; Thu, 23 Jul 2020 12:23:10 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=0L1b=BC=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jyaFh-000264-2O
 for xen-devel@lists.xenproject.org; Thu, 23 Jul 2020 12:23:09 +0000
X-Inumbo-ID: 43a754e6-ccdf-11ea-a28d-12813bfff9fa
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 43a754e6-ccdf-11ea-a28d-12813bfff9fa;
 Thu, 23 Jul 2020 12:23:07 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595506987;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:in-reply-to;
 bh=j6qul/NVIhOo7e+TLhNG7ehnkX4T5gKRyU+jhvsVBOU=;
 b=Gryz+W9CwsOe3o1rsuphzIi9TWKuXXwnykYNQXn7Xg/Ofw53Tr32aF8y
 3is8rq9mnVXtpCIDveKhgkRzkKw/JxzTbmkhfWeseEyJidpoZf+417dpS
 n2DhPZr7mGVUGksHvL08CkIKDqS733r64sGQjbsK+/UHdzGJTolEnWQWB 0=;
Authentication-Results: esa6.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: ky2qICr6DqkiBG7KlBXtd8Nsu9X16KtCcJpdaQl2eTun6se982M36cWHvyEBpZRpb7rH+JG/KC
 m6C22yVSuSt1y3f7cVBJ/RCRwq062uamwwz6G27s1MRDsXFja6WtxfdFAr6thhDLxt1I3WpQIc
 eRrGfyDqy2MQ5l0ZSzYYNTtUfJUZ9TIanmdgQ9JBuerBzX2SHfUIg4YXga4CA3F4eViEcIUa4a
 s2Oj410dcO+gDBI1Oz2vpjMxYehXG2ooG1W38YV5EjT9bxPSEr0U2K5GMu8tZYVE6dvhkgBanA
 3fc=
X-SBRS: 2.7
X-MesageID: 23358347
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,386,1589256000"; d="scan'208";a="23358347"
Date: Thu, 23 Jul 2020 14:23:00 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: David Hildenbrand <david@redhat.com>
Subject: Re: [PATCH 3/3] memory: introduce an option to force onlining of
 hotplug memory
Message-ID: <20200723122300.GD7191@Air-de-Roger>
References: <20200723084523.42109-1-roger.pau@citrix.com>
 <20200723084523.42109-4-roger.pau@citrix.com>
 <21490d49-b2cf-a398-0609-8010bdb0b004@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
In-Reply-To: <21490d49-b2cf-a398-0609-8010bdb0b004@redhat.com>
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>, Stefano
 Stabellini <sstabellini@kernel.org>, linux-kernel@vger.kernel.org,
 linux-mm@kvack.org, xen-devel@lists.xenproject.org,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Andrew Morton <akpm@linux-foundation.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Thu, Jul 23, 2020 at 01:37:03PM +0200, David Hildenbrand wrote:
> On 23.07.20 10:45, Roger Pau Monne wrote:
> > Add an extra option to add_memory_resource that overrides the memory
> > hotplug online behavior in order to force onlining of memory from
> > add_memory_resource unconditionally.
> > 
> > This is required for the Xen balloon driver, which must run the
> > online page callback in order to correctly process the newly added
> > memory region. Note this is an unpopulated region that is used by Linux
> > to either hotplug RAM or to map foreign pages from other domains, and
> > hence memory hotplug when running on Xen can be used even without the
> > user explicitly requesting it, as part of the normal operations of the
> > OS when attempting to map memory from a different domain.
> > 
> > Setting a different default value of memhp_default_online_type when
> > attaching the balloon driver is not a robust solution, as the user (or
> > distro init scripts) could still change it and thus break the Xen
> > balloon driver.
> 
> I think we discussed this a couple of times before (even triggered by my
> request), and this is responsibility of user space to configure. Usually
> distros have udev rules to online memory automatically. Especially, user
> space should be able to configure *how* to online memory.

Note (as per the commit message) that in the specific case I'm
referring to, the memory hotplugged by the Xen balloon driver will be
an unpopulated range to be used internally by certain Xen subsystems,
like the xen-blkback or the privcmd drivers. The addition of such
blocks of (unpopulated) memory can happen without the user explicitly
requesting it, and hence without the user even being aware such a
hotplug process is taking place. To be clear: no actual RAM will be
added to the system.

Failure to online such blocks using the Xen specific online handler
(which does not handle back the memory to the allocator in any way)
will result in the system getting stuck and malfunctioning.
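To illustrate the distinction (a conceptual model only, not the actual
driver code): the generic onlining path hands pages to the allocator,
whereas the Xen-specific handler keeps them aside for later foreign
mappings, so running the wrong handler would expose pages with no
backing as free RAM:

```c
#include <stddef.h>

struct page_accounting {
    size_t free_ram;    /* pages handed to the page allocator */
    size_t reserved;    /* pages kept aside for foreign mappings */
};

/* Generic onlining: the page becomes ordinary allocatable RAM. */
static void generic_online_page(struct page_accounting *a)
{
    a->free_ram++;
}

/* Xen-style onlining of an unpopulated range: the page is deliberately
 * NOT given to the allocator, so it can later back a mapping of another
 * domain's memory. */
static void xen_online_page(struct page_accounting *a)
{
    a->reserved++;
}
```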

> It's the admin/distro responsibility to configure this properly. In case
> this doesn't happen (or as you say, users change it), bad luck.
> 
> E.g., virtio-mem takes care to not add more memory in case it is not
> getting onlined. I remember hyper-v has similar code to at least wait a
> bit for memory to get onlined.

I don't think VirtIO or Hyper-V use the hotplug system in the same way
as Xen; as said, this is done to add unpopulated memory regions that
will be used to map foreign memory (from other domains) by Xen drivers
on the system.

Maybe this should somehow use a different mechanism to hotplug such
empty memory blocks? I don't mind doing this differently, but I would
need some pointers. Allowing user-space to change a (seemingly
unrelated) parameter and as a result produce failures on Xen drivers
is not an acceptable solution IMO.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Thu Jul 23 12:24:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jul 2020 12:24:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyaHR-0002Co-UC; Thu, 23 Jul 2020 12:24:57 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=h/wE=BC=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jyaHQ-0002CO-Hg
 for xen-devel@lists.xenproject.org; Thu, 23 Jul 2020 12:24:56 +0000
X-Inumbo-ID: 7f785952-ccdf-11ea-a28d-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7f785952-ccdf-11ea-a28d-12813bfff9fa;
 Thu, 23 Jul 2020 12:24:47 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=Gu2jijYJXDBWHuyHGhTqZnLu+EPkRSCc/TTZB2lCOpU=; b=iK1K/PtJAS0IbgB1iMvNavqzd
 fQy3nU8eMu1E1IP78Au02wCq8jxp5ap/H35vsqPr///UFAhtRNycmes+0WSr6Z/Vvkj0eu+MTM50d
 8S3z0YFSP2FWuV+CSQhMIJodnb8LrmISO9akFIyzQr6AjQm0GSbpY1tZT6Eu5mgehWKTg=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jyaHH-00070M-5T; Thu, 23 Jul 2020 12:24:47 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jyaHG-0000Kv-Tv; Thu, 23 Jul 2020 12:24:46 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jyaHG-0008Re-Sy; Thu, 23 Jul 2020 12:24:46 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-152108-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 152108: regressions - trouble: fail/pass/starved
X-Osstest-Failures: qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:redhat-install:fail:regression
 qemu-mainline:test-amd64-i386-freebsd10-i386:guest-start:fail:regression
 qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-start:fail:regression
 qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:redhat-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:windows-install:fail:regression
 qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:hosts-allocate:starved:nonblocking
 qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This: qemuu=3cbc8970f55c87cb58699b6dc8fe42998bc79dc0
X-Osstest-Versions-That: qemuu=9e3903136d9acde2fb2dd9e967ba928050a6cb4a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 23 Jul 2020 12:24:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 152108 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/152108/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemuu-ovmf-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-qemuu-nested-intel 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-ovmf-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-debianhvm-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-qemuu-rhel6hvm-intel 10 redhat-install   fail REGR. vs. 151065
 test-amd64-i386-freebsd10-i386 11 guest-start            fail REGR. vs. 151065
 test-amd64-i386-freebsd10-amd64 11 guest-start           fail REGR. vs. 151065
 test-amd64-amd64-qemuu-nested-amd 10 debian-hvm-install  fail REGR. vs. 151065
 test-amd64-i386-qemuu-rhel6hvm-amd 10 redhat-install     fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-win7-amd64 10 windows-install  fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-win7-amd64 10 windows-install   fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-ws16-amd64 10 windows-install  fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-ws16-amd64 10 windows-install   fail REGR. vs. 151065

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 151065
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 151065
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-freebsd11-amd64  2 hosts-allocate           starved n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  2 hosts-allocate           starved n/a

version targeted for testing:
 qemuu                3cbc8970f55c87cb58699b6dc8fe42998bc79dc0
baseline version:
 qemuu                9e3903136d9acde2fb2dd9e967ba928050a6cb4a

Last test of basis   151065  2020-06-12 22:27:51 Z   40 days
Failing since        151101  2020-06-14 08:32:51 Z   39 days   54 attempts
Testing same since   152108  2020-07-22 11:52:34 Z    1 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Ahmed Karaman <ahmedkhaledkaraman@gmail.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.m.mail@gmail.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Richardson <Alexander.Richardson@cl.cam.ac.uk>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Boettcher <alexander.boettcher@genode-labs.com>
  Alexander Bulekov <alxndr@bu.edu>
  Alexandre Mergnat <amergnat@baylibre.com>
  Alexey Kardashevskiy <aik@ozlabs.ru>
  Alexey Krasikov <alex-krasikov@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Allan Peramaki <aperamak@pp1.inet.fi>
  Andrew <andrew@daynix.com>
  Andrew Jones <drjones@redhat.com>
  Andrew Melnychenko <andrew@daynix.com>
  Ani Sinha <ani.sinha@nutanix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Antoine Damhet <antoine.damhet@blade-group.com>
  Ard Biesheuvel <ardb@kernel.org>
  Artyom Tarasenko <atar4qemu@gmail.com>
  Atish Patra <atish.patra@wdc.com>
  Aurelien Jarno <aurelien@aurel32.net>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Basil Salman <basil@daynix.com>
  Basil Salman <bsalman@redhat.com>
  Beata Michalska <beata.michalska@linaro.org>
  Bin Meng <bin.meng@windriver.com>
  Bin Meng <bmeng.cn@gmail.com>
  Cameron Esfahani <dirty@apple.com>
  Catherine A. Frederick <chocola@animebitch.es>
  Cathy Zhang <cathy.zhang@intel.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chenyi Qiang <chenyi.qiang@intel.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christophe de Dinechin <dinechin@redhat.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Cindy Lu <lulu@redhat.com>
  Claudio Fontana <cfontana@suse.de>
  Cleber Rosa <crosa@redhat.com>
  Colin Xu <colin.xu@intel.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniele Buono <dbuono@linux.vnet.ibm.com>
  David Carlier <devnexen@gmail.com>
  David Edmondson <david.edmondson@oracle.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Denis V. Lunev <den@openvz.org>
  Derek Su <dereksu@qnap.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Ed Robbins <E.J.C.Robbins@kent.ac.uk>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Emilio G. Cota <cota@braap.org>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  erik-smit <erik.lucas.smit@gmail.com>
  Evgeny Yakovlev <eyakovlev@virtuozzo.com>
  fangying <fangying1@huawei.com>
  Farhan Ali <alifm@linux.ibm.com>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frank Chang <frank.chang@sifive.com>
  Geoffrey McRae <geoff@hostfission.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Giuseppe Musacchio <thatlemon@gmail.com>
  Greg Kurz <groug@kaod.org>
  Guenter Roeck <linux@roeck-us.net>
  Gustavo Romero <gromero@linux.ibm.com>
  Halil Pasic <pasic@linux.ibm.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Howard Spoelstra <hsp.cat7@gmail.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Ian Jiang <ianjiang.ict@gmail.com>
  Igor Mammedov <imammedo@redhat.com>
  Jan Kiszka <jan.kiszka@siemens.com>
  Janne Grunau <j@jannau.net>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason Wang <jasowang@redhat.com>
  Jay Zhou <jianjay.zhou@huawei.com>
  Jean-Christophe Dubois <jcd@tribudubois.net>
  Jessica Clarke <jrtc27@jrtc27.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jingqi Liu <jingqi.liu@intel.com>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Joseph Myers <joseph@codesourcery.com>
  Josh Kunz <jkz@google.com>
  Juan Quintela <quintela@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Keqian Zhu <zhukeqian1@huawei.com>
  Kevin Wolf <kwolf@redhat.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Leif Lindholm <leif@nuviainc.com>
  Leonid Bloch <lb.workbox@gmail.com>
  Leonid Bloch <lbloch@janustech.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Qiang <liq3ea@gmail.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  lichun <lichun@ruijie.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  Like Xu <like.xu@linux.intel.com>
  Lingfeng Yang <lfy@google.com>
  Lingshan zhu <lingshan.zhu@intel.com>
  Liran Alon <liran.alon@oracle.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Luc Michel <luc.michel@greensocs.com>
  Lukas Straub <lukasstraub2@web.de>
  Luwei Kang <luwei.kang@intel.com>
  Maciej S. Szmigiero <maciej.szmigiero@oracle.com>
  Magnus Damm <magnus.damm@gmail.com>
  Mao Zhongyi <maozhongyi@cmss.chinamobile.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marcelo Tosatti <mtosatti@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Mario Smarduch <msmarduch@digitalocean.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Masahiro Yamada <masahiroy@kernel.org>
  Matus Kysel <mkysel@tachyum.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Maxime Coquelin <maxime.coquelin@redhat.com>
  Menno Lageman <menno.lageman@oracle.com>
  Michael Rolnik <mrolnik@gmail.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michele Denber <denber@mindspring.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Olaf Hering <olaf@aepfle.de>
  Pan Nengyuan <pannengyuan@huawei.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Radoslaw Biernacki <rad@semihalf.com>
  Raphael Norwitz <raphael.norwitz@nutanix.com>
  Reza Arbab <arbab@linux.ibm.com>
  Richard Henderson <richard.henderson@linaro.org>
  Richard W.M. Jones <rjones@redhat.com>
  Riku Voipio <riku.voipio@linaro.org>
  Robert Foley <robert.foley@linaro.org>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Roman Kagan <rkagan@virtuozzo.com>
  Roman Kagan <rvkagan@yandex-team.ru>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Sarah Harris <S.E.Harris@kent.ac.uk>
  Sebastian Rasmussen <sebras@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Tao Xu <tao3.xu@intel.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tiwei Bie <tiwei.bie@intel.com>
  Tong Ho <tong.ho@xilinx.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  WangBowen <bowen.wang@intel.com>
  Wei Huang <wei.huang2@amd.com>
  Wei Wang <wei.w.wang@intel.com>
  Wentong Wu <wentong.wu@intel.com>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xie Yongji <xieyongji@bytedance.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Ying Fang <fangying1@huawei.com>
  Yoshinori Sato <ysato@users.sourceforge.jp>
  Yuri Benditovich <yuri.benditovich@daynix.com>
  Zhang Chen <chen.zhang@intel.com>
  Zheng Chuan <zhengchuan@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       starved 
 test-amd64-amd64-qemuu-freebsd12-amd64                       starved 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 31064 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Jul 23 12:28:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jul 2020 12:28:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyaKe-0002N4-I4; Thu, 23 Jul 2020 12:28:16 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=tIQT=BC=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jyaKe-0002Mz-2D
 for xen-devel@lists.xenproject.org; Thu, 23 Jul 2020 12:28:16 +0000
X-Inumbo-ID: fab245c4-ccdf-11ea-a28e-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id fab245c4-ccdf-11ea-a28e-12813bfff9fa;
 Thu, 23 Jul 2020 12:28:14 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id F2008AAC5;
 Thu, 23 Jul 2020 12:28:21 +0000 (UTC)
Subject: Re: [PATCH 3/3] memory: introduce an option to force onlining of
 hotplug memory
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 David Hildenbrand <david@redhat.com>
References: <20200723084523.42109-1-roger.pau@citrix.com>
 <20200723084523.42109-4-roger.pau@citrix.com>
 <21490d49-b2cf-a398-0609-8010bdb0b004@redhat.com>
 <20200723122300.GD7191@Air-de-Roger>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <404ea76f-c3d8-dbc5-432d-08d84a17f2d7@suse.com>
Date: Thu, 23 Jul 2020 14:28:13 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20200723122300.GD7191@Air-de-Roger>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, linux-kernel@vger.kernel.org,
 linux-mm@kvack.org, xen-devel@lists.xenproject.org,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Andrew Morton <akpm@linux-foundation.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 23.07.20 14:23, Roger Pau Monné wrote:
> On Thu, Jul 23, 2020 at 01:37:03PM +0200, David Hildenbrand wrote:
>> On 23.07.20 10:45, Roger Pau Monne wrote:
>>> Add an extra option to add_memory_resource that overrides the memory
>>> hotplug online behavior in order to force onlining of memory from
>>> add_memory_resource unconditionally.
>>>
>>> This is required for the Xen balloon driver, that must run the
>>> online page callback in order to correctly process the newly added
>>> memory region, note this is an unpopulated region that is used by Linux
>>> to either hotplug RAM or to map foreign pages from other domains, and
>>> hence memory hotplug when running on Xen can be used even without the
>>> user explicitly requesting it, as part of the normal operations of the
>>> OS when attempting to map memory from a different domain.
>>>
>>> Setting a different default value of memhp_default_online_type when
>>> attaching the balloon driver is not a robust solution, as the user (or
>>> distro init scripts) could still change it and thus break the Xen
>>> balloon driver.
>>
>> I think we discussed this a couple of times before (even triggered by my
>> request), and this is the responsibility of user space to configure. Usually
>> distros have udev rules to online memory automatically. Especially, user
>> space should be able to configure *how* to online memory.
> 
> Note (as per the commit message) that in the specific case I'm
> referring to the memory hotplugged by the Xen balloon driver will be
> an unpopulated range to be used internally by certain Xen subsystems,
> like the xen-blkback or the privcmd drivers. The addition of such
> blocks of (unpopulated) memory can happen without the user explicitly
> requesting it, and hence not even aware such hotplug process is taking
> place. To be clear: no actual RAM will be added to the system.
> 
> Failure to online such blocks using the Xen specific online handler
> (which does not handle back the memory to the allocator in any way)
> will result in the system getting stuck and malfunctioning.
> 
>> It's the admin/distro responsibility to configure this properly. In case
>> this doesn't happen (or as you say, users change it), bad luck.
>>
>> E.g., virtio-mem takes care to not add more memory in case it is not
>> getting onlined. I remember hyper-v has similar code to at least wait a
>> bit for memory to get onlined.
> 
> I don't think VirtIO or Hyper-V use the hotplug system in the same way
> as Xen, as said this is done to add unpopulated memory regions that
> will be used to map foreign memory (from other domains) by Xen drivers
> on the system.
> 
> Maybe this should somehow use a different mechanism to hotplug such
> empty memory blocks? I don't mind doing this differently, but I would
> need some pointers. Allowing user-space to change a (seemingly
> unrelated) parameter and as a result produce failures on Xen drivers
> is not an acceptable solution IMO.

Maybe we can use the same approach as Xen PV-domains: pre-allocate a
region in the memory map to be used for mapping foreign pages. For the
kernel it will look like pre-ballooned memory, so it will create struct
page for the region (which is what we are after), but it won't give the
memory to the allocator.


Juergen



From xen-devel-bounces@lists.xenproject.org Thu Jul 23 12:32:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jul 2020 12:32:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyaOS-0003CR-2x; Thu, 23 Jul 2020 12:32:12 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=64jQ=BC=redhat.com=david@srs-us1.protection.inumbo.net>)
 id 1jyaOR-0003CM-GI
 for xen-devel@lists.xenproject.org; Thu, 23 Jul 2020 12:32:11 +0000
X-Inumbo-ID: 86f4ff9b-cce0-11ea-a28e-12813bfff9fa
Received: from us-smtp-1.mimecast.com (unknown [205.139.110.61])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 86f4ff9b-cce0-11ea-a28e-12813bfff9fa;
 Thu, 23 Jul 2020 12:32:09 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
 s=mimecast20190719; t=1595507529;
 h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
 content-transfer-encoding:content-transfer-encoding:
 in-reply-to:in-reply-to:references:references:autocrypt:autocrypt;
 bh=QRKjQYnjt5em9AgDaPu46Q+t9XfLW81za3cyd4A+FSc=;
 b=PQxgLKbUAXwK0OE80uAyPoqTtd0s5MMbdTLBf6ADC679m7aYPVBK2IICTYNIVmNSetNx/7
 L6vUkenxCkjkZGJiMcYilSJnDr2LXVywXpJaivwFlKvDbOMgTzUaW9EnjY39ExzFOQbrh/
 oz0HwqJ098XmzYGcBqu9kSmMX5Zu7xM=
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-190-aNNf3YKsMlWni1M54ILTaw-1; Thu, 23 Jul 2020 08:32:05 -0400
X-MC-Unique: aNNf3YKsMlWni1M54ILTaw-1
Received: from smtp.corp.redhat.com (int-mx02.intmail.prod.int.phx2.redhat.com
 [10.5.11.12])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id A5B31EC802;
 Thu, 23 Jul 2020 12:31:22 +0000 (UTC)
Received: from [10.36.114.90] (ovpn-114-90.ams2.redhat.com [10.36.114.90])
 by smtp.corp.redhat.com (Postfix) with ESMTP id DCACD756A0;
 Thu, 23 Jul 2020 12:31:20 +0000 (UTC)
Subject: Re: [PATCH 3/3] memory: introduce an option to force onlining of
 hotplug memory
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <20200723084523.42109-1-roger.pau@citrix.com>
 <20200723084523.42109-4-roger.pau@citrix.com>
 <21490d49-b2cf-a398-0609-8010bdb0b004@redhat.com>
 <20200723122300.GD7191@Air-de-Roger>
 <404ea76f-c3d8-dbc5-432d-08d84a17f2d7@suse.com>
From: David Hildenbrand <david@redhat.com>
Autocrypt: addr=david@redhat.com; prefer-encrypt=mutual; keydata=
 mQINBFXLn5EBEAC+zYvAFJxCBY9Tr1xZgcESmxVNI/0ffzE/ZQOiHJl6mGkmA1R7/uUpiCjJ
 dBrn+lhhOYjjNefFQou6478faXE6o2AhmebqT4KiQoUQFV4R7y1KMEKoSyy8hQaK1umALTdL
 QZLQMzNE74ap+GDK0wnacPQFpcG1AE9RMq3aeErY5tujekBS32jfC/7AnH7I0v1v1TbbK3Gp
 XNeiN4QroO+5qaSr0ID2sz5jtBLRb15RMre27E1ImpaIv2Jw8NJgW0k/D1RyKCwaTsgRdwuK
 Kx/Y91XuSBdz0uOyU/S8kM1+ag0wvsGlpBVxRR/xw/E8M7TEwuCZQArqqTCmkG6HGcXFT0V9
 PXFNNgV5jXMQRwU0O/ztJIQqsE5LsUomE//bLwzj9IVsaQpKDqW6TAPjcdBDPLHvriq7kGjt
 WhVhdl0qEYB8lkBEU7V2Yb+SYhmhpDrti9Fq1EsmhiHSkxJcGREoMK/63r9WLZYI3+4W2rAc
 UucZa4OT27U5ZISjNg3Ev0rxU5UH2/pT4wJCfxwocmqaRr6UYmrtZmND89X0KigoFD/XSeVv
 jwBRNjPAubK9/k5NoRrYqztM9W6sJqrH8+UWZ1Idd/DdmogJh0gNC0+N42Za9yBRURfIdKSb
 B3JfpUqcWwE7vUaYrHG1nw54pLUoPG6sAA7Mehl3nd4pZUALHwARAQABtCREYXZpZCBIaWxk
 ZW5icmFuZCA8ZGF2aWRAcmVkaGF0LmNvbT6JAlgEEwEIAEICGwMGCwkIBwMCBhUIAgkKCwQW
 AgMBAh4BAheAAhkBFiEEG9nKrXNcTDpGDfzKTd4Q9wD/g1oFAl8Ox4kFCRKpKXgACgkQTd4Q
 9wD/g1oHcA//a6Tj7SBNjFNM1iNhWUo1lxAja0lpSodSnB2g4FCZ4R61SBR4l/psBL73xktp
 rDHrx4aSpwkRP6Epu6mLvhlfjmkRG4OynJ5HG1gfv7RJJfnUdUM1z5kdS8JBrOhMJS2c/gPf
 wv1TGRq2XdMPnfY2o0CxRqpcLkx4vBODvJGl2mQyJF/gPepdDfcT8/PY9BJ7FL6Hrq1gnAo4
 3Iv9qV0JiT2wmZciNyYQhmA1V6dyTRiQ4YAc31zOo2IM+xisPzeSHgw3ONY/XhYvfZ9r7W1l
 pNQdc2G+o4Di9NPFHQQhDw3YTRR1opJaTlRDzxYxzU6ZnUUBghxt9cwUWTpfCktkMZiPSDGd
 KgQBjnweV2jw9UOTxjb4LXqDjmSNkjDdQUOU69jGMUXgihvo4zhYcMX8F5gWdRtMR7DzW/YE
 BgVcyxNkMIXoY1aYj6npHYiNQesQlqjU6azjbH70/SXKM5tNRplgW8TNprMDuntdvV9wNkFs
 9TyM02V5aWxFfI42+aivc4KEw69SE9KXwC7FSf5wXzuTot97N9Phj/Z3+jx443jo2NR34XgF
 89cct7wJMjOF7bBefo0fPPZQuIma0Zym71cP61OP/i11ahNye6HGKfxGCOcs5wW9kRQEk8P9
 M/k2wt3mt/fCQnuP/mWutNPt95w9wSsUyATLmtNrwccz63W5Ag0EVcufkQEQAOfX3n0g0fZz
 Bgm/S2zF/kxQKCEKP8ID+Vz8sy2GpDvveBq4H2Y34XWsT1zLJdvqPI4af4ZSMxuerWjXbVWb
 T6d4odQIG0fKx4F8NccDqbgHeZRNajXeeJ3R7gAzvWvQNLz4piHrO/B4tf8svmRBL0ZB5P5A
 2uhdwLU3NZuK22zpNn4is87BPWF8HhY0L5fafgDMOqnf4guJVJPYNPhUFzXUbPqOKOkL8ojk
 CXxkOFHAbjstSK5Ca3fKquY3rdX3DNo+EL7FvAiw1mUtS+5GeYE+RMnDCsVFm/C7kY8c2d0G
 NWkB9pJM5+mnIoFNxy7YBcldYATVeOHoY4LyaUWNnAvFYWp08dHWfZo9WCiJMuTfgtH9tc75
 7QanMVdPt6fDK8UUXIBLQ2TWr/sQKE9xtFuEmoQGlE1l6bGaDnnMLcYu+Asp3kDT0w4zYGsx
 5r6XQVRH4+5N6eHZiaeYtFOujp5n+pjBaQK7wUUjDilPQ5QMzIuCL4YjVoylWiBNknvQWBXS
 lQCWmavOT9sttGQXdPCC5ynI+1ymZC1ORZKANLnRAb0NH/UCzcsstw2TAkFnMEbo9Zu9w7Kv
 AxBQXWeXhJI9XQssfrf4Gusdqx8nPEpfOqCtbbwJMATbHyqLt7/oz/5deGuwxgb65pWIzufa
 N7eop7uh+6bezi+rugUI+w6DABEBAAGJAjwEGAEIACYCGwwWIQQb2cqtc1xMOkYN/MpN3hD3
 AP+DWgUCXw7HsgUJEqkpoQAKCRBN3hD3AP+DWrrpD/4qS3dyVRxDcDHIlmguXjC1Q5tZTwNB
 boaBTPHSy/Nksu0eY7x6HfQJ3xajVH32Ms6t1trDQmPx2iP5+7iDsb7OKAb5eOS8h+BEBDeq
 3ecsQDv0fFJOA9ag5O3LLNk+3x3q7e0uo06XMaY7UHS341ozXUUI7wC7iKfoUTv03iO9El5f
 XpNMx/YrIMduZ2+nd9Di7o5+KIwlb2mAB9sTNHdMrXesX8eBL6T9b+MZJk+mZuPxKNVfEQMQ
 a5SxUEADIPQTPNvBewdeI80yeOCrN+Zzwy/Mrx9EPeu59Y5vSJOx/z6OUImD/GhX7Xvkt3kq
 Er5KTrJz3++B6SH9pum9PuoE/k+nntJkNMmQpR4MCBaV/J9gIOPGodDKnjdng+mXliF3Ptu6
 3oxc2RCyGzTlxyMwuc2U5Q7KtUNTdDe8T0uE+9b8BLMVQDDfJjqY0VVqSUwImzTDLX9S4g/8
 kC4HRcclk8hpyhY2jKGluZO0awwTIMgVEzmTyBphDg/Gx7dZU1Xf8HFuE+UZ5UDHDTnwgv7E
 th6RC9+WrhDNspZ9fJjKWRbveQgUFCpe1sa77LAw+XFrKmBHXp9ZVIe90RMe2tRL06BGiRZr
 jPrnvUsUUsjRoRNJjKKA/REq+sAnhkNPPZ/NNMjaZ5b8Tovi8C0tmxiCHaQYqj7G2rgnT0kt
 WNyWQQ==
Organization: Red Hat GmbH
Message-ID: <430ff55d-299f-7c44-675f-3aa3dabb7b70@redhat.com>
Date: Thu, 23 Jul 2020 14:31:20 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.9.0
MIME-Version: 1.0
In-Reply-To: <404ea76f-c3d8-dbc5-432d-08d84a17f2d7@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.12
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, linux-kernel@vger.kernel.org,
 linux-mm@kvack.org, xen-devel@lists.xenproject.org,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Andrew Morton <akpm@linux-foundation.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 23.07.20 14:28, Jürgen Groß wrote:
> On 23.07.20 14:23, Roger Pau Monné wrote:
>> On Thu, Jul 23, 2020 at 01:37:03PM +0200, David Hildenbrand wrote:
>>> On 23.07.20 10:45, Roger Pau Monne wrote:
>>>> Add an extra option to add_memory_resource that overrides the memory
>>>> hotplug online behavior in order to force onlining of memory from
>>>> add_memory_resource unconditionally.
>>>>
>>>> This is required for the Xen balloon driver, that must run the
>>>> online page callback in order to correctly process the newly added
>>>> memory region, note this is an unpopulated region that is used by Linux
>>>> to either hotplug RAM or to map foreign pages from other domains, and
>>>> hence memory hotplug when running on Xen can be used even without the
>>>> user explicitly requesting it, as part of the normal operations of the
>>>> OS when attempting to map memory from a different domain.
>>>>
>>>> Setting a different default value of memhp_default_online_type when
>>>> attaching the balloon driver is not a robust solution, as the user (or
>>>> distro init scripts) could still change it and thus break the Xen
>>>> balloon driver.
>>>
>>> I think we discussed this a couple of times before (even triggered by my
>>> request), and this is responsibility of user space to configure. Usually
>>> distros have udev rules to online memory automatically. Especially, user
>>> space should be able to configure *how* to online memory.
>>
>> Note (as per the commit message) that in the specific case I'm
>> referring to the memory hotplugged by the Xen balloon driver will be
>> an unpopulated range to be used internally by certain Xen subsystems,
>> like the xen-blkback or the privcmd drivers. The addition of such
>> blocks of (unpopulated) memory can happen without the user explicitly
>> requesting it, and hence not even aware such hotplug process is taking
>> place. To be clear: no actual RAM will be added to the system.
>>
>> Failure to online such blocks using the Xen specific online handler
>> (which does not handle back the memory to the allocator in any way)
>> will result in the system getting stuck and malfunctioning.
>>
>>> It's the admin/distro responsibility to configure this properly. In case
>>> this doesn't happen (or as you say, users change it), bad luck.
>>>
>>> E.g., virtio-mem takes care to not add more memory in case it is not
>>> getting onlined. I remember hyper-v has similar code to at least wait a
>>> bit for memory to get onlined.
>>
>> I don't think VirtIO or Hyper-V use the hotplug system in the same way
>> as Xen, as said this is done to add unpopulated memory regions that
>> will be used to map foreign memory (from other domains) by Xen drivers
>> on the system.
>>
>> Maybe this should somehow use a different mechanism to hotplug such
>> empty memory blocks? I don't mind doing this differently, but I would
>> need some pointers. Allowing user-space to change a (seemingly
>> unrelated) parameter and as a result produce failures on Xen drivers
>> is not an acceptable solution IMO.
> 
> Maybe we can use the same approach as Xen PV-domains: pre-allocate a
> region in the memory map to be used for mapping foreign pages. For the
> kernel it will look like pre-ballooned memory, so it will create struct
> page for the region (which is what we are after), but it won't give the
> memory to the allocator.

Something like that sounds a lot cleaner to me than abusing the memory
hotplug mechanism (which the xen balloon also uses to just expose more
memory), because there are other issues in case you "really want memory
to be onlined". What if onlining fails (nacked by a notifier, e.g., kasan)?

-- 
Thanks,

David / dhildenb



From xen-devel-bounces@lists.xenproject.org Thu Jul 23 13:08:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jul 2020 13:08:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyaxm-00068s-CT; Thu, 23 Jul 2020 13:08:42 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=0L1b=BC=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jyaxk-00068n-OD
 for xen-devel@lists.xenproject.org; Thu, 23 Jul 2020 13:08:40 +0000
X-Inumbo-ID: 9f8ce28e-cce5-11ea-a293-12813bfff9fa
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 9f8ce28e-cce5-11ea-a293-12813bfff9fa;
 Thu, 23 Jul 2020 13:08:39 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595509718;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:content-transfer-encoding:in-reply-to;
 bh=dXHSH9wbRdiIIVrdpu3/5eZAN762+wcXGXoVK0/kVTw=;
 b=I7R68eghWhEajjqpmWnOtxgQZoM9OwSc3LfXkA5467cMguYfD8E/jPTi
 O3nnY1ch2titxdfrlK/1d+wDfrXbin2H3YyATpqDZ+3z6BcPJe9SaKSOa
 BFnf1L3rDTSzi5fOGeQ2Wf/LsSgKJCWeSKT3q43F2d+Zr0v9JpZ8c0NBK Q=;
Authentication-Results: esa5.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: c1OsWBvLwJGdBIFnV7ENKa2QZlNdSh82WqwRMgsvuKPvhFpo8s2BP9yFpIzMwzmrLh1F/8swoZ
 NrSBwknuSpeGkYXCW5UWNYuhJU8cn0wmYFemgCYDsDgCReb6KpvhYi2bXcfZWWYgtFxE9Ixuvw
 u9IPUuf+bT+bpZ2TKC6xY9VT1lQpDIyTN64WgIb0KSU+YiyW5pIPd8v9ABo8y0wCjBDXZ82BsE
 MvH9H5V+C1VWlj2yILkGrJQv49MC+ckfofs7CZwF7xjTb5eCuU3E4AhWScvQVHySkwkSS5GKV4
 Oxk=
X-SBRS: 2.7
X-MesageID: 23230214
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,386,1589256000"; d="scan'208";a="23230214"
Date: Thu, 23 Jul 2020 15:08:31 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: =?utf-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Subject: Re: [PATCH 3/3] memory: introduce an option to force onlining of
 hotplug memory
Message-ID: <20200723130831.GE7191@Air-de-Roger>
References: <20200723084523.42109-1-roger.pau@citrix.com>
 <20200723084523.42109-4-roger.pau@citrix.com>
 <21490d49-b2cf-a398-0609-8010bdb0b004@redhat.com>
 <20200723122300.GD7191@Air-de-Roger>
 <404ea76f-c3d8-dbc5-432d-08d84a17f2d7@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <404ea76f-c3d8-dbc5-432d-08d84a17f2d7@suse.com>
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 David Hildenbrand <david@redhat.com>, linux-kernel@vger.kernel.org,
 linux-mm@kvack.org, xen-devel@lists.xenproject.org,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Andrew Morton <akpm@linux-foundation.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Thu, Jul 23, 2020 at 02:28:13PM +0200, Jürgen Groß wrote:
> On 23.07.20 14:23, Roger Pau Monné wrote:
> > On Thu, Jul 23, 2020 at 01:37:03PM +0200, David Hildenbrand wrote:
> > > On 23.07.20 10:45, Roger Pau Monne wrote:
> > > > Add an extra option to add_memory_resource that overrides the memory
> > > > hotplug online behavior in order to force onlining of memory from
> > > > add_memory_resource unconditionally.
> > > > 
> > > > This is required for the Xen balloon driver, that must run the
> > > > online page callback in order to correctly process the newly added
> > > > memory region, note this is an unpopulated region that is used by Linux
> > > > to either hotplug RAM or to map foreign pages from other domains, and
> > > > hence memory hotplug when running on Xen can be used even without the
> > > > user explicitly requesting it, as part of the normal operations of the
> > > > OS when attempting to map memory from a different domain.
> > > > 
> > > > Setting a different default value of memhp_default_online_type when
> > > > attaching the balloon driver is not a robust solution, as the user (or
> > > > distro init scripts) could still change it and thus break the Xen
> > > > balloon driver.
> > > 
> > > I think we discussed this a couple of times before (even triggered by my
> > > request), and this is responsibility of user space to configure. Usually
> > > distros have udev rules to online memory automatically. Especially, user
> > > space should be able to configure *how* to online memory.
> > 
> > Note (as per the commit message) that in the specific case I'm
> > referring to the memory hotplugged by the Xen balloon driver will be
> > an unpopulated range to be used internally by certain Xen subsystems,
> > like the xen-blkback or the privcmd drivers. The addition of such
> > blocks of (unpopulated) memory can happen without the user explicitly
> > requesting it, and hence not even aware such hotplug process is taking
> > place. To be clear: no actual RAM will be added to the system.
> > 
> > Failure to online such blocks using the Xen specific online handler
> > (which does not handle back the memory to the allocator in any way)
> > will result in the system getting stuck and malfunctioning.
> > 
> > > It's the admin/distro responsibility to configure this properly. In case
> > > this doesn't happen (or as you say, users change it), bad luck.
> > > 
> > > E.g., virtio-mem takes care to not add more memory in case it is not
> > > getting onlined. I remember hyper-v has similar code to at least wait a
> > > bit for memory to get onlined.
> > 
> > I don't think VirtIO or Hyper-V use the hotplug system in the same way
> > as Xen, as said this is done to add unpopulated memory regions that
> > will be used to map foreign memory (from other domains) by Xen drivers
> > on the system.
> > 
> > Maybe this should somehow use a different mechanism to hotplug such
> > empty memory blocks? I don't mind doing this differently, but I would
> > need some pointers. Allowing user-space to change a (seemingly
> > unrelated) parameter and as a result produce failures on Xen drivers
> > is not an acceptable solution IMO.
> 
> Maybe we can use the same approach as Xen PV-domains: pre-allocate a
> region in the memory map to be used for mapping foreign pages. For the
> kernel it will look like pre-ballooned memory, so it will create struct
> page for the region (which is what we are after), but it won't give the
> memory to the allocator.

IMO using something similar to memory hotplug would give us more
flexibility, and TBH the logic is already there in the balloon driver.
It seems quite wasteful to allocate such region(s) beforehand for all
domains, even when most of them won't end up using foreign mappings at
all.

Anyway, I'm going to take a look at how to do that, I guess it's going
to involve playing with the memory map and reserving some space.

I suggest we remove the Xen balloon hotplug logic, as it's not
working properly and we don't have a plan to fix it.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Thu Jul 23 13:14:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jul 2020 13:14:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyb3b-00071H-1u; Thu, 23 Jul 2020 13:14:43 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=64jQ=BC=redhat.com=david@srs-us1.protection.inumbo.net>)
 id 1jyb3Z-000719-Kv
 for xen-devel@lists.xenproject.org; Thu, 23 Jul 2020 13:14:42 +0000
X-Inumbo-ID: 772065b8-cce6-11ea-a293-12813bfff9fa
Received: from us-smtp-delivery-1.mimecast.com (unknown [205.139.110.120])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 772065b8-cce6-11ea-a293-12813bfff9fa;
 Thu, 23 Jul 2020 13:14:40 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
 s=mimecast20190719; t=1595510080;
 h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
 content-transfer-encoding:content-transfer-encoding:
 in-reply-to:in-reply-to:references:references:autocrypt:autocrypt;
 bh=Aq79mrESgOZzjD1v3iWkL4RCB8MLlomAyu3tuJJ8OeE=;
 b=Sd/R//wDmS2cqzAi+VW/ThFML7ZihUBZaQMIe5GsuzMYEi5CM0XfphRLogSTpDXI4C4VOc
 9d4MZsKjIZedZwN0uknxf9/Hi3bVftyVKS3+AsOCGIdOE+qDY+Kznd1Fy1YCcjkRzkqqvf
 acfkgd34cYzwjyA7RRhCUW3x9WfY24U=
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-168-gx8EYH9POl-iJcNbtMapsw-1; Thu, 23 Jul 2020 09:14:35 -0400
X-MC-Unique: gx8EYH9POl-iJcNbtMapsw-1
Received: from smtp.corp.redhat.com (int-mx03.intmail.prod.int.phx2.redhat.com
 [10.5.11.13])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 51CA31E04;
 Thu, 23 Jul 2020 13:14:34 +0000 (UTC)
Received: from [10.36.114.90] (ovpn-114-90.ams2.redhat.com [10.36.114.90])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 8C5888BEE5;
 Thu, 23 Jul 2020 13:14:32 +0000 (UTC)
Subject: Re: [PATCH 3/3] memory: introduce an option to force onlining of
 hotplug memory
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
References: <20200723084523.42109-1-roger.pau@citrix.com>
 <20200723084523.42109-4-roger.pau@citrix.com>
 <21490d49-b2cf-a398-0609-8010bdb0b004@redhat.com>
 <20200723122300.GD7191@Air-de-Roger>
 <404ea76f-c3d8-dbc5-432d-08d84a17f2d7@suse.com>
 <20200723130831.GE7191@Air-de-Roger>
From: David Hildenbrand <david@redhat.com>
Autocrypt: addr=david@redhat.com; prefer-encrypt=mutual; keydata=
 mQINBFXLn5EBEAC+zYvAFJxCBY9Tr1xZgcESmxVNI/0ffzE/ZQOiHJl6mGkmA1R7/uUpiCjJ
 dBrn+lhhOYjjNefFQou6478faXE6o2AhmebqT4KiQoUQFV4R7y1KMEKoSyy8hQaK1umALTdL
 QZLQMzNE74ap+GDK0wnacPQFpcG1AE9RMq3aeErY5tujekBS32jfC/7AnH7I0v1v1TbbK3Gp
 XNeiN4QroO+5qaSr0ID2sz5jtBLRb15RMre27E1ImpaIv2Jw8NJgW0k/D1RyKCwaTsgRdwuK
 Kx/Y91XuSBdz0uOyU/S8kM1+ag0wvsGlpBVxRR/xw/E8M7TEwuCZQArqqTCmkG6HGcXFT0V9
 PXFNNgV5jXMQRwU0O/ztJIQqsE5LsUomE//bLwzj9IVsaQpKDqW6TAPjcdBDPLHvriq7kGjt
 WhVhdl0qEYB8lkBEU7V2Yb+SYhmhpDrti9Fq1EsmhiHSkxJcGREoMK/63r9WLZYI3+4W2rAc
 UucZa4OT27U5ZISjNg3Ev0rxU5UH2/pT4wJCfxwocmqaRr6UYmrtZmND89X0KigoFD/XSeVv
 jwBRNjPAubK9/k5NoRrYqztM9W6sJqrH8+UWZ1Idd/DdmogJh0gNC0+N42Za9yBRURfIdKSb
 B3JfpUqcWwE7vUaYrHG1nw54pLUoPG6sAA7Mehl3nd4pZUALHwARAQABtCREYXZpZCBIaWxk
 ZW5icmFuZCA8ZGF2aWRAcmVkaGF0LmNvbT6JAlgEEwEIAEICGwMGCwkIBwMCBhUIAgkKCwQW
 AgMBAh4BAheAAhkBFiEEG9nKrXNcTDpGDfzKTd4Q9wD/g1oFAl8Ox4kFCRKpKXgACgkQTd4Q
 9wD/g1oHcA//a6Tj7SBNjFNM1iNhWUo1lxAja0lpSodSnB2g4FCZ4R61SBR4l/psBL73xktp
 rDHrx4aSpwkRP6Epu6mLvhlfjmkRG4OynJ5HG1gfv7RJJfnUdUM1z5kdS8JBrOhMJS2c/gPf
 wv1TGRq2XdMPnfY2o0CxRqpcLkx4vBODvJGl2mQyJF/gPepdDfcT8/PY9BJ7FL6Hrq1gnAo4
 3Iv9qV0JiT2wmZciNyYQhmA1V6dyTRiQ4YAc31zOo2IM+xisPzeSHgw3ONY/XhYvfZ9r7W1l
 pNQdc2G+o4Di9NPFHQQhDw3YTRR1opJaTlRDzxYxzU6ZnUUBghxt9cwUWTpfCktkMZiPSDGd
 KgQBjnweV2jw9UOTxjb4LXqDjmSNkjDdQUOU69jGMUXgihvo4zhYcMX8F5gWdRtMR7DzW/YE
 BgVcyxNkMIXoY1aYj6npHYiNQesQlqjU6azjbH70/SXKM5tNRplgW8TNprMDuntdvV9wNkFs
 9TyM02V5aWxFfI42+aivc4KEw69SE9KXwC7FSf5wXzuTot97N9Phj/Z3+jx443jo2NR34XgF
 89cct7wJMjOF7bBefo0fPPZQuIma0Zym71cP61OP/i11ahNye6HGKfxGCOcs5wW9kRQEk8P9
 M/k2wt3mt/fCQnuP/mWutNPt95w9wSsUyATLmtNrwccz63W5Ag0EVcufkQEQAOfX3n0g0fZz
 Bgm/S2zF/kxQKCEKP8ID+Vz8sy2GpDvveBq4H2Y34XWsT1zLJdvqPI4af4ZSMxuerWjXbVWb
 T6d4odQIG0fKx4F8NccDqbgHeZRNajXeeJ3R7gAzvWvQNLz4piHrO/B4tf8svmRBL0ZB5P5A
 2uhdwLU3NZuK22zpNn4is87BPWF8HhY0L5fafgDMOqnf4guJVJPYNPhUFzXUbPqOKOkL8ojk
 CXxkOFHAbjstSK5Ca3fKquY3rdX3DNo+EL7FvAiw1mUtS+5GeYE+RMnDCsVFm/C7kY8c2d0G
 NWkB9pJM5+mnIoFNxy7YBcldYATVeOHoY4LyaUWNnAvFYWp08dHWfZo9WCiJMuTfgtH9tc75
 7QanMVdPt6fDK8UUXIBLQ2TWr/sQKE9xtFuEmoQGlE1l6bGaDnnMLcYu+Asp3kDT0w4zYGsx
 5r6XQVRH4+5N6eHZiaeYtFOujp5n+pjBaQK7wUUjDilPQ5QMzIuCL4YjVoylWiBNknvQWBXS
 lQCWmavOT9sttGQXdPCC5ynI+1ymZC1ORZKANLnRAb0NH/UCzcsstw2TAkFnMEbo9Zu9w7Kv
 AxBQXWeXhJI9XQssfrf4Gusdqx8nPEpfOqCtbbwJMATbHyqLt7/oz/5deGuwxgb65pWIzufa
 N7eop7uh+6bezi+rugUI+w6DABEBAAGJAjwEGAEIACYCGwwWIQQb2cqtc1xMOkYN/MpN3hD3
 AP+DWgUCXw7HsgUJEqkpoQAKCRBN3hD3AP+DWrrpD/4qS3dyVRxDcDHIlmguXjC1Q5tZTwNB
 boaBTPHSy/Nksu0eY7x6HfQJ3xajVH32Ms6t1trDQmPx2iP5+7iDsb7OKAb5eOS8h+BEBDeq
 3ecsQDv0fFJOA9ag5O3LLNk+3x3q7e0uo06XMaY7UHS341ozXUUI7wC7iKfoUTv03iO9El5f
 XpNMx/YrIMduZ2+nd9Di7o5+KIwlb2mAB9sTNHdMrXesX8eBL6T9b+MZJk+mZuPxKNVfEQMQ
 a5SxUEADIPQTPNvBewdeI80yeOCrN+Zzwy/Mrx9EPeu59Y5vSJOx/z6OUImD/GhX7Xvkt3kq
 Er5KTrJz3++B6SH9pum9PuoE/k+nntJkNMmQpR4MCBaV/J9gIOPGodDKnjdng+mXliF3Ptu6
 3oxc2RCyGzTlxyMwuc2U5Q7KtUNTdDe8T0uE+9b8BLMVQDDfJjqY0VVqSUwImzTDLX9S4g/8
 kC4HRcclk8hpyhY2jKGluZO0awwTIMgVEzmTyBphDg/Gx7dZU1Xf8HFuE+UZ5UDHDTnwgv7E
 th6RC9+WrhDNspZ9fJjKWRbveQgUFCpe1sa77LAw+XFrKmBHXp9ZVIe90RMe2tRL06BGiRZr
 jPrnvUsUUsjRoRNJjKKA/REq+sAnhkNPPZ/NNMjaZ5b8Tovi8C0tmxiCHaQYqj7G2rgnT0kt
 WNyWQQ==
Organization: Red Hat GmbH
Message-ID: <0e04b526-924d-aa51-cc2e-2c7561ce3df2@redhat.com>
Date: Thu, 23 Jul 2020 15:14:31 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.9.0
MIME-Version: 1.0
In-Reply-To: <20200723130831.GE7191@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.13
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, linux-kernel@vger.kernel.org,
 linux-mm@kvack.org, xen-devel@lists.xenproject.org,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Andrew Morton <akpm@linux-foundation.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 23.07.20 15:08, Roger Pau Monné wrote:
> On Thu, Jul 23, 2020 at 02:28:13PM +0200, Jürgen Groß wrote:
>> On 23.07.20 14:23, Roger Pau Monné wrote:
>>> On Thu, Jul 23, 2020 at 01:37:03PM +0200, David Hildenbrand wrote:
>>>> On 23.07.20 10:45, Roger Pau Monne wrote:
>>>>> Add an extra option to add_memory_resource that overrides the memory
>>>>> hotplug online behavior in order to force onlining of memory from
>>>>> add_memory_resource unconditionally.
>>>>>
>>>>> This is required for the Xen balloon driver, that must run the
>>>>> online page callback in order to correctly process the newly added
>>>>> memory region, note this is an unpopulated region that is used by Linux
>>>>> to either hotplug RAM or to map foreign pages from other domains, and
>>>>> hence memory hotplug when running on Xen can be used even without the
>>>>> user explicitly requesting it, as part of the normal operations of the
>>>>> OS when attempting to map memory from a different domain.
>>>>>
>>>>> Setting a different default value of memhp_default_online_type when
>>>>> attaching the balloon driver is not a robust solution, as the user (or
>>>>> distro init scripts) could still change it and thus break the Xen
>>>>> balloon driver.
>>>>
>>>> I think we discussed this a couple of times before (even triggered by my
>>>> request), and this is responsibility of user space to configure. Usually
>>>> distros have udev rules to online memory automatically. Especially, user
>>>> space should be able to configure *how* to online memory.
>>>
>>> Note (as per the commit message) that in the specific case I'm
>>> referring to the memory hotplugged by the Xen balloon driver will be
>>> an unpopulated range to be used internally by certain Xen subsystems,
>>> like the xen-blkback or the privcmd drivers. The addition of such
>>> blocks of (unpopulated) memory can happen without the user explicitly
>>> requesting it, and hence not even aware such hotplug process is taking
>>> place. To be clear: no actual RAM will be added to the system.
>>>
>>> Failure to online such blocks using the Xen specific online handler
>>> (which does not handle back the memory to the allocator in any way)
>>> will result in the system getting stuck and malfunctioning.
>>>
>>>> It's the admin/distro responsibility to configure this properly. In case
>>>> this doesn't happen (or as you say, users change it), bad luck.
>>>>
>>>> E.g., virtio-mem takes care to not add more memory in case it is not
>>>> getting onlined. I remember hyper-v has similar code to at least wait a
>>>> bit for memory to get onlined.
>>>
>>> I don't think VirtIO or Hyper-V use the hotplug system in the same way
>>> as Xen, as said this is done to add unpopulated memory regions that
>>> will be used to map foreign memory (from other domains) by Xen drivers
>>> on the system.
>>>
>>> Maybe this should somehow use a different mechanism to hotplug such
>>> empty memory blocks? I don't mind doing this differently, but I would
>>> need some pointers. Allowing user-space to change a (seemingly
>>> unrelated) parameter and as a result produce failures on Xen drivers
>>> is not an acceptable solution IMO.
>>
>> Maybe we can use the same approach as Xen PV-domains: pre-allocate a
>> region in the memory map to be used for mapping foreign pages. For the
>> kernel it will look like pre-ballooned memory, so it will create struct
>> page for the region (which is what we are after), but it won't give the
>> memory to the allocator.
> 
> IMO using something similar to memory hotplug would give us more
> flexibility, and TBH the logic is already there in the balloon driver.
> It seems quite wasteful to allocate such region(s) beforehand for all
> domains, even when most of them won't end up using foreign mappings at
> all.

I do wonder why these issues you describe start to pop up now, literally
years after this stuff has been implemented - or am I missing something
important?

> 
> Anyway, I'm going to take a look at how to do that, I guess it's going
> to involve playing with the memory map and reserving some space.
> 
> I suggest we should remove the Xen balloon hotplug logic, as it's not
> working properly and we don't have a plan to fix it.

Which exact hotplug logic are you referring to?

-- 
Thanks,

David / dhildenb



From xen-devel-bounces@lists.xenproject.org Thu Jul 23 13:21:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jul 2020 13:21:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyb9e-0007rC-Pi; Thu, 23 Jul 2020 13:20:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=tIQT=BC=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jyb9e-0007r7-2p
 for xen-devel@lists.xenproject.org; Thu, 23 Jul 2020 13:20:58 +0000
X-Inumbo-ID: 56d28a43-cce7-11ea-870b-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 56d28a43-cce7-11ea-870b-bc764e2007e4;
 Thu, 23 Jul 2020 13:20:57 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 4352AAB55;
 Thu, 23 Jul 2020 13:21:04 +0000 (UTC)
Subject: Re: [PATCH 3/3] memory: introduce an option to force onlining of
 hotplug memory
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <20200723084523.42109-1-roger.pau@citrix.com>
 <20200723084523.42109-4-roger.pau@citrix.com>
 <21490d49-b2cf-a398-0609-8010bdb0b004@redhat.com>
 <20200723122300.GD7191@Air-de-Roger>
 <404ea76f-c3d8-dbc5-432d-08d84a17f2d7@suse.com>
 <20200723130831.GE7191@Air-de-Roger>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <76640b3e-f46c-80d5-7714-aa3b731276ab@suse.com>
Date: Thu, 23 Jul 2020 15:20:55 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20200723130831.GE7191@Air-de-Roger>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 David Hildenbrand <david@redhat.com>, linux-kernel@vger.kernel.org,
 linux-mm@kvack.org, xen-devel@lists.xenproject.org,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Andrew Morton <akpm@linux-foundation.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 23.07.20 15:08, Roger Pau Monné wrote:
> On Thu, Jul 23, 2020 at 02:28:13PM +0200, Jürgen Groß wrote:
>> On 23.07.20 14:23, Roger Pau Monné wrote:
>>> On Thu, Jul 23, 2020 at 01:37:03PM +0200, David Hildenbrand wrote:
>>>> On 23.07.20 10:45, Roger Pau Monne wrote:
>>>>> Add an extra option to add_memory_resource that overrides the memory
>>>>> hotplug online behavior in order to force onlining of memory from
>>>>> add_memory_resource unconditionally.
>>>>>
>>>>> This is required for the Xen balloon driver, that must run the
>>>>> online page callback in order to correctly process the newly added
>>>>> memory region, note this is an unpopulated region that is used by Linux
>>>>> to either hotplug RAM or to map foreign pages from other domains, and
>>>>> hence memory hotplug when running on Xen can be used even without the
>>>>> user explicitly requesting it, as part of the normal operations of the
>>>>> OS when attempting to map memory from a different domain.
>>>>>
>>>>> Setting a different default value of memhp_default_online_type when
>>>>> attaching the balloon driver is not a robust solution, as the user (or
>>>>> distro init scripts) could still change it and thus break the Xen
>>>>> balloon driver.
>>>>
>>>> I think we discussed this a couple of times before (even triggered by my
>>>> request), and this is responsibility of user space to configure. Usually
>>>> distros have udev rules to online memory automatically. Especially, user
>>>> space should be able to configure *how* to online memory.
>>>
>>> Note (as per the commit message) that in the specific case I'm
>>> referring to the memory hotplugged by the Xen balloon driver will be
>>> an unpopulated range to be used internally by certain Xen subsystems,
>>> like the xen-blkback or the privcmd drivers. The addition of such
>>> blocks of (unpopulated) memory can happen without the user explicitly
>>> requesting it, and hence not even aware such hotplug process is taking
>>> place. To be clear: no actual RAM will be added to the system.
>>>
>>> Failure to online such blocks using the Xen specific online handler
>>> (which does not handle back the memory to the allocator in any way)
>>> will result in the system getting stuck and malfunctioning.
>>>
>>>> It's the admin/distro responsibility to configure this properly. In case
>>>> this doesn't happen (or as you say, users change it), bad luck.
>>>>
>>>> E.g., virtio-mem takes care to not add more memory in case it is not
>>>> getting onlined. I remember hyper-v has similar code to at least wait a
>>>> bit for memory to get onlined.
>>>
>>> I don't think VirtIO or Hyper-V use the hotplug system in the same way
>>> as Xen, as said this is done to add unpopulated memory regions that
>>> will be used to map foreign memory (from other domains) by Xen drivers
>>> on the system.
>>>
>>> Maybe this should somehow use a different mechanism to hotplug such
>>> empty memory blocks? I don't mind doing this differently, but I would
>>> need some pointers. Allowing user-space to change a (seemingly
>>> unrelated) parameter and as a result produce failures on Xen drivers
>>> is not an acceptable solution IMO.
>>
>> Maybe we can use the same approach as Xen PV-domains: pre-allocate a
>> region in the memory map to be used for mapping foreign pages. For the
>> kernel it will look like pre-ballooned memory, so it will create struct
>> page for the region (which is what we are after), but it won't give the
>> memory to the allocator.
> 
> IMO using something similar to memory hotplug would give us more
> flexibility, and TBH the logic is already there in the balloon driver.
> It seems quite wasteful to allocate such region(s) beforehand for all
> domains, even when most of them won't end up using foreign mappings at
> all.

We can do it for dom0 only by default, and add a boot parameter, e.g.
for driver domains.

And the logic is already there (just pv-only right now).

> 
> Anyway, I'm going to take a look at how to do that, I guess it's going
> to involve playing with the memory map and reserving some space.

Look at arch/x86/xen/setup.c (xen_add_extra_mem() and its usage).

> 
> I suggest we should remove the Xen balloon hotplug logic, as it's not
> working properly and we don't have a plan to fix it.

I used memory hotplug successfully not very long ago.


Juergen


From xen-devel-bounces@lists.xenproject.org Thu Jul 23 13:22:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jul 2020 13:22:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jybB9-0007zk-52; Thu, 23 Jul 2020 13:22:31 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=1OPV=BC=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jybB7-0007za-H6
 for xen-devel@lists.xenproject.org; Thu, 23 Jul 2020 13:22:29 +0000
X-Inumbo-ID: 8df58aec-cce7-11ea-a294-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 8df58aec-cce7-11ea-a294-12813bfff9fa;
 Thu, 23 Jul 2020 13:22:28 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=zFAxGj7t0naFq8V00wiOTIEgSTg9FjQdClBP/9y0MLA=; b=ledAvmMe5HcHyt7CasvjnvTK/o
 VMHBNaqvcyBPNGlq/d0ou4c00X2b1pUrx9IoZ3vq9D52WBGiWGh8n6eV9h/POTPpq0G0nFSYFyGzu
 lPdQSy5m3+JilZDdfwQOq76uqvNgrzRU5YkXeB9HXHd1w4qmWMIxZSkIwBJk4l5OGRsU=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jybB4-0008Ca-Mi; Thu, 23 Jul 2020 13:22:26 +0000
Received: from 54-240-197-238.amazon.com ([54.240.197.238]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jybB4-0007JB-F3; Thu, 23 Jul 2020 13:22:26 +0000
Subject: Re: [PATCH] xen/x86: irq: Avoid a TOCTOU race in
 pirq_spin_lock_irq_desc()
To: Jan Beulich <jbeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>
References: <20200722165300.22655-1-julien@xen.org>
 <c9863243-0b5e-521f-80b8-bc5673f895a6@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <5bd56ef4-8bf5-3308-b7db-71e41ac45918@xen.org>
Date: Thu, 23 Jul 2020 14:22:22 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:68.0)
 Gecko/20100101 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <c9863243-0b5e-521f-80b8-bc5673f895a6@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, Julien Grall <jgrall@amazon.com>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi Jan,

On 23/07/2020 12:23, Jan Beulich wrote:
> On 22.07.2020 18:53, Julien Grall wrote:
>> --- a/xen/arch/x86/irq.c
>> +++ b/xen/arch/x86/irq.c
>> @@ -1187,7 +1187,7 @@ struct irq_desc *pirq_spin_lock_irq_desc(
>>   
>>       for ( ; ; )
>>       {
>> -        int irq = pirq->arch.irq;
>> +        int irq = read_atomic(&pirq->arch.irq);
> 
> There we go - I'd be fine this way, but I'm pretty sure Andrew
> would want this to be ACCESS_ONCE(). So I guess now is the time
> to settle which one to prefer in new code (or which criteria
> there are to prefer one over the other).

I would prefer if we had a single way to force the compiler to do a 
single access (read/write).

The existing implementation of ACCESS_ONCE() can only work on scalar 
types. The implementation is based on Linux's, although we have an extra 
check. Looking through the Linux history, it looks like it is not 
possible to make ACCESS_ONCE() work with non-scalar types:

     ACCESS_ONCE does not work reliably on non-scalar types. For
     example gcc 4.6 and 4.7 might remove the volatile tag for such
     accesses during the SRA (scalar replacement of aggregates) step
     https://gcc.gnu.org/bugzilla/show_bug.cgi?id=58145)

I understand that our implementation of read_atomic() and write_atomic() 
would lead to less optimized code. So maybe we want to import 
READ_ONCE() and WRITE_ONCE() from Linux?

As a side note, I have seen a suggestion online (see [1]) that those 
helpers wouldn't be portable:

"One relatively unimportant misunderstanding is due to the fact that the 
standard only talks about accesses to volatile objects. It does not talk 
about accesses via volatile qualified pointers. Some programmers believe 
that using a pointer-to-volatile should be handled as though it pointed 
to a volatile object. That is not guaranteed by the standard and is 
therefore not portable. However, this is relatively unimportant because 
gcc does in fact treat a pointer-to-volatile as though it pointed to a 
volatile object."

I would assume that the use is OK on Clang and GCC given that Linux has 
been using it.


> And this is of course besides the fact that I think we have many
> more instances where guaranteeing a single access would be
> needed, if we're afraid of the described permitted compiler
> behavior. Which then makes me wonder if this is really something
> we should fix one by one, rather than by at least larger scope
> audits (in order to not suggest "throughout the code base").

It depends on how much time the contributor can invest in chasing the 
rest of the issues. The larger the scope, the less likely you are to 
find someone with the bandwidth to fix it completely.

If the scope is "a field", then I think it is a reasonable suggestion.

In this case, I had a look at arch.irq and wasn't able to spot other 
potential issues.

> As a minor remark, unless you've observed problematic behavior,
> would you mind adding "potential" or "theoretical" to the title?

I am not aware of any issues with compilers so far, so I can add 
"potential" to the title.

Cheers,

[1] https://www.airs.com/blog/archives/154

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Jul 23 13:22:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jul 2020 13:22:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jybBZ-00082k-EQ; Thu, 23 Jul 2020 13:22:57 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=64jQ=BC=redhat.com=david@srs-us1.protection.inumbo.net>)
 id 1jybBY-00082d-BJ
 for xen-devel@lists.xenproject.org; Thu, 23 Jul 2020 13:22:56 +0000
X-Inumbo-ID: 9e4cf358-cce7-11ea-a294-12813bfff9fa
Received: from us-smtp-1.mimecast.com (unknown [207.211.31.81])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 9e4cf358-cce7-11ea-a294-12813bfff9fa;
 Thu, 23 Jul 2020 13:22:55 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
 s=mimecast20190719; t=1595510575;
 h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
 content-transfer-encoding:content-transfer-encoding:
 in-reply-to:in-reply-to:references:references:autocrypt:autocrypt;
 bh=swJfYIy4gp/vFDkclYRL1u0h1aBED6tb1OvFVOnEUbw=;
 b=DIWLKyNCVQz9l2qSarsgp8qWu2YjbR8ZhFLg/vAWjCDqNLJsCBa2KIvx8TGa+jOG0ci29N
 llPlLhfoWCSoAQdXG24Ae7ZLvwEc1C0naN7XSKq5V4el+pDZOBmds0dfK+aXb/uGDk7eNA
 QPxpOKmQnoKfGbBgMaODvJ/t2VQFsek=
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-469-mCH6JL7LN-6NHAGzomymBA-1; Thu, 23 Jul 2020 09:22:53 -0400
X-MC-Unique: mCH6JL7LN-6NHAGzomymBA-1
Received: from smtp.corp.redhat.com (int-mx05.intmail.prod.int.phx2.redhat.com
 [10.5.11.15])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id BD1E01083E83;
 Thu, 23 Jul 2020 13:22:51 +0000 (UTC)
Received: from [10.36.114.90] (ovpn-114-90.ams2.redhat.com [10.36.114.90])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 1506F88F1F;
 Thu, 23 Jul 2020 13:22:49 +0000 (UTC)
Subject: Re: [PATCH 3/3] memory: introduce an option to force onlining of
 hotplug memory
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <20200723084523.42109-1-roger.pau@citrix.com>
 <20200723084523.42109-4-roger.pau@citrix.com>
 <21490d49-b2cf-a398-0609-8010bdb0b004@redhat.com>
 <20200723122300.GD7191@Air-de-Roger>
From: David Hildenbrand <david@redhat.com>
Autocrypt: addr=david@redhat.com; prefer-encrypt=mutual; keydata=
 mQINBFXLn5EBEAC+zYvAFJxCBY9Tr1xZgcESmxVNI/0ffzE/ZQOiHJl6mGkmA1R7/uUpiCjJ
 dBrn+lhhOYjjNefFQou6478faXE6o2AhmebqT4KiQoUQFV4R7y1KMEKoSyy8hQaK1umALTdL
 QZLQMzNE74ap+GDK0wnacPQFpcG1AE9RMq3aeErY5tujekBS32jfC/7AnH7I0v1v1TbbK3Gp
 XNeiN4QroO+5qaSr0ID2sz5jtBLRb15RMre27E1ImpaIv2Jw8NJgW0k/D1RyKCwaTsgRdwuK
 Kx/Y91XuSBdz0uOyU/S8kM1+ag0wvsGlpBVxRR/xw/E8M7TEwuCZQArqqTCmkG6HGcXFT0V9
 PXFNNgV5jXMQRwU0O/ztJIQqsE5LsUomE//bLwzj9IVsaQpKDqW6TAPjcdBDPLHvriq7kGjt
 WhVhdl0qEYB8lkBEU7V2Yb+SYhmhpDrti9Fq1EsmhiHSkxJcGREoMK/63r9WLZYI3+4W2rAc
 UucZa4OT27U5ZISjNg3Ev0rxU5UH2/pT4wJCfxwocmqaRr6UYmrtZmND89X0KigoFD/XSeVv
 jwBRNjPAubK9/k5NoRrYqztM9W6sJqrH8+UWZ1Idd/DdmogJh0gNC0+N42Za9yBRURfIdKSb
 B3JfpUqcWwE7vUaYrHG1nw54pLUoPG6sAA7Mehl3nd4pZUALHwARAQABtCREYXZpZCBIaWxk
 ZW5icmFuZCA8ZGF2aWRAcmVkaGF0LmNvbT6JAlgEEwEIAEICGwMGCwkIBwMCBhUIAgkKCwQW
 AgMBAh4BAheAAhkBFiEEG9nKrXNcTDpGDfzKTd4Q9wD/g1oFAl8Ox4kFCRKpKXgACgkQTd4Q
 9wD/g1oHcA//a6Tj7SBNjFNM1iNhWUo1lxAja0lpSodSnB2g4FCZ4R61SBR4l/psBL73xktp
 rDHrx4aSpwkRP6Epu6mLvhlfjmkRG4OynJ5HG1gfv7RJJfnUdUM1z5kdS8JBrOhMJS2c/gPf
 wv1TGRq2XdMPnfY2o0CxRqpcLkx4vBODvJGl2mQyJF/gPepdDfcT8/PY9BJ7FL6Hrq1gnAo4
 3Iv9qV0JiT2wmZciNyYQhmA1V6dyTRiQ4YAc31zOo2IM+xisPzeSHgw3ONY/XhYvfZ9r7W1l
 pNQdc2G+o4Di9NPFHQQhDw3YTRR1opJaTlRDzxYxzU6ZnUUBghxt9cwUWTpfCktkMZiPSDGd
 KgQBjnweV2jw9UOTxjb4LXqDjmSNkjDdQUOU69jGMUXgihvo4zhYcMX8F5gWdRtMR7DzW/YE
 BgVcyxNkMIXoY1aYj6npHYiNQesQlqjU6azjbH70/SXKM5tNRplgW8TNprMDuntdvV9wNkFs
 9TyM02V5aWxFfI42+aivc4KEw69SE9KXwC7FSf5wXzuTot97N9Phj/Z3+jx443jo2NR34XgF
 89cct7wJMjOF7bBefo0fPPZQuIma0Zym71cP61OP/i11ahNye6HGKfxGCOcs5wW9kRQEk8P9
 M/k2wt3mt/fCQnuP/mWutNPt95w9wSsUyATLmtNrwccz63W5Ag0EVcufkQEQAOfX3n0g0fZz
 Bgm/S2zF/kxQKCEKP8ID+Vz8sy2GpDvveBq4H2Y34XWsT1zLJdvqPI4af4ZSMxuerWjXbVWb
 T6d4odQIG0fKx4F8NccDqbgHeZRNajXeeJ3R7gAzvWvQNLz4piHrO/B4tf8svmRBL0ZB5P5A
 2uhdwLU3NZuK22zpNn4is87BPWF8HhY0L5fafgDMOqnf4guJVJPYNPhUFzXUbPqOKOkL8ojk
 CXxkOFHAbjstSK5Ca3fKquY3rdX3DNo+EL7FvAiw1mUtS+5GeYE+RMnDCsVFm/C7kY8c2d0G
 NWkB9pJM5+mnIoFNxy7YBcldYATVeOHoY4LyaUWNnAvFYWp08dHWfZo9WCiJMuTfgtH9tc75
 7QanMVdPt6fDK8UUXIBLQ2TWr/sQKE9xtFuEmoQGlE1l6bGaDnnMLcYu+Asp3kDT0w4zYGsx
 5r6XQVRH4+5N6eHZiaeYtFOujp5n+pjBaQK7wUUjDilPQ5QMzIuCL4YjVoylWiBNknvQWBXS
 lQCWmavOT9sttGQXdPCC5ynI+1ymZC1ORZKANLnRAb0NH/UCzcsstw2TAkFnMEbo9Zu9w7Kv
 AxBQXWeXhJI9XQssfrf4Gusdqx8nPEpfOqCtbbwJMATbHyqLt7/oz/5deGuwxgb65pWIzufa
 N7eop7uh+6bezi+rugUI+w6DABEBAAGJAjwEGAEIACYCGwwWIQQb2cqtc1xMOkYN/MpN3hD3
 AP+DWgUCXw7HsgUJEqkpoQAKCRBN3hD3AP+DWrrpD/4qS3dyVRxDcDHIlmguXjC1Q5tZTwNB
 boaBTPHSy/Nksu0eY7x6HfQJ3xajVH32Ms6t1trDQmPx2iP5+7iDsb7OKAb5eOS8h+BEBDeq
 3ecsQDv0fFJOA9ag5O3LLNk+3x3q7e0uo06XMaY7UHS341ozXUUI7wC7iKfoUTv03iO9El5f
 XpNMx/YrIMduZ2+nd9Di7o5+KIwlb2mAB9sTNHdMrXesX8eBL6T9b+MZJk+mZuPxKNVfEQMQ
 a5SxUEADIPQTPNvBewdeI80yeOCrN+Zzwy/Mrx9EPeu59Y5vSJOx/z6OUImD/GhX7Xvkt3kq
 Er5KTrJz3++B6SH9pum9PuoE/k+nntJkNMmQpR4MCBaV/J9gIOPGodDKnjdng+mXliF3Ptu6
 3oxc2RCyGzTlxyMwuc2U5Q7KtUNTdDe8T0uE+9b8BLMVQDDfJjqY0VVqSUwImzTDLX9S4g/8
 kC4HRcclk8hpyhY2jKGluZO0awwTIMgVEzmTyBphDg/Gx7dZU1Xf8HFuE+UZ5UDHDTnwgv7E
 th6RC9+WrhDNspZ9fJjKWRbveQgUFCpe1sa77LAw+XFrKmBHXp9ZVIe90RMe2tRL06BGiRZr
 jPrnvUsUUsjRoRNJjKKA/REq+sAnhkNPPZ/NNMjaZ5b8Tovi8C0tmxiCHaQYqj7G2rgnT0kt
 WNyWQQ==
Organization: Red Hat GmbH
Message-ID: <e94d9556-f615-bbe2-07d2-08958969ee5f@redhat.com>
Date: Thu, 23 Jul 2020 15:22:49 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.9.0
MIME-Version: 1.0
In-Reply-To: <20200723122300.GD7191@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.15
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, linux-kernel@vger.kernel.org,
 linux-mm@kvack.org, xen-devel@lists.xenproject.org,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Andrew Morton <akpm@linux-foundation.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 23.07.20 14:23, Roger Pau Monné wrote:
> On Thu, Jul 23, 2020 at 01:37:03PM +0200, David Hildenbrand wrote:
>> On 23.07.20 10:45, Roger Pau Monne wrote:
>>> Add an extra option to add_memory_resource that overrides the memory
>>> hotplug online behavior in order to force onlining of memory from
>>> add_memory_resource unconditionally.
>>>
>>> This is required for the Xen balloon driver, which must run the
>>> online page callback in order to correctly process the newly added
>>> memory region. Note this is an unpopulated region that is used by Linux
>>> to either hotplug RAM or to map foreign pages from other domains, and
>>> hence memory hotplug when running on Xen can be used even without the
>>> user explicitly requesting it, as part of the normal operations of the
>>> OS when attempting to map memory from a different domain.
>>>
>>> Setting a different default value of memhp_default_online_type when
>>> attaching the balloon driver is not a robust solution, as the user (or
>>> distro init scripts) could still change it and thus break the Xen
>>> balloon driver.
>>
>> I think we discussed this a couple of times before (even triggered by my
>> request), and this is the responsibility of user space to configure. Usually
>> distros have udev rules to online memory automatically. Especially, user
>> space should be able to configure *how* to online memory.
> 
> Note (as per the commit message) that in the specific case I'm
> referring to, the memory hotplugged by the Xen balloon driver will be
> an unpopulated range to be used internally by certain Xen subsystems,
> like the xen-blkback or the privcmd drivers. The addition of such
> blocks of (unpopulated) memory can happen without the user explicitly
> requesting it, and hence without the user even being aware such a
> hotplug process is taking place. To be clear: no actual RAM will be
> added to the system.

Okay, but there is also the case where Xen will actually hotplug memory
using this same handler IIRC (at least I've read papers about it). Both
are using the same handler, correct?

> 
>> It's the admin/distro responsibility to configure this properly. In case
>> this doesn't happen (or as you say, users change it), bad luck.
>>
>> E.g., virtio-mem takes care to not add more memory in case it is not
>> getting onlined. I remember hyper-v has similar code to at least wait a
>> bit for memory to get onlined.
> 
> I don't think VirtIO or Hyper-V use the hotplug system in the same way
> as Xen; as said, this is done to add unpopulated memory regions that
> will be used to map foreign memory (from other domains) by Xen drivers
> on the system.

Indeed, if the memory is never exposed to the buddy (and all you need is
struct pages + a kernel virtual mapping), I wonder if
memremap/ZONE_DEVICE is what you want? Then you won't have user-visible
memory blocks created with unclear online semantics, partially involving
the buddy.

-- 
Thanks,

David / dhildenb



From xen-devel-bounces@lists.xenproject.org Thu Jul 23 13:25:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jul 2020 13:25:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jybDi-0008E6-Sc; Thu, 23 Jul 2020 13:25:10 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=0L1b=BC=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jybDh-0008E1-0v
 for xen-devel@lists.xenproject.org; Thu, 23 Jul 2020 13:25:09 +0000
X-Inumbo-ID: ed2cbddc-cce7-11ea-a298-12813bfff9fa
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ed2cbddc-cce7-11ea-a298-12813bfff9fa;
 Thu, 23 Jul 2020 13:25:08 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595510708;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:content-transfer-encoding:in-reply-to;
 bh=dzkYPQkWuIKszRErK34RX+I5IJu4n5MB+23a1TyFK1Y=;
 b=Onrf+/q5zTnntN63mSbfYdZnPqrHs9ILyVOrctgAUM9m5tGW9dL7TfkX
 XOFoRmDmYHtjsDBanIDBn2U4GAnQp6yVdPMvdCdo/jSwWnnl/q9gK9ffq
 cd+s4g41hLWlgG2RDjuoukYtM6rAN2Nd6sPgKlWEjL3NHmHgM0gMJP8fQ c=;
Authentication-Results: esa6.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: hOa7QLdLG4sGCBb98EKi9ESS5BawhFC2ZRJ0t8z3qRtPr7/JcAaRwVCUiMx00yVY9q0BWMMc2y
 EJEFrR8DRLeTbZ9tUl45BDZ1uoYV5ZPgB8LRI6IwwELn49a3nlLty4Rp24orrUEL3/2aJm7k5Q
 WYibHeAaC5zq9npx9jp5FYOwEgVlCDgV+u7runjA/wYjj4USyiKg8nf5WGLr2jIWGQtUnHdwsw
 97ItFod9fnwcAPjpR5w6SWIxIUTOqf/nAHW+7RRkAzap0rBugUku0pYuG/dyjD1E13qenKgqKQ
 HFs=
X-SBRS: 2.7
X-MesageID: 23366682
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,386,1589256000"; d="scan'208";a="23366682"
Date: Thu, 23 Jul 2020 15:25:00 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: David Hildenbrand <david@redhat.com>
Subject: Re: [PATCH 3/3] memory: introduce an option to force onlining of
 hotplug memory
Message-ID: <20200723132500.GF7191@Air-de-Roger>
References: <20200723084523.42109-1-roger.pau@citrix.com>
 <20200723084523.42109-4-roger.pau@citrix.com>
 <21490d49-b2cf-a398-0609-8010bdb0b004@redhat.com>
 <20200723122300.GD7191@Air-de-Roger>
 <404ea76f-c3d8-dbc5-432d-08d84a17f2d7@suse.com>
 <20200723130831.GE7191@Air-de-Roger>
 <0e04b526-924d-aa51-cc2e-2c7561ce3df2@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <0e04b526-924d-aa51-cc2e-2c7561ce3df2@redhat.com>
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: =?utf-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, linux-kernel@vger.kernel.org,
 linux-mm@kvack.org, xen-devel@lists.xenproject.org,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Andrew Morton <akpm@linux-foundation.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Thu, Jul 23, 2020 at 03:14:31PM +0200, David Hildenbrand wrote:
> On 23.07.20 15:08, Roger Pau Monné wrote:
> > On Thu, Jul 23, 2020 at 02:28:13PM +0200, Jürgen Groß wrote:
> >> On 23.07.20 14:23, Roger Pau Monné wrote:
> >>> On Thu, Jul 23, 2020 at 01:37:03PM +0200, David Hildenbrand wrote:
> >>>> On 23.07.20 10:45, Roger Pau Monne wrote:
> >>>>> Add an extra option to add_memory_resource that overrides the memory
> >>>>> hotplug online behavior in order to force onlining of memory from
> >>>>> add_memory_resource unconditionally.
> >>>>>
> >>>>> This is required for the Xen balloon driver, which must run the
> >>>>> online page callback in order to correctly process the newly added
> >>>>> memory region. Note this is an unpopulated region that is used by Linux
> >>>>> to either hotplug RAM or to map foreign pages from other domains, and
> >>>>> hence memory hotplug when running on Xen can be used even without the
> >>>>> user explicitly requesting it, as part of the normal operations of the
> >>>>> OS when attempting to map memory from a different domain.
> >>>>>
> >>>>> Setting a different default value of memhp_default_online_type when
> >>>>> attaching the balloon driver is not a robust solution, as the user (or
> >>>>> distro init scripts) could still change it and thus break the Xen
> >>>>> balloon driver.
> >>>>
> >>>> I think we discussed this a couple of times before (even triggered by my
> >>>> request), and this is the responsibility of user space to configure. Usually
> >>>> distros have udev rules to online memory automatically. Especially, user
> >>>> space should be able to configure *how* to online memory.
> >>>
> >>> Note (as per the commit message) that in the specific case I'm
> >>> referring to, the memory hotplugged by the Xen balloon driver will be
> >>> an unpopulated range to be used internally by certain Xen subsystems,
> >>> like the xen-blkback or the privcmd drivers. The addition of such
> >>> blocks of (unpopulated) memory can happen without the user explicitly
> >>> requesting it, and hence without the user even being aware such a
> >>> hotplug process is taking place. To be clear: no actual RAM will be
> >>> added to the system.
> >>>
> >>> Failure to online such blocks using the Xen specific online handler
> >>> (which does not handle back the memory to the allocator in any way)
> >>> will result in the system getting stuck and malfunctioning.
> >>>
> >>>> It's the admin/distro responsibility to configure this properly. In case
> >>>> this doesn't happen (or as you say, users change it), bad luck.
> >>>>
> >>>> E.g., virtio-mem takes care to not add more memory in case it is not
> >>>> getting onlined. I remember hyper-v has similar code to at least wait a
> >>>> bit for memory to get onlined.
> >>>
> >>> I don't think VirtIO or Hyper-V use the hotplug system in the same way
> >>> as Xen; as said, this is done to add unpopulated memory regions that
> >>> will be used to map foreign memory (from other domains) by Xen drivers
> >>> on the system.
> >>>
> >>> Maybe this should somehow use a different mechanism to hotplug such
> >>> empty memory blocks? I don't mind doing this differently, but I would
> >>> need some pointers. Allowing user-space to change a (seemingly
> >>> unrelated) parameter and as a result produce failures on Xen drivers
> >>> is not an acceptable solution IMO.
> >>
> >> Maybe we can use the same approach as Xen PV-domains: pre-allocate a
> >> region in the memory map to be used for mapping foreign pages. For the
> >> kernel it will look like pre-ballooned memory, so it will create struct
> >> page for the region (which is what we are after), but it won't give the
> >> memory to the allocator.
> > 
> > IMO using something similar to memory hotplug would give us more
> > flexibility, and TBH the logic is already there in the balloon driver.
> > It seems quite wasteful to allocate such region(s) beforehand for all
> > domains, even when most of them won't end up using foreign mappings at
> > all.
> 
> I do wonder why these issues you describe start to pop up now, literally
> years after this stuff has been implemented - or am I missing something
> important?

We are (very slowly) implementing support for switching to a PVH dom0
(something similar to a fully emulated guest as dom0), and that kind
of guest no longer has a pre-allocated memory region for mappings,
likely because we mostly use the native path when dealing with
memory, and such a reservation was done by the PV-specific setup path
that deals with the memory map.

> > 
> > Anyway, I'm going to take a look at how to do that, I guess it's going
> > to involve playing with the memory map and reserving some space.
> > 
> > I suggest we should remove the Xen balloon hotplug logic, as it's not
> > working properly and we don't have a plan to fix it.
> 
> Which exact hotplug logic are you referring to?

There are some sections in the Xen balloon driver protected with
CONFIG_XEN_BALLOON_MEMORY_HOTPLUG.

When xen_hotplug_unpopulated is enabled but the default policy for
memory hotplug is to not online the added blocks, certain operations
like alloc_xenballooned_pages would block forever waiting for the
hotplugged memory to be onlined in order to be usable.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Thu Jul 23 13:28:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jul 2020 13:28:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jybGS-0008P6-Ft; Thu, 23 Jul 2020 13:28:00 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=xWck=BC=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jybGR-0008OF-Jo
 for xen-devel@lists.xenproject.org; Thu, 23 Jul 2020 13:27:59 +0000
X-Inumbo-ID: 52a8f55f-cce8-11ea-a298-12813bfff9fa
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 52a8f55f-cce8-11ea-a298-12813bfff9fa;
 Thu, 23 Jul 2020 13:27:58 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595510878;
 h=subject:to:cc:references:from:message-id:date:
 mime-version:in-reply-to:content-transfer-encoding;
 bh=s7t2QUOLs1qt8WqvBpHUm61e9CeK8KZQNhK9/ZwC1us=;
 b=EVVwTLz2+LEpET7smVxlKNYZpktsGCWY2c/5vu61KLMt7quWCB7eOOWa
 IjKa/MD1T+Ai3lHJz2x26rcu/jokJ1MjRS9ec81W+DWX75Fus3QZ0RzZ0
 RxC0ciP5WEB6My+q82p/gQmLh9on0eV3P+Pv2W+olimhfr1SlxZHpeCtc w=;
Authentication-Results: esa2.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: bDWa+yTMtrngYt6gmlpTivdst9fYiE9wb+rwX3W/ksRkHxrd0FvN54g1Ahk29Gbyp1ZLAIbyuN
 6lwdkWQpLX2KTl/2j3takemDVB4/Pm6Kwj1DvmDcIqyBQ+i0/W5s/majNPt1Kj2q2MNbfC2191
 zbVNLA84DnF1a2Po46ZB1zyRGxoM+L6VgV1ItBq26atLywcXy6d2VFJTUWvGFPHyCKSRz2yLmz
 FVcnyR+8+gi5UPao1+LRQ/UcG521vUSPfErLvmuqEPf2+bZ21iGOXuZPePy97ZJ2SPfQtfYrgK
 Igs=
X-SBRS: 2.7
X-MesageID: 23055499
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,386,1589256000"; d="scan'208";a="23055499"
Subject: Re: [PATCH] x86/vmce: Dispatch vmce_{rd,wr}msr() from
 guest_{rd,wr}msr()
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <20200722101809.8389-1-andrew.cooper3@citrix.com>
 <20200723100727.GA7191@Air-de-Roger>
 <ccc153a5-cf65-c483-43ea-d6b864366e06@citrix.com>
 <20200723113025.GC7191@Air-de-Roger>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <1250cb45-5539-cb45-52eb-b2cb1477c48d@citrix.com>
Date: Thu, 23 Jul 2020 14:27:53 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20200723113025.GC7191@Air-de-Roger>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 Jan Beulich <JBeulich@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 23/07/2020 12:30, Roger Pau Monné wrote:
> On Thu, Jul 23, 2020 at 12:00:53PM +0100, Andrew Cooper wrote:
>> On 23/07/2020 11:07, Roger Pau Monné wrote:
>>> On Wed, Jul 22, 2020 at 11:18:09AM +0100, Andrew Cooper wrote:
>>>> +    case MSR_IA32_MCG_CAP     ... MSR_IA32_MCG_CTL:      /* 0x179 -> 0x17b */
>>>> +    case MSR_IA32_MCx_CTL2(0) ... MSR_IA32_MCx_CTL2(31): /* 0x280 -> 0x29f */
>>>> +    case MSR_IA32_MCx_CTL(0)  ... MSR_IA32_MCx_MISC(31): /* 0x400 -> 0x47f */
>>> Where do you get the ranges from 0 to 31? It seems like the count
>>> field in the CAP register is 8 bits, which could allow for up to 256
>>> banks?
>>>
>>> I'm quite sure this would then overlap with other MSRs?
>> Irritatingly, nothing I can find actually states an upper architectural
>> limit.
>>
>> SDM Vol4, Table 2-2 which enumerates the Architectural MSRs.
>>
>> 0x280 thru 0x29f are explicitly reserved MCx_CTL2, which is a limit of
>> 32 banks.  There are gaps after this in the architectural table, but
>> IceLake has PRMRR_BASE_0 at 0x2a0.
>>
>> The main bank of MCx_{CTL,STATUS,ADDR,MISC} start at 0x400 and are
>> listed in the table up to 0x473, which is a limit of 29 banks.  The
>> Model specific table for SandyBridge fills in the remaining 3 banks up
>> to MSR 0x47f, which is the previous limit of 32 banks.  (These MSRs have
>> package scope rather than core/thread scope, but they are still
>> enumerated architecturally so I'm not sure why they are in the model
>> specific tables.)
>>
>> More importantly however, the VMX MSR range starts at 0x480, immediately
>> above bank 31, which puts an architectural hard limit on the number of
>> banks.
> Yes, realized about the VMX MSRs starting at 0x480, which limits the
> number of banks. Maybe a small comment about the fact that albeit the
> count in the CAP register could go up to 256 32 is the actual limit
> due to how MSRs are arranged?

Ok.  I've added:

    The bank limit of 32 isn't stated anywhere I can locate, but is a
    consequence of the MSR layout described in SDM Volume 4.

as another paragraph to the commit message.

> Note there's also GUEST_MC_BANK_NUM which is the actual implementation
> limit in Xen AFAICT, maybe using it here would be clearer? (and limit
> the ranges forwarded to vmce_rdmsr)

First, there is a note saying that older versions of Xen advertised more
than 2 banks to guests (and therefore we might see such a guest migrated
in), and second, capturing all MSRs is a bug I'm specifically fixing
with this change, as was called out in the commit message.

These MSRs, even beyond the banks implemented by the guest, are still
MCE banks, and need handling appropriately as "out of range banks",
which isn't necessarily the same as falling into the general default MSR
path.

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu Jul 23 13:40:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jul 2020 13:40:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jybS0-00014C-2I; Thu, 23 Jul 2020 13:39:56 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=0L1b=BC=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jybRy-000147-H5
 for xen-devel@lists.xenproject.org; Thu, 23 Jul 2020 13:39:54 +0000
X-Inumbo-ID: fc6b115d-cce9-11ea-a29d-12813bfff9fa
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id fc6b115d-cce9-11ea-a29d-12813bfff9fa;
 Thu, 23 Jul 2020 13:39:53 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595511594;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:content-transfer-encoding:in-reply-to;
 bh=SPySEjUBF6icBOLV/IomyE/JZa+kTbam/2Y6LYZ1gk4=;
 b=FjbKqtKa8KumYowPRZHgOA9v6vXjelNilaeIYt7sm+CBIvRy8gkf6Oi3
 HYG9gdbuMZRyJEqPCu0VG23kLyQIONL+1/sBH5YtcYNQrT4g3UhszMCfD
 0CQrEUw+D6EFDHgTEs5P4q6hm0hXzcPOJiCzcjJTEL1MZqUrTbosvSq+8 0=;
Authentication-Results: esa3.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: gC5NREPQG9nd67j1J33rwfTdriPTr6ltn3xqL6NDbB05O6F5teQ1n0VfuWFZIocFW0HPSBnMhp
 Y3AkadLKuX5GHB6lXFzUjoY8mWfARLA3wo89xLy28bpq0ZpOjtTfM6Z31iif8gGHWxF9Plv8cy
 lLawOTf2kPVUTbahVLvJ2ev0fP6KPvwLLfWkXiupcpZq9kfiZy5s8GImGeE7nXiaGVzhL5KoTB
 k0WwwMKtT43wX/Mtj9uBSx87QwZ7lALeN1Svj0BnISrs76OosK8MsmSJrxp7cUYbfdlvngV48W
 R6g=
X-SBRS: 2.7
X-MesageID: 23038656
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,386,1589256000"; d="scan'208";a="23038656"
Date: Thu, 23 Jul 2020 15:39:45 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: =?utf-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Subject: Re: [PATCH 3/3] memory: introduce an option to force onlining of
 hotplug memory
Message-ID: <20200723133945.GG7191@Air-de-Roger>
References: <20200723084523.42109-1-roger.pau@citrix.com>
 <20200723084523.42109-4-roger.pau@citrix.com>
 <21490d49-b2cf-a398-0609-8010bdb0b004@redhat.com>
 <20200723122300.GD7191@Air-de-Roger>
 <404ea76f-c3d8-dbc5-432d-08d84a17f2d7@suse.com>
 <20200723130831.GE7191@Air-de-Roger>
 <76640b3e-f46c-80d5-7714-aa3b731276ab@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <76640b3e-f46c-80d5-7714-aa3b731276ab@suse.com>
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 David Hildenbrand <david@redhat.com>, linux-kernel@vger.kernel.org,
 linux-mm@kvack.org, xen-devel@lists.xenproject.org,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Andrew Morton <akpm@linux-foundation.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Thu, Jul 23, 2020 at 03:20:55PM +0200, Jürgen Groß wrote:
> On 23.07.20 15:08, Roger Pau Monné wrote:
> > On Thu, Jul 23, 2020 at 02:28:13PM +0200, Jürgen Groß wrote:
> > > On 23.07.20 14:23, Roger Pau Monné wrote:
> > > > On Thu, Jul 23, 2020 at 01:37:03PM +0200, David Hildenbrand wrote:
> > > > > On 23.07.20 10:45, Roger Pau Monne wrote:
> > > > > > Add an extra option to add_memory_resource that overrides the memory
> > > > > > hotplug online behavior in order to force onlining of memory from
> > > > > > add_memory_resource unconditionally.
> > > > > > 
> > > > > > This is required for the Xen balloon driver, that must run the
> > > > > > online page callback in order to correctly process the newly added
> > > > > > memory region, note this is an unpopulated region that is used by Linux
> > > > > > to either hotplug RAM or to map foreign pages from other domains, and
> > > > > > hence memory hotplug when running on Xen can be used even without the
> > > > > > user explicitly requesting it, as part of the normal operations of the
> > > > > > OS when attempting to map memory from a different domain.
> > > > > > 
> > > > > > Setting a different default value of memhp_default_online_type when
> > > > > > attaching the balloon driver is not a robust solution, as the user (or
> > > > > > distro init scripts) could still change it and thus break the Xen
> > > > > > balloon driver.
> > > > > 
> > > > > I think we discussed this a couple of times before (even triggered by my
> > > > > request), and this is the responsibility of user space to configure. Usually
> > > > > distros have udev rules to online memory automatically. Especially, user
> > > > > space should be able to configure *how* to online memory.
> > > > 
> > > > Note (as per the commit message) that in the specific case I'm
> > > > referring to the memory hotplugged by the Xen balloon driver will be
> > > > an unpopulated range to be used internally by certain Xen subsystems,
> > > > like the xen-blkback or the privcmd drivers. The addition of such
> > > > blocks of (unpopulated) memory can happen without the user explicitly
> > > > requesting it, and hence without the user even being aware such a
> > > > hotplug process is taking place. To be clear: no actual RAM will be
> > > > added to the system.
> > > > 
> > > > Failure to online such blocks using the Xen specific online handler
> > > > (which does not hand the memory back to the allocator in any way)
> > > > will result in the system getting stuck and malfunctioning.
> > > > 
> > > > > It's the admin/distro responsibility to configure this properly. In case
> > > > > this doesn't happen (or as you say, users change it), bad luck.
> > > > > 
> > > > > E.g., virtio-mem takes care to not add more memory in case it is not
> > > > > getting onlined. I remember hyper-v has similar code to at least wait a
> > > > > bit for memory to get onlined.
> > > > 
> > > > I don't think VirtIO or Hyper-V use the hotplug system in the same way
> > > > as Xen, as said this is done to add unpopulated memory regions that
> > > > will be used to map foreign memory (from other domains) by Xen drivers
> > > > on the system.
> > > > 
> > > > Maybe this should somehow use a different mechanism to hotplug such
> > > > empty memory blocks? I don't mind doing this differently, but I would
> > > > need some pointers. Allowing user-space to change a (seemingly
> > > > unrelated) parameter and as a result produce failures on Xen drivers
> > > > is not an acceptable solution IMO.
> > > 
> > > Maybe we can use the same approach as Xen PV-domains: pre-allocate a
> > > region in the memory map to be used for mapping foreign pages. For the
> > > kernel it will look like pre-ballooned memory, so it will create struct
> > > page for the region (which is what we are after), but it won't give the
> > > memory to the allocator.
> > 
> > IMO using something similar to memory hotplug would give us more
> > flexibility, and TBH the logic is already there in the balloon driver.
> > It seems quite wasteful to allocate such region(s) beforehand for all
> > domains, even when most of them won't end up using foreign mappings at
> > all.
> 
> We can do it for dom0 only per default, and add a boot parameter e.g.
> for driver domains.
> 
> And the logic is already there (just pv-only right now).
> 
> > 
> > Anyway, I'm going to take a look at how to do that, I guess it's going
> > to involve playing with the memory map and reserving some space.
> 
> Look at arch/x86/xen/setup.c (xen_add_extra_mem() and its usage).

Yes, I've taken a look. It's my rough understanding that I would need
to add a hook for HVM/PVH that modifies the memory map in order to add
an extra region (or regions) that would be marked as reserved using
memblock_reserve by xen_add_extra_mem.

Adding such hook for PVH guests booted using the PVH entry point and
fetching the memory map using the hypercall interface
(mem_map_via_hcall) seems feasible, however I'm not sure dealing with
other guests types is that easy.

> > 
> > I suggest we remove the Xen balloon hotplug logic, as it's not
> > working properly and we don't have a plan to fix it.
> 
> I have used memory hotplug successfully not very long ago.

Right, but it requires a certain set of enabled options, which IMO is
not obvious. For example, enabling xen_hotplug_unpopulated without also
setting the default memory hotplug policy to online the added blocks
will result in processes getting stuck. That is IMO too fragile.
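For reference, the fragile combination of knobs being described looks roughly like this (the sysctl path comes from drivers/xen/balloon.c in mainline Linux of that era; shown as an illustrative sketch, not a recommendation):

```shell
# Allow the Xen balloon driver to hotplug unpopulated memory regions
# for foreign mappings (sysctl exposed by drivers/xen/balloon.c):
echo 1 > /proc/sys/xen/balloon/hotplug_unpopulated

# Without ALSO auto-onlining hotplugged blocks, the added regions stay
# offline and processes waiting on them can get stuck:
echo online > /sys/devices/system/memory/auto_online_blocks
```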

Roger.


From xen-devel-bounces@lists.xenproject.org Thu Jul 23 13:40:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jul 2020 13:40:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jybSD-0001jJ-E0; Thu, 23 Jul 2020 13:40:09 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=h/wE=BC=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jybSD-0001Hu-1d
 for xen-devel@lists.xenproject.org; Thu, 23 Jul 2020 13:40:09 +0000
X-Inumbo-ID: 01d2f904-ccea-11ea-8711-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 01d2f904-ccea-11ea-8711-bc764e2007e4;
 Thu, 23 Jul 2020 13:40:02 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=mj51ygoB1BfBTHGePxPBPZ8dCB6ZoxBCOQUoKxh/irE=; b=N6lZcYvzp7Bvn0E3q0Jz8rlKK
 A5rmpFO1rJHzsIu+YPB3GUdGCl6CtHDhR3C4hAmNTJwYTLiwIXzr33RIcU6ukuQQaYQN7pfzjRWQ+
 dB1uP3kGbgYnALXNy1YP27VhREM13dR1+t4QoT0zs7uyPzgZFxxbZBWQYUEEj/bkDOPpo=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jybS6-00007j-8D; Thu, 23 Jul 2020 13:40:02 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jybS5-0005HD-Qk; Thu, 23 Jul 2020 13:40:01 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jybS5-0008II-Pn; Thu, 23 Jul 2020 13:40:01 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-152142-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 152142: tolerable all pass - PUSHED
X-Osstest-Failures: xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=ffe4f0fe17b5288e0c19955cd1ba589e6db1b0fe
X-Osstest-Versions-That: xen=26707b747feb5d707f659989c0f8f2e847e8020a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 23 Jul 2020 13:40:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 152142 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/152142/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  ffe4f0fe17b5288e0c19955cd1ba589e6db1b0fe
baseline version:
 xen                  26707b747feb5d707f659989c0f8f2e847e8020a

Last test of basis   152121  2020-07-22 15:07:52 Z    0 days
Testing same since   152142  2020-07-23 10:00:30 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   26707b747f..ffe4f0fe17  ffe4f0fe17b5288e0c19955cd1ba589e6db1b0fe -> smoke


From xen-devel-bounces@lists.xenproject.org Thu Jul 23 13:47:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jul 2020 13:47:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jybZN-000240-6o; Thu, 23 Jul 2020 13:47:33 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=64jQ=BC=redhat.com=david@srs-us1.protection.inumbo.net>)
 id 1jybZL-00023v-MN
 for xen-devel@lists.xenproject.org; Thu, 23 Jul 2020 13:47:31 +0000
X-Inumbo-ID: 0ba485b4-cceb-11ea-8715-bc764e2007e4
Received: from us-smtp-delivery-1.mimecast.com (unknown [207.211.31.81])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 0ba485b4-cceb-11ea-8715-bc764e2007e4;
 Thu, 23 Jul 2020 13:47:28 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
 s=mimecast20190719; t=1595512048;
 h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
 content-transfer-encoding:content-transfer-encoding:
 in-reply-to:in-reply-to:references:references:autocrypt:autocrypt;
 bh=18jcLbWvrYiy04myeabqbt4ovzopAoj1RyaVCl5T6K4=;
 b=L7n8oWicJHAYKDrOWPyEEEboNQsguGHfXIZygN7MROx1rurRpXfCX9WLWXUQtYcJ7AEZhF
 lj8gsnsUxEbeWOaseqNEshy3xVq+ZUMcK9B4/KBjSUbZur8BpSdSNSpwcYlI3VNd9B+CCj
 5gRErII/yot5WFYFqyf0Kk5xUn1DXjQ=
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-33-xhZrJ9ZRO-qSQFh2Ax8U_A-1; Thu, 23 Jul 2020 09:47:26 -0400
X-MC-Unique: xhZrJ9ZRO-qSQFh2Ax8U_A-1
Received: from smtp.corp.redhat.com (int-mx01.intmail.prod.int.phx2.redhat.com
 [10.5.11.11])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id B2317800469;
 Thu, 23 Jul 2020 13:47:24 +0000 (UTC)
Received: from [10.36.114.90] (ovpn-114-90.ams2.redhat.com [10.36.114.90])
 by smtp.corp.redhat.com (Postfix) with ESMTP id E56607855A;
 Thu, 23 Jul 2020 13:47:22 +0000 (UTC)
Subject: Re: [PATCH 3/3] memory: introduce an option to force onlining of
 hotplug memory
From: David Hildenbrand <david@redhat.com>
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <20200723084523.42109-1-roger.pau@citrix.com>
 <20200723084523.42109-4-roger.pau@citrix.com>
 <21490d49-b2cf-a398-0609-8010bdb0b004@redhat.com>
 <20200723122300.GD7191@Air-de-Roger>
 <e94d9556-f615-bbe2-07d2-08958969ee5f@redhat.com>
Autocrypt: addr=david@redhat.com; prefer-encrypt=mutual; keydata=
 mQINBFXLn5EBEAC+zYvAFJxCBY9Tr1xZgcESmxVNI/0ffzE/ZQOiHJl6mGkmA1R7/uUpiCjJ
 dBrn+lhhOYjjNefFQou6478faXE6o2AhmebqT4KiQoUQFV4R7y1KMEKoSyy8hQaK1umALTdL
 QZLQMzNE74ap+GDK0wnacPQFpcG1AE9RMq3aeErY5tujekBS32jfC/7AnH7I0v1v1TbbK3Gp
 XNeiN4QroO+5qaSr0ID2sz5jtBLRb15RMre27E1ImpaIv2Jw8NJgW0k/D1RyKCwaTsgRdwuK
 Kx/Y91XuSBdz0uOyU/S8kM1+ag0wvsGlpBVxRR/xw/E8M7TEwuCZQArqqTCmkG6HGcXFT0V9
 PXFNNgV5jXMQRwU0O/ztJIQqsE5LsUomE//bLwzj9IVsaQpKDqW6TAPjcdBDPLHvriq7kGjt
 WhVhdl0qEYB8lkBEU7V2Yb+SYhmhpDrti9Fq1EsmhiHSkxJcGREoMK/63r9WLZYI3+4W2rAc
 UucZa4OT27U5ZISjNg3Ev0rxU5UH2/pT4wJCfxwocmqaRr6UYmrtZmND89X0KigoFD/XSeVv
 jwBRNjPAubK9/k5NoRrYqztM9W6sJqrH8+UWZ1Idd/DdmogJh0gNC0+N42Za9yBRURfIdKSb
 B3JfpUqcWwE7vUaYrHG1nw54pLUoPG6sAA7Mehl3nd4pZUALHwARAQABtCREYXZpZCBIaWxk
 ZW5icmFuZCA8ZGF2aWRAcmVkaGF0LmNvbT6JAlgEEwEIAEICGwMGCwkIBwMCBhUIAgkKCwQW
 AgMBAh4BAheAAhkBFiEEG9nKrXNcTDpGDfzKTd4Q9wD/g1oFAl8Ox4kFCRKpKXgACgkQTd4Q
 9wD/g1oHcA//a6Tj7SBNjFNM1iNhWUo1lxAja0lpSodSnB2g4FCZ4R61SBR4l/psBL73xktp
 rDHrx4aSpwkRP6Epu6mLvhlfjmkRG4OynJ5HG1gfv7RJJfnUdUM1z5kdS8JBrOhMJS2c/gPf
 wv1TGRq2XdMPnfY2o0CxRqpcLkx4vBODvJGl2mQyJF/gPepdDfcT8/PY9BJ7FL6Hrq1gnAo4
 3Iv9qV0JiT2wmZciNyYQhmA1V6dyTRiQ4YAc31zOo2IM+xisPzeSHgw3ONY/XhYvfZ9r7W1l
 pNQdc2G+o4Di9NPFHQQhDw3YTRR1opJaTlRDzxYxzU6ZnUUBghxt9cwUWTpfCktkMZiPSDGd
 KgQBjnweV2jw9UOTxjb4LXqDjmSNkjDdQUOU69jGMUXgihvo4zhYcMX8F5gWdRtMR7DzW/YE
 BgVcyxNkMIXoY1aYj6npHYiNQesQlqjU6azjbH70/SXKM5tNRplgW8TNprMDuntdvV9wNkFs
 9TyM02V5aWxFfI42+aivc4KEw69SE9KXwC7FSf5wXzuTot97N9Phj/Z3+jx443jo2NR34XgF
 89cct7wJMjOF7bBefo0fPPZQuIma0Zym71cP61OP/i11ahNye6HGKfxGCOcs5wW9kRQEk8P9
 M/k2wt3mt/fCQnuP/mWutNPt95w9wSsUyATLmtNrwccz63W5Ag0EVcufkQEQAOfX3n0g0fZz
 Bgm/S2zF/kxQKCEKP8ID+Vz8sy2GpDvveBq4H2Y34XWsT1zLJdvqPI4af4ZSMxuerWjXbVWb
 T6d4odQIG0fKx4F8NccDqbgHeZRNajXeeJ3R7gAzvWvQNLz4piHrO/B4tf8svmRBL0ZB5P5A
 2uhdwLU3NZuK22zpNn4is87BPWF8HhY0L5fafgDMOqnf4guJVJPYNPhUFzXUbPqOKOkL8ojk
 CXxkOFHAbjstSK5Ca3fKquY3rdX3DNo+EL7FvAiw1mUtS+5GeYE+RMnDCsVFm/C7kY8c2d0G
 NWkB9pJM5+mnIoFNxy7YBcldYATVeOHoY4LyaUWNnAvFYWp08dHWfZo9WCiJMuTfgtH9tc75
 7QanMVdPt6fDK8UUXIBLQ2TWr/sQKE9xtFuEmoQGlE1l6bGaDnnMLcYu+Asp3kDT0w4zYGsx
 5r6XQVRH4+5N6eHZiaeYtFOujp5n+pjBaQK7wUUjDilPQ5QMzIuCL4YjVoylWiBNknvQWBXS
 lQCWmavOT9sttGQXdPCC5ynI+1ymZC1ORZKANLnRAb0NH/UCzcsstw2TAkFnMEbo9Zu9w7Kv
 AxBQXWeXhJI9XQssfrf4Gusdqx8nPEpfOqCtbbwJMATbHyqLt7/oz/5deGuwxgb65pWIzufa
 N7eop7uh+6bezi+rugUI+w6DABEBAAGJAjwEGAEIACYCGwwWIQQb2cqtc1xMOkYN/MpN3hD3
 AP+DWgUCXw7HsgUJEqkpoQAKCRBN3hD3AP+DWrrpD/4qS3dyVRxDcDHIlmguXjC1Q5tZTwNB
 boaBTPHSy/Nksu0eY7x6HfQJ3xajVH32Ms6t1trDQmPx2iP5+7iDsb7OKAb5eOS8h+BEBDeq
 3ecsQDv0fFJOA9ag5O3LLNk+3x3q7e0uo06XMaY7UHS341ozXUUI7wC7iKfoUTv03iO9El5f
 XpNMx/YrIMduZ2+nd9Di7o5+KIwlb2mAB9sTNHdMrXesX8eBL6T9b+MZJk+mZuPxKNVfEQMQ
 a5SxUEADIPQTPNvBewdeI80yeOCrN+Zzwy/Mrx9EPeu59Y5vSJOx/z6OUImD/GhX7Xvkt3kq
 Er5KTrJz3++B6SH9pum9PuoE/k+nntJkNMmQpR4MCBaV/J9gIOPGodDKnjdng+mXliF3Ptu6
 3oxc2RCyGzTlxyMwuc2U5Q7KtUNTdDe8T0uE+9b8BLMVQDDfJjqY0VVqSUwImzTDLX9S4g/8
 kC4HRcclk8hpyhY2jKGluZO0awwTIMgVEzmTyBphDg/Gx7dZU1Xf8HFuE+UZ5UDHDTnwgv7E
 th6RC9+WrhDNspZ9fJjKWRbveQgUFCpe1sa77LAw+XFrKmBHXp9ZVIe90RMe2tRL06BGiRZr
 jPrnvUsUUsjRoRNJjKKA/REq+sAnhkNPPZ/NNMjaZ5b8Tovi8C0tmxiCHaQYqj7G2rgnT0kt
 WNyWQQ==
Organization: Red Hat GmbH
Message-ID: <429c2889-93c2-23b3-ba1e-da56e3a76ba4@redhat.com>
Date: Thu, 23 Jul 2020 15:47:22 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.9.0
MIME-Version: 1.0
In-Reply-To: <e94d9556-f615-bbe2-07d2-08958969ee5f@redhat.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.11
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, linux-kernel@vger.kernel.org,
 linux-mm@kvack.org, xen-devel@lists.xenproject.org,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Andrew Morton <akpm@linux-foundation.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 23.07.20 15:22, David Hildenbrand wrote:
> On 23.07.20 14:23, Roger Pau Monné wrote:
>> On Thu, Jul 23, 2020 at 01:37:03PM +0200, David Hildenbrand wrote:
>>> On 23.07.20 10:45, Roger Pau Monne wrote:
>>>> Add an extra option to add_memory_resource that overrides the memory
>>>> hotplug online behavior in order to force onlining of memory from
>>>> add_memory_resource unconditionally.
>>>>
>>>> This is required for the Xen balloon driver, that must run the
>>>> online page callback in order to correctly process the newly added
>>>> memory region, note this is an unpopulated region that is used by Linux
>>>> to either hotplug RAM or to map foreign pages from other domains, and
>>>> hence memory hotplug when running on Xen can be used even without the
>>>> user explicitly requesting it, as part of the normal operations of the
>>>> OS when attempting to map memory from a different domain.
>>>>
>>>> Setting a different default value of memhp_default_online_type when
>>>> attaching the balloon driver is not a robust solution, as the user (or
>>>> distro init scripts) could still change it and thus break the Xen
>>>> balloon driver.
>>>
>>> I think we discussed this a couple of times before (even triggered by my
>>> request), and this is the responsibility of user space to configure. Usually
>>> distros have udev rules to online memory automatically. Especially, user
>>> space should be able to configure *how* to online memory.
>>
>> Note (as per the commit message) that in the specific case I'm
>> referring to the memory hotplugged by the Xen balloon driver will be
>> an unpopulated range to be used internally by certain Xen subsystems,
>> like the xen-blkback or the privcmd drivers. The addition of such
>> blocks of (unpopulated) memory can happen without the user explicitly
>> requesting it, and hence without the user even being aware such a
>> hotplug process is taking place. To be clear: no actual RAM will be
>> added to the system.
> 
> Okay, but there is also the case where XEN will actually hotplug memory
> using this same handler IIRC (at least I've read papers about it). Both
> are using the same handler, correct?
> 
>>
>>> It's the admin/distro responsibility to configure this properly. In case
>>> this doesn't happen (or as you say, users change it), bad luck.
>>>
>>> E.g., virtio-mem takes care to not add more memory in case it is not
>>> getting onlined. I remember hyper-v has similar code to at least wait a
>>> bit for memory to get onlined.
>>
>> I don't think VirtIO or Hyper-V use the hotplug system in the same way
>> as Xen, as said this is done to add unpopulated memory regions that
>> will be used to map foreign memory (from other domains) by Xen drivers
>> on the system.
> 
> Indeed, if the memory is never exposed to the buddy (and all you need is
> struct pages + a kernel virtual mapping), I wonder if
> memremap/ZONE_DEVICE is what you want? Then you won't have user-visible
> memory blocks created with unclear online semantics, partially involving
> the buddy.

And just a note that there is also DCSS on s390x / z/VM, which allows
mapping segments into the VM physical address space (e.g., you can share
segments between VMs). They don't need any memmap (struct page) for that
memory, though. All they do is create the identity mapping in the kernel
virtual address space manually. Not sure what the exact requirements on
the XEN side are. I assume you need a memmap for this memory.

-- 
Thanks,

David / dhildenb



From xen-devel-bounces@lists.xenproject.org Thu Jul 23 13:49:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jul 2020 13:49:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jybbG-0002CQ-Mx; Thu, 23 Jul 2020 13:49:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=tIQT=BC=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jybbF-0002CK-0j
 for xen-devel@lists.xenproject.org; Thu, 23 Jul 2020 13:49:29 +0000
X-Inumbo-ID: 5299b750-cceb-11ea-8715-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5299b750-cceb-11ea-8715-bc764e2007e4;
 Thu, 23 Jul 2020 13:49:27 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 2AEE9AB3D;
 Thu, 23 Jul 2020 13:49:35 +0000 (UTC)
Subject: Re: [PATCH 3/3] memory: introduce an option to force onlining of
 hotplug memory
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <20200723084523.42109-1-roger.pau@citrix.com>
 <20200723084523.42109-4-roger.pau@citrix.com>
 <21490d49-b2cf-a398-0609-8010bdb0b004@redhat.com>
 <20200723122300.GD7191@Air-de-Roger>
 <404ea76f-c3d8-dbc5-432d-08d84a17f2d7@suse.com>
 <20200723130831.GE7191@Air-de-Roger>
 <76640b3e-f46c-80d5-7714-aa3b731276ab@suse.com>
 <20200723133945.GG7191@Air-de-Roger>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <44049236-c78a-f7d1-1b7e-647235b026f3@suse.com>
Date: Thu, 23 Jul 2020 15:49:26 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20200723133945.GG7191@Air-de-Roger>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 David Hildenbrand <david@redhat.com>, linux-kernel@vger.kernel.org,
 linux-mm@kvack.org, xen-devel@lists.xenproject.org,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Andrew Morton <akpm@linux-foundation.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 23.07.20 15:39, Roger Pau Monné wrote:
> On Thu, Jul 23, 2020 at 03:20:55PM +0200, Jürgen Groß wrote:
>> On 23.07.20 15:08, Roger Pau Monné wrote:
>>> On Thu, Jul 23, 2020 at 02:28:13PM +0200, Jürgen Groß wrote:
>>>> On 23.07.20 14:23, Roger Pau Monné wrote:
>>>>> On Thu, Jul 23, 2020 at 01:37:03PM +0200, David Hildenbrand wrote:
>>>>>> On 23.07.20 10:45, Roger Pau Monne wrote:
>>>>>>> Add an extra option to add_memory_resource that overrides the memory
>>>>>>> hotplug online behavior in order to force onlining of memory from
>>>>>>> add_memory_resource unconditionally.
>>>>>>>
>>>>>>> This is required for the Xen balloon driver, that must run the
>>>>>>> online page callback in order to correctly process the newly added
>>>>>>> memory region, note this is an unpopulated region that is used by Linux
>>>>>>> to either hotplug RAM or to map foreign pages from other domains, and
>>>>>>> hence memory hotplug when running on Xen can be used even without the
>>>>>>> user explicitly requesting it, as part of the normal operations of the
>>>>>>> OS when attempting to map memory from a different domain.
>>>>>>>
>>>>>>> Setting a different default value of memhp_default_online_type when
>>>>>>> attaching the balloon driver is not a robust solution, as the user (or
>>>>>>> distro init scripts) could still change it and thus break the Xen
>>>>>>> balloon driver.
>>>>>>
>>>>>> I think we discussed this a couple of times before (even triggered by my
>>>>>> request), and this is the responsibility of user space to configure. Usually
>>>>>> distros have udev rules to online memory automatically. Especially, user
>>>>>> space should be able to configure *how* to online memory.
>>>>>
>>>>> Note (as per the commit message) that in the specific case I'm
>>>>> referring to the memory hotplugged by the Xen balloon driver will be
>>>>> an unpopulated range to be used internally by certain Xen subsystems,
>>>>> like the xen-blkback or the privcmd drivers. The addition of such
>>>>> blocks of (unpopulated) memory can happen without the user explicitly
>>>>> requesting it, and hence without the user even being aware such a
>>>>> hotplug process is taking place. To be clear: no actual RAM will be
>>>>> added to the system.
>>>>>
>>>>> Failure to online such blocks using the Xen specific online handler
>>>>> (which does not hand the memory back to the allocator in any way)
>>>>> will result in the system getting stuck and malfunctioning.
>>>>>
>>>>>> It's the admin/distro responsibility to configure this properly. In case
>>>>>> this doesn't happen (or as you say, users change it), bad luck.
>>>>>>
>>>>>> E.g., virtio-mem takes care to not add more memory in case it is not
>>>>>> getting onlined. I remember hyper-v has similar code to at least wait a
>>>>>> bit for memory to get onlined.
>>>>>
>>>>> I don't think VirtIO or Hyper-V use the hotplug system in the same way
>>>>> as Xen; as said, this is done to add unpopulated memory regions that
>>>>> will be used to map foreign memory (from other domains) by Xen drivers
>>>>> on the system.
>>>>>
>>>>> Maybe this should somehow use a different mechanism to hotplug such
>>>>> empty memory blocks? I don't mind doing this differently, but I would
>>>>> need some pointers. Allowing user-space to change a (seemingly
>>>>> unrelated) parameter and as a result produce failures on Xen drivers
>>>>> is not an acceptable solution IMO.
>>>>
>>>> Maybe we can use the same approach as Xen PV-domains: pre-allocate a
>>>> region in the memory map to be used for mapping foreign pages. For the
>>>> kernel it will look like pre-ballooned memory, so it will create struct
>>>> page for the region (which is what we are after), but it won't give the
>>>> memory to the allocator.
>>>
>>> IMO using something similar to memory hotplug would give us more
>>> flexibility, and TBH the logic is already there in the balloon driver.
>>> It seems quite wasteful to allocate such region(s) beforehand for all
>>> domains, even when most of them won't end up using foreign mappings at
>>> all.
>>
>> We can do it for dom0 only per default, and add a boot parameter e.g.
>> for driver domains.
>>
>> And the logic is already there (just pv-only right now).
>>
>>>
>>> Anyway, I'm going to take a look at how to do that, I guess it's going
>>> to involve playing with the memory map and reserving some space.
>>
>> Look at arch/x86/xen/setup.c (xen_add_extra_mem() and its usage).
> 
> Yes, I've taken a look. It's my rough understanding that I would need
> to add a hook for HVM/PVH that modifies the memory map in order to add
> an extra region (or regions) that would be marked as reserved using
> memblock_reserve by xen_add_extra_mem.
> 
> Adding such a hook for PVH guests booted using the PVH entry point and
> fetching the memory map using the hypercall interface
> (mem_map_via_hcall) seems feasible, however I'm not sure dealing with
> other guest types is that easy.

I think for dom0 we can just use the existing logic using the host
memory map for selecting which region to use (possibly the size could
be specified as a boot parameter in order to override the default).

For domUs we'd need a boot parameter specifying either just the size
(resulting in a possible clash in case of pci passthrough) or
specifying the guest physical region for that additional area.

> 
>>>
>>> I suggest we should remove the Xen balloon hotplug logic, as it's not
>>> working properly and we don't have a plan to fix it.
>>
>> I have used memory hotplug successfully not very long ago.
> 
> Right, but it requires a certain set of enabled options, which IMO is
> not obvious. For example enabling xen_hotplug_unpopulated without also
> setting the default memory hotplug policy to online the added blocks
> will result in processes getting stuck. This is IMO too fragile.

Yes, memory hotplug has been an item on my todo-list for some years now.


Juergen


From xen-devel-bounces@lists.xenproject.org Thu Jul 23 13:53:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jul 2020 13:53:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jybes-00033t-7n; Thu, 23 Jul 2020 13:53:14 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=tIQT=BC=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jybeq-00033o-Ai
 for xen-devel@lists.xenproject.org; Thu, 23 Jul 2020 13:53:12 +0000
X-Inumbo-ID: d84ee436-cceb-11ea-871b-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d84ee436-cceb-11ea-871b-bc764e2007e4;
 Thu, 23 Jul 2020 13:53:11 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 66E13AB3D;
 Thu, 23 Jul 2020 13:53:18 +0000 (UTC)
Subject: Re: [PATCH 3/3] memory: introduce an option to force onlining of
 hotplug memory
To: David Hildenbrand <david@redhat.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <20200723084523.42109-1-roger.pau@citrix.com>
 <20200723084523.42109-4-roger.pau@citrix.com>
 <21490d49-b2cf-a398-0609-8010bdb0b004@redhat.com>
 <20200723122300.GD7191@Air-de-Roger>
 <e94d9556-f615-bbe2-07d2-08958969ee5f@redhat.com>
 <429c2889-93c2-23b3-ba1e-da56e3a76ba4@redhat.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <de0b17e6-6cb1-211b-bc40-e34f4e1b30d0@suse.com>
Date: Thu, 23 Jul 2020 15:53:09 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <429c2889-93c2-23b3-ba1e-da56e3a76ba4@redhat.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, linux-kernel@vger.kernel.org,
 linux-mm@kvack.org, xen-devel@lists.xenproject.org,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Andrew Morton <akpm@linux-foundation.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 23.07.20 15:47, David Hildenbrand wrote:
> On 23.07.20 15:22, David Hildenbrand wrote:
>> On 23.07.20 14:23, Roger Pau Monné wrote:
>>> On Thu, Jul 23, 2020 at 01:37:03PM +0200, David Hildenbrand wrote:
>>>> On 23.07.20 10:45, Roger Pau Monne wrote:
>>>>> Add an extra option to add_memory_resource that overrides the memory
>>>>> hotplug online behavior in order to force onlining of memory from
>>>>> add_memory_resource unconditionally.
>>>>>
>>>>> This is required for the Xen balloon driver, that must run the
>>>>> online page callback in order to correctly process the newly added
>>>>> memory region. Note this is an unpopulated region that is used by Linux
>>>>> to either hotplug RAM or to map foreign pages from other domains, and
>>>>> hence memory hotplug when running on Xen can be used even without the
>>>>> user explicitly requesting it, as part of the normal operations of the
>>>>> OS when attempting to map memory from a different domain.
>>>>>
>>>>> Setting a different default value of memhp_default_online_type when
>>>>> attaching the balloon driver is not a robust solution, as the user (or
>>>>> distro init scripts) could still change it and thus break the Xen
>>>>> balloon driver.
>>>>
>>>> I think we discussed this a couple of times before (even triggered by my
>>>> request), and this is the responsibility of user space to configure. Usually
>>>> distros have udev rules to online memory automatically. Especially, user
>>>> space should be able to configure *how* to online memory.
>>>
>>> Note (as per the commit message) that in the specific case I'm
>>> referring to the memory hotplugged by the Xen balloon driver will be
>>> an unpopulated range to be used internally by certain Xen subsystems,
>>> like the xen-blkback or the privcmd drivers. The addition of such
>>> blocks of (unpopulated) memory can happen without the user explicitly
>>> requesting it, and hence without the user even being aware such a
>>> hotplug process is taking place. To be clear: no actual RAM will be
>>> added to the system.
>>
>> Okay, but there is also the case where XEN will actually hotplug memory
>> using this same handler IIRC (at least I've read papers about it). Both
>> are using the same handler, correct?
>>
>>>
>>>> It's the admin/distro responsibility to configure this properly. In case
>>>> this doesn't happen (or as you say, users change it), bad luck.
>>>>
>>>> E.g., virtio-mem takes care to not add more memory in case it is not
>>>> getting onlined. I remember hyper-v has similar code to at least wait a
>>>> bit for memory to get onlined.
>>>
>>> I don't think VirtIO or Hyper-V use the hotplug system in the same way
>>> as Xen; as said, this is done to add unpopulated memory regions that
>>> will be used to map foreign memory (from other domains) by Xen drivers
>>> on the system.
>>
>> Indeed, if the memory is never exposed to the buddy (and all you need is
>> struct pages + a kernel virtual mapping), I wonder if
>> memremap/ZONE_DEVICE is what you want? Then you won't have user-visible
>> memory blocks created with unclear online semantics, partially involving
>> the buddy.
> 
> And just a note that there is also DCSS on s390x / z/VM, which allows
> mapping segments into the VM physical address space (e.g., you can share
> segments between VMs). They don't need any memmap (struct page) for that
> memory, though. All they do is create the identity mapping in the kernel
> virtual address space manually. Not sure what the exact requirements on
> the XEN side are. I assume you need a memmap for this memory.

We need to be able to do I/O with that memory via normal drivers and we
need to be able to map it, both from user land and from the kernel.


Juergen


From xen-devel-bounces@lists.xenproject.org Thu Jul 23 13:54:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jul 2020 13:54:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jybgK-00039k-Kd; Thu, 23 Jul 2020 13:54:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=9kJt=BC=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jybgJ-00039f-QY
 for xen-devel@lists.xenproject.org; Thu, 23 Jul 2020 13:54:43 +0000
X-Inumbo-ID: 0e4275fb-ccec-11ea-871b-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0e4275fb-ccec-11ea-871b-bc764e2007e4;
 Thu, 23 Jul 2020 13:54:42 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 09757AB3D;
 Thu, 23 Jul 2020 13:54:50 +0000 (UTC)
Subject: Re: [PATCH] xen/x86: irq: Avoid a TOCTOU race in
 pirq_spin_lock_irq_desc()
To: Julien Grall <julien@xen.org>
References: <20200722165300.22655-1-julien@xen.org>
 <c9863243-0b5e-521f-80b8-bc5673f895a6@suse.com>
 <5bd56ef4-8bf5-3308-b7db-71e41ac45918@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <d3ba0dad-63db-06ad-ff3f-f90fe8649845@suse.com>
Date: Thu, 23 Jul 2020 15:54:43 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <5bd56ef4-8bf5-3308-b7db-71e41ac45918@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Julien Grall <jgrall@amazon.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 23.07.2020 15:22, Julien Grall wrote:
> On 23/07/2020 12:23, Jan Beulich wrote:
>> On 22.07.2020 18:53, Julien Grall wrote:
>>> --- a/xen/arch/x86/irq.c
>>> +++ b/xen/arch/x86/irq.c
>>> @@ -1187,7 +1187,7 @@ struct irq_desc *pirq_spin_lock_irq_desc(
>>>   
>>>       for ( ; ; )
>>>       {
>>> -        int irq = pirq->arch.irq;
>>> +        int irq = read_atomic(&pirq->arch.irq);
>>
>> There we go - I'd be fine this way, but I'm pretty sure Andrew
>> would want this to be ACCESS_ONCE(). So I guess now is the time
>> to settle which one to prefer in new code (or which criteria
>> there are to prefer one over the other).
> 
> I would prefer if we have a single way to force the compiler to do a 
> single access (read/write).

Ideally yes. I'm unconvinced though that either construct fits all
needs: for {read,write}_atomic() there may be reasons why the
compiler is allowed to produce multiple generated code instances
from a single source instance, while for *_ONCE() the compiler may
be allowed to split the access into pieces (as can easily be seen
for an access to a uint64_t variable on 32-bit x86 at least, and
by deduction I then can't see why it shouldn't be allowed to use
byte-wise accesses).

> The existing implementation of ACCESS_ONCE() can only work on scalar 
> type. The implementation is based on a Linux, although we have an extra 
> check. Looking through the Linux history, it looks like it is not 
> possible to make ACCESS_ONCE() work with non-scalar types:
> 
>      ACCESS_ONCE does not work reliably on non-scalar types. For
>      example gcc 4.6 and 4.7 might remove the volatile tag for such
>      accesses during the SRA (scalar replacement of aggregates) step
>      https://gcc.gnu.org/bugzilla/show_bug.cgi?id=58145)
> 
> I understand that our implementation of read_atomic(), write_atomic() 
> would lead to less optimized code.

I.e. you see ways for the compiler to be more clever than using
a single "move" instruction for a single move? Or are you referring
to insn scheduling by the compiler (which my gut feeling would say
is impacted as much by an asm volatile() as by accessing a volatile
object)?

> So maybe we want to import 
> READ_ONCE() and WRITE_ONCE() from Linux?

So far I was under the impression that our ACCESS_ONCE() is the
result of folding (older) Linux's READ_ONCE() and WRITE_ONCE()
into a single construct.

> As a side note, I have seen suggestions (see [1]) that those helpers
> wouldn't be portable:
> 
> "One relatively unimportant misunderstanding is due to the fact that the 
> standard only talks about accesses to volatile objects. It does not talk 
> about accesses via volatile qualified pointers. Some programmers believe 
> that using a pointer-to-volatile should be handled as though it pointed 
> to a volatile object. That is not guaranteed by the standard and is 
> therefore not portable. However, this is relatively unimportant because 
> gcc does in fact treat a pointer-to-volatile as though it pointed to a 
> volatile object."
> 
> I would assume that the use is OK on Clang and GCC given that Linux has 
> been using it.

Then again your change here is exactly to drop such assumptions of
ours on compiler behavior.

>> And this is of course besides the fact that I think we have many
>> more instances where guaranteeing a single access would be
>> needed, if we're afraid of the described permitted compiler
>> behavior. Which then makes me wonder if this is really something
>> we should fix one by one, rather than by at least larger scope
>> audits (in order to not suggest "throughout the code base").
> 
> It depends on how much the contributor can invest in chasing the rest of 
> the issues. The larger the scope is, the less likely you will find 
> someone who has the bandwidth to fix it completely.

I certainly understand that.

> If the scope is "a field", then I think it is a reasonable suggestion.
> 
> In this case, I had a look at arch.irq and wasn't able to spot other 
> potential issues.

That's good to know, and may be worth mentioning - if not in the
description, then maybe in a post-commit-message remark?

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jul 23 13:59:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jul 2020 13:59:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jybl5-0003LF-9k; Thu, 23 Jul 2020 13:59:39 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=0L1b=BC=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jybl4-0003LA-LS
 for xen-devel@lists.xenproject.org; Thu, 23 Jul 2020 13:59:38 +0000
X-Inumbo-ID: be96f5d2-ccec-11ea-871b-bc764e2007e4
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id be96f5d2-ccec-11ea-871b-bc764e2007e4;
 Thu, 23 Jul 2020 13:59:37 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595512777;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:content-transfer-encoding:in-reply-to;
 bh=G5fMAF6Ssv/d5LldYw6d7YvCx0QLDX1OATdOT3g5954=;
 b=g6B56hVetjZwz0rUwOnELoe7THEtXKxWmQr6PRIqCds0lPHmPPJkH3E1
 Jxmy0nsjd8IAIISHnlB7pTpAYHiCLPEadkrnRlQDVmG1GNc5jKHEge4KM
 zupGHaoEF2TrquZl5glBdpX81nYHDAA9UIOz4LTEzPoA/76riOToDDik7 c=;
Authentication-Results: esa6.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: FzLhbB64z51wD52/8mlF2q7KYHAB/SzggPhRBoNTO3bKquEZzRNlMWzeg+d6qkIRFXnEM8wCQn
 I8ZVldIoD7vjg1o4XWJv7lCKqcDmh4ey62417HFSjfrORAmoH7KO8ql4RRRNqD26tBGpEgh4Xm
 4qZ++TwHeTlgHbCc8HsVqPpbmG9+/D0m9Kz4xeXkCr+8ClNSrV9FP3Ohl3Yq8LTJWxBVc/e4yw
 KbHE+hEoCTrkPnGZ+OPXceMgdwvBw289PxG1Fn18sfFFMuj4f/MmLdi1R8gVDd0k7mTOFAYSLz
 XCc=
X-SBRS: 2.7
X-MesageID: 23370256
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,386,1589256000"; d="scan'208";a="23370256"
Date: Thu, 23 Jul 2020 15:59:30 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: David Hildenbrand <david@redhat.com>
Subject: Re: [PATCH 3/3] memory: introduce an option to force onlining of
 hotplug memory
Message-ID: <20200723135930.GH7191@Air-de-Roger>
References: <20200723084523.42109-1-roger.pau@citrix.com>
 <20200723084523.42109-4-roger.pau@citrix.com>
 <21490d49-b2cf-a398-0609-8010bdb0b004@redhat.com>
 <20200723122300.GD7191@Air-de-Roger>
 <e94d9556-f615-bbe2-07d2-08958969ee5f@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <e94d9556-f615-bbe2-07d2-08958969ee5f@redhat.com>
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>, Stefano
 Stabellini <sstabellini@kernel.org>, linux-kernel@vger.kernel.org,
 linux-mm@kvack.org, xen-devel@lists.xenproject.org,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Andrew Morton <akpm@linux-foundation.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Thu, Jul 23, 2020 at 03:22:49PM +0200, David Hildenbrand wrote:
> On 23.07.20 14:23, Roger Pau Monné wrote:
> > On Thu, Jul 23, 2020 at 01:37:03PM +0200, David Hildenbrand wrote:
> >> On 23.07.20 10:45, Roger Pau Monne wrote:
> >>> Add an extra option to add_memory_resource that overrides the memory
> >>> hotplug online behavior in order to force onlining of memory from
> >>> add_memory_resource unconditionally.
> >>>
> >>> This is required for the Xen balloon driver, that must run the
> >>> online page callback in order to correctly process the newly added
> >>> memory region. Note this is an unpopulated region that is used by Linux
> >>> to either hotplug RAM or to map foreign pages from other domains, and
> >>> hence memory hotplug when running on Xen can be used even without the
> >>> user explicitly requesting it, as part of the normal operations of the
> >>> OS when attempting to map memory from a different domain.
> >>>
> >>> Setting a different default value of memhp_default_online_type when
> >>> attaching the balloon driver is not a robust solution, as the user (or
> >>> distro init scripts) could still change it and thus break the Xen
> >>> balloon driver.
> >>
> >> I think we discussed this a couple of times before (even triggered by my
> >> request), and this is the responsibility of user space to configure. Usually
> >> distros have udev rules to online memory automatically. Especially, user
> >> space should be able to configure *how* to online memory.
> > 
> > Note (as per the commit message) that in the specific case I'm
> > referring to the memory hotplugged by the Xen balloon driver will be
> > an unpopulated range to be used internally by certain Xen subsystems,
> > like the xen-blkback or the privcmd drivers. The addition of such
> > blocks of (unpopulated) memory can happen without the user explicitly
> > requesting it, and hence without the user even being aware such a
> > hotplug process is taking place. To be clear: no actual RAM will be
> > added to the system.
> 
> Okay, but there is also the case where XEN will actually hotplug memory
> using this same handler IIRC (at least I've read papers about it). Both
> are using the same handler, correct?

Yes, it's used for this dual purpose, which I have to admit I don't
like that much either.

One set of pages should be clearly used for RAM memory hotplug, and
the other to map foreign pages that are not related to memory hotplug;
it's just that we happen to need a physical region with backing struct
pages.

> > 
> >> It's the admin/distro responsibility to configure this properly. In case
> >> this doesn't happen (or as you say, users change it), bad luck.
> >>
> >> E.g., virtio-mem takes care to not add more memory in case it is not
> >> getting onlined. I remember hyper-v has similar code to at least wait a
> >> bit for memory to get onlined.
> > 
> > I don't think VirtIO or Hyper-V use the hotplug system in the same way
> > as Xen; as said, this is done to add unpopulated memory regions that
> > will be used to map foreign memory (from other domains) by Xen drivers
> > on the system.
> 
> Indeed, if the memory is never exposed to the buddy (and all you need is
> struct pages + a kernel virtual mapping), I wonder if
> memremap/ZONE_DEVICE is what you want?

I'm certainly not familiar with the Linux memory subsystem, but if
that gets us a backing struct page and a kernel mapping then I would
say yes.

> Then you won't have user-visible
> memory blocks created with unclear online semantics, partially involving
> the buddy.

Seems like a fine solution.

Juergen: would you be OK with using a separate page-list for
alloc_xenballooned_pages on HVM/PVH using the logic described by
David?

I guess I would leave PV as-is, since it already has this reserved
region to map foreign pages.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Thu Jul 23 14:00:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jul 2020 14:00:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyblU-0003jv-Jx; Thu, 23 Jul 2020 14:00:04 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=xWck=BC=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jyblT-0003eT-5G
 for xen-devel@lists.xenproject.org; Thu, 23 Jul 2020 14:00:03 +0000
X-Inumbo-ID: cd1a5f4a-ccec-11ea-a2a8-12813bfff9fa
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id cd1a5f4a-ccec-11ea-a2a8-12813bfff9fa;
 Thu, 23 Jul 2020 14:00:01 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595512802;
 h=subject:to:cc:references:from:message-id:date:
 mime-version:in-reply-to:content-transfer-encoding;
 bh=tUxcXp+XsAThP5KE63cViOLa120K/x7jswjVAvp32SA=;
 b=EPKF1zInyHNAO4iRyB1mWDdznjEy7/5C2jbKwNCIQz8drG2Ux/hHAEoc
 OdhpzGXiH2CFIJAkV+Z0NtWCMXNgDvgpsVmMRhOqdmybWv0JQRFGJXRzo
 h9CsPTeL8Zcu8N5daWOtyJy4XGO9OcuvAAbtr2kDr3nmk68jgMwilS6ih k=;
Authentication-Results: esa3.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: +QVIcrGkzDbY1sC1JXGCJhALYgzzGqzN1o4PQCMi4Sf16hIF/SFhWylvazzkbu5Yt0pGABQiua
 acRDUA/6J2viG8z1rTsmk4aQ0I4EWAYja/7p9s8UKTMg4/OSZW32lg688RDD/3MCJw2iQJpnbJ
 D6+wnRqNU20jCo5Le9/piLYvZ7A+D7JIkouwmCalDGqwf7BUthM4i2DXAlDgWGBeiDs/5DMP4P
 rnw2OGcJwGV0uG5/zqnAdN8AB1pbcRRkdc1DL8SZdQVg5RT1gT+V+hCVWGcEX3cYCU3ofJP605
 Sc8=
X-SBRS: 2.7
X-MesageID: 23040644
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,386,1589256000"; d="scan'208";a="23040644"
Subject: Re: [PATCH] xen/x86: irq: Avoid a TOCTOU race in
 pirq_spin_lock_irq_desc()
To: Julien Grall <julien@xen.org>, Jan Beulich <jbeulich@suse.com>
References: <20200722165300.22655-1-julien@xen.org>
 <c9863243-0b5e-521f-80b8-bc5673f895a6@suse.com>
 <5bd56ef4-8bf5-3308-b7db-71e41ac45918@xen.org>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <bb25c46f-0670-889e-db0b-3031291db640@citrix.com>
Date: Thu, 23 Jul 2020 14:59:57 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <5bd56ef4-8bf5-3308-b7db-71e41ac45918@xen.org>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, Julien Grall <jgrall@amazon.com>, Wei
 Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 23/07/2020 14:22, Julien Grall wrote:
> Hi Jan,
>
> On 23/07/2020 12:23, Jan Beulich wrote:
>> On 22.07.2020 18:53, Julien Grall wrote:
>>> --- a/xen/arch/x86/irq.c
>>> +++ b/xen/arch/x86/irq.c
>>> @@ -1187,7 +1187,7 @@ struct irq_desc *pirq_spin_lock_irq_desc(
>>>         for ( ; ; )
>>>       {
>>> -        int irq = pirq->arch.irq;
>>> +        int irq = read_atomic(&pirq->arch.irq);
>>
>> There we go - I'd be fine this way, but I'm pretty sure Andrew
>> would want this to be ACCESS_ONCE(). So I guess now is the time
>> to settle which one to prefer in new code (or which criteria
>> there are to prefer one over the other).
>
> I would prefer if we have a single way to force the compiler to do a
> single access (read/write).

Unlikely to happen, I'd expect.

But I would really like to get rid of (or at least rename)
read_atomic()/write_atomic() specifically because they've got nothing to
do with atomic_t's and the set of functionality whose namespace they share.

>
> The existing implementation of ACCESS_ONCE() can only work on scalar
> type. The implementation is based on Linux's, although we have an
> extra check. Looking through the Linux history, it looks like it is
> not possible to make ACCESS_ONCE() work with non-scalar types:
>
>     ACCESS_ONCE does not work reliably on non-scalar types. For
>     example gcc 4.6 and 4.7 might remove the volatile tag for such
>     accesses during the SRA (scalar replacement of aggregates) step
>     https://gcc.gnu.org/bugzilla/show_bug.cgi?id=58145)
>
> I understand that our implementation of read_atomic(), write_atomic()
> would lead to less optimized code.

There are cases where read_atomic()/write_atomic() prevent optimisations
which ACCESS_ONCE() would allow, but it is only for code of the form:

ACCESS_ONCE(ptr) |= val;

A sufficiently clever compiler could convert this to a single `or $val,
ptr` instruction on x86, while read_atomic()/write_atomic() would force
it to be `mov ptr, %reg; or $val, %reg; mov %reg, ptr`.

That said - your note about GCC treating the pointed-to object as
volatile probably means it won't make the above optimisation, even
though it would be appropriate to do so.

> So maybe we want to import READ_ONCE() and WRITE_ONCE() from Linux?

There is no point.  Linux has taken a massive detour through wildly
different READ/WRITE_ONCE() functions (to fix the above GCC bugs), and
is now back to something very similar to ACCESS_ONCE().

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu Jul 23 14:43:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jul 2020 14:43:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jycRK-0007tq-Bj; Thu, 23 Jul 2020 14:43:18 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=xWck=BC=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jycRJ-0007tl-RY
 for xen-devel@lists.xenproject.org; Thu, 23 Jul 2020 14:43:17 +0000
X-Inumbo-ID: d7ef76d4-ccf2-11ea-a2ba-12813bfff9fa
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d7ef76d4-ccf2-11ea-a2ba-12813bfff9fa;
 Thu, 23 Jul 2020 14:43:17 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595515397;
 h=subject:to:cc:references:from:message-id:date:
 mime-version:in-reply-to:content-transfer-encoding;
 bh=LX+ZlWCtBd1t9rdykL+lY+9RmrIAo5grgz9ZfVL9Epw=;
 b=cCWFbMrKIY7CauV500lkQigoEOypyazGsssRmTl2PyFIagqgjnirRGsN
 WfYz/6nzM9qNH9Y26b63thlDcBErqLh3Z/nxoovIY4SPRJfF6mLJOa7c4
 TXro6UqQQcn96iXZm019rKQ/zPJGALNGdy5WYMoLf3vGB21SRLghAv8V6 o=;
Authentication-Results: esa5.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: SAiegcltMIL9ydSkPEo7wUhAqyySwFguryX74FJRFwvQqEhn+wu9sR207uvhGU6SiDdQdIWb0Y
 8XyVNW3Fq7Iee4vXkP+3WNIAIOiWKX4emhDDLoREs4DgyIbRFTL45GCYNqFZHoUdgL9uZf32sD
 S81l6wCmyddCESFHLHEt4qgMLTq9hzzKervYWVN5ZTli0KXf9zzKnJw+8j7PlxHRzB38EqQhVS
 Ju9icWXj5mIDdOGMqDgCEp6m5F3ge67ZXBQJQPMgpigEjAYnVDnoa7FmHsiT9MDhFtMM5sZ9+e
 +w4=
X-SBRS: 2.7
X-MesageID: 23242506
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,386,1589256000"; d="scan'208";a="23242506"
Subject: Re: [PATCH] x86/S3: put data segment registers into known state upon
 resume
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
 <xen-devel@lists.xenproject.org>
References: <3cad2798-1a01-7d5e-ea55-ddb9ba6388d9@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <6343ad61-246f-fefd-cd12-d260807e82f0@citrix.com>
Date: Thu, 23 Jul 2020 15:40:47 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <3cad2798-1a01-7d5e-ea55-ddb9ba6388d9@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "M. Vefa Bicakci" <m.v.b@runbox.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 20/07/2020 16:20, Jan Beulich wrote:
> wakeup_32 sets %ds and %es to BOOT_DS, while leaving %fs at what
> wakeup_start did set it to, and %gs at whatever BIOS did load into it.
> All of this may end up confusing the first load_segments() to run on
> the BSP after resume, in particular allowing a non-nul selector value
> to be left in %fs.
>
> Alongside %ss, also put all other data segment registers into the same
> state that the boot and CPU bringup paths put them in.
>
> Reported-by: M. Vefa Bicakci <m.v.b@runbox.com>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>
> --- a/xen/arch/x86/acpi/wakeup_prot.S
> +++ b/xen/arch/x86/acpi/wakeup_prot.S
> @@ -52,6 +52,16 @@ ENTRY(s3_resume)
>          mov     %eax, %ss
>          mov     saved_rsp(%rip), %rsp
>  
> +        /*
> +         * Also put other segment registers into known state, like would
> +         * be done on the boot path. This is in particular necessary for
> +         * the first load_segments() to work as intended.
> +         */

I don't think the comment is helpful, not least because it refers to a
broken behaviour in load_segments() which is soon going to change anyway.

We've literally just loaded the GDT, at which point reloading all
segments *is* the expected thing to do.

I'd recommend that the diff be simply:

diff --git a/xen/arch/x86/acpi/wakeup_prot.S
b/xen/arch/x86/acpi/wakeup_prot.S
index dcc7e2327d..a2c41c4f3f 100644
--- a/xen/arch/x86/acpi/wakeup_prot.S
+++ b/xen/arch/x86/acpi/wakeup_prot.S
@@ -49,6 +49,10 @@ ENTRY(s3_resume)
         mov     %rax, %cr0
 
         mov     $__HYPERVISOR_DS64, %eax
+        mov     %eax, %ds
+        mov     %eax, %es
+        mov     %eax, %fs
+        mov     %eax, %gs
         mov     %eax, %ss
         mov     saved_rsp(%rip), %rsp
 

It is a shame that the CR0 load breaks up the obvious connection with
lgdt, but IIRC, that was a consequence of how the code was laid out
previously.

Preferably with the above diff, Reviewed-by: Andrew Cooper
<andrew.cooper3@citrix.com>


From xen-devel-bounces@lists.xenproject.org Thu Jul 23 15:10:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jul 2020 15:10:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jycrG-00024Q-ND; Thu, 23 Jul 2020 15:10:06 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=tIQT=BC=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jycrF-00020S-Sw
 for xen-devel@lists.xenproject.org; Thu, 23 Jul 2020 15:10:05 +0000
X-Inumbo-ID: 963d197c-ccf6-11ea-a2bc-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 963d197c-ccf6-11ea-a2bc-12813bfff9fa;
 Thu, 23 Jul 2020 15:10:04 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 050F5AE91;
 Thu, 23 Jul 2020 15:10:12 +0000 (UTC)
Subject: Re: [PATCH 3/3] memory: introduce an option to force onlining of
 hotplug memory
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 David Hildenbrand <david@redhat.com>
References: <20200723084523.42109-1-roger.pau@citrix.com>
 <20200723084523.42109-4-roger.pau@citrix.com>
 <21490d49-b2cf-a398-0609-8010bdb0b004@redhat.com>
 <20200723122300.GD7191@Air-de-Roger>
 <e94d9556-f615-bbe2-07d2-08958969ee5f@redhat.com>
 <20200723135930.GH7191@Air-de-Roger>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <82b131f4-8f50-cd49-65cf-9a87d51b5555@suse.com>
Date: Thu, 23 Jul 2020 17:10:03 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20200723135930.GH7191@Air-de-Roger>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, linux-kernel@vger.kernel.org,
 linux-mm@kvack.org, xen-devel@lists.xenproject.org,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Andrew Morton <akpm@linux-foundation.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 23.07.20 15:59, Roger Pau Monné wrote:
> On Thu, Jul 23, 2020 at 03:22:49PM +0200, David Hildenbrand wrote:
>> On 23.07.20 14:23, Roger Pau Monné wrote:
>>> On Thu, Jul 23, 2020 at 01:37:03PM +0200, David Hildenbrand wrote:
>>>> On 23.07.20 10:45, Roger Pau Monne wrote:
>>>>> Add an extra option to add_memory_resource that overrides the memory
>>>>> hotplug online behavior in order to force onlining of memory from
>>>>> add_memory_resource unconditionally.
>>>>>
>>>>> This is required by the Xen balloon driver, which must run the
>>>>> online page callback in order to correctly process the newly added
>>>>> memory region. Note this is an unpopulated region that Linux uses
>>>>> either to hotplug RAM or to map foreign pages from other domains;
>>>>> hence memory hotplug when running on Xen can occur even without the
>>>>> user explicitly requesting it, as part of the normal operation of
>>>>> the OS when attempting to map memory from a different domain.
>>>>>
>>>>> Setting a different default value of memhp_default_online_type when
>>>>> attaching the balloon driver is not a robust solution, as the user (or
>>>>> distro init scripts) could still change it and thus break the Xen
>>>>> balloon driver.
>>>>
>>>> I think we discussed this a couple of times before (even triggered by my
>>>> request), and this is the responsibility of user space to configure.
>>>> Usually distros have udev rules to online memory automatically.
>>>> Especially, user space should be able to configure *how* to online
>>>> memory.
>>>
>>> Note (as per the commit message) that in the specific case I'm
>>> referring to the memory hotplugged by the Xen balloon driver will be
>>> an unpopulated range to be used internally by certain Xen subsystems,
>>> like the xen-blkback or the privcmd drivers. The addition of such
>>> blocks of (unpopulated) memory can happen without the user explicitly
>>> requesting it, and hence not even aware such hotplug process is taking
>>> place. To be clear: no actual RAM will be added to the system.
>>
>> Okay, but there is also the case where XEN will actually hotplug memory
>> using this same handler IIRC (at least I've read papers about it). Both
>> are using the same handler, correct?
> 
> Yes, it's used for this dual purpose, which I have to admit I don't
> like that much either.
> 
> One set of pages should clearly be used for RAM memory hotplug, and
> the other to map foreign pages that are not related to memory hotplug;
> it's just that we happen to need a physical region with backing
> struct pages.
> 
>>>
>>>> It's the admin/distro responsibility to configure this properly. In case
>>>> this doesn't happen (or as you say, users change it), bad luck.
>>>>
>>>> E.g., virtio-mem takes care to not add more memory in case it is not
>>>> getting onlined. I remember hyper-v has similar code to at least wait a
>>>> bit for memory to get onlined.
>>>
>>> I don't think VirtIO or Hyper-V use the hotplug system in the same way
>>> as Xen; as said, this is done to add unpopulated memory regions that
>>> will be used by Xen drivers on the system to map foreign memory (from
>>> other domains).
>>
>> Indeed, if the memory is never exposed to the buddy (and all you need is
>> struct pages + a kernel virtual mapping), I wonder if
>> memremap/ZONE_DEVICE is what you want?
> 
> I'm certainly not familiar with the Linux memory subsystem, but if
> that gets us a backing struct page and a kernel mapping then I would
> say yes.
> 
>> Then you won't have user-visible
>> memory blocks created with unclear online semantics, partially involving
>> the buddy.
> 
> Seems like a fine solution.
> 
> Juergen: would you be OK to use a separate page-list for
> alloc_xenballooned_pages on HVM/PVH using the logic described by
> David?
> 
> I guess I would leave PV as-is, since it already has this reserved
> region to map foreign pages.

I would really like a common solution, especially as it would enable
pv driver domains to use that feature, too.

And finding a region for this memory zone in PVH dom0 should be common
with PV dom0 after all. We don't want to collide with either PCI space
or hotplug memory.


Juergen
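
For illustration, the memremap/ZONE_DEVICE route David suggests might look
roughly like the following. This is a non-runnable kernel-side sketch, not
code from any posted patch: the helper name, the MEMORY_DEVICE_GENERIC
pgmap type, and the pgmap field layout (which has changed across kernel
versions) are all assumptions.

```c
#include <linux/memremap.h>
#include <linux/numa.h>

static struct dev_pagemap xen_pgmap;

/* Hypothetical helper: create struct pages plus a kernel virtual
 * mapping for an unpopulated physical range, without ever handing the
 * pages to the buddy allocator -- so no onlining step and no
 * user-visible memory blocks appear. */
static int xen_reserve_unpopulated(struct resource *res)
{
    void *vaddr;

    xen_pgmap.type = MEMORY_DEVICE_GENERIC;  /* assumed pgmap type */
    xen_pgmap.range = (struct range) {
        .start = res->start,
        .end   = res->end,
    };
    xen_pgmap.nr_range = 1;

    vaddr = memremap_pages(&xen_pgmap, NUMA_NO_NODE);
    return IS_ERR(vaddr) ? PTR_ERR(vaddr) : 0;
}
```

The pages backing the range would then be handed out by
alloc_xenballooned_pages() (or a successor) instead of ballooned-out RAM.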


From xen-devel-bounces@lists.xenproject.org Thu Jul 23 15:16:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jul 2020 15:16:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jycww-0002QF-Ha; Thu, 23 Jul 2020 15:15:58 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=h/wE=BC=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jycwu-0002Pv-QX
 for xen-devel@lists.xenproject.org; Thu, 23 Jul 2020 15:15:56 +0000
X-Inumbo-ID: 64202fa0-ccf7-11ea-a2be-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 64202fa0-ccf7-11ea-a2be-12813bfff9fa;
 Thu, 23 Jul 2020 15:15:49 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=++MzV0T5q5jHdNTsHh886G0OYavcsX5C0F5jWUc7LNw=; b=OvR990JRi3oCzCgw84ifb8NL1
 K5lD7iBZdwosRXGOlcANLf6I38WA9cxhL3ZBT3bMtuLuRYvpK5/b12wyMwjwnzh+uox0++1aLyPiy
 9xO75hB4s8jbRsRy9slND7rxkrkX4wgZPfUKAigqH0DMNGXn/7GucHRIYWByFkUOFX0jI=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jycwn-0002Dq-8b; Thu, 23 Jul 2020 15:15:49 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jycwm-0001BC-Tg; Thu, 23 Jul 2020 15:15:49 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jycwm-0000nJ-Sv; Thu, 23 Jul 2020 15:15:48 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-152124-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.14-testing test] 152124: regressions - trouble:
 fail/pass/starved
X-Osstest-Failures: xen-4.14-testing:test-amd64-amd64-dom0pvh-xl-amd:guest-localmigrate:fail:regression
 xen-4.14-testing:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:allowable
 xen-4.14-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 xen-4.14-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-4.14-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-4.14-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-4.14-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 xen-4.14-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-4.14-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-4.14-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-4.14-testing:test-amd64-amd64-qemuu-freebsd12-amd64:hosts-allocate:starved:nonblocking
 xen-4.14-testing:test-amd64-amd64-qemuu-freebsd11-amd64:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This: xen=26984f2f432bb880f2bb4954e1248c9c2d1bbd54
X-Osstest-Versions-That: xen=827031adfeb3c2656baa2156d3e7caaea8aec739
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 23 Jul 2020 15:15:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 152124 xen-4.14-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/152124/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-dom0pvh-xl-amd 16 guest-localmigrate    fail REGR. vs. 152081

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds    16 guest-start/debian.repeat fail REGR. vs. 152081

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop             fail never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop             fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop             fail never pass
 test-amd64-amd64-qemuu-freebsd12-amd64  2 hosts-allocate           starved n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  2 hosts-allocate           starved n/a

version targeted for testing:
 xen                  26984f2f432bb880f2bb4954e1248c9c2d1bbd54
baseline version:
 xen                  827031adfeb3c2656baa2156d3e7caaea8aec739

Last test of basis   152081  2020-07-21 16:52:47 Z    1 days
Testing same since   152124  2020-07-22 20:36:59 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Julien Grall <jgrall@amazon.com>
  Paul Durrant <pdurrant@amazon.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              fail    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       starved 
 test-amd64-amd64-qemuu-freebsd12-amd64                       starved 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 26984f2f432bb880f2bb4954e1248c9c2d1bbd54
Author: Julien Grall <jgrall@amazon.com>
Date:   Wed Jul 22 18:47:10 2020 +0100

    Revert "SUPPORT.md: Set version and release/support dates"
    
    This reverts commit e4670f8b045b11a524171b119d9d4a20bf643367.

commit e4670f8b045b11a524171b119d9d4a20bf643367
Author: Paul Durrant <pdurrant@amazon.com>
Date:   Wed Jul 22 17:55:44 2020 +0100

    SUPPORT.md: Set version and release/support dates
    
    Signed-off-by: Paul Durrant <pdurrant@amazon.com>
    Acked-by: Julien Grall <jgrall@amazon.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Thu Jul 23 15:19:31 2020
Subject: Re: [PATCH] x86/S3: put data segment registers into known state upon
 resume
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <3cad2798-1a01-7d5e-ea55-ddb9ba6388d9@suse.com>
 <6343ad61-246f-fefd-cd12-d260807e82f0@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <c726cdc7-271b-0ea7-4056-8ab86686282e@suse.com>
Date: Thu, 23 Jul 2020 17:19:30 +0200
In-Reply-To: <6343ad61-246f-fefd-cd12-d260807e82f0@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 "M. Vefa Bicakci" <m.v.b@runbox.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>

On 23.07.2020 16:40, Andrew Cooper wrote:
> On 20/07/2020 16:20, Jan Beulich wrote:
>> wakeup_32 sets %ds and %es to BOOT_DS, while leaving %fs at what
>> wakeup_start did set it to, and %gs at whatever BIOS did load into it.
>> All of this may end up confusing the first load_segments() to run on
>> the BSP after resume, in particular allowing a non-nul selector value
>> to be left in %fs.
>>
>> Alongside %ss, also put all other data segment registers into the same
>> state that the boot and CPU bringup paths put them in.
>>
>> Reported-by: M. Vefa Bicakci <m.v.b@runbox.com>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>
>> --- a/xen/arch/x86/acpi/wakeup_prot.S
>> +++ b/xen/arch/x86/acpi/wakeup_prot.S
>> @@ -52,6 +52,16 @@ ENTRY(s3_resume)
>>          mov     %eax, %ss
>>          mov     saved_rsp(%rip), %rsp
>>  
>> +        /*
>> +         * Also put other segment registers into known state, like would
>> +         * be done on the boot path. This is in particular necessary for
>> +         * the first load_segments() to work as intended.
>> +         */
> 
> I don't think the comment is helpful, not least because it refers to a
> broken behaviour in load_segments() which is soon going to change anyway.

Well, I can drop it. I merely thought I'd be nice and comment my
code once in a while (and the comment could be dropped / adjusted
when load_segments() changes)...

> We've literally just loaded the GDT, at which point reloading all
> segments *is* the expected thing to do.

In a way, unless some/all are assumed to already hold a nul selector.

> I'd recommend that the diff be simply:
> 
> diff --git a/xen/arch/x86/acpi/wakeup_prot.S
> b/xen/arch/x86/acpi/wakeup_prot.S
> index dcc7e2327d..a2c41c4f3f 100644
> --- a/xen/arch/x86/acpi/wakeup_prot.S
> +++ b/xen/arch/x86/acpi/wakeup_prot.S
> @@ -49,6 +49,10 @@ ENTRY(s3_resume)
>          mov     %rax, %cr0
>  
>          mov     $__HYPERVISOR_DS64, %eax
> +        mov     %eax, %ds
> +        mov     %eax, %es
> +        mov     %eax, %fs
> +        mov     %eax, %gs
>          mov     %eax, %ss
>          mov     saved_rsp(%rip), %rsp

So I had specifically elected to not put the addition there, to make
sure the stack would get established first. But seeing both Roger
and you ask me to do otherwise - well, so be it then.

> It is a shame that the CR0 load breaks up the obvious connection with
> lgdt, but IIRC, that was a consequence of how the code was laid out
> previously.
> 
> Preferably with the above diff, Reviewed-by: Andrew Cooper
> <andrew.cooper3@citrix.com>

Thanks, Jan


From xen-devel-bounces@lists.xenproject.org Thu Jul 23 15:40:52 2020
From: Rahul Singh <rahul.singh@arm.com>
To: xen-devel@lists.xenproject.org
Subject: [RFC PATCH v1 4/4] arm/libxl: Emulated PCI device tree node in libxl
Date: Thu, 23 Jul 2020 16:40:24 +0100
Message-Id: <23346b24762467bd246b91b05f7b0fc1719282f6.1595511416.git.rahul.singh@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1595511416.git.rahul.singh@arm.com>
References: <cover.1595511416.git.rahul.singh@arm.com>
Cc: rahul.singh@arm.com, Julien Grall <julien@xen.org>, Wei Liu <wl@xen.org>,
 Ian Jackson <ian.jackson@eu.citrix.com>, Bertrand.Marquis@arm.com,
 Stefano Stabellini <sstabellini@kernel.org>,
 Anthony PERARD <anthony.perard@citrix.com>, nd@arm.com,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>

libxl creates an emulated PCI device tree node in the guest
device tree so that the guest OS can discover the virtual PCI
bus during boot.

A new config option, vpci="ecam", is introduced for guests.
When this option is enabled in a guest configuration, a PCI
device tree node is created in the guest device tree.

A new area is reserved in the Arm guest physical memory map, at
which the VPCI bus is declared in the device tree (via the reg
and ranges properties of the node).
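For illustration, a guest configuration enabling this option might look
like the fragment below (hypothetical values; only the vpci line comes
from this patch, the rest are placeholders):

```
# illustrative xl guest config fragment
name   = "guest0"
memory = 1024
vcpus  = 2

# Create the emulated PCI (ECAM) device tree node for the guest
vpci = "ecam"
```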

Change-Id: I47d39cbe8184de2226f174644df9790ecc610ccd
Signed-off-by: Rahul Singh <rahul.singh@arm.com>
---
 tools/libxl/libxl_arm.c       | 200 ++++++++++++++++++++++++++++++++++
 tools/libxl/libxl_types.idl   |   6 +
 tools/xl/xl_parse.c           |   7 ++
 xen/include/public/arch-arm.h |  28 +++++
 4 files changed, 241 insertions(+)

diff --git a/tools/libxl/libxl_arm.c b/tools/libxl/libxl_arm.c
index 34f8a29056..84568e9dc9 100644
--- a/tools/libxl/libxl_arm.c
+++ b/tools/libxl/libxl_arm.c
@@ -268,6 +268,130 @@ static int fdt_property_regs(libxl__gc *gc, void *fdt,
     return fdt_property(fdt, "reg", regs, sizeof(regs));
 }
 
+static int fdt_property_vpci_bus_range(libxl__gc *gc, void *fdt,
+        unsigned num_cells, ...)
+{
+    uint32_t bus_range[num_cells];
+    be32 *cells = &bus_range[0];
+    int i;
+    va_list ap;
+    uint32_t arg;
+
+    va_start(ap, num_cells);
+    for (i = 0; i < num_cells; i++) {
+        arg = va_arg(ap, uint32_t);
+        set_cell(&cells, 1, arg);
+    }
+    va_end(ap);
+
+    return fdt_property(fdt, "bus-range", bus_range, sizeof(bus_range));
+}
+
+static int fdt_property_vpci_interrupt_map_mask(libxl__gc *gc, void *fdt,
+        unsigned num_cells, ...)
+{
+    uint32_t interrupt_map_mask[num_cells];
+    be32 *cells = &interrupt_map_mask[0];
+    int i;
+    va_list ap;
+    uint32_t arg;
+
+    va_start(ap, num_cells);
+    for (i = 0; i < num_cells; i++) {
+        arg = va_arg(ap, uint32_t);
+        set_cell(&cells, 1, arg);
+    }
+    va_end(ap);
+
+    return fdt_property(fdt, "interrupt-map-mask", interrupt_map_mask,
+                                sizeof(interrupt_map_mask));
+}
+
+static int fdt_property_vpci_ranges(libxl__gc *gc, void *fdt,
+        unsigned vpci_addr_cells,
+        unsigned cpu_addr_cells,
+        unsigned vpci_size_cells,
+        unsigned num_regs, ...)
+{
+    uint32_t regs[num_regs*(vpci_addr_cells+cpu_addr_cells+vpci_size_cells)];
+    be32 *cells = &regs[0];
+    int i;
+    va_list ap;
+    uint64_t arg;
+
+    va_start(ap, num_regs);
+    for (i = 0; i < num_regs; i++) {
+        /* Set the memory bit field */
+        arg = va_arg(ap, uint64_t);
+        set_cell(&cells, 1, arg);
+
+        /* Set the vpci bus address */
+        arg = vpci_addr_cells ? va_arg(ap, uint64_t) : 0;
+        set_cell(&cells, 2, arg);
+
+        /* Set the cpu bus address where the vpci address is mapped */
+        arg = cpu_addr_cells ? va_arg(ap, uint64_t) : 0;
+        set_cell(&cells, cpu_addr_cells, arg);
+
+        /* Set the vpci size requested */
+        arg = vpci_size_cells ? va_arg(ap, uint64_t) : 0;
+        set_cell(&cells, vpci_size_cells, arg);
+    }
+    va_end(ap);
+
+    return fdt_property(fdt, "ranges", regs, sizeof(regs));
+}
+
+static int fdt_property_vpci_interrupt_map(libxl__gc *gc, void *fdt,
+        unsigned child_unit_addr_cells,
+        unsigned child_interrupt_specifier_cells,
+        unsigned parent_unit_addr_cells,
+        unsigned parent_interrupt_specifier_cells,
+        unsigned num_regs, ...)
+{
+    uint32_t interrupt_map[num_regs * (child_unit_addr_cells +
+            child_interrupt_specifier_cells + parent_unit_addr_cells
+            + parent_interrupt_specifier_cells + 1)];
+    be32 *cells = &interrupt_map[0];
+    int i, j;
+    va_list ap;
+    uint64_t arg;
+
+    va_start(ap, num_regs);
+    for (i = 0; i < num_regs; i++) {
+        /* Set the child unit address */
+        for (j = 0; j < child_unit_addr_cells; j++) {
+            arg = va_arg(ap, uint32_t);
+            set_cell(&cells, 1, arg);
+        }
+
+        /* Set the child interrupt specifier */
+        for (j = 0; j < child_interrupt_specifier_cells; j++) {
+            arg = va_arg(ap, uint32_t);
+            set_cell(&cells, 1, arg);
+        }
+
+        /* Set the interrupt-parent */
+        set_cell(&cells, 1, GUEST_PHANDLE_GIC);
+
+        /* Set the parent unit address */
+        for (j = 0; j < parent_unit_addr_cells; j++) {
+            arg = va_arg(ap, uint32_t);
+            set_cell(&cells, 1, arg);
+        }
+
+        /* Set the parent interrupt specifier */
+        for (j = 0; j < parent_interrupt_specifier_cells; j++) {
+            arg = va_arg(ap, uint32_t);
+            set_cell(&cells, 1, arg);
+        }
+    }
+    va_end(ap);
+
+    return fdt_property(fdt, "interrupt-map", interrupt_map,
+                                sizeof(interrupt_map));
+}
+
 static int make_root_properties(libxl__gc *gc,
                                 const libxl_version_info *vers,
                                 void *fdt)
@@ -659,6 +783,79 @@ static int make_vpl011_uart_node(libxl__gc *gc, void *fdt,
     return 0;
 }
 
+static int make_vpci_node(libxl__gc *gc, void *fdt,
+        const struct arch_info *ainfo,
+        struct xc_dom_image *dom)
+{
+    int res;
+    const uint64_t vpci_ecam_base = GUEST_VPCI_ECAM_BASE;
+    const uint64_t vpci_ecam_size = GUEST_VPCI_ECAM_SIZE;
+    const char *name = GCSPRINTF("pcie@%"PRIx64, vpci_ecam_base);
+
+    res = fdt_begin_node(fdt, name);
+    if (res) return res;
+
+    res = fdt_property_compat(gc, fdt, 1, "pci-host-ecam-generic");
+    if (res) return res;
+
+    res = fdt_property_string(fdt, "device_type", "pci");
+    if (res) return res;
+
+    res = fdt_property_regs(gc, fdt, GUEST_ROOT_ADDRESS_CELLS,
+            GUEST_ROOT_SIZE_CELLS, 1, vpci_ecam_base, vpci_ecam_size);
+    if (res) return res;
+
+    res = fdt_property_vpci_bus_range(gc, fdt, 2, 0, 255);
+    if (res) return res;
+
+    res = fdt_property_cell(fdt, "linux,pci-domain", 0);
+    if (res) return res;
+
+    res = fdt_property_cell(fdt, "#address-cells", 3);
+    if (res) return res;
+
+    res = fdt_property_cell(fdt, "#size-cells", 2);
+    if (res) return res;
+
+    res = fdt_property_cell(fdt, "#interrupt-cells", 1);
+    if (res) return res;
+
+    res = fdt_property_string(fdt, "status", "okay");
+    if (res) return res;
+
+    res = fdt_property_vpci_ranges(gc, fdt, GUEST_PCI_ADDRESS_CELLS,
+        GUEST_ROOT_ADDRESS_CELLS, GUEST_PCI_SIZE_CELLS,
+        3,
+        GUEST_VPCI_ADDR_TYPE_MEM, GUEST_VPCI_MEM_PCI_ADDR,
+        GUEST_VPCI_MEM_CPU_ADDR, GUEST_VPCI_MEM_SIZE,
+        GUEST_VPCI_ADDR_TYPE_PREFETCH_MEM, GUEST_VPCI_PREFETCH_MEM_PCI_ADDR,
+        GUEST_VPCI_PREFETCH_MEM_CPU_ADDR, GUEST_VPCI_PREFETCH_MEM_SIZE,
+        GUEST_VPCI_ADDR_TYPE_IO, GUEST_VPCI_IO_PCI_ADDR,
+        GUEST_VPCI_IO_CPU_ADDR, GUEST_VPCI_IO_SIZE);
+    if (res) return res;
+
+    res = fdt_property_vpci_interrupt_map_mask(gc, fdt, 4, 0, 0, 0, 7);
+    if (res) return res;
+
+    /*
+     * A legacy interrupt is forcibly assigned to the guest.
+     * This will be removed once MSI support is implemented.
+     */
+    res = fdt_property_vpci_interrupt_map(gc, fdt, 3, 1, 0, 3,
+            4,
+            0, 0, 0, 1, 0, 136, DT_IRQ_TYPE_LEVEL_HIGH,
+            0, 0, 0, 2, 0, 137, DT_IRQ_TYPE_LEVEL_HIGH,
+            0, 0, 0, 3, 0, 138, DT_IRQ_TYPE_LEVEL_HIGH,
+            0, 0, 0, 4, 0, 139, DT_IRQ_TYPE_LEVEL_HIGH);
+    if (res) return res;
+
+    res = fdt_end_node(fdt);
+    if (res) return res;
+
+    return 0;
+}
+
 static const struct arch_info *get_arch_info(libxl__gc *gc,
                                              const struct xc_dom_image *dom)
 {
@@ -962,6 +1159,9 @@ next_resize:
         if (info->tee == LIBXL_TEE_TYPE_OPTEE)
             FDT( make_optee_node(gc, fdt) );
 
+        if (info->arch_arm.vpci == LIBXL_VPCI_TYPE_ECAM)
+            FDT( make_vpci_node(gc, fdt, ainfo, dom) );
+
         if (pfdt)
             FDT( copy_partial_fdt(gc, fdt, pfdt) );
 
diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
index 9d3f05f399..d493637705 100644
--- a/tools/libxl/libxl_types.idl
+++ b/tools/libxl/libxl_types.idl
@@ -257,6 +257,11 @@ libxl_vuart_type = Enumeration("vuart_type", [
     (1, "sbsa_uart"),
     ])
 
+libxl_vpci_type = Enumeration("vpci_type", [
+    (0, "unknown"),
+    (1, "ecam"),
+    ])
+
 libxl_vkb_backend = Enumeration("vkb_backend", [
     (0, "UNKNOWN"),
     (1, "QEMU"),
@@ -640,6 +645,7 @@ libxl_domain_build_info = Struct("domain_build_info",[
 
     ("arch_arm", Struct(None, [("gic_version", libxl_gic_version),
                                ("vuart", libxl_vuart_type),
+                               ("vpci", libxl_vpci_type),
                               ])),
     # Alternate p2m is not bound to any architecture or guest type, as it is
     # supported by x86 HVM and ARM support is planned.
diff --git a/tools/xl/xl_parse.c b/tools/xl/xl_parse.c
index 61b4ef7b7e..58b7e6f56a 100644
--- a/tools/xl/xl_parse.c
+++ b/tools/xl/xl_parse.c
@@ -1386,6 +1386,13 @@ void parse_config_data(const char *config_source,
         }
     }
 
+    if (!xlu_cfg_get_string(config, "vpci", &buf, 0)) {
+        if (libxl_vpci_type_from_string(buf, &b_info->arch_arm.vpci)) {
+            fprintf(stderr, "ERROR: invalid value \"%s\" for \"vpci\"\n",
+                    buf);
+            exit(1);
+        }
+    }
     parse_vnuma_config(config, b_info);
 
     /* Set max_memkb to target_memkb and max_vcpus to avail_vcpus if
diff --git a/xen/include/public/arch-arm.h b/xen/include/public/arch-arm.h
index 7364a07362..4e19c62948 100644
--- a/xen/include/public/arch-arm.h
+++ b/xen/include/public/arch-arm.h
@@ -426,6 +426,34 @@ typedef uint64_t xen_callback_t;
 #define GUEST_VPCI_ECAM_BASE    xen_mk_ullong(0x10000000)
 #define GUEST_VPCI_ECAM_SIZE    xen_mk_ullong(0x10000000)
 
+#define GUEST_PCI_ADDRESS_CELLS 3
+#define GUEST_PCI_SIZE_CELLS 2
+
+/* PCI-PCIe memory space types */
+#define GUEST_VPCI_ADDR_TYPE_PREFETCH_MEM xen_mk_ullong(0x42000000)
+#define GUEST_VPCI_ADDR_TYPE_MEM          xen_mk_ullong(0x02000000)
+#define GUEST_VPCI_ADDR_TYPE_IO           xen_mk_ullong(0x01000000)
+
+/* Guest PCI-PCIe memory space where the config space and BARs will be available. */
+#define GUEST_VPCI_PREFETCH_MEM_CPU_ADDR  xen_mk_ullong(0x4000000000)
+#define GUEST_VPCI_MEM_CPU_ADDR           xen_mk_ullong(0x04020000)
+#define GUEST_VPCI_IO_CPU_ADDR            xen_mk_ullong(0xC0200800)
+
+/*
+ * These are hardcoded values for the real PCI physical addresses.
+ * They will be removed once the real PCI-PCIe physical addresses
+ * are read from the config space and mapped into the guest memory
+ * map when assigning the device to the guest via VPCI.
+ */
+#define GUEST_VPCI_PREFETCH_MEM_PCI_ADDR  xen_mk_ullong(0x4000000000)
+#define GUEST_VPCI_MEM_PCI_ADDR           xen_mk_ullong(0x50000000)
+#define GUEST_VPCI_IO_PCI_ADDR            xen_mk_ullong(0x00000000)
+
+#define GUEST_VPCI_PREFETCH_MEM_SIZE      xen_mk_ullong(0x100000000)
+#define GUEST_VPCI_MEM_SIZE               xen_mk_ullong(0x08000000)
+#define GUEST_VPCI_IO_SIZE                xen_mk_ullong(0x00800000)
+
 /*
  * 16MB == 4096 pages reserved for guest to use as a region to map its
  * grant table in.
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu Jul 23 15:40:52 2020
 b=ZKnOrL9mAMiQHrzOPAYcjpNzfUk4NORaDHTuCLtomUsuIdIuSuSWEbDzn78Za/BHYjk5CWEomyhZBnVw6YZxuHM8Kl/GELCWUvfGwfUMeqvUCEY6EEvkpnTMGM7klRjkFSEwkiAMC7S41QoUm551+8t6jepVTaW6BreJcbvKnT+OMTymiZiua/u4TISfAFmYzAvAyvOI43sMCf2NDvwqWz92qFbLHcR3YNKpr2D4gwZuSk78NjvICX9AttZgSJIaMeeRWYL+v1tcvXWF0nG6rEZkwvcZNFCvwXcF7XNXq5DP9+1gQkvTIK738MOw13U/TDTT5CZVCx8dyJPXyFYIZA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; 
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=GtXKQEmKNjdCjhdq8ucgvNPIQLHD6guIeLAAonP1+9k=;
 b=bnMVeMqFOBcBwDvKMw6m387+j5c+zg6qxeL/lLbhjwC0vvTBZv3jQSCuwd2R6eoV/+Ki5B48cvolJ2Av2z6Rjk9xNPbS4q51bei4maHA2b8WQgPPjDnblw4Zr0dbky2n03ZIqFYPZPpvFBgzDUMDekarrHmPipyc+GiwntiQ1jFePqkQQ6wVcbs3dInU84ssGkV8xP8+1CQV2hTsoZMrm9kLd+7t3xBoLBjej4Bxlqm67GdlrQRc7NnIVggQXnPkwEwHQTSKn4by5YniITK6GMujGINd/82ctMlo8PtpiRakYoHl5m5NjhmeJOg/miPSKkvbaZiHfBrjbgSBnv65pw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com; 
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=GtXKQEmKNjdCjhdq8ucgvNPIQLHD6guIeLAAonP1+9k=;
 b=dQJ+G4REhbzxIPd+8mAlVfqzQXg7Is88J6ECMFwWDinJVrHUxK0ERZpsdO/QRCq91vy/KnghD/8GWOD6J6Ofm8irS1VVMBuybEBmx9aijERLGxIvjFz4L3GWUKA3e/iroc+6Br65dzFk7ZwfohmsnVafdnICXaSuJgyg7D6hrL8=
Authentication-Results-Original: lists.xenproject.org; dkim=none (message not
 signed) header.d=none; lists.xenproject.org;
 dmarc=none action=none header.from=arm.com;
Received: from AM6PR08MB3560.eurprd08.prod.outlook.com (2603:10a6:20b:4c::19)
 by AM6PR08MB4439.eurprd08.prod.outlook.com (2603:10a6:20b:be::14)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3216.24; Thu, 23 Jul
 2020 15:40:33 +0000
Received: from AM6PR08MB3560.eurprd08.prod.outlook.com
 ([fe80::e891:3b33:766:cad5]) by AM6PR08MB3560.eurprd08.prod.outlook.com
 ([fe80::e891:3b33:766:cad5%6]) with mapi id 15.20.3216.021; Thu, 23 Jul 2020
 15:40:33 +0000
From: Rahul Singh <rahul.singh@arm.com>
To: xen-devel@lists.xenproject.org
Subject: [RFC PATCH v1 3/4] xen/arm: Enable the existing x86 virtual PCI
 support for ARM.
Date: Thu, 23 Jul 2020 16:40:23 +0100
Message-Id: <c719ed8e92720d0b470a130c1264f8296dac32ac.1595511416.git.rahul.singh@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1595511416.git.rahul.singh@arm.com>
References: <cover.1595511416.git.rahul.singh@arm.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LO2P265CA0300.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:a5::24) To AM6PR08MB3560.eurprd08.prod.outlook.com
 (2603:10a6:20b:4c::19)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
Received: from localhost.localdomain (217.140.106.54) by
 LO2P265CA0300.GBRP265.PROD.OUTLOOK.COM (2603:10a6:600:a5::24) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3216.20 via Frontend Transport; Thu, 23 Jul 2020 15:40:33 +0000
X-Mailer: git-send-email 2.17.1
X-Originating-IP: [217.140.106.54]
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: d689e4e6-629c-4367-b69c-08d82f1ec0ed
X-MS-TrafficTypeDiagnostic: AM6PR08MB4439:|AM6PR08MB3720:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <AM6PR08MB3720984070785AF6CA0BDD20FC760@AM6PR08MB3720.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
NoDisclaimer: true
X-MS-Oob-TLC-OOBClassifiers: OLM:9508;OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original: twxUAxnDELd1QEoToMFx19uDgbqmR6cSGZsKMUUcSsoWBTSxFEuXJmK2iZ0TOvH63QVEjoL+YxwLMlhcDvi5Cqq4NAG7mH4HCt1iJN+rBSZJeRRedTpvCOLOQ7xuwE6xTYFnitC4ouR/96EnGu9xjaEm9t15f2BvE9bph3cXJWBIkaeCOCsECGphksVMMQy1dN+u7WFM3X+ScEoUbVW5yRkzIq5WSm3CjTr02BFKAvP2n9BoeID2+RfdwtjYp9VKqm0Xm+xQfu0av9aO1tJJO+LU0LupmXSUKcbQc5nnRz8Vl+GJL17a0RkXu1+OlXuMHRuOYxz+aMqNRAmrWkJ8igHfaR2SmJKwD9zoM3q8QgSaI/N8362n0EhdkPNvwSTa9K8SSiZ48C5Y+niR0tYPig==
X-Forefront-Antispam-Report-Untrusted: CIP:255.255.255.255; CTRY:; LANG:en;
 SCL:1; SRV:; IPV:NLI; SFV:NSPM; H:AM6PR08MB3560.eurprd08.prod.outlook.com;
 PTR:; CAT:NONE; SFTY:;
 SFS:(4636009)(136003)(366004)(39860400002)(376002)(346002)(396003)(8676002)(26005)(6512007)(2616005)(316002)(54906003)(6666004)(6506007)(66946007)(4326008)(6486002)(956004)(36756003)(86362001)(6916009)(44832011)(8936002)(186003)(16526019)(478600001)(5660300002)(52116002)(83380400001)(66556008)(66476007)(2906002)(2004002)(136400200001);
 DIR:OUT; SFP:1101; 
X-MS-Exchange-AntiSpam-MessageData: VR4zt7JlXihk55B+yimqS+HoPNH8z1eqW8D4Y+NSqFQp+YQNEAmA0boS9FQi4RZoY7Ao6ArnkK+cXne1C68Ba1vqu2ZQIL6rH9PeduMDipbI8gtJZgTwFbLLvUn21lsTWRTa6jJml/76/yXrF1+53kpMH6C9WNt8jGOYD2xXoJ4RSHghJ3xAbpTzubhOrRmizbaPxy6yfy8pEpqso5fRVH3EeRkE7vUBwNwX+6YPDp/VLilLzWNOihbqP0kQo6LcSorhnUF2/GzgEua0RtD3HnbYXVBk10JMU8Zo4YLh6cWFubLSZPAUElNKTjIw7Hz55E/fZadVhfogNFvOqheiUxRK4MTY7yE8+8FnsqVwnB4FNDZwc322mzPU/Ll82EEBQ875FKDFwvRfK0+uYV5KMbRwtkbDcDFWa8y11NuFQaloASKxtqS56n2GNadjYC/byrpeLZWzMRNAi07XY5R7YWtJoBrvb8CFNC6pavrommXcQNsvxn66MvwAhnZhISyK
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM6PR08MB4439
Original-Authentication-Results: lists.xenproject.org;
 dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=none action=none
 header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped: AM5EUR03FT059.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs: 5ed81a51-0ba3-4faf-0d63-08d82f1ebcd4
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: Wke04LIe2H6RUlAisrw8zc7WwCxVHAVH+HHFEanbqmDct8atz/Una7g5WGyO65VbtSdrqczqjMf71KEUcvWAiyZB6B4Ga/1mC5OZzgVpJe2kTMrZgED/6YDhHszuBtgrDcf+IFpORoywn7VlAp9xmX+aBYGkEl8zmqnIk98x9ZGbJZhGUB4S5ZR74tdnmloP/6pCK7l1TvJX/4smHJwGBFIlINEKozcinu2eQc1+zdrmSad43y4roTWRrfkX3qGsTwyrRMDiSmxSy8gVL2s2vYe+A+H4nuWASG/0Y+MjKLMouiyW42YXxLRgREf4qbcNuXqIPK9flC+PjTfgOaVlrrMW+ktSGbEIUn24fnxeFuZhqfizCrdInYde17bKpAzdJ6d2aZVlrqWKxqr7C86peHY2EFoWcAjDpY/GQ2nY7JcVUpsY1FhkQUcIFZIBr/Et
X-Forefront-Antispam-Report: CIP:63.35.35.123; CTRY:IE; LANG:en; SCL:1; SRV:;
 IPV:CAL; SFV:NSPM; H:64aa7808-outbound-1.mta.getcheckrecipient.com;
 PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com; CAT:NONE; SFTY:;
 SFS:(4636009)(396003)(376002)(346002)(39860400002)(136003)(46966005)(44832011)(70206006)(70586007)(478600001)(4326008)(956004)(336012)(5660300002)(36906005)(2616005)(54906003)(316002)(36756003)(86362001)(8936002)(81166007)(6666004)(6512007)(82310400002)(2906002)(6916009)(6486002)(16526019)(47076004)(82740400003)(186003)(356005)(8676002)(6506007)(83380400001)(26005)(136400200001)(2004002);
 DIR:OUT; SFP:1101; 
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 23 Jul 2020 15:40:40.4504 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: d689e4e6-629c-4367-b69c-08d82f1ec0ed
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d; Ip=[63.35.35.123];
 Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource: AM5EUR03FT059.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM6PR08MB3720
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: rahul.singh@arm.com, Julien Grall <julien@xen.org>,
 Paul Durrant <paul@xen.org>, Bertrand.Marquis@arm.com,
 Stefano Stabellini <sstabellini@kernel.org>, Jan Beulich <jbeulich@suse.com>,
 nd@arm.com, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

The existing vPCI support available for x86 is adapted for Arm. When a
device is added to Xen via the "PHYSDEVOP_pci_device_add" hypercall, a
vPCI handler for config space accesses is added to the PCI device so
that the device can be emulated.

An MMIO trap handler for the PCI ECAM space is registered in Xen so
that when a guest tries to access the PCI config space, Xen traps the
access and emulates the read/write using vPCI instead of the real PCI
hardware.
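For illustration only (this is standalone C, not part of the patch; the helper name `ecam_decode` is invented here), the address decode the trap handler performs follows the PCIe ECAM layout, where each function owns 4KiB of config space:

```c
#include <assert.h>
#include <stdint.h>

/*
 * Sketch of the ECAM address decode: within the trapped window, bits
 * 27..12 of the guest physical address select the bus/device/function
 * (BDF) and bits 11..0 select the register inside that function's 4KiB
 * config space.
 */
static void ecam_decode(uint64_t gpa, uint32_t *bdf, uint32_t *reg)
{
    *bdf = (gpa & 0x0ffff000) >> 12; /* bus[27:20] dev[19:15] fn[14:12] */
    *reg = gpa & 0x00000fff;         /* register offset within 4KiB     */
}
```

For example, an access at window offset 0x113010 decodes to BDF 01:02.3, register 0x10 (BAR0).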

vPCI MSI support is disabled for Arm, as it has not been tested there.

Signed-off-by: Rahul Singh <rahul.singh@arm.com>
---
 xen/arch/arm/Makefile         |   1 +
 xen/arch/arm/domain.c         |   4 ++
 xen/arch/arm/vpci.c           | 102 ++++++++++++++++++++++++++++++++++
 xen/arch/arm/vpci.h           |  37 ++++++++++++
 xen/drivers/passthrough/pci.c |   7 +++
 xen/include/asm-arm/domain.h  |   5 ++
 xen/include/public/arch-arm.h |   4 ++
 7 files changed, 160 insertions(+)
 create mode 100644 xen/arch/arm/vpci.c
 create mode 100644 xen/arch/arm/vpci.h

diff --git a/xen/arch/arm/Makefile b/xen/arch/arm/Makefile
index 345cb83eed..5a23ec5cc0 100644
--- a/xen/arch/arm/Makefile
+++ b/xen/arch/arm/Makefile
@@ -7,6 +7,7 @@ obj-y += platforms/
 endif
 obj-$(CONFIG_TEE) += tee/
 obj-$(CONFIG_ARM_PCI) += pci/
+obj-$(CONFIG_HAS_VPCI) += vpci.o
 
 obj-$(CONFIG_HAS_ALTERNATIVE) += alternative.o
 obj-y += bootfdt.init.o
diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 31169326b2..23098ffd02 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -39,6 +39,7 @@
 #include <asm/vtimer.h>
 
 #include "vuart.h"
+#include "vpci.h"
 
 DEFINE_PER_CPU(struct vcpu *, curr_vcpu);
 
@@ -747,6 +748,9 @@ int arch_domain_create(struct domain *d,
     if ( is_hardware_domain(d) && (rc = domain_vuart_init(d)) )
         goto fail;
 
+    if ( (rc = domain_vpci_init(d)) != 0 )
+        goto fail;
+
     return 0;
 
 fail:
diff --git a/xen/arch/arm/vpci.c b/xen/arch/arm/vpci.c
new file mode 100644
index 0000000000..49e473ab0d
--- /dev/null
+++ b/xen/arch/arm/vpci.c
@@ -0,0 +1,102 @@
+/*
+ * xen/arch/arm/vpci.c
+ * Copyright (c) 2020 Arm Ltd.
+ *
+ * Based on arch/x86/hvm/io.c
+ * Copyright (c) 2004, Intel Corporation.
+ * Copyright (c) 2005, International Business Machines Corporation.
+ * Copyright (c) 2008, Citrix Systems, Inc.
+ *
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+#include <xen/sched.h>
+#include <asm/mmio.h>
+
+/* Do some sanity checks. */
+static bool vpci_mmio_access_allowed(unsigned int reg, unsigned int len)
+{
+    /* Check access size (config accesses are at most 4 bytes wide). */
+    if ( len != 1 && len != 2 && len != 4 )
+        return false;
+
+    /* Check that access is size aligned. */
+    if ( (reg & (len - 1)) )
+        return false;
+
+    return true;
+}
+
+static int vpci_mmio_read(struct vcpu *v, mmio_info_t *info,
+        register_t *r, void *priv)
+{
+    unsigned int reg;
+    pci_sbdf_t sbdf;
+    uint32_t data = 0;
+    unsigned int size = 1U << info->dabt.size;
+
+    sbdf.bdf = (info->gpa & 0x0ffff000) >> 12;
+    reg = info->gpa & 0x00000fff;
+
+    if ( !vpci_mmio_access_allowed(reg, size) )
+        return 1;
+
+    data = vpci_read(sbdf, reg, size);
+
+    memcpy(r, &data, size);
+
+    return 1;
+}
+
+static int vpci_mmio_write(struct vcpu *v, mmio_info_t *info,
+        register_t r, void *priv)
+{
+    unsigned int reg;
+    pci_sbdf_t sbdf;
+    uint32_t data = r;
+    unsigned int size = 1U << info->dabt.size;
+
+    sbdf.bdf = (info->gpa & 0x0ffff000) >> 12;
+    reg = info->gpa & 0x00000fff;
+
+    if ( !vpci_mmio_access_allowed(reg, size) )
+        return 1;
+
+    vpci_write(sbdf, reg, size, data);
+
+    return 1;
+}
+
+static const struct mmio_handler_ops vpci_mmio_handler = {
+    .read  = vpci_mmio_read,
+    .write = vpci_mmio_write,
+};
+
+int domain_vpci_init(struct domain *d)
+{
+    if ( !has_vpci(d) || is_hardware_domain(d) )
+        return 0;
+
+    register_mmio_handler(d, &vpci_mmio_handler,
+            GUEST_VPCI_ECAM_BASE, GUEST_VPCI_ECAM_SIZE, NULL);
+
+    return 0;
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
+
diff --git a/xen/arch/arm/vpci.h b/xen/arch/arm/vpci.h
new file mode 100644
index 0000000000..20dce1f4c4
--- /dev/null
+++ b/xen/arch/arm/vpci.h
@@ -0,0 +1,37 @@
+/*
+ * xen/arch/arm/vpci.h
+ * Copyright (c) 2020 Arm Ltd.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+#ifndef __ARCH_ARM_VPCI_H__
+#define __ARCH_ARM_VPCI_H__
+
+#ifdef CONFIG_HAS_VPCI
+int domain_vpci_init(struct domain *d);
+#else
+static inline int domain_vpci_init(struct domain *d)
+{
+    return 0;
+}
+#endif
+
+#endif /* __ARCH_ARM_VPCI_H__ */
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/drivers/passthrough/pci.c b/xen/drivers/passthrough/pci.c
index 5846978890..28511eb641 100644
--- a/xen/drivers/passthrough/pci.c
+++ b/xen/drivers/passthrough/pci.c
@@ -804,6 +804,13 @@ int pci_add_device(u16 seg, u8 bus, u8 devfn,
     else
         iommu_enable_device(pdev);
 
+#ifdef CONFIG_ARM
+    ret = vpci_add_handlers(pdev);
+    if ( ret ) {
+        printk(XENLOG_ERR "Setup of vPCI failed: %d\n", ret);
+        goto out;
+    }
+#endif
     pci_enable_acs(pdev);
 
 out:
diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
index 4e2f582006..ad70610226 100644
--- a/xen/include/asm-arm/domain.h
+++ b/xen/include/asm-arm/domain.h
@@ -34,6 +34,11 @@ enum domain_type {
 /* The hardware domain has always its memory direct mapped. */
 #define is_domain_direct_mapped(d) ((d) == hardware_domain)
 
+/* For x86, vPCI is enabled and tested for PVH Dom0 only, but for Arm
+ * vPCI support is enabled for guest domains as well.
+ */
+#define has_vpci(d) (true)
+
 struct vtimer {
     struct vcpu *v;
     int irq;
diff --git a/xen/include/public/arch-arm.h b/xen/include/public/arch-arm.h
index c365b1b39e..7364a07362 100644
--- a/xen/include/public/arch-arm.h
+++ b/xen/include/public/arch-arm.h
@@ -422,6 +422,10 @@ typedef uint64_t xen_callback_t;
 #define GUEST_PL011_BASE    xen_mk_ullong(0x22000000)
 #define GUEST_PL011_SIZE    xen_mk_ullong(0x00001000)
 
+/* VPCI ECAM mappings */
+#define GUEST_VPCI_ECAM_BASE    xen_mk_ullong(0x10000000)
+#define GUEST_VPCI_ECAM_SIZE    xen_mk_ullong(0x10000000)
+
 /*
  * 16MB == 4096 pages reserved for guest to use as a region to map its
  * grant table in.
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu Jul 23 15:40:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jul 2020 15:40:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jydL2-00057X-5I; Thu, 23 Jul 2020 15:40:52 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dJck=BC=arm.com=rahul.singh@srs-us1.protection.inumbo.net>)
 id 1jydL0-00056E-Gk
 for xen-devel@lists.xenproject.org; Thu, 23 Jul 2020 15:40:50 +0000
X-Inumbo-ID: dd90bb18-ccfa-11ea-873e-bc764e2007e4
Received: from EUR05-VI1-obe.outbound.protection.outlook.com (unknown
 [40.107.21.56]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id dd90bb18-ccfa-11ea-873e-bc764e2007e4;
 Thu, 23 Jul 2020 15:40:42 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com; 
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=7Ig21FGJcaMYflDSUeuWkPVGyi1CGLYgrcJFV+ngOUE=;
 b=WE/koDoMYxR79CrdgfSWygikzr0sKfoO//pOmcNrIR0XL9dPT71qs8xLBU+Ax3EfwjxRQoCVZVVSo0kYd4Iq+SRmGmWIDv6H0Z+ZuowzZ2IVMTV9k633D3wMhJL7i1pSGo64HlkAOQBZfAz4jonVxSsqsgUgqbLk7RBksbezNms=
Received: from AM5PR04CA0014.eurprd04.prod.outlook.com (2603:10a6:206:1::27)
 by VI1PR0802MB2256.eurprd08.prod.outlook.com (2603:10a6:800:9e::7) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3216.24; Thu, 23 Jul
 2020 15:40:40 +0000
Received: from AM5EUR03FT053.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:206:1:cafe::c3) by AM5PR04CA0014.outlook.office365.com
 (2603:10a6:206:1::27) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3216.20 via Frontend
 Transport; Thu, 23 Jul 2020 15:40:40 +0000
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org;
 dmarc=bestguesspass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT053.mail.protection.outlook.com (10.152.16.210) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3216.10 via Frontend Transport; Thu, 23 Jul 2020 15:40:40 +0000
Received: ("Tessian outbound c83312565ef4:v62");
 Thu, 23 Jul 2020 15:40:40 +0000
X-CheckRecipientChecked: true
X-CR-MTA-CID: 3fa9bceec7b6ce00
X-CR-MTA-TID: 64aa7808
Received: from ad811b610fbf.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 04FA8CF8-6D49-4C38-93A8-8B180895957A.1; 
 Thu, 23 Jul 2020 15:40:34 +0000
Received: from EUR01-DB5-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id ad811b610fbf.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Thu, 23 Jul 2020 15:40:34 +0000
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=h60RQ8yHONI/PAgVn2nug5PWcCqo+xRnrNDK+r8cFczVo5fVYYcP7q+EhyJCtoZTd3d5p4qn009BgO12GDQMmL0rX3lt3o++8wkChVLuidoEluXvLmDatAx+vbUKqCXb7BIM7dpuweK6uXNjvOZNkybdeSUl0Nn3QVGgVBYRG96AzPqMrOUgKo9Kg+lrZUa+1rryFJkDoSOtOQt0OgXzeNf/S6JCW39YnEYsqlhUXp25vnupYqN8yaNJMboylwtdDjjJFiP+ETaGPkeOq0KqclucsF8szJS76PX0tJumoq2oJ4oFiH3/3Apjzt5V879NElw4B0AWTRqQlyfvZETGYQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; 
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=7Ig21FGJcaMYflDSUeuWkPVGyi1CGLYgrcJFV+ngOUE=;
 b=BqdFd6ap7yNCzuu/z9jm00cGZcA/1V2KA9ojNYaDo56kGNk0+UULXkOa+z4MtWZBf9L9l4XuLB+AnlJ4JFzVd/AE11zlc/fOZjqMr7xWqivTZcakIjymx7XLhqTQQLrVqYXXQIGFRxdS4hHtE5pJ0+0+zZVSdwyu/QTh7py3A5000dAtmskTS/62qt3Ut/AJGnXmfr5ZcyVk/fsUhDeSMUAIuZLM7aYLk9zJQDYGIKXMk8ifBmtESXZlQKhU8JOve2PS7W+1g2fdcM8bMa7MOnUaSuVGP+994jYswJH3rZNBEyab3n100NiuwXYZdh1rbkS9bipVTI2Q2GWGCao76w==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com; 
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=7Ig21FGJcaMYflDSUeuWkPVGyi1CGLYgrcJFV+ngOUE=;
 b=WE/koDoMYxR79CrdgfSWygikzr0sKfoO//pOmcNrIR0XL9dPT71qs8xLBU+Ax3EfwjxRQoCVZVVSo0kYd4Iq+SRmGmWIDv6H0Z+ZuowzZ2IVMTV9k633D3wMhJL7i1pSGo64HlkAOQBZfAz4jonVxSsqsgUgqbLk7RBksbezNms=
Authentication-Results-Original: lists.xenproject.org; dkim=none (message not
 signed) header.d=none; lists.xenproject.org;
 dmarc=none action=none header.from=arm.com;
Received: from AM6PR08MB3560.eurprd08.prod.outlook.com (2603:10a6:20b:4c::19)
 by AM6PR08MB4439.eurprd08.prod.outlook.com (2603:10a6:20b:be::14)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3216.24; Thu, 23 Jul
 2020 15:40:33 +0000
Received: from AM6PR08MB3560.eurprd08.prod.outlook.com
 ([fe80::e891:3b33:766:cad5]) by AM6PR08MB3560.eurprd08.prod.outlook.com
 ([fe80::e891:3b33:766:cad5%6]) with mapi id 15.20.3216.021; Thu, 23 Jul 2020
 15:40:32 +0000
From: Rahul Singh <rahul.singh@arm.com>
To: xen-devel@lists.xenproject.org
Subject: [RFC PATCH v1 1/4] arm/pci: PCI setup and PCI host bridge discovery
 within XEN on ARM.
Date: Thu, 23 Jul 2020 16:40:21 +0100
Message-Id: <64ebd4ef614b36a5844c52426a4a6a4a23b1f087.1595511416.git.rahul.singh@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1595511416.git.rahul.singh@arm.com>
References: <cover.1595511416.git.rahul.singh@arm.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LO2P265CA0300.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:a5::24) To AM6PR08MB3560.eurprd08.prod.outlook.com
 (2603:10a6:20b:4c::19)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
Received: from localhost.localdomain (217.140.106.54) by
 LO2P265CA0300.GBRP265.PROD.OUTLOOK.COM (2603:10a6:600:a5::24) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3216.20 via Frontend Transport; Thu, 23 Jul 2020 15:40:32 +0000
X-Mailer: git-send-email 2.17.1
X-Originating-IP: [217.140.106.54]
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: f18fe8ae-df6b-481b-e097-08d82f1ec0b4
X-MS-TrafficTypeDiagnostic: AM6PR08MB4439:|VI1PR0802MB2256:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <VI1PR0802MB2256033731EDA72EF87EBB71FC760@VI1PR0802MB2256.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
NoDisclaimer: true
X-MS-Oob-TLC-OOBClassifiers: OLM:3383;OLM:3383;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original: q4OnUcQirivsHUj4PMJQLjan/DGV9xSaWQBwsvkimyiVSPHp7lSBiucO8yBcFHA1sJnEfOEHFFNL+Vozu6JHsjyI4BmjDPV8/f07WF+iwXPKPY6M6sKn7HaXuuuPcQuUfHQuboRPOCKB4aNYBvml4kvNsJIFbNnawHBEob9kCKcKXjSbL3ASGWCV3FPeKsYTQb73ZfYlUUKQAHXkaRremIIXkwoBe682W8xo+nne7a/66C4q+eRegcRJH0KeA2EwCsL89RgmAc+nl7I3eaX0sMoqweSpx1XVkGcJkEeFdUUIRhS2d7s8maHa7KkE0EARgGJSaV4Ue1YUO9wPWTVIJ0z19cyOsqreIFpq2V2TZN9CJ9AaJO3NoPqJmYwFAaoAS0nUjfyt7wwS6RY0iPpsfsV0tQ4kWXQKt8xsfPQiKZzkmSiZheiAJfgQzPXytpdlcqjSzuZy94cshjOYp8XSOvXYV8zC0blUZ3WT0Tpz4hM=
X-Forefront-Antispam-Report-Untrusted: CIP:255.255.255.255; CTRY:; LANG:en;
 SCL:1; SRV:; IPV:NLI; SFV:NSPM; H:AM6PR08MB3560.eurprd08.prod.outlook.com;
 PTR:; CAT:NONE; SFTY:;
 SFS:(4636009)(136003)(366004)(39860400002)(376002)(346002)(396003)(8676002)(26005)(6512007)(2616005)(316002)(54906003)(6666004)(6506007)(66946007)(4326008)(6486002)(956004)(36756003)(86362001)(30864003)(6916009)(44832011)(8936002)(186003)(16526019)(478600001)(5660300002)(52116002)(83380400001)(66556008)(66476007)(2906002)(2004002)(136400200001);
 DIR:OUT; SFP:1101; 
X-MS-Exchange-AntiSpam-MessageData: rVIcaR9ussPehTx01moTc/TbtgWLlGAUvxdG4vPhJ3tUURj+7yeneD3YBHRHpdBMf5NOOTEHuokUCQVhq1wPbgH+rXrnbrKAWmfHI4e9geQZZZBFWV+A+Gzx5hwLcQTmq8t8Mpm37/VUszcBUx8N/cHGiqb+z1IO3KY9f+qPNIkd0v0srXWN69o6QZyP2af4YPazKl6Tmqw27geRGHwv7m0SKxPKtAAeBDMLN6R6lkS2FjTI2W/SnwqvsdCFFdnDqemBkgtyPfQHVXBP3ZmmWs/jMURuY+/AFsxtICnr9Kn+mTkC1VPbRYZep3n8zjeuD5d9IGuRq7XpdoC1Z2BDUPseJQU/bGsvgJU7SpBSqJokSVYrPqVKl8GUFR0Nh8O4+qCJ7UBZY3tyyn0HB4+karq7b+UgtqFD0bKUrbBv8p1DJOw/7uo9GA39XgjHaIH/CgDCi1NNwUaDrUHRYdXr2Sv0g+HQmuhM9GQil0umWRFq7IVYLWCtlBaALyy34LWU
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM6PR08MB4439
Original-Authentication-Results: lists.xenproject.org;
 dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=none action=none
 header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped: AM5EUR03FT053.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs: 3d62723e-38e2-4302-c5ff-08d82f1ebc33
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: D2kKBe2HC86OKa6ch1MavynG/PTUlZgwNpshYyJEaG9D3YGMSxZvDH7hKUns9znglbaQ1NXx4KmROlhHKu9F5vdZ9EpyWSbEGHs3ZMiKcHqXL2J/iaBSIEHEU05p0Ok36hvvjdLrxWvnMBRrbO32TwxlI2WOOsNCwwnBz8pIIWpwnyzbHIben7aXq6gRxGs/rix6tvtBrvr67pDrEscnnbXjNa2BXA1V9iBJnv8xXUj3NVg2mV3+FeEkD9opukAo5660d+PF2ziRCpfzyR9oyjJkir/dTM7mLGVtJax3jt/qoNYqrL5W4p1JxbcAhgp6pksguHePX/QHSCb0BQ4Q2zj/barL/zlGhpxJuSwKkD+bWl5N13zWmDI6O9fQ2EgsL3A4BFDaDxYtzwhD13DpKNV0zeeegQaTb1zJmqPdjHmW7UxhjFNBy1cGx0dDRkbETGq+VQ5HrsZNdPIJp158q7Jjl/UuTVLAzg5VaoiBdypVzqURbY1iF/jbp56VIg8u0mFaVqMNekJ/Vxb6Udak27GXdZzsCVWUdcygbeQ1Ayo=
X-Forefront-Antispam-Report: CIP:63.35.35.123; CTRY:IE; LANG:en; SCL:1; SRV:;
 IPV:CAL; SFV:NSPM; H:64aa7808-outbound-1.mta.getcheckrecipient.com;
 PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com; CAT:NONE; SFTY:;
 SFS:(4636009)(396003)(39860400002)(376002)(346002)(136003)(46966005)(2906002)(6666004)(6916009)(8936002)(82310400002)(6486002)(16526019)(186003)(86362001)(81166007)(356005)(83380400001)(107886003)(26005)(4326008)(36756003)(8676002)(47076004)(6512007)(82740400003)(336012)(6506007)(956004)(2616005)(44832011)(478600001)(70586007)(70206006)(30864003)(316002)(36906005)(5660300002)(54906003)(2004002)(136400200001);
 DIR:OUT; SFP:1101; 
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 23 Jul 2020 15:40:40.0769 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: f18fe8ae-df6b-481b-e097-08d82f1ec0b4
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d; Ip=[63.35.35.123];
 Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource: AM5EUR03FT053.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR0802MB2256
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: rahul.singh@arm.com, Julien Grall <julien@xen.org>,
 Bertrand.Marquis@arm.com, Stefano Stabellini <sstabellini@kernel.org>,
 nd@arm.com, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

During boot, Xen reads the PCI device tree node "reg" property and
maps the PCI config space into Xen memory.

Xen reads the "linux,pci-domain" property from the device tree node
and configures the host bridge segment number accordingly.

For now, only boards compatible with "pci-host-ecam-generic" are supported.
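As a rough standalone sketch (the helper `ecam_offset` is invented here and is not the patch code), "pci-host-ecam-generic" addressing gives each function 4KiB of config space and each bus 256 functions, so a bus occupies 1MiB of the ECAM window mapped from the "reg" property:

```c
#include <assert.h>
#include <stdint.h>

/*
 * Sketch of ECAM window addressing: bus in bits 27..20, devfn in bits
 * 19..12, register offset in bits 11..0 of the offset into the window.
 */
static uint64_t ecam_offset(uint8_t bus, uint8_t devfn, uint16_t reg)
{
    return ((uint64_t)bus << 20) | ((uint64_t)devfn << 12) | (reg & 0xfffU);
}
```

For example, bus 1, device 2, function 3 (devfn 0x13), register 0x10 lands at window offset 0x113010.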

Signed-off-by: Rahul Singh <rahul.singh@arm.com>
---
 xen/arch/arm/Kconfig                |   7 +
 xen/arch/arm/Makefile               |   1 +
 xen/arch/arm/pci/Makefile           |   4 +
 xen/arch/arm/pci/pci-access.c       | 101 ++++++++++++++
 xen/arch/arm/pci/pci-host-common.c  | 198 ++++++++++++++++++++++++++++
 xen/arch/arm/pci/pci-host-generic.c | 131 ++++++++++++++++++
 xen/arch/arm/pci/pci.c              | 112 ++++++++++++++++
 xen/arch/arm/setup.c                |   2 +
 xen/include/asm-arm/device.h        |   7 +-
 xen/include/asm-arm/pci.h           |  97 +++++++++++++-
 10 files changed, 654 insertions(+), 6 deletions(-)
 create mode 100644 xen/arch/arm/pci/Makefile
 create mode 100644 xen/arch/arm/pci/pci-access.c
 create mode 100644 xen/arch/arm/pci/pci-host-common.c
 create mode 100644 xen/arch/arm/pci/pci-host-generic.c
 create mode 100644 xen/arch/arm/pci/pci.c

diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig
index 2777388265..ee13339ae9 100644
--- a/xen/arch/arm/Kconfig
+++ b/xen/arch/arm/Kconfig
@@ -31,6 +31,13 @@ menu "Architecture Features"
 
 source "arch/Kconfig"
 
+config ARM_PCI
+	bool "PCI Passthrough Support"
+	depends on ARM_64
+	---help---
+
+	  PCI passthrough support for Xen on ARM64.
+
 config ACPI
 	bool "ACPI (Advanced Configuration and Power Interface) Support" if EXPERT
 	depends on ARM_64
diff --git a/xen/arch/arm/Makefile b/xen/arch/arm/Makefile
index 7e82b2178c..345cb83eed 100644
--- a/xen/arch/arm/Makefile
+++ b/xen/arch/arm/Makefile
@@ -6,6 +6,7 @@ ifneq ($(CONFIG_NO_PLAT),y)
 obj-y += platforms/
 endif
 obj-$(CONFIG_TEE) += tee/
+obj-$(CONFIG_ARM_PCI) += pci/
 
 obj-$(CONFIG_HAS_ALTERNATIVE) += alternative.o
 obj-y += bootfdt.init.o
diff --git a/xen/arch/arm/pci/Makefile b/xen/arch/arm/pci/Makefile
new file mode 100644
index 0000000000..358508b787
--- /dev/null
+++ b/xen/arch/arm/pci/Makefile
@@ -0,0 +1,4 @@
+obj-y += pci.o
+obj-y += pci-host-generic.o
+obj-y += pci-host-common.o
+obj-y += pci-access.o
diff --git a/xen/arch/arm/pci/pci-access.c b/xen/arch/arm/pci/pci-access.c
new file mode 100644
index 0000000000..c53ef58336
--- /dev/null
+++ b/xen/arch/arm/pci/pci-access.c
@@ -0,0 +1,101 @@
+/*
+ * Copyright (C) 2020 Arm Ltd.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <xen/init.h>
+#include <xen/pci.h>
+#include <asm/pci.h>
+#include <xen/rwlock.h>
+
+static uint32_t pci_config_read(pci_sbdf_t sbdf, unsigned int reg,
+                            unsigned int len)
+{
+    int rc;
+    uint32_t val = GENMASK((len * 8) - 1, 0);
+
+    struct pci_host_bridge *bridge = pci_find_host_bridge(sbdf.seg, sbdf.bus);
+
+    if ( unlikely(!bridge) )
+    {
+        printk(XENLOG_ERR "Unable to find bridge for "PRI_pci"\n",
+                sbdf.seg, sbdf.bus, sbdf.dev, sbdf.fn);
+        return val;
+    }
+
+    if ( unlikely(!bridge->ops->read) )
+        return val;
+
+    rc = bridge->ops->read(bridge, (uint32_t) sbdf.sbdf, reg, len, &val);
+    if ( rc )
+        printk(XENLOG_ERR "Failed to read reg %#x len %u for "PRI_pci"\n",
+                reg, len, sbdf.seg, sbdf.bus, sbdf.dev, sbdf.fn);
+
+    return val;
+}
+
+static void pci_config_write(pci_sbdf_t sbdf, unsigned int reg,
+        unsigned int len, uint32_t val)
+{
+    int rc;
+    struct pci_host_bridge *bridge = pci_find_host_bridge(sbdf.seg, sbdf.bus);
+
+    if ( unlikely(!bridge) )
+    {
+        printk(XENLOG_ERR "Unable to find bridge for "PRI_pci"\n",
+                sbdf.seg, sbdf.bus, sbdf.dev, sbdf.fn);
+        return;
+    }
+
+    if ( unlikely(!bridge->ops->write) )
+        return;
+
+    rc = bridge->ops->write(bridge, (uint32_t) sbdf.sbdf, reg, len, val);
+    if ( rc )
+        printk(XENLOG_ERR "Failed to write reg %#x len %u for "PRI_pci"\n",
+                reg, len, sbdf.seg, sbdf.bus, sbdf.dev, sbdf.fn);
+}
+
+/*
+ * Wrappers for all PCI configuration access functions.
+ */
+
+#define PCI_OP_WRITE(size, type) \
+    void pci_conf_write##size (pci_sbdf_t sbdf,unsigned int reg, type val) \
+{                                               \
+    pci_config_write(sbdf, reg, size / 8, val);     \
+}
+
+#define PCI_OP_READ(size, type) \
+    type pci_conf_read##size (pci_sbdf_t sbdf, unsigned int reg)  \
+{                                               \
+    return pci_config_read(sbdf, reg, size / 8);     \
+}
+
+PCI_OP_READ(8, u8)
+PCI_OP_READ(16, u16)
+PCI_OP_READ(32, u32)
+PCI_OP_WRITE(8, u8)
+PCI_OP_WRITE(16, u16)
+PCI_OP_WRITE(32, u32)
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/arch/arm/pci/pci-host-common.c b/xen/arch/arm/pci/pci-host-common.c
new file mode 100644
index 0000000000..c5f98be698
--- /dev/null
+++ b/xen/arch/arm/pci/pci-host-common.c
@@ -0,0 +1,198 @@
+/*
+ * Copyright (C) 2020 Arm Ltd.
+ *
+ * Based on Linux drivers/pci/ecam.c
+ * Copyright 2016 Broadcom.
+ *
+ * Based on Linux drivers/pci/controller/pci-host-common.c
+ * Based on Linux drivers/pci/controller/pci-host-generic.c
+ * Copyright (C) 2014 ARM Limited Will Deacon <will.deacon@arm.com>
+ *
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <xen/init.h>
+#include <xen/pci.h>
+#include <asm/pci.h>
+#include <xen/rwlock.h>
+#include <xen/vmap.h>
+
+/*
+ * List for all the pci host bridges.
+ */
+
+static LIST_HEAD(pci_host_bridges);
+
+static bool __init dt_pci_parse_bus_range(struct dt_device_node *dev,
+        struct pci_config_window *cfg)
+{
+    const __be32 *cells;
+    uint32_t len;
+
+    cells = dt_get_property(dev, "bus-range", &len);
+    /* bus-range should at least be 2 cells */
+    if ( !cells || (len < (sizeof(*cells) * 2)) )
+        return false;
+
+    cfg->busn_start = dt_next_cell(1, &cells);
+    cfg->busn_end = dt_next_cell(1, &cells);
+
+    return true;
+}
+
+static inline void __iomem *pci_remap_cfgspace(paddr_t start, size_t len)
+{
+    return ioremap_nocache(start, len);
+}
+
+static void pci_ecam_free(struct pci_config_window *cfg)
+{
+    if ( cfg->win )
+        iounmap(cfg->win);
+
+    xfree(cfg);
+}
+
+static struct pci_config_window *gen_pci_init(struct dt_device_node *dev,
+        struct pci_ecam_ops *ops)
+{
+    int err;
+    struct pci_config_window *cfg;
+    paddr_t addr, size;
+
+    cfg = xzalloc(struct pci_config_window);
+    if ( !cfg )
+        return NULL;
+
+    err = dt_pci_parse_bus_range(dev, cfg);
+    if ( !err )
+    {
+        cfg->busn_start = 0;
+        cfg->busn_end = 0xff;
+        printk(XENLOG_ERR "No bus range found for pci controller\n");
+    }
+    else if ( cfg->busn_end > cfg->busn_start + 0xff )
+        cfg->busn_end = cfg->busn_start + 0xff;
+
+    /* Parse the PCI ECAM register address. */
+    err = dt_device_get_address(dev, 0, &addr, &size);
+    if ( err )
+        goto err_exit;
+
+    cfg->phys_addr = addr;
+    cfg->size = size;
+    cfg->ops = ops;
+
+    /*
+     * On 64-bit systems, we do a single ioremap for the whole config space
+     * since we have enough virtual address range available.  On 32-bit, we
+     * ioremap the config space for each bus individually.
+     *
+     * As of now only 64-bit is supported; 32-bit is not.
+     */
+    cfg->win = pci_remap_cfgspace(cfg->phys_addr, cfg->size);
+    if ( !cfg->win )
+        goto err_exit_remap;
+
+    printk("ECAM at [mem %"PRIpaddr"-%"PRIpaddr"] for [bus %x-%x]\n",
+           cfg->phys_addr, cfg->phys_addr + cfg->size - 1, cfg->busn_start, cfg->busn_end);
+
+    if ( ops->init ) {
+        err = ops->init(cfg);
+        if ( err )
+            goto err_exit;
+    }
+
+    return cfg;
+
+err_exit_remap:
+    printk(XENLOG_ERR "ECAM ioremap failed\n");
+err_exit:
+    pci_ecam_free(cfg);
+    return NULL;
+}
+
+static struct pci_host_bridge * pci_alloc_host_bridge(void)
+{
+    struct pci_host_bridge *bridge = xzalloc(struct pci_host_bridge);
+
+    if ( !bridge )
+        return NULL;
+
+    INIT_LIST_HEAD(&bridge->node);
+    return bridge;
+}
+
+int pci_host_common_probe(struct dt_device_node *dev,
+        struct pci_ecam_ops *ops)
+{
+    struct pci_host_bridge *bridge;
+    struct pci_config_window *cfg;
+    u32 segment;
+
+    bridge = pci_alloc_host_bridge();
+    if ( !bridge )
+        return -ENOMEM;
+
+    /* Parse and map our Configuration Space windows */
+    cfg = gen_pci_init(dev, ops);
+    if ( !cfg )
+        return -ENOMEM;
+
+    bridge->dt_node = dev;
+    bridge->sysdata = cfg;
+    bridge->ops = &ops->pci_ops;
+
+    if ( !dt_property_read_u32(dev, "linux,pci-domain", &segment) )
+    {
+        printk(XENLOG_ERR "\"linux,pci-domain\" property is not available in DT\n");
+        return -ENODEV;
+    }
+
+    bridge->segment = (u16)segment;
+
+    list_add_tail(&bridge->node, &pci_host_bridges);
+
+    return 0;
+}
+
+/*
+ * Look up a host bridge based on the segment and bus
+ * number.
+ */
+struct pci_host_bridge *pci_find_host_bridge(uint16_t segment, uint8_t bus)
+{
+    struct pci_host_bridge *bridge;
+    bool found = false;
+
+    list_for_each_entry( bridge, &pci_host_bridges, node )
+    {
+        if ( bridge->segment != segment )
+            continue;
+
+        found = true;
+        break;
+    }
+
+    return (found) ? bridge : NULL;
+}
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/arch/arm/pci/pci-host-generic.c b/xen/arch/arm/pci/pci-host-generic.c
new file mode 100644
index 0000000000..cd67b3dec6
--- /dev/null
+++ b/xen/arch/arm/pci/pci-host-generic.c
@@ -0,0 +1,131 @@
+/*
+ * Copyright (C) 2020 Arm Ltd.
+ *
+ * Based on Linux drivers/pci/controller/pci-host-common.c
+ * Based on Linux drivers/pci/controller/pci-host-generic.c
+ * Copyright (C) 2014 ARM Limited Will Deacon <will.deacon@arm.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <asm/device.h>
+#include <asm/io.h>
+#include <xen/pci.h>
+#include <asm/pci.h>
+
+/*
+ * Function to get the config space base.
+ */
+static void __iomem *pci_config_base(struct pci_host_bridge *bridge,
+        uint32_t sbdf, int where)
+{
+    struct pci_config_window *cfg = bridge->sysdata;
+    unsigned int devfn_shift = cfg->ops->bus_shift - 8;
+
+    pci_sbdf_t sbdf_t = (pci_sbdf_t) sbdf;
+
+    unsigned int busn = sbdf_t.bus;
+    void __iomem *base;
+
+    if ( busn < cfg->busn_start || busn > cfg->busn_end )
+        return NULL;
+
+    base = cfg->win + (busn << cfg->ops->bus_shift);
+
+    return base + (PCI_DEVFN(sbdf_t.dev, sbdf_t.fn) << devfn_shift) + where;
+}
+
+int pci_ecam_config_write(struct pci_host_bridge *bridge, uint32_t sbdf,
+        int where, int size, u32 val)
+{
+    void __iomem *addr;
+
+    addr = pci_config_base(bridge, sbdf, where);
+    if ( !addr )
+        return -ENODEV;
+
+    if ( size == 1 )
+        writeb(val, addr);
+    else if ( size == 2 )
+        writew(val, addr);
+    else
+        writel(val, addr);
+
+    return 0;
+}
+
+int pci_ecam_config_read(struct pci_host_bridge *bridge, uint32_t sbdf,
+        int where, int size, u32 *val)
+{
+    void __iomem *addr;
+
+    addr = pci_config_base(bridge, sbdf, where);
+    if ( !addr ) {
+        *val = ~0;
+        return -ENODEV;
+    }
+
+    if ( size == 1 )
+        *val = readb(addr);
+    else if ( size == 2 )
+        *val = readw(addr);
+    else
+        *val = readl(addr);
+
+    return 0;
+}
+
+/* ECAM ops */
+struct pci_ecam_ops pci_generic_ecam_ops = {
+    .bus_shift  = 20,
+    .pci_ops    = {
+        .read       = pci_ecam_config_read,
+        .write      = pci_ecam_config_write,
+    }
+};
+
+static const struct dt_device_match gen_pci_dt_match[] = {
+    { .compatible = "pci-host-ecam-generic",
+      .data =       &pci_generic_ecam_ops },
+
+    { },
+};
+
+static int gen_pci_dt_init(struct dt_device_node *dev, const void *data)
+{
+    const struct dt_device_match *of_id;
+    struct pci_ecam_ops *ops;
+
+    of_id = dt_match_node(gen_pci_dt_match, dev->dev.of_node);
+    ops = (struct pci_ecam_ops *) of_id->data;
+
+    printk(XENLOG_INFO "Found PCI host bridge %s compatible: %s\n",
+            dt_node_full_name(dev), of_id->compatible);
+
+    return pci_host_common_probe(dev, ops);
+}
+
+DT_DEVICE_START(pci_gen, "PCI HOST GENERIC", DEVICE_PCI)
+.dt_match = gen_pci_dt_match,
+.init = gen_pci_dt_init,
+DT_DEVICE_END
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/arch/arm/pci/pci.c b/xen/arch/arm/pci/pci.c
new file mode 100644
index 0000000000..f8cbb99591
--- /dev/null
+++ b/xen/arch/arm/pci/pci.c
@@ -0,0 +1,112 @@
+/*
+ * Copyright (C) 2020 Arm Ltd.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <xen/acpi.h>
+#include <xen/device_tree.h>
+#include <xen/errno.h>
+#include <xen/init.h>
+#include <xen/pci.h>
+#include <xen/param.h>
+
+static int __init dt_pci_init(void)
+{
+    struct dt_device_node *np;
+    int rc;
+
+    dt_for_each_device_node(dt_host, np)
+    {
+        rc = device_init(np, DEVICE_PCI, NULL);
+        if ( !rc )
+            continue;
+        /*
+         * Ignore the following error codes:
+         *   - EBADF: Indicates the current node is not a PCI host bridge.
+         *   - ENODEV: The PCI device is not present or cannot be used by
+         *     Xen.
+         */
+        else if ( rc != -EBADF && rc != -ENODEV )
+        {
+            printk(XENLOG_ERR "No driver found in Xen, or driver initialization error.\n");
+            return rc;
+        }
+    }
+
+    return 0;
+}
+
+#ifdef CONFIG_ACPI
+static void __init acpi_pci_init(void)
+{
+    printk(XENLOG_ERR "ACPI PCI init not supported\n");
+    return;
+}
+#else
+static inline void __init acpi_pci_init(void) { }
+#endif
+
+static bool __initdata param_pci_enable;
+static int __init parse_pci_param(const char *arg)
+{
+    if ( !arg )
+    {
+        param_pci_enable = false;
+        return 0;
+    }
+
+    switch ( parse_bool(arg, NULL) )
+    {
+        case 0:
+            param_pci_enable = false;
+            return 0;
+        case 1:
+            param_pci_enable = true;
+            return 0;
+    }
+
+    return -EINVAL;
+}
+custom_param("pci", parse_pci_param);
+
+void __init pci_init(void)
+{
+    /*
+     * Enable PCI only when it has been explicitly enabled (pci=on).
+     */
+    if ( !param_pci_enable )
+        goto disable;
+
+    if ( acpi_disabled )
+        dt_pci_init();
+    else
+        acpi_pci_init();
+
+#ifdef CONFIG_HAS_PCI
+    pci_segments_init();
+#endif
+
+disable:
+    return;
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index 7968cee47d..2d7f1db44f 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -930,6 +930,8 @@ void __init start_xen(unsigned long boot_phys_offset,
 
     setup_virt_paging();
 
+    pci_init();
+
     do_initcalls();
 
     /*
diff --git a/xen/include/asm-arm/device.h b/xen/include/asm-arm/device.h
index ee7cff2d44..28f8049cfd 100644
--- a/xen/include/asm-arm/device.h
+++ b/xen/include/asm-arm/device.h
@@ -4,6 +4,7 @@
 enum device_type
 {
     DEV_DT,
+    DEV_PCI,
 };
 
 struct dev_archdata {
@@ -25,15 +26,15 @@ typedef struct device device_t;
 
 #include <xen/device_tree.h>
 
-/* TODO: Correctly implement dev_is_pci when PCI is supported on ARM */
-#define dev_is_pci(dev) ((void)(dev), 0)
-#define dev_is_dt(dev)  ((dev->type == DEV_DT)
+#define dev_is_pci(dev) (dev->type == DEV_PCI)
+#define dev_is_dt(dev)  (dev->type == DEV_DT)
 
 enum device_class
 {
     DEVICE_SERIAL,
     DEVICE_IOMMU,
     DEVICE_GIC,
+    DEVICE_PCI,
     /* Use for error */
     DEVICE_UNKNOWN,
 };
diff --git a/xen/include/asm-arm/pci.h b/xen/include/asm-arm/pci.h
index de13359f65..94fd00360a 100644
--- a/xen/include/asm-arm/pci.h
+++ b/xen/include/asm-arm/pci.h
@@ -1,7 +1,98 @@
-#ifndef __X86_PCI_H__
-#define __X86_PCI_H__
+/*
+ * Copyright (C) 2020 Arm Ltd.
+ *
+ * Based on Linux drivers/pci/ecam.c
+ * Copyright 2016 Broadcom.
+ *
+ * Based on Linux drivers/pci/controller/pci-host-common.c
+ * Based on Linux drivers/pci/controller/pci-host-generic.c
+ * Copyright (C) 2014 ARM Limited Will Deacon <will.deacon@arm.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
 
+#ifndef __ARM_PCI_H__
+#define __ARM_PCI_H__
+
+#include <xen/pci.h>
+#include <xen/device_tree.h>
+#include <asm/device.h>
+
+#ifdef CONFIG_ARM_PCI
+
+/* Arch pci dev struct */
 struct arch_pci_dev {
+    struct device dev;
+};
+
+#define PRI_pci "%04x:%02x:%02x.%u"
+#define pci_to_dev(pcidev) (&(pcidev)->arch.dev)
+
+/*
+ * struct to hold the mappings of a config space window. This
+ * is expected to be used as sysdata for PCI controllers that
+ * use ECAM.
+ */
+struct pci_config_window {
+    paddr_t     phys_addr;
+    paddr_t     size;
+    uint8_t     busn_start;
+    uint8_t     busn_end;
+    struct pci_ecam_ops     *ops;
+    void __iomem        *win;
+};
+
+/* Forward declaration as pci_host_bridge and pci_ops depend on each other. */
+struct pci_host_bridge;
+
+struct pci_ops {
+    int (*read)(struct pci_host_bridge *bridge,
+                    uint32_t sbdf, int where, int size, u32 *val);
+    int (*write)(struct pci_host_bridge *bridge,
+                    uint32_t sbdf, int where, int size, u32 val);
+};
+
+/*
+ * struct to hold pci ops and bus shift of the config window
+ * for a PCI controller.
+ */
+struct pci_ecam_ops {
+    unsigned int            bus_shift;
+    struct pci_ops          pci_ops;
+    int             (*init)(struct pci_config_window *);
+};
+
+/*
+ * struct to hold pci host bridge information
+ * for a PCI controller.
+ */
+struct pci_host_bridge {
+    struct dt_device_node *dt_node;  /* Pointer to the associated DT node */
+    struct list_head node;           /* Node in list of host bridges */
+    uint16_t segment;                /* Segment number */
+    void *sysdata;                   /* Pointer to the config space window*/
+    const struct pci_ops *ops;
 };
 
-#endif /* __X86_PCI_H__ */
+struct pci_host_bridge *pci_find_host_bridge(uint16_t segment, uint8_t bus);
+
+int pci_host_common_probe(struct dt_device_node *dev,
+                struct pci_ecam_ops *ops);
+
+void pci_init(void);
+
+#else   /*!CONFIG_ARM_PCI*/
+struct arch_pci_dev { };
+static inline void  pci_init(void) { }
+#endif  /*!CONFIG_ARM_PCI*/
+#endif /* __ARM_PCI_H__ */
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu Jul 23 15:40:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jul 2020 15:40:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jydL6-00059K-Lk; Thu, 23 Jul 2020 15:40:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dJck=BC=arm.com=rahul.singh@srs-us1.protection.inumbo.net>)
 id 1jydL5-00056E-Gp
 for xen-devel@lists.xenproject.org; Thu, 23 Jul 2020 15:40:55 +0000
X-Inumbo-ID: dd870334-ccfa-11ea-873e-bc764e2007e4
Received: from EUR03-VE1-obe.outbound.protection.outlook.com (unknown
 [40.107.5.81]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id dd870334-ccfa-11ea-873e-bc764e2007e4;
 Thu, 23 Jul 2020 15:40:42 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com; 
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=+qBrQE/prViUqVQSn9QqWxYINAyZdBmv/u8R521Hni8=;
 b=sz7op+iZCDFJT9d88nFiyJBooGs+Lw7Xi+hNZkfkcgL7vy721rGcergeq+s6o2XYl0FgFR94TyWFAhrhlH6GrKpk74/NnE+kjTmbnYmWs5jGWPgEkBXV5iltnmDsVT0vhAUGWXsi2/G+tJURuEFk60RQJ2aV/4n3RDXIkv2R0ws=
Received: from DB6P193CA0017.EURP193.PROD.OUTLOOK.COM (2603:10a6:6:29::27) by
 VI1PR08MB4336.eurprd08.prod.outlook.com (2603:10a6:803:fe::14) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3216.24; Thu, 23 Jul 2020 15:40:40 +0000
Received: from DB5EUR03FT014.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:6:29:cafe::a2) by DB6P193CA0017.outlook.office365.com
 (2603:10a6:6:29::27) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3216.20 via Frontend
 Transport; Thu, 23 Jul 2020 15:40:40 +0000
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org;
 dmarc=bestguesspass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DB5EUR03FT014.mail.protection.outlook.com (10.152.20.102) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3216.10 via Frontend Transport; Thu, 23 Jul 2020 15:40:39 +0000
Received: ("Tessian outbound c4059ed8d7bf:v62");
 Thu, 23 Jul 2020 15:40:39 +0000
X-CheckRecipientChecked: true
X-CR-MTA-CID: cb8faa3ba6261820
X-CR-MTA-TID: 64aa7808
Received: from ad811b610fbf.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 70B6FD04-E237-4C1D-B191-796C7021326B.1; 
 Thu, 23 Jul 2020 15:40:34 +0000
Received: from EUR01-DB5-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id ad811b610fbf.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Thu, 23 Jul 2020 15:40:34 +0000
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=hUvJLRv4MULblw8+//xIveknlhcwLLYeI2U8tsdvu3/rZS6GKQePe6ut6k+u779wAHBxNcd1F8wvG82oTUArLyTBhDhOvJ8mmeuwwNG9IXnAyzK5T4p86q0L9ooCF4q5kdvEY2iFG5gxCYZHGCaxlrgHjA47IJ7WsQXzCW+QTJWaoJlJibk7b3g3vNb3XIXbpgYM93t/nbyEMsztf9OcPd2gley3Rt4IZA3ZW74pF69ZlLzuQ8x7Ljfee/aJBvDTR8zBur8RPJ3neI1HbV8s3J2S7KR8c+dtpJ9aT95b4gGsvxJ1u+YTd3RPuEFrnFuTdSNi9If8HJWoUzwD21V9Og==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; 
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=+qBrQE/prViUqVQSn9QqWxYINAyZdBmv/u8R521Hni8=;
 b=lunZUay2OG4xjT+2Kg+xqoqv97Oic6hMTjuQuOuM0RcrU0osK1NMhX0a9XdrKtFa3qBEFptutI3xAQrpSXu1Z7/fU6zWRCIBoCENx/wmWTHNyV6LV2ZqjiF8v87NBzq9nCmSmgqaplhyHcWPI9ZCay3C7OzBBXX6IENRUllmK5Z/4dCXY7Je1gbqK+tVIoQouVWgb3d//Yfiks0eO9BWWL3N54k2qSPdEedhjYmMCbqCz138o9dhiCQb/3WCFKco7p3z64WC4FDB5zp+ldTRL174RU8nUpRq4pReymw1vbwoaDgnl4xTDa9F1zxHiRd+kMa8srqCPKTZA/6mRmFk4w==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com; 
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=+qBrQE/prViUqVQSn9QqWxYINAyZdBmv/u8R521Hni8=;
 b=sz7op+iZCDFJT9d88nFiyJBooGs+Lw7Xi+hNZkfkcgL7vy721rGcergeq+s6o2XYl0FgFR94TyWFAhrhlH6GrKpk74/NnE+kjTmbnYmWs5jGWPgEkBXV5iltnmDsVT0vhAUGWXsi2/G+tJURuEFk60RQJ2aV/4n3RDXIkv2R0ws=
Authentication-Results-Original: lists.xenproject.org; dkim=none (message not
 signed) header.d=none; lists.xenproject.org;
 dmarc=none action=none header.from=arm.com;
Received: from AM6PR08MB3560.eurprd08.prod.outlook.com (2603:10a6:20b:4c::19)
 by AM6PR08MB4439.eurprd08.prod.outlook.com (2603:10a6:20b:be::14)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3216.24; Thu, 23 Jul
 2020 15:40:33 +0000
Received: from AM6PR08MB3560.eurprd08.prod.outlook.com
 ([fe80::e891:3b33:766:cad5]) by AM6PR08MB3560.eurprd08.prod.outlook.com
 ([fe80::e891:3b33:766:cad5%6]) with mapi id 15.20.3216.021; Thu, 23 Jul 2020
 15:40:33 +0000
From: Rahul Singh <rahul.singh@arm.com>
To: xen-devel@lists.xenproject.org
Subject: [RFC PATCH v1 2/4] xen/arm: Discovering PCI devices and add the PCI
 devices in XEN.
Date: Thu, 23 Jul 2020 16:40:22 +0100
Message-Id: <666df0147054dda8af13ae74a89be44c69984295.1595511416.git.rahul.singh@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1595511416.git.rahul.singh@arm.com>
References: <cover.1595511416.git.rahul.singh@arm.com>
Content-Type: text/plain
X-ClientProxiedBy: LO2P265CA0300.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:a5::24) To AM6PR08MB3560.eurprd08.prod.outlook.com
 (2603:10a6:20b:4c::19)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
Received: from localhost.localdomain (217.140.106.54) by
 LO2P265CA0300.GBRP265.PROD.OUTLOOK.COM (2603:10a6:600:a5::24) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3216.20 via Frontend Transport; Thu, 23 Jul 2020 15:40:33 +0000
X-Mailer: git-send-email 2.17.1
X-Originating-IP: [217.140.106.54]
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: e163d089-4ce9-4596-93a1-08d82f1ec084
X-MS-TrafficTypeDiagnostic: AM6PR08MB4439:|VI1PR08MB4336:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <VI1PR08MB4336F196A6CF6BDE08AD2D82FC760@VI1PR08MB4336.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
NoDisclaimer: true
X-MS-Oob-TLC-OOBClassifiers: OLM:6790;OLM:6790;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original: ptv+FrxmpR/clN3SqjLCV2rTdg5I9MA0Gvy7zfMwCKX7p78FL7Zi1LkKZmjvPtRC+NsJZ462eVC9RgboO6qxz1Jy3OrZ2oxWTfeQXDRM/K8wZjmSsLkbz1B4DDn1caTyeIWxPNvXpoD7y9XrRa6DIbkWpYplLCQAtz0PCmWQCeSJxXYbVIYKf5KmCLJ/HGV4TP3MCVuBvfow345mP5SNdjMkITRR8MEOvXkvaY5ZbJ+KR7ltwncKuwH4nJVpQuRZ8g0AmbObmJBZOS+qFOrt1TixtSdWb746uaya6vbXCWBp5kVE0OI4/gA1K6KkGZ8AYSeC0haBrwvyvh0wtkP9ilBkj4oyvQy2MVfa329k5JMNkUvSAbD/DayEgljfT8Fa+gCqOuLrQbqJNSJlDLFJfP3E3fSKWKzK5LqjFB0MdPI=
X-Forefront-Antispam-Report-Untrusted: CIP:255.255.255.255; CTRY:; LANG:en;
 SCL:1; SRV:; IPV:NLI; SFV:NSPM; H:AM6PR08MB3560.eurprd08.prod.outlook.com;
 PTR:; CAT:NONE; SFTY:;
 SFS:(4636009)(136003)(366004)(39860400002)(376002)(346002)(396003)(8676002)(26005)(6512007)(2616005)(316002)(54906003)(6666004)(6506007)(66946007)(4326008)(6486002)(956004)(36756003)(86362001)(6916009)(44832011)(8936002)(186003)(16526019)(478600001)(69590400007)(5660300002)(52116002)(83380400001)(66556008)(66476007)(2906002)(136400200001);
 DIR:OUT; SFP:1101; 
X-MS-Exchange-AntiSpam-MessageData: V6p+QxlCAKI1SnVCOI1NYf3z8YqrcqbwCwBe4KEApsMOn/4/qraoGFIRMv2T24PL3HUD4COUWzkdfcemDcXPE5/h9mVxeuUij137Xon91lVXqnH2/5VW8CMbh8Ej9ZomfAuZ6CQynIF/sHwnColr3U6yyNMpF4y6wLh6NQhbBtJaqaYKkpJBC24hXgUavL1em5miSGol6vYh7ytzjW913cR7ATfT2ap/byHV2AyHlMnp6wRJHT2x4kFuHJdQ++LIYQr2h1FEf8Fcyx9xn+e4VQ1jk7vyOqqN/eBVjbZjFlSsPJUF+jnPZaMrqH9Zvvq/nXCAM3lCvFNNIhbV88bxejIow9Gbs0mbbhvzHdjPuiR9W7f1P8YbJ9sZ+8D1N0c7J3Kz7/EJ9v2mPaaZ9eYJCF7BFmmKRNRI1R9u/Tij5fsySyypctMOn06XSb3OxvG+GodIE9DONhduNL13sNZez46zElTzggxHUTC0OQ58Nke+hB780k91xCcWxcbZx4L6
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM6PR08MB4439
Original-Authentication-Results: lists.xenproject.org;
 dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=none action=none
 header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped: DB5EUR03FT014.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs: bfec9df7-fd04-4258-b2ff-08d82f1ebc8a
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: UP7NBQ30G58bAqNZDTgYA8WBwrr1OU1mKU9BD9jQlV5eOwS6S9LuY2yquTxywY4J00mb5muPGN9qcnLzfx7y98SHVXIGLeQnIlkbxIsKp/7dXaAm1yEsMUCugzysOXIMIKR8L+hJ0fBBhCBqZhIwrtgNtvV+TCqLjWhBeAoiAkFXBhNJ3TbcxbTiFeH2bzJ0h3j1dm7S+0G8vcsVq6+btB6iDHs2tjesfqKk/B2u5TSdsgUlgr3J1PSoBFmP1SkNPm6B5wtbM6ds/wSGt6zskAjySyvUT9arvY8g7xLMNYsWHBxejaWW57WstuQY51WITmSgFWBHboHQHu1LtNJ8+QUQktB4p4wllXObhvtzp8etaPU+37TpGzPoNVPPMaVHyos9+Wqs/kPwXnLUsLMNX1ZzcO15dYrbgK66Je2Ght9eJXs9apC9EDYKIGE9+CE3GxfHUhAzTKW6aU/Vy1Tq+Q==
X-Forefront-Antispam-Report: CIP:63.35.35.123; CTRY:IE; LANG:en; SCL:1; SRV:;
 IPV:CAL; SFV:NSPM; H:64aa7808-outbound-1.mta.getcheckrecipient.com;
 PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com; CAT:NONE; SFTY:;
 SFS:(4636009)(39860400002)(346002)(376002)(396003)(136003)(46966005)(8936002)(82740400003)(36756003)(69590400007)(70586007)(70206006)(81166007)(8676002)(5660300002)(47076004)(2906002)(107886003)(54906003)(186003)(6666004)(356005)(478600001)(336012)(316002)(26005)(4326008)(6506007)(6512007)(2616005)(44832011)(6486002)(82310400002)(16526019)(86362001)(83380400001)(956004)(6916009)(136400200001);
 DIR:OUT; SFP:1101; 
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 23 Jul 2020 15:40:39.8293 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: e163d089-4ce9-4596-93a1-08d82f1ec084
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d; Ip=[63.35.35.123];
 Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource: DB5EUR03FT014.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR08MB4336
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: rahul.singh@arm.com, Julien Grall <julien@xen.org>,
 Bertrand.Marquis@arm.com, Stefano Stabellini <sstabellini@kernel.org>,
 nd@arm.com, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

The hardware domain is in charge of PCI enumeration: it discovers the
PCI devices and then communicates them to Xen via the
PHYSDEVOP_pci_device_add hypercall so that they can be added in Xen.

Change-Id: Ie87e19741689503b4b62da911c8dc2ee318584ac
Signed-off-by: Rahul Singh <rahul.singh@arm.com>
---
 xen/arch/arm/physdev.c | 42 +++++++++++++++++++++++++++++++++++++++---
 1 file changed, 39 insertions(+), 3 deletions(-)

diff --git a/xen/arch/arm/physdev.c b/xen/arch/arm/physdev.c
index e91355fe22..274720f98a 100644
--- a/xen/arch/arm/physdev.c
+++ b/xen/arch/arm/physdev.c
@@ -9,12 +9,48 @@
 #include <xen/errno.h>
 #include <xen/sched.h>
 #include <asm/hypercall.h>
-
+#include <xen/guest_access.h>
+#include <xsm/xsm.h>
 
 int do_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
-    gdprintk(XENLOG_DEBUG, "PHYSDEVOP cmd=%d: not implemented\n", cmd);
-    return -ENOSYS;
+    int ret = 0;
+
+    switch ( cmd )
+    {
+#ifdef CONFIG_HAS_PCI
+        case PHYSDEVOP_pci_device_add:
+            {
+                struct physdev_pci_device_add add;
+                struct pci_dev_info pdev_info;
+                nodeid_t node = NUMA_NO_NODE;
+
+                ret = -EFAULT;
+                if ( copy_from_guest(&add, arg, 1) != 0 )
+                    break;
+
+                pdev_info.is_extfn = !!(add.flags & XEN_PCI_DEV_EXTFN);
+                if ( add.flags & XEN_PCI_DEV_VIRTFN )
+                {
+                    pdev_info.is_virtfn = 1;
+                    pdev_info.physfn.bus = add.physfn.bus;
+                    pdev_info.physfn.devfn = add.physfn.devfn;
+                }
+                else
+                    pdev_info.is_virtfn = 0;
+
+                ret = pci_add_device(add.seg, add.bus, add.devfn,
+                                &pdev_info, node);
+
+                break;
+            }
+#endif
+        default:
+            gdprintk(XENLOG_DEBUG, "PHYSDEVOP cmd=%d: not implemented\n", cmd);
+            ret = -ENOSYS;
+    }
+
+    return ret;
 }
 
 /*
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu Jul 23 15:41:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jul 2020 15:41:02 +0000
From: Rahul Singh <rahul.singh@arm.com>
To: xen-devel@lists.xenproject.org
Subject: [RFC PATCH v1 0/4] PCI devices passthrough on Arm
Date: Thu, 23 Jul 2020 16:40:20 +0100
Message-Id: <cover.1595511416.git.rahul.singh@arm.com>
X-Mailer: git-send-email 2.17.1
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Cc: rahul.singh@arm.com, Julien Grall <julien@xen.org>, Wei Liu <wl@xen.org>,
 Paul Durrant <paul@xen.org>, Ian Jackson <ian.jackson@eu.citrix.com>,
 Bertrand.Marquis@arm.com, Stefano Stabellini <sstabellini@kernel.org>,
 Jan Beulich <jbeulich@suse.com>, Anthony PERARD <anthony.perard@citrix.com>,
 nd@arm.com, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Following up on the discussion of the PCI device passthrough support on Arm
design proposal, please feel free to give your feedback.

We are submitting the code that we developed in order to get early feedback.
PCI passthrough support on ARM is not fully implemented in this patch series;
for that reason we are not enabling the HAS_PCI and HAS_VPCI flags for ARM.

We will keep working on the design document that we submitted for feedback on
the mailing list and will submit its next version to address all the comments.

This patch series is based on v1 of the design document that we submitted for
review. Any comments on the design will be addressed in a later version of
the design document and will subsequently be implemented in the code in the
next patch series.

PCI passthrough support is divided into different patches:

Discovering PCI Host Bridge in XEN:
- Discover the PCI host bridges in XEN and map the PCI ECAM configuration
space into XEN memory.

Discovering PCI devices:
- In order to support PCI passthrough, XEN should be aware of the PCI
devices.
- The hardware domain is in charge of the PCI enumeration; it discovers the
PCI devices and then communicates them to XEN via hypercall so that XEN can
add the PCI devices.

Enable the existing x86 virtual PCI support for ARM:
- Add a VPCI trap handler for config space access for each PCI device added.
- Register a trap handler in XEN for each host bridge's PCI ECAM config
space access.

Emulated PCI device tree node in libxl:
- Create a virtual PCI device tree node in libxl to enable the guest OS to
discover the virtual PCI bus during guest boot.

This patch series does not implement the following features; they will be
implemented in the next version of the patch series:
- MSI support for interrupts.
- ACPI support for PCI host bridge discovery within XEN on ARM.
- SMMU modifications to support PCI devices.
- Using the already defined "pci=[]" config option in place of the new
"vpci=ecam" config option to create the VPCI bus.
- Mapping the assigned device's PCI BAR values and interrupts to the guest
when the device is assigned by xl during domain creation. Currently we are
using the "iomem=[]" config option to map these to the guest.

Rahul Singh (4):
  arm/pci: PCI setup and PCI host bridge discovery within XEN on ARM.
  xen/arm: Discovering PCI devices and add the PCI devices in XEN.
  xen/arm: Enable the existing x86 virtual PCI support for ARM.
  arm/libxl: Emulated PCI device tree node in libxl

 tools/libxl/libxl_arm.c             | 200 ++++++++++++++++++++++++++++
 tools/libxl/libxl_types.idl         |   6 +
 tools/xl/xl_parse.c                 |   7 +
 xen/arch/arm/Kconfig                |   7 +
 xen/arch/arm/Makefile               |   2 +
 xen/arch/arm/domain.c               |   4 +
 xen/arch/arm/pci/Makefile           |   4 +
 xen/arch/arm/pci/pci-access.c       | 101 ++++++++++++++
 xen/arch/arm/pci/pci-host-common.c  | 198 +++++++++++++++++++++++++++
 xen/arch/arm/pci/pci-host-generic.c | 131 ++++++++++++++++++
 xen/arch/arm/pci/pci.c              | 112 ++++++++++++++++
 xen/arch/arm/physdev.c              |  42 +++++-
 xen/arch/arm/setup.c                |   2 +
 xen/arch/arm/vpci.c                 | 102 ++++++++++++++
 xen/arch/arm/vpci.h                 |  37 +++++
 xen/drivers/passthrough/pci.c       |   7 +
 xen/include/asm-arm/device.h        |   7 +-
 xen/include/asm-arm/domain.h        |   5 +
 xen/include/asm-arm/pci.h           |  97 +++++++++++++-
 xen/include/public/arch-arm.h       |  32 +++++
 20 files changed, 1094 insertions(+), 9 deletions(-)
 create mode 100644 xen/arch/arm/pci/Makefile
 create mode 100644 xen/arch/arm/pci/pci-access.c
 create mode 100644 xen/arch/arm/pci/pci-host-common.c
 create mode 100644 xen/arch/arm/pci/pci-host-generic.c
 create mode 100644 xen/arch/arm/pci/pci.c
 create mode 100644 xen/arch/arm/vpci.c
 create mode 100644 xen/arch/arm/vpci.h

-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu Jul 23 15:42:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jul 2020 15:42:46 +0000
From: "Srinivas Bangalore" <srini@yujala.com>
To: "'Christopher Clark'" <christopher.w.clark@gmail.com>
References: <002801d66051$90fe2300$b2fa6900$@yujala.com>
 <CACMJ4GYQUXNGrqq_6wFLX4actMgTat-i5ThhS21Bjy3HO52bUQ@mail.gmail.com>
In-Reply-To: <CACMJ4GYQUXNGrqq_6wFLX4actMgTat-i5ThhS21Bjy3HO52bUQ@mail.gmail.com>
Subject: RE: Porting Xen to Jetson Nano
Date: Thu, 23 Jul 2020 08:42:39 -0700
Message-ID: <001c01d66107$e60263f0$b2072bd0$@yujala.com>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="utf-8"
Content-Transfer-Encoding: quoted-printable
X-Mailer: Microsoft Outlook 16.0
Thread-Index: AQIl7jaf5+ZLFToUYJ/P44Ycp83hwAHyJKyOqGafTTA=
Content-Language: en-us
Cc: 'xen-devel' <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi Christopher,

Thanks for those encouraging words!
I did in fact try to apply the Tegra patches to 4.13 but there were
other issues and Xen wouldn't boot up. I felt that working with 4.8
would minimize the unknowns. I am also using the Linux image from the
Linux_for_Tegra build.

Let me work with your suggestions and see if I can make further =
progress.

Thanks,
Srini


On Wed, Jul 22, 2020 at 10:59 AM Srinivas Bangalore <srini@yujala.com> wrote:
> Dear Xen experts,
>
> Would greatly appreciate some hints on how to move forward with this
> one…

Hi Srini,

I don't have any strong recommendations for you, but I do want to say
that I'm very happy to see you taking this project on and I am hoping
for your success. I have a newly-arrived Jetson Nano sitting on my desk
here, purchased with the intention of getting Xen up and running on it,
that I just haven't got to work on yet. I'm also familiar with Chris
Patterson, Kyle Temkin and Ian Campbell's previous Tegra Jetson patches,
and it would be great to see some further progress made from those.

In my recent experience with the Raspberry Pi 4, one basic observation
with ARM kernel bringup is that if your device tree isn't good, your
dom0 kernel can be missing the configuration it needs to use the serial
port correctly, and you don't get any diagnostics from it after Xen
attempts to launch it, so I would just patch the right serial port
config directly into your Linux kernel (e.g. hardcode specific things
onto the kernel command line) so you're not messing about with that any
more.
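As a concrete (hypothetical) example of such hardcoding, a dom0 kernel
command line might pin the console explicitly. The exact device name is an
assumption and board-specific: on a Jetson Nano the UART commonly appears as
ttyS0 or ttyTHS1, and dom0 under Xen normally also has the hypervisor console
hvc0.

```
console=hvc0 console=ttyS0,115200n8 earlycon
```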

The other thing I would recommend is patching some printks into the
earliest part of the Xen parts of the Dom0 Linux kernel start code.
Others who are more familiar with Xen on ARM may have better
recommendations, but linux/arch/arm/xen/enlighten.c has a function
xen_guest_init that looks like a good place to stuff some extra printks
for some early proof-of-entry from your kernel; that way you'll have
some indication whether execution has actually commenced in there.

I don't think you're going to get a great deal of enthusiasm on this
list for Xen 4.8.5, unfortunately; most people around here work off
Xen's staging branch, and I'd be surprised to hear of anyone having
tried a 5.7 Linux kernel with Xen 4.8.5. I can understand why you might
start there from the existing patch series, though.

Best of luck,

Christopher



From xen-devel-bounces@lists.xenproject.org Thu Jul 23 15:43:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jul 2020 15:43:17 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-152131-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 152131: all pass - PUSHED
X-Osstest-Versions-This: ovmf=3d2f7953b2ba9d27b1905c864c369fe624c74a3f
X-Osstest-Versions-That: ovmf=9132a31b9c8381197eee75eb66c809182b264110
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 23 Jul 2020 15:43:09 +0000

flight 152131 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/152131/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 3d2f7953b2ba9d27b1905c864c369fe624c74a3f
baseline version:
 ovmf                 9132a31b9c8381197eee75eb66c809182b264110

Last test of basis   152088  2020-07-21 23:41:49 Z    1 days
Testing same since   152131  2020-07-23 01:10:42 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Zhiguang Liu <zhiguang.liu@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   9132a31b9c..3d2f7953b2  3d2f7953b2ba9d27b1905c864c369fe624c74a3f -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Thu Jul 23 15:45:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jul 2020 15:45:46 +0000
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH v3 0/8] x86: compat header generation and checking adjustments
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Message-ID: <adb0fe93-c251-b84a-a357-936029af0e9c@suse.com>
Date: Thu, 23 Jul 2020 17:45:24 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, George Dunlap <George.Dunlap@eu.citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

As was pointed out by 0e2e54966af5 ("mm: fix public declaration of
struct xen_mem_acquire_resource"), we don't currently handle structs
with uint64_aligned_t fields correctly. Patch 2 demonstrates that
there was also an issue with XEN_GUEST_HANDLE_64().

1: x86: fix compat header generation
2: x86/mce: add compat struct checking for XEN_MC_inject_v2
3: x86/mce: bring hypercall subop compat checking in sync again
4: x86/dmop: add compat struct checking for XEN_DMOP_map_mem_type_to_ioreq_server
5: evtchn: add compat struct checking for newer sub-ops
6: x86: generalize padding field handling
7: flask: drop dead compat translation code
8: x86: only generate compat headers actually needed

v3: Build fix for old gcc in patch 1. New patch 5.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jul 23 15:48:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jul 2020 15:48:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jydS3-00062V-Qe; Thu, 23 Jul 2020 15:48:07 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=9kJt=BC=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jydS2-00062Q-68
 for xen-devel@lists.xenproject.org; Thu, 23 Jul 2020 15:48:06 +0000
X-Inumbo-ID: e48cf1ce-ccfb-11ea-a2c7-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e48cf1ce-ccfb-11ea-a2c7-12813bfff9fa;
 Thu, 23 Jul 2020 15:48:03 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id EA62EAC82;
 Thu, 23 Jul 2020 15:48:10 +0000 (UTC)
Subject: [PATCH v3 1/8] x86: fix compat header generation
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <adb0fe93-c251-b84a-a357-936029af0e9c@suse.com>
Message-ID: <c2cb193c-f162-485e-1997-fb74e40c0cc5@suse.com>
Date: Thu, 23 Jul 2020 17:48:04 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <adb0fe93-c251-b84a-a357-936029af0e9c@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, George Dunlap <George.Dunlap@eu.citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

As was pointed out by 0e2e54966af5 ("mm: fix public declaration of
struct xen_mem_acquire_resource"), we don't currently handle structs
with uint64_aligned_t fields correctly. #pragma pack(4) suppresses
the necessary alignment even if the type had properly survived the
process of generating the headers (which it also didn't). Overall,
with the above mentioned change applied, there's only a latent issue
here afaict, i.e. none of our other interface structs is currently
affected.

As a result it is clear that using #pragma pack(4) is not an option.
Drop all of its uses from compat header generation. Make sure
{,u}int64_aligned_t actually survives, such that explicitly aligned
fields remain aligned. Arrange for {,u}int64_t to be transformed
into a type that's 64 bits wide but only 4-byte aligned, utilizing
the fact that in typedef-s the "aligned" attribute can also reduce
alignment. Additionally, for the cases where native structures get
re-used, enforce suitable alignment via such typedef-s.

This use of typedef-s necessitates changes to CHECK_*() macro
generation: previously get-fields.sh relied on finding struct/union
keywords wherever other compound types were used. Now the typedef-s
(which guarantee suitable alignment) need to be used, and hence the
script has to recognize those cases, too. (Unfortunately a few
special cases need dealing with, but this is really not much
different from e.g. the pre-existing compat_domain_handle_t special
case.)

This reliance on typedef-s is certainly somewhat fragile going
forward: in similar future cases it is imperative to use typedef-s
as well, or else the CHECK_*() macros won't check what they're
supposed to check. I don't currently see any means to avoid this
fragility, though.

There's one change to generated code according to my observations: In
arch_compat_vcpu_op() the runstate area "area" variable would previously
have been put in a just 4-byte aligned stack slot (despite being 8 bytes
in size), whereas now it gets put in an 8-byte aligned location.

These changes also introduce some curious inconsistency in struct
xen_mc; I intend to clean this up later on, as doing it right here
would require adjusting otherwise unrelated code as well.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
---
v3: Fix build with older gcc (duplicate typedef-s for
    {,u}int64_compat_t).
v2: Different approach, addressing the latent alignment issues in v1.

--- a/xen/include/Makefile
+++ b/xen/include/Makefile
@@ -34,15 +34,6 @@ headers-$(CONFIG_XSM_FLASK) += compat/xs
 cppflags-y                := -include public/xen-compat.h -DXEN_GENERATING_COMPAT_HEADERS
 cppflags-$(CONFIG_X86)    += -m32
 
-# 8-byte types are 4-byte aligned on x86_32 ...
-ifeq ($(CONFIG_CC_IS_CLANG),y)
-prefix-$(CONFIG_X86)      := \#pragma pack(push, 4)
-suffix-$(CONFIG_X86)      := \#pragma pack(pop)
-else
-prefix-$(CONFIG_X86)      := \#pragma pack(4)
-suffix-$(CONFIG_X86)      := \#pragma pack()
-endif
-
 endif
 
 public-$(CONFIG_X86) := $(wildcard public/arch-x86/*.h public/arch-x86/*/*.h)
@@ -57,10 +48,8 @@ compat/%.h: compat/%.i Makefile $(BASEDI
 	echo "#define $$id" >>$@.new; \
 	echo "#include <xen/compat.h>" >>$@.new; \
 	$(if $(filter-out compat/arch-%.h,$@),echo "#include <$(patsubst compat/%,public/%,$@)>" >>$@.new;) \
-	$(if $(prefix-y),echo "$(prefix-y)" >>$@.new;) \
 	grep -v '^# [0-9]' $< | \
 	$(PYTHON) $(BASEDIR)/tools/compat-build-header.py | uniq >>$@.new; \
-	$(if $(suffix-y),echo "$(suffix-y)" >>$@.new;) \
 	echo "#endif /* $$id */" >>$@.new
 	mv -f $@.new $@
 
--- a/xen/include/public/arch-x86/pmu.h
+++ b/xen/include/public/arch-x86/pmu.h
@@ -105,7 +105,7 @@ struct xen_pmu_arch {
          * Processor's registers at the time of interrupt.
          * WO for hypervisor, RO for guests.
          */
-        struct xen_pmu_regs regs;
+        xen_pmu_regs_t regs;
         /* Padding for adding new registers to xen_pmu_regs in the future */
 #define XENPMU_REGS_PAD_SZ  64
         uint8_t pad[XENPMU_REGS_PAD_SZ];
@@ -132,8 +132,8 @@ struct xen_pmu_arch {
      * hypervisor into hardware during XENPMU_flush
      */
     union {
-        struct xen_pmu_amd_ctxt amd;
-        struct xen_pmu_intel_ctxt intel;
+        xen_pmu_amd_ctxt_t amd;
+        xen_pmu_intel_ctxt_t intel;
 
         /*
          * Padding for contexts (fixed parts only, does not include MSR banks
--- a/xen/include/public/arch-x86/xen-mca.h
+++ b/xen/include/public/arch-x86/xen-mca.h
@@ -112,7 +112,7 @@ struct mcinfo_common {
     uint16_t type;      /* structure type */
     uint16_t size;      /* size of this struct in bytes */
 };
-
+typedef struct mcinfo_common xen_mcinfo_common_t;
 
 #define MC_FLAG_CORRECTABLE     (1 << 0)
 #define MC_FLAG_UNCORRECTABLE   (1 << 1)
@@ -123,7 +123,7 @@ struct mcinfo_common {
 #define MC_FLAG_MCE		(1 << 6)
 /* contains global x86 mc information */
 struct mcinfo_global {
-    struct mcinfo_common common;
+    xen_mcinfo_common_t common;
 
     /* running domain at the time in error (most likely the impacted one) */
     uint16_t mc_domid;
@@ -138,7 +138,7 @@ struct mcinfo_global {
 
 /* contains bank local x86 mc information */
 struct mcinfo_bank {
-    struct mcinfo_common common;
+    xen_mcinfo_common_t common;
 
     uint16_t mc_bank; /* bank nr */
     uint16_t mc_domid; /* Usecase 5: domain referenced by mc_addr on dom0
@@ -156,11 +156,12 @@ struct mcinfo_msr {
     uint64_t reg;   /* MSR */
     uint64_t value; /* MSR value */
 };
+typedef struct mcinfo_msr xen_mcinfo_msr_t;
 
 /* contains mc information from other
  * or additional mc MSRs */
 struct mcinfo_extended {
-    struct mcinfo_common common;
+    xen_mcinfo_common_t common;
 
     /* You can fill up to five registers.
      * If you need more, then use this structure
@@ -172,7 +173,7 @@ struct mcinfo_extended {
      * and E(R)FLAGS, E(R)IP, E(R)MISC, up to 11/19 of them might be
      * useful at present. So expand this array to 32 to leave room.
      */
-    struct mcinfo_msr mc_msr[32];
+    xen_mcinfo_msr_t mc_msr[32];
 };
 
 /* Recovery Action flags. Giving recovery result information to DOM0 */
@@ -208,6 +209,7 @@ struct page_offline_action
     uint64_t mfn;
     uint64_t status;
 };
+typedef struct page_offline_action xen_page_offline_action_t;
 
 struct cpu_offline_action
 {
@@ -216,17 +218,18 @@ struct cpu_offline_action
     uint16_t mc_coreid;
     uint16_t mc_core_threadid;
 };
+typedef struct cpu_offline_action xen_cpu_offline_action_t;
 
 #define MAX_UNION_SIZE 16
 struct mcinfo_recovery
 {
-    struct mcinfo_common common;
+    xen_mcinfo_common_t common;
     uint16_t mc_bank; /* bank nr */
     uint8_t action_flags;
     uint8_t action_types;
     union {
-        struct page_offline_action page_retire;
-        struct cpu_offline_action cpu_offline;
+        xen_page_offline_action_t page_retire;
+        xen_cpu_offline_action_t cpu_offline;
         uint8_t pad[MAX_UNION_SIZE];
     } action_info;
 };
@@ -279,7 +282,7 @@ struct mcinfo_logical_cpu {
     uint32_t mc_cache_size;
     uint32_t mc_cache_alignment;
     int32_t mc_nmsrvals;
-    struct mcinfo_msr mc_msrvalues[__MC_MSR_ARRAYSIZE];
+    xen_mcinfo_msr_t mc_msrvalues[__MC_MSR_ARRAYSIZE];
 };
 typedef struct mcinfo_logical_cpu xen_mc_logical_cpu_t;
 DEFINE_XEN_GUEST_HANDLE(xen_mc_logical_cpu_t);
@@ -399,8 +402,9 @@ struct xen_mc_msrinject {
     domid_t  mcinj_domid;           /* valid only if MC_MSRINJ_F_GPADDR is
                                        present in mcinj_flags */
     uint16_t _pad0;
-    struct mcinfo_msr mcinj_msr[MC_MSRINJ_MAXMSRS];
+    xen_mcinfo_msr_t mcinj_msr[MC_MSRINJ_MAXMSRS];
 };
+typedef struct xen_mc_msrinject xen_mc_msrinject_t;
 
 /* Flags for mcinj_flags above; bits 16-31 are reserved */
 #define MC_MSRINJ_F_INTERPOSE   0x1
@@ -410,6 +414,7 @@ struct xen_mc_msrinject {
 struct xen_mc_mceinject {
     unsigned int mceinj_cpunr;      /* target processor id */
 };
+typedef struct xen_mc_mceinject xen_mc_mceinject_t;
 
 #if defined(__XEN__) || defined(__XEN_TOOLS__)
 #define XEN_MC_inject_v2        6
@@ -422,7 +427,7 @@ struct xen_mc_mceinject {
 
 struct xen_mc_inject_v2 {
     uint32_t flags;
-    struct xenctl_bitmap cpumap;
+    xenctl_bitmap_t cpumap;
 };
 #endif
 
@@ -431,10 +436,10 @@ struct xen_mc {
     uint32_t interface_version; /* XEN_MCA_INTERFACE_VERSION */
     union {
         struct xen_mc_fetch        mc_fetch;
-        struct xen_mc_notifydomain mc_notifydomain;
+        xen_mc_notifydomain_t      mc_notifydomain;
         struct xen_mc_physcpuinfo  mc_physcpuinfo;
-        struct xen_mc_msrinject    mc_msrinject;
-        struct xen_mc_mceinject    mc_mceinject;
+        xen_mc_msrinject_t         mc_msrinject;
+        xen_mc_mceinject_t         mc_mceinject;
 #if defined(__XEN__) || defined(__XEN_TOOLS__)
         struct xen_mc_inject_v2    mc_inject_v2;
 #endif
--- a/xen/include/public/argo.h
+++ b/xen/include/public/argo.h
@@ -67,8 +67,8 @@ typedef struct xen_argo_addr
 
 typedef struct xen_argo_send_addr
 {
-    struct xen_argo_addr src;
-    struct xen_argo_addr dst;
+    xen_argo_addr_t src;
+    xen_argo_addr_t dst;
 } xen_argo_send_addr_t;
 
 typedef struct xen_argo_ring
@@ -121,7 +121,7 @@ typedef struct xen_argo_unregister_ring
 
 typedef struct xen_argo_ring_data_ent
 {
-    struct xen_argo_addr ring;
+    xen_argo_addr_t ring;
     uint16_t flags;
     uint16_t pad;
     uint32_t space_required;
@@ -132,13 +132,13 @@ typedef struct xen_argo_ring_data
 {
     uint32_t nent;
     uint32_t pad;
-    struct xen_argo_ring_data_ent data[XEN_FLEX_ARRAY_DIM];
+    xen_argo_ring_data_ent_t data[XEN_FLEX_ARRAY_DIM];
 } xen_argo_ring_data_t;
 
 struct xen_argo_ring_message_header
 {
     uint32_t len;
-    struct xen_argo_addr source;
+    xen_argo_addr_t source;
     uint32_t message_type;
     uint8_t data[XEN_FLEX_ARRAY_DIM];
 };
--- a/xen/include/public/event_channel.h
+++ b/xen/include/public/event_channel.h
@@ -321,16 +321,16 @@ typedef struct evtchn_set_priority evtch
 struct evtchn_op {
     uint32_t cmd; /* enum event_channel_op */
     union {
-        struct evtchn_alloc_unbound    alloc_unbound;
-        struct evtchn_bind_interdomain bind_interdomain;
-        struct evtchn_bind_virq        bind_virq;
-        struct evtchn_bind_pirq        bind_pirq;
-        struct evtchn_bind_ipi         bind_ipi;
-        struct evtchn_close            close;
-        struct evtchn_send             send;
-        struct evtchn_status           status;
-        struct evtchn_bind_vcpu        bind_vcpu;
-        struct evtchn_unmask           unmask;
+        evtchn_alloc_unbound_t    alloc_unbound;
+        evtchn_bind_interdomain_t bind_interdomain;
+        evtchn_bind_virq_t        bind_virq;
+        evtchn_bind_pirq_t        bind_pirq;
+        evtchn_bind_ipi_t         bind_ipi;
+        evtchn_close_t            close;
+        evtchn_send_t             send;
+        evtchn_status_t           status;
+        evtchn_bind_vcpu_t        bind_vcpu;
+        evtchn_unmask_t           unmask;
     } u;
 };
 typedef struct evtchn_op evtchn_op_t;
--- a/xen/include/public/hvm/dm_op.h
+++ b/xen/include/public/hvm/dm_op.h
@@ -74,6 +74,7 @@ struct xen_dm_op_create_ioreq_server {
     /* OUT - server id */
     ioservid_t id;
 };
+typedef struct xen_dm_op_create_ioreq_server xen_dm_op_create_ioreq_server_t;
 
 /*
  * XEN_DMOP_get_ioreq_server_info: Get all the information necessary to
@@ -113,6 +114,7 @@ struct xen_dm_op_get_ioreq_server_info {
     /* OUT - buffered ioreq gfn (see block comment above)*/
     uint64_aligned_t bufioreq_gfn;
 };
+typedef struct xen_dm_op_get_ioreq_server_info xen_dm_op_get_ioreq_server_info_t;
 
 /*
  * XEN_DMOP_map_io_range_to_ioreq_server: Register an I/O range for
@@ -148,6 +150,7 @@ struct xen_dm_op_ioreq_server_range {
     /* IN - inclusive start and end of range */
     uint64_aligned_t start, end;
 };
+typedef struct xen_dm_op_ioreq_server_range xen_dm_op_ioreq_server_range_t;
 
 #define XEN_DMOP_PCI_SBDF(s,b,d,f) \
 	((((s) & 0xffff) << 16) |  \
@@ -173,6 +176,7 @@ struct xen_dm_op_set_ioreq_server_state
     uint8_t enabled;
     uint8_t pad;
 };
+typedef struct xen_dm_op_set_ioreq_server_state xen_dm_op_set_ioreq_server_state_t;
 
 /*
  * XEN_DMOP_destroy_ioreq_server: Destroy the IOREQ Server <id>.
@@ -186,6 +190,7 @@ struct xen_dm_op_destroy_ioreq_server {
     ioservid_t id;
     uint16_t pad;
 };
+typedef struct xen_dm_op_destroy_ioreq_server xen_dm_op_destroy_ioreq_server_t;
 
 /*
  * XEN_DMOP_track_dirty_vram: Track modifications to the specified pfn
@@ -203,6 +208,7 @@ struct xen_dm_op_track_dirty_vram {
     /* IN - first pfn to track */
     uint64_aligned_t first_pfn;
 };
+typedef struct xen_dm_op_track_dirty_vram xen_dm_op_track_dirty_vram_t;
 
 /*
  * XEN_DMOP_set_pci_intx_level: Set the logical level of one of a domain's
@@ -217,6 +223,7 @@ struct xen_dm_op_set_pci_intx_level {
     /* IN - Level: 0 -> deasserted, 1 -> asserted */
     uint8_t  level;
 };
+typedef struct xen_dm_op_set_pci_intx_level xen_dm_op_set_pci_intx_level_t;
 
 /*
  * XEN_DMOP_set_isa_irq_level: Set the logical level of a one of a domain's
@@ -230,6 +237,7 @@ struct xen_dm_op_set_isa_irq_level {
     /* IN - Level: 0 -> deasserted, 1 -> asserted */
     uint8_t  level;
 };
+typedef struct xen_dm_op_set_isa_irq_level xen_dm_op_set_isa_irq_level_t;
 
 /*
  * XEN_DMOP_set_pci_link_route: Map a PCI INTx line to an IRQ line.
@@ -242,6 +250,7 @@ struct xen_dm_op_set_pci_link_route {
     /* ISA IRQ (1-15) or 0 -> disable link */
     uint8_t  isa_irq;
 };
+typedef struct xen_dm_op_set_pci_link_route xen_dm_op_set_pci_link_route_t;
 
 /*
  * XEN_DMOP_modified_memory: Notify that a set of pages were modified by
@@ -265,6 +274,7 @@ struct xen_dm_op_modified_memory {
     /* IN/OUT - Must be set to 0 */
     uint32_t opaque;
 };
+typedef struct xen_dm_op_modified_memory xen_dm_op_modified_memory_t;
 
 struct xen_dm_op_modified_memory_extent {
     /* IN - number of contiguous pages modified */
@@ -294,6 +304,7 @@ struct xen_dm_op_set_mem_type {
     /* IN - first pfn in region */
     uint64_aligned_t first_pfn;
 };
+typedef struct xen_dm_op_set_mem_type xen_dm_op_set_mem_type_t;
 
 /*
  * XEN_DMOP_inject_event: Inject an event into a VCPU, which will
@@ -327,6 +338,7 @@ struct xen_dm_op_inject_event {
     /* IN - type-specific extra data (%cr2 for #PF, pending_dbg for #DB) */
     uint64_aligned_t cr2;
 };
+typedef struct xen_dm_op_inject_event xen_dm_op_inject_event_t;
 
 /*
  * XEN_DMOP_inject_msi: Inject an MSI for an emulated device.
@@ -340,6 +352,7 @@ struct xen_dm_op_inject_msi {
     /* IN - MSI address (0xfeexxxxx) */
     uint64_aligned_t addr;
 };
+typedef struct xen_dm_op_inject_msi xen_dm_op_inject_msi_t;
 
 /*
  * XEN_DMOP_map_mem_type_to_ioreq_server : map or unmap the IOREQ Server <id>
@@ -366,6 +379,7 @@ struct xen_dm_op_map_mem_type_to_ioreq_s
     uint64_t opaque;    /* IN/OUT - only used for hypercall continuation,
                            has to be set to zero by the caller */
 };
+typedef struct xen_dm_op_map_mem_type_to_ioreq_server xen_dm_op_map_mem_type_to_ioreq_server_t;
 
 /*
  * XEN_DMOP_remote_shutdown : Declare a shutdown for another domain
@@ -377,6 +391,7 @@ struct xen_dm_op_remote_shutdown {
     uint32_t reason;       /* SHUTDOWN_* => enum sched_shutdown_reason */
                            /* (Other reason values are not blocked) */
 };
+typedef struct xen_dm_op_remote_shutdown xen_dm_op_remote_shutdown_t;
 
 /*
  * XEN_DMOP_relocate_memory : Relocate GFNs for the specified guest.
@@ -395,6 +410,7 @@ struct xen_dm_op_relocate_memory {
     /* Starting GFN where GFNs should be relocated. */
     uint64_aligned_t dst_gfn;
 };
+typedef struct xen_dm_op_relocate_memory xen_dm_op_relocate_memory_t;
 
 /*
  * XEN_DMOP_pin_memory_cacheattr : Pin caching type of RAM space.
@@ -416,30 +432,30 @@ struct xen_dm_op_pin_memory_cacheattr {
     uint32_t type;          /* XEN_DMOP_MEM_CACHEATTR_* */
     uint32_t pad;
 };
+typedef struct xen_dm_op_pin_memory_cacheattr xen_dm_op_pin_memory_cacheattr_t;
 
 struct xen_dm_op {
     uint32_t op;
     uint32_t pad;
     union {
-        struct xen_dm_op_create_ioreq_server create_ioreq_server;
-        struct xen_dm_op_get_ioreq_server_info get_ioreq_server_info;
-        struct xen_dm_op_ioreq_server_range map_io_range_to_ioreq_server;
-        struct xen_dm_op_ioreq_server_range unmap_io_range_from_ioreq_server;
-        struct xen_dm_op_set_ioreq_server_state set_ioreq_server_state;
-        struct xen_dm_op_destroy_ioreq_server destroy_ioreq_server;
-        struct xen_dm_op_track_dirty_vram track_dirty_vram;
-        struct xen_dm_op_set_pci_intx_level set_pci_intx_level;
-        struct xen_dm_op_set_isa_irq_level set_isa_irq_level;
-        struct xen_dm_op_set_pci_link_route set_pci_link_route;
-        struct xen_dm_op_modified_memory modified_memory;
-        struct xen_dm_op_set_mem_type set_mem_type;
-        struct xen_dm_op_inject_event inject_event;
-        struct xen_dm_op_inject_msi inject_msi;
-        struct xen_dm_op_map_mem_type_to_ioreq_server
-                map_mem_type_to_ioreq_server;
-        struct xen_dm_op_remote_shutdown remote_shutdown;
-        struct xen_dm_op_relocate_memory relocate_memory;
-        struct xen_dm_op_pin_memory_cacheattr pin_memory_cacheattr;
+        xen_dm_op_create_ioreq_server_t create_ioreq_server;
+        xen_dm_op_get_ioreq_server_info_t get_ioreq_server_info;
+        xen_dm_op_ioreq_server_range_t map_io_range_to_ioreq_server;
+        xen_dm_op_ioreq_server_range_t unmap_io_range_from_ioreq_server;
+        xen_dm_op_set_ioreq_server_state_t set_ioreq_server_state;
+        xen_dm_op_destroy_ioreq_server_t destroy_ioreq_server;
+        xen_dm_op_track_dirty_vram_t track_dirty_vram;
+        xen_dm_op_set_pci_intx_level_t set_pci_intx_level;
+        xen_dm_op_set_isa_irq_level_t set_isa_irq_level;
+        xen_dm_op_set_pci_link_route_t set_pci_link_route;
+        xen_dm_op_modified_memory_t modified_memory;
+        xen_dm_op_set_mem_type_t set_mem_type;
+        xen_dm_op_inject_event_t inject_event;
+        xen_dm_op_inject_msi_t inject_msi;
+        xen_dm_op_map_mem_type_to_ioreq_server_t map_mem_type_to_ioreq_server;
+        xen_dm_op_remote_shutdown_t remote_shutdown;
+        xen_dm_op_relocate_memory_t relocate_memory;
+        xen_dm_op_pin_memory_cacheattr_t pin_memory_cacheattr;
     } u;
 };
 
--- a/xen/include/public/hvm/hvm_vcpu.h
+++ b/xen/include/public/hvm/hvm_vcpu.h
@@ -69,6 +69,7 @@ struct vcpu_hvm_x86_32 {
 
     uint16_t pad2[3];
 };
+typedef struct vcpu_hvm_x86_32 xen_vcpu_hvm_x86_32_t;
 
 /*
  * The layout of the _ar fields of the segment registers is the
@@ -114,6 +115,7 @@ struct vcpu_hvm_x86_64 {
      * the 32-bit structure should be used instead.
      */
 };
+typedef struct vcpu_hvm_x86_64 xen_vcpu_hvm_x86_64_t;
 
 struct vcpu_hvm_context {
 #define VCPU_HVM_MODE_32B 0  /* 32bit fields of the structure will be used. */
@@ -124,8 +126,8 @@ struct vcpu_hvm_context {
 
     /* CPU registers. */
     union {
-        struct vcpu_hvm_x86_32 x86_32;
-        struct vcpu_hvm_x86_64 x86_64;
+        xen_vcpu_hvm_x86_32_t x86_32;
+        xen_vcpu_hvm_x86_64_t x86_64;
     } cpu_regs;
 };
 typedef struct vcpu_hvm_context vcpu_hvm_context_t;
--- a/xen/include/public/hypfs.h
+++ b/xen/include/public/hypfs.h
@@ -53,9 +53,10 @@ struct xen_hypfs_direntry {
     uint32_t content_len;      /* Current length of data. */
     uint32_t max_write_len;    /* Max. length for writes (0 if read-only). */
 };
+typedef struct xen_hypfs_direntry xen_hypfs_direntry_t;
 
 struct xen_hypfs_dirlistentry {
-    struct xen_hypfs_direntry e;
+    xen_hypfs_direntry_t e;
     /* Offset in bytes to next entry (0 == this is the last entry). */
     uint16_t off_next;
     /* Zero terminated entry name, possibly with some padding for alignment. */
--- a/xen/include/public/memory.h
+++ b/xen/include/public/memory.h
@@ -604,7 +604,7 @@ struct xen_reserved_device_memory_map {
     XEN_GUEST_HANDLE(xen_reserved_device_memory_t) buffer;
     /* IN */
     union {
-        struct physdev_pci_device pci;
+        physdev_pci_device_t pci;
     } dev;
 };
 typedef struct xen_reserved_device_memory_map xen_reserved_device_memory_map_t;
--- a/xen/include/public/physdev.h
+++ b/xen/include/public/physdev.h
@@ -229,11 +229,11 @@ DEFINE_XEN_GUEST_HANDLE(physdev_manage_p
 struct physdev_op {
     uint32_t cmd;
     union {
-        struct physdev_irq_status_query      irq_status_query;
-        struct physdev_set_iopl              set_iopl;
-        struct physdev_set_iobitmap          set_iobitmap;
-        struct physdev_apic                  apic_op;
-        struct physdev_irq                   irq_op;
+        physdev_irq_status_query_t irq_status_query;
+        physdev_set_iopl_t         set_iopl;
+        physdev_set_iobitmap_t     set_iobitmap;
+        physdev_apic_t             apic_op;
+        physdev_irq_t              irq_op;
     } u;
 };
 typedef struct physdev_op physdev_op_t;
@@ -334,7 +334,7 @@ struct physdev_dbgp_op {
     uint8_t op;
     uint8_t bus;
     union {
-        struct physdev_pci_device pci;
+        physdev_pci_device_t pci;
     } u;
 };
 typedef struct physdev_dbgp_op physdev_dbgp_op_t;
--- a/xen/include/public/platform.h
+++ b/xen/include/public/platform.h
@@ -42,6 +42,7 @@ struct xenpf_settime32 {
     uint32_t nsecs;
     uint64_t system_time;
 };
+typedef struct xenpf_settime32 xenpf_settime32_t;
 #define XENPF_settime64           62
 struct xenpf_settime64 {
     /* IN variables. */
@@ -50,6 +51,7 @@ struct xenpf_settime64 {
     uint32_t mbz;
     uint64_t system_time;
 };
+typedef struct xenpf_settime64 xenpf_settime64_t;
 #if __XEN_INTERFACE_VERSION__ < 0x00040600
 #define XENPF_settime XENPF_settime32
 #define xenpf_settime xenpf_settime32
@@ -529,6 +531,7 @@ struct xenpf_cpu_hotadd
 	uint32_t acpi_id;
 	uint32_t pxm;
 };
+typedef struct xenpf_cpu_hotadd xenpf_cpu_hotadd_t;
 
 #define XENPF_mem_hotadd    59
 struct xenpf_mem_hotadd
@@ -538,6 +541,7 @@ struct xenpf_mem_hotadd
     uint32_t pxm;
     uint32_t flags;
 };
+typedef struct xenpf_mem_hotadd xenpf_mem_hotadd_t;
 
 #define XENPF_core_parking  60
 
@@ -622,29 +626,29 @@ struct xen_platform_op {
     uint32_t cmd;
     uint32_t interface_version; /* XENPF_INTERFACE_VERSION */
     union {
-        struct xenpf_settime           settime;
-        struct xenpf_settime32         settime32;
-        struct xenpf_settime64         settime64;
-        struct xenpf_add_memtype       add_memtype;
-        struct xenpf_del_memtype       del_memtype;
-        struct xenpf_read_memtype      read_memtype;
-        struct xenpf_microcode_update  microcode;
-        struct xenpf_platform_quirk    platform_quirk;
-        struct xenpf_efi_runtime_call  efi_runtime_call;
-        struct xenpf_firmware_info     firmware_info;
-        struct xenpf_enter_acpi_sleep  enter_acpi_sleep;
-        struct xenpf_change_freq       change_freq;
-        struct xenpf_getidletime       getidletime;
-        struct xenpf_set_processor_pminfo set_pminfo;
-        struct xenpf_pcpuinfo          pcpu_info;
-        struct xenpf_pcpu_version      pcpu_version;
-        struct xenpf_cpu_ol            cpu_ol;
-        struct xenpf_cpu_hotadd        cpu_add;
-        struct xenpf_mem_hotadd        mem_add;
-        struct xenpf_core_parking      core_parking;
-        struct xenpf_resource_op       resource_op;
-        struct xenpf_symdata           symdata;
-        uint8_t                        pad[128];
+        xenpf_settime_t               settime;
+        xenpf_settime32_t             settime32;
+        xenpf_settime64_t             settime64;
+        xenpf_add_memtype_t           add_memtype;
+        xenpf_del_memtype_t           del_memtype;
+        xenpf_read_memtype_t          read_memtype;
+        xenpf_microcode_update_t      microcode;
+        xenpf_platform_quirk_t        platform_quirk;
+        xenpf_efi_runtime_call_t      efi_runtime_call;
+        xenpf_firmware_info_t         firmware_info;
+        xenpf_enter_acpi_sleep_t      enter_acpi_sleep;
+        xenpf_change_freq_t           change_freq;
+        xenpf_getidletime_t           getidletime;
+        xenpf_set_processor_pminfo_t  set_pminfo;
+        xenpf_pcpuinfo_t              pcpu_info;
+        xenpf_pcpu_version_t          pcpu_version;
+        xenpf_cpu_ol_t                cpu_ol;
+        xenpf_cpu_hotadd_t            cpu_add;
+        xenpf_mem_hotadd_t            mem_add;
+        xenpf_core_parking_t          core_parking;
+        xenpf_resource_op_t           resource_op;
+        xenpf_symdata_t               symdata;
+        uint8_t                       pad[128];
     } u;
 };
 typedef struct xen_platform_op xen_platform_op_t;
--- a/xen/include/public/pmu.h
+++ b/xen/include/public/pmu.h
@@ -127,7 +127,7 @@ struct xen_pmu_data {
     uint8_t pad[6];
 
     /* Architecture-specific information */
-    struct xen_pmu_arch pmu;
+    xen_pmu_arch_t pmu;
 };
 
 #endif /* __XEN_PUBLIC_PMU_H__ */
--- a/xen/include/public/xen.h
+++ b/xen/include/public/xen.h
@@ -726,7 +726,7 @@ struct vcpu_info {
 #endif /* XEN_HAVE_PV_UPCALL_MASK */
     xen_ulong_t evtchn_pending_sel;
     struct arch_vcpu_info arch;
-    struct vcpu_time_info time;
+    vcpu_time_info_t time;
 }; /* 64 bytes (x86) */
 #ifndef __XEN__
 typedef struct vcpu_info vcpu_info_t;
@@ -1031,6 +1031,7 @@ struct xenctl_bitmap {
     XEN_GUEST_HANDLE_64(uint8) bitmap;
     uint32_t nr_bits;
 };
+typedef struct xenctl_bitmap xenctl_bitmap_t;
 #endif
 
 #endif /* defined(__XEN__) || defined(__XEN_TOOLS__) */
--- a/xen/include/public/xsm/flask_op.h
+++ b/xen/include/public/xsm/flask_op.h
@@ -33,10 +33,12 @@ struct xen_flask_load {
     XEN_GUEST_HANDLE(char) buffer;
     uint32_t size;
 };
+typedef struct xen_flask_load xen_flask_load_t;
 
 struct xen_flask_setenforce {
     uint32_t enforcing;
 };
+typedef struct xen_flask_setenforce xen_flask_setenforce_t;
 
 struct xen_flask_sid_context {
     /* IN/OUT: sid to convert to/from string */
@@ -47,6 +49,7 @@ struct xen_flask_sid_context {
     uint32_t size;
     XEN_GUEST_HANDLE(char) context;
 };
+typedef struct xen_flask_sid_context xen_flask_sid_context_t;
 
 struct xen_flask_access {
     /* IN: access request */
@@ -60,6 +63,7 @@ struct xen_flask_access {
     uint32_t audit_deny;
     uint32_t seqno;
 };
+typedef struct xen_flask_access xen_flask_access_t;
 
 struct xen_flask_transition {
     /* IN: transition SIDs and class */
@@ -69,6 +73,7 @@ struct xen_flask_transition {
     /* OUT: new SID */
     uint32_t newsid;
 };
+typedef struct xen_flask_transition xen_flask_transition_t;
 
 #if __XEN_INTERFACE_VERSION__ < 0x00040800
 struct xen_flask_userlist {
@@ -106,11 +111,13 @@ struct xen_flask_boolean {
      */
     XEN_GUEST_HANDLE(char) name;
 };
+typedef struct xen_flask_boolean xen_flask_boolean_t;
 
 struct xen_flask_setavc_threshold {
     /* IN */
     uint32_t threshold;
 };
+typedef struct xen_flask_setavc_threshold xen_flask_setavc_threshold_t;
 
 struct xen_flask_hash_stats {
     /* OUT */
@@ -119,6 +126,7 @@ struct xen_flask_hash_stats {
     uint32_t buckets_total;
     uint32_t max_chain_len;
 };
+typedef struct xen_flask_hash_stats xen_flask_hash_stats_t;
 
 struct xen_flask_cache_stats {
     /* IN */
@@ -131,6 +139,7 @@ struct xen_flask_cache_stats {
     uint32_t reclaims;
     uint32_t frees;
 };
+typedef struct xen_flask_cache_stats xen_flask_cache_stats_t;
 
 struct xen_flask_ocontext {
     /* IN */
@@ -138,6 +147,7 @@ struct xen_flask_ocontext {
     uint32_t sid;
     uint64_t low, high;
 };
+typedef struct xen_flask_ocontext xen_flask_ocontext_t;
 
 struct xen_flask_peersid {
     /* IN */
@@ -145,12 +155,14 @@ struct xen_flask_peersid {
     /* OUT */
     uint32_t sid;
 };
+typedef struct xen_flask_peersid xen_flask_peersid_t;
 
 struct xen_flask_relabel {
     /* IN */
     uint32_t domid;
     uint32_t sid;
 };
+typedef struct xen_flask_relabel xen_flask_relabel_t;
 
 struct xen_flask_devicetree_label {
     /* IN */
@@ -158,6 +170,7 @@ struct xen_flask_devicetree_label {
     uint32_t length;
     XEN_GUEST_HANDLE(char) path;
 };
+typedef struct xen_flask_devicetree_label xen_flask_devicetree_label_t;
 
 struct xen_flask_op {
     uint32_t cmd;
@@ -188,26 +201,26 @@ struct xen_flask_op {
 #define FLASK_DEVICETREE_LABEL  25
     uint32_t interface_version; /* XEN_FLASK_INTERFACE_VERSION */
     union {
-        struct xen_flask_load load;
-        struct xen_flask_setenforce enforce;
+        xen_flask_load_t load;
+        xen_flask_setenforce_t enforce;
         /* FLASK_CONTEXT_TO_SID and FLASK_SID_TO_CONTEXT */
-        struct xen_flask_sid_context sid_context;
-        struct xen_flask_access access;
+        xen_flask_sid_context_t sid_context;
+        xen_flask_access_t access;
         /* FLASK_CREATE, FLASK_RELABEL, FLASK_MEMBER */
-        struct xen_flask_transition transition;
+        xen_flask_transition_t transition;
 #if __XEN_INTERFACE_VERSION__ < 0x00040800
         struct xen_flask_userlist userlist;
 #endif
         /* FLASK_GETBOOL, FLASK_SETBOOL */
-        struct xen_flask_boolean boolean;
-        struct xen_flask_setavc_threshold setavc_threshold;
-        struct xen_flask_hash_stats hash_stats;
-        struct xen_flask_cache_stats cache_stats;
+        xen_flask_boolean_t boolean;
+        xen_flask_setavc_threshold_t setavc_threshold;
+        xen_flask_hash_stats_t hash_stats;
+        xen_flask_cache_stats_t cache_stats;
         /* FLASK_ADD_OCONTEXT, FLASK_DEL_OCONTEXT */
-        struct xen_flask_ocontext ocontext;
-        struct xen_flask_peersid peersid;
-        struct xen_flask_relabel relabel;
-        struct xen_flask_devicetree_label devicetree_label;
+        xen_flask_ocontext_t ocontext;
+        xen_flask_peersid_t peersid;
+        xen_flask_relabel_t relabel;
+        xen_flask_devicetree_label_t devicetree_label;
     } u;
 };
 typedef struct xen_flask_op xen_flask_op_t;
--- a/xen/tools/compat-build-header.py
+++ b/xen/tools/compat-build-header.py
@@ -3,7 +3,7 @@
 import re,sys
 
 pats = [
- [ r"__InClUdE__(.*)", r"#include\1\n#pragma pack(4)" ],
+ [ r"__InClUdE__(.*)", r"#include\1" ],
  [ r"__IfDeF__ (XEN_HAVE.*)", r"#ifdef \1" ],
  [ r"__ElSe__", r"#else" ],
  [ r"__EnDif__", r"#endif" ],
@@ -11,9 +11,11 @@ pats = [
  [ r"__UnDeF__", r"#undef" ],
  [ r"\"xen-compat.h\"", r"<public/xen-compat.h>" ],
  [ r"(struct|union|enum)\s+(xen_?)?(\w)", r"\1 compat_\3" ],
- [ r"@KeeP@", r"" ],
+ [ r"typedef(.*)@KeeP@(xen_?)?([\w]+)([^\w])",
+   r"typedef\1\2\3 __attribute__((__aligned__(__alignof(\1compat_\3))))\4" ],
  [ r"_t([^\w]|$)", r"_compat_t\1" ],
- [ r"(8|16|32|64)_compat_t([^\w]|$)", r"\1_t\2" ],
+ [ r"int(8|16|32|64_aligned)_compat_t([^\w]|$)", r"int\1_t\2" ],
+ [ r"(\su?int64(_compat)?)_T([^\w]|$)", r"\1_t\3" ],
  [ r"(^|[^\w])xen_?(\w*)_compat_t([^\w]|$$)", r"\1compat_\2_t\3" ],
  [ r"(^|[^\w])XEN_?", r"\1COMPAT_" ],
  [ r"(^|[^\w])Xen_?", r"\1Compat_" ],
--- a/xen/tools/compat-build-source.py
+++ b/xen/tools/compat-build-source.py
@@ -9,6 +9,7 @@ pats = [
  [ r"^\s*#\s*endif /\* (XEN_HAVE.*) \*/\s+", r"__EnDif__" ],
  [ r"^\s*#\s*define\s+([A-Z_]*_GUEST_HANDLE)", r"#define HIDE_\1" ],
  [ r"^\s*#\s*define\s+([a-z_]*_guest_handle)", r"#define hide_\1" ],
+ [ r"^\s*#\s*define\s+(u?int64)_aligned_t\s.*aligned.*", r"typedef \1_T __attribute__((aligned(4))) \1_compat_T;" ],
  [ r"XEN_GUEST_HANDLE(_[0-9A-Fa-f]+)?", r"COMPAT_HANDLE" ],
 ];
 
--- a/xen/tools/get-fields.sh
+++ b/xen/tools/get-fields.sh
@@ -418,6 +418,21 @@ check_field ()
 			"}")
 				level=$(expr $level - 1) id=
 				;;
+			compat_*_t)
+				if [ $level = 2 ]
+				then
+					fields=" "
+					token="${token%_t}"
+					token="${token#compat_}"
+				fi
+				;;
+			evtchn_*_compat_t)
+				if [ $level = 2 -a $token != evtchn_port_compat_t ]
+				then
+					fields=" "
+					token="${token%_compat_t}"
+				fi
+				;;
 			[a-zA-Z]*)
 				id=$token
 				;;
@@ -464,6 +479,14 @@ build_check ()
 		"]")
 			arrlvl=$(expr $arrlvl - 1)
 			;;
+		compat_*_t)
+			if [ $level = 2 -a $token != compat_argo_port_t ]
+			then
+				fields=" "
+				token="${token%_t}"
+				token="${token#compat_}"
+			fi
+			;;
 		[a-zA-Z_]*)
 			test $level != 2 -o $arrlvl != 1 || id=$token
 			;;

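As an aside on how the rewrite in compat-build-header.py behaves: the sketch below applies a simplified subset of the pattern list to sample declarations. The rule names and ordering follow the hunk above, but this is an illustrative reduction, not the full rule set (the real script applies more rules, including the intN_t fix-ups, and runs over whole generated headers).

```python
import re

# Illustrative subset of the ordered rewrite rules used by
# xen/tools/compat-build-header.py (the real script applies a longer list).
pats = [
    # struct/union/enum xen_foo -> struct/union/enum compat_foo
    [r"(struct|union|enum)\s+(xen_?)?(\w)", r"\1 compat_\3"],
    # foo_t -> foo_compat_t
    [r"_t([^\w]|$)", r"_compat_t\1"],
    # xen_foo_compat_t -> compat_foo_t
    [r"(^|[^\w])xen_?(\w*)_compat_t([^\w]|$)", r"\1compat_\2_t\3"],
]

def xlat(line):
    """Apply each pattern in order, as the generator script does."""
    for pat, repl in pats:
        line = re.sub(pat, repl, line)
    return line

print(xlat("struct xen_flask_load load;"))  # struct compat_flask_load load;
print(xlat("xen_flask_op_t op;"))           # compat_flask_op_t op;
```

This ordering is why the flask_op.h change above matters: once the union members are spelled as xen_*_t typedefs, the `_t` and `xen_*_compat_t` rules turn them into the corresponding compat_*_t names in the generated compat header.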


From xen-devel-bounces@lists.xenproject.org Thu Jul 23 15:48:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jul 2020 15:48:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jydSd-00065t-8V; Thu, 23 Jul 2020 15:48:43 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=9kJt=BC=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jydSc-00065i-18
 for xen-devel@lists.xenproject.org; Thu, 23 Jul 2020 15:48:42 +0000
X-Inumbo-ID: fae4028c-ccfb-11ea-873e-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id fae4028c-ccfb-11ea-873e-bc764e2007e4;
 Thu, 23 Jul 2020 15:48:41 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 6B91CAC83;
 Thu, 23 Jul 2020 15:48:48 +0000 (UTC)
Subject: [PATCH v3 2/8] x86/mce: add compat struct checking for
 XEN_MC_inject_v2
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <adb0fe93-c251-b84a-a357-936029af0e9c@suse.com>
Message-ID: <30c04dc1-07aa-4edf-f913-5536dd07a199@suse.com>
Date: Thu, 23 Jul 2020 17:48:42 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <adb0fe93-c251-b84a-a357-936029af0e9c@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, George Dunlap <George.Dunlap@eu.citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

84e364f2eda2 ("x86: add CMCI software injection interface") merely made
sure things would build, without any concern for whether they would
actually work:
- despite the addition of xenctl_bitmap to xlat.lst, the resulting macro
  wasn't invoked anywhere (which would have led to recognizing that the
  structure appeared to have no fully compatible layout, despite the use
  of a 64-bit handle),
- the interface struct itself was neither added to xlat.lst (and the
  resulting macro then invoked) nor was any manual checking of
  individual fields added.

Adjust compat header generation logic to retain XEN_GUEST_HANDLE_64(),
which is intentionally laid out to be compatible between different-size
guests. Invoke the missing checking (implicitly through CHECK_mc).

No change in the resulting generated code.

Fixes: 84e364f2eda2 ("x86: add CMCI software injection interface")
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

--- a/xen/arch/x86/cpu/mcheck/mce.c
+++ b/xen/arch/x86/cpu/mcheck/mce.c
@@ -1312,10 +1312,12 @@ CHECK_FIELD_(struct, mc_fetch, fetch_id)
 CHECK_FIELD_(struct, mc_physcpuinfo, ncpus);
 # define CHECK_compat_mc_physcpuinfo struct mc_physcpuinfo
 
-#define CHECK_compat_mc_inject_v2   struct mc_inject_v2
+# define xen_ctl_bitmap              xenctl_bitmap
+
 CHECK_mc;
 # undef CHECK_compat_mc_fetch
 # undef CHECK_compat_mc_physcpuinfo
+# undef xen_ctl_bitmap
 
 # define xen_mc_info                 mc_info
 CHECK_mc_info;
--- a/xen/include/public/arch-x86/xen-mca.h
+++ b/xen/include/public/arch-x86/xen-mca.h
@@ -429,6 +429,7 @@ struct xen_mc_inject_v2 {
     uint32_t flags;
     xenctl_bitmap_t cpumap;
 };
+typedef struct xen_mc_inject_v2 xen_mc_inject_v2_t;
 #endif
 
 struct xen_mc {
@@ -441,7 +442,7 @@ struct xen_mc {
         xen_mc_msrinject_t         mc_msrinject;
         xen_mc_mceinject_t         mc_mceinject;
 #if defined(__XEN__) || defined(__XEN_TOOLS__)
-        struct xen_mc_inject_v2    mc_inject_v2;
+        xen_mc_inject_v2_t         mc_inject_v2;
 #endif
     } u;
 };
--- a/xen/include/xlat.lst
+++ b/xen/include/xlat.lst
@@ -44,6 +44,7 @@
 ?	mcinfo_recovery			arch-x86/xen-mca.h
 !	mc_fetch			arch-x86/xen-mca.h
 ?	mc_info				arch-x86/xen-mca.h
+?	mc_inject_v2			arch-x86/xen-mca.h
 ?	mc_mceinject			arch-x86/xen-mca.h
 ?	mc_msrinject			arch-x86/xen-mca.h
 ?	mc_notifydomain			arch-x86/xen-mca.h
--- a/xen/tools/compat-build-header.py
+++ b/xen/tools/compat-build-header.py
@@ -19,6 +19,7 @@ pats = [
  [ r"(^|[^\w])xen_?(\w*)_compat_t([^\w]|$$)", r"\1compat_\2_t\3" ],
  [ r"(^|[^\w])XEN_?", r"\1COMPAT_" ],
  [ r"(^|[^\w])Xen_?", r"\1Compat_" ],
+ [ r"(^|[^\w])COMPAT_HANDLE_64\(", r"\1XEN_GUEST_HANDLE_64(" ],
  [ r"(^|[^\w])long([^\w]|$$)", r"\1int\2" ]
 ];
 
--- a/xen/tools/compat-build-source.py
+++ b/xen/tools/compat-build-source.py
@@ -10,7 +10,7 @@ pats = [
  [ r"^\s*#\s*define\s+([A-Z_]*_GUEST_HANDLE)", r"#define HIDE_\1" ],
  [ r"^\s*#\s*define\s+([a-z_]*_guest_handle)", r"#define hide_\1" ],
  [ r"^\s*#\s*define\s+(u?int64)_aligned_t\s.*aligned.*", r"typedef \1_T __attribute__((aligned(4))) \1_compat_T;" ],
- [ r"XEN_GUEST_HANDLE(_[0-9A-Fa-f]+)?", r"COMPAT_HANDLE" ],
+ [ r"XEN_GUEST_HANDLE", r"COMPAT_HANDLE" ],
 ];
 
 xlatf = open('xlat.lst', 'r')



From xen-devel-bounces@lists.xenproject.org Thu Jul 23 15:48:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jul 2020 15:48:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jydSp-00067x-HY; Thu, 23 Jul 2020 15:48:55 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=h/wE=BC=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jydSn-00067d-Tz
 for xen-devel@lists.xenproject.org; Thu, 23 Jul 2020 15:48:53 +0000
X-Inumbo-ID: 019b9432-ccfc-11ea-a2c7-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 019b9432-ccfc-11ea-a2c7-12813bfff9fa;
 Thu, 23 Jul 2020 15:48:52 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=B5P7phfhQa8TeDd6RQ3MBClL54p7dCMHykzqQhA7iQU=; b=i/nthey5iwfmrx/mYVC0SDA61
 iqr+tBybojhe03T2d05FBsabkyWoFCIbiZnKW2rh6WliPbqXb414jO5n37pCgdB/bCN7Kq2bwm5zb
 9hlJImTJkFlGNswQA7Qo5r+s99dhDQ8OTe6eYmikthGxPGnIZNgK7CnaADc16CwSx7jfI=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jydSl-0002x7-MA; Thu, 23 Jul 2020 15:48:51 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jydSl-0002li-DR; Thu, 23 Jul 2020 15:48:51 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jydSl-0003Kl-Cn; Thu, 23 Jul 2020 15:48:51 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-152135-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 152135: regressions - FAIL
X-Osstest-Failures: libvirt:build-i386-libvirt:libvirt-build:fail:regression
 libvirt:build-arm64-libvirt:libvirt-build:fail:regression
 libvirt:build-amd64-libvirt:libvirt-build:fail:regression
 libvirt:build-armhf-libvirt:libvirt-build:fail:regression
 libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
 libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
X-Osstest-Versions-This: libvirt=6c7ba7b496374c5d7d4fb925f38415f087234641
X-Osstest-Versions-That: libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 23 Jul 2020 15:48:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 152135 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/152135/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              6c7ba7b496374c5d7d4fb925f38415f087234641
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z   13 days
Failing since        151818  2020-07-11 04:18:52 Z   12 days   13 attempts
Testing same since   152135  2020-07-23 04:18:47 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrea Bolognani <abologna@redhat.com>
  Bastien Orivel <bastien.orivel@diateam.net>
  Bihong Yu <yubihong@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Jianan Gao <jgao@redhat.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  Ján Tomko <jtomko@redhat.com>
  Laine Stump <laine@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Michal Privoznik <mprivozn@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Ryan Schmidt <git@ryandesign.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Yi Wang <wang.yi59@zte.com.cn>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 2368 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Jul 23 15:49:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jul 2020 15:49:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jydTB-0006CL-S0; Thu, 23 Jul 2020 15:49:17 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=9kJt=BC=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jydTA-0006CA-RG
 for xen-devel@lists.xenproject.org; Thu, 23 Jul 2020 15:49:16 +0000
X-Inumbo-ID: 0efb4d0d-ccfc-11ea-a2c7-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0efb4d0d-ccfc-11ea-a2c7-12813bfff9fa;
 Thu, 23 Jul 2020 15:49:16 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 502EBAC83;
 Thu, 23 Jul 2020 15:49:23 +0000 (UTC)
Subject: [PATCH v3 3/8] x86/mce: bring hypercall subop compat checking in sync
 again
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <adb0fe93-c251-b84a-a357-936029af0e9c@suse.com>
Message-ID: <dfad8cb1-e8a2-11ca-70f3-2342f9a04c12@suse.com>
Date: Thu, 23 Jul 2020 17:49:16 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <adb0fe93-c251-b84a-a357-936029af0e9c@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

For consistency, use a typedef in struct xen_mc also for the two subops
"manually" translated in the handler. No functional change.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

--- a/xen/arch/x86/cpu/mcheck/mce.c
+++ b/xen/arch/x86/cpu/mcheck/mce.c
@@ -1307,16 +1307,16 @@ CHECK_mcinfo_common;
 
 CHECK_FIELD_(struct, mc_fetch, flags);
 CHECK_FIELD_(struct, mc_fetch, fetch_id);
-# define CHECK_compat_mc_fetch       struct mc_fetch
+# define CHECK_mc_fetch              struct mc_fetch
 
 CHECK_FIELD_(struct, mc_physcpuinfo, ncpus);
-# define CHECK_compat_mc_physcpuinfo struct mc_physcpuinfo
+# define CHECK_mc_physcpuinfo        struct mc_physcpuinfo
 
 # define xen_ctl_bitmap              xenctl_bitmap
 
 CHECK_mc;
-# undef CHECK_compat_mc_fetch
-# undef CHECK_compat_mc_physcpuinfo
+# undef CHECK_mc_fetch
+# undef CHECK_mc_physcpuinfo
 # undef xen_ctl_bitmap
 
 # define xen_mc_info                 mc_info
--- a/xen/include/public/arch-x86/xen-mca.h
+++ b/xen/include/public/arch-x86/xen-mca.h
@@ -391,6 +391,7 @@ struct xen_mc_physcpuinfo {
     /* OUT */
     XEN_GUEST_HANDLE(xen_mc_logical_cpu_t) info;
 };
+typedef struct xen_mc_physcpuinfo xen_mc_physcpuinfo_t;
 
 #define XEN_MC_msrinject    4
 #define MC_MSRINJ_MAXMSRS       8
@@ -436,9 +437,9 @@ struct xen_mc {
     uint32_t cmd;
     uint32_t interface_version; /* XEN_MCA_INTERFACE_VERSION */
     union {
-        struct xen_mc_fetch        mc_fetch;
+        xen_mc_fetch_t             mc_fetch;
         xen_mc_notifydomain_t      mc_notifydomain;
-        struct xen_mc_physcpuinfo  mc_physcpuinfo;
+        xen_mc_physcpuinfo_t       mc_physcpuinfo;
         xen_mc_msrinject_t         mc_msrinject;
         xen_mc_mceinject_t         mc_mceinject;
 #if defined(__XEN__) || defined(__XEN_TOOLS__)



From xen-devel-bounces@lists.xenproject.org Thu Jul 23 15:49:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jul 2020 15:49:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jydTh-0006Jp-56; Thu, 23 Jul 2020 15:49:49 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=9kJt=BC=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jydTf-0006JL-3V
 for xen-devel@lists.xenproject.org; Thu, 23 Jul 2020 15:49:47 +0000
X-Inumbo-ID: 21bfac62-ccfc-11ea-a2c7-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 21bfac62-ccfc-11ea-a2c7-12813bfff9fa;
 Thu, 23 Jul 2020 15:49:46 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 9BB3FAC83;
 Thu, 23 Jul 2020 15:49:53 +0000 (UTC)
Subject: [PATCH v3 4/8] x86/dmop: add compat struct checking for
 XEN_DMOP_map_mem_type_to_ioreq_server
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <adb0fe93-c251-b84a-a357-936029af0e9c@suse.com>
Message-ID: <a3ecc8d7-10b6-4678-e7c9-9900d4d008c8@suse.com>
Date: Thu, 23 Jul 2020 17:49:47 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <adb0fe93-c251-b84a-a357-936029af0e9c@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, George Dunlap <George.Dunlap@eu.citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

This was forgotten when the subop was added.

Also take the opportunity to move the dm_op_relocate_memory entry in
xlat.lst to its designated place.

No change in the resulting generated code.

Fixes: ca2b511d3ff4 ("x86/ioreq server: add DMOP to map guest ram with p2m_ioreq_server to an ioreq server")
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

--- a/xen/arch/x86/hvm/dm.c
+++ b/xen/arch/x86/hvm/dm.c
@@ -730,6 +730,7 @@ CHECK_dm_op_modified_memory;
 CHECK_dm_op_set_mem_type;
 CHECK_dm_op_inject_event;
 CHECK_dm_op_inject_msi;
+CHECK_dm_op_map_mem_type_to_ioreq_server;
 CHECK_dm_op_remote_shutdown;
 CHECK_dm_op_relocate_memory;
 CHECK_dm_op_pin_memory_cacheattr;
--- a/xen/include/xlat.lst
+++ b/xen/include/xlat.lst
@@ -86,15 +86,16 @@
 ?	grant_entry_v2			grant_table.h
 ?	gnttab_swap_grant_ref		grant_table.h
 !	dm_op_buf			hvm/dm_op.h
-?	dm_op_relocate_memory		hvm/dm_op.h
 ?	dm_op_create_ioreq_server	hvm/dm_op.h
 ?	dm_op_destroy_ioreq_server	hvm/dm_op.h
 ?	dm_op_get_ioreq_server_info	hvm/dm_op.h
 ?	dm_op_inject_event		hvm/dm_op.h
 ?	dm_op_inject_msi		hvm/dm_op.h
 ?	dm_op_ioreq_server_range	hvm/dm_op.h
+?	dm_op_map_mem_type_to_ioreq_server hvm/dm_op.h
 ?	dm_op_modified_memory		hvm/dm_op.h
 ?	dm_op_pin_memory_cacheattr	hvm/dm_op.h
+?	dm_op_relocate_memory		hvm/dm_op.h
 ?	dm_op_remote_shutdown		hvm/dm_op.h
 ?	dm_op_set_ioreq_server_state	hvm/dm_op.h
 ?	dm_op_set_isa_irq_level		hvm/dm_op.h

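For reference, each xlat.lst line carries a kind marker, a type name, and the public header the type lives in; a `?` entry requests pure layout checking (a CHECK_* macro) while a `!` entry requests translation machinery. A minimal, hypothetical parser for that format (a sketch for illustration, not the real build tooling, which lives in compat-build-source.py):

```python
# Illustrative parser for xlat.lst entries: each non-comment line is
# "<kind> <type> <header>", whitespace-separated, where '?' asks for a
# CHECK_* macro (identical layout, no translation) and '!' asks for a
# translation macro.
def parse_xlat(text):
    entries = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith('#'):
            continue  # skip blanks and comments
        kind, name, header = line.split(None, 2)
        entries.append((kind, name, header))
    return entries

sample = "?\tdm_op_relocate_memory\thvm/dm_op.h\n!\tmc_fetch\t\tarch-x86/xen-mca.h\n"
print(parse_xlat(sample))
```

This makes the patch above easy to read: the dm_op_map_mem_type_to_ioreq_server line is a `?` entry, i.e. the native and compat layouts are expected to be identical and only checking is generated.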


From xen-devel-bounces@lists.xenproject.org Thu Jul 23 15:50:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jul 2020 15:50:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jydUF-000747-FS; Thu, 23 Jul 2020 15:50:23 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=9kJt=BC=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jydUF-000740-0y
 for xen-devel@lists.xenproject.org; Thu, 23 Jul 2020 15:50:23 +0000
X-Inumbo-ID: 3735fdda-ccfc-11ea-873e-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3735fdda-ccfc-11ea-873e-bc764e2007e4;
 Thu, 23 Jul 2020 15:50:22 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 9E135AC83;
 Thu, 23 Jul 2020 15:50:29 +0000 (UTC)
Subject: [PATCH v3 5/8] evtchn: add compat struct checking for newer sub-ops
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <adb0fe93-c251-b84a-a357-936029af0e9c@suse.com>
Message-ID: <99e52b76-de0f-13ac-f37a-6e14cd4b566f@suse.com>
Date: Thu, 23 Jul 2020 17:50:23 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <adb0fe93-c251-b84a-a357-936029af0e9c@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, George Dunlap <George.Dunlap@eu.citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Various additions to the interface did not get mirrored into the compat
handling machinery. Luckily, all of the additions were made in ways that
do not require any form of translation.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v3: New.

--- a/xen/common/compat/xlat.c
+++ b/xen/common/compat/xlat.c
@@ -54,6 +54,22 @@ CHECK_evtchn_op;
 #undef xen_evtchn_status
 #undef xen_evtchn_unmask
 
+#define xen_evtchn_expand_array evtchn_expand_array
+CHECK_evtchn_expand_array;
+#undef xen_evtchn_expand_array
+
+#define xen_evtchn_init_control evtchn_init_control
+CHECK_evtchn_init_control;
+#undef xen_evtchn_init_control
+
+#define xen_evtchn_reset evtchn_reset
+CHECK_evtchn_reset;
+#undef xen_evtchn_reset
+
+#define xen_evtchn_set_priority evtchn_set_priority
+CHECK_evtchn_set_priority;
+#undef xen_evtchn_set_priority
+
 #define xen_mmu_update mmu_update
 CHECK_mmu_update;
 #undef xen_mmu_update
--- a/xen/include/xlat.lst
+++ b/xen/include/xlat.lst
@@ -66,8 +66,12 @@
 ?	evtchn_bind_vcpu		event_channel.h
 ?	evtchn_bind_virq		event_channel.h
 ?	evtchn_close			event_channel.h
+?	evtchn_expand_array		event_channel.h
+?	evtchn_init_control		event_channel.h
 ?	evtchn_op			event_channel.h
+?	evtchn_reset			event_channel.h
 ?	evtchn_send			event_channel.h
+?	evtchn_set_priority		event_channel.h
 ?	evtchn_status			event_channel.h
 ?	evtchn_unmask			event_channel.h
 ?	gnttab_cache_flush		grant_table.h



From xen-devel-bounces@lists.xenproject.org Thu Jul 23 15:51:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jul 2020 15:51:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jydUo-0007BN-Pb; Thu, 23 Jul 2020 15:50:58 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=9kJt=BC=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jydUn-0007BA-FD
 for xen-devel@lists.xenproject.org; Thu, 23 Jul 2020 15:50:57 +0000
X-Inumbo-ID: 4b788902-ccfc-11ea-a2c7-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4b788902-ccfc-11ea-a2c7-12813bfff9fa;
 Thu, 23 Jul 2020 15:50:56 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 9A323AC83;
 Thu, 23 Jul 2020 15:51:03 +0000 (UTC)
Subject: [PATCH v3 6/8] x86: generalize padding field handling
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <adb0fe93-c251-b84a-a357-936029af0e9c@suse.com>
Message-ID: <abc2fc97-32be-8886-902e-d6d6e8bab87f@suse.com>
Date: Thu, 23 Jul 2020 17:50:57 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <adb0fe93-c251-b84a-a357-936029af0e9c@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, George Dunlap <George.Dunlap@eu.citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

The original intention was to ignore padding fields, but the pattern
matched only ones whose names started with an underscore. Also match
fields whose names are in line with the C spec by not having a leading
underscore. (Note that the leading ^ in the sed regexps was pointless
and has hence been dropped.)
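[A hypothetical demonstration of the widened pattern (GNU sed assumed,
since get-fields.sh runs with $SED pointing at one): with the leading ^
dropped and the underscore made optional, both "_pad" and "pad" style
field names now reduce to the empty string, which is exactly the
condition the script uses to skip a field:]

```shell
# Field names reducing to "" are treated as padding and skipped.
for id in _pad _pad0 pad pad12 vdistance; do
    rest="$(echo "$id" | sed 's,_\?pad[[:digit:]]*,,')"
    if [ -z "$rest" ]; then
        echo "$id: skipped as padding"
    else
        echo "$id: kept as a real field"
    fi
done
```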

This requires adjusting some vNUMA macros, to avoid triggering
"enumeration value ... not handled in switch" warnings, which - due to
-Werror - would cause the build to fail. (I have to admit that I find
these padding fields odd, when translation of the containing structure
is needed anyway.)

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
---
While skipping padding fields is pretty surely reasonable for the
translation macros, we may want to consider not ignoring them when
generating the checking macros.

--- a/xen/common/compat/memory.c
+++ b/xen/common/compat/memory.c
@@ -354,10 +354,13 @@ int compat_memory_op(unsigned int cmd, X
                 return -EFAULT;
 
 #define XLAT_vnuma_topology_info_HNDL_vdistance_h(_d_, _s_)		\
+            case XLAT_vnuma_topology_info_vdistance_pad:                \
             guest_from_compat_handle((_d_)->vdistance.h, (_s_)->vdistance.h)
 #define XLAT_vnuma_topology_info_HNDL_vcpu_to_vnode_h(_d_, _s_)		\
+            case XLAT_vnuma_topology_info_vcpu_to_vnode_pad:            \
             guest_from_compat_handle((_d_)->vcpu_to_vnode.h, (_s_)->vcpu_to_vnode.h)
 #define XLAT_vnuma_topology_info_HNDL_vmemrange_h(_d_, _s_)		\
+            case XLAT_vnuma_topology_info_vmemrange_pad:                \
             guest_from_compat_handle((_d_)->vmemrange.h, (_s_)->vmemrange.h)
 
             XLAT_vnuma_topology_info(nat.vnuma, &cmp.vnuma);
--- a/xen/tools/get-fields.sh
+++ b/xen/tools/get-fields.sh
@@ -218,7 +218,7 @@ for line in sys.stdin.readlines():
 				fi
 				;;
 			[\,\;])
-				if [ $level = 2 -a -n "$(echo $id | $SED 's,^_pad[[:digit:]]*,,')" ]
+				if [ $level = 2 -a -n "$(echo $id | $SED 's,_\?pad[[:digit:]]*,,')" ]
 				then
 					if [ $kind = union ]
 					then
@@ -347,7 +347,7 @@ build_body ()
 			fi
 			;;
 		[\,\;])
-			if [ $level = 2 -a -n "$(echo $id | $SED 's,^_pad[[:digit:]]*,,')" ]
+			if [ $level = 2 -a -n "$(echo $id | $SED 's,_\?pad[[:digit:]]*,,')" ]
 			then
 				if [ -z "$array" -a -z "$array_type" ]
 				then
@@ -437,7 +437,7 @@ check_field ()
 				id=$token
 				;;
 			[\,\;])
-				if [ $level = 2 -a -n "$(echo $id | $SED 's,^_pad[[:digit:]]*,,')" ]
+				if [ $level = 2 -a -n "$(echo $id | $SED 's,_\?pad[[:digit:]]*,,')" ]
 				then
 					check_field $1 $2 $3.$id "$fields"
 					test "$token" != ";" || fields= id=
@@ -491,7 +491,7 @@ build_check ()
 			test $level != 2 -o $arrlvl != 1 || id=$token
 			;;
 		[\,\;])
-			if [ $level = 2 -a -n "$(echo $id | $SED 's,^_pad[[:digit:]]*,,')" ]
+			if [ $level = 2 -a -n "$(echo $id | $SED 's,_\?pad[[:digit:]]*,,')" ]
 			then
 				check_field $kind $1 $id "$fields"
 				test "$token" != ";" || fields= id=



From xen-devel-bounces@lists.xenproject.org Thu Jul 23 15:51:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jul 2020 15:51:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jydVM-0007ME-7E; Thu, 23 Jul 2020 15:51:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=9kJt=BC=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jydVK-0007M0-E0
 for xen-devel@lists.xenproject.org; Thu, 23 Jul 2020 15:51:30 +0000
X-Inumbo-ID: 5ea71ba8-ccfc-11ea-873e-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5ea71ba8-ccfc-11ea-873e-bc764e2007e4;
 Thu, 23 Jul 2020 15:51:29 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 18490AC83;
 Thu, 23 Jul 2020 15:51:37 +0000 (UTC)
Subject: [PATCH v3 7/8] flask: drop dead compat translation code
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <adb0fe93-c251-b84a-a357-936029af0e9c@suse.com>
Message-ID: <533889b9-7cbc-2df4-f308-861536902689@suse.com>
Date: Thu, 23 Jul 2020 17:51:30 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <adb0fe93-c251-b84a-a357-936029af0e9c@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, George Dunlap <George.Dunlap@eu.citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Translation macros aren't used (and hence needed) at all (or else a
devicetree_label entry would have been missing), and userlist was
removed quite some time ago.

No functional change.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

--- a/xen/include/xlat.lst
+++ b/xen/include/xlat.lst
@@ -171,14 +171,11 @@
 ?	xenoprof_init			xenoprof.h
 ?	xenoprof_passive		xenoprof.h
 ?	flask_access			xsm/flask_op.h
-!	flask_boolean			xsm/flask_op.h
 ?	flask_cache_stats		xsm/flask_op.h
 ?	flask_hash_stats		xsm/flask_op.h
-!	flask_load			xsm/flask_op.h
 ?	flask_ocontext			xsm/flask_op.h
 ?	flask_peersid			xsm/flask_op.h
 ?	flask_relabel			xsm/flask_op.h
 ?	flask_setavc_threshold		xsm/flask_op.h
 ?	flask_setenforce		xsm/flask_op.h
-!	flask_sid_context		xsm/flask_op.h
 ?	flask_transition		xsm/flask_op.h
--- a/xen/xsm/flask/flask_op.c
+++ b/xen/xsm/flask/flask_op.c
@@ -790,8 +790,6 @@ CHECK_flask_transition;
 #define xen_flask_load compat_flask_load
 #define flask_security_load compat_security_load
 
-#define xen_flask_userlist compat_flask_userlist
-
 #define xen_flask_sid_context compat_flask_sid_context
 #define flask_security_context compat_security_context
 #define flask_security_sid compat_security_sid



From xen-devel-bounces@lists.xenproject.org Thu Jul 23 15:52:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jul 2020 15:52:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jydVt-0007SQ-GT; Thu, 23 Jul 2020 15:52:05 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=9kJt=BC=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jydVs-0007SA-5o
 for xen-devel@lists.xenproject.org; Thu, 23 Jul 2020 15:52:04 +0000
X-Inumbo-ID: 735f8c54-ccfc-11ea-873e-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 735f8c54-ccfc-11ea-873e-bc764e2007e4;
 Thu, 23 Jul 2020 15:52:03 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 8E4E5AC83;
 Thu, 23 Jul 2020 15:52:10 +0000 (UTC)
Subject: [PATCH v3 8/8] x86: only generate compat headers actually needed
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <adb0fe93-c251-b84a-a357-936029af0e9c@suse.com>
Message-ID: <b790dde5-821f-cecf-b542-10bf5a9179d8@suse.com>
Date: Thu, 23 Jul 2020 17:52:04 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <adb0fe93-c251-b84a-a357-936029af0e9c@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, George Dunlap <George.Dunlap@eu.citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

As was already the case for XSM/Flask, avoid generating compat headers
when they're not going to be needed. To address resulting build issues
- move compat/hvm/dm_op.h inclusion to the only source file needing it,
- add a little bit of #ifdef-ary.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
---
Alternatively we could consistently drop conditionals (except for per-
arch cases perhaps).

--- a/xen/arch/x86/hvm/dm.c
+++ b/xen/arch/x86/hvm/dm.c
@@ -717,6 +717,8 @@ static int dm_op(const struct dmop_args
     return rc;
 }
 
+#include <compat/hvm/dm_op.h>
+
 CHECK_dm_op_create_ioreq_server;
 CHECK_dm_op_get_ioreq_server_info;
 CHECK_dm_op_ioreq_server_range;
--- a/xen/common/compat/domain.c
+++ b/xen/common/compat/domain.c
@@ -11,7 +11,6 @@ EMIT_FILE;
 #include <xen/guest_access.h>
 #include <xen/hypercall.h>
 #include <compat/vcpu.h>
-#include <compat/hvm/hvm_vcpu.h>
 
 #define xen_vcpu_set_periodic_timer vcpu_set_periodic_timer
 CHECK_vcpu_set_periodic_timer;
@@ -25,6 +24,10 @@ CHECK_SIZE_(struct, vcpu_info);
 CHECK_vcpu_register_vcpu_info;
 #undef xen_vcpu_register_vcpu_info
 
+#ifdef CONFIG_HVM
+
+#include <compat/hvm/hvm_vcpu.h>
+
 #define xen_vcpu_hvm_context vcpu_hvm_context
 #define xen_vcpu_hvm_x86_32 vcpu_hvm_x86_32
 #define xen_vcpu_hvm_x86_64 vcpu_hvm_x86_64
@@ -33,6 +36,8 @@ CHECK_vcpu_hvm_context;
 #undef xen_vcpu_hvm_x86_32
 #undef xen_vcpu_hvm_context
 
+#endif
+
 int compat_vcpu_op(int cmd, unsigned int vcpuid, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     struct domain *d = current->domain;
@@ -49,6 +54,7 @@ int compat_vcpu_op(int cmd, unsigned int
         if ( v->vcpu_info == &dummy_vcpu_info )
             return -EINVAL;
 
+#ifdef CONFIG_HVM
         if ( is_hvm_vcpu(v) )
         {
             struct vcpu_hvm_context ctxt;
@@ -61,6 +67,7 @@ int compat_vcpu_op(int cmd, unsigned int
             domain_unlock(d);
         }
         else
+#endif
         {
             struct compat_vcpu_guest_context *ctxt;
 
--- a/xen/include/Makefile
+++ b/xen/include/Makefile
@@ -3,32 +3,34 @@ ifneq ($(CONFIG_COMPAT),)
 compat-arch-$(CONFIG_X86) := x86_32
 
 headers-y := \
-    compat/argo.h \
-    compat/callback.h \
+    compat/arch-$(compat-arch-y).h \
     compat/elfnote.h \
     compat/event_channel.h \
     compat/features.h \
-    compat/grant_table.h \
-    compat/hypfs.h \
-    compat/kexec.h \
     compat/memory.h \
     compat/nmi.h \
     compat/physdev.h \
     compat/platform.h \
+    compat/pmu.h \
     compat/sched.h \
-    compat/trace.h \
     compat/vcpu.h \
     compat/version.h \
     compat/xen.h \
-    compat/xenoprof.h
+    compat/xlat.h
 headers-$(CONFIG_X86)     += compat/arch-x86/pmu.h
 headers-$(CONFIG_X86)     += compat/arch-x86/xen-mca.h
 headers-$(CONFIG_X86)     += compat/arch-x86/xen.h
 headers-$(CONFIG_X86)     += compat/arch-x86/xen-$(compat-arch-y).h
-headers-$(CONFIG_X86)     += compat/hvm/dm_op.h
-headers-$(CONFIG_X86)     += compat/hvm/hvm_op.h
-headers-$(CONFIG_X86)     += compat/hvm/hvm_vcpu.h
-headers-y                 += compat/arch-$(compat-arch-y).h compat/pmu.h compat/xlat.h
+headers-$(CONFIG_ARGO)    += compat/argo.h
+headers-$(CONFIG_PV)      += compat/callback.h
+headers-$(CONFIG_GRANT_TABLE) += compat/grant_table.h
+headers-$(CONFIG_HVM)     += compat/hvm/dm_op.h
+headers-$(CONFIG_HVM)     += compat/hvm/hvm_op.h
+headers-$(CONFIG_HVM)     += compat/hvm/hvm_vcpu.h
+headers-$(CONFIG_HYPFS)   += compat/hypfs.h
+headers-$(CONFIG_KEXEC)   += compat/kexec.h
+headers-$(CONFIG_TRACEBUFFER) += compat/trace.h
+headers-$(CONFIG_XENOPROF) += compat/xenoprof.h
 headers-$(CONFIG_XSM_FLASK) += compat/xsm/flask_op.h
 
 cppflags-y                := -include public/xen-compat.h -DXEN_GENERATING_COMPAT_HEADERS
--- a/xen/include/xen/hypercall.h
+++ b/xen/include/xen/hypercall.h
@@ -216,8 +216,6 @@ extern long compat_argo_op(
     unsigned long arg4);
 #endif
 
-#include <compat/hvm/dm_op.h>
-
 extern int
 compat_dm_op(
     domid_t domid,



From xen-devel-bounces@lists.xenproject.org Thu Jul 23 16:00:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jul 2020 16:00:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jydeB-0000X2-F6; Thu, 23 Jul 2020 16:00:39 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=xWck=BC=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jydeA-0000Wx-KM
 for xen-devel@lists.xenproject.org; Thu, 23 Jul 2020 16:00:38 +0000
X-Inumbo-ID: a599aa29-ccfd-11ea-873e-bc764e2007e4
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a599aa29-ccfd-11ea-873e-bc764e2007e4;
 Thu, 23 Jul 2020 16:00:37 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595520036;
 h=subject:to:cc:references:from:message-id:date:
 mime-version:in-reply-to:content-transfer-encoding;
 bh=CUBcWDqMi6J0iPnXI5g8xtGAcA/gPUjYeEizeNxjR9U=;
 b=MjwGafP2/OZudzC2runZ4vW0rthoymczFkqEA9EHkAV9ugWRx+xzZLnT
 4vTKuUkQiCk8QKuE9VRSKR5JmxMbk8mMLisennZ33KeCwG5mpQSy6bNev
 FMIgnJmO8CyklaWgqBBtIrAcjPk68gO02Um5gpoTuULwyT5s2piNAHIfy M=;
Authentication-Results: esa6.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: xwhDdYuWlOcdmIa3e8tGYl1S09NqyRiXNeYFP7dk3c7IBOvqtPBOz2WkIrQliKapKQcTUAZw1H
 H1j/WMMYfRYgnvluBV8qtj/WU4gRIICYk7IBoPjT5FKfI/314DRZCeR94r7QbLI+BtS7hitl4I
 MT1q0+PPtpjQ33Z33B5jWdcziIOEuVD57spAaYX1+hQPylFrBxBhiO9hRkj/hRI9wt0JmpWOeh
 U/qcK3sAFcvWzkfIXAHfk8byOAOJHcZbm4LPErwPldUule1DuSJr/GvWZBxTZcUAvYxLUiCgM1
 SdY=
X-SBRS: 2.7
X-MesageID: 23384890
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,387,1589256000"; d="scan'208";a="23384890"
Subject: Re: [PATCH] x86/S3: put data segment registers into known state upon
 resume
To: Jan Beulich <jbeulich@suse.com>
References: <3cad2798-1a01-7d5e-ea55-ddb9ba6388d9@suse.com>
 <6343ad61-246f-fefd-cd12-d260807e82f0@citrix.com>
 <c726cdc7-271b-0ea7-4056-8ab86686282e@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <e61e34c4-38dd-d201-8035-ead79a7595c2@citrix.com>
Date: Thu, 23 Jul 2020 17:00:32 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <c726cdc7-271b-0ea7-4056-8ab86686282e@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "M.
 Vefa Bicakci" <m.v.b@runbox.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 23/07/2020 16:19, Jan Beulich wrote:
> On 23.07.2020 16:40, Andrew Cooper wrote:
>> On 20/07/2020 16:20, Jan Beulich wrote:
>>> wakeup_32 sets %ds and %es to BOOT_DS, while leaving %fs at what
>>> wakeup_start did set it to, and %gs at whatever BIOS did load into it.
>>> All of this may end up confusing the first load_segments() to run on
>>> the BSP after resume, in particular allowing a non-nul selector value
>>> to be left in %fs.
>>>
>>> Alongside %ss, also put all other data segment registers into the same
>>> state that the boot and CPU bringup paths put them in.
>>>
>>> Reported-by: M. Vefa Bicakci <m.v.b@runbox.com>
>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>>
>>> --- a/xen/arch/x86/acpi/wakeup_prot.S
>>> +++ b/xen/arch/x86/acpi/wakeup_prot.S
>>> @@ -52,6 +52,16 @@ ENTRY(s3_resume)
>>>          mov     %eax, %ss
>>>          mov     saved_rsp(%rip), %rsp
>>>  
>>> +        /*
>>> +         * Also put other segment registers into known state, like would
>>> +         * be done on the boot path. This is in particular necessary for
>>> +         * the first load_segments() to work as intended.
>>> +         */
>> I don't think the comment is helpful, not least because it refers to a
>> broken behaviour in load_segments() which is soon going to change anyway.
> Well, I can drop it. I merely thought I'd be nice and comment my
> code once in a while (and the comment could be dropped / adjusted
> when load_segments() changes)...
>
>> We've literally just loaded the GDT, at which point reloading all
>> segments *is* the expected thing to do.
> In a way, unless some/all are assumed to already hold a nul selector.
>
>> I'd recommend that the diff be simply:
>>
>> diff --git a/xen/arch/x86/acpi/wakeup_prot.S
>> b/xen/arch/x86/acpi/wakeup_prot.S
>> index dcc7e2327d..a2c41c4f3f 100644
>> --- a/xen/arch/x86/acpi/wakeup_prot.S
>> +++ b/xen/arch/x86/acpi/wakeup_prot.S
>> @@ -49,6 +49,10 @@ ENTRY(s3_resume)
>>          mov     %rax, %cr0
>>  
>>          mov     $__HYPERVISOR_DS64, %eax
>> +        mov     %eax, %ds
>> +        mov     %eax, %es
>> +        mov     %eax, %fs
>> +        mov     %eax, %gs
>>          mov     %eax, %ss
>>          mov     saved_rsp(%rip), %rsp
> So I had specifically elected to not put the addition there, to make
> sure the stack would get established first. But seeing both Roger
> and you ask me to do otherwise - well, so be it then.

There is no IDT.  Any fault will be a triple fault, irrespective of the
exact code layout.

This sequence actually matches what we have in __high_start().

I don't think it is wise to write code which presumes that
__HYPERVISOR_DS64 is 0 (it happens to be, but could easily be 0xe010 as
well), or that the trampoline has fixed behaviours for the segments.

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu Jul 23 16:03:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jul 2020 16:03:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jydhK-0000l9-V6; Thu, 23 Jul 2020 16:03:54 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=xWck=BC=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jydhK-0000l4-22
 for xen-devel@lists.xenproject.org; Thu, 23 Jul 2020 16:03:54 +0000
X-Inumbo-ID: 1a036372-ccfe-11ea-a2c8-12813bfff9fa
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1a036372-ccfe-11ea-a2c8-12813bfff9fa;
 Thu, 23 Jul 2020 16:03:52 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595520233;
 h=subject:to:cc:references:from:message-id:date:
 mime-version:in-reply-to:content-transfer-encoding;
 bh=EvxeNc47Aj2ifX7I2dOEp/A9QT4H2xQ/u9lwoz5shzI=;
 b=Ik9Htv1Rn4W3lK12SIcVa4z2xYhzG5yEF7ipih+nX4EsdFor5mDF3vNp
 O+/VkZf86n3l74phBQsGxlF0UsCzRYg/Q1JcFCuJHzzzw+SLp99BGfDLr
 dsYLzHtZHFY5xpjmZDTfbPSvzklIhX6vzvaj+0QNrwrwiOVNcgrbiCxz/ U=;
Authentication-Results: esa1.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: n9UugflWA/BE40d8h8v7T+t8Hk1PRLHxDefttB491tL3qzQz3blSONoIq24jbkc/6Wnn0r8rBU
 XFHH8vvUaGjTEdO/ipfRj37LAZhDf9pbv6oUTkGuIyngH+RK9dyfF9tk2uYHnmxSWpTXoa7asc
 IBXjTlfQNJ/KhIAyoELwzVB14fs1Vz5O378JuSI6x4xoewXdIcNpfbGrFnN8gjSndsCQUQ7ugn
 YZEXeyeQRDOq82nNDfUvVnaeW3j6P0U07RuAPj70IcRn4ncCgk1QVJ3TTv391AQZEHGbukVhpD
 oEo=
X-SBRS: 2.7
X-MesageID: 23390074
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,387,1589256000"; d="scan'208";a="23390074"
Subject: Re: [PATCH 3/3] memory: introduce an option to force onlining of
 hotplug memory
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, David Hildenbrand
 <david@redhat.com>
References: <20200723084523.42109-1-roger.pau@citrix.com>
 <20200723084523.42109-4-roger.pau@citrix.com>
 <21490d49-b2cf-a398-0609-8010bdb0b004@redhat.com>
 <20200723122300.GD7191@Air-de-Roger>
 <e94d9556-f615-bbe2-07d2-08958969ee5f@redhat.com>
 <20200723135930.GH7191@Air-de-Roger>
 <82b131f4-8f50-cd49-65cf-9a87d51b5555@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <e3fd0281-128f-d885-0657-62ae6bce27c8@citrix.com>
Date: Thu, 23 Jul 2020 17:03:47 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <82b131f4-8f50-cd49-65cf-9a87d51b5555@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, linux-kernel@vger.kernel.org,
 linux-mm@kvack.org, xen-devel@lists.xenproject.org,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Andrew Morton <akpm@linux-foundation.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 23/07/2020 16:10, Jürgen Groß wrote:
> On 23.07.20 15:59, Roger Pau Monné wrote:
>> On Thu, Jul 23, 2020 at 03:22:49PM +0200, David Hildenbrand wrote:
>>> On 23.07.20 14:23, Roger Pau Monné wrote:
>>>> On Thu, Jul 23, 2020 at 01:37:03PM +0200, David Hildenbrand wrote:
>>>>> On 23.07.20 10:45, Roger Pau Monne wrote:
>>>>>> Add an extra option to add_memory_resource that overrides the memory
>>>>>> hotplug online behavior in order to force onlining of memory from
>>>>>> add_memory_resource unconditionally.
>>>>>>
>>>>>> This is required for the Xen balloon driver, that must run the
>>>>>> online page callback in order to correctly process the newly added
>>>>>> memory region, note this is an unpopulated region that is used by
>>>>>> Linux
>>>>>> to either hotplug RAM or to map foreign pages from other domains,
>>>>>> and
>>>>>> hence memory hotplug when running on Xen can be used even without
>>>>>> the
>>>>>> user explicitly requesting it, as part of the normal operations
>>>>>> of the
>>>>>> OS when attempting to map memory from a different domain.
>>>>>>
>>>>>> Setting a different default value of memhp_default_online_type when
>>>>>> attaching the balloon driver is not a robust solution, as the
>>>>>> user (or
>>>>>> distro init scripts) could still change it and thus break the Xen
>>>>>> balloon driver.
>>>>>
>>>>> I think we discussed this a couple of times before (even triggered
>>>>> by my
>>>>> request), and this is responsibility of user space to configure.
>>>>> Usually
>>>>> distros have udev rules to online memory automatically.
>>>>> Especially, user
>>>>> space should be able to configure *how* to online memory.
>>>>
>>>> Note (as per the commit message) that in the specific case I'm
>>>> referring to the memory hotplugged by the Xen balloon driver will be
>>>> an unpopulated range to be used internally by certain Xen subsystems,
>>>> like the xen-blkback or the privcmd drivers. The addition of such
>>>> blocks of (unpopulated) memory can happen without the user explicitly
>>>> requesting it, and hence not even aware such hotplug process is taking
>>>> place. To be clear: no actual RAM will be added to the system.
>>>
>>> Okay, but there is also the case where XEN will actually hotplug memory
>>> using this same handler IIRC (at least I've read papers about it). Both
>>> are using the same handler, correct?
>>
>> Yes, it's used for this dual purpose, which I have to admit I don't
>> like that much either.
>>
>> One set of pages should be clearly used for RAM memory hotplug, and
>> the other to map foreign pages that are not related to memory hotplug,
>> it's just that we happen to need a physical region with backing struct
>> pages.
>>
>>>>
>>>>> It's the admin/distro responsibility to configure this properly.
>>>>> In case
>>>>> this doesn't happen (or as you say, users change it), bad luck.
>>>>>
>>>>> E.g., virtio-mem takes care to not add more memory in case it is not
>>>>> getting onlined. I remember hyper-v has similar code to at least
>>>>> wait a
>>>>> bit for memory to get onlined.
>>>>
>>>> I don't think VirtIO or Hyper-V use the hotplug system in the same way
>>>> as Xen, as said this is done to add unpopulated memory regions that
>>>> will be used to map foreign memory (from other domains) by Xen drivers
>>>> on the system.
>>>
>>> Indeed, if the memory is never exposed to the buddy (and all you
>>> need is
>>> struct pages + a kernel virtual mapping), I wonder if
>>> memremap/ZONE_DEVICE is what you want?
>>
>> I'm certainly not familiar with the Linux memory subsystem, but if
>> that gets us a backing struct page and a kernel mapping then I would
>> say yes.
>>
>>> Then you won't have user-visible
>>> memory blocks created with unclear online semantics, partially
>>> involving
>>> the buddy.
>>
>> Seems like a fine solution.
>>
>> Juergen: would you be OK to use a separate page-list for
>> alloc_xenballooned_pages on HVM/PVH using the logic described by
>> David?
>>
>> I guess I would leave PV as-is, since it already has this reserved
>> region to map foreign pages.
>
> I would really like a common solution, especially as it would enable
> pv driver domains to use that feature, too.
>
> And finding a region for this memory zone in PVH dom0 should be common
> with PV dom0 after all. We don't want to collide with either PCI space
> or hotplug memory.

While I agree with the goal here, these are two very different things, due
to the completely different nature of PV and HVM/PVH guests.

HVM/PVH guests have a concrete guest physical address space.  Linux
needs to pick some gfn's to use which aren't used by anything else (and
Xen's behaviour of not providing any help here is deeply unhelpful, and
needs fixing), and get struct page_info's for them.

PV is totally different.  Linux still needs page_info's for them, but
there is no concept of a guest physical address space.  You can
literally gain access to foreign mappings or grant maps by asking Xen to
modify a PTE.  For convenience with the core code, Linux tries to map
this concept back into a 1:1 pfn space, but it is quite fictitious.

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu Jul 23 16:18:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jul 2020 16:18:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jydv9-0001nr-9R; Thu, 23 Jul 2020 16:18:11 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=vequ=BC=gmail.com=persaur@srs-us1.protection.inumbo.net>)
 id 1jydv7-0001nm-IA
 for xen-devel@lists.xenproject.org; Thu, 23 Jul 2020 16:18:09 +0000
X-Inumbo-ID: 187ca354-cd00-11ea-8745-bc764e2007e4
Received: from mail-qk1-x735.google.com (unknown [2607:f8b0:4864:20::735])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 187ca354-cd00-11ea-8745-bc764e2007e4;
 Thu, 23 Jul 2020 16:18:08 +0000 (UTC)
Received: by mail-qk1-x735.google.com with SMTP id 11so5910259qkn.2
 for <xen-devel@lists.xenproject.org>; Thu, 23 Jul 2020 09:18:08 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=content-transfer-encoding:mime-version:subject:from:in-reply-to:cc
 :date:message-id:references:to;
 bh=rh4PNVBS5aRFXO4gV/z7qaN2ZLLGeNIuB2mmYHsU0QY=;
 b=Gfn/FARY/k588k4Nx7/cewSBhPDDTGxip8k+D/QrvQIcuWtkysc2wmHjYlVKzwiTxg
 O4fu+RQfMBYk7CA/0UcFUMxKNmv+/uIms9cJqYHhxRRx3q0GatpwcU6OUdScyf9Ls5cd
 F8rpnzwVCwG9m2vJX2xV0IvlpIwwdUaemMNfm0o00ppn4ZYSU3LDQ3zN53i78URC69v1
 G8QdzzMNjjaXjujV05Kcxwh9XxC0o66L12OwXF1BtKu+1JinZ8ojb/RDmoBJbgdKC+ae
 lixbcECir240yJ6m4uC0IaVJnecpOqFPi/Mo5Y7F5UYEHx0FwZEBQJg0t86/UOL75LlK
 atag==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:content-transfer-encoding:mime-version:subject
 :from:in-reply-to:cc:date:message-id:references:to;
 bh=rh4PNVBS5aRFXO4gV/z7qaN2ZLLGeNIuB2mmYHsU0QY=;
 b=H9HCwSA78KuxA4VsSjTfBpsZNO6kRR4qn3ad+esHbweCY0S1coN6Pr/90//qbWdAzr
 Vg9Bjmw3fZv34eTe4MrlFpI7q86jvnnNIplQeyJEouBEzvIndglJJCSiv345KoRonTD9
 dy0jDYUORQ+HYLhBGsYzAR/cKJE8lhcn+Ezq5oAj9qa3DoJgkjy9H16CIeDn3grfhHS4
 49AOURo6wIHpZtjbnr1M1Z/JGrmAultKBoEmND5B1FXTtl11sFc/KdDqWEk9Wsqh+QyM
 iAQtbUPwyrU0jIZ/1lks6WVax7MSczBk26GgStUtsSNTAK2g9CveWOAeQE0VrFnvTuAx
 sPDg==
X-Gm-Message-State: AOAM532NI9ARistBydZpJowwChBsmGXJb+lVXZNkVLtmqOubDlCoGwYX
 eM0rBblGtR2xG13q3pfQkFowLowE
X-Google-Smtp-Source: ABdhPJxEifcMrPlIsaZg5MQaVp5aEIGiJ2V/tAH39Pcyt/lIW8StUu/CQL504+yXBxiE+KAzSKw+Gg==
X-Received: by 2002:a37:614:: with SMTP id 20mr5789671qkg.456.1595521087961;
 Thu, 23 Jul 2020 09:18:07 -0700 (PDT)
Received: from ?IPv6:2607:fb90:768:3cd8:f051:f0cd:aac9:1c5d?
 ([2607:fb90:768:3cd8:f051:f0cd:aac9:1c5d])
 by smtp.gmail.com with ESMTPSA id z197sm3102598qkb.66.2020.07.23.09.18.07
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 23 Jul 2020 09:18:07 -0700 (PDT)
Content-Type: multipart/alternative;
 boundary=Apple-Mail-005F301A-5711-4DF3-91AD-8515E7DD9438
Content-Transfer-Encoding: 7bit
Mime-Version: 1.0 (1.0)
Subject: Re: Saved notes for design sessions
From: Rich Persaud <persaur@gmail.com>
In-Reply-To: <8D97F48E-1948-4C1D-965F-0B42797516DD@citrix.com>
Date: Thu, 23 Jul 2020 12:18:05 -0400
Message-Id: <3C024D1E-3E29-4163-BA5C-958E7E4DC1E5@gmail.com>
References: <8D97F48E-1948-4C1D-965F-0B42797516DD@citrix.com>
To: George Dunlap <george.dunlap@citrix.com>
X-Mailer: iPhone Mail (17G68)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>


--Apple-Mail-005F301A-5711-4DF3-91AD-8515E7DD9438
Content-Type: text/plain;
	charset=utf-8
Content-Transfer-Encoding: 8bit

> On Jul 16, 2020, at 07:55, George Dunlap <george.dunlap@citrix.com> wrote:
>
> Hey all,
>
> PDFs of the saved shared notes for the design sessions can be found in this zipfile:
>
> https://cryptpad.fr/file/#/2/file/LoJZpSq+vHKNoisVqdsPj3Z9/
>
> The files are labeled with the start/end time and the room in which they were held; that should help you find the notes which are pertinent to you.
>
> Please remember to post a summary of your design session to xen-devel for posterity.
>
> Thanks!
>
> -George

After 2020 design session notes are published on xen-devel, they can be indexed on a wiki page similar to the 2019 index:

  https://wiki.xenproject.org/wiki/Design_Sessions_2019

Rich



--Apple-Mail-005F301A-5711-4DF3-91AD-8515E7DD9438--


From xen-devel-bounces@lists.xenproject.org Thu Jul 23 16:23:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jul 2020 16:23:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jydzw-0002hI-2b; Thu, 23 Jul 2020 16:23:08 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=0L1b=BC=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jydzv-0002hD-74
 for xen-devel@lists.xenproject.org; Thu, 23 Jul 2020 16:23:07 +0000
X-Inumbo-ID: c9594d9e-cd00-11ea-a2d0-12813bfff9fa
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c9594d9e-cd00-11ea-a2d0-12813bfff9fa;
 Thu, 23 Jul 2020 16:23:05 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595521385;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:content-transfer-encoding:in-reply-to;
 bh=pnROkcEpzt5hqqMgmqaKRrmNaTL4b1rnk61ZyREWxOE=;
 b=GwwRckPuN9CZXV0JIapMxBsl4NokNF+orfmrVvOSUgMU5CQXygZKXV3y
 nBSHGlc/G6T7lvGffPF8/Gzduey/TSkZzcOjrEI3PzeVXUUGatUycgDx5
 rtj17PZbSaZ0psZLvYwKZsuCctMwGUuThe8mhvIXcRKN5rRixKeu7/QoG U=;
Authentication-Results: esa1.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: SWN1Qv9Vg9UuCVerUDK4H3bgKeOadVxeZGsvJXl6PnccOSKOYBTcvN9I6wjT6IwJFkPdhXIRw0
 kFcKo29eMjnJoS2WsGGcaZyca03MQjcyw9U9320DxG5Asn9dsHmVHne9Bb6/eAg5+tv33V2pMR
 vEJbAaT3x6DNSYYJ7UfUjgvb+4YbrSOkcFVfMSfCiq8087EDxZOi+vssGJbntjDi+9FJZH7Mfe
 PdndOJ+pY4J7QtauKXpYcPYITi03og2YKjxGzu/uKM3rK82oV5/A3wPsVz9EOMq3DVQzMSI2KP
 Ebg=
X-SBRS: 2.7
X-MesageID: 23391925
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,387,1589256000"; d="scan'208";a="23391925"
Date: Thu, 23 Jul 2020 18:22:56 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: =?utf-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>, David Hildenbrand
 <david@redhat.com>
Subject: Re: [PATCH 3/3] memory: introduce an option to force onlining of
 hotplug memory
Message-ID: <20200723162256.GI7191@Air-de-Roger>
References: <20200723084523.42109-1-roger.pau@citrix.com>
 <20200723084523.42109-4-roger.pau@citrix.com>
 <21490d49-b2cf-a398-0609-8010bdb0b004@redhat.com>
 <20200723122300.GD7191@Air-de-Roger>
 <e94d9556-f615-bbe2-07d2-08958969ee5f@redhat.com>
 <20200723135930.GH7191@Air-de-Roger>
 <82b131f4-8f50-cd49-65cf-9a87d51b5555@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <82b131f4-8f50-cd49-65cf-9a87d51b5555@suse.com>
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, linux-kernel@vger.kernel.org,
 linux-mm@kvack.org, xen-devel@lists.xenproject.org,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Andrew Morton <akpm@linux-foundation.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Thu, Jul 23, 2020 at 05:10:03PM +0200, Jürgen Groß wrote:
> On 23.07.20 15:59, Roger Pau Monné wrote:
> > On Thu, Jul 23, 2020 at 03:22:49PM +0200, David Hildenbrand wrote:
> > > On 23.07.20 14:23, Roger Pau Monné wrote:
> > > > On Thu, Jul 23, 2020 at 01:37:03PM +0200, David Hildenbrand wrote:
> > > > > On 23.07.20 10:45, Roger Pau Monne wrote:
> > > > > > Add an extra option to add_memory_resource that overrides the memory
> > > > > > hotplug online behavior in order to force onlining of memory from
> > > > > > add_memory_resource unconditionally.
> > > > > > 
> > > > > > This is required for the Xen balloon driver, that must run the
> > > > > > online page callback in order to correctly process the newly added
> > > > > > memory region, note this is an unpopulated region that is used by Linux
> > > > > > to either hotplug RAM or to map foreign pages from other domains, and
> > > > > > hence memory hotplug when running on Xen can be used even without the
> > > > > > user explicitly requesting it, as part of the normal operations of the
> > > > > > OS when attempting to map memory from a different domain.
> > > > > > 
> > > > > > Setting a different default value of memhp_default_online_type when
> > > > > > attaching the balloon driver is not a robust solution, as the user (or
> > > > > > distro init scripts) could still change it and thus break the Xen
> > > > > > balloon driver.
> > > > > 
> > > > > I think we discussed this a couple of times before (even triggered by my
> > > > > request), and this is the responsibility of user space to configure. Usually
> > > > > distros have udev rules to online memory automatically. Especially, user
> > > > > space should be able to configure *how* to online memory.
> > > > 
> > > > Note (as per the commit message) that in the specific case I'm
> > > > referring to the memory hotplugged by the Xen balloon driver will be
> > > > an unpopulated range to be used internally by certain Xen subsystems,
> > > > like the xen-blkback or the privcmd drivers. The addition of such
> > > > blocks of (unpopulated) memory can happen without the user explicitly
> > > > requesting it, and hence not even aware that such a hotplug process is taking
> > > > place. To be clear: no actual RAM will be added to the system.
> > > 
> > > Okay, but there is also the case where Xen will actually hotplug memory
> > > using this same handler IIRC (at least I've read papers about it). Both
> > > are using the same handler, correct?
> > 
> > Yes, it's used for this dual purpose, which I have to admit I don't
> > like that much either.
> > 
> > One set of pages should be clearly used for RAM memory hotplug, and
> > the other to map foreign pages that are not related to memory hotplug,
> > it's just that we happen to need a physical region with backing struct
> > pages.
> > 
> > > > 
> > > > > It's the admin/distro responsibility to configure this properly. In case
> > > > > this doesn't happen (or as you say, users change it), bad luck.
> > > > > 
> > > > > E.g., virtio-mem takes care to not add more memory in case it is not
> > > > > getting onlined. I remember hyper-v has similar code to at least wait a
> > > > > bit for memory to get onlined.
> > > > 
> > > > I don't think VirtIO or Hyper-V use the hotplug system in the same way
> > > > as Xen, as said this is done to add unpopulated memory regions that
> > > > will be used to map foreign memory (from other domains) by Xen drivers
> > > > on the system.
> > > 
> > > Indeed, if the memory is never exposed to the buddy (and all you need is
> > > struct pages + a kernel virtual mapping), I wonder if
> > > memremap/ZONE_DEVICE is what you want?
> > 
> > I'm certainly not familiar with the Linux memory subsystem, but if
> > that gets us a backing struct page and a kernel mapping then I would
> > say yes.
> > 
> > > Then you won't have user-visible
> > > memory blocks created with unclear online semantics, partially involving
> > > the buddy.
> > 
> > Seems like a fine solution.
> > 
> > Juergen: would you be OK to use a separate page-list for
> > alloc_xenballooned_pages on HVM/PVH using the logic described by
> > David?
> > 
> > I guess I would leave PV as-is, since it already has this reserved
> > region to map foreign pages.
> 
> I would really like a common solution, especially as it would enable
> pv driver domains to use that feature, too.

I think PV is much easier in that regard, as it doesn't have MMIO
holes except when using PCI passthrough, and in that case it's
trivial to identify them.

However, on HVM/PVH this is not so trivial. I'm certainly not opposed
to a solution that covers both, but ATM I would really like to get
something working for PVH dom0, or else it's not usable on Linux IMO.

> And finding a region for this memory zone in PVH dom0 should be common
> with PV dom0 after all. We don't want to collide with either PCI space
> or hotplug memory.

Right, we could use the native memory map for that on dom0, and maybe
create a custom resource for the Xen balloon driver instead of
allocating from iomem_resource?

DomUs are more tricky, as a guest has no idea where mappings can be
safely placed; maybe we will have to resort to iomem_resource in that
case, as I don't see many other options due to the lack of information
from Xen.

I also think that ZONE_DEVICE will need some adjustments; for one, the
types in enum memory_type don't seem to be suitable for Xen, as they
are either specific to DAX or PCI. I gave allocate_resource plus
memremap_pages a try, but that didn't seem to fly; I will need to
investigate further.

Maybe we can resort to something even simpler than memremap_pages? I
certainly have very little idea of how this is supposed to be used,
but dev_pagemap seems overly complex for what we are trying to
achieve.
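
[For reference, the memremap_pages()-based approach being discussed might
look roughly like the pseudocode below. This is an untested sketch against
the Linux ZONE_DEVICE interfaces, not an implementation; in particular,
MEMORY_DEVICE_XEN is a hypothetical stand-in for whatever new (or relaxed
existing) enum memory_type value would have to be added, as noted above:

```c
/* Rough pseudocode: back alloc_xenballooned_pages() with ZONE_DEVICE
 * instead of add_memory_resource() + onlining through the buddy. */
static struct dev_pagemap xen_pgmap;

static void *xen_memremap_unpopulated(unsigned long nr_pages)
{
        struct resource *res = ...; /* allocate_resource(&iomem_resource, ...)
                                     * to find a free physical range that does
                                     * not collide with RAM, PCI or hotplug */

        xen_pgmap.type = MEMORY_DEVICE_XEN; /* hypothetical new type */
        xen_pgmap.res = *res;

        /* Creates struct pages and a kernel mapping for the range,
         * without ever exposing the pages to the buddy allocator. */
        return memremap_pages(&xen_pgmap, NUMA_NO_NODE);
}
```
]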

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Thu Jul 23 17:05:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jul 2020 17:05:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyeeQ-0006Lb-J4; Thu, 23 Jul 2020 17:04:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=1OPV=BC=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jyeeO-0006LW-MG
 for xen-devel@lists.xenproject.org; Thu, 23 Jul 2020 17:04:56 +0000
X-Inumbo-ID: a0e1fb95-cd06-11ea-8757-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a0e1fb95-cd06-11ea-8757-bc764e2007e4;
 Thu, 23 Jul 2020 17:04:55 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=Syxb0daW+OdrF+nQx4GmiQbu7VPA+Q5+5LZXAAjPSpg=; b=or8ZfDVK06yjVX+JhvGS0z+743
 h1FcKRAltDcyD1bUK0KoDK2/7G5JWZiWVQ1wX6JPU4yrQSY35Z/ZNpsLV6MdFvU3Mb2RpL5fyGXih
 dbETLVapFegPSZzMdZDqHO+OX3YSJXL23FIAsyVoK3Wt8Ex+CZNlayNvW2F1H4JqBJsw=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jyeeN-00055n-Cr; Thu, 23 Jul 2020 17:04:55 +0000
Received: from 54-240-197-238.amazon.com ([54.240.197.238]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jyeeN-00014A-59; Thu, 23 Jul 2020 17:04:55 +0000
Subject: Re: Porting Xen to Jetson Nano
To: Christopher Clark <christopher.w.clark@gmail.com>,
 Srinivas Bangalore <srini@yujala.com>
References: <002801d66051$90fe2300$b2fa6900$@yujala.com>
 <CACMJ4GYQUXNGrqq_6wFLX4actMgTat-i5ThhS21Bjy3HO52bUQ@mail.gmail.com>
From: Julien Grall <julien@xen.org>
Message-ID: <c04b542c-53b9-ee0d-981f-53ea4100f139@xen.org>
Date: Thu, 23 Jul 2020 18:04:53 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:68.0)
 Gecko/20100101 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <CACMJ4GYQUXNGrqq_6wFLX4actMgTat-i5ThhS21Bjy3HO52bUQ@mail.gmail.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi,

On 23/07/2020 05:26, Christopher Clark wrote:
> On Wed, Jul 22, 2020 at 10:59 AM Srinivas Bangalore <srini@yujala.com> wrote:
>> Dear Xen experts,
>>
>> Would greatly appreciate some hints on how to move forward with this one…
> 
> Hi Srini,
> 
> I don't have any strong recommendations for you, but I do want to say
> that I'm very happy to see you taking this project on and I am hoping
> for your success. I have a newly-arrived Jetson Nano sitting on my
> desk here, purchased with the intention of getting Xen up and running
> on it, that I just haven't got to work on yet. I'm also familiar with
> Chris Patterson, Kyle Temkin and Ian Campbell's previous Tegra Jetson
> patches and it would be great to see some further progress made from
> those.

I agree that it would be good to have the support in upstream!

> 
> In my recent experience with the Raspberry Pi 4, one basic observation
> with ARM kernel bringup is that if your device tree isn't good, your
> dom0 kernel can be missing the configuration it needs to use the
> serial port correctly and you don't get any diagnostics from it after
> Xen attempts to launch it, so I would just patch the right serial port
> config directly into your Linux kernel (eg. hardcode specific things
> onto the kernel command line) so you're not messing about with that
> any more.
> 
> The other thing I would recommend is patching in some printks into the
> earliest part of the Xen parts of the Dom0 Linux kernel start code.
> Others who are more familiar with Xen on ARM may have some better
> recommendations, but linux/arch/arm/xen/enlighten.c has a function
> xen_guest_init that looks like a good place to stuff some extra
> printks for some early proof-of-entry from your kernel, and that way
> you'll have some indication whether execution has actually commenced
> in there.

Linux provides earlyprintk facilities that can be used under Xen. To 
enable them, build your kernel with CONFIG_EARLY_PRINTK=y and 
CONFIG_XEN=y, then pass earlyprintk=xenboot on the kernel command line.

Note that Linux needs to detect that it is running on Xen before 
earlyprintk works. If you need output earlier than that, what I usually 
do is hack xen_raw_console_write() (in drivers/tty/hvc/hvc_xen.c) and 
replace 'if (xen_domain())' with 'if (1)'.
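
[For illustration, that hack amounts to a one-line change along these 
lines; the surrounding context is approximate rather than an exact diff 
against any particular kernel version:

```diff
--- a/drivers/tty/hvc/hvc_xen.c
+++ b/drivers/tty/hvc/hvc_xen.c
@@ void xen_raw_console_write(const char *str)
-	if (xen_domain())
+	if (1) /* force early console output before Xen is detected */
```
]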

> 
> I don't think you're going to get a great deal of enthusiasm on this
> list for Xen 4.8.5, unfortunately; most people around here work off
> Xen's staging branch, and I'd be surprised to hear of anyone having
> tried a 5.7 Linux kernel with Xen 4.8.5. I can understand why you
> might start there from the existing patch series though.

Right, 4.8.5 is now out of support and we have improved Xen quite a lot 
since then. As a general recommendation, I would suggest moving the 
series to the latest staging once you get it working on 4.8.5.

However, I don't see any reason why 5.7 wouldn't boot on 4.8.5. I will 
have a look at your stack trace and answer there.

Best regards,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Jul 23 17:20:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jul 2020 17:20:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyesv-0007Os-0u; Thu, 23 Jul 2020 17:19:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=1OPV=BC=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jyesu-0007On-48
 for xen-devel@lists.xenproject.org; Thu, 23 Jul 2020 17:19:56 +0000
X-Inumbo-ID: b94691c0-cd08-11ea-875a-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b94691c0-cd08-11ea-875a-bc764e2007e4;
 Thu, 23 Jul 2020 17:19:54 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:References:Cc:To:From:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=/KTtZDsjU5gr3Y8EtaXXDWN9mScHGwOdKToqKrvpHwI=; b=VU+wBg396S7bAS1jUQKpRAH/WZ
 QJLzMyxg3dCw/L4abOPOx7b1QpVHALW3E99OK4GkRBCPEaLuMNakQ8OiL1B/HXFX04ohSgWYWtiTq
 d01roKyD3kRseUDF0FOi75F8GjIkghlwUhNkmJv1AkSX0foWulyBfLZ1EzdtsSuw9yhI=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jyesr-0005O3-On; Thu, 23 Jul 2020 17:19:53 +0000
Received: from 54-240-197-238.amazon.com ([54.240.197.238]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jyesr-00021B-H1; Thu, 23 Jul 2020 17:19:53 +0000
Subject: Re: Porting Xen to Jetson Nano
From: Julien Grall <julien@xen.org>
To: Christopher Clark <christopher.w.clark@gmail.com>,
 Srinivas Bangalore <srini@yujala.com>
References: <002801d66051$90fe2300$b2fa6900$@yujala.com>
 <CACMJ4GYQUXNGrqq_6wFLX4actMgTat-i5ThhS21Bjy3HO52bUQ@mail.gmail.com>
 <c04b542c-53b9-ee0d-981f-53ea4100f139@xen.org>
Message-ID: <14765102-9ef0-ea0a-823f-6a529a617cfd@xen.org>
Date: Thu, 23 Jul 2020 18:19:52 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:68.0)
 Gecko/20100101 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <c04b542c-53b9-ee0d-981f-53ea4100f139@xen.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>



On 23/07/2020 18:04, Julien Grall wrote:
> Hi,
> 
> On 23/07/2020 05:26, Christopher Clark wrote:
>> On Wed, Jul 22, 2020 at 10:59 AM Srinivas Bangalore <srini@yujala.com> 
>> wrote:
>>> Dear Xen experts,
>>>
>>> Would greatly appreciate some hints on how to move forward with this 
>>> one…
>>
>> Hi Srini,
>>
>> I don't have any strong recommendations for you, but I do want to say
>> that I'm very happy to see you taking this project on and I am hoping
>> for your success. I have a newly-arrived Jetson Nano sitting on my
>> desk here, purchased with the intention of getting Xen up and running
>> on it, that I just haven't got to work on yet. I'm also familiar with
>> Chris Patterson, Kyle Temkin and Ian Campbell's previous Tegra Jetson
>> patches and it would be great to see some further progress made from
>> those.
> 
> I agree that it would be good to have the support in upstream!
> 
>>
>> In my recent experience with the Raspberry Pi 4, one basic observation
>> with ARM kernel bringup is that if your device tree isn't good, your
>> dom0 kernel can be missing the configuration it needs to use the
>> serial port correctly and you don't get any diagnostics from it after
>> Xen attempts to launch it, so I would just patch the right serial port
>> config directly into your Linux kernel (eg. hardcode specific things
>> onto the kernel command line) so you're not messing about with that
>> any more.
>>
>> The other thing I would recommend is patching in some printks into the
>> earliest part of the Xen parts of the Dom0 Linux kernel start code.
>> Others who are more familiar with Xen on ARM may have some better
>> recommendations, but linux/arch/arm/xen/enlighten.c has a function
>> xen_guest_init that looks like a good place to stuff some extra
>> printks for some early proof-of-entry from your kernel, and that way
>> you'll have some indication whether execution has actually commenced
>> in there.
> 
> Linux provides earlyprintk facilities that can be used under Xen. To 
> enable them, build your kernel with CONFIG_EARLY_PRINTK=y and 
> CONFIG_XEN=y, then pass earlyprintk=xenboot on the kernel command line.
> 
> Note that Linux needs to detect that it is running on Xen before 
> earlyprintk works.

Hmmm... I forgot that earlyprintk is x86 only. On Arm, you want to use 
earlycon=xenboot. This should be available as long as you build Linux 
with CONFIG_XEN=y. No need to detect Xen. However...
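
[Concretely, the Arm dom0 kernel command line would then carry something 
like the following; console=hvc0 is the usual Xen PV console and is only 
illustrative here, the relevant part is the earlycon=xenboot token:

console=hvc0 earlycon=xenboot

]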

> If you need output earlier than that, what I usually do is hack 
> xen_raw_console_write() (in drivers/tty/hvc/hvc_xen.c) and replace 'if 
> (xen_domain())' with 'if (1)'.

... this point is still valid if you want to use earlycon before it is 
actually initialized.

Apologies for the confusion.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Jul 23 17:40:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jul 2020 17:40:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyfCR-0001Ta-PE; Thu, 23 Jul 2020 17:40:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=64jQ=BC=redhat.com=david@srs-us1.protection.inumbo.net>)
 id 1jyfCQ-0001Pt-Ce
 for xen-devel@lists.xenproject.org; Thu, 23 Jul 2020 17:40:06 +0000
X-Inumbo-ID: 8a5c92b2-cd0b-11ea-8761-bc764e2007e4
Received: from us-smtp-delivery-1.mimecast.com (unknown [207.211.31.81])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 8a5c92b2-cd0b-11ea-8761-bc764e2007e4;
 Thu, 23 Jul 2020 17:40:03 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
 s=mimecast20190719; t=1595526003;
 h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
 content-transfer-encoding:content-transfer-encoding:
 in-reply-to:in-reply-to:references:references:autocrypt:autocrypt;
 bh=JQQuv+xfMQyARaKSLnM372u/UvVcsk6Xh9qk8GAU3yE=;
 b=ZZgPuZIdS0HIo+8XiLgpLworvWxc9THkhnItM7RICiMZCbSZHmRUlIPxoIqhDt6a7Xep9C
 iRhz0YkzVUWwqvohGv/rxr62BoHDFeBvRfahT1/P2915heJbwo8PMKX6ELb1JhQo+C/Fgw
 49+4mqIVkGW/UA5daCl6dJwdINyzTKQ=
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-451-hY77vW-TO3KITW_4eGbbpA-1; Thu, 23 Jul 2020 13:39:59 -0400
X-MC-Unique: hY77vW-TO3KITW_4eGbbpA-1
Received: from smtp.corp.redhat.com (int-mx05.intmail.prod.int.phx2.redhat.com
 [10.5.11.15])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id B145F800685;
 Thu, 23 Jul 2020 17:39:57 +0000 (UTC)
Received: from [10.36.114.90] (ovpn-114-90.ams2.redhat.com [10.36.114.90])
 by smtp.corp.redhat.com (Postfix) with ESMTP id DC86E71D00;
 Thu, 23 Jul 2020 17:39:55 +0000 (UTC)
Subject: Re: [PATCH 3/3] memory: introduce an option to force onlining of
 hotplug memory
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
References: <20200723084523.42109-1-roger.pau@citrix.com>
 <20200723084523.42109-4-roger.pau@citrix.com>
 <21490d49-b2cf-a398-0609-8010bdb0b004@redhat.com>
 <20200723122300.GD7191@Air-de-Roger>
 <e94d9556-f615-bbe2-07d2-08958969ee5f@redhat.com>
 <20200723135930.GH7191@Air-de-Roger>
 <82b131f4-8f50-cd49-65cf-9a87d51b5555@suse.com>
 <20200723162256.GI7191@Air-de-Roger>
From: David Hildenbrand <david@redhat.com>
Autocrypt: addr=david@redhat.com; prefer-encrypt=mutual; keydata=
 mQINBFXLn5EBEAC+zYvAFJxCBY9Tr1xZgcESmxVNI/0ffzE/ZQOiHJl6mGkmA1R7/uUpiCjJ
 dBrn+lhhOYjjNefFQou6478faXE6o2AhmebqT4KiQoUQFV4R7y1KMEKoSyy8hQaK1umALTdL
 QZLQMzNE74ap+GDK0wnacPQFpcG1AE9RMq3aeErY5tujekBS32jfC/7AnH7I0v1v1TbbK3Gp
 XNeiN4QroO+5qaSr0ID2sz5jtBLRb15RMre27E1ImpaIv2Jw8NJgW0k/D1RyKCwaTsgRdwuK
 Kx/Y91XuSBdz0uOyU/S8kM1+ag0wvsGlpBVxRR/xw/E8M7TEwuCZQArqqTCmkG6HGcXFT0V9
 PXFNNgV5jXMQRwU0O/ztJIQqsE5LsUomE//bLwzj9IVsaQpKDqW6TAPjcdBDPLHvriq7kGjt
 WhVhdl0qEYB8lkBEU7V2Yb+SYhmhpDrti9Fq1EsmhiHSkxJcGREoMK/63r9WLZYI3+4W2rAc
 UucZa4OT27U5ZISjNg3Ev0rxU5UH2/pT4wJCfxwocmqaRr6UYmrtZmND89X0KigoFD/XSeVv
 jwBRNjPAubK9/k5NoRrYqztM9W6sJqrH8+UWZ1Idd/DdmogJh0gNC0+N42Za9yBRURfIdKSb
 B3JfpUqcWwE7vUaYrHG1nw54pLUoPG6sAA7Mehl3nd4pZUALHwARAQABtCREYXZpZCBIaWxk
 ZW5icmFuZCA8ZGF2aWRAcmVkaGF0LmNvbT6JAlgEEwEIAEICGwMGCwkIBwMCBhUIAgkKCwQW
 AgMBAh4BAheAAhkBFiEEG9nKrXNcTDpGDfzKTd4Q9wD/g1oFAl8Ox4kFCRKpKXgACgkQTd4Q
 9wD/g1oHcA//a6Tj7SBNjFNM1iNhWUo1lxAja0lpSodSnB2g4FCZ4R61SBR4l/psBL73xktp
 rDHrx4aSpwkRP6Epu6mLvhlfjmkRG4OynJ5HG1gfv7RJJfnUdUM1z5kdS8JBrOhMJS2c/gPf
 wv1TGRq2XdMPnfY2o0CxRqpcLkx4vBODvJGl2mQyJF/gPepdDfcT8/PY9BJ7FL6Hrq1gnAo4
 3Iv9qV0JiT2wmZciNyYQhmA1V6dyTRiQ4YAc31zOo2IM+xisPzeSHgw3ONY/XhYvfZ9r7W1l
 pNQdc2G+o4Di9NPFHQQhDw3YTRR1opJaTlRDzxYxzU6ZnUUBghxt9cwUWTpfCktkMZiPSDGd
 KgQBjnweV2jw9UOTxjb4LXqDjmSNkjDdQUOU69jGMUXgihvo4zhYcMX8F5gWdRtMR7DzW/YE
 BgVcyxNkMIXoY1aYj6npHYiNQesQlqjU6azjbH70/SXKM5tNRplgW8TNprMDuntdvV9wNkFs
 9TyM02V5aWxFfI42+aivc4KEw69SE9KXwC7FSf5wXzuTot97N9Phj/Z3+jx443jo2NR34XgF
 89cct7wJMjOF7bBefo0fPPZQuIma0Zym71cP61OP/i11ahNye6HGKfxGCOcs5wW9kRQEk8P9
 M/k2wt3mt/fCQnuP/mWutNPt95w9wSsUyATLmtNrwccz63W5Ag0EVcufkQEQAOfX3n0g0fZz
 Bgm/S2zF/kxQKCEKP8ID+Vz8sy2GpDvveBq4H2Y34XWsT1zLJdvqPI4af4ZSMxuerWjXbVWb
 T6d4odQIG0fKx4F8NccDqbgHeZRNajXeeJ3R7gAzvWvQNLz4piHrO/B4tf8svmRBL0ZB5P5A
 2uhdwLU3NZuK22zpNn4is87BPWF8HhY0L5fafgDMOqnf4guJVJPYNPhUFzXUbPqOKOkL8ojk
 CXxkOFHAbjstSK5Ca3fKquY3rdX3DNo+EL7FvAiw1mUtS+5GeYE+RMnDCsVFm/C7kY8c2d0G
 NWkB9pJM5+mnIoFNxy7YBcldYATVeOHoY4LyaUWNnAvFYWp08dHWfZo9WCiJMuTfgtH9tc75
 7QanMVdPt6fDK8UUXIBLQ2TWr/sQKE9xtFuEmoQGlE1l6bGaDnnMLcYu+Asp3kDT0w4zYGsx
 5r6XQVRH4+5N6eHZiaeYtFOujp5n+pjBaQK7wUUjDilPQ5QMzIuCL4YjVoylWiBNknvQWBXS
 lQCWmavOT9sttGQXdPCC5ynI+1ymZC1ORZKANLnRAb0NH/UCzcsstw2TAkFnMEbo9Zu9w7Kv
 AxBQXWeXhJI9XQssfrf4Gusdqx8nPEpfOqCtbbwJMATbHyqLt7/oz/5deGuwxgb65pWIzufa
 N7eop7uh+6bezi+rugUI+w6DABEBAAGJAjwEGAEIACYCGwwWIQQb2cqtc1xMOkYN/MpN3hD3
 AP+DWgUCXw7HsgUJEqkpoQAKCRBN3hD3AP+DWrrpD/4qS3dyVRxDcDHIlmguXjC1Q5tZTwNB
 boaBTPHSy/Nksu0eY7x6HfQJ3xajVH32Ms6t1trDQmPx2iP5+7iDsb7OKAb5eOS8h+BEBDeq
 3ecsQDv0fFJOA9ag5O3LLNk+3x3q7e0uo06XMaY7UHS341ozXUUI7wC7iKfoUTv03iO9El5f
 XpNMx/YrIMduZ2+nd9Di7o5+KIwlb2mAB9sTNHdMrXesX8eBL6T9b+MZJk+mZuPxKNVfEQMQ
 a5SxUEADIPQTPNvBewdeI80yeOCrN+Zzwy/Mrx9EPeu59Y5vSJOx/z6OUImD/GhX7Xvkt3kq
 Er5KTrJz3++B6SH9pum9PuoE/k+nntJkNMmQpR4MCBaV/J9gIOPGodDKnjdng+mXliF3Ptu6
 3oxc2RCyGzTlxyMwuc2U5Q7KtUNTdDe8T0uE+9b8BLMVQDDfJjqY0VVqSUwImzTDLX9S4g/8
 kC4HRcclk8hpyhY2jKGluZO0awwTIMgVEzmTyBphDg/Gx7dZU1Xf8HFuE+UZ5UDHDTnwgv7E
 th6RC9+WrhDNspZ9fJjKWRbveQgUFCpe1sa77LAw+XFrKmBHXp9ZVIe90RMe2tRL06BGiRZr
 jPrnvUsUUsjRoRNJjKKA/REq+sAnhkNPPZ/NNMjaZ5b8Tovi8C0tmxiCHaQYqj7G2rgnT0kt
 WNyWQQ==
Organization: Red Hat GmbH
Message-ID: <4ff380e9-4b16-4cd0-7753-c2b89bd8ac6b@redhat.com>
Date: Thu, 23 Jul 2020 19:39:54 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.9.0
MIME-Version: 1.0
In-Reply-To: <20200723162256.GI7191@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.15
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, linux-kernel@vger.kernel.org,
 linux-mm@kvack.org, xen-devel@lists.xenproject.org,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Andrew Morton <akpm@linux-foundation.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 23.07.20 18:22, Roger Pau Monné wrote:
> On Thu, Jul 23, 2020 at 05:10:03PM +0200, Jürgen Groß wrote:
>> On 23.07.20 15:59, Roger Pau Monné wrote:
>>> On Thu, Jul 23, 2020 at 03:22:49PM +0200, David Hildenbrand wrote:
>>>> On 23.07.20 14:23, Roger Pau Monné wrote:
>>>>> On Thu, Jul 23, 2020 at 01:37:03PM +0200, David Hildenbrand wrote:
>>>>>> On 23.07.20 10:45, Roger Pau Monne wrote:
>>>>>>> Add an extra option to add_memory_resource that overrides the memory
>>>>>>> hotplug online behavior in order to force onlining of memory from
>>>>>>> add_memory_resource unconditionally.
>>>>>>>
>>>>>>> This is required for the Xen balloon driver, which must run the
>>>>>>> online page callback in order to correctly process the newly added
>>>>>>> memory region. Note this is an unpopulated region that is used by Linux
>>>>>>> to either hotplug RAM or to map foreign pages from other domains, and
>>>>>>> hence memory hotplug when running on Xen can be used even without the
>>>>>>> user explicitly requesting it, as part of the normal operations of the
>>>>>>> OS when attempting to map memory from a different domain.
>>>>>>>
>>>>>>> Setting a different default value of memhp_default_online_type when
>>>>>>> attaching the balloon driver is not a robust solution, as the user (or
>>>>>>> distro init scripts) could still change it and thus break the Xen
>>>>>>> balloon driver.
>>>>>>
>>>>>> I think we discussed this a couple of times before (even triggered by my
>>>>>> request), and this is the responsibility of user space to configure. Usually
>>>>>> distros have udev rules to online memory automatically. Especially, user
>>>>>> space should be able to configure *how* to online memory.
>>>>>
>>>>> Note (as per the commit message) that in the specific case I'm
>>>>> referring to, the memory hotplugged by the Xen balloon driver will be
>>>>> an unpopulated range to be used internally by certain Xen subsystems,
>>>>> like the xen-blkback or the privcmd drivers. The addition of such
>>>>> blocks of (unpopulated) memory can happen without the user explicitly
>>>>> requesting it, and hence without the user even being aware such a
>>>>> hotplug process is taking place. To be clear: no actual RAM will be
>>>>> added to the system.
>>>>
>>>> Okay, but there is also the case where XEN will actually hotplug memory
>>>> using this same handler IIRC (at least I've read papers about it). Both
>>>> are using the same handler, correct?
>>>
>>> Yes, it's used for this dual purpose, which I have to admit I don't
>>> like that much either.
>>>
>>> One set of pages should be clearly used for RAM memory hotplug, and
>>> the other to map foreign pages that are not related to memory hotplug,
>>> it's just that we happen to need a physical region with backing struct
>>> pages.
>>>
>>>>>
>>>>>> It's the admin/distro responsibility to configure this properly. In case
>>>>>> this doesn't happen (or as you say, users change it), bad luck.
>>>>>>
>>>>>> E.g., virtio-mem takes care to not add more memory in case it is not
>>>>>> getting onlined. I remember hyper-v has similar code to at least wait a
>>>>>> bit for memory to get onlined.
>>>>>
>>>>> I don't think VirtIO or Hyper-V use the hotplug system in the same way
>>>>> as Xen; as said, this is done to add unpopulated memory regions that
>>>>> will be used to map foreign memory (from other domains) by Xen drivers
>>>>> on the system.
>>>>
>>>> Indeed, if the memory is never exposed to the buddy (and all you need is
>>>> struct pages +  a kernel virtual mapping), I wonder if
>>>> memremap/ZONE_DEVICE is what you want?
>>>
>>> I'm certainly not familiar with the Linux memory subsystem, but if
>>> that gets us a backing struct page and a kernel mapping then I would
>>> say yes.
>>>
>>>> Then you won't have user-visible
>>>> memory blocks created with unclear online semantics, partially involving
>>>> the buddy.
>>>
>>> Seems like a fine solution.
>>>
>>> Juergen: would you be OK to use a separate page-list for
>>> alloc_xenballooned_pages on HVM/PVH using the logic described by
>>> David?
>>>
>>> I guess I would leave PV as-is, since it already has this reserved
>>> region to map foreign pages.
>>
>> I would really like a common solution, especially as it would enable
>> pv driver domains to use that feature, too.
> 
> I think PV is much easier in that regard, as it doesn't have MMIO
> holes except when using PCI passthrough, and in that case it's
> trivial to identify those.
> 
> However on HVM/PVH this is not so trivial. I'm certainly not opposing
> to a solution that covers both, but ATM I would really like to get
> something working for PVH dom0, or else it's not usable on Linux IMO.
> 
>> And finding a region for this memory zone in PVH dom0 should be common
>> with PV dom0 after all. We don't want to collide with either PCI space
>> or hotplug memory.
> 
> Right, we could use the native memory map for that on dom0, and maybe
> create a custom resource for the Xen balloon driver instead of
> allocating from iomem_resource?
> 
> DomUs are more tricky as a guest has no idea where mappings can be
> safely placed, maybe we will have to resort to iomem_resource in that
> case, as I don't see much other option due to the lack of information
> from Xen.
> 
> I also think that ZONE_DEVICE will need some adjustments; for one, the
> types in memory_type don't seem to be suitable for Xen, as they are
> either specific to DAX or PCI. I gave allocate_resource plus
> memremap_pages a try, but that didn't seem to fly; I will need to
> investigate further.
> 
> Maybe we can resort to something even simpler than memremap_pages? I
> certainly have very little idea of how this is supposed to be used,
> but dev_pagemap seems overly complex for what we are trying to
> achieve.

Yeah, it might require some code churn. It just feels wrong to involve
buddy concepts (e.g., onlining pages, calling memory notifiers, exposing
memory block devices) and to introduce hacks (forced onlining) just to
get a memmap+identity mapping+iomem resource. I think reserving such a
region during boot as suggested is the easiest approach, but I am
*absolutely* not an expert on all these XEN-specific things :)
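
For the archives, a very rough sketch of the memremap_pages() direction 
being discussed (kernel-internal pseudocode, untested; the pagemap type 
is exactly the open question in this thread, so MEMORY_DEVICE_GENERIC 
and the helper name below are assumptions, not existing interfaces at 
the time of writing):

```c
/* Sketch only: get struct pages + a kernel mapping for an unpopulated
 * physical range, without onlining anything through the buddy allocator
 * or creating user-visible memory block devices. */
static struct dev_pagemap xen_pgmap;

static int xen_reserve_unpopulated(struct resource *res)
{
    void *vaddr;

    /* 'res' would come from allocate_resource() against a suitable
     * parent (or a custom Xen resource, as suggested above). */
    xen_pgmap.type = MEMORY_DEVICE_GENERIC; /* hypothetical: no fitting
                                               type existed at the time */
    xen_pgmap.res = *res;

    vaddr = memremap_pages(&xen_pgmap, NUMA_NO_NODE);
    return IS_ERR(vaddr) ? PTR_ERR(vaddr) : 0;
}
```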

-- 
Thanks,

David / dhildenb



From xen-devel-bounces@lists.xenproject.org Thu Jul 23 18:04:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jul 2020 18:04:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyfa0-0003YP-RV; Thu, 23 Jul 2020 18:04:28 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=1OPV=BC=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jyfZz-0003YJ-Bq
 for xen-devel@lists.xenproject.org; Thu, 23 Jul 2020 18:04:27 +0000
X-Inumbo-ID: f17a3b90-cd0e-11ea-a2f7-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f17a3b90-cd0e-11ea-a2f7-12813bfff9fa;
 Thu, 23 Jul 2020 18:04:25 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:To:Subject:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=cJf61rxVLo+ii9E89SGksnPeWHNEfiL+W8w2zdwell0=; b=zmz0n+a8FrfRaio3PWidp7ujEj
 4ddBq97a6+QlkwjpV+pqk0tMnQolld77OrWwRggDIElyESVdJFoaeG0lyF61pDEOfwIec/rNiJLFW
 GeCHb8iSGdC473uvWQ+deGly6mo7jRNioMyuGZDQoLd2AZ7lMRFoQeKlafBi/eeHzEtM=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jyfZx-0006Oz-11; Thu, 23 Jul 2020 18:04:25 +0000
Received: from 54-240-197-238.amazon.com ([54.240.197.238]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jyfZw-0004iw-MB; Thu, 23 Jul 2020 18:04:24 +0000
Subject: Re: Porting Xen to Jetson Nano
To: Srinivas Bangalore <srini@yujala.com>, xen-devel@lists.xenproject.org,
 Christopher Clark <christopher.w.clark@gmail.com>
References: <002801d66051$90fe2300$b2fa6900$@yujala.com>
From: Julien Grall <julien@xen.org>
Message-ID: <9736680b-1c81-652b-552b-4103341bad50@xen.org>
Date: Thu, 23 Jul 2020 19:04:23 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:68.0)
 Gecko/20100101 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <002801d66051$90fe2300$b2fa6900$@yujala.com>
Content-Type: text/plain; charset=windows-1252; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 22/07/2020 18:57, Srinivas Bangalore wrote:
> Dear Xen experts,

Hello,

> Would greatly appreciate some hints on how to move forward with this one

 From your first set of original logs:

 > Xen version 4.8.5 (srinivas@) (aarch64-linux-gnu-gcc
 > (Ubuntu/Linaro 7.5.0-3ubuntu1~18.04) 7.5.0) debug=n  Sun Jul 19 07:44:00
 > PDT 2020

I would recommend compiling Xen with debug enabled (CONFIG_DEBUG=y), as 
it may provide more information about what's happening.

Also, aside from the Tegra series, do you have any other patches on top?

[...]

> (XEN) BANK[0] 0x000000a0000000-0x000000c0000000 (512MB)
> 
> (XEN) Grant table range: 0x000000fec00000-0x000000fec60000
> 
> (XEN) Loading zImage from 00000000e1000000 to 
> 00000000a0080000-00000000a223c808
> 
> (XEN) Allocating PPI 16 for event channel interrupt
> 
> (XEN) Loading dom0 DTB to 0x00000000a8000000-0x00000000a803435c

[...]

> 
> (XEN) *** Dumping CPU0 guest state (d0v0): ***
> 
> (XEN) ----[ Xen-4.8.5 arm64 debug=n Tainted: C ]----
> 
> (XEN) CPU: 0
> 
> (XEN) PC: 00000000a0080000

PC is pointing to the entry point of your kernel...

> 
> (XEN) LR: 0000000000000000
> 
> (XEN) SP_EL0: 0000000000000000
> 
> (XEN) SP_EL1: 0000000000000000
> 
> (XEN) CPSR: 000001c5 MODE:64-bit EL1h (Guest Kernel, handler)
> 
> (XEN) X0: 00000000a8000000 X1: 0000000000000000 X2: 0000000000000000
> 
> (XEN) X3: 0000000000000000 X4: 0000000000000000 X5: 0000000000000000
> 
> (XEN) X6: 0000000000000000 X7: 0000000000000000 X8: 0000000000000000
> 
> (XEN) X9: 0000000000000000 X10: 0000000000000000 X11: 0000000000000000
> 
> (XEN) X12: 0000000000000000 X13: 0000000000000000 X14: 0000000000000000
> 
> (XEN) X15: 0000000000000000 X16: 0000000000000000 X17: 0000000000000000
> 
> (XEN) X18: 0000000000000000 X19: 0000000000000000 X20: 0000000000000000
> 
> (XEN) X21: 0000000000000000 X22: 0000000000000000 X23: 0000000000000000
> 
> (XEN) X24: 0000000000000000 X25: 0000000000000000 X26: 0000000000000000
> 
> (XEN) X27: 0000000000000000 X28: 0000000000000000 FP: 0000000000000000
> 
> (XEN)
> 
> (XEN) ELR_EL1: 0000000000000000
> 
> (XEN) ESR_EL1: 00000000
> 
> (XEN) FAR_EL1: 0000000000000000
> 
> (XEN)
> 
> (XEN) SCTLR_EL1: 00c50838
> 
> (XEN) TCR_EL1: 00000000
> 
> (XEN) TTBR0_EL1: 0000000000000000
> 
> (XEN) TTBR1_EL1: 0000000000000000
> 
> (XEN)
> 
> (XEN) VTCR_EL2: 80043594
> 
> (XEN) VTTBR_EL2: 000100017f0f9000
> 
> (XEN)
> 
> (XEN) SCTLR_EL2: 30cd183d
> 
> (XEN) HCR_EL2: 000000008038663f
> 
> (XEN) TTBR0_EL2: 00000000fecfc000
> 
> (XEN)
> 
> (XEN) ESR_EL2: 8200000d

... it looks like we are taking a trap into EL2 because the guest can't 
execute the instruction. This is a bit odd, as the p2m (stage-2 
page-tables) should be configured to allow execution. It would be useful 
if you could dump the p2m walk here. The following patch should do the 
job (not compile-tested):

diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index d578a5c598dd..af1834cdf735 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -2489,9 +2489,14 @@ static void do_trap_instr_abort_guest(struct cpu_user_regs *regs,
           */
          rc = gva_to_ipa(gva, &gpa, GV2M_READ);
          if ( rc == -EFAULT )
+        {
+            printk("Unable to translate 0x%lx\n", gva);
              return; /* Try again */
+        }
      }

+    dump_p2m_walk(current->domain, gpa);
+
      switch ( fsc )
      {
      case FSC_FLT_PERM:


> 
> (XEN) HPFAR_EL2: 0000000000000000
> 
> (XEN) FAR_EL2: 00000000a0080000
> 
> (XEN)
> 
> (XEN) Guest stack trace from sp=0:
> 
> (XEN) Failed to convert stack to physical address

[...]

> It seems the DOM0 kernel did not get added to the task list.

 From a look at the dump, dom0 vCPU0 has been scheduled and is running on 
pCPU0.

> 
> Boot args for Xen and Dom0 are here:
> (XEN) Checking for initrd in /chosen
> 
> (XEN) linux,initrd limits invalid: 0000000084100000 >= 0000000084100000
> 
> (XEN) RAM: 0000000080000000 - 00000000fedfffff
> 
> (XEN) RAM: 0000000100000000 - 000000017f1fffff
> 
> (XEN)
> 
> (XEN) MODULE[0]: 00000000fc7f8000 - 00000000fc82d000 Device Tree
> 
> (XEN) MODULE[1]: 00000000e1000000 - 00000000e31bc808 Kernel       
> console=hvc0 earlyprintk=xen earlycon=xen rootfstype=ext4 rw rootwait 
> root=/dev/mmcblk0p1 rdinit=/sbin/init

You want to use earlycon=xenboot here.
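
For instance (illustrative only, adapting the module arguments quoted 
above):

```
console=hvc0 earlycon=xenboot rootfstype=ext4 rw rootwait root=/dev/mmcblk0p1 rdinit=/sbin/init
```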

> 
> (XEN) RESVD[0]: 0000000080000000 - 0000000080020000
> 
> (XEN) RESVD[1]: 00000000e3500000 - 00000000e3535000
> 
> (XEN) RESVD[2]: 00000000fc7f8000 - 00000000fc82d000
> 
> (XEN)
> 
> (XEN) Command line: console=dtuart earlyprintk=xen 
> earlycon=uart8250,mmio32,0x70006000 sync_console dom0_mem=512M 
> log_lvl=all guest_loglvl=all console_to_ring

FWIW, earlyprintk and earlycon are not understood by Xen. They are only 
useful for Dom0.

Best regards,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Jul 23 18:25:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jul 2020 18:25:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyfta-0005Pq-Ff; Thu, 23 Jul 2020 18:24:42 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=h/wE=BC=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jyftY-0005PR-AB
 for xen-devel@lists.xenproject.org; Thu, 23 Jul 2020 18:24:40 +0000
X-Inumbo-ID: c22575b4-cd11-11ea-a2fa-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c22575b4-cd11-11ea-a2fa-12813bfff9fa;
 Thu, 23 Jul 2020 18:24:34 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=dqTNTsFkhzbPlk/CLyi1a4JqpBs+MYCM5RcgM3e3NzM=; b=wmOhdBXqoK3ZhcpL5LfavPHnw
 aBSlfxVesq9OzwFC9sC5N7NxzaM95spNUeWzBj3aAbQtRlGreIfkb7J6/WRvsZd1um10KE/1sv2Xk
 4cdsXq3w0xbj+WH+ks+FzhPICQe6MhPvNNXrDJcDPHwcr5DBtrveOyVGR3wSgoIJQ6qAg=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jyftR-0006o4-Jc; Thu, 23 Jul 2020 18:24:33 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jyftR-0003TB-9B; Thu, 23 Jul 2020 18:24:33 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jyftR-0004Xn-7L; Thu, 23 Jul 2020 18:24:33 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-152152-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 152152: tolerable all pass - PUSHED
X-Osstest-Failures: xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=8a7bf75eb5bba4046c1aa278330a371545a6ecbd
X-Osstest-Versions-That: xen=ffe4f0fe17b5288e0c19955cd1ba589e6db1b0fe
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 23 Jul 2020 18:24:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 152152 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/152152/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  8a7bf75eb5bba4046c1aa278330a371545a6ecbd
baseline version:
 xen                  ffe4f0fe17b5288e0c19955cd1ba589e6db1b0fe

Last test of basis   152142  2020-07-23 10:00:30 Z    0 days
Testing same since   152152  2020-07-23 15:00:30 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   ffe4f0fe17..8a7bf75eb5  8a7bf75eb5bba4046c1aa278330a371545a6ecbd -> smoke


From xen-devel-bounces@lists.xenproject.org Thu Jul 23 18:26:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jul 2020 18:26:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyfvk-0005W7-Ta; Thu, 23 Jul 2020 18:26:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=xWck=BC=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jyfvk-0005W1-60
 for xen-devel@lists.xenproject.org; Thu, 23 Jul 2020 18:26:56 +0000
X-Inumbo-ID: 153a8c80-cd12-11ea-876c-bc764e2007e4
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 153a8c80-cd12-11ea-876c-bc764e2007e4;
 Thu, 23 Jul 2020 18:26:54 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595528814;
 h=from:to:cc:subject:date:message-id:mime-version:
 content-transfer-encoding;
 bh=U2+CiKFwpvLxh4d4IMieke3JfjjRWJo1j+sz6ZqIRBQ=;
 b=OHJFG2IdBBse5EnsfH2vLIoCxgPazZWr9VM9brC+EcLgFRmp5OSA1j2a
 4rIpERm5FbV5xNETHV2zMvN31HCsze6sXI0diYGBpY8ajznCNrACwFD9K
 Im0wgNCXwha1IsjlI+i3GfrTsRXTOjnt1fglb0FWFeohUCiXTLgU0iQcx E=;
Authentication-Results: esa2.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: huDvZPas59Cux8i69MV9rDin8lFZkNRXN/fcYdCLfJNG6dTzbtnmBbZXSyObCLxPGbU6sz0uqR
 DrnU3jqSjlqgTzHUwuuDVm4Ime9uq6gqGSRXCSShg1CySsJe3aDnCNLzzJBKf89svGttKGIWEG
 Rkxwq+zlgFSeWJ57k5ksdgCjz+W6uazB+b9o5A7ryYNW+pBsD0fw7uJdKtO4nbNGsU8wPCFUqS
 CNdtugG1M36GOiWc+1QLMR1Wa2qzDLVvB6otzOI9GZCdA5CgzFWD7GTkdTx6Gnpk9ndvrhy5vT
 zzg=
X-SBRS: 2.7
X-MesageID: 23086844
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,387,1589256000"; d="scan'208";a="23086844"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Subject: [PATCH] x86/pv: Make the PV default WRMSR path match the HVM default
Date: Thu, 23 Jul 2020 19:26:26 +0100
Message-ID: <20200723182626.7500-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Jan Beulich <JBeulich@suse.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

The current HVM default for writes to unknown MSRs is to inject #GP if the MSR
is unreadable, and discard writes otherwise. While this behaviour isn't great,
the PV default is even worse, because it swallows writes even to non-readable
MSRs, i.e. a PV guest doesn't even get a #GP fault for a write to a totally
bogus index.

Update PV to make it consistent with HVM, which will simplify the task of
making other improvements to the default MSR behaviour.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Wei Liu <wl@xen.org>
CC: Roger Pau Monné <roger.pau@citrix.com>
---
 xen/arch/x86/pv/emul-priv-op.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/xen/arch/x86/pv/emul-priv-op.c b/xen/arch/x86/pv/emul-priv-op.c
index f14552cb4b..efeb2a727e 100644
--- a/xen/arch/x86/pv/emul-priv-op.c
+++ b/xen/arch/x86/pv/emul-priv-op.c
@@ -1113,7 +1113,10 @@ static int write_msr(unsigned int reg, uint64_t val,
         }
         /* fall through */
     default:
-        if ( (rdmsr_safe(reg, temp) != 0) || (val != temp) )
+        if ( rdmsr_safe(reg, temp) )
+            break;
+
+        if ( val != temp )
     invalid:
             gdprintk(XENLOG_WARNING,
                      "Domain attempted WRMSR %08x from 0x%016"PRIx64" to 0x%016"PRIx64"\n",
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Thu Jul 23 20:44:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jul 2020 20:44:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyi4V-0001AX-5E; Thu, 23 Jul 2020 20:44:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=lw2b=BC=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1jyi4T-0001AS-Nu
 for xen-devel@lists.xenproject.org; Thu, 23 Jul 2020 20:44:05 +0000
X-Inumbo-ID: 3f3f1920-cd25-11ea-8782-bc764e2007e4
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3f3f1920-cd25-11ea-8782-bc764e2007e4;
 Thu, 23 Jul 2020 20:44:05 +0000 (UTC)
Received: from localhost (c-67-164-102-47.hsd1.ca.comcast.net [67.164.102.47])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256
 bits)) (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id EC07F2065E;
 Thu, 23 Jul 2020 20:44:03 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
 s=default; t=1595537044;
 bh=OpWpCb8rL/SmrzCCJV8ZKI0dEkRstb/48b6TMagit3Q=;
 h=Date:From:To:cc:Subject:In-Reply-To:References:From;
 b=ubCdb9HjBGjSX5kqp1zt3yREs3ZLGJz/1/+vCTnJq+m8ORwToFwpTi08c3QiMcCOl
 RLdNybg8yu6selCwVaOkoBETtIufE8uKwdrIS0CSX5WZWjh9W5a/gHCBp794S0PXAJ
 hQ/7hQfoS5NjKkASChBVvzBs42jtpnZ98wxqrfew=
Date: Thu, 23 Jul 2020 13:44:03 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Rahul Singh <rahul.singh@arm.com>
Subject: Re: [RFC PATCH v1 2/4] xen/arm: Discovering PCI devices and add the
 PCI devices in XEN.
In-Reply-To: <666df0147054dda8af13ae74a89be44c69984295.1595511416.git.rahul.singh@arm.com>
Message-ID: <alpine.DEB.2.21.2007231337140.17562@sstabellini-ThinkPad-T480s>
References: <cover.1595511416.git.rahul.singh@arm.com>
 <666df0147054dda8af13ae74a89be44c69984295.1595511416.git.rahul.singh@arm.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 andrew.cooper3@citrix.com, Bertrand.Marquis@arm.com, jbeulich@suse.com,
 xen-devel@lists.xenproject.org, nd@arm.com,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, roger.pau@citrix.com
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Thu, 23 Jul 2020, Rahul Singh wrote:
> The hardware domain is in charge of PCI enumeration: it discovers the PCI
> devices and then registers them with Xen via the PHYSDEVOP_pci_device_add
> hypercall.
> 
> Change-Id: Ie87e19741689503b4b62da911c8dc2ee318584ac

Same question about Change-Id


> Signed-off-by: Rahul Singh <rahul.singh@arm.com>
> ---
>  xen/arch/arm/physdev.c | 42 +++++++++++++++++++++++++++++++++++++++---
>  1 file changed, 39 insertions(+), 3 deletions(-)
> 
> diff --git a/xen/arch/arm/physdev.c b/xen/arch/arm/physdev.c
> index e91355fe22..274720f98a 100644
> --- a/xen/arch/arm/physdev.c
> +++ b/xen/arch/arm/physdev.c
> @@ -9,12 +9,48 @@
>  #include <xen/errno.h>
>  #include <xen/sched.h>
>  #include <asm/hypercall.h>
> -
> +#include <xen/guest_access.h>
> +#include <xsm/xsm.h>
>  
>  int do_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
>  {
> -    gdprintk(XENLOG_DEBUG, "PHYSDEVOP cmd=%d: not implemented\n", cmd);
> -    return -ENOSYS;
> +    int ret = 0;
> +
> +    switch ( cmd )
> +    {
> +#ifdef CONFIG_HAS_PCI
> +        case PHYSDEVOP_pci_device_add:
> +            {
> +                struct physdev_pci_device_add add;
> +                struct pci_dev_info pdev_info;
> +                nodeid_t node = NUMA_NO_NODE;
> +
> +                ret = -EFAULT;
> +                if ( copy_from_guest(&add, arg, 1) != 0 )
> +                    break;
> +
> +                pdev_info.is_extfn = !!(add.flags & XEN_PCI_DEV_EXTFN);
> +                if ( add.flags & XEN_PCI_DEV_VIRTFN )
> +                {
> +                    pdev_info.is_virtfn = 1;
> +                    pdev_info.physfn.bus = add.physfn.bus;
> +                    pdev_info.physfn.devfn = add.physfn.devfn;
> +                }
> +                else
> +                    pdev_info.is_virtfn = 0;
> +
> +                ret = pci_add_device(add.seg, add.bus, add.devfn,
> +                                &pdev_info, node);
> +
> +                break;
> +            }
> +#endif
> +        default:
> +            gdprintk(XENLOG_DEBUG, "PHYSDEVOP cmd=%d: not implemented\n", cmd);
> +            ret = -ENOSYS;
> +    }

I think we should make the implementation common between arm and x86 by
creating xen/common/physdev.c:do_physdev_op as a shared entry point for
PHYSDEVOP hypercall implementations. See for instance:

xen/common/sysctl.c:do_sysctl

and

xen/arch/arm/sysctl.c:arch_do_sysctl
xen/arch/x86/sysctl.c:arch_do_sysctl


Jan, Andrew, Roger, any opinions?



From xen-devel-bounces@lists.xenproject.org Thu Jul 23 21:32:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jul 2020 21:32:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyiov-0005dB-SI; Thu, 23 Jul 2020 21:32:05 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=yR9g=BC=suse.com=jfehlig@srs-us1.protection.inumbo.net>)
 id 1jyiou-0005d6-S4
 for xen-devel@lists.xenproject.org; Thu, 23 Jul 2020 21:32:04 +0000
X-Inumbo-ID: f2aa6cc0-cd2b-11ea-878b-bc764e2007e4
Received: from de-smtp-delivery-102.mimecast.com (unknown [51.163.158.102])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f2aa6cc0-cd2b-11ea-878b-bc764e2007e4;
 Thu, 23 Jul 2020 21:32:03 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com;
 s=mimecast20200619; t=1595539922;
 h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
 content-transfer-encoding:content-transfer-encoding;
 bh=D5usojrMtMUPJ71Zs1JmL1qzPiqUSpu9P6KeeaZ6KJ4=;
 b=mIH4J7vdu9023eyrDMF7HKvctERBHZf/F0unNxr6L7KMUgu6tErJ2KQPM49KNXoCD0AXnj
 PUG7Y5LwNe+4eiNfF6PHDDmurO9H9qVVeAqiWXRB8v/tU5EOfwa1sUQkMpCYFiz1g115mX
 8NBOLlA4nlyyziXfWPva4Ir9FtnhzOw=
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur04lp2054.outbound.protection.outlook.com [104.47.14.54]) (Using
 TLS) by relay.mimecast.com with ESMTP id de-mta-3-t9ctjvbmPpCyPQBcC4IjPw-1;
 Thu, 23 Jul 2020 23:32:00 +0200
X-MC-Unique: t9ctjvbmPpCyPQBcC4IjPw-1
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: lists.xenproject.org; dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=none action=none
 header.from=suse.com;
Received: from VI1PR0401MB2429.eurprd04.prod.outlook.com
 (2603:10a6:800:2c::13) by VI1PR04MB4288.eurprd04.prod.outlook.com
 (2603:10a6:803:3f::27) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3216.23; Thu, 23 Jul
 2020 21:31:58 +0000
Received: from VI1PR0401MB2429.eurprd04.prod.outlook.com
 ([fe80::7cc0:b0a4:b951:90e2]) by VI1PR0401MB2429.eurprd04.prod.outlook.com
 ([fe80::7cc0:b0a4:b951:90e2%11]) with mapi id 15.20.3216.023; Thu, 23 Jul
 2020 21:31:58 +0000
From: Jim Fehlig <jfehlig@suse.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH] OSSTEST: Install libtirpc-dev for libvirt builds
Date: Thu, 23 Jul 2020 15:31:34 -0600
Message-ID: <20200723213134.11044-1-jfehlig@suse.com>
X-Mailer: git-send-email 2.26.2
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain
X-ClientProxiedBy: FR2P281CA0004.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:a::14) To VI1PR0401MB2429.eurprd04.prod.outlook.com
 (2603:10a6:800:2c::13)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
Received: from linux-tbji.devlab.prv.suse.com (192.225.185.20) by
 FR2P281CA0004.DEUP281.PROD.OUTLOOK.COM (2603:10a6:d10:a::14) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3239.9 via Frontend Transport; Thu, 23 Jul 2020 21:31:56 +0000
X-Mailer: git-send-email 2.26.2
X-Originating-IP: [192.225.185.20]
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: ee3d7a32-9461-4b34-72ff-08d82f4fd426
X-MS-TrafficTypeDiagnostic: VI1PR04MB4288:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <VI1PR04MB428878DA85280375DDAE0DD4C6760@VI1PR04MB4288.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:765;
X-MS-Exchange-SenderADCheck: 1
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: ee3d7a32-9461-4b34-72ff-08d82f4fd426
X-MS-Exchange-CrossTenant-AuthSource: VI1PR0401MB2429.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 23 Jul 2020 21:31:58.5464 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: UzEXQwCABANF99lTU/JUncxHfQ4Ug2M0R8St64IJ7SHNjoCZpZdgLZn7EiEENBtsbdyIhVJSWu7KYnysRth/dw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR04MB4288
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Jim Fehlig <jfehlig@suse.com>, ian.jackson@eu.citrix.com
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

The check for XDR support was changed in libvirt commit d7147b3797
to use the libtirpc pkg-config module instead of complicated AC_CHECK_LIB,
AC_COMPILE_IFELSE, et al. logic. The osstest libvirt builds have been
failing since this change hit libvirt.git master. Fix it by adding
libtirpc-dev to the list of 'extra_packages' installed for libvirt
builds.

Signed-off-by: Jim Fehlig <jfehlig@suse.com>
---

I *think* this change will work for older libvirt branches too.
The old, hand-coded m4 logic should work with libtirpc-dev
installed.

 Osstest/Toolstack/libvirt.pm | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/Osstest/Toolstack/libvirt.pm b/Osstest/Toolstack/libvirt.pm
index e817f5b4..11e4d730 100644
--- a/Osstest/Toolstack/libvirt.pm
+++ b/Osstest/Toolstack/libvirt.pm
@@ -26,7 +26,7 @@ use XML::LibXML;
 
 sub new {
     my ($class, $ho, $methname,$asset) = @_;
-    my @extra_packages = qw(libavahi-client3);
+    my @extra_packages = qw(libavahi-client3 libtirpc-dev);
     my $nl_lib = "libnl-3-200";
     my $libgnutls = "libgnutls30";
 
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Thu Jul 23 21:43:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jul 2020 21:43:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyizf-0006eS-0U; Thu, 23 Jul 2020 21:43:11 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=yR9g=BC=suse.com=jfehlig@srs-us1.protection.inumbo.net>)
 id 1jyizd-0006eN-7X
 for xen-devel@lists.xenproject.org; Thu, 23 Jul 2020 21:43:09 +0000
X-Inumbo-ID: 7ee901e6-cd2d-11ea-8793-bc764e2007e4
Received: from de-smtp-delivery-102.mimecast.com (unknown [51.163.158.102])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7ee901e6-cd2d-11ea-8793-bc764e2007e4;
 Thu, 23 Jul 2020 21:43:08 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com;
 s=mimecast20200619; t=1595540587;
 h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
 content-transfer-encoding:content-transfer-encoding:
 in-reply-to:in-reply-to:references:references;
 bh=MfIlbif7i41UIQMoFHM7eyg4IilGkqBzIA0815+7Rz0=;
 b=a8g83vEf//z7nDCI8t/EC6Eo5iafvnOnX2TCWA4sxBTehWC2krY4koxeWVcweKPsIVkyLn
 p7jUC+N6YdSr6Dr4nVx7wKyPtK60wsu6yFdd8pRSrTkLtJmV+mRUmmpIg2qGSTZK3hVOs/
 H+OUXfb98bVyHbxX0SopnJsJ8XCel1o=
Received: from EUR02-VE1-obe.outbound.protection.outlook.com
 (mail-ve1eur02lp2058.outbound.protection.outlook.com [104.47.6.58]) (Using
 TLS) by relay.mimecast.com with ESMTP id
 de-mta-19-xSrivt5CPZmpFJBk9TgLgQ-1; Thu, 23 Jul 2020 23:43:05 +0200
X-MC-Unique: xSrivt5CPZmpFJBk9TgLgQ-1
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: eu.citrix.com; dkim=none (message not signed)
 header.d=none;eu.citrix.com; dmarc=none action=none header.from=suse.com;
Received: from VI1PR0401MB2429.eurprd04.prod.outlook.com
 (2603:10a6:800:2c::13) by VE1PR04MB6624.eurprd04.prod.outlook.com
 (2603:10a6:803:123::10) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3216.24; Thu, 23 Jul
 2020 21:43:04 +0000
Received: from VI1PR0401MB2429.eurprd04.prod.outlook.com
 ([fe80::7cc0:b0a4:b951:90e2]) by VI1PR0401MB2429.eurprd04.prod.outlook.com
 ([fe80::7cc0:b0a4:b951:90e2%11]) with mapi id 15.20.3216.023; Thu, 23 Jul
 2020 21:43:04 +0000
Subject: Re: [libvirt test] 151910: regressions - FAIL
From: Jim Fehlig <jfehlig@suse.com>
To: osstest service owner <osstest-admin@xenproject.org>,
 xen-devel@lists.xenproject.org
References: <osstest-151910-mainreport@xen.org>
 <5b44b5dc-bc37-bdaa-47a4-5f5b72392f45@suse.com>
Message-ID: <8aa2f4c4-752c-8339-f34c-18025e3377dc@suse.com>
Date: Thu, 23 Jul 2020 15:42:57 -0600
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.9.0
In-Reply-To: <5b44b5dc-bc37-bdaa-47a4-5f5b72392f45@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: quoted-printable
X-ClientProxiedBy: AM4PR0101CA0045.eurprd01.prod.exchangelabs.com
 (2603:10a6:200:41::13) To VI1PR0401MB2429.eurprd04.prod.outlook.com
 (2603:10a6:800:2c::13)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
Received: from [192.168.1.55] (192.225.185.20) by
 AM4PR0101CA0045.eurprd01.prod.exchangelabs.com (2603:10a6:200:41::13) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3216.21 via Frontend
 Transport; Thu, 23 Jul 2020 21:43:02 +0000
X-Originating-IP: [192.225.185.20]
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: ad254e1f-6341-4e8f-ab8a-08d82f5160df
X-MS-TrafficTypeDiagnostic: VE1PR04MB6624:
X-Microsoft-Antispam-PRVS: <VE1PR04MB662487DD4FF78AB72DD584A5C6760@VE1PR04MB6624.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:1051;
X-MS-Exchange-SenderADCheck: 1
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: ad254e1f-6341-4e8f-ab8a-08d82f5160df
X-MS-Exchange-CrossTenant-AuthSource: VI1PR0401MB2429.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 23 Jul 2020 21:43:03.9701 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: NYdBAAP9B29EbRZko2gF/rJmGMvxLCSTtYLi3JOoFk89UlkdgapSnwlauZNaMFWyLdg+Xz0CqEKbQZTauPnmYw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VE1PR04MB6624
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 7/15/20 1:53 PM, Jim Fehlig wrote:
> On 7/15/20 9:07 AM, osstest service owner wrote:
>> flight 151910 libvirt real [real]
>> http://logs.test-lab.xenproject.org/osstest/logs/151910/
>>
>> Regressions :-(
>>
>> Tests which did not succeed and are blocking,
>> including tests which could not be run:
>>   build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
>>   build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
>>   build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777
>>   build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
> 
> I see the same configure failure has been encountered since July 11
> 
> checking for XDR... no
> configure: error: You must install the libtirpc >= 0.1.10 pkg-config module to
> compile libvirt
> 
> AFAICT there have been no related changes in libvirt (which has required
> libtirpc for over two years).

Sorry for the mistake. There has been a change in libvirt:

https://gitlab.com/libvirt/libvirt/-/commit/d7147b3797380de2d159ce6324536f3e1f2d97e3

My reputation for OSSTEST patches is not the greatest, but I took a stab at
it regardless :-)

https://lists.xenproject.org/archives/html/xen-devel/2020-07/msg01208.html

Regards,
Jim



From xen-devel-bounces@lists.xenproject.org Thu Jul 23 21:44:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jul 2020 21:44:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyj0T-0006i7-Ag; Thu, 23 Jul 2020 21:44:01 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=h/wE=BC=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jyj0S-0006i0-LD
 for xen-devel@lists.xenproject.org; Thu, 23 Jul 2020 21:44:00 +0000
X-Inumbo-ID: 9d4df6f0-cd2d-11ea-8793-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9d4df6f0-cd2d-11ea-8793-bc764e2007e4;
 Thu, 23 Jul 2020 21:43:58 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=ftUeaC8FehE0MqO7jXtbywP+uaDGDL49XPJ/nd47vU8=; b=mTp64fBWM8b7oAXJVKW1XUt+i
 bKuVJmaJjMkbl2/aLQeu+tV9kqV1JC2OMzjPQE3e4P28RZCavYOv8RPM0qk8YWOgFBIzywBIhHpmS
 GwucMoDbrpNjoKCoOFdZV7qnNTgpSBIWz7mAoG+T0uxVDy2PGmBRvOYUEkhx9lhyP3XBI=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jyj0Q-0002XG-5d; Thu, 23 Jul 2020 21:43:58 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jyj0P-0002tO-TU; Thu, 23 Jul 2020 21:43:57 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jyj0P-0004mQ-Sm; Thu, 23 Jul 2020 21:43:57 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-152134-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 152134: regressions - trouble: fail/pass/starved
X-Osstest-Failures: linux-linus:test-amd64-amd64-qemuu-nested-amd:leak-check/basis/l1(16):fail:regression
 linux-linus:test-arm64-arm64-libvirt-xsm:guest-start/debian.repeat:fail:regression
 linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:guest-start/debianhvm.repeat:fail:regression
 linux-linus:test-amd64-amd64-examine:memdisk-try-append:fail:regression
 linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-qemuu-freebsd12-amd64:hosts-allocate:starved:nonblocking
 linux-linus:test-amd64-amd64-qemuu-freebsd11-amd64:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This: linux=d15be546031cf65a0fc34879beca02fd90fe7ac7
X-Osstest-Versions-That: linux=1b5044021070efa3259f3e9548dc35d1eb6aa844
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 23 Jul 2020 21:43:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 152134 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/152134/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-nested-amd 16 leak-check/basis/l1(16) fail REGR. vs. 151214
 test-arm64-arm64-libvirt-xsm 16 guest-start/debian.repeat fail REGR. vs. 151214
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 16 guest-start/debianhvm.repeat fail REGR. vs. 151214
 test-amd64-amd64-examine      4 memdisk-try-append       fail REGR. vs. 151214

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 151214
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 151214
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 151214
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 151214
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 151214
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 151214
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 151214
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 151214
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-freebsd12-amd64  2 hosts-allocate           starved n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  2 hosts-allocate           starved n/a

version targeted for testing:
 linux                d15be546031cf65a0fc34879beca02fd90fe7ac7
baseline version:
 linux                1b5044021070efa3259f3e9548dc35d1eb6aa844

Last test of basis   151214  2020-06-18 02:27:46 Z   35 days
Failing since        151236  2020-06-19 19:10:35 Z   34 days   53 attempts
Testing same since   152134  2020-07-23 03:31:31 Z    0 days    1 attempts

------------------------------------------------------------
846 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       starved 
 test-amd64-amd64-qemuu-freebsd12-amd64                       starved 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 46995 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Jul 23 21:55:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jul 2020 21:55:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyjBD-0007ji-Dm; Thu, 23 Jul 2020 21:55:07 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=h/wE=BC=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jyjBC-0007jd-Ls
 for xen-devel@lists.xenproject.org; Thu, 23 Jul 2020 21:55:06 +0000
X-Inumbo-ID: 2a92ddc2-cd2f-11ea-a328-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2a92ddc2-cd2f-11ea-a328-12813bfff9fa;
 Thu, 23 Jul 2020 21:55:05 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=kSXH90yp5ypp7TfN2q2X5RA4fUJ/oD6GbxsSq+ojnY8=; b=h9pnDhsZdY7KCzZVB0QRxOCCK
 4MyrqmqHBgTQqC6IuXj87404f06mzr1u4pyp7k56waVEc4vpPAJPhnvHfvrIReQRTJN7pDAZiPsHE
 y/OFH7FDgtHd71NpudvYoeNk9r5A1GF0k23M0dSgcL1GFy1dLRRP+iFHgLcVewSqUwtNY=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jyjBA-0002lZ-Nz; Thu, 23 Jul 2020 21:55:04 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jyjBA-0003rh-F5; Thu, 23 Jul 2020 21:55:04 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jyjBA-0000Tv-EN; Thu, 23 Jul 2020 21:55:04 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-152156-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [examine test] 152156: trouble: fail/pass/starved
X-Osstest-Failures: examine:examine-italia0:host-install:broken:regression
 examine:examine-debina0:host-install:broken:regression
 examine:examine-elbling0:hosts-allocate:starved:nonblocking
 examine:examine-chardonnay0:hosts-allocate:starved:nonblocking
 examine:examine-rimava1:hosts-allocate:starved:nonblocking
 examine:examine-chardonnay1:hosts-allocate:starved:nonblocking
 examine:examine-fiano1:hosts-allocate:starved:nonblocking
 examine:examine-godello1:hosts-allocate:starved:nonblocking
 examine:examine-huxelrebe1:hosts-allocate:starved:nonblocking
 examine:examine-pinot0:hosts-allocate:starved:nonblocking
 examine:examine-pinot1:hosts-allocate:starved:nonblocking
 examine:examine-elbling1:hosts-allocate:starved:nonblocking
 examine:examine-albana1:hosts-allocate:starved:nonblocking
 examine:examine-godello0:hosts-allocate:starved:nonblocking
 examine:examine-fiano0:hosts-allocate:starved:nonblocking
 examine:examine-albana0:hosts-allocate:starved:nonblocking
 examine:examine-debina1:hosts-allocate:starved:nonblocking
 examine:examine-huxelrebe0:hosts-allocate:starved:nonblocking
X-Osstest-Versions-That: flight=151315
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 23 Jul 2020 21:55:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 152156 examine real [real]
http://logs.test-lab.xenproject.org/osstest/logs/152156/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 examine-italia0               5 host-install           broken REGR. vs. 151315
 examine-debina0               5 host-install           broken REGR. vs. 151315

Tests which did not succeed, but are not blocking:
 examine-elbling0              2 hosts-allocate               starved  n/a
 examine-chardonnay0           2 hosts-allocate               starved  n/a
 examine-rimava1               2 hosts-allocate               starved  n/a
 examine-chardonnay1           2 hosts-allocate               starved  n/a
 examine-fiano1                2 hosts-allocate               starved  n/a
 examine-godello1              2 hosts-allocate               starved  n/a
 examine-huxelrebe1            2 hosts-allocate               starved  n/a
 examine-pinot0                2 hosts-allocate               starved  n/a
 examine-pinot1                2 hosts-allocate               starved  n/a
 examine-elbling1              2 hosts-allocate               starved  n/a
 examine-albana1               2 hosts-allocate               starved  n/a
 examine-godello0              2 hosts-allocate               starved  n/a
 examine-fiano0                2 hosts-allocate               starved  n/a
 examine-albana0               2 hosts-allocate               starved  n/a
 examine-debina1               2 hosts-allocate               starved  n/a
 examine-huxelrebe0            2 hosts-allocate               starved  n/a

baseline version:
 flight               151315

jobs:
 examine-albana0                                              starved 
 examine-albana1                                              starved 
 examine-arndale-bluewater                                    pass    
 examine-cubietruck-braque                                    pass    
 examine-chardonnay0                                          starved 
 examine-chardonnay1                                          starved 
 examine-debina0                                              fail    
 examine-debina1                                              starved 
 examine-elbling0                                             starved 
 examine-elbling1                                             starved 
 examine-fiano0                                               starved 
 examine-fiano1                                               starved 
 examine-cubietruck-gleizes                                   pass    
 examine-godello0                                             starved 
 examine-godello1                                             starved 
 examine-huxelrebe0                                           starved 
 examine-huxelrebe1                                           starved 
 examine-italia0                                              fail    
 examine-arndale-lakeside                                     pass    
 examine-laxton0                                              pass    
 examine-laxton1                                              pass    
 examine-arndale-metrocentre                                  pass    
 examine-cubietruck-metzinger                                 pass    
 examine-cubietruck-picasso                                   pass    
 examine-pinot0                                               starved 
 examine-pinot1                                               starved 
 examine-rimava1                                              starved 
 examine-rochester0                                           pass    
 examine-rochester1                                           pass    
 examine-arndale-westfield                                    pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Push not applicable.



From xen-devel-bounces@lists.xenproject.org Thu Jul 23 21:57:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jul 2020 21:57:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyjDV-0007rT-0T; Thu, 23 Jul 2020 21:57:29 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=rt52=BC=davemloft.net=davem@srs-us1.protection.inumbo.net>)
 id 1jyjDT-0007rO-Qw
 for xen-devel@lists.xenproject.org; Thu, 23 Jul 2020 21:57:27 +0000
X-Inumbo-ID: 7d893882-cd2f-11ea-8799-bc764e2007e4
Received: from shards.monkeyblade.net (unknown [2620:137:e000::1:9])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7d893882-cd2f-11ea-8799-bc764e2007e4;
 Thu, 23 Jul 2020 21:57:24 +0000 (UTC)
Received: from localhost (unknown [IPv6:2601:601:9f00:477::3d5])
 (using TLSv1 with cipher AES256-SHA (256/256 bits))
 (Client did not present a certificate)
 (Authenticated sender: davem-davemloft)
 by shards.monkeyblade.net (Postfix) with ESMTPSA id DF90611E48C62;
 Thu, 23 Jul 2020 14:40:37 -0700 (PDT)
Date: Thu, 23 Jul 2020 14:57:22 -0700 (PDT)
Message-Id: <20200723.145722.752878326752101646.davem@davemloft.net>
To: andrea.righi@canonical.com
Subject: Re: [PATCH] xen-netfront: fix potential deadlock in xennet_remove()
From: David Miller <davem@davemloft.net>
In-Reply-To: <20200722065211.GA841369@xps-13>
References: <20200722065211.GA841369@xps-13>
X-Mailer: Mew version 6.8 on Emacs 26.3
Mime-Version: 1.0
Content-Type: Text/Plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
X-Greylist: Sender succeeded SMTP AUTH, not delayed by milter-greylist-4.5.12
 (shards.monkeyblade.net [149.20.54.216]);
 Thu, 23 Jul 2020 14:40:38 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: jgross@suse.com, sstabellini@kernel.org, netdev@vger.kernel.org,
 linux-kernel@vger.kernel.org, xen-devel@lists.xenproject.org, kuba@kernel.org,
 boris.ostrovsky@oracle.com
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Andrea Righi <andrea.righi@canonical.com>
Date: Wed, 22 Jul 2020 08:52:11 +0200

> +static int xennet_remove(struct xenbus_device *dev)
> +{
> +	struct netfront_info *info = dev_get_drvdata(&dev->dev);
> +
> +	dev_dbg(&dev->dev, "%s\n", dev->nodename);

These kinds of debugging messages provide zero context and are so much
less useful than simply using tracepoints, which are more universally
available than printk debugging facilities.

Please remove all of the dev_dbg() calls from this patch.
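
[For illustration, a minimal sketch of the tracepoint alternative being
suggested might look like the following. The header, TRACE_SYSTEM name, and
event name are all hypothetical -- nothing like this exists in the mainline
xen-netfront driver; it only shows the standard TRACE_EVENT pattern:]

```c
/*
 * Illustrative sketch only: hypothetical trace header for xen-netfront.
 * Not part of the patch under review.
 */
#undef TRACE_SYSTEM
#define TRACE_SYSTEM xen_netfront

#if !defined(_TRACE_XEN_NETFRONT_H) || defined(TRACE_HEADER_MULTI_READ)
#define _TRACE_XEN_NETFRONT_H

#include <linux/tracepoint.h>

TRACE_EVENT(xennet_remove,
	TP_PROTO(const char *nodename),
	TP_ARGS(nodename),
	TP_STRUCT__entry(
		__string(nodename, nodename)	/* copy the xenbus node name */
	),
	TP_fast_assign(
		__assign_str(nodename, nodename);
	),
	TP_printk("nodename=%s", __get_str(nodename))
);

#endif /* _TRACE_XEN_NETFRONT_H */
#include <trace/define_trace.h>
```

[With CREATE_TRACE_POINTS defined in one compilation unit, the dev_dbg()
call in xennet_remove() would then become trace_xennet_remove(dev->nodename),
carrying the same information but reachable via ftrace/perf without enabling
dynamic debug.]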


From xen-devel-bounces@lists.xenproject.org Thu Jul 23 22:58:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jul 2020 22:58:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jykAC-0004tQ-I5; Thu, 23 Jul 2020 22:58:08 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=eYPZ=BC=amazon.com=prvs=466a6ed8d=anchalag@srs-us1.protection.inumbo.net>)
 id 1jykAA-0004tL-TJ
 for xen-devel@lists.xenproject.org; Thu, 23 Jul 2020 22:58:07 +0000
X-Inumbo-ID: f82505f0-cd37-11ea-a332-12813bfff9fa
Received: from smtp-fw-2101.amazon.com (unknown [72.21.196.25])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f82505f0-cd37-11ea-a332-12813bfff9fa;
 Thu, 23 Jul 2020 22:58:05 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=amazon.com; i=@amazon.com; q=dns/txt; s=amazon201209;
 t=1595545086; x=1627081086;
 h=date:from:to:cc:message-id:references:mime-version:
 content-transfer-encoding:in-reply-to:subject;
 bh=y+Ab+qpN+drP51OsAVzLhFNy+EV/cJUJK2AzFsvossY=;
 b=Q7k60bnGU9kk5Tdk9XgUbkFaaO8mXV0lPem+hleZJctIBDXgtGGBNHlR
 4IREheUs4YRzU1a6GeXsCuJ1dHgQQ/RN7ldElzKPBoxYepm3cSwBmT1GL
 V2yMAZW3DwFkIq9WhOlj5H0VJmslDY6KM4t/RZgZA68kAeJhpZ8hpULdV A=;
IronPort-SDR: dA+OGJwBfOOYugAkt1GR7i4MLo/b/uzrQfkdodJW7nqunoIzxIwSDzciIPUnI8o7Bc2FoRcL7e
 1mgdgdge/NFQ==
X-IronPort-AV: E=Sophos;i="5.75,388,1589241600"; d="scan'208";a="43610901"
Subject: Re: [PATCH v2 01/11] xen/manage: keep track of the on-going suspend
 mode
Received: from iad12-co-svc-p1-lb1-vlan2.amazon.com (HELO
 email-inbound-relay-1d-474bcd9f.us-east-1.amazon.com) ([10.43.8.2])
 by smtp-border-fw-out-2101.iad2.amazon.com with ESMTP;
 23 Jul 2020 22:58:05 +0000
Received: from EX13MTAUEE002.ant.amazon.com
 (iad55-ws-svc-p15-lb9-vlan3.iad.amazon.com [10.40.159.166])
 by email-inbound-relay-1d-474bcd9f.us-east-1.amazon.com (Postfix) with ESMTPS
 id D35D6A1FB9; Thu, 23 Jul 2020 22:57:58 +0000 (UTC)
Received: from EX13D08UEE003.ant.amazon.com (10.43.62.118) by
 EX13MTAUEE002.ant.amazon.com (10.43.62.24) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Thu, 23 Jul 2020 22:57:45 +0000
Received: from EX13MTAUEE002.ant.amazon.com (10.43.62.24) by
 EX13D08UEE003.ant.amazon.com (10.43.62.118) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Thu, 23 Jul 2020 22:57:45 +0000
Received: from dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com
 (172.22.96.68) by mail-relay.amazon.com (10.43.62.224) with Microsoft SMTP
 Server id 15.0.1497.2 via Frontend Transport; Thu, 23 Jul 2020 22:57:45 +0000
Received: by dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com (Postfix,
 from userid 4335130)
 id 5F7384CA2B; Thu, 23 Jul 2020 22:57:45 +0000 (UTC)
Date: Thu, 23 Jul 2020 22:57:45 +0000
From: Anchal Agarwal <anchalag@amazon.com>
To: Stefano Stabellini <sstabellini@kernel.org>
Message-ID: <20200723225745.GB32316@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
References: <50298859-0d0e-6eb0-029b-30df2a4ecd63@oracle.com>
 <20200715204943.GB17938@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
 <0ca3c501-e69a-d2c9-a24c-f83afd4bdb8c@oracle.com>
 <20200717191009.GA3387@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
 <5464f384-d4b4-73f0-d39e-60ba9800d804@oracle.com>
 <20200721000348.GA19610@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
 <408d3ce9-2510-2950-d28d-fdfe8ee41a54@oracle.com>
 <alpine.DEB.2.21.2007211640500.17562@sstabellini-ThinkPad-T480s>
 <20200722180229.GA32316@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
 <alpine.DEB.2.21.2007221645430.17562@sstabellini-ThinkPad-T480s>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <alpine.DEB.2.21.2007221645430.17562@sstabellini-ThinkPad-T480s>
User-Agent: Mutt/1.5.21 (2010-09-15)
Precedence: Bulk
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: eduval@amazon.com, len.brown@intel.com, peterz@infradead.org,
 benh@kernel.crashing.org, x86@kernel.org, linux-mm@kvack.org, pavel@ucw.cz,
 hpa@zytor.com, tglx@linutronix.de, kamatam@amazon.com, mingo@redhat.com,
 xen-devel@lists.xenproject.org, sblbir@amazon.com, axboe@kernel.dk,
 konrad.wilk@oracle.com, bp@alien8.de,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>, jgross@suse.com,
 netdev@vger.kernel.org, linux-pm@vger.kernel.org, rjw@rjwysocki.net,
 linux-kernel@vger.kernel.org, vkuznets@redhat.com, davem@davemloft.net,
 dwmw@amazon.co.uk, roger.pau@citrix.com
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Wed, Jul 22, 2020 at 04:49:16PM -0700, Stefano Stabellini wrote:
> On Wed, 22 Jul 2020, Anchal Agarwal wrote:
> > On Tue, Jul 21, 2020 at 05:18:34PM -0700, Stefano Stabellini wrote:
> > > On Tue, 21 Jul 2020, Boris Ostrovsky wrote:
> > > > >>>>>> +static int xen_setup_pm_notifier(void)
> > > > >>>>>> +{
> > > > >>>>>> +     if (!xen_hvm_domain())
> > > > >>>>>> +             return -ENODEV;
> > > > >>>>>>
> > > > >>>>>> I forgot --- what did we decide about non-x86 (i.e. ARM)?
> > > > >>>>> It would be great to support that; however, it's out of
> > > > >>>>> scope for this patch set.
> > > > >>>>> I’ll be happy to discuss it separately.
> > > > >>>>
> > > > >>>> I wasn't implying that this *should* work on ARM but rather whether this
> > > > >>>> will break ARM somehow (because xen_hvm_domain() is true there).
> > > > >>>>
> > > > >>>>
> > > > >>> OK, makes sense. TBH, I haven't tested this part of the code on ARM,
> > > > >>> and the series only supports x86 guest hibernation.
> > > > >>> Moreover, this notifier is there to distinguish between two PM
> > > > >>> events, PM SUSPEND and PM HIBERNATION. Now, since we only care about
> > > > >>> PM HIBERNATION, I may just remove this code and rely on the
> > > > >>> "SHUTDOWN_SUSPEND" state. However, I may have to fix other patches in
> > > > >>> the series where this check appears and gate it to x86 only, right?
> > > > >>
> > > > >>
> > > > >> I don't know what would happen if an ARM guest tries to handle hibernation
> > > > >> callbacks. The only ones that you are introducing are in block and net
> > > > >> fronts and that's arch-independent.
> > > > >>
> > > > >>
> > > > >> You do add a bunch of x86-specific code though (syscore ops), would
> > > > >> something similar be needed for ARM?
> > > > >>
> > > > >>
> > > > > I don't expect this to work out of the box on ARM. To start with,
> > > > > something similar will be needed for ARM too.
> > > > > We may still want to keep the driver code as-is.
> > > > >
> > > > > I understand the concern here wrt ARM; however, currently the support is
> > > > > only proposed for x86 guests, and similar work could be carried out for ARM.
> > > > > Also, if regular hibernation works correctly on ARM, then all that is
> > > > > needed is to fix the Xen side of things.
> > > > >
> > > > > I am not sure what could be done to achieve any assurances on the ARM side
> > > > > as far as this series is concerned.
> > >
> > > Just to clarify: new features don't need to work on ARM or require any
> > > additional effort from you to make them work on ARM. The patch series only
> > > needs not to break existing code paths (on ARM and any other platforms).
> > > It should also not make it overly difficult to implement the ARM side of
> > > things (if there is one) at some point in the future.
> > >
> > > FYI drivers/xen/manage.c is compiled and working on ARM today; however,
> > > Xen suspend/resume is not supported. I don't know for sure if
> > > guest-initiated hibernation works because I have not tested it.
> > >
> > >
> > >
> > > > If you are not sure what the effects are (or sure that it won't work) on
> > > > ARM then I'd add IS_ENABLED(CONFIG_X86) check, i.e.
> > > >
> > > >
> > > > if (!IS_ENABLED(CONFIG_X86) || !xen_hvm_domain())
> > > >       return -ENODEV;
> > >
> > > That is a good principle to have and thanks for suggesting it. However,
> > > in this specific case there is nothing in this patch that doesn't work
> > > on ARM. From an ARM perspective I think we should enable it and
> > > &xen_pm_notifier_block should be registered.
> > >
> > This question is for Boris: I think we decided to get rid of the notifier
> > in V3, as all we need to check is the SHUTDOWN_SUSPEND state, which sounds
> > plausible to me. So this check may go away. It may still be needed for the
> > syscore_ops callbacks registration.
> > > Given that all guests are HVM guests on ARM, it should work fine as is.
> > >
> > >
> > > I gave a quick look at the rest of the series and everything looks fine
> > > to me from an ARM perspective. I cannot imagine that the new freeze,
> > > thaw, and restore callbacks for net and block are going to cause any
> > > trouble on ARM. The two main x86-specific functions are
> > > xen_syscore_suspend/resume and they look trivial to implement on ARM (in
> > > the sense that they are likely going to look exactly the same.)
> > >
> > Yes, but for now, since things are not tested on ARM, I will put this
> > !IS_ENABLED(CONFIG_X86) check on the syscore_ops registration, just to be
> > safe and not break anything.
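
A minimal standalone sketch of what that guard might look like. The flags here are plain variables standing in for the kernel's IS_ENABLED(CONFIG_X86) and xen_hvm_domain(), and xen_setup_syscore_ops() is an illustrative name, not the actual patch:

```c
#include <errno.h>
#include <stdbool.h>

/* Stand-ins for IS_ENABLED(CONFIG_X86) and xen_hvm_domain(); in the
 * real patch these come from Kconfig and the Xen headers. */
static bool config_x86 = true;
static bool hvm_domain = true;

static bool syscore_registered;

/* Sketch of the registration guard: bail out early unless we are an
 * x86 HVM domain, so ARM keeps its current behaviour untouched. */
static int xen_setup_syscore_ops(void)
{
    if ( !config_x86 || !hvm_domain )
        return -ENODEV;

    syscore_registered = true; /* would call register_syscore_ops() here */
    return 0;
}
```

The point of the guard is that an ARM build simply never registers the ops, rather than registering callbacks whose behaviour there is untested.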
> > >
> > > One question for Anchal: what's going to happen if you trigger a
> > > hibernation, you have the new callbacks, but you are missing
> > > xen_syscore_suspend/resume?
> > >
> > > Is it any worse than not having the new freeze, thaw and restore
> > > callbacks at all and try to do a hibernation?
> > If the callbacks are not there, I don't expect hibernation to work correctly.
> > These callbacks take care of Xen primitives like the shared_info page,
> > grant table, sched clock, and runstate time, which are important to save the
> > correct state of the guest and bring it back up. Other patches in the series
> > add all the logic to these syscore callbacks. Freeze/thaw/restore are just
> > there at the driver level.
> 
> I meant the other way around :-)  Let me rephrase the question.
> 
> Do you think that implementing freeze/thaw/restore at the driver level
> without having xen_syscore_suspend/resume can potentially make things
> worse compared to not having freeze/thaw/restore at the driver level at
> all?
In neither case do I expect it to work; the system may just end up in a
different broken state depending on whether you register the callbacks or not.
Hibernation does not work properly for domU instances without these changes on
x86, and I am assuming the same holds for ARM.

If you do not register freeze/thaw/restore callbacks on ARM, then on
invocation of xenbus_dev_suspend the default suspend/resume callbacks will be
called for each driver, and since there is no code to save the domU's Xen
primitives state (the syscore_ops), hibernation will either fail or demand a
reboot. I do not have a setup to test the current state of ARM hibernation.

If you only register freeze/thaw/restore and no syscore_ops, it will again
fail. Since I do not have an ARM setup running, I quickly ran a similar test
on x86; it may not be an apples-to-apples comparison, but the instance failed
to resume (it got stuck, showing a huge jump in time) and required a reboot.

Now, if this doesn't happen currently when you trigger hibernation on ARM domU
instances, i.e. if the system is still alive after triggering hibernation in a
Xen guest, then not registering the callbacks may be a better idea. In that
case, maybe I need to put an arch-specific check around the registration of
the freeze/thaw/restore handlers.

Hope that answers your question.

Thanks,
Anchal



From xen-devel-bounces@lists.xenproject.org Thu Jul 23 23:39:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jul 2020 23:39:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyknj-00006v-I8; Thu, 23 Jul 2020 23:38:59 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=lw2b=BC=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1jykni-00006q-N2
 for xen-devel@lists.xenproject.org; Thu, 23 Jul 2020 23:38:58 +0000
X-Inumbo-ID: ad73a858-cd3d-11ea-87af-bc764e2007e4
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ad73a858-cd3d-11ea-87af-bc764e2007e4;
 Thu, 23 Jul 2020 23:38:57 +0000 (UTC)
Received: from localhost (c-67-164-102-47.hsd1.ca.comcast.net [67.164.102.47])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256
 bits)) (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 8340F20792;
 Thu, 23 Jul 2020 23:38:56 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
 s=default; t=1595547537;
 bh=ZbrzDEbrQZu02cy/gPITrvjMlRLRXu60g+GUNIBgVb8=;
 h=Date:From:To:cc:Subject:In-Reply-To:References:From;
 b=XYorQlumzQ9/F/aB3pFROyj4+SKAupGKnAA1jdCqO/u0o+LIZZS0CPmRtWX0HmUo/
 lipnlPs5vnFq+6WTp12WYDXxPYyIVIT3MY2zH7JzC7IojveL4Mal4ElrDLAv11pXNP
 c+LkMzpBGsuWkPVlM1Q6rfWw20hi/3wO9J3bUhDs=
Date: Thu, 23 Jul 2020 16:38:55 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Rahul Singh <rahul.singh@arm.com>
Subject: Re: [RFC PATCH v1 1/4] arm/pci: PCI setup and PCI host bridge
 discovery within XEN on ARM.
In-Reply-To: <64ebd4ef614b36a5844c52426a4a6a4a23b1f087.1595511416.git.rahul.singh@arm.com>
Message-ID: <alpine.DEB.2.21.2007231055230.17562@sstabellini-ThinkPad-T480s>
References: <cover.1595511416.git.rahul.singh@arm.com>
 <64ebd4ef614b36a5844c52426a4a6a4a23b1f087.1595511416.git.rahul.singh@arm.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: multipart/mixed; BOUNDARY="8323329-1736696476-1595527045=:17562"
Content-ID: <alpine.DEB.2.21.2007231101450.17562@sstabellini-ThinkPad-T480s>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 andrew.cooper3@citrix.com, Bertrand.Marquis@arm.com, jbeulich@suse.com,
 xen-devel@lists.xenproject.org, nd@arm.com,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, roger.pau@citrix.com
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>


+ Jan, Andrew, Roger

Please have a look at my comment on whether we should share the MMCFG
code below, feel free to ignore the rest :-)


On Thu, 23 Jul 2020, Rahul Singh wrote:
> XEN during boot will read the PCI device tree node “reg” property
> and will map the PCI config space to the XEN memory.
> 
> XEN will read the “linux, pci-domain” property from the device tree
> node and configure the host bridge segment number accordingly.
> 
> As of now "pci-host-ecam-generic" compatible board is supported.
> 
> Change-Id: If32f7748b7dc89dd37114dc502943222a2a36c49

What is this Change-Id property?


> Signed-off-by: Rahul Singh <rahul.singh@arm.com>
> ---
>  xen/arch/arm/Kconfig                |   7 +
>  xen/arch/arm/Makefile               |   1 +
>  xen/arch/arm/pci/Makefile           |   4 +
>  xen/arch/arm/pci/pci-access.c       | 101 ++++++++++++++
>  xen/arch/arm/pci/pci-host-common.c  | 198 ++++++++++++++++++++++++++++
>  xen/arch/arm/pci/pci-host-generic.c | 131 ++++++++++++++++++
>  xen/arch/arm/pci/pci.c              | 112 ++++++++++++++++
>  xen/arch/arm/setup.c                |   2 +
>  xen/include/asm-arm/device.h        |   7 +-
>  xen/include/asm-arm/pci.h           |  97 +++++++++++++-
>  10 files changed, 654 insertions(+), 6 deletions(-)
>  create mode 100644 xen/arch/arm/pci/Makefile
>  create mode 100644 xen/arch/arm/pci/pci-access.c
>  create mode 100644 xen/arch/arm/pci/pci-host-common.c
>  create mode 100644 xen/arch/arm/pci/pci-host-generic.c
>  create mode 100644 xen/arch/arm/pci/pci.c
> 
> diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig
> index 2777388265..ee13339ae9 100644
> --- a/xen/arch/arm/Kconfig
> +++ b/xen/arch/arm/Kconfig
> @@ -31,6 +31,13 @@ menu "Architecture Features"
>  
>  source "arch/Kconfig"
>  
> +config ARM_PCI
> +	bool "PCI Passthrough Support"
> +	depends on ARM_64
> +	---help---
> +
> +	  PCI passthrough support for Xen on ARM64.
> +
>  config ACPI
>  	bool "ACPI (Advanced Configuration and Power Interface) Support" if EXPERT
>  	depends on ARM_64
> diff --git a/xen/arch/arm/Makefile b/xen/arch/arm/Makefile
> index 7e82b2178c..345cb83eed 100644
> --- a/xen/arch/arm/Makefile
> +++ b/xen/arch/arm/Makefile
> @@ -6,6 +6,7 @@ ifneq ($(CONFIG_NO_PLAT),y)
>  obj-y += platforms/
>  endif
>  obj-$(CONFIG_TEE) += tee/
> +obj-$(CONFIG_ARM_PCI) += pci/
>  
>  obj-$(CONFIG_HAS_ALTERNATIVE) += alternative.o
>  obj-y += bootfdt.init.o
> diff --git a/xen/arch/arm/pci/Makefile b/xen/arch/arm/pci/Makefile
> new file mode 100644
> index 0000000000..358508b787
> --- /dev/null
> +++ b/xen/arch/arm/pci/Makefile
> @@ -0,0 +1,4 @@
> +obj-y += pci.o
> +obj-y += pci-host-generic.o
> +obj-y += pci-host-common.o
> +obj-y += pci-access.o
> diff --git a/xen/arch/arm/pci/pci-access.c b/xen/arch/arm/pci/pci-access.c
> new file mode 100644
> index 0000000000..c53ef58336
> --- /dev/null
> +++ b/xen/arch/arm/pci/pci-access.c
> @@ -0,0 +1,101 @@
> +/*
> + * Copyright (C) 2020 Arm Ltd.
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License version 2 as
> + * published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> + * GNU General Public License for more details.
> + *
> + * You should have received a copy of the GNU General Public License
> + * along with this program.  If not, see <http://www.gnu.org/licenses/>.
> + */
> +
> +#include <xen/init.h>
> +#include <xen/pci.h>
> +#include <asm/pci.h>
> +#include <xen/rwlock.h>
> +
> +static uint32_t pci_config_read(pci_sbdf_t sbdf, unsigned int reg,
> +                            unsigned int len)
> +{
> +    int rc;
> +    uint32_t val = GENMASK(0, len * 8);
> +
> +    struct pci_host_bridge *bridge = pci_find_host_bridge(sbdf.seg, sbdf.bus);
> +
> +    if ( unlikely(!bridge) )
> +    {
> +        printk(XENLOG_ERR "Unable to find bridge for "PRI_pci"\n",
> +                sbdf.seg, sbdf.bus, sbdf.dev, sbdf.fn);
> +        return val;
> +    }
> +
> +    if ( unlikely(!bridge->ops->read) )
> +        return val;
> +
> +    rc = bridge->ops->read(bridge, (uint32_t) sbdf.sbdf, reg, len, &val);
> +    if ( rc )
> +        printk(XENLOG_ERR "Failed to read reg %#x len %u for "PRI_pci"\n",
> +                reg, len, sbdf.seg, sbdf.bus, sbdf.dev, sbdf.fn);
> +
> +    return val;
> +}
> +
> +static void pci_config_write(pci_sbdf_t sbdf, unsigned int reg,
> +        unsigned int len, uint32_t val)
> +{
> +    int rc;
> +    struct pci_host_bridge *bridge = pci_find_host_bridge(sbdf.seg, sbdf.bus);
> +
> +    if ( unlikely(!bridge) )
> +    {
> +        printk(XENLOG_ERR "Unable to find bridge for "PRI_pci"\n",
> +                sbdf.seg, sbdf.bus, sbdf.dev, sbdf.fn);
> +        return;
> +    }
> +
> +    if ( unlikely(!bridge->ops->write) )
> +        return;
> +
> +    rc = bridge->ops->write(bridge, (uint32_t) sbdf.sbdf, reg, len, val);
> +    if ( rc )
> +        printk(XENLOG_ERR "Failed to write reg %#x len %u for "PRI_pci"\n",
> +                reg, len, sbdf.seg, sbdf.bus, sbdf.dev, sbdf.fn);
> +}
> +
> +/*
> + * Wrappers for all PCI configuration access functions.
> + */
> +
> +#define PCI_OP_WRITE(size, type) \
> +    void pci_conf_write##size (pci_sbdf_t sbdf,unsigned int reg, type val) \
> +{                                               \
> +    pci_config_write(sbdf, reg, size / 8, val);     \
> +}
> +
> +#define PCI_OP_READ(size, type) \
> +    type pci_conf_read##size (pci_sbdf_t sbdf, unsigned int reg)  \
> +{                                               \
> +    return pci_config_read(sbdf, reg, size / 8);     \
> +}
> +
> +PCI_OP_READ(8, u8)
> +PCI_OP_READ(16, u16)
> +PCI_OP_READ(32, u32)
> +PCI_OP_WRITE(8, u8)
> +PCI_OP_WRITE(16, u16)
> +PCI_OP_WRITE(32, u32)

This looks like a subset of xen/arch/x86/x86_64/mmconfig_64.c ?

MMCFG is supposed to cover ECAM-compliant host bridges too, if I am not
mistaken. Is there any value in sharing the code with x86? It is OK if
we don't, but I would like to understand the reasoning.



> +/*
> + * Local variables:
> + * mode: C
> + * c-file-style: "BSD"
> + * c-basic-offset: 4
> + * tab-width: 4
> + * indent-tabs-mode: nil
> + * End:
> + */
> diff --git a/xen/arch/arm/pci/pci-host-common.c b/xen/arch/arm/pci/pci-host-common.c
> new file mode 100644
> index 0000000000..c5f98be698
> --- /dev/null
> +++ b/xen/arch/arm/pci/pci-host-common.c
> @@ -0,0 +1,198 @@
> +/*
> + * Copyright (C) 2020 Arm Ltd.
> + *
> + * Based on Linux drivers/pci/ecam.c
> + * Copyright 2016 Broadcom.
> + *
> + * Based on Linux drivers/pci/controller/pci-host-common.c
> + * Based on Linux drivers/pci/controller/pci-host-generic.c
> + * Copyright (C) 2014 ARM Limited Will Deacon <will.deacon@arm.com>
> + *
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License version 2 as
> + * published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> + * GNU General Public License for more details.
> + *
> + * You should have received a copy of the GNU General Public License
> + * along with this program.  If not, see <http://www.gnu.org/licenses/>.
> + */
> +
> +#include <xen/init.h>
> +#include <xen/pci.h>
> +#include <asm/pci.h>
> +#include <xen/rwlock.h>
> +#include <xen/vmap.h>
> +
> +/*
> + * List for all the pci host bridges.
> + */
> +
> +static LIST_HEAD(pci_host_bridges);
> +
> +static bool __init dt_pci_parse_bus_range(struct dt_device_node *dev,
> +        struct pci_config_window *cfg)
> +{
> +    const __be32 *cells;
> +    uint32_t len;
> +
> +    cells = dt_get_property(dev, "bus-range", &len);
> +    /* bus-range should at least be 2 cells */
> +    if ( !cells || (len < (sizeof(*cells) * 2)) )
> +        return false;
> +
> +    cfg->busn_start = dt_next_cell(1, &cells);
> +    cfg->busn_end = dt_next_cell(1, &cells);
> +
> +    return true;
> +}
> +
> +static inline void __iomem *pci_remap_cfgspace(paddr_t start, size_t len)
> +{
> +    return ioremap_nocache(start, len);
> +}
> +
> +static void pci_ecam_free(struct pci_config_window *cfg)
> +{
> +    if ( cfg->win )
> +        iounmap(cfg->win);
> +
> +    xfree(cfg);
> +}
> +
> +static struct pci_config_window *gen_pci_init(struct dt_device_node *dev,
> +        struct pci_ecam_ops *ops)
> +{
> +    int err;
> +    struct pci_config_window *cfg;
> +    paddr_t addr, size;
> +
> +    cfg = xzalloc(struct pci_config_window);
> +    if ( !cfg )
> +        return NULL;
> +
> +    err = dt_pci_parse_bus_range(dev, cfg);
> +    if ( !err ) {
> +        cfg->busn_start = 0;
> +        cfg->busn_end = 0xff;
> +        printk(XENLOG_ERR "No bus range found for pci controller\n");
> +    } else {
> +        if ( cfg->busn_end > cfg->busn_start + 0xff )
> +            cfg->busn_end = cfg->busn_start + 0xff;
> +    }
> +
> +    /* Parse our PCI ecam register address*/
> +    err = dt_device_get_address(dev, 0, &addr, &size);
> +    if ( err )
> +        goto err_exit;

Shouldn't we handle the possibility of multiple addresses? Is it
possible to have more than one range for an ECAM compliant host bridge?


> +    cfg->phys_addr = addr;
> +    cfg->size = size;
> +    cfg->ops = ops;
> +
> +    /*
> +     * On 64-bit systems, we do a single ioremap for the whole config space
> +     * since we have enough virtual address range available.  On 32-bit, we
> +     * ioremap the config space for each bus individually.
> +     *
> +     * As of now only 64-bit is supported 32-bit is not supported.
> +     */
> +    cfg->win = pci_remap_cfgspace(cfg->phys_addr, cfg->size);
> +    if ( !cfg->win )
> +        goto err_exit_remap;
> +
> +    printk("ECAM at [mem %lx-%lx] for [bus %x-%x] \n",cfg->phys_addr,
> +            cfg->phys_addr + cfg->size - 1,cfg->busn_start,cfg->busn_end);
> +
> +    if ( ops->init ) {
> +        err = ops->init(cfg);
> +        if (err)
> +            goto err_exit;
> +    }
> +
> +    return cfg;
> +
> +err_exit_remap:
> +    printk(XENLOG_ERR "ECAM ioremap failed\n");
> +err_exit:
> +    pci_ecam_free(cfg);
> +    return NULL;
> +}
> +
> +static struct pci_host_bridge * pci_alloc_host_bridge(void)
> +{
> +    struct pci_host_bridge *bridge = xzalloc(struct pci_host_bridge);
> +
> +    if ( !bridge )
> +        return NULL;
> +
> +    INIT_LIST_HEAD(&bridge->node);
> +    return bridge;
> +}
> +
> +int pci_host_common_probe(struct dt_device_node *dev,
> +        struct pci_ecam_ops *ops)
> +{
> +    struct pci_host_bridge *bridge;
> +    struct pci_config_window *cfg;
> +    u32 segment;
> +
> +    bridge = pci_alloc_host_bridge();
> +    if ( !bridge )
> +        return -ENOMEM;
> +
> +    /* Parse and map our Configuration Space windows */
> +    cfg = gen_pci_init(dev, ops);
> +    if ( !cfg )
> +        return -ENOMEM;

In case of errors the allocated bridge is not freed.
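
As an illustrative standalone sketch of the shape of the fix (the structs and gen_pci_init_stub() below are stand-ins, not the actual Xen code), the error path would free the bridge before returning:

```c
#include <errno.h>
#include <stdlib.h>

/* Minimal stand-ins, just to show the corrected error path. */
struct pci_config_window { int dummy; };
struct pci_host_bridge { struct pci_config_window *sysdata; };

/* Stand-in for gen_pci_init(); fails when 'fail' is nonzero. */
static struct pci_config_window *gen_pci_init_stub(int fail)
{
    return fail ? NULL : calloc(1, sizeof(struct pci_config_window));
}

static int pci_host_common_probe_sketch(int fail)
{
    struct pci_host_bridge *bridge = calloc(1, sizeof(*bridge));
    struct pci_config_window *cfg;

    if ( !bridge )
        return -ENOMEM;

    cfg = gen_pci_init_stub(fail);
    if ( !cfg )
    {
        free(bridge);   /* the cleanup that is currently missing */
        return -ENOMEM;
    }

    bridge->sysdata = cfg;

    /* ... rest of probe; freed here only to keep the sketch leak-free. */
    free(cfg);
    free(bridge);
    return 0;
}
```

The same applies to the later -ENODEV return when "linux,pci-domain" is missing: both bridge and cfg would need to be released there.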


> +    bridge->dt_node = dev;
> +    bridge->sysdata = cfg;
> +    bridge->ops = &ops->pci_ops;
> +
> +    if( !dt_property_read_u32(dev, "linux,pci-domain", &segment) )
> +    {
> +        printk(XENLOG_ERR "\"linux,pci-domain\" property in not available in DT\n");
> +        return -ENODEV;
> +    }
> +
> +    bridge->segment = (u16)segment;

My understanding is that a Linux pci-domain doesn't correspond exactly
to a PCI segment. See for instance:

https://lists.gnu.org/archive/html/qemu-devel/2018-04/msg03885.html

Do we need to care about the difference? If we mean pci-domain here,
should we just call them as such instead of calling them "segments" in
Xen (if they are not segments)?


> +    list_add_tail(&bridge->node, &pci_host_bridges);

It looks like &pci_host_bridges should be an ordered list, ordered by
segment number?


> +    return 0;
> +}
> +
> +/*
> + * This function will lookup an hostbridge based on the segment and bus
> + * number.
> + */
> +struct pci_host_bridge *pci_find_host_bridge(uint16_t segment, uint8_t bus)
> +{
> +    struct pci_host_bridge *bridge;
> +    bool found = false;
> +
> +    list_for_each_entry( bridge, &pci_host_bridges, node )
> +    {
> +        if ( bridge->segment != segment )
> +            continue;
> +
> +        found = true;
> +        break;
> +    }
> +
> +    return (found) ? bridge : NULL;
> +}
> +/*
> + * Local variables:
> + * mode: C
> + * c-file-style: "BSD"
> + * c-basic-offset: 4
> + * tab-width: 4
> + * indent-tabs-mode: nil
> + * End:
> + */
> diff --git a/xen/arch/arm/pci/pci-host-generic.c b/xen/arch/arm/pci/pci-host-generic.c
> new file mode 100644
> index 0000000000..cd67b3dec6
> --- /dev/null
> +++ b/xen/arch/arm/pci/pci-host-generic.c
> @@ -0,0 +1,131 @@
> +/*
> + * Copyright (C) 2020 Arm Ltd.
> + *
> + * Based on Linux drivers/pci/controller/pci-host-common.c
> + * Based on Linux drivers/pci/controller/pci-host-generic.c
> + * Copyright (C) 2014 ARM Limited Will Deacon <will.deacon@arm.com>
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License version 2 as
> + * published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> + * GNU General Public License for more details.
> + *
> + * You should have received a copy of the GNU General Public License
> + * along with this program.  If not, see <http://www.gnu.org/licenses/>.
> + */
> +
> +#include <asm/device.h>
> +#include <asm/io.h>
> +#include <xen/pci.h>
> +#include <asm/pci.h>
> +
> +/*
> + * Function to get the config space base.
> + */
> +static void __iomem *pci_config_base(struct pci_host_bridge *bridge,
> +        uint32_t sbdf, int where)

I think the function is misnamed because reading the code below it looks
like it is not just returning the base config space address but also the
specific address we need to read/write (adding the device offset,
"where", and everything).

Maybe pci_config_get_address or something like that?


> +{
> +    struct pci_config_window *cfg = bridge->sysdata;
> +    unsigned int devfn_shift = cfg->ops->bus_shift - 8;
> +
> +    pci_sbdf_t sbdf_t = (pci_sbdf_t) sbdf ;
> +
> +    unsigned int busn = sbdf_t.bus;
> +    void __iomem *base;
> +
> +    if ( busn < cfg->busn_start || busn > cfg->busn_end )
> +        return NULL;
> +
> +    base = cfg->win + (busn << cfg->ops->bus_shift);
> +
> +    return base + (PCI_DEVFN(sbdf_t.dev, sbdf_t.fn) << devfn_shift) + where;
> +}
> +
> +int pci_ecam_config_write(struct pci_host_bridge *bridge, uint32_t sbdf,
> +        int where, int size, u32 val)
> +{
> +    void __iomem *addr;
> +
> +    addr = pci_config_base(bridge, sbdf, where);
> +    if ( !addr )
> +        return -ENODEV;
> +
> +    if ( size == 1 )
> +        writeb(val, addr);
> +    else if ( size == 2 )
> +        writew(val, addr);
> +    else
> +        writel(val, addr);

please use a switch
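
For instance, a switch over the access size, written here as a self-contained sketch against a plain buffer (the real code would use writeb/writew/writel on the __iomem address; ecam_write_sketch is an illustrative name):

```c
#include <stdint.h>
#include <string.h>

/* Standalone sketch of the suggested switch-based dispatch for the
 * ECAM config-space write; memcpy stands in for the MMIO accessors. */
static int ecam_write_sketch(uint8_t *addr, int size, uint32_t val)
{
    switch ( size )
    {
    case 1:
    {
        uint8_t v = (uint8_t)val;
        memcpy(addr, &v, 1);
        break;
    }
    case 2:
    {
        uint16_t v = (uint16_t)val;
        memcpy(addr, &v, 2);
        break;
    }
    case 4:
        memcpy(addr, &val, 4);
        break;
    default:
        return -1; /* a switch also makes rejecting bad sizes explicit */
    }

    return 0;
}
```

A switch makes the valid sizes (1/2/4) explicit instead of silently treating any other size as a 32-bit access; the read path below would take the same shape.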


> +    return 0;
> +}
> +
> +int pci_ecam_config_read(struct pci_host_bridge *bridge, uint32_t sbdf,
> +        int where, int size, u32 *val)
> +{
> +    void __iomem *addr;
> +
> +    addr = pci_config_base(bridge, sbdf, where);
> +    if ( !addr ) {
> +        *val = ~0;
> +        return -ENODEV;
> +    }
> +
> +    if ( size == 1 )
> +        *val = readb(addr);
> +    else if ( size == 2 )
> +        *val = readw(addr);
> +    else
> +        *val = readl(addr);

please use a switch


> +    return 0;
> +}
> +
> +/* ECAM ops */
> +struct pci_ecam_ops pci_generic_ecam_ops = {
> +    .bus_shift  = 20,
> +    .pci_ops    = {
> +        .read       = pci_ecam_config_read,
> +        .write      = pci_ecam_config_write,
> +    }
> +};
> +
> +static const struct dt_device_match gen_pci_dt_match[] = {
> +    { .compatible = "pci-host-ecam-generic",
> +      .data =       &pci_generic_ecam_ops },

spurious blank line


> +    { },
> +};
> +
> +static int gen_pci_dt_init(struct dt_device_node *dev, const void *data)
> +{
> +    const struct dt_device_match *of_id;
> +    struct pci_ecam_ops *ops;
> +
> +    of_id = dt_match_node(gen_pci_dt_match, dev->dev.of_node);
> +    ops = (struct pci_ecam_ops *) of_id->data;
> +
> +    printk(XENLOG_INFO "Found PCI host bridge %s compatible:%s \n",
> +            dt_node_full_name(dev), of_id->compatible);
> +
> +    return pci_host_common_probe(dev, ops);
> +}
> +
> +DT_DEVICE_START(pci_gen, "PCI HOST GENERIC", DEVICE_PCI)
> +.dt_match = gen_pci_dt_match,
> +.init = gen_pci_dt_init,
> +DT_DEVICE_END
> +
> +/*
> + * Local variables:
> + * mode: C
> + * c-file-style: "BSD"
> + * c-basic-offset: 4
> + * tab-width: 4
> + * indent-tabs-mode: nil
> + * End:
> + */
> diff --git a/xen/arch/arm/pci/pci.c b/xen/arch/arm/pci/pci.c
> new file mode 100644
> index 0000000000..f8cbb99591
> --- /dev/null
> +++ b/xen/arch/arm/pci/pci.c
> @@ -0,0 +1,112 @@
> +/*
> + * Copyright (C) 2020 Arm Ltd.
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License version 2 as
> + * published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> + * GNU General Public License for more details.
> + *
> + * You should have received a copy of the GNU General Public License
> + * along with this program.  If not, see <http://www.gnu.org/licenses/>.
> + */
> +
> +#include <xen/acpi.h>
> +#include <xen/device_tree.h>
> +#include <xen/errno.h>
> +#include <xen/init.h>
> +#include <xen/pci.h>
> +#include <xen/param.h>
> +
> +static int __init dt_pci_init(void)
> +{
> +    struct dt_device_node *np;
> +    int rc;
> +
> +    dt_for_each_device_node(dt_host, np)
> +    {
> +        rc = device_init(np, DEVICE_PCI, NULL);
> +        if( !rc )
> +            continue;
> +        /*
> +         * Ignore the following error codes:
> +         *   - EBADF: Indicate the current is not an pci
> +         *   - ENODEV: The pci device is not present or cannot be used by
> +         *     Xen.
> +         */
> +        else if ( rc != -EBADF && rc != -ENODEV )
> +        {
> +            printk(XENLOG_ERR "No driver found in XEN or driver init error.\n");
> +            return rc;
> +        }
> +    }
> +
> +    return 0;
> +}
> +
> +#ifdef CONFIG_ACPI
> +static void __init acpi_pci_init(void)
> +{
> +    printk(XENLOG_ERR "ACPI pci init not supported \n");
> +    return;
> +}
> +#else
> +static inline void __init acpi_pci_init(void) { }
> +#endif
> +
> +static bool __initdata param_pci_enable;
> +static int __init parse_pci_param(const char *arg)
> +{
> +    if ( !arg )
> +    {
> +        param_pci_enable = false;
> +        return 0;
> +    }
> +
> +    switch ( parse_bool(arg, NULL) )
> +    {
> +        case 0:
> +            param_pci_enable = false;
> +            return 0;
> +        case 1:
> +            param_pci_enable = true;
> +            return 0;
> +    }
> +
> +    return -EINVAL;
> +}
> +custom_param("pci", parse_pci_param);

When adding new command line parameters please also add its
documentation (docs/misc/xen-command-line.pandoc) in the same patch,
unless this is meant to be just transient and will be removed before
the final commit of the series.
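
For reference, an entry in docs/misc/xen-command-line.pandoc might look roughly like this (wording illustrative):

```
### pci
> `= <boolean>`

> Default: `false`

Flag to enable or disable PCI passthrough support on Arm.
```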


> +void __init pci_init(void)
> +{
> +    /*
> +     * Enable PCI when has been enabled explicitly (pci=on)
> +     */
> +    if ( !param_pci_enable)
> +        goto disable;
> +
> +    if ( acpi_disabled )
> +        dt_pci_init();
> +    else
> +        acpi_pci_init();
> +
> +#ifdef CONFIG_HAS_PCI
> +    pci_segments_init();
> +#endif
> +
> +disable:
> +    return;
> +}
> +
> +/*
> + * Local variables:
> + * mode: C
> + * c-file-style: "BSD"
> + * c-basic-offset: 4
> + * tab-width: 4
> + * indent-tabs-mode: nil
> + * End:
> + */
> diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
> index 7968cee47d..2d7f1db44f 100644
> --- a/xen/arch/arm/setup.c
> +++ b/xen/arch/arm/setup.c
> @@ -930,6 +930,8 @@ void __init start_xen(unsigned long boot_phys_offset,
>  
>      setup_virt_paging();
>  
> +    pci_init();

pci_init should probably be an initcall


>      do_initcalls();
>  
>      /*
> diff --git a/xen/include/asm-arm/device.h b/xen/include/asm-arm/device.h
> index ee7cff2d44..28f8049cfd 100644
> --- a/xen/include/asm-arm/device.h
> +++ b/xen/include/asm-arm/device.h
> @@ -4,6 +4,7 @@
>  enum device_type
>  {
>      DEV_DT,
> +    DEV_PCI,
>  };
>  
>  struct dev_archdata {
> @@ -25,15 +26,15 @@ typedef struct device device_t;
>  
>  #include <xen/device_tree.h>
>  
> -/* TODO: Correctly implement dev_is_pci when PCI is supported on ARM */
> -#define dev_is_pci(dev) ((void)(dev), 0)
> -#define dev_is_dt(dev)  ((dev->type == DEV_DT)
> +#define dev_is_pci(dev) (dev->type == DEV_PCI)
> +#define dev_is_dt(dev)  (dev->type == DEV_DT)
>  
>  enum device_class
>  {
>      DEVICE_SERIAL,
>      DEVICE_IOMMU,
>      DEVICE_GIC,
> +    DEVICE_PCI,
>      /* Use for error */
>      DEVICE_UNKNOWN,
>  };
> diff --git a/xen/include/asm-arm/pci.h b/xen/include/asm-arm/pci.h
> index de13359f65..94fd00360a 100644
> --- a/xen/include/asm-arm/pci.h
> +++ b/xen/include/asm-arm/pci.h
> @@ -1,7 +1,98 @@
> -#ifndef __X86_PCI_H__
> -#define __X86_PCI_H__
> +/*
> + * Copyright (C) 2020 Arm Ltd.
> + *
> + * Based on Linux drivers/pci/ecam.c
> + * Copyright 2016 Broadcom.
> + *
> + * Based on Linux drivers/pci/controller/pci-host-common.c
> + * Based on Linux drivers/pci/controller/pci-host-generic.c
> + * Copyright (C) 2014 ARM Limited Will Deacon <will.deacon@arm.com>
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License version 2 as
> + * published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> + * GNU General Public License for more details.
> + *
> + * You should have received a copy of the GNU General Public License
> + * along with this program.  If not, see <http://www.gnu.org/licenses/>.
> + */
>  
> +#ifndef __ARM_PCI_H__
> +#define __ARM_PCI_H__
> +
> +#include <xen/pci.h>
> +#include <xen/device_tree.h>
> +#include <asm/device.h>
> +
> +#ifdef CONFIG_ARM_PCI
> +
> +/* Arch pci dev struct */
>  struct arch_pci_dev {
> +    struct device dev;
> +};

Are you actually using struct device in struct arch_pci_dev?
struct device is already part of struct dt_device_node and a pointer to
it is stored in bridge->dt_node.


> +#define PRI_pci "%04x:%02x:%02x.%u"
> +#define pci_to_dev(pcidev) (&(pcidev)->arch.dev)
> +
> +/*
> + * struct to hold the mappings of a config space window. This
> + * is expected to be used as sysdata for PCI controllers that
> + * use ECAM.
> + */
> +struct pci_config_window {
> +    paddr_t     phys_addr;
> +    paddr_t     size;
> +    uint8_t     busn_start;
> +    uint8_t     busn_end;
> +    struct pci_ecam_ops     *ops;
> +    void __iomem        *win;
> +};
> +
> +/* Forward declaration as pci_host_bridge and pci_ops depend on each other. */
> +struct pci_host_bridge;
> +
> +struct pci_ops {
> +    int (*read)(struct pci_host_bridge *bridge,
> +                    uint32_t sbdf, int where, int size, u32 *val);
> +    int (*write)(struct pci_host_bridge *bridge,
> +                    uint32_t sbdf, int where, int size, u32 val);

I'd prefer if we could use explicitly-sized integers for "where" and
"size" too. Also, should they be unsigned rather than signed?

Can we use pci_sbdf_t for the sbdf parameter?


> +};
> +
> +/*
> + * struct to hold pci ops and bus shift of the config window
> + * for a PCI controller.
> + */
> +struct pci_ecam_ops {
> +    unsigned int            bus_shift;
> +    struct pci_ops          pci_ops;
> +    int             (*init)(struct pci_config_window *);
> +};

Although I realize that we are only targeting ECAM now, and the
implementation is based on ECAM, the interface doesn't seem to have
anything ECAM-specific in it. I would rename pci_ecam_ops to something
else, maybe simply "pci_ops".


> +/*
> + * struct to hold pci host bridge information
> + * for a PCI controller.
> + */
> +struct pci_host_bridge {
> +    struct dt_device_node *dt_node;  /* Pointer to the associated DT node */
> +    struct list_head node;           /* Node in list of host bridges */
> +    uint16_t segment;                /* Segment number */
> +    void *sysdata;                   /* Pointer to the config space window*/
> +    const struct pci_ops *ops;
>  };
>  
> -#endif /* __X86_PCI_H__ */
> +struct pci_host_bridge *pci_find_host_bridge(uint16_t segment, uint8_t bus);
> +
> +int pci_host_common_probe(struct dt_device_node *dev,
> +                struct pci_ecam_ops *ops);
> +
> +void pci_init(void);
> +
> +#else   /*!CONFIG_ARM_PCI*/
> +struct arch_pci_dev { };
> +static inline void  pci_init(void) { }
> +#endif  /*!CONFIG_ARM_PCI*/
> +#endif /* __ARM_PCI_H__ */
> -- 
> 2.17.1
> 
--8323329-1736696476-1595527045=:17562--


From xen-devel-bounces@lists.xenproject.org Thu Jul 23 23:39:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jul 2020 23:39:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jykoE-00009Q-Vi; Thu, 23 Jul 2020 23:39:30 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=lw2b=BC=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1jykoD-00009H-KT
 for xen-devel@lists.xenproject.org; Thu, 23 Jul 2020 23:39:29 +0000
X-Inumbo-ID: bfcb75bc-cd3d-11ea-a33b-12813bfff9fa
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id bfcb75bc-cd3d-11ea-a33b-12813bfff9fa;
 Thu, 23 Jul 2020 23:39:28 +0000 (UTC)
Received: from localhost (c-67-164-102-47.hsd1.ca.comcast.net [67.164.102.47])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256
 bits)) (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 8FF6620792;
 Thu, 23 Jul 2020 23:39:27 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
 s=default; t=1595547568;
 bh=cAqQC1Bd1Kq2j6xPZw8MrEA6cv9mnxdr8LxnNeVKD2E=;
 h=Date:From:To:cc:Subject:In-Reply-To:References:From;
 b=RzMZe1Pv0ch1VNNGnmSLcKbuOpZp2rp9DN4WDwzhD9e6NR9bgf00Z4tsx+myOdS+h
 l3/0+Nmx8ToCRqSr7qLlMZ2tpQ3WSgMkjX7s1N9zeUjQMdlB3Lpj4aXHNljik1pdhy
 MetaJYcLY76yZjZC/XmcLK7/JFSKvVIaOv7kLJ/Q=
Date: Thu, 23 Jul 2020 16:39:27 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Rahul Singh <rahul.singh@arm.com>
Subject: Re: [RFC PATCH v1 3/4] xen/arm: Enable the existing x86 virtual PCI
 support for ARM.
In-Reply-To: <c719ed8e92720d0b470a130c1264f8296dac32ac.1595511416.git.rahul.singh@arm.com>
Message-ID: <alpine.DEB.2.21.2007231351350.17562@sstabellini-ThinkPad-T480s>
References: <cover.1595511416.git.rahul.singh@arm.com>
 <c719ed8e92720d0b470a130c1264f8296dac32ac.1595511416.git.rahul.singh@arm.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: multipart/mixed; BOUNDARY="8323329-1315544770-1595537720=:17562"
Content-ID: <alpine.DEB.2.21.2007231440080.17562@sstabellini-ThinkPad-T480s>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Paul Durrant <paul@xen.org>, andrew.cooper3@citrix.com,
 Bertrand.Marquis@arm.com, Jan Beulich <jbeulich@suse.com>,
 xen-devel@lists.xenproject.org, nd@arm.com,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, roger.pau@citrix.com
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--8323329-1315544770-1595537720=:17562
Content-Type: text/plain; CHARSET=UTF-8
Content-Transfer-Encoding: 8BIT
Content-ID: <alpine.DEB.2.21.2007231440081.17562@sstabellini-ThinkPad-T480s>

On Thu, 23 Jul 2020, Rahul Singh wrote:
> The existing VPCI support available for x86 is adapted for Arm.
> When a device is added to Xen via the hypercall
> "PHYSDEVOP_pci_device_add", a VPCI handler for config space
> accesses is added to the PCI device to emulate it.
> 
> An MMIO trap handler for the PCI ECAM space is registered in Xen
> so that when a guest tries to access the PCI config space, Xen
> will trap the access and emulate the read/write using VPCI rather
> than the real PCI hardware.
> 
> VPCI MSI support is disabled for Arm as it has not been tested there.
> 
> Change-Id: I5501db2781f8064640403fecce53713091cd9ab4

Same question


> Signed-off-by: Rahul Singh <rahul.singh@arm.com>
> ---
>  xen/arch/arm/Makefile         |   1 +
>  xen/arch/arm/domain.c         |   4 ++
>  xen/arch/arm/vpci.c           | 102 ++++++++++++++++++++++++++++++++++
>  xen/arch/arm/vpci.h           |  37 ++++++++++++
>  xen/drivers/passthrough/pci.c |   7 +++
>  xen/include/asm-arm/domain.h  |   5 ++
>  xen/include/public/arch-arm.h |   4 ++
>  7 files changed, 160 insertions(+)
>  create mode 100644 xen/arch/arm/vpci.c
>  create mode 100644 xen/arch/arm/vpci.h
> 
> diff --git a/xen/arch/arm/Makefile b/xen/arch/arm/Makefile
> index 345cb83eed..5a23ec5cc0 100644
> --- a/xen/arch/arm/Makefile
> +++ b/xen/arch/arm/Makefile
> @@ -7,6 +7,7 @@ obj-y += platforms/
>  endif
>  obj-$(CONFIG_TEE) += tee/
>  obj-$(CONFIG_ARM_PCI) += pci/
> +obj-$(CONFIG_HAS_VPCI) += vpci.o
>  
>  obj-$(CONFIG_HAS_ALTERNATIVE) += alternative.o
>  obj-y += bootfdt.init.o
> diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
> index 31169326b2..23098ffd02 100644
> --- a/xen/arch/arm/domain.c
> +++ b/xen/arch/arm/domain.c
> @@ -39,6 +39,7 @@
>  #include <asm/vtimer.h>
>  
>  #include "vuart.h"
> +#include "vpci.h"
>  
>  DEFINE_PER_CPU(struct vcpu *, curr_vcpu);
>  
> @@ -747,6 +748,9 @@ int arch_domain_create(struct domain *d,
>      if ( is_hardware_domain(d) && (rc = domain_vuart_init(d)) )
>          goto fail;
>  
> +    if ( (rc = domain_vpci_init(d)) != 0 )
> +        goto fail;
> +
>      return 0;
>  
>  fail:
> diff --git a/xen/arch/arm/vpci.c b/xen/arch/arm/vpci.c
> new file mode 100644
> index 0000000000..49e473ab0d
> --- /dev/null
> +++ b/xen/arch/arm/vpci.c
> @@ -0,0 +1,102 @@
> +/*
> + * xen/arch/arm/vpci.c
> + * Copyright (c) 2020 Arm Ltd.
> + *
> + * Based on arch/x86/hvm/io.c
> + * Copyright (c) 2004, Intel Corporation.
> + * Copyright (c) 2005, International Business Machines Corporation.
> + * Copyright (c) 2008, Citrix Systems, Inc.
> + *
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License as published by
> + * the Free Software Foundation; either version 2 of the License, or
> + * (at your option) any later version.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> + * GNU General Public License for more details.
> + */
> +#include <xen/sched.h>
> +#include <asm/mmio.h>
> +
> +/* Do some sanity checks. */
> +static bool vpci_mmio_access_allowed(unsigned int reg, unsigned int len)
> +{
> +    /* Check access size. */
> +    if ( len != 1 && len != 2 && len != 4 && len != 8 )
> +        return false;
> +
> +    /* Check that access is size aligned. */
> +    if ( (reg & (len - 1)) )
> +        return false;
> +
> +    return true;
> +}
> +
> +static int vpci_mmio_read(struct vcpu *v, mmio_info_t *info,
> +        register_t *r, void *priv)
> +{
> +    unsigned int reg;
> +    pci_sbdf_t sbdf;
> +    uint32_t data = 0;
> +    unsigned int size = 1U << info->dabt.size;
> +
> +    sbdf.bdf = (((info->gpa) & 0x0ffff000) >> 12);
> +    reg = (((info->gpa) & 0x00000ffc) | (info->gpa & 3));
> +
> +    if ( !vpci_mmio_access_allowed(reg, size) )
> +        return 1;
> +
> +    data = vpci_read(sbdf, reg, size);
> +
> +    memcpy(r, &data, size);
> +
> +    return 1;
> +}
> +
> +static int vpci_mmio_write(struct vcpu *v, mmio_info_t *info,
> +        register_t r, void *priv)
> +{
> +    unsigned int reg;
> +    pci_sbdf_t sbdf;
> +    uint32_t data = r;
> +    unsigned int size = 1U << info->dabt.size;
> +
> +    sbdf.bdf = (((info->gpa) & 0x0ffff000) >> 12);
> +    reg = (((info->gpa) & 0x00000ffc) | (info->gpa & 3));
> +
> +    if ( !vpci_mmio_access_allowed(reg, size) )
> +        return 1;
> +
> +    vpci_write(sbdf, reg, size, data);
> +
> +    return 1;
> +}

I wonder if we could share vpci_mmcfg_read/write. Again, it is OK if we
can't, or if it is not worth the effort; I just want to make sure we
thought about it :-)


> +static const struct mmio_handler_ops vpci_mmio_handler = {
> +    .read  = vpci_mmio_read,
> +    .write = vpci_mmio_write,
> +};
> +
> +int domain_vpci_init(struct domain *d)
> +{
> +    if ( !has_vpci(d) || is_hardware_domain(d) )
> +        return 0;
> +
> +    register_mmio_handler(d, &vpci_mmio_handler,
> +            GUEST_VPCI_ECAM_BASE, GUEST_VPCI_ECAM_SIZE, NULL);
> +
> +    return 0;
> +}
> +
> +/*
> + * Local variables:
> + * mode: C
> + * c-file-style: "BSD"
> + * c-basic-offset: 4
> + * indent-tabs-mode: nil
> + * End:
> + */
> +
> diff --git a/xen/arch/arm/vpci.h b/xen/arch/arm/vpci.h
> new file mode 100644
> index 0000000000..20dce1f4c4
> --- /dev/null
> +++ b/xen/arch/arm/vpci.h
> @@ -0,0 +1,37 @@
> +/*
> + * xen/arch/arm/vpci.h
> + * Copyright (c) 2020 Arm Ltd.
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License as published by
> + * the Free Software Foundation; either version 2 of the License, or
> + * (at your option) any later version.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> + * GNU General Public License for more details.
> + */
> +
> +#ifndef __ARCH_ARM_VPCI_H__
> +#define __ARCH_ARM_VPCI_H__
> +
> +#ifdef CONFIG_HAS_VPCI
> +int domain_vpci_init(struct domain *d);
> +#else
> +static inline int domain_vpci_init(struct domain *d)
> +{
> +    return 0;
> +}
> +#endif
> +
> +#endif /* __ARCH_ARM_VPCI_H__ */
> +
> +/*
> + * Local variables:
> + * mode: C
> + * c-file-style: "BSD"
> + * c-basic-offset: 4
> + * indent-tabs-mode: nil
> + * End:
> + */
> diff --git a/xen/drivers/passthrough/pci.c b/xen/drivers/passthrough/pci.c
> index 5846978890..28511eb641 100644
> --- a/xen/drivers/passthrough/pci.c
> +++ b/xen/drivers/passthrough/pci.c
> @@ -804,6 +804,13 @@ int pci_add_device(u16 seg, u8 bus, u8 devfn,
>      else
>          iommu_enable_device(pdev);
>  
> +#ifdef CONFIG_ARM
> +    ret = vpci_add_handlers(pdev);
> +    if ( ret ) {
> +        printk(XENLOG_ERR "setup of vPCI failed: %d\n", ret);
> +        goto out;
> +    }
> +#endif
>      pci_enable_acs(pdev);
>  
>  out:
> diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
> index 4e2f582006..ad70610226 100644
> --- a/xen/include/asm-arm/domain.h
> +++ b/xen/include/asm-arm/domain.h
> @@ -34,6 +34,11 @@ enum domain_type {
>  /* The hardware domain has always its memory direct mapped. */
>  #define is_domain_direct_mapped(d) ((d) == hardware_domain)
>  
> +/* For x86, VPCI is enabled and tested for PVH dom0 only, but
> + * for Arm we enable VPCI support for guest domains as well.
> + */
> +#define has_vpci(d) (true)

As mentioned, we could make this configurable based on the presence of
pci=[] or something similar in the device tree for dom0less guests.



>  struct vtimer {
>      struct vcpu *v;
>      int irq;
> diff --git a/xen/include/public/arch-arm.h b/xen/include/public/arch-arm.h
> index c365b1b39e..7364a07362 100644
> --- a/xen/include/public/arch-arm.h
> +++ b/xen/include/public/arch-arm.h
> @@ -422,6 +422,10 @@ typedef uint64_t xen_callback_t;
>  #define GUEST_PL011_BASE    xen_mk_ullong(0x22000000)
>  #define GUEST_PL011_SIZE    xen_mk_ullong(0x00001000)
>  
> +/* VPCI ECAM mappings */
> +#define GUEST_VPCI_ECAM_BASE    xen_mk_ullong(0x10000000)
> +#define GUEST_VPCI_ECAM_SIZE    xen_mk_ullong(0x10000000)

Is 256MB in size part of the ECAM standard?


>  /*
>   * 16MB == 4096 pages reserved for guest to use as a region to map its
>   * grant table in.
> -- 
> 2.17.1
> 
--8323329-1315544770-1595537720=:17562--


From xen-devel-bounces@lists.xenproject.org Thu Jul 23 23:39:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 23 Jul 2020 23:39:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jykog-0000Dp-8b; Thu, 23 Jul 2020 23:39:58 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=lw2b=BC=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1jykof-0000De-Ab
 for xen-devel@lists.xenproject.org; Thu, 23 Jul 2020 23:39:57 +0000
X-Inumbo-ID: d0790ea6-cd3d-11ea-a33b-12813bfff9fa
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d0790ea6-cd3d-11ea-a33b-12813bfff9fa;
 Thu, 23 Jul 2020 23:39:56 +0000 (UTC)
Received: from localhost (c-67-164-102-47.hsd1.ca.comcast.net [67.164.102.47])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256
 bits)) (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 950BC20792;
 Thu, 23 Jul 2020 23:39:55 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
 s=default; t=1595547596;
 bh=6tkuzd/fjE4rkfP55cgq1G31SwOvw0pZNn+D7K/FYu8=;
 h=Date:From:To:cc:Subject:In-Reply-To:References:From;
 b=CWQwP5GhYLm4j9gqFpZzSEXnmbXfNfFKtfZG+F2Ia4ZRu8FezWsYHSQgVOZwa19tl
 CViPyp0o6j1gXF9lFtViLAFEK6g9uVb6TFYfNM5C0hGsU9PqZvobsOhRjqXVf959/G
 oBP6ZzaDzAMpbkpN7xQzn1ta5j/FPGgApA7AEI+g=
Date: Thu, 23 Jul 2020 16:39:55 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Rahul Singh <rahul.singh@arm.com>
Subject: Re: [RFC PATCH v1 4/4] arm/libxl: Emulated PCI device tree node in
 libxl
In-Reply-To: <23346b24762467bd246b91b05f7b0fc1719282f6.1595511416.git.rahul.singh@arm.com>
Message-ID: <alpine.DEB.2.21.2007231505170.17562@sstabellini-ThinkPad-T480s>
References: <cover.1595511416.git.rahul.singh@arm.com>
 <23346b24762467bd246b91b05f7b0fc1719282f6.1595511416.git.rahul.singh@arm.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Ian Jackson <ian.jackson@eu.citrix.com>,
 Bertrand.Marquis@arm.com, Anthony PERARD <anthony.perard@citrix.com>,
 xen-devel@lists.xenproject.org, nd@arm.com,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Thu, 23 Jul 2020, Rahul Singh wrote:
> libxl will create an emulated PCI device tree node in the
> device tree to enable the guest OS to discover the virtual
> PCI bus during guest boot.
> 
> We introduce a new config option [vpci="ecam"] for guests.
> When this config option is enabled in a guest configuration,
> a PCI device tree node will be created in the guest device tree.
> 
> A new area has been reserved in the Arm guest physical map at
> which the VPCI bus is declared in the device tree (reg and ranges
> properties of the node).
> 
> Change-Id: I47d39cbe8184de2226f174644df9790ecc610ccd

Same question


> Signed-off-by: Rahul Singh <rahul.singh@arm.com>
> ---
>  tools/libxl/libxl_arm.c       | 200 ++++++++++++++++++++++++++++++++++
>  tools/libxl/libxl_types.idl   |   6 +
>  tools/xl/xl_parse.c           |   7 ++
>  xen/include/public/arch-arm.h |  28 +++++
>  4 files changed, 241 insertions(+)
> 
> diff --git a/tools/libxl/libxl_arm.c b/tools/libxl/libxl_arm.c
> index 34f8a29056..84568e9dc9 100644
> --- a/tools/libxl/libxl_arm.c
> +++ b/tools/libxl/libxl_arm.c
> @@ -268,6 +268,130 @@ static int fdt_property_regs(libxl__gc *gc, void *fdt,
>      return fdt_property(fdt, "reg", regs, sizeof(regs));
>  }
>  
> +static int fdt_property_vpci_bus_range(libxl__gc *gc, void *fdt,
> +        unsigned num_cells, ...)
> +{
> +    uint32_t bus_range[num_cells];
> +    be32 *cells = &bus_range[0];
> +    int i;
> +    va_list ap;
> +    uint32_t arg;
> +
> +    va_start(ap, num_cells);
> +    for (i = 0 ; i < num_cells; i++) {
> +        arg = va_arg(ap, uint32_t);
> +        set_cell(&cells, 1, arg);
> +    }
> +    va_end(ap);
> +
> +    return fdt_property(fdt, "bus-range", bus_range, sizeof(bus_range));
> +}
> +
> +static int fdt_property_vpci_interrupt_map_mask(libxl__gc *gc, void *fdt,
> +        unsigned num_cells, ...)
> +{
> +    uint32_t interrupt_map_mask[num_cells];
> +    be32 *cells = &interrupt_map_mask[0];
> +    int i;
> +    va_list ap;
> +    uint32_t arg;
> +
> +    va_start(ap, num_cells);
> +    for (i = 0 ; i < num_cells; i++) {
> +        arg = va_arg(ap, uint32_t);
> +        set_cell(&cells, 1, arg);
> +    }
> +    va_end(ap);
> +
> +    return fdt_property(fdt, "interrupt-map-mask", interrupt_map_mask,
> +                                sizeof(interrupt_map_mask));
> +}
> +
> +static int fdt_property_vpci_ranges(libxl__gc *gc, void *fdt,
> +        unsigned vpci_addr_cells,
> +        unsigned cpu_addr_cells,
> +        unsigned vpci_size_cells,
> +        unsigned num_regs, ...)
> +{
> +    uint32_t regs[num_regs*(vpci_addr_cells+cpu_addr_cells+vpci_size_cells)];
> +    be32 *cells = &regs[0];
> +    int i;
> +    va_list ap;
> +    uint64_t arg;
> +
> +    va_start(ap, num_regs);
> +    for (i = 0 ; i < num_regs; i++) {
> +        /* Set the memory bit field */
> +        arg = va_arg(ap, uint64_t);
> +        set_cell(&cells, 1, arg);
> +
> +        /* Set the vpci bus address */
> +        arg = vpci_addr_cells ? va_arg(ap, uint64_t) : 0;
> +        set_cell(&cells, 2 , arg);
> +
> +        /* Set the cpu bus address where vpci address is mapped */
> +        arg = cpu_addr_cells ? va_arg(ap, uint64_t) : 0;
> +        set_cell(&cells, cpu_addr_cells, arg);
> +
> +        /* Set the vpci size requested */
> +        arg = vpci_size_cells ? va_arg(ap, uint64_t) : 0;
> +        set_cell(&cells, vpci_size_cells, arg);
> +    }
> +    va_end(ap);
> +
> +    return fdt_property(fdt, "ranges", regs, sizeof(regs));
> +}
> +
> +static int fdt_property_vpci_interrupt_map(libxl__gc *gc, void *fdt,
> +        unsigned child_unit_addr_cells,
> +        unsigned child_interrupt_specifier_cells,
> +        unsigned parent_unit_addr_cells,
> +        unsigned parent_interrupt_specifier_cells,
> +        unsigned num_regs, ...)
> +{
> +    uint32_t interrupt_map[num_regs * (child_unit_addr_cells +
> +            child_interrupt_specifier_cells + parent_unit_addr_cells
> +            + parent_interrupt_specifier_cells + 1)];
> +    be32 *cells = &interrupt_map[0];
> +    int i,j;
> +    va_list ap;
> +    uint64_t arg;
> +
> +    va_start(ap, num_regs);
> +    for (i = 0 ; i < num_regs; i++) {
> +        /* Set the child unit address*/
> +        for (j = 0 ; j < child_unit_addr_cells; j++) {
> +            arg = va_arg(ap, uint32_t);
> +            set_cell(&cells, 1 , arg);
> +        }
> +
> +        /* Set the child interrupt specifier*/
> +        for (j = 0 ; j < child_interrupt_specifier_cells ; j++) {
> +            arg = va_arg(ap, uint32_t);
> +            set_cell(&cells, 1 , arg);
> +        }
> +
> +        /* Set the interrupt-parent*/
> +        set_cell(&cells, 1 , GUEST_PHANDLE_GIC);
> +
> +        /* Set the parent unit address*/
> +        for (j = 0 ; j < parent_unit_addr_cells; j++) {
> +            arg = va_arg(ap, uint32_t);
> +            set_cell(&cells, 1 , arg);
> +        }
> +
> +        /* Set the parent interrupt specifier*/
> +        for (j = 0 ; j < parent_interrupt_specifier_cells; j++) {
> +            arg = va_arg(ap, uint32_t);
> +            set_cell(&cells, 1 , arg);
> +        }
> +    }
> +    va_end(ap);
> +
> +    return fdt_property(fdt, "interrupt-map", interrupt_map,
> +                                sizeof(interrupt_map));
> +}
> +
>  static int make_root_properties(libxl__gc *gc,
>                                  const libxl_version_info *vers,
>                                  void *fdt)
> @@ -659,6 +783,79 @@ static int make_vpl011_uart_node(libxl__gc *gc, void *fdt,
>      return 0;
>  }
>  
> +static int make_vpci_node(libxl__gc *gc, void *fdt,
> +        const struct arch_info *ainfo,
> +        struct xc_dom_image *dom)
> +{
> +    int res;
> +    const uint64_t vpci_ecam_base = GUEST_VPCI_ECAM_BASE;
> +    const uint64_t vpci_ecam_size = GUEST_VPCI_ECAM_SIZE;
> +    const char *name = GCSPRINTF("pcie@%"PRIx64, vpci_ecam_base);
> +
> +    res = fdt_begin_node(fdt, name);
> +    if (res) return res;
> +
> +    res = fdt_property_compat(gc, fdt, 1, "pci-host-ecam-generic");
> +    if (res) return res;
> +
> +    res = fdt_property_string(fdt, "device_type", "pci");
> +    if (res) return res;
> +
> +    res = fdt_property_regs(gc, fdt, GUEST_ROOT_ADDRESS_CELLS,
> +            GUEST_ROOT_SIZE_CELLS, 1, vpci_ecam_base, vpci_ecam_size);
> +    if (res) return res;
> +
> +    res = fdt_property_vpci_bus_range(gc, fdt, 2, 0, 255);
> +    if (res) return res;
> +
> +    res = fdt_property_cell(fdt, "linux,pci-domain", 0);
> +    if (res) return res;
> +
> +    res = fdt_property_cell(fdt, "#address-cells", 3);
> +    if (res) return res;
> +
> +    res = fdt_property_cell(fdt, "#size-cells", 2);
> +    if (res) return res;
> +
> +    res = fdt_property_cell(fdt, "#interrupt-cells", 1);
> +    if (res) return res;
> +
> +    res = fdt_property_string(fdt, "status", "okay");
> +    if (res) return res;
> +
> +    res = fdt_property_vpci_ranges(gc, fdt, GUEST_PCI_ADDRESS_CELLS,
> +        GUEST_ROOT_ADDRESS_CELLS, GUEST_PCI_SIZE_CELLS,
> +        3,
> +        GUEST_VPCI_ADDR_TYPE_MEM, GUEST_VPCI_MEM_PCI_ADDR,
> +        GUEST_VPCI_MEM_CPU_ADDR, GUEST_VPCI_MEM_SIZE,
> +        GUEST_VPCI_ADDR_TYPE_PREFETCH_MEM, GUEST_VPCI_PREFETCH_MEM_PCI_ADDR,
> +        GUEST_VPCI_PREFETCH_MEM_CPU_ADDR, GUEST_VPCI_PREFETCH_MEM_SIZE,
> +        GUEST_VPCI_ADDR_TYPE_IO, GUEST_VPCI_IO_PCI_ADDR,
> +        GUEST_VPCI_IO_CPU_ADDR, GUEST_VPCI_IO_SIZE);
> +    if (res) return res;
> +
> +    res = fdt_property_vpci_interrupt_map_mask(gc, fdt, 4, 0, 0, 0, 7);

it would make sense to separate out child_unit_addr_cells and
child_interrupt_specifier_cells here like we do below with
fdt_property_vpci_interrupt_map


> +    if (res) return res;
> +
> +    /*
> +     * Legacy interrupt is forced and assigned to the guest.
> +     * This will be removed once we have implementation for MSI support.
> +     *
> +     */
> +    res = fdt_property_vpci_interrupt_map(gc, fdt, 3, 1, 0, 3,
> +            4,
> +            0, 0, 0, 1, 0, 136, DT_IRQ_TYPE_LEVEL_HIGH,
> +            0, 0, 0, 2, 0, 137, DT_IRQ_TYPE_LEVEL_HIGH,
> +            0, 0, 0, 3, 0, 138, DT_IRQ_TYPE_LEVEL_HIGH,
> +            0, 0, 0, 4, 0, 139, DT_IRQ_TYPE_LEVEL_HIGH);

The 4 interrupts allocated for this need to be defined in
xen/include/public/arch-arm.h as well. Also, why would we want to get
rid of the legacy interrupts completely? It would still be possible to
find devices or software that rely on them.


> +    if (res) return res;
> +
> +    res = fdt_end_node(fdt);
> +    if (res) return res;
> +
> +    return 0;
> +}
> +
>  static const struct arch_info *get_arch_info(libxl__gc *gc,
>                                               const struct xc_dom_image *dom)
>  {

[...]


> diff --git a/xen/include/public/arch-arm.h b/xen/include/public/arch-arm.h
> index 7364a07362..4e19c62948 100644
> --- a/xen/include/public/arch-arm.h
> +++ b/xen/include/public/arch-arm.h
> @@ -426,6 +426,34 @@ typedef uint64_t xen_callback_t;
>  #define GUEST_VPCI_ECAM_BASE    xen_mk_ullong(0x10000000)
>  #define GUEST_VPCI_ECAM_SIZE    xen_mk_ullong(0x10000000)
>  
> +#define GUEST_PCI_ADDRESS_CELLS 3
> +#define GUEST_PCI_SIZE_CELLS 2
> +
> +/* PCI-PCIe memory space types */
> +#define GUEST_VPCI_ADDR_TYPE_PREFETCH_MEM xen_mk_ullong(0x42000000)
> +#define GUEST_VPCI_ADDR_TYPE_MEM          xen_mk_ullong(0x02000000)
> +#define GUEST_VPCI_ADDR_TYPE_IO           xen_mk_ullong(0x01000000)
> +
> +/* Guest PCI-PCIe memory space where config space and BAR will be available.*/
> +#define GUEST_VPCI_PREFETCH_MEM_CPU_ADDR  xen_mk_ullong(0x4000000000)

It looks like it could conflict with GUEST_RAM1_BASE+GUEST_RAM1_SIZE?


> +#define GUEST_VPCI_MEM_CPU_ADDR           xen_mk_ullong(0x04020000)
> +#define GUEST_VPCI_IO_CPU_ADDR            xen_mk_ullong(0xC0200800)

0xC0200800 looks like it could conflict with
GUEST_RAM0_BASE+GUEST_RAM0_SIZE?


> +/*
> + * These are hardcoded values for the real PCI physical addresses.
> + * They will be removed once we read the real PCI/PCIe physical
> + * addresses from the config space and map them into the guest
> + * memory map when assigning a device to a guest via VPCI.
> + *
> + */
> +#define GUEST_VPCI_PREFETCH_MEM_PCI_ADDR  xen_mk_ullong(0x4000000000)
> +#define GUEST_VPCI_MEM_PCI_ADDR           xen_mk_ullong(0x50000000)
> +#define GUEST_VPCI_IO_PCI_ADDR            xen_mk_ullong(0x00000000)
> +
> +#define GUEST_VPCI_PREFETCH_MEM_SIZE      xen_mk_ullong(0x100000000)
> +#define GUEST_VPCI_MEM_SIZE               xen_mk_ullong(0x08000000)

How did you choose these sizes? GUEST_VPCI_MEM_SIZE and/or
GUEST_VPCI_PREFETCH_MEM_SIZE are supposed to potentially cover all the
PCI BARs, including potential future hotplug devices, right?

If so, maybe we need to increase GUEST_VPCI_MEM_SIZE to a couple of GB
and GUEST_VPCI_PREFETCH_MEM_SIZE to even more?




> +#define GUEST_VPCI_IO_SIZE                xen_mk_ullong(0x00800000)
> +
>  /*
>   * 16MB == 4096 pages reserved for guest to use as a region to map its
>   * grant table in.


From xen-devel-bounces@lists.xenproject.org Fri Jul 24 01:50:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jul 2020 01:50:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jymqc-0005tF-Vq; Fri, 24 Jul 2020 01:50:06 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=qjWU=BD=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jymqb-0005nL-RQ
 for xen-devel@lists.xenproject.org; Fri, 24 Jul 2020 01:50:05 +0000
X-Inumbo-ID: fc23ca7a-cd4f-11ea-a35b-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id fc23ca7a-cd4f-11ea-a35b-12813bfff9fa;
 Fri, 24 Jul 2020 01:50:00 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=LYQzmk0E6gLfWyyHc8ouQ/nwRVD33kvYAlZQ9wuowc0=; b=ZLamWevqd6fwhn6/ZE6uZpIzJ
 0dpPaPYYP0OORF7m/hEc3QuhDhiIMZZCPKo/5SxTdVf5yNGppEmBefFbtoFoO7GM+oznWaUNRHg9T
 LldyuaCKRocXDMEeyotiw8ASkYDeYXCX1yiZXWvV5LRrKM6D2tGjjBTqFK6n3D+WX53YQ=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jymqV-00016O-KY; Fri, 24 Jul 2020 01:49:59 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jymqV-0006N6-CG; Fri, 24 Jul 2020 01:49:59 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jymqV-0006Z7-BL; Fri, 24 Jul 2020 01:49:59 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-152137-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 152137: tolerable trouble: fail/pass/starved - PUSHED
X-Osstest-Failures: linux-5.4:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:allowable
 linux-5.4:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:allowable
 linux-5.4:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 linux-5.4:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 linux-5.4:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 linux-5.4:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 linux-5.4:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-amd64-amd64-qemuu-freebsd12-amd64:hosts-allocate:starved:nonblocking
 linux-5.4:test-amd64-amd64-qemuu-freebsd11-amd64:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This: linux=d811d29517d1ea05bc159579231652d3ca1c2a01
X-Osstest-Versions-That: linux=c57b1153a58a6263863667296b5f00933fc46a4f
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 24 Jul 2020 01:49:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 152137 linux-5.4 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/152137/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     18 guest-localmigrate/x10   fail REGR. vs. 151939
 test-armhf-armhf-xl-rtds    16 guest-start/debian.repeat fail REGR. vs. 151939

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop              fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop             fail never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop             fail never pass
 test-amd64-amd64-qemuu-freebsd12-amd64  2 hosts-allocate           starved n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  2 hosts-allocate           starved n/a

version targeted for testing:
 linux                d811d29517d1ea05bc159579231652d3ca1c2a01
baseline version:
 linux                c57b1153a58a6263863667296b5f00933fc46a4f

Last test of basis   151939  2020-07-16 06:40:22 Z    7 days
Testing same since   152100  2020-07-22 07:43:09 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  AceLan Kao <acelan.kao@canonical.com>
  Adrian Hunter <adrian.hunter@intel.com>
  Aharon Landau <aharonl@mellanox.com>
  Alex Deucher <alexander.deucher@amd.com>
  Alex Hung <alex.hung@canonical.com>
  Alexander Lobakin <alobakin@marvell.com>
  Alexander Lobakin <alobakin@pm.me>
  Alexander Shishkin <alexander.shishkin@linux.intel.com>
  Alexander Tsoy <alexander@tsoy.me>
  Alexander Usyskin <alexander.usyskin@intel.com>
  Alexandre Belloni <alexandre.belloni@bootlin.com>
  Alexandru Elisei <alexandru.elisei@arm.com>
  Ali Saidi <alisaidi@amazon.com>
  Amir Goldstein <amir73il@gmail.com>
  Ammy Yi <ammy.yi@intel.com>
  Andreas Schwab <schwab@suse.de>
  Andrew F. Davis <afd@ti.com>
  Andrey Lebedev <andrey@lebedev.lt>
  Andy Shevchenko <andriy.shevchenko@linux.intel.com>
  Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
  Angelo Dureghello <angelo.dureghello@timesys.com>
  Angelo Dureghello <angelo@sysam.it>
  Anna Schumaker <Anna.Schumaker@Netapp.com>
  Anson Huang <Anson.Huang@nxp.com>
  Ard Biesheuvel <ardb@kernel.org>
  Armas Spann <zappel@retarded.farm>
  Arnaldo Carvalho de Melo <acme@redhat.com>
  Arnd Bergmann <arnd@arndb.de>
  Bartosz Golaszewski <bgolaszewski@baylibre.com>
  Bernard Zhao <bernard@vivo.com>
  Bjorn Andersson <bjorn.andersson@linaro.org>
  Bjorn Helgaas <bhelgaas@google.com>
  Bjørn Mork <bjorn@mork.no>
  Bob Peterson <rpeterso@redhat.com>
  Borislav Petkov <bp@suse.de>
  Cameron Berkenpas <cam@neo-zeon.de>
  Chandrakanth Patil <chandrakanth.patil@broadcom.com>
  Chirantan Ekbote <chirantan@chromium.org>
  Chris Mason <clm@fb.com>
  Chris Wilson <chris@chris-wilson.co.uk>
  Chris Wulff <crwulff@gmail.com>
  Christoffer Nielsen <cn@obviux.dk>
  Christoph Paasch <cpaasch@apple.com>
  Christophe JAILLET <christophe.jaillet@wanadoo.fr>
  Chuhong Yuan <hslester96@gmail.com>
  Chunyan Zhang <chunyan.zhang@unisoc.com>
  Claudiu Beznea <claudiu.beznea@microchip.com>
  Codrin Ciubotariu <codrin.ciubotariu@microchip.com>
  Colin Ian King <colin.king@canonical.com>
  Colin Xu <colin.xu@intel.com>
  Cong Wang <xiyou.wangcong@gmail.com>
  Dan Carpenter <dan.carpenter@oracle.com>
  Daniel Lezcano <daniel.lezcano@linaro.org>
  Dave Wang <dave.wang@emc.com.tw>
  David Ahern <dsahern@kernel.org>
  David Howells <dhowells@redhat.com>
  David Pedersen <limero1337@gmail.com>
  David S. Miller <davem@davemloft.net>
  Diego Elio Pettenò <flameeyes@flameeyes.com>
  Dietmar Eggemann <dietmar.eggemann@arm.com>
  dillon min <dillon.minfei@gmail.com>
  Dinghao Liu <dinghao.liu@zju.edu.cn>
  Dinh Nguyen <dinguyen@kernel.org>
  Dmitry Bogdanov <dbogdanov@marvell.com>
  Dmitry Torokhov <dmitry.torokhov@gmail.com>
  Douglas Anderson <dianders@chromium.org>
  Eddie James <eajames@linux.ibm.com>
  Emmanuel Pescosta <emmanuelpescosta099@gmail.com>
  Enric Balletbo i Serra <enric.balletbo@collabora.com>
  Eric Dumazet <edumazet@google.com>
  Esben Haabendal <esben@geanix.com>
  Felipe Balbi <balbi@kernel.org>
  Finley Xiao <finley.xiao@rock-chips.com>
  Florian Fainelli <f.fainelli@gmail.com>
  Florian Weimer <fweimer@redhat.com>
  Frank Mori Hess <fmh6jj@gmail.com>
  Frederic Weisbecker <frederic@kernel.org>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Greg Ungerer <gerg@linux-m68k.org>
  Gregor Pintar <grpintar@gmail.com>
  Guenter Roeck <linux@roeck-us.net>
  Haibo Chen <haibo.chen@nxp.com>
  Hans de Goede <hdegoede@redhat.com>
  Herbert Xu <herbert@gondor.apana.org.au>
  Hou Tao <houtao1@huawei.com>
  Igor Moura <imphilippini@gmail.com>
  Ilya Dryomov <idryomov@gmail.com>
  Inki Dae <inki.dae@samsung.com>
  James Chapman <jchapman@katalix.com>
  James Hilliard <james.hilliard1@gmail.com>
  Jan Kiszka <jan.kiszka@siemens.com>
  Jani Nikula <jani.nikula@intel.com>
  Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
  Jason Gunthorpe <jgg@mellanox.com>
  Jens Axboe <axboe@kernel.dk>
  Jerome Brunet <jbrunet@baylibre.com>
  Jian-Hong Pan <jian-hong@endlessm.com>
  Jin Yao <yao.jin@linux.intel.com>
  Jiri Kosina <jkosina@suse.cz>
  Joel Stanley <joel@jms.id.au>
  Joerg Roedel <jroedel@suse.de>
  Johan Hovold <johan@kernel.org>
  Johannes Berg <johannes.berg@intel.com>
  John Johansen <john.johansen@canonical.com>
  Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Jonathan Toppins <jtoppins@redhat.com>
  Juri Lelli <juri.lelli@redhat.com>
  Jörgen Storvist <jorgen.storvist@gmail.com>
  Kailang Yang <kailang@realtek.com>
  Kangmin Park <l4stpr0gr4m@gmail.com>
  Kashyap Desai <kashyap.desai@broadcom.com>
  Kevin Buettner <kevinb@redhat.com>
  Kevin Hilman <khilman@baylibre.com>
  Krishna Manikandan <mkrishn@codeaurora.org>
  Krzysztof Kozlowski <krzk@kernel.org>
  Leon Romanovsky <leonro@mellanox.com>
  Lingling Xu <ling_ling.xu@unisoc.com>
  Linus Lüssing <linus.luessing@c0d3.blue>
  Linus Torvalds <torvalds@linux-foundation.org>
  Linus Walleij <linus.walleij@linaro.org>
  Lorenzo Bianconi <lorenzo@kernel.org>
  Lu Baolu <baolu.lu@linux.intel.com>
  Luis Machado <luis.machado@linaro.org>
  Maciej S. Szmigiero <mail@maciej.szmigiero.name>
  Marc Kleine-Budde <mkl@pengutronix.de>
  Marc Zyngier <maz@kernel.org>
  Marek Szyprowski <m.szyprowski@samsung.com>
  Mark Brown <broonie@kernel.org>
  Mark Rutland <mark.rutland@arm.com>
  Mark Starovoytov <mstarovoitov@marvell.com>
  Martin K. Petersen <martin.petersen@oracle.com>
  Martin Varghese <martin.varghese@nokia.com>
  Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
  Matt Ranostay <matt.ranostay@konsulko.com>
  Matthias Brugger <matthias.bgg@gmail.com>
  Maulik Shah <mkshah@codeaurora.org>
  Maxime Ripard <maxime@cerno.tech>
  Maxime Ripard <mripard@kernel.org>
  Michael Ellerman <mpe@ellerman.id.au>
  Michal Simek <michal.simek@xilinx.com>
  Michał Mirosław <mirq-linux@rere.qmqm.pl>
  Mike Rapoport <rppt@linux.ibm.com>
  Miklos Szeredi <mszeredi@redhat.com>
  Minas Harutyunyan <hminas@synopsys.com>
  Minas Harutyunyan <Minas.Harutyunyan@synopsys.com>
  Minchan Kim <minchan@kernel.org>
  Ming Lei <ming.lei@redhat.com>
  Miquel Raynal <miquel.raynal@bootlin.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Navid Emamdoost <navid.emamdoost@gmail.com>
  Neil Armstrong <narmstrong@baylibre.com>
  Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
  Oded Gabbay <oded.gabbay@gmail.com>
  Palmer Dabbelt <palmerdabbelt@google.com>
  Paul Menzel <pmenzel@molgen.mpg.de>
  Paul Wouters <pwouters@redhat.com>
  Peter Chen <peter.chen@nxp.com>
  Peter Geis <pgwipeout@gmail.com>
  Peter Ujfalusi <peter.ujfalusi@ti.com>
  Peter Zijlstra (Intel) <peterz@infradead.org>
  Petteri Aimonen <jpa@git.mail.kapsi.fi>
  Philippe Schenker <philippe.schenker@toradex.com>
  Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com>
  Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  Raju P.L.S.S.S.N <rplsssn@codeaurora.org>
  Renato Lui Geh <renatogeh@gmail.com>
  Rob Clark <robdclark@chromium.org>
  Rob Herring <robh@kernel.org>
  Robin Gong <yibin.gong@nxp.com>
  Ronnie Sahlberg <lsahlber@redhat.com>
  Russell King <rmk+kernel@armlinux.org.uk>
  Sabrina Dubroca <sd@queasysnail.net>
  Saravana Kannan <saravanak@google.com>
  Sascha Hauer <s.hauer@pengutronix.de>
  Sasha Levin <sashal@kernel.org>
  Satheesh Rajendran <sathnaga@linux.vnet.ibm.com>
  Sean Tranchetti <stranche@codeaurora.org>
  Sean Wang <sean.wang@mediatek.com>
  Sebastian Parschauer <s.parschauer@gmx.de>
  Sergei A. Trusov <sergei.a.trusov@ya.ru>
  Shannon Nelson <snelson@pensando.io>
  Srinivas Kandagatla <srinivas.kandagatla@linaro.org>
  Stephan Gerhold <stephan@gerhold.net>
  Stephen Boyd <sboyd@kernel.org>
  Steve French <stfrench@microsoft.com>
  Steven Rostedt (VMware) <rostedt@goodmis.org>
  Suman Anna <s-anna@ti.com>
  Taehee Yoo <ap420073@gmail.com>
  Takashi Iwai <tiwai@suse.de>
  Tero Kristo <t-kristo@ti.com>
  Thomas Gleixner <tglx@linutronix.de>
  Thomas Lamprecht <t.lamprecht@proxmox.com>
  Toke Høiland-Jørgensen <toke@redhat.com>
  Tom Rix <trix@redhat.com>
  Tomas Winkler <tomas.winkler@intel.com>
  Tomasz Duszynski <tomasz.duszynski@octakon.com>
  Tomer Tayar <ttayar@habana.ai>
  Tony Lindgren <tony@atomide.com>
  Tudor Ambarus <tudor.ambarus@microchip.com>
  Ulf Hansson <ulf.hansson@linaro.org>
  Vasily Averin <vvs@virtuozzo.com>
  Vincent Guittot <vincent.guittot@linaro.org>
  Vinod Koul <vkoul@kernel.org>
  Viresh Kumar <viresh.kumar@linaro.org>
  Vishwas M <vishwas.reddy.vr@gmail.com>
  Vladimir Oltean <vladimir.oltean@nxp.com>
  Wade Mealing <wmealing@redhat.com>
  Wei Yongjun <weiyongjun1@huawei.com>
  Will Deacon <will@kernel.org>
  Willem de Bruijn <willemb@google.com>
  Wolfram Sang <wsa@kernel.org>
  Xiaojie Yuan <xiaojie.yuan@amd.com>
  Xin Long <lucien.xin@gmail.com>
  Yariv <oigevald+kernel@gmail.com>
  Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
  youngjun <her0gyugyu@gmail.com>
  YueHaibing <yuehaibing@huawei.com>
  Zhang Qiang <qiang.zhang@windriver.com>
  Zhang Rui <rui.zhang@intel.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Álvaro Fernández Rojas <noltari@gmail.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       starved 
 test-amd64-amd64-qemuu-freebsd12-amd64                       starved 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

hint: The 'hooks/update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-receive' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
To xenbits.xen.org:/home/xen/git/linux-pvops.git
   c57b1153a58a..d811d29517d1  d811d29517d1ea05bc159579231652d3ca1c2a01 -> tested/linux-5.4


From xen-devel-bounces@lists.xenproject.org Fri Jul 24 04:33:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jul 2020 04:33:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jypOi-000495-Fj; Fri, 24 Jul 2020 04:33:28 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=SHXM=BD=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jypOh-000490-U6
 for xen-devel@lists.xenproject.org; Fri, 24 Jul 2020 04:33:27 +0000
X-Inumbo-ID: d10b65d4-cd66-11ea-a36e-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d10b65d4-cd66-11ea-a36e-12813bfff9fa;
 Fri, 24 Jul 2020 04:33:27 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 9DF10ACA3;
 Fri, 24 Jul 2020 04:33:34 +0000 (UTC)
Subject: Re: [PATCH v3 0/8] x86: compat header generation and checking
 adjustments
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <adb0fe93-c251-b84a-a357-936029af0e9c@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <125c9611-dcae-f119-b44b-e3333b5dc0fd@suse.com>
Date: Fri, 24 Jul 2020 06:33:25 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <adb0fe93-c251-b84a-a357-936029af0e9c@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, George Dunlap <George.Dunlap@eu.citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 23.07.20 17:45, Jan Beulich wrote:
> As was pointed out by 0e2e54966af5 ("mm: fix public declaration of
> struct xen_mem_acquire_resource"), we're not currently handling structs
> correctly that have uint64_aligned_t fields. Patch 2 demonstrates that
> there was also an issue with XEN_GUEST_HANDLE_64().
> 
> 1: x86: fix compat header generation
> 2: x86/mce: add compat struct checking for XEN_MC_inject_v2
> 3: x86/mce: bring hypercall subop compat checking in sync again
> 4: x86/dmop: add compat struct checking for XEN_DMOP_map_mem_type_to_ioreq_server
> 5: evtchn: add compat struct checking for newer sub-ops
> 6: x86: generalize padding field handling
> 7: flask: drop dead compat translation code
> 8: x86: only generate compat headers actually needed
> 
> v3: Build fix for old gcc in patch 1. New patch 5.

Just an idea:

Instead of parsing an existing header and trying to create a compat
header from it, assuming some special constructs and names, wouldn't it
make more sense to have a common input file and create non-compat and
compat headers (and the functions/macros to convert them into each
other) from it? This would at once drop the need for compat checking,
and new interfaces could automatically be verified not to require a
compat variant.


Juergen


From xen-devel-bounces@lists.xenproject.org Fri Jul 24 06:28:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jul 2020 06:28:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyrBc-00063f-53; Fri, 24 Jul 2020 06:28:04 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=yKVY=BD=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jyrBa-00063X-Qk
 for xen-devel@lists.xenproject.org; Fri, 24 Jul 2020 06:28:02 +0000
X-Inumbo-ID: d2a6be1a-cd76-11ea-a374-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d2a6be1a-cd76-11ea-a374-12813bfff9fa;
 Fri, 24 Jul 2020 06:28:01 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 556F4AC20;
 Fri, 24 Jul 2020 06:28:09 +0000 (UTC)
Subject: Re: [PATCH v3 0/8] x86: compat header generation and checking
 adjustments
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
References: <adb0fe93-c251-b84a-a357-936029af0e9c@suse.com>
 <125c9611-dcae-f119-b44b-e3333b5dc0fd@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <1397fc0c-b325-a330-8fab-ad55b009ffe6@suse.com>
Date: Fri, 24 Jul 2020 08:27:58 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <125c9611-dcae-f119-b44b-e3333b5dc0fd@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, George Dunlap <George.Dunlap@eu.citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 24.07.2020 06:33, Jürgen Groß wrote:
> On 23.07.20 17:45, Jan Beulich wrote:
>> As was pointed out by 0e2e54966af5 ("mm: fix public declaration of
>> struct xen_mem_acquire_resource"), we're not currently handling structs
>> correctly that have uint64_aligned_t fields. Patch 2 demonstrates that
>> there was also an issue with XEN_GUEST_HANDLE_64().
>>
>> 1: x86: fix compat header generation
>> 2: x86/mce: add compat struct checking for XEN_MC_inject_v2
>> 3: x86/mce: bring hypercall subop compat checking in sync again
>> 4: x86/dmop: add compat struct checking for XEN_DMOP_map_mem_type_to_ioreq_server
>> 5: evtchn: add compat struct checking for newer sub-ops
>> 6: x86: generalize padding field handling
>> 7: flask: drop dead compat translation code
>> 8: x86: only generate compat headers actually needed
>>
>> v3: Build fix for old gcc in patch 1. New patch 5.
> 
> Just an idea:
> 
> Instead of parsing an existing header and trying to create a compat
> header from it, assuming some special constructs and names, wouldn't it
> make more sense to have a common input file and create non-compat and
> compat headers (and the functions/macros to convert them into each
> other) from it?

Sounds like quite a bit of work, but if you or anyone else would
want to invest in trying this approach - why not? (Ideally
interfaces like our public ABI would imo best be described in IDL
or some such anyway, and per-language headers [or whatever the
language requires] then derived from it.)

The current approach was chosen back at the time to make it
sufficiently obvious that the introduction of the compat layer
had no negative impact on the native interface definitions.

> This would at once drop the need for compat checking,
> and new interfaces could automatically be verified not to require a
> compat variant.

Not sure about this one: If a code path uses the native struct
even for handling the compat case, how would it be enforced that
both layouts match, without an explicit check somewhere?

Jan


From xen-devel-bounces@lists.xenproject.org Fri Jul 24 06:30:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jul 2020 06:30:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyrEP-0006oH-JR; Fri, 24 Jul 2020 06:30:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=QctF=BD=canonical.com=andrea.righi@srs-us1.protection.inumbo.net>)
 id 1jyrEO-0006oC-HL
 for xen-devel@lists.xenproject.org; Fri, 24 Jul 2020 06:30:56 +0000
X-Inumbo-ID: 3a727c8c-cd77-11ea-87e3-bc764e2007e4
Received: from youngberry.canonical.com (unknown [91.189.89.112])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3a727c8c-cd77-11ea-87e3-bc764e2007e4;
 Fri, 24 Jul 2020 06:30:55 +0000 (UTC)
Received: from mail-ed1-f71.google.com ([209.85.208.71])
 by youngberry.canonical.com with esmtps
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.86_2)
 (envelope-from <andrea.righi@canonical.com>) id 1jyrEM-0004j1-OQ
 for xen-devel@lists.xenproject.org; Fri, 24 Jul 2020 06:30:54 +0000
Received: by mail-ed1-f71.google.com with SMTP id r18so1214994edi.2
 for <xen-devel@lists.xenproject.org>; Thu, 23 Jul 2020 23:30:54 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:date:from:to:cc:subject:message-id:references
 :mime-version:content-disposition:in-reply-to;
 bh=f5Bi1N9eHmzbwm4KZS+jcd4luzKCa17r9gngXjYNYvc=;
 b=Q/Cky1XnVrT2iFj1FMjcYpxQP3QGdu7oQ4/k34DbGlmsA2gGmjY9gQkrvE9nMdmaSy
 Ej55I9fG1EzjVeo/X+ekptKUZg+idxHxzQmv8c3oMaXOrFjUjhngUrl2zOZ/9+ZYc1gf
 1vX1Bh5Gkt3jGVigYzA9yAB3a5LyrmBB4DueHOXQzdeKnyYB4n32Rkv0EPOzoM+qifE0
 4LRjiHHNDGVYqqlHDSmFtUHg/v0p9v0OZdaIvgO+U9IPrIHHVmZPyNWiJUfcj/4bbxQr
 1+TcDN49q3WzjRj6BejAsn2veD2VDuLJvzebcLiYcRNGzqrKNvugtZ7Nq6vxucsRyxc8
 6YrQ==
X-Gm-Message-State: AOAM533Wh0/ZmN5j/lXilbaGxyghrk0KYuq93hekvvdKY1hc2D7kOJTF
 8ho8tCRavXWEbZy3WQrxFs/eaHm8ziHV9gJmBhzpmJ6yfWngB8yrI1igesqRPQu+2Xu7kxZTrs5
 +REuhD21RI0gZlzMf4QT1FnKRQkEIcI+d0BcbD7NgIRq2
X-Received: by 2002:a17:906:3bd5:: with SMTP id
 v21mr3756077ejf.329.1595572254321; 
 Thu, 23 Jul 2020 23:30:54 -0700 (PDT)
X-Google-Smtp-Source: ABdhPJxXO1++pFVwTCL/tYfrIs0xm/tuHNlxlyRivKJ5/9QT5Rk/9NHc5BZDVgyqSnOH54Jo7XttPw==
X-Received: by 2002:a17:906:3bd5:: with SMTP id
 v21mr3756053ejf.329.1595572254072; 
 Thu, 23 Jul 2020 23:30:54 -0700 (PDT)
Received: from localhost (host-87-11-131-192.retail.telecomitalia.it.
 [87.11.131.192])
 by smtp.gmail.com with ESMTPSA id r19sm48005edi.85.2020.07.23.23.30.53
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 23 Jul 2020 23:30:53 -0700 (PDT)
Date: Fri, 24 Jul 2020 08:30:52 +0200
From: Andrea Righi <andrea.righi@canonical.com>
To: David Miller <davem@davemloft.net>
Subject: Re: [PATCH] xen-netfront: fix potential deadlock in xennet_remove()
Message-ID: <20200724063052.GG841369@xps-13>
References: <20200722065211.GA841369@xps-13>
 <20200723.145722.752878326752101646.davem@davemloft.net>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20200723.145722.752878326752101646.davem@davemloft.net>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: jgross@suse.com, sstabellini@kernel.org, netdev@vger.kernel.org,
 linux-kernel@vger.kernel.org, xen-devel@lists.xenproject.org, kuba@kernel.org,
 boris.ostrovsky@oracle.com
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Thu, Jul 23, 2020 at 02:57:22PM -0700, David Miller wrote:
> From: Andrea Righi <andrea.righi@canonical.com>
> Date: Wed, 22 Jul 2020 08:52:11 +0200
> 
> > +static int xennet_remove(struct xenbus_device *dev)
> > +{
> > +	struct netfront_info *info = dev_get_drvdata(&dev->dev);
> > +
> > +	dev_dbg(&dev->dev, "%s\n", dev->nodename);
> 
> These kinds of debugging messages provide zero context and are so much
> less useful than simply using tracepoints which are more universally
> available than printk debugging facilities.
> 
> Please remove all of the dev_dbg() calls from this patch.

I didn't add that dev_dbg() call; it's just the old code moved around.
But I agree - I'll remove that call and send a new version of this patch.

Thanks for looking at it!
-Andrea


From xen-devel-bounces@lists.xenproject.org Fri Jul 24 06:51:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jul 2020 06:51:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyrYb-0000LL-Br; Fri, 24 Jul 2020 06:51:49 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=qjWU=BD=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jyrYZ-0000LG-AB
 for xen-devel@lists.xenproject.org; Fri, 24 Jul 2020 06:51:47 +0000
X-Inumbo-ID: 227c5b40-cd7a-11ea-87e3-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 227c5b40-cd7a-11ea-87e3-bc764e2007e4;
 Fri, 24 Jul 2020 06:51:43 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=1dJTSmIy/Rj0W0j7w1CE49JhVMXeYs7dowXJ+FUC+i0=; b=X566VAvF5DlqZguxEQ3jhvoEN
 ttyRNhFU0R1fVW4iVhYy0w34K7ASOU5+qP2DvzcZumXRMy2XVq/FdRlQjBEHyKeOIZiv9ndxnK+jL
 j7q7cjWv9+pCYzrO2fuHxPtOtUKYO0iRuiTWM6Qyu0L6etwwjvFY4PveVWmk/wxVAYERw=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jyrYV-0008Hn-2o; Fri, 24 Jul 2020 06:51:43 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jyrYU-00081P-MM; Fri, 24 Jul 2020 06:51:42 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jyrYU-0001di-Lj; Fri, 24 Jul 2020 06:51:42 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-152144-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 152144: regressions - trouble: fail/pass/starved
X-Osstest-Failures: qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:redhat-install:fail:regression
 qemu-mainline:test-amd64-i386-freebsd10-i386:guest-start:fail:regression
 qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-start:fail:regression
 qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:redhat-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:windows-install:fail:regression
 qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:hosts-allocate:starved:nonblocking
 qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This: qemuu=d0cc248164961a7ba9d43806feffd76f9f6d7f41
X-Osstest-Versions-That: qemuu=9e3903136d9acde2fb2dd9e967ba928050a6cb4a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 24 Jul 2020 06:51:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 152144 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/152144/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-ovmf-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-qemuu-nested-intel 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-ovmf-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-debianhvm-amd64 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-i386-qemuu-rhel6hvm-intel 10 redhat-install   fail REGR. vs. 151065
 test-amd64-i386-freebsd10-i386 11 guest-start            fail REGR. vs. 151065
 test-amd64-i386-freebsd10-amd64 11 guest-start           fail REGR. vs. 151065
 test-amd64-amd64-qemuu-nested-amd 10 debian-hvm-install  fail REGR. vs. 151065
 test-amd64-i386-qemuu-rhel6hvm-amd 10 redhat-install     fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-win7-amd64 10 windows-install  fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-win7-amd64 10 windows-install   fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-ws16-amd64 10 windows-install  fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-ws16-amd64 10 windows-install   fail REGR. vs. 151065

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 151065
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 151065
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-freebsd11-amd64  2 hosts-allocate           starved n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  2 hosts-allocate           starved n/a

version targeted for testing:
 qemuu                d0cc248164961a7ba9d43806feffd76f9f6d7f41
baseline version:
 qemuu                9e3903136d9acde2fb2dd9e967ba928050a6cb4a

Last test of basis   151065  2020-06-12 22:27:51 Z   41 days
Failing since        151101  2020-06-14 08:32:51 Z   39 days   55 attempts
Testing same since   152144  2020-07-23 12:27:12 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Ahmed Karaman <ahmedkhaledkaraman@gmail.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.m.mail@gmail.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Richardson <Alexander.Richardson@cl.cam.ac.uk>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Boettcher <alexander.boettcher@genode-labs.com>
  Alexander Bulekov <alxndr@bu.edu>
  Alexandre Mergnat <amergnat@baylibre.com>
  Alexey Kardashevskiy <aik@ozlabs.ru>
  Alexey Krasikov <alex-krasikov@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Allan Peramaki <aperamak@pp1.inet.fi>
  Andrew <andrew@daynix.com>
  Andrew Jones <drjones@redhat.com>
  Andrew Melnychenko <andrew@daynix.com>
  Ani Sinha <ani.sinha@nutanix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Antoine Damhet <antoine.damhet@blade-group.com>
  Ard Biesheuvel <ardb@kernel.org>
  Artyom Tarasenko <atar4qemu@gmail.com>
  Atish Patra <atish.patra@wdc.com>
  Aurelien Jarno <aurelien@aurel32.net>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Basil Salman <basil@daynix.com>
  Basil Salman <bsalman@redhat.com>
  Beata Michalska <beata.michalska@linaro.org>
  Bin Meng <bin.meng@windriver.com>
  Bin Meng <bmeng.cn@gmail.com>
  Cameron Esfahani <dirty@apple.com>
  Catherine A. Frederick <chocola@animebitch.es>
  Cathy Zhang <cathy.zhang@intel.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chenyi Qiang <chenyi.qiang@intel.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christophe de Dinechin <dinechin@redhat.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Cindy Lu <lulu@redhat.com>
  Claudio Fontana <cfontana@suse.de>
  Cleber Rosa <crosa@redhat.com>
  Colin Xu <colin.xu@intel.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniele Buono <dbuono@linux.vnet.ibm.com>
  David Carlier <devnexen@gmail.com>
  David Edmondson <david.edmondson@oracle.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Denis V. Lunev <den@openvz.org>
  Derek Su <dereksu@qnap.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Ed Robbins <E.J.C.Robbins@kent.ac.uk>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Emilio G. Cota <cota@braap.org>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  erik-smit <erik.lucas.smit@gmail.com>
  Evgeny Yakovlev <eyakovlev@virtuozzo.com>
  fangying <fangying1@huawei.com>
  Farhan Ali <alifm@linux.ibm.com>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frank Chang <frank.chang@sifive.com>
  Geoffrey McRae <geoff@hostfission.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Giuseppe Musacchio <thatlemon@gmail.com>
  Greg Kurz <groug@kaod.org>
  Guenter Roeck <linux@roeck-us.net>
  Gustavo Romero <gromero@linux.ibm.com>
  Halil Pasic <pasic@linux.ibm.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Howard Spoelstra <hsp.cat7@gmail.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Ian Jiang <ianjiang.ict@gmail.com>
  Igor Mammedov <imammedo@redhat.com>
  Jan Kiszka <jan.kiszka@siemens.com>
  Janne Grunau <j@jannau.net>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason Wang <jasowang@redhat.com>
  Jay Zhou <jianjay.zhou@huawei.com>
  Jean-Christophe Dubois <jcd@tribudubois.net>
  Jessica Clarke <jrtc27@jrtc27.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jingqi Liu <jingqi.liu@intel.com>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Joseph Myers <joseph@codesourcery.com>
  Josh Kunz <jkz@google.com>
  Juan Quintela <quintela@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Keqian Zhu <zhukeqian1@huawei.com>
  Kevin Wolf <kwolf@redhat.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Leif Lindholm <leif@nuviainc.com>
  Leonid Bloch <lb.workbox@gmail.com>
  Leonid Bloch <lbloch@janustech.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Qiang <liq3ea@gmail.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  lichun <lichun@ruijie.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  Like Xu <like.xu@linux.intel.com>
  Lingfeng Yang <lfy@google.com>
  Lingshan zhu <lingshan.zhu@intel.com>
  Liran Alon <liran.alon@oracle.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Luc Michel <luc.michel@greensocs.com>
  Lukas Straub <lukasstraub2@web.de>
  Luwei Kang <luwei.kang@intel.com>
  Maciej S. Szmigiero <maciej.szmigiero@oracle.com>
  Magnus Damm <magnus.damm@gmail.com>
  Mao Zhongyi <maozhongyi@cmss.chinamobile.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marcelo Tosatti <mtosatti@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Mario Smarduch <msmarduch@digitalocean.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Masahiro Yamada <masahiroy@kernel.org>
  Matus Kysel <mkysel@tachyum.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Maxime Coquelin <maxime.coquelin@redhat.com>
  Menno Lageman <menno.lageman@oracle.com>
  Michael Rolnik <mrolnik@gmail.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michele Denber <denber@mindspring.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Olaf Hering <olaf@aepfle.de>
  Pan Nengyuan <pannengyuan@huawei.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Radoslaw Biernacki <rad@semihalf.com>
  Raphael Norwitz <raphael.norwitz@nutanix.com>
  Reza Arbab <arbab@linux.ibm.com>
  Richard Henderson <richard.henderson@linaro.org>
  Richard W.M. Jones <rjones@redhat.com>
  Riku Voipio <riku.voipio@linaro.org>
  Robert Foley <robert.foley@linaro.org>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Roman Kagan <rkagan@virtuozzo.com>
  Roman Kagan <rvkagan@yandex-team.ru>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Sarah Harris <S.E.Harris@kent.ac.uk>
  Sebastian Rasmussen <sebras@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Tao Xu <tao3.xu@intel.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tiwei Bie <tiwei.bie@intel.com>
  Tong Ho <tong.ho@xilinx.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  WangBowen <bowen.wang@intel.com>
  Wei Huang <wei.huang2@amd.com>
  Wei Wang <wei.w.wang@intel.com>
  Wentong Wu <wentong.wu@intel.com>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xie Yongji <xieyongji@bytedance.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Ying Fang <fangying1@huawei.com>
  Yoshinori Sato <ysato@users.sourceforge.jp>
  Yuri Benditovich <yuri.benditovich@daynix.com>
  Zhang Chen <chen.zhang@intel.com>
  Zheng Chuan <zhengchuan@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       starved 
 test-amd64-amd64-qemuu-freebsd12-amd64                       starved 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 31151 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Jul 24 07:03:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jul 2020 07:03:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyrk3-0001S2-JH; Fri, 24 Jul 2020 07:03:39 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=fd0P=BD=epam.com=prvs=5474228b71=oleksandr_andrushchenko@srs-us1.protection.inumbo.net>)
 id 1jyrk1-0001Rw-JY
 for xen-devel@lists.xenproject.org; Fri, 24 Jul 2020 07:03:38 +0000
X-Inumbo-ID: cac29354-cd7b-11ea-a377-12813bfff9fa
Received: from mx0a-0039f301.pphosted.com (unknown [148.163.133.242])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id cac29354-cd7b-11ea-a377-12813bfff9fa;
 Fri, 24 Jul 2020 07:03:35 +0000 (UTC)
Received: from pps.filterd (m0174678.ppops.net [127.0.0.1])
 by mx0a-0039f301.pphosted.com (8.16.0.42/8.16.0.42) with SMTP id
 06O6xZxq013126; Fri, 24 Jul 2020 07:03:31 GMT
Received: from eur05-am6-obe.outbound.protection.outlook.com
 (mail-am6eur05lp2112.outbound.protection.outlook.com [104.47.18.112])
 by mx0a-0039f301.pphosted.com with ESMTP id 32ft03r3t3-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Fri, 24 Jul 2020 07:03:30 +0000
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=MQ/KOY7Dva9arfYelirV/Yn2BuOdAklTzcoOmIVW7X0PLmCgBjPrk+y2E5yYv51H1xOHc3e7sJAnYrStUaupEGRrW6lhfDsmbc9W+c7tHbYsxb6LqWM7gUlKL2FnPRAGdgBaQFLrysWQ4TgQf6M/YVyiI3hBWmAgTtAqux9w/WhI6DEVzO12UWeau+e9o62PwZJ01yVFW98vpFAiJ9DeCEpaOCDZnjpHpDAYpy/EGaBcvAPoFkUcoGS0AX0rhVX/QUC4kHcYHjV9pfNEfUTtqdYvxqGMyvDudtFXHzZy3+mXdmC++WxOxnvHLr8e9HzPTXAomQHdT5Sif3/k5sA6Bw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; 
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=r5FsgE2QdG0iQT1zpuXlaDpt7pk8G2iidr2NvGfTb/0=;
 b=HCS7mxBUG9K+pI1wWdy3SH2umdItMGe/dpS8gWSRHc+/peiupasOfWsZHx4NjZl4BqZirLNmkCAOKxI2puaWd10NS0k0O+7xutwgDufdr7fIbTwNtDWFvn84ka6R/DbGfZX0qIUCRvi+Lod6RRQ0mCpO2k3WE0+mAmyzPon/KpCBIluwI/wlr4fkSZDk0a82ObOL554QWW8cDOki9MlHXcuupV5QrIxsYBb3GaizPviET3FHPxqaseSK/VbGqUJCLL22n0zz5gdhSlgc/XBu/3H1kDRzx9WaTyJdfi/caL1/ZLsgD9/i0PGkIoV24b8MJsHjcztJ6L7SXTdHvP+k9Q==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=epam.com; dmarc=pass action=none header.from=epam.com;
 dkim=pass header.d=epam.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=epam.com; s=selector1; 
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=r5FsgE2QdG0iQT1zpuXlaDpt7pk8G2iidr2NvGfTb/0=;
 b=1v+5RgK5FzWSOjYOKHYQH+Q+XcPwWK/21GhfGuKNvVRwmy4MX2eqriwO+cUyAKLf8mU3PsN3NfjSKa1DaGioclwYTfckygtrHaMTOXPyh0uyAS07WBRM9mRaVEofZYIwmDJU3TRI8eMn4rifQjIoQy5vo/WcA1OrfBO+TsTbS1oq37/6DELaQhdzzjJO3mh6UkL1Ek2QOLoKsTK8vsIjmvbuBaN5Xe/cHYjV2bC56pNH2fpRacgJnl+7amyBejiz6Ohfk9FnRwVjAssxZdk0mMX6bD8uWTg/axXvvEX5juTWDO3kkiA68pfxWsKfWjb5bH+0VwZ+yUerVWbrE0tT1w==
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com (2603:10a6:20b:153::17)
 by AM0PR03MB4307.eurprd03.prod.outlook.com (2603:10a6:208:d0::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3195.18; Fri, 24 Jul
 2020 07:03:27 +0000
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::21e5:6d27:5ba0:f508]) by AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::21e5:6d27:5ba0:f508%7]) with mapi id 15.20.3216.023; Fri, 24 Jul 2020
 07:03:27 +0000
From: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
To: Stefano Stabellini <sstabellini@kernel.org>, Rahul Singh
 <rahul.singh@arm.com>
Subject: Re: [RFC PATCH v1 1/4] arm/pci: PCI setup and PCI host bridge
 discovery within XEN on ARM.
Thread-Topic: [RFC PATCH v1 1/4] arm/pci: PCI setup and PCI host bridge
 discovery within XEN on ARM.
Thread-Index: AQHWYYiGk1rAzpKOTEW1iNvviIonwA==
Date: Fri, 24 Jul 2020 07:03:27 +0000
Message-ID: <3ee41590-e8ca-84d6-3010-6e5dffe91df0@epam.com>
References: <cover.1595511416.git.rahul.singh@arm.com>
 <64ebd4ef614b36a5844c52426a4a6a4a23b1f087.1595511416.git.rahul.singh@arm.com>
 <alpine.DEB.2.21.2007231055230.17562@sstabellini-ThinkPad-T480s>
In-Reply-To: <alpine.DEB.2.21.2007231055230.17562@sstabellini-ThinkPad-T480s>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
authentication-results: kernel.org; dkim=none (message not signed)
 header.d=none;kernel.org; dmarc=none action=none header.from=epam.com;
x-originating-ip: [185.199.97.5]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 540a477a-cadf-4e4f-ff4a-08d82f9faa70
x-ms-traffictypediagnostic: AM0PR03MB4307:
x-ms-exchange-transport-forked: True
x-microsoft-antispam-prvs: <AM0PR03MB4307F1441F877D008F6A2954E7770@AM0PR03MB4307.eurprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:5236;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
Content-Type: text/plain; charset="utf-8"
Content-ID: <F873DC067CFCA047800DE10FD85A4C33@eurprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: epam.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: AM0PR03MB6324.eurprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 540a477a-cadf-4e4f-ff4a-08d82f9faa70
X-MS-Exchange-CrossTenant-originalarrivaltime: 24 Jul 2020 07:03:27.7707 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b41b72d0-4e9f-4c26-8a69-f949f367c91d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR03MB4307
X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.235, 18.0.687
 definitions=2020-07-24_01:2020-07-24,
 2020-07-23 signatures=0
X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0
 mlxscore=0 spamscore=0
 impostorscore=0 mlxlogscore=999 suspectscore=0 adultscore=0 phishscore=0
 priorityscore=1501 clxscore=1011 lowpriorityscore=0 bulkscore=0
 malwarescore=0 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2006250000 definitions=main-2007240052
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Julien Grall <julien@xen.org>,
 "andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>,
 "Bertrand.Marquis@arm.com" <Bertrand.Marquis@arm.com>,
 "jbeulich@suse.com" <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 "nd@arm.com" <nd@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 "roger.pau@citrix.com" <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 7/24/20 2:38 AM, Stefano Stabellini wrote:
> + Jan, Andrew, Roger
>
> Please have a look at my comment on whether we should share the MMCFG
> code below, feel free to ignore the rest :-)
>
>
> On Thu, 23 Jul 2020, Rahul Singh wrote:
>> XEN during boot will read the PCI device tree node “reg” property
>> and will map the PCI config space to the XEN memory.
>>
>> XEN will read the “linux, pci-domain” property from the device tree
>> node and configure the host bridge segment number accordingly.
>>
>> As of now "pci-host-ecam-generic" compatible board is supported.
>>
>> Change-Id: If32f7748b7dc89dd37114dc502943222a2a36c49
> What is this Change-Id property?
Gerrit ;)
>
>
>> Signed-off-by: Rahul Singh <rahul.singh@arm.com>
>> ---
>>   xen/arch/arm/Kconfig                |   7 +
>>   xen/arch/arm/Makefile               |   1 +
>>   xen/arch/arm/pci/Makefile           |   4 +
>>   xen/arch/arm/pci/pci-access.c       | 101 ++++++++++++++
>>   xen/arch/arm/pci/pci-host-common.c  | 198 ++++++++++++++++++++++++++++
>>   xen/arch/arm/pci/pci-host-generic.c | 131 ++++++++++++++++++
>>   xen/arch/arm/pci/pci.c              | 112 ++++++++++++++++
>>   xen/arch/arm/setup.c                |   2 +
>>   xen/include/asm-arm/device.h        |   7 +-
>>   xen/include/asm-arm/pci.h           |  97 +++++++++++++-
>>   10 files changed, 654 insertions(+), 6 deletions(-)
>>   create mode 100644 xen/arch/arm/pci/Makefile
>>   create mode 100644 xen/arch/arm/pci/pci-access.c
>>   create mode 100644 xen/arch/arm/pci/pci-host-common.c
>>   create mode 100644 xen/arch/arm/pci/pci-host-generic.c
>>   create mode 100644 xen/arch/arm/pci/pci.c
>>
>> diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig
>> index 2777388265..ee13339ae9 100644
>> --- a/xen/arch/arm/Kconfig
>> +++ b/xen/arch/arm/Kconfig
>> @@ -31,6 +31,13 @@ menu "Architecture Features"
>>   
>>   source "arch/Kconfig"
>>   
>> +config ARM_PCI
>> +	bool "PCI Passthrough Support"
>> +	depends on ARM_64
>> +	---help---
>> +
>> +	  PCI passthrough support for Xen on ARM64.
>> +
>>   config ACPI
>>   	bool "ACPI (Advanced Configuration and Power Interface) Support" if EXPERT
>>   	depends on ARM_64
>> diff --git a/xen/arch/arm/Makefile b/xen/arch/arm/Makefile
>> index 7e82b2178c..345cb83eed 100644
>> --- a/xen/arch/arm/Makefile
>> +++ b/xen/arch/arm/Makefile
>> @@ -6,6 +6,7 @@ ifneq ($(CONFIG_NO_PLAT),y)
>>   obj-y += platforms/
>>   endif
>>   obj-$(CONFIG_TEE) += tee/
>> +obj-$(CONFIG_ARM_PCI) += pci/
>>   
>>   obj-$(CONFIG_HAS_ALTERNATIVE) += alternative.o
>>   obj-y += bootfdt.init.o
>> diff --git a/xen/arch/arm/pci/Makefile b/xen/arch/arm/pci/Makefile
>> new file mode 100644
>> index 0000000000..358508b787
>> --- /dev/null
>> +++ b/xen/arch/arm/pci/Makefile
>> @@ -0,0 +1,4 @@
>> +obj-y += pci.o
>> +obj-y += pci-host-generic.o
>> +obj-y += pci-host-common.o
>> +obj-y += pci-access.o
>> diff --git a/xen/arch/arm/pci/pci-access.c b/xen/arch/arm/pci/pci-access.c
>> new file mode 100644
>> index 0000000000..c53ef58336
>> --- /dev/null
>> +++ b/xen/arch/arm/pci/pci-access.c
>> @@ -0,0 +1,101 @@
>> +/*
>> + * Copyright (C) 2020 Arm Ltd.
I think SPDX will fit better in any new code.
>> + *
>> + * This program is free software; you can redistribute it and/or modify
>> + * it under the terms of the GNU General Public License version 2 as
>> + * published by the Free Software Foundation.
>> + *
>> + * This program is distributed in the hope that it will be useful,
>> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
>> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
>> + * GNU General Public License for more details.
>> + *
>> + * You should have received a copy of the GNU General Public License
>> + * along with this program.  If not, see <http://www.gnu.org/licenses/>.
>> + */
>> +
>> +#include <xen/init.h>
>> +#include <xen/pci.h>
>> +#include <asm/pci.h>
>> +#include <xen/rwlock.h>
>> +
>> +static uint32_t pci_config_read(pci_sbdf_t sbdf, unsigned int reg,
>> +                            unsigned int len)
>> +{
>> +    int rc;
>> +    uint32_t val = GENMASK(0, len * 8);
>> +
>> +    struct pci_host_bridge *bridge = pci_find_host_bridge(sbdf.seg, sbdf.bus);
>> +
>> +    if ( unlikely(!bridge) )
>> +    {
>> +        printk(XENLOG_ERR "Unable to find bridge for "PRI_pci"\n",
>> +                sbdf.seg, sbdf.bus, sbdf.dev, sbdf.fn);
>> +        return val;
>> +    }
>> +
>> +    if ( unlikely(!bridge->ops->read) )
>> +        return val;
>> +
>> +    rc = bridge->ops->read(bridge, (uint32_t) sbdf.sbdf, reg, len, &val);
>> +    if ( rc )
>> +        printk(XENLOG_ERR "Failed to read reg %#x len %u for "PRI_pci"\n",
>> +                reg, len, sbdf.seg, sbdf.bus, sbdf.dev, sbdf.fn);
>> +
>> +    return val;
>> +}
>> +
>> +static void pci_config_write(pci_sbdf_t sbdf, unsigned int reg,
>> +        unsigned int len, uint32_t val)
>> +{
>> +    int rc;
>> +    struct pci_host_bridge *bridge = pci_find_host_bridge(sbdf.seg, sbdf.bus);
>> +
>> +    if ( unlikely(!bridge) )
>> +    {
>> +        printk(XENLOG_ERR "Unable to find bridge for "PRI_pci"\n",
>> +                sbdf.seg, sbdf.bus, sbdf.dev, sbdf.fn);
>> +        return;
>> +    }
>> +
>> +    if ( unlikely(!bridge->ops->write) )
>> +        return;
>> +
>> +    rc = bridge->ops->write(bridge, (uint32_t) sbdf.sbdf, reg, len, val);
>> +    if ( rc )
>> +        printk(XENLOG_ERR "Failed to write reg %#x len %u for "PRI_pci"\n",
>> +                reg, len, sbdf.seg, sbdf.bus, sbdf.dev, sbdf.fn);
>> +}
>> +
>> +/*
>> + * Wrappers for all PCI configuration access functions.
>> + */
>> +
>> +#define PCI_OP_WRITE(size, type) \
>> +    void pci_conf_write##size (pci_sbdf_t sbdf,unsigned int reg, type val) \
>> +{                                                     \
>> +    pci_config_write(sbdf, reg, size / 8, val);     \
>> +}
>> +
>> +#define PCI_OP_READ(size, type) \
>> +    type pci_conf_read##size (pci_sbdf_t sbdf, unsigned int reg)  \
>> +{                                                     \
>> +    return pci_config_read(sbdf, reg, size / 8);     \
>> +}
>> +
>> +PCI_OP_READ(8, u8)
>> +PCI_OP_READ(16, u16)
>> +PCI_OP_READ(32, u32)
>> +PCI_OP_WRITE(8, u8)
>> +PCI_OP_WRITE(16, u16)
>> +PCI_OP_WRITE(32, u32)
> This looks like a subset of xen/arch/x86/x86_64/mmconfig_64.c ?
>
> MMCFG is supposed to cover ECAM-compliant host bridges too, if I am not
> mistaken. Is there any value in sharing the code with x86? It is OK if
> we don't, but I would like to understand the reasoning.
>
>
>
>> +/*
>> + * Local variables:
>> + * mode: C
>> + * c-file-style: "BSD"
>> + * c-basic-offset: 4
>> + * tab-width: 4
>> + * indent-tabs-mode: nil
>> + * End:
>> + */
>> diff --git a/xen/arch/arm/pci/pci-host-common.c b/xen/arch/arm/pci/pci-host-common.c
>> new file mode 100644
>> index 0000000000..c5f98be698
>> --- /dev/null
>> +++ b/xen/arch/arm/pci/pci-host-common.c
>> @@ -0,0 +1,198 @@
>> +/*
>> + * Copyright (C) 2020 Arm Ltd.
>> + *
>> + * Based on Linux drivers/pci/ecam.c
>> + * Copyright 2016 Broadcom.
>> + *
>> + * Based on Linux drivers/pci/controller/pci-host-common.c
>> + * Based on Linux drivers/pci/controller/pci-host-generic.c
>> + * Copyright (C) 2014 ARM Limited Will Deacon <will.deacon@arm.com>
>> + *
>> + *
>> + * This program is free software; you can redistribute it and/or modify
>> + * it under the terms of the GNU General Public License version 2 as
>> + * published by the Free Software Foundation.
>> + *
>> + * This program is distributed in the hope that it will be useful,
>> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
>> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
>> + * GNU General Public License for more details.
>> + *
>> + * You should have received a copy of the GNU General Public License
>> + * along with this program.  If not, see <http://www.gnu.org/licenses/>.
>> + */
>> +
>> +#include <xen/init.h>
>> +#include <xen/pci.h>
>> +#include <asm/pci.h>
>> +#include <xen/rwlock.h>
>> +#include <xen/vmap.h>
>> +
>> +/*
>> + * List for all the pci host bridges.
>> + */
>> +
>> +static LIST_HEAD(pci_host_bridges);
>> +
>> +static bool __init dt_pci_parse_bus_range(struct dt_device_node *dev,
>> +        struct pci_config_window *cfg)
>> +{
>> +    const __be32 *cells;
>> +    uint32_t len;
>> +
>> +    cells = dt_get_property(dev, "bus-range", &len);
>> +    /* bus-range should at least be 2 cells */
>> +    if ( !cells || (len < (sizeof(*cells) * 2)) )
>> +        return false;
>> +
>> +    cfg->busn_start = dt_next_cell(1, &cells);
>> +    cfg->busn_end = dt_next_cell(1, &cells);
>> +
>> +    return true;
>> +}
>> +
>> +static inline void __iomem *pci_remap_cfgspace(paddr_t start, size_t len)
>> +{
>> +    return ioremap_nocache(start, len);
>> +}
>> +
>> +static void pci_ecam_free(struct pci_config_window *cfg)
>> +{
>> +    if ( cfg->win )
>> +        iounmap(cfg->win);
>> +
>> +    xfree(cfg);
>> +}

The two functions above seem to deal with the same resources, e.g. cfg->win

and map/unmap. Would it make sense to align those, something like

s/pci_remap_cfgspace/pci_ecam_alloc and pci_ecam_alloc handles cfg->win?

Or anything which makes them look init/fini style?

>> +
>> +static struct pci_config_window *gen_pci_init(struct dt_device_node *dev,
>> +        struct pci_ecam_ops *ops)
>> +{
>> +    int err;
>> +    struct pci_config_window *cfg;
>> +    paddr_t addr, size;
>> +
>> +    cfg = xzalloc(struct pci_config_window);
>> +    if ( !cfg )
>> +        return NULL;
>> +
>> +    err = dt_pci_parse_bus_range(dev, cfg);
>> +    if ( !err ) {
>> +        cfg->busn_start = 0;
>> +        cfg->busn_end = 0xff;
>> +        printk(XENLOG_ERR "No bus range found for pci controller\n");
>> +    } else {
>> +        if ( cfg->busn_end > cfg->busn_start + 0xff )
>> +            cfg->busn_end = cfg->busn_start + 0xff;

So, if bus start is, for example, 0x10 then we'll end up with bus end at (0x10 + 0xff) > 0xff

which doesn't seem to be what we want

>> +    }
>> +
>> +    /* Parse our PCI ecam register address*/
>> +    err = dt_device_get_address(dev, 0, &addr, &size);
>> +    if ( err )
>> +        goto err_exit;
> Shouldn't we handle the possibility of multiple addresses? Is it
> possible to have more than one range for an ECAM compliant host bridge?
>
>
>> +    cfg->phys_addr = addr;
>> +    cfg->size = size;
>> +    cfg->ops = ops;
>> +
>> +    /*
>> +     * On 64-bit systems, we do a single ioremap for the whole config space
>> +     * since we have enough virtual address range available.  On 32-bit, we
>> +     * ioremap the config space for each bus individually.
>> +     *
>> +     * As of now only 64-bit is supported 32-bit is not supported.
>> +     */
>> +    cfg->win = pci_remap_cfgspace(cfg->phys_addr, cfg->size);

I am fine with "win", but can we think of something that tells us that

"win" is actually ECAM base address, so one doesn't need to map "win" to "ECAM"

while reading?

>> +    if ( !cfg->win )
>> +        goto err_exit_remap;
>> +
>> +    printk("ECAM at [mem %lx-%lx] for [bus %x-%x] \n",cfg->phys_addr,
>> +            cfg->phys_addr + cfg->size - 1,cfg->busn_start,cfg->busn_end);
>> +
>> +    if ( ops->init ) {
>> +        err = ops->init(cfg);
>> +        if (err)
>> +            goto err_exit;
>> +    }
>> +
>> +    return cfg;
>> +
>> +err_exit_remap:
>> +    printk(XENLOG_ERR "ECAM ioremap failed\n");
>> +err_exit:
>> +    pci_ecam_free(cfg);
>> +    return NULL;
>> +}
>> +
>> +static struct pci_host_bridge * pci_alloc_host_bridge(void)
>> +{
>> +    struct pci_host_bridge *bridge = xzalloc(struct pci_host_bridge);
>> +
>> +    if ( !bridge )
>> +        return NULL;
>> +
>> +    INIT_LIST_HEAD(&bridge->node);
>> +    return bridge;
>> +}
>> +
>> +int pci_host_common_probe(struct dt_device_node *dev,
>> +        struct pci_ecam_ops *ops)
>> +{
>> +    struct pci_host_bridge *bridge;
>> +    struct pci_config_window *cfg;
>> +    u32 segment;
>> +
>> +    bridge = pci_alloc_host_bridge();
>> +    if ( !bridge )
>> +        return -ENOMEM
>> +
>> +    /* Parse and map our Configuration 
U3BhY2Ugd2luZG93cyAqLw0KRG8geW91IGV4cGVjdCBtdWx0aXBsZSB3aW5kb3dzIGhlcmUgYXMg
dGhlIGNvbW1lbnQgc2F5cz8NCj4+ICsgICAgY2ZnID0gZ2VuX3BjaV9pbml0KGRldiwgb3BzKTsN
Cj4+ICsgICAgaWYgKCAhY2ZnICkNCj4+ICsgICAgICAgIHJldHVybiAtRU5PTUVNOw0KPiBJbiBj
YXNlIG9mIGVycm9ycyB0aGUgYWxsb2NhdGVkIGJyaWRnZSBpcyBub3QgZnJlZWQuDQo+DQo+DQo+
PiArICAgIGJyaWRnZS0+ZHRfbm9kZSA9IGRldjsNCj4+ICsgICAgYnJpZGdlLT5zeXNkYXRhID0g
Y2ZnOw0KPj4gKyAgICBicmlkZ2UtPm9wcyA9ICZvcHMtPnBjaV9vcHM7DQoNCkNhbiB3ZSBoYXZl
IHNvbWUgc29ydCBvZiBkdW1teSBvcHMgc28gd2UgZG9uJ3QgaGF2ZSB0byBjaGVjayBmb3Igb3Bz
ICE9IE5VTEwgZXZlcnkgdGltZQ0KDQp3ZSByZWFkL3dyaXRlIGNvbmZpZz8gSXMgaXQgcmVhbGx5
IHBvc3NpYmxlIHRoYXQgd2UgaGF2ZSBvcHMgc2V0IHRvIE5VTEwgYWZ0ZXIgd2UgaGF2ZQ0KDQp0
aGUgZGV2ZWxvcG1lbnQgZmluaXNoZWQ/DQoNCj4+ICsNCj4+ICsgICAgaWYoICFkdF9wcm9wZXJ0
eV9yZWFkX3UzMihkZXYsICJsaW51eCxwY2ktZG9tYWluIiwgJnNlZ21lbnQpICkNCj4+ICsgICAg
ew0KPj4gKyAgICAgICAgcHJpbnRrKFhFTkxPR19FUlIgIlwibGludXgscGNpLWRvbWFpblwiIHBy
b3BlcnR5IGluIG5vdCBhdmFpbGFibGUgaW4gRFRcbiIpOw0KPj4gKyAgICAgICAgcmV0dXJuIC1F
Tk9ERVY7DQo+PiArICAgIH0NCj4+ICsNCj4+ICsgICAgYnJpZGdlLT5zZWdtZW50ID0gKHUxNilz
ZWdtZW50Ow0KPiBNeSB1bmRlcnN0YW5kaW5nIGlzIHRoYXQgYSBMaW51eCBwY2ktZG9tYWluIGRv
ZXNuJ3QgY29ycmVzcG9uZCBleGFjdGx5DQo+IHRvIGEgUENJIHNlZ21lbnQuIFNlZSBmb3IgaW5z
dGFuY2U6DQo+DQo+IGh0dHBzOi8vbGlzdHMuZ251Lm9yZy9hcmNoaXZlL2h0bWwvcWVtdS1kZXZl
bC8yMDE4LTA0L21zZzAzODg1Lmh0bWwNCj4NCj4gRG8gd2UgbmVlZCB0byBjYXJlIGFib3V0IHRo
ZSBkaWZmZXJlbmNlPyBJZiB3ZSBtZWFuIHBjaS1kb21haW4gaGVyZSwNCj4gc2hvdWxkIHdlIGp1
c3QgY2FsbCB0aGVtIGFzIHN1Y2ggaW5zdGVhZCBvZiBjYWxsaW5nIHRoZW0gInNlZ21lbnRzIiBp
bg0KPiBYZW4gKGlmIHRoZXkgYXJlIG5vdCBzZWdtZW50cyk/DQoNCj4NCj4NCj4+ICsgICAgbGlz
dF9hZGRfdGFpbCgmYnJpZGdlLT5ub2RlLCAmcGNpX2hvc3RfYnJpZGdlcyk7DQo+IEl0IGxvb2tz
IGxpa2UgJnBjaV9ob3N0X2JyaWRnZXMgc2hvdWxkIGJlIGFuIG9yZGVyZWQgbGlzdCwgb3JkZXJl
ZCBieQ0KPiBzZWdtZW50IG51bWJlcj8NCg0KV2h5PyBEbyB5b3UgZXhwZWN0IGJyaWRnZSBhY2Nl
c3MgaW4gc29tZSBzcGVjaWZpYyBvcmRlciBzbyBvcmRlcmVkDQoNCmxpc3Qgd2lsbCBtYWtlIGl0
IGZhc3Rlcj8NCg0KPg0KPg0KPj4gKyAgICByZXR1cm4gMDsNCj4+ICt9DQo+PiArDQo+PiArLyoN
Cj4+ICsgKiBUaGlzIGZ1bmN0aW9uIHdpbGwgbG9va3VwIGFuIGhvc3RicmlkZ2UgYmFzZWQgb24g
dGhlIHNlZ21lbnQgYW5kIGJ1cw0KPj4gKyAqIG51bWJlci4NCj4+ICsgKi8NCj4+ICtzdHJ1Y3Qg
cGNpX2hvc3RfYnJpZGdlICpwY2lfZmluZF9ob3N0X2JyaWRnZSh1aW50MTZfdCBzZWdtZW50LCB1
aW50OF90IGJ1cykNCj4+ICt7DQo+PiArICAgIHN0cnVjdCBwY2lfaG9zdF9icmlkZ2UgKmJyaWRn
ZTsNCj4+ICsgICAgYm9vbCBmb3VuZCA9IGZhbHNlOw0KPj4gKw0KPj4gKyAgICBsaXN0X2Zvcl9l
YWNoX2VudHJ5KCBicmlkZ2UsICZwY2lfaG9zdF9icmlkZ2VzLCBub2RlICkNCj4+ICsgICAgew0K
Pj4gKyAgICAgICAgaWYgKCBicmlkZ2UtPnNlZ21lbnQgIT0gc2VnbWVudCApDQo+PiArICAgICAg
ICAgICAgY29udGludWU7DQo+PiArDQo+PiArICAgICAgICBmb3VuZCA9IHRydWU7DQo+PiArICAg
ICAgICBicmVhazsNCj4+ICsgICAgfQ0KPj4gKw0KPj4gKyAgICByZXR1cm4gKGZvdW5kKSA/IGJy
aWRnZSA6IE5VTEw7DQo+PiArfQ0KPj4gKy8qDQo+PiArICogTG9jYWwgdmFyaWFibGVzOg0KPj4g
KyAqIG1vZGU6IEMNCj4+ICsgKiBjLWZpbGUtc3R5bGU6ICJCU0QiDQo+PiArICogYy1iYXNpYy1v
ZmZzZXQ6IDQNCj4+ICsgKiB0YWItd2lkdGg6IDQNCj4+ICsgKiBpbmRlbnQtdGFicy1tb2RlOiBu
aWwNCj4+ICsgKiBFbmQ6DQo+PiArICovDQo+PiBkaWZmIC0tZ2l0IGEveGVuL2FyY2gvYXJtL3Bj
aS9wY2ktaG9zdC1nZW5lcmljLmMgYi94ZW4vYXJjaC9hcm0vcGNpL3BjaS1ob3N0LWdlbmVyaWMu
Yw0KPj4gbmV3IGZpbGUgbW9kZSAxMDA2NDQNCj4+IGluZGV4IDAwMDAwMDAwMDAuLmNkNjdiM2Rl
YzYNCj4+IC0tLSAvZGV2L251bGwNCj4+ICsrKyBiL3hlbi9hcmNoL2FybS9wY2kvcGNpLWhvc3Qt
Z2VuZXJpYy5jDQo+PiBAQCAtMCwwICsxLDEzMSBAQA0KPj4gKy8qDQo+PiArICogQ29weXJpZ2h0
IChDKSAyMDIwIEFybSBMdGQuDQo+PiArICoNCj4+ICsgKiBCYXNlZCBvbiBMaW51eCBkcml2ZXJz
L3BjaS9jb250cm9sbGVyL3BjaS1ob3N0LWNvbW1vbi5jDQo+PiArICogQmFzZWQgb24gTGludXgg
ZHJpdmVycy9wY2kvY29udHJvbGxlci9wY2ktaG9zdC1nZW5lcmljLmMNCj4+ICsgKiBDb3B5cmln
aHQgKEMpIDIwMTQgQVJNIExpbWl0ZWQgV2lsbCBEZWFjb24gPHdpbGwuZGVhY29uQGFybS5jb20+
DQo+PiArICoNCj4+ICsgKiBUaGlzIHByb2dyYW0gaXMgZnJlZSBzb2Z0d2FyZTsgeW91IGNhbiBy
ZWRpc3RyaWJ1dGUgaXQgYW5kL29yIG1vZGlmeQ0KPj4gKyAqIGl0IHVuZGVyIHRoZSB0ZXJtcyBv
ZiB0aGUgR05VIEdlbmVyYWwgUHVibGljIExpY2Vuc2UgdmVyc2lvbiAyIGFzDQo+PiArICogcHVi
bGlzaGVkIGJ5IHRoZSBGcmVlIFNvZnR3YXJlIEZvdW5kYXRpb24uDQo+PiArICoNCj4+ICsgKiBU
aGlzIHByb2dyYW0gaXMgZGlzdHJpYnV0ZWQgaW4gdGhlIGhvcGUgdGhhdCBpdCB3aWxsIGJlIHVz
ZWZ1bCwNCj4+ICsgKiBidXQgV0lUSE9VVCBBTlkgV0FSUkFOVFk7IHdpdGhvdXQgZXZlbiB0aGUg
aW1wbGllZCB3YXJyYW50eSBvZg0KPj4gKyAqIE1FUkNIQU5UQUJJTElUWSBvciBGSVRORVNTIEZP
UiBBIFBBUlRJQ1VMQVIgUFVSUE9TRS4gIFNlZSB0aGUNCj4+ICsgKiBHTlUgR2VuZXJhbCBQdWJs
aWMgTGljZW5zZSBmb3IgbW9yZSBkZXRhaWxzLg0KPj4gKyAqDQo+PiArICogWW91IHNob3VsZCBo
YXZlIHJlY2VpdmVkIGEgY29weSBvZiB0aGUgR05VIEdlbmVyYWwgUHVibGljIExpY2Vuc2UNCj4+
ICsgKiBhbG9uZyB3aXRoIHRoaXMgcHJvZ3JhbS4gIElmIG5vdCwgc2VlIDxodHRwOi8vd3d3Lmdu
dS5vcmcvbGljZW5zZXMvPi4NCj4+ICsgKi8NCj4+ICsNCj4+ICsjaW5jbHVkZSA8YXNtL2Rldmlj
ZS5oPg0KPj4gKyNpbmNsdWRlIDxhc20vaW8uaD4NCj4+ICsjaW5jbHVkZSA8eGVuL3BjaS5oPg0K
Pj4gKyNpbmNsdWRlIDxhc20vcGNpLmg+DQo+PiArDQo+PiArLyoNCj4+ICsgKiBGdW5jdGlvbiB0
byBnZXQgdGhlIGNvbmZpZyBzcGFjZSBiYXNlLg0KPj4gKyAqLw0KPj4gK3N0YXRpYyB2b2lkIF9f
aW9tZW0gKnBjaV9jb25maWdfYmFzZShzdHJ1Y3QgcGNpX2hvc3RfYnJpZGdlICpicmlkZ2UsDQo+
PiArICAgICAgICB1aW50MzJfdCBzYmRmLCBpbnQgd2hlcmUpDQo+IEkgdGhpbmsgdGhlIGZ1bmN0
aW9uIGlzIG1pc25hbWVkIGJlY2F1c2UgcmVhZGluZyB0aGUgY29kZSBiZWxvdyBpdCBsb29rcw0K
PiBsaWtlIGl0IGlzIG5vdCBqdXN0IHJldHVybmluZyB0aGUgYmFzZSBjb25maWcgc3BhY2UgYWRk
cmVzcyBidXQgYWxzbyB0aGUNCj4gc3BlY2lmaWMgYWRkcmVzcyB3ZSBuZWVkIHRvIHJlYWQvd3Jp
dGUgKGFkZGluZyB0aGUgZGV2aWNlIG9mZnNldCwNCj4gIndoZXJlIiwgYW5kIGV2ZXJ5dGhpbmcp
Lg0KPg0KPiBNYXliZSBwY2lfY29uZmlnX2dldF9hZGRyZXNzIG9yIHNvbWV0aGluZyBsaWtlIHRo
YXQ/DQo+DQo+DQo+PiArew0KPj4gKyAgICBzdHJ1Y3QgcGNpX2NvbmZpZ193aW5kb3cgKmNmZyA9
IGJyaWRnZS0+c3lzZGF0YTsNCg0KSSBhbSBhIGJpdCBjb25mdXNlZCBvZiB0aGUgbmFtaW5nIDsp
DQoNCldlIGFscmVhZHkgaGF2ZSAyIG1hcHM6IHdpbiAtPiBFQ0FNIGJhc2UgYW5kIG5vdyBzeXNk
YXRhIC0+IGNmZy4NCg0KQ2FuIHdlIHBsZWFzZSBoYXZlIHRoYXQgYWxpZ25lZCBzb21laG93IHNv
IGl0IGlzIGVhc2llciB0byBmb2xsb3c/DQoNCj4+ICsgICAgdW5zaWduZWQgaW50IGRldmZuX3No
aWZ0ID0gY2ZnLT5vcHMtPmJ1c19zaGlmdCAtIDg7DQoNCldlIGFyZSBub3QgY2hlY2tpbmcgY2Zn
LT5vcHMgZm9yIE5VTEwsIHNvIHByb2JhYmx5IHdlIGRvIG5vdCB3YW50IGJyaWRnZXMNCg0Kd2l0
aCBOVUxMIG9wcyBhcyB3ZWxsPw0KDQo+PiArDQo+PiArICAgIHBjaV9zYmRmX3Qgc2JkZl90ID0g
KHBjaV9zYmRmX3QpIHNiZGYgOw0KPj4gKw0KPj4gKyAgICB1bnNpZ25lZCBpbnQgYnVzbiA9IHNi
ZGZfdC5idXM7DQo+PiArICAgIHZvaWQgX19pb21lbSAqYmFzZTsNCj4+ICsNCj4+ICsgICAgaWYg
KCBidXNuIDwgY2ZnLT5idXNuX3N0YXJ0IHx8IGJ1c24gPiBjZmctPmJ1c25fZW5kICkNCj4+ICsg
ICAgICAgIHJldHVybiBOVUxMOw0KPj4gKw0KPj4gKyAgICBiYXNlID0gY2ZnLT53aW4gKyAoYnVz
biA8PCBjZmctPm9wcy0+YnVzX3NoaWZ0KTsNCj4+ICsNCj4+ICsgICAgcmV0dXJuIGJhc2UgKyAo
UENJX0RFVkZOKHNiZGZfdC5kZXYsIHNiZGZfdC5mbikgPDwgZGV2Zm5fc2hpZnQpICsgd2hlcmU7
DQo+PiArfQ0KPj4gKw0KPj4gK2ludCBwY2lfZWNhbV9jb25maWdfd3JpdGUoc3RydWN0IHBjaV9o
b3N0X2JyaWRnZSAqYnJpZGdlLCB1aW50MzJfdCBzYmRmLA0KPj4gKyAgICAgICAgaW50IHdoZXJl
LCBpbnQgc2l6ZSwgdTMyIHZhbCkNCj4+ICt7DQo+PiArICAgIHZvaWQgX19pb21lbSAqYWRkcjsN
Cj4+ICsNCj4+ICsgICAgYWRkciA9IHBjaV9jb25maWdfYmFzZShicmlkZ2UsIHNiZGYsIHdoZXJl
KTsNCj4+ICsgICAgaWYgKCAhYWRkciApDQo+PiArICAgICAgICByZXR1cm4gLUVOT0RFVjsNCj4+
ICsNCj4+ICsgICAgaWYgKCBzaXplID09IDEgKQ0KPj4gKyAgICAgICAgd3JpdGViKHZhbCwgYWRk
cik7DQo+PiArICAgIGVsc2UgaWYgKCBzaXplID09IDIgKQ0KPj4gKyAgICAgICAgd3JpdGV3KHZh
bCwgYWRkcik7DQo+PiArICAgIGVsc2UNCj4+ICsgICAgICAgIHdyaXRlbCh2YWwsIGFkZHIpOw0K
> please use a switch
>
>
>> +    return 0;
>> +}
>> +
>> +int pci_ecam_config_read(struct pci_host_bridge *bridge, uint32_t sbdf,
>> +        int where, int size, u32 *val)
>> +{
>> +    void __iomem *addr;
>> +
>> +    addr = pci_config_base(bridge, sbdf, where);
>> +    if ( !addr ) {
>> +        *val = ~0;
>> +        return -ENODEV;
>> +    }
>> +
>> +    if ( size == 1 )
>> +        *val = readb(addr);
>> +    else if ( size == 2 )
>> +        *val = readw(addr);
>> +    else
>> +        *val = readl(addr);
> please use a switch
>
>
>> +    return 0;
>> +}
>> +
>> +/* ECAM ops */
>> +struct pci_ecam_ops pci_generic_ecam_ops = {
>> +    .bus_shift  = 20,
>> +    .pci_ops    = {
>> +        .read       = pci_ecam_config_read,
>> +        .write      = pci_ecam_config_write,
>> +    }
>> +};
>> +
>> +static const struct dt_device_match gen_pci_dt_match[] = {
>> +    { .compatible = "pci-host-ecam-generic",
>> +      .data =       &pci_generic_ecam_ops },
> spurious blank line
>
>
>> +    { },
>> +};
>> +
>> +static int gen_pci_dt_init(struct dt_device_node *dev, const void *data)
>> +{
>> +    const struct dt_device_match *of_id;
>> +    struct pci_ecam_ops *ops;
>> +
>> +    of_id = dt_match_node(gen_pci_dt_match, dev->dev.of_node);
>> +    ops = (struct pci_ecam_ops *) of_id->data;
>> +
>> +    printk(XENLOG_INFO "Found PCI host bridge %s compatible:%s \n",
>> +            dt_node_full_name(dev), of_id->compatible);
>> +
>> +    return pci_host_common_probe(dev, ops);
>> +}
>> +
>> +DT_DEVICE_START(pci_gen, "PCI HOST GENERIC", DEVICE_PCI)
>> +.dt_match = gen_pci_dt_match,
>> +.init = gen_pci_dt_init,
>> +DT_DEVICE_END
>> +
>> +/*
>> + * Local variables:
>> + * mode: C
>> + * c-file-style: "BSD"
>> + * c-basic-offset: 4
>> + * tab-width: 4
>> + * indent-tabs-mode: nil
>> + * End:
>> + */
>> diff --git a/xen/arch/arm/pci/pci.c b/xen/arch/arm/pci/pci.c
>> new file mode 100644
>> index 0000000000..f8cbb99591
>> --- /dev/null
>> +++ b/xen/arch/arm/pci/pci.c
>> @@ -0,0 +1,112 @@
>> +/*
>> + * Copyright (C) 2020 Arm Ltd.
>> + *
>> + * This program is free software; you can redistribute it and/or modify
>> + * it under the terms of the GNU General Public License version 2 as
>> + * published by the Free Software Foundation.
>> + *
>> + * This program is distributed in the hope that it will be useful,
>> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
>> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
>> + * GNU General Public License for more details.
>> + *
>> + * You should have received a copy of the GNU General Public License
>> + * along with this program.  If not, see <http://www.gnu.org/licenses/>.
>> + */
>> +
>> +#include <xen/acpi.h>
>> +#include <xen/device_tree.h>
>> +#include <xen/errno.h>
>> +#include <xen/init.h>
>> +#include <xen/pci.h>
>> +#include <xen/param.h>
>> +
>> +static int __init dt_pci_init(void)
>> +{
>> +    struct dt_device_node *np;
>> +    int rc;
>> +
>> +    dt_for_each_device_node(dt_host, np)
>> +    {
>> +        rc = device_init(np, DEVICE_PCI, NULL);
>> +        if( !rc )
>> +            continue;
>> +        /*
>> +         * Ignore the following error codes:
>> +         *   - EBADF: Indicate the current is not an pci
>> +         *   - ENODEV: The pci device is not present or cannot be used by
>> +         *     Xen.
>> +         */
>> +        else if ( rc != -EBADF && rc != -ENODEV )
>> +        {
>> +            printk(XENLOG_ERR "No driver found in XEN or driver init error.\n");
>> +            return rc;
>> +        }
>> +    }
>> +
>> +    return 0;
>> +}
>> +
>> +#ifdef CONFIG_ACPI
>> +static void __init acpi_pci_init(void)
>> +{
>> +    printk(XENLOG_ERR "ACPI pci init not supported \n");
>> +    return;
>> +}
>> +#else
>> +static inline void __init acpi_pci_init(void) { }
>> +#endif
>> +
>> +static bool __initdata param_pci_enable;
>> +static int __init parse_pci_param(const char *arg)
>> +{
>> +    if ( !arg )
>> +    {
>> +        param_pci_enable = false;
>> +        return 0;
>> +    }
>> +
>> +    switch ( parse_bool(arg, NULL) )
>> +    {
>> +        case 0:
>> +            param_pci_enable = false;
>> +            return 0;
>> +        case 1:
>> +            param_pci_enable = true;
>> +            return 0;
>> +    }
>> +
>> +    return -EINVAL;
>> +}
>> +custom_param("pci", parse_pci_param);
> When adding new command line parameters please also add its
> documentation (docs/misc/xen-command-line.pandoc) in the same patch,
> unless this is meant to be just transient and we'll get removed before
> the final commit of the series.
>
>
>> +void __init pci_init(void)
>> +{
>> +    /*
>> +     * Enable PCI when has been enabled explicitly (pci=on)
>> +     */
>> +    if ( !param_pci_enable)
>> +        goto disable;
>> +
>> +    if ( acpi_disabled )
>> +        dt_pci_init();
>> +    else
>> +        acpi_pci_init();
>> +
>> +#ifdef CONFIG_HAS_PCI
>> +    pci_segments_init();
>> +#endif
>> +
>> +disable:
>> +    return;
>> +}
>> +
>> +/*
>> + * Local variables:
>> + * mode: C
>> + * c-file-style: "BSD"
>> + * c-basic-offset: 4
>> + * tab-width: 4
>> + * indent-tabs-mode: nil
>> + * End:
>> + */
>> diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
>> index 7968cee47d..2d7f1db44f 100644
>> --- a/xen/arch/arm/setup.c
>> +++ b/xen/arch/arm/setup.c
>> @@ -930,6 +930,8 @@ void __init start_xen(unsigned long boot_phys_offset,
>>   
>>       setup_virt_paging();
>>   
>> +    pci_init();
> pci_init should probably be an initcall
>
>
>>       do_initcalls();
>>   
>>       /*
>> diff --git a/xen/include/asm-arm/device.h b/xen/include/asm-arm/device.h
>> index ee7cff2d44..28f8049cfd 100644
>> --- a/xen/include/asm-arm/device.h
>> +++ b/xen/include/asm-arm/device.h
>> @@ -4,6 +4,7 @@
>>   enum device_type
>>   {
>>       DEV_DT,
>> +    DEV_PCI,
>>   };
>>   
>>   struct dev_archdata {
>> @@ -25,15 +26,15 @@ typedef struct device device_t;
>>   
>>   #include <xen/device_tree.h>
>>   
>> -/* TODO: Correctly implement dev_is_pci when PCI is supported on ARM */
>> -#define dev_is_pci(dev) ((void)(dev), 0)
>> -#define dev_is_dt(dev)  ((dev->type == DEV_DT)
Didn't we have a patch for that recently or talked about?
>> +#define dev_is_pci(dev) (dev->type == DEV_PCI)
>> +#define dev_is_dt(dev)  (dev->type == DEV_DT)
>>   
>>   enum device_class
>>   {
>>       DEVICE_SERIAL,
>>       DEVICE_IOMMU,
>>       DEVICE_GIC,
>> +    DEVICE_PCI,
>>       /* Use for error */
>>       DEVICE_UNKNOWN,
>>   };
>> diff --git a/xen/include/asm-arm/pci.h b/xen/include/asm-arm/pci.h
>> index de13359f65..94fd00360a 100644
>> --- a/xen/include/asm-arm/pci.h
>> +++ b/xen/include/asm-arm/pci.h
>> @@ -1,7 +1,98 @@
>> -#ifndef __X86_PCI_H__
>> -#define __X86_PCI_H__
>> +/*
>> + * Copyright (C) 2020 Arm Ltd.
>> + *
>> + * Based on Linux drivers/pci/ecam.c
>> + * Copyright 2016 Broadcom.
>> + *
>> + * Based on Linux drivers/pci/controller/pci-host-common.c
>> + * Based on Linux drivers/pci/controller/pci-host-generic.c
>> + * Copyright (C) 2014 ARM Limited Will Deacon <will.deacon@arm.com>
>> + *
>> + * This program is free software; you can redistribute it and/or modify
>> + * it under the terms of the GNU General Public License version 2 as
>> + * published by the Free Software Foundation.
>> + *
>> + * This program is distributed in the hope that it will be useful,
>> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
>> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
>> + * GNU General Public License for more details.
>> + *
>> + * You should have received a copy of the GNU General Public License
>> + * along with this program.  If not, see <http://www.gnu.org/licenses/>.
>> + */
>>   
>> +#ifndef __ARM_PCI_H__
>> +#define __ARM_PCI_H__
>> +
>> +#include <xen/pci.h>
>> +#include <xen/device_tree.h>
>> +#include <asm/device.h>
>> +
>> +#ifdef CONFIG_ARM_PCI
>> +
>> +/* Arch pci dev struct */
>>   struct arch_pci_dev {
>> +    struct device dev;
>> +};
> Are you actually using struct device in struct arch_pci_dev?
> struct device is already part of struct dt_device_node and a pointer to
> it is stored in bridge->dt_node.
>
>
>> +#define PRI_pci "%04x:%02x:%02x.%u"
>> +#define pci_to_dev(pcidev) (&(pcidev)->arch.dev)
>> +
>> +/*
>> + * struct to hold the mappings of a config space window. This
>> + * is expected to be used as sysdata for PCI controllers that
>> + * use ECAM.
>> + */
>> +struct pci_config_window {
>> +    paddr_t     phys_addr;
>> +    paddr_t     size;
>> +    uint8_t     busn_start;
>> +    uint8_t     busn_end;
>> +    struct pci_ecam_ops     *ops;
>> +    void __iomem        *win;
>> +};
>> +
>> +/* Forward declaration as pci_host_bridge and pci_ops depend on each other. */
>> +struct pci_host_bridge;
>> +
>> +struct pci_ops {
>> +    int (*read)(struct pci_host_bridge *bridge,
>> +                    uint32_t sbdf, int where, int size, u32 *val);
>> +    int (*write)(struct pci_host_bridge *bridge,
>> +                    uint32_t sbdf, int where, int size, u32 val);
> I'd prefer if we could use explicitly-sized integers for "where" and
> "size" too. Also, should they be unsigned rather than signed?
>
> Can we use pci_sbdf_t for the sbdf parameter?
>
>
>> +};
>> +
>> +/*
>> + * struct to hold pci ops and bus shift of the config window
>> + * for a PCI controller.
>> + */
>> +struct pci_ecam_ops {
>> +    unsigned int            bus_shift;
>> +    struct pci_ops          pci_ops;
>> +    int             (*init)(struct pci_config_window *);
>> +};
> Although I realize that we are only targeting ECAM now, and the
> implementation is based on ECAM, the interface doesn't seem to have
> anything ECAM-specific in it. I would rename pci_ecam_ops to something
> else, maybe simply "pci_ops".

Yes, please, bear in mind that we are about to work with this code on non-ECAM HW from the very beginning, so this is something that we would like to see from the ground up.

>
>
>> +/*
>> + * struct to hold pci host bridge information
>> + * for a PCI controller.
>> + */
>> +struct pci_host_bridge {
>> +    struct dt_device_node *dt_node;  /* Pointer to the associated DT node */
>> +    struct list_head node;           /* Node in list of host bridges */
>> +    uint16_t segment;                /* Segment number */
>> +    void *sysdata;                   /* Pointer to the config space window*/
>> +    const struct pci_ops *ops;
>>   };
>>   
>> -#endif /* __X86_PCI_H__ */
>> +struct pci_host_bridge *pci_find_host_bridge(uint16_t segment, uint8_t bus);
>> +
>> +int pci_host_common_probe(struct dt_device_node *dev,
>> +                struct pci_ecam_ops *ops);
>> +
>> +void pci_init(void);
>> +
>> +#else   /*!CONFIG_ARM_PCI*/
>> +struct arch_pci_dev { };
>> +static inline void  pci_init(void) { }
>> +#endif  /*!CONFIG_ARM_PCI*/
>> +#endif /* __ARM_PCI_H__ */
>> -- 
>> 2.17.1


From xen-devel-bounces@lists.xenproject.org Fri Jul 24 07:14:53 2020
From: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
To: Stefano Stabellini <sstabellini@kernel.org>, Rahul Singh
 <rahul.singh@arm.com>
Subject: Re: [RFC PATCH v1 2/4] xen/arm: Discovering PCI devices and add the
 PCI devices in XEN.
Date: Fri, 24 Jul 2020 07:14:32 +0000
Message-ID: <81cad0cd-731d-e1d5-cacd-d64f2c0781b6@epam.com>
References: <cover.1595511416.git.rahul.singh@arm.com>
 <666df0147054dda8af13ae74a89be44c69984295.1595511416.git.rahul.singh@arm.com>
 <alpine.DEB.2.21.2007231337140.17562@sstabellini-ThinkPad-T480s>
In-Reply-To: <alpine.DEB.2.21.2007231337140.17562@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset="utf-8"
Content-ID: <722FAF729FD8E9408EBB0EDB9D9E6CBB@eurprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Julien Grall <julien@xen.org>,
 "andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>,
 "Bertrand.Marquis@arm.com" <Bertrand.Marquis@arm.com>,
 "jbeulich@suse.com" <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 "nd@arm.com" <nd@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 "roger.pau@citrix.com" <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 7/23/20 11:44 PM, Stefano Stabellini wrote:
> On Thu, 23 Jul 2020, Rahul Singh wrote:
>> Hardware domain is in charge of doing the PCI enumeration and will
>> discover the PCI devices and then will communicate to XEN via hyper
>> call PHYSDEVOP_pci_device_add to add the PCI devices in XEN.
>>
>> Change-Id: Ie87e19741689503b4b62da911c8dc2ee318584ac
> Same question about Change-Id
>
>
>> Signed-off-by: Rahul Singh <rahul.singh@arm.com>
>> ---
>>   xen/arch/arm/physdev.c | 42 +++++++++++++++++++++++++++++++++++++++---
>>   1 file changed, 39 insertions(+), 3 deletions(-)
>>
>> diff --git a/xen/arch/arm/physdev.c b/xen/arch/arm/physdev.c
>> index e91355fe22..274720f98a 100644
>> --- a/xen/arch/arm/physdev.c
>> +++ b/xen/arch/arm/physdev.c
>> @@ -9,12 +9,48 @@
>>   #include <xen/errno.h>
>>   #include <xen/sched.h>
>>   #include <asm/hypercall.h>
>> -
>> +#include <xen/guest_access.h>
>> +#include <xsm/xsm.h>
>>   
>>   int do_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
>>   {
>> -    gdprintk(XENLOG_DEBUG, "PHYSDEVOP cmd=%d: not implemented\n", cmd);
>> -    return -ENOSYS;
>> +    int ret = 0;
>> +
>> +    switch ( cmd )
>> +    {
>> +#ifdef CONFIG_HAS_PCI

In the cover letter you were saying "we are not enabling the HAS_PCI and HAS_VPCI flags for ARM".

Is this still valid?

>> +        case PHYSDEVOP_pci_device_add:
>> +            {
>> +                struct physdev_pci_device_add add;
>> +                struct pci_dev_info pdev_info;
>> +                nodeid_t node = NUMA_NO_NODE;
>> +
>> +                ret = -EFAULT;
>> +                if ( copy_from_guest(&add, arg, 1) != 0 )
>> +                    break;
>> +
>> +                pdev_info.is_extfn = !!(add.flags & XEN_PCI_DEV_EXTFN);
>> +                if ( add.flags & XEN_PCI_DEV_VIRTFN )
>> +                {
>> +                    pdev_info.is_virtfn = 1;
>> +                    pdev_info.physfn.bus = add.physfn.bus;
>> +                    pdev_info.physfn.devfn = add.physfn.devfn;
>> +                }
>> +                else
>> +                    pdev_info.is_virtfn = 0;
>> +
>> +                ret = pci_add_device(add.seg, add.bus, add.devfn,
>> +                                &pdev_info, node);
>> +
>> +                break;
>> +            }
>> +#endif
>> +        default:
>> +            gdprintk(XENLOG_DEBUG, "PHYSDEVOP cmd=%d: not implemented\n", cmd);
>> +            ret = -ENOSYS;
>> +    }
> I think we should make the implementation common between arm and x86 by
> creating xen/common/physdev.c:do_physdev_op as a shared entry point for
> PHYSDEVOP hypercalls implementations. See for instance:
>
> xen/common/sysctl.c:do_sysctl
>
> and
>
> xen/arch/arm/sysctl.c:arch_do_sysctl
> xen/arch/x86/sysctl.c:arch_do_sysctl
>
>
> Jan, Andrew, Roger, any opinions?
>
>
I think we can also have a look at [1] by Julien. That implementation,

IMO, had some thoughts on making Arm/x86 code common where possible


[1] https://xenbits.xen.org/gitweb/?p=people/julieng/xen-unstable.git;a=shortlog;h=refs/heads/dev-pci


From xen-devel-bounces@lists.xenproject.org Fri Jul 24 07:23:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jul 2020 07:23:16 +0000
Subject: Re: [PATCH] x86/pv: Make the PV default WRMSR path match the HVM
 default
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <20200723182626.7500-1-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <4a5f2c67-f494-e682-1712-0f3f431ce5e7@suse.com>
Date: Fri, 24 Jul 2020 09:23:11 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20200723182626.7500-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>

On 23.07.2020 20:26, Andrew Cooper wrote:
> The current HVM default for writes to unknown MSRs is to inject #GP if the MSR
> is unreadable, and discard writes otherwise. While this behaviour isn't great,
> the PV default is even worse, because it swallows writes even to non-readable
> MSRs.  i.e. A PV guest doesn't even get a #GP fault for a write to a totally
> bogus index.
> 
> Update PV to make it consistent with HVM, which will simplify the task of
> making other improvements to the default MSR behaviour.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>


From xen-devel-bounces@lists.xenproject.org Fri Jul 24 07:28:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jul 2020 07:28:42 +0000
Date: Fri, 24 Jul 2020 09:28:21 +0200
From: Michal Hocko <mhocko@kernel.org>
To: David Hildenbrand <david@redhat.com>
Subject: Re: [PATCH 3/3] memory: introduce an option to force onlining of
 hotplug memory
Message-ID: <20200724072821.GD4061@dhcp22.suse.cz>
References: <20200723084523.42109-1-roger.pau@citrix.com>
 <20200723084523.42109-4-roger.pau@citrix.com>
 <21490d49-b2cf-a398-0609-8010bdb0b004@redhat.com>
 <20200723122300.GD7191@Air-de-Roger>
 <e94d9556-f615-bbe2-07d2-08958969ee5f@redhat.com>
 <20200723135930.GH7191@Air-de-Roger>
 <82b131f4-8f50-cd49-65cf-9a87d51b5555@suse.com>
 <20200723162256.GI7191@Air-de-Roger>
 <4ff380e9-4b16-4cd0-7753-c2b89bd8ac6b@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <4ff380e9-4b16-4cd0-7753-c2b89bd8ac6b@redhat.com>
Cc: =?iso-8859-1?Q?J=FCrgen_Gro=DF?= <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, linux-kernel@vger.kernel.org,
 linux-mm@kvack.org, xen-devel@lists.xenproject.org,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Andrew Morton <akpm@linux-foundation.org>,
 Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>

On Thu 23-07-20 19:39:54, David Hildenbrand wrote:
[...]
> Yeah, might require some code churn. It just feels wrong to involve
> buddy concepts (e.g., onlining pages, calling memory notifiers, exposing
> memory block devices) and introducing hacks (forced onlining) just to
> get a memmap+identity mapping+iomem resource. I think reserving such a
> region during boot as suggested is the easiest approach, but I am
> *absolutely* not an expert on all these XEN-specific things :)

I am late to the discussion but FTR I completely agree.
-- 
Michal Hocko
SUSE Labs


From xen-devel-bounces@lists.xenproject.org Fri Jul 24 07:55:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jul 2020 07:55:27 +0000
From: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
To: Stefano Stabellini <sstabellini@kernel.org>, Rahul Singh
 <rahul.singh@arm.com>
Subject: Re: [RFC PATCH v1 4/4] arm/libxl: Emulated PCI device tree node in
 libxl
Thread-Topic: [RFC PATCH v1 4/4] arm/libxl: Emulated PCI device tree node in
 libxl
Thread-Index: AQHWYY+/hNTBSYEOFkWb/ljQZncVFQ==
Date: Fri, 24 Jul 2020 07:55:08 +0000
Message-ID: <81cad7ed-25fd-3002-eda6-6a9f57e4d754@epam.com>
References: <cover.1595511416.git.rahul.singh@arm.com>
 <23346b24762467bd246b91b05f7b0fc1719282f6.1595511416.git.rahul.singh@arm.com>
 <alpine.DEB.2.21.2007231505170.17562@sstabellini-ThinkPad-T480s>
In-Reply-To: <alpine.DEB.2.21.2007231505170.17562@sstabellini-ThinkPad-T480s>
Accept-Language: en-US
Content-Language: en-US
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Cc: Julien Grall <julien@xen.org>, Wei Liu <wl@xen.org>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 "Bertrand.Marquis@arm.com" <Bertrand.Marquis@arm.com>,
 Anthony PERARD <anthony.perard@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 "nd@arm.com" <nd@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>

On 7/24/20 2:39 AM, Stefano Stabellini wrote:
> On Thu, 23 Jul 2020, Rahul Singh wrote:
>> libxl will create an emulated PCI device tree node in the
>> device tree to enable the guest OS to discover the virtual
>> PCI during guest boot.
>>
>> We introduced the new config option [vpci="ecam"] for guests.
>> When this config option is enabled in a guest configuration,
>> a PCI device tree node will be created in the guest device tree.
>>
>> A new area has been reserved in the arm guest physical map at
>> which the VPCI bus is declared in the device tree (reg and ranges
>> parameters of the node).
>>
>> Change-Id: I47d39cbe8184de2226f174644df9790ecc610ccd
> Same question
>
>
>> Signed-off-by: Rahul Singh <rahul.singh@arm.com>
>> ---
>>   tools/libxl/libxl_arm.c       | 200 ++++++++++++++++++++++++++++++++++
>>   tools/libxl/libxl_types.idl   |   6 +
>>   tools/xl/xl_parse.c           |   7 ++
>>   xen/include/public/arch-arm.h |  28 +++++
>>   4 files changed, 241 insertions(+)
>>
>> diff --git a/tools/libxl/libxl_arm.c b/tools/libxl/libxl_arm.c
>> index 34f8a29056..84568e9dc9 100644
>> --- a/tools/libxl/libxl_arm.c
>> +++ b/tools/libxl/libxl_arm.c
>> @@ -268,6 +268,130 @@ static int fdt_property_regs(libxl__gc *gc, void *fdt,
>>       return fdt_property(fdt, "reg", regs, sizeof(regs));
>>   }
>>   
>> +static int fdt_property_vpci_bus_range(libxl__gc *gc, void *fdt,
>> +        unsigned num_cells, ...)
>> +{
>> +    uint32_t bus_range[num_cells];
>> +    be32 *cells = &bus_range[0];
>> +    int i;
>> +    va_list ap;
>> +    uint32_t arg;
>> +
>> +    va_start(ap, num_cells);
>> +    for (i = 0 ; i < num_cells; i++) {
>> +        arg = va_arg(ap, uint32_t);
>> +        set_cell(&cells, 1, arg);
>> +    }
>> +    va_end(ap);
>> +
>> +    return fdt_property(fdt, "bus-range", bus_range, sizeof(bus_range));
>> +}
>> +
>> +static int fdt_property_vpci_interrupt_map_mask(libxl__gc *gc, void *fdt,
>> +        unsigned num_cells, ...)
>> +{
>> +    uint32_t interrupt_map_mask[num_cells];
>> +    be32 *cells = &interrupt_map_mask[0];
>> +    int i;
>> +    va_list ap;
>> +    uint32_t arg;
>> +
>> +    va_start(ap, num_cells);
>> +    for (i = 0 ; i < num_cells; i++) {
>> +        arg = va_arg(ap, uint32_t);
>> +        set_cell(&cells, 1, arg);
>> +    }
>> +    va_end(ap);
>> +
>> +    return fdt_property(fdt, "interrupt-map-mask", interrupt_map_mask,
>> +                                sizeof(interrupt_map_mask));
>> +}
>> +
>> +static int fdt_property_vpci_ranges(libxl__gc *gc, void *fdt,
>> +        unsigned vpci_addr_cells,
>> +        unsigned cpu_addr_cells,
>> +        unsigned vpci_size_cells,
>> +        unsigned num_regs, ...)
>> +{
>> +    uint32_t regs[num_regs*(vpci_addr_cells+cpu_addr_cells+vpci_size_cells)];
>> +    be32 *cells = &regs[0];
>> +    int i;
>> +    va_list ap;
>> +    uint64_t arg;
>> +
>> +    va_start(ap, num_regs);
>> +    for (i = 0 ; i < num_regs; i++) {
>> +        /* Set the memory bit field */
>> +        arg = va_arg(ap, uint64_t);
>> +        set_cell(&cells, 1, arg);
>> +
>> +        /* Set the vpci bus address */
>> +        arg = vpci_addr_cells ? va_arg(ap, uint64_t) : 0;
>> +        set_cell(&cells, 2 , arg);
>> +
>> +        /* Set the cpu bus address where vpci address is mapped */
>> +        arg = cpu_addr_cells ? va_arg(ap, uint64_t) : 0;
>> +        set_cell(&cells, cpu_addr_cells, arg);
>> +
>> +        /* Set the vpci size requested */
>> +        arg = vpci_size_cells ? va_arg(ap, uint64_t) : 0;
>> +        set_cell(&cells, vpci_size_cells,arg);
>> +    }
>> +    va_end(ap);
>> +
>> +    return fdt_property(fdt, "ranges", regs, sizeof(regs));
>> +}
>> +
>> +static int fdt_property_vpci_interrupt_map(libxl__gc *gc, void *fdt,
>> +        unsigned child_unit_addr_cells,
>> +        unsigned child_interrupt_specifier_cells,
>> +        unsigned parent_unit_addr_cells,
>> +        unsigned parent_interrupt_specifier_cells,
>> +        unsigned num_regs, ...)
>> +{
>> +    uint32_t interrupt_map[num_regs * (child_unit_addr_cells +
>> +            child_interrupt_specifier_cells + parent_unit_addr_cells
>> +            + parent_interrupt_specifier_cells + 1)];
>> +    be32 *cells = &interrupt_map[0];
>> +    int i,j;
>> +    va_list ap;
>> +    uint64_t arg;
>> +
>> +    va_start(ap, num_regs);
>> +    for (i = 0 ; i < num_regs; i++) {
>> +        /* Set the child unit address*/
>> +        for (j = 0 ; j < child_unit_addr_cells; j++) {
>> +            arg = va_arg(ap, uint32_t);
>> +            set_cell(&cells, 1 , arg);
>> +        }
>> +
>> +        /* Set the child interrupt specifier*/
>> +        for (j = 0 ; j < child_interrupt_specifier_cells ; j++) {
>> +            arg = va_arg(ap, uint32_t);
>> +            set_cell(&cells, 1 , arg);
>> +        }
>> +
>> +        /* Set the interrupt-parent*/
>> +        set_cell(&cells, 1 , GUEST_PHANDLE_GIC);
>> +
>> +        /* Set the parent unit address*/
>> +        for (j = 0 ; j < parent_unit_addr_cells; j++) {
>> +            arg = va_arg(ap, uint32_t);
>> +            set_cell(&cells, 1 , arg);
>> +        }
>> +
>> +        /* Set the parent interrupt specifier*/
>> +        for (j = 0 ; j < parent_interrupt_specifier_cells; j++) {
>> +            arg = va_arg(ap, uint32_t);
>> +            set_cell(&cells, 1 , arg);
>> +        }
>> +    }
>> +    va_end(ap);
>> +
>> +    return fdt_property(fdt, "interrupt-map", interrupt_map,
>> +                                sizeof(interrupt_map));
>> +}
>> +
>>   static int make_root_properties(libxl__gc *gc,
>>                                   const libxl_version_info *vers,
>>                                   void *fdt)
>> @@ -659,6 +783,79 @@ static int make_vpl011_uart_node(libxl__gc *gc, void *fdt,
>>       return 0;
>>   }
>>   
>> +static int make_vpci_node(libxl__gc *gc, void *fdt,
>> +        const struct arch_info *ainfo,
>> +        struct xc_dom_image *dom)
>> +{
>> +    int res;
>> +    const uint64_t vpci_ecam_base = GUEST_VPCI_ECAM_BASE;
>> +    const uint64_t vpci_ecam_size = GUEST_VPCI_ECAM_SIZE;
>> +    const char *name = GCSPRINTF("pcie@%"PRIx64, vpci_ecam_base);
>> +
>> +    res = fdt_begin_node(fdt, name);
>> +    if (res) return res;
>> +
>> +    res = fdt_property_compat(gc, fdt, 1, "pci-host-ecam-generic");
>> +    if (res) return res;
>> +
>> +    res = fdt_property_string(fdt, "device_type", "pci");
>> +    if (res) return res;
>> +
>> +    res = fdt_property_regs(gc, fdt, GUEST_ROOT_ADDRESS_CELLS,
>> +            GUEST_ROOT_SIZE_CELLS, 1, vpci_ecam_base, vpci_ecam_size);
>> +    if (res) return res;
>> +
>> +    res = fdt_property_vpci_bus_range(gc, fdt, 2, 0,255);
>> +    if (res) return res;
>> +
>> +    res = fdt_property_cell(fdt, "linux,pci-domain", 0);
>> +    if (res) return res;
>> +
>> +    res = fdt_property_cell(fdt, "#address-cells", 3);
>> +    if (res) return res;
>> +
>> +    res = fdt_property_cell(fdt, "#size-cells", 2);
>> +    if (res) return res;
>> +
>> +    res = fdt_property_cell(fdt, "#interrupt-cells", 1);
>> +    if (res) return res;
>> +
>> +    res = fdt_property_string(fdt, "status", "okay");
>> +    if (res) return res;
>> +
>> +    res = fdt_property_vpci_ranges(gc, fdt, GUEST_PCI_ADDRESS_CELLS,
>> +        GUEST_ROOT_ADDRESS_CELLS, GUEST_PCI_SIZE_CELLS,
>> +        3,
>> +        GUEST_VPCI_ADDR_TYPE_MEM, GUEST_VPCI_MEM_PCI_ADDR,
>> +        GUEST_VPCI_MEM_CPU_ADDR, GUEST_VPCI_MEM_SIZE,
>> +        GUEST_VPCI_ADDR_TYPE_PREFETCH_MEM, GUEST_VPCI_PREFETCH_MEM_PCI_ADDR,
>> +        GUEST_VPCI_PREFETCH_MEM_CPU_ADDR, GUEST_VPCI_PREFETCH_MEM_SIZE,
>> +        GUEST_VPCI_ADDR_TYPE_IO, GUEST_VPCI_IO_PCI_ADDR,
>> +        GUEST_VPCI_IO_CPU_ADDR, GUEST_VPCI_IO_SIZE);
>> +    if (res) return res;
>> +
>> +    res = fdt_property_vpci_interrupt_map_mask(gc, fdt, 4, 0, 0, 0, 7);
> it would make sense to separate out child_unit_addr_cells and
> child_interrupt_specifier_cells here like we do below with
> fdt_property_vpci_interrupt_map
>
>
>> +    if (res) return res;
>> +
>> +    /*
>> +     * Legacy interrupt is forced and assigned to the guest.
>> +     * This will be removed once we have implementation for MSI support.
>> +     *
>> +     */
>> +    res = fdt_property_vpci_interrupt_map(gc, fdt, 3, 1, 0, 3,
>> +            4,
>> +            0, 0, 0, 1, 0, 136, DT_IRQ_TYPE_LE
VkVMX0hJR0gsDQo+PiArICAgICAgICAgICAgMCwgMCwgMCwgMiwgMCwgMTM3LCBEVF9JUlFfVFlQ
RV9MRVZFTF9ISUdILA0KPj4gKyAgICAgICAgICAgIDAsIDAsIDAsIDMsIDAsIDEzOCwgRFRfSVJR
X1RZUEVfTEVWRUxfSElHSCwNCj4+ICsgICAgICAgICAgICAwLCAwLCAwLCA0LCAwLCAxMzksIERU
X0lSUV9UWVBFX0xFVkVMX0hJR0gpOw0KPiBUaGUgNCBpbnRlcnJ1cHQgYWxsb2NhdGVkIGZvciB0
aGlzIG5lZWQgdG8gYmUgZGVmaW5lZCBpbg0KPiB4ZW4vaW5jbHVkZS9wdWJsaWMvYXJjaC1hcm0u
aCBhcyB3ZWxsLiBBbHNvLCB3aHkgd291bGQgd2Ugd2FudCB0byBnZXQNCj4gcmlkIG9mIHRoZSBs
ZWdhY3kgaW50ZXJydXB0cyBjb21wbGV0ZWx5PyBJdCB3b3VsZCBiZSBwb3NzaWJsZSB0byBzdGls
bA0KPiBmaW5kIGRldmljZSBvciBzb2Z0d2FyZSB0aGF0IHJlbHkgb24gdGhlbS4NCkkgdGhpbmsg
dGhpcyBpcyBtb3JlIGFib3V0IHNoYXJlZCBpbnRlcnJ1cHRzIHN1cHBvcnQgY29tcGxleGl0eQ0K
Pg0KPg0KPj4gKyAgICBpZiAocmVzKSByZXR1cm4gcmVzOw0KPj4gKw0KPj4gKyAgICByZXMgPSBm
ZHRfZW5kX25vZGUoZmR0KTsNCj4+ICsgICAgaWYgKHJlcykgcmV0dXJuIHJlczsNCj4+ICsNCj4+
ICsgICAgcmV0dXJuIDA7DQo+PiArfQ0KPj4gKw0KPj4gICBzdGF0aWMgY29uc3Qgc3RydWN0IGFy
Y2hfaW5mbyAqZ2V0X2FyY2hfaW5mbyhsaWJ4bF9fZ2MgKmdjLA0KPj4gICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBjb25zdCBzdHJ1Y3QgeGNfZG9tX2ltYWdl
ICpkb20pDQo+PiAgIHsNCj4gWy4uLl0NCj4NCj4NCj4+IGRpZmYgLS1naXQgYS94ZW4vaW5jbHVk
ZS9wdWJsaWMvYXJjaC1hcm0uaCBiL3hlbi9pbmNsdWRlL3B1YmxpYy9hcmNoLWFybS5oDQo+PiBp
bmRleCA3MzY0YTA3MzYyLi40ZTE5YzYyOTQ4IDEwMDY0NA0KPj4gLS0tIGEveGVuL2luY2x1ZGUv
cHVibGljL2FyY2gtYXJtLmgNCj4+ICsrKyBiL3hlbi9pbmNsdWRlL3B1YmxpYy9hcmNoLWFybS5o
DQo+PiBAQCAtNDI2LDYgKzQyNiwzNCBAQCB0eXBlZGVmIHVpbnQ2NF90IHhlbl9jYWxsYmFja190
Ow0KPj4gICAjZGVmaW5lIEdVRVNUX1ZQQ0lfRUNBTV9CQVNFICAgIHhlbl9ta191bGxvbmcoMHgx
MDAwMDAwMCkNCj4+ICAgI2RlZmluZSBHVUVTVF9WUENJX0VDQU1fU0laRSAgICB4ZW5fbWtfdWxs
b25nKDB4MTAwMDAwMDApDQo+PiAgIA0KPj4gKyNkZWZpbmUgR1VFU1RfUENJX0FERFJFU1NfQ0VM
TFMgMw0KPj4gKyNkZWZpbmUgR1VFU1RfUENJX1NJWkVfQ0VMTFMgMg0KPj4gKw0KPj4gKy8qIFBD
SS1QQ0llIG1lbW9yeSBzcGFjZSB0eXBlcyAqLw0KPj4gKyNkZWZpbmUgR1VFU1RfVlBDSV9BRERS
X1RZUEVfUFJFRkVUQ0hfTUVNIHhlbl9ta191bGxvbmcoMHg0MjAwMDAwMCkNCj4+ICsjZGVmaW5l
IEdVRVNUX1ZQQ0lfQUREUl9UWVBFX01FTSAgICAgICAgICB4ZW5fbWtfdWxsb25nKDB4MDIwMDAw
MDApDQo+PiArI2RlZmluZSBHVUVTVF9WUENJX0FERFJfVFlQRV9JTyAgICAgICAgICAgeGVuX21r
X3VsbG9uZygweDAxMDAwMDAwKQ0KPj4gKw0KPj4gKy8qIEd1ZXN0IFBDSS1QQ0llIG1lbW9yeSBz
cGFjZSB3aGVyZSBjb25maWcgc3BhY2UgYW5kIEJBUiB3aWxsIGJlIGF2YWlsYWJsZS4qLw0KPj4g
KyNkZWZpbmUgR1VFU1RfVlBDSV9QUkVGRVRDSF9NRU1fQ1BVX0FERFIgIHhlbl9ta191bGxvbmco
MHg0MDAwMDAwMDAwKQ0KPiBJdCBsb29rcyBsaWtlIGl0IGNvdWxkIGNvbmZsaWN0IHdpdGggR1VF
U1RfUkFNMV9CQVNFK0dVRVNUX1JBTTFfU0laRT8NCj4NCj4NCj4+ICsjZGVmaW5lIEdVRVNUX1ZQ
Q0lfTUVNX0NQVV9BRERSICAgICAgICAgICB4ZW5fbWtfdWxsb25nKDB4MDQwMjAwMDApDQo+PiAr
I2RlZmluZSBHVUVTVF9WUENJX0lPX0NQVV9BRERSICAgICAgICAgICAgeGVuX21rX3VsbG9uZygw
eEMwMjAwODAwKQ0KPiAweEMwMjAwODAwIGxvb2tzIGxpa2UgaXQgY291bGQgY29uZmxpY3Qgd2l0
aA0KPiBHVUVTVF9SQU0wX0JBU0UrR1VFU1RfUkFNMF9TSVpFPw0KPg0KPg0KPj4gKy8qDQo+PiAr
ICogVGhpcyBpcyBoYXJkY29kZWQgdmFsdWVzIGZvciB0aGUgcmVhbCBQQ0kgcGh5c2ljYWwgYWRk
cmVzc2VzLg0KPj4gKyAqIFRoaXMgd2lsbCBiZSByZW1vdmVkIG9uY2Ugd2UgcmVhZCB0aGUgcmVh
bCBQQ0ktUENJZSBwaHlzaWNhbA0KPj4gKyAqIGFkZHJlc3NlcyBmb3JtIHRoZSBjb25maWcgc3Bh
Y2UgYW5kIG1hcCB0byB0aGUgZ3Vlc3QgbWVtb3J5IG1hcA0KPj4gKyAqIHdoZW4gYXNzaWduaW5n
IHRoZSBkZXZpY2UgdG8gZ3Vlc3QgdmlhIFZQQ0kuDQo+PiArICoNCj4+ICsgKi8NCj4+ICsjZGVm
aW5lIEdVRVNUX1ZQQ0lfUFJFRkVUQ0hfTUVNX1BDSV9BRERSICB4ZW5fbWtfdWxsb25nKDB4NDAw
MDAwMDAwMCkNCj4+ICsjZGVmaW5lIEdVRVNUX1ZQQ0lfTUVNX1BDSV9BRERSICAgICAgICAgICB4
ZW5fbWtfdWxsb25nKDB4NTAwMDAwMDApDQo+PiArI2RlZmluZSBHVUVTVF9WUENJX0lPX1BDSV9B
RERSICAgICAgICAgICAgeGVuX21rX3VsbG9uZygweDAwMDAwMDAwKQ0KPj4gKw0KPj4gKyNkZWZp
bmUgR1VFU1RfVlBDSV9QUkVGRVRDSF9NRU1fU0laRSAgICAgIHhlbl9ta191bGxvbmcoMHgxMDAw
MDAwMDApDQo+PiArI2RlZmluZSBHVUVTVF9WUENJX01FTV9TSVpFICAgICAgICAgICAgICAgeGVu
X21rX3VsbG9uZygweDA4MDAwMDAwKQ0KPiBIb3cgZGlkIHlvdSBjaG9zZSB0aGVzZSBzaXplcz8g
R1VFU1RfVlBDSV9NRU1fU0laRSBhbmQvb3INCj4gR1VFU1RfVlBDSV9QUkVGRVRDSF9NRU1fU0la
RSBhcmUgc3VwcG9zZWQgdG8gcG90ZW50aWFsbHkgY292ZXIgYWxsIHRoZQ0KPiBQQ0kgQkFScywg
aW5jbHVkaW5nIHBvdGVudGlhbCBmdXR1cmUgaG90cGx1ZyBkZXZpY2VzLCByaWdodD8NCj4NCj4g
SWYgc28sIG1heWJlIHdlIG5lZWQgdG8gaW5jcmVhc2UgR1VFU1RfVlBDSV9NRU1fU0laRSB0byBh
IGNvdXBsZSBvZiBHQg0KPiBhbmQgR1VFU1RfVlBDSV9QUkVGRVRDSF9NRU1fU0laRSB0byBldmVu
IG1vcmU/DQo+DQo+DQo+DQo+DQo+PiArI2RlZmluZSBHVUVTVF9WUENJX0lPX1NJWkUgICAgICAg
ICAgICAgICAgeGVuX21rX3VsbG9uZygweDAwODAwMDAwKQ0KPj4gKw0KPj4gICAvKg0KPj4gICAg
KiAxNk1CID09IDQwOTYgcGFnZXMgcmVzZXJ2ZWQgZm9yIGd1ZXN0IHRvIHVzZSBhcyBhIHJlZ2lv
biB0byBtYXAgaXRzDQo+PiAgICAqIGdyYW50IHRhYmxlIGluLg==


From xen-devel-bounces@lists.xenproject.org Fri Jul 24 08:05:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jul 2020 08:05:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jysi7-0007r7-1T; Fri, 24 Jul 2020 08:05:43 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5T8C=BD=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jysi5-0007r2-05
 for xen-devel@lists.xenproject.org; Fri, 24 Jul 2020 08:05:41 +0000
X-Inumbo-ID: 76d730c0-cd84-11ea-87e3-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 76d730c0-cd84-11ea-87e3-bc764e2007e4;
 Fri, 24 Jul 2020 08:05:40 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=vcUV31aYWSnGqtJsv8ngKgDlhVy7UxTBkS6YqKCatO0=; b=oHxYO686RrCgocwjyUKm7iXB5T
 ozdWmM1YCH73Kdumnwg+zraJaMMXobUzzGGf9ITPriq6Neal0M2diOr/wXULr1IbLpXuPv7NvFA8l
 56WrnK60iz73RakqPbvmD4084YVDsnCsrzJEc/FYMv6zusZjHVeKqyMp/xvUgTxlFjnk=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jysi3-0001te-KC; Fri, 24 Jul 2020 08:05:39 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jysi3-00048c-Bo; Fri, 24 Jul 2020 08:05:39 +0000
Subject: Re: [RFC PATCH v1 1/4] arm/pci: PCI setup and PCI host bridge
 discovery within XEN on ARM.
To: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Rahul Singh <rahul.singh@arm.com>
References: <cover.1595511416.git.rahul.singh@arm.com>
 <64ebd4ef614b36a5844c52426a4a6a4a23b1f087.1595511416.git.rahul.singh@arm.com>
 <alpine.DEB.2.21.2007231055230.17562@sstabellini-ThinkPad-T480s>
 <3ee41590-e8ca-84d6-3010-6e5dffe91df0@epam.com>
From: Julien Grall <julien@xen.org>
Message-ID: <276d6b48-8cd7-7fb1-1d76-15cb6a95cad9@xen.org>
Date: Fri, 24 Jul 2020 09:05:36 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:68.0)
 Gecko/20100101 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <3ee41590-e8ca-84d6-3010-6e5dffe91df0@epam.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>,
 "Bertrand.Marquis@arm.com" <Bertrand.Marquis@arm.com>,
 "jbeulich@suse.com" <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 "nd@arm.com" <nd@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 "roger.pau@citrix.com" <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi,

On 24/07/2020 08:03, Oleksandr Andrushchenko wrote:
>>> diff --git a/xen/arch/arm/pci/pci-access.c b/xen/arch/arm/pci/pci-access.c
>>> new file mode 100644
>>> index 0000000000..c53ef58336
>>> --- /dev/null
>>> +++ b/xen/arch/arm/pci/pci-access.c
>>> @@ -0,0 +1,101 @@
>>> +/*
>>> + * Copyright (C) 2020 Arm Ltd.
> I think SPDX will fit better in any new code.

While I would love to use SPDX in Xen, there was some pushback against 
it in the past. So new code should use the full-blown copyright notice 
until there is an agreement to adopt SPDX.

>>
>>> +    list_add_tail(&bridge->node, &pci_host_bridges);
>> It looks like &pci_host_bridges should be an ordered list, ordered by
>> segment number?
> 
> Why? Do you expect bridge access in some specific order, so an ordered
> list will make it faster?

Access to the config space will be pretty random, so I don't think 
ordering the list will make anything better.

However, looking up the bridge for every config space access is pretty 
slow. When I was working on PCI passthrough, I wanted to look at whether 
it would be possible to pass a pointer to the PCI host bridge as an 
argument to the helpers (rather than the segment).

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Jul 24 08:19:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jul 2020 08:19:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jysvT-0000Ts-CZ; Fri, 24 Jul 2020 08:19:31 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5T8C=BD=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jysvS-0000Tn-Oe
 for xen-devel@lists.xenproject.org; Fri, 24 Jul 2020 08:19:30 +0000
X-Inumbo-ID: 65961266-cd86-11ea-a385-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 65961266-cd86-11ea-a385-12813bfff9fa;
 Fri, 24 Jul 2020 08:19:30 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=FD2SS3S2FDkaZ6K3eYR6kq0HNXGyx8F4P+Y2NNCtnSc=; b=12axmA7CygjE1kLIFcRx9baOqV
 sUIlfwcE8BPR/6UH+6ERxEFlyk3G0MNIyA2mXQ9+jzCokrGhhlFEEbwPHd/rGgE8iEmaU7NhNBu4v
 OKHfd+VM6yDbVhXW6s2GRsBHLu6B5Wu6bB6+i33UaRiYAf+5dXzI795glc/lUvL87PZs=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jysvR-0002BN-DX; Fri, 24 Jul 2020 08:19:29 +0000
Received: from 54-240-197-227.amazon.com ([54.240.197.227]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jysvR-0004xX-4B; Fri, 24 Jul 2020 08:19:29 +0000
Subject: Re: [RFC PATCH v1 2/4] xen/arm: Discovering PCI devices and add the
 PCI devices in XEN.
To: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Rahul Singh <rahul.singh@arm.com>
References: <cover.1595511416.git.rahul.singh@arm.com>
 <666df0147054dda8af13ae74a89be44c69984295.1595511416.git.rahul.singh@arm.com>
 <alpine.DEB.2.21.2007231337140.17562@sstabellini-ThinkPad-T480s>
 <81cad0cd-731d-e1d5-cacd-d64f2c0781b6@epam.com>
From: Julien Grall <julien@xen.org>
Message-ID: <fb602a20-c7c8-d009-0f8c-d9730a6e4ddc@xen.org>
Date: Fri, 24 Jul 2020 09:19:26 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:68.0)
 Gecko/20100101 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <81cad0cd-731d-e1d5-cacd-d64f2c0781b6@epam.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>,
 "Bertrand.Marquis@arm.com" <Bertrand.Marquis@arm.com>,
 "jbeulich@suse.com" <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 "nd@arm.com" <nd@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 "roger.pau@citrix.com" <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi,

On 24/07/2020 08:14, Oleksandr Andrushchenko wrote:
> 
> On 7/23/20 11:44 PM, Stefano Stabellini wrote:
>> On Thu, 23 Jul 2020, Rahul Singh wrote:
>>> Hardware domain is in charge of doing the PCI enumeration and will
>>> discover the PCI devices and then will communicate to XEN via the
>>> hypercall PHYSDEVOP_pci_device_add to add the PCI devices in XEN.
>>>
>>> Change-Id: Ie87e19741689503b4b62da911c8dc2ee318584ac
>> Same question about Change-Id
>>
>>
>>> Signed-off-by: Rahul Singh <rahul.singh@arm.com>
>>> ---
>>>    xen/arch/arm/physdev.c | 42 +++++++++++++++++++++++++++++++++++++++---
>>>    1 file changed, 39 insertions(+), 3 deletions(-)
>>>
>>> diff --git a/xen/arch/arm/physdev.c b/xen/arch/arm/physdev.c
>>> index e91355fe22..274720f98a 100644
>>> --- a/xen/arch/arm/physdev.c
>>> +++ b/xen/arch/arm/physdev.c
>>> @@ -9,12 +9,48 @@
>>>    #include <xen/errno.h>
>>>    #include <xen/sched.h>
>>>    #include <asm/hypercall.h>
>>> -
>>> +#include <xen/guest_access.h>
>>> +#include <xsm/xsm.h>
>>>    
>>>    int do_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
>>>    {
>>> -    gdprintk(XENLOG_DEBUG, "PHYSDEVOP cmd=%d: not implemented\n", cmd);
>>> -    return -ENOSYS;
>>> +    int ret = 0;
>>> +
>>> +    switch ( cmd )
>>> +    {
>>> +#ifdef CONFIG_HAS_PCI
> 
> In the cover letter you were saying "we are not enabling the HAS_PCI and HAS_VPCI flags for ARM".
> 
> Is this still valid?
> 
>>> +        case PHYSDEVOP_pci_device_add:
>>> +            {
>>> +                struct physdev_pci_device_add add;
>>> +                struct pci_dev_info pdev_info;
>>> +                nodeid_t node = NUMA_NO_NODE;
>>> +
>>> +                ret = -EFAULT;
>>> +                if ( copy_from_guest(&add, arg, 1) != 0 )
>>> +                    break;
>>> +
>>> +                pdev_info.is_extfn = !!(add.flags & XEN_PCI_DEV_EXTFN);
>>> +                if ( add.flags & XEN_PCI_DEV_VIRTFN )
>>> +                {
>>> +                    pdev_info.is_virtfn = 1;
>>> +                    pdev_info.physfn.bus = add.physfn.bus;
>>> +                    pdev_info.physfn.devfn = add.physfn.devfn;
>>> +                }
>>> +                else
>>> +                    pdev_info.is_virtfn = 0;
>>> +
>>> +                ret = pci_add_device(add.seg, add.bus, add.devfn,
>>> +                                &pdev_info, node);
>>> +
>>> +                break;
>>> +            }
>>> +#endif
>>> +        default:
>>> +            gdprintk(XENLOG_DEBUG, "PHYSDEVOP cmd=%d: not implemented\n", cmd);
>>> +            ret = -ENOSYS;
>>> +    }
>> I think we should make the implementation common between arm and x86 by
>> creating xen/common/physdev.c:do_physdev_op as a shared entry point for
>> PHYSDEVOP hypercalls implementations. See for instance:
>>
>> xen/common/sysctl.c:do_sysctl
>>
>> and
>>
>> xen/arch/arm/sysctl.c:arch_do_sysctl
>> xen/arch/x86/sysctl.c:arch_do_sysctl
>>
>>
>> Jan, Andrew, Roger, any opinions?
>>
>>
> I think we can also have a look at [1] by Julien. That implementation,
> IMO, had some thoughts on making Arm/x86 code common where possible

There are some ideas on how I would like to see the split, although they 
need some cleanup. :)

In particular, I was expecting some preparatory work to get the existing 
PCI code non-x86 specific.

Regarding the hypercall Stefano mentioned, I think it should be 
possible to make it common if we abstract the NUMA node lookup part.

Cheers,

> 
> 
> [1] https://xenbits.xen.org/gitweb/?p=people/julieng/xen-unstable.git;a=shortlog;h=refs/heads/dev-pci
> 

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Jul 24 08:23:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jul 2020 08:23:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jysz0-0001Pl-1b; Fri, 24 Jul 2020 08:23:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5T8C=BD=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jysyz-0001Pg-7p
 for xen-devel@lists.xenproject.org; Fri, 24 Jul 2020 08:23:09 +0000
X-Inumbo-ID: e7b37022-cd86-11ea-87e3-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e7b37022-cd86-11ea-87e3-bc764e2007e4;
 Fri, 24 Jul 2020 08:23:08 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=67gnAVfj5c5wLITd427kJl9QyzcDZeQx+7iKWLhruVA=; b=6Yw2Uwodf207aY8e8gIPeAPaN/
 GGGxmQ7MzGYqbE7TfXCELxw1rJ88JycjBDmMopBFs09F973WHPXCLAPkZDatVn1HlE7dtivPdcGzI
 ILM7O8GHDMqP7JXfhY03xTlT6u9I/V82VuF7MjJBHmS4z9uH409T9PmE10Zd0eLD1lX4=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jysyx-0002GU-D3; Fri, 24 Jul 2020 08:23:07 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jysyx-0005Ks-63; Fri, 24 Jul 2020 08:23:07 +0000
Subject: Re: [RFC PATCH v1 1/4] arm/pci: PCI setup and PCI host bridge
 discovery within XEN on ARM.
To: Rahul Singh <rahul.singh@arm.com>, xen-devel@lists.xenproject.org
References: <cover.1595511416.git.rahul.singh@arm.com>
 <64ebd4ef614b36a5844c52426a4a6a4a23b1f087.1595511416.git.rahul.singh@arm.com>
From: Julien Grall <julien@xen.org>
Message-ID: <756d7979-0ebf-b9a4-72bd-18782762f7da@xen.org>
Date: Fri, 24 Jul 2020 09:23:05 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:68.0)
 Gecko/20100101 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <64ebd4ef614b36a5844c52426a4a6a4a23b1f087.1595511416.git.rahul.singh@arm.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: nd@arm.com, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Bertrand.Marquis@arm.com
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi Rahul,

On 23/07/2020 16:40, Rahul Singh wrote:
> XEN during boot will read the PCI device tree node “reg” property
> and will map the PCI config space to the XEN memory.
> 
> XEN will read the “linux,pci-domain” property from the device tree
> node and configure the host bridge segment number accordingly.
> 
> As of now "pci-host-ecam-generic" compatible board is supported.
> 
> Change-Id: If32f7748b7dc89dd37114dc502943222a2a36c49
> Signed-off-by: Rahul Singh <rahul.singh@arm.com>
> ---
>   xen/arch/arm/Kconfig                |   7 +
>   xen/arch/arm/Makefile               |   1 +
>   xen/arch/arm/pci/Makefile           |   4 +
>   xen/arch/arm/pci/pci-access.c       | 101 ++++++++++++++
>   xen/arch/arm/pci/pci-host-common.c  | 198 ++++++++++++++++++++++++++++
>   xen/arch/arm/pci/pci-host-generic.c | 131 ++++++++++++++++++
>   xen/arch/arm/pci/pci.c              | 112 ++++++++++++++++
>   xen/arch/arm/setup.c                |   2 +
>   xen/include/asm-arm/device.h        |   7 +-
>   xen/include/asm-arm/pci.h           |  97 +++++++++++++-
>   10 files changed, 654 insertions(+), 6 deletions(-)

As a general comment, I would suggest splitting the patch into smaller 
chunks. This would help the review and also allow you to provide more 
explanation of what is done.

For instance, I think a possible split would look like:
     - Add a framework to access a host bridge
     - Add support for ECAM
     - Add code to initialize the PCI subsystem

There are also some small fixes in this code that can probably move 
into their own patches.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Jul 24 08:45:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jul 2020 08:45:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jytJt-0003K1-PW; Fri, 24 Jul 2020 08:44:45 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5T8C=BD=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jytJs-0003Jw-EN
 for xen-devel@lists.xenproject.org; Fri, 24 Jul 2020 08:44:44 +0000
X-Inumbo-ID: ebc137a0-cd89-11ea-a38a-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ebc137a0-cd89-11ea-a38a-12813bfff9fa;
 Fri, 24 Jul 2020 08:44:43 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=rFult8kINdISWKMBrnxAFULL5bxrr1mijKyLzJ7ocFg=; b=R4IjjJOQFwmBtPH0tc7FMbKbPd
 t5GF+vGNel7pnhaM0u8IEVpT2izJ934/bjkffC/F3eRrF2PZ0tdXX5jtu2WlCMXJvS+N82+XZ+eqr
 GB98GyP8No8uQ7OLq1VZxzWuAtUp3ON6iZqSrVpt872nMQ3UcbkBjlPXQfZS6Dc0I3jc=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jytJp-0002gX-3s; Fri, 24 Jul 2020 08:44:41 +0000
Received: from 54-240-197-227.amazon.com ([54.240.197.227]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jytJo-0006i3-Pm; Fri, 24 Jul 2020 08:44:40 +0000
Subject: Re: [RFC PATCH v1 1/4] arm/pci: PCI setup and PCI host bridge
 discovery within XEN on ARM.
To: Stefano Stabellini <sstabellini@kernel.org>,
 Rahul Singh <rahul.singh@arm.com>
References: <cover.1595511416.git.rahul.singh@arm.com>
 <64ebd4ef614b36a5844c52426a4a6a4a23b1f087.1595511416.git.rahul.singh@arm.com>
 <alpine.DEB.2.21.2007231055230.17562@sstabellini-ThinkPad-T480s>
From: Julien Grall <julien@xen.org>
Message-ID: <9f09ff42-a930-e4e3-d1c8-612ad03698ae@xen.org>
Date: Fri, 24 Jul 2020 09:44:38 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:68.0)
 Gecko/20100101 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2007231055230.17562@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: andrew.cooper3@citrix.com, Bertrand.Marquis@arm.com, jbeulich@suse.com,
 xen-devel@lists.xenproject.org, nd@arm.com,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, roger.pau@citrix.com
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi,

On 24/07/2020 00:38, Stefano Stabellini wrote:
>> +    bridge->dt_node = dev;
>> +    bridge->sysdata = cfg;
>> +    bridge->ops = &ops->pci_ops;
>> +
>> +    if( !dt_property_read_u32(dev, "linux,pci-domain", &segment) )
>> +    {
>> +        printk(XENLOG_ERR "\"linux,pci-domain\" property is not available in DT\n");
>> +        return -ENODEV;
>> +    }
>> +
>> +    bridge->segment = (u16)segment;
> 
> My understanding is that a Linux pci-domain doesn't correspond exactly
> to a PCI segment. See for instance:
> 
> https://lists.gnu.org/archive/html/qemu-devel/2018-04/msg03885.html
> 
> Do we need to care about the difference? If we mean pci-domain here,
> should we just call them as such instead of calling them "segments" in
> Xen (if they are not segments)?

So we definitely need a segment number in hand, because this is what 
the admin will use to assign a PCI device to a guest.

The segment number is just a value defined by software. So as long as 
Linux and Xen agree on the number, we should be OK.

Looking at the code, I think Linux is using 'segment' as 'domain', so 
they should be interchangeable. I am not sure what other OSes do, though.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Jul 24 08:59:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jul 2020 08:59:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jytXx-0004Nq-0G; Fri, 24 Jul 2020 08:59:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=QctF=BD=canonical.com=andrea.righi@srs-us1.protection.inumbo.net>)
 id 1jytXv-0004Nj-9k
 for xen-devel@lists.xenproject.org; Fri, 24 Jul 2020 08:59:15 +0000
X-Inumbo-ID: f2665624-cd8b-11ea-87e7-bc764e2007e4
Received: from youngberry.canonical.com (unknown [91.189.89.112])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f2665624-cd8b-11ea-87e7-bc764e2007e4;
 Fri, 24 Jul 2020 08:59:14 +0000 (UTC)
Received: from mail-ej1-f72.google.com ([209.85.218.72])
 by youngberry.canonical.com with esmtps
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.86_2)
 (envelope-from <andrea.righi@canonical.com>) id 1jytXt-0003cC-Ig
 for xen-devel@lists.xenproject.org; Fri, 24 Jul 2020 08:59:13 +0000
Received: by mail-ej1-f72.google.com with SMTP id q11so3434066eja.3
 for <xen-devel@lists.xenproject.org>; Fri, 24 Jul 2020 01:59:13 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:date:from:to:cc:subject:message-id:mime-version
 :content-disposition;
 bh=Hjub13SPe2wEgoDjP8LKlCX8Yl7alX0oGHdU3qzYDPM=;
 b=n/3zDNavFBwyidp5mK9uTIUnZfbHeSKJP4k6Md3Q+MmdkMWQ8FQ2ZbQoZ9SwA3NWvc
 EqtOXntSZ9x4rmez3Jw1+S/UWUPdrNW3an2O+mEV7odoE4wZzNDZ+X3zGsXQmmSSDML1
 HUBGuHZ3Fqz6BZLdd1A6yR6oULhBm5YpLuYv6THiBFs3tEbC56nM9xQSbfZHFkQzeDE9
 1CgAPQu5Aj+br1BdM0coAM1wKpocTKt3vQliydYhLgPhghicxYr0ml5+PJVXD7Ra6R+b
 oEITKCk0MfjzDcbkG4/CQsqTHla7AATJ0QGL0fxnsCRipvNErj59Y9tCgRj7LjHJ5V+B
 5RMA==
X-Gm-Message-State: AOAM533Jplrd6i3wjxEFqCFSpPFrzyDmGHKm3O5F+UGrSbqV2R4/TW6V
 TT6zn2/vYZUAZlbukFS4W+OF336yydgi6S0D8iQrD8GmW/yDuwKIAxjJ3bdKF0D56zJkyB+q1oA
 daE2tgRMNZdk22OWl4WIN+q9UkHsrmHS/IkoEACie2jlV
X-Received: by 2002:a17:906:c40d:: with SMTP id
 u13mr8037436ejz.519.1595581152669; 
 Fri, 24 Jul 2020 01:59:12 -0700 (PDT)
X-Google-Smtp-Source: ABdhPJyLghkjxyAHSXRSJUM7yxIeHsXsl1Sx8URoFGpqj60bEA7IlZyIWZIUznr715CpupWlbmdO2A==
X-Received: by 2002:a17:906:c40d:: with SMTP id
 u13mr8037418ejz.519.1595581152278; 
 Fri, 24 Jul 2020 01:59:12 -0700 (PDT)
Received: from localhost (host-87-11-131-192.retail.telecomitalia.it.
 [87.11.131.192])
 by smtp.gmail.com with ESMTPSA id y22sm302547edl.84.2020.07.24.01.59.11
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 24 Jul 2020 01:59:11 -0700 (PDT)
Date: Fri, 24 Jul 2020 10:59:10 +0200
From: Andrea Righi <andrea.righi@canonical.com>
To: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH v2] xen-netfront: fix potential deadlock in xennet_remove()
Message-ID: <20200724085910.GA1043801@xps-13>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Jakub Kicinski <kuba@kernel.org>, netdev@vger.kernel.org,
 "David S. Miller" <davem@davemloft.net>, linux-kernel@vger.kernel.org,
 xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

There's a potential race in xennet_remove(); this is what the driver is
doing upon unregistering a network device:

  1. state = read bus state
  2. if state is not "Closed":
  3.    request to set state to "Closing"
  4.    wait for state to be set to "Closing"
  5.    request to set state to "Closed"
  6.    wait for state to be set to "Closed"

If the state changes to "Closed" immediately after step 1, we are stuck
forever in step 4, because the state will never go back from "Closed"
to "Closing".

Make sure to also check for state == "Closed" in step 4 to prevent the
deadlock.

Also add a 5-second timeout to every wait for a bus state change, to
avoid getting stuck forever in wait_event().

Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
---
Changes in v2:
 - remove all dev_dbg() calls (as suggested by David Miller)

 drivers/net/xen-netfront.c | 64 +++++++++++++++++++++++++-------------
 1 file changed, 42 insertions(+), 22 deletions(-)

diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
index 482c6c8b0fb7..88280057e032 100644
--- a/drivers/net/xen-netfront.c
+++ b/drivers/net/xen-netfront.c
@@ -63,6 +63,8 @@ module_param_named(max_queues, xennet_max_queues, uint, 0644);
 MODULE_PARM_DESC(max_queues,
 		 "Maximum number of queues per virtual interface");
 
+#define XENNET_TIMEOUT  (5 * HZ)
+
 static const struct ethtool_ops xennet_ethtool_ops;
 
 struct netfront_cb {
@@ -1334,12 +1336,15 @@ static struct net_device *xennet_create_dev(struct xenbus_device *dev)
 
 	netif_carrier_off(netdev);
 
-	xenbus_switch_state(dev, XenbusStateInitialising);
-	wait_event(module_wq,
-		   xenbus_read_driver_state(dev->otherend) !=
-		   XenbusStateClosed &&
-		   xenbus_read_driver_state(dev->otherend) !=
-		   XenbusStateUnknown);
+	do {
+		xenbus_switch_state(dev, XenbusStateInitialising);
+		err = wait_event_timeout(module_wq,
+				 xenbus_read_driver_state(dev->otherend) !=
+				 XenbusStateClosed &&
+				 xenbus_read_driver_state(dev->otherend) !=
+				 XenbusStateUnknown, XENNET_TIMEOUT);
+	} while (!err);
+
 	return netdev;
 
  exit:
@@ -2139,28 +2144,43 @@ static const struct attribute_group xennet_dev_group = {
 };
 #endif /* CONFIG_SYSFS */
 
-static int xennet_remove(struct xenbus_device *dev)
+static void xennet_bus_close(struct xenbus_device *dev)
 {
-	struct netfront_info *info = dev_get_drvdata(&dev->dev);
-
-	dev_dbg(&dev->dev, "%s\n", dev->nodename);
+	int ret;
 
-	if (xenbus_read_driver_state(dev->otherend) != XenbusStateClosed) {
+	if (xenbus_read_driver_state(dev->otherend) == XenbusStateClosed)
+		return;
+	do {
 		xenbus_switch_state(dev, XenbusStateClosing);
-		wait_event(module_wq,
-			   xenbus_read_driver_state(dev->otherend) ==
-			   XenbusStateClosing ||
-			   xenbus_read_driver_state(dev->otherend) ==
-			   XenbusStateUnknown);
+		ret = wait_event_timeout(module_wq,
+				   xenbus_read_driver_state(dev->otherend) ==
+				   XenbusStateClosing ||
+				   xenbus_read_driver_state(dev->otherend) ==
+				   XenbusStateClosed ||
+				   xenbus_read_driver_state(dev->otherend) ==
+				   XenbusStateUnknown,
+				   XENNET_TIMEOUT);
+	} while (!ret);
+
+	if (xenbus_read_driver_state(dev->otherend) == XenbusStateClosed)
+		return;
 
+	do {
 		xenbus_switch_state(dev, XenbusStateClosed);
-		wait_event(module_wq,
-			   xenbus_read_driver_state(dev->otherend) ==
-			   XenbusStateClosed ||
-			   xenbus_read_driver_state(dev->otherend) ==
-			   XenbusStateUnknown);
-	}
+		ret = wait_event_timeout(module_wq,
+				   xenbus_read_driver_state(dev->otherend) ==
+				   XenbusStateClosed ||
+				   xenbus_read_driver_state(dev->otherend) ==
+				   XenbusStateUnknown,
+				   XENNET_TIMEOUT);
+	} while (!ret);
+}
+
+static int xennet_remove(struct xenbus_device *dev)
+{
+	struct netfront_info *info = dev_get_drvdata(&dev->dev);
 
+	xennet_bus_close(dev);
 	xennet_disconnect_backend(info);
 
 	if (info->netdev->reg_state == NETREG_REGISTERED)
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Fri Jul 24 09:12:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jul 2020 09:12:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jytkN-0006Av-4n; Fri, 24 Jul 2020 09:12:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5T8C=BD=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jytkL-0006Ao-Vo
 for xen-devel@lists.xenproject.org; Fri, 24 Jul 2020 09:12:06 +0000
X-Inumbo-ID: be0aee11-cd8d-11ea-87e7-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id be0aee11-cd8d-11ea-87e7-bc764e2007e4;
 Fri, 24 Jul 2020 09:12:05 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=N60kbuTV4dLfypzHvH054K4rbKJpxwA4tUp/pc4EeeM=; b=ud0qT1YnPn3qznD69RBNYK22xS
 Yu8nrkPlsivkcCdiZlBPlzaXh2nO3HBmrYW4e9y8HdLnFLTmiQvH3Q1IMpFNpi1DtWC0V9LRk3BxR
 sF7sQcavCNZ/f7PG9lc7cZA59FsLVmSmRGvIt2y7MXWo1XNDATy++Pat4fgExaWfx56E=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jytkH-0003Ho-D6; Fri, 24 Jul 2020 09:12:01 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jytkH-0008W1-54; Fri, 24 Jul 2020 09:12:01 +0000
Subject: Re: [RFC PATCH v1 4/4] arm/libxl: Emulated PCI device tree node in
 libxl
To: Stefano Stabellini <sstabellini@kernel.org>,
 Rahul Singh <rahul.singh@arm.com>
References: <cover.1595511416.git.rahul.singh@arm.com>
 <23346b24762467bd246b91b05f7b0fc1719282f6.1595511416.git.rahul.singh@arm.com>
 <alpine.DEB.2.21.2007231505170.17562@sstabellini-ThinkPad-T480s>
From: Julien Grall <julien@xen.org>
Message-ID: <a1de23f3-8ab2-16d8-915b-4bf0e41b895f@xen.org>
Date: Fri, 24 Jul 2020 10:11:58 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:68.0)
 Gecko/20100101 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2007231505170.17562@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Wei Liu <wl@xen.org>, Ian Jackson <ian.jackson@eu.citrix.com>,
 Bertrand.Marquis@arm.com, Anthony PERARD <anthony.perard@citrix.com>,
 xen-devel@lists.xenproject.org, nd@arm.com,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi,

On 24/07/2020 00:39, Stefano Stabellini wrote:
> On Thu, 23 Jul 2020, Rahul Singh wrote:
>> +    if (res) return res;
>> +
>> +    /*
>> +     * Legacy interrupt is forced and assigned to the guest.
>> +     * This will be removed once we have implementation for MSI support.
>> +     *
>> +     */
>> +    res = fdt_property_vpci_interrupt_map(gc, fdt, 3, 1, 0, 3,
>> +            4,
>> +            0, 0, 0, 1, 0, 136, DT_IRQ_TYPE_LEVEL_HIGH,
>> +            0, 0, 0, 2, 0, 137, DT_IRQ_TYPE_LEVEL_HIGH,
>> +            0, 0, 0, 3, 0, 138, DT_IRQ_TYPE_LEVEL_HIGH,
>> +            0, 0, 0, 4, 0, 139, DT_IRQ_TYPE_LEVEL_HIGH);
> 
> The 4 interrupts allocated for this need to be defined in
> xen/include/public/arch-arm.h as well. Also, why would we want to get
> rid of the legacy interrupts completely? 

With legacy interrupts, there are a few cases to take into account:
    1) Two PCI devices from the same hostbridge are assigned to 
different guests. As SPIs (used for legacy interrupts) can only be 
routed to one guest, we would now need a way to share them. This 
raises the question of when to EOI the physical interrupt. AFAICT, 
Linux has some code to deal with this, using a timer if the EOI takes 
too long.

    2) Two PCI devices from two distinct hostbridges are assigned to 
the same virtual hostbridge. Legacy interrupts would need to be 
virtual, and we would possibly need to merge multiple physical 
interrupts into one virtual interrupt.

    3) A mix of virtual and physical PCI devices is assigned to the 
same virtual hostbridge. As in 2), legacy interrupts would need to be 
virtual.

Given the complexity of handling legacy interrupts and the fact that 
they will generally be slower compared to MSIs, I would rather focus 
on MSIs first.

We can then decide to implement legacy interrupts if there is a real need.

> It would be possible to still
> find devices or software that rely on them.

PCI devices without MSI support are getting extremely rare. There is 
actually Arm hardware that doesn't support legacy interrupts at all 
(such as Thunder-X). I can't speak for the software.

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Jul 24 09:31:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jul 2020 09:31:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyu2Z-000836-5a; Fri, 24 Jul 2020 09:30:55 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=qjWU=BD=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jyu2Y-000831-DF
 for xen-devel@lists.xenproject.org; Fri, 24 Jul 2020 09:30:54 +0000
X-Inumbo-ID: 5d5bfc64-cd90-11ea-a390-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 5d5bfc64-cd90-11ea-a390-12813bfff9fa;
 Fri, 24 Jul 2020 09:30:51 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=+qSWM6qN+UGvhkn8tSEi5JtJ9dekMZjZSaEf3kcj1j0=; b=QjJm8nI/7hwe5xC5HjstP8cRP
 bILiQXZa38J9teN1kSE7Ug6awGkLHisFN4DtJ59q6Qf4uJO5N/sh4TcxSmGH9O9JNNX/xaHbYklEc
 qxtYiCv5CrZ0o618rbXDeD9tKVafO+irkw2f5/ZtWYoZloH1GH2R/t2MGAu8DVdOEAU+Y=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jyu2U-0003eS-OY; Fri, 24 Jul 2020 09:30:50 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jyu2U-0006fU-Ec; Fri, 24 Jul 2020 09:30:50 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jyu2U-0005J9-Dv; Fri, 24 Jul 2020 09:30:50 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-152153-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.14-testing test] 152153: tolerable trouble: fail/pass/starved
 - PUSHED
X-Osstest-Failures: xen-4.14-testing:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:allowable
 xen-4.14-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 xen-4.14-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-4.14-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-4.14-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-4.14-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 xen-4.14-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-4.14-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 xen-4.14-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 xen-4.14-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-4.14-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-4.14-testing:test-amd64-amd64-qemuu-freebsd11-amd64:hosts-allocate:starved:nonblocking
 xen-4.14-testing:test-amd64-amd64-qemuu-freebsd12-amd64:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This: xen=456957aaa1391e0dfa969e2dd97b87c51a79444e
X-Osstest-Versions-That: xen=827031adfeb3c2656baa2156d3e7caaea8aec739
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 24 Jul 2020 09:30:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 152153 xen-4.14-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/152153/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds    16 guest-start/debian.repeat fail REGR. vs. 152081

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop             fail never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop             fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop             fail never pass
 test-amd64-amd64-qemuu-freebsd11-amd64  2 hosts-allocate           starved n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  2 hosts-allocate           starved n/a

version targeted for testing:
 xen                  456957aaa1391e0dfa969e2dd97b87c51a79444e
baseline version:
 xen                  827031adfeb3c2656baa2156d3e7caaea8aec739

Last test of basis   152081  2020-07-21 16:52:47 Z    2 days
Failing since        152124  2020-07-22 20:36:59 Z    1 days    2 attempts
Testing same since   152153  2020-07-23 15:18:22 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Paul Durrant <pdurrant@amazon.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       starved 
 test-amd64-amd64-qemuu-freebsd12-amd64                       starved 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   827031adfe..456957aaa1  456957aaa1391e0dfa969e2dd97b87c51a79444e -> stable-4.14


From xen-devel-bounces@lists.xenproject.org Fri Jul 24 09:39:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jul 2020 09:39:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyuAf-0008Sp-8z; Fri, 24 Jul 2020 09:39:17 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=qjWU=BD=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jyuAd-0008SQ-A5
 for xen-devel@lists.xenproject.org; Fri, 24 Jul 2020 09:39:15 +0000
X-Inumbo-ID: 854f86f4-cd91-11ea-a393-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 854f86f4-cd91-11ea-a393-12813bfff9fa;
 Fri, 24 Jul 2020 09:39:08 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=RkOtHijcxDUwC63vvbpCulKS6tzqjVjQRWQrgxZ8JKY=; b=P+CazwBINhKJDtDZmcW6+rf+i
 KrowM4T6r2GG8RGUs5qLJLZKLHtc2jItqfVKe/po37MIsf0I/2ArQheS5sEZY2HOpNO7HMtGYehgT
 0fxu6WtrnN1lLzIy6QV4hPMutZXJsPlY7OOc5TS1sivsP1Q9f3KrChEmlyHEPyAHBfEMU=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jyuAV-0003pt-9O; Fri, 24 Jul 2020 09:39:07 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jyuAV-0006ug-0R; Fri, 24 Jul 2020 09:39:07 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jyuAU-0005zo-W1; Fri, 24 Jul 2020 09:39:06 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-152157-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 152157: all pass - PUSHED
X-Osstest-Versions-This: ovmf=e43d0884ed93ffd8044e48e8d5d2d010a46aab33
X-Osstest-Versions-That: ovmf=3d2f7953b2ba9d27b1905c864c369fe624c74a3f
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 24 Jul 2020 09:39:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 152157 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/152157/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 e43d0884ed93ffd8044e48e8d5d2d010a46aab33
baseline version:
 ovmf                 3d2f7953b2ba9d27b1905c864c369fe624c74a3f

Last test of basis   152131  2020-07-23 01:10:42 Z    1 days
Testing same since   152157  2020-07-23 15:45:39 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Feng, YunhuaX <YunhuaX.Feng@intel.com>
  Jiewen Yao <jiewen.yao@intel.com>
  Leif Lindholm <leif@nuviainc.com>
  Pierre Gondois <pierre.gondois@arm.com>
  Yunhua Feng <yunhuax.feng@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   3d2f7953b2..e43d0884ed  e43d0884ed93ffd8044e48e8d5d2d010a46aab33 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Fri Jul 24 10:11:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jul 2020 10:11:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyufB-0003ZT-KY; Fri, 24 Jul 2020 10:10:49 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=GqWP=BD=xilinx.com=woods@srs-us1.protection.inumbo.net>)
 id 1jyufA-0003ZJ-81
 for xen-devel@lists.xenproject.org; Fri, 24 Jul 2020 10:10:48 +0000
X-Inumbo-ID: f0eebdd6-cd95-11ea-87eb-bc764e2007e4
Received: from NAM12-MW2-obe.outbound.protection.outlook.com (unknown
 [40.107.244.77]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f0eebdd6-cd95-11ea-87eb-bc764e2007e4;
 Fri, 24 Jul 2020 10:10:46 +0000 (UTC)
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Kx1/02qUC4E/3GSyLlGDtHa/zTynoHHlol+h+Kc0qVjqNEQ99CrzccpEbxC5vxzispwrR4/kJOP+hmQbVcxaql4eM3eQ2P/En2ag9eW5rc5YQPqj3r5YqhDHfXVHg2fx1D29HQZzJri9nX8JvmxcwFgc+21yCzHRACZJI/0BgY+J8suin0aOoKx7cWjB+B9oylVSvOnfeQiA4sOvr3kXWsOs6mssKUVFYuM8qehFWJifH8uuhgBPUVPKzae5Uum5zKTZp0UV5tVayH1ZJMilvx0X+P6d2TbJK8SyoUb5r88a8PJ0jcug6OJ1et/xKIRD1O6C9HCehdA7ZFoFcL30sg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; 
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=CUonZA9j8i16PCdI0fzT8Z9CmO63uN/yPps7RE191vo=;
 b=EZJ425/ZG2ZGVngMhLZxgz7lJwRtOI4sl5mFDE9rOyXXS8i+2jP6UTb+fvm1QqHutBaNL5AzBEAWJy24FdxhlSgeE/IXupJOStP5NHNr+Wx/HjcR/RsIpgnbVLrRDy0NSWOcE5VIFB+mPG1skHkKpEOHUcmtL8oHsdczqIX0eXAXYosP+O+x1b6u1hQuucanFpWPQJsvdVI+cuC4OkcWY+KKXCln3Y+oxJxf7Wl77vcUpW1cRErnQTrs9MzCLq53n4dG2QpGI69KaXOcRnWfUmailYVUCwS1HIk79HT413Uet26JQVOorKPg7Zae3NbaDOnS0zrExF3GZK+4qYJ36Q==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 149.199.60.83) smtp.rcpttodomain=epam.com smtp.mailfrom=xilinx.com;
 dmarc=bestguesspass action=none header.from=xilinx.com; dkim=none (message
 not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=xilinx.onmicrosoft.com; s=selector2-xilinx-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=CUonZA9j8i16PCdI0fzT8Z9CmO63uN/yPps7RE191vo=;
 b=MRsBhcrFDz4LJ6lcRbTYxkpbD8JWY57bO179EO+QIWlEMCuO1QmgY9FZ3Jk2IZ6GwX3L68J7KfaEwPTJc1sJY7CS5exK9KO0WtKMvh9CD5muqfYtu+eMnh+r1qPXTCRnHlcBqb3q965J7Biuapd3EvXep58DgCZiu8ZOyPzMfWc=
Received: from MN2PR02CA0029.namprd02.prod.outlook.com (2603:10b6:208:fc::42)
 by BYAPR02MB4536.namprd02.prod.outlook.com (2603:10b6:a03:55::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3216.21; Fri, 24 Jul
 2020 10:10:44 +0000
Received: from BL2NAM02FT013.eop-nam02.prod.protection.outlook.com
 (2603:10b6:208:fc:cafe::df) by MN2PR02CA0029.outlook.office365.com
 (2603:10b6:208:fc::42) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3216.24 via Frontend
 Transport; Fri, 24 Jul 2020 10:10:44 +0000
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 149.199.60.83)
 smtp.mailfrom=xilinx.com; epam.com; dkim=none (message not signed)
 header.d=none;epam.com; dmarc=bestguesspass action=none
 header.from=xilinx.com;
Received-SPF: Pass (protection.outlook.com: domain of xilinx.com designates
 149.199.60.83 as permitted sender) receiver=protection.outlook.com;
 client-ip=149.199.60.83; helo=xsj-pvapsmtpgw01;
Received: from xsj-pvapsmtpgw01 (149.199.60.83) by
 BL2NAM02FT013.mail.protection.outlook.com (10.152.77.19) with Microsoft SMTP
 Server id 15.20.3216.10 via Frontend Transport; Fri, 24 Jul 2020 10:10:44
 +0000
Received: from [149.199.38.66] (port=47026 helo=smtp.xilinx.com)
 by xsj-pvapsmtpgw01 with esmtp (Exim 4.90)
 (envelope-from <brian.woods@xilinx.com>)
 id 1jyudE-0002AN-Mt; Fri, 24 Jul 2020 03:08:48 -0700
Received: from [127.0.0.1] (helo=localhost)
 by xsj-pvapsmtp01 with smtp (Exim 4.63)
 (envelope-from <brian.woods@xilinx.com>)
 id 1jyuf5-00070d-RF; Fri, 24 Jul 2020 03:10:43 -0700
Received: from xsj-pvapsmtp01 (xsj-smtp1.xilinx.com [149.199.38.66])
 by xsj-smtp-dlp1.xlnx.xilinx.com (8.13.8/8.13.1) with ESMTP id 06OAAVLT005058; 
 Fri, 24 Jul 2020 03:10:32 -0700
Received: from [10.23.123.153] (helo=xilinx.com)
 by xsj-pvapsmtp01 with esmtp (Exim 4.63)
 (envelope-from <brian.woods@xilinx.com>)
 id 1jyuet-0006tI-T0; Fri, 24 Jul 2020 03:10:31 -0700
Date: Fri, 24 Jul 2020 03:10:31 -0700
From: Brian Woods <brian.woods@xilinx.com>
To: Stefano Stabellini <sstabellini@kernel.org>
Subject: Re: [RFC v2 1/2] arm,smmu: switch to using iommu_fwspec functions
Message-ID: <20200724101030.eznostod3ngsnpea@xilinx.com>
References: <1595390431-24805-1-git-send-email-brian.woods@xilinx.com>
 <1595390431-24805-2-git-send-email-brian.woods@xilinx.com>
 <alpine.DEB.2.21.2007220938020.17562@sstabellini-ThinkPad-T480s>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <alpine.DEB.2.21.2007220938020.17562@sstabellini-ThinkPad-T480s>
User-Agent: NeoMutt/20180716
X-RCIS-Action: ALLOW
X-TM-AS-Product-Ver: IMSS-7.1.0.1224-8.2.0.1013-23620.005
X-TM-AS-User-Approved-Sender: Yes;Yes
X-EOPAttributedMessage: 0
X-MS-Office365-Filtering-HT: Tenant
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 0bc23bda-ca66-444d-ccd0-08d82fb9d3cd
X-MS-TrafficTypeDiagnostic: BYAPR02MB4536:
X-Microsoft-Antispam-PRVS: <BYAPR02MB4536299D6A7B6B160CC466B3D7770@BYAPR02MB4536.namprd02.prod.outlook.com>
X-Auto-Response-Suppress: DR, RN, NRN, OOF, AutoReply
X-MS-Oob-TLC-OOBClassifiers: OLM:8273;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: F97vID2GpDBBBRRj29C1QUrvl+IvGCCTN1VXUxsnKMcVTK2DQwJKXI9EclWnEKrMszT/ygPN27Erb4A/I+d6E9s3gJOSy5kwPs/fu9zfLfaJElMB4ATR3btsKf8kYIgtPT757Z17fkcn0H3MY1sk3I1L58eghMQhX5P74vVtLWrKNkHuzt1vvXgty+Nh4MAK2l/fVqXxyqxs+FmgeuPx1nPJWCXuqO+GcliyEAxncryvcxOwVMbEejWYR7frRi6lLU9OD1jcu92M4ASMxRt7QD2ylnVne/FwJDufJW/H6waW81G9scGSAE9Q0oU1bic9J2rprdzeJhWrXbLoQcPkgTQoZi8ehlI5trnkoeaEj1Zg6pfHe08EPzOtVdFpU+eOXtzi4rss+5RMAS3kbtjRj7x/4K5JkHELVc1lsBwKr8w=
X-Forefront-Antispam-Report: CIP:149.199.60.83; CTRY:US; LANG:en; SCL:1; SRV:;
 IPV:NLI; SFV:NSPM; H:xsj-pvapsmtpgw01; PTR:unknown-60-83.xilinx.com; CAT:NONE;
 SFTY:;
 SFS:(376002)(136003)(396003)(346002)(39860400002)(46966005)(4326008)(356005)(478600001)(2906002)(82310400002)(316002)(8676002)(5660300002)(82740400003)(1076003)(81166007)(44832011)(8936002)(36756003)(47076004)(2616005)(26005)(70586007)(70206006)(336012)(9786002)(54906003)(86362001)(6916009)(186003)(7696005)(426003)(142933001);
 DIR:OUT; SFP:1101; 
X-OriginatorOrg: xilinx.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 24 Jul 2020 10:10:44.1800 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 0bc23bda-ca66-444d-ccd0-08d82fb9d3cd
X-MS-Exchange-CrossTenant-Id: 657af505-d5df-48d0-8300-c31994686c5c
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=657af505-d5df-48d0-8300-c31994686c5c; Ip=[149.199.60.83];
 Helo=[xsj-pvapsmtpgw01]
X-MS-Exchange-CrossTenant-AuthSource: BL2NAM02FT013.eop-nam02.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BYAPR02MB4536
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Brian Woods <brian.woods@xilinx.com>, Julien Grall <julien@xen.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 xen-devel <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Wed, Jul 22, 2020 at 09:47:43AM -0700, Stefano Stabellini wrote:
> On Tue, 21 Jul 2020, Brian Woods wrote:
> > Modify the smmu driver so that it uses the iommu_fwspec helper
> > functions.  This means both ARM IOMMU drivers will use the
> > iommu_fwspec helper functions.
> > 
> > Signed-off-by: Brian Woods <brian.woods@xilinx.com>
> 
> [...]
> 
> > @@ -1924,14 +1924,21 @@ static int arm_smmu_add_device(struct device *dev)
> >  			ret = -ENOMEM;
> >  			goto out_put_group;
> >  		}
> > +		cfg->fwspec = kzalloc(sizeof(struct iommu_fwspec), GFP_KERNEL);
> > +		if (!cfg->fwspec) {
> > +			kfree(cfg);
> > +			ret = -ENOMEM;
> > +			goto out_put_group;
> > +		}
> > +		iommu_fwspec_init(dev, smmu->dev);
> 
> Normally the fwspec structure is initialized in
> xen/drivers/passthrough/device_tree.c:iommu_add_dt_device. However here
> we are trying to use it with the legacy bindings, that of course don't
> initialize in iommu_add_dt_device because they are different.
> 
> So I imagine this is the reason why we have to initialize iommu_fwspec here
> independently from iommu_add_dt_device.
> 
> However, why are we allocating the struct iommu_fwspec twice?
> 
> We are calling kzalloc, then iommu_fwspec_init is calling xzalloc.
> 
> Similarly, we are storing the pointer to struct iommu_fwspec in
> cfg->fwspec but actually there is already a pointer to struct
> iommu_fwspec in struct device (the one set by dev_iommu_fwspec_set.)
> 
> Do we actually need both?

Sorry for taking so long.

Hrm, I've been looking into why I created two fwspecs and I'm not sure
why... It's pretty late, but later this morning I'll try some things
and try to remove it.

Brian


From xen-devel-bounces@lists.xenproject.org Fri Jul 24 10:45:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jul 2020 10:45:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyvCg-0006YL-LB; Fri, 24 Jul 2020 10:45:26 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5T8C=BD=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jyvCg-0006YG-08
 for xen-devel@lists.xenproject.org; Fri, 24 Jul 2020 10:45:26 +0000
X-Inumbo-ID: c761f623-cd9a-11ea-a3a4-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c761f623-cd9a-11ea-a3a4-12813bfff9fa;
 Fri, 24 Jul 2020 10:45:24 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:Cc:References:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=l+UAk3p0LmCJrUnDM/PS/Q9/RWFRvTqJ3FXosS3Dh4Q=; b=v7CEzxo7jzkPxhDNIdLc+s5xGb
 fG2Lr6jq9b/u4IKwTfAuoRLQj1pSIEAJZE5Rlm/ntCdxBqfgk1CG7CCYhBpHMhV7o62AwPtpu+Nb6
 G6GxFujGBr9oriS4dSedU2pWy+deiKX+5Ud23w6uus2pU8OKxvKkwQKzIwQCuD2KLqE0=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jyvCc-0005IT-HB; Fri, 24 Jul 2020 10:45:22 +0000
Received: from 54-240-197-227.amazon.com ([54.240.197.227]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jyvCc-0001gQ-8J; Fri, 24 Jul 2020 10:45:22 +0000
Subject: Re: dom0 Linux 5.8-rc5 kernel failing to initialize cooling maps for
 Allwinner H6 SoC
To: Alejandro <alejandro.gonzalez.correo@gmail.com>,
 xen-devel@lists.xenproject.org
References: <CA+wirGqXMoRkS-aJmfFLipUv8SdY5LKV1aMrF0yKRJQaMvzs6Q@mail.gmail.com>
From: Julien Grall <julien@xen.org>
Message-ID: <1c5cee83-295e-cc02-d018-b53ddc6e3590@xen.org>
Date: Fri, 24 Jul 2020 11:45:20 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:68.0)
 Gecko/20100101 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <CA+wirGqXMoRkS-aJmfFLipUv8SdY5LKV1aMrF0yKRJQaMvzs6Q@mail.gmail.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andre Przywara <andre.przywara@arm.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

(+ Andre and Stefano)

On 20/07/2020 15:53, Alejandro wrote:
> Hello all.

Hello,

> 
> I'm new to this community, and firstly I'd like to thank you all for
> your efforts on supporting Xen in ARM devices.

Welcome to the community!

> 
> I'm trying Xen 4.13.1 on an Allwinner H6 SoC (more precisely a Pine H64
> model B, with an ARM Cortex-A53 CPU).
> I managed to get a dom0 Linux 5.8-rc5 kernel running fine, unpatched,
> and I'm using the upstream device tree for
> my board. However, the dom0 kernel has trouble when reading some DT
> nodes that are related to the CPUs, and
> it can't initialize the thermal subsystem properly, which is a kind of
> showstopper for me, because I'm concerned
> that letting the CPU run at the maximum frequency without watching out
> its temperature may cause overheating.

I understand this concern. I am aware of some efforts to get CPUFreq 
working on Xen, but I am not sure whether there is anything available yet. 
I have CCed a couple more people who may be able to help here.

> The relevant kernel messages are:
> 
> [  +0.001959] sun50i-cpufreq-nvmem: probe of sun50i-cpufreq-nvmem
> failed with error -2
> ...
> [  +0.003053] hw perfevents: failed to parse interrupt-affinity[0] for pmu
> [  +0.000043] hw perfevents: /pmu: failed to register PMU devices!
> [  +0.000037] armv8-pmu: probe of pmu failed with error -22

I am not sure the PMU failure is related to the thermal failure below.

> ...
> [  +0.000163] OF: /thermal-zones/cpu-thermal/cooling-maps/map0: could
> not find phandle
> [  +0.000063] thermal_sys: failed to build thermal zone cpu-thermal: -22
Would it be possible to paste the device-tree node for 
/thermal-zones/cpu-thermal/cooling-maps? I suspect the issue is because 
we recreated /cpus from scratch.
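For reference, the kind of node being asked about looks roughly like this. This is an illustrative sketch modelled on typical thermal-zone bindings, not the board's exact DT; the phandles (`&cpu_alert0`, `&cpu0`) are examples. The failure would follow if `cooling-device` points at a cpu node that Xen dropped when it rebuilt /cpus for dom0:

```
thermal-zones {
    cpu-thermal {
        cooling-maps {
            map0 {
                trip = <&cpu_alert0>;
                /* phandle into /cpus — gone in the dom0 DT if
                 * Xen recreated /cpus from scratch */
                cooling-device = <&cpu0 THERMAL_NO_LIMIT
                                        THERMAL_NO_LIMIT>;
            };
        };
    };
};
```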

I don't know much about how the thermal subsystem works, but I suspect 
this will not be enough to get it working properly on Xen. As a 
workaround, you would need to create a dom0 with the same number of 
vCPUs as the number of pCPUs. They would also need to be pinned.
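A minimal sketch of that workaround, assuming a 4-core board (adjust the count to your pCPUs), using Xen's documented `dom0_max_vcpus` and `dom0_vcpus_pin` command-line options:

```
# Appended to the Xen (hypervisor) command line in the bootloader entry:
dom0_max_vcpus=4 dom0_vcpus_pin
```

With `dom0_vcpus_pin`, each dom0 vCPU is pinned to the matching pCPU, so dom0's view of per-CPU state lines up with the hardware.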

I will leave the others to fill in more details.

> 
> I've searched for issues, code or commits that may be related for this
> issue. The most relevant things I found are:
> 
> - A patch that blacklists the A53 PMU:
> https://patchwork.kernel.org/patch/10899881/
> - The handle_node function in xen/arch/arm/domain_build.c:
> https://github.com/xen-project/xen/blob/master/xen/arch/arm/domain_build.c#L1427

I remember this discussion. The problem was that the PMU is using 
per-CPU interrupts. Xen is not yet able to handle PPIs as they often 
require more context to be saved/restored (in this case the PMU context).

There was a proposal to look if a device is using PPIs and just remove 
them from the Device-Tree. Unfortunately, I haven't seen any official 
submission for this patch.

Did you have to apply the patch to boot up? If not, then the error above 
shouldn't be a concern. However, if you need PMU support for using the 
thermal devices, then it is going to require some work.

> 
> I've thought about removing "/cpus" from the skip_matches array in the
> handle_node function, but I'm not sure
> that would be a good fix.

The node "/cpus" and its sub-nodes are recreated by Xen for Dom0. This is 
because Dom0 may have a different number of vCPUs and it doesn't see 
the pCPUs.

If you don't skip "/cpus" from the host DT then you would end up with 
two "/cpus" paths in your dom0 DT. Most likely, Linux will not be happy 
with it.

I vaguely remember some discussions on how to deal with CPUFreq in Xen. 
IIRC we agreed that Dom0 should be part of the equation because it 
already contains all the drivers. However, I can't remember if we agreed 
how the dom0 would be made aware of the pCPUs.

@Volodymyr, I think you were looking at CPUFreq. Maybe you can help?

Best regards,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Jul 24 11:18:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jul 2020 11:18:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyvhw-00010Y-Bn; Fri, 24 Jul 2020 11:17:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=F0Jp=BD=gmail.com=amittomer25@srs-us1.protection.inumbo.net>)
 id 1jyvhu-00010T-Gx
 for xen-devel@lists.xenproject.org; Fri, 24 Jul 2020 11:17:42 +0000
X-Inumbo-ID: 49aabd7c-cd9f-11ea-87f9-bc764e2007e4
Received: from mail-ed1-x542.google.com (unknown [2a00:1450:4864:20::542])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 49aabd7c-cd9f-11ea-87f9-bc764e2007e4;
 Fri, 24 Jul 2020 11:17:41 +0000 (UTC)
Received: by mail-ed1-x542.google.com with SMTP id q4so3533600edv.13
 for <xen-devel@lists.xenproject.org>; Fri, 24 Jul 2020 04:17:41 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc; bh=CduvMxd8TuIJJcT2tc7SBVPIeE3Pd6WKLSswo4MzA0o=;
 b=B5V/hcqX4OmsnJPvmWv9x6m87bCMcsIpyrWz+UxCFEW6Qw7vMgTU/CyhTwBDn02+Kj
 TUcjck+ll0GmOwwtPqN4TxsSj5IKzte6eJSST6QeGd+Z4ZaeTHkjwB4Ajr5m09OvGTZm
 YHbcr59wiXGwVHGygldyB2ChYSZ5CWpsCy0v5Nh/qKx+uKzm2SDlZDMTtHeK+mgxeMdH
 hguBD7rqoLZfIQifHHw4yYRijvqpNSXpluxZSMf/SHtgx4+/oLi9b/eMdGxff460WBrW
 d4VJDtkProiF1C2t3hs2xTGMdGIPOVdX6pzlwNKD+lWfqPVswzGazJWO1WpM4t969JDw
 GIxA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc;
 bh=CduvMxd8TuIJJcT2tc7SBVPIeE3Pd6WKLSswo4MzA0o=;
 b=ssEgSLsQu8Qd/RYxdpGXMA9E0BpEwAD3Y6FZPdZ0cBXhS/Icl0SyhdmqR3s8VYpaEw
 5179qZaH2Mgcjgn4R7duHNlRVsnpnlVupi2q21Y9ThvzdHY1s0dd9p9t99eDuEA0O0FZ
 ApaxvnjL2WjcnDxwIaZOFSZXVRB7ayQCKGbmUSFPjyg8/SpZP41nwMR1WVnimING9i6B
 hC3nWeQdSPzcZXxLC9mBPDW+TJezaxv+C7YRY4X2qFW0U54LaPfxxHT7xlzyKYUPliIP
 cTZSK8yfSyHFYtUrJt8sAP+PkROGMp8IVT95rmSWk32bRBq4tJRsAmDjzBllk3YQ8SNv
 cY0g==
X-Gm-Message-State: AOAM532BXmbRbvxtfKzFnNBfCubkitFc5ZnVx+Y8VSs+NZymLClA41Ud
 RcYOP9JyR7oMlLI6S0yNqFeXOCI/U9V3l7r89j8=
X-Google-Smtp-Source: ABdhPJzJo79xX5ypw8BgYr2BQYqRaOCSZMPH980+X7RknWCCKR/FUDUSllL3i9q9Y0Y6X741WlcReHWwuIEnVc3W5rY=
X-Received: by 2002:a50:eac5:: with SMTP id u5mr8676711edp.6.1595589460340;
 Fri, 24 Jul 2020 04:17:40 -0700 (PDT)
MIME-Version: 1.0
References: <CA+wirGqXMoRkS-aJmfFLipUv8SdY5LKV1aMrF0yKRJQaMvzs6Q@mail.gmail.com>
 <1c5cee83-295e-cc02-d018-b53ddc6e3590@xen.org>
In-Reply-To: <1c5cee83-295e-cc02-d018-b53ddc6e3590@xen.org>
From: Amit Tomer <amittomer25@gmail.com>
Date: Fri, 24 Jul 2020 16:47:03 +0530
Message-ID: <CABHD4K87aqCxsaW+j7uiM3kWQeHjSb+zQEs2p-SuYu83V-g38Q@mail.gmail.com>
Subject: Re: dom0 Linux 5.8-rc5 kernel failing to initialize cooling maps for
 Allwinner H6 SoC
To: Julien Grall <julien@xen.org>
Content-Type: text/plain; charset="UTF-8"
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Alejandro <alejandro.gonzalez.correo@gmail.com>,
 Andre Przywara <andre.przywara@arm.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi,

> I remember this discussion. The problem was that the PMU is using
> per-CPU interrupts. Xen is not yet able to handle PPIs as they often
> requires more context to be saved/restored (in this case the PMU context).
>
> There was a proposal to look if a device is using PPIs and just remove
> them from the Device-Tree. Unfortunately, I haven't seen any official
> submission for this patch.

But we have this patch that removes devices using PPIs:
http://xenbits.xen.org/gitweb/?p=xen.git;a=commit;h=9b1a31922ac066ef0dffe36ebd6a6ba016567d69

Thanks
-Amit


From xen-devel-bounces@lists.xenproject.org Fri Jul 24 11:19:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jul 2020 11:19:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyvjB-00015x-NW; Fri, 24 Jul 2020 11:19:01 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5T8C=BD=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jyvjA-00015s-Tl
 for xen-devel@lists.xenproject.org; Fri, 24 Jul 2020 11:19:00 +0000
X-Inumbo-ID: 78d43754-cd9f-11ea-a3ae-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 78d43754-cd9f-11ea-a3ae-12813bfff9fa;
 Fri, 24 Jul 2020 11:19:00 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=EtP3ZpHeanLiq0448Cy//mqBKi2jh5zhhEKmkXDffD8=; b=JFairxnP4OUvqGOWa1fCuGcKHs
 8fS8lxZwwTycr4HLjYIGgbJrkf5J2UVjwG+h9CgvgHx8nnjTFBez6mubOaoNRkHwsKHC/iEpHpPUa
 XduD5E5ONffWcFje8TnFFEREowR/m+1oe+CdVGsfb2HsrwVtDg1spL9C2t3LJ1STYF4A=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jyvj8-00062X-Ax; Fri, 24 Jul 2020 11:18:58 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jyvj8-0003lC-4G; Fri, 24 Jul 2020 11:18:58 +0000
Subject: Re: dom0 Linux 5.8-rc5 kernel failing to initialize cooling maps for
 Allwinner H6 SoC
To: Amit Tomer <amittomer25@gmail.com>
References: <CA+wirGqXMoRkS-aJmfFLipUv8SdY5LKV1aMrF0yKRJQaMvzs6Q@mail.gmail.com>
 <1c5cee83-295e-cc02-d018-b53ddc6e3590@xen.org>
 <CABHD4K87aqCxsaW+j7uiM3kWQeHjSb+zQEs2p-SuYu83V-g38Q@mail.gmail.com>
From: Julien Grall <julien@xen.org>
Message-ID: <0dd0dba3-de51-ae76-ce57-41323fc6fb2c@xen.org>
Date: Fri, 24 Jul 2020 12:18:56 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:68.0)
 Gecko/20100101 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <CABHD4K87aqCxsaW+j7uiM3kWQeHjSb+zQEs2p-SuYu83V-g38Q@mail.gmail.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Alejandro <alejandro.gonzalez.correo@gmail.com>,
 Andre Przywara <andre.przywara@arm.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>



On 24/07/2020 12:17, Amit Tomer wrote:
> Hi,

Hi,

>> I remember this discussion. The problem was that the PMU is using
>> per-CPU interrupts. Xen is not yet able to handle PPIs as they often
>> require more context to be saved/restored (in this case the PMU context).
>>
>> There was a proposal to look if a device is using PPIs and just remove
>> them from the Device-Tree. Unfortunately, I haven't seen any official
>> submission for this patch.
> 
> But we have this patch that removes devices using PPIs
> http://xenbits.xen.org/gitweb/?p=xen.git;a=commit;h=9b1a31922ac066ef0dffe36ebd6a6ba016567d69

Urgh, I forgot we merged it. I should have double-checked the tree. 
Apologies for that.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Jul 24 11:20:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jul 2020 11:20:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyvkg-0001qq-3r; Fri, 24 Jul 2020 11:20:34 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mtOI=BD=gmail.com=alejandro.gonzalez.correo@srs-us1.protection.inumbo.net>)
 id 1jyvkf-0001qj-42
 for xen-devel@lists.xenproject.org; Fri, 24 Jul 2020 11:20:33 +0000
X-Inumbo-ID: afb02e04-cd9f-11ea-87fc-bc764e2007e4
Received: from mail-ot1-x342.google.com (unknown [2607:f8b0:4864:20::342])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id afb02e04-cd9f-11ea-87fc-bc764e2007e4;
 Fri, 24 Jul 2020 11:20:32 +0000 (UTC)
Received: by mail-ot1-x342.google.com with SMTP id g37so6680679otb.9
 for <xen-devel@lists.xenproject.org>; Fri, 24 Jul 2020 04:20:32 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc:content-transfer-encoding;
 bh=SmFGggjYg2xKc6nhXweMYQaHBSC3vmhtgUBewHY866Q=;
 b=gYRHl5sLTXnEDls4bpprBiWDlr64ZWXOBJGJ1oR6XdDOoq4nWXbvTYUFkNzOV/ToCx
 XmXfSFm/vL3GUI3013artVLvyBzH7cfRgB+uClQ5ADTABOV1N1kDKHA2tqm0SJdKsMTB
 t81U2MpRT/N7XTnnsLIRDHrkbuIFs5d1NowCZGrzNzZ4+MgxfVavw0oCH6fQHvHKB/hw
 gFw38GMbRMDCnGgSfN0WI7RdPGV9M44G3eFjDsYZazsuKmmezMW08DtjZOt+Lz6t5757
 OERjMo2Diz+za9jZ53MyOMSvjTwh+hHcPSwsvJj5DV73tMG32fQep6jFkBCpKrTz7P6L
 U7Bg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc:content-transfer-encoding;
 bh=SmFGggjYg2xKc6nhXweMYQaHBSC3vmhtgUBewHY866Q=;
 b=ftnVDTQiIPLVbbb6m2Yuz7Oq08GUCfYjP/w7iOemjAdMG5mZRIrySBsnXoVXk9xVQ7
 8lAL38wEIpASc63F/e1wPyuXV2eubl5uvn2U26kcJPItGg46LB8uEPplwOuKtMz8iEzY
 /svEhiV40zXyqaOt2ZqYPRz6xN4R+8QMRzE0eYFAHmryyyZnCszJiONiwCd8LEZlCqQh
 uYoRgF70T/d9AUz3y364mmcgcvsXWMF1Wols27tWKMnmL+wsyMqI0k0473jXmqM2YhCh
 fgxPDTK0hIRin8709qK2G6Bo9Jv/ThPiYNuQyvRmRjZtSyRRnA//0QM/j6P2FYGkHqoD
 0HfA==
X-Gm-Message-State: AOAM531X6vVjr9SFvCWw1KBSrHCqaiOCzyNF0O6kvYIgxCIPpNvu8ffi
 7IUikAzjSt4fd0QMmBFoZtlU3zT88MaOMRDnVnw=
X-Google-Smtp-Source: ABdhPJxn5ig285p+poPNj4iS9TDjc9OOibS2iWAvBoXPT3a8PGtRy4/d5XDpnDpoVGKD24NXV+i439E1jKuOX8DyoIo=
X-Received: by 2002:a9d:3984:: with SMTP id y4mr2456274otb.92.1595589631504;
 Fri, 24 Jul 2020 04:20:31 -0700 (PDT)
MIME-Version: 1.0
References: <CA+wirGqXMoRkS-aJmfFLipUv8SdY5LKV1aMrF0yKRJQaMvzs6Q@mail.gmail.com>
 <1c5cee83-295e-cc02-d018-b53ddc6e3590@xen.org>
In-Reply-To: <1c5cee83-295e-cc02-d018-b53ddc6e3590@xen.org>
From: Alejandro <alejandro.gonzalez.correo@gmail.com>
Date: Fri, 24 Jul 2020 13:20:10 +0200
Message-ID: <CA+wirGpFvLBzYRBaq8yspJj8j9-yoLwN88bt079qM5yqPTwtcA@mail.gmail.com>
Subject: Re: dom0 Linux 5.8-rc5 kernel failing to initialize cooling maps for
 Allwinner H6 SoC
To: Julien Grall <julien@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andre Przywara <andre.przywara@arm.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hello, and thanks for the response.

On Fri, 24 Jul 2020 at 12:45, Julien Grall (<julien@xen.org>) wrote:
> > I'm trying Xen 4.13.1 on an Allwinner H6 SoC (more precisely a Pine H64
> > model B, with an ARM Cortex-A53 CPU).
> > I managed to get a dom0 Linux 5.8-rc5 kernel running fine, unpatched,
> > and I'm using the upstream device tree for
> > my board. However, the dom0 kernel has trouble when reading some DT
> > nodes that are related to the CPUs, and
> > it can't initialize the thermal subsystem properly, which is a kind of
> > showstopper for me, because I'm concerned
> > that letting the CPU run at the maximum frequency without watching
> > its temperature may cause overheating.
>
> I understand this concern, I am aware of some efforts to get CPUFreq
> working on Xen but I am not sure if there is anything available yet. I
> have CCed a couple more people who may be able to help here.

Thank you for the CCs. I hope they can bring on some insight about this :)

> > The relevant kernel messages are:
> >
> > [  +0.001959] sun50i-cpufreq-nvmem: probe of sun50i-cpufreq-nvmem
> > failed with error -2
> > ...
> > [  +0.003053] hw perfevents: failed to parse interrupt-affinity[0] for pmu
> > [  +0.000043] hw perfevents: /pmu: failed to register PMU devices!
> > [  +0.000037] armv8-pmu: probe of pmu failed with error -22
>
> I am not sure the PMU failure is related to the thermal failure below.

I'm not sure either, but after comparing the kernel messages for a
boot with and without Xen, those were the differences (excluding, of
course, the messages that inform that the Xen hypervisor console is
being used and such). For the sake of completeness, I decided to
mention it anyway.

> > [  +0.000163] OF: /thermal-zones/cpu-thermal/cooling-maps/map0: could
> > not find phandle
> > [  +0.000063] thermal_sys: failed to build thermal zone cpu-thermal: -2=
2
> Would it be possible to paste the device-tree node for
> /thermal-zones/cpu-thermal/cooling-maps? I suspect the issue is because
> we recreated /cpus from scratch.
>
> I don't know much about how the thermal subsystem works, but I suspect
> this will not be enough to get it working properly on Xen. For a
> workaround, you would need to create a dom0 with the same number of
> vCPUs as the number of pCPUs. They would also need to be pinned.
>
> I will leave the others to fill in more details.

I think I should mention that I've tried to hackily fix things by
removing the make_cpus_node call on handle_node
(https://github.com/xen-project/xen/blob/master/xen/arch/arm/domain_build.c#L1585),
after removing the /cpus node from the skip_matches array. This way,
the original /cpus node was passed through, without being recreated by
Xen. Of course, I made sure that dom0 used the same number of vCPUs as
pCPUs, because otherwise things would probably blow up; luckily that
was not a problem for me. The end result was that the
aforementioned kernel error messages were gone, and the thermal
subsystem worked fine again. However, this time the cpufreq-dt probe
failed, with what I think was an ENODEV error. This left the CPU
locked at the boot frequency of less than 1 GHz, compared to the
maximum 1.8 GHz frequency that the SoC supports, which has bad
implications for performance.

Therefore, as it seems that passing more properties (like
#cooling-cells) is enough to get temperatures working, I suspect that
fixing the thermal issue is relatively easy, at least for my SoC. But
maybe I have just been lucky and that's not supposed to work anyway;
I'm not sure.
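For reference, the phandle dependency being discussed can be sketched roughly as follows. This is a hypothetical simplification, not the actual H6 device tree: node names, labels, and the trip point are illustrative.

```dts
/* Sketch only: the cooling map references a CPU node by phandle, and
 * that CPU node must carry #cooling-cells. If Xen recreates /cpus from
 * scratch without this property (or with fresh phandles), the lookup
 * in the thermal core fails and the zone cannot be built. */
cpus {
	cpu0: cpu@0 {
		compatible = "arm,cortex-a53";
		#cooling-cells = <2>;	/* min and max cooling state */
	};
};

thermal-zones {
	cpu-thermal {
		cooling-maps {
			map0 {
				trip = <&cpu_alert>;	/* hypothetical trip label */
				/* phandle into /cpus: broken if /cpus is rebuilt */
				cooling-device = <&cpu0 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
			};
		};
	};
};
```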

> >
> > I've searched for issues, code or commits that may be related for this
> > issue. The most relevant things I found are:
> >
> > - A patch that blacklists the A53 PMU:
> > https://patchwork.kernel.org/patch/10899881/
> > - The handle_node function in xen/arch/arm/domain_build.c:
> > https://github.com/xen-project/xen/blob/master/xen/arch/arm/domain_build.c#L1427
>
> I remember this discussion. The problem was that the PMU is using
> per-CPU interrupts. Xen is not yet able to handle PPIs as they often
> require more context to be saved/restored (in this case the PMU context).
>
> There was a proposal to look if a device is using PPIs and just remove
> them from the Device-Tree. Unfortunately, I haven't seen any official
> submission for this patch.
>
> Did you have to apply the patch to boot up? If not, then the error above
> shouldn't be a concern. However, if you need PMU support for using the
> thermal devices then it is going to require some work.

No, I didn't apply any patch to Xen whatsoever. It worked fine out of
the box. As I mentioned above, with a more complete /cpus node
declaration, the thermal subsystem works. I guess the PMU worked fine
too, but I didn't test it in any way, so maybe it is just barely able
to probe successfully somehow.

> > I've thought about removing "/cpus" from the skip_matches array in the
> > handle_node function, but I'm not sure
> > that would be a good fix.
>
> The node "/cpus" and its sub-nodes are recreated by Xen for Dom0. This is
> because Dom0 may have a different number of vCPUs and it doesn't see
> the pCPUs.
>
> If you don't skip "/cpus" from the host DT then you would end up with
> two "/cpus" paths in your dom0 DT. Most likely, Linux will not be happy
> with it.

Indeed, that is consistent with my observations of how the source code
works. Thanks for the confirmation :)

> I vaguely remember some discussions on how to deal with CPUFreq in Xen.
> IIRC we agreed that Dom0 should be part of the equation because it
> already contains all the drivers. However, I can't remember if we agreed
> how the dom0 would be made aware of the pCPUs.

That makes sense. Supporting every existing thermal and cpufreq method
in every ARM SoC seems like a lot of unneeded duplication of work,
given that Linux already has pretty good support for that. But, if
that's the case, I guess we should not mark the "dom0-kernel" cpufreq
boot parameter as deprecated in the documentation, at least for the
ARM platform: http://xenbits.xen.org/docs/unstable/misc/xen-command-line.html#cpufreq


From xen-devel-bounces@lists.xenproject.org Fri Jul 24 12:11:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jul 2020 12:11:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jywXu-0006f7-8j; Fri, 24 Jul 2020 12:11:26 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tno1=BD=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jywXs-0006f2-CS
 for xen-devel@lists.xenproject.org; Fri, 24 Jul 2020 12:11:24 +0000
X-Inumbo-ID: ca03ec6c-cda6-11ea-a3bd-12813bfff9fa
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ca03ec6c-cda6-11ea-a3bd-12813bfff9fa;
 Fri, 24 Jul 2020 12:11:23 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595592683;
 h=subject:to:cc:references:from:message-id:date:
 mime-version:in-reply-to:content-transfer-encoding;
 bh=1P0gyWlreOZfRM/fQGa8yYGacnnFBtjTvWhn8X6cGuo=;
 b=efIW1T5Isp5K3LJa06HaWVr4TKxVEg7V9htLuLHgBUIppkrifR1AsVce
 eBWBfs09kkvhcrbsiKmF0eibL3bUvVmQ/vBx2zoGPzg/j1vcXNkmlQCyY
 b1y05OBVu+abEkAvjFrCFPkfdNan3jvMY43I3ngbnNv8X1gVfvuDzcR8l U=;
Authentication-Results: esa3.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: sxJRq5KY/C4CEE2PXaJ8Mbtme54zkvMBY+7ZCHnQ5L/1e4W1eFXsIeAADadGEaoJbDIRdGiwIm
 XaazKwkCMwHRkzT0ZlpWIVCe49dcchAl6GmOB3eGMlnZYjc8J6TqAcJJ39QMJTWvTsxMt69ty7
 Lzzwp0tdjfv7rm8RGrSEryYBGwDGC014F5d7quuNVNMZQDiEPOSZY8rWVFjRCQ/ZN7gAkpF2e7
 wOs0GxVZ1+JUZerQc3z/YHkVLaR9bjPfg3b1bBSkwSXDqDhsT2STyY/L7915oU0EhjjY8/hSfS
 TZA=
X-SBRS: 2.7
X-MesageID: 23117059
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,390,1589256000"; d="scan'208";a="23117059"
Subject: Re: [PATCH] x86: guard against port I/O overlapping the RTC/CMOS range
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
 <xen-devel@lists.xenproject.org>
References: <38c73e17-30b8-27b4-bc7c-e6ef7817fa1e@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <8b267b5e-8bd0-692e-d5d9-4a2bd21fb261@citrix.com>
Date: Fri, 24 Jul 2020 13:11:18 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <38c73e17-30b8-27b4-bc7c-e6ef7817fa1e@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 17/07/2020 14:10, Jan Beulich wrote:
> Since we intercept RTC/CMOS port accesses, let's do so consistently in
> all cases, i.e. also for e.g. a dword access to [006E,0071]. To avoid
> the risk of unintended impact on Dom0 code actually doing so (despite
> the belief that none ought to exist), also extend
> guest_io_{read,write}() to decompose accesses where some ports are
> allowed to be directly accessed and some aren't.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>
> --- a/xen/arch/x86/pv/emul-priv-op.c
> +++ b/xen/arch/x86/pv/emul-priv-op.c
> @@ -210,7 +210,7 @@ static bool admin_io_okay(unsigned int p
>          return false;
>  
>      /* We also never permit direct access to the RTC/CMOS registers. */
> -    if ( ((port & ~1) == RTC_PORT(0)) )
> +    if ( port <= RTC_PORT(1) && port + bytes > RTC_PORT(0) )
>          return false;
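(For reference, the new condition is a standard half-open interval overlap test. A standalone sketch, assuming the usual 0x70-based `RTC_PORT()` definition; the function name is illustrative:)

```c
#include <assert.h>
#include <stdbool.h>

/* RTC/CMOS index/data ports; 0x70 base assumed, matching the common
 * RTC_PORT() definition in Linux/Xen headers. */
#define RTC_PORT(x) (0x70 + (x))

/* Overlap test from the hunk above: the access [port, port + bytes)
 * is refused if it intersects [RTC_PORT(0), RTC_PORT(1)]. The old
 * `(port & ~1) == RTC_PORT(0)` check missed e.g. a dword access
 * starting at 0x6E that still covers 0x70/0x71. */
static bool touches_rtc(unsigned int port, unsigned int bytes)
{
    return port <= RTC_PORT(1) && port + bytes > RTC_PORT(0);
}
```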

This first hunk is fine.

However, why decompose anything?  Any disallowed port in the range
terminates the entire access, and doesn't internally shrink the access.

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri Jul 24 12:43:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jul 2020 12:43:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyx2Q-000139-2f; Fri, 24 Jul 2020 12:42:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hWcK=BD=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jyx2P-00012u-3k
 for xen-devel@lists.xenproject.org; Fri, 24 Jul 2020 12:42:57 +0000
X-Inumbo-ID: 308bff3e-cdab-11ea-880c-bc764e2007e4
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 308bff3e-cdab-11ea-880c-bc764e2007e4;
 Fri, 24 Jul 2020 12:42:53 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595594572;
 h=from:to:cc:subject:date:message-id:in-reply-to:
 references:mime-version:content-transfer-encoding;
 bh=aDtTphrzz9VOwTJvjWtl31/5xlDEnj/EJHimWB6gA4o=;
 b=deiS7u4XhPpWcyrTBguRa/HFybY2oT+527j9v9hDZYTj5QajGP0eDFIf
 e1sQ+lNHIXbFlx2ROI87387Q4aePbqj0QV1UOdE2rXSEJC29D3E+/qB+1
 GGH+UiStNhEGpK7mKKCNQCvRSRrUc6kDbgbeYrtTshSp8kPd6IczNqZH0 U=;
Authentication-Results: esa4.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: KJCsrxedNsP6PXR2GRxBdY4JBtMMaZKb/2iWqx70jsyhvQVs2mJ4a6ODJl6e5Ze0n71njR8l7V
 9U+neT0HVYP3POae1Ostbc3PyXtJbBN2r4zxUF0cQs6LBnJFr0VNVu/tNCJc9T1E4naoIb331o
 PnI7RkHi0XdVA2g0geZOyiF6+NKoWClGj8G1fo1E2jxpTrVvVikRzuiGwbtXCIN56qO7yeEIjv
 kh6FrSOIHklNuDPUeb45cpzlQ/2JrbYLIt+iJwQswgcnxS7Z13OFRECZhFNlp6l07ZlfxwhxDd
 Xyk=
X-SBRS: 2.7
X-MesageID: 23987140
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,390,1589256000"; d="scan'208";a="23987140"
From: Roger Pau Monne <roger.pau@citrix.com>
To: <linux-kernel@vger.kernel.org>
Subject: [PATCH v2 1/4] xen/balloon: fix accounting in
 alloc_xenballooned_pages error path
Date: Fri, 24 Jul 2020 14:42:38 +0200
Message-ID: <20200724124241.48208-2-roger.pau@citrix.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20200724124241.48208-1-roger.pau@citrix.com>
References: <20200724124241.48208-1-roger.pau@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, stable@vger.kernel.org,
 xen-devel@lists.xenproject.org, Boris
 Ostrovsky <boris.ostrovsky@oracle.com>, Roger Pau Monne <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

target_unpopulated is incremented by nr_pages at the start of the
function, but the call to free_xenballooned_pages will only subtract
pgno pages, so the remainder needs to be subtracted before returning
or else the accounting will be skewed.
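The invariant being restored can be checked with a small standalone model; the names mirror the driver, but this is a toy sketch of the arithmetic, not the real balloon state:

```c
#include <assert.h>

/* Toy model of the balloon accounting (not the actual driver). */
static long target_unpopulated;

/* Error-path sketch: nr_pages were reserved up front, but only pgno
 * pages were actually obtained and are returned by the free helper,
 * so the shortfall must be subtracted explicitly (the fix). */
static void alloc_error_path(int nr_pages, int pgno)
{
    target_unpopulated += nr_pages;        /* done at function entry */
    target_unpopulated -= pgno;            /* free_xenballooned_pages() */
    target_unpopulated -= nr_pages - pgno; /* the fix: drop the rest */
}
```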

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
Cc: stable@vger.kernel.org
---
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org
---
 drivers/xen/balloon.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/drivers/xen/balloon.c b/drivers/xen/balloon.c
index 77c57568e5d7..3cb10ed32557 100644
--- a/drivers/xen/balloon.c
+++ b/drivers/xen/balloon.c
@@ -630,6 +630,12 @@ int alloc_xenballooned_pages(int nr_pages, struct page **pages)
  out_undo:
 	mutex_unlock(&balloon_mutex);
 	free_xenballooned_pages(pgno, pages);
+	/*
+	 * NB: free_xenballooned_pages will only subtract pgno pages, but since
+	 * target_unpopulated is incremented with nr_pages at the start we need
+	 * to remove the remaining ones also, or accounting will be screwed.
+	 */
+	balloon_stats.target_unpopulated -= nr_pages - pgno;
 	return ret;
 }
 EXPORT_SYMBOL(alloc_xenballooned_pages);
-- 
2.27.0



From xen-devel-bounces@lists.xenproject.org Fri Jul 24 12:43:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jul 2020 12:43:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyx2S-000151-AJ; Fri, 24 Jul 2020 12:43:00 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hWcK=BD=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jyx2R-000148-Hz
 for xen-devel@lists.xenproject.org; Fri, 24 Jul 2020 12:42:59 +0000
X-Inumbo-ID: 337a5524-cdab-11ea-a3c5-12813bfff9fa
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 337a5524-cdab-11ea-a3c5-12813bfff9fa;
 Fri, 24 Jul 2020 12:42:58 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595594578;
 h=from:to:cc:subject:date:message-id:in-reply-to:
 references:mime-version:content-transfer-encoding;
 bh=9vtgZWvCFbf23f9EcBa2/BD85tIDxfXjWQT+ZKvEIkQ=;
 b=Czt7CIUx0nuOtcFFImq/Lq+aTFgHTtUp6IoZl/XgDTav3ihP76Tw4QkQ
 Fta6Q9dZZpy43XhWnibp0U0lFGILLn00+TiO1QW6xkX0DiL3ISp4nbKGZ
 IFOOox7mwYuX9uzH+BpxwIVx6pEkBERteak3IlFwXgFLRmkfKjigL5QnI M=;
Authentication-Results: esa1.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: Dc8nqelwNvG0/+NMY4+cPWhthOaCyoSnUTP+8s1m6cGyixWFyqmz0D0lOYHL0AUzwySXw/Dsum
 o63shKP7+8p2KQKTGtvB48PgrA6dEpk+FjgHPkyzfK2BdO9RZy5LFaeWiHSJ7I+WDhXsT2jHOT
 wwolMT+yZOPpLqEFoJMnTdrZTr/gAGSMeUDtDsDdVOpmphY1HzwjTNCmNpwC1cqbLeglQMR3MV
 SEkdE4rvDQv+7kDNXpiuvhHJxFMiP7UZpwCwBW/hxpICmMVwcLWyQ/67BEdhKdnVHEXcmpAL5L
 8z8=
X-SBRS: 2.7
X-MesageID: 23454363
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,390,1589256000"; d="scan'208";a="23454363"
From: Roger Pau Monne <roger.pau@citrix.com>
To: <linux-kernel@vger.kernel.org>
Subject: [PATCH v2 3/4] Revert "xen/balloon: Fix crash when ballooning on x86
 32 bit PAE"
Date: Fri, 24 Jul 2020 14:42:40 +0200
Message-ID: <20200724124241.48208-4-roger.pau@citrix.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20200724124241.48208-1-roger.pau@citrix.com>
References: <20200724124241.48208-1-roger.pau@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>, Stefano
 Stabellini <sstabellini@kernel.org>, Roger Pau Monne <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

This reverts commit dfd74a1edfaba5864276a2859190a8d242d18952.

This has been fixed by commit dca4436d1cf9e0d237c, which added an out
of bounds check to __add_memory so that trying to add blocks past
MAX_PHYSMEM_BITS will fail.

Note the check in the Xen balloon driver was bogus anyway, as it
checked the start address of the resource, but it should instead test
the end address to assert the whole resource falls below
MAX_PHYSMEM_BITS.
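The start-versus-end distinction described above, in standalone form. This is an illustrative sketch: the limit value and function names are made up, standing in for `1UL << (MAX_PHYSMEM_BITS - PAGE_SHIFT)` and the driver's check:

```c
#include <assert.h>
#include <stdbool.h>

#define PAGE_SHIFT 12
/* Illustrative pfn limit, standing in for the SPARSEMEM bound. */
static const unsigned long limit_pfn = 1UL << (46 - PAGE_SHIFT);

/* Bogus variant from the reverted commit: only the start is checked,
 * so a resource straddling the limit is wrongly accepted. */
static bool fits_start_only(unsigned long start, unsigned long size)
{
    (void)size;  /* the bug: size is ignored */
    return (start >> PAGE_SHIFT) <= limit_pfn;
}

/* What the check should have done: the whole resource must fit. */
static bool fits_whole(unsigned long start, unsigned long size)
{
    return ((start + size - 1) >> PAGE_SHIFT) <= limit_pfn;
}
```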

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org
---
 drivers/xen/balloon.c | 14 --------------
 1 file changed, 14 deletions(-)

diff --git a/drivers/xen/balloon.c b/drivers/xen/balloon.c
index 292413b27575..b1d8b028bf80 100644
--- a/drivers/xen/balloon.c
+++ b/drivers/xen/balloon.c
@@ -266,20 +266,6 @@ static struct resource *additional_memory_resource(phys_addr_t size)
 		return NULL;
 	}
 
-#ifdef CONFIG_SPARSEMEM
-	{
-		unsigned long limit = 1UL << (MAX_PHYSMEM_BITS - PAGE_SHIFT);
-		unsigned long pfn = res->start >> PAGE_SHIFT;
-
-		if (pfn > limit) {
-			pr_err("New System RAM resource outside addressable RAM (%lu > %lu)\n",
-			       pfn, limit);
-			release_memory_resource(res);
-			return NULL;
-		}
-	}
-#endif
-
 	return res;
 }
 
-- 
2.27.0



From xen-devel-bounces@lists.xenproject.org Fri Jul 24 12:43:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jul 2020 12:43:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyx2c-00017A-Qc; Fri, 24 Jul 2020 12:43:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hWcK=BD=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jyx2b-00016o-2u
 for xen-devel@lists.xenproject.org; Fri, 24 Jul 2020 12:43:09 +0000
X-Inumbo-ID: 38f63932-cdab-11ea-880c-bc764e2007e4
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 38f63932-cdab-11ea-880c-bc764e2007e4;
 Fri, 24 Jul 2020 12:43:07 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595594588;
 h=from:to:cc:subject:date:message-id:in-reply-to:
 references:mime-version:content-transfer-encoding;
 bh=LnrKUU0E0L2NEx45muJDP2eR0l7cIb2vvmXBeLMbICs=;
 b=Z2PGy+hbl65phsYQO3vpIS7GNHhVCFYheS1X+igm7nxdlki/ReV1/NhM
 EGMUWvN+kZPogSn9Z8aGvDMBdMJUkhAQQQlqI8rxx1WmnRMXFj4u2XN04
 +htpMVnfglzamuXy9UoAxUAfrIK2sYFSSdNFCnY1pLoJuk5pvQmfg49Mj g=;
Authentication-Results: esa2.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: WfBQbZ/B8RjixznMe135h8G4op5I57LLr6cNhd+2OlPg5iZoQzntVR2fvI66FzZ1wlsu2aGkR4
 ZTXryKKaEGGOgpNI6NUmNv9sIISAdllRW8pF5Wsw0jnCf0Un1Q0JwhxiUG2N/GLyNmw6VI40PG
 ziD5YP1SssIvYKgACa/1Ty0AuogXr9+achvpsOZqLB9Nwq7ifesdt/ZozyCHzUASG4GdOluU++
 8bvRw7dRXdla4XWmbl7wasqEPit5RMymzZx9vV0MT4FkQlksJDfa6kf7HV3p6eTnXhFRXgyk7p
 3jQ=
X-SBRS: 2.7
X-MesageID: 23139916
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,390,1589256000"; d="scan'208";a="23139916"
From: Roger Pau Monne <roger.pau@citrix.com>
To: <linux-kernel@vger.kernel.org>
Subject: [PATCH v2 4/4] xen: add helpers to allocate unpopulated memory
Date: Fri, 24 Jul 2020 14:42:41 +0200
Message-ID: <20200724124241.48208-5-roger.pau@citrix.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20200724124241.48208-1-roger.pau@citrix.com>
References: <20200724124241.48208-1-roger.pau@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>,
 David Airlie <airlied@linux.ie>, Yan
 Yankovskyi <yyankovskyi@gmail.com>, David Hildenbrand <david@redhat.com>,
 dri-devel@lists.freedesktop.org, Michal Hocko <mhocko@kernel.org>,
 linux-mm@kvack.org, Daniel
 Vetter <daniel@ffwll.ch>, xen-devel@lists.xenproject.org,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Dan Carpenter <dan.carpenter@oracle.com>,
 Roger Pau Monne <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

To be used in order to create foreign mappings. This is based on the
ZONE_DEVICE facility which is used by persistent memory devices in
order to create struct pages and kernel virtual mappings for the IOMEM
areas of such devices. Note that on kernels without support for
ZONE_DEVICE Xen will fall back to using ballooned pages in order to
create foreign mappings.

The newly added helpers use the same parameters as the existing
{alloc/free}_xenballooned_pages functions, which allows for in-place
replacement of the callers. Once a memory region has been added to be
used as scratch mapping space it will no longer be released, and pages
returned are kept in a linked list. This provides a buffer of pages
and avoids resorting to frequent additions and removals of regions.
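The "keep returned pages on a list" behaviour can be sketched as follows. This is a hypothetical simplification, not the code in unpopulated-alloc.c; the function names are invented:

```c
#include <assert.h>
#include <stddef.h>

/* Toy free-list buffer: pages handed back are kept for reuse instead
 * of tearing the backing region down. */
struct page { struct page *next; };

static struct page *free_list;
static size_t list_count;

/* Return pages to the buffer (LIFO push). */
static void put_pages(struct page *pages, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        pages[i].next = free_list;
        free_list = &pages[i];
        list_count++;
    }
}

/* Hand out buffered pages first; returns how many were satisfied.
 * A real allocator would add a new region for the shortfall. */
static size_t get_pages(struct page **out, size_t n)
{
    size_t got = 0;
    while (got < n && free_list) {
        out[got++] = free_list;
        free_list = free_list->next;
        list_count--;
    }
    return got;
}
```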

If enabled (because ZONE_DEVICE is supported) the usage of the new
functionality untangles Xen balloon and RAM hotplug from the usage of
unpopulated physical memory ranges to map foreign pages, which is the
correct thing to do in order to avoid having mappings of foreign pages
depend on memory hotplug.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
I've not added a new memory_type type and just used
MEMORY_DEVICE_DEVDAX which seems to be what we want for such memory
regions. I'm unsure whether abusing this type is fine, or if I should
instead add a specific type, maybe MEMORY_DEVICE_GENERIC? I don't
think we should be using a specific Xen type at all.
---
Cc: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
Cc: David Airlie <airlied@linux.ie>
Cc: Daniel Vetter <daniel@ffwll.ch>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Dan Carpenter <dan.carpenter@oracle.com>
Cc: Roger Pau Monne <roger.pau@citrix.com>
Cc: Wei Liu <wl@xen.org>
Cc: Yan Yankovskyi <yyankovskyi@gmail.com>
Cc: dri-devel@lists.freedesktop.org
Cc: xen-devel@lists.xenproject.org
Cc: linux-mm@kvack.org
Cc: David Hildenbrand <david@redhat.com>
Cc: Michal Hocko <mhocko@kernel.org>
---
 drivers/gpu/drm/xen/xen_drm_front_gem.c |   8 +-
 drivers/xen/Makefile                    |   1 +
 drivers/xen/balloon.c                   |   4 +-
 drivers/xen/grant-table.c               |   4 +-
 drivers/xen/privcmd.c                   |   4 +-
 drivers/xen/unpopulated-alloc.c         | 222 ++++++++++++++++++++++++
 drivers/xen/xenbus/xenbus_client.c      |   6 +-
 drivers/xen/xlate_mmu.c                 |   4 +-
 include/xen/xen.h                       |   8 +
 9 files changed, 246 insertions(+), 15 deletions(-)
 create mode 100644 drivers/xen/unpopulated-alloc.c

diff --git a/drivers/gpu/drm/xen/xen_drm_front_gem.c b/drivers/gpu/drm/xen/xen_drm_front_gem.c
index f0b85e094111..9dd06eae767a 100644
--- a/drivers/gpu/drm/xen/xen_drm_front_gem.c
+++ b/drivers/gpu/drm/xen/xen_drm_front_gem.c
@@ -99,8 +99,8 @@ static struct xen_gem_object *gem_create(struct drm_device *dev, size_t size)
 		 * allocate ballooned pages which will be used to map
 		 * grant references provided by the backend
 		 */
-		ret = alloc_xenballooned_pages(xen_obj->num_pages,
-					       xen_obj->pages);
+		ret = xen_alloc_unpopulated_pages(xen_obj->num_pages,
+					          xen_obj->pages);
 		if (ret < 0) {
 			DRM_ERROR("Cannot allocate %zu ballooned pages: %d\n",
 				  xen_obj->num_pages, ret);
@@ -152,8 +152,8 @@ void xen_drm_front_gem_free_object_unlocked(struct drm_gem_object *gem_obj)
 	} else {
 		if (xen_obj->pages) {
 			if (xen_obj->be_alloc) {
-				free_xenballooned_pages(xen_obj->num_pages,
-							xen_obj->pages);
+				xen_free_unpopulated_pages(xen_obj->num_pages,
+							   xen_obj->pages);
 				gem_free_pages_array(xen_obj);
 			} else {
 				drm_gem_put_pages(&xen_obj->base,
diff --git a/drivers/xen/Makefile b/drivers/xen/Makefile
index 0d322f3d90cd..788a5d9c8ef0 100644
--- a/drivers/xen/Makefile
+++ b/drivers/xen/Makefile
@@ -42,3 +42,4 @@ xen-gntdev-$(CONFIG_XEN_GNTDEV_DMABUF)	+= gntdev-dmabuf.o
 xen-gntalloc-y				:= gntalloc.o
 xen-privcmd-y				:= privcmd.o privcmd-buf.o
 obj-$(CONFIG_XEN_FRONT_PGDIR_SHBUF)	+= xen-front-pgdir-shbuf.o
+obj-$(CONFIG_ZONE_DEVICE)		+= unpopulated-alloc.o
diff --git a/drivers/xen/balloon.c b/drivers/xen/balloon.c
index b1d8b028bf80..815ef10eb2ff 100644
--- a/drivers/xen/balloon.c
+++ b/drivers/xen/balloon.c
@@ -654,7 +654,7 @@ void free_xenballooned_pages(int nr_pages, struct page **pages)
 }
 EXPORT_SYMBOL(free_xenballooned_pages);
 
-#ifdef CONFIG_XEN_PV
+#if defined(CONFIG_XEN_PV) && !defined(CONFIG_ZONE_DEVICE)
 static void __init balloon_add_region(unsigned long start_pfn,
 				      unsigned long pages)
 {
@@ -708,7 +708,7 @@ static int __init balloon_init(void)
 	register_sysctl_table(xen_root);
 #endif
 
-#ifdef CONFIG_XEN_PV
+#if defined(CONFIG_XEN_PV) && !defined(CONFIG_ZONE_DEVICE)
 	{
 		int i;
 
diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
index 8d06bf1cc347..523dcdf39cc9 100644
--- a/drivers/xen/grant-table.c
+++ b/drivers/xen/grant-table.c
@@ -801,7 +801,7 @@ int gnttab_alloc_pages(int nr_pages, struct page **pages)
 {
 	int ret;
 
-	ret = alloc_xenballooned_pages(nr_pages, pages);
+	ret = xen_alloc_unpopulated_pages(nr_pages, pages);
 	if (ret < 0)
 		return ret;
 
@@ -836,7 +836,7 @@ EXPORT_SYMBOL_GPL(gnttab_pages_clear_private);
 void gnttab_free_pages(int nr_pages, struct page **pages)
 {
 	gnttab_pages_clear_private(nr_pages, pages);
-	free_xenballooned_pages(nr_pages, pages);
+	xen_free_unpopulated_pages(nr_pages, pages);
 }
 EXPORT_SYMBOL_GPL(gnttab_free_pages);
 
diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
index a250d118144a..56000ab70974 100644
--- a/drivers/xen/privcmd.c
+++ b/drivers/xen/privcmd.c
@@ -425,7 +425,7 @@ static int alloc_empty_pages(struct vm_area_struct *vma, int numpgs)
 	if (pages == NULL)
 		return -ENOMEM;
 
-	rc = alloc_xenballooned_pages(numpgs, pages);
+	rc = xen_alloc_unpopulated_pages(numpgs, pages);
 	if (rc != 0) {
 		pr_warn("%s Could not alloc %d pfns rc:%d\n", __func__,
 			numpgs, rc);
@@ -900,7 +900,7 @@ static void privcmd_close(struct vm_area_struct *vma)
 
 	rc = xen_unmap_domain_gfn_range(vma, numgfns, pages);
 	if (rc == 0)
-		free_xenballooned_pages(numpgs, pages);
+		xen_free_unpopulated_pages(numpgs, pages);
 	else
 		pr_crit("unable to unmap MFN range: leaking %d pages. rc=%d\n",
 			numpgs, rc);
diff --git a/drivers/xen/unpopulated-alloc.c b/drivers/xen/unpopulated-alloc.c
new file mode 100644
index 000000000000..aaa91cefbbf9
--- /dev/null
+++ b/drivers/xen/unpopulated-alloc.c
@@ -0,0 +1,222 @@
+/*
+ * Helpers to allocate unpopulated memory for foreign mappings
+ *
+ * Copyright (c) 2020, Citrix Systems R&D
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License version 2
+ * as published by the Free Software Foundation; or, when distributed
+ * separately from the Linux kernel or incorporated into other
+ * software packages, subject to the following license:
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
+ * of this source file (the "Software"), to deal in the Software without
+ * restriction, including without limitation the rights to use, copy, modify,
+ * merge, publish, distribute, sublicense, and/or sell copies of the Software,
+ * and to permit persons to whom the Software is furnished to do so, subject to
+ * the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ */
+
+#include <linux/errno.h>
+#include <linux/gfp.h>
+#include <linux/kernel.h>
+#include <linux/mm.h>
+#include <linux/memremap.h>
+#include <linux/slab.h>
+
+#include <asm/page.h>
+
+#include <xen/page.h>
+#include <xen/xen.h>
+
+static DEFINE_MUTEX(lock);
+static LIST_HEAD(list);
+static unsigned int count;
+
+static int fill(unsigned int nr_pages)
+{
+	struct dev_pagemap *pgmap;
+	void *vaddr;
+	unsigned int i, alloc_pages = round_up(nr_pages, PAGES_PER_SECTION);
+	int nid, ret;
+
+	pgmap = kzalloc(sizeof(*pgmap), GFP_KERNEL);
+	if (!pgmap)
+		return -ENOMEM;
+
+	pgmap->type = MEMORY_DEVICE_DEVDAX;
+	pgmap->res.name = "XEN SCRATCH";
+	pgmap->res.flags = IORESOURCE_MEM | IORESOURCE_BUSY;
+
+	ret = allocate_resource(&iomem_resource, &pgmap->res,
+				alloc_pages * PAGE_SIZE, 0, -1,
+				PAGES_PER_SECTION * PAGE_SIZE, NULL, NULL);
+	if (ret < 0) {
+		pr_err("Cannot allocate new IOMEM resource\n");
+		kfree(pgmap);
+		return ret;
+	}
+
+	nid = memory_add_physaddr_to_nid(pgmap->res.start);
+
+#ifdef CONFIG_XEN_HAVE_PVMMU
+	/*
+	 * We don't support PV MMU when Linux and Xen are using
+	 * different page granularity.
+	 */
+	BUILD_BUG_ON(XEN_PAGE_SIZE != PAGE_SIZE);
+
+	/*
+	 * memremap will build page tables for the new memory so
+	 * the p2m must contain invalid entries so the correct
+	 * non-present PTEs will be written.
+	 *
+	 * If a failure occurs, the original (identity) p2m entries
+	 * are not restored since this region is now known not to
+	 * conflict with any devices.
+	 */
+	if (!xen_feature(XENFEAT_auto_translated_physmap)) {
+		xen_pfn_t pfn = PFN_DOWN(pgmap->res.start);
+
+		for (i = 0; i < alloc_pages; i++) {
+			if (!set_phys_to_machine(pfn + i, INVALID_P2M_ENTRY)) {
+				pr_warn("set_phys_to_machine() failed, no memory added\n");
+				release_resource(&pgmap->res);
+				kfree(pgmap);
+				return -ENOMEM;
+			}
+		}
+	}
+#endif
+
+	vaddr = memremap_pages(pgmap, nid);
+	if (IS_ERR(vaddr)) {
+		pr_err("Cannot remap memory range\n");
+		release_resource(&pgmap->res);
+		kfree(pgmap);
+		return PTR_ERR(vaddr);
+	}
+
+	for (i = 0; i < alloc_pages; i++) {
+		struct page *pg = virt_to_page(vaddr + PAGE_SIZE * i);
+
+		BUG_ON(!virt_addr_valid(vaddr + PAGE_SIZE * i));
+		list_add(&pg->lru, &list);
+		count++;
+	}
+
+	return 0;
+}
+
+/**
+ * xen_alloc_unpopulated_pages - alloc unpopulated pages
+ * @nr_pages: Number of pages
+ * @pages: pages returned
+ * @return 0 on success, error otherwise
+ */
+int xen_alloc_unpopulated_pages(unsigned int nr_pages, struct page **pages)
+{
+	unsigned int i;
+	int ret = 0;
+
+	mutex_lock(&lock);
+	if (count < nr_pages) {
+		ret = fill(nr_pages);
+		if (ret)
+			goto out;
+	}
+
+	for (i = 0; i < nr_pages; i++) {
+		struct page *pg = list_first_entry_or_null(&list, struct page,
+							   lru);
+
+		BUG_ON(!pg);
+		list_del(&pg->lru);
+		count--;
+		pages[i] = pg;
+
+#ifdef CONFIG_XEN_HAVE_PVMMU
+		/*
+		 * We don't support PV MMU when Linux and Xen are using
+		 * different page granularity.
+		 */
+		BUILD_BUG_ON(XEN_PAGE_SIZE != PAGE_SIZE);
+
+		if (!xen_feature(XENFEAT_auto_translated_physmap)) {
+			ret = xen_alloc_p2m_entry(page_to_pfn(pg));
+			if (ret < 0) {
+				unsigned int j;
+
+				for (j = 0; j <= i; j++) {
+					list_add(&pages[j]->lru, &list);
+					count++;
+				}
+				goto out;
+			}
+		}
+#endif
+	}
+
+out:
+	mutex_unlock(&lock);
+	return ret;
+}
+EXPORT_SYMBOL(xen_alloc_unpopulated_pages);
+
+/**
+ * xen_free_unpopulated_pages - return unpopulated pages
+ * @nr_pages: Number of pages
+ * @pages: pages to return
+ */
+void xen_free_unpopulated_pages(unsigned int nr_pages, struct page **pages)
+{
+	unsigned int i;
+
+	mutex_lock(&lock);
+	for (i = 0; i < nr_pages; i++) {
+		list_add(&pages[i]->lru, &list);
+		count++;
+	}
+	mutex_unlock(&lock);
+}
+EXPORT_SYMBOL(xen_free_unpopulated_pages);
+
+#ifdef CONFIG_XEN_PV
+static int __init init(void)
+{
+	unsigned int i;
+
+	if (!xen_domain())
+		return -ENODEV;
+
+	/*
+	 * Initialize with pages from the extra memory regions (see
+	 * arch/x86/xen/setup.c).
+	 */
+	for (i = 0; i < XEN_EXTRA_MEM_MAX_REGIONS; i++) {
+		unsigned int j;
+
+		for (j = 0; j < xen_extra_mem[i].n_pfns; j++) {
+			struct page *pg =
+				pfn_to_page(xen_extra_mem[i].start_pfn + j);
+
+			list_add(&pg->lru, &list);
+			count++;
+		}
+	}
+
+	return 0;
+}
+subsys_initcall(init);
+#endif
diff --git a/drivers/xen/xenbus/xenbus_client.c b/drivers/xen/xenbus/xenbus_client.c
index 786fbb7d8be0..70b6c4780fbd 100644
--- a/drivers/xen/xenbus/xenbus_client.c
+++ b/drivers/xen/xenbus/xenbus_client.c
@@ -615,7 +615,7 @@ static int xenbus_map_ring_hvm(struct xenbus_device *dev,
 	bool leaked = false;
 	unsigned int nr_pages = XENBUS_PAGES(nr_grefs);
 
-	err = alloc_xenballooned_pages(nr_pages, node->hvm.pages);
+	err = xen_alloc_unpopulated_pages(nr_pages, node->hvm.pages);
 	if (err)
 		goto out_err;
 
@@ -656,7 +656,7 @@ static int xenbus_map_ring_hvm(struct xenbus_device *dev,
 			 addr, nr_pages);
  out_free_ballooned_pages:
 	if (!leaked)
-		free_xenballooned_pages(nr_pages, node->hvm.pages);
+		xen_free_unpopulated_pages(nr_pages, node->hvm.pages);
  out_err:
 	return err;
 }
@@ -852,7 +852,7 @@ static int xenbus_unmap_ring_hvm(struct xenbus_device *dev, void *vaddr)
 			       info.addrs);
 	if (!rv) {
 		vunmap(vaddr);
-		free_xenballooned_pages(nr_pages, node->hvm.pages);
+		xen_free_unpopulated_pages(nr_pages, node->hvm.pages);
 	}
 	else
 		WARN(1, "Leaking %p, size %u page(s)\n", vaddr, nr_pages);
diff --git a/drivers/xen/xlate_mmu.c b/drivers/xen/xlate_mmu.c
index 7b1077f0abcb..34742c6e189e 100644
--- a/drivers/xen/xlate_mmu.c
+++ b/drivers/xen/xlate_mmu.c
@@ -232,7 +232,7 @@ int __init xen_xlate_map_ballooned_pages(xen_pfn_t **gfns, void **virt,
 		kfree(pages);
 		return -ENOMEM;
 	}
-	rc = alloc_xenballooned_pages(nr_pages, pages);
+	rc = xen_alloc_unpopulated_pages(nr_pages, pages);
 	if (rc) {
 		pr_warn("%s Couldn't balloon alloc %ld pages rc:%d\n", __func__,
 			nr_pages, rc);
@@ -249,7 +249,7 @@ int __init xen_xlate_map_ballooned_pages(xen_pfn_t **gfns, void **virt,
 	if (!vaddr) {
 		pr_warn("%s Couldn't map %ld pages rc:%d\n", __func__,
 			nr_pages, rc);
-		free_xenballooned_pages(nr_pages, pages);
+		xen_free_unpopulated_pages(nr_pages, pages);
 		kfree(pages);
 		kfree(pfns);
 		return -ENOMEM;
diff --git a/include/xen/xen.h b/include/xen/xen.h
index 19a72f591e2b..aa33bc0d933c 100644
--- a/include/xen/xen.h
+++ b/include/xen/xen.h
@@ -52,4 +52,12 @@ bool xen_biovec_phys_mergeable(const struct bio_vec *vec1,
 extern u64 xen_saved_max_mem_size;
 #endif
 
+#ifdef CONFIG_ZONE_DEVICE
+int xen_alloc_unpopulated_pages(unsigned int nr_pages, struct page **pages);
+void xen_free_unpopulated_pages(unsigned int nr_pages, struct page **pages);
+#else
+#define xen_alloc_unpopulated_pages alloc_xenballooned_pages
+#define xen_free_unpopulated_pages free_xenballooned_pages
+#endif
+
 #endif	/* _XEN_XEN_H */
-- 
2.27.0



From xen-devel-bounces@lists.xenproject.org Fri Jul 24 12:43:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jul 2020 12:43:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyx2V-00015o-IS; Fri, 24 Jul 2020 12:43:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hWcK=BD=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jyx2U-00012u-40
 for xen-devel@lists.xenproject.org; Fri, 24 Jul 2020 12:43:02 +0000
X-Inumbo-ID: 31f451f1-cdab-11ea-880c-bc764e2007e4
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 31f451f1-cdab-11ea-880c-bc764e2007e4;
 Fri, 24 Jul 2020 12:42:55 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595594576;
 h=from:to:cc:subject:date:message-id:in-reply-to:
 references:mime-version:content-transfer-encoding;
 bh=7Tw4G/V7kxbTklR4OkIgBDvRcTQey+xrJa3KZIIL0yE=;
 b=Wr12Kj69LSoxJfKjPvI9zKeyP+8qjKurWGX/V1S90YdMWo2SxUBm5mR4
 FWTFEvXA4N9ijDkQtc8GPDZEGXrnhA8Jkre1m7+vZNVToTQg2lXoi0I1a
 Yjknuae2jRN+PSMFW35gT8v8c16ILAY+xALtDbWG4kTcvAoTtdu2rRLGk Q=;
Authentication-Results: esa2.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: 1RmtWXLEVLsjBBebMyNb9dJntTLAmrgMDJmAs3T9l48/E9hvv/9Zz/JGcwmC/qk907TZ5cCnhv
 JD3rZBNMfCwCn/WlbKh5LBFv7oL/dKr6eU4+jdqj7dHQOQQqNjdvpYQ66LyEy3gb9PIHCC1MrD
 kmD0iJEk7E1uJWWtM02WnuIA+Yoq9Ra2+LyjJAAjbuuR1etEvwEpbBKBYFOxclIwriVbLZlQRM
 khIh9nzeMT3Dhtv9/vt6aqAXVFbKhToIhnx5qJOUTVfm1CHLGLlsws80oqkAFKPc/SW8TD/l4h
 FnA=
X-SBRS: 2.7
X-MesageID: 23139909
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,390,1589256000"; d="scan'208";a="23139909"
From: Roger Pau Monne <roger.pau@citrix.com>
To: <linux-kernel@vger.kernel.org>
Subject: [PATCH v2 2/4] xen/balloon: make the balloon wait interruptible
Date: Fri, 24 Jul 2020 14:42:39 +0200
Message-ID: <20200724124241.48208-3-roger.pau@citrix.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20200724124241.48208-1-roger.pau@citrix.com>
References: <20200724124241.48208-1-roger.pau@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, stable@vger.kernel.org,
 xen-devel@lists.xenproject.org, Boris
 Ostrovsky <boris.ostrovsky@oracle.com>, Roger Pau Monne <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Make the wait interruptible so it can be killed; otherwise processes
can get hung indefinitely waiting for balloon pages.
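The resulting error handling can be sketched as follows. The helper name is illustrative; in the kernel, a nonzero result from wait_event_interruptible() is -ERESTARTSYS, which the patch folds into -ENOMEM for the caller.

```c
#include <errno.h>

/*
 * Sketch of the return-value mapping described above: a nonzero
 * result from the interruptible wait means a signal arrived, and
 * the allocation path reports -ENOMEM instead of blocking forever.
 * (Illustrative helper, not the kernel function.)
 */
static int balloon_wait_result(int wait_rc)
{
	return wait_rc ? -ENOMEM : 0;
}
```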

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
Cc: stable@vger.kernel.org
---
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org
---
 drivers/xen/balloon.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/drivers/xen/balloon.c b/drivers/xen/balloon.c
index 3cb10ed32557..292413b27575 100644
--- a/drivers/xen/balloon.c
+++ b/drivers/xen/balloon.c
@@ -568,11 +568,13 @@ static int add_ballooned_pages(int nr_pages)
 	if (xen_hotplug_unpopulated) {
 		st = reserve_additional_memory();
 		if (st != BP_ECANCELED) {
+			int rc;
+
 			mutex_unlock(&balloon_mutex);
-			wait_event(balloon_wq,
+			rc = wait_event_interruptible(balloon_wq,
 				   !list_empty(&ballooned_pages));
 			mutex_lock(&balloon_mutex);
-			return 0;
+			return rc ? -ENOMEM : 0;
 		}
 	}
 
-- 
2.27.0



From xen-devel-bounces@lists.xenproject.org Fri Jul 24 12:43:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jul 2020 12:43:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyx2L-00012z-R7; Fri, 24 Jul 2020 12:42:53 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hWcK=BD=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jyx2K-00012u-DT
 for xen-devel@lists.xenproject.org; Fri, 24 Jul 2020 12:42:52 +0000
X-Inumbo-ID: 2f1a0240-cdab-11ea-880c-bc764e2007e4
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2f1a0240-cdab-11ea-880c-bc764e2007e4;
 Fri, 24 Jul 2020 12:42:50 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595594572;
 h=from:to:cc:subject:date:message-id:mime-version:
 content-transfer-encoding;
 bh=lKXqW8/3YOwTJyFQghmaI+GKNMTKpRONfG96XJL/WMA=;
 b=bBI4V1y1yQsRi3eVnZy6WijdUklYMPpbgtO+niDVMPVVxEHjx1/sODwp
 PztC0Q8SR08deLMZhtXD8ShY7TmlkN2kugy9+uN7H3AbZrSy5ke2wunr4
 FV06DLCU3jIbrKKUCDEOP2iK+xPCw8t6Kbit3kuGuecRuRU3OruyoZnjl 0=;
Authentication-Results: esa3.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: K2iI69Y6OCtQDD19Hf1T4acInpaZV4ysI4avOab6HMISqvB1o3gpzLiKS0FhxgkjERb9XUjbTM
 F5Fk3T9f8PoaJiidx88PglziYhudJ7iwbIzk0laWncsGIwz8mbbcVgIIfzCz86by8Hxvqzx5BR
 qbK8blo8XCtPs2i+LqVGnWdLRWcLB7az7C3KN4ylPsbPouPFyCxfn6homOcHD6A5C8ZGVk1Wyy
 ZBFJQbjsCo8HjxyTMtMXjYDlwSF1IkT6vaIvqsJ6hPCQ6AHZnn0CgbbMk0UmWe/uB4BCMSNSr1
 L3s=
X-SBRS: 2.7
X-MesageID: 23118831
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,390,1589256000"; d="scan'208";a="23118831"
From: Roger Pau Monne <roger.pau@citrix.com>
To: <linux-kernel@vger.kernel.org>
Subject: [PATCH v2 0/4] xen/balloon: fixes for memory hotplug
Date: Fri, 24 Jul 2020 14:42:37 +0200
Message-ID: <20200724124241.48208-1-roger.pau@citrix.com>
X-Mailer: git-send-email 2.27.0
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, Roger Pau Monne <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hello,

The following series contains some fixes in order to split Xen
unpopulated memory handling from the balloon driver when ZONE_DEVICE
is available, so that the physical memory regions used to map foreign
pages are not tied to memory hotplug.

The first two patches are bugfixes that IMO should be backported to
stable branches, the third patch is a revert of a workaround applied
to the balloon driver, and the last patch introduces a
ZONE_DEVICE-based interface to manage the regions used for foreign
mappings.

Thanks, Roger.

Roger Pau Monne (4):
  xen/balloon: fix accounting in alloc_xenballooned_pages error path
  xen/balloon: make the balloon wait interruptible
  Revert "xen/balloon: Fix crash when ballooning on x86 32 bit PAE"
  xen: add helpers to allocate unpopulated memory

 drivers/gpu/drm/xen/xen_drm_front_gem.c |   8 +-
 drivers/xen/Makefile                    |   1 +
 drivers/xen/balloon.c                   |  30 ++--
 drivers/xen/grant-table.c               |   4 +-
 drivers/xen/privcmd.c                   |   4 +-
 drivers/xen/unpopulated-alloc.c         | 222 ++++++++++++++++++++++++
 drivers/xen/xenbus/xenbus_client.c      |   6 +-
 drivers/xen/xlate_mmu.c                 |   4 +-
 include/xen/xen.h                       |   8 +
 9 files changed, 256 insertions(+), 31 deletions(-)
 create mode 100644 drivers/xen/unpopulated-alloc.c

-- 
2.27.0



From xen-devel-bounces@lists.xenproject.org Fri Jul 24 12:52:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jul 2020 12:52:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyxBP-0002PA-SE; Fri, 24 Jul 2020 12:52:15 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=qjWU=BD=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jyxBP-0002P5-0w
 for xen-devel@lists.xenproject.org; Fri, 24 Jul 2020 12:52:15 +0000
X-Inumbo-ID: 7e69d9d2-cdac-11ea-a3c9-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7e69d9d2-cdac-11ea-a3c9-12813bfff9fa;
 Fri, 24 Jul 2020 12:52:12 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=hj0PCAAnQj4XD2u5pQkBEvg4KH5lrluPpd1M9l37OE0=; b=IA/ci25zkmSCm1C1hKyImG1/y
 D2VVikPo4no5cI+2/jILG2m9LAqA3d2QvEfNkcx2wh44Dukj9Y4/B4LBy3gUkqzx/8e3cYeLsJBNw
 hiiErJZGHIP2mvbvP1h2wkkTx4Qm28hGEFuwT7CdRy2jKcX6Nncxl7o5MHl5Bv+jQxizY=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jyxBM-0007xE-94; Fri, 24 Jul 2020 12:52:12 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jyxBM-0007sP-1X; Fri, 24 Jul 2020 12:52:12 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jyxBM-0004Ma-0m; Fri, 24 Jul 2020 12:52:12 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-152167-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 152167: regressions - FAIL
X-Osstest-Failures: libvirt:build-i386-libvirt:libvirt-build:fail:regression
 libvirt:build-arm64-libvirt:libvirt-build:fail:regression
 libvirt:build-amd64-libvirt:libvirt-build:fail:regression
 libvirt:build-armhf-libvirt:libvirt-build:fail:regression
 libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
 libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
X-Osstest-Versions-This: libvirt=bb8ccb050d17e6068a02a6e3c356391ba50269d7
X-Osstest-Versions-That: libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 24 Jul 2020 12:52:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 152167 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/152167/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a

version targeted for testing:
 libvirt              bb8ccb050d17e6068a02a6e3c356391ba50269d7
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z   14 days
Failing since        151818  2020-07-11 04:18:52 Z   13 days   14 attempts
Testing same since   152167  2020-07-24 04:18:42 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrea Bolognani <abologna@redhat.com>
  Bastien Orivel <bastien.orivel@diateam.net>
  Bihong Yu <yubihong@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Jianan Gao <jgao@redhat.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  Ján Tomko <jtomko@redhat.com>
  Laine Stump <laine@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Michal Privoznik <mprivozn@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Ryan Schmidt <git@ryandesign.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Yi Wang <wang.yi59@zte.com.cn>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 2658 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Jul 24 13:11:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jul 2020 13:11:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyxUJ-0004Ip-Lz; Fri, 24 Jul 2020 13:11:47 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=SHXM=BD=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jyxUI-0004G7-Ck
 for xen-devel@lists.xenproject.org; Fri, 24 Jul 2020 13:11:46 +0000
X-Inumbo-ID: 3949eb5a-cdaf-11ea-8810-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3949eb5a-cdaf-11ea-8810-bc764e2007e4;
 Fri, 24 Jul 2020 13:11:45 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id F423FAF24;
 Fri, 24 Jul 2020 13:11:52 +0000 (UTC)
Subject: Re: [PATCH v2 1/4] xen/balloon: fix accounting in
 alloc_xenballooned_pages error path
To: Roger Pau Monne <roger.pau@citrix.com>, linux-kernel@vger.kernel.org
References: <20200724124241.48208-1-roger.pau@citrix.com>
 <20200724124241.48208-2-roger.pau@citrix.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <7f18eca6-9785-fbff-7870-83024173cb69@suse.com>
Date: Fri, 24 Jul 2020 15:11:44 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20200724124241.48208-2-roger.pau@citrix.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>, stable@vger.kernel.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 24.07.20 14:42, Roger Pau Monne wrote:
> target_unpopulated is incremented by nr_pages at the start of the
> function, but the call to free_xenballooned_pages will only subtract
> pgno pages, so the rest must be subtracted before returning or else
> the accounting will be skewed.
> 
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>

Reviewed-by: Juergen Gross <jgross@suse.com>


Juergen


From xen-devel-bounces@lists.xenproject.org Fri Jul 24 13:13:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jul 2020 13:13:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyxWE-0004Tj-26; Fri, 24 Jul 2020 13:13:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=SHXM=BD=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jyxWC-0004Ta-N7
 for xen-devel@lists.xenproject.org; Fri, 24 Jul 2020 13:13:44 +0000
X-Inumbo-ID: 7fd17714-cdaf-11ea-8810-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7fd17714-cdaf-11ea-8810-bc764e2007e4;
 Fri, 24 Jul 2020 13:13:44 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 679B3AC46;
 Fri, 24 Jul 2020 13:13:51 +0000 (UTC)
Subject: Re: [PATCH v2 2/4] xen/balloon: make the balloon wait interruptible
To: Roger Pau Monne <roger.pau@citrix.com>, linux-kernel@vger.kernel.org
References: <20200724124241.48208-1-roger.pau@citrix.com>
 <20200724124241.48208-3-roger.pau@citrix.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <57d403c1-5df4-d4cf-3faa-2aae2ba1faa1@suse.com>
Date: Fri, 24 Jul 2020 15:13:42 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20200724124241.48208-3-roger.pau@citrix.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>, stable@vger.kernel.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 24.07.20 14:42, Roger Pau Monne wrote:
> So it can be killed; otherwise processes can hang indefinitely
> waiting for balloon pages.
> 
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>

Reviewed-by: Juergen Gross <jgross@suse.com>


Juergen



From xen-devel-bounces@lists.xenproject.org Fri Jul 24 13:21:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jul 2020 13:21:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyxd9-0005KQ-Qj; Fri, 24 Jul 2020 13:20:55 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=SHXM=BD=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jyxd8-0005KL-EO
 for xen-devel@lists.xenproject.org; Fri, 24 Jul 2020 13:20:54 +0000
X-Inumbo-ID: 7ec19d8b-cdb0-11ea-a3d4-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7ec19d8b-cdb0-11ea-a3d4-12813bfff9fa;
 Fri, 24 Jul 2020 13:20:53 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 80F51AC66;
 Fri, 24 Jul 2020 13:21:00 +0000 (UTC)
Subject: Re: [PATCH v2 3/4] Revert "xen/balloon: Fix crash when ballooning on
 x86 32 bit PAE"
To: Roger Pau Monne <roger.pau@citrix.com>, linux-kernel@vger.kernel.org
References: <20200724124241.48208-1-roger.pau@citrix.com>
 <20200724124241.48208-4-roger.pau@citrix.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <81215bc0-6594-239a-9a27-0a3f1f43dfd6@suse.com>
Date: Fri, 24 Jul 2020 15:20:51 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20200724124241.48208-4-roger.pau@citrix.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 24.07.20 14:42, Roger Pau Monne wrote:
> This reverts commit dfd74a1edfaba5864276a2859190a8d242d18952.
> 
> This has been fixed by commit dca4436d1cf9e0d237c which added the out
> of bounds check to __add_memory, so that trying to add blocks past
> MAX_PHYSMEM_BITS will fail.
> 
> Note the check in the Xen balloon driver was bogus anyway, as it
> checked the start address of the resource, when it should instead have
> tested the end address to assert that the whole resource falls below
> MAX_PHYSMEM_BITS.
> 
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>

Reviewed-by: Juergen Gross <jgross@suse.com>


Juergen


From xen-devel-bounces@lists.xenproject.org Fri Jul 24 14:20:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jul 2020 14:20:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyyXp-0001tQ-2K; Fri, 24 Jul 2020 14:19:29 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=yKVY=BD=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jyyXn-0001tL-U1
 for xen-devel@lists.xenproject.org; Fri, 24 Jul 2020 14:19:27 +0000
X-Inumbo-ID: add9a8bc-cdb8-11ea-a3f5-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id add9a8bc-cdb8-11ea-a3f5-12813bfff9fa;
 Fri, 24 Jul 2020 14:19:26 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 1C9BAAAC5;
 Fri, 24 Jul 2020 14:19:34 +0000 (UTC)
Subject: Re: [PATCH] x86: guard against port I/O overlapping the RTC/CMOS range
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <38c73e17-30b8-27b4-bc7c-e6ef7817fa1e@suse.com>
 <8b267b5e-8bd0-692e-d5d9-4a2bd21fb261@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <f192793d-d074-990a-190d-67f48ccda87a@suse.com>
Date: Fri, 24 Jul 2020 16:19:27 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <8b267b5e-8bd0-692e-d5d9-4a2bd21fb261@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 24.07.2020 14:11, Andrew Cooper wrote:
> On 17/07/2020 14:10, Jan Beulich wrote:
>> Since we intercept RTC/CMOS port accesses, let's do so consistently in
>> all cases, i.e. also for e.g. a dword access to [006E,0071]. To avoid
>> the risk of unintended impact on Dom0 code actually doing so (despite
>> the belief that none ought to exist), also extend
>> guest_io_{read,write}() to decompose accesses where some ports are
>> allowed to be directly accessed and some aren't.
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>
>> --- a/xen/arch/x86/pv/emul-priv-op.c
>> +++ b/xen/arch/x86/pv/emul-priv-op.c
>> @@ -210,7 +210,7 @@ static bool admin_io_okay(unsigned int p
>>          return false;
>>  
>>      /* We also never permit direct access to the RTC/CMOS registers. */
>> -    if ( ((port & ~1) == RTC_PORT(0)) )
>> +    if ( port <= RTC_PORT(1) && port + bytes > RTC_PORT(0) )
>>          return false;
> 
> This first hunk is fine.
> 
> However, why decompose anything?  Any disallowed port in the range
> terminates the entire access, and doesn't internally shrink the access.

What tells you that adjacent ports (e.g. 006E and 006F to match
the example in the description) are disallowed? The typical
case here is Dom0 (as mentioned in the description), which has
access to most of the ports.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Jul 24 14:33:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jul 2020 14:33:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyyl5-0003ju-Am; Fri, 24 Jul 2020 14:33:11 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=SHXM=BD=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jyyl4-0003jp-B1
 for xen-devel@lists.xenproject.org; Fri, 24 Jul 2020 14:33:10 +0000
X-Inumbo-ID: 977865a2-cdba-11ea-a3f8-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 977865a2-cdba-11ea-a3f8-12813bfff9fa;
 Fri, 24 Jul 2020 14:33:08 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 7B833AF8F;
 Fri, 24 Jul 2020 14:33:15 +0000 (UTC)
Subject: Re: [PATCH v2 4/4] xen: add helpers to allocate unpopulated memory
To: Roger Pau Monne <roger.pau@citrix.com>, linux-kernel@vger.kernel.org
References: <20200724124241.48208-1-roger.pau@citrix.com>
 <20200724124241.48208-5-roger.pau@citrix.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <ca1d1f22-296f-e985-6b2e-613448de95a2@suse.com>
Date: Fri, 24 Jul 2020 16:33:06 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20200724124241.48208-5-roger.pau@citrix.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>,
 David Airlie <airlied@linux.ie>, Yan Yankovskyi <yyankovskyi@gmail.com>,
 David Hildenbrand <david@redhat.com>, dri-devel@lists.freedesktop.org,
 Michal Hocko <mhocko@kernel.org>, linux-mm@kvack.org,
 Daniel Vetter <daniel@ffwll.ch>, xen-devel@lists.xenproject.org,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Dan Carpenter <dan.carpenter@oracle.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 24.07.20 14:42, Roger Pau Monne wrote:
> To be used in order to create foreign mappings. This is based on the
> ZONE_DEVICE facility which is used by persistent memory devices in
> order to create struct pages and kernel virtual mappings for the IOMEM
> areas of such devices. Note that on kernels without support for
> ZONE_DEVICE Xen will fall back to using ballooned pages in order to
> create foreign mappings.
> 
> The newly added helpers use the same parameters as the existing
> {alloc/free}_xenballooned_pages functions, which allows for in-place
> replacement of the callers. Once a memory region has been added to be
> used as scratch mapping space it will no longer be released, and pages
> returned to it are kept in a linked list. This allows keeping a buffer
> of pages and avoids resorting to frequent additions and removals of
> regions.
> 
> If enabled (because ZONE_DEVICE is supported), the new functionality
> untangles the Xen balloon and RAM hotplug from the use of unpopulated
> physical memory ranges to map foreign pages, which is the correct
> thing to do in order to avoid making mappings of foreign pages depend
> on memory hotplug.
> 
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> ---
> I've not added a new memory_type type and just used
> MEMORY_DEVICE_DEVDAX which seems to be what we want for such memory
> regions. I'm unsure whether abusing this type is fine, or if I should
> instead add a specific type, maybe MEMORY_DEVICE_GENERIC? I don't
> think we should be using a specific Xen type at all.
> ---
> Cc: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
> Cc: David Airlie <airlied@linux.ie>
> Cc: Daniel Vetter <daniel@ffwll.ch>
> Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
> Cc: Juergen Gross <jgross@suse.com>
> Cc: Stefano Stabellini <sstabellini@kernel.org>
> Cc: Dan Carpenter <dan.carpenter@oracle.com>
> Cc: Roger Pau Monne <roger.pau@citrix.com>
> Cc: Wei Liu <wl@xen.org>
> Cc: Yan Yankovskyi <yyankovskyi@gmail.com>
> Cc: dri-devel@lists.freedesktop.org
> Cc: xen-devel@lists.xenproject.org
> Cc: linux-mm@kvack.org
> Cc: David Hildenbrand <david@redhat.com>
> Cc: Michal Hocko <mhocko@kernel.org>
> ---
>   drivers/gpu/drm/xen/xen_drm_front_gem.c |   8 +-
>   drivers/xen/Makefile                    |   1 +
>   drivers/xen/balloon.c                   |   4 +-
>   drivers/xen/grant-table.c               |   4 +-
>   drivers/xen/privcmd.c                   |   4 +-
>   drivers/xen/unpopulated-alloc.c         | 222 ++++++++++++++++++++++++
>   drivers/xen/xenbus/xenbus_client.c      |   6 +-
>   drivers/xen/xlate_mmu.c                 |   4 +-
>   include/xen/xen.h                       |   8 +
>   9 files changed, 246 insertions(+), 15 deletions(-)
>   create mode 100644 drivers/xen/unpopulated-alloc.c
> 
> diff --git a/drivers/gpu/drm/xen/xen_drm_front_gem.c b/drivers/gpu/drm/xen/xen_drm_front_gem.c
> index f0b85e094111..9dd06eae767a 100644
> --- a/drivers/gpu/drm/xen/xen_drm_front_gem.c
> +++ b/drivers/gpu/drm/xen/xen_drm_front_gem.c
> @@ -99,8 +99,8 @@ static struct xen_gem_object *gem_create(struct drm_device *dev, size_t size)
>   		 * allocate ballooned pages which will be used to map
>   		 * grant references provided by the backend
>   		 */
> -		ret = alloc_xenballooned_pages(xen_obj->num_pages,
> -					       xen_obj->pages);
> +		ret = xen_alloc_unpopulated_pages(xen_obj->num_pages,
> +					          xen_obj->pages);
>   		if (ret < 0) {
>   			DRM_ERROR("Cannot allocate %zu ballooned pages: %d\n",
>   				  xen_obj->num_pages, ret);
> @@ -152,8 +152,8 @@ void xen_drm_front_gem_free_object_unlocked(struct drm_gem_object *gem_obj)
>   	} else {
>   		if (xen_obj->pages) {
>   			if (xen_obj->be_alloc) {
> -				free_xenballooned_pages(xen_obj->num_pages,
> -							xen_obj->pages);
> +				xen_free_unpopulated_pages(xen_obj->num_pages,
> +							   xen_obj->pages);
>   				gem_free_pages_array(xen_obj);
>   			} else {
>   				drm_gem_put_pages(&xen_obj->base,
> diff --git a/drivers/xen/Makefile b/drivers/xen/Makefile
> index 0d322f3d90cd..788a5d9c8ef0 100644
> --- a/drivers/xen/Makefile
> +++ b/drivers/xen/Makefile
> @@ -42,3 +42,4 @@ xen-gntdev-$(CONFIG_XEN_GNTDEV_DMABUF)	+= gntdev-dmabuf.o
>   xen-gntalloc-y				:= gntalloc.o
>   xen-privcmd-y				:= privcmd.o privcmd-buf.o
>   obj-$(CONFIG_XEN_FRONT_PGDIR_SHBUF)	+= xen-front-pgdir-shbuf.o
> +obj-$(CONFIG_ZONE_DEVICE)		+= unpopulated-alloc.o
> diff --git a/drivers/xen/balloon.c b/drivers/xen/balloon.c
> index b1d8b028bf80..815ef10eb2ff 100644
> --- a/drivers/xen/balloon.c
> +++ b/drivers/xen/balloon.c
> @@ -654,7 +654,7 @@ void free_xenballooned_pages(int nr_pages, struct page **pages)
>   }
>   EXPORT_SYMBOL(free_xenballooned_pages);
>   
> -#ifdef CONFIG_XEN_PV
> +#if defined(CONFIG_XEN_PV) && !defined(CONFIG_ZONE_DEVICE)
>   static void __init balloon_add_region(unsigned long start_pfn,
>   				      unsigned long pages)
>   {
> @@ -708,7 +708,7 @@ static int __init balloon_init(void)
>   	register_sysctl_table(xen_root);
>   #endif
>   
> -#ifdef CONFIG_XEN_PV
> +#if defined(CONFIG_XEN_PV) && !defined(CONFIG_ZONE_DEVICE)
>   	{
>   		int i;
>   
> diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
> index 8d06bf1cc347..523dcdf39cc9 100644
> --- a/drivers/xen/grant-table.c
> +++ b/drivers/xen/grant-table.c
> @@ -801,7 +801,7 @@ int gnttab_alloc_pages(int nr_pages, struct page **pages)
>   {
>   	int ret;
>   
> -	ret = alloc_xenballooned_pages(nr_pages, pages);
> +	ret = xen_alloc_unpopulated_pages(nr_pages, pages);
>   	if (ret < 0)
>   		return ret;
>   
> @@ -836,7 +836,7 @@ EXPORT_SYMBOL_GPL(gnttab_pages_clear_private);
>   void gnttab_free_pages(int nr_pages, struct page **pages)
>   {
>   	gnttab_pages_clear_private(nr_pages, pages);
> -	free_xenballooned_pages(nr_pages, pages);
> +	xen_free_unpopulated_pages(nr_pages, pages);
>   }
>   EXPORT_SYMBOL_GPL(gnttab_free_pages);
>   
> diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
> index a250d118144a..56000ab70974 100644
> --- a/drivers/xen/privcmd.c
> +++ b/drivers/xen/privcmd.c
> @@ -425,7 +425,7 @@ static int alloc_empty_pages(struct vm_area_struct *vma, int numpgs)
>   	if (pages == NULL)
>   		return -ENOMEM;
>   
> -	rc = alloc_xenballooned_pages(numpgs, pages);
> +	rc = xen_alloc_unpopulated_pages(numpgs, pages);
>   	if (rc != 0) {
>   		pr_warn("%s Could not alloc %d pfns rc:%d\n", __func__,
>   			numpgs, rc);
> @@ -900,7 +900,7 @@ static void privcmd_close(struct vm_area_struct *vma)
>   
>   	rc = xen_unmap_domain_gfn_range(vma, numgfns, pages);
>   	if (rc == 0)
> -		free_xenballooned_pages(numpgs, pages);
> +		xen_free_unpopulated_pages(numpgs, pages);
>   	else
>   		pr_crit("unable to unmap MFN range: leaking %d pages. rc=%d\n",
>   			numpgs, rc);
> diff --git a/drivers/xen/unpopulated-alloc.c b/drivers/xen/unpopulated-alloc.c
> new file mode 100644
> index 000000000000..aaa91cefbbf9
> --- /dev/null
> +++ b/drivers/xen/unpopulated-alloc.c
> @@ -0,0 +1,222 @@
> +/*
> + * Helpers to allocate unpopulated memory for foreign mappings
> + *
> + * Copyright (c) 2020, Citrix Systems R&D
> + *
> + * This program is free software; you can redistribute it and/or
> + * modify it under the terms of the GNU General Public License version 2
> + * as published by the Free Software Foundation; or, when distributed
> + * separately from the Linux kernel or incorporated into other
> + * software packages, subject to the following license:
> + *
> + * Permission is hereby granted, free of charge, to any person obtaining a copy
> + * of this source file (the "Software"), to deal in the Software without
> + * restriction, including without limitation the rights to use, copy, modify,
> + * merge, publish, distribute, sublicense, and/or sell copies of the Software,
> + * and to permit persons to whom the Software is furnished to do so, subject to
> + * the following conditions:
> + *
> + * The above copyright notice and this permission notice shall be included in
> + * all copies or substantial portions of the Software.
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
> + * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
> + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
> + * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
> + * IN THE SOFTWARE.
> + */

Please use:

// SPDX-License-Identifier: GPL-2.0-only

instead of the long GPL sermon.

> +
> +#include <linux/errno.h>
> +#include <linux/gfp.h>
> +#include <linux/kernel.h>
> +#include <linux/mm.h>
> +#include <linux/memremap.h>
> +#include <linux/slab.h>
> +
> +#include <asm/page.h>
> +
> +#include <xen/page.h>
> +#include <xen/xen.h>
> +
> +static DEFINE_MUTEX(lock);
> +static LIST_HEAD(list);
> +static unsigned int count;
> +
> +static int fill(unsigned int nr_pages)
> +{
> +	struct dev_pagemap *pgmap;
> +	void *vaddr;
> +	unsigned int i, alloc_pages = round_up(nr_pages, PAGES_PER_SECTION);
> +	int nid, ret;
> +
> +	pgmap = kzalloc(sizeof(*pgmap), GFP_KERNEL);
> +	if (!pgmap)
> +		return -ENOMEM;
> +
> +	pgmap->type = MEMORY_DEVICE_DEVDAX;
> +	pgmap->res.name = "XEN SCRATCH";
> +	pgmap->res.flags = IORESOURCE_MEM | IORESOURCE_BUSY;
> +
> +	ret = allocate_resource(&iomem_resource, &pgmap->res,
> +				alloc_pages * PAGE_SIZE, 0, -1,
> +				PAGES_PER_SECTION * PAGE_SIZE, NULL, NULL);
> +	if (ret < 0) {
> +		pr_err("Cannot allocate new IOMEM resource\n");
> +		kfree(pgmap);
> +		return ret;
> +	}
> +
> +	nid = memory_add_physaddr_to_nid(pgmap->res.start);
> +
> +#ifdef CONFIG_XEN_HAVE_PVMMU
> +	/*
> +	 * We don't support PV MMU when Linux and Xen is using
> +	 * different page granularity.
> +	 */
> +	BUILD_BUG_ON(XEN_PAGE_SIZE != PAGE_SIZE);

Drop that, please. PV MMU is x86 only and we surely don't want to add
it to another architecture. On x86 this will never trigger.

> +
> +        /*
> +         * memremap will build page tables for the new memory so
> +         * the p2m must contain invalid entries so the correct
> +         * non-present PTEs will be written.
> +         *
> +         * If a failure occurs, the original (identity) p2m entries
> +         * are not restored since this region is now known not to
> +         * conflict with any devices.
> +         */
> +	if (!xen_feature(XENFEAT_auto_translated_physmap)) {
> +		xen_pfn_t pfn = PFN_DOWN(pgmap->res.start);
> +
> +		for (i = 0; i < alloc_pages; i++) {
> +			if (!set_phys_to_machine(pfn + i, INVALID_P2M_ENTRY)) {
> +				pr_warn("set_phys_to_machine() failed, no memory added\n");
> +				release_resource(&pgmap->res);
> +				kfree(pgmap);
> +				return -ENOMEM;
> +			}
> +                }
> +	}
> +#endif
> +
> +	vaddr = memremap_pages(pgmap, nid);
> +	if (IS_ERR(vaddr)) {
> +		pr_err("Cannot remap memory range\n");
> +		release_resource(&pgmap->res);
> +		kfree(pgmap);
> +		return PTR_ERR(vaddr);
> +	}
> +
> +	for (i = 0; i < alloc_pages; i++) {
> +		struct page *pg = virt_to_page(vaddr + PAGE_SIZE * i);
> +
> +		BUG_ON(!virt_addr_valid(vaddr + PAGE_SIZE * i));
> +		list_add(&pg->lru, &list);
> +		count++;
> +	}
> +
> +	return 0;
> +}
> +
> +/**
> + * xen_alloc_unpopulated_pages - alloc unpopulated pages
> + * @nr_pages: Number of pages
> + * @pages: pages returned
> + * @return 0 on success, error otherwise
> + */
> +int xen_alloc_unpopulated_pages(unsigned int nr_pages, struct page **pages)
> +{
> +	unsigned int i;
> +	int ret = 0;
> +
> +	mutex_lock(&lock);
> +	if (count < nr_pages) {
> +		ret = fill(nr_pages);

I'd rather use: ret = fill(nr_pages - count);

> +		if (ret)
> +			goto out;
> +	}
> +
> +	for (i = 0; i < nr_pages; i++) {
> +		struct page *pg = list_first_entry_or_null(&list, struct page,
> +							   lru);
> +
> +		BUG_ON(!pg);
> +		list_del(&pg->lru);
> +		count--;
> +		pages[i] = pg;
> +
> +#ifdef CONFIG_XEN_HAVE_PVMMU
> +		/*
> +		 * We don't support PV MMU when Linux and Xen is using
> +		 * different page granularity.
> +		 */
> +		BUILD_BUG_ON(XEN_PAGE_SIZE != PAGE_SIZE);

Having two identical BUILD_BUG_ON() in the same source is really not
wanted.

> +
> +		if (!xen_feature(XENFEAT_auto_translated_physmap)) {
> +			ret = xen_alloc_p2m_entry(page_to_pfn(pg));
> +			if (ret < 0) {
> +				unsigned int j;
> +
> +				for (j = 0; j <= i; j++) {
> +					list_add(&pages[j]->lru, &list);
> +					count++;
> +				}
> +				goto out;
> +			}
> +		}
> +#endif
> +	}
> +
> +out:
> +	mutex_unlock(&lock);
> +	return ret;
> +}
> +EXPORT_SYMBOL(xen_alloc_unpopulated_pages);
> +
> +/**
> + * xen_free_unpopulated_pages - return unpopulated pages
> + * @nr_pages: Number of pages
> + * @pages: pages to return
> + */
> +void xen_free_unpopulated_pages(unsigned int nr_pages, struct page **pages)
> +{
> +	unsigned int i;
> +
> +	mutex_lock(&lock);
> +	for (i = 0; i < nr_pages; i++) {
> +		list_add(&pages[i]->lru, &list);
> +		count++;
> +	}
> +	mutex_unlock(&lock);
> +}
> +EXPORT_SYMBOL(xen_free_unpopulated_pages);
> +
> +#ifdef CONFIG_XEN_PV
> +static int __init init(void)
> +{
> +	unsigned int i;
> +
> +	if (!xen_domain())
> +		return -ENODEV;
> +
> +	/*
> +	 * Initialize with pages from the extra memory regions (see
> +	 * arch/x86/xen/setup.c).
> +	 */
> +	for (i = 0; i < XEN_EXTRA_MEM_MAX_REGIONS; i++) {
> +		unsigned int j;
> +
> +		for (j = 0; j < xen_extra_mem[i].n_pfns; j++) {
> +			struct page *pg =
> +				pfn_to_page(xen_extra_mem[i].start_pfn + j);
> +
> +			list_add(&pg->lru, &list);
> +			count++;
> +		}
> +	}
> +
> +	return 0;
> +}
> +subsys_initcall(init);
> +#endif
> diff --git a/drivers/xen/xenbus/xenbus_client.c b/drivers/xen/xenbus/xenbus_client.c
> index 786fbb7d8be0..70b6c4780fbd 100644
> --- a/drivers/xen/xenbus/xenbus_client.c
> +++ b/drivers/xen/xenbus/xenbus_client.c
> @@ -615,7 +615,7 @@ static int xenbus_map_ring_hvm(struct xenbus_device *dev,
>   	bool leaked = false;
>   	unsigned int nr_pages = XENBUS_PAGES(nr_grefs);
>   
> -	err = alloc_xenballooned_pages(nr_pages, node->hvm.pages);
> +	err = xen_alloc_unpopulated_pages(nr_pages, node->hvm.pages);
>   	if (err)
>   		goto out_err;
>   
> @@ -656,7 +656,7 @@ static int xenbus_map_ring_hvm(struct xenbus_device *dev,
>   			 addr, nr_pages);
>    out_free_ballooned_pages:
>   	if (!leaked)
> -		free_xenballooned_pages(nr_pages, node->hvm.pages);
> +		xen_free_unpopulated_pages(nr_pages, node->hvm.pages);
>    out_err:
>   	return err;
>   }
> @@ -852,7 +852,7 @@ static int xenbus_unmap_ring_hvm(struct xenbus_device *dev, void *vaddr)
>   			       info.addrs);
>   	if (!rv) {
>   		vunmap(vaddr);
> -		free_xenballooned_pages(nr_pages, node->hvm.pages);
> +		xen_free_unpopulated_pages(nr_pages, node->hvm.pages);
>   	}
>   	else
>   		WARN(1, "Leaking %p, size %u page(s)\n", vaddr, nr_pages);
> diff --git a/drivers/xen/xlate_mmu.c b/drivers/xen/xlate_mmu.c
> index 7b1077f0abcb..34742c6e189e 100644
> --- a/drivers/xen/xlate_mmu.c
> +++ b/drivers/xen/xlate_mmu.c
> @@ -232,7 +232,7 @@ int __init xen_xlate_map_ballooned_pages(xen_pfn_t **gfns, void **virt,
>   		kfree(pages);
>   		return -ENOMEM;
>   	}
> -	rc = alloc_xenballooned_pages(nr_pages, pages);
> +	rc = xen_alloc_unpopulated_pages(nr_pages, pages);
>   	if (rc) {
>   		pr_warn("%s Couldn't balloon alloc %ld pages rc:%d\n", __func__,
>   			nr_pages, rc);
> @@ -249,7 +249,7 @@ int __init xen_xlate_map_ballooned_pages(xen_pfn_t **gfns, void **virt,
>   	if (!vaddr) {
>   		pr_warn("%s Couldn't map %ld pages rc:%d\n", __func__,
>   			nr_pages, rc);
> -		free_xenballooned_pages(nr_pages, pages);
> +		xen_free_unpopulated_pages(nr_pages, pages);
>   		kfree(pages);
>   		kfree(pfns);
>   		return -ENOMEM;
> diff --git a/include/xen/xen.h b/include/xen/xen.h
> index 19a72f591e2b..aa33bc0d933c 100644
> --- a/include/xen/xen.h
> +++ b/include/xen/xen.h
> @@ -52,4 +52,12 @@ bool xen_biovec_phys_mergeable(const struct bio_vec *vec1,
>   extern u64 xen_saved_max_mem_size;
>   #endif
>   
> +#ifdef CONFIG_ZONE_DEVICE
> +int xen_alloc_unpopulated_pages(unsigned int nr_pages, struct page **pages);
> +void xen_free_unpopulated_pages(unsigned int nr_pages, struct page **pages);
> +#else
> +#define xen_alloc_unpopulated_pages alloc_xenballooned_pages
> +#define xen_free_unpopulated_pages free_xenballooned_pages
> +#endif
> +
>   #endif	/* _XEN_XEN_H */
> 

Juergen


From xen-devel-bounces@lists.xenproject.org Fri Jul 24 14:34:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jul 2020 14:34:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyymn-0003qS-RU; Fri, 24 Jul 2020 14:34:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=M72E=BD=redhat.com=david@srs-us1.protection.inumbo.net>)
 id 1jyymm-0003qN-EH
 for xen-devel@lists.xenproject.org; Fri, 24 Jul 2020 14:34:56 +0000
X-Inumbo-ID: d66fce66-cdba-11ea-882c-bc764e2007e4
Received: from us-smtp-1.mimecast.com (unknown [205.139.110.61])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id d66fce66-cdba-11ea-882c-bc764e2007e4;
 Fri, 24 Jul 2020 14:34:54 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
 s=mimecast20190719; t=1595601294;
 h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
 content-transfer-encoding:content-transfer-encoding:
 in-reply-to:in-reply-to:references:references:autocrypt:autocrypt;
 bh=+xEG2oRWVrcWwt8QM0VD8vNcd0fbr/DhAE0GizAuzY0=;
 b=YfXe75592delgNEi44dmKvM0d0PtIWCQh8ZZLBb3LppoSPsrkkpua/WRzREAwMsAOHlY8h
 pcPMuRHK4yQys0YinqrlloWZLemBI1MHRLxXHd3V4tpbd3GYVev85BWKEIT3hAy6QswaBX
 wBXFZUNWHVLpr+WXIRHNVA1gSLA6Gqo=
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-124-WvzseDkuOl2JB5UQZyg1rA-1; Fri, 24 Jul 2020 10:34:50 -0400
X-MC-Unique: WvzseDkuOl2JB5UQZyg1rA-1
Received: from smtp.corp.redhat.com (int-mx07.intmail.prod.int.phx2.redhat.com
 [10.5.11.22])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 2E5581083E8E;
 Fri, 24 Jul 2020 14:34:46 +0000 (UTC)
Received: from [10.36.113.94] (ovpn-113-94.ams2.redhat.com [10.36.113.94])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 228CF10027A6;
 Fri, 24 Jul 2020 14:34:41 +0000 (UTC)
Subject: Re: [PATCH v2 4/4] xen: add helpers to allocate unpopulated memory
To: Roger Pau Monne <roger.pau@citrix.com>, linux-kernel@vger.kernel.org
References: <20200724124241.48208-1-roger.pau@citrix.com>
 <20200724124241.48208-5-roger.pau@citrix.com>
From: David Hildenbrand <david@redhat.com>
Autocrypt: addr=david@redhat.com; prefer-encrypt=mutual; keydata=
 mQINBFXLn5EBEAC+zYvAFJxCBY9Tr1xZgcESmxVNI/0ffzE/ZQOiHJl6mGkmA1R7/uUpiCjJ
 dBrn+lhhOYjjNefFQou6478faXE6o2AhmebqT4KiQoUQFV4R7y1KMEKoSyy8hQaK1umALTdL
 QZLQMzNE74ap+GDK0wnacPQFpcG1AE9RMq3aeErY5tujekBS32jfC/7AnH7I0v1v1TbbK3Gp
 XNeiN4QroO+5qaSr0ID2sz5jtBLRb15RMre27E1ImpaIv2Jw8NJgW0k/D1RyKCwaTsgRdwuK
 Kx/Y91XuSBdz0uOyU/S8kM1+ag0wvsGlpBVxRR/xw/E8M7TEwuCZQArqqTCmkG6HGcXFT0V9
 PXFNNgV5jXMQRwU0O/ztJIQqsE5LsUomE//bLwzj9IVsaQpKDqW6TAPjcdBDPLHvriq7kGjt
 WhVhdl0qEYB8lkBEU7V2Yb+SYhmhpDrti9Fq1EsmhiHSkxJcGREoMK/63r9WLZYI3+4W2rAc
 UucZa4OT27U5ZISjNg3Ev0rxU5UH2/pT4wJCfxwocmqaRr6UYmrtZmND89X0KigoFD/XSeVv
 jwBRNjPAubK9/k5NoRrYqztM9W6sJqrH8+UWZ1Idd/DdmogJh0gNC0+N42Za9yBRURfIdKSb
 B3JfpUqcWwE7vUaYrHG1nw54pLUoPG6sAA7Mehl3nd4pZUALHwARAQABtCREYXZpZCBIaWxk
 ZW5icmFuZCA8ZGF2aWRAcmVkaGF0LmNvbT6JAlgEEwEIAEICGwMGCwkIBwMCBhUIAgkKCwQW
 AgMBAh4BAheAAhkBFiEEG9nKrXNcTDpGDfzKTd4Q9wD/g1oFAl8Ox4kFCRKpKXgACgkQTd4Q
 9wD/g1oHcA//a6Tj7SBNjFNM1iNhWUo1lxAja0lpSodSnB2g4FCZ4R61SBR4l/psBL73xktp
 rDHrx4aSpwkRP6Epu6mLvhlfjmkRG4OynJ5HG1gfv7RJJfnUdUM1z5kdS8JBrOhMJS2c/gPf
 wv1TGRq2XdMPnfY2o0CxRqpcLkx4vBODvJGl2mQyJF/gPepdDfcT8/PY9BJ7FL6Hrq1gnAo4
 3Iv9qV0JiT2wmZciNyYQhmA1V6dyTRiQ4YAc31zOo2IM+xisPzeSHgw3ONY/XhYvfZ9r7W1l
 pNQdc2G+o4Di9NPFHQQhDw3YTRR1opJaTlRDzxYxzU6ZnUUBghxt9cwUWTpfCktkMZiPSDGd
 KgQBjnweV2jw9UOTxjb4LXqDjmSNkjDdQUOU69jGMUXgihvo4zhYcMX8F5gWdRtMR7DzW/YE
 BgVcyxNkMIXoY1aYj6npHYiNQesQlqjU6azjbH70/SXKM5tNRplgW8TNprMDuntdvV9wNkFs
 9TyM02V5aWxFfI42+aivc4KEw69SE9KXwC7FSf5wXzuTot97N9Phj/Z3+jx443jo2NR34XgF
 89cct7wJMjOF7bBefo0fPPZQuIma0Zym71cP61OP/i11ahNye6HGKfxGCOcs5wW9kRQEk8P9
 M/k2wt3mt/fCQnuP/mWutNPt95w9wSsUyATLmtNrwccz63W5Ag0EVcufkQEQAOfX3n0g0fZz
 Bgm/S2zF/kxQKCEKP8ID+Vz8sy2GpDvveBq4H2Y34XWsT1zLJdvqPI4af4ZSMxuerWjXbVWb
 T6d4odQIG0fKx4F8NccDqbgHeZRNajXeeJ3R7gAzvWvQNLz4piHrO/B4tf8svmRBL0ZB5P5A
 2uhdwLU3NZuK22zpNn4is87BPWF8HhY0L5fafgDMOqnf4guJVJPYNPhUFzXUbPqOKOkL8ojk
 CXxkOFHAbjstSK5Ca3fKquY3rdX3DNo+EL7FvAiw1mUtS+5GeYE+RMnDCsVFm/C7kY8c2d0G
 NWkB9pJM5+mnIoFNxy7YBcldYATVeOHoY4LyaUWNnAvFYWp08dHWfZo9WCiJMuTfgtH9tc75
 7QanMVdPt6fDK8UUXIBLQ2TWr/sQKE9xtFuEmoQGlE1l6bGaDnnMLcYu+Asp3kDT0w4zYGsx
 5r6XQVRH4+5N6eHZiaeYtFOujp5n+pjBaQK7wUUjDilPQ5QMzIuCL4YjVoylWiBNknvQWBXS
 lQCWmavOT9sttGQXdPCC5ynI+1ymZC1ORZKANLnRAb0NH/UCzcsstw2TAkFnMEbo9Zu9w7Kv
 AxBQXWeXhJI9XQssfrf4Gusdqx8nPEpfOqCtbbwJMATbHyqLt7/oz/5deGuwxgb65pWIzufa
 N7eop7uh+6bezi+rugUI+w6DABEBAAGJAjwEGAEIACYCGwwWIQQb2cqtc1xMOkYN/MpN3hD3
 AP+DWgUCXw7HsgUJEqkpoQAKCRBN3hD3AP+DWrrpD/4qS3dyVRxDcDHIlmguXjC1Q5tZTwNB
 boaBTPHSy/Nksu0eY7x6HfQJ3xajVH32Ms6t1trDQmPx2iP5+7iDsb7OKAb5eOS8h+BEBDeq
 3ecsQDv0fFJOA9ag5O3LLNk+3x3q7e0uo06XMaY7UHS341ozXUUI7wC7iKfoUTv03iO9El5f
 XpNMx/YrIMduZ2+nd9Di7o5+KIwlb2mAB9sTNHdMrXesX8eBL6T9b+MZJk+mZuPxKNVfEQMQ
 a5SxUEADIPQTPNvBewdeI80yeOCrN+Zzwy/Mrx9EPeu59Y5vSJOx/z6OUImD/GhX7Xvkt3kq
 Er5KTrJz3++B6SH9pum9PuoE/k+nntJkNMmQpR4MCBaV/J9gIOPGodDKnjdng+mXliF3Ptu6
 3oxc2RCyGzTlxyMwuc2U5Q7KtUNTdDe8T0uE+9b8BLMVQDDfJjqY0VVqSUwImzTDLX9S4g/8
 kC4HRcclk8hpyhY2jKGluZO0awwTIMgVEzmTyBphDg/Gx7dZU1Xf8HFuE+UZ5UDHDTnwgv7E
 th6RC9+WrhDNspZ9fJjKWRbveQgUFCpe1sa77LAw+XFrKmBHXp9ZVIe90RMe2tRL06BGiRZr
 jPrnvUsUUsjRoRNJjKKA/REq+sAnhkNPPZ/NNMjaZ5b8Tovi8C0tmxiCHaQYqj7G2rgnT0kt
 WNyWQQ==
Organization: Red Hat GmbH
Message-ID: <1778c97f-3a69-8280-141c-879814dd213f@redhat.com>
Date: Fri, 24 Jul 2020 16:34:41 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.9.0
MIME-Version: 1.0
In-Reply-To: <20200724124241.48208-5-roger.pau@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 2.84 on 10.5.11.22
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>,
 David Airlie <airlied@linux.ie>, Yan Yankovskyi <yyankovskyi@gmail.com>,
 dri-devel@lists.freedesktop.org, Michal Hocko <mhocko@kernel.org>,
 linux-mm@kvack.org, Daniel Vetter <daniel@ffwll.ch>,
 xen-devel@lists.xenproject.org, Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Dan Williams <dan.j.williams@intel.com>,
 Dan Carpenter <dan.carpenter@oracle.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

CCing Dan

On 24.07.20 14:42, Roger Pau Monne wrote:
> To be used in order to create foreign mappings. This is based on the
> ZONE_DEVICE facility, which is used by persistent memory devices in
> order to create struct pages and kernel virtual mappings for the IOMEM
> areas of such devices. Note that on kernels without support for
> ZONE_DEVICE Xen will fall back to using ballooned pages in order to
> create foreign mappings.
> 
> The newly added helpers use the same parameters as the existing
> {alloc/free}_xenballooned_pages functions, which allows for in-place
> replacement of the callers. Once a memory region has been added to be
> used as scratch mapping space it will no longer be released, and pages
> returned are kept in a linked list. This allows keeping a buffer of
> pages and avoids frequent additions and removals of regions.
> 
> If enabled (because ZONE_DEVICE is supported), the new functionality
> untangles the Xen balloon and RAM hotplug from the use of unpopulated
> physical memory ranges to map foreign pages, which is the correct
> thing to do in order to avoid making mappings of foreign pages depend
> on memory hotplug.
> 
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> ---
> I've not added a new memory_type type and just used
> MEMORY_DEVICE_DEVDAX which seems to be what we want for such memory
> regions. I'm unsure whether abusing this type is fine, or if I should
> instead add a specific type, maybe MEMORY_DEVICE_GENERIC? I don't
> think we should be using a specific Xen type at all.
> ---
> Cc: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
> Cc: David Airlie <airlied@linux.ie>
> Cc: Daniel Vetter <daniel@ffwll.ch>
> Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
> Cc: Juergen Gross <jgross@suse.com>
> Cc: Stefano Stabellini <sstabellini@kernel.org>
> Cc: Dan Carpenter <dan.carpenter@oracle.com>
> Cc: Roger Pau Monne <roger.pau@citrix.com>
> Cc: Wei Liu <wl@xen.org>
> Cc: Yan Yankovskyi <yyankovskyi@gmail.com>
> Cc: dri-devel@lists.freedesktop.org
> Cc: xen-devel@lists.xenproject.org
> Cc: linux-mm@kvack.org
> Cc: David Hildenbrand <david@redhat.com>
> Cc: Michal Hocko <mhocko@kernel.org>
> ---
>  drivers/gpu/drm/xen/xen_drm_front_gem.c |   8 +-
>  drivers/xen/Makefile                    |   1 +
>  drivers/xen/balloon.c                   |   4 +-
>  drivers/xen/grant-table.c               |   4 +-
>  drivers/xen/privcmd.c                   |   4 +-
>  drivers/xen/unpopulated-alloc.c         | 222 ++++++++++++++++++++++++
>  drivers/xen/xenbus/xenbus_client.c      |   6 +-
>  drivers/xen/xlate_mmu.c                 |   4 +-
>  include/xen/xen.h                       |   8 +
>  9 files changed, 246 insertions(+), 15 deletions(-)
>  create mode 100644 drivers/xen/unpopulated-alloc.c
> 
> diff --git a/drivers/gpu/drm/xen/xen_drm_front_gem.c b/drivers/gpu/drm/xen/xen_drm_front_gem.c
> index f0b85e094111..9dd06eae767a 100644
> --- a/drivers/gpu/drm/xen/xen_drm_front_gem.c
> +++ b/drivers/gpu/drm/xen/xen_drm_front_gem.c
> @@ -99,8 +99,8 @@ static struct xen_gem_object *gem_create(struct drm_device *dev, size_t size)
>  		 * allocate ballooned pages which will be used to map
>  		 * grant references provided by the backend
>  		 */
> -		ret = alloc_xenballooned_pages(xen_obj->num_pages,
> -					       xen_obj->pages);
> +		ret = xen_alloc_unpopulated_pages(xen_obj->num_pages,
> +					          xen_obj->pages);
>  		if (ret < 0) {
>  			DRM_ERROR("Cannot allocate %zu ballooned pages: %d\n",
>  				  xen_obj->num_pages, ret);
> @@ -152,8 +152,8 @@ void xen_drm_front_gem_free_object_unlocked(struct drm_gem_object *gem_obj)
>  	} else {
>  		if (xen_obj->pages) {
>  			if (xen_obj->be_alloc) {
> -				free_xenballooned_pages(xen_obj->num_pages,
> -							xen_obj->pages);
> +				xen_free_unpopulated_pages(xen_obj->num_pages,
> +							   xen_obj->pages);
>  				gem_free_pages_array(xen_obj);
>  			} else {
>  				drm_gem_put_pages(&xen_obj->base,
> diff --git a/drivers/xen/Makefile b/drivers/xen/Makefile
> index 0d322f3d90cd..788a5d9c8ef0 100644
> --- a/drivers/xen/Makefile
> +++ b/drivers/xen/Makefile
> @@ -42,3 +42,4 @@ xen-gntdev-$(CONFIG_XEN_GNTDEV_DMABUF)	+= gntdev-dmabuf.o
>  xen-gntalloc-y				:= gntalloc.o
>  xen-privcmd-y				:= privcmd.o privcmd-buf.o
>  obj-$(CONFIG_XEN_FRONT_PGDIR_SHBUF)	+= xen-front-pgdir-shbuf.o
> +obj-$(CONFIG_ZONE_DEVICE)		+= unpopulated-alloc.o
> diff --git a/drivers/xen/balloon.c b/drivers/xen/balloon.c
> index b1d8b028bf80..815ef10eb2ff 100644
> --- a/drivers/xen/balloon.c
> +++ b/drivers/xen/balloon.c
> @@ -654,7 +654,7 @@ void free_xenballooned_pages(int nr_pages, struct page **pages)
>  }
>  EXPORT_SYMBOL(free_xenballooned_pages);
>  
> -#ifdef CONFIG_XEN_PV
> +#if defined(CONFIG_XEN_PV) && !defined(CONFIG_ZONE_DEVICE)
>  static void __init balloon_add_region(unsigned long start_pfn,
>  				      unsigned long pages)
>  {
> @@ -708,7 +708,7 @@ static int __init balloon_init(void)
>  	register_sysctl_table(xen_root);
>  #endif
>  
> -#ifdef CONFIG_XEN_PV
> +#if defined(CONFIG_XEN_PV) && !defined(CONFIG_ZONE_DEVICE)
>  	{
>  		int i;
>  
> diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
> index 8d06bf1cc347..523dcdf39cc9 100644
> --- a/drivers/xen/grant-table.c
> +++ b/drivers/xen/grant-table.c
> @@ -801,7 +801,7 @@ int gnttab_alloc_pages(int nr_pages, struct page **pages)
>  {
>  	int ret;
>  
> -	ret = alloc_xenballooned_pages(nr_pages, pages);
> +	ret = xen_alloc_unpopulated_pages(nr_pages, pages);
>  	if (ret < 0)
>  		return ret;
>  
> @@ -836,7 +836,7 @@ EXPORT_SYMBOL_GPL(gnttab_pages_clear_private);
>  void gnttab_free_pages(int nr_pages, struct page **pages)
>  {
>  	gnttab_pages_clear_private(nr_pages, pages);
> -	free_xenballooned_pages(nr_pages, pages);
> +	xen_free_unpopulated_pages(nr_pages, pages);
>  }
>  EXPORT_SYMBOL_GPL(gnttab_free_pages);
>  
> diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
> index a250d118144a..56000ab70974 100644
> --- a/drivers/xen/privcmd.c
> +++ b/drivers/xen/privcmd.c
> @@ -425,7 +425,7 @@ static int alloc_empty_pages(struct vm_area_struct *vma, int numpgs)
>  	if (pages == NULL)
>  		return -ENOMEM;
>  
> -	rc = alloc_xenballooned_pages(numpgs, pages);
> +	rc = xen_alloc_unpopulated_pages(numpgs, pages);
>  	if (rc != 0) {
>  		pr_warn("%s Could not alloc %d pfns rc:%d\n", __func__,
>  			numpgs, rc);
> @@ -900,7 +900,7 @@ static void privcmd_close(struct vm_area_struct *vma)
>  
>  	rc = xen_unmap_domain_gfn_range(vma, numgfns, pages);
>  	if (rc == 0)
> -		free_xenballooned_pages(numpgs, pages);
> +		xen_free_unpopulated_pages(numpgs, pages);
>  	else
>  		pr_crit("unable to unmap MFN range: leaking %d pages. rc=%d\n",
>  			numpgs, rc);
> diff --git a/drivers/xen/unpopulated-alloc.c b/drivers/xen/unpopulated-alloc.c
> new file mode 100644
> index 000000000000..aaa91cefbbf9
> --- /dev/null
> +++ b/drivers/xen/unpopulated-alloc.c
> @@ -0,0 +1,222 @@
> +/*
> + * Helpers to allocate unpopulated memory for foreign mappings
> + *
> + * Copyright (c) 2020, Citrix Systems R&D
> + *
> + * This program is free software; you can redistribute it and/or
> + * modify it under the terms of the GNU General Public License version 2
> + * as published by the Free Software Foundation; or, when distributed
> + * separately from the Linux kernel or incorporated into other
> + * software packages, subject to the following license:
> + *
> + * Permission is hereby granted, free of charge, to any person obtaining a copy
> + * of this source file (the "Software"), to deal in the Software without
> + * restriction, including without limitation the rights to use, copy, modify,
> + * merge, publish, distribute, sublicense, and/or sell copies of the Software,
> + * and to permit persons to whom the Software is furnished to do so, subject to
> + * the following conditions:
> + *
> + * The above copyright notice and this permission notice shall be included in
> + * all copies or substantial portions of the Software.
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
> + * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
> + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
> + * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
> + * IN THE SOFTWARE.
> + */
> +
> +#include <linux/errno.h>
> +#include <linux/gfp.h>
> +#include <linux/kernel.h>
> +#include <linux/mm.h>
> +#include <linux/memremap.h>
> +#include <linux/slab.h>
> +
> +#include <asm/page.h>
> +
> +#include <xen/page.h>
> +#include <xen/xen.h>
> +
> +static DEFINE_MUTEX(lock);
> +static LIST_HEAD(list);
> +static unsigned int count;
> +
> +static int fill(unsigned int nr_pages)
> +{
> +	struct dev_pagemap *pgmap;
> +	void *vaddr;
> +	unsigned int i, alloc_pages = round_up(nr_pages, PAGES_PER_SECTION);
> +	int nid, ret;
> +
> +	pgmap = kzalloc(sizeof(*pgmap), GFP_KERNEL);
> +	if (!pgmap)
> +		return -ENOMEM;
> +
> +	pgmap->type = MEMORY_DEVICE_DEVDAX;
> +	pgmap->res.name = "XEN SCRATCH";
> +	pgmap->res.flags = IORESOURCE_MEM | IORESOURCE_BUSY;
> +
> +	ret = allocate_resource(&iomem_resource, &pgmap->res,
> +				alloc_pages * PAGE_SIZE, 0, -1,
> +				PAGES_PER_SECTION * PAGE_SIZE, NULL, NULL);
> +	if (ret < 0) {
> +		pr_err("Cannot allocate new IOMEM resource\n");
> +		kfree(pgmap);
> +		return ret;
> +	}
> +
> +	nid = memory_add_physaddr_to_nid(pgmap->res.start);
> +
> +#ifdef CONFIG_XEN_HAVE_PVMMU
> +	/*
> +	 * We don't support PV MMU when Linux and Xen are using
> +	 * different page granularity.
> +	 */
> +	BUILD_BUG_ON(XEN_PAGE_SIZE != PAGE_SIZE);
> +
> +	/*
> +	 * memremap will build page tables for the new memory, so the
> +	 * p2m must contain invalid entries in order for the correct
> +	 * non-present PTEs to be written.
> +	 *
> +	 * If a failure occurs, the original (identity) p2m entries
> +	 * are not restored, since this region is now known not to
> +	 * conflict with any devices.
> +	 */
> +	if (!xen_feature(XENFEAT_auto_translated_physmap)) {
> +		xen_pfn_t pfn = PFN_DOWN(pgmap->res.start);
> +
> +		for (i = 0; i < alloc_pages; i++) {
> +			if (!set_phys_to_machine(pfn + i, INVALID_P2M_ENTRY)) {
> +				pr_warn("set_phys_to_machine() failed, no memory added\n");
> +				release_resource(&pgmap->res);
> +				kfree(pgmap);
> +				return -ENOMEM;
> +			}
> +		}
> +	}
> +#endif
> +
> +	vaddr = memremap_pages(pgmap, nid);
> +	if (IS_ERR(vaddr)) {
> +		pr_err("Cannot remap memory range\n");
> +		release_resource(&pgmap->res);
> +		kfree(pgmap);
> +		return PTR_ERR(vaddr);
> +	}
> +
> +	for (i = 0; i < alloc_pages; i++) {
> +		struct page *pg = virt_to_page(vaddr + PAGE_SIZE * i);
> +
> +		BUG_ON(!virt_addr_valid(vaddr + PAGE_SIZE * i));
> +		list_add(&pg->lru, &list);
> +		count++;
> +	}
> +
> +	return 0;
> +}
> +
> +/**
> + * xen_alloc_unpopulated_pages - alloc unpopulated pages
> + * @nr_pages: Number of pages
> + * @pages: pages returned
> + * @return 0 on success, error otherwise
> + */
> +int xen_alloc_unpopulated_pages(unsigned int nr_pages, struct page **pages)
> +{
> +	unsigned int i;
> +	int ret = 0;
> +
> +	mutex_lock(&lock);
> +	if (count < nr_pages) {
> +		ret = fill(nr_pages);
> +		if (ret)
> +			goto out;
> +	}
> +
> +	for (i = 0; i < nr_pages; i++) {
> +		struct page *pg = list_first_entry_or_null(&list, struct page,
> +							   lru);
> +
> +		BUG_ON(!pg);
> +		list_del(&pg->lru);
> +		count--;
> +		pages[i] = pg;
> +
> +#ifdef CONFIG_XEN_HAVE_PVMMU
> +		/*
> +		 * We don't support PV MMU when Linux and Xen are using
> +		 * different page granularity.
> +		 */
> +		BUILD_BUG_ON(XEN_PAGE_SIZE != PAGE_SIZE);
> +
> +		if (!xen_feature(XENFEAT_auto_translated_physmap)) {
> +			ret = xen_alloc_p2m_entry(page_to_pfn(pg));
> +			if (ret < 0) {
> +				unsigned int j;
> +
> +				for (j = 0; j <= i; j++) {
> +					list_add(&pages[j]->lru, &list);
> +					count++;
> +				}
> +				goto out;
> +			}
> +		}
> +#endif
> +	}
> +
> +out:
> +	mutex_unlock(&lock);
> +	return ret;
> +}
> +EXPORT_SYMBOL(xen_alloc_unpopulated_pages);
> +
> +/**
> + * xen_free_unpopulated_pages - return unpopulated pages
> + * @nr_pages: Number of pages
> + * @pages: pages to return
> + */
> +void xen_free_unpopulated_pages(unsigned int nr_pages, struct page **pages)
> +{
> +	unsigned int i;
> +
> +	mutex_lock(&lock);
> +	for (i = 0; i < nr_pages; i++) {
> +		list_add(&pages[i]->lru, &list);
> +		count++;
> +	}
> +	mutex_unlock(&lock);
> +}
> +EXPORT_SYMBOL(xen_free_unpopulated_pages);
> +
> +#ifdef CONFIG_XEN_PV
> +static int __init init(void)
> +{
> +	unsigned int i;
> +
> +	if (!xen_domain())
> +		return -ENODEV;
> +
> +	/*
> +	 * Initialize with pages from the extra memory regions (see
> +	 * arch/x86/xen/setup.c).
> +	 */
> +	for (i = 0; i < XEN_EXTRA_MEM_MAX_REGIONS; i++) {
> +		unsigned int j;
> +
> +		for (j = 0; j < xen_extra_mem[i].n_pfns; j++) {
> +			struct page *pg =
> +				pfn_to_page(xen_extra_mem[i].start_pfn + j);
> +
> +			list_add(&pg->lru, &list);
> +			count++;
> +		}
> +	}
> +
> +	return 0;
> +}
> +subsys_initcall(init);
> +#endif
> diff --git a/drivers/xen/xenbus/xenbus_client.c b/drivers/xen/xenbus/xenbus_client.c
> index 786fbb7d8be0..70b6c4780fbd 100644
> --- a/drivers/xen/xenbus/xenbus_client.c
> +++ b/drivers/xen/xenbus/xenbus_client.c
> @@ -615,7 +615,7 @@ static int xenbus_map_ring_hvm(struct xenbus_device *dev,
>  	bool leaked = false;
>  	unsigned int nr_pages = XENBUS_PAGES(nr_grefs);
>  
> -	err = alloc_xenballooned_pages(nr_pages, node->hvm.pages);
> +	err = xen_alloc_unpopulated_pages(nr_pages, node->hvm.pages);
>  	if (err)
>  		goto out_err;
>  
> @@ -656,7 +656,7 @@ static int xenbus_map_ring_hvm(struct xenbus_device *dev,
>  			 addr, nr_pages);
>   out_free_ballooned_pages:
>  	if (!leaked)
> -		free_xenballooned_pages(nr_pages, node->hvm.pages);
> +		xen_free_unpopulated_pages(nr_pages, node->hvm.pages);
>   out_err:
>  	return err;
>  }
> @@ -852,7 +852,7 @@ static int xenbus_unmap_ring_hvm(struct xenbus_device *dev, void *vaddr)
>  			       info.addrs);
>  	if (!rv) {
>  		vunmap(vaddr);
> -		free_xenballooned_pages(nr_pages, node->hvm.pages);
> +		xen_free_unpopulated_pages(nr_pages, node->hvm.pages);
>  	}
>  	else
>  		WARN(1, "Leaking %p, size %u page(s)\n", vaddr, nr_pages);
> diff --git a/drivers/xen/xlate_mmu.c b/drivers/xen/xlate_mmu.c
> index 7b1077f0abcb..34742c6e189e 100644
> --- a/drivers/xen/xlate_mmu.c
> +++ b/drivers/xen/xlate_mmu.c
> @@ -232,7 +232,7 @@ int __init xen_xlate_map_ballooned_pages(xen_pfn_t **gfns, void **virt,
>  		kfree(pages);
>  		return -ENOMEM;
>  	}
> -	rc = alloc_xenballooned_pages(nr_pages, pages);
> +	rc = xen_alloc_unpopulated_pages(nr_pages, pages);
>  	if (rc) {
>  		pr_warn("%s Couldn't balloon alloc %ld pages rc:%d\n", __func__,
>  			nr_pages, rc);
> @@ -249,7 +249,7 @@ int __init xen_xlate_map_ballooned_pages(xen_pfn_t **gfns, void **virt,
>  	if (!vaddr) {
>  		pr_warn("%s Couldn't map %ld pages rc:%d\n", __func__,
>  			nr_pages, rc);
> -		free_xenballooned_pages(nr_pages, pages);
> +		xen_free_unpopulated_pages(nr_pages, pages);
>  		kfree(pages);
>  		kfree(pfns);
>  		return -ENOMEM;
> diff --git a/include/xen/xen.h b/include/xen/xen.h
> index 19a72f591e2b..aa33bc0d933c 100644
> --- a/include/xen/xen.h
> +++ b/include/xen/xen.h
> @@ -52,4 +52,12 @@ bool xen_biovec_phys_mergeable(const struct bio_vec *vec1,
>  extern u64 xen_saved_max_mem_size;
>  #endif
>  
> +#ifdef CONFIG_ZONE_DEVICE
> +int xen_alloc_unpopulated_pages(unsigned int nr_pages, struct page **pages);
> +void xen_free_unpopulated_pages(unsigned int nr_pages, struct page **pages);
> +#else
> +#define xen_alloc_unpopulated_pages alloc_xenballooned_pages
> +#define xen_free_unpopulated_pages free_xenballooned_pages
> +#endif
> +
>  #endif	/* _XEN_XEN_H */
> 



-- 
Thanks,

David / dhildenb



From xen-devel-bounces@lists.xenproject.org Fri Jul 24 14:35:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jul 2020 14:35:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyynY-0003u8-5Y; Fri, 24 Jul 2020 14:35:44 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=qjWU=BD=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jyynX-0003tY-33
 for xen-devel@lists.xenproject.org; Fri, 24 Jul 2020 14:35:43 +0000
X-Inumbo-ID: f0486a6a-cdba-11ea-a3fb-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f0486a6a-cdba-11ea-a3fb-12813bfff9fa;
 Fri, 24 Jul 2020 14:35:36 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=cI0/m9la+zTc9ksvcN+XKxb+iY/9+pMteFbtElEP7gM=; b=gGbzHtWchl5clyvVEH3UhYRCg
 TKG49NebYjcXzaUGPvyGeUM5vmXPkpC2x8+wpiQpRB2imUSFHpbNphP5ixDo5hbwtual03RKEQ0t0
 C3D+YOT1YewVCIsnYn/7ng3rc3p2l9MpTei9EZbCNYtuvfxuYSvFsatkripZzWzUZMMf8=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jyynQ-0001iY-IM; Fri, 24 Jul 2020 14:35:36 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jyynQ-0004WH-2M; Fri, 24 Jul 2020 14:35:36 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jyynQ-0005Bk-1l; Fri, 24 Jul 2020 14:35:36 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-152173-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 152173: tolerable all pass - PUSHED
X-Osstest-Failures: xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=b2a64292b0bfa317886b3432d1a5b2a4193a48d6
X-Osstest-Versions-That: xen=8a7bf75eb5bba4046c1aa278330a371545a6ecbd
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 24 Jul 2020 14:35:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 152173 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/152173/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  b2a64292b0bfa317886b3432d1a5b2a4193a48d6
baseline version:
 xen                  8a7bf75eb5bba4046c1aa278330a371545a6ecbd

Last test of basis   152152  2020-07-23 15:00:30 Z    0 days
Testing same since   152173  2020-07-24 09:00:31 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   8a7bf75eb5..b2a64292b0  b2a64292b0bfa317886b3432d1a5b2a4193a48d6 -> smoke


From xen-devel-bounces@lists.xenproject.org Fri Jul 24 14:44:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jul 2020 14:44:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyyvs-0004xb-1X; Fri, 24 Jul 2020 14:44:20 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hWcK=BD=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jyyvq-0004xU-U6
 for xen-devel@lists.xenproject.org; Fri, 24 Jul 2020 14:44:18 +0000
X-Inumbo-ID: 25b33c74-cdbc-11ea-a400-12813bfff9fa
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 25b33c74-cdbc-11ea-a400-12813bfff9fa;
 Fri, 24 Jul 2020 14:44:16 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595601856;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:content-transfer-encoding:in-reply-to;
 bh=0bYk7a0zznLfo2v46dTLOdpZ5AbIPI47vaSbWHGjMJQ=;
 b=IarLKOqJoi37+A+m/ceLAlk6McEDy5f3QY3mHxXxFII/KEhtjT9oQ++K
 Hlv6fJ1iwRueQNhUVZ3J+ss9MG7grM7t6KwFeMJlMWFVgLqXJ5Gts0CVI
 jVJXK4ES1sl8p6128eQCEywYxYPSoEk1whcf6p9oQQ2nWqUMRxGkByqvF k=;
Authentication-Results: esa6.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: SRa+pre4ZmCeH0TZSioNeMtPGWaPu8c50hD/NH3AKbQDxoDFVachb0xgfTjBCOSjyAFdOnBxK6
 ig/U4K/pVLAuapc4vz2XR68VlLhGNc2j+tPp+Kz5DDCo5RDEuP0HhIxBMjVs+3iBFBrD0JExF/
 uSc15j1BfU04X2+KoYxEUI8shL6q3a6TFcgpKX06MW/dIBe7J8zW0fotD/SUuTzcTw+7zshkzn
 I5/9ku80oXWzIi03JRJVC0vodDUdZ3vJ7o90KbXytRiQQ7HIwtdWA7o5IMJ4DBfb55Kac720sU
 2ME=
X-SBRS: 2.7
X-MesageID: 23460070
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,391,1589256000"; d="scan'208";a="23460070"
Date: Fri, 24 Jul 2020 16:44:04 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Rahul Singh <rahul.singh@arm.com>
Subject: Re: [RFC PATCH v1 1/4] arm/pci: PCI setup and PCI host bridge
 discovery within XEN on ARM.
Message-ID: <20200724144404.GJ7191@Air-de-Roger>
References: <cover.1595511416.git.rahul.singh@arm.com>
 <64ebd4ef614b36a5844c52426a4a6a4a23b1f087.1595511416.git.rahul.singh@arm.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <64ebd4ef614b36a5844c52426a4a6a4a23b1f087.1595511416.git.rahul.singh@arm.com>
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Bertrand.Marquis@arm.com, xen-devel@lists.xenproject.org, nd@arm.com,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Thu, Jul 23, 2020 at 04:40:21PM +0100, Rahul Singh wrote:
> XEN during boot will read the PCI device tree node “reg” property
> and will map the PCI config space to the XEN memory.
> 
> XEN will read the “linux, pci-domain” property from the device tree
> node and configure the host bridge segment number accordingly.
> 
> As of now "pci-host-ecam-generic" compatible board is supported.
> 
> Change-Id: If32f7748b7dc89dd37114dc502943222a2a36c49
> Signed-off-by: Rahul Singh <rahul.singh@arm.com>
> ---
>  xen/arch/arm/Kconfig                |   7 +
>  xen/arch/arm/Makefile               |   1 +
>  xen/arch/arm/pci/Makefile           |   4 +
>  xen/arch/arm/pci/pci-access.c       | 101 ++++++++++++++
>  xen/arch/arm/pci/pci-host-common.c  | 198 ++++++++++++++++++++++++++++
>  xen/arch/arm/pci/pci-host-generic.c | 131 ++++++++++++++++++
>  xen/arch/arm/pci/pci.c              | 112 ++++++++++++++++
>  xen/arch/arm/setup.c                |   2 +
>  xen/include/asm-arm/device.h        |   7 +-
>  xen/include/asm-arm/pci.h           |  97 +++++++++++++-
>  10 files changed, 654 insertions(+), 6 deletions(-)
>  create mode 100644 xen/arch/arm/pci/Makefile
>  create mode 100644 xen/arch/arm/pci/pci-access.c
>  create mode 100644 xen/arch/arm/pci/pci-host-common.c
>  create mode 100644 xen/arch/arm/pci/pci-host-generic.c
>  create mode 100644 xen/arch/arm/pci/pci.c
> 
> diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig
> index 2777388265..ee13339ae9 100644
> --- a/xen/arch/arm/Kconfig
> +++ b/xen/arch/arm/Kconfig
> @@ -31,6 +31,13 @@ menu "Architecture Features"
>  
>  source "arch/Kconfig"
>  
> +config ARM_PCI
> +	bool "PCI Passthrough Support"
> +	depends on ARM_64
> +	---help---
> +
> +	  PCI passthrough support for Xen on ARM64.
> +
>  config ACPI
>  	bool "ACPI (Advanced Configuration and Power Interface) Support" if EXPERT
>  	depends on ARM_64
> diff --git a/xen/arch/arm/Makefile b/xen/arch/arm/Makefile
> index 7e82b2178c..345cb83eed 100644
> --- a/xen/arch/arm/Makefile
> +++ b/xen/arch/arm/Makefile
> @@ -6,6 +6,7 @@ ifneq ($(CONFIG_NO_PLAT),y)
>  obj-y += platforms/
>  endif
>  obj-$(CONFIG_TEE) += tee/
> +obj-$(CONFIG_ARM_PCI) += pci/
>  
>  obj-$(CONFIG_HAS_ALTERNATIVE) += alternative.o
>  obj-y += bootfdt.init.o
> diff --git a/xen/arch/arm/pci/Makefile b/xen/arch/arm/pci/Makefile
> new file mode 100644
> index 0000000000..358508b787
> --- /dev/null
> +++ b/xen/arch/arm/pci/Makefile
> @@ -0,0 +1,4 @@
> +obj-y += pci.o
> +obj-y += pci-host-generic.o
> +obj-y += pci-host-common.o
> +obj-y += pci-access.o

The Kconfig option mentions the support being explicitly for ARM64;
would it be better to place the code in arch/arm/arm64 then?

> diff --git a/xen/arch/arm/pci/pci-access.c b/xen/arch/arm/pci/pci-access.c
> new file mode 100644
> index 0000000000..c53ef58336
> --- /dev/null
> +++ b/xen/arch/arm/pci/pci-access.c
> @@ -0,0 +1,101 @@
> +/*
> + * Copyright (C) 2020 Arm Ltd.
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License version 2 as
> + * published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> + * GNU General Public License for more details.
> + *
> + * You should have received a copy of the GNU General Public License
> + * along with this program.  If not, see <http://www.gnu.org/licenses/>.
> + */
> +
> +#include <xen/init.h>
> +#include <xen/pci.h>
> +#include <asm/pci.h>
> +#include <xen/rwlock.h>
> +
> +static uint32_t pci_config_read(pci_sbdf_t sbdf, unsigned int reg,
> +                            unsigned int len)

Please align with the opening parenthesis (here and everywhere in the
patch series).

> +{
> +    int rc;
> +    uint32_t val = GENMASK(0, len * 8);

You can just set val = ~0. The return type of the pci_conf_readXX
helpers will already truncate the value.

> +
> +    struct pci_host_bridge *bridge = pci_find_host_bridge(sbdf.seg, sbdf.bus);
> +
> +    if ( unlikely(!bridge) )
> +    {
> +        printk(XENLOG_ERR "Unable to find bridge for "PRI_pci"\n",
> +                sbdf.seg, sbdf.bus, sbdf.dev, sbdf.fn);

I had a patch to add a custom modifier to our printf format in
order to handle pci_sbdf_t natively:

https://patchew.org/Xen/20190822065132.48200-1-roger.pau@citrix.com/

It missed maintainers' Acks and was never committed. Since you are
doing a bunch of work here, and likely adding a lot of SBDF-related
prints, feel free to import the modifier (%pp) and use it in your code
(do not attempt to switch existing users, or it's likely to get
stuck again).

> +        return val;
> +    }
> +
> +    if ( unlikely(!bridge->ops->read) )
> +        return val;
> +
> +    rc = bridge->ops->read(bridge, (uint32_t) sbdf.sbdf, reg, len, &val);

There's no need for the uint32_t cast, the sbdf field is already of
such type.

> +    if ( rc )
> +        printk(XENLOG_ERR "Failed to read reg %#x len %u for "PRI_pci"\n",
> +                reg, len, sbdf.seg, sbdf.bus, sbdf.dev, sbdf.fn);
> +
> +    return val;
> +}
> +
> +static void pci_config_write(pci_sbdf_t sbdf, unsigned int reg,
> +        unsigned int len, uint32_t val)
> +{
> +    int rc;
> +    struct pci_host_bridge *bridge = pci_find_host_bridge(sbdf.seg, sbdf.bus);
> +
> +    if ( unlikely(!bridge) )
> +    {
> +        printk(XENLOG_ERR "Unable to find bridge for "PRI_pci"\n",
> +                sbdf.seg, sbdf.bus, sbdf.dev, sbdf.fn);
> +        return;
> +    }
> +
> +    if ( unlikely(!bridge->ops->write) )
> +        return;
> +
> +    rc = bridge->ops->write(bridge, (uint32_t) sbdf.sbdf, reg, len, val);
> +    if ( rc )
> +        printk(XENLOG_ERR "Failed to write reg %#x len %u for "PRI_pci"\n",
> +                reg, len, sbdf.seg, sbdf.bus, sbdf.dev, sbdf.fn);
> +}
> +
> +/*
> + * Wrappers for all PCI configuration access functions.
> + */
> +
> +#define PCI_OP_WRITE(size, type) \
> +    void pci_conf_write##size (pci_sbdf_t sbdf,unsigned int reg, type val) \
> +{                                               \
> +    pci_config_write(sbdf, reg, size / 8, val);     \
> +}
> +
> +#define PCI_OP_READ(size, type) \
> +    type pci_conf_read##size (pci_sbdf_t sbdf, unsigned int reg)  \
> +{                                               \
> +    return pci_config_read(sbdf, reg, size / 8);     \
> +}
> +
> +PCI_OP_READ(8, u8)
> +PCI_OP_READ(16, u16)
> +PCI_OP_READ(32, u32)
> +PCI_OP_WRITE(8, u8)
> +PCI_OP_WRITE(16, u16)
> +PCI_OP_WRITE(32, u32)

Please use uintXX_t.

Also, it's nice to add some kind of signal for cscope and friends so
they can find the autogenerated functions, i.e.:

#define pci_conf_read8
#undef pci_conf_read8
#define pci_conf_read16
#undef pci_conf_read16
...

It's tedious but helps future users find where the code is generated.

> +
> +/*
> + * Local variables:
> + * mode: C
> + * c-file-style: "BSD"
> + * c-basic-offset: 4
> + * tab-width: 4
> + * indent-tabs-mode: nil
> + * End:
> + */
> diff --git a/xen/arch/arm/pci/pci-host-common.c b/xen/arch/arm/pci/pci-host-common.c
> new file mode 100644
> index 0000000000..c5f98be698
> --- /dev/null
> +++ b/xen/arch/arm/pci/pci-host-common.c
> @@ -0,0 +1,198 @@
> +/*
> + * Copyright (C) 2020 Arm Ltd.
> + *
> + * Based on Linux drivers/pci/ecam.c
> + * Copyright 2016 Broadcom.
> + *
> + * Based on Linux drivers/pci/controller/pci-host-common.c
> + * Based on Linux drivers/pci/controller/pci-host-generic.c
> + * Copyright (C) 2014 ARM Limited Will Deacon <will.deacon@arm.com>
> + *
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License version 2 as
> + * published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> + * GNU General Public License for more details.
> + *
> + * You should have received a copy of the GNU General Public License
> + * along with this program.  If not, see <http://www.gnu.org/licenses/>.
> + */
> +
> +#include <xen/init.h>
> +#include <xen/pci.h>
> +#include <asm/pci.h>
> +#include <xen/rwlock.h>
> +#include <xen/vmap.h>
> +
> +/*
> + * List for all the pci host bridges.
> + */
> +
> +static LIST_HEAD(pci_host_bridges);
> +
> +static bool __init dt_pci_parse_bus_range(struct dt_device_node *dev,
> +        struct pci_config_window *cfg)
> +{
> +    const __be32 *cells;

It's my impression that, while based on Linux, this is not a verbatim
copy of a Linux file, and tries to adhere to the Xen coding style.
If so, please use uint32_t here.

> +    uint32_t len;
> +
> +    cells = dt_get_property(dev, "bus-range", &len);
> +    /* bus-range should at least be 2 cells */
> +    if ( !cells || (len < (sizeof(*cells) * 2)) )
> +        return false;
> +
> +    cfg->busn_start = dt_next_cell(1, &cells);
> +    cfg->busn_end = dt_next_cell(1, &cells);
> +
> +    return true;
> +}
> +
> +static inline void __iomem *pci_remap_cfgspace(paddr_t start, size_t len)
> +{
> +    return ioremap_nocache(start, len);
> +}
> +
> +static void pci_ecam_free(struct pci_config_window *cfg)
> +{
> +    if ( cfg->win )
> +        iounmap(cfg->win);
> +
> +    xfree(cfg);
> +}
> +
> +static struct pci_config_window *gen_pci_init(struct dt_device_node *dev,
> +        struct pci_ecam_ops *ops)
> +{
> +    int err;
> +    struct pci_config_window *cfg;
> +    paddr_t addr, size;
> +
> +    cfg = xzalloc(struct pci_config_window);
> +    if ( !cfg )
> +        return NULL;
> +
> +    err = dt_pci_parse_bus_range(dev, cfg);
> +    if ( !err ) {

Braces

> +        cfg->busn_start = 0;
> +        cfg->busn_end = 0xff;
> +        printk(XENLOG_ERR "No bus range found for pci controller\n");
> +    } else {
> +        if ( cfg->busn_end > cfg->busn_start + 0xff )
> +            cfg->busn_end = cfg->busn_start + 0xff;
> +    }
> +
> +    /* Parse our PCI ecam register address*/
> +    err = dt_device_get_address(dev, 0, &addr, &size);
> +    if ( err )
> +        goto err_exit;
> +
> +    cfg->phys_addr = addr;
> +    cfg->size = size;
> +    cfg->ops = ops;
> +
> +    /*
> +     * On 64-bit systems, we do a single ioremap for the whole config space
> +     * since we have enough virtual address range available.  On 32-bit, we
> +     * ioremap the config space for each bus individually.
> +     *
> +     * As of now only 64-bit is supported 32-bit is not supported.
> +     */
> +    cfg->win = pci_remap_cfgspace(cfg->phys_addr, cfg->size);
> +    if ( !cfg->win )
> +        goto err_exit_remap;
> +
> +    printk("ECAM at [mem %lx-%lx] for [bus %x-%x] \n",cfg->phys_addr,
> +            cfg->phys_addr + cfg->size - 1,cfg->busn_start,cfg->busn_end);
> +
> +    if ( ops->init ) {
> +        err = ops->init(cfg);
> +        if (err)
> +            goto err_exit;
> +    }
> +
> +    return cfg;
> +
> +err_exit_remap:
> +    printk(XENLOG_ERR "ECAM ioremap failed\n");
> +err_exit:
> +    pci_ecam_free(cfg);
> +    return NULL;
> +}
> +
> +static struct pci_host_bridge * pci_alloc_host_bridge(void)
                                  ^ extra space
> +{
> +    struct pci_host_bridge *bridge = xzalloc(struct pci_host_bridge);
> +
> +    if ( !bridge )
> +        return NULL;
> +
> +    INIT_LIST_HEAD(&bridge->node);
> +    return bridge;
> +}
> +
> +int pci_host_common_probe(struct dt_device_node *dev,
> +        struct pci_ecam_ops *ops)
> +{
> +    struct pci_host_bridge *bridge;
> +    struct pci_config_window *cfg;
> +    u32 segment;
> +
> +    bridge = pci_alloc_host_bridge();
> +    if ( !bridge )
> +        return -ENOMEM;
> +
> +    /* Parse and map our Configuration Space windows */
> +    cfg = gen_pci_init(dev, ops);
> +    if ( !cfg )
> +        return -ENOMEM;

You are leaking the allocation of bridge here ...

> +
> +    bridge->dt_node = dev;
> +    bridge->sysdata = cfg;
> +    bridge->ops = &ops->pci_ops;
> +
> +    if( !dt_property_read_u32(dev, "linux,pci-domain", &segment) )
> +    {
> +        printk(XENLOG_ERR "\"linux,pci-domain\" property in not available in DT\n");
> +        return -ENODEV;

... and here.

> +    }
> +
> +    bridge->segment = (u16)segment;
> +
> +    list_add_tail(&bridge->node, &pci_host_bridges);
> +
> +    return 0;
> +}
> +
> +/*
> + * This function will lookup an hostbridge based on the segment and bus
> + * number.
> + */
> +struct pci_host_bridge *pci_find_host_bridge(uint16_t segment, uint8_t bus)
> +{
> +    struct pci_host_bridge *bridge;
> +    bool found = false;
> +
> +    list_for_each_entry( bridge, &pci_host_bridges, node )
> +    {
> +        if ( bridge->segment != segment )
> +            continue;
> +
> +        found = true;
> +        break;
> +    }
> +
> +    return (found) ? bridge : NULL;

This can be much shorter:

struct pci_host_bridge *pci_find_host_bridge(uint16_t segment, uint8_t bus)
{
    struct pci_host_bridge *bridge;

    list_for_each_entry( bridge, &pci_host_bridges, node )
        if ( bridge->segment == segment )
            return bridge;

    return NULL;
}

Albeit I'm confused by the fact that you pass a bus number that's
completely unused.

> +}
> +/*
> + * Local variables:
> + * mode: C
> + * c-file-style: "BSD"
> + * c-basic-offset: 4
> + * tab-width: 4
> + * indent-tabs-mode: nil
> + * End:
> + */
> diff --git a/xen/arch/arm/pci/pci-host-generic.c b/xen/arch/arm/pci/pci-host-generic.c
> new file mode 100644
> index 0000000000..cd67b3dec6
> --- /dev/null
> +++ b/xen/arch/arm/pci/pci-host-generic.c
> @@ -0,0 +1,131 @@
> +/*
> + * Copyright (C) 2020 Arm Ltd.
> + *
> + * Based on Linux drivers/pci/controller/pci-host-common.c
> + * Based on Linux drivers/pci/controller/pci-host-generic.c
> + * Copyright (C) 2014 ARM Limited Will Deacon <will.deacon@arm.com>
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License version 2 as
> + * published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> + * GNU General Public License for more details.
> + *
> + * You should have received a copy of the GNU General Public License
> + * along with this program.  If not, see <http://www.gnu.org/licenses/>.
> + */
> +
> +#include <asm/device.h>
> +#include <asm/io.h>
> +#include <xen/pci.h>
> +#include <asm/pci.h>
> +
> +/*
> + * Function to get the config space base.
> + */
> +static void __iomem *pci_config_base(struct pci_host_bridge *bridge,
> +        uint32_t sbdf, int where)

You would be better off passing a pci_sbdf_t directly here. Also 'where'
should be renamed to offset, or reg, and be made unsigned int. AFAICT
you will never pass a negative value here.

For sanity you should also assert that the offset falls within the
PCI config space used by the device, in order to easily catch
wrong offsets being used.

> +{
> +    struct pci_config_window *cfg = bridge->sysdata;

const

> +    unsigned int devfn_shift = cfg->ops->bus_shift - 8;
> +
> +    pci_sbdf_t sbdf_t = (pci_sbdf_t) sbdf ;
> +
> +    unsigned int busn = sbdf_t.bus;
> +    void __iomem *base;

IMO adding newlines between variable definitions is not helpful, but
that's my taste.

> +
> +    if ( busn < cfg->busn_start || busn > cfg->busn_end )
> +        return NULL;
> +
> +    base = cfg->win + (busn << cfg->ops->bus_shift);
> +
> +    return base + (PCI_DEVFN(sbdf_t.dev, sbdf_t.fn) << devfn_shift) + where;
> +}
> +
> +int pci_ecam_config_write(struct pci_host_bridge *bridge, uint32_t sbdf,
> +        int where, int size, u32 val)
> +{
> +    void __iomem *addr;
> +
> +    addr = pci_config_base(bridge, sbdf, where);

You can initialize at definition.

> +    if ( !addr )
> +        return -ENODEV;
> +
> +    if ( size == 1 )
> +        writeb(val, addr);
> +    else if ( size == 2 )
> +        writew(val, addr);
> +    else
> +        writel(val, addr);

Please use a switch, and check against specific values. The default
case should be a BUG(). See pci_conf_read from x86 for an example.

> +
> +    return 0;
> +}
> +
> +int pci_ecam_config_read(struct pci_host_bridge *bridge, uint32_t sbdf,
> +        int where, int size, u32 *val)
> +{
> +    void __iomem *addr;
> +
> +    addr = pci_config_base(bridge, sbdf, where);
> +    if ( !addr ) {
> +        *val = ~0;
> +        return -ENODEV;
> +    }
> +
> +    if ( size == 1 )
> +        *val = readb(addr);
> +    else if ( size == 2 )
> +        *val = readw(addr);
> +    else
> +        *val = readl(addr);
> +
> +    return 0;
> +}
> +
> +/* ECAM ops */
> +struct pci_ecam_ops pci_generic_ecam_ops = {
> +    .bus_shift  = 20,
> +    .pci_ops    = {
> +        .read       = pci_ecam_config_read,
> +        .write      = pci_ecam_config_write,
> +    }
> +};
> +
> +static const struct dt_device_match gen_pci_dt_match[] = {
> +    { .compatible = "pci-host-ecam-generic",
> +      .data =       &pci_generic_ecam_ops },
> +
> +    { },
> +};
> +
> +static int gen_pci_dt_init(struct dt_device_node *dev, const void *data)
> +{
> +    const struct dt_device_match *of_id;
> +    struct pci_ecam_ops *ops;
> +
> +    of_id = dt_match_node(gen_pci_dt_match, dev->dev.of_node);
> +    ops = (struct pci_ecam_ops *) of_id->data;
> +
> +    printk(XENLOG_INFO "Found PCI host bridge %s compatible:%s \n",
> +            dt_node_full_name(dev), of_id->compatible);
> +
> +    return pci_host_common_probe(dev, ops);
> +}
> +
> +DT_DEVICE_START(pci_gen, "PCI HOST GENERIC", DEVICE_PCI)
> +.dt_match = gen_pci_dt_match,
> +.init = gen_pci_dt_init,
> +DT_DEVICE_END
> +
> +/*
> + * Local variables:
> + * mode: C
> + * c-file-style: "BSD"
> + * c-basic-offset: 4
> + * tab-width: 4
> + * indent-tabs-mode: nil
> + * End:
> + */
> diff --git a/xen/arch/arm/pci/pci.c b/xen/arch/arm/pci/pci.c
> new file mode 100644
> index 0000000000..f8cbb99591
> --- /dev/null
> +++ b/xen/arch/arm/pci/pci.c
> @@ -0,0 +1,112 @@
> +/*
> + * Copyright (C) 2020 Arm Ltd.
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License version 2 as
> + * published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> + * GNU General Public License for more details.
> + *
> + * You should have received a copy of the GNU General Public License
> + * along with this program.  If not, see <http://www.gnu.org/licenses/>.
> + */
> +
> +#include <xen/acpi.h>
> +#include <xen/device_tree.h>
> +#include <xen/errno.h>
> +#include <xen/init.h>
> +#include <xen/pci.h>
> +#include <xen/param.h>
> +
> +static int __init dt_pci_init(void)
> +{
> +    struct dt_device_node *np;
> +    int rc;
> +
> +    dt_for_each_device_node(dt_host, np)
> +    {
> +        rc = device_init(np, DEVICE_PCI, NULL);
> +        if( !rc )
> +            continue;
> +        /*
> +         * Ignore the following error codes:
> +         *   - EBADF: Indicate the current is not an pci
> +         *   - ENODEV: The pci device is not present or cannot be used by
> +         *     Xen.
> +         */
> +        else if ( rc != -EBADF && rc != -ENODEV )
> +        {
> +            printk(XENLOG_ERR "No driver found in XEN or driver init error.\n");
> +            return rc;
> +        }
> +    }
> +
> +    return 0;
> +}
> +
> +#ifdef CONFIG_ACPI
> +static void __init acpi_pci_init(void)
> +{
> +    printk(XENLOG_ERR "ACPI pci init not supported \n");
> +    return;
> +}
> +#else
> +static inline void __init acpi_pci_init(void) { }
> +#endif
> +
> +static bool __initdata param_pci_enable;
> +static int __init parse_pci_param(const char *arg)
> +{
> +    if ( !arg )
> +    {
> +        param_pci_enable = false;
> +        return 0;
> +    }
> +
> +    switch ( parse_bool(arg, NULL) )
> +    {
> +        case 0:
> +            param_pci_enable = false;
> +            return 0;
> +        case 1:
> +            param_pci_enable = true;
> +            return 0;
> +    }
> +
> +    return -EINVAL;
> +}
> +custom_param("pci", parse_pci_param);

You need to introduce the documentation for the parameter at
docs/misc/xen-command-line.pandoc

Albeit I'm not sure I like it, why do you need to enable PCI
explicitly?

Shouldn't it be discovered automatically and enabled by default?

> +void __init pci_init(void)
> +{
> +    /*
> +     * Enable PCI when has been enabled explicitly (pci=on)
> +     */
> +    if ( !param_pci_enable)
> +        goto disable;

Just return here, there's no point in having a label to perform a
return.

> +
> +    if ( acpi_disabled )
> +        dt_pci_init();
> +    else
> +        acpi_pci_init();

Isn't there an enum or something that tells you whether the system
description is coming from ACPI or from DT?

This if/else seems fragile.

Also for ACPI you will get called by acpi_boot_init, and likely need
to implement an acpi_mmcfg_init or pci_mmcfg_arch_{init,enable}. I'm
not sure whether the code in acpi_mmcfg_init could be made shared
between both x86 and Arm.

> +
> +#ifdef CONFIG_HAS_PCI
> +    pci_segments_init();
> +#endif
> +
> +disable:
> +    return;
> +}
> +
> +/*
> + * Local variables:
> + * mode: C
> + * c-file-style: "BSD"
> + * c-basic-offset: 4
> + * tab-width: 4
> + * indent-tabs-mode: nil
> + * End:
> + */
> diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
> index 7968cee47d..2d7f1db44f 100644
> --- a/xen/arch/arm/setup.c
> +++ b/xen/arch/arm/setup.c
> @@ -930,6 +930,8 @@ void __init start_xen(unsigned long boot_phys_offset,
>  
>      setup_virt_paging();
>  
> +    pci_init();
> +
>      do_initcalls();
>  
>      /*
> diff --git a/xen/include/asm-arm/device.h b/xen/include/asm-arm/device.h
> index ee7cff2d44..28f8049cfd 100644
> --- a/xen/include/asm-arm/device.h
> +++ b/xen/include/asm-arm/device.h
> @@ -4,6 +4,7 @@
>  enum device_type
>  {
>      DEV_DT,
> +    DEV_PCI,
>  };
>  
>  struct dev_archdata {
> @@ -25,15 +26,15 @@ typedef struct device device_t;
>  
>  #include <xen/device_tree.h>
>  
> -/* TODO: Correctly implement dev_is_pci when PCI is supported on ARM */
> -#define dev_is_pci(dev) ((void)(dev), 0)
> -#define dev_is_dt(dev)  ((dev->type == DEV_DT)
> +#define dev_is_pci(dev) (dev->type == DEV_PCI)
> +#define dev_is_dt(dev)  (dev->type == DEV_DT)
>  
>  enum device_class
>  {
>      DEVICE_SERIAL,
>      DEVICE_IOMMU,
>      DEVICE_GIC,
> +    DEVICE_PCI,

It seems like this wants to be DEVICE_PCI_HOST_BRIDGE or some
such, since this is not used to identify all PCI devices, but just
bridges?

>      /* Use for error */
>      DEVICE_UNKNOWN,
>  };
> diff --git a/xen/include/asm-arm/pci.h b/xen/include/asm-arm/pci.h
> index de13359f65..94fd00360a 100644
> --- a/xen/include/asm-arm/pci.h
> +++ b/xen/include/asm-arm/pci.h
> @@ -1,7 +1,98 @@
> -#ifndef __X86_PCI_H__
> -#define __X86_PCI_H__
> +/*
> + * Copyright (C) 2020 Arm Ltd.
> + *
> + * Based on Linux drivers/pci/ecam.c
> + * Copyright 2016 Broadcom.
> + *
> + * Based on Linux drivers/pci/controller/pci-host-common.c
> + * Based on Linux drivers/pci/controller/pci-host-generic.c
> + * Copyright (C) 2014 ARM Limited Will Deacon <will.deacon@arm.com>
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License version 2 as
> + * published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> + * GNU General Public License for more details.
> + *
> + * You should have received a copy of the GNU General Public License
> + * along with this program.  If not, see <http://www.gnu.org/licenses/>.
> + */
>  
> +#ifndef __ARM_PCI_H__
> +#define __ARM_PCI_H__
> +
> +#include <xen/pci.h>
> +#include <xen/device_tree.h>
> +#include <asm/device.h>
> +
> +#ifdef CONFIG_ARM_PCI
> +
> +/* Arch pci dev struct */
>  struct arch_pci_dev {
> +    struct device dev;
> +};

This seems to be completely unused?

> +
> +#define PRI_pci "%04x:%02x:%02x.%u"
> +#define pci_to_dev(pcidev) (&(pcidev)->arch.dev)
> +
> +/*
> + * struct to hold the mappings of a config space window. This
> + * is expected to be used as sysdata for PCI controllers that
> + * use ECAM.
> + */
> +struct pci_config_window {
> +    paddr_t     phys_addr;
> +    paddr_t     size;
> +    uint8_t     busn_start;
> +    uint8_t     busn_end;
> +    struct pci_ecam_ops     *ops;

const?

> +    void __iomem        *win;
> +};
> +
> +/* Forward declaration as pci_host_bridge and pci_ops depend on each other. */
> +struct pci_host_bridge;
> +
> +struct pci_ops {
> +    int (*read)(struct pci_host_bridge *bridge,
> +                    uint32_t sbdf, int where, int size, u32 *val);
> +    int (*write)(struct pci_host_bridge *bridge,
> +                    uint32_t sbdf, int where, int size, u32 val);
> +};
> +
> +/*
> + * struct to hold pci ops and bus shift of the config window
> + * for a PCI controller.
> + */
> +struct pci_ecam_ops {
> +    unsigned int            bus_shift;
> +    struct pci_ops          pci_ops;
> +    int             (*init)(struct pci_config_window *);
> +};
> +
> +/*
> + * struct to hold pci host bridge information
> + * for a PCI controller.
> + */
> +struct pci_host_bridge {
> +    struct dt_device_node *dt_node;  /* Pointer to the associated DT node */
> +    struct list_head node;           /* Node in list of host bridges */
> +    uint16_t segment;                /* Segment number */
> +    void *sysdata;                   /* Pointer to the config space window*/
> +    const struct pci_ops *ops;

You seem to introduce a lot of ops structs, yet there's only one
implementation, the generic ECAM one, and adding such complexity should
IMO be done when further implementations are added. Also, given this is
a fully ECAM-compliant bridge, you could just reuse most of the existing
logic from x86?

I understand the discovery needs to be different, but x86 MCFG logic
should already be capable of handling multiple ECAM regions.

I also agree with Julien that splitting this into separate patches would
make it easier to review. For example you can start with the discovery
logic, followed by the initialization, and then add the accessors to the
config space last.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Fri Jul 24 14:50:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jul 2020 14:50:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyz1Q-0005Xl-SH; Fri, 24 Jul 2020 14:50:04 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hWcK=BD=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jyz1Q-0005R6-CV
 for xen-devel@lists.xenproject.org; Fri, 24 Jul 2020 14:50:04 +0000
X-Inumbo-ID: f4a13235-cdbc-11ea-a402-12813bfff9fa
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f4a13235-cdbc-11ea-a402-12813bfff9fa;
 Fri, 24 Jul 2020 14:50:03 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595602203;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:in-reply-to;
 bh=TB6ji7a43ZCHERjgPgXcqAao5g0lQPKrWUbY/b1zT7w=;
 b=cPjIWYm8pEqHiTUWRZuMOSt6H3gHc08e3lObRjLRCUtxYlJI6C+HaXak
 uB+4QkH9CPaBaolz1K/KKHHWiViHREr9Pf3HGqO7f7qR+BV+78JKPGxtZ
 dxgYP9mpeywsMH64C64EFR7Jd+t92oOnhXp019oMDssoLxL9/+q2a5GbO E=;
Authentication-Results: esa3.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: e38tHvubSEl/OAFsy4+7Xl7djadcZrsyYGl/Hbh3zLKBZQG63b1uzdKWwl/4JNSzQ23arLPscW
 7Ja60I9ksMCFgzTl5iF6SaPWpur7nNlSt3TjTB6gm7pdUp4OX9O1oW/LcOUjGuzxGx2e0D0+QP
 +Nm3AktNyNJQ+yBN3VpFiKeTO/qR5R25MFkUmXSDP/jlq0pUQNiK9dm3U3g/uziNSzTBPF00X0
 KZdgoJl7cWoPl5MuMc3ks5Inp5IVkHUbPwPKF+I+iiwXTdclyOH8xwQ+SkvTbv43sfC3PcwtsV
 uTM=
X-SBRS: 2.7
X-MesageID: 23129670
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,391,1589256000"; d="scan'208";a="23129670"
Date: Fri, 24 Jul 2020 16:49:55 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Stefano Stabellini <sstabellini@kernel.org>
Subject: Re: [RFC PATCH v1 2/4] xen/arm: Discovering PCI devices and add the
 PCI devices in XEN.
Message-ID: <20200724144955.GK7191@Air-de-Roger>
References: <cover.1595511416.git.rahul.singh@arm.com>
 <666df0147054dda8af13ae74a89be44c69984295.1595511416.git.rahul.singh@arm.com>
 <alpine.DEB.2.21.2007231337140.17562@sstabellini-ThinkPad-T480s>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
In-Reply-To: <alpine.DEB.2.21.2007231337140.17562@sstabellini-ThinkPad-T480s>
Cc: Rahul Singh <rahul.singh@arm.com>, Julien Grall <julien@xen.org>,
 andrew.cooper3@citrix.com, Bertrand.Marquis@arm.com, jbeulich@suse.com,
 xen-devel@lists.xenproject.org, nd@arm.com,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>

On Thu, Jul 23, 2020 at 01:44:03PM -0700, Stefano Stabellini wrote:
> On Thu, 23 Jul 2020, Rahul Singh wrote:
> > Hardware domain is in charge of doing the PCI enumeration and will
> > discover the PCI devices and then will communicate to XEN via the
> > hypercall PHYSDEVOP_pci_device_add to add the PCI devices in XEN.
> > 
> > Change-Id: Ie87e19741689503b4b62da911c8dc2ee318584ac
> 
> Same question about Change-Id
> 
> 
> > Signed-off-by: Rahul Singh <rahul.singh@arm.com>
> > ---
> >  xen/arch/arm/physdev.c | 42 +++++++++++++++++++++++++++++++++++++++---
> >  1 file changed, 39 insertions(+), 3 deletions(-)
> > 
> > diff --git a/xen/arch/arm/physdev.c b/xen/arch/arm/physdev.c
> > index e91355fe22..274720f98a 100644
> > --- a/xen/arch/arm/physdev.c
> > +++ b/xen/arch/arm/physdev.c
> > @@ -9,12 +9,48 @@
> >  #include <xen/errno.h>
> >  #include <xen/sched.h>
> >  #include <asm/hypercall.h>
> > -
> > +#include <xen/guest_access.h>
> > +#include <xsm/xsm.h>
> >  
> >  int do_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
> >  {
> > -    gdprintk(XENLOG_DEBUG, "PHYSDEVOP cmd=%d: not implemented\n", cmd);
> > -    return -ENOSYS;
> > +    int ret = 0;
> > +
> > +    switch ( cmd )
> > +    {
> > +#ifdef CONFIG_HAS_PCI
> > +        case PHYSDEVOP_pci_device_add:
> > +            {
> > +                struct physdev_pci_device_add add;
> > +                struct pci_dev_info pdev_info;
> > +                nodeid_t node = NUMA_NO_NODE;
> > +
> > +                ret = -EFAULT;
> > +                if ( copy_from_guest(&add, arg, 1) != 0 )
> > +                    break;
> > +
> > +                pdev_info.is_extfn = !!(add.flags & XEN_PCI_DEV_EXTFN);
> > +                if ( add.flags & XEN_PCI_DEV_VIRTFN )
> > +                {
> > +                    pdev_info.is_virtfn = 1;
> > +                    pdev_info.physfn.bus = add.physfn.bus;
> > +                    pdev_info.physfn.devfn = add.physfn.devfn;
> > +                }
> > +                else
> > +                    pdev_info.is_virtfn = 0;
> > +
> > +                ret = pci_add_device(add.seg, add.bus, add.devfn,
> > +                                &pdev_info, node);
> > +
> > +                break;
> > +            }
> > +#endif
> > +        default:
> > +            gdprintk(XENLOG_DEBUG, "PHYSDEVOP cmd=%d: not implemented\n", cmd);
> > +            ret = -ENOSYS;
> > +    }
> 
> I think we should make the implementation common between arm and x86 by
> creating xen/common/physdev.c:do_physdev_op as a shared entry point for
> PHYSDEVOP hypercalls implementations. See for instance:
> 
> xen/common/sysctl.c:do_sysctl
> 
> and
> 
> xen/arch/arm/sysctl.c:arch_do_sysctl
> xen/arch/x86/sysctl.c:arch_do_sysctl
> 
> 
> Jan, Andrew, Roger, any opinions?

Oh, physdev ops don't have a common entry point, it's all per-arch.
Since Arm has no physdev ops at all, I think we should start by adding
a common do_physdev_op and move PHYSDEVOP_pci_device_add into it,
leaving the rest of x86 operations as arch_do_physdev_op.
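[Editor's note: the common/arch split suggested above could be sketched roughly as follows. This is an illustrative, self-contained model only; hypervisor types are stubbed out, `arch_do_physdev_op` is the hypothetical per-arch fallback name, and the real code would use `XEN_GUEST_HANDLE_PARAM` and the actual guest-copy helpers.]

```c
/* Sketch of the proposed split: a common do_physdev_op() handles
 * PHYSDEVOP_pci_device_add itself and forwards every other command to a
 * per-arch arch_do_physdev_op(), analogous to how common/sysctl.c's
 * do_sysctl() falls back to arch_do_sysctl(). */
#include <assert.h>
#include <errno.h>
#include <stdio.h>

#define PHYSDEVOP_pci_device_add 25 /* value from xen/include/public/physdev.h */

/* Per-arch fallback: on Arm this would reject everything it doesn't know. */
static int arch_do_physdev_op(int cmd, void *arg)
{
    (void)arg;
    fprintf(stderr, "PHYSDEVOP cmd=%d: not implemented\n", cmd);
    return -ENOSYS;
}

/* Common entry point, shared by all architectures. */
static int do_physdev_op(int cmd, void *arg)
{
    switch ( cmd )
    {
    case PHYSDEVOP_pci_device_add:
        /* Common handling: copy the argument from the guest and call
         * pci_add_device(); elided in this sketch. */
        (void)arg;
        return 0;
    default:
        return arch_do_physdev_op(cmd, arg);
    }
}
```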

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Fri Jul 24 15:01:29 2020
From: "Srinivas Bangalore" <srini@yujala.com>
To: "'Julien Grall'" <julien@xen.org>, <xen-devel@lists.xenproject.org>,
 "'Christopher Clark'" <christopher.w.clark@gmail.com>
References: <002801d66051$90fe2300$b2fa6900$@yujala.com>
 <9736680b-1c81-652b-552b-4103341bad50@xen.org>
In-Reply-To: <9736680b-1c81-652b-552b-4103341bad50@xen.org>
Subject: RE: Porting Xen to Jetson Nano
Date: Fri, 24 Jul 2020 08:01:12 -0700
Message-ID: <000001d661cb$45cdaa10$d168fe30$@yujala.com>
MIME-Version: 1.0
Content-Type: multipart/mixed;
 boundary="----=_NextPart_000_0001_01D66190.996FE380"


------=_NextPart_000_0001_01D66190.996FE380
Content-Type: text/plain;
	charset="iso-8859-1"

Hi Julien,

Thanks for the tips. Comments inline...

Regards,
Srini

-----Original Message-----
From: Xen-devel <xen-devel-bounces@lists.xenproject.org> On Behalf Of Julien Grall
Sent: Thursday, July 23, 2020 11:04 AM
To: Srinivas Bangalore <srini@yujala.com>; xen-devel@lists.xenproject.org;
Christopher Clark <christopher.w.clark@gmail.com>
Subject: Re: Porting Xen to Jetson Nano

On 22/07/2020 18:57, Srinivas Bangalore wrote:
> Dear Xen experts,

Hello,

> Would greatly appreciate some hints on how to move forward with this one…

 From your first set of original log:

 > Xen version 4.8.5 (srinivas@) (aarch64-linux-gnu-gcc (Ubuntu/Linaro
 > 7.5.0-3ubuntu1~18.04) 7.5.0) debug=n  Sun Jul 19 07:44:00 PDT 2020

I would recommend to compile Xen with debug enabled (CONFIG_DEBUG=y) as it
may provide you more information of what's happening.

Xen image rebuilt now with CONFIG_DEBUG=y. Also changed bootargs as
suggested.

(XEN) MODULE[0]: 00000000fc7f8000 - 00000000fc82d000 Device Tree
(XEN) MODULE[1]: 00000000e1000000 - 00000000e31bc808 Kernel
console=hvc0 earlycon=xenboot rootfstype=ext4 rw rootwait
root=/dev/mmcblk0p1 rdinit=/sbin/init clk_ignore_unused
(XEN)  RESVD[0]: 0000000080000000 - 0000000080020000
(XEN)  RESVD[1]: 00000000e3500000 - 00000000e3535000
(XEN)  RESVD[2]: 00000000fc7f8000 - 00000000fc82d000
(XEN)
(XEN) Command line: console=dtuart sync_console dom0_mem=128M log_lvl=all
guest_loglvl=all console_to_ring
(XEN) Placing Xen at 0x00000000fec00000-0x00000000fee00000
(XEN) Update BOOTMOD_XEN from 0000000080080000-0000000080198e01 =>
00000000fec00000-00000000fed18e01
(XEN) Domain heap initialised
(XEN) Platform: Tegra
(XEN) Taking dtuart configuration from /chosen/stdout-path
(XEN) Looking for dtuart at "/serial@70 Xen 4.8.5
(XEN) Xen version 4.8.5 (srinivas@) (aarch64-linux-gnu-gcc (Ubuntu/Linaro
7.5.0-3ubuntu1~18.04) 7.5.0) debug=y  Thu Jul 23 21:17:23 PDT 2020


Also, aside from the Tegra series, do you have any other patches on top?

No other patches.

[...]

> (XEN) BANK[0] 0x000000a0000000-0x000000c0000000 (512MB)
> (XEN) Grant table range: 0x000000fec00000-0x000000fec60000
> (XEN) Loading zImage from 00000000e1000000 to
> 00000000a0080000-00000000a223c808
> (XEN) Allocating PPI 16 for event channel interrupt
> (XEN) Loading dom0 DTB to 0x00000000a8000000-0x00000000a803435c

[...]

> (XEN) *** Dumping CPU0 guest state (d0v0): ***
> (XEN) ----[ Xen-4.8.5  arm64  debug=n   Tainted:  C   ]----
> (XEN) CPU:    0
> (XEN) PC:     00000000a0080000

PC is pointing to the entry point of your kernel...

> (XEN) LR:     0000000000000000
> (XEN) SP_EL0: 0000000000000000
> (XEN) SP_EL1: 0000000000000000
> (XEN) CPSR:   000001c5 MODE:64-bit EL1h (Guest Kernel, handler)
> (XEN)      X0: 00000000a8000000  X1: 0000000000000000  X2: 0000000000000000
> (XEN)      X3: 0000000000000000  X4: 0000000000000000  X5: 0000000000000000
> (XEN)      X6: 0000000000000000  X7: 0000000000000000  X8: 0000000000000000
> (XEN)      X9: 0000000000000000 X10: 0000000000000000 X11: 0000000000000000
> (XEN)     X12: 0000000000000000 X13: 0000000000000000 X14: 0000000000000000
> (XEN)     X15: 0000000000000000 X16: 0000000000000000 X17: 0000000000000000
> (XEN)     X18: 0000000000000000 X19: 0000000000000000 X20: 0000000000000000
> (XEN)     X21: 0000000000000000 X22: 0000000000000000 X23: 0000000000000000
> (XEN)     X24: 0000000000000000 X25: 0000000000000000 X26: 0000000000000000
> (XEN)     X27: 0000000000000000 X28: 0000000000000000  FP: 0000000000000000
> (XEN)
> (XEN)    ELR_EL1: 0000000000000000
> (XEN)    ESR_EL1: 00000000
> (XEN)    FAR_EL1: 0000000000000000
> (XEN)
> (XEN)  SCTLR_EL1: 00c50838
> (XEN)    TCR_EL1: 00000000
> (XEN)  TTBR0_EL1: 0000000000000000
> (XEN)  TTBR1_EL1: 0000000000000000
> (XEN)
> (XEN)   VTCR_EL2: 80043594
> (XEN)  VTTBR_EL2: 000100017f0f9000
> (XEN)
> (XEN)  SCTLR_EL2: 30cd183d
> (XEN)    HCR_EL2: 000000008038663f
> (XEN)  TTBR0_EL2: 00000000fecfc000
> (XEN)
> (XEN)    ESR_EL2: 8200000d

... it looks like we are receiving a trap in EL2 because it can't execute
the instruction. This is a bit odd as the p2m (stage-2 page-tables) should
be configured to allow execution. It would be useful if you can dump the
p2m walk here. The following patch should do the job (not compile tested):

diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index d578a5c598dd..af1834cdf735 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -2489,9 +2489,14 @@ static void do_trap_instr_abort_guest(struct cpu_user_regs *regs,
           */
          rc = gva_to_ipa(gva, &gpa, GV2M_READ);
          if ( rc == -EFAULT )
+        {
+            printk("Unable to translate 0x%lx\n", gva);
              return; /* Try again */
+        }
      }

+    dump_p2m_walk(current->domain, gpa);
+
      switch ( fsc )
      {
      case FSC_FLT_PERM:

I believe you meant 'dump_p2m_lookup'? I couldn't find 'dump_p2m_walk' in
the source, so included 'dump_p2m_lookup' (which actually calls
'dump_p2m_walk').
Here's the output, truncated since it goes into an infinite loop printing
the same info:
[..]
(XEN) Allocating 1:1 mappings totalling 128MB for dom0:
(XEN) BANK[0] 0x00000088000000-0x00000090000000 (128MB)
(XEN) Grant table range: 0x000000fec00000-0x000000fec68000
(XEN) Loading zImage from 00000000e1000000 to
0000000088080000-000000008a23c808
(XEN) Allocating PPI 16 for event channel interrupt
(XEN) Loading dom0 DTB to 0x000000008fe00000-0x000000008fe34444
(XEN) Scrubbing Free RAM on 1 nodes using 4 CPUs
(XEN) ........done.
(XEN) Initial low memory virq threshold set at 0x4000 pages.
(XEN) Std. Loglevel: All
(XEN) Guest Loglevel: All
(XEN) ***************************************************
(XEN) WARNING: CONSOLE OUTPUT IS SYNCHRONOUS
(XEN) This option is intended to aid debugging of Xen by ensuring
(XEN) that all output is synchronously delivered on the serial line.
(XEN) However it can introduce SIGNIFICANT latencies and affect
(XEN) timekeeping. It is NOT recommended for production use!
(XEN) ***************************************************
(XEN) 3... 2... 1...
(XEN) *** Serial input -> DOM0 (type 'CTRL-a' three times to switch input to
Xen)
(XEN) Freed 296kB init memory.
(XEN) dom0 IPA 0x0000000088080000
(XEN) P2M @ 0000000803fc3d40 mfn:0x17f0f5
(XEN) 0TH[0x0] = 0x004000017f0f377f
(XEN) 1ST[0x2] = 0x02c00000800006fd
(XEN) Mem access check
(XEN) dom0 IPA 0x0000000088080000
(XEN) P2M @ 0000000803fc3d40 mfn:0x17f0f5
(XEN) 0TH[0x0] = 0x004000017f0f377f
(XEN) 1ST[0x2] = 0x02c00000800006fd
(XEN) Mem access check

[..]

I added the printk for 'Mem access check' inside the 'case FSC_FLT_PERM' of
the switch (fsc) code following the lookup. That's what you see in the
output above.
So it does seem like there's a memory access fault somehow.
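[Editor's note: the ESR_EL2 value 0x8200000d quoted below can be decoded by hand and cross-checked against this diagnosis. Using the generic ARMv8 ESR_ELx layout (EC in bits [31:26], IL in bit 25, ISS in bits [24:0]; for instruction aborts the low 6 ISS bits are the fault status code), EC 0x20 is an instruction abort from a lower EL and IFSC 0x0d is a permission fault at level 1, matching the FSC_FLT_PERM path discussed in the thread. A minimal sketch of that decoding, not part of the original mail:]

```c
/* Decode an ESR_ELx value per the generic ARMv8 layout:
 * EC (exception class) in bits [31:26], IL in bit 25, ISS in [24:0];
 * for instruction aborts the low 6 ISS bits are the fault status code. */
#include <stdint.h>

static uint32_t esr_ec(uint32_t esr)   { return esr >> 26; }       /* exception class */
static uint32_t esr_il(uint32_t esr)   { return (esr >> 25) & 1; } /* instruction length bit */
static uint32_t esr_ifsc(uint32_t esr) { return esr & 0x3f; }      /* instr. fault status code */
```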
> (XEN)  HPFAR_EL2: 0000000000000000
> (XEN)    FAR_EL2: 00000000a0080000
> (XEN)
> (XEN) Guest stack trace from sp=0:
> (XEN)   Failed to convert stack to physical address

[...]

> It seems the DOM0 kernel did not get added to the task list…

 From a look at the dump, dom0 vCPU0 has been scheduled and running on
pCPU0.

> Boot args for Xen and Dom0 are here:
> (XEN) Checking for initrd in /chosen
> (XEN) linux,initrd limits invalid: 0000000084100000 >= 0000000084100000
> (XEN) RAM: 0000000080000000 - 00000000fedfffff
> (XEN) RAM: 0000000100000000 - 000000017f1fffff
> (XEN)
> (XEN) MODULE[0]: 00000000fc7f8000 - 00000000fc82d000 Device Tree
> (XEN) MODULE[1]: 00000000e1000000 - 00000000e31bc808 Kernel
> console=hvc0 earlyprintk=xen earlycon=xen rootfstype=ext4 rw rootwait
> root=/dev/mmcblk0p1 rdinit=/sbin/init

You want to use earlycon=xenboot here.

> (XEN)  RESVD[0]: 0000000080000000 - 0000000080020000
> (XEN)  RESVD[1]: 00000000e3500000 - 00000000e3535000
> (XEN)  RESVD[2]: 00000000fc7f8000 - 00000000fc82d000
> (XEN)
> (XEN) Command line: console=dtuart earlyprintk=xen
> earlycon=uart8250,mmio32,0x70006000 sync_console dom0_mem=512M
> log_lvl=all guest_loglvl=all console_to_ring

FWIW, earlyprintk and earlycon are not understood by Xen. They are only
useful for Dom0.

BTW, to Christopher's point, the dtb did have some issues. I had to hack
the 'interrupt-controller' node to get the GICv2 working.
I have attached the .dts file that I'm using.

Best regards,

--
Julien Grall

------=_NextPart_000_0001_01D66190.996FE380
Content-Type: application/octet-stream;
	name="jetson-nano-b00.dts"
Content-Disposition: attachment;
	filename="jetson-nano-b00.dts"

/dts-v1/;

/memreserve/	0x0000000080000000 0x0000000000020000;
/ {
	compatible = "nvidia,p3449-0000-b00+p3448-0000-b00", "nvidia,jetson-nano", "nvidia,tegra210";
	interrupt-parent = <0x1>;
	#address-cells = <0x2>;
	#size-cells = <0x2>;
	nvidia,dtbbuildtime = "Jul 23 2020", "17:30:48";
	nvidia,boardids = "3448";
	nvidia,proc-boardid = "3448";
	nvidia,pmu-boardid = "3448";
	nvidia,fastboot-usb-pid = <0xb442>;
	model = "NVIDIA Jetson Nano Developer Kit";
	nvidia,dtsfilename = "../arch/arm64/boot/dts/../../../../../../hardware/nvidia/platform/t210/porg/kernel-dts/tegra210-p3448-0000-p3449-0000-b00.dts";

	thermal-zones {

		AO-therm {
			status = "okay";
			polling-delay = <0x3e8>;
			polling-delay-passive = <0x3e8>;
			thermal-sensors = <0x2>;

			trips {

				trip_shutdown {
					temperature = <0x1adb0>;
					hysteresis = <0x0>;
					type = "critical";
					writable;
				};

				gpu-scaling0 {
					temperature = <0xffff9e58>;
					hysteresis = <0x0>;
					type = "active";
					linux,phandle = <0xa7>;
					phandle = <0xa7>;
				};

				gpu-scaling1 {
					temperature = <0x3a98>;
					hysteresis = <0x3e8>;
					type = "active";
					linux,phandle = <0x3>;
					phandle = <0x3>;
				};

				gpu-scaling2 {
					temperature = <0x7530>;
					hysteresis = <0x3e8>;
					type = "active";
					linux,phandle = <0x5>;
					phandle = <0x5>;
				};

				gpu-scaling3 {
					temperature = <0xc350>;
					hysteresis = <0x3e8>;
					type = "active";
					linux,phandle = <0x6>;
					phandle = <0x6>;
				};

				gpu-scaling4 {
					temperature = <0x11170>;
					hysteresis = <0x3e8>;
					type = "active";
					linux,phandle = <0x7>;
					phandle = <0x7>;
				};

				gpu-scaling5 {
					temperature = <0x19a28>;
					hysteresis = <0x3e8>;
					type = "active";
					linux,phandle = <0x8>;
					phandle = <0x8>;
				};

				gpu-vmax1 {
					temperature = <0x14438>;
					hysteresis = <0x3e8>;
					type = "active";
					linux,phandle = <0x9>;
					phandle = <0x9>;
				};

				core_dvfs_floor_trip0 {
					temperature = <0x3a98>;
					hysteresis = <0x3e8>;
					type = "active";
					linux,phandle = <0xb>;
					phandle = <0xb>;
				};

				core_dvfs_cap_trip0 {
					temperature = <0x14c08>;
					hysteresis = <0x3e8>;
					type = "active";
					linux,phandle = <0xd>;
					phandle = <0xd>;
				};

				dfll-floor-trip0 {
					temperature = <0x3a98>;
					hysteresis = <0x3e8>;
					type = "active";
					linux,phandle = <0xf>;
					phandle = <0xf>;
				};
			};

			thermal-zone-params {
				governor-name = "pid_thermal_gov";
			};

			cooling-maps {

				gpu-scaling-map1 {
					trip = <0x3>;
					cooling-device = <0x4 0x1 0x1>;
				};

				gpu-scaling-map2 {
					trip = <0x5>;
					cooling-device = <0x4 0x2 0x2>;
				};

				gpu_scaling_map3 {
					trip = <0x6>;
					cooling-device = <0x4 0x3 0x3>;
				};

				gpu-scaling-map4 {
					trip = <0x7>;
					cooling-device = <0x4 0x4 0x4>;
				};

				gpu-scaling-map5 {
					trip = <0x8>;
					cooling-device = <0x4 0x5 0x5>;
				};

				gpu-vmax-map1 {
					trip = <0x9>;
					cooling-device = <0xa 0x1 0x1>;
				};

				core_dvfs_floor_map0 {
					trip = <0xb>;
					cooling-device = <0xc 0x1 0x1>;
				};

				core_dvfs_cap_map0 {
					trip = <0xd>;
					cooling-device = <0xe 0x1 0x1>;
				};

				dfll-floor-map0 {
					trip = <0xf>;
					cooling-device = <0x10 0x1 0x1>;
				};
			};
		};

		CPU-therm {
			polling-delay = <0x0>;
			polling-delay-passive = <0x1f4>;
			thermal-sensors = <0x11 0x0>;
			status = "okay";

			thermal-zone-params {
				governor-name = "step_wise";
				max_err_temp = <0x2328>;
				max_err_gain = <0x3e8>;
				gain_p = <0x3e8>;
				gain_d = <0x0>;
				up_compensation = <0x14>;
				down_compensation = <0x14>;
			};

			trips {

				dfll-cap-trip0 {
					temperature = <0x101d0>;
					hysteresis = <0x3e8>;
					type = "active";
					linux,phandle = <0x16>;
					phandle = <0x16>;
				};

				dfll-cap-trip1 {
					temperature = <0x14ff0>;
					hysteresis = <0x3e8>;
					type = "active";
					linux,phandle = <0x18>;
					phandle = <0x18>;
				};

				cpu_critical {
					temperature = <0x18e70>;
					hysteresis = <0x0>;
					type = "critical";
					writable;
				};

				cpu_heavy {
					temperature = <0x18894>;
					hysteresis = <0x0>;
					type = "hot";
					writable;
					linux,phandle = <0x12>;
					phandle = <0x12>;
				};

				cpu_throttle {
					temperature = <0x17ae8>;
					hysteresis = <0x0>;
					type = "passive";
					writable;
					linux,phandle = <0x14>;
					phandle = <0x14>;
				};
			};

			cooling-maps {

				map1 {
					trip = <0x12>;
					cdev-type = "tegra_heavy";
					cooling-device = <0x13 0x1 0x1>;
				};

				map2 {
					trip = <0x14>;
					cdev-type = "cpu-balanced";
					cooling-device = <0x15 0xffffffff 0xffffffff>;
				};

				dfll-cap-map0 {
					trip = <0x16>;
					cooling-device = <0x17 0x1 0x1>;
				};

				dfll-cap-map1 {
					trip = <0x18>;
					cooling-device = <0x17 0x2 0x2>;
				};
			};
		};

		GPU-therm {
			polling-delay = <0x0>;
			polling-delay-passive = <0x1f4>;
			thermal-sensors = <0x11 0x2>;
			status = "okay";

			thermal-zone-params {
				governor-name = "step_wise";
				max_err_temp = <0x2328>;
				max_err_gain = <0x3e8>;
				gain_p = <0x3e8>;
				gain_d = <0x0>;
				up_compensation = <0x14>;
				down_compensation = <0x14>;
			};

			trips {

				gpu_critical {
					temperature = <0x19064>;
					hysteresis = <0x0>;
					type = "critical";
					writable;
				};

				gpu_heavy {
					temperature = <0x18a88>;
					hysteresis = <0x0>;
					type = "hot";
					writable;
					linux,phandle = <0x19>;
					phandle = <0x19>;
				};

				gpu_throttle {
					temperature = <0x17cdc>;
					hysteresis = <0x0>;
					type = "passive";
					writable;
					linux,phandle = <0x1a>;
					phandle = <0x1a>;
				};
			};

			cooling-maps {

				map1 {
					trip = <0x19>;
					cdev-type = "tegra_heavy";
					cooling-device = <0x13 0x1 0x1>;
				};

				map2 {
					trip = <0x1a>;
					cdev-type = "gpu-balanced";
					cooling-device = <0x1b 0xffffffff 0xffffffff>;
				};
			};
		};

		PLL-therm {
			polling-delay = <0x0>;
			polling-delay-passive = <0x3e8>;
			thermal-sensors = <0x11 0x3>;
			status = "okay";

			thermal-zone-params {
				governor-name = "pid_thermal_gov";
				max_err_temp = <0x2328>;
				max_err_gain = <0x3e8>;
				gain_p = <0x3e8>;
				gain_d = <0x0>;
				up_compensation = <0x14>;
				down_compensation = <0x14>;
			};

			trips {

				dram-throttle {
					temperature = <0x11170>;
					hysteresis = <0x3e8>;
					type = "passive";
					writable;
					linux,phandle = <0x1c>;
					phandle = <0x1c>;
				};
			};

			cooling-maps {

				map-tegra-dram {
					trip = <0x1c>;
					cooling-device = <0x1d 0x1 0x1>;
					cdev-type = "tegra-dram";
				};
			};
		};

		PMIC-Die {
			polling-delay = <0x0>;
			polling-delay-passive = <0x0>;
			thermal-sensors = <0x1e>;

			trips {

				hot-die {
					temperature = <0x1d4c0>;
					type = "active";
					hysteresis = <0x0>;
					linux,phandle = <0x1f>;
					phandle = <0x1f>;
				};
			};

			cooling-maps {

				map0 {
					trip = <0x1f>;
					cooling-device = <0x20 0xffffffff 0xffffffff>;
					contribution = <0x64>;
					cdev-type = "emergency-balanced";
				};
			};
		};
	};

	core_dvfs_cdev_floor {
		compatible = "nvidia,tegra-core-cdev-action";
		cdev-type = "CORE-floor";
		#cooling-cells = <0x2>;
		linux,phandle = <0xc>;
		phandle = <0xc>;
	};

	core_dvfs_cdev_cap {
		compatible = "nvidia,tegra-core-cdev-action";
		cdev-type = "CORE-cap";
		#cooling-cells = <0x2>;
		clocks = <0x21 0x198 0x21 0x1a1 0x21 0x1b8 0x21 0x1f6 0x21 0x206>;
		clock-names = "c2bus_cap", "c3bus_cap", "sclk_cap", "host1x_cap", "adsp_cap";
		linux,phandle = <0xe>;
		phandle = <0xe>;
	};

	power-domain {
		compatible = "tegra-power-domains";

		host1x-pd {
			compatible = "nvidia,tegra210-host1x-pd";
			is_off;
			host1x;
			#power-domain-cells = <0x0>;
			linux,phandle = <0x23>;
			phandle = <0x23>;
		};

		ape-pd {
			compatible = "nvidia,tegra210-ape-pd";
			is_off;
			#power-domain-cells = <0x0>;
			partition-id = <0x1b>;
			clocks = <0x21 0xc6 0x21 0x6b 0x21 0xc7>;
			clock-names = "ape", "apb2ape", "adsp";
			linux,phandle = <0x22>;
			phandle = <0x22>;
		};

		adsp-pd {
			compatible = "nvidia,tegra210-adsp-pd";
			is_off;
			#power-domain-cells = <0x0>;
			power-domains = <0x22>;
			linux,phandle = <0xdf>;
			phandle = <0xdf>;
		};

		tsec-pd {
			compatible = "nvidia,tegra210-tsec-pd";
			is_off;
			#power-domain-cells = <0x0>;
			power-domains = <0x23>;
			linux,phandle = <0x6b>;
			phandle = <0x6b>;
		};

		nvdec-pd {
			compatible = "nvidia,tegra210-nvdec-pd";
			is_off;
			#power-domain-cells = <0x0>;
			power-domains = <0x23>;
			partition-id = <0x19>;
			linux,phandle = <0x6c>;
			phandle = <0x6c>;
		};

		ve2-pd {
			compatible = "nvidia,tegra210-ve2-pd";
			is_off;
			#power-domain-cells = <0x0>;
			power-domains = <0x23>;
			partition-id = <0x1d>;
			linux,phandle = <0x5c>;
			phandle = <0x5c>;
		};

		vic03-pd {
			compatible = "nvidia,tegra210-vic03-pd";
			is_off;
			#power-domain-cells = <0x0>;
			power-domains = <0x23>;
			partition-id = <0x17>;
			linux,phandle = <0x69>;
			phandle = <0x69>;
		};

		msenc-pd {
			compatible = "nvidia,tegra210-msenc-pd";
			is_off;
			#power-domain-cells = <0x0>;
			power-domains = <0x23>;
			partition-id = <0x6>;
			linux,phandle = <0x6a>;
			phandle = <0x6a>;
		};

		nvjpg-pd {
			compatible = "nvidia,tegra210-nvjpg-pd";
			is_off;
			#power-domain-cells = <0x0>;
			power-domains = <0x23>;
			partition-id = <0x1a>;
			linux,phandle = <0x6d>;
			phandle = <0x6d>;
		};

		pcie-pd {
			compatible = "nvidia,tegra210-pcie-pd";
			is_off;
			#power-domain-cells = <0x0>;
			partition-id = <0x3>;
			linux,phandle = <0x7a>;
			phandle = <0x7a>;
		};

		ve-pd {
			compatible = "nvidia,tegra210-ve-pd";
			is_off;
			#power-domain-cells = <0x0>;
			power-domains = <0x23>;
			partition-id = <0x2>;
			linux,phandle = <0x59>;
			phandle = <0x59>;
		};

		sata-pd {
			compatible = "nvidia,tegra210-sata-pd";
			#power-domain-cells = <0x0>;
			partition-id = <0x8>;
			linux,phandle = <0xe0>;
			phandle = <0xe0>;
		};

		sor-pd {
			compatible = "nvidia,tegra210-sor-pd";
			#power-domain-cells = <0x0>;
			partition-id = <0x11>;
			linux,phandle = <0xe1>;
			phandle = <0xe1>;
		};

		disa-pd {
			compatible = "nvidia,tegra210-disa-pd";
			#power-domain-cells = <0x0>;
			partition-id = <0x12>;
			linux,phandle = <0xe2>;
			phandle = <0xe2>;
		};

		disb-pd {
			compatible = "nvidia,tegra210-disb-pd";
			#power-domain-cells = <0x0>;
			partition-id = <0x13>;
			linux,phandle = <0xe3>;
			phandle = <0xe3>;
		};

		xusba-pd {
			compatible = "nvidia,tegra210-xusba-pd";
			#power-domain-cells = <0x0>;
			partition-id = <0x14>;
			linux,phandle = <0xe4>;
			phandle = <0xe4>;
		};

		xusbb-pd {
			compatible = "nvidia,tegra210-xusbb-pd";
			#power-domain-cells = <0x0>;
			partition-id = <0x15>;
			linux,phandle = <0xe5>;
			phandle = <0xe5>;
		};

		xusbc-pd {
			compatible = "nvidia,tegra210-xusbc-pd";
			#power-domain-cells = <0x0>;
			partition-id = <0x16>;
			linux,phandle = <0xe6>;
			phandle = <0xe6>;
		};
	};
=0A=
	actmon@6000c800 {=0A=
		status =3D "okay";=0A=
		#address-cells =3D <0x2>;=0A=
		#size-cells =3D <0x2>;=0A=
		compatible =3D "nvidia,tegra210-cactmon";=0A=
		reg =3D <0x0 0x6000c800 0x0 0x400>;=0A=
		interrupts =3D <0x0 0x2d 0x4>;=0A=
		clocks =3D <0x21 0x77>;=0A=
		clock-names =3D "actmon";=0A=
		resets =3D <0x21 0x77>;=0A=
		reset-names =3D "actmon";=0A=
		nvidia,sample_period =3D [14];=0A=
=0A=
		mc_all {=0A=
			#address-cells =3D <0x1>;=0A=
			#size-cells =3D <0x0>;=0A=
			nvidia,con_id =3D "mc_all";=0A=
			nvidia,dev_id =3D "actmon";=0A=
			nvidia,reg_offs =3D <0x1c0>;=0A=
			nvidia,irq_mask =3D <0x4000000>;=0A=
			nvidia,suspend_freq =3D <0x324b0>;=0A=
			nvidia,boost_freq_step =3D <0x3e80>;=0A=
			nvidia,boost_up_coef =3D <0xc8>;=0A=
			nvidia,boost_down_coef =3D <0x32>;=0A=
			nvidia,boost_up_threshold =3D <0x3c>;=0A=
			nvidia,boost_down_threshold =3D <0x28>;=0A=
			nvidia,up_wmark_window =3D [01];=0A=
			nvidia,down_wmark_window =3D [03];=0A=
			nvidia,avg_window_log2 =3D [07];=0A=
			nvidia,count_weight =3D <0x400>;=0A=
			nvidia,max_dram_channels =3D [02];=0A=
			nvidia,type =3D <0x1>;=0A=
			status =3D "okay";=0A=
		};=0A=
	};=0A=
=0A=
	aliases {=0A=
		sdhci0 =3D "/sdhci@700b0000";=0A=
		sdhci1 =3D "/sdhci@700b0200";=0A=
		sdhci2 =3D "/sdhci@700b0400";=0A=
		sdhci3 =3D "/sdhci@700b0600";=0A=
		i2c0 =3D "/i2c@7000c000";=0A=
		i2c1 =3D "/i2c@7000c400";=0A=
		i2c2 =3D "/i2c@7000c500";=0A=
		i2c3 =3D "/i2c@7000c700";=0A=
		i2c4 =3D "/i2c@7000d000";=0A=
		i2c5 =3D "/i2c@7000d100";=0A=
		i2c6 =3D "/host1x/i2c@546c0000";=0A=
		spi0 =3D "/spi@7000d400";=0A=
		spi1 =3D "/spi@7000d600";=0A=
		spi2 =3D "/spi@7000d800";=0A=
		spi3 =3D "/spi@7000da00";=0A=
		qspi6 =3D "/spi@70410000";=0A=
		serial0 =3D "/serial@70006000";=0A=
		serial1 =3D "/serial@70006040";=0A=
		serial2 =3D "/serial@70006200";=0A=
		serial3 =3D "/serial@70006300";=0A=
		rtc0 =3D "/i2c@7000d000/max77620@3c";=0A=
		rtc1 =3D "/rtc";=0A=
	};=0A=
=0A=
	cpus {
		#address-cells = <0x2>;
		#size-cells = <0x0>;
		status = "okay";

		cpu@0 {
			device_type = "cpu";
			compatible = "arm,cortex-a57-64bit", "arm,armv8";
			reg = <0x0 0x0>;
			enable-method = "psci";
			cpu-idle-states = <0x24>;
			errata_hwcaps = <0x7>;
			cpu-ipc = <0x400>;
			next-level-cache = <0x25>;
			status = "okay";
			clocks = <0x21 0x126 0x21 0x127 0x21 0x103 0x21 0xf7 0x26>;
			clock-names = "cpu_g", "cpu_lp", "pll_x", "pll_p", "dfll";
			clock-latency = <0x493e0>;
			linux,phandle = <0x27>;
			phandle = <0x27>;
		};

		cpu@1 {
			device_type = "cpu";
			compatible = "arm,cortex-a57-64bit", "arm,armv8";
			reg = <0x0 0x1>;
			enable-method = "psci";
			cpu-idle-states = <0x24>;
			errata_hwcaps = <0x7>;
			cpu-ipc = <0x400>;
			next-level-cache = <0x25>;
			status = "okay";
			linux,phandle = <0x28>;
			phandle = <0x28>;
		};

		cpu@2 {
			device_type = "cpu";
			compatible = "arm,cortex-a57-64bit", "arm,armv8";
			reg = <0x0 0x2>;
			enable-method = "psci";
			cpu-idle-states = <0x24>;
			errata_hwcaps = <0x7>;
			cpu-ipc = <0x400>;
			next-level-cache = <0x25>;
			status = "okay";
			linux,phandle = <0x29>;
			phandle = <0x29>;
		};

		cpu@3 {
			device_type = "cpu";
			compatible = "arm,cortex-a57-64bit", "arm,armv8";
			reg = <0x0 0x3>;
			enable-method = "psci";
			cpu-idle-states = <0x24>;
			errata_hwcaps = <0x7>;
			cpu-ipc = <0x400>;
			next-level-cache = <0x25>;
			status = "okay";
			linux,phandle = <0x2a>;
			phandle = <0x2a>;
		};

		idle-states {
			entry-method = "psci";

			c7 {
				compatible = "arm,idle-state";
				arm,psci-suspend-param = <0x40000007>;
				wakeup-latency-us = <0x82>;
				min-residency-us = <0x3e8>;
				idle-state-name = "c7-cpu-powergated";
				status = "okay";
				linux,phandle = <0x24>;
				phandle = <0x24>;
			};

			cc6 {
				compatible = "arm,idle-state";
				arm,psci-suspend-param = <0x40000010>;
				wakeup-latency-us = <0xe6>;
				min-residency-us = <0x2710>;
				idle-state-name = "cc6-cluster-powergated";
				status = "okay";
				linux,phandle = <0xe7>;
				phandle = <0xe7>;
			};
		};

		l2-cache {
			compatible = "cache";
			linux,phandle = <0x25>;
			phandle = <0x25>;
		};
	};

	psci {
		compatible = "arm,psci-1.0";
		status = "okay";
		method = "smc";
	};

	tlk {
		compatible = "android,tlk-driver";
		status = "disabled";

		log {
			compatible = "android,ote-logger";
		};
	};

	arm-pmu {
		compatible = "arm,armv8-pmuv3";
		status = "okay";
		interrupts = <0x0 0x90 0x4 0x0 0x91 0x4 0x0 0x92 0x4 0x0 0x93 0x4>;
		interrupt-affinity = <0x27 0x28 0x29 0x2a>;
	};

	clock {
		compatible = "nvidia,tegra210-car";
		reg = <0x0 0x60006000 0x0 0x1000>;
		#clock-cells = <0x1>;
		#reset-cells = <0x1>;
		status = "okay";
		linux,phandle = <0x21>;
		phandle = <0x21>;
	};

	bwmgr {
		compatible = "nvidia,bwmgr";
		clocks = <0x21 0x212>;
		nvidia,bwmgr-use-shared-master;
		clock-names = "emc";
		status = "okay";
	};

	reserved-memory {
		#address-cells = <0x2>;
		#size-cells = <0x2>;
		ranges;

		iram-carveout {
			compatible = "nvidia,iram-carveout";
			reg = <0x0 0x40001000 0x0 0x3f000>;
			no-map;
			linux,phandle = <0x2d>;
			phandle = <0x2d>;
		};

		ramoops_carveout {
			compatible = "nvidia,ramoops";
			reg = <0x0 0xb0000000 0x0 0x200000>;
			no-map;
			linux,phandle = <0xe8>;
			phandle = <0xe8>;
		};

		vpr-carveout {
			compatible = "nvidia,vpr-carveout";
			size = <0x0 0x19000000>;
			alignment = <0x0 0x400000>;
			alloc-ranges = <0x0 0x80000000 0x0 0x70000000>;
			reusable;
			linux,phandle = <0x2c>;
			phandle = <0x2c>;
		};

		fb0_carveout {
			reg = <0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0>;
			reg-names = "surface", "lut";
			no-map;
			linux,phandle = <0x5d>;
			phandle = <0x5d>;
		};

		fb1_carveout {
			reg = <0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0>;
			reg-names = "surface", "lut";
			no-map;
			linux,phandle = <0x66>;
			phandle = <0x66>;
		};
	};

	tegra-carveouts {
		compatible = "nvidia,carveouts";
		iommus = <0x2b 0x6>;
		memory-region = <0x2c 0x2d>;
		status = "okay";
	};

	iommu {
		compatible = "nvidia,tegra210-smmu";
		reg = <0x0 0x70019000 0x0 0x1000 0x0 0x6000c000 0x0 0x1000>;
		status = "okay";
		#asids = <0x80>;
		dma-window = <0x0 0x80000000 0x0 0x7ff00000>;
		#iommu-cells = <0x1>;
		swgid-mask = <0x100fff 0xfffccdcf>;
		#num-translation-enable = <0x5>;
		#num-asid-security = <0x8>;
		domains = <0x2e 0x1004000 0x49 0x2f 0x80000000 0x0 0x30 0x0 0x4 0x31 0x404 0x0 0x31 0x8 0x0 0x32 0x1 0x0 0x32 0x2000000 0x0 0x32 0x4000000 0x0 0x32 0x8000000 0x0 0x32 0x10000000 0x0 0x32 0x2 0x0 0x32 0x0 0x100000 0x32 0xffffffff 0xffffffff>;
		linux,phandle = <0x2b>;
		phandle = <0x2b>;

		address-space-prop {

			common {
				iova-start = <0x0 0x80000000>;
				iova-size = <0x0 0x7ff00000>;
				num-pf-page = <0x0>;
				gap-page = <0x1>;
				linux,phandle = <0x32>;
				phandle = <0x32>;
			};

			ppcs {
				iova-start = <0x0 0x80000000>;
				iova-size = <0x0 0x7ff00000>;
				num-pf-page = <0x1>;
				gap-page = <0x1>;
				linux,phandle = <0x2e>;
				phandle = <0x2e>;
			};

			dc {
				iova-start = <0x0 0x10000>;
				iova-size = <0x0 0xfffeffff>;
				num-pf-page = <0x0>;
				gap-page = <0x0>;
				linux,phandle = <0x31>;
				phandle = <0x31>;
			};

			gpu {
				iova-start = <0x0 0x100000>;
				iova-size = <0x3 0xffefffff>;
				alignment = <0x20000>;
				num-pf-page = <0x0>;
				gap-page = <0x0>;
				linux,phandle = <0x2f>;
				phandle = <0x2f>;
			};

			ape {
				iova-start = <0x0 0x70300000>;
				iova-size = <0x0 0x8fc00000>;
				num-pf-page = <0x0>;
				gap-page = <0x1>;
				linux,phandle = <0x30>;
				phandle = <0x30>;
			};
		};
	};

	smmu_test {
		compatible = "nvidia,smmu_test";
		iommus = <0x2b 0x34>;
		linux,phandle = <0xe9>;
		phandle = <0xe9>;
	};

	dma_test {
		compatible = "nvidia,dma_test";
		linux,phandle = <0xea>;
		phandle = <0xea>;
	};

	bpmp {
		compatible = "nvidia,tegra210-bpmp";
		carveout-start = <0x80005000>;
		carveout-size = <0x10000>;
		resets = <0x21 0x1>;
		reset-names = "cop";
		clocks = <0x21 0x1ae>;
		clock-names = "sclk";
		reg = <0x0 0x70016000 0x0 0x2000 0x0 0x60001000 0x0 0x1000>;
		iommus = <0x2b 0x1>;
		status = "disabled";
	};

	mc {
		compatible = "nvidia,tegra-mc";
		reg-ranges = <0xa>;
		reg = <0x0 0x70019000 0x0 0xc 0x0 0x70019050 0x0 0x19c 0x0 0x70019200 0x0 0x24 0x0 0x7001929c 0x0 0x1b8 0x0 0x70019464 0x0 0x198 0x0 0x70019604 0x0 0x3b0 0x0 0x700199bc 0x0 0x20 0x0 0x700199f8 0x0 0x8c 0x0 0x70019ae4 0x0 0xb0 0x0 0x70019ba0 0x0 0x460 0x0 0x7001c000 0x0 0xc 0x0 0x7001c050 0x0 0x198 0x0 0x7001c200 0x0 0x24 0x0 0x7001c29c 0x0 0x1b8 0x0 0x7001c464 0x0 0x198 0x0 0x7001c604 0x0 0x3b0 0x0 0x7001c9bc 0x0 0x20 0x0 0x7001c9f8 0x0 0x8c 0x0 0x7001cae4 0x0 0xb0 0x0 0x7001cba0 0x0 0x460 0x0 0x7001d000 0x0 0xc 0x0 0x7001d050 0x0 0x198 0x0 0x7001d200 0x0 0x24 0x0 0x7001d29c 0x0 0x1b8 0x0 0x7001d464 0x0 0x198 0x0 0x7001d604 0x0 0x3b0 0x0 0x7001d9bc 0x0 0x20 0x0 0x7001d9f8 0x0 0x8c 0x0 0x7001dae4 0x0 0xb0 0x0 0x7001dba0 0x0 0x460>;
		interrupts = <0x0 0x4d 0x4>;
		int_mask = <0x23d40>;
		channels = <0x2>;
		status = "okay";
	};

	interrupt-controller {
		compatible = "arm,cortex-a15-gic";
		interrupt-parent = <0x33>;
		#interrupt-cells = <0x3>;
		interrupt-controller;
		reg = <0x0 0x50041000 0x0 0x1000 0x0 0x50042000 0x0 0x2000 0x0 0x50044000 0x0 0x2000 0x0 0x50046000 0x0 0x2000>;
		status = "okay";
		interrupts = <0x1 0x9 0xf04>;
		linux,phandle = <0x33>;
		phandle = <0x33>;
	};

	interrupt-controller@60004000 {
		compatible = "nvidia,tegra210-ictlr";
		interrupt-parent = <0x33>;
		interrupt-controller;
		#interrupt-cells = <0x3>;
		reg = <0x0 0x60004000 0x0 0x40 0x0 0x60004100 0x0 0x40 0x0 0x60004200 0x0 0x40 0x0 0x60004300 0x0 0x40 0x0 0x60004400 0x0 0x40 0x0 0x60004500 0x0 0x40>;
		interrupts = <0x0 0x4 0x4 0x0 0x5 0x4 0x0 0x7 0x4 0x0 0x12 0x4>;
		outgoing-doorbell = <0x6>;
		status = "okay";
		linux,phandle = <0x1>;
		phandle = <0x1>;
	};

	flow-controller@60007000 {
		compatible = "nvidia,tegra210-flowctrl";
		reg = <0x0 0x60007000 0x0 0x1000>;
	};

	ahb@6000c000 {
		compatible = "nvidia,tegra210-ahb", "nvidia,tegra30-ahb";
		reg = <0x0 0x6000c000 0x0 0x14f>;
		status = "okay";
		linux,phandle = <0xeb>;
		phandle = <0xeb>;
	};

	aconnect@702c0000 {
		compatible = "nvidia,tegra210-aconnect";
		clocks = <0x21 0xc6 0x21 0x6b>;
		clock-names = "ape", "apb2ape";
		power-domains = <0x22>;
		#address-cells = <0x2>;
		#size-cells = <0x2>;
		ranges;
		status = "okay";

		agic@702f9000 {
			compatible = "nvidia,tegra210-agic";
			#interrupt-cells = <0x4>;
			interrupt-controller;
			reg = <0x0 0x702f9000 0x0 0x2000 0x0 0x702fa000 0x0 0x2000>;
			interrupts = <0x0 0x66 0xf04>;
			clocks = <0x21 0xc6>;
			clock-names = "clk";
			linux,phandle = <0x34>;
			phandle = <0x34>;
		};

		adsp {
			compatible = "nvidia,tegra210-adsp";
			wakeup-disable;
			interrupt-parent = <0x34>;
			reg = <0x0 0x702ef000 0x0 0x1000 0x0 0x702ec000 0x0 0x2000 0x0 0x702ee000 0x0 0x1000 0x0 0x702dc800 0x0 0x0 0x0 0x0 0x0 0x1 0x0 0x1000000 0x0 0x6f2c0000 0x0 0x70300000 0x0 0x8fd00000>;
			iommus = <0x2b 0x22>;
			dma-mask = <0x0 0xfff00000>;
			iommu-resv-regions = <0x0 0x0 0x0 0x70300000 0x0 0xfff00000 0xffffffff 0xffffffff>;
			iommu-group-id = <0x2>;
			nvidia,adsp_mem = <0x80300000 0x1000000 0x80b00000 0x800000 0x400000 0x10000 0x80300000 0x200000>;
			nvidia,adsp-evp-base = <0x702ef700 0x40>;
			interrupts = <0x0 0x5 0x4 0x0 0x0 0x0 0x4 0x0 0x0 0x2f 0x4 0x0 0x0 0x34 0x4 0x0 0x0 0x32 0x4 0x0 0x0 0x37 0x4 0x0 0x0 0x4 0x4 0x1 0x0 0x1 0x4 0x1 0x0 0x2 0x4 0x1>;
			clocks = <0x21 0x200 0x21 0x6b 0x21 0xda 0x21 0xc7 0x21 0x205>;
			clock-names = "adsp.ape", "adsp.apb2ape", "adspneon", "adsp", "adsp_cpu_abus";
			resets = <0x21 0xe1>;
			reset-names = "adspall";
			nvidia,adsp_unit_fpga_reset = <0x0 0x40>;
			status = "okay";
		};

		adma@702e2000 {
			compatible = "nvidia,tegra210-adma";
			interrupt-parent = <0x34>;
			reg = <0x0 0x702e2000 0x0 0x2000 0x0 0x702ec000 0x0 0x72>;
			clocks = <0x21 0x6a>;
			clock-names = "d_audio";
			interrupts = <0x0 0x18 0x4 0x0 0x0 0x19 0x4 0x0 0x0 0x1a 0x4 0x0 0x0 0x1b 0x4 0x0 0x0 0x1c 0x4 0x0 0x0 0x1d 0x4 0x0 0x0 0x1e 0x4 0x0 0x0 0x1f 0x4 0x0 0x0 0x20 0x4 0x0 0x0 0x21 0x4 0x0 0x0 0x22 0x4 0x0 0x0 0x23 0x4 0x0 0x0 0x24 0x4 0x0 0x0 0x25 0x4 0x0 0x0 0x26 0x4 0x0 0x0 0x27 0x4 0x0 0x0 0x28 0x4 0x0 0x0 0x29 0x4 0x0 0x0 0x2a 0x4 0x0 0x0 0x2b 0x4 0x0 0x0 0x2c 0x4 0x0 0x0 0x2d 0x4 0x0>;
			#dma-cells = <0x1>;
			status = "okay";
			linux,phandle = <0x35>;
			phandle = <0x35>;
		};

		ahub {
			compatible = "nvidia,tegra210-axbar";
			wakeup-disable;
			reg = <0x0 0x702d0800 0x0 0x800>;
			clocks = <0x21 0x6a 0x21 0xf9 0x21 0xc6 0x21 0x6b>;
			clock-names = "ahub", "parent", "xbar.ape", "apb2ape";
			assigned-clocks = <0x21 0x6a>;
			assigned-clock-parents = <0x21 0xf3>;
			assigned-clock-rates = <0x4dd1e00>;
			status = "okay";
			#address-cells = <0x1>;
			#size-cells = <0x1>;
			ranges = <0x702d0000 0x0 0x702d0000 0x10000>;
			linux,phandle = <0x4d>;
			phandle = <0x4d>;

			admaif@0x702d0000 {
				compatible = "nvidia,tegra210-admaif";
				reg = <0x702d0000 0x800>;
				dmas = <0x35 0x1 0x35 0x1 0x35 0x2 0x35 0x2 0x35 0x3 0x35 0x3 0x35 0x4 0x35 0x4 0x35 0x5 0x35 0x5 0x35 0x6 0x35 0x6 0x35 0x7 0x35 0x7 0x35 0x8 0x35 0x8 0x35 0x9 0x35 0x9 0x35 0xa 0x35 0xa>;
				dma-names = "rx1", "tx1", "rx2", "tx2", "rx3", "tx3", "rx4", "tx4", "rx5", "tx5", "rx6", "tx6", "rx7", "tx7", "rx8", "tx8", "rx9", "tx9", "rx10", "tx10";
				status = "okay";
				linux,phandle = <0xec>;
				phandle = <0xec>;
			};

			sfc@702d2000 {
				compatible = "nvidia,tegra210-sfc";
				reg = <0x702d2000 0x200>;
				nvidia,ahub-sfc-id = <0x0>;
				status = "okay";
				linux,phandle = <0xed>;
				phandle = <0xed>;
			};

			sfc@702d2200 {
				compatible = "nvidia,tegra210-sfc";
				reg = <0x702d2200 0x200>;
				nvidia,ahub-sfc-id = <0x1>;
				status = "okay";
				linux,phandle = <0xee>;
				phandle = <0xee>;
			};

			sfc@702d2400 {
				compatible = "nvidia,tegra210-sfc";
				reg = <0x702d2400 0x200>;
				nvidia,ahub-sfc-id = <0x2>;
				status = "okay";
				linux,phandle = <0xef>;
				phandle = <0xef>;
			};

			sfc@702d2600 {
				compatible = "nvidia,tegra210-sfc";
				reg = <0x702d2600 0x200>;
				nvidia,ahub-sfc-id = <0x3>;
				status = "okay";
				linux,phandle = <0xf0>;
				phandle = <0xf0>;
			};

			spkprot@702d8c00 {
				compatible = "nvidia,tegra210-spkprot";
				reg = <0x702d8c00 0x400>;
				nvidia,ahub-spkprot-id = <0x0>;
				status = "okay";
			};

			amixer@702dbb00 {
				compatible = "nvidia,tegra210-amixer";
				reg = <0x702dbb00 0x800>;
				nvidia,ahub-amixer-id = <0x0>;
				status = "okay";
				linux,phandle = <0xf1>;
				phandle = <0xf1>;
			};

			i2s@702d1000 {
				compatible = "nvidia,tegra210-i2s";
				reg = <0x702d1000 0x100>;
				nvidia,ahub-i2s-id = <0x0>;
				status = "disabled";
				clocks = <0x21 0x1e 0x21 0xf9 0x21 0x109 0x21 0x15e>;
				clock-names = "i2s", "i2s_clk_parent", "ext_audio_sync", "audio_sync";
				assigned-clocks = <0x21 0x1e>;
				assigned-clock-parents = <0x21 0xf9>;
				assigned-clock-rates = <0x177000>;
				pinctrl-names = "dap_active", "dap_inactive";
				pinctrl-0;
				pinctrl-1;
				linux,phandle = <0xae>;
				phandle = <0xae>;
			};

			i2s@702d1100 {
				compatible = "nvidia,tegra210-i2s";
				reg = <0x702d1100 0x100>;
				nvidia,ahub-i2s-id = <0x1>;
				status = "disabled";
				clocks = <0x21 0xb 0x21 0xf9 0x21 0x10a 0x21 0x15f>;
				clock-names = "i2s", "i2s_clk_parent", "ext_audio_sync", "audio_sync";
				assigned-clocks = <0x21 0xb>;
				assigned-clock-parents = <0x21 0xf9>;
				assigned-clock-rates = <0x177000>;
				pinctrl-names = "dap_active", "dap_inactive";
				pinctrl-0;
				pinctrl-1;
				linux,phandle = <0xf2>;
				phandle = <0xf2>;
			};

			i2s@702d1200 {
				compatible = "nvidia,tegra210-i2s";
				reg = <0x702d1200 0x100>;
				nvidia,ahub-i2s-id = <0x2>;
				status = "okay";
				clocks = <0x21 0x12 0x21 0xf9 0x21 0x10b 0x21 0x160>;
				clock-names = "i2s", "i2s_clk_parent", "ext_audio_sync", "audio_sync";
				assigned-clocks = <0x21 0x12>;
				assigned-clock-parents = <0x21 0xf9>;
				assigned-clock-rates = <0x177000>;
				prod-name = "i2s2_prod";
				pinctrl-names = "dap_active", "dap_inactive";
				pinctrl-0;
				pinctrl-1;
				regulator-supplies = "vdd-1v8-dmic";
				vdd-1v8-dmic-supply = <0x36>;
				fsync-width = <0xf>;
				linux,phandle = <0x50>;
				phandle = <0x50>;
			};

			i2s@702d1300 {
				compatible = "nvidia,tegra210-i2s";
				reg = <0x702d1300 0x100>;
				nvidia,ahub-i2s-id = <0x3>;
				status = "okay";
				clocks = <0x21 0x65 0x21 0xf9 0x21 0x10c 0x21 0x161>;
				clock-names = "i2s", "i2s_clk_parent", "ext_audio_sync", "audio_sync";
				assigned-clocks = <0x21 0x65>;
				assigned-clock-parents = <0x21 0xf9>;
				assigned-clock-rates = <0x177000>;
				pinctrl-names = "dap_active", "dap_inactive";
				pinctrl-0;
				pinctrl-1;
				regulator-supplies = "vddio-uart";
				vddio-uart-supply = <0x36>;
				fsync-width = <0xf>;
				enable-cya;
				linux,phandle = <0x4e>;
				phandle = <0x4e>;
			};

			i2s@702d1400 {
				compatible = "nvidia,tegra210-i2s";
				reg = <0x702d1400 0x100>;
				nvidia,ahub-i2s-id = <0x4>;
				status = "disabled";
				clocks = <0x21 0x66 0x21 0xf9 0x21 0x10d 0x21 0x162>;
				clock-names = "i2s", "i2s_clk_parent", "ext_audio_sync", "audio_sync";
				assigned-clocks = <0x21 0x66>;
				assigned-clock-parents = <0x21 0xf9>;
				assigned-clock-rates = <0x177000>;
				pinctrl-names = "dap_active", "dap_inactive";
				pinctrl-0;
				pinctrl-1;
				linux,phandle = <0xf3>;
				phandle = <0xf3>;
			};

			amx@702d3000 {
				compatible = "nvidia,tegra210-amx";
				reg = <0x702d3000 0x100>;
				nvidia,ahub-amx-id = <0x0>;
				status = "okay";
				linux,phandle = <0xf4>;
				phandle = <0xf4>;
			};

			amx@702d3100 {
				compatible = "nvidia,tegra210-amx";
				reg = <0x702d3100 0x100>;
				nvidia,ahub-amx-id = <0x1>;
				status = "okay";
				linux,phandle = <0xf5>;
				phandle = <0xf5>;
			};

			adx@702d3800 {
				compatible = "nvidia,tegra210-adx";
				reg = <0x702d3800 0x100>;
				nvidia,ahub-adx-id = <0x0>;
				status = "okay";
				linux,phandle = <0xf6>;
				phandle = <0xf6>;
			};

			adx@702d3900 {
				compatible = "nvidia,tegra210-adx";
				reg = <0x702d3900 0x100>;
				nvidia,ahub-adx-id = <0x1>;
				status = "okay";
				linux,phandle = <0xf7>;
				phandle = <0xf7>;
			};

			dmic@702d4000 {
				compatible = "nvidia,tegra210-dmic";
				reg = <0x702d4000 0x100>;
				nvidia,ahub-dmic-id = <0x0>;
				status = "okay";
				clocks = <0x21 0xa1 0x21 0xf9>;
				clock-names = "dmic", "parent";
				assigned-clocks = <0x21 0xa1>;
				assigned-clock-parents = <0x21 0xf9>;
				assigned-clock-rates = <0x2ee000>;
				regulator-supplies = "vdd-1v8-dmic";
				vdd-1v8-dmic-supply = <0x36>;
				linux,phandle = <0x52>;
				phandle = <0x52>;
			};

			dmic@702d4100 {
				compatible = "nvidia,tegra210-dmic";
				reg = <0x702d4100 0x100>;
				nvidia,ahub-dmic-id = <0x1>;
				status = "okay";
				clocks = <0x21 0xa2 0x21 0xf9>;
				clock-names = "dmic", "parent";
				assigned-clocks = <0x21 0xa2>;
				assigned-clock-parents = <0x21 0xf9>;
				assigned-clock-rates = <0x2ee000>;
				regulator-supplies = "vdd-1v8-dmic";
				vdd-1v8-dmic-supply = <0x36>;
				linux,phandle = <0x54>;
				phandle = <0x54>;
			};

			dmic@702d4200 {
				compatible = "nvidia,tegra210-dmic";
				reg = <0x702d4200 0x100>;
				nvidia,ahub-dmic-id = <0x2>;
				status = "disabled";
				clocks = <0x21 0xc5 0x21 0xf9>;
				clock-names = "dmic", "parent";
				assigned-clocks = <0x21 0xc5>;
				assigned-clock-parents = <0x21 0xf9>;
				assigned-clock-rates = <0x2ee000>;
				linux,phandle = <0xf8>;
				phandle = <0xf8>;
			};

			afc@702d7000 {
				compatible = "nvidia,tegra210-afc";
				reg = <0x702d7000 0x100>;
				nvidia,ahub-afc-id = <0x0>;
				status = "okay";
				linux,phandle = <0xf9>;
				phandle = <0xf9>;
			};

			afc@702d7100 {
				compatible = "nvidia,tegra210-afc";
				reg = <0x702d7100 0x100>;
				nvidia,ahub-afc-id = <0x1>;
				status = "okay";
				linux,phandle = <0xfa>;
				phandle = <0xfa>;
			};

			afc@702d7200 {
				compatible = "nvidia,tegra210-afc";
				reg = <0x702d7200 0x100>;
				nvidia,ahub-afc-id = <0x2>;
				status = "okay";
				linux,phandle = <0xfb>;
				phandle = <0xfb>;
			};

			afc@702d7300 {
				compatible = "nvidia,tegra210-afc";
				reg = <0x702d7300 0x100>;
				nvidia,ahub-afc-id = <0x3>;
				status = "okay";
				linux,phandle = <0xfc>;
				phandle = <0xfc>;
			};

			afc@702d7400 {
				compatible = "nvidia,tegra210-afc";
				reg = <0x702d7400 0x100>;
				nvidia,ahub-afc-id = <0x4>;
				status = "okay";
				linux,phandle = <0xfd>;
				phandle = <0xfd>;
			};

			afc@702d7500 {
				compatible = "nvidia,tegra210-afc";
				reg = <0x702d7500 0x100>;
				nvidia,ahub-afc-id = <0x5>;
				status = "okay";
				linux,phandle = <0xfe>;
				phandle = <0xfe>;
			};

			mvc@702da000 {
				compatible = "nvidia,tegra210-mvc";
				reg = <0x702da000 0x200>;
				nvidia,ahub-mvc-id = <0x0>;
				status = "okay";
				linux,phandle = <0xff>;
				phandle = <0xff>;
			};

			mvc@702da200 {
				compatible = "nvidia,tegra210-mvc";
				reg = <0x702da200 0x200>;
				nvidia,ahub-mvc-id = <0x1>;
				status = "okay";
				linux,phandle = <0x100>;
				phandle = <0x100>;
			};

			iqc@702de000 {
				compatible = "nvidia,tegra210-iqc";
				reg = <0x702de000 0x200>;
				nvidia,ahub-iqc-id = <0x0>;
				status = "disabled";
				linux,phandle = <0x101>;
				phandle = <0x101>;
			};

			iqc@702de200 {
				compatible = "nvidia,tegra210-iqc";
				reg = <0x702de200 0x200>;
				nvidia,ahub-iqc-id = <0x1>;
				status = "disabled";
				linux,phandle = <0x102>;
				phandle = <0x102>;
			};

			ope@702d8000 {
				compatible = "nvidia,tegra210-ope";
				reg = <0x702d8000 0x100 0x702d8100 0x100 0x702d8200 0x200>;
				nvidia,ahub-ope-id = <0x0>;
				status = "okay";
				linux,phandle = <0x103>;
				phandle = <0x103>;

				peq@702d8100 {
					status = "okay";
				};

				mbdrc@702d8200 {
					status = "okay";
				};
			};

			ope@702d8400 {
				compatible = "nvidia,tegra210-ope";
				reg = <0x702d8400 0x100 0x702d8500 0x100 0x702d8600 0x200>;
				nvidia,ahub-ope-id = <0x1>;
				status = "okay";
				linux,phandle = <0x104>;
				phandle = <0x104>;

				peq@702d8500 {
					status = "okay";
				};

				mbdrc@702d8600 {
					status = "okay";
				};
			};

			mvc@0x702da200 {
				status = "okay";
			};
		};

		adsp_audio {
			compatible = "nvidia,tegra210-adsp-audio";
			wakeup-disable;
			iommus = <0x2b 0x22>;
			iommu-resv-regions = <0x0 0x0 0x0 0x70300000 0x0 0xfff00000 0xffffffff 0xffffffff>;
			iommu-group-id = <0x2>;
			nvidia,adma_ch_start = <0xb>;
			nvidia,adma_ch_cnt = <0xb>;
			interrupt-parent = <0x34>;
			interrupts = <0x0 0x23 0x4 0x1 0x0 0x24 0x4 0x1 0x0 0x25 0x4 0x1 0x0 0x26 0x4 0x1 0x0 0x27 0x4 0x1 0x0 0x28 0x4 0x1 0x0 0x29 0x4 0x1 0x0 0x2a 0x4 0x1 0x0 0x2b 0x4 0x1 0x0 0x2c 0x4 0x1 0x0 0x2d 0x4 0x1>;
			clocks = <0x21 0x6a 0x21 0xc6>;
			clock-names = "ahub", "ape";
			status = "okay";
			linux,phandle = <0x105>;
			phandle = <0x105>;
		};
	};

	timer {
		compatible = "arm,armv8-timer";
		interrupt-parent = <0x33>;
		interrupts = <0x1 0xd 0xf08 0x1 0xe 0xf08 0x1 0xb 0xf08 0x1 0xa 0xf08>;
		clock-frequency = <0x124f800>;
		status = "okay";
	};

	timer@60005000 {
		compatible = "nvidia,tegra210-timer", "nvidia,tegra30-timer", "nvidia,tegra30-timer-wdt";
		reg = <0x0 0x60005000 0x0 0x400>;
		interrupts = <0x0 0xb0 0x4 0x0 0xb1 0x4 0x0 0xb2 0x4 0x0 0xb3 0x4>;
		clocks = <0x21 0x5>;
		status = "okay";
	};

	rtc {
		compatible = "nvidia,tegra-rtc";
		reg = <0x0 0x7000e000 0x0 0x100>;
		interrupts = <0x0 0x2 0x4>;
		status = "okay";
		nvidia,pmc-wakeup = <0x37 0x1 0x10 0x4>;
	};

	dma@60020000 {
		compatible = "nvidia,tegra148-apbdma";
		reg = <0x0 0x60020000 0x0 0x1400>;
		clocks = <0x21 0x22>;
		clock-names = "dma";
		resets = <0x21 0x22>;
		reset-names = "dma";
		interrupts = <0x0 0x68 0x4 0x0 0x69 0x4 0x0 0x6a 0x4 0x0 0x6b 0x4 0x0 0x6c 0x4 0x0 0x6d 0x4 0x0 0x6e 0x4 0x0 0x6f 0x4 0x0 0x70 0x4 0x0 0x71 0x4 0x0 0x72 0x4 0x0 0x73 0x4 0x0 0x74 0x4 0x0 0x75 0x4 0x0 0x76 0x4 0x0 0x77 0x4 0x0 0x80 0x4 0x0 0x81 0x4 0x0 0x82 0x4 0x0 0x83 0x4 0x0 0x84 0x4 0x0 0x85 0x4 0x0 0x86 0x4 0x0 0x87 0x4 0x0 0x88 0x4 0x0 0x89 0x4 0x0 0x8a 0x4 0x0 0x8b 0x4 0x0 0x8c 0x4 0x0 0x8d 0x4 0x0 0x8e 0x4 0x0 0x8f 0x4>;
		#dma-cells = <0x1>;
		status = "okay";
		linux,phandle = <0x4c>;
		phandle = <0x4c>;
	};

	pinmux@700008d4 {
		compatible = "nvidia,tegra210-pinmux";
		reg = <0x0 0x700008d4 0x0 0x2a5 0x0 0x70003000 0x0 0x294>;
		#gpio-range-cells = <0x3>;
		status = "okay";
		pinctrl-names = "default", "drive", "unused";
		pinctrl-0 = <0x38>;
		pinctrl-1 = <0x39>;
		pinctrl-2 = <0x3a>;
		linux,phandle = <0x3b>;
		phandle = <0x3b>;

		clkreq_0_bi_dir {
			linux,phandle = <0x7b>;
			phandle = <0x7b>;

			clkreq0 {
				nvidia,pins = "pex_l0_clkreq_n_pa1";
				nvidia,tristate = <0x0>;
			};
		};

		clkreq_1_bi_dir {
			linux,phandle = <0x7c>;
			phandle = <0x7c>;

			clkreq1 {
				nvidia,pins = "pex_l1_clkreq_n_pa4";
				nvidia,tristate = <0x0>;
			};
		};

		clkreq_0_in_dir {
			linux,phandle = <0x7d>;
			phandle = <0x7d>;

			clkreq0 {
				nvidia,pins = "pex_l0_clkreq_n_pa1";
				nvidia,tristate = <0x1>;
			};
		};

		clkreq_1_in_dir {
			linux,phandle = <0x7e>;
			phandle = <0x7e>;

			clkreq1 {
				nvidia,pins = "pex_l1_clkreq_n_pa4";
				nvidia,tristate = <0x1>;
			};
		};

		prod-settings {
			#prod-cells = <0x4>;

			prod {
				status = "okay";
				nvidia,prod-boot-init;
				prod = <0x0 0x1c4 0xf7f7f000 0x51212000 0x0 0x128 0x1f1f000 0x1010000 0x0 0x12c 0x1f1f000 0x1010000 0x0 0x1c8 0xf0003ffd 0x1040 0x0 0x1dc 0xf7f7f000 0x51212000 0x0 0x1e0 0xf0003ffd 0x1040 0x0 0x23c 0x1f1f000 0x1f1f000 0x0 0x20 0x1f1f000 0x1010000 0x0 0x44 0x1f1f000 0x1010000 0x0 0x50 0x1f1f000 0x1010000 0x0 0x58 0x1f1f000 0x1010000 0x0 0x5c 0x1f1f000 0x1010000 0x0 0xa0 0x1f1f000 0x1010000 0x0 0xa4 0x1f1f000 0x1010000 0x0 0xa8 0x1f1f000 0x1010000 0x0 0xac 0x1f1f000 0x1010000 0x0 0xb0 0x1f1f000 0x1f1f000 0x0 0xb4 0x1f1f000 0x1f1f000 0x0 0xb8 0x1f1f000 0x1f1f000 0x0 0xbc 0x1f1f000 0x1f1f000 0x0 0xc0 0x1f1f000 0x1f1f000 0x0 0xc4 0x1f1f000 0x1f1f000 0x1 0x0 0x7200 0x2000 0x1 0x4 0x7200 0x2000 0x1 0x8 0x7200 0x2000 0x1 0xc 0x7200 0x2000 0x1 0x10 0x7200 0x2000 0x1 0x14 0x7200 0x2000 0x1 0x1c 0x7200 0x2000 0x1 0x20 0x7200 0x2000 0x1 0x24 0x7200 0x2000 0x1 0x28 0x7200 0x2000 0x1 0x2c 0x7200 0x2000 0x1 0x30 0x7200 0x2000 0x1 0x160 0x1000 0x1000>;
			};

			i2s2_prod {
				prod = <0x0 0xb0 0x1f1f000 0x1010000 0x0 0xb4 0x1f1f000 0x1010000 0x0 0xb8 0x1f1f000 0x1010000 0x0 0xbc 0x1f1f000 0x1010000>;
			};

			spi1_prod {
				nvidia,prod-boot-init;
				prod = <0x0 0x200 0xf0000000 0x50000000 0x0 0x204 0xf0000000 0x50000000 0x0 0x208 0xf0000000 0x50000000 0x0 0x20c 0xf0000000 0x50000000 0x0 0x210 0xf0000000 0x50000000 0x1 0x50 0x6000 0x6040 0x1 0x54 0x6000 0x6040 0x1 0x58 0x6000 0x6040 0x1 0x5c 0x6000 0x6040 0x1 0x60 0x6000 0x6040>;
			};

			spi2_prod {
				nvidia,prod-boot-init;
				prod = <0x0 0x214 0xf0000000 0xd0000000 0x0 0x218 0xf0000000 0xd0000000 0x0 0x21c 0xf0000000 0xd0000000 0x0 0x220 0xf0000000 0xd0000000 0x0 0x224 0xf0000000 0xd0000000 0x1 0x64 0x6000 0x6040 0x1 0x68 0x6000 0x6040 0x1 0x6c 0x6000 0x6040 0x1 0x70 0x6000 0x6040 0x1 0x74 0x6000 0x6040>;
			};

			spi3_prod {
				nvidia,prod-boot-init;
				prod = <0x0 0xcc 0x1404000 0x1414000 0x0 0xd0 0x1404000 0x1414000 0x0 0x140 0x1404000 0x1414000 0x0 0x144 0x1404000 0x1414000>;
			};

			spi4_prod {
				nvidia,prod-boot-init;
				prod = <0x0 0x268 0x1404000 0x1414000 0x0 0x26c 0x1404000 0x1414000 0x0 0x270 0x1404000 0x1414000 0x0 0x274 0x1404000 0x1414000>;
			};

			i2c0_prod {
				nvidia,prod-boot-init;
				prod = <0x0 0xd4 0x1f1f000 0x1f000 0x0 0xd8 0x1f1f000 0x1f000 0x1 0xbc 0x1100 0x0 0x1 0xc0 0x1100 0x0>;
			};

			i2c1_prod {
				nvidia,prod-boot-init;
				prod = <0x0 0xdc 0x1f1f000 0x1f000 0x0 0xe0 0x1f1f000 0x1f000 0x1 0xc4 0x1100 0x0 0x1 0xc8 0x1100 0x0>;
			};

			i2c2_prod {
				nvidia,prod-boot-init;
				prod = <0x0 0xe4 0x1f1f000 0x1f000 0x0 0xe8 0x1f1f000 0x1f000 0x1 0xcc 0x1100 0x0 0x1 0xd0 0x1100 0x0 0x0 0x60 0x1f1f000 0x1f000 0x0 0x64 0x1f1f000 0x1f000 0x1 0xd4 0x1100 0x0 0x1 0xd8 0x1100 0x0>;
			};

			i2c4_prod {
				nvidia,prod-boot-init;
				prod = <0x0 0x198 0x1f1f000 0x1f000 0x0 0x19c 0x1f1f000 0x1f000 0x1 0xdc 0x1100 0x0 0x1 0xe0 0x1100 0x0>;
			};

			i2c0_hs_prod {
				prod = <0x0 0xd4 0x1f1f000 0x1f1f000 0x0 0xd8 0x1f1f000 0x1f1f000 0x1 0xbc 0x1100 0x1000 0x1 0xc0 0x1100 0x1000>;
			};

			i2c1_hs_prod {
				prod = <0x0 0xdc 0x1f1f000 0x1f1f000 0x0 0xe0 0x1f1f000 0x1f1f000 0x1 0xc4 0x1100 0x1000 0x1 0xc8 0x1100 0x1000>;
			};

			i2c2_hs_prod {
				prod = <0x0 0xe4 0x1f1f000 0x1f1f000 0x0 0xe8 0x1f1f000 0x1f1f000 0x1 0xcc 0x1100 0x1000 0x1 0xd0 0x1100 0x1000 0x0 0x60 0x1f1f000 0x1f1f000 0x0 0x64 0x1f1f000 0x1f1f000 0x1 0xd4 0x1100 0x1000 0x1 0xd8 0x1100 0x1000>;
			};
=0A=
			i2c4_hs_prod {=0A=
				prod =3D <0x0 0x198 0x1f1f000 0x1f1f000 0x0 0x19c 0x1f1f000 =
0x1f1f000 0x1 0xdc 0x1100 0x1000 0x1 0xe0 0x1100 0x1000>;=0A=
			};=0A=
		};=0A=
=0A=
		sdmmc1_schmitt_enable {=0A=
			linux,phandle =3D <0x90>;=0A=
			phandle =3D <0x90>;=0A=
=0A=
			sdmmc1 {=0A=
				nvidia,pins =3D "sdmmc1_cmd_pm1", "sdmmc1_dat0_pm5", =
"sdmmc1_dat1_pm4", "sdmmc1_dat2_pm3", "sdmmc1_dat3_pm2";=0A=
				nvidia,schmitt =3D <0x1>;=0A=
			};=0A=
		};=0A=
=0A=
		sdmmc1_schmitt_disable {=0A=
			linux,phandle =3D <0x91>;=0A=
			phandle =3D <0x91>;=0A=
=0A=
			sdmmc1 {=0A=
				nvidia,pins =3D "sdmmc1_cmd_pm1", "sdmmc1_dat0_pm5", =
"sdmmc1_dat1_pm4", "sdmmc1_dat2_pm3", "sdmmc1_dat3_pm2";=0A=
				nvidia,schmitt =3D <0x0>;=0A=
			};=0A=
		};=0A=
=0A=
		sdmmc1_clk_schmitt_enable {=0A=
			linux,phandle =3D <0x92>;=0A=
			phandle =3D <0x92>;=0A=
=0A=
			sdmmc1 {=0A=
				nvidia,pins =3D "sdmmc1_clk_pm0";=0A=
				nvidia,schmitt =3D <0x1>;=0A=
			};=0A=
		};=0A=
=0A=
		sdmmc1_clk_schmitt_disable {=0A=
			linux,phandle =3D <0x93>;=0A=
			phandle =3D <0x93>;=0A=
=0A=
			sdmmc1 {=0A=
				nvidia,pins =3D "sdmmc1_clk_pm0";=0A=
				nvidia,schmitt =3D <0x0>;=0A=
			};=0A=
		};=0A=
=0A=
		sdmmc1_drv_code {=0A=
			linux,phandle =3D <0x94>;=0A=
			phandle =3D <0x94>;=0A=
=0A=
			sdmmc1 {=0A=
				nvidia,pins =3D "drive_sdmmc1";=0A=
				nvidia,pull-down-strength =3D <0x15>;=0A=
				nvidia,pull-up-strength =3D <0x11>;=0A=
			};=0A=
		};=0A=
=0A=
		sdmmc1_default_drv_code {=0A=
			linux,phandle =3D <0x95>;=0A=
			phandle =3D <0x95>;=0A=
=0A=
			sdmmc1 {=0A=
				nvidia,pins =3D "drive_sdmmc1";=0A=
				nvidia,pull-down-strength =3D <0x12>;=0A=
				nvidia,pull-up-strength =3D <0x12>;=0A=
			};=0A=
		};=0A=
=0A=
		sdmmc3_schmitt_enable {=0A=
			linux,phandle =3D <0x88>;=0A=
			phandle =3D <0x88>;=0A=
=0A=
			sdmmc3 {=0A=
				nvidia,pins =3D "sdmmc3_cmd_pp1", "sdmmc3_dat0_pp5", =
"sdmmc3_dat1_pp4", "sdmmc3_dat2_pp3", "sdmmc3_dat3_pp2";=0A=
				nvidia,schmitt =3D <0x1>;=0A=
			};=0A=
		};=0A=
=0A=
		sdmmc3_schmitt_disable {=0A=
			linux,phandle =3D <0x89>;=0A=
			phandle =3D <0x89>;=0A=
=0A=
			sdmmc3 {=0A=
				nvidia,pins =3D "sdmmc3_cmd_pp1", "sdmmc3_dat0_pp5", =
"sdmmc3_dat1_pp4", "sdmmc3_dat2_pp3", "sdmmc3_dat3_pp2";=0A=
				nvidia,schmitt =3D <0x0>;=0A=
			};=0A=
		};=0A=
=0A=
		sdmmc3_clk_schmitt_enable {=0A=
			linux,phandle =3D <0x8a>;=0A=
			phandle =3D <0x8a>;=0A=
=0A=
			sdmmc3 {=0A=
				nvidia,pins =3D "sdmmc3_clk_pp0";=0A=
				nvidia,schmitt =3D <0x1>;=0A=
			};=0A=
		};=0A=
=0A=
		sdmmc3_clk_schmitt_disable {=0A=
			linux,phandle =3D <0x8b>;=0A=
			phandle =3D <0x8b>;=0A=
=0A=
			sdmmc3 {=0A=
				nvidia,pins =3D "sdmmc3_clk_pp0";=0A=
				nvidia,schmitt =3D <0x0>;=0A=
			};=0A=
		};=0A=
=0A=
		sdmmc3_drv_code {=0A=
			linux,phandle =3D <0x8c>;=0A=
			phandle =3D <0x8c>;=0A=
=0A=
			sdmmc3 {=0A=
				nvidia,pins =3D "drive_sdmmc3";=0A=
				nvidia,pull-down-strength =3D <0x15>;=0A=
				nvidia,pull-up-strength =3D <0x11>;=0A=
			};=0A=
		};=0A=
=0A=
		sdmmc3_default_drv_code {=0A=
			linux,phandle =3D <0x8d>;=0A=
			phandle =3D <0x8d>;=0A=
=0A=
			sdmmc3 {=0A=
				nvidia,pins =3D "drive_sdmmc3";=0A=
				nvidia,pull-down-strength =3D <0x12>;=0A=
				nvidia,pull-up-strength =3D <0x12>;=0A=
			};=0A=
		};=0A=
=0A=
		dvfs_pwm_active {=0A=
			linux,phandle =3D <0x9b>;=0A=
			phandle =3D <0x9b>;=0A=
=0A=
			dvfs_pwm_pbb1 {=0A=
				nvidia,pins =3D "dvfs_pwm_pbb1";=0A=
				nvidia,tristate =3D <0x0>;=0A=
			};=0A=
		};=0A=
=0A=
		dvfs_pwm_inactive {=0A=
			linux,phandle =3D <0x9c>;=0A=
			phandle =3D <0x9c>;=0A=
=0A=
			dvfs_pwm_pbb1 {=0A=
				nvidia,pins =3D "dvfs_pwm_pbb1";=0A=
				nvidia,tristate =3D <0x1>;=0A=
			};=0A=
		};=0A=
=0A=
		common {=0A=
			linux,phandle =3D <0x38>;=0A=
			phandle =3D <0x38>;=0A=
=0A=
			dvfs_pwm_pbb1 {=0A=
				nvidia,pins =3D "dvfs_pwm_pbb1";=0A=
				nvidia,function =3D "cldvfs";=0A=
				nvidia,pull =3D <0x0>;=0A=
				nvidia,tristate =3D <0x1>;=0A=
				nvidia,enable-input =3D <0x0>;=0A=
			};=0A=
=0A=
			dmic1_clk_pe0 {=0A=
				nvidia,pins =3D "dmic1_clk_pe0";=0A=
				nvidia,function =3D "i2s3";=0A=
				nvidia,pull =3D <0x0>;=0A=
				nvidia,tristate =3D <0x0>;=0A=
				nvidia,enable-input =3D <0x1>;=0A=
			};=0A=
=0A=
			dmic1_dat_pe1 {=0A=
				nvidia,pins =3D "dmic1_dat_pe1";=0A=
				nvidia,function =3D "i2s3";=0A=
				nvidia,pull =3D <0x0>;=0A=
				nvidia,tristate =3D <0x0>;=0A=
				nvidia,enable-input =3D <0x1>;=0A=
			};=0A=
=0A=
			dmic2_clk_pe2 {=0A=
				nvidia,pins =3D "dmic2_clk_pe2";=0A=
				nvidia,function =3D "i2s3";=0A=
				nvidia,pull =3D <0x0>;=0A=
				nvidia,tristate =3D <0x0>;=0A=
				nvidia,enable-input =3D <0x1>;=0A=
			};=0A=
=0A=
			dmic2_dat_pe3 {=0A=
				nvidia,pins =3D "dmic2_dat_pe3";=0A=
				nvidia,function =3D "i2s3";=0A=
				nvidia,pull =3D <0x0>;=0A=
				nvidia,tristate =3D <0x0>;=0A=
				nvidia,enable-input =3D <0x1>;=0A=
			};=0A=
=0A=
			pe7 {=0A=
				nvidia,pins =3D "pe7";=0A=
				nvidia,function =3D "pwm3";=0A=
				nvidia,pull =3D <0x0>;=0A=
				nvidia,tristate =3D <0x0>;=0A=
				nvidia,enable-input =3D <0x0>;=0A=
			};=0A=
=0A=
			gen3_i2c_scl_pf0 {=0A=
				nvidia,pins =3D "gen3_i2c_scl_pf0";=0A=
				nvidia,function =3D "i2c3";=0A=
				nvidia,pull =3D <0x0>;=0A=
				nvidia,tristate =3D <0x0>;=0A=
				nvidia,enable-input =3D <0x1>;=0A=
				nvidia,io-high-voltage =3D <0x0>;=0A=
			};=0A=
=0A=
			gen3_i2c_sda_pf1 {=0A=
				nvidia,pins =3D "gen3_i2c_sda_pf1";=0A=
				nvidia,function =3D "i2c3";=0A=
				nvidia,pull =3D <0x0>;=0A=
				nvidia,tristate =3D <0x0>;=0A=
				nvidia,enable-input =3D <0x1>;=0A=
				nvidia,io-high-voltage =3D <0x0>;=0A=
			};=0A=
=0A=
			cam_i2c_scl_ps2 {=0A=
				nvidia,pins =3D "cam_i2c_scl_ps2";=0A=
				nvidia,function =3D "i2cvi";=0A=
				nvidia,pull =3D <0x0>;=0A=
				nvidia,tristate =3D <0x0>;=0A=
				nvidia,enable-input =3D <0x1>;=0A=
				nvidia,io-high-voltage =3D <0x1>;=0A=
			};=0A=
=0A=
			cam_i2c_sda_ps3 {=0A=
				nvidia,pins =3D "cam_i2c_sda_ps3";=0A=
				nvidia,function =3D "i2cvi";=0A=
				nvidia,pull =3D <0x0>;=0A=
				nvidia,tristate =3D <0x0>;=0A=
				nvidia,enable-input =3D <0x1>;=0A=
				nvidia,io-high-voltage =3D <0x1>;=0A=
			};=0A=
=0A=
			cam1_mclk_ps0 {=0A=
				nvidia,pins =3D "cam1_mclk_ps0";=0A=
				nvidia,function =3D "extperiph3";=0A=
				nvidia,pull =3D <0x0>;=0A=
				nvidia,tristate =3D <0x0>;=0A=
				nvidia,enable-input =3D <0x0>;=0A=
			};=0A=
=0A=
			cam2_mclk_ps1 {=0A=
				nvidia,pins =3D "cam2_mclk_ps1";=0A=
				nvidia,function =3D "extperiph3";=0A=
				nvidia,pull =3D <0x0>;=0A=
				nvidia,tristate =3D <0x0>;=0A=
				nvidia,enable-input =3D <0x0>;=0A=
			};=0A=
=0A=
			pex_l0_clkreq_n_pa1 {=0A=
				nvidia,pins =3D "pex_l0_clkreq_n_pa1";=0A=
				nvidia,function =3D "pe0";=0A=
				nvidia,pull =3D <0x0>;=0A=
				nvidia,tristate =3D <0x0>;=0A=
				nvidia,enable-input =3D <0x1>;=0A=
				nvidia,io-high-voltage =3D <0x1>;=0A=
			};=0A=
=0A=
			pex_l0_rst_n_pa0 {=0A=
				nvidia,pins =3D "pex_l0_rst_n_pa0";=0A=
				nvidia,function =3D "pe0";=0A=
				nvidia,pull =3D <0x0>;=0A=
				nvidia,tristate =3D <0x0>;=0A=
				nvidia,enable-input =3D <0x0>;=0A=
				nvidia,io-high-voltage =3D <0x1>;=0A=
			};=0A=
=0A=
			pex_l1_clkreq_n_pa4 {=0A=
				nvidia,pins =3D "pex_l1_clkreq_n_pa4";=0A=
				nvidia,function =3D "pe1";=0A=
				nvidia,pull =3D <0x0>;=0A=
				nvidia,tristate =3D <0x0>;=0A=
				nvidia,enable-input =3D <0x1>;=0A=
				nvidia,io-high-voltage =3D <0x1>;=0A=
			};=0A=
=0A=
			pex_l1_rst_n_pa3 {=0A=
				nvidia,pins =3D "pex_l1_rst_n_pa3";=0A=
				nvidia,function =3D "pe1";=0A=
				nvidia,pull =3D <0x0>;=0A=
				nvidia,tristate =3D <0x0>;=0A=
				nvidia,enable-input =3D <0x0>;=0A=
				nvidia,io-high-voltage =3D <0x1>;=0A=
			};=0A=
=0A=
			pex_wake_n_pa2 {=0A=
				nvidia,pins =3D "pex_wake_n_pa2";=0A=
				nvidia,function =3D "pe";=0A=
				nvidia,pull =3D <0x0>;=0A=
				nvidia,tristate =3D <0x0>;=0A=
				nvidia,enable-input =3D <0x1>;=0A=
				nvidia,io-high-voltage =3D <0x1>;=0A=
			};=0A=
=0A=
			sdmmc1_clk_pm0 {=0A=
				nvidia,pins =3D "sdmmc1_clk_pm0";=0A=
				nvidia,function =3D "sdmmc1";=0A=
				nvidia,pull =3D <0x0>;=0A=
				nvidia,tristate =3D <0x0>;=0A=
				nvidia,enable-input =3D <0x1>;=0A=
			};=0A=
=0A=
			sdmmc1_cmd_pm1 {=0A=
				nvidia,pins =3D "sdmmc1_cmd_pm1";=0A=
				nvidia,function =3D "sdmmc1";=0A=
				nvidia,pull =3D <0x2>;=0A=
				nvidia,tristate =3D <0x0>;=0A=
				nvidia,enable-input =3D <0x1>;=0A=
			};=0A=
=0A=
			sdmmc1_dat0_pm5 {=0A=
				nvidia,pins =3D "sdmmc1_dat0_pm5";=0A=
				nvidia,function =3D "sdmmc1";=0A=
				nvidia,pull =3D <0x2>;=0A=
				nvidia,tristate =3D <0x0>;=0A=
				nvidia,enable-input =3D <0x1>;=0A=
			};=0A=
=0A=
			sdmmc1_dat1_pm4 {=0A=
				nvidia,pins =3D "sdmmc1_dat1_pm4";=0A=
				nvidia,function =3D "sdmmc1";=0A=
				nvidia,pull =3D <0x2>;=0A=
				nvidia,tristate =3D <0x0>;=0A=
				nvidia,enable-input =3D <0x1>;=0A=
			};=0A=
=0A=
			sdmmc1_dat2_pm3 {=0A=
				nvidia,pins =3D "sdmmc1_dat2_pm3";=0A=
				nvidia,function =3D "sdmmc1";=0A=
				nvidia,pull =3D <0x2>;=0A=
				nvidia,tristate =3D <0x0>;=0A=
				nvidia,enable-input =3D <0x1>;=0A=
			};=0A=
=0A=
			sdmmc1_dat3_pm2 {=0A=
				nvidia,pins =3D "sdmmc1_dat3_pm2";=0A=
				nvidia,function =3D "sdmmc1";=0A=
				nvidia,pull =3D <0x2>;=0A=
				nvidia,tristate =3D <0x0>;=0A=
				nvidia,enable-input =3D <0x1>;=0A=
			};=0A=
=0A=
			sdmmc3_clk_pp0 {=0A=
				nvidia,pins =3D "sdmmc3_clk_pp0";=0A=
				nvidia,function =3D "sdmmc3";=0A=
				nvidia,pull =3D <0x0>;=0A=
				nvidia,tristate =3D <0x0>;=0A=
				nvidia,enable-input =3D <0x1>;=0A=
			};=0A=
=0A=
			sdmmc3_cmd_pp1 {=0A=
				nvidia,pins =3D "sdmmc3_cmd_pp1";=0A=
				nvidia,function =3D "sdmmc3";=0A=
				nvidia,pull =3D <0x2>;=0A=
				nvidia,tristate =3D <0x0>;=0A=
				nvidia,enable-input =3D <0x1>;=0A=
			};=0A=
=0A=
			sdmmc3_dat0_pp5 {=0A=
				nvidia,pins =3D "sdmmc3_dat0_pp5";=0A=
				nvidia,function =3D "sdmmc3";=0A=
				nvidia,pull =3D <0x2>;=0A=
				nvidia,tristate =3D <0x0>;=0A=
				nvidia,enable-input =3D <0x1>;=0A=
			};=0A=
=0A=
			sdmmc3_dat1_pp4 {=0A=
				nvidia,pins =3D "sdmmc3_dat1_pp4";=0A=
				nvidia,function =3D "sdmmc3";=0A=
				nvidia,pull =3D <0x2>;=0A=
				nvidia,tristate =3D <0x0>;=0A=
				nvidia,enable-input =3D <0x1>;=0A=
			};=0A=
=0A=
			sdmmc3_dat2_pp3 {=0A=
				nvidia,pins =3D "sdmmc3_dat2_pp3";=0A=
				nvidia,function =3D "sdmmc3";=0A=
				nvidia,pull =3D <0x2>;=0A=
				nvidia,tristate =3D <0x0>;=0A=
				nvidia,enable-input =3D <0x1>;=0A=
			};=0A=
=0A=
			sdmmc3_dat3_pp2 {=0A=
				nvidia,pins =3D "sdmmc3_dat3_pp2";=0A=
				nvidia,function =3D "sdmmc3";=0A=
				nvidia,pull =3D <0x2>;=0A=
				nvidia,tristate =3D <0x0>;=0A=
				nvidia,enable-input =3D <0x1>;=0A=
			};=0A=
=0A=
			shutdown {=0A=
				nvidia,pins =3D "shutdown";=0A=
				nvidia,function =3D "shutdown";=0A=
				nvidia,pull =3D <0x0>;=0A=
				nvidia,tristate =3D <0x0>;=0A=
				nvidia,enable-input =3D <0x0>;=0A=
			};=0A=
=0A=
			lcd_gpio2_pv4 {=0A=
				nvidia,pins =3D "lcd_gpio2_pv4";=0A=
				nvidia,function =3D "pwm1";=0A=
				nvidia,pull =3D <0x0>;=0A=
				nvidia,tristate =3D <0x0>;=0A=
				nvidia,enable-input =3D <0x0>;=0A=
			};=0A=
=0A=
			pwr_i2c_scl_py3 {=0A=
				nvidia,pins =3D "pwr_i2c_scl_py3";=0A=
				nvidia,function =3D "i2cpmu";=0A=
				nvidia,pull =3D <0x0>;=0A=
				nvidia,tristate =3D <0x0>;=0A=
				nvidia,enable-input =3D <0x1>;=0A=
				nvidia,io-high-voltage =3D <0x0>;=0A=
			};=0A=
=0A=
			pwr_i2c_sda_py4 {=0A=
				nvidia,pins =3D "pwr_i2c_sda_py4";=0A=
				nvidia,function =3D "i2cpmu";=0A=
				nvidia,pull =3D <0x0>;=0A=
				nvidia,tristate =3D <0x0>;=0A=
				nvidia,enable-input =3D <0x1>;=0A=
				nvidia,io-high-voltage =3D <0x0>;=0A=
			};=0A=
=0A=
			clk_32k_in {=0A=
				nvidia,pins =3D "clk_32k_in";=0A=
				nvidia,function =3D "clk";=0A=
				nvidia,pull =3D <0x0>;=0A=
				nvidia,tristate =3D <0x0>;=0A=
				nvidia,enable-input =3D <0x1>;=0A=
			};=0A=
=0A=
			clk_32k_out_py5 {=0A=
				nvidia,pins =3D "clk_32k_out_py5";=0A=
				nvidia,function =3D "soc";=0A=
				nvidia,pull =3D <0x2>;=0A=
				nvidia,tristate =3D <0x0>;=0A=
				nvidia,enable-input =3D <0x1>;=0A=
			};=0A=
=0A=
			pz1 {=0A=
				nvidia,pins =3D "pz1";=0A=
				nvidia,function =3D "sdmmc1";=0A=
				nvidia,pull =3D <0x2>;=0A=
				nvidia,tristate =3D <0x0>;=0A=
				nvidia,enable-input =3D <0x1>;=0A=
			};=0A=
=0A=
			pz5 {=0A=
				nvidia,pins =3D "pz5";=0A=
				nvidia,function =3D "soc";=0A=
				nvidia,pull =3D <0x2>;=0A=
				nvidia,tristate =3D <0x0>;=0A=
				nvidia,enable-input =3D <0x1>;=0A=
			};=0A=
=0A=
			core_pwr_req {=0A=
				nvidia,pins =3D "core_pwr_req";=0A=
				nvidia,function =3D "core";=0A=
				nvidia,pull =3D <0x0>;=0A=
				nvidia,tristate =3D <0x0>;=0A=
				nvidia,enable-input =3D <0x0>;=0A=
			};=0A=
=0A=
			pwr_int_n {=0A=
				nvidia,pins =3D "pwr_int_n";=0A=
				nvidia,function =3D "pmi";=0A=
				nvidia,pull =3D <0x2>;=0A=
				nvidia,tristate =3D <0x0>;=0A=
				nvidia,enable-input =3D <0x1>;=0A=
			};=0A=
=0A=
			gen1_i2c_scl_pj1 {=0A=
				nvidia,pins =3D "gen1_i2c_scl_pj1";=0A=
				nvidia,function =3D "i2c1";=0A=
				nvidia,pull =3D <0x0>;=0A=
				nvidia,tristate =3D <0x0>;=0A=
				nvidia,enable-input =3D <0x1>;=0A=
				nvidia,io-high-voltage =3D <0x1>;=0A=
			};=0A=
=0A=
			gen1_i2c_sda_pj0 {=0A=
				nvidia,pins =3D "gen1_i2c_sda_pj0";=0A=
				nvidia,function =3D "i2c1";=0A=
				nvidia,pull =3D <0x0>;=0A=
				nvidia,tristate =3D <0x0>;=0A=
				nvidia,enable-input =3D <0x1>;=0A=
				nvidia,io-high-voltage =3D <0x1>;=0A=
			};=0A=
=0A=
			gen2_i2c_scl_pj2 {=0A=
				nvidia,pins =3D "gen2_i2c_scl_pj2";=0A=
				nvidia,function =3D "i2c2";=0A=
				nvidia,pull =3D <0x0>;=0A=
				nvidia,tristate =3D <0x0>;=0A=
				nvidia,enable-input =3D <0x1>;=0A=
				nvidia,io-high-voltage =3D <0x1>;=0A=
			};=0A=
=0A=
			gen2_i2c_sda_pj3 {=0A=
				nvidia,pins =3D "gen2_i2c_sda_pj3";=0A=
				nvidia,function =3D "i2c2";=0A=
				nvidia,pull =3D <0x0>;=0A=
				nvidia,tristate =3D <0x0>;=0A=
				nvidia,enable-input =3D <0x1>;=0A=
				nvidia,io-high-voltage =3D <0x1>;=0A=
			};=0A=
=0A=
			uart2_tx_pg0 {=0A=
				nvidia,pins =3D "uart2_tx_pg0";=0A=
				nvidia,function =3D "uartb";=0A=
				nvidia,pull =3D <0x0>;=0A=
				nvidia,tristate =3D <0x0>;=0A=
				nvidia,enable-input =3D <0x0>;=0A=
			};=0A=
=0A=
			uart2_rx_pg1 {=0A=
				nvidia,pins =3D "uart2_rx_pg1";=0A=
				nvidia,function =3D "uartb";=0A=
				nvidia,pull =3D <0x1>;=0A=
				nvidia,tristate =3D <0x0>;=0A=
				nvidia,enable-input =3D <0x1>;=0A=
			};=0A=
=0A=
			uart1_tx_pu0 {=0A=
				nvidia,pins =3D "uart1_tx_pu0";=0A=
				nvidia,function =3D "uarta";=0A=
				nvidia,pull =3D <0x0>;=0A=
				nvidia,tristate =3D <0x0>;=0A=
				nvidia,enable-input =3D <0x0>;=0A=
			};=0A=
=0A=
			uart1_rx_pu1 {=0A=
				nvidia,pins =3D "uart1_rx_pu1";=0A=
				nvidia,function =3D "uarta";=0A=
				nvidia,pull =3D <0x0>;=0A=
				nvidia,tristate =3D <0x0>;=0A=
				nvidia,enable-input =3D <0x1>;=0A=
			};=0A=
=0A=
			jtag_rtck {=0A=
				nvidia,pins =3D "jtag_rtck";=0A=
				nvidia,function =3D "jtag";=0A=
				nvidia,pull =3D <0x0>;=0A=
				nvidia,tristate =3D <0x0>;=0A=
				nvidia,enable-input =3D <0x0>;=0A=
			};=0A=
=0A=
			uart3_tx_pd1 {=0A=
				nvidia,pins =3D "uart3_tx_pd1";=0A=
				nvidia,function =3D "uartc";=0A=
				nvidia,pull =3D <0x0>;=0A=
				nvidia,tristate =3D <0x0>;=0A=
				nvidia,enable-input =3D <0x0>;=0A=
			};=0A=
=0A=
			uart3_rx_pd2 {=0A=
				nvidia,pins =3D "uart3_rx_pd2";=0A=
				nvidia,function =3D "uartc";=0A=
				nvidia,pull =3D <0x0>;=0A=
				nvidia,tristate =3D <0x0>;=0A=
				nvidia,enable-input =3D <0x1>;=0A=
			};=0A=
=0A=
			uart3_rts_pd3 {=0A=
				nvidia,pins =3D "uart3_rts_pd3";=0A=
				nvidia,function =3D "uartc";=0A=
				nvidia,pull =3D <0x2>;=0A=
				nvidia,tristate =3D <0x0>;=0A=
				nvidia,enable-input =3D <0x0>;=0A=
			};=0A=
=0A=
			uart3_cts_pd4 {=0A=
				nvidia,pins =3D "uart3_cts_pd4";=0A=
				nvidia,function =3D "uartc";=0A=
				nvidia,pull =3D <0x2>;=0A=
				nvidia,tristate =3D <0x0>;=0A=
				nvidia,enable-input =3D <0x1>;=0A=
			};=0A=
=0A=
			uart4_tx_pi4 {=0A=
				nvidia,pins =3D "uart4_tx_pi4";=0A=
				nvidia,function =3D "uartd";=0A=
				nvidia,pull =3D <0x0>;=0A=
				nvidia,tristate =3D <0x0>;=0A=
				nvidia,enable-input =3D <0x0>;=0A=
			};=0A=
=0A=
			uart4_rx_pi5 {=0A=
				nvidia,pins =3D "uart4_rx_pi5";=0A=
				nvidia,function =3D "uartd";=0A=
				nvidia,pull =3D <0x0>;=0A=
				nvidia,tristate =3D <0x0>;=0A=
				nvidia,enable-input =3D <0x1>;=0A=
			};=0A=
=0A=
			uart4_rts_pi6 {=0A=
				nvidia,pins =3D "uart4_rts_pi6";=0A=
				nvidia,function =3D "uartd";=0A=
				nvidia,pull =3D <0x0>;=0A=
				nvidia,tristate =3D <0x0>;=0A=
				nvidia,enable-input =3D <0x0>;=0A=
			};=0A=
=0A=
			uart4_cts_pi7 {=0A=
				nvidia,pins =3D "uart4_cts_pi7";=0A=
				nvidia,function =3D "uartd";=0A=
				nvidia,pull =3D <0x2>;=0A=
				nvidia,tristate =3D <0x0>;=0A=
				nvidia,enable-input =3D <0x1>;=0A=
			};=0A=
=0A=
			qspi_io0_pee2 {=0A=
				nvidia,pins =3D "qspi_io0_pee2";=0A=
				nvidia,function =3D "qspi";=0A=
				nvidia,pull =3D <0x0>;=0A=
				nvidia,tristate =3D <0x0>;=0A=
				nvidia,enable-input =3D <0x1>;=0A=
			};=0A=
=0A=
			qspi_io1_pee3 {=0A=
				nvidia,pins =3D "qspi_io1_pee3";=0A=
				nvidia,function =3D "qspi";=0A=
				nvidia,pull =3D <0x0>;=0A=
				nvidia,tristate =3D <0x0>;=0A=
				nvidia,enable-input =3D <0x1>;=0A=
			};=0A=
=0A=
			qspi_sck_pee0 {=0A=
				nvidia,pins =3D "qspi_sck_pee0";=0A=
				nvidia,function =3D "qspi";=0A=
				nvidia,pull =3D <0x0>;=0A=
				nvidia,tristate =3D <0x0>;=0A=
				nvidia,enable-input =3D <0x1>;=0A=
			};=0A=
=0A=
			qspi_cs_n_pee1 {=0A=
				nvidia,pins =3D "qspi_cs_n_pee1";=0A=
				nvidia,function =3D "qspi";=0A=
				nvidia,pull =3D <0x0>;=0A=
				nvidia,tristate =3D <0x0>;=0A=
				nvidia,enable-input =3D <0x0>;=0A=
			};=0A=
=0A=
			qspi_io2_pee4 {=0A=
				nvidia,pins =3D "qspi_io2_pee4";=0A=
				nvidia,function =3D "qspi";=0A=
				nvidia,pull =3D <0x0>;=0A=
				nvidia,tristate =3D <0x0>;=0A=
				nvidia,enable-input =3D <0x1>;=0A=
			};=0A=
=0A=
			qspi_io3_pee5 {=0A=
				nvidia,pins =3D "qspi_io3_pee5";=0A=
				nvidia,function =3D "qspi";=0A=
				nvidia,pull =3D <0x0>;=0A=
				nvidia,tristate =3D <0x0>;=0A=
				nvidia,enable-input =3D <0x1>;=0A=
			};=0A=
=0A=
			dap2_din_paa2 {=0A=
				nvidia,pins =3D "dap2_din_paa2";=0A=
				nvidia,function =3D "i2s2";=0A=
				nvidia,pull =3D <0x0>;=0A=
				nvidia,tristate =3D <0x0>;=0A=
				nvidia,enable-input =3D <0x1>;=0A=
			};=0A=
=0A=
			dap2_dout_paa3 {=0A=
				nvidia,pins =3D "dap2_dout_paa3";=0A=
				nvidia,function =3D "i2s2";=0A=
				nvidia,pull =3D <0x0>;=0A=
				nvidia,tristate =3D <0x0>;=0A=
				nvidia,enable-input =3D <0x1>;=0A=
			};=0A=
=0A=
			dap2_fs_paa0 {=0A=
				nvidia,pins =3D "dap2_fs_paa0";=0A=
				nvidia,function =3D "i2s2";=0A=
				nvidia,pull =3D <0x0>;=0A=
				nvidia,tristate =3D <0x0>;=0A=
				nvidia,enable-input =3D <0x1>;=0A=
			};=0A=
=0A=
			dap2_sclk_paa1 {=0A=
				nvidia,pins =3D "dap2_sclk_paa1";=0A=
				nvidia,function =3D "i2s2";=0A=
				nvidia,pull =3D <0x0>;=0A=
				nvidia,tristate =3D <0x0>;=0A=
				nvidia,enable-input =3D <0x1>;=0A=
			};=0A=
=0A=
			dp_hpd0_pcc6 {=0A=
				nvidia,pins =3D "dp_hpd0_pcc6";=0A=
				nvidia,function =3D "dp";=0A=
				nvidia,pull =3D <0x0>;=0A=
				nvidia,tristate =3D <0x0>;=0A=
				nvidia,enable-input =3D <0x1>;=0A=
			};=0A=
=0A=
			hdmi_int_dp_hpd_pcc1 {=0A=
				nvidia,pins =3D "hdmi_int_dp_hpd_pcc1";=0A=
				nvidia,function =3D "dp";=0A=
				nvidia,pull =3D <0x0>;=0A=
				nvidia,tristate =3D <0x0>;=0A=
				nvidia,enable-input =3D <0x1>;=0A=
				nvidia,io-high-voltage =3D <0x0>;=0A=
			};=0A=
=0A=
			hdmi_cec_pcc0 {=0A=
				nvidia,pins =3D "hdmi_cec_pcc0";=0A=
				nvidia,function =3D "cec";=0A=
				nvidia,pull =3D <0x0>;=0A=
				nvidia,tristate =3D <0x0>;=0A=
				nvidia,enable-input =3D <0x1>;=0A=
				nvidia,io-high-voltage =3D <0x1>;=0A=
			};=0A=
=0A=
			cam1_pwdn_ps7 {=0A=
				nvidia,pins =3D "cam1_pwdn_ps7";=0A=
				nvidia,function =3D "rsvd1";=0A=
				nvidia,pull =3D <0x0>;=0A=
				nvidia,tristate =3D <0x0>;=0A=
				nvidia,enable-input =3D <0x0>;=0A=
			};=0A=
=0A=
			cam2_pwdn_pt0 {=0A=
				nvidia,pins =3D "cam2_pwdn_pt0";=0A=
				nvidia,function =3D "rsvd1";=0A=
				nvidia,pull =3D <0x0>;=0A=
				nvidia,tristate =3D <0x0>;=0A=
				nvidia,enable-input =3D <0x0>;=0A=
			};=0A=
=0A=
			sata_led_active_pa5 {=0A=
				nvidia,pins =3D "sata_led_active_pa5";=0A=
				nvidia,function =3D "rsvd1";=0A=
				nvidia,pull =3D <0x2>;=0A=
				nvidia,tristate =3D <0x0>;=0A=
				nvidia,enable-input =3D <0x1>;=0A=
			};=0A=
=0A=
			pa6 {=0A=
				nvidia,pins =3D "pa6";=0A=
				nvidia,function =3D "rsvd1";=0A=
				nvidia,pull =3D <0x0>;=0A=
				nvidia,tristate =3D <0x0>;=0A=
				nvidia,enable-input =3D <0x0>;=0A=
			};=0A=
=0A=
			als_prox_int_px3 {=0A=
				nvidia,pins =3D "als_prox_int_px3";=0A=
				nvidia,function =3D "rsvd0";=0A=
				nvidia,pull =3D <0x0>;=0A=
				nvidia,tristate =3D <0x0>;=0A=
				nvidia,enable-input =3D <0x0>;=0A=
			};=0A=
=0A=
			temp_alert_px4 {=0A=
				nvidia,pins =3D "temp_alert_px4";=0A=
				nvidia,function =3D "rsvd0";=0A=
				nvidia,pull =3D <0x2>;=0A=
				nvidia,tristate =3D <0x0>;=0A=
				nvidia,enable-input =3D <0x1>;=0A=
			};=0A=
=0A=
			button_power_on_px5 {=0A=
				nvidia,pins =3D "button_power_on_px5";=0A=
				nvidia,function =3D "rsvd0";=0A=
				nvidia,pull =3D <0x2>;=0A=
				nvidia,tristate =3D <0x0>;=0A=
				nvidia,enable-input =3D <0x1>;=0A=
			};=0A=
=0A=
			button_vol_up_px6 {=0A=
				nvidia,pins =3D "button_vol_up_px6";=0A=
				nvidia,function =3D "rsvd0";=0A=
				nvidia,pull =3D <0x2>;=0A=
				nvidia,tristate =3D <0x0>;=0A=
				nvidia,enable-input =3D <0x1>;=0A=
			};=0A=
=0A=
			button_home_py1 {=0A=
				nvidia,pins =3D "button_home_py1";=0A=
				nvidia,function =3D "rsvd0";=0A=
				nvidia,pull =3D <0x2>;=0A=
				nvidia,tristate =3D <0x0>;=0A=
				nvidia,enable-input =3D <0x1>;=0A=
			};=0A=
=0A=
			lcd_bl_en_pv1 {=0A=
				nvidia,pins =3D "lcd_bl_en_pv1";=0A=
				nvidia,function =3D "rsvd0";=0A=
				nvidia,pull =3D <0x0>;=0A=
				nvidia,tristate =3D <0x0>;=0A=
				nvidia,enable-input =3D <0x1>;=0A=
			};=0A=
=0A=
			pz2 {=0A=
				nvidia,pins =3D "pz2";=0A=
				nvidia,function =3D "rsvd2";=0A=
				nvidia,pull =3D <0x2>;=0A=
				nvidia,tristate =3D <0x0>;=0A=
				nvidia,enable-input =3D <0x1>;=0A=
			};=0A=
=0A=
			pz3 {=0A=
				nvidia,pins =3D "pz3";=0A=
				nvidia,function =3D "rsvd1";=0A=
				nvidia,pull =3D <0x0>;=0A=
				nvidia,tristate =3D <0x0>;=0A=
				nvidia,enable-input =3D <0x0>;=0A=
			};=0A=
=0A=
			wifi_en_ph0 {=0A=
				nvidia,pins =3D "wifi_en_ph0";=0A=
				nvidia,function =3D "rsvd0";=0A=
				nvidia,pull =3D <0x0>;=0A=
				nvidia,tristate =3D <0x0>;=0A=
				nvidia,enable-input =3D <0x0>;=0A=
			};=0A=
=0A=
			wifi_wake_ap_ph2 {=0A=
				nvidia,pins =3D "wifi_wake_ap_ph2";=0A=
				nvidia,function =3D "rsvd0";=0A=
				nvidia,pull =3D <0x1>;=0A=
				nvidia,tristate =3D <0x0>;=0A=
				nvidia,enable-input =3D <0x1>;=0A=
			};=0A=
=0A=
			ap_wake_bt_ph3 {=0A=
				nvidia,pins =3D "ap_wake_bt_ph3";=0A=
				nvidia,function =3D "rsvd0";=0A=
				nvidia,pull =3D <0x0>;=0A=
				nvidia,tristate =3D <0x0>;=0A=
				nvidia,enable-input =3D <0x0>;=0A=
			};=0A=
=0A=
			bt_rst_ph4 {=0A=
				nvidia,pins =3D "bt_rst_ph4";=0A=
				nvidia,function =3D "rsvd0";=0A=
				nvidia,pull =3D <0x0>;=0A=
				nvidia,tristate =3D <0x0>;=0A=
				nvidia,enable-input =3D <0x0>;=0A=
			};=0A=
=0A=
			bt_wake_ap_ph5 {=0A=
				nvidia,pins =3D "bt_wake_ap_ph5";=0A=
				nvidia,function =3D "rsvd0";=0A=
				nvidia,pull =3D <0x2>;=0A=
				nvidia,tristate =3D <0x0>;=0A=
				nvidia,enable-input =3D <0x1>;=0A=
			};=0A=
=0A=
			ph6 {=0A=
				nvidia,pins =3D "ph6";=0A=
				nvidia,function =3D "rsvd0";=0A=
				nvidia,pull =3D <0x2>;=0A=
				nvidia,tristate =3D <0x0>;=0A=
				nvidia,enable-input =3D <0x1>;=0A=
			};=0A=
=0A=
			ap_wake_nfc_ph7 {=0A=
				nvidia,pins =3D "ap_wake_nfc_ph7";=0A=
				nvidia,function =3D "rsvd0";=0A=
				nvidia,pull =3D <0x0>;=0A=
				nvidia,tristate =3D <0x0>;=0A=
				nvidia,enable-input =3D <0x0>;=0A=
			};=0A=
=0A=
			nfc_en_pi0 {=0A=
				nvidia,pins =3D "nfc_en_pi0";=0A=
				nvidia,function =3D "rsvd0";=0A=
				nvidia,pull =3D <0x0>;=0A=
				nvidia,tristate =3D <0x0>;=0A=
				nvidia,enable-input =3D <0x0>;=0A=
			};=0A=
=0A=
			nfc_int_pi1 {=0A=
				nvidia,pins =3D "nfc_int_pi1";=0A=
				nvidia,function =3D "rsvd0";=0A=
				nvidia,pull =3D <0x0>;=0A=
				nvidia,tristate =3D <0x1>;=0A=
				nvidia,enable-input =3D <0x1>;=0A=
			};=0A=
=0A=
			gps_en_pi2 {=0A=
				nvidia,pins =3D "gps_en_pi2";=0A=
				nvidia,function =3D "rsvd0";=0A=
				nvidia,pull =3D <0x0>;=0A=
				nvidia,tristate =3D <0x0>;=0A=
				nvidia,enable-input =3D <0x0>;=0A=
			};=0A=
=0A=
			pcc7 {=0A=
				nvidia,pins =3D "pcc7";=0A=
				nvidia,function =3D "rsvd0";=0A=
				nvidia,pull =3D <0x0>;=0A=
				nvidia,tristate =3D <0x0>;=0A=
				nvidia,enable-input =3D <0x0>;=0A=
				nvidia,io-high-voltage =3D <0x1>;=0A=
			};=0A=
=0A=
			usb_vbus_en0_pcc4 {=0A=
				nvidia,pins =3D "usb_vbus_en0_pcc4";=0A=
				nvidia,function =3D "rsvd1";=0A=
				nvidia,pull =3D <0x2>;=0A=
				nvidia,tristate =3D <0x0>;=0A=
				nvidia,enable-input =3D <0x1>;=0A=
				nvidia,io-high-voltage =3D <0x0>;=0A=
			};=0A=
		};=0A=
=0A=
		unused_lowpower {=0A=
			linux,phandle =3D <0x3a>;=0A=
			phandle =3D <0x3a>;=0A=
=0A=
			aud_mclk_pbb0 {=0A=
				nvidia,pins =3D "aud_mclk_pbb0";=0A=
				nvidia,function =3D "rsvd1";=0A=
				nvidia,pull =3D <0x1>;=0A=
				nvidia,tristate =3D <0x1>;=0A=
				nvidia,enable-input =3D <0x0>;=0A=
			};=0A=
=0A=
			dvfs_clk_pbb2 {=0A=
				nvidia,pins =3D "dvfs_clk_pbb2";=0A=
				nvidia,function =3D "rsvd0";=0A=
				nvidia,pull =3D <0x1>;=0A=
				nvidia,tristate =3D <0x1>;=0A=
				nvidia,enable-input =3D <0x0>;=0A=
			};=0A=
=0A=
			gpio_x1_aud_pbb3 {=0A=
				nvidia,pins =3D "gpio_x1_aud_pbb3";=0A=
				nvidia,function =3D "rsvd0";=0A=
				nvidia,pull =3D <0x1>;=0A=
				nvidia,tristate =3D <0x1>;=0A=
				nvidia,enable-input =3D <0x0>;=0A=
			};=0A=
=0A=
			gpio_x3_aud_pbb4 {=0A=
				nvidia,pins =3D "gpio_x3_aud_pbb4";=0A=
				nvidia,function =3D "rsvd0";=0A=
				nvidia,pull =3D <0x1>;=0A=
				nvidia,tristate =3D <0x1>;=0A=
				nvidia,enable-input =3D <0x0>;=0A=
			};=0A=
=0A=
			dap1_din_pb1 {=0A=
				nvidia,pins =3D "dap1_din_pb1";=0A=
				nvidia,function =3D "rsvd1";=0A=
				nvidia,pull =3D <0x1>;=0A=
				nvidia,tristate =3D <0x1>;=0A=
				nvidia,enable-input =3D <0x0>;=0A=
			};=0A=
=0A=
			dap1_dout_pb2 {=0A=
				nvidia,pins =3D "dap1_dout_pb2";=0A=
				nvidia,function =3D "rsvd1";=0A=
				nvidia,pull =3D <0x1>;=0A=
				nvidia,tristate =3D <0x1>;=0A=
				nvidia,enable-input =3D <0x0>;=0A=
			};=0A=
=0A=
			dap1_fs_pb0 {=0A=
				nvidia,pins =3D "dap1_fs_pb0";=0A=
				nvidia,function =3D "rsvd1";=0A=
				nvidia,pull =3D <0x1>;=0A=
				nvidia,tristate =3D <0x1>;=0A=
				nvidia,enable-input =3D <0x0>;=0A=
			};=0A=
=0A=
			dap1_sclk_pb3 {=0A=
				nvidia,pins =3D "dap1_sclk_pb3";=0A=
				nvidia,function =3D "rsvd1";=0A=
				nvidia,pull =3D <0x1>;=0A=
				nvidia,tristate =3D <0x1>;=0A=
				nvidia,enable-input =3D <0x0>;=0A=
			};=0A=
=0A=
			spi2_mosi_pb4 {=0A=
				nvidia,pins =3D "spi2_mosi_pb4";=0A=
				nvidia,function =3D "rsvd2";=0A=
				nvidia,pull =3D <0x1>;=0A=
				nvidia,tristate =3D <0x1>;=0A=
				nvidia,enable-input =3D <0x0>;=0A=
			};=0A=
=0A=
			spi2_miso_pb5 {=0A=
				nvidia,pins =3D "spi2_miso_pb5";=0A=
				nvidia,function =3D "rsvd2";=0A=
				nvidia,pull =3D <0x1>;=0A=
				nvidia,tristate =3D <0x1>;=0A=
				nvidia,enable-input =3D <0x0>;=0A=
			};=0A=
=0A=
			spi2_sck_pb6 {=0A=
				nvidia,pins =3D "spi2_sck_pb6";=0A=
				nvidia,function =3D "rsvd2";=0A=
				nvidia,pull =3D <0x1>;=0A=
				nvidia,tristate =3D <0x1>;=0A=
				nvidia,enable-input =3D <0x0>;=0A=
			};=0A=
=0A=
			spi2_cs0_pb7 {=0A=
				nvidia,pins =3D "spi2_cs0_pb7";=0A=
				nvidia,function =3D "rsvd2";=0A=
				nvidia,pull =3D <0x1>;=0A=
				nvidia,tristate =3D <0x1>;=0A=
				nvidia,enable-input =3D <0x0>;=0A=
			};=0A=
=0A=
			spi2_cs1_pdd0 {=0A=
				nvidia,pins =3D "spi2_cs1_pdd0";=0A=
				nvidia,function =3D "rsvd1";=0A=
				nvidia,pull =3D <0x1>;=0A=
				nvidia,tristate =3D <0x1>;=0A=
				nvidia,enable-input = <0x0>;
			};

			dmic3_clk_pe4 {
				nvidia,pins = "dmic3_clk_pe4";
				nvidia,function = "rsvd2";
				nvidia,pull = <0x1>;
				nvidia,tristate = <0x1>;
				nvidia,enable-input = <0x0>;
			};

			dmic3_dat_pe5 {
				nvidia,pins = "dmic3_dat_pe5";
				nvidia,function = "rsvd2";
				nvidia,pull = <0x1>;
				nvidia,tristate = <0x1>;
				nvidia,enable-input = <0x0>;
			};

			pe6 {
				nvidia,pins = "pe6";
				nvidia,function = "rsvd0";
				nvidia,pull = <0x1>;
				nvidia,tristate = <0x1>;
				nvidia,enable-input = <0x0>;
			};

			cam_rst_ps4 {
				nvidia,pins = "cam_rst_ps4";
				nvidia,function = "rsvd1";
				nvidia,pull = <0x1>;
				nvidia,tristate = <0x1>;
				nvidia,enable-input = <0x0>;
			};

			cam_af_en_ps5 {
				nvidia,pins = "cam_af_en_ps5";
				nvidia,function = "rsvd2";
				nvidia,pull = <0x1>;
				nvidia,tristate = <0x1>;
				nvidia,enable-input = <0x0>;
			};

			cam_flash_en_ps6 {
				nvidia,pins = "cam_flash_en_ps6";
				nvidia,function = "rsvd2";
				nvidia,pull = <0x1>;
				nvidia,tristate = <0x1>;
				nvidia,enable-input = <0x0>;
			};

			cam1_strobe_pt1 {
				nvidia,pins = "cam1_strobe_pt1";
				nvidia,function = "rsvd1";
				nvidia,pull = <0x1>;
				nvidia,tristate = <0x1>;
				nvidia,enable-input = <0x0>;
			};

			motion_int_px2 {
				nvidia,pins = "motion_int_px2";
				nvidia,function = "rsvd0";
				nvidia,pull = <0x1>;
				nvidia,tristate = <0x1>;
				nvidia,enable-input = <0x0>;
			};

			touch_rst_pv6 {
				nvidia,pins = "touch_rst_pv6";
				nvidia,function = "rsvd0";
				nvidia,pull = <0x1>;
				nvidia,tristate = <0x1>;
				nvidia,enable-input = <0x0>;
			};

			touch_clk_pv7 {
				nvidia,pins = "touch_clk_pv7";
				nvidia,function = "rsvd1";
				nvidia,pull = <0x1>;
				nvidia,tristate = <0x1>;
				nvidia,enable-input = <0x0>;
			};

			touch_int_px1 {
				nvidia,pins = "touch_int_px1";
				nvidia,function = "rsvd0";
				nvidia,pull = <0x1>;
				nvidia,tristate = <0x1>;
				nvidia,enable-input = <0x0>;
			};

			modem_wake_ap_px0 {
				nvidia,pins = "modem_wake_ap_px0";
				nvidia,function = "rsvd0";
				nvidia,pull = <0x1>;
				nvidia,tristate = <0x1>;
				nvidia,enable-input = <0x0>;
			};

			button_vol_down_px7 {
				nvidia,pins = "button_vol_down_px7";
				nvidia,function = "rsvd0";
				nvidia,pull = <0x1>;
				nvidia,tristate = <0x1>;
				nvidia,enable-input = <0x0>;
			};

			button_slide_sw_py0 {
				nvidia,pins = "button_slide_sw_py0";
				nvidia,function = "rsvd0";
				nvidia,pull = <0x1>;
				nvidia,tristate = <0x1>;
				nvidia,enable-input = <0x0>;
			};

			lcd_te_py2 {
				nvidia,pins = "lcd_te_py2";
				nvidia,function = "rsvd1";
				nvidia,pull = <0x1>;
				nvidia,tristate = <0x1>;
				nvidia,enable-input = <0x0>;
			};

			lcd_bl_pwm_pv0 {
				nvidia,pins = "lcd_bl_pwm_pv0";
				nvidia,function = "rsvd3";
				nvidia,pull = <0x1>;
				nvidia,tristate = <0x1>;
				nvidia,enable-input = <0x0>;
			};

			lcd_rst_pv2 {
				nvidia,pins = "lcd_rst_pv2";
				nvidia,function = "rsvd0";
				nvidia,pull = <0x1>;
				nvidia,tristate = <0x1>;
				nvidia,enable-input = <0x0>;
			};

			lcd_gpio1_pv3 {
				nvidia,pins = "lcd_gpio1_pv3";
				nvidia,function = "rsvd1";
				nvidia,pull = <0x1>;
				nvidia,tristate = <0x1>;
				nvidia,enable-input = <0x0>;
			};

			ap_ready_pv5 {
				nvidia,pins = "ap_ready_pv5";
				nvidia,function = "rsvd0";
				nvidia,pull = <0x1>;
				nvidia,tristate = <0x1>;
				nvidia,enable-input = <0x0>;
			};

			pz0 {
				nvidia,pins = "pz0";
				nvidia,function = "rsvd1";
				nvidia,pull = <0x1>;
				nvidia,tristate = <0x1>;
				nvidia,enable-input = <0x0>;
			};

			pz4 {
				nvidia,pins = "pz4";
				nvidia,function = "rsvd1";
				nvidia,pull = <0x1>;
				nvidia,tristate = <0x1>;
				nvidia,enable-input = <0x0>;
			};

			clk_req {
				nvidia,pins = "clk_req";
				nvidia,function = "rsvd1";
				nvidia,pull = <0x1>;
				nvidia,tristate = <0x1>;
				nvidia,enable-input = <0x0>;
			};

			cpu_pwr_req {
				nvidia,pins = "cpu_pwr_req";
				nvidia,function = "rsvd1";
				nvidia,pull = <0x1>;
				nvidia,tristate = <0x1>;
				nvidia,enable-input = <0x0>;
			};

			dap4_din_pj5 {
				nvidia,pins = "dap4_din_pj5";
				nvidia,function = "rsvd1";
				nvidia,pull = <0x1>;
				nvidia,tristate = <0x1>;
				nvidia,enable-input = <0x0>;
			};

			dap4_dout_pj6 {
				nvidia,pins = "dap4_dout_pj6";
				nvidia,function = "rsvd1";
				nvidia,pull = <0x1>;
				nvidia,tristate = <0x1>;
				nvidia,enable-input = <0x0>;
			};

			dap4_fs_pj4 {
				nvidia,pins = "dap4_fs_pj4";
				nvidia,function = "rsvd1";
				nvidia,pull = <0x1>;
				nvidia,tristate = <0x1>;
				nvidia,enable-input = <0x0>;
			};

			dap4_sclk_pj7 {
				nvidia,pins = "dap4_sclk_pj7";
				nvidia,function = "rsvd1";
				nvidia,pull = <0x1>;
				nvidia,tristate = <0x1>;
				nvidia,enable-input = <0x0>;
			};

			uart2_rts_pg2 {
				nvidia,pins = "uart2_rts_pg2";
				nvidia,function = "rsvd2";
				nvidia,pull = <0x1>;
				nvidia,tristate = <0x1>;
				nvidia,enable-input = <0x0>;
			};

			uart2_cts_pg3 {
				nvidia,pins = "uart2_cts_pg3";
				nvidia,function = "rsvd2";
				nvidia,pull = <0x1>;
				nvidia,tristate = <0x1>;
				nvidia,enable-input = <0x0>;
			};

			uart1_rts_pu2 {
				nvidia,pins = "uart1_rts_pu2";
				nvidia,function = "rsvd1";
				nvidia,pull = <0x1>;
				nvidia,tristate = <0x1>;
				nvidia,enable-input = <0x0>;
			};

			uart1_cts_pu3 {
				nvidia,pins = "uart1_cts_pu3";
				nvidia,function = "rsvd1";
				nvidia,pull = <0x1>;
				nvidia,tristate = <0x1>;
				nvidia,enable-input = <0x0>;
			};

			pk0 {
				nvidia,pins = "pk0";
				nvidia,function = "rsvd2";
				nvidia,pull = <0x1>;
				nvidia,tristate = <0x1>;
				nvidia,enable-input = <0x0>;
			};

			pk1 {
				nvidia,pins = "pk1";
				nvidia,function = "rsvd2";
				nvidia,pull = <0x1>;
				nvidia,tristate = <0x1>;
				nvidia,enable-input = <0x0>;
			};

			pk2 {
				nvidia,pins = "pk2";
				nvidia,function = "rsvd2";
				nvidia,pull = <0x1>;
				nvidia,tristate = <0x1>;
				nvidia,enable-input = <0x0>;
			};

			pk3 {
				nvidia,pins = "pk3";
				nvidia,function = "rsvd2";
				nvidia,pull = <0x1>;
				nvidia,tristate = <0x1>;
				nvidia,enable-input = <0x0>;
			};

			pk4 {
				nvidia,pins = "pk4";
				nvidia,function = "rsvd1";
				nvidia,pull = <0x1>;
				nvidia,tristate = <0x1>;
				nvidia,enable-input = <0x0>;
			};

			pk5 {
				nvidia,pins = "pk5";
				nvidia,function = "rsvd1";
				nvidia,pull = <0x1>;
				nvidia,tristate = <0x1>;
				nvidia,enable-input = <0x0>;
			};

			pk6 {
				nvidia,pins = "pk6";
				nvidia,function = "rsvd1";
				nvidia,pull = <0x1>;
				nvidia,tristate = <0x1>;
				nvidia,enable-input = <0x0>;
			};

			pk7 {
				nvidia,pins = "pk7";
				nvidia,function = "rsvd1";
				nvidia,pull = <0x1>;
				nvidia,tristate = <0x1>;
				nvidia,enable-input = <0x0>;
			};

			pl0 {
				nvidia,pins = "pl0";
				nvidia,function = "rsvd0";
				nvidia,pull = <0x1>;
				nvidia,tristate = <0x1>;
				nvidia,enable-input = <0x0>;
			};

			pl1 {
				nvidia,pins = "pl1";
				nvidia,function = "rsvd1";
				nvidia,pull = <0x1>;
				nvidia,tristate = <0x1>;
				nvidia,enable-input = <0x0>;
			};

			spi1_mosi_pc0 {
				nvidia,pins = "spi1_mosi_pc0";
				nvidia,function = "rsvd1";
				nvidia,pull = <0x1>;
				nvidia,tristate = <0x1>;
				nvidia,enable-input = <0x0>;
			};

			spi1_miso_pc1 {
				nvidia,pins = "spi1_miso_pc1";
				nvidia,function = "rsvd1";
				nvidia,pull = <0x1>;
				nvidia,tristate = <0x1>;
				nvidia,enable-input = <0x0>;
			};

			spi1_sck_pc2 {
				nvidia,pins = "spi1_sck_pc2";
				nvidia,function = "rsvd1";
				nvidia,pull = <0x1>;
				nvidia,tristate = <0x1>;
				nvidia,enable-input = <0x0>;
			};

			spi1_cs0_pc3 {
				nvidia,pins = "spi1_cs0_pc3";
				nvidia,function = "rsvd1";
				nvidia,pull = <0x1>;
				nvidia,tristate = <0x1>;
				nvidia,enable-input = <0x0>;
			};

			spi1_cs1_pc4 {
				nvidia,pins = "spi1_cs1_pc4";
				nvidia,function = "rsvd1";
				nvidia,pull = <0x1>;
				nvidia,tristate = <0x1>;
				nvidia,enable-input = <0x0>;
			};

			spi4_mosi_pc7 {
				nvidia,pins = "spi4_mosi_pc7";
				nvidia,function = "rsvd1";
				nvidia,pull = <0x1>;
				nvidia,tristate = <0x1>;
				nvidia,enable-input = <0x0>;
			};

			spi4_miso_pd0 {
				nvidia,pins = "spi4_miso_pd0";
				nvidia,function = "rsvd1";
				nvidia,pull = <0x1>;
				nvidia,tristate = <0x1>;
				nvidia,enable-input = <0x0>;
			};

			spi4_sck_pc5 {
				nvidia,pins = "spi4_sck_pc5";
				nvidia,function = "rsvd1";
				nvidia,pull = <0x1>;
				nvidia,tristate = <0x1>;
				nvidia,enable-input = <0x0>;
			};

			spi4_cs0_pc6 {
				nvidia,pins = "spi4_cs0_pc6";
				nvidia,function = "rsvd1";
				nvidia,pull = <0x1>;
				nvidia,tristate = <0x1>;
				nvidia,enable-input = <0x0>;
			};

			wifi_rst_ph1 {
				nvidia,pins = "wifi_rst_ph1";
				nvidia,function = "rsvd0";
				nvidia,pull = <0x1>;
				nvidia,tristate = <0x1>;
				nvidia,enable-input = <0x0>;
			};

			gps_rst_pi3 {
				nvidia,pins = "gps_rst_pi3";
				nvidia,function = "rsvd0";
				nvidia,pull = <0x1>;
				nvidia,tristate = <0x1>;
				nvidia,enable-input = <0x0>;
			};

			spdif_out_pcc2 {
				nvidia,pins = "spdif_out_pcc2";
				nvidia,function = "rsvd1";
				nvidia,pull = <0x1>;
				nvidia,tristate = <0x1>;
				nvidia,enable-input = <0x0>;
			};

			spdif_in_pcc3 {
				nvidia,pins = "spdif_in_pcc3";
				nvidia,function = "rsvd1";
				nvidia,pull = <0x1>;
				nvidia,tristate = <0x1>;
				nvidia,enable-input = <0x0>;
			};

			usb_vbus_en1_pcc5 {
				nvidia,pins = "usb_vbus_en1_pcc5";
				nvidia,function = "rsvd1";
				nvidia,pull = <0x1>;
				nvidia,tristate = <0x1>;
				nvidia,enable-input = <0x0>;
				nvidia,io-high-voltage = <0x0>;
			};
		};

		drive {
			linux,phandle = <0x39>;
			phandle = <0x39>;
		};
	};

	gpio@6000d000 {
		compatible = "nvidia,tegra210-gpio", "nvidia,tegra124-gpio", "nvidia,tegra30-gpio";
		reg = <0x0 0x6000d000 0x0 0x1000>;
		interrupts = <0x0 0x20 0x4 0x0 0x21 0x4 0x0 0x22 0x4 0x0 0x23 0x4 0x0 0x37 0x4 0x0 0x57 0x4 0x0 0x59 0x4 0x0 0x7d 0x4>;
		#gpio-cells = <0x2>;
		gpio-controller;
		#interrupt-cells = <0x2>;
		interrupt-controller;
		gpio-ranges = <0x3b 0x0 0x0 0xf6>;
		status = "okay";
		gpio-init-names = "default";
		gpio-init-0 = <0x3c>;
		gpio-line-names = [00 00 00 00 00 00 00 00 00 00 00 00 53 50 49 31 5f 4d 4f 53 49 00 53 50 49 31 5f 4d 49 53 4f 00 53 50 49 31 5f 53 43 4b 00 53 50 49 31 5f 43 53 30 00 53 50 49 30 5f 4d 4f 53 49 00 53 50 49 30 5f 4d 49 53 4f 00 53 50 49 30 5f 53 43 4b 00 53 50 49 30 5f 43 53 30 00 53 50 49 30 5f 43 53 31 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 47 50 49 4f 31 33 00 00 00 00 00 00 00 00 00 00 00 00 55 41 52 54 31 5f 52 54 53 00 55 41 52 54 31 5f 43 54 53 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 49 32 53 30 5f 46 53 00 49 32 53 30 5f 44 49 4e 00 49 32 53 30 5f 44 4f 55 54 00 49 32 53 30 5f 53 43 4c 4b 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 47 50 49 4f 30 31 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 47 50 49 4f 30 37 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 47 50 49 4f 31 32 00 00 00 00 00 00 47 50 49 4f 31 31 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 47 50 49 4f 30 39 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 53 50 49 31 5f 43 53 31 00 00 00 00 00 00 00 00];
		linux,phandle = <0x56>;
		phandle = <0x56>;

		camera-control-output-low {
			gpio-hog;
			output-low;
			gpios = <0x97 0x0 0x98 0x0>;
			label = "cam1-pwdn", "cam2-pwdn";
		};

		e2614-rt5658-audio {
			gpio-hog;
			function;
			gpios = <0x4c 0x0 0x4d 0x0 0x4e 0x0 0x4f 0x0 0xd8 0x0 0x95 0x0>;
			label = "I2S4_LRCLK", "I2S4_SDIN", "I2S4_SDOUT", "I2S4_CLK", "AUDIO_MCLK", "AUD_RST";
			status = "disabled";
			linux,phandle = <0xdb>;
			phandle = <0xdb>;
		};

		system-suspend-gpio {
			status = "okay";
			gpio-hog;
			output-high;
			gpio-suspend;
			suspend-output-low;
			gpios = <0x6 0x0>;
			linux,phandle = <0xb2>;
			phandle = <0xb2>;
		};

		default {
			gpio-input = <0x5 0xbc 0xbd 0xbe 0xc1 0xa9 0xca 0x3a 0x3d 0x3e 0x41 0xe4>;
			gpio-output-low = <0x97 0x98 0xcb 0x38 0x3b 0x3c 0x3f 0x40 0x42>;
			gpio-output-high = <0x6 0xbb 0xe7>;
			linux,phandle = <0x3c>;
			phandle = <0x3c>;
		};
	};

	xotg {
		compatible = "nvidia,tegra210-xotg";
		interrupts = <0x0 0x31 0x4 0x0 0x14 0x4>;
		status = "disabled";
		#extcon-cells = <0x1>;
	};

	mailbox@70098000 {
		compatible = "nvidia,tegra210-xusb-mbox";
		reg = <0x0 0x70098000 0x0 0x1000>;
		interrupts = <0x0 0x28 0x4>;
		#mbox-cells = <0x0>;
		status = "okay";
		linux,phandle = <0x46>;
		phandle = <0x46>;
	};

	xusb_padctl@7009f000 {
		compatible = "nvidia,tegra210-xusb-padctl";
		reg = <0x0 0x7009f000 0x0 0x1000>;
		reg-names = "padctl";
		resets = <0x21 0x8e>;
		reset-names = "padctl";
		status = "okay";
		vddio-hsic-supply = <0x3d>;
		avdd_pll_uerefe-supply = <0x3e>;
		hvdd_pex_pll_e-supply = <0x36>;
		dvdd_pex_pll-supply = <0x3f>;
		hvddio_pex-supply = <0x36>;
		dvddio_pex-supply = <0x3f>;
		hvdd_sata-supply = <0x36>;
		dvdd_sata_pll-supply = <0x40>;
		hvddio_sata-supply = <0x36>;
		dvddio_sata-supply = <0x40>;
		linux,phandle = <0x44>;
		phandle = <0x44>;

		pads {

			usb2 {
				clocks = <0x21 0xd2>;
				clock-names = "trk";
				status = "okay";

				lanes {

					usb2-0 {
						status = "okay";
						#phy-cells = <0x0>;
						nvidia,function = "xusb";
						linux,phandle = <0x45>;
						phandle = <0x45>;
					};

					usb2-1 {
						status = "okay";
						#phy-cells = <0x0>;
						nvidia,function = "xusb";
						linux,phandle = <0x49>;
						phandle = <0x49>;
					};

					usb2-2 {
						status = "okay";
						#phy-cells = <0x0>;
						nvidia,function = "xusb";
						linux,phandle = <0x4a>;
						phandle = <0x4a>;
					};

					usb2-3 {
						status = "disabled";
						#phy-cells = <0x0>;
					};
				};
			};

			pcie {
				clocks = <0x21 0x107>;
				clock-names = "pll";
				resets = <0x21 0xcd>;
				reset-names = "phy";
				status = "okay";

				lanes {

					pcie-0 {
						status = "okay";
						#phy-cells = <0x0>;
						nvidia,function = "pcie-x1";
						linux,phandle = <0x85>;
						phandle = <0x85>;
					};

					pcie-1 {
						status = "okay";
						#phy-cells = <0x0>;
						nvidia,function = "pcie-x4";
						linux,phandle = <0x81>;
						phandle = <0x81>;
					};

					pcie-2 {
						status = "okay";
						#phy-cells = <0x0>;
						nvidia,function = "pcie-x4";
						linux,phandle = <0x82>;
						phandle = <0x82>;
					};

					pcie-3 {
						status = "okay";
						#phy-cells = <0x0>;
						nvidia,function = "pcie-x4";
						linux,phandle = <0x83>;
						phandle = <0x83>;
					};

					pcie-4 {
						status = "okay";
						#phy-cells = <0x0>;
						nvidia,function = "pcie-x4";
						linux,phandle = <0x84>;
						phandle = <0x84>;
					};

					pcie-5 {
						status = "okay";
						#phy-cells = <0x0>;
						nvidia,function = "xusb";
					};

					pcie-6 {
						status = "okay";
						#phy-cells = <0x0>;
						nvidia,function = "xusb";
						linux,phandle = <0x4b>;
						phandle = <0x4b>;
					};
				};
			};

			sata {
				clocks = <0x21 0x107>;
				clock-names = "pll";
				resets = <0x21 0xcc>;
				reset-names = "phy";
				status = "disabled";

				lanes {

					sata-0 {
						status = "disabled";
						#phy-cells = <0x0>;
					};
				};
			};

			hsic {
				clocks = <0x21 0xd1>;
				clock-names = "trk";
				status = "disabled";

				lanes {

					hsic-0 {
						status = "disabled";
						#phy-cells = <0x0>;
					};
				};
			};
		};

		ports {

			usb2-0 {
				status = "okay";
				vbus-supply = <0x41>;
				mode = "otg";
				nvidia,usb3-port-fake = <0x3>;
			};

			usb2-1 {
				status = "okay";
				vbus-supply = <0x42>;
				mode = "host";
				linux,phandle = <0xb4>;
				phandle = <0xb4>;
			};

			usb2-2 {
				status = "okay";
				vbus-supply = <0x43>;
				mode = "host";
			};

			usb2-3 {
				status = "disabled";
			};

			usb3-0 {
				status = "okay";
				nvidia,usb2-companion = <0x1>;
			};

			usb3-1 {
				status = "disabled";
			};

			usb3-2 {
				status = "disabled";
			};

			usb3-3 {
				status = "disabled";
			};

			hsic-0 {
				status = "disabled";
			};
		};

		prod-settings {
			#prod-cells = <0x4>;

			prod_c_bias {
				prod = <0x0 0x284 0x3f 0x3a>;
			};

			prod_c_bias_a02 {
				prod = <0x0 0x284 0x3f 0x38>;
			};

			prod_c_utmi0 {
				prod = <0x0 0x84 0x20 0x40>;
			};

			prod_c_utmi1 {
				prod = <0x0 0xc4 0x20 0x40>;
			};

			prod_c_utmi2 {
				prod = <0x0 0x104 0x20 0x40>;
			};

			prod_c_utmi3 {
				prod = <0x0 0x144 0x20 0x40>;
			};

			prod_c_ss0 {
				prod = <0x0 0xa60 0x30000 0x20000 0x0 0xa64 0xffff 0xfc 0x0 0xa68 0xffffffff 0xc0077f1f 0x0 0xa74 0xffffffff 0xfcf01368>;
			};

			prod_c_ss1 {
				prod = <0x0 0xaa0 0x30000 0x20000 0x0 0xaa4 0xffff 0xfc 0x0 0xaa8 0xffffffff 0xc0077f1f 0x0 0xab4 0xffffffff 0xfcf01368>;
			};

			prod_c_ss2 {
				prod = <0x0 0xae0 0x30000 0x20000 0x0 0xae4 0xffff 0xfc 0x0 0xae8 0xffffffff 0xc0077f1f 0x0 0xaf4 0xffffffff 0xfcf01368>;
			};

			prod_c_ss3 {
				prod = <0x0 0xb20 0x30000 0x20000 0x0 0xb24 0xffff 0xfc 0x0 0xb28 0xffffffff 0xc0077f1f 0x0 0xb34 0xffffffff 0xfcf01368>;
			};

			prod_c_hsic0 {
				prod = <0x0 0x344 0x1f 0x1c>;
			};

			prod_c_hsic1 {
				prod = <0x0 0x344 0x1f 0x1c>;
			};
		};
	};

	usb_cd {
		compatible = "nvidia,tegra210-usb-cd";
		nvidia,xusb-padctl = <0x44>;
		status = "disabled";
		reg = <0x0 0x7009f000 0x0 0x1000>;
		#extcon-cells = <0x1>;
		dt-override-status-odm-data = <0x1000000 0x1000000>;
		phys = <0x45>;
		phy-names = "otg-phy";
		linux,phandle = <0x9a>;
		phandle = <0x9a>;
	};

	pinctrl@7009f000 {
		compatible = "nvidia,tegra21x-padctl-uphy";
		reg = <0x0 0x7009f000 0x0 0x1000>;
		reg-names = "padctl";
		resets = <0x21 0x8e 0x21 0xcc 0x21 0xcd>;
		reset-names = "padctl", "sata_uphy", "pex_uphy";
		clocks = <0x21 0xd1 0x21 0xd2 0x21 0x107>;
		clock-names = "hsic_trk", "usb2_trk", "pll_e";
		interrupts = <0x0 0x31 0x4>;
		mboxes = <0x46>;
		mbox-names = "xusb";
		#phy-cells = <0x1>;
		status = "disabled";
		linux,phandle = <0x106>;
		phandle = <0x106>;
	};

	xusb@70090000 {
		compatible = "nvidia,tegra210-xhci";
		reg = <0x0 0x70090000 0x0 0x8000 0x0 0x70098000 0x0 0x1000 0x0 0x70099000 0x0 0x1000>;
		interrupts = <0x0 0x27 0x4 0x0 0x28 0x4 0x0 0x31 0x4>;
		nvidia,xusb-padctl = <0x44>;
		clocks = <0x21 0x59 0x21 0x11d 0x21 0x9c 0x21 0x11f 0x21 0x122 0x21 0x11e 0x21 0xff 0x21 0xe9 0x21 0x107>;
		clock-names = "xusb_host", "xusb_falcon_src", "xusb_ss", "xusb_ss_src", "xusb_hs_src", "xusb_fs_src", "pll_u_480m", "clk_m", "pll_e";
		iommus = <0x2b 0x14>;
		status = "okay";
		hvdd_usb-supply = <0x47>;
		avdd_pll_utmip-supply = <0x36>;
		vddio_hsic-supply = <0x3d>;
		avddio_usb-supply = <0x3f>;
		dvdd_sata-supply = <0x40>;
		avddio_pll_uerefe-supply = <0x3e>;
		extcon-cables = <0x48 0x1>;
		extcon-cable-names = "id";
		phys = <0x45 0x49 0x4a 0x4b>;
		phy-names = "usb2-0", "usb2-1", "usb2-2", "usb3-0";
		#extcon-cells = <0x1>;
		nvidia,pmc-wakeup = <0x37 0x1 0x27 0x4 0x37 0x1 0x28 0x4 0x37 0x1 0x29 0x4 0x37 0x1 0x2a 0x4 0x37 0x1 0x2c 0x4>;
		nvidia,boost_cpu_freq = <0x4b0>;
	};

	max16984-cdp {
		compatible = "maxim,max16984-tegra210-cdp-phy";
		#phy-cells = <0x1>;
		status = "disabled";
		linux,phandle = <0x107>;
		phandle = <0x107>;
	};

	serial@70006000 {
		compatible = "nvidia,tegra210-uart", "nvidia,tegra114-hsuart", "nvidia,tegra20-uart";
		reg = <0x0 0x70006000 0x0 0x40>;
		reg-shift = <0x2>;
		interrupts = <0x0 0x24 0x4>;
		iommus = <0x2b 0xe>;
		dmas = <0x4c 0x8 0x4c 0x8>;
		dma-names = "rx", "tx";
		clocks = <0x21 0x6 0x21 0xf3>;
		clock-names = "serial", "parent";
		nvidia,adjust-baud-rates = <0x1c200 0x1c200 0x64>;
		status = "okay";
		console-port;
		sqa-automation-port;
		enable-rx-poll-timer;
		linux,phandle = <0x108>;
		phandle = <0x108>;
	};

	serial@70006040 {
		compatible = "nvidia,tegra114-hsuart";
		reg = <0x0 0x70006040 0x0 0x40>;
		reg-shift = <0x2>;
		interrupts = <0x0 0x25 0x4>;
		iommus = <0x2b 0xe>;
		dmas = <0x4c 0x9 0x4c 0x9>;
		dma-names = "rx", "tx";
		clocks = <0x21 0xe0 0x21 0xf3>;
		clock-names = "serial", "parent";
		resets = <0x21 0x7>;
		reset-names = "serial";
		nvidia,adjust-baud-rates = <0x1c200 0x1c200 0x64>;
		status = "okay";
		linux,phandle = <0x109>;
		phandle = <0x109>;
	};

	serial@70006200 {
		compatible = "nvidia,tegra114-hsuart";
		reg = <0x0 0x70006200 0x0 0x40>;
		reg-shift = <0x2>;
		interrupts = <0x0 0x2e 0x4>;
		iommus = <0x2b 0xe>;
		dmas = <0x4c 0xa 0x4c 0xa>;
		dma-names = "tx";
		clocks = <0x21 0x37 0x21 0xf3>;
		clock-names = "serial", "parent";
		resets = <0x21 0x37>;
		reset-names = "serial";
		nvidia,adjust-baud-rates = <0xe1000 0xe1000 0x64>;
		status = "okay";
		linux,phandle = <0x10a>;
		phandle = <0x10a>;
	};

	serial@70006300 {
		compatible = "nvidia,tegra114-hsuart";
		reg = <0x0 0x70006300 0x0 0x40>;
		reg-shift = <0x2>;
		interrupts = <0x0 0x5a 0x4>;
		iommus = <0x2b 0xe>;
		dmas = <0x4c 0x13 0x4c 0x13>;
		dma-names = "rx", "tx";
		clocks = <0x21 0x41 0x21 0xf3>;
		clock-names = "serial", "parent";
		resets = <0x21 0x41>;
		reset-names = "serial";
		nvidia,adjust-baud-rates = <0x1c200 0x1c200 0x64>;
		status = "disabled";
		linux,phandle = <0x10b>;
		phandle = <0x10b>;
	};

	sound {
		iommus = <0x2b 0x22>;
		dma-mask = <0x0 0xfff00000>;
		iommu-resv-regions = <0x0 0x0 0x0 0x70300000 0x0 0xfff00000 0xffffffff 0xffffffff>;
		iommu-group-id = <0x2>;
		status = "okay";
		compatible = "nvidia,tegra-audio-t210ref-mobile-rt565x";
		nvidia,model = "tegra-snd-t210ref-mobile-rt565x";
		clocks = <0x21 0xf8 0x21 0xf9 0x21 0x78>;
		clock-names = "pll_a", "pll_a_out0", "extern1";
		assigned-clocks = <0x21 0x78>;
		assigned-clock-parents = <0x21 0xf9>;
		nvidia,num-codec-link = <0x4>;
		nvidia,audio-routing = "x Headphone", "x OUT", "x IN", "x Mic", "y Headphone", "y OUT", "y IN", "y Mic", "a IN", "a Mic", "b IN", "b Mic";
		nvidia,xbar = <0x4d>;
		mclk-fs = <0x100>;
		linux,phandle = <0xaf>;
		phandle = <0xaf>;

		nvidia,dai-link-1 {
			link-name = "spdif-dit-0";
			cpu-dai = <0x4e>;
			codec-dai = <0x4f>;
			cpu-dai-name = "I2S4";
			codec-dai-name = "dit-hifi";
			format = "i2s";
			bitclock-slave;
			frame-slave;
			bitclock-noninversion;
			frame-noninversion;
			bit-format = "s16_le";
			srate = <0xbb80>;
			num-channel = <0x2>;
			ignore_suspend;
			name-prefix = [78 00];
			status = "okay";
			linux,phandle = <0xda>;
			phandle = <0xda>;
		};

		nvidia,dai-link-2 {
			link-name = "spdif-dit-1";
			cpu-dai = <0x50>;
			codec-dai = <0x51>;
			cpu-dai-name = "I2S3";
			codec-dai-name = "dit-hifi";
			format = "i2s";
			bitclock-slave;
			frame-slave;
			bitclock-noninversion;
			frame-noninversion;
			bit-format = "s16_le";
			srate = <0xbb80>;
			num-channel = <0x2>;
			ignore_suspend;
			name-prefix = [79 00];
			status = "okay";
		};

		nvidia,dai-link-3 {
			link-name = "spdif-dit-2";
			cpu-dai = <0x52>;
			codec-dai = <0x53>;
			cpu-dai-name = "DMIC1";
			codec-dai-name = "dit-hifi";
			format = "i2s";
			bit-format = "s16_le";
			srate = <0xbb80>;
			ignore_suspend;
			num-channel = <0x2>;
			name-prefix = [61 00];
			status = "okay";
		};

		nvidia,dai-link-4 {
			link-name = "spdif-dit-3";
			cpu-dai = <0x54>;
			codec-dai = <0x55>;
			cpu-dai-name = "DMIC2";
			codec-dai-name = "dit-hifi";
			format = "i2s";
			bit-format = "s16_le";
			srate = <0xbb80>;
			ignore_suspend;
			num-channel = <0x2>;
			name-prefix = [62 00];
			status = "okay";
		};
	};

	sound_ref {
		iommus = <0x2b 0x22>;
		dma-mask = <0x0 0xfff00000>;
		iommu-resv-regions = <0x0 0x0 0x0 0x70300000 0x0 0xfff00000 0xffffffff 0xffffffff>;
		iommu-group-id = <0x2>;
		status = "okay";
	};

	pwm@7000a000 {
		compatible = "nvidia,tegra124-pwm", "nvidia,tegra20-pwm";
		reg = <0x0 0x7000a000 0x0 0x100>;
		#pwm-cells = <0x2>;
		status = "okay";
		clocks = <0x21 0x11 0x21 0xf3>;
		clock-names = "pwm", "parent";
		resets = <0x21 0x11>;
		reset-names = "pwm";
		nvidia,no-clk-sleeping-in-ops;
		linux,phandle = <0xa5>;
		phandle = <0xa5>;
	};

	spi@7000d400 {
		compatible = "nvidia,tegra210-spi";
		reg = <0x0 0x7000d400 0x0 0x200>;
		interrupts = <0x0 0x3b 0x4>;
		iommus = <0x2b 0xe>;
		#address-cells = <0x1>;
		#size-cells = <0x0>;
		dmas = <0x4c 0xf 0x4c 0xf>;
		dma-names = "rx", "tx";
		nvidia,clk-parents = "pll_p", "clk_m";
		clocks = <0x21 0x29 0x21 0xf3 0x21 0xe9>;
		clock-names = "spi", "pll_p", "clk_m";
		resets = <0x21 0x29>;
		reset-names = "spi";
		status = "okay";
		linux,phandle = <0x10c>;
		phandle = <0x10c>;

		prod-settings {
			#prod-cells = <0x3>;

			prod {
				prod = <0x4 0xfff 0x0>;
			};

			prod_c_flash {
				status = "disabled";
				prod = <0x4 0x3f 0x7>;
			};

			prod_c_loop {
				status = "disabled";
				prod = <0x4 0xfff 0x44b>;
			};
		};

		spi@0 {
			compatible = "spidev";
			reg = <0x0>;
			spi-max-frequency = <0x1f78a40>;

			controller-data {
				nvidia,enable-hw-based-cs;
				nvidia,rx-clk-tap-delay = <0x7>;
			};
		};

		spi@1 {
			compatible = "spidev";
			reg = <0x1>;
			spi-max-frequency = <0x1f78a40>;

			controller-data {
				nvidia,enable-hw-based-cs;
				nvidia,rx-clk-tap-delay = <0x7>;
			};
		};
	};

	spi@7000d600 {
		compatible = "nvidia,tegra210-spi";
		reg = <0x0 0x7000d600 0x0 0x200>;
		interrupts = <0x0 0x52 0x4>;
		iommus = <0x2b 0xe>;
		#address-cells = <0x1>;
		#size-cells = <0x0>;
		dmas = <0x4c 0x10 0x4c 0x10>;
		dma-names = "rx", "tx";
		nvidia,clk-parents = "pll_p", "clk_m";
		clocks = <0x21 0x2c 0x21 0xf3 0x21 0xe9>;
		clock-names = "spi", "pll_p", "clk_m";
		resets = <0x21 0x2c>;
		reset-names = "spi";
		status = "okay";
		linux,phandle = <0x10d>;
		phandle = <0x10d>;

		prod-settings {
			#prod-cells = <0x3>;

			prod {
				prod = <0x4 0xfff 0x0>;
			};

			prod_c_flash {
				status = "disabled";
				prod = <0x4 0x3f 0x6>;
			};

			prod_c_loop {
				status = "disabled";
				prod = <0x4 0xfff 0x44b>;
			};
		};

		spi@0 {
			compatible = "spidev";
			reg = <0x0>;
			spi-max-frequency = <0x1f78a40>;

			controller-data {
				nvidia,enable-hw-based-cs;
				nvidia,rx-clk-tap-delay = <0x6>;
			};
		};

		spi@1 {
			compatible = "spidev";
			reg = <0x1>;
			spi-max-frequency = <0x1f78a40>;

			controller-data {
				nvidia,enable-hw-based-cs;
				nvidia,rx-clk-tap-delay = <0x6>;
			};
		};
	};

	spi@7000d800 {
		compatible = "nvidia,tegra210-spi";
		reg = <0x0 0x7000d800 0x0 0x200>;
		interrupts = <0x0 0x53 0x4>;
		iommus = <0x2b 0xe>;
		#address-cells = <0x1>;
		#size-cells = <0x0>;
		dmas = <0x4c 0x11 0x4c 0x11>;
		dma-names = "rx", "tx";
		nvidia,clk-parents = "pll_p", "clk_m";
		clocks = <0x21 0x2e 0x21 0xf3 0x21 0xe9>;
		clock-names = "spi", "pll_p", "clk_m";
		resets = <0x21 0x2e>;
		reset-names = "spi";
		status = "disabled";
		linux,phandle = <0x10e>;
		phandle = <0x10e>;

		prod-settings {
			#prod-cells = <0x3>;

			prod {
				prod = <0x4 0xfff 0x0>;
			};

			prod_c_flash {
				status = "disabled";
				prod = <0x4 0x3f 0x8>;
			};

			prod_c_loop {
				status = "disabled";
				prod = <0x4 0xfff 0x44b>;
			};
		};
	};

	spi@7000da00 {
		compatible = "nvidia,tegra210-spi";
		reg = <0x0 0x7000da00 0x0 0x200>;
		interrupts = <0x0 0x5d 0x4>;
		iommus = <0x2b 0xe>;
		#address-cells = <0x1>;
		#size-cells = <0x0>;
		dmas = <0x4c 0x12 0x4c 0x12>;
		dma-names = "rx", "tx";
		nvidia,clk-parents = "pll_p", "clk_m";
		clocks = <0x21 0x44 0x21 0xf3 0x21 0xe9>;
		clock-names = "spi", "pll_p", "clk_m";
		resets = <0x21 0x44>;
		reset-names = "spi";
		status = "disabled";
		spi-max-frequency = <0xb71b00>;
		linux,phandle = <0x10f>;
		phandle = <0x10f>;

		prod-settings {
			#prod-cells = <0x3>;

			prod {
				prod = <0x4 0xfff 0x0>;
			};

			prod_c_flash {
				status = "disabled";
				prod = <0x4 0xfff 0x44b>;
			};

			prod_c_cs0 {
				prod = <0x4 0xfc0 0x400>;
			};
		};

		spi-touch19x12@0 {
			compatible = "raydium,rm_ts_spidev";
			status = "disabled";
			reg = <0x0>;
			spi-max-frequency = <0xb71b00>;
			interrupt-parent = <0x56>;
			interrupts = <0xb9 0x1>;
			reset-gpio = <0x56 0xae 0x0>;
			config = <0x0>;
			platform-id = <0xd>;
			name-of-clock = "clk_out_2";
			name-of-clock-con = "extern2";
			avdd-supply = <0x57>;
			dvdd-supply = <0x58>;
		};

		spi-touch25x16@0 {
			compatible = "raydium,rm_ts_spidev";
			status = "disabled";
			reg = <0x0>;
			spi-max-frequency = <0xb71b00>;
			interrupt-parent = <0x56>;
			interrupts = <0xb9 0x1>;
			reset-gpio = <0x56 0xae 0x0>;
			config = <0x0>;
			platform-id = <0x8>;
			name-of-clock = "clk_out_2";
			name-of-clock-con = "extern2";
			avdd-supply = <0x57>;
			dvdd-supply = <0x58>;
		};

		spi-touch14x8@0 {
			compatible = "raydium,rm_ts_spidev";
			status = "disabled";
			reg = <0x0>;
			spi-max-frequency = <0xb71b00>;
			interrupt-parent = <0x56>;
			interrupts = <0xb9 0x1>;
			reset-gpio = <0x56 0xae 0x0>;
			config = <0x0>;
			platform-id = <0x9>;
			name-of-clock = "clk_out_2";
			name-of-clock-con = "extern2";
			avdd-supply = <0x57>;
			dvdd-supply = <0x58>;
		};
	};

	spi@70410000 {
		compatible = "nvidia,tegra210-qspi";
		reg = <0x0 0x70410000 0x0 0x1000>;
		interrupts = <0x0 0xa 0x4>;
		iommus = <0x2b 0xe>;
		#address-cells = <0x1>;
		#size-cells = <0x0>;
		dmas = <0x4c 0x5 0x4c 0x5>;
		dma-names = "rx", "tx";
		clocks =3D <0x21 0xd3 0x21 0x119>;=0A=
		clock-names =3D "qspi", "qspi_out";=0A=
		resets =3D <0x21 0xd3>;=0A=
		reset-names =3D "qspi";=0A=
		status =3D "okay";=0A=
		spi-max-frequency =3D <0x632ea00>;=0A=
		linux,phandle =3D <0x110>;=0A=
		phandle =3D <0x110>;=0A=
=0A=
		spiflash@0 {=0A=
			#address-cells =3D <0x1>;=0A=
			#size-cells =3D <0x1>;=0A=
			compatible =3D "MX25U3235F";=0A=
			reg =3D <0x0>;=0A=
			spi-max-frequency =3D <0x632ea00>;=0A=
=0A=
			controller-data {=0A=
				nvidia,x1-len-limit =3D <0x10>;=0A=
				nvidia,x1-bus-speed =3D <0x632ea00>;=0A=
				nvidia,x1-dymmy-cycle =3D <0x8>;=0A=
				nvidia,ctrl-bus-clk-ratio =3D [01];=0A=
				nvidia,x4-bus-speed =3D <0x632ea00>;=0A=
				nvidia,x4-dymmy-cycle =3D <0x8>;=0A=
			};=0A=
		};=0A=
	};=0A=
=0A=
	host1x {=0A=
		compatible =3D "nvidia,tegra210-host1x", "simple-bus";=0A=
		power-domains =3D <0x23>;=0A=
		wakeup-capable;=0A=
		reg =3D <0x0 0x50000000 0x0 0x34000>;=0A=
		interrupts =3D <0x0 0x41 0x4 0x0 0x43 0x4>;=0A=
		iommus =3D <0x2b 0x6>;=0A=
		#address-cells =3D <0x2>;=0A=
		#size-cells =3D <0x2>;=0A=
		status =3D "okay";=0A=
		ranges;=0A=
		clocks =3D <0x21 0x1f2 0x21 0x77>;=0A=
		clock-names =3D "host1x", "actmon";=0A=
		resets =3D <0x21 0x1c>;=0A=
		nvidia,ch-base =3D <0x0>;=0A=
		nvidia,nb-channels =3D <0xc>;=0A=
		nvidia,nb-hw-pts =3D <0xc0>;=0A=
		nvidia,pts-base =3D <0x0>;=0A=
		nvidia,nb-pts =3D <0xc0>;=0A=
		assigned-clocks =3D <0x21 0x7a 0x21 0x92 0x21 0x91 0x21 0x90 0x21 0xd0 =
0x21 0x166 0x21 0xe4 0x21 0x142 0x21 0x3>;=0A=
		assigned-clock-parents =3D <0x21 0xf3 0x21 0xf3 0x21 0xf3 0x21 0xf3 =
0x21 0xf3 0x21 0x7a 0x21 0xed 0x21 0xed 0x21 0x142>;=0A=
		assigned-clock-rates =3D <0x16e3600 0x6146580 0x6146580 0x6146580 =
0x6146580 0x16e3600 0x18519600 0x18519600 0x0>;=0A=
		linux,phandle =3D <0x78>;=0A=
		phandle =3D <0x78>;=0A=
=0A=
		vi {=0A=
			compatible =3D "nvidia,tegra210-vi", "simple-bus";=0A=
			power-domains =3D <0x59>;=0A=
			reg =3D <0x0 0x54080000 0x0 0x40000>;=0A=
			interrupts =3D <0x0 0x45 0x4>;=0A=
			iommus =3D <0x2b 0x12>;=0A=
			status =3D "okay";=0A=
			clocks =3D <0x21 0x210 0x21 0x34 0x21 0x90 0x21 0x91 0x21 0x92 0x21 =
0xd0 0x21 0x51 0x21 0xfa 0x21 0x133>;=0A=
			clock-names =3D "vi", "csi", "cilab", "cilcd", "cile", "vii2c", =
"i2cslow", "pll_d", "pll_d_dsi_out";=0A=
			resets =3D <0x21 0x14>;=0A=
			reset-names =3D "vi";=0A=
			#address-cells =3D <0x1>;=0A=
			#size-cells =3D <0x0>;=0A=
			avdd_dsi_csi-supply =3D <0x36>;=0A=
			num-channels =3D <0x2>;=0A=
			linux,phandle =3D <0xbd>;=0A=
			phandle =3D <0xbd>;=0A=
=0A=
			ports {=0A=
				#address-cells =3D <0x1>;=0A=
				#size-cells =3D <0x0>;=0A=
=0A=
				port@0 {=0A=
					reg =3D <0x0>;=0A=
					linux,phandle =3D <0xbe>;=0A=
					phandle =3D <0xbe>;=0A=
=0A=
					endpoint {=0A=
						port-index =3D <0x0>;=0A=
						bus-width =3D <0x2>;=0A=
						remote-endpoint =3D <0x5a>;=0A=
						linux,phandle =3D <0x75>;=0A=
						phandle =3D <0x75>;=0A=
					};=0A=
				};=0A=
=0A=
				port@1 {=0A=
					reg =3D <0x1>;=0A=
					linux,phandle =3D <0xcd>;=0A=
					phandle =3D <0xcd>;=0A=
=0A=
					endpoint {=0A=
						port-index =3D <0x4>;=0A=
						bus-width =3D <0x2>;=0A=
						remote-endpoint =3D <0x5b>;=0A=
						linux,phandle =3D <0x77>;=0A=
						phandle =3D <0x77>;=0A=
					};=0A=
				};=0A=
			};=0A=
		};=0A=
=0A=
		vi-bypass {=0A=
			compatible =3D "nvidia,tegra210-vi-bypass";=0A=
			status =3D "okay";=0A=
		};=0A=
=0A=
		isp@54600000 {=0A=
			compatible =3D "nvidia,tegra210-isp";=0A=
			power-domains =3D <0x59>;=0A=
			reg =3D <0x0 0x54600000 0x0 0x40000>;=0A=
			interrupts =3D <0x0 0x47 0x4>;=0A=
			iommus =3D <0x2b 0x8>;=0A=
			status =3D "okay";=0A=
			clocks =3D <0x21 0x1ab>;=0A=
			clock-names =3D "ispa";=0A=
			resets =3D <0x21 0x17>;=0A=
		};=0A=
=0A=
		isp@54680000 {=0A=
			compatible =3D "nvidia,tegra210-isp";=0A=
			power-domains =3D <0x5c>;=0A=
			reg =3D <0x0 0x54680000 0x0 0x40000>;=0A=
			interrupts =3D <0x0 0x46 0x4>;=0A=
			iommus =3D <0x2b 0x1d>;=0A=
			status =3D "okay";=0A=
			clocks =3D <0x21 0x1ac>;=0A=
			clock-names =3D "ispb";=0A=
			resets =3D <0x21 0x3>;=0A=
		};=0A=
=0A=
		dc@54200000 {=0A=
			compatible =3D "nvidia,tegra210-dc";=0A=
			aux-device-name =3D "tegradc.0";=0A=
			reg =3D <0x0 0x54200000 0x0 0x40000>;=0A=
			interrupts =3D <0x0 0x49 0x4>;=0A=
			win-mask =3D <0x7>;=0A=
			nvidia,fb-win =3D <0x0>;=0A=
			iommus =3D <0x2b 0x2 0x2b 0xa>;=0A=
			clocks =3D <0x21 0x1b 0x21 0x5 0x21 0x1c5 0x21 0x1c7 0x21 0xf6 0x21 =
0xfb 0x21 0xfa>;=0A=
			clock-names =3D "disp1", "timer", "disp1_emc", "disp1_la_emc", =
"pll_p_out3", "pll_d_out0", "pll_d";=0A=
			resets =3D <0x21 0x1b>;=0A=
			reset-names =3D "dc_rst";=0A=
			status =3D "okay";=0A=
			nvidia,dc-ctrlnum =3D <0x0>;=0A=
			fb_reserved =3D <0x5d>;=0A=
			iommu-direct-regions =3D <0x5d>;=0A=
			pinctrl-names =3D "dsi-dpd-disable", "dsi-dpd-enable", =
"dsib-dpd-disable", "dsib-dpd-enable", "hdmi-dpd-disable", =
"hdmi-dpd-enable";=0A=
			pinctrl-0 =3D <0x5e>;=0A=
			pinctrl-1 =3D <0x5f>;=0A=
			pinctrl-2 =3D <0x60>;=0A=
			pinctrl-3 =3D <0x61>;=0A=
			pinctrl-4 =3D <0x62>;=0A=
			pinctrl-5 =3D <0x63>;=0A=
			avdd_hdmi-supply =3D <0x40>;=0A=
			avdd_hdmi_pll-supply =3D <0x36>;=0A=
			vdd_hdmi_5v0-supply =3D <0x64>;=0A=
			nvidia,dc-flags =3D <0x1>;=0A=
			nvidia,emc-clk-rate =3D <0x11e1a300>;=0A=
			nvidia,cmu-enable =3D <0x1>;=0A=
			nvidia,fb-bpp =3D <0x20>;=0A=
			nvidia,fb-flags =3D <0x1>;=0A=
			nvidia,dc-or-node =3D "/host1x/sor1";=0A=
			nvidia,dc-connector =3D <0x65>;=0A=
			linux,phandle =3D <0x111>;=0A=
			phandle =3D <0x111>;=0A=
=0A=
			rgb {=0A=
				status =3D "disabled";=0A=
			};=0A=
		};=0A=
=0A=
		dc@54240000 {=0A=
			compatible =3D "nvidia,tegra210-dc";=0A=
			aux-device-name =3D "tegradc.1";=0A=
			reg =3D <0x0 0x54240000 0x0 0x40000>;=0A=
			interrupts =3D <0x0 0x4a 0x4>;=0A=
			win-mask =3D <0x7>;=0A=
			nvidia,fb-win =3D <0x0>;=0A=
			iommus =3D <0x2b 0x3>;=0A=
			status =3D "okay";=0A=
			nvidia,dc-ctrlnum =3D <0x1>;=0A=
			clocks =3D <0x21 0x1a 0x21 0x5 0x21 0x1c6 0x21 0x1c8 0x21 0xfd 0x21 =
0xfc>;=0A=
			clock-names =3D "disp2", "timer", "disp2_emc", "disp2_la_emc", =
"pll_d2_out0", "pll_d2";=0A=
			resets =3D <0x21 0x1a>;=0A=
			reset-names =3D "dc_rst";=0A=
			fb_reserved =3D <0x66>;=0A=
			iommu-direct-regions =3D <0x66>;=0A=
			pinctrl-names =3D "dsi-dpd-disable", "dsi-dpd-enable", =
"dsib-dpd-disable", "dsib-dpd-enable", "hdmi-dpd-disable", =
"hdmi-dpd-enable";=0A=
			pinctrl-0 =3D <0x5e>;=0A=
			pinctrl-1 =3D <0x5f>;=0A=
			pinctrl-2 =3D <0x60>;=0A=
			pinctrl-3 =3D <0x61>;=0A=
			pinctrl-4 =3D <0x62>;=0A=
			pinctrl-5 =3D <0x63>;=0A=
			vdd-dp-pwr-supply =3D <0x67>;=0A=
			avdd-dp-pll-supply =3D <0x36>;=0A=
			vdd-edp-sec-mode-supply =3D <0x42>;=0A=
			vdd-dp-pad-supply =3D <0x42>;=0A=
			vdd_hdmi_5v0-supply =3D <0x64>;=0A=
			nvidia,dc-flags =3D <0x1>;=0A=
			nvidia,emc-clk-rate =3D <0x11e1a300>;=0A=
			nvidia,cmu-enable =3D <0x1>;=0A=
			nvidia,fb-bpp =3D <0x20>;=0A=
			nvidia,fb-flags =3D <0x1>;=0A=
			nvidia,dc-or-node =3D "/host1x/sor";=0A=
			nvidia,dc-connector =3D <0x68>;=0A=
			linux,phandle =3D <0x112>;=0A=
			phandle =3D <0x112>;=0A=
=0A=
			rgb {=0A=
				status =3D "disabled";=0A=
			};=0A=
		};=0A=
=0A=
		dsi {=0A=
			compatible =3D "nvidia,tegra210-dsi";=0A=
			reg =3D <0x0 0x54300000 0x0 0x40000 0x0 0x54400000 0x0 0x40000>;=0A=
			clocks =3D <0x21 0x30 0x21 0x93 0x21 0x52 0x21 0x94 0x21 0xf6 0x21 =
0xb1>;=0A=
			clock-names =3D "dsi", "dsia_lp", "dsib", "dsib_lp", "pll_p_out3", =
"clk72mhz";=0A=
			resets =3D <0x21 0x30 0x21 0x52>;=0A=
			reset-names =3D "dsia", "dsib";=0A=
			status =3D "disabled";=0A=
			linux,phandle =3D <0x113>;=0A=
			phandle =3D <0x113>;=0A=
		};=0A=
=0A=
		vic {=0A=
			compatible =3D "nvidia,tegra210-vic";=0A=
			power-domains =3D <0x69>;=0A=
			reg =3D <0x0 0x54340000 0x0 0x40000>;=0A=
			iommus =3D <0x2b 0x13>;=0A=
			iommu-group-id =3D <0x1>;=0A=
			status =3D "okay";=0A=
			clocks =3D <0x21 0x193 0x21 0x1e2 0x21 0x19f 0x21 0x1e3>;=0A=
			clock-names =3D "vic03", "emc", "vic_floor", "emc_shared";=0A=
			resets =3D <0x21 0xb2>;=0A=
		};=0A=
=0A=
		nvenc {=0A=
			compatible =3D "nvidia,tegra210-nvenc";=0A=
			power-domains =3D <0x6a>;=0A=
			reg =3D <0x0 0x544c0000 0x0 0x40000>;=0A=
			clocks =3D <0x21 0x19d>;=0A=
			clock-names =3D "msenc";=0A=
			resets =3D <0x21 0xdb>;=0A=
			iommus =3D <0x2b 0xb>;=0A=
			iommu-group-id =3D <0x1>;=0A=
			status =3D "okay";=0A=
		};=0A=
=0A=
		tsec {=0A=
			compatible =3D "nvidia,tegra210-tsec";=0A=
			power-domains =3D <0x6b>;=0A=
			reg =3D <0x0 0x54500000 0x0 0x40000>;=0A=
			clocks =3D <0x21 0x53>;=0A=
			clock-names =3D "tsec";=0A=
			resets =3D <0x21 0x53>;=0A=
			iommus =3D <0x2b 0x17>;=0A=
			iommu-group-id =3D <0x1>;=0A=
			status =3D "okay";=0A=
		};=0A=
=0A=
		tsecb {=0A=
			compatible =3D "nvidia,tegra210-tsec";=0A=
			power-domains =3D <0x6b>;=0A=
			reg =3D <0x0 0x54100000 0x0 0x40000>;=0A=
			clocks =3D <0x21 0x196>;=0A=
			clock-names =3D "tsecb";=0A=
			resets =3D <0x21 0xce>;=0A=
			iommus =3D <0x2b 0x29>;=0A=
			iommu-group-id =3D <0x1>;=0A=
			status =3D "okay";=0A=
		};=0A=
=0A=
		nvdec {=0A=
			compatible =3D "nvidia,tegra210-nvdec";=0A=
			power-domains =3D <0x6c>;=0A=
			reg =3D <0x0 0x54480000 0x0 0x40000>;=0A=
			clocks =3D <0x21 0x19e>;=0A=
			clock-names =3D "nvdec";=0A=
			resets =3D <0x21 0xc2>;=0A=
			iommus =3D <0x2b 0x21>;=0A=
			iommu-group-id =3D <0x1>;=0A=
			status =3D "okay";=0A=
		};=0A=
=0A=
		nvjpg {=0A=
			compatible =3D "nvidia,tegra210-nvjpg";=0A=
			power-domains =3D <0x6d>;=0A=
			reg =3D <0x0 0x54380000 0x0 0x40000>;=0A=
			clocks =3D <0x21 0x194>;=0A=
			clock-names =3D "nvjpg";=0A=
			resets =3D <0x21 0xc3>;=0A=
			iommus =3D <0x2b 0x24>;=0A=
			iommu-group-id =3D <0x1>;=0A=
			status =3D "okay";=0A=
		};=0A=
=0A=
		sor {=0A=
			compatible =3D "nvidia,tegra210-sor";=0A=
			reg =3D <0x0 0x54540000 0x0 0x40000>;=0A=
			reg-names =3D "sor";=0A=
			status =3D "okay";=0A=
			nvidia,sor-ctrlnum =3D <0x0>;=0A=
			nvidia,dpaux =3D <0x6e>;=0A=
			nvidia,xbar-ctrl =3D <0x2 0x1 0x0 0x3 0x4>;=0A=
			clocks =3D <0x21 0xde 0x21 0xb6 0x21 0x12f>;=0A=
			clock-names =3D "sor_safe", "sor0", "pll_dp";=0A=
			resets =3D <0x21 0xb6>;=0A=
			reset-names =3D "sor0";=0A=
			nvidia,sor-audio-not-supported;=0A=
			nvidia,sor1-output-type =3D "dp";=0A=
			nvidia,active-panel =3D <0x6f>;=0A=
			linux,phandle =3D <0x68>;=0A=
			phandle =3D <0x68>;=0A=
=0A=
			hdmi-display {=0A=
				compatible =3D "hdmi,display";=0A=
				status =3D "disabled";=0A=
				linux,phandle =3D <0x114>;=0A=
				phandle =3D <0x114>;=0A=
			};=0A=
=0A=
			dp-display {=0A=
				compatible =3D "dp, display";=0A=
				status =3D "okay";=0A=
				nvidia,hpd-gpio =3D <0x56 0xe1 0x1>;=0A=
				nvidia,is_ext_dp_panel =3D <0x1>;=0A=
				linux,phandle =3D <0x6f>;=0A=
				phandle =3D <0x6f>;=0A=
=0A=
				disp-default-out {=0A=
					nvidia,out-type =3D <0x3>;=0A=
					nvidia,out-align =3D <0x0>;=0A=
					nvidia,out-order =3D <0x0>;=0A=
					nvidia,out-flags =3D <0x0>;=0A=
					nvidia,out-pins =3D <0x1 0x0 0x2 0x0 0x3 0x0 0x0 0x1>;=0A=
					nvidia,out-parent-clk =3D "pll_d_out0";=0A=
				};=0A=
=0A=
				dp-lt-settings {=0A=
=0A=
					lt-setting@0 {=0A=
						nvidia,drive-current =3D <0x0 0x0 0x0 0x0>;=0A=
						nvidia,lane-preemphasis =3D <0x0 0x0 0x0 0x0>;=0A=
						nvidia,post-cursor =3D <0x0 0x0 0x0 0x0>;=0A=
						nvidia,tx-pu =3D <0x0>;=0A=
						nvidia,load-adj =3D <0x3>;=0A=
					};=0A=
=0A=
					lt-setting@1 {=0A=
						nvidia,drive-current =3D <0x0 0x0 0x0 0x0>;=0A=
						nvidia,lane-preemphasis =3D <0x0 0x0 0x0 0x0>;=0A=
						nvidia,post-cursor =3D <0x0 0x0 0x0 0x0>;=0A=
						nvidia,tx-pu =3D <0x0>;=0A=
						nvidia,load-adj =3D <0x4>;=0A=
					};=0A=
=0A=
					lt-setting@2 {=0A=
						nvidia,drive-current =3D <0x0 0x0 0x0 0x0>;=0A=
						nvidia,lane-preemphasis =3D <0x1 0x1 0x1 0x1>;=0A=
						nvidia,post-cursor =3D <0x0 0x0 0x0 0x0>;=0A=
						nvidia,tx-pu =3D <0x0>;=0A=
						nvidia,load-adj =3D <0x6>;=0A=
					};=0A=
				};=0A=
			};=0A=
=0A=
			prod-settings {=0A=
				#prod-cells =3D <0x3>;=0A=
=0A=
				prod_c_dp {=0A=
					prod =3D <0x5c 0xf000f10 0x1000310 0x60 0x3f00100 0x400100 0x68 =
0x2000 0x2000 0x70 0xffffffff 0x0 0x180 0x1 0x1>;=0A=
				};=0A=
			};=0A=
		};=0A=
=0A=
		sor1 {=0A=
			compatible =3D "nvidia,tegra210-sor1";=0A=
			reg =3D <0x0 0x54580000 0x0 0x40000>;=0A=
			reg-names =3D "sor";=0A=
			interrupts =3D <0x0 0x4c 0x4>;=0A=
			status =3D "okay";=0A=
			nvidia,sor-ctrlnum =3D <0x1>;=0A=
			nvidia,dpaux =3D <0x70>;=0A=
			nvidia,xbar-ctrl =3D <0x0 0x1 0x2 0x3 0x4>;=0A=
			clocks =3D <0x21 0x16f 0x21 0xde 0x21 0x16e 0x21 0xb7 0x21 0x12f 0x21 =
0xf3 0x21 0xca 0x21 0x7d 0x21 0x6f 0x21 0x80>;=0A=
			clock-names =3D "sor1_ref", "sor_safe", "sor1_pad_clkout", "sor1", =
"pll_dp", "pll_p", "maud", "hda", "hda2codec_2x", "hda2hdmi";=0A=
			resets =3D <0x21 0xb7 0x21 0x7d 0x21 0x6f 0x21 0x80>;=0A=
			reset-names =3D "sor1", "hda_rst", "hda2codec_2x_rst", "hda2hdmi_rst";=0A=
			nvidia,ddc-i2c-bus =3D <0x71>;=0A=
			nvidia,hpd-gpio =3D <0x56 0xe1 0x1>;=0A=
			nvidia,active-panel =3D <0x72>;=0A=
			linux,phandle =3D <0x65>;=0A=
			phandle =3D <0x65>;=0A=
=0A=
			hdmi-display {=0A=
				compatible =3D "hdmi,display";=0A=
				status =3D "okay";=0A=
				generic-infoframe-type =3D <0x87>;=0A=
				linux,phandle =3D <0x72>;=0A=
				phandle =3D <0x72>;=0A=
=0A=
				disp-default-out {=0A=
					nvidia,out-xres =3D <0x1000>;=0A=
					nvidia,out-yres =3D <0x870>;=0A=
					nvidia,out-type =3D <0x1>;=0A=
					nvidia,out-flags =3D <0x2>;=0A=
					nvidia,out-parent-clk =3D "pll_d2";=0A=
					nvidia,out-align =3D <0x0>;=0A=
					nvidia,out-order =3D <0x0>;=0A=
				};=0A=
			};=0A=
=0A=
			dp-display {=0A=
				compatible =3D "dp, display";=0A=
				status =3D "disabled";=0A=
				linux,phandle =3D <0x115>;=0A=
				phandle =3D <0x115>;=0A=
			};=0A=
=0A=
			prod-settings {=0A=
				#prod-cells =3D <0x3>;=0A=
				prod_list_hdmi_soc =3D "prod_c_hdmi_0m_54m", "prod_c_hdmi_54m_111m", =
"prod_c_hdmi_111m_223m", "prod_c_hdmi_223m_300m", =
"prod_c_hdmi_300m_600m";=0A=
				prod_list_hdmi_board =3D "prod_c_hdmi_0m_54m", =
"prod_c_hdmi_54m_75m", "prod_c_hdmi_75m_150m", "prod_c_hdmi_150m_300m", =
"prod_c_hdmi_300m_600m";=0A=
=0A=
				prod {=0A=
					prod =3D <0x3a0 0x1 0x1 0x5c 0xf000700 0x1000000 0x60 0xf01f00 =
0x300f80 0x68 0xf000000 0xe000000 0x138 0xffffffff 0x3c3c3c3c 0x148 =
0xffffffff 0x0 0x170 0x40ff00 0x401000>;=0A=
				};=0A=
=0A=
				prod_c_hdmi_0m_54m {=0A=
					prod =3D <0x3a0 0x2 0x2 0x5c 0xf000700 0x5000310 0x60 0xf01f00 =
0x1100 0x68 0xf000000 0x8000000 0x138 0xffffffff 0x2d2f2f2f 0x148 =
0xffffffff 0x0 0x170 0xf040ff00 0x80406600>;=0A=
				};=0A=
=0A=
				prod_c_hdmi_54m_111m {=0A=
					prod =3D <0x3a0 0x2 0x2 0x5c 0xf000700 0x1000100 0x60 0xf01f00 =
0x401380 0x68 0xf000000 0x8000000 0x138 0xffffffff 0x333a3a3a 0x148 =
0xffffffff 0x0 0x170 0x40ff00 0x404000>;=0A=
				};=0A=
=0A=
				prod_c_hdmi_111m_223m {=0A=
					prod =3D <0x3a0 0x2 0x0 0x5c 0xf000700 0x1000300 0x60 0xff0fe0ff =
0x401380 0x68 0xf000000 0x8000000 0x138 0xffffffff 0x333a3a3a 0x148 =
0xffffffff 0x0 0x170 0x40ff00 0x406600>;=0A=
				};=0A=
=0A=
				prod_c_hdmi_223m_300m {=0A=
					prod =3D <0x3a0 0x2 0x0 0x5c 0xf000700 0x1000300 0x60 0xf01f00 =
0x401380 0x68 0xf000000 0xa000000 0x138 0xffffffff 0x333f3f3f 0x148 =
0xffffffff 0x171717 0x170 0x40ff00 0x406600>;=0A=
				};=0A=
=0A=
				prod_c_hdmi_300m_600m {=0A=
					prod =3D <0x3a0 0x2 0x2 0x5c 0xf000700 0x5000310 0x60 0xf01f00 =
0x300f00 0x68 0xf000000 0x8000000 0x138 0xffffffff 0x30353537 0x148 =
0xffffffff 0x0 0x170 0x40ff00 0x406000>;=0A=
				};=0A=
=0A=
				prod_c_54M {=0A=
					prod =3D <0x3a0 0x2 0x2 0x5c 0xf000700 0x1000000 0x60 0xf01f00 =
0x401380 0x68 0xf000000 0x8000000 0x138 0xffffffff 0x333a3a3a 0x148 =
0xffffffff 0x0 0x170 0x40ff00 0x401000>;=0A=
				};=0A=
=0A=
				prod_c_75M {=0A=
					prod =3D <0x3a0 0x2 0x2 0x5c 0xf000700 0x1000100 0x60 0xf01f00 =
0x401380 0x68 0xf000000 0x8000000 0x138 0xffffffff 0x333a3a3a 0x148 =
0xffffffff 0x0 0x170 0x40ff00 0x404000>;=0A=
				};=0A=
=0A=
				prod_c_150M {=0A=
					prod =3D <0x3a0 0x2 0x0 0x5c 0xf000700 0x1000300 0x60 0xff0fe0ff =
0x401380 0x68 0xf000000 0x8000000 0x138 0xffffffff 0x333a3a3a 0x148 =
0xffffffff 0x0 0x170 0x40ff00 0x406600>;=0A=
				};=0A=
=0A=
				prod_c_300M {=0A=
					prod =3D <0x3a0 0x2 0x0 0x5c 0xf000700 0x1000300 0x60 0xf01f00 =
0x401380 0x68 0xf000000 0xa000000 0x138 0xffffffff 0x333f3f3f 0x148 =
0xffffffff 0x171717 0x170 0x40ff00 0x406600>;=0A=
				};=0A=
=0A=
				prod_c_600M {=0A=
					prod =3D <0x3a0 0x2 0x2 0x5c 0xf000700 0x1000300 0x60 0xf01f00 =
0x401380 0x68 0xf000000 0x8000000 0x138 0xffffffff 0x333f3f3f 0x148 =
0xffffffff 0x0 0x170 0x40ff00 0x406600>;=0A=
				};=0A=
=0A=
				prod_c_dp {=0A=
					prod =3D <0x5c 0xf000f10 0x1000310 0x60 0x3f00100 0x400100 0x68 =
0x2000 0x2000 0x70 0xffffffff 0x0 0x180 0x1 0x1>;=0A=
				};=0A=
=0A=
				prod_c_hdmi_54m_75m {=0A=
					prod =3D <0x3a0 0x2 0x2 0x5c 0xf000700 0x5000310 0x60 0xf01f00 =
0x301500 0x68 0xf000000 0x8000000 0x138 0xffffffff 0x2d303030 0x148 =
0xffffffff 0x0 0x170 0xf040ff00 0x80406600>;=0A=
				};=0A=
=0A=
				prod_c_hdmi_75m_150m {=0A=
					prod =3D <0x3a0 0x2 0x0 0x5c 0xf000700 0x1000300 0x60 0xf01f00 =
0x309300 0x68 0xf000000 0x8000000 0x138 0xffffffff 0x2d303030 0x148 =
0xffffffff 0x0 0x170 0xf040ff00 0x80406600>;=0A=
				};=0A=
=0A=
				prod_c_hdmi_150m_300m {=0A=
					prod =3D <0x3a0 0x2 0x0 0x5c 0xf000700 0x1000300 0x60 0xf01f00 =
0x309300 0x68 0xf000000 0x8000000 0x138 0xffffffff 0x2d303430 0x148 =
0xffffffff 0x0 0x170 0xf040ff00 0x80406600>;=0A=
				};=0A=
			};=0A=
		};=0A=
=0A=
		dpaux {=0A=
			compatible =3D "nvidia,tegra210-dpaux";=0A=
			reg =3D <0x0 0x545c0000 0x0 0x40000>;=0A=
			interrupts =3D <0x0 0x9f 0x4>;=0A=
			nvidia,dpaux-ctrlnum =3D <0x0>;=0A=
			status =3D "okay";=0A=
			clocks =3D <0x21 0xb5>;=0A=
			clock-names =3D "dpaux";=0A=
			resets =3D <0x21 0xb5>;=0A=
			reset-names =3D "dpaux";=0A=
			linux,phandle =3D <0x6e>;=0A=
			phandle =3D <0x6e>;=0A=
=0A=
			prod-settings {=0A=
				#prod-cells =3D <0x3>;=0A=
=0A=
				prod_c_dpaux_dp {=0A=
					prod =3D <0x124 0x37fe 0x24c2>;=0A=
				};=0A=
=0A=
				prod_c_dpaux_hdmi {=0A=
					prod =3D <0x124 0x700 0x400>;=0A=
				};=0A=
			};=0A=
		};=0A=
=0A=
		dpaux1 {=0A=
			compatible =3D "nvidia,tegra210-dpaux1";=0A=
			reg =3D <0x0 0x54040000 0x0 0x40000>;=0A=
			interrupts =3D <0x0 0xb 0x4>;=0A=
			nvidia,dpaux-ctrlnum =3D <0x1>;=0A=
			status =3D "okay";=0A=
			clocks =3D <0x21 0xcf>;=0A=
			clock-names =3D "dpaux1";=0A=
			resets =3D <0x21 0xcf>;=0A=
			reset-names =3D "dpaux1";=0A=
			linux,phandle =3D <0x70>;=0A=
			phandle =3D <0x70>;=0A=
=0A=
			prod-settings {=0A=
				#prod-cells =3D <0x3>;=0A=
=0A=
				prod_c_dpaux_dp {=0A=
					prod =3D <0x124 0x37fe 0x24c2>;=0A=
				};=0A=
=0A=
				prod_c_dpaux_hdmi {=0A=
					prod =3D <0x124 0x700 0x400>;=0A=
				};=0A=
			};=0A=
		};=0A=
=0A=
		i2c@546c0000 {=0A=
			#address-cells =3D <0x1>;=0A=
			#size-cells =3D <0x0>;=0A=
			compatible =3D "nvidia,tegra210-vii2c";=0A=
			reg =3D <0x0 0x546c0000 0x0 0x34000>;=0A=
			iommus =3D <0x2b 0x12>;=0A=
			interrupts =3D <0x0 0x11 0x4>;=0A=
			scl-gpio =3D <0x56 0x92 0x0>;=0A=
			sda-gpio =3D <0x56 0x93 0x0>;=0A=
			status =3D "okay";=0A=
			clocks =3D <0x21 0xd0 0x21 0x51 0x21 0x1c>;=0A=
			clock-names =3D "vii2c", "i2cslow", "host1x";=0A=
			resets =3D <0x21 0xd0>;=0A=
			reset-names =3D "vii2c";=0A=
			clock-frequency =3D <0x61a80>;=0A=
			bus-pullup-supply =3D <0x42>;=0A=
			avdd_dsi_csi-supply =3D <0x36>;=0A=
			linux,phandle =3D <0xa8>;=0A=
			phandle =3D <0xa8>;=0A=
=0A=
			rbpcv2_imx219_a@10 {=0A=
				compatible =3D "nvidia,imx219";=0A=
				reg =3D <0x10>;=0A=
				devnode =3D "video0";=0A=
				physical_w =3D "3.680";=0A=
				physical_h =3D "2.760";=0A=
				sensor_model =3D "imx219";=0A=
				use_sensor_mode_id =3D "true";=0A=
				status =3D "disabled";=0A=
				reset-gpios =3D <0x56 0x97 0x0>;=0A=
				linux,phandle =3D <0xb9>;=0A=
				phandle =3D <0xb9>;=0A=
=0A=
				mode0 {=0A=
					mclk_khz =3D "24000";=0A=
					num_lanes =3D [32 00];=0A=
					tegra_sinterface =3D "serial_a";=0A=
					phy_mode =3D "DPHY";=0A=
					discontinuous_clk =3D "yes";=0A=
					dpcm_enable =3D "false";=0A=
					cil_settletime =3D [30 00];=0A=
					active_w =3D "3264";=0A=
					active_h =3D "2464";=0A=
					pixel_t =3D "bayer_rggb";=0A=
					readout_orientation =3D "90";=0A=
					line_length =3D "3448";=0A=
					inherent_gain =3D [31 00];=0A=
					mclk_multiplier =3D "9.33";=0A=
					pix_clk_hz =3D "182400000";=0A=
					gain_factor =3D "16";=0A=
					framerate_factor =3D "1000000";=0A=
					exposure_factor =3D "1000000";=0A=
					min_gain_val =3D "16";=0A=
					max_gain_val =3D "170";=0A=
					step_gain_val =3D [31 00];=0A=
					default_gain =3D "16";=0A=
					min_hdr_ratio =3D [31 00];=0A=
					max_hdr_ratio =3D [31 00];=0A=
					min_framerate =3D "2000000";=0A=
					max_framerate =3D "21000000";=0A=
					step_framerate =3D [31 00];=0A=
					default_framerate =3D "21000000";=0A=
					min_exp_time =3D "13";=0A=
					max_exp_time =3D "683709";=0A=
					step_exp_time =3D [31 00];=0A=
					default_exp_time =3D "2495";=0A=
					embedded_metadata_height =3D [32 00];=0A=
				};=0A=
=0A=
				mode1 {=0A=
					mclk_khz =3D "24000";=0A=
					num_lanes =3D [32 00];=0A=
					tegra_sinterface =3D "serial_a";=0A=
					phy_mode =3D "DPHY";=0A=
					discontinuous_clk =3D "yes";=0A=
					dpcm_enable =3D "false";=0A=
					cil_settletime =3D [30 00];=0A=
					active_w =3D "3264";=0A=
					active_h =3D "1848";=0A=
					pixel_t =3D "bayer_rggb";=0A=
					readout_orientation =3D "90";=0A=
					line_length =3D "3448";=0A=
					inherent_gain =3D [31 00];=0A=
					mclk_multiplier =3D "9.33";=0A=
					pix_clk_hz =3D "182400000";=0A=
					gain_factor =3D "16";=0A=
					framerate_factor =3D "1000000";=0A=
					exposure_factor =3D "1000000";=0A=
					min_gain_val =3D "16";=0A=
					max_gain_val =3D "170";=0A=
					step_gain_val =3D [31 00];=0A=
					default_gain =3D "16";=0A=
					min_hdr_ratio =3D [31 00];=0A=
					max_hdr_ratio =3D [31 00];=0A=
					min_framerate =3D "2000000";=0A=
					max_framerate =3D "28000000";=0A=
					step_framerate =3D [31 00];=0A=
					default_framerate =3D "28000000";=0A=
					min_exp_time =3D "13";=0A=
					max_exp_time =3D "683709";=0A=
					step_exp_time =3D [31 00];=0A=
					default_exp_time =3D "2495";=0A=
					embedded_metadata_height =3D [32 00];=0A=
				};=0A=
=0A=
				mode2 {=0A=
					mclk_khz =3D "24000";=0A=
					num_lanes =3D [32 00];=0A=
					tegra_sinterface =3D "serial_a";=0A=
					phy_mode =3D "DPHY";=0A=
					discontinuous_clk =3D "yes";=0A=
					dpcm_enable =3D "false";=0A=
					cil_settletime =3D [30 00];=0A=
					active_w =3D "1920";=0A=
					active_h =3D "1080";=0A=
					pixel_t =3D "bayer_rggb";=0A=
					readout_orientation =3D "90";=0A=
					line_length =3D "3448";=0A=
					inherent_gain =3D [31 00];=0A=
					mclk_multiplier =3D "9.33";=0A=
					pix_clk_hz =3D "182400000";=0A=
					gain_factor =3D "16";=0A=
					framerate_factor =3D "1000000";=0A=
					exposure_factor =3D "1000000";=0A=
					min_gain_val =3D "16";=0A=
					max_gain_val =3D "170";=0A=
					step_gain_val =3D [31 00];=0A=
					default_gain =3D "16";=0A=
					min_hdr_ratio =3D [31 00];=0A=
					max_hdr_ratio =3D [31 00];=0A=
					min_framerate =3D "2000000";=0A=
					max_framerate =3D "30000000";=0A=
					step_framerate =3D [31 00];=0A=
					default_framerate =3D "30000000";=0A=
					min_exp_time =3D "13";=0A=
					max_exp_time =3D "683709";=0A=
					step_exp_time =3D [31 00];=0A=
					default_exp_time =3D "2495";=0A=
					embedded_metadata_height =3D [32 00];=0A=
				};=0A=
=0A=
				mode3 {=0A=
					mclk_khz =3D "24000";=0A=
					num_lanes =3D [32 00];=0A=
					tegra_sinterface =3D "serial_a";=0A=
					phy_mode =3D "DPHY";=0A=
					discontinuous_clk =3D "yes";=0A=
					dpcm_enable =3D "false";=0A=
					cil_settletime =3D [30 00];=0A=
					active_w =3D "1280";=0A=
					active_h =3D "720";=0A=
					pixel_t =3D "bayer_rggb";=0A=
					readout_orientation =3D "90";=0A=
					line_length =3D "3448";=0A=
					inherent_gain =3D [31 00];=0A=
					mclk_multiplier =3D "9.33";=0A=
					pix_clk_hz =3D "182400000";=0A=
					gain_factor =3D "16";=0A=
					framerate_factor =3D "1000000";=0A=
					exposure_factor =3D "1000000";=0A=
					min_gain_val =3D "16";=0A=
					max_gain_val =3D "170";=0A=
					step_gain_val =3D [31 00];=0A=
					default_gain =3D "16";=0A=
					min_hdr_ratio =3D [31 00];=0A=
					max_hdr_ratio =3D [31 00];=0A=
					min_framerate =3D "2000000";=0A=
					max_framerate =3D "60000000";=0A=
					step_framerate =3D [31 00];=0A=
					default_framerate =3D "60000000";=0A=
					min_exp_time =3D "13";=0A=
					max_exp_time =3D "683709";=0A=
					step_exp_time =3D [31 00];=0A=
					default_exp_time =3D "2495";=0A=
					embedded_metadata_height =3D [32 00];=0A=
				};=0A=
=0A=
				mode4 {=0A=
					mclk_khz =3D "24000";=0A=
					num_lanes =3D [32 00];=0A=
					tegra_sinterface =3D "serial_a";=0A=
					phy_mode =3D "DPHY";=0A=
					discontinuous_clk =3D "yes";=0A=
					dpcm_enable =3D "false";=0A=
					cil_settletime =3D [30 00];=0A=
					active_w =3D "1280";=0A=
					active_h =3D "720";=0A=
					pixel_t =3D "bayer_rggb";=0A=
					readout_orientation =3D "90";=0A=
					line_length =3D "3448";=0A=
					inherent_gain =3D [31 00];=0A=
					mclk_multiplier =3D "9.33";=0A=
					pix_clk_hz =3D "169600000";=0A=
					gain_factor =3D "16";=0A=
					framerate_factor =3D "1000000";=0A=
					exposure_factor =3D "1000000";=0A=
					min_gain_val =3D "16";=0A=
					max_gain_val =3D "170";=0A=
					step_gain_val =3D [31 00];=0A=
					default_gain =3D "16";=0A=
					min_hdr_ratio =3D [31 00];=0A=
					max_hdr_ratio =3D [31 00];=0A=
					min_framerate =3D "2000000";=0A=
					max_framerate =3D "120000000";=0A=
					step_framerate =3D [31 00];=0A=
					default_framerate =3D "120000000";=0A=
					min_exp_time =3D "13";=0A=
					max_exp_time =3D "683709";=0A=
					step_exp_time =3D [31 00];=0A=
					default_exp_time =3D "2495";=0A=
					embedded_metadata_height =3D [32 00];=0A=
				};=0A=
=0A=
				ports {=0A=
					#address-cells =3D <0x1>;=0A=
					#size-cells =3D <0x0>;=0A=
=0A=
					port@0 {=0A=
						reg =3D <0x0>;=0A=
=0A=
						endpoint {=0A=
							port-index =3D <0x0>;=0A=
							bus-width =3D <0x2>;=0A=
							remote-endpoint =3D <0x73>;=0A=
							linux,phandle =3D <0xc2>;=0A=
							phandle =3D <0xc2>;=0A=
						};=0A=
					};=0A=
				};=0A=
			};=0A=
=0A=
			ina3221x@40 {=0A=
				compatible =3D "ti,ina3221x";=0A=
				reg =3D <0x40>;=0A=
				status =3D "okay";=0A=
				ti,trigger-config =3D <0x7003>;=0A=
				ti,continuous-config =3D <0x7607>;=0A=
				ti,enable-forced-continuous;=0A=
				#io-channel-cells =3D <0x1>;=0A=
				#address-cells =3D <0x1>;=0A=
				#size-cells =3D <0x0>;=0A=
				linux,phandle =3D <0xad>;=0A=
				phandle =3D <0xad>;=0A=
=0A=
				channel@0 {=0A=
					reg =3D <0x0>;=0A=
					ti,rail-name =3D "POM_5V_GPU";=0A=
					ti,shunt-resistor-mohm =3D <0x5>;=0A=
				};=0A=
=0A=
				channel@1 {=0A=
					reg =3D <0x1>;=0A=
					ti,rail-name =3D "POM_5V_IN";=0A=
					ti,shunt-resistor-mohm =3D <0x5>;=0A=
				};=0A=
=0A=
				channel@2 {=0A=
					reg =3D <0x2>;=0A=
					ti,rail-name =3D "POM_5V_CPU";=0A=
					ti,shunt-resistor-mohm =3D <0x5>;=0A=
				};=0A=
			};=0A=
		};=0A=
=0A=
		nvcsi {=0A=
			num-channels =3D <0x2>;=0A=
			#address-cells =3D <0x1>;=0A=
			#size-cells =3D <0x0>;=0A=
			linux,phandle =3D <0xbf>;=0A=
			phandle =3D <0xbf>;=0A=
=0A=
			channel@0 {=0A=
				reg =3D <0x0>;=0A=
				linux,phandle =3D <0xc0>;=0A=
				phandle =3D <0xc0>;=0A=
=0A=
				ports {=0A=
					#address-cells =3D <0x1>;=0A=
					#size-cells =3D <0x0>;=0A=
=0A=
					port@0 {=0A=
						reg =3D <0x0>;=0A=
						linux,phandle =3D <0xc1>;=0A=
						phandle =3D <0xc1>;=0A=
=0A=
						endpoint@0 {=0A=
							port-index =3D <0x0>;=0A=
							bus-width =3D <0x2>;=0A=
							remote-endpoint =3D <0x74>;=0A=
							linux,phandle =3D <0x73>;=0A=
							phandle =3D <0x73>;=0A=
						};=0A=
					};=0A=
=0A=
					port@1 {
						reg = <0x1>;
						linux,phandle = <0xc3>;
						phandle = <0xc3>;

						endpoint@1 {
							remote-endpoint = <0x75>;
							linux,phandle = <0x5a>;
							phandle = <0x5a>;
						};
					};
				};
			};

			channel@1 {
				reg = <0x1>;
				linux,phandle = <0xce>;
				phandle = <0xce>;

				ports {
					#address-cells = <0x1>;
					#size-cells = <0x0>;

					port@2 {
						reg = <0x0>;
						linux,phandle = <0xcf>;
						phandle = <0xcf>;

						endpoint@2 {
							port-index = <0x4>;
							bus-width = <0x2>;
							remote-endpoint = <0x76>;
							linux,phandle = <0xa9>;
							phandle = <0xa9>;
						};
					};

					port@3 {
						reg = <0x1>;
						linux,phandle = <0xd0>;
						phandle = <0xd0>;

						endpoint@3 {
							remote-endpoint = <0x77>;
							linux,phandle = <0x5b>;
							phandle = <0x5b>;
						};
					};
				};
			};
		};
	};

	gpu {
		compatible = "nvidia,tegra210-gm20b", "nvidia,gm20b";
		nvidia,host1x = <0x78>;
		reg = <0x0 0x57000000 0x0 0x1000000 0x0 0x58000000 0x0 0x1000000 0x0 0x538f0000 0x0 0x1000>;
		interrupts = <0x0 0x9d 0x4 0x0 0x9e 0x4>;
		interrupt-names = "stall", "nonstall";
		iommus = <0x2b 0x1f>;
		access-vpr-phys;
		status = "okay";
		resets = <0x21 0xb8>;
		reset-names = "gpu";
	};

	mipical {
		compatible = "nvidia,tegra210-mipical";
		reg = <0x0 0x700e3000 0x0 0x100>;
		clocks = <0x21 0x38 0x21 0xb1>;
		clock-names = "mipi_cal", "uart_mipi_cal";
		status = "okay";
		assigned-clocks = <0x21 0xb1>;
		assigned-clock-parents = <0x21 0xf3>;
		assigned-clock-rates = <0x40d9900>;

		prod-settings {
			#prod-cells = <0x3>;

			prod_c_dphy_dsi {
				prod = <0x38 0x1f00 0x200 0x3c 0x1f00 0x200 0x40 0x1f00 0x200 0x44 0x1f00 0x200 0x5c 0xf00 0x300 0x60 0xf00f0 0x10010 0x64 0x1f 0x2 0x68 0x1f 0x2 0x70 0x1f 0x2 0x74 0x1f 0x2>;
			};
		};
	};

	pmc@7000e400 {
		compatible = "nvidia,tegra210-pmc";
		reg = <0x0 0x7000e400 0x0 0x400>;
		#padcontroller-cells = <0x1>;
		status = "okay";
		clocks = <0x21 0x125>;
		clock-names = "pclk";
		nvidia,secure-pmc;
		clear-all-io-pads-dpd;
		pinctrl-names = "default";
		pinctrl-0 = <0x79>;
		nvidia,restrict-voltage-switch;
		#nvidia,wake-cells = <0x3>;
		nvidia,invert-interrupt;
		nvidia,suspend-mode = <0x0>;
		nvidia,cpu-pwr-good-time = <0x0>;
		nvidia,cpu-pwr-off-time = <0x0>;
		nvidia,core-pwr-good-time = <0x11eb 0xf24>;
		nvidia,core-pwr-off-time = <0x9899>;
		nvidia,core-pwr-req-active-high;
		nvidia,sys-clock-req-active-high;
		linux,phandle = <0x37>;
		phandle = <0x37>;

		pex_en {
			linux,phandle = <0x7f>;
			phandle = <0x7f>;

			pex-io-dpd-signals-dis {
				pins = "pex-bias", "pex-clk1", "pex-clk2";
				low-power-disable;
			};
		};

		pex_dis {
			linux,phandle = <0x80>;
			phandle = <0x80>;

			pex-io-dpd-signals-en {
				pins = "pex-bias", "pex-clk1", "pex-clk2";
				low-power-enable;
			};
		};

		hdmi-dpd-enable {
			linux,phandle = <0x63>;
			phandle = <0x63>;

			hdmi-pad-lowpower-enable {
				pins = "hdmi";
				low-power-enable;
			};
		};

		hdmi-dpd-disable {
			linux,phandle = <0x62>;
			phandle = <0x62>;

			hdmi-pad-lowpower-disable {
				pins = "hdmi";
				low-power-disable;
			};
		};

		dsi-dpd-enable {
			linux,phandle = <0x5f>;
			phandle = <0x5f>;

			dsi-pad-lowpower-enable {
				pins = "dsi";
				low-power-enable;
			};
		};

		dsi-dpd-disable {
			linux,phandle = <0x5e>;
			phandle = <0x5e>;

			dsi-pad-lowpower-disable {
				pins = "dsi";
				low-power-disable;
			};
		};

		dsib-dpd-enable {
			linux,phandle = <0x61>;
			phandle = <0x61>;

			dsib-pad-lowpower-enable {
				pins = "dsib";
				low-power-enable;
			};
		};

		dsib-dpd-disable {
			linux,phandle = <0x60>;
			phandle = <0x60>;

			dsib-pad-lowpower-disable {
				pins = "dsib";
				low-power-disable;
			};
		};

		iopad-defaults {
			linux,phandle = <0x79>;
			phandle = <0x79>;

			audio-pads {
				pins = "audio";
				nvidia,power-source-voltage = <0x0>;
			};

			cam-pads {
				pins = "cam";
				nvidia,power-source-voltage = <0x0>;
			};

			dbg-pads {
				pins = "dbg";
				nvidia,power-source-voltage = <0x0>;
			};

			dmic-pads {
				pins = "dmic";
				nvidia,power-source-voltage = <0x0>;
			};

			pex-ctrl-pads {
				pins = "pex-ctrl";
				nvidia,power-source-voltage = <0x0>;
			};

			spi-pads {
				pins = "spi";
				nvidia,power-source-voltage = <0x0>;
			};

			uart-pads {
				pins = "uart";
				nvidia,power-source-voltage = <0x0>;
			};

			pex-io-pads {
				pins = "pex-bias", "pex-clk1", "pex-clk2";
				low-power-enable;
			};

			audio-hv-pads {
				pins = "audio-hv";
				nvidia,power-source-voltage = <0x0>;
			};

			spi-hv-pads {
				pins = "spi-hv";
				nvidia,power-source-voltage = <0x0>;
			};

			gpio-pads {
				pins = "gpio";
				nvidia,power-source-voltage = <0x0>;
			};

			sdmmc-io-pads {
				pins = "sdmmc1", "sdmmc3";
				nvidia,enable-voltage-switching;
			};
		};

		bootrom-commands {
			nvidia,command-retries-count = <0x2>;
			nvidia,delay-between-commands-us = <0xa>;
			nvidia,wait-start-bus-clear-us = <0xa>;
			#address-cells = <0x1>;
			#size-cells = <0x0>;

			reset-commands {
				nvidia,command-retries-count = <0x2>;
				nvidia,delay-between-commands-us = <0xa>;
				nvidia,wait-start-bus-clear-us = <0xa>;
				#address-cells = <0x1>;
				#size-cells = <0x0>;

				commands@4-003c {
					nvidia,command-names = "pmic-rails";
					reg = <0x3c>;
					nvidia,enable-8bit-register;
					nvidia,enable-8bit-data;
					nvidia,controller-type-i2c;
					nvidia,controller-id = <0x4>;
					nvidia,enable-controller-reset;
					nvidia,write-commands = <0x16 0x20>;
				};
			};

			power-off-commands {
				nvidia,command-retries-count = <0x2>;
				nvidia,delay-between-commands-us = <0xa>;
				nvidia,wait-start-bus-clear-us = <0xa>;
				#address-cells = <0x1>;
				#size-cells = <0x0>;

				commands@4-003c {
					nvidia,command-names = "pmic-rails";
					reg = <0x3c>;
					nvidia,enable-8bit-register;
					nvidia,enable-8bit-data;
					nvidia,controller-type-i2c;
					nvidia,controller-id = <0x4>;
					nvidia,enable-controller-reset;
					nvidia,write-commands = <0x3b 0x1 0x42 0x5b 0x41 0xf8>;
				};
			};
		};

		sdmmc1_e_33V_enable {
			linux,phandle = <0x96>;
			phandle = <0x96>;

			sdmmc1 {
				pins = "sdmmc1";
				nvidia,power-source-voltage = <0x1>;
			};
		};

		sdmmc1_e_33V_disable {
			linux,phandle = <0x97>;
			phandle = <0x97>;

			sdmmc1 {
				pins = "sdmmc1";
				nvidia,power-source-voltage = <0x0>;
			};
		};

		sdmmc3_e_33V_enable {
			linux,phandle = <0x8e>;
			phandle = <0x8e>;

			sdmmc3 {
				pins = "sdmmc3";
				nvidia,power-source-voltage = <0x1>;
			};
		};

		sdmmc3_e_33V_disable {
			linux,phandle = <0x8f>;
			phandle = <0x8f>;

			sdmmc3 {
				pins = "sdmmc3";
				nvidia,power-source-voltage = <0x0>;
			};
		};
	};

	se@70012000 {
		compatible = "nvidia,tegra210-se";
		reg = <0x0 0x70012000 0x0 0x2000>;
		iommus = <0x2b 0x23 0x2b 0x26>;
		iommu-group-id = <0x4>;
		interrupts = <0x0 0x3a 0x4>;
		clocks = <0x21 0x195 0x21 0x95>;
		clock-names = "se", "entropy";
		status = "okay";
		supported-algos = "aes", "drbg", "rsa", "sha";
		linux,phandle = <0x116>;
		phandle = <0x116>;
	};

	hda@70030000 {
		compatible = "nvidia,tegra30-hda";
		reg = <0x0 0x70030000 0x0 0x10000>;
		interrupts = <0x0 0x51 0x4>;
		clocks = <0x21 0x7d 0x21 0x80 0x21 0x6f 0x21 0xca>;
		clock-names = "hda", "hda2hdmi", "hda2codec_2x", "maud";
		status = "okay";
	};

	pcie@1003000 {
		compatible = "nvidia,tegra210-pcie", "nvidia,tegra124-pcie";
		power-domains = <0x7a>;
		device_type = "pci";
		reg = <0x0 0x1003000 0x0 0x800 0x0 0x1003800 0x0 0x800 0x0 0x11fff000 0x0 0x1000>;
		reg-names = "pads", "afi", "cs";
		interrupts = <0x0 0x62 0x4 0x0 0x63 0x4>;
		interrupt-names = "intr", "msi";
		clocks = <0x21 0x46 0x21 0x48 0x21 0x107 0x21 0x12c 0x21 0x63>;
		clock-names = "pex", "afi", "pll_e", "cml", "mselect";
		resets = <0x21 0x46 0x21 0x48 0x21 0x4a>;
		reset-names = "pex", "afi", "pcie_x";
		#interrupt-cells = <0x1>;
		interrupt-map-mask = <0x0 0x0 0x0 0x0>;
		interrupt-map = <0x0 0x0 0x0 0x0 0x33 0x0 0x62 0x4>;
		pinctrl-names = "clkreq-0-bi-dir-enable", "clkreq-1-bi-dir-enable", "clkreq-0-in-dir-enable", "clkreq-1-in-dir-enable", "pex-io-dpd-dis", "pex-io-dpd-en";
		pinctrl-0 = <0x7b>;
		pinctrl-1 = <0x7c>;
		pinctrl-2 = <0x7d>;
		pinctrl-3 = <0x7e>;
		pinctrl-4 = <0x7f>;
		pinctrl-5 = <0x80>;
		bus-range = <0x0 0xff>;
		#address-cells = <0x3>;
		#size-cells = <0x2>;
		ranges = <0x82000000 0x0 0x1000000 0x0 0x1000000 0x0 0x1000 0x82000000 0x0 0x1001000 0x0 0x1001000 0x0 0x1000 0x81000000 0x0 0x0 0x0 0x12000000 0x0 0x10000 0x82000000 0x0 0x13000000 0x0 0x13000000 0x0 0xd000000 0xc2000000 0x0 0x20000000 0x0 0x20000000 0x0 0x20000000>;
		status = "okay";
		nvidia,wake-gpio = <0x56 0x2 0x0>;
		nvidia,pmc-wakeup = <0x37 0x1 0x0 0x8>;
		avdd-pll-uerefe-supply = <0x3e>;
		hvddio-pex-supply = <0x36>;
		dvddio-pex-supply = <0x3f>;
		dvdd-pex-pll-supply = <0x3f>;
		hvdd-pex-pll-e-supply = <0x36>;
		vddio-pex-ctl-supply = <0x36>;

		pci@1,0 {
			device_type = "pci";
			assigned-addresses = <0x82000800 0x0 0x1000000 0x0 0x1000>;
			reg = <0x800 0x0 0x0 0x0 0x0>;
			status = "okay";
			#address-cells = <0x3>;
			#size-cells = <0x2>;
			ranges;
			nvidia,num-lanes = <0x4>;
			nvidia,afi-ctl-offset = <0x110>;
			nvidia,disable-aspm-states = <0xf>;
			phys = <0x81 0x82 0x83 0x84>;
			phy-names = "pcie-0", "pcie-1", "pcie-2", "pcie-3";
		};

		pci@2,0 {
			device_type = "pci";
			assigned-addresses = <0x82001000 0x0 0x1001000 0x0 0x1000>;
			reg = <0x1000 0x0 0x0 0x0 0x0>;
			status = "okay";
			#address-cells = <0x3>;
			#size-cells = <0x2>;
			ranges;
			nvidia,num-lanes = <0x1>;
			nvidia,afi-ctl-offset = <0x118>;
			nvidia,disable-aspm-states = <0xf>;
			phys = <0x85>;
			phy-names = "pcie-0";
			nvidia,plat-gpios = <0x56 0xbb 0x0>;
			linux,phandle = <0xc6>;
			phandle = <0xc6>;

			ethernet@0,0 {
				reg = <0x0 0x0 0x0 0x0 0x0>;
				linux,phandle = <0xd4>;
				phandle = <0xd4>;
			};
		};

		prod-settings {
			#prod-cells = <0x3>;

			prod_c_pad {
				prod = <0xc8 0xffffffff 0x90b890b8>;
			};

			prod_c_rp {
				prod = <0xe84 0xffff 0xf 0xea4 0xffff 0x8f 0xe90 0xffffffff 0x55010000 0xe94 0xffffffff 0x1 0xeb0 0xffffffff 0x55010000 0xeb4 0xffffffff 0x1 0xe8c 0xffff0000 0x670000 0xeac 0xffff0000 0xc70000>;
			};
		};
	};

	i2c@7000c000 {
		#address-cells = <0x1>;
		#size-cells = <0x0>;
		compatible = "nvidia,tegra210-i2c";
		reg = <0x0 0x7000c000 0x0 0x100>;
		interrupts = <0x0 0x26 0x4>;
		iommus = <0x2b 0xe>;
		status = "okay";
		clock-frequency = <0x61a80>;
		dmas = <0x4c 0x15 0x4c 0x15>;
		dma-names = "rx", "tx";
		clocks = <0x21 0xc 0x21 0xf3>;
		clock-names = "div-clk", "parent";
		resets = <0x21 0xc>;
		reset-names = "i2c";
		linux,phandle = <0xab>;
		phandle = <0xab>;

		temp-sensor@4c {
			#thermal-sensor-cells = <0x1>;
			compatible = "ti,tmp451";
			reg = <0x4c>;
			sensor-name = "tegra";
			supported-hwrev = <0x1>;
			offset = <0x0>;
			conv-rate = <0x6>;
			extended-rage = <0x1>;
			interrupt-parent = <0x56>;
			interrupts = <0xbc 0x8>;
			vdd-supply = <0x36>;
			temp-alert-gpio = <0x56 0xbc 0x0>;
			status = "disabled";
			linux,phandle = <0x117>;
			phandle = <0x117>;

			loc {
				shutdown-limit = <0x78>;
			};

			ext {
				shutdown-limit = <0x69>;
			};
		};
	};

	i2c@7000c400 {
		#address-cells = <0x1>;
		#size-cells = <0x0>;
		compatible = "nvidia,tegra210-i2c";
		reg = <0x0 0x7000c400 0x0 0x100>;
		interrupts = <0x0 0x54 0x4>;
		iommus = <0x2b 0xe>;
		status = "okay";
		clock-frequency = <0x186a0>;
		dmas = <0x4c 0x16 0x4c 0x16>;
		dma-names = "rx", "tx";
		clocks = <0x21 0x36 0x21 0xf3>;
		clock-names = "div-clk", "parent";
		resets = <0x21 0x36>;
		reset-names = "i2c";
		linux,phandle = <0x118>;
		phandle = <0x118>;

		i2cmux@70 {
			compatible = "nxp,pca9546";
			reg = <0x70>;
			#address-cells = <0x1>;
			#size-cells = <0x0>;
			vcc-supply = <0x47>;
			vcc-pullup-supply = <0x47>;
			status = "disabled";
			linux,phandle = <0xd5>;
			phandle = <0xd5>;

			i2c@0 {
				reg = <0x0>;
				i2c-mux,deselect-on-exit;
				#address-cells = <0x1>;
				#size-cells = <0x0>;
			};

			i2c@1 {
				reg = <0x1>;
				i2c-mux,deselect-on-exit;
				#address-cells = <0x1>;
				#size-cells = <0x0>;

				ina3221x@40 {
					compatible = "ti,ina3221x";
					reg = <0x40>;
					ti,trigger-config = <0x7003>;
					ti,continuous-config = <0x7c07>;
					ti,enable-forced-continuous;
					#address-cells = <0x1>;
					#size-cells = <0x0>;

					channel@0 {
						reg = <0x0>;
						ti,rail-name = "VDD_5V";
						ti,shunt-resistor-mohm = <0xa>;
					};

					channel@1 {
						reg = <0x1>;
						ti,rail-name = "VDD_3V3";
						ti,shunt-resistor-mohm = <0xa>;
					};

					channel@2 {
						reg = <0x2>;
						ti,rail-name = "VDD_1V8";
						ti,shunt-resistor-mohm = <0x1>;
					};
				};

				ina3221x@41 {
					compatible = "ti,ina3221x";
					reg = <0x41>;
					ti,trigger-config = <0x7003>;
					ti,continuous-config = <0x7c07>;
					ti,enable-forced-continuous;
					#address-cells = <0x1>;
					#size-cells = <0x0>;

					channel@0 {
						reg = <0x0>;
						ti,rail-name = "VDD_5V_AUD";
						ti,shunt-resistor-mohm = <0x1>;
					};

					channel@1 {
						reg = <0x1>;
						ti,rail-name = "VDD_3V3_AUD";
						ti,shunt-resistor-mohm = <0xa>;
					};

					channel@2 {
						reg = <0x2>;
						ti,rail-name = "VDD_1V8_AUD";
						ti,shunt-resistor-mohm = <0xa>;
					};
				};

				ina3221x@42 {
					compatible = "ti,ina3221x";
					reg = <0x42>;
					ti,trigger-config = <0x7003>;
					ti,continuous-config = <0x7c07>;
					ti,enable-forced-continuous;
					#address-cells = <0x1>;
					#size-cells = <0x0>;

					channel@0 {
						reg = <0x0>;
						ti,rail-name = "VDD_3V3_GPS";
						ti,shunt-resistor-mohm = <0xa>;
					};

					channel@1 {
						reg = <0x1>;
						ti,rail-name = "VDD_3V3_NFC";
						ti,shunt-resistor-mohm = <0xa>;
					};

					channel@2 {
						reg = <0x2>;
						ti,rail-name = "VDD_3V3_GYRO";
						ti,shunt-resistor-mohm = <0xa>;
					};
				};
			};

			i2c@2 {
				reg = <0x2>;
				i2c-mux,deselect-on-exit;
				#address-cells = <0x1>;
				#size-cells = <0x0>;
			};

			i2c@3 {
				reg = <0x3>;
				i2c-mux,deselect-on-exit;
				#address-cells = <0x1>;
				#size-cells = <0x0>;

				rt5659.12-001a@1a {
					compatible = "realtek,rt5658";
					reg = <0x1a>;
					status = "disabled";
					gpios = <0x56 0xe 0x0>;
					realtek,jd-src = <0x1>;
					realtek,dmic1-data-pin = <0x2>;
					linux,phandle = <0xdd>;
					phandle = <0xdd>;
				};
			};
		};

		gpio@20 {
			compatible = "ti,tca6416";
			reg = <0x20>;
			gpio-controller;
			#gpio-cells = <0x2>;
			vcc-supply = <0x47>;
			status = "disabled";
			linux,phandle = <0xd6>;
			phandle = <0xd6>;
		};

		icm20628@68 {
			compatible = "invensense,mpu6xxx";
			reg = <0x68>;
			interrupt-parent = <0x56>;
			interrupts = <0xc8 0x1>;
			accelerometer_matrix = [01 00 00 00 01 00 00 00 01];
			gyroscope_matrix = [01 00 00 00 01 00 00 00 01];
			geomagnetic_rotation_vector_disable = <0x1>;
			gyroscope_uncalibrated_disable = <0x1>;
			quaternion_disable = <0x1>;
			status = "disabled";
			linux,phandle = <0xd7>;
			phandle = <0xd7>;
		};

		ak8963@0d {
			compatible = "ak,ak89xx";
			reg = <0xd>;
			magnetic_field_matrix = [01 00 00 00 01 00 00 00 01];
			status = "disabled";
			linux,phandle = <0xd8>;
			phandle = <0xd8>;
		};

		cm32180@48 {
			compatible = "capella,cm32180";
			reg = <0x48>;
			gpio_irq = <0x56 0xc 0x1>;
			light_uncalibrated_lo = <0x1>;
			light_calibrated_lo = <0x96>;
			light_uncalibrated_hi = <0x17318>;
			light_calibrated_hi = <0x1ab3f0>;
			status = "disabled";
			linux,phandle = <0xd9>;
			phandle = <0xd9>;
		};

		iqs263@44 {
			status = "disabled";
			linux,phandle = <0x119>;
			phandle = <0x119>;
		};

		rt5659.1-001a@1a {
			compatible = "realtek,rt5658";
			reg = <0x1a>;
			status = "disabled";
			gpios = <0x56 0xe 0x0>;
			realtek,jd-src = <0x1>;
			realtek,dmic1-data-pin = <0x2>;
			linux,phandle = <0xdc>;
			phandle = <0xdc>;
		};
	};

	i2c@7000c500 {
		#address-cells = <0x1>;
		#size-cells = <0x0>;
		compatible = "nvidia,tegra210-i2c";
		reg = <0x0 0x7000c500 0x0 0x100>;
		interrupts = <0x0 0x5c 0x4>;
		iommus = <0x2b 0xe>;
		status = "okay";
		clock-frequency = <0x61a80>;
		dmas = <0x4c 0x17 0x4c 0x17>;
		dma-names = "rx", "tx";
		clocks = <0x21 0x43 0x21 0xf3>;
		clock-names = "div-clk", "parent";
		resets = <0x21 0x43>;
		reset-names = "i2c";
		linux,phandle = <0xac>;
		phandle = <0xac>;

		battery-charger@6b {
			status = "disabled";
		};
	};

	i2c@7000c700 {
		#address-cells = <0x1>;
		#size-cells = <0x0>;
		compatible = "nvidia,tegra210-i2c";
		reg = <0x0 0x7000c700 0x0 0x100>;
		interrupts = <0x0 0x78 0x4>;
		iommus = <0x2b 0xe>;
		status = "okay";
		clock-frequency = <0x186a0>;
		dmas = <0x4c 0x1a 0x4c 0x1a>;
		dma-names = "rx", "tx";
		clocks = <0x21 0x67 0x21 0xf3>;
		clock-names = "div-clk", "parent";
		resets = <0x21 0x67>;
		reset-names = "i2c";
		nvidia,restrict-clk-change;
		print-rate-limit = <0x78 0x1>;
		linux,phandle = <0x71>;
		phandle = <0x71>;
	};

	i2c@7000d000 {
		#address-cells = <0x1>;
		#size-cells = <0x0>;
		compatible = "nvidia,tegra210-i2c";
		reg = <0x0 0x7000d000 0x0 0x100>;
		interrupts = <0x0 0x35 0x4>;
		scl-gpio = <0x56 0xc3 0x0>;
		sda-gpio = <0x56 0xc4 0x0>;
		nvidia,require-cldvfs-clock;
		iommus = <0x2b 0xe>;
		status = "okay";
		clock-frequency = <0xf4240>;
		dmas = <0x4c 0x18 0x4c 0x18>;
		dma-names = "rx", "tx";
		clocks = <0x21 0x2f 0x21 0xf3>;
		clock-names = "div-clk", "parent";
		resets = <0x21 0x2f>;
		reset-names = "i2c";
		nvidia,bit-bang-after-shutdown;
		linux,phandle = <0x11a>;
		phandle = <0x11a>;

		max77620@3c {
			compatible = "maxim,max77620";
			reg = <0x3c>;
			interrupts = <0x0 0x56 0x0>;
			nvidia,pmc-wakeup = <0x37 0x1 0x33 0x8>;
			#interrupt-cells = <0x2>;
			interrupt-controller;
			gpio-controller;
			#gpio-cells = <0x2>;
			maxim,enable-clock32k-out;
			maxim,system-pmic-power-off;
			maxim,hot-die-threshold-temp = <0x1adb0>;
			#thermal-sensor-cells = <0x0>;
			pinctrl-names = "default";
			pinctrl-0 = <0x86>;
			maxim,power-shutdown-gpio-states = <0x1 0x0>;
			linux,phandle = <0x1e>;
			phandle = <0x1e>;

			pinmux@0 {
				linux,phandle = <0x86>;
				phandle = <0x86>;

				pin_gpio0 {
					pins = "gpio0";
					function = "gpio";
				};

				pin_gpio1 {
					pins = "gpio1";
					function = "gpio";
					drive-open-drain = <0x1>;
					maxim,active-fps-source = <0x3>;
					maxim,active-fps-power-up-slot = <0x0>;
					maxim,active-fps-power-down-slot = <0x7>;
				};

				pin_gpio2 {
					pins = "gpio2";
					maxim,active-fps-source = <0x0>;
					maxim,active-fps-power-up-slot = <0x0>;
					maxim,active-fps-power-down-slot = <0x7>;
				};

				pin_gpio3 {
					pins = "gpio3";
					maxim,active-fps-source = <0x0>;
					maxim,active-fps-power-up-slot = <0x4>;
					maxim,active-fps-power-down-slot = <0x3>;
				};

				pin_gpio2_3 {
					pins = "gpio2", "gpio3";
					function = "fps-out";
					drive-open-drain = <0x1>;
					maxim,active-fps-source = <0x0>;
				};

				pin_gpio4 {
					pins = "gpio4";
					function = "32k-out1";
				};

				pin_gpio5_6_7 {
					pins = "gpio5", "gpio6", "gpio7";
					function = "gpio";
					drive-push-pull = <0x1>;
				};
			};

			spmic-default-output-high {
				gpio-hog;
				output-high;
				gpios = <0x1 0x0>;
				label = "spmic-default-output-high";
			};

			watchdog {
				maxim,wdt-timeout = <0x10>;
				maxim,wdt-clear-time = <0xd>;
				status = "disabled";
				dt-override-status-odm-data = <0x20000 0x20000>;
				linux,phandle = <0xb6>;
				phandle = <0xb6>;
			};

			fps {
				#address-cells = <0x1>;
				#size-cells = <0x0>;

				fps0 {
					reg = <0x0>;
					maxim,shutdown-fps-time-periodi-us = <0x500>;
					maxim,fps-event-source = <0x0>;
				};

				fps1 {
					reg = <0x1>;
					maxim,shutdown-fps-time-period-us = <0x500>;
					maxim,fps-event-source = <0x1>;
					maxim,device-state-on-disabled-event = <0x0>;
				};

				fps2 {
					reg = <0x2>;
					maxim,fps-event-source = <0x0>;
				};
			};

			backup-battery {
				maxim,backup-battery-charging-current = <0x64>;
				maxim,backup-battery-charging-voltage = <0x2dc6c0>;
				maxim,backup-battery-output-resister = <0x64>;
			};

			regulators {
				in-ldo0-1-supply = <0x87>;
				in-ldo7-8-supply = <0x87>;

				sd0 {
					regulator-name = "vdd-core";
					regulator-min-microvolt = <0xf4240>;
					regulator-max-microvolt = <0x11da50>;
					regulator-boot-on;
					regulator-always-on;
					maxim,active-fps-source = <0x1>;
					regulator-init-mode = <0x2>;
					maxim,active-fps-power-up-slot = <0x1>;
					maxim,active-fps-power-down-slot = <0x6>;
					regulator-enable-ramp-delay = <0x92>;
					regulator-disable-ramp-delay = <0xff0>;
					regulator-ramp-delay = <0x6b6c>;
					regulator-ramp-delay-scale = <0x12c>;
					linux,phandle = <0xa1>;
					phandle = <0xa1>;
				};

				sd1 {
					regulator-name = "vdd-ddr-1v1";
					regulator-always-on;
					regulator-boot-on;
					regulator-init-mode = <0x2>;
					maxim,active-fps-source = <0x0>;
					maxim,active-fps-power-up-slot = <0x5>;
					maxim,active-fps-power-down-slot = <0x2>;
					regulator-min-microvolt = <0x118c30>;
					regulator-max-microvolt = <0x118c30>;
					regulator-enable-ramp-delay = <0x82>;
					regulator-disable-ramp-delay = <0x23988>;
					regulator-ramp-delay = <0x6b6c>;
					regulator-ramp-delay-scale = <0x12c>;
					linux,phandle = <0x11b>;
					phandle = <0x11b>;
				};

				sd2 {
					regulator-name = "vdd-pre-reg-1v35";
					regulator-min-microvolt = <0x149970>;
					regulator-max-microvolt = <0x149970>;
					regulator-always-on;
					regulator-boot-on;
					maxim,active-fps-source = <0x1>;
					maxim,active-fps-power-up-slot = <0x2>;
					maxim,active-fps-power-down-slot = <0x5>;
					regulator-enable-ramp-delay = <0xb0>;
					regulator-disable-ramp-delay = <0x7d00>;
					regulator-ramp-delay = <0x6b6c>;
					regulator-ramp-delay-scale = <0x15e>;
					linux,phandle = <0x87>;
					phandle = <0x87>;
				};

				sd3 {
					regulator-name = "vdd-1v8";
					regulator-min-microvolt = <0x1b7740>;
					regulator-max-microvolt = <0x1b7740>;
					regulator-always-on;
					regulator-boot-on;
					maxim,active-fps-source = <0x0>;
					regulator-init-mode = <0x2>;
					maxim,active-fps-power-up-slot = <0x3>;
					maxim,active-fps-power-down-slot = <0x4>;
					regulator-enable-ramp-delay = <0xf2>;
					regulator-disable-ramp-delay = <0x1ccf0>;
					regulator-ramp-delay = <0x6b6c>;
					regulator-ramp-delay-scale = <0x168>;
					linux,phandle = <0x36>;
					phandle = <0x36>;
				};

				ldo0 {
					regulator-name = "avdd-sys-1v2";
					regulator-min-microvolt = <0x124f80>;
					regulator-max-microvolt = <0x124f80>;
					regulator-boot-on;
					maxim,active-fps-source = <0x3>;
					maxim,active-fps-power-up-slot = <0x0>;
					maxim,active-fps-power-down-slot = <0x7>;
					regulator-enable-ramp-delay = <0x1a>;
					regulator-disable-ramp-delay = <0x272>;
					regulator-ramp-delay = <0x186a0>;
					regulator-ramp-delay-scale = <0xc8>;
					linux,phandle = <0x3d>;
					phandle = <0x3d>;
				};

				ldo1 {
					regulator-name = "vdd-pex-1v0";
					regulator-min-microvolt = <0x100590>;
					regulator-max-microvolt = <0x100590>;
					regulator-always-on;
					maxim,active-fps-source = <0x3>;
					maxim,active-fps-power-up-slot = <0x0>;
					maxim,active-fps-power-down-slot = <0x7>;
					regulator-enable-ramp-delay = <0x16>;
					regulator-disable-ramp-delay = <0x276>;
					regulator-ramp-delay = <0x186a0>;
					regulator-ramp-delay-scale = <0xc8>;
					linux,phandle = <0x3f>;
					phandle = <0x3f>;
				};

				ldo2 {
					regulator-name = "vddio-sdmmc-ap";
					regulator-min-microvolt = <0x1b7740>;
					regulator-max-microvolt = <0x325aa0>;
					maxim,active-fps-source = <0x3>;
					maxim,active-fps-power-up-slot = <0x0>;
					maxim,active-fps-power-down-slot = <0x7>;
					regulator-enable-ramp-delay = <0x3e>;
					regulator-disable-ramp-delay = <0x28a>;
					regulator-ramp-delay = <0x186a0>;
					regulator-ramp-delay-scale = <0xc8>;
					linux,phandle = <0x98>;
					phandle = <0x98>;
				};

				ldo3 {
					regulator-name = "vdd-ldo3";
					regulator-min-microvolt = <0x2ab980>;
					regulator-max-microvolt = <0x2ab980>;
					maxim,active-fps-source = <0x3>;
					maxim,active-fps-power-up-slot = <0x0>;
					maxim,active-fps-power-down-slot = <0x7>;
					regulator-enable-ramp-delay = <0x32>;
					regulator-disable-ramp-delay = <0x456>;
					regulator-ramp-delay = <0x186a0>;
					regulator-ramp-delay-scale = <0xc8>;
					status = "disabled";
					linux,phandle = <0x11c>;
					phandle = <0x11c>;
				};

				ldo4 {
					regulator-name = "vdd-rtc";
					regulator-min-microvolt = <0xcf850>;
					regulator-max-microvolt = <0x10c8e0>;
					regulator-always-on;
					regulator-boot-on;
					maxim,active-fps-source = <0x0>;
					maxim,active-fps-power-up-slot = <0x1>;
					maxim,active-fps-power-down-slot = <0x6>;
					regulator-enable-ramp-delay = <0x16>;
					regulator-disable-ramp-delay = <0x262>;
					regulator-ramp-delay = <0x186a0>;
					regulator-ramp-delay-scale = <0xc8>;
					regulator-disable-active-discharge;
					linux,phandle = <0x11d>;
					phandle = <0x11d>;
				};

				ldo5 {
					regulator-name = "vdd-ldo5";
					regulator-min-microvolt = <0x325aa0>;
					regulator-max-microvolt = <0x325aa0>;
					maxim,active-fps-source = <0x3>;
					maxim,active-fps-power-up-slot = <0x0>;
					maxim,active-fps-power-down-slot = <0x7>;
					regulator-enable-ramp-delay = <0x3e>;
					regulator-disable-ramp-delay = <0x280>;
					regulator-ramp-delay = <0x186a0>;
					regulator-ramp-delay-scale = <0xc8>;
					status = "disabled";
					linux,phandle = <0x57>;
					phandle = <0x57>;
				};

				ldo6 {
					regulator-name = "vddio-sdmmc3-ap";
					regulator-min-microvolt = <0x1b7740>;
					regulator-max-microvolt = <0x325aa0>;
					regulator-boot-on;
					maxim,active-fps-source = <0x3>;
					maxim,active-fps-power-up-slot = <0x0>;
					maxim,active-fps-power-down-slot = <0x7>;
					regulator-enable-ramp-delay = <0x24>;
					regulator-disable-ramp-delay = <0x2a2>;
					regulator-ramp-delay = <0x186a0>;
					regulator-ramp-delay-scale = <0xc8>;
					linux,phandle = <0x58>;
					phandle = <0x58>;
				};

				ldo7 {
					regulator-name = "avdd-1v05-pll";
					regulator-min-microvolt = <0x100590>;
					regulator-max-microvolt = <0x100590>;
					regulator-always-on;
					regulator-boot-on;
					maxim,active-fps-source = <0x1>;
					maxim,active-fps-power-up-slot = <0x3>;
					maxim,active-fps-power-down-slot = <0x4>;
					regulator-enable-ramp-delay = <0x18>;
					regulator-disable-ramp-delay = <0xad0>;
					regulator-ramp-delay = <0x186a0>;
					regulator-ramp-delay-scale = <0xc8>;
					linux,phandle = <0x3e>;
					phandle = <0x3e>;
				};

				ldo8 {
					regulator-name = "avdd-io-hdmi-dp";
					regulator-min-microvolt = <0x100590>;
					regulator-max-microvolt = <0x100590>;
					regulator-boot-on;
					regulator-always-on;
					maxim,active-fps-source = <0x1>;
					maxim,active-fps-power-up-slot = <0x6>;
					maxim,active-fps-power-down-slot = <0x1>;
					regulator-enable-ramp-delay = <0x16>;
					regulator-disable-ramp-delay = <0x488>;
					regulator-ramp-delay = <0x186a0>;
					regulator-ramp-delay-scale = <0xc8>;
					linux,phandle = <0x40>;
					phandle = <0x40>;
				};
			};

			low-battery-monitor {
				maxim,low-battery-shutdown-enable;
			};
		};
	};

	i2c@7000d100 {
		#address-cells = <0x1>;
		#size-cells = <0x0>;
		compatible = "nvidia,tegra210-i2c";
		reg = <0x0 0x7000d100 0x0 0x100>;
		interrupts = <0x0 0x3f 0x4>;
		iommus = <0x2b 0xe>;
		status = "okay";
		clock-frequency = <0x61a80>;
		dmas = <0x4c 0x1e 0x4c 0x1e>;
		dma-names = "rx", "tx";
		clocks = <0x21 0xa6 0x21 0xf3>;
		clock-names = "div-clk", "parent";
		resets = <0x21 0xa6>;
		reset-names = "i2c";
		linux,phandle = <0x11e>;
		phandle = <0x11e>;
	};

	sdhci@700b0600 {
		compatible = "nvidia,tegra210-sdhci";
		reg = <0x0 0x700b0600 0x0 0x200>;
		interrupts = <0x0 0x1f 0x4>;
		aux-device-name = "sdhci-tegra.3";
		iommus = <0x2b 0x1c>;
		nvidia,runtime-pm-type = <0x1>;
		clocks = <0x21 0xf 0x21 0xf3 0x21 0x134 0x21 0xc1>;
		clock-names = "sdmmc", "pll_p", "pll_c4_out0", "sdmmc_legacy_tm";
		resets = <0x21 0xf>;
		reset-names = "sdhci";
		status = "disabled";
		tap-delay = <0x4>;
		trim-delay = <0x8>;
		nvidia,is-ddr-tap-delay;
		nvidia,ddr-tap-delay = <0x0>;
		mmc-ocr-mask = <0x0>;
		max-clk-limit = <0xbebc200>;
		bus-width = <0x8>;
		built-in;
		calib-3v3-offsets = <0x505>;
		calib-1v8-offsets = <0x505>;
		compad-vref-3v3 = <0x7>;
		compad-vref-1v8 = <0x7>;
		nvidia,en-io-trim-volt;
		nvidia,is-emmc;
		nvidia,enable-cq;
		ignore-pm-notify;
		keep-power-in-suspend;
		non-removable;
		cap-mmc-highspeed;
		cap-sd-highspeed;
		mmc-ddr-1_8v;
		mmc-hs200-1_8v;
		mmc-hs400-1_8v;
		nvidia,enable-strobe-mode;
		mmc-hs400-enhanced-strobe;
		nvidia,min-tap-delay = <0x6a>;
		nvidia,max-tap-delay = <0xb9>;
		pll_source = "pll_p", "pll_c4_out2";
		vqmmc-supply = <0x36>;
		vmmc-supply = <0x47>;
		uhs-mask = <0x0>;
		power-off-rail;
		no-sdio;
		no-sd;
		linux,phandle = <0xb0>;
		phandle = <0xb0>;

		prod-settings {
			#prod-cells = <0x3>;

			prod_c_ds {
				prod = <0x100 0xff0000 0x40000 0x1e0 0xf 0x7 0x1e4 0x30077f7f 0x30000505>;
			};

			prod_c_hs {
				prod = <0x100 0xff0000 0x40000 0x1e0 0xf 0x7 0x1e4 0x30077f7f 0x30000505>;
			};

			prod_c_ddr52 {
				prod = <0x100 0x1fff0000 0x0 0x1e0 0xf 0x7 0x1e4 0x30077f7f 0x30000505>;
			};

			prod_c_hs200 {
				prod = <0x100 0xff0000 0x40000 0x1c0 0xe000 0x4000 0x1e0 0xf 0x7 0x1e4 0x30077f7f 0x30000505>;
			};

			prod_c_hs400 {
				prod = <0x100 0xff0000 0x40000 0x1c0 0xe000 0x4000 0x1e0 0xf 0x7 0x1e4 0x30077f7f 0x30000505>;
			};=0A=
=0A=
			prod_c_hs533 {=0A=
				prod =3D <0x100 0xff0000 0x40000 0x1c0 0xe000 0x2000 0x1e0 0xf 0x7 =
0x1e4 0x30000505 0x30000505>;=0A=
			};=0A=
=0A=
			prod {=0A=
				prod =3D <0x100 0x1fff000e 0x8090028 0x10c 0x3f00 0x2800 0x1c0 =
0x8001fc0 0x8000040 0x1c4 0x77 0x0 0x120 0x20001 0x1 0x128 0x43000000 =
0x0 0x1f0 0x80000 0x80000>;=0A=
			};=0A=
		};=0A=
	};=0A=
=0A=
	sdhci@700b0400 {
		compatible = "nvidia,tegra210-sdhci";
		reg = <0x0 0x700b0400 0x0 0x200>;
		interrupts = <0x0 0x13 0x4>;
		aux-device-name = "sdhci-tegra.2";
		iommus = <0x2b 0x1b>;
		nvidia,runtime-pm-type = <0x0>;
		clocks = <0x21 0x45 0x21 0xf3 0x21 0x136 0x21 0xc1>;
		clock-names = "sdmmc", "pll_p", "pll_c4_out2", "sdmmc_legacy_tm";
		resets = <0x21 0x45>;
		reset-names = "sdhci";
		status = "disabled";
		tap-delay = <0x3>;
		trim-delay = <0x3>;
		mmc-ocr-mask = <0x3>;
		max-clk-limit = <0x61a80>;
		ddr-clk-limit = <0x2dc6c00>;
		bus-width = <0x4>;
		calib-3v3-offsets = <0x7d>;
		calib-1v8-offsets = <0x7b7b>;
		compad-vref-3v3 = <0x7>;
		compad-vref-1v8 = <0x7>;
		pll_source = "pll_p", "pll_c4_out2";
		ignore-pm-notify;
		cap-mmc-highspeed;
		cap-sd-highspeed;
		nvidia,en-io-trim-volt;
		nvidia,en-periodic-calib;
		cd-inverted;
		wp-inverted;
		pwrdet-support;
		nvidia,min-tap-delay = <0x6a>;
		nvidia,max-tap-delay = <0xb9>;
		pinctrl-names = "sdmmc_schmitt_enable", "sdmmc_schmitt_disable", "sdmmc_clk_schmitt_enable", "sdmmc_clk_schmitt_disable", "sdmmc_drv_code", "sdmmc_default_drv_code", "sdmmc_e_33v_enable", "sdmmc_e_33v_disable";
		pinctrl-0 = <0x88>;
		pinctrl-1 = <0x89>;
		pinctrl-2 = <0x8a>;
		pinctrl-3 = <0x8b>;
		pinctrl-4 = <0x8c>;
		pinctrl-5 = <0x8d>;
		pinctrl-6 = <0x8e>;
		pinctrl-7 = <0x8f>;
		vqmmc-supply = <0x36>;
		vmmc-supply = <0x47>;
		mmc-ddr-1_8v;
		uhs-mask = <0x0>;
		linux,phandle = <0xb8>;
		phandle = <0xb8>;

		prod-settings {
			#prod-cells = <0x3>;

			prod_c_ds {
				prod = <0x100 0xff0000 0x10000 0x1e0 0xf 0x7 0x1e4 0x30077f7f 0x3000007d>;
			};

			prod_c_hs {
				prod = <0x100 0xff0000 0x10000 0x1e0 0xf 0x7 0x1e4 0x30077f7f 0x3000007d>;
			};

			prod_c_sdr12 {
				prod = <0x100 0xff0000 0x10000 0x1e0 0xf 0x7 0x1e4 0x30077f7f 0x30007b7b>;
			};

			prod_c_sdr25 {
				prod = <0x100 0xff0000 0x10000 0x1e0 0xf 0x7 0x1e4 0x30077f7f 0x30007b7b>;
			};

			prod_c_sdr50 {
				prod = <0x100 0xff0000 0x10000 0x1c0 0xe000 0x8000 0x1e0 0xf 0x7 0x1e4 0x30077f7f 0x30007b7b>;
			};

			prod_c_sdr104 {
				prod = <0x100 0xff0000 0x10000 0x1c0 0xe000 0x4000 0x1e0 0xf 0x7 0x1e4 0x30077f7f 0x30007b7b>;
			};

			prod_c_ddr52 {
				prod = <0x100 0x1fff0000 0x0 0x1e0 0xf 0x7 0x1e4 0x30077f7f 0x30007b7b>;
			};

			prod {
				prod = <0x100 0x1fff000e 0x3090028 0x1c0 0x8001fc0 0x8000040 0x1c4 0x77 0x0 0x120 0x20001 0x1 0x128 0x43000000 0x0 0x1f0 0x80000 0x80000>;
			};
		};
	};

	sdhci@700b0200 {
		compatible = "nvidia,tegra210-sdhci";
		reg = <0x0 0x700b0200 0x0 0x200>;
		interrupts = <0x0 0xf 0x4>;
		aux-device-name = "sdhci-tegra.1";
		nvidia,runtime-pm-type = <0x1>;
		clocks = <0x21 0x9 0x21 0xf3 0x21 0xc1>;
		clock-names = "sdmmc", "pll_p", "sdmmc_legacy_tm";
		resets = <0x21 0x9>;
		reset-names = "sdhci";
		status = "disabled";
		tap-delay = <0x4>;
		trim-delay = <0x8>;
		mmc-ocr-mask = <0x0>;
		max-clk-limit = <0xc28cb00>;
		ddr-clk-limit = <0x2719c40>;
		bus-width = <0x4>;
		calib-3v3-offsets = <0x505>;
		calib-1v8-offsets = <0x505>;
		compad-vref-3v3 = <0x7>;
		compad-vref-1v8 = <0x7>;
		default-drive-type = <0x1>;
		nvidia,min-tap-delay = <0x6a>;
		nvidia,max-tap-delay = <0xb9>;
		pll_source = "pll_p";
		non-removable;
		cap-mmc-highspeed;
		cap-sd-highspeed;
		keep-power-in-suspend;
		ignore-pm-notify;
		nvidia,en-io-trim-volt;
		vqmmc-supply = <0x36>;
		vmmc-supply = <0x47>;
		uhs-mask = <0x8>;
		power-off-rail;
		force-non-removable-rescan;
		linux,phandle = <0x11f>;
		phandle = <0x11f>;

		prod-settings {
			#prod-cells = <0x3>;

			prod_c_ds {
				prod = <0x100 0xff0000 0x40000 0x1e0 0xf 0x7 0x1e4 0x30077f7f 0x30000505>;
			};

			prod_c_hs {
				prod = <0x100 0xff0000 0x40000 0x1e0 0xf 0x7 0x1e4 0x30077f7f 0x30000505>;
			};

			prod_c_sdr12 {
				prod = <0x100 0xff0000 0x40000 0x1e0 0xf 0x7 0x1e4 0x30077f7f 0x30000505>;
			};

			prod_c_sdr25 {
				prod = <0x100 0xff0000 0x40000 0x1e0 0xf 0x7 0x1e4 0x30077f7f 0x30000505>;
			};

			prod_c_sdr50 {
				prod = <0x100 0xff0000 0x40000 0x1c0 0xe000 0x8000 0x1e0 0xf 0x7 0x1e4 0x30077f7f 0x30000505>;
			};

			prod_c_sdr104 {
				prod = <0x100 0xff0000 0x40000 0x1c0 0xe000 0x4000 0x1e0 0xf 0x7 0x1e4 0x30077f7f 0x30000505>;
			};

			prod_c_ddr52 {
				prod = <0x100 0x1fff0000 0x40000 0x1e0 0xf 0x7 0x1e4 0x30077f7f 0x30000505>;
			};

			prod_c_hs200 {
				prod = <0x100 0xff0000 0x40000 0x1c0 0xe000 0x4000 0x1e0 0xf 0x7 0x1e4 0x30077f7f 0x30000505>;
			};

			prod_c_hs400 {
				prod = <0x100 0xff0000 0x40000 0x1c0 0xe000 0x4000 0x1e0 0xf 0x7 0x1e4 0x30077f7f 0x30000505>;
			};

			prod_c_hs533 {
				prod = <0x100 0xff0000 0x40000 0x1c0 0xe000 0x2000 0x1e0 0xf 0x7 0x1e4 0x30000505 0x30000505>;
			};

			prod {
				prod = <0x100 0x1fff000e 0x8090028 0x1c0 0x8001fc0 0x8000040 0x1c4 0x77 0x0 0x120 0x20001 0x1 0x128 0x43000000 0x0 0x1f0 0x80000 0x80000>;
			};
		};
	};

	sdhci@700b0000 {
		compatible = "nvidia,tegra210-sdhci";
		reg = <0x0 0x700b0000 0x0 0x200>;
		interrupts = <0x0 0xe 0x4>;
		aux-device-name = "sdhci-tegra.0";
		iommus = <0x2b 0x19>;
		nvidia,runtime-pm-type = <0x1>;
		clocks = <0x21 0xe 0x21 0xf3 0x21 0xc1>;
		clock-names = "sdmmc", "pll_p", "sdmmc_legacy_tm";
		resets = <0x21 0xe>;
		reset-names = "sdhci";
		status = "okay";
		tap-delay = <0x4>;
		trim-delay = <0x2>;
		max-clk-limit = <0xc28cb00>;
		ddr-clk-limit = <0x2dc6c00>;
		bus-width = <0x4>;
		mmc-ocr-mask = <0x3>;
		calib-3v3-offsets = <0x7d>;
		calib-1v8-offsets = <0x7b7b>;
		compad-vref-3v3 = <0x7>;
		compad-vref-1v8 = <0x7>;
		cd-gpios = <0x56 0xc9 0x0>;
		pll_source = "pll_p";
		cap-mmc-highspeed;
		cap-sd-highspeed;
		nvidia,en-io-trim-volt;
		nvidia,en-periodic-calib;
		keep-power-in-suspend;
		ignore-pm-notify;
		cd-inverted;
		wp-inverted;
		nvidia,min-tap-delay = <0x6a>;
		nvidia,max-tap-delay = <0xb9>;
		pwrdet-support;
		pinctrl-names = "sdmmc_schmitt_enable", "sdmmc_schmitt_disable", "sdmmc_clk_schmitt_enable", "sdmmc_clk_schmitt_disable", "sdmmc_drv_code", "sdmmc_default_drv_code", "sdmmc_e_33v_enable", "sdmmc_e_33v_disable";
		pinctrl-0 = <0x90>;
		pinctrl-1 = <0x91>;
		pinctrl-2 = <0x92>;
		pinctrl-3 = <0x93>;
		pinctrl-4 = <0x94>;
		pinctrl-5 = <0x95>;
		pinctrl-6 = <0x96>;
		pinctrl-7 = <0x97>;
		vqmmc-supply = <0x98>;
		vmmc-supply = <0x99>;
		default-drv-type = <0x1>;
		sd-uhs-sdr104;
		sd-uhs-sdr50;
		sd-uhs-sdr25;
		sd-uhs-sdr12;
		mmc-ddr-1_8v;
		mmc-hs200-1_8v;
		nvidia,cd-wakeup-capable;
		nvidia,update-pinctrl-settings;
		nvidia,pmc-wakeup = <0x37 0x0 0x23 0x0>;
		uhs-mask = <0xc>;
		no-sdio;
		no-mmc;
		disable-wp;
		linux,phandle = <0xb1>;
		phandle = <0xb1>;

		prod-settings {
			#prod-cells = <0x3>;

			prod_c_ds {
				prod = <0x100 0xff0000 0x40000 0x1e0 0xf 0x7 0x1e4 0x30077f7f 0x3000007d>;
			};

			prod_c_hs {
				prod = <0x100 0xff0000 0x40000 0x1e0 0xf 0x7 0x1e4 0x30077f7f 0x3000007d>;
			};

			prod_c_sdr12 {
				prod = <0x100 0xff0000 0x40000 0x1e0 0xf 0x7 0x1e4 0x30077f7f 0x30007b7b>;
			};

			prod_c_sdr25 {
				prod = <0x100 0xff0000 0x40000 0x1e0 0xf 0x7 0x1e4 0x30077f7f 0x30007b7b>;
			};

			prod_c_sdr50 {
				prod = <0x100 0xff0000 0x40000 0x1c0 0xe000 0x8000 0x1e0 0xf 0x7 0x1e4 0x30077f7f 0x30007b7b>;
			};

			prod_c_sdr104 {
				prod = <0x100 0xff0000 0x40000 0x1c0 0xe000 0x4000 0x1e0 0xf 0x7 0x1e4 0x30077f7f 0x30007b7b>;
			};

			prod_c_ddr52 {
				prod = <0x100 0x1fff0000 0x0 0x1e0 0xf 0x7 0x1e4 0x30077f7f 0x30007b7b>;
			};

			prod {
				prod = <0x100 0x1fff000e 0x2090028 0x1c0 0x8001fc0 0x8000040 0x1c4 0x77 0x0 0x120 0x20001 0x1 0x128 0x43000000 0x0 0x1f0 0x80000 0x80000>;
			};
		};
	};

	efuse@7000f800 {
		compatible = "nvidia,tegra210-efuse";
		reg = <0x0 0x7000f800 0x0 0x400>;
		clocks = <0x21 0xe6>;
		clock-names = "fuse";
		nvidia,clock-always-on;
		status = "okay";
		vpp_fuse-supply = <0x36>;

		efuse-burn {
			compatible = "nvidia,tegra210-efuse-burn";
			clocks = <0x21 0xe9>;
			clock-names = "clk_m";
			status = "okay";
		};
	};

	kfuse@7000fc00 {
		compatible = "nvidia,tegra210-kfuse";
		reg = <0x0 0x7000fc00 0x0 0x400>;
		clocks = <0x21 0x28>;
		clock-names = "kfuse";
		status = "okay";
	};

	pmc-iopower {
		compatible = "nvidia,tegra210-pmc-iopower";
		pad-controllers = <0x37 0x32 0x37 0x2b 0x37 0x0 0x37 0x2 0x37 0x22 0x37 0x23 0x37 0x26 0x37 0x33 0x37 0x1 0x37 0xa 0x37 0xc 0x37 0x15 0x37 0x29 0x37 0x2a 0x37 0xf 0x37 0x10 0x37 0x11 0x37 0x12 0x37 0x17>;
		pad-names = "sys", "uart", "audio", "cam", "pex-ctrl", "sdmmc1", "sdmmc3", "hv", "audio-hv", "debug", "dmic", "gpio", "spi", "spi-hv", "dsia", "dsib", "dsic", "dsid", "hdmi";
		status = "okay";
		iopower-sys-supply = <0x36>;
		iopower-uart-supply = <0x36>;
		iopower-audio-supply = <0x36>;
		iopower-cam-supply = <0x36>;
		iopower-pex-ctrl-supply = <0x36>;
		iopower-sdmmc1-supply = <0x98>;
		iopower-sdmmc3-supply = <0x36>;
		iopower-sdmmc4-supply = <0x36>;
		iopower-audio-hv-supply = <0x36>;
		iopower-debug-supply = <0x36>;
		iopower-dmic-supply = <0x36>;
		iopower-gpio-supply = <0x36>;
		iopower-spi-supply = <0x36>;
		iopower-spi-hv-supply = <0x36>;
		iopower-sdmmc2-supply = <0x36>;
		iopower-dp-supply = <0x36>;
	};

	dtv@7000c300 {
		compatible = "nvidia,tegra210-dtv";
		reg = <0x0 0x7000c300 0x0 0x100>;
		dmas = <0x4c 0xb>;
		dma-names = "rx";
		status = "disabled";
	};

	xudc@700d0000 {
		compatible = "nvidia,tegra210-xudc";
		reg = <0x0 0x700d0000 0x0 0x8000 0x0 0x700d8000 0x0 0x1000 0x0 0x700d9000 0x0 0x1000>;
		interrupts = <0x0 0x2c 0x4>;
		clocks = <0x21 0x121 0x21 0x9c 0x21 0x13e 0x21 0x122 0x21 0x11e>;
		nvidia,xusb-padctl = <0x44>;
		iommus = <0x2b 0x15>;
		status = "okay";
		charger-detector = <0x9a>;
		hvdd_usb-supply = <0x47>;
		avdd_pll_utmip-supply = <0x36>;
		avddio_usb-supply = <0x3f>;
		avddio_pll_uerefe-supply = <0x3e>;
		extcon-cables = <0x48 0x0>;
		extcon-cable-names = "vbus";
		phys = <0x45>;
		phy-names = "usb2";
		#extcon-cells = <0x1>;
	};

	memory-controller@70019000 {
		compatible = "nvidia,tegra210-mc";
		reg = <0x0 0x70019000 0x0 0x1000>;
		clocks = <0x21 0x20 0x21 0x39>;
		clock-names = "mc", "emc";
		interrupts = <0x0 0x4d 0x4>;
		#iommu-cells = <0x1>;
		#reset-cells = <0x1>;
		status = "okay";
		linux,phandle = <0x120>;
		phandle = <0x120>;
	};

	pwm@70110000 {
		compatible = "nvidia,tegra210-dfll-pwm";
		reg = <0x0 0x70110000 0x0 0x400>;
		clocks = <0x21 0x128>;
		clock-names = "ref";
		pinctrl-names = "dvfs_pwm_enable", "dvfs_pwm_disable";
		#pwm-cells = <0x2>;
		status = "okay";
		pinctrl-0 = <0x9b>;
		pinctrl-1 = <0x9c>;
		pwm-regulator = <0x9d>;
		linux,phandle = <0xde>;
		phandle = <0xde>;
	};

	clock@70110000 {
		compatible = "nvidia,tegra210-dfll";
		reg = <0x0 0x70110000 0x0 0x100 0x0 0x70110000 0x0 0x100 0x0 0x70110100 0x0 0x100 0x0 0x70110200 0x0 0x100>;
		interrupts = <0x0 0x3e 0x4>;
		clocks = <0x21 0x129 0x21 0x128 0x21 0x2f>;
		clock-names = "soc", "ref", "i2c";
		resets = <0x21 0xe0>;
		reset-names = "dvco";
		#clock-cells = <0x0>;
		clock-output-names = "dfllCPU_out";
		out-clock-name = "dfll_cpu";
		status = "okay";
		vdd-cpu-supply = <0x9d>;
		nvidia,dfll-max-freq-khz = <0x169158>;
		nvidia,pwm-to-pmic;
		nvidia,init-uv = <0xf4240>;
		nvidia,sample-rate = <0x61a8>;
		nvidia,droop-ctrl = <0xf00>;
		nvidia,force-mode = <0x1>;
		nvidia,cf = <0x6>;
		nvidia,ci = <0x0>;
		nvidia,cg = <0x2>;
		nvidia,idle-override;
		nvidia,one-shot-calibrate;
		nvidia,pwm-period = <0x9c4>;
		pinctrl-names = "dvfs_pwm_enable", "dvfs_pwm_disable";
		pinctrl-0 = <0x9b>;
		pinctrl-1 = <0x9c>;
		nvidia,align-offset-uv = <0xacda0>;
		nvidia,align-step-uv = <0x4b00>;
		linux,phandle = <0x26>;
		phandle = <0x26>;
	};

	soctherm@0x700E2000 {
		compatible = "nvidia,tegra-soctherm", "nvidia,tegra210-soctherm";
		reg = <0x0 0x700e2000 0x0 0x600 0x0 0x60006000 0x0 0x400 0x0 0x70040000 0x0 0x200>;
		reg-names = "soctherm-reg", "car-reg", "ccroc-reg";
		interrupts = <0x0 0x30 0x4 0x0 0x33 0x4>;
		clocks = <0x21 0x64 0x21 0x4e>;
		clock-names = "tsensor", "soctherm";
		resets = <0x21 0x4e>;
		reset-names = "soctherm";
		#thermal-sensor-cells = <0x1>;
		status = "okay";
		interrupt-controller;
		#interrupt-cells = <0x2>;
		soctherm-clock-frequency = <0x30a32c0>;
		tsensor-clock-frequency = <0x61a80>;
		sensor-params-tall = <0x3fac>;
		sensor-params-tiddq = <0x1>;
		sensor-params-ten-count = <0x1>;
		sensor-params-tsample = <0x78>;
		sensor-params-pdiv = <0x8>;
		sensor-params-tsamp-ate = <0x1e0>;
		sensor-params-pdiv-ate = <0x8>;
		hw-pllx-offsets = <0x0 0x3e8 0x1b58 0x2 0x7d0 0xfa0>;
		nvidia,thermtrips = <0x0 0x19064 0x2 0x19258>;
		linux,phandle = <0x11>;
		phandle = <0x11>;

		throttle-cfgs {

			heavy {
				nvidia,priority = <0x64>;
				nvidia,cpu-throt-percent = <0x55>;
				nvidia,gpu-throt-level = <0x3>;
				#cooling-cells = <0x2>;
				linux,phandle = <0x13>;
				phandle = <0x13>;
			};

			oc1 {
				nvidia,priority = <0x0>;
				nvidia,polarity-active-low = <0x0>;
				nvidia,count-threshold = <0x0>;
				nvidia,alarm-filter = <0x0>;
				nvidia,alarm-period = <0x0>;
				nvidia,cpu-throt-percent = <0x0>;
				nvidia,gpu-throt-level = <0x0>;
				linux,phandle = <0xc7>;
				phandle = <0xc7>;
			};

			oc3 {
				nvidia,priority = <0x28>;
				nvidia,polarity-active-low = <0x1>;
				nvidia,count-threshold = <0xf>;
				nvidia,alarm-filter = <0x4dd1e0>;
				nvidia,alarm-period = <0x0>;
				nvidia,cpu-throt-percent = <0x4b>;
				nvidia,gpu-throt-level = <0x2>;
				linux,phandle = <0x121>;
				phandle = <0x121>;
			};
		};

		fuse_war@fuse_rev_0_1 {
			device_type = "fuse_war";
			match_fuse_rev = <0x0 0x1>;
			cpu0 = <0x109cbc 0x61120>;
			cpu1 = <0x107160 0x106030>;
			cpu2 = <0x10dba0 0xfff695d8>;
			cpu3 = <0x10b0a8 0xfff23fb0>;
			mem0 = <0x108f74 0xfffe7fa0>;
			mem1 = <0x10dba0 0xfffe7d48>;
			gpu = <0x109168 0xffef40e4>;
			pllx = <0x107610 0xfff268b4>;
		};

		fuse_war@fuse_rev_2 {
			device_type = "fuse_war";
			match_fuse_rev = <0x2>;
			cpu0 = <0x108e48 0x3180a8>;
			cpu1 = <0x112f38 0xfffef854>;
			cpu2 = <0x10c2a0 0x22595c>;
			cpu3 = <0x10e820 0x9324c>;
			mem0 = <0x105090 0x362acc>;
			mem1 = <0x11e8c4 0xffa06cd0>;
			gpu = <0x10647c 0x29bb34>;
			pllx = <0xfdd54 0x68342c>;
		};

		throttle@critical {
			device_type = "throttlectl";
			cdev-type = "tegra-shutdown";
			cooling-min-state = <0x0>;
			cooling-max-state = <0x3>;
			#cooling-cells = <0x2>;
		};

		throttle@heavy {
			device_type = "throttlectl";
			cdev-type = "tegra-heavy";
			cooling-min-state = <0x0>;
			cooling-max-state = <0x3>;
			#cooling-cells = <0x2>;
			priority = <0x64>;
			throttle_dev = <0x9e 0x9f>;
		};

		throttle_dev@cpu_high {
			depth = <0x55>;
			linux,phandle = <0x9e>;
			phandle = <0x9e>;
		};

		throttle_dev@gpu_high {
			level = "heavy_throttling";
			linux,phandle = <0x9f>;
			phandle = <0x9f>;
		};
	};

	tegra-aotag {
		compatible = "nvidia,tegra21x-aotag";
		parent-block = <0x37>;
		status = "okay";
		sensor-params-tall = <0x4c>;
		sensor-params-tiddq = <0x1>;
		sensor-params-ten-count = <0x10>;
		sensor-params-tsample = <0x9>;
		sensor-params-pdiv = <0x8>;
		sensor-params-tsamp-ate = <0x27>;
		sensor-params-pdiv-ate = <0x8>;
		#thermal-sensor-cells = <0x0>;
		sensor-name = "aotag0";
		sensor-id = <0x0>;
		advertised-sensor-id = <0x9>;
		sensor-nominal-temp-cp = <0x19>;
		sensor-nominal-temp-ft = <0x69>;
		sensor-compensation-a = <0x2988>;
		sensor-compensation-b = <0xfffef85e>;
		linux,phandle = <0x2>;
		phandle = <0x2>;
	};

	tegra_cec {
		compatible = "nvidia,tegra210-cec";
		reg = <0x0 0x70015000 0x0 0x1000>;
		interrupts = <0x0 0x3 0x4>;
		clocks = <0x21 0x88>;
		clock-names = "cec";
		status = "okay";
	};

	watchdog@60005100 {
		compatible = "nvidia,tegra-wdt-t21x";
		reg = <0x0 0x60005100 0x0 0x20 0x0 0x60005088 0x0 0x8>;
		interrupts = <0x0 0x7b 0x4>;
		nvidia,expiry-count = <0x4>;
		nvidia,timer-index = <0x7>;
		nvidia,enable-on-init;
		status = "disabled";
		dt-override-status-odm-data = <0x10000 0x10000>;
		timeout-sec = <0x78>;
		linux,phandle = <0xb5>;
		phandle = <0xb5>;
	};

	tegra_fiq_debugger {
		compatible = "nvidia,fiq-debugger";
		use-console-port;
		interrupts = <0x0 0x7b 0x4>;
	};

	ptm {
		compatible = "nvidia,ptm";
		reg = <0x0 0x72010000 0x0 0x1000 0x0 0x72030000 0x0 0x1000 0x0 0x72040000 0x0 0x1000 0x0 0x72050000 0x0 0x1000 0x0 0x72060000 0x0 0x1000 0x0 0x73010000 0x0 0x1000 0x0 0x73440000 0x0 0x1000 0x0 0x73540000 0x0 0x1000 0x0 0x73640000 0x0 0x1000 0x0 0x73740000 0x0 0x1000 0x0 0x72820000 0x0 0x1000 0x0 0x72a1c000 0x0 0x1000>;
		status = "okay";
	};

	mselect {
		compatible = "nvidia,tegra-mselect";
		interrupts = <0x0 0xaf 0x4>;
		reg = <0x0 0x50060000 0x0 0x1000>;
		status = "disabled";
	};

	cpuidle {
		compatible = "nvidia,tegra210-cpuidle";
		cc4-no-retention;
	};

	apbmisc@70000800 {
		compatible = "nvidia,tegra210-apbmisc", "nvidia,tegra20-apbmisc";
		reg = <0x0 0x70000800 0x0 0x64 0x0 0x70000008 0x0 0x4>;
	};

	nvdumper {
		compatible = "nvidia,tegra210-nvdumper";
		status = "disabled";
	};

	tegra-pmc-blink-pwm {
		compatible = "nvidia,tegra210-pmc-blink-pwm";
		status = "disabled";
	};

	nvpmodel {
		compatible = "nvidia,nvpmodel";
		status = "okay";
	};

	extcon {
		compatible = "simple-bus";
		device_type = "external-connection";
		#address-cells = <0x1>;
		#size-cells = <0x0>;

		disp-state {
			compatible = "extcon-disp-state";
			#extcon-cells = <0x1>;
		};

		extcon@0 {
			compatible = "extcon-gpio";
			reg = <0x0>;
			extcon-gpio,name = "ID";
			gpio = <0x1e 0x0 0x0>;
			extcon-gpio,connection-state-low;
			extcon-gpio,cable-name = "USB-Host";
			#extcon-cells = <0x1>;
			status = "disabled";
			linux,phandle = <0x122>;
			phandle = <0x122>;
		};

		extcon@1 {
			compatible = "extcon-gpio-states";
			reg = <0x1>;
			extcon-gpio,name = "VBUS";
			extcon-gpio,cable-states = <0x0 0x1 0x1 0x0>;
			gpios = <0x56 0xe4 0x0>;
			extcon-gpio,out-cable-names = <0x1 0x2 0x0>;
			wakeup-source;
			#extcon-cells = <0x1>;
			nvidia,pmc-wakeup = <0x37 0x0 0x36 0x0>;
			linux,phandle = <0x48>;
			phandle = <0x48>;
		};
	};

	bthrot_cdev {=0A=
		compatible =3D "nvidia,tegra-balanced-throttle";=0A=
		clocks =3D <0x21 0x126 0x21 0x1ec 0x21 0x199 0x21 0x1a2 0x21 0x1b9 =
0x21 0x1d2>;=0A=
		clock-names =3D "cclk_g", "gpu", "cap.throttle.c2bus", =
"cap.throttle.c3bus", "cap.throttle.sclk", "emc";=0A=
=0A=
		skin_balanced {=0A=
			cdev-type =3D "skin-balanced";=0A=
			num_states =3D <0x42>;=0A=
			cooling-min-state =3D <0x0>;=0A=
			cooling-max-state =3D <0x42>;=0A=
			#cooling-cells =3D <0x2>;=0A=
			status =3D "okay";=0A=
			throttle_table =3D <0x16358c 0xd4cb0 0x75300 0x7d000 0x5dc00 =
0xffffffff 0x15e9bc 0xd1c9f 0x75300 0x7d000 0x5dc00 0xffffffff 0x159ded =
0xcec8f 0x75300 0x7d000 0x5dc00 0xffffffff 0x15521d 0xcbc7e 0x75300 =
0x7d000 0x5dc00 0xffffffff 0x15064d 0xc8c6e 0x75300 0x7d000 0x5dc00 =
0xffffffff 0x14ba7e 0xc5c5d 0x75300 0x7d000 0x5dc00 0xffffffff 0x146eae =
0xc2c4c 0x75300 0x7d000 0x5dc00 0xffffffff 0x1422de 0xbfc3c 0x75300 =
0x7d000 0x5dc00 0xffffffff 0x13d70e 0xbcc2b 0x75300 0x7d000 0x5dc00 =
0xffffffff 0x138b3f 0xb9c1a 0x75300 0x7d000 0x5dc00 0xffffffff 0x133f6f =
0xb6c0a 0x75300 0x7d000 0x5dc00 0xffffffff 0x12f39f 0xb3bf9 0x75300 =
0x7d000 0x5dc00 0xffffffff 0x12a7d0 0xb0be9 0x75300 0x7d000 0x5dc00 =
0xffffffff 0x125c00 0xadbd8 0x75300 0x7d000 0x5dc00 0xffffffff 0x121030 =
0xaabc7 0x75300 0x7d000 0x5dc00 0xffffffff 0x11c461 0xa7bb7 0x75300 =
0x7d000 0x5dc00 0xffffffff 0x117891 0xa4ba6 0x75300 0x7d000 0x5dc00 =
0xffffffff 0x112cc1 0xa1b96 0x75300 0x7d000 0x5dc00 0xffffffff 0x10e0f2 =
0x9eb85 0x75300 0x7d000 0x5dc00 0xffffffff 0x109522 0x9bb74 0x75300 =
0x7d000 0x5dc00 0xffffffff 0x104952 0x98b64 0x75300 0x7d000 0x5dc00 =
0xffffffff 0xffd82 0x95b53 0x75300 0x7d000 0x5dc00 0xffffffff 0xfb1b3 =
0x92b42 0x75300 0x7d000 0x5dc00 0xffffffff 0xf65e3 0x8fb32 0x75300 =
0x7d000 0x5dc00 0xffffffff 0xf1a13 0x8cb21 0x75300 0x7d000 0x5dc00 =
0xffffffff 0xece44 0x89b11 0x75300 0x7d000 0x5dc00 0xffffffff 0xe8274 =
0x86b00 0x75300 0x7d000 0x5dc00 0xffffffff 0xe36a4 0x83aef 0x75300 =
0x7d000 0x5dc00 0xffffffff 0xdead5 0x80adf 0x75300 0x7d000 0x5dc00 =
0xffffffff 0xd9f05 0x7dace 0x75300 0x7d000 0x5dc00 0xffffffff 0xd5335 =
0x7aabe 0x75300 0x7d000 0x5dc00 0xffffffff 0xd0766 0x77aad 0x75300 =
0x7d000 0x5dc00 0xffffffff 0xcbb96 0x74a9c 0x75300 0x7d000 0x5dc00 =
0xffffffff 0xc6fc6 0x71a8c 0x75300 0x7d000 0x5dc00 0xffffffff 0xc23f6 =
0x6ea7b 0x75300 0x7d000 0x5dc00 0xffffffff 0xbd827 0x6ba6a 0x75300 =
0x76c00 0x5dc00 0xffffffff 0xb8c57 0x68a5a 0x75300 0x76c00 0x5dc00 =
0xffffffff 0xb4087 0x65a49 0x75300 0x76c00 0x5dc00 0xffffffff 0xaf4b8 =
0x62a39 0x75300 0x76c00 0x5dc00 0xffffffff 0xaa8e8 0x5fa28 0x75300 =
0x76c00 0x5dc00 0xffffffff 0xa5d18 0x5ca17 0x75300 0x76c00 0x5dc00 =
0xffffffff 0xa1149 0x59a07 0x75300 0x76c00 0x5dc00 0xffffffff 0x9c579 =
0x569f6 0x75300 0x76c00 0x5dc00 0xffffffff 0x979a9 0x539e6 0x75300 =
0x76c00 0x5dc00 0xffffffff 0x92dda 0x509d5 0x75300 0x73a00 0x5dc00 =
0xffffffff 0x8e20a 0x4d9c4 0x75300 0x73a00 0x5dc00 0xffffffff 0x8963a =
0x4a9b4 0x75300 0x73a00 0x5dc00 0xffffffff 0x84a6a 0x479a3 0x75300 =
0x73a00 0x5dc00 0xffffffff 0x7fe9b 0x44992 0x75300 0x73a00 0x5dc00 =
0xffffffff 0x7b2cb 0x41982 0x75300 0x73a00 0x5dc00 0xffffffff 0x766fb =
0x3e971 0x75300 0x73a00 0x5dc00 0xffffffff 0x71b2c 0x3b961 0x75300 =
0x6a400 0x5dc00 0xffffffff 0x6cf5c 0x38950 0x75300 0x6a400 0x5dc00 =
0xffffffff 0x6838c 0x3593f 0x75300 0x6a400 0x5dc00 0xffffffff 0x637bd =
0x3292f 0x75300 0x6a400 0x5dc00 0xffffffff 0x5ebed 0x2f91e 0x75300 =
0x6a400 0x5dc00 0xffffffff 0x5a01d 0x2c90e 0x75300 0x6a400 0x5dc00 =
0xffffffff 0x5544e 0x298fd 0x75300 0x60e00 0x5dc00 0xffffffff 0x5087e =
0x268ec 0x75300 0x60e00 0x5dc00 0xffffffff 0x4bcae 0x238dc 0x75300 =
0x60e00 0x5dc00 0xffffffff 0x470de 0x208cb 0x75300 0x60e00 0x5dc00 =
0xffffffff 0x4250f 0x1d8ba 0x75300 0x60e00 0x5dc00 0xffffffff 0x3d93f =
0x1a8aa 0x75300 0x60e00 0x5dc00 0xffffffff 0x38d6f 0x17899 0x75300 =
0x60e00 0x5dc00 0xffffffff 0x341a0 0x14889 0x75300 0x60e00 0x5dc00 =
0xffffffff 0x2f5d0 0x11878 0x75300 0x60e00 0x5dc00 0xffffffff>;=0A=
		};=0A=
=0A=
		gpu_balanced {=0A=
			cdev-type =3D "gpu-balanced";=0A=
			num_states =3D <0x42>;=0A=
			cooling-min-state =3D <0x0>;=0A=
			cooling-max-state =3D <0x42>;=0A=
			#cooling-cells =3D <0x2>;=0A=
			status =3D "okay";=0A=
			throttle_table =3D <0x16358c 0xceb08 0x75300 0x7d000 0x5dc00 =
0xffffffff 0x15e9bc 0xcafb0 0x75300 0x7d000 0x5dc00 0xffffffff 0x159ded =
0xc7458 0x75300 0x7d000 0x5dc00 0xffffffff 0x15521d 0xc3900 0x75300 =
0x7d000 0x5dc00 0xffffffff 0x15064d 0xbfda7 0x75300 0x7d000 0x5dc00 =
0xffffffff 0x14ba7e 0xbc24f 0x75300 0x7d000 0x5dc00 0xffffffff 0x146eae =
0xb86f7 0x75300 0x7d000 0x5dc00 0xffffffff 0x1422de 0xb4b9f 0x75300 =
0x7d000 0x5dc00 0xffffffff 0x13d70e 0xb1047 0x75300 0x7d000 0x5dc00 =
0xffffffff 0x138b3f 0xad4ef 0x75300 0x7d000 0x5dc00 0xffffffff 0x133f6f =
0xa9996 0x75300 0x7d000 0x5dc00 0xffffffff 0x12f39f 0xa5e3e 0x75300 =
0x7d000 0x5dc00 0xffffffff 0x12a7d0 0xa22e6 0x75300 0x7d000 0x5dc00 =
0xffffffff 0x125c00 0x9e78e 0x75300 0x7d000 0x5dc00 0xffffffff 0x121030 =
0x9ac36 0x75300 0x7d000 0x5dc00 0xffffffff 0x11c461 0x970de 0x75300 =
0x7d000 0x5dc00 0xffffffff 0x117891 0x93585 0x75300 0x7d000 0x5dc00 =
0xffffffff 0x112cc1 0x8fa2d 0x75300 0x7d000 0x5dc00 0xffffffff 0x10e0f2 =
0x8bed5 0x75300 0x7d000 0x5dc00 0xffffffff 0x109522 0x8837d 0x75300 =
0x7d000 0x5dc00 0xffffffff 0x104952 0x84825 0x75300 0x7d000 0x5dc00 =
0xffffffff 0xffd82 0x80ccd 0x75300 0x7d000 0x5dc00 0xffffffff 0xfb1b3 =
0x7d175 0x75300 0x7d000 0x5dc00 0xffffffff 0xf65e3 0x7961c 0x75300 =
0x7d000 0x5dc00 0xffffffff 0xf1a13 0x75ac4 0x75300 0x7d000 0x5dc00 =
0xffffffff 0xece44 0x71f6c 0x75300 0x7d000 0x5dc00 0xffffffff 0xe8274 =
0x6e414 0x75300 0x7d000 0x5dc00 0xffffffff 0xe36a4 0x6a8bc 0x75300 =
0x7d000 0x5dc00 0xffffffff 0xdead5 0x66d64 0x75300 0x7d000 0x5dc00 =
0xffffffff 0xd9f05 0x6320b 0x75300 0x7d000 0x5dc00 0xffffffff 0xd5335 =
0x5f6b3 0x75300 0x7d000 0x5dc00 0xffffffff 0xd0766 0x5bb5b 0x75300 =
0x7d000 0x5dc00 0xffffffff 0xcbb96 0x58003 0x75300 0x7d000 0x5dc00 =
0xffffffff 0xc6fc6 0x544ab 0x75300 0x7d000 0x5dc00 0xffffffff 0xc23f6 =
0x50953 0x75300 0x7d000 0x5dc00 0xffffffff 0xbd827 0x4cdfb 0x75300 =
0x76c00 0x5dc00 0xffffffff 0xb8c57 0x492a2 0x75300 0x76c00 0x5dc00 =
0xffffffff 0xb4087 0x4574a 0x75300 0x76c00 0x5dc00 0xffffffff 0xaf4b8 =
0x41bf2 0x75300 0x76c00 0x5dc00 0xffffffff 0xaa8e8 0x3e09a 0x75300 =
0x76c00 0x5dc00 0xffffffff 0xa5d18 0x3a542 0x75300 0x76c00 0x5dc00 =
0xffffffff 0xa1149 0x369ea 0x75300 0x76c00 0x5dc00 0xffffffff 0x9c579 =
0x32e91 0x75300 0x76c00 0x5dc00 0xffffffff 0x979a9 0x2f339 0x75300 =
0x76c00 0x5dc00 0xffffffff 0x92dda 0x2b7e1 0x75300 0x73a00 0x5dc00 =
0xffffffff 0x8e20a 0x27c89 0x75300 0x73a00 0x5dc00 0xffffffff 0x8963a =
0x24131 0x75300 0x73a00 0x5dc00 0xffffffff 0x84a6a 0x205d9 0x75300 =
0x73a00 0x5dc00 0xffffffff 0x7fe9b 0x1ca80 0x75300 0x73a00 0x5dc00 =
0xffffffff 0x7b2cb 0x18f28 0x75300 0x73a00 0x5dc00 0xffffffff 0x766fb =
0x153d0 0x75300 0x73a00 0x5dc00 0xffffffff 0x71b2c 0xf168 0x75300 =
0x6a400 0x5dc00 0xffffffff>;=0A=
			linux,phandle =3D <0x1b>;=0A=
			phandle =3D <0x1b>;=0A=
		};=0A=
=0A=
		cpu_balanced {=0A=
			cdev-type =3D "cpu-balanced";=0A=
			num_states =3D <0x42>;=0A=
			cooling-min-state =3D <0x0>;=0A=
			cooling-max-state =3D <0x42>;=0A=
			#cooling-cells =3D <0x2>;=0A=
			status =3D "okay";=0A=
			throttle_table = <0x16358c 0xffffffff 0x75300 0x7d000 0x5dc00
0xffffffff 0x15e9bc 0xffffffff 0x75300 0x7d000 0x5dc00 0xffffffff
0x159ded 0xffffffff 0x75300 0x7d000 0x5dc00 0xffffffff 0x15521d
0xffffffff 0x75300 0x7d000 0x5dc00 0xffffffff 0x15064d 0xffffffff
0x75300 0x7d000 0x5dc00 0xffffffff 0x14ba7e 0xffffffff 0x75300 0x7d000
0x5dc00 0xffffffff 0x146eae 0xffffffff 0x75300 0x7d000 0x5dc00
0xffffffff 0x1422de 0xffffffff 0x75300 0x7d000 0x5dc00 0xffffffff
0x13d70e 0xffffffff 0x75300 0x7d000 0x5dc00 0xffffffff 0x138b3f
0xffffffff 0x75300 0x7d000 0x5dc00 0xffffffff 0x133f6f 0xe1000 0x75300
0x7d000 0x5dc00 0xffffffff 0x12f39f 0xdd3a5 0x75300 0x7d000 0x5dc00
0xffffffff 0x12a7d0 0xd974a 0x75300 0x7d000 0x5dc00 0xffffffff 0x125c00
0xd5aef 0x75300 0x7d000 0x5dc00 0xffffffff 0x121030 0xd1e94 0x75300
0x7d000 0x5dc00 0xffffffff 0x11c461 0xce239 0x75300 0x7d000 0x5dc00
0xffffffff 0x117891 0xca5df 0x75300 0x7d000 0x5dc00 0xffffffff 0x112cc1
0xc6984 0x75300 0x7d000 0x5dc00 0xffffffff 0x10e0f2 0xc2d29 0x75300
0x7d000 0x5dc00 0xffffffff 0x109522 0xbf0ce 0x75300 0x7d000 0x5dc00
0xffffffff 0x104952 0xbb473 0x75300 0x7d000 0x5dc00 0xffffffff 0xffd82
0xb7818 0x75300 0x7d000 0x5dc00 0xffffffff 0xfb1b3 0xb3bbd 0x75300
0x7d000 0x5dc00 0xffffffff 0xf65e3 0xaff62 0x75300 0x7d000 0x5dc00
0xffffffff 0xf1a13 0xac307 0x75300 0x7d000 0x5dc00 0xffffffff 0xece44
0xa86ac 0x75300 0x7d000 0x5dc00 0xffffffff 0xe8274 0xa4a51 0x75300
0x7d000 0x5dc00 0xffffffff 0xe36a4 0xa0df7 0x75300 0x7d000 0x5dc00
0xffffffff 0xdead5 0x9d19c 0x75300 0x7d000 0x5dc00 0xffffffff 0xd9f05
0x99541 0x75300 0x7d000 0x5dc00 0xffffffff 0xd5335 0x958e6 0x75300
0x7d000 0x5dc00 0xffffffff 0xd0766 0x91c8b 0x75300 0x7d000 0x5dc00
0xffffffff 0xcbb96 0x8e030 0x75300 0x7d000 0x5dc00 0xffffffff 0xc6fc6
0x8a3d5 0x75300 0x7d000 0x5dc00 0xffffffff 0xc23f6 0x8677a 0x75300
0x7d000 0x5dc00 0xffffffff 0xbd827 0x82b1f 0x75300 0x76c00 0x5dc00
0xffffffff 0xb8c57 0x7eec4 0x75300 0x76c00 0x5dc00 0xffffffff 0xb4087
0x7b269 0x75300 0x76c00 0x5dc00 0xffffffff 0xaf4b8 0x7760f 0x75300
0x76c00 0x5dc00 0xffffffff 0xaa8e8 0x739b4 0x75300 0x76c00 0x5dc00
0xffffffff 0xa5d18 0x6fd59 0x75300 0x76c00 0x5dc00 0xffffffff 0xa1149
0x6c0fe 0x75300 0x76c00 0x5dc00 0xffffffff 0x9c579 0x684a3 0x75300
0x76c00 0x5dc00 0xffffffff 0x979a9 0x64848 0x75300 0x76c00 0x5dc00
0xffffffff 0x92dda 0x60bed 0x75300 0x73a00 0x5dc00 0xffffffff 0x8e20a
0x5cf92 0x75300 0x73a00 0x5dc00 0xffffffff 0x8963a 0x59337 0x75300
0x73a00 0x5dc00 0xffffffff 0x84a6a 0x556dc 0x75300 0x73a00 0x5dc00
0xffffffff 0x7fe9b 0x51a81 0x75300 0x73a00 0x5dc00 0xffffffff 0x7b2cb
0x4de27 0x75300 0x73a00 0x5dc00 0xffffffff 0x766fb 0x4a1cc 0x75300
0x73a00 0x5dc00 0xffffffff 0x71b2c 0x46571 0x75300 0x6a400 0x5dc00
0xffffffff 0x6cf5c 0x42916 0x75300 0x6a400 0x5dc00 0xffffffff 0x6838c
0x3ecbb 0x75300 0x6a400 0x5dc00 0xffffffff 0x637bd 0x3b060 0x75300
0x6a400 0x5dc00 0xffffffff 0x5ebed 0x37405 0x75300 0x6a400 0x5dc00
0xffffffff 0x5a01d 0x337aa 0x75300 0x6a400 0x5dc00 0xffffffff 0x5544e
0x2fb4f 0x75300 0x60e00 0x5dc00 0xffffffff 0x5087e 0x2bef4 0x75300
0x60e00 0x5dc00 0xffffffff 0x4bcae 0x28299 0x75300 0x60e00 0x5dc00
0xffffffff 0x470de 0x2463f 0x75300 0x60e00 0x5dc00 0xffffffff 0x4250f
0x209e4 0x75300 0x60e00 0x5dc00 0xffffffff 0x3d93f 0x1cd89 0x75300
0x60e00 0x5dc00 0xffffffff 0x38d6f 0x1912e 0x75300 0x60e00 0x5dc00
0xffffffff 0x341a0 0x154d3 0x75300 0x60e00 0x5dc00 0xffffffff 0x2f5d0
0x11878 0x75300 0x60e00 0x5dc00 0xffffffff>;
			linux,phandle = <0x15>;
			phandle = <0x15>;
		};

		emergency_balanced {
			cdev-type = "emergency-balanced";
			num_states = <0x1>;
			cooling-min-state = <0x0>;
			cooling-max-state = <0x1>;
			#cooling-cells = <0x2>;
			status = "okay";
			throttle_table = <0x111ed0 0x5f758 0x46500 0x668a0 0x3d860 0x60ae0>;
			linux,phandle = <0x20>;
			phandle = <0x20>;
		};
	};

	agic-controller {
		status = "okay";
	};

	adma@702e2000 {
		status = "okay";
	};

	ahub {
		status = "disabled";

		admaif@0x702d0000 {
			status = "disabled";
		};

		sfc@702d2000 {
			status = "disabled";
		};

		sfc@702d2200 {
			status = "disabled";
		};

		sfc@702d2400 {
			status = "disabled";
		};

		sfc@702d2600 {
			status = "disabled";
		};

		spkprot@702d8c00 {
			status = "disabled";
		};

		amixer@702dbb00 {
			status = "disabled";
		};

		i2s@702d1000 {
			status = "disabled";
		};

		i2s@702d1100 {
			status = "disabled";
		};

		i2s@702d1200 {
			status = "disabled";
		};

		i2s@702d1300 {
			status = "disabled";
		};

		i2s@702d1400 {
			status = "disabled";
		};

		amx@702d3000 {
			status = "disabled";
		};

		amx@702d3100 {
			status = "disabled";
		};

		adx@702d3800 {
			status = "disabled";
		};

		adx@702d3900 {
			status = "disabled";
		};

		dmic@702d4000 {
			status = "disabled";
		};

		dmic@702d4100 {
			status = "disabled";
		};

		dmic@702d4200 {
			status = "disabled";
		};

		afc@702d7000 {
			status = "disabled";
		};

		afc@702d7100 {
			status = "disabled";
		};

		afc@702d7200 {
			status = "disabled";
		};

		afc@702d7300 {
			status = "disabled";
		};

		afc@702d7400 {
			status = "disabled";
		};

		afc@702d7500 {
			status = "disabled";
		};

		mvc@702da000 {
			status = "disabled";
		};

		mvc@702da200 {
			status = "disabled";
		};

		iqc@702de000 {
			status = "disabled";
		};

		iqc@702de200 {
			status = "disabled";
		};

		ope@702d8000 {
			status = "disabled";
		};

		ope@702d8400 {
			status = "disabled";
		};
	};

	adsp_audio {
		status = "disabled";
	};

	sata@70020000 {
		status = "disabled";
		hvdd_sata-supply = <0x36>;
		hvdd_pex_pll_e-supply = <0x36>;
		l0_hvddio_sata-supply = <0x36>;
		l0_dvddio_sata-supply = <0x40>;
		dvdd_sata_pll-supply = <0x40>;

		prod-settings {
			#prod-cells = <0x4>;

			prod {
				prod = <0x0 0x680 0x1 0x1 0x0 0x690 0xfff 0x715 0x0 0x694 0xff0ff
0xe01b 0x0 0x6d0 0xffffffff 0xab000f 0x0 0x170 0xf000 0x7000 0x2 0x960
0x3000000 0x1000000>;
			};
		};
	};

	modem {
		compatible = "nvidia,icera-i500";
		status = "disabled";
		nvidia,boot-gpio = <0x56 0x56 0x1>;
		nvidia,mdm-power-report-gpio = <0x56 0x59 0x1>;
		nvidia,reset-gpio = <0x56 0x58 0x1>;
		nvidia,mdm-en-gpio = <0x56 0x57 0x0>;
		nvidia,num-temp-sensors = <0x3>;

		nvidia,phy-ehci-hsic {
			status = "disabled";
		};

		nvidia,phy-xhci-hsic {
			status = "disabled";
		};

		nvidia,phy-xhci-utmi {
			status = "disabled";
		};
	};

	trusty {
		compatible = "android,trusty-smc-v1";
		ranges;
		#address-cells = <0x2>;
		#size-cells = <0x2>;
		status = "disabled";

		irq {
			compatible = "android,trusty-irq-v1";
			interrupt-templates = <0xa0 0x0 0x33 0x1 0x1 0x0 0x33 0x1 0x0 0x0>;
			interrupt-ranges = <0x0 0xf 0x0 0x10 0x1f 0x1 0x20 0xdf 0x2>;
		};

		fiq {
			compatible = "android,trusty-fiq-v1";
		};

		virtio {
			compatible = "android,trusty-virtio-v1";
		};

		log {
			compatible = "android,trusty-log-v1";
		};
	};

	smp-custom-ipi {
		compatible = "android,CustomIPI";
		#interrupt-cells = <0x1>;
		interrupt-controller;
		linux,phandle = <0xa0>;
		phandle = <0xa0>;
	};

	psy_extcon_xudc {
		compatible = "power-supply-extcon";
		extcon-cables = <0x9a 0x1 0x9a 0x2 0x9a 0x3 0x9a 0x4 0x9a 0x5 0x9a
0x6 0x9a 0x7 0x9a 0x8 0x9a 0x9>;
		extcon-cable-names = "usb-charger", "ta-charger", "maxim-charger",
"qc2-charger", "downstream-charger", "slow-charger", "apple-500ma",
"apple-1a", "apple-2a";
		status = "disabled";
	};

	tegra-supply-tests {
		compatible = "nvidia,tegra-supply-tests";
		vdd-core-supply = <0xa1>;
	};

	camera-pcl {

		dpd {
			compatible = "nvidia,csi-dpd";
			#address-cells = <0x1>;
			#size-cells = <0x0>;
			num = <0x6>;

			csia {
				reg = <0x0 0x0 0x0 0x0>;
			};

			csib {
				reg = <0x0 0x1 0x0 0x0>;
			};

			csic {
				reg = <0x1 0xa 0x0 0x0>;
			};

			csid {
				reg = <0x1 0xb 0x0 0x0>;
			};

			csie {
				reg = <0x1 0xc 0x0 0x0>;
			};

			csif {
				reg = <0x1 0xd 0x0 0x0>;
			};
		};
	};

	rollback-protection {
		device-name = "sdmmc";
		device-method = <0x1 0x2>;
		status = "okay";
	};

	external-memory-controller@7001b000 {
		#cooling-cells = <0x2>;
		compatible = "nvidia,tegra21-emc", "nvidia,tegra210-emc";
		reg = <0x0 0x7001b000 0x0 0x1000 0x0 0x7001e000 0x0 0x1000 0x0
0x7001f000 0x0 0x1000>;
		clocks = <0x21 0x39 0x21 0xf1 0x21 0xed 0x21 0xf3 0x21 0xe9 0x21
0x131 0x21 0x140 0x21 0x141 0x21 0x1e0>;
		clock-names = "emc", "pll_m", "pll_c", "pll_p", "clk_m", "pll_mb",
"pll_mb_ud", "pll_p_ud", "emc_override";
		#thermal-sensor-cells = <0x0>;
		#address-cells = <0x1>;
		#size-cells = <0x0>;
		nvidia,use-ram-code;
		linux,phandle = <0x1d>;
		phandle = <0x1d>;

		emc-table@0 {
			nvidia,ram-code = <0x0>;

			emc-table@204000 {
				compatible = "nvidia,tegra21-emc-table";
				nvidia,revision = <0x7>;
				nvidia,dvfs-version = "13_204000_12_V9.8.7_V1.6";
				clock-frequency = <0x31ce0>;
				nvidia,emc-min-mv = <0x320>;
				nvidia,gk20a-min-mv = <0x44c>;
				nvidia,source = "pllp_out0";
				nvidia,src-sel-reg = <0x40188002>;
				nvidia,needs-training = <0x0>;
				nvidia,training_pattern = <0x0>;
				nvidia,trained = <0x0>;
				nvidia,periodic_training = <0x0>;
				nvidia,trained_dram_clktree_c0d0u0 = <0x0>;
				nvidia,trained_dram_clktree_c0d0u1 = <0x0>;
				nvidia,trained_dram_clktree_c0d1u0 = <0x0>;
				nvidia,trained_dram_clktree_c0d1u1 = <0x0>;
				nvidia,trained_dram_clktree_c1d0u0 = <0x0>;
				nvidia,trained_dram_clktree_c1d0u1 = <0x0>;
				nvidia,trained_dram_clktree_c1d1u0 = <0x0>;
				nvidia,trained_dram_clktree_c1d1u1 = <0x0>;
				nvidia,current_dram_clktree_c0d0u0 = <0x0>;
				nvidia,current_dram_clktree_c0d0u1 = <0x0>;
				nvidia,current_dram_clktree_c0d1u0 = <0x0>;
				nvidia,current_dram_clktree_c0d1u1 = <0x0>;
				nvidia,current_dram_clktree_c1d0u0 = <0x0>;
				nvidia,current_dram_clktree_c1d0u1 = <0x0>;
				nvidia,current_dram_clktree_c1d1u0 = <0x0>;
				nvidia,current_dram_clktree_c1d1u1 = <0x0>;
				nvidia,run_clocks = <0xd>;
				nvidia,tree_margin = <0x1>;
				nvidia,burst-regs-num = <0xdd>;
				nvidia,burst-regs-per-ch-num = <0x8>;
				nvidia,trim-regs-num = <0x8a>;
				nvidia,trim-regs-per-ch-num = <0xa>;
				nvidia,burst-mc-regs-num = <0x21>;
				nvidia,la-scale-regs-num = <0x18>;
				nvidia,vref-regs-num = <0x4>;
				nvidia,training-mod-regs-num = <0x14>;
				nvidia,dram-timing-regs-num = <0x5>;
				nvidia,min-mrs-wait = <0x16>;
				nvidia,emc-mrw = <0x88010004>;
				nvidia,emc-mrw2 = <0x88020000>;
				nvidia,emc-mrw3 = <0x880d0000>;
				nvidia,emc-mrw4 = <0xc0000000>;
				nvidia,emc-mrw9 = <0x8c0e7272>;
				nvidia,emc-mrs = <0x0>;
				nvidia,emc-emrs = <0x0>;
				nvidia,emc-emrs2 = <0x0>;
				nvidia,emc-auto-cal-config = <0xa01a51d8>;
				nvidia,emc-auto-cal-config2 = <0x5500000>;
				nvidia,emc-auto-cal-config3 = <0x770000>;
				nvidia,emc-auto-cal-config4 = <0x770000>;
				nvidia,emc-auto-cal-config5 = <0x770000>;
				nvidia,emc-auto-cal-config6 = <0x770000>;
				nvidia,emc-auto-cal-config7 = <0x770000>;
				nvidia,emc-auto-cal-config8 = <0x770000>;
				nvidia,emc-cfg-2 = <0x110805>;
				nvidia,emc-sel-dpd-ctrl = <0x40008>;
				nvidia,emc-fdpd-ctrl-cmd-no-ramp = <0x1>;
				nvidia,dll-clk-src = <0x40188002>;
				nvidia,clk-out-enb-x-0-clk-enb-emc-dll = <0x1>;
				nvidia,emc-clock-latency-change = <0xd5c>;
				nvidia,ptfv = <0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0xa 0xa 0xa 0x1>;
				nvidia,emc-registers = <0xd 0x3a 0x1d 0x0 0x0 0x9 0x4 0xb 0xd 0x8
0xb 0x0 0x4 0x20 0x6 0x6 0x6 0x3 0x0 0x6 0x4 0x2 0x0 0x4 0x8 0xd 0x6 0x5
0x0 0x0 0x3 0x88037171 0xc 0x1 0xa 0x10000 0x12 0x14 0x16 0x12 0x14
0x304 0x0 0xc1 0x8 0x8 0x3 0x3 0x3 0x14 0x5 0x2 0xd 0x3b 0x3b 0x5 0x5
0x4 0x9 0x5 0x4 0x9 0xc8037171 0x31c 0x0 0x9160a00d 0x3bbf 0x2c00a0
0x8000 0xbe 0xfff0fff 0xfff0fff 0x0 0x0 0x0 0x0 0x880b0000 0xe0017
0x1c0014 0x450031 0x3f002b 0x3d0028 0x3d0031 0xb 0x100004 0x450031
0x3f002b 0x3d0028 0x3d0031 0xb 0x100004 0x170017 0xe000e 0x140014
0x1c001c 0x17 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0
0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0
0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x8020221f 0x220f40f 0x12 0x64000
0x900cc 0xcc0016 0x33000a 0xc1e00303 0x1f13412f 0x10014 0x804 0x550
0xf3200000 0xfff0fff 0x713 0xa 0x0 0x0 0x1b 0x1b 0x20000 0x50037 0x0
0x10 0x3000 0xa000000 0x2000111 0x8 0x30808 0x15c00 0x101010 0x1600 0x0
0x0 0x0 0x34 0x40 0x18000800 0x8000800 0x8000800 0x8000800 0x400080
0x8801004 0x20 0x0 0x0 0x0 0x0 0x0 0x0 0xefffefff 0xc0c0c0c0 0xc0c0c0c0
0xdcdcdcdc 0xa0a0a0a 0xa0a0a0a 0xa0a0a0a 0x1186033 0x0 0x0 0x14 0xa 0x16
0x88161414 0x12 0x10000 0x9080 0x7070404 0x40065 0x513801f 0x1f101100
0x14 0x107240 0x1124000 0x1125b6a 0xf081000 0x105800 0x1110fc00
0xf081300 0x105800 0x1114fc00 0x7000300 0x107240 0x55553c5a 0xc8161414>;
				nvidia,emc-burst-regs-per-ch = <0x880c7272 0x880c7272 0xc80c7272
0xc80c7272 0x8c0e7272 0x8c0e7272 0x4c0e7272 0x4c0e7272>;
				nvidia,emc-shadow-regs-ca-train = <0xd 0x3a 0x1d 0x0 0x0 0x9 0x4
0xb 0xd 0x8 0xb 0x0 0x4 0x20 0x6 0x6 0x6 0x3 0x0 0x6 0x4 0x2 0x0 0x4 0x8
0xd 0x6 0x5 0x0 0x0 0x3 0x88037171 0xc 0x1 0xa 0x10000 0x12 0x14 0x16
0x12 0x14 0x304 0x0 0xc1 0x8 0x8 0x3 0x3 0x3 0x14 0x5 0x2 0xd 0x3b 0x3b
0x5 0x5 0x4 0x9 0x5 0x4 0x9 0xc8037171 0x31c 0x0 0x9960a00d 0x3bbf
0x2c00a0 0x8000 0x55 0xfff0fff 0xfff0fff 0x0 0x0 0x0 0x0 0x880b0000
0xe0017 0x1c0014 0x450031 0x3f002b 0x3d0028 0x3d0031 0xb 0x100004
0x450031 0x3f002b 0x3d0028 0x3d0031 0xb 0x100004 0x170017 0xe000e
0x140014 0x1c001c 0x17 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0
0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0
0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x8020221f 0x220f40f 0x12
0x64000 0x900cc 0xcc0016 0x33000a 0xc1e00303 0x1f13412f 0x10014 0x804
0x550 0xf3200000 0xfff0fff 0x713 0xa 0x0 0x0 0x1b 0x1b 0x20000 0x5058033
0x5050000 0x0 0x3000 0xa000000 0x2000111 0x8 0x30808 0x15c00 0x101010
0x1600 0x0 0x0 0x0 0x34 0x40 0x18000800 0x8000800 0x8000800 0x8000800
0x400080 0x8801004 0x20 0x0 0x0 0x0 0x0 0x0 0x0 0xefffefff 0xc0c0c0c0
0xc0c0c0c0 0xdcdcdcdc 0xa0a0a0a 0xa0a0a0a 0xa0a0a0a 0x1186033 0x1 0x1f
0x18 0x8 0x1a 0x88161414 0x10 0x10000 0x9080 0x7070404 0x40065 0x513801f
0x1f101100 0x14 0x107240 0x1124000 0x1125b6a 0xf081000 0x105800
0x1110fc00 0xf081300 0x105800 0x1114fc00 0x7000300 0x107240 0x55553c5a
0xc8161414>;
				nvidia,emc-shadow-regs-quse-train = <0xd 0x3a 0x1d 0x0 0x0 0x9 0x4
0xa 0xd 0x8 0xb 0x0 0x4 0x20 0x6 0x6 0x6 0xc 0x0 0x6 0x4 0x2 0x0 0x4 0x8
0xd 0x3 0x2 0x10000000 0x0 0x3 0x88037171 0xb 0x1 0x80000000 0x40000
0x12 0x14 0x16 0x12 0x14 0x304 0x0 0xc1 0x8 0x8 0x3 0x3 0x3 0x14 0x5 0x2
0xd 0x3b 0x3b 0x5 0x5 0x4 0x9 0x5 0x4 0x9 0xc8037171 0x31c 0x0
0x9160400d 0x3bbf 0x2c00a0 0x8000 0xbe 0xfff0fff 0xfff0fff 0x0 0x0 0x0
0x0 0x880b0000 0xe0017 0x1c0014 0x450031 0x3f002b 0x3d0028 0x3d0031 0xb
0x100004 0x450031 0x3f002b 0x3d0028 0x3d0031 0xb 0x100004 0x170017
0xe000e 0x140014 0x1c001c 0x17 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0
0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0
0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x8020221f 0x220f40f
0x12 0x64000 0x900cc 0xcc0016 0x33000a 0xc1e00303 0x1f13412f 0x10014
0x804 0x550 0xf3200000 0xfff0fff 0x713 0xa 0x0 0x0 0x1b 0x1b 0x30020000
0x58037 0x0 0x10 0x3000 0xa000000 0x2000111 0x8 0x30808 0x15c00 0x101010
0x1600 0x0 0x0 0x0 0x34 0x40 0x18000800 0x8000800 0x8000800 0x8000800
0x400080 0x8801004 0x20 0x0 0x0 0x0 0x0 0x0 0x0 0xefffefff 0xc0c0c0c0
0xc0c0c0c0 0xdcdcdcdc 0xa0a0a0a 0xa0a0a0a 0xa0a0a0a 0x1186033 0x1 0x1f
0x1e 0x14 0x20 0x88161414 0x1c 0x40000 0x9080 0x7070404 0x40065
0x513801f 0x1f101100 0x14 0x107240 0x1124000 0x1125b6a 0xf081000
0x105800 0x1110fc00 0xf081300 0x105800 0x1114fc00 0x7000300 0x107240
0x55553c5a 0xc8161414>;
				nvidia,emc-shadow-regs-rdwr-train = <0xd 0x3a 0x1d 0x0 0x0 0x9 0x4
0xe 0xd 0x8 0xb 0x0 0x4 0x20 0x6 0x6 0x6 0x12 0x13 0x6 0x4 0x2 0x0 0x4
0x8 0xd 0x6 0x5 0x10000000 0x30000002 0x3 0x88037171 0xc 0x1 0xa 0x40000
0x12 0x14 0x16 0x12 0x14 0x304 0x0 0xc1 0x8 0x8 0x3 0x3 0x3 0x14 0x5 0x2
0xd 0x3b 0x3b 0x5 0x5 0x4 0x9 0x5 0x4 0x9 0xc8037171 0x31c 0x0
0x9160a00d 0x3bbf 0x2c00a0 0x8000 0xbe 0xfff0fff 0xfff0fff 0x0 0x0 0x0
0x0 0x880b0000 0xe0017 0x1c0014 0x450031 0x3f002b 0x3d0028 0x3d0031 0xb
0x100004 0x450031 0x3f002b 0x3d0028 0x3d0031 0xb 0x100004 0x170017
0xe000e 0x140014 0x1c001c 0x17 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0
0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0
0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x8020221f 0x220f40f
0x12 0x64000 0x900cc 0xcc0016 0x33000a 0xc1e00303 0x1f13412f 0x10014
0x804 0x550 0xf3200000 0xfff0fff 0x713 0xa 0x0 0x0 0x1b 0x1b 0x20000
0x50037 0x0 0x10 0x3000 0xa000000 0x2000111 0x8 0x30808 0x15c00 0x101010
0x1600 0x0 0x0 0x0 0x34 0x40 0x18000800 0x8000800 0x8000800 0x8000800
0x400080 0x8801004 0x20 0x0 0x0 0x0 0x0 0x0 0x0 0xefffefff 0xc0c0c0c0
0xc0c0c0c0 0xdcdcdcdc 0xa0a0a0a 0xa0a0a0a 0xa0a0a0a 0x1186033 0x1 0x0
0x14 0xa 0x16 0x88161414 0x12 0x40000 0xb080 0x7070404 0x40065 0x513801f
0x1f101100 0x14 0x107240 0x1124000 0x1125b6a 0xf081000 0x105800
0x1110fc00 0xf081300 0x105800 0x1114fc00 0x7000300 0x107240 0x55553c5a
0xc8161414>;
				nvidia,emc-trim-regs = <0x280028 0x280028 0x280028 0x280028
0x280028 0x280028 0x280028 0x280028 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0
0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0
0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0
0x0 0x0 0x0 0x11111111 0x11111111 0x28282828 0x28282828 0x0 0x0 0x0 0x0
0xe0017 0x1c0014 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0
0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0
0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0
0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0
0x0 0x0 0x0 0x0>;
				nvidia,emc-trim-regs-per-ch = <0x0 0x0 0x249249 0x249249 0x249249
0x249249 0x0 0x0 0x0 0x0>;
				nvidia,emc-vref-regs = <0x0 0x0 0x0 0x0>;
				nvidia,emc-dram-timing-regs = <0x12 0x104 0x118 0x18 0x6>;
				nvidia,emc-training-mod-regs = <0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0
0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0>;
				nvidia,emc-save-restore-mod-regs = <0x0 0x0 0x0 0x0 0x0 0x0 0x0
0x0 0x0 0x0 0x0 0x0>;
				nvidia,emc-burst-mc-regs = <0x8000001 0x8000004c 0xa1020
0x80001028 0x1 0x0 0x3 0x1 0x2 0x1 0x2 0x5 0x1 0x1 0x4 0x8 0x5 0x7
0x2020000 0x30201 0x72a30504 0x70000f0f 0x0 0x1f0000 0x80001a 0x80001a
0x80001a 0x80001a 0x80001a 0x80001a 0x80001a 0x80001a 0x80001a>;
				nvidia,emc-la-scale-regs = <0x1b 0x80001a 0x24c 0xff00b2 0xff00da
0xff009d 0xff00ff 0xff000c 0xff00ff 0xff000c 0x7f0049 0xff0080 0xff0004
0x800ad 0xff 0xff0004 0xff00c6 0xff00c6 0xff006d 0xff00ff 0xff00e2 0xff
0x80 0xff00ff>;
			};

			emc-table@1600000 {
				compatible = "nvidia,tegra21-emc-table";
				nvidia,revision = <0x7>;
				nvidia,dvfs-version = "13_1600000_12_V9.8.7_V1.6";
				clock-frequency = <0x186a00>;
				nvidia,emc-min-mv = <0x377>;
				nvidia,gk20a-min-mv = <0x44c>;
				nvidia,source = "pllm_ud";
				nvidia,src-sel-reg = <0x80188000>;
				nvidia,needs-training = <0x2f0>;
				nvidia,training_pattern = <0x0>;
				nvidia,trained = <0x0>;
				nvidia,periodic_training = <0x1>;
				nvidia,trained_dram_clktree_c0d0u0 = <0x0>;
				nvidia,trained_dram_clktree_c0d0u1 = <0x0>;
				nvidia,trained_dram_clktree_c0d1u0 = <0x0>;
				nvidia,trained_dram_clktree_c0d1u1 = <0x0>;
				nvidia,trained_dram_clktree_c1d0u0 = <0x0>;
				nvidia,trained_dram_clktree_c1d0u1 = <0x0>;
				nvidia,trained_dram_clktree_c1d1u0 = <0x0>;
				nvidia,trained_dram_clktree_c1d1u1 = <0x0>;
				nvidia,current_dram_clktree_c0d0u0 = <0x0>;
				nvidia,current_dram_clktree_c0d0u1 = <0x0>;
				nvidia,current_dram_clktree_c0d1u0 = <0x0>;
				nvidia,current_dram_clktree_c0d1u1 = <0x0>;
				nvidia,current_dram_clktree_c1d0u0 = <0x0>;
				nvidia,current_dram_clktree_c1d0u1 = <0x0>;
				nvidia,current_dram_clktree_c1d1u0 = <0x0>;
				nvidia,current_dram_clktree_c1d1u1 = <0x0>;
				nvidia,run_clocks = <0x40>;
				nvidia,tree_margin = <0x1>;
				nvidia,burst-regs-num = <0xdd>;
				nvidia,burst-regs-per-ch-num = <0x8>;
				nvidia,trim-regs-num = <0x8a>;
				nvidia,trim-regs-per-ch-num = <0xa>;
				nvidia,burst-mc-regs-num = <0x21>;
				nvidia,la-scale-regs-num = <0x18>;
				nvidia,vref-regs-num = <0x4>;
				nvidia,training-mod-regs-num = <0x14>;
				nvidia,dram-timing-regs-num = <0x5>;
				nvidia,min-mrs-wait = <0x30>;
				nvidia,emc-mrw = <0x88010054>;
				nvidia,emc-mrw2 = <0x8802002d>;
				nvidia,emc-mrw3 = <0x880d0000>;
				nvidia,emc-mrw4 = <0xc0000000>;
				nvidia,emc-mrw9 = <0x8c0e4848>;
				nvidia,emc-mrs = <0x0>;
				nvidia,emc-emrs = <0x0>;
				nvidia,emc-emrs2 = <0x0>;
				nvidia,emc-auto-cal-config = <0xa01a51d8>;
				nvidia,emc-auto-cal-config2 = <0x5500000>;
				nvidia,emc-auto-cal-config3 = <0x770000>;
				nvidia,emc-auto-cal-config4 = <0x770000>;
				nvidia,emc-auto-cal-config5 = <0x770000>;
				nvidia,emc-auto-cal-config6 = <0x770000>;
				nvidia,emc-auto-cal-config7 = <0x770000>;
				nvidia,emc-auto-cal-config8 = <0x770000>;
				nvidia,emc-cfg-2 = <0x110835>;
				nvidia,emc-sel-dpd-ctrl = <0x40000>;
				nvidia,emc-fdpd-ctrl-cmd-no-ramp = <0x1>;
				nvidia,dll-clk-src = <0x80188000>;
				nvidia,clk-out-enb-x-0-clk-enb-emc-dll = <0x0>;
				nvidia,emc-clock-latency-change = <0x49c>;
				nvidia,ptfv = <0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0xa 0xa 0xa 0x1>;
				nvidia,emc-registers = <0x60 0x1c0 0xe0 0x0 0x0 0x44 0x1d 0x29
0x21 0xc 0x2d 0x0 0x4 0x20 0x1d 0x1d 0x10 0x17 0x16 0x6 0xe 0xc 0xa 0xe
0x8 0xd 0x24 0x8 0x1000001c 0x10000002 0x14 0x8803f1f1 0x1c 0x1f 0xd
0x6000c 0x33 0x3b 0x3d 0x39 0x3b 0x1820 0x0 0x608 0x10 0x10 0x3 0x3 0x3
0x38 0xe 0x2 0x2e 0x1cc 0x1cc 0xd 0x18 0xc 0x40 0x22 0x4 0x14 0xc803f1f1
0x1860 0x0 0x9960a00d 0x3bff 0xc00001bb 0x8000 0x55 0x0 0x0 0x0 0x0 0x0
0x0 0x880b6666 0x8000e 0x11000c 0x1c001c 0x1c001c 0x1c001c 0x1c001c 0x7
0x90002 0x1c001c 0x1c001c 0x1c001c 0x1c001c 0x7 0x90002 0xe000e 0x80008
0xc000c 0x110011 0xe 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0
0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0
0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x8020221f 0x220f40f 0x12 0x64000
0x310640 0x6400030 0x1900017 0xc1e0030a 0x1f13612f 0x14 0x80d 0x550
0xf3200000 0x0 0x308c 0x2b 0x0 0x0 0x1b 0x1b 0x20000 0x33 0x0 0x11
0x3000 0x2000000 0x2000101 0x7 0x30808 0x15c00 0x102020 0x1fff1fff 0x0
0x0 0x0 0x34 0x40 0x18000800 0x8000800 0x8000800 0x8000800 0x400080
0x8801004 0x20 0x0 0x0 0x0 0x0 0x0 0x0 0xefff2210 0x0 0x0 0xdcdcdcdc
0xa0a0a0a 0xa0a0a0a 0xa0a0a0a 0x1186190 0x0 0x0 0x3b 0x2b 0x3d
0x88161414 0x33 0x6000c 0x9080 0x7070404 0x40320 0x513801f 0x1f101100
0x14 0x103200 0x1124000 0x1125b6a 0xf081000 0x105800 0x1110fc00
0xf085300 0x105800 0x1114fc00 0x7004300 0x103200 0x55553c5a 0xc8161414>;
				nvidia,emc-burst-regs-per-ch = <0x880c4848 0x880c4848 0xc80c4848
0xc80c4848 0x8c0e4848 0x8c0e4848 0x4c0e4848 0x4c0e4848>;
				nvidia,emc-shadow-regs-ca-train = <0x60 0x1c0 0xe0 0x0 0x0 0x44
0x1d 0x29 0x21 0xc 0x2d 0x0 0x4 0x20 0x1d 0x1d 0x10 0x17 0x16 0x6 0xe
0xc 0xa 0xe 0x8 0xd 0x24 0x8 0x1000001c 0x10000002 0x14 0x8803f1f1 0x1c
0x1f 0xd 0x6000c 0x33 0x3b 0x3d 0x39 0x3b 0x1820 0x0 0x608 0x10 0x10 0x3
0x3 0x3 0x38 0xe 0x2 0x2e 0x1cc 0x1cc 0xd 0x18 0xc 0x40 0x22 0x4 0x14
0xc803f1f1 0x1860 0x0 0x9960a00d 0x3bff 0xc00001bb 0x8000 0x55 0x0 0x0
0x0 0x0 0x0 0x0 0x880b6666 0x8000e 0x11000c 0x1c001c 0x1c001c 0x1c001c
0x1c001c 0x7 0x90002 0x1c001c 0x1c001c 0x1c001c 0x1c001c 0x7 0x90002
0xe000e 0x80008 0xc000c 0x110011 0xe 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0
0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0
0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x8020221f 0x220f40f
0x12 0x64000 0x310640 0x6400030 0x1900017 0xc1e0030a 0x1f13612f 0x14
0x80d 0x550 0xf3200000 0x0 0x308c 0x2b 0x0 0x0 0x1b 0x1b 0x20000 0x8033
0x0 0x0 0x3000 0x2000000 0x2000101 0x7 0x30808 0x15c00 0x102020
0x1fff1fff 0x0 0x0 0x0 0x34 0x40 0x18000800 0x8000800 0x8000800
0x8000800 0x400080 0x8801004 0x20 0x0 0x0 0x0 0x0 0x0 0x0 0xefff2210 0x0
0x0 0xdcdcdcdc 0xa0a0a0a 0xa0a0a0a 0xa0a0a0a 0x1186190 0x1 0x1f 0x41
0x2b 0x43 0x88161414 0x33 0x6000c 0x9080 0x7070404 0x40320 0x513801f
0x1f101100 0x14 0x103200 0x1124000 0x1125b6a 0xf081000 0x105800
0x1110fc00 0xf085300 0x105800 0x1114fc00 0x7004300 0x103200 0x55553c5a
0xc8161414>;
				nvidia,emc-shadow-regs-quse-train = <0x60 0x1c0 0xe0 0x0 0x0 0x44
0x1d 0x28 0x21 0xc 0x2d 0x0 0x4 0x20 0x1d 0x1d 0x10 0x11 0x16 0x6 0xe
0xc 0xa 0xe 0x8 0xd 0x21 0x2 0x1000001c 0x10000002 0x14 0x8803f1f1 0x1b
0x1 0x80000000 0x6000c 0x33 0x3b 0x3d 0x39 0x3b 0x1820 0x0 0x608 0x10
0x10 0x3 0x3 0x3 0x38 0xe 0x2 0x2e 0x1cc 0x1cc 0xd 0x18 0xc 0x40 0x22
0x4 0x14 0xc803f1f1 0x1860 0x0 0x9960400d 0x3bff 0xc00001bb 0x8000 0x55
0x0 0x0 0x0 0x0 0x0 0x0 0x880b6666 0x8000e 0x11000c 0x1c001c 0x1c001c
0x1c001c 0x1c001c 0x7 0x90002 0x1c001c 0x1c001c 0x1c001c 0x1c001c 0x7
0x90002 0xe000e 0x80008 0xc000c 0x110011 0xe 0x0 0x0 0x0 0x0 0x0 0x0 0x0
0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0
0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x8020221f
0x220f40f 0x12 0x64000 0x310640 0x6400030 0x1900017 0xc1e0030a
0x1f13612f 0x14 0x80d 0x550 0xf3200000 0x0 0x308c 0x2b 0x0 0x0 0x1b 0x1b
0x30020000 0x8033 0x0 0x11 0x3000 0x2000000 0x2000101 0x7 0x30808
0x15c00 0x102020 0x1fff1fff 0x0 0x0 0x0 0x34 0x40 0x18000800 0x8000800
0x8000800 0x8000800 0x400080 0x8801004 0x20 0x0 0x0 0x0 0x0 0x0 0x0
0xefff2210 0x0 0x0 0xdcdcdcdc 0xa0a0a0a 0xa0a0a0a 0xa0a0a0a 0x1186190
0x1 0x1f 0x45 0x35 0x47 0x88161414 0x3d 0x6000c 0x9080 0x7070404 0x40320
0x513801f 0x1f101100 0x14 0x103200 0x1124000 0x1125b6a 0xf081000
0x105800 0x1110fc00 0xf085300 0x105800 0x1114fc00 0x7004300 0x103200
0x55553c5a 0xc8161414>;
				nvidia,emc-shadow-regs-rdwr-train = <0x60 0x1c0 0xe0 0x0 0x0 0x44
0x1d 0x29 0x21 0xc 0x2d 0x0 0x4 0x20 0x1d 0x1d 0x10 0x17 0x16 0x6 0xe
0xc 0xa 0xe 0x8 0xd 0x24 0x8 0x1000001c 0x10000002 0x14 0x8803f1f1 0x1c
0x1f 0xd 0x6000c 0x33 0x3b 0x3d 0x39 0x3b 0x1820 0x0 0x608 0x10 0x10 0x3
0x3 0x3 0x38 0xe 0x2 0x2e 0x1cc 0x1cc 0xd 0x18 0xc 0x40 0x22 0x4 0x14
0xc803f1f1 0x1860 0x0 0x9960a00d 0x3bff 0xc00001bb 0x8000 0x55 0x0 0x0
0x0 0x0 0x0 0x0 0x880b6666 0x8000e 0x11000c 0x1c001c 0x1c001c 0x1c001c
0x1c001c 0x7 0x90002 0x1c001c 0x1c001c 0x1c001c 0x1c001c 0x7 0x90002
0xe000e 0x80008 0xc000c 0x110011 0xe 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0
0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0
0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x8020221f 0x220f40f
0x12 0x64000 0x310640 0x6400030 0x1900017 0xc1e0030a 0x1f13612f 0x14
0x80d 0x550 0xf3200000 0x0 0x308c 0x2b 0x0 0x0 0x1b 0x1b 0x20000 0x33
0x0 0x11 0x3000 0x2000000 0x2000101 0x7 0x30808 0x15c00 0x102020
0x1fff1fff 0x0 0x0 0x0 0x34 0x40 0x18000800 0x8000800 0x8000800
0x8000800 0x400080 0x8801004 0x20 0x0 0x0 0x0 0x0 0x0 0x0 0xefff2210 0x0
0x0 0xdcdcdcdc 0xa0a0a0a 0xa0a0a0a 0xa0a0a0a 0x1186190 0x1 0x0 0x3b 0x2b
0x3d 0x88161414 0x33 0x6000c 0xb080 0x7070404 0x40320 0x513801f
0x1f101100 0x14 0x103200 0x1124000 0x1125b6a 0xf081000 0x105800
0x1110fc00 0xf085300 0x105800 0x1114fc00 0x7004300 0x103200 0x55553c5a
0xc8161414>;
				nvidia,emc-trim-regs = <0x200020 0x200020 0x200020 0x200020
0x200020 0x200020 0x200020 0x200020 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0
0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0
0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0
0x0 0x0 0x0 0x11111111 0x11111111 0x11111111 0x11111111 0x2b0022
0x2b0026 0x260025 0x260026 0x8000e 0x11000c 0x2b0022 0x2b0026 0x260025
0x260026 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0
0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0
0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0
0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0>;
				nvidia,emc-trim-regs-per-ch = <0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0
0x0>;
				nvidia,emc-vref-regs = <0x0 0x0 0x0 0x0>;
				nvidia,emc-dram-timing-regs = <0x12 0x104 0x118 0x7 0x20>;
				nvidia,emc-training-mod-regs = <0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0
0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0>;
				nvidia,emc-save-restore-mod-regs = <0x0 0x0 0x0 0x0 0x0 0x0 0x0
0x0 0x4 0x4 0x4 0x4>;
				nvidia,emc-burst-mc-regs = <0xc 0x80000080 0xa1020 0x80001028 0x6
0x7 0x18 0xf 0xf 0x3 0x3 0xd 0x1 0x1 0xc 0x8 0xa 0x37 0x5060000 0xd080c
0x726c2419 0x70000f0f 0x0 0x1f0000 0x80001a 0x80001a 0x80001a 0x80001a
0x80001a 0x80001a 0x80001a 0x80001a 0x80001a>;
				nvidia,emc-la-scale-regs = <0xd0 0x80001a 0x1203 0x80003d 0x800038
0x800041 0x800090 0x800005 0x800090 0x800005 0x340049 0x800080 0x800004
0x80016 0x80 0x800004 0x800019 0x800019 0x800018 0x800095 0x80001d 0x80
0x2c 0x800080>;
			};

			emc-table-derated@204000 {
				compatible = "nvidia,tegra21-emc-table-derated";
				nvidia,revision = <0x7>;
				nvidia,dvfs-version = "13_derating_204000_V13_V13";
				clock-frequency = <0x31ce0>;
				nvidia,emc-min-mv = <0x320>;
				nvidia,gk20a-min-mv = <0x44c>;
				nvidia,source = "pllp_out0";
				nvidia,src-sel-reg = <0x40188002>;
				nvidia,needs-training = <0x0>;
				nvidia,training_pattern = <0x0>;
				nvidia,trained = <0x0>;
				nvidia,periodic_training = <0x0>;
				nvidia,trained_dram_clktree_c0d0u0 = <0x0>;
				nvidia,trained_dram_clktree_c0d0u1 = <0x0>;
				nvidia,trained_dram_clktree_c0d1u0 = <0x0>;
				nvidia,trained_dram_clktree_c0d1u1 = <0x0>;
				nvidia,trained_dram_clktree_c1d0u0 = <0x0>;
				nvidia,trained_dram_clktree_c1d0u1 = <0x0>;
				nvidia,trained_dram_clktree_c1d1u0 = <0x0>;
				nvidia,trained_dram_clktree_c1d1u1 = <0x0>;
				nvidia,current_dram_clktree_c0d0u0 = <0x0>;
				nvidia,current_dram_clktree_c0d0u1 = <0x0>;
				nvidia,current_dram_clktree_c0d1u0 = <0x0>;
				nvidia,current_dram_clktree_c0d1u1 = <0x0>;
				nvidia,current_dram_clktree_c1d0u0 = <0x0>;
				nvidia,current_dram_clktree_c1d0u1 = <0x0>;
				nvidia,current_dram_clktree_c1d1u0 = <0x0>;
				nvidia,current_dram_clktree_c1d1u1 = <0x0>;
				nvidia,run_clocks = <0xd>;
				nvidia,tree_margin = <0x1>;
				nvidia,burst-regs-num = <0xdd>;
				nvidia,burst-regs-per-ch-num = <0x8>;
				nvidia,trim-regs-num = <0x8a>;
				nvidia,trim-regs-per-ch-num = <0xa>;
				nvidia,burst-mc-regs-num = <0x21>;
				nvidia,la-scale-regs-num = <0x18>;
				nvidia,vref-regs-num = <0x4>;
				nvidia,training-mod-regs-num = <0x14>;
				nvidia,dram-timing-regs-num = <0x5>;
				nvidia,min-mrs-wait = <0x16>;
				nvidia,emc-mrw = <0x88010004>;
				nvidia,emc-mrw2 = <0x88020000>;
				nvidia,emc-mrw3 = <0x880d0000>;
				nvidia,emc-mrw4 = <0xc0000000>;
				nvidia,emc-mrw9 = <0x8c0e7272>;
				nvidia,emc-mrs = <0x0>;
				nvidia,emc-emrs = <0x0>;
				nvidia,emc-emrs2 = <0x0>;
				nvidia,emc-auto-cal-config = <0xa01a51d8>;
				nvidia,emc-auto-cal-config2 = <0x5500000>;
				nvidia,emc-auto-cal-config3 = <0x770000>;
				nvidia,emc-auto-cal-config4 = <0x770000>;
				nvidia,emc-auto-cal-config5 = <0x770000>;
				nvidia,emc-auto-cal-config6 = <0x770000>;
				nvidia,emc-auto-cal-config7 = <0x770000>;
				nvidia,emc-auto-cal-config8 = <0x770000>;
				nvidia,emc-cfg-2 = <0x110805>;
				nvidia,emc-sel-dpd-ctrl = <0x40008>;
				nvidia,emc-fdpd-ctrl-cmd-no-ramp = <0x1>;
				nvidia,dll-clk-src = <0x40188002>;
				nvidia,clk-out-enb-x-0-clk-enb-emc-dll = <0x1>;
				nvidia,emc-clock-latency-change = <0xd5c>;
				nvidia,ptfv = <0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0xa 0xa 0xa 0x1>;
				nvidia,emc-registers =3D <0xd 0x3a 0x1d 0x0 0x0 0x9 0x5 0xb 0xd 0x8 =
0xb 0x0 0x4 0x20 0x6 0x6 0x6 0x3 0x0 0x6 0x4 0x2 0x0 0x4 0x8 0xd 0x5 0x6 =
0x0 0x0 0x2 0x88037171 0xd 0x0 0xb 0x10000 0x12 0x14 0x16 0x12 0x14 0xc1 =
0x0 0x30 0x8 0x8 0x3 0x3 0x3 0x14 0x5 0x2 0xe 0x3b 0x3b 0x5 0x5 0x4 0x9 =
0x5 0x4 0x9 0xc8037171 0x31c 0x0 0x9160a00d 0x3bbf 0x2c00a0 0x8000 0xbe =
0xfff0fff 0xfff0fff 0x0 0x0 0x0 0x0 0x880b0000 0xe0017 0x1c0014 0x450031 =
0x3f002b 0x3d0028 0x3d0031 0xb 0x100004 0x450031 0x3f002b 0x3d0028 =
0x3d0031 0xb 0x100004 0x170017 0xe000e 0x140014 0x1c001c 0x17 0x0 0x0 =
0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 =
0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 =
0x0 0x0 0x8020221f 0x220f40f 0x12 0x64000 0x900cc 0xcc0016 0x33000a =
0xc1e00303 0x1f13412f 0x10014 0x804 0x550 0xf3200000 0xfff0fff 0x287 0xa =
0x0 0x0 0x1b 0x1b 0x20000 0x50037 0x0 0x10 0x3000 0xa000000 0x2000111 =
0x8 0x30808 0x15c00 0x101010 0x1600 0x0 0x0 0x0 0x34 0x40 0x18000800 =
0x8000800 0x8000800 0x8000800 0x400080 0x8801004 0x20 0x0 0x0 0x0 0x0 =
0x0 0x0 0xefffefff 0xc0c0c0c0 0xc0c0c0c0 0xdcdcdcdc 0xa0a0a0a 0xa0a0a0a =
0xa0a0a0a 0x1186033 0x0 0x0 0x14 0xa 0x16 0x88161414 0x12 0x10000 0x9080 =
0x7070404 0x40065 0x513801f 0x1f101100 0x14 0x107240 0x1124000 0x1125b6a =
0xf081000 0x105800 0x1110fc00 0xf081300 0x105800 0x1114fc00 0x7000300 =
0x107240 0x55553c5a 0xc8161414>;=0A=
				nvidia,emc-burst-regs-per-ch =3D <0x880c7272 0x880c7272 0xc80c7272 =
0xc80c7272 0x8c0e7272 0x8c0e7272 0x4c0e7272 0x4c0e7272>;=0A=
				nvidia,emc-shadow-regs-ca-train =3D <0xd 0x3a 0x1d 0x0 0x0 0x9 0x5 =
0xb 0xd 0x8 0xb 0x0 0x4 0x20 0x6 0x6 0x6 0x3 0x0 0x6 0x4 0x2 0x0 0x4 0x8 =
0xd 0x5 0x6 0x0 0x0 0x2 0x88037171 0xd 0x0 0xb 0x10000 0x12 0x14 0x16 =
0x12 0x14 0xc1 0x0 0x30 0x8 0x8 0x3 0x3 0x3 0x14 0x5 0x2 0xe 0x3b 0x3b =
0x5 0x5 0x4 0x9 0x5 0x4 0x9 0xc8037171 0x31c 0x0 0x9960a00d 0x3bbf =
0x2c00a0 0x8000 0x55 0xfff0fff 0xfff0fff 0x0 0x0 0x0 0x0 0x880b0000 =
0xe0017 0x1c0014 0x450031 0x3f002b 0x3d0028 0x3d0031 0xb 0x100004 =
0x450031 0x3f002b 0x3d0028 0x3d0031 0xb 0x100004 0x170017 0xe000e =
0x140014 0x1c001c 0x17 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 =
0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 =
0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x8020221f 0x220f40f 0x12 =
0x64000 0x900cc 0xcc0016 0x33000a 0xc1e00303 0x1f13412f 0x10014 0x804 =
0x550 0xf3200000 0xfff0fff 0x287 0xa 0x0 0x0 0x1b 0x1b 0x20000 0x5058033 =
0x5050000 0x0 0x3000 0xa000000 0x2000111 0x8 0x30808 0x15c00 0x101010 =
0x1600 0x0 0x0 0x0 0x34 0x40 0x18000800 0x8000800 0x8000800 0x8000800 =
0x400080 0x8801004 0x20 0x0 0x0 0x0 0x0 0x0 0x0 0xefffefff 0xc0c0c0c0 =
0xc0c0c0c0 0xdcdcdcdc 0xa0a0a0a 0xa0a0a0a 0xa0a0a0a 0x1186033 0x1 0x1f =
0x18 0x8 0x1a 0x88161414 0x10 0x10000 0x9080 0x7070404 0x40065 0x513801f =
0x1f101100 0x14 0x107240 0x1124000 0x1125b6a 0xf081000 0x105800 =
0x1110fc00 0xf081300 0x105800 0x1114fc00 0x7000300 0x107240 0x55553c5a =
0xc8161414>;=0A=
				nvidia,emc-shadow-regs-quse-train =3D <0xd 0x3a 0x1d 0x0 0x0 0x9 0x5 =
0xa 0xd 0x8 0xb 0x0 0x4 0x20 0x6 0x6 0x6 0xc 0x0 0x6 0x4 0x2 0x0 0x4 0x8 =
0xd 0x3 0x2 0x10000000 0x0 0x3 0x88037171 0xb 0x1 0x80000000 0x40000 =
0x12 0x14 0x16 0x12 0x14 0xc1 0x0 0x30 0x8 0x8 0x3 0x3 0x3 0x14 0x5 0x2 =
0xe 0x3b 0x3b 0x5 0x5 0x4 0x9 0x5 0x4 0x9 0xc8037171 0x31c 0x0 =
0x9160400d 0x3bbf 0x2c00a0 0x8000 0xbe 0xfff0fff 0xfff0fff 0x0 0x0 0x0 =
0x0 0x880b0000 0xe0017 0x1c0014 0x450031 0x3f002b 0x3d0028 0x3d0031 0xb =
0x100004 0x450031 0x3f002b 0x3d0028 0x3d0031 0xb 0x100004 0x170017 =
0xe000e 0x140014 0x1c001c 0x17 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 =
0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 =
0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x8020221f 0x220f40f =
0x12 0x64000 0x900cc 0xcc0016 0x33000a 0xc1e00303 0x1f13412f 0x10014 =
0x804 0x550 0xf3200000 0xfff0fff 0x287 0xa 0x0 0x0 0x1b 0x1b 0x30020000 =
0x58037 0x0 0x10 0x3000 0xa000000 0x2000111 0x8 0x30808 0x15c00 0x101010 =
0x1600 0x0 0x0 0x0 0x34 0x40 0x18000800 0x8000800 0x8000800 0x8000800 =
0x400080 0x8801004 0x20 0x0 0x0 0x0 0x0 0x0 0x0 0xefffefff 0xc0c0c0c0 =
0xc0c0c0c0 0xdcdcdcdc 0xa0a0a0a 0xa0a0a0a 0xa0a0a0a 0x1186033 0x1 0x1f =
0x1e 0x14 0x20 0x88161414 0x1c 0x40000 0x9080 0x7070404 0x40065 =
0x513801f 0x1f101100 0x14 0x107240 0x1124000 0x1125b6a 0xf081000 =
0x105800 0x1110fc00 0xf081300 0x105800 0x1114fc00 0x7000300 0x107240 =
0x55553c5a 0xc8161414>;=0A=
				nvidia,emc-shadow-regs-rdwr-train =3D <0xd 0x3a 0x1d 0x0 0x0 0x9 0x5 =
0xe 0xd 0x8 0xb 0x0 0x4 0x20 0x6 0x6 0x6 0x13 0x13 0x6 0x4 0x2 0x0 0x4 =
0x8 0xd 0x5 0x6 0x30000000 0x30000002 0x2 0x88037171 0xd 0x0 0xb 0x40000 =
0x12 0x14 0x16 0x12 0x14 0xc1 0x0 0x30 0x8 0x8 0x3 0x3 0x3 0x14 0x5 0x2 =
0xe 0x3b 0x3b 0x5 0x5 0x4 0x9 0x5 0x4 0x9 0xc8037171 0x31c 0x0 =
0x9160a00d 0x3bbf 0x2c00a0 0x8000 0xbe 0xfff0fff 0xfff0fff 0x0 0x0 0x0 =
0x0 0x880b0000 0xe0017 0x1c0014 0x450031 0x3f002b 0x3d0028 0x3d0031 0xb =
0x100004 0x450031 0x3f002b 0x3d0028 0x3d0031 0xb 0x100004 0x170017 =
0xe000e 0x140014 0x1c001c 0x17 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 =
0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 =
0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x8020221f 0x220f40f =
0x12 0x64000 0x900cc 0xcc0016 0x33000a 0xc1e00303 0x1f13412f 0x10014 =
0x804 0x550 0xf3200000 0xfff0fff 0x287 0xa 0x0 0x0 0x1b 0x1b 0x20000 =
0x50037 0x0 0x10 0x3000 0xa000000 0x2000111 0x8 0x30808 0x15c00 0x101010 =
0x1600 0x0 0x0 0x0 0x34 0x40 0x18000800 0x8000800 0x8000800 0x8000800 =
0x400080 0x8801004 0x20 0x0 0x0 0x0 0x0 0x0 0x0 0xefffefff 0xc0c0c0c0 =
0xc0c0c0c0 0xdcdcdcdc 0xa0a0a0a 0xa0a0a0a 0xa0a0a0a 0x1186033 0x1 0x0 =
0x14 0xa 0x16 0x88161414 0x12 0x40000 0xb080 0x7070404 0x40065 0x513801f =
0x1f101100 0x14 0x107240 0x1124000 0x1125b6a 0xf081000 0x105800 =
0x1110fc00 0xf081300 0x105800 0x1114fc00 0x7000300 0x107240 0x55553c5a =
0xc8161414>;=0A=
				nvidia,emc-trim-regs =3D <0x280028 0x280028 0x280028 0x280028 =
0x280028 0x280028 0x280028 0x280028 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 =
0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 =
0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 =
0x0 0x0 0x0 0x11111111 0x11111111 0x28282828 0x28282828 0x0 0x0 0x0 0x0 =
0xe0017 0x1c0014 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 =
0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 =
0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 =
0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 =
0x0 0x0 0x0 0x0>;=0A=
				nvidia,emc-trim-regs-per-ch =3D <0x0 0x0 0x249249 0x249249 0x249249 =
0x249249 0x0 0x0 0x0 0x0>;=0A=
				nvidia,emc-vref-regs =3D <0x0 0x0 0x0 0x0>;=0A=
				nvidia,emc-dram-timing-regs =3D <0x13 0x104 0x118 0x18 0x6>;=0A=
				nvidia,emc-training-mod-regs =3D <0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 =
0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0>;=0A=
				nvidia,emc-save-restore-mod-regs =3D <0x0 0x0 0x0 0x0 0x0 0x0 0x0 =
0x0 0x0 0x0 0x0 0x0>;=0A=
				nvidia,emc-burst-mc-regs =3D <0x8000001 0x8000004c 0xa1020 =
0x80001028 0x1 0x1 0x3 0x1 0x2 0x1 0x2 0x4 0x1 0x1 0x4 0x8 0x5 0x7 =
0x2020000 0x30201 0x72a30504 0x70000f0f 0x0 0x1f0000 0x80001a 0x80001a =
0x80001a 0x80001a 0x80001a 0x80001a 0x80001a 0x80001a 0x80001a>;=0A=
				nvidia,emc-la-scale-regs =3D <0x1b 0x80001a 0x24c 0xff00b2 0xff00da =
0xff009d 0xff00ff 0xff000c 0xff00ff 0xff000c 0x7f0049 0xff0080 0xff0004 =
0x800ad 0xff 0xff0004 0xff00c6 0xff00c6 0xff006d 0xff00ff 0xff00e2 0xff =
0x80 0xff00ff>;=0A=
			};=0A=
=0A=
			emc-table-derated@1600000 {
				compatible = "nvidia,tegra21-emc-table-derated";
				nvidia,revision = <0x7>;
				nvidia,dvfs-version = "13_derating_1600000_V13_V13";
				clock-frequency = <0x186a00>;
				nvidia,emc-min-mv = <0x377>;
				nvidia,gk20a-min-mv = <0x44c>;
				nvidia,source = "pllm_ud";
				nvidia,src-sel-reg = <0x80188000>;
				nvidia,needs-training = <0x2f0>;
				nvidia,training_pattern = <0x0>;
				nvidia,trained = <0x0>;
				nvidia,periodic_training = <0x1>;
				nvidia,trained_dram_clktree_c0d0u0 = <0x0>;
				nvidia,trained_dram_clktree_c0d0u1 = <0x0>;
				nvidia,trained_dram_clktree_c0d1u0 = <0x0>;
				nvidia,trained_dram_clktree_c0d1u1 = <0x0>;
				nvidia,trained_dram_clktree_c1d0u0 = <0x0>;
				nvidia,trained_dram_clktree_c1d0u1 = <0x0>;
				nvidia,trained_dram_clktree_c1d1u0 = <0x0>;
				nvidia,trained_dram_clktree_c1d1u1 = <0x0>;
				nvidia,current_dram_clktree_c0d0u0 = <0x0>;
				nvidia,current_dram_clktree_c0d0u1 = <0x0>;
				nvidia,current_dram_clktree_c0d1u0 = <0x0>;
				nvidia,current_dram_clktree_c0d1u1 = <0x0>;
				nvidia,current_dram_clktree_c1d0u0 = <0x0>;
				nvidia,current_dram_clktree_c1d0u1 = <0x0>;
				nvidia,current_dram_clktree_c1d1u0 = <0x0>;
				nvidia,current_dram_clktree_c1d1u1 = <0x0>;
				nvidia,run_clocks = <0x40>;
				nvidia,tree_margin = <0x1>;
				nvidia,burst-regs-num = <0xdd>;
				nvidia,burst-regs-per-ch-num = <0x8>;
				nvidia,trim-regs-num = <0x8a>;
				nvidia,trim-regs-per-ch-num = <0xa>;
				nvidia,burst-mc-regs-num = <0x21>;
				nvidia,la-scale-regs-num = <0x18>;
				nvidia,vref-regs-num = <0x4>;
				nvidia,training-mod-regs-num = <0x14>;
				nvidia,dram-timing-regs-num = <0x5>;
				nvidia,min-mrs-wait = <0x30>;
				nvidia,emc-mrw = <0x88010054>;
				nvidia,emc-mrw2 = <0x8802002d>;
				nvidia,emc-mrw3 = <0x880d0000>;
				nvidia,emc-mrw4 = <0xc0000000>;
				nvidia,emc-mrw9 = <0x8c0e4848>;
				nvidia,emc-mrs = <0x0>;
				nvidia,emc-emrs = <0x0>;
				nvidia,emc-emrs2 = <0x0>;
				nvidia,emc-auto-cal-config = <0xa01a51d8>;
				nvidia,emc-auto-cal-config2 = <0x5500000>;
				nvidia,emc-auto-cal-config3 = <0x770000>;
				nvidia,emc-auto-cal-config4 = <0x770000>;
				nvidia,emc-auto-cal-config5 = <0x770000>;
				nvidia,emc-auto-cal-config6 = <0x770000>;
				nvidia,emc-auto-cal-config7 = <0x770000>;
				nvidia,emc-auto-cal-config8 = <0x770000>;
				nvidia,emc-cfg-2 = <0x110835>;
				nvidia,emc-sel-dpd-ctrl = <0x40000>;
				nvidia,emc-fdpd-ctrl-cmd-no-ramp = <0x1>;
				nvidia,dll-clk-src = <0x80188000>;
				nvidia,clk-out-enb-x-0-clk-enb-emc-dll = <0x0>;
				nvidia,emc-clock-latency-change = <0x49c>;
				nvidia,ptfv = <0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0xa 0xa 0xa 0x1>;
				nvidia,emc-registers = <0x66 0x1c0 0xe0 0x0 0x0 0x47 0x20 0x29
0x21 0xc 0x2d 0x0 0x4 0x20 0x20 0x20 0x13 0x17 0x16 0x6 0xe 0xc 0xa 0xe
0x8 0xd 0x24 0x8 0x1000001c 0x10000002 0x14 0x8803f1f1 0x1c 0x1f 0xd
0x6000c 0x33 0x3b 0x3d 0x39 0x3b 0x5e9 0x0 0x17a 0x10 0x10 0x3 0x3 0x3
0x38 0xe 0x2 0x31 0x1cc 0x1cc 0xd 0x18 0xc 0x40 0x25 0x4 0x14 0xc803f1f1
0x1860 0x0 0x9960a00d 0x3bff 0xc00001bb 0x8000 0x55 0x0 0x0 0x0 0x0 0x0
0x0 0x880b6666 0x8000e 0x11000c 0x1c001c 0x1c001c 0x1c001c 0x1c001c 0x7
0x90002 0x1c001c 0x1c001c 0x1c001c 0x1c001c 0x7 0x90002 0xe000e 0x80008
0xc000c 0x110011 0xe 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0
0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0
0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x8020221f 0x220f40f 0x12 0x64000
0x310640 0x6400030 0x1900017 0xc1e0030a 0x1f13612f 0x14 0x80d 0x550
0xf3200000 0x0 0xce6 0x2b 0x0 0x0 0x1b 0x1b 0x20000 0x33 0x0 0x11 0x3000
0x2000000 0x2000101 0x7 0x30808 0x15c00 0x102020 0x1fff1fff 0x0 0x0 0x0
0x34 0x40 0x18000800 0x8000800 0x8000800 0x8000800 0x400080 0x8801004
0x20 0x0 0x0 0x0 0x0 0x0 0x0 0xefff2210 0x0 0x0 0xdcdcdcdc 0xa0a0a0a
0xa0a0a0a 0xa0a0a0a 0x1186190 0x0 0x0 0x3b 0x2b 0x3d 0x88161414 0x33
0x6000c 0x9080 0x7070404 0x40320 0x513801f 0x1f101100 0x14 0x103200
0x1124000 0x1125b6a 0xf081000 0x105800 0x1110fc00 0xf085300 0x105800
0x1114fc00 0x7004300 0x103200 0x55553c5a 0xc8161414>;
				nvidia,emc-burst-regs-per-ch = <0x880c4848 0x880c4848 0xc80c4848
0xc80c4848 0x8c0e4848 0x8c0e4848 0x4c0e4848 0x4c0e4848>;
				nvidia,emc-shadow-regs-ca-train = <0x66 0x1c0 0xe0 0x0 0x0 0x47
0x20 0x29 0x21 0xc 0x2d 0x0 0x4 0x20 0x20 0x20 0x13 0x17 0x16 0x6 0xe
0xc 0xa 0xe 0x8 0xd 0x24 0x8 0x1000001c 0x10000002 0x14 0x8803f1f1 0x1c
0x1f 0xd 0x6000c 0x33 0x3b 0x3d 0x39 0x3b 0x5e9 0x0 0x17a 0x10 0x10 0x3
0x3 0x3 0x38 0xe 0x2 0x31 0x1cc 0x1cc 0xd 0x18 0xc 0x40 0x25 0x4 0x14
0xc803f1f1 0x1860 0x0 0x9960a00d 0x3bff 0xc00001bb 0x8000 0x55 0x0 0x0
0x0 0x0 0x0 0x0 0x880b6666 0x8000e 0x11000c 0x1c001c 0x1c001c 0x1c001c
0x1c001c 0x7 0x90002 0x1c001c 0x1c001c 0x1c001c 0x1c001c 0x7 0x90002
0xe000e 0x80008 0xc000c 0x110011 0xe 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0
0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0
0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x8020221f 0x220f40f
0x12 0x64000 0x310640 0x6400030 0x1900017 0xc1e0030a 0x1f13612f 0x14
0x80d 0x550 0xf3200000 0x0 0xce6 0x2b 0x0 0x0 0x1b 0x1b 0x20000 0x8033
0x0 0x0 0x3000 0x2000000 0x2000101 0x7 0x30808 0x15c00 0x102020
0x1fff1fff 0x0 0x0 0x0 0x34 0x40 0x18000800 0x8000800 0x8000800
0x8000800 0x400080 0x8801004 0x20 0x0 0x0 0x0 0x0 0x0 0x0 0xefff2210 0x0
0x0 0xdcdcdcdc 0xa0a0a0a 0xa0a0a0a 0xa0a0a0a 0x1186190 0x1 0x1f 0x41
0x2b 0x43 0x88161414 0x33 0x6000c 0x9080 0x7070404 0x40320 0x513801f
0x1f101100 0x14 0x103200 0x1124000 0x1125b6a 0xf081000 0x105800
0x1110fc00 0xf085300 0x105800 0x1114fc00 0x7004300 0x103200 0x55553c5a
0xc8161414>;
				nvidia,emc-shadow-regs-quse-train = <0x66 0x1c0 0xe0 0x0 0x0 0x47
0x20 0x28 0x21 0xc 0x2d 0x0 0x4 0x20 0x20 0x20 0x13 0x11 0x16 0x6 0xe
0xc 0xa 0xe 0x8 0xd 0x21 0x2 0x1000001c 0x10000002 0x14 0x8803f1f1 0x1b
0x1 0x80000000 0x6000c 0x33 0x3b 0x3d 0x39 0x3b 0x5e9 0x0 0x17a 0x10
0x10 0x3 0x3 0x3 0x38 0xe 0x2 0x31 0x1cc 0x1cc 0xd 0x18 0xc 0x40 0x25
0x4 0x14 0xc803f1f1 0x1860 0x0 0x9960400d 0x3bff 0xc00001bb 0x8000 0x55
0x0 0x0 0x0 0x0 0x0 0x0 0x880b6666 0x8000e 0x11000c 0x1c001c 0x1c001c
0x1c001c 0x1c001c 0x7 0x90002 0x1c001c 0x1c001c 0x1c001c 0x1c001c 0x7
0x90002 0xe000e 0x80008 0xc000c 0x110011 0xe 0x0 0x0 0x0 0x0 0x0 0x0 0x0
0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0
0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x8020221f
0x220f40f 0x12 0x64000 0x310640 0x6400030 0x1900017 0xc1e0030a
0x1f13612f 0x14 0x80d 0x550 0xf3200000 0x0 0xce6 0x2b 0x0 0x0 0x1b 0x1b
0x30020000 0x8033 0x0 0x11 0x3000 0x2000000 0x2000101 0x7 0x30808
0x15c00 0x102020 0x1fff1fff 0x0 0x0 0x0 0x34 0x40 0x18000800 0x8000800
0x8000800 0x8000800 0x400080 0x8801004 0x20 0x0 0x0 0x0 0x0 0x0 0x0
0xefff2210 0x0 0x0 0xdcdcdcdc 0xa0a0a0a 0xa0a0a0a 0xa0a0a0a 0x1186190
0x1 0x1f 0x45 0x35 0x47 0x88161414 0x3d 0x6000c 0x9080 0x7070404 0x40320
0x513801f 0x1f101100 0x14 0x103200 0x1124000 0x1125b6a 0xf081000
0x105800 0x1110fc00 0xf085300 0x105800 0x1114fc00 0x7004300 0x103200
0x55553c5a 0xc8161414>;
				nvidia,emc-shadow-regs-rdwr-train = <0x66 0x1c0 0xe0 0x0 0x0 0x47
0x20 0x29 0x21 0xc 0x2d 0x0 0x4 0x20 0x20 0x20 0x13 0x17 0x16 0x6 0xe
0xc 0xa 0xe 0x8 0xd 0x24 0x8 0x1000001c 0x10000002 0x14 0x8803f1f1 0x1c
0x1f 0xd 0x6000c 0x33 0x3b 0x3d 0x39 0x3b 0x5e9 0x0 0x17a 0x10 0x10 0x3
0x3 0x3 0x38 0xe 0x2 0x31 0x1cc 0x1cc 0xd 0x18 0xc 0x40 0x25 0x4 0x14
0xc803f1f1 0x1860 0x0 0x9960a00d 0x3bff 0xc00001bb 0x8000 0x55 0x0 0x0
0x0 0x0 0x0 0x0 0x880b6666 0x8000e 0x11000c 0x1c001c 0x1c001c 0x1c001c
0x1c001c 0x7 0x90002 0x1c001c 0x1c001c 0x1c001c 0x1c001c 0x7 0x90002
0xe000e 0x80008 0xc000c 0x110011 0xe 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0
0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0
0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x8020221f 0x220f40f
0x12 0x64000 0x310640 0x6400030 0x1900017 0xc1e0030a 0x1f13612f 0x14
0x80d 0x550 0xf3200000 0x0 0xce6 0x2b 0x0 0x0 0x1b 0x1b 0x20000 0x33 0x0
0x11 0x3000 0x2000000 0x2000101 0x7 0x30808 0x15c00 0x102020 0x1fff1fff
0x0 0x0 0x0 0x34 0x40 0x18000800 0x8000800 0x8000800 0x8000800 0x400080
0x8801004 0x20 0x0 0x0 0x0 0x0 0x0 0x0 0xefff2210 0x0 0x0 0xdcdcdcdc
0xa0a0a0a 0xa0a0a0a 0xa0a0a0a 0x1186190 0x1 0x0 0x3b 0x2b 0x3d
0x88161414 0x33 0x6000c 0xb080 0x7070404 0x40320 0x513801f 0x1f101100
0x14 0x103200 0x1124000 0x1125b6a 0xf081000 0x105800 0x1110fc00
0xf085300 0x105800 0x1114fc00 0x7004300 0x103200 0x55553c5a 0xc8161414>;
				nvidia,emc-trim-regs = <0x200020 0x200020 0x200020 0x200020
0x200020 0x200020 0x200020 0x200020 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0
0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0
0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0
0x0 0x0 0x0 0x11111111 0x11111111 0x11111111 0x11111111 0x2b0022
0x2b0026 0x260025 0x260026 0x8000e 0x11000c 0x2b0022 0x2b0026 0x260025
0x260026 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0
0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0
0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0
0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0>;
				nvidia,emc-trim-regs-per-ch = <0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0
0x0>;
				nvidia,emc-vref-regs = <0x0 0x0 0x0 0x0>;
				nvidia,emc-dram-timing-regs = <0x13 0x104 0x118 0x7 0x20>;
				nvidia,emc-training-mod-regs = <0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0
0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0>;
				nvidia,emc-save-restore-mod-regs = <0x0 0x0 0x0 0x0 0x0 0x0 0x0
0x0 0x4 0x4 0x4 0x4>;
				nvidia,emc-burst-mc-regs = <0xc 0x80000080 0xa1020 0x80001028 0x6
0x7 0x19 0x10 0xf 0x4 0x3 0xe 0x1 0x1 0xc 0x8 0xa 0x37 0x5060000 0xe090c
0x726c241a 0x70000f0f 0x0 0x1f0000 0x80001a 0x80001a 0x80001a 0x80001a
0x80001a 0x80001a 0x80001a 0x80001a 0x80001a>;
				nvidia,emc-la-scale-regs = <0xd0 0x80001a 0x1203 0x80003d 0x800038
0x800041 0x800090 0x800005 0x800090 0x800005 0x340049 0x800080 0x800004
0x80016 0x80 0x800004 0x800019 0x800019 0x800018 0x800095 0x80001d 0x80
0x2c 0x800080>;
			};
		};
	};

	dummy-cool-dev {
		compatible = "dummy-cooling-dev";
		#cooling-cells = <0x2>;
		linux,phandle = <0x123>;
		phandle = <0x123>;
	};

	regulators {
		compatible = "simple-bus";
		device_type = "fixed-regulators";
		#address-cells = <0x1>;
		#size-cells = <0x0>;

		regulator@0 {
			compatible = "regulator-fixed";
			reg = <0x0>;
			regulator-name = "vdd-ac-bat";
			regulator-min-microvolt = <0x4c4b40>;
			regulator-max-microvolt = <0x4c4b40>;
			regulator-always-on;
			linux,phandle = <0x42>;
			phandle = <0x42>;
		};

		regulator@1 {
			compatible = "regulator-fixed";
			reg = <0x1>;
			regulator-name = "vdd-5v0-sys";
			regulator-min-microvolt = <0x4c4b40>;
			regulator-max-microvolt = <0x4c4b40>;
			regulator-enable-ramp-delay = <0xa0>;
			regulator-disable-ramp-delay = <0x2710>;
			linux,phandle = <0xa2>;
			phandle = <0xa2>;
		};

		regulator@2 {
			compatible = "regulator-fixed-sync";
			reg = <0x2>;
			regulator-name = "vdd-3v3-sys";
			regulator-min-microvolt = <0x325aa0>;
			regulator-max-microvolt = <0x325aa0>;
			gpio = <0x1e 0x3 0x0>;
			enable-active-high;
			vin-supply = <0xa2>;
			regulator-enable-ramp-delay = <0xf0>;
			regulator-disable-ramp-delay = <0x2c4c>;
			linux,phandle = <0x47>;
			phandle = <0x47>;
		};

		regulator@3 {
			compatible = "regulator-fixed-sync";
			reg = <0x3>;
			regulator-name = "vdd-3v3-sd";
			regulator-min-microvolt = <0x325aa0>;
			regulator-max-microvolt = <0x325aa0>;
			gpio = <0x56 0xcb 0x0>;
			enable-active-high;
			regulator-boot-on;
			vin-supply = <0x47>;
			linux,phandle = <0x99>;
			phandle = <0x99>;
		};

		regulator@4 {
			compatible = "regulator-fixed-sync";
			reg = <0x4>;
			regulator-name = "avdd-io-edp-1v05";
			regulator-min-microvolt = <0x100590>;
			regulator-max-microvolt = <0x100590>;
			gpio = <0x1e 0x7 0x0>;
			enable-active-high;
			vin-supply = <0x3e>;
			linux,phandle = <0x67>;
			phandle = <0x67>;
		};

		regulator@5 {
			compatible = "regulator-fixed";
			reg = <0x5>;
			regulator-name = "vdd-5v0-hdmi";
			regulator-min-microvolt = <0x4c4b40>;
			regulator-max-microvolt = <0x4c4b40>;
			vin-supply = <0xa2>;
			linux,phandle = <0x64>;
			phandle = <0x64>;
		};

		regulator@6 {
			compatible = "regulator-fixed";
			reg = <0x6>;
			regulator-name = "vdd-1v8-sys";
			regulator-min-microvolt = <0x1b7740>;
			regulator-max-microvolt = <0x1b7740>;
			vin-supply = <0x47>;
			linux,phandle = <0xa3>;
			phandle = <0xa3>;
		};

		regulator@7 {
			compatible = "regulator-fixed";
			reg = <0x7>;
			regulator-name = "vdd-fan";
			regulator-min-microvolt = <0x4c4b40>;
			regulator-max-microvolt = <0x4c4b40>;
			vin-supply = <0xa2>;
			linux,phandle = <0xa4>;
			phandle = <0xa4>;
		};

		regulator@8 {
			compatible = "regulator-fixed";
			reg = <0x8>;
			regulator-name = "vdd-usb-vbus";
			regulator-min-microvolt = <0x4c4b40>;
			regulator-max-microvolt = <0x4c4b40>;
			vin-supply = <0xa2>;
			linux,phandle = <0x41>;
			phandle = <0x41>;
		};

		regulator@9 {
			compatible = "regulator-fixed-sync";
			reg = <0x9>;
			regulator-name = "vdd-usb-hub-en";
			regulator-min-microvolt = <0x4c4b40>;
			regulator-max-microvolt = <0x4c4b40>;
			vin-supply = <0xa3>;
			linux,phandle = <0xb3>;
			phandle = <0xb3>;
		};

		regulator@10 {
			compatible = "regulator-fixed";
			reg = <0xa>;
			regulator-name = "vdd-usb-vbus2";
			regulator-min-microvolt = <0x4c4b40>;
			regulator-max-microvolt = <0x4c4b40>;
			vin-supply = <0x47>;
			linux,phandle = <0x43>;
			phandle = <0x43>;
		};
	};

	pwm-fan {
		vdd-fan-supply = <0xa4>;
		compatible = "pwm-fan";
		status = "okay";
		pwms = <0xa5 0x3 0xb116>;
		shared_data = <0xa6>;
		active_pwm = <0x0 0x50 0x78 0xa0 0xff 0xff 0xff 0xff 0xff 0xff>;
	};

	dvfs_rails {
		compatible = "simple-bus";
		#address-cells = <0x1>;
		#size-cells = <0x0>;

		vdd-gpu-scaling-cdev@7 {
			status = "okay";
			reg = <0x7>;
			cooling-min-state = <0x0>;
			cooling-max-state = <0x5>;
			#cooling-cells = <0x2>;
			compatible = "nvidia,tegra210-rail-scaling-cdev";
			cdev-type = "gpu_scaling";
			nvidia,constraint;
			nvidia,trips = <0xa7 0x3b6 0x3 0x0 0x5 0x0 0x6 0x0 0x7 0x0 0x8 0x0>;
			linux,phandle = <0x4>;
			phandle = <0x4>;
		};

		vdd-gpu-vmax-cdev@9 {
			status = "okay";
			reg = <0x9>;
			cooling-min-state = <0x0>;
			cooling-max-state = <0x1>;
			#cooling-cells = <0x2>;
			compatible = "nvidia,tegra210-rail-vmax-cdev";
			cdev-type = "GPU-cap";
			nvidia,constraint-ucm2;
			nvidia,trips = <0x9 0x46c 0x442>;
			clocks = <0x21 0x1eb>;
			clock-names = "cap-clk";
			linux,phandle = <0xa>;
			phandle = <0xa>;
		};
	};

	pfsd {
		num_resources = <0x0>;
		secret = <0x2f>;
		active_steps = <0xa>;
		active_rpm = <0x0 0x3e8 0x7d0 0xbb8 0xfa0 0x1388 0x1770 0x1b58
0x2710 0x2af8>;
		active_rru = <0x28 0x2 0x1 0x1 0x1 0x1 0x1 0x1 0x1 0x1>;
		active_rrd = <0x28 0x2 0x1 0x1 0x1 0x1 0x1 0x1 0x1 0x1>;
		state_cap_lookup = <0x2 0x2 0x2 0x2 0x3 0x3 0x3 0x4 0x4 0x4>;
		pwm_period = <0xb116>;
		pwm_id = <0x3>;
		step_time = <0x64>;
		state_cap = <0x7>;
		active_pwm_max = <0xff>;
		tach_gpio = <0x56 0xca 0x1>;
		pwm_gpio = <0x56 0x27 0x1>;
		pwm_polarity = <0x0>;
		suspend_state = <0x0>;
		tach_period = <0x3e8>;
		linux,phandle = <0xa6>;
		phandle = <0xa6>;
	};

	tegra-camera-platform {
		compatible = "nvidia, tegra-camera-platform";
		status = "okay";
		num_csi_lanes = <0x4>;
		max_lane_speed = <0x16e360>;
		min_bits_per_pixel = <0xa>;
		vi_peak_byte_per_pixel = <0x2>;
		vi_bw_margin_pct = <0x19>;
		max_pixel_rate = <0x3a980>;
		isp_peak_byte_per_pixel = <0x5>;
		isp_bw_margin_pct = <0x19>;
		linux,phandle = <0xc4>;
		phandle = <0xc4>;

		modules {

			module0 {
				badge = "porg_front_RBPCV2";
				position = "front";
				orientation = [31 00];
				linux,phandle = <0xba>;
				phandle = <0xba>;

				drivernode0 {
					pcl_id = "v4l2_sensor";
					devname = "imx219 7-0010";
					proc-device-tree =
"/proc/device-tree/cam_i2cmux/i2c@0/rbpcv2_imx219_a@10";
					linux,phandle = <0xbb>;
					phandle = <0xbb>;
				};

				drivernode1 {
					pcl_id = "v4l2_lens";
					proc-device-tree = "/proc/device-tree/lens_imx219@RBPCV2/";
					linux,phandle = <0xbc>;
					phandle = <0xbc>;
				};
			};

			module1 {
				badge = "porg_rear_RBPCV2";
				position = "rear";
				orientation = [31 00];
				linux,phandle = <0xc5>;
				phandle = <0xc5>;

				drivernode0 {
					pcl_id = "v4l2_sensor";
					devname = "imx219 8-0010";
					proc-device-tree =
"/proc/device-tree/cam_i2cmux/i2c@1/rbpcv2_imx219_e@10";
					linux,phandle = <0xcb>;
					phandle = <0xcb>;
				};

				drivernode1 {
					pcl_id = "v4l2_lens";
					proc-device-tree = "/proc/device-tree/lens_imx219@RBPCV2/";
					linux,phandle = <0xcc>;
					phandle = <0xcc>;
				};
			};
		};
	};

	lens_imx219@RBPCV2 {
		min_focus_distance = "0.0";
		hyper_focal = "0.0";
		focal_length = "3.04";
		f_number = "2.0";
		aperture = "0.0";
	};

	cam_i2cmux {
		compatible = "i2c-mux-gpio";
		#address-cells = <0x1>;
		#size-cells = <0x0>;
		mux-gpios = <0x56 0x40 0x0>;
		i2c-parent = <0xa8>;
		status = "disabled";
		linux,phandle = <0xd1>;
		phandle = <0xd1>;

		i2c@0 {
			status = "disabled";
			reg = <0x0>;
			#address-cells = <0x1>;
			#size-cells = <0x0>;
			linux,phandle = <0xd2>;
			phandle = <0xd2>;

			rbpcv2_imx219_a@10 {
				compatible = "nvidia,imx219";
				reg = <0x10>;
				devnode = "video0";
				physical_w = "3.680";
				physical_h = "2.760";
				sensor_model = "imx219";
				use_sensor_mode_id = "true";
				status = "disabled";
				reset-gpios = <0x56 0x97 0x0>;
				linux,phandle = <0xc9>;
				phandle = <0xc9>;

				mode0 {
					mclk_khz = "24000";
					num_lanes = [32 00];
					tegra_sinterface = "serial_a";
					phy_mode = "DPHY";
					discontinuous_clk = "yes";
					dpcm_enable = "false";
					cil_settletime = [30 00];
					active_w = "3264";
					active_h = "2464";
					pixel_t = "bayer_rggb";
					readout_orientation = "90";
					line_length = "3448";
					inherent_gain = [31 00];
					mclk_multiplier = "9.33";
					pix_clk_hz = "182400000";
					gain_factor = "16";
					framerate_factor = "1000000";
					exposure_factor = "1000000";
					min_gain_val = "16";
					max_gain_val = "170";
					step_gain_val = [31 00];
					default_gain = "16";
					min_hdr_ratio = [31 00];
					max_hdr_ratio = [31 00];
					min_framerate = "2000000";
					max_framerate = "21000000";
					step_framerate = [31 00];
					default_framerate = "21000000";
					min_exp_time = "13";
					max_exp_time = "683709";
					step_exp_time = [31 00];
					default_exp_time = "2495";
					embedded_metadata_height = [32 00];
				};

				mode1 {
					mclk_khz = "24000";
					num_lanes = [32 00];
					tegra_sinterface = "serial_a";
					phy_mode = "DPHY";
					discontinuous_clk = "yes";
					dpcm_enable = "false";
					cil_settletime = [30 00];
					active_w = "3264";
					active_h = "1848";
					pixel_t = "bayer_rggb";
					readout_orientation = "90";
					line_length = "3448";
					inherent_gain = [31 00];
					mclk_multiplier = "9.33";
					pix_clk_hz = "182400000";
					gain_factor = "16";
					framerate_factor = "1000000";
					exposure_factor = "1000000";
					min_gain_val = "16";
					max_gain_val = "170";
					step_gain_val = [31 00];
					default_gain = "16";
					min_hdr_ratio = [31 00];
					max_hdr_ratio = [31 00];
					min_framerate = "2000000";
					max_framerate = "28000000";
					step_framerate = [31 00];
					default_framerate = "28000000";
					min_exp_time = "13";
					max_exp_time = "683709";
					step_exp_time = [31 00];
					default_exp_time = "2495";
					embedded_metadata_height = [32 00];
				};

				mode2 {
					mclk_khz = "24000";
					num_lanes = [32 00];
					tegra_sinterface = "serial_a";
					phy_mode = "DPHY";
					discontinuous_clk = "yes";
					dpcm_enable = "false";
					cil_settletime = [30 00];
					active_w = "1920";
					active_h = "1080";
					pixel_t = "bayer_rggb";
					readout_orientation = "90";
					line_length = "3448";
					inherent_gain = [31 00];
					mclk_multiplier = "9.33";
					pix_clk_hz = "182400000";
					gain_factor = "16";
					framerate_factor = "1000000";
					exposure_factor = "1000000";
					min_gain_val = "16";
					max_gain_val = "170";
					step_gain_val = [31 00];
					default_gain = "16";
					min_hdr_ratio = [31 00];
					max_hdr_ratio = [31 00];
					min_framerate = "2000000";
					max_framerate = "30000000";
					step_framerate = [31 00];
					default_framerate = "30000000";
					min_exp_time = "13";
					max_exp_time = "683709";
					step_exp_time = [31 00];
					default_exp_time = "2495";
					embedded_metadata_height = [32 00];
				};

				mode3 {
					mclk_khz = "24000";
					num_lanes = [32 00];
					tegra_sinterface = "serial_a";
					phy_mode = "DPHY";
					discontinuous_clk = "yes";
					dpcm_enable = "false";
					cil_settletime = [30 00];
					active_w = "1280";
					active_h = "720";
					pixel_t = "bayer_rggb";
					readout_orientation = "90";
					line_length = "3448";
					inherent_gain = [31 00];
					mclk_multiplier = "9.33";
					pix_clk_hz = "182400000";
					gain_factor = "16";
					framerate_factor = "1000000";
					exposure_factor = "1000000";
					min_gain_val = "16";
					max_gain_val = "170";
					step_gain_val = [31 00];
					default_gain = "16";
					min_hdr_ratio = [31 00];
					max_hdr_ratio = [31 00];
					min_framerate = "2000000";
					max_framerate = "60000000";
					step_framerate = [31 00];
					default_framerate = "60000000";
					min_exp_time = "13";
					max_exp_time = "683709";
					step_exp_time = [31 00];
					default_exp_time = "2495";
					embedded_metadata_height = [32 00];
				};

				mode4 {
					mclk_khz = "24000";
					num_lanes = [32 00];
					tegra_sinterface = "serial_a";
					phy_mode =3D "DPHY";=0A=
					discontinuous_clk =3D "yes";=0A=
					dpcm_enable =3D "false";=0A=
					cil_settletime =3D [30 00];=0A=
					active_w =3D "1280";=0A=
					active_h =3D "720";=0A=
					pixel_t =3D "bayer_rggb";=0A=
					readout_orientation =3D "90";=0A=
					line_length =3D "3448";=0A=
					inherent_gain =3D [31 00];=0A=
					mclk_multiplier =3D "9.33";=0A=
					pix_clk_hz =3D "169600000";=0A=
					gain_factor =3D "16";=0A=
					framerate_factor =3D "1000000";=0A=
					exposure_factor =3D "1000000";=0A=
					min_gain_val =3D "16";=0A=
					max_gain_val =3D "170";=0A=
					step_gain_val =3D [31 00];=0A=
					default_gain =3D "16";=0A=
					min_hdr_ratio =3D [31 00];=0A=
					max_hdr_ratio =3D [31 00];=0A=
					min_framerate =3D "2000000";=0A=
					max_framerate =3D "120000000";=0A=
					step_framerate =3D [31 00];=0A=
					default_framerate =3D "120000000";=0A=
					min_exp_time =3D "13";=0A=
					max_exp_time =3D "683709";=0A=
					step_exp_time =3D [31 00];=0A=
					default_exp_time =3D "2495";=0A=
					embedded_metadata_height =3D [32 00];=0A=
				};=0A=
=0A=
				ports {=0A=
					#address-cells =3D <0x1>;=0A=
					#size-cells =3D <0x0>;=0A=
=0A=
					port@0 {=0A=
						reg =3D <0x0>;=0A=
=0A=
						endpoint {=0A=
							port-index =3D <0x0>;=0A=
							bus-width =3D <0x2>;=0A=
							remote-endpoint =3D <0x73>;=0A=
							linux,phandle =3D <0x74>;=0A=
							phandle =3D <0x74>;=0A=
						};=0A=
					};=0A=
				};=0A=
			};=0A=
		};=0A=
=0A=
		i2c@1 {
			status = "disabled";
			reg = <0x1>;
			#address-cells = <0x1>;
			#size-cells = <0x0>;
			linux,phandle = <0xd3>;
			phandle = <0xd3>;

			rbpcv2_imx219_e@10 {
				compatible = "nvidia,imx219";
				reg = <0x10>;
				devnode = "video1";
				physical_w = "3.680";
				physical_h = "2.760";
				sensor_model = "imx219";
				use_sensor_mode_id = "true";
				status = "disabled";
				reset-gpios = <0x56 0x98 0x0>;
				linux,phandle = <0xca>;
				phandle = <0xca>;

				mode0 {
					mclk_khz = "24000";
					num_lanes = [32 00];
					tegra_sinterface = "serial_e";
					phy_mode = "DPHY";
					discontinuous_clk = "yes";
					dpcm_enable = "false";
					cil_settletime = [30 00];
					active_w = "3264";
					active_h = "2464";
					pixel_t = "bayer_rggb";
					readout_orientation = "90";
					line_length = "3448";
					inherent_gain = [31 00];
					mclk_multiplier = "9.33";
					pix_clk_hz = "182400000";
					gain_factor = "16";
					framerate_factor = "1000000";
					exposure_factor = "1000000";
					min_gain_val = "16";
					max_gain_val = "170";
					step_gain_val = [31 00];
					default_gain = "16";
					min_hdr_ratio = [31 00];
					max_hdr_ratio = [31 00];
					min_framerate = "2000000";
					max_framerate = "21000000";
					step_framerate = [31 00];
					default_framerate = "21000000";
					min_exp_time = "13";
					max_exp_time = "683709";
					step_exp_time = [31 00];
					default_exp_time = "2495";
					embedded_metadata_height = [32 00];
				};

				mode1 {
					mclk_khz = "24000";
					num_lanes = [32 00];
					tegra_sinterface = "serial_e";
					phy_mode = "DPHY";
					discontinuous_clk = "yes";
					dpcm_enable = "false";
					cil_settletime = [30 00];
					active_w = "3264";
					active_h = "1848";
					pixel_t = "bayer_rggb";
					readout_orientation = "90";
					line_length = "3448";
					inherent_gain = [31 00];
					mclk_multiplier = "9.33";
					pix_clk_hz = "182400000";
					gain_factor = "16";
					framerate_factor = "1000000";
					exposure_factor = "1000000";
					min_gain_val = "16";
					max_gain_val = "170";
					step_gain_val = [31 00];
					default_gain = "16";
					min_hdr_ratio = [31 00];
					max_hdr_ratio = [31 00];
					min_framerate = "2000000";
					max_framerate = "28000000";
					step_framerate = [31 00];
					default_framerate = "28000000";
					min_exp_time = "13";
					max_exp_time = "683709";
					step_exp_time = [31 00];
					default_exp_time = "2495";
					embedded_metadata_height = [32 00];
				};

				mode2 {
					mclk_khz = "24000";
					num_lanes = [32 00];
					tegra_sinterface = "serial_e";
					phy_mode = "DPHY";
					discontinuous_clk = "yes";
					dpcm_enable = "false";
					cil_settletime = [30 00];
					active_w = "1920";
					active_h = "1080";
					pixel_t = "bayer_rggb";
					readout_orientation = "90";
					line_length = "3448";
					inherent_gain = [31 00];
					mclk_multiplier = "9.33";
					pix_clk_hz = "182400000";
					gain_factor = "16";
					framerate_factor = "1000000";
					exposure_factor = "1000000";
					min_gain_val = "16";
					max_gain_val = "170";
					step_gain_val = [31 00];
					default_gain = "16";
					min_hdr_ratio = [31 00];
					max_hdr_ratio = [31 00];
					min_framerate = "2000000";
					max_framerate = "30000000";
					step_framerate = [31 00];
					default_framerate = "30000000";
					min_exp_time = "13";
					max_exp_time = "683709";
					step_exp_time = [31 00];
					default_exp_time = "2495";
					embedded_metadata_height = [32 00];
				};

				mode3 {
					mclk_khz = "24000";
					num_lanes = [32 00];
					tegra_sinterface = "serial_e";
					phy_mode = "DPHY";
					discontinuous_clk = "yes";
					dpcm_enable = "false";
					cil_settletime = [30 00];
					active_w = "1280";
					active_h = "720";
					pixel_t = "bayer_rggb";
					readout_orientation = "90";
					line_length = "3448";
					inherent_gain = [31 00];
					mclk_multiplier = "9.33";
					pix_clk_hz = "182400000";
					gain_factor = "16";
					framerate_factor = "1000000";
					exposure_factor = "1000000";
					min_gain_val = "16";
					max_gain_val = "170";
					step_gain_val = [31 00];
					default_gain = "16";
					min_hdr_ratio = [31 00];
					max_hdr_ratio = [31 00];
					min_framerate = "2000000";
					max_framerate = "60000000";
					step_framerate = [31 00];
					default_framerate = "60000000";
					min_exp_time = "13";
					max_exp_time = "683709";
					step_exp_time = [31 00];
					default_exp_time = "2495";
					embedded_metadata_height = [32 00];
				};

				mode4 {
					mclk_khz = "24000";
					num_lanes = [32 00];
					tegra_sinterface = "serial_e";
					phy_mode = "DPHY";
					discontinuous_clk = "yes";
					dpcm_enable = "false";
					cil_settletime = [30 00];
					active_w = "1280";
					active_h = "720";
					pixel_t = "bayer_rggb";
					readout_orientation = "90";
					line_length = "3448";
					inherent_gain = [31 00];
					mclk_multiplier = "9.33";
					pix_clk_hz = "169600000";
					gain_factor = "16";
					framerate_factor = "1000000";
					exposure_factor = "1000000";
					min_gain_val = "16";
					max_gain_val = "170";
					step_gain_val = [31 00];
					default_gain = "16";
					min_hdr_ratio = [31 00];
					max_hdr_ratio = [31 00];
					min_framerate = "2000000";
					max_framerate = "120000000";
					step_framerate = [31 00];
					default_framerate = "120000000";
					min_exp_time = "13";
					max_exp_time = "683709";
					step_exp_time = [31 00];
					default_exp_time = "2495";
					embedded_metadata_height = [32 00];
				};

				ports {
					#address-cells = <0x1>;
					#size-cells = <0x0>;

					port@0 {
						reg = <0x0>;

						endpoint {
							port-index = <0x4>;
							bus-width = <0x2>;
							remote-endpoint = <0xa9>;
							linux,phandle = <0x76>;
							phandle = <0x76>;
						};
					};
				};
			};
		};
	};

	tfesd {
		secret = <0x25>;
		toffset = <0x0>;
		polling_period = <0x44c>;
		ndevs = <0x2>;
		cdev_type = "pwm-fan";
		tzp_governor_name = "pid_thermal_gov";
		linux,phandle = <0xaa>;
		phandle = <0xaa>;

		dev1 {
			dev_data = "CPU-therm";
			coeffs = <0x32 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0>;
		};

		dev2 {
			dev_data = "GPU-therm";
			coeffs = <0x32 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0>;
		};
	};

	thermal-fan-est {
		compatible = "thermal-fan-est";
		status = "okay";
		num_resources = <0x0>;
		shared_data = <0xaa>;
		trip_length = <0xa>;
		active_trip_temps = <0x0 0xc738 0xee48 0x11558 0x14050 0x222e0 0x249f0 0x27100 0x29810 0x2bf20>;
		active_hysteresis = <0x0 0x3a98 0x2328 0x2328 0x2710 0x0 0x0 0x0 0x0 0x0>;
	};

	gpio-keys {
		compatible = "gpio-keys";
		gpio-keys,name = "gpio-keys";
		status = "okay";
		disable-on-recovery-kernel;

		power {
			label = "Power";
			gpios = <0x56 0xbd 0x1>;
			linux,code = <0x74>;
			gpio-key,wakeup;
			debounce-interval = <0x1e>;
			nvidia,pmc-wakeup = <0x37 0x0 0x18 0x0>;
		};

		forcerecovery {
			label = "Forcerecovery";
			gpios = <0x56 0xbe 0x1>;
			linux,code = <0x74>;
			gpio-key,wakeup;
			debounce-interval = <0x1e>;
		};
	};

	gpio-timed-keys {
		compatible = "gpio-timed-keys";
		gpio-keys,name = "gpio-timed-keys";
		status = "disabled";
		disable-on-recovery-kernel;

		power {
			label = "Power";
			gpios = <0x56 0xbd 0x1>;
			linux,num_codes = <0x3>;
			linux,press-time-secs = <0x1 0x3 0x5>;
			linux,key-codes = <0x6c 0x1af 0x1c>;
			gpio-key,wakeup;
		};
	};

	spdif-dit.0@0 {
		compatible = "linux,spdif-dit";
		reg = <0x0 0x0 0x0 0x0>;
		linux,phandle = <0x4f>;
		phandle = <0x4f>;
	};

	spdif-dit.1@1 {
		compatible = "linux,spdif-dit";
		reg = <0x0 0x1 0x0 0x0>;
		linux,phandle = <0x51>;
		phandle = <0x51>;
	};

	spdif-dit.2@2 {
		compatible = "linux,spdif-dit";
		reg = <0x0 0x2 0x0 0x0>;
		linux,phandle = <0x53>;
		phandle = <0x53>;
	};

	spdif-dit.3@3 {
		compatible = "linux,spdif-dit";
		reg = <0x0 0x3 0x0 0x0>;
		linux,phandle = <0x55>;
		phandle = <0x55>;
	};

	spdif-dit.4@4 {
		compatible = "linux,spdif-dit";
		reg = <0x0 0x4 0x0 0x0>;
		linux,phandle = <0x124>;
		phandle = <0x124>;
	};

	spdif-dit.5@5 {
		compatible = "linux,spdif-dit";
		reg = <0x0 0x5 0x0 0x0>;
		linux,phandle = <0x125>;
		phandle = <0x125>;
	};

	spdif-dit.6@6 {
		compatible = "linux,spdif-dit";
		reg = <0x0 0x6 0x0 0x0>;
		linux,phandle = <0x126>;
		phandle = <0x126>;
	};

	spdif-dit.7@7 {
		compatible = "linux,spdif-dit";
		reg = <0x0 0x7 0x0 0x0>;
		linux,phandle = <0x127>;
		phandle = <0x127>;
	};

	cpufreq {
		compatible = "nvidia,tegra210-cpufreq";

		cpu-scaling-data {
			freq-table = <0x18e70 0x31ce0 0x4b000 0x62700 0x7e900 0x96000 0xad700 0xc9900 0xe1000 0xfd200 0x114900 0x12ad40 0x143bb0 0x15ca20 0x169158 0x175890 0x18e700 0x1a7570 0x1c03e0 0x1d2eb4 0x1ebd24>;
			preserve-across-suspend;
		};

		emc-scaling-data {
			emc-cpu-limit-table = <0x18e70 0x109a0 0x32000 0x18e70 0x4b000 0x31ce0 0x62638 0x639c0 0xad700 0xa2800 0xfd200 0x186a00>;
		};
	};

	eeprom-manager {
		data-size = <0x100>;

		bus@0 {
			i2c-bus = <0xab>;
			word-address-1-byte-slave-addresses = <0x50>;
		};

		bus@1 {
			i2c-bus = <0xac>;
			word-address-1-byte-slave-addresses = <0x50 0x57>;
		};
	};

	plugin-manager {

		fragement@0 {
			ids = ">=3448-0000-100", ">=3448-0002-100";

			override@0 {
				target = <0xad>;

				_overlay_ {

					channel@0 {
						ti,rail-name = "POM_5V_IN";
					};

					channel@1 {
						ti,rail-name = "POM_5V_GPU";
					};
				};
			};
		};

		fragment@1 {
			ids = ">=3448-0000-101", ">=3448-0002-101";

			override@0 {
				target = <0xa1>;

				_overlay_ {
					regulator-min-microvolt = <0x927c0>;
				};
			};
		};

		fragment@2 {
			ids = "<3448-0000-200", "<3448-0002-200";

			override@0 {
				target = <0xae>;

				_overlay_ {
					regulator-supplies = "vdd-1v8-audio-hv", "vdd-1v8-audio-hv-bias";
					vdd-1v8-audio-hv-supply = <0x36>;
					vdd-1v8-audio-hv-bias-supply = <0x36>;
					fsync-width = <0xf>;
					status = "okay";
				};
			};

			override@1 {
				target = <0x4e>;

				_overlay_ {
					status = "disabled";
				};
			};

			override@2 {
				target = <0xaf>;

				_overlay_ {

					nvidia,dai-link-1 {
						cpu-dai = <0xae>;
						cpu-dai-name = "I2S1";
					};
				};
			};
		};

		fragment@3 {
			ids = ">=3448-0002-100";

			override@0 {
				target = <0xb0>;

				_overlay_ {
					status = "okay";
				};
			};

			override@1 {
				target = <0xb1>;

				_overlay_ {
					status = "disabled";
				};
			};
		};

		fragment@4 {
			ids = "3449-0000-000";

			override@0 {
				target = <0xb2>;

				_overlay_ {
					status = "disabled";
				};
			};

			override@1 {
				target = <0xb3>;

				_overlay_ {
					gpio = <0x56 0x6 0x0>;
					enable-active-low;
					gpio-open-drain;
				};
			};

			override@2 {
				target = <0xb4>;

				_overlay_ {
					vbus-supply = <0xb3>;
				};
			};
		};

		fragment@5 {
			ids = "3449-0000-100", "3449-0000-200";

			override@0 {
				target = <0xb2>;

				_overlay_ {
					status = "disabled";
				};
			};

			override@1 {
				target = <0xb3>;

				_overlay_ {
					gpio = <0x56 0x6 0x0>;
					enable-active-high;
				};
			};

			override@2 {
				target = <0xb4>;

				_overlay_ {
					vbus-supply = <0xb3>;
				};
			};
		};

		fragement@6 {
			odm-data = "enable-tegra-wdt";

			override@0 {
				target = <0xb5>;

				_overlay_ {
					status = "okay";
				};
			};
		};

		fragement@7 {
			odm-data = "enable-pmic-wdt";

			override@0 {
				target = <0xb6>;

				_overlay_ {
					status = "okay";
				};
			};
		};

		fragement@8 {
			odm-data = "enable-pmic-wdt", "enable-tegra-wdt";

			override@0 {
				target = <0xb7>;

				_overlay_ {
					status = "disabled";
				};
			};
		};

		fragement@9 {
			ids = "<3448-0000-300", "<3448-0002-300";

			override@0 {
				target = <0x58>;

				_overlay_ {
					status = "disabled";
				};
			};

			override@1 {
				target = <0xb8>;

				_overlay_ {
					keep-power-in-suspend;
					non-removable;
				};
			};

			override@2 {
				target = <0xb9>;

				_overlay_ {
					status = "okay";
				};
			};

			override@3 {
				target = <0xba>;

				_overlay_ {
					status = "okay";
					badge = "porg_front_RBPCV2";
					position = "front";
					orientation = [31 00];
				};
			};

			override@4 {
				target = <0xbb>;

				_overlay_ {
					status = "okay";
					pcl_id = "v4l2_sensor";
					devname = "imx219 6-0010";
					proc-device-tree = "/proc/device-tree/host1x/i2c@546c0000/rbpcv2_imx219_a@10";
				};
			};

			override@5 {
				target = <0xbc>;

				_overlay_ {
					status = "okay";
					pcl_id = "v4l2_lens";
					proc-device-tree = "/proc/device-tree/lens_imx219@RBPCV2/";
				};
			};

			override@6 {
				target = <0xbd>;

				_overlay_ {
					num-channels = <0x1>;
				};
			};

			override@7 {
				target = <0xbe>;

				_overlay_ {
					status = "okay";
				};
			};

			override@8 {
				target = <0x75>;

				_overlay_ {
					status = "okay";
					port-index = <0x0>;
					bus-width = <0x2>;
					remote-endpoint = <0x5a>;
				};
			};

			override@9 {
				target = <0xbf>;

				_overlay_ {
					num-channels = <0x1>;
				};
			};

			override@10 {
				target = <0xc0>;

				_overlay_ {
					status = "okay";
				};
			};

			override@11 {
				target = <0xc1>;

				_overlay_ {
					status = "okay";
				};
			};

			override@12 {
				target = <0x73>;

				_overlay_ {
					status = "okay";
					port-index = <0x0>;
					bus-width = <0x2>;
					remote-endpoint = <0xc2>;
				};
			};

			override@13 {
				target = <0xc3>;

				_overlay_ {
					status = "okay";
				};
			};

			override@14 {
				target = <0x5a>;

				_overlay_ {
					status = "okay";
					remote-endpoint = <0x75>;
				};
			};

			override@15 {
				target = <0xc4>;

				_overlay_ {
					num_csi_lanes = <0x2>;
					max_lane_speed = <0x16e360>;
					min_bits_per_pixel = <0xa>;
					vi_peak_byte_per_pixel = <0x2>;
					vi_bw_margin_pct = <0x19>;
					max_pixel_rate = <0x3a980>;
					isp_peak_byte_per_pixel = <0x5>;
					isp_bw_margin_pct = <0x19>;
				};
			};

			override@16 {
				target = <0xc5>;

				_overlay_ {
					status = "disabled";
				};
			};
		};

		fragement@10 {
			ids = ">=3448-0000-300", ">=3448-0002-300";

			override@0 {
				target = <0xc6>;

				_overlay_ {
					nvidia,plat-gpios = <0x56 0xe7 0x0>;
				};
			};

			override@1 {
				target = <0xb8>;

				_overlay_ {
					vqmmc-supply = <0x58>;
					no-sdio;
					no-mmc;
					sd-uhs-sdr104;
					sd-uhs-sdr50;
					sd-uhs-sdr25;
					sd-uhs-sdr12;
				};
			};

			override@2 {
				target = <0xc7>;

				_overlay_ {
					nvidia,priority = <0x32>;
					nvidia,polarity-active-low = <0x1>;
					nvidia,count-threshold = <0x1>;
					nvidia,alarm-filter = <0x4dd1e0>;
					nvidia,alarm-period = <0x0>;
					nvidia,cpu-throt-percent = <0x4b>;
					nvidia,gpu-throt-level = <0x3>;
				};
			};

			override@3 {
				target = <0xc8>;

				_overlay_ {
					status = "okay";
				};
			};

			override@4 {
				target = <0xc9>;

				_overlay_ {
					status = "okay";
				};
			};

			override@5 {
				target = <0xba>;

				_overlay_ {
					status = "okay";
					badge = "porg_front_RBPCV2";
					position = "front";
					orientation = [31 00];
				};
			};

			override@6 {
				target = <0xbb>;

				_overlay_ {
					status = "okay";
					pcl_id = "v4l2_sensor";
					devname = "imx219 7-0010";
					proc-device-tree = "/proc/device-tree/cam_i2cmux/i2c@0/rbpcv2_imx219_a@10";
				};
			};

			override@7 {
				target = <0xbc>;

				_overlay_ {
					status = "okay";
					pcl_id = "v4l2_lens";
					proc-device-tree = "/proc/device-tree/lens_imx219@RBPCV2/";
				};
			};

			override@8 {
				target = <0xca>;

				_overlay_ {
					status = "okay";
				};
			};

			override@9 {
				target = <0xc5>;

				_overlay_ {
					status = "okay";
					badge = "porg_rear_RBPCV2";
					position = "rear";
					orientation = [31 00];
				};
			};

			override@10 {
				target = <0xcb>;

				_overlay_ {
					status = "okay";
					pcl_id = "v4l2_sensor";
					devname = "imx219 8-0010";
					proc-device-tree = "/proc/device-tree/cam_i2cmux/i2c@1/rbpcv2_imx219_e@10";
				};
			};

			override@11 {
				target = <0xcc>;

				_overlay_ {
					status = "okay";
					pcl_id = "v4l2_lens";
					proc-device-tree = "/proc/device-tree/lens_imx219@RBPCV2/";
				};
			};

			override@12 {
				target = <0xbd>;

				_overlay_ {
					num-channels = <0x2>;
				};
			};

			override@13 {
				target = <0xbe>;

				_overlay_ {
					status = "okay";
				};
			};

			override@14 {
				target = <0xcd>;

				_overlay_ {
					status = "okay";
				};
			};

			override@15 {
				target = <0x75>;

				_overlay_ {
					status = "okay";
					port-index = <0x0>;
					bus-width = <0x2>;
					remote-endpoint = <0x5a>;
				};
			};

			override@16 {
				target = <0x77>;

				_overlay_ {
					status = "okay";
					port-index = <0x4>;
					bus-width = <0x2>;
					remote-endpoint = <0x5b>;
				};
			};

			override@17 {
				target = <0xbf>;

				_overlay_ {
					num-channels = <0x2>;
				};
			};

			override@18 {
				target = <0xc0>;

				_overlay_ {
					status = "okay";
				};
			};

			override@19 {
				target = <0xc1>;

				_overlay_ {
					status = "okay";
				};
			};

			override@20 {
				target = <0x73>;

				_overlay_ {
					status = "okay";
					port-index = <0x0>;
					bus-width = <0x2>;
					remote-endpoint = <0x74>;
				};
			};

			override@21 {
				target = <0xc3>;

				_overlay_ {
					status = "okay";
				};
			};

			override@22 {
				target = <0x5a>;

				_overlay_ {
					status = "okay";
				};
			};

			override@23 {
				target = <0xce>;

				_overlay_ {
					status = "okay";
				};
			};

			override@24 {
				target = <0xcf>;

				_overlay_ {
					status = "okay";
				};
			};

			override@25 {
				target = <0xa9>;

				_overlay_ {
					status = "okay";
					port-index = <0x4>;
					bus-width = <0x2>;
					remote-endpoint = <0x76>;
				};
			};

			override@26 {
				target = <0xd0>;

				_overlay_ {
					status = "okay";
				};
			};

			override@27 {
				target = <0x5b>;

				_overlay_ {
					status = "okay";
				};
			};

			override@28 {
				target = <0xc4>;

				_overlay_ {
					num_csi_lanes = <0x4>;
					max_lane_speed = <0x16e360>;
					min_bits_per_pixel = <0xa>;
					vi_peak_byte_per_pixel = <0x2>;
					vi_bw_margin_pct = <0x19>;
					max_pixel_rate = <0x3a980>;
					isp_peak_byte_per_pixel = <0x5>;
					isp_bw_margin_pct = <0x19>;
				};
			};

			override@29 {
				target = <0xd1>;

				_overlay_ {
					status = "okay";
				};
			};

			override@30 {
				target = <0xd2>;

				_overlay_ {
					status = "okay";
				};
			};

			override@31 {
				target = <0xd3>;

				_overlay_ {
					status = "okay";
				};
			};
		};

		fragement@11 {
			ids = ">=3448-0000-300", ">=3448-0002-300";

			override@0 {
				target = <0xd4>;

				_overlay_ {
					enable-aspm;
				};
			};
		};

		fragment-e2614-common@0 {
			ids = "2614-0000-*";

			overrides@0 {
				target = <0xd5>;

				_overlay_ {
					status = "okay";
				};
			};

			overrides@1 {
				target = <0xd6>;

				_overlay_ {
					status = "okay";
				};
			};

			overrides@2 {
				target = <0xd7>;

				_overlay_ {
					status = "okay";
				};
			};

			overrides@3 {
				target = <0xd8>;

				_overlay_ {
					status = "okay";
				};
			};

			overrides@4 {
				target = <0xd8>;

				_overlay_ {
					status = "okay";
				};
			};

			overrides@6 {
				target = <0xd9>;

				_overlay_ {
					status = "okay";
				};
			};

			overrides@7 {
				target = <0xd7>;

				_overlay_ {
					status = "okay";
				};
			};

			overrides@8 {
				target = <0xd8>;

				_overlay_ {
					status = "okay";
				};
			};

			overrides@9 {
				target = <0xaf>;

				_overlay_ {
					nvidia,audio-routing = "x Headphone Jack", "x HPO L Playback", "x Headphone Jack", "x HPO R Playback", "x IN1P", "x Mic Jack", "x Int Spk", "x SPO Playback", "x DMIC L1", "x Int Mic", "x DMIC L2", "x Int Mic", "x DMIC R1", "x Int Mic", "x DMIC R2", "x Int Mic", "y Headphone", "y OUT", "y IN", "y Mic", "a IN", "a Mic", "b IN", "b Mic";
				};
			};

			overrides@10 {
				target = <0xda>;

				_overlay_ {
					link-name = "rt565x-playback";
					codec-dai-name = "rt5659-aif1";
				};
			};

			overrides@11 {
				target = <0xd9>;

				_overlay_ {
					status = "okay";
				};
			};

			override@12 {
				target = <0xdb>;

				_overlay_ {
					status = "okay";
				};
			};
		};

		fragment-e2614-a00@1 {
			ids = "2614-0000-000";

			overrides@0 {
				target = <0xdc>;

				_overlay_ {
					status = "okay";
				};
			};

			override@1 {
				target = <0xaf>;

				_overlay_ {

					nvidia,dai-link-1 {
						codec-dai = <0xdc>;
					};
				};
			};
		};

		fragment-e2614-b00@2 {
			ids = "2614-0000-100";

			overrides@0 {
				target = <0xdd>;

				_overlay_ {
					status = "okay";
				};
			};

			override@1 {
				target = <0xaf>;

				_overlay_ {

					nvidia,dai-link-1 {
						codec-dai = <0xdd>;
					};
				};
			};
		};

		fragment-e2614-pins@3 {
			ids = "<3448-0000-200";

			overrides@0 {
				target = <0xdb>;

				_overlay_ {
					gpios = <0x8 0x0 0x9 0x0 0xa 0x0 0xb 0x0 0xd8 0x0 0x95 0x0>;
					label = "I2S1_LRCLK", "I2S1_SDIN", "I2S1_SDOUT", "I2S1_CLK", "AUDIO_MCLK", "AUD_RST";
				};
			};
		};
	};

	mods-simple-bus {
		compatible = "simple-bus";
		device_type = "mods-simple-bus";
		#address-cells = <0x1>;
		#size-cells = <0x0>;

		mods-clocks {
			compatible = "nvidia,mods-clocks";
			status = "disabled";
			clocks = <0x21 0x3 0x21 0x4 0x21 0x5 0x21 0x6 0x21 0x8 0x21 0x9
0x21 0xb 0x21 0xc 0x21 0xe 0x21 0xf 0x21 0x11 0x21 0x12 0x21 0xe4 0x21
0x16 0x21 0x17 0x21 0x1a 0x21 0x1b 0x21 0x1c 0x21 0x1e 0x21 0x20 0x21
0x21 0x21 0x22 0x21 0x26 0x21 0x28 0x21 0x29 0x21 0x2c 0x21 0x2e 0x21
0x2f 0x21 0x30 0x21 0x34 0x21 0x36 0x21 0x37 0x21 0x38 0x21 0x39 0x21
0x3a 0x21 0x3f 0x21 0x41 0x21 0x43 0x21 0x44 0x21 0x45 0x21 0x46 0x21
0x47 0x21 0x48 0x21 0x49 0x21 0x4c 0x21 0x4e 0x21 0x4f 0x21 0x51 0x21
0x52 0x21 0x53 0x21 0x59 0x21 0x5c 0x21 0x63 0x21 0x64 0x21 0x65 0x21
0x66 0x21 0x67 0x21 0x6a 0x21 0x6b 0x21 0x6f 0x21 0x76 0x21 0x77 0x21
0x78 0x21 0x79 0x21 0x7a 0x21 0x7b 0x21 0x7c 0x21 0x7d 0x21 0x7f 0x21
0x80 0x21 0x81 0x21 0x88 0x21 0x8f 0x21 0x90 0x21 0x91 0x21 0x92 0x21
0x93 0x21 0x94 0x21 0x95 0x21 0x98 0x21 0x9c 0x21 0xa1 0x21 0xa2 0x21
0xa6 0x21 0xa7 0x21 0xa8 0x21 0xab 0x21 0xad 0x21 0xb1 0x21 0xb2 0x21
0xb5 0x21 0xb6 0x21 0xb7 0x21 0xb8 0x21 0xb9 0x21 0xbb 0x21 0xbd 0x21
0xc1 0x21 0xc2 0x21 0xc3 0x21 0xc5 0x21 0xc6 0x21 0xc7 0x21 0xc8 0x21
0xc9 0x21 0xca 0x21 0xce 0x21 0xcf 0x21 0xd0 0x21 0xd1 0x21 0xd2 0x21
0xd3 0x21 0xd4 0x21 0xda 0x21 0xdb 0x21 0xdc 0x21 0xdd 0x21 0xde 0x21
0xdf 0x21 0xe0 0x21 0xe1 0x21 0xe2 0x21 0xe3 0x21 0xe5 0x21 0xe6 0x21
0xe7 0x21 0xe8 0x21 0xe9 0x21 0xea 0x21 0xeb 0x21 0xec 0x21 0xed 0x21
0xee 0x21 0xef 0x21 0xf0 0x21 0xf1 0x21 0xf2 0x21 0xf3 0x21 0xf4 0x21
0xf5 0x21 0xf6 0x21 0xf7 0x21 0xf8 0x21 0xf9 0x21 0xfa 0x21 0xfb 0x21
0xfc 0x21 0xfd 0x21 0xfe 0x21 0xff 0x21 0x100 0x21 0x101 0x21 0x103 0x21
0x104 0x21 0x105 0x21 0x106 0x21 0x107 0x21 0x108 0x21 0x109 0x21 0x10a
0x21 0x10b 0x21 0x10c 0x21 0x10d 0x21 0x10e 0x21 0x10f 0x21 0x110 0x21
0x111 0x21 0x112 0x21 0x113 0x21 0x114 0x21 0x115 0x21 0x116 0x21 0x117
0x21 0x118 0x21 0x119 0x21 0x11c 0x21 0x11d 0x21 0x11e 0x21 0x11f 0x21
0x120 0x21 0x121 0x21 0x122 0x21 0x123 0x21 0x124 0x21 0x125 0x21 0x126
0x21 0x127 0x21 0x128 0x21 0x129 0x21 0x12a 0x21 0x12b 0x21 0x12c 0x21
0x12d 0x21 0x12e 0x21 0x12f 0x21 0x130 0x21 0x131 0x21 0x132 0x21 0x133
0x21 0x134 0x21 0x135 0x21 0x136 0x21 0x137 0x21 0x138 0x21 0x139 0x21
0x13a 0x21 0x13b 0x21 0x13c 0x21 0x13d 0x21 0x13e 0x21 0x13f 0x21 0x140
0x21 0x141 0x21 0x142 0x21 0x143 0x21 0x144 0x21 0x15e 0x21 0x15f 0x21
0x160 0x21 0x161 0x21 0x162 0x21 0x163 0x21 0x164 0x21 0x165 0x21 0x166
0x21 0x167 0x21 0x168 0x21 0x169 0x21 0x16a 0x21 0x16b 0x21 0x16c 0x21
0x16d 0x21 0x16e 0x21 0x16f 0x21 0x170 0x21 0x171 0x21 0x172 0x21 0x173
0x21 0x174 0x21 0x175 0x21 0x176 0x21 0x177 0x21 0x178 0x21 0x179 0x21
0x17a 0x21 0x17b 0x21 0x17c 0x21 0x17d 0x21 0x17e 0x21 0x17f 0x21 0x180
0x21 0x181 0x21 0x182 0x21 0x183 0x21 0x184 0x21 0x185 0x21 0x186 0x21
0x187 0x21 0x188 0x21 0x189 0x21 0x18a 0x21 0x191 0x21 0x192 0x21 0x193
0x21 0x194 0x21 0x195 0x21 0x196 0x21 0x197 0x21 0x198 0x21 0x199 0x21
0x19a 0x21 0x19b 0x21 0x19c 0x21 0x19d 0x21 0x19e 0x21 0x19f 0x21 0x1a0
0x21 0x1a1 0x21 0x1a2 0x21 0x1a3 0x21 0x1a4 0x21 0x1a5 0x21 0x1a6 0x21
0x1a7 0x21 0x1a8 0x21 0x1a9 0x21 0x1aa 0x21 0x1ab 0x21 0x1ac 0x21 0x1ad
0x21 0x1ae 0x21 0x1af 0x21 0x1b0 0x21 0x1b1 0x21 0x1b2 0x21 0x1b3 0x21
0x1b4 0x21 0x1b5 0x21 0x1b6 0x21 0x1b7 0x21 0x1b8 0x21 0x1b9 0x21 0x1ba
0x21 0x1bb 0x21 0x1bc 0x21 0x1bd 0x21 0x1be 0x21 0x1bf 0x21 0x1c0 0x21
0x1c1 0x21 0x1c2 0x21 0x1c3 0x21 0x1c4 0x21 0x1c5 0x21 0x1c6 0x21 0x1c7
0x21 0x1c8 0x21 0x1c9 0x21 0x1ca 0x21 0x1cb 0x21 0x1cc 0x21 0x1cd 0x21
0x1ce 0x21 0x1cf 0x21 0x1d0 0x21 0x1d1 0x21 0x1d2 0x21 0x1d3 0x21 0x1d4
0x21 0x1d5 0x21 0x1d6 0x21 0x1d7 0x21 0x1d8 0x21 0x1d9 0x21 0x1da 0x21
0x1db 0x21 0x1dc 0x21 0x1dd 0x21 0x1de 0x21 0x1df 0x21 0x1e0 0x21 0x1e1
0x21 0x1e2 0x21 0x1e3 0x21 0x1e4 0x21 0x1e5 0x21 0x1e6 0x21 0x1e7 0x21
0x1e8 0x21 0x1e9 0x21 0x1ea 0x21 0x1eb 0x21 0x1ec 0x21 0x1ed 0x21 0x1ee
0x21 0x1ef 0x21 0x1f0 0x21 0x1f1 0x21 0x1f2 0x21 0x1f3 0x21 0x1f4 0x21
0x1f5 0x21 0x1f6 0x21 0x1f7 0x21 0x1f8 0x21 0x1f9 0x21 0x1fa 0x21 0x1fb
0x21 0x1fc 0x21 0x1fd 0x21 0x1fe 0x21 0x1ff 0x21 0x200 0x21 0x201 0x21
0x202 0x21 0x203 0x21 0x204 0x21 0x205 0x21 0x206 0x21 0x207 0x21 0x208
0x21 0x209 0x21 0x20a 0x21 0x20b 0x21 0x20c 0x21 0x20d 0x21 0x20e 0x21
0x20f>;
			clock-names = "ispb", "rtc", "timer", "uarta", "gpio", "sdmmc2",
"i2s1", "i2c1", "sdmmc1", "sdmmc4", "pwm", "i2s2", "vi", "usbd", "ispa",
"disp2", "disp1", "host1x", "i2s0", "mc", "ahbdma", "apbdma", "pmc",
"kfuse", "sbc1", "sbc2", "sbc3", "i2c5", "dsia", "csi", "i2c2", "uartc",
"mipi_cal", "emc", "usb2", "bsev", "uartd", "i2c3", "sbc4", "sdmmc3",
"pcie", "owr", "afi", "csite", "la", "soc_therm", "dtv", "i2cslow",
"dsib", "tsec", "xusb_host", "csus", "mselect", "tsensor", "i2s3",
"i2s4", "i2c4", "d_audio", "apb2ape", "hda2codec_2x", "spdif_2x",
"actmon", "extern1", "extern2", "extern3", "sata_oob", "sata", "hda",
"se", "hda2hdmi", "sata_cold", "cec", "xusb_gate", "cilab", "cilcd",
"cile", "dsialp", "dsiblp", "entropy", "dp2", "xusb_ss", "dmic1",
"dmic2", "i2c6", "mc_capa", "mc_cbpa", "vim2_clk", "mipibif",
"clk72mhz", "vic03", "dpaux", "sor0", "sor1", "gpu", "dbgapb",
"pll_p_out_adsp", "pll_g_ref", "sdmmc_legacy", "nvdec", "nvjpg",
"dmic3", "ape", "adsp", "mc_cdpa", "mc_ccpa", "maud", "tsecb", "dpaux1",
"vi_i2c", "hsic_trk", "usb2_trk", "qspi", "uartape", "adsp_neon",
"nvenc", "iqc2", "iqc1", "sor_safe", "pll_p_out_cpu", "uartb", "vfir",
"spdif_in", "spdif_out", "vi_sensor", "fuse", "fuse_burn", "clk_32k",
"clk_m", "clk_m_div2", "clk_m_div4", "pll_ref", "pll_c", "pll_c_out1",
"pll_c2", "pll_c3", "pll_m", "pll_m_out1", "pll_p", "pll_p_out1",
"pll_p_out2", "pll_p_out3", "pll_p_out4", "pll_a", "pll_a_out0",
"pll_d", "pll_d_out0", "pll_d2", "pll_d2_out0", "pll_u", "pll_u_480m",
"pll_u_60m", "pll_u_48m", "pll_x", "pll_x_out0", "pll_re_vco",
"pll_re_out", "pll_e", "spdif_in_sync", "i2s0_sync", "i2s1_sync",
"i2s2_sync", "i2s3_sync", "i2s4_sync", "vimclk_sync", "audio0",
"audio1", "audio2", "audio3", "audio4", "spdif", "clk_out_1",
"clk_out_2", "clk_out_3", "blink", "qspi_out", "xusb_host_src",
"xusb_falcon_src", "xusb_fs_src", "xusb_ss_src", "xusb_dev_src",
"xusb_dev", "xusb_hs_src", "sclk", "hclk", "pclk", "cclk_g", "cclk_lp",
"dfll_ref", "dfll_soc", "vi_sensor2", "pll_p_out5", "cml0", "cml1",
"pll_c4", "pll_dp", "pll_e_mux", "pll_mb", "pll_a1", "pll_d_dsi_out",
"pll_c4_out0", "pll_c4_out1", "pll_c4_out2", "pll_c4_out3", "pll_u_out",
"pll_u_out1", "pll_u_out2", "usb2_hsic_trk", "pll_p_out_hsio",
"pll_p_out_xusb", "xusb_ssp_src", "pll_re_out1", "pll_mb_ud",
"pll_p_ud", "isp", "pll_a_out_adsp", "pll_a_out0_out_adsp",
"audio0_mux", "audio1_mux", "audio2_mux", "audio3_mux", "audio4_mux",
"spdif_mux", "clk_out_1_mux", "clk_out_2_mux", "clk_out_3_mux",
"dsia_mux", "dsib_mux", "sor0_lvds", "xusb_ss_div2", "pll_m_ud",
"pll_c_ud", "sclk_mux", "sor1_brick", "sor1_mux", "pd2vi", "vi_output",
"aclk", "sclk_skipper", "disp1_slcg_ovr", "disp2_slcg_ovr",
"vi_slcg_ovr", "ispa_slcg_ovr", "ispb_slcg_ovr", "nvdec_slcg_ovr",
"nvenc_slcg_ovr", "nvjpg_slcg_ovr", "vic03_slcg_ovr",
"xusb_dev_slcg_ovr", "xusb_host_slcg_ovr", "d_audio_slcg_ovr",
"ape_slcg_ovr", "sata_slcg_ovr", "sata_slcg_ovr_ipfs",
"sata_slcg_ovr_fpci", "dmic1_sync_clk", "dmic1_sync_clk_mux",
"dmic2_sync_clk", "dmic2_sync_clk_mux", "dmic3_sync_clk",
"dmic3_sync_clk_mux", "aclk_slcg_ovr", "c2bus", "c3bus", "vic03_cbus",
"nvjpg_cbus", "se_cbus", "tsecb_cbus", "cap_c2bus", "cap_vcore_c2bus",
"cap_throttle_c2bus", "floor_c2bus", "override_c2bus", "edp_c2bus",
"nvenc_cbus", "nvdec_cbus", "vic_floor_cbus", "cap_c3bus",
"cap_vcore_c3bus", "cap_throttle_c3bus", "floor_c3bus",
"override_c3bus", "vi_cbus", "isp_cbus", "override_cbus",
"cap_vcore_cbus", "via_vi_cbus", "vib_vi_cbus", "ispa_isp_cbus",
"ispb_isp_cbus", "sbus", "avp_sclk", "bsea_sclk", "usbd_sclk",
"usb1_sclk", "usb2_sclk", "usb3_sclk", "wake_sclk", "camera_sclk",
"mon_avp", "cap_sclk", "cap_vcore_sclk", "cap_throttle_sclk",
"floor_sclk", "override_sclk", "sbc1_sclk", "sbc2_sclk", "sbc3_sclk",
"sbc4_sclk", "qspi_sclk", "boot_apb_sclk", "emc_master", "avp_emc",
"cpu_emc", "disp1_emc", "disp2_emc", "disp1_la_emc", "disp2_la_emc",
"usbd_emc", "usb1_emc", "usb2_emc", "usb3_emc", "sdmmc3_emc",
"sdmmc4_emc", "mon_emc", "cap_emc", "cap_vcore_emc", "cap_throttle_emc",
"gr3d_emc", "nvenc_emc", "nvjpg_emc", "nvdec_emc", "tsec_emc",
"tsecb_emc", "camera_emc", "via_emc", "vib_emc", "ispa_emc", "ispb_emc",
"iso_emc", "floor_emc", "override_emc", "edp_emc", "vic_emc",
"vic_shared_emc", "ape_emc", "pcie_emc", "xusb_emc", "gbus",
"gm20b_gbus", "cap_gbus", "edp_gbus", "cap_vgpu_gbus",
"cap_throttle_gbus", "cap_profile_gbus", "override_gbus", "floor_gbus",
"floor_profile_gbus", "host1x_master", "nv_host1x", "vi_host1x",
"vii2c_host1x", "cap_host1x", "cap_vcore_host1x", "floor_host1x",
"override_host1x", "mselect_master", "cpu_mselect", "pcie_mselect",
"cap_vcore_mselect", "override_mselect", "ape_master", "adma_ape",
"adsp_ape", "xbar_ape", "cap_vcore_ape", "override_ape", "abus",
"adsp_cpu_abus", "cap_vcore_abus", "override_abus", "vcm_sclk",
"vcm_ahb_sclk", "vcm_apb_sclk", "ahb_sclk", "apb_sclk",
"sdmmc4_ahb_sclk", "battery_emc", "cbus";
			resets = <0x21 0x3 0x21 0x4 0x21 0x5 0x21 0x6 0x21 0x8 0x21 0x9
0x21 0xb 0x21 0xc 0x21 0xe 0x21 0xf 0x21 0x11 0x21 0x12 0x21 0xe4 0x21
0x16 0x21 0x17 0x21 0x1a 0x21 0x1b 0x21 0x1c 0x21 0x1e 0x21 0x20 0x21
0x21 0x21 0x22 0x21 0x26 0x21 0x28 0x21 0x29 0x21 0x2c 0x21 0x2e 0x21
0x2f 0x21 0x30 0x21 0x34 0x21 0x36 0x21 0x37 0x21 0x38 0x21 0x39 0x21
0x3a 0x21 0x3f 0x21 0x41 0x21 0x43 0x21 0x44 0x21 0x45 0x21 0x46 0x21
0x47 0x21 0x48 0x21 0x49 0x21 0x4c 0x21 0x4e 0x21 0x4f 0x21 0x51 0x21
0x52 0x21 0x53 0x21 0x59 0x21 0x5c 0x21 0x63 0x21 0x64 0x21 0x65 0x21
0x66 0x21 0x67 0x21 0x6a 0x21 0x6b 0x21 0x6f 0x21 0x76 0x21 0x77 0x21
0x78 0x21 0x79 0x21 0x7a 0x21 0x7b 0x21 0x7c 0x21 0x7d 0x21 0x7f 0x21
0x80 0x21 0x81 0x21 0x88 0x21 0x8f 0x21 0x90 0x21 0x91 0x21 0x92 0x21
0x93 0x21 0x94 0x21 0x95 0x21 0x98 0x21 0x9c 0x21 0xa1 0x21 0xa2 0x21
0xa6 0x21 0xa7 0x21 0xa8 0x21 0xab 0x21 0xad 0x21 0xb1 0x21 0xb2 0x21
0xb5 0x21 0xb6 0x21 0xb7 0x21 0xb8 0x21 0xb9 0x21 0xbb 0x21 0xbd 0x21
0xc1 0x21 0xc2 0x21 0xc3 0x21 0xc5 0x21 0xc6 0x21 0xc7 0x21 0xc8 0x21
0xc9 0x21 0xca 0x21 0xce 0x21 0xcf 0x21 0xd0 0x21 0xd1 0x21 0xd2 0x21
0xd3 0x21 0xd4 0x21 0xda 0x21 0xdb 0x21 0xdc 0x21 0xdd 0x21 0xde 0x21
0xdf 0x21 0x7 0x21 0xe1 0x21 0xe2 0x21 0xe3 0x21 0xe5 0x21 0xe6 0x21
0xe7 0x21 0xe8 0x21 0xe9 0x21 0xea 0x21 0xeb 0x21 0xec 0x21 0xed 0x21
0xee 0x21 0xef 0x21 0xf0 0x21 0xf1 0x21 0xf2 0x21 0xf3 0x21 0xf4 0x21
0xf5 0x21 0xf6 0x21 0xf7 0x21 0xf8 0x21 0xf9 0x21 0xfa 0x21 0xfb 0x21
0xfc 0x21 0xfd 0x21 0xfe 0x21 0xff 0x21 0x100 0x21 0x101 0x21 0x103 0x21
0x104 0x21 0x105 0x21 0x106 0x21 0x107 0x21 0x108 0x21 0x109 0x21 0x10a
0x21 0x10b 0x21 0x10c 0x21 0x10d 0x21 0x10e 0x21 0x10f 0x21 0x110 0x21
0x111 0x21 0x112 0x21 0x113 0x21 0x114 0x21 0x115 0x21 0x116 0x21 0x117
0x21 0x118 0x21 0x119 0x21 0x11c 0x21 0x11d 0x21 0x11e 0x21 0x11f 0x21
0x120 0x21 0x5f 0x21 0x122 0x21 0x123 0x21 0x124 0x21 0x125 0x21 0x126
0x21 0x127 0x21 0x128 0x21 0x129 0x21 0x12a 0x21 0x12b 0x21 0x12c 0x21
0x12d 0x21 0x12e 0x21 0x12f 0x21 0x130 0x21 0x131 0x21 0x132 0x21 0x133
0x21 0x134 0x21 0x135 0x21 0x136 0x21 0x137 0x21 0x138 0x21 0x139 0x21
0x13a 0x21 0x13b 0x21 0x13c 0x21 0x13d 0x21 0x13e 0x21 0x13f 0x21 0x140
0x21 0x141 0x21 0x142 0x21 0x143 0x21 0x144 0x21 0x15e 0x21 0x15f 0x21
0x160 0x21 0x161 0x21 0x162 0x21 0x163 0x21 0x164 0x21 0x165 0x21 0x166
0x21 0x167 0x21 0x168 0x21 0x169 0x21 0x16a 0x21 0x16b 0x21 0x16c 0x21
0x16d 0x21 0x16e 0x21 0x16f 0x21 0x170 0x21 0x171 0x21 0x172 0x21 0x173
0x21 0x174 0x21 0x175 0x21 0x176 0x21 0x177 0x21 0x178 0x21 0x179 0x21
0x17a 0x21 0x17b 0x21 0x17c 0x21 0x17d 0x21 0x17e 0x21 0x17f 0x21 0x180
0x21 0x181 0x21 0x182 0x21 0x183 0x21 0x184 0x21 0x185 0x21 0x186 0x21
0x187 0x21 0x188 0x21 0x189 0x21 0x18a 0x21 0x191 0x21 0x192 0x21 0x193
0x21 0x194 0x21 0x195 0x21 0x196 0x21 0x197 0x21 0x198 0x21 0x199 0x21
0x19a 0x21 0x19b 0x21 0x19c 0x21 0x19d 0x21 0x19e 0x21 0x19f 0x21 0x1a0
0x21 0x1a1 0x21 0x1a2 0x21 0x1a3 0x21 0x1a4 0x21 0x1a5 0x21 0x1a6 0x21
0x1a7 0x21 0x1a8 0x21 0x1a9 0x21 0x1aa 0x21 0x1ab 0x21 0x1ac 0x21 0x1ad
0x21 0x1ae 0x21 0x1af 0x21 0x1b0 0x21 0x1b1 0x21 0x1b2 0x21 0x1b3 0x21
0x1b4 0x21 0x1b5 0x21 0x1b6 0x21 0x1b7 0x21 0x1b8 0x21 0x1b9 0x21 0x1ba
0x21 0x1bb 0x21 0x1bc 0x21 0x1bd 0x21 0x1be 0x21 0x1bf 0x21 0x1c0 0x21
0x1c1 0x21 0x1c2 0x21 0x1c3 0x21 0x1c4 0x21 0x1c5 0x21 0x1c6 0x21 0x1c7
0x21 0x1c8 0x21 0x1c9 0x21 0x1ca 0x21 0x1cb 0x21 0x1cc 0x21 0x1cd 0x21
0x1ce 0x21 0x1cf 0x21 0x1d0 0x21 0x1d1 0x21 0x1d2 0x21 0x1d3 0x21 0x1d4
0x21 0x1d5 0x21 0x1d6 0x21 0x1d7 0x21 0x1d8 0x21 0x1d9 0x21 0x1da 0x21
0x1db 0x21 0x1dc 0x21 0x1dd 0x21 0x1de 0x21 0x1df 0x21 0x1e0 0x21 0x1e1
0x21 0x1e2 0x21 0x1e3 0x21 0x1e4 0x21 0x1e5 0x21 0x1e6 0x21 0x1e7 0x21
0x1e8 0x21 0x1e9 0x21 0x1ea 0x21 0x1eb 0x21 0x1ec 0x21 0x1ed 0x21 0x1ee
0x21 0x1ef 0x21 0x1f0 0x21 0x1f1 0x21 0x1f2 0x21 0x1f3 0x21 0x1f4 0x21
0x1f5 0x21 0x1f6 0x21 0x1f7 0x21 0x1f8 0x21 0x1f9 0x21 0x1fa 0x21 0x1fb
0x21 0x1fc 0x21 0x1fd 0x21 0x1fe 0x21 0x1ff 0x21 0x200 0x21 0x201 0x21
0x202 0x21 0x203 0x21 0x204 0x21 0x205 0x21 0x206 0x21 0x207 0x21 0x208
0x21 0x209 0x21 0x20a 0x21 0x20b 0x21 0x20c 0x21 0x20d 0x21 0x20e 0x21
0x20f>;
			reset-names = "ispb", "rtc", "timer", "uarta", "gpio", "sdmmc2",
"i2s1", "i2c1", "sdmmc1", "sdmmc4", "pwm", "i2s2", "vi", "usbd", "ispa",
"disp2", "disp1", "host1x", "i2s0", "mc", "ahbdma", "apbdma", "pmc",
"kfuse", "sbc1", "sbc2", "sbc3", "i2c5", "dsia", "csi", "i2c2", "uartc",
"mipi_cal", "emc", "usb2", "bsev", "uartd", "i2c3", "sbc4", "sdmmc3",
"pcie", "owr", "afi", "csite", "la", "soc_therm", "dtv", "i2cslow",
"dsib", "tsec", "xusb_host", "csus", "mselect", "tsensor", "i2s3",
"i2s4", "i2c4", "d_audio", "apb2ape", "hda2codec_2x", "spdif_2x",
"actmon", "extern1", "extern2", "extern3", "sata_oob", "sata", "hda",
"se", "hda2hdmi", "sata_cold", "cec", "xusb_gate", "cilab", "cilcd",
"cile", "dsialp", "dsiblp", "entropy", "dp2", "xusb_ss", "dmic1",
"dmic2", "i2c6", "mc_capa", "mc_cbpa", "vim2_clk", "mipibif",
"clk72mhz", "vic03", "dpaux", "sor0", "sor1", "gpu", "dbgapb",
"pll_p_out_adsp", "pll_g_ref", "sdmmc_legacy", "nvdec", "nvjpg",
"dmic3", "ape", "adsp", "mc_cdpa", "mc_ccpa", "maud", "tsecb", "dpaux1",
"vi_i2c", "hsic_trk", "usb2_trk", "qspi", "uartape", "adsp_neon",
"nvenc", "iqc2", "iqc1", "sor_safe", "pll_p_out_cpu", "uartb", "vfir",
"spdif_in", "spdif_out", "vi_sensor", "fuse", "fuse_burn", "clk_32k",
"clk_m", "clk_m_div2", "clk_m_div4", "pll_ref", "pll_c", "pll_c_out1",
"pll_c2", "pll_c3", "pll_m", "pll_m_out1", "pll_p", "pll_p_out1",
"pll_p_out2", "pll_p_out3", "pll_p_out4", "pll_a", "pll_a_out0",
"pll_d", "pll_d_out0", "pll_d2", "pll_d2_out0", "pll_u", "pll_u_480m",
"pll_u_60m", "pll_u_48m", "pll_x", "pll_x_out0", "pll_re_vco",
"pll_re_out", "pll_e", "spdif_in_sync", "i2s0_sync", "i2s1_sync",
"i2s2_sync", "i2s3_sync", "i2s4_sync", "vimclk_sync", "audio0",
"audio1", "audio2", "audio3", "audio4", "spdif", "clk_out_1",
"clk_out_2", "clk_out_3", "blink", "qspi_out", "xusb_host_src",
"xusb_falcon_src", "xusb_fs_src", "xusb_ss_src", "xusb_dev_src",
"xusb_dev", "xusb_hs_src", "sclk", "hclk", "pclk", "cclk_g", "cclk_lp",
"dfll_ref", "dfll_soc", "vi_sensor2", "pll_p_out5", "cml0", "cml1",
"pll_c4", "pll_dp", "pll_e_mux", "pll_mb", "pll_a1", "pll_d_dsi_out",
"pll_c4_out0", "pll_c4_out1", "pll_c4_out2", "pll_c4_out3", "pll_u_out",
"pll_u_out1", "pll_u_out2", "usb2_hsic_trk", "pll_p_out_hsio",
"pll_p_out_xusb", "xusb_ssp_src", "pll_re_out1", "pll_p_ud", "isp",
"pll_a_out_adsp", "pll_a_out0_out_adsp", "audio0_mux", "audio1_mux",
"audio2_mux", "audio3_mux", "audio4_mux", "spdif_mux", "clk_out_1_mux",
"clk_out_2_mux", "clk_out_3_mux", "dsia_mux", "dsib_mux", "sor0_lvds",
"xusb_ss_div2", "pll_m_ud", "pll_c_ud", "sclk_mux", "sor1_brick",
"sor1_mux", "pd2vi", "vi_output", "aclk", "sclk_skipper",
"disp1_slcg_ovr", "disp2_slcg_ovr", "vi_slcg_ovr", "ispa_slcg_ovr",
"ispb_slcg_ovr", "nvdec_slcg_ovr", "nvenc_slcg_ovr", "nvjpg_slcg_ovr",
"vic03_slcg_ovr", "xusb_dev_slcg_ovr", "xusb_host_slcg_ovr",
"d_audio_slcg_ovr", "ape_slcg_ovr", "sata_slcg_ovr",
"sata_slcg_ovr_ipfs", "sata_slcg_ovr_fpci", "dmic1_sync_clk",
"dmic1_sync_clk_mux", "dmic2_sync_clk", "dmic2_sync_clk_mux",
"dmic3_sync_clk", "dmic3_sync_clk_mux", "aclk_slcg_ovr", "c2bus",
"c3bus", "vic03_cbus", "nvjpg_cbus", "se_cbus", "tsecb_cbus",
"cap_c2bus", "cap_vcore_c2bus", "cap_throttle_c2bus", "floor_c2bus",
"override_c2bus", "edp_c2bus", "nvenc_cbus", "nvdec_cbus",
"vic_floor_cbus", "cap_c3bus", "cap_vcore_c3bus", "cap_throttle_c3bus",
"floor_c3bus", "override_c3bus", "vi_cbus", "isp_cbus", "override_cbus",
"cap_vcore_cbus", "via_vi_cbus", "vib_vi_cbus", "ispa_isp_cbus",
"ispb_isp_cbus", "sbus", "avp_sclk", "bsea_sclk", "usbd_sclk",
"usb1_sclk", "usb2_sclk", "usb3_sclk", "wake_sclk", "camera_sclk",
"mon_avp", "cap_sclk", "cap_vcore_sclk", "cap_throttle_sclk",
"floor_sclk", "override_sclk", "sbc1_sclk", "sbc2_sclk", "sbc3_sclk",
"sbc4_sclk", "qspi_sclk", "boot_apb_sclk", "emc_master", "avp_emc",
"cpu_emc", "disp1_emc", "disp2_emc", "disp1_la_emc", "disp2_la_emc",
"usbd_emc", "usb1_emc", "usb2_emc", "usb3_emc", "sdmmc3_emc",
"sdmmc4_emc", "mon_emc", "cap_emc", "cap_vcore_emc", "cap_throttle_emc",
"gr3d_emc", "nvenc_emc", "nvjpg_emc", "nvdec_emc", "tsec_emc",
"tsecb_emc", "camera_emc", "via_emc", "vib_emc", "ispa_emc", "ispb_emc",
"iso_emc", "floor_emc", "override_emc", "edp_emc", "vic_emc",
"vic_shared_emc", "ape_emc", "pcie_emc", "xusb_emc", "gbus",
"gm20b_gbus", "cap_gbus", "edp_gbus", "cap_vgpu_gbus",
"cap_throttle_gbus", "cap_profile_gbus", "override_gbus", "floor_gbus",
"floor_profile_gbus", "host1x_master", "nv_host1x", "vi_host1x",
"vii2c_host1x", "cap_host1x", "cap_vcore_host1x", "floor_host1x",
"override_host1x", "mselect_master", "cpu_mselect", "pcie_mselect",
"cap_vcore_mselect", "override_mselect", "ape_master", "adma_ape",
"adsp_ape", "xbar_ape", "cap_vcore_ape", "override_ape", "abus",
"adsp_cpu_abus", "cap_vcore_abus", "override_abus", "vcm_sclk",
"vcm_ahb_sclk", "vcm_apb_sclk", "ahb_sclk", "apb_sclk",
"sdmmc4_ahb_sclk", "battery_emc", "cbus";
		};
	};

	gps_wake {
		compatible = "gps-wake";
		gps-enable-gpio = <0xd6 0x8 0x0>;
		gps-wakeup-gpio = <0x56 0x26 0x0>;
		status = "disabled";
		linux,phandle = <0x128>;
		phandle = <0x128>;
	};

	chosen {
		nvidia,tegra-porg-sku;
		stdout-path = "/serial@70006000";
		nvidia,tegra-always-on-personality;
		no-tnid-sn;
		bootargs = "earlycon=uart8250,mmio32,0x70006000";
		nvidia,bootloader-xusb-enable;
		nvidia,bootloader-vbus-enable = <0x1>;
		nvidia,fastboot_without_usb;
		nvidia,gpu-disable-power-saving;
		board-has-eeprom;
		firmware-blob-partition = "RP4";

		verified-boot {
			poweroff-on-red-state;
		};
	};

	gpu-dvfs-rework {
		status = "okay";
	};

	pwm_regulators {
		compatible = "simple-bus";
		#address-cells = <0x1>;
		#size-cells = <0x0>;

		pwm-regulator@0 {
			status = "okay";
			reg = <0x0>;
			compatible = "pwm-regulator";
			pwms = <0xde 0x0 0x9c4>;
			regulator-name = "vdd-cpu";
			regulator-min-microvolt = <0xacda0>;
			regulator-max-microvolt = <0x143188>;
			regulator-always-on;
			regulator-boot-on;
			voltage-table = <0xacda0 0x0 0xb18a0 0x1 0xb63a0 0x2 0xbaea0 0x3
0xbf9a0 0x4 0xc44a0 0x5 0xc8fa0 0x6 0xcdaa0 0x7 0xd25a0 0x8 0xd70a0 0x9
0xdbba0 0xa 0xe06a0 0xb 0xe51a0 0xc 0xe9ca0 0xd 0xee7a0 0xe 0xf32a0 0xf
0xf7da0 0x10 0xfc8a0 0x11 0x1013a0 0x12 0x105ea0 0x13 0x10a9a0 0x14
0x10f4a0 0x15 0x113fa0 0x16 0x118aa0 0x17 0x11d5a0 0x18 0x1220a0 0x19
0x126ba0 0x1a 0x12b6a0 0x1b 0x1301a0 0x1c 0x134ca0 0x1d 0x1397a0 0x1e
0x13e2a0 0x1f 0x142da0 0x20>;
			linux,phandle = <0x9d>;
			phandle = <0x9d>;
		};

		pwm-regulator@1 {
			status = "okay";
			reg = <0x1>;
			compatible = "pwm-regulator";
			pwms = <0xa5 0x1 0x1f40>;
			regulator-name = "vdd-gpu";
			regulator-min-microvolt = <0xacda0>;
			regulator-max-microvolt = <0x143188>;
			regulator-init-microvolt = <0xf4240>;
			regulator-n-voltages = <0x3e>;
			regulator-enable-ramp-delay = <0x7d0>;
			enable-gpio = <0x1e 0x6 0x0>;
			regulator-settling-time-us = <0xa0>;
		};
	};

	dfll-max77621@70110000 {

		dfll-max77621-integration {
			i2c-fs-rate = <0xf4240>;
			pmic-i2c-address = <0x36>;
			pmic-i2c-voltage-register = <0x1>;
			sel-conversion-slope = <0x1>;
			linux,phandle = <0x129>;
			phandle = <0x129>;
		};

		dfll-max77621-board-params {
			sample-rate = <0x30d4>;
			fixed-output-forcing;
			cf = <0xa>;
			ci = <0x0>;
			cg = <0x2>;
			droop-cut-value = <0xf>;
			droop-restore-ramp = <0x0>;
			scale-out-ramp = <0x0>;
			linux,phandle = <0x12a>;
			phandle = <0x12a>;
		};
	};

	dfll-cdev-cap {
		compatible = "nvidia,tegra-dfll-cdev-action";
		act-dev = <0x26>;
		cdev-type = "DFLL-cap";
		#cooling-cells = <0x2>;
		linux,phandle = <0x17>;
		phandle = <0x17>;
	};

	dfll-cdev-floor {
		compatible = "nvidia,tegra-dfll-cdev-action";
		act-dev = <0x26>;
		cdev-type = "DFLL-floor";
		#cooling-cells = <0x2>;
		linux,phandle = <0x10>;
		phandle = <0x10>;
	};

	dvfs {
		compatible = "nvidia,tegra210-dvfs";
		vdd-cpu-supply = <0x9d>;
		nvidia,gpu-max-freq-khz = <0xe1000>;
	};

	r8168 {
		isolate-gpio = <0x56 0xbb 0x0>;
	};

	tegra_udrm {
		compatible = "nvidia,tegra-udrm";
		linux,phandle = <0x12b>;
		phandle = <0x12b>;
	};

	soft_watchdog {
		status = "okay";
		linux,phandle = <0xb7>;
		phandle = <0xb7>;
	};

	leds {
		compatible = "gpio-leds";
		status = "disabled";
		linux,phandle = <0xc8>;
		phandle = <0xc8>;

		pwr {
			gpios = <0x56 0x41 0x0>;
			default-state = "on";
			linux,default-trigger = "system-throttle";
		};
	};

	memory@80000000 {
		device_type = "memory";
		reg = <0x0 0x80000000 0x0 0x80000000>;
	};

	cpu_edp {
		status = "okay";
		nvidia,edp_limit = <0x61a8>;
	};

	gpu_edp {
		status = "okay";
		nvidia,edp_limit = <0x4e20>;
	};

	__symbols__ {
		gpu_scaling0 = "/thermal-zones/AO-therm/trips/gpu-scaling0";
		gpu_scaling1 = "/thermal-zones/AO-therm/trips/gpu-scaling1";
		gpu_scaling2 = "/thermal-zones/AO-therm/trips/gpu-scaling2";
		gpu_scaling3 = "/thermal-zones/AO-therm/trips/gpu-scaling3";
		gpu_scaling4 = "/thermal-zones/AO-therm/trips/gpu-scaling4";
		gpu_scaling5 = "/thermal-zones/AO-therm/trips/gpu-scaling5";
		gpu_vmax1 = "/thermal-zones/AO-therm/trips/gpu-vmax1";
		core_dvfs_floor_trip0 = "/thermal-zones/AO-therm/trips/core_dvfs_floor_trip0";
		core_dvfs_cap_trip0 = "/thermal-zones/AO-therm/trips/core_dvfs_cap_trip0";
		dfll_floor_trip0 = "/thermal-zones/AO-therm/trips/dfll-floor-trip0";
		dfll_cap_trip0 = "/thermal-zones/CPU-therm/trips/dfll-cap-trip0";
		dfll_cap_trip1 = "/thermal-zones/CPU-therm/trips/dfll-cap-trip1";
		pll_dram_throttle = "/thermal-zones/PLL-therm/trips/dram-throttle";
		die_temp_thresh = "/thermal-zones/PMIC-Die/trips/hot-die";
		core_dvfs_floor = "/core_dvfs_cdev_floor";
		core_dvfs_cap = "/core_dvfs_cdev_cap";
		host1x_pd = "/power-domain/host1x-pd";
		pd_audio = "/power-domain/ape-pd";
		adsp_pd = "/power-domain/adsp-pd";
		tsec_pd = "/power-domain/tsec-pd";
		pd_nvdec = "/power-domain/nvdec-pd";
		pd_ve2 = "/power-domain/ve2-pd";
		pd_vic = "/power-domain/vic03-pd";
		pd_nvenc = "/power-domain/msenc-pd";
		pd_nvjpg = "/power-domain/nvjpg-pd";
		pd_pcie = "/power-domain/pcie-pd";
		ve_pd = "/power-domain/ve-pd";
		sata_pd = "/power-domain/sata-pd";
		sor_pd = "/power-domain/sor-pd";
		disa_pd = "/power-domain/disa-pd";
		disb_pd = "/power-domain/disb-pd";
		xusba_pd = "/power-domain/xusba-pd";
		xusbb_pd = "/power-domain/xusbb-pd";
		xusbc_pd = "/power-domain/xusbc-pd";
		C7 = "/cpus/idle-states/c7";
		CC6 = "/cpus/idle-states/cc6";
		L2 = "/cpus/l2-cache";
		tegra_car = "/clock";
		iram = "/reserved-memory/iram-carveout";
		ramoops_reserved = "/reserved-memory/ramoops_carveout";
		vpr = "/reserved-memory/vpr-carveout";
		fb0_reserved = "/reserved-memory/fb0_carveout";
		fb1_reserved = "/reserved-memory/fb1_carveout";
		smmu = "/iommu";
		common_as = "/iommu/address-space-prop/common";
		ppcs_as = "/iommu/address-space-prop/ppcs";
		dc_as = "/iommu/address-space-prop/dc";
		gpu_as = "/iommu/address-space-prop/gpu";
		ape_as = "/iommu/address-space-prop/ape";
		smmu_test = "/smmu_test";
		dma_test = "/dma_test";
		intc = "/interrupt-controller";
		lic = "/interrupt-controller@60004000";
		ahb = "/ahb@6000c000";
		tegra_agic = "/aconnect@702c0000/agic@702f9000";
		adma = "/aconnect@702c0000/adma@702e2000";
		tegra_axbar = "/aconnect@702c0000/ahub";
		tegra_admaif = "/aconnect@702c0000/ahub/admaif@0x702d0000";
		tegra_sfc1 = "/aconnect@702c0000/ahub/sfc@702d2000";
		tegra_sfc2 = "/aconnect@702c0000/ahub/sfc@702d2200";
		tegra_sfc3 = "/aconnect@702c0000/ahub/sfc@702d2400";
		tegra_sfc4 = "/aconnect@702c0000/ahub/sfc@702d2600";
		tegra_amixer = "/aconnect@702c0000/ahub/amixer@702dbb00";
		tegra_i2s1 = "/aconnect@702c0000/ahub/i2s@702d1000";
		tegra_i2s2 = "/aconnect@702c0000/ahub/i2s@702d1100";
		tegra_i2s3 = "/aconnect@702c0000/ahub/i2s@702d1200";
		tegra_i2s4 = "/aconnect@702c0000/ahub/i2s@702d1300";
		tegra_i2s5 = "/aconnect@702c0000/ahub/i2s@702d1400";
		tegra_amx1 = "/aconnect@702c0000/ahub/amx@702d3000";
		tegra_amx2 = "/aconnect@702c0000/ahub/amx@702d3100";
		tegra_adx1 = "/aconnect@702c0000/ahub/adx@702d3800";
		tegra_adx2 = "/aconnect@702c0000/ahub/adx@702d3900";
		tegra_dmic1 = "/aconnect@702c0000/ahub/dmic@702d4000";
		tegra_dmic2 = "/aconnect@702c0000/ahub/dmic@702d4100";
		tegra_dmic3 = "/aconnect@702c0000/ahub/dmic@702d4200";
		tegra_afc1 = "/aconnect@702c0000/ahub/afc@702d7000";
		tegra_afc2 = "/aconnect@702c0000/ahub/afc@702d7100";
		tegra_afc3 = "/aconnect@702c0000/ahub/afc@702d7200";
		tegra_afc4 = "/aconnect@702c0000/ahub/afc@702d7300";
		tegra_afc5 = "/aconnect@702c0000/ahub/afc@702d7400";
		tegra_afc6 = "/aconnect@702c0000/ahub/afc@702d7500";
		tegra_mvc1 = "/aconnect@702c0000/ahub/mvc@702da000";
		tegra_mvc2 = "/aconnect@702c0000/ahub/mvc@702da200";
		tegra_iqc1 = "/aconnect@702c0000/ahub/iqc@702de000";
		tegra_iqc2 = "/aconnect@702c0000/ahub/iqc@702de200";
		tegra_ope1 = "/aconnect@702c0000/ahub/ope@702d8000";
		tegra_ope2 = "/aconnect@702c0000/ahub/ope@702d8400";
		tegra_adsp_audio = "/aconnect@702c0000/adsp_audio";
		apbdma = "/dma@60020000";
		pinmux = "/pinmux@700008d4";
		clkreq_0_bi_dir_state = "/pinmux@700008d4/clkreq_0_bi_dir";
		clkreq_1_bi_dir_state = "/pinmux@700008d4/clkreq_1_bi_dir";
		clkreq_0_in_dir_state = "/pinmux@700008d4/clkreq_0_in_dir";
		clkreq_1_in_dir_state = "/pinmux@700008d4/clkreq_1_in_dir";
		sdmmc1_schmitt_enable_state = "/pinmux@700008d4/sdmmc1_schmitt_enable";
		sdmmc1_schmitt_disable_state = "/pinmux@700008d4/sdmmc1_schmitt_disable";
		sdmmc1_clk_schmitt_enable_state = "/pinmux@700008d4/sdmmc1_clk_schmitt_enable";
		sdmmc1_clk_schmitt_disable_state = "/pinmux@700008d4/sdmmc1_clk_schmitt_disable";
		sdmmc1_drv_code_1_8V = "/pinmux@700008d4/sdmmc1_drv_code";
		sdmmc1_default_drv_code_3_3V = "/pinmux@700008d4/sdmmc1_default_drv_code";
		sdmmc3_schmitt_enable_state = "/pinmux@700008d4/sdmmc3_schmitt_enable";
		sdmmc3_schmitt_disable_state = "/pinmux@700008d4/sdmmc3_schmitt_disable";
		sdmmc3_clk_schmitt_enable_state = "/pinmux@700008d4/sdmmc3_clk_schmitt_enable";
		sdmmc3_clk_schmitt_disable_state = "/pinmux@700008d4/sdmmc3_clk_schmitt_disable";
		sdmmc3_drv_code_1_8V = "/pinmux@700008d4/sdmmc3_drv_code";
		sdmmc3_default_drv_code_3_3V = "/pinmux@700008d4/sdmmc3_default_drv_code";
		dvfs_pwm_active_state = "/pinmux@700008d4/dvfs_pwm_active";
		dvfs_pwm_inactive_state = "/pinmux@700008d4/dvfs_pwm_inactive";
		pinmux_default = "/pinmux@700008d4/common";
		pinmux_unused_lowpower = "/pinmux@700008d4/unused_lowpower";
		drive_default = "/pinmux@700008d4/drive";
		gpio = "/gpio@6000d000";
		e2614_audio_pins = "/gpio@6000d000/e2614-rt5658-audio";
		suspend_gpio = "/gpio@6000d000/system-suspend-gpio";
		gpio_default = "/gpio@6000d000/default";
		xusb_mbox = "/mailbox@70098000";
		xusb_padctl = "/xusb_padctl@7009f000";
		tegra_usb_cd = "/usb_cd";
		tegra_padctl_uphy = "/pinctrl@7009f000";
		tegra_ext_cdp = "/max16984-cdp";
		uarta = "/serial@70006000";
		uartb = "/serial@70006040";
		uartc = "/serial@70006200";
		uartd = "/serial@70006300";
		tegra_sound = "/sound";
		hdr40_snd_link_i2s = "/sound/nvidia,dai-link-1";
		i2s_dai_link1 = "/sound/nvidia,dai-link-1";
		tegra_pwm = "/pwm@7000a000";
		spi0 = "/spi@7000d400";
		spi1 = "/spi@7000d600";
		spi2 = "/spi@7000d800";
		spi3 = "/spi@7000da00";
		qspi6 = "/spi@70410000";
		host1x = "/host1x";
		vi_base = "/host1x/vi";
		vi_port0 = "/host1x/vi/ports/port@0";
		rbpcv2_imx219_vi_in0 = "/host1x/vi/ports/port@0/endpoint";
		vi_port1 = "/host1x/vi/ports/port@1";
		rbpcv2_imx219_vi_in1 = "/host1x/vi/ports/port@1/endpoint";
		head0 = "/host1x/dc@54200000";
		head1 = "/host1x/dc@54240000";
		dsi = "/host1x/dsi";
		sor0 = "/host1x/sor";
		sor0_hdmi_display = "/host1x/sor/hdmi-display";
		sor0_dp_display = "/host1x/sor/dp-display";
		sor1 = "/host1x/sor1";
		sor1_hdmi_display = "/host1x/sor1/hdmi-display";
		sor1_dp_display = "/host1x/sor1/dp-display";
		dpaux0 = "/host1x/dpaux";
		dpaux1 = "/host1x/dpaux1";
		i2c7 = "/host1x/i2c@546c0000";
		imx219_single_cam0 = "/host1x/i2c@546c0000/rbpcv2_imx219_a@10";
		rbpcv2_imx219_out0 = "/host1x/i2c@546c0000/rbpcv2_imx219_a@10/ports/port@0/endpoint";
		ina3221x = "/host1x/i2c@546c0000/ina3221x@40";
		csi_base = "/host1x/nvcsi";
		csi_chan0 = "/host1x/nvcsi/channel@0";
		csi_chan0_port0 = "/host1x/nvcsi/channel@0/ports/port@0";
		rbpcv2_imx219_csi_in0 = "/host1x/nvcsi/channel@0/ports/port@0/endpoint@0";
		csi_chan0_port1 = "/host1x/nvcsi/channel@0/ports/port@1";
		rbpcv2_imx219_csi_out0 = "/host1x/nvcsi/channel@0/ports/port@1/endpoint@1";
		csi_chan1 = "/host1x/nvcsi/channel@1";
		csi_chan1_port0 = "/host1x/nvcsi/channel@1/ports/port@2";
		rbpcv2_imx219_csi_in1 = "/host1x/nvcsi/channel@1/ports/port@2/endpoint@2";
		csi_chan1_port1 = "/host1x/nvcsi/channel@1/ports/port@3";
		rbpcv2_imx219_csi_out1 = "/host1x/nvcsi/channel@1/ports/port@3/endpoint@3";
		tegra_pmc = "/pmc@7000e400";
		pex_io_dpd_disable_state = "/pmc@7000e400/pex_en";
		pex_io_dpd_enable_state = "/pmc@7000e400/pex_dis";
		hdmi_dpd_enable = "/pmc@7000e400/hdmi-dpd-enable";
		hdmi_dpd_disable = "/pmc@7000e400/hdmi-dpd-disable";
		dsi_dpd_enable = "/pmc@7000e400/dsi-dpd-enable";
		dsi_dpd_disable = "/pmc@7000e400/dsi-dpd-disable";
		dsib_dpd_enable = "/pmc@7000e400/dsib-dpd-enable";
		dsib_dpd_disable = "/pmc@7000e400/dsib-dpd-disable";
		pinctrl_iopad_default = "/pmc@7000e400/iopad-defaults";
		sdmmc1_e_33V_enable = "/pmc@7000e400/sdmmc1_e_33V_enable";
		sdmmc1_e_33V_disable = "/pmc@7000e400/sdmmc1_e_33V_disable";
		sdmmc3_e_33V_enable = "/pmc@7000e400/sdmmc3_e_33V_enable";
		sdmmc3_e_33V_disable = "/pmc@7000e400/sdmmc3_e_33V_disable";
		se = "/se@70012000";
		hdr40_i2c0 = "/i2c@7000c000";
		i2c1 = "/i2c@7000c000";
		tegra_nct72 = "/i2c@7000c000/temp-sensor@4c";
		hdr40_i2c1 = "/i2c@7000c400";
		i2c2 = "/i2c@7000c400";
		e2614_i2c_mux = "/i2c@7000c400/i2cmux@70";
		e2614_rt5658_b00 = "/i2c@7000c400/i2cmux@70/i2c@3/rt5659.12-001a@1a";
		e2614_gpio_i2c_1_20 = "/i2c@7000c400/gpio@20";
		e2614_icm20628 = "/i2c@7000c400/icm20628@68";
		e2614_ak8963 = "/i2c@7000c400/ak8963@0d";
		e2614_cm32180 = "/i2c@7000c400/cm32180@48";
		e2614_iqs263 = "/i2c@7000c400/iqs263@44";
		e2614_rt5658_a00 = "/i2c@7000c400/rt5659.1-001a@1a";
		i2c3 = "/i2c@7000c500";
		hdmi_ddc = "/i2c@7000c700";
		i2c4 = "/i2c@7000c700";
		i2c5 = "/i2c@7000d000";
		max77620 = "/i2c@7000d000/max77620@3c";
		max77620_default = "/i2c@7000d000/max77620@3c/pinmux@0";
		spmic_wdt = "/i2c@7000d000/max77620@3c/watchdog";
		max77620_sd0 = "/i2c@7000d000/max77620@3c/regulators/sd0";
		max77620_sd1 = "/i2c@7000d000/max77620@3c/regulators/sd1";
		max77620_sd2 = "/i2c@7000d000/max77620@3c/regulators/sd2";
		max77620_sd3 = "/i2c@7000d000/max77620@3c/regulators/sd3";
		max77620_ldo0 = "/i2c@7000d000/max77620@3c/regulators/ldo0";
		max77620_ldo1 = "/i2c@7000d000/max77620@3c/regulators/ldo1";
		max77620_ldo2 = "/i2c@7000d000/max77620@3c/regulators/ldo2";
		max77620_ldo3 = "/i2c@7000d000/max77620@3c/regulators/ldo3";
		max77620_ldo4 = "/i2c@7000d000/max77620@3c/regulators/ldo4";
		max77620_ldo5 = "/i2c@7000d000/max77620@3c/regulators/ldo5";
		max77620_ldo6 = "/i2c@7000d000/max77620@3c/regulators/ldo6";
		max77620_ldo7 = "/i2c@7000d000/max77620@3c/regulators/ldo7";
		max77620_ldo8 = "/i2c@7000d000/max77620@3c/regulators/ldo8";
		i2c6 = "/i2c@7000d100";
		sdmmc4 = "/sdhci@700b0600";
		sdhci3 = "/sdhci@700b0600";
		sdmmc3 = "/sdhci@700b0400";
		sdhci2 = "/sdhci@700b0400";
		sdmmc2 = "/sdhci@700b0200";
		sdhci1 = "/sdhci@700b0200";
		sdmmc1 = "/sdhci@700b0000";
		sdhci0 = "/sdhci@700b0000";
		tegra_mc = "/memory-controller@70019000";
		tegra_pwm_dfll =3D "/pwm@70110000";=0A=
		tegra_clk_dfll =3D "/clock@70110000";=0A=
		soctherm =3D "/soctherm@0x700E2000";=0A=
		throttle_heavy =3D "/soctherm@0x700E2000/throttle-cfgs/heavy";=0A=
		throttle_oc1 =3D "/soctherm@0x700E2000/throttle-cfgs/oc1";=0A=
		throttle_oc3 =3D "/soctherm@0x700E2000/throttle-cfgs/oc3";=0A=
		tegra_wdt =3D "/watchdog@60005100";=0A=
		tegra_watchdog =3D "/watchdog@60005100";=0A=
		id_gpio_extcon =3D "/extcon/extcon@0";=0A=
		vbus_id_gpio_extcon =3D "/extcon/extcon@1";=0A=
		IPI =3D "/smp-custom-ipi";=0A=
		tegra210_emc_dram_cdev =3D "/external-memory-controller@7001b000";=0A=
		dummy_cool_dev =3D "/dummy-cool-dev";=0A=
		battery_reg =3D "/regulators/regulator@0";=0A=
		hdr40_vdd_5v0 =3D "/regulators/regulator@1";=0A=
		p3449_vdd_5v0_sys =3D "/regulators/regulator@1";=0A=
		hdr40_vdd_3v3 =3D "/regulators/regulator@2";=0A=
		p3448_vdd_3v3_sys =3D "/regulators/regulator@2";=0A=
		p3448_vdd_3v3_sd =3D "/regulators/regulator@3";=0A=
		p3448_avdd_io_edp =3D "/regulators/regulator@4";=0A=
		p3449_vdd_hdmi =3D "/regulators/regulator@5";=0A=
		p3449_vdd_1v8 =3D "/regulators/regulator@6";=0A=
		p3449_vdd_fan =3D "/regulators/regulator@7";=0A=
		p3449_vdd_usb_vbus =3D "/regulators/regulator@8";=0A=
		p3449_vdd_usb_hub_en =3D "/regulators/regulator@9";=0A=
		p3449_vdd_usb_vbus2 =3D "/regulators/regulator@10";=0A=
		gpu_scaling_cdev =3D "/dvfs_rails/vdd-gpu-scaling-cdev@7";=0A=
		gpu_vmax_cdev =3D "/dvfs_rails/vdd-gpu-vmax-cdev@9";=0A=
		pwm_fan_shared_data =3D "/pfsd";=0A=
		tcp =3D "/tegra-camera-platform";=0A=
		cam_module0 =3D "/tegra-camera-platform/modules/module0";=0A=
		cam_module0_drivernode0 =3D =
"/tegra-camera-platform/modules/module0/drivernode0";=0A=
		cam_module0_drivernode1 =3D =
"/tegra-camera-platform/modules/module0/drivernode1";=0A=
		cam_module1 =3D "/tegra-camera-platform/modules/module1";=0A=
		cam_module1_drivernode0 =3D =
"/tegra-camera-platform/modules/module1/drivernode0";=0A=
		cam_module1_drivernode1 =3D =
"/tegra-camera-platform/modules/module1/drivernode1";=0A=
		i2c_0 =3D "/cam_i2cmux/i2c@0";=0A=
		imx219_cam0 =3D "/cam_i2cmux/i2c@0/rbpcv2_imx219_a@10";=0A=
		rbpcv2_imx219_dual_out0 =3D =
"/cam_i2cmux/i2c@0/rbpcv2_imx219_a@10/ports/port@0/endpoint";=0A=
		i2c_1 =3D "/cam_i2cmux/i2c@1";=0A=
		imx219_cam1 =3D "/cam_i2cmux/i2c@1/rbpcv2_imx219_e@10";=0A=
		rbpcv2_imx219_out1 =3D =
"/cam_i2cmux/i2c@1/rbpcv2_imx219_e@10/ports/port@0/endpoint";=0A=
		thermal_fan_est_shared_data =3D "/tfesd";=0A=
		spdif_dit0 =3D "/spdif-dit.0@0";=0A=
		spdif_dit1 =3D "/spdif-dit.1@1";=0A=
		spdif_dit2 =3D "/spdif-dit.2@2";=0A=
		spdif_dit3 =3D "/spdif-dit.3@3";=0A=
		spdif_dit4 =3D "/spdif-dit.4@4";=0A=
		spdif_dit5 =3D "/spdif-dit.5@5";=0A=
		spdif_dit6 =3D "/spdif-dit.6@6";=0A=
		spdif_dit7 =3D "/spdif-dit.7@7";=0A=
		e2614_gps_wake =3D "/gps_wake";=0A=
		cpu_ovr_reg =3D "/pwm_regulators/pwm-regulator@0";=0A=
		i2c_dfll =3D "/dfll-max77621@70110000/dfll-max77621-integration";=0A=
		dfll_max77621_parms =3D =
"/dfll-max77621@70110000/dfll-max77621-board-params";=0A=
		dfll_cap =3D "/dfll-cdev-cap";=0A=
		dfll_floor =3D "/dfll-cdev-floor";=0A=
		tegra_udrm =3D "/tegra_udrm";=0A=
		soft_wdt =3D "/soft_watchdog";=0A=
	};=0A=
};=0A=




From xen-devel-bounces@lists.xenproject.org Fri Jul 24 15:08:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jul 2020 15:08:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyzJ0-0007Cj-0Y; Fri, 24 Jul 2020 15:08:14 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hWcK=BD=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jyzIy-0007Ce-G1
 for xen-devel@lists.xenproject.org; Fri, 24 Jul 2020 15:08:12 +0000
X-Inumbo-ID: 7c63bc3b-cdbf-11ea-8838-bc764e2007e4
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7c63bc3b-cdbf-11ea-8838-bc764e2007e4;
 Fri, 24 Jul 2020 15:08:11 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595603292;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:content-transfer-encoding:in-reply-to;
 bh=rbtsgmXzdcMKKFPBYyCVlm3fdoFv5piluTNywmXvKpE=;
 b=CchAOKCnMtg7N9vJlDtDIaAkIYDfj9t/S5eAF/PEKUn7cZvbDCK37PFw
 +oPFPk6JT7vDZPHkrj3TazdOD3m/AsmgJY1C9iBakJ5F/QwpmU9uIS7xP
 DBDWu0NPGmuybctKfCHM1IPiWto36bu0cgXON3dlsKcYy6obmaMsiMMWq 0=;
Authentication-Results: esa2.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: 8fO+meb/pv8aapB3tJzFlGIkmC2KKSC4ZHrvLEfgqIgfu+RkcBS6YHbksVdXX1/6ypkScl64b+
 UBbLekqSkuH68vx8HGlNGRoDxBl35mc5uSRiqV1sUHN4aqHqXcr4evVn10JQOr143Iz2linXRF
 8hn5BOZ3RS3mGLgjksanOuiDFQl3uQJuGyiUXigrn5x5cvMDEsCZIe/HXkY+R2c+Cn1j9636zi
 DMnFxqilZPSU13ggBhfy4ABqkeKs3f54EXbnxhhngAkqR/28G9CyNeuMSekf2tn498YxXyK0gL
 CM0=
X-SBRS: 2.7
X-MesageID: 23153294
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,391,1589256000"; d="scan'208";a="23153294"
Date: Fri, 24 Jul 2020 17:08:00 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Rahul Singh <rahul.singh@arm.com>
Subject: Re: [RFC PATCH v1 3/4] xen/arm: Enable the existing x86 virtual PCI
 support for ARM.
Message-ID: <20200724150800.GL7191@Air-de-Roger>
References: <cover.1595511416.git.rahul.singh@arm.com>
 <c719ed8e92720d0b470a130c1264f8296dac32ac.1595511416.git.rahul.singh@arm.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <c719ed8e92720d0b470a130c1264f8296dac32ac.1595511416.git.rahul.singh@arm.com>
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Paul Durrant <paul@xen.org>, Bertrand.Marquis@arm.com,
 Jan Beulich <jbeulich@suse.com>, xen-devel@lists.xenproject.org, nd@arm.com,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Thu, Jul 23, 2020 at 04:40:23PM +0100, Rahul Singh wrote:
> The existing VPCI support available for X86 is adapted for Arm.
> When a device is added to XEN via the hypercall
> “PHYSDEVOP_pci_device_add”, a VPCI handler for config space
> accesses is added to the PCI device so that it can be emulated.
> 
> An MMIO trap handler for the PCI ECAM space is registered in XEN
> so that when a guest tries to access the PCI config space, XEN
> will trap the access and emulate the read/write using VPCI
> rather than the real PCI hardware.
> 
> VPCI MSI support is disabled for ARM as it has not been tested on ARM.

I'm not seeing anything in this patch that would disable vPCI MSI
support?

> 
> Change-Id: I5501db2781f8064640403fecce53713091cd9ab4
> Signed-off-by: Rahul Singh <rahul.singh@arm.com>
> ---
>  xen/arch/arm/Makefile         |   1 +
>  xen/arch/arm/domain.c         |   4 ++
>  xen/arch/arm/vpci.c           | 102 ++++++++++++++++++++++++++++++++++
>  xen/arch/arm/vpci.h           |  37 ++++++++++++
>  xen/drivers/passthrough/pci.c |   7 +++
>  xen/include/asm-arm/domain.h  |   5 ++
>  xen/include/public/arch-arm.h |   4 ++
>  7 files changed, 160 insertions(+)
>  create mode 100644 xen/arch/arm/vpci.c
>  create mode 100644 xen/arch/arm/vpci.h
> 
> diff --git a/xen/arch/arm/Makefile b/xen/arch/arm/Makefile
> index 345cb83eed..5a23ec5cc0 100644
> --- a/xen/arch/arm/Makefile
> +++ b/xen/arch/arm/Makefile
> @@ -7,6 +7,7 @@ obj-y += platforms/
>  endif
>  obj-$(CONFIG_TEE) += tee/
>  obj-$(CONFIG_ARM_PCI) += pci/
> +obj-$(CONFIG_HAS_VPCI) += vpci.o
>  
>  obj-$(CONFIG_HAS_ALTERNATIVE) += alternative.o
>  obj-y += bootfdt.init.o
> diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
> index 31169326b2..23098ffd02 100644
> --- a/xen/arch/arm/domain.c
> +++ b/xen/arch/arm/domain.c
> @@ -39,6 +39,7 @@
>  #include <asm/vtimer.h>
>  
>  #include "vuart.h"
> +#include "vpci.h"
>  
>  DEFINE_PER_CPU(struct vcpu *, curr_vcpu);
>  
> @@ -747,6 +748,9 @@ int arch_domain_create(struct domain *d,
>      if ( is_hardware_domain(d) && (rc = domain_vuart_init(d)) )
>          goto fail;
>  
> +    if ( (rc = domain_vpci_init(d)) != 0 )
> +        goto fail;
> +
>      return 0;
>  
>  fail:
> diff --git a/xen/arch/arm/vpci.c b/xen/arch/arm/vpci.c
> new file mode 100644
> index 0000000000..49e473ab0d
> --- /dev/null
> +++ b/xen/arch/arm/vpci.c
> @@ -0,0 +1,102 @@
> +/*
> + * xen/arch/arm/vpci.c
> + * Copyright (c) 2020 Arm Ltd.
> + *
> + * Based on arch/x86/hvm/io.c
> + * Copyright (c) 2004, Intel Corporation.
> + * Copyright (c) 2005, International Business Machines Corporation.
> + * Copyright (c) 2008, Citrix Systems, Inc.
> + *
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License as published by
> + * the Free Software Foundation; either version 2 of the License, or
> + * (at your option) any later version.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> + * GNU General Public License for more details.
> + */
> +#include <xen/sched.h>
> +#include <asm/mmio.h>
> +
> +/* Do some sanity checks. */
> +static bool vpci_mmio_access_allowed(unsigned int reg, unsigned int len)

This is just a copy of vpci_access_allowed from x86; I think you
should consider moving the function to common vpci code and just
sharing it between x86 and Arm?

> +{
> +    /* Check access size. */
> +    if ( len != 1 && len != 2 && len != 4 && len != 8 )
> +        return false;
> +
> +    /* Check that access is size aligned. */
> +    if ( (reg & (len - 1)) )
> +        return false;
> +
> +    return true;
> +}
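As an aside, the rule the function encodes is compact enough to exercise standalone. A sketch of the same check (my own helper name, not the Xen source: valid sizes are powers of two, so `reg & (len - 1)` is non-zero exactly when the access is misaligned):

```c
#include <assert.h>
#include <stdbool.h>

/* Config-space accesses must be 1, 2, 4 or 8 bytes and naturally
 * aligned.  Since every valid len is a power of two, len - 1 is a
 * mask of the low bits, and ANDing it with reg isolates the offset
 * of the access within its own size. */
static bool access_allowed(unsigned int reg, unsigned int len)
{
    if ( len != 1 && len != 2 && len != 4 && len != 8 )
        return false;

    return (reg & (len - 1)) == 0;
}
```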
> +
> +static int vpci_mmio_read(struct vcpu *v, mmio_info_t *info,
> +        register_t *r, void *priv)
> +{
> +    unsigned int reg;
> +    pci_sbdf_t sbdf;
> +    uint32_t data = 0;
> +    unsigned int size = 1U << info->dabt.size;
> +
> +    sbdf.bdf = (((info->gpa) & 0x0ffff000) >> 12);
> +    reg = (((info->gpa) & 0x00000ffc) | (info->gpa & 3));
> +
> +    if ( !vpci_mmio_access_allowed(reg, size) )
> +        return 1;
> +
> +    data = vpci_read(sbdf, reg, size);
> +
> +    memcpy(r, &data, size);
> +
> +    return 1;
> +}
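For reference, the two masks above implement the standard ECAM layout: the guest physical offset encodes bus in bits [27:20], device in [19:15], function in [14:12] and the register in [11:0]. A standalone sketch of the decode (illustrative helper names, not Xen's):

```c
#include <assert.h>
#include <stdint.h>

/* ECAM offset -> 16-bit BDF: bus(8) | dev(5) | fn(3). */
static uint16_t ecam_bdf(uint64_t gpa)
{
    return (gpa & 0x0ffff000) >> 12;
}

/* ECAM offset -> config-space register, keeping the low two bits
 * so sub-word accesses land on the right byte. */
static unsigned int ecam_reg(uint64_t gpa)
{
    return (gpa & 0x00000ffc) | (gpa & 3);
}
```

For example, an access at ECAM offset 0x113044 decodes to bus 1, device 2, function 3, register 0x44.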
> +
> +static int vpci_mmio_write(struct vcpu *v, mmio_info_t *info,
> +        register_t r, void *priv)
> +{
> +    unsigned int reg;
> +    pci_sbdf_t sbdf;
> +    uint32_t data = r;
> +    unsigned int size = 1U << info->dabt.size;
> +
> +    sbdf.bdf = (((info->gpa) & 0x0ffff000) >> 12);
> +    reg = (((info->gpa) & 0x00000ffc) | (info->gpa & 3));
> +
> +    if ( !vpci_mmio_access_allowed(reg, size) )
> +        return 1;
> +
> +    vpci_write(sbdf, reg, size, data);
> +
> +    return 1;

Both functions will only return 1 always, so can likely drop the
return value completely?

> +}
> +
> +static const struct mmio_handler_ops vpci_mmio_handler = {
> +    .read  = vpci_mmio_read,
> +    .write = vpci_mmio_write,
> +};
> +
> +int domain_vpci_init(struct domain *d)

FWIW, I think you can drop the domain_ prefix, vPCI is always tied to
a domain.

> +{
> +    if ( !has_vpci(d) || is_hardware_domain(d) )

I wouldn't add a hardware domain exception here, and just make sure
the VPCI flag is not set for the hardware domain on Arm if you don't
want to use it there.

> +        return 0;
> +
> +    register_mmio_handler(d, &vpci_mmio_handler,
> +            GUEST_VPCI_ECAM_BASE,GUEST_VPCI_ECAM_SIZE,NULL);

Missing spaces, and proper indentation.

> +
> +    return 0;

Doesn't seem like domain_vpci_init can fail, so you can likely skip
the return value?

> +}
> +
> +/*
> + * Local variables:
> + * mode: C
> + * c-file-style: "BSD"
> + * c-basic-offset: 4
> + * indent-tabs-mode: nil
> + * End:
> + */
> +
> diff --git a/xen/arch/arm/vpci.h b/xen/arch/arm/vpci.h
> new file mode 100644
> index 0000000000..20dce1f4c4
> --- /dev/null
> +++ b/xen/arch/arm/vpci.h
> @@ -0,0 +1,37 @@
> +/*
> + * xen/arch/arm/vpci.h
> + * Copyright (c) 2020 Arm Ltd.
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License as published by
> + * the Free Software Foundation; either version 2 of the License, or
> + * (at your option) any later version.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> + * GNU General Public License for more details.
> + */
> +
> +#ifndef __ARCH_ARM_VPCI_H__
> +#define __ARCH_ARM_VPCI_H__
> +
> +#ifdef CONFIG_HAS_VPCI
> +int domain_vpci_init(struct domain *d);
> +#else
> +static inline int domain_vpci_init(struct domain *d)
> +{
> +    return 0;
> +}
> +#endif
> +
> +#endif /* __ARCH_ARM_VPCI_H__ */
> +
> +/*
> + * Local variables:
> + * mode: C
> + * c-file-style: "BSD"
> + * c-basic-offset: 4
> + * indent-tabs-mode: nil
> + * End:
> + */
> diff --git a/xen/drivers/passthrough/pci.c b/xen/drivers/passthrough/pci.c
> index 5846978890..28511eb641 100644
> --- a/xen/drivers/passthrough/pci.c
> +++ b/xen/drivers/passthrough/pci.c
> @@ -804,6 +804,13 @@ int pci_add_device(u16 seg, u8 bus, u8 devfn,
>      else
>          iommu_enable_device(pdev);
>  
> +#ifdef CONFIG_ARM
> +    ret = vpci_add_handlers(pdev);

Don't you need to drop the __hwdom_init from that function? Or else it
might be freed by the time dom0 calls pci_add_device?

> +    if ( ret ) {
> +        printk(XENLOG_ERR "setup of vPCI for failed: %d\n",ret);
> +        goto out;
> +    }
> +#endif
>      pci_enable_acs(pdev);
>  
>  out:
> diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
> index 4e2f582006..ad70610226 100644
> --- a/xen/include/asm-arm/domain.h
> +++ b/xen/include/asm-arm/domain.h
> @@ -34,6 +34,11 @@ enum domain_type {
>  /* The hardware domain has always its memory direct mapped. */
>  #define is_domain_direct_mapped(d) ((d) == hardware_domain)
>  
> +/* For X86 VPCI is enabled and tested for PVH DOM0 only but
> + * for ARM we enable support VPCI for guest domain also.
> + */
> +#define has_vpci(d) (true)

Urg, that's kind of unconditional, couldn't you pass a flag for
user-space in order to signal whether vPCI should be enabled? There's
no point in enabling it if the domain doesn't support PCI passthrough,
or if there are no PCI devices on the system.

Roger.


From xen-devel-bounces@lists.xenproject.org Fri Jul 24 15:15:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jul 2020 15:15:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyzQP-00089u-QT; Fri, 24 Jul 2020 15:15:53 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5T8C=BD=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jyzQP-00089p-1E
 for xen-devel@lists.xenproject.org; Fri, 24 Jul 2020 15:15:53 +0000
X-Inumbo-ID: 8f62c4e2-cdc0-11ea-a412-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 8f62c4e2-cdc0-11ea-a412-12813bfff9fa;
 Fri, 24 Jul 2020 15:15:51 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=wHEtOyXAmYV0hGwJAz85c5e9GF4DwuFI0+o+BdGXrXE=; b=yrd3JOqiHcq+D0HK7UV4kEg94V
 A6eKzfE4f1L5IUQkukRmO8LsmU9OMCv/BcnWRiaQGInuddDwKI8/56OYFSw2BjXc8zQWtRRB9yW6Q
 cvLJG0TTbiW4BIZ/1JFvSl/pH7gX33OQB5ty8ldaAGh6wuyNaFxgQxWiowYopNvO99gA=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jyzQL-0002cC-UO; Fri, 24 Jul 2020 15:15:49 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jyzQL-00015N-NJ; Fri, 24 Jul 2020 15:15:49 +0000
Subject: Re: [RFC PATCH v1 1/4] arm/pci: PCI setup and PCI host bridge
 discovery within XEN on ARM.
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Rahul Singh <rahul.singh@arm.com>
References: <cover.1595511416.git.rahul.singh@arm.com>
 <64ebd4ef614b36a5844c52426a4a6a4a23b1f087.1595511416.git.rahul.singh@arm.com>
 <20200724144404.GJ7191@Air-de-Roger>
From: Julien Grall <julien@xen.org>
Message-ID: <0c53b2cb-47e9-f34e-8922-7095669175be@xen.org>
Date: Fri, 24 Jul 2020 16:15:47 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:68.0)
 Gecko/20100101 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20200724144404.GJ7191@Air-de-Roger>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, nd@arm.com,
 Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, Bertrand.Marquis@arm.com
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>



On 24/07/2020 15:44, Roger Pau Monné wrote:
>> diff --git a/xen/arch/arm/pci/Makefile b/xen/arch/arm/pci/Makefile
>> new file mode 100644
>> index 0000000000..358508b787
>> --- /dev/null
>> +++ b/xen/arch/arm/pci/Makefile
>> @@ -0,0 +1,4 @@
>> +obj-y += pci.o
>> +obj-y += pci-host-generic.o
>> +obj-y += pci-host-common.o
>> +obj-y += pci-access.o
> 
> The Kconfig option mentions the support being explicitly for ARM64,
> would it be better to place the code in arch/arm/arm64 then?
I don't believe any of the code in this series is very arm64 specific. I 
guess it was just only tested on arm64. So I would rather keep that 
under arm/pci.

>> +
>> +    struct pci_host_bridge *bridge = pci_find_host_bridge(sbdf.seg, sbdf.bus);
>> +
>> +    if ( unlikely(!bridge) )
>> +    {
>> +        printk(XENLOG_ERR "Unable to find bridge for "PRI_pci"\n",
>> +                sbdf.seg, sbdf.bus, sbdf.dev, sbdf.fn);
> 
> I had a patch to add a custom modifier to our printf format in
> order to handle pci_sbdf_t natively:
> 
> https://patchew.org/Xen/20190822065132.48200-1-roger.pau@citrix.com/
> 
> It missed maintainers' Acks and was never committed. Since you are
> doing a bunch of work here, and likely adding a lot of SBDF related
> prints, feel free to import the modifier (%pp) and use in your code
> (do not attempt to switch existing users, or it's likely to get
> stuck again).

I forgot about this patch :/. It would be good to revive it. Which acks 
are you missing?

[...]

>> +static bool __init dt_pci_parse_bus_range(struct dt_device_node *dev,
>> +        struct pci_config_window *cfg)
>> +{
>> +    const __be32 *cells;
> 
> It's my impression that while based on Linux this is not a verbatim
> copy of a Linux file, and tries to adhere with the Xen coding style.
> If so please use uint32_t here.

uint32_t would be incorrect because this is a 32-bit value always in big 
endian. I don't think we have other typedef to show it is a 32-bit BE 
value, so __be32 is the best choice.
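To make the distinction concrete: DT property cells are stored big-endian in memory, so on a little-endian host they must be byte-swapped before use. A standalone sketch of what be32_to_cpu amounts to (my helper name, an illustration rather than the Xen implementation):

```c
#include <assert.h>
#include <stdint.h>

/* Assemble a host-order value from a big-endian 32-bit cell,
 * independent of the host's own endianness. */
static uint32_t be32_to_host(const uint8_t *p)
{
    return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16) |
           ((uint32_t)p[2] << 8)  |  (uint32_t)p[3];
}
```

Treating the cell as a plain uint32_t and dereferencing it directly would silently give the byte-swapped value on little-endian hosts, which is exactly what the __be32 annotation guards against.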

[...]

>> +
>> +    if ( acpi_disabled )
>> +        dt_pci_init();
>> +    else
>> +        acpi_pci_init();
> 
> Isn't there an enum or something that tells you whether the system
> description is coming from ACPI or from DT?
> 
> This if .. else seems fragile.
>

This is the common way we do it on Arm.... I would welcome any 
improvement, but I don't think this should be part of this work.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Jul 24 15:29:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jul 2020 15:29:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyzdL-0000op-7Q; Fri, 24 Jul 2020 15:29:15 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hWcK=BD=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jyzdK-0000of-7f
 for xen-devel@lists.xenproject.org; Fri, 24 Jul 2020 15:29:14 +0000
X-Inumbo-ID: 6ccacbd0-cdc2-11ea-883a-bc764e2007e4
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6ccacbd0-cdc2-11ea-883a-bc764e2007e4;
 Fri, 24 Jul 2020 15:29:12 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595604552;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:content-transfer-encoding:in-reply-to;
 bh=mOKcmmxpsrXCrY+JiEtPT3ajvKxHswL3c+g4nrs4kFQ=;
 b=NZrhV9IGnZYqOrMertRPK/Ppt28gXzWynZWaxey21ohuxdCnaMPeO9+8
 FXYXcmmlPOC0z+WwujQmUNcBxD5OPWqjaIs1VLuVrcmy53SqtUitnV8gz
 oIGukE27+GEfqIcX0gf1zSHXUVq4GNfAeHVDqTUBmtPBoMwTAiTv6YxnT A=;
Authentication-Results: esa6.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: a6NR0R5HA19khLegae+/VhED3/JNuA+FuS7znJ7LtzUAiIOal32pm2NM05SCYP2TgkgUCewOYg
 o5HX1bnSdZDx3lLRiO5JAtgWhz1q5NHOIiZdM7o+PJ2Z8tjRftfmysF4m2GZOZQEuxgrCxZjY5
 Evu4X87WTWdKoucOSzsqBG9/l0qhmK4RbkqoznodYds2jz1dtZTgt6Ug3+JuTKMVE/2Wdl3VM4
 3bR8APEjZzxlmpwEbiLnNW2ClpNMrlO17qIIR76O/Cu+jRFgCjORhVR8zMbz18P3rfzp+7pq9I
 /Ss=
X-SBRS: 2.7
X-MesageID: 23464643
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,391,1589256000"; d="scan'208";a="23464643"
Date: Fri, 24 Jul 2020 17:29:05 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Julien Grall <julien@xen.org>
Subject: Re: [RFC PATCH v1 1/4] arm/pci: PCI setup and PCI host bridge
 discovery within XEN on ARM.
Message-ID: <20200724152905.GM7191@Air-de-Roger>
References: <cover.1595511416.git.rahul.singh@arm.com>
 <64ebd4ef614b36a5844c52426a4a6a4a23b1f087.1595511416.git.rahul.singh@arm.com>
 <20200724144404.GJ7191@Air-de-Roger>
 <0c53b2cb-47e9-f34e-8922-7095669175be@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <0c53b2cb-47e9-f34e-8922-7095669175be@xen.org>
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Rahul Singh <rahul.singh@arm.com>, Bertrand.Marquis@arm.com,
 Stefano Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org,
 nd@arm.com, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Fri, Jul 24, 2020 at 04:15:47PM +0100, Julien Grall wrote:
> 
> 
> On 24/07/2020 15:44, Roger Pau Monné wrote:
> > > diff --git a/xen/arch/arm/pci/Makefile b/xen/arch/arm/pci/Makefile
> > > new file mode 100644
> > > index 0000000000..358508b787
> > > --- /dev/null
> > > +++ b/xen/arch/arm/pci/Makefile
> > > @@ -0,0 +1,4 @@
> > > +obj-y += pci.o
> > > +obj-y += pci-host-generic.o
> > > +obj-y += pci-host-common.o
> > > +obj-y += pci-access.o
> > 
> > The Kconfig option mentions the support being explicitly for ARM64,
> > would it be better to place the code in arch/arm/arm64 then?
> I don't believe any of the code in this series is very arm64 specific. I
> guess it was just only tested on arm64. So I would rather keep that under
> arm/pci.

Ack. Could the Kconfig be adjusted to not depend on ARM_64? Just
stating it's only been tested on Arm64 would be enough IMO.

> > > +
> > > +    struct pci_host_bridge *bridge = pci_find_host_bridge(sbdf.seg, sbdf.bus);
> > > +
> > > +    if ( unlikely(!bridge) )
> > > +    {
> > > +        printk(XENLOG_ERR "Unable to find bridge for "PRI_pci"\n",
> > > +                sbdf.seg, sbdf.bus, sbdf.dev, sbdf.fn);
> > 
> > I had a patch to add a custom modifier to our printf format in
> > order to handle pci_sbdf_t natively:
> > 
> > https://patchew.org/Xen/20190822065132.48200-1-roger.pau@citrix.com/
> > 
> > It missed maintainers' Acks and was never committed. Since you are
> > doing a bunch of work here, and likely adding a lot of SBDF related
> > prints, feel free to import the modifier (%pp) and use in your code
> > (do not attempt to switch existing users, or it's likely to get
> > stuck again).
> 
> I forgot about this patch :/. It would be good to revive it. Which acks are
> you missing?

I only had an Ack from Jan, so I was missing Intel and AMD Acks, which
would now only be Intel since AMD has been absorbed by the x86
maintainers.

> [...]
> 
> > > +static bool __init dt_pci_parse_bus_range(struct dt_device_node *dev,
> > > +        struct pci_config_window *cfg)
> > > +{
> > > +    const __be32 *cells;
> > 
> > It's my impression that while based on Linux this is not a verbatim
> > copy of a Linux file, and tries to adhere with the Xen coding style.
> > If so please use uint32_t here.
> 
> uint32_t would be incorrect because this is a 32-bit value always in big
> endian. I don't think we have other typedef to show it is a 32-bit BE value,
> so __be32 is the best choice.

Oh, OK, so this is done to explicitly denote the endianness of a value
on the type itself.

> [...]
> 
> > > +
> > > +    if ( acpi_disabled )
> > > +        dt_pci_init();
> > > +    else
> > > +        acpi_pci_init();
> > 
> > Isn't there an enum or something that tells you whether the system
> > description is coming from ACPI or from DT?
> > 
> > This if .. else seems fragile.
> > 
> 
> This is the common way we do it on Arm.... I would welcome any improvement,
> but I don't think this should be part of this work.

Ack. In any case I think for ACPI the PCI init will get called by
acpi_mmcfg_init as part of acpi_boot_init, so I'm not sure there's
much point in having something about ACPI added here, as it seems this
will be DT only?

Roger.


From xen-devel-bounces@lists.xenproject.org Fri Jul 24 15:43:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jul 2020 15:43:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyzqn-0002gz-Ho; Fri, 24 Jul 2020 15:43:09 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hWcK=BD=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jyzql-0002gu-Ol
 for xen-devel@lists.xenproject.org; Fri, 24 Jul 2020 15:43:07 +0000
X-Inumbo-ID: 5e13f498-cdc4-11ea-a419-12813bfff9fa
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 5e13f498-cdc4-11ea-a419-12813bfff9fa;
 Fri, 24 Jul 2020 15:43:06 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595605386;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:content-transfer-encoding:in-reply-to;
 bh=jBwu/lC1CRCBJQBBwKMwaiUjwXvhlSWzTbi+7dWIwo4=;
 b=XM6kJR2enGzp1pgMRCDOtNK40/l6HOnJeEFv4kI6sG51f6O9ayaUuVqw
 iepgvL/pbUQr7DR8hcqPeXHpPinptldZRSTMlotyFsN8ooOSEaOJCETes
 F0wUyWVVDUNAjPSjYyQl2ay60qViwGL2HFzFDohPfjWBv/nvaKddeMW1E A=;
Authentication-Results: esa6.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: zAX6CyxwJaoQzTG4kTZ0fXv6NecS5ad+meuXDFkXswObRqKFbuVydJ1jgPA/0WJp4rpDm/LqlU
 RilXjzxzcwDOiIBxR8RWoKbqNml0NjxFQlvNe1rW770sEvOxL6lgbGGxXiEvK5cWNc4Yyb1xKm
 kxeYh7O6QlHbHFLNU0nRp11clNYI57dtCQTfnhOryt0RiM3Lbs0+7kWPMdlHAOQlN4pjllB27K
 R5X4ABnz0bjoH60PvaCOwjiB7GuIkDlDleCN0szg36rhmgopboYeOHfcJNBr3ssE0sdTynoY1J
 aY8=
X-SBRS: 2.7
X-MesageID: 23466067
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,391,1589256000"; d="scan'208";a="23466067"
Date: Fri, 24 Jul 2020 17:42:58 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Julien Grall <julien@xen.org>
Subject: Re: [RFC PATCH v1 1/4] arm/pci: PCI setup and PCI host bridge
 discovery within XEN on ARM.
Message-ID: <20200724154258.GN7191@Air-de-Roger>
References: <cover.1595511416.git.rahul.singh@arm.com>
 <64ebd4ef614b36a5844c52426a4a6a4a23b1f087.1595511416.git.rahul.singh@arm.com>
 <20200724144404.GJ7191@Air-de-Roger>
 <0c53b2cb-47e9-f34e-8922-7095669175be@xen.org>
 <20200724152905.GM7191@Air-de-Roger>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20200724152905.GM7191@Air-de-Roger>
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Rahul Singh <rahul.singh@arm.com>, Bertrand.Marquis@arm.com, Stefano
 Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org, nd@arm.com,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Fri, Jul 24, 2020 at 05:29:05PM +0200, Roger Pau Monné wrote:
> On Fri, Jul 24, 2020 at 04:15:47PM +0100, Julien Grall wrote:
> > 
> > 
> > On 24/07/2020 15:44, Roger Pau Monné wrote:
> > > > +
> > > > +    if ( acpi_disabled )
> > > > +        dt_pci_init();
> > > > +    else
> > > > +        acpi_pci_init();
> > > 
> > > Isn't there an enum or something that tells you whether the system
> > > description is coming from ACPI or from DT?
> > > 
> > > This if .. else seems fragile.
> > > 
> > 
> > This is the common way we do it on Arm.... I would welcome any improvement,
> > but I don't think this should be part of this work.
> 
> Ack. In any case I think that for ACPI, PCI init will get called by
> acpi_mmcfg_init as part of acpi_boot_init, so I'm not sure there's
> much point in adding anything about ACPI here, as it seems this
> will be DT only?

Sorry, I got confused: acpi_boot_init is an x86-specific function, so
wrong context. If Arm is not using that path then maybe it makes sense
to init PCI here for ACPI as well.

Roger.


From xen-devel-bounces@lists.xenproject.org Fri Jul 24 15:46:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jul 2020 15:46:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jyzto-0002oO-13; Fri, 24 Jul 2020 15:46:16 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5T8C=BD=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jyztn-0002oJ-46
 for xen-devel@lists.xenproject.org; Fri, 24 Jul 2020 15:46:15 +0000
X-Inumbo-ID: ce02be92-cdc4-11ea-883a-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ce02be92-cdc4-11ea-883a-bc764e2007e4;
 Fri, 24 Jul 2020 15:46:14 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=1116xAQKmyYC/uoGYUlZN0DzCToCu4jzaVyHnY5TRak=; b=S47LNs+S0/lOv41GITFfKigSML
 DjSSbXQVr84Px2gM+iZ2mCGXXEb6ePkDU6dYJ4xByotnMMs8068ctAn+8Y+1LIDk7i+wYYYu+333m
 WRxUSNYuR6lnEe4p2GaGTOArsjQlOKKnOA+ai6RsUDx3QNYLXc0v3fiCx4+bGHeZ0XWg=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jyztk-0003De-TH; Fri, 24 Jul 2020 15:46:12 +0000
Received: from 54-240-197-227.amazon.com ([54.240.197.227]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jyztk-0002ug-Kh; Fri, 24 Jul 2020 15:46:12 +0000
Subject: Re: [RFC PATCH v1 1/4] arm/pci: PCI setup and PCI host bridge
 discovery within XEN on ARM.
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <cover.1595511416.git.rahul.singh@arm.com>
 <64ebd4ef614b36a5844c52426a4a6a4a23b1f087.1595511416.git.rahul.singh@arm.com>
 <20200724144404.GJ7191@Air-de-Roger>
 <0c53b2cb-47e9-f34e-8922-7095669175be@xen.org>
 <20200724152905.GM7191@Air-de-Roger>
From: Julien Grall <julien@xen.org>
Message-ID: <69e478a9-7eee-4407-e811-2308dff19b79@xen.org>
Date: Fri, 24 Jul 2020 16:46:10 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:68.0)
 Gecko/20100101 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20200724152905.GM7191@Air-de-Roger>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Rahul Singh <rahul.singh@arm.com>, Bertrand.Marquis@arm.com,
 Stefano Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org,
 nd@arm.com, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>



On 24/07/2020 16:29, Roger Pau Monné wrote:
> On Fri, Jul 24, 2020 at 04:15:47PM +0100, Julien Grall wrote:
>>
>>
>> On 24/07/2020 15:44, Roger Pau Monné wrote:
>>>> diff --git a/xen/arch/arm/pci/Makefile b/xen/arch/arm/pci/Makefile
>>>> new file mode 100644
>>>> index 0000000000..358508b787
>>>> --- /dev/null
>>>> +++ b/xen/arch/arm/pci/Makefile
>>>> @@ -0,0 +1,4 @@
>>>> +obj-y += pci.o
>>>> +obj-y += pci-host-generic.o
>>>> +obj-y += pci-host-common.o
>>>> +obj-y += pci-access.o
>>>
>>> The Kconfig option mentions the support being explicitly for ARM64,
>>> would it be better to place the code in arch/arm/arm64 then?
>> I don't believe any of the code in this series is very arm64 specific. I
>> guess it was just only tested on arm64. So I would rather keep that under
>> arm/pci.
> 
> Ack. Could the Kconfg be adjusted to not depend on ARM_64? Just
> stating it's only been tested on Arm64 would be enough IMO.

We already have an option to select PCI (see CONFIG_HAS_PCI). So I would 
prefer if we reuse it (possibly renamed to CONFIG_PCI) rather than 
inventing a new Arm-specific one.

Regarding the dependency, it will depend on whether it is possible to make 
it build easily on Arm32. If not, then we will need to keep the ARM_64 
dependency.

> 
>>>> +
>>>> +    struct pci_host_bridge *bridge = pci_find_host_bridge(sbdf.seg, sbdf.bus);
>>>> +
>>>> +    if ( unlikely(!bridge) )
>>>> +    {
>>>> +        printk(XENLOG_ERR "Unable to find bridge for "PRI_pci"\n",
>>>> +                sbdf.seg, sbdf.bus, sbdf.dev, sbdf.fn);
>>>
>>> I had a patch to add a custom modifier to our printf format in
>>> order to handle pci_sbdf_t natively:
>>>
>>> https://patchew.org/Xen/20190822065132.48200-1-roger.pau@citrix.com/
>>>
>>> It missed maintainers Acks and was never committed. Since you are
>>> doing a bunch of work here, and likely adding a lot of SBDF related
>>> prints, feel free to import the modifier (%pp) and use in your code
>>> (do not attempt to switch existing users, or it's likely to get
>>> stuck again).
>>
>> I forgot about this patch :/. It would be good to revive it. Which acks are
>> you missing?
> 
> I only had an Ack from Jan, so I was missing Intel and AMD Acks, which
> would now only be Intel since AMD has been absorbed by the x86
> maintainers.

Ok. So, it should be easier to get it acked now :).

> 
>> [...]
>>
>>>> +static bool __init dt_pci_parse_bus_range(struct dt_device_node *dev,
>>>> +        struct pci_config_window *cfg)
>>>> +{
>>>> +    const __be32 *cells;
>>>
>>> It's my impression that while based on Linux this is not a verbatim
>>> copy of a Linux file, and tries to adhere with the Xen coding style.
>>> If so please use uint32_t here.
>>
>> uint32_t would be incorrect because this is a 32-bit value always in big
>> endian. I don't think we have other typedef to show it is a 32-bit BE value,
>> so __be32 is the best choice.
> 
> Oh, OK, so this is done to explicitly denote the endianness of a value
> on the type itself.

That's correct. On Linux, they use sparse to check that the BE and LE 
fields are not mixed together. We don't have that in Xen, but at least 
this makes it more obvious what we are using.
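
To make the convention concrete, here is a standalone sketch (plain C; `be32` and `be32_to_cpu` are simplified stand-ins for the kernel's `__be32` and its helper, not the real definitions) showing why a BE-tagged type plus an explicit accessor is enough to parse DT cells portably:

```c
#include <stdint.h>

/* Stand-in for the kernel's __be32: same storage as uint32_t, but the
 * name documents that the bytes are always big-endian. */
typedef uint32_t be32;

/* Endian-independent decode: read the bytes most-significant first,
 * so the result is the same on little- and big-endian hosts. */
static uint32_t be32_to_cpu(be32 v)
{
    const uint8_t *b = (const uint8_t *)&v;
    return ((uint32_t)b[0] << 24) | ((uint32_t)b[1] << 16) |
           ((uint32_t)b[2] << 8)  |  (uint32_t)b[3];
}

/* A DT "cell" is exactly one such big-endian 32-bit value. */
static uint32_t dt_read_cell(const be32 *cell)
{
    return be32_to_cpu(*cell);
}
```

With sparse, tagging the parameter as `__be32` would additionally let the checker flag any direct arithmetic on the raw value.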

> 
>> [...]
>>
>>>> +
>>>> +    if ( acpi_disabled )
>>>> +        dt_pci_init();
>>>> +    else
>>>> +        acpi_pci_init();
>>>
>>> Isn't there an enum or something that tells you whether the system
>>> description is coming from ACPI or from DT?
>>>
>>> This if .. else seems fragile.
>>>
>>
>> This is the common way we do it on Arm.... I would welcome any improvement,
>> but I don't think this should be part of this work.
> 
> Ack. In any case I think that for ACPI, PCI init will get called by
> acpi_mmcfg_init as part of acpi_boot_init, so I'm not sure there's
> much point in adding anything about ACPI here, as it seems this
> will be DT only?

acpi_boot_init() does not exist on Arm. Looking at x86, 
acpi_mmcfg_init() is not even called from that function.

In general, I would prefer if each subsystem takes care of its own 
initialization. This makes it easier to figure out the difference 
between ACPI and DT. FWIW, this is in line with the majority of the Arm 
code.
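
The shape being argued for (subsystem-local dispatch on `acpi_disabled`) can be sketched as below; all names and return values are illustrative stand-ins, not the actual Xen code:

```c
#include <stdbool.h>

/* Set during early boot from the firmware tables; a stand-in here. */
static bool acpi_disabled = true;

/* Stand-in init paths; the distinct return values exist only so the
 * sketch can show which path was taken. */
static int dt_pci_init(void)   { return 1; /* would scan DT for host bridges */ }
static int acpi_pci_init(void) { return 2; /* would parse the ACPI MCFG table */ }

/* The subsystem picks its own firmware-specific path itself, rather
 * than relying on a central ACPI boot function to call into it. */
static int pci_init(void)
{
    return acpi_disabled ? dt_pci_init() : acpi_pci_init();
}
```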

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Jul 24 16:01:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jul 2020 16:01:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jz08g-0005FP-Gm; Fri, 24 Jul 2020 16:01:38 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=yKVY=BD=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jz08f-0005FK-F5
 for xen-devel@lists.xenproject.org; Fri, 24 Jul 2020 16:01:37 +0000
X-Inumbo-ID: f3981d4e-cdc6-11ea-883d-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f3981d4e-cdc6-11ea-883d-bc764e2007e4;
 Fri, 24 Jul 2020 16:01:36 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 19FB5B128;
 Fri, 24 Jul 2020 16:01:44 +0000 (UTC)
Subject: Re: [RFC PATCH v1 1/4] arm/pci: PCI setup and PCI host bridge
 discovery within XEN on ARM.
To: Julien Grall <julien@xen.org>
References: <cover.1595511416.git.rahul.singh@arm.com>
 <64ebd4ef614b36a5844c52426a4a6a4a23b1f087.1595511416.git.rahul.singh@arm.com>
 <20200724144404.GJ7191@Air-de-Roger>
 <0c53b2cb-47e9-f34e-8922-7095669175be@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <980fc583-edb6-b536-f211-f6b8ea6d21a7@suse.com>
Date: Fri, 24 Jul 2020 18:01:37 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <0c53b2cb-47e9-f34e-8922-7095669175be@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Rahul Singh <rahul.singh@arm.com>, Bertrand.Marquis@arm.com,
 Stefano Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org,
 nd@arm.com, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 24.07.2020 17:15, Julien Grall wrote:
> On 24/07/2020 15:44, Roger Pau Monné wrote:
>>> +
>>> +    struct pci_host_bridge *bridge = pci_find_host_bridge(sbdf.seg, sbdf.bus);
>>> +
>>> +    if ( unlikely(!bridge) )
>>> +    {
>>> +        printk(XENLOG_ERR "Unable to find bridge for "PRI_pci"\n",
>>> +                sbdf.seg, sbdf.bus, sbdf.dev, sbdf.fn);
>>
>> I had a patch to add a custom modifier to our printf format in
>> order to handle pci_sbdf_t natively:
>>
>> https://patchew.org/Xen/20190822065132.48200-1-roger.pau@citrix.com/
>>
>> It missed maintainers Acks and was never committed. Since you are
>> doing a bunch of work here, and likely adding a lot of SBDF related
>> prints, feel free to import the modifier (%pp) and use in your code
>> (do not attempt to switch existing users, or it's likely to get
>> stuck again).
> 
> I forgot about this patch :/. It would be good to revive it. Which acks 
> are you missing?

It wasn't so much missing acks as a controversy. And that not so much
about switching existing users, but about whether to indeed derive this
from %p (which I continue to consider inefficient).
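
For readers following the thread, the status quo being discussed looks roughly like this (plain C sketch; `pci_sbdf_t` and `PRI_pci` are simplified stand-ins for the Xen definitions): every print site expands the four SBDF fields by hand, which is exactly what a `%pp`-style modifier would fold into a single argument.

```c
#include <stdio.h>
#include <stdint.h>

/* Simplified stand-ins for Xen's pci_sbdf_t and PRI_pci. */
typedef struct { uint16_t seg; uint8_t bus, dev, fn; } pci_sbdf_t;
#define PRI_pci "%04x:%02x:%02x.%u"

/* Status quo: each caller passes all four fields explicitly. */
static int format_sbdf(char *buf, size_t n, pci_sbdf_t sbdf)
{
    return snprintf(buf, n, PRI_pci, sbdf.seg, sbdf.bus, sbdf.dev, sbdf.fn);
}
```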

Jan


From xen-devel-bounces@lists.xenproject.org Fri Jul 24 16:01:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jul 2020 16:01:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jz08q-0005G5-5D; Fri, 24 Jul 2020 16:01:48 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=yL+a=BD=xen.org=paul@srs-us1.protection.inumbo.net>)
 id 1jz08p-0005FZ-CT
 for xen-devel@lists.xenproject.org; Fri, 24 Jul 2020 16:01:47 +0000
X-Inumbo-ID: f5fea95e-cdc6-11ea-a425-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f5fea95e-cdc6-11ea-a425-12813bfff9fa;
 Fri, 24 Jul 2020 16:01:40 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Reply-To:Message-Id:Date:Subject:To:From:Sender:Cc:
 MIME-Version:Content-Type:Content-Transfer-Encoding:Content-ID:
 Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc
 :Resent-Message-ID:In-Reply-To:References:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=ogZM8im0PZvP/3uvGK3JZM+THCjPOOJBt/a9OxuGozA=; b=tyTtdn8czpvLvCjKwcGfOxzmdD
 toJMPxtzfRirgIXANsEagw43ef26JzgPmM0+3y6QWe6+t/0HfqD2zGTSTblwWt/dyNYAjKvwRPibI
 NIHPHNAWd0N9BNw33BztOKU1vtwH6EUqoFL5gRbbSstZOp77gMLDf93Yd9D84INQYR4U=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1jz08i-00046f-58; Fri, 24 Jul 2020 16:01:40 +0000
Received: from host86-143-223-30.range86-143.btcentralplus.com
 ([86.143.223.30] helo=CBG-R90WXYV0.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1jz08h-0003kp-Su; Fri, 24 Jul 2020 16:01:40 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org, xen-users@lists.xenproject.org,
 xen-announce@lists.xenproject.org
Subject: [ANNOUNCEMENT] Xen 4.14 is released
Date: Fri, 24 Jul 2020 17:01:38 +0100
Message-Id: <20200724160138.129-1-paul@xen.org>
X-Mailer: git-send-email 2.17.1
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Reply-To: xen-devel@lists.xenproject.org, paul@xen.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Dear community members,

I'm pleased to announce that Xen 4.14.0 is released.

Please find the tarball and its signature at:

  https://downloads.xenproject.org/release/xen/4.14.0

Git checkout and build instructions can be found at:

  https://wiki.xenproject.org/wiki/Xen_Project_4.14_Release_Notes#Build_Requirements

Release notes can be found at:

  https://wiki.xenproject.org/wiki/Xen_Project_4.14_Release_Notes

A summary of the 4.14 release documents can be found at:

  https://wiki.xenproject.org/wiki/Category:Xen_4.14

Technical blog post for 4.14 can be found at:

  https://xenproject.org/2020/07/24/xen-project-hypervisor-version-4-14-brings-added-security-and-performance

Thanks to everyone who contributed to this release. This release would
not have happened without all the awesome contributions from the Xen
community around the globe.

Regards,

Paul Durrant (on behalf of the Xen Project Hypervisor team)



From xen-devel-bounces@lists.xenproject.org Fri Jul 24 16:37:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jul 2020 16:37:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jz0gv-0008Uj-OB; Fri, 24 Jul 2020 16:37:01 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=pZqH=BD=oracle.com=boris.ostrovsky@srs-us1.protection.inumbo.net>)
 id 1jz0gv-0008Ue-6v
 for xen-devel@lists.xenproject.org; Fri, 24 Jul 2020 16:37:01 +0000
X-Inumbo-ID: e4ecba16-cdcb-11ea-8854-bc764e2007e4
Received: from userp2130.oracle.com (unknown [156.151.31.86])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e4ecba16-cdcb-11ea-8854-bc764e2007e4;
 Fri, 24 Jul 2020 16:36:59 +0000 (UTC)
Received: from pps.filterd (userp2130.oracle.com [127.0.0.1])
 by userp2130.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 06OGWbg7137538;
 Fri, 24 Jul 2020 16:36:47 GMT
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com;
 h=subject : to : cc :
 references : from : message-id : date : mime-version : in-reply-to :
 content-type : content-transfer-encoding; s=corp-2020-01-29;
 bh=01EMyr1wHjzJZt6M9AJ4M1g1NYiy2MuKr1uPKnQ+kU0=;
 b=QmZP+nPPQVtBy607PuPVscN5d5m4XmbyfpGkWNGPZy+624GVTjIvXlsne2sByCsmCKUw
 K7DOIRjH+oh7S+517fxTU2jBdUXevYRp2Erwqi9UJ0AAgMNpUGFQEALJOYkQNxDRvUDY
 No3OOuhhwEiw+Wrcu5ZqOo2K4WVK0tu3FJzvZwm/t4e1zppdasQVa+d+Extq8hvyUWO/
 FJ3pHVtuorOkORbltPlR1hVf5FjwdCDhAhmb+z8m8jlzg6Rg0A/fcBClM7elvY/bQa9j
 AZbJf8nBQhpi2aMKNURLb0dlNgTz4YX4FAL93EdDejmzC7eGVaziwt7NEzy4k+s1RK/w YQ== 
Received: from userp3020.oracle.com (userp3020.oracle.com [156.151.31.79])
 by userp2130.oracle.com with ESMTP id 32brgs041x-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=FAIL);
 Fri, 24 Jul 2020 16:36:46 +0000
Received: from pps.filterd (userp3020.oracle.com [127.0.0.1])
 by userp3020.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 06OGYKYQ175333;
 Fri, 24 Jul 2020 16:36:46 GMT
Received: from userv0122.oracle.com (userv0122.oracle.com [156.151.31.75])
 by userp3020.oracle.com with ESMTP id 32g38w89t6-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Fri, 24 Jul 2020 16:36:46 +0000
Received: from abhmp0007.oracle.com (abhmp0007.oracle.com [141.146.116.13])
 by userv0122.oracle.com (8.14.4/8.14.4) with ESMTP id 06OGaZAu005069;
 Fri, 24 Jul 2020 16:36:35 GMT
Received: from [10.39.195.138] (/10.39.195.138)
 by default (Oracle Beehive Gateway v4.0)
 with ESMTP ; Fri, 24 Jul 2020 09:36:35 -0700
Subject: Re: [PATCH v2 4/4] xen: add helpers to allocate unpopulated memory
To: David Hildenbrand <david@redhat.com>,
 Roger Pau Monne <roger.pau@citrix.com>, linux-kernel@vger.kernel.org
References: <20200724124241.48208-1-roger.pau@citrix.com>
 <20200724124241.48208-5-roger.pau@citrix.com>
 <1778c97f-3a69-8280-141c-879814dd213f@redhat.com>
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Autocrypt: addr=boris.ostrovsky@oracle.com; keydata=
 xsFNBFH8CgsBEAC0KiOi9siOvlXatK2xX99e/J3OvApoYWjieVQ9232Eb7GzCWrItCzP8FUV
 PQg8rMsSd0OzIvvjbEAvaWLlbs8wa3MtVLysHY/DfqRK9Zvr/RgrsYC6ukOB7igy2PGqZd+M
 MDnSmVzik0sPvB6xPV7QyFsykEgpnHbvdZAUy/vyys8xgT0PVYR5hyvhyf6VIfGuvqIsvJw5
 C8+P71CHI+U/IhsKrLrsiYHpAhQkw+Zvyeml6XSi5w4LXDbF+3oholKYCkPwxmGdK8MUIdkM
 d7iYdKqiP4W6FKQou/lC3jvOceGupEoDV9botSWEIIlKdtm6C4GfL45RD8V4B9iy24JHPlom
 woVWc0xBZboQguhauQqrBFooHO3roEeM1pxXjLUbDtH4t3SAI3gt4dpSyT3EvzhyNQVVIxj2
 FXnIChrYxR6S0ijSqUKO0cAduenhBrpYbz9qFcB/GyxD+ZWY7OgQKHUZMWapx5bHGQ8bUZz2
 SfjZwK+GETGhfkvNMf6zXbZkDq4kKB/ywaKvVPodS1Poa44+B9sxbUp1jMfFtlOJ3AYB0WDS
 Op3d7F2ry20CIf1Ifh0nIxkQPkTX7aX5rI92oZeu5u038dHUu/dO2EcuCjl1eDMGm5PLHDSP
 0QUw5xzk1Y8MG1JQ56PtqReO33inBXG63yTIikJmUXFTw6lLJwARAQABzTNCb3JpcyBPc3Ry
 b3Zza3kgKFdvcmspIDxib3Jpcy5vc3Ryb3Zza3lAb3JhY2xlLmNvbT7CwXgEEwECACIFAlH8
 CgsCGwMGCwkIBwMCBhUIAgkKCwQWAgMBAh4BAheAAAoJEIredpCGysGyasEP/j5xApopUf4g
 9Fl3UxZuBx+oduuw3JHqgbGZ2siA3EA4bKwtKq8eT7ekpApn4c0HA8TWTDtgZtLSV5IdH+9z
 JimBDrhLkDI3Zsx2CafL4pMJvpUavhc5mEU8myp4dWCuIylHiWG65agvUeFZYK4P33fGqoaS
 VGx3tsQIAr7MsQxilMfRiTEoYH0WWthhE0YVQzV6kx4wj4yLGYPPBtFqnrapKKC8yFTpgjaK
 jImqWhU9CSUAXdNEs/oKVR1XlkDpMCFDl88vKAuJwugnixjbPFTVPyoC7+4Bm/FnL3iwlJVE
 qIGQRspt09r+datFzPqSbp5Fo/9m4JSvgtPp2X2+gIGgLPWp2ft1NXHHVWP19sPgEsEJXSr9
 tskM8ScxEkqAUuDs6+x/ISX8wa5Pvmo65drN+JWA8EqKOHQG6LUsUdJolFM2i4Z0k40BnFU/
 kjTARjrXW94LwokVy4x+ZYgImrnKWeKac6fMfMwH2aKpCQLlVxdO4qvJkv92SzZz4538az1T
 m+3ekJAimou89cXwXHCFb5WqJcyjDfdQF857vTn1z4qu7udYCuuV/4xDEhslUq1+GcNDjAhB
 nNYPzD+SvhWEsrjuXv+fDONdJtmLUpKs4Jtak3smGGhZsqpcNv8nQzUGDQZjuCSmDqW8vn2o
 hWwveNeRTkxh+2x1Qb3GT46uzsFNBFH8CgsBEADGC/yx5ctcLQlB9hbq7KNqCDyZNoYu1HAB
 Hal3MuxPfoGKObEktawQPQaSTB5vNlDxKihezLnlT/PKjcXC2R1OjSDinlu5XNGc6mnky03q
 yymUPyiMtWhBBftezTRxWRslPaFWlg/h/Y1iDuOcklhpr7K1h1jRPCrf1yIoxbIpDbffnuyz
 kuto4AahRvBU4Js4sU7f/btU+h+e0AcLVzIhTVPIz7PM+Gk2LNzZ3/on4dnEc/qd+ZZFlOQ4
 KDN/hPqlwA/YJsKzAPX51L6Vv344pqTm6Z0f9M7YALB/11FO2nBB7zw7HAUYqJeHutCwxm7i
 BDNt0g9fhviNcJzagqJ1R7aPjtjBoYvKkbwNu5sWDpQ4idnsnck4YT6ctzN4I+6lfkU8zMzC
 gM2R4qqUXmxFIS4Bee+gnJi0Pc3KcBYBZsDK44FtM//5Cp9DrxRQOh19kNHBlxkmEb8kL/pw
 XIDcEq8MXzPBbxwHKJ3QRWRe5jPNpf8HCjnZz0XyJV0/4M1JvOua7IZftOttQ6KnM4m6WNIZ
 2ydg7dBhDa6iv1oKdL7wdp/rCulVWn8R7+3cRK95SnWiJ0qKDlMbIN8oGMhHdin8cSRYdmHK
 kTnvSGJNlkis5a+048o0C6jI3LozQYD/W9wq7MvgChgVQw1iEOB4u/3FXDEGulRVko6xCBU4
 SQARAQABwsFfBBgBAgAJBQJR/AoLAhsMAAoJEIredpCGysGyfvMQAIywR6jTqix6/fL0Ip8G
 jpt3uk//QNxGJE3ZkUNLX6N786vnEJvc1beCu6EwqD1ezG9fJKMl7F3SEgpYaiKEcHfoKGdh
 30B3Hsq44vOoxR6zxw2B/giADjhmWTP5tWQ9548N4VhIZMYQMQCkdqaueSL+8asp8tBNP+TJ
 PAIIANYvJaD8xA7sYUXGTzOXDh2THWSvmEWWmzok8er/u6ZKdS1YmZkUy8cfzrll/9hiGCTj
 u3qcaOM6i/m4hqtvsI1cOORMVwjJF4+IkC5ZBoeRs/xW5zIBdSUoC8L+OCyj5JETWTt40+lu
 qoqAF/AEGsNZTrwHJYu9rbHH260C0KYCNqmxDdcROUqIzJdzDKOrDmebkEVnxVeLJBIhYZUd
 t3Iq9hdjpU50TA6sQ3mZxzBdfRgg+vaj2DsJqI5Xla9QGKD+xNT6v14cZuIMZzO7w0DoojM4
 ByrabFsOQxGvE0w9Dch2BDSI2Xyk1zjPKxG1VNBQVx3flH37QDWpL2zlJikW29Ws86PHdthh
 Fm5PY8YtX576DchSP6qJC57/eAAe/9ztZdVAdesQwGb9hZHJc75B+VNm4xrh/PJO6c1THqdQ
 19WVJ+7rDx3PhVncGlbAOiiiE3NOFPJ1OQYxPKtpBUukAlOTnkKE6QcA4zckFepUkfmBV1wM
 Jg6OxFYd01z+a+oL
Message-ID: <1fd1d29e-5c10-0c29-0628-b79807f81de6@oracle.com>
Date: Fri, 24 Jul 2020 12:36:33 -0400
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <1778c97f-3a69-8280-141c-879814dd213f@redhat.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
Content-Language: en-US
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9692
 signatures=668680
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 phishscore=0
 suspectscore=0 spamscore=0
 mlxlogscore=999 adultscore=0 malwarescore=0 mlxscore=0 bulkscore=0
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2006250000
 definitions=main-2007240128
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9692
 signatures=668680
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 malwarescore=0
 bulkscore=0 spamscore=0
 impostorscore=0 suspectscore=0 adultscore=0 clxscore=1011 mlxlogscore=999
 priorityscore=1501 phishscore=0 lowpriorityscore=0 mlxscore=0
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2006250000
 definitions=main-2007240128
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>,
 David Airlie <airlied@linux.ie>, Yan Yankovskyi <yyankovskyi@gmail.com>,
 dri-devel@lists.freedesktop.org, Michal Hocko <mhocko@kernel.org>,
 linux-mm@kvack.org, Daniel Vetter <daniel@ffwll.ch>,
 xen-devel@lists.xenproject.org, Dan Williams <dan.j.williams@intel.com>,
 Dan Carpenter <dan.carpenter@oracle.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 7/24/20 10:34 AM, David Hildenbrand wrote:
> CCing Dan
>
> On 24.07.20 14:42, Roger Pau Monne wrote:
>> diff --git a/drivers/xen/unpopulated-alloc.c b/drivers/xen/unpopulated-alloc.c
>> new file mode 100644
>> index 000000000000..aaa91cefbbf9
>> --- /dev/null
>> +++ b/drivers/xen/unpopulated-alloc.c
>> @@ -0,0 +1,222 @@



>> + */
>> +
>> +#include <linux/errno.h>
>> +#include <linux/gfp.h>
>> +#include <linux/kernel.h>
>> +#include <linux/mm.h>
>> +#include <linux/memremap.h>
>> +#include <linux/slab.h>
>> +
>> +#include <asm/page.h>
>> +
>> +#include <xen/page.h>
>> +#include <xen/xen.h>
>> +
>> +static DEFINE_MUTEX(lock);
>> +static LIST_HEAD(list);
>> +static unsigned int count;
>> +
>> +static int fill(unsigned int nr_pages)


Less generic names? How about list_lock, pg_list, pg_count,
fill_pglist()? (But these are bad too, so maybe you can come up with
something better)


>> +{
>> +	struct dev_pagemap *pgmap;
>> +	void *vaddr;
>> +	unsigned int i, alloc_pages = round_up(nr_pages, PAGES_PER_SECTION);
>> +	int nid, ret;
>> +
>> +	pgmap = kzalloc(sizeof(*pgmap), GFP_KERNEL);
>> +	if (!pgmap)
>> +		return -ENOMEM;
>> +
>> +	pgmap->type = MEMORY_DEVICE_DEVDAX;
>> +	pgmap->res.name = "XEN SCRATCH";


Typically iomem resources only capitalize first letters.


>> +	pgmap->res.flags = IORESOURCE_MEM | IORESOURCE_BUSY;
>> +
>> +	ret = allocate_resource(&iomem_resource, &pgmap->res,
>> +				alloc_pages * PAGE_SIZE, 0, -1,
>> +				PAGES_PER_SECTION * PAGE_SIZE, NULL, NULL);


Are we not going to end up with a whole bunch of "Xen scratch" resource
ranges for each miss in the page list? Or do we expect them to get merged?


>> +	if (ret < 0) {
>> +		pr_err("Cannot allocate new IOMEM resource\n");
>> +		kfree(pgmap);
>> +		return ret;
>> +	}
>> +
>> +	nid = memory_add_physaddr_to_nid(pgmap->res.start);


Should we consider page ranges crossing node boundaries?


>> +
>> +#ifdef CONFIG_XEN_HAVE_PVMMU
>> +	/*
>> +	 * We don't support PV MMU when Linux and Xen is using
>> +	 * different page granularity.
>> +	 */
>> +	BUILD_BUG_ON(XEN_PAGE_SIZE != PAGE_SIZE);
>> +
>> +        /*
>> +         * memremap will build page tables for the new memory so
>> +         * the p2m must contain invalid entries so the correct
>> +         * non-present PTEs will be written.
>> +         *
>> +         * If a failure occurs, the original (identity) p2m entries
>> +         * are not restored since this region is now known not to
>> +         * conflict with any devices.
>> +         */
>> +	if (!xen_feature(XENFEAT_auto_translated_physmap)) {
>> +		xen_pfn_t pfn = PFN_DOWN(pgmap->res.start);
>> +
>> +		for (i = 0; i < alloc_pages; i++) {
>> +			if (!set_phys_to_machine(pfn + i, INVALID_P2M_ENTRY)) {
>> +				pr_warn("set_phys_to_machine() failed, no memory added\n");
>> +				release_resource(&pgmap->res);
>> +				kfree(pgmap);
>> +				return -ENOMEM;
>> +			}
>> +                }
>> +	}
>> +#endif
>> +
>> +	vaddr = memremap_pages(pgmap, nid);
>> +	if (IS_ERR(vaddr)) {
>> +		pr_err("Cannot remap memory range\n");
>> +		release_resource(&pgmap->res);
>> +		kfree(pgmap);
>> +		return PTR_ERR(vaddr);
>> +	}
>> +
>> +	for (i = 0; i < alloc_pages; i++) {
>> +		struct page *pg = virt_to_page(vaddr + PAGE_SIZE * i);
>> +
>> +		BUG_ON(!virt_addr_valid(vaddr + PAGE_SIZE * i));
>> +		list_add(&pg->lru, &list);
>> +		count++;
>> +	}
>> +
>> +	return 0;
>> +}
>> +
>> +/**
>> + * xen_alloc_unpopulated_pages - alloc unpopulated pages
>> + * @nr_pages: Number of pages
>> + * @pages: pages returned
>> + * @return 0 on success, error otherwise
>> + */
>> +int xen_alloc_unpopulated_pages(unsigned int nr_pages, struct page **pages)
>> +{
>> +	unsigned int i;
>> +	int ret = 0;
>> +
>> +	mutex_lock(&lock);
>> +	if (count < nr_pages) {
>> +		ret = fill(nr_pages);


(nr_pages - count) ?
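
What the `(nr_pages - count)` suggestion amounts to, as a tiny standalone sketch (all names are stand-ins for the driver's state, not the real code): fill only the shortfall between the request and the pages already cached.

```c
/* Stand-ins for the driver's cached-page count and a fill() recorder. */
static unsigned int count = 3;     /* pages already on the free list */
static unsigned int last_fill;     /* what fill() was last asked to add */

static int fill(unsigned int nr_pages)
{
    last_fill = nr_pages;          /* a real fill() would allocate here */
    return 0;
}

static int alloc_unpopulated(unsigned int nr_pages)
{
    if (count < nr_pages)
        return fill(nr_pages - count);  /* the suggested delta */
    return 0;
}
```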


>> +		if (ret)
>> +			goto out;
>> +	}
>> +
>> +	for (i = 0; i < nr_pages; i++) {
>> +		struct page *pg = list_first_entry_or_null(&list, struct page,
>> +							   lru);
>> +
>> +		BUG_ON(!pg);
>> +		list_del(&pg->lru);
>> +		count--;
>> +		pages[i] = pg;
>> +
>> +#ifdef CONFIG_XEN_HAVE_PVMMU
>> +		/*
>> +		 * We don't support PV MMU when Linux and Xen is using
>> +		 * different page granularity.
>> +		 */
>> +		BUILD_BUG_ON(XEN_PAGE_SIZE != PAGE_SIZE);
>> +
>> +		if (!xen_feature(XENFEAT_auto_translated_physmap)) {
>> +			ret =3D xen_alloc_p2m_entry(page_to_pfn(pg));
>> +			if (ret < 0) {
>> +				unsigned int j;
>> +
>> +				for (j =3D 0; j <=3D i; j++) {
>> +					list_add(&pages[j]->lru, &list);
>> +					count++;
>> +				}
>> +				goto out;
>> +			}
>> +		}
>> +#endif
>> +	}
>> +
>> +out:
>> +	mutex_unlock(&lock);
>> +	return ret;
>> +}
>> +EXPORT_SYMBOL(xen_alloc_unpopulated_pages);
>> +



>> +
>> +#ifdef CONFIG_XEN_PV
>> +static int __init init(void)
>> +{
>> +	unsigned int i;
>> +
>> +	if (!xen_domain())
>> +		return -ENODEV;
>> +
>> +	/*
>> +	 * Initialize with pages from the extra memory regions (see
>> +	 * arch/x86/xen/setup.c).
>> +	 */


This loop will only execute for PV guests, so we can just bail out
for non-PV guests here.


-boris


>> +	for (i = 0; i < XEN_EXTRA_MEM_MAX_REGIONS; i++) {
>> +		unsigned int j;
>> +
>> +		for (j = 0; j < xen_extra_mem[i].n_pfns; j++) {
>> +			struct page *pg =
>> +				pfn_to_page(xen_extra_mem[i].start_pfn + j);
>> +
>> +			list_add(&pg->lru, &list);
>> +			count++;
>> +		}
>> +	}
>> +
>> +	return 0;
>> +}
>> +subsys_initcall(init);




From xen-devel-bounces@lists.xenproject.org Fri Jul 24 16:46:33 2020
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [PATCH 2/6] x86/iommu: add common page-table allocator
Date: Fri, 24 Jul 2020 17:46:15 +0100
Message-Id: <20200724164619.1245-3-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200724164619.1245-1-paul@xen.org>
References: <20200724164619.1245-1-paul@xen.org>
Cc: Kevin Tian <kevin.tian@intel.com>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Paul Durrant <pdurrant@amazon.com>,
 Jan Beulich <jbeulich@suse.com>,
 Roger Pau Monné <roger.pau@citrix.com>

From: Paul Durrant <pdurrant@amazon.com>

Instead of having separate page table allocation functions in VT-d and AMD
IOMMU code, use a common allocation function in the general x86 code.
Also, rather than walking the page tables and using a tasklet to free them
during iommu_domain_destroy(), add allocated page table pages to a list and
then simply walk the list to free them. This saves ~90 lines of code overall.

NOTE: There is no need to clear and sync PTEs during teardown since the per-
      device root entries will have already been cleared (when devices were
      de-assigned) so the page tables can no longer be accessed by the IOMMU.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Kevin Tian <kevin.tian@intel.com>
Cc: Wei Liu <wl@xen.org>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>
---
 xen/drivers/passthrough/amd/iommu.h         | 18 +----
 xen/drivers/passthrough/amd/iommu_map.c     | 10 +--
 xen/drivers/passthrough/amd/pci_amd_iommu.c | 74 +++------------------
 xen/drivers/passthrough/iommu.c             | 23 -------
 xen/drivers/passthrough/vtd/iommu.c         | 51 ++------------
 xen/drivers/passthrough/x86/iommu.c         | 41 ++++++++++++
 xen/include/asm-x86/iommu.h                 |  6 ++
 xen/include/xen/iommu.h                     |  5 --
 8 files changed, 68 insertions(+), 160 deletions(-)

diff --git a/xen/drivers/passthrough/amd/iommu.h b/xen/drivers/passthrough/amd/iommu.h
index 3489c2a015..e2d174f3b4 100644
--- a/xen/drivers/passthrough/amd/iommu.h
+++ b/xen/drivers/passthrough/amd/iommu.h
@@ -226,7 +226,7 @@ int __must_check amd_iommu_map_page(struct domain *d, dfn_t dfn,
                                     unsigned int *flush_flags);
 int __must_check amd_iommu_unmap_page(struct domain *d, dfn_t dfn,
                                       unsigned int *flush_flags);
-int __must_check amd_iommu_alloc_root(struct domain_iommu *hd);
+int __must_check amd_iommu_alloc_root(struct domain *d);
 int amd_iommu_reserve_domain_unity_map(struct domain *domain,
                                        paddr_t phys_addr, unsigned long size,
                                        int iw, int ir);
@@ -356,22 +356,6 @@ static inline int amd_iommu_get_paging_mode(unsigned long max_frames)
     return level;
 }
 
-static inline struct page_info *alloc_amd_iommu_pgtable(void)
-{
-    struct page_info *pg = alloc_domheap_page(NULL, 0);
-
-    if ( pg )
-        clear_domain_page(page_to_mfn(pg));
-
-    return pg;
-}
-
-static inline void free_amd_iommu_pgtable(struct page_info *pg)
-{
-    if ( pg )
-        free_domheap_page(pg);
-}
-
 static inline void *__alloc_amd_iommu_tables(unsigned int order)
 {
     return alloc_xenheap_pages(order, 0);
diff --git a/xen/drivers/passthrough/amd/iommu_map.c b/xen/drivers/passthrough/amd/iommu_map.c
index 06c564968c..d54cbf1cb9 100644
--- a/xen/drivers/passthrough/amd/iommu_map.c
+++ b/xen/drivers/passthrough/amd/iommu_map.c
@@ -217,7 +217,7 @@ static int iommu_pde_from_dfn(struct domain *d, unsigned long dfn,
             mfn = next_table_mfn;
 
             /* allocate lower level page table */
-            table = alloc_amd_iommu_pgtable();
+            table = iommu_alloc_pgtable(d);
             if ( table == NULL )
             {
                 AMD_IOMMU_DEBUG("Cannot allocate I/O page table\n");
@@ -248,7 +248,7 @@ static int iommu_pde_from_dfn(struct domain *d, unsigned long dfn,
 
             if ( next_table_mfn == 0 )
             {
-                table = alloc_amd_iommu_pgtable();
+                table = iommu_alloc_pgtable(d);
                 if ( table == NULL )
                 {
                     AMD_IOMMU_DEBUG("Cannot allocate I/O page table\n");
@@ -286,7 +286,7 @@ int amd_iommu_map_page(struct domain *d, dfn_t dfn, mfn_t mfn,
 
     spin_lock(&hd->arch.mapping_lock);
 
-    rc = amd_iommu_alloc_root(hd);
+    rc = amd_iommu_alloc_root(d);
     if ( rc )
     {
         spin_unlock(&hd->arch.mapping_lock);
@@ -458,7 +458,7 @@ int __init amd_iommu_quarantine_init(struct domain *d)
 
     spin_lock(&hd->arch.mapping_lock);
 
-    hd->arch.amd_iommu.root_table = alloc_amd_iommu_pgtable();
+    hd->arch.amd_iommu.root_table = iommu_alloc_pgtable(d);
     if ( !hd->arch.amd_iommu.root_table )
         goto out;
 
@@ -473,7 +473,7 @@ int __init amd_iommu_quarantine_init(struct domain *d)
          * page table pages, and the resulting allocations are always
          * zeroed.
          */
-        pg = alloc_amd_iommu_pgtable();
+        pg = iommu_alloc_pgtable(d);
         if ( !pg )
             break;
 
diff --git a/xen/drivers/passthrough/amd/pci_amd_iommu.c b/xen/drivers/passthrough/amd/pci_amd_iommu.c
index c386dc4387..fd9b1e7bd5 100644
--- a/xen/drivers/passthrough/amd/pci_amd_iommu.c
+++ b/xen/drivers/passthrough/amd/pci_amd_iommu.c
@@ -206,11 +206,13 @@ static int iov_enable_xt(void)
     return 0;
 }
 
-int amd_iommu_alloc_root(struct domain_iommu *hd)
+int amd_iommu_alloc_root(struct domain *d)
 {
+    struct domain_iommu *hd = dom_iommu(d);
+
     if ( unlikely(!hd->arch.amd_iommu.root_table) )
     {
-        hd->arch.amd_iommu.root_table = alloc_amd_iommu_pgtable();
+        hd->arch.amd_iommu.root_table = iommu_alloc_pgtable(d);
         if ( !hd->arch.amd_iommu.root_table )
             return -ENOMEM;
     }
@@ -218,12 +220,13 @@ int amd_iommu_alloc_root(struct domain_iommu *hd)
     return 0;
 }
 
-static int __must_check allocate_domain_resources(struct domain_iommu *hd)
+static int __must_check allocate_domain_resources(struct domain *d)
 {
+    struct domain_iommu *hd = dom_iommu(d);
     int rc;
 
     spin_lock(&hd->arch.mapping_lock);
-    rc = amd_iommu_alloc_root(hd);
+    rc = amd_iommu_alloc_root(d);
     spin_unlock(&hd->arch.mapping_lock);
 
     return rc;
@@ -255,7 +258,7 @@ static void __hwdom_init amd_iommu_hwdom_init(struct domain *d)
 {
     const struct amd_iommu *iommu;
 
-    if ( allocate_domain_resources(dom_iommu(d)) )
+    if ( allocate_domain_resources(d) )
         BUG();
 
     for_each_amd_iommu ( iommu )
@@ -324,7 +327,6 @@ static int reassign_device(struct domain *source, struct domain *target,
 {
     struct amd_iommu *iommu;
     int bdf, rc;
-    struct domain_iommu *t = dom_iommu(target);
 
     bdf = PCI_BDF2(pdev->bus, pdev->devfn);
     iommu = find_iommu_for_device(pdev->seg, bdf);
@@ -345,7 +347,7 @@ static int reassign_device(struct domain *source, struct domain *target,
         pdev->domain = target;
     }
 
-    rc = allocate_domain_resources(t);
+    rc = allocate_domain_resources(target);
     if ( rc )
         return rc;
 
@@ -378,64 +380,9 @@ static int amd_iommu_assign_device(struct domain *d, u8 devfn,
     return reassign_device(pdev->domain, d, devfn, pdev);
 }
 
-static void deallocate_next_page_table(struct page_info *pg, int level)
-{
-    PFN_ORDER(pg) = level;
-    spin_lock(&iommu_pt_cleanup_lock);
-    page_list_add_tail(pg, &iommu_pt_cleanup_list);
-    spin_unlock(&iommu_pt_cleanup_lock);
-}
-
-static void deallocate_page_table(struct page_info *pg)
-{
-    struct amd_iommu_pte *table_vaddr;
-    unsigned int index, level = PFN_ORDER(pg);
-
-    PFN_ORDER(pg) = 0;
-
-    if ( level <= 1 )
-    {
-        free_amd_iommu_pgtable(pg);
-        return;
-    }
-
-    table_vaddr = __map_domain_page(pg);
-
-    for ( index = 0; index < PTE_PER_TABLE_SIZE; index++ )
-    {
-        struct amd_iommu_pte *pde = &table_vaddr[index];
-
-        if ( pde->mfn && pde->next_level && pde->pr )
-        {
-            /* We do not support skip levels yet */
-            ASSERT(pde->next_level == level - 1);
-            deallocate_next_page_table(mfn_to_page(_mfn(pde->mfn)),
-                                       pde->next_level);
-        }
-    }
-
-    unmap_domain_page(table_vaddr);
-    free_amd_iommu_pgtable(pg);
-}
-
-static void deallocate_iommu_page_tables(struct domain *d)
-{
-    struct domain_iommu *hd = dom_iommu(d);
-
-    spin_lock(&hd->arch.mapping_lock);
-    if ( hd->arch.amd_iommu.root_table )
-    {
-        deallocate_next_page_table(hd->arch.amd_iommu.root_table,
-                                   hd->arch.amd_iommu.paging_mode);
-        hd->arch.amd_iommu.root_table = NULL;
-    }
-    spin_unlock(&hd->arch.mapping_lock);
-}
-
-
 static void amd_iommu_domain_destroy(struct domain *d)
 {
-    deallocate_iommu_page_tables(d);
+    dom_iommu(d)->arch.amd_iommu.root_table = NULL;
     amd_iommu_flush_all_pages(d);
 }
 
@@ -627,7 +574,6 @@ static const struct iommu_ops __initconstrel _iommu_ops = {
     .unmap_page = amd_iommu_unmap_page,
     .iotlb_flush = amd_iommu_flush_iotlb_pages,
     .iotlb_flush_all = amd_iommu_flush_iotlb_all,
-    .free_page_table = deallocate_page_table,
     .reassign_device = reassign_device,
     .get_device_group_id = amd_iommu_group_id,
     .enable_x2apic = iov_enable_xt,
diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index 1d644844ab..dad4088531 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -49,10 +49,6 @@ bool_t __read_mostly amd_iommu_perdev_intremap = 1;
 
 DEFINE_PER_CPU(bool_t, iommu_dont_flush_iotlb);
 
-DEFINE_SPINLOCK(iommu_pt_cleanup_lock);
-PAGE_LIST_HEAD(iommu_pt_cleanup_list);
-static struct tasklet iommu_pt_cleanup_tasklet;
-
 static int __init parse_iommu_param(const char *s)
 {
     const char *ss;
@@ -226,7 +222,6 @@ static void iommu_teardown(struct domain *d)
     struct domain_iommu *hd = dom_iommu(d);
 
     hd->platform_ops->teardown(d);
-    tasklet_schedule(&iommu_pt_cleanup_tasklet);
 }
 
 void iommu_domain_destroy(struct domain *d)
@@ -366,23 +361,6 @@ int iommu_lookup_page(struct domain *d, dfn_t dfn, mfn_t *mfn,
     return iommu_call(hd->platform_ops, lookup_page, d, dfn, mfn, flags);
 }
 
-static void iommu_free_pagetables(void *unused)
-{
-    do {
-        struct page_info *pg;
-
-        spin_lock(&iommu_pt_cleanup_lock);
-        pg = page_list_remove_head(&iommu_pt_cleanup_list);
-        spin_unlock(&iommu_pt_cleanup_lock);
-        if ( !pg )
-            return;
-        iommu_vcall(iommu_get_ops(), free_page_table, pg);
-    } while ( !softirq_pending(smp_processor_id()) );
-
-    tasklet_schedule_on_cpu(&iommu_pt_cleanup_tasklet,
-                            cpumask_cycle(smp_processor_id(), &cpu_online_map));
-}
-
 int iommu_iotlb_flush(struct domain *d, dfn_t dfn, unsigned int page_count,
                       unsigned int flush_flags)
 {
@@ -506,7 +484,6 @@ int __init iommu_setup(void)
 #ifndef iommu_intremap
         printk("Interrupt remapping %sabled\n", iommu_intremap ? "en" : "dis");
 #endif
-        tasklet_init(&iommu_pt_cleanup_tasklet, iommu_free_pagetables, NULL);
     }
 
     return rc;
diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index ac1373fb99..40834e2e7a 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -279,13 +279,16 @@ static u64 addr_to_dma_page_maddr(struct domain *domain, u64 addr, int alloc)
         pte_maddr = dma_pte_addr(*pte);
         if ( !pte_maddr )
         {
+            struct page_info *pg;
+
             if ( !alloc )
                 break;
 
-            pte_maddr = alloc_pgtable_maddr(1, hd->node);
-            if ( !pte_maddr )
+            pg = iommu_alloc_pgtable(domain);
+            if ( !pg )
                 break;
 
+            pte_maddr = page_to_maddr(pg);
             dma_set_pte_addr(*pte, pte_maddr);
 
             /*
@@ -675,45 +678,6 @@ static void dma_pte_clear_one(struct domain *domain, uint64_t addr,
     unmap_vtd_domain_page(page);
 }
 
-static void iommu_free_pagetable(u64 pt_maddr, int level)
-{
-    struct page_info *pg = maddr_to_page(pt_maddr);
-
-    if ( pt_maddr == 0 )
-        return;
-
-    PFN_ORDER(pg) = level;
-    spin_lock(&iommu_pt_cleanup_lock);
-    page_list_add_tail(pg, &iommu_pt_cleanup_list);
-    spin_unlock(&iommu_pt_cleanup_lock);
-}
-
-static void iommu_free_page_table(struct page_info *pg)
-{
-    unsigned int i, next_level = PFN_ORDER(pg) - 1;
-    u64 pt_maddr = page_to_maddr(pg);
-    struct dma_pte *pt_vaddr, *pte;
-
-    PFN_ORDER(pg) = 0;
-    pt_vaddr = (struct dma_pte *)map_vtd_domain_page(pt_maddr);
-
-    for ( i = 0; i < PTE_NUM; i++ )
-    {
-        pte = &pt_vaddr[i];
-        if ( !dma_pte_present(*pte) )
-            continue;
-
-        if ( next_level >= 1 )
-            iommu_free_pagetable(dma_pte_addr(*pte), next_level);
-
-        dma_clear_pte(*pte);
-        iommu_sync_cache(pte, sizeof(struct dma_pte));
-    }
-
-    unmap_vtd_domain_page(pt_vaddr);
-    free_pgtable_maddr(pt_maddr);
-}
-
 static int iommu_set_root_entry(struct vtd_iommu *iommu)
 {
     u32 sts;
@@ -1766,11 +1730,7 @@ static void iommu_domain_teardown(struct domain *d)
     if ( iommu_use_hap_pt(d) )
         return;
 
-    spin_lock(&hd->arch.mapping_lock);
-    iommu_free_pagetable(hd->arch.vtd.pgd_maddr,
-                         agaw_to_level(hd->arch.vtd.agaw));
     hd->arch.vtd.pgd_maddr = 0;
-    spin_unlock(&hd->arch.mapping_lock);
 }
 
 static int __must_check intel_iommu_map_page(struct domain *d, dfn_t dfn,
@@ -2751,7 +2711,6 @@ static struct iommu_ops __initdata vtd_ops = {
     .map_page = intel_iommu_map_page,
     .unmap_page = intel_iommu_unmap_page,
     .lookup_page = intel_iommu_lookup_page,
-    .free_page_table = iommu_free_page_table,
     .reassign_device = reassign_device_ownership,
     .get_device_group_id = intel_iommu_group_id,
     .enable_x2apic = intel_iommu_enable_eim,
diff --git a/xen/drivers/passthrough/x86/iommu.c b/xen/drivers/passthrough/x86/iommu.c
index a12109a1de..b3c7da0fe2 100644
--- a/xen/drivers/passthrough/x86/iommu.c
+++ b/xen/drivers/passthrough/x86/iommu.c
@@ -140,11 +140,19 @@ int arch_iommu_domain_init(struct domain *d)
 
     spin_lock_init(&hd->arch.mapping_lock);
 
+    INIT_PAGE_LIST_HEAD(&hd->arch.pgtables.list);
+    spin_lock_init(&hd->arch.pgtables.lock);
+
     return 0;
 }
 
 void arch_iommu_domain_destroy(struct domain *d)
 {
+    struct domain_iommu *hd = dom_iommu(d);
+    struct page_info *pg;
+
+    while ( (pg = page_list_remove_head(&hd->arch.pgtables.list)) )
+        free_domheap_page(pg);
 }
 
 static bool __hwdom_init hwdom_iommu_map(const struct domain *d,
@@ -257,6 +265,39 @@ void __hwdom_init arch_iommu_hwdom_init(struct domain *d)
         return;
 }
 
+struct page_info *iommu_alloc_pgtable(struct domain *d)
+{
+    struct domain_iommu *hd = dom_iommu(d);
+#ifdef CONFIG_NUMA
+    unsigned int memflags = (hd->node == NUMA_NO_NODE) ?
+        0 : MEMF_node(hd->node);
+#else
+    unsigned int memflags = 0;
+#endif
+    struct page_info *pg;
+    void *p;
+
+    BUG_ON(!iommu_enabled);
+
+    pg = alloc_domheap_page(NULL, memflags);
+    if ( !pg )
+        return NULL;
+
+    p = __map_domain_page(pg);
+    clear_page(p);
+
+    if ( hd->platform_ops->sync_cache )
+        iommu_vcall(hd->platform_ops, sync_cache, p, PAGE_SIZE);
+
+    unmap_domain_page(p);
+
+    spin_lock(&hd->arch.pgtables.lock);
+    page_list_add(pg, &hd->arch.pgtables.list);
+    spin_unlock(&hd->arch.pgtables.lock);
+
+    return pg;
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/include/asm-x86/iommu.h b/xen/include/asm-x86/iommu.h
index a7add5208c..280515966c 100644
--- a/xen/include/asm-x86/iommu.h
+++ b/xen/include/asm-x86/iommu.h
@@ -46,6 +46,10 @@ typedef uint64_t daddr_t;
 struct arch_iommu
 {
     spinlock_t mapping_lock; /* io page table lock */
+    struct {
+        struct page_list_head list;
+        spinlock_t lock;
+    } pgtables;
 
     union {
         /* Intel VT-d */
@@ -131,6 +135,8 @@ int pi_update_irte(const struct pi_desc *pi_desc, const struct pirq *pirq,
         iommu_vcall(ops, sync_cache, addr, size);       \
 })
 
+struct page_info * __must_check iommu_alloc_pgtable(struct domain *d);
+
 #endif /* !__ARCH_X86_IOMMU_H__ */
 /*
  * Local variables:
diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
index 3272874958..51c29180a4 100644
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -263,8 +263,6 @@ struct iommu_ops {
     int __must_check (*lookup_page)(struct domain *d, dfn_t dfn, mfn_t *mfn,
                                     unsigned int *flags);
 
-    void (*free_page_table)(struct page_info *);
-
 #ifdef CONFIG_X86
     int (*enable_x2apic)(void);
     void (*disable_x2apic)(void);
@@ -381,9 +379,6 @@ void iommu_dev_iotlb_flush_timeout(struct domain *d, struct pci_dev *pdev);
  */
 DECLARE_PER_CPU(bool_t, iommu_dont_flush_iotlb);
 
-extern struct spinlock iommu_pt_cleanup_lock;
-extern struct page_list_head iommu_pt_cleanup_list;
-
 #endif /* _IOMMU_H_ */
 
 /*
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri Jul 24 16:46:33 2020
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [PATCH 0/6] IOMMU cleanup
Date: Fri, 24 Jul 2020 17:46:13 +0100
Message-Id: <20200724164619.1245-1-paul@xen.org>
X-Mailer: git-send-email 2.20.1
Cc: Paul Durrant <pdurrant@amazon.com>

From: Paul Durrant <pdurrant@amazon.com>

Patches that accumulated during the 4.14 freeze...

Paul Durrant (6):
  x86/iommu: re-arrange arch_iommu to separate common fields...
  x86/iommu: add common page-table allocator
  iommu: remove iommu_lookup_page() and the lookup_page() method...
  remove remaining uses of iommu_legacy_map/unmap
  iommu: remove the share_p2m operation
  iommu: stop calling IOMMU page tables 'p2m tables'

 xen/arch/x86/mm.c                           |  22 ++-
 xen/arch/x86/mm/p2m-ept.c                   |  22 ++-
 xen/arch/x86/mm/p2m-pt.c                    |  17 +-
 xen/arch/x86/mm/p2m.c                       |  31 ++-
 xen/arch/x86/tboot.c                        |   4 +-
 xen/arch/x86/x86_64/mm.c                    |  27 ++-
 xen/common/grant_table.c                    |  36 +++-
 xen/common/memory.c                         |   7 +-
 xen/drivers/passthrough/amd/iommu.h         |  18 +-
 xen/drivers/passthrough/amd/iommu_guest.c   |   8 +-
 xen/drivers/passthrough/amd/iommu_map.c     |  22 +--
 xen/drivers/passthrough/amd/pci_amd_iommu.c | 109 +++--------
 xen/drivers/passthrough/iommu.c             |  91 +--------
 xen/drivers/passthrough/vtd/iommu.c         | 206 ++++++--------------
 xen/drivers/passthrough/x86/iommu.c         |  42 +++-
 xen/include/asm-x86/iommu.h                 |  33 +++-
 xen/include/xen/iommu.h                     |  35 +---
 17 files changed, 303 insertions(+), 427 deletions(-)

-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri Jul 24 16:46:34 2020
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [PATCH 1/6] x86/iommu: re-arrange arch_iommu to separate common
 fields...
Date: Fri, 24 Jul 2020 17:46:14 +0100
Message-Id: <20200724164619.1245-2-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200724164619.1245-1-paul@xen.org>
References: <20200724164619.1245-1-paul@xen.org>
Cc: Kevin Tian <kevin.tian@intel.com>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Paul Durrant <pdurrant@amazon.com>,
 Lukasz Hawrylko <lukasz.hawrylko@linux.intel.com>,
 Jan Beulich <jbeulich@suse.com>,
 Roger Pau Monné <roger.pau@citrix.com>

From: Paul Durrant <pdurrant@amazon.com>

... from those specific to VT-d or AMD IOMMU, and put the latter in a union.

There is no functional change in this patch, although the initialization of
the 'mapped_rmrrs' list occurs slightly later in iommu_domain_init() since
it is now done (correctly) in VT-d specific code rather than in general x86
code.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Lukasz Hawrylko <lukasz.hawrylko@linux.intel.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Wei Liu <wl@xen.org>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>
Cc: Kevin Tian <kevin.tian@intel.com>
---
 xen/arch/x86/tboot.c                        |  4 +-
 xen/drivers/passthrough/amd/iommu_guest.c   |  8 ++--
 xen/drivers/passthrough/amd/iommu_map.c     | 14 +++---
 xen/drivers/passthrough/amd/pci_amd_iommu.c | 35 +++++++-------
 xen/drivers/passthrough/vtd/iommu.c         | 53 +++++++++++----------
 xen/drivers/passthrough/x86/iommu.c         |  1 -
 xen/include/asm-x86/iommu.h                 | 27 +++++++----
 7 files changed, 78 insertions(+), 64 deletions(-)

diff --git a/xen/arch/x86/tboot.c b/xen/arch/x86/tboot.c
index 320e06f129..e66b0940c4 100644
--- a/xen/arch/x86/tboot.c
+++ b/xen/arch/x86/tboot.c
@@ -230,8 +230,8 @@ static void tboot_gen_domain_integrity(const uint8_t key[TB_KEY_SIZE],
         {
             const struct domain_iommu *dio = dom_iommu(d);
 
-            update_iommu_mac(&ctx, dio->arch.pgd_maddr,
-                             agaw_to_level(dio->arch.agaw));
+            update_iommu_mac(&ctx, dio->arch.vtd.pgd_maddr,
+                             agaw_to_level(dio->arch.vtd.agaw));
         }
     }
 
diff --git a/xen/drivers/passthrough/amd/iommu_guest.c b/xen/drivers/passthrough/amd/iommu_guest.c
index 014a72a54b..26819e82a8 100644
--- a/xen/drivers/passthrough/amd/iommu_guest.c
+++ b/xen/drivers/passthrough/amd/iommu_guest.c
@@ -50,12 +50,12 @@ static uint16_t guest_bdf(struct domain *d, uint16_t machine_bdf)
 
 static inline struct guest_iommu *domain_iommu(struct domain *d)
 {
-    return dom_iommu(d)->arch.g_iommu;
+    return dom_iommu(d)->arch.amd_iommu.g_iommu;
 }
 
 static inline struct guest_iommu *vcpu_iommu(struct vcpu *v)
 {
-    return dom_iommu(v->domain)->arch.g_iommu;
+    return dom_iommu(v->domain)->arch.amd_iommu.g_iommu;
 }
 
 static void guest_iommu_enable(struct guest_iommu *iommu)
@@ -823,7 +823,7 @@ int guest_iommu_init(struct domain* d)
     guest_iommu_reg_init(iommu);
     iommu->mmio_base = ~0ULL;
     iommu->domain = d;
-    hd->arch.g_iommu = iommu;
+    hd->arch.amd_iommu.g_iommu = iommu;
 
     tasklet_init(&iommu->cmd_buffer_tasklet, guest_iommu_process_command, d);
 
@@ -845,5 +845,5 @@ void guest_iommu_destroy(struct domain *d)
     tasklet_kill(&iommu->cmd_buffer_tasklet);
     xfree(iommu);
 
-    dom_iommu(d)->arch.g_iommu = NULL;
+    dom_iommu(d)->arch.amd_iommu.g_iommu = NULL;
 }
diff --git a/xen/drivers/passthrough/amd/iommu_map.c b/xen/drivers/passthrough/amd/iommu_map.c
index 93e96cd69c..06c564968c 100644
--- a/xen/drivers/passthrough/amd/iommu_map.c
+++ b/xen/drivers/passthrough/amd/iommu_map.c
@@ -180,8 +180,8 @@ static int iommu_pde_from_dfn(struct domain *d, unsigned long dfn,
     struct page_info *table;
     const struct domain_iommu *hd = dom_iommu(d);
 
-    table = hd->arch.root_table;
-    level = hd->arch.paging_mode;
+    table = hd->arch.amd_iommu.root_table;
+    level = hd->arch.amd_iommu.paging_mode;
 
     BUG_ON( table == NULL || level < 1 || level > 6 );
 
@@ -325,7 +325,7 @@ int amd_iommu_unmap_page(struct domain *d, dfn_t dfn,
 
     spin_lock(&hd->arch.mapping_lock);
 
-    if ( !hd->arch.root_table )
+    if ( !hd->arch.amd_iommu.root_table )
     {
         spin_unlock(&hd->arch.mapping_lock);
         return 0;
@@ -450,7 +450,7 @@ int __init amd_iommu_quarantine_init(struct domain *d)
     unsigned int level = amd_iommu_get_paging_mode(end_gfn);
     struct amd_iommu_pte *table;
 
-    if ( hd->arch.root_table )
+    if ( hd->arch.amd_iommu.root_table )
     {
         ASSERT_UNREACHABLE();
         return 0;
@@ -458,11 +458,11 @@ int __init amd_iommu_quarantine_init(struct domain *d)
 
     spin_lock(&hd->arch.mapping_lock);
 
-    hd->arch.root_table = alloc_amd_iommu_pgtable();
-    if ( !hd->arch.root_table )
+    hd->arch.amd_iommu.root_table = alloc_amd_iommu_pgtable();
+    if ( !hd->arch.amd_iommu.root_table )
         goto out;
 
-    table = __map_domain_page(hd->arch.root_table);
+    table = __map_domain_page(hd->arch.amd_iommu.root_table);
     while ( level )
     {
         struct page_info *pg;
diff --git a/xen/drivers/passthrough/amd/pci_amd_iommu.c b/xen/drivers/passthrough/amd/pci_amd_iommu.c
index 8d6309cc8c..c386dc4387 100644
--- a/xen/drivers/passthrough/amd/pci_amd_iommu.c
+++ b/xen/drivers/passthrough/amd/pci_amd_iommu.c
@@ -92,7 +92,8 @@ static void amd_iommu_setup_domain_device(
     u8 bus = pdev->bus;
     const struct domain_iommu *hd = dom_iommu(domain);
 
-    BUG_ON( !hd->arch.root_table || !hd->arch.paging_mode ||
+    BUG_ON( !hd->arch.amd_iommu.root_table ||
+            !hd->arch.amd_iommu.paging_mode ||
             !iommu->dev_table.buffer );
 
     if ( iommu_hwdom_passthrough && is_hardware_domain(domain) )
@@ -111,8 +112,8 @@ static void amd_iommu_setup_domain_device(
 
         /* bind DTE to domain page-tables */
         amd_iommu_set_root_page_table(
-            dte, page_to_maddr(hd->arch.root_table), domain->domain_id,
-            hd->arch.paging_mode, valid);
+            dte, page_to_maddr(hd->arch.amd_iommu.root_table),
+            domain->domain_id, hd->arch.amd_iommu.paging_mode, valid);
 
         /* Undo what amd_iommu_disable_domain_device() may have done. */
         ivrs_dev = &get_ivrs_mappings(iommu->seg)[req_id];
@@ -132,8 +133,8 @@ static void amd_iommu_setup_domain_device(
                         "root table = %#"PRIx64", "
                         "domain = %d, paging mode = %d\n",
                         req_id, pdev->type,
-                        page_to_maddr(hd->arch.root_table),
-                        domain->domain_id, hd->arch.paging_mode);
+                        page_to_maddr(hd->arch.amd_iommu.root_table),
+                        domain->domain_id, hd->arch.amd_iommu.paging_mode);
     }
 
     spin_unlock_irqrestore(&iommu->lock, flags);
@@ -207,10 +208,10 @@ static int iov_enable_xt(void)
 
 int amd_iommu_alloc_root(struct domain_iommu *hd)
 {
-    if ( unlikely(!hd->arch.root_table) )
+    if ( unlikely(!hd->arch.amd_iommu.root_table) )
     {
-        hd->arch.root_table = alloc_amd_iommu_pgtable();
-        if ( !hd->arch.root_table )
+        hd->arch.amd_iommu.root_table = alloc_amd_iommu_pgtable();
+        if ( !hd->arch.amd_iommu.root_table )
             return -ENOMEM;
     }
 
@@ -240,7 +241,7 @@ static int amd_iommu_domain_init(struct domain *d)
      *   physical address space we give it, but this isn't known yet so use 4
      *   unilaterally.
      */
-    hd->arch.paging_mode = amd_iommu_get_paging_mode(
+    hd->arch.amd_iommu.paging_mode = amd_iommu_get_paging_mode(
         is_hvm_domain(d)
         ? 1ul << (DEFAULT_DOMAIN_ADDRESS_WIDTH - PAGE_SHIFT)
         : get_upper_mfn_bound() + 1);
@@ -306,7 +307,7 @@ static void amd_iommu_disable_domain_device(const struct domain *domain,
         AMD_IOMMU_DEBUG("Disable: device id = %#x, "
                         "domain = %d, paging mode = %d\n",
                         req_id,  domain->domain_id,
-                        dom_iommu(domain)->arch.paging_mode);
+                        dom_iommu(domain)->arch.amd_iommu.paging_mode);
     }
     spin_unlock_irqrestore(&iommu->lock, flags);
 
@@ -422,10 +423,11 @@ static void deallocate_iommu_page_tables(struct domain *d)
     struct domain_iommu *hd = dom_iommu(d);
 
     spin_lock(&hd->arch.mapping_lock);
-    if ( hd->arch.root_table )
+    if ( hd->arch.amd_iommu.root_table )
     {
-        deallocate_next_page_table(hd->arch.root_table, hd->arch.paging_mode);
-        hd->arch.root_table = NULL;
+        deallocate_next_page_table(hd->arch.amd_iommu.root_table,
+                                   hd->arch.amd_iommu.paging_mode);
+        hd->arch.amd_iommu.root_table = NULL;
     }
     spin_unlock(&hd->arch.mapping_lock);
 }
@@ -605,11 +607,12 @@ static void amd_dump_p2m_table(struct domain *d)
 {
     const struct domain_iommu *hd = dom_iommu(d);
 
-    if ( !hd->arch.root_table )
+    if ( !hd->arch.amd_iommu.root_table )
         return;
 
-    printk("p2m table has %d levels\n", hd->arch.paging_mode);
-    amd_dump_p2m_table_level(hd->arch.root_table, hd->arch.paging_mode, 0, 0);
+    printk("p2m table has %d levels\n", hd->arch.amd_iommu.paging_mode);
+    amd_dump_p2m_table_level(hd->arch.amd_iommu.root_table,
+                             hd->arch.amd_iommu.paging_mode, 0, 0);
 }
 
 static const struct iommu_ops __initconstrel _iommu_ops = {
diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index 01dc444771..ac1373fb99 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -257,20 +257,20 @@ static u64 bus_to_context_maddr(struct vtd_iommu *iommu, u8 bus)
 static u64 addr_to_dma_page_maddr(struct domain *domain, u64 addr, int alloc)
 {
     struct domain_iommu *hd = dom_iommu(domain);
-    int addr_width = agaw_to_width(hd->arch.agaw);
+    int addr_width = agaw_to_width(hd->arch.vtd.agaw);
     struct dma_pte *parent, *pte = NULL;
-    int level = agaw_to_level(hd->arch.agaw);
+    int level = agaw_to_level(hd->arch.vtd.agaw);
     int offset;
     u64 pte_maddr = 0;
 
     addr &= (((u64)1) << addr_width) - 1;
     ASSERT(spin_is_locked(&hd->arch.mapping_lock));
-    if ( !hd->arch.pgd_maddr &&
+    if ( !hd->arch.vtd.pgd_maddr &&
          (!alloc ||
-          ((hd->arch.pgd_maddr = alloc_pgtable_maddr(1, hd->node)) == 0)) )
+          ((hd->arch.vtd.pgd_maddr = alloc_pgtable_maddr(1, hd->node)) == 0)) )
         goto out;
 
-    parent = (struct dma_pte *)map_vtd_domain_page(hd->arch.pgd_maddr);
+    parent = (struct dma_pte *)map_vtd_domain_page(hd->arch.vtd.pgd_maddr);
     while ( level > 1 )
     {
         offset = address_level_offset(addr, level);
@@ -593,7 +593,7 @@ static int __must_check iommu_flush_iotlb(struct domain *d, dfn_t dfn,
     {
         iommu = drhd->iommu;
 
-        if ( !test_bit(iommu->index, &hd->arch.iommu_bitmap) )
+        if ( !test_bit(iommu->index, &hd->arch.vtd.iommu_bitmap) )
             continue;
 
         flush_dev_iotlb = !!find_ats_dev_drhd(iommu);
@@ -1281,7 +1281,10 @@ void __init iommu_free(struct acpi_drhd_unit *drhd)
 
 static int intel_iommu_domain_init(struct domain *d)
 {
-    dom_iommu(d)->arch.agaw = width_to_agaw(DEFAULT_DOMAIN_ADDRESS_WIDTH);
+    struct domain_iommu *hd = dom_iommu(d);
+
+    hd->arch.vtd.agaw = width_to_agaw(DEFAULT_DOMAIN_ADDRESS_WIDTH);
+    INIT_LIST_HEAD(&hd->arch.vtd.mapped_rmrrs);
 
     return 0;
 }
@@ -1381,10 +1384,10 @@ int domain_context_mapping_one(
         spin_lock(&hd->arch.mapping_lock);
 
         /* Ensure we have pagetables allocated down to leaf PTE. */
-        if ( hd->arch.pgd_maddr == 0 )
+        if ( hd->arch.vtd.pgd_maddr == 0 )
         {
             addr_to_dma_page_maddr(domain, 0, 1);
-            if ( hd->arch.pgd_maddr == 0 )
+            if ( hd->arch.vtd.pgd_maddr == 0 )
             {
             nomem:
                 spin_unlock(&hd->arch.mapping_lock);
@@ -1395,7 +1398,7 @@ int domain_context_mapping_one(
         }
 
         /* Skip top levels of page tables for 2- and 3-level DRHDs. */
-        pgd_maddr = hd->arch.pgd_maddr;
+        pgd_maddr = hd->arch.vtd.pgd_maddr;
         for ( agaw = level_to_agaw(4);
               agaw != level_to_agaw(iommu->nr_pt_levels);
               agaw-- )
@@ -1449,7 +1452,7 @@ int domain_context_mapping_one(
     if ( rc > 0 )
         rc = 0;
 
-    set_bit(iommu->index, &hd->arch.iommu_bitmap);
+    set_bit(iommu->index, &hd->arch.vtd.iommu_bitmap);
 
     unmap_vtd_domain_page(context_entries);
 
@@ -1727,7 +1730,7 @@ static int domain_context_unmap(struct domain *domain, u8 devfn,
     {
         int iommu_domid;
 
-        clear_bit(iommu->index, &dom_iommu(domain)->arch.iommu_bitmap);
+        clear_bit(iommu->index, &dom_iommu(domain)->arch.vtd.iommu_bitmap);
 
         iommu_domid = domain_iommu_domid(domain, iommu);
         if ( iommu_domid == -1 )
@@ -1752,7 +1755,7 @@ static void iommu_domain_teardown(struct domain *d)
     if ( list_empty(&acpi_drhd_units) )
         return;
 
-    list_for_each_entry_safe ( mrmrr, tmp, &hd->arch.mapped_rmrrs, list )
+    list_for_each_entry_safe ( mrmrr, tmp, &hd->arch.vtd.mapped_rmrrs, list )
     {
         list_del(&mrmrr->list);
         xfree(mrmrr);
@@ -1764,8 +1767,9 @@ static void iommu_domain_teardown(struct domain *d)
         return;
 
     spin_lock(&hd->arch.mapping_lock);
-    iommu_free_pagetable(hd->arch.pgd_maddr, agaw_to_level(hd->arch.agaw));
-    hd->arch.pgd_maddr = 0;
+    iommu_free_pagetable(hd->arch.vtd.pgd_maddr,
+                         agaw_to_level(hd->arch.vtd.agaw));
+    hd->arch.vtd.pgd_maddr = 0;
     spin_unlock(&hd->arch.mapping_lock);
 }
 
@@ -1905,7 +1909,7 @@ static void iommu_set_pgd(struct domain *d)
     mfn_t pgd_mfn;
 
     pgd_mfn = pagetable_get_mfn(p2m_get_pagetable(p2m_get_hostp2m(d)));
-    dom_iommu(d)->arch.pgd_maddr =
+    dom_iommu(d)->arch.vtd.pgd_maddr =
         pagetable_get_paddr(pagetable_from_mfn(pgd_mfn));
 }
 
@@ -1925,7 +1929,7 @@ static int rmrr_identity_mapping(struct domain *d, bool_t map,
      * No need to acquire hd->arch.mapping_lock: Both insertion and removal
      * get done while holding pcidevs_lock.
      */
-    list_for_each_entry( mrmrr, &hd->arch.mapped_rmrrs, list )
+    list_for_each_entry( mrmrr, &hd->arch.vtd.mapped_rmrrs, list )
     {
         if ( mrmrr->base == rmrr->base_address &&
              mrmrr->end == rmrr->end_address )
@@ -1972,7 +1976,7 @@ static int rmrr_identity_mapping(struct domain *d, bool_t map,
     mrmrr->base = rmrr->base_address;
     mrmrr->end = rmrr->end_address;
     mrmrr->count = 1;
-    list_add_tail(&mrmrr->list, &hd->arch.mapped_rmrrs);
+    list_add_tail(&mrmrr->list, &hd->arch.vtd.mapped_rmrrs);
 
     return 0;
 }
@@ -2671,8 +2675,9 @@ static void vtd_dump_p2m_table(struct domain *d)
         return;
 
     hd = dom_iommu(d);
-    printk("p2m table has %d levels\n", agaw_to_level(hd->arch.agaw));
-    vtd_dump_p2m_table_level(hd->arch.pgd_maddr, agaw_to_level(hd->arch.agaw), 0, 0);
+    printk("p2m table has %d levels\n", agaw_to_level(hd->arch.vtd.agaw));
+    vtd_dump_p2m_table_level(hd->arch.vtd.pgd_maddr,
+                             agaw_to_level(hd->arch.vtd.agaw), 0, 0);
 }
 
 static int __init intel_iommu_quarantine_init(struct domain *d)
@@ -2683,7 +2688,7 @@ static int __init intel_iommu_quarantine_init(struct domain *d)
     unsigned int level = agaw_to_level(agaw);
     int rc;
 
-    if ( hd->arch.pgd_maddr )
+    if ( hd->arch.vtd.pgd_maddr )
     {
         ASSERT_UNREACHABLE();
         return 0;
@@ -2691,11 +2696,11 @@ static int __init intel_iommu_quarantine_init(struct domain *d)
 
     spin_lock(&hd->arch.mapping_lock);
 
-    hd->arch.pgd_maddr = alloc_pgtable_maddr(1, hd->node);
-    if ( !hd->arch.pgd_maddr )
+    hd->arch.vtd.pgd_maddr = alloc_pgtable_maddr(1, hd->node);
+    if ( !hd->arch.vtd.pgd_maddr )
         goto out;
 
-    parent = map_vtd_domain_page(hd->arch.pgd_maddr);
+    parent = map_vtd_domain_page(hd->arch.vtd.pgd_maddr);
     while ( level )
     {
         uint64_t maddr;
diff --git a/xen/drivers/passthrough/x86/iommu.c b/xen/drivers/passthrough/x86/iommu.c
index 3d7670e8c6..a12109a1de 100644
--- a/xen/drivers/passthrough/x86/iommu.c
+++ b/xen/drivers/passthrough/x86/iommu.c
@@ -139,7 +139,6 @@ int arch_iommu_domain_init(struct domain *d)
     struct domain_iommu *hd = dom_iommu(d);
 
     spin_lock_init(&hd->arch.mapping_lock);
-    INIT_LIST_HEAD(&hd->arch.mapped_rmrrs);
 
     return 0;
 }
diff --git a/xen/include/asm-x86/iommu.h b/xen/include/asm-x86/iommu.h
index 6c9d5e5632..a7add5208c 100644
--- a/xen/include/asm-x86/iommu.h
+++ b/xen/include/asm-x86/iommu.h
@@ -45,16 +45,23 @@ typedef uint64_t daddr_t;
 
 struct arch_iommu
 {
-    u64 pgd_maddr;                 /* io page directory machine address */
-    spinlock_t mapping_lock;            /* io page table lock */
-    int agaw;     /* adjusted guest address width, 0 is level 2 30-bit */
-    u64 iommu_bitmap;              /* bitmap of iommu(s) that the domain uses */
-    struct list_head mapped_rmrrs;
-
-    /* amd iommu support */
-    int paging_mode;
-    struct page_info *root_table;
-    struct guest_iommu *g_iommu;
+    spinlock_t mapping_lock; /* io page table lock */
+
+    union {
+        /* Intel VT-d */
+        struct {
+            u64 pgd_maddr; /* io page directory machine address */
+            int agaw; /* adjusted guest address width, 0 is level 2 30-bit */
+            u64 iommu_bitmap; /* bitmap of iommu(s) that the domain uses */
+            struct list_head mapped_rmrrs;
+        } vtd;
+        /* AMD IOMMU */
+        struct {
+            int paging_mode;
+            struct page_info *root_table;
+            struct guest_iommu *g_iommu;
+        } amd_iommu;
+    };
 };
 
 extern struct iommu_ops iommu_ops;
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri Jul 24 16:46:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jul 2020 16:46:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jz0qD-00018G-Lk; Fri, 24 Jul 2020 16:46:37 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=yL+a=BD=xen.org=paul@srs-us1.protection.inumbo.net>)
 id 1jz0qC-00015t-SC
 for xen-devel@lists.xenproject.org; Fri, 24 Jul 2020 16:46:36 +0000
X-Inumbo-ID: 3abf2220-cdcd-11ea-8855-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3abf2220-cdcd-11ea-8855-bc764e2007e4;
 Fri, 24 Jul 2020 16:46:32 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:MIME-Version:
 References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=AosL3w16YGuBNK92YZOL2rtFKWqYL/0zf7A/3jIqAEA=; b=epYqwH6KbmyaIuwxIGxeO7s4h7
 8dBL3k6uJOlglLQFZ4k0ZlsORcJWEymRcNG7ykte9Q8FyyQFYNCXZBhI48ZxcW2nBHr3tKSaSyRBY
 7VpbatyZO7WurHJkZpUq3Vd+om01KjnKQHJYxt5owe6ld77YXxzybazCb7vEZ7ru9x1w=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1jz0q7-00054a-0h; Fri, 24 Jul 2020 16:46:31 +0000
Received: from host86-143-223-30.range86-143.btcentralplus.com
 ([86.143.223.30] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1jz0q6-0006WL-P4; Fri, 24 Jul 2020 16:46:30 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [PATCH 4/6] remove remaining uses of iommu_legacy_map/unmap
Date: Fri, 24 Jul 2020 17:46:17 +0100
Message-Id: <20200724164619.1245-5-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200724164619.1245-1-paul@xen.org>
References: <20200724164619.1245-1-paul@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin Tian <kevin.tian@intel.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Jun Nakajima <jun.nakajima@intel.com>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Paul Durrant <pdurrant@amazon.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Paul Durrant <pdurrant@amazon.com>

The 'legacy' functions do implicit flushing, so amend the callers to do the
appropriate flushing themselves.

Unfortunately, because of the structure of the P2M code, we cannot remove
the per-CPU 'iommu_dont_flush_iotlb' global and the optimization it
facilitates. It is now checked directly in iommu_iotlb_flush(). Also, it is
now declared as bool (rather than bool_t) and setting/clearing it are no
longer pointlessly gated on is_iommu_enabled() returning true. (Arguably
it is also pointless to gate the call to iommu_iotlb_flush() on that
condition - since it is a no-op in that case - but the if clause allows
the scope of a stack variable to be restricted).

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Wei Liu <wl@xen.org>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>
Cc: George Dunlap <george.dunlap@citrix.com>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Julien Grall <julien@xen.org>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Jun Nakajima <jun.nakajima@intel.com>
Cc: Kevin Tian <kevin.tian@intel.com>
---
 xen/arch/x86/mm.c               | 22 +++++++++++++++-----
 xen/arch/x86/mm/p2m-ept.c       | 22 +++++++++++++-------
 xen/arch/x86/mm/p2m-pt.c        | 17 +++++++++++----
 xen/arch/x86/mm/p2m.c           | 28 ++++++++++++++++++-------
 xen/arch/x86/x86_64/mm.c        | 27 ++++++++++++++++++------
 xen/common/grant_table.c        | 36 +++++++++++++++++++++++++-------
 xen/common/memory.c             |  7 +++----
 xen/drivers/passthrough/iommu.c | 37 +--------------------------------
 xen/include/xen/iommu.h         | 20 +++++-------------
 9 files changed, 123 insertions(+), 93 deletions(-)

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 82bc676553..8a5658b97a 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -2446,10 +2446,16 @@ static int cleanup_page_mappings(struct page_info *page)
 
         if ( d && unlikely(need_iommu_pt_sync(d)) && is_pv_domain(d) )
         {
-            int rc2 = iommu_legacy_unmap(d, _dfn(mfn), PAGE_ORDER_4K);
+            unsigned int flush_flags = 0;
+            int err;
 
+            err = iommu_unmap(d, _dfn(mfn), PAGE_ORDER_4K, &flush_flags);
             if ( !rc )
-                rc = rc2;
+                rc = err;
+
+            err = iommu_iotlb_flush(d, _dfn(mfn), 1, flush_flags);
+            if ( !rc )
+                rc = err;
         }
 
         if ( likely(!is_special_page(page)) )
@@ -2971,13 +2977,19 @@ static int _get_page_type(struct page_info *page, unsigned long type,
         if ( d && unlikely(need_iommu_pt_sync(d)) && is_pv_domain(d) )
         {
             mfn_t mfn = page_to_mfn(page);
+            dfn_t dfn = _dfn(mfn_x(mfn));
+            unsigned int flush_flags = 0;
+            int err;
 
             if ( (x & PGT_type_mask) == PGT_writable_page )
-                rc = iommu_legacy_unmap(d, _dfn(mfn_x(mfn)), PAGE_ORDER_4K);
+                rc = iommu_unmap(d, dfn, PAGE_ORDER_4K, &flush_flags);
             else
-                rc = iommu_legacy_map(d, _dfn(mfn_x(mfn)), mfn, PAGE_ORDER_4K,
-                                      IOMMUF_readable | IOMMUF_writable);
+                rc = iommu_map(d, dfn, mfn, PAGE_ORDER_4K,
+                               IOMMUF_readable | IOMMUF_writable, &flush_flags);
 
+            err = iommu_iotlb_flush(d, dfn, 1, flush_flags);
+            if ( !rc )
+                rc = err;
             if ( unlikely(rc) )
             {
                 _put_page_type(page, 0, NULL);
diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-ept.c
index b8154a7ecc..d71c949b35 100644
--- a/xen/arch/x86/mm/p2m-ept.c
+++ b/xen/arch/x86/mm/p2m-ept.c
@@ -842,15 +842,21 @@ out:
     if ( rc == 0 && p2m_is_hostp2m(p2m) &&
          need_modify_vtd_table )
     {
-        if ( iommu_use_hap_pt(d) )
-            rc = iommu_iotlb_flush(d, _dfn(gfn), (1u << order),
-                                   (iommu_flags ? IOMMU_FLUSHF_added : 0) |
-                                   (vtd_pte_present ? IOMMU_FLUSHF_modified
-                                                    : 0));
-        else if ( need_iommu_pt_sync(d) )
+        unsigned int flush_flags = 0;
+        int err;
+
+        if ( need_iommu_pt_sync(d) )
             rc = iommu_flags ?
-                iommu_legacy_map(d, _dfn(gfn), mfn, order, iommu_flags) :
-                iommu_legacy_unmap(d, _dfn(gfn), order);
+                iommu_map(d, _dfn(gfn), mfn, order, iommu_flags, &flush_flags) :
+                iommu_unmap(d, _dfn(gfn), order, &flush_flags);
+        else if ( iommu_use_hap_pt(d) )
+            flush_flags =
+                (iommu_flags ? IOMMU_FLUSHF_added : 0) |
+                (vtd_pte_present ? IOMMU_FLUSHF_modified : 0);
+
+        err = iommu_iotlb_flush(d, _dfn(gfn), 1u << order, flush_flags);
+        if ( !rc )
+            rc = err;
     }
 
     unmap_domain_page(table);
diff --git a/xen/arch/x86/mm/p2m-pt.c b/xen/arch/x86/mm/p2m-pt.c
index badb26bc34..c48245cfe4 100644
--- a/xen/arch/x86/mm/p2m-pt.c
+++ b/xen/arch/x86/mm/p2m-pt.c
@@ -678,10 +678,19 @@ p2m_pt_set_entry(struct p2m_domain *p2m, gfn_t gfn_, mfn_t mfn,
 
     if ( need_iommu_pt_sync(p2m->domain) &&
          (iommu_old_flags != iommu_pte_flags || old_mfn != mfn_x(mfn)) )
-        rc = iommu_pte_flags
-             ? iommu_legacy_map(d, _dfn(gfn), mfn, page_order,
-                                iommu_pte_flags)
-             : iommu_legacy_unmap(d, _dfn(gfn), page_order);
+    {
+        unsigned int flush_flags = 0;
+        int err;
+
+        rc = iommu_pte_flags ?
+            iommu_map(d, _dfn(gfn), mfn, page_order, iommu_pte_flags,
+                      &flush_flags) :
+            iommu_unmap(d, _dfn(gfn), page_order, &flush_flags);
+
+        err = iommu_iotlb_flush(d, _dfn(gfn), 1u << page_order, flush_flags);
+        if ( !rc )
+            rc = err;
+    }
 
     /*
      * Free old intermediate tables if necessary.  This has to be the
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index db7bde0230..c5f52a4118 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -1350,10 +1350,17 @@ int set_identity_p2m_entry(struct domain *d, unsigned long gfn_l,
 
     if ( !paging_mode_translate(p2m->domain) )
     {
-        if ( !is_iommu_enabled(d) )
-            return 0;
-        return iommu_legacy_map(d, _dfn(gfn_l), _mfn(gfn_l), PAGE_ORDER_4K,
-                                IOMMUF_readable | IOMMUF_writable);
+        unsigned int flush_flags = 0;
+        int err;
+
+        ret = iommu_map(d, _dfn(gfn_l), _mfn(gfn_l), PAGE_ORDER_4K,
+                        IOMMUF_readable | IOMMUF_writable, &flush_flags);
+
+        err = iommu_iotlb_flush(d, _dfn(gfn_l), 1, flush_flags);
+        if ( !ret )
+            ret = err;
+
+        return ret;
     }
 
     gfn_lock(p2m, gfn, 0);
@@ -1441,9 +1448,16 @@ int clear_identity_p2m_entry(struct domain *d, unsigned long gfn_l)
 
     if ( !paging_mode_translate(d) )
     {
-        if ( !is_iommu_enabled(d) )
-            return 0;
-        return iommu_legacy_unmap(d, _dfn(gfn_l), PAGE_ORDER_4K);
+        unsigned int flush_flags = 0;
+        int err;
+
+        ret = iommu_unmap(d, _dfn(gfn_l), PAGE_ORDER_4K, &flush_flags);
+
+        err = iommu_iotlb_flush(d, _dfn(gfn_l), 1, flush_flags);
+        if ( !ret )
+            ret = err;
+
+        return ret;
     }
 
     gfn_lock(p2m, gfn, 0);
diff --git a/xen/arch/x86/x86_64/mm.c b/xen/arch/x86/x86_64/mm.c
index 102079a801..3e0bff228e 100644
--- a/xen/arch/x86/x86_64/mm.c
+++ b/xen/arch/x86/x86_64/mm.c
@@ -1413,21 +1413,36 @@ int memory_add(unsigned long spfn, unsigned long epfn, unsigned int pxm)
          !iommu_use_hap_pt(hardware_domain) &&
          !need_iommu_pt_sync(hardware_domain) )
     {
+        unsigned int flush_flags = 0;
+        bool failed = false;
+        unsigned int n;
+
         for ( i = spfn; i < epfn; i++ )
-            if ( iommu_legacy_map(hardware_domain, _dfn(i), _mfn(i),
-                                  PAGE_ORDER_4K,
-                                  IOMMUF_readable | IOMMUF_writable) )
+            if ( iommu_map(hardware_domain, _dfn(i), _mfn(i),
+                           PAGE_ORDER_4K, IOMMUF_readable | IOMMUF_writable,
+                           &flush_flags) )
                 break;
         if ( i != epfn )
         {
+            failed = true;
+
             while (i-- > old_max)
                 /* If statement to satisfy __must_check. */
-                if ( iommu_legacy_unmap(hardware_domain, _dfn(i),
-                                        PAGE_ORDER_4K) )
+                if ( iommu_unmap(hardware_domain, _dfn(i), PAGE_ORDER_4K,
+                                 &flush_flags) )
                     continue;
+        }
 
-            goto destroy_m2p;
+        for ( i = spfn; i < epfn; i += n )
+        {
+            n = epfn - i; /* may truncate */
+
+            /* If statement to satisfy __must_check. */
+            if ( iommu_iotlb_flush(hardware_domain, _dfn(i), n, flush_flags) )
+                continue;
         }
+        if ( failed )
+            goto destroy_m2p;
     }
 
     /* We can't revert any more */
diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
index 9f0cae52c0..bc2b5000cf 100644
--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -1225,11 +1225,25 @@ map_grant_ref(
             kind = IOMMUF_readable;
         else
             kind = 0;
-        if ( kind && iommu_legacy_map(ld, _dfn(mfn_x(mfn)), mfn, 0, kind) )
+        if ( kind )
         {
-            double_gt_unlock(lgt, rgt);
-            rc = GNTST_general_error;
-            goto undo_out;
+            dfn_t dfn = _dfn(mfn_x(mfn));
+            unsigned int flush_flags = 0;
+            int err;
+
+            err = iommu_map(ld, dfn, mfn, 0, kind, &flush_flags);
+            if ( err )
+                rc = GNTST_general_error;
+
+            err = iommu_iotlb_flush(ld, dfn, 1, flush_flags);
+            if ( err )
+                rc = GNTST_general_error;
+
+            if ( rc != GNTST_okay )
+            {
+                double_gt_unlock(lgt, rgt);
+                goto undo_out;
+            }
         }
     }
 
@@ -1473,21 +1487,27 @@ unmap_common(
     if ( rc == GNTST_okay && gnttab_need_iommu_mapping(ld) )
     {
         unsigned int kind;
+        dfn_t dfn = _dfn(mfn_x(op->mfn));
+        unsigned int flush_flags = 0;
         int err = 0;
 
         double_gt_lock(lgt, rgt);
 
         kind = mapkind(lgt, rd, op->mfn);
         if ( !kind )
-            err = iommu_legacy_unmap(ld, _dfn(mfn_x(op->mfn)), 0);
+            err = iommu_unmap(ld, dfn, 0, &flush_flags);
         else if ( !(kind & MAPKIND_WRITE) )
-            err = iommu_legacy_map(ld, _dfn(mfn_x(op->mfn)), op->mfn, 0,
-                                   IOMMUF_readable);
+            err = iommu_map(ld, dfn, op->mfn, 0, IOMMUF_readable,
+                            &flush_flags);
 
-        double_gt_unlock(lgt, rgt);
+        if ( err )
+            rc = GNTST_general_error;
 
+        err = iommu_iotlb_flush(ld, dfn, 1, flush_flags);
         if ( err )
             rc = GNTST_general_error;
+
+        double_gt_unlock(lgt, rgt);
     }
 
     /* If just unmapped a writable mapping, mark as dirtied */
diff --git a/xen/common/memory.c b/xen/common/memory.c
index 714077c1e5..fedbd9019e 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -824,8 +824,7 @@ int xenmem_add_to_physmap(struct domain *d, struct xen_add_to_physmap *xatp,
     xatp->gpfn += start;
     xatp->size -= start;
 
-    if ( is_iommu_enabled(d) )
-       this_cpu(iommu_dont_flush_iotlb) = 1;
+    this_cpu(iommu_dont_flush_iotlb) = true;
 
     while ( xatp->size > done )
     {
@@ -845,12 +844,12 @@ int xenmem_add_to_physmap(struct domain *d, struct xen_add_to_physmap *xatp,
         }
     }
 
+    this_cpu(iommu_dont_flush_iotlb) = false;
+
     if ( is_iommu_enabled(d) )
     {
         int ret;
 
-        this_cpu(iommu_dont_flush_iotlb) = 0;
-
         ret = iommu_iotlb_flush(d, _dfn(xatp->idx - done), done,
                                 IOMMU_FLUSHF_added | IOMMU_FLUSHF_modified);
         if ( unlikely(ret) && rc >= 0 )
diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index 327df17c5d..f32d8e25a8 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -277,24 +277,6 @@ int iommu_map(struct domain *d, dfn_t dfn, mfn_t mfn,
     return rc;
 }
 
-int iommu_legacy_map(struct domain *d, dfn_t dfn, mfn_t mfn,
-                     unsigned int page_order, unsigned int flags)
-{
-    unsigned int flush_flags = 0;
-    int rc = iommu_map(d, dfn, mfn, page_order, flags, &flush_flags);
-
-    if ( !this_cpu(iommu_dont_flush_iotlb) )
-    {
-        int err = iommu_iotlb_flush(d, dfn, (1u << page_order),
-                                    flush_flags);
-
-        if ( !rc )
-            rc = err;
-    }
-
-    return rc;
-}
-
 int iommu_unmap(struct domain *d, dfn_t dfn, unsigned int page_order,
                 unsigned int *flush_flags)
 {
@@ -333,23 +315,6 @@ int iommu_unmap(struct domain *d, dfn_t dfn, unsigned int page_order,
     return rc;
 }
 
-int iommu_legacy_unmap(struct domain *d, dfn_t dfn, unsigned int page_order)
-{
-    unsigned int flush_flags = 0;
-    int rc = iommu_unmap(d, dfn, page_order, &flush_flags);
-
-    if ( !this_cpu(iommu_dont_flush_iotlb) )
-    {
-        int err = iommu_iotlb_flush(d, dfn, (1u << page_order),
-                                    flush_flags);
-
-        if ( !rc )
-            rc = err;
-    }
-
-    return rc;
-}
-
 int iommu_iotlb_flush(struct domain *d, dfn_t dfn, unsigned int page_count,
                       unsigned int flush_flags)
 {
@@ -357,7 +322,7 @@ int iommu_iotlb_flush(struct domain *d, dfn_t dfn, unsigned int page_count,
     int rc;
 
     if ( !is_iommu_enabled(d) || !hd->platform_ops->iotlb_flush ||
-         !page_count || !flush_flags )
+         !page_count || !flush_flags || this_cpu(iommu_dont_flush_iotlb) )
         return 0;
 
     if ( dfn_eq(dfn, INVALID_DFN) )
diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
index 271bd8e546..ec639ba128 100644
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -151,13 +151,6 @@ int __must_check iommu_map(struct domain *d, dfn_t dfn, mfn_t mfn,
 int __must_check iommu_unmap(struct domain *d, dfn_t dfn,
                              unsigned int page_order,
                              unsigned int *flush_flags);
-
-int __must_check iommu_legacy_map(struct domain *d, dfn_t dfn, mfn_t mfn,
-                                  unsigned int page_order,
-                                  unsigned int flags);
-int __must_check iommu_legacy_unmap(struct domain *d, dfn_t dfn,
-                                    unsigned int page_order);
-
 int __must_check iommu_iotlb_flush(struct domain *d, dfn_t dfn,
                                    unsigned int page_count,
                                    unsigned int flush_flags);
@@ -364,15 +357,12 @@ void iommu_dev_iotlb_flush_timeout(struct domain *d, struct pci_dev *pdev);
 
 /*
  * The purpose of the iommu_dont_flush_iotlb optional cpu flag is to
- * avoid unecessary iotlb_flush in the low level IOMMU code.
- *
- * iommu_map_page/iommu_unmap_page must flush the iotlb but somethimes
- * this operation can be really expensive. This flag will be set by the
- * caller to notify the low level IOMMU code to avoid the iotlb flushes.
- * iommu_iotlb_flush/iommu_iotlb_flush_all will be explicitly called by
- * the caller.
+ * avoid unnecessary IOMMU flushing while updating the P2M.
+ * Setting the value to true will cause iommu_iotlb_flush() to return without
+ * actually performing a flush. A batch flush must therefore be done by the
+ * calling code after setting the value back to false.
  */
-DECLARE_PER_CPU(bool_t, iommu_dont_flush_iotlb);
+DECLARE_PER_CPU(bool, iommu_dont_flush_iotlb);
 
 #endif /* _IOMMU_H_ */
 
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri Jul 24 16:46:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jul 2020 16:46:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jz0qF-00019Q-3W; Fri, 24 Jul 2020 16:46:39 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=yL+a=BD=xen.org=paul@srs-us1.protection.inumbo.net>)
 id 1jz0qD-00015d-E1
 for xen-devel@lists.xenproject.org; Fri, 24 Jul 2020 16:46:37 +0000
X-Inumbo-ID: 39ae33b2-cdcd-11ea-a42a-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 39ae33b2-cdcd-11ea-a42a-12813bfff9fa;
 Fri, 24 Jul 2020 16:46:30 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
 In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:Content-Type:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=cZN6jmbYm9lCxNLsfkXuY/1LcmLkMfluVmH0qNDCEu0=; b=I/cHUa/pd6pC9G6tExT7F95xXC
 VMncjymlOmzZTFVStzMGzHXDL4camoe6ZhB+cRBPNxkP4BftickwLm9xxELciMdqnPvqm0nm5obvX
 s2eB21PKzYpi6rdWnjji8PmV9JArdSa/xnpFTgtfVooiQoM6bKXS+0djifcfoMpW0bio=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1jz0q5-00054Q-D0; Fri, 24 Jul 2020 16:46:29 +0000
Received: from host86-143-223-30.range86-143.btcentralplus.com
 ([86.143.223.30] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1jz0q5-0006WL-4k; Fri, 24 Jul 2020 16:46:29 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [PATCH 3/6] iommu: remove iommu_lookup_page() and the lookup_page()
 method...
Date: Fri, 24 Jul 2020 17:46:16 +0100
Message-Id: <20200724164619.1245-4-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200724164619.1245-1-paul@xen.org>
References: <20200724164619.1245-1-paul@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Paul Durrant <pdurrant@amazon.com>, Kevin Tian <kevin.tian@intel.com>,
 Jan Beulich <jbeulich@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Paul Durrant <pdurrant@amazon.com>

... from iommu_ops.

This patch is essentially a reversion of dd93d54f "vtd: add lookup_page method
to iommu_ops". The code was intended to be used by a patch that has long
since been abandoned. Therefore it is dead code and can be removed.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Kevin Tian <kevin.tian@intel.com>
---
 xen/drivers/passthrough/iommu.c     | 11 --------
 xen/drivers/passthrough/vtd/iommu.c | 41 -----------------------------
 xen/include/xen/iommu.h             |  5 ----
 3 files changed, 57 deletions(-)

diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index dad4088531..327df17c5d 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -350,17 +350,6 @@ int iommu_legacy_unmap(struct domain *d, dfn_t dfn, unsigned int page_order)
     return rc;
 }
 
-int iommu_lookup_page(struct domain *d, dfn_t dfn, mfn_t *mfn,
-                      unsigned int *flags)
-{
-    const struct domain_iommu *hd = dom_iommu(d);
-
-    if ( !is_iommu_enabled(d) || !hd->platform_ops->lookup_page )
-        return -EOPNOTSUPP;
-
-    return iommu_call(hd->platform_ops, lookup_page, d, dfn, mfn, flags);
-}
-
 int iommu_iotlb_flush(struct domain *d, dfn_t dfn, unsigned int page_count,
                       unsigned int flush_flags)
 {
diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index 40834e2e7a..149d7122c3 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -1808,46 +1808,6 @@ static int __must_check intel_iommu_unmap_page(struct domain *d, dfn_t dfn,
     return 0;
 }
 
-static int intel_iommu_lookup_page(struct domain *d, dfn_t dfn, mfn_t *mfn,
-                                   unsigned int *flags)
-{
-    struct domain_iommu *hd = dom_iommu(d);
-    struct dma_pte *page, val;
-    u64 pg_maddr;
-
-    /*
-     * If VT-d shares EPT page table or if the domain is the hardware
-     * domain and iommu_passthrough is set then pass back the dfn.
-     */
-    if ( iommu_use_hap_pt(d) ||
-         (iommu_hwdom_passthrough && is_hardware_domain(d)) )
-        return -EOPNOTSUPP;
-
-    spin_lock(&hd->arch.mapping_lock);
-
-    pg_maddr = addr_to_dma_page_maddr(d, dfn_to_daddr(dfn), 0);
-    if ( !pg_maddr )
-    {
-        spin_unlock(&hd->arch.mapping_lock);
-        return -ENOENT;
-    }
-
-    page = map_vtd_domain_page(pg_maddr);
-    val = page[dfn_x(dfn) & LEVEL_MASK];
-
-    unmap_vtd_domain_page(page);
-    spin_unlock(&hd->arch.mapping_lock);
-
-    if ( !dma_pte_present(val) )
-        return -ENOENT;
-
-    *mfn = maddr_to_mfn(dma_pte_addr(val));
-    *flags = dma_pte_read(val) ? IOMMUF_readable : 0;
-    *flags |= dma_pte_write(val) ? IOMMUF_writable : 0;
-
-    return 0;
-}
-
 static int __init vtd_ept_page_compatible(struct vtd_iommu *iommu)
 {
     u64 ept_cap, vtd_cap = iommu->cap;
@@ -2710,7 +2670,6 @@ static struct iommu_ops __initdata vtd_ops = {
     .teardown = iommu_domain_teardown,
     .map_page = intel_iommu_map_page,
     .unmap_page = intel_iommu_unmap_page,
-    .lookup_page = intel_iommu_lookup_page,
     .reassign_device = reassign_device_ownership,
     .get_device_group_id = intel_iommu_group_id,
     .enable_x2apic = intel_iommu_enable_eim,
diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
index 51c29180a4..271bd8e546 100644
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -158,9 +158,6 @@ int __must_check iommu_legacy_map(struct domain *d, dfn_t dfn, mfn_t mfn,
 int __must_check iommu_legacy_unmap(struct domain *d, dfn_t dfn,
                                     unsigned int page_order);
 
-int __must_check iommu_lookup_page(struct domain *d, dfn_t dfn, mfn_t *mfn,
-                                   unsigned int *flags);
-
 int __must_check iommu_iotlb_flush(struct domain *d, dfn_t dfn,
                                    unsigned int page_count,
                                    unsigned int flush_flags);
@@ -260,8 +257,6 @@ struct iommu_ops {
                                  unsigned int *flush_flags);
     int __must_check (*unmap_page)(struct domain *d, dfn_t dfn,
                                    unsigned int *flush_flags);
-    int __must_check (*lookup_page)(struct domain *d, dfn_t dfn, mfn_t *mfn,
-                                    unsigned int *flags);
 
 #ifdef CONFIG_X86
     int (*enable_x2apic)(void);
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri Jul 24 16:46:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jul 2020 16:46:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jz0qJ-0001C6-CF; Fri, 24 Jul 2020 16:46:43 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=yL+a=BD=xen.org=paul@srs-us1.protection.inumbo.net>)
 id 1jz0qH-00015t-SI
 for xen-devel@lists.xenproject.org; Fri, 24 Jul 2020 16:46:41 +0000
X-Inumbo-ID: 3bc132a8-cdcd-11ea-8855-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3bc132a8-cdcd-11ea-8855-bc764e2007e4;
 Fri, 24 Jul 2020 16:46:34 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
 In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:Content-Type:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=H2uqJkwBIHNcPC9n5mPbNITBzlMI1bcGzPfXQlHr/2Q=; b=myW6af7AGCR6JlaXt0pQ5VLzmL
 iSGR873513AZ/o71AaEhr3uit+/VpAo1rgTSVd2aVTmL3LTPyXDV4OlxDasbHwGuLFgNuUo/Sp5xT
 4sIbY3Jv5b4X+ez57kz+SDnWecTLi6aouKDiWKLYmx8CNgbi9vCNJgo573SoB9Z+WJ3s=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1jz0q9-00054p-8e; Fri, 24 Jul 2020 16:46:33 +0000
Received: from host86-143-223-30.range86-143.btcentralplus.com
 ([86.143.223.30] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1jz0q9-0006WL-11; Fri, 24 Jul 2020 16:46:33 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [PATCH 6/6] iommu: stop calling IOMMU page tables 'p2m tables'
Date: Fri, 24 Jul 2020 17:46:19 +0100
Message-Id: <20200724164619.1245-7-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200724164619.1245-1-paul@xen.org>
References: <20200724164619.1245-1-paul@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 Paul Durrant <pdurrant@amazon.com>, Kevin Tian <kevin.tian@intel.com>,
 Jan Beulich <jbeulich@suse.com>, Paul Durrant <paul@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Paul Durrant <pdurrant@amazon.com>

It's confusing and not consistent with the terminology introduced with 'dfn_t'.
Just call them IOMMU page tables.

Also remove a pointless check of the 'acpi_drhd_units' list in
vtd_dump_page_table_level(). If the list is empty then IOMMU mappings would
not have been enabled for the domain in the first place.

NOTE: There's also a bit of cosmetic clean-up in iommu_dump_page_tables()
      to make use of %pd.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Paul Durrant <paul@xen.org>
Cc: Kevin Tian <kevin.tian@intel.com>
---
 xen/drivers/passthrough/amd/pci_amd_iommu.c | 16 +++++++-------
 xen/drivers/passthrough/iommu.c             | 12 +++++------
 xen/drivers/passthrough/vtd/iommu.c         | 24 +++++++++------------
 xen/include/xen/iommu.h                     |  2 +-
 4 files changed, 25 insertions(+), 29 deletions(-)

diff --git a/xen/drivers/passthrough/amd/pci_amd_iommu.c b/xen/drivers/passthrough/amd/pci_amd_iommu.c
index fd9b1e7bd5..112bd3eb37 100644
--- a/xen/drivers/passthrough/amd/pci_amd_iommu.c
+++ b/xen/drivers/passthrough/amd/pci_amd_iommu.c
@@ -499,8 +499,8 @@ static int amd_iommu_group_id(u16 seg, u8 bus, u8 devfn)
 
 #include <asm/io_apic.h>
 
-static void amd_dump_p2m_table_level(struct page_info* pg, int level, 
-                                     paddr_t gpa, int indent)
+static void amd_dump_page_table_level(struct page_info* pg, int level,
+                                      paddr_t gpa, int indent)
 {
     paddr_t address;
     struct amd_iommu_pte *table_vaddr;
@@ -537,7 +537,7 @@ static void amd_dump_p2m_table_level(struct page_info* pg, int level,
 
         address = gpa + amd_offset_level_address(index, level);
         if ( pde->next_level >= 1 )
-            amd_dump_p2m_table_level(
+            amd_dump_page_table_level(
                 mfn_to_page(_mfn(pde->mfn)), pde->next_level,
                 address, indent + 1);
         else
@@ -550,16 +550,16 @@ static void amd_dump_p2m_table_level(struct page_info* pg, int level,
     unmap_domain_page(table_vaddr);
 }
 
-static void amd_dump_p2m_table(struct domain *d)
+static void amd_dump_page_tables(struct domain *d)
 {
     const struct domain_iommu *hd = dom_iommu(d);
 
     if ( !hd->arch.amd_iommu.root_table )
         return;
 
-    printk("p2m table has %d levels\n", hd->arch.amd_iommu.paging_mode);
-    amd_dump_p2m_table_level(hd->arch.amd_iommu.root_table,
-                             hd->arch.amd_iommu.paging_mode, 0, 0);
+    printk("table has %d levels\n", hd->arch.amd_iommu.paging_mode);
+    amd_dump_page_table_level(hd->arch.amd_iommu.root_table,
+                              hd->arch.amd_iommu.paging_mode, 0, 0);
 }
 
 static const struct iommu_ops __initconstrel _iommu_ops = {
@@ -586,7 +586,7 @@ static const struct iommu_ops __initconstrel _iommu_ops = {
     .suspend = amd_iommu_suspend,
     .resume = amd_iommu_resume,
     .crash_shutdown = amd_iommu_crash_shutdown,
-    .dump_p2m_table = amd_dump_p2m_table,
+    .dump_page_tables = amd_dump_page_tables,
 };
 
 static const struct iommu_init_ops __initconstrel _iommu_init_ops = {
diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index 6a3803ff2c..5bc190bf98 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -22,7 +22,7 @@
 #include <xen/keyhandler.h>
 #include <xsm/xsm.h>
 
-static void iommu_dump_p2m_table(unsigned char key);
+static void iommu_dump_page_tables(unsigned char key);
 
 unsigned int __read_mostly iommu_dev_iotlb_timeout = 1000;
 integer_param("iommu_dev_iotlb_timeout", iommu_dev_iotlb_timeout);
@@ -212,7 +212,7 @@ void __hwdom_init iommu_hwdom_init(struct domain *d)
     if ( !is_iommu_enabled(d) )
         return;
 
-    register_keyhandler('o', &iommu_dump_p2m_table, "dump iommu p2m table", 0);
+    register_keyhandler('o', &iommu_dump_page_tables, "dump iommu page tables", 0);
 
     hd->platform_ops->hwdom_init(d);
 }
@@ -513,7 +513,7 @@ bool_t iommu_has_feature(struct domain *d, enum iommu_feature feature)
     return is_iommu_enabled(d) && test_bit(feature, dom_iommu(d)->features);
 }
 
-static void iommu_dump_p2m_table(unsigned char key)
+static void iommu_dump_page_tables(unsigned char key)
 {
     struct domain *d;
     const struct iommu_ops *ops;
@@ -535,12 +535,12 @@ static void iommu_dump_p2m_table(unsigned char key)
 
         if ( iommu_use_hap_pt(d) )
         {
-            printk("\ndomain%d IOMMU p2m table shared with MMU: \n", d->domain_id);
+            printk("%pd: IOMMU page tables shared with MMU\n", d);
             continue;
         }
 
-        printk("\ndomain%d IOMMU p2m table: \n", d->domain_id);
-        ops->dump_p2m_table(d);
+        printk("%pd: IOMMU page tables: \n", d);
+        ops->dump_page_tables(d);
     }
 
     rcu_read_unlock(&domlist_read_lock);
diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index d09ca3fb3d..82d7eb6224 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -2545,8 +2545,8 @@ static void vtd_resume(void)
     }
 }
 
-static void vtd_dump_p2m_table_level(paddr_t pt_maddr, int level, paddr_t gpa, 
-                                     int indent)
+static void vtd_dump_page_table_level(paddr_t pt_maddr, int level, paddr_t gpa,
+                                      int indent)
 {
     paddr_t address;
     int i;
@@ -2575,8 +2575,8 @@ static void vtd_dump_p2m_table_level(paddr_t pt_maddr, int level, paddr_t gpa,
 
         address = gpa + offset_level_address(i, level);
         if ( next_level >= 1 ) 
-            vtd_dump_p2m_table_level(dma_pte_addr(*pte), next_level, 
-                                     address, indent + 1);
+            vtd_dump_page_table_level(dma_pte_addr(*pte), next_level,
+                                      address, indent + 1);
         else
             printk("%*sdfn: %08lx mfn: %08lx\n",
                    indent, "",
@@ -2587,17 +2587,13 @@ static void vtd_dump_p2m_table_level(paddr_t pt_maddr, int level, paddr_t gpa,
     unmap_vtd_domain_page(pt_vaddr);
 }
 
-static void vtd_dump_p2m_table(struct domain *d)
+static void vtd_dump_page_tables(struct domain *d)
 {
-    const struct domain_iommu *hd;
+    const struct domain_iommu *hd = dom_iommu(d);
 
-    if ( list_empty(&acpi_drhd_units) )
-        return;
-
-    hd = dom_iommu(d);
-    printk("p2m table has %d levels\n", agaw_to_level(hd->arch.vtd.agaw));
-    vtd_dump_p2m_table_level(hd->arch.vtd.pgd_maddr,
-                             agaw_to_level(hd->arch.vtd.agaw), 0, 0);
+    printk("table has %d levels\n", agaw_to_level(hd->arch.vtd.agaw));
+    vtd_dump_page_table_level(hd->arch.vtd.pgd_maddr,
+                              agaw_to_level(hd->arch.vtd.agaw), 0, 0);
 }
 
 static int __init intel_iommu_quarantine_init(struct domain *d)
@@ -2686,7 +2682,7 @@ static struct iommu_ops __initdata vtd_ops = {
     .iotlb_flush = iommu_flush_iotlb_pages,
     .iotlb_flush_all = iommu_flush_iotlb_all,
     .get_reserved_device_memory = intel_iommu_get_reserved_device_memory,
-    .dump_p2m_table = vtd_dump_p2m_table,
+    .dump_page_tables = vtd_dump_page_tables,
 };
 
 const struct iommu_init_ops __initconstrel intel_iommu_init_ops = {
diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
index cc122fd10b..804b283c1b 100644
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -272,7 +272,7 @@ struct iommu_ops {
                                     unsigned int flush_flags);
     int __must_check (*iotlb_flush_all)(struct domain *d);
     int (*get_reserved_device_memory)(iommu_grdm_t *, void *);
-    void (*dump_p2m_table)(struct domain *d);
+    void (*dump_page_tables)(struct domain *d);
 
 #ifdef CONFIG_HAS_DEVICE_TREE
     /*
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri Jul 24 16:46:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jul 2020 16:46:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jz0qJ-0001CS-M1; Fri, 24 Jul 2020 16:46:43 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=yL+a=BD=xen.org=paul@srs-us1.protection.inumbo.net>)
 id 1jz0qI-00015d-EB
 for xen-devel@lists.xenproject.org; Fri, 24 Jul 2020 16:46:42 +0000
X-Inumbo-ID: 3b709e24-cdcd-11ea-a42a-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 3b709e24-cdcd-11ea-a42a-12813bfff9fa;
 Fri, 24 Jul 2020 16:46:33 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:MIME-Version:
 References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=hPWM6csyUMwffzj4JmSc+ReN9q/RGN5MRKt+OXyzDgc=; b=WI+SOeayTYJXOwV1a/7osUm/8r
 myWTAuywvxg+Fsj/nEulRnchp+XY5sMNkLIEY7SO/zRq4PL5Gi+uIovYKpcZDDXLmEk6A0mJtU5d7
 IZG/kgLgl7uA3xNhej/ePuX0tfhUCU/yG8Wn0LxB2p8//kPHJCkHTkyU1xsQMZj+y/I8=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1jz0q8-00054i-9D; Fri, 24 Jul 2020 16:46:32 +0000
Received: from host86-143-223-30.range86-143.btcentralplus.com
 ([86.143.223.30] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1jz0q8-0006WL-0C; Fri, 24 Jul 2020 16:46:32 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [PATCH 5/6] iommu: remove the share_p2m operation
Date: Fri, 24 Jul 2020 17:46:18 +0100
Message-Id: <20200724164619.1245-6-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200724164619.1245-1-paul@xen.org>
References: <20200724164619.1245-1-paul@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin Tian <kevin.tian@intel.com>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Paul Durrant <pdurrant@amazon.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Paul Durrant <pdurrant@amazon.com>

Sharing of HAP tables is VT-d specific so the operation is never defined for
AMD IOMMU. There's also no need to pro-actively set vtd.pgd_maddr when using
shared EPT as it is straightforward to simply define a helper function to
return the appropriate value in the shared and non-shared cases.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <george.dunlap@citrix.com>
Cc: Wei Liu <wl@xen.org>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>
Cc: Kevin Tian <kevin.tian@intel.com>
---
 xen/arch/x86/mm/p2m.c               |  3 --
 xen/drivers/passthrough/iommu.c     |  8 -----
 xen/drivers/passthrough/vtd/iommu.c | 55 ++++++++++++++---------------
 xen/include/xen/iommu.h             |  3 --
 4 files changed, 27 insertions(+), 42 deletions(-)

diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index c5f52a4118..95b5055648 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -726,9 +726,6 @@ int p2m_alloc_table(struct p2m_domain *p2m)
 
     p2m->phys_table = pagetable_from_mfn(top_mfn);
 
-    if ( hap_enabled(d) )
-        iommu_share_p2m_table(d);
-
     p2m_unlock(p2m);
     return 0;
 }
diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index f32d8e25a8..6a3803ff2c 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -478,14 +478,6 @@ int iommu_do_domctl(
     return ret;
 }
 
-void iommu_share_p2m_table(struct domain* d)
-{
-    ASSERT(hap_enabled(d));
-
-    if ( iommu_use_hap_pt(d) )
-        iommu_get_ops()->share_p2m(d);
-}
-
 void iommu_crash_shutdown(void)
 {
     if ( !iommu_crash_disable )
diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index 149d7122c3..d09ca3fb3d 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -313,6 +313,26 @@ static u64 addr_to_dma_page_maddr(struct domain *domain, u64 addr, int alloc)
     return pte_maddr;
 }
 
+static u64 domain_pgd_maddr(struct domain *d)
+{
+    struct domain_iommu *hd = dom_iommu(d);
+
+    ASSERT(spin_is_locked(&hd->arch.mapping_lock));
+
+    if ( iommu_use_hap_pt(d) )
+    {
+        mfn_t pgd_mfn =
+            pagetable_get_mfn(p2m_get_pagetable(p2m_get_hostp2m(d)));
+
+        return pagetable_get_paddr(pagetable_from_mfn(pgd_mfn));
+    }
+
+    if ( !hd->arch.vtd.pgd_maddr )
+        addr_to_dma_page_maddr(d, 0, 1);
+
+    return hd->arch.vtd.pgd_maddr;
+}
+
 static void iommu_flush_write_buffer(struct vtd_iommu *iommu)
 {
     u32 val;
@@ -1347,22 +1367,17 @@ int domain_context_mapping_one(
     {
         spin_lock(&hd->arch.mapping_lock);
 
-        /* Ensure we have pagetables allocated down to leaf PTE. */
-        if ( hd->arch.vtd.pgd_maddr == 0 )
+        pgd_maddr = domain_pgd_maddr(domain);
+        if ( !pgd_maddr )
         {
-            addr_to_dma_page_maddr(domain, 0, 1);
-            if ( hd->arch.vtd.pgd_maddr == 0 )
-            {
-            nomem:
-                spin_unlock(&hd->arch.mapping_lock);
-                spin_unlock(&iommu->lock);
-                unmap_vtd_domain_page(context_entries);
-                return -ENOMEM;
-            }
+        nomem:
+            spin_unlock(&hd->arch.mapping_lock);
+            spin_unlock(&iommu->lock);
+            unmap_vtd_domain_page(context_entries);
+            return -ENOMEM;
         }
 
         /* Skip top levels of page tables for 2- and 3-level DRHDs. */
-        pgd_maddr = hd->arch.vtd.pgd_maddr;
         for ( agaw = level_to_agaw(4);
               agaw != level_to_agaw(iommu->nr_pt_levels);
               agaw-- )
@@ -1727,9 +1742,6 @@ static void iommu_domain_teardown(struct domain *d)
 
     ASSERT(is_iommu_enabled(d));
 
-    if ( iommu_use_hap_pt(d) )
-        return;
-
     hd->arch.vtd.pgd_maddr = 0;
 }
 
@@ -1821,18 +1833,6 @@ static int __init vtd_ept_page_compatible(struct vtd_iommu *iommu)
            (ept_has_1gb(ept_cap) && opt_hap_1gb) <= cap_sps_1gb(vtd_cap);
 }
 
-/*
- * set VT-d page table directory to EPT table if allowed
- */
-static void iommu_set_pgd(struct domain *d)
-{
-    mfn_t pgd_mfn;
-
-    pgd_mfn = pagetable_get_mfn(p2m_get_pagetable(p2m_get_hostp2m(d)));
-    dom_iommu(d)->arch.vtd.pgd_maddr =
-        pagetable_get_paddr(pagetable_from_mfn(pgd_mfn));
-}
-
 static int rmrr_identity_mapping(struct domain *d, bool_t map,
                                  const struct acpi_rmrr_unit *rmrr,
                                  u32 flag)
@@ -2682,7 +2682,6 @@ static struct iommu_ops __initdata vtd_ops = {
     .adjust_irq_affinities = adjust_vtd_irq_affinities,
     .suspend = vtd_suspend,
     .resume = vtd_resume,
-    .share_p2m = iommu_set_pgd,
     .crash_shutdown = vtd_crash_shutdown,
     .iotlb_flush = iommu_flush_iotlb_pages,
     .iotlb_flush_all = iommu_flush_iotlb_all,
diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
index ec639ba128..cc122fd10b 100644
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -266,7 +266,6 @@ struct iommu_ops {
 
     int __must_check (*suspend)(void);
     void (*resume)(void);
-    void (*share_p2m)(struct domain *d);
     void (*crash_shutdown)(void);
     int __must_check (*iotlb_flush)(struct domain *d, dfn_t dfn,
                                     unsigned int page_count,
@@ -343,8 +342,6 @@ void iommu_resume(void);
 void iommu_crash_shutdown(void);
 int iommu_get_reserved_device_memory(iommu_grdm_t *, void *);
 
-void iommu_share_p2m_table(struct domain *d);
-
 #ifdef CONFIG_HAS_PCI
 int iommu_do_pci_domctl(struct xen_domctl *, struct domain *d,
                         XEN_GUEST_HANDLE_PARAM(xen_domctl_t));
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri Jul 24 16:54:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jul 2020 16:54:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jz0xn-0002du-JW; Fri, 24 Jul 2020 16:54:27 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5T8C=BD=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jz0xm-0002dp-6n
 for xen-devel@lists.xenproject.org; Fri, 24 Jul 2020 16:54:26 +0000
X-Inumbo-ID: 54864a3e-cdce-11ea-8858-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 54864a3e-cdce-11ea-8858-bc764e2007e4;
 Fri, 24 Jul 2020 16:54:25 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=WWtqmrMo2elkg9mLTxYQ8fXFFW+71EuNsD/v4f+ImLM=; b=VyM+LrCf6xUTgvvLwSHYGPYw3S
 juzI5inNvOG5poqTEMizJhrocLdGqlGHRAdmt4MvEuxWLdHHBpSb8Vh0HwyzD4s4I7cKICTkMobMU
 4ytI8+RkjFT2iRry8Lb1DcmyVyecjF3f1TLHXfhlp7yIe5mC/Nj0oD3JMelQ1/4y6yQc=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jz0xj-0005ED-Ay; Fri, 24 Jul 2020 16:54:23 +0000
Received: from 54-240-197-227.amazon.com ([54.240.197.227]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jz0xj-00023s-0n; Fri, 24 Jul 2020 16:54:23 +0000
Subject: Re: [RFC PATCH v1 1/4] arm/pci: PCI setup and PCI host bridge
 discovery within XEN on ARM.
To: Jan Beulich <jbeulich@suse.com>
References: <cover.1595511416.git.rahul.singh@arm.com>
 <64ebd4ef614b36a5844c52426a4a6a4a23b1f087.1595511416.git.rahul.singh@arm.com>
 <20200724144404.GJ7191@Air-de-Roger>
 <0c53b2cb-47e9-f34e-8922-7095669175be@xen.org>
 <980fc583-edb6-b536-f211-f6b8ea6d21a7@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <3e15d186-e323-613f-05a2-ee02480d74cf@xen.org>
Date: Fri, 24 Jul 2020 17:54:20 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:68.0)
 Gecko/20100101 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <980fc583-edb6-b536-f211-f6b8ea6d21a7@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Rahul Singh <rahul.singh@arm.com>, Bertrand.Marquis@arm.com,
 Stefano Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org,
 nd@arm.com, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi Jan,

On 24/07/2020 17:01, Jan Beulich wrote:
> On 24.07.2020 17:15, Julien Grall wrote:
>> On 24/07/2020 15:44, Roger Pau Monné wrote:
>>>> +
>>>> +    struct pci_host_bridge *bridge = pci_find_host_bridge(sbdf.seg, sbdf.bus);
>>>> +
>>>> +    if ( unlikely(!bridge) )
>>>> +    {
>>>> +        printk(XENLOG_ERR "Unable to find bridge for "PRI_pci"\n",
>>>> +                sbdf.seg, sbdf.bus, sbdf.dev, sbdf.fn);
>>>
>>> I had a patch to add a custom modifier to our printf format in
>>> order to handle pci_sbdf_t natively:
>>>
>>> https://patchew.org/Xen/20190822065132.48200-1-roger.pau@citrix.com/
>>>
>>> It missed maintainers Acks and was never committed. Since you are
>>> doing a bunch of work here, and likely adding a lot of SBDF related
>>> prints, feel free to import the modifier (%pp) and use in your code
>>> (do not attempt to switch existing users, or it's likely to get
>>> stuck again).
>>
>> I forgot about this patch :/. It would be good to revive it. Which acks
>> are you missing?
> 
> It wasn't so much missing acks, but a controversy. And that not so much
> about switching existing users, but whether to indeed derive this from
> %p (which I continue to consider inefficient).

Looking at the thread, I can see you (reluctantly) acked the components 
for which you are the sole maintainer. Kevin gave his ack for the VT-d 
code and I gave mine for the common code.

I would suggest not rehashing the argument unless another maintainer 
agrees with your position. It looks to me like the next step is for 
Roger (or someone else) to resend the patch so we can collect the 
missing ack (I think the only one missing is from Andrew).
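
For readers following the thread: the %pp idea discussed above is about
rendering a pci_sbdf_t in the conventional seg:bus:dev.fn notation. A
hypothetical Python sketch of that output format (field widths assumed
here for illustration, not taken from the actual Xen patch):

```python
# Hypothetical sketch of the text a %pp-style specifier would emit for a
# PCI SBDF (segment:bus:device.function); widths are assumptions.
def format_sbdf(seg: int, bus: int, dev: int, fn: int) -> str:
    """Render an SBDF in the usual xxxx:xx:xx.x notation."""
    return f"{seg:04x}:{bus:02x}:{dev:02x}.{fn}"

print(format_sbdf(0, 3, 0, 0))  # 0000:03:00.0
```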

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Jul 24 17:17:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jul 2020 17:17:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jz1KA-0004hM-Ml; Fri, 24 Jul 2020 17:17:34 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=qjWU=BD=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jz1K9-0004gw-7L
 for xen-devel@lists.xenproject.org; Fri, 24 Jul 2020 17:17:33 +0000
X-Inumbo-ID: 8b702a26-cdd1-11ea-a43d-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 8b702a26-cdd1-11ea-a43d-12813bfff9fa;
 Fri, 24 Jul 2020 17:17:26 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=yuruKsJDZjTlRFTxb1Igpkoa1I0FiNQjQVWWS0Pc2ik=; b=cnJe9Us6NZIDYFJBqrNOk8cWF
 6Vr4PA6jeL+NqKvmIxfA+qr+1zBXVFD5HyH5vKBOrkrTT/L8HWnZr8NK01AZfPcemNjBCH0xNyR0J
 4N+b9+pA1NGO7PO4XBi5akGUvCbBrW9Tn7CJWbyiLBPloy0XtqXvfr5YbVoz/x5E88jt4=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jz1K1-0005le-Bo; Fri, 24 Jul 2020 17:17:25 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jz1K1-0004Yp-0E; Fri, 24 Jul 2020 17:17:25 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jz1K0-0006P9-Vm; Fri, 24 Jul 2020 17:17:24 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-152162-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 152162: regressions - trouble: fail/pass/starved
X-Osstest-Failures: linux-linus:test-arm64-arm64-libvirt-xsm:guest-start/debian.repeat:fail:regression
 linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-qemuu-freebsd12-amd64:hosts-allocate:starved:nonblocking
 linux-linus:test-amd64-amd64-qemuu-freebsd11-amd64:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This: linux=f37e99aca03f63aa3f2bd13ceaf769455d12c4b0
X-Osstest-Versions-That: linux=1b5044021070efa3259f3e9548dc35d1eb6aa844
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 24 Jul 2020 17:17:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 152162 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/152162/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-libvirt-xsm 16 guest-start/debian.repeat fail REGR. vs. 151214

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 151214
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 151214
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 151214
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 151214
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 151214
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 151214
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 151214
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 151214
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-freebsd12-amd64  2 hosts-allocate           starved n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  2 hosts-allocate           starved n/a

version targeted for testing:
 linux                f37e99aca03f63aa3f2bd13ceaf769455d12c4b0
baseline version:
 linux                1b5044021070efa3259f3e9548dc35d1eb6aa844

Last test of basis   151214  2020-06-18 02:27:46 Z   36 days
Failing since        151236  2020-06-19 19:10:35 Z   34 days   54 attempts
Testing same since   152162  2020-07-23 22:10:08 Z    0 days    1 attempts

------------------------------------------------------------
847 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       starved 
 test-amd64-amd64-qemuu-freebsd12-amd64                       starved 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 47045 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Jul 24 17:22:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jul 2020 17:22:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jz1Oo-0005eM-Al; Fri, 24 Jul 2020 17:22:22 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ulAu=BD=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jz1Oo-0005eH-1h
 for xen-devel@lists.xenproject.org; Fri, 24 Jul 2020 17:22:22 +0000
X-Inumbo-ID: 3ae3a48b-cdd2-11ea-8862-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3ae3a48b-cdd2-11ea-8862-bc764e2007e4;
 Fri, 24 Jul 2020 17:22:21 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jz1Om-00021j-GN; Fri, 24 Jul 2020 18:22:20 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH 01/11] schema: Add index for quick lookup by host
Date: Fri, 24 Jul 2020 18:22:06 +0100
Message-Id: <20200724172216.28204-2-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200724172216.28204-1-ian.jackson@eu.citrix.com>
References: <20200724172216.28204-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 schema/runvars-host-index.sql | 8 ++++++++
 1 file changed, 8 insertions(+)
 create mode 100644 schema/runvars-host-index.sql

diff --git a/schema/runvars-host-index.sql b/schema/runvars-host-index.sql
new file mode 100644
index 00000000..fec0b960
--- /dev/null
+++ b/schema/runvars-host-index.sql
@@ -0,0 +1,8 @@
+-- ##OSSTEST## 009 Preparatory
+--
+-- This index helps sg-report-host-history find relevant flights.
+
+CREATE INDEX runvars_host_idx
+    ON runvars (val, flight)
+ WHERE name ='host'
+    OR name LIKE '%_host';
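
To illustrate the lookup this partial index is meant to accelerate, a
self-contained sketch using SQLite (whose partial-index syntax is close
enough for illustration); the runvars columns and host name below are
assumptions for the example, not osstest's actual schema:

```python
import sqlite3

# In-memory stand-in for the runvars table, with a partial index that
# only covers 'host'-like runvars, mirroring the patch above.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE runvars (flight INTEGER, name TEXT, val TEXT)")
con.execute(
    "CREATE INDEX runvars_host_idx ON runvars (val, flight) "
    "WHERE name = 'host' OR name LIKE '%_host'"
)
con.execute("INSERT INTO runvars VALUES (152162, 'host', 'laxton0')")
con.execute("INSERT INTO runvars VALUES (152162, 'toolstack', 'xl')")

# sg-report-host-history-style lookup: which flights used this host?
rows = con.execute(
    "SELECT flight FROM runvars WHERE name = 'host' AND val = 'laxton0'"
).fetchall()
print(rows)  # [(152162,)]
```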
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri Jul 24 17:22:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jul 2020 17:22:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jz1Ot-0005f8-J1; Fri, 24 Jul 2020 17:22:27 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ulAu=BD=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jz1Ot-0005eH-0Q
 for xen-devel@lists.xenproject.org; Fri, 24 Jul 2020 17:22:27 +0000
X-Inumbo-ID: 3ae3a489-cdd2-11ea-8862-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3ae3a489-cdd2-11ea-8862-bc764e2007e4;
 Fri, 24 Jul 2020 17:22:21 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jz1Om-00021j-6C; Fri, 24 Jul 2020 18:22:20 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH 00/11] Improve performance of sg-report-host-history
Date: Fri, 24 Jul 2020 18:22:05 +0100
Message-Id: <20200724172216.28204-1-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

This is reasonably straightforward.

In my tests it reduces the time for an incremental run, with a hot
cache, from several hundred seconds to a handful of seconds.

This series really wants to go after the flight report improvements,
although the schema updates could be combined.

Ian Jackson (11):
  schema: Add index for quick lookup by host
  sg-report-host-history: Find flight limit by flight start date
  sg-report-host-history: Drop per-job debug etc.
  Executive: Export opendb_tests
  sg-report-host-history: Add a debug print after sorting jobs
  sg-report-host-history: Do the main query per host
  sg-report-host-history: Reorganisation: Make mainquery per-host
  sg-report-host-history: Reorganisation: Read old logs later
  sg-report-host-history: Reorganisation: Change loops
  sg-report-host-history: Drop a redundant AND clause
  sg-report-host-history: Fork

 Osstest/Executive.pm          |   2 +-
 schema/runvars-host-index.sql |   8 ++
 sg-report-host-history        | 152 +++++++++++++++++++---------------
 3 files changed, 94 insertions(+), 68 deletions(-)
 create mode 100644 schema/runvars-host-index.sql

-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri Jul 24 17:22:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jul 2020 17:22:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jz1Oy-0005gU-ST; Fri, 24 Jul 2020 17:22:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ulAu=BD=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jz1Oy-0005eH-0Z
 for xen-devel@lists.xenproject.org; Fri, 24 Jul 2020 17:22:32 +0000
X-Inumbo-ID: 3b648008-cdd2-11ea-8862-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3b648008-cdd2-11ea-8862-bc764e2007e4;
 Fri, 24 Jul 2020 17:22:21 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jz1Om-00021j-Ob; Fri, 24 Jul 2020 18:22:20 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH 02/11] sg-report-host-history: Find flight limit by
 flight start date
Date: Fri, 24 Jul 2020 18:22:07 +0100
Message-Id: <20200724172216.28204-3-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200724172216.28204-1-ian.jackson@eu.citrix.com>
References: <20200724172216.28204-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

By default we look for anything in (roughly) the last year.

This query is in fact quite fast because the flights table is small.

There is still the per-host limit of $limit (2000) recent runs.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 sg-report-host-history | 56 ++++++++++++++++++++----------------------
 1 file changed, 27 insertions(+), 29 deletions(-)

diff --git a/sg-report-host-history b/sg-report-host-history
index 54738e68..5dd875c1 100755
--- a/sg-report-host-history
+++ b/sg-report-host-history
@@ -29,6 +29,7 @@ use POSIX;
 use Osstest::Executive qw(:DEFAULT :colours);
 
 our $limit= 2000;
+our $timelimit= 86400 * (366 + 14);
 our $flightlimit;
 our $htmlout = ".";
 our $read_existing=1;
@@ -45,6 +46,8 @@ while (@ARGV && $ARGV[0] =~ m/^-/) {
     last if m/^--?$/;
     if (m/^--(limit)\=([1-9]\d*)$/) {
         $$1= $2;
+    } elsif (m/^--time-limit\=([1-9]\d*)$/) {
+        $timelimit= $1;
     } elsif (m/^--flight-limit\=([1-9]\d*)$/) {
 	$flightlimit= $1;
     } elsif (restrictflight_arg($_)) {
@@ -108,38 +111,33 @@ sub read_existing_logs ($) {
 }
 
 sub computeflightsrange () {
-    if (!$flightlimit) {
-	my $flagscond =
-	    '('.join(' OR ', map { "f.hostflag = 'blessed-$_'" } @blessings).')';
-	my $nhostsq = db_prepare(<<END);
-	    SELECT count(*)
-	      FROM resources r
-	     WHERE restype='host'
-	       AND EXISTS (SELECT 1
-			     FROM hostflags f
-			    WHERE f.hostname=r.resname
-			      AND $flagscond)
+    if ($flightlimit) {
+	my $minflightsq = db_prepare(<<END);
+	    SELECT flight
+	      FROM (
+		SELECT flight
+		  FROM flights
+		 WHERE $restrictflight_cond
+		 ORDER BY flight DESC
+		 LIMIT $flightlimit
+	      ) f
+	      ORDER BY flight ASC
+	      LIMIT 1
 END
-        $nhostsq->execute();
-	my ($nhosts) = $nhostsq->fetchrow_array();
-	print DEBUG "COUNTED $nhosts hosts\n";
-	$flightlimit = $nhosts * $limit * 2;
-    }
-
-    my $minflightsq = db_prepare(<<END);
-	SELECT flight
-	  FROM (
+	$minflightsq->execute();
+	($minflight,) = $minflightsq->fetchrow_array();
+    } else {
+	my $minflightsq = db_prepare(<<END);
 	    SELECT flight
-	      FROM flights
-             WHERE $restrictflight_cond
-	     ORDER BY flight DESC
-	     LIMIT $flightlimit
-	  ) f
-	  ORDER BY flight ASC
-	  LIMIT 1
+              FROM flights
+             WHERE started >= ?
+          ORDER BY flight ASC
+             LIMIT 1
 END
-    $minflightsq->execute();
-    ($minflight,) = $minflightsq->fetchrow_array();
+	my $now = time // die $!;
+        $minflightsq->execute($now - $timelimit);
+	($minflight,) = $minflightsq->fetchrow_array();
+    }
     $minflight //= 0;
 
     $flightcond = "(flight > $minflight)";
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri Jul 24 17:22:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jul 2020 17:22:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jz1P4-0005il-AA; Fri, 24 Jul 2020 17:22:38 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ulAu=BD=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jz1P3-0005eH-0m
 for xen-devel@lists.xenproject.org; Fri, 24 Jul 2020 17:22:37 +0000
X-Inumbo-ID: 3b8d8494-cdd2-11ea-8862-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3b8d8494-cdd2-11ea-8862-bc764e2007e4;
 Fri, 24 Jul 2020 17:22:21 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jz1Om-00021j-W9; Fri, 24 Jul 2020 18:22:21 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH 03/11] sg-report-host-history: Drop per-job debug etc.
Date: Fri, 24 Jul 2020 18:22:08 +0100
Message-Id: <20200724172216.28204-4-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200724172216.28204-1-ian.jackson@eu.citrix.com>
References: <20200724172216.28204-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

This printing has a significant effect on the performance of this
program, at least after we optimise various other things.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 sg-report-host-history | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/sg-report-host-history b/sg-report-host-history
index 5dd875c1..8b409fc7 100755
--- a/sg-report-host-history
+++ b/sg-report-host-history
@@ -102,9 +102,9 @@ sub read_existing_logs ($) {
 	    my $k = $1;
 	    s{\%([0-9a-f]{2})}{ chr hex $1 }ge;
 	    $ch->{$k} = $_;
-	    print DEBUG "GOTCACHE $hostname $k\n";
+#	    print DEBUG "GOTCACHE $hostname $k\n";
 	}
-	print DEBUG "GOTCACHE $hostname \@ $jr->{flight} $jr->{job} $jr->{status},$jr->{name}\n";
+#	print DEBUG "GOTCACHE $hostname \@ $jr->{flight} $jr->{job} $jr->{status},$jr->{name}\n";
 	$tcache->{$jr->{flight},$jr->{job},$jr->{status},$jr->{name}} = $jr;
     }
     close H;
@@ -272,7 +272,7 @@ END
     my @rows;
     my $cachehits = 0;
     foreach my $jr (@$inrows) {
-	print DEBUG "JOB $jr->{flight}.$jr->{job} ";
+	#print DEBUG "JOB $jr->{flight}.$jr->{job} ";
 
 	my $cacherow =
 	    $tcache->{$jr->{flight},$jr->{job},$jr->{status},$jr->{name}};
@@ -283,11 +283,11 @@ END
 
 	my $endedrow = jobquery($endedq, $jr, 'e');
 	if (!$endedrow) {
-	    print DEBUG "no-finished\n";
+	    #print DEBUG "no-finished\n";
 	    next;
 	}
-	print DEBUG join " ", map { $endedrow->{$_} } sort keys %$endedrow;
-	print DEBUG ".\n";
+	#print DEBUG join " ", map { $endedrow->{$_} } sort keys %$endedrow;
+	#print DEBUG ".\n";
 
 	push @rows, { %$jr, %$endedrow };
     }
@@ -329,7 +329,7 @@ END
 	    next;
 	}
 
-        print DEBUG "JR $jr->{flight}.$jr->{job}\n";
+        #print DEBUG "JR $jr->{flight}.$jr->{job}\n";
 	my $ir = jobquery($infoq, $jr, 'i');
 	my $ar = jobquery($allocdq, $jr, 'a');
 	my $ident = $jr->{name};
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri Jul 24 17:22:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jul 2020 17:22:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jz1P8-0005kQ-Ie; Fri, 24 Jul 2020 17:22:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ulAu=BD=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jz1P8-0005eH-0j
 for xen-devel@lists.xenproject.org; Fri, 24 Jul 2020 17:22:42 +0000
X-Inumbo-ID: 3bb38216-cdd2-11ea-8862-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3bb38216-cdd2-11ea-8862-bc764e2007e4;
 Fri, 24 Jul 2020 17:22:22 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jz1On-00021j-7W; Fri, 24 Jul 2020 18:22:21 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH 04/11] Executive: Export opendb_tests
Date: Fri, 24 Jul 2020 18:22:09 +0100
Message-Id: <20200724172216.28204-5-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200724172216.28204-1-ian.jackson@eu.citrix.com>
References: <20200724172216.28204-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

sg-report-host-history is going to want this in a moment.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 Osstest/Executive.pm | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/Osstest/Executive.pm b/Osstest/Executive.pm
index 33de3708..c1095bac 100644
--- a/Osstest/Executive.pm
+++ b/Osstest/Executive.pm
@@ -49,7 +49,7 @@ BEGIN {
                       task_spec_desc findtask findtask_spec @all_lock_tables
                       restrictflight_arg restrictflight_cond
                       report_run_getinfo report_altcolour
-                      report_altchangecolour
+                      report_altchangecolour opendb_tests
                       report_blessingscond report_find_push_age_info
                       tcpconnect_queuedaemon plan_search
                       manual_allocation_base_jobinfo
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri Jul 24 17:22:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jul 2020 17:22:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jz1PD-0005n5-Sp; Fri, 24 Jul 2020 17:22:47 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ulAu=BD=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jz1PD-0005eH-0k
 for xen-devel@lists.xenproject.org; Fri, 24 Jul 2020 17:22:47 +0000
X-Inumbo-ID: 3b8d8495-cdd2-11ea-8862-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3b8d8495-cdd2-11ea-8862-bc764e2007e4;
 Fri, 24 Jul 2020 17:22:22 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jz1On-00021j-GB; Fri, 24 Jul 2020 18:22:21 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH 05/11] sg-report-host-history: Add a debug print after
 sorting jobs
Date: Fri, 24 Jul 2020 18:22:10 +0100
Message-Id: <20200724172216.28204-6-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200724172216.28204-1-ian.jackson@eu.citrix.com>
References: <20200724172216.28204-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

This helps rule out this sorting step as a source of slowness.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 sg-report-host-history | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/sg-report-host-history b/sg-report-host-history
index 8b409fc7..25a0c847 100755
--- a/sg-report-host-history
+++ b/sg-report-host-history
@@ -318,6 +318,8 @@ END
 
     @rows = sort { $b->{finished} <=> $a->{finished} } @rows;
 
+    print DEBUG "SORTED\n";
+
     my $alternate = 0;
     my $wrote = 0;
     my $runvarq_hits = 0;
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri Jul 24 17:22:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jul 2020 17:22:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jz1PJ-0005q3-5S; Fri, 24 Jul 2020 17:22:53 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ulAu=BD=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jz1PI-0005eH-15
 for xen-devel@lists.xenproject.org; Fri, 24 Jul 2020 17:22:52 +0000
X-Inumbo-ID: 3b8d8496-cdd2-11ea-8862-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3b8d8496-cdd2-11ea-8862-bc764e2007e4;
 Fri, 24 Jul 2020 17:22:22 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jz1On-00021j-Ot; Fri, 24 Jul 2020 18:22:21 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH 06/11] sg-report-host-history: Do the main query per
 host
Date: Fri, 24 Jul 2020 18:22:11 +0100
Message-Id: <20200724172216.28204-7-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200724172216.28204-1-ian.jackson@eu.citrix.com>
References: <20200724172216.28204-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

In f6001d628c3b3fd42b10cd15351981a04bc02572 we combined these
queries into one:
  sg-report-host-history: Aggregate runvars query for all hosts

Now that we have an index, there is a faster way for the db to do this
query: via that index.  But it doesn't like to do that if we aggregate
the queries.  Experimentally, doing this query separately once per
host is significantly faster.

Also, later, it will allow us to parallelise this work.

So, we undo that.  (Not by reverting, though.)

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 schema/runvars-host-index.sql |  2 +-
 sg-report-host-history        | 27 +++++++++------------------
 2 files changed, 10 insertions(+), 19 deletions(-)

diff --git a/schema/runvars-host-index.sql b/schema/runvars-host-index.sql
index fec0b960..cd6a1f9e 100644
--- a/schema/runvars-host-index.sql
+++ b/schema/runvars-host-index.sql
@@ -1,4 +1,4 @@
--- ##OSSTEST## 009 Preparatory
+-- ##OSSTEST## 009 Needed
 --
 -- This index helps sg-report-host-history find relevant flights.
 
diff --git a/sg-report-host-history b/sg-report-host-history
index 25a0c847..ab88828e 100755
--- a/sg-report-host-history
+++ b/sg-report-host-history
@@ -165,34 +165,25 @@ sub jobquery ($$$) {
 our %hosts;
 
 sub mainquery () {
-    our $valcond = join " OR ", map { "val = ?" } keys %hosts;
-    our @params = keys %hosts;
-
     our $runvarq //= db_prepare(<<END);
-	SELECT flight, job, name, val, status
+	SELECT flight, job, name, status
 	  FROM runvars
           JOIN jobs USING (flight, job)
-	 WHERE $namecond
-	   AND ($valcond)
+	 WHERE (name = 'host' OR name LIKE '%_host')
+	   AND val = ?
 	   AND $flightcond
            AND $restrictflight_cond
            AND flight > ?
 	 ORDER BY flight DESC
-	 LIMIT ($limit * 3 + 100) * ?
+         LIMIT $limit * 2
 END
+    foreach my $host (sort keys %hosts) {
+	print DEBUG "MAINQUERY $host...\n";
+	$runvarq->execute($host, $minflight);
 
-    push @params, $minflight;
-    push @params, scalar keys %hosts;
-
-    print DEBUG "MAINQUERY...\n";
-    $runvarq->execute(@params);
-
-    print DEBUG "FIRST PASS\n";
-    while (my $jr= $runvarq->fetchrow_hashref()) {
-	print DEBUG " $jr->{flight}.$jr->{job} ";
-	push @{ $hosts{$jr->{val}} }, $jr;
+	$hosts{$host} = $runvarq->fetchall_arrayref({});
+	print DEBUG "MAINQUERY $host got ".(scalar @{ $hosts{$host} })."\n";
     }
-    print DEBUG "\n";
 }
 
 sub reporthost ($) {
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri Jul 24 17:22:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jul 2020 17:22:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jz1PO-0005sY-El; Fri, 24 Jul 2020 17:22:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ulAu=BD=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jz1PN-0005eH-12
 for xen-devel@lists.xenproject.org; Fri, 24 Jul 2020 17:22:57 +0000
X-Inumbo-ID: 3c428380-cdd2-11ea-8862-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3c428380-cdd2-11ea-8862-bc764e2007e4;
 Fri, 24 Jul 2020 17:22:23 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jz1Oo-00021j-1H; Fri, 24 Jul 2020 18:22:22 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH 07/11] sg-report-host-history: Reorganisation: Make
 mainquery per-host
Date: Fri, 24 Jul 2020 18:22:12 +0100
Message-Id: <20200724172216.28204-8-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200724172216.28204-1-ian.jackson@eu.citrix.com>
References: <20200724172216.28204-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

This moves the loop over hosts into the main program.  We are working
our way to a new code structure.

No functional change.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 sg-report-host-history | 19 +++++++++++--------
 1 file changed, 11 insertions(+), 8 deletions(-)

diff --git a/sg-report-host-history b/sg-report-host-history
index ab88828e..509d053d 100755
--- a/sg-report-host-history
+++ b/sg-report-host-history
@@ -164,7 +164,9 @@ sub jobquery ($$$) {
 
 our %hosts;
 
-sub mainquery () {
+sub mainquery ($) {
+    my ($host) = @_;
+
     our $runvarq //= db_prepare(<<END);
 	SELECT flight, job, name, status
 	  FROM runvars
@@ -177,13 +179,12 @@ sub mainquery () {
 	 ORDER BY flight DESC
          LIMIT $limit * 2
 END
-    foreach my $host (sort keys %hosts) {
-	print DEBUG "MAINQUERY $host...\n";
-	$runvarq->execute($host, $minflight);
 
-	$hosts{$host} = $runvarq->fetchall_arrayref({});
-	print DEBUG "MAINQUERY $host got ".(scalar @{ $hosts{$host} })."\n";
-    }
+    print DEBUG "MAINQUERY $host...\n";
+    $runvarq->execute($host, $minflight);
+
+    $hosts{$host} = $runvarq->fetchall_arrayref({});
+    print DEBUG "MAINQUERY $host got ".(scalar @{ $hosts{$host} })."\n";
 }
 
 sub reporthost ($) {
@@ -473,7 +474,9 @@ db_retry($dbh_tests, [], sub {
 });
 
 db_retry($dbh_tests, [], sub {
-    mainquery();
+    foreach my $host (sort keys %hosts) {
+	mainquery($host);
+    }
 });
 
 foreach my $host (sort keys %hosts) {
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri Jul 24 17:23:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jul 2020 17:23:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jz1PS-0005x0-O8; Fri, 24 Jul 2020 17:23:02 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ulAu=BD=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jz1PS-0005eH-1B
 for xen-devel@lists.xenproject.org; Fri, 24 Jul 2020 17:23:02 +0000
X-Inumbo-ID: 3b648009-cdd2-11ea-8862-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3b648009-cdd2-11ea-8862-bc764e2007e4;
 Fri, 24 Jul 2020 17:22:23 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jz1Oo-00021j-Dx; Fri, 24 Jul 2020 18:22:22 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH 08/11] sg-report-host-history: Reorganisation: Read old
 logs later
Date: Fri, 24 Jul 2020 18:22:13 +0100
Message-Id: <20200724172216.28204-9-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200724172216.28204-1-ian.jackson@eu.citrix.com>
References: <20200724172216.28204-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Perhaps at one point something read from these logs influenced the db
query for the flights range, but that is no longer the case and it
doesn't seem likely to need to come back.

We want to move the per-host stuff together.

No functional change.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 sg-report-host-history | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/sg-report-host-history b/sg-report-host-history
index 509d053d..4b0b5b2d 100755
--- a/sg-report-host-history
+++ b/sg-report-host-history
@@ -465,14 +465,14 @@ END
 
 exit 0 unless %hosts;
 
-foreach (keys %hosts) {
-    read_existing_logs($_);
-}
-
 db_retry($dbh_tests, [], sub {
     computeflightsrange();
 });
 
+foreach (keys %hosts) {
+    read_existing_logs($_);
+}
+
 db_retry($dbh_tests, [], sub {
     foreach my $host (sort keys %hosts) {
 	mainquery($host);
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri Jul 24 17:23:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jul 2020 17:23:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jz1PY-00060l-2E; Fri, 24 Jul 2020 17:23:08 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ulAu=BD=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jz1PX-0005eH-1D
 for xen-devel@lists.xenproject.org; Fri, 24 Jul 2020 17:23:07 +0000
X-Inumbo-ID: 3c9c94d8-cdd2-11ea-8862-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3c9c94d8-cdd2-11ea-8862-bc764e2007e4;
 Fri, 24 Jul 2020 17:22:23 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jz1Oo-00021j-N9; Fri, 24 Jul 2020 18:22:22 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH 09/11] sg-report-host-history: Reorganisation: Change
 loops
Date: Fri, 24 Jul 2020 18:22:14 +0100
Message-Id: <20200724172216.28204-10-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200724172216.28204-1-ian.jackson@eu.citrix.com>
References: <20200724172216.28204-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Move the per-host code all into the same per-host loop.  One effect is
to transpose the db_retry and host loops for mainquery.

No functional change.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 sg-report-host-history | 12 ++----------
 1 file changed, 2 insertions(+), 10 deletions(-)

diff --git a/sg-report-host-history b/sg-report-host-history
index 4b0b5b2d..1f5c14e1 100755
--- a/sg-report-host-history
+++ b/sg-report-host-history
@@ -469,18 +469,10 @@ db_retry($dbh_tests, [], sub {
     computeflightsrange();
 });
 
-foreach (keys %hosts) {
-    read_existing_logs($_);
-}
-
-db_retry($dbh_tests, [], sub {
-    foreach my $host (sort keys %hosts) {
-	mainquery($host);
-    }
-});
-
 foreach my $host (sort keys %hosts) {
+    read_existing_logs($host);
     db_retry($dbh_tests, [], sub {
+        mainquery($host);
 	reporthost $host;
     });
 }
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri Jul 24 17:25:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jul 2020 17:25:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jz1Rk-0006Ye-GM; Fri, 24 Jul 2020 17:25:24 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5T8C=BD=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jz1Rj-0006YV-Es
 for xen-devel@lists.xenproject.org; Fri, 24 Jul 2020 17:25:23 +0000
X-Inumbo-ID: a77e8ba8-cdd2-11ea-8863-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a77e8ba8-cdd2-11ea-8863-bc764e2007e4;
 Fri, 24 Jul 2020 17:25:22 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:To:Subject:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=jYMvCtLjkFi3+NHJYjdS2mDK6TqR9cJuhYfypiJ3erw=; b=kPS2tixcI04pWkhHd8CuFyMQ0C
 vFswrl6MEi2ALq428YSTpo9SbdXlY04LAqTsi0aiswSDSLmoWSyixgGAk0et3s1DmFm1uhLu2ICJ+
 ooZ6UI8rpJKBYtajpDeuX9Y17s4QjMoculSOK/6pIOf33URpT3VPmOnmlV1DPKqfTD60=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jz1Rh-0005vD-Qp; Fri, 24 Jul 2020 17:25:21 +0000
Received: from 54-240-197-227.amazon.com ([54.240.197.227]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jz1Rh-0003wy-G2; Fri, 24 Jul 2020 17:25:21 +0000
Subject: Re: Porting Xen to Jetson Nano
To: Srinivas Bangalore <srini@yujala.com>, xen-devel@lists.xenproject.org,
 'Christopher Clark' <christopher.w.clark@gmail.com>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <002801d66051$90fe2300$b2fa6900$@yujala.com>
 <9736680b-1c81-652b-552b-4103341bad50@xen.org>
 <000001d661cb$45cdaa10$d168fe30$@yujala.com>
From: Julien Grall <julien@xen.org>
Message-ID: <5f985a6a-1bd6-9e68-f35f-b0b665688cee@xen.org>
Date: Fri, 24 Jul 2020 18:25:19 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:68.0)
 Gecko/20100101 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <000001d661cb$45cdaa10$d168fe30$@yujala.com>
Content-Type: text/plain; charset=windows-1252; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>



On 24/07/2020 16:01, Srinivas Bangalore wrote:
> Hi Julien,

Hello,

> 
> Thanks for the tips. Comments inline...

I struggled to find your comments inline as your e-mail client doesn't 
quote my answer. Please configure your e-mail client to use some form of 
quoting (the usual marker is '>').


> Here's the output, truncated since it goes into an infinite loop printing
> the same info:
> [..]
> (XEN) Allocating 1:1 mappings totalling 128MB for dom0:
> (XEN) BANK[0] 0x00000088000000-0x00000090000000 (128MB)
> (XEN) Grant table range: 0x000000fec00000-0x000000fec68000
> (XEN) Loading zImage from 00000000e1000000 to
> 0000000088080000-000000008a23c808
> (XEN) Allocating PPI 16 for event channel interrupt
> (XEN) Loading dom0 DTB to 0x000000008fe00000-0x000000008fe34444
> (XEN) Scrubbing Free RAM on 1 nodes using 4 CPUs
> (XEN) ........done.
> (XEN) Initial low memory virq threshold set at 0x4000 pages.
> (XEN) Std. Loglevel: All
> (XEN) Guest Loglevel: All
> (XEN) ***************************************************
> (XEN) WARNING: CONSOLE OUTPUT IS SYNCHRONOUS
> (XEN) This option is intended to aid debugging of Xen by ensuring
> (XEN) that all output is synchronously delivered on the serial line.
> (XEN) However it can introduce SIGNIFICANT latencies and affect
> (XEN) timekeeping. It is NOT recommended for production use!
> (XEN) ***************************************************
> (XEN) 3... 2... 1...
> (XEN) *** Serial input -> DOM0 (type 'CTRL-a' three times to switch input to
> Xen)
> (XEN) Freed 296kB init memory.
> (XEN) dom0 IPA 0x0000000088080000
> (XEN) P2M @ 0000000803fc3d40 mfn:0x17f0f5
> (XEN) 0TH[0x0] = 0x004000017f0f377f
> (XEN) 1ST[0x2] = 0x02c00000800006fd
> (XEN) Mem access check
> (XEN) dom0 IPA 0x0000000088080000
> (XEN) P2M @ 0000000803fc3d40 mfn:0x17f0f5
> (XEN) 0TH[0x0] = 0x004000017f0f377f
> (XEN) 1ST[0x2] = 0x02c00000800006fd
> (XEN) Mem access check

The instruction abort issue looks normal as the mapping is marked as 
non-executable.

Looking at the rest of the bits, the field at bits 58:55 indicates the 
type of mapping used. The value suggests the mapping has been treated as 
p2m_mmio_direct_c (RW cacheable MMIO). This looks wrong to me, because 
RAM should be mapped using p2m_ram_rw.

Looking at your DT, it looks like the region is marked as reserved. On 
Xen 4.8, reserved-memory regions are not correctly handled (IIRC this 
was only fixed in Xen 4.13). You should be able to confirm this by 
enabling CONFIG_DEVICE_TREE_DEBUG in your .config.

The option will print more information about the dom0 mappings on your 
console.

However, given you are using an old release, you risk repeatedly 
hitting bugs that have already been resolved in more recent releases. 
It would probably be better to switch to Xen 4.14 and report any bugs 
you find there.

> 
> [..]
> 
> I added the printk for 'Mem access check' inside the 'case FSC_FLT_PERM' of
> the switch (fsc) code following the lookup. That's what you see in the
> output above.
> So it does seem like there's a memory access fault somehow.
>   
>>
>> (XEN) HPFAR_EL2: 0000000000000000
>>
>> (XEN) FAR_EL2: 00000000a0080000
>>
>> (XEN)
>>
>> (XEN) Guest stack trace from sp=0:
>>
>> (XEN) Failed to convert stack to physical address
> 
> [...]
> 
>> It seems the DOM0 kernel did not get added to the task list.
> 
>   From a look at the dump, dom0 vCPU0 has been scheduled and running on
> pCPU0.
> 
>>
>> Boot args for Xen and Dom0 are here:
>> (XEN) Checking for initrd in /chosen
>>
>> (XEN) linux,initrd limits invalid: 0000000084100000 >=
>> 0000000084100000
>>
>> (XEN) RAM: 0000000080000000 - 00000000fedfffff
>>
>> (XEN) RAM: 0000000100000000 - 000000017f1fffff
>>
>> (XEN)
>>
>> (XEN) MODULE[0]: 00000000fc7f8000 - 00000000fc82d000 Device Tree
>>
>> (XEN) MODULE[1]: 00000000e1000000 - 00000000e31bc808 Kernel
>> console=hvc0 earlyprintk=xen earlycon=xen rootfstype=ext4 rw rootwait
>> root=/dev/mmcblk0p1 rdinit=/sbin/init
> 
> You want to use earlycon=xenboot here.
> 
>>
>> (XEN) RESVD[0]: 0000000080000000 - 0000000080020000
>>
>> (XEN) RESVD[1]: 00000000e3500000 - 00000000e3535000
>>
>> (XEN) RESVD[2]: 00000000fc7f8000 - 00000000fc82d000
>>
>> (XEN)
>>
>> (XEN) Command line: console=dtuart earlyprintk=xen
>> earlycon=uart8250,mmio32,0x70006000 sync_console dom0_mem=512M
>> log_lvl=all guest_loglvl=all console_to_ring
> 
> FWIW, earlyprintk and earlycon are not understood by Xen. They are only
> useful for Dom0.
> 
> BTW, to Christopher's point, the dtb did have some issues. I had to hack the
> 'interrupt-controller' node to get the GICv2 working.
> I have attached the .dts file that I'm using.

Best regards,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Jul 24 17:29:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jul 2020 17:29:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jz1Vl-0006kB-2l; Fri, 24 Jul 2020 17:29:33 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tno1=BD=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jz1Vj-0006k6-JS
 for xen-devel@lists.xenproject.org; Fri, 24 Jul 2020 17:29:31 +0000
X-Inumbo-ID: 3af8e950-cdd3-11ea-a43f-12813bfff9fa
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 3af8e950-cdd3-11ea-a43f-12813bfff9fa;
 Fri, 24 Jul 2020 17:29:30 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595611770;
 h=subject:to:cc:references:from:message-id:date:
 mime-version:in-reply-to:content-transfer-encoding;
 bh=NmRU0UDMj5uQajIy/P97l/8BR5JPklmtkyXUqGedd7k=;
 b=KqY+NzLCJmhv0YoHglIPyzxMfbGvhvqiVt2DQ1nO5U53zRu7I7D1c0Sn
 3KRj1OitMs7kmkfk9ZXN/CDAfw7FPmpPntuU4APdQIzk5w1+UOXuMzvCs
 /cWEI7hTw2nPcjQhRZyG4Qae0nD83eMBdEn99TlbCnmsbGoeFwSGyiHK7 I=;
Authentication-Results: esa6.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: P+1tZ3uyva1MANEdhZoHbbY03bv6p5w1KtyPxJAnrhHlJBZT+0FR8cLa6Z79CE6THTBkRlsiXP
 pA6bi53OidV6AenxjkFugn7oJMGqO9hhN4tDe9g7/DauQqRoxlvD+9jhx7O9Y/7x8xgRzxI7YM
 eYIiofx7oOL6J2hfT3TL5Mo2qRzYacDEr3c9yI2OtxGmrJZ8pSqpEqCSzt1aO+v0aq0FrkZN93
 DdQg6HEQWwdpSgqdqKoMa1ks6mEHp3MUemORGT1YMfk70zw1oDW1F3ub5vClnSmQjjxjMB4jSL
 8Aw=
X-SBRS: 2.7
X-MesageID: 23474787
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,391,1589256000"; d="scan'208";a="23474787"
Subject: Re: [PATCH 1/6] x86/iommu: re-arrange arch_iommu to separate common
 fields...
To: Paul Durrant <paul@xen.org>, <xen-devel@lists.xenproject.org>
References: <20200724164619.1245-1-paul@xen.org>
 <20200724164619.1245-2-paul@xen.org>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <68b40fdc-e578-7005-aa6e-499c6f04589c@citrix.com>
Date: Fri, 24 Jul 2020 18:29:25 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20200724164619.1245-2-paul@xen.org>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin Tian <kevin.tian@intel.com>, Wei Liu <wl@xen.org>,
 Paul Durrant <pdurrant@amazon.com>,
 Lukasz Hawrylko <lukasz.hawrylko@linux.intel.com>,
 Jan Beulich <jbeulich@suse.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 24/07/2020 17:46, Paul Durrant wrote:
> diff --git a/xen/include/asm-x86/iommu.h b/xen/include/asm-x86/iommu.h
> index 6c9d5e5632..a7add5208c 100644
> --- a/xen/include/asm-x86/iommu.h
> +++ b/xen/include/asm-x86/iommu.h
> @@ -45,16 +45,23 @@ typedef uint64_t daddr_t;
>  
>  struct arch_iommu
>  {
> -    u64 pgd_maddr;                 /* io page directory machine address */
> -    spinlock_t mapping_lock;            /* io page table lock */
> -    int agaw;     /* adjusted guest address width, 0 is level 2 30-bit */
> -    u64 iommu_bitmap;              /* bitmap of iommu(s) that the domain uses */
> -    struct list_head mapped_rmrrs;
> -
> -    /* amd iommu support */
> -    int paging_mode;
> -    struct page_info *root_table;
> -    struct guest_iommu *g_iommu;
> +    spinlock_t mapping_lock; /* io page table lock */
> +
> +    union {
> +        /* Intel VT-d */
> +        struct {
> +            u64 pgd_maddr; /* io page directory machine address */
> +            int agaw; /* adjusted guest address width, 0 is level 2 30-bit */
> +            u64 iommu_bitmap; /* bitmap of iommu(s) that the domain uses */
> +            struct list_head mapped_rmrrs;
> +        } vtd;
> +        /* AMD IOMMU */
> +        struct {
> +            int paging_mode;
> +            struct page_info *root_table;
> +            struct guest_iommu *g_iommu;
> +        } amd_iommu;
> +    };

The naming split here is weird.

Ideally we'd have struct {vtd,amd}_iommu in appropriate headers, and
this would be simply

union {
    struct vtd_iommu vtd;
    struct amd_iommu amd;
};

If this isn't trivial to arrange, can we at least s/amd_iommu/amd/ here ?

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri Jul 24 17:40:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jul 2020 17:40:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jz1fz-0008S3-Fq; Fri, 24 Jul 2020 17:40:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ulAu=BD=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jz1fy-0008OM-Pa
 for xen-devel@lists.xenproject.org; Fri, 24 Jul 2020 17:40:06 +0000
X-Inumbo-ID: b5d650e5-cdd4-11ea-886a-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b5d650e5-cdd4-11ea-886a-bc764e2007e4;
 Fri, 24 Jul 2020 17:40:06 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jz1Op-00021j-A9; Fri, 24 Jul 2020 18:22:23 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH 11/11] sg-report-host-history: Fork
Date: Fri, 24 Jul 2020 18:22:16 +0100
Message-Id: <20200724172216.28204-12-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200724172216.28204-1-ian.jackson@eu.citrix.com>
References: <20200724172216.28204-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Run each host's report in a separate child.  This is considerably
faster.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 sg-report-host-history | 47 +++++++++++++++++++++++++++++++++++-------
 1 file changed, 40 insertions(+), 7 deletions(-)

diff --git a/sg-report-host-history b/sg-report-host-history
index 787f7c5b..d8e19127 100755
--- a/sg-report-host-history
+++ b/sg-report-host-history
@@ -34,6 +34,7 @@ our $flightlimit;
 our $htmlout = ".";
 our $read_existing=1;
 our $doinstall=1;
+our $maxjobs=10;
 our @blessings;
 
 open DEBUG, ">/dev/null";
@@ -44,7 +45,7 @@ csreadconfig();
 while (@ARGV && $ARGV[0] =~ m/^-/) {
     $_= shift @ARGV;
     last if m/^--?$/;
-    if (m/^--(limit)\=([1-9]\d*)$/) {
+    if (m/^--(limit|maxjobs)\=([1-9]\d*)$/) {
         $$1= $2;
     } elsif (m/^--time-limit\=([1-9]\d*)$/) {
         $timelimit= $1;
@@ -468,12 +469,44 @@ db_retry($dbh_tests, [], sub {
     computeflightsrange();
 });
 
+undef $dbh_tests;
+
+our %children;
+our $worst = 0;
+
+sub wait_for_max_children ($) {
+    my ($lim) = @_;
+    while (keys(%children) > $lim) {
+	$!=0; $?=0; my $got = wait;
+	die "$! $got $?" unless exists $children{$got};
+	my $host = $children{$got};
+	delete $children{$got};
+	$worst = $? if $? > $worst;
+	if ($?) {
+	    print STDERR "sg-report-flight: [$got] failed for $host: $?\n";
+	} else {
+	    print DEBUG "REAPED [$got] $host\n";
+	}
+    }
+}
+
 foreach my $host (sort keys %hosts) {
-    read_existing_logs($host);
-    db_retry($dbh_tests, [], sub {
-        mainquery($host);
-	reporthost $host;
-    });
+    wait_for_max_children($maxjobs);
+
+    my $pid = fork // die $!;
+    if (!$pid) {
+	opendb_tests();
+	read_existing_logs($host);
+	db_retry($dbh_tests, [], sub {
+            mainquery($host);
+	    reporthost $host;
+	});
+	print DEBUG "JQ CACHE ".($jqtotal-$jqcachemisses)." / $jqtotal\n";
+	exit(0);
+    }
+    print DEBUG "SPAWNED [$pid] $host\n";
+    $children{$pid} = $host;
 }
 
-print DEBUG "JQ CACHE ".($jqtotal-$jqcachemisses)." / $jqtotal\n";
+wait_for_max_children(0);
+exit $worst;
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri Jul 24 17:40:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jul 2020 17:40:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jz1fk-0007nl-77; Fri, 24 Jul 2020 17:39:52 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ulAu=BD=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jz1fj-0007ng-D9
 for xen-devel@lists.xenproject.org; Fri, 24 Jul 2020 17:39:51 +0000
X-Inumbo-ID: ac5af1e6-cdd4-11ea-8867-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ac5af1e6-cdd4-11ea-8867-bc764e2007e4;
 Fri, 24 Jul 2020 17:39:50 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jz1Op-00021j-0g; Fri, 24 Jul 2020 18:22:23 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH 10/11] sg-report-host-history: Drop a redundant AND
 clause
Date: Fri, 24 Jul 2020 18:22:15 +0100
Message-Id: <20200724172216.28204-11-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200724172216.28204-1-ian.jackson@eu.citrix.com>
References: <20200724172216.28204-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

This condition is the same as $flightcond.  (This has no effect on the
db performance since the query planner figures it out, but it is
confusing.)

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 sg-report-host-history | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/sg-report-host-history b/sg-report-host-history
index 1f5c14e1..787f7c5b 100755
--- a/sg-report-host-history
+++ b/sg-report-host-history
@@ -175,13 +175,12 @@ sub mainquery ($) {
 	   AND val = ?
 	   AND $flightcond
            AND $restrictflight_cond
-           AND flight > ?
 	 ORDER BY flight DESC
          LIMIT $limit * 2
 END
 
     print DEBUG "MAINQUERY $host...\n";
-    $runvarq->execute($host, $minflight);
+    $runvarq->execute($host);
 
     $hosts{$host} = $runvarq->fetchall_arrayref({});
     print DEBUG "MAINQUERY $host got ".(scalar @{ $hosts{$host} })."\n";
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri Jul 24 17:41:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jul 2020 17:41:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jz1hh-0000Lg-Sk; Fri, 24 Jul 2020 17:41:53 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=fOER=BD=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1jz1hg-0000LZ-Fj
 for xen-devel@lists.xenproject.org; Fri, 24 Jul 2020 17:41:52 +0000
X-Inumbo-ID: f4bf2132-cdd4-11ea-a443-12813bfff9fa
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f4bf2132-cdd4-11ea-a443-12813bfff9fa;
 Fri, 24 Jul 2020 17:41:51 +0000 (UTC)
Received: from localhost (c-67-164-102-47.hsd1.ca.comcast.net [67.164.102.47])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256
 bits)) (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 75F8A2067D;
 Fri, 24 Jul 2020 17:41:50 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
 s=default; t=1595612510;
 bh=0JRySQ3XLprQNuQtBOMSzxLSi/vZMHzm7aqh5UTaEw0=;
 h=Date:From:To:cc:Subject:In-Reply-To:References:From;
 b=P7yB3bE1ibmPrHzQ5XrsuZZT1Sxm8cFS2nzjRw5vFk1qn30GRixvozdu++T/6SuMD
 lqQkBt6WW2uaALrYKCiXz7OF5lABHR2a7VA3WEgXyPb1vecExajHvFhMmmWXTiXIY5
 i9ZWuW/bRK1JJXDHYjWL0YtW4UPm2g5DxyG6dATw=
Date: Fri, 24 Jul 2020 10:41:50 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien@xen.org>
Subject: Re: [RFC PATCH v1 1/4] arm/pci: PCI setup and PCI host bridge
 discovery within XEN on ARM.
In-Reply-To: <9f09ff42-a930-e4e3-d1c8-612ad03698ae@xen.org>
Message-ID: <alpine.DEB.2.21.2007241036460.17562@sstabellini-ThinkPad-T480s>
References: <cover.1595511416.git.rahul.singh@arm.com>
 <64ebd4ef614b36a5844c52426a4a6a4a23b1f087.1595511416.git.rahul.singh@arm.com>
 <alpine.DEB.2.21.2007231055230.17562@sstabellini-ThinkPad-T480s>
 <9f09ff42-a930-e4e3-d1c8-612ad03698ae@xen.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, andrew.cooper3@citrix.com,
 Bertrand.Marquis@arm.com, Rahul Singh <rahul.singh@arm.com>, jbeulich@suse.com,
 xen-devel@lists.xenproject.org, nd@arm.com,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, roger.pau@citrix.com
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Fri, 24 Jul 2020, Julien Grall wrote:
> On 24/07/2020 00:38, Stefano Stabellini wrote:
> > > +    bridge->dt_node = dev;
> > > +    bridge->sysdata = cfg;
> > > +    bridge->ops = &ops->pci_ops;
> > > +
> > > +    if( !dt_property_read_u32(dev, "linux,pci-domain", &segment) )
> > > +    {
> > > +        printk(XENLOG_ERR "\"linux,pci-domain\" property in not available
> > > in DT\n");
> > > +        return -ENODEV;
> > > +    }
> > > +
> > > +    bridge->segment = (u16)segment;
> > 
> > My understanding is that a Linux pci-domain doesn't correspond exactly
> > to a PCI segment. See for instance:
> > 
> > https://lists.gnu.org/archive/html/qemu-devel/2018-04/msg03885.html
> > 
> > Do we need to care about the difference? If we mean pci-domain here,
> > should we just call them as such instead of calling them "segments" in
> > Xen (if they are not segments)?
> 
> So we definitely need a segment number in hand because this is what the admin
> will use to assign a PCI device to a guest.

Yeah


> The segment number is just a value defined by the software. So as long as
> Linux and Xen agrees with the number, then we should be ok.

As far as I understand a Linux "domain" (linux,pci-domain in device
tree) is a value defined by the software. The PCI segment has a
definition in the PCI spec and it is returned by _SEG on ACPI systems.

The link above suggests that a Linux domain corresponds to (_SEG,
_BBN), where _SEG is the segment and _BBN is the "Base Bus Number".

I would just like to be precise with the terminology: if we are talking
about domains in the Linux sense of the word, as it looks like we are,
then let's call them domains instead of segments, which seem to have a
different definition.


From xen-devel-bounces@lists.xenproject.org Fri Jul 24 17:47:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jul 2020 17:47:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jz1nA-0000Xo-HN; Fri, 24 Jul 2020 17:47:32 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=fOER=BD=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1jz1n9-0000Xj-7k
 for xen-devel@lists.xenproject.org; Fri, 24 Jul 2020 17:47:31 +0000
X-Inumbo-ID: bed969e6-cdd5-11ea-a443-12813bfff9fa
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id bed969e6-cdd5-11ea-a443-12813bfff9fa;
 Fri, 24 Jul 2020 17:47:30 +0000 (UTC)
Received: from localhost (c-67-164-102-47.hsd1.ca.comcast.net [67.164.102.47])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256
 bits)) (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 6CB46206F6;
 Fri, 24 Jul 2020 17:47:29 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
 s=default; t=1595612849;
 bh=jMib/9zVPmUdrJblnmrrcAKQU0Co/1a5agUte3mYzr4=;
 h=Date:From:To:cc:Subject:In-Reply-To:References:From;
 b=ZU61QnxTtbUlxju21WGlAZw7YhogjZQnib8+b7ewWM85Ae6t4KCDd7Ur9pVXhTTe2
 c39nOvsNGukM4CK8q9CuqnQ9thAVhat3qZdliDiSeMPzwJkQWqJqZ8t3zwkdwSpN+q
 8d19Qf4sDO6M1588NSLoH/JQI6ob7FOy6C6nrCxM=
Date: Fri, 24 Jul 2020 10:47:29 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien@xen.org>
Subject: Re: [RFC PATCH v1 1/4] arm/pci: PCI setup and PCI host bridge
 discovery within XEN on ARM.
In-Reply-To: <276d6b48-8cd7-7fb1-1d76-15cb6a95cad9@xen.org>
Message-ID: <alpine.DEB.2.21.2007241045340.17562@sstabellini-ThinkPad-T480s>
References: <cover.1595511416.git.rahul.singh@arm.com>
 <64ebd4ef614b36a5844c52426a4a6a4a23b1f087.1595511416.git.rahul.singh@arm.com>
 <alpine.DEB.2.21.2007231055230.17562@sstabellini-ThinkPad-T480s>
 <3ee41590-e8ca-84d6-3010-6e5dffe91df0@epam.com>
 <276d6b48-8cd7-7fb1-1d76-15cb6a95cad9@xen.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "nd@arm.com" <nd@arm.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>,
 "andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>,
 "Bertrand.Marquis@arm.com" <Bertrand.Marquis@arm.com>,
 "jbeulich@suse.com" <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Rahul Singh <rahul.singh@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 "roger.pau@citrix.com" <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Fri, 24 Jul 2020, Julien Grall wrote:
> > > > +    list_add_tail(&bridge->node, &pci_host_bridges);
> > > It looks like &pci_host_bridges should be an ordered list, ordered by
> > > segment number?
> > 
> > Why? Do you expect bridge access in some specific order so ordered
> > 
> > list will make it faster?
> 
> Access to the configuration space will be pretty random, so I don't think
> ordering the list will make anything better.
> 
> However, looking up the bridge for every config space access is pretty
> slow. When I was working on PCI passthrough, I wanted to look at whether
> it would be possible to have a pointer to the PCI host bridge passed as
> an argument to the helpers (rather than the segment).

I was suggesting ordering the list because it is a rather cheap
optimization: it is easy to implement and typically leads to decent
improvements (although it varies case by case). Something more
sophisticated, as you mention here, would be even better.


From xen-devel-bounces@lists.xenproject.org Fri Jul 24 18:21:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jul 2020 18:21:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jz2KJ-0004Sd-8Q; Fri, 24 Jul 2020 18:21:47 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5T8C=BD=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jz2KI-0004SY-3P
 for xen-devel@lists.xenproject.org; Fri, 24 Jul 2020 18:21:46 +0000
X-Inumbo-ID: 8747e804-cdda-11ea-a453-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 8747e804-cdda-11ea-a453-12813bfff9fa;
 Fri, 24 Jul 2020 18:21:44 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=hO0CSEXxuhKjrfiNoF3MGAgwLOea+8O5dKayf8r11Kg=; b=6MaQjDvh1gcFxQbYEpQuMzHXcv
 blNxyXVDwb19re9Ze2/7evB1AmTvRrDTGabKl6mRJ+PECUnpJr604rp+c1VYfOxpZ1nmRouQZMWIT
 VcHBhiQ4JCJq4PH1YBx8e9jSnijcv4vRLIROBUTXACYR4jPY3n6jO9rUvswOwou0rXQU=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jz2KF-0007Cd-Lf; Fri, 24 Jul 2020 18:21:43 +0000
Received: from 54-240-197-227.amazon.com ([54.240.197.227]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jz2KF-0006zd-Ag; Fri, 24 Jul 2020 18:21:43 +0000
Subject: Re: [RFC PATCH v1 1/4] arm/pci: PCI setup and PCI host bridge
 discovery within XEN on ARM.
To: Stefano Stabellini <sstabellini@kernel.org>
References: <cover.1595511416.git.rahul.singh@arm.com>
 <64ebd4ef614b36a5844c52426a4a6a4a23b1f087.1595511416.git.rahul.singh@arm.com>
 <alpine.DEB.2.21.2007231055230.17562@sstabellini-ThinkPad-T480s>
 <9f09ff42-a930-e4e3-d1c8-612ad03698ae@xen.org>
 <alpine.DEB.2.21.2007241036460.17562@sstabellini-ThinkPad-T480s>
From: Julien Grall <julien@xen.org>
Message-ID: <40582d63-49c7-4a51-b35b-8248dfa34b66@xen.org>
Date: Fri, 24 Jul 2020 19:21:41 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:68.0)
 Gecko/20100101 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2007241036460.17562@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Rahul Singh <rahul.singh@arm.com>, andrew.cooper3@citrix.com,
 Bertrand.Marquis@arm.com, jbeulich@suse.com, xen-devel@lists.xenproject.org,
 nd@arm.com, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 roger.pau@citrix.com
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>



On 24/07/2020 18:41, Stefano Stabellini wrote:
> On Fri, 24 Jul 2020, Julien Grall wrote:
>> On 24/07/2020 00:38, Stefano Stabellini wrote:
>> The segment number is just a value defined by the software. So as long as
>> Linux and Xen agree on the number, we should be ok.
> 
> As far as I understand a Linux "domain" (linux,pci-domain in device
> tree) is a value defined by the software. The PCI segment has a
> definition in the PCI spec and it is returned by _SEG on ACPI systems.
> 
> The link above suggests that a Linux domain corresponds to (_SEG,
> _BBN) where _SEG is the segment and _BBN is the "Bus Number".
>
> I just would like to be precise with the terminology: if we are talking
> about domains in the Linux sense of the word, as it looks like we are
> doing, then let's call them domains instead of segments, which seem to
> have a different definition.

You seem to be arguing about the name, but this doesn't resolve the 
underlying problem. Indeed, all our external interfaces expect a segment number.

If they are not equal, then I fail to see why it would be useful to have 
this value in Xen. In that case, we need to use 
PHYSDEVOP_pci_mmcfg_reserved so that Dom0 and Xen can synchronize on the 
segment number.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Jul 24 18:24:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jul 2020 18:24:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jz2Mt-0004bF-MI; Fri, 24 Jul 2020 18:24:27 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tno1=BD=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jz2Ms-0004bA-84
 for xen-devel@lists.xenproject.org; Fri, 24 Jul 2020 18:24:26 +0000
X-Inumbo-ID: e703382a-cdda-11ea-a453-12813bfff9fa
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e703382a-cdda-11ea-a453-12813bfff9fa;
 Fri, 24 Jul 2020 18:24:25 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595615065;
 h=subject:to:cc:references:from:message-id:date:
 mime-version:in-reply-to:content-transfer-encoding;
 bh=ivCc5rAae+Bh4wYGC4+w9YCtlJkwRabBZjJXCziX/Q4=;
 b=NqQMvKM0AysO1+xs7XHp0G/LnE6qdeGuccsBoE7mGc6oqc2FzRsz0DNH
 sDOvEegKUUvurduXnAZ6tUZbgkGi4doJlLXW66ubk4gfTYikrqNtywb3k
 UXN9250U2zc2NwgLBORJ2YyU13QUz6Q5byacUVd+DkmeBvyOkSghhoD/9 U=;
Authentication-Results: esa5.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: RhSL3m/N3qMC3bIZsCvUEY6lHZXgRDQakIO1tb7xbGIHV0S5CNsM+v3aaaKRV71KCYR4xRjP5G
 xkW0EaWkFbFLfxQ5QuWr0vab2WncTd1XSR8OOdeQE/OetNuH5tIpa/KvjaaUs4SR0qP9Ldqcar
 TiBpcaxQiOneytMAORLP/bYUPIu9RSYxsSzHpyBMJ59EBib0MOxeHfasrijcfQgKN4Eed7r1AM
 mbE9zIDR8yPOUYstvoop3IKW1keN3peiLfHhp9YdIfgBK6t7CcE/w5ktu3+EU21FZS+U8l1Fqn
 8Zk=
X-SBRS: 2.7
X-MesageID: 23343864
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,391,1589256000"; d="scan'208";a="23343864"
Subject: Re: [PATCH 2/6] x86/iommu: add common page-table allocator
To: Paul Durrant <paul@xen.org>, <xen-devel@lists.xenproject.org>
References: <20200724164619.1245-1-paul@xen.org>
 <20200724164619.1245-3-paul@xen.org>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <d0a0c46f-1461-144c-ca62-259b0a1894fa@citrix.com>
Date: Fri, 24 Jul 2020 19:24:20 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20200724164619.1245-3-paul@xen.org>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Wei Liu <wl@xen.org>, Paul Durrant <pdurrant@amazon.com>,
 Kevin Tian <kevin.tian@intel.com>, Jan Beulich <jbeulich@suse.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 24/07/2020 17:46, Paul Durrant wrote:
> From: Paul Durrant <pdurrant@amazon.com>
>
> Instead of having separate page table allocation functions in VT-d and AMD
> IOMMU code, use a common allocation function in the general x86 code.
> Also, rather than walking the page tables and using a tasklet to free them
> during iommu_domain_destroy(), add allocated page table pages to a list and
> then simply walk the list to free them. This saves ~90 lines of code overall.
>
> NOTE: There is no need to clear and sync PTEs during teardown since the per-
>       device root entries will have already been cleared (when devices were
>       de-assigned) so the page tables can no longer be accessed by the IOMMU.
>
> Signed-off-by: Paul Durrant <pdurrant@amazon.com>

Oh wow - I don't have any polite words for how that code was organised
before.

Instead of discussing the ~90 lines saved, what about mentioning the
removal of a global bottleneck (the tasklet), or expanding on the removal
of redundant TLB/cache maintenance?

> diff --git a/xen/drivers/passthrough/amd/pci_amd_iommu.c b/xen/drivers/passthrough/amd/pci_amd_iommu.c
> index c386dc4387..fd9b1e7bd5 100644
> --- a/xen/drivers/passthrough/amd/pci_amd_iommu.c
> +++ b/xen/drivers/passthrough/amd/pci_amd_iommu.c
> @@ -378,64 +380,9 @@ static int amd_iommu_assign_device(struct domain *d, u8 devfn,
>      return reassign_device(pdev->domain, d, devfn, pdev);
>  }
>  
> -static void deallocate_next_page_table(struct page_info *pg, int level)
> -{
> -    PFN_ORDER(pg) = level;
> -    spin_lock(&iommu_pt_cleanup_lock);
> -    page_list_add_tail(pg, &iommu_pt_cleanup_list);
> -    spin_unlock(&iommu_pt_cleanup_lock);
> -}
> -
> -static void deallocate_page_table(struct page_info *pg)
> -{
> -    struct amd_iommu_pte *table_vaddr;
> -    unsigned int index, level = PFN_ORDER(pg);
> -
> -    PFN_ORDER(pg) = 0;
> -
> -    if ( level <= 1 )
> -    {
> -        free_amd_iommu_pgtable(pg);
> -        return;
> -    }
> -
> -    table_vaddr = __map_domain_page(pg);
> -
> -    for ( index = 0; index < PTE_PER_TABLE_SIZE; index++ )
> -    {
> -        struct amd_iommu_pte *pde = &table_vaddr[index];
> -
> -        if ( pde->mfn && pde->next_level && pde->pr )
> -        {
> -            /* We do not support skip levels yet */
> -            ASSERT(pde->next_level == level - 1);
> -            deallocate_next_page_table(mfn_to_page(_mfn(pde->mfn)),
> -                                       pde->next_level);
> -        }
> -    }
> -
> -    unmap_domain_page(table_vaddr);
> -    free_amd_iommu_pgtable(pg);
> -}
> -
> -static void deallocate_iommu_page_tables(struct domain *d)
> -{
> -    struct domain_iommu *hd = dom_iommu(d);
> -
> -    spin_lock(&hd->arch.mapping_lock);
> -    if ( hd->arch.amd_iommu.root_table )
> -    {
> -        deallocate_next_page_table(hd->arch.amd_iommu.root_table,
> -                                   hd->arch.amd_iommu.paging_mode);

I really need to dust off my patch fixing up several bits of dubious
logic, including the name "paging_mode", which is actually simply the
number of levels.

At this point, it will probably be best to get this series in first, and
for me to rebase.

> -        hd->arch.amd_iommu.root_table = NULL;
> -    }
> -    spin_unlock(&hd->arch.mapping_lock);
> -}
> -
> -
>  static void amd_iommu_domain_destroy(struct domain *d)
>  {
> -    deallocate_iommu_page_tables(d);
> +    dom_iommu(d)->arch.amd_iommu.root_table = NULL;
>      amd_iommu_flush_all_pages(d);

Per your NOTE:, shouldn't this flush call be dropped?

> diff --git a/xen/drivers/passthrough/x86/iommu.c b/xen/drivers/passthrough/x86/iommu.c
> index a12109a1de..b3c7da0fe2 100644
> --- a/xen/drivers/passthrough/x86/iommu.c
> +++ b/xen/drivers/passthrough/x86/iommu.c
> @@ -140,11 +140,19 @@ int arch_iommu_domain_init(struct domain *d)
>  
>      spin_lock_init(&hd->arch.mapping_lock);
>  
> +    INIT_PAGE_LIST_HEAD(&hd->arch.pgtables.list);
> +    spin_lock_init(&hd->arch.pgtables.lock);
> +
>      return 0;
>  }
>  
>  void arch_iommu_domain_destroy(struct domain *d)
>  {
> +    struct domain_iommu *hd = dom_iommu(d);
> +    struct page_info *pg;
> +
> +    while ( (pg = page_list_remove_head(&hd->arch.pgtables.list)) )
> +        free_domheap_page(pg);

Some of those 90 lines saved were the logic to not suffer a watchdog
timeout here.

To do it like this, it needs plumbing into the relinquish resources path.
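The general shape would be something like the sketch below. This is purely illustrative: `struct page`, `preempt_check()`, `free_pgtables()` and `make_list()` are standalone stand-ins, not Xen's real page-list or preemption internals. The point is that the teardown loop frees in bounded batches and bails out with a "restart" code when preemption is pending, so the relinquish-resources path can reschedule instead of tripping the watchdog.

```c
#include <assert.h>
#include <stdlib.h>

/* Toy stand-in for a Xen page-list entry. */
struct page {
    struct page *next;
};

/* Stand-in for hypercall_preempt_check(); a test can flip this flag. */
static int preempt_pending;

static int preempt_check(void)
{
    return preempt_pending;
}

/*
 * Free the list in bounded batches. Return 1 (think -ERESTART in Xen)
 * if preemption is pending, leaving the remainder on the list so the
 * caller can come back and continue later.
 */
static int free_pgtables(struct page **head, unsigned int batch)
{
    unsigned int done = 0;
    struct page *pg;

    while ( (pg = *head) != NULL )
    {
        *head = pg->next;
        free(pg);

        if ( ++done == batch )
        {
            if ( preempt_check() )
                return 1; /* caller should re-invoke us later */
            done = 0;
        }
    }

    return 0;
}

/* Build a list of n dummy "pages" for demonstration. */
static struct page *make_list(unsigned int n)
{
    struct page *head = NULL;

    while ( n-- )
    {
        struct page *pg = malloc(sizeof(*pg));

        assert(pg);
        pg->next = head;
        head = pg;
    }

    return head;
}
```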

>  }
>  
>  static bool __hwdom_init hwdom_iommu_map(const struct domain *d,
> @@ -257,6 +265,39 @@ void __hwdom_init arch_iommu_hwdom_init(struct domain *d)
>          return;
>  }
>  
> +struct page_info *iommu_alloc_pgtable(struct domain *d)
> +{
> +    struct domain_iommu *hd = dom_iommu(d);
> +#ifdef CONFIG_NUMA
> +    unsigned int memflags = (hd->node == NUMA_NO_NODE) ?
> +        0 : MEMF_node(hd->node);
> +#else
> +    unsigned int memflags = 0;
> +#endif
> +    struct page_info *pg;
> +    void *p;

The memflags code is very awkward.  How about initialising it to 0, and
having:

#ifdef CONFIG_NUMA
    if ( hd->node != NUMA_NO_NODE )
        memflags = MEMF_node(hd->node);
#endif

here?
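i.e. as a standalone sketch — with CONFIG_NUMA, NUMA_NO_NODE and MEMF_node() stubbed out below rather than Xen's real definitions — the whole prologue becomes:

```c
#include <assert.h>

/* Standalone stand-ins for the Xen definitions (illustrative only). */
#define CONFIG_NUMA 1
#define NUMA_NO_NODE 0xffu
#define MEMF_node(n) (((n) + 1u) << 8)

/*
 * Initialise memflags to 0 unconditionally, and override it only when
 * a NUMA node is known, instead of nesting the condition inside the
 * initialiser under #ifdef.
 */
static unsigned int pgtable_memflags(unsigned int node)
{
    unsigned int memflags = 0;

#ifdef CONFIG_NUMA
    if ( node != NUMA_NO_NODE )
        memflags = MEMF_node(node);
#endif

    return memflags;
}
```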

> +
> +    BUG_ON(!iommu_enabled);

Is this really necessary?  Can we plausibly end up in this function
otherwise?


Overall, I wonder if this patch would better be split into several.  One
which introduces the common alloc/free implementation, two which switch
the VT-d and AMD implementations over, and possibly one clean-up on the end?

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri Jul 24 18:32:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jul 2020 18:32:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jz2UX-0005bU-Ga; Fri, 24 Jul 2020 18:32:21 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=fOER=BD=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1jz2UW-0005bP-HZ
 for xen-devel@lists.xenproject.org; Fri, 24 Jul 2020 18:32:20 +0000
X-Inumbo-ID: 01d82bd2-cddc-11ea-a45b-12813bfff9fa
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 01d82bd2-cddc-11ea-a45b-12813bfff9fa;
 Fri, 24 Jul 2020 18:32:20 +0000 (UTC)
Received: from localhost (c-67-164-102-47.hsd1.ca.comcast.net [67.164.102.47])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256
 bits)) (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id C29EE206F0;
 Fri, 24 Jul 2020 18:32:18 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
 s=default; t=1595615539;
 bh=edzJ0zO5I70t7wGG5vAuUCZOGjGxj2xutns/fJYnx64=;
 h=Date:From:To:cc:Subject:In-Reply-To:References:From;
 b=DWiaD/PYL6DTyiNtfZga4WXYYdArgvsHGz9FUo9BhDK96Con/hc8CNV8pGtJM0veC
 S6K40lR07tmsCIbDnAPgSnH0GHZ1xZ49KbUBK2vgBMOci/LRlN815BDDDF7gxZ1FyR
 ME6aFIdXcYwWDzscDQJRn8HFJqGPGf1kdnBA0jOw=
Date: Fri, 24 Jul 2020 11:32:18 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien@xen.org>
Subject: Re: [RFC PATCH v1 1/4] arm/pci: PCI setup and PCI host bridge
 discovery within XEN on ARM.
In-Reply-To: <40582d63-49c7-4a51-b35b-8248dfa34b66@xen.org>
Message-ID: <alpine.DEB.2.21.2007241127480.17562@sstabellini-ThinkPad-T480s>
References: <cover.1595511416.git.rahul.singh@arm.com>
 <64ebd4ef614b36a5844c52426a4a6a4a23b1f087.1595511416.git.rahul.singh@arm.com>
 <alpine.DEB.2.21.2007231055230.17562@sstabellini-ThinkPad-T480s>
 <9f09ff42-a930-e4e3-d1c8-612ad03698ae@xen.org>
 <alpine.DEB.2.21.2007241036460.17562@sstabellini-ThinkPad-T480s>
 <40582d63-49c7-4a51-b35b-8248dfa34b66@xen.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: nd@arm.com, Stefano Stabellini <sstabellini@kernel.org>,
 andrew.cooper3@citrix.com, Bertrand.Marquis@arm.com, jbeulich@suse.com,
 xen-devel@lists.xenproject.org, Rahul Singh <rahul.singh@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, roger.pau@citrix.com
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Fri, 24 Jul 2020, Julien Grall wrote:
> On 24/07/2020 18:41, Stefano Stabellini wrote:
> > On Fri, 24 Jul 2020, Julien Grall wrote:
> > > On 24/07/2020 00:38, Stefano Stabellini wrote:
> > > The segment number is just a value defined by the software. So as long as
> > > Linux and Xen agree on the number, we should be ok.
> > 
> > As far as I understand a Linux "domain" (linux,pci-domain in device
> > tree) is a value defined by the software. The PCI segment has a
> > definition in the PCI spec and it is returned by _SEG on ACPI systems.
> > 
> > The link above suggests that a Linux domain corresponds to (_SEG,
> > _BBN) where _SEG is the segment and _BBN is the "Bus Number".
> > 
> > I just would like to be precise with the terminology: if we are talking
> > about domains in the Linux sense of the word, as it looks like we are
> > doing, then let's call them domains instead of segments, which seem to
> > have a different definition.
> 
> You seem to be arguing about the name, but this doesn't resolve the underlying problem.
> Indeed, all our external interfaces expect a segment number.

Yes, you are right, that is a bigger problem.


> If they are not equal, then I fail to see why it would be useful to have this
> value in Xen.

I think that's because the domain is actually more convenient to use:
a segment can span multiple PCI host bridges, so my understanding is
that a segment alone is not sufficient to identify a host bridge. From
a software implementation point of view it would be better to use
domains.


> In which case, we need to use PHYSDEVOP_pci_mmcfg_reserved so
> Dom0 and Xen can synchronize on the segment number.

I was hoping we could write down the assumption somewhere that for the
cases we care about domain == segment, and error out if it is not the
case.
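Something along these lines — entirely hypothetical names, just to illustrate writing the assumption down in one place and erroring out when it doesn't hold:

```c
#include <assert.h>

/*
 * Hypothetical check (not actual Xen code): treat the firmware-provided
 * domain number as the segment, and reject configurations where the two
 * would diverge, documenting the "domain == segment" assumption.
 */
static int pci_check_segment(unsigned int linux_pci_domain,
                             unsigned int segment)
{
    /* Assumption: domain == segment for all supported setups. */
    if ( linux_pci_domain != segment )
        return -1; /* error out: configuration not supported */

    return (int)segment;
}
```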


From xen-devel-bounces@lists.xenproject.org Fri Jul 24 18:38:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jul 2020 18:38:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jz2ao-0005ov-DD; Fri, 24 Jul 2020 18:38:50 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tno1=BD=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jz2an-0005oq-5e
 for xen-devel@lists.xenproject.org; Fri, 24 Jul 2020 18:38:49 +0000
X-Inumbo-ID: e9326844-cddc-11ea-a45b-12813bfff9fa
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e9326844-cddc-11ea-a45b-12813bfff9fa;
 Fri, 24 Jul 2020 18:38:48 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595615929;
 h=subject:to:cc:references:from:message-id:date:
 mime-version:in-reply-to:content-transfer-encoding;
 bh=dWbvvceeGY6NXTorz9bY+257l+AXrmrOXmHvAQHxPp4=;
 b=fAy6BtvpIN6Ya4jJ0GCsvAXtxRPrzjfoovrZN+ef3lMXHWoXgQ7ku2ao
 Xo5CDq2L8ExOFc/FGVhEZFO9KxNh2XNfoLd9O8ClI8aUcBFKdtf/V8vZT
 IfszN9S9arPAEHJWUW2EjO3VkPAnoL8zUySbLgWIH6Pyv/leV0SYEgGi1 k=;
Authentication-Results: esa3.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: 469q0NSn0Eu98I/dW2JBXyLmvtMHRolRQzuWQZhbHzrB9Q8xLtF2kQ3iR0tSdf+Prhz4iGonyX
 VIIxOnt604Heggj9CKOH4NF08eCv+Jh2htHVgWGX3LPLHKIOkA7JJBg4mDzCDu1OTC2OnqdVUc
 WNdoRgXKVZjKoG/icQzHm1X0Q2iHxRKM+K9fZFU4aZWY7Adk9rs8ocVhzcFlhJJUgM26bEZumV
 op2zehAkJnLkSIBfB/G4+cMOCoT3YjMAJMV86/20p1eJcecfJ80mUXXaHhE9Yi7ungKmWX/Nmy
 jkQ=
X-SBRS: 2.7
X-MesageID: 23148207
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,391,1589256000"; d="scan'208";a="23148207"
Subject: Re: [PATCH 3/6] iommu: remove iommu_lookup_page() and the
 lookup_page() method...
To: Paul Durrant <paul@xen.org>, <xen-devel@lists.xenproject.org>
References: <20200724164619.1245-1-paul@xen.org>
 <20200724164619.1245-4-paul@xen.org>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <c47710e1-fcb6-3b5d-ff6a-d237a4149b3b@citrix.com>
Date: Fri, 24 Jul 2020 19:38:43 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20200724164619.1245-4-paul@xen.org>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Paul Durrant <pdurrant@amazon.com>, Kevin Tian <kevin.tian@intel.com>,
 Jan Beulich <jbeulich@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 24/07/2020 17:46, Paul Durrant wrote:
> From: Paul Durrant <pdurrant@amazon.com>
>
> ... from iommu_ops.
>
> This patch is essentially a reversion of dd93d54f "vtd: add lookup_page method
> to iommu_ops". The code was intended to be used by a patch that has long
> since been abandoned. Therefore it is dead code and can be removed.

And by this, you mean the work that you only partially upstreamed, with
the remainder of the feature still very much in use by XenServer?

Please don't go breaking in-use things, simply because we're fixing
Xen's IOMMU mess one large XSA at a time...

As far as I can tell, this patch doesn't interact with any others in the
series.

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri Jul 24 18:50:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jul 2020 18:50:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jz2la-0006qU-E0; Fri, 24 Jul 2020 18:49:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=G3qI=BD=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1jz2la-0006qP-38
 for xen-devel@lists.xenproject.org; Fri, 24 Jul 2020 18:49:58 +0000
X-Inumbo-ID: 77e24978-cdde-11ea-887c-bc764e2007e4
Received: from mail-wm1-x329.google.com (unknown [2a00:1450:4864:20::329])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 77e24978-cdde-11ea-887c-bc764e2007e4;
 Fri, 24 Jul 2020 18:49:57 +0000 (UTC)
Received: by mail-wm1-x329.google.com with SMTP id j18so8776008wmi.3
 for <xen-devel@lists.xenproject.org>; Fri, 24 Jul 2020 11:49:57 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
 :mime-version:content-transfer-encoding:content-language
 :thread-index; bh=aEuOAjCsOg+5EH37QI7YpqublmeICj17qtr5o/5tYvs=;
 b=D41lDghna6EFpjpSo6IKrp2GiGfk9FXZe4IkDsFcHTVqD2nufp8/y1U0d35hiFZ3b0
 bv0wecl+2K7NiTqi4wIZavzOAsgBY+N+zhH/nscSiEGCiwICTs/XlZq64XdDKak6XYBH
 LYTVzgZAhM45FmyvfKbKGFeR6rPUXa+xYl0jJf1x9X+xPV8dZgu6D1g9ojqET8vETn5q
 AiC+X1ZEgNY/qQLnjgcmuaJSFFoM+JTRFtqVIqZIhxY1g8iodqaqGv+3ni7tI3NoaPIg
 BLPBe5OyZtAf7q8j5M13rijsJ66afAwGVSZmvDhAPkPkSC4YoA+BIA3eWVEfiNOMmrTw
 NuHA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
 :subject:date:message-id:mime-version:content-transfer-encoding
 :content-language:thread-index;
 bh=aEuOAjCsOg+5EH37QI7YpqublmeICj17qtr5o/5tYvs=;
 b=WC3dPjc6MDoIoQ7rmRA4bOy+VLyoQe2Qyx+xlFaa4GOVycKjBhZkZ/GEhjx5zp7wzL
 Djo9OEOGrg2nZEO1PLBvLIzpkwHvrCOxtD4yilv7mL4eaxMN68ZoC7Pxb5YKIAsMJqrQ
 9kkhf1s/FhuCzM3FN/Pdvs4C2L/2f/R893NtSLFxK+3noiEJn6qNHKuDRBuFIMsfsZnR
 ugqouTdQNjam8dwuwm+jpIgrwP4K/O+WpsxLw3C90J42Sekhl4JOJ/UHibw1d8gKka/N
 EMI45XwKmsuRIeGJTv8kswH5vsyLYEXMdmcjhp6RsX9DRs53y6UZ2ic0/MJdduTR6n33
 pXEA==
X-Gm-Message-State: AOAM530MifB9OwwxCdSFwcrfN8IK09GwXJx6aMWE1r5Y3sJqPXgzUJJY
 vShagKwZ+Zf9/hZsDFCwsg8=
X-Google-Smtp-Source: ABdhPJwUjVL6IGzkkmf9Ut12goObcz/rSnARE8BHFXwwq4BFYxlWcxBVD4/7u0jyAg4zt7+lbXQhmA==
X-Received: by 2002:a7b:cbc5:: with SMTP id n5mr9847043wmi.95.1595616596228;
 Fri, 24 Jul 2020 11:49:56 -0700 (PDT)
Received: from CBGR90WXYV0 ([2a00:23c5:5785:9a01:10dd:8439:6158:5aba])
 by smtp.gmail.com with ESMTPSA id g16sm2165821wrs.88.2020.07.24.11.49.55
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Fri, 24 Jul 2020 11:49:55 -0700 (PDT)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
To: "'Andrew Cooper'" <andrew.cooper3@citrix.com>,
 <xen-devel@lists.xenproject.org>
References: <20200724164619.1245-1-paul@xen.org>
 <20200724164619.1245-2-paul@xen.org>
 <68b40fdc-e578-7005-aa6e-499c6f04589c@citrix.com>
In-Reply-To: <68b40fdc-e578-7005-aa6e-499c6f04589c@citrix.com>
Subject: RE: [PATCH 1/6] x86/iommu: re-arrange arch_iommu to separate common
 fields...
Date: Fri, 24 Jul 2020 19:49:55 +0100
Message-ID: <000001d661eb$392e1ae0$ab8a50a0$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="utf-8"
Content-Transfer-Encoding: quoted-printable
X-Mailer: Microsoft Outlook 16.0
Content-Language: en-gb
Thread-Index: AQJEvXWV1fglpFS8p501Sb8VALJosQK2TIaJAcTpbVioFoErYA==
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Reply-To: paul@xen.org
Cc: 'Kevin Tian' <kevin.tian@intel.com>, 'Wei Liu' <wl@xen.org>,
 'Paul Durrant' <pdurrant@amazon.com>,
 'Lukasz Hawrylko' <lukasz.hawrylko@linux.intel.com>,
 'Jan Beulich' <jbeulich@suse.com>,
 =?utf-8?Q?'Roger_Pau_Monn=C3=A9'?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> -----Original Message-----
> From: Andrew Cooper <andrew.cooper3@citrix.com>
> Sent: 24 July 2020 18:29
> To: Paul Durrant <paul@xen.org>; xen-devel@lists.xenproject.org
> Cc: Paul Durrant <pdurrant@amazon.com>; Lukasz Hawrylko <lukasz.hawrylko@linux.intel.com>; Jan Beulich
> <jbeulich@suse.com>; Wei Liu <wl@xen.org>; Roger Pau Monné <roger.pau@citrix.com>; Kevin Tian
> <kevin.tian@intel.com>
> Subject: Re: [PATCH 1/6] x86/iommu: re-arrange arch_iommu to separate common fields...
>
> On 24/07/2020 17:46, Paul Durrant wrote:
> > diff --git a/xen/include/asm-x86/iommu.h b/xen/include/asm-x86/iommu.h
> > index 6c9d5e5632..a7add5208c 100644
> > --- a/xen/include/asm-x86/iommu.h
> > +++ b/xen/include/asm-x86/iommu.h
> > @@ -45,16 +45,23 @@ typedef uint64_t daddr_t;
> >
> >  struct arch_iommu
> >  {
> > -    u64 pgd_maddr;                 /* io page directory machine address */
> > -    spinlock_t mapping_lock;            /* io page table lock */
> > -    int agaw;     /* adjusted guest address width, 0 is level 2 30-bit */
> > -    u64 iommu_bitmap;              /* bitmap of iommu(s) that the domain uses */
> > -    struct list_head mapped_rmrrs;
> > -
> > -    /* amd iommu support */
> > -    int paging_mode;
> > -    struct page_info *root_table;
> > -    struct guest_iommu *g_iommu;
> > +    spinlock_t mapping_lock; /* io page table lock */
> > +
> > +    union {
> > +        /* Intel VT-d */
> > +        struct {
> > +            u64 pgd_maddr; /* io page directory machine address */
> > +            int agaw; /* adjusted guest address width, 0 is level 2 30-bit */
> > +            u64 iommu_bitmap; /* bitmap of iommu(s) that the domain uses */
> > +            struct list_head mapped_rmrrs;
> > +        } vtd;
> > +        /* AMD IOMMU */
> > +        struct {
> > +            int paging_mode;
> > +            struct page_info *root_table;
> > +            struct guest_iommu *g_iommu;
> > +        } amd_iommu;
> > +    };
>
> The naming split here is weird.
>
> Ideally we'd have struct {vtd,amd}_iommu in appropriate headers, and
> this would be simply
>
> union {
>     struct vtd_iommu vtd;
>     struct amd_iommu amd;
> };
>
> If this isn't trivial to arrange, can we at least s/amd_iommu/amd/ here?

I was in two minds. I tried to look for a TLA for the AMD IOMMU and 'amd' seemed a little too non-descript. I don't really mind though if there's a strong preference to shorten it.
I can certainly try moving the struct definitions into the implementation headers.

  Paul

>
> ~Andrew



From xen-devel-bounces@lists.xenproject.org Fri Jul 24 18:53:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jul 2020 18:53:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jz2pP-0007lG-VL; Fri, 24 Jul 2020 18:53:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=G3qI=BD=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1jz2pP-0007lA-4t
 for xen-devel@lists.xenproject.org; Fri, 24 Jul 2020 18:53:55 +0000
X-Inumbo-ID: 0548cf44-cddf-11ea-887f-bc764e2007e4
Received: from mail-wm1-x330.google.com (unknown [2a00:1450:4864:20::330])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0548cf44-cddf-11ea-887f-bc764e2007e4;
 Fri, 24 Jul 2020 18:53:54 +0000 (UTC)
Received: by mail-wm1-x330.google.com with SMTP id g10so9144642wmc.1
 for <xen-devel@lists.xenproject.org>; Fri, 24 Jul 2020 11:53:54 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
 :mime-version:content-transfer-encoding:content-language
 :thread-index; bh=1hESilcHyFiWy0lDakQCudSaZyHaY0Rx4qsU8Kbsh0I=;
 b=lvRYWU2kUx0wKP6oWcuaNQNEXGEs4/z1U6rU6cATeL3K7S2V5E7aHKZQS6pZYzHtM5
 XrQm8QvQ2Bg4XTaUlv0f7D3mWRSmzWArrZlOATWxkuTvFapIYKBqzw+Xwx3Q31RfB8tm
 ZmZOhg1bzMSvOEuYtQ55JvFvAWFJkH/KOOg1aHvn2p47cewVpwMknvKhnAYELqSzVBmO
 /4Biu8WYjNs3ltBKutFHPN9gvUyq+9IiDxUmT2JS5lWOdELgTQqoRNwKSzNB1uIG2mnE
 Dyh4FMw/jwkhJfMmkfb0MAP8V0WLtqidHCrBmESw9eMYga0dUsbFyxwxd2p3fqyX8tfr
 5fSw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
 :subject:date:message-id:mime-version:content-transfer-encoding
 :content-language:thread-index;
 bh=1hESilcHyFiWy0lDakQCudSaZyHaY0Rx4qsU8Kbsh0I=;
 b=cmwOcDOqqgIv60NN94wXy4u0wLpwKF4fNpDnMoP+Kd8r8FYkLSBXidYQTaRZINqqZ+
 vbL6LfG7q0Vm2YTWSshVeKwvdk1G2iJrspavRlo/PY1U0UMNBm6mNOb5Ren9pFnV4SH5
 SAQPLkbC1E1ZoDnvvz/XHaVQOER9hZeHgHFhmguZq1MHl4nTvlYhtlRX6OhBBTATPh7N
 EAaqtmE8+LZVwfQtZydXwQAimr19jHNZzpmt42FoDrQ3S3RoGz/kCTMLzplltdrHrGYC
 uKz4KVZxt9uKpT5TFW1n1nAYs6jD8GLhGHCYV8fZ+mvKQwISRo4koFx9blIzvqAomunn
 348g==
X-Gm-Message-State: AOAM53006A6XAP4DgTl66QGbVbJvtYkoJE2gmbNuIu3CNMEmRK6/hxgL
 R06FlEcMa8Ej7cVeTTH7zkA=
X-Google-Smtp-Source: ABdhPJxjDhXVXCedhv7QyMPMNHXfK0NMU/+HFjDAZUK3ZOwcpZsna6I2gLlv+QiPlUEIIOSbRQ+Rcw==
X-Received: by 2002:a05:600c:2dc1:: with SMTP id
 e1mr9460170wmh.108.1595616833533; 
 Fri, 24 Jul 2020 11:53:53 -0700 (PDT)
Received: from CBGR90WXYV0 ([2a00:23c5:5785:9a01:10dd:8439:6158:5aba])
 by smtp.gmail.com with ESMTPSA id j8sm2337535wrd.85.2020.07.24.11.53.52
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Fri, 24 Jul 2020 11:53:52 -0700 (PDT)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
To: "'Andrew Cooper'" <andrew.cooper3@citrix.com>,
 <xen-devel@lists.xenproject.org>
References: <20200724164619.1245-1-paul@xen.org>
 <20200724164619.1245-4-paul@xen.org>
 <c47710e1-fcb6-3b5d-ff6a-d237a4149b3b@citrix.com>
In-Reply-To: <c47710e1-fcb6-3b5d-ff6a-d237a4149b3b@citrix.com>
Subject: RE: [PATCH 3/6] iommu: remove iommu_lookup_page() and the
 lookup_page() method...
Date: Fri, 24 Jul 2020 19:53:52 +0100
Message-ID: <000101d661eb$c68a75a0$539f60e0$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="utf-8"
Content-Transfer-Encoding: quoted-printable
X-Mailer: Microsoft Outlook 16.0
Content-Language: en-gb
Thread-Index: AQJEvXWV1fglpFS8p501Sb8VALJosQFSVPHsAgBGpZeoH8cQ8A==
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Reply-To: paul@xen.org
Cc: 'Paul Durrant' <pdurrant@amazon.com>, 'Kevin Tian' <kevin.tian@intel.com>,
 'Jan Beulich' <jbeulich@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> -----Original Message-----
> From: Andrew Cooper <andrew.cooper3@citrix.com>
> Sent: 24 July 2020 19:39
> To: Paul Durrant <paul@xen.org>; xen-devel@lists.xenproject.org
> Cc: Paul Durrant <pdurrant@amazon.com>; Kevin Tian <kevin.tian@intel.com>; Jan Beulich
> <jbeulich@suse.com>
> Subject: Re: [PATCH 3/6] iommu: remove iommu_lookup_page() and the lookup_page() method...
>
> On 24/07/2020 17:46, Paul Durrant wrote:
> > From: Paul Durrant <pdurrant@amazon.com>
> >
> > ... from iommu_ops.
> >
> > This patch is essentially a reversion of dd93d54f "vtd: add lookup_page method
> > to iommu_ops". The code was intended to be used by a patch that has long-
> > since been abandoned. Therefore it is dead code and can be removed.
>
> And by this, you mean the work that you only partially upstreamed, with
> the remainder of the feature still very much in use by XenServer?
>

I thought we basically decided to bin the original PV IOMMU idea though?


> Please don't go breaking in-use things, simply because we're fixing
> Xen's IOMMU mess one large XSA at a time...
>
> As far as I can tell, this patch doesn't interact with any others in the
> series.
>

I can leave it, but I still don't think anything other than current
XenServer will ever use it... so it really ought to just be in the
downstream patch queue IMO.

  Paul

> ~Andrew



From xen-devel-bounces@lists.xenproject.org Fri Jul 24 19:01:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jul 2020 19:01:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jz2wH-0000Cv-OD; Fri, 24 Jul 2020 19:01:01 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tno1=BD=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jz2wG-0000Cn-Pb
 for xen-devel@lists.xenproject.org; Fri, 24 Jul 2020 19:01:00 +0000
X-Inumbo-ID: 02b4ea46-cde0-11ea-a461-12813bfff9fa
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 02b4ea46-cde0-11ea-a461-12813bfff9fa;
 Fri, 24 Jul 2020 19:00:59 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595617259;
 h=subject:to:cc:references:from:message-id:date:
 mime-version:in-reply-to:content-transfer-encoding;
 bh=qDLpyFG01JuszQaIrW57+Qe/R3/LrJ9ztBtSJyWamN4=;
 b=XSBKHQjqlXlW3wtfRC7Mcc2djxD1iPhxzOECSyxLZuVAx4LCkIJWxMbo
 +6+ldzmSJ5ys/ZFZmUmxnplDfOrYEJJkx3UzilP/nkS58K51eiTwWdmS4
 QAhxmkqACHYrkd4HnM1WRCc2fUXrtDjUwXdc7m9TOjim3FfuHKi2zGdnV s=;
Authentication-Results: esa6.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: 1OYxkBD80Rr28m8mNtIUCswqXFa3S86bTbd6VugChtaqn/c52UmfSarczN4KqV/O86qGd9aEJ1
 tbghh/tQvRA2OIw8F0TCCSNXzHNg0BvYZFmJ/0Vbia/djOi5e0bj4vdtnalajBUKVGzPAPNzYl
 Om3UO2DjzK/+DD6uBqsqYlaO/8X2TdcgqY6lkXihKBYTd6xJHCBgCpfWkDMnWTMK9mcZENAgBH
 uwPkeMn6Fdp95AZgvnGKSd/1dnjT6ONtAP2rzDj6m+OLDiud3ufeY3XPDMkFHYeO5LQolUw5ve
 OJQ=
X-SBRS: 2.7
X-MesageID: 23481107
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,391,1589256000"; d="scan'208";a="23481107"
Subject: Re: [PATCH 5/6] iommu: remove the share_p2m operation
To: Paul Durrant <paul@xen.org>, <xen-devel@lists.xenproject.org>
References: <20200724164619.1245-1-paul@xen.org>
 <20200724164619.1245-6-paul@xen.org>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <0e3d1914-2bf0-0b14-721e-7694f3657523@citrix.com>
Date: Fri, 24 Jul 2020 20:00:48 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20200724164619.1245-6-paul@xen.org>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin Tian <kevin.tian@intel.com>, Wei Liu <wl@xen.org>,
 Paul Durrant <pdurrant@amazon.com>, George Dunlap <george.dunlap@citrix.com>,
 Jan Beulich <jbeulich@suse.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 24/07/2020 17:46, Paul Durrant wrote:
> From: Paul Durrant <pdurrant@amazon.com>
>
> Sharing of HAP tables is VT-d specific so the operation is never defined for
> AMD IOMMU.

It's not VT-d specific, and Xen really did use to share on AMD.

In fact, a remnant of this logic is still present in the form of the
comment for p2m_type_t explaining why p2m_ram_rw needs to be 0.

That said, I agree it shouldn't be a hook, because it is statically
known at domain create time based on flags and hardware properties.

>  There's also no need to pro-actively set vtd.pgd_maddr when using
> shared EPT as it is straightforward to simply define a helper function to
> return the appropriate value in the shared and non-shared cases.

It occurs to me that vtd.pgd_maddr and amd.root_table really are the
same thing, and furthermore are common to all IOMMU implementations on
any architecture.  Is it worth trying to make them common, rather than
continuing to let each implementation handle them differently?
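
As a rough illustration of the helper idea discussed above — returning the
shared EPT root in the shared case and the IOMMU-private root otherwise —
here is a minimal, self-contained C sketch. The type and field names are
hypothetical stand-ins, not the actual Xen structures:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

typedef uint64_t paddr_t;

/* Illustrative per-domain IOMMU state (not the real Xen layout). */
struct domain_iommu {
    bool hap_pt_share;   /* sharing the CPU (EPT) tables with the IOMMU? */
    paddr_t pgd_maddr;   /* IOMMU-private root table, used when not shared */
};

struct p2m_domain {
    paddr_t ept_root;    /* machine address of the shared EPT root */
};

/* Return the root table the IOMMU should use: the shared EPT root when
 * page-table sharing is in effect, the private root otherwise. */
paddr_t iommu_root_maddr(const struct domain_iommu *di,
                         const struct p2m_domain *p2m)
{
    return di->hap_pt_share ? p2m->ept_root : di->pgd_maddr;
}
```

The point is that the value is a pure function of the domain's
configuration, so no hook or pro-active assignment is needed.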

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri Jul 24 19:06:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jul 2020 19:06:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jz31T-0000Vs-Cs; Fri, 24 Jul 2020 19:06:23 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=qjWU=BD=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jz31R-0000Vn-SU
 for xen-devel@lists.xenproject.org; Fri, 24 Jul 2020 19:06:21 +0000
X-Inumbo-ID: c294e6cc-cde0-11ea-a463-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c294e6cc-cde0-11ea-a463-12813bfff9fa;
 Fri, 24 Jul 2020 19:06:21 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=0aVI7tuZxnkrZ2kNoGo/7m2VrsXnIX5xK3zmuvG9oIw=; b=mY+QgsBxNVzMj+fWpS4ZzYsMJ
 b1icVWDBF5Z+mL7vAR9VpuyCOArdYalWpt00GTMnPRBuCahqywj7Gvve3Se0s0mVhiUM+Rvm10lIm
 52gkiWtHHyiYYiHb6kVH7yr2lQ9nV0KzLu8Mkgp+jx4Jg6x9pX5XspMoyLZ8FkU9IwISE=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jz31Q-00086t-Bf; Fri, 24 Jul 2020 19:06:20 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jz31Q-0000pr-33; Fri, 24 Jul 2020 19:06:20 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jz31Q-0005zh-2Q; Fri, 24 Jul 2020 19:06:20 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-152180-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 152180: tolerable all pass - PUSHED
X-Osstest-Failures: xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=0562cbc14cf02b8188b9f1f37f39a4886776ce7c
X-Osstest-Versions-That: xen=b2a64292b0bfa317886b3432d1a5b2a4193a48d6
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 24 Jul 2020 19:06:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 152180 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/152180/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  0562cbc14cf02b8188b9f1f37f39a4886776ce7c
baseline version:
 xen                  b2a64292b0bfa317886b3432d1a5b2a4193a48d6

Last test of basis   152173  2020-07-24 09:00:31 Z    0 days
Testing same since   152180  2020-07-24 15:02:33 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   b2a64292b0..0562cbc14c  0562cbc14cf02b8188b9f1f37f39a4886776ce7c -> smoke


From xen-devel-bounces@lists.xenproject.org Fri Jul 24 19:09:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jul 2020 19:09:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jz33y-0000eH-S7; Fri, 24 Jul 2020 19:08:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tno1=BD=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jz33w-0000eC-S2
 for xen-devel@lists.xenproject.org; Fri, 24 Jul 2020 19:08:56 +0000
X-Inumbo-ID: 1ec7a93e-cde1-11ea-8883-bc764e2007e4
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1ec7a93e-cde1-11ea-8883-bc764e2007e4;
 Fri, 24 Jul 2020 19:08:56 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595617735;
 h=subject:to:cc:references:from:message-id:date:
 mime-version:in-reply-to:content-transfer-encoding;
 bh=dTA0AOHWB+wTsIepH5Uey9aBnWll55Pv73riAbQys/o=;
 b=Qg+gbYYsdly0/7wU0C/VjP42rdACOZr2tR5rmcZLpgfeyWoT4qLyFUus
 mCPsCW3v/1pM3QSqyMlyryE7g5aXZPKVN/m1cGZJxgVq7qHA0hLh1JGU1
 jJfTX/FCvOegkUUqSkkS1iPFF2XTWQJmXrbnMwhZ6U/JMCSmFnB+FPNtb U=;
Authentication-Results: esa5.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: G3N7AlhAc81cS/Q9iEDUhqxMnV1QBOYHf8RzEmgcTnjRWgNEKZxUx4U2hPDg066V2lzN6fX6qZ
 C+9RZRyd8gGdRqxOK+FzIlAROqBnLpvkyiHl49iLAs6hMlkMIwi8xj0K++HKy8X1gwaFI7p3hS
 kx17EbbajiMsMvHVk7FR4NbbFTwbH+eniPU3RkCEFEtAJH2DFbCjKJ2Km32n38Y2zO3/djQmOx
 82IzmhbIvDIAu9liFecI1x5lzvAJBFoP6rArMbQsikPaq7LrKAyo8hyslQELfbGTNqanW0W5LS
 GKM=
X-SBRS: 2.7
X-MesageID: 23346714
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,391,1589256000"; d="scan'208";a="23346714"
Subject: Re: [PATCH 6/6] iommu: stop calling IOMMU page tables 'p2m tables'
To: Paul Durrant <paul@xen.org>, <xen-devel@lists.xenproject.org>
References: <20200724164619.1245-1-paul@xen.org>
 <20200724164619.1245-7-paul@xen.org>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <4e1c2ed8-dfc4-812b-d341-04bc5eedad8e@citrix.com>
Date: Fri, 24 Jul 2020 20:08:51 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20200724164619.1245-7-paul@xen.org>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Paul Durrant <pdurrant@amazon.com>, Kevin Tian <kevin.tian@intel.com>,
 Jan Beulich <jbeulich@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 24/07/2020 17:46, Paul Durrant wrote:
> diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
> index 6a3803ff2c..5bc190bf98 100644
> --- a/xen/drivers/passthrough/iommu.c
> +++ b/xen/drivers/passthrough/iommu.c
> @@ -535,12 +535,12 @@ static void iommu_dump_p2m_table(unsigned char key)
>  
>          if ( iommu_use_hap_pt(d) )
>          {
> -            printk("\ndomain%d IOMMU p2m table shared with MMU: \n", d->domain_id);
> +            printk("%pd: IOMMU page tables shared with MMU\n", d);

Change MMU to CPU?  MMU is very ambiguous in this context.

>              continue;
>          }
>  
> -        printk("\ndomain%d IOMMU p2m table: \n", d->domain_id);
> -        ops->dump_p2m_table(d);
> +        printk("%pd: IOMMU page tables: \n", d);

Drop the trailing whitespace?

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri Jul 24 19:24:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jul 2020 19:24:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jz3J8-0002V4-Ab; Fri, 24 Jul 2020 19:24:38 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=H90z=BD=gmail.com=julien.grall.oss@srs-us1.protection.inumbo.net>)
 id 1jz3J6-0002Ux-Ty
 for xen-devel@lists.xenproject.org; Fri, 24 Jul 2020 19:24:37 +0000
X-Inumbo-ID: 4f1e9dd4-cde3-11ea-888b-bc764e2007e4
Received: from mail-wm1-x343.google.com (unknown [2a00:1450:4864:20::343])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4f1e9dd4-cde3-11ea-888b-bc764e2007e4;
 Fri, 24 Jul 2020 19:24:36 +0000 (UTC)
Received: by mail-wm1-x343.google.com with SMTP id 3so2238652wmi.1
 for <xen-devel@lists.xenproject.org>; Fri, 24 Jul 2020 12:24:36 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc; bh=8OOi++DoSkGx/GlhLWZoPuYbHeA8uUVPopYyzRdXWb4=;
 b=Lopdyohte65cwZw/pGyg0Bu9ZSrwgUXG6a8+wG50415hwfjbZn+dgz7qiAwuT8o9yl
 spqInx4ja98TsY4QkKjxOemBB/px0F9stmNSm7XQLCe4TY9/5gPlKH9fRL3qaArAIKWX
 8cjYCwuqtB69riqr3n2JHdXQdnc4DhAlcFuKRM0X4+4Be52mC192ntKTr+HYmYYW3pJ1
 3Qw0Cby+zwwqIrALDiyE3Cawd4/wl6i7EMbYdLUN/pEklj+FAGwczBzhgk3UXddSYJG6
 gX2p8K3/ud7+Lr7tsrxoe9h6ECqBFba6Y99alp69Pk8Qz+WjVfpX78GULpcGtruElLoj
 2BFg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc;
 bh=8OOi++DoSkGx/GlhLWZoPuYbHeA8uUVPopYyzRdXWb4=;
 b=nQVJD/o7NJcu/YtlrVVHid0BMWtZOSLcvlq3SpLkhkq7F+2LuHcLxGxKxA7WC+H9Zw
 rjb44xB56IwlP62yyIvlLbztLaCwfvbn/Z/SPWqMgAX9h+AnjEnbstxsH/KyrjFpJspr
 ewkSz3ul3D4D2ynKJ2Wn3wSOfVY7Yzqh6Ja5RQR4O2x7JVmNBGMRCBzxGj3D4RCXakVO
 o9lflaNyCfcPC85MM9lBLZEV2kllYbpjX+LJGkO6gq5Tjdbd/Dn8/zSLDYza5VkZNvVy
 DLJ9Oe1tuOzL7aGi0Zoz2t+oEZTR6+nq6iXatwPu/5dqPSTCMOGtqHNT2Gc2GvPmPH5C
 o0Cw==
X-Gm-Message-State: AOAM531vn+1P4nase7i1URANWsWxqZsiJ0LApK6HahSMsNfprIIjMLx4
 9iwnLdt9CGjFoh4bI04UZMGvcDvxp4zf2BYsTfc=
X-Google-Smtp-Source: ABdhPJzWNe1/yPz2ocHzOfVMETlV4DTbvzdrbjK0myon1e3iTF/rOBhGQ8Imv2PkVwGHRlSTXTHmq6Ykq8fRYPlRi80=
X-Received: by 2002:a1c:19c4:: with SMTP id 187mr10466529wmz.100.1595618675314; 
 Fri, 24 Jul 2020 12:24:35 -0700 (PDT)
MIME-Version: 1.0
References: <cover.1595511416.git.rahul.singh@arm.com>
 <64ebd4ef614b36a5844c52426a4a6a4a23b1f087.1595511416.git.rahul.singh@arm.com>
 <alpine.DEB.2.21.2007231055230.17562@sstabellini-ThinkPad-T480s>
 <9f09ff42-a930-e4e3-d1c8-612ad03698ae@xen.org>
 <alpine.DEB.2.21.2007241036460.17562@sstabellini-ThinkPad-T480s>
 <40582d63-49c7-4a51-b35b-8248dfa34b66@xen.org>
 <alpine.DEB.2.21.2007241127480.17562@sstabellini-ThinkPad-T480s>
In-Reply-To: <alpine.DEB.2.21.2007241127480.17562@sstabellini-ThinkPad-T480s>
From: Julien Grall <julien.grall.oss@gmail.com>
Date: Fri, 24 Jul 2020 20:24:23 +0100
Message-ID: <CAJ=z9a3dXSnEBvhkHkZzV9URAGqSfdtJ1Lc838h_ViAWG3ZO4g@mail.gmail.com>
Subject: Re: [RFC PATCH v1 1/4] arm/pci: PCI setup and PCI host bridge
 discovery within XEN on ARM.
To: Stefano Stabellini <sstabellini@kernel.org>
Content-Type: text/plain; charset="UTF-8"
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Rahul Singh <rahul.singh@arm.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>, Jan Beulich <jbeulich@suse.com>,
 xen-devel <xen-devel@lists.xenproject.org>, nd <nd@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Fri, 24 Jul 2020 at 19:32, Stefano Stabellini <sstabellini@kernel.org> wrote:
> > If they are not equal, then I fail to see why it would be useful to have this
> > value in Xen.
>
> I think that's because the domain is actually more convenient to use
> because a segment can span multiple PCI host bridges. So my
> understanding is that a segment alone is not sufficient to identify a
> host bridge. From a software implementation point of view it would be
> better to use domains.

AFAICT, this would be a matter of one check vs two checks in Xen :).
But... looking at Linux, they will also use domain == segment for ACPI
(see [1]). So, I think, they still have to use (domain, bus) to do the lookup.
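
To make the "one check vs two checks" point concrete: since one
segment/domain can span several host bridges, a bridge lookup keyed on the
(domain, bus) pair could be sketched as below. The structures are
hypothetical illustrations, not the real Xen or Linux ones:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Illustrative host-bridge descriptor: a bridge decodes a bus range
 * within one PCI domain (== segment under the Linux/ACPI assumption). */
struct pci_host_bridge {
    uint16_t domain;                 /* PCI domain / segment group */
    uint8_t  bus_start, bus_end;     /* bus range decoded by this bridge */
};

const struct pci_host_bridge *
find_host_bridge(const struct pci_host_bridge *tbl, size_t n,
                 uint16_t domain, uint8_t bus)
{
    for ( size_t i = 0; i < n; i++ )
        if ( tbl[i].domain == domain &&
             bus >= tbl[i].bus_start && bus <= tbl[i].bus_end )
            return &tbl[i];
    return NULL;  /* no bridge decodes this (domain, bus) */
}
```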

>
> > In which case, we need to use PHYSDEVOP_pci_mmcfg_reserved so
> > Dom0 and Xen can synchronize on the segment number.
>
> I was hoping we could write down the assumption somewhere that for the
> cases we care about domain == segment, and error out if it is not the
> case.

Given that we have only the domain in hand, how would you enforce that?

From this discussion, it also looks like there is a mismatch between the
implementation and the understanding on QEMU devel. So I am a bit
concerned that this is not stable and may change in future Linux versions.

IOW, we are now tying Xen to Linux. So could we implement
PHYSDEVOP_pci_mmcfg_reserved *or* introduce a new property that
really represents the segment?

Cheers,

[1] https://elixir.bootlin.com/linux/latest/source/arch/arm64/kernel/pci.c#L74



From xen-devel-bounces@lists.xenproject.org Fri Jul 24 19:36:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jul 2020 19:36:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jz3UM-0003UZ-HZ; Fri, 24 Jul 2020 19:36:14 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=qjWU=BD=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jz3UK-0003Tb-UY
 for xen-devel@lists.xenproject.org; Fri, 24 Jul 2020 19:36:12 +0000
X-Inumbo-ID: e9b40612-cde4-11ea-a46f-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e9b40612-cde4-11ea-a46f-12813bfff9fa;
 Fri, 24 Jul 2020 19:36:04 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=A4XN9PQ2j/eJlPgXgKcTkc/F0Nwrzp3pk2DluYMVySQ=; b=6idpFZLBbF3L9MzEitiZr48G7
 eB/+xBLbkOjXlgkArJwRS3uAzvCfSufeaEmXVyil4D9rr+iu++3tQHNgwlARcrEa0H1igBVlxDgIK
 /jlF9wK10r4C5+Ym0lToCp8AKH4JkA06LrQeMm2DiJmG9uTmD9c2ZrqJprV6Rj1zm0Y/E=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jz3UB-0000Gt-RK; Fri, 24 Jul 2020 19:36:03 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jz3UB-0003Xk-Gz; Fri, 24 Jul 2020 19:36:03 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jz3UB-0007XX-GO; Fri, 24 Jul 2020 19:36:03 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-152175-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 152175: all pass - PUSHED
X-Osstest-Versions-This: ovmf=50528537b2fb0ebdf32c719a0525635c93b905c2
X-Osstest-Versions-That: ovmf=e43d0884ed93ffd8044e48e8d5d2d010a46aab33
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 24 Jul 2020 19:36:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 152175 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/152175/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 50528537b2fb0ebdf32c719a0525635c93b905c2
baseline version:
 ovmf                 e43d0884ed93ffd8044e48e8d5d2d010a46aab33

Last test of basis   152157  2020-07-23 15:45:39 Z    1 days
Testing same since   152175  2020-07-24 09:41:10 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Guomin Jiang <guomin.jiang@intel.com>
  Jiang, Guomin <Guomin.Jiang@intel.com>
  Ming Tan <ming.tan@intel.com>
  Tan, Ming <ming.tan@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   e43d0884ed..50528537b2  50528537b2fb0ebdf32c719a0525635c93b905c2 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Fri Jul 24 23:01:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jul 2020 23:01:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jz6gy-0003p7-JE; Fri, 24 Jul 2020 23:01:28 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=fOER=BD=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1jz6gx-0003p2-B5
 for xen-devel@lists.xenproject.org; Fri, 24 Jul 2020 23:01:27 +0000
X-Inumbo-ID: 99e139da-ce01-11ea-a4a3-12813bfff9fa
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 99e139da-ce01-11ea-a4a3-12813bfff9fa;
 Fri, 24 Jul 2020 23:01:26 +0000 (UTC)
Received: from localhost (c-67-164-102-47.hsd1.ca.comcast.net [67.164.102.47])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256
 bits)) (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 7B32A206E3;
 Fri, 24 Jul 2020 23:01:24 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
 s=default; t=1595631685;
 bh=FAvAZJOwoTWDLLRTfCmqnQivo1eUmvdJOjqG0xIYkaM=;
 h=Date:From:To:cc:Subject:In-Reply-To:References:From;
 b=q/5pcZuF7IsG8rh1UHvYGrrH7Jh+Tf57d7iYydxrwUMTz8Zl35U7do9b5eZ1wJ//Y
 9AcUlMxUVfMZCPhcXgJektUGvELUibNW/RAqxAXBIwKKz2kte1isEsuZKrxadLYKYN
 M3t/zzv8LGYzH0r9KPMIp+PfCTNjRwBmKz8pS9Us=
Date: Fri, 24 Jul 2020 16:01:23 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Anchal Agarwal <anchalag@amazon.com>
Subject: Re: [PATCH v2 01/11] xen/manage: keep track of the on-going suspend
 mode
In-Reply-To: <20200723225745.GB32316@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
Message-ID: <alpine.DEB.2.21.2007241431280.17562@sstabellini-ThinkPad-T480s>
References: <50298859-0d0e-6eb0-029b-30df2a4ecd63@oracle.com>
 <20200715204943.GB17938@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
 <0ca3c501-e69a-d2c9-a24c-f83afd4bdb8c@oracle.com>
 <20200717191009.GA3387@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
 <5464f384-d4b4-73f0-d39e-60ba9800d804@oracle.com>
 <20200721000348.GA19610@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
 <408d3ce9-2510-2950-d28d-fdfe8ee41a54@oracle.com>
 <alpine.DEB.2.21.2007211640500.17562@sstabellini-ThinkPad-T480s>
 <20200722180229.GA32316@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
 <alpine.DEB.2.21.2007221645430.17562@sstabellini-ThinkPad-T480s>
 <20200723225745.GB32316@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: multipart/mixed; BOUNDARY="8323329-129397483-1595626299=:17562"
Content-ID: <alpine.DEB.2.21.2007241432130.17562@sstabellini-ThinkPad-T480s>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: x86@kernel.org, len.brown@intel.com, peterz@infradead.org,
 benh@kernel.crashing.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
 pavel@ucw.cz, hpa@zytor.com, tglx@linutronix.de,
 Stefano Stabellini <sstabellini@kernel.org>, eduval@amazon.com,
 mingo@redhat.com, xen-devel@lists.xenproject.org, sblbir@amazon.com,
 axboe@kernel.dk, konrad.wilk@oracle.com, bp@alien8.de,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>, jgross@suse.com,
 netdev@vger.kernel.org, linux-pm@vger.kernel.org, rjw@rjwysocki.net,
 kamatam@amazon.com, vkuznets@redhat.com, davem@davemloft.net,
 dwmw@amazon.co.uk, roger.pau@citrix.com
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--8323329-129397483-1595626299=:17562
Content-Type: text/plain; CHARSET=UTF-8
Content-Transfer-Encoding: 8BIT
Content-ID: <alpine.DEB.2.21.2007241432131.17562@sstabellini-ThinkPad-T480s>

On Thu, 23 Jul 2020, Anchal Agarwal wrote:
> On Wed, Jul 22, 2020 at 04:49:16PM -0700, Stefano Stabellini wrote:
> >
> > On Wed, 22 Jul 2020, Anchal Agarwal wrote:
> > > On Tue, Jul 21, 2020 at 05:18:34PM -0700, Stefano Stabellini wrote:
> > > > On Tue, 21 Jul 2020, Boris Ostrovsky wrote:
> > > > > >>>>>> +static int xen_setup_pm_notifier(void)
> > > > > >>>>>> +{
> > > > > >>>>>> +     if (!xen_hvm_domain())
> > > > > >>>>>> +             return -ENODEV;
> > > > > >>>>>>
> > > > > >>>>>> I forgot --- what did we decide about non-x86 (i.e. ARM)?
> > > > > >>>>> It would be great to support that however, its  out of
> > > > > >>>>> scope for this patch set.
> > > > > >>>>> I’ll be happy to discuss it separately.
> > > > > >>>>
> > > > > >>>> I wasn't implying that this *should* work on ARM but rather whether this
> > > > > >>>> will break ARM somehow (because xen_hvm_domain() is true there).
> > > > > >>>>
> > > > > >>>>
> > > > > >>> Ok, makes sense. TBH, I haven't tested this part of the code on ARM, and
> > > > > >>> the series only supports x86 guest hibernation.
> > > > > >>> Moreover, this notifier is there to distinguish between two PM
> > > > > >>> events: PM suspend and PM hibernation. Since we only care about PM
> > > > > >>> hibernation, I may just remove this code and rely on the "SHUTDOWN_SUSPEND" state.
> > > > > >>> However, I may have to fix other patches in the series where this check
> > > > > >>> appears and make it apply only to x86, right?
> > > > > >>
> > > > > >>
> > > > > >> I don't know what would happen if ARM guest tries to handle hibernation
> > > > > >> callbacks. The only ones that you are introducing are in block and net
> > > > > >> fronts and that's arch-independent.
> > > > > >>
> > > > > >>
> > > > > >> You do add a bunch of x86-specific code though (syscore ops), would
> > > > > >> something similar be needed for ARM?
> > > > > >>
> > > > > >>
> > > > > > I don't expect this to work out of the box on ARM. To start with, something
> > > > > > similar will be needed for ARM too.
> > > > > > We may still want to keep the driver code as-is.
> > > > > >
> > > > > > I understand the concern here wrt ARM; however, the support is currently
> > > > > > only proposed for x86 guests, and similar work could be carried out for ARM.
> > > > > > Also, if regular hibernation works correctly on ARM, then all that is
> > > > > > needed is to fix the Xen side of things.
> > > > > >
> > > > > > I am not sure what could be done to achieve any assurances on the ARM side
> > > > > > as far as this series is concerned.
> > > >
> > > > Just to clarify: new features don't need to work on ARM, and you don't
> > > > need to make any additional effort to make them work on ARM. The patch
> > > > series only needs to avoid breaking existing code paths (on ARM and any
> > > > other platform). It should also not make it overly difficult to implement
> > > > the ARM side of things (if there is one) at some point in the future.
> > > >
> > > > FYI drivers/xen/manage.c is compiled and working on ARM today, however
> > > > Xen suspend/resume is not supported. I don't know for sure if
> > > > guest-initiated hibernation works because I have not tested it.
> > > >
> > > >
> > > >
> > > > > If you are not sure what the effects are (or sure that it won't work) on
> > > > > ARM then I'd add IS_ENABLED(CONFIG_X86) check, i.e.
> > > > >
> > > > >
> > > > > if (!IS_ENABLED(CONFIG_X86) || !xen_hvm_domain())
> > > > >       return -ENODEV;
> > > >
> > > > That is a good principle to have and thanks for suggesting it. However,
> > > > in this specific case there is nothing in this patch that doesn't work
> > > > on ARM. From an ARM perspective I think we should enable it and
> > > > &xen_pm_notifier_block should be registered.
> > > >
> > > This question is for Boris: I think we decided to get rid of the notifier
> > > in V3, as all we need to check is the SHUTDOWN_SUSPEND state, which sounds
> > > plausible to me. So this check may go away. It may still be needed for
> > > syscore_ops callback registration.
> > > > Given that all guests are HVM guests on ARM, it should work fine as is.
> > > >
> > > >
> > > > I took a quick look at the rest of the series and everything looks fine
> > > > to me from an ARM perspective. I cannot imagine that the new freeze,
> > > > thaw, and restore callbacks for net and block are going to cause any
> > > > trouble on ARM. The two main x86-specific functions are
> > > > xen_syscore_suspend/resume, and they look trivial to implement on ARM (in
> > > > the sense that they are likely going to look exactly the same).
> > > >
> > > Yes, but for now, since things are not tested, I will put the
> > > !IS_ENABLED(CONFIG_X86) check on the syscore_ops registration part, just to
> > > be safe and not break anything.
> > > >
> > > > One question for Anchal: what's going to happen if you trigger a
> > > > hibernation, you have the new callbacks, but you are missing
> > > > xen_syscore_suspend/resume?
> > > >
> > > > Is it any worse than not having the new freeze, thaw and restore
> > > > callbacks at all and try to do a hibernation?
> > > If the callbacks are not there, I don't expect hibernation to work correctly.
> > > These callbacks take care of Xen primitives like the shared_info page, the
> > > grant table, the sched clock, and runstate time, which are important for
> > > saving the correct state of the guest and bringing it back up. Other patches
> > > in the series add all the logic to these syscore callbacks. Freeze/thaw/restore
> > > are just there at the driver level.
> > 
> > I meant the other way around :-)  Let me rephrase the question.
> > 
> > Do you think that implementing freeze/thaw/restore at the driver level
> > without having xen_syscore_suspend/resume can potentially make things
> > worse compared to not having freeze/thaw/restore at the driver level at
> > all?
> In both cases I don't expect it to work; the system may just end up in a
> different state depending on whether you register or not. Hibernation does not
> work properly without these changes on x86, at least for domU instances, and I
> am assuming the same for ARM.
> 
> If you do not register freeze/thaw/restore callbacks on ARM, then on
> invocation of xenbus_dev_suspend the default suspend/resume callbacks
> will be called for each driver, and since there is no code to save domU's
> Xen primitive state (syscore_ops), hibernation will either fail or demand a
> reboot. I do not have a setup to test the current state of ARM hibernation.
> 
> If you only register freeze/thaw/restore and no syscore_ops, it will again
> fail. Since I do not have an ARM setup running, I quickly ran a similar test
> on x86. It may not be an apples-to-apples comparison, but the instance failed
> to resume, or rather got stuck showing a huge jump in time, and required a
> reboot.
> 
> Now, if this doesn't currently happen when you trigger hibernation on ARM
> domU instances, or if the system is still alive when you trigger hibernation
> in a Xen guest, then not registering the callbacks may be a better idea. In
> that case, maybe I need to put an arch-specific check in when registering the
> freeze/thaw/restore handlers.
> 
> Hope that answers your question.

Yes, it does, thank you. I'd rather not introduce unknown regressions, so
I would recommend adding an arch-specific check on registering the
freeze/thaw/restore handlers. Maybe something like the following:

#ifdef CONFIG_X86
    .freeze = blkfront_freeze,
    .thaw = blkfront_restore,
    .restore = blkfront_restore
#endif


Maybe Boris has a better suggestion on how to do it.
--8323329-129397483-1595626299=:17562--


From xen-devel-bounces@lists.xenproject.org Fri Jul 24 23:46:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jul 2020 23:46:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jz7OZ-0007E7-2T; Fri, 24 Jul 2020 23:46:31 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=fOER=BD=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1jz7OX-0007E2-Cp
 for xen-devel@lists.xenproject.org; Fri, 24 Jul 2020 23:46:29 +0000
X-Inumbo-ID: e47cb61c-ce07-11ea-88ce-bc764e2007e4
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e47cb61c-ce07-11ea-88ce-bc764e2007e4;
 Fri, 24 Jul 2020 23:46:28 +0000 (UTC)
Received: from localhost (c-67-164-102-47.hsd1.ca.comcast.net [67.164.102.47])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256
 bits)) (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 84AFD206E3;
 Fri, 24 Jul 2020 23:46:27 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
 s=default; t=1595634387;
 bh=sFW7J/RkyNnfPfDMhsbVNhVbIhiIRPdKa0RlCygYldE=;
 h=Date:From:To:cc:Subject:In-Reply-To:References:From;
 b=e36Lin7TFrymDCxKw+eOLFZT6usDEqTFF0wEBo5mA/tZz8SswFRXHZTOJXR50r0bP
 GKoUyivPqFwlCPHKkKFRRdSzKOP6m1uwyrzj6VkPY4HZRN225PM3woxxXgiAZXFx5c
 wNerDdqJZVYqvydIOzB5l7x6zgcQKL1g+cCF0bKg=
Date: Fri, 24 Jul 2020 16:46:27 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien.grall.oss@gmail.com>
Subject: Re: [RFC PATCH v1 1/4] arm/pci: PCI setup and PCI host bridge
 discovery within XEN on ARM.
In-Reply-To: <CAJ=z9a3dXSnEBvhkHkZzV9URAGqSfdtJ1Lc838h_ViAWG3ZO4g@mail.gmail.com>
Message-ID: <alpine.DEB.2.21.2007241353450.17562@sstabellini-ThinkPad-T480s>
References: <cover.1595511416.git.rahul.singh@arm.com>
 <64ebd4ef614b36a5844c52426a4a6a4a23b1f087.1595511416.git.rahul.singh@arm.com>
 <alpine.DEB.2.21.2007231055230.17562@sstabellini-ThinkPad-T480s>
 <9f09ff42-a930-e4e3-d1c8-612ad03698ae@xen.org>
 <alpine.DEB.2.21.2007241036460.17562@sstabellini-ThinkPad-T480s>
 <40582d63-49c7-4a51-b35b-8248dfa34b66@xen.org>
 <alpine.DEB.2.21.2007241127480.17562@sstabellini-ThinkPad-T480s>
 <CAJ=z9a3dXSnEBvhkHkZzV9URAGqSfdtJ1Lc838h_ViAWG3ZO4g@mail.gmail.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: nd <nd@arm.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>, Jan Beulich <jbeulich@suse.com>,
 xen-devel <xen-devel@lists.xenproject.org>, Rahul Singh <rahul.singh@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Fri, 24 Jul 2020, Julien Grall wrote:
> On Fri, 24 Jul 2020 at 19:32, Stefano Stabellini <sstabellini@kernel.org> wrote:
> > > If they are not equal, then I fail to see why it would be useful to have this
> > > value in Xen.
> >
> > I think that's because the domain is actually more convenient to use
> > because a segment can span multiple PCI host bridges. So my
> > understanding is that a segment alone is not sufficient to identify a
> > host bridge. From a software implementation point of view it would be
> > better to use domains.
> 
> AFAICT, this would be a matter of one check vs two checks in Xen :).
> But... looking at Linux, they will also use domain == segment for ACPI
> (see [1]). So, I think, they still have to use (domain, bus) to do the lookup.
>
> > > In which case, we need to use PHYSDEVOP_pci_mmcfg_reserved so
> > > Dom0 and Xen can synchronize on the segment number.
> >
> > I was hoping we could write down the assumption somewhere that for the
> > cases we care about domain == segment, and error out if it is not the
> > case.
> 
> Given that we have only the domain in hand, how would you enforce that?
>
> From this discussion, it also looks like there is a mismatch between the
> implementation and the understanding on qemu-devel. So I am a bit
> concerned that this is not stable and may change in future Linux versions.
> 
> IOW, we are now tying Xen to Linux. So could we implement
> PHYSDEVOP_pci_mmcfg_reserved *or* introduce a new property that
> really represents the segment?

I don't think we are tying Xen to Linux. Rob has already said that
linux,pci-domain is basically a generic device tree property. And if we
look at https://www.devicetree.org/open-firmware/bindings/pci/pci2_1.pdf
"PCI domain" is described and seems to match the Linux definition.

I do think we need to understand the definitions and the differences.
Reading online [1][2], it looks like a Linux PCI domain matches a "PCI
Segment Group Number" in PCI Express, which is probably why Linux is
making the assumption that it is making.

So maybe it is OK to use domains == segments, but we need to verify this
in the specs and also clarify the terminology we use in a doc for our
own sanity --  I am hoping that Rahul can come up with a good
explanation on the topic :-)


[1] https://stackoverflow.com/questions/49050847/how-is-pci-segmentdomain-related-to-multiple-host-bridgesor-root-bridges
[2] https://wiki.osdev.org/PCI_Express


From xen-devel-bounces@lists.xenproject.org Fri Jul 24 23:52:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 24 Jul 2020 23:52:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jz7UX-000839-Oc; Fri, 24 Jul 2020 23:52:41 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=qjWU=BD=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jz7UW-000834-R5
 for xen-devel@lists.xenproject.org; Fri, 24 Jul 2020 23:52:40 +0000
X-Inumbo-ID: c093cac8-ce08-11ea-88cf-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c093cac8-ce08-11ea-88cf-bc764e2007e4;
 Fri, 24 Jul 2020 23:52:37 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=1iRUNveuDOgv0ggrQpmZ44br8Oou3BQfyulgRrVdqRU=; b=alyamnwlKP4ZlqwRQWdfEUbXM
 zR38DPYiXuXp8AwcHzxJEV4Q+CHg3Lb2kxmr1cPJnz0pVqdtdpPKUW7g6Xf76zCHP6TxPaSQIp+JA
 Hz+bZx7VNPgtWLe6aOw+82Lj+U242yP7SNHTLmZ0/2j5O83hX/YMCgxyeUl/WiBLNX/Ok=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jz7US-0005aD-R8; Fri, 24 Jul 2020 23:52:36 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jz7US-0006ln-Hi; Fri, 24 Jul 2020 23:52:36 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jz7US-0000F8-H1; Fri, 24 Jul 2020 23:52:36 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-152171-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 152171: regressions - trouble: fail/pass/starved
X-Osstest-Failures: qemu-mainline:test-amd64-amd64-qemuu-nested-intel:xen-boot/l1:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-qemuu-nested-amd:xen-boot/l1:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:windows-install:fail:regression
 qemu-mainline:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:hosts-allocate:starved:nonblocking
 qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This: qemuu=8ffa52c20d5693d454f65f2024a1494edfea65d4
X-Osstest-Versions-That: qemuu=9e3903136d9acde2fb2dd9e967ba928050a6cb4a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 24 Jul 2020 23:52:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 152171 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/152171/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-nested-intel 14 xen-boot/l1       fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-qemuu-nested-amd 14 xen-boot/l1         fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-win7-amd64 10 windows-install  fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-ws16-amd64 10 windows-install  fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-ws16-amd64 10 windows-install   fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-win7-amd64 10 windows-install   fail REGR. vs. 151065

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-rtds     16 guest-start/debian.repeat    fail  like 151065
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 151065
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 151065
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-freebsd11-amd64  2 hosts-allocate           starved n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  2 hosts-allocate           starved n/a

version targeted for testing:
 qemuu                8ffa52c20d5693d454f65f2024a1494edfea65d4
baseline version:
 qemuu                9e3903136d9acde2fb2dd9e967ba928050a6cb4a

Last test of basis   151065  2020-06-12 22:27:51 Z   42 days
Failing since        151101  2020-06-14 08:32:51 Z   40 days   56 attempts
Testing same since   152171  2020-07-24 06:53:30 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Ahmed Karaman <ahmedkhaledkaraman@gmail.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.m.mail@gmail.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Richardson <Alexander.Richardson@cl.cam.ac.uk>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Boettcher <alexander.boettcher@genode-labs.com>
  Alexander Bulekov <alxndr@bu.edu>
  Alexander Duyck <alexander.h.duyck@linux.intel.com>
  Alexandre Mergnat <amergnat@baylibre.com>
  Alexey Kardashevskiy <aik@ozlabs.ru>
  Alexey Krasikov <alex-krasikov@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Allan Peramaki <aperamak@pp1.inet.fi>
  Andrew <andrew@daynix.com>
  Andrew Jones <drjones@redhat.com>
  Andrew Melnychenko <andrew@daynix.com>
  Ani Sinha <ani.sinha@nutanix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Antoine Damhet <antoine.damhet@blade-group.com>
  Ard Biesheuvel <ardb@kernel.org>
  Artyom Tarasenko <atar4qemu@gmail.com>
  Atish Patra <atish.patra@wdc.com>
  Aurelien Jarno <aurelien@aurel32.net>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Basil Salman <basil@daynix.com>
  Basil Salman <bsalman@redhat.com>
  Beata Michalska <beata.michalska@linaro.org>
  Bin Meng <bin.meng@windriver.com>
  Bin Meng <bmeng.cn@gmail.com>
  Cameron Esfahani <dirty@apple.com>
  Catherine A. Frederick <chocola@animebitch.es>
  Cathy Zhang <cathy.zhang@intel.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chenyi Qiang <chenyi.qiang@intel.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christophe de Dinechin <dinechin@redhat.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Cindy Lu <lulu@redhat.com>
  Claudio Fontana <cfontana@suse.de>
  Cleber Rosa <crosa@redhat.com>
  Colin Xu <colin.xu@intel.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniele Buono <dbuono@linux.vnet.ibm.com>
  David Carlier <devnexen@gmail.com>
  David Edmondson <david.edmondson@oracle.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Denis V. Lunev <den@openvz.org>
  Derek Su <dereksu@qnap.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Ed Robbins <E.J.C.Robbins@kent.ac.uk>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Emilio G. Cota <cota@braap.org>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  erik-smit <erik.lucas.smit@gmail.com>
  Evgeny Yakovlev <eyakovlev@virtuozzo.com>
  fangying <fangying1@huawei.com>
  Farhan Ali <alifm@linux.ibm.com>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frank Chang <frank.chang@sifive.com>
  Geoffrey McRae <geoff@hostfission.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Giuseppe Musacchio <thatlemon@gmail.com>
  Greg Kurz <groug@kaod.org>
  Guenter Roeck <linux@roeck-us.net>
  Gustavo Romero <gromero@linux.ibm.com>
  Halil Pasic <pasic@linux.ibm.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Howard Spoelstra <hsp.cat7@gmail.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Ian Jiang <ianjiang.ict@gmail.com>
  Igor Mammedov <imammedo@redhat.com>
  Jan Kiszka <jan.kiszka@siemens.com>
  Janne Grunau <j@jannau.net>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason Wang <jasowang@redhat.com>
  Jay Zhou <jianjay.zhou@huawei.com>
  Jean-Christophe Dubois <jcd@tribudubois.net>
  Jessica Clarke <jrtc27@jrtc27.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jingqi Liu <jingqi.liu@intel.com>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Joseph Myers <joseph@codesourcery.com>
  Josh Kunz <jkz@google.com>
  Juan Quintela <quintela@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Keqian Zhu <zhukeqian1@huawei.com>
  Kevin Wolf <kwolf@redhat.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Leif Lindholm <leif@nuviainc.com>
  Leonid Bloch <lb.workbox@gmail.com>
  Leonid Bloch <lbloch@janustech.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Qiang <liq3ea@gmail.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  lichun <lichun@ruijie.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  Like Xu <like.xu@linux.intel.com>
  Lingfeng Yang <lfy@google.com>
  Lingshan zhu <lingshan.zhu@intel.com>
  Liran Alon <liran.alon@oracle.com>
  Liu Yi L <yi.l.liu@intel.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Luc Michel <luc.michel@greensocs.com>
  Lukas Straub <lukasstraub2@web.de>
  Luwei Kang <luwei.kang@intel.com>
  Maciej S. Szmigiero <maciej.szmigiero@oracle.com>
  Magnus Damm <magnus.damm@gmail.com>
  Mao Zhongyi <maozhongyi@cmss.chinamobile.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marcelo Tosatti <mtosatti@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Mario Smarduch <msmarduch@digitalocean.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Masahiro Yamada <masahiroy@kernel.org>
  Matus Kysel <mkysel@tachyum.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Maxime Coquelin <maxime.coquelin@redhat.com>
  Menno Lageman <menno.lageman@oracle.com>
  Michael Rolnik <mrolnik@gmail.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michal Privoznik <mprivozn@redhat.com>
  Michele Denber <denber@mindspring.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Olaf Hering <olaf@aepfle.de>
  Pan Nengyuan <pannengyuan@huawei.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Radoslaw Biernacki <rad@semihalf.com>
  Raphael Norwitz <raphael.norwitz@nutanix.com>
  Reza Arbab <arbab@linux.ibm.com>
  Richard Henderson <richard.henderson@linaro.org>
  Richard W.M. Jones <rjones@redhat.com>
  Riku Voipio <riku.voipio@linaro.org>
  Robert Foley <robert.foley@linaro.org>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Roman Kagan <rkagan@virtuozzo.com>
  Roman Kagan <rvkagan@yandex-team.ru>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Sarah Harris <S.E.Harris@kent.ac.uk>
  Sebastian Rasmussen <sebras@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Tao Xu <tao3.xu@intel.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tiwei Bie <tiwei.bie@intel.com>
  Tong Ho <tong.ho@xilinx.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  WangBowen <bowen.wang@intel.com>
  Wei Huang <wei.huang2@amd.com>
  Wei Wang <wei.w.wang@intel.com>
  Wentong Wu <wentong.wu@intel.com>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xie Yongji <xieyongji@bytedance.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Ying Fang <fangying1@huawei.com>
  Yoshinori Sato <ysato@users.sourceforge.jp>
  Yuri Benditovich <yuri.benditovich@daynix.com>
  Zhang Chen <chen.zhang@intel.com>
  Zheng Chuan <zhengchuan@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       starved 
 test-amd64-amd64-qemuu-freebsd12-amd64                       starved 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 31491 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Jul 25 00:02:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 25 Jul 2020 00:02:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jz7dy-00019D-Iu; Sat, 25 Jul 2020 00:02:26 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=5UE0=BE=davemloft.net=davem@srs-us1.protection.inumbo.net>)
 id 1jz7dx-000198-6o
 for xen-devel@lists.xenproject.org; Sat, 25 Jul 2020 00:02:25 +0000
X-Inumbo-ID: 1ca70e1e-ce0a-11ea-88d0-bc764e2007e4
Received: from shards.monkeyblade.net (unknown [2620:137:e000::1:9])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1ca70e1e-ce0a-11ea-88d0-bc764e2007e4;
 Sat, 25 Jul 2020 00:02:21 +0000 (UTC)
Received: from localhost (unknown [IPv6:2601:601:9f00:477::3d5])
 (using TLSv1 with cipher AES256-SHA (256/256 bits))
 (Client did not present a certificate)
 (Authenticated sender: davem-davemloft)
 by shards.monkeyblade.net (Postfix) with ESMTPSA id 961C612756FEE;
 Fri, 24 Jul 2020 16:45:35 -0700 (PDT)
Date: Fri, 24 Jul 2020 17:02:20 -0700 (PDT)
Message-Id: <20200724.170220.1275270219725381485.davem@davemloft.net>
To: andrea.righi@canonical.com
Subject: Re: [PATCH v2] xen-netfront: fix potential deadlock in xennet_remove()
From: David Miller <davem@davemloft.net>
In-Reply-To: <20200724085910.GA1043801@xps-13>
References: <20200724085910.GA1043801@xps-13>
X-Mailer: Mew version 6.8 on Emacs 26.3
Mime-Version: 1.0
Content-Type: Text/Plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
X-Greylist: Sender succeeded SMTP AUTH, not delayed by milter-greylist-4.5.12
 (shards.monkeyblade.net [149.20.54.216]);
 Fri, 24 Jul 2020 16:45:35 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: jgross@suse.com, sstabellini@kernel.org, netdev@vger.kernel.org,
 linux-kernel@vger.kernel.org, xen-devel@lists.xenproject.org, kuba@kernel.org,
 boris.ostrovsky@oracle.com
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Andrea Righi <andrea.righi@canonical.com>
Date: Fri, 24 Jul 2020 10:59:10 +0200

> There's a potential race in xennet_remove(); this is what the driver is
> doing upon unregistering a network device:
> 
>   1. state = read bus state
>   2. if state is not "Closed":
>   3.    request to set state to "Closing"
>   4.    wait for state to be set to "Closing"
>   5.    request to set state to "Closed"
>   6.    wait for state to be set to "Closed"
> 
> If the state changes to "Closed" immediately after step 1 we are stuck
> forever in step 4, because the state will never go back from "Closed" to
> "Closing".
> 
> Make sure to also check for state == "Closed" in step 4 to prevent the
> deadlock.
> 
> Also add a 5 sec timeout any time we wait for the bus state to change,
> to avoid getting stuck forever in wait_event().
> 
> Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
> ---
> Changes in v2:
>  - remove all dev_dbg() calls (as suggested by David Miller)

Applied, thank you.


From xen-devel-bounces@lists.xenproject.org Sat Jul 25 06:33:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 25 Jul 2020 06:33:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jzDjl-0001Gc-76; Sat, 25 Jul 2020 06:32:49 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=VjDR=BE=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jzDjk-0001GI-8Q
 for xen-devel@lists.xenproject.org; Sat, 25 Jul 2020 06:32:48 +0000
X-Inumbo-ID: a4777ac8-ce40-11ea-88f9-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a4777ac8-ce40-11ea-88f9-bc764e2007e4;
 Sat, 25 Jul 2020 06:32:42 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=U5jhLuC1YaKlDGwgjESw7IBpKfLJcVzYhkDxVByaTxg=; b=vno1RAtwid/K2BX3aTGKQOZL8
 3BpYDm52yh9jWcVPwFKpIuwIEAssgwCOe1bw1baqD4meW+iao7ICYeu4STlaWbF3NC30QgYk8aq6w
 RybjS8V9ncvxQJyQcGPUc0oyMzsfMEGJSnO5dAvDZi2lcYFMzJkzlCQfDxF+5hd3xvnJg=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jzDjd-00089z-EP; Sat, 25 Jul 2020 06:32:41 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jzDjd-0000XN-5m; Sat, 25 Jul 2020 06:32:41 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jzDjd-00035Q-5B; Sat, 25 Jul 2020 06:32:41 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-152186-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 152186: all pass - PUSHED
X-Osstest-Versions-This: ovmf=91e4bcb313f0c1f0f19b87b5849f5486aa076be4
X-Osstest-Versions-That: ovmf=50528537b2fb0ebdf32c719a0525635c93b905c2
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 25 Jul 2020 06:32:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 152186 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/152186/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 91e4bcb313f0c1f0f19b87b5849f5486aa076be4
baseline version:
 ovmf                 50528537b2fb0ebdf32c719a0525635c93b905c2

Last test of basis   152175  2020-07-24 09:41:10 Z    0 days
Testing same since   152186  2020-07-24 19:40:28 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Bob Feng <bob.c.feng@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   50528537b2..91e4bcb313  91e4bcb313f0c1f0f19b87b5849f5486aa076be4 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Sat Jul 25 10:00:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 25 Jul 2020 10:00:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jzGyK-0001yt-T7; Sat, 25 Jul 2020 10:00:04 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Mx1j=BE=gmail.com=julien.grall.oss@srs-us1.protection.inumbo.net>)
 id 1jzGyJ-0001nJ-Is
 for xen-devel@lists.xenproject.org; Sat, 25 Jul 2020 10:00:03 +0000
X-Inumbo-ID: 9b28dce2-ce5d-11ea-8900-bc764e2007e4
Received: from mail-wr1-x443.google.com (unknown [2a00:1450:4864:20::443])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9b28dce2-ce5d-11ea-8900-bc764e2007e4;
 Sat, 25 Jul 2020 10:00:02 +0000 (UTC)
Received: by mail-wr1-x443.google.com with SMTP id a15so10357221wrh.10
 for <xen-devel@lists.xenproject.org>; Sat, 25 Jul 2020 03:00:02 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc; bh=DQr2fn8K0lfi7wz2bcOOr7pSID9kg5rEDXkxGKuFfxk=;
 b=Of9JEDhYB10kyI1kqHj6N9i2y55NqeC9KJBpblLrWIq5PdxZJZ6d0gYVGDgX4M68Ln
 NBO/JU2qSszzrTPDDEnf7DSLsTH4w3/BlgmfRoUAubO6UnFEsaI4iLjI6t06hO9afd4m
 fS+Ns2CuWVSeFi77vNgkx2bo7k98Rj3V/Tbnv7c+BNAfcyFtNFBMh9SF0yF4O/Bp49Sz
 nDjAc0ZuNdMhXu8SmT0MQI3jX7cbQznwG2hhhNM7CvpOXJ0lcBTxy4dW92Ct679/iMaU
 rUASHWNT03vNWbAaotC29OyBKjqDHCP9HOlsDnDQ7YB+23pNLYDD9xs1lIJGQOtJWkue
 JeeA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc;
 bh=DQr2fn8K0lfi7wz2bcOOr7pSID9kg5rEDXkxGKuFfxk=;
 b=LSD0jRlKz36tD9ueGh6SwNI95WGj6uTrEs+HMLnlp1MmEBpzJMmJ1le5mwibMEDzgx
 h+bPTkh8J2LpzjMpQ7MxsM6rKz4pThMADyd6mXkOoFZKAmssrNnkn0E5p2EKl7guTpty
 WGSAEz/uPkOSz7E7OQZ11v4QEzXQfvumXbvA1ddL4krdcEdy6Wvenr17A+kDg4NYWlfS
 Olq9uTzDwoFCMo5rn2uDTuPV3edv/jg+1UPREJOVy9P0p5XGYqkNXrVCbH7A0gwv079X
 0bcaYUk0Ct1clZ8KP7AFdBn0xz1RYws8mbmjqvXoHPVchquFj3TJxkaeVN0bZp323/0Z
 enow==
X-Gm-Message-State: AOAM531ONHflZaVBWqCXmV6uxGJ89JrG55NZDpOMj+yjAsYvE0arOxMA
 JgurAypbQPzK58Zsa08c0GLqGxMfc0ET3t7iS3g=
X-Google-Smtp-Source: ABdhPJyO1goX3bYVCyEmBOJ9i3lEnj1lFFAETkQzxJIvVLWegrGS53cJrCraYBg2qvUX5jfE8OTrY3RFdfz1YnFPHIQ=
X-Received: by 2002:a05:6000:142:: with SMTP id
 r2mr13125339wrx.103.1595671201417; 
 Sat, 25 Jul 2020 03:00:01 -0700 (PDT)
MIME-Version: 1.0
References: <cover.1595511416.git.rahul.singh@arm.com>
 <64ebd4ef614b36a5844c52426a4a6a4a23b1f087.1595511416.git.rahul.singh@arm.com>
 <alpine.DEB.2.21.2007231055230.17562@sstabellini-ThinkPad-T480s>
 <9f09ff42-a930-e4e3-d1c8-612ad03698ae@xen.org>
 <alpine.DEB.2.21.2007241036460.17562@sstabellini-ThinkPad-T480s>
 <40582d63-49c7-4a51-b35b-8248dfa34b66@xen.org>
 <alpine.DEB.2.21.2007241127480.17562@sstabellini-ThinkPad-T480s>
 <CAJ=z9a3dXSnEBvhkHkZzV9URAGqSfdtJ1Lc838h_ViAWG3ZO4g@mail.gmail.com>
 <alpine.DEB.2.21.2007241353450.17562@sstabellini-ThinkPad-T480s>
In-Reply-To: <alpine.DEB.2.21.2007241353450.17562@sstabellini-ThinkPad-T480s>
From: Julien Grall <julien.grall.oss@gmail.com>
Date: Sat, 25 Jul 2020 10:59:50 +0100
Message-ID: <CAJ=z9a1RWXq3EN5DC=_279yzdsq3M0nw6+CZtKD00yBzKomcaw@mail.gmail.com>
Subject: Re: [RFC PATCH v1 1/4] arm/pci: PCI setup and PCI host bridge
 discovery within XEN on ARM.
To: Stefano Stabellini <sstabellini@kernel.org>
Content-Type: text/plain; charset="UTF-8"
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Rahul Singh <rahul.singh@arm.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>, Jan Beulich <jbeulich@suse.com>,
 xen-devel <xen-devel@lists.xenproject.org>, nd <nd@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Sat, 25 Jul 2020 at 00:46, Stefano Stabellini <sstabellini@kernel.org> wrote:
>
> On Fri, 24 Jul 2020, Julien Grall wrote:
> > On Fri, 24 Jul 2020 at 19:32, Stefano Stabellini <sstabellini@kernel.org> wrote:
> > > > If they are not equal, then I fail to see why it would be useful to have this
> > > > value in Xen.
> > >
> > > I think that's because the domain is actually more convenient to use
> > > because a segment can span multiple PCI host bridges. So my
> > > understanding is that a segment alone is not sufficient to identify a
> > > host bridge. From a software implementation point of view it would be
> > > better to use domains.
> >
> > AFAICT, this would be a matter of one check vs two checks in Xen :).
> > But... looking at Linux, they will also use domain == segment for ACPI
> > (see [1]). So, I think, they still have to use (domain, bus) to do the lookup.
> >
> > > > In which case, we need to use PHYSDEVOP_pci_mmcfg_reserved so
> > > > Dom0 and Xen can synchronize on the segment number.
> > >
> > > I was hoping we could write down the assumption somewhere that for the
> > > cases we care about domain == segment, and error out if it is not the
> > > case.
> >
> > Given that we have only the domain in hand, how would you enforce that?
> >
> > From this discussion, it also looks like there is a mismatch between the
> > implementation and the understanding on QEMU devel. So I am a bit
> > concerned that this is not stable and may change in future Linux version.
> >
> > IOW, we are now tying Xen to Linux. So could we implement
> > PHYSDEVOP_pci_mmcfg_reserved *or* introduce a new property that
> > really represents the segment?
>
> I don't think we are tying Xen to Linux. Rob has already said that
> linux,pci-domain is basically a generic device tree property.

My concern is not so much the name of the property, but the definition of it.

AFAICT, from this thread there can be two interpretation:
      - domain == segment
      - domain == (segment, bus)

> And if we
> look at https://www.devicetree.org/open-firmware/bindings/pci/pci2_1.pdf
> "PCI domain" is described and seems to match the Linux definition.
>
> I do think we need to understand the definitions and the differences.

+1

> Reading online [1][2] it looks like a Linux PCI domain matches a "PCI
> Segment Group Number" in PCI Express which is probably why Linux is
> making the assumption that it is making.
>
> So maybe it is OK to use domains == segments, but we need to verify this
> in the specs and also clarify the terminology we use in a doc for our
> own sanity --  I am hoping that Rahul can come up with a good
> explanation on the topic :-)

I am a bit confused.... You were the one arguing that domain ==
(segment, bus) in this thread. So may I ask why the interpretation
wouldn't be valid anymore?

Cheers,

> [1] https://stackoverflow.com/questions/49050847/how-is-pci-segmentdomain-related-to-multiple-host-bridgesor-root-bridges
> [2] https://wiki.osdev.org/PCI_Express


From xen-devel-bounces@lists.xenproject.org Sat Jul 25 10:34:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 25 Jul 2020 10:34:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jzHVY-0004oA-GA; Sat, 25 Jul 2020 10:34:24 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=VjDR=BE=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jzHVX-0004o5-8b
 for xen-devel@lists.xenproject.org; Sat, 25 Jul 2020 10:34:23 +0000
X-Inumbo-ID: 66350484-ce62-11ea-a513-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 66350484-ce62-11ea-a513-12813bfff9fa;
 Sat, 25 Jul 2020 10:34:20 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=sv3FUabw1O46NEusso+GDWDM1U6UagUBf3Mco7xgymY=; b=fSMmp1tj1Rzanfu5oY9DcVKSX
 N4wZ3enXbOIearrrrrcIgKUeXKQRtO4lfIioPycLFzMaDcms6+9STI+Z3V8bSP9wHNvm6O8W/6f/+
 O7OGN2QPlKAhg191f3EHcUgHoJ9PDeT6pDBB5mVksvg2pICaurUs4QS84RTBV9c0N7mtQ=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jzHVT-0005Dx-PP; Sat, 25 Jul 2020 10:34:19 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jzHVT-0003W5-7o; Sat, 25 Jul 2020 10:34:19 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jzHVT-00081k-6y; Sat, 25 Jul 2020 10:34:19 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-152183-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 152183: regressions - trouble: fail/pass/starved
X-Osstest-Failures: linux-linus:test-arm64-arm64-libvirt-xsm:guest-start/debian.repeat:fail:regression
 linux-linus:test-amd64-amd64-qemuu-nested-amd:leak-check/basis/l1(16):fail:heisenbug
 linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-qemuu-freebsd12-amd64:hosts-allocate:starved:nonblocking
 linux-linus:test-amd64-amd64-qemuu-freebsd11-amd64:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This: linux=f37e99aca03f63aa3f2bd13ceaf769455d12c4b0
X-Osstest-Versions-That: linux=1b5044021070efa3259f3e9548dc35d1eb6aa844
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 25 Jul 2020 10:34:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 152183 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/152183/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-libvirt-xsm 16 guest-start/debian.repeat fail REGR. vs. 151214

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-qemuu-nested-amd 16 leak-check/basis/l1(16) fail pass in 152162

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2 fail in 152162 never pass
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 151214
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 151214
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 151214
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 151214
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 151214
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 151214
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 151214
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 151214
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-freebsd12-amd64  2 hosts-allocate           starved n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  2 hosts-allocate           starved n/a

version targeted for testing:
 linux                f37e99aca03f63aa3f2bd13ceaf769455d12c4b0
baseline version:
 linux                1b5044021070efa3259f3e9548dc35d1eb6aa844

Last test of basis   151214  2020-06-18 02:27:46 Z   37 days
Failing since        151236  2020-06-19 19:10:35 Z   35 days   55 attempts
Testing same since   152162  2020-07-23 22:10:08 Z    1 days    2 attempts

------------------------------------------------------------
847 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       starved 
 test-amd64-amd64-qemuu-freebsd12-amd64                       starved 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 47045 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Jul 25 11:57:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 25 Jul 2020 11:57:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jzIni-00035c-S6; Sat, 25 Jul 2020 11:57:14 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=VjDR=BE=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jzIng-00035G-Mi
 for xen-devel@lists.xenproject.org; Sat, 25 Jul 2020 11:57:12 +0000
X-Inumbo-ID: f5f9f6a0-ce6d-11ea-890a-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f5f9f6a0-ce6d-11ea-890a-bc764e2007e4;
 Sat, 25 Jul 2020 11:57:06 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=6kdjFnCtst37EdhhfcNbWXeCtKaaEke33lZj/YuOHDI=; b=2pIChnSc01oWQrrfp/ZTNEmiU
 o5clJCqD4wUyUqERNr5i/CWCQFZ0o9U962hhLca+E62g8J/LmAjrm83+xj3rwqabufIuvk4lyYwxp
 M09qBp+gyueRJlK1ascyv7MOcpfEOVdfc4WMl7+IAidiIumQ03EGrSHlzpZPpD1YqwchY=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jzInZ-0006t7-GR; Sat, 25 Jul 2020 11:57:05 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jzInZ-0001Cp-3s; Sat, 25 Jul 2020 11:57:05 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jzInZ-0003mk-3G; Sat, 25 Jul 2020 11:57:05 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-152193-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 152193: regressions - FAIL
X-Osstest-Failures: libvirt:build-i386-libvirt:libvirt-build:fail:regression
 libvirt:build-arm64-libvirt:libvirt-build:fail:regression
 libvirt:build-amd64-libvirt:libvirt-build:fail:regression
 libvirt:build-armhf-libvirt:libvirt-build:fail:regression
 libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
 libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
X-Osstest-Versions-This: libvirt=e2fd95ed45439ee98362adbd4371590b0e11d35c
X-Osstest-Versions-That: libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 25 Jul 2020 11:57:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 152193 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/152193/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              e2fd95ed45439ee98362adbd4371590b0e11d35c
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z   15 days
Failing since        151818  2020-07-11 04:18:52 Z   14 days   15 attempts
Testing same since   152193  2020-07-25 04:18:58 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Bastien Orivel <bastien.orivel@diateam.net>
  Bihong Yu <yubihong@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  Ján Tomko <jtomko@redhat.com>
  Laine Stump <laine@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Michal Privoznik <mprivozn@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Prathamesh Chavan <pc44800@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Ryan Schmidt <git@ryandesign.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Weblate <noreply@weblate.org>
  Yi Wang <wang.yi59@zte.com.cn>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 2758 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Jul 25 14:12:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 25 Jul 2020 14:12:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jzKuG-0006S1-B2; Sat, 25 Jul 2020 14:12:08 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=179P=BE=lucina.net=martin@srs-us1.protection.inumbo.net>)
 id 1jzKuE-0006Rk-FY
 for xen-devel@lists.xenproject.org; Sat, 25 Jul 2020 14:12:06 +0000
X-Inumbo-ID: cd619335-ce80-11ea-8935-bc764e2007e4
Received: from smtp.lucina.net (unknown [62.176.169.44])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id cd619335-ce80-11ea-8935-bc764e2007e4;
 Sat, 25 Jul 2020 14:11:59 +0000 (UTC)
Received: from nodbug.lucina.net (78-141-76-187.dynamic.orange.sk
 [78.141.76.187])
 by smtp.lucina.net (Postfix) with ESMTPSA id 6C587122804;
 Sat, 25 Jul 2020 16:11:58 +0200 (CEST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=lucina.net;
 s=dkim-201811; t=1595686318;
 bh=dJpr3t/Bj4rG8E80K4sYxz7KiwigcQD6C2SSO9jK4Go=;
 h=Date:From:To:Cc:Subject:From;
 b=J+JHXndnSp4CdkOvsT84v5Z3sryLP8ktj1SQiH45oyp4TbvmZUd3eY+2jU3/f4xs6
 9v5EddytEVhRUF7ECColBXXS2ZW9ETuGyAS0tQs0zSEh2gQazN3zyXNX+oTTn3T25L
 oztnTdFZrxYDOxyVz2wOXNhG/+vEYYiDXiMpHGFpq+CM6GKYwUvl3pRUm99kJKNZ24
 qZCV0oSJskPyFFFpAziTcI6vigUWdVXr24n0QfUq6z6Vnpz//+pFuPrw7buCeO5e7U
 OuTsDuMe++JM4nVt1VgycBBFkbmjXuBXoC4bhqYIg2flvZxHsImmop5xMd5gewpV9s
 ClQme5k1YUOHg==
Received: by nodbug.lucina.net (Postfix, from userid 1000)
 id 5B0122684962; Sat, 25 Jul 2020 16:11:58 +0200 (CEST)
Date: Sat, 25 Jul 2020 16:11:58 +0200
From: Martin Lucina <martin@lucina.net>
To: mirageos-devel@lists.xenproject.org
Subject: Call for testing: New MirageOS Xen platform stack
Message-ID: <20200725141158.GD27205@nodbug.lucina.net>
Mail-Followup-To: Martin Lucina <martin@lucina.net>,
 mirageos-devel@lists.xenproject.org, xen-devel@lists.xenproject.org
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
User-Agent: Mutt/1.10.1 (2018-07-13)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hello,

over the past couple of months we have developed a new Xen platform stack [1]
for MirageOS, replacing our use of Mini-OS for the low-level C startup and
interfaces to Xen, and aligning the entire stack with our existing
Solo5-based backends as much as is practical.

The implementation is now functionally complete, including the dependent
packages/driver implementation used by the majority of MirageOS
unikernels. The new stack brings support for running MirageOS unikernels as
PVHv2 domUs, and various long-awaited improvements to the overall security
posture for MirageOS unikernels on Xen [2].
 
As this is a from-scratch rewrite, I'd like to invite folks to test and
review it before we start the release train. The plan is to release a
version of Mirage 3.x with the new Xen stack shortly after the summer.

Please note that the new stack builds MirageOS unikernels exclusively as
PVHv2 domUs and thus requires Xen 4.10 or later.  Also, we have removed
support for ARM32, as it never gained much traction, so the current
implementation is x86_64-only.

For Qubes OS users: the current release of Qubes OS ships with Xen 4.8,
which the new stack does not support, so you will need to wait until
testing builds of Qubes OS 4.1 are available.

If you'd like to test your unikernels against the new stack, you can do so
by installing MirageOS from scratch in a new OPAM switch, using the OPAM
repository containing the updated packages as follows:

    opam repo add mirage-dev-3.x+xen-pvh-via-solo5 git+https://github.com/mirage/mirage-dev.git#3.x+xen-pvh-via-solo5

Then build your MirageOS unikernels for the 'xen' target as usual.
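
For reference, the end-to-end flow might look something like the sketch
below. The switch name, compiler version, and project directory are
illustrative (not part of the announcement), and the mirage invocations
assume a standard Mirage 3.x workflow:

```shell
# Create a fresh opam switch so the test installation stays isolated
# (switch name and OCaml version are illustrative).
opam switch create mirage-xen-test 4.10.0
eval $(opam env)

# Add the opam repository containing the updated packages,
# as given in the announcement.
opam repo add mirage-dev-3.x+xen-pvh-via-solo5 \
    git+https://github.com/mirage/mirage-dev.git#3.x+xen-pvh-via-solo5

# Install the mirage tool and build a unikernel for the 'xen' target
# as usual (replace my-unikernel/ with your own project directory).
opam install mirage
cd my-unikernel/
mirage configure -t xen
make depend
make
```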

Please report any failures and successes here, or in the overall tracking
issue on Github [1], where you can also find more details on what has
changed from a feature and interface point of view. Note that unikernels or
libraries which access Xen-specific MirageOS interfaces may need to be
updated, see [2] for details.

Martin

[1] https://github.com/mirage/mirage/issues/1159
[2] https://github.com/mirage/mirage-xen/pull/23


From xen-devel-bounces@lists.xenproject.org Sat Jul 25 14:54:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 25 Jul 2020 14:54:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jzLYt-0001TE-Ip; Sat, 25 Jul 2020 14:54:07 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=VjDR=BE=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jzLYs-0001Sr-Af
 for xen-devel@lists.xenproject.org; Sat, 25 Jul 2020 14:54:06 +0000
X-Inumbo-ID: a8be18db-ce86-11ea-a57d-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a8be18db-ce86-11ea-a57d-12813bfff9fa;
 Sat, 25 Jul 2020 14:53:55 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=w20kTLkbc27nMtisbBfGAtZF+2ldZciQQmcSSnLXhqY=; b=VQPz6a0EKVQ2yE1L97hhN1ZL3
 wcLvsq/Ip6OjgJ5xZRoav2MC6BEYchKhEOhWOnFgKgzdNInF8oYBNI1tWYJxyNCvT9MVFTATkoUTI
 JW8iruvy6Yr6fAWv6NYp7Mirqfn0jovXeTfEIORG6hPTeGsgpuKSjexhB8mguMblT0Too=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jzLYh-00023A-1t; Sat, 25 Jul 2020 14:53:55 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jzLYg-0000CX-PS; Sat, 25 Jul 2020 14:53:54 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jzLYg-0007NO-Os; Sat, 25 Jul 2020 14:53:54 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-152189-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 152189: regressions - trouble: fail/pass/starved
X-Osstest-Failures: qemu-mainline:test-amd64-amd64-qemuu-nested-intel:xen-boot/l1:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-qemuu-nested-amd:xen-boot/l1:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:windows-install:fail:regression
 qemu-mainline:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:hosts-allocate:starved:nonblocking
 qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This: qemuu=7adfbea8fd1efce36019a0c2f198ca73be9d3f18
X-Osstest-Versions-That: qemuu=9e3903136d9acde2fb2dd9e967ba928050a6cb4a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 25 Jul 2020 14:53:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 152189 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/152189/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-nested-intel 14 xen-boot/l1       fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-qemuu-nested-amd 14 xen-boot/l1         fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-win7-amd64 10 windows-install  fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-ws16-amd64 10 windows-install  fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-ws16-amd64 10 windows-install   fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-win7-amd64 10 windows-install   fail REGR. vs. 151065

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-rtds     16 guest-start/debian.repeat    fail  like 151065
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 151065
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 151065
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-freebsd11-amd64  2 hosts-allocate           starved n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  2 hosts-allocate           starved n/a

version targeted for testing:
 qemuu                7adfbea8fd1efce36019a0c2f198ca73be9d3f18
baseline version:
 qemuu                9e3903136d9acde2fb2dd9e967ba928050a6cb4a

Last test of basis   151065  2020-06-12 22:27:51 Z   42 days
Failing since        151101  2020-06-14 08:32:51 Z   41 days   57 attempts
Testing same since   152189  2020-07-25 00:08:21 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Ahmed Karaman <ahmedkhaledkaraman@gmail.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.m.mail@gmail.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Richardson <Alexander.Richardson@cl.cam.ac.uk>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Boettcher <alexander.boettcher@genode-labs.com>
  Alexander Bulekov <alxndr@bu.edu>
  Alexander Duyck <alexander.h.duyck@linux.intel.com>
  Alexandre Mergnat <amergnat@baylibre.com>
  Alexey Kardashevskiy <aik@ozlabs.ru>
  Alexey Krasikov <alex-krasikov@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Allan Peramaki <aperamak@pp1.inet.fi>
  Andrew <andrew@daynix.com>
  Andrew Jones <drjones@redhat.com>
  Andrew Melnychenko <andrew@daynix.com>
  Ani Sinha <ani.sinha@nutanix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Antoine Damhet <antoine.damhet@blade-group.com>
  Ard Biesheuvel <ardb@kernel.org>
  Artyom Tarasenko <atar4qemu@gmail.com>
  Atish Patra <atish.patra@wdc.com>
  Aurelien Jarno <aurelien@aurel32.net>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Basil Salman <basil@daynix.com>
  Basil Salman <bsalman@redhat.com>
  Beata Michalska <beata.michalska@linaro.org>
  Bin Meng <bin.meng@windriver.com>
  Bin Meng <bmeng.cn@gmail.com>
  Cameron Esfahani <dirty@apple.com>
  Catherine A. Frederick <chocola@animebitch.es>
  Cathy Zhang <cathy.zhang@intel.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chenyi Qiang <chenyi.qiang@intel.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christophe de Dinechin <dinechin@redhat.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Cindy Lu <lulu@redhat.com>
  Claudio Fontana <cfontana@suse.de>
  Cleber Rosa <crosa@redhat.com>
  Colin Xu <colin.xu@intel.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniele Buono <dbuono@linux.vnet.ibm.com>
  David Carlier <devnexen@gmail.com>
  David Edmondson <david.edmondson@oracle.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Denis V. Lunev <den@openvz.org>
  Derek Su <dereksu@qnap.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Ed Robbins <E.J.C.Robbins@kent.ac.uk>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Emilio G. Cota <cota@braap.org>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  erik-smit <erik.lucas.smit@gmail.com>
  Evgeny Yakovlev <eyakovlev@virtuozzo.com>
  fangying <fangying1@huawei.com>
  Farhan Ali <alifm@linux.ibm.com>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frank Chang <frank.chang@sifive.com>
  Geoffrey McRae <geoff@hostfission.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Giuseppe Musacchio <thatlemon@gmail.com>
  Greg Kurz <groug@kaod.org>
  Guenter Roeck <linux@roeck-us.net>
  Gustavo Romero <gromero@linux.ibm.com>
  Halil Pasic <pasic@linux.ibm.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Howard Spoelstra <hsp.cat7@gmail.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Ian Jiang <ianjiang.ict@gmail.com>
  Igor Mammedov <imammedo@redhat.com>
  Jan Kiszka <jan.kiszka@siemens.com>
  Janne Grunau <j@jannau.net>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason Wang <jasowang@redhat.com>
  Jay Zhou <jianjay.zhou@huawei.com>
  Jean-Christophe Dubois <jcd@tribudubois.net>
  Jessica Clarke <jrtc27@jrtc27.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jingqi Liu <jingqi.liu@intel.com>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Joseph Myers <joseph@codesourcery.com>
  Josh Kunz <jkz@google.com>
  Juan Quintela <quintela@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Keqian Zhu <zhukeqian1@huawei.com>
  Kevin Wolf <kwolf@redhat.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Leif Lindholm <leif@nuviainc.com>
  Leonid Bloch <lb.workbox@gmail.com>
  Leonid Bloch <lbloch@janustech.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Qiang <liq3ea@gmail.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  lichun <lichun@ruijie.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  Like Xu <like.xu@linux.intel.com>
  Lingfeng Yang <lfy@google.com>
  Lingshan zhu <lingshan.zhu@intel.com>
  Liran Alon <liran.alon@oracle.com>
  Liu Yi L <yi.l.liu@intel.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Luc Michel <luc.michel@greensocs.com>
  Lukas Straub <lukasstraub2@web.de>
  Luwei Kang <luwei.kang@intel.com>
  Maciej S. Szmigiero <maciej.szmigiero@oracle.com>
  Magnus Damm <magnus.damm@gmail.com>
  Mao Zhongyi <maozhongyi@cmss.chinamobile.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marcelo Tosatti <mtosatti@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Mario Smarduch <msmarduch@digitalocean.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Masahiro Yamada <masahiroy@kernel.org>
  Matus Kysel <mkysel@tachyum.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Maxime Coquelin <maxime.coquelin@redhat.com>
  Menno Lageman <menno.lageman@oracle.com>
  Michael Rolnik <mrolnik@gmail.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michal Privoznik <mprivozn@redhat.com>
  Michele Denber <denber@mindspring.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Olaf Hering <olaf@aepfle.de>
  Pan Nengyuan <pannengyuan@huawei.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Radoslaw Biernacki <rad@semihalf.com>
  Raphael Norwitz <raphael.norwitz@nutanix.com>
  Reza Arbab <arbab@linux.ibm.com>
  Richard Henderson <richard.henderson@linaro.org>
  Richard W.M. Jones <rjones@redhat.com>
  Riku Voipio <riku.voipio@linaro.org>
  Robert Foley <robert.foley@linaro.org>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Roman Kagan <rkagan@virtuozzo.com>
  Roman Kagan <rvkagan@yandex-team.ru>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Sarah Harris <S.E.Harris@kent.ac.uk>
  Sebastian Rasmussen <sebras@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Tao Xu <tao3.xu@intel.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tiwei Bie <tiwei.bie@intel.com>
  Tong Ho <tong.ho@xilinx.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  WangBowen <bowen.wang@intel.com>
  Wei Huang <wei.huang2@amd.com>
  Wei Wang <wei.w.wang@intel.com>
  Wentong Wu <wentong.wu@intel.com>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xie Yongji <xieyongji@bytedance.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Ying Fang <fangying1@huawei.com>
  Yoshinori Sato <ysato@users.sourceforge.jp>
  Yuri Benditovich <yuri.benditovich@daynix.com>
  Zhang Chen <chen.zhang@intel.com>
  Zheng Chuan <zhengchuan@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       starved 
 test-amd64-amd64-qemuu-freebsd12-amd64                       starved 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 31667 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Jul 25 17:54:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 25 Jul 2020 17:54:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jzOMx-0008Pm-S5; Sat, 25 Jul 2020 17:53:59 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=VjDR=BE=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jzOMw-0008Ph-ED
 for xen-devel@lists.xenproject.org; Sat, 25 Jul 2020 17:53:58 +0000
X-Inumbo-ID: cf802e04-ce9f-11ea-a5c5-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id cf802e04-ce9f-11ea-a5c5-12813bfff9fa;
 Sat, 25 Jul 2020 17:53:56 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=RXQr/8nAWfufg2gtOZbMlvMwgsa/YNn6yWvsRd47dCo=; b=Lxy2Rn1Gvq7SE25FUpu5yocga
 I9royfYCSZPgsWt7xFggv5K/uuq0mZqrrjnZ9yQaaQlGfmJWvnVc8HpgC9YO7I8MfumEdJPZewrdJ
 ftKdV4iae5F03BLnWc+AYRhhMY5R6TUgGQdq/5cNwggnXjucQhc1FU2uQp4aCZ6Z90vQ8=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jzOMu-0006DK-0Z; Sat, 25 Jul 2020 17:53:56 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jzOMt-0008KD-Ki; Sat, 25 Jul 2020 17:53:55 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jzOMt-0001da-KB; Sat, 25 Jul 2020 17:53:55 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-152194-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 152194: all pass - PUSHED
X-Osstest-Versions-This: ovmf=8c30327debb28c0b6cfa2106b736774e0b20daac
X-Osstest-Versions-That: ovmf=91e4bcb313f0c1f0f19b87b5849f5486aa076be4
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 25 Jul 2020 17:53:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 152194 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/152194/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 8c30327debb28c0b6cfa2106b736774e0b20daac
baseline version:
 ovmf                 91e4bcb313f0c1f0f19b87b5849f5486aa076be4

Last test of basis   152186  2020-07-24 19:40:28 Z    0 days
Testing same since   152194  2020-07-25 06:34:31 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Guomin Jiang <guomin.jiang@intel.com>
  Laszlo Ersek <lersek@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   91e4bcb313..8c30327deb  8c30327debb28c0b6cfa2106b736774e0b20daac -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Sun Jul 26 00:18:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 26 Jul 2020 00:18:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jzUMV-0006nA-Sg; Sun, 26 Jul 2020 00:17:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=v6JZ=BF=infradead.org=rdunlap@srs-us1.protection.inumbo.net>)
 id 1jzUMS-0006n5-Lh
 for xen-devel@lists.xenproject.org; Sun, 26 Jul 2020 00:17:53 +0000
X-Inumbo-ID: 6f3acfe7-ced5-11ea-89d6-bc764e2007e4
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6f3acfe7-ced5-11ea-89d6-bc764e2007e4;
 Sun, 26 Jul 2020 00:17:49 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
 Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:Content-Type:Content-ID:
 Content-Description:In-Reply-To:References;
 bh=bIzdukHQ+ATBfHLpRDrfYN1nk7nF43j1x33WTc27/H4=; b=Ye5o+OVzQqZPY09OgVKgkiYRox
 UzF1lfxQ5ygjF+dzA0Q5eufEWU4AfBfMUITe3VkQSKMNvm51ye+yR1luQU/vKkfsIzXAKS+r+TUHk
 TwN9NSglqWFESQrkIMfPPTLbobEmupZKolAUOWTZ2BKGY4HyLuyHFmUNvmx9MhjwUl1YuFeDjWOCC
 GfCYViJ2PQCEHqMBiNlTwtUv79GAPuJWLIL9iGyS1MEBVGKQXdFArIN9tFFA+zjhwkMcvVt0ejnTm
 fXpBlVTdLclXblKYHSpnHEKBSywW5TjBs2+91hHiPs3znL9C4uMTxgLy/CZcwYoMWGx8e4S86u/nh
 hsAUmfyw==;
Received: from [2601:1c0:6280:3f0::19c2] (helo=smtpauth.infradead.org)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1jzUMB-0002Mn-Kk; Sun, 26 Jul 2020 00:17:36 +0000
From: Randy Dunlap <rdunlap@infradead.org>
To: linux-kernel@vger.kernel.org
Subject: [PATCH] xen: hypercall.h: fix duplicated word
Date: Sat, 25 Jul 2020 17:17:31 -0700
Message-Id: <20200726001731.19540-1-rdunlap@infradead.org>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Randy Dunlap <rdunlap@infradead.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Change the repeated word "as" to "as a".

Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: xen-devel@lists.xenproject.org
---
 arch/x86/include/asm/xen/hypercall.h |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

--- linux-next-20200720.orig/arch/x86/include/asm/xen/hypercall.h
+++ linux-next-20200720/arch/x86/include/asm/xen/hypercall.h
@@ -82,7 +82,7 @@ struct xen_dm_op_buf;
  *     - clobber the rest
  *
  * The result certainly isn't pretty, and it really shows up cpp's
- * weakness as as macro language.  Sorry.  (But let's just give thanks
+ * weakness as a macro language.  Sorry.  (But let's just give thanks
  * there aren't more than 5 arguments...)
  */
 


From xen-devel-bounces@lists.xenproject.org Sun Jul 26 01:17:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 26 Jul 2020 01:17:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jzVHr-0004Sb-F2; Sun, 26 Jul 2020 01:17:11 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=s9Qx=BF=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jzVHq-0004SE-Gz
 for xen-devel@lists.xenproject.org; Sun, 26 Jul 2020 01:17:10 +0000
X-Inumbo-ID: b55d1fc6-cedd-11ea-a61e-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b55d1fc6-cedd-11ea-a61e-12813bfff9fa;
 Sun, 26 Jul 2020 01:17:01 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=hcIFsdvHWtg2leRc/Q4dTYtumSL44SzN4oP9oaV7i/4=; b=f6F2CcFEUn79qR2sTINpqowcg
 dwL7yRAjSCKZoN6MJhybRzM0Fe1YO6yY1wiYeV022XsFR1dqM12ypFuC075A8oMIymBPXVwJ4Q2Yw
 pI1Bo9Ub1xw2ImVfn2ee31HRKLK8jIw7dL9YQaQdWV3zdAEkComBxMuVc1yDmaq63MHXU=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jzVHg-0000Pk-MO; Sun, 26 Jul 2020 01:17:00 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jzVHg-0004CX-30; Sun, 26 Jul 2020 01:17:00 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jzVHg-00080e-2O; Sun, 26 Jul 2020 01:17:00 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-152197-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 152197: regressions - trouble: fail/pass/starved
X-Osstest-Failures: linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-boot:fail:regression
 linux-linus:test-arm64-arm64-libvirt-xsm:guest-start/debian.repeat:fail:regression
 linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-qemuu-freebsd12-amd64:hosts-allocate:starved:nonblocking
 linux-linus:test-amd64-amd64-qemuu-freebsd11-amd64:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This: linux=23ee3e4e5bd27bdbc0f1785eef7209ce872794c7
X-Osstest-Versions-That: linux=1b5044021070efa3259f3e9548dc35d1eb6aa844
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 26 Jul 2020 01:17:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 152197 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/152197/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-boot          fail REGR. vs. 151214
 test-arm64-arm64-libvirt-xsm 16 guest-start/debian.repeat fail REGR. vs. 151214

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 151214
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 151214
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 151214
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 151214
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 151214
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 151214
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 151214
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 151214
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-freebsd12-amd64  2 hosts-allocate           starved n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  2 hosts-allocate           starved n/a

version targeted for testing:
 linux                23ee3e4e5bd27bdbc0f1785eef7209ce872794c7
baseline version:
 linux                1b5044021070efa3259f3e9548dc35d1eb6aa844

Last test of basis   151214  2020-06-18 02:27:46 Z   37 days
Failing since        151236  2020-06-19 19:10:35 Z   36 days   56 attempts
Testing same since   152197  2020-07-25 10:38:51 Z    0 days    1 attempts

------------------------------------------------------------
865 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       starved 
 test-amd64-amd64-qemuu-freebsd12-amd64                       starved 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 48843 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Jul 26 03:26:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 26 Jul 2020 03:26:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jzXIH-0007Ap-OE; Sun, 26 Jul 2020 03:25:45 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=s9Qx=BF=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jzXIG-0007AP-CU
 for xen-devel@lists.xenproject.org; Sun, 26 Jul 2020 03:25:44 +0000
X-Inumbo-ID: aa3625d6-ceef-11ea-a630-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id aa3625d6-ceef-11ea-a630-12813bfff9fa;
 Sun, 26 Jul 2020 03:25:33 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Message-Id:Subject:To:Sender:
 Reply-To:Cc:MIME-Version:Content-Type:Content-Transfer-Encoding:Content-ID:
 Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc
 :Resent-Message-ID:In-Reply-To:References:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=Flj7knaDshLC8uhqzJB2Q3IdnFBVGNVSp9NXfmZg2EE=; b=roqFQxf0zEV/LgAlLSq9wF+43R
 M1HD0aPlWgTqft1udVzy2RH+CAwsbk7L8F/51qTl48loznr4MPAJw0emBYPfn1024/Uzpz1vrR0LY
 q6ZY2TlGINkklms0tpECpqpwBlSvqNky2/grRQQeJaPCtPHP8kHO10xBgi/dZ12jCvo8=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jzXI5-0003QW-3f; Sun, 26 Jul 2020 03:25:33 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jzXI4-0001lE-Q3; Sun, 26 Jul 2020 03:25:32 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jzXI4-0002QC-PW; Sun, 26 Jul 2020 03:25:32 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Subject: [qemu-mainline bisection] complete test-amd64-amd64-qemuu-nested-intel
Message-Id: <E1jzXI4-0002QC-PW@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 26 Jul 2020 03:25:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

branch xen-unstable
xenbranch xen-unstable
job test-amd64-amd64-qemuu-nested-intel
testid xen-boot/l1

Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://git.qemu.org/qemu.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  qemuu git://git.qemu.org/qemu.git
  Bug introduced:  da278d58a092bfcc4e36f1e274229c1468dea731
  Bug not present: 23accdf162dcccb9fec9585a64ad01a87b13da5c
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/152207/


  commit da278d58a092bfcc4e36f1e274229c1468dea731
  Author: Philippe Mathieu-Daudé <philmd@redhat.com>
  Date:   Fri May 8 12:02:22 2020 +0200
  
      accel: Move Xen accelerator code under accel/xen/
      
      This code is not related to hardware emulation.
      Move it under accel/ with the other hypervisors.
      
      Reviewed-by: Paul Durrant <paul@xen.org>
      Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
      Message-Id: <20200508100222.7112-1-philmd@redhat.com>
      Reviewed-by: Juan Quintela <quintela@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/qemu-mainline/test-amd64-amd64-qemuu-nested-intel.xen-boot--l1.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/qemu-mainline/test-amd64-amd64-qemuu-nested-intel.xen-boot--l1 --summary-out=tmp/152207.bisection-summary --basis-template=151065 --blessings=real,real-bisect qemu-mainline test-amd64-amd64-qemuu-nested-intel xen-boot/l1
Searching for failure / basis pass:
 152189 fail [host=albana0] / 151065 [host=fiano0] 151047 [host=albana1] 150970 [host=elbling0] 150930 [host=huxelrebe1] 150916 [host=chardonnay0] 150909 [host=elbling1] 150899 [host=chardonnay1] 150895 [host=godello0] 150831 [host=fiano1] 150694 [host=debina0] 150631 [host=huxelrebe0] 150608 [host=godello1] 150593 [host=godello0] 150585 ok.
Failure / basis pass flights: 152189 / 150585
(tree with no url: minios)
Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://git.qemu.org/qemu.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git
Latest c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 50528537b2fb0ebdf32c719a0525635c93b905c2 3c659044118e34603161457db9934a34f816d78b 7adfbea8fd1efce36019a0c2f198ca73be9d3f18 6ada2285d9918859699c92e09540e023e0a16054 8c4532f19d6925538fb0c938f7de9a97da8c5c3b
Basis pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 568eee7cf319fa95183c8d3b5e8dcf6e078ab8b3 3c659044118e34603161457db9934a34f816d78b ce20db593f50752badbc94d6a96e4576aa4a2443 2e3de6253422112ae43e608661ba94ea6b345694 1497e78068421d83956f8e82fb6e1bf1fc3b1199
Generating revisions with ./adhoc-revtuple-generator  git://xenbits.xen.org/linux-pvops.git#c3038e718a19fc596f7b1baba0f83d5146dc7784-c3038e718a19fc596f7b1baba0f83d5146dc7784 git://xenbits.xen.org/osstest/linux-firmware.git#c530a75c1e6a472b0eb9558310b518f0dfcd8860-c530a75c1e6a472b0eb9558310b518f0dfcd8860 git://xenbits.xen.org/osstest/ovmf.git#568eee7cf319fa95183c8d3b5e8dcf6e078ab8b3-50528537b2fb0ebdf32c719a0525635c93b905c2 git://xenbits.xen.org/qemu-xen-traditional.git#3c659044118e34603161457db9934a34f816d78b-3c659044118e34603161457db9934a34f816d78b git://git.qemu.org/qemu.git#ce20db593f50752badbc94d6a96e4576aa4a2443-7adfbea8fd1efce36019a0c2f198ca73be9d3f18 git://xenbits.xen.org/osstest/seabios.git#2e3de6253422112ae43e608661ba94ea6b345694-6ada2285d9918859699c92e09540e023e0a16054 git://xenbits.xen.org/xen.git#1497e78068421d83956f8e82fb6e1bf1fc3b1199-8c4532f19d6925538fb0c938f7de9a97da8c5c3b
Use of uninitialized value $parents in array dereference at ./adhoc-revtuple-generator line 465.
Use of uninitialized value in concatenation (.) or string at ./adhoc-revtuple-generator line 465.
Use of uninitialized value $parents in array dereference at ./adhoc-revtuple-generator line 465.
Use of uninitialized value in concatenation (.) or string at ./adhoc-revtuple-generator line 465.
Use of uninitialized value $parents in array dereference at ./adhoc-revtuple-generator line 465.
Use of uninitialized value in concatenation (.) or string at ./adhoc-revtuple-generator line 465.
Use of uninitialized value $parents in array dereference at ./adhoc-revtuple-generator line 465.
Use of uninitialized value in concatenation (.) or string at ./adhoc-revtuple-generator line 465.
Use of uninitialized value $parents in array dereference at ./adhoc-revtuple-generator line 465.
Use of uninitialized value in concatenation (.) or string at ./adhoc-revtuple-generator line 465.
Loaded 67918 nodes in revision graph
Searching for test results:
 150532 [host=chardonnay0]
 150585 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 568eee7cf319fa95183c8d3b5e8dcf6e078ab8b3 3c659044118e34603161457db9934a34f816d78b ce20db593f50752badbc94d6a96e4576aa4a2443 2e3de6253422112ae43e608661ba94ea6b345694 1497e78068421d83956f8e82fb6e1bf1fc3b1199
 150593 [host=godello0]
 150631 [host=huxelrebe0]
 150608 [host=godello1]
 150694 [host=debina0]
 150831 [host=fiano1]
 150909 [host=elbling1]
 150930 [host=huxelrebe1]
 150916 [host=chardonnay0]
 150895 [host=godello0]
 150899 [host=chardonnay1]
 150970 [host=elbling0]
 151047 [host=albana1]
 151160 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 568eee7cf319fa95183c8d3b5e8dcf6e078ab8b3 3c659044118e34603161457db9934a34f816d78b ce20db593f50752badbc94d6a96e4576aa4a2443 2e3de6253422112ae43e608661ba94ea6b345694 1497e78068421d83956f8e82fb6e1bf1fc3b1199
 151101 fail irrelevant
 151065 [host=fiano0]
 151149 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9af1064995d479df92e399a682ba7e4b2fc78415 3c659044118e34603161457db9934a34f816d78b 7d3660e79830a069f1848bb4fa1cdf8f666424fb 2e3de6253422112ae43e608661ba94ea6b345694 b91825f628c9a62cf2a3a0d972ea81484a8b7fce
 151212 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 14c7ed8b51f60097ad771277da69f74b22a7a759 3c659044118e34603161457db9934a34f816d78b b889212973dabee119a1ab21326a27fc51b88d6d 2e3de6253422112ae43e608661ba94ea6b345694 6a49b9a7920c82015381740905582b666160d955
 151168 fail irrelevant
 151221 blocked irrelevant
 151171 pass irrelevant
 151193 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ca407c7246bf405da6d9b1b9d93e5e7f17b4b1f9 3c659044118e34603161457db9934a34f816d78b 71b04329c4f7d5824a289ca5225e1883a278cf3b 2e3de6253422112ae43e608661ba94ea6b345694 e181db8ba4e0797b8f9b55996adfa71ffb5b4081
 151172 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 e1d24410da356731da70b3334f86343e11e207d2 3c659044118e34603161457db9934a34f816d78b 470dd165d152ff7ceac61c7b71c2b89220b3aad7 2e3de6253422112ae43e608661ba94ea6b345694 6a49b9a7920c82015381740905582b666160d955
 151215 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 14c7ed8b51f60097ad771277da69f74b22a7a759 3c659044118e34603161457db9934a34f816d78b da278d58a092bfcc4e36f1e274229c1468dea731 2e3de6253422112ae43e608661ba94ea6b345694 6a49b9a7920c82015381740905582b666160d955
 151174 pass irrelevant
 151176 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9af1064995d479df92e399a682ba7e4b2fc78415 3c659044118e34603161457db9934a34f816d78b 7d3660e79830a069f1848bb4fa1cdf8f666424fb 2e3de6253422112ae43e608661ba94ea6b345694 b91825f628c9a62cf2a3a0d972ea81484a8b7fce
 151177 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3ee4f6cb360a877d171f2f9bb76b0d46d2cfa985 3c659044118e34603161457db9934a34f816d78b 9f1f264edbdf5516d6f208497310b3eedbc7b74c 2e3de6253422112ae43e608661ba94ea6b345694 2995d0afdf2d3fb44d07eada088db3613741db1e
 151178 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 567bc4b4ae8a975791382dd30ac413bc0d3ce88c 3c659044118e34603161457db9934a34f816d78b eea8f5df4ecc607d64f091b8d916fcc11a697541 2e3de6253422112ae43e608661ba94ea6b345694 2995d0afdf2d3fb44d07eada088db3613741db1e
 151179 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 6ff7c838d09224dd4e4c9b5b93152d8db1b19740 3c659044118e34603161457db9934a34f816d78b 86e8c353f705f14f2f2fd7a6195cefa431aa24d9 2e3de6253422112ae43e608661ba94ea6b345694 058023b343d4e366864831db46e9b438e9e3a178
 151180 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 567bc4b4ae8a975791382dd30ac413bc0d3ce88c 3c659044118e34603161457db9934a34f816d78b 6345d7e2aeb6f7bbaa9c1e7e94e21fccf9453c70 2e3de6253422112ae43e608661ba94ea6b345694 2995d0afdf2d3fb44d07eada088db3613741db1e
 151199 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 568eee7cf319fa95183c8d3b5e8dcf6e078ab8b3 3c659044118e34603161457db9934a34f816d78b 6bb228190ef0b45669d285114cf8a280c55f4b39 2e3de6253422112ae43e608661ba94ea6b345694 ad33a573c009d72466432b41ba0591c64e819c19
 151182 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 6ff7c838d09224dd4e4c9b5b93152d8db1b19740 3c659044118e34603161457db9934a34f816d78b 49ee11555262a256afec592dfed7c5902d5eefd2 2e3de6253422112ae43e608661ba94ea6b345694 726c78d14dfe6ec76f5e4c7756821a91f0a04b34
 151183 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8035edbe12f0f2a58e8fa9b06d05c8ee1c69ffae 3c659044118e34603161457db9934a34f816d78b 5d2f557b47dfbf8f23277a5bdd8473d4607c681a 2e3de6253422112ae43e608661ba94ea6b345694 51ca66c37371b10b378513af126646de22eddb17
 151216 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 14c7ed8b51f60097ad771277da69f74b22a7a759 3c659044118e34603161457db9934a34f816d78b aaacf1c15a225ffeb1ff066b78e211594b3a5053 2e3de6253422112ae43e608661ba94ea6b345694 6a49b9a7920c82015381740905582b666160d955
 151185 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8035edbe12f0f2a58e8fa9b06d05c8ee1c69ffae 3c659044118e34603161457db9934a34f816d78b 7d2410cea154bf915fb30179ebda3b17ac36e70e 2e3de6253422112ae43e608661ba94ea6b345694 780aba2779b834f19b2a6f0dcdea0e7e0b5e1622
 151189 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 bb78cfbec07eda45118b630a09b0af549b43a135 3c659044118e34603161457db9934a34f816d78b fe0fe4735e798578097758781166cc221319b93d 2e3de6253422112ae43e608661ba94ea6b345694 d9f58cd54fe2f05e1f05e2fe254684bd1840de8e
 151175 blocked irrelevant
 151218 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 14c7ed8b51f60097ad771277da69f74b22a7a759 3c659044118e34603161457db9934a34f816d78b 73b994f6d74ec00a1d78daf4145096ff9f0e2982 2e3de6253422112ae43e608661ba94ea6b345694 6a49b9a7920c82015381740905582b666160d955
 151190 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ca407c7246bf405da6d9b1b9d93e5e7f17b4b1f9 3c659044118e34603161457db9934a34f816d78b 250b1da35d579f42319af234f36207902ca4baa4 2e3de6253422112ae43e608661ba94ea6b345694 dde6174ada5280cd9a6396e3b12606360a0d29a3
 151202 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ca407c7246bf405da6d9b1b9d93e5e7f17b4b1f9 3c659044118e34603161457db9934a34f816d78b cccdd8c7971896c339d59c9c5d4647d4ffd9568a 2e3de6253422112ae43e608661ba94ea6b345694 dde6174ada5280cd9a6396e3b12606360a0d29a3
 151220 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 14c7ed8b51f60097ad771277da69f74b22a7a759 3c659044118e34603161457db9934a34f816d78b d6048bfd12e24a0980ba2040cfaa2b101df3fa16 2e3de6253422112ae43e608661ba94ea6b345694 6a49b9a7920c82015381740905582b666160d955
 151207 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 14c7ed8b51f60097ad771277da69f74b22a7a759 3c659044118e34603161457db9934a34f816d78b af509738f8e4400c26d321abeac924efb04fbfa0 2e3de6253422112ae43e608661ba94ea6b345694 6a49b9a7920c82015381740905582b666160d955
 151211 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 14c7ed8b51f60097ad771277da69f74b22a7a759 3c659044118e34603161457db9934a34f816d78b 0d48b436327955c69e2eb53f88aba9aa1e0dbaa0 2e3de6253422112ae43e608661ba94ea6b345694 6a49b9a7920c82015381740905582b666160d955
 151241 blocked irrelevant
 151286 blocked irrelevant
 151269 blocked irrelevant
 151328 blocked irrelevant
 151304 blocked c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 322969adf1fb3d6cfbd613f35121315715aff2ed 3c659044118e34603161457db9934a34f816d78b 171199f56f5f9bdf1e5d670d09ef1351d8f01bae 2e3de6253422112ae43e608661ba94ea6b345694 fde76f895d0aa817a1207d844d793239c6639bc6
 151377 blocked irrelevant
 151353 blocked c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1a992030522622c42aa063788b3276789c56c1e1 3c659044118e34603161457db9934a34f816d78b d4b78317b7cf8c0c635b70086503813f79ff21ec 2e3de6253422112ae43e608661ba94ea6b345694 fde76f895d0aa817a1207d844d793239c6639bc6
 151399 blocked irrelevant
 151414 blocked irrelevant
 151435 blocked irrelevant
 151459 blocked c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 0f01cec52f4794777feb067e4fa0bfcedfdc124e 3c659044118e34603161457db9934a34f816d78b e7651153a8801dad6805d450ea8bef9b46c1adf5 88ab0c15525ced2eefe39220742efe4769089ad8 88cfd062e8318dfeb67c7d2eb50b6cd224b0738a
 151471 blocked irrelevant
 151485 blocked irrelevant
 151500 blocked irrelevant
 151518 blocked c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 00217f1919270007d7a911f89b32e39b9dcaa907 3c659044118e34603161457db9934a34f816d78b fc1bff958998910ec8d25db86cd2f53ff125f7ab 88ab0c15525ced2eefe39220742efe4769089ad8 23ca7ec0ba620db52a646d80e22f9703a6589f66
 151547 blocked irrelevant
 151598 blocked irrelevant
 151577 blocked irrelevant
 151622 blocked c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 627d1d6693b0594d257dbe1a3363a8d4bd4d8307 3c659044118e34603161457db9934a34f816d78b 7b7515702012219410802a168ae4aa45b72a44df 88ab0c15525ced2eefe39220742efe4769089ad8 be63d9d47f571a60d70f8fb630c03871312d9655
 151656 blocked c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 627d1d6693b0594d257dbe1a3363a8d4bd4d8307 3c659044118e34603161457db9934a34f816d78b eb6490f544388dd24c0d054a96dd304bc7284450 88ab0c15525ced2eefe39220742efe4769089ad8 be63d9d47f571a60d70f8fb630c03871312d9655
 151634 blocked c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 627d1d6693b0594d257dbe1a3363a8d4bd4d8307 3c659044118e34603161457db9934a34f816d78b eb6490f544388dd24c0d054a96dd304bc7284450 88ab0c15525ced2eefe39220742efe4769089ad8 be63d9d47f571a60d70f8fb630c03871312d9655
 151645 blocked c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 627d1d6693b0594d257dbe1a3363a8d4bd4d8307 3c659044118e34603161457db9934a34f816d78b eb6490f544388dd24c0d054a96dd304bc7284450 88ab0c15525ced2eefe39220742efe4769089ad8 be63d9d47f571a60d70f8fb630c03871312d9655
 151669 blocked c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 627d1d6693b0594d257dbe1a3363a8d4bd4d8307 3c659044118e34603161457db9934a34f816d78b eb6490f544388dd24c0d054a96dd304bc7284450 88ab0c15525ced2eefe39220742efe4769089ad8 be63d9d47f571a60d70f8fb630c03871312d9655
 151685 blocked c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 627d1d6693b0594d257dbe1a3363a8d4bd4d8307 3c659044118e34603161457db9934a34f816d78b eb6490f544388dd24c0d054a96dd304bc7284450 88ab0c15525ced2eefe39220742efe4769089ad8 be63d9d47f571a60d70f8fb630c03871312d9655
 151704 blocked c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 627d1d6693b0594d257dbe1a3363a8d4bd4d8307 3c659044118e34603161457db9934a34f816d78b eb6490f544388dd24c0d054a96dd304bc7284450 88ab0c15525ced2eefe39220742efe4769089ad8 be63d9d47f571a60d70f8fb630c03871312d9655
 151778 blocked c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 bdafda8c457eb90c65f37026589b54258300f05c 3c659044118e34603161457db9934a34f816d78b aff2caf6b3fbab1062e117a47b66d27f7fd2f272 88ab0c15525ced2eefe39220742efe4769089ad8 3fdc211b01b29f252166937238efe02d15cb5780
 151721 blocked irrelevant
 151763 blocked c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 bdafda8c457eb90c65f37026589b54258300f05c 3c659044118e34603161457db9934a34f816d78b 48f22ad04ead83e61b4b35871ec6f6109779b791 88ab0c15525ced2eefe39220742efe4769089ad8 3fdc211b01b29f252166937238efe02d15cb5780
 151744 blocked c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 bdafda8c457eb90c65f37026589b54258300f05c 3c659044118e34603161457db9934a34f816d78b 8796c64ecdfd34be394ea277aaaaa53df0c76996 88ab0c15525ced2eefe39220742efe4769089ad8 3fdc211b01b29f252166937238efe02d15cb5780
 151804 blocked c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 bdafda8c457eb90c65f37026589b54258300f05c 3c659044118e34603161457db9934a34f816d78b 45db94cc90c286a9965a285ba19450f448760a09 88ab0c15525ced2eefe39220742efe4769089ad8 3fdc211b01b29f252166937238efe02d15cb5780
 151816 blocked c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 bdafda8c457eb90c65f37026589b54258300f05c 3c659044118e34603161457db9934a34f816d78b 45db94cc90c286a9965a285ba19450f448760a09 88ab0c15525ced2eefe39220742efe4769089ad8 3fdc211b01b29f252166937238efe02d15cb5780
 151833 blocked c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f45e3a4afa65a45ea1a956a7c5e7410ff40190d1 3c659044118e34603161457db9934a34f816d78b 827937158b72ce2265841ff528bba3c44a1bfbc8 88ab0c15525ced2eefe39220742efe4769089ad8 02d69864b51a4302a148c28d6d391238a6778b4b
 151882 blocked c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 3c659044118e34603161457db9934a34f816d78b 589b1be07c060e583d9f758ff0cb10e0f1ff242f 2e3de6253422112ae43e608661ba94ea6b345694 1251402caf8685f45d9d580f01583370f7e2d272
 151865 blocked c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9af1064995d479df92e399a682ba7e4b2fc78415 3c659044118e34603161457db9934a34f816d78b 7d3660e79830a069f1848bb4fa1cdf8f666424fb 2e3de6253422112ae43e608661ba94ea6b345694 b91825f628c9a62cf2a3a0d972ea81484a8b7fce
 151885 blocked c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 3c659044118e34603161457db9934a34f816d78b d6b78ac8ecf94f56dbfbecc23fb4365d8772a41a 2e3de6253422112ae43e608661ba94ea6b345694 1251402caf8685f45d9d580f01583370f7e2d272
 151866 blocked c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f45e3a4afa65a45ea1a956a7c5e7410ff40190d1 3c659044118e34603161457db9934a34f816d78b d34498309cff7560ac90c422c56e3137e6a64b19 88ab0c15525ced2eefe39220742efe4769089ad8 02d69864b51a4302a148c28d6d391238a6778b4b
 151855 blocked c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f45e3a4afa65a45ea1a956a7c5e7410ff40190d1 3c659044118e34603161457db9934a34f816d78b d34498309cff7560ac90c422c56e3137e6a64b19 88ab0c15525ced2eefe39220742efe4769089ad8 02d69864b51a4302a148c28d6d391238a6778b4b
 151841 blocked c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f45e3a4afa65a45ea1a956a7c5e7410ff40190d1 3c659044118e34603161457db9934a34f816d78b 2033cc6efa98b831d7839e367aa7d5aa74d0750f 88ab0c15525ced2eefe39220742efe4769089ad8 02d69864b51a4302a148c28d6d391238a6778b4b
 151887 blocked c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 3c659044118e34603161457db9934a34f816d78b 81cb05732efb36971901c515b007869cc1d3a532 2e3de6253422112ae43e608661ba94ea6b345694 1251402caf8685f45d9d580f01583370f7e2d272
 151868 blocked c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 239b50a863704f7960525799eda82de061c7c458 3c659044118e34603161457db9934a34f816d78b eefe34ea4b82c2b47abe28af4cc7247d51553626 2e3de6253422112ae43e608661ba94ea6b345694 25636ed707cf1211ce846c7ec58f8643e435d7a7
 151897 blocked c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 3c659044118e34603161457db9934a34f816d78b 81cb05732efb36971901c515b007869cc1d3a532 2e3de6253422112ae43e608661ba94ea6b345694 1251402caf8685f45d9d580f01583370f7e2d272
 151871 blocked c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 239b50a863704f7960525799eda82de061c7c458 3c659044118e34603161457db9934a34f816d78b 3f429a3400822141651486193d6af625eeab05a5 2e3de6253422112ae43e608661ba94ea6b345694 71ca0e0ad000e690899936327eb09709ab182ade
 151849 blocked c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f45e3a4afa65a45ea1a956a7c5e7410ff40190d1 3c659044118e34603161457db9934a34f816d78b d34498309cff7560ac90c422c56e3137e6a64b19 88ab0c15525ced2eefe39220742efe4769089ad8 02d69864b51a4302a148c28d6d391238a6778b4b
 151872 blocked c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 58ae92a993687d913aa6dd00ef3497a1bc5f6c40 3c659044118e34603161457db9934a34f816d78b 54cdfe511219b8051046be55a6e156c4f08ff7ff 2e3de6253422112ae43e608661ba94ea6b345694 71ca0e0ad000e690899936327eb09709ab182ade
 151888 blocked c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 3c659044118e34603161457db9934a34f816d78b d6b78ac8ecf94f56dbfbecc23fb4365d8772a41a 2e3de6253422112ae43e608661ba94ea6b345694 1251402caf8685f45d9d580f01583370f7e2d272
 151873 blocked c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 a2433243fbe471c250d7eddc2c7da325d91265fd 3c659044118e34603161457db9934a34f816d78b b77b5b3dc7a4730d804090d359c57d33573cf85a 2e3de6253422112ae43e608661ba94ea6b345694 3625b04991b4d6affadd99d377ab84bac48dfff4
 151876 blocked c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 3c659044118e34603161457db9934a34f816d78b db2322469a245eb9d9aa1c98747f6d595cca8f35 2e3de6253422112ae43e608661ba94ea6b345694 1251402caf8685f45d9d580f01583370f7e2d272
 151877 blocked c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 3c659044118e34603161457db9934a34f816d78b 9354eaaf16fdb98651574f131ff66ad974e50bba 2e3de6253422112ae43e608661ba94ea6b345694 1251402caf8685f45d9d580f01583370f7e2d272
 151890 blocked c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 3c659044118e34603161457db9934a34f816d78b 81cb05732efb36971901c515b007869cc1d3a532 2e3de6253422112ae43e608661ba94ea6b345694 1251402caf8685f45d9d580f01583370f7e2d272
 151878 blocked c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 3c659044118e34603161457db9934a34f816d78b 9940b2cfbc05cdffdf6b42227a80cb1e6d2a85c2 2e3de6253422112ae43e608661ba94ea6b345694 1251402caf8685f45d9d580f01583370f7e2d272
 151879 blocked c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 3c659044118e34603161457db9934a34f816d78b 81cb05732efb36971901c515b007869cc1d3a532 2e3de6253422112ae43e608661ba94ea6b345694 1251402caf8685f45d9d580f01583370f7e2d272
 151893 blocked c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 3c659044118e34603161457db9934a34f816d78b d6b78ac8ecf94f56dbfbecc23fb4365d8772a41a 2e3de6253422112ae43e608661ba94ea6b345694 1251402caf8685f45d9d580f01583370f7e2d272
 151880 blocked c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 3c659044118e34603161457db9934a34f816d78b 75a6ed875ff0a2eb6b2971ae2098ed09963d7329 2e3de6253422112ae43e608661ba94ea6b345694 1251402caf8685f45d9d580f01583370f7e2d272
 151874 blocked irrelevant
 151895 blocked irrelevant
 151894 blocked c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9af1064995d479df92e399a682ba7e4b2fc78415 3c659044118e34603161457db9934a34f816d78b 7d3660e79830a069f1848bb4fa1cdf8f666424fb 2e3de6253422112ae43e608661ba94ea6b345694 b91825f628c9a62cf2a3a0d972ea81484a8b7fce
 151896 blocked irrelevant
 151914 blocked irrelevant
 151934 blocked irrelevant
 151968 blocked irrelevant
 151952 blocked irrelevant
 152013 blocked c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d8327496762b4f2a54c9bafd7a214314ec28e9e 3c659044118e34603161457db9934a34f816d78b 939ab64b400b9bec4b59795a87817784093e1acd 6ada2285d9918859699c92e09540e023e0a16054 fb024b779336a0f73b3aee885b2ce082e812881f
 151988 blocked c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 6ff53d2a13740e39dea110d6b3509c156c659586 3c659044118e34603161457db9934a34f816d78b b7bda69c4ef46c57480f6e378923f5215b122778 6ada2285d9918859699c92e09540e023e0a16054 f8fe3c07363d11fc81d8e7382dbcaa357c861569
 151999 blocked c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d8327496762b4f2a54c9bafd7a214314ec28e9e 3c659044118e34603161457db9934a34f816d78b 97f750becac33e3d3e446d3ff4ae9af2577b7877 6ada2285d9918859699c92e09540e023e0a16054 fb024b779336a0f73b3aee885b2ce082e812881f
 152026 blocked c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d8327496762b4f2a54c9bafd7a214314ec28e9e 3c659044118e34603161457db9934a34f816d78b 9fc87111005e8903785db40819af66b8f85b8b96 6ada2285d9918859699c92e09540e023e0a16054 fb024b779336a0f73b3aee885b2ce082e812881f
 152039 blocked c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d8327496762b4f2a54c9bafd7a214314ec28e9e 3c659044118e34603161457db9934a34f816d78b 9fc87111005e8903785db40819af66b8f85b8b96 6ada2285d9918859699c92e09540e023e0a16054 fb024b779336a0f73b3aee885b2ce082e812881f
 152058 blocked irrelevant
 152076 blocked irrelevant
 152108 blocked c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9132a31b9c8381197eee75eb66c809182b264110 3c659044118e34603161457db9934a34f816d78b 3cbc8970f55c87cb58699b6dc8fe42998bc79dc0 6ada2285d9918859699c92e09540e023e0a16054 8c4532f19d6925538fb0c938f7de9a97da8c5c3b
 152144 blocked c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9132a31b9c8381197eee75eb66c809182b264110 3c659044118e34603161457db9934a34f816d78b d0cc248164961a7ba9d43806feffd76f9f6d7f41 6ada2285d9918859699c92e09540e023e0a16054 8c4532f19d6925538fb0c938f7de9a97da8c5c3b
 152171 fail irrelevant
 152192 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 568eee7cf319fa95183c8d3b5e8dcf6e078ab8b3 3c659044118e34603161457db9934a34f816d78b ce20db593f50752badbc94d6a96e4576aa4a2443 2e3de6253422112ae43e608661ba94ea6b345694 1497e78068421d83956f8e82fb6e1bf1fc3b1199
 152195 fail irrelevant
 152196 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 14c7ed8b51f60097ad771277da69f74b22a7a759 3c659044118e34603161457db9934a34f816d78b 23accdf162dcccb9fec9585a64ad01a87b13da5c 2e3de6253422112ae43e608661ba94ea6b345694 6a49b9a7920c82015381740905582b666160d955
 152189 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 50528537b2fb0ebdf32c719a0525635c93b905c2 3c659044118e34603161457db9934a34f816d78b 7adfbea8fd1efce36019a0c2f198ca73be9d3f18 6ada2285d9918859699c92e09540e023e0a16054 8c4532f19d6925538fb0c938f7de9a97da8c5c3b
 152199 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 50528537b2fb0ebdf32c719a0525635c93b905c2 3c659044118e34603161457db9934a34f816d78b 7adfbea8fd1efce36019a0c2f198ca73be9d3f18 6ada2285d9918859699c92e09540e023e0a16054 8c4532f19d6925538fb0c938f7de9a97da8c5c3b
 152202 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 14c7ed8b51f60097ad771277da69f74b22a7a759 3c659044118e34603161457db9934a34f816d78b da278d58a092bfcc4e36f1e274229c1468dea731 2e3de6253422112ae43e608661ba94ea6b345694 6a49b9a7920c82015381740905582b666160d955
 152203 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 14c7ed8b51f60097ad771277da69f74b22a7a759 3c659044118e34603161457db9934a34f816d78b 23accdf162dcccb9fec9585a64ad01a87b13da5c 2e3de6253422112ae43e608661ba94ea6b345694 6a49b9a7920c82015381740905582b666160d955
 152204 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 14c7ed8b51f60097ad771277da69f74b22a7a759 3c659044118e34603161457db9934a34f816d78b da278d58a092bfcc4e36f1e274229c1468dea731 2e3de6253422112ae43e608661ba94ea6b345694 6a49b9a7920c82015381740905582b666160d955
 152205 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 14c7ed8b51f60097ad771277da69f74b22a7a759 3c659044118e34603161457db9934a34f816d78b 23accdf162dcccb9fec9585a64ad01a87b13da5c 2e3de6253422112ae43e608661ba94ea6b345694 6a49b9a7920c82015381740905582b666160d955
 152207 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 14c7ed8b51f60097ad771277da69f74b22a7a759 3c659044118e34603161457db9934a34f816d78b da278d58a092bfcc4e36f1e274229c1468dea731 2e3de6253422112ae43e608661ba94ea6b345694 6a49b9a7920c82015381740905582b666160d955
Searching for interesting versions
 Result found: flight 150585 (pass), for basis pass
 Result found: flight 152189 (fail), for basis failure
 Repro found: flight 152192 (pass), for basis pass
 Repro found: flight 152199 (fail), for basis failure
 0 revisions at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 14c7ed8b51f60097ad771277da69f74b22a7a759 3c659044118e34603161457db9934a34f816d78b 23accdf162dcccb9fec9585a64ad01a87b13da5c 2e3de6253422112ae43e608661ba94ea6b345694 6a49b9a7920c82015381740905582b666160d955
No revisions left to test, checking graph state.
 Result found: flight 152196 (pass), for last pass
 Result found: flight 152202 (fail), for first failure
 Repro found: flight 152203 (pass), for last pass
 Repro found: flight 152204 (fail), for first failure
 Repro found: flight 152205 (pass), for last pass
 Repro found: flight 152207 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  qemuu git://git.qemu.org/qemu.git
  Bug introduced:  da278d58a092bfcc4e36f1e274229c1468dea731
  Bug not present: 23accdf162dcccb9fec9585a64ad01a87b13da5c
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/152207/


  commit da278d58a092bfcc4e36f1e274229c1468dea731
  Author: Philippe Mathieu-Daudé <philmd@redhat.com>
  Date:   Fri May 8 12:02:22 2020 +0200
  
      accel: Move Xen accelerator code under accel/xen/
      
      This code is not related to hardware emulation.
      Move it under accel/ with the other hypervisors.
      
      Reviewed-by: Paul Durrant <paul@xen.org>
      Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
      Message-Id: <20200508100222.7112-1-philmd@redhat.com>
      Reviewed-by: Juan Quintela <quintela@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

dot: graph is too large for cairo-renderer bitmaps. Scaling by 0.43415 to fit
pnmtopng: 234 colors found
Revision graph left in /home/logs/results/bisect/qemu-mainline/test-amd64-amd64-qemuu-nested-intel.xen-boot--l1.{dot,ps,png,html,svg}.
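The search above is, at its core, a binary search over the linear commit range between the last known-good and first known-bad trees, with repro runs (flights 152203-152207) to confirm each endpoint. A minimal sketch of that idea in Python — a hypothetical `fails` predicate standing in for a full osstest flight, not the harness's actual implementation:

```python
def find_first_bad(commits, fails):
    """Binary-search an ordered commit list for the first failing commit.

    `commits` runs oldest to newest; `fails(c)` returns True when the tree
    at commit `c` reproduces the failure.  Assumes, as in the report above,
    that the oldest commit passes and the newest fails.
    """
    lo, hi = 0, len(commits) - 1      # commits[lo] passes, commits[hi] fails
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if fails(commits[mid]):
            hi = mid                  # failure already present at mid
        else:
            lo = mid                  # still passing at mid
    return commits[hi]                # first commit that introduced the bug

# Toy history where the regression lands at commit "e":
history = list("abcdefg")
first_bad = find_first_bad(history, lambda c: c >= "e")
```

The same procedure done by hand is `git bisect start`, then marking da278d58a0 bad and 23accdf162 good and testing each candidate git offers.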
----------------------------------------
152207: tolerable ALL FAIL

flight 152207 qemu-mainline real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/152207/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 test-amd64-amd64-qemuu-nested-intel 14 xen-boot/l1      fail baseline untested


jobs:
 test-amd64-amd64-qemuu-nested-intel                          fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Sun Jul 26 05:26:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 26 Jul 2020 05:26:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jzZB5-0000nn-0P; Sun, 26 Jul 2020 05:26:27 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=s9Qx=BF=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jzZB3-0000ni-IE
 for xen-devel@lists.xenproject.org; Sun, 26 Jul 2020 05:26:25 +0000
X-Inumbo-ID: 8a8440fe-cf00-11ea-89f7-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8a8440fe-cf00-11ea-89f7-bc764e2007e4;
 Sun, 26 Jul 2020 05:26:22 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=K/zgdi9s+bhrqTxklE47a9M2PN3bT4rpVb22b+OBBVU=; b=IdhQafvUTCMEyYgbxfNTWeMYg
 9AkkPVzjUYU5NVBmF7mFqIjr+Pfmd43dMAdGdUcom/9nDMbgw+iXvZVFNaWstRulDvQ0NXEC56lAb
 3Ph5heJ8skHJAnKq0WVMZ+l5aD+MVsdruJcc3MuXmJDmrOh5GWOo1/Uc44bsFTKY/yMe0=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jzZAz-0006M1-9a; Sun, 26 Jul 2020 05:26:21 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jzZAy-0007xK-R0; Sun, 26 Jul 2020 05:26:21 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jzZAy-0006L2-QF; Sun, 26 Jul 2020 05:26:20 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-152200-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 152200: regressions - trouble: fail/pass/starved
X-Osstest-Failures: qemu-mainline:test-amd64-amd64-qemuu-nested-intel:xen-boot/l1:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-qemuu-nested-amd:xen-boot/l1:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:windows-install:fail:regression
 qemu-mainline:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:hosts-allocate:starved:nonblocking
 qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This: qemuu=7adfbea8fd1efce36019a0c2f198ca73be9d3f18
X-Osstest-Versions-That: qemuu=9e3903136d9acde2fb2dd9e967ba928050a6cb4a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 26 Jul 2020 05:26:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 152200 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/152200/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-nested-intel 14 xen-boot/l1       fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-qemuu-nested-amd 14 xen-boot/l1         fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-win7-amd64 10 windows-install  fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-ws16-amd64 10 windows-install  fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-ws16-amd64 10 windows-install   fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-win7-amd64 10 windows-install   fail REGR. vs. 151065

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-rtds     16 guest-start/debian.repeat    fail  like 151065
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 151065
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 151065
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-freebsd11-amd64  2 hosts-allocate           starved n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  2 hosts-allocate           starved n/a

version targeted for testing:
 qemuu                7adfbea8fd1efce36019a0c2f198ca73be9d3f18
baseline version:
 qemuu                9e3903136d9acde2fb2dd9e967ba928050a6cb4a

Last test of basis   151065  2020-06-12 22:27:51 Z   43 days
Failing since        151101  2020-06-14 08:32:51 Z   41 days   58 attempts
Testing same since   152189  2020-07-25 00:08:21 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Ahmed Karaman <ahmedkhaledkaraman@gmail.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.m.mail@gmail.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Richardson <Alexander.Richardson@cl.cam.ac.uk>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Boettcher <alexander.boettcher@genode-labs.com>
  Alexander Bulekov <alxndr@bu.edu>
  Alexander Duyck <alexander.h.duyck@linux.intel.com>
  Alexandre Mergnat <amergnat@baylibre.com>
  Alexey Kardashevskiy <aik@ozlabs.ru>
  Alexey Krasikov <alex-krasikov@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Allan Peramaki <aperamak@pp1.inet.fi>
  Andrew <andrew@daynix.com>
  Andrew Jones <drjones@redhat.com>
  Andrew Melnychenko <andrew@daynix.com>
  Ani Sinha <ani.sinha@nutanix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Antoine Damhet <antoine.damhet@blade-group.com>
  Ard Biesheuvel <ardb@kernel.org>
  Artyom Tarasenko <atar4qemu@gmail.com>
  Atish Patra <atish.patra@wdc.com>
  Aurelien Jarno <aurelien@aurel32.net>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Basil Salman <basil@daynix.com>
  Basil Salman <bsalman@redhat.com>
  Beata Michalska <beata.michalska@linaro.org>
  Bin Meng <bin.meng@windriver.com>
  Bin Meng <bmeng.cn@gmail.com>
  Cameron Esfahani <dirty@apple.com>
  Catherine A. Frederick <chocola@animebitch.es>
  Cathy Zhang <cathy.zhang@intel.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chenyi Qiang <chenyi.qiang@intel.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christophe de Dinechin <dinechin@redhat.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Cindy Lu <lulu@redhat.com>
  Claudio Fontana <cfontana@suse.de>
  Cleber Rosa <crosa@redhat.com>
  Colin Xu <colin.xu@intel.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniele Buono <dbuono@linux.vnet.ibm.com>
  David Carlier <devnexen@gmail.com>
  David Edmondson <david.edmondson@oracle.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Denis V. Lunev <den@openvz.org>
  Derek Su <dereksu@qnap.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Ed Robbins <E.J.C.Robbins@kent.ac.uk>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Emilio G. Cota <cota@braap.org>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  erik-smit <erik.lucas.smit@gmail.com>
  Evgeny Yakovlev <eyakovlev@virtuozzo.com>
  fangying <fangying1@huawei.com>
  Farhan Ali <alifm@linux.ibm.com>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frank Chang <frank.chang@sifive.com>
  Geoffrey McRae <geoff@hostfission.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Giuseppe Musacchio <thatlemon@gmail.com>
  Greg Kurz <groug@kaod.org>
  Guenter Roeck <linux@roeck-us.net>
  Gustavo Romero <gromero@linux.ibm.com>
  Halil Pasic <pasic@linux.ibm.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Howard Spoelstra <hsp.cat7@gmail.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Ian Jiang <ianjiang.ict@gmail.com>
  Igor Mammedov <imammedo@redhat.com>
  Jan Kiszka <jan.kiszka@siemens.com>
  Janne Grunau <j@jannau.net>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason Wang <jasowang@redhat.com>
  Jay Zhou <jianjay.zhou@huawei.com>
  Jean-Christophe Dubois <jcd@tribudubois.net>
  Jessica Clarke <jrtc27@jrtc27.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jingqi Liu <jingqi.liu@intel.com>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Joseph Myers <joseph@codesourcery.com>
  Josh Kunz <jkz@google.com>
  Juan Quintela <quintela@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Keqian Zhu <zhukeqian1@huawei.com>
  Kevin Wolf <kwolf@redhat.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Leif Lindholm <leif@nuviainc.com>
  Leonid Bloch <lb.workbox@gmail.com>
  Leonid Bloch <lbloch@janustech.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Qiang <liq3ea@gmail.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  lichun <lichun@ruijie.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  Like Xu <like.xu@linux.intel.com>
  Lingfeng Yang <lfy@google.com>
  Lingshan zhu <lingshan.zhu@intel.com>
  Liran Alon <liran.alon@oracle.com>
  Liu Yi L <yi.l.liu@intel.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Luc Michel <luc.michel@greensocs.com>
  Lukas Straub <lukasstraub2@web.de>
  Luwei Kang <luwei.kang@intel.com>
  Maciej S. Szmigiero <maciej.szmigiero@oracle.com>
  Magnus Damm <magnus.damm@gmail.com>
  Mao Zhongyi <maozhongyi@cmss.chinamobile.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marcelo Tosatti <mtosatti@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Mario Smarduch <msmarduch@digitalocean.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Masahiro Yamada <masahiroy@kernel.org>
  Matus Kysel <mkysel@tachyum.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Maxime Coquelin <maxime.coquelin@redhat.com>
  Menno Lageman <menno.lageman@oracle.com>
  Michael Rolnik <mrolnik@gmail.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michal Privoznik <mprivozn@redhat.com>
  Michele Denber <denber@mindspring.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Olaf Hering <olaf@aepfle.de>
  Pan Nengyuan <pannengyuan@huawei.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Radoslaw Biernacki <rad@semihalf.com>
  Raphael Norwitz <raphael.norwitz@nutanix.com>
  Reza Arbab <arbab@linux.ibm.com>
  Richard Henderson <richard.henderson@linaro.org>
  Richard W.M. Jones <rjones@redhat.com>
  Riku Voipio <riku.voipio@linaro.org>
  Robert Foley <robert.foley@linaro.org>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Roman Kagan <rkagan@virtuozzo.com>
  Roman Kagan <rvkagan@yandex-team.ru>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Sarah Harris <S.E.Harris@kent.ac.uk>
  Sebastian Rasmussen <sebras@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Tao Xu <tao3.xu@intel.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tiwei Bie <tiwei.bie@intel.com>
  Tong Ho <tong.ho@xilinx.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  WangBowen <bowen.wang@intel.com>
  Wei Huang <wei.huang2@amd.com>
  Wei Wang <wei.w.wang@intel.com>
  Wentong Wu <wentong.wu@intel.com>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xie Yongji <xieyongji@bytedance.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Ying Fang <fangying1@huawei.com>
  Yoshinori Sato <ysato@users.sourceforge.jp>
  Yuri Benditovich <yuri.benditovich@daynix.com>
  Zhang Chen <chen.zhang@intel.com>
  Zheng Chuan <zhengchuan@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       starved 
 test-amd64-amd64-qemuu-freebsd12-amd64                       starved 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 31667 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Jul 26 06:36:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 26 Jul 2020 06:36:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jzaG4-0006h0-DW; Sun, 26 Jul 2020 06:35:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=s9Qx=BF=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jzaG2-0006gd-UK
 for xen-devel@lists.xenproject.org; Sun, 26 Jul 2020 06:35:38 +0000
X-Inumbo-ID: 3449d7ee-cf0a-11ea-89fa-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3449d7ee-cf0a-11ea-89fa-bc764e2007e4;
 Sun, 26 Jul 2020 06:35:32 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=d8xwZd1p7QdQqvt8eSup/84FmCN0HI8iXmDYj/fPLOU=; b=qKNucw9LKo5Lz0rnS4UZEwoEd
 BNezWLTdYunHGAHOKeRSz0WDBn74bmdX5eZKarZ9rP+X6ae2W9omiNfFIdq70NWdE9Q7rS34tkn51
 W/RGxVfm9asaDud0Bz1Qs3lNtnjuYijkJO6r+vHetjJJ1IuLmyDV6vNhMF0ttKERSw274=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jzaFv-0007ot-AX; Sun, 26 Jul 2020 06:35:31 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jzaFu-0003tc-Qk; Sun, 26 Jul 2020 06:35:30 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jzaFu-0007F9-Pn; Sun, 26 Jul 2020 06:35:30 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-152209-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 152209: regressions - FAIL
X-Osstest-Failures: libvirt:build-i386-libvirt:libvirt-build:fail:regression
 libvirt:build-arm64-libvirt:libvirt-build:fail:regression
 libvirt:build-amd64-libvirt:libvirt-build:fail:regression
 libvirt:build-armhf-libvirt:libvirt-build:fail:regression
 libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
X-Osstest-Versions-This: libvirt=e2fd95ed45439ee98362adbd4371590b0e11d35c
X-Osstest-Versions-That: libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 26 Jul 2020 06:35:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 152209 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/152209/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              e2fd95ed45439ee98362adbd4371590b0e11d35c
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z   16 days
Failing since        151818  2020-07-11 04:18:52 Z   15 days   16 attempts
Testing same since   152193  2020-07-25 04:18:58 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Bastien Orivel <bastien.orivel@diateam.net>
  Bihong Yu <yubihong@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  Ján Tomko <jtomko@redhat.com>
  Laine Stump <laine@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Michal Privoznik <mprivozn@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Prathamesh Chavan <pc44800@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Ryan Schmidt <git@ryandesign.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Weblate <noreply@weblate.org>
  Yi Wang <wang.yi59@zte.com.cn>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 2758 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Jul 26 07:01:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 26 Jul 2020 07:01:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jzaeq-0000nK-Ln; Sun, 26 Jul 2020 07:01:16 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=00Q8=BF=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jzaeq-0000nF-Al
 for xen-devel@lists.xenproject.org; Sun, 26 Jul 2020 07:01:16 +0000
X-Inumbo-ID: cbcdf304-cf0d-11ea-89fc-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id cbcdf304-cf0d-11ea-89fc-bc764e2007e4;
 Sun, 26 Jul 2020 07:01:15 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id D1BDAAAC5;
 Sun, 26 Jul 2020 07:01:23 +0000 (UTC)
Subject: Re: [RFC PATCH v1 1/4] arm/pci: PCI setup and PCI host bridge
 discovery within XEN on ARM.
To: Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien.grall.oss@gmail.com>
References: <cover.1595511416.git.rahul.singh@arm.com>
 <64ebd4ef614b36a5844c52426a4a6a4a23b1f087.1595511416.git.rahul.singh@arm.com>
 <alpine.DEB.2.21.2007231055230.17562@sstabellini-ThinkPad-T480s>
 <9f09ff42-a930-e4e3-d1c8-612ad03698ae@xen.org>
 <alpine.DEB.2.21.2007241036460.17562@sstabellini-ThinkPad-T480s>
 <40582d63-49c7-4a51-b35b-8248dfa34b66@xen.org>
 <alpine.DEB.2.21.2007241127480.17562@sstabellini-ThinkPad-T480s>
 <CAJ=z9a3dXSnEBvhkHkZzV9URAGqSfdtJ1Lc838h_ViAWG3ZO4g@mail.gmail.com>
 <alpine.DEB.2.21.2007241353450.17562@sstabellini-ThinkPad-T480s>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <68a6a292-d299-aafa-3b38-4f63b1107c6b@suse.com>
Date: Sun, 26 Jul 2020 09:01:08 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2007241353450.17562@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Rahul Singh <rahul.singh@arm.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>,
 xen-devel <xen-devel@lists.xenproject.org>, nd <nd@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 25.07.2020 01:46, Stefano Stabellini wrote:
> On Fri, 24 Jul 2020, Julien Grall wrote:
>> On Fri, 24 Jul 2020 at 19:32, Stefano Stabellini <sstabellini@kernel.org> wrote:
>>>> If they are not equal, then I fail to see why it would be useful to have this
>>>> value in Xen.
>>>
>>> I think that's because the domain is actually more convenient to use
>>> because a segment can span multiple PCI host bridges. So my
>>> understanding is that a segment alone is not sufficient to identify a
>>> host bridge. From a software implementation point of view it would be
>>> better to use domains.
>>
>> AFAICT, this would be a matter of one check vs two checks in Xen :).
>> But... looking at Linux, they will also use domain == segment for ACPI
>> (see [1]). So, I think, they still have to use (domain, bus) to do the lookup.
>>
>>>> In which case, we need to use PHYSDEVOP_pci_mmcfg_reserved so
>>>> Dom0 and Xen can synchronize on the segment number.
>>>
>>> I was hoping we could write down the assumption somewhere that for the
>>> cases we care about domain == segment, and error out if it is not the
>>> case.
>>
>> Given that we have only the domain in hand, how would you enforce that?
>>
>> From this discussion, it also looks like there is a mismatch between the
>> implementation and the understanding on QEMU devel. So I am a bit
>> concerned that this is not stable and may change in future Linux version.
>>
>> IOW, we are now tying Xen to Linux. So could we implement
>> PHYSDEVOP_pci_mmcfg_reserved *or* introduce a new property that
>> really represent the segment?
> 
> I don't think we are tying Xen to Linux. Rob has already said that
> linux,pci-domain is basically a generic device tree property. And if we
> look at https://www.devicetree.org/open-firmware/bindings/pci/pci2_1.pdf
> "PCI domain" is described and seems to match the Linux definition.
> 
> I do think we need to understand the definitions and the differences.
> Reading online [1][2] it looks like a Linux PCI domain matches a "PCI
> Segment Group Number" in PCI Express which is probably why Linux is
> making the assumption that it is making.

If I may, I'd like to put the question a little differently, in the hope
for me to understand the actual issue here: On the x86 side, by way of
using ACPI, Linux and Xen "naturally" agree on segment numbering (as far
as normal devices go; Intel's Volume Management Device concept still
needs accommodating so that it would work with Xen). This includes the
multiple host bridges case then naturally. How is the Device Tree model
different from ACPI?

Jan
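
The thread above turns on why a PCI segment number alone is not enough to identify a host bridge: one segment may span several bridges, so software (Linux, and potentially Xen) keys the lookup on (segment, bus) — or uses a per-bridge domain number. The following is a minimal illustrative model of that lookup, not Xen's actual code; all names and the bridge table are hypothetical.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/*
 * Hypothetical model: two host bridges share PCI segment 0, each
 * claiming a disjoint bus range, so segment alone is ambiguous and
 * the lookup must use (segment, bus).
 */
struct host_bridge {
    uint16_t segment;
    uint8_t bus_start, bus_end;  /* inclusive bus range behind this bridge */
};

static const struct host_bridge bridges[] = {
    { .segment = 0, .bus_start = 0x00, .bus_end = 0x7f },
    { .segment = 0, .bus_start = 0x80, .bus_end = 0xff },
};

static const struct host_bridge *find_bridge(uint16_t seg, uint8_t bus)
{
    for (size_t i = 0; i < sizeof(bridges) / sizeof(bridges[0]); i++)
        if (bridges[i].segment == seg &&
            bus >= bridges[i].bus_start && bus <= bridges[i].bus_end)
            return &bridges[i];
    return NULL;  /* no bridge claims this (segment, bus) */
}
```

Under a per-bridge domain numbering (as with linux,pci-domain), each table entry would instead carry a unique domain ID and a single check suffices — the "one check vs two checks" point made above.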


From xen-devel-bounces@lists.xenproject.org Sun Jul 26 08:14:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 26 Jul 2020 08:14:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jzbn1-0007bj-Ex; Sun, 26 Jul 2020 08:13:47 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=00Q8=BF=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jzbmz-0007be-Oq
 for xen-devel@lists.xenproject.org; Sun, 26 Jul 2020 08:13:45 +0000
X-Inumbo-ID: ec50ce6c-cf17-11ea-89ff-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ec50ce6c-cf17-11ea-89ff-bc764e2007e4;
 Sun, 26 Jul 2020 08:13:44 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 723D3AC52;
 Sun, 26 Jul 2020 08:13:53 +0000 (UTC)
Subject: Re: [PATCH 1/6] x86/iommu: re-arrange arch_iommu to separate common
 fields...
To: paul@xen.org
References: <20200724164619.1245-1-paul@xen.org>
 <20200724164619.1245-2-paul@xen.org>
 <68b40fdc-e578-7005-aa6e-499c6f04589c@citrix.com>
 <000001d661eb$392e1ae0$ab8a50a0$@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <63ed6df0-e456-48cd-6df0-601600871226@suse.com>
Date: Sun, 26 Jul 2020 10:13:37 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <000001d661eb$392e1ae0$ab8a50a0$@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: 'Kevin Tian' <kevin.tian@intel.com>, 'Wei Liu' <wl@xen.org>,
 'Andrew Cooper' <andrew.cooper3@citrix.com>,
 'Paul Durrant' <pdurrant@amazon.com>,
 'Lukasz Hawrylko' <lukasz.hawrylko@linux.intel.com>,
 xen-devel@lists.xenproject.org,
 =?UTF-8?B?J1JvZ2VyIFBhdSBNb25uw6kn?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 24.07.2020 20:49, Paul Durrant wrote:
>> From: Andrew Cooper <andrew.cooper3@citrix.com>
>> Sent: 24 July 2020 18:29
>>
>> On 24/07/2020 17:46, Paul Durrant wrote:
>>> diff --git a/xen/include/asm-x86/iommu.h b/xen/include/asm-x86/iommu.h
>>> index 6c9d5e5632..a7add5208c 100644
>>> --- a/xen/include/asm-x86/iommu.h
>>> +++ b/xen/include/asm-x86/iommu.h
>>> @@ -45,16 +45,23 @@ typedef uint64_t daddr_t;
>>>
>>>  struct arch_iommu
>>>  {
>>> -    u64 pgd_maddr;                 /* io page directory machine address */
>>> -    spinlock_t mapping_lock;            /* io page table lock */
>>> -    int agaw;     /* adjusted guest address width, 0 is level 2 30-bit */
>>> -    u64 iommu_bitmap;              /* bitmap of iommu(s) that the domain uses */
>>> -    struct list_head mapped_rmrrs;
>>> -
>>> -    /* amd iommu support */
>>> -    int paging_mode;
>>> -    struct page_info *root_table;
>>> -    struct guest_iommu *g_iommu;
>>> +    spinlock_t mapping_lock; /* io page table lock */
>>> +
>>> +    union {
>>> +        /* Intel VT-d */
>>> +        struct {
>>> +            u64 pgd_maddr; /* io page directory machine address */
>>> +            int agaw; /* adjusted guest address width, 0 is level 2 30-bit */
>>> +            u64 iommu_bitmap; /* bitmap of iommu(s) that the domain uses */
>>> +            struct list_head mapped_rmrrs;
>>> +        } vtd;
>>> +        /* AMD IOMMU */
>>> +        struct {
>>> +            int paging_mode;
>>> +            struct page_info *root_table;
>>> +            struct guest_iommu *g_iommu;
>>> +        } amd_iommu;
>>> +    };
>>
>> The naming split here is weird.
>>
>> Ideally we'd have struct {vtd,amd}_iommu in appropriate headers, and
>> this would be simply
>>
>> union {
>>     struct vtd_iommu vtd;
>>     struct amd_iommu amd;
>> };
>>
>> If this isn't trivial to arrange, can we at least s/amd_iommu/amd/ here ?
> 
> I was in two minds. I tried to look for a TLA for the AMD IOMMU and 'amd' seemed a little too non-descript. I don't really mind though if there's a strong preference to shorten it.

+1 for shortening in some way. Even amd_vi would already be better imo,
albeit I'm with Andrew and would think just amd is fine here (and
matches how things are in the file system structure).

While at it, may I ask that you also switch the plain "int" fields to
"unsigned int" - I think that's doable for both of them.

Jan
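
Putting the quoted patch and the review comments together, the layout being converged on looks roughly like the sketch below. This is illustrative only, with simplified stand-in types (a plain int for spinlock_t, void pointers for struct page_info * and struct guest_iommu *); field names follow the quoted hunk, the vendor structs and short "amd" member follow Andrew's suggestion, and the "unsigned int" fields follow Jan's request.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical stand-in for the Intel VT-d per-domain state. */
struct vtd_iommu_ctx {
    uint64_t pgd_maddr;     /* IO page directory machine address */
    unsigned int agaw;      /* adjusted guest address width */
    uint64_t iommu_bitmap;  /* bitmap of IOMMUs the domain uses */
};

/* Hypothetical stand-in for the AMD IOMMU per-domain state. */
struct amd_iommu_ctx {
    unsigned int paging_mode;
    void *root_table;       /* stand-in for struct page_info * */
    void *g_iommu;          /* stand-in for struct guest_iommu * */
};

struct arch_iommu_sketch {
    int mapping_lock;       /* stand-in for spinlock_t; common to both */
    union {                 /* a domain uses exactly one implementation */
        struct vtd_iommu_ctx vtd;
        struct amd_iommu_ctx amd;
    };
};
```

The union overlays the two vendor-specific blocks, so the structure costs only the larger of the two rather than their sum — the point of the rearrangement.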


From xen-devel-bounces@lists.xenproject.org Sun Jul 26 08:26:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 26 Jul 2020 08:26:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jzbz7-00005u-Ii; Sun, 26 Jul 2020 08:26:17 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=00Q8=BF=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jzbz5-00005p-NA
 for xen-devel@lists.xenproject.org; Sun, 26 Jul 2020 08:26:15 +0000
X-Inumbo-ID: aa5fc948-cf19-11ea-a650-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id aa5fc948-cf19-11ea-a650-12813bfff9fa;
 Sun, 26 Jul 2020 08:26:13 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id C8EEAAEBF;
 Sun, 26 Jul 2020 08:26:21 +0000 (UTC)
Subject: Re: [PATCH 2/6] x86/iommu: add common page-table allocator
To: Andrew Cooper <andrew.cooper3@citrix.com>, Paul Durrant <paul@xen.org>
References: <20200724164619.1245-1-paul@xen.org>
 <20200724164619.1245-3-paul@xen.org>
 <d0a0c46f-1461-144c-ca62-259b0a1894fa@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <036e10b0-d0b3-0437-f9ac-82845fd0cbd2@suse.com>
Date: Sun, 26 Jul 2020 10:26:10 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <d0a0c46f-1461-144c-ca62-259b0a1894fa@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, Paul Durrant <pdurrant@amazon.com>,
 Kevin Tian <kevin.tian@intel.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 24.07.2020 20:24, Andrew Cooper wrote:
> On 24/07/2020 17:46, Paul Durrant wrote:
>> --- a/xen/drivers/passthrough/x86/iommu.c
>> +++ b/xen/drivers/passthrough/x86/iommu.c
>> @@ -140,11 +140,19 @@ int arch_iommu_domain_init(struct domain *d)
>>  
>>      spin_lock_init(&hd->arch.mapping_lock);
>>  
>> +    INIT_PAGE_LIST_HEAD(&hd->arch.pgtables.list);
>> +    spin_lock_init(&hd->arch.pgtables.lock);
>> +
>>      return 0;
>>  }
>>  
>>  void arch_iommu_domain_destroy(struct domain *d)
>>  {
>> +    struct domain_iommu *hd = dom_iommu(d);
>> +    struct page_info *pg;
>> +
>> +    while ( (pg = page_list_remove_head(&hd->arch.pgtables.list)) )
>> +        free_domheap_page(pg);
> 
> Some of those 90 lines saved were the logic to not suffer a watchdog
> timeout here.
> 
> To do it like this, it needs plumbing into the relinquish resources path.

And indeed this is possible now only because we don't destroy page
tables for still running domains anymore. Maybe the description
should also make this connection.

Jan
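
Andrew's watchdog concern above is that draining an unbounded page list in one pass can run too long; plumbing the teardown into the relinquish-resources path lets it be preempted and resumed. The sketch below models that shape with an ordinary singly linked list and a batch limit standing in for Xen's preemption check; all names are hypothetical, not the actual Xen API.

```c
#include <assert.h>
#include <stdlib.h>

struct mock_page {
    struct mock_page *next;
};

/*
 * Free up to 'batch' entries, then return nonzero if work remains so
 * the caller (the relinquish-resources path) can reschedule and call
 * again, instead of holding the CPU until the watchdog fires.
 */
static int relinquish_pgtables(struct mock_page **list, unsigned int batch)
{
    while (*list) {
        if (batch-- == 0)
            return 1;  /* more to do: preempt and continue later */
        struct mock_page *pg = *list;
        *list = pg->next;
        free(pg);
    }
    return 0;          /* list fully drained */
}
```

A caller would loop on the nonzero return, yielding between iterations, which is the behaviour the removed 90 lines apparently provided.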


From xen-devel-bounces@lists.xenproject.org Sun Jul 26 08:26:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 26 Jul 2020 08:26:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jzbzT-00007d-Rw; Sun, 26 Jul 2020 08:26:39 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=rEnG=BF=xen.org=tim@srs-us1.protection.inumbo.net>)
 id 1jzbzS-00007Q-6n
 for xen-devel@lists.xenproject.org; Sun, 26 Jul 2020 08:26:38 +0000
X-Inumbo-ID: b8a94a60-cf19-11ea-8a01-bc764e2007e4
Received: from deinos.phlegethon.org (unknown [2001:41d0:8:b1d7::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b8a94a60-cf19-11ea-8a01-bc764e2007e4;
 Sun, 26 Jul 2020 08:26:37 +0000 (UTC)
Received: from tjd by deinos.phlegethon.org with local (Exim 4.92.3 (FreeBSD))
 (envelope-from <tim@xen.org>)
 id 1jzbzP-0007bJ-79; Sun, 26 Jul 2020 08:26:35 +0000
Date: Sun, 26 Jul 2020 09:26:35 +0100
From: Tim Deegan <tim@xen.org>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH] x86/hvm: Clean up track_dirty_vram() calltree
Message-ID: <20200726082635.GA29099@deinos.phlegethon.org>
References: <20200722151548.4000-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
In-Reply-To: <20200722151548.4000-1-andrew.cooper3@citrix.com>
User-Agent: Mutt/1.11.1 (2018-12-01)
X-SA-Known-Good: Yes
X-SA-Exim-Connect-IP: <locally generated>
X-SA-Exim-Mail-From: tim@xen.org
X-SA-Exim-Scanned: No (on deinos.phlegethon.org);
 SAEximRunCond expanded to false
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 Jan Beulich <JBeulich@suse.com>,
 Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

At 16:15 +0100 on 22 Jul (1595434548), Andrew Cooper wrote:
>  * Rename nr to nr_frames.  A plain 'nr' is confusing to follow in the
>    lower levels.
>  * Use DIV_ROUND_UP() rather than opencoding it in several different ways
>  * The hypercall input is capped at uint32_t, so there is no need for
>    nr_frames to be unsigned long in the lower levels.
> 
> No functional change.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Acked-by: Tim Deegan <tim@xen.org>
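
For readers following the patch summary above: the DIV_ROUND_UP() macro it consolidates onto is the usual round-toward-plus-infinity integer division idiom (this is how Xen's xen/lib.h defines it, shown here with an illustrative helper whose name is invented for this example).

```c
#include <assert.h>
#include <stdint.h>

/* Round-up integer division for non-negative operands. */
#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

/*
 * Example use: how many 4KiB frames a byte count spans -- the kind of
 * open-coded computation the patch replaces.
 */
static uint32_t bytes_to_frames(uint32_t bytes)
{
    return DIV_ROUND_UP(bytes, 4096u);
}
```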


From xen-devel-bounces@lists.xenproject.org Sun Jul 26 08:27:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 26 Jul 2020 08:27:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jzc0f-0000GV-6Y; Sun, 26 Jul 2020 08:27:53 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=00Q8=BF=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jzc0e-0000GP-F4
 for xen-devel@lists.xenproject.org; Sun, 26 Jul 2020 08:27:52 +0000
X-Inumbo-ID: e519352e-cf19-11ea-a650-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e519352e-cf19-11ea-a650-12813bfff9fa;
 Sun, 26 Jul 2020 08:27:51 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 682FAAEBF;
 Sun, 26 Jul 2020 08:28:00 +0000 (UTC)
Subject: Re: [PATCH 3/6] iommu: remove iommu_lookup_page() and the
 lookup_page() method...
To: paul@xen.org
References: <20200724164619.1245-1-paul@xen.org>
 <20200724164619.1245-4-paul@xen.org>
 <c47710e1-fcb6-3b5d-ff6a-d237a4149b3b@citrix.com>
 <000101d661eb$c68a75a0$539f60e0$@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <35260401-fd2b-2eba-6e9b-a274cb8c057b@suse.com>
Date: Sun, 26 Jul 2020 10:27:49 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <000101d661eb$c68a75a0$539f60e0$@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: 'Andrew Cooper' <andrew.cooper3@citrix.com>,
 'Paul Durrant' <pdurrant@amazon.com>, 'Kevin Tian' <kevin.tian@intel.com>,
 xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 24.07.2020 20:53, Paul Durrant wrote:
>> From: Andrew Cooper <andrew.cooper3@citrix.com>
>> Sent: 24 July 2020 19:39
>>
>> On 24/07/2020 17:46, Paul Durrant wrote:
>>> From: Paul Durrant <pdurrant@amazon.com>
>>>
>>> ... from iommu_ops.
>>>
>>> This patch is essentially a reversion of dd93d54f "vtd: add lookup_page method
>>> to iommu_ops". The code was intended to be used by a patch that has long-
>>> since been abandoned. Therefore it is dead code and can be removed.
>>
>> And by this, you mean the work that you only partially upstreamed, with
>> the remainder of the feature still very much in use by XenServer?
>>
> 
> I thought we basically decided to bin the original PV IOMMU idea though? 

Did we? It's the first time I hear of it, I think.

Jan


From xen-devel-bounces@lists.xenproject.org Sun Jul 26 08:36:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 26 Jul 2020 08:36:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jzc8i-0001Bf-1s; Sun, 26 Jul 2020 08:36:12 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=00Q8=BF=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jzc8h-0001Ba-A8
 for xen-devel@lists.xenproject.org; Sun, 26 Jul 2020 08:36:11 +0000
X-Inumbo-ID: 0e6348b0-cf1b-11ea-8a02-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0e6348b0-cf1b-11ea-8a02-bc764e2007e4;
 Sun, 26 Jul 2020 08:36:10 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 22213B601;
 Sun, 26 Jul 2020 08:36:19 +0000 (UTC)
Subject: Re: [PATCH 4/6] remove remaining uses of iommu_legacy_map/unmap
To: Paul Durrant <paul@xen.org>
References: <20200724164619.1245-1-paul@xen.org>
 <20200724164619.1245-5-paul@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <3face98c-7fa7-2baf-2fe8-b5869865203f@suse.com>
Date: Sun, 26 Jul 2020 10:36:08 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20200724164619.1245-5-paul@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin Tian <kevin.tian@intel.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Paul Durrant <pdurrant@amazon.com>, Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Jun Nakajima <jun.nakajima@intel.com>, xen-devel@lists.xenproject.org,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 24.07.2020 18:46, Paul Durrant wrote:
> ---
>  xen/arch/x86/mm.c               | 22 +++++++++++++++-----
>  xen/arch/x86/mm/p2m-ept.c       | 22 +++++++++++++-------
>  xen/arch/x86/mm/p2m-pt.c        | 17 +++++++++++----
>  xen/arch/x86/mm/p2m.c           | 28 ++++++++++++++++++-------
>  xen/arch/x86/x86_64/mm.c        | 27 ++++++++++++++++++------
>  xen/common/grant_table.c        | 36 +++++++++++++++++++++++++-------
>  xen/common/memory.c             |  7 +++----
>  xen/drivers/passthrough/iommu.c | 37 +--------------------------------
>  xen/include/xen/iommu.h         | 20 +++++-------------
>  9 files changed, 123 insertions(+), 93 deletions(-)

Overall this is more code. I wonder whether a map-and-flush function
(named differently from the current ones) wouldn't still be worthwhile
to have.
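
A minimal sketch of such a combined helper follows. The Xen types and
the iommu_map()/iommu_iotlb_flush() primitives are stubbed out so the
fragment stands alone; the name iommu_map_and_flush and every signature
here are assumptions for illustration, not the series' actual API:

```c
/* Self-contained sketch of the combined map-and-flush helper suggested
 * above.  The Xen types and the iommu_map()/iommu_iotlb_flush()
 * primitives are stubbed; only the wrapper's shape is the point. */
#include <assert.h>

typedef struct { unsigned long v; } dfn_t;
typedef struct { unsigned long v; } mfn_t;
struct domain;

/* Stub standing in for the real iommu_map(): records via *flush_flags
 * that a flush has become necessary. */
static int iommu_map(struct domain *d, dfn_t dfn, mfn_t mfn,
                     unsigned int order, unsigned int flags,
                     unsigned int *flush_flags)
{
    *flush_flags |= 1;
    return 0;
}

/* Stub standing in for the real iommu_iotlb_flush(). */
static int iommu_iotlb_flush(struct domain *d, dfn_t dfn,
                             unsigned long page_count,
                             unsigned int flush_flags)
{
    return 0;
}

/* Hypothetical combined helper, deliberately not reusing the
 * iommu_legacy_* names this series removes: map, then flush whatever
 * the mapping said needs flushing, reporting the first error. */
static int iommu_map_and_flush(struct domain *d, dfn_t dfn, mfn_t mfn,
                               unsigned int order, unsigned int flags)
{
    unsigned int flush_flags = 0;
    int rc = iommu_map(d, dfn, mfn, order, flags, &flush_flags);
    int err = iommu_iotlb_flush(d, dfn, 1ul << order, flush_flags);

    return rc ? rc : err;
}
```

With such a wrapper, single-page callers like map_grant_ref() would keep
a one-line call site while batch-oriented callers could still use the
split primitives directly.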

> --- a/xen/common/grant_table.c
> +++ b/xen/common/grant_table.c
> @@ -1225,11 +1225,25 @@ map_grant_ref(
>              kind = IOMMUF_readable;
>          else
>              kind = 0;
> -        if ( kind && iommu_legacy_map(ld, _dfn(mfn_x(mfn)), mfn, 0, kind) )
> +        if ( kind )
>          {
> -            double_gt_unlock(lgt, rgt);
> -            rc = GNTST_general_error;
> -            goto undo_out;
> +            dfn_t dfn = _dfn(mfn_x(mfn));
> +            unsigned int flush_flags = 0;
> +            int err;
> +
> +            err = iommu_map(ld, dfn, mfn, 0, kind, &flush_flags);
> +            if ( err )
> +                rc = GNTST_general_error;
> +
> +            err = iommu_iotlb_flush(ld, dfn, 1, flush_flags);
> +            if ( err )
> +                rc = GNTST_general_error;
> +
> +            if ( rc != GNTST_okay )
> +            {
> +                double_gt_unlock(lgt, rgt);
> +                goto undo_out;
> +            }
>          }

The mapping needs to happen with at least ld's lock held, yes. But is
the same also true for the flushing? Couldn't the flush (not
necessarily in this very change) be pulled out of the function and
instead be done once per processed batch?
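
The per-batch arrangement being asked about could be sketched as below,
with the Xen types and IOMMU primitives stubbed out so the fragment is
self-contained; process_batch(), iommu_map_one() and
iommu_iotlb_flush_all() are illustrative names, not existing functions:

```c
/* Sketch of per-batch flushing: each map only accumulates flush_flags,
 * and the caller issues a single flush after the whole batch. */
#include <assert.h>

typedef struct { unsigned long v; } dfn_t;
struct domain;

static unsigned int maps_done, flushes_done;

/* Stub: map one page, recording that a flush will be needed. */
static int iommu_map_one(struct domain *d, dfn_t dfn,
                         unsigned int *flush_flags)
{
    maps_done++;
    *flush_flags |= 1;
    return 0;
}

/* Stub: flush the IOTLB once for whatever accumulated. */
static int iommu_iotlb_flush_all(struct domain *d,
                                 unsigned int flush_flags)
{
    if ( flush_flags )
        flushes_done++;
    return 0;
}

/* Process a batch of map operations, flushing once at the end rather
 * than once per entry.  On partial failure the flush still covers
 * whatever mappings were established before the error. */
static int process_batch(struct domain *d, const dfn_t *dfns,
                         unsigned int n)
{
    unsigned int flush_flags = 0;
    int rc = 0;

    for ( unsigned int i = 0; i < n; i++ )
    {
        rc = iommu_map_one(d, dfns[i], &flush_flags);
        if ( rc )
            break;
    }

    int err = iommu_iotlb_flush_all(d, flush_flags);

    return rc ? rc : err;
}
```

The attraction is obvious: for a batch of n entries this replaces n
IOTLB invalidations with one, at the cost of a wider window during
which stale IOTLB entries may exist - which is exactly why the locking
question above matters.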

Jan


From xen-devel-bounces@lists.xenproject.org Sun Jul 26 08:50:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 26 Jul 2020 08:50:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jzcMZ-0002o2-BJ; Sun, 26 Jul 2020 08:50:31 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=00Q8=BF=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jzcMY-0002nw-28
 for xen-devel@lists.xenproject.org; Sun, 26 Jul 2020 08:50:30 +0000
X-Inumbo-ID: 0df4d18a-cf1d-11ea-8a03-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0df4d18a-cf1d-11ea-8a03-bc764e2007e4;
 Sun, 26 Jul 2020 08:50:28 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 63ACDAC79;
 Sun, 26 Jul 2020 08:50:37 +0000 (UTC)
Subject: Re: [PATCH 5/6] iommu: remove the share_p2m operation
To: Paul Durrant <paul@xen.org>
References: <20200724164619.1245-1-paul@xen.org>
 <20200724164619.1245-6-paul@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <d005885d-d983-7328-ee36-efd6032e8c96@suse.com>
Date: Sun, 26 Jul 2020 10:50:25 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20200724164619.1245-6-paul@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin Tian <kevin.tian@intel.com>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Paul Durrant <pdurrant@amazon.com>,
 George Dunlap <george.dunlap@citrix.com>, xen-devel@lists.xenproject.org,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 24.07.2020 18:46, Paul Durrant wrote:
> --- a/xen/drivers/passthrough/vtd/iommu.c
> +++ b/xen/drivers/passthrough/vtd/iommu.c
> @@ -313,6 +313,26 @@ static u64 addr_to_dma_page_maddr(struct domain *domain, u64 addr, int alloc)
>      return pte_maddr;
>  }
>  
> +static u64 domain_pgd_maddr(struct domain *d)

uint64_t please.

> +{
> +    struct domain_iommu *hd = dom_iommu(d);
> +
> +    ASSERT(spin_is_locked(&hd->arch.mapping_lock));
> +
> +    if ( iommu_use_hap_pt(d) )
> +    {
> +        mfn_t pgd_mfn =
> +            pagetable_get_mfn(p2m_get_pagetable(p2m_get_hostp2m(d)));
> +
> +        return pagetable_get_paddr(pagetable_from_mfn(pgd_mfn));
> +    }
> +
> +    if ( !hd->arch.vtd.pgd_maddr )
> +        addr_to_dma_page_maddr(d, 0, 1);
> +
> +    return hd->arch.vtd.pgd_maddr;
> +}
> +
>  static void iommu_flush_write_buffer(struct vtd_iommu *iommu)
>  {
>      u32 val;
> @@ -1347,22 +1367,17 @@ int domain_context_mapping_one(
>      {
>          spin_lock(&hd->arch.mapping_lock);
>  
> -        /* Ensure we have pagetables allocated down to leaf PTE. */
> -        if ( hd->arch.vtd.pgd_maddr == 0 )
> +        pgd_maddr = domain_pgd_maddr(domain);
> +        if ( !pgd_maddr )
>          {
> -            addr_to_dma_page_maddr(domain, 0, 1);
> -            if ( hd->arch.vtd.pgd_maddr == 0 )
> -            {
> -            nomem:
> -                spin_unlock(&hd->arch.mapping_lock);
> -                spin_unlock(&iommu->lock);
> -                unmap_vtd_domain_page(context_entries);
> -                return -ENOMEM;
> -            }
> +        nomem:
> +            spin_unlock(&hd->arch.mapping_lock);
> +            spin_unlock(&iommu->lock);
> +            unmap_vtd_domain_page(context_entries);
> +            return -ENOMEM;
>          }

This renders all calls bogus in shared mode - if the function ended up
getting called nevertheless, it would still allocate the root table.
Therefore I'd suggest that at least all of its callers gain an explicit
check. As it looks, that's really just dma_pte_clear_one().
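
The explicit caller check being suggested could take roughly the shape
below. Everything is stubbed so the fragment stands alone; the domain
layout, the fake maddr values and the body of dma_pte_clear_one() are
illustrative assumptions, not the actual VT-d code:

```c
/* Sketch of guarding a caller of domain_pgd_maddr() so that, in shared
 * (HAP) mode, the function cannot allocate a separate IOMMU root table
 * as a side effect. */
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Stub domain state: whether the CPU p2m is shared with the IOMMU,
 * and whether a separate IOMMU root table has been allocated. */
struct domain {
    bool use_hap_pt;
    uint64_t pgd_maddr;
};

static bool iommu_use_hap_pt(const struct domain *d)
{
    return d->use_hap_pt;
}

/* Stand-in for domain_pgd_maddr(): allocates the IOMMU root table on
 * demand when not in shared mode. */
static uint64_t domain_pgd_maddr(struct domain *d)
{
    if ( iommu_use_hap_pt(d) )
        return 0x1000; /* pretend p2m root */

    if ( !d->pgd_maddr )
        d->pgd_maddr = 0x2000; /* pretend on-demand allocation */

    return d->pgd_maddr;
}

/* Caller with the explicit guard: in shared mode there is no separate
 * IOMMU page table to clear, so bail before the helper can allocate a
 * root table as a side effect. */
static void dma_pte_clear_one(struct domain *d)
{
    if ( iommu_use_hap_pt(d) )
        return;

    (void)domain_pgd_maddr(d);
    /* ... walk the table and clear the PTE ... */
}
```

An ASSERT(!iommu_use_hap_pt(d)) inside domain_pgd_maddr()'s allocation
path would be a possible complement, documenting that shared-mode calls
are not expected to reach it.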

Jan


From xen-devel-bounces@lists.xenproject.org Sun Jul 26 10:08:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 26 Jul 2020 10:08:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jzdZk-0000TL-BS; Sun, 26 Jul 2020 10:08:12 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=s9Qx=BF=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jzdZi-0000TG-Tf
 for xen-devel@lists.xenproject.org; Sun, 26 Jul 2020 10:08:10 +0000
X-Inumbo-ID: e6ade39a-cf27-11ea-a662-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e6ade39a-cf27-11ea-a662-12813bfff9fa;
 Sun, 26 Jul 2020 10:08:07 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=Fex71lD3WIoXQPwtI8bVa4JLDSOOr8KNmZO3Qv7nof0=; b=GoOWbPtJU8gb2W/620RJyedg/
 ofRnGncp1Czi7BPmDpN/ikhDTwdEbaYWjMhwqkwJDrnqzRrw5bQVpTO182Hbig68hqt1iuGQz/sLB
 o/s6ULMgKPIbJUWYFEiHRpGKgbxnG270FQ9HFMevtrs3mvfcQrySeSyIpoFWlpKPJPQe8=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jzdZe-0004JC-Hv; Sun, 26 Jul 2020 10:08:06 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jzdZe-00058p-AY; Sun, 26 Jul 2020 10:08:06 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jzdZe-0006pg-9s; Sun, 26 Jul 2020 10:08:06 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-152213-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-coverity test] 152213: all pass - PUSHED
X-Osstest-Versions-This: xen=0562cbc14cf02b8188b9f1f37f39a4886776ce7c
X-Osstest-Versions-That: xen=f3885e8c3ceaef101e466466e879e97103ecce18
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 26 Jul 2020 10:08:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 152213 xen-unstable-coverity real [real]
http://logs.test-lab.xenproject.org/osstest/logs/152213/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 xen                  0562cbc14cf02b8188b9f1f37f39a4886776ce7c
baseline version:
 xen                  f3885e8c3ceaef101e466466e879e97103ecce18

Last test of basis   152103  2020-07-22 09:24:23 Z    4 days
Testing same since   152213  2020-07-26 09:18:31 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>

jobs:
 coverity-amd64                                               pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   f3885e8c3c..0562cbc14c  0562cbc14cf02b8188b9f1f37f39a4886776ce7c -> coverity-tested/smoke


From xen-devel-bounces@lists.xenproject.org Sun Jul 26 12:11:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 26 Jul 2020 12:11:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jzfUK-0002lm-Nc; Sun, 26 Jul 2020 12:10:44 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=s9Qx=BF=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jzfUI-0002lh-SV
 for xen-devel@lists.xenproject.org; Sun, 26 Jul 2020 12:10:42 +0000
X-Inumbo-ID: 0555f434-cf39-11ea-a692-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0555f434-cf39-11ea-a692-12813bfff9fa;
 Sun, 26 Jul 2020 12:10:39 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=yq0lDHcL6/Au5CpIWp9T5FIYdFtqwpWluV5QTU3ifVs=; b=BH9ho7eJzjAkTqDUVyixnpebZ
 8xF2VzFDd2dHwKHPSbH/fEsnvKNOJREELbTDX2tiRamvTJJ95cjG36ZJQgJTrcoWvPvSHiOrdWd5x
 f8+//oRy74Jnz6kPmWiUO1g7csyphGdujMlgF+vwEAF8N49uAt1mU6umNpEuxsz+MfGww=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jzfUF-0006lv-66; Sun, 26 Jul 2020 12:10:39 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jzfUE-0005El-SI; Sun, 26 Jul 2020 12:10:38 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jzfUE-0004Dy-Rn; Sun, 26 Jul 2020 12:10:38 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-152206-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 152206: regressions - FAIL
X-Osstest-Failures: linux-linus:test-amd64-i386-libvirt:guest-start/debian.repeat:fail:regression
 linux-linus:test-arm64-arm64-libvirt-xsm:guest-start.2:fail:regression
 linux-linus:test-amd64-amd64-examine:memdisk-try-append:fail:regression
 linux-linus:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:allowable
 linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: linux=04300d66f0a06d572d9f2ad6768c38cabde22179
X-Osstest-Versions-That: linux=1b5044021070efa3259f3e9548dc35d1eb6aa844
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 26 Jul 2020 12:10:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 152206 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/152206/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-libvirt     18 guest-start/debian.repeat fail REGR. vs. 151214
 test-arm64-arm64-libvirt-xsm 17 guest-start.2            fail REGR. vs. 151214
 test-amd64-amd64-examine      4 memdisk-try-append       fail REGR. vs. 151214

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds    16 guest-start/debian.repeat fail REGR. vs. 151214

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 151214
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 151214
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 151214
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 151214
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 151214
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 151214
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 151214
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 151214
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                04300d66f0a06d572d9f2ad6768c38cabde22179
baseline version:
 linux                1b5044021070efa3259f3e9548dc35d1eb6aa844

Last test of basis   151214  2020-06-18 02:27:46 Z   38 days
Failing since        151236  2020-06-19 19:10:35 Z   36 days   57 attempts
Testing same since   152206  2020-07-26 01:40:19 Z    0 days    1 attempts

------------------------------------------------------------
913 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 52328 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Jul 26 16:49:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 26 Jul 2020 16:49:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jzjq3-0000oB-2W; Sun, 26 Jul 2020 16:49:27 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=s9Qx=BF=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jzjq2-0000n7-G4
 for xen-devel@lists.xenproject.org; Sun, 26 Jul 2020 16:49:26 +0000
X-Inumbo-ID: f24d8e7a-cf5f-11ea-8a4a-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f24d8e7a-cf5f-11ea-8a4a-bc764e2007e4;
 Sun, 26 Jul 2020 16:49:18 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=EDrg8xeEb6IUqJZhMmIIcYSv4W+sQutaZyr/K5e8+PM=; b=wnZqkglqLQ5uELW/G/OMBVO42
 5LatDYsl2WvZHMKxfXN+ja23KSJTloAUZHegGEOTGBFX8rj8csgHq84Bd5Zl3iwv8KlZvLOBPtFxy
 XdZrBRDy1Lnn2iaJcTMNgZKqlSaChYrHfQBXnemKWTU+eO3N3CeRdiSUi//NSQtaxr5Og=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jzjpt-0004V7-HV; Sun, 26 Jul 2020 16:49:17 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jzjpt-0008RL-8Z; Sun, 26 Jul 2020 16:49:17 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jzjpt-0001NT-6b; Sun, 26 Jul 2020 16:49:17 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-152211-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 152211: regressions - FAIL
X-Osstest-Failures: qemu-mainline:test-amd64-amd64-qemuu-nested-intel:xen-boot/l1:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-qemuu-nested-amd:xen-boot/l1:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:windows-install:fail:regression
 qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: qemuu=b0ce3f021e0157e9a5ab836cb162c48caac132e1
X-Osstest-Versions-That: qemuu=9e3903136d9acde2fb2dd9e967ba928050a6cb4a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 26 Jul 2020 16:49:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 152211 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/152211/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-nested-intel 14 xen-boot/l1       fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-qemuu-nested-amd 14 xen-boot/l1         fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-win7-amd64 10 windows-install  fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-ws16-amd64 10 windows-install  fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-ws16-amd64 10 windows-install   fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-win7-amd64 10 windows-install   fail REGR. vs. 151065

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 151065
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 151065
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                b0ce3f021e0157e9a5ab836cb162c48caac132e1
baseline version:
 qemuu                9e3903136d9acde2fb2dd9e967ba928050a6cb4a

Last test of basis   151065  2020-06-12 22:27:51 Z   43 days
Failing since        151101  2020-06-14 08:32:51 Z   42 days   59 attempts
Testing same since   152211  2020-07-26 05:28:44 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Ahmed Karaman <ahmedkhaledkaraman@gmail.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.m.mail@gmail.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Richardson <Alexander.Richardson@cl.cam.ac.uk>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Boettcher <alexander.boettcher@genode-labs.com>
  Alexander Bulekov <alxndr@bu.edu>
  Alexander Duyck <alexander.h.duyck@linux.intel.com>
  Alexandre Mergnat <amergnat@baylibre.com>
  Alexey Kardashevskiy <aik@ozlabs.ru>
  Alexey Krasikov <alex-krasikov@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Allan Peramaki <aperamak@pp1.inet.fi>
  Andrew <andrew@daynix.com>
  Andrew Jones <drjones@redhat.com>
  Andrew Melnychenko <andrew@daynix.com>
  Ani Sinha <ani.sinha@nutanix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Antoine Damhet <antoine.damhet@blade-group.com>
  Ard Biesheuvel <ardb@kernel.org>
  Artyom Tarasenko <atar4qemu@gmail.com>
  Atish Patra <atish.patra@wdc.com>
  Aurelien Jarno <aurelien@aurel32.net>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Basil Salman <basil@daynix.com>
  Basil Salman <bsalman@redhat.com>
  Beata Michalska <beata.michalska@linaro.org>
  Bin Meng <bin.meng@windriver.com>
  Bin Meng <bmeng.cn@gmail.com>
  Cameron Esfahani <dirty@apple.com>
  Catherine A. Frederick <chocola@animebitch.es>
  Cathy Zhang <cathy.zhang@intel.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chenyi Qiang <chenyi.qiang@intel.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christophe de Dinechin <dinechin@redhat.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Cindy Lu <lulu@redhat.com>
  Claudio Fontana <cfontana@suse.de>
  Cleber Rosa <crosa@redhat.com>
  Colin Xu <colin.xu@intel.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniele Buono <dbuono@linux.vnet.ibm.com>
  David Carlier <devnexen@gmail.com>
  David Edmondson <david.edmondson@oracle.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Denis V. Lunev <den@openvz.org>
  Derek Su <dereksu@qnap.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Ed Robbins <E.J.C.Robbins@kent.ac.uk>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Emilio G. Cota <cota@braap.org>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  erik-smit <erik.lucas.smit@gmail.com>
  Evgeny Yakovlev <eyakovlev@virtuozzo.com>
  fangying <fangying1@huawei.com>
  Farhan Ali <alifm@linux.ibm.com>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frank Chang <frank.chang@sifive.com>
  Geoffrey McRae <geoff@hostfission.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Giuseppe Musacchio <thatlemon@gmail.com>
  Greg Kurz <groug@kaod.org>
  Guenter Roeck <linux@roeck-us.net>
  Gustavo Romero <gromero@linux.ibm.com>
  Halil Pasic <pasic@linux.ibm.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Howard Spoelstra <hsp.cat7@gmail.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Ian Jiang <ianjiang.ict@gmail.com>
  Igor Mammedov <imammedo@redhat.com>
  Jan Kiszka <jan.kiszka@siemens.com>
  Janne Grunau <j@jannau.net>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason Wang <jasowang@redhat.com>
  Jay Zhou <jianjay.zhou@huawei.com>
  Jean-Christophe Dubois <jcd@tribudubois.net>
  Jessica Clarke <jrtc27@jrtc27.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jingqi Liu <jingqi.liu@intel.com>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Joseph Myers <joseph@codesourcery.com>
  Josh Kunz <jkz@google.com>
  Juan Quintela <quintela@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Keqian Zhu <zhukeqian1@huawei.com>
  Kevin Wolf <kwolf@redhat.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Leif Lindholm <leif@nuviainc.com>
  Leonid Bloch <lb.workbox@gmail.com>
  Leonid Bloch <lbloch@janustech.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Qiang <liq3ea@gmail.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  lichun <lichun@ruijie.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  Like Xu <like.xu@linux.intel.com>
  Lingfeng Yang <lfy@google.com>
  Lingshan zhu <lingshan.zhu@intel.com>
  Liran Alon <liran.alon@oracle.com>
  Liu Yi L <yi.l.liu@intel.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Luc Michel <luc.michel@greensocs.com>
  Lukas Straub <lukasstraub2@web.de>
  Luwei Kang <luwei.kang@intel.com>
  Maciej S. Szmigiero <maciej.szmigiero@oracle.com>
  Magnus Damm <magnus.damm@gmail.com>
  Mao Zhongyi <maozhongyi@cmss.chinamobile.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marcelo Tosatti <mtosatti@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Mario Smarduch <msmarduch@digitalocean.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Masahiro Yamada <masahiroy@kernel.org>
  Matus Kysel <mkysel@tachyum.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Maxime Coquelin <maxime.coquelin@redhat.com>
  Menno Lageman <menno.lageman@oracle.com>
  Michael Rolnik <mrolnik@gmail.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michal Privoznik <mprivozn@redhat.com>
  Michele Denber <denber@mindspring.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Olaf Hering <olaf@aepfle.de>
  Pan Nengyuan <pannengyuan@huawei.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Radoslaw Biernacki <rad@semihalf.com>
  Raphael Norwitz <raphael.norwitz@nutanix.com>
  Reza Arbab <arbab@linux.ibm.com>
  Richard Henderson <richard.henderson@linaro.org>
  Richard W.M. Jones <rjones@redhat.com>
  Riku Voipio <riku.voipio@linaro.org>
  Robert Foley <robert.foley@linaro.org>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Roman Kagan <rkagan@virtuozzo.com>
  Roman Kagan <rvkagan@yandex-team.ru>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Sarah Harris <S.E.Harris@kent.ac.uk>
  Sebastian Rasmussen <sebras@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Weil <sw@weilnetz.de>
  Tao Xu <tao3.xu@intel.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tiwei Bie <tiwei.bie@intel.com>
  Tong Ho <tong.ho@xilinx.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  WangBowen <bowen.wang@intel.com>
  Wei Huang <wei.huang2@amd.com>
  Wei Wang <wei.w.wang@intel.com>
  Wentong Wu <wentong.wu@intel.com>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xie Yongji <xieyongji@bytedance.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Ying Fang <fangying1@huawei.com>
  Yoshinori Sato <ysato@users.sourceforge.jp>
  Yuri Benditovich <yuri.benditovich@daynix.com>
  Zhang Chen <chen.zhang@intel.com>
  Zheng Chuan <zhengchuan@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 31851 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Jul 26 20:26:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 26 Jul 2020 20:26:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jznDQ-0002lX-Oo; Sun, 26 Jul 2020 20:25:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Kb6m=BF=arm.com=andre.przywara@srs-us1.protection.inumbo.net>)
 id 1jznDP-0002lS-Sq
 for xen-devel@lists.xenproject.org; Sun, 26 Jul 2020 20:25:47 +0000
X-Inumbo-ID: 2f8e0b70-cf7e-11ea-8a5d-bc764e2007e4
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 2f8e0b70-cf7e-11ea-8a5d-bc764e2007e4;
 Sun, 26 Jul 2020 20:25:45 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 8ACAA31B;
 Sun, 26 Jul 2020 13:25:45 -0700 (PDT)
Received: from [192.168.2.22] (unknown [172.31.20.19])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id BC8313F66E;
 Sun, 26 Jul 2020 13:25:44 -0700 (PDT)
Subject: Re: dom0 Linux 5.8-rc5 kernel failing to initialize cooling maps for
 Allwinner H6 SoC
To: Alejandro <alejandro.gonzalez.correo@gmail.com>,
 Julien Grall <julien@xen.org>
References: <CA+wirGqXMoRkS-aJmfFLipUv8SdY5LKV1aMrF0yKRJQaMvzs6Q@mail.gmail.com>
 <1c5cee83-295e-cc02-d018-b53ddc6e3590@xen.org>
 <CA+wirGpFvLBzYRBaq8yspJj8j9-yoLwN88bt079qM5yqPTwtcA@mail.gmail.com>
From: =?UTF-8?Q?Andr=c3=a9_Przywara?= <andre.przywara@arm.com>
Autocrypt: addr=andre.przywara@arm.com; prefer-encrypt=mutual; keydata=
 xsFNBFNPCKMBEAC+6GVcuP9ri8r+gg2fHZDedOmFRZPtcrMMF2Cx6KrTUT0YEISsqPoJTKld
 tPfEG0KnRL9CWvftyHseWTnU2Gi7hKNwhRkC0oBL5Er2hhNpoi8x4VcsxQ6bHG5/dA7ctvL6
 kYvKAZw4X2Y3GTbAZIOLf+leNPiF9175S8pvqMPi0qu67RWZD5H/uT/TfLpvmmOlRzNiXMBm
 kGvewkBpL3R2clHquv7pB6KLoY3uvjFhZfEedqSqTwBVu/JVZZO7tvYCJPfyY5JG9+BjPmr+
 REe2gS6w/4DJ4D8oMWKoY3r6ZpHx3YS2hWZFUYiCYovPxfj5+bOr78sg3JleEd0OB0yYtzTT
 esiNlQpCo0oOevwHR+jUiaZevM4xCyt23L2G+euzdRsUZcK/M6qYf41Dy6Afqa+PxgMEiDto
 ITEH3Dv+zfzwdeqCuNU0VOGrQZs/vrKOUmU/QDlYL7G8OIg5Ekheq4N+Ay+3EYCROXkstQnf
 YYxRn5F1oeVeqoh1LgGH7YN9H9LeIajwBD8OgiZDVsmb67DdF6EQtklH0ycBcVodG1zTCfqM
 AavYMfhldNMBg4vaLh0cJ/3ZXZNIyDlV372GmxSJJiidxDm7E1PkgdfCnHk+pD8YeITmSNyb
 7qeU08Hqqh4ui8SSeUp7+yie9zBhJB5vVBJoO5D0MikZAODIDwARAQABzS1BbmRyZSBQcnp5
 d2FyYSAoQVJNKSA8YW5kcmUucHJ6eXdhcmFAYXJtLmNvbT7CwXsEEwECACUCGwMGCwkIBwMC
 BhUIAgkKCwQWAgMBAh4BAheABQJTWSV8AhkBAAoJEAL1yD+ydue63REP/1tPqTo/f6StS00g
 NTUpjgVqxgsPWYWwSLkgkaUZn2z9Edv86BLpqTY8OBQZ19EUwfNehcnvR+Olw+7wxNnatyxo
 D2FG0paTia1SjxaJ8Nx3e85jy6l7N2AQrTCFCtFN9lp8Pc0LVBpSbjmP+Peh5Mi7gtCBNkpz
 KShEaJE25a/+rnIrIXzJHrsbC2GwcssAF3bd03iU41J1gMTalB6HCtQUwgqSsbG8MsR/IwHW
 XruOnVp0GQRJwlw07e9T3PKTLj3LWsAPe0LHm5W1Q+euoCLsZfYwr7phQ19HAxSCu8hzp43u
 zSw0+sEQsO+9wz2nGDgQCGepCcJR1lygVn2zwRTQKbq7Hjs+IWZ0gN2nDajScuR1RsxTE4WR
 lj0+Ne6VrAmPiW6QqRhliDO+e82riI75ywSWrJb9TQw0+UkIQ2DlNr0u0TwCUTcQNN6aKnru
 ouVt3qoRlcD5MuRhLH+ttAcmNITMg7GQ6RQajWrSKuKFrt6iuDbjgO2cnaTrLbNBBKPTG4oF
 D6kX8Zea0KvVBagBsaC1CDTDQQMxYBPDBSlqYCb/b2x7KHTvTAHUBSsBRL6MKz8wwruDodTM
 4E4ToV9URl4aE/msBZ4GLTtEmUHBh4/AYwk6ACYByYKyx5r3PDG0iHnJ8bV0OeyQ9ujfgBBP
 B2t4oASNnIOeGEEcQ2rjzsFNBFNPCKMBEACm7Xqafb1Dp1nDl06aw/3O9ixWsGMv1Uhfd2B6
 it6wh1HDCn9HpekgouR2HLMvdd3Y//GG89irEasjzENZPsK82PS0bvkxxIHRFm0pikF4ljIb
 6tca2sxFr/H7CCtWYZjZzPgnOPtnagN0qVVyEM7L5f7KjGb1/o5EDkVR2SVSSjrlmNdTL2Rd
 zaPqrBoxuR/y/n856deWqS1ZssOpqwKhxT1IVlF6S47CjFJ3+fiHNjkljLfxzDyQXwXCNoZn
 BKcW9PvAMf6W1DGASoXtsMg4HHzZ5fW+vnjzvWiC4pXrcP7Ivfxx5pB+nGiOfOY+/VSUlW/9
 GdzPlOIc1bGyKc6tGREH5lErmeoJZ5k7E9cMJx+xzuDItvnZbf6RuH5fg3QsljQy8jLlr4S6
 8YwxlObySJ5K+suPRzZOG2+kq77RJVqAgZXp3Zdvdaov4a5J3H8pxzjj0yZ2JZlndM4X7Msr
 P5tfxy1WvV4Km6QeFAsjcF5gM+wWl+mf2qrlp3dRwniG1vkLsnQugQ4oNUrx0ahwOSm9p6kM
 CIiTITo+W7O9KEE9XCb4vV0ejmLlgdDV8ASVUekeTJkmRIBnz0fa4pa1vbtZoi6/LlIdAEEt
 PY6p3hgkLLtr2GRodOW/Y3vPRd9+rJHq/tLIfwc58ZhQKmRcgrhtlnuTGTmyUqGSiMNfpwAR
 AQABwsFfBBgBAgAJBQJTTwijAhsMAAoJEAL1yD+ydue64BgP/33QKczgAvSdj9XTC14wZCGE
 U8ygZwkkyNf021iNMj+o0dpLU48PIhHIMTXlM2aiiZlPWgKVlDRjlYuc9EZqGgbOOuR/pNYA
 JX9vaqszyE34JzXBL9DBKUuAui8z8GcxRcz49/xtzzP0kH3OQbBIqZWuMRxKEpRptRT0wzBL
 O31ygf4FRxs68jvPCuZjTGKELIo656/Hmk17cmjoBAJK7JHfqdGkDXk5tneeHCkB411p9WJU
 vMO2EqsHjobjuFm89hI0pSxlUoiTL0Nuk9Edemjw70W4anGNyaQtBq+qu1RdjUPBvoJec7y/
 EXJtoGxq9Y+tmm22xwApSiIOyMwUi9A1iLjQLmngLeUdsHyrEWTbEYHd2sAM2sqKoZRyBDSv
 ejRvZD6zwkY/9nRqXt02H1quVOP42xlkwOQU6gxm93o/bxd7S5tEA359Sli5gZRaucpNQkwd
 KLQdCvFdksD270r4jU/rwR2R/Ubi+txfy0dk2wGBjl1xpSf0Lbl/KMR5TQntELfLR4etizLq
 Xpd2byn96Ivi8C8u9zJruXTueHH8vt7gJ1oax3yKRGU5o2eipCRiKZ0s/T7fvkdq+8beg9ku
 fDO4SAgJMIl6H5awliCY2zQvLHysS/Wb8QuB09hmhLZ4AifdHyF1J5qeePEhgTA+BaUbiUZf
 i4aIXCH3Wv6K
Organization: ARM Ltd.
Message-ID: <02b630bd-22e0-afde-6784-be068d0948ae@arm.com>
Date: Sun, 26 Jul 2020 21:24:14 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <CA+wirGpFvLBzYRBaq8yspJj8j9-yoLwN88bt079qM5yqPTwtcA@mail.gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 24/07/2020 12:20, Alejandro wrote:

Hi,

> On Fri, 24 Jul 2020 at 12:45, Julien Grall (<julien@xen.org>) wrote:
>>> I'm trying Xen 4.13.1 on an Allwinner H6 SoC (more precisely a Pine H64
>>> model B, with an ARM Cortex-A53 CPU).
>>> I managed to get a dom0 Linux 5.8-rc5 kernel running fine, unpatched,
>>> and I'm using the upstream device tree for
>>> my board. However, the dom0 kernel has trouble when reading some DT
>>> nodes that are related to the CPUs, and
>>> it can't initialize the thermal subsystem properly, which is a kind of
>>> showstopper for me, because I'm concerned
>>> that letting the CPU run at the maximum frequency without watching out
>>> its temperature may cause overheating.
>>
>> I understand this concern, I am aware of some efforts to get CPUFreq
>> working on Xen but I am not sure if there is anything available yet. I
>> have CCed a couple more people who may be able to help here.
> 
> Thank you for the CCs. I hope they can bring on some insight about this :)
> 
>>> The relevant kernel messages are:
>>>
>>> [  +0.001959] sun50i-cpufreq-nvmem: probe of sun50i-cpufreq-nvmem
>>> failed with error -2
>>> ...
>>> [  +0.003053] hw perfevents: failed to parse interrupt-affinity[0] for pmu
>>> [  +0.000043] hw perfevents: /pmu: failed to register PMU devices!
>>> [  +0.000037] armv8-pmu: probe of pmu failed with error -22
>>
>> I am not sure the PMU failure is related to the thermal failure below.
> 
> I'm not sure either, but after comparing the kernel messages for a
> boot with and without Xen, those were the differences (excluding, of
> course, the messages that inform that the Xen hypervisor console is
> being used and such). For the sake of completeness, I decided to
> mention it anyway.
> 
>>> [  +0.000163] OF: /thermal-zones/cpu-thermal/cooling-maps/map0: could
>>> not find phandle
>>> [  +0.000063] thermal_sys: failed to build thermal zone cpu-thermal: -22
>> Would it be possible to paste the device-tree node for
>> /thermal-zones/cpu-thermal/cooling-maps? I suspect the issue is because
>> we recreated /cpus from scratch.
>>
>> I don't know much about how the thermal subsystem works, but I suspect
>> this will not be enough to get it working properly on Xen. For a
>> workaround, you would need to create a dom0 with the same number of
>> vCPUs as pCPUs. They would also need to be pinned.
>>
>> I will leave the others to fill in more details.
> 
> I think I should mention that I've tried to hackily fix things by
> removing the make_cpus_node call on handle_node
> (https://github.com/xen-project/xen/blob/master/xen/arch/arm/domain_build.c#L1585),
> after removing the /cpus node from the skip_matches array. This way,
> the original /cpus node was passed through, without being recreated by
> Xen. Of course, I made sure that dom0 used the same number of vCPUs as
> pCPUs, because otherwise things would probably blow up, which luckily
> was not a problem for me. The end result was that the
> aforementioned kernel error messages were gone, and the thermal
> subsystem worked fine again. However, this time the cpufreq-dt probe
> failed, with what I think was an ENODEV error. This left the CPU
> locked at the boot frequency of less than 1 GHz, compared to the
> maximum 1.8 GHz frequency that the SoC supports, which has bad
> implications for performance.

So this was actually my first thought: The firmware (U-Boot SPL) sets up
some basic CPU frequency (888 MHz for H6 [1]), which is known to never
overheat the chip, even under full load. So any concern from your side
about the board or SoC overheating could be dismissed, with the current
mainline code, at least. However, you lose the full speed, by quite a
margin on the H6 (on the A64 it's only 816 vs 1200(ish) MHz).
However, without the clock entries in the CPU node, the frequency would
never be changed by Dom0 anyway (nor by Xen, which doesn't even know how
to do this).
So from a practical point of view: unless you hack Xen to pass on more
cpu node properties, you are stuck at 888 MHz anyway, and don't need to
worry about overheating.
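
To make the dependency concrete, here is a simplified, illustrative sketch of the bare-metal device-tree layout (modelled on the shape of mainline sunxi .dtsi files, not a verbatim copy): the cooling map resolves a phandle into the CPU node, so a /cpus node recreated without #cooling-cells breaks the thermal zone, and one without clock/OPP entries rules out DVFS:

```dts
cpus {
        cpu0: cpu@0 {
                compatible = "arm,cortex-a53";
                device_type = "cpu";
                reg = <0>;
                clocks = <&ccu CLK_CPUX>;   /* not passed through to Dom0 */
                #cooling-cells = <2>;       /* not passed through either */
        };
};

thermal-zones {
        cpu-thermal {
                thermal-sensors = <&ths 0>;
                cooling-maps {
                        map0 {
                                trip = <&cpu_alert>;
                                /* the phandle Dom0 reports as "could not find": */
                                cooling-device = <&cpu0 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
                        };
                };
        };
};
```
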

Now if you would pass on the CPU clock frequency control to Dom0, you
run into more issues: the Linux governors would probably try to set up
both frequency and voltage based on load, BUT this would be Dom0's bogus
perception of the actual system load. Even with pinned Dom0 vCPUs, a
busy system might spend most of its CPU time in DomU VCPUs, which
probably makes it look mostly idle in Dom0. Using a fixed governor
(performance) would avoid this, at the cost of running full speed all of
the time, probably needlessly.
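
For completeness, the pinned-Dom0 configuration mentioned here (and suggested by Julien earlier in the thread) needs no Xen patch; it can be requested with existing, documented Xen command-line options. The value 4 is an assumption matching the H6's four pCPUs:

```
dom0_max_vcpus=4 dom0_vcpus_pin
```

These go on Xen's own command line in the bootloader entry, not on the Dom0 kernel's command line.
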

So fixing the CPU clocking issue is more complex and requires more
groundwork in Xen first, probably involving some enlightened Dom0
drivers as well. I didn't follow latest developments in this area, nor
do I remember x86's answer to this, but it's not something easy, I would
presume.

Alejandro: can you try to measure the actual CPU frequency in Dom0?
Maybe some easy benchmark? "mhz" from lmbench does a great job in
telling you the actual frequency, just by clever measurement. But any
other CPU bound benchmark would do, if you compare bare metal Linux vs.
Dom0.
Also, does cpufreq come up in Dom0 at all? Can you select governors and
frequencies?
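
A quick way to answer the cpufreq question from inside Dom0 is the standard Linux cpufreq sysfs interface (these paths are the generic kernel ABI; whether they exist depends on whether any cpufreq driver bound, which per the report above it may not have):

```shell
# Probe the standard cpufreq sysfs attributes for CPU0. If the directory
# is missing, no cpufreq driver bound (e.g. cpufreq-dt probed with ENODEV,
# as described above), and the CPU stays at the firmware-set frequency.
base=/sys/devices/system/cpu/cpu0/cpufreq
report=""
for f in scaling_driver scaling_governor scaling_cur_freq scaling_available_governors; do
    if [ -r "$base/$f" ]; then
        report="$report$f: $(cat "$base/$f")\n"
    else
        report="$report$f: not available (no cpufreq driver bound)\n"
    fi
done
printf "%b" "$report"
```

If cpufreq is absent, a CPU-bound benchmark such as lmbench's mhz, compared between bare-metal Linux and Dom0, still reveals the effective frequency.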

Cheers,
Andre.

> Therefore, as it seems that passing more properties (like
> #cooling-cells) is enough to get temperatures working, I suspect that
> fixing the thermal issue is relatively easy, at least for my SoC. But
> maybe I have just been lucky and that's not supposed to work anyway;
> I'm not sure.
> 
>>>
>>> I've searched for issues, code or commits that may be related for this
>>> issue. The most relevant things I found are:
>>>
>>> - A patch that blacklists the A53 PMU:
>>> https://patchwork.kernel.org/patch/10899881/
>>> - The handle_node function in xen/arch/arm/domain_build.c:
>>> https://github.com/xen-project/xen/blob/master/xen/arch/arm/domain_build.c#L1427
>>
>> I remember this discussion. The problem was that the PMU is using
>> per-CPU interrupts. Xen is not yet able to handle PPIs as they often
>> require more context to be saved/restored (in this case the PMU context).
>>
>> There was a proposal to look if a device is using PPIs and just remove
>> them from the Device-Tree. Unfortunately, I haven't seen any official
>> submission for this patch.
>>
>> Did you have to apply the patch to boot up? If not, then the error above
>> shouldn't be a concern. However, if you need PMU support for using the
>> thermal devices then it is going to require some work.
> 
> No, I didn't apply any patch to Xen whatsoever. It worked fine out of
> the box. As I mentioned above, with a more complete /cpus node
> declaration, the thermal subsystem works. I guess the PMU worked fine
> too, but I didn't test it in any way, so maybe it is just barely able
> to probe successfully somehow.
> 
>>> I've thought about removing "/cpus" from the skip_matches array in the
>>> handle_node function, but I'm not sure
>>> that would be a good fix.
>>
>> The node "/cpus" and its sub-node are recreated by Xen for Dom0. This is
>> because Dom0 may have a different number of vCPUs and it doesn't see
>> the pCPUs.
>>
>> If you don't skip "/cpus" from the host DT then you would end up with
>> two "/cpus" paths in your dom0 DT. Most likely, Linux will not be happy
>> with it.
> 
> Indeed, that is consistent with my observations of how the source code
> works. Thanks for the confirmation :)
> 
>> I vaguely remember some discussions on how to deal with CPUFreq in Xen.
>> IIRC we agreed that Dom0 should be part of the equation because it
>> already contains all the drivers. However, I can't remember if we agreed
>> how the dom0 would be made aware of the pCPUs.
> 
> That makes sense. Supporting every existing thermal and cpufreq method
> in every ARM SoC seems like a lot of unneeded duplication of work,
> provided that Linux already has pretty good support for that. But, if
> that's the case, I guess we should not mark the "dom0-kernel" cpufreq
> boot parameter as deprecated in the documentation, at least for the
> ARM platform: http://xenbits.xen.org/docs/unstable/misc/xen-command-line.html#cpufreq
> 



From xen-devel-bounces@lists.xenproject.org Sun Jul 26 23:49:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 26 Jul 2020 23:49:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jzqNs-0002sJ-LD; Sun, 26 Jul 2020 23:48:48 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=s9Qx=BF=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jzqNr-0002rz-5p
 for xen-devel@lists.xenproject.org; Sun, 26 Jul 2020 23:48:47 +0000
X-Inumbo-ID: 87874f78-cf9a-11ea-a732-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 87874f78-cf9a-11ea-a732-12813bfff9fa;
 Sun, 26 Jul 2020 23:48:39 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=XcaWOD3VtlgyhwUBne46t/i6tgZWe3k33hiyXKHqe+U=; b=KD7qZrG3a+2NCfalO9JmuZa+x
 8C4te18+55CmsyFYaioVByIX4c6m24z0TMdVYaXJ8i6+eLlnTS+GYxVmnaDyOgfgE7/OWV793LwOJ
 7mc6GERJlyXDqBIdC+fPGsI3xKi3R6d/7yFsx/AzoHv9OlYodDmxsLHQzjCjhTe3t1oTw=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jzqNi-0004f4-Ur; Sun, 26 Jul 2020 23:48:39 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jzqNi-0007sC-HY; Sun, 26 Jul 2020 23:48:38 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jzqNi-0003Dq-Gu; Sun, 26 Jul 2020 23:48:38 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-152216-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 152216: regressions - FAIL
X-Osstest-Failures: linux-linus:test-arm64-arm64-libvirt-xsm:guest-start.2:fail:regression
 linux-linus:test-amd64-i386-libvirt:guest-start/debian.repeat:fail:heisenbug
 linux-linus:test-amd64-amd64-examine:memdisk-try-append:fail:heisenbug
 linux-linus:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:heisenbug
 linux-linus:test-arm64-arm64-libvirt-xsm:guest-start/debian.repeat:fail:heisenbug
 linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: linux=04300d66f0a06d572d9f2ad6768c38cabde22179
X-Osstest-Versions-That: linux=1b5044021070efa3259f3e9548dc35d1eb6aa844
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 26 Jul 2020 23:48:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 152216 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/152216/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-libvirt-xsm 17 guest-start.2  fail in 152206 REGR. vs. 151214

Tests which are failing intermittently (not blocking):
 test-amd64-i386-libvirt 18 guest-start/debian.repeat fail in 152206 pass in 152216
 test-amd64-amd64-examine    4 memdisk-try-append fail in 152206 pass in 152216
 test-armhf-armhf-xl-rtds 16 guest-start/debian.repeat fail in 152206 pass in 152216
 test-arm64-arm64-libvirt-xsm 16 guest-start/debian.repeat  fail pass in 152206

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 151214
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 151214
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 151214
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 151214
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 151214
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 151214
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 151214
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 151214
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                04300d66f0a06d572d9f2ad6768c38cabde22179
baseline version:
 linux                1b5044021070efa3259f3e9548dc35d1eb6aa844

Last test of basis   151214  2020-06-18 02:27:46 Z   38 days
Failing since        151236  2020-06-19 19:10:35 Z   37 days   58 attempts
Testing same since   152206  2020-07-26 01:40:19 Z    0 days    2 attempts

------------------------------------------------------------
913 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 52328 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Jul 27 05:08:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jul 2020 05:08:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jzvN6-0007D3-OL; Mon, 27 Jul 2020 05:08:20 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9AKR=BG=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jzvN5-0007Cy-7a
 for xen-devel@lists.xenproject.org; Mon, 27 Jul 2020 05:08:19 +0000
X-Inumbo-ID: 2d75a430-cfc7-11ea-8a6e-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2d75a430-cfc7-11ea-8a6e-bc764e2007e4;
 Mon, 27 Jul 2020 05:08:15 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=KmPpW7XaUcuHFmN+MPZN61DKR9vMh5OwXOiDYXzG52A=; b=y+VEXWNz6RKUnvGAtGl7mI314
 hvGdZCKC9J4OMpiC5PNBnpPXDtbMXS4hNQG1dZALP+rlreqVf+3mZEf8hNUDVHlUcZkef3K9rddKn
 bgHwosIBrufw2VFQul0lCrD87F0ostpUkOtlTdJ6VINSRXSndI+Qx9gj4SptYSJCW3ac4=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jzvN1-00068U-3h; Mon, 27 Jul 2020 05:08:15 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jzvN0-0003bV-Rg; Mon, 27 Jul 2020 05:08:14 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jzvN0-0002rj-Qz; Mon, 27 Jul 2020 05:08:14 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-152219-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 152219: regressions - FAIL
X-Osstest-Failures: qemu-mainline:test-amd64-amd64-qemuu-nested-intel:xen-boot/l1:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-qemuu-nested-amd:xen-boot/l1:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:windows-install:fail:regression
 qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: qemuu=57cdde4a74dd0d68df9e32657773484a5484a027
X-Osstest-Versions-That: qemuu=9e3903136d9acde2fb2dd9e967ba928050a6cb4a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 27 Jul 2020 05:08:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 152219 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/152219/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-nested-intel 14 xen-boot/l1       fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-qemuu-nested-amd 14 xen-boot/l1         fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-win7-amd64 10 windows-install  fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-ws16-amd64 10 windows-install  fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-ws16-amd64 10 windows-install   fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-win7-amd64 10 windows-install   fail REGR. vs. 151065

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 151065
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 151065
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                57cdde4a74dd0d68df9e32657773484a5484a027
baseline version:
 qemuu                9e3903136d9acde2fb2dd9e967ba928050a6cb4a

Last test of basis   151065  2020-06-12 22:27:51 Z   44 days
Failing since        151101  2020-06-14 08:32:51 Z   42 days   60 attempts
Testing same since   152219  2020-07-26 17:08:36 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Ahmed Karaman <ahmedkhaledkaraman@gmail.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.m.mail@gmail.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Richardson <Alexander.Richardson@cl.cam.ac.uk>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Boettcher <alexander.boettcher@genode-labs.com>
  Alexander Bulekov <alxndr@bu.edu>
  Alexander Duyck <alexander.h.duyck@linux.intel.com>
  Alexandre Mergnat <amergnat@baylibre.com>
  Alexey Kardashevskiy <aik@ozlabs.ru>
  Alexey Krasikov <alex-krasikov@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Allan Peramaki <aperamak@pp1.inet.fi>
  Andrew <andrew@daynix.com>
  Andrew Jones <drjones@redhat.com>
  Andrew Melnychenko <andrew@daynix.com>
  Ani Sinha <ani.sinha@nutanix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Antoine Damhet <antoine.damhet@blade-group.com>
  Ard Biesheuvel <ardb@kernel.org>
  Artyom Tarasenko <atar4qemu@gmail.com>
  Atish Patra <atish.patra@wdc.com>
  Aurelien Jarno <aurelien@aurel32.net>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Basil Salman <basil@daynix.com>
  Basil Salman <bsalman@redhat.com>
  Beata Michalska <beata.michalska@linaro.org>
  Bin Meng <bin.meng@windriver.com>
  Bin Meng <bmeng.cn@gmail.com>
  Cameron Esfahani <dirty@apple.com>
  Catherine A. Frederick <chocola@animebitch.es>
  Cathy Zhang <cathy.zhang@intel.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chenyi Qiang <chenyi.qiang@intel.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christophe de Dinechin <dinechin@redhat.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Cindy Lu <lulu@redhat.com>
  Claudio Fontana <cfontana@suse.de>
  Cleber Rosa <crosa@redhat.com>
  Colin Xu <colin.xu@intel.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniele Buono <dbuono@linux.vnet.ibm.com>
  David Carlier <devnexen@gmail.com>
  David Edmondson <david.edmondson@oracle.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Denis V. Lunev <den@openvz.org>
  Derek Su <dereksu@qnap.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Ed Robbins <E.J.C.Robbins@kent.ac.uk>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Emilio G. Cota <cota@braap.org>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  erik-smit <erik.lucas.smit@gmail.com>
  Evgeny Yakovlev <eyakovlev@virtuozzo.com>
  fangying <fangying1@huawei.com>
  Farhan Ali <alifm@linux.ibm.com>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frank Chang <frank.chang@sifive.com>
  Geoffrey McRae <geoff@hostfission.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Giuseppe Musacchio <thatlemon@gmail.com>
  Greg Kurz <groug@kaod.org>
  Guenter Roeck <linux@roeck-us.net>
  Gustavo Romero <gromero@linux.ibm.com>
  Halil Pasic <pasic@linux.ibm.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Howard Spoelstra <hsp.cat7@gmail.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Ian Jiang <ianjiang.ict@gmail.com>
  Igor Mammedov <imammedo@redhat.com>
  Jan Kiszka <jan.kiszka@siemens.com>
  Janne Grunau <j@jannau.net>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason Wang <jasowang@redhat.com>
  Jay Zhou <jianjay.zhou@huawei.com>
  Jean-Christophe Dubois <jcd@tribudubois.net>
  Jessica Clarke <jrtc27@jrtc27.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jingqi Liu <jingqi.liu@intel.com>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Joseph Myers <joseph@codesourcery.com>
  Josh Kunz <jkz@google.com>
  Juan Quintela <quintela@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Keqian Zhu <zhukeqian1@huawei.com>
  Kevin Wolf <kwolf@redhat.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Leif Lindholm <leif@nuviainc.com>
  Leonid Bloch <lb.workbox@gmail.com>
  Leonid Bloch <lbloch@janustech.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Qiang <liq3ea@gmail.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  lichun <lichun@ruijie.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  Like Xu <like.xu@linux.intel.com>
  Lingfeng Yang <lfy@google.com>
  Lingshan zhu <lingshan.zhu@intel.com>
  Liran Alon <liran.alon@oracle.com>
  Liu Yi L <yi.l.liu@intel.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Luc Michel <luc.michel@greensocs.com>
  Lukas Straub <lukasstraub2@web.de>
  Luwei Kang <luwei.kang@intel.com>
  Maciej S. Szmigiero <maciej.szmigiero@oracle.com>
  Magnus Damm <magnus.damm@gmail.com>
  Mao Zhongyi <maozhongyi@cmss.chinamobile.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marcelo Tosatti <mtosatti@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Mario Smarduch <msmarduch@digitalocean.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Masahiro Yamada <masahiroy@kernel.org>
  Matus Kysel <mkysel@tachyum.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Maxime Coquelin <maxime.coquelin@redhat.com>
  Menno Lageman <menno.lageman@oracle.com>
  Michael Rolnik <mrolnik@gmail.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michal Privoznik <mprivozn@redhat.com>
  Michele Denber <denber@mindspring.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Olaf Hering <olaf@aepfle.de>
  Pan Nengyuan <pannengyuan@huawei.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Radoslaw Biernacki <rad@semihalf.com>
  Raphael Norwitz <raphael.norwitz@nutanix.com>
  Reza Arbab <arbab@linux.ibm.com>
  Richard Henderson <richard.henderson@linaro.org>
  Richard W.M. Jones <rjones@redhat.com>
  Riku Voipio <riku.voipio@linaro.org>
  Robert Foley <robert.foley@linaro.org>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Roman Kagan <rkagan@virtuozzo.com>
  Roman Kagan <rvkagan@yandex-team.ru>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Sarah Harris <S.E.Harris@kent.ac.uk>
  Sebastian Rasmussen <sebras@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Weil <sw@weilnetz.de>
  Tao Xu <tao3.xu@intel.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tiwei Bie <tiwei.bie@intel.com>
  Tong Ho <tong.ho@xilinx.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  WangBowen <bowen.wang@intel.com>
  Wei Huang <wei.huang2@amd.com>
  Wei Wang <wei.w.wang@intel.com>
  Wentong Wu <wentong.wu@intel.com>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xie Yongji <xieyongji@bytedance.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Ying Fang <fangying1@huawei.com>
  Yoshinori Sato <ysato@users.sourceforge.jp>
  Yuri Benditovich <yuri.benditovich@daynix.com>
  Zhang Chen <chen.zhang@intel.com>
  Zheng Chuan <zhengchuan@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 31961 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Jul 27 06:51:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jul 2020 06:51:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jzwyJ-00086i-2x; Mon, 27 Jul 2020 06:50:51 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=TfGp=BG=epam.com=prvs=5477f034df=oleksandr_andrushchenko@srs-us1.protection.inumbo.net>)
 id 1jzwyH-00086W-02
 for xen-devel@lists.xenproject.org; Mon, 27 Jul 2020 06:50:49 +0000
X-Inumbo-ID: 8017f9aa-cfd5-11ea-8a72-bc764e2007e4
Received: from mx0b-0039f301.pphosted.com (unknown [148.163.137.242])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8017f9aa-cfd5-11ea-8a72-bc764e2007e4;
 Mon, 27 Jul 2020 06:50:47 +0000 (UTC)
Received: from pps.filterd (m0174681.ppops.net [127.0.0.1])
 by mx0b-0039f301.pphosted.com (8.16.0.42/8.16.0.42) with SMTP id
 06R6oNcB022667; Mon, 27 Jul 2020 06:50:45 GMT
Received: from eur04-he1-obe.outbound.protection.outlook.com
 (mail-he1eur04lp2059.outbound.protection.outlook.com [104.47.13.59])
 by mx0b-0039f301.pphosted.com with ESMTP id 32gc2mu91g-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Mon, 27 Jul 2020 06:50:44 +0000
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Fq+ayKI7yWOVm+THdF+cF6PBRuElT+dt9mY+NNJUkKgLxAa0o6bI33MBemoLDM45wYh+7ug7AMgmdhMnVGOwVqzwVGNPELU+LdUi4ObQ4FIlsUSNGOSYqB7xpnEJLpjrk29c2nrjdr3leuORQn/Zlco78mDAMd5SFUNqipqiIfi3FB2KA+Y1A4wjl/Bmx0ksn5nBLOQjxeAdOvAP1uQoyIdxR4uC1M2EiJ7/oYdoYnBenAEnev++qQmq4GjPJTXbmt4RTb/8MhYpuAr+VKztiWyQGuFxSDKvlK7qXoqUQ7pd0lGUwXDD9lt+NK6BkawTCaKu7gS1hbdllcZlhgJk9A==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; 
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=38v/I1abWphoVhWzS8FPgJ4hNFp2wwqAAb0g6t1rSCc=;
 b=CsQ96xx28V26at1Kt35liAf31JyqoEl0x8kLlT+Mq5gPdHTf3F67519RPWgN8FBcHgmVaeg8sDNm2e9B14qw4T7iYq4n/R82M0XOWfR1MtBWnTxSK9K+pvaMdoAyytXM5WlECmMrBF1sq4O35qCc5kyrGZ2OVJ2RsDnYPNsnkFReTpjhAIqdnHvmK3E1ibIBZ0sAa1GWMb35xH6ClvlvE6i6IXTryw/TyyHCXkDTOkBPzSCTJAvSqTQpFm+80cjZv6lX7s0jBe5kDkc/3gwdciuE1jRK75aStSOzsapHPKvesQa0d5kXbA9QQNjV1TPk3OIsX7QL9UwkbFnm2+qlbQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=epam.com; dmarc=pass action=none header.from=epam.com;
 dkim=pass header.d=epam.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=epam.com; s=selector1; 
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=38v/I1abWphoVhWzS8FPgJ4hNFp2wwqAAb0g6t1rSCc=;
 b=LG8kAk/AHWJwTfDIkFUoahePJh26f6tJzaldzNb7ef3ZMRYeDV3//K3hrRJlHFjBrbeDG+LcxV4M6mJfved+VE22L0Z72eK/+lrXBjBOxTwbHxE9yYLqraACyK0+VQrjcJOo8QEHho56qeOmpfp8Ps0J8nDnMGArKn2IzeMW5Whd6+fbFowE7vU1AfHOu4SYQ6XOOgux4o9XjCby7oZO7NFZAKKUkMu3ss1Yrp+ZYLRWEduUYcmiyvgzQFsbT56iBML7xQjlcBbj3yUSnY6wMdMJ1SU0RZWjsrOvC6gKSkIn8vjGVqKUIoyjr/t5Wgt6vpqHcbRVT6NqIPs/3INdAA==
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com (2603:10a6:20b:153::17)
 by AM4PR0301MB2196.eurprd03.prod.outlook.com (2603:10a6:200:4d::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3216.21; Mon, 27 Jul
 2020 06:50:42 +0000
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::21e5:6d27:5ba0:f508]) by AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::21e5:6d27:5ba0:f508%7]) with mapi id 15.20.3216.033; Mon, 27 Jul 2020
 06:50:41 +0000
From: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 "jgross@suse.com" <jgross@suse.com>, "ian.jackson@eu.citrix.com"
 <ian.jackson@eu.citrix.com>, "wl@xen.org" <wl@xen.org>
Subject: Re: [PATCH v2] xen/displif: Protocol version 2
Thread-Topic: [PATCH v2] xen/displif: Protocol version 2
Thread-Index: AQHWT3f2pfc1d0tAak69A/eTMH1j5qkbJYsA
Date: Mon, 27 Jul 2020 06:50:41 +0000
Message-ID: <7b818ab2-17f2-af5b-cecf-b36c8c01ca68@epam.com>
References: <20200701071923.18883-1-andr2000@gmail.com>
In-Reply-To: <20200701071923.18883-1-andr2000@gmail.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
authentication-results: lists.xenproject.org; dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=none action=none
 header.from=epam.com;
x-originating-ip: [185.199.97.5]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: faebc0e0-1076-41bb-0635-08d831f9612d
x-ms-traffictypediagnostic: AM4PR0301MB2196:
x-microsoft-antispam-prvs: <AM4PR0301MB2196E9537EB6F2A84FA17DCFE7720@AM4PR0301MB2196.eurprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:2657;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info: +IdozC9qZWtXO4HyNQtJedo8uKK3lEXcKvXYz+B31mBJEnnhcwHa9Lh+FSeIZpDh+GwBNN1XYtnGHK4eVmeUecvd5yA8TwAunzBWSPAFlvvxlSV2GRy6I3i0nTdWCe4uU90BxAnBB8P//ia9a8cdiCSTcEMwFZbHe+v74Hg5L+xeZ0SlbOhHVwvpcnkdv/VJfLg6XY3JEJWb641+6Z/3WAcVWPWp0Xg9Bj037Snu8bJ7Bl3k3dSAjFpXcdTBZysG3+a8BphGVCxgkZmBEbfCFDQxIxFwT612a5s91NvXlI+TZqUus7KHyvaJi+ksMIAJok9THYdd+cDgJsiHlzKslw==
x-forefront-antispam-report: CIP:255.255.255.255; CTRY:; LANG:en; SCL:1; SRV:;
 IPV:NLI; SFV:NSPM; H:AM0PR03MB6324.eurprd03.prod.outlook.com; PTR:; CAT:NONE;
 SFTY:;
 SFS:(4636009)(376002)(396003)(136003)(366004)(346002)(39860400002)(86362001)(83380400001)(110136005)(31686004)(54906003)(316002)(478600001)(71200400001)(8676002)(4326008)(36756003)(2906002)(8936002)(2616005)(6506007)(64756008)(66476007)(66556008)(26005)(76116006)(186003)(53546011)(66446008)(31696002)(5660300002)(6512007)(6486002)(66946007);
 DIR:OUT; SFP:1101; 
x-ms-exchange-antispam-messagedata: MA09jytew6vWoGMOAKq3BWEgGVi2iQ0hxJhLcyVs128IVl6Igq/rVi7rZQDVW6ToVMfcvG8DICAH7q6F+fBGn2EDeAA9XUsTCQ3bt2gkDumf/ESvMXpTTjs1hDE8vBYtua8UxEh4x/NyB/Z80lt23PAEVdOzurCrZpypafUpgJ+BSQspTAaXJ8uKoADPd6aJbdOHeUWuRas9toPegK/FG3VNyrywz46EZ3zgAdqtnnphufyFBYQnIxl9HzYNZHkQn5IBfcSpbtOj+X7wdSgo52g1aVIicC3JJ3Q3B1Zk8CtFhwMbydDrHp/Vr7run6aMRc7Xvv513eeDu3X+npsKglbuJU8ymrPbSakG1Tr7JO61an21s4P3mnfp1WM/0vTTgAUfNCkhS0+kkd/JkCiTG0cVVH8oELjfFFvplR6HjZgihyKJRmsqIrOIBOzBUnrDgTEkwQKZEviNy0fI4VW9SfoUVjbcOSgtbMSTZsnVl0E=
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="utf-8"
Content-ID: <CFA4DCAA3AE6034EA344339F2FA64937@eurprd03.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-OriginatorOrg: epam.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: AM0PR03MB6324.eurprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: faebc0e0-1076-41bb-0635-08d831f9612d
X-MS-Exchange-CrossTenant-originalarrivaltime: 27 Jul 2020 06:50:41.8270 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b41b72d0-4e9f-4c26-8a69-f949f367c91d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: MyiFzlR1kx1oXWJ9nP3kTvUWir++PPHCsOKZ1lVtSkT8/OeJ6HJVg8CIyNqjsmuncl/XtFfsa6TlxQ1pl4X6NUbLKD6YCk7ZCDcnE/p9c2fGcs4IcuSjqktLPfX57cve
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM4PR0301MB2196
X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.235, 18.0.687
 definitions=2020-07-27_04:2020-07-27,
 2020-07-27 signatures=0
X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0
 malwarescore=0
 lowpriorityscore=0 mlxlogscore=999 priorityscore=1501 bulkscore=0
 phishscore=0 spamscore=0 suspectscore=0 mlxscore=0 adultscore=0
 impostorscore=0 clxscore=1011 classifier=spam adjust=0 reason=mlx
 scancount=1 engine=8.12.0-2006250000 definitions=main-2007270049
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Oleksandr Andrushchenko <andr2000@gmail.com>, "paul@xen.org" <paul@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Ping

On 7/1/20 10:19 AM, Oleksandr Andrushchenko wrote:
> From: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
>
> 1. Add protocol version as an integer
>
> Version string, which is in fact an integer, is hard to handle in the
> code that supports different protocol versions. To simplify that
> also add the version as an integer.
>
> 2. Pass buffer offset with XENDISPL_OP_DBUF_CREATE
>
> There are cases when display data buffer is created with non-zero
> offset to the data start. Handle such cases and provide that offset
> while creating a display buffer.
>
> 3. Add XENDISPL_OP_GET_EDID command
>
> Add an optional request for reading Extended Display Identification
> Data (EDID) structure which allows better configuration of the
> display connectors over the configuration set in XenStore.
> With this change connectors may have multiple resolutions defined
> with respect to detailed timing definitions and additional properties
> normally provided by displays.
>
> If this request is not supported by the backend then visible area
> is defined by the relevant XenStore's "resolution" property.
>
> If backend provides extended display identification data (EDID) with
> XENDISPL_OP_GET_EDID request then EDID values must take precedence
> over the resolutions defined in XenStore.
>
> 4. Bump protocol version to 2.
>
> Signed-off-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
> ---
>   xen/include/public/io/displif.h | 91 +++++++++++++++++++++++++++++++--
>   1 file changed, 88 insertions(+), 3 deletions(-)
>
> diff --git a/xen/include/public/io/displif.h b/xen/include/public/io/displif.h
> index cc5de9cb1f35..0055895510f7 100644
> --- a/xen/include/public/io/displif.h
> +++ b/xen/include/public/io/displif.h
> @@ -38,7 +38,8 @@
>    *                           Protocol version
>    ******************************************************************************
>    */
> -#define XENDISPL_PROTOCOL_VERSION     "1"
> +#define XENDISPL_PROTOCOL_VERSION     "2"
> +#define XENDISPL_PROTOCOL_VERSION_INT  2
>   
>   /*
>    ******************************************************************************
> @@ -202,6 +203,9 @@
>    *      Width and height of the connector in pixels separated by
>    *      XENDISPL_RESOLUTION_SEPARATOR. This defines visible area of the
>    *      display.
> + *      If backend provides extended display identification data (EDID) with
> + *      XENDISPL_OP_GET_EDID request then EDID values must take precedence
> + *      over the resolutions defined here.
>    *
>    *------------------ Connector Request Transport Parameters ------------------
>    *
> @@ -349,6 +353,8 @@
>   #define XENDISPL_OP_FB_DETACH         0x13
>   #define XENDISPL_OP_SET_CONFIG        0x14
>   #define XENDISPL_OP_PG_FLIP           0x15
> +/* The below command is available in protocol version 2 and above. */
> +#define XENDISPL_OP_GET_EDID          0x16
>   
>   /*
>    ******************************************************************************
> @@ -377,6 +383,10 @@
>   #define XENDISPL_FIELD_BE_ALLOC       "be-alloc"
>   #define XENDISPL_FIELD_UNIQUE_ID      "unique-id"
>   
> +#define XENDISPL_EDID_BLOCK_SIZE      128
> +#define XENDISPL_EDID_BLOCK_COUNT     256
> +#define XENDISPL_EDID_MAX_SIZE        (XENDISPL_EDID_BLOCK_SIZE * XENDISPL_EDID_BLOCK_COUNT)
> +
>   /*
>    ******************************************************************************
>    *                          STATUS RETURN CODES
> @@ -451,7 +461,9 @@
>    * +----------------+----------------+----------------+----------------+
>    * |                           gref_directory                          | 40
>    * +----------------+----------------+----------------+----------------+
> - * |                             reserved                              | 44
> + * |                             data_ofs                              | 44
> + * +----------------+----------------+----------------+----------------+
> + * |                             reserved                              | 48
>    * +----------------+----------------+----------------+----------------+
>    * |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\|
>    * +----------------+----------------+----------------+----------------+
> @@ -494,6 +506,7 @@
>    *   buffer size (buffer_sz) exceeds what can be addressed by this single page,
>    *   then reference to the next page must be supplied (see gref_dir_next_page
>    *   below)
> + * data_ofs - uint32_t, offset of the data in the buffer, octets
>    */
>   
>   #define XENDISPL_DBUF_FLG_REQ_ALLOC       (1 << 0)
> @@ -506,6 +519,7 @@ struct xendispl_dbuf_create_req {
>       uint32_t buffer_sz;
>       uint32_t flags;
>       grant_ref_t gref_directory;
> +    uint32_t data_ofs;
>   };
>   
>   /*
> @@ -731,6 +745,44 @@ struct xendispl_page_flip_req {
>       uint64_t fb_cookie;
>   };
>   
> +/*
> + * Request EDID - request EDID describing current connector:
> + *         0                1                 2               3        octet
> + * +----------------+----------------+----------------+----------------+
> + * |               id                | _OP_GET_EDID   |   reserved     | 4
> + * +----------------+----------------+----------------+----------------+
> + * |                             buffer_sz                             | 8
> + * +----------------+----------------+----------------+----------------+
> + * |                          gref_directory                           | 12
> + * +----------------+----------------+----------------+----------------+
> + * |                             reserved                              | 16
> + * +----------------+----------------+----------------+----------------+
> + * |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\|
> + * +----------------+----------------+----------------+----------------+
> + * |                             reserved                              | 64
> + * +----------------+----------------+----------------+----------------+
> + *
> + * Notes:
> + *   - This command is not available in protocol version 1 and should be
> + *     ignored.
> + *   - This request is optional and if not supported then visible area
> + *     is defined by the relevant XenStore's "resolution" property.
> + *   - Shared buffer, allocated for EDID storage, must not be less then
> + *     XENDISPL_EDID_MAX_SIZE octets.
> + *
> + * buffer_sz - uint32_t, buffer size to be allocated, octets
> + * gref_directory - grant_ref_t, a reference to the first shared page
> + *   describing EDID buffer references. See XENDISPL_OP_DBUF_CREATE for
> + *   grant page directory structure (struct xendispl_page_directory).
> + *
> + * See response format for this request.
> + */
> +
> +struct xendispl_get_edid_req {
> +    uint32_t buffer_sz;
> +    grant_ref_t gref_directory;
> +};
> +
>   /*
>    *---------------------------------- Responses --------------------------------
>    *
> @@ -753,6 +805,35 @@ struct xendispl_page_flip_req {
>    * id - uint16_t, private guest value, echoed from request
>    * status - int32_t, response status, zero on success and -XEN_EXX on failure
>    *
> + *
> + * Get EDID response - response for XENDISPL_OP_GET_EDID:
> + *         0                1                 2               3        octet
> + * +----------------+----------------+----------------+----------------+
> + * |               id                |    operation   |    reserved    | 4
> + * +----------------+----------------+----------------+----------------+
> + * |                              status                               | 8
> + * +----------------+----------------+----------------+----------------+
> + * |                              edid_sz                              | 12
> + * +----------------+----------------+----------------+----------------+
> + * |                             reserved                              | 16
> + * +----------------+----------------+----------------+----------------+
> + * |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\|
> + * +----------------+----------------+----------------+----------------+
> + * |                             reserved                              | 64
> + * +----------------+----------------+----------------+----------------+
> + *
> + * Notes:
> + *   - This response is not available in protocol version 1 and should be
> + *     ignored.
> + *
> + * edid_sz - uint32_t, size of the EDID, octets
> + */
> +
> +struct xendispl_get_edid_resp {
> +    uint32_t edid_sz;
> +};
> +
> +/*
>    *----------------------------------- Events ----------------------------------
>    *
>    * Events are sent via a shared page allocated by the front and propagated by
> @@ -804,6 +885,7 @@ struct xendispl_req {
>           struct xendispl_fb_detach_req fb_detach;
>           struct xendispl_set_config_req set_config;
>           struct xendispl_page_flip_req pg_flip;
> +        struct xendispl_get_edid_req get_edid;
>           uint8_t reserved[56];
>       } op;
>   };
> @@ -813,7 +895,10 @@ struct xendispl_resp {
>       uint8_t operation;
>       uint8_t reserved;
>       int32_t status;
> -    uint8_t reserved1[56];
> +    union {
> +        struct xendispl_get_edid_resp get_edid;
> +        uint8_t reserved1[56];
> +    } op;
>   };
>   
>   struct xendispl_evt {


From xen-devel-bounces@lists.xenproject.org Mon Jul 27 07:04:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jul 2020 07:04:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jzxBG-0000kO-Gh; Mon, 27 Jul 2020 07:04:14 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9AKR=BG=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jzxBF-0000k4-St
 for xen-devel@lists.xenproject.org; Mon, 27 Jul 2020 07:04:13 +0000
X-Inumbo-ID: 5bb6e510-cfd7-11ea-a767-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 5bb6e510-cfd7-11ea-a767-12813bfff9fa;
 Mon, 27 Jul 2020 07:04:05 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=abQXLv4AgdUSAouBn/KQ3ZmFpNoXU2ZOLvbNCoTIeSA=; b=qgxcSuQU/Z9Zo7/vDBtQ3LyXI
 5ZF5spf4rtogYAkr+2XXEv2chOxS/wsB0kNYJTGycbNixZyfSXZQmyWH/R1fYpBxOJ/TPOJL+I+//
 e40rJVQJBTHI/XItqKB3ocN4+X2LqzE7lB2wED/aHpf5VQO7B6OTNgPP4bDiQ3AeXHKTE=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jzxB6-000078-Tb; Mon, 27 Jul 2020 07:04:04 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jzxB6-0007F5-H7; Mon, 27 Jul 2020 07:04:04 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jzxB6-0007N7-GF; Mon, 27 Jul 2020 07:04:04 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-152226-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 152226: regressions - FAIL
X-Osstest-Failures: libvirt:build-i386-libvirt:libvirt-build:fail:regression
 libvirt:build-arm64-libvirt:libvirt-build:fail:regression
 libvirt:build-amd64-libvirt:libvirt-build:fail:regression
 libvirt:build-armhf-libvirt:libvirt-build:fail:regression
 libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
 libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
X-Osstest-Versions-This: libvirt=e2fd95ed45439ee98362adbd4371590b0e11d35c
X-Osstest-Versions-That: libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 27 Jul 2020 07:04:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 152226 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/152226/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              e2fd95ed45439ee98362adbd4371590b0e11d35c
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z   17 days
Failing since        151818  2020-07-11 04:18:52 Z   16 days   17 attempts
Testing same since   152193  2020-07-25 04:18:58 Z    2 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Bastien Orivel <bastien.orivel@diateam.net>
  Bihong Yu <yubihong@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  Ján Tomko <jtomko@redhat.com>
  Laine Stump <laine@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Michal Privoznik <mprivozn@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Prathamesh Chavan <pc44800@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Ryan Schmidt <git@ryandesign.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Weblate <noreply@weblate.org>
  Yi Wang <wang.yi59@zte.com.cn>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 2758 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Jul 27 07:55:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jul 2020 07:55:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jzxyk-0005BH-TW; Mon, 27 Jul 2020 07:55:22 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9AKR=BG=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jzxyk-0005Au-5j
 for xen-devel@lists.xenproject.org; Mon, 27 Jul 2020 07:55:22 +0000
X-Inumbo-ID: 820c4c4e-cfde-11ea-a76c-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 820c4c4e-cfde-11ea-a76c-12813bfff9fa;
 Mon, 27 Jul 2020 07:55:16 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=xAQePxyW5Min/DllGTtyGTi5CIAS3Tn8qbekqe9DltM=; b=gwUQ5i42NuPAuvRDSPbpWAbS2
 ve0fiGTKVywJKuuqSoMlAXm+tj5uO1+yZNPMFfgUl1JVaEgfKQlBCP8/8BdeVXYnZ9jq01cQyLi8q
 Y/qDc3M717XHmxzm8WH25C3QGP7hGlDkvRYZT4CxnrjkzkrZfkUbjy4EdqAZmyNqj7Dr0=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jzxyd-000182-5G; Mon, 27 Jul 2020 07:55:15 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jzxyc-00019c-Ke; Mon, 27 Jul 2020 07:55:14 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jzxyc-0000Am-Jv; Mon, 27 Jul 2020 07:55:14 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-152225-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 152225: all pass - PUSHED
X-Osstest-Versions-This: ovmf=6074f57e5b19c4cfd45a139b793191f34fa31781
X-Osstest-Versions-That: ovmf=8c30327debb28c0b6cfa2106b736774e0b20daac
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 27 Jul 2020 07:55:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 152225 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/152225/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 6074f57e5b19c4cfd45a139b793191f34fa31781
baseline version:
 ovmf                 8c30327debb28c0b6cfa2106b736774e0b20daac

Last test of basis   152194  2020-07-25 06:34:31 Z    2 days
Testing same since   152225  2020-07-27 03:39:54 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ashraf Javeed <ashraf.javeed@intel.com>
  Javeed, Ashraf <ashraf.javeed@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   8c30327deb..6074f57e5b  6074f57e5b19c4cfd45a139b793191f34fa31781 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Mon Jul 27 08:01:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jul 2020 08:01:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jzy4Q-0006a7-0V; Mon, 27 Jul 2020 08:01:14 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=nbSb=BG=redhat.com=david@srs-us1.protection.inumbo.net>)
 id 1jzy4O-0006a2-Dh
 for xen-devel@lists.xenproject.org; Mon, 27 Jul 2020 08:01:12 +0000
X-Inumbo-ID: 55e2cd86-cfdf-11ea-8a7c-bc764e2007e4
Received: from us-smtp-delivery-1.mimecast.com (unknown [207.211.31.120])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 55e2cd86-cfdf-11ea-8a7c-bc764e2007e4;
 Mon, 27 Jul 2020 08:01:11 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
 s=mimecast20190719; t=1595836871;
 h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
 content-transfer-encoding:content-transfer-encoding:
 in-reply-to:in-reply-to:references:references:autocrypt:autocrypt;
 bh=kFPPskvXLtA3OM8boP9HcoOBUPstBcsqpRElLohYglg=;
 b=b2a2FMPdOGRlnUXAP19PMO9Z0WnHkKxhVYosuQ6v5tK/sVYSDgEU243dVzf+2TmQpACDQy
 JgrVlZxPeGe0UFOuszOYd4wq8q9kyTW5cZb9b+8LYLFyMPvhTYn6P2335Qrus4wkbUgV9R
 blHN2tkJtjFu8WXM2/3usHrywCboKN8=
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-233-TfzIlOpdPF6myWxB3Q8wdQ-1; Mon, 27 Jul 2020 04:01:00 -0400
X-MC-Unique: TfzIlOpdPF6myWxB3Q8wdQ-1
Received: from smtp.corp.redhat.com (int-mx05.intmail.prod.int.phx2.redhat.com
 [10.5.11.15])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 782F41DE2;
 Mon, 27 Jul 2020 08:00:57 +0000 (UTC)
Received: from [10.36.114.48] (ovpn-114-48.ams2.redhat.com [10.36.114.48])
 by smtp.corp.redhat.com (Postfix) with ESMTP id C535072691;
 Mon, 27 Jul 2020 08:00:46 +0000 (UTC)
Subject: Re: [PATCH v2 4/4] xen: add helpers to allocate unpopulated memory
To: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Roger Pau Monne <roger.pau@citrix.com>, linux-kernel@vger.kernel.org
References: <20200724124241.48208-1-roger.pau@citrix.com>
 <20200724124241.48208-5-roger.pau@citrix.com>
 <1778c97f-3a69-8280-141c-879814dd213f@redhat.com>
 <1fd1d29e-5c10-0c29-0628-b79807f81de6@oracle.com>
From: David Hildenbrand <david@redhat.com>
Organization: Red Hat GmbH
Message-ID: <6bd01b60-2625-c46e-f9ff-95247700a8cc@redhat.com>
Date: Mon, 27 Jul 2020 10:00:44 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.9.0
MIME-Version: 1.0
In-Reply-To: <1fd1d29e-5c10-0c29-0628-b79807f81de6@oracle.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.15
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>,
 David Airlie <airlied@linux.ie>, Yan Yankovskyi <yyankovskyi@gmail.com>,
 dri-devel@lists.freedesktop.org, Michal Hocko <mhocko@kernel.org>,
 linux-mm@kvack.org, Daniel Vetter <daniel@ffwll.ch>,
 xen-devel@lists.xenproject.org, Dan Williams <dan.j.williams@intel.com>,
 Dan Carpenter <dan.carpenter@oracle.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 24.07.20 18:36, Boris Ostrovsky wrote:
> On 7/24/20 10:34 AM, David Hildenbrand wrote:
>> CCing Dan
>>
>> On 24.07.20 14:42, Roger Pau Monne wrote:
>>> diff --git a/drivers/xen/unpopulated-alloc.c b/drivers/xen/unpopulated-alloc.c
>>> new file mode 100644
>>> index 000000000000..aaa91cefbbf9
>>> --- /dev/null
>>> +++ b/drivers/xen/unpopulated-alloc.c
>>> @@ -0,0 +1,222 @@
> 
> 
> 
>>> + */
>>> +
>>> +#include <linux/errno.h>
>>> +#include <linux/gfp.h>
>>> +#include <linux/kernel.h>
>>> +#include <linux/mm.h>
>>> +#include <linux/memremap.h>
>>> +#include <linux/slab.h>
>>> +
>>> +#include <asm/page.h>
>>> +
>>> +#include <xen/page.h>
>>> +#include <xen/xen.h>
>>> +
>>> +static DEFINE_MUTEX(lock);
>>> +static LIST_HEAD(list);
>>> +static unsigned int count;
>>> +
>>> +static int fill(unsigned int nr_pages)
> 
> 
> Less generic names? How about  list_lock, pg_list, pg_count,
> fill_pglist()? (But these are bad too, so maybe you can come up with
> something better)
> 
> 
>>> +{
>>> +	struct dev_pagemap *pgmap;
>>> +	void *vaddr;
>>> +	unsigned int i, alloc_pages = round_up(nr_pages, PAGES_PER_SECTION);
>>> +	int nid, ret;
>>> +
>>> +	pgmap = kzalloc(sizeof(*pgmap), GFP_KERNEL);
>>> +	if (!pgmap)
>>> +		return -ENOMEM;
>>> +
>>> +	pgmap->type = MEMORY_DEVICE_DEVDAX;
>>> +	pgmap->res.name = "XEN SCRATCH";
> 
> 
> Typically iomem resources only capitalize first letters.
> 
> 
>>> +	pgmap->res.flags = IORESOURCE_MEM | IORESOURCE_BUSY;
>>> +
>>> +	ret = allocate_resource(&iomem_resource, &pgmap->res,
>>> +				alloc_pages * PAGE_SIZE, 0, -1,
>>> +				PAGES_PER_SECTION * PAGE_SIZE, NULL, NULL);
> 
> 
> Are we not going to end up with a whole bunch of "Xen scratch" resource
> ranges for each miss in the page list? Or do we expect them to get merged?
> 

AFAIK, no resources will get merged (and in the general case it's not
safe to do so). The old approach (add_memory_resource()) ends up in the
same situation ("Xen Scratch" vs. "System RAM"): one new resource per
added memory block/section.

FWIW, I am looking into merging selected resources in the context of
virtio-mem _after_ adding succeeded (not directly when adding the
resource to the tree). Interface might look something like

void merge_child_mem_resources(struct resource *parent, const char *name);

So I can, for example, trigger merging of all "System RAM (virtio_mem)"
resources that are located under a device node (e.g., "virtio0").

I also thought about tagging each mergeable resource via something like
"IORESOURCE_MERGEABLE" - whereby the user agrees not to hold any
pointers to such a resource. But I don't yet see a compelling reason
to sacrifice space for a new flag.

So with this in place, once adding has succeeded this code could call

merge_child_mem_resources(&iomem_resource, "Xen Scratch");

-- 
Thanks,

David / dhildenb



From xen-devel-bounces@lists.xenproject.org Mon Jul 27 08:41:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jul 2020 08:41:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jzyh3-0001X2-91; Mon, 27 Jul 2020 08:41:09 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=s7zT=BG=arm.com=rahul.singh@srs-us1.protection.inumbo.net>)
 id 1jzyh1-0001Wx-NX
 for xen-devel@lists.xenproject.org; Mon, 27 Jul 2020 08:41:07 +0000
X-Inumbo-ID: e7b2d1c0-cfe4-11ea-a774-12813bfff9fa
Received: from EUR04-VI1-obe.outbound.protection.outlook.com (unknown
 [40.107.8.83]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e7b2d1c0-cfe4-11ea-a774-12813bfff9fa;
 Mon, 27 Jul 2020 08:41:04 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com; 
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ME4pLc+Ysu5/OH8Oha0+sOISh6/ilN3IU5jCigmE1Pc=;
 b=NGG71uNg2R2x9MtOICQplFeo9ceL0fbtIpOisrqPpiXZeUC5a/PdQ3L6zhOff2TZ86/h04fNFwnDm+HOiXCnhpunT4F/qajjdHwGZX9z647aJPfampuMoMbfiV5FGLHvOzlosrymuvY9ZqEgD5aDmzK5338jhKOZ/AfS8zbnMts=
Received: from AM5PR0502CA0014.eurprd05.prod.outlook.com
 (2603:10a6:203:91::24) by DB6PR0801MB1943.eurprd08.prod.outlook.com
 (2603:10a6:4:74::20) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3216.24; Mon, 27 Jul
 2020 08:41:01 +0000
Received: from VE1EUR03FT063.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:203:91:cafe::90) by AM5PR0502CA0014.outlook.office365.com
 (2603:10a6:203:91::24) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3216.21 via Frontend
 Transport; Mon, 27 Jul 2020 08:41:01 +0000
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org;
 dmarc=bestguesspass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT063.mail.protection.outlook.com (10.152.18.236) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3216.10 via Frontend Transport; Mon, 27 Jul 2020 08:41:01 +0000
Received: ("Tessian outbound 1dc58800d5dd:v62");
 Mon, 27 Jul 2020 08:41:00 +0000
X-CheckRecipientChecked: true
X-CR-MTA-CID: 415a75d793a330cd
X-CR-MTA-TID: 64aa7808
Received: from 0e83c39b836b.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 9B36A31B-E6D7-491F-B966-9916AC6FCD64.1; 
 Mon, 27 Jul 2020 08:40:55 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 0e83c39b836b.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Mon, 27 Jul 2020 08:40:55 +0000
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
Received: from AM6PR08MB3560.eurprd08.prod.outlook.com (2603:10a6:20b:4c::19)
 by AM6PR08MB4501.eurprd08.prod.outlook.com (2603:10a6:20b:b5::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3216.23; Mon, 27 Jul
 2020 08:40:53 +0000
Received: from AM6PR08MB3560.eurprd08.prod.outlook.com
 ([fe80::e891:3b33:766:cad5]) by AM6PR08MB3560.eurprd08.prod.outlook.com
 ([fe80::e891:3b33:766:cad5%6]) with mapi id 15.20.3216.033; Mon, 27 Jul 2020
 08:40:53 +0000
From: Rahul Singh <Rahul.Singh@arm.com>
To: Stefano Stabellini <sstabellini@kernel.org>
Subject: Re: [RFC PATCH v1 2/4] xen/arm: Discovering PCI devices and add the
 PCI devices in XEN.
Thread-Topic: [RFC PATCH v1 2/4] xen/arm: Discovering PCI devices and add the
 PCI devices in XEN.
Thread-Index: AQHWYPbP4iMQjZDQ906szkp+TjbuPqkVohKAgAV/RoA=
Date: Mon, 27 Jul 2020 08:40:53 +0000
Message-ID: <AB1FC4E2-288F-4A1F-87BC-B24E552301F8@arm.com>
References: <cover.1595511416.git.rahul.singh@arm.com>
 <666df0147054dda8af13ae74a89be44c69984295.1595511416.git.rahul.singh@arm.com>
 <alpine.DEB.2.21.2007231337140.17562@sstabellini-ThinkPad-T480s>
In-Reply-To: <alpine.DEB.2.21.2007231337140.17562@sstabellini-ThinkPad-T480s>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
Authentication-Results-Original: kernel.org; dkim=none (message not signed)
 header.d=none;kernel.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [86.26.38.125]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: c3b26bc4-33f8-4fb7-5e69-08d83208caa1
x-ms-traffictypediagnostic: AM6PR08MB4501:|DB6PR0801MB1943:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS: <DB6PR0801MB19434E88D37E4E0AF46DEE97FC720@DB6PR0801MB1943.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:9508;OLM:9508;
X-MS-Exchange-SenderADCheck: 1
Content-Type: text/plain; charset="us-ascii"
Content-ID: <BC3314D6D3EC324DA519EF59C6A6E5E9@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM6PR08MB4501
Original-Authentication-Results: kernel.org; dkim=none (message not signed)
 header.d=none;kernel.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped: VE1EUR03FT063.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs: 479dd7ad-4671-482d-2110-08d83208c5f9
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 27 Jul 2020 08:41:01.2535 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: c3b26bc4-33f8-4fb7-5e69-08d83208caa1
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d; Ip=[63.35.35.123];
 Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource: VE1EUR03FT063.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB6PR0801MB1943
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Julien Grall <julien@xen.org>,
 "andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>,
 "jbeulich@suse.com" <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 nd <nd@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 "roger.pau@citrix.com" <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Sorry for the late reply.

> On 23 Jul 2020, at 9:44 pm, Stefano Stabellini <sstabellini@kernel.org> wrote:
>
> On Thu, 23 Jul 2020, Rahul Singh wrote:
>> Hardware domain is in charge of doing the PCI enumeration and will
>> discover the PCI devices and then will communicate to XEN via hyper
>> call PHYSDEVOP_pci_device_add to add the PCI devices in XEN.
>>
>> Change-Id: Ie87e19741689503b4b62da911c8dc2ee318584ac
>
> Same question about Change-Id

I think the Gerrit Change-Id was added to the patch series by mistake; I
will remove it in the next version of the patch.
>
>
>> Signed-off-by: Rahul Singh <rahul.singh@arm.com>
>> ---
>> xen/arch/arm/physdev.c | 42 +++++++++++++++++++++++++++++++++++++++---
>> 1 file changed, 39 insertions(+), 3 deletions(-)
>>
>> diff --git a/xen/arch/arm/physdev.c b/xen/arch/arm/physdev.c
>> index e91355fe22..274720f98a 100644
>> --- a/xen/arch/arm/physdev.c
>> +++ b/xen/arch/arm/physdev.c
>> @@ -9,12 +9,48 @@
>> #include <xen/errno.h>
>> #include <xen/sched.h>
>> #include <asm/hypercall.h>
>> -
>> +#include <xen/guest_access.h>
>> +#include <xsm/xsm.h>
>>
>> int do_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
>> {
>> -    gdprintk(XENLOG_DEBUG, "PHYSDEVOP cmd=%d: not implemented\n", cmd);
>> -    return -ENOSYS;
>> +    int ret = 0;
>> +
>> +    switch ( cmd )
>> +    {
>> +#ifdef CONFIG_HAS_PCI
>> +        case PHYSDEVOP_pci_device_add:
>> +            {
>> +                struct physdev_pci_device_add add;
>> +                struct pci_dev_info pdev_info;
>> +                nodeid_t node = NUMA_NO_NODE;
>> +
>> +                ret = -EFAULT;
>> +                if ( copy_from_guest(&add, arg, 1) != 0 )
>> +                    break;
>> +
>> +                pdev_info.is_extfn = !!(add.flags & XEN_PCI_DEV_EXTFN);
>> +                if ( add.flags & XEN_PCI_DEV_VIRTFN )
>> +                {
>> +                    pdev_info.is_virtfn = 1;
>> +                    pdev_info.physfn.bus = add.physfn.bus;
>> +                    pdev_info.physfn.devfn = add.physfn.devfn;
>> +                }
>> +                else
>> +                    pdev_info.is_virtfn = 0;
>> +
>> +                ret = pci_add_device(add.seg, add.bus, add.devfn,
>> +                                &pdev_info, node);
>> +
>> +                break;
>> +            }
>> +#endif
>> +        default:
>> +            gdprintk(XENLOG_DEBUG, "PHYSDEVOP cmd=%d: not implemented\n", cmd);
>> +            ret = -ENOSYS;
>> +    }
>
> I think we should make the implementation common between arm and x86 by
> creating xen/common/physdev.c:do_physdev_op as a shared entry point for
> PHYSDEVOP hypercalls implementations. See for instance:
>
> xen/common/sysctl.c:do_sysctl
>
> and
>
> xen/arch/arm/sysctl.c:arch_do_sysctl
> xen/arch/x86/sysctl.c:arch_do_sysctl
>

Ok, sure. I will check if we can create a common entry point for Arm and
x86 for do_physdev_op().

> Jan, Andrew, Roger, any opinions?



From xen-devel-bounces@lists.xenproject.org Mon Jul 27 08:42:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jul 2020 08:42:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jzyiV-0001cc-OG; Mon, 27 Jul 2020 08:42:39 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1WzZ=BG=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jzyiU-0001cV-80
 for xen-devel@lists.xenproject.org; Mon, 27 Jul 2020 08:42:38 +0000
X-Inumbo-ID: 1ef7292e-cfe5-11ea-a774-12813bfff9fa
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1ef7292e-cfe5-11ea-a774-12813bfff9fa;
 Mon, 27 Jul 2020 08:42:37 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595839357;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:content-transfer-encoding:in-reply-to;
 bh=2o9INdDVMeRMpvPPdJXUmq8eiLdRA5WJoeGCeiRGYrU=;
 b=F73vK9BpepB+vPl+d+c78bY95pYqIU0GVLtohWw9kd+zjMGX069bJmnQ
 Tj1gfvPmyVUcTQSTAGykOb3Ic7D05uNr9RgV088pGbCuZ5SItOVLut+wm
 36ktWWWOeHylGtp9Ij9Vy5lRvzK69PN1JMfHxiR+Ae39mBOpufYtNuBao k=;
Authentication-Results: esa2.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: Yca7UCFJpaeOVZ9HKKhOTpqfBIiKAriZvcgYQaPo0HE6fOZZvkR2AsF9VPPYu+epj83BLXOj3k
 gl4P3yNaDa+q84isI/NKLZRu5XiC5bGaEOEF3FTgoj+iZcb/ix9957FLQMorJuzTQAIMY4mlrr
 +Np4yzWrMWMuXFxxLOYqckYRLVib17RgIt6eFK+1pbXrqnh8A0gbggdDiPM/afSmih9XF5fKrg
 wxNMorQafr287qc0Qw8RptqovfS5+FeO5tPZwpnc2Guobc2FdC+j1QTLSlRwnKGOD1mALt5D0y
 MwQ=
X-SBRS: 2.7
X-MesageID: 23253209
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,402,1589256000"; d="scan'208";a="23253209"
Date: Mon, 27 Jul 2020 10:42:16 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Subject: Re: [PATCH v2 4/4] xen: add helpers to allocate unpopulated memory
Message-ID: <20200727084216.GO7191@Air-de-Roger>
References: <20200724124241.48208-1-roger.pau@citrix.com>
 <20200724124241.48208-5-roger.pau@citrix.com>
 <1778c97f-3a69-8280-141c-879814dd213f@redhat.com>
 <1fd1d29e-5c10-0c29-0628-b79807f81de6@oracle.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <1fd1d29e-5c10-0c29-0628-b79807f81de6@oracle.com>
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>,
 David Airlie <airlied@linux.ie>, Yan
 Yankovskyi <yyankovskyi@gmail.com>, David Hildenbrand <david@redhat.com>,
 linux-kernel@vger.kernel.org, dri-devel@lists.freedesktop.org,
 Michal Hocko <mhocko@kernel.org>, linux-mm@kvack.org,
 Daniel Vetter <daniel@ffwll.ch>, xen-devel@lists.xenproject.org,
 Dan Williams <dan.j.williams@intel.com>,
 Dan Carpenter <dan.carpenter@oracle.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Fri, Jul 24, 2020 at 12:36:33PM -0400, Boris Ostrovsky wrote:
> On 7/24/20 10:34 AM, David Hildenbrand wrote:
> > CCing Dan
> >
> > On 24.07.20 14:42, Roger Pau Monne wrote:
> >> +
> >> +#include <linux/errno.h>
> >> +#include <linux/gfp.h>
> >> +#include <linux/kernel.h>
> >> +#include <linux/mm.h>
> >> +#include <linux/memremap.h>
> >> +#include <linux/slab.h>
> >> +
> >> +#include <asm/page.h>
> >> +
> >> +#include <xen/page.h>
> >> +#include <xen/xen.h>
> >> +
> >> +static DEFINE_MUTEX(lock);
> >> +static LIST_HEAD(list);
> >> +static unsigned int count;
> >> +
> >> +static int fill(unsigned int nr_pages)
> 
> 
> Less generic names? How about  list_lock, pg_list, pg_count,
> fill_pglist()? (But these are bad too, so maybe you can come up with
> something better)

OK, I have to admit I like using such short names when the code allows
it; for example, this code is so simple that it didn't seem to warrant
using longer names. Will rename in the next version.

> >> +{
> >> +	struct dev_pagemap *pgmap;
> >> +	void *vaddr;
> >> +	unsigned int i, alloc_pages = round_up(nr_pages, PAGES_PER_SECTION);
> >> +	int nid, ret;
> >> +
> >> +	pgmap = kzalloc(sizeof(*pgmap), GFP_KERNEL);
> >> +	if (!pgmap)
> >> +		return -ENOMEM;
> >> +
> >> +	pgmap->type = MEMORY_DEVICE_DEVDAX;
> >> +	pgmap->res.name = "XEN SCRATCH";
> 
> 
> Typically iomem resources only capitalize first letters.
> 
> 
> >> +	pgmap->res.flags = IORESOURCE_MEM | IORESOURCE_BUSY;
> >> +
> >> +	ret = allocate_resource(&iomem_resource, &pgmap->res,
> >> +				alloc_pages * PAGE_SIZE, 0, -1,
> >> +				PAGES_PER_SECTION * PAGE_SIZE, NULL, NULL);
> 
> 
> Are we not going to end up with a whole bunch of "Xen scratch" resource
> ranges for each miss in the page list? Or do we expect them to get merged?

PAGES_PER_SECTION is IMO big enough to prevent ending up with a lot of
separate ranges. I think the section size is 32 or 64MiB on x86, so
while we are likely to end up with more than one resource added, the
number shouldn't be massive.
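For reference, the rounding done by fill() can be sketched in plain C (a userspace model; the PAGES_PER_SECTION value below is illustrative only, the real one depends on arch and config):

```c
#include <assert.h>

/* Userspace model of the kernel's round_up() for power-of-two alignment. */
#define ROUND_UP(x, a) (((x) + (a) - 1) & ~((a) - 1))

/* Illustrative value only: the real PAGES_PER_SECTION is arch/config
 * dependent. */
#define PAGES_PER_SECTION 32768u

static unsigned int fill_alloc_pages(unsigned int nr_pages)
{
    /* Every fill request is rounded up to a whole section, so small
     * requests still add one contiguous resource range rather than
     * many tiny ones. */
    return ROUND_UP(nr_pages, PAGES_PER_SECTION);
}
```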

> 
> >> +	if (ret < 0) {
> >> +		pr_err("Cannot allocate new IOMEM resource\n");
> >> +		kfree(pgmap);
> >> +		return ret;
> >> +	}
> >> +
> >> +	nid = memory_add_physaddr_to_nid(pgmap->res.start);
> 
> 
> Should we consider page range crossing node boundaries?

I'm not sure whether this is possible (I would think allocate_resource
should return a range from a single node), but handling it would
greatly complicate the memremap_pages logic, as we would have to split
the region into multiple dev_pagemap structs.

FWIW the current code in the balloon driver does exactly the same
(which doesn't mean it's correct, but that's where I got the logic
from).

> >> +
> >> +#ifdef CONFIG_XEN_HAVE_PVMMU
> >> +	/*
> >> +	 * We don't support PV MMU when Linux and Xen is using
> >> +	 * different page granularity.
> >> +	 */
> >> +	BUILD_BUG_ON(XEN_PAGE_SIZE != PAGE_SIZE);
> >> +
> >> +        /*
> >> +         * memremap will build page tables for the new memory so
> >> +         * the p2m must contain invalid entries so the correct
> >> +         * non-present PTEs will be written.
> >> +         *
> >> +         * If a failure occurs, the original (identity) p2m entries
> >> +         * are not restored since this region is now known not to
> >> +         * conflict with any devices.
> >> +         */
> >> +	if (!xen_feature(XENFEAT_auto_translated_physmap)) {
> >> +		xen_pfn_t pfn = PFN_DOWN(pgmap->res.start);
> >> +
> >> +		for (i = 0; i < alloc_pages; i++) {
> >> +			if (!set_phys_to_machine(pfn + i, INVALID_P2M_ENTRY)) {
> >> +				pr_warn("set_phys_to_machine() failed, no memory added\n");
> >> +				release_resource(&pgmap->res);
> >> +				kfree(pgmap);
> >> +				return -ENOMEM;
> >> +			}
> >> +                }
> >> +	}
> >> +#endif
> >> +
> >> +	vaddr = memremap_pages(pgmap, nid);
> >> +	if (IS_ERR(vaddr)) {
> >> +		pr_err("Cannot remap memory range\n");
> >> +		release_resource(&pgmap->res);
> >> +		kfree(pgmap);
> >> +		return PTR_ERR(vaddr);
> >> +	}
> >> +
> >> +	for (i = 0; i < alloc_pages; i++) {
> >> +		struct page *pg = virt_to_page(vaddr + PAGE_SIZE * i);
> >> +
> >> +		BUG_ON(!virt_addr_valid(vaddr + PAGE_SIZE * i));
> >> +		list_add(&pg->lru, &list);
> >> +		count++;
> >> +	}
> >> +
> >> +	return 0;
> >> +}
> >> +
> >> +/**
> >> + * xen_alloc_unpopulated_pages - alloc unpopulated pages
> >> + * @nr_pages: Number of pages
> >> + * @pages: pages returned
> >> + * @return 0 on success, error otherwise
> >> + */
> >> +int xen_alloc_unpopulated_pages(unsigned int nr_pages, struct page **pages)
> >> +{
> >> +	unsigned int i;
> >> +	int ret = 0;
> >> +
> >> +	mutex_lock(&lock);
> >> +	if (count < nr_pages) {
> >> +		ret = fill(nr_pages);
> 
> 
> (nr_pages - count) ?

Yup, already fixed, as Juergen also pointed out.
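For the record, the intended "(nr_pages - count)" logic can be modelled in plain userspace C (the names only mimic the patch; this is a sketch of the accounting, not the driver code, and fill() here is a stub that always succeeds):

```c
#include <assert.h>

/* Pages currently sitting in the cached list. */
static unsigned int count;

/* Stand-in for fill(): pretend every request succeeds and just bumps
 * the cached-page count. */
static int fill(unsigned int nr_pages)
{
    count += nr_pages;
    return 0;
}

/* Model of xen_alloc_unpopulated_pages(): top up only the shortfall
 * instead of requesting nr_pages on top of what is already cached. */
static int alloc_unpopulated(unsigned int nr_pages)
{
    int ret = 0;

    if (count < nr_pages)
        ret = fill(nr_pages - count);   /* the "(nr_pages - count)" fix */
    if (ret)
        return ret;

    count -= nr_pages;                  /* hand the pages out */
    return 0;
}
```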

> >> +
> >> +#ifdef CONFIG_XEN_PV
> >> +static int __init init(void)
> >> +{
> >> +	unsigned int i;
> >> +
> >> +	if (!xen_domain())
> >> +		return -ENODEV;
> >> +
> >> +	/*
> >> +	 * Initialize with pages from the extra memory regions (see
> >> +	 * arch/x86/xen/setup.c).
> >> +	 */
> 
> 
> This loop will only execute for PV guests, so we can just bail out
> early for non-PV guests here.

Sure.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Mon Jul 27 09:09:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jul 2020 09:09:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jzz8e-0003Xv-W1; Mon, 27 Jul 2020 09:09:40 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wM/5=BG=xen.org=hx242@srs-us1.protection.inumbo.net>)
 id 1jzz8d-0003Xq-Lu
 for xen-devel@lists.xenproject.org; Mon, 27 Jul 2020 09:09:39 +0000
X-Inumbo-ID: e5fa0d86-cfe8-11ea-a776-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e5fa0d86-cfe8-11ea-a776-12813bfff9fa;
 Mon, 27 Jul 2020 09:09:38 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Mime-Version:Content-Type:
 References:In-Reply-To:Date:Cc:To:From:Subject:Message-ID:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=kCuh2KFv5piuBc1L+35MIJLc0EbntaZ4unEdO6wadUs=; b=QB13JIGKx7FKaQH7I1nu7ATXzP
 ftlrz2VnnSn5LBDXWbTzAzcLvyYmNUkCbb3/W4Wcf7LX8GHl3RbgsQWX8aH929sC9bGadWrohIwsV
 QtuV+aC4J47jAcPfIo+orCI6SbXvzAV7i4LDDw9/RPZrsNcm/Ly2s6jEKMCWbXAy+QxM=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <hx242@xen.org>)
 id 1jzz8a-0003B8-Vm; Mon, 27 Jul 2020 09:09:36 +0000
Received: from 54-240-197-233.amazon.com ([54.240.197.233]
 helo=edge-cache-102.e-fra50.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <hx242@xen.org>)
 id 1jzz8a-0001mB-If; Mon, 27 Jul 2020 09:09:36 +0000
Message-ID: <bbd18a2f7d86d451f529292c627616044955a84c.camel@xen.org>
Subject: Re: [PATCH v7 03/15] x86/mm: rewrite virt_to_xen_l*e
From: Hongyan Xia <hx242@xen.org>
To: Jan Beulich <jbeulich@suse.com>
Date: Mon, 27 Jul 2020 10:09:33 +0100
In-Reply-To: <826d5a28-c391-dd30-d588-6f730b454c18@suse.com>
References: <cover.1590750232.git.hongyxia@amazon.com>
 <fd5d98198d9539b232a570a83e7a24be2407e739.1590750232.git.hongyxia@amazon.com>
 <826d5a28-c391-dd30-d588-6f730b454c18@suse.com>
Content-Type: text/plain; charset="UTF-8"
X-Mailer: Evolution 3.28.5-0ubuntu0.18.04.2 
Mime-Version: 1.0
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, julien@xen.org,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, xen-devel@lists.xenproject.org,
 Roger Pau =?ISO-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Tue, 2020-07-14 at 12:47 +0200, Jan Beulich wrote:
> On 29.05.2020 13:11, Hongyan Xia wrote:
> > From: Wei Liu <wei.liu2@citrix.com>
> > 
> > Rewrite those functions to use the new APIs. Modify their callers to
> > unmap the pointer returned. Since alloc_xen_pagetable_new() is almost
> > never useful unless accompanied by page clearing and a mapping,
> > introduce a helper alloc_map_clear_xen_pt() for this sequence.
> > 
> > Note that the change of virt_to_xen_l1e() also requires vmap_to_mfn()
> > to unmap the page, which requires the domain_page.h header in vmap.
> > 
> > Signed-off-by: Wei Liu <wei.liu2@citrix.com>
> > Signed-off-by: Hongyan Xia <hongyxia@amazon.com>
> 
> Reviewed-by: Jan Beulich <jbeulich@suse.com>
> with two further small adjustments:
> 
> > --- a/xen/arch/x86/mm.c
> > +++ b/xen/arch/x86/mm.c
> > @@ -4948,8 +4948,28 @@ void free_xen_pagetable_new(mfn_t mfn)
> >          free_xenheap_page(mfn_to_virt(mfn_x(mfn)));
> >  }
> >  
> > +void *alloc_map_clear_xen_pt(mfn_t *pmfn)
> > +{
> > +    mfn_t mfn = alloc_xen_pagetable_new();
> > +    void *ret;
> > +
> > +    if ( mfn_eq(mfn, INVALID_MFN) )
> > +        return NULL;
> > +
> > +    if ( pmfn )
> > +        *pmfn = mfn;
> > +    ret = map_domain_page(mfn);
> > +    clear_page(ret);
> > +
> > +    return ret;
> > +}
> > +
> >  static DEFINE_SPINLOCK(map_pgdir_lock);
> >  
> > +/*
> > + * For virt_to_xen_lXe() functions, they take a virtual address and
> > + * return a pointer to Xen's LX entry. Caller needs to unmap the pointer.
> > + */
> >  static l3_pgentry_t *virt_to_xen_l3e(unsigned long v)
> 
> May I suggest s/virtual/linear/ to at least make the new comment
> correct?
> 
> > --- a/xen/include/asm-x86/page.h
> > +++ b/xen/include/asm-x86/page.h
> > @@ -291,7 +291,13 @@ void copy_page_sse2(void *, const void *);
> >  #define pfn_to_paddr(pfn)   __pfn_to_paddr(pfn)
> >  #define paddr_to_pfn(pa)    __paddr_to_pfn(pa)
> >  #define paddr_to_pdx(pa)    pfn_to_pdx(paddr_to_pfn(pa))
> > -#define vmap_to_mfn(va)     _mfn(l1e_get_pfn(*virt_to_xen_l1e((unsigned long)(va))))
> > +
> > +#define vmap_to_mfn(va) ({                                              \
> > +        const l1_pgentry_t *pl1e_ = virt_to_xen_l1e((unsigned long)(va)); \
> > +        mfn_t mfn_ = l1e_get_mfn(*pl1e_);                               \
> > +        unmap_domain_page(pl1e_);                                       \
> > +        mfn_; })
> 
> Just like is already the case in domain_page_map_to_mfn() I think
> you want to add "BUG_ON(!pl1e)" here to limit the impact of any
> problem to DoS (rather than a possible privilege escalation).
> 
> Or actually, considering the only case where virt_to_xen_l1e()
> would return NULL, returning INVALID_MFN here would likely be
> even more robust. There looks to be just a single caller, which
> would need adjusting to cope with an error coming back. In fact -
> it already ASSERT()s, despite NULL right now never coming back
> from vmap_to_page(). I think the loop there would better be
> 
>     for ( i = 0; i < pages; i++ )
>     {
>         struct page_info *page = vmap_to_page(va + i * PAGE_SIZE);
> 
>         if ( page )
>             page_list_add(page, &pg_list);
>         else
>             printk_once(...);
>     }
> 
> Thoughts?

To be honest, I think the current implementation of vmap_to_mfn() is
just incorrect. There is simply no guarantee that a vmap is mapped with
small pages, so IMO we just cannot use virt_to_xen_l1e() here. The
correct way is to have a generic page table walking function which
walks from the base, can stop at any level, and properly returns a code
indicating the level reached or any error.
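A minimal userspace sketch of such a level-aware walker (a toy two-level table with made-up entry flags, just to illustrate returning the level at which the walk stops; Xen's real tables have four levels and hardware entry formats):

```c
#include <assert.h>
#include <stddef.h>

#define ENTRIES  4
#define FLAG_P   1u  /* present */
#define FLAG_PSE 2u  /* superpage: the walk must stop at this level */

/* Toy page-table entry: a flags word plus a pointer to the next level. */
struct pte { unsigned int flags; void *next; };

/* Walk from the top table; return the level (2 = top, 1 = leaf) at
 * which the walk stopped on a mapping, or 0 if a non-present entry
 * was hit. */
static int walk(struct pte *l2, unsigned int i2, unsigned int i1)
{
    struct pte *e2 = &l2[i2];

    if (!(e2->flags & FLAG_P))
        return 0;
    if (e2->flags & FLAG_PSE)
        return 2;               /* superpage mapping: stop early */

    struct pte *l1 = e2->next;

    if (!(l1[i1].flags & FLAG_P))
        return 0;
    return 1;                   /* small-page mapping at the leaf */
}
```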

I am inclined to BUG_ON() here, and to upstream a proper fix to
vmap_to_mfn() later as an individual patch.

Am I missing anything here?

Hongyan



From xen-devel-bounces@lists.xenproject.org Mon Jul 27 09:15:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jul 2020 09:15:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jzzDx-0004OX-Se; Mon, 27 Jul 2020 09:15:09 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1WzZ=BG=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jzzDx-0004OD-H3
 for xen-devel@lists.xenproject.org; Mon, 27 Jul 2020 09:15:09 +0000
X-Inumbo-ID: a8040e54-cfe9-11ea-8a7e-bc764e2007e4
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a8040e54-cfe9-11ea-8a7e-bc764e2007e4;
 Mon, 27 Jul 2020 09:15:04 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595841304;
 h=from:to:cc:subject:date:message-id:in-reply-to:
 references:mime-version:content-transfer-encoding;
 bh=yXg+mOKMXOlPk9hYGg2gJuPP2ADAccszlFJ6TOt/+Ug=;
 b=IfxfXrJ/joTh9ku3J7FuyT0KyFAAzt4bHsRv/vFVYYBTRqQSPcbmj/IM
 v4cpmsGTUy+AKQexRzegb1s3RO+XY+BnQ4REx8EQwJsZKZiwvTtc6QgTX
 4086FOVZ5PmNZxHJm4z4pidOJ84alp/+YAVPOAUPXKur+cJu/AUNYBt3k A=;
Authentication-Results: esa1.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: /lonDtJpry0GWq2IuiXzOVqYuyFYZER9q6g6pJnZVdsuhB60qYm8sw5ir81a6lE5JD3VvhAgnZ
 uuL5JM5gPHap68Oz7Fo4P5sHe+FH6rqD/766TtL7+nfxGgX5SQLd2SdX4TYFz/FTZ0ajYT3Xbt
 rB6divUhbolOgbljhK0WH+nMFJ5IRNvSU3I9OV+jZvZifd4q2AgNFuE8TTh/foBPfTGMsl9Jqf
 0SJi+bG8Sd9zz8aeie0dIrsLOFQmPTiT1b5E8G3k8quqYAfiezZ6DqObqRbQGcB2zt1RJBuw1b
 Acc=
X-SBRS: 2.7
X-MesageID: 23569857
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,402,1589256000"; d="scan'208";a="23569857"
From: Roger Pau Monne <roger.pau@citrix.com>
To: <linux-kernel@vger.kernel.org>
Subject: [PATCH v3 1/4] xen/balloon: fix accounting in
 alloc_xenballooned_pages error path
Date: Mon, 27 Jul 2020 11:13:39 +0200
Message-ID: <20200727091342.52325-2-roger.pau@citrix.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20200727091342.52325-1-roger.pau@citrix.com>
References: <20200727091342.52325-1-roger.pau@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, stable@vger.kernel.org,
 xen-devel@lists.xenproject.org, Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Roger Pau Monne <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

target_unpopulated is incremented by nr_pages at the start of the
function, but the call to free_xenballooned_pages will only subtract
pgno pages, so the remainder needs to be subtracted before returning,
or else the accounting will be skewed.
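The accounting fix can be modelled in plain C (a userspace sketch; the names mirror the driver but the logic is deliberately simplified):

```c
#include <assert.h>

/* Userspace model of the balloon's unpopulated-page target counter. */
static long target_unpopulated;

/* Model of the free_xenballooned_pages() side: it only accounts for
 * the pages actually handed back. */
static void free_pages_model(int freed)
{
    target_unpopulated -= freed;
}

/* Error path of alloc_xenballooned_pages(): nr_pages was added up
 * front, only pgno pages were actually obtained and are freed, so the
 * shortfall must be subtracted explicitly or the counter stays high. */
static void undo_alloc(int nr_pages, int pgno)
{
    free_pages_model(pgno);
    target_unpopulated -= nr_pages - pgno;   /* the fix */
}
```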

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
Reviewed-by: Juergen Gross <jgross@suse.com>
Cc: stable@vger.kernel.org
---
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org
---
 drivers/xen/balloon.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/drivers/xen/balloon.c b/drivers/xen/balloon.c
index 77c57568e5d7..3cb10ed32557 100644
--- a/drivers/xen/balloon.c
+++ b/drivers/xen/balloon.c
@@ -630,6 +630,12 @@ int alloc_xenballooned_pages(int nr_pages, struct page **pages)
  out_undo:
 	mutex_unlock(&balloon_mutex);
 	free_xenballooned_pages(pgno, pages);
+	/*
+	 * NB: free_xenballooned_pages will only subtract pgno pages, but since
+	 * target_unpopulated is incremented with nr_pages at the start we need
+	 * to remove the remaining ones also, or accounting will be screwed.
+	 */
+	balloon_stats.target_unpopulated -= nr_pages - pgno;
 	return ret;
 }
 EXPORT_SYMBOL(alloc_xenballooned_pages);
-- 
2.27.0



From xen-devel-bounces@lists.xenproject.org Mon Jul 27 09:15:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jul 2020 09:15:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jzzDu-0004OI-KT; Mon, 27 Jul 2020 09:15:06 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1WzZ=BG=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jzzDs-0004OD-MB
 for xen-devel@lists.xenproject.org; Mon, 27 Jul 2020 09:15:04 +0000
X-Inumbo-ID: a6aadbf1-cfe9-11ea-8a7e-bc764e2007e4
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a6aadbf1-cfe9-11ea-8a7e-bc764e2007e4;
 Mon, 27 Jul 2020 09:15:02 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595841303;
 h=from:to:cc:subject:date:message-id:mime-version:
 content-transfer-encoding;
 bh=XRiTkYPt/VoC9tJRw/kDVebohjydPMQ3t9eNfBCqXdU=;
 b=Lyj0XVJNKWirANFhaUByZk7n57WoEeGBxa6UlNdyGOcWgMuVRMBfQ5ld
 McA5w4ZKZ27A4R8WDvpLqhHExcn/AV0CBdk5C3MAIavwWs5Q1txPHlfUX
 47rSndytsZtuIALmNOGNUgw+f4PgA49XjTDZ5/LXKOe7TvqlrkDR8NziO Y=;
Authentication-Results: esa3.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: +KtSj5NtdshyL3nTLDYcTIfIGM+JtKqneYpA9XnR9mJ4w6Dz8isYXOKKcOnhJLDLSXBqEv1fu+
 VGNP0SE2q8zhO30K32oMsHQOp5hhNZgELfIOa6tfnys3f8oHgqbDGYwofr/TbczILeDIXfHUgr
 poQ3JWUpCw9BwPfD/E3zFgF8Q+/611RVo0J/BqOLVTTTJF+W3UTMRcEOfKG22Cws4bhIWNmy3J
 JevId2zYkDY2i1+FaHRkeOidBsqqHNX/ZvPjexde2s079X6tbdIdRziNV21pXv8TcP/6O/J26r
 2U4=
X-SBRS: 2.7
X-MesageID: 23233911
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,402,1589256000"; d="scan'208";a="23233911"
From: Roger Pau Monne <roger.pau@citrix.com>
To: <linux-kernel@vger.kernel.org>
Subject: [PATCH v3 0/4] xen/balloon: fixes for memory hotplug
Date: Mon, 27 Jul 2020 11:13:38 +0200
Message-ID: <20200727091342.52325-1-roger.pau@citrix.com>
X-Mailer: git-send-email 2.27.0
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, Roger Pau Monne <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hello,

The following series contains some fixes in order to split Xen
unpopulated memory handling from the ballooning driver if ZONE_DEVICE
is available, so that the physical memory regions used to map foreign
pages are not tied to memory hotplug.

The first two patches are bugfixes that IMO should be backported to
stable branches, the third patch is a revert of a workaround applied to
the balloon driver, and the last patch introduces an interface based on
ZONE_DEVICE in order to manage regions to use for foreign mappings.

Thanks, Roger.

Roger Pau Monne (4):
  xen/balloon: fix accounting in alloc_xenballooned_pages error path
  xen/balloon: make the balloon wait interruptible
  Revert "xen/balloon: Fix crash when ballooning on x86 32 bit PAE"
  xen: add helpers to allocate unpopulated memory

 drivers/gpu/drm/xen/xen_drm_front_gem.c |   9 +-
 drivers/xen/Makefile                    |   1 +
 drivers/xen/balloon.c                   |  30 ++--
 drivers/xen/grant-table.c               |   4 +-
 drivers/xen/privcmd.c                   |   4 +-
 drivers/xen/unpopulated-alloc.c         | 185 ++++++++++++++++++++++++
 drivers/xen/xenbus/xenbus_client.c      |   6 +-
 drivers/xen/xlate_mmu.c                 |   4 +-
 include/xen/xen.h                       |   9 ++
 9 files changed, 221 insertions(+), 31 deletions(-)
 create mode 100644 drivers/xen/unpopulated-alloc.c

-- 
2.27.0



From xen-devel-bounces@lists.xenproject.org Mon Jul 27 09:15:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jul 2020 09:15:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jzzE3-0004Pp-4z; Mon, 27 Jul 2020 09:15:15 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1WzZ=BG=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jzzE2-0004OD-HH
 for xen-devel@lists.xenproject.org; Mon, 27 Jul 2020 09:15:14 +0000
X-Inumbo-ID: a9c3cea0-cfe9-11ea-8a7e-bc764e2007e4
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a9c3cea0-cfe9-11ea-8a7e-bc764e2007e4;
 Mon, 27 Jul 2020 09:15:07 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595841307;
 h=from:to:cc:subject:date:message-id:in-reply-to:
 references:mime-version:content-transfer-encoding;
 bh=3jbhXHOtCfIXZMzu/xfRs6KCwvTkEik1EFWK3NUjQ5c=;
 b=BylR2u4QVXw4hPA3moH2QQHyIhxzYwxT4dTulcmM4nPhXCUDIZrY28W8
 InAud1UkqFEAHJyGd8l+MWkJhNhcGLLbZMKouRDsUx3k+EpUkGlEV0bj4
 DH6TECx+STk1LDkcInhpZmJBSy1ovkGwQGJGm+3TgdpkD/0c38zWMeXy9 k=;
Authentication-Results: esa2.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: XBCsU74yw3gqieztP4QqoHMKHwPQS43HrGWtokfqoYQFLLraPZfghSADkh+tDlVKFLfYjy+t7V
 wLpS4OPM3WrCC5ogCSsCLNIU308UvGq9/KoTNu0vq/EKwIwdsM8K6oJgPGIdB0UeQHrLnfH6N4
 nBhXexVhbxOjiRM8L3yo31TsUDflXk+vNdRma2GVo9FscqU3iUbZ46O2r587gpwT7DUugcVKgk
 LWnMbM2lZQSLIKZNj9ffanaC/24ugers1l1uocUJ2wim1ysobo/k1h8q4hRYpKSzGmJHkStpvU
 haw=
X-SBRS: 2.7
X-MesageID: 23254979
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,402,1589256000"; d="scan'208";a="23254979"
From: Roger Pau Monne <roger.pau@citrix.com>
To: <linux-kernel@vger.kernel.org>
Subject: [PATCH v3 2/4] xen/balloon: make the balloon wait interruptible
Date: Mon, 27 Jul 2020 11:13:40 +0200
Message-ID: <20200727091342.52325-3-roger.pau@citrix.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20200727091342.52325-1-roger.pau@citrix.com>
References: <20200727091342.52325-1-roger.pau@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, stable@vger.kernel.org,
 xen-devel@lists.xenproject.org, Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Roger Pau Monne <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

So that it can be killed; otherwise processes can hang indefinitely
waiting for balloon pages.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
Reviewed-by: Juergen Gross <jgross@suse.com>
Cc: stable@vger.kernel.org
---
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org
---
 drivers/xen/balloon.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/drivers/xen/balloon.c b/drivers/xen/balloon.c
index 3cb10ed32557..292413b27575 100644
--- a/drivers/xen/balloon.c
+++ b/drivers/xen/balloon.c
@@ -568,11 +568,13 @@ static int add_ballooned_pages(int nr_pages)
 	if (xen_hotplug_unpopulated) {
 		st = reserve_additional_memory();
 		if (st != BP_ECANCELED) {
+			int rc;
+
 			mutex_unlock(&balloon_mutex);
-			wait_event(balloon_wq,
+			rc = wait_event_interruptible(balloon_wq,
 				   !list_empty(&ballooned_pages));
 			mutex_lock(&balloon_mutex);
-			return 0;
+			return rc ? -ENOMEM : 0;
 		}
 	}
 
-- 
2.27.0



From xen-devel-bounces@lists.xenproject.org Mon Jul 27 09:15:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jul 2020 09:15:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jzzE5-0004Qd-EA; Mon, 27 Jul 2020 09:15:17 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1WzZ=BG=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jzzE4-0004QL-6C
 for xen-devel@lists.xenproject.org; Mon, 27 Jul 2020 09:15:16 +0000
X-Inumbo-ID: ae45e47c-cfe9-11ea-a777-12813bfff9fa
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ae45e47c-cfe9-11ea-a777-12813bfff9fa;
 Mon, 27 Jul 2020 09:15:15 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595841315;
 h=from:to:cc:subject:date:message-id:in-reply-to:
 references:mime-version:content-transfer-encoding;
 bh=ypPb5+5mjFQ6dtbL/IxbCNOjD1qVztAdhRNp+jbSIJA=;
 b=YOu20+2J3S4EXyVWKe5hlF3hV02HsKdRExG+iIbFbWxemjji63rodA/L
 sv9V8YWbM5FAnpJT835248CjJ2VGK5IMfuumzbP+3sB48kcbs7E7k0Zuc
 XcXPbbtHU2v8fyqZv0+B0X/bl+0GbmDp4Y9XKVV72bqFdupGNm3r0qLPw w=;
Authentication-Results: esa1.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: WXyj5KnFWi7XSH4TWCBzVsr8ZMwp+EDQ+LH8inkEJ8y5c8ZI7GpHdELUbkfU50zaT0y+UJ2bg3
 tGkQTS7Cd3n2F0avfc4p3Tv4Om4nVMPdfCIP2fH81xedkGHO42gQj0gaw5WITMYWnXwXlCpuOv
 fYda7epWjgoXV8dnH06TOLIkMc6zV5hwVEA84Nsa63862itJEJbTBUF/qzfcmTAgkBOmuv72yn
 QGZrTN5pyKY/pg/gEvRVPcy84WdrnQKei0WAIP2Epw33rEGjpdrfi7j5pDPW8jfmK3gQUzufVG
 K4A=
X-SBRS: 2.7
X-MesageID: 23569868
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,402,1589256000"; d="scan'208";a="23569868"
From: Roger Pau Monne <roger.pau@citrix.com>
To: <linux-kernel@vger.kernel.org>
Subject: [PATCH v3 4/4] xen: add helpers to allocate unpopulated memory
Date: Mon, 27 Jul 2020 11:13:42 +0200
Message-ID: <20200727091342.52325-5-roger.pau@citrix.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20200727091342.52325-1-roger.pau@citrix.com>
References: <20200727091342.52325-1-roger.pau@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>,
 David Airlie <airlied@linux.ie>, Yan
 Yankovskyi <yyankovskyi@gmail.com>, David Hildenbrand <david@redhat.com>,
 dri-devel@lists.freedesktop.org, Michal Hocko <mhocko@kernel.org>,
 linux-mm@kvack.org, Daniel
 Vetter <daniel@ffwll.ch>, xen-devel@lists.xenproject.org,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Dan Williams <dan.j.williams@intel.com>,
 Dan Carpenter <dan.carpenter@oracle.com>,
 Roger Pau Monne <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

To be used in order to create foreign mappings. This is based on the
ZONE_DEVICE facility which is used by persistent memory devices in
order to create struct pages and kernel virtual mappings for the IOMEM
areas of such devices. Note that on kernels without support for
ZONE_DEVICE Xen will fall back to using ballooned pages in order to
create foreign mappings.

The newly added helpers use the same parameters as the existing
{alloc/free}_xenballooned_pages functions, which allows for in-place
replacement of the callers. Once a memory region has been added to be
used as scratch mapping space it will no longer be released, and pages
returned are kept in a linked list. This allows keeping a buffer of
pages and avoids resorting to frequent additions and removals of
regions.
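The buffering scheme can be modelled in userspace C (all names and the fill logic here are illustrative, not the driver's actual API; malloc stands in for memremapping a new region):

```c
#include <assert.h>
#include <stdlib.h>

/* Userspace model of the unpopulated-pages buffer: once obtained,
 * "pages" go back on a free list instead of being released, so later
 * allocations are served from the buffer. */
struct page { struct page *next; };

static struct page *free_list;
static unsigned int cached;  /* pages currently on the list */
static unsigned int mapped;  /* how many times a new range was "mapped" */

static void put_pages(struct page **pages, unsigned int n)
{
    for (unsigned int i = 0; i < n; i++) {
        pages[i]->next = free_list;     /* recycled, never freed */
        free_list = pages[i];
        cached++;
    }
}

static int get_pages(struct page **pages, unsigned int n)
{
    if (cached < n) {
        /* Model of fill(): fabricate only the shortfall. */
        for (unsigned int i = cached; i < n; i++) {
            struct page *pg = malloc(sizeof(*pg));

            if (!pg)
                return -1;
            pg->next = free_list;
            free_list = pg;
            cached++;
        }
        mapped++;
    }
    for (unsigned int i = 0; i < n; i++) {
        pages[i] = free_list;
        free_list = free_list->next;
        cached--;
    }
    return 0;
}
```

Returning pages with put_pages() and then allocating again is served entirely from the buffer, so no second range has to be mapped.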

If enabled (because ZONE_DEVICE is supported), the new functionality
untangles the Xen balloon and RAM hotplug from the use of unpopulated
physical memory ranges to map foreign pages, which is the correct thing
to do in order to avoid making mappings of foreign pages depend on
memory hotplug.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
I've not added a new memory_type and have just used
MEMORY_DEVICE_DEVDAX, which seems to be what we want for such memory
regions. I'm unsure whether abusing this type is fine, or if I should
instead add a specific type, maybe MEMORY_DEVICE_GENERIC? I don't
think we should be using a Xen-specific type at all.
---
Cc: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
Cc: David Airlie <airlied@linux.ie>
Cc: Daniel Vetter <daniel@ffwll.ch>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Dan Carpenter <dan.carpenter@oracle.com>
Cc: Roger Pau Monne <roger.pau@citrix.com>
Cc: Wei Liu <wl@xen.org>
Cc: Yan Yankovskyi <yyankovskyi@gmail.com>
Cc: dri-devel@lists.freedesktop.org
Cc: xen-devel@lists.xenproject.org
Cc: linux-mm@kvack.org
Cc: David Hildenbrand <david@redhat.com>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Dan Williams <dan.j.williams@intel.com>
---
Changes since v2:
 - Drop BUILD_BUG_ON regarding PVMMU page sizes.
 - Use a SPDX license identifier.
 - Call fill with only the minimum required number of pages.
 - Include xen.h header in xen_drm_front_gem.c.
 - Use less generic function names.
 - Exit early from the init function if not a PV guest.
 - Don't use all caps for region name.
---
 drivers/gpu/drm/xen/xen_drm_front_gem.c |   9 +-
 drivers/xen/Makefile                    |   1 +
 drivers/xen/balloon.c                   |   4 +-
 drivers/xen/grant-table.c               |   4 +-
 drivers/xen/privcmd.c                   |   4 +-
 drivers/xen/unpopulated-alloc.c         | 185 ++++++++++++++++++++++++
 drivers/xen/xenbus/xenbus_client.c      |   6 +-
 drivers/xen/xlate_mmu.c                 |   4 +-
 include/xen/xen.h                       |   9 ++
 9 files changed, 211 insertions(+), 15 deletions(-)
 create mode 100644 drivers/xen/unpopulated-alloc.c

diff --git a/drivers/gpu/drm/xen/xen_drm_front_gem.c b/drivers/gpu/drm/xen/xen_drm_front_gem.c
index f0b85e094111..270e1bd3e4da 100644
--- a/drivers/gpu/drm/xen/xen_drm_front_gem.c
+++ b/drivers/gpu/drm/xen/xen_drm_front_gem.c
@@ -18,6 +18,7 @@
 #include <drm/drm_probe_helper.h>
 
 #include <xen/balloon.h>
+#include <xen/xen.h>
 
 #include "xen_drm_front.h"
 #include "xen_drm_front_gem.h"
@@ -99,8 +100,8 @@ static struct xen_gem_object *gem_create(struct drm_device *dev, size_t size)
 		 * allocate ballooned pages which will be used to map
 		 * grant references provided by the backend
 		 */
-		ret = alloc_xenballooned_pages(xen_obj->num_pages,
-					       xen_obj->pages);
+		ret = xen_alloc_unpopulated_pages(xen_obj->num_pages,
+					          xen_obj->pages);
 		if (ret < 0) {
 			DRM_ERROR("Cannot allocate %zu ballooned pages: %d\n",
 				  xen_obj->num_pages, ret);
@@ -152,8 +153,8 @@ void xen_drm_front_gem_free_object_unlocked(struct drm_gem_object *gem_obj)
 	} else {
 		if (xen_obj->pages) {
 			if (xen_obj->be_alloc) {
-				free_xenballooned_pages(xen_obj->num_pages,
-							xen_obj->pages);
+				xen_free_unpopulated_pages(xen_obj->num_pages,
+							   xen_obj->pages);
 				gem_free_pages_array(xen_obj);
 			} else {
 				drm_gem_put_pages(&xen_obj->base,
diff --git a/drivers/xen/Makefile b/drivers/xen/Makefile
index 0d322f3d90cd..788a5d9c8ef0 100644
--- a/drivers/xen/Makefile
+++ b/drivers/xen/Makefile
@@ -42,3 +42,4 @@ xen-gntdev-$(CONFIG_XEN_GNTDEV_DMABUF)	+= gntdev-dmabuf.o
 xen-gntalloc-y				:= gntalloc.o
 xen-privcmd-y				:= privcmd.o privcmd-buf.o
 obj-$(CONFIG_XEN_FRONT_PGDIR_SHBUF)	+= xen-front-pgdir-shbuf.o
+obj-$(CONFIG_ZONE_DEVICE)		+= unpopulated-alloc.o
diff --git a/drivers/xen/balloon.c b/drivers/xen/balloon.c
index b1d8b028bf80..815ef10eb2ff 100644
--- a/drivers/xen/balloon.c
+++ b/drivers/xen/balloon.c
@@ -654,7 +654,7 @@ void free_xenballooned_pages(int nr_pages, struct page **pages)
 }
 EXPORT_SYMBOL(free_xenballooned_pages);
 
-#ifdef CONFIG_XEN_PV
+#if defined(CONFIG_XEN_PV) && !defined(CONFIG_ZONE_DEVICE)
 static void __init balloon_add_region(unsigned long start_pfn,
 				      unsigned long pages)
 {
@@ -708,7 +708,7 @@ static int __init balloon_init(void)
 	register_sysctl_table(xen_root);
 #endif
 
-#ifdef CONFIG_XEN_PV
+#if defined(CONFIG_XEN_PV) && !defined(CONFIG_ZONE_DEVICE)
 	{
 		int i;
 
diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
index 8d06bf1cc347..523dcdf39cc9 100644
--- a/drivers/xen/grant-table.c
+++ b/drivers/xen/grant-table.c
@@ -801,7 +801,7 @@ int gnttab_alloc_pages(int nr_pages, struct page **pages)
 {
 	int ret;
 
-	ret = alloc_xenballooned_pages(nr_pages, pages);
+	ret = xen_alloc_unpopulated_pages(nr_pages, pages);
 	if (ret < 0)
 		return ret;
 
@@ -836,7 +836,7 @@ EXPORT_SYMBOL_GPL(gnttab_pages_clear_private);
 void gnttab_free_pages(int nr_pages, struct page **pages)
 {
 	gnttab_pages_clear_private(nr_pages, pages);
-	free_xenballooned_pages(nr_pages, pages);
+	xen_free_unpopulated_pages(nr_pages, pages);
 }
 EXPORT_SYMBOL_GPL(gnttab_free_pages);
 
diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
index a250d118144a..56000ab70974 100644
--- a/drivers/xen/privcmd.c
+++ b/drivers/xen/privcmd.c
@@ -425,7 +425,7 @@ static int alloc_empty_pages(struct vm_area_struct *vma, int numpgs)
 	if (pages == NULL)
 		return -ENOMEM;
 
-	rc = alloc_xenballooned_pages(numpgs, pages);
+	rc = xen_alloc_unpopulated_pages(numpgs, pages);
 	if (rc != 0) {
 		pr_warn("%s Could not alloc %d pfns rc:%d\n", __func__,
 			numpgs, rc);
@@ -900,7 +900,7 @@ static void privcmd_close(struct vm_area_struct *vma)
 
 	rc = xen_unmap_domain_gfn_range(vma, numgfns, pages);
 	if (rc == 0)
-		free_xenballooned_pages(numpgs, pages);
+		xen_free_unpopulated_pages(numpgs, pages);
 	else
 		pr_crit("unable to unmap MFN range: leaking %d pages. rc=%d\n",
 			numpgs, rc);
diff --git a/drivers/xen/unpopulated-alloc.c b/drivers/xen/unpopulated-alloc.c
new file mode 100644
index 000000000000..c50450375922
--- /dev/null
+++ b/drivers/xen/unpopulated-alloc.c
@@ -0,0 +1,185 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <linux/errno.h>
+#include <linux/gfp.h>
+#include <linux/kernel.h>
+#include <linux/mm.h>
+#include <linux/memremap.h>
+#include <linux/slab.h>
+
+#include <asm/page.h>
+
+#include <xen/page.h>
+#include <xen/xen.h>
+
+static DEFINE_MUTEX(list_lock);
+static LIST_HEAD(page_list);
+static unsigned int list_count;
+
+static int fill_list(unsigned int nr_pages)
+{
+	struct dev_pagemap *pgmap;
+	void *vaddr;
+	unsigned int i, alloc_pages = round_up(nr_pages, PAGES_PER_SECTION);
+	int nid, ret;
+
+	pgmap = kzalloc(sizeof(*pgmap), GFP_KERNEL);
+	if (!pgmap)
+		return -ENOMEM;
+
+	pgmap->type = MEMORY_DEVICE_DEVDAX;
+	pgmap->res.name = "Xen scratch";
+	pgmap->res.flags = IORESOURCE_MEM | IORESOURCE_BUSY;
+
+	ret = allocate_resource(&iomem_resource, &pgmap->res,
+				alloc_pages * PAGE_SIZE, 0, -1,
+				PAGES_PER_SECTION * PAGE_SIZE, NULL, NULL);
+	if (ret < 0) {
+		pr_err("Cannot allocate new IOMEM resource\n");
+		kfree(pgmap);
+		return ret;
+	}
+
+	nid = memory_add_physaddr_to_nid(pgmap->res.start);
+
+#ifdef CONFIG_XEN_HAVE_PVMMU
+	/*
+	 * memremap will build page tables for the new memory, so
+	 * the p2m must contain invalid entries in order for the
+	 * correct non-present PTEs to be written.
+	 *
+	 * If a failure occurs, the original (identity) p2m entries
+	 * are not restored since this region is now known not to
+	 * conflict with any devices.
+	 */
+	if (!xen_feature(XENFEAT_auto_translated_physmap)) {
+		xen_pfn_t pfn = PFN_DOWN(pgmap->res.start);
+
+		for (i = 0; i < alloc_pages; i++) {
+			if (!set_phys_to_machine(pfn + i, INVALID_P2M_ENTRY)) {
+				pr_warn("set_phys_to_machine() failed, no memory added\n");
+				release_resource(&pgmap->res);
+				kfree(pgmap);
+				return -ENOMEM;
+			}
+		}
+	}
+#endif
+
+	vaddr = memremap_pages(pgmap, nid);
+	if (IS_ERR(vaddr)) {
+		pr_err("Cannot remap memory range\n");
+		release_resource(&pgmap->res);
+		kfree(pgmap);
+		return PTR_ERR(vaddr);
+	}
+
+	for (i = 0; i < alloc_pages; i++) {
+		struct page *pg = virt_to_page(vaddr + PAGE_SIZE * i);
+
+		BUG_ON(!virt_addr_valid(vaddr + PAGE_SIZE * i));
+		list_add(&pg->lru, &page_list);
+		list_count++;
+	}
+
+	return 0;
+}
+
+/**
+ * xen_alloc_unpopulated_pages - alloc unpopulated pages
+ * @nr_pages: Number of pages
+ * @pages: pages returned
+ * Return: 0 on success, an error code otherwise
+ */
+int xen_alloc_unpopulated_pages(unsigned int nr_pages, struct page **pages)
+{
+	unsigned int i;
+	int ret = 0;
+
+	mutex_lock(&list_lock);
+	if (list_count < nr_pages) {
+		ret = fill_list(nr_pages - list_count);
+		if (ret)
+			goto out;
+	}
+
+	for (i = 0; i < nr_pages; i++) {
+		struct page *pg = list_first_entry_or_null(&page_list,
+							   struct page,
+							   lru);
+
+		BUG_ON(!pg);
+		list_del(&pg->lru);
+		list_count--;
+		pages[i] = pg;
+
+#ifdef CONFIG_XEN_HAVE_PVMMU
+		if (!xen_feature(XENFEAT_auto_translated_physmap)) {
+			ret = xen_alloc_p2m_entry(page_to_pfn(pg));
+			if (ret < 0) {
+				unsigned int j;
+
+				for (j = 0; j <= i; j++) {
+					list_add(&pages[j]->lru, &page_list);
+					list_count++;
+				}
+				goto out;
+			}
+		}
+#endif
+	}
+
+out:
+	mutex_unlock(&list_lock);
+	return ret;
+}
+EXPORT_SYMBOL(xen_alloc_unpopulated_pages);
+
+/**
+ * xen_free_unpopulated_pages - return unpopulated pages
+ * @nr_pages: Number of pages
+ * @pages: pages to return
+ */
+void xen_free_unpopulated_pages(unsigned int nr_pages, struct page **pages)
+{
+	unsigned int i;
+
+	mutex_lock(&list_lock);
+	for (i = 0; i < nr_pages; i++) {
+		list_add(&pages[i]->lru, &page_list);
+		list_count++;
+	}
+	mutex_unlock(&list_lock);
+}
+EXPORT_SYMBOL(xen_free_unpopulated_pages);
+
+#ifdef CONFIG_XEN_PV
+static int __init init(void)
+{
+	unsigned int i;
+
+	if (!xen_domain())
+		return -ENODEV;
+
+	if (!xen_pv_domain())
+		return 0;
+
+	/*
+	 * Initialize with pages from the extra memory regions (see
+	 * arch/x86/xen/setup.c).
+	 */
+	for (i = 0; i < XEN_EXTRA_MEM_MAX_REGIONS; i++) {
+		unsigned int j;
+
+		for (j = 0; j < xen_extra_mem[i].n_pfns; j++) {
+			struct page *pg =
+				pfn_to_page(xen_extra_mem[i].start_pfn + j);
+
+			list_add(&pg->lru, &page_list);
+			list_count++;
+		}
+	}
+
+	return 0;
+}
+subsys_initcall(init);
+#endif
diff --git a/drivers/xen/xenbus/xenbus_client.c b/drivers/xen/xenbus/xenbus_client.c
index 786fbb7d8be0..70b6c4780fbd 100644
--- a/drivers/xen/xenbus/xenbus_client.c
+++ b/drivers/xen/xenbus/xenbus_client.c
@@ -615,7 +615,7 @@ static int xenbus_map_ring_hvm(struct xenbus_device *dev,
 	bool leaked = false;
 	unsigned int nr_pages = XENBUS_PAGES(nr_grefs);
 
-	err = alloc_xenballooned_pages(nr_pages, node->hvm.pages);
+	err = xen_alloc_unpopulated_pages(nr_pages, node->hvm.pages);
 	if (err)
 		goto out_err;
 
@@ -656,7 +656,7 @@ static int xenbus_map_ring_hvm(struct xenbus_device *dev,
 			 addr, nr_pages);
  out_free_ballooned_pages:
 	if (!leaked)
-		free_xenballooned_pages(nr_pages, node->hvm.pages);
+		xen_free_unpopulated_pages(nr_pages, node->hvm.pages);
  out_err:
 	return err;
 }
@@ -852,7 +852,7 @@ static int xenbus_unmap_ring_hvm(struct xenbus_device *dev, void *vaddr)
 			       info.addrs);
 	if (!rv) {
 		vunmap(vaddr);
-		free_xenballooned_pages(nr_pages, node->hvm.pages);
+		xen_free_unpopulated_pages(nr_pages, node->hvm.pages);
 	}
 	else
 		WARN(1, "Leaking %p, size %u page(s)\n", vaddr, nr_pages);
diff --git a/drivers/xen/xlate_mmu.c b/drivers/xen/xlate_mmu.c
index 7b1077f0abcb..34742c6e189e 100644
--- a/drivers/xen/xlate_mmu.c
+++ b/drivers/xen/xlate_mmu.c
@@ -232,7 +232,7 @@ int __init xen_xlate_map_ballooned_pages(xen_pfn_t **gfns, void **virt,
 		kfree(pages);
 		return -ENOMEM;
 	}
-	rc = alloc_xenballooned_pages(nr_pages, pages);
+	rc = xen_alloc_unpopulated_pages(nr_pages, pages);
 	if (rc) {
 		pr_warn("%s Couldn't balloon alloc %ld pages rc:%d\n", __func__,
 			nr_pages, rc);
@@ -249,7 +249,7 @@ int __init xen_xlate_map_ballooned_pages(xen_pfn_t **gfns, void **virt,
 	if (!vaddr) {
 		pr_warn("%s Couldn't map %ld pages rc:%d\n", __func__,
 			nr_pages, rc);
-		free_xenballooned_pages(nr_pages, pages);
+		xen_free_unpopulated_pages(nr_pages, pages);
 		kfree(pages);
 		kfree(pfns);
 		return -ENOMEM;
diff --git a/include/xen/xen.h b/include/xen/xen.h
index 19a72f591e2b..e93b47e5e378 100644
--- a/include/xen/xen.h
+++ b/include/xen/xen.h
@@ -52,4 +52,13 @@ bool xen_biovec_phys_mergeable(const struct bio_vec *vec1,
 extern u64 xen_saved_max_mem_size;
 #endif
 
+#ifdef CONFIG_ZONE_DEVICE
+int xen_alloc_unpopulated_pages(unsigned int nr_pages, struct page **pages);
+void xen_free_unpopulated_pages(unsigned int nr_pages, struct page **pages);
+#else
+#define xen_alloc_unpopulated_pages alloc_xenballooned_pages
+#define xen_free_unpopulated_pages free_xenballooned_pages
+#include <xen/balloon.h>
+#endif
+
 #endif	/* _XEN_XEN_H */
-- 
2.27.0



From xen-devel-bounces@lists.xenproject.org Mon Jul 27 09:15:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jul 2020 09:15:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jzzE8-0004TU-TT; Mon, 27 Jul 2020 09:15:20 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1WzZ=BG=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jzzE7-0004OD-HT
 for xen-devel@lists.xenproject.org; Mon, 27 Jul 2020 09:15:19 +0000
X-Inumbo-ID: ab0a4cda-cfe9-11ea-8a7e-bc764e2007e4
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ab0a4cda-cfe9-11ea-8a7e-bc764e2007e4;
 Mon, 27 Jul 2020 09:15:09 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595841309;
 h=from:to:cc:subject:date:message-id:in-reply-to:
 references:mime-version:content-transfer-encoding;
 bh=Nq9yI9ZapjAU3I2ISQYBx01cREm8iD5IrXb3HWdctu0=;
 b=KZInzin2gcDJQRvZdUwizW0PAokuwA9G1EfiBgPiA+O49KeLaeSfkJMr
 f1GNSEvLtMTwrrUrVsUcnLOhOR1fZr6H5SwkForVztBAWQcQ/rljyQO/d
 Ao6+E4cvtv4YW6j2Xns7IBOVSd5u4r3hr5DvPx21Z+N4q7tfFY29NztrC Y=;
Authentication-Results: esa3.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: CaWT5LIbKdSm6bglJh6wY7Rm9pQCl8a6jhy9q8a+NizdhgIBcUgMJIVzAoXIXdLIYwEyNXjysu
 jSQTvNe1bbYVQi7e3KcyTmEZFi3CsDbUMupe+asAciCVSRx7faRfhMETYPniYTT+tB4f4E/ny3
 8Z9NHVXMK/B/ZfpG36jfHVTra6T4EWIxWxLV5SwAHLLEW5I157sJPAJSregsF8/dAdp6FxmzH3
 k9ZqiAppaKjGEJvBHAgFr+VIUtwQG6LUpdZxvOwrLfYO2QKjsRrmZ1Z54Lp0a4thonvBtWrRo4
 ohI=
X-SBRS: 2.7
X-MesageID: 23233916
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,402,1589256000"; d="scan'208";a="23233916"
From: Roger Pau Monne <roger.pau@citrix.com>
To: <linux-kernel@vger.kernel.org>
Subject: [PATCH v3 3/4] Revert "xen/balloon: Fix crash when ballooning on x86
 32 bit PAE"
Date: Mon, 27 Jul 2020 11:13:41 +0200
Message-ID: <20200727091342.52325-4-roger.pau@citrix.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20200727091342.52325-1-roger.pau@citrix.com>
References: <20200727091342.52325-1-roger.pau@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Roger Pau Monne <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

This reverts commit dfd74a1edfaba5864276a2859190a8d242d18952.

This has been fixed by commit dca4436d1cf9e0d237c, which added an out
of bounds check to __add_memory() so that trying to add blocks past
MAX_PHYSMEM_BITS will fail.

Note that the check in the Xen balloon driver was bogus anyway: it
tested the start address of the resource, whereas it should have
tested the end address to assert that the whole resource falls below
MAX_PHYSMEM_BITS.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
Reviewed-by: Juergen Gross <jgross@suse.com>
---
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org
---
 drivers/xen/balloon.c | 14 --------------
 1 file changed, 14 deletions(-)

diff --git a/drivers/xen/balloon.c b/drivers/xen/balloon.c
index 292413b27575..b1d8b028bf80 100644
--- a/drivers/xen/balloon.c
+++ b/drivers/xen/balloon.c
@@ -266,20 +266,6 @@ static struct resource *additional_memory_resource(phys_addr_t size)
 		return NULL;
 	}
 
-#ifdef CONFIG_SPARSEMEM
-	{
-		unsigned long limit = 1UL << (MAX_PHYSMEM_BITS - PAGE_SHIFT);
-		unsigned long pfn = res->start >> PAGE_SHIFT;
-
-		if (pfn > limit) {
-			pr_err("New System RAM resource outside addressable RAM (%lu > %lu)\n",
-			       pfn, limit);
-			release_memory_resource(res);
-			return NULL;
-		}
-	}
-#endif
-
 	return res;
 }
 
-- 
2.27.0



From xen-devel-bounces@lists.xenproject.org Mon Jul 27 09:30:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jul 2020 09:30:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jzzSY-0006On-8A; Mon, 27 Jul 2020 09:30:14 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=TI7O=BG=amazon.co.uk=prvs=4708ece4a=pdurrant@srs-us1.protection.inumbo.net>)
 id 1jzzSX-0006Oi-Pl
 for xen-devel@lists.xenproject.org; Mon, 27 Jul 2020 09:30:13 +0000
X-Inumbo-ID: c5ae7974-cfeb-11ea-8a80-bc764e2007e4
Received: from smtp-fw-4101.amazon.com (unknown [72.21.198.25])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c5ae7974-cfeb-11ea-8a80-bc764e2007e4;
 Mon, 27 Jul 2020 09:30:12 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=amazon.co.uk; i=@amazon.co.uk; q=dns/txt;
 s=amazon201209; t=1595842213; x=1627378213;
 h=from:to:cc:date:message-id:references:in-reply-to:
 content-transfer-encoding:mime-version:subject;
 bh=Tfv3ybfmHqTOct8fEpnQ5FJFdN5/L399NK+1GqmvEek=;
 b=l/1cygW3hw4Wbr58Qt9h/z1P6MoKbPrekljj/9p6A3oWn5GkgSD2PKHt
 NFj0HJ0t5e8jTMOmbE9q0Axhk72jXqeMuhwNg0kJTqmyTCGnYv6ZqUCkv
 9/0sYBdlzZo23jKuakBpmiH6QznvB2gRMwSurGQELVP7MhvSWtcWZy38H w=;
IronPort-SDR: F0jEPhKiMtLl/kJd3jVSfQbBKyqvEm8cbNRukCWafwkqD2nT4MwC4eE6jnr1BY+mItjboAhgtq
 bH36pzO/rxnA==
X-IronPort-AV: E=Sophos;i="5.75,402,1589241600"; d="scan'208";a="44328673"
Subject: RE: [PATCH 1/6] x86/iommu: re-arrange arch_iommu to separate common
 fields...
Thread-Topic: [PATCH 1/6] x86/iommu: re-arrange arch_iommu to separate common
 fields...
Received: from iad12-co-svc-p1-lb1-vlan3.amazon.com (HELO
 email-inbound-relay-1d-f273de60.us-east-1.amazon.com) ([10.43.8.6])
 by smtp-border-fw-out-4101.iad4.amazon.com with ESMTP;
 27 Jul 2020 09:30:12 +0000
Received: from EX13MTAUEA002.ant.amazon.com
 (iad55-ws-svc-p15-lb9-vlan3.iad.amazon.com [10.40.159.166])
 by email-inbound-relay-1d-f273de60.us-east-1.amazon.com (Postfix) with ESMTPS
 id 1E74EA02D3; Mon, 27 Jul 2020 09:30:09 +0000 (UTC)
Received: from EX13D32EUC004.ant.amazon.com (10.43.164.121) by
 EX13MTAUEA002.ant.amazon.com (10.43.61.77) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Mon, 27 Jul 2020 09:30:09 +0000
Received: from EX13D32EUC003.ant.amazon.com (10.43.164.24) by
 EX13D32EUC004.ant.amazon.com (10.43.164.121) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Mon, 27 Jul 2020 09:30:08 +0000
Received: from EX13D32EUC003.ant.amazon.com ([10.43.164.24]) by
 EX13D32EUC003.ant.amazon.com ([10.43.164.24]) with mapi id 15.00.1497.006;
 Mon, 27 Jul 2020 09:30:08 +0000
From: "Durrant, Paul" <pdurrant@amazon.co.uk>
To: Jan Beulich <jbeulich@suse.com>, "paul@xen.org" <paul@xen.org>
Thread-Index: AQJEvXWV1fglpFS8p501Sb8VALJosQK2TIaJAcTpbVgCkV1pmQFR7GoUp/uB7nA=
Date: Mon, 27 Jul 2020 09:30:08 +0000
Message-ID: <70f75eeb115e4523ac3c47ecc732ea23@EX13D32EUC003.ant.amazon.com>
References: <20200724164619.1245-1-paul@xen.org>
 <20200724164619.1245-2-paul@xen.org>
 <68b40fdc-e578-7005-aa6e-499c6f04589c@citrix.com>
 <000001d661eb$392e1ae0$ab8a50a0$@xen.org>
 <63ed6df0-e456-48cd-6df0-601600871226@suse.com>
In-Reply-To: <63ed6df0-e456-48cd-6df0-601600871226@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-ms-exchange-transport-fromentityheader: Hosted
x-originating-ip: [10.43.165.155]
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
MIME-Version: 1.0
Precedence: Bulk
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: 'Kevin Tian' <kevin.tian@intel.com>, 'Wei Liu' <wl@xen.org>,
 'Andrew Cooper' <andrew.cooper3@citrix.com>, 'Lukasz
 Hawrylko' <lukasz.hawrylko@linux.intel.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 =?utf-8?B?J1JvZ2VyIFBhdSBNb25uw6kn?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

PiAtLS0tLU9yaWdpbmFsIE1lc3NhZ2UtLS0tLQ0KPiBGcm9tOiBKYW4gQmV1bGljaCA8amJldWxp
Y2hAc3VzZS5jb20+DQo+IFNlbnQ6IDI2IEp1bHkgMjAyMCAwOToxNA0KPiBUbzogcGF1bEB4ZW4u
b3JnDQo+IENjOiAnQW5kcmV3IENvb3BlcicgPGFuZHJldy5jb29wZXIzQGNpdHJpeC5jb20+OyB4
ZW4tZGV2ZWxAbGlzdHMueGVucHJvamVjdC5vcmc7IER1cnJhbnQsIFBhdWwNCj4gPHBkdXJyYW50
QGFtYXpvbi5jby51az47ICdMdWthc3ogSGF3cnlsa28nIDxsdWthc3ouaGF3cnlsa29AbGludXgu
aW50ZWwuY29tPjsgJ1dlaSBMaXUnIDx3bEB4ZW4ub3JnPjsNCj4gJ1JvZ2VyIFBhdSBNb25uw6kn
IDxyb2dlci5wYXVAY2l0cml4LmNvbT47ICdLZXZpbiBUaWFuJyA8a2V2aW4udGlhbkBpbnRlbC5j
b20+DQo+IFN1YmplY3Q6IFJFOiBbRVhURVJOQUxdIFtQQVRDSCAxLzZdIHg4Ni9pb21tdTogcmUt
YXJyYW5nZSBhcmNoX2lvbW11IHRvIHNlcGFyYXRlIGNvbW1vbiBmaWVsZHMuLi4NCj4gDQo+IENB
VVRJT046IFRoaXMgZW1haWwgb3JpZ2luYXRlZCBmcm9tIG91dHNpZGUgb2YgdGhlIG9yZ2FuaXph
dGlvbi4gRG8gbm90IGNsaWNrIGxpbmtzIG9yIG9wZW4NCj4gYXR0YWNobWVudHMgdW5sZXNzIHlv
dSBjYW4gY29uZmlybSB0aGUgc2VuZGVyIGFuZCBrbm93IHRoZSBjb250ZW50IGlzIHNhZmUuDQo+
IA0KPiANCj4gDQo+IE9uIDI0LjA3LjIwMjAgMjA6NDksIFBhdWwgRHVycmFudCB3cm90ZToNCj4g
Pj4gRnJvbTogQW5kcmV3IENvb3BlciA8YW5kcmV3LmNvb3BlcjNAY2l0cml4LmNvbT4NCj4gPj4g
U2VudDogMjQgSnVseSAyMDIwIDE4OjI5DQo+ID4+DQo+ID4+IE9uIDI0LzA3LzIwMjAgMTc6NDYs
IFBhdWwgRHVycmFudCB3cm90ZToNCj4gPj4+IGRpZmYgLS1naXQgYS94ZW4vaW5jbHVkZS9hc20t
eDg2L2lvbW11LmggYi94ZW4vaW5jbHVkZS9hc20teDg2L2lvbW11LmgNCj4gPj4+IGluZGV4IDZj
OWQ1ZTU2MzIuLmE3YWRkNTIwOGMgMTAwNjQ0DQo+ID4+PiAtLS0gYS94ZW4vaW5jbHVkZS9hc20t
eDg2L2lvbW11LmgNCj4gPj4+ICsrKyBiL3hlbi9pbmNsdWRlL2FzbS14ODYvaW9tbXUuaA0KPiA+
Pj4gQEAgLTQ1LDE2ICs0NSwyMyBAQCB0eXBlZGVmIHVpbnQ2NF90IGRhZGRyX3Q7DQo+ID4+Pg0K
PiA+Pj4gIHN0cnVjdCBhcmNoX2lvbW11DQo+ID4+PiAgew0KPiA+Pj4gLSAgICB1NjQgcGdkX21h
ZGRyOyAgICAgICAgICAgICAgICAgLyogaW8gcGFnZSBkaXJlY3RvcnkgbWFjaGluZSBhZGRyZXNz
ICovDQo+ID4+PiAtICAgIHNwaW5sb2NrX3QgbWFwcGluZ19sb2NrOyAgICAgICAgICAgIC8qIGlv
IHBhZ2UgdGFibGUgbG9jayAqLw0KPiA+Pj4gLSAgICBpbnQgYWdhdzsgICAgIC8qIGFkanVzdGVk
IGd1ZXN0IGFkZHJlc3Mgd2lkdGgsIDAgaXMgbGV2ZWwgMiAzMC1iaXQgKi8NCj4gPj4+IC0gICAg
dTY0IGlvbW11X2JpdG1hcDsgICAgICAgICAgICAgIC8qIGJpdG1hcCBvZiBpb21tdShzKSB0aGF0
IHRoZSBkb21haW4gdXNlcyAqLw0KPiA+Pj4gLSAgICBzdHJ1Y3QgbGlzdF9oZWFkIG1hcHBlZF9y
bXJyczsNCj4gPj4+IC0NCj4gPj4+IC0gICAgLyogYW1kIGlvbW11IHN1cHBvcnQgKi8NCj4gPj4+
IC0gICAgaW50IHBhZ2luZ19tb2RlOw0KPiA+Pj4gLSAgICBzdHJ1Y3QgcGFnZV9pbmZvICpyb290
X3RhYmxlOw0KPiA+Pj4gLSAgICBzdHJ1Y3QgZ3Vlc3RfaW9tbXUgKmdfaW9tbXU7DQo+ID4+PiAr
ICAgIHNwaW5sb2NrX3QgbWFwcGluZ19sb2NrOyAvKiBpbyBwYWdlIHRhYmxlIGxvY2sgKi8NCj4g
Pj4+ICsNCj4gPj4+ICsgICAgdW5pb24gew0KPiA+Pj4gKyAgICAgICAgLyogSW50ZWwgVlQtZCAq
Lw0KPiA+Pj4gKyAgICAgICAgc3RydWN0IHsNCj4gPj4+ICsgICAgICAgICAgICB1NjQgcGdkX21h
ZGRyOyAvKiBpbyBwYWdlIGRpcmVjdG9yeSBtYWNoaW5lIGFkZHJlc3MgKi8NCj4gPj4+ICsgICAg
ICAgICAgICBpbnQgYWdhdzsgLyogYWRqdXN0ZWQgZ3Vlc3QgYWRkcmVzcyB3aWR0aCwgMCBpcyBs
ZXZlbCAyIDMwLWJpdCAqLw0KPiA+Pj4gKyAgICAgICAgICAgIHU2NCBpb21tdV9iaXRtYXA7IC8q
IGJpdG1hcCBvZiBpb21tdShzKSB0aGF0IHRoZSBkb21haW4gdXNlcyAqLw0KPiA+Pj4gKyAgICAg
ICAgICAgIHN0cnVjdCBsaXN0X2hlYWQgbWFwcGVkX3JtcnJzOw0KPiA+Pj4gKyAgICAgICAgfSB2
dGQ7DQo+ID4+PiArICAgICAgICAvKiBBTUQgSU9NTVUgKi8NCj4gPj4+ICsgICAgICAgIHN0cnVj
dCB7DQo+ID4+PiArICAgICAgICAgICAgaW50IHBhZ2luZ19tb2RlOw0KPiA+Pj4gKyAgICAgICAg
ICAgIHN0cnVjdCBwYWdlX2luZm8gKnJvb3RfdGFibGU7DQo+ID4+PiArICAgICAgICAgICAgc3Ry
dWN0IGd1ZXN0X2lvbW11ICpnX2lvbW11Ow0KPiA+Pj4gKyAgICAgICAgfSBhbWRfaW9tbXU7DQo+
ID4+PiArICAgIH07DQo+ID4+DQo+ID4+IFRoZSBuYW1pbmcgc3BsaXQgaGVyZSBpcyB3ZWlyZC4N
Cj4gPj4NCj4gPj4gSWRlYWxseSB3ZSdkIGhhdmUgc3RydWN0IHt2dGQsYW1kfV9pb21tdSBpbiBh
cHByb3ByaWF0ZSBoZWFkZXJzLCBhbmQNCj4gPj4gdGhpcyB3b3VsZCBiZSBzaW1wbHkNCj4gPj4N
Cj4gPj4gdW5pb24gew0KPiA+PiAgICAgc3RydWN0IHZ0ZF9pb21tdSB2dGQ7DQo+ID4+ICAgICBz
dHJ1Y3QgYW1kX2lvbW11IGFtZDsNCj4gPj4gfTsNCj4gPj4NCj4gPj4gSWYgdGhpcyBpc24ndCB0
cml2aWFsIHRvIGFycmFuZ2UsIGNhbiB3ZSBhdCBsZWFzdCBzL2FtZF9pb21tdS9hbWQvIGhlcmUg
Pw0KPiA+DQo+ID4gSSB3YXMgaW4gdHdvIG1pbmRzLiBJIHRyaWVkIHRvIGxvb2sgZm9yIGEgVExB
IGZvciB0aGUgQU1EIElPTU1VIGFuZCAnYW1kJyBzZWVtZWQgYSBsaXR0bGUgdG9vIG5vbi0NCj4g
ZGVzY3JpcHQuIEkgZG9uJ3QgcmVhbGx5IG1pbmQgdGhvdWdoIGlmIHRoZXJlJ3MgYSBzdHJvbmcg
cHJlZmVyZW5jZSB0byBzaG9ydGVkIGl0Lg0KPiANCj4gKzEgZm9yIHNob3J0ZW5pbmcgaW4gc29t
ZSB3YXkuIEV2ZW4gYW1kX3ZpIHdvdWxkIGFscmVhZHkgYmUgYmV0dGVyIGltbywNCj4gYWxiZWl0
IEknbSB3aXRoIEFuZHJldyBhbmQgd291bGQgdGhpbmsganVzdCBhbWQgaXMgZmluZSBoZXJlIChh
bmQNCj4gbWF0Y2hlcyBob3cgdGhpbmdzIGFyZSBpbiB0aGUgZmlsZSBzeXN0ZW0gc3RydWN0dXJl
KS4NCj4gDQoNCk9LLCBJJ2xsIHNob3J0ZW4gdG8gJ2FtZCcuDQoNCj4gV2hpbGUgYXQgaXQsIG1h
eSBJIGFzayB0aGF0IHlvdSBhbHNvIHN3aXRjaCB0aGUgcGxhaW4gImludCIgZmllbGRzIHRvDQo+
ICJ1bnNpZ25lZCBpbnQiIC0gSSB0aGluayB0aGF0J3MgZG9hYmxlIGZvciBib3RoIG9mIHRoZW0u
DQo+IA0KDQpTdXJlLCBJIGNhbiBkbyB0aGF0Lg0KDQogIFBhdWwNCg0KPiBKYW4NCg==


From xen-devel-bounces@lists.xenproject.org Mon Jul 27 09:38:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jul 2020 09:38:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jzzaj-0006eM-3G; Mon, 27 Jul 2020 09:38:41 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=TI7O=BG=amazon.co.uk=prvs=4708ece4a=pdurrant@srs-us1.protection.inumbo.net>)
 id 1jzzai-0006eH-El
 for xen-devel@lists.xenproject.org; Mon, 27 Jul 2020 09:38:40 +0000
X-Inumbo-ID: f33da472-cfec-11ea-8a81-bc764e2007e4
Received: from smtp-fw-9101.amazon.com (unknown [207.171.184.25])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f33da472-cfec-11ea-8a81-bc764e2007e4;
 Mon, 27 Jul 2020 09:38:39 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=amazon.co.uk; i=@amazon.co.uk; q=dns/txt;
 s=amazon201209; t=1595842720; x=1627378720;
 h=from:to:cc:date:message-id:references:in-reply-to:
 content-transfer-encoding:mime-version:subject;
 bh=0VLjvdKyeGmxbYMfcdQ08Jsa+quogTnBMX/jxawSbsw=;
 b=lgNQIW3QvGQixj3quvrF/zDrxkxP/tMxY2Ur/IAKmNuA8iqcrE9qQzZq
 2bE77Q/RCCQzuuH6NWc5MFLxqAOKLo7NBAFO+EVcWenfn4XwDUtXLbQvV
 hRuc7qD//jsZA9BMo21/uDwgJjj7RNjF+dH3fd6wt/hYbM3xubVvcN3fD 0=;
IronPort-SDR: MvjbV/Tmpm3PDWtolt+8BfAVux5KR42P4ZrIqoMS7HFkh79mkWaFSF7B1fkgUxf2TDOK5jW7xB
 n2uBlm/Eb8+w==
X-IronPort-AV: E=Sophos;i="5.75,402,1589241600"; d="scan'208";a="54963528"
Subject: RE: [PATCH 2/6] x86/iommu: add common page-table allocator
Thread-Topic: [PATCH 2/6] x86/iommu: add common page-table allocator
Received: from sea32-co-svc-lb4-vlan3.sea.corp.amazon.com (HELO
 email-inbound-relay-1d-37fd6b3d.us-east-1.amazon.com) ([10.47.23.38])
 by smtp-border-fw-out-9101.sea19.amazon.com with ESMTP;
 27 Jul 2020 09:38:13 +0000
Received: from EX13MTAUEA002.ant.amazon.com
 (iad55-ws-svc-p15-lb9-vlan3.iad.amazon.com [10.40.159.166])
 by email-inbound-relay-1d-37fd6b3d.us-east-1.amazon.com (Postfix) with ESMTPS
 id 9955D284705; Mon, 27 Jul 2020 09:38:12 +0000 (UTC)
Received: from EX13D32EUC004.ant.amazon.com (10.43.164.121) by
 EX13MTAUEA002.ant.amazon.com (10.43.61.77) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Mon, 27 Jul 2020 09:38:12 +0000
Received: from EX13D32EUC003.ant.amazon.com (10.43.164.24) by
 EX13D32EUC004.ant.amazon.com (10.43.164.121) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Mon, 27 Jul 2020 09:37:48 +0000
Received: from EX13D32EUC003.ant.amazon.com ([10.43.164.24]) by
 EX13D32EUC003.ant.amazon.com ([10.43.164.24]) with mapi id 15.00.1497.006;
 Mon, 27 Jul 2020 09:37:48 +0000
From: "Durrant, Paul" <pdurrant@amazon.co.uk>
To: Andrew Cooper <andrew.cooper3@citrix.com>, Paul Durrant <paul@xen.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Thread-Index: AQJEvXWV1fglpFS8p501Sb8VALJosQEN3iu2Al/FU/SoIwmk8A==
Date: Mon, 27 Jul 2020 09:37:47 +0000
Message-ID: <d329b845e6c348e8bf484e45f051779f@EX13D32EUC003.ant.amazon.com>
References: <20200724164619.1245-1-paul@xen.org>
 <20200724164619.1245-3-paul@xen.org>
 <d0a0c46f-1461-144c-ca62-259b0a1894fa@citrix.com>
In-Reply-To: <d0a0c46f-1461-144c-ca62-259b0a1894fa@citrix.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-ms-exchange-transport-fromentityheader: Hosted
x-originating-ip: [10.43.165.155]
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
MIME-Version: 1.0
Precedence: Bulk
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin Tian <kevin.tian@intel.com>, Wei
 Liu <wl@xen.org>, Jan Beulich <jbeulich@suse.com>,
 =?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

PiAtLS0tLU9yaWdpbmFsIE1lc3NhZ2UtLS0tLQ0KPiBGcm9tOiBBbmRyZXcgQ29vcGVyIDxhbmRy
ZXcuY29vcGVyM0BjaXRyaXguY29tPg0KPiBTZW50OiAyNCBKdWx5IDIwMjAgMTk6MjQNCj4gVG86
IFBhdWwgRHVycmFudCA8cGF1bEB4ZW4ub3JnPjsgeGVuLWRldmVsQGxpc3RzLnhlbnByb2plY3Qu
b3JnDQo+IENjOiBEdXJyYW50LCBQYXVsIDxwZHVycmFudEBhbWF6b24uY28udWs+OyBKYW4gQmV1
bGljaCA8amJldWxpY2hAc3VzZS5jb20+OyBLZXZpbiBUaWFuDQo+IDxrZXZpbi50aWFuQGludGVs
LmNvbT47IFdlaSBMaXUgPHdsQHhlbi5vcmc+OyBSb2dlciBQYXUgTW9ubsOpIDxyb2dlci5wYXVA
Y2l0cml4LmNvbT4NCj4gU3ViamVjdDogUkU6IFtFWFRFUk5BTF0gW1BBVENIIDIvNl0geDg2L2lv
bW11OiBhZGQgY29tbW9uIHBhZ2UtdGFibGUgYWxsb2NhdG9yDQo+IA0KPiBDQVVUSU9OOiBUaGlz
IGVtYWlsIG9yaWdpbmF0ZWQgZnJvbSBvdXRzaWRlIG9mIHRoZSBvcmdhbml6YXRpb24uIERvIG5v
dCBjbGljayBsaW5rcyBvciBvcGVuDQo+IGF0dGFjaG1lbnRzIHVubGVzcyB5b3UgY2FuIGNvbmZp
cm0gdGhlIHNlbmRlciBhbmQga25vdyB0aGUgY29udGVudCBpcyBzYWZlLg0KPiANCj4gDQo+IA0K
PiBPbiAyNC8wNy8yMDIwIDE3OjQ2LCBQYXVsIER1cnJhbnQgd3JvdGU6DQo+ID4gRnJvbTogUGF1
bCBEdXJyYW50IDxwZHVycmFudEBhbWF6b24uY29tPg0KPiA+DQo+ID4gSW5zdGVhZCBvZiBoYXZp
bmcgc2VwYXJhdGUgcGFnZSB0YWJsZSBhbGxvY2F0aW9uIGZ1bmN0aW9ucyBpbiBWVC1kIGFuZCBB
TUQNCj4gPiBJT01NVSBjb2RlLCB1c2UgYSBjb21tb24gYWxsb2NhdGlvbiBmdW5jdGlvbiBpbiB0
aGUgZ2VuZXJhbCB4ODYgY29kZS4NCj4gPiBBbHNvLCByYXRoZXIgdGhhbiB3YWxraW5nIHRoZSBw
YWdlIHRhYmxlcyBhbmQgdXNpbmcgYSB0YXNrbGV0IHRvIGZyZWUgdGhlbQ0KPiA+IGR1cmluZyBp
b21tdV9kb21haW5fZGVzdHJveSgpLCBhZGQgYWxsb2NhdGVkIHBhZ2UgdGFibGUgcGFnZXMgdG8g
YSBsaXN0IGFuZA0KPiA+IHRoZW4gc2ltcGx5IHdhbGsgdGhlIGxpc3QgdG8gZnJlZSB0aGVtLiBU
aGlzIHNhdmVzIH45MCBsaW5lcyBvZiBjb2RlIG92ZXJhbGwuDQo+ID4NCj4gPiBOT1RFOiBUaGVy
ZSBpcyBubyBuZWVkIHRvIGNsZWFyIGFuZCBzeW5jIFBURXMgZHVyaW5nIHRlYXJkb3duIHNpbmNl
IHRoZSBwZXItDQo+ID4gICAgICAgZGV2aWNlIHJvb3QgZW50cmllcyB3aWxsIGhhdmUgYWxyZWFk
eSBiZWVuIGNsZWFyZWQgKHdoZW4gZGV2aWNlcyB3ZXJlDQo+ID4gICAgICAgZGUtYXNzaWduZWQp
IHNvIHRoZSBwYWdlIHRhYmxlcyBjYW4gbm8gbG9uZ2VyIGJlIGFjY2Vzc2VkIGJ5IHRoZSBJT01N
VS4NCj4gPg0KPiA+IFNpZ25lZC1vZmYtYnk6IFBhdWwgRHVycmFudCA8cGR1cnJhbnRAYW1hem9u
LmNvbT4NCj4gDQo+IE9oIHdvdyAtIEkgZG9uJ3QgaGF2ZSBhbnkgcG9saXRlIHdvcmRzIGZvciBo
b3cgdGhhdCBjb2RlIHdhcyBvcmdhbmlzZWQNCj4gYmVmb3JlLg0KPiANCj4gSW5zdGVhZCBvZiBk
aXNjdXNzaW5nIHRoZSB+OTAgbGluZXMgc2F2ZWQsIHdoYXQgYWJvdXQgdGhlIHJlbW92YWwgb2Yg
YQ0KPiBnbG9iYWwgYm90dGxlbmVjayAodGhlIHRhc2tsZXQpIG9yIGV4cGFuZCBvbiB0aGUgcmVt
b3ZhbCBvZiByZWR1bmRhbnQNCj4gVExCL2NhY2hlIG1haW50ZW5hbmNlPw0KPiANCg0KT2suDQoN
Cj4gPiBkaWZmIC0tZ2l0IGEveGVuL2RyaXZlcnMvcGFzc3Rocm91Z2gvYW1kL3BjaV9hbWRfaW9t
bXUuYw0KPiBiL3hlbi9kcml2ZXJzL3Bhc3N0aHJvdWdoL2FtZC9wY2lfYW1kX2lvbW11LmMNCj4g
PiBpbmRleCBjMzg2ZGM0Mzg3Li5mZDliMWU3YmQ1IDEwMDY0NA0KPiA+IC0tLSBhL3hlbi9kcml2
ZXJzL3Bhc3N0aHJvdWdoL2FtZC9wY2lfYW1kX2lvbW11LmMNCj4gPiArKysgYi94ZW4vZHJpdmVy
cy9wYXNzdGhyb3VnaC9hbWQvcGNpX2FtZF9pb21tdS5jDQo+ID4gQEAgLTM3OCw2NCArMzgwLDkg
QEAgc3RhdGljIGludCBhbWRfaW9tbXVfYXNzaWduX2RldmljZShzdHJ1Y3QgZG9tYWluICpkLCB1
OCBkZXZmbiwNCj4gPiAgICAgIHJldHVybiByZWFzc2lnbl9kZXZpY2UocGRldi0+ZG9tYWluLCBk
LCBkZXZmbiwgcGRldik7DQo+ID4gIH0NCj4gPg0KPiA+IC1zdGF0aWMgdm9pZCBkZWFsbG9jYXRl
X25leHRfcGFnZV90YWJsZShzdHJ1Y3QgcGFnZV9pbmZvICpwZywgaW50IGxldmVsKQ0KPiA+IC17
DQo+ID4gLSAgICBQRk5fT1JERVIocGcpID0gbGV2ZWw7DQo+ID4gLSAgICBzcGluX2xvY2soJmlv
bW11X3B0X2NsZWFudXBfbG9jayk7DQo+ID4gLSAgICBwYWdlX2xpc3RfYWRkX3RhaWwocGcsICZp
b21tdV9wdF9jbGVhbnVwX2xpc3QpOw0KPiA+IC0gICAgc3Bpbl91bmxvY2soJmlvbW11X3B0X2Ns
ZWFudXBfbG9jayk7DQo+ID4gLX0NCj4gPiAtDQo+ID4gLXN0YXRpYyB2b2lkIGRlYWxsb2NhdGVf
cGFnZV90YWJsZShzdHJ1Y3QgcGFnZV9pbmZvICpwZykNCj4gPiAtew0KPiA+IC0gICAgc3RydWN0
IGFtZF9pb21tdV9wdGUgKnRhYmxlX3ZhZGRyOw0KPiA+IC0gICAgdW5zaWduZWQgaW50IGluZGV4
LCBsZXZlbCA9IFBGTl9PUkRFUihwZyk7DQo+ID4gLQ0KPiA+IC0gICAgUEZOX09SREVSKHBnKSA9
IDA7DQo+ID4gLQ0KPiA+IC0gICAgaWYgKCBsZXZlbCA8PSAxICkNCj4gPiAtICAgIHsNCj4gPiAt
ICAgICAgICBmcmVlX2FtZF9pb21tdV9wZ3RhYmxlKHBnKTsNCj4gPiAtICAgICAgICByZXR1cm47
DQo+ID4gLSAgICB9DQo+ID4gLQ0KPiA+IC0gICAgdGFibGVfdmFkZHIgPSBfX21hcF9kb21haW5f
cGFnZShwZyk7DQo+ID4gLQ0KPiA+IC0gICAgZm9yICggaW5kZXggPSAwOyBpbmRleCA8IFBURV9Q
RVJfVEFCTEVfU0laRTsgaW5kZXgrKyApDQo+ID4gLSAgICB7DQo+ID4gLSAgICAgICAgc3RydWN0
IGFtZF9pb21tdV9wdGUgKnBkZSA9ICZ0YWJsZV92YWRkcltpbmRleF07DQo+ID4gLQ0KPiA+IC0g
ICAgICAgIGlmICggcGRlLT5tZm4gJiYgcGRlLT5uZXh0X2xldmVsICYmIHBkZS0+cHIgKQ0KPiA+
IC0gICAgICAgIHsNCj4gPiAtICAgICAgICAgICAgLyogV2UgZG8gbm90IHN1cHBvcnQgc2tpcCBs
ZXZlbHMgeWV0ICovDQo+ID4gLSAgICAgICAgICAgIEFTU0VSVChwZGUtPm5leHRfbGV2ZWwgPT0g
bGV2ZWwgLSAxKTsNCj4gPiAtICAgICAgICAgICAgZGVhbGxvY2F0ZV9uZXh0X3BhZ2VfdGFibGUo
bWZuX3RvX3BhZ2UoX21mbihwZGUtPm1mbikpLA0KPiA+IC0gICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICBwZGUtPm5leHRfbGV2ZWwpOw0KPiA+IC0gICAgICAgIH0NCj4gPiAt
ICAgIH0NCj4gPiAtDQo+ID4gLSAgICB1bm1hcF9kb21haW5fcGFnZSh0YWJsZV92YWRkcik7DQo+
ID4gLSAgICBmcmVlX2FtZF9pb21tdV9wZ3RhYmxlKHBnKTsNCj4gPiAtfQ0KPiA+IC0NCj4gPiAt
c3RhdGljIHZvaWQgZGVhbGxvY2F0ZV9pb21tdV9wYWdlX3RhYmxlcyhzdHJ1Y3QgZG9tYWluICpk
KQ0KPiA+IC17DQo+ID4gLSAgICBzdHJ1Y3QgZG9tYWluX2lvbW11ICpoZCA9IGRvbV9pb21tdShk
KTsNCj4gPiAtDQo+ID4gLSAgICBzcGluX2xvY2soJmhkLT5hcmNoLm1hcHBpbmdfbG9jayk7DQo+
ID4gLSAgICBpZiAoIGhkLT5hcmNoLmFtZF9pb21tdS5yb290X3RhYmxlICkNCj4gPiAtICAgIHsN
Cj4gPiAtICAgICAgICBkZWFsbG9jYXRlX25leHRfcGFnZV90YWJsZShoZC0+YXJjaC5hbWRfaW9t
bXUucm9vdF90YWJsZSwNCj4gPiAtICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBo
ZC0+YXJjaC5hbWRfaW9tbXUucGFnaW5nX21vZGUpOw0KPiANCj4gSSByZWFsbHkgbmVlZCB0byBk
dXN0IG9mZiBteSBwYXRjaCBmaXhpbmcgdXAgc2V2ZXJhbCBiaXRzIG9mIGR1YmlvdXMNCj4gbG9n
aWMsIGluY2x1ZGluZyB0aGUgbmFtZSAicGFnaW5nX21vZGUiIHdoaWNoIGlzIGFjdHVhbGx5IHNp
bXBseSB0aGUNCj4gbnVtYmVyIG9mIGxldmVscy4NCj4gDQo+IEF0IHRoaXMgcG9pbnQsIGl0IHdp
bGwgcHJvYmFibHkgYmUgYmVzdCB0byBnZXQgdGhpcyBzZXJpZXMgaW4gZmlyc3QsIGFuZA0KPiBm
b3IgbWUgdG8gcmViYXNlLg0KPiANCg0KT2suDQoNCj4gPiAtICAgICAgICBoZC0+YXJjaC5hbWRf
aW9tbXUucm9vdF90YWJsZSA9IE5VTEw7DQo+ID4gLSAgICB9DQo+ID4gLSAgICBzcGluX3VubG9j
aygmaGQtPmFyY2gubWFwcGluZ19sb2NrKTsNCj4gPiAtfQ0KPiA+IC0NCj4gPiAtDQo+ID4gIHN0
YXRpYyB2b2lkIGFtZF9pb21tdV9kb21haW5fZGVzdHJveShzdHJ1Y3QgZG9tYWluICpkKQ0KPiA+
ICB7DQo+ID4gLSAgICBkZWFsbG9jYXRlX2lvbW11X3BhZ2VfdGFibGVzKGQpOw0KPiA+ICsgICAg
ZG9tX2lvbW11KGQpLT5hcmNoLmFtZF9pb21tdS5yb290X3RhYmxlID0gTlVMTDsNCj4gPiAgICAg
IGFtZF9pb21tdV9mbHVzaF9hbGxfcGFnZXMoZCk7DQo+IA0KPiBQZXIgeW91ciBOT1RFOiwgc2hv
dWxkbid0IHRoaXMgZmx1c2ggY2FsbCBiZSBkcm9wcGVkPw0KPiANCg0KSW5kZWVkIGl0IHNob3Vs
ZC4NCg0KPiA+IGRpZmYgLS1naXQgYS94ZW4vZHJpdmVycy9wYXNzdGhyb3VnaC94ODYvaW9tbXUu
YyBiL3hlbi9kcml2ZXJzL3Bhc3N0aHJvdWdoL3g4Ni9pb21tdS5jDQo+ID4gaW5kZXggYTEyMTA5
YTFkZS4uYjNjN2RhMGZlMiAxMDA2NDQNCj4gPiAtLS0gYS94ZW4vZHJpdmVycy9wYXNzdGhyb3Vn
aC94ODYvaW9tbXUuYw0KPiA+ICsrKyBiL3hlbi9kcml2ZXJzL3Bhc3N0aHJvdWdoL3g4Ni9pb21t
dS5jDQo+ID4gQEAgLTE0MCwxMSArMTQwLDE5IEBAIGludCBhcmNoX2lvbW11X2RvbWFpbl9pbml0
KHN0cnVjdCBkb21haW4gKmQpDQo+ID4NCj4gPiAgICAgIHNwaW5fbG9ja19pbml0KCZoZC0+YXJj
aC5tYXBwaW5nX2xvY2spOw0KPiA+DQo+ID4gKyAgICBJTklUX1BBR0VfTElTVF9IRUFEKCZoZC0+
YXJjaC5wZ3RhYmxlcy5saXN0KTsNCj4gPiArICAgIHNwaW5fbG9ja19pbml0KCZoZC0+YXJjaC5w
Z3RhYmxlcy5sb2NrKTsNCj4gPiArDQo+ID4gICAgICByZXR1cm4gMDsNCj4gPiAgfQ0KPiA+DQo+
ID4gIHZvaWQgYXJjaF9pb21tdV9kb21haW5fZGVzdHJveShzdHJ1Y3QgZG9tYWluICpkKQ0KPiA+
ICB7DQo+ID4gKyAgICBzdHJ1Y3QgZG9tYWluX2lvbW11ICpoZCA9IGRvbV9pb21tdShkKTsNCj4g
PiArICAgIHN0cnVjdCBwYWdlX2luZm8gKnBnOw0KPiA+ICsNCj4gPiArICAgIHdoaWxlICggKHBn
ID0gcGFnZV9saXN0X3JlbW92ZV9oZWFkKCZoZC0+YXJjaC5wZ3RhYmxlcy5saXN0KSkgKQ0KPiA+
ICsgICAgICAgIGZyZWVfZG9taGVhcF9wYWdlKHBnKTsNCj4gDQo+IFNvbWUgb2YgdGhvc2UgOTAg
bGluZXMgc2F2ZWQgd2VyZSB0aGUgbG9naWMgdG8gbm90IHN1ZmZlciBhIHdhdGNoZG9nDQo+IHRp
bWVvdXQgaGVyZS4NCj4gDQo+IFRvIGRvIGl0IGxpa2UgdGhpcywgaXQgbmVlZHMgcGx1bWJpbmcg
aW50byB0aGUgcmVsaW5xdWlzaCByZXNvdXJjZXMgcGF0aC4NCj4gDQoNCk9rLiBJIGRvZXMgbG9v
ayBsaWtlIHRoZXJlIGNvdWxkIGJlIG90aGVyIHBvdGVudGlhbGx5IGxlbmd0aHkgZGVzdHJ1Y3Rp
b24gZG9uZSBvZmYgdGhlIGJhY2sgb2YgdGhlIFJDVSBjYWxsLiBPdWdodCB3ZSBoYXZlIHRoZSBh
YmlsaXR5IHRvIGhhdmUgYSByZXN0YXJ0YWJsZSBkb21haW5fZGVzdHJveSgpPw0KDQo+ID4gIH0N
Cj4gPg0KPiA+ICBzdGF0aWMgYm9vbCBfX2h3ZG9tX2luaXQgaHdkb21faW9tbXVfbWFwKGNvbnN0
IHN0cnVjdCBkb21haW4gKmQsDQo+ID4gQEAgLTI1Nyw2ICsyNjUsMzkgQEAgdm9pZCBfX2h3ZG9t
X2luaXQgYXJjaF9pb21tdV9od2RvbV9pbml0KHN0cnVjdCBkb21haW4gKmQpDQo+ID4gICAgICAg
ICAgcmV0dXJuOw0KPiA+ICB9DQo+ID4NCj4gPiArc3RydWN0IHBhZ2VfaW5mbyAqaW9tbXVfYWxs
b2NfcGd0YWJsZShzdHJ1Y3QgZG9tYWluICpkKQ0KPiA+ICt7DQo+ID4gKyAgICBzdHJ1Y3QgZG9t
YWluX2lvbW11ICpoZCA9IGRvbV9pb21tdShkKTsNCj4gPiArI2lmZGVmIENPTkZJR19OVU1BDQo+
ID4gKyAgICB1bnNpZ25lZCBpbnQgbWVtZmxhZ3MgPSAoaGQtPm5vZGUgPT0gTlVNQV9OT19OT0RF
KSA/DQo+ID4gKyAgICAgICAgMCA6IE1FTUZfbm9kZShoZC0+bm9kZSk7DQo+ID4gKyNlbHNlDQo+
ID4gKyAgICB1bnNpZ25lZCBpbnQgbWVtZmxhZ3MgPSAwOw0KPiA+ICsjZW5kaWYNCj4gPiArICAg
IHN0cnVjdCBwYWdlX2luZm8gKnBnOw0KPiA+ICsgICAgdm9pZCAqcDsNCj4gDQo+IFRoZSBtZW1m
bGFncyBjb2RlIGlzIHZlcnkgYXdrd2FyZC4gIEhvdyBhYm91dCBpbml0aWFsaXNpbmcgaXQgdG8g
MCwgYW5kDQo+IGhhdmluZzoNCj4gDQo+ICNpZmRlZiBDT05GSUdfTlVNQQ0KPiAgICAgaWYgKCBo
ZC0+bm9kZSAhPSBOVU1BX05PX05PREUgKQ0KPiAgICAgICAgIG1lbWZsYWdzID0gTUVNRl9ub2Rl
KGhkLT5ub2RlKTsNCj4gI2VuZGlmDQo+IA0KPiBoZXJlPw0KPiANCg0KU3VyZS4NCg0KPiA+ICsN
Cj4gPiArICAgIEJVR19PTighaW9tbXVfZW5hYmxlZCk7DQo+IA0KPiBJcyB0aGlzIHJlYWxseSBu
ZWNlc3Nhcnk/ICBDYW4gd2UgcGxhdXNpYmx5IGVuZCB1cCBpbiB0aGlzIGZ1bmN0aW9uDQo+IG90
aGVyd2lzZT8NCj4gDQoNCk5vdCByZWFsbHk7IEknbGwgZHJvcCBpdC4NCg0KPiANCj4gT3ZlcmFs
bCwgSSB3b25kZXIgaWYgdGhpcyBwYXRjaCB3b3VsZCBiZXR0ZXIgYmUgc3BsaXQgaW50byBzZXZl
cmFsLiAgT25lDQo+IHdoaWNoIGludHJvZHVjZXMgdGhlIGNvbW1vbiBhbGxvYy9mcmVlIGltcGxl
bWVudGF0aW9uLCB0d28gd2hpY2ggc3dpdGNoDQo+IHRoZSBWVC1kIGFuZCBBTUQgaW1wbGVtZW50
YXRpb25zIG92ZXIsIGFuZCBwb3NzaWJseSBvbmUgY2xlYW4tdXAgb24gdGhlIGVuZD8NCj4gDQoN
Ck9rLCBpZiB5b3UgZmVlbCB0aGUgcGF0Y2ggaXMgdG9vIGxhcmdlIGFzLWlzIHRoZW4gSSdsbCBz
cGxpdCBhcyB5b3Ugc3VnZ2VzdC4NCg0KICBQYXVsDQoNCj4gfkFuZHJldw0K


From xen-devel-bounces@lists.xenproject.org Mon Jul 27 09:58:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jul 2020 09:58:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jzztX-0008P5-LA; Mon, 27 Jul 2020 09:58:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=TI7O=BG=amazon.co.uk=prvs=4708ece4a=pdurrant@srs-us1.protection.inumbo.net>)
 id 1jzztW-0008P0-AD
 for xen-devel@lists.xenproject.org; Mon, 27 Jul 2020 09:58:06 +0000
X-Inumbo-ID: aa475c1b-cfef-11ea-8a85-bc764e2007e4
Received: from smtp-fw-6001.amazon.com (unknown [52.95.48.154])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id aa475c1b-cfef-11ea-8a85-bc764e2007e4;
 Mon, 27 Jul 2020 09:58:05 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=amazon.co.uk; i=@amazon.co.uk; q=dns/txt;
 s=amazon201209; t=1595843885; x=1627379885;
 h=from:to:cc:date:message-id:references:in-reply-to:
 content-transfer-encoding:mime-version:subject;
 bh=i8avPvn+qNyQlxziY57K/zKc+nExyx4EzSJ6b8bK/VU=;
 b=gNNgE1cYNU9clA0rtroi18AjTDTs0BB0xFPXAx17pcThaXLl27BD81fP
 iuGu240oU45W9t3+ywU8G4vl5IJnf3bR/lmAGQgM1JK6xNCFs9FkM93Yp
 8umsFNB17JNOBEcnzxpN487qzgo8K0m6KAHjK7DxcO3mrlV7Qe1qcsAiP E=;
IronPort-SDR: HuskNuZGOfO8CTCffv5/hpJSUIYqXDgVF6YfOKsC9GSr8FpM5PQOhZIRTmaPMw8hTGv5C7mi1z
 ZfLkmmg0QBWw==
X-IronPort-AV: E=Sophos;i="5.75,402,1589241600"; d="scan'208";a="45665271"
Subject: RE: [PATCH 3/6] iommu: remove iommu_lookup_page() and the
 lookup_page() method...
Thread-Topic: [PATCH 3/6] iommu: remove iommu_lookup_page() and the
 lookup_page() method...
Received: from iad12-co-svc-p1-lb1-vlan3.amazon.com (HELO
 email-inbound-relay-1e-303d0b0e.us-east-1.amazon.com) ([10.43.8.6])
 by smtp-border-fw-out-6001.iad6.amazon.com with ESMTP;
 27 Jul 2020 09:58:05 +0000
Received: from EX13MTAUEA002.ant.amazon.com
 (iad55-ws-svc-p15-lb9-vlan2.iad.amazon.com [10.40.159.162])
 by email-inbound-relay-1e-303d0b0e.us-east-1.amazon.com (Postfix) with ESMTPS
 id 7CF0FA239C; Mon, 27 Jul 2020 09:58:04 +0000 (UTC)
Received: from EX13D32EUC002.ant.amazon.com (10.43.164.94) by
 EX13MTAUEA002.ant.amazon.com (10.43.61.77) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Mon, 27 Jul 2020 09:58:04 +0000
Received: from EX13D32EUC003.ant.amazon.com (10.43.164.24) by
 EX13D32EUC002.ant.amazon.com (10.43.164.94) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Mon, 27 Jul 2020 09:58:03 +0000
Received: from EX13D32EUC003.ant.amazon.com ([10.43.164.24]) by
 EX13D32EUC003.ant.amazon.com ([10.43.164.24]) with mapi id 15.00.1497.006;
 Mon, 27 Jul 2020 09:58:03 +0000
From: "Durrant, Paul" <pdurrant@amazon.co.uk>
To: Jan Beulich <jbeulich@suse.com>, "paul@xen.org" <paul@xen.org>
Thread-Index: AQJEvXWV1fglpFS8p501Sb8VALJosQFSVPHsAgBGpZcCc8dadQJMAgC7p/3ofVA=
Date: Mon, 27 Jul 2020 09:58:03 +0000
Message-ID: <35f4099b80b34a20b2d8d86faf990733@EX13D32EUC003.ant.amazon.com>
References: <20200724164619.1245-1-paul@xen.org>
 <20200724164619.1245-4-paul@xen.org>
 <c47710e1-fcb6-3b5d-ff6a-d237a4149b3b@citrix.com>
 <000101d661eb$c68a75a0$539f60e0$@xen.org>
 <35260401-fd2b-2eba-6e9b-a274cb8c057b@suse.com>
In-Reply-To: <35260401-fd2b-2eba-6e9b-a274cb8c057b@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-ms-exchange-transport-fromentityheader: Hosted
x-originating-ip: [10.43.165.155]
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
MIME-Version: 1.0
Precedence: Bulk
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: 'Andrew Cooper' <andrew.cooper3@citrix.com>, 'Kevin
 Tian' <kevin.tian@intel.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

PiAtLS0tLU9yaWdpbmFsIE1lc3NhZ2UtLS0tLQ0KPiBGcm9tOiBKYW4gQmV1bGljaCA8amJldWxp
Y2hAc3VzZS5jb20+DQo+IFNlbnQ6IDI2IEp1bHkgMjAyMCAwOToyOA0KPiBUbzogcGF1bEB4ZW4u
b3JnDQo+IENjOiAnQW5kcmV3IENvb3BlcicgPGFuZHJldy5jb29wZXIzQGNpdHJpeC5jb20+OyB4
ZW4tZGV2ZWxAbGlzdHMueGVucHJvamVjdC5vcmc7IER1cnJhbnQsIFBhdWwNCj4gPHBkdXJyYW50
QGFtYXpvbi5jby51az47ICdLZXZpbiBUaWFuJyA8a2V2aW4udGlhbkBpbnRlbC5jb20+DQo+IFN1
YmplY3Q6IFJFOiBbRVhURVJOQUxdIFtQQVRDSCAzLzZdIGlvbW11OiByZW1vdmUgaW9tbXVfbG9v
a3VwX3BhZ2UoKSBhbmQgdGhlIGxvb2t1cF9wYWdlKCkgbWV0aG9kLi4uDQo+IA0KPiBDQVVUSU9O
OiBUaGlzIGVtYWlsIG9yaWdpbmF0ZWQgZnJvbSBvdXRzaWRlIG9mIHRoZSBvcmdhbml6YXRpb24u
IERvIG5vdCBjbGljayBsaW5rcyBvciBvcGVuDQo+IGF0dGFjaG1lbnRzIHVubGVzcyB5b3UgY2Fu
IGNvbmZpcm0gdGhlIHNlbmRlciBhbmQga25vdyB0aGUgY29udGVudCBpcyBzYWZlLg0KPiANCj4g
DQo+IA0KPiBPbiAyNC4wNy4yMDIwIDIwOjUzLCBQYXVsIER1cnJhbnQgd3JvdGU6DQo+ID4+IEZy
b206IEFuZHJldyBDb29wZXIgPGFuZHJldy5jb29wZXIzQGNpdHJpeC5jb20+DQo+ID4+IFNlbnQ6
IDI0IEp1bHkgMjAyMCAxOTozOQ0KPiA+Pg0KPiA+PiBPbiAyNC8wNy8yMDIwIDE3OjQ2LCBQYXVs
IER1cnJhbnQgd3JvdGU6DQo+ID4+PiBGcm9tOiBQYXVsIER1cnJhbnQgPHBkdXJyYW50QGFtYXpv
bi5jb20+DQo+ID4+Pg0KPiA+Pj4gLi4uIGZyb20gaW9tbXVfb3BzLg0KPiA+Pj4NCj4gPj4+IFRo
aXMgcGF0Y2ggaXMgZXNzZW50aWFsbHkgYSByZXZlcnNpb24gb2YgZGQ5M2Q1NGYgInZ0ZDogYWRk
IGxvb2t1cF9wYWdlIG1ldGhvZA0KPiA+Pj4gdG8gaW9tbXVfb3BzIi4gVGhlIGNvZGUgd2FzIGlu
dGVuZGVkIHRvIGJlIHVzZWQgYnkgYSBwYXRjaCB0aGF0IGhhcyBsb25nLQ0KPiA+Pj4gc2luY2Ug
YmVlbiBhYmFuZG9uZWQuIFRoZXJlZm9yZSBpdCBpcyBkZWFkIGNvZGUgYW5kIGNhbiBiZSByZW1v
dmVkLg0KPiA+Pg0KPiA+PiBBbmQgYnkgdGhpcywgeW91IG1lYW4gdGhlIHdvcmsgdGhhdCB5b3Ug
b25seSBwYXJ0aWFsIHVuc3RyZWFtZWQsIHdpdGgNCj4gPj4gdGhlIHJlbWFpbmRlciBvZiB0aGUg
ZmVhdHVyZSBzdGlsbCB2ZXJ5IG11Y2ggaW4gdXNlIGJ5IFhlblNlcnZlcj8NCj4gPj4NCj4gPg0K
PiA+IEkgdGhvdWdodCB3ZSBiYXNpY2FsbHkgZGVjaWRlZCB0byBiaW4gdGhlIG9yaWdpbmFsIFBW
IElPTU1VIGlkZWEgdGhvdWdoPw0KPiANCj4gRGlkIHdlPyBJdCdzIHRoZSBmaXJzdCB0aW1lIEkg
aGVhciBvZiBpdCwgSSB0aGluay4NCj4gDQoNCkkgY2lyY3VsYXRlZCBhIGRvYy4gYWdlcyBhZ28s
IHdoaWxlIEkgd2FzIHN0aWxsIGF0IENpdHJpeDogaHR0cHM6Ly9kb2NzLmdvb2dsZS5jb20vZG9j
dW1lbnQvZC8xMi16NkpENDFKX29OckNnX2MweUF4R1dnNUFEQlE4X2JTaVBfTkg2SHF3by9lZGl0
P3VzcD1zaGFyaW5nDQoNCkluIHRoZXJlIEkgcHJvcG9zZSB0aGF0IHdlIGRvbid0IGZvbGxvdyB0
aGUgb3JpZ2luYWwgaWRlYSBvZiBrZWVwaW5nIGEgc2luZ2xlIHNldCBvZiBwZXItZG9tYWluIHRh
YmxlcyBidXQgaW5zdGVhZCBoYXZlIGEgc2V0IG9mIHRhYmxlcyAob3IgSU9NTVUgY29udGV4dHMp
IGZvciBncm91cHMgb2YgZGV2aWNlcy4gJ0NvbnRleHQgMCcgaXMgdGhlIGN1cnJlbnQgc2V0IG9m
IHN0YXRpYyAxOjEgdGFibGVzIGJ1dCBvdGhlciBjb250ZXh0cyBhcmUgbWFuaXB1bGF0ZWQgYnkg
aHlwZXJjYWxsIHNvLCBpbiB0aGlzIHBsYW4sIEkgZG9uJ3QgZW52aXNhZ2UgdGhlIG5lZWQgdG8g
bG9vayB1cCBtYXBwaW5ncyBpbiB0aGUgdGFibGVzIGluIHRoaXMgd2F5Li4uIGJ1dCBJIGd1ZXNz
IEkgY2FuJ3QgcnVsZSBpdCBvdXQuDQoNCiBQYXVsIA0KDQo=


From xen-devel-bounces@lists.xenproject.org Mon Jul 27 10:05:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jul 2020 10:05:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k000D-0000wC-Hf; Mon, 27 Jul 2020 10:05:01 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=8heM=BG=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1k000C-0000w7-EP
 for xen-devel@lists.xenproject.org; Mon, 27 Jul 2020 10:05:00 +0000
X-Inumbo-ID: a1370a02-cff0-11ea-8a85-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a1370a02-cff0-11ea-8a85-bc764e2007e4;
 Mon, 27 Jul 2020 10:04:59 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 10B4EAD11;
 Mon, 27 Jul 2020 10:05:09 +0000 (UTC)
Subject: Re: [PATCH] xen: hypercall.h: fix duplicated word
To: Randy Dunlap <rdunlap@infradead.org>, linux-kernel@vger.kernel.org
References: <20200726001731.19540-1-rdunlap@infradead.org>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <076a4b2e-51f2-bff9-b93f-2a44258df2e7@suse.com>
Date: Mon, 27 Jul 2020 12:04:58 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20200726001731.19540-1-rdunlap@infradead.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 26.07.20 02:17, Randy Dunlap wrote:
> Change the repeated word "as" to "as a".
> 
> Signed-off-by: Randy Dunlap <rdunlap@infradead.org>

Reviewed-by: Juergen Gross <jgross@suse.com>


Juergen


From xen-devel-bounces@lists.xenproject.org Mon Jul 27 10:14:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jul 2020 10:14:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k008m-0001qP-E6; Mon, 27 Jul 2020 10:13:52 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=TI7O=BG=amazon.co.uk=prvs=4708ece4a=pdurrant@srs-us1.protection.inumbo.net>)
 id 1k008k-0001pg-A5
 for xen-devel@lists.xenproject.org; Mon, 27 Jul 2020 10:13:50 +0000
X-Inumbo-ID: dd6d23b6-cff1-11ea-8a85-bc764e2007e4
Received: from smtp-fw-4101.amazon.com (unknown [72.21.198.25])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id dd6d23b6-cff1-11ea-8a85-bc764e2007e4;
 Mon, 27 Jul 2020 10:13:49 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=amazon.co.uk; i=@amazon.co.uk; q=dns/txt;
 s=amazon201209; t=1595844830; x=1627380830;
 h=from:to:cc:date:message-id:references:in-reply-to:
 content-transfer-encoding:mime-version:subject;
 bh=r5nDZhrxGII8pB7FymrLLyH1jsMWLlJfmuypoJsDnnc=;
 b=dk5vej5KmyBz0G9NawHAsb8sLeHOteCgTJKI3x69WBkFRv4gJq+qBjs2
 Nt2CPclNGsoKsH7R/Y6fNeCVvbvi9Q8IXmWdr0/gyrRYeplI7sAKH7ukw
 WtxuUpVnMx2eHa4KgkmkVfuNCwZJjSf3a96ts5mQ/XTtIjQqXDr1v9xTu A=;
IronPort-SDR: sWbUITzr8Fnkldg8I/a3W06pAJKpLtW/k9jYlT0aFgMIYn9enWoAIMQjvwM2NUjO4T7Ul7vU1s
 /vIiHXFOC2Yg==
X-IronPort-AV: E=Sophos;i="5.75,402,1589241600"; d="scan'208";a="44341488"
Subject: RE: [PATCH 6/6] iommu: stop calling IOMMU page tables 'p2m tables'
Thread-Topic: [PATCH 6/6] iommu: stop calling IOMMU page tables 'p2m tables'
Received: from iad12-co-svc-p1-lb1-vlan3.amazon.com (HELO
 email-inbound-relay-2b-5bdc5131.us-west-2.amazon.com) ([10.43.8.6])
 by smtp-border-fw-out-4101.iad4.amazon.com with ESMTP;
 27 Jul 2020 10:13:49 +0000
Received: from EX13MTAUEA002.ant.amazon.com
 (pdx4-ws-svc-p6-lb7-vlan3.pdx.amazon.com [10.170.41.166])
 by email-inbound-relay-2b-5bdc5131.us-west-2.amazon.com (Postfix) with ESMTPS
 id 85BCFA1E1A; Mon, 27 Jul 2020 10:13:47 +0000 (UTC)
Received: from EX13D32EUC002.ant.amazon.com (10.43.164.94) by
 EX13MTAUEA002.ant.amazon.com (10.43.61.77) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Mon, 27 Jul 2020 10:13:46 +0000
Received: from EX13D32EUC003.ant.amazon.com (10.43.164.24) by
 EX13D32EUC002.ant.amazon.com (10.43.164.94) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Mon, 27 Jul 2020 10:13:45 +0000
Received: from EX13D32EUC003.ant.amazon.com ([10.43.164.24]) by
 EX13D32EUC003.ant.amazon.com ([10.43.164.24]) with mapi id 15.00.1497.006;
 Mon, 27 Jul 2020 10:13:45 +0000
From: "Durrant, Paul" <pdurrant@amazon.co.uk>
To: Andrew Cooper <andrew.cooper3@citrix.com>, Paul Durrant <paul@xen.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Thread-Index: AQJEvXWV1fglpFS8p501Sb8VALJosQEff0zhAU0DZhGoKx3ZEA==
Date: Mon, 27 Jul 2020 10:13:45 +0000
Message-ID: <be5b86a5ad6f4ad286dcdf825ac2175e@EX13D32EUC003.ant.amazon.com>
References: <20200724164619.1245-1-paul@xen.org>
 <20200724164619.1245-7-paul@xen.org>
 <4e1c2ed8-dfc4-812b-d341-04bc5eedad8e@citrix.com>
In-Reply-To: <4e1c2ed8-dfc4-812b-d341-04bc5eedad8e@citrix.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-ms-exchange-transport-fromentityheader: Hosted
x-originating-ip: [10.43.165.155]
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
MIME-Version: 1.0
Precedence: Bulk
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin Tian <kevin.tian@intel.com>, Jan Beulich <jbeulich@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

PiAtLS0tLU9yaWdpbmFsIE1lc3NhZ2UtLS0tLQ0KPiBGcm9tOiBBbmRyZXcgQ29vcGVyIDxhbmRy
ZXcuY29vcGVyM0BjaXRyaXguY29tPg0KPiBTZW50OiAyNCBKdWx5IDIwMjAgMjA6MDkNCj4gVG86
IFBhdWwgRHVycmFudCA8cGF1bEB4ZW4ub3JnPjsgeGVuLWRldmVsQGxpc3RzLnhlbnByb2plY3Qu
b3JnDQo+IENjOiBEdXJyYW50LCBQYXVsIDxwZHVycmFudEBhbWF6b24uY28udWs+OyBKYW4gQmV1
bGljaCA8amJldWxpY2hAc3VzZS5jb20+OyBLZXZpbiBUaWFuDQo+IDxrZXZpbi50aWFuQGludGVs
LmNvbT4NCj4gU3ViamVjdDogUkU6IFtFWFRFUk5BTF0gW1BBVENIIDYvNl0gaW9tbXU6IHN0b3Ag
Y2FsbGluZyBJT01NVSBwYWdlIHRhYmxlcyAncDJtIHRhYmxlcycNCj4gDQo+IENBVVRJT046IFRo
aXMgZW1haWwgb3JpZ2luYXRlZCBmcm9tIG91dHNpZGUgb2YgdGhlIG9yZ2FuaXphdGlvbi4gRG8g
bm90IGNsaWNrIGxpbmtzIG9yIG9wZW4NCj4gYXR0YWNobWVudHMgdW5sZXNzIHlvdSBjYW4gY29u
ZmlybSB0aGUgc2VuZGVyIGFuZCBrbm93IHRoZSBjb250ZW50IGlzIHNhZmUuDQo+IA0KPiANCj4g
DQo+IE9uIDI0LzA3LzIwMjAgMTc6NDYsIFBhdWwgRHVycmFudCB3cm90ZToNCj4gPiBkaWZmIC0t
Z2l0IGEveGVuL2RyaXZlcnMvcGFzc3Rocm91Z2gvaW9tbXUuYyBiL3hlbi9kcml2ZXJzL3Bhc3N0
aHJvdWdoL2lvbW11LmMNCj4gPiBpbmRleCA2YTM4MDNmZjJjLi41YmMxOTBiZjk4IDEwMDY0NA0K
PiA+IC0tLSBhL3hlbi9kcml2ZXJzL3Bhc3N0aHJvdWdoL2lvbW11LmMNCj4gPiArKysgYi94ZW4v
ZHJpdmVycy9wYXNzdGhyb3VnaC9pb21tdS5jDQo+ID4gQEAgLTUzNSwxMiArNTM1LDEyIEBAIHN0
YXRpYyB2b2lkIGlvbW11X2R1bXBfcDJtX3RhYmxlKHVuc2lnbmVkIGNoYXIga2V5KQ0KPiA+DQo+
ID4gICAgICAgICAgaWYgKCBpb21tdV91c2VfaGFwX3B0KGQpICkNCj4gPiAgICAgICAgICB7DQo+
ID4gLSAgICAgICAgICAgIHByaW50aygiXG5kb21haW4lZCBJT01NVSBwMm0gdGFibGUgc2hhcmVk
IHdpdGggTU1VOiBcbiIsIGQtPmRvbWFpbl9pZCk7DQo+ID4gKyAgICAgICAgICAgIHByaW50aygi
JXBkOiBJT01NVSBwYWdlIHRhYmxlcyBzaGFyZWQgd2l0aCBNTVVcbiIsIGQpOw0KPiANCj4gQ2hh
bmdlIE1NVSB0byBDUFU/ICBNTVUgaXMgdmVyeSBhbWJpZ3VvdXMgaW4gdGhpcyBjb250ZXh0Lg0K
PiANCg0KQWN0dWFsbHkgSSBjb3VsZCBwdXNoIHRoaXMgaW50byB0aGUgVlQtZCBjb2RlIGFuZCBq
dXN0IHNheSBzb21ldGhpbmcgbGlrZSAnc2hhcmVkIEVQVCBpcyBlbmFibGVkJy4gV291bGQgdGhh
dCBiZSBsZXNzIGFtYmlndW91cz8NCg0KPiA+ICAgICAgICAgICAgICBjb250aW51ZTsNCj4gPiAg
ICAgICAgICB9DQo+ID4NCj4gPiAtICAgICAgICBwcmludGsoIlxuZG9tYWluJWQgSU9NTVUgcDJt
IHRhYmxlOiBcbiIsIGQtPmRvbWFpbl9pZCk7DQo+ID4gLSAgICAgICAgb3BzLT5kdW1wX3AybV90
YWJsZShkKTsNCj4gPiArICAgICAgICBwcmludGsoIiVwZDogSU9NTVUgcGFnZSB0YWJsZXM6IFxu
IiwgZCk7DQo+IA0KPiBEcm9wIHRoZSB0cmFpbGluZyB3aGl0ZXNwYWNlPw0KPiANCg0KU3VyZS4N
Cg0KICBQYXVsDQoNCj4gfkFuZHJldw0K


From xen-devel-bounces@lists.xenproject.org Mon Jul 27 10:32:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jul 2020 10:32:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k00QC-0003Xv-Vt; Mon, 27 Jul 2020 10:31:52 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1WzZ=BG=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1k00QB-0003Xq-Uk
 for xen-devel@lists.xenproject.org; Mon, 27 Jul 2020 10:31:52 +0000
X-Inumbo-ID: 5f8947b0-cff4-11ea-8a88-bc764e2007e4
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5f8947b0-cff4-11ea-8a88-bc764e2007e4;
 Mon, 27 Jul 2020 10:31:47 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595845907;
 h=from:to:cc:subject:date:message-id:mime-version:
 content-transfer-encoding;
 bh=Inmii/NatYqHZDQtxUMZ5GZ4S7ACl637bjZeWDU2bH8=;
 b=HAZ9pa9tBG30Roshy1WNJ5RTExvShBe+iKXXKklgH4G3LVW/Lh1AHv5W
 +iTNZC509w20qRCDZ6VIa/iQRl3fDa1QmAfFZX312WNdjM1fzSVHkoD53
 j4tihpgfPeEcO3I1j/dYQvWbC+2WJDBAkM+8NLEvBFrW4rkQbHaOtA2+L U=;
Authentication-Results: esa5.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: p7b90YH02vB2RhiUob01tl/BXvU2E7+q5ftkRm2jwMa1kvSlxvtQyDgyhEkit7x0V9nhXhoj8Z
 13W9Mqel7MtnrRmITRZBhuGJ6+OEiZYv4xMKQMfBxX8wVq1nF78YEn+0avU1s+9yavkmDeM9WH
 mg+nF5IvIz+6wEKSJefERngJx/ZyTakaBgsPlqXcwk9UZweMOTJl0G7c+yudLApZdLAg34lhYP
 BlSs2kYx3dSgJ+KsxCqDqDXhPHErL6UiCn1PO6hTPLywawvLI0355kFPumxf9ilrKQNaB/yyQg
 +rA=
X-SBRS: 2.7
X-MesageID: 23432805
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,402,1589256000"; d="scan'208";a="23432805"
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xenproject.org>
Subject: [PATCH v3] print: introduce a format specifier for pci_sbdf_t
Date: Mon, 27 Jul 2020 12:31:36 +0200
Message-ID: <20200727103136.53343-1-roger.pau@citrix.com>
X-Mailer: git-send-email 2.27.0
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin Tian <kevin.tian@intel.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Paul Durrant <paul@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Ian
 Jackson <ian.jackson@eu.citrix.com>, George Dunlap <george.dunlap@citrix.com>,
 Julien Grall <julien.grall@arm.com>, Jan Beulich <jbeulich@suse.com>,
 Roger Pau Monne <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

The new format specifier is '%pp', and prints a pci_sbdf_t using the
seg:bus:dev.func format. Replace all SBDFs printed using
'%04x:%02x:%02x.%u' with the new format specifier.

No functional change intended.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Acked-by: Julien Grall <julien.grall@arm.com>
For just the pieces where Jan is the only maintainer:
Acked-by: Jan Beulich <jbeulich@suse.com>
---
Changes since v2:
 - Rebase.

Changes since v1:
 - Use base 8 to print the function number.
 - Sort the addition in the pointer function alphabetically.
---
 docs/misc/printk-formats.txt                |  5 ++
 xen/arch/x86/hvm/vmsi.c                     | 10 +--
 xen/arch/x86/msi.c                          | 37 ++++-----
 xen/common/vsprintf.c                       | 18 ++++
 xen/drivers/passthrough/amd/iommu_acpi.c    | 22 +++--
 xen/drivers/passthrough/amd/iommu_cmd.c     |  5 +-
 xen/drivers/passthrough/amd/iommu_detect.c  |  5 +-
 xen/drivers/passthrough/amd/iommu_init.c    | 18 ++--
 xen/drivers/passthrough/amd/iommu_intr.c    |  9 +-
 xen/drivers/passthrough/amd/pci_amd_iommu.c | 25 ++----
 xen/drivers/passthrough/pci.c               | 91 +++++++++------------
 xen/drivers/passthrough/vtd/dmar.c          | 27 +++---
 xen/drivers/passthrough/vtd/intremap.c      | 11 +--
 xen/drivers/passthrough/vtd/iommu.c         | 80 ++++++++----------
 xen/drivers/passthrough/vtd/quirks.c        | 22 ++---
 xen/drivers/passthrough/vtd/utils.c         |  6 +-
 xen/drivers/passthrough/x86/ats.c           | 13 +--
 xen/drivers/vpci/header.c                   | 11 +--
 xen/drivers/vpci/msi.c                      |  6 +-
 xen/drivers/vpci/msix.c                     | 24 ++----
 20 files changed, 190 insertions(+), 255 deletions(-)

diff --git a/docs/misc/printk-formats.txt b/docs/misc/printk-formats.txt
index 080f498f65..8f666f696a 100644
--- a/docs/misc/printk-formats.txt
+++ b/docs/misc/printk-formats.txt
@@ -48,3 +48,8 @@ Domain and vCPU information:
                The domain part as above, with the vcpu_id printed in decimal.
                  e.g.  d0v1
                        d[IDLE]v0
+
+PCI:
+
+       %pp     PCI device address in S:B:D.F format from a pci_sbdf_t.
+                 e.g.  0004:02:00.0
diff --git a/xen/arch/x86/hvm/vmsi.c b/xen/arch/x86/hvm/vmsi.c
index 5d4eddebee..7ca19353ab 100644
--- a/xen/arch/x86/hvm/vmsi.c
+++ b/xen/arch/x86/hvm/vmsi.c
@@ -697,10 +697,8 @@ static int vpci_msi_update(const struct pci_dev *pdev, uint32_t data,
 
         if ( rc )
         {
-            gdprintk(XENLOG_ERR,
-                     "%04x:%02x:%02x.%u: failed to bind PIRQ %u: %d\n",
-                     pdev->seg, pdev->bus, PCI_SLOT(pdev->devfn),
-                     PCI_FUNC(pdev->devfn), pirq + i, rc);
+            gdprintk(XENLOG_ERR, "%pp: failed to bind PIRQ %u: %d\n",
+                     &pdev->sbdf, pirq + i, rc);
             while ( bind.machine_irq-- > pirq )
                 pt_irq_destroy_bind(pdev->domain, &bind);
             return rc;
@@ -754,9 +752,7 @@ static int vpci_msi_enable(const struct pci_dev *pdev, uint32_t data,
                                    &msi_info);
     if ( rc )
     {
-        gdprintk(XENLOG_ERR, "%04x:%02x:%02x.%u: failed to map PIRQ: %d\n",
-                 pdev->seg, pdev->bus, PCI_SLOT(pdev->devfn),
-                 PCI_FUNC(pdev->devfn), rc);
+        gdprintk(XENLOG_ERR, "%pp: failed to map PIRQ: %d\n", &pdev->sbdf, rc);
         return rc;
     }
 
diff --git a/xen/arch/x86/msi.c b/xen/arch/x86/msi.c
index 161ee60dbe..29e4351a49 100644
--- a/xen/arch/x86/msi.c
+++ b/xen/arch/x86/msi.c
@@ -430,8 +430,8 @@ static bool msi_set_mask_bit(struct irq_desc *desc, bool host, bool guest)
             {
                 pdev->msix->warned = domid;
                 printk(XENLOG_G_WARNING
-                       "cannot mask IRQ %d: masking MSI-X on Dom%d's %04x:%02x:%02x.%u\n",
-                       desc->irq, domid, seg, bus, slot, func);
+                       "cannot mask IRQ %d: masking MSI-X on Dom%d's %pp\n",
+                       desc->irq, domid, &pdev->sbdf);
             }
         }
         pdev->msix->host_maskall = maskall;
@@ -985,11 +985,11 @@ static int msix_capability_init(struct pci_dev *dev,
             struct domain *d = dev->domain ?: currd;
 
             if ( !is_hardware_domain(currd) || d != currd )
-                printk("%s use of MSI-X on %04x:%02x:%02x.%u by Dom%d\n",
+                printk("%s use of MSI-X on %pp by %pd\n",
                        is_hardware_domain(currd)
                        ? XENLOG_WARNING "Potentially insecure"
                        : XENLOG_ERR "Insecure",
-                       seg, bus, slot, func, d->domain_id);
+                       &dev->sbdf, d);
             if ( !is_hardware_domain(d) &&
                  /* Assume a domain without memory has no mappings yet. */
                  (!is_hardware_domain(currd) || domain_tot_pages(d)) )
@@ -1043,18 +1043,15 @@ static int __pci_enable_msi(struct msi_info *msi, struct msi_desc **desc)
     old_desc = find_msi_entry(pdev, msi->irq, PCI_CAP_ID_MSI);
     if ( old_desc )
     {
-        printk(XENLOG_ERR "irq %d already mapped to MSI on %04x:%02x:%02x.%u\n",
-               msi->irq, msi->seg, msi->bus,
-               PCI_SLOT(msi->devfn), PCI_FUNC(msi->devfn));
+        printk(XENLOG_ERR "irq %d already mapped to MSI on %pp\n",
+               msi->irq, &pdev->sbdf);
         return -EEXIST;
     }
 
     old_desc = find_msi_entry(pdev, -1, PCI_CAP_ID_MSIX);
     if ( old_desc )
     {
-        printk(XENLOG_WARNING "MSI-X already in use on %04x:%02x:%02x.%u\n",
-               msi->seg, msi->bus,
-               PCI_SLOT(msi->devfn), PCI_FUNC(msi->devfn));
+        printk(XENLOG_WARNING "MSI-X already in use on %pp\n", &pdev->sbdf);
         __pci_disable_msix(old_desc);
     }
 
@@ -1091,8 +1088,6 @@ static void __pci_disable_msi(struct msi_desc *entry)
 static int __pci_enable_msix(struct msi_info *msi, struct msi_desc **desc)
 {
     struct pci_dev *pdev;
-    u8 slot = PCI_SLOT(msi->devfn);
-    u8 func = PCI_FUNC(msi->devfn);
     struct msi_desc *old_desc;
 
     ASSERT(pcidevs_locked());
@@ -1106,16 +1101,15 @@ static int __pci_enable_msix(struct msi_info *msi, struct msi_desc **desc)
     old_desc = find_msi_entry(pdev, msi->irq, PCI_CAP_ID_MSIX);
     if ( old_desc )
     {
-        printk(XENLOG_ERR "irq %d already mapped to MSI-X on %04x:%02x:%02x.%u\n",
-               msi->irq, msi->seg, msi->bus, slot, func);
+        printk(XENLOG_ERR "irq %d already mapped to MSI-X on %pp\n",
+               msi->irq, &pdev->sbdf);
         return -EEXIST;
     }
 
     old_desc = find_msi_entry(pdev, -1, PCI_CAP_ID_MSI);
     if ( old_desc )
     {
-        printk(XENLOG_WARNING "MSI already in use on %04x:%02x:%02x.%u\n",
-               msi->seg, msi->bus, slot, func);
+        printk(XENLOG_WARNING "MSI already in use on %pp\n", &pdev->sbdf);
         __pci_disable_msi(old_desc);
     }
 
@@ -1162,9 +1156,8 @@ static void __pci_disable_msix(struct msi_desc *entry)
         writel(1, entry->mask_base + PCI_MSIX_ENTRY_VECTOR_CTRL_OFFSET);
     else if ( !(control & PCI_MSIX_FLAGS_MASKALL) )
     {
-        printk(XENLOG_WARNING
-               "cannot disable IRQ %d: masking MSI-X on %04x:%02x:%02x.%u\n",
-               entry->irq, seg, bus, slot, func);
+        printk(XENLOG_WARNING "cannot disable IRQ %d: masking MSI-X on %pp\n",
+               entry->irq, &dev->sbdf);
         maskall = true;
     }
     dev->msix->host_maskall = maskall;
@@ -1340,7 +1333,6 @@ int pci_restore_msi_state(struct pci_dev *pdev)
     struct msi_desc *entry, *tmp;
     struct irq_desc *desc;
     struct msi_msg msg;
-    u8 slot = PCI_SLOT(pdev->devfn), func = PCI_FUNC(pdev->devfn);
     unsigned int type = 0, pos = 0;
     u16 control = 0;
 
@@ -1369,9 +1361,8 @@ int pci_restore_msi_state(struct pci_dev *pdev)
         if (desc->msi_desc != entry)
         {
     bogus:
-            dprintk(XENLOG_ERR,
-                    "Restore MSI for %04x:%02x:%02x:%u entry %u not set?\n",
-                    pdev->seg, pdev->bus, slot, func, i);
+            dprintk(XENLOG_ERR, "Restore MSI for %pp entry %u not set?\n",
+                    &pdev->sbdf, i);
             spin_unlock_irqrestore(&desc->lock, flags);
             if ( type == PCI_CAP_ID_MSIX )
                 pci_conf_write16(pdev->sbdf, msix_control_reg(pos),
diff --git a/xen/common/vsprintf.c b/xen/common/vsprintf.c
index 183d3ed4bb..185a4bd561 100644
--- a/xen/common/vsprintf.c
+++ b/xen/common/vsprintf.c
@@ -394,6 +394,20 @@ static char *print_vcpu(char *str, const char *end, const struct vcpu *v)
     return number(str + 1, end, v->vcpu_id, 10, -1, -1, 0);
 }
 
+static char *print_pci_addr(char *str, const char *end, const pci_sbdf_t *sbdf)
+{
+    str = number(str, end, sbdf->seg, 16, 4, -1, ZEROPAD);
+    if ( str < end )
+        *str = ':';
+    str = number(str + 1, end, sbdf->bus, 16, 2, -1, ZEROPAD);
+    if ( str < end )
+        *str = ':';
+    str = number(str + 1, end, sbdf->dev, 16, 2, -1, ZEROPAD);
+    if ( str < end )
+        *str = '.';
+    return number(str + 1, end, sbdf->fn, 8, -1, -1, 0);
+}
+
 static char *pointer(char *str, const char *end, const char **fmt_ptr,
                      const void *arg, int field_width, int precision,
                      int flags)
@@ -476,6 +490,10 @@ static char *pointer(char *str, const char *end, const char **fmt_ptr,
         }
     }
 
+    case 'p': /* PCI SBDF. */
+        ++*fmt_ptr;
+        return print_pci_addr(str, end, arg);
+
     case 's': /* Symbol name with offset and size (iff offset != 0) */
     case 'S': /* Symbol name unconditionally with offset and size */
     {
diff --git a/xen/drivers/passthrough/amd/iommu_acpi.c b/xen/drivers/passthrough/amd/iommu_acpi.c
index f4abbfd9dc..1f6b004260 100644
--- a/xen/drivers/passthrough/amd/iommu_acpi.c
+++ b/xen/drivers/passthrough/amd/iommu_acpi.c
@@ -92,9 +92,8 @@ static void __init add_ivrs_mapping_entry(
                     iommu, &ivrs_mappings[alias_id].intremap_inuse, 0);
 
             if ( !ivrs_mappings[alias_id].intremap_table )
-                panic("No memory for %04x:%02x:%02x.%u's IRT\n",
-                      iommu->seg, PCI_BUS(alias_id), PCI_SLOT(alias_id),
-                      PCI_FUNC(alias_id));
+                panic("No memory for %pp's IRT\n",
+                      &PCI_SBDF2(iommu->seg, alias_id));
         }
     }
 
@@ -738,9 +737,8 @@ static u16 __init parse_ivhd_device_special(
         return 0;
     }
 
-    AMD_IOMMU_DEBUG("IVHD Special: %04x:%02x:%02x.%u variety %#x handle %#x\n",
-                    seg, PCI_BUS(bdf), PCI_SLOT(bdf), PCI_FUNC(bdf),
-                    special->variety, special->handle);
+    AMD_IOMMU_DEBUG("IVHD Special: %pp variety %#x handle %#x\n",
+                    &PCI_SBDF2(seg, bdf), special->variety, special->handle);
     add_ivrs_mapping_entry(bdf, bdf, special->header.data_setting, true,
                            iommu);
 
@@ -764,9 +762,9 @@ static u16 __init parse_ivhd_device_special(
         if ( idx < nr_ioapic_sbdf )
         {
             AMD_IOMMU_DEBUG("IVHD: Command line override present for IO-APIC %#x"
-                            "(IVRS: %#x devID %04x:%02x:%02x.%u)\n",
-                            ioapic_sbdf[idx].id, special->handle, seg,
-                            PCI_BUS(bdf), PCI_SLOT(bdf), PCI_FUNC(bdf));
+                            "(IVRS: %#x devID %pp)\n",
+                            ioapic_sbdf[idx].id, special->handle,
+                            &PCI_SBDF2(seg, bdf));
             break;
         }
 
@@ -836,9 +834,9 @@ static u16 __init parse_ivhd_device_special(
             break;
         case HPET_CMDL:
             AMD_IOMMU_DEBUG("IVHD: Command line override present for HPET %#x "
-                            "(IVRS: %#x devID %04x:%02x:%02x.%u)\n",
-                            hpet_sbdf.id, special->handle, seg, PCI_BUS(bdf),
-                            PCI_SLOT(bdf), PCI_FUNC(bdf));
+                            "(IVRS: %#x devID %pp)\n",
+                            hpet_sbdf.id, special->handle,
+                            &PCI_SBDF2(seg, bdf));
             break;
         case HPET_NONE:
             /* set device id of hpet */
diff --git a/xen/drivers/passthrough/amd/iommu_cmd.c b/xen/drivers/passthrough/amd/iommu_cmd.c
index 249ed345a0..6c0647c524 100644
--- a/xen/drivers/passthrough/amd/iommu_cmd.c
+++ b/xen/drivers/passthrough/amd/iommu_cmd.c
@@ -289,9 +289,8 @@ void amd_iommu_flush_iotlb(u8 devfn, const struct pci_dev *pdev,
 
     if ( !iommu )
     {
-        AMD_IOMMU_DEBUG("%s: Can't find iommu for %04x:%02x:%02x.%u\n",
-                        __func__, pdev->seg, pdev->bus,
-                        PCI_SLOT(pdev->devfn), PCI_FUNC(pdev->devfn));
+        AMD_IOMMU_DEBUG("%s: Can't find iommu for %pp\n",
+                        __func__, &pdev->sbdf);
         return;
     }
 
diff --git a/xen/drivers/passthrough/amd/iommu_detect.c b/xen/drivers/passthrough/amd/iommu_detect.c
index 8312bb4b6f..d05bc6a5bb 100644
--- a/xen/drivers/passthrough/amd/iommu_detect.c
+++ b/xen/drivers/passthrough/amd/iommu_detect.c
@@ -182,9 +182,8 @@ int __init amd_iommu_detect_one_acpi(
 
     rt = pci_ro_device(iommu->seg, bus, PCI_DEVFN(dev, func));
     if ( rt )
-        printk(XENLOG_ERR
-               "Could not mark config space of %04x:%02x:%02x.%u read-only (%d)\n",
-               iommu->seg, bus, dev, func, rt);
+        printk(XENLOG_ERR "Could not mark config space of %pp read-only (%d)\n",
+               &PCI_SBDF2(iommu->seg, iommu->bdf), rt);
 
     list_add_tail(&iommu->list, &amd_iommu_head);
     rt = 0;
diff --git a/xen/drivers/passthrough/amd/iommu_init.c b/xen/drivers/passthrough/amd/iommu_init.c
index 034f3b9c2c..24d1dfec40 100644
--- a/xen/drivers/passthrough/amd/iommu_init.c
+++ b/xen/drivers/passthrough/amd/iommu_init.c
@@ -558,10 +558,10 @@ static void parse_event_log_entry(struct amd_iommu *iommu, u32 entry[])
         unsigned int flags = MASK_EXTR(entry[1], IOMMU_EVENT_FLAGS_MASK);
         uint64_t addr = *(uint64_t *)(entry + 2);
 
-        printk(XENLOG_ERR "AMD-Vi: %s: %04x:%02x:%02x.%u d%d addr %016"PRIx64
+        printk(XENLOG_ERR "AMD-Vi: %s: %pp d%u addr %016"PRIx64
                " flags %#x%s%s%s%s%s%s%s%s%s%s\n",
-               code_str, iommu->seg, PCI_BUS(device_id), PCI_SLOT(device_id),
-               PCI_FUNC(device_id), domain_id, addr, flags,
+               code_str, &PCI_SBDF2(iommu->seg, device_id),
+               domain_id, addr, flags,
                (flags & 0xe00) ? " ??" : "",
                (flags & 0x100) ? " TR" : "",
                (flags & 0x080) ? " RZ" : "",
@@ -753,9 +753,8 @@ static bool_t __init set_iommu_interrupt_handler(struct amd_iommu *iommu)
     pcidevs_unlock();
     if ( !iommu->msi.dev )
     {
-        AMD_IOMMU_DEBUG("IOMMU: no pdev for %04x:%02x:%02x.%u\n",
-                        iommu->seg, PCI_BUS(iommu->bdf),
-                        PCI_SLOT(iommu->bdf), PCI_FUNC(iommu->bdf));
+        AMD_IOMMU_DEBUG("IOMMU: no pdev for %pp\n",
+                        &PCI_SBDF2(iommu->seg, iommu->bdf));
         return 0;
     }
 
@@ -841,9 +840,6 @@ __initcall(iov_adjust_irq_affinities);
 static void amd_iommu_erratum_746_workaround(struct amd_iommu *iommu)
 {
     u32 value;
-    u8 bus = PCI_BUS(iommu->bdf);
-    u8 dev = PCI_SLOT(iommu->bdf);
-    u8 func = PCI_FUNC(iommu->bdf);
 
     if ( (boot_cpu_data.x86 != 0x15) ||
          (boot_cpu_data.x86_model < 0x10) ||
@@ -861,8 +857,8 @@ static void amd_iommu_erratum_746_workaround(struct amd_iommu *iommu)
 
     pci_conf_write32(PCI_SBDF2(iommu->seg, iommu->bdf), 0xf4, value | (1 << 2));
     printk(XENLOG_INFO
-           "AMD-Vi: Applying erratum 746 workaround for IOMMU at %04x:%02x:%02x.%u\n",
-           iommu->seg, bus, dev, func);
+           "AMD-Vi: Applying erratum 746 workaround for IOMMU at %pp\n",
+           &PCI_SBDF2(iommu->seg, iommu->bdf));
 
     /* Clear the enable writing bit */
     pci_conf_write32(PCI_SBDF2(iommu->seg, iommu->bdf), 0xf0, 0x90);
diff --git a/xen/drivers/passthrough/amd/iommu_intr.c b/xen/drivers/passthrough/amd/iommu_intr.c
index cec575071d..0adee53fb8 100644
--- a/xen/drivers/passthrough/amd/iommu_intr.c
+++ b/xen/drivers/passthrough/amd/iommu_intr.c
@@ -610,8 +610,7 @@ static struct amd_iommu *_find_iommu_for_device(int seg, int bdf)
     if ( iommu )
         return iommu;
 
-    AMD_IOMMU_DEBUG("No IOMMU for MSI dev = %04x:%02x:%02x.%u\n",
-                    seg, PCI_BUS(bdf), PCI_SLOT(bdf), PCI_FUNC(bdf));
+    AMD_IOMMU_DEBUG("No IOMMU for MSI dev = %pp\n", &PCI_SBDF2(seg, bdf));
     return ERR_PTR(-EINVAL);
 }
 
@@ -863,10 +862,8 @@ static void dump_intremap_table(const struct amd_iommu *iommu,
 
         if ( ivrs_mapping )
         {
-            printk("  %04x:%02x:%02x:%u:\n", iommu->seg,
-                   PCI_BUS(ivrs_mapping->dte_requestor_id),
-                   PCI_SLOT(ivrs_mapping->dte_requestor_id),
-                   PCI_FUNC(ivrs_mapping->dte_requestor_id));
+            printk("  %pp:\n",
+                   &PCI_SBDF2(iommu->seg, ivrs_mapping->dte_requestor_id));
             ivrs_mapping = NULL;
         }
 
diff --git a/xen/drivers/passthrough/amd/pci_amd_iommu.c b/xen/drivers/passthrough/amd/pci_amd_iommu.c
index 8d6309cc8c..5f5f4a2eac 100644
--- a/xen/drivers/passthrough/amd/pci_amd_iommu.c
+++ b/xen/drivers/passthrough/amd/pci_amd_iommu.c
@@ -49,9 +49,8 @@ struct amd_iommu *find_iommu_for_device(int seg, int bdf)
                 tmp.dte_requestor_id = bdf;
             ivrs_mappings[bdf] = tmp;
 
-            printk(XENLOG_WARNING "%04x:%02x:%02x.%u not found in ACPI tables;"
-                   " using same IOMMU as function 0\n",
-                   seg, PCI_BUS(bdf), PCI_SLOT(bdf), PCI_FUNC(bdf));
+            printk(XENLOG_WARNING "%pp not found in ACPI tables;"
+                   " using same IOMMU as function 0\n", &PCI_SBDF2(seg, bdf));
 
             /* write iommu field last */
             ivrs_mappings[bdf].iommu = ivrs_mappings[bd0].iommu;
@@ -349,9 +348,8 @@ static int reassign_device(struct domain *source, struct domain *target,
         return rc;
 
     amd_iommu_setup_domain_device(target, iommu, devfn, pdev);
-    AMD_IOMMU_DEBUG("Re-assign %04x:%02x:%02x.%u from dom%d to dom%d\n",
-                    pdev->seg, pdev->bus, PCI_SLOT(devfn), PCI_FUNC(devfn),
-                    source->domain_id, target->domain_id);
+    AMD_IOMMU_DEBUG("Re-assign %pp from dom%d to dom%d\n",
+                    &pdev->sbdf, source->domain_id, target->domain_id);
 
     return 0;
 }
@@ -459,15 +457,12 @@ static int amd_iommu_add_device(u8 devfn, struct pci_dev *pdev)
         if ( pdev->type == DEV_TYPE_PCI_HOST_BRIDGE &&
              is_hardware_domain(pdev->domain) )
         {
-            AMD_IOMMU_DEBUG("Skipping host bridge %04x:%02x:%02x.%u\n",
-                            pdev->seg, pdev->bus, PCI_SLOT(devfn),
-                            PCI_FUNC(devfn));
+            AMD_IOMMU_DEBUG("Skipping host bridge %pp\n", &pdev->sbdf);
             return 0;
         }
 
-        AMD_IOMMU_DEBUG("No iommu for %04x:%02x:%02x.%u; cannot be handed to d%d\n",
-                        pdev->seg, pdev->bus, PCI_SLOT(devfn), PCI_FUNC(devfn),
-                        pdev->domain->domain_id);
+        AMD_IOMMU_DEBUG("No iommu for %pp; cannot be handed to d%d\n",
+                        &pdev->sbdf, pdev->domain->domain_id);
         return -ENODEV;
     }
 
@@ -522,10 +517,8 @@ static int amd_iommu_remove_device(u8 devfn, struct pci_dev *pdev)
     iommu = find_iommu_for_device(pdev->seg, bdf);
     if ( !iommu )
     {
-        AMD_IOMMU_DEBUG("Fail to find iommu."
-                        " %04x:%02x:%02x.%u cannot be removed from dom%d\n",
-                        pdev->seg, pdev->bus, PCI_SLOT(devfn), PCI_FUNC(devfn),
-                        pdev->domain->domain_id);
+        AMD_IOMMU_DEBUG("Fail to find iommu. %pp cannot be removed from %pd\n",
+                        &pdev->sbdf, pdev->domain);
         return -ENODEV;
     }
 
diff --git a/xen/drivers/passthrough/pci.c b/xen/drivers/passthrough/pci.c
index 5846978890..561344d35f 100644
--- a/xen/drivers/passthrough/pci.c
+++ b/xen/drivers/passthrough/pci.c
@@ -239,11 +239,7 @@ static void check_pdev(const struct pci_dev *pdev)
     (PCI_STATUS_PARITY | PCI_STATUS_SIG_TARGET_ABORT | \
      PCI_STATUS_REC_TARGET_ABORT | PCI_STATUS_REC_MASTER_ABORT | \
      PCI_STATUS_SIG_SYSTEM_ERROR | PCI_STATUS_DETECTED_PARITY)
-    u16 seg = pdev->seg;
-    u8 bus = pdev->bus;
-    u8 dev = PCI_SLOT(pdev->devfn);
-    u8 func = PCI_FUNC(pdev->devfn);
-    u16 val;
+    u16 val;
 
     if ( command_mask )
     {
@@ -253,8 +249,8 @@ static void check_pdev(const struct pci_dev *pdev)
         val = pci_conf_read16(pdev->sbdf, PCI_STATUS);
         if ( val & PCI_STATUS_CHECK )
         {
-            printk(XENLOG_INFO "%04x:%02x:%02x.%u status %04x -> %04x\n",
-                   seg, bus, dev, func, val, val & ~PCI_STATUS_CHECK);
+            printk(XENLOG_INFO "%pp status %04x -> %04x\n",
+                   &pdev->sbdf, val, val & ~PCI_STATUS_CHECK);
             pci_conf_write16(pdev->sbdf, PCI_STATUS, val & PCI_STATUS_CHECK);
         }
     }
@@ -271,9 +267,8 @@ static void check_pdev(const struct pci_dev *pdev)
         val = pci_conf_read16(pdev->sbdf, PCI_SEC_STATUS);
         if ( val & PCI_STATUS_CHECK )
         {
-            printk(XENLOG_INFO
-                   "%04x:%02x:%02x.%u secondary status %04x -> %04x\n",
-                   seg, bus, dev, func, val, val & ~PCI_STATUS_CHECK);
+            printk(XENLOG_INFO "%pp secondary status %04x -> %04x\n",
+                   &pdev->sbdf, val, val & ~PCI_STATUS_CHECK);
             pci_conf_write16(pdev->sbdf, PCI_SEC_STATUS,
                              val & PCI_STATUS_CHECK);
         }
@@ -427,8 +422,8 @@ static struct pci_dev *alloc_pdev(struct pci_seg *pseg, u8 bus, u8 devfn)
             break;
 
         default:
-            printk(XENLOG_WARNING "%04x:%02x:%02x.%u: unknown type %d\n",
-                   pseg->nr, bus, PCI_SLOT(devfn), PCI_FUNC(devfn), pdev->type);
+            printk(XENLOG_WARNING "%pp: unknown type %d\n",
+                   &pdev->sbdf, pdev->type);
             break;
     }
 
@@ -660,9 +655,9 @@ unsigned int pci_size_mem_bar(pci_sbdf_t sbdf, unsigned int pos,
         if ( flags & PCI_BAR_LAST )
         {
             printk(XENLOG_WARNING
-                   "%sdevice %04x:%02x:%02x.%u with 64-bit %sBAR in last slot\n",
-                   (flags & PCI_BAR_VF) ? "SR-IOV " : "", sbdf.seg, sbdf.bus,
-                   sbdf.dev, sbdf.fn, (flags & PCI_BAR_VF) ? "vf " : "");
+                   "%sdevice %pp with 64-bit %sBAR in last slot\n",
+                   (flags & PCI_BAR_VF) ? "SR-IOV " : "", &sbdf,
+                   (flags & PCI_BAR_VF) ? "vf " : "");
             *psize = 0;
             return 1;
         }
@@ -765,9 +760,8 @@ int pci_add_device(u16 seg, u8 bus, u8 devfn,
                      PCI_BASE_ADDRESS_SPACE_IO )
                 {
                     printk(XENLOG_WARNING
-                           "SR-IOV device %04x:%02x:%02x.%u with vf BAR%u"
-                           " in IO space\n",
-                           seg, bus, slot, func, i);
+                           "SR-IOV device %pp with vf BAR%u in IO space\n",
+                           &pdev->sbdf, i);
                     continue;
                 }
                 ret = pci_size_mem_bar(pdev->sbdf, idx, NULL,
@@ -780,10 +774,8 @@ int pci_add_device(u16 seg, u8 bus, u8 devfn,
             }
         }
         else
-            printk(XENLOG_WARNING
-                   "SR-IOV device %04x:%02x:%02x.%u has its virtual"
-                   " functions already enabled (%04x)\n",
-                   seg, bus, slot, func, ctrl);
+            printk(XENLOG_WARNING "SR-IOV device %pp has its virtual"
+                   " functions already enabled (%04x)\n", &pdev->sbdf, ctrl);
     }
 
     check_pdev(pdev);
@@ -810,15 +802,14 @@ out:
     pcidevs_unlock();
     if ( !ret )
     {
-        printk(XENLOG_DEBUG "PCI add %s %04x:%02x:%02x.%u\n", pdev_type,
-               seg, bus, slot, func);
+        printk(XENLOG_DEBUG "PCI add %s %pp\n", pdev_type, &pdev->sbdf);
         while ( pdev->phantom_stride )
         {
             func += pdev->phantom_stride;
             if ( PCI_SLOT(func) )
                 break;
-            printk(XENLOG_DEBUG "PCI phantom %04x:%02x:%02x.%u\n",
-                   seg, bus, slot, func);
+            printk(XENLOG_DEBUG "PCI phantom %pp\n",
+                   &PCI_SBDF(seg, bus, slot, func));
         }
     }
     return ret;
@@ -847,9 +838,8 @@ int pci_remove_device(u16 seg, u8 bus, u8 devfn)
             if ( pdev->domain )
                 list_del(&pdev->domain_list);
             pci_cleanup_msi(pdev);
+            printk(XENLOG_DEBUG "PCI remove device %pp\n", &pdev->sbdf);
             free_pdev(pseg, pdev);
-            printk(XENLOG_DEBUG "PCI remove device %04x:%02x:%02x.%u\n",
-                   seg, bus, PCI_SLOT(devfn), PCI_FUNC(devfn));
             break;
         }
 
@@ -968,8 +958,8 @@ static int deassign_device(struct domain *d, uint16_t seg, uint8_t bus,
 
  out:
     if ( ret )
-        printk(XENLOG_G_ERR "%pd: deassign (%04x:%02x:%02x.%u) failed (%d)\n",
-               d, seg, bus, PCI_SLOT(devfn), PCI_FUNC(devfn), ret);
+        printk(XENLOG_G_ERR "%pd: deassign (%pp) failed (%d)\n",
+               d, &PCI_SBDF3(seg, bus, devfn), ret);
 
     return ret;
 }
@@ -1138,8 +1128,8 @@ static int __init _scan_pci_devices(struct pci_seg *pseg, void *arg)
                 pdev = alloc_pdev(pseg, bus, PCI_DEVFN(dev, func));
                 if ( !pdev )
                 {
-                    printk(XENLOG_WARNING "%04x:%02x:%02x.%u: alloc_pdev failed\n",
-                           pseg->nr, bus, dev, func);
+                    printk(XENLOG_WARNING "%pp: alloc_pdev failed\n",
+                           &PCI_SBDF(pseg->nr, bus, dev, func));
                     return -ENOMEM;
                 }
 
@@ -1180,9 +1170,8 @@ static void __hwdom_init setup_one_hwdom_device(const struct setup_hwdom *ctxt,
         err = ctxt->handler(devfn, pdev);
         if ( err )
         {
-            printk(XENLOG_ERR "setup %04x:%02x:%02x.%u for d%d failed (%d)\n",
-                   pdev->seg, pdev->bus, PCI_SLOT(devfn), PCI_FUNC(devfn),
-                   ctxt->d->domain_id, err);
+            printk(XENLOG_ERR "setup %pp for d%d failed (%d)\n",
+                   &pdev->sbdf, ctxt->d->domain_id, err);
             if ( devfn == pdev->devfn )
                 return;
         }
@@ -1223,9 +1212,8 @@ static int __hwdom_init _setup_hwdom_pci_devices(struct pci_seg *pseg, void *arg
                 pdev->domain = dom_xen;
             }
             else if ( pdev->domain != ctxt->d )
-                printk(XENLOG_WARNING "Dom%d owning %04x:%02x:%02x.%u?\n",
-                       pdev->domain->domain_id, pseg->nr, bus,
-                       PCI_SLOT(devfn), PCI_FUNC(devfn));
+                printk(XENLOG_WARNING "Dom%d owning %pp?\n",
+                       pdev->domain->domain_id, &pdev->sbdf);
 
             if ( iommu_verbose )
             {
@@ -1361,9 +1349,8 @@ static int _dump_pci_devices(struct pci_seg *pseg, void *arg)
 
     list_for_each_entry ( pdev, &pseg->alldevs_list, alldevs_list )
     {
-        printk("%04x:%02x:%02x.%u - %pd - node %-3d - MSIs < ",
-               pseg->nr, pdev->bus,
-               PCI_SLOT(pdev->devfn), PCI_FUNC(pdev->devfn), pdev->domain,
+        printk("%pp - %pd - node %-3d - MSIs < ",
+               &pdev->sbdf, pdev->domain,
                (pdev->node != NUMA_NO_NODE) ? pdev->node : -1);
         list_for_each_entry ( msi, &pdev->msi_list, list )
                printk("%d ", msi->irq);
@@ -1428,8 +1415,8 @@ static int iommu_add_device(struct pci_dev *pdev)
             return 0;
         rc = hd->platform_ops->add_device(devfn, pci_to_dev(pdev));
         if ( rc )
-            printk(XENLOG_WARNING "IOMMU: add %04x:%02x:%02x.%u failed (%d)\n",
-                   pdev->seg, pdev->bus, PCI_SLOT(devfn), PCI_FUNC(devfn), rc);
+            printk(XENLOG_WARNING "IOMMU: add %pp failed (%d)\n",
+                   &pdev->sbdf, rc);
     }
 }
 
@@ -1473,8 +1460,7 @@ static int iommu_remove_device(struct pci_dev *pdev)
         if ( !rc )
             continue;
 
-        printk(XENLOG_ERR "IOMMU: remove %04x:%02x:%02x.%u failed (%d)\n",
-               pdev->seg, pdev->bus, PCI_SLOT(devfn), PCI_FUNC(devfn), rc);
+        printk(XENLOG_ERR "IOMMU: remove %pp failed (%d)\n", &pdev->sbdf, rc);
         return rc;
     }
 
@@ -1550,8 +1536,8 @@ static int assign_device(struct domain *d, u16 seg, u8 bus, u8 devfn, u32 flag)
 
  done:
     if ( rc )
-        printk(XENLOG_G_WARNING "%pd: assign (%04x:%02x:%02x.%u) failed (%d)\n",
-               d, seg, bus, PCI_SLOT(devfn), PCI_FUNC(devfn), rc);
+        printk(XENLOG_G_WARNING "%pd: assign (%pp) failed (%d)\n",
+               d, &PCI_SBDF3(seg, bus, devfn), rc);
     /* The device is assigned to dom_io so mark it as quarantined */
     else if ( d == dom_io )
         pdev->quarantine = true;
@@ -1624,10 +1610,8 @@ void iommu_dev_iotlb_flush_timeout(struct domain *d, struct pci_dev *pdev)
     _pci_hide_device(pdev);
 
     if ( !d->is_shutting_down && printk_ratelimit() )
-        printk(XENLOG_ERR
-               "dom%d: ATS device %04x:%02x:%02x.%u flush failed\n",
-               d->domain_id, pdev->seg, pdev->bus, PCI_SLOT(pdev->devfn),
-               PCI_FUNC(pdev->devfn));
+        printk(XENLOG_ERR "dom%d: ATS device %pp flush failed\n",
+               d->domain_id, &pdev->sbdf);
     if ( !is_hardware_domain(d) )
         domain_crash(d);
 
@@ -1717,9 +1701,8 @@ int iommu_do_pci_domctl(
         {
             if ( ret )
             {
-                printk(XENLOG_G_INFO
-                       "%04x:%02x:%02x.%u already assigned, or non-existent\n",
-                       seg, bus, PCI_SLOT(devfn), PCI_FUNC(devfn));
+                printk(XENLOG_G_INFO "%pp already assigned, or non-existent\n",
+                       &PCI_SBDF3(seg, bus, devfn));
                 ret = -EINVAL;
             }
         }
diff --git a/xen/drivers/passthrough/vtd/dmar.c b/xen/drivers/passthrough/vtd/dmar.c
index 29cd5c5d70..36d909b06d 100644
--- a/xen/drivers/passthrough/vtd/dmar.c
+++ b/xen/drivers/passthrough/vtd/dmar.c
@@ -349,9 +349,8 @@ static int __init acpi_parse_dev_scope(
             sub_bus = pci_conf_read8(PCI_SBDF(seg, bus, path->dev, path->fn),
                                      PCI_SUBORDINATE_BUS);
             if ( iommu_verbose )
-                printk(VTDPREFIX
-                       " bridge: %04x:%02x:%02x.%u start=%x sec=%x sub=%x\n",
-                       seg, bus, path->dev, path->fn,
+                printk(VTDPREFIX " bridge: %pp start=%x sec=%x sub=%x\n",
+                       &PCI_SBDF(seg, bus, path->dev, path->fn),
                        acpi_scope->bus, sec_bus, sub_bus);
 
             dmar_scope_add_buses(scope, sec_bus, sub_bus);
@@ -359,8 +358,8 @@ static int __init acpi_parse_dev_scope(
 
         case ACPI_DMAR_SCOPE_TYPE_HPET:
             if ( iommu_verbose )
-                printk(VTDPREFIX " MSI HPET: %04x:%02x:%02x.%u\n",
-                       seg, bus, path->dev, path->fn);
+                printk(VTDPREFIX " MSI HPET: %pp\n",
+                       &PCI_SBDF(seg, bus, path->dev, path->fn));
 
             if ( drhd )
             {
@@ -381,8 +380,8 @@ static int __init acpi_parse_dev_scope(
 
         case ACPI_DMAR_SCOPE_TYPE_ENDPOINT:
             if ( iommu_verbose )
-                printk(VTDPREFIX " endpoint: %04x:%02x:%02x.%u\n",
-                       seg, bus, path->dev, path->fn);
+                printk(VTDPREFIX " endpoint: %pp\n",
+                       &PCI_SBDF(seg, bus, path->dev, path->fn));
 
             if ( drhd )
             {
@@ -395,8 +394,8 @@ static int __init acpi_parse_dev_scope(
 
         case ACPI_DMAR_SCOPE_TYPE_IOAPIC:
             if ( iommu_verbose )
-                printk(VTDPREFIX " IOAPIC: %04x:%02x:%02x.%u\n",
-                       seg, bus, path->dev, path->fn);
+                printk(VTDPREFIX " IOAPIC: %pp\n",
+                       &PCI_SBDF(seg, bus, path->dev, path->fn));
 
             if ( drhd )
             {
@@ -525,8 +524,8 @@ acpi_parse_one_drhd(struct acpi_dmar_header *header)
 
             if ( !pci_device_detect(drhd->segment, b, d, f) )
                 printk(XENLOG_WARNING VTDPREFIX
-                       " Non-existent device (%04x:%02x:%02x.%u) in this DRHD's scope!\n",
-                       drhd->segment, b, d, f);
+                       " Non-existent device (%pp) in this DRHD's scope!\n",
+                       &PCI_SBDF(drhd->segment, b, d, f));
         }
 
         acpi_register_drhd_unit(dmaru);
@@ -562,9 +561,9 @@ static int register_one_rmrr(struct acpi_rmrr_unit *rmrru)
         if ( pci_device_detect(rmrru->segment, b, d, f) == 0 )
         {
             dprintk(XENLOG_WARNING VTDPREFIX,
-                    " Non-existent device (%04x:%02x:%02x.%u) is reported"
-                    " in RMRR [%"PRIx64",%"PRIx64"]'s scope!\n",
-                    rmrru->segment, b, d, f,
+                    " Non-existent device (%pp) is reported"
+                    " in RMRR [%"PRIx64", %"PRIx64"]'s scope!\n",
+                    &PCI_SBDF(rmrru->segment, b, d, f),
                     rmrru->base_address, rmrru->end_address);
             ignore = true;
         }
diff --git a/xen/drivers/passthrough/vtd/intremap.c b/xen/drivers/passthrough/vtd/intremap.c
index a2f02c1bea..0d2a9d78de 100644
--- a/xen/drivers/passthrough/vtd/intremap.c
+++ b/xen/drivers/passthrough/vtd/intremap.c
@@ -521,16 +521,13 @@ static void set_msi_source_id(struct pci_dev *pdev, struct iremap_entry *ire)
         }
         else
             dprintk(XENLOG_WARNING VTDPREFIX,
-                    "d%d: no upstream bridge for %04x:%02x:%02x.%u\n",
-                    pdev->domain->domain_id,
-                    seg, bus, PCI_SLOT(devfn), PCI_FUNC(devfn));
+                    "d%d: no upstream bridge for %pp\n",
+                    pdev->domain->domain_id, &pdev->sbdf);
         break;
 
     default:
-        dprintk(XENLOG_WARNING VTDPREFIX,
-                "d%d: unknown(%u): %04x:%02x:%02x.%u\n",
-                pdev->domain->domain_id, pdev->type,
-                seg, bus, PCI_SLOT(devfn), PCI_FUNC(devfn));
+        dprintk(XENLOG_WARNING VTDPREFIX, "d%d: unknown(%u): %pp\n",
+                pdev->domain->domain_id, pdev->type, &pdev->sbdf);
         break;
    }
 }
diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index 2a99cd208f..8921e011a3 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -873,27 +873,24 @@ static int iommu_page_fault_do_one(struct vtd_iommu *iommu, int type,
     {
     case DMA_REMAP:
         printk(XENLOG_G_WARNING VTDPREFIX
-               "DMAR:[%s] Request device [%04x:%02x:%02x.%u] "
+               "DMAR:[%s] Request device [%pp] "
                "fault addr %"PRIx64"\n",
                (type ? "DMA Read" : "DMA Write"),
-               seg, PCI_BUS(source_id), PCI_SLOT(source_id),
-               PCI_FUNC(source_id), addr);
+               &PCI_SBDF2(seg, source_id), addr);
         kind = "DMAR";
         break;
     case INTR_REMAP:
         printk(XENLOG_G_WARNING VTDPREFIX
-               "INTR-REMAP: Request device [%04x:%02x:%02x.%u] "
+               "INTR-REMAP: Request device [%pp] "
                "fault index %"PRIx64"\n",
-               seg, PCI_BUS(source_id), PCI_SLOT(source_id),
-               PCI_FUNC(source_id), addr >> 48);
+               &PCI_SBDF2(seg, source_id), addr >> 48);
         kind = "INTR-REMAP";
         break;
     default:
         printk(XENLOG_G_WARNING VTDPREFIX
-               "UNKNOWN: Request device [%04x:%02x:%02x.%u] "
+               "UNKNOWN: Request device [%pp] "
                "fault addr %"PRIx64"\n",
-               seg, PCI_BUS(source_id), PCI_SLOT(source_id),
-               PCI_FUNC(source_id), addr);
+               &PCI_SBDF2(seg, source_id), addr);
         kind = "UNKNOWN";
         break;
     }
@@ -1339,9 +1336,8 @@ int domain_context_mapping_one(
         {
             if ( pdev->domain != domain )
             {
-                printk(XENLOG_G_INFO VTDPREFIX
-                       "%pd: %04x:%02x:%02x.%u owned by %pd\n",
-                       domain, seg, bus, PCI_SLOT(devfn), PCI_FUNC(devfn),
+                printk(XENLOG_G_INFO VTDPREFIX "%pd: %pp owned by %pd\n",
+                       domain, &PCI_SBDF3(seg, bus, devfn),
                        pdev->domain);
                 res = -EINVAL;
             }
@@ -1354,17 +1350,15 @@ int domain_context_mapping_one(
             if ( cdomain < 0 )
             {
                 printk(XENLOG_G_WARNING VTDPREFIX
-                       "%pd: %04x:%02x:%02x.%u mapped, but can't find owner\n",
-                       domain, seg, bus, PCI_SLOT(devfn), PCI_FUNC(devfn));
+                       "%pd: %pp mapped, but can't find owner\n",
+                       domain, &PCI_SBDF3(seg, bus, devfn));
                 res = -EINVAL;
             }
             else if ( cdomain != domain->domain_id )
             {
                 printk(XENLOG_G_INFO VTDPREFIX
-                       "%pd: %04x:%02x:%02x.%u already mapped to d%d\n",
-                       domain,
-                       seg, bus, PCI_SLOT(devfn), PCI_FUNC(devfn),
-                       cdomain);
+                       "%pd: %pp already mapped to d%d\n",
+                       domain, &PCI_SBDF3(seg, bus, devfn), cdomain);
                 res = -EINVAL;
             }
         }
@@ -1490,9 +1484,8 @@ static int domain_context_mapping(struct domain *domain, u8 devfn,
     {
     case DEV_TYPE_PCI_HOST_BRIDGE:
         if ( iommu_debug )
-            printk(VTDPREFIX "d%d:Hostbridge: skip %04x:%02x:%02x.%u map\n",
-                   domain->domain_id, seg, bus,
-                   PCI_SLOT(devfn), PCI_FUNC(devfn));
+            printk(VTDPREFIX "%pd:Hostbridge: skip %pp map\n",
+                   domain, &PCI_SBDF3(seg, bus, devfn));
         if ( !is_hardware_domain(domain) )
             return -EPERM;
         break;
@@ -1504,9 +1497,8 @@ static int domain_context_mapping(struct domain *domain, u8 devfn,
 
     case DEV_TYPE_PCIe_ENDPOINT:
         if ( iommu_debug )
-            printk(VTDPREFIX "d%d:PCIe: map %04x:%02x:%02x.%u\n",
-                   domain->domain_id, seg, bus,
-                   PCI_SLOT(devfn), PCI_FUNC(devfn));
+            printk(VTDPREFIX "%pd:PCIe: map %pp\n",
+                   domain, &PCI_SBDF3(seg, bus, devfn));
         ret = domain_context_mapping_one(domain, drhd->iommu, bus, devfn,
                                          pdev);
         if ( !ret && devfn == pdev->devfn && ats_device(pdev, drhd) > 0 )
@@ -1516,9 +1508,8 @@ static int domain_context_mapping(struct domain *domain, u8 devfn,
 
     case DEV_TYPE_PCI:
         if ( iommu_debug )
-            printk(VTDPREFIX "d%d:PCI: map %04x:%02x:%02x.%u\n",
-                   domain->domain_id, seg, bus,
-                   PCI_SLOT(devfn), PCI_FUNC(devfn));
+            printk(VTDPREFIX "%pd:PCI: map %pp\n",
+                   domain, &PCI_SBDF3(seg, bus, devfn));
 
         ret = domain_context_mapping_one(domain, drhd->iommu, bus, devfn,
                                          pdev);
@@ -1554,9 +1545,8 @@ static int domain_context_mapping(struct domain *domain, u8 devfn,
         break;
 
     default:
-        dprintk(XENLOG_ERR VTDPREFIX, "d%d:unknown(%u): %04x:%02x:%02x.%u\n",
-                domain->domain_id, pdev->type,
-                seg, bus, PCI_SLOT(devfn), PCI_FUNC(devfn));
+        dprintk(XENLOG_ERR VTDPREFIX, "%pd:unknown(%u): %pp\n",
+                domain, pdev->type, &PCI_SBDF3(seg, bus, devfn));
         ret = -EINVAL;
         break;
     }
@@ -1651,9 +1641,8 @@ static int domain_context_unmap(struct domain *domain, u8 devfn,
     {
     case DEV_TYPE_PCI_HOST_BRIDGE:
         if ( iommu_debug )
-            printk(VTDPREFIX "d%d:Hostbridge: skip %04x:%02x:%02x.%u unmap\n",
-                   domain->domain_id, seg, bus,
-                   PCI_SLOT(devfn), PCI_FUNC(devfn));
+            printk(VTDPREFIX "%pd:Hostbridge: skip %pp unmap\n",
+                   domain, &PCI_SBDF3(seg, bus, devfn));
         if ( !is_hardware_domain(domain) )
             return -EPERM;
         goto out;
@@ -1665,9 +1654,8 @@ static int domain_context_unmap(struct domain *domain, u8 devfn,
 
     case DEV_TYPE_PCIe_ENDPOINT:
         if ( iommu_debug )
-            printk(VTDPREFIX "d%d:PCIe: unmap %04x:%02x:%02x.%u\n",
-                   domain->domain_id, seg, bus,
-                   PCI_SLOT(devfn), PCI_FUNC(devfn));
+            printk(VTDPREFIX "%pd:PCIe: unmap %pp\n",
+                   domain, &PCI_SBDF3(seg, bus, devfn));
         ret = domain_context_unmap_one(domain, iommu, bus, devfn);
         if ( !ret && devfn == pdev->devfn && ats_device(pdev, drhd) > 0 )
             disable_ats_device(pdev);
@@ -1676,8 +1664,8 @@ static int domain_context_unmap(struct domain *domain, u8 devfn,
 
     case DEV_TYPE_PCI:
         if ( iommu_debug )
-            printk(VTDPREFIX "d%d:PCI: unmap %04x:%02x:%02x.%u\n",
-                   domain->domain_id, seg, bus, PCI_SLOT(devfn), PCI_FUNC(devfn));
+            printk(VTDPREFIX "%pd:PCI: unmap %pp\n",
+                   domain, &PCI_SBDF3(seg, bus, devfn));
         ret = domain_context_unmap_one(domain, iommu, bus, devfn);
         if ( ret )
             break;
@@ -1702,9 +1690,8 @@ static int domain_context_unmap(struct domain *domain, u8 devfn,
         break;
 
     default:
-        dprintk(XENLOG_ERR VTDPREFIX, "d%d:unknown(%u): %04x:%02x:%02x.%u\n",
-                domain->domain_id, pdev->type,
-                seg, bus, PCI_SLOT(devfn), PCI_FUNC(devfn));
+        dprintk(XENLOG_ERR VTDPREFIX, "%pd:unknown(%u): %pp\n",
+                domain, pdev->type, &PCI_SBDF3(seg, bus, devfn));
         ret = -EINVAL;
         goto out;
     }
@@ -2462,12 +2449,11 @@ static int intel_iommu_assign_device(
             bool_t relaxed = !!(flag & XEN_DOMCTL_DEV_RDM_RELAXED);
 
             printk(XENLOG_GUEST "%s" VTDPREFIX
-                   " It's %s to assign %04x:%02x:%02x.%u"
-                   " with shared RMRR at %"PRIx64" for Dom%d.\n",
+                   " It's %s to assign %pp"
+                   " with shared RMRR at %"PRIx64" for %pd.\n",
                    relaxed ? XENLOG_WARNING : XENLOG_ERR,
                    relaxed ? "risky" : "disallowed",
-                   seg, bus, PCI_SLOT(devfn), PCI_FUNC(devfn),
-                   rmrr->base_address, d->domain_id);
+                   &PCI_SBDF3(seg, bus, devfn), rmrr->base_address, d);
             if ( !relaxed )
                 return -EPERM;
         }
@@ -2497,8 +2483,8 @@ static int intel_iommu_assign_device(
                 if ( rc )
                 {
                     printk(XENLOG_ERR VTDPREFIX
-                           " failed to reclaim %04x:%02x:%02x.%u from %pd (%d)\n",
-                           seg, bus, PCI_SLOT(devfn), PCI_FUNC(devfn), d, rc);
+                           " failed to reclaim %pp from %pd (%d)\n",
+                           &PCI_SBDF3(seg, bus, devfn), d, rc);
                     domain_crash(d);
                 }
                 break;
diff --git a/xen/drivers/passthrough/vtd/quirks.c b/xen/drivers/passthrough/vtd/quirks.c
index 5594270678..a8330f17bc 100644
--- a/xen/drivers/passthrough/vtd/quirks.c
+++ b/xen/drivers/passthrough/vtd/quirks.c
@@ -413,8 +413,6 @@ void pci_vtd_quirk(const struct pci_dev *pdev)
 {
     int seg = pdev->seg;
     int bus = pdev->bus;
-    int dev = PCI_SLOT(pdev->devfn);
-    int func = PCI_FUNC(pdev->devfn);
     int pos;
     bool_t ff;
     u32 val, val2;
@@ -438,8 +436,7 @@ void pci_vtd_quirk(const struct pci_dev *pdev)
     case 0x3c28: /* Sandybridge */
         val = pci_conf_read32(pdev->sbdf, 0x1AC);
         pci_conf_write32(pdev->sbdf, 0x1AC, val | (1 << 31));
-        printk(XENLOG_INFO "Masked VT-d error signaling on %04x:%02x:%02x.%u\n",
-               seg, bus, dev, func);
+        printk(XENLOG_INFO "Masked VT-d error signaling on %pp\n", &pdev->sbdf);
         break;
 
     /* Tylersburg (EP)/Boxboro (MP) chipsets (NHM-EP/EX, WSM-EP/EX) */
@@ -474,8 +471,7 @@ void pci_vtd_quirk(const struct pci_dev *pdev)
             ff = pcie_aer_get_firmware_first(pdev);
         if ( !pos )
         {
-            printk(XENLOG_WARNING "%04x:%02x:%02x.%u without AER capability?\n",
-                   seg, bus, dev, func);
+            printk(XENLOG_WARNING "%pp without AER capability?\n", &pdev->sbdf);
             break;
         }
 
@@ -498,8 +494,7 @@ void pci_vtd_quirk(const struct pci_dev *pdev)
         val = pci_conf_read32(pdev->sbdf, 0x20c);
         pci_conf_write32(pdev->sbdf, 0x20c, val | (1 << 4));
 
-        printk(XENLOG_INFO "%s UR signaling on %04x:%02x:%02x.%u\n",
-               action, seg, bus, dev, func);
+        printk(XENLOG_INFO "%s UR signaling on %pp\n", action, &pdev->sbdf);
         break;
 
     case 0x0040: case 0x0044: case 0x0048: /* Nehalem/Westmere */
@@ -524,16 +519,15 @@ void pci_vtd_quirk(const struct pci_dev *pdev)
             {
                 __set_bit(0x1c8 * 8 + 20, va);
                 iounmap(va);
-                printk(XENLOG_INFO "Masked UR signaling on %04x:%02x:%02x.%u\n",
-                       seg, bus, dev, func);
+                printk(XENLOG_INFO "Masked UR signaling on %pp\n", &pdev->sbdf);
             }
             else
-                printk(XENLOG_ERR "Could not map %"PRIpaddr" for %04x:%02x:%02x.%u\n",
-                       pa, seg, bus, dev, func);
+                printk(XENLOG_ERR "Could not map %"PRIpaddr" for %pp\n",
+                       pa, &pdev->sbdf);
         }
         else
-            printk(XENLOG_WARNING "Bogus DMIBAR %#"PRIx64" on %04x:%02x:%02x.%u\n",
-                   bar, seg, bus, dev, func);
+            printk(XENLOG_WARNING "Bogus DMIBAR %#"PRIx64" on %pp\n",
+                   bar, &pdev->sbdf);
         break;
     }
 }
diff --git a/xen/drivers/passthrough/vtd/utils.c b/xen/drivers/passthrough/vtd/utils.c
index 7552dd8e0c..4febcf506d 100644
--- a/xen/drivers/passthrough/vtd/utils.c
+++ b/xen/drivers/passthrough/vtd/utils.c
@@ -95,9 +95,9 @@ void print_vtd_entries(struct vtd_iommu *iommu, int bus, int devfn, u64 gmfn)
     u64 *l, val;
     u32 l_index, level;
 
-    printk("print_vtd_entries: iommu #%u dev %04x:%02x:%02x.%u gmfn %"PRI_gfn"\n",
-           iommu->index, iommu->drhd->segment, bus,
-           PCI_SLOT(devfn), PCI_FUNC(devfn), gmfn);
+    printk("print_vtd_entries: iommu #%u dev %pp gmfn %"PRI_gfn"\n",
+           iommu->index, &PCI_SBDF3(iommu->drhd->segment, bus, devfn),
+           gmfn);
 
     if ( iommu->root_maddr == 0 )
     {
diff --git a/xen/drivers/passthrough/x86/ats.c b/xen/drivers/passthrough/x86/ats.c
index 8ae0eae4a2..4628ffde45 100644
--- a/xen/drivers/passthrough/x86/ats.c
+++ b/xen/drivers/passthrough/x86/ats.c
@@ -32,8 +32,7 @@ int enable_ats_device(struct pci_dev *pdev, struct list_head *ats_list)
     BUG_ON(!pos);
 
     if ( iommu_verbose )
-        dprintk(XENLOG_INFO, "%04x:%02x:%02x.%u: ATS capability found\n",
-                seg, bus, PCI_SLOT(devfn), PCI_FUNC(devfn));
+        dprintk(XENLOG_INFO, "%pp: ATS capability found\n", &pdev->sbdf);
 
     value = pci_conf_read16(pdev->sbdf, pos + ATS_REG_CTL);
     if ( value & ATS_ENABLE )
@@ -64,9 +63,8 @@ int enable_ats_device(struct pci_dev *pdev, struct list_head *ats_list)
     }
 
     if ( iommu_verbose )
-        dprintk(XENLOG_INFO, "%04x:%02x:%02x.%u: ATS %s enabled\n",
-                seg, bus, PCI_SLOT(devfn), PCI_FUNC(devfn),
-                pos ? "is" : "was");
+        dprintk(XENLOG_INFO, "%pp: ATS %s enabled\n",
+                &pdev->sbdf, pos ? "is" : "was");
 
     return pos;
 }
@@ -74,8 +72,6 @@ int enable_ats_device(struct pci_dev *pdev, struct list_head *ats_list)
 void disable_ats_device(struct pci_dev *pdev)
 {
     u32 value;
-    u16 seg = pdev->seg;
-    u8 bus = pdev->bus, devfn = pdev->devfn;
 
     BUG_ON(!pdev->ats.cap_pos);
 
@@ -86,6 +82,5 @@ void disable_ats_device(struct pci_dev *pdev)
     list_del(&pdev->ats.list);
 
     if ( iommu_verbose )
-        dprintk(XENLOG_INFO, "%04x:%02x:%02x.%u: ATS is disabled\n",
-                seg, bus, PCI_SLOT(devfn), PCI_FUNC(devfn));
+        dprintk(XENLOG_INFO, "%pp: ATS is disabled\n", &pdev->sbdf);
 }
diff --git a/xen/drivers/vpci/header.c b/xen/drivers/vpci/header.c
index 3c794f486d..ba9a036202 100644
--- a/xen/drivers/vpci/header.c
+++ b/xen/drivers/vpci/header.c
@@ -355,7 +355,6 @@ static void bar_write(const struct pci_dev *pdev, unsigned int reg,
                       uint32_t val, void *data)
 {
     struct vpci_bar *bar = data;
-    uint8_t slot = PCI_SLOT(pdev->devfn), func = PCI_FUNC(pdev->devfn);
     bool hi = false;
 
     if ( bar->type == VPCI_BAR_MEM64_HI )
@@ -372,9 +371,8 @@ static void bar_write(const struct pci_dev *pdev, unsigned int reg,
         /* If the value written is the current one avoid printing a warning. */
         if ( val != (uint32_t)(bar->addr >> (hi ? 32 : 0)) )
             gprintk(XENLOG_WARNING,
-                    "%04x:%02x:%02x.%u: ignored BAR %lu write with memory decoding enabled\n",
-                    pdev->seg, pdev->bus, slot, func,
-                    bar - pdev->vpci->header.bars + hi);
+                    "%pp: ignored BAR %lu write with memory decoding enabled\n",
+                    &pdev->sbdf, bar - pdev->vpci->header.bars + hi);
         return;
     }
 
@@ -402,15 +400,14 @@ static void rom_write(const struct pci_dev *pdev, unsigned int reg,
 {
     struct vpci_header *header = &pdev->vpci->header;
     struct vpci_bar *rom = data;
-    uint8_t slot = PCI_SLOT(pdev->devfn), func = PCI_FUNC(pdev->devfn);
     uint16_t cmd = pci_conf_read16(pdev->sbdf, PCI_COMMAND);
     bool new_enabled = val & PCI_ROM_ADDRESS_ENABLE;
 
     if ( (cmd & PCI_COMMAND_MEMORY) && header->rom_enabled && new_enabled )
     {
         gprintk(XENLOG_WARNING,
-                "%04x:%02x:%02x.%u: ignored ROM BAR write with memory decoding enabled\n",
-                pdev->seg, pdev->bus, slot, func);
+                "%pp: ignored ROM BAR write with memory decoding enabled\n",
+                &pdev->sbdf);
         return;
     }
 
diff --git a/xen/drivers/vpci/msi.c b/xen/drivers/vpci/msi.c
index 75010762ed..65db438d24 100644
--- a/xen/drivers/vpci/msi.c
+++ b/xen/drivers/vpci/msi.c
@@ -289,8 +289,7 @@ void vpci_dump_msi(void)
             msi = pdev->vpci->msi;
             if ( msi && msi->enabled )
             {
-                printk("%04x:%02x:%02x.%u MSI\n", pdev->seg, pdev->bus,
-                       PCI_SLOT(pdev->devfn), PCI_FUNC(pdev->devfn));
+                printk("%pp MSI\n", &pdev->sbdf);
 
                 printk("  enabled: %d 64-bit: %d",
                        msi->enabled, msi->address64);
@@ -307,8 +306,7 @@ void vpci_dump_msi(void)
             {
                 int rc;
 
-                printk("%04x:%02x:%02x.%u MSI-X\n", pdev->seg, pdev->bus,
-                       PCI_SLOT(pdev->devfn), PCI_FUNC(pdev->devfn));
+                printk("%pp MSI-X\n", &pdev->sbdf);
 
                 printk("  entries: %u maskall: %d enabled: %d\n",
                        msix->max_entries, msix->masked, msix->enabled);
diff --git a/xen/drivers/vpci/msix.c b/xen/drivers/vpci/msix.c
index 38c1e7e5dd..64dd0a929c 100644
--- a/xen/drivers/vpci/msix.c
+++ b/xen/drivers/vpci/msix.c
@@ -42,15 +42,14 @@ static uint32_t control_read(const struct pci_dev *pdev, unsigned int reg,
 static int update_entry(struct vpci_msix_entry *entry,
                         const struct pci_dev *pdev, unsigned int nr)
 {
-    uint8_t slot = PCI_SLOT(pdev->devfn), func = PCI_FUNC(pdev->devfn);
     int rc = vpci_msix_arch_disable_entry(entry, pdev);
 
     /* Ignore ENOENT, it means the entry wasn't setup. */
     if ( rc && rc != -ENOENT )
     {
         gprintk(XENLOG_WARNING,
-                "%04x:%02x:%02x.%u: unable to disable entry %u for update: %d\n",
-                pdev->seg, pdev->bus, slot, func, nr, rc);
+                "%pp: unable to disable entry %u for update: %d\n",
+                &pdev->sbdf, nr, rc);
         return rc;
     }
 
@@ -59,9 +58,8 @@ static int update_entry(struct vpci_msix_entry *entry,
                                                       VPCI_MSIX_TABLE));
     if ( rc )
     {
-        gprintk(XENLOG_WARNING,
-                "%04x:%02x:%02x.%u: unable to enable entry %u: %d\n",
-                pdev->seg, pdev->bus, slot, func, nr, rc);
+        gprintk(XENLOG_WARNING, "%pp: unable to enable entry %u: %d\n",
+                &pdev->sbdf, nr, rc);
         /* Entry is likely not properly configured. */
         return rc;
     }
@@ -72,7 +70,6 @@ static int update_entry(struct vpci_msix_entry *entry,
 static void control_write(const struct pci_dev *pdev, unsigned int reg,
                           uint32_t val, void *data)
 {
-    uint8_t slot = PCI_SLOT(pdev->devfn), func = PCI_FUNC(pdev->devfn);
     struct vpci_msix *msix = data;
     bool new_masked = val & PCI_MSIX_FLAGS_MASKALL;
     bool new_enabled = val & PCI_MSIX_FLAGS_ENABLE;
@@ -133,9 +130,8 @@ static void control_write(const struct pci_dev *pdev, unsigned int reg,
                 /* Ignore non-present entry. */
                 break;
             default:
-                gprintk(XENLOG_WARNING,
-                        "%04x:%02x:%02x.%u: unable to disable entry %u: %d\n",
-                        pdev->seg, pdev->bus, slot, func, i, rc);
+                gprintk(XENLOG_WARNING, "%pp: unable to disable entry %u: %d\n",
+                        &pdev->sbdf, i, rc);
                 return;
             }
         }
@@ -180,8 +176,7 @@ static bool access_allowed(const struct pci_dev *pdev, unsigned long addr,
         return true;
 
     gprintk(XENLOG_WARNING,
-            "%04x:%02x:%02x.%u: unaligned or invalid size MSI-X table access\n",
-            pdev->seg, pdev->bus, PCI_SLOT(pdev->devfn), PCI_FUNC(pdev->devfn));
+            "%pp: unaligned or invalid size MSI-X table access\n", &pdev->sbdf);
 
     return false;
 }
@@ -431,10 +426,9 @@ int vpci_make_msix_hole(const struct pci_dev *pdev)
             default:
                 put_gfn(d, start);
                 gprintk(XENLOG_WARNING,
-                        "%04x:%02x:%02x.%u: existing mapping (mfn: %" PRI_mfn
+                        "%pp: existing mapping (mfn: %" PRI_mfn
                         "type: %d) at %#lx clobbers MSIX MMIO area\n",
-                        pdev->seg, pdev->bus, PCI_SLOT(pdev->devfn),
-                        PCI_FUNC(pdev->devfn), mfn_x(mfn), t, start);
+                        &pdev->sbdf, mfn_x(mfn), t, start);
                 return -EEXIST;
             }
             put_gfn(d, start);
-- 
2.27.0



From xen-devel-bounces@lists.xenproject.org Mon Jul 27 10:34:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jul 2020 10:34:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k00SS-0003gJ-Gs; Mon, 27 Jul 2020 10:34:12 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1WzZ=BG=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1k00SR-0003gE-C4
 for xen-devel@lists.xenproject.org; Mon, 27 Jul 2020 10:34:11 +0000
X-Inumbo-ID: b42a652e-cff4-11ea-a77c-12813bfff9fa
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b42a652e-cff4-11ea-a77c-12813bfff9fa;
 Mon, 27 Jul 2020 10:34:09 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595846049;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:content-transfer-encoding:in-reply-to;
 bh=M+Mu8E2fuwPU/oncrBjE+zkk6QEy8SNkzt0ZiW48cAg=;
 b=HRFH86PpCPOi4bWX1+XeJPCN172akGNRyqZQvhlVibgytPt7MLjbjx/C
 m8LTcZoZifAxcExxX10sJI6Ls4wFRvbN1S9BvmXfd/Jih0NjQBzlf3cQj
 E3HZuFKqWQE6mf9qHEL1B/9TV5gzZfA7XBvuDgkAUD5CSyrCrB94YEHLG g=;
Authentication-Results: esa6.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: 6e6ewNdSYbd4plujjhsGgCQ5H6vxkWeP9ylTamwaSVM+is1Cn+l5iHu+MhZxJnXbH8FLGADftS
 XsR0roq3z7sNbjXxfb9kioino+/MqNTBTSPlvKXNpQkJCt9BPz4YnfY148x/NXggugE6Kw8uqD
 RrXkRBvbw35oNUD7eGHNWcDwlfoVA2yBgKk+UJGFM6nzTaYP+FLn9eavBro5HrfYjDXbEW7I3l
 Anb0hSTXhfrRkXLMNbKrWZPxyjr+ngzDzaVOWGoI9XRa9TsTariKPYKqrQKNSKBhQXrBD4cWXK
 2jQ=
X-SBRS: 2.7
X-MesageID: 23569533
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,402,1589256000"; d="scan'208";a="23569533"
Date: Mon, 27 Jul 2020 12:34:02 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Julien Grall <julien@xen.org>
Subject: Re: [RFC PATCH v1 1/4] arm/pci: PCI setup and PCI host bridge
 discovery within XEN on ARM.
Message-ID: <20200727103402.GP7191@Air-de-Roger>
References: <cover.1595511416.git.rahul.singh@arm.com>
 <64ebd4ef614b36a5844c52426a4a6a4a23b1f087.1595511416.git.rahul.singh@arm.com>
 <20200724144404.GJ7191@Air-de-Roger>
 <0c53b2cb-47e9-f34e-8922-7095669175be@xen.org>
 <980fc583-edb6-b536-f211-f6b8ea6d21a7@suse.com>
 <3e15d186-e323-613f-05a2-ee02480d74cf@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <3e15d186-e323-613f-05a2-ee02480d74cf@xen.org>
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Rahul Singh <rahul.singh@arm.com>, Bertrand.Marquis@arm.com,
 Stefano Stabellini <sstabellini@kernel.org>, Jan Beulich <jbeulich@suse.com>,
 xen-devel@lists.xenproject.org, nd@arm.com,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Fri, Jul 24, 2020 at 05:54:20PM +0100, Julien Grall wrote:
> Hi Jan,
> 
> On 24/07/2020 17:01, Jan Beulich wrote:
> > On 24.07.2020 17:15, Julien Grall wrote:
> > > On 24/07/2020 15:44, Roger Pau Monné wrote:
> > > > > +
> > > > > +    struct pci_host_bridge *bridge = pci_find_host_bridge(sbdf.seg, sbdf.bus);
> > > > > +
> > > > > +    if ( unlikely(!bridge) )
> > > > > +    {
> > > > > +        printk(XENLOG_ERR "Unable to find bridge for "PRI_pci"\n",
> > > > > +                sbdf.seg, sbdf.bus, sbdf.dev, sbdf.fn);
> > > > 
> > > > I had a patch to add a custom modifier to out printf format in
> > > > order to handle pci_sbdf_t natively:
> > > > 
> > > > https://patchew.org/Xen/20190822065132.48200-1-roger.pau@citrix.com/
> > > > 
> > > > It missed maintainers Acks and was never committed. Since you are
> > > > doing a bunch of work here, and likely adding a lot of SBDF related
> > > > prints, feel free to import the modifier (%pp) and use in your code
> > > > (do not attempt to switch existing users, or it's likely to get
> > > > stuck again).
> > > 
> > > I forgot about this patch :/. It would be good to revive it. Which acks
> > > are you missing?
> > 
> > It wasn't so much missing acks, but a controversy. And that not so much
> > about switching existing users, but whether to indeed derive this from
> > %p (which I continue to consider inefficient).
> 
> Looking at the thread, I can see you (reluctantly) acked all the components
> that you are the sole maintainer of. Kevin gave his ack for the vtd code and
> I gave mine for the common code.
> 
> I would suggest not rehashing the argument unless another maintainer agrees
> with your position. It looks to me like the next step is for Roger (or
> someone else) to resend the patch so we can collect the missing ack (I
> think the only one missing is from Andrew).

I've rebased and sent the updated patch with the collected Acks.

Roger.


From xen-devel-bounces@lists.xenproject.org Mon Jul 27 11:07:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jul 2020 11:07:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k00yB-0006Le-36; Mon, 27 Jul 2020 11:06:59 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1WzZ=BG=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1k00y9-0006LZ-SV
 for xen-devel@lists.xenproject.org; Mon, 27 Jul 2020 11:06:57 +0000
X-Inumbo-ID: 48782d66-cff9-11ea-8a94-bc764e2007e4
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 48782d66-cff9-11ea-8a94-bc764e2007e4;
 Mon, 27 Jul 2020 11:06:56 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595848015;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:in-reply-to;
 bh=RTz+nVHoLnPbUmm2hTSMnM8sFKDVo+OtCBZA0WSy5vY=;
 b=J+1w66Xzj5tgun+ibOvlHDKaLK2SAwr5faNhAGFfhamc4lH1cnicUdsz
 KOxhcBrw3TZjFEi1kUH/Gn1TGYTssbyJeWTo4plUtTa4r+Q3nQtX7wkt1
 S3r7Xa05+WZlDjnjgfwnECcPW7ngrRel7lUmk3gDMDfmATy4kpj148R9F w=;
Authentication-Results: esa6.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: XmJ2jXUvBWAReMg+BECCyeFQZXofy6EVvsx0zrqlH4bdjSp6cTPNOOrc4P45LI0niJwO2BqQP2
 uhg2cbgnktw54MTjFoh6m+ebwPPQazKX/EYkqF1IcY76NAWrgeErrSG/6XC4CsO8brmbTVkk0+
 fEQFYmPgsJhJEpOGkv9DVK2OzSSJMjNzQ6Q1RELvb/VZM79tJpmBNfPAYaXhbigDogmI+/npNh
 19p9fBlJyy0AjnQn8FOxCamUiVOs6+FTQBm9tCHiZ425YXouZimZ95ZCMeU94DHdZuhc/HUUoU
 z0g=
X-SBRS: 2.7
X-MesageID: 23571136
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,402,1589256000"; d="scan'208";a="23571136"
Date: Mon, 27 Jul 2020 13:06:48 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Julien Grall <julien.grall.oss@gmail.com>
Subject: Re: [RFC PATCH v1 1/4] arm/pci: PCI setup and PCI host bridge
 discovery within XEN on ARM.
Message-ID: <20200727110648.GQ7191@Air-de-Roger>
References: <cover.1595511416.git.rahul.singh@arm.com>
 <64ebd4ef614b36a5844c52426a4a6a4a23b1f087.1595511416.git.rahul.singh@arm.com>
 <alpine.DEB.2.21.2007231055230.17562@sstabellini-ThinkPad-T480s>
 <9f09ff42-a930-e4e3-d1c8-612ad03698ae@xen.org>
 <alpine.DEB.2.21.2007241036460.17562@sstabellini-ThinkPad-T480s>
 <40582d63-49c7-4a51-b35b-8248dfa34b66@xen.org>
 <alpine.DEB.2.21.2007241127480.17562@sstabellini-ThinkPad-T480s>
 <CAJ=z9a3dXSnEBvhkHkZzV9URAGqSfdtJ1Lc838h_ViAWG3ZO4g@mail.gmail.com>
 <alpine.DEB.2.21.2007241353450.17562@sstabellini-ThinkPad-T480s>
 <CAJ=z9a1RWXq3EN5DC=_279yzdsq3M0nw6+CZtKD00yBzKomcaw@mail.gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
In-Reply-To: <CAJ=z9a1RWXq3EN5DC=_279yzdsq3M0nw6+CZtKD00yBzKomcaw@mail.gmail.com>
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: nd <nd@arm.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>, Jan Beulich <jbeulich@suse.com>,
 xen-devel <xen-devel@lists.xenproject.org>, Rahul Singh <rahul.singh@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Sat, Jul 25, 2020 at 10:59:50AM +0100, Julien Grall wrote:
> On Sat, 25 Jul 2020 at 00:46, Stefano Stabellini <sstabellini@kernel.org> wrote:
> >
> > On Fri, 24 Jul 2020, Julien Grall wrote:
> > > On Fri, 24 Jul 2020 at 19:32, Stefano Stabellini <sstabellini@kernel.org> wrote:
> > > > > If they are not equal, then I fail to see why it would be useful to have this
> > > > > value in Xen.
> > > >
> > > > I think that's because the domain is actually more convenient to use
> > > > because a segment can span multiple PCI host bridges. So my
> > > > understanding is that a segment alone is not sufficient to identify a
> > > > host bridge. From a software implementation point of view it would be
> > > > better to use domains.
> > >
> > > AFAICT, this would be a matter of one check vs two checks in Xen :).
> > > But... looking at Linux, they will also use domain == segment for ACPI
> > > (see [1]). So, I think, they still have to use (domain, bus) to do the lookup.

You have to use the (segment, bus) tuple when doing a lookup because
MMCFG regions on ACPI are defined per segment and bus range: you can
have an MMCFG region that covers segment 0 buses [0, 20) and another
MMCFG region that covers segment 0 buses [20, 255], and the two will
use different addresses in the MMIO space.

> > > > > In which case, we need to use PHYSDEVOP_pci_mmcfg_reserved so
> > > > > Dom0 and Xen can synchronize on the segment number.
> > > >
> > > > I was hoping we could write down the assumption somewhere that for the
> > > > cases we care about domain == segment, and error out if it is not the
> > > > case.
> > >
> > > Given that we have only the domain in hand, how would you enforce that?
> > >
> > > From this discussion, it also looks like there is a mismatch between the
> > > implementation and the understanding on QEMU devel. So I am a bit
> > > concerned that this is not stable and may change in future Linux version.
> > >
> > > IOW, we are now tying Xen to Linux. So could we implement
> > > PHYSDEVOP_pci_mmcfg_reserved *or* introduce a new property that
> > > really represent the segment?
> >
> > I don't think we are tying Xen to Linux. Rob has already said that
> > linux,pci-domain is basically a generic device tree property.
> 
> My concern is not so much the name of the property, but the definition of it.
> 
> AFAICT, from this thread there can be two interpretation:
>       - domain == segment
>       - domain == (segment, bus)

I think domain is just an alias for segment, the difference seems to
be that when using DT all bridges get a different segment (or domain)
number, and thus you will always end up starting numbering at bus 0
for each bridge?

Ideally you would need a way to specify the segment and the start/end
bus numbers of each bridge; otherwise you cannot match what ACPI does.
That said, it might be fine as long as the OS and Xen agree on the
segments and bus numbers that belong to each bridge (and thus each
ECAM region).

Roger.


From xen-devel-bounces@lists.xenproject.org Mon Jul 27 11:57:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jul 2020 11:57:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k01ku-0002BG-T8; Mon, 27 Jul 2020 11:57:20 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9AKR=BG=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1k01kt-0002BB-24
 for xen-devel@lists.xenproject.org; Mon, 27 Jul 2020 11:57:19 +0000
X-Inumbo-ID: 511d1628-d000-11ea-8aa6-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 511d1628-d000-11ea-8aa6-bc764e2007e4;
 Mon, 27 Jul 2020 11:57:16 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=4gVn1eyjHXmuT9ikWPtvwZgURJAlza6CPe09TyiNskk=; b=cRWiate4utP1eSakXrBA3YqYU
 jKsW7s1CsT4ukmuimdUbxG9Uq9RtTIVnozaAy6gddNxR5B46Sn3pwmdNkrWRn/06JXMzRaodPIEXR
 MtArfFP+YIrjkVqKot6XHPiJuwLXLTonKGXVdRMCLbg4sBZjyg3qmXaINYWhU01B2375s=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1k01kp-0006if-Mh; Mon, 27 Jul 2020 11:57:15 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1k01kp-0000zF-AZ; Mon, 27 Jul 2020 11:57:15 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1k01kp-0002sI-9s; Mon, 27 Jul 2020 11:57:15 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-152223-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 152223: regressions - FAIL
X-Osstest-Failures: linux-linus:test-amd64-i386-pair:xen-boot/dst_host:fail:regression
 linux-linus:test-arm64-arm64-libvirt-xsm:guest-start/debian.repeat:fail:regression
 linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: linux=92ed301919932f777713b9172e525674157e983d
X-Osstest-Versions-That: linux=1b5044021070efa3259f3e9548dc35d1eb6aa844
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 27 Jul 2020 11:57:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 152223 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/152223/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-pair         11 xen-boot/dst_host        fail REGR. vs. 151214
 test-arm64-arm64-libvirt-xsm 16 guest-start/debian.repeat fail REGR. vs. 151214

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 151214
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 151214
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 151214
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 151214
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 151214
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 151214
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 151214
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 151214
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                92ed301919932f777713b9172e525674157e983d
baseline version:
 linux                1b5044021070efa3259f3e9548dc35d1eb6aa844

Last test of basis   151214  2020-06-18 02:27:46 Z   39 days
Failing since        151236  2020-06-19 19:10:35 Z   37 days   59 attempts
Testing same since   152223  2020-07-27 00:11:45 Z    0 days    1 attempts

------------------------------------------------------------
932 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 53544 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Jul 27 12:46:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jul 2020 12:46:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k02Vt-0006XI-4H; Mon, 27 Jul 2020 12:45:53 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wM/5=BG=xen.org=hx242@srs-us1.protection.inumbo.net>)
 id 1k02Vr-0006XD-Cy
 for xen-devel@lists.xenproject.org; Mon, 27 Jul 2020 12:45:51 +0000
X-Inumbo-ID: 18f499af-d007-11ea-8ab2-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 18f499af-d007-11ea-8ab2-bc764e2007e4;
 Mon, 27 Jul 2020 12:45:50 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Mime-Version:Content-Type:
 References:In-Reply-To:Date:Cc:To:From:Subject:Message-ID:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=dUPhyrbRxZ9/iZmX05oh+qy6eFjaF0RYmj/4gpOnY9s=; b=lvSwyKcqkWIendRUb6sDmF4lJL
 YtKwNxqQ+kuzzSjnsoMLK6Ffjg6SNXPq7bwEV7G1O2zH1Q4+/PI756sW8Zi5JjN/jS28PUPr2uVs3
 KvTihwkVdSIvjDMUZIYWaPIcqgYwq+0LUhUDWQbJSOM+05iQJwbupfj78qLCkK0ufE4g=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <hx242@xen.org>)
 id 1k02Vp-0007gT-PY; Mon, 27 Jul 2020 12:45:49 +0000
Received: from 54-240-197-233.amazon.com ([54.240.197.233]
 helo=edge-cache-102.e-fra50.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <hx242@xen.org>)
 id 1k02Vp-0005Sy-EH; Mon, 27 Jul 2020 12:45:49 +0000
Message-ID: <0c421dee1729295eb8504ee81abbc8e57f220b12.camel@xen.org>
Subject: Re: [PATCH v7 09/15] efi: use new page table APIs in copy_mapping
From: Hongyan Xia <hx242@xen.org>
To: Jan Beulich <jbeulich@suse.com>
Date: Mon, 27 Jul 2020 13:45:47 +0100
In-Reply-To: <bfe28c9c-af4e-96c2-9e6c-354a5bf626d8@suse.com>
References: <cover.1590750232.git.hongyxia@amazon.com>
 <0259b645c81ecc3879240e30760b0e7641a2b602.1590750232.git.hongyxia@amazon.com>
 <bfe28c9c-af4e-96c2-9e6c-354a5bf626d8@suse.com>
Content-Type: text/plain; charset="UTF-8"
X-Mailer: Evolution 3.28.5-0ubuntu0.18.04.2 
Mime-Version: 1.0
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, julien@xen.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Tue, 2020-07-14 at 14:42 +0200, Jan Beulich wrote:
> On 29.05.2020 13:11, Hongyan Xia wrote:
> > From: Wei Liu <wei.liu2@citrix.com>
> > 
> > After inspection ARM doesn't have alloc_xen_pagetable so this
> > function
> > is x86 only, which means it is safe for us to change.
> 
> Well, it sits inside a "#ifndef CONFIG_ARM" section.
> 
> > @@ -1442,29 +1443,42 @@ static __init void copy_mapping(unsigned
> > long mfn, unsigned long end,
> >                                                   unsigned long
> > emfn))
> >  {
> >      unsigned long next;
> > +    l3_pgentry_t *l3src = NULL, *l3dst = NULL;
> >  
> >      for ( ; mfn < end; mfn = next )
> >      {
> >          l4_pgentry_t l4e = efi_l4_pgtable[l4_table_offset(mfn <<
> > PAGE_SHIFT)];
> > -        l3_pgentry_t *l3src, *l3dst;
> >          unsigned long va = (unsigned long)mfn_to_virt(mfn);
> >  
> > +        if ( !((mfn << PAGE_SHIFT) & ((1UL << L4_PAGETABLE_SHIFT)
> > - 1)) )
> 
> To be in line with ...
> 
> > +        {
> > +            UNMAP_DOMAIN_PAGE(l3src);
> > +            UNMAP_DOMAIN_PAGE(l3dst);
> > +        }
> >          next = mfn + (1UL << (L3_PAGETABLE_SHIFT - PAGE_SHIFT));
> 
> ... this, please avoid the left shift of mfn in the if(). Judging from

What do you mean by "in line" here? It does not look to me like the
"next =" assignment can be easily squashed into the if() condition.

Hongyan



From xen-devel-bounces@lists.xenproject.org Mon Jul 27 13:26:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jul 2020 13:26:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k039M-0001Vg-FU; Mon, 27 Jul 2020 13:26:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=KGXS=BG=gmail.com=rosbrookn@srs-us1.protection.inumbo.net>)
 id 1k039L-0001Vb-IK
 for xen-devel@lists.xenproject.org; Mon, 27 Jul 2020 13:26:39 +0000
X-Inumbo-ID: cd299500-d00c-11ea-8abe-bc764e2007e4
Received: from mail-qk1-x742.google.com (unknown [2607:f8b0:4864:20::742])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id cd299500-d00c-11ea-8abe-bc764e2007e4;
 Mon, 27 Jul 2020 13:26:38 +0000 (UTC)
Received: by mail-qk1-x742.google.com with SMTP id e13so15155744qkg.5
 for <xen-devel@lists.xenproject.org>; Mon, 27 Jul 2020 06:26:38 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:to:cc:subject:date:message-id;
 bh=Kx49vDFEnCjpI9bgQaIAKZwKthm3TcnrdPiQQLY4D+g=;
 b=Fjpay9uk88OzlbDiI4XZj29nQRdpswTCUxSE5QdvPFokrh9SFKNDQMiK6u+vyo9wdQ
 swXZx57LaMMCFUyHaoIXSNvXfUyOA5c85IixL7WdwtZg6h+Z6Z2nxHe4/iokGGMriV5+
 spt+rRmwVhWD8wTYJHxSveQcCE+uWEn5WXG1Tzp63lZAS3oUIFT7tMdgKMLNABgUt0kZ
 Sr0gJdffacM8dR7FA43XVFF6YlzFsoQ3LHLifWEChq6mMMiqXsRk78tvN0ShZbuX14wU
 N+E9KT3WeYN+d2UAm9EW395tn6hkZWEDUMMfHN532cTnh0eXEw2sxLPXUJyxcE0jIBAn
 XVsg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:to:cc:subject:date:message-id;
 bh=Kx49vDFEnCjpI9bgQaIAKZwKthm3TcnrdPiQQLY4D+g=;
 b=Nx3p+rfgv0Yb51sx6kPO5Uo2XTUKPr5zk5DLx7MkAMWuil6xS/4jX1sktKj71iQFIF
 6tRF3NjH0S/+f0onf4w59TTR5TkEzuvWksaI9xdg3cUv0pjOqStspPVeDSglwv/Q3V8R
 KxDw81NQV3cgticUyEmZ4Wf6E9z2a0p9RCVSrLYxAHJnkCea+TXOAVUtMivGZoGUTAr7
 z/q+5r1zwSZPr7aAmrsIvr5+ubsV2/b+BTEEdAPrZKZ5yvmCl96WjKF8/8PmVF4J2p3l
 ptS5m50ZDMM84NLgru9o0f+EGN19QwzRL2eYltXr2Ab6HcIBoavdCpVS+D3zID+1uz+R
 nepw==
X-Gm-Message-State: AOAM533hcZrMlp/8JAls0x1IqHZLXV8Dez3A0I5+6XUOBRrENnaWszhx
 90Ry3hY30NoynYDDDjQzNQuV5Xo05OY=
X-Google-Smtp-Source: ABdhPJxGoSMrCEdz6hfZkFVRol1BJDahKxEfx9VpH4C9sQeIhlzkV19UDfnTzYgRfFKQbtFZt3/n9Q==
X-Received: by 2002:a37:5bc3:: with SMTP id
 p186mr23556116qkb.401.1595856397970; 
 Mon, 27 Jul 2020 06:26:37 -0700 (PDT)
Received: from six.lan (cpe-67-241-56-252.twcny.res.rr.com. [67.241.56.252])
 by smtp.gmail.com with ESMTPSA id t8sm11828003qtc.50.2020.07.27.06.26.35
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 27 Jul 2020 06:26:36 -0700 (PDT)
From: Nick Rosbrook <rosbrookn@gmail.com>
X-Google-Original-From: Nick Rosbrook <rosbrookn@ainfosec.com>
To: xen-devel@lists.xenproject.org
Subject: [RFC PATCH 0/2] add function support to IDL
Date: Mon, 27 Jul 2020 09:26:31 -0400
Message-Id: <cover.1595854292.git.rosbrookn@ainfosec.com>
X-Mailer: git-send-email 2.17.1
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Nick Rosbrook <rosbrookn@ainfosec.com>,
 Anthony PERARD <anthony.perard@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>, george.dunlap@citrix.com,
 Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

At a Xen Summit design session for the golang bindings (see [1]), we
agreed that it would be beneficial to expand the libxl IDL with function
support. In addition to benefiting libxl itself, this would allow other
language bindings to easily generate function wrappers.

These RFC patches outline a potential strategy for accomplishing this
goal. The first patch adds the Function and CtxFunction classes to
libxl/idl.py, introducing the notion of functions to the IDL. The
second patch adds a DeviceFunction class and some sample definitions
to libxl/libxl_types.idl for illustration.

[1] https://lists.xenproject.org/archives/html/xen-devel/2020-07/msg00964.html
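
A rough sketch of what such classes could look like in idl.py; the
field names and the prototype rendering are illustrative guesses, not
taken from the actual patches:

```python
# Hypothetical sketch of IDL function support. Names and fields are
# invented for illustration; they do not mirror the real patches.

class Function:
    """A C function signature that generator scripts can consume."""

    def __init__(self, name, params=None, return_type="int"):
        self.name = name                # C function name
        self.params = params or []      # list of (type, name) tuples
        self.return_type = return_type  # C return type as a string

    def signature(self):
        """Render a C prototype, as a generator script might."""
        args = ", ".join(f"{t} {n}" for t, n in self.params) or "void"
        return f"{self.return_type} {self.name}({args});"


class CtxFunction(Function):
    """A Function whose first parameter is implicitly libxl_ctx *ctx."""

    def __init__(self, name, params=None, return_type="int"):
        super().__init__(name,
                         [("libxl_ctx *", "ctx")] + (params or []),
                         return_type)
```

With a definition like CtxFunction("libxl_device_nic_add",
[("uint32_t", "domid")]), signature() would render
"int libxl_device_nic_add(libxl_ctx * ctx, uint32_t domid);", which a
language binding could then wrap mechanically.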

Nick Rosbrook (2):
  libxl: add Function class to IDL
  libxl: prototype libxl_device_nic_add/remove with IDL

 tools/golang/xenlight/gengotypes.py |  2 +-
 tools/libxl/gentypes.py             |  2 +-
 tools/libxl/idl.py                  | 54 ++++++++++++++++++++++++++++-
 tools/libxl/libxl_types.idl         |  6 ++++
 4 files changed, 61 insertions(+), 3 deletions(-)

-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Mon Jul 27 13:26:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jul 2020 13:26:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k039R-0001Vu-OC; Mon, 27 Jul 2020 13:26:45 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=KGXS=BG=gmail.com=rosbrookn@srs-us1.protection.inumbo.net>)
 id 1k039Q-0001Vb-Dz
 for xen-devel@lists.xenproject.org; Mon, 27 Jul 2020 13:26:44 +0000
X-Inumbo-ID: cde0e7a1-d00c-11ea-8abe-bc764e2007e4
Received: from mail-qk1-x72a.google.com (unknown [2607:f8b0:4864:20::72a])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id cde0e7a1-d00c-11ea-8abe-bc764e2007e4;
 Mon, 27 Jul 2020 13:26:40 +0000 (UTC)
Received: by mail-qk1-x72a.google.com with SMTP id l23so15195448qkk.0
 for <xen-devel@lists.xenproject.org>; Mon, 27 Jul 2020 06:26:40 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:to:cc:subject:date:message-id:in-reply-to:references
 :in-reply-to:references;
 bh=KaMEYjlDG3HM6TfMMDf5x6QCTVdKePy2b/fpNHgqtT4=;
 b=AoNJiUMrGU1/zEcVI7rNtQsygstiui6pLZ/O79Gk45wErhqsFyZeXorPFU8EsqyGWv
 p3dmphPgk/gPAsY70nwA+zdMiOy36q17JfiR9hys2QwDiD1/WczxcuVHy1eg0ZZwTODP
 uKpv+ItSTW7lX1qDUMQYI0/s6Mkjp4XJTnmLr1EmfFLnTLTbMuJVzFbjx4KaR0iWlNWl
 4dIPZWNhU03R7Fdj3emaHVlvP60qgEeNPjsStugzCN6ZkvksBHD7yQJRKL730/oVuzvr
 99Rswo1yVL7OYds1gt68gcDN0aQ1dIRL6sQMNHZ4l1KO7/D5NdsF9Iq5ehHsyo7Luqk2
 gFYg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
 :references:in-reply-to:references;
 bh=KaMEYjlDG3HM6TfMMDf5x6QCTVdKePy2b/fpNHgqtT4=;
 b=A66FOtJ0mLQkqX1iU1mRXDQfn0ow25f/8tvWykOzalaQx6t5R/05lnTYmQuBQePKIK
 Fzrdvc1/Rx9sU3aXUXYqxiYieuCXtMdeHYNmD6DUUri/J9o5M+6M6D1oEYUtLWgFrAkg
 5nkBRZLyMM9D08xbaXetiyoyI/EeQUURbYyChB7vb3X3fs/0NGqcTDXTfv/43WrcvtpT
 4N8EAld5Jwr7R0IEG1cbvq+a0+VVftqIJxQYyp5myazoab8FhdjrXCt8yKzjKaHBsga/
 BGqmCuw4X8VyF6eo4l+3ltnmbqQimmF6OxiqBTYRp/1655qe8WHF+aYVV1c1AoLJFcNM
 5LPw==
X-Gm-Message-State: AOAM530JEJ6Mc+r5rLur5H/d59hBuwi7Xke9ia8sIxxrI7xOUK/WbZBb
 AzUNJ/ETn7cZomdcGNLQjuZ9Cw2CBgA=
X-Google-Smtp-Source: ABdhPJwh5q3SSQneLjUt5N34KQd0L5ewFAMB6FFOFYJbRJ4Fv2ddzOGdKpRH9TNoBLqx2vdusuMLTg==
X-Received: by 2002:a05:620a:153c:: with SMTP id
 n28mr9178029qkk.285.1595856399792; 
 Mon, 27 Jul 2020 06:26:39 -0700 (PDT)
Received: from six.lan (cpe-67-241-56-252.twcny.res.rr.com. [67.241.56.252])
 by smtp.gmail.com with ESMTPSA id t8sm11828003qtc.50.2020.07.27.06.26.38
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 27 Jul 2020 06:26:38 -0700 (PDT)
From: Nick Rosbrook <rosbrookn@gmail.com>
X-Google-Original-From: Nick Rosbrook <rosbrookn@ainfosec.com>
To: xen-devel@lists.xenproject.org
Subject: [RFC PATCH 1/2] libxl: add Function class to IDL
Date: Mon, 27 Jul 2020 09:26:32 -0400
Message-Id: <7e1774dffe69c702f738566abeb04a3a9d29e21b.1595854292.git.rosbrookn@ainfosec.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1595854292.git.rosbrookn@ainfosec.com>
References: <cover.1595854292.git.rosbrookn@ainfosec.com>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Nick Rosbrook <rosbrookn@ainfosec.com>,
 Anthony PERARD <anthony.perard@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>, george.dunlap@citrix.com,
 Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Add Function and CtxFunction classes to idl.py so that generator
scripts can generate wrappers for functions that are repetitive and
straightforward to write by hand. Examples of such functions are the
device_add/remove functions.

To start, a Function has attributes for namespace, name, parameters,
return type, and an indication of whether the return value should be
interpreted as a status code. The CtxFunction class extends this by
indicating that a libxl_ctx is a required parameter, and that the
function can optionally be an async operation.
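
For illustration, the shape of these classes can be sketched standalone
(a minimal approximation of the idl.py additions; the "libxl_" default
below stands in for idl.py's namespace handling, and strings stand in
for its Type objects, so this sketch runs on its own):

```python
# Minimal standalone sketch of the Function and CtxFunction classes this
# patch adds to idl.py.  "libxl_" stands in for the default namespace
# lookup; Type instances are replaced by plain strings.
class Function(object):
    def __init__(self, name, params=None, return_type=None,
                 return_is_status=False, namespace=None):
        # The stored name is always namespace-qualified.
        self.namespace = "libxl_" if namespace is None else namespace
        self.name = self.namespace + name
        self.params = params
        self.return_type = return_type
        self.return_is_status = return_is_status

class CtxFunction(Function):
    def __init__(self, name, params=None, return_type=None,
                 return_is_status=False, is_asyncop=False):
        # A CtxFunction additionally records whether the function takes
        # a libxl_asyncop_how parameter.
        self.is_asyncop = is_asyncop
        Function.__init__(self, name, params, return_type, return_is_status)

f = CtxFunction("device_nic_add", params=[("domid", "uint32")],
                return_type="integer", return_is_status=True,
                is_asyncop=True)
print(f.name)  # libxl_device_nic_add
```

The real classes take Type instances for params and return_type; strings
are used here only to keep the sketch self-contained.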

Also, add logic to idl.parse to return the list of functions found in an
IDL file. For now, have users of idl.py -- i.e. libxl/gentypes.py and
golang/xenlight/gengotypes.py -- ignore the list of functions returned.

Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com>
---
 tools/golang/xenlight/gengotypes.py |  2 +-
 tools/libxl/gentypes.py             |  2 +-
 tools/libxl/idl.py                  | 46 ++++++++++++++++++++++++++++-
 3 files changed, 47 insertions(+), 3 deletions(-)

diff --git a/tools/golang/xenlight/gengotypes.py b/tools/golang/xenlight/gengotypes.py
index 557fecd07b..bd3663c9ea 100644
--- a/tools/golang/xenlight/gengotypes.py
+++ b/tools/golang/xenlight/gengotypes.py
@@ -718,7 +718,7 @@ def xenlight_golang_fmt_name(name, exported = True):
 if __name__ == '__main__':
     idlname = sys.argv[1]
 
-    (builtins, types) = idl.parse(idlname)
+    (builtins, types, _) = idl.parse(idlname)
 
     for b in builtins:
         name = b.typename
diff --git a/tools/libxl/gentypes.py b/tools/libxl/gentypes.py
index 9a45e45acc..ac7306f3f7 100644
--- a/tools/libxl/gentypes.py
+++ b/tools/libxl/gentypes.py
@@ -592,7 +592,7 @@ if __name__ == '__main__':
 
     (_, idlname, header, header_private, header_json, impl) = sys.argv
 
-    (builtins,types) = idl.parse(idlname)
+    (builtins,types,_) = idl.parse(idlname)
 
     print("outputting libxl type definitions to %s" % header)
 
diff --git a/tools/libxl/idl.py b/tools/libxl/idl.py
index d7367503b4..1839871f86 100644
--- a/tools/libxl/idl.py
+++ b/tools/libxl/idl.py
@@ -347,6 +347,45 @@ class OrderedDict(dict):
     def ordered_items(self):
         return [(x,self[x]) for x in self.__ordered]
 
+class Function(object):
+    """
+    A general description of a function signature.
+
+    Attributes:
+      name (str): name of the function, excluding namespace.
+      params (list of (str,Type)): list of function parameters.
+      return_type (Type): the Type (if any) returned by the function.
+      return_is_status (bool): indicates that the return value should be
+                               interpreted as an error/status code.
+    """
+    def __init__(self, name=None, params=None, return_type=None,
+                 return_is_status=False, namespace=None):
+
+        if namespace is None:
+            self.namespace = _get_default_namespace()
+        else:
+            self.namespace = namespace
+
+        self.name = self.namespace + name
+        self.params = params
+        self.return_type = return_type
+        self.return_is_status = return_is_status
+
+class CtxFunction(Function):
+    """
+    A function that requires a libxl_ctx.
+
+    Attributes:
+      is_asyncop (bool): indicates that the function accepts a
+                         libxl_asyncop_how parameter.
+    """
+    def __init__(self, name=None, params=None, return_type=None,
+                 return_is_status=False, is_asyncop=False):
+
+        self.is_asyncop = is_asyncop
+
+        Function.__init__(self, name, params, return_type, return_is_status)
+
 def parse(f):
     print("Parsing %s" % f, file=sys.stderr)
 
@@ -358,6 +397,10 @@ def parse(f):
             globs[n] = t
         elif isinstance(t,type(object)) and issubclass(t, Type):
             globs[n] = t
+        elif isinstance(t, Function):
+            globs[n] = t
+        elif isinstance(t,type(object)) and issubclass(t, Function):
+            globs[n] = t
         elif n in ['PASS_BY_REFERENCE', 'PASS_BY_VALUE',
                    'DIR_NONE', 'DIR_IN', 'DIR_OUT', 'DIR_BOTH',
                    'namespace', 'hidden']:
@@ -370,8 +413,9 @@ def parse(f):
                           % (e.lineno, f, e.text))
 
     types = [t for t in locs.ordered_values() if isinstance(t,Type)]
+    funcs = [f for f in locs.ordered_values() if isinstance(f,Function)]
 
     builtins = [t for t in types if isinstance(t,Builtin)]
     types = [t for t in types if not isinstance(t,Builtin)]
 
-    return (builtins,types)
+    return (builtins,types,funcs)
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Mon Jul 27 13:26:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jul 2020 13:26:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k039X-0001WL-1c; Mon, 27 Jul 2020 13:26:51 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=KGXS=BG=gmail.com=rosbrookn@srs-us1.protection.inumbo.net>)
 id 1k039V-0001Vb-E5
 for xen-devel@lists.xenproject.org; Mon, 27 Jul 2020 13:26:49 +0000
X-Inumbo-ID: cf386830-d00c-11ea-8abe-bc764e2007e4
Received: from mail-qk1-x736.google.com (unknown [2607:f8b0:4864:20::736])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id cf386830-d00c-11ea-8abe-bc764e2007e4;
 Mon, 27 Jul 2020 13:26:42 +0000 (UTC)
Received: by mail-qk1-x736.google.com with SMTP id l6so15157261qkc.6
 for <xen-devel@lists.xenproject.org>; Mon, 27 Jul 2020 06:26:42 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:to:cc:subject:date:message-id:in-reply-to:references
 :in-reply-to:references;
 bh=CV8gyjU9PwIWw8K/PGrpSXQm3oWpb9KSiaV2uQ4454U=;
 b=lay2kK3CCt51wkbTIp8LTo2lZhzDQnuSNFU+0RWOUNzK9LMgcSIkudUrTf9IAiY1GA
 fM9vgNgJkZRjrBny0DX0qd/5qTR2n9PkCKhfmzcBXazOhD12PgaaH4vzE/WYn2dWIGHf
 PFB3TsoOP5ItV+VnAZdkmyXToxDIHYFgdGZOQaSHWlyXNkyg+IPjqJbg1JrL6MC1qA+o
 vUEsEkGE2QX1o6s2XCuCNL3tRCABo8VQmN92QcqZwYjpM3Jn6WzYZTYrXpCuAS+Dhg5G
 ZWjwfdYxKZHXD9hQukmZTvdQyOWf7HzFNcc26yEN0c+5ozfQcc2Vrloo9WXSYz3G0CEG
 ztLw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
 :references:in-reply-to:references;
 bh=CV8gyjU9PwIWw8K/PGrpSXQm3oWpb9KSiaV2uQ4454U=;
 b=lE3xe9zvjYEXFEgF6iYmbaL68UoRbgUmA9tB50YS5OrVwCNDezBu4nHN4cfK4U0ANQ
 hBgcdEeJlXt4emztcdL5NopXprtgpA6ol42EpB6TFofksX7YcSwmXfaF4WhSnp8PCxC0
 AS8HGn6Z5Em6RplgWcPb4g/zaUWIEbf0UAknQeUBoxWoSYhTTj3sYTsBOo9pw/4OlArN
 t/zsplJNSo4ZRu4O2eLDspnrLa0KKOOy/UMyGL/iMNplXCjetq4seKYkHfqCNRzJFtCR
 OyJKJ5eZLIqRHkZBAiC6nq2KWE0XJ7Hg6Peu554G0w1QGRzBSJKQRfsM8NOjrheCqx9I
 dPUg==
X-Gm-Message-State: AOAM530ebiFQYTrPV5PHbph8c0/rh5kiQ2dKJ+FwnDR6eViXEnw1D/6m
 UWHGnnH5cfm1BJyyBYGZqPrWA8P/n6Q=
X-Google-Smtp-Source: ABdhPJwHEgYH4RxJ65dlOvT3skSegMWUJKq1BEmSOXGP8QWPnZMs5dxi3yTrOWud3p4+upZklo4niA==
X-Received: by 2002:a37:a503:: with SMTP id o3mr22160178qke.162.1595856401512; 
 Mon, 27 Jul 2020 06:26:41 -0700 (PDT)
Received: from six.lan (cpe-67-241-56-252.twcny.res.rr.com. [67.241.56.252])
 by smtp.gmail.com with ESMTPSA id t8sm11828003qtc.50.2020.07.27.06.26.39
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 27 Jul 2020 06:26:40 -0700 (PDT)
From: Nick Rosbrook <rosbrookn@gmail.com>
X-Google-Original-From: Nick Rosbrook <rosbrookn@ainfosec.com>
To: xen-devel@lists.xenproject.org
Subject: [RFC PATCH 2/2] libxl: prototype libxl_device_nic_add/remove with IDL
Date: Mon, 27 Jul 2020 09:26:33 -0400
Message-Id: <b7313e96b6865bb13b221720a437c6e2ac57140c.1595854292.git.rosbrookn@ainfosec.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1595854292.git.rosbrookn@ainfosec.com>
References: <cover.1595854292.git.rosbrookn@ainfosec.com>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Nick Rosbrook <rosbrookn@ainfosec.com>,
 Anthony PERARD <anthony.perard@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>, george.dunlap@citrix.com,
 Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Add a DeviceFunction class and describe prototypes for
libxl_device_nic_add/remove in libxl_types.idl.
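
As a rough illustration of what DeviceFunction fixes in place (the
leading domid parameter, the status-code return, and the async
operation), here is a self-contained sketch mirroring the class added in
this patch; strings stand in for idl.py's uint32/integer builtins and
for the device struct's Type:

```python
# Standalone sketch of DeviceFunction from this patch, with simplified
# Function/CtxFunction bases.  Strings replace idl.py Type objects.
class Function(object):
    def __init__(self, name, params=None, return_type=None,
                 return_is_status=False, namespace="libxl_"):
        self.name = namespace + name
        self.params = params
        self.return_type = return_type
        self.return_is_status = return_is_status

class CtxFunction(Function):
    def __init__(self, name, params=None, return_type=None,
                 return_is_status=False, is_asyncop=False):
        self.is_asyncop = is_asyncop
        Function.__init__(self, name, params, return_type, return_is_status)

class DeviceFunction(CtxFunction):
    """A function that modifies a device: always takes (domid, device),
    returns a status code, and is an async operation."""
    def __init__(self, name, device_param):
        params = [("domid", "uint32"), device_param]
        CtxFunction.__init__(self, name, params=params,
                             return_type="integer",
                             return_is_status=True, is_asyncop=True)

add = DeviceFunction("device_nic_add", ("nic", "libxl_device_nic"))
print(add.name, add.params)
# libxl_device_nic_add [('domid', 'uint32'), ('nic', 'libxl_device_nic')]
```

The generated C wrapper would then be expected to match the existing
hand-written libxl_device_nic_add signature (ctx, domid, nic, and a
libxl_asyncop_how argument).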

Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com>
--
This is mostly to serve as an example of how the first patch would be
used for function support in the IDL.
---
 tools/libxl/idl.py          | 8 ++++++++
 tools/libxl/libxl_types.idl | 6 ++++++
 2 files changed, 14 insertions(+)

diff --git a/tools/libxl/idl.py b/tools/libxl/idl.py
index 1839871f86..15085af8c7 100644
--- a/tools/libxl/idl.py
+++ b/tools/libxl/idl.py
@@ -386,6 +386,14 @@ class CtxFunction(Function):
 
         Function.__init__(self, name, params, return_type, return_is_status)
 
+class DeviceFunction(CtxFunction):
+    """ A function that modifies a device. """
+    def __init__(self, name=None, device_param=None):
+        params = [ ("domid", uint32), device_param ]
+
+        CtxFunction.__init__(self, name=name, params=params, return_type=integer,
+                             return_is_status=True, is_asyncop=True)
+
 def parse(f):
     print("Parsing %s" % f, file=sys.stderr)
 
diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
index 9d3f05f399..22ba93ee4b 100644
--- a/tools/libxl/libxl_types.idl
+++ b/tools/libxl/libxl_types.idl
@@ -769,6 +769,12 @@ libxl_device_nic = Struct("device_nic", [
     ("colo_checkpoint_port", string)
     ])
 
+libxl_device_nic_add = DeviceFunction("device_nic_add",
+    ("nic", libxl_device_nic))
+
+libxl_device_nic_remove = DeviceFunction("device_nic_remove",
+    ("nic", libxl_device_nic))
+
 libxl_device_pci = Struct("device_pci", [
     ("func",      uint8),
     ("dev",       uint8),
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Mon Jul 27 13:28:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jul 2020 13:28:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k03Ao-0001jZ-Dn; Mon, 27 Jul 2020 13:28:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=s7zT=BG=arm.com=rahul.singh@srs-us1.protection.inumbo.net>)
 id 1k03Am-0001jJ-Cd
 for xen-devel@lists.xenproject.org; Mon, 27 Jul 2020 13:28:08 +0000
X-Inumbo-ID: 002eb4b2-d00d-11ea-8abe-bc764e2007e4
Received: from EUR03-AM5-obe.outbound.protection.outlook.com (unknown
 [40.107.3.65]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 002eb4b2-d00d-11ea-8abe-bc764e2007e4;
 Mon, 27 Jul 2020 13:28:05 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com; 
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Q1AZBXPB0RSG+TlFYyXiwsMAt8pUt45OdCy5Ih8pDHA=;
 b=I2dAYoKX5i1pKI04L9VPg1ulSt72gvZ8cyKMNlXlw2SiLTjF8YILF3on8lyeNtecQKASdTZbIEsONktC52/LxVVwmztAI3Uw9qwi/MU+GwS3LMXpkQzVDbp5GGrOGyL9zWqjxgBCx4ptWPtfJPWaRKlgiawf8Eqj0B6KG/d4eyU=
Received: from AM6P191CA0057.EURP191.PROD.OUTLOOK.COM (2603:10a6:209:7f::34)
 by AM0PR08MB3121.eurprd08.prod.outlook.com (2603:10a6:208:60::31) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3216.24; Mon, 27 Jul
 2020 13:28:02 +0000
Received: from VE1EUR03FT047.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:209:7f:cafe::6f) by AM6P191CA0057.outlook.office365.com
 (2603:10a6:209:7f::34) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3216.23 via Frontend
 Transport; Mon, 27 Jul 2020 13:28:02 +0000
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org;
 dmarc=bestguesspass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT047.mail.protection.outlook.com (10.152.19.218) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3216.10 via Frontend Transport; Mon, 27 Jul 2020 13:28:01 +0000
Received: ("Tessian outbound c4059ed8d7bf:v62");
 Mon, 27 Jul 2020 13:28:01 +0000
X-CheckRecipientChecked: true
X-CR-MTA-CID: 2a7d3358cad6cc04
X-CR-MTA-TID: 64aa7808
Received: from c43af47c9ba4.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 1FB2DA94-9414-4F77-A3FA-B09F7CB53532.1; 
 Mon, 27 Jul 2020 13:27:55 +0000
Received: from EUR01-HE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id c43af47c9ba4.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Mon, 27 Jul 2020 13:27:55 +0000
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=DyJMCJwNYxt8hOFMl2R8idx/GMEhFpwJJYAhbfhb8NGbeojOTb1/bmRetJpfcNb82q2HFStqBFPJC53BLPTZVQrUZFbEsjt+oIUh5JAV0WU7ZtbhEOfHaB5Eos/jrphOQrcpXLN9FKT+MIDrExgzEFl6aR/60BzSVLJf+Tdo2G65vgQsSIg2h6hB0v9WUCON9k8Lp9UfXPaRpSxO+xjUwvfC9hndsehrlzmaFIv7fI2cf+ZvKxOlshMFoC2As90kai5T0TMEqPV/AeZ8v2wuxC/Tb6mEceipqyXrOF7QBfYjK7mwPUr9oaxDc68PI2tmPv9i426xi/TJJYFB0ARRgw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; 
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Q1AZBXPB0RSG+TlFYyXiwsMAt8pUt45OdCy5Ih8pDHA=;
 b=G5Ty38XwIryAzB9BWPvLs1hR9Fxvf4/Ms2og2JbTmNdDMWPAqz5iEZlsnL0E+dMMNh5IwC6TbOgdrZZD/Ccmv6D6i7e+oI/uw91k30ZZH41qwk3PxAS3ZFslyHda3vFedJ2AWvApx9sUw5RP2uj1Rp9WUkaCrkuWOCmUJDBo+dOlBPmPXQNuYcdvUDO1d+oW90rH67/H2YwkD5Cn132fxZLtLZrstDCsVKAIzVYxS1VyDcZCJF0xguqP2j9bDteYTFmOwihLy7cChNfS7jdbFVcBQgi0jrzuR4PDOAKOfXL36UzZ+YkFFbTXGKmr/Q7D8G4jgptBlN0D07Sm3Gnl4Q==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com; 
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Q1AZBXPB0RSG+TlFYyXiwsMAt8pUt45OdCy5Ih8pDHA=;
 b=I2dAYoKX5i1pKI04L9VPg1ulSt72gvZ8cyKMNlXlw2SiLTjF8YILF3on8lyeNtecQKASdTZbIEsONktC52/LxVVwmztAI3Uw9qwi/MU+GwS3LMXpkQzVDbp5GGrOGyL9zWqjxgBCx4ptWPtfJPWaRKlgiawf8Eqj0B6KG/d4eyU=
Received: from AM6PR08MB3560.eurprd08.prod.outlook.com (2603:10a6:20b:4c::19)
 by AM5PR0801MB1810.eurprd08.prod.outlook.com (2603:10a6:203:3b::9)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3216.24; Mon, 27 Jul
 2020 13:27:53 +0000
Received: from AM6PR08MB3560.eurprd08.prod.outlook.com
 ([fe80::e891:3b33:766:cad5]) by AM6PR08MB3560.eurprd08.prod.outlook.com
 ([fe80::e891:3b33:766:cad5%6]) with mapi id 15.20.3216.033; Mon, 27 Jul 2020
 13:27:53 +0000
From: Rahul Singh <Rahul.Singh@arm.com>
To: Stefano Stabellini <sstabellini@kernel.org>
Subject: Re: [RFC PATCH v1 1/4] arm/pci: PCI setup and PCI host bridge
 discovery within XEN on ARM.
Thread-Topic: [RFC PATCH v1 1/4] arm/pci: PCI setup and PCI host bridge
 discovery within XEN on ARM.
Thread-Index: AQHWYPbO0TX+B10vbkuLRSjkBERSxqkV0u2AgAWemgA=
Date: Mon, 27 Jul 2020 13:27:52 +0000
Message-ID: <C06A9F99-6CBE-4B33-9C0F-A9DAD0D4C01A@arm.com>
References: <cover.1595511416.git.rahul.singh@arm.com>
 <64ebd4ef614b36a5844c52426a4a6a4a23b1f087.1595511416.git.rahul.singh@arm.com>
 <alpine.DEB.2.21.2007231055230.17562@sstabellini-ThinkPad-T480s>
In-Reply-To: <alpine.DEB.2.21.2007231055230.17562@sstabellini-ThinkPad-T480s>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
Authentication-Results-Original: kernel.org; dkim=none (message not signed)
 header.d=none;kernel.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [86.26.38.125]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: ccc1941e-3705-4471-2160-08d83230e2e0
x-ms-traffictypediagnostic: AM5PR0801MB1810:|AM0PR08MB3121:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS: <AM0PR08MB3121130B8320CC0CE8EB0ED9FC720@AM0PR08MB3121.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:8273;OLM:8273;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original: a7ft0h+KzEBypdpXbfnFtuNMG8kExBjPwLKV5Z6WNz7IJ6OcPy7eYbtrgVU0Dr9X3H0y8Tp8rFbnYqYTpbXE4zpCYNpaIBSluWfWTiTpE6bcJcOjcY523irWvDzzZY1ynMz2qSl23K4Dd6q/RVgHMQRnR9UNIW5nnP1Yer0zAQdhed3UT/l9muIX8WSOME4OBBrtKmF4z4zcMh+PFw2voKHeiokkBoeVD6nMpi+jIlRAs2teqwvVAqb1jjUI7Be8pexNQwxFNn9ajSqlvAn0JAFGcYPKsnephlg93uUWoq+YK6CuhvFRyBtbCVp72jKv79T3OEkm0WnOZttlrgBWTx5fWaccfECtwAl1Z7KtE91DNHwEkpCGQIduYb5ESxh0myKDf0adv/ilkbnfuxL3sFtrkiD45zDJDhIK/2Ppe7FsXI7R7bxt+d6ACPPLkC66GuB8lg0rdFnO/TPmc5HSuQ==
X-Forefront-Antispam-Report-Untrusted: CIP:255.255.255.255; CTRY:; LANG:en;
 SCL:1; SRV:; IPV:NLI; SFV:NSPM; H:AM6PR08MB3560.eurprd08.prod.outlook.com;
 PTR:; CAT:NONE; SFTY:;
 SFS:(4636009)(39860400002)(366004)(346002)(136003)(376002)(396003)(966005)(8676002)(91956017)(76116006)(66556008)(8936002)(478600001)(83380400001)(64756008)(6512007)(66476007)(66946007)(6916009)(5660300002)(86362001)(33656002)(2906002)(26005)(55236004)(6486002)(54906003)(4326008)(316002)(66446008)(71200400001)(36756003)(53546011)(6506007)(186003)(30864003)(2616005)(2004002)(559001)(579004);
 DIR:OUT; SFP:1101; 
x-ms-exchange-antispam-messagedata: D/Hx4FUA+K8Xcy2y6rcHyBkjruhC2zM2MSz9R7awLUG3sWt1pRK7kZvaZEZC+qBt2nHTQzDBGaHbpmbNGttdXJPp2TCEHKpNkKgtf1L2OJOuDZ56xr0GE6IMBh1ANwI+1ah6GnuyUbnNcKGVkinKFTQbSzGB+/9ty3impFvgJFMawruLdwV5Yeei8tWkUMYoFZonJVOzzaxqpgX7AxZ0h6x0aQ1CaWJeXMJJ2BdyTVFAt9hAYpseW/ziZUapwiUEeGoSmt7EiJcGGYxz5tJ6gafu+FiXVXixKMGZtwvP4AVKBpK9tcdl6RpBX8Qwve/wLAH+MOFCclrw5PIrGSDJ9ufkZs9urx4R9kkw5baV5FWC/a+Tj0M1tfJkzqtwcjx3xLjZgd9nfcqBLF7ASXJeMqAMgVdVkymop347NIgykkXWs9qnQbeFj3WinBzcja+/SrG5rQaLxOMPAccwYTMqX3mOemF8bGg/Eq/eaIVNh1Q=
Content-Type: text/plain; charset="utf-8"
Content-ID: <C8C1AC11FBB3A0409FCB0F878146A43D@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM5PR0801MB1810
Original-Authentication-Results: kernel.org; dkim=none (message not signed)
 header.d=none;kernel.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped: VE1EUR03FT047.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs: 3da273ba-9743-4fd8-f590-08d83230dd9b
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: XadrIWaZgQk6GyP9ZefCnu8tRAadlIO2DIVCXxnUA2Ob9vQIE0e1vWUILBWD73jgWWq4FMpUWmoUqOwNtS5VbyVdGTcZINyK9ppSeSgXojHoF8SghGW7GH/wKAUJ2vxqtwlcPKX+0IDgkXr7RUTM2QbNoYPj+iKA3GbpKHr5+nymWK0eHGFsj4PsGC/Vf9/WX8o91mCHA6KVQ9+GIz/m00cfQorPy5jOtBX/SZiKoqPLTbh9ejHAmFW+lfFxdWm7GtZu3eH70koNRKfcJJd3we1Xu5QkpYTqmyuqv3nhErTx8BOhAqonxU0dNukZ4sLWhXAenS/c2h5MbjT/ht9jJUFzt3VCjauzafX21iDg43hMQ9jSVgrXiE5E3JKnDTcw29nJygraN1XdOc1t48nK//lI73OnAkRqJssIv0aeJmKQhl/FSDGQCUpGxoEnH4qhDXeJ3qmh8vq8N80jreltQecaCcPYdyZO7IyV6VfymVApbIWq1pQPylhfLYQRJQA6
X-Forefront-Antispam-Report: CIP:63.35.35.123; CTRY:IE; LANG:en; SCL:1; SRV:;
 IPV:CAL; SFV:NSPM; H:64aa7808-outbound-1.mta.getcheckrecipient.com;
 PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com; CAT:NONE; SFTY:;
 SFS:(4636009)(39860400002)(396003)(136003)(376002)(346002)(46966005)(478600001)(33656002)(2906002)(107886003)(70586007)(336012)(2616005)(30864003)(36756003)(966005)(6862004)(26005)(5660300002)(83380400001)(70206006)(186003)(6512007)(53546011)(6506007)(6486002)(36906005)(8676002)(82740400003)(356005)(54906003)(86362001)(81166007)(316002)(4326008)(82310400002)(47076004)(8936002)(2004002);
 DIR:OUT; SFP:1101; 
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 27 Jul 2020 13:28:01.7594 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: ccc1941e-3705-4471-2160-08d83230e2e0
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d; Ip=[63.35.35.123];
 Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource: VE1EUR03FT047.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR08MB3121
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Julien Grall <julien@xen.org>,
 "andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>,
 "jbeulich@suse.com" <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 nd <nd@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 "roger.pau@citrix.com" <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

DQoNCj4gT24gMjQgSnVsIDIwMjAsIGF0IDEyOjM4IGFtLCBTdGVmYW5vIFN0YWJlbGxpbmkgPHNz
dGFiZWxsaW5pQGtlcm5lbC5vcmc+IHdyb3RlOg0KPiANCj4gKyBKYW4sIEFuZHJldywgUm9nZXIN
Cj4gDQo+IFBsZWFzZSBoYXZlIGEgbG9vayBhdCBteSBjb21tZW50IG9uIHdoZXRoZXIgd2Ugc2hv
dWxkIHNoYXJlIHRoZSBNTUNGRw0KPiBjb2RlIGJlbG93LCBmZWVsIGZyZWUgdG8gaWdub3JlIHRo
ZSByZXN0IDotKQ0KPiANCj4gDQo+IE9uIFRodSwgMjMgSnVsIDIwMjAsIFJhaHVsIFNpbmdoIHdy
b3RlOg0KPj4gWEVOIGR1cmluZyBib290IHdpbGwgcmVhZCB0aGUgUENJIGRldmljZSB0cmVlIG5v
ZGUg4oCccmVn4oCdIHByb3BlcnR5DQo+PiBhbmQgd2lsbCBtYXAgdGhlIFBDSSBjb25maWcgc3Bh
Y2UgdG8gdGhlIFhFTiBtZW1vcnkuDQo+PiANCj4+IFhFTiB3aWxsIHJlYWQgdGhlIOKAnGxpbnV4
LCBwY2ktZG9tYWlu4oCdIHByb3BlcnR5IGZyb20gdGhlIGRldmljZSB0cmVlDQo+PiBub2RlIGFu
ZCBjb25maWd1cmUgdGhlIGhvc3QgYnJpZGdlIHNlZ21lbnQgbnVtYmVyIGFjY29yZGluZ2x5Lg0K
Pj4gDQo+PiBBcyBvZiBub3cgInBjaS1ob3N0LWVjYW0tZ2VuZXJpYyIgY29tcGF0aWJsZSBib2Fy
ZCBpcyBzdXBwb3J0ZWQuDQo+PiANCj4+IENoYW5nZS1JZDogSWYzMmY3NzQ4YjdkYzg5ZGQzNzEx
NGRjNTAyOTQzMjIyYTJhMzZjNDkNCj4gDQo+IFdoYXQgaXMgdGhpcyBDaGFuZ2UtSWQgcHJvcGVy
dHk/DQpXaWxsIHJlbW92ZSB0aGlzIGluIG5leHQgcGF0Y2ggc2VyaWVzLg0KPiANCj4gDQo+PiBT
aWduZWQtb2ZmLWJ5OiBSYWh1bCBTaW5naCA8cmFodWwuc2luZ2hAYXJtLmNvbT4NCj4+IC0tLQ0K
Pj4geGVuL2FyY2gvYXJtL0tjb25maWcgICAgICAgICAgICAgICAgfCAgIDcgKw0KPj4geGVuL2Fy
Y2gvYXJtL01ha2VmaWxlICAgICAgICAgICAgICAgfCAgIDEgKw0KPj4geGVuL2FyY2gvYXJtL3Bj
aS9NYWtlZmlsZSAgICAgICAgICAgfCAgIDQgKw0KPj4geGVuL2FyY2gvYXJtL3BjaS9wY2ktYWNj
ZXNzLmMgICAgICAgfCAxMDEgKysrKysrKysrKysrKysNCj4+IHhlbi9hcmNoL2FybS9wY2kvcGNp
LWhvc3QtY29tbW9uLmMgIHwgMTk4ICsrKysrKysrKysrKysrKysrKysrKysrKysrKysNCj4+IHhl
bi9hcmNoL2FybS9wY2kvcGNpLWhvc3QtZ2VuZXJpYy5jIHwgMTMxICsrKysrKysrKysrKysrKysr
Kw0KPj4geGVuL2FyY2gvYXJtL3BjaS9wY2kuYyAgICAgICAgICAgICAgfCAxMTIgKysrKysrKysr
KysrKysrKw0KPj4geGVuL2FyY2gvYXJtL3NldHVwLmMgICAgICAgICAgICAgICAgfCAgIDIgKw0K
Pj4geGVuL2luY2x1ZGUvYXNtLWFybS9kZXZpY2UuaCAgICAgICAgfCAgIDcgKy0NCj4+IHhlbi9p
bmNsdWRlL2FzbS1hcm0vcGNpLmggICAgICAgICAgIHwgIDk3ICsrKysrKysrKysrKystDQo+PiAx
MCBmaWxlcyBjaGFuZ2VkLCA2NTQgaW5zZXJ0aW9ucygrKSwgNiBkZWxldGlvbnMoLSkNCj4+IGNy
ZWF0ZSBtb2RlIDEwMDY0NCB4ZW4vYXJjaC9hcm0vcGNpL01ha2VmaWxlDQo+PiBjcmVhdGUgbW9k
ZSAxMDA2NDQgeGVuL2FyY2gvYXJtL3BjaS9wY2ktYWNjZXNzLmMNCj4+IGNyZWF0ZSBtb2RlIDEw
MDY0NCB4ZW4vYXJjaC9hcm0vcGNpL3BjaS1ob3N0LWNvbW1vbi5jDQo+PiBjcmVhdGUgbW9kZSAx
MDA2NDQgeGVuL2FyY2gvYXJtL3BjaS9wY2ktaG9zdC1nZW5lcmljLmMNCj4+IGNyZWF0ZSBtb2Rl
IDEwMDY0NCB4ZW4vYXJjaC9hcm0vcGNpL3BjaS5jDQo+PiANCj4+IGRpZmYgLS1naXQgYS94ZW4v
YXJjaC9hcm0vS2NvbmZpZyBiL3hlbi9hcmNoL2FybS9LY29uZmlnDQo+PiBpbmRleCAyNzc3Mzg4
MjY1Li5lZTEzMzM5YWU5IDEwMDY0NA0KPj4gLS0tIGEveGVuL2FyY2gvYXJtL0tjb25maWcNCj4+
ICsrKyBiL3hlbi9hcmNoL2FybS9LY29uZmlnDQo+PiBAQCAtMzEsNiArMzEsMTMgQEAgbWVudSAi
QXJjaGl0ZWN0dXJlIEZlYXR1cmVzIg0KPj4gDQo+PiBzb3VyY2UgImFyY2gvS2NvbmZpZyINCj4+
IA0KPj4gK2NvbmZpZyBBUk1fUENJDQo+PiArCWJvb2wgIlBDSSBQYXNzdGhyb3VnaCBTdXBwb3J0
Ig0KPj4gKwlkZXBlbmRzIG9uIEFSTV82NA0KPj4gKwktLS1oZWxwLS0tDQo+PiArDQo+PiArCSAg
UENJIHBhc3N0aHJvdWdoIHN1cHBvcnQgZm9yIFhlbiBvbiBBUk02NC4NCj4+ICsNCj4+IGNvbmZp
ZyBBQ1BJDQo+PiAJYm9vbCAiQUNQSSAoQWR2YW5jZWQgQ29uZmlndXJhdGlvbiBhbmQgUG93ZXIg
SW50ZXJmYWNlKSBTdXBwb3J0IiBpZiBFWFBFUlQNCj4+IAlkZXBlbmRzIG9uIEFSTV82NA0KPj4g
ZGlmZiAtLWdpdCBhL3hlbi9hcmNoL2FybS9NYWtlZmlsZSBiL3hlbi9hcmNoL2FybS9NYWtlZmls
ZQ0KPj4gaW5kZXggN2U4MmIyMTc4Yy4uMzQ1Y2I4M2VlZCAxMDA2NDQNCj4+IC0tLSBhL3hlbi9h
cmNoL2FybS9NYWtlZmlsZQ0KPj4gKysrIGIveGVuL2FyY2gvYXJtL01ha2VmaWxlDQo+PiBAQCAt
Niw2ICs2LDcgQEAgaWZuZXEgKCQoQ09ORklHX05PX1BMQVQpLHkpDQo+PiBvYmoteSArPSBwbGF0
Zm9ybXMvDQo+PiBlbmRpZg0KPj4gb2JqLSQoQ09ORklHX1RFRSkgKz0gdGVlLw0KPj4gK29iai0k
KENPTkZJR19BUk1fUENJKSArPSBwY2kvDQo+PiANCj4+IG9iai0kKENPTkZJR19IQVNfQUxURVJO
QVRJVkUpICs9IGFsdGVybmF0aXZlLm8NCj4+IG9iai15ICs9IGJvb3RmZHQuaW5pdC5vDQo+PiBk
aWZmIC0tZ2l0IGEveGVuL2FyY2gvYXJtL3BjaS9NYWtlZmlsZSBiL3hlbi9hcmNoL2FybS9wY2kv
TWFrZWZpbGUNCj4+IG5ldyBmaWxlIG1vZGUgMTAwNjQ0DQo+PiBpbmRleCAwMDAwMDAwMDAwLi4z
NTg1MDhiNzg3DQo+PiAtLS0gL2Rldi9udWxsDQo+PiArKysgYi94ZW4vYXJjaC9hcm0vcGNpL01h
a2VmaWxlDQo+PiBAQCAtMCwwICsxLDQgQEANCj4+ICtvYmoteSArPSBwY2kubw0KPj4gK29iai15
ICs9IHBjaS1ob3N0LWdlbmVyaWMubw0KPj4gK29iai15ICs9IHBjaS1ob3N0LWNvbW1vbi5vDQo+
PiArb2JqLXkgKz0gcGNpLWFjY2Vzcy5vDQo+PiBkaWZmIC0tZ2l0IGEveGVuL2FyY2gvYXJtL3Bj
aS9wY2ktYWNjZXNzLmMgYi94ZW4vYXJjaC9hcm0vcGNpL3BjaS1hY2Nlc3MuYw0KPj4gbmV3IGZp
bGUgbW9kZSAxMDA2NDQNCj4+IGluZGV4IDAwMDAwMDAwMDAuLmM1M2VmNTgzMzYNCj4+IC0tLSAv
ZGV2L251bGwNCj4+ICsrKyBiL3hlbi9hcmNoL2FybS9wY2kvcGNpLWFjY2Vzcy5jDQo+PiBAQCAt
MCwwICsxLDEwMSBAQA0KPj4gKy8qDQo+PiArICogQ29weXJpZ2h0IChDKSAyMDIwIEFybSBMdGQu
DQo+PiArICoNCj4+ICsgKiBUaGlzIHByb2dyYW0gaXMgZnJlZSBzb2Z0d2FyZTsgeW91IGNhbiBy
ZWRpc3RyaWJ1dGUgaXQgYW5kL29yIG1vZGlmeQ0KPj4gKyAqIGl0IHVuZGVyIHRoZSB0ZXJtcyBv
ZiB0aGUgR05VIEdlbmVyYWwgUHVibGljIExpY2Vuc2UgdmVyc2lvbiAyIGFzDQo+PiArICogcHVi
bGlzaGVkIGJ5IHRoZSBGcmVlIFNvZnR3YXJlIEZvdW5kYXRpb24uDQo+PiArICoNCj4+ICsgKiBU
aGlzIHByb2dyYW0gaXMgZGlzdHJpYnV0ZWQgaW4gdGhlIGhvcGUgdGhhdCBpdCB3aWxsIGJlIHVz
ZWZ1bCwNCj4+ICsgKiBidXQgV0lUSE9VVCBBTlkgV0FSUkFOVFk7IHdpdGhvdXQgZXZlbiB0aGUg
aW1wbGllZCB3YXJyYW50eSBvZg0KPj4gKyAqIE1FUkNIQU5UQUJJTElUWSBvciBGSVRORVNTIEZP
UiBBIFBBUlRJQ1VMQVIgUFVSUE9TRS4gIFNlZSB0aGUNCj4+ICsgKiBHTlUgR2VuZXJhbCBQdWJs
aWMgTGljZW5zZSBmb3IgbW9yZSBkZXRhaWxzLg0KPj4gKyAqDQo+PiArICogWW91IHNob3VsZCBo
YXZlIHJlY2VpdmVkIGEgY29weSBvZiB0aGUgR05VIEdlbmVyYWwgUHVibGljIExpY2Vuc2UNCj4+
ICsgKiBhbG9uZyB3aXRoIHRoaXMgcHJvZ3JhbS4gIElmIG5vdCwgc2VlIDxodHRwOi8vd3d3Lmdu
dS5vcmcvbGljZW5zZXMvPi4NCj4+ICsgKi8NCj4+ICsNCj4+ICsjaW5jbHVkZSA8eGVuL2luaXQu
aD4NCj4+ICsjaW5jbHVkZSA8eGVuL3BjaS5oPg0KPj4gKyNpbmNsdWRlIDxhc20vcGNpLmg+DQo+
PiArI2luY2x1ZGUgPHhlbi9yd2xvY2suaD4NCj4+ICsNCj4+ICtzdGF0aWMgdWludDMyX3QgcGNp
X2NvbmZpZ19yZWFkKHBjaV9zYmRmX3Qgc2JkZiwgdW5zaWduZWQgaW50IHJlZywNCj4+ICsgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgdW5zaWduZWQgaW50IGxlbikNCj4+ICt7DQo+PiArICAg
IGludCByYzsNCj4+ICsgICAgdWludDMyX3QgdmFsID0gR0VOTUFTSygwLCBsZW4gKiA4KTsNCj4+
ICsNCj4+ICsgICAgc3RydWN0IHBjaV9ob3N0X2JyaWRnZSAqYnJpZGdlID0gcGNpX2ZpbmRfaG9z
dF9icmlkZ2Uoc2JkZi5zZWcsIHNiZGYuYnVzKTsNCj4+ICsNCj4+ICsgICAgaWYgKCB1bmxpa2Vs
eSghYnJpZGdlKSApDQo+PiArICAgIHsNCj4+ICsgICAgICAgIHByaW50ayhYRU5MT0dfRVJSICJV
bmFibGUgdG8gZmluZCBicmlkZ2UgZm9yICJQUklfcGNpIlxuIiwNCj4+ICsgICAgICAgICAgICAg
ICAgc2JkZi5zZWcsIHNiZGYuYnVzLCBzYmRmLmRldiwgc2JkZi5mbik7DQo+PiArICAgICAgICBy
ZXR1cm4gdmFsOw0KPj4gKyAgICB9DQo+PiArDQo+PiArICAgIGlmICggdW5saWtlbHkoIWJyaWRn
ZS0+b3BzLT5yZWFkKSApDQo+PiArICAgICAgICByZXR1cm4gdmFsOw0KPj4gKw0KPj4gKyAgICBy
YyA9IGJyaWRnZS0+b3BzLT5yZWFkKGJyaWRnZSwgKHVpbnQzMl90KSBzYmRmLnNiZGYsIHJlZywg
bGVuLCAmdmFsKTsNCj4+ICsgICAgaWYgKCByYyApDQo+PiArICAgICAgICBwcmludGsoWEVOTE9H
X0VSUiAiRmFpbGVkIHRvIHJlYWQgcmVnICUjeCBsZW4gJXUgZm9yICJQUklfcGNpIlxuIiwNCj4+
ICsgICAgICAgICAgICAgICAgcmVnLCBsZW4sIHNiZGYuc2VnLCBzYmRmLmJ1cywgc2JkZi5kZXYs
IHNiZGYuZm4pOw0KPj4gKw0KPj4gKyAgICByZXR1cm4gdmFsOw0KPj4gK30NCj4+ICsNCj4+ICtz
dGF0aWMgdm9pZCBwY2lfY29uZmlnX3dyaXRlKHBjaV9zYmRmX3Qgc2JkZiwgdW5zaWduZWQgaW50
IHJlZywNCj4+ICsgICAgICAgIHVuc2lnbmVkIGludCBsZW4sIHVpbnQzMl90IHZhbCkNCj4+ICt7
DQo+PiArICAgIGludCByYzsNCj4+ICsgICAgc3RydWN0IHBjaV9ob3N0X2JyaWRnZSAqYnJpZGdl
ID0gcGNpX2ZpbmRfaG9zdF9icmlkZ2Uoc2JkZi5zZWcsIHNiZGYuYnVzKTsNCj4+ICsNCj4+ICsg
ICAgaWYgKCB1bmxpa2VseSghYnJpZGdlKSApDQo+PiArICAgIHsNCj4+ICsgICAgICAgIHByaW50
ayhYRU5MT0dfRVJSICJVbmFibGUgdG8gZmluZCBicmlkZ2UgZm9yICJQUklfcGNpIlxuIiwNCj4+
ICsgICAgICAgICAgICAgICAgc2JkZi5zZWcsIHNiZGYuYnVzLCBzYmRmLmRldiwgc2JkZi5mbik7
DQo+PiArICAgICAgICByZXR1cm47DQo+PiArICAgIH0NCj4+ICsNCj4+ICsgICAgaWYgKCB1bmxp
a2VseSghYnJpZGdlLT5vcHMtPndyaXRlKSApDQo+PiArICAgICAgICByZXR1cm47DQo+PiArDQo+
PiArICAgIHJjID0gYnJpZGdlLT5vcHMtPndyaXRlKGJyaWRnZSwgKHVpbnQzMl90KSBzYmRmLnNi
ZGYsIHJlZywgbGVuLCB2YWwpOw0KPj4gKyAgICBpZiAoIHJjICkNCj4+ICsgICAgICAgIHByaW50
ayhYRU5MT0dfRVJSICJGYWlsZWQgdG8gd3JpdGUgcmVnICUjeCBsZW4gJXUgZm9yICJQUklfcGNp
IlxuIiwNCj4+ICsgICAgICAgICAgICAgICAgcmVnLCBsZW4sIHNiZGYuc2VnLCBzYmRmLmJ1cywg
c2JkZi5kZXYsIHNiZGYuZm4pOw0KPj4gK30NCj4+ICsNCj4+ICsvKg0KPj4gKyAqIFdyYXBwZXJz
IGZvciBhbGwgUENJIGNvbmZpZ3VyYXRpb24gYWNjZXNzIGZ1bmN0aW9ucy4NCj4+ICsgKi8NCj4+
ICsNCj4+ICsjZGVmaW5lIFBDSV9PUF9XUklURShzaXplLCB0eXBlKSBcDQo+PiArICAgIHZvaWQg
cGNpX2NvbmZfd3JpdGUjI3NpemUgKHBjaV9zYmRmX3Qgc2JkZix1bnNpZ25lZCBpbnQgcmVnLCB0
eXBlIHZhbCkgXA0KPj4gK3sgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgIFwNCj4+ICsgICAgcGNpX2NvbmZpZ193cml0ZShzYmRmLCByZWcsIHNpemUgLyA4LCB2
YWwpOyAgICAgXA0KPj4gK30NCj4+ICsNCj4+ICsjZGVmaW5lIFBDSV9PUF9SRUFEKHNpemUsIHR5
cGUpIFwNCj4+ICsgICAgdHlwZSBwY2lfY29uZl9yZWFkIyNzaXplIChwY2lfc2JkZl90IHNiZGYs
IHVuc2lnbmVkIGludCByZWcpICBcDQo+PiAreyAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgXA0KPj4gKyAgICByZXR1cm4gcGNpX2NvbmZpZ19yZWFkKHNiZGYs
IHJlZywgc2l6ZSAvIDgpOyAgICAgXA0KPj4gK30NCj4+ICsNCj4+ICtQQ0lfT1BfUkVBRCg4LCB1
OCkNCj4+ICtQQ0lfT1BfUkVBRCgxNiwgdTE2KQ0KPj4gK1BDSV9PUF9SRUFEKDMyLCB1MzIpDQo+
PiArUENJX09QX1dSSVRFKDgsIHU4KQ0KPj4gK1BDSV9PUF9XUklURSgxNiwgdTE2KQ0KPj4gK1BD
SV9PUF9XUklURSgzMiwgdTMyKQ0KPiANCj4gVGhpcyBsb29rcyBsaWtlIGEgc3Vic2V0IG9mIHhl
bi9hcmNoL3g4Ni94ODZfNjQvbW1jb25maWdfNjQuYyA/DQo+IA0KPiBNTUNGRyBpcyBzdXBwb3Nl
ZCB0byBjb3ZlciBFQ0FNLWNvbXBsaWFudCBob3N0IGJyaWRnZXMgdG9vLCBpZiBJIGFtIG5vdA0K
PiBtaXN0YWtlbi4gSXMgdGhlcmUgYW55IHZhbHVlIGluIHNoYXJpbmcgdGhlIGNvZGUgd2l0aCB4
ODY/IEl0IGlzIE9LIGlmDQo+IHdlIGRvbid0LCBidXQgSSB3b3VsZCBsaWtlIHRvIHVuZGVyc3Rh
bmQgdGhlIHJlYXNvbmluZy4NCg0KV2UgaGF2ZSBqdXN0IHN0YXJ0ZWQgdG8gaW52ZXN0aWdhdGUg
dGhlIEFDUEkgc3VwcG9ydCBmb3IgUENJIHdpdGhpbiBYRU4gb24gQVJNLiBPbmNlIHdlIGhhdmUg
dW5kZXJzdGFuZGluZyBob3cgd2UgY2FuIHN1cHBvcnQgdGhlIEFDUEkgd2Ugd2lsbCBoYXZlIGEg
bG9vayB4ODYgTU1DRkcgYWNjZXNzIGFuZCB3aWxsIGNoZWNrIGlmIHdlIGNhbiBoYXZlIGEgY29t
bW9uIGludGVyZmFjZSB0byBhY2Nlc3MgdGhlIGNvbmZpZ3VyYXRpb24gc3BhY2UuIA0KDQo+IA0K
PiANCj4gDQo+PiArLyoNCj4+ICsgKiBMb2NhbCB2YXJpYWJsZXM6DQo+PiArICogbW9kZTogQw0K
Pj4gKyAqIGMtZmlsZS1zdHlsZTogIkJTRCINCj4+ICsgKiBjLWJhc2ljLW9mZnNldDogNA0KPj4g
KyAqIHRhYi13aWR0aDogNA0KPj4gKyAqIGluZGVudC10YWJzLW1vZGU6IG5pbA0KPj4gKyAqIEVu
ZDoNCj4+ICsgKi8NCj4+IGRpZmYgLS1naXQgYS94ZW4vYXJjaC9hcm0vcGNpL3BjaS1ob3N0LWNv
bW1vbi5jIGIveGVuL2FyY2gvYXJtL3BjaS9wY2ktaG9zdC1jb21tb24uYw0KPj4gbmV3IGZpbGUg
bW9kZSAxMDA2NDQNCj4+IGluZGV4IDAwMDAwMDAwMDAuLmM1Zjk4YmU2OTgNCj4+IC0tLSAvZGV2
L251bGwNCj4+ICsrKyBiL3hlbi9hcmNoL2FybS9wY2kvcGNpLWhvc3QtY29tbW9uLmMNCj4+IEBA
IC0wLDAgKzEsMTk4IEBADQo+PiArLyoNCj4+ICsgKiBDb3B5cmlnaHQgKEMpIDIwMjAgQXJtIEx0
ZC4NCj4+ICsgKg0KPj4gKyAqIEJhc2VkIG9uIExpbnV4IGRyaXZlcnMvcGNpL2VjYW0uYw0KPj4g
KyAqIENvcHlyaWdodCAyMDE2IEJyb2FkY29tLg0KPj4gKyAqDQo+PiArICogQmFzZWQgb24gTGlu
dXggZHJpdmVycy9wY2kvY29udHJvbGxlci9wY2ktaG9zdC1jb21tb24uYw0KPj4gKyAqIEJhc2Vk
IG9uIExpbnV4IGRyaXZlcnMvcGNpL2NvbnRyb2xsZXIvcGNpLWhvc3QtZ2VuZXJpYy5jDQo+PiAr
ICogQ29weXJpZ2h0IChDKSAyMDE0IEFSTSBMaW1pdGVkIFdpbGwgRGVhY29uIDx3aWxsLmRlYWNv
bkBhcm0uY29tPg0KPj4gKyAqDQo+PiArICoNCj4+ICsgKiBUaGlzIHByb2dyYW0gaXMgZnJlZSBz
b2Z0d2FyZTsgeW91IGNhbiByZWRpc3RyaWJ1dGUgaXQgYW5kL29yIG1vZGlmeQ0KPj4gKyAqIGl0
IHVuZGVyIHRoZSB0ZXJtcyBvZiB0aGUgR05VIEdlbmVyYWwgUHVibGljIExpY2Vuc2UgdmVyc2lv
biAyIGFzDQo+PiArICogcHVibGlzaGVkIGJ5IHRoZSBGcmVlIFNvZnR3YXJlIEZvdW5kYXRpb24u
DQo+PiArICoNCj4+ICsgKiBUaGlzIHByb2dyYW0gaXMgZGlzdHJpYnV0ZWQgaW4gdGhlIGhvcGUg
dGhhdCBpdCB3aWxsIGJlIHVzZWZ1bCwNCj4+ICsgKiBidXQgV0lUSE9VVCBBTlkgV0FSUkFOVFk7
IHdpdGhvdXQgZXZlbiB0aGUgaW1wbGllZCB3YXJyYW50eSBvZg0KPj4gKyAqIE1FUkNIQU5UQUJJ
TElUWSBvciBGSVRORVNTIEZPUiBBIFBBUlRJQ1VMQVIgUFVSUE9TRS4gIFNlZSB0aGUNCj4+ICsg
KiBHTlUgR2VuZXJhbCBQdWJsaWMgTGljZW5zZSBmb3IgbW9yZSBkZXRhaWxzLg0KPj4gKyAqDQo+
PiArICogWW91IHNob3VsZCBoYXZlIHJlY2VpdmVkIGEgY29weSBvZiB0aGUgR05VIEdlbmVyYWwg
UHVibGljIExpY2Vuc2UNCj4+ICsgKiBhbG9uZyB3aXRoIHRoaXMgcHJvZ3JhbS4gIElmIG5vdCwg
c2VlIDxodHRwOi8vd3d3LmdudS5vcmcvbGljZW5zZXMvPi4NCj4+ICsgKi8NCj4+ICsNCj4+ICsj
aW5jbHVkZSA8eGVuL2luaXQuaD4NCj4+ICsjaW5jbHVkZSA8eGVuL3BjaS5oPg0KPj4gKyNpbmNs
dWRlIDxhc20vcGNpLmg+DQo+PiArI2luY2x1ZGUgPHhlbi9yd2xvY2suaD4NCj4+ICsjaW5jbHVk
ZSA8eGVuL3ZtYXAuaD4NCj4+ICsNCj4+ICsvKg0KPj4gKyAqIExpc3QgZm9yIGFsbCB0aGUgcGNp
IGhvc3QgYnJpZGdlcy4NCj4+ICsgKi8NCj4+ICsNCj4+ICtzdGF0aWMgTElTVF9IRUFEKHBjaV9o
b3N0X2JyaWRnZXMpOw0KPj4gKw0KPj4gK3N0YXRpYyBib29sIF9faW5pdCBkdF9wY2lfcGFyc2Vf
YnVzX3JhbmdlKHN0cnVjdCBkdF9kZXZpY2Vfbm9kZSAqZGV2LA0KPj4gKyAgICAgICAgc3RydWN0
IHBjaV9jb25maWdfd2luZG93ICpjZmcpDQo+PiArew0KPj4gKyAgICBjb25zdCBfX2JlMzIgKmNl
bGxzOw0KPj4gKyAgICB1aW50MzJfdCBsZW47DQo+PiArDQo+PiArICAgIGNlbGxzID0gZHRfZ2V0
X3Byb3BlcnR5KGRldiwgImJ1cy1yYW5nZSIsICZsZW4pOw0KPj4gKyAgICAvKiBidXMtcmFuZ2Ug
c2hvdWxkIGF0IGxlYXN0IGJlIDIgY2VsbHMgKi8NCj4+ICsgICAgaWYgKCAhY2VsbHMgfHwgKGxl
biA8IChzaXplb2YoKmNlbGxzKSAqIDIpKSApDQo+PiArICAgICAgICByZXR1cm4gZmFsc2U7DQo+
PiArDQo+PiArICAgIGNmZy0+YnVzbl9zdGFydCA9IGR0X25leHRfY2VsbCgxLCAmY2VsbHMpOw0K
Pj4gKyAgICBjZmctPmJ1c25fZW5kID0gZHRfbmV4dF9jZWxsKDEsICZjZWxscyk7DQo+PiArDQo+
PiArICAgIHJldHVybiB0cnVlOw0KPj4gK30NCj4+ICsNCj4+ICtzdGF0aWMgaW5saW5lIHZvaWQg
X19pb21lbSAqcGNpX3JlbWFwX2NmZ3NwYWNlKHBhZGRyX3Qgc3RhcnQsIHNpemVfdCBsZW4pDQo+
PiArew0KPj4gKyAgICByZXR1cm4gaW9yZW1hcF9ub2NhY2hlKHN0YXJ0LCBsZW4pOw0KPj4gK30N
Cj4+ICsNCj4+ICtzdGF0aWMgdm9pZCBwY2lfZWNhbV9mcmVlKHN0cnVjdCBwY2lfY29uZmlnX3dp
bmRvdyAqY2ZnKQ0KPj4gK3sNCj4+ICsgICAgaWYgKCBjZmctPndpbiApDQo+PiArICAgICAgICBp
b3VubWFwKGNmZy0+d2luKTsNCj4+ICsNCj4+ICsgICAgeGZyZWUoY2ZnKTsNCj4+ICt9DQo+PiAr
DQo+PiArc3RhdGljIHN0cnVjdCBwY2lfY29uZmlnX3dpbmRvdyAqZ2VuX3BjaV9pbml0KHN0cnVj
dCBkdF9kZXZpY2Vfbm9kZSAqZGV2LA0KPj4gKyAgICAgICAgc3RydWN0IHBjaV9lY2FtX29wcyAq
b3BzKQ0KPj4gK3sNCj4+ICsgICAgaW50IGVycjsNCj4+ICsgICAgc3RydWN0IHBjaV9jb25maWdf
d2luZG93ICpjZmc7DQo+PiArICAgIHBhZGRyX3QgYWRkciwgc2l6ZTsNCj4+ICsNCj4+ICsgICAg
Y2ZnID0geHphbGxvYyhzdHJ1Y3QgcGNpX2NvbmZpZ193aW5kb3cpOw0KPj4gKyAgICBpZiAoICFj
ZmcgKQ0KPj4gKyAgICAgICAgcmV0dXJuIE5VTEw7DQo+PiArDQo+PiArICAgIGVyciA9IGR0X3Bj
aV9wYXJzZV9idXNfcmFuZ2UoZGV2LCBjZmcpOw0KPj4gKyAgICBpZiAoICFlcnIgKSB7DQo+PiAr
ICAgICAgICBjZmctPmJ1c25fc3RhcnQgPSAwOw0KPj4gKyAgICAgICAgY2ZnLT5idXNuX2VuZCA9
IDB4ZmY7DQo+PiArICAgICAgICBwcmludGsoWEVOTE9HX0VSUiAiTm8gYnVzIHJhbmdlIGZvdW5k
IGZvciBwY2kgY29udHJvbGxlclxuIik7DQo+PiArICAgIH0gZWxzZSB7DQo+PiArICAgICAgICBp
ZiAoIGNmZy0+YnVzbl9lbmQgPiBjZmctPmJ1c25fc3RhcnQgKyAweGZmICkNCj4+ICsgICAgICAg
ICAgICBjZmctPmJ1c25fZW5kID0gY2ZnLT5idXNuX3N0YXJ0ICsgMHhmZjsNCj4+ICsgICAgfQ0K
Pj4gKw0KPj4gKyAgICAvKiBQYXJzZSBvdXIgUENJIGVjYW0gcmVnaXN0ZXIgYWRkcmVzcyovDQo+
PiArICAgIGVyciA9IGR0X2RldmljZV9nZXRfYWRkcmVzcyhkZXYsIDAsICZhZGRyLCAmc2l6ZSk7
DQo+PiArICAgIGlmICggZXJyICkNCj4+ICsgICAgICAgIGdvdG8gZXJyX2V4aXQ7DQo+IA0KPiBT
aG91bGRuJ3Qgd2UgaGFuZGxlIHRoZSBwb3NzaWJpbGl0eSBvZiBtdWx0aXBsZSBhZGRyZXNzZXM/
IElzIGl0DQo+IHBvc3NpYmxlIHRvIGhhdmUgbW9yZSB0aGFuIG9uZSByYW5nZSBmb3IgYW4gRUNB
TSBjb21wbGlhbnQgaG9zdCBicmlkZ2U/DQoNCkFzIG9mIG5vdyB3ZSBhcmUgc3VwcG9ydGluZyB0
aGUgUENJIGNvbnRyb2xsZXJzIHRoYXQgaGF2ZSAicGNpLWhvc3QtZWNhbS1nZW5lcmlj4oCdIGNv
bXBhdGlibGUgbm9kZSBpbiBkZXZpY2UgdHJlZSBiaW5kaW5nLkFzIHBlciBteSB1bmRlcnN0YW5k
aW5nIGNvbnRyb2xsZXIgdGhhdCBoYXZlICJwY2ktaG9zdC1lY2FtLWdlbmVyaWPigJ0gY29tcGF0
aWJsZSBwcm9wZXJ0eSBoYXZlIG9ubHkgb25lIHJlZyBwcm9wZXJ0eSB2YWx1ZSBhdCBpbmRleCAw
Lg0KDQpZZXMgdGhlcmUgYXJlIFBDSSBjb250cm9sbGVycyB0aGF0IHN1cHBvcnQgRUNBTSBhbmQg
aGF2ZSBtb3JlIHRoYW4gb25lICJyZWciIHByb3BlcnR5IHZhbHVlcy5BcyBwZXIgbXkgdW5kZXJz
dGFuZGluZyBvbmNlIHdlIHN1cHBvcnQgb3RoZXIgY29udHJvbGxlciBpbiBYRU4gYXQgdGhhdCB0
aW1lIHdlIGhhdmUgdG8gaGFuZGxlIHRoZSBwb3NzaWJpbGl0eSBvZiBtdWx0aXBsZSBhZGRyZXNz
ZXMgYW5kIHRoYXQgY29kZSBpcyBzcGVjaWZpYyB0byB0aGF0IGNvbnRyb2xsZXIgLg0KPiANCj4g
DQo+PiArICAgIGNmZy0+cGh5c19hZGRyID0gYWRkcjsNCj4+ICsgICAgY2ZnLT5zaXplID0gc2l6
ZTsNCj4+ICsgICAgY2ZnLT5vcHMgPSBvcHM7DQo+PiArDQo+PiArICAgIC8qDQo+PiArICAgICAq
IE9uIDY0LWJpdCBzeXN0ZW1zLCB3ZSBkbyBhIHNpbmdsZSBpb3JlbWFwIGZvciB0aGUgd2hvbGUg
Y29uZmlnIHNwYWNlDQo+PiArICAgICAqIHNpbmNlIHdlIGhhdmUgZW5vdWdoIHZpcnR1YWwgYWRk
cmVzcyByYW5nZSBhdmFpbGFibGUuICBPbiAzMi1iaXQsIHdlDQo+PiArICAgICAqIGlvcmVtYXAg
dGhlIGNvbmZpZyBzcGFjZSBmb3IgZWFjaCBidXMgaW5kaXZpZHVhbGx5Lg0KPj4gKyAgICAgKg0K
Pj4gKyAgICAgKiBBcyBvZiBub3cgb25seSA2NC1iaXQgaXMgc3VwcG9ydGVkIDMyLWJpdCBpcyBu
b3Qgc3VwcG9ydGVkLg0KPj4gKyAgICAgKi8NCj4+ICsgICAgY2ZnLT53aW4gPSBwY2lfcmVtYXBf
Y2Znc3BhY2UoY2ZnLT5waHlzX2FkZHIsIGNmZy0+c2l6ZSk7DQo+PiArICAgIGlmICggIWNmZy0+
d2luICkNCj4+ICsgICAgICAgIGdvdG8gZXJyX2V4aXRfcmVtYXA7DQo+PiArDQo+PiArICAgIHBy
aW50aygiRUNBTSBhdCBbbWVtICVseC0lbHhdIGZvciBbYnVzICV4LSV4XSBcbiIsY2ZnLT5waHlz
X2FkZHIsDQo+PiArICAgICAgICAgICAgY2ZnLT5waHlzX2FkZHIgKyBjZmctPnNpemUgLSAxLGNm
Zy0+YnVzbl9zdGFydCxjZmctPmJ1c25fZW5kKTsNCj4+ICsNCj4+ICsgICAgaWYgKCBvcHMtPmlu
aXQgKSB7DQo+PiArICAgICAgICBlcnIgPSBvcHMtPmluaXQoY2ZnKTsNCj4+ICsgICAgICAgIGlm
IChlcnIpDQo+PiArICAgICAgICAgICAgZ290byBlcnJfZXhpdDsNCj4+ICsgICAgfQ0KPj4gKw0K
Pj4gKyAgICByZXR1cm4gY2ZnOw0KPj4gKw0KPj4gK2Vycl9leGl0X3JlbWFwOg0KPj4gKyAgICBw
cmludGsoWEVOTE9HX0VSUiAiRUNBTSBpb3JlbWFwIGZhaWxlZFxuIik7DQo+PiArZXJyX2V4aXQ6
DQo+PiArICAgIHBjaV9lY2FtX2ZyZWUoY2ZnKTsNCj4+ICsgICAgcmV0dXJuIE5VTEw7DQo+PiAr
fQ0KPj4gKw0KPj4gK3N0YXRpYyBzdHJ1Y3QgcGNpX2hvc3RfYnJpZGdlICogcGNpX2FsbG9jX2hv
c3RfYnJpZGdlKHZvaWQpDQo+PiArew0KPj4gKyAgICBzdHJ1Y3QgcGNpX2hvc3RfYnJpZGdlICpi
cmlkZ2UgPSB4emFsbG9jKHN0cnVjdCBwY2lfaG9zdF9icmlkZ2UpOw0KPj4gKw0KPj4gKyAgICBp
ZiAoICFicmlkZ2UgKQ0KPj4gKyAgICAgICAgcmV0dXJuIE5VTEw7DQo+PiArDQo+PiArICAgIElO
SVRfTElTVF9IRUFEKCZicmlkZ2UtPm5vZGUpOw0KPj4gKyAgICByZXR1cm4gYnJpZGdlOw0KPj4g
K30NCj4+ICsNCj4+ICtpbnQgcGNpX2hvc3RfY29tbW9uX3Byb2JlKHN0cnVjdCBkdF9kZXZpY2Vf
bm9kZSAqZGV2LA0KPj4gKyAgICAgICAgc3RydWN0IHBjaV9lY2FtX29wcyAqb3BzKQ0KPj4gK3sN
Cj4+ICsgICAgc3RydWN0IHBjaV9ob3N0X2JyaWRnZSAqYnJpZGdlOw0KPj4gKyAgICBzdHJ1Y3Qg
cGNpX2NvbmZpZ193aW5kb3cgKmNmZzsNCj4+ICsgICAgdTMyIHNlZ21lbnQ7DQo+PiArDQo+PiAr
ICAgIGJyaWRnZSA9IHBjaV9hbGxvY19ob3N0X2JyaWRnZSgpOw0KPj4gKyAgICBpZiAoICFicmlk
Z2UgKQ0KPj4gKyAgICAgICAgcmV0dXJuIC1FTk9NRU07DQo+PiArDQo+PiArICAgIC8qIFBhcnNl
IGFuZCBtYXAgb3VyIENvbmZpZ3VyYXRpb24gU3BhY2Ugd2luZG93cyAqLw0KPj4gKyAgICBjZmcg
PSBnZW5fcGNpX2luaXQoZGV2LCBvcHMpOw0KPj4gKyAgICBpZiAoICFjZmcgKQ0KPj4gKyAgICAg
ICAgcmV0dXJuIC1FTk9NRU07DQo+IA0KPiBJbiBjYXNlIG9mIGVycm9ycyB0aGUgYWxsb2NhdGVk
IGJyaWRnZSBpcyBub3QgZnJlZWQuDQo+IA0KDQpXaWxsIGZpeCB0aGlzIGluIG5leHQgdmVyc2lv
bi4NCg0KPiANCj4+ICsgICAgYnJpZGdlLT5kdF9ub2RlID0gZGV2Ow0KPj4gKyAgICBicmlkZ2Ut
PnN5c2RhdGEgPSBjZmc7DQo+PiArICAgIGJyaWRnZS0+b3BzID0gJm9wcy0+cGNpX29wczsNCj4+
ICsNCj4+ICsgICAgaWYoICFkdF9wcm9wZXJ0eV9yZWFkX3UzMihkZXYsICJsaW51eCxwY2ktZG9t
YWluIiwgJnNlZ21lbnQpICkNCj4+ICsgICAgew0KPj4gKyAgICAgICAgcHJpbnRrKFhFTkxPR19F
UlIgIlwibGludXgscGNpLWRvbWFpblwiIHByb3BlcnR5IGluIG5vdCBhdmFpbGFibGUgaW4gRFRc
biIpOw0KPj4gKyAgICAgICAgcmV0dXJuIC1FTk9ERVY7DQo+PiArICAgIH0NCj4+ICsNCj4+ICsg
ICAgYnJpZGdlLT5zZWdtZW50ID0gKHUxNilzZWdtZW50Ow0KPiANCj4gTXkgdW5kZXJzdGFuZGlu
ZyBpcyB0aGF0IGEgTGludXggcGNpLWRvbWFpbiBkb2Vzbid0IGNvcnJlc3BvbmQgZXhhY3RseQ0K
PiB0byBhIFBDSSBzZWdtZW50LiBTZWUgZm9yIGluc3RhbmNlOg0KPiANCj4gaHR0cHM6Ly9saXN0
cy5nbnUub3JnL2FyY2hpdmUvaHRtbC9xZW11LWRldmVsLzIwMTgtMDQvbXNnMDM4ODUuaHRtbA0K
PiANCj4gRG8gd2UgbmVlZCB0byBjYXJlIGFib3V0IHRoZSBkaWZmZXJlbmNlPyBJZiB3ZSBtZWFu
IHBjaS1kb21haW4gaGVyZSwNCj4gc2hvdWxkIHdlIGp1c3QgY2FsbCB0aGVtIGFzIHN1Y2ggaW5z
dGVhZCBvZiBjYWxsaW5nIHRoZW0gInNlZ21lbnRzIiBpbg0KPiBYZW4gKGlmIHRoZXkgYXJlIG5v
dCBzZWdtZW50cyk/DQo+IA0KDQpBcyBwZXIgbXkgdW5kZXJzdGFuZGluZyBsaW51eCBkb21haW4g
bnVtYmVyIGlzIGFuIGFsaWFzIHRvIHNlZ21lbnQgbnVtYmVyLkRldmljZSB0cmVlIGJpbmRpbmcg
dXNlcyB0aGUgcHJvcGVydHkgZG9tYWluIG51bWJlciBhbmQgQUNQSSB1c2VzIHRoZSBwcm9wZXJ0
eSBzZWdtZW50IG51bWJlci4NCldlIGhhdmUganVzdCBzdGFydGVkIHRoZSBpbnZlc3RpZ2F0aW9u
IG9uIEFDUEksIG9uY2Ugd2UgaGF2ZSBtb3JlIGluZm9ybWF0aW9uIGFib3V0IE1NQ0ZHIEFDUEkg
dGFibGUgaG93IGl0IHdvcmtzIHdpbGwgc2hhcmUgdGhlIGluZm9ybWF0aW9uIGFuZCB3aWxsIHVw
ZGF0ZSB0aGUgcGF0Y2hlcyBhbmQgZGVzaWduIGRvYyBhY2NvcmRpbmdseS4NCg0KDQo+IA0KPj4g
KyAgICBsaXN0X2FkZF90YWlsKCZicmlkZ2UtPm5vZGUsICZwY2lfaG9zdF9icmlkZ2VzKTsNCj4g
DQo+IEl0IGxvb2tzIGxpa2UgJnBjaV9ob3N0X2JyaWRnZXMgc2hvdWxkIGJlIGFuIG9yZGVyZWQg
bGlzdCwgb3JkZXJlZCBieQ0KPiBzZWdtZW50IG51bWJlcj8NCj4gDQoNCkFzIEp1bGllbiBhbHNv
IG1lbnRpb25lZCBhY2Nlc3MgdG8gdGhlIFBDSSBjb25maWcgc3BhY2UgaXMgcmFuZG9tIHRoZXJl
IGlzIG5vIG5lZWQgdG8gaGF2ZSBhbiBvcmRlcmVkIGxpc3QuDQoNCj4gDQo+PiArICAgIHJldHVy
biAwOw0KPj4gK30NCj4+ICsNCj4+ICsvKg0KPj4gKyAqIFRoaXMgZnVuY3Rpb24gd2lsbCBsb29r
dXAgYW4gaG9zdGJyaWRnZSBiYXNlZCBvbiB0aGUgc2VnbWVudCBhbmQgYnVzDQo+PiArICogbnVt
YmVyLg0KPj4gKyAqLw0KPj4gK3N0cnVjdCBwY2lfaG9zdF9icmlkZ2UgKnBjaV9maW5kX2hvc3Rf
YnJpZGdlKHVpbnQxNl90IHNlZ21lbnQsIHVpbnQ4X3QgYnVzKQ0KPj4gK3sNCj4+ICsgICAgc3Ry
dWN0IHBjaV9ob3N0X2JyaWRnZSAqYnJpZGdlOw0KPj4gKyAgICBib29sIGZvdW5kID0gZmFsc2U7
DQo+PiArDQo+PiArICAgIGxpc3RfZm9yX2VhY2hfZW50cnkoIGJyaWRnZSwgJnBjaV9ob3N0X2Jy
aWRnZXMsIG5vZGUgKQ0KPj4gKyAgICB7DQo+PiArICAgICAgICBpZiAoIGJyaWRnZS0+c2VnbWVu
dCAhPSBzZWdtZW50ICkNCj4+ICsgICAgICAgICAgICBjb250aW51ZTsNCj4+ICsNCj4+ICsgICAg
ICAgIGZvdW5kID0gdHJ1ZTsNCj4+ICsgICAgICAgIGJyZWFrOw0KPj4gKyAgICB9DQo+PiArDQo+
PiArICAgIHJldHVybiAoZm91bmQpID8gYnJpZGdlIDogTlVMTDsNCj4+ICt9DQo+PiArLyoNCj4+
ICsgKiBMb2NhbCB2YXJpYWJsZXM6DQo+PiArICogbW9kZTogQw0KPj4gKyAqIGMtZmlsZS1zdHls
ZTogIkJTRCINCj4+ICsgKiBjLWJhc2ljLW9mZnNldDogNA0KPj4gKyAqIHRhYi13aWR0aDogNA0K
Pj4gKyAqIGluZGVudC10YWJzLW1vZGU6IG5pbA0KPj4gKyAqIEVuZDoNCj4+ICsgKi8NCj4+IGRp
ZmYgLS1naXQgYS94ZW4vYXJjaC9hcm0vcGNpL3BjaS1ob3N0LWdlbmVyaWMuYyBiL3hlbi9hcmNo
L2FybS9wY2kvcGNpLWhvc3QtZ2VuZXJpYy5jDQo+PiBuZXcgZmlsZSBtb2RlIDEwMDY0NA0KPj4g
aW5kZXggMDAwMDAwMDAwMC4uY2Q2N2IzZGVjNg0KPj4gLS0tIC9kZXYvbnVsbA0KPj4gKysrIGIv
eGVuL2FyY2gvYXJtL3BjaS9wY2ktaG9zdC1nZW5lcmljLmMNCj4+IEBAIC0wLDAgKzEsMTMxIEBA
DQo+PiArLyoNCj4+ICsgKiBDb3B5cmlnaHQgKEMpIDIwMjAgQXJtIEx0ZC4NCj4+ICsgKg0KPj4g
KyAqIEJhc2VkIG9uIExpbnV4IGRyaXZlcnMvcGNpL2NvbnRyb2xsZXIvcGNpLWhvc3QtY29tbW9u
LmMNCj4+ICsgKiBCYXNlZCBvbiBMaW51eCBkcml2ZXJzL3BjaS9jb250cm9sbGVyL3BjaS1ob3N0
LWdlbmVyaWMuYw0KPj4gKyAqIENvcHlyaWdodCAoQykgMjAxNCBBUk0gTGltaXRlZCBXaWxsIERl
YWNvbiA8d2lsbC5kZWFjb25AYXJtLmNvbT4NCj4+ICsgKg0KPj4gKyAqIFRoaXMgcHJvZ3JhbSBp
cyBmcmVlIHNvZnR3YXJlOyB5b3UgY2FuIHJlZGlzdHJpYnV0ZSBpdCBhbmQvb3IgbW9kaWZ5DQo+
PiArICogaXQgdW5kZXIgdGhlIHRlcm1zIG9mIHRoZSBHTlUgR2VuZXJhbCBQdWJsaWMgTGljZW5z
ZSB2ZXJzaW9uIDIgYXMNCj4+ICsgKiBwdWJsaXNoZWQgYnkgdGhlIEZyZWUgU29mdHdhcmUgRm91
bmRhdGlvbi4NCj4+ICsgKg0KPj4gKyAqIFRoaXMgcHJvZ3JhbSBpcyBkaXN0cmlidXRlZCBpbiB0
aGUgaG9wZSB0aGF0IGl0IHdpbGwgYmUgdXNlZnVsLA0KPj4gKyAqIGJ1dCBXSVRIT1VUIEFOWSBX
QVJSQU5UWTsgd2l0aG91dCBldmVuIHRoZSBpbXBsaWVkIHdhcnJhbnR5IG9mDQo+PiArICogTUVS
Q0hBTlRBQklMSVRZIG9yIEZJVE5FU1MgRk9SIEEgUEFSVElDVUxBUiBQVVJQT1NFLiAgU2VlIHRo
ZQ0KPj4gKyAqIEdOVSBHZW5lcmFsIFB1YmxpYyBMaWNlbnNlIGZvciBtb3JlIGRldGFpbHMuDQo+
PiArICoNCj4+ICsgKiBZb3Ugc2hvdWxkIGhhdmUgcmVjZWl2ZWQgYSBjb3B5IG9mIHRoZSBHTlUg
R2VuZXJhbCBQdWJsaWMgTGljZW5zZQ0KPj4gKyAqIGFsb25nIHdpdGggdGhpcyBwcm9ncmFtLiAg
SWYgbm90LCBzZWUgPGh0dHA6Ly93d3cuZ251Lm9yZy9saWNlbnNlcy8+Lg0KPj4gKyAqLw0KPj4g
Kw0KPj4gKyNpbmNsdWRlIDxhc20vZGV2aWNlLmg+DQo+PiArI2luY2x1ZGUgPGFzbS9pby5oPg0K
Pj4gKyNpbmNsdWRlIDx4ZW4vcGNpLmg+DQo+PiArI2luY2x1ZGUgPGFzbS9wY2kuaD4NCj4+ICsN
Cj4+ICsvKg0KPj4gKyAqIEZ1bmN0aW9uIHRvIGdldCB0aGUgY29uZmlnIHNwYWNlIGJhc2UuDQo+
PiArICovDQo+PiArc3RhdGljIHZvaWQgX19pb21lbSAqcGNpX2NvbmZpZ19iYXNlKHN0cnVjdCBw
Y2lfaG9zdF9icmlkZ2UgKmJyaWRnZSwNCj4+ICsgICAgICAgIHVpbnQzMl90IHNiZGYsIGludCB3
aGVyZSkNCj4gDQo+IEkgdGhpbmsgdGhlIGZ1bmN0aW9uIGlzIG1pc25hbWVkIGJlY2F1c2UgcmVh
ZGluZyB0aGUgY29kZSBiZWxvdyBpdCBsb29rcw0KPiBsaWtlIGl0IGlzIG5vdCBqdXN0IHJldHVy
bmluZyB0aGUgYmFzZSBjb25maWcgc3BhY2UgYWRkcmVzcyBidXQgYWxzbyB0aGUNCj4gc3BlY2lm
aWMgYWRkcmVzcyB3ZSBuZWVkIHRvIHJlYWQvd3JpdGUgKGFkZGluZyB0aGUgZGV2aWNlIG9mZnNl
dCwNCj4gIndoZXJlIiwgYW5kIGV2ZXJ5dGhpbmcpLg0KPiANCj4gTWF5YmUgcGNpX2NvbmZpZ19n
ZXRfYWRkcmVzcyBvciBzb21ldGhpbmcgbGlrZSB0aGF0Pw0KDQpPSyB5ZXMgd2lsbCByZW5hbWUg
dGhlIGZ1bmN0aW9uIG5hbWUuDQo+IA0KPiANCj4+ICt7DQo+PiArICAgIHN0cnVjdCBwY2lfY29u
ZmlnX3dpbmRvdyAqY2ZnID0gYnJpZGdlLT5zeXNkYXRhOw0KPj4gKyAgICB1bnNpZ25lZCBpbnQg
ZGV2Zm5fc2hpZnQgPSBjZmctPm9wcy0+YnVzX3NoaWZ0IC0gODsNCj4+ICsNCj4+ICsgICAgcGNp
X3NiZGZfdCBzYmRmX3QgPSAocGNpX3NiZGZfdCkgc2JkZiA7DQo+PiArDQo+PiArICAgIHVuc2ln
bmVkIGludCBidXNuID0gc2JkZl90LmJ1czsNCj4+ICsgICAgdm9pZCBfX2lvbWVtICpiYXNlOw0K
Pj4gKw0KPj4gKyAgICBpZiAoIGJ1c24gPCBjZmctPmJ1c25fc3RhcnQgfHwgYnVzbiA+IGNmZy0+
YnVzbl9lbmQgKQ0KPj4gKyAgICAgICAgcmV0dXJuIE5VTEw7DQo+PiArDQo+PiArICAgIGJhc2Ug
PSBjZmctPndpbiArIChidXNuIDw8IGNmZy0+b3BzLT5idXNfc2hpZnQpOw0KPj4gKw0KPj4gKyAg
ICByZXR1cm4gYmFzZSArIChQQ0lfREVWRk4oc2JkZl90LmRldiwgc2JkZl90LmZuKSA8PCBkZXZm
bl9zaGlmdCkgKyB3aGVyZTsNCj4+ICt9DQo+PiArDQo+PiAraW50IHBjaV9lY2FtX2NvbmZpZ193
cml0ZShzdHJ1Y3QgcGNpX2hvc3RfYnJpZGdlICpicmlkZ2UsIHVpbnQzMl90IHNiZGYsDQo+PiAr
ICAgICAgICBpbnQgd2hlcmUsIGludCBzaXplLCB1MzIgdmFsKQ0KPj4gK3sNCj4+ICsgICAgdm9p
ZCBfX2lvbWVtICphZGRyOw0KPj4gKw0KPj4gKyAgICBhZGRyID0gcGNpX2NvbmZpZ19iYXNlKGJy
aWRnZSwgc2JkZiwgd2hlcmUpOw0KPj4gKyAgICBpZiAoICFhZGRyICkNCj4+ICsgICAgICAgIHJl
dHVybiAtRU5PREVWOw0KPj4gKw0KPj4gKyAgICBpZiAoIHNpemUgPT0gMSApDQo+PiArICAgICAg
ICB3cml0ZWIodmFsLCBhZGRyKTsNCj4+ICsgICAgZWxzZSBpZiAoIHNpemUgPT0gMiApDQo+PiAr
ICAgICAgICB3cml0ZXcodmFsLCBhZGRyKTsNCj4+ICsgICAgZWxzZQ0KPj4gKyAgICAgICAgd3Jp
dGVsKHZhbCwgYWRkcik7DQo+IA0KPiBwbGVhc2UgdXNlIGEgc3dpdGNoDQo+IA0KPiANCj4+ICsg
ICAgcmV0dXJuIDA7DQo+PiArfQ0KPj4gKw0KPj4gK2ludCBwY2lfZWNhbV9jb25maWdfcmVhZChz
dHJ1Y3QgcGNpX2hvc3RfYnJpZGdlICpicmlkZ2UsIHVpbnQzMl90IHNiZGYsDQo+PiArICAgICAg
ICBpbnQgd2hlcmUsIGludCBzaXplLCB1MzIgKnZhbCkNCj4+ICt7DQo+PiArICAgIHZvaWQgX19p
b21lbSAqYWRkcjsNCj4+ICsNCj4+ICsgICAgYWRkciA9IHBjaV9jb25maWdfYmFzZShicmlkZ2Us
IHNiZGYsIHdoZXJlKTsNCj4+ICsgICAgaWYgKCAhYWRkciApIHsNCj4+ICsgICAgICAgICp2YWwg
PSB+MDsNCj4+ICsgICAgICAgIHJldHVybiAtRU5PREVWOw0KPj4gKyAgICB9DQo+PiArDQo+PiAr
ICAgIGlmICggc2l6ZSA9PSAxICkNCj4+ICsgICAgICAgICp2YWwgPSByZWFkYihhZGRyKTsNCj4+
ICsgICAgZWxzZSBpZiAoIHNpemUgPT0gMiApDQo+PiArICAgICAgICAqdmFsID0gcmVhZHcoYWRk
cik7DQo+PiArICAgIGVsc2UNCj4+ICsgICAgICAgICp2YWwgPSByZWFkbChhZGRyKTsNCj4gDQo+
IHBsZWFzZSB1c2UgYSBzd2l0Y2gNCg0KT2suDQo+IA0KPiANCj4+ICsgICAgcmV0dXJuIDA7DQo+
PiArfQ0KPj4gKw0KPj4gKy8qIEVDQU0gb3BzICovDQo+PiArc3RydWN0IHBjaV9lY2FtX29wcyBw
Y2lfZ2VuZXJpY19lY2FtX29wcyA9IHsNCj4+ICsgICAgLmJ1c19zaGlmdCAgPSAyMCwNCj4+ICsg
ICAgLnBjaV9vcHMgICAgPSB7DQo+PiArICAgICAgICAucmVhZCAgICAgICA9IHBjaV9lY2FtX2Nv
bmZpZ19yZWFkLA0KPj4gKyAgICAgICAgLndyaXRlICAgICAgPSBwY2lfZWNhbV9jb25maWdfd3Jp
dGUsDQo+PiArICAgIH0NCj4+ICt9Ow0KPj4gKw0KPj4gK3N0YXRpYyBjb25zdCBzdHJ1Y3QgZHRf
ZGV2aWNlX21hdGNoIGdlbl9wY2lfZHRfbWF0Y2hbXSA9IHsNCj4+ICsgICAgeyAuY29tcGF0aWJs
ZSA9ICJwY2ktaG9zdC1lY2FtLWdlbmVyaWMiLA0KPj4gKyAgICAgIC5kYXRhID0gICAgICAgJnBj
aV9nZW5lcmljX2VjYW1fb3BzIH0sDQo+IA0KPiBzcHVyaW91cyBibGFuayBsaW5lDQoNCk9rIHdp
bGwgZml4IHRoaXMuDQo+IA0KPiANCj4+ICsgICAgeyB9LA0KPj4gK307DQo+PiArDQo+PiArc3Rh
dGljIGludCBnZW5fcGNpX2R0X2luaXQoc3RydWN0IGR0X2RldmljZV9ub2RlICpkZXYsIGNvbnN0
IHZvaWQgKmRhdGEpDQo+PiArew0KPj4gKyAgICBjb25zdCBzdHJ1Y3QgZHRfZGV2aWNlX21hdGNo
ICpvZl9pZDsNCj4+ICsgICAgc3RydWN0IHBjaV9lY2FtX29wcyAqb3BzOw0KPj4gKw0KPj4gKyAg
ICBvZl9pZCA9IGR0X21hdGNoX25vZGUoZ2VuX3BjaV9kdF9tYXRjaCwgZGV2LT5kZXYub2Zfbm9k
ZSk7DQo+PiArICAgIG9wcyA9IChzdHJ1Y3QgcGNpX2VjYW1fb3BzICopIG9mX2lkLT5kYXRhOw0K
Pj4gKw0KPj4gKyAgICBwcmludGsoWEVOTE9HX0lORk8gIkZvdW5kIFBDSSBob3N0IGJyaWRnZSAl
cyBjb21wYXRpYmxlOiVzIFxuIiwNCj4+ICsgICAgICAgICAgICBkdF9ub2RlX2Z1bGxfbmFtZShk
ZXYpLCBvZl9pZC0+Y29tcGF0aWJsZSk7DQo+PiArDQo+PiArICAgIHJldHVybiBwY2lfaG9zdF9j
b21tb25fcHJvYmUoZGV2LCBvcHMpOw0KPj4gK30NCj4+ICsNCj4+ICtEVF9ERVZJQ0VfU1RBUlQo
cGNpX2dlbiwgIlBDSSBIT1NUIEdFTkVSSUMiLCBERVZJQ0VfUENJKQ0KPj4gKy5kdF9tYXRjaCA9
IGdlbl9wY2lfZHRfbWF0Y2gsDQo+PiArLmluaXQgPSBnZW5fcGNpX2R0X2luaXQsDQo+PiArRFRf
REVWSUNFX0VORA0KPj4gKw0KPj4gKy8qDQo+PiArICogTG9jYWwgdmFyaWFibGVzOg0KPj4gKyAq
IG1vZGU6IEMNCj4+ICsgKiBjLWZpbGUtc3R5bGU6ICJCU0QiDQo+PiArICogYy1iYXNpYy1vZmZz
ZXQ6IDQNCj4+ICsgKiB0YWItd2lkdGg6IDQNCj4+ICsgKiBpbmRlbnQtdGFicy1tb2RlOiBuaWwN
Cj4+ICsgKiBFbmQ6DQo+PiArICovDQo+PiBkaWZmIC0tZ2l0IGEveGVuL2FyY2gvYXJtL3BjaS9w
Y2kuYyBiL3hlbi9hcmNoL2FybS9wY2kvcGNpLmMNCj4+IG5ldyBmaWxlIG1vZGUgMTAwNjQ0DQo+
PiBpbmRleCAwMDAwMDAwMDAwLi5mOGNiYjk5NTkxDQo+PiAtLS0gL2Rldi9udWxsDQo+PiArKysg
Yi94ZW4vYXJjaC9hcm0vcGNpL3BjaS5jDQo+PiBAQCAtMCwwICsxLDExMiBAQA0KPj4gKy8qDQo+
PiArICogQ29weXJpZ2h0IChDKSAyMDIwIEFybSBMdGQuDQo+PiArICoNCj4+ICsgKiBUaGlzIHBy
b2dyYW0gaXMgZnJlZSBzb2Z0d2FyZTsgeW91IGNhbiByZWRpc3RyaWJ1dGUgaXQgYW5kL29yIG1v
ZGlmeQ0KPj4gKyAqIGl0IHVuZGVyIHRoZSB0ZXJtcyBvZiB0aGUgR05VIEdlbmVyYWwgUHVibGlj
IExpY2Vuc2UgdmVyc2lvbiAyIGFzDQo+PiArICogcHVibGlzaGVkIGJ5IHRoZSBGcmVlIFNvZnR3
YXJlIEZvdW5kYXRpb24uDQo+PiArICoNCj4+ICsgKiBUaGlzIHByb2dyYW0gaXMgZGlzdHJpYnV0
ZWQgaW4gdGhlIGhvcGUgdGhhdCBpdCB3aWxsIGJlIHVzZWZ1bCwNCj4+ICsgKiBidXQgV0lUSE9V
VCBBTlkgV0FSUkFOVFk7IHdpdGhvdXQgZXZlbiB0aGUgaW1wbGllZCB3YXJyYW50eSBvZg0KPj4g
KyAqIE1FUkNIQU5UQUJJTElUWSBvciBGSVRORVNTIEZPUiBBIFBBUlRJQ1VMQVIgUFVSUE9TRS4g
IFNlZSB0aGUNCj4+ICsgKiBHTlUgR2VuZXJhbCBQdWJsaWMgTGljZW5zZSBmb3IgbW9yZSBkZXRh
aWxzLg0KPj4gKyAqDQo+PiArICogWW91IHNob3VsZCBoYXZlIHJlY2VpdmVkIGEgY29weSBvZiB0
aGUgR05VIEdlbmVyYWwgUHVibGljIExpY2Vuc2UNCj4+ICsgKiBhbG9uZyB3aXRoIHRoaXMgcHJv
Z3JhbS4gIElmIG5vdCwgc2VlIDxodHRwOi8vd3d3LmdudS5vcmcvbGljZW5zZXMvPi4NCj4+ICsg
Ki8NCj4+ICsNCj4+ICsjaW5jbHVkZSA8eGVuL2FjcGkuaD4NCj4+ICsjaW5jbHVkZSA8eGVuL2Rl
dmljZV90cmVlLmg+DQo+PiArI2luY2x1ZGUgPHhlbi9lcnJuby5oPg0KPj4gKyNpbmNsdWRlIDx4
ZW4vaW5pdC5oPg0KPj4gKyNpbmNsdWRlIDx4ZW4vcGNpLmg+DQo+PiArI2luY2x1ZGUgPHhlbi9w
YXJhbS5oPg0KPj4gKw0KPj4gK3N0YXRpYyBpbnQgX19pbml0IGR0X3BjaV9pbml0KHZvaWQpDQo+
PiArew0KPj4gKyAgICBzdHJ1Y3QgZHRfZGV2aWNlX25vZGUgKm5wOw0KPj4gKyAgICBpbnQgcmM7
DQo+PiArDQo+PiArICAgIGR0X2Zvcl9lYWNoX2RldmljZV9ub2RlKGR0X2hvc3QsIG5wKQ0KPj4g
KyAgICB7DQo+PiArICAgICAgICByYyA9IGRldmljZV9pbml0KG5wLCBERVZJQ0VfUENJLCBOVUxM
KTsNCj4+ICsgICAgICAgIGlmKCAhcmMgKQ0KPj4gKyAgICAgICAgICAgIGNvbnRpbnVlOw0KPj4g
KyAgICAgICAgLyoNCj4+ICsgICAgICAgICAqIElnbm9yZSB0aGUgZm9sbG93aW5nIGVycm9yIGNv
ZGVzOg0KPj4gKyAgICAgICAgICogICAtIEVCQURGOiBJbmRpY2F0ZSB0aGUgY3VycmVudCBpcyBu
b3QgYW4gcGNpDQo+PiArICAgICAgICAgKiAgIC0gRU5PREVWOiBUaGUgcGNpIGRldmljZSBpcyBu
b3QgcHJlc2VudCBvciBjYW5ub3QgYmUgdXNlZCBieQ0KPj4gKyAgICAgICAgICogICAgIFhlbi4N
Cj4+ICsgICAgICAgICAqLw0KPj4gKyAgICAgICAgZWxzZSBpZiAoIHJjICE9IC1FQkFERiAmJiBy
YyAhPSAtRU5PREVWICkNCj4+ICsgICAgICAgIHsNCj4+ICsgICAgICAgICAgICBwcmludGsoWEVO
TE9HX0VSUiAiTm8gZHJpdmVyIGZvdW5kIGluIFhFTiBvciBkcml2ZXIgaW5pdCBlcnJvci5cbiIp
Ow0KPj4gKyAgICAgICAgICAgIHJldHVybiByYzsNCj4+ICsgICAgICAgIH0NCj4+ICsgICAgfQ0K
Pj4gKw0KPj4gKyAgICByZXR1cm4gMDsNCj4+ICt9DQo+PiArDQo+PiArI2lmZGVmIENPTkZJR19B
Q1BJDQo+PiArc3RhdGljIHZvaWQgX19pbml0IGFjcGlfcGNpX2luaXQodm9pZCkNCj4+ICt7DQo+
PiArICAgIHByaW50ayhYRU5MT0dfRVJSICJBQ1BJIHBjaSBpbml0IG5vdCBzdXBwb3J0ZWQgXG4i
KTsNCj4+ICsgICAgcmV0dXJuOw0KPj4gK30NCj4+ICsjZWxzZQ0KPj4gK3N0YXRpYyBpbmxpbmUg
dm9pZCBfX2luaXQgYWNwaV9wY2lfaW5pdCh2b2lkKSB7IH0NCj4+ICsjZW5kaWYNCj4+ICsNCj4+
ICtzdGF0aWMgYm9vbCBfX2luaXRkYXRhIHBhcmFtX3BjaV9lbmFibGU7DQo+PiArc3RhdGljIGlu
dCBfX2luaXQgcGFyc2VfcGNpX3BhcmFtKGNvbnN0IGNoYXIgKmFyZykNCj4+ICt7DQo+PiArICAg
IGlmICggIWFyZyApDQo+PiArICAgIHsNCj4+ICsgICAgICAgIHBhcmFtX3BjaV9lbmFibGUgPSBm
YWxzZTsNCj4+ICsgICAgICAgIHJldHVybiAwOw0KPj4gKyAgICB9DQo+PiArDQo+PiArICAgIHN3
aXRjaCAoIHBhcnNlX2Jvb2woYXJnLCBOVUxMKSApDQo+PiArICAgIHsNCj4+ICsgICAgICAgIGNh
c2UgMDoNCj4+ICsgICAgICAgICAgICBwYXJhbV9wY2lfZW5hYmxlID0gZmFsc2U7DQo+PiArICAg
ICAgICAgICAgcmV0dXJuIDA7DQo+PiArICAgICAgICBjYXNlIDE6DQo+PiArICAgICAgICAgICAg
cGFyYW1fcGNpX2VuYWJsZSA9IHRydWU7DQo+PiArICAgICAgICAgICAgcmV0dXJuIDA7DQo+PiAr
ICAgIH0NCj4+ICsNCj4+ICsgICAgcmV0dXJuIC1FSU5WQUw7DQo+PiArfQ0KPj4gK2N1c3RvbV9w
YXJhbSgicGNpIiwgcGFyc2VfcGNpX3BhcmFtKTsNCj4gDQo+IFdoZW4gYWRkaW5nIG5ldyBjb21t
YW5kIGxpbmUgcGFyYW1ldGVycyBwbGVhc2UgYWxzbyBhZGQgaXRzDQo+IGRvY3VtZW50YXRpb24g
KGRvY3MvbWlzYy94ZW4tY29tbWFuZC1saW5lLnBhbmRvYykgaW4gdGhlIHNhbWUgcGF0Y2gsDQo+
IHVubGVzcyB0aGlzIGlzIG1lYW50IHRvIGJlIGp1c3QgdHJhbnNpZW50IGFuZCB3ZSdsbCBnZXQg
cmVtb3ZlZCBiZWZvcmUNCj4gdGhlIGZpbmFsIGNvbW1pdCBvZiB0aGUgc2VyaWVzLg0KPiANCg0K
T2sgeWVzIHdpbGwgYWRkIGl0IHRvIHRoZSBkb2N1bWVudGF0aW9uLg0KDQo+IA0KPj4gK3ZvaWQg
X19pbml0IHBjaV9pbml0KHZvaWQpDQo+PiArew0KPj4gKyAgICAvKg0KPj4gKyAgICAgKiBFbmFi
bGUgUENJIHdoZW4gaGFzIGJlZW4gZW5hYmxlZCBleHBsaWNpdGx5IChwY2k9b24pDQo+PiArICAg
ICAqLw0KPj4gKyAgICBpZiAoICFwYXJhbV9wY2lfZW5hYmxlKQ0KPj4gKyAgICAgICAgZ290byBk
aXNhYmxlOw0KPj4gKw0KPj4gKyAgICBpZiAoIGFjcGlfZGlzYWJsZWQgKQ0KPj4gKyAgICAgICAg
ZHRfcGNpX2luaXQoKTsNCj4+ICsgICAgZWxzZQ0KPj4gKyAgICAgICAgYWNwaV9wY2lfaW5pdCgp
Ow0KPj4gKw0KPj4gKyNpZmRlZiBDT05GSUdfSEFTX1BDSQ0KPj4gKyAgICBwY2lfc2VnbWVudHNf
aW5pdCgpOw0KPj4gKyNlbmRpZg0KPj4gKw0KPj4gK2Rpc2FibGU6DQo+PiArICAgIHJldHVybjsN
Cj4+ICt9DQo+PiArDQo+PiArLyoNCj4+ICsgKiBMb2NhbCB2YXJpYWJsZXM6DQo+PiArICogbW9k
ZTogQw0KPj4gKyAqIGMtZmlsZS1zdHlsZTogIkJTRCINCj4+ICsgKiBjLWJhc2ljLW9mZnNldDog
NA0KPj4gKyAqIHRhYi13aWR0aDogNA0KPj4gKyAqIGluZGVudC10YWJzLW1vZGU6IG5pbA0KPj4g
KyAqIEVuZDoNCj4+ICsgKi8NCj4+IGRpZmYgLS1naXQgYS94ZW4vYXJjaC9hcm0vc2V0dXAuYyBi
L3hlbi9hcmNoL2FybS9zZXR1cC5jDQo+PiBpbmRleCA3OTY4Y2VlNDdkLi4yZDdmMWRiNDRmIDEw
MDY0NA0KPj4gLS0tIGEveGVuL2FyY2gvYXJtL3NldHVwLmMNCj4+ICsrKyBiL3hlbi9hcmNoL2Fy
bS9zZXR1cC5jDQo+PiBAQCAtOTMwLDYgKzkzMCw4IEBAIHZvaWQgX19pbml0IHN0YXJ0X3hlbih1
bnNpZ25lZCBsb25nIGJvb3RfcGh5c19vZmZzZXQsDQo+PiANCj4+ICAgICBzZXR1cF92aXJ0X3Bh
Z2luZygpOw0KPj4gDQo+PiArICAgIHBjaV9pbml0KCk7DQo+IA0KPiBwY2lfaW5pdCBzaG91bGQg
cHJvYmFibHkgYmUgYW4gaW5pdGNhbGwuDQoNCk9LLg0KPiANCj4gDQo+PiAgICAgZG9faW5pdGNh
bGxzKCk7DQo+PiANCj4+ICAgICAvKg0KPj4gZGlmZiAtLWdpdCBhL3hlbi9pbmNsdWRlL2FzbS1h
cm0vZGV2aWNlLmggYi94ZW4vaW5jbHVkZS9hc20tYXJtL2RldmljZS5oDQo+PiBpbmRleCBlZTdj
ZmYyZDQ0Li4yOGY4MDQ5Y2ZkIDEwMDY0NA0KPj4gLS0tIGEveGVuL2luY2x1ZGUvYXNtLWFybS9k
ZXZpY2UuaA0KPj4gKysrIGIveGVuL2luY2x1ZGUvYXNtLWFybS9kZXZpY2UuaA0KPj4gQEAgLTQs
NiArNCw3IEBADQo+PiBlbnVtIGRldmljZV90eXBlDQo+PiB7DQo+PiAgICAgREVWX0RULA0KPj4g
KyAgICBERVZfUENJLA0KPj4gfTsNCj4+IA0KPj4gc3RydWN0IGRldl9hcmNoZGF0YSB7DQo+PiBA
QCAtMjUsMTUgKzI2LDE1IEBAIHR5cGVkZWYgc3RydWN0IGRldmljZSBkZXZpY2VfdDsNCj4+IA0K
Pj4gI2luY2x1ZGUgPHhlbi9kZXZpY2VfdHJlZS5oPg0KPj4gDQo+PiAtLyogVE9ETzogQ29ycmVj
dGx5IGltcGxlbWVudCBkZXZfaXNfcGNpIHdoZW4gUENJIGlzIHN1cHBvcnRlZCBvbiBBUk0gKi8N
Cj4+IC0jZGVmaW5lIGRldl9pc19wY2koZGV2KSAoKHZvaWQpKGRldiksIDApDQo+PiAtI2RlZmlu
ZSBkZXZfaXNfZHQoZGV2KSAgKChkZXYtPnR5cGUgPT0gREVWX0RUKQ0KPj4gKyNkZWZpbmUgZGV2
X2lzX3BjaShkZXYpIChkZXYtPnR5cGUgPT0gREVWX1BDSSkNCj4+ICsjZGVmaW5lIGRldl9pc19k
dChkZXYpICAoZGV2LT50eXBlID09IERFVl9EVCkNCj4+IA0KPj4gZW51bSBkZXZpY2VfY2xhc3MN
Cj4+IHsNCj4+ICAgICBERVZJQ0VfU0VSSUFMLA0KPj4gICAgIERFVklDRV9JT01NVSwNCj4+ICAg
ICBERVZJQ0VfR0lDLA0KPj4gKyAgICBERVZJQ0VfUENJLA0KPj4gICAgIC8qIFVzZSBmb3IgZXJy
b3IgKi8NCj4+ICAgICBERVZJQ0VfVU5LTk9XTiwNCj4+IH07DQo+PiBkaWZmIC0tZ2l0IGEveGVu
L2luY2x1ZGUvYXNtLWFybS9wY2kuaCBiL3hlbi9pbmNsdWRlL2FzbS1hcm0vcGNpLmgNCj4+IGlu
ZGV4IGRlMTMzNTlmNjUuLjk0ZmQwMDM2MGEgMTAwNjQ0DQo+PiAtLS0gYS94ZW4vaW5jbHVkZS9h
c20tYXJtL3BjaS5oDQo+PiArKysgYi94ZW4vaW5jbHVkZS9hc20tYXJtL3BjaS5oDQo+PiBAQCAt
MSw3ICsxLDk4IEBADQo+PiAtI2lmbmRlZiBfX1g4Nl9QQ0lfSF9fDQo+PiAtI2RlZmluZSBfX1g4
Nl9QQ0lfSF9fDQo+PiArLyoNCj4+ICsgKiBDb3B5cmlnaHQgKEMpIDIwMjAgQXJtIEx0ZC4NCj4+
ICsgKg0KPj4gKyAqIEJhc2VkIG9uIExpbnV4IGRyaXZlcnMvcGNpL2VjYW0uYw0KPj4gKyAqIENv
cHlyaWdodCAyMDE2IEJyb2FkY29tLg0KPj4gKyAqDQo+PiArICogQmFzZWQgb24gTGludXggZHJp
dmVycy9wY2kvY29udHJvbGxlci9wY2ktaG9zdC1jb21tb24uYw0KPj4gKyAqIEJhc2VkIG9uIExp
bnV4IGRyaXZlcnMvcGNpL2NvbnRyb2xsZXIvcGNpLWhvc3QtZ2VuZXJpYy5jDQo+PiArICogQ29w
eXJpZ2h0IChDKSAyMDE0IEFSTSBMaW1pdGVkIFdpbGwgRGVhY29uIDx3aWxsLmRlYWNvbkBhcm0u
Y29tPg0KPj4gKyAqDQo+PiArICogVGhpcyBwcm9ncmFtIGlzIGZyZWUgc29mdHdhcmU7IHlvdSBj
YW4gcmVkaXN0cmlidXRlIGl0IGFuZC9vciBtb2RpZnkNCj4+ICsgKiBpdCB1bmRlciB0aGUgdGVy
bXMgb2YgdGhlIEdOVSBHZW5lcmFsIFB1YmxpYyBMaWNlbnNlIHZlcnNpb24gMiBhcw0KPj4gKyAq
IHB1Ymxpc2hlZCBieSB0aGUgRnJlZSBTb2Z0d2FyZSBGb3VuZGF0aW9uLg0KPj4gKyAqDQo+PiAr
ICogVGhpcyBwcm9ncmFtIGlzIGRpc3RyaWJ1dGVkIGluIHRoZSBob3BlIHRoYXQgaXQgd2lsbCBi
ZSB1c2VmdWwsDQo+PiArICogYnV0IFdJVEhPVVQgQU5ZIFdBUlJBTlRZOyB3aXRob3V0IGV2ZW4g
dGhlIGltcGxpZWQgd2FycmFudHkgb2YNCj4+ICsgKiBNRVJDSEFOVEFCSUxJVFkgb3IgRklUTkVT
UyBGT1IgQSBQQVJUSUNVTEFSIFBVUlBPU0UuICBTZWUgdGhlDQo+PiArICogR05VIEdlbmVyYWwg
UHVibGljIExpY2Vuc2UgZm9yIG1vcmUgZGV0YWlscy4NCj4+ICsgKg0KPj4gKyAqIFlvdSBzaG91
bGQgaGF2ZSByZWNlaXZlZCBhIGNvcHkgb2YgdGhlIEdOVSBHZW5lcmFsIFB1YmxpYyBMaWNlbnNl
DQo+PiArICogYWxvbmcgd2l0aCB0aGlzIHByb2dyYW0uICBJZiBub3QsIHNlZSA8aHR0cDovL3d3
dy5nbnUub3JnL2xpY2Vuc2VzLz4uDQo+PiArICovDQo+PiANCj4+ICsjaWZuZGVmIF9fQVJNX1BD
SV9IX18NCj4+ICsjZGVmaW5lIF9fQVJNX1BDSV9IX18NCj4+ICsNCj4+ICsjaW5jbHVkZSA8eGVu
L3BjaS5oPg0KPj4gKyNpbmNsdWRlIDx4ZW4vZGV2aWNlX3RyZWUuaD4NCj4+ICsjaW5jbHVkZSA8
YXNtL2RldmljZS5oPg0KPj4gKw0KPj4gKyNpZmRlZiBDT05GSUdfQVJNX1BDSQ0KPj4gKw0KPj4g
Ky8qIEFyY2ggcGNpIGRldiBzdHJ1Y3QgKi8NCj4+IHN0cnVjdCBhcmNoX3BjaV9kZXYgew0KPj4g
KyAgICBzdHJ1Y3QgZGV2aWNlIGRldjsNCj4+ICt9Ow0KPiANCj4gQXJlIHlvdSBhY3R1YWxseSB1
c2luZyBzdHJ1Y3QgZGV2aWNlIGluIHN0cnVjdCBhcmNoX3BjaV9kZXY/DQo+IHN0cnVjdCBkZXZp
Y2UgaXMgYWxyZWFkeSBwYXJ0IG9mIHN0cnVjdCBkdF9kZXZpY2Vfbm9kZSBhbmQgYSBwb2ludGVy
IHRvDQo+IGl0IGlzIHN0b3JlZCBpbiBicmlkZ2UtPmR0X25vZGUuDQoNCldpbGwgYmUgdXNpbmcg
dGhpcyBnb2luZyBmb3J3YXJkIG9uY2Ugd2UgaGF2ZSBmdWxsIFBDSSBwYXNzdGhyb3VnaCBzdXBw
b3J0LiANCj4gDQo+IA0KPj4gKyNkZWZpbmUgUFJJX3BjaSAiJTA0eDolMDJ4OiUwMnguJXUiDQo+
PiArI2RlZmluZSBwY2lfdG9fZGV2KHBjaWRldikgKCYocGNpZGV2KS0+YXJjaC5kZXYpDQo+PiAr
DQo+PiArLyoNCj4+ICsgKiBzdHJ1Y3QgdG8gaG9sZCB0aGUgbWFwcGluZ3Mgb2YgYSBjb25maWcg
c3BhY2Ugd2luZG93LiBUaGlzDQo+PiArICogaXMgZXhwZWN0ZWQgdG8gYmUgdXNlZCBhcyBzeXNk
YXRhIGZvciBQQ0kgY29udHJvbGxlcnMgdGhhdA0KPj4gKyAqIHVzZSBFQ0FNLg0KPj4gKyAqLw0K
Pj4gK3N0cnVjdCBwY2lfY29uZmlnX3dpbmRvdyB7DQo+PiArICAgIHBhZGRyX3QgICAgIHBoeXNf
YWRkcjsNCj4+ICsgICAgcGFkZHJfdCAgICAgc2l6ZTsNCj4+ICsgICAgdWludDhfdCAgICAgYnVz
bl9zdGFydDsNCj4+ICsgICAgdWludDhfdCAgICAgYnVzbl9lbmQ7DQo+PiArICAgIHN0cnVjdCBw
Y2lfZWNhbV9vcHMgICAgICpvcHM7DQo+PiArICAgIHZvaWQgX19pb21lbSAgICAgICAgKndpbjsN
Cj4+ICt9Ow0KPj4gKw0KPj4gKy8qIEZvcndhcmQgZGVjbGFyYXRpb24gYXMgcGNpX2hvc3RfYnJp
ZGdlIGFuZCBwY2lfb3BzIGRlcGVuZCBvbiBlYWNoIG90aGVyLiAqLw0KPj4gK3N0cnVjdCBwY2lf
aG9zdF9icmlkZ2U7DQo+PiArDQo+PiArc3RydWN0IHBjaV9vcHMgew0KPj4gKyAgICBpbnQgKCpy
ZWFkKShzdHJ1Y3QgcGNpX2hvc3RfYnJpZGdlICpicmlkZ2UsDQo+PiArICAgICAgICAgICAgICAg
ICAgICB1aW50MzJfdCBzYmRmLCBpbnQgd2hlcmUsIGludCBzaXplLCB1MzIgKnZhbCk7DQo+PiAr
ICAgIGludCAoKndyaXRlKShzdHJ1Y3QgcGNpX2hvc3RfYnJpZGdlICpicmlkZ2UsDQo+PiArICAg
ICAgICAgICAgICAgICAgICB1aW50MzJfdCBzYmRmLCBpbnQgd2hlcmUsIGludCBzaXplLCB1MzIg
dmFsKTsNCj4gDQo+IEknZCBwcmVmZXIgaWYgd2UgY291bGQgdXNlIGV4cGxpY2l0bHktc2l6ZWQg
aW50ZWdlcnMgZm9yICJ3aGVyZSIgYW5kDQo+ICJzaXplIiB0b28uIEFsc28sIHNob3VsZCB0aGV5
IGJlIHVuc2lnbmVkIHJhdGhlciB0aGFuIHNpZ25lZD8NCj4gDQo+IENhbiB3ZSB1c2UgcGNpX3Ni
ZGZfdCBmb3IgdGhlIHNiZGYgcGFyYW1ldGVyPw0KDQpPayB3aWxsIGZpeCBpbiBuZXh0IHZlcnNp
b24uDQo+IA0KPiANCj4+ICt9Ow0KPj4gKw0KPj4gKy8qDQo+PiArICogc3RydWN0IHRvIGhvbGQg
cGNpIG9wcyBhbmQgYnVzIHNoaWZ0IG9mIHRoZSBjb25maWcgd2luZG93DQo+PiArICogZm9yIGEg
UENJIGNvbnRyb2xsZXIuDQo+PiArICovDQo+PiArc3RydWN0IHBjaV9lY2FtX29wcyB7DQo+PiAr
ICAgIHVuc2lnbmVkIGludCAgICAgICAgICAgIGJ1c19zaGlmdDsNCj4+ICsgICAgc3RydWN0IHBj
aV9vcHMgICAgICAgICAgcGNpX29wczsNCj4+ICsgICAgaW50ICAgICAgICAgICAgICgqaW5pdCko
c3RydWN0IHBjaV9jb25maWdfd2luZG93ICopOw0KPj4gK307DQo+IA0KPiBBbHRob3VnaCBJIHJl
YWxpemUgdGhhdCB3ZSBhcmUgb25seSB0YXJnZXRpbmcgRUNBTSBub3csIGFuZCB0aGUNCj4gaW1w
bGVtZW50YXRpb24gaXMgYmFzZWQgb24gRUNBTSwgdGhlIGludGVyZmFjZSBkb2Vzbid0IHNlZW0g
dG8gaGF2ZQ0KPiBhbnl0aGluZyBFQ0FNLXNwZWNpZmljIGluIGl0LiBJIHdvdWxkIHJlbmFtZSBw
Y2lfZWNhbV9vcHMgdG8gc29tZXRoaW5nDQo+IGVsc2UsIG1heWJlIHNpbXBseSAicGNpX29wcyIu
DQoNCk9rIHdpbGwgaGF2ZSBhIGxvb2sgYW5kIG1vZGlmeSBhY2NvcmRpbmdseS4NCj4gDQo+IA0K
Pj4gKy8qDQo+PiArICogc3RydWN0IHRvIGhvbGQgcGNpIGhvc3QgYnJpZGdlIGluZm9ybWF0aW9u
DQo+PiArICogZm9yIGEgUENJIGNvbnRyb2xsZXIuDQo+PiArICovDQo+PiArc3RydWN0IHBjaV9o
b3N0X2JyaWRnZSB7DQo+PiArICAgIHN0cnVjdCBkdF9kZXZpY2Vfbm9kZSAqZHRfbm9kZTsgIC8q
IFBvaW50ZXIgdG8gdGhlIGFzc29jaWF0ZWQgRFQgbm9kZSAqLw0KPj4gKyAgICBzdHJ1Y3QgbGlz
dF9oZWFkIG5vZGU7ICAgICAgICAgICAvKiBOb2RlIGluIGxpc3Qgb2YgaG9zdCBicmlkZ2VzICov
DQo+PiArICAgIHVpbnQxNl90IHNlZ21lbnQ7ICAgICAgICAgICAgICAgIC8qIFNlZ21lbnQgbnVt
YmVyICovDQo+PiArICAgIHZvaWQgKnN5c2RhdGE7ICAgICAgICAgICAgICAgICAgIC8qIFBvaW50
ZXIgdG8gdGhlIGNvbmZpZyBzcGFjZSB3aW5kb3cqLw0KPj4gKyAgICBjb25zdCBzdHJ1Y3QgcGNp
X29wcyAqb3BzOw0KPj4gfTsNCj4+IA0KPj4gLSNlbmRpZiAvKiBfX1g4Nl9QQ0lfSF9fICovDQo+
PiArc3RydWN0IHBjaV9ob3N0X2JyaWRnZSAqcGNpX2ZpbmRfaG9zdF9icmlkZ2UodWludDE2X3Qg
c2VnbWVudCwgdWludDhfdCBidXMpOw0KPj4gKw0KPj4gK2ludCBwY2lfaG9zdF9jb21tb25fcHJv
YmUoc3RydWN0IGR0X2RldmljZV9ub2RlICpkZXYsDQo+PiArICAgICAgICAgICAgICAgIHN0cnVj
dCBwY2lfZWNhbV9vcHMgKm9wcyk7DQo+PiArDQo+PiArdm9pZCBwY2lfaW5pdCh2b2lkKTsNCj4+
ICsNCj4+ICsjZWxzZSAgIC8qIUNPTkZJR19BUk1fUENJKi8NCj4+ICtzdHJ1Y3QgYXJjaF9wY2lf
ZGV2IHsgfTsNCj4+ICtzdGF0aWMgaW5saW5lIHZvaWQgIHBjaV9pbml0KHZvaWQpIHsgfQ0KPj4g
KyNlbmRpZiAgLyohQ09ORklHX0FSTV9QQ0kqLw0KPj4gKyNlbmRpZiAvKiBfX0FSTV9QQ0lfSF9f
ICovDQo+PiAtLSANCj4+IDIuMTcuMQ0KDQo=


From xen-devel-bounces@lists.xenproject.org Mon Jul 27 13:40:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jul 2020 13:40:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k03Mk-0003Ox-PW; Mon, 27 Jul 2020 13:40:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=s7zT=BG=arm.com=rahul.singh@srs-us1.protection.inumbo.net>)
 id 1k03Mk-0003Os-0G
 for xen-devel@lists.xenproject.org; Mon, 27 Jul 2020 13:40:30 +0000
X-Inumbo-ID: baeea3ec-d00e-11ea-8abf-bc764e2007e4
Received: from EUR02-AM5-obe.outbound.protection.outlook.com (unknown
 [2a01:111:f400:fe07::600])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id baeea3ec-d00e-11ea-8abf-bc764e2007e4;
 Mon, 27 Jul 2020 13:40:27 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com; 
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=me9f55QJ5Big4uAJQxff/iauv1KmZ5JwLDlH2uwO4OQ=;
 b=FxZ97L7rGRsPyBx1l9GLk2e0iBvbDQx68yH8v3a0Ivp4E75aw8ab2ubjN/RucAX1h4jvQmcAwghD7J7JUP98beJbXcPFgrD+mZ/D6VvlKzko7PQtSq4HMb4BbwDjHlo4WiyUn982YN0tYhqnpLlM8Ph4zN9Hj0val3JbC1TPCNc=
Received: from AM5PR0101CA0025.eurprd01.prod.exchangelabs.com
 (2603:10a6:206:16::38) by AM5PR0801MB1891.eurprd08.prod.outlook.com
 (2603:10a6:203:4a::7) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3216.24; Mon, 27 Jul
 2020 13:40:25 +0000
Received: from AM5EUR03FT017.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:206:16:cafe::14) by AM5PR0101CA0025.outlook.office365.com
 (2603:10a6:206:16::38) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3216.20 via Frontend
 Transport; Mon, 27 Jul 2020 13:40:25 +0000
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org;
 dmarc=bestguesspass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT017.mail.protection.outlook.com (10.152.16.89) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3216.10 via Frontend Transport; Mon, 27 Jul 2020 13:40:25 +0000
Received: ("Tessian outbound 73b502bf693a:v62");
 Mon, 27 Jul 2020 13:40:24 +0000
X-CheckRecipientChecked: true
X-CR-MTA-CID: 7aa61f53010de262
X-CR-MTA-TID: 64aa7808
Received: from e1b23fbe230e.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 13F85376-6254-42C5-AF53-C8C1FF9580B3.1; 
 Mon, 27 Jul 2020 13:40:19 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id e1b23fbe230e.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Mon, 27 Jul 2020 13:40:19 +0000
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=WRDxwMmZTDpViii3mB9eiZYu9nK4zv0sX4U2IfjLcN82NVoAxVYbe+1k2veZhiAcllkcYIkya/rJMccXfVDMk0CPpp1URKSQ5VBphniUnct03uyjyGtlWv552V4kw+gwfTqA/2NABY1nQw1izsthcT+UjJAwmCqR6k5BxwXjFCbQtbJjuX36GNjZ3n6+h3Ju8X1NcZYim8AbNWi6vMUmEaHxCR5lb7xkSkRpnq6KdW3dgne/P7GZZk3KyTLW2xmivTqCjJlLrTw43edDOT0KDJALTclBT7jDILdY7DgwgYGtIcmpQEvCSf5pPG+rCulyhfHxgDfH70H6ARU8aB3AjA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; 
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=me9f55QJ5Big4uAJQxff/iauv1KmZ5JwLDlH2uwO4OQ=;
 b=QL+jandavMoqvSz23kwnD5uQ5PEJfuEppMn/Wbxl5OgHG2e2h+7iAaPnVQTkRYfF1Hlm+eB/lNs5NrsqVwXgSbpDX9IMc69Retpjr04po6OZMkJkxorvc1W6EyBUF3JRO2KNQkV0U76JdjGYovxRfxj7EWp9Wezt4KjYVhWFB9Xzg80dor5Yzd8kR0PwAfv102NOepyIJBYOtOkMmSdVVXk2FzBO/8Ti1I7r01Rea3NqIRn3gF0pNro5Xzq4fpCh2So23DeV2W87cVI+iahZoHPseozLryXqdcmnOhvPcg4XWyBNqZNXS4JXJfpAGKaU6w8DeYCArHZOy0Erpl9+zA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com; 
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=me9f55QJ5Big4uAJQxff/iauv1KmZ5JwLDlH2uwO4OQ=;
 b=FxZ97L7rGRsPyBx1l9GLk2e0iBvbDQx68yH8v3a0Ivp4E75aw8ab2ubjN/RucAX1h4jvQmcAwghD7J7JUP98beJbXcPFgrD+mZ/D6VvlKzko7PQtSq4HMb4BbwDjHlo4WiyUn982YN0tYhqnpLlM8Ph4zN9Hj0val3JbC1TPCNc=
Received: from AM6PR08MB3560.eurprd08.prod.outlook.com (2603:10a6:20b:4c::19)
 by AM7PR08MB5317.eurprd08.prod.outlook.com (2603:10a6:20b:101::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3216.24; Mon, 27 Jul
 2020 13:40:18 +0000
Received: from AM6PR08MB3560.eurprd08.prod.outlook.com
 ([fe80::e891:3b33:766:cad5]) by AM6PR08MB3560.eurprd08.prod.outlook.com
 ([fe80::e891:3b33:766:cad5%6]) with mapi id 15.20.3216.033; Mon, 27 Jul 2020
 13:40:18 +0000
From: Rahul Singh <Rahul.Singh@arm.com>
To: Stefano Stabellini <sstabellini@kernel.org>
Subject: Re: [RFC PATCH v1 4/4] arm/libxl: Emulated PCI device tree node in
 libxl
Thread-Topic: [RFC PATCH v1 4/4] arm/libxl: Emulated PCI device tree node in
 libxl
Thread-Index: AQHWYPbRfH3razoe7ECmi8NopNKgm6kV0zWAgAWhyoA=
Date: Mon, 27 Jul 2020 13:40:17 +0000
Message-ID: <7D524405-0853-4BC3-8C2D-5830AE097B32@arm.com>
References: <cover.1595511416.git.rahul.singh@arm.com>
 <23346b24762467bd246b91b05f7b0fc1719282f6.1595511416.git.rahul.singh@arm.com>
 <alpine.DEB.2.21.2007231505170.17562@sstabellini-ThinkPad-T480s>
In-Reply-To: <alpine.DEB.2.21.2007231505170.17562@sstabellini-ThinkPad-T480s>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
Authentication-Results-Original: kernel.org; dkim=none (message not signed)
 header.d=none;kernel.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [86.26.38.125]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: dcf70f0d-a1d8-46e5-f65e-08d832329de1
x-ms-traffictypediagnostic: AM7PR08MB5317:|AM5PR0801MB1891:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS: <AM5PR0801MB18914B1EC3CBA5B2A012C290FC720@AM5PR0801MB1891.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:9508;OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Forefront-Antispam-Report-Untrusted: CIP:255.255.255.255; CTRY:; LANG:en;
 SCL:1; SRV:; IPV:NLI; SFV:NSPM; H:AM6PR08MB3560.eurprd08.prod.outlook.com;
 PTR:; CAT:NONE; SFTY:;
 SFS:(4636009)(39860400002)(396003)(376002)(346002)(136003)(366004)(6486002)(53546011)(6506007)(55236004)(86362001)(33656002)(316002)(2616005)(36756003)(54906003)(4326008)(71200400001)(5660300002)(6916009)(30864003)(83380400001)(66946007)(2906002)(66476007)(66556008)(64756008)(91956017)(66446008)(8936002)(26005)(6512007)(186003)(8676002)(478600001)(76116006);
 DIR:OUT; SFP:1101; 
Content-Type: text/plain; charset="utf-8"
Content-ID: <EE668B22B821404C940745ED17741DD5@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM7PR08MB5317
Original-Authentication-Results: kernel.org; dkim=none (message not signed)
 header.d=none;kernel.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped: AM5EUR03FT017.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs: 5770b082-c289-40e3-d323-08d8323299b9
X-Microsoft-Antispam: BCL:0;
X-Forefront-Antispam-Report: CIP:63.35.35.123; CTRY:IE; LANG:en; SCL:1; SRV:;
 IPV:CAL; SFV:NSPM; H:64aa7808-outbound-1.mta.getcheckrecipient.com;
 PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com; CAT:NONE; SFTY:;
 SFS:(4636009)(376002)(346002)(136003)(39860400002)(396003)(46966005)(36906005)(54906003)(6512007)(2616005)(316002)(186003)(8676002)(107886003)(82740400003)(82310400002)(336012)(86362001)(47076004)(356005)(6506007)(6486002)(83380400001)(26005)(33656002)(478600001)(2906002)(5660300002)(6862004)(53546011)(36756003)(81166007)(70586007)(70206006)(30864003)(8936002)(4326008);
 DIR:OUT; SFP:1101; 
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 27 Jul 2020 13:40:25.0696 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: dcf70f0d-a1d8-46e5-f65e-08d832329de1
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d; Ip=[63.35.35.123];
 Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource: AM5EUR03FT017.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM5PR0801MB1891
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Julien Grall <julien@xen.org>, Wei Liu <wl@xen.org>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Anthony PERARD <anthony.perard@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 nd <nd@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

DQoNCj4gT24gMjQgSnVsIDIwMjAsIGF0IDEyOjM5IGFtLCBTdGVmYW5vIFN0YWJlbGxpbmkgPHNz
dGFiZWxsaW5pQGtlcm5lbC5vcmc+IHdyb3RlOg0KPiANCj4gT24gVGh1LCAyMyBKdWwgMjAyMCwg
UmFodWwgU2luZ2ggd3JvdGU6DQo+PiBsaWJ4bCB3aWxsIGNyZWF0ZSBhbiBlbXVsYXRlZCBQQ0kg
ZGV2aWNlIHRyZWUgbm9kZSBpbiB0aGUNCj4+IGRldmljZSB0cmVlIHRvIGVuYWJsZSB0aGUgZ3Vl
c3QgT1MgdG8gZGlzY292ZXIgdGhlIHZpcnR1YWwNCj4+IFBDSSBkdXJpbmcgZ3Vlc3QgYm9vdC4N
Cj4+IA0KPj4gV2UgaW50cm9kdWNlZCB0aGUgbmV3IGNvbmZpZyBvcHRpb24gW3ZwY2k9ImVjYW0i
XSBmb3IgZ3Vlc3RzLg0KPj4gV2hlbiB0aGlzIGNvbmZpZyBvcHRpb24gaXMgZW5hYmxlZCBpbiBh
IGd1ZXN0IGNvbmZpZ3VyYXRpb24sDQo+PiBhIFBDSSBkZXZpY2UgdHJlZSBub2RlIHdpbGwgYmUg
Y3JlYXRlZCBpbiB0aGUgZ3Vlc3QgZGV2aWNlIHRyZWUuDQo+PiANCj4+IEEgbmV3IGFyZWEgaGFz
IGJlZW4gcmVzZXJ2ZWQgaW4gdGhlIGFybSBndWVzdCBwaHlzaWNhbCBtYXAgYXQNCj4+IHdoaWNo
IHRoZSBWUENJIGJ1cyBpcyBkZWNsYXJlZCBpbiB0aGUgZGV2aWNlIHRyZWUgKHJlZyBhbmQgcmFu
Z2VzDQo+PiBwYXJhbWV0ZXJzIG9mIHRoZSBub2RlKS4NCj4+IA0KPj4gQ2hhbmdlLUlkOiBJNDdk
MzljYmU4MTg0ZGUyMjI2ZjE3NDY0NGRmOTc5MGVjYzYxMGNjZA0KPiANCj4gU2FtZSBxdWVzdGlv
bg0KDQpJIHdpbGwgcmVtb3ZlIHRoZSBjaGFuZ2UtaWQgaW4gdGhlIG5leHQgdmVyc2lvbi4NCg0K
PiANCj4gDQo+PiBTaWduZWQtb2ZmLWJ5OiBSYWh1bCBTaW5naCA8cmFodWwuc2luZ2hAYXJtLmNv
bT4NCj4+IC0tLQ0KPj4gdG9vbHMvbGlieGwvbGlieGxfYXJtLmMgICAgICAgfCAyMDAgKysrKysr
KysrKysrKysrKysrKysrKysrKysrKysrKysrKw0KPj4gdG9vbHMvbGlieGwvbGlieGxfdHlwZXMu
aWRsICAgfCAgIDYgKw0KPj4gdG9vbHMveGwveGxfcGFyc2UuYyAgICAgICAgICAgfCAgIDcgKysN
Cj4+IHhlbi9pbmNsdWRlL3B1YmxpYy9hcmNoLWFybS5oIHwgIDI4ICsrKysrDQo+PiA0IGZpbGVz
IGNoYW5nZWQsIDI0MSBpbnNlcnRpb25zKCspDQo+PiANCj4+IGRpZmYgLS1naXQgYS90b29scy9s
aWJ4bC9saWJ4bF9hcm0uYyBiL3Rvb2xzL2xpYnhsL2xpYnhsX2FybS5jDQo+PiBpbmRleCAzNGY4
YTI5MDU2Li44NDU2OGU5ZGM5IDEwMDY0NA0KPj4gLS0tIGEvdG9vbHMvbGlieGwvbGlieGxfYXJt
LmMNCj4+ICsrKyBiL3Rvb2xzL2xpYnhsL2xpYnhsX2FybS5jDQo+PiBAQCAtMjY4LDYgKzI2OCwx
MzAgQEAgc3RhdGljIGludCBmZHRfcHJvcGVydHlfcmVncyhsaWJ4bF9fZ2MgKmdjLCB2b2lkICpm
ZHQsDQo+PiAgICAgcmV0dXJuIGZkdF9wcm9wZXJ0eShmZHQsICJyZWciLCByZWdzLCBzaXplb2Yo
cmVncykpOw0KPj4gfQ0KPj4gDQo+PiArc3RhdGljIGludCBmZHRfcHJvcGVydHlfdnBjaV9idXNf
cmFuZ2UobGlieGxfX2djICpnYywgdm9pZCAqZmR0LA0KPj4gKyAgICAgICAgdW5zaWduZWQgbnVt
X2NlbGxzLCAuLi4pDQo+PiArew0KPj4gKyAgICB1aW50MzJfdCBidXNfcmFuZ2VbbnVtX2NlbGxz
XTsNCj4+ICsgICAgYmUzMiAqY2VsbHMgPSAmYnVzX3JhbmdlWzBdOw0KPj4gKyAgICBpbnQgaTsN
Cj4+ICsgICAgdmFfbGlzdCBhcDsNCj4+ICsgICAgdWludDMyX3QgYXJnOw0KPj4gKw0KPj4gKyAg
ICB2YV9zdGFydChhcCwgbnVtX2NlbGxzKTsNCj4+ICsgICAgZm9yIChpID0gMCA7IGkgPCBudW1f
Y2VsbHM7IGkrKykgew0KPj4gKyAgICAgICAgYXJnID0gdmFfYXJnKGFwLCB1aW50MzJfdCk7DQo+
PiArICAgICAgICBzZXRfY2VsbCgmY2VsbHMsIDEsIGFyZyk7DQo+PiArICAgIH0NCj4+ICsgICAg
dmFfZW5kKGFwKTsNCj4+ICsNCj4+ICsgICAgcmV0dXJuIGZkdF9wcm9wZXJ0eShmZHQsICJidXMt
cmFuZ2UiLCBidXNfcmFuZ2UsIHNpemVvZihidXNfcmFuZ2UpKTsNCj4+ICt9DQo+PiArDQo+PiAr
c3RhdGljIGludCBmZHRfcHJvcGVydHlfdnBjaV9pbnRlcnJ1cHRfbWFwX21hc2sobGlieGxfX2dj
ICpnYywgdm9pZCAqZmR0LA0KPj4gKyAgICAgICAgdW5zaWduZWQgbnVtX2NlbGxzLCAuLi4pDQo+
PiArew0KPj4gKyAgICB1aW50MzJfdCBpbnRlcnJ1cHRfbWFwX21hc2tbbnVtX2NlbGxzXTsNCj4+
ICsgICAgYmUzMiAqY2VsbHMgPSAmaW50ZXJydXB0X21hcF9tYXNrWzBdOw0KPj4gKyAgICBpbnQg
aTsNCj4+ICsgICAgdmFfbGlzdCBhcDsNCj4+ICsgICAgdWludDMyX3QgYXJnOw0KPj4gKw0KPj4g
KyAgICB2YV9zdGFydChhcCwgbnVtX2NlbGxzKTsNCj4+ICsgICAgZm9yIChpID0gMCA7IGkgPCBu
dW1fY2VsbHM7IGkrKykgew0KPj4gKyAgICAgICAgYXJnID0gdmFfYXJnKGFwLCB1aW50MzJfdCk7
DQo+PiArICAgICAgICBzZXRfY2VsbCgmY2VsbHMsIDEsIGFyZyk7DQo+PiArICAgIH0NCj4+ICsg
ICAgdmFfZW5kKGFwKTsNCj4+ICsNCj4+ICsgICAgcmV0dXJuIGZkdF9wcm9wZXJ0eShmZHQsICJp
bnRlcnJ1cHQtbWFwLW1hc2siLCBpbnRlcnJ1cHRfbWFwX21hc2ssDQo+PiArICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICBzaXplb2YoaW50ZXJydXB0X21hcF9tYXNrKSk7DQo+PiArfQ0K
Pj4gKw0KPj4gK3N0YXRpYyBpbnQgZmR0X3Byb3BlcnR5X3ZwY2lfcmFuZ2VzKGxpYnhsX19nYyAq
Z2MsIHZvaWQgKmZkdCwNCj4+ICsgICAgICAgIHVuc2lnbmVkIHZwY2lfYWRkcl9jZWxscywNCj4+
ICsgICAgICAgIHVuc2lnbmVkIGNwdV9hZGRyX2NlbGxzLA0KPj4gKyAgICAgICAgdW5zaWduZWQg
dnBjaV9zaXplX2NlbGxzLA0KPj4gKyAgICAgICAgdW5zaWduZWQgbnVtX3JlZ3MsIC4uLikNCj4+
ICt7DQo+PiArICAgIHVpbnQzMl90IHJlZ3NbbnVtX3JlZ3MqKHZwY2lfYWRkcl9jZWxscytjcHVf
YWRkcl9jZWxscyt2cGNpX3NpemVfY2VsbHMpXTsNCj4+ICsgICAgYmUzMiAqY2VsbHMgPSAmcmVn
c1swXTsNCj4+ICsgICAgaW50IGk7DQo+PiArICAgIHZhX2xpc3QgYXA7DQo+PiArICAgIHVpbnQ2
NF90IGFyZzsNCj4+ICsNCj4+ICsgICAgdmFfc3RhcnQoYXAsIG51bV9yZWdzKTsNCj4+ICsgICAg
Zm9yIChpID0gMCA7IGkgPCBudW1fcmVnczsgaSsrKSB7DQo+PiArICAgICAgICAvKiBTZXQgdGhl
IG1lbW9yeSBiaXQgZmllbGQgKi8NCj4+ICsgICAgICAgIGFyZyA9IHZhX2FyZyhhcCwgdWludDY0
X3QpOw0KPj4gKyAgICAgICAgc2V0X2NlbGwoJmNlbGxzLCAxLCBhcmcpOw0KPj4gKw0KPj4gKyAg
ICAgICAgLyogU2V0IHRoZSB2cGNpIGJ1cyBhZGRyZXNzICovDQo+PiArICAgICAgICBhcmcgPSB2
cGNpX2FkZHJfY2VsbHMgPyB2YV9hcmcoYXAsIHVpbnQ2NF90KSA6IDA7DQo+PiArICAgICAgICBz
ZXRfY2VsbCgmY2VsbHMsIDIgLCBhcmcpOw0KPj4gKw0KPj4gKyAgICAgICAgLyogU2V0IHRoZSBj
cHUgYnVzIGFkZHJlc3Mgd2hlcmUgdnBjaSBhZGRyZXNzIGlzIG1hcHBlZCAqLw0KPj4gKyAgICAg
ICAgYXJnID0gY3B1X2FkZHJfY2VsbHMgPyB2YV9hcmcoYXAsIHVpbnQ2NF90KSA6IDA7DQo+PiAr
ICAgICAgICBzZXRfY2VsbCgmY2VsbHMsIGNwdV9hZGRyX2NlbGxzLCBhcmcpOw0KPj4gKw0KPj4g
KyAgICAgICAgLyogU2V0IHRoZSB2cGNpIHNpemUgcmVxdWVzdGVkICovDQo+PiArICAgICAgICBh
cmcgPSB2cGNpX3NpemVfY2VsbHMgPyB2YV9hcmcoYXAsIHVpbnQ2NF90KSA6IDA7DQo+PiArICAg
ICAgICBzZXRfY2VsbCgmY2VsbHMsIHZwY2lfc2l6ZV9jZWxscyxhcmcpOw0KPj4gKyAgICB9DQo+
PiArICAgIHZhX2VuZChhcCk7DQo+PiArDQo+PiArICAgIHJldHVybiBmZHRfcHJvcGVydHkoZmR0
LCAicmFuZ2VzIiwgcmVncywgc2l6ZW9mKHJlZ3MpKTsNCj4+ICt9DQo+PiArDQo+PiArc3RhdGlj
IGludCBmZHRfcHJvcGVydHlfdnBjaV9pbnRlcnJ1cHRfbWFwKGxpYnhsX19nYyAqZ2MsIHZvaWQg
KmZkdCwNCj4+ICsgICAgICAgIHVuc2lnbmVkIGNoaWxkX3VuaXRfYWRkcl9jZWxscywNCj4+ICsg
ICAgICAgIHVuc2lnbmVkIGNoaWxkX2ludGVycnVwdF9zcGVjaWZpZXJfY2VsbHMsDQo+PiArICAg
ICAgICB1bnNpZ25lZCBwYXJlbnRfdW5pdF9hZGRyX2NlbGxzLA0KPj4gKyAgICAgICAgdW5zaWdu
ZWQgcGFyZW50X2ludGVycnVwdF9zcGVjaWZpZXJfY2VsbHMsDQo+PiArICAgICAgICB1bnNpZ25l
ZCBudW1fcmVncywgLi4uKQ0KPj4gK3sNCj4+ICsgICAgdWludDMyX3QgaW50ZXJydXB0X21hcFtu
dW1fcmVncyAqIChjaGlsZF91bml0X2FkZHJfY2VsbHMgKw0KPj4gKyAgICAgICAgICAgIGNoaWxk
X2ludGVycnVwdF9zcGVjaWZpZXJfY2VsbHMgKyBwYXJlbnRfdW5pdF9hZGRyX2NlbGxzDQo+PiAr
ICAgICAgICAgICAgKyBwYXJlbnRfaW50ZXJydXB0X3NwZWNpZmllcl9jZWxscyArIDEpXTsNCj4+
ICsgICAgYmUzMiAqY2VsbHMgPSAmaW50ZXJydXB0X21hcFswXTsNCj4+ICsgICAgaW50IGksajsN
Cj4+ICsgICAgdmFfbGlzdCBhcDsNCj4+ICsgICAgdWludDY0X3QgYXJnOw0KPj4gKw0KPj4gKyAg
ICB2YV9zdGFydChhcCwgbnVtX3JlZ3MpOw0KPj4gKyAgICBmb3IgKGkgPSAwIDsgaSA8IG51bV9y
ZWdzOyBpKyspIHsNCj4+ICsgICAgICAgIC8qIFNldCB0aGUgY2hpbGQgdW5pdCBhZGRyZXNzKi8N
Cj4+ICsgICAgICAgIGZvciAoaiA9IDAgOyBqIDwgY2hpbGRfdW5pdF9hZGRyX2NlbGxzOyBqKysp
IHsNCj4+ICsgICAgICAgICAgICBhcmcgPSB2YV9hcmcoYXAsIHVpbnQzMl90KTsNCj4+ICsgICAg
ICAgICAgICBzZXRfY2VsbCgmY2VsbHMsIDEgLCBhcmcpOw0KPj4gKyAgICAgICAgfQ0KPj4gKw0K
Pj4gKyAgICAgICAgLyogU2V0IHRoZSBjaGlsZCBpbnRlcnJ1cHQgc3BlY2lmaWVyKi8NCj4+ICsg
ICAgICAgIGZvciAoaiA9IDAgOyBqIDwgY2hpbGRfaW50ZXJydXB0X3NwZWNpZmllcl9jZWxscyA7
IGorKykgew0KPj4gKyAgICAgICAgICAgIGFyZyA9IHZhX2FyZyhhcCwgdWludDMyX3QpOw0KPj4g
KyAgICAgICAgICAgIHNldF9jZWxsKCZjZWxscywgMSAsIGFyZyk7DQo+PiArICAgICAgICB9DQo+
PiArDQo+PiArICAgICAgICAvKiBTZXQgdGhlIGludGVycnVwdC1wYXJlbnQqLw0KPj4gKyAgICAg
ICAgc2V0X2NlbGwoJmNlbGxzLCAxICwgR1VFU1RfUEhBTkRMRV9HSUMpOw0KPj4gKw0KPj4gKyAg
ICAgICAgLyogU2V0IHRoZSBwYXJlbnQgdW5pdCBhZGRyZXNzKi8NCj4+ICsgICAgICAgIGZvciAo
aiA9IDAgOyBqIDwgcGFyZW50X3VuaXRfYWRkcl9jZWxsczsgaisrKSB7DQo+PiArICAgICAgICAg
ICAgYXJnID0gdmFfYXJnKGFwLCB1aW50MzJfdCk7DQo+PiArICAgICAgICAgICAgc2V0X2NlbGwo
JmNlbGxzLCAxICwgYXJnKTsNCj4+ICsgICAgICAgIH0NCj4+ICsNCj4+ICsgICAgICAgIC8qIFNl
dCB0aGUgcGFyZW50IGludGVycnVwdCBzcGVjaWZpZXIqLw0KPj4gKyAgICAgICAgZm9yIChqID0g
MCA7IGogPCBwYXJlbnRfaW50ZXJydXB0X3NwZWNpZmllcl9jZWxsczsgaisrKSB7DQo+PiArICAg
ICAgICAgICAgYXJnID0gdmFfYXJnKGFwLCB1aW50MzJfdCk7DQo+PiArICAgICAgICAgICAgc2V0
X2NlbGwoJmNlbGxzLCAxICwgYXJnKTsNCj4+ICsgICAgICAgIH0NCj4+ICsgICAgfQ0KPj4gKyAg
ICB2YV9lbmQoYXApOw0KPj4gKw0KPj4gKyAgICByZXR1cm4gZmR0X3Byb3BlcnR5KGZkdCwgImlu
dGVycnVwdC1tYXAiLCBpbnRlcnJ1cHRfbWFwLA0KPj4gKyAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgc2l6ZW9mKGludGVycnVwdF9tYXApKTsNCj4+ICt9DQo+PiArDQo+PiBzdGF0aWMg
aW50IG1ha2Vfcm9vdF9wcm9wZXJ0aWVzKGxpYnhsX19nYyAqZ2MsDQo+PiAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgIGNvbnN0IGxpYnhsX3ZlcnNpb25faW5mbyAqdmVycywNCj4+ICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgdm9pZCAqZmR0KQ0KPj4gQEAgLTY1OSw2ICs3
ODMsNzkgQEAgc3RhdGljIGludCBtYWtlX3ZwbDAxMV91YXJ0X25vZGUobGlieGxfX2djICpnYywg
dm9pZCAqZmR0LA0KPj4gICAgIHJldHVybiAwOw0KPj4gfQ0KPj4gDQo+PiArc3RhdGljIGludCBt
YWtlX3ZwY2lfbm9kZShsaWJ4bF9fZ2MgKmdjLCB2b2lkICpmZHQsDQo+PiArICAgICAgICBjb25z
dCBzdHJ1Y3QgYXJjaF9pbmZvICphaW5mbywNCj4+ICsgICAgICAgIHN0cnVjdCB4Y19kb21faW1h
Z2UgKmRvbSkNCj4+ICt7DQo+PiArICAgIGludCByZXM7DQo+PiArICAgIGNvbnN0IHVpbnQ2NF90
IHZwY2lfZWNhbV9iYXNlID0gR1VFU1RfVlBDSV9FQ0FNX0JBU0U7DQo+PiArICAgIGNvbnN0IHVp
bnQ2NF90IHZwY2lfZWNhbV9zaXplID0gR1VFU1RfVlBDSV9FQ0FNX1NJWkU7DQo+PiArICAgIGNv
bnN0IGNoYXIgKm5hbWUgPSBHQ1NQUklOVEYoInBjaWVAJSJQUkl4NjQsIHZwY2lfZWNhbV9iYXNl
KTsNCj4+ICsNCj4+ICsgICAgcmVzID0gZmR0X2JlZ2luX25vZGUoZmR0LCBuYW1lKTsNCj4+ICsg
ICAgaWYgKHJlcykgcmV0dXJuIHJlczsNCj4+ICsNCj4+ICsgICAgcmVzID0gZmR0X3Byb3BlcnR5
X2NvbXBhdChnYywgZmR0LCAxLCAicGNpLWhvc3QtZWNhbS1nZW5lcmljIik7DQo+PiArICAgIGlm
IChyZXMpIHJldHVybiByZXM7DQo+PiArDQo+PiArICAgIHJlcyA9IGZkdF9wcm9wZXJ0eV9zdHJp
bmcoZmR0LCAiZGV2aWNlX3R5cGUiLCAicGNpIik7DQo+PiArICAgIGlmIChyZXMpIHJldHVybiBy
ZXM7DQo+PiArDQo+PiArICAgIHJlcyA9IGZkdF9wcm9wZXJ0eV9yZWdzKGdjLCBmZHQsIEdVRVNU
X1JPT1RfQUREUkVTU19DRUxMUywNCj4+ICsgICAgICAgICAgICBHVUVTVF9ST09UX1NJWkVfQ0VM
TFMsIDEsIHZwY2lfZWNhbV9iYXNlLCB2cGNpX2VjYW1fc2l6ZSk7DQo+PiArICAgIGlmIChyZXMp
IHJldHVybiByZXM7DQo+PiArDQo+PiArICAgIHJlcyA9IGZkdF9wcm9wZXJ0eV92cGNpX2J1c19y
YW5nZShnYywgZmR0LCAyLCAwLDI1NSk7DQo+PiArICAgIGlmIChyZXMpIHJldHVybiByZXM7DQo+
PiArDQo+PiArICAgIHJlcyA9IGZkdF9wcm9wZXJ0eV9jZWxsKGZkdCwgImxpbnV4LHBjaS1kb21h
aW4iLCAwKTsNCj4+ICsgICAgaWYgKHJlcykgcmV0dXJuIHJlczsNCj4+ICsNCj4+ICsgICAgcmVz
ID0gZmR0X3Byb3BlcnR5X2NlbGwoZmR0LCAiI2FkZHJlc3MtY2VsbHMiLCAzKTsNCj4+ICsgICAg
aWYgKHJlcykgcmV0dXJuIHJlczsNCj4+ICsNCj4+ICsgICAgcmVzID0gZmR0X3Byb3BlcnR5X2Nl
bGwoZmR0LCAiI3NpemUtY2VsbHMiLCAyKTsNCj4+ICsgICAgaWYgKHJlcykgcmV0dXJuIHJlczsN
Cj4+ICsNCj4+ICsgICAgcmVzID0gZmR0X3Byb3BlcnR5X2NlbGwoZmR0LCAiI2ludGVycnVwdC1j
ZWxscyIsIDEpOw0KPj4gKyAgICBpZiAocmVzKSByZXR1cm4gcmVzOw0KPj4gKw0KPj4gKyAgICBy
ZXMgPSBmZHRfcHJvcGVydHlfc3RyaW5nKGZkdCwgInN0YXR1cyIsICJva2F5Iik7DQo+PiArICAg
IGlmIChyZXMpIHJldHVybiByZXM7DQo+PiArDQo+PiArICAgIHJlcyA9IGZkdF9wcm9wZXJ0eV92
cGNpX3JhbmdlcyhnYywgZmR0LCBHVUVTVF9QQ0lfQUREUkVTU19DRUxMUywNCj4+ICsgICAgICAg
IEdVRVNUX1JPT1RfQUREUkVTU19DRUxMUywgR1VFU1RfUENJX1NJWkVfQ0VMTFMsDQo+PiArICAg
ICAgICAzLA0KPj4gKyAgICAgICAgR1VFU1RfVlBDSV9BRERSX1RZUEVfTUVNLCBHVUVTVF9WUENJ
X01FTV9QQ0lfQUREUiwNCj4+ICsgICAgICAgIEdVRVNUX1ZQQ0lfTUVNX0NQVV9BRERSLCBHVUVT
VF9WUENJX01FTV9TSVpFLA0KPj4gKyAgICAgICAgR1VFU1RfVlBDSV9BRERSX1RZUEVfUFJFRkVU
Q0hfTUVNLCBHVUVTVF9WUENJX1BSRUZFVENIX01FTV9QQ0lfQUREUiwNCj4+ICsgICAgICAgIEdV
RVNUX1ZQQ0lfUFJFRkVUQ0hfTUVNX0NQVV9BRERSLCBHVUVTVF9WUENJX1BSRUZFVENIX01FTV9T
SVpFLA0KPj4gKyAgICAgICAgR1VFU1RfVlBDSV9BRERSX1RZUEVfSU8sIEdVRVNUX1ZQQ0lfSU9f
UENJX0FERFIsDQo+PiArICAgICAgICBHVUVTVF9WUENJX0lPX0NQVV9BRERSLCBHVUVTVF9WUENJ
X0lPX1NJWkUpOw0KPj4gKyAgICBpZiAocmVzKSByZXR1cm4gcmVzOw0KPj4gKw0KPj4gKyAgICBy
ZXMgPSBmZHRfcHJvcGVydHlfdnBjaV9pbnRlcnJ1cHRfbWFwX21hc2soZ2MsIGZkdCwgNCwgMCwg
MCwgMCwgNyk7DQo+IA0KPiBpdCB3b3VsZCBtYWtlIHNlbnNlIHRvIHNlcGFyYXRlIG91dCBjaGls
ZF91bml0X2FkZHJfY2VsbHMgYW5kDQo+IGNoaWxkX2ludGVycnVwdF9zcGVjaWZpZXJfY2VsbHMg
aGVyZSBsaWtlIHdlIGRvIGJlbG93IHdpdGgNCj4gZmR0X3Byb3BlcnR5X3ZwY2lfaW50ZXJydXB0
X21hcA0KDQpPayB3aWxsIGZpeC4NCg0KPiANCj4gDQo+PiArICAgIGlmIChyZXMpIHJldHVybiBy
ZXM7DQo+PiArDQo+PiArICAgIC8qDQo+PiArICAgICAqIExlZ2FjeSBpbnRlcnJ1cHQgaXMgZm9y
Y2VkIGFuZCBhc3NpZ25lZCB0byB0aGUgZ3Vlc3QuDQo+PiArICAgICAqIFRoaXMgd2lsbCBiZSBy
ZW1vdmVkIG9uY2Ugd2UgaGF2ZSBpbXBsZW1lbnRhdGlvbiBmb3IgTVNJIHN1cHBvcnQuDQo+PiAr
ICAgICAqDQo+PiArICAgICAqLw0KPj4gKyAgICByZXMgPSBmZHRfcHJvcGVydHlfdnBjaV9pbnRl
cnJ1cHRfbWFwKGdjLCBmZHQsIDMsIDEsIDAsIDMsDQo+PiArICAgICAgICAgICAgNCwNCj4+ICsg
ICAgICAgICAgICAwLCAwLCAwLCAxLCAwLCAxMzYsIERUX0lSUV9UWVBFX0xFVkVMX0hJR0gsDQo+
PiArICAgICAgICAgICAgMCwgMCwgMCwgMiwgMCwgMTM3LCBEVF9JUlFfVFlQRV9MRVZFTF9ISUdI
LA0KPj4gKyAgICAgICAgICAgIDAsIDAsIDAsIDMsIDAsIDEzOCwgRFRfSVJRX1RZUEVfTEVWRUxf
SElHSCwNCj4+ICsgICAgICAgICAgICAwLCAwLCAwLCA0LCAwLCAxMzksIERUX0lSUV9UWVBFX0xF
VkVMX0hJR0gpOw0KPiANCj4gVGhlIDQgaW50ZXJydXB0IGFsbG9jYXRlZCBmb3IgdGhpcyBuZWVk
IHRvIGJlIGRlZmluZWQgaW4NCj4geGVuL2luY2x1ZGUvcHVibGljL2FyY2gtYXJtLmggYXMgd2Vs
bC4gQWxzbywgd2h5IHdvdWxkIHdlIHdhbnQgdG8gZ2V0DQo+IHJpZCBvZiB0aGUgbGVnYWN5IGlu
dGVycnVwdHMgY29tcGxldGVseT8gSXQgd291bGQgYmUgcG9zc2libGUgdG8gc3RpbGwNCj4gZmlu
ZCBkZXZpY2Ugb3Igc29mdHdhcmUgdGhhdCByZWx5IG9uIHRoZW0uDQo+IA0KDQpPayB3aWxsIGZp
eCB0aGF0LiANClJlZ2FyZGluZyBsZWdhY3kgaW50ZXJydXB0IHdlIGhhdmUganVzdCB0ZXN0ZWQg
b24gb25lIG9mIHRoZSBib2FyZCBkb27igJl0IGtub3cgaG93IGl0IHdpbGwgd29yayBvbiBvdGhl
ciBib2FyZHMuDQpXZSB3aWxsIG1vc3RseSBzdXBwb3J0IE1TSSBhbmQgd2lsbCBzZWUgaWYgd2Ug
aGF2ZSB0byBzdXBwb3J0IHRoZSBsZWdhY3kgaW50ZXJydXB0IGdvaW5nIGZvcndhcmQuDQoNCj4g
DQo+PiArICAgIGlmIChyZXMpIHJldHVybiByZXM7DQo+PiArDQo+PiArICAgIHJlcyA9IGZkdF9l
bmRfbm9kZShmZHQpOw0KPj4gKyAgICBpZiAocmVzKSByZXR1cm4gcmVzOw0KPj4gKw0KPj4gKyAg
ICByZXR1cm4gMDsNCj4+ICt9DQo+PiArDQo+PiBzdGF0aWMgY29uc3Qgc3RydWN0IGFyY2hfaW5m
byAqZ2V0X2FyY2hfaW5mbyhsaWJ4bF9fZ2MgKmdjLA0KPj4gICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgY29uc3Qgc3RydWN0IHhjX2RvbV9pbWFnZSAqZG9tKQ0K
Pj4gew0KPiANCj4gWy4uLl0NCj4gDQo+IA0KPj4gZGlmZiAtLWdpdCBhL3hlbi9pbmNsdWRlL3B1
YmxpYy9hcmNoLWFybS5oIGIveGVuL2luY2x1ZGUvcHVibGljL2FyY2gtYXJtLmgNCj4+IGluZGV4
IDczNjRhMDczNjIuLjRlMTljNjI5NDggMTAwNjQ0DQo+PiAtLS0gYS94ZW4vaW5jbHVkZS9wdWJs
aWMvYXJjaC1hcm0uaA0KPj4gKysrIGIveGVuL2luY2x1ZGUvcHVibGljL2FyY2gtYXJtLmgNCj4+
IEBAIC00MjYsNiArNDI2LDM0IEBAIHR5cGVkZWYgdWludDY0X3QgeGVuX2NhbGxiYWNrX3Q7DQo+
PiAjZGVmaW5lIEdVRVNUX1ZQQ0lfRUNBTV9CQVNFICAgIHhlbl9ta191bGxvbmcoMHgxMDAwMDAw
MCkNCj4+ICNkZWZpbmUgR1VFU1RfVlBDSV9FQ0FNX1NJWkUgICAgeGVuX21rX3VsbG9uZygweDEw
MDAwMDAwKQ0KPj4gDQo+PiArI2RlZmluZSBHVUVTVF9QQ0lfQUREUkVTU19DRUxMUyAzDQo+PiAr
I2RlZmluZSBHVUVTVF9QQ0lfU0laRV9DRUxMUyAyDQo+PiArDQo+PiArLyogUENJLVBDSWUgbWVt
b3J5IHNwYWNlIHR5cGVzICovDQo+PiArI2RlZmluZSBHVUVTVF9WUENJX0FERFJfVFlQRV9QUkVG
RVRDSF9NRU0geGVuX21rX3VsbG9uZygweDQyMDAwMDAwKQ0KPj4gKyNkZWZpbmUgR1VFU1RfVlBD
SV9BRERSX1RZUEVfTUVNICAgICAgICAgIHhlbl9ta191bGxvbmcoMHgwMjAwMDAwMCkNCj4+ICsj
ZGVmaW5lIEdVRVNUX1ZQQ0lfQUREUl9UWVBFX0lPICAgICAgICAgICB4ZW5fbWtfdWxsb25nKDB4
MDEwMDAwMDApDQo+PiArDQo+PiArLyogR3Vlc3QgUENJLVBDSWUgbWVtb3J5IHNwYWNlIHdoZXJl
IGNvbmZpZyBzcGFjZSBhbmQgQkFSIHdpbGwgYmUgYXZhaWxhYmxlLiovDQo+PiArI2RlZmluZSBH
VUVTVF9WUENJX1BSRUZFVENIX01FTV9DUFVfQUREUiAgeGVuX21rX3VsbG9uZygweDQwMDAwMDAw
MDApDQo+IA0KPiBJdCBsb29rcyBsaWtlIGl0IGNvdWxkIGNvbmZsaWN0IHdpdGggR1VFU1RfUkFN
MV9CQVNFK0dVRVNUX1JBTTFfU0laRT8NCg0KT2sgeWVzIHdpbGwgZml4IHRoYXQgYW5kIHdpbGwg
ZGVmaW5lIHRoZSBhZGRyZXNzIHJhbmdlcyBmb3IgZ3Vlc3Qgb25jZSB3ZSBmaW5hbGlzZWQgIHRo
ZSBWUENJIHRvcG9sb2d5IGZvciB0aGUgZ3Vlc3QuIFdlIGFyZSBjdXJyZW50bHkgaW52ZXN0aWdh
dGluZyBpZiB3ZSB3YW50IHRvIGZvbGxvdyB0aGUgaGFyZHdhcmUgdG9wb2xvZ3kgZm9yIHRoZSBn
dWVzdCBvciB3ZSB3aWxsIGNyZWF0ZSBhIGRpZmZlcmVudCB2aXJ0dWFsIHRvcG9sb2d5IGZvciB0
aGUgZ3Vlc3QgaW5kZXBlbmRlbnQgb2YgdGhlIGh3IHRvcG9sb2d5Lg0KPiANCj4gDQo+PiArI2Rl
ZmluZSBHVUVTVF9WUENJX01FTV9DUFVfQUREUiAgICAgICAgICAgeGVuX21rX3VsbG9uZygweDA0
MDIwMDAwKQ0KPj4gKyNkZWZpbmUgR1VFU1RfVlBDSV9JT19DUFVfQUREUiAgICAgICAgICAgIHhl
bl9ta191bGxvbmcoMHhDMDIwMDgwMCkNCj4gDQo+IDB4QzAyMDA4MDAgbG9va3MgbGlrZSBpdCBj
b3VsZCBjb25mbGljdCB3aXRoDQo+IEdVRVNUX1JBTTBfQkFTRStHVUVTVF9SQU0wX1NJWkU/DQo+
IA0KDQpTYW1lIGNvbW1lbnQgYWJvdmUuDQo+IA0KPj4gKy8qDQo+PiArICogVGhpcyBpcyBoYXJk
Y29kZWQgdmFsdWVzIGZvciB0aGUgcmVhbCBQQ0kgcGh5c2ljYWwgYWRkcmVzc2VzLg0KPj4gKyAq
IFRoaXMgd2lsbCBiZSByZW1vdmVkIG9uY2Ugd2UgcmVhZCB0aGUgcmVhbCBQQ0ktUENJZSBwaHlz
aWNhbA0KPj4gKyAqIGFkZHJlc3NlcyBmb3JtIHRoZSBjb25maWcgc3BhY2UgYW5kIG1hcCB0byB0
aGUgZ3Vlc3QgbWVtb3J5IG1hcA0KPj4gKyAqIHdoZW4gYXNzaWduaW5nIHRoZSBkZXZpY2UgdG8g
Z3Vlc3QgdmlhIFZQQ0kuDQo+PiArICoNCj4+ICsgKi8NCj4+ICsjZGVmaW5lIEdVRVNUX1ZQQ0lf
UFJFRkVUQ0hfTUVNX1BDSV9BRERSICB4ZW5fbWtfdWxsb25nKDB4NDAwMDAwMDAwMCkNCj4+ICsj
ZGVmaW5lIEdVRVNUX1ZQQ0lfTUVNX1BDSV9BRERSICAgICAgICAgICB4ZW5fbWtfdWxsb25nKDB4
NTAwMDAwMDApDQo+PiArI2RlZmluZSBHVUVTVF9WUENJX0lPX1BDSV9BRERSICAgICAgICAgICAg
eGVuX21rX3VsbG9uZygweDAwMDAwMDAwKQ0KPj4gKw0KPj4gKyNkZWZpbmUgR1VFU1RfVlBDSV9Q
UkVGRVRDSF9NRU1fU0laRSAgICAgIHhlbl9ta191bGxvbmcoMHgxMDAwMDAwMDApDQo+PiArI2Rl
ZmluZSBHVUVTVF9WUENJX01FTV9TSVpFICAgICAgICAgICAgICAgeGVuX21rX3VsbG9uZygweDA4
MDAwMDAwKQ0KPiANCj4gSG93IGRpZCB5b3UgY2hvc2UgdGhlc2Ugc2l6ZXM/IEdVRVNUX1ZQQ0lf
TUVNX1NJWkUgYW5kL29yDQo+IEdVRVNUX1ZQQ0lfUFJFRkVUQ0hfTUVNX1NJWkUgYXJlIHN1cHBv
c2VkIHRvIHBvdGVudGlhbGx5IGNvdmVyIGFsbCB0aGUNCj4gUENJIEJBUnMsIGluY2x1ZGluZyBw
b3RlbnRpYWwgZnV0dXJlIGhvdHBsdWcgZGV2aWNlcywgcmlnaHQ/DQo+IA0KPiBJZiBzbywgbWF5
YmUgd2UgbmVlZCB0byBpbmNyZWFzZSBHVUVTVF9WUENJX01FTV9TSVpFIHRvIGEgY291cGxlIG9m
IEdCDQo+IGFuZCBHVUVTVF9WUENJX1BSRUZFVENIX01FTV9TSVpFIHRvIGV2ZW4gbW9yZT8NCg0K
U2FtZSBjb21tZW50cyBhYm92ZS4NCj4gDQo+IA0KPiANCj4gDQo+PiArI2RlZmluZSBHVUVTVF9W
UENJX0lPX1NJWkUgICAgICAgICAgICAgICAgeGVuX21rX3VsbG9uZygweDAwODAwMDAwKQ0KPj4g
Kw0KPj4gLyoNCj4+ICAqIDE2TUIgPT0gNDA5NiBwYWdlcyByZXNlcnZlZCBmb3IgZ3Vlc3QgdG8g
dXNlIGFzIGEgcmVnaW9uIHRvIG1hcCBpdHMNCj4+ICAqIGdyYW50IHRhYmxlIGluLg0KDQo=


From xen-devel-bounces@lists.xenproject.org Mon Jul 27 13:45:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jul 2020 13:45:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k03Rq-0003Zk-F4; Mon, 27 Jul 2020 13:45:46 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wM/5=BG=xen.org=hx242@srs-us1.protection.inumbo.net>)
 id 1k03Rp-0003Zf-Er
 for xen-devel@lists.xenproject.org; Mon, 27 Jul 2020 13:45:45 +0000
X-Inumbo-ID: 78011c26-d00f-11ea-a7ce-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 78011c26-d00f-11ea-a7ce-12813bfff9fa;
 Mon, 27 Jul 2020 13:45:44 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Mime-Version:Content-Type:
 References:In-Reply-To:Date:Cc:To:From:Subject:Message-ID:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=FFDYRYFVAaz8Aqbh+dVeouxgn8PgmqGLW5mQE1CoTIs=; b=e42LpALH3BFY0qElxOEWzLQaQR
 0qJa+tkqtn9TZvqPyUPl7ZbNyYmpBLuFLYM4LlEQ1an+CwC4sB1vj6k8GyBCKZqmj22ylVTBVKQah
 1schNvLyrDv93ny4zHfBSNl9Ngv+S2tIjE+vFypFPtU7ATW8XNzwiWeZYO7teoemVdB4=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <hx242@xen.org>)
 id 1k03Rn-0000UL-QO; Mon, 27 Jul 2020 13:45:43 +0000
Received: from 54-240-197-225.amazon.com ([54.240.197.225]
 helo=edge-cache-102.e-fra50.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <hx242@xen.org>)
 id 1k03Rn-0000m3-GM; Mon, 27 Jul 2020 13:45:43 +0000
Message-ID: <16bfc39511221b683aea98b1440d3ab7f987b27e.camel@xen.org>
Subject: Re: [PATCH v7 09/15] efi: use new page table APIs in copy_mapping
From: Hongyan Xia <hx242@xen.org>
To: Jan Beulich <jbeulich@suse.com>
Date: Mon, 27 Jul 2020 14:45:41 +0100
In-Reply-To: <0c421dee1729295eb8504ee81abbc8e57f220b12.camel@xen.org>
References: <cover.1590750232.git.hongyxia@amazon.com>
 <0259b645c81ecc3879240e30760b0e7641a2b602.1590750232.git.hongyxia@amazon.com>
 <bfe28c9c-af4e-96c2-9e6c-354a5bf626d8@suse.com>
 <0c421dee1729295eb8504ee81abbc8e57f220b12.camel@xen.org>
Content-Type: text/plain; charset="UTF-8"
X-Mailer: Evolution 3.28.5-0ubuntu0.18.04.2 
Mime-Version: 1.0
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, julien@xen.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Mon, 2020-07-27 at 13:45 +0100, Hongyan Xia wrote:
> On Tue, 2020-07-14 at 14:42 +0200, Jan Beulich wrote:
> > On 29.05.2020 13:11, Hongyan Xia wrote:
> > > From: Wei Liu <wei.liu2@citrix.com>
> > > 
> > > After inspection, ARM doesn't have alloc_xen_pagetable, so this
> > > function is x86-only, which means it is safe for us to change.
> > 
> > Well, it sits inside a "#ifndef CONFIG_ARM" section.
> > 
> > > @@ -1442,29 +1443,42 @@ static __init void copy_mapping(unsigned long mfn, unsigned long end,
> > >                                                   unsigned long emfn))
> > >  {
> > >      unsigned long next;
> > > +    l3_pgentry_t *l3src = NULL, *l3dst = NULL;
> > >  
> > >      for ( ; mfn < end; mfn = next )
> > >      {
> > >          l4_pgentry_t l4e = efi_l4_pgtable[l4_table_offset(mfn << PAGE_SHIFT)];
> > > -        l3_pgentry_t *l3src, *l3dst;
> > >          unsigned long va = (unsigned long)mfn_to_virt(mfn);
> > >  
> > > +        if ( !((mfn << PAGE_SHIFT) & ((1UL << L4_PAGETABLE_SHIFT) - 1)) )
> > 
> > To be in line with ...
> > 
> > > +        {
> > > +            UNMAP_DOMAIN_PAGE(l3src);
> > > +            UNMAP_DOMAIN_PAGE(l3dst);
> > > +        }
> > >          next = mfn + (1UL << (L3_PAGETABLE_SHIFT - PAGE_SHIFT));
> > 
> > ... this, please avoid the left shift of mfn in the if().
> > Judging from
> 
> What do you mean by "in line" here? It does not seem to me that "next
> =" can easily be squashed into the if() condition.

Sorry, never mind. "in line" != "inline".

Hongyan



From xen-devel-bounces@lists.xenproject.org Mon Jul 27 13:56:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jul 2020 13:56:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k03cQ-0004WO-G5; Mon, 27 Jul 2020 13:56:42 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9AKR=BG=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1k03cP-0004W4-16
 for xen-devel@lists.xenproject.org; Mon, 27 Jul 2020 13:56:41 +0000
X-Inumbo-ID: fbb13afa-d010-11ea-a7cf-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id fbb13afa-d010-11ea-a7cf-12813bfff9fa;
 Mon, 27 Jul 2020 13:56:35 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Message-Id:Subject:To:Sender:
 Reply-To:Cc:MIME-Version:Content-Type:Content-Transfer-Encoding:Content-ID:
 Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc
 :Resent-Message-ID:In-Reply-To:References:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=yyD8fbYSTni+2HGTdzmgmvSXgPglZgOKnRSzVZ8F5p8=; b=Os9uuOO9T0v7oKIYR8mgJUMoql
 2DRXdZNKVpBproNAn+1Bp23WlQdfbhSY526t07T/LUcWkLj5INf7bsZW6ndgiwIvaCcEgVxO9hpqz
 psgqBh+SuNjx1ViaM7hVXRLPwkoRtBJCSalZ0KuD4+OHssXRRtkR1XjV4rGll3zNNf+o=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1k03cI-0000kF-7S; Mon, 27 Jul 2020 13:56:34 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1k03cH-000796-Vo; Mon, 27 Jul 2020 13:56:34 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1k03cH-00008S-TO; Mon, 27 Jul 2020 13:56:33 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Subject: [qemu-mainline bisection] complete test-amd64-amd64-qemuu-nested-amd
Message-Id: <E1k03cH-00008S-TO@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 27 Jul 2020 13:56:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

branch xen-unstable
xenbranch xen-unstable
job test-amd64-amd64-qemuu-nested-amd
testid xen-boot/l1

Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://git.qemu.org/qemu.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  qemuu git://git.qemu.org/qemu.git
  Bug introduced:  7d3660e79830a069f1848bb4fa1cdf8f666424fb
  Bug not present: 9e3903136d9acde2fb2dd9e967ba928050a6cb4a
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/152231/


  (Revision log too long, omitted.)


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/qemu-mainline/test-amd64-amd64-qemuu-nested-amd.xen-boot--l1.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/qemu-mainline/test-amd64-amd64-qemuu-nested-amd.xen-boot--l1 --summary-out=tmp/152231.bisection-summary --basis-template=151065 --blessings=real,real-bisect qemu-mainline test-amd64-amd64-qemuu-nested-amd xen-boot/l1
Searching for failure / basis pass:
 152219 fail [host=rimava1] / 151065 ok.
Failure / basis pass flights: 152219 / 151065
(tree with no url: minios)
Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://git.qemu.org/qemu.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git
Latest c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8c30327debb28c0b6cfa2106b736774e0b20daac 3c659044118e34603161457db9934a34f816d78b 57cdde4a74dd0d68df9e32657773484a5484a027 6ada2285d9918859699c92e09540e023e0a16054 8c4532f19d6925538fb0c938f7de9a97da8c5c3b
Basis pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 dafce295e6f447ed8905db4e29241e2c6c2a4389 3c659044118e34603161457db9934a34f816d78b 9e3903136d9acde2fb2dd9e967ba928050a6cb4a 2e3de6253422112ae43e608661ba94ea6b345694 058023b343d4e366864831db46e9b438e9e3a178
Generating revisions with ./adhoc-revtuple-generator  git://xenbits.xen.org/linux-pvops.git#c3038e718a19fc596f7b1baba0f83d5146dc7784-c3038e718a19fc596f7b1baba0f83d5146dc7784 git://xenbits.xen.org/osstest/linux-firmware.git#c530a75c1e6a472b0eb9558310b518f0dfcd8860-c530a75c1e6a472b0eb9558310b518f0dfcd8860 git://xenbits.xen.org/osstest/ovmf.git#dafce295e6f447ed8905db4e29241e2c6c2a4389-8c30327debb28c0b6cfa2106b736774e0b20daac git://xenbits.xen.org/qemu-xen-traditional.git#3c659044118e34603161457db99\
 34a34f816d78b-3c659044118e34603161457db9934a34f816d78b git://git.qemu.org/qemu.git#9e3903136d9acde2fb2dd9e967ba928050a6cb4a-57cdde4a74dd0d68df9e32657773484a5484a027 git://xenbits.xen.org/osstest/seabios.git#2e3de6253422112ae43e608661ba94ea6b345694-6ada2285d9918859699c92e09540e023e0a16054 git://xenbits.xen.org/xen.git#058023b343d4e366864831db46e9b438e9e3a178-8c4532f19d6925538fb0c938f7de9a97da8c5c3b
Use of uninitialized value $parents in array dereference at ./adhoc-revtuple-generator line 465.
Use of uninitialized value in concatenation (.) or string at ./adhoc-revtuple-generator line 465.
Use of uninitialized value $parents in array dereference at ./adhoc-revtuple-generator line 465.
Use of uninitialized value in concatenation (.) or string at ./adhoc-revtuple-generator line 465.
Use of uninitialized value $parents in array dereference at ./adhoc-revtuple-generator line 465.
Use of uninitialized value in concatenation (.) or string at ./adhoc-revtuple-generator line 465.
Use of uninitialized value $parents in array dereference at ./adhoc-revtuple-generator line 465.
Use of uninitialized value in concatenation (.) or string at ./adhoc-revtuple-generator line 465.
Use of uninitialized value $parents in array dereference at ./adhoc-revtuple-generator line 465.
Use of uninitialized value in concatenation (.) or string at ./adhoc-revtuple-generator line 465.
Loaded 67833 nodes in revision graph
Searching for test results:
 151101 fail irrelevant
 151065 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 dafce295e6f447ed8905db4e29241e2c6c2a4389 3c659044118e34603161457db9934a34f816d78b 9e3903136d9acde2fb2dd9e967ba928050a6cb4a 2e3de6253422112ae43e608661ba94ea6b345694 058023b343d4e366864831db46e9b438e9e3a178
 151149 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9af1064995d479df92e399a682ba7e4b2fc78415 3c659044118e34603161457db9934a34f816d78b 7d3660e79830a069f1848bb4fa1cdf8f666424fb 2e3de6253422112ae43e608661ba94ea6b345694 b91825f628c9a62cf2a3a0d972ea81484a8b7fce
 151221 blocked irrelevant
 151175 blocked irrelevant
 151241 blocked irrelevant
 151286 blocked irrelevant
 151269 blocked irrelevant
 151328 blocked irrelevant
 151304 blocked c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 322969adf1fb3d6cfbd613f35121315715aff2ed 3c659044118e34603161457db9934a34f816d78b 171199f56f5f9bdf1e5d670d09ef1351d8f01bae 2e3de6253422112ae43e608661ba94ea6b345694 fde76f895d0aa817a1207d844d793239c6639bc6
 151377 blocked irrelevant
 151353 blocked c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1a992030522622c42aa063788b3276789c56c1e1 3c659044118e34603161457db9934a34f816d78b d4b78317b7cf8c0c635b70086503813f79ff21ec 2e3de6253422112ae43e608661ba94ea6b345694 fde76f895d0aa817a1207d844d793239c6639bc6
 151399 blocked irrelevant
 151414 blocked irrelevant
 151435 blocked irrelevant
 151459 blocked c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 0f01cec52f4794777feb067e4fa0bfcedfdc124e 3c659044118e34603161457db9934a34f816d78b e7651153a8801dad6805d450ea8bef9b46c1adf5 88ab0c15525ced2eefe39220742efe4769089ad8 88cfd062e8318dfeb67c7d2eb50b6cd224b0738a
 151471 blocked irrelevant
 151485 blocked irrelevant
 151500 blocked irrelevant
 151518 blocked c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 00217f1919270007d7a911f89b32e39b9dcaa907 3c659044118e34603161457db9934a34f816d78b fc1bff958998910ec8d25db86cd2f53ff125f7ab 88ab0c15525ced2eefe39220742efe4769089ad8 23ca7ec0ba620db52a646d80e22f9703a6589f66
 151547 blocked irrelevant
 151598 blocked irrelevant
 151577 blocked irrelevant
 151622 blocked c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 627d1d6693b0594d257dbe1a3363a8d4bd4d8307 3c659044118e34603161457db9934a34f816d78b 7b7515702012219410802a168ae4aa45b72a44df 88ab0c15525ced2eefe39220742efe4769089ad8 be63d9d47f571a60d70f8fb630c03871312d9655
 151656 blocked c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 627d1d6693b0594d257dbe1a3363a8d4bd4d8307 3c659044118e34603161457db9934a34f816d78b eb6490f544388dd24c0d054a96dd304bc7284450 88ab0c15525ced2eefe39220742efe4769089ad8 be63d9d47f571a60d70f8fb630c03871312d9655
 151634 blocked c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 627d1d6693b0594d257dbe1a3363a8d4bd4d8307 3c659044118e34603161457db9934a34f816d78b eb6490f544388dd24c0d054a96dd304bc7284450 88ab0c15525ced2eefe39220742efe4769089ad8 be63d9d47f571a60d70f8fb630c03871312d9655
 151645 blocked c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 627d1d6693b0594d257dbe1a3363a8d4bd4d8307 3c659044118e34603161457db9934a34f816d78b eb6490f544388dd24c0d054a96dd304bc7284450 88ab0c15525ced2eefe39220742efe4769089ad8 be63d9d47f571a60d70f8fb630c03871312d9655
 151669 blocked c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 627d1d6693b0594d257dbe1a3363a8d4bd4d8307 3c659044118e34603161457db9934a34f816d78b eb6490f544388dd24c0d054a96dd304bc7284450 88ab0c15525ced2eefe39220742efe4769089ad8 be63d9d47f571a60d70f8fb630c03871312d9655
 151685 blocked c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 627d1d6693b0594d257dbe1a3363a8d4bd4d8307 3c659044118e34603161457db9934a34f816d78b eb6490f544388dd24c0d054a96dd304bc7284450 88ab0c15525ced2eefe39220742efe4769089ad8 be63d9d47f571a60d70f8fb630c03871312d9655
 151704 blocked c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 627d1d6693b0594d257dbe1a3363a8d4bd4d8307 3c659044118e34603161457db9934a34f816d78b eb6490f544388dd24c0d054a96dd304bc7284450 88ab0c15525ced2eefe39220742efe4769089ad8 be63d9d47f571a60d70f8fb630c03871312d9655
 151778 blocked c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 bdafda8c457eb90c65f37026589b54258300f05c 3c659044118e34603161457db9934a34f816d78b aff2caf6b3fbab1062e117a47b66d27f7fd2f272 88ab0c15525ced2eefe39220742efe4769089ad8 3fdc211b01b29f252166937238efe02d15cb5780
 151721 blocked irrelevant
 151763 blocked c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 bdafda8c457eb90c65f37026589b54258300f05c 3c659044118e34603161457db9934a34f816d78b 48f22ad04ead83e61b4b35871ec6f6109779b791 88ab0c15525ced2eefe39220742efe4769089ad8 3fdc211b01b29f252166937238efe02d15cb5780
 151744 blocked c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 bdafda8c457eb90c65f37026589b54258300f05c 3c659044118e34603161457db9934a34f816d78b 8796c64ecdfd34be394ea277aaaaa53df0c76996 88ab0c15525ced2eefe39220742efe4769089ad8 3fdc211b01b29f252166937238efe02d15cb5780
 151804 blocked c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 bdafda8c457eb90c65f37026589b54258300f05c 3c659044118e34603161457db9934a34f816d78b 45db94cc90c286a9965a285ba19450f448760a09 88ab0c15525ced2eefe39220742efe4769089ad8 3fdc211b01b29f252166937238efe02d15cb5780
 151816 blocked c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 bdafda8c457eb90c65f37026589b54258300f05c 3c659044118e34603161457db9934a34f816d78b 45db94cc90c286a9965a285ba19450f448760a09 88ab0c15525ced2eefe39220742efe4769089ad8 3fdc211b01b29f252166937238efe02d15cb5780
 151833 blocked c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f45e3a4afa65a45ea1a956a7c5e7410ff40190d1 3c659044118e34603161457db9934a34f816d78b 827937158b72ce2265841ff528bba3c44a1bfbc8 88ab0c15525ced2eefe39220742efe4769089ad8 02d69864b51a4302a148c28d6d391238a6778b4b
 151855 blocked c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f45e3a4afa65a45ea1a956a7c5e7410ff40190d1 3c659044118e34603161457db9934a34f816d78b d34498309cff7560ac90c422c56e3137e6a64b19 88ab0c15525ced2eefe39220742efe4769089ad8 02d69864b51a4302a148c28d6d391238a6778b4b
 151841 blocked c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f45e3a4afa65a45ea1a956a7c5e7410ff40190d1 3c659044118e34603161457db9934a34f816d78b 2033cc6efa98b831d7839e367aa7d5aa74d0750f 88ab0c15525ced2eefe39220742efe4769089ad8 02d69864b51a4302a148c28d6d391238a6778b4b
 151849 blocked c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f45e3a4afa65a45ea1a956a7c5e7410ff40190d1 3c659044118e34603161457db9934a34f816d78b d34498309cff7560ac90c422c56e3137e6a64b19 88ab0c15525ced2eefe39220742efe4769089ad8 02d69864b51a4302a148c28d6d391238a6778b4b
 151874 blocked irrelevant
 151895 blocked irrelevant
 151914 blocked irrelevant
 151934 blocked irrelevant
 151968 blocked irrelevant
 151952 blocked irrelevant
 152013 blocked c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d8327496762b4f2a54c9bafd7a214314ec28e9e 3c659044118e34603161457db9934a34f816d78b 939ab64b400b9bec4b59795a87817784093e1acd 6ada2285d9918859699c92e09540e023e0a16054 fb024b779336a0f73b3aee885b2ce082e812881f
 151988 blocked c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 6ff53d2a13740e39dea110d6b3509c156c659586 3c659044118e34603161457db9934a34f816d78b b7bda69c4ef46c57480f6e378923f5215b122778 6ada2285d9918859699c92e09540e023e0a16054 f8fe3c07363d11fc81d8e7382dbcaa357c861569
 151999 blocked c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d8327496762b4f2a54c9bafd7a214314ec28e9e 3c659044118e34603161457db9934a34f816d78b 97f750becac33e3d3e446d3ff4ae9af2577b7877 6ada2285d9918859699c92e09540e023e0a16054 fb024b779336a0f73b3aee885b2ce082e812881f
 152026 blocked c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d8327496762b4f2a54c9bafd7a214314ec28e9e 3c659044118e34603161457db9934a34f816d78b 9fc87111005e8903785db40819af66b8f85b8b96 6ada2285d9918859699c92e09540e023e0a16054 fb024b779336a0f73b3aee885b2ce082e812881f
 152039 blocked c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d8327496762b4f2a54c9bafd7a214314ec28e9e 3c659044118e34603161457db9934a34f816d78b 9fc87111005e8903785db40819af66b8f85b8b96 6ada2285d9918859699c92e09540e023e0a16054 fb024b779336a0f73b3aee885b2ce082e812881f
 152058 blocked irrelevant
 152076 blocked irrelevant
 152108 blocked c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9132a31b9c8381197eee75eb66c809182b264110 3c659044118e34603161457db9934a34f816d78b 3cbc8970f55c87cb58699b6dc8fe42998bc79dc0 6ada2285d9918859699c92e09540e023e0a16054 8c4532f19d6925538fb0c938f7de9a97da8c5c3b
 152144 blocked c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9132a31b9c8381197eee75eb66c809182b264110 3c659044118e34603161457db9934a34f816d78b d0cc248164961a7ba9d43806feffd76f9f6d7f41 6ada2285d9918859699c92e09540e023e0a16054 8c4532f19d6925538fb0c938f7de9a97da8c5c3b
 152171 fail irrelevant
 152224 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 567bc4b4ae8a975791382dd30ac413bc0d3ce88c 3c659044118e34603161457db9934a34f816d78b 7d3660e79830a069f1848bb4fa1cdf8f666424fb 2e3de6253422112ae43e608661ba94ea6b345694 b91825f628c9a62cf2a3a0d972ea81484a8b7fce
 152200 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 91e4bcb313f0c1f0f19b87b5849f5486aa076be4 3c659044118e34603161457db9934a34f816d78b 7adfbea8fd1efce36019a0c2f198ca73be9d3f18 6ada2285d9918859699c92e09540e023e0a16054 8c4532f19d6925538fb0c938f7de9a97da8c5c3b
 152208 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 dafce295e6f447ed8905db4e29241e2c6c2a4389 3c659044118e34603161457db9934a34f816d78b 9e3903136d9acde2fb2dd9e967ba928050a6cb4a 2e3de6253422112ae43e608661ba94ea6b345694 058023b343d4e366864831db46e9b438e9e3a178
 152228 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 dafce295e6f447ed8905db4e29241e2c6c2a4389 3c659044118e34603161457db9934a34f816d78b 9e3903136d9acde2fb2dd9e967ba928050a6cb4a 2e3de6253422112ae43e608661ba94ea6b345694 058023b343d4e366864831db46e9b438e9e3a178
 152210 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 91e4bcb313f0c1f0f19b87b5849f5486aa076be4 3c659044118e34603161457db9934a34f816d78b 7adfbea8fd1efce36019a0c2f198ca73be9d3f18 6ada2285d9918859699c92e09540e023e0a16054 8c4532f19d6925538fb0c938f7de9a97da8c5c3b
 152229 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8c30327debb28c0b6cfa2106b736774e0b20daac 3c659044118e34603161457db9934a34f816d78b 57cdde4a74dd0d68df9e32657773484a5484a027 6ada2285d9918859699c92e09540e023e0a16054 8c4532f19d6925538fb0c938f7de9a97da8c5c3b
 152212 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3ee4f6cb360a877d171f2f9bb76b0d46d2cfa985 3c659044118e34603161457db9934a34f816d78b 9e3903136d9acde2fb2dd9e967ba928050a6cb4a 2e3de6253422112ae43e608661ba94ea6b345694 7028534d8482d25860c4d1aa8e45f0b911abfc5a
 152214 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 394e8e4bf586b4749620a48a23c816ee19f0e04a 3c659044118e34603161457db9934a34f816d78b 9e3903136d9acde2fb2dd9e967ba928050a6cb4a 2e3de6253422112ae43e608661ba94ea6b345694 2995d0afdf2d3fb44d07eada088db3613741db1e
 152230 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 567bc4b4ae8a975791382dd30ac413bc0d3ce88c 3c659044118e34603161457db9934a34f816d78b 9e3903136d9acde2fb2dd9e967ba928050a6cb4a 2e3de6253422112ae43e608661ba94ea6b345694 b91825f628c9a62cf2a3a0d972ea81484a8b7fce
 152215 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 567bc4b4ae8a975791382dd30ac413bc0d3ce88c 3c659044118e34603161457db9934a34f816d78b 9e3903136d9acde2fb2dd9e967ba928050a6cb4a 2e3de6253422112ae43e608661ba94ea6b345694 b91825f628c9a62cf2a3a0d972ea81484a8b7fce
 152189 fail irrelevant
 152211 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8c30327debb28c0b6cfa2106b736774e0b20daac 3c659044118e34603161457db9934a34f816d78b b0ce3f021e0157e9a5ab836cb162c48caac132e1 6ada2285d9918859699c92e09540e023e0a16054 8c4532f19d6925538fb0c938f7de9a97da8c5c3b
 152217 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fd708fe0e1f813d6faf02d92ec5e8d73ce876ed1 3c659044118e34603161457db9934a34f816d78b 7d3660e79830a069f1848bb4fa1cdf8f666424fb 2e3de6253422112ae43e608661ba94ea6b345694 b91825f628c9a62cf2a3a0d972ea81484a8b7fce
 152218 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 dafce295e6f447ed8905db4e29241e2c6c2a4389 3c659044118e34603161457db9934a34f816d78b 9e3903136d9acde2fb2dd9e967ba928050a6cb4a 2e3de6253422112ae43e608661ba94ea6b345694 058023b343d4e366864831db46e9b438e9e3a178
 152231 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 567bc4b4ae8a975791382dd30ac413bc0d3ce88c 3c659044118e34603161457db9934a34f816d78b 7d3660e79830a069f1848bb4fa1cdf8f666424fb 2e3de6253422112ae43e608661ba94ea6b345694 b91825f628c9a62cf2a3a0d972ea81484a8b7fce
 152220 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8c30327debb28c0b6cfa2106b736774e0b20daac 3c659044118e34603161457db9934a34f816d78b b0ce3f021e0157e9a5ab836cb162c48caac132e1 6ada2285d9918859699c92e09540e023e0a16054 8c4532f19d6925538fb0c938f7de9a97da8c5c3b
 152221 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 567bc4b4ae8a975791382dd30ac413bc0d3ce88c 3c659044118e34603161457db9934a34f816d78b 7d3660e79830a069f1848bb4fa1cdf8f666424fb 2e3de6253422112ae43e608661ba94ea6b345694 b91825f628c9a62cf2a3a0d972ea81484a8b7fce
 152222 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 567bc4b4ae8a975791382dd30ac413bc0d3ce88c 3c659044118e34603161457db9934a34f816d78b 9e3903136d9acde2fb2dd9e967ba928050a6cb4a 2e3de6253422112ae43e608661ba94ea6b345694 b91825f628c9a62cf2a3a0d972ea81484a8b7fce
 152219 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8c30327debb28c0b6cfa2106b736774e0b20daac 3c659044118e34603161457db9934a34f816d78b 57cdde4a74dd0d68df9e32657773484a5484a027 6ada2285d9918859699c92e09540e023e0a16054 8c4532f19d6925538fb0c938f7de9a97da8c5c3b
Searching for interesting versions
 Result found: flight 151065 (pass), for basis pass
 Result found: flight 152219 (fail), for basis failure
 Repro found: flight 152228 (pass), for basis pass
 Repro found: flight 152229 (fail), for basis failure
 0 revisions at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 567bc4b4ae8a975791382dd30ac413bc0d3ce88c 3c659044118e34603161457db9934a34f816d78b 9e3903136d9acde2fb2dd9e967ba928050a6cb4a 2e3de6253422112ae43e608661ba94ea6b345694 b91825f628c9a62cf2a3a0d972ea81484a8b7fce
No revisions left to test, checking graph state.
 Result found: flight 152215 (pass), for last pass
 Result found: flight 152221 (fail), for first failure
 Repro found: flight 152222 (pass), for last pass
 Repro found: flight 152224 (fail), for first failure
 Repro found: flight 152230 (pass), for last pass
 Repro found: flight 152231 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  qemuu git://git.qemu.org/qemu.git
  Bug introduced:  7d3660e79830a069f1848bb4fa1cdf8f666424fb
  Bug not present: 9e3903136d9acde2fb2dd9e967ba928050a6cb4a
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/152231/


  (Revision log too long, omitted.)

dot: graph is too large for cairo-renderer bitmaps. Scaling by 0.591355 to fit
pnmtopng: 183 colors found
Revision graph left in /home/logs/results/bisect/qemu-mainline/test-amd64-amd64-qemuu-nested-amd.xen-boot--l1.{dot,ps,png,html,svg}.
----------------------------------------
152231: tolerable ALL FAIL

flight 152231 qemu-mainline real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/152231/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 test-amd64-amd64-qemuu-nested-amd 14 xen-boot/l1        fail baseline untested


jobs:
 test-amd64-amd64-qemuu-nested-amd                            fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Mon Jul 27 14:22:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jul 2020 14:22:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k041D-00075z-Sh; Mon, 27 Jul 2020 14:22:19 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wM/5=BG=xen.org=hx242@srs-us1.protection.inumbo.net>)
 id 1k041D-00075p-0v
 for xen-devel@lists.xenproject.org; Mon, 27 Jul 2020 14:22:19 +0000
X-Inumbo-ID: 936c447c-d014-11ea-8ac4-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 936c447c-d014-11ea-8ac4-bc764e2007e4;
 Mon, 27 Jul 2020 14:22:18 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
 MIME-Version:Content-Type:Content-Transfer-Encoding:Content-ID:
 Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc
 :Resent-Message-ID:In-Reply-To:References:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=lUTj7fPtlIpACnbEEUp7Ogj2IKf4LNQYjfMh6mK0dVo=; b=OBsJAzs5L70NKsEgXHHzW+mylw
 VSPDELfZnNCa9Hgqcq7AsCYQvbiK0P22wD4kKWa8M20ZpV13xcJQ0Sp+qeWBJHabeJPoonmkoXiAC
 A78rv1Q3clqDU07lQFiCaV+v0gi0anSFRCNDyi4rWNVtA7DPizHqUXDJrz26zJkIUKQE=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <hx242@xen.org>)
 id 1k041A-0001Mf-Vy; Mon, 27 Jul 2020 14:22:16 +0000
Received: from 54-240-197-233.amazon.com ([54.240.197.233]
 helo=u1bbd043a57dd5a.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <hx242@xen.org>)
 id 1k041A-0002w6-Jb; Mon, 27 Jul 2020 14:22:16 +0000
From: Hongyan Xia <hx242@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v8 00/15] switch to domheap for Xen page tables
Date: Mon, 27 Jul 2020 15:21:50 +0100
Message-Id: <cover.1595857947.git.hongyxia@amazon.com>
X-Mailer: git-send-email 2.17.1
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 jgrall@amazon.com, Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Hongyan Xia <hongyxia@amazon.com>

This series rewrites all the remaining functions and finally makes the
switch from xenheap to domheap for Xen page tables, so that they no
longer need to rely on the direct map, which is a big step towards
removing the direct map.

This series depends on the following mini-series:
https://lists.xenproject.org/archives/html/xen-devel/2020-04/msg00730.html

---
Changed in v8:
- address comments in v7.
- rebase

Changed in v7:
- rebase and cleanup.
- address comments in v6.
- add alloc_map_clear_xen_pt() helper to simplify the patches in this
  series.

Changed in v6:
- drop the patches that have already been merged.
- rebase and cleanup.
- rewrite map_pages_to_xen() and modify_xen_mappings() in a way that
  does not require an end_of_loop goto label.

Hongyan Xia (2):
  x86/mm: drop old page table APIs
  x86: switch to use domheap page for page tables

Wei Liu (13):
  x86/mm: map_pages_to_xen would better have one exit path
  x86/mm: make sure there is one exit path for modify_xen_mappings
  x86/mm: rewrite virt_to_xen_l*e
  x86/mm: switch to new APIs in map_pages_to_xen
  x86/mm: switch to new APIs in modify_xen_mappings
  x86_64/mm: introduce pl2e in paging_init
  x86_64/mm: switch to new APIs in paging_init
  x86_64/mm: switch to new APIs in setup_m2p_table
  efi: use new page table APIs in copy_mapping
  efi: switch to new APIs in EFI code
  x86/smpboot: add exit path for clone_mapping()
  x86/smpboot: switch clone_mapping() to new APIs
  x86/mm: drop _new suffix for page table APIs

 xen/arch/x86/domain_page.c |  11 +-
 xen/arch/x86/efi/runtime.h |  13 ++-
 xen/arch/x86/mm.c          | 272 ++++++++++++++++++++++++++++-----------------
 xen/arch/x86/setup.c       |   4 +-
 xen/arch/x86/smpboot.c     |  70 +++++++-----
 xen/arch/x86/x86_64/mm.c   |  81 ++++++++------
 xen/common/efi/boot.c      |  83 +++++++++-----
 xen/common/efi/efi.h       |   3 +-
 xen/common/efi/runtime.c   |   8 +-
 xen/common/vmap.c          |   1 +
 xen/include/asm-x86/mm.h   |   7 +-
 xen/include/asm-x86/page.h |  15 ++-
 12 files changed, 353 insertions(+), 215 deletions(-)

-- 
2.16.6



From xen-devel-bounces@lists.xenproject.org Mon Jul 27 14:22:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jul 2020 14:22:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k041F-000766-41; Mon, 27 Jul 2020 14:22:21 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wM/5=BG=xen.org=hx242@srs-us1.protection.inumbo.net>)
 id 1k041D-00075u-Pm
 for xen-devel@lists.xenproject.org; Mon, 27 Jul 2020 14:22:19 +0000
X-Inumbo-ID: 930a170d-d014-11ea-a7d9-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 930a170d-d014-11ea-a7d9-12813bfff9fa;
 Mon, 27 Jul 2020 14:22:18 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:
 Sender:Reply-To:MIME-Version:Content-Type:Content-Transfer-Encoding:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=d2dy+O8HMS2n5aYbf6tbA9h3T1JBERn3K5gYW9x8m4c=; b=tzYtqK51jVjeJbI/PqrQpGv6f5
 UYzSv7pwygTT35iG/5CeZTZHyKIptE9fQU/oeoogDQjmBnbqIS5hegfEQO0j3b7aQedxesOW2W+wo
 03bDsDS7BwxR/QcvPLqtvm0zyEn7VQB9RI4JbGl+lJz2YFrnimT+H/EPiZPyyeRzD76o=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <hx242@xen.org>)
 id 1k041C-0001Mj-CH; Mon, 27 Jul 2020 14:22:18 +0000
Received: from 54-240-197-233.amazon.com ([54.240.197.233]
 helo=u1bbd043a57dd5a.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <hx242@xen.org>)
 id 1k041C-0002w6-1i; Mon, 27 Jul 2020 14:22:18 +0000
From: Hongyan Xia <hx242@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v8 01/15] x86/mm: map_pages_to_xen would better have one exit
 path
Date: Mon, 27 Jul 2020 15:21:51 +0100
Message-Id: <45a087b2be2f04567af99c630a0bed6500785463.1595857947.git.hongyxia@amazon.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1595857947.git.hongyxia@amazon.com>
References: <cover.1595857947.git.hongyxia@amazon.com>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, jgrall@amazon.com,
 Wei Liu <wl@xen.org>, Jan Beulich <jbeulich@suse.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Wei Liu <wei.liu2@citrix.com>

We will soon rewrite this function to map and unmap page tables
dynamically. Since dynamic mappings may map and unmap pages in
different iterations of the while loop, we need to lift pl3e out of
the loop.

No functional change.

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
Signed-off-by: Hongyan Xia <hongyxia@amazon.com>

---
Changed since v4:
- drop the end_of_loop goto label.

Changed since v3:
- remove asserts on rc since rc never gets changed to anything else.
- reword commit message.
---
 xen/arch/x86/mm.c | 20 +++++++++++++-------
 1 file changed, 13 insertions(+), 7 deletions(-)

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 82bc676553..0ade9b3917 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -5085,9 +5085,11 @@ int map_pages_to_xen(
     unsigned int flags)
 {
     bool locking = system_state > SYS_STATE_boot;
+    l3_pgentry_t *pl3e, ol3e;
     l2_pgentry_t *pl2e, ol2e;
     l1_pgentry_t *pl1e, ol1e;
     unsigned int  i;
+    int rc = -ENOMEM;
 
 #define flush_flags(oldf) do {                 \
     unsigned int o_ = (oldf);                  \
@@ -5105,10 +5107,11 @@ int map_pages_to_xen(
 
     while ( nr_mfns != 0 )
     {
-        l3_pgentry_t ol3e, *pl3e = virt_to_xen_l3e(virt);
+        pl3e = virt_to_xen_l3e(virt);
 
         if ( !pl3e )
-            return -ENOMEM;
+            goto out;
+
         ol3e = *pl3e;
 
         if ( cpu_has_page1gb &&
@@ -5198,7 +5201,7 @@ int map_pages_to_xen(
 
             l2t = alloc_xen_pagetable();
             if ( l2t == NULL )
-                return -ENOMEM;
+                goto out;
 
             for ( i = 0; i < L2_PAGETABLE_ENTRIES; i++ )
                 l2e_write(l2t + i,
@@ -5227,7 +5230,7 @@ int map_pages_to_xen(
 
         pl2e = virt_to_xen_l2e(virt);
         if ( !pl2e )
-            return -ENOMEM;
+            goto out;
 
         if ( ((((virt >> PAGE_SHIFT) | mfn_x(mfn)) &
                ((1u << PAGETABLE_ORDER) - 1)) == 0) &&
@@ -5271,7 +5274,7 @@ int map_pages_to_xen(
             {
                 pl1e = virt_to_xen_l1e(virt);
                 if ( pl1e == NULL )
-                    return -ENOMEM;
+                    goto out;
             }
             else if ( l2e_get_flags(*pl2e) & _PAGE_PSE )
             {
@@ -5299,7 +5302,7 @@ int map_pages_to_xen(
 
                 l1t = alloc_xen_pagetable();
                 if ( l1t == NULL )
-                    return -ENOMEM;
+                    goto out;
 
                 for ( i = 0; i < L1_PAGETABLE_ENTRIES; i++ )
                     l1e_write(&l1t[i],
@@ -5445,7 +5448,10 @@ int map_pages_to_xen(
 
 #undef flush_flags
 
-    return 0;
+    rc = 0;
+
+ out:
+    return rc;
 }
 
 int populate_pt_range(unsigned long virt, unsigned long nr_mfns)
-- 
2.16.6



From xen-devel-bounces@lists.xenproject.org Mon Jul 27 14:22:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jul 2020 14:22:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k041K-00077x-FI; Mon, 27 Jul 2020 14:22:26 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wM/5=BG=xen.org=hx242@srs-us1.protection.inumbo.net>)
 id 1k041I-00075u-OI
 for xen-devel@lists.xenproject.org; Mon, 27 Jul 2020 14:22:24 +0000
X-Inumbo-ID: 94bb3e8c-d014-11ea-a7d9-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 94bb3e8c-d014-11ea-a7d9-12813bfff9fa;
 Mon, 27 Jul 2020 14:22:20 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:
 Sender:Reply-To:MIME-Version:Content-Type:Content-Transfer-Encoding:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=BqJIywzpGYOOk00xzU2J1b4Z+JiGyqnjXWw3J7dS0fY=; b=rS7WFKLQNzKO+aHcR6aQGvADsm
 DT8CtwbAgC95miE7A08kZKob3pG6Rd6X44F8HkOOpjXhGuoHXf85gJwb23byhcUqkE1Nbv8pfmOhg
 gS7zo368UpRU5XhOX/366+oB2vECNywbsCjjG8iWh8CP4v9s5wEkPq9+paVBkYSBqiYY=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <hx242@xen.org>)
 id 1k041D-0001NK-RB; Mon, 27 Jul 2020 14:22:19 +0000
Received: from 54-240-197-233.amazon.com ([54.240.197.233]
 helo=u1bbd043a57dd5a.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <hx242@xen.org>)
 id 1k041D-0002w6-GJ; Mon, 27 Jul 2020 14:22:19 +0000
From: Hongyan Xia <hx242@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v8 02/15] x86/mm: make sure there is one exit path for
 modify_xen_mappings
Date: Mon, 27 Jul 2020 15:21:52 +0100
Message-Id: <8864d7e68842dc68090bfa5b0b50253d0eb695d8.1595857947.git.hongyxia@amazon.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1595857947.git.hongyxia@amazon.com>
References: <cover.1595857947.git.hongyxia@amazon.com>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, jgrall@amazon.com,
 Wei Liu <wl@xen.org>, Jan Beulich <jbeulich@suse.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Wei Liu <wei.liu2@citrix.com>

We will soon need to map and unmap page tables dynamically in this
function. Since dynamic mappings may map and unmap pl3e in different
iterations, lift pl3e out of the loop.

No functional change.

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
Signed-off-by: Hongyan Xia <hongyxia@amazon.com>

---
Changed since v4:
- drop the end_of_loop goto label.

Changed since v3:
- remove asserts on rc since it never gets changed to anything else.
---
 xen/arch/x86/mm.c | 15 +++++++++++----
 1 file changed, 11 insertions(+), 4 deletions(-)

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 0ade9b3917..7a11d022c9 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -5474,10 +5474,12 @@ int populate_pt_range(unsigned long virt, unsigned long nr_mfns)
 int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
 {
     bool locking = system_state > SYS_STATE_boot;
+    l3_pgentry_t *pl3e;
     l2_pgentry_t *pl2e;
     l1_pgentry_t *pl1e;
     unsigned int  i;
     unsigned long v = s;
+    int rc = -ENOMEM;
 
     /* Set of valid PTE bits which may be altered. */
 #define FLAGS_MASK (_PAGE_NX|_PAGE_DIRTY|_PAGE_ACCESSED|_PAGE_RW|_PAGE_PRESENT)
@@ -5488,7 +5490,7 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
 
     while ( v < e )
     {
-        l3_pgentry_t *pl3e = virt_to_xen_l3e(v);
+        pl3e = virt_to_xen_l3e(v);
 
         if ( !pl3e || !(l3e_get_flags(*pl3e) & _PAGE_PRESENT) )
         {
@@ -5521,7 +5523,8 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
             /* PAGE1GB: shatter the superpage and fall through. */
             l2t = alloc_xen_pagetable();
             if ( !l2t )
-                return -ENOMEM;
+                goto out;
+
             for ( i = 0; i < L2_PAGETABLE_ENTRIES; i++ )
                 l2e_write(l2t + i,
                           l2e_from_pfn(l3e_get_pfn(*pl3e) +
@@ -5578,7 +5581,8 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
                 /* PSE: shatter the superpage and try again. */
                 l1t = alloc_xen_pagetable();
                 if ( !l1t )
-                    return -ENOMEM;
+                    goto out;
+
                 for ( i = 0; i < L1_PAGETABLE_ENTRIES; i++ )
                     l1e_write(&l1t[i],
                               l1e_from_pfn(l2e_get_pfn(*pl2e) + i,
@@ -5711,7 +5715,10 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
     flush_area(NULL, FLUSH_TLB_GLOBAL);
 
 #undef FLAGS_MASK
-    return 0;
+    rc = 0;
+
+ out:
+    return rc;
 }
 
 #undef flush_area
-- 
2.16.6



From xen-devel-bounces@lists.xenproject.org Mon Jul 27 14:22:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jul 2020 14:22:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k041O-00079M-PG; Mon, 27 Jul 2020 14:22:30 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wM/5=BG=xen.org=hx242@srs-us1.protection.inumbo.net>)
 id 1k041N-00075u-Oc
 for xen-devel@lists.xenproject.org; Mon, 27 Jul 2020 14:22:29 +0000
X-Inumbo-ID: 96193afe-d014-11ea-a7d9-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 96193afe-d014-11ea-a7d9-12813bfff9fa;
 Mon, 27 Jul 2020 14:22:22 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:
 Sender:Reply-To:MIME-Version:Content-Type:Content-Transfer-Encoding:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=Ocpaxy196EOPlPXfon4lDI7Bm2winkx9grPvtKNISJ4=; b=r5vQueOv2ciYdgPxMTLL62sfnp
 hv08MX1PHrv9hFQlKg1nbz0GAKsyhciJr1IGTPDlh9dduSPXruXh1Cy9s1/fDCXrJ1wDKKczVtyar
 9YxcWFNnVVGS0HTiZit0XyVSlbX/N2ehkyMFW1B543LPhBbgi6Qddmzksvhh/GgZYAAM=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <hx242@xen.org>)
 id 1k041F-0001NS-Sb; Mon, 27 Jul 2020 14:22:21 +0000
Received: from 54-240-197-233.amazon.com ([54.240.197.233]
 helo=u1bbd043a57dd5a.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <hx242@xen.org>)
 id 1k041F-0002w6-Em; Mon, 27 Jul 2020 14:22:21 +0000
From: Hongyan Xia <hx242@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v8 03/15] x86/mm: rewrite virt_to_xen_l*e
Date: Mon, 27 Jul 2020 15:21:53 +0100
Message-Id: <e7963f6d8cab8e4d5d4249b12a8175405d888bba.1595857947.git.hongyxia@amazon.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1595857947.git.hongyxia@amazon.com>
References: <cover.1595857947.git.hongyxia@amazon.com>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 jgrall@amazon.com, Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Wei Liu <wei.liu2@citrix.com>

Rewrite these functions to use the new APIs, and modify their callers
to unmap the returned pointer. Since alloc_xen_pagetable_new() is
almost never useful unless accompanied by clearing and mapping the
page, introduce a helper, alloc_map_clear_xen_pt(), for this sequence.

Note that the change to virt_to_xen_l1e() also requires vmap_to_mfn()
to unmap the page, which in turn requires including the domain_page.h
header in vmap.

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
Signed-off-by: Hongyan Xia <hongyxia@amazon.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>

---
Changed in v8:
- s/virtual address/linear address/.
- BUG_ON() on NULL return in vmap_to_mfn().

Changed in v7:
- remove a comment.
- use l1e_get_mfn() instead of converting things back and forth.
- add alloc_map_clear_xen_pt().
- unmap before the next mapping to reduce mapcache pressure.
- use normal unmap calls instead of the macro in error paths because
  unmap can handle NULL now.
---
 xen/arch/x86/domain_page.c | 11 ++++--
 xen/arch/x86/mm.c          | 96 +++++++++++++++++++++++++++++++++-------------
 xen/common/vmap.c          |  1 +
 xen/include/asm-x86/mm.h   |  1 +
 xen/include/asm-x86/page.h | 10 ++++-
 5 files changed, 88 insertions(+), 31 deletions(-)

diff --git a/xen/arch/x86/domain_page.c b/xen/arch/x86/domain_page.c
index b03728e18e..dc8627c1b5 100644
--- a/xen/arch/x86/domain_page.c
+++ b/xen/arch/x86/domain_page.c
@@ -333,21 +333,24 @@ void unmap_domain_page_global(const void *ptr)
 mfn_t domain_page_map_to_mfn(const void *ptr)
 {
     unsigned long va = (unsigned long)ptr;
-    const l1_pgentry_t *pl1e;
+    l1_pgentry_t l1e;
 
     if ( va >= DIRECTMAP_VIRT_START )
         return _mfn(virt_to_mfn(ptr));
 
     if ( va >= VMAP_VIRT_START && va < VMAP_VIRT_END )
     {
-        pl1e = virt_to_xen_l1e(va);
+        const l1_pgentry_t *pl1e = virt_to_xen_l1e(va);
+
         BUG_ON(!pl1e);
+        l1e = *pl1e;
+        unmap_domain_page(pl1e);
     }
     else
     {
         ASSERT(va >= MAPCACHE_VIRT_START && va < MAPCACHE_VIRT_END);
-        pl1e = &__linear_l1_table[l1_linear_offset(va)];
+        l1e = __linear_l1_table[l1_linear_offset(va)];
     }
 
-    return l1e_get_mfn(*pl1e);
+    return l1e_get_mfn(l1e);
 }
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 7a11d022c9..fd416c0282 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -4965,8 +4965,28 @@ void free_xen_pagetable_new(mfn_t mfn)
         free_xenheap_page(mfn_to_virt(mfn_x(mfn)));
 }
 
+void *alloc_map_clear_xen_pt(mfn_t *pmfn)
+{
+    mfn_t mfn = alloc_xen_pagetable_new();
+    void *ret;
+
+    if ( mfn_eq(mfn, INVALID_MFN) )
+        return NULL;
+
+    if ( pmfn )
+        *pmfn = mfn;
+    ret = map_domain_page(mfn);
+    clear_page(ret);
+
+    return ret;
+}
+
 static DEFINE_SPINLOCK(map_pgdir_lock);
 
+/*
+ * For virt_to_xen_lXe() functions, they take a linear address and return a
+ * pointer to Xen's LX entry. Caller needs to unmap the pointer.
+ */
 static l3_pgentry_t *virt_to_xen_l3e(unsigned long v)
 {
     l4_pgentry_t *pl4e;
@@ -4975,33 +4995,33 @@ static l3_pgentry_t *virt_to_xen_l3e(unsigned long v)
     if ( !(l4e_get_flags(*pl4e) & _PAGE_PRESENT) )
     {
         bool locking = system_state > SYS_STATE_boot;
-        l3_pgentry_t *l3t = alloc_xen_pagetable();
+        mfn_t l3mfn;
+        l3_pgentry_t *l3t = alloc_map_clear_xen_pt(&l3mfn);
 
         if ( !l3t )
             return NULL;
-        clear_page(l3t);
+        UNMAP_DOMAIN_PAGE(l3t);
         if ( locking )
             spin_lock(&map_pgdir_lock);
         if ( !(l4e_get_flags(*pl4e) & _PAGE_PRESENT) )
         {
-            l4_pgentry_t l4e = l4e_from_paddr(__pa(l3t), __PAGE_HYPERVISOR);
+            l4_pgentry_t l4e = l4e_from_mfn(l3mfn, __PAGE_HYPERVISOR);
 
             l4e_write(pl4e, l4e);
             efi_update_l4_pgtable(l4_table_offset(v), l4e);
-            l3t = NULL;
+            l3mfn = INVALID_MFN;
         }
         if ( locking )
             spin_unlock(&map_pgdir_lock);
-        if ( l3t )
-            free_xen_pagetable(l3t);
+        free_xen_pagetable_new(l3mfn);
     }
 
-    return l4e_to_l3e(*pl4e) + l3_table_offset(v);
+    return map_l3t_from_l4e(*pl4e) + l3_table_offset(v);
 }
 
 static l2_pgentry_t *virt_to_xen_l2e(unsigned long v)
 {
-    l3_pgentry_t *pl3e;
+    l3_pgentry_t *pl3e, l3e;
 
     pl3e = virt_to_xen_l3e(v);
     if ( !pl3e )
@@ -5010,31 +5030,37 @@ static l2_pgentry_t *virt_to_xen_l2e(unsigned long v)
     if ( !(l3e_get_flags(*pl3e) & _PAGE_PRESENT) )
     {
         bool locking = system_state > SYS_STATE_boot;
-        l2_pgentry_t *l2t = alloc_xen_pagetable();
+        mfn_t l2mfn;
+        l2_pgentry_t *l2t = alloc_map_clear_xen_pt(&l2mfn);
 
         if ( !l2t )
+        {
+            unmap_domain_page(pl3e);
             return NULL;
-        clear_page(l2t);
+        }
+        UNMAP_DOMAIN_PAGE(l2t);
         if ( locking )
             spin_lock(&map_pgdir_lock);
         if ( !(l3e_get_flags(*pl3e) & _PAGE_PRESENT) )
         {
-            l3e_write(pl3e, l3e_from_paddr(__pa(l2t), __PAGE_HYPERVISOR));
-            l2t = NULL;
+            l3e_write(pl3e, l3e_from_mfn(l2mfn, __PAGE_HYPERVISOR));
+            l2mfn = INVALID_MFN;
         }
         if ( locking )
             spin_unlock(&map_pgdir_lock);
-        if ( l2t )
-            free_xen_pagetable(l2t);
+        free_xen_pagetable_new(l2mfn);
     }
 
     BUG_ON(l3e_get_flags(*pl3e) & _PAGE_PSE);
-    return l3e_to_l2e(*pl3e) + l2_table_offset(v);
+    l3e = *pl3e;
+    unmap_domain_page(pl3e);
+
+    return map_l2t_from_l3e(l3e) + l2_table_offset(v);
 }
 
 l1_pgentry_t *virt_to_xen_l1e(unsigned long v)
 {
-    l2_pgentry_t *pl2e;
+    l2_pgentry_t *pl2e, l2e;
 
     pl2e = virt_to_xen_l2e(v);
     if ( !pl2e )
@@ -5043,26 +5069,32 @@ l1_pgentry_t *virt_to_xen_l1e(unsigned long v)
     if ( !(l2e_get_flags(*pl2e) & _PAGE_PRESENT) )
     {
         bool locking = system_state > SYS_STATE_boot;
-        l1_pgentry_t *l1t = alloc_xen_pagetable();
+        mfn_t l1mfn;
+        l1_pgentry_t *l1t = alloc_map_clear_xen_pt(&l1mfn);
 
         if ( !l1t )
+        {
+            unmap_domain_page(pl2e);
             return NULL;
-        clear_page(l1t);
+        }
+        UNMAP_DOMAIN_PAGE(l1t);
         if ( locking )
             spin_lock(&map_pgdir_lock);
         if ( !(l2e_get_flags(*pl2e) & _PAGE_PRESENT) )
         {
-            l2e_write(pl2e, l2e_from_paddr(__pa(l1t), __PAGE_HYPERVISOR));
-            l1t = NULL;
+            l2e_write(pl2e, l2e_from_mfn(l1mfn, __PAGE_HYPERVISOR));
+            l1mfn = INVALID_MFN;
         }
         if ( locking )
             spin_unlock(&map_pgdir_lock);
-        if ( l1t )
-            free_xen_pagetable(l1t);
+        free_xen_pagetable_new(l1mfn);
     }
 
     BUG_ON(l2e_get_flags(*pl2e) & _PAGE_PSE);
-    return l2e_to_l1e(*pl2e) + l1_table_offset(v);
+    l2e = *pl2e;
+    unmap_domain_page(pl2e);
+
+    return map_l1t_from_l2e(l2e) + l1_table_offset(v);
 }
 
 /* Convert to from superpage-mapping flags for map_pages_to_xen(). */
@@ -5085,8 +5117,8 @@ int map_pages_to_xen(
     unsigned int flags)
 {
     bool locking = system_state > SYS_STATE_boot;
-    l3_pgentry_t *pl3e, ol3e;
-    l2_pgentry_t *pl2e, ol2e;
+    l3_pgentry_t *pl3e = NULL, ol3e;
+    l2_pgentry_t *pl2e = NULL, ol2e;
     l1_pgentry_t *pl1e, ol1e;
     unsigned int  i;
     int rc = -ENOMEM;
@@ -5107,6 +5139,10 @@ int map_pages_to_xen(
 
     while ( nr_mfns != 0 )
     {
+        /* Clean up mappings mapped in the previous iteration. */
+        UNMAP_DOMAIN_PAGE(pl3e);
+        UNMAP_DOMAIN_PAGE(pl2e);
+
         pl3e = virt_to_xen_l3e(virt);
 
         if ( !pl3e )
@@ -5275,6 +5311,8 @@ int map_pages_to_xen(
                 pl1e = virt_to_xen_l1e(virt);
                 if ( pl1e == NULL )
                     goto out;
+
+                UNMAP_DOMAIN_PAGE(pl1e);
             }
             else if ( l2e_get_flags(*pl2e) & _PAGE_PSE )
             {
@@ -5451,6 +5489,8 @@ int map_pages_to_xen(
     rc = 0;
 
  out:
+    unmap_domain_page(pl2e);
+    unmap_domain_page(pl3e);
     return rc;
 }
 
@@ -5474,7 +5514,7 @@ int populate_pt_range(unsigned long virt, unsigned long nr_mfns)
 int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
 {
     bool locking = system_state > SYS_STATE_boot;
-    l3_pgentry_t *pl3e;
+    l3_pgentry_t *pl3e = NULL;
     l2_pgentry_t *pl2e;
     l1_pgentry_t *pl1e;
     unsigned int  i;
@@ -5490,6 +5530,9 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
 
     while ( v < e )
     {
+        /* Clean up mappings mapped in the previous iteration. */
+        UNMAP_DOMAIN_PAGE(pl3e);
+
         pl3e = virt_to_xen_l3e(v);
 
         if ( !pl3e || !(l3e_get_flags(*pl3e) & _PAGE_PRESENT) )
@@ -5718,6 +5761,7 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
     rc = 0;
 
  out:
+    unmap_domain_page(pl3e);
     return rc;
 }
 
diff --git a/xen/common/vmap.c b/xen/common/vmap.c
index faebc1ddf1..9964ab2096 100644
--- a/xen/common/vmap.c
+++ b/xen/common/vmap.c
@@ -1,6 +1,7 @@
 #ifdef VMAP_VIRT_START
 #include <xen/bitmap.h>
 #include <xen/cache.h>
+#include <xen/domain_page.h>
 #include <xen/init.h>
 #include <xen/mm.h>
 #include <xen/pfn.h>
diff --git a/xen/include/asm-x86/mm.h b/xen/include/asm-x86/mm.h
index 7e74996053..5b76308948 100644
--- a/xen/include/asm-x86/mm.h
+++ b/xen/include/asm-x86/mm.h
@@ -586,6 +586,7 @@ void *alloc_xen_pagetable(void);
 void free_xen_pagetable(void *v);
 mfn_t alloc_xen_pagetable_new(void);
 void free_xen_pagetable_new(mfn_t mfn);
+void *alloc_map_clear_xen_pt(mfn_t *pmfn);
 
 l1_pgentry_t *virt_to_xen_l1e(unsigned long v);
 
diff --git a/xen/include/asm-x86/page.h b/xen/include/asm-x86/page.h
index f632affaef..608a048c28 100644
--- a/xen/include/asm-x86/page.h
+++ b/xen/include/asm-x86/page.h
@@ -291,7 +291,15 @@ void copy_page_sse2(void *, const void *);
 #define pfn_to_paddr(pfn)   __pfn_to_paddr(pfn)
 #define paddr_to_pfn(pa)    __paddr_to_pfn(pa)
 #define paddr_to_pdx(pa)    pfn_to_pdx(paddr_to_pfn(pa))
-#define vmap_to_mfn(va)     _mfn(l1e_get_pfn(*virt_to_xen_l1e((unsigned long)(va))))
+
+#define vmap_to_mfn(va) ({                                                  \
+        const l1_pgentry_t *pl1e_ = virt_to_xen_l1e((unsigned long)(va));   \
+        mfn_t mfn_;                                                         \
+        BUG_ON(!pl1e_);                                                     \
+        mfn_ = l1e_get_mfn(*pl1e_);                                         \
+        unmap_domain_page(pl1e_);                                           \
+        mfn_; })
+
 #define vmap_to_page(va)    mfn_to_page(vmap_to_mfn(va))
 
 #endif /* !defined(__ASSEMBLY__) */
-- 
2.16.6



From xen-devel-bounces@lists.xenproject.org Mon Jul 27 14:22:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jul 2020 14:22:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k041U-0007BE-1s; Mon, 27 Jul 2020 14:22:36 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wM/5=BG=xen.org=hx242@srs-us1.protection.inumbo.net>)
 id 1k041S-00075u-Oh
 for xen-devel@lists.xenproject.org; Mon, 27 Jul 2020 14:22:34 +0000
X-Inumbo-ID: 96c539ee-d014-11ea-a7d9-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 96c539ee-d014-11ea-a7d9-12813bfff9fa;
 Mon, 27 Jul 2020 14:22:23 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:
 Sender:Reply-To:MIME-Version:Content-Type:Content-Transfer-Encoding:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=rrvEqUYSNkuezZ5dZTBgKV5PBSNHpJAjMPmahuBCeZs=; b=Unt2X3o3HL7/gE5t6VkgQ38lpd
 pNcOcfD+m8KiwACwOtMGa4rO/kfLMrD5+EkksjJn/2s/Gm/y8cy0kFhcsLNSr3NLxZBzqCI/3MKEc
 cWtkSDCy4DV6bzgsnyJQeumLE49kdhBZbSZHmatvrN40lWhZafSpVti0BDHyBcISqc+M=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <hx242@xen.org>)
 id 1k041H-0001NZ-AF; Mon, 27 Jul 2020 14:22:23 +0000
Received: from 54-240-197-233.amazon.com ([54.240.197.233]
 helo=u1bbd043a57dd5a.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <hx242@xen.org>)
 id 1k041H-0002w6-1D; Mon, 27 Jul 2020 14:22:23 +0000
From: Hongyan Xia <hx242@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v8 04/15] x86/mm: switch to new APIs in map_pages_to_xen
Date: Mon, 27 Jul 2020 15:21:54 +0100
Message-Id: <248e86f82ae0d0d6bf661e6a180874437efacca4.1595857947.git.hongyxia@amazon.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1595857947.git.hongyxia@amazon.com>
References: <cover.1595857947.git.hongyxia@amazon.com>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, jgrall@amazon.com,
 Wei Liu <wl@xen.org>, Jan Beulich <jbeulich@suse.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Wei Liu <wei.liu2@citrix.com>

Page tables allocated in this function are now mapped and unmapped
via the new APIs.

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
Signed-off-by: Hongyan Xia <hongyxia@amazon.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
 xen/arch/x86/mm.c | 60 +++++++++++++++++++++++++++++++++----------------------
 1 file changed, 36 insertions(+), 24 deletions(-)

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index fd416c0282..edcf164742 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -5171,7 +5171,7 @@ int map_pages_to_xen(
                 }
                 else
                 {
-                    l2_pgentry_t *l2t = l3e_to_l2e(ol3e);
+                    l2_pgentry_t *l2t = map_l2t_from_l3e(ol3e);
 
                     for ( i = 0; i < L2_PAGETABLE_ENTRIES; i++ )
                     {
@@ -5183,10 +5183,11 @@ int map_pages_to_xen(
                         else
                         {
                             unsigned int j;
-                            const l1_pgentry_t *l1t = l2e_to_l1e(ol2e);
+                            const l1_pgentry_t *l1t = map_l1t_from_l2e(ol2e);
 
                             for ( j = 0; j < L1_PAGETABLE_ENTRIES; j++ )
                                 flush_flags(l1e_get_flags(l1t[j]));
+                            unmap_domain_page(l1t);
                         }
                     }
                     flush_area(virt, flush_flags);
@@ -5195,9 +5196,10 @@ int map_pages_to_xen(
                         ol2e = l2t[i];
                         if ( (l2e_get_flags(ol2e) & _PAGE_PRESENT) &&
                              !(l2e_get_flags(ol2e) & _PAGE_PSE) )
-                            free_xen_pagetable(l2e_to_l1e(ol2e));
+                            free_xen_pagetable_new(l2e_get_mfn(ol2e));
                     }
-                    free_xen_pagetable(l2t);
+                    unmap_domain_page(l2t);
+                    free_xen_pagetable_new(l3e_get_mfn(ol3e));
                 }
             }
 
@@ -5214,6 +5216,7 @@ int map_pages_to_xen(
             unsigned int flush_flags =
                 FLUSH_TLB | FLUSH_ORDER(2 * PAGETABLE_ORDER);
             l2_pgentry_t *l2t;
+            mfn_t l2mfn;
 
             /* Skip this PTE if there is no change. */
             if ( ((l3e_get_pfn(ol3e) & ~(L2_PAGETABLE_ENTRIES *
@@ -5235,15 +5238,17 @@ int map_pages_to_xen(
                 continue;
             }
 
-            l2t = alloc_xen_pagetable();
-            if ( l2t == NULL )
+            l2mfn = alloc_xen_pagetable_new();
+            if ( mfn_eq(l2mfn, INVALID_MFN) )
                 goto out;
 
+            l2t = map_domain_page(l2mfn);
             for ( i = 0; i < L2_PAGETABLE_ENTRIES; i++ )
                 l2e_write(l2t + i,
                           l2e_from_pfn(l3e_get_pfn(ol3e) +
                                        (i << PAGETABLE_ORDER),
                                        l3e_get_flags(ol3e)));
+            UNMAP_DOMAIN_PAGE(l2t);
 
             if ( l3e_get_flags(ol3e) & _PAGE_GLOBAL )
                 flush_flags |= FLUSH_TLB_GLOBAL;
@@ -5253,15 +5258,15 @@ int map_pages_to_xen(
             if ( (l3e_get_flags(*pl3e) & _PAGE_PRESENT) &&
                  (l3e_get_flags(*pl3e) & _PAGE_PSE) )
             {
-                l3e_write_atomic(pl3e, l3e_from_mfn(virt_to_mfn(l2t),
-                                                    __PAGE_HYPERVISOR));
-                l2t = NULL;
+                l3e_write_atomic(pl3e,
+                                 l3e_from_mfn(l2mfn, __PAGE_HYPERVISOR));
+                l2mfn = INVALID_MFN;
             }
             if ( locking )
                 spin_unlock(&map_pgdir_lock);
             flush_area(virt, flush_flags);
-            if ( l2t )
-                free_xen_pagetable(l2t);
+
+            free_xen_pagetable_new(l2mfn);
         }
 
         pl2e = virt_to_xen_l2e(virt);
@@ -5289,12 +5294,13 @@ int map_pages_to_xen(
                 }
                 else
                 {
-                    l1_pgentry_t *l1t = l2e_to_l1e(ol2e);
+                    l1_pgentry_t *l1t = map_l1t_from_l2e(ol2e);
 
                     for ( i = 0; i < L1_PAGETABLE_ENTRIES; i++ )
                         flush_flags(l1e_get_flags(l1t[i]));
                     flush_area(virt, flush_flags);
-                    free_xen_pagetable(l1t);
+                    unmap_domain_page(l1t);
+                    free_xen_pagetable_new(l2e_get_mfn(ol2e));
                 }
             }
 
@@ -5319,6 +5325,7 @@ int map_pages_to_xen(
                 unsigned int flush_flags =
                     FLUSH_TLB | FLUSH_ORDER(PAGETABLE_ORDER);
                 l1_pgentry_t *l1t;
+                mfn_t l1mfn;
 
                 /* Skip this PTE if there is no change. */
                 if ( (((l2e_get_pfn(*pl2e) & ~(L1_PAGETABLE_ENTRIES - 1)) +
@@ -5338,14 +5345,16 @@ int map_pages_to_xen(
                     goto check_l3;
                 }
 
-                l1t = alloc_xen_pagetable();
-                if ( l1t == NULL )
+                l1mfn = alloc_xen_pagetable_new();
+                if ( mfn_eq(l1mfn, INVALID_MFN) )
                     goto out;
 
+                l1t = map_domain_page(l1mfn);
                 for ( i = 0; i < L1_PAGETABLE_ENTRIES; i++ )
                     l1e_write(&l1t[i],
                               l1e_from_pfn(l2e_get_pfn(*pl2e) + i,
                                            lNf_to_l1f(l2e_get_flags(*pl2e))));
+                UNMAP_DOMAIN_PAGE(l1t);
 
                 if ( l2e_get_flags(*pl2e) & _PAGE_GLOBAL )
                     flush_flags |= FLUSH_TLB_GLOBAL;
@@ -5355,20 +5364,21 @@ int map_pages_to_xen(
                 if ( (l2e_get_flags(*pl2e) & _PAGE_PRESENT) &&
                      (l2e_get_flags(*pl2e) & _PAGE_PSE) )
                 {
-                    l2e_write_atomic(pl2e, l2e_from_mfn(virt_to_mfn(l1t),
+                    l2e_write_atomic(pl2e, l2e_from_mfn(l1mfn,
                                                         __PAGE_HYPERVISOR));
-                    l1t = NULL;
+                    l1mfn = INVALID_MFN;
                 }
                 if ( locking )
                     spin_unlock(&map_pgdir_lock);
                 flush_area(virt, flush_flags);
-                if ( l1t )
-                    free_xen_pagetable(l1t);
+
+                free_xen_pagetable_new(l1mfn);
             }
 
-            pl1e  = l2e_to_l1e(*pl2e) + l1_table_offset(virt);
+            pl1e  = map_l1t_from_l2e(*pl2e) + l1_table_offset(virt);
             ol1e  = *pl1e;
             l1e_write_atomic(pl1e, l1e_from_mfn(mfn, flags));
+            UNMAP_DOMAIN_PAGE(pl1e);
             if ( (l1e_get_flags(ol1e) & _PAGE_PRESENT) )
             {
                 unsigned int flush_flags = FLUSH_TLB | FLUSH_ORDER(0);
@@ -5412,12 +5422,13 @@ int map_pages_to_xen(
                     goto check_l3;
                 }
 
-                l1t = l2e_to_l1e(ol2e);
+                l1t = map_l1t_from_l2e(ol2e);
                 base_mfn = l1e_get_pfn(l1t[0]) & ~(L1_PAGETABLE_ENTRIES - 1);
                 for ( i = 0; i < L1_PAGETABLE_ENTRIES; i++ )
                     if ( (l1e_get_pfn(l1t[i]) != (base_mfn + i)) ||
                          (l1e_get_flags(l1t[i]) != flags) )
                         break;
+                UNMAP_DOMAIN_PAGE(l1t);
                 if ( i == L1_PAGETABLE_ENTRIES )
                 {
                     l2e_write_atomic(pl2e, l2e_from_pfn(base_mfn,
@@ -5427,7 +5438,7 @@ int map_pages_to_xen(
                     flush_area(virt - PAGE_SIZE,
                                FLUSH_TLB_GLOBAL |
                                FLUSH_ORDER(PAGETABLE_ORDER));
-                    free_xen_pagetable(l2e_to_l1e(ol2e));
+                    free_xen_pagetable_new(l2e_get_mfn(ol2e));
                 }
                 else if ( locking )
                     spin_unlock(&map_pgdir_lock);
@@ -5460,7 +5471,7 @@ int map_pages_to_xen(
                 continue;
             }
 
-            l2t = l3e_to_l2e(ol3e);
+            l2t = map_l2t_from_l3e(ol3e);
             base_mfn = l2e_get_pfn(l2t[0]) & ~(L2_PAGETABLE_ENTRIES *
                                               L1_PAGETABLE_ENTRIES - 1);
             for ( i = 0; i < L2_PAGETABLE_ENTRIES; i++ )
@@ -5468,6 +5479,7 @@ int map_pages_to_xen(
                       (base_mfn + (i << PAGETABLE_ORDER))) ||
                      (l2e_get_flags(l2t[i]) != l1f_to_lNf(flags)) )
                     break;
+            UNMAP_DOMAIN_PAGE(l2t);
             if ( i == L2_PAGETABLE_ENTRIES )
             {
                 l3e_write_atomic(pl3e, l3e_from_pfn(base_mfn,
@@ -5477,7 +5489,7 @@ int map_pages_to_xen(
                 flush_area(virt - PAGE_SIZE,
                            FLUSH_TLB_GLOBAL |
                            FLUSH_ORDER(2*PAGETABLE_ORDER));
-                free_xen_pagetable(l3e_to_l2e(ol3e));
+                free_xen_pagetable_new(l3e_get_mfn(ol3e));
             }
             else if ( locking )
                 spin_unlock(&map_pgdir_lock);
-- 
2.16.6



From xen-devel-bounces@lists.xenproject.org Mon Jul 27 14:22:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jul 2020 14:22:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k041Z-0007EB-FR; Mon, 27 Jul 2020 14:22:41 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wM/5=BG=xen.org=hx242@srs-us1.protection.inumbo.net>)
 id 1k041X-00075u-Oi
 for xen-devel@lists.xenproject.org; Mon, 27 Jul 2020 14:22:39 +0000
X-Inumbo-ID: 97b8bd62-d014-11ea-a7d9-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 97b8bd62-d014-11ea-a7d9-12813bfff9fa;
 Mon, 27 Jul 2020 14:22:25 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:
 Sender:Reply-To:MIME-Version:Content-Type:Content-Transfer-Encoding:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=AzKnclrIuNpwqMYiGGxlH4mfYS6y6TwghSOhmo8lMtw=; b=G79OtY2+doo2xGMiNfvs+QXMm3
 idsuWZkZf8PIsbZnByjOzZIWad+7pdqIgaktWer5y2WBE1l6HE5jazFy+M05I0b1FGjk5ZmMjbaKD
 ieAfsjCGbvRMMGO0OB0IrSD4VRiMFeucNz6FYir+9ncEc/fC1A6JEvUl9+89mdn/q7Cw=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <hx242@xen.org>)
 id 1k041I-0001Nh-Oc; Mon, 27 Jul 2020 14:22:24 +0000
Received: from 54-240-197-233.amazon.com ([54.240.197.233]
 helo=u1bbd043a57dd5a.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <hx242@xen.org>)
 id 1k041I-0002w6-FS; Mon, 27 Jul 2020 14:22:24 +0000
From: Hongyan Xia <hx242@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v8 05/15] x86/mm: switch to new APIs in modify_xen_mappings
Date: Mon, 27 Jul 2020 15:21:55 +0100
Message-Id: <d6e921a4a33b0be1ae8147de268854556b08a3bc.1595857947.git.hongyxia@amazon.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1595857947.git.hongyxia@amazon.com>
References: <cover.1595857947.git.hongyxia@amazon.com>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, jgrall@amazon.com,
 Wei Liu <wl@xen.org>, Jan Beulich <jbeulich@suse.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Wei Liu <wei.liu2@citrix.com>

Page tables allocated in that function should be mapped and unmapped
now.

Note that pl2e may now be mapped and unmapped in different iterations, so
we need to add clean-ups for that.

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
Signed-off-by: Hongyan Xia <hongyxia@amazon.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>

---
Changed in v7:
- use normal unmap in the error path.
---
 xen/arch/x86/mm.c | 57 +++++++++++++++++++++++++++++++++++--------------------
 1 file changed, 36 insertions(+), 21 deletions(-)

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index edcf164742..199940a345 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -5527,7 +5527,7 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
 {
     bool locking = system_state > SYS_STATE_boot;
     l3_pgentry_t *pl3e = NULL;
-    l2_pgentry_t *pl2e;
+    l2_pgentry_t *pl2e = NULL;
     l1_pgentry_t *pl1e;
     unsigned int  i;
     unsigned long v = s;
@@ -5543,6 +5543,7 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
     while ( v < e )
     {
         /* Clean up mappings mapped in the previous iteration. */
+        UNMAP_DOMAIN_PAGE(pl2e);
         UNMAP_DOMAIN_PAGE(pl3e);
 
         pl3e = virt_to_xen_l3e(v);
@@ -5560,6 +5561,7 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
         if ( l3e_get_flags(*pl3e) & _PAGE_PSE )
         {
             l2_pgentry_t *l2t;
+            mfn_t l2mfn;
 
             if ( l2_table_offset(v) == 0 &&
                  l1_table_offset(v) == 0 &&
@@ -5576,35 +5578,38 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
             }
 
             /* PAGE1GB: shatter the superpage and fall through. */
-            l2t = alloc_xen_pagetable();
-            if ( !l2t )
+            l2mfn = alloc_xen_pagetable_new();
+            if ( mfn_eq(l2mfn, INVALID_MFN) )
                 goto out;
 
+            l2t = map_domain_page(l2mfn);
             for ( i = 0; i < L2_PAGETABLE_ENTRIES; i++ )
                 l2e_write(l2t + i,
                           l2e_from_pfn(l3e_get_pfn(*pl3e) +
                                        (i << PAGETABLE_ORDER),
                                        l3e_get_flags(*pl3e)));
+            UNMAP_DOMAIN_PAGE(l2t);
+
             if ( locking )
                 spin_lock(&map_pgdir_lock);
             if ( (l3e_get_flags(*pl3e) & _PAGE_PRESENT) &&
                  (l3e_get_flags(*pl3e) & _PAGE_PSE) )
             {
-                l3e_write_atomic(pl3e, l3e_from_mfn(virt_to_mfn(l2t),
-                                                    __PAGE_HYPERVISOR));
-                l2t = NULL;
+                l3e_write_atomic(pl3e,
+                                 l3e_from_mfn(l2mfn, __PAGE_HYPERVISOR));
+                l2mfn = INVALID_MFN;
             }
             if ( locking )
                 spin_unlock(&map_pgdir_lock);
-            if ( l2t )
-                free_xen_pagetable(l2t);
+
+            free_xen_pagetable_new(l2mfn);
         }
 
         /*
          * The L3 entry has been verified to be present, and we've dealt with
          * 1G pages as well, so the L2 table cannot require allocation.
          */
-        pl2e = l3e_to_l2e(*pl3e) + l2_table_offset(v);
+        pl2e = map_l2t_from_l3e(*pl3e) + l2_table_offset(v);
 
         if ( !(l2e_get_flags(*pl2e) & _PAGE_PRESENT) )
         {
@@ -5632,41 +5637,45 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
             else
             {
                 l1_pgentry_t *l1t;
-
                 /* PSE: shatter the superpage and try again. */
-                l1t = alloc_xen_pagetable();
-                if ( !l1t )
+                mfn_t l1mfn = alloc_xen_pagetable_new();
+
+                if ( mfn_eq(l1mfn, INVALID_MFN) )
                     goto out;
 
+                l1t = map_domain_page(l1mfn);
                 for ( i = 0; i < L1_PAGETABLE_ENTRIES; i++ )
                     l1e_write(&l1t[i],
                               l1e_from_pfn(l2e_get_pfn(*pl2e) + i,
                                            l2e_get_flags(*pl2e) & ~_PAGE_PSE));
+                UNMAP_DOMAIN_PAGE(l1t);
+
                 if ( locking )
                     spin_lock(&map_pgdir_lock);
                 if ( (l2e_get_flags(*pl2e) & _PAGE_PRESENT) &&
                      (l2e_get_flags(*pl2e) & _PAGE_PSE) )
                 {
-                    l2e_write_atomic(pl2e, l2e_from_mfn(virt_to_mfn(l1t),
+                    l2e_write_atomic(pl2e, l2e_from_mfn(l1mfn,
                                                         __PAGE_HYPERVISOR));
-                    l1t = NULL;
+                    l1mfn = INVALID_MFN;
                 }
                 if ( locking )
                     spin_unlock(&map_pgdir_lock);
-                if ( l1t )
-                    free_xen_pagetable(l1t);
+
+                free_xen_pagetable_new(l1mfn);
             }
         }
         else
         {
             l1_pgentry_t nl1e, *l1t;
+            mfn_t l1mfn;
 
             /*
              * Ordinary 4kB mapping: The L2 entry has been verified to be
              * present, and we've dealt with 2M pages as well, so the L1 table
              * cannot require allocation.
              */
-            pl1e = l2e_to_l1e(*pl2e) + l1_table_offset(v);
+            pl1e = map_l1t_from_l2e(*pl2e) + l1_table_offset(v);
 
             /* Confirm the caller isn't trying to create new mappings. */
             if ( !(l1e_get_flags(*pl1e) & _PAGE_PRESENT) )
@@ -5677,6 +5686,7 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
                                (l1e_get_flags(*pl1e) & ~FLAGS_MASK) | nf);
 
             l1e_write_atomic(pl1e, nl1e);
+            UNMAP_DOMAIN_PAGE(pl1e);
             v += PAGE_SIZE;
 
             /*
@@ -5706,10 +5716,12 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
                 continue;
             }
 
-            l1t = l2e_to_l1e(*pl2e);
+            l1mfn = l2e_get_mfn(*pl2e);
+            l1t = map_domain_page(l1mfn);
             for ( i = 0; i < L1_PAGETABLE_ENTRIES; i++ )
                 if ( l1e_get_intpte(l1t[i]) != 0 )
                     break;
+            UNMAP_DOMAIN_PAGE(l1t);
             if ( i == L1_PAGETABLE_ENTRIES )
             {
                 /* Empty: zap the L2E and free the L1 page. */
@@ -5717,7 +5729,7 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
                 if ( locking )
                     spin_unlock(&map_pgdir_lock);
                 flush_area(NULL, FLUSH_TLB_GLOBAL); /* flush before free */
-                free_xen_pagetable(l1t);
+                free_xen_pagetable_new(l1mfn);
             }
             else if ( locking )
                 spin_unlock(&map_pgdir_lock);
@@ -5748,11 +5760,13 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
 
         {
             l2_pgentry_t *l2t;
+            mfn_t l2mfn = l3e_get_mfn(*pl3e);
 
-            l2t = l3e_to_l2e(*pl3e);
+            l2t = map_domain_page(l2mfn);
             for ( i = 0; i < L2_PAGETABLE_ENTRIES; i++ )
                 if ( l2e_get_intpte(l2t[i]) != 0 )
                     break;
+            UNMAP_DOMAIN_PAGE(l2t);
             if ( i == L2_PAGETABLE_ENTRIES )
             {
                 /* Empty: zap the L3E and free the L2 page. */
@@ -5760,7 +5774,7 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
                 if ( locking )
                     spin_unlock(&map_pgdir_lock);
                 flush_area(NULL, FLUSH_TLB_GLOBAL); /* flush before free */
-                free_xen_pagetable(l2t);
+                free_xen_pagetable_new(l2mfn);
             }
             else if ( locking )
                 spin_unlock(&map_pgdir_lock);
@@ -5773,6 +5787,7 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
     rc = 0;
 
  out:
+    unmap_domain_page(pl2e);
     unmap_domain_page(pl3e);
     return rc;
 }
-- 
2.16.6



From xen-devel-bounces@lists.xenproject.org Mon Jul 27 14:22:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jul 2020 14:22:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k041e-0007Gl-Pm; Mon, 27 Jul 2020 14:22:46 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wM/5=BG=xen.org=hx242@srs-us1.protection.inumbo.net>)
 id 1k041c-00075u-Os
 for xen-devel@lists.xenproject.org; Mon, 27 Jul 2020 14:22:44 +0000
X-Inumbo-ID: 989d5e54-d014-11ea-a7d9-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 989d5e54-d014-11ea-a7d9-12813bfff9fa;
 Mon, 27 Jul 2020 14:22:26 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:
 Sender:Reply-To:MIME-Version:Content-Type:Content-Transfer-Encoding:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=p+Zha4UMAvAuCQ2vvLKkoAQIt1J/pkiKRu6ISLJk4/g=; b=a+BF/QpCrT3d6dXzIQMg8IhgX5
 cxuYa+p3SMX7YBl6wvl86e4p5d8FHsc5qeauSykhH8NQtQ+Y3m25QQMWX11sceooBuh3HCb557jXa
 rvCUsxJdqpsh2Z7zrfTCWf3VA6Yq9yxTv66UgOEBx2ZF4Gle1y9ZQryQr+8C7ih/KGmw=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <hx242@xen.org>)
 id 1k041K-0001Nr-74; Mon, 27 Jul 2020 14:22:26 +0000
Received: from 54-240-197-233.amazon.com ([54.240.197.233]
 helo=u1bbd043a57dd5a.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <hx242@xen.org>)
 id 1k041J-0002w6-U2; Mon, 27 Jul 2020 14:22:26 +0000
From: Hongyan Xia <hx242@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v8 06/15] x86_64/mm: introduce pl2e in paging_init
Date: Mon, 27 Jul 2020 15:21:56 +0100
Message-Id: <fb7afef3e8af222700761d337ef3d6df11fc0f5f.1595857947.git.hongyxia@amazon.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1595857947.git.hongyxia@amazon.com>
References: <cover.1595857947.git.hongyxia@amazon.com>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, jgrall@amazon.com,
 Wei Liu <wl@xen.org>, Jan Beulich <jbeulich@suse.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Wei Liu <wei.liu2@citrix.com>

We will soon map and unmap pages in paging_init(). Introduce pl2e so
that we can use l2_ro_mpt to point to the page table itself.

No functional change.

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
Acked-by: Jan Beulich <jbeulich@suse.com>

---
Changed in v7:
- reword commit message.
---
 xen/arch/x86/x86_64/mm.c | 16 +++++++++-------
 1 file changed, 9 insertions(+), 7 deletions(-)

diff --git a/xen/arch/x86/x86_64/mm.c b/xen/arch/x86/x86_64/mm.c
index 102079a801..243014a119 100644
--- a/xen/arch/x86/x86_64/mm.c
+++ b/xen/arch/x86/x86_64/mm.c
@@ -479,7 +479,7 @@ void __init paging_init(void)
     unsigned long i, mpt_size, va;
     unsigned int n, memflags;
     l3_pgentry_t *l3_ro_mpt;
-    l2_pgentry_t *l2_ro_mpt = NULL;
+    l2_pgentry_t *pl2e = NULL, *l2_ro_mpt = NULL;
     struct page_info *l1_pg;
 
     /*
@@ -529,7 +529,7 @@ void __init paging_init(void)
             (L2_PAGETABLE_SHIFT - 3 + PAGE_SHIFT)));
 
         if ( cpu_has_page1gb &&
-             !((unsigned long)l2_ro_mpt & ~PAGE_MASK) &&
+             !((unsigned long)pl2e & ~PAGE_MASK) &&
              (mpt_size >> L3_PAGETABLE_SHIFT) > (i >> PAGETABLE_ORDER) )
         {
             unsigned int k, holes;
@@ -589,7 +589,7 @@ void __init paging_init(void)
             memset((void *)(RDWR_MPT_VIRT_START + (i << L2_PAGETABLE_SHIFT)),
                    0xFF, 1UL << L2_PAGETABLE_SHIFT);
         }
-        if ( !((unsigned long)l2_ro_mpt & ~PAGE_MASK) )
+        if ( !((unsigned long)pl2e & ~PAGE_MASK) )
         {
             if ( (l2_ro_mpt = alloc_xen_pagetable()) == NULL )
                 goto nomem;
@@ -597,13 +597,14 @@ void __init paging_init(void)
             l3e_write(&l3_ro_mpt[l3_table_offset(va)],
                       l3e_from_paddr(__pa(l2_ro_mpt),
                                      __PAGE_HYPERVISOR_RO | _PAGE_USER));
+            pl2e = l2_ro_mpt;
             ASSERT(!l2_table_offset(va));
         }
         /* NB. Cannot be GLOBAL: guest user mode should not see it. */
         if ( l1_pg )
-            l2e_write(l2_ro_mpt, l2e_from_page(
+            l2e_write(pl2e, l2e_from_page(
                 l1_pg, /*_PAGE_GLOBAL|*/_PAGE_PSE|_PAGE_USER|_PAGE_PRESENT));
-        l2_ro_mpt++;
+        pl2e++;
     }
 #undef CNT
 #undef MFN
@@ -613,6 +614,7 @@ void __init paging_init(void)
         goto nomem;
     compat_idle_pg_table_l2 = l2_ro_mpt;
     clear_page(l2_ro_mpt);
+    pl2e = l2_ro_mpt;
     /* Allocate and map the compatibility mode machine-to-phys table. */
     mpt_size = (mpt_size >> 1) + (1UL << (L2_PAGETABLE_SHIFT - 1));
     if ( mpt_size > RDWR_COMPAT_MPT_VIRT_END - RDWR_COMPAT_MPT_VIRT_START )
@@ -625,7 +627,7 @@ void __init paging_init(void)
              sizeof(*compat_machine_to_phys_mapping))
     BUILD_BUG_ON((sizeof(*frame_table) & ~sizeof(*frame_table)) % \
                  sizeof(*compat_machine_to_phys_mapping));
-    for ( i = 0; i < (mpt_size >> L2_PAGETABLE_SHIFT); i++, l2_ro_mpt++ )
+    for ( i = 0; i < (mpt_size >> L2_PAGETABLE_SHIFT); i++, pl2e++ )
     {
         memflags = MEMF_node(phys_to_nid(i <<
             (L2_PAGETABLE_SHIFT - 2 + PAGE_SHIFT)));
@@ -647,7 +649,7 @@ void __init paging_init(void)
                         (i << L2_PAGETABLE_SHIFT)),
                0xFF, 1UL << L2_PAGETABLE_SHIFT);
         /* NB. Cannot be GLOBAL as the ptes get copied into per-VM space. */
-        l2e_write(l2_ro_mpt, l2e_from_page(l1_pg, _PAGE_PSE|_PAGE_PRESENT));
+        l2e_write(pl2e, l2e_from_page(l1_pg, _PAGE_PSE|_PAGE_PRESENT));
     }
 #undef CNT
 #undef MFN
-- 
2.16.6



From xen-devel-bounces@lists.xenproject.org Mon Jul 27 14:22:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jul 2020 14:22:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k041j-0007J5-2L; Mon, 27 Jul 2020 14:22:51 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wM/5=BG=xen.org=hx242@srs-us1.protection.inumbo.net>)
 id 1k041h-00075u-Ot
 for xen-devel@lists.xenproject.org; Mon, 27 Jul 2020 14:22:49 +0000
X-Inumbo-ID: 997d665c-d014-11ea-a7d9-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 997d665c-d014-11ea-a7d9-12813bfff9fa;
 Mon, 27 Jul 2020 14:22:28 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:
 Sender:Reply-To:MIME-Version:Content-Type:Content-Transfer-Encoding:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=SodIzhOZtWgd7aTAQdqyttar/iXLlUvG2VmsW7l5TFQ=; b=0MxQzMgiFwdBg7PVvBvEoRmhDU
 E+yUT9+or+HWVwiyfn/D/ThmQHPkk8ENXofnZKH/Tfg7+dgXzV2n89oCGEDP/GnS7rUgoa2ouApYP
 qD2CqRZUMkoZNz3vX6myjYybY7nPLHpXECQsEwCrb2ihUbISBsHacV1vS0wDBa3No5Fg=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <hx242@xen.org>)
 id 1k041L-0001Nz-Lu; Mon, 27 Jul 2020 14:22:27 +0000
Received: from 54-240-197-233.amazon.com ([54.240.197.233]
 helo=u1bbd043a57dd5a.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <hx242@xen.org>)
 id 1k041L-0002w6-CO; Mon, 27 Jul 2020 14:22:27 +0000
From: Hongyan Xia <hx242@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v8 07/15] x86_64/mm: switch to new APIs in paging_init
Date: Mon, 27 Jul 2020 15:21:57 +0100
Message-Id: <9919850a82a7f189de2b5dcc62c55bc9d5337c4b.1595857947.git.hongyxia@amazon.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1595857947.git.hongyxia@amazon.com>
References: <cover.1595857947.git.hongyxia@amazon.com>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, jgrall@amazon.com,
 Wei Liu <wl@xen.org>, Jan Beulich <jbeulich@suse.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Wei Liu <wei.liu2@citrix.com>

Map and unmap pages instead of relying on the direct map.

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
Signed-off-by: Hongyan Xia <hongyxia@amazon.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>

---
Changed in v8:
- replace l3/2_ro_mpt_mfn with just mfn since their lifetimes do not
  overlap

Changed in v7:
- use the new alloc_map_clear_xen_pt() helper.
- move the unmap of pl3t up a bit.
- remove the unmaps in the nomem path.
---
 xen/arch/x86/x86_64/mm.c | 35 +++++++++++++++++++++--------------
 1 file changed, 21 insertions(+), 14 deletions(-)

diff --git a/xen/arch/x86/x86_64/mm.c b/xen/arch/x86/x86_64/mm.c
index 243014a119..ebf21d505b 100644
--- a/xen/arch/x86/x86_64/mm.c
+++ b/xen/arch/x86/x86_64/mm.c
@@ -481,6 +481,7 @@ void __init paging_init(void)
     l3_pgentry_t *l3_ro_mpt;
     l2_pgentry_t *pl2e = NULL, *l2_ro_mpt = NULL;
     struct page_info *l1_pg;
+    mfn_t mfn;
 
     /*
      * We setup the L3s for 1:1 mapping if host support memory hotplug
@@ -493,22 +494,23 @@ void __init paging_init(void)
         if ( !(l4e_get_flags(idle_pg_table[l4_table_offset(va)]) &
               _PAGE_PRESENT) )
         {
-            l3_pgentry_t *pl3t = alloc_xen_pagetable();
+            mfn_t l3mfn;
+            l3_pgentry_t *pl3t = alloc_map_clear_xen_pt(&l3mfn);
 
             if ( !pl3t )
                 goto nomem;
-            clear_page(pl3t);
+            UNMAP_DOMAIN_PAGE(pl3t);
             l4e_write(&idle_pg_table[l4_table_offset(va)],
-                      l4e_from_paddr(__pa(pl3t), __PAGE_HYPERVISOR_RW));
+                      l4e_from_mfn(l3mfn, __PAGE_HYPERVISOR_RW));
         }
     }
 
     /* Create user-accessible L2 directory to map the MPT for guests. */
-    if ( (l3_ro_mpt = alloc_xen_pagetable()) == NULL )
+    l3_ro_mpt = alloc_map_clear_xen_pt(&mfn);
+    if ( !l3_ro_mpt )
         goto nomem;
-    clear_page(l3_ro_mpt);
     l4e_write(&idle_pg_table[l4_table_offset(RO_MPT_VIRT_START)],
-              l4e_from_paddr(__pa(l3_ro_mpt), __PAGE_HYPERVISOR_RO | _PAGE_USER));
+              l4e_from_mfn(mfn, __PAGE_HYPERVISOR_RO | _PAGE_USER));
 
     /*
      * Allocate and map the machine-to-phys table.
@@ -591,12 +593,14 @@ void __init paging_init(void)
         }
         if ( !((unsigned long)pl2e & ~PAGE_MASK) )
         {
-            if ( (l2_ro_mpt = alloc_xen_pagetable()) == NULL )
+            UNMAP_DOMAIN_PAGE(l2_ro_mpt);
+
+            l2_ro_mpt = alloc_map_clear_xen_pt(&mfn);
+            if ( !l2_ro_mpt )
                 goto nomem;
-            clear_page(l2_ro_mpt);
+
             l3e_write(&l3_ro_mpt[l3_table_offset(va)],
-                      l3e_from_paddr(__pa(l2_ro_mpt),
-                                     __PAGE_HYPERVISOR_RO | _PAGE_USER));
+                      l3e_from_mfn(mfn, __PAGE_HYPERVISOR_RO | _PAGE_USER));
             pl2e = l2_ro_mpt;
             ASSERT(!l2_table_offset(va));
         }
@@ -608,13 +612,16 @@ void __init paging_init(void)
     }
 #undef CNT
 #undef MFN
+    UNMAP_DOMAIN_PAGE(l2_ro_mpt);
+    UNMAP_DOMAIN_PAGE(l3_ro_mpt);
 
     /* Create user-accessible L2 directory to map the MPT for compat guests. */
-    if ( (l2_ro_mpt = alloc_xen_pagetable()) == NULL )
+    mfn = alloc_xen_pagetable_new();
+    if ( mfn_eq(mfn, INVALID_MFN) )
         goto nomem;
-    compat_idle_pg_table_l2 = l2_ro_mpt;
-    clear_page(l2_ro_mpt);
-    pl2e = l2_ro_mpt;
+    compat_idle_pg_table_l2 = map_domain_page_global(mfn);
+    clear_page(compat_idle_pg_table_l2);
+    pl2e = compat_idle_pg_table_l2;
     /* Allocate and map the compatibility mode machine-to-phys table. */
     mpt_size = (mpt_size >> 1) + (1UL << (L2_PAGETABLE_SHIFT - 1));
     if ( mpt_size > RDWR_COMPAT_MPT_VIRT_END - RDWR_COMPAT_MPT_VIRT_START )
-- 
2.16.6



From xen-devel-bounces@lists.xenproject.org Mon Jul 27 14:22:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jul 2020 14:22:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k041o-0007MW-Cz; Mon, 27 Jul 2020 14:22:56 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wM/5=BG=xen.org=hx242@srs-us1.protection.inumbo.net>)
 id 1k041m-00075u-P5
 for xen-devel@lists.xenproject.org; Mon, 27 Jul 2020 14:22:54 +0000
X-Inumbo-ID: 9a3215de-d014-11ea-a7d9-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 9a3215de-d014-11ea-a7d9-12813bfff9fa;
 Mon, 27 Jul 2020 14:22:29 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:
 Sender:Reply-To:MIME-Version:Content-Type:Content-Transfer-Encoding:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=fUz58rQHZzbX7UbQq79ICRTvYmlrlPqPrctCYP9EW/8=; b=iAvA+u+KbfebHtfq3XewBXJjf7
 gw+Fp0x0DhASWsqn/5/3m5NtHvbLdf9QEznYZqKlGZATniUgtxZauAzvI2o5vmpMF+ccTHImYLUek
 4M4IfMES0LIbHGLfjQ8r1VgxL7gg+JTnb9QGOfNtbZA68Ui+B3OZrI2qCjL+k3feZVLA=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <hx242@xen.org>)
 id 1k041N-0001O5-3P; Mon, 27 Jul 2020 14:22:29 +0000
Received: from 54-240-197-233.amazon.com ([54.240.197.233]
 helo=u1bbd043a57dd5a.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <hx242@xen.org>)
 id 1k041M-0002w6-Qc; Mon, 27 Jul 2020 14:22:29 +0000
From: Hongyan Xia <hx242@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v8 08/15] x86_64/mm: switch to new APIs in setup_m2p_table
Date: Mon, 27 Jul 2020 15:21:58 +0100
Message-Id: <c710edf6abb7cc781238fcd3ea003cff2e16af6e.1595857947.git.hongyxia@amazon.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1595857947.git.hongyxia@amazon.com>
References: <cover.1595857947.git.hongyxia@amazon.com>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, jgrall@amazon.com,
 Wei Liu <wl@xen.org>, Jan Beulich <jbeulich@suse.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Wei Liu <wei.liu2@citrix.com>

While doing so, avoid repetitively mapping l2_ro_mpt by keeping the
mapping across loop iterations, and only unmap and remap it when
crossing a 1G boundary.

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
Signed-off-by: Hongyan Xia <hongyxia@amazon.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>

---
Changed in v8:
- re-structure if condition around l2_ro_mpt.
- reword the commit message.

Changed in v7:
- avoid repetitive mapping of l2_ro_mpt.
- edit commit message.
- switch to alloc_map_clear_xen_pt().
---
 xen/arch/x86/x86_64/mm.c | 32 +++++++++++++++++++-------------
 1 file changed, 19 insertions(+), 13 deletions(-)

diff --git a/xen/arch/x86/x86_64/mm.c b/xen/arch/x86/x86_64/mm.c
index ebf21d505b..640f561faf 100644
--- a/xen/arch/x86/x86_64/mm.c
+++ b/xen/arch/x86/x86_64/mm.c
@@ -385,7 +385,8 @@ static int setup_m2p_table(struct mem_hotadd_info *info)
 
     ASSERT(l4e_get_flags(idle_pg_table[l4_table_offset(RO_MPT_VIRT_START)])
             & _PAGE_PRESENT);
-    l3_ro_mpt = l4e_to_l3e(idle_pg_table[l4_table_offset(RO_MPT_VIRT_START)]);
+    l3_ro_mpt = map_l3t_from_l4e(
+                    idle_pg_table[l4_table_offset(RO_MPT_VIRT_START)]);
 
     smap = (info->spfn & (~((1UL << (L2_PAGETABLE_SHIFT - 3)) -1)));
     emap = ((info->epfn + ((1UL << (L2_PAGETABLE_SHIFT - 3)) - 1 )) &
@@ -403,6 +404,10 @@ static int setup_m2p_table(struct mem_hotadd_info *info)
     i = smap;
     while ( i < emap )
     {
+        if ( (RO_MPT_VIRT_START + i * sizeof(*machine_to_phys_mapping)) &
+             ((1UL << L3_PAGETABLE_SHIFT) - 1) )
+            UNMAP_DOMAIN_PAGE(l2_ro_mpt);
+
         switch ( m2p_mapped(i) )
         {
         case M2P_1G_MAPPED:
@@ -438,32 +443,31 @@ static int setup_m2p_table(struct mem_hotadd_info *info)
 
             ASSERT(!(l3e_get_flags(l3_ro_mpt[l3_table_offset(va)]) &
                   _PAGE_PSE));
-            if ( l3e_get_flags(l3_ro_mpt[l3_table_offset(va)]) &
-              _PAGE_PRESENT )
-                l2_ro_mpt = l3e_to_l2e(l3_ro_mpt[l3_table_offset(va)]) +
-                  l2_table_offset(va);
+            if ( l2_ro_mpt )
+                /* nothing */;
+            else if ( l3e_get_flags(l3_ro_mpt[l3_table_offset(va)]) &
+                      _PAGE_PRESENT )
+                l2_ro_mpt = map_l2t_from_l3e(l3_ro_mpt[l3_table_offset(va)]);
             else
             {
-                l2_ro_mpt = alloc_xen_pagetable();
+                mfn_t l2_ro_mpt_mfn;
+
+                l2_ro_mpt = alloc_map_clear_xen_pt(&l2_ro_mpt_mfn);
                 if ( !l2_ro_mpt )
                 {
                     ret = -ENOMEM;
                     goto error;
                 }
 
-                clear_page(l2_ro_mpt);
                 l3e_write(&l3_ro_mpt[l3_table_offset(va)],
-                          l3e_from_paddr(__pa(l2_ro_mpt),
-                                         __PAGE_HYPERVISOR_RO | _PAGE_USER));
-                l2_ro_mpt += l2_table_offset(va);
+                          l3e_from_mfn(l2_ro_mpt_mfn,
+                                       __PAGE_HYPERVISOR_RO | _PAGE_USER));
             }
 
             /* NB. Cannot be GLOBAL: guest user mode should not see it. */
-            l2e_write(l2_ro_mpt, l2e_from_mfn(mfn,
+            l2e_write(&l2_ro_mpt[l2_table_offset(va)], l2e_from_mfn(mfn,
                    /*_PAGE_GLOBAL|*/_PAGE_PSE|_PAGE_USER|_PAGE_PRESENT));
         }
-        if ( !((unsigned long)l2_ro_mpt & ~PAGE_MASK) )
-            l2_ro_mpt = NULL;
         i += ( 1UL << (L2_PAGETABLE_SHIFT - 3));
     }
 #undef CNT
@@ -471,6 +475,8 @@ static int setup_m2p_table(struct mem_hotadd_info *info)
 
     ret = setup_compat_m2p_table(info);
 error:
+    unmap_domain_page(l2_ro_mpt);
+    unmap_domain_page(l3_ro_mpt);
     return ret;
 }
 
-- 
2.16.6



From xen-devel-bounces@lists.xenproject.org Mon Jul 27 14:23:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jul 2020 14:23:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k041s-0007RR-Rd; Mon, 27 Jul 2020 14:23:00 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wM/5=BG=xen.org=hx242@srs-us1.protection.inumbo.net>)
 id 1k041r-00075u-P6
 for xen-devel@lists.xenproject.org; Mon, 27 Jul 2020 14:22:59 +0000
X-Inumbo-ID: 9a3215df-d014-11ea-a7d9-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 9a3215df-d014-11ea-a7d9-12813bfff9fa;
 Mon, 27 Jul 2020 14:22:30 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:
 Sender:Reply-To:MIME-Version:Content-Type:Content-Transfer-Encoding:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=y74N3TUfvk3AYiEnS0qck1kYQhbnuTest5No5Q+t0p8=; b=JbCvw1xbywxErlbJoPNr88Lugh
 iZ3LfHY6W9ihNpKRjpIGRvgVGSHgso8/IEEyyIrIdpG22awMosHugnTLCTjubtUuID2QmPzTBxiIU
 jCE5PP/EvUmucSpThm8wFkEOMdMT/So67ls45lF9oEYFtWK8lOqCyje1symr6jnV/qc0=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <hx242@xen.org>)
 id 1k041O-0001OC-5W; Mon, 27 Jul 2020 14:22:30 +0000
Received: from 54-240-197-233.amazon.com ([54.240.197.233]
 helo=u1bbd043a57dd5a.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <hx242@xen.org>)
 id 1k041N-0002w6-SZ; Mon, 27 Jul 2020 14:22:30 +0000
From: Hongyan Xia <hx242@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v8 09/15] efi: use new page table APIs in copy_mapping
Date: Mon, 27 Jul 2020 15:21:59 +0100
Message-Id: <971376c3e2c04248ca0e5ef53ce59b8207bf1c6e.1595857947.git.hongyxia@amazon.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1595857947.git.hongyxia@amazon.com>
References: <cover.1595857947.git.hongyxia@amazon.com>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: jgrall@amazon.com, Jan Beulich <jbeulich@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Wei Liu <wei.liu2@citrix.com>

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
Signed-off-by: Hongyan Xia <hongyxia@amazon.com>

---
Changed in v8:
- remove redundant commit message.
- unmap l3src based on va instead of mfn.
- re-structure if condition around l3dst.

Changed in v7:
- hoist l3 variables out of the loop to avoid repetitive mappings.
---
 xen/common/efi/boot.c | 28 +++++++++++++++++++++-------
 1 file changed, 21 insertions(+), 7 deletions(-)

diff --git a/xen/common/efi/boot.c b/xen/common/efi/boot.c
index 5a520bf21d..f116759538 100644
--- a/xen/common/efi/boot.c
+++ b/xen/common/efi/boot.c
@@ -6,6 +6,7 @@
 #include <xen/compile.h>
 #include <xen/ctype.h>
 #include <xen/dmi.h>
+#include <xen/domain_page.h>
 #include <xen/init.h>
 #include <xen/keyhandler.h>
 #include <xen/lib.h>
@@ -1442,29 +1443,42 @@ static __init void copy_mapping(unsigned long mfn, unsigned long end,
                                                  unsigned long emfn))
 {
     unsigned long next;
+    l3_pgentry_t *l3src = NULL, *l3dst = NULL;
 
     for ( ; mfn < end; mfn = next )
     {
         l4_pgentry_t l4e = efi_l4_pgtable[l4_table_offset(mfn << PAGE_SHIFT)];
-        l3_pgentry_t *l3src, *l3dst;
         unsigned long va = (unsigned long)mfn_to_virt(mfn);
 
+        if ( !(mfn & ((1UL << (L4_PAGETABLE_SHIFT - PAGE_SHIFT)) - 1)) )
+            UNMAP_DOMAIN_PAGE(l3dst);
+        if ( !(va & ((1UL << L4_PAGETABLE_SHIFT) - 1)) )
+            UNMAP_DOMAIN_PAGE(l3src);
         next = mfn + (1UL << (L3_PAGETABLE_SHIFT - PAGE_SHIFT));
         if ( !is_valid(mfn, min(next, end)) )
             continue;
-        if ( !(l4e_get_flags(l4e) & _PAGE_PRESENT) )
+
+        if ( l3dst )
+            /* nothing */;
+        else if ( !(l4e_get_flags(l4e) & _PAGE_PRESENT) )
         {
-            l3dst = alloc_xen_pagetable();
+            mfn_t l3mfn;
+
+            l3dst = alloc_map_clear_xen_pt(&l3mfn);
             BUG_ON(!l3dst);
-            clear_page(l3dst);
             efi_l4_pgtable[l4_table_offset(mfn << PAGE_SHIFT)] =
-                l4e_from_paddr(virt_to_maddr(l3dst), __PAGE_HYPERVISOR);
+                l4e_from_mfn(l3mfn, __PAGE_HYPERVISOR);
         }
         else
-            l3dst = l4e_to_l3e(l4e);
-        l3src = l4e_to_l3e(idle_pg_table[l4_table_offset(va)]);
+            l3dst = map_l3t_from_l4e(l4e);
+
+        if ( !l3src )
+            l3src = map_l3t_from_l4e(idle_pg_table[l4_table_offset(va)]);
         l3dst[l3_table_offset(mfn << PAGE_SHIFT)] = l3src[l3_table_offset(va)];
     }
+
+    unmap_domain_page(l3src);
+    unmap_domain_page(l3dst);
 }
 
 static bool __init ram_range_valid(unsigned long smfn, unsigned long emfn)
-- 
2.16.6



From xen-devel-bounces@lists.xenproject.org Mon Jul 27 14:36:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jul 2020 14:36:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k04EU-0000VE-3y; Mon, 27 Jul 2020 14:36:02 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=iIX7=BG=whatever-company.com=a.perdaens@srs-us1.protection.inumbo.net>)
 id 1k04ET-0000V5-4Y
 for xen-devel@lists.xenproject.org; Mon, 27 Jul 2020 14:36:01 +0000
X-Inumbo-ID: 7d144f24-d016-11ea-8ac7-bc764e2007e4
Received: from mail-wr1-x433.google.com (unknown [2a00:1450:4864:20::433])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7d144f24-d016-11ea-8ac7-bc764e2007e4;
 Mon, 27 Jul 2020 14:36:00 +0000 (UTC)
Received: by mail-wr1-x433.google.com with SMTP id l2so4577579wrc.7
 for <xen-devel@lists.xenproject.org>; Mon, 27 Jul 2020 07:35:59 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=elium.com; s=google;
 h=from:content-transfer-encoding:mime-version:subject:message-id:date
 :to; bh=w3mAClGwbtL4eYyuh3YjOyz5SaSi3M232OPbxlO7Kvg=;
 b=kT72GXLm2QqoL/vDo+BLIyBKHBSxH1/IxHn9AYUELKo9gaNGI73Xyxavmlk+lz8K8P
 W667GDaOvzg3Qs5qc7JOnGWhqzETUk3J1fUcFEoAFAUJehbKLNDfxnlOjAcZduqMqriu
 vjAQCeB1sQi2VJMRiWGQyt2Hp6qBzFvMI6hPhXqGSCKe8OU1R9uzrbat71uTlfpDc4BR
 RxrSEF+fHps0mMA0rK3PvwVLzqrkox8gqtlWfkqn/MHbSEzB8aDwEbj397tHNyr2OZbP
 ehcpum0fA4GEYnrWGICcD6vs25ChDjP10ZlEaxG2XNSc7p/4LdL3bxA13YUmdeTQm2TM
 swgw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:content-transfer-encoding:mime-version
 :subject:message-id:date:to;
 bh=w3mAClGwbtL4eYyuh3YjOyz5SaSi3M232OPbxlO7Kvg=;
 b=smqvysUWr7cYz3oq0GOhxjNi4/nyNvFfdZyfosejGeDM44y7Zh7j4gkaajWU/+RFyx
 VgPF75D5wgOfQv5sYCfvBFheirhIiW1bpr9uW9O5+UzlHSliQy1pJ4h4O0f1KPXCf7dA
 i8qzZ5UXh/IgI1/7fds6Y4XP4KMf/8etz7rMp8xVWzLckJF3LVXD72ai3UOmbQfBePel
 yMlHtk37vXtaZx/d2malL7he0prNk14HoaHluKrdvAgkeGGFUyqV2r50EBZtT9TBFf36
 k1FOxFqkrFf6FVnxpk/Vq6vGoIoQpG8PI603XHC9ylimlWwUeTtffJUI6oqjQiITIbgb
 TmUA==
X-Gm-Message-State: AOAM533ZrXyAKZq2AbCj9ORwe3dfZn0gzS+yDHAsBRCoXn+CKfxqdOz7
 OMYmbhxU3Lk7fypE4BpCKJNP76E5PoI=
X-Google-Smtp-Source: ABdhPJxzLq5vJ7Jr1p6TwHVhtJA6qMvI6HL5i6AkAOwIuNEE+vwZA/I8Y7J8RL3SimpZcOyBqWvOfQ==
X-Received: by 2002:a5d:6990:: with SMTP id g16mr20072972wru.131.1595860558537; 
 Mon, 27 Jul 2020 07:35:58 -0700 (PDT)
Received: from [10.192.1.211] ([37.19.15.130])
 by smtp.gmail.com with ESMTPSA id f9sm12631505wru.47.2020.07.27.07.35.57
 for <xen-devel@lists.xenproject.org>
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Mon, 27 Jul 2020 07:35:58 -0700 (PDT)
From: Antoine Perdaens <a.perdaens@elium.com>
Content-Type: text/plain;
	charset=utf-8
Content-Transfer-Encoding: quoted-printable
Mime-Version: 1.0 (Mac OS X Mail 13.4 \(3608.80.23.2.2\))
Subject: qdisk error with rbd
Message-Id: <81E74684-4758-4647-BCE5-8C881067ABC9@elium.com>
Date: Mon, 27 Jul 2020 16:35:56 +0200
To: xen-devel@lists.xenproject.org
X-Mailer: Apple Mail (2.3608.80.23.2.2)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hello all,

Our current setup is based on Xen 4.11 + Qemu 3.1 with rbd disks. An
example of a disk that works:
"vdev=xvda1,backendtype=qdisk,target=rbd:rbd-pool/disk-02-root:id=rbd".

The same setup with Qemu 5.0 gives this error in /var/log/xen/qemu-dm-*:
"qemu-system-i386: failed to create 'qdisk' device '51713': failed to
create drive: Could not open 'rbd:rbd-pool/disk-02-root:id=rbd': No
such file or directory"

Looking at the source in xen-block.c, it looks like the driver is forced
to "file", which wouldn't work for other types of block device.

It looks like a regression from the introduction of xen-disk.

Regards,
Antoine


From xen-devel-bounces@lists.xenproject.org Mon Jul 27 14:39:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jul 2020 14:39:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k04IC-0000e4-MU; Mon, 27 Jul 2020 14:39:52 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wM/5=BG=xen.org=hx242@srs-us1.protection.inumbo.net>)
 id 1k04IB-0000dt-9p
 for xen-devel@lists.xenproject.org; Mon, 27 Jul 2020 14:39:51 +0000
X-Inumbo-ID: 06c97230-d017-11ea-8ac7-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 06c97230-d017-11ea-8ac7-bc764e2007e4;
 Mon, 27 Jul 2020 14:39:50 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:
 Sender:Reply-To:MIME-Version:Content-Type:Content-Transfer-Encoding:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=0SVN7QjGjE3dnCfA1k5zj/jcGmdnfLnwdJ2VVUqhG5k=; b=WonXnoIWsAqA230psvX0In7bWl
 uVzIRmyiajs2qXzMAeSNEdQ1atSLNx1KeTWA4BDng/75VqLa26X70uHJi5Q4lg2bAI7T94IdQGQx0
 4AxmW14Ua6+sqiJr6c/EDM6OLN/uFEwFC10xYOK41gepNkgHc40jZN6JCKfbMn4sS+t4=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <hx242@xen.org>)
 id 1k04IA-0001kp-60; Mon, 27 Jul 2020 14:39:50 +0000
Received: from 54-240-197-233.amazon.com ([54.240.197.233]
 helo=u1bbd043a57dd5a.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <hx242@xen.org>)
 id 1k041Q-0002w6-OG; Mon, 27 Jul 2020 14:22:33 +0000
From: Hongyan Xia <hx242@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v8 11/15] x86/smpboot: add exit path for clone_mapping()
Date: Mon, 27 Jul 2020 15:22:01 +0100
Message-Id: <c4c7a913319a6c9ae8e9823f8985570dfe7054e8.1595857947.git.hongyxia@amazon.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1595857947.git.hongyxia@amazon.com>
References: <cover.1595857947.git.hongyxia@amazon.com>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, jgrall@amazon.com,
 Wei Liu <wl@xen.org>, Jan Beulich <jbeulich@suse.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Wei Liu <wei.liu2@citrix.com>

We will soon need to clean up page table mappings in the exit path.

No functional change.

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
Signed-off-by: Hongyan Xia <hongyxia@amazon.com>

---
Changed in v7:
- edit commit message.
- begin with rc = 0 and set it to -ENOMEM ahead of if().
---
 xen/arch/x86/smpboot.c | 16 +++++++++++-----
 1 file changed, 11 insertions(+), 5 deletions(-)

diff --git a/xen/arch/x86/smpboot.c b/xen/arch/x86/smpboot.c
index 5708573c41..05df08f945 100644
--- a/xen/arch/x86/smpboot.c
+++ b/xen/arch/x86/smpboot.c
@@ -676,6 +676,7 @@ static int clone_mapping(const void *ptr, root_pgentry_t *rpt)
     l3_pgentry_t *pl3e;
     l2_pgentry_t *pl2e;
     l1_pgentry_t *pl1e;
+    int rc = 0;
 
     /*
      * Sanity check 'linear'.  We only allow cloning from the Xen virtual
@@ -716,7 +717,7 @@ static int clone_mapping(const void *ptr, root_pgentry_t *rpt)
             pl1e = l2e_to_l1e(*pl2e) + l1_table_offset(linear);
             flags = l1e_get_flags(*pl1e);
             if ( !(flags & _PAGE_PRESENT) )
-                return 0;
+                goto out;
             pfn = l1e_get_pfn(*pl1e);
         }
     }
@@ -724,8 +725,9 @@ static int clone_mapping(const void *ptr, root_pgentry_t *rpt)
     if ( !(root_get_flags(rpt[root_table_offset(linear)]) & _PAGE_PRESENT) )
     {
         pl3e = alloc_xen_pagetable();
+        rc = -ENOMEM;
         if ( !pl3e )
-            return -ENOMEM;
+            goto out;
         clear_page(pl3e);
         l4e_write(&rpt[root_table_offset(linear)],
                   l4e_from_paddr(__pa(pl3e), __PAGE_HYPERVISOR));
@@ -738,8 +740,9 @@ static int clone_mapping(const void *ptr, root_pgentry_t *rpt)
     if ( !(l3e_get_flags(*pl3e) & _PAGE_PRESENT) )
     {
         pl2e = alloc_xen_pagetable();
+        rc = -ENOMEM;
         if ( !pl2e )
-            return -ENOMEM;
+            goto out;
         clear_page(pl2e);
         l3e_write(pl3e, l3e_from_paddr(__pa(pl2e), __PAGE_HYPERVISOR));
     }
@@ -754,8 +757,9 @@ static int clone_mapping(const void *ptr, root_pgentry_t *rpt)
     if ( !(l2e_get_flags(*pl2e) & _PAGE_PRESENT) )
     {
         pl1e = alloc_xen_pagetable();
+        rc = -ENOMEM;
         if ( !pl1e )
-            return -ENOMEM;
+            goto out;
         clear_page(pl1e);
         l2e_write(pl2e, l2e_from_paddr(__pa(pl1e), __PAGE_HYPERVISOR));
     }
@@ -776,7 +780,9 @@ static int clone_mapping(const void *ptr, root_pgentry_t *rpt)
     else
         l1e_write(pl1e, l1e_from_pfn(pfn, flags));
 
-    return 0;
+    rc = 0;
+ out:
+    return rc;
 }
 
 DEFINE_PER_CPU(root_pgentry_t *, root_pgt);
-- 
2.16.6



From xen-devel-bounces@lists.xenproject.org Mon Jul 27 14:39:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jul 2020 14:39:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k04IC-0000eA-UD; Mon, 27 Jul 2020 14:39:52 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wM/5=BG=xen.org=hx242@srs-us1.protection.inumbo.net>)
 id 1k04IC-0000dz-0t
 for xen-devel@lists.xenproject.org; Mon, 27 Jul 2020 14:39:52 +0000
X-Inumbo-ID: 06c48874-d017-11ea-a7dc-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 06c48874-d017-11ea-a7dc-12813bfff9fa;
 Mon, 27 Jul 2020 14:39:50 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:
 Sender:Reply-To:MIME-Version:Content-Type:Content-Transfer-Encoding:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=ky65fKbvqZ7A8L5Gk8ErEdsQ2qdgoYphiYBmd6L4iJg=; b=DVityTAwQyVGVd1B7Zqmo7qwz0
 vevx4Ecb6lvT+slCDPePZ0AhFfzNmvQOEyGc77ehhHQM/d0F6+xt3YgPwihcyWRboZ9N9bpyXrDg3
 NdHVJvlhalWof5smlhW08eCrKeI7PUxMmDjryzY5ypRY9/zl2nrZfWOnndl+WcofOeas=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <hx242@xen.org>)
 id 1k04IA-0001kl-1W; Mon, 27 Jul 2020 14:39:50 +0000
Received: from 54-240-197-233.amazon.com ([54.240.197.233]
 helo=u1bbd043a57dd5a.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <hx242@xen.org>)
 id 1k041W-0002w6-H1; Mon, 27 Jul 2020 14:22:38 +0000
From: Hongyan Xia <hx242@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v8 15/15] x86/mm: drop _new suffix for page table APIs
Date: Mon, 27 Jul 2020 15:22:05 +0100
Message-Id: <5ee357f27bb604305dca480fecb8b56e9de5d8d3.1595857947.git.hongyxia@amazon.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1595857947.git.hongyxia@amazon.com>
References: <cover.1595857947.git.hongyxia@amazon.com>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, jgrall@amazon.com,
 Wei Liu <wl@xen.org>, Jan Beulich <jbeulich@suse.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Wei Liu <wei.liu2@citrix.com>

No functional change.

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
Signed-off-by: Hongyan Xia <hongyxia@amazon.com>
Acked-by: Jan Beulich <jbeulich@suse.com>
---
 xen/arch/x86/mm.c        | 44 ++++++++++++++++++++++----------------------
 xen/arch/x86/smpboot.c   |  6 +++---
 xen/arch/x86/x86_64/mm.c |  2 +-
 xen/include/asm-x86/mm.h |  4 ++--
 4 files changed, 28 insertions(+), 28 deletions(-)

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 8348f6329f..465a5bf0df 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -356,7 +356,7 @@ void __init arch_init_memory(void)
             ASSERT(root_pgt_pv_xen_slots < ROOT_PAGETABLE_PV_XEN_SLOTS);
             if ( l4_table_offset(split_va) == l4_table_offset(split_va - 1) )
             {
-                mfn_t l3mfn = alloc_xen_pagetable_new();
+                mfn_t l3mfn = alloc_xen_pagetable();
 
                 if ( !mfn_eq(l3mfn, INVALID_MFN) )
                 {
@@ -4931,7 +4931,7 @@ int mmcfg_intercept_write(
  * them. The caller must check whether the allocation has succeeded, and only
  * pass valid MFNs to map_domain_page().
  */
-mfn_t alloc_xen_pagetable_new(void)
+mfn_t alloc_xen_pagetable(void)
 {
     if ( system_state != SYS_STATE_early_boot )
     {
@@ -4945,7 +4945,7 @@ mfn_t alloc_xen_pagetable_new(void)
 }
 
 /* mfn can be INVALID_MFN */
-void free_xen_pagetable_new(mfn_t mfn)
+void free_xen_pagetable(mfn_t mfn)
 {
     if ( system_state != SYS_STATE_early_boot && !mfn_eq(mfn, INVALID_MFN) )
         free_domheap_page(mfn_to_page(mfn));
@@ -4953,7 +4953,7 @@ void free_xen_pagetable_new(mfn_t mfn)
 
 void *alloc_map_clear_xen_pt(mfn_t *pmfn)
 {
-    mfn_t mfn = alloc_xen_pagetable_new();
+    mfn_t mfn = alloc_xen_pagetable();
     void *ret;
 
     if ( mfn_eq(mfn, INVALID_MFN) )
@@ -4999,7 +4999,7 @@ static l3_pgentry_t *virt_to_xen_l3e(unsigned long v)
         }
         if ( locking )
             spin_unlock(&map_pgdir_lock);
-        free_xen_pagetable_new(l3mfn);
+        free_xen_pagetable(l3mfn);
     }
 
     return map_l3t_from_l4e(*pl4e) + l3_table_offset(v);
@@ -5034,7 +5034,7 @@ static l2_pgentry_t *virt_to_xen_l2e(unsigned long v)
         }
         if ( locking )
             spin_unlock(&map_pgdir_lock);
-        free_xen_pagetable_new(l2mfn);
+        free_xen_pagetable(l2mfn);
     }
 
     BUG_ON(l3e_get_flags(*pl3e) & _PAGE_PSE);
@@ -5073,7 +5073,7 @@ l1_pgentry_t *virt_to_xen_l1e(unsigned long v)
         }
         if ( locking )
             spin_unlock(&map_pgdir_lock);
-        free_xen_pagetable_new(l1mfn);
+        free_xen_pagetable(l1mfn);
     }
 
     BUG_ON(l2e_get_flags(*pl2e) & _PAGE_PSE);
@@ -5182,10 +5182,10 @@ int map_pages_to_xen(
                         ol2e = l2t[i];
                         if ( (l2e_get_flags(ol2e) & _PAGE_PRESENT) &&
                              !(l2e_get_flags(ol2e) & _PAGE_PSE) )
-                            free_xen_pagetable_new(l2e_get_mfn(ol2e));
+                            free_xen_pagetable(l2e_get_mfn(ol2e));
                     }
                     unmap_domain_page(l2t);
-                    free_xen_pagetable_new(l3e_get_mfn(ol3e));
+                    free_xen_pagetable(l3e_get_mfn(ol3e));
                 }
             }
 
@@ -5224,7 +5224,7 @@ int map_pages_to_xen(
                 continue;
             }
 
-            l2mfn = alloc_xen_pagetable_new();
+            l2mfn = alloc_xen_pagetable();
             if ( mfn_eq(l2mfn, INVALID_MFN) )
                 goto out;
 
@@ -5252,7 +5252,7 @@ int map_pages_to_xen(
                 spin_unlock(&map_pgdir_lock);
             flush_area(virt, flush_flags);
 
-            free_xen_pagetable_new(l2mfn);
+            free_xen_pagetable(l2mfn);
         }
 
         pl2e = virt_to_xen_l2e(virt);
@@ -5286,7 +5286,7 @@ int map_pages_to_xen(
                         flush_flags(l1e_get_flags(l1t[i]));
                     flush_area(virt, flush_flags);
                     unmap_domain_page(l1t);
-                    free_xen_pagetable_new(l2e_get_mfn(ol2e));
+                    free_xen_pagetable(l2e_get_mfn(ol2e));
                 }
             }
 
@@ -5331,7 +5331,7 @@ int map_pages_to_xen(
                     goto check_l3;
                 }
 
-                l1mfn = alloc_xen_pagetable_new();
+                l1mfn = alloc_xen_pagetable();
                 if ( mfn_eq(l1mfn, INVALID_MFN) )
                     goto out;
 
@@ -5358,7 +5358,7 @@ int map_pages_to_xen(
                     spin_unlock(&map_pgdir_lock);
                 flush_area(virt, flush_flags);
 
-                free_xen_pagetable_new(l1mfn);
+                free_xen_pagetable(l1mfn);
             }
 
             pl1e  = map_l1t_from_l2e(*pl2e) + l1_table_offset(virt);
@@ -5424,7 +5424,7 @@ int map_pages_to_xen(
                     flush_area(virt - PAGE_SIZE,
                                FLUSH_TLB_GLOBAL |
                                FLUSH_ORDER(PAGETABLE_ORDER));
-                    free_xen_pagetable_new(l2e_get_mfn(ol2e));
+                    free_xen_pagetable(l2e_get_mfn(ol2e));
                 }
                 else if ( locking )
                     spin_unlock(&map_pgdir_lock);
@@ -5475,7 +5475,7 @@ int map_pages_to_xen(
                 flush_area(virt - PAGE_SIZE,
                            FLUSH_TLB_GLOBAL |
                            FLUSH_ORDER(2*PAGETABLE_ORDER));
-                free_xen_pagetable_new(l3e_get_mfn(ol3e));
+                free_xen_pagetable(l3e_get_mfn(ol3e));
             }
             else if ( locking )
                 spin_unlock(&map_pgdir_lock);
@@ -5564,7 +5564,7 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
             }
 
             /* PAGE1GB: shatter the superpage and fall through. */
-            l2mfn = alloc_xen_pagetable_new();
+            l2mfn = alloc_xen_pagetable();
             if ( mfn_eq(l2mfn, INVALID_MFN) )
                 goto out;
 
@@ -5588,7 +5588,7 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
             if ( locking )
                 spin_unlock(&map_pgdir_lock);
 
-            free_xen_pagetable_new(l2mfn);
+            free_xen_pagetable(l2mfn);
         }
 
         /*
@@ -5624,7 +5624,7 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
             {
                 l1_pgentry_t *l1t;
                 /* PSE: shatter the superpage and try again. */
-                mfn_t l1mfn = alloc_xen_pagetable_new();
+                mfn_t l1mfn = alloc_xen_pagetable();
 
                 if ( mfn_eq(l1mfn, INVALID_MFN) )
                     goto out;
@@ -5648,7 +5648,7 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
                 if ( locking )
                     spin_unlock(&map_pgdir_lock);
 
-                free_xen_pagetable_new(l1mfn);
+                free_xen_pagetable(l1mfn);
             }
         }
         else
@@ -5715,7 +5715,7 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
                 if ( locking )
                     spin_unlock(&map_pgdir_lock);
                 flush_area(NULL, FLUSH_TLB_GLOBAL); /* flush before free */
-                free_xen_pagetable_new(l1mfn);
+                free_xen_pagetable(l1mfn);
             }
             else if ( locking )
                 spin_unlock(&map_pgdir_lock);
@@ -5760,7 +5760,7 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
                 if ( locking )
                     spin_unlock(&map_pgdir_lock);
                 flush_area(NULL, FLUSH_TLB_GLOBAL); /* flush before free */
-                free_xen_pagetable_new(l2mfn);
+                free_xen_pagetable(l2mfn);
             }
             else if ( locking )
                 spin_unlock(&map_pgdir_lock);
diff --git a/xen/arch/x86/smpboot.c b/xen/arch/x86/smpboot.c
index f431f526da..a01412a986 100644
--- a/xen/arch/x86/smpboot.c
+++ b/xen/arch/x86/smpboot.c
@@ -902,15 +902,15 @@ static void cleanup_cpu_root_pgt(unsigned int cpu)
                     continue;
 
                 ASSERT(!(l2e_get_flags(l2t[i2]) & _PAGE_PSE));
-                free_xen_pagetable_new(l2e_get_mfn(l2t[i2]));
+                free_xen_pagetable(l2e_get_mfn(l2t[i2]));
             }
 
             unmap_domain_page(l2t);
-            free_xen_pagetable_new(l2mfn);
+            free_xen_pagetable(l2mfn);
         }
 
         unmap_domain_page(l3t);
-        free_xen_pagetable_new(l3mfn);
+        free_xen_pagetable(l3mfn);
     }
 
     free_xenheap_page(rpt);
diff --git a/xen/arch/x86/x86_64/mm.c b/xen/arch/x86/x86_64/mm.c
index 640f561faf..74c0bbb4aa 100644
--- a/xen/arch/x86/x86_64/mm.c
+++ b/xen/arch/x86/x86_64/mm.c
@@ -622,7 +622,7 @@ void __init paging_init(void)
     UNMAP_DOMAIN_PAGE(l3_ro_mpt);
 
     /* Create user-accessible L2 directory to map the MPT for compat guests. */
-    mfn = alloc_xen_pagetable_new();
+    mfn = alloc_xen_pagetable();
     if ( mfn_eq(mfn, INVALID_MFN) )
         goto nomem;
     compat_idle_pg_table_l2 = map_domain_page_global(mfn);
diff --git a/xen/include/asm-x86/mm.h b/xen/include/asm-x86/mm.h
index 1bd8198133..908d67664d 100644
--- a/xen/include/asm-x86/mm.h
+++ b/xen/include/asm-x86/mm.h
@@ -582,8 +582,8 @@ int vcpu_destroy_pagetables(struct vcpu *);
 void *do_page_walk(struct vcpu *v, unsigned long addr);
 
 /* Allocator functions for Xen pagetables. */
-mfn_t alloc_xen_pagetable_new(void);
-void free_xen_pagetable_new(mfn_t mfn);
+mfn_t alloc_xen_pagetable(void);
+void free_xen_pagetable(mfn_t mfn);
 void *alloc_map_clear_xen_pt(mfn_t *pmfn);
 
 l1_pgentry_t *virt_to_xen_l1e(unsigned long v);
-- 
2.16.6



From xen-devel-bounces@lists.xenproject.org Mon Jul 27 14:39:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jul 2020 14:39:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k04II-0000g8-5q; Mon, 27 Jul 2020 14:39:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wM/5=BG=xen.org=hx242@srs-us1.protection.inumbo.net>)
 id 1k04IG-0000dt-8j
 for xen-devel@lists.xenproject.org; Mon, 27 Jul 2020 14:39:56 +0000
X-Inumbo-ID: 06c71b7a-d017-11ea-8ac7-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 06c71b7a-d017-11ea-8ac7-bc764e2007e4;
 Mon, 27 Jul 2020 14:39:50 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:
 Sender:Reply-To:MIME-Version:Content-Type:Content-Transfer-Encoding:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=Uq+2iB5atuihy3Lh1/f+rppWdOqqwU8mCB8CS1E3Jiw=; b=DBexqBYbHXsR73yk2bwDOmRR9F
 TJNRLR0NhDxNYHENtnhQQD7kqJ+vk926FhM50KXyAREqVIrT68sagGLIpvMPa/XxG9w/dtuAryyKs
 C5VwzEPPDjLKRTOw4bBPOckokc7PLg+wRuxbXmI8f7I8Q0uWsfBWfKvDN/behUCUvOQE=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <hx242@xen.org>)
 id 1k04IA-0001kn-3c; Mon, 27 Jul 2020 14:39:50 +0000
Received: from 54-240-197-233.amazon.com ([54.240.197.233]
 helo=u1bbd043a57dd5a.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <hx242@xen.org>)
 id 1k041T-0002w6-Ky; Mon, 27 Jul 2020 14:22:35 +0000
From: Hongyan Xia <hx242@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v8 13/15] x86/mm: drop old page table APIs
Date: Mon, 27 Jul 2020 15:22:03 +0100
Message-Id: <fa5bf0b889d6ff1c9ffa5dcac109d6f28a9cdd54.1595857947.git.hongyxia@amazon.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1595857947.git.hongyxia@amazon.com>
References: <cover.1595857947.git.hongyxia@amazon.com>
In-Reply-To: <cover.1595857947.git.hongyxia@amazon.com>
References: <cover.1595857947.git.hongyxia@amazon.com>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, jgrall@amazon.com,
 Wei Liu <wl@xen.org>, Jan Beulich <jbeulich@suse.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Hongyan Xia <hongyxia@amazon.com>

Two sets of old APIs, alloc/free_xen_pagetable() and lXe_to_lYe(), are
now dropped to avoid the dependency on the direct map.

There are two special cases which have not yet been rewritten to use the
new APIs and thus need special treatment:

rpt in smpboot.c cannot use ephemeral mappings yet. The problem is that
rpt is read and written in context-switch code, but the mapping
infrastructure is NOT context-switch-safe, meaning we cannot map rpt in
one domain and unmap it in another. Until the mapping infrastructure
supports context switches, rpt has to be globally mapped.

Also, the lXe_to_lYe() calls during Xen image relocation cannot be
converted into map/unmap pairs: we cannot hold on to mappings while the
mapping infrastructure itself is being relocated. It is enough to remove
the direct map after the second e820 pass, so Xen relocation (which
happens during the first e820 pass) can still use the direct map
(<4GiB).
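For reviewers unfamiliar with the new interface, the alloc-map-use-unmap-free
lifecycle that replaces the old VA-returning allocators can be sketched as
below. This is NOT the Xen implementation: all functions here are toy
stand-ins modelling the MFN-based API shape (alloc returns an MFN rather than
a direct-map virtual address, and every access goes through an ephemeral
map/unmap pair).

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Toy model of the MFN-based API; names mirror Xen's but nothing here
 * is the real hypervisor code. */
#define PAGE_SIZE   4096
#define NPAGES      8
#define INVALID_MFN ((uint64_t)~0ULL)

static uint8_t pool[NPAGES][PAGE_SIZE]; /* fake machine pages */
static int     used[NPAGES];

/* Returns an MFN, not a VA: no reliance on a direct map. */
uint64_t alloc_xen_pagetable(void)
{
    for ( unsigned int i = 0; i < NPAGES; i++ )
        if ( !used[i] )
        {
            used[i] = 1;
            return i;
        }
    return INVALID_MFN;
}

/* mfn can be INVALID_MFN, matching the real API's contract. */
void free_xen_pagetable(uint64_t mfn)
{
    if ( mfn != INVALID_MFN )
        used[mfn] = 0;
}

/* Stand-ins for the ephemeral mapping infrastructure. */
void *map_domain_page(uint64_t mfn) { return pool[mfn]; }
void unmap_domain_page(void *va)    { (void)va; }

/* The alloc-map-use-unmap-free lifecycle callers must follow. */
int lifecycle_demo(void)
{
    uint64_t mfn = alloc_xen_pagetable();

    if ( mfn == INVALID_MFN )       /* caller must check for failure */
        return -1;

    uint8_t *pt = map_domain_page(mfn); /* map only around the access... */
    memset(pt, 0, PAGE_SIZE);
    unmap_domain_page(pt);              /* ...and drop the mapping promptly */

    free_xen_pagetable(mfn);
    free_xen_pagetable(INVALID_MFN);    /* explicitly allowed no-op */
    return 0;
}
```

The key design point the series relies on: because only an MFN is held
between uses, page-table pages no longer need a permanent virtual address,
which is what allows the direct map to be removed.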

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
Signed-off-by: Hongyan Xia <hongyxia@amazon.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
 xen/arch/x86/mm.c          | 14 --------------
 xen/arch/x86/setup.c       |  4 ++--
 xen/arch/x86/smpboot.c     |  4 ++--
 xen/include/asm-x86/mm.h   |  2 --
 xen/include/asm-x86/page.h |  5 -----
 5 files changed, 4 insertions(+), 25 deletions(-)

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 199940a345..76b8c681c9 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -4925,20 +4925,6 @@ int mmcfg_intercept_write(
     return X86EMUL_OKAY;
 }
 
-void *alloc_xen_pagetable(void)
-{
-    mfn_t mfn = alloc_xen_pagetable_new();
-
-    return mfn_eq(mfn, INVALID_MFN) ? NULL : mfn_to_virt(mfn_x(mfn));
-}
-
-void free_xen_pagetable(void *v)
-{
-    mfn_t mfn = v ? virt_to_mfn(v) : INVALID_MFN;
-
-    free_xen_pagetable_new(mfn);
-}
-
 /*
  * For these PTE APIs, the caller must follow the alloc-map-unmap-free
  * lifecycle, which means explicitly mapping the PTE pages before accessing
diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
index c9b6af826d..1f73589d5b 100644
--- a/xen/arch/x86/setup.c
+++ b/xen/arch/x86/setup.c
@@ -1247,7 +1247,7 @@ void __init noreturn __start_xen(unsigned long mbi_p)
                     continue;
                 *pl4e = l4e_from_intpte(l4e_get_intpte(*pl4e) +
                                         xen_phys_start);
-                pl3e = l4e_to_l3e(*pl4e);
+                pl3e = __va(l4e_get_paddr(*pl4e));
                 for ( j = 0; j < L3_PAGETABLE_ENTRIES; j++, pl3e++ )
                 {
                     /* Not present, 1GB mapping, or already relocated? */
@@ -1257,7 +1257,7 @@ void __init noreturn __start_xen(unsigned long mbi_p)
                         continue;
                     *pl3e = l3e_from_intpte(l3e_get_intpte(*pl3e) +
                                             xen_phys_start);
-                    pl2e = l3e_to_l2e(*pl3e);
+                    pl2e = __va(l3e_get_paddr(*pl3e));
                     for ( k = 0; k < L2_PAGETABLE_ENTRIES; k++, pl2e++ )
                     {
                         /* Not present, PSE, or already relocated? */
diff --git a/xen/arch/x86/smpboot.c b/xen/arch/x86/smpboot.c
index c965222e19..f431f526da 100644
--- a/xen/arch/x86/smpboot.c
+++ b/xen/arch/x86/smpboot.c
@@ -810,7 +810,7 @@ static int setup_cpu_root_pgt(unsigned int cpu)
     if ( !opt_xpti_hwdom && !opt_xpti_domu )
         return 0;
 
-    rpt = alloc_xen_pagetable();
+    rpt = alloc_xenheap_page();
     if ( !rpt )
         return -ENOMEM;
 
@@ -913,7 +913,7 @@ static void cleanup_cpu_root_pgt(unsigned int cpu)
         free_xen_pagetable_new(l3mfn);
     }
 
-    free_xen_pagetable(rpt);
+    free_xenheap_page(rpt);
 
     /* Also zap the stub mapping for this CPU. */
     if ( stub_linear )
diff --git a/xen/include/asm-x86/mm.h b/xen/include/asm-x86/mm.h
index 5b76308948..1bd8198133 100644
--- a/xen/include/asm-x86/mm.h
+++ b/xen/include/asm-x86/mm.h
@@ -582,8 +582,6 @@ int vcpu_destroy_pagetables(struct vcpu *);
 void *do_page_walk(struct vcpu *v, unsigned long addr);
 
 /* Allocator functions for Xen pagetables. */
-void *alloc_xen_pagetable(void);
-void free_xen_pagetable(void *v);
 mfn_t alloc_xen_pagetable_new(void);
 void free_xen_pagetable_new(mfn_t mfn);
 void *alloc_map_clear_xen_pt(mfn_t *pmfn);
diff --git a/xen/include/asm-x86/page.h b/xen/include/asm-x86/page.h
index 608a048c28..45ed561772 100644
--- a/xen/include/asm-x86/page.h
+++ b/xen/include/asm-x86/page.h
@@ -188,11 +188,6 @@ static inline l4_pgentry_t l4e_from_paddr(paddr_t pa, unsigned int flags)
 #define l4e_has_changed(x,y,flags) \
     ( !!(((x).l4 ^ (y).l4) & ((PADDR_MASK&PAGE_MASK)|put_pte_flags(flags))) )
 
-/* Pagetable walking. */
-#define l2e_to_l1e(x)              ((l1_pgentry_t *)__va(l2e_get_paddr(x)))
-#define l3e_to_l2e(x)              ((l2_pgentry_t *)__va(l3e_get_paddr(x)))
-#define l4e_to_l3e(x)              ((l3_pgentry_t *)__va(l4e_get_paddr(x)))
-
 #define map_l1t_from_l2e(x)        (l1_pgentry_t *)map_domain_page(l2e_get_mfn(x))
 #define map_l2t_from_l3e(x)        (l2_pgentry_t *)map_domain_page(l3e_get_mfn(x))
 #define map_l3t_from_l4e(x)        (l3_pgentry_t *)map_domain_page(l4e_get_mfn(x))
-- 
2.16.6



From xen-devel-bounces@lists.xenproject.org Mon Jul 27 14:39:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jul 2020 14:39:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k04II-0000gX-GF; Mon, 27 Jul 2020 14:39:58 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wM/5=BG=xen.org=hx242@srs-us1.protection.inumbo.net>)
 id 1k04IG-0000dz-TH
 for xen-devel@lists.xenproject.org; Mon, 27 Jul 2020 14:39:56 +0000
X-Inumbo-ID: 06b3e7f8-d017-11ea-a7dc-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 06b3e7f8-d017-11ea-a7dc-12813bfff9fa;
 Mon, 27 Jul 2020 14:39:50 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:
 Sender:Reply-To:MIME-Version:Content-Type:Content-Transfer-Encoding:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=5NW5X4tKkJdncJ3jCsyAFhkJXBFKJtHVYTfU6S+d5S0=; b=nD6Ce3tQfWBHg/ctqxHn3jozva
 xe8pRF309lfmlRrapwqOKGfGgV+d3/dcHc6OjmX1GDQfdDdTxcD9RfSrvfkY8buKwd3VQRy02v2BB
 iDDl2nRpjgIPcDX/3w88eQtFOpUkLzrv2M4eGIxdfKDfSIVBgIOFPy1/MgVHQC6SATBk=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <hx242@xen.org>)
 id 1k04I9-0001kj-Uo; Mon, 27 Jul 2020 14:39:49 +0000
Received: from 54-240-197-233.amazon.com ([54.240.197.233]
 helo=u1bbd043a57dd5a.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <hx242@xen.org>)
 id 1k041V-0002w6-2v; Mon, 27 Jul 2020 14:22:37 +0000
From: Hongyan Xia <hx242@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v8 14/15] x86: switch to use domheap page for page tables
Date: Mon, 27 Jul 2020 15:22:04 +0100
Message-Id: <53c9d3ced017091305a5fcf94bf3d74c58735c63.1595857947.git.hongyxia@amazon.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1595857947.git.hongyxia@amazon.com>
References: <cover.1595857947.git.hongyxia@amazon.com>
In-Reply-To: <cover.1595857947.git.hongyxia@amazon.com>
References: <cover.1595857947.git.hongyxia@amazon.com>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, jgrall@amazon.com,
 Wei Liu <wl@xen.org>, Jan Beulich <jbeulich@suse.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Hongyan Xia <hongyxia@amazon.com>

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
Signed-off-by: Hongyan Xia <hongyxia@amazon.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>

---
Changed in v8:
- const qualify pg in alloc_xen_pagetable_new().
---
 xen/arch/x86/mm.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 76b8c681c9..8348f6329f 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -4935,10 +4935,10 @@ mfn_t alloc_xen_pagetable_new(void)
 {
     if ( system_state != SYS_STATE_early_boot )
     {
-        void *ptr = alloc_xenheap_page();
+        const struct page_info *pg = alloc_domheap_page(NULL, 0);
 
-        BUG_ON(!hardware_domain && !ptr);
-        return ptr ? virt_to_mfn(ptr) : INVALID_MFN;
+        BUG_ON(!hardware_domain && !pg);
+        return pg ? page_to_mfn(pg) : INVALID_MFN;
     }
 
     return alloc_boot_pages(1, 1);
@@ -4948,7 +4948,7 @@ mfn_t alloc_xen_pagetable_new(void)
 void free_xen_pagetable_new(mfn_t mfn)
 {
     if ( system_state != SYS_STATE_early_boot && !mfn_eq(mfn, INVALID_MFN) )
-        free_xenheap_page(mfn_to_virt(mfn_x(mfn)));
+        free_domheap_page(mfn_to_page(mfn));
 }
 
 void *alloc_map_clear_xen_pt(mfn_t *pmfn)
-- 
2.16.6



From xen-devel-bounces@lists.xenproject.org Mon Jul 27 14:40:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jul 2020 14:40:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k04IN-0000r2-17; Mon, 27 Jul 2020 14:40:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wM/5=BG=xen.org=hx242@srs-us1.protection.inumbo.net>)
 id 1k04IL-0000dt-8s
 for xen-devel@lists.xenproject.org; Mon, 27 Jul 2020 14:40:01 +0000
X-Inumbo-ID: 06d6f414-d017-11ea-8ac7-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 06d6f414-d017-11ea-8ac7-bc764e2007e4;
 Mon, 27 Jul 2020 14:39:50 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:
 Sender:Reply-To:MIME-Version:Content-Type:Content-Transfer-Encoding:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=W32xTjpvUmjvkngH88KQETVFnfyWGChOM6HrUWd8JKI=; b=OzLNH4tNvAMKFTottAU2hdC/tf
 aLEbyJxKp5lYQvkMXBeeLnzpSwljDA/awYuX7ZJWGW2CN+BwLhdsJmjHxfCOznkrKXWXrPb1QIZJb
 rCj0kJnKoZYjXxNgWee5JGh2lqcBWlVrWYJriGP6bXFPCyaJWfu33XHPoltRUaObrbjI=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <hx242@xen.org>)
 id 1k04IA-0001kt-9C; Mon, 27 Jul 2020 14:39:50 +0000
Received: from 54-240-197-233.amazon.com ([54.240.197.233]
 helo=u1bbd043a57dd5a.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <hx242@xen.org>)
 id 1k041P-0002w6-AP; Mon, 27 Jul 2020 14:22:31 +0000
From: Hongyan Xia <hx242@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v8 10/15] efi: switch to new APIs in EFI code
Date: Mon, 27 Jul 2020 15:22:00 +0100
Message-Id: <9f824496cf87c1f282d774562456ba5d51ce7979.1595857947.git.hongyxia@amazon.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1595857947.git.hongyxia@amazon.com>
References: <cover.1595857947.git.hongyxia@amazon.com>
In-Reply-To: <cover.1595857947.git.hongyxia@amazon.com>
References: <cover.1595857947.git.hongyxia@amazon.com>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, jgrall@amazon.com,
 Wei Liu <wl@xen.org>, Jan Beulich <jbeulich@suse.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Wei Liu <wei.liu2@citrix.com>

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
Signed-off-by: Hongyan Xia <hongyxia@amazon.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>

---
Changed in v7:
- add blank line after declaration.
- rename efi_l4_pgtable into efi_l4t.
- pass the mapped efi_l4t to copy_mapping() instead of map it again.
- use the alloc_map_clear_xen_pt() API.
- unmap pl3e, pl2e, l1t earlier.
---
 xen/arch/x86/efi/runtime.h | 13 ++++++++---
 xen/common/efi/boot.c      | 55 +++++++++++++++++++++++++++-------------------
 xen/common/efi/efi.h       |  3 ++-
 xen/common/efi/runtime.c   |  8 +++----
 4 files changed, 48 insertions(+), 31 deletions(-)

diff --git a/xen/arch/x86/efi/runtime.h b/xen/arch/x86/efi/runtime.h
index d9eb8f5c27..77866c5f21 100644
--- a/xen/arch/x86/efi/runtime.h
+++ b/xen/arch/x86/efi/runtime.h
@@ -1,12 +1,19 @@
+#include <xen/domain_page.h>
+#include <xen/mm.h>
 #include <asm/atomic.h>
 #include <asm/mc146818rtc.h>
 
 #ifndef COMPAT
-l4_pgentry_t *__read_mostly efi_l4_pgtable;
+mfn_t __read_mostly efi_l4_mfn = INVALID_MFN_INITIALIZER;
 
 void efi_update_l4_pgtable(unsigned int l4idx, l4_pgentry_t l4e)
 {
-    if ( efi_l4_pgtable )
-        l4e_write(efi_l4_pgtable + l4idx, l4e);
+    if ( !mfn_eq(efi_l4_mfn, INVALID_MFN) )
+    {
+        l4_pgentry_t *efi_l4t = map_domain_page(efi_l4_mfn);
+
+        l4e_write(efi_l4t + l4idx, l4e);
+        unmap_domain_page(efi_l4t);
+    }
 }
 #endif
diff --git a/xen/common/efi/boot.c b/xen/common/efi/boot.c
index f116759538..f2f1dbbc77 100644
--- a/xen/common/efi/boot.c
+++ b/xen/common/efi/boot.c
@@ -1440,14 +1440,15 @@ custom_param("efi", parse_efi_param);
 
 static __init void copy_mapping(unsigned long mfn, unsigned long end,
                                 bool (*is_valid)(unsigned long smfn,
-                                                 unsigned long emfn))
+                                                 unsigned long emfn),
+                                l4_pgentry_t *efi_l4t)
 {
     unsigned long next;
     l3_pgentry_t *l3src = NULL, *l3dst = NULL;
 
     for ( ; mfn < end; mfn = next )
     {
-        l4_pgentry_t l4e = efi_l4_pgtable[l4_table_offset(mfn << PAGE_SHIFT)];
+        l4_pgentry_t l4e = efi_l4t[l4_table_offset(mfn << PAGE_SHIFT)];
         unsigned long va = (unsigned long)mfn_to_virt(mfn);
 
         if ( !(mfn & ((1UL << (L4_PAGETABLE_SHIFT - PAGE_SHIFT)) - 1)) )
@@ -1466,7 +1467,7 @@ static __init void copy_mapping(unsigned long mfn, unsigned long end,
 
             l3dst = alloc_map_clear_xen_pt(&l3mfn);
             BUG_ON(!l3dst);
-            efi_l4_pgtable[l4_table_offset(mfn << PAGE_SHIFT)] =
+            efi_l4t[l4_table_offset(mfn << PAGE_SHIFT)] =
                 l4e_from_mfn(l3mfn, __PAGE_HYPERVISOR);
         }
         else
@@ -1499,6 +1500,7 @@ static bool __init rt_range_valid(unsigned long smfn, unsigned long emfn)
 void __init efi_init_memory(void)
 {
     unsigned int i;
+    l4_pgentry_t *efi_l4t;
     struct rt_extra {
         struct rt_extra *next;
         unsigned long smfn, emfn;
@@ -1613,11 +1615,10 @@ void __init efi_init_memory(void)
      * Set up 1:1 page tables for runtime calls. See SetVirtualAddressMap() in
      * efi_exit_boot().
      */
-    efi_l4_pgtable = alloc_xen_pagetable();
-    BUG_ON(!efi_l4_pgtable);
-    clear_page(efi_l4_pgtable);
+    efi_l4t = alloc_map_clear_xen_pt(&efi_l4_mfn);
+    BUG_ON(!efi_l4t);
 
-    copy_mapping(0, max_page, ram_range_valid);
+    copy_mapping(0, max_page, ram_range_valid, efi_l4t);
 
     /* Insert non-RAM runtime mappings inside the direct map. */
     for ( i = 0; i < efi_memmap_size; i += efi_mdesc_size )
@@ -1633,58 +1634,64 @@ void __init efi_init_memory(void)
             copy_mapping(PFN_DOWN(desc->PhysicalStart),
                          PFN_UP(desc->PhysicalStart +
                                 (desc->NumberOfPages << EFI_PAGE_SHIFT)),
-                         rt_range_valid);
+                         rt_range_valid, efi_l4t);
     }
 
     /* Insert non-RAM runtime mappings outside of the direct map. */
     while ( (extra = extra_head) != NULL )
     {
         unsigned long addr = extra->smfn << PAGE_SHIFT;
-        l4_pgentry_t l4e = efi_l4_pgtable[l4_table_offset(addr)];
+        l4_pgentry_t l4e = efi_l4t[l4_table_offset(addr)];
         l3_pgentry_t *pl3e;
         l2_pgentry_t *pl2e;
         l1_pgentry_t *l1t;
 
         if ( !(l4e_get_flags(l4e) & _PAGE_PRESENT) )
         {
-            pl3e = alloc_xen_pagetable();
+            mfn_t l3mfn;
+
+            pl3e = alloc_map_clear_xen_pt(&l3mfn);
             BUG_ON(!pl3e);
-            clear_page(pl3e);
-            efi_l4_pgtable[l4_table_offset(addr)] =
-                l4e_from_paddr(virt_to_maddr(pl3e), __PAGE_HYPERVISOR);
+            efi_l4t[l4_table_offset(addr)] =
+                l4e_from_mfn(l3mfn, __PAGE_HYPERVISOR);
         }
         else
-            pl3e = l4e_to_l3e(l4e);
+            pl3e = map_l3t_from_l4e(l4e);
         pl3e += l3_table_offset(addr);
         if ( !(l3e_get_flags(*pl3e) & _PAGE_PRESENT) )
         {
-            pl2e = alloc_xen_pagetable();
+            mfn_t l2mfn;
+
+            pl2e = alloc_map_clear_xen_pt(&l2mfn);
             BUG_ON(!pl2e);
-            clear_page(pl2e);
-            *pl3e = l3e_from_paddr(virt_to_maddr(pl2e), __PAGE_HYPERVISOR);
+            *pl3e = l3e_from_mfn(l2mfn, __PAGE_HYPERVISOR);
         }
         else
         {
             BUG_ON(l3e_get_flags(*pl3e) & _PAGE_PSE);
-            pl2e = l3e_to_l2e(*pl3e);
+            pl2e = map_l2t_from_l3e(*pl3e);
         }
+        UNMAP_DOMAIN_PAGE(pl3e);
         pl2e += l2_table_offset(addr);
         if ( !(l2e_get_flags(*pl2e) & _PAGE_PRESENT) )
         {
-            l1t = alloc_xen_pagetable();
+            mfn_t l1mfn;
+
+            l1t = alloc_map_clear_xen_pt(&l1mfn);
             BUG_ON(!l1t);
-            clear_page(l1t);
-            *pl2e = l2e_from_paddr(virt_to_maddr(l1t), __PAGE_HYPERVISOR);
+            *pl2e = l2e_from_mfn(l1mfn, __PAGE_HYPERVISOR);
         }
         else
         {
             BUG_ON(l2e_get_flags(*pl2e) & _PAGE_PSE);
-            l1t = l2e_to_l1e(*pl2e);
+            l1t = map_l1t_from_l2e(*pl2e);
         }
+        UNMAP_DOMAIN_PAGE(pl2e);
         for ( i = l1_table_offset(addr);
               i < L1_PAGETABLE_ENTRIES && extra->smfn < extra->emfn;
               ++i, ++extra->smfn )
             l1t[i] = l1e_from_pfn(extra->smfn, extra->prot);
+        UNMAP_DOMAIN_PAGE(l1t);
 
         if ( extra->smfn == extra->emfn )
         {
@@ -1696,6 +1703,8 @@ void __init efi_init_memory(void)
     /* Insert Xen mappings. */
     for ( i = l4_table_offset(HYPERVISOR_VIRT_START);
           i < l4_table_offset(DIRECTMAP_VIRT_END); ++i )
-        efi_l4_pgtable[i] = idle_pg_table[i];
+        efi_l4t[i] = idle_pg_table[i];
+
+    unmap_domain_page(efi_l4t);
 }
 #endif
diff --git a/xen/common/efi/efi.h b/xen/common/efi/efi.h
index 2e38d05f3d..e364bae3e0 100644
--- a/xen/common/efi/efi.h
+++ b/xen/common/efi/efi.h
@@ -6,6 +6,7 @@
 #include <efi/eficapsule.h>
 #include <efi/efiapi.h>
 #include <xen/efi.h>
+#include <xen/mm.h>
 #include <xen/spinlock.h>
 #include <asm/page.h>
 
@@ -29,7 +30,7 @@ extern UINTN efi_memmap_size, efi_mdesc_size;
 extern void *efi_memmap;
 
 #ifdef CONFIG_X86
-extern l4_pgentry_t *efi_l4_pgtable;
+extern mfn_t efi_l4_mfn;
 #endif
 
 extern const struct efi_pci_rom *efi_pci_roms;
diff --git a/xen/common/efi/runtime.c b/xen/common/efi/runtime.c
index 95367694b5..375b94229e 100644
--- a/xen/common/efi/runtime.c
+++ b/xen/common/efi/runtime.c
@@ -85,7 +85,7 @@ struct efi_rs_state efi_rs_enter(void)
     static const u32 mxcsr = MXCSR_DEFAULT;
     struct efi_rs_state state = { .cr3 = 0 };
 
-    if ( !efi_l4_pgtable )
+    if ( mfn_eq(efi_l4_mfn, INVALID_MFN) )
         return state;
 
     state.cr3 = read_cr3();
@@ -111,7 +111,7 @@ struct efi_rs_state efi_rs_enter(void)
         lgdt(&gdt_desc);
     }
 
-    switch_cr3_cr4(virt_to_maddr(efi_l4_pgtable), read_cr4());
+    switch_cr3_cr4(mfn_to_maddr(efi_l4_mfn), read_cr4());
 
     return state;
 }
@@ -140,9 +140,9 @@ void efi_rs_leave(struct efi_rs_state *state)
 
 bool efi_rs_using_pgtables(void)
 {
-    return efi_l4_pgtable &&
+    return !mfn_eq(efi_l4_mfn, INVALID_MFN) &&
            (smp_processor_id() == efi_rs_on_cpu) &&
-           (read_cr3() == virt_to_maddr(efi_l4_pgtable));
+           (read_cr3() == mfn_to_maddr(efi_l4_mfn));
 }
 
 unsigned long efi_get_time(void)
-- 
2.16.6



From xen-devel-bounces@lists.xenproject.org Mon Jul 27 14:40:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jul 2020 14:40:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k04IN-0000s0-Bg; Mon, 27 Jul 2020 14:40:03 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wM/5=BG=xen.org=hx242@srs-us1.protection.inumbo.net>)
 id 1k04IL-0000dz-TU
 for xen-devel@lists.xenproject.org; Mon, 27 Jul 2020 14:40:01 +0000
X-Inumbo-ID: 06d48f12-d017-11ea-a7dc-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 06d48f12-d017-11ea-a7dc-12813bfff9fa;
 Mon, 27 Jul 2020 14:39:50 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:
 Sender:Reply-To:MIME-Version:Content-Type:Content-Transfer-Encoding:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=UfyYK+8DzrY89B6vi2RzWDTZyEViNsi3rlfmpn5A0tM=; b=CVI2Iz220Wao66SJqlzH+EpNvH
 q0mK8/PRAd6FkYSxyhbxBoSVsSUDhZq//saCiP6K9cZdK6PUporf2SQazRjjGzCoAf03cvD2nsIQ8
 v4khyY/ctwk+7bovHMOxBp93meAi0Ud9LFgFRzAKEA2aHg+UZdA6mwRIN1vdypNigTXk=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <hx242@xen.org>)
 id 1k04IA-0001kr-7h; Mon, 27 Jul 2020 14:39:50 +0000
Received: from 54-240-197-233.amazon.com ([54.240.197.233]
 helo=u1bbd043a57dd5a.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <hx242@xen.org>)
 id 1k041S-0002w6-6H; Mon, 27 Jul 2020 14:22:34 +0000
From: Hongyan Xia <hx242@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v8 12/15] x86/smpboot: switch clone_mapping() to new APIs
Date: Mon, 27 Jul 2020 15:22:02 +0100
Message-Id: <31b850b40373f4499f5f51a6e8c7f00f7adb67ec.1595857947.git.hongyxia@amazon.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1595857947.git.hongyxia@amazon.com>
References: <cover.1595857947.git.hongyxia@amazon.com>
In-Reply-To: <cover.1595857947.git.hongyxia@amazon.com>
References: <cover.1595857947.git.hongyxia@amazon.com>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, jgrall@amazon.com,
 Wei Liu <wl@xen.org>, Jan Beulich <jbeulich@suse.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Wei Liu <wei.liu2@citrix.com>

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
Signed-off-by: Hongyan Xia <hongyxia@amazon.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>

---
Changed in v7:
- change patch title
- remove initialiser of pl3e.
- combine the initialisation of pl3e into a single assignment.
- use the new alloc_map_clear() helper.
- use the normal map_domain_page() in the error path.
---
 xen/arch/x86/smpboot.c | 44 +++++++++++++++++++++++++++-----------------
 1 file changed, 27 insertions(+), 17 deletions(-)

diff --git a/xen/arch/x86/smpboot.c b/xen/arch/x86/smpboot.c
index 05df08f945..c965222e19 100644
--- a/xen/arch/x86/smpboot.c
+++ b/xen/arch/x86/smpboot.c
@@ -674,8 +674,8 @@ static int clone_mapping(const void *ptr, root_pgentry_t *rpt)
     unsigned long linear = (unsigned long)ptr, pfn;
     unsigned int flags;
     l3_pgentry_t *pl3e;
-    l2_pgentry_t *pl2e;
-    l1_pgentry_t *pl1e;
+    l2_pgentry_t *pl2e = NULL;
+    l1_pgentry_t *pl1e = NULL;
     int rc = 0;
 
     /*
@@ -690,7 +690,7 @@ static int clone_mapping(const void *ptr, root_pgentry_t *rpt)
          (linear >= XEN_VIRT_END && linear < DIRECTMAP_VIRT_START) )
         return -EINVAL;
 
-    pl3e = l4e_to_l3e(idle_pg_table[root_table_offset(linear)]) +
+    pl3e = map_l3t_from_l4e(idle_pg_table[root_table_offset(linear)]) +
         l3_table_offset(linear);
 
     flags = l3e_get_flags(*pl3e);
@@ -703,7 +703,7 @@ static int clone_mapping(const void *ptr, root_pgentry_t *rpt)
     }
     else
     {
-        pl2e = l3e_to_l2e(*pl3e) + l2_table_offset(linear);
+        pl2e = map_l2t_from_l3e(*pl3e) + l2_table_offset(linear);
         flags = l2e_get_flags(*pl2e);
         ASSERT(flags & _PAGE_PRESENT);
         if ( flags & _PAGE_PSE )
@@ -714,7 +714,7 @@ static int clone_mapping(const void *ptr, root_pgentry_t *rpt)
         }
         else
         {
-            pl1e = l2e_to_l1e(*pl2e) + l1_table_offset(linear);
+            pl1e = map_l1t_from_l2e(*pl2e) + l1_table_offset(linear);
             flags = l1e_get_flags(*pl1e);
             if ( !(flags & _PAGE_PRESENT) )
                 goto out;
@@ -722,51 +722,58 @@ static int clone_mapping(const void *ptr, root_pgentry_t *rpt)
         }
     }
 
+    UNMAP_DOMAIN_PAGE(pl1e);
+    UNMAP_DOMAIN_PAGE(pl2e);
+    UNMAP_DOMAIN_PAGE(pl3e);
+
     if ( !(root_get_flags(rpt[root_table_offset(linear)]) & _PAGE_PRESENT) )
     {
-        pl3e = alloc_xen_pagetable();
+        mfn_t l3mfn;
+
+        pl3e = alloc_map_clear_xen_pt(&l3mfn);
         rc = -ENOMEM;
         if ( !pl3e )
             goto out;
-        clear_page(pl3e);
         l4e_write(&rpt[root_table_offset(linear)],
-                  l4e_from_paddr(__pa(pl3e), __PAGE_HYPERVISOR));
+                  l4e_from_mfn(l3mfn, __PAGE_HYPERVISOR));
     }
     else
-        pl3e = l4e_to_l3e(rpt[root_table_offset(linear)]);
+        pl3e = map_l3t_from_l4e(rpt[root_table_offset(linear)]);
 
     pl3e += l3_table_offset(linear);
 
     if ( !(l3e_get_flags(*pl3e) & _PAGE_PRESENT) )
     {
-        pl2e = alloc_xen_pagetable();
+        mfn_t l2mfn;
+
+        pl2e = alloc_map_clear_xen_pt(&l2mfn);
         rc = -ENOMEM;
         if ( !pl2e )
             goto out;
-        clear_page(pl2e);
-        l3e_write(pl3e, l3e_from_paddr(__pa(pl2e), __PAGE_HYPERVISOR));
+        l3e_write(pl3e, l3e_from_mfn(l2mfn, __PAGE_HYPERVISOR));
     }
     else
     {
         ASSERT(!(l3e_get_flags(*pl3e) & _PAGE_PSE));
-        pl2e = l3e_to_l2e(*pl3e);
+        pl2e = map_l2t_from_l3e(*pl3e);
     }
 
     pl2e += l2_table_offset(linear);
 
     if ( !(l2e_get_flags(*pl2e) & _PAGE_PRESENT) )
     {
-        pl1e = alloc_xen_pagetable();
+        mfn_t l1mfn;
+
+        pl1e = alloc_map_clear_xen_pt(&l1mfn);
         rc = -ENOMEM;
         if ( !pl1e )
             goto out;
-        clear_page(pl1e);
-        l2e_write(pl2e, l2e_from_paddr(__pa(pl1e), __PAGE_HYPERVISOR));
+        l2e_write(pl2e, l2e_from_mfn(l1mfn, __PAGE_HYPERVISOR));
     }
     else
     {
         ASSERT(!(l2e_get_flags(*pl2e) & _PAGE_PSE));
-        pl1e = l2e_to_l1e(*pl2e);
+        pl1e = map_l1t_from_l2e(*pl2e);
     }
 
     pl1e += l1_table_offset(linear);
@@ -782,6 +789,9 @@ static int clone_mapping(const void *ptr, root_pgentry_t *rpt)
 
     rc = 0;
  out:
+    unmap_domain_page(pl1e);
+    unmap_domain_page(pl2e);
+    unmap_domain_page(pl3e);
     return rc;
 }
 
-- 
2.16.6
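
The cleanup pattern in the patch above (pointers initialised to NULL, mid-function `UNMAP_DOMAIN_PAGE()` that clears the pointer, and a shared `out:` label whose `unmap_domain_page()` calls tolerate NULL) can be sketched as a small userspace analogy. This is an illustration only; `unmap()`/`UNMAP()` here are stand-ins for Xen's `unmap_domain_page()`/`UNMAP_DOMAIN_PAGE()`, not the real implementation:

```c
#include <stdlib.h>

/* Userspace analogy of the UNMAP_DOMAIN_PAGE() idiom: unmap and NULL the
 * pointer in one step, so the shared error-path cleanup (which must
 * tolerate NULL) becomes a harmless no-op for already-unmapped entries. */
#define UNMAP(p) do { unmap(p); (p) = NULL; } while (0)

static int unmapped;               /* counts real unmaps, for illustration */

static void unmap(char *p)         /* stands in for unmap_domain_page() */
{
    if (p) {                       /* NULL is tolerated, like the real API */
        free(p);
        unmapped++;
    }
}

int demo(void)
{
    char *pl1e = NULL;             /* never mapped on this code path */
    char *pl2e = malloc(16);       /* "mapped" */

    UNMAP(pl2e);                   /* early unmap: releases and clears pl2e */

    /* common cleanup, mirroring the "out:" label in clone_mapping() */
    unmap(pl1e);                   /* NULL: no-op */
    unmap(pl2e);                   /* already cleared by UNMAP(): no-op */
    return unmapped;               /* only one real unmap happened */
}
```

Because `UNMAP()` clears its argument, the function can funnel every exit through one cleanup block without tracking which tables are currently mapped.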



From xen-devel-bounces@lists.xenproject.org Mon Jul 27 14:55:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jul 2020 14:55:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k04XP-0002m2-Ra; Mon, 27 Jul 2020 14:55:35 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1WzZ=BG=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1k04XP-0002lx-2l
 for xen-devel@lists.xenproject.org; Mon, 27 Jul 2020 14:55:35 +0000
X-Inumbo-ID: 38b20d64-d019-11ea-a787-12813bfff9fa
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 38b20d64-d019-11ea-a787-12813bfff9fa;
 Mon, 27 Jul 2020 14:55:33 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595861733;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:in-reply-to;
 bh=TSQx4u/NB3+tiEqbPj9WfpMigBdq193jCABaybi7IOc=;
 b=ci665bOPwfWbRFHFlxaWOVs7PbEqF+3fIz9/u0UM+yh8VNzyDrms7Opf
 4g2oInyhm1NKN/bDyymVRGSZMi756Y6aU0BygAbaCR0KMvf4X7vXS23ED
 1KsdquJUOade/rLi3KoCHoyzCRJDUNz2Qdp/jZPGaWigrxvisljVrRp0+ E=;
Authentication-Results: esa6.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: sD/fboHLkvKPD/flZmpPerZr3r9zoVp7Mwe5Y0h5eJWSW5XHrSIaYszpgAu71QsZK6KtzIyGtm
 UxSE2OYt6SR7zgYx+tMaf+ZqwCwCDDtnWKVIcxFi+FMJeWC8s04B/pIQ6oehhKb6fzxLCBOQez
 DT/xnmXYufLK6490aoO81YgyL8A14npC8iEx2h+VWai/huuEnk2gq7nETOAKP3TrFbUTUvZ/Bm
 2ZucoCdEBQMN/t7RyXhYBtGu+BJ91uFKfYc1IdkWp9YietdSE/zcmccIu1oklFY4CeyUHcU/BF
 bk4=
X-SBRS: 2.7
X-MesageID: 23592628
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,402,1589256000"; d="scan'208";a="23592628"
Date: Mon, 27 Jul 2020 16:55:26 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Subject: Re: [PATCH 1/4] x86: replace __ASM_{CL,ST}AC
Message-ID: <20200727145526.GR7191@Air-de-Roger>
References: <58b9211a-f6dd-85da-d0bd-c927ac537a5d@suse.com>
 <fc8e042e-fef8-ac38-34d8-16b13e4b0135@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
In-Reply-To: <fc8e042e-fef8-ac38-34d8-16b13e4b0135@suse.com>
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Wed, Jul 15, 2020 at 12:48:14PM +0200, Jan Beulich wrote:
> Introduce proper assembler macros instead, enabled only when the
> assembler itself doesn't support the insns. To avoid duplicating the
> macros for assembly and C files, have them processed into asm-macros.h.
> This in turn requires adding a multiple inclusion guard when generating
> that header.
> 
> No change to generated code.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> --- a/xen/arch/x86/Makefile
> +++ b/xen/arch/x86/Makefile
> @@ -235,7 +235,10 @@ $(BASEDIR)/include/asm-x86/asm-macros.h:
>  	echo '#if 0' >$@.new
>  	echo '.if 0' >>$@.new
>  	echo '#endif' >>$@.new
> +	echo '#ifndef __ASM_MACROS_H__' >>$@.new
> +	echo '#define __ASM_MACROS_H__' >>$@.new
>  	echo 'asm ( ".include \"$@\"" );' >>$@.new
> +	echo '#endif /* __ASM_MACROS_H__ */' >>$@.new
>  	echo '#if 0' >>$@.new
>  	echo '.endif' >>$@.new
>  	cat $< >>$@.new
> --- a/xen/arch/x86/arch.mk
> +++ b/xen/arch/x86/arch.mk
> @@ -20,6 +20,7 @@ $(call as-option-add,CFLAGS,CC,"rdrand %
>  $(call as-option-add,CFLAGS,CC,"rdfsbase %rax",-DHAVE_AS_FSGSBASE)
>  $(call as-option-add,CFLAGS,CC,"xsaveopt (%rax)",-DHAVE_AS_XSAVEOPT)
>  $(call as-option-add,CFLAGS,CC,"rdseed %eax",-DHAVE_AS_RDSEED)
> +$(call as-option-add,CFLAGS,CC,"clac",-DHAVE_AS_CLAC_STAC)
>  $(call as-option-add,CFLAGS,CC,"clwb (%rax)",-DHAVE_AS_CLWB)
>  $(call as-option-add,CFLAGS,CC,".equ \"x\"$$(comma)1",-DHAVE_AS_QUOTED_SYM)
>  $(call as-option-add,CFLAGS,CC,"invpcid (%rax)$$(comma)%rax",-DHAVE_AS_INVPCID)
> --- a/xen/arch/x86/asm-macros.c
> +++ b/xen/arch/x86/asm-macros.c
> @@ -1 +1,2 @@
> +#include <asm/asm-defns.h>
>  #include <asm/alternative-asm.h>
> --- /dev/null
> +++ b/xen/include/asm-x86/asm-defns.h

Maybe this could be asm-insn.h or a different name? I find it
confusing to have asm-defns.h and an asm_defns.h.

Thanks, Roger.
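
For readers puzzling over the Makefile recipe quoted above: the generated asm-macros.h must parse both as C and as gas assembly. The trick is that gas on x86 treats `#...` lines as comments, so `.if 0`/`.endif` hide the `asm()` line from the assembler, while the `#if 0` blocks hide the `.if`/`.endif` lines (and the appended assembler macros) from the C preprocessor. A sketch of the resulting file, with an illustrative include path in place of the real `$@`:

```c
#include <string.h>

/* Illustrative reconstruction of what the recipe emits, line by line.
 * The real rule appends the preprocessed assembler macros ($<) after
 * the final ".endif"; the path in the asm() line is a stand-in. */
static const char *generated[] = {
    "#if 0",                                      /* hides .if 0 from CPP  */
    ".if 0",                                      /* gas: skip the asm()   */
    "#endif",
    "#ifndef __ASM_MACROS_H__",                   /* multiple inclusion    */
    "#define __ASM_MACROS_H__",                   /* guard added by patch  */
    "asm ( \".include \\\"asm-macros.h\\\"\" );", /* C side: pull file in  */
    "#endif /* __ASM_MACROS_H__ */",
    "#if 0",                                      /* hide the rest from C  */
    ".endif",                                     /* gas resumes here      */
};

static const unsigned int generated_lines =
    sizeof(generated) / sizeof(*generated);

static const char *guard_line(void)
{
    return generated[3];
}
```

Without the guard, including asm-macros.h twice from C would emit the `asm(".include ...")` statement twice, which is what the patch's new `#ifndef` wrapper prevents.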


From xen-devel-bounces@lists.xenproject.org Mon Jul 27 15:00:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jul 2020 15:00:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k04br-0003cC-Eh; Mon, 27 Jul 2020 15:00:11 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1WzZ=BG=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1k04bq-0003c7-L9
 for xen-devel@lists.xenproject.org; Mon, 27 Jul 2020 15:00:10 +0000
X-Inumbo-ID: dd15a852-d019-11ea-8ac9-bc764e2007e4
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id dd15a852-d019-11ea-8ac9-bc764e2007e4;
 Mon, 27 Jul 2020 15:00:09 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595862009;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:content-transfer-encoding:in-reply-to;
 bh=ghC7Sl4Drx1oxfzrzb59GnoNx/B87pPHw6/ehG6Kud0=;
 b=f4vcSbrZdhdUkdKMRTUlaRIxh06ktlghnAdnXURcVQ4nBoLWcl8gX37R
 tVtihFwkwbyFDyAn/ROGUMwi9rTFX5LArblO5g0wpekq7dwJPdC6WSeig
 24ND7ua8w2jYR5IvybEDjbbKtfFqwfNp1hwTf0kc4KCcb4N1n+FsGS+A3 Q=;
Authentication-Results: esa5.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: V4ujmgoOXi3dcztOsZvxoPbi352l92IsXh3e1o0vRIzcIcSFavIRh56fhekWu4K9n9S2U4y3Y8
 mYpR/Fk6HqZ14vmJjFvg6Sb6CoIyNCMevSJxF7morJGqCKqMeZ9Sr/zSg+F25MEGAV9iLAwVAh
 Z0xwg0EfU67qe/QMcAAMXs+88Wxq559J1782ovBmOwOl0Bng1NSUy01vnscHWzkZEvlq7WvCcn
 ISp05G+Z7AB5y0Qlub8syfNYUQKNhd2Kqx+i58UI/99qL7lySZNwpmh1WCnsM+N/OroE6qnQWm
 Zrs=
X-SBRS: 2.7
X-MesageID: 23455758
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,402,1589256000"; d="scan'208";a="23455758"
Date: Mon, 27 Jul 2020 17:00:02 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Subject: Re: [PATCH 2/4] x86: reduce CET-SS related #ifdef-ary
Message-ID: <20200727150002.GS7191@Air-de-Roger>
References: <58b9211a-f6dd-85da-d0bd-c927ac537a5d@suse.com>
 <58615a18-7f81-c000-d499-1a49f4752879@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <58615a18-7f81-c000-d499-1a49f4752879@suse.com>
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Wed, Jul 15, 2020 at 12:48:46PM +0200, Jan Beulich wrote:
> Commit b586a81b7a90 ("x86/CET: Fix build following c/s 43b98e7190") had
> to introduce a number of #ifdef-s to make the build work with older tool
> chains. Introduce an assembler macro to cover tool chains that don't know
> about CET-SS, allowing some conditionals where SETSSBSY alone is the
> problem to be dropped again.
> 
> No change to generated code.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Looks like an improvement overall in code clarity:

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

Can you test on clang? Just to be on the safe side, otherwise I can
test it.

> ---
> Now that I've done this I'm no longer sure which direction is better to
> follow: On one hand this introduces dead code (even if just NOPs) into
> CET-SS-disabled builds. Otoh this is a step towards breaking the tool
> chain version dependency of the feature.
> 
> I've also dropped conditionals around bigger chunks of code; while I
> think that's preferable, I'm open to undo those parts.
> 
> --- a/xen/arch/x86/boot/x86_64.S
> +++ b/xen/arch/x86/boot/x86_64.S
> @@ -31,7 +31,6 @@ ENTRY(__high_start)
>          jz      .L_bsp
>  
>          /* APs.  Set up shadow stacks before entering C. */
> -#ifdef CONFIG_XEN_SHSTK
>          testl   $cpufeat_mask(X86_FEATURE_XEN_SHSTK), \
>                  CPUINFO_FEATURE_OFFSET(X86_FEATURE_XEN_SHSTK) + boot_cpu_data(%rip)
>          je      .L_ap_shstk_done
> @@ -55,7 +54,6 @@ ENTRY(__high_start)
>          mov     $XEN_MINIMAL_CR4 | X86_CR4_CET, %ecx
>          mov     %rcx, %cr4
>          setssbsy
> -#endif
>  
>  .L_ap_shstk_done:
>          call    start_secondary
> --- a/xen/arch/x86/setup.c
> +++ b/xen/arch/x86/setup.c
> @@ -668,7 +668,7 @@ static void __init noreturn reinit_bsp_s
>      stack_base[0] = stack;
>      memguard_guard_stack(stack);
>  
> -    if ( IS_ENABLED(CONFIG_XEN_SHSTK) && cpu_has_xen_shstk )
> +    if ( cpu_has_xen_shstk )
>      {
>          wrmsrl(MSR_PL0_SSP,
>                 (unsigned long)stack + (PRIMARY_SHSTK_SLOT + 1) * PAGE_SIZE - 8);
> --- a/xen/arch/x86/x86_64/compat/entry.S
> +++ b/xen/arch/x86/x86_64/compat/entry.S
> @@ -198,9 +198,7 @@ ENTRY(cr4_pv32_restore)
>  
>  /* See lstar_enter for entry register state. */
>  ENTRY(cstar_enter)
> -#ifdef CONFIG_XEN_SHSTK
>          ALTERNATIVE "", "setssbsy", X86_FEATURE_XEN_SHSTK
> -#endif
>          /* sti could live here when we don't switch page tables below. */
>          CR4_PV32_RESTORE
>          movq  8(%rsp),%rax /* Restore %rax. */
> --- a/xen/arch/x86/x86_64/entry.S
> +++ b/xen/arch/x86/x86_64/entry.S
> @@ -237,9 +237,7 @@ iret_exit_to_guest:
>   * %ss must be saved into the space left by the trampoline.
>   */
>  ENTRY(lstar_enter)
> -#ifdef CONFIG_XEN_SHSTK
>          ALTERNATIVE "", "setssbsy", X86_FEATURE_XEN_SHSTK

Should the setssbsy be quoted, or does it not matter? I'm asking
because the same construction used for CLAC/STAC doesn't quote the
instruction.

Roger.


From xen-devel-bounces@lists.xenproject.org Mon Jul 27 15:09:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jul 2020 15:09:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k04ka-0003sy-EG; Mon, 27 Jul 2020 15:09:12 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=8heM=BG=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1k04kZ-0003st-C0
 for xen-devel@lists.xenproject.org; Mon, 27 Jul 2020 15:09:11 +0000
X-Inumbo-ID: 1f50df88-d01b-11ea-a78b-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1f50df88-d01b-11ea-a78b-12813bfff9fa;
 Mon, 27 Jul 2020 15:09:10 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 88571B71A;
 Mon, 27 Jul 2020 15:09:19 +0000 (UTC)
Subject: Re: [PATCH v3 4/4] xen: add helpers to allocate unpopulated memory
To: Roger Pau Monne <roger.pau@citrix.com>, linux-kernel@vger.kernel.org
References: <20200727091342.52325-1-roger.pau@citrix.com>
 <20200727091342.52325-5-roger.pau@citrix.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <06e21488-25a6-1c9f-366b-3c2ab63e4632@suse.com>
Date: Mon, 27 Jul 2020 17:09:07 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20200727091342.52325-5-roger.pau@citrix.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>,
 David Airlie <airlied@linux.ie>, Yan Yankovskyi <yyankovskyi@gmail.com>,
 David Hildenbrand <david@redhat.com>, dri-devel@lists.freedesktop.org,
 Michal Hocko <mhocko@kernel.org>, linux-mm@kvack.org,
 Daniel Vetter <daniel@ffwll.ch>, xen-devel@lists.xenproject.org,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Dan Williams <dan.j.williams@intel.com>,
 Dan Carpenter <dan.carpenter@oracle.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 27.07.20 11:13, Roger Pau Monne wrote:
> To be used in order to create foreign mappings. This is based on the
> ZONE_DEVICE facility, which is used by persistent memory devices in
> order to create struct pages and kernel virtual mappings for the IOMEM
> areas of such devices. Note that on kernels without support for
> ZONE_DEVICE Xen will fall back to using ballooned pages in order to
> create foreign mappings.
> 
> The newly added helpers use the same parameters as the existing
> {alloc/free}_xenballooned_pages functions, which allows for in-place
> replacement of the callers. Once a memory region has been added to be
> used as scratch mapping space it will no longer be released, and pages
> returned are kept in a linked list. This allows keeping a buffer of
> pages and prevents resorting to frequent additions and removals of
> regions.
> 
> If enabled (because ZONE_DEVICE is supported), the usage of the new
> functionality untangles the Xen balloon and RAM hotplug from the usage
> of unpopulated physical memory ranges to map foreign pages, which is
> the correct thing to do in order to avoid making mappings of foreign
> pages depend on memory hotplug.
> 
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> ---
> I've not added a new memory_type type and just used
> MEMORY_DEVICE_DEVDAX which seems to be what we want for such memory
> regions. I'm unsure whether abusing this type is fine, or if I should
> instead add a specific type, maybe MEMORY_DEVICE_GENERIC? I don't
> think we should be using a specific Xen type at all.

What about introducing MEMORY_DEVICE_VIRT? The comment for
MEMORY_DEVICE_DEVDAX doesn't really fit, as there is no character device
involved.

In the end it's up to the memory management maintainers to decide that.

Other than that you can have my

Reviewed-by: Juergen Gross <jgross@suse.com>


Juergen
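
The page-buffer scheme described in the quoted cover letter (regions added once, never released; freed pages kept on a linked list for reuse) can be sketched in plain C. This is a userspace analogy under stated assumptions, not the kernel implementation; `add_region()` stands in for growing the unpopulated ZONE_DEVICE range, and all names are illustrative:

```c
#include <stdlib.h>

/* Sketch: freed "pages" go back onto a free list and are reused, so a new
 * region is only added when the buffer runs dry, avoiding frequent
 * additions and removals of regions. */
struct page { struct page *next; };

static struct page *free_list;     /* buffer of unpopulated pages */
static int regions_added;          /* how often the buffer had to grow */

static void add_region(unsigned int nr)   /* stand-in for growing the range */
{
    regions_added++;
    while (nr--) {
        struct page *pg = malloc(sizeof(*pg));
        pg->next = free_list;
        free_list = pg;
    }
}

static struct page *alloc_page_buffered(void)
{
    if (!free_list)
        add_region(4);             /* grow only when the buffer is empty */
    struct page *pg = free_list;
    free_list = pg->next;
    return pg;
}

static void free_page_buffered(struct page *pg)
{
    pg->next = free_list;          /* returned to the buffer, not released */
    free_list = pg;
}

int demo(void)
{
    struct page *a = alloc_page_buffered();
    free_page_buffered(a);
    struct page *b = alloc_page_buffered();  /* reuses the buffered page */
    (void)b;
    return regions_added;          /* 1: no second region was needed */
}
```

The point of the design is visible in `demo()`: an alloc/free/alloc sequence touches the region machinery only once, exactly the behaviour the cover letter describes.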


From xen-devel-bounces@lists.xenproject.org Mon Jul 27 15:10:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jul 2020 15:10:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k04m8-0004d2-Pm; Mon, 27 Jul 2020 15:10:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1WzZ=BG=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1k04m7-0004cv-Nj
 for xen-devel@lists.xenproject.org; Mon, 27 Jul 2020 15:10:47 +0000
X-Inumbo-ID: 58f64a2b-d01b-11ea-8aca-bc764e2007e4
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 58f64a2b-d01b-11ea-8aca-bc764e2007e4;
 Mon, 27 Jul 2020 15:10:47 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595862647;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:in-reply-to;
 bh=FySu7NYNkuhkayE6m51zCZc7Az18SjtYiAYtvXrdIJ0=;
 b=hzKGm89frz5lrozRIEGBcmne3Pxc2zfGLhlbxD3Pdl2s9Xz+U2rJA5ZI
 WXK15Qv+G5DgNQP9kHIe1iO2cT//urePke4yz3im14QJkBQ4NmU6YqcXC
 1kWHX/rPkHu+X1/01KpBlGAYWQYEJhfAuziK7VYqpLsG5t+/9QXOVx9kM 8=;
Authentication-Results: esa3.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: dSgr5E+vM4PNlPlc1sR9CerkiXuoznkHZYav8BJemLR0KjkXNoPsxHlynaiilZKGd2MVTS3q+V
 FpUYGOJc/ov5KZ/m7BgMr4RAnh7AfTJb9shBaOABmxZI7Ps4U3+I7vxNoY2ZNMxuTbKq2YHoRl
 aWSYDcwozAEYIbh0PIdkCFhlTRDeSjGL+q4ccor176FJEaRduWUdznn4/BpIN8bgbudAMZHCuK
 cJGcdAWxV1u8s2TE2p4jXCLX/99U0HwGBgFOziNu8XDxFLsniRqfkBkjI4hYQ0HPxPaM8/3cEV
 +xU=
X-SBRS: 2.7
X-MesageID: 23262252
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,402,1589256000"; d="scan'208";a="23262252"
Date: Mon, 27 Jul 2020 17:10:39 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH 3/4] x86: drop ASM_{CL,ST}AC
Message-ID: <20200727151039.GT7191@Air-de-Roger>
References: <58b9211a-f6dd-85da-d0bd-c927ac537a5d@suse.com>
 <048c3702-f0b0-6f8e-341e-bec6cfaded27@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
In-Reply-To: <048c3702-f0b0-6f8e-341e-bec6cfaded27@suse.com>
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Wed, Jul 15, 2020 at 12:49:04PM +0200, Jan Beulich wrote:
> Use ALTERNATIVE directly, such that at the use sites it is visible that
> alternative code patching is in use. Similarly avoid hiding the fact in
> SAVE_ALL.
> 
> No change to generated code.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

I think the code is correct; I'm not sure, however, whether open-coding
ASM_CLAC/STAC is better than what we currently do. I will leave it for
Andrew to decide, since he is more knowledgeable about this piece of
code.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Mon Jul 27 15:16:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jul 2020 15:16:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k04ro-0004qU-Fs; Mon, 27 Jul 2020 15:16:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1WzZ=BG=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1k04rn-0004qP-Kq
 for xen-devel@lists.xenproject.org; Mon, 27 Jul 2020 15:16:39 +0000
X-Inumbo-ID: 2a8747f6-d01c-11ea-8acc-bc764e2007e4
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2a8747f6-d01c-11ea-8acc-bc764e2007e4;
 Mon, 27 Jul 2020 15:16:38 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595862998;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:content-transfer-encoding:in-reply-to;
 bh=QxHKKpPVPtY7n6gB7RbvButmrCihfJsVSdeL8qvd5wc=;
 b=EyWp971Mae8GGielSL3I6hDyQvsD4ryN4cIB249T/PTvUJ0+Jutlc4TX
 PfdMYFI6RBdwde/bWQrE/Dmr0NwD7yRbVxyNpCp6bBV1s1cDocloIHiAG
 McT2kvTPcMkmvGoQyD5NooLPErFUkWX0/pBbRbowdhiSiTc6Q1gG+Gfbc E=;
Authentication-Results: esa2.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: HTb+sqVtINoCdeeUBznbMtFR82HwkjLAIsfU5vtGkArlRP6rMMhwA1YFoYxW6DUHSioQ5dtwi+
 jJ5LR+9o2HJ9hsxvdIv30l3Tzro+Fr41lmEQLOhG4Llf7VymmvGNm4bnLwPCCAHtvmSfUZgeoQ
 bTZ5Go1fiokDFdf9piqfYxBe7kdX3ta1gSELVbToNe0OAWugQ8ByokLsueKVYCh3irIUUxMpbR
 kmVn8BnC7N03jEfBJ1g/UiGm22wWA6O8jodL4d5vGoS+uc/+CogwWbkUnE+r0PNdy/sAZpiWqy
 5ek=
X-SBRS: 2.7
X-MesageID: 23281947
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,402,1589256000"; d="scan'208";a="23281947"
Date: Mon, 27 Jul 2020 17:16:30 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Subject: Re: [PATCH 4/4] x86: fold indirect_thunk_asm.h into asm-defns.h
Message-ID: <20200727151630.GU7191@Air-de-Roger>
References: <58b9211a-f6dd-85da-d0bd-c927ac537a5d@suse.com>
 <af69f44a-5009-40e8-fbbc-6f78b67f7adb@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <af69f44a-5009-40e8-fbbc-6f78b67f7adb@suse.com>
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Wed, Jul 15, 2020 at 12:49:40PM +0200, Jan Beulich wrote:
> There's little point in having two separate headers both getting
> included by asm_defns.h. This in particular reduces the number of
> instances of guarding asm(".include ...") suitably in such dual use
> headers.
> 
> No change to generated code.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

LGTM:

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

Some testing with clang might be required, as with the other patch.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Mon Jul 27 15:20:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jul 2020 15:20:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k04vP-0005dm-0o; Mon, 27 Jul 2020 15:20:23 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=s7zT=BG=arm.com=rahul.singh@srs-us1.protection.inumbo.net>)
 id 1k04vN-0005de-3n
 for xen-devel@lists.xenproject.org; Mon, 27 Jul 2020 15:20:21 +0000
X-Inumbo-ID: ad03ce5c-d01c-11ea-8acf-bc764e2007e4
Received: from EUR05-AM6-obe.outbound.protection.outlook.com (unknown
 [40.107.22.68]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ad03ce5c-d01c-11ea-8acf-bc764e2007e4;
 Mon, 27 Jul 2020 15:20:17 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com; 
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=A3XhbjLQ6/jVTZ3RUTG9QheyREIUjWamtDfW9yH0Wj8=;
 b=7Vt2ExEnX381x3VY0LA+VL5HvSV+fwK4Oh61Q+3IEE7kL3TJkh2qDrxzh6SL+Yu7YYQo8MKKDO+BzX4erDCuj+5TWM8AZZIjgde0roIXX907m5zhhkjRdNgGo3Jw6OMAw93a8i3bBXvFAccMUdD/eEaNtHJfX8s0HE/FlA7Kvr4=
Received: from DB6PR0802CA0043.eurprd08.prod.outlook.com (2603:10a6:4:a3::29)
 by AM5PR0801MB1826.eurprd08.prod.outlook.com (2603:10a6:203:39::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3216.22; Mon, 27 Jul
 2020 15:20:14 +0000
Received: from DB5EUR03FT032.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:4:a3:cafe::b7) by DB6PR0802CA0043.outlook.office365.com
 (2603:10a6:4:a3::29) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3216.21 via Frontend
 Transport; Mon, 27 Jul 2020 15:20:14 +0000
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org;
 dmarc=bestguesspass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DB5EUR03FT032.mail.protection.outlook.com (10.152.20.162) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3216.10 via Frontend Transport; Mon, 27 Jul 2020 15:20:14 +0000
Received: ("Tessian outbound 7de93d801f24:v62");
 Mon, 27 Jul 2020 15:20:14 +0000
X-CheckRecipientChecked: true
X-CR-MTA-CID: fa45cc6b87669578
X-CR-MTA-TID: 64aa7808
Received: from 59c6df8ef6ab.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 4021C7F5-A8B0-421D-A20C-FCE0E5090E3A.1; 
 Mon, 27 Jul 2020 15:20:09 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 59c6df8ef6ab.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Mon, 27 Jul 2020 15:20:09 +0000
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=HXeJ70l1Vb2aCT/rpWXW1SEz2kGjn0tP96Af95LMgYOCnEdpvfAs4YpyrDVxzHwZQDFfy7WTZ6eBQhxJ3cb8N69kPXBwAAemUwApRJOtJa8DGuNl904nv5dUi71eqv9bSvZFPdkmVVDexQajTuwO8Wlvd86Pkm1TeQ7Xddf2CwO7EFblHehzslVBZxQa5hzjtjpgc+/jA5Uoet0OozHpF35GBr683eglYlC3Npy0/81zXp3tR5f6TjUcFjzi8lVZ4CGuFvN2tAF0Ft7AEHdnQFmdNW++B3BtPN4dzVCQRVjzHJH7Bm/5ix4+bnh2T16v9NG/NxNQCujsD7/RwBP0ig==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; 
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=A3XhbjLQ6/jVTZ3RUTG9QheyREIUjWamtDfW9yH0Wj8=;
 b=eNN4Q1ZTCjjhJYPb6fubQr2WpypO0mACr7DNBGXn4Kib1j5CRm7Eon59d96rSZVHu52weAe5+uWSfqEt7LLRAjLhiaClcfd1ZvnHZ7Ib3U6945Pv9tO4u4gmARF2bDFdJq2nVdh1bOPFsKkRIrM7WBfma48oW/LdATL5U0tEZ/7vkQ1PiIXEnqRiQbyx/1m1tGVo1Ek2yyk0Rd/kEMXMrUgjs1oDFIgaWPJ1xNLPCaef+M+svz9jELwQVEaY4fVK4k5ZcEzyrIpuDkGrAQ+z38nSYeWMrGNxy1s1M2OGTSPufIepCbzIdA4Bv8CJlubt3D8jUiJkhTaLILuSqtx3rw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com; 
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=A3XhbjLQ6/jVTZ3RUTG9QheyREIUjWamtDfW9yH0Wj8=;
 b=7Vt2ExEnX381x3VY0LA+VL5HvSV+fwK4Oh61Q+3IEE7kL3TJkh2qDrxzh6SL+Yu7YYQo8MKKDO+BzX4erDCuj+5TWM8AZZIjgde0roIXX907m5zhhkjRdNgGo3Jw6OMAw93a8i3bBXvFAccMUdD/eEaNtHJfX8s0HE/FlA7Kvr4=
Received: from AM6PR08MB3560.eurprd08.prod.outlook.com (2603:10a6:20b:4c::19)
 by AM5PR0801MB1682.eurprd08.prod.outlook.com (2603:10a6:203:2e::19)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3216.24; Mon, 27 Jul
 2020 15:20:06 +0000
Received: from AM6PR08MB3560.eurprd08.prod.outlook.com
 ([fe80::e891:3b33:766:cad5]) by AM6PR08MB3560.eurprd08.prod.outlook.com
 ([fe80::e891:3b33:766:cad5%6]) with mapi id 15.20.3216.033; Mon, 27 Jul 2020
 15:20:06 +0000
From: Rahul Singh <Rahul.Singh@arm.com>
To: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
Subject: Re: [RFC PATCH v1 1/4] arm/pci: PCI setup and PCI host bridge
 discovery within XEN on ARM.
Thread-Topic: [RFC PATCH v1 1/4] arm/pci: PCI setup and PCI host bridge
 discovery within XEN on ARM.
Thread-Index: AQHWYPbO0TX+B10vbkuLRSjkBERSxqkV0u2AgAB8NICABUHAgA==
Date: Mon, 27 Jul 2020 15:20:06 +0000
Message-ID: <80133DE4-AFE6-4DCE-AC8F-663C89D4C975@arm.com>
References: <cover.1595511416.git.rahul.singh@arm.com>
 <64ebd4ef614b36a5844c52426a4a6a4a23b1f087.1595511416.git.rahul.singh@arm.com>
 <alpine.DEB.2.21.2007231055230.17562@sstabellini-ThinkPad-T480s>
 <3ee41590-e8ca-84d6-3010-6e5dffe91df0@epam.com>
In-Reply-To: <3ee41590-e8ca-84d6-3010-6e5dffe91df0@epam.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
Authentication-Results-Original: epam.com; dkim=none (message not signed)
 header.d=none;epam.com; dmarc=none action=none header.from=arm.com;
x-originating-ip: [86.26.38.125]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 0b057116-8fa3-4d65-dc46-08d832408fe4
x-ms-traffictypediagnostic: AM5PR0801MB1682:|AM5PR0801MB1826:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS: <AM5PR0801MB18264F319376646FEFCE0D16FC720@AM5PR0801MB1826.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:5236;OLM:5236;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original: 3Xs77B9gnH7K3IGX2oJ23VTtUJ/SLx4lI8nRzkLX9WR6hHKomiw8g3UYIyFuHoljoZ9UhpMQP01CHfzHW/Fwh3dmVgseulO+CjZjhhOXWEum/i+pyIN0goROzwUpiq3qrndszRCSJI5oj6pWjL0zIhH9I/m64IfXBYNJ7W8eekrfaKKIJXJImBdNoacktvFhO+2Y36RkmTPCeUhTiiOHdsKA7wT5F8J/QoPteJrIsYWSMqzlPlB+aCpt0NnjqgFT5uE+NWu6Ce+nqKZVXkatQLeqyGB3lG5D7DwengC+cI4aSixNo1qhkD9/y6WqlpLGTPKXbTdXNzl92eO+w/+9Ir8j2x0xkzR+3aNXw6lumPtr4tyd7aGI5E4GAikuk4WBjdhvcQcbjhUJzdeyipTBLWONrxwVhAc2j+FlZhySAoeB2Nwmi7NVkRbZav5BfwcpLSyMQSzTSndcQCQlVGKpSQ==
X-Forefront-Antispam-Report-Untrusted: CIP:255.255.255.255; CTRY:; LANG:en;
 SCL:1; SRV:; IPV:NLI; SFV:NSPM; H:AM6PR08MB3560.eurprd08.prod.outlook.com;
 PTR:; CAT:NONE; SFTY:;
 SFS:(4636009)(136003)(346002)(376002)(366004)(396003)(39860400002)(91956017)(66446008)(30864003)(83380400001)(66946007)(2906002)(8936002)(64756008)(66476007)(66556008)(26005)(76116006)(6512007)(966005)(8676002)(478600001)(186003)(33656002)(86362001)(316002)(2616005)(6486002)(6506007)(53546011)(36756003)(71200400001)(5660300002)(6916009)(4326008)(54906003)(55236004)(2004002)(559001)(579004);
 DIR:OUT; SFP:1101; 
x-ms-exchange-antispam-messagedata: TMIjpvKjLf6zcGGPh7ZQDVIX396LDoXYampDEG0BhHIrqEKYDNBphHd1I55JvoVVKRX5XmJW/OUmgmCHNAHlhi+9UZCAJawgvXayaVvHnXytRVh0vVpQDAuSaQa11zRZh9zcl2w/5swUXfC2SGrZWRWRjC4InHmfLyMsYKCSdeXgoR6fj3mkrH6pq4WlvMPL38Tiy+wIdM5nutFWJTCtxBQnFZmHfNVjRnjyas5ktRLVZfDgOl5dqy2EFn73hq0+fQAEerNEQIupp9lfNbJQumk1YUWCfSEaYbeDpDSJexD2Sq5GDuEcR7PZDYdBONQvl0nD6sctJR4wOQwm+AamwUfE+Lhrgy9l551neLTF9awK/QzYqyqbwTi8ljmzMkQpwdNX2oZXEkb6/OmGpKMljgemtzXQ52cKZRPfvxCfgGPvEPmHnvL9iu2O9Nlf6bGPICn/B0On/p2ccbo2mgbywt8mE1JIEPvyjYnnYt3ceGM=
Content-Type: text/plain; charset="utf-8"
Content-ID: <0FE2426AFA964245BCD1800008565502@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM5PR0801MB1682
Original-Authentication-Results: epam.com; dkim=none (message not signed)
 header.d=none;epam.com; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped: DB5EUR03FT032.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs: 7bd89ccc-9e85-4fc4-18bd-08d832408b3a
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: u7LhzRXGVQ+Z5xr2nAiMxxfVebGyW/gqXQp7N0sqIIX0lsq9wDojK2NnS56xNlBIuSL8SGWpuI6Mjx+6yigItCej51B1TZTqV/MumJfww7/0uui9THSEkbHvl3jzV2LpPJS3wNulfq7V8csP0c1KeQolVpbBgAfDPEd6fMPSMfmWKCFbwYUn9mhgbMDH7H+vaoPxhWYHdQIR/SKpGw4k6gTRHLzzvnUWg8+W37zyS/KpcZgmSegZMVW9bwgi6HHqHrxVslHbV+7ArAoCmQzvw99eQl339ztpII1M/pfG5w+LGM/zJUSFttgBd1BbX58CSQJfF+6wMHBIdIqJdaRwhMrSPQvgXSRkPvvxRNOWp7KPBnLaR+vg38mkf3SiBZsp20LJ5EuIzY2/YZLbpQty2V4w/uYORSGO3Q3N+3Ma9M1RIX5PjhHzMOdPDu1Tr/oakaz39gI9F2W7wc+lfhD6BIL6BZrTwolyUUPpxMpQkD8VWcFn1AYys2XQFaGGVDZP
X-Forefront-Antispam-Report: CIP:63.35.35.123; CTRY:IE; LANG:en; SCL:1; SRV:;
 IPV:CAL; SFV:NSPM; H:64aa7808-outbound-1.mta.getcheckrecipient.com;
 PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com; CAT:NONE; SFTY:;
 SFS:(4636009)(376002)(39860400002)(346002)(396003)(136003)(46966005)(33656002)(478600001)(86362001)(6862004)(36756003)(82310400002)(26005)(47076004)(5660300002)(83380400001)(336012)(82740400003)(4326008)(30864003)(8676002)(53546011)(6506007)(316002)(8936002)(6486002)(2906002)(186003)(81166007)(356005)(6512007)(70206006)(70586007)(2616005)(966005)(107886003)(54906003)(2004002);
 DIR:OUT; SFP:1101; 
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 27 Jul 2020 15:20:14.6281 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 0b057116-8fa3-4d65-dc46-08d832408fe4
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d; Ip=[63.35.35.123];
 Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource: DB5EUR03FT032.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM5PR0801MB1826
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 "andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>,
 "jbeulich@suse.com" <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 nd <nd@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 "roger.pau@citrix.com" <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>



> On 24 Jul 2020, at 8:03 am, Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com> wrote:
> 
> 
> On 7/24/20 2:38 AM, Stefano Stabellini wrote:
>> + Jan, Andrew, Roger
>> 
>> Please have a look at my comment on whether we should share the MMCFG
>> code below, feel free to ignore the rest :-)
>> 
>> 
>> On Thu, 23 Jul 2020, Rahul Singh wrote:
>>> XEN during boot will read the PCI device tree node “reg” property
>>> and will map the PCI config space to the XEN memory.
>>> 
>>> XEN will read the “linux, pci-domain” property from the device tree
>>> node and configure the host bridge segment number accordingly.
>>> 
>>> As of now "pci-host-ecam-generic" compatible board is supported.
>>> 
>>> Change-Id: If32f7748b7dc89dd37114dc502943222a2a36c49
>> What is this Change-Id property?
> Gerrit ;)
>> 
>> 
>>> Signed-off-by: Rahul Singh <rahul.singh@arm.com>
>>> ---
>>>  xen/arch/arm/Kconfig                |   7 +
>>>  xen/arch/arm/Makefile               |   1 +
>>>  xen/arch/arm/pci/Makefile           |   4 +
>>>  xen/arch/arm/pci/pci-access.c       | 101 ++++++++++++++
>>>  xen/arch/arm/pci/pci-host-common.c  | 198 ++++++++++++++++++++++++++++
>>>  xen/arch/arm/pci/pci-host-generic.c | 131 ++++++++++++++++++
>>>  xen/arch/arm/pci/pci.c              | 112 ++++++++++++++++
>>>  xen/arch/arm/setup.c                |   2 +
>>>  xen/include/asm-arm/device.h        |   7 +-
>>>  xen/include/asm-arm/pci.h           |  97 +++++++++++++-
>>>  10 files changed, 654 insertions(+), 6 deletions(-)
>>>  create mode 100644 xen/arch/arm/pci/Makefile
>>>  create mode 100644 xen/arch/arm/pci/pci-access.c
>>>  create mode 100644 xen/arch/arm/pci/pci-host-common.c
>>>  create mode 100644 xen/arch/arm/pci/pci-host-generic.c
>>>  create mode 100644 xen/arch/arm/pci/pci.c
>>> 
>>> diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig
>>> index 2777388265..ee13339ae9 100644
>>> --- a/xen/arch/arm/Kconfig
>>> +++ b/xen/arch/arm/Kconfig
>>> @@ -31,6 +31,13 @@ menu "Architecture Features"
>>> 
>>>  source "arch/Kconfig"
>>> 
>>> +config ARM_PCI
>>> +	bool "PCI Passthrough Support"
>>> +	depends on ARM_64
>>> +	---help---
>>> +
>>> +	  PCI passthrough support for Xen on ARM64.
>>> +
>>>  config ACPI
>>>  	bool "ACPI (Advanced Configuration and Power Interface) Support" if EXPERT
>>>  	depends on ARM_64
>>> diff --git a/xen/arch/arm/Makefile b/xen/arch/arm/Makefile
>>> index 7e82b2178c..345cb83eed 100644
>>> --- a/xen/arch/arm/Makefile
>>> +++ b/xen/arch/arm/Makefile
>>> @@ -6,6 +6,7 @@ ifneq ($(CONFIG_NO_PLAT),y)
>>>  obj-y += platforms/
>>>  endif
>>>  obj-$(CONFIG_TEE) += tee/
>>> +obj-$(CONFIG_ARM_PCI) += pci/
>>> 
>>>  obj-$(CONFIG_HAS_ALTERNATIVE) += alternative.o
>>>  obj-y += bootfdt.init.o
>>> diff --git a/xen/arch/arm/pci/Makefile b/xen/arch/arm/pci/Makefile
>>> new file mode 100644
>>> index 0000000000..358508b787
>>> --- /dev/null
>>> +++ b/xen/arch/arm/pci/Makefile
>>> @@ -0,0 +1,4 @@
>>> +obj-y += pci.o
>>> +obj-y += pci-host-generic.o
>>> +obj-y += pci-host-common.o
>>> +obj-y += pci-access.o
>>> diff --git a/xen/arch/arm/pci/pci-access.c b/xen/arch/arm/pci/pci-access.c
>>> new file mode 100644
>>> index 0000000000..c53ef58336
>>> --- /dev/null
>>> +++ b/xen/arch/arm/pci/pci-access.c
>>> @@ -0,0 +1,101 @@
>>> +/*
>>> + * Copyright (C) 2020 Arm Ltd.
> I think SPDX will fit better in any new code.

It will be good if community helps us to decide which license is best for new files.

>>> + *
>>> + * This program is free software; you can redistribute it and/or modify
>>> + * it under the terms of the GNU General Public License version 2 as
>>> + * published by the Free Software Foundation.
>>> + *
>>> + * This program is distributed in the hope that it will be useful,
>>> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
>>> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
>>> + * GNU General Public License for more details.
>>> + *
>>> + * You should have received a copy of the GNU General Public License
>>> + * along with this program.  If not, see <http://www.gnu.org/licenses/>.
>>> + */
>>> +
>>> +#include <xen/init.h>
>>> +#include <xen/pci.h>
>>> +#include <asm/pci.h>
>>> +#include <xen/rwlock.h>
>>> +
>>> +static uint32_t pci_config_read(pci_sbdf_t sbdf, unsigned int reg,
>>> +                            unsigned int len)
>>> +{
>>> +    int rc;
>>> +    uint32_t val = GENMASK(0, len * 8);
>>> +
>>> +    struct pci_host_bridge *bridge = pci_find_host_bridge(sbdf.seg, sbdf.bus);
>>> +
>>> +    if ( unlikely(!bridge) )
>>> +    {
>>> +        printk(XENLOG_ERR "Unable to find bridge for "PRI_pci"\n",
>>> +                sbdf.seg, sbdf.bus, sbdf.dev, sbdf.fn);
>>> +        return val;
>>> +    }
>>> +
>>> +    if ( unlikely(!bridge->ops->read) )
>>> +        return val;
>>> +
>>> +    rc = bridge->ops->read(bridge, (uint32_t) sbdf.sbdf, reg, len, &val);
>>> +    if ( rc )
>>> +        printk(XENLOG_ERR "Failed to read reg %#x len %u for "PRI_pci"\n",
>>> +                reg, len, sbdf.seg, sbdf.bus, sbdf.dev, sbdf.fn);
>>> +
>>> +    return val;
>>> +}
>>> +
>>> +static void pci_config_write(pci_sbdf_t sbdf, unsigned int reg,
>>> +        unsigned int len, uint32_t val)
>>> +{
>>> +    int rc;
>>> +    struct pci_host_bridge *bridge = pci_find_host_bridge(sbdf.seg, sbdf.bus);
>>> +
>>> +    if ( unlikely(!bridge) )
>>> +    {
>>> +        printk(XENLOG_ERR "Unable to find bridge for "PRI_pci"\n",
>>> +                sbdf.seg, sbdf.bus, sbdf.dev, sbdf.fn);
>>> +        return;
>>> +    }
>>> +
>>> +    if ( unlikely(!bridge->ops->write) )
>>> +        return;
>>> +
>>> +    rc = bridge->ops->write(bridge, (uint32_t) sbdf.sbdf, reg, len, val);
>>> +    if ( rc )
>>> +        printk(XENLOG_ERR "Failed to write reg %#x len %u for "PRI_pci"\n",
>>> +                reg, len, sbdf.seg, sbdf.bus, sbdf.dev, sbdf.fn);
>>> +}
>>> +
>>> +/*
>>> + * Wrappers for all PCI configuration access functions.
>>> + */
>>> +
>>> +#define PCI_OP_WRITE(size, type) \
>>> +    void pci_conf_write##size (pci_sbdf_t sbdf,unsigned int reg, type val) \
>>> +{                                                     \
>>> +    pci_config_write(sbdf, reg, size / 8, val);     \
>>> +}
>>> +
>>> +#define PCI_OP_READ(size, type) \
>>> +    type pci_conf_read##size (pci_sbdf_t sbdf, unsigned int reg)  \
>>> +{                                               \
>>> +    return pci_config_read(sbdf, reg, size / 8);     \
>>> +}
>>> +
>>> +PCI_OP_READ(8, u8)
>>> +PCI_OP_READ(16, u16)
>>> +PCI_OP_READ(32, u32)
>>> +PCI_OP_WRITE(8, u8)
>>> +PCI_OP_WRITE(16, u16)
>>> +PCI_OP_WRITE(32, u32)
>> This looks like a subset of xen/arch/x86/x86_64/mmconfig_64.c ?
>> 
>> MMCFG is supposed to cover ECAM-compliant host bridges too, if I am not
>> mistaken. Is there any value in sharing the code with x86? It is OK if
>> we don't, but I would like to understand the reasoning.
>> 
>> 
>> 
>>> +/*
>>> + * Local variables:
>>> + * mode: C
>>> + * c-file-style: "BSD"
>>> + * c-basic-offset: 4
>>> + * tab-width: 4
>>> + * indent-tabs-mode: nil
>>> + * End:
>>> + */
>>> diff --git a/xen/arch/arm/pci/pci-host-common.c b/xen/arch/arm/pci/pci-host-common.c
>>> new file mode 100644
>>> index 0000000000..c5f98be698
>>> --- /dev/null
>>> +++ b/xen/arch/arm/pci/pci-host-common.c
>>> @@ -0,0 +1,198 @@
>>> +/*
>>> + * Copyright (C) 2020 Arm Ltd.
>>> + *
>>> + * Based on Linux drivers/pci/ecam.c
>>> + * Copyright 2016 Broadcom.
>>> + *
>>> + * Based on Linux drivers/pci/controller/pci-host-common.c
>>> + * Based on Linux drivers/pci/controller/pci-host-generic.c
>>> + * Copyright (C) 2014 ARM Limited Will Deacon <will.deacon@arm.com>
>>> + *
>>> + *
>>> + * This program is free software; you can redistribute it and/or modify
>>> + * it under the terms of the GNU General Public License version 2 as
>>> + * published by the Free Software Foundation.
>>> + *
>>> + * This program is distributed in the hope that it will be useful,
>>> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
>>> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
>>> + * GNU General Public License for more details.
>>> + *
>>> + * You should have received a copy of the GNU General Public License
>>> + * along with this program.  If not, see <http://www.gnu.org/licenses/>.
>>> + */
>>> +
>>> +#include <xen/init.h>
>>> +#include <xen/pci.h>
>>> +#include <asm/pci.h>
>>> +#include <xen/rwlock.h>
>>> +#include <xen/vmap.h>
>>> +
>>> +/*
>>> + * List for all the pci host bridges.
>>> + */
>>> +
>>> +static LIST_HEAD(pci_host_bridges);
>>> +
>>> +static bool __init dt_pci_parse_bus_range(struct dt_device_node *dev,
>>> +        struct pci_config_window *cfg)
>>> +{
>>> +    const __be32 *cells;
>>> +    uint32_t len;
>>> +
>>> +    cells = dt_get_property(dev, "bus-range", &len);
>>> +    /* bus-range should at least be 2 cells */
>>> +    if ( !cells || (len < (sizeof(*cells) * 2)) )
>>> +        return false;
>>> +
>>> +    cfg->busn_start = dt_next_cell(1, &cells);
>>> +    cfg->busn_end = dt_next_cell(1, &cells);
>>> +
>>> +    return true;
>>> +}
>>> +
>>> +static inline void __iomem *pci_remap_cfgspace(paddr_t start, size_t len)
>>> +{
>>> +    return ioremap_nocache(start, len);
>>> +}
>>> +
>>> +static void pci_ecam_free(struct pci_config_window *cfg)
>>> +{
>>> +    if ( cfg->win )
>>> +        iounmap(cfg->win);
>>> +
>>> +    xfree(cfg);
>>> +}
> 
> The two functions above seem to deal with the same resources, e.g. cfg->win
> 
> and map/unmap. Would it make sense to align those, something like
> 
> s/pci_remap_cfgspace/pci_ecam_alloc and pci_ecam_alloc handles cfg->win?
> 
> Or anything which makes them look init/fini style?

Ok will rename the function name.
> 
>>> +
>>> +static struct pci_config_window *gen_pci_init(struct dt_device_node *dev,
>>> +        struct pci_ecam_ops *ops)
>>> +{
>>> +    int err;
>>> +    struct pci_config_window *cfg;
>>> +    paddr_t addr, size;
>>> +
>>> +    cfg = xzalloc(struct pci_config_window);
>>> +    if ( !cfg )
>>> +        return NULL;
>>> +
>>> +    err = dt_pci_parse_bus_range(dev, cfg);
>>> +    if ( !err ) {
>>> +        cfg->busn_start = 0;
>>> +        cfg->busn_end = 0xff;
>>> +        printk(XENLOG_ERR "No bus range found for pci controller\n");
>>> +    } else {
>>> +        if ( cfg->busn_end > cfg->busn_start + 0xff )
>>> +            cfg->busn_end = cfg->busn_start + 0xff;
> 
> So, if bus start is, for example, 0x10 then we'll end up with bus end at (0x10 + 0xff) > 0xff
> 
> which doesn't seem to be what we want

This is to fix if in device tree bus-ranges property, bus end corresponds to more than 256 bus.

> 
>>> +    }
>>> +
>>> +    /* Parse our PCI ecam register address*/
>>> +    err = dt_device_get_address(dev, 0, &addr, &size);
>>> +    if ( err )
>>> +        goto err_exit;
>> Shouldn't we handle the possibility of multiple addresses? Is it
>> possible to have more than one range for an ECAM compliant host bridge?
>> 
>> 
>>> +    cfg->phys_addr = addr;
>>> +    cfg->size = size;
>>> +    cfg->ops = ops;
>>> +
>>> +    /*
>>> +     * On 64-bit systems, we do a single ioremap for the whole config space
>>> +     * since we have enough virtual address range available.  On 32-bit, we
>>> +     * ioremap the config space for each bus individually.
>>> +     *
>>> +     * As of now only 64-bit is supported 32-bit is not supported.
>>> +     */
>>> +    cfg->win = pci_remap_cfgspace(cfg->phys_addr, cfg->size);
> 
> I am fine with "win", but can we think of something that tells us that
> 
> "win" is actually ECAM base address, so one doesn't need to map "win" to "ECAM"
> 
> while reading?

Ack. Will fix.
> 
>>> +    if ( !cfg->win )
>>> +        goto err_exit_remap;
>>> +
>>> +    printk("ECAM at [mem %lx-%lx] for [bus %x-%x] \n",cfg->phys_addr,
>>> +            cfg->phys_addr + cfg->size - 1,cfg->busn_start,cfg->busn_end);
>>> +
>>> +    if ( ops->init ) {
>>> +        err = ops->init(cfg);
>>> +        if (err)
>>> +            goto err_exit;
>>> +    }
>>> +
>>> +    return cfg;
>>> +
>>> +err_exit_remap:
>>> +    printk(XENLOG_ERR "ECAM ioremap failed\n");
>>> +err_exit:
>>> +    pci_ecam_free(cfg);
>>> +    return NULL;
>>> +}
>>> +
>>> +static struct pci_host_bridge * pci_alloc_host_bridge(void)
>>> +{
>>> +    struct pci_host_bridge *bridge = xzalloc(struct pci_host_bridge);
>>> +
>>> +    if ( !bridge )
>>> +        return NULL;
>>> +
>>> +    INIT_LIST_HEAD(&bridge->node);
>>> +    return bridge;
>>> +}
>>> +
>>> +int pci_host_common_probe(struct dt_device_node *dev,
>>> +        struct pci_ecam_ops *ops)
>>> +{
>>> +    struct pci_host_bridge *bridge;
>>> +    struct pci_config_window *cfg;
>>> +    u32 segment;
>>> +
>>> +    bridge = pci_alloc_host_bridge();
>>> +    if ( !bridge )
>>> +        return -ENOMEM;
>>> +
>>> +    /* Parse and map our Configuration Space windows */
> Do you expect multiple windows here as the comment says?

If going forward we want to support 32-bit we have to ioremap the config space for each bus individually, but as of only 64 bit is supported.

>>> +    cfg = gen_pci_init(dev, ops)
>>> +    if ( !cfg )
>>> +        return -ENOMEM;
>> In case of errors the allocated bridge is not freed.
>> 
>> 
>>> +    bridge->dt_node = dev;
>>> +    bridge->sysdata = cfg;
>>> +    bridge->ops = &ops->pci_ops;
> 
> Can we have some sort of dummy ops so we don't have to check for ops != NULL every time
> 
> we read/write config? Is it really possible that we have ops set to NULL after we have
> 
> the development finished?

We don’t want to support host-bridges with dump ops. Proper ops has to be setup for the host-bridge to access the read/write. 
Once we access the config space we are sure that ops is already setup so no need to check for NULL each time. 

> 
>>> +
>>> +    if( !dt_property_read_u32(dev, "linux,pci-domain", &segment) )
>>> +    {
>>> +        printk(XENLOG_ERR "\"linux,pci-domain\" property in not available in DT\n");
>>> +        return -ENODEV;
>>> +    }
>>> +
>>> +    bridge->segment = (u16)segment;
>> My understanding is that a Linux pci-domain doesn't correspond exactly
>> to a PCI segment. See for instance:
>> 
>> https://lists.gnu.org/archive/html/qemu-devel/2018-04/msg03885.html
>> 
>> Do we need to care about the difference? If we mean pci-domain here,
>> should we just call them as such instead of calling them "segments" in
>> Xen (if they are not segments)?
> 
>> 
>> 
>>> +    list_add_tail(&bridge->node, &pci_host_bridges);
>> It looks like &pci_host_bridges should be an ordered list, ordered by
>> segment number?
> 
> Why? Do you expect bridge access in some specific order so ordered
> 
> list will make it faster?

As I mentioned in earlier no need to have ordered list as PCI config space access is random.

> 
>> 
>> 
>>> +    return 0;
>>> +}
>>> +
>>> +/*
>>> + * This function will lookup an hostbridge based on the segment and bus
>>> + * number.
>>> + */
>>> +struct pci_host_bridge *pci_find_host_bridge(uint16_t segment, uint8_t bus)
>>> +{
>>> +    struct pci_host_bridge *bridge;
>>> +    bool found = false;
>>> +
>>> +    list_for_each_entry( bridge, &pci_host_bridges, node )
>>> +    {
>>> +        if ( bridge->segment != segment )
>>> +            continue;
>>> +
>>> +        found = true;
>>> +        break;
>>> +    }
>>> +
>>> +    return (found) ?
IGJyaWRnZSA6IE5VTEw7DQo+Pj4gK30NCj4+PiArLyoNCj4+PiArICogTG9jYWwgdmFyaWFibGVz
Og0KPj4+ICsgKiBtb2RlOiBDDQo+Pj4gKyAqIGMtZmlsZS1zdHlsZTogIkJTRCINCj4+PiArICog
Yy1iYXNpYy1vZmZzZXQ6IDQNCj4+PiArICogdGFiLXdpZHRoOiA0DQo+Pj4gKyAqIGluZGVudC10
YWJzLW1vZGU6IG5pbA0KPj4+ICsgKiBFbmQ6DQo+Pj4gKyAqLw0KPj4+IGRpZmYgLS1naXQgYS94
ZW4vYXJjaC9hcm0vcGNpL3BjaS1ob3N0LWdlbmVyaWMuYyBiL3hlbi9hcmNoL2FybS9wY2kvcGNp
LWhvc3QtZ2VuZXJpYy5jDQo+Pj4gbmV3IGZpbGUgbW9kZSAxMDA2NDQNCj4+PiBpbmRleCAwMDAw
MDAwMDAwLi5jZDY3YjNkZWM2DQo+Pj4gLS0tIC9kZXYvbnVsbA0KPj4+ICsrKyBiL3hlbi9hcmNo
L2FybS9wY2kvcGNpLWhvc3QtZ2VuZXJpYy5jDQo+Pj4gQEAgLTAsMCArMSwxMzEgQEANCj4+PiAr
LyoNCj4+PiArICogQ29weXJpZ2h0IChDKSAyMDIwIEFybSBMdGQuDQo+Pj4gKyAqDQo+Pj4gKyAq
IEJhc2VkIG9uIExpbnV4IGRyaXZlcnMvcGNpL2NvbnRyb2xsZXIvcGNpLWhvc3QtY29tbW9uLmMN
Cj4+PiArICogQmFzZWQgb24gTGludXggZHJpdmVycy9wY2kvY29udHJvbGxlci9wY2ktaG9zdC1n
ZW5lcmljLmMNCj4+PiArICogQ29weXJpZ2h0IChDKSAyMDE0IEFSTSBMaW1pdGVkIFdpbGwgRGVh
Y29uIDx3aWxsLmRlYWNvbkBhcm0uY29tPg0KPj4+ICsgKg0KPj4+ICsgKiBUaGlzIHByb2dyYW0g
aXMgZnJlZSBzb2Z0d2FyZTsgeW91IGNhbiByZWRpc3RyaWJ1dGUgaXQgYW5kL29yIG1vZGlmeQ0K
Pj4+ICsgKiBpdCB1bmRlciB0aGUgdGVybXMgb2YgdGhlIEdOVSBHZW5lcmFsIFB1YmxpYyBMaWNl
bnNlIHZlcnNpb24gMiBhcw0KPj4+ICsgKiBwdWJsaXNoZWQgYnkgdGhlIEZyZWUgU29mdHdhcmUg
Rm91bmRhdGlvbi4NCj4+PiArICoNCj4+PiArICogVGhpcyBwcm9ncmFtIGlzIGRpc3RyaWJ1dGVk
IGluIHRoZSBob3BlIHRoYXQgaXQgd2lsbCBiZSB1c2VmdWwsDQo+Pj4gKyAqIGJ1dCBXSVRIT1VU
IEFOWSBXQVJSQU5UWTsgd2l0aG91dCBldmVuIHRoZSBpbXBsaWVkIHdhcnJhbnR5IG9mDQo+Pj4g
KyAqIE1FUkNIQU5UQUJJTElUWSBvciBGSVRORVNTIEZPUiBBIFBBUlRJQ1VMQVIgUFVSUE9TRS4g
IFNlZSB0aGUNCj4+PiArICogR05VIEdlbmVyYWwgUHVibGljIExpY2Vuc2UgZm9yIG1vcmUgZGV0
YWlscy4NCj4+PiArICoNCj4+PiArICogWW91IHNob3VsZCBoYXZlIHJlY2VpdmVkIGEgY29weSBv
ZiB0aGUgR05VIEdlbmVyYWwgUHVibGljIExpY2Vuc2UNCj4+PiArICogYWxvbmcgd2l0aCB0aGlz
IHByb2dyYW0uICBJZiBub3QsIHNlZSA8aHR0cDovL3d3dy5nbnUub3JnL2xpY2Vuc2VzLz4uDQo+
Pj4gKyAqLw0KPj4+ICsNCj4+PiArI2luY2x1ZGUgPGFzbS9kZXZpY2UuaD4NCj4+PiArI2luY2x1
ZGUgPGFzbS9pby5oPg0KPj4+ICsjaW5jbHVkZSA8eGVuL3BjaS5oPg0KPj4+ICsjaW5jbHVkZSA8
YXNtL3BjaS5oPg0KPj4+ICsNCj4+PiArLyoNCj4+PiArICogRnVuY3Rpb24gdG8gZ2V0IHRoZSBj
b25maWcgc3BhY2UgYmFzZS4NCj4+PiArICovDQo+Pj4gK3N0YXRpYyB2b2lkIF9faW9tZW0gKnBj
aV9jb25maWdfYmFzZShzdHJ1Y3QgcGNpX2hvc3RfYnJpZGdlICpicmlkZ2UsDQo+Pj4gKyAgICAg
ICAgdWludDMyX3Qgc2JkZiwgaW50IHdoZXJlKQ0KPj4gSSB0aGluayB0aGUgZnVuY3Rpb24gaXMg
bWlzbmFtZWQgYmVjYXVzZSByZWFkaW5nIHRoZSBjb2RlIGJlbG93IGl0IGxvb2tzDQo+PiBsaWtl
IGl0IGlzIG5vdCBqdXN0IHJldHVybmluZyB0aGUgYmFzZSBjb25maWcgc3BhY2UgYWRkcmVzcyBi
dXQgYWxzbyB0aGUNCj4+IHNwZWNpZmljIGFkZHJlc3Mgd2UgbmVlZCB0byByZWFkL3dyaXRlIChh
ZGRpbmcgdGhlIGRldmljZSBvZmZzZXQsDQo+PiAid2hlcmUiLCBhbmQgZXZlcnl0aGluZykuDQo+
PiANCj4+IE1heWJlIHBjaV9jb25maWdfZ2V0X2FkZHJlc3Mgb3Igc29tZXRoaW5nIGxpa2UgdGhh
dD8NCj4+IA0KPj4gDQo+Pj4gK3sNCj4+PiArICAgIHN0cnVjdCBwY2lfY29uZmlnX3dpbmRvdyAq
Y2ZnID0gYnJpZGdlLT5zeXNkYXRhOw0KPiANCj4gSSBhbSBhIGJpdCBjb25mdXNlZCBvZiB0aGUg
bmFtaW5nIDspDQo+IA0KPiBXZSBhbHJlYWR5IGhhdmUgMiBtYXBzOiB3aW4gLT4gRUNBTSBiYXNl
IGFuZCBub3cgc3lzZGF0YSAtPiBjZmcuDQo+IA0KPiBDYW4gd2UgcGxlYXNlIGhhdmUgdGhhdCBh
bGlnbmVkIHNvbWVob3cgc28gaXQgaXMgZWFzaWVyIHRvIGZvbGxvdz8NCg0KQWNrLg0KDQo+IA0K
Pj4+ICsgICAgdW5zaWduZWQgaW50IGRldmZuX3NoaWZ0ID0gY2ZnLT5vcHMtPmJ1c19zaGlmdCAt
IDg7DQo+IA0KPiBXZSBhcmUgbm90IGNoZWNraW5nIGNmZy0+b3BzIGZvciBOVUxMLCBzbyBwcm9i
YWJseSB3ZSBkbyBub3Qgd2FudCBicmlkZ2VzDQo+IA0KPiB3aXRoIE5VTEwgb3BzIGFzIHdlbGw/
DQoNClllcyBjb3JyZWN0IHdlIGRvbuKAmXQgd2FudCBob3N0LWJyaWRnZSB3aXRoIE5VTEwgb3Bz
LiBEbyB5b3Ugc2VlIGFueSB1c2UgY2FzZSB0byBoYXZlIGhvc3QtYnJpZGdlIHdpdGggTlVMTCBv
cHM/DQoNCj4gDQo+Pj4gKw0KPj4+ICsgICAgcGNpX3NiZGZfdCBzYmRmX3QgPSAocGNpX3NiZGZf
dCkgc2JkZiA7DQo+Pj4gKw0KPj4+ICsgICAgdW5zaWduZWQgaW50IGJ1c24gPSBzYmRmX3QuYnVz
Ow0KPj4+ICsgICAgdm9pZCBfX2lvbWVtICpiYXNlOw0KPj4+ICsNCj4+PiArICAgIGlmICggYnVz
biA8IGNmZy0+YnVzbl9zdGFydCB8fCBidXNuID4gY2ZnLT5idXNuX2VuZCApDQo+Pj4gKyAgICAg
ICAgcmV0dXJuIE5VTEw7DQo+Pj4gKw0KPj4+ICsgICAgYmFzZSA9IGNmZy0+d2luICsgKGJ1c24g
PDwgY2ZnLT5vcHMtPmJ1c19zaGlmdCk7DQo+Pj4gKw0KPj4+ICsgICAgcmV0dXJuIGJhc2UgKyAo
UENJX0RFVkZOKHNiZGZfdC5kZXYsIHNiZGZfdC5mbikgPDwgZGV2Zm5fc2hpZnQpICsgd2hlcmU7
DQo+Pj4gK30NCj4+PiArDQo+Pj4gK2ludCBwY2lfZWNhbV9jb25maWdfd3JpdGUoc3RydWN0IHBj
aV9ob3N0X2JyaWRnZSAqYnJpZGdlLCB1aW50MzJfdCBzYmRmLA0KPj4+ICsgICAgICAgIGludCB3
aGVyZSwgaW50IHNpemUsIHUzMiB2YWwpDQo+Pj4gK3sNCj4+PiArICAgIHZvaWQgX19pb21lbSAq
YWRkcjsNCj4+PiArDQo+Pj4gKyAgICBhZGRyID0gcGNpX2NvbmZpZ19iYXNlKGJyaWRnZSwgc2Jk
Ziwgd2hlcmUpOw0KPj4+ICsgICAgaWYgKCAhYWRkciApDQo+Pj4gKyAgICAgICAgcmV0dXJuIC1F
Tk9ERVY7DQo+Pj4gKw0KPj4+ICsgICAgaWYgKCBzaXplID09IDEgKQ0KPj4+ICsgICAgICAgIHdy
aXRlYih2YWwsIGFkZHIpOw0KPj4+ICsgICAgZWxzZSBpZiAoIHNpemUgPT0gMiApDQo+Pj4gKyAg
ICAgICAgd3JpdGV3KHZhbCwgYWRkcik7DQo+Pj4gKyAgICBlbHNlDQo+Pj4gKyAgICAgICAgd3Jp
dGVsKHZhbCwgYWRkcik7DQo+PiBwbGVhc2UgdXNlIGEgc3dpdGNoDQo+PiANCj4+IA0KPj4+ICsg
ICAgcmV0dXJuIDA7DQo+Pj4gK30NCj4+PiArDQo+Pj4gK2ludCBwY2lfZWNhbV9jb25maWdfcmVh
ZChzdHJ1Y3QgcGNpX2hvc3RfYnJpZGdlICpicmlkZ2UsIHVpbnQzMl90IHNiZGYsDQo+Pj4gKyAg
ICAgICAgaW50IHdoZXJlLCBpbnQgc2l6ZSwgdTMyICp2YWwpDQo+Pj4gK3sNCj4+PiArICAgIHZv
aWQgX19pb21lbSAqYWRkcjsNCj4+PiArDQo+Pj4gKyAgICBhZGRyID0gcGNpX2NvbmZpZ19iYXNl
KGJyaWRnZSwgc2JkZiwgd2hlcmUpOw0KPj4+ICsgICAgaWYgKCAhYWRkciApIHsNCj4+PiArICAg
ICAgICAqdmFsID0gfjA7DQo+Pj4gKyAgICAgICAgcmV0dXJuIC1FTk9ERVY7DQo+Pj4gKyAgICB9
DQo+Pj4gKw0KPj4+ICsgICAgaWYgKCBzaXplID09IDEgKQ0KPj4+ICsgICAgICAgICp2YWwgPSBy
ZWFkYihhZGRyKTsNCj4+PiArICAgIGVsc2UgaWYgKCBzaXplID09IDIgKQ0KPj4+ICsgICAgICAg
ICp2YWwgPSByZWFkdyhhZGRyKTsNCj4+PiArICAgIGVsc2UNCj4+PiArICAgICAgICAqdmFsID0g
cmVhZGwoYWRkcik7DQo+PiBwbGVhc2UgdXNlIGEgc3dpdGNoDQo+PiANCj4+IA0KPj4+ICsgICAg
cmV0dXJuIDA7DQo+Pj4gK30NCj4+PiArDQo+Pj4gKy8qIEVDQU0gb3BzICovDQo+Pj4gK3N0cnVj
dCBwY2lfZWNhbV9vcHMgcGNpX2dlbmVyaWNfZWNhbV9vcHMgPSB7DQo+Pj4gKyAgICAuYnVzX3No
aWZ0ICA9IDIwLA0KPj4+ICsgICAgLnBjaV9vcHMgICAgPSB7DQo+Pj4gKyAgICAgICAgLnJlYWQg
ICAgICAgPSBwY2lfZWNhbV9jb25maWdfcmVhZCwNCj4+PiArICAgICAgICAud3JpdGUgICAgICA9
IHBjaV9lY2FtX2NvbmZpZ193cml0ZSwNCj4+PiArICAgIH0NCj4+PiArfTsNCj4+PiArDQo+Pj4g
K3N0YXRpYyBjb25zdCBzdHJ1Y3QgZHRfZGV2aWNlX21hdGNoIGdlbl9wY2lfZHRfbWF0Y2hbXSA9
IHsNCj4+PiArICAgIHsgLmNvbXBhdGlibGUgPSAicGNpLWhvc3QtZWNhbS1nZW5lcmljIiwNCj4+
PiArICAgICAgLmRhdGEgPSAgICAgICAmcGNpX2dlbmVyaWNfZWNhbV9vcHMgfSwNCj4+IHNwdXJp
b3VzIGJsYW5rIGxpbmUNCj4+IA0KPj4gDQo+Pj4gKyAgICB7IH0sDQo+Pj4gK307DQo+Pj4gKw0K
Pj4+ICtzdGF0aWMgaW50IGdlbl9wY2lfZHRfaW5pdChzdHJ1Y3QgZHRfZGV2aWNlX25vZGUgKmRl
diwgY29uc3Qgdm9pZCAqZGF0YSkNCj4+PiArew0KPj4+ICsgICAgY29uc3Qgc3RydWN0IGR0X2Rl
dmljZV9tYXRjaCAqb2ZfaWQ7DQo+Pj4gKyAgICBzdHJ1Y3QgcGNpX2VjYW1fb3BzICpvcHM7DQo+
Pj4gKw0KPj4+ICsgICAgb2ZfaWQgPSBkdF9tYXRjaF9ub2RlKGdlbl9wY2lfZHRfbWF0Y2gsIGRl
di0+ZGV2Lm9mX25vZGUpOw0KPj4+ICsgICAgb3BzID0gKHN0cnVjdCBwY2lfZWNhbV9vcHMgKikg
b2ZfaWQtPmRhdGE7DQo+Pj4gKw0KPj4+ICsgICAgcHJpbnRrKFhFTkxPR19JTkZPICJGb3VuZCBQ
Q0kgaG9zdCBicmlkZ2UgJXMgY29tcGF0aWJsZTolcyBcbiIsDQo+Pj4gKyAgICAgICAgICAgIGR0
X25vZGVfZnVsbF9uYW1lKGRldiksIG9mX2lkLT5jb21wYXRpYmxlKTsNCj4+PiArDQo+Pj4gKyAg
ICByZXR1cm4gcGNpX2hvc3RfY29tbW9uX3Byb2JlKGRldiwgb3BzKTsNCj4+PiArfQ0KPj4+ICsN
Cj4+PiArRFRfREVWSUNFX1NUQVJUKHBjaV9nZW4sICJQQ0kgSE9TVCBHRU5FUklDIiwgREVWSUNF
X1BDSSkNCj4+PiArLmR0X21hdGNoID0gZ2VuX3BjaV9kdF9tYXRjaCwNCj4+PiArLmluaXQgPSBn
ZW5fcGNpX2R0X2luaXQsDQo+Pj4gK0RUX0RFVklDRV9FTkQNCj4+PiArDQo+Pj4gKy8qDQo+Pj4g
KyAqIExvY2FsIHZhcmlhYmxlczoNCj4+PiArICogbW9kZTogQw0KPj4+ICsgKiBjLWZpbGUtc3R5
bGU6ICJCU0QiDQo+Pj4gKyAqIGMtYmFzaWMtb2Zmc2V0OiA0DQo+Pj4gKyAqIHRhYi13aWR0aDog
NA0KPj4+ICsgKiBpbmRlbnQtdGFicy1tb2RlOiBuaWwNCj4+PiArICogRW5kOg0KPj4+ICsgKi8N
Cj4+PiBkaWZmIC0tZ2l0IGEveGVuL2FyY2gvYXJtL3BjaS9wY2kuYyBiL3hlbi9hcmNoL2FybS9w
Y2kvcGNpLmMNCj4+PiBuZXcgZmlsZSBtb2RlIDEwMDY0NA0KPj4+IGluZGV4IDAwMDAwMDAwMDAu
LmY4Y2JiOTk1OTENCj4+PiAtLS0gL2Rldi9udWxsDQo+Pj4gKysrIGIveGVuL2FyY2gvYXJtL3Bj
aS9wY2kuYw0KPj4+IEBAIC0wLDAgKzEsMTEyIEBADQo+Pj4gKy8qDQo+Pj4gKyAqIENvcHlyaWdo
dCAoQykgMjAyMCBBcm0gTHRkLg0KPj4+ICsgKg0KPj4+ICsgKiBUaGlzIHByb2dyYW0gaXMgZnJl
ZSBzb2Z0d2FyZTsgeW91IGNhbiByZWRpc3RyaWJ1dGUgaXQgYW5kL29yIG1vZGlmeQ0KPj4+ICsg
KiBpdCB1bmRlciB0aGUgdGVybXMgb2YgdGhlIEdOVSBHZW5lcmFsIFB1YmxpYyBMaWNlbnNlIHZl
cnNpb24gMiBhcw0KPj4+ICsgKiBwdWJsaXNoZWQgYnkgdGhlIEZyZWUgU29mdHdhcmUgRm91bmRh
dGlvbi4NCj4+PiArICoNCj4+PiArICogVGhpcyBwcm9ncmFtIGlzIGRpc3RyaWJ1dGVkIGluIHRo
ZSBob3BlIHRoYXQgaXQgd2lsbCBiZSB1c2VmdWwsDQo+Pj4gKyAqIGJ1dCBXSVRIT1VUIEFOWSBX
QVJSQU5UWTsgd2l0aG91dCBldmVuIHRoZSBpbXBsaWVkIHdhcnJhbnR5IG9mDQo+Pj4gKyAqIE1F
UkNIQU5UQUJJTElUWSBvciBGSVRORVNTIEZPUiBBIFBBUlRJQ1VMQVIgUFVSUE9TRS4gIFNlZSB0
aGUNCj4+PiArICogR05VIEdlbmVyYWwgUHVibGljIExpY2Vuc2UgZm9yIG1vcmUgZGV0YWlscy4N
Cj4+PiArICoNCj4+PiArICogWW91IHNob3VsZCBoYXZlIHJlY2VpdmVkIGEgY29weSBvZiB0aGUg
R05VIEdlbmVyYWwgUHVibGljIExpY2Vuc2UNCj4+PiArICogYWxvbmcgd2l0aCB0aGlzIHByb2dy
YW0uICBJZiBub3QsIHNlZSA8aHR0cDovL3d3dy5nbnUub3JnL2xpY2Vuc2VzLz4uDQo+Pj4gKyAq
Lw0KPj4+ICsNCj4+PiArI2luY2x1ZGUgPHhlbi9hY3BpLmg+DQo+Pj4gKyNpbmNsdWRlIDx4ZW4v
ZGV2aWNlX3RyZWUuaD4NCj4+PiArI2luY2x1ZGUgPHhlbi9lcnJuby5oPg0KPj4+ICsjaW5jbHVk
ZSA8eGVuL2luaXQuaD4NCj4+PiArI2luY2x1ZGUgPHhlbi9wY2kuaD4NCj4+PiArI2luY2x1ZGUg
PHhlbi9wYXJhbS5oPg0KPj4+ICsNCj4+PiArc3RhdGljIGludCBfX2luaXQgZHRfcGNpX2luaXQo
dm9pZCkNCj4+PiArew0KPj4+ICsgICAgc3RydWN0IGR0X2RldmljZV9ub2RlICpucDsNCj4+PiAr
ICAgIGludCByYzsNCj4+PiArDQo+Pj4gKyAgICBkdF9mb3JfZWFjaF9kZXZpY2Vfbm9kZShkdF9o
b3N0LCBucCkNCj4+PiArICAgIHsNCj4+PiArICAgICAgICByYyA9IGRldmljZV9pbml0KG5wLCBE
RVZJQ0VfUENJLCBOVUxMKTsNCj4+PiArICAgICAgICBpZiggIXJjICkNCj4+PiArICAgICAgICAg
ICAgY29udGludWU7DQo+Pj4gKyAgICAgICAgLyoNCj4+PiArICAgICAgICAgKiBJZ25vcmUgdGhl
IGZvbGxvd2luZyBlcnJvciBjb2RlczoNCj4+PiArICAgICAgICAgKiAgIC0gRUJBREY6IEluZGlj
YXRlIHRoZSBjdXJyZW50IGlzIG5vdCBhbiBwY2kNCj4+PiArICAgICAgICAgKiAgIC0gRU5PREVW
OiBUaGUgcGNpIGRldmljZSBpcyBub3QgcHJlc2VudCBvciBjYW5ub3QgYmUgdXNlZCBieQ0KPj4+
ICsgICAgICAgICAqICAgICBYZW4uDQo+Pj4gKyAgICAgICAgICovDQo+Pj4gKyAgICAgICAgZWxz
ZSBpZiAoIHJjICE9IC1FQkFERiAmJiByYyAhPSAtRU5PREVWICkNCj4+PiArICAgICAgICB7DQo+
Pj4gKyAgICAgICAgICAgIHByaW50ayhYRU5MT0dfRVJSICJObyBkcml2ZXIgZm91bmQgaW4gWEVO
IG9yIGRyaXZlciBpbml0IGVycm9yLlxuIik7DQo+Pj4gKyAgICAgICAgICAgIHJldHVybiByYzsN
Cj4+PiArICAgICAgICB9DQo+Pj4gKyAgICB9DQo+Pj4gKw0KPj4+ICsgICAgcmV0dXJuIDA7DQo+
Pj4gK30NCj4+PiArDQo+Pj4gKyNpZmRlZiBDT05GSUdfQUNQSQ0KPj4+ICtzdGF0aWMgdm9pZCBf
X2luaXQgYWNwaV9wY2lfaW5pdCh2b2lkKQ0KPj4+ICt7DQo+Pj4gKyAgICBwcmludGsoWEVOTE9H
X0VSUiAiQUNQSSBwY2kgaW5pdCBub3Qgc3VwcG9ydGVkIFxuIik7DQo+Pj4gKyAgICByZXR1cm47
DQo+Pj4gK30NCj4+PiArI2Vsc2UNCj4+PiArc3RhdGljIGlubGluZSB2b2lkIF9faW5pdCBhY3Bp
X3BjaV9pbml0KHZvaWQpIHsgfQ0KPj4+ICsjZW5kaWYNCj4+PiArDQo+Pj4gK3N0YXRpYyBib29s
IF9faW5pdGRhdGEgcGFyYW1fcGNpX2VuYWJsZTsNCj4+PiArc3RhdGljIGludCBfX2luaXQgcGFy
c2VfcGNpX3BhcmFtKGNvbnN0IGNoYXIgKmFyZykNCj4+PiArew0KPj4+ICsgICAgaWYgKCAhYXJn
ICkNCj4+PiArICAgIHsNCj4+PiArICAgICAgICBwYXJhbV9wY2lfZW5hYmxlID0gZmFsc2U7DQo+
Pj4gKyAgICAgICAgcmV0dXJuIDA7DQo+Pj4gKyAgICB9DQo+Pj4gKw0KPj4+ICsgICAgc3dpdGNo
ICggcGFyc2VfYm9vbChhcmcsIE5VTEwpICkNCj4+PiArICAgIHsNCj4+PiArICAgICAgICBjYXNl
IDA6DQo+Pj4gKyAgICAgICAgICAgIHBhcmFtX3BjaV9lbmFibGUgPSBmYWxzZTsNCj4+PiArICAg
ICAgICAgICAgcmV0dXJuIDA7DQo+Pj4gKyAgICAgICAgY2FzZSAxOg0KPj4+ICsgICAgICAgICAg
ICBwYXJhbV9wY2lfZW5hYmxlID0gdHJ1ZTsNCj4+PiArICAgICAgICAgICAgcmV0dXJuIDA7DQo+
Pj4gKyAgICB9DQo+Pj4gKw0KPj4+ICsgICAgcmV0dXJuIC1FSU5WQUw7DQo+Pj4gK30NCj4+PiAr
Y3VzdG9tX3BhcmFtKCJwY2kiLCBwYXJzZV9wY2lfcGFyYW0pOw0KPj4gV2hlbiBhZGRpbmcgbmV3
IGNvbW1hbmQgbGluZSBwYXJhbWV0ZXJzIHBsZWFzZSBhbHNvIGFkZCBpdHMNCj4+IGRvY3VtZW50
YXRpb24gKGRvY3MvbWlzYy94ZW4tY29tbWFuZC1saW5lLnBhbmRvYykgaW4gdGhlIHNhbWUgcGF0
Y2gsDQo+PiB1bmxlc3MgdGhpcyBpcyBtZWFudCB0byBiZSBqdXN0IHRyYW5zaWVudCBhbmQgd2Un
bGwgZ2V0IHJlbW92ZWQgYmVmb3JlDQo+PiB0aGUgZmluYWwgY29tbWl0IG9mIHRoZSBzZXJpZXMu
DQo+PiANCj4+IA0KPj4+ICt2b2lkIF9faW5pdCBwY2lfaW5pdCh2b2lkKQ0KPj4+ICt7DQo+Pj4g
KyAgICAvKg0KPj4+ICsgICAgICogRW5hYmxlIFBDSSB3aGVuIGhhcyBiZWVuIGVuYWJsZWQgZXhw
bGljaXRseSAocGNpPW9uKQ0KPj4+ICsgICAgICovDQo+Pj4gKyAgICBpZiAoICFwYXJhbV9wY2lf
ZW5hYmxlKQ0KPj4+ICsgICAgICAgIGdvdG8gZGlzYWJsZTsNCj4+PiArDQo+Pj4gKyAgICBpZiAo
IGFjcGlfZGlzYWJsZWQgKQ0KPj4+ICsgICAgICAgIGR0X3BjaV9pbml0KCk7DQo+Pj4gKyAgICBl
bHNlDQo+Pj4gKyAgICAgICAgYWNwaV9wY2lfaW5pdCgpOw0KPj4+ICsNCj4+PiArI2lmZGVmIENP
TkZJR19IQVNfUENJDQo+Pj4gKyAgICBwY2lfc2VnbWVudHNfaW5pdCgpOw0KPj4+ICsjZW5kaWYN
Cj4+PiArDQo+Pj4gK2Rpc2FibGU6DQo+Pj4gKyAgICByZXR1cm47DQo+Pj4gK30NCj4+PiArDQo+
Pj4gKy8qDQo+Pj4gKyAqIExvY2FsIHZhcmlhYmxlczoNCj4+PiArICogbW9kZTogQw0KPj4+ICsg
KiBjLWZpbGUtc3R5bGU6ICJCU0QiDQo+Pj4gKyAqIGMtYmFzaWMtb2Zmc2V0OiA0DQo+Pj4gKyAq
IHRhYi13aWR0aDogNA0KPj4+ICsgKiBpbmRlbnQtdGFicy1tb2RlOiBuaWwNCj4+PiArICogRW5k
Og0KPj4+ICsgKi8NCj4+PiBkaWZmIC0tZ2l0IGEveGVuL2FyY2gvYXJtL3NldHVwLmMgYi94ZW4v
YXJjaC9hcm0vc2V0dXAuYw0KPj4+IGluZGV4IDc5NjhjZWU0N2QuLjJkN2YxZGI0NGYgMTAwNjQ0
DQo+Pj4gLS0tIGEveGVuL2FyY2gvYXJtL3NldHVwLmMNCj4+PiArKysgYi94ZW4vYXJjaC9hcm0v
c2V0dXAuYw0KPj4+IEBAIC05MzAsNiArOTMwLDggQEAgdm9pZCBfX2luaXQgc3RhcnRfeGVuKHVu
c2lnbmVkIGxvbmcgYm9vdF9waHlzX29mZnNldCwNCj4+PiANCj4+PiAgICAgIHNldHVwX3ZpcnRf
cGFnaW5nKCk7DQo+Pj4gDQo+Pj4gKyAgICBwY2lfaW5pdCgpOw0KPj4gcGNpX2luaXQgc2hvdWxk
IHByb2JhYmx5IGJlIGFuIGluaXRjYWxsDQo+PiANCj4+IA0KPj4+ICAgICAgZG9faW5pdGNhbGxz
KCk7DQo+Pj4gDQo+Pj4gICAgICAvKg0KPj4+IGRpZmYgLS1naXQgYS94ZW4vaW5jbHVkZS9hc20t
YXJtL2RldmljZS5oIGIveGVuL2luY2x1ZGUvYXNtLWFybS9kZXZpY2UuaA0KPj4+IGluZGV4IGVl
N2NmZjJkNDQuLjI4ZjgwNDljZmQgMTAwNjQ0DQo+Pj4gLS0tIGEveGVuL2luY2x1ZGUvYXNtLWFy
bS9kZXZpY2UuaA0KPj4+ICsrKyBiL3hlbi9pbmNsdWRlL2FzbS1hcm0vZGV2aWNlLmgNCj4+PiBA
QCAtNCw2ICs0LDcgQEANCj4+PiAgZW51bSBkZXZpY2VfdHlwZQ0KPj4+ICB7DQo+Pj4gICAgICBE
RVZfRFQsDQo+Pj4gKyAgICBERVZfUENJLA0KPj4+ICB9Ow0KPj4+IA0KPj4+ICBzdHJ1Y3QgZGV2
X2FyY2hkYXRhIHsNCj4+PiBAQCAtMjUsMTUgKzI2LDE1IEBAIHR5cGVkZWYgc3RydWN0IGRldmlj
ZSBkZXZpY2VfdDsNCj4+PiANCj4+PiAgI2luY2x1ZGUgPHhlbi9kZXZpY2VfdHJlZS5oPg0KPj4+
IA0KPj4+IC0vKiBUT0RPOiBDb3JyZWN0bHkgaW1wbGVtZW50IGRldl9pc19wY2kgd2hlbiBQQ0kg
aXMgc3VwcG9ydGVkIG9uIEFSTSAqLw0KPj4+IC0jZGVmaW5lIGRldl9pc19wY2koZGV2KSAoKHZv
aWQpKGRldiksIDApDQo+Pj4gLSNkZWZpbmUgZGV2X2lzX2R0KGRldikgICgoZGV2LT50eXBlID09
IERFVl9EVCkNCj4gRGlkbid0IHdlIGhhdmUgYSBwYXRjaCBmb3IgdGhhdCByZWNlbnRseSBvciB0
YWxrZWQgYWJvdXQ/DQoNCk5vdCBzdXJlIG5lZWQgdG8gY2hlY2suDQoNCj4+PiArI2RlZmluZSBk
ZXZfaXNfcGNpKGRldikgKGRldi0+dHlwZSA9PSBERVZfUENJKQ0KPj4+ICsjZGVmaW5lIGRldl9p
c19kdChkZXYpICAoZGV2LT50eXBlID09IERFVl9EVCkNCj4+PiANCj4+PiAgZW51bSBkZXZpY2Vf
Y2xhc3MNCj4+PiAgew0KPj4+ICAgICAgREVWSUNFX1NFUklBTCwNCj4+PiAgICAgIERFVklDRV9J
T01NVSwNCj4+PiAgICAgIERFVklDRV9HSUMsDQo+Pj4gKyAgICBERVZJQ0VfUENJLA0KPj4+ICAg
ICAgLyogVXNlIGZvciBlcnJvciAqLw0KPj4+ICAgICAgREVWSUNFX1VOS05PV04sDQo+Pj4gIH07
DQo+Pj4gZGlmZiAtLWdpdCBhL3hlbi9pbmNsdWRlL2FzbS1hcm0vcGNpLmggYi94ZW4vaW5jbHVk
ZS9hc20tYXJtL3BjaS5oDQo+Pj4gaW5kZXggZGUxMzM1OWY2NS4uOTRmZDAwMzYwYSAxMDA2NDQN
Cj4+PiAtLS0gYS94ZW4vaW5jbHVkZS9hc20tYXJtL3BjaS5oDQo+Pj4gKysrIGIveGVuL2luY2x1
ZGUvYXNtLWFybS9wY2kuaA0KPj4+IEBAIC0xLDcgKzEsOTggQEANCj4+PiAtI2lmbmRlZiBfX1g4
Nl9QQ0lfSF9fDQo+Pj4gLSNkZWZpbmUgX19YODZfUENJX0hfXw0KPj4+ICsvKg0KPj4+ICsgKiBD
b3B5cmlnaHQgKEMpIDIwMjAgQXJtIEx0ZC4NCj4+PiArICoNCj4+PiArICogQmFzZWQgb24gTGlu
dXggZHJpdmVycy9wY2kvZWNhbS5jDQo+Pj4gKyAqIENvcHlyaWdodCAyMDE2IEJyb2FkY29tLg0K
Pj4+ICsgKg0KPj4+ICsgKiBCYXNlZCBvbiBMaW51eCBkcml2ZXJzL3BjaS9jb250cm9sbGVyL3Bj
aS1ob3N0LWNvbW1vbi5jDQo+Pj4gKyAqIEJhc2VkIG9uIExpbnV4IGRyaXZlcnMvcGNpL2NvbnRy
b2xsZXIvcGNpLWhvc3QtZ2VuZXJpYy5jDQo+Pj4gKyAqIENvcHlyaWdodCAoQykgMjAxNCBBUk0g
TGltaXRlZCBXaWxsIERlYWNvbiA8d2lsbC5kZWFjb25AYXJtLmNvbT4NCj4+PiArICoNCj4+PiAr
ICogVGhpcyBwcm9ncmFtIGlzIGZyZWUgc29mdHdhcmU7IHlvdSBjYW4gcmVkaXN0cmlidXRlIGl0
IGFuZC9vciBtb2RpZnkNCj4+PiArICogaXQgdW5kZXIgdGhlIHRlcm1zIG9mIHRoZSBHTlUgR2Vu
ZXJhbCBQdWJsaWMgTGljZW5zZSB2ZXJzaW9uIDIgYXMNCj4+PiArICogcHVibGlzaGVkIGJ5IHRo
ZSBGcmVlIFNvZnR3YXJlIEZvdW5kYXRpb24uDQo+Pj4gKyAqDQo+Pj4gKyAqIFRoaXMgcHJvZ3Jh
bSBpcyBkaXN0cmlidXRlZCBpbiB0aGUgaG9wZSB0aGF0IGl0IHdpbGwgYmUgdXNlZnVsLA0KPj4+
ICsgKiBidXQgV0lUSE9VVCBBTlkgV0FSUkFOVFk7IHdpdGhvdXQgZXZlbiB0aGUgaW1wbGllZCB3
YXJyYW50eSBvZg0KPj4+ICsgKiBNRVJDSEFOVEFCSUxJVFkgb3IgRklUTkVTUyBGT1IgQSBQQVJU
SUNVTEFSIFBVUlBPU0UuICBTZWUgdGhlDQo+Pj4gKyAqIEdOVSBHZW5lcmFsIFB1YmxpYyBMaWNl
bnNlIGZvciBtb3JlIGRldGFpbHMuDQo+Pj4gKyAqDQo+Pj4gKyAqIFlvdSBzaG91bGQgaGF2ZSBy
ZWNlaXZlZCBhIGNvcHkgb2YgdGhlIEdOVSBHZW5lcmFsIFB1YmxpYyBMaWNlbnNlDQo+Pj4gKyAq
IGFsb25nIHdpdGggdGhpcyBwcm9ncmFtLiAgSWYgbm90LCBzZWUgPGh0dHA6Ly93d3cuZ251Lm9y
Zy9saWNlbnNlcy8+Lg0KPj4+ICsgKi8NCj4+PiANCj4+PiArI2lmbmRlZiBfX0FSTV9QQ0lfSF9f
DQo+Pj4gKyNkZWZpbmUgX19BUk1fUENJX0hfXw0KPj4+ICsNCj4+PiArI2luY2x1ZGUgPHhlbi9w
Y2kuaD4NCj4+PiArI2luY2x1ZGUgPHhlbi9kZXZpY2VfdHJlZS5oPg0KPj4+ICsjaW5jbHVkZSA8
YXNtL2RldmljZS5oPg0KPj4+ICsNCj4+PiArI2lmZGVmIENPTkZJR19BUk1fUENJDQo+Pj4gKw0K
Pj4+ICsvKiBBcmNoIHBjaSBkZXYgc3RydWN0ICovDQo+Pj4gIHN0cnVjdCBhcmNoX3BjaV9kZXYg
ew0KPj4+ICsgICAgc3RydWN0IGRldmljZSBkZXY7DQo+Pj4gK307DQo+PiBBcmUgeW91IGFjdHVh
bGx5IHVzaW5nIHN0cnVjdCBkZXZpY2UgaW4gc3RydWN0IGFyY2hfcGNpX2Rldj8NCj4+IHN0cnVj
dCBkZXZpY2UgaXMgYWxyZWFkeSBwYXJ0IG9mIHN0cnVjdCBkdF9kZXZpY2Vfbm9kZSBhbmQgYSBw
b2ludGVyIHRvDQo+PiBpdCBpcyBzdG9yZWQgaW4gYnJpZGdlLT5kdF9ub2RlLg0KPj4gDQo+PiAN
Cj4+PiArI2RlZmluZSBQUklfcGNpICIlMDR4OiUwMng6JTAyeC4ldSINCj4+PiArI2RlZmluZSBw
Y2lfdG9fZGV2KHBjaWRldikgKCYocGNpZGV2KS0+YXJjaC5kZXYpDQo+Pj4gKw0KPj4+ICsvKg0K
Pj4+ICsgKiBzdHJ1Y3QgdG8gaG9sZCB0aGUgbWFwcGluZ3Mgb2YgYSBjb25maWcgc3BhY2Ugd2lu
ZG93LiBUaGlzDQo+Pj4gKyAqIGlzIGV4cGVjdGVkIHRvIGJlIHVzZWQgYXMgc3lzZGF0YSBmb3Ig
UENJIGNvbnRyb2xsZXJzIHRoYXQNCj4+PiArICogdXNlIEVDQU0uDQo+Pj4gKyAqLw0KPj4+ICtz
dHJ1Y3QgcGNpX2NvbmZpZ193aW5kb3cgew0KPj4+ICsgICAgcGFkZHJfdCAgICAgcGh5c19hZGRy
Ow0KPj4+ICsgICAgcGFkZHJfdCAgICAgc2l6ZTsNCj4+PiArICAgIHVpbnQ4X3QgICAgIGJ1c25f
c3RhcnQ7DQo+Pj4gKyAgICB1aW50OF90ICAgICBidXNuX2VuZDsNCj4+PiArICAgIHN0cnVjdCBw
Y2lfZWNhbV9vcHMgICAgICpvcHM7DQo+Pj4gKyAgICB2b2lkIF9faW9tZW0gICAgICAgICp3aW47
DQo+Pj4gK307DQo+Pj4gKw0KPj4+ICsvKiBGb3J3YXJkIGRlY2xhcmF0aW9uIGFzIHBjaV9ob3N0
X2JyaWRnZSBhbmQgcGNpX29wcyBkZXBlbmQgb24gZWFjaCBvdGhlci4gKi8NCj4+PiArc3RydWN0
IHBjaV9ob3N0X2JyaWRnZTsNCj4+PiArDQo+Pj4gK3N0cnVjdCBwY2lfb3BzIHsNCj4+PiArICAg
IGludCAoKnJlYWQpKHN0cnVjdCBwY2lfaG9zdF9icmlkZ2UgKmJyaWRnZSwNCj4+PiArICAgICAg
ICAgICAgICAgICAgICB1aW50MzJfdCBzYmRmLCBpbnQgd2hlcmUsIGludCBzaXplLCB1MzIgKnZh
bCk7DQo+Pj4gKyAgICBpbnQgKCp3cml0ZSkoc3RydWN0IHBjaV9ob3N0X2JyaWRnZSAqYnJpZGdl
LA0KPj4+ICsgICAgICAgICAgICAgICAgICAgIHVpbnQzMl90IHNiZGYsIGludCB3aGVyZSwgaW50
IHNpemUsIHUzMiB2YWwpOw0KPj4gSSdkIHByZWZlciBpZiB3ZSBjb3VsZCB1c2UgZXhwbGljaXRs
eS1zaXplZCBpbnRlZ2VycyBmb3IgIndoZXJlIiBhbmQNCj4+ICJzaXplIiB0b28uIEFsc28sIHNo
b3VsZCB0aGV5IGJlIHVuc2lnbmVkIHJhdGhlciB0aGFuIHNpZ25lZD8NCj4+IA0KPj4gQ2FuIHdl
IHVzZSBwY2lfc2JkZl90IGZvciB0aGUgc2JkZiBwYXJhbWV0ZXI/DQo+PiANCj4+IA0KPj4+ICt9
Ow0KPj4+ICsNCj4+PiArLyoNCj4+PiArICogc3RydWN0IHRvIGhvbGQgcGNpIG9wcyBhbmQgYnVz
IHNoaWZ0IG9mIHRoZSBjb25maWcgd2luZG93DQo+Pj4gKyAqIGZvciBhIFBDSSBjb250cm9sbGVy
Lg0KPj4+ICsgKi8NCj4+PiArc3RydWN0IHBjaV9lY2FtX29wcyB7DQo+Pj4gKyAgICB1bnNpZ25l
ZCBpbnQgICAgICAgICAgICBidXNfc2hpZnQ7DQo+Pj4gKyAgICBzdHJ1Y3QgcGNpX29wcyAgICAg
ICAgICBwY2lfb3BzOw0KPj4+ICsgICAgaW50ICAgICAgICAgICAgICgqaW5pdCkoc3RydWN0IHBj
aV9jb25maWdfd2luZG93ICopOw0KPj4+ICt9Ow0KPj4gQWx0aG91Z2ggSSByZWFsaXplIHRoYXQg
d2UgYXJlIG9ubHkgdGFyZ2V0aW5nIEVDQU0gbm93LCBhbmQgdGhlDQo+PiBpbXBsZW1lbnRhdGlv
biBpcyBiYXNlZCBvbiBFQ0FNLCB0aGUgaW50ZXJmYWNlIGRvZXNuJ3Qgc2VlbSB0byBoYXZlDQo+
PiBhbnl0aGluZyBFQ0FNLXNwZWNpZmljIGluIGl0LiBJIHdvdWxkIHJlbmFtZSBwY2lfZWNhbV9v
cHMgdG8gc29tZXRoaW5nDQo+PiBlbHNlLCBtYXliZSBzaW1wbHkgInBjaV9vcHMiLg0KPiANCj4g
WWVzLCBwbGVhc2UsIGJlYXIgaW4gbWluZCB0aGF0IHdlIGFyZSBhYm91dCB0byB3b3JrIHdpdGgg
dGhpcyBjb2RlIG9uDQo+IA0KPiBub24tRUNBTSBIVyBmcm9tIHRoZSB2ZXJ5IGJlZ2lubmluZywg
c28gdGhpcyBpcyBzb21ldGhpbmcgdGhhdCB3ZSB3b3VsZA0KPiANCj4gbGlrZSB0byBzZWUgZnJv
bSB0aGUgZ3JvdW5kIHVwDQoNCk9rLiBXaWxsIHRha2UgY2FyZSBvZiB0aGF0IGluIG5leHQgdmVy
c2lvbiBvZiB0aGUgcGF0Y2guDQoNCj4gDQo+PiANCj4+IA0KPj4+ICsvKg0KPj4+ICsgKiBzdHJ1
Y3QgdG8gaG9sZCBwY2kgaG9zdCBicmlkZ2UgaW5mb3JtYXRpb24NCj4+PiArICogZm9yIGEgUENJ
IGNvbnRyb2xsZXIuDQo+Pj4gKyAqLw0KPj4+ICtzdHJ1Y3QgcGNpX2hvc3RfYnJpZGdlIHsNCj4+
PiArICAgIHN0cnVjdCBkdF9kZXZpY2Vfbm9kZSAqZHRfbm9kZTsgIC8qIFBvaW50ZXIgdG8gdGhl
IGFzc29jaWF0ZWQgRFQgbm9kZSAqLw0KPj4+ICsgICAgc3RydWN0IGxpc3RfaGVhZCBub2RlOyAg
ICAgICAgICAgLyogTm9kZSBpbiBsaXN0IG9mIGhvc3QgYnJpZGdlcyAqLw0KPj4+ICsgICAgdWlu
dDE2X3Qgc2VnbWVudDsgICAgICAgICAgICAgICAgLyogU2VnbWVudCBudW1iZXIgKi8NCj4+PiAr
ICAgIHZvaWQgKnN5c2RhdGE7ICAgICAgICAgICAgICAgICAgIC8qIFBvaW50ZXIgdG8gdGhlIGNv
bmZpZyBzcGFjZSB3aW5kb3cqLw0KPj4+ICsgICAgY29uc3Qgc3RydWN0IHBjaV9vcHMgKm9wczsN
Cj4+PiAgfTsNCj4+PiANCj4+PiAtI2VuZGlmIC8qIF9fWDg2X1BDSV9IX18gKi8NCj4+PiArc3Ry
dWN0IHBjaV9ob3N0X2JyaWRnZSAqcGNpX2ZpbmRfaG9zdF9icmlkZ2UodWludDE2X3Qgc2VnbWVu
dCwgdWludDhfdCBidXMpOw0KPj4+ICsNCj4+PiAraW50IHBjaV9ob3N0X2NvbW1vbl9wcm9iZShz
dHJ1Y3QgZHRfZGV2aWNlX25vZGUgKmRldiwNCj4+PiArICAgICAgICAgICAgICAgIHN0cnVjdCBw
Y2lfZWNhbV9vcHMgKm9wcyk7DQo+Pj4gKw0KPj4+ICt2b2lkIHBjaV9pbml0KHZvaWQpOw0KPj4+
ICsNCj4+PiArI2Vsc2UgICAvKiFDT05GSUdfQVJNX1BDSSovDQo+Pj4gK3N0cnVjdCBhcmNoX3Bj
aV9kZXYgeyB9Ow0KPj4+ICtzdGF0aWMgaW5saW5lIHZvaWQgIHBjaV9pbml0KHZvaWQpIHsgfQ0K
Pj4+ICsjZW5kaWYgIC8qIUNPTkZJR19BUk1fUENJKi8NCj4+PiArI2VuZGlmIC8qIF9fQVJNX1BD
SV9IX18gKi8NCj4+PiAtLSANCj4+PiAyLjE3LjENCg0K


From xen-devel-bounces@lists.xenproject.org Mon Jul 27 15:27:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jul 2020 15:27:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k052R-0005rz-S2; Mon, 27 Jul 2020 15:27:39 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=s7zT=BG=arm.com=rahul.singh@srs-us1.protection.inumbo.net>)
 id 1k052Q-0005ru-EB
 for xen-devel@lists.xenproject.org; Mon, 27 Jul 2020 15:27:38 +0000
X-Inumbo-ID: b2aa484e-d01d-11ea-8acf-bc764e2007e4
Received: from EUR02-HE1-obe.outbound.protection.outlook.com (unknown
 [2a01:111:f400:fe05::610])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b2aa484e-d01d-11ea-8acf-bc764e2007e4;
 Mon, 27 Jul 2020 15:27:36 +0000 (UTC)
Received: from AM6P195CA0019.EURP195.PROD.OUTLOOK.COM (2603:10a6:209:81::32)
 by VI1PR08MB4334.eurprd08.prod.outlook.com (2603:10a6:803:f1::20) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3216.24; Mon, 27 Jul
 2020 15:27:33 +0000
Received: from VE1EUR03FT023.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:209:81:cafe::7c) by AM6P195CA0019.outlook.office365.com
 (2603:10a6:209:81::32) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3216.23 via Frontend
 Transport; Mon, 27 Jul 2020 15:27:32 +0000
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org;
 dmarc=bestguesspass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT023.mail.protection.outlook.com (10.152.18.133) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3216.10 via Frontend Transport; Mon, 27 Jul 2020 15:27:32 +0000
Received: ("Tessian outbound 2ae7cfbcc26c:v62");
 Mon, 27 Jul 2020 15:27:31 +0000
X-CheckRecipientChecked: true
X-CR-MTA-CID: c4fbdf5884710f1b
X-CR-MTA-TID: 64aa7808
Received: from 10f074bb4d2c.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 E01B0C4A-986A-423F-A575-02543C929001.1; 
 Mon, 27 Jul 2020 15:27:26 +0000
Received: from EUR02-VE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 10f074bb4d2c.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Mon, 27 Jul 2020 15:27:26 +0000
Received: from AM6PR08MB3560.eurprd08.prod.outlook.com (2603:10a6:20b:4c::19)
 by AM6PR08MB3879.eurprd08.prod.outlook.com (2603:10a6:20b:8c::24)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3216.24; Mon, 27 Jul
 2020 15:27:22 +0000
Received: from AM6PR08MB3560.eurprd08.prod.outlook.com
 ([fe80::e891:3b33:766:cad5]) by AM6PR08MB3560.eurprd08.prod.outlook.com
 ([fe80::e891:3b33:766:cad5%6]) with mapi id 15.20.3216.033; Mon, 27 Jul 2020
 15:27:22 +0000
From: Rahul Singh <Rahul.Singh@arm.com>
To: Julien Grall <julien@xen.org>
Subject: Re: [RFC PATCH v1 1/4] arm/pci: PCI setup and PCI host bridge
 discovery within XEN on ARM.
Thread-Topic: [RFC PATCH v1 1/4] arm/pci: PCI setup and PCI host bridge
 discovery within XEN on ARM.
Thread-Index: AQHWYPbO0TX+B10vbkuLRSjkBERSxqkV0u2AgAB8NICAABFdAIAFMmuA
Date: Mon, 27 Jul 2020 15:27:21 +0000
Message-ID: <53A27297-DC0A-4C2F-A978-885E1B9A7603@arm.com>
References: <cover.1595511416.git.rahul.singh@arm.com>
 <64ebd4ef614b36a5844c52426a4a6a4a23b1f087.1595511416.git.rahul.singh@arm.com>
 <alpine.DEB.2.21.2007231055230.17562@sstabellini-ThinkPad-T480s>
 <3ee41590-e8ca-84d6-3010-6e5dffe91df0@epam.com>
 <276d6b48-8cd7-7fb1-1d76-15cb6a95cad9@xen.org>
In-Reply-To: <276d6b48-8cd7-7fb1-1d76-15cb6a95cad9@xen.org>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
Authentication-Results-Original: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [86.26.38.125]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: ff3b98ab-7c7a-4a98-ff7b-08d8324194cc
x-ms-traffictypediagnostic: AM6PR08MB3879:|VI1PR08MB4334:
x-ms-exchange-transport-forked: True
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:9508;OLM:9508;
X-MS-Exchange-SenderADCheck: 1
Content-Type: text/plain; charset="us-ascii"
Content-ID: <F9FA100E897C9F4AA804C4BB60373C25@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM6PR08MB3879
Original-Authentication-Results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped: VE1EUR03FT023.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs: 7bf6e424-482a-45da-a8f2-08d832418ea8
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 27 Jul 2020 15:27:32.2501 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: ff3b98ab-7c7a-4a98-ff7b-08d8324194cc
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d; Ip=[63.35.35.123];
 Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource: VE1EUR03FT023.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR08MB4334
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>,
 "andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>,
 "jbeulich@suse.com" <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 nd <nd@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 "roger.pau@citrix.com" <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>



> On 24 Jul 2020, at 9:05 am, Julien Grall <julien@xen.org> wrote:
> 
> Hi,
> 
> On 24/07/2020 08:03, Oleksandr Andrushchenko wrote:
>>>> diff --git a/xen/arch/arm/pci/pci-access.c b/xen/arch/arm/pci/pci-access.c
>>>> new file mode 100644
>>>> index 0000000000..c53ef58336
>>>> --- /dev/null
>>>> +++ b/xen/arch/arm/pci/pci-access.c
>>>> @@ -0,0 +1,101 @@
>>>> +/*
>>>> + * Copyright (C) 2020 Arm Ltd.
>> I think SPDX will fit better in any new code.
> 
> While I would love to use SPDX in Xen, there was some push back in the past to use it. So the new code should use the full-blown copyright notice until there is an agreement to use SPDX.

Ack. We will keep the full copyright notice until the community agrees to use SPDX in Xen.
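For context, the two header styles being discussed look roughly like this (illustrative only, not copied from the patch):

```
/* SPDX-License-Identifier: GPL-2.0-or-later */

/* versus the full-blown notice: */

/*
 * Copyright (C) 2020 Arm Ltd.
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License as published by
 * the Free Software Foundation; either version 2 of the License, or
 * (at your option) any later version.
 */
```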
> 
>>> 
>>>> +    list_add_tail(&bridge->node, &pci_host_bridges);
>>> It looks like &pci_host_bridges should be an ordered list, ordered by
>>> segment number?
>> Why? Do you expect bridge access in some specific order so ordered
>> list will make it faster?
> 
> Access to the config space will be pretty random, so I don't think ordering the list will make anything better.
> 
> However, looking up the bridge for every config space access is pretty slow. When I was working on PCI passthrough, I wanted to look at whether it would be possible to pass a pointer to the PCI host bridge to the helpers (rather than the segment).

Yes, true: every config space access has to scan the host-bridge list to find the corresponding host bridge. I will try to find a better solution so that we do not need to scan the list on every read/write access.
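[Editorial aside: the trade-off above — scanning a bridge list on every config-space access versus an O(1) lookup keyed by segment, or passing a bridge pointer directly — can be sketched as follows. The real Xen code is C; this is a standalone Go illustration with invented names, not the actual implementation.]

```go
package main

import "fmt"

// HostBridge is a hypothetical stand-in for Xen's host bridge structure.
type HostBridge struct {
	Segment int
	Name    string
}

// findBySegment mirrors the current approach: walk the bridge list on
// every config-space access until the matching segment is found (O(n)).
func findBySegment(bridges []*HostBridge, segment int) *HostBridge {
	for _, b := range bridges {
		if b.Segment == segment {
			return b
		}
	}
	return nil
}

// buildIndex builds a segment-keyed map for O(1) lookups; alternatively,
// callers could hold a *HostBridge pointer directly, as Julien suggests.
func buildIndex(bridges []*HostBridge) map[int]*HostBridge {
	idx := make(map[int]*HostBridge, len(bridges))
	for _, b := range bridges {
		idx[b.Segment] = b
	}
	return idx
}

func main() {
	bridges := []*HostBridge{{Segment: 0, Name: "ecam@0"}, {Segment: 1, Name: "ecam@1"}}
	fmt.Println(findBySegment(bridges, 1).Name) // ecam@1
	fmt.Println(buildIndex(bridges)[0].Name)    // ecam@0
}
```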


> Cheers,
> 
> -- 
> Julien Grall



From xen-devel-bounces@lists.xenproject.org Mon Jul 27 15:29:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jul 2020 15:29:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k053y-0005zG-6t; Mon, 27 Jul 2020 15:29:14 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=s7zT=BG=arm.com=rahul.singh@srs-us1.protection.inumbo.net>)
 id 1k053w-0005z8-Uu
 for xen-devel@lists.xenproject.org; Mon, 27 Jul 2020 15:29:12 +0000
X-Inumbo-ID: ebc99238-d01d-11ea-a792-12813bfff9fa
Received: from EUR04-DB3-obe.outbound.protection.outlook.com (unknown
 [40.107.6.69]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ebc99238-d01d-11ea-a792-12813bfff9fa;
 Mon, 27 Jul 2020 15:29:12 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com; 
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=gh+GZNaeqoxrikv21vTVFcnARyS7tqp7e4GjswCzLT8=;
 b=UseO7oMBnx1wDjVvQiOiZnBJdi4hb8zeW5X1LOnVPwBHLT6ZZTS+XCTvC8vETmRbh5FHAjFzubuL88hDe4mtpuu0aZYcF87fuAqSZCYy00dfvn6JBW9kMa5zEeDbLAW7aTXpPH1Q55jdfBScyv8HTONuEryX+xTDzdJckMkIFhM=
Received: from AM6P195CA0033.EURP195.PROD.OUTLOOK.COM (2603:10a6:209:81::46)
 by DB7PR08MB3753.eurprd08.prod.outlook.com (2603:10a6:10:7e::15) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3216.22; Mon, 27 Jul
 2020 15:29:09 +0000
Received: from AM5EUR03FT050.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:209:81:cafe::aa) by AM6P195CA0033.outlook.office365.com
 (2603:10a6:209:81::46) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3216.23 via Frontend
 Transport; Mon, 27 Jul 2020 15:29:09 +0000
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org;
 dmarc=bestguesspass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT050.mail.protection.outlook.com (10.152.17.47) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3216.10 via Frontend Transport; Mon, 27 Jul 2020 15:29:09 +0000
Received: ("Tessian outbound 73b502bf693a:v62");
 Mon, 27 Jul 2020 15:29:09 +0000
X-CheckRecipientChecked: true
X-CR-MTA-CID: a7e691fa6351b41b
X-CR-MTA-TID: 64aa7808
Received: from 0e295821c243.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 44BE6E17-536E-4ED8-9F8D-E98B98866A57.1; 
 Mon, 27 Jul 2020 15:29:03 +0000
Received: from EUR02-VE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 0e295821c243.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Mon, 27 Jul 2020 15:29:03 +0000
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=eN5VA8AdJ09hS8KsXMR8ugUtUC3Eu0NVNDDKfwXEVdno8bv6YaS5WLaAkYAvzUg7+EJderxvzhlrjk7oGgak1YZB2qAwS/OTbquh3CJVCrM9vtbEBW+bqV6xDBJpbiy47YbgP7TrFkDdxLHKxEtAhhh50lyEBarTyx30phbmRX8sEclXK1CePUbKXOPmvpv/Th63ANADA6jbU7wA3TZEAJsxZLm7W0wruygti0LMxw5JaYoUI/s3NrDxtGXFczNd+8TGYoBP6NSU0dtuvPdMXuk4wLUw1xIGG63hTGegjUFQx5Jt+2MImRtQ9sX6Ny+9OAPRLQJqXR37OIx1AN8omQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; 
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=gh+GZNaeqoxrikv21vTVFcnARyS7tqp7e4GjswCzLT8=;
 b=Zebc6RKo0HKDdteCLmJShfiBmYkNswEdq7tVOrnWJ28g5Wl/xnHfl3BotM25gGyhSw3ajhj9qO6RRQ0erLVcOKrXKts3qwta0ipsMxnazlZJqSqhF5jRIJBOPHncg0auMfIzBnoaX4ZPxJ5kut7zRo6QahftNdmAxwLJE9aohKpgZgugQM+kvW4851FUUg0EmWP5brXr0l6Osj7p2/eBIewzM+Gx2o20Rrlh5gW4uGGLCh8kfFejOWMeJp7ASchmV9DLhI7SFvVf9poTBbBbAo0lggHGI776k7DXwr98HYYc4Mxsbj3/fnd6ZCTsgj+qJCmk50GdUR2se9ny04IljQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com; 
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=gh+GZNaeqoxrikv21vTVFcnARyS7tqp7e4GjswCzLT8=;
 b=UseO7oMBnx1wDjVvQiOiZnBJdi4hb8zeW5X1LOnVPwBHLT6ZZTS+XCTvC8vETmRbh5FHAjFzubuL88hDe4mtpuu0aZYcF87fuAqSZCYy00dfvn6JBW9kMa5zEeDbLAW7aTXpPH1Q55jdfBScyv8HTONuEryX+xTDzdJckMkIFhM=
Received: from AM6PR08MB3560.eurprd08.prod.outlook.com (2603:10a6:20b:4c::19)
 by AM6PR08MB3879.eurprd08.prod.outlook.com (2603:10a6:20b:8c::24)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3216.24; Mon, 27 Jul
 2020 15:29:01 +0000
Received: from AM6PR08MB3560.eurprd08.prod.outlook.com
 ([fe80::e891:3b33:766:cad5]) by AM6PR08MB3560.eurprd08.prod.outlook.com
 ([fe80::e891:3b33:766:cad5%6]) with mapi id 15.20.3216.033; Mon, 27 Jul 2020
 15:29:01 +0000
From: Rahul Singh <Rahul.Singh@arm.com>
To: Julien Grall <julien@xen.org>
Subject: Re: [RFC PATCH v1 1/4] arm/pci: PCI setup and PCI host bridge
 discovery within XEN on ARM.
Thread-Topic: [RFC PATCH v1 1/4] arm/pci: PCI setup and PCI host bridge
 discovery within XEN on ARM.
Thread-Index: AQHWYPbO0TX+B10vbkuLRSjkBERSxqkWZWGAgAUt/4A=
Date: Mon, 27 Jul 2020 15:29:01 +0000
Message-ID: <389FC8F3-DDB6-473E-8F89-795EDC21B4E4@arm.com>
References: <cover.1595511416.git.rahul.singh@arm.com>
 <64ebd4ef614b36a5844c52426a4a6a4a23b1f087.1595511416.git.rahul.singh@arm.com>
 <756d7979-0ebf-b9a4-72bd-18782762f7da@xen.org>
In-Reply-To: <756d7979-0ebf-b9a4-72bd-18782762f7da@xen.org>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
Authentication-Results-Original: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [86.26.38.125]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: f41d8b28-8324-4be1-b6a5-08d83241ce93
x-ms-traffictypediagnostic: AM6PR08MB3879:|DB7PR08MB3753:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS: <DB7PR08MB37536C3415C59F3C9909725AFC720@DB7PR08MB3753.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:8882;OLM:8882;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original: uPlvfw2h0zcqEbqOPqs5eALaZYbMneD4q+Mn84hgKdNsCr41VmSJ3df5QXjBOlsndueLiykh/9gyTEqX5kILuVj9mqXo18/QYPFD5I56C0c56itTDRg86GZW7KGeet7iIe7zrG2LxLl4rPpQEX+yzsiR8ZqKoFCbvgKwKrTx8ZH4HBvp7l0N2UHYpX7CZQ4t+4KX9WRP4Ytw+OBT6K6MRUmKINGKHoal6PsmxMd1q4SxSK+HqYVkLrhXHYMUaUF+TPXi45x02lTOghjeA6Yul0g0e/ErvlfOQEbZQGQMtWm+oQ+AJHbNmN0H05Cr+o+jZaDGuF556O9ivTh5elw7bQ==
X-Forefront-Antispam-Report-Untrusted: CIP:255.255.255.255; CTRY:; LANG:en;
 SCL:1; SRV:; IPV:NLI; SFV:NSPM; H:AM6PR08MB3560.eurprd08.prod.outlook.com;
 PTR:; CAT:NONE; SFTY:;
 SFS:(4636009)(39860400002)(136003)(366004)(346002)(396003)(376002)(478600001)(71200400001)(4326008)(54906003)(6916009)(2616005)(6486002)(8676002)(316002)(8936002)(55236004)(6512007)(53546011)(83380400001)(6506007)(186003)(86362001)(33656002)(5660300002)(64756008)(26005)(91956017)(66946007)(76116006)(2906002)(66446008)(66556008)(66476007)(36756003);
 DIR:OUT; SFP:1101; 
x-ms-exchange-antispam-messagedata: /F1zS39wOx9OUULj1CI1O+9LQ2iPJxr7UmcfAQq9gre6w6itHy+0NeSrXEsY3WUfTub6nQBvkBBhIGBGi6BSXBb6B5dcaRoUNCwUrRegN4lyWcrkiAtmigeb3JPsDMtQk5m9iVrEzxK3ms5j+4olEibVizsVkShlo697HUP6BQZ3QlXa8s2M7XaGJsct1txWDIr90CvgO0CFNDrrIG7GZRT6nlwKmk7fhUSl4QRxVDEyWaKKSm2L+yKoaWIMTavI8FcV4ns5nCFS2PCnNafBr64Q/MQmxkI0Uv9ANT4o233jNEzIAlhnkVdGJ2Z6pKeFzDPCtPdr2lhMtO3giAc+0emtPJuqzEx7fkQm/4PTRqAQGsLEXhFwI0y/ILCCU6xDdVy+yXEudVAy9N9gR45koU3wViCuCos3TtZuyZrO/Q7wpAfIKjOLqVF20JAuz4voHW7+KCHO/Oyq22FJn9R2WA8gpTbfDzetAFqN8Mdjwb8=
Content-Type: text/plain; charset="utf-8"
Content-ID: <7BB12CB1A9972D41A3DA41F1295AE3C5@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM6PR08MB3879
Original-Authentication-Results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped: AM5EUR03FT050.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs: 6fed4ed4-166d-4e86-81f5-08d83241ca18
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: x+uD4FxYCE4CvJJZ2n/pdz9At9/ItP9exxaflZss8b9T6VblM3SXTFh5YxfLTffiMqddTkWTnLWa4TT5pAmhDHWrb3IcpLB/OhrwpuSjwzcaEijQPd3ups/emkn51N6GgjQLP4FjTLepsXQYtpko5RukXlNXGmYfmx/H8GQTriBlSbO3fZkr2KgKF4gE4Anfi+/mngORCwjOnlTeFgfUbZDezb6fLi2DdVXfQSFO0KziHbNOV1faLX5QY0y30yYPQ0UqAIjLYlgz1NCY0YYqJyptX4oLU3wNKE9dD9/sqjaOSlG1V41X5a3zHUw/iMBWF2zX5MnYNAdMO5MGMXzzJS4QXO0lfLjxKbyPqBQwLDszBKIlZvZ3LVLTsANNxifJ6HzVw1potXf6Ks/T9eknzQ==
X-Forefront-Antispam-Report: CIP:63.35.35.123; CTRY:IE; LANG:en; SCL:1; SRV:;
 IPV:CAL; SFV:NSPM; H:64aa7808-outbound-1.mta.getcheckrecipient.com;
 PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com; CAT:NONE; SFTY:;
 SFS:(4636009)(396003)(346002)(376002)(39860400002)(136003)(46966005)(6486002)(6506007)(53546011)(5660300002)(82310400002)(4326008)(2906002)(26005)(6862004)(81166007)(47076004)(107886003)(82740400003)(83380400001)(356005)(70586007)(54906003)(70206006)(6512007)(8936002)(8676002)(336012)(2616005)(186003)(478600001)(86362001)(36756003)(33656002)(36906005)(316002);
 DIR:OUT; SFP:1101; 
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 27 Jul 2020 15:29:09.2415 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: f41d8b28-8324-4be1-b6a5-08d83241ce93
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d; Ip=[63.35.35.123];
 Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource: AM5EUR03FT050.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB7PR08MB3753
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 nd <nd@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> On 24 Jul 2020, at 9:23 am, Julien Grall <julien@xen.org> wrote:
> 
> Hi Rahul,
> 
> On 23/07/2020 16:40, Rahul Singh wrote:
>> XEN during boot will read the PCI device tree node “reg” property
>> and will map the PCI config space to the XEN memory.
>> XEN will read the “linux, pci-domain” property from the device tree
>> node and configure the host bridge segment number accordingly.
>> As of now "pci-host-ecam-generic" compatible board is supported.
>> Change-Id: If32f7748b7dc89dd37114dc502943222a2a36c49
>> Signed-off-by: Rahul Singh <rahul.singh@arm.com>
>> ---
>>  xen/arch/arm/Kconfig                |   7 +
>>  xen/arch/arm/Makefile               |   1 +
>>  xen/arch/arm/pci/Makefile           |   4 +
>>  xen/arch/arm/pci/pci-access.c       | 101 ++++++++++++++
>>  xen/arch/arm/pci/pci-host-common.c  | 198 +++++++++++++++++++++++++++++++
>>  xen/arch/arm/pci/pci-host-generic.c | 131 ++++++++++++++++++
>>  xen/arch/arm/pci/pci.c              | 112 ++++++++++++++++
>>  xen/arch/arm/setup.c                |   2 +
>>  xen/include/asm-arm/device.h        |   7 +-
>>  xen/include/asm-arm/pci.h           |  97 +++++++++++++-
>>  10 files changed, 654 insertions(+), 6 deletions(-)
> 
> As a general comment, I would suggest splitting the patch into smaller chunks. This would help the review and also allow more explanation to be provided on what is done.

Ok, I will split the patches in the next version of the patch series.
> 
> For instance, I think a split looking like this is possible:
>    - Add a framework to access a host bridge
>    - Add support for ECAM
>    - Add code to initialize the PCI subsystem
> 
> There are also some small fixes in this code that can probably move into their own patches.

Ack.
> 
> Cheers,
> 
> -- 
> Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Jul 27 16:04:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jul 2020 16:04:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k05bx-0001Qb-OD; Mon, 27 Jul 2020 16:04:21 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=GrHA=BG=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1k05bw-0001QW-S6
 for xen-devel@lists.xenproject.org; Mon, 27 Jul 2020 16:04:21 +0000
X-Inumbo-ID: d459d658-d022-11ea-8ad3-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d459d658-d022-11ea-8ad3-bc764e2007e4;
 Mon, 27 Jul 2020 16:04:20 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
 MIME-Version:Content-Type:Content-Transfer-Encoding:Content-ID:
 Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc
 :Resent-Message-ID:In-Reply-To:References:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=uGzMCkkCdnk4jadoJ/DjFhZHwkxgms05YWDAwzdOhss=; b=GQk2w304VYKGoSzyElL1/si2Fw
 3TP6rtw/ZHq5EYCMcCCGqkat7XAMnWNU4eFUHTRgW10dnwwHapxAA1kmc6F3q+D1GSvcHrmYJHBTZ
 RMEt+VRRfOuoqond1eU3sTJnoXcidhuf6N40rmV1NIKFspxiOn8IOHxHc8nZ4iOMMc4g=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1k05bv-000462-Ha; Mon, 27 Jul 2020 16:04:19 +0000
Received: from 54-240-197-227.amazon.com ([54.240.197.227]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1k05bv-0000hr-6W; Mon, 27 Jul 2020 16:04:19 +0000
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [[XSATOOL]] repo: Add missing spaces in the configure cmdline for
 "xentools"
Date: Mon, 27 Jul 2020 17:04:15 +0100
Message-Id: <20200727160415.717-1-julien@xen.org>
X-Mailer: git-send-email 2.17.1
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Julien Grall <jgrall@amazon.com>, julien@xen.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Julien Grall <jgrall@amazon.com>

The operator + will just concatenate the two strings. As a result, the
configure cmdline for "xentools" will look like:

./configure --disable-stubdom --disable-qemu-traditional--with-system-qemu=/bin/false --with-system-seabios=/bin/false--disable-ovmf

This can be avoided by explicitly adding the spaces.
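[Editorial aside: the pitfall can be reproduced in a few lines of standalone Go, outside of xsatool itself.]

```go
package main

import "fmt"

func main() {
	// Without a trailing space, + fuses adjacent flags into one token.
	broken := "./configure --disable-qemu-traditional" +
		"--with-system-qemu=/bin/false"

	// With an explicit trailing space, the flags stay separated.
	fixed := "./configure --disable-qemu-traditional " +
		"--with-system-qemu=/bin/false"

	fmt.Println(broken)
	fmt.Println(fixed)
}
```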

Signed-off-by: Julien Grall <jgrall@amazon.com>
---
 repo.go | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/repo.go b/repo.go
index 1e7802f8142c..f00b7469101f 100644
--- a/repo.go
+++ b/repo.go
@@ -139,8 +139,8 @@ func MainRepoInit(unused *XSAMeta, args []string) (ret int) {
 	G.config.Tool.BuildSequences = map[string]BuildSequence{
 		"simple": {"./configure", "make -j 8"},
 		"xen":    {"make -j 8 xen"},
-		"xentools": {"./configure --disable-stubdom --disable-qemu-traditional" +
-			"--with-system-qemu=/bin/false --with-system-seabios=/bin/false" +
+		"xentools": {"./configure --disable-stubdom --disable-qemu-traditional " +
+			"--with-system-qemu=/bin/false --with-system-seabios=/bin/false " +
 			"--disable-ovmf",
 			"make -j 8"},
 	}
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Mon Jul 27 16:05:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jul 2020 16:05:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k05cg-0001SS-1D; Mon, 27 Jul 2020 16:05:06 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=GrHA=BG=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1k05ce-0001SL-S9
 for xen-devel@lists.xenproject.org; Mon, 27 Jul 2020 16:05:04 +0000
X-Inumbo-ID: eec1f4f8-d022-11ea-8ad3-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id eec1f4f8-d022-11ea-8ad3-bc764e2007e4;
 Mon, 27 Jul 2020 16:05:04 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=X0e+T0Pm0z4vGKCKezjuosEdiWF70tt/kzl1QNExTa0=; b=ImSb7zIbrozq2lLcj8MDFbi6XK
 bTGzp65NQE7pyUn0if+xFNe6pI3mAiHRY4/TS2q1xGGAdR5HM1Ik10UEscF0L2vqBu8s+IdjBkr5s
 HayYiZsYZsMLnhe9qGZiR/T852WVBw1tsIDddt6Ry4FysOJS0zrYvch2OlVcGverjn60=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1k05cb-00046l-VG; Mon, 27 Jul 2020 16:05:01 +0000
Received: from 54-240-197-236.amazon.com ([54.240.197.236]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1k05cb-0000kX-NO; Mon, 27 Jul 2020 16:05:01 +0000
Subject: Re: [[XSATOOL]] repo: Add missing spaces in the configure cmdline for
 "xentools"
To: xen-devel@lists.xenproject.org
References: <20200727160415.717-1-julien@xen.org>
From: Julien Grall <julien@xen.org>
Message-ID: <0299389e-bb24-6ec9-9fb4-18cc7c4ec181@xen.org>
Date: Mon, 27 Jul 2020 17:04:59 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:68.0)
 Gecko/20100101 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20200727160415.717-1-julien@xen.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Julien Grall <jgrall@amazon.com>, George Dunlap <george.dunlap@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hmmm I forgot to CC George. Sorry for that.

On 27/07/2020 17:04, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> The operator + will just concatenate the two strings. As a result, the
> configure cmdline for "xentools" will look like:
> 
> ./configure --disable-stubdom --disable-qemu-traditional--with-system-qemu=/bin/false --with-system-seabios=/bin/false--disable-ovmf
> 
> This can be avoided by explicitly adding the spaces.
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>
> ---
>   repo.go | 4 ++--
>   1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/repo.go b/repo.go
> index 1e7802f8142c..f00b7469101f 100644
> --- a/repo.go
> +++ b/repo.go
> @@ -139,8 +139,8 @@ func MainRepoInit(unused *XSAMeta, args []string) (ret int) {
>   	G.config.Tool.BuildSequences = map[string]BuildSequence{
>   		"simple": {"./configure", "make -j 8"},
>   		"xen":    {"make -j 8 xen"},
> -		"xentools": {"./configure --disable-stubdom --disable-qemu-traditional" +
> -			"--with-system-qemu=/bin/false --with-system-seabios=/bin/false" +
> +		"xentools": {"./configure --disable-stubdom --disable-qemu-traditional " +
> +			"--with-system-qemu=/bin/false --with-system-seabios=/bin/false " +
>   			"--disable-ovmf",
>   			"make -j 8"},
>   	}
> 

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Jul 27 16:11:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jul 2020 16:11:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k05iL-0002M0-QE; Mon, 27 Jul 2020 16:10:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=s7zT=BG=arm.com=rahul.singh@srs-us1.protection.inumbo.net>)
 id 1k05iK-0002Lv-VI
 for xen-devel@lists.xenproject.org; Mon, 27 Jul 2020 16:10:56 +0000
X-Inumbo-ID: bf8457f3-d023-11ea-8ad3-bc764e2007e4
Received: from EUR05-VI1-obe.outbound.protection.outlook.com (unknown
 [40.107.21.85]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id bf8457f3-d023-11ea-8ad3-bc764e2007e4;
 Mon, 27 Jul 2020 16:10:55 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com; 
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=DsZfFnXMBRM0Wk5LaTsic24+555RRDHubAUAxa2ZpbA=;
 b=2+vbZejYWvFZOcgkYO+ErdY0xyHEbBiLkVW9Tr27UoDLJcPJ6pFT57lnVmmp5NBvakuSzTb3Yvx6UCCmmbTtf6HigUotymRZ3RtmQ8N14nmddkoKA4/zRaRwx1dTES5cxPX6tV/ZNOi0KsRTBiNXAJnUgHFNBfSCSEY9hRXaXVY=
Received: from AM7PR04CA0025.eurprd04.prod.outlook.com (2603:10a6:20b:110::35)
 by DB7PR08MB3513.eurprd08.prod.outlook.com (2603:10a6:10:4a::17) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3216.24; Mon, 27 Jul
 2020 16:10:52 +0000
Received: from AM5EUR03FT059.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:110:cafe::a3) by AM7PR04CA0025.outlook.office365.com
 (2603:10a6:20b:110::35) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3216.20 via Frontend
 Transport; Mon, 27 Jul 2020 16:10:52 +0000
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org;
 dmarc=bestguesspass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT059.mail.protection.outlook.com (10.152.17.193) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3216.10 via Frontend Transport; Mon, 27 Jul 2020 16:10:52 +0000
Received: ("Tessian outbound c4059ed8d7bf:v62");
 Mon, 27 Jul 2020 16:10:51 +0000
X-CheckRecipientChecked: true
X-CR-MTA-CID: cdaea35e16563ff3
X-CR-MTA-TID: 64aa7808
Received: from 181a67bb8d82.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 81696E7E-C31B-4935-AAA8-969C4584F439.1; 
 Mon, 27 Jul 2020 16:10:46 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 181a67bb8d82.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Mon, 27 Jul 2020 16:10:46 +0000
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=CBWen2b6x+bgchYoolsrUjRQICMDO1AvRr1284W8nZlDIDLEYsUvNY1XpM6bW0kIo+Y/K4cm2DiNcrg35BgvY4kcnHb9iufhggQ9cg2O8VnbvMIM5CTHFx6NjveNYBh4DieC+EqnL3Rwt3c6Gas4S8HVS+UT+V0mv5kgNd83UQNGv5LuLhmOtK4lXquhsMWk0lvfMYMc3+Xf2X2hWAMdbfPXtK2xX/7mE7onEQCeNPzZZKv31ohamxkAbo4hBiydno31jeTsf7O2mPCO1DoGboGzHsruL0ioFa5eu4WbwwcX6rHbshVZgZxVcwGiRkm5a6eNVLQGuAJO6/FquqVtBw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; 
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=DsZfFnXMBRM0Wk5LaTsic24+555RRDHubAUAxa2ZpbA=;
 b=jwXG1jRNpGLw51g79OMCZQMhn5yXHvAOsn8OZXGPgm3XKrXyqfbIC08ANpYv+dbjJ0BQNnpZJLLKHz8AWT1EnU3K4cdIXrSsExhZDKWC7WTYTFrDRgNk0FnRpHwacyedb9ASFKuHThwQz+qWo6kDxmckwfLNNyE7jqMS4AoiUclH/VfH1fiiGM9IU8MCQh8qu0UR+3+ZjNzyCaYqIY4c/K6Gr2oIV8/HgnUDVcBK4kgfuGWCvZxJy4WgNGNt9vuYH4rornFBBKIyYAvjKfTwTpOA8dxrd8nJK9p/9eUgF5ue8OcYaUSRT+KmrdwNbjPxy/a6/NsmFkoJj/orn+eQWw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com; 
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=DsZfFnXMBRM0Wk5LaTsic24+555RRDHubAUAxa2ZpbA=;
 b=2+vbZejYWvFZOcgkYO+ErdY0xyHEbBiLkVW9Tr27UoDLJcPJ6pFT57lnVmmp5NBvakuSzTb3Yvx6UCCmmbTtf6HigUotymRZ3RtmQ8N14nmddkoKA4/zRaRwx1dTES5cxPX6tV/ZNOi0KsRTBiNXAJnUgHFNBfSCSEY9hRXaXVY=
Received: from AM6PR08MB3560.eurprd08.prod.outlook.com (2603:10a6:20b:4c::19)
 by AM6PR08MB5240.eurprd08.prod.outlook.com (2603:10a6:20b:ec::28)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3216.24; Mon, 27 Jul
 2020 16:10:45 +0000
Received: from AM6PR08MB3560.eurprd08.prod.outlook.com
 ([fe80::e891:3b33:766:cad5]) by AM6PR08MB3560.eurprd08.prod.outlook.com
 ([fe80::e891:3b33:766:cad5%6]) with mapi id 15.20.3216.033; Mon, 27 Jul 2020
 16:10:45 +0000
From: Rahul Singh <Rahul.Singh@arm.com>
To: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
Subject: Re: [RFC PATCH v1 2/4] xen/arm: Discovering PCI devices and add the
 PCI devices in XEN.
Thread-Topic: [RFC PATCH v1 2/4] xen/arm: Discovering PCI devices and add the
 PCI devices in XEN.
Thread-Index: AQHWYPbP4iMQjZDQ906szkp+TjbuPqkVohKAgACwKACABUzOAA==
Date: Mon, 27 Jul 2020 16:10:44 +0000
Message-ID: <E01D4585-F28B-421A-8B6B-3B46BA9D804C@arm.com>
References: <cover.1595511416.git.rahul.singh@arm.com>
 <666df0147054dda8af13ae74a89be44c69984295.1595511416.git.rahul.singh@arm.com>
 <alpine.DEB.2.21.2007231337140.17562@sstabellini-ThinkPad-T480s>
 <81cad0cd-731d-e1d5-cacd-d64f2c0781b6@epam.com>
In-Reply-To: <81cad0cd-731d-e1d5-cacd-d64f2c0781b6@epam.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
Authentication-Results-Original: epam.com; dkim=none (message not signed)
 header.d=none;epam.com; dmarc=none action=none header.from=arm.com;
x-originating-ip: [86.26.38.125]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 55e8ad16-d9eb-4be8-c2a6-08d83247a25e
x-ms-traffictypediagnostic: AM6PR08MB5240:|DB7PR08MB3513:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS: <DB7PR08MB351399C9AE4E788A6D958690FC720@DB7PR08MB3513.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:10000;OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original: hPTSieqt+a2MN7sjPWuO78tkiJC2hdtAZvKZEucWOoIx/JRB7eYwTrGIrZ8u5dHaeMpz3iXlcE5nuVz9ubsSPk1wS5DlsOZCUCl2PKOjmh0cgp7Hdw59ECSNRrmKZq3AHlIyNMISU6uaiIwQXAjfGl6YilepWatKktNC6saO6qDwVVVTW57E4wRyaARDn3Nf/vNuFa1w0s0nThy2ACyV+y0uuvbKf983u4ccJBrxxZKttVqJ+W7Sko4VL0Qvvdnuvxfu6WT6FwOIhvAK08Nf9x59ZIMzo7+mxGXGoaWSjsDGcWciWY2SIwhacC+urCYgXI4mOIjS8XdCSY+zZtZWcFms92crJWSlaOLin7hBDf3447Xgivc1KoNjsknla+oy+MwmyvnEv5/nwagva6avvg==
X-Forefront-Antispam-Report-Untrusted: CIP:255.255.255.255; CTRY:; LANG:en;
 SCL:1; SRV:; IPV:NLI; SFV:NSPM; H:AM6PR08MB3560.eurprd08.prod.outlook.com;
 PTR:; CAT:NONE; SFTY:;
 SFS:(4636009)(39860400002)(396003)(366004)(346002)(376002)(136003)(76116006)(966005)(91956017)(5660300002)(55236004)(33656002)(53546011)(36756003)(26005)(2906002)(478600001)(4326008)(66946007)(66556008)(66476007)(8936002)(64756008)(66446008)(83380400001)(54906003)(6916009)(6512007)(316002)(6486002)(6506007)(86362001)(8676002)(71200400001)(186003)(2616005);
 DIR:OUT; SFP:1101; 
x-ms-exchange-antispam-messagedata: edEsyjGDboljEFItozBbFZJYc8wHmWoRY/asqEoFRXfjXsf4gHOKyUBa2x3tzRNnm20A23RVz8GUW8EBAw6Dislqd8S6tNpVmMBGrHk/tDg0Qth3AxwqWhualP1N2DBAPRpebEdffV5oIv0/e+jQ6D4FTA0sTuAO1fVlmQW2RaGMe5oZEfVD38KDXol8H38W5lcz5O/RreHEyMPPr3z2TOF9gxcmcM1lSsvf+o1IKu4sFnMXaM7ZG+X6GO2XJgErfAqYX9KaD7gqj7g01qN6RsladiF/641l/c7PlhH1XDGRP22SVIi+xODUUoezLbDKtbjLTHz9SQUG1KyjjG1y7TJ5ANfA2HXvKjC81qNmHVwEIGGYfOxF3t6CN1+c+AYio2AR5z9edusaucDe1v9BzCU3CbuDyjiWDOGEi67tYULsHBwJHfkBRui+KtBnekm7dRdvRNo1nYkBC6knr4VfvcRyk6i6QkXoxH2JjYly64c=
Content-Type: text/plain; charset="us-ascii"
Content-ID: <AF325F7B3FE8634CA0422900B2F34C5F@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM6PR08MB5240
Original-Authentication-Results: epam.com; dkim=none (message not signed)
 header.d=none;epam.com; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped: AM5EUR03FT059.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs: f5f9a7bd-4689-4350-0f3e-08d832479e30
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: eVbU+2do6VZtzMSK8jT3tlzCT/jL1DtW+UAuqoIRuuTXkYuHu47vY/te6KZPl6CN5Y/mheijz60tMI0I/x53HysJGa8DMvmb17FbxaYDaxcJ+/LAdcPc7kKg5EGKHzbstg04CGK87/CmmV50OOTE59rYfjHHkGLY7/cJyg1M5i6XniAIxE+1kgYVWuaNiv8ZzrgUoH1ISGPc/ceZiq86IXUege6F5FazlfiPLOrUagiJnHVSMXQzMyNPz/DD/+cDI967IzK2IgmXDouO7EDUi8DmMde79tZv8AxInQfQHLaXAZNxh8DvtvIfGBjwOk9n6dwXYWj4O3uYuCHV9mjvl+mChb9SXgFrDGr/IuxSc7QB3aBtdUz9l2S5DvcK3hv1pCsDQ6K8eXGhTAnWq9Frtujy/5DDsqe7eIO3O8g+layyWUijlDBs3ujpD0e50K3BRn0CVUQl3+GQLHm9pFxvokclr31YWbxs09O833KwI3w=
X-Forefront-Antispam-Report: CIP:63.35.35.123; CTRY:IE; LANG:en; SCL:1; SRV:;
 IPV:CAL; SFV:NSPM; H:64aa7808-outbound-1.mta.getcheckrecipient.com;
 PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com; CAT:NONE; SFTY:;
 SFS:(4636009)(346002)(376002)(396003)(136003)(39860400002)(46966005)(81166007)(26005)(5660300002)(33656002)(86362001)(966005)(2906002)(36756003)(70206006)(70586007)(356005)(82310400002)(36906005)(6862004)(478600001)(54906003)(82740400003)(4326008)(186003)(2616005)(6506007)(6512007)(47076004)(53546011)(83380400001)(107886003)(8936002)(336012)(6486002)(8676002)(316002);
 DIR:OUT; SFP:1101; 
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 27 Jul 2020 16:10:52.0461 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 55e8ad16-d9eb-4be8-c2a6-08d83247a25e
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d; Ip=[63.35.35.123];
 Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource: AM5EUR03FT059.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB7PR08MB3513
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 "andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>,
 "jbeulich@suse.com" <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 nd <nd@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 "roger.pau@citrix.com" <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>



> On 24 Jul 2020, at 8:14 am, Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com> wrote:
>
>
> On 7/23/20 11:44 PM, Stefano Stabellini wrote:
>> On Thu, 23 Jul 2020, Rahul Singh wrote:
>>> Hardware domain is in charge of doing the PCI enumeration and will
>>> discover the PCI devices and then will communicate to XEN via the
>>> hypercall PHYSDEVOP_pci_device_add to add the PCI devices in XEN.
>>>
>>> Change-Id: Ie87e19741689503b4b62da911c8dc2ee318584ac
>> Same question about Change-Id
>>
>>
>>> Signed-off-by: Rahul Singh <rahul.singh@arm.com>
>>> ---
>>>  xen/arch/arm/physdev.c | 42 +++++++++++++++++++++++++++++++++++++++---
>>>  1 file changed, 39 insertions(+), 3 deletions(-)
>>>
>>> diff --git a/xen/arch/arm/physdev.c b/xen/arch/arm/physdev.c
>>> index e91355fe22..274720f98a 100644
>>> --- a/xen/arch/arm/physdev.c
>>> +++ b/xen/arch/arm/physdev.c
>>> @@ -9,12 +9,48 @@
>>>  #include <xen/errno.h>
>>>  #include <xen/sched.h>
>>>  #include <asm/hypercall.h>
>>> -
>>> +#include <xen/guest_access.h>
>>> +#include <xsm/xsm.h>
>>>
>>>  int do_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
>>>  {
>>> -    gdprintk(XENLOG_DEBUG, "PHYSDEVOP cmd=%d: not implemented\n", cmd);
>>> -    return -ENOSYS;
>>> +    int ret = 0;
>>> +
>>> +    switch ( cmd )
>>> +    {
>>> +#ifdef CONFIG_HAS_PCI
>
> In the cover letter you were saying "we are not enabling the HAS_PCI and HAS_VPCI flags for ARM".
>
> Is this still valid?

Yes, that is still valid: we are not enabling them because full PCI passthrough support is not yet implemented and tested.
>
>>> +        case PHYSDEVOP_pci_device_add:
>>> +            {
>>> +                struct physdev_pci_device_add add;
>>> +                struct pci_dev_info pdev_info;
>>> +                nodeid_t node = NUMA_NO_NODE;
>>> +
>>> +                ret = -EFAULT;
>>> +                if ( copy_from_guest(&add, arg, 1) != 0 )
>>> +                    break;
>>> +
>>> +                pdev_info.is_extfn = !!(add.flags & XEN_PCI_DEV_EXTFN);
>>> +                if ( add.flags & XEN_PCI_DEV_VIRTFN )
>>> +                {
>>> +                    pdev_info.is_virtfn = 1;
>>> +                    pdev_info.physfn.bus = add.physfn.bus;
>>> +                    pdev_info.physfn.devfn = add.physfn.devfn;
>>> +                }
>>> +                else
>>> +                    pdev_info.is_virtfn = 0;
>>> +
>>> +                ret = pci_add_device(add.seg, add.bus, add.devfn,
>>> +                                &pdev_info, node);
>>> +
>>> +                break;
>>> +            }
>>> +#endif
>>> +        default:
>>> +            gdprintk(XENLOG_DEBUG, "PHYSDEVOP cmd=%d: not implemented\n", cmd);
>>> +            ret = -ENOSYS;
>>> +    }
>> I think we should make the implementation common between arm and x86 by
>> creating xen/common/physdev.c:do_physdev_op as a shared entry point for
>> PHYSDEVOP hypercall implementations. See for instance:
>>
>> xen/common/sysctl.c:do_sysctl
>>
>> and
>>
>> xen/arch/arm/sysctl.c:arch_do_sysctl
>> xen/arch/x86/sysctl.c:arch_do_sysctl
>>
>>
>> Jan, Andrew, Roger, any opinions?
>>
>>
> I think we can also have a look at [1] by Julien. That implementation,
>
> IMO, had some thoughts on making Arm/x86 code common where possible.

Ok. Thanks for the pointer. We will have a look.
>
>
> [1] https://xenbits.xen.org/gitweb/?p=people/julieng/xen-unstable.git;a=shortlog;h=refs/heads/dev-pci
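[Editor's note] The do_sysctl/arch_do_sysctl split suggested above can be sketched as follows. This is a minimal, hypothetical illustration of the pattern — not the actual Xen code: guest handles, copy_from_guest() and XSM checks are elided, plain pointers stand in for XEN_GUEST_HANDLE_PARAM, and the arch_do_physdev_op() name is assumed by analogy with arch_do_sysctl().

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>

/*
 * PHYSDEVOP_pci_device_add is defined in xen/include/public/physdev.h;
 * the literal here exists only to make this sketch self-contained.
 */
#define PHYSDEVOP_pci_device_add 25

/*
 * Per-architecture hook (hypothetical name, mirroring arch_do_sysctl):
 * each architecture handles the commands common code does not implement.
 */
static int arch_do_physdev_op(int cmd, void *arg)
{
    (void)cmd;
    (void)arg;
    return -ENOSYS; /* e.g. Arm: no arch-specific PHYSDEVOPs yet */
}

/*
 * Common entry point: shared commands are handled once, in common code,
 * and everything else falls through to the arch hook.
 */
int do_physdev_op(int cmd, void *arg)
{
    switch ( cmd )
    {
    case PHYSDEVOP_pci_device_add:
        /* copy_from_guest() + pci_add_device() would live here */
        return 0;
    default:
        return arch_do_physdev_op(cmd, arg);
    }
}
```

The point of the split is that PHYSDEVOP_pci_device_add is handled identically on both architectures, so only the default arm of the switch differs per arch.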



From xen-devel-bounces@lists.xenproject.org Mon Jul 27 16:15:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jul 2020 16:15:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k05mx-0002Wj-CD; Mon, 27 Jul 2020 16:15:43 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=qU+V=BG=citrix.com=george.dunlap@srs-us1.protection.inumbo.net>)
 id 1k05mv-0002We-JU
 for xen-devel@lists.xenproject.org; Mon, 27 Jul 2020 16:15:41 +0000
X-Inumbo-ID: 69a87e66-d024-11ea-8ad3-bc764e2007e4
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 69a87e66-d024-11ea-8ad3-bc764e2007e4;
 Mon, 27 Jul 2020 16:15:40 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595866540;
 h=from:to:cc:subject:date:message-id:references:
 in-reply-to:content-id:content-transfer-encoding: mime-version;
 bh=xYD4y5LZG9bWdcQ4wHI/ketk1UinKIDQYh7dblPQgaA=;
 b=C8RUSahkYSYgC97mQceqxK9fTE1yGhXATjmDiLk7IK3DDpsmqFFf8klt
 kB/CLNdMWtsPIBv1hMMKHbmur/N4Ixtt/YldT9QhsmeQzturpNDAYh7Sa
 Crn/SzbW+PF2D7Eh9BpazcbwvePb8sqrCyB6vvn3xhpSjoFBWsRVv0D9K E=;
Authentication-Results: esa5.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: blYReVZ7jcnzuKuWDXWEqtriAH915lGvknKJaAFw2SYEgAHHaU9vrLn3JpnHPh1nWnF8SL+2gU
 yG87+gTaiEVUjfcBQMX0NZVWBuYM0vY4fYv/TodITVtcn+8RWZe2YWUF7nX3ghf6tToa5EawIa
 ZzERfsyABbIYxxYpouSgQCW9czUmi9o2IH7qIMHDQcZ55y0SUbt9irfzBYza0MkpgDSqxJb8mP
 ba7GuQFSSOj7y/6EX9nvgTZyybCm6TyTb17z1aaCxw1DEmzkcq9OUeBKfwrxqoboopkc84IIuR
 Iu4=
X-SBRS: 2.7
X-MesageID: 23463965
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,402,1589256000"; d="scan'208";a="23463965"
From: George Dunlap <George.Dunlap@citrix.com>
To: Ian Jackson <Ian.Jackson@citrix.com>
Subject: Re: [OSSTEST PATCH 06/14] sg-report-flight: Use WITH clause to use
 index for $anypassq
Thread-Topic: [OSSTEST PATCH 06/14] sg-report-flight: Use WITH clause to use
 index for $anypassq
Thread-Index: AQHWX46tFgmYVwZS+kmF2kdEj3oAf6kbga0A
Date: Mon, 27 Jul 2020 16:15:36 +0000
Message-ID: <E1356BFA-1FDF-42B8-A4E1-47C45F93D036@citrix.com>
References: <20200721184205.15232-1-ian.jackson@eu.citrix.com>
 <20200721184205.15232-7-ian.jackson@eu.citrix.com>
In-Reply-To: <20200721184205.15232-7-ian.jackson@eu.citrix.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-mailer: Apple Mail (2.3608.80.23.2.2)
x-ms-exchange-messagesentrepresentingtype: 1
x-ms-exchange-transport-fromentityheader: Hosted
Content-Type: text/plain; charset="us-ascii"
Content-ID: <8654226DEB495D4D95881F479438D734@citrix.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>



> On Jul 21, 2020, at 7:41 PM, Ian Jackson <ian.jackson@eu.citrix.com> wrote:
>
> Perf: runtime of my test case now ~11s
>
> Example query before (from the Perl DBI trace):
>
>        SELECT * FROM flights JOIN steps USING (flight)
>            WHERE (branch='xen-unstable')
>              AND job=? and testid=? and status='pass'
>              AND ( (TRUE AND flight <= 151903) AND (blessing='real') )
>            LIMIT 1
>
> After:
>
>        WITH s AS
>        (
>        SELECT * FROM steps
>         WHERE job=? and testid=? and status='pass'
>        )
>        SELECT * FROM flights JOIN s USING (flight)
>            WHERE (branch='xen-unstable')
>              AND ( (TRUE AND flight <= 151903) AND (blessing='real') )
>            LIMIT 1
>
> In both cases with bind vars:
>
>   "test-amd64-i386-xl-pvshim"
>   "guest-start"
>
> Diff to the query:
>
> -        SELECT * FROM flights JOIN steps USING (flight)
> +        WITH s AS
> +        (
> +        SELECT * FROM steps
> +         WHERE job=? and testid=? and status='pass'
> +        )
> +        SELECT * FROM flights JOIN s USING (flight)
>             WHERE (branch='xen-unstable')
> -              AND job=? and testid=? and status='pass'
>               AND ( (TRUE AND flight <= 151903) AND (blessing='real') )
>             LIMIT 1
>
> CC: George Dunlap <George.Dunlap@citrix.com>
> Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
> ---
> schema/steps-job-index.sql |  2 +-
> sg-report-flight           | 14 ++++++++++++--
> 2 files changed, 13 insertions(+), 3 deletions(-)
>
> diff --git a/schema/steps-job-index.sql b/schema/steps-job-index.sql
> index 07dc5a30..2c33af72 100644
> --- a/schema/steps-job-index.sql
> +++ b/schema/steps-job-index.sql
> @@ -1,4 +1,4 @@
> --- ##OSSTEST## 006 Preparatory
> +-- ##OSSTEST## 006 Needed
> --
> -- This index helps sg-report-flight find if a test ever passed.
>
> diff --git a/sg-report-flight b/sg-report-flight
> index b5398573..b8d948da 100755
> --- a/sg-report-flight
> +++ b/sg-report-flight
> @@ -849,10 +849,20 @@ sub justifyfailures ($;$) {
>
>     my @failures= values %{ $fi->{Failures} };
>
> +    # In psql 9.6 this WITH clause makes postgresql do the steps query
> +    # first.  This is good because if this test never passed we can
> +    # determine that really quickly using the new index, without
> +    # having to scan the flights table.  (If the test passed we will
> +    # probably not have to look at many flights to find one, so in
> +    # that case this is not much worse.)

Seems a bit weird, but OK.  The SQL looks the same, so:

Reviewed-by: George Dunlap <george.dunlap@citrix.com>



From xen-devel-bounces@lists.xenproject.org Mon Jul 27 16:45:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jul 2020 16:45:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k06FN-00055P-3s; Mon, 27 Jul 2020 16:45:05 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9AKR=BG=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1k06FL-00054e-Qp
 for xen-devel@lists.xenproject.org; Mon, 27 Jul 2020 16:45:03 +0000
X-Inumbo-ID: 7f87c83c-d028-11ea-a7a9-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7f87c83c-d028-11ea-a7a9-12813bfff9fa;
 Mon, 27 Jul 2020 16:44:54 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=ZNBlxwVkXZXBH+3CIWjTWMWNCjesxg0BvSNCjSNHnyE=; b=3LeWfDwJEq7Bh3FRqpA7kYBeg
 jyB41hDL19I50WeQNi6E/iP8pVbWPFDh8xID/gjqDQiFzZOV8zXxNbe4yZGo+qmsM1gs2Gyehu+RE
 VTM9j0R70SZ+ec2O1TFz6Zd95P9cEGPYcoyqgFxR+YkQWrU3s5ibBayQYFAsHg5h/qp+E=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1k06FC-0004u4-81; Mon, 27 Jul 2020 16:44:54 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1k06FB-0000Ib-U2; Mon, 27 Jul 2020 16:44:53 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1k06FB-0005bk-TO; Mon, 27 Jul 2020 16:44:53 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-152235-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 152235: tolerable all pass - PUSHED
X-Osstest-Failures: xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=c27a184225eab54d20435c8cab5ad0ef384dc2c0
X-Osstest-Versions-That: xen=0562cbc14cf02b8188b9f1f37f39a4886776ce7c
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 27 Jul 2020 16:44:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 152235 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/152235/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  c27a184225eab54d20435c8cab5ad0ef384dc2c0
baseline version:
 xen                  0562cbc14cf02b8188b9f1f37f39a4886776ce7c

Last test of basis   152180  2020-07-24 15:02:33 Z    3 days
Testing same since   152235  2020-07-27 14:10:29 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   0562cbc14c..c27a184225  c27a184225eab54d20435c8cab5ad0ef384dc2c0 -> smoke


From xen-devel-bounces@lists.xenproject.org Mon Jul 27 17:06:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jul 2020 17:06:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k06ZW-0006qE-UG; Mon, 27 Jul 2020 17:05:54 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1WzZ=BG=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1k06ZV-0006q9-Ds
 for xen-devel@lists.xenproject.org; Mon, 27 Jul 2020 17:05:53 +0000
X-Inumbo-ID: 6c90f751-d02b-11ea-8ad9-bc764e2007e4
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6c90f751-d02b-11ea-8ad9-bc764e2007e4;
 Mon, 27 Jul 2020 17:05:51 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595869553;
 h=from:to:cc:subject:date:message-id:mime-version:
 content-transfer-encoding;
 bh=30xkhWEAxIXXdhxrNAKv+7VMBsBG5McF1NECK0NyvdQ=;
 b=PUFv2B03vxYEnAuB5HYqrTCg0Rq2vXA5bm3VJhKvq4iCGVYfNiszPm47
 39HnB0LqmR/1zjbtGHbx0n4jGARqjdhAiJuuxL3PI8Ro4HTDIoYu7rObu
 mCeGhF+K6h+7+QIaV7XuqUbgJcFeKZRowKPgLyPcQ1asOkqcYiJQhVlgI k=;
Authentication-Results: esa2.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: AjIhgtY8/UzQItIxDwebzYaiSUJaVSigFKfePUwJZGszfyqhfbxcNPdxktb6jfqNOYBW0J+EoD
 Hi0ktSSME83+lDXkribw4iCEg4Jj/8lv2wjRQdtymV6D93lbaKhkoUKdM2MMG7Q7r6nic0sISJ
 Zih93AMcD/fam8/XzTaBe0YWD+Tgx2COsUFTZaCV6uVI9xNoll6r6CkFd6EU1w2sTdaqJmjkbi
 ItoY73Hv8npJlsyFNClIUPT86jDWkh5/tWMjhc7aSj47tQfSS3oIiEtOiYt2EZ5ET7PjQMIG2j
 Txw=
X-SBRS: 2.7
X-MesageID: 23292967
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,402,1589256000"; d="scan'208";a="23292967"
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xenproject.org>
Subject: [PATCH v2 0/5] x86: virtual timer related fixes
Date: Mon, 27 Jul 2020 19:05:34 +0200
Message-ID: <20200727170539.55798-1-roger.pau@citrix.com>
X-Mailer: git-send-email 2.27.0
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Jan Beulich <jbeulich@suse.com>, Roger Pau Monne <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hello,

This is the first part of the vPT fixes that I have previously posted as
a single series. I don't think it contains any controversial patches,
and most have already been Acked or Reviewed. I've split them from the
series because these can likely go in now, while I work on properly
finishing the remaining ones.

Thanks, Roger.

Roger Pau Monne (5):
  x86/hvm: fix vIO-APIC build without IRQ0_SPECIAL_ROUTING
  x86/hvm: don't force vCPU 0 for IRQ 0 when using fixed destination
    mode
  x86/hvm: fix ISA IRQ 0 handling when set as lowest priority mode in IO
    APIC
  x86/vpt: only try to resume timers belonging to enabled devices
  x86/hvm: only translate ISA interrupts to GSIs in virtual timers

 xen/arch/x86/hvm/vioapic.c | 42 +++++++++++---------------------------
 xen/arch/x86/hvm/vpt.c     | 19 ++++++++++-------
 2 files changed, 24 insertions(+), 37 deletions(-)

-- 
2.27.0



From xen-devel-bounces@lists.xenproject.org Mon Jul 27 17:06:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jul 2020 17:06:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k06Zi-0006sp-4h; Mon, 27 Jul 2020 17:06:06 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1WzZ=BG=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1k06Zh-0006qK-7w
 for xen-devel@lists.xenproject.org; Mon, 27 Jul 2020 17:06:05 +0000
X-Inumbo-ID: 7062a6ee-d02b-11ea-a7b1-12813bfff9fa
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7062a6ee-d02b-11ea-a7b1-12813bfff9fa;
 Mon, 27 Jul 2020 17:05:58 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595869559;
 h=from:to:cc:subject:date:message-id:in-reply-to:
 references:mime-version:content-transfer-encoding;
 bh=saiDrgvFoGPsvHXwjrhIy9joLoxVibLVmDC510j+ikM=;
 b=TZ9nwE2DH71zn7T3PV+juoG9+N6zGMOToH/bDr1Oe5n3I9yhxf3tk36L
 gTgxeg/0OiucXSacAWIdnq1UOKZDQAFzbRczDhnNwVYfG11c/Ux98XPDd
 0VAJ/wsafXs4XCd+r6KadlZfDI648LPPSF8k5VTbwzXVKkMZajQzXMV+D k=;
Authentication-Results: esa1.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: NgwrdsjxFH/xaOvgM3zcURG7VC2t47NoPT9WCA+NYDz0pOkGAEBir4ieksjaq60g1CaMvh66Yc
 oHyamu8lkusknUi3B9lV/UGWODQqIYOFWylWXp1/pdtO+rLYjcAPmds6eJ/Gs2WcWr6dVrgJ9Z
 iOsAU7lTqtT3B82XPtDrf5tFl00hAwGTTHY98J4VTTgMZXczofvtAXHUqbvQ1ZLCQg6LB6Z7xS
 5zMM4luNKAbPztVazylyRJbRpcjAe02awLB5yTeBzVHN1JpuGZpjMGi0spnS6/wxgcJpjNmNc4
 IL4=
X-SBRS: 2.7
X-MesageID: 23613409
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,402,1589256000"; d="scan'208";a="23613409"
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xenproject.org>
Subject: [PATCH v2 3/5] x86/hvm: fix ISA IRQ 0 handling when set as lowest
 priority mode in IO APIC
Date: Mon, 27 Jul 2020 19:05:37 +0200
Message-ID: <20200727170539.55798-4-roger.pau@citrix.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20200727170539.55798-1-roger.pau@citrix.com>
References: <20200727170539.55798-1-roger.pau@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Jan Beulich <jbeulich@suse.com>, Roger Pau Monne <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Lowest priority destination mode does allow the vIO-APIC code to
select a vCPU to inject the interrupt to, but the selected vCPU must
be part of the possible destinations configured for that IO-APIC pin.

Fix the code so that vCPU 0 is only forced if it is part of the
listed destinations.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
Changes since v1:
 - Add a comment regarding the vlapic_enabled check.
---
NB: I haven't added a fallback to vCPU 0 if no destination is found,
as that is not how real hardware behaves. I think we should assume that
no user has relied on this bogus Xen behavior for IRQ 0 interrupt
injection.
---
 xen/arch/x86/hvm/vioapic.c | 14 ++++++++------
 1 file changed, 8 insertions(+), 6 deletions(-)

diff --git a/xen/arch/x86/hvm/vioapic.c b/xen/arch/x86/hvm/vioapic.c
index 123191db75..67d4a6237f 100644
--- a/xen/arch/x86/hvm/vioapic.c
+++ b/xen/arch/x86/hvm/vioapic.c
@@ -415,12 +415,14 @@ static void vioapic_deliver(struct hvm_vioapic *vioapic, unsigned int pin)
     case dest_LowestPrio:
     {
 #ifdef IRQ0_SPECIAL_ROUTING
-        /* Force round-robin to pick VCPU 0 */
-        if ( (irq == hvm_isa_irq_to_gsi(0)) && pt_active(&d->arch.vpit.pt0) )
-        {
-            v = d->vcpu ? d->vcpu[0] : NULL;
-            target = v ? vcpu_vlapic(v) : NULL;
-        }
+        struct vlapic *lapic0 = vcpu_vlapic(d->vcpu[0]);
+
+        /* Force to pick vCPU 0 if part of the destination list */
+        if ( (irq == hvm_isa_irq_to_gsi(0)) && pt_active(&d->arch.vpit.pt0) &&
+             vlapic_match_dest(lapic0, NULL, 0, dest, dest_mode) &&
+             /* Mimic the vlapic_enabled check found in vlapic_lowest_prio. */
+             vlapic_enabled(lapic0) )
+            target = lapic0;
         else
 #endif
             target = vlapic_lowest_prio(d, NULL, 0, dest, dest_mode);
-- 
2.27.0



From xen-devel-bounces@lists.xenproject.org Mon Jul 27 17:06:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jul 2020 17:06:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k06Zc-0006r0-Hu; Mon, 27 Jul 2020 17:06:00 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1WzZ=BG=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1k06Zc-0006qK-7U
 for xen-devel@lists.xenproject.org; Mon, 27 Jul 2020 17:06:00 +0000
X-Inumbo-ID: 6f1647b4-d02b-11ea-a7b1-12813bfff9fa
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6f1647b4-d02b-11ea-a7b1-12813bfff9fa;
 Mon, 27 Jul 2020 17:05:55 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595869555;
 h=from:to:cc:subject:date:message-id:in-reply-to:
 references:mime-version:content-transfer-encoding;
 bh=9BA2HuY2XfFbO7rOd5htwjlC8tyCla4CvMSRDDan0zM=;
 b=hvHbfeSU5EghLHXL0RE4MfWuUjtKOxBqV8nRz33X9lmu942Mmx3W0MQl
 NXSNgPZoG7VmOzuF64RdN7Q7qTHkngnSu3b1W22L8Z+I9RJ4zNuHVTsCf
 kfx8cpwXaxW0jwegLsYdCQ9jEyLHhSyb0+eRoQtwiRbgVaW0cNNPWVRtZ c=;
Authentication-Results: esa6.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: Cn+VFplSK30A9ca43AE3YL9md4WrnPn8Hd8a+AWow0raSKjK6EGyBpPdmEAToQ+UOrkXTHUPwr
 0xkgbKh5R/r6OQ/Yt/bZtjewBYXoXMkehllRyFl9N4vJKukDB4zyoGt/tzExtPRE/tf7Q8rHGY
 3YzHsq7bBlo0iHYFpkLbALQbdYVKqCC9hreyMI9J1GIiy93wKQdv+uGJiVpsLw4MNlXozw2oJQ
 PkGLS+mHAkzhteDah9IoEOTqheAA83On1BKqcsuj9MA/yrEcKnHO/rm9UlBHNIGm8MblLga+J4
 sF0=
X-SBRS: 2.7
X-MesageID: 23610699
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,402,1589256000"; d="scan'208";a="23610699"
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xenproject.org>
Subject: [PATCH v2 2/5] x86/hvm: don't force vCPU 0 for IRQ 0 when using fixed
 destination mode
Date: Mon, 27 Jul 2020 19:05:36 +0200
Message-ID: <20200727170539.55798-3-roger.pau@citrix.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20200727170539.55798-1-roger.pau@citrix.com>
References: <20200727170539.55798-1-roger.pau@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Jan Beulich <jbeulich@suse.com>, Roger Pau Monne <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

When the IO APIC pin mapped to ISA IRQ 0 has been configured to
use fixed delivery mode, do not forcefully route interrupts to vCPU 0,
as the OS might have set up those interrupts to be injected to a
different vCPU. Injecting to vCPU 0 can then cause the OS to miss
such interrupts, or trigger errors due to unexpected vectors being
injected on vCPU 0.

In order to fix this, remove such handling altogether for fixed
destination mode pins and just inject them according to the data set
up in the IO-APIC entry.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
 xen/arch/x86/hvm/vioapic.c | 23 ++++-------------------
 1 file changed, 4 insertions(+), 19 deletions(-)

diff --git a/xen/arch/x86/hvm/vioapic.c b/xen/arch/x86/hvm/vioapic.c
index b00037ea87..123191db75 100644
--- a/xen/arch/x86/hvm/vioapic.c
+++ b/xen/arch/x86/hvm/vioapic.c
@@ -438,26 +438,11 @@ static void vioapic_deliver(struct hvm_vioapic *vioapic, unsigned int pin)
     }
 
     case dest_Fixed:
-    {
-#ifdef IRQ0_SPECIAL_ROUTING
-        /* Do not deliver timer interrupts to VCPU != 0 */
-        if ( (irq == hvm_isa_irq_to_gsi(0)) && pt_active(&d->arch.vpit.pt0) )
-        {
-            if ( (v = d->vcpu ? d->vcpu[0] : NULL) != NULL )
-                ioapic_inj_irq(vioapic, vcpu_vlapic(v), vector,
-                               trig_mode, delivery_mode);
-        }
-        else
-#endif
-        {
-            for_each_vcpu ( d, v )
-                if ( vlapic_match_dest(vcpu_vlapic(v), NULL,
-                                       0, dest, dest_mode) )
-                    ioapic_inj_irq(vioapic, vcpu_vlapic(v), vector,
-                                   trig_mode, delivery_mode);
-        }
+        for_each_vcpu ( d, v )
+            if ( vlapic_match_dest(vcpu_vlapic(v), NULL, 0, dest, dest_mode) )
+                ioapic_inj_irq(vioapic, vcpu_vlapic(v), vector, trig_mode,
+                               delivery_mode);
         break;
-    }
 
     case dest_NMI:
     {
-- 
2.27.0



From xen-devel-bounces@lists.xenproject.org Mon Jul 27 17:06:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jul 2020 17:06:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k06ZZ-0006qP-5r; Mon, 27 Jul 2020 17:05:57 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1WzZ=BG=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1k06ZX-0006qK-Dz
 for xen-devel@lists.xenproject.org; Mon, 27 Jul 2020 17:05:55 +0000
X-Inumbo-ID: 6dc71d84-d02b-11ea-a7b1-12813bfff9fa
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6dc71d84-d02b-11ea-a7b1-12813bfff9fa;
 Mon, 27 Jul 2020 17:05:53 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595869553;
 h=from:to:cc:subject:date:message-id:in-reply-to:
 references:mime-version:content-transfer-encoding;
 bh=Up0va2PHL+5mIou+IC4Y87q4ak8L8JXy1B8SeD1Qmuo=;
 b=H/6TthRiqEImLObwWiNtt7rj7iwF8BAlcLZNBs3K0NP5EGL3p3H9RVYb
 90yDbpYkf8UpZnuVqWfI/W1NtFth4HM64O1bKYJphkUctRURBK2EaWecv
 VY3B/lky/VmmXtvdv4HTbTjagTAwge3x8wZEDSId5M79Lg4BiGw+jwKa0 A=;
Authentication-Results: esa5.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: 3NyJ2mKrVC5cGwM/IZpfLlAZeMKYPgseDdHEdrtodAuhDkLYDbi6eSn0FW/qhovh51HuBK0BGf
 MTJbaOvYa9NIe1o5Ry3qTfE72NiTRpA9PcEO0SPSm63l52XSGSpEHLIcPg0Ej1BrtcpWy7VHWq
 wr00foe33T3x+7UtxwGQf9PYmMFHAhb5HI03Aj2xB5FQrzt8uOQjvNS+4UNekypK1gUuOcfVFy
 rfqCoJmIMWFNU+UH65802BZYo4QvLQqOtCWpfReV/yO/UAHxqmHcV3Rrdo4RaDzx5HskJ7yMTk
 nQE=
X-SBRS: 2.7
X-MesageID: 23469011
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,402,1589256000"; d="scan'208";a="23469011"
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xenproject.org>
Subject: [PATCH v2 1/5] x86/hvm: fix vIO-APIC build without
 IRQ0_SPECIAL_ROUTING
Date: Mon, 27 Jul 2020 19:05:35 +0200
Message-ID: <20200727170539.55798-2-roger.pau@citrix.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20200727170539.55798-1-roger.pau@citrix.com>
References: <20200727170539.55798-1-roger.pau@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Jan Beulich <jbeulich@suse.com>, Roger Pau Monne <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

pit_channel0_enabled needs to be guarded with IRQ0_SPECIAL_ROUTING
since it's only used when the special handling of ISA IRQ 0 is
enabled. However, since the helper is a single line, it's better to
just inline it directly in vioapic_deliver where it's used.

No functional change.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
Changes since v1:
 - Remove pit_channel0_enabled altogether.
---
 xen/arch/x86/hvm/vioapic.c | 9 ++-------
 1 file changed, 2 insertions(+), 7 deletions(-)

diff --git a/xen/arch/x86/hvm/vioapic.c b/xen/arch/x86/hvm/vioapic.c
index b87facb0e0..b00037ea87 100644
--- a/xen/arch/x86/hvm/vioapic.c
+++ b/xen/arch/x86/hvm/vioapic.c
@@ -391,11 +391,6 @@ static void ioapic_inj_irq(
     vlapic_set_irq(target, vector, trig_mode);
 }
 
-static inline int pit_channel0_enabled(void)
-{
-    return pt_active(&current->domain->arch.vpit.pt0);
-}
-
 static void vioapic_deliver(struct hvm_vioapic *vioapic, unsigned int pin)
 {
     uint16_t dest = vioapic->redirtbl[pin].fields.dest_id;
@@ -421,7 +416,7 @@ static void vioapic_deliver(struct hvm_vioapic *vioapic, unsigned int pin)
     {
 #ifdef IRQ0_SPECIAL_ROUTING
         /* Force round-robin to pick VCPU 0 */
-        if ( (irq == hvm_isa_irq_to_gsi(0)) && pit_channel0_enabled() )
+        if ( (irq == hvm_isa_irq_to_gsi(0)) && pt_active(&d->arch.vpit.pt0) )
         {
             v = d->vcpu ? d->vcpu[0] : NULL;
             target = v ? vcpu_vlapic(v) : NULL;
@@ -446,7 +441,7 @@ static void vioapic_deliver(struct hvm_vioapic *vioapic, unsigned int pin)
     {
 #ifdef IRQ0_SPECIAL_ROUTING
         /* Do not deliver timer interrupts to VCPU != 0 */
-        if ( (irq == hvm_isa_irq_to_gsi(0)) && pit_channel0_enabled() )
+        if ( (irq == hvm_isa_irq_to_gsi(0)) && pt_active(&d->arch.vpit.pt0) )
         {
             if ( (v = d->vcpu ? d->vcpu[0] : NULL) != NULL )
                 ioapic_inj_irq(vioapic, vcpu_vlapic(v), vector,
-- 
2.27.0



From xen-devel-bounces@lists.xenproject.org Mon Jul 27 17:06:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jul 2020 17:06:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k06Zf-0006s7-Rz; Mon, 27 Jul 2020 17:06:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1WzZ=BG=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1k06Ze-0006s0-Vf
 for xen-devel@lists.xenproject.org; Mon, 27 Jul 2020 17:06:03 +0000
X-Inumbo-ID: 72e92cc6-d02b-11ea-8ad9-bc764e2007e4
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 72e92cc6-d02b-11ea-8ad9-bc764e2007e4;
 Mon, 27 Jul 2020 17:06:02 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595869562;
 h=from:to:cc:subject:date:message-id:in-reply-to:
 references:mime-version:content-transfer-encoding;
 bh=dFV/5IIieWSOyYLpuMea63bzinjrU9UWYgTJ7LkynSg=;
 b=QN/r0ryMllnh41ZJKvznUKbIbis/5iIlLxAEBhsmugf+38kyo0ZAJPbD
 Ivv9H5uc2zmJ3IxCwawAQJ3QD3BMFpf1cL2Yz3KCORa/4DAKhu6bNbbhQ
 XMY6Z6XYyWzDBaArK3wTZwW+WRi3uxeYxG1lOd+vBU59emlFmlwf5rVwU M=;
Authentication-Results: esa5.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: yX0Ucm7OmM9ftMh86I8MenFv19Jl15favDXMS1gmwiOs726c9E7XWQ6DfS6NhCbf7Yx5kJlvun
 ICR0PpJB8CmlbWKP8a2CS8m4x4dRvg9TZATeU3UTnouR72F0u4sS++/mxdQ+m9xRGC+WuANi63
 ANe5dWZrpj/Ur3UEuG74BTf4G92WneT9hdmOqi4hVFVnKIAeX9V+bbrWXru+MyXroPqGn9HBU0
 1lKGaH01YRIvIe7PhSUGzPYHchfzMuzPzCmw6J7WYHBbkpEGiJ7vDxc2s9Pk76huPiTWGQypCD
 35k=
X-SBRS: 2.7
X-MesageID: 23469017
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,402,1589256000"; d="scan'208";a="23469017"
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xenproject.org>
Subject: [PATCH v2 5/5] x86/hvm: only translate ISA interrupts to GSIs in
 virtual timers
Date: Mon, 27 Jul 2020 19:05:39 +0200
Message-ID: <20200727170539.55798-6-roger.pau@citrix.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20200727170539.55798-1-roger.pau@citrix.com>
References: <20200727170539.55798-1-roger.pau@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Jan Beulich <jbeulich@suse.com>, Roger Pau Monne <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Only call hvm_isa_irq_to_gsi for ISA interrupts; interrupts
originating from an IO APIC pin already use a GSI and don't need to be
translated.

I haven't observed any issues from this, but I think it's better to
use it correctly.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
Changes since v1:
 - Delay the setting of gsi a bit.
---
 xen/arch/x86/hvm/vpt.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/arch/x86/hvm/vpt.c b/xen/arch/x86/hvm/vpt.c
index 62c87867c5..c68bbd1558 100644
--- a/xen/arch/x86/hvm/vpt.c
+++ b/xen/arch/x86/hvm/vpt.c
@@ -86,13 +86,13 @@ static int pt_irq_vector(struct periodic_time *pt, enum hvm_intsrc src)
         return pt->irq;
 
     isa_irq = pt->irq;
-    gsi = hvm_isa_irq_to_gsi(isa_irq);
 
     if ( src == hvm_intsrc_pic )
         return (v->domain->arch.hvm.vpic[isa_irq >> 3].irq_base
                 + (isa_irq & 7));
 
     ASSERT(src == hvm_intsrc_lapic);
+    gsi = pt->source == PTSRC_isa ? hvm_isa_irq_to_gsi(isa_irq) : pt->irq;
     vector = vioapic_get_vector(v->domain, gsi);
     if ( vector < 0 )
     {
-- 
2.27.0



From xen-devel-bounces@lists.xenproject.org Mon Jul 27 17:06:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jul 2020 17:06:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k06Zn-0006uz-Ef; Mon, 27 Jul 2020 17:06:11 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1WzZ=BG=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1k06Zm-0006qK-89
 for xen-devel@lists.xenproject.org; Mon, 27 Jul 2020 17:06:10 +0000
X-Inumbo-ID: 71a883fc-d02b-11ea-a7b1-12813bfff9fa
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 71a883fc-d02b-11ea-a7b1-12813bfff9fa;
 Mon, 27 Jul 2020 17:06:00 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595869560;
 h=from:to:cc:subject:date:message-id:in-reply-to:
 references:mime-version:content-transfer-encoding;
 bh=+UrtDnq8F4eKIjcjNHn8k54mlLah4ENCqM3HBW0OiIE=;
 b=gRRbpzVUawqj5TxHdGBpiyB8+CjxxNXjIzboe1SnfvadfwT5M2EsUUwh
 Kjg1TfG5DXv03E9UiHCLg5gwGE+PuJPipuXRFkvSaDxs/90gC5UqXrK+Z
 cEQ+E8J8ALjVCfibQqSvUmp39D1ufBhZjR5qnExFxGBL6n75swfD9TgyU s=;
Authentication-Results: esa4.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: 2dtOmp8c2Nwrd+LUQ4J9RPBxhaEZfrC856jhgoQ+7Bw5E1CyN4gbGYR9OyI4x1PcqlfV58Rzqv
 3BYXLDz/Ea9OY0JQoggv3zA2atPYZ4/eDbCaHnUWPnMKbItTX+9PXbzrDbNwarHC5a6JGoSYAx
 ula5+FWpHzijYGbT0T7FlaRq/o37xxIxQfcvkOcVEsVZcjOL/NJW//1i4FwzODJ1sqz9KMRvFz
 cPShm+jK6lXrdXtwg916Ne5pDATSeQQdQMXceGKkZia2/X8QR2tv6kFoIVuxJnwHwC7GCsXlNQ
 U/s=
X-SBRS: 2.7
X-MesageID: 24144262
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,402,1589256000"; d="scan'208";a="24144262"
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xenproject.org>
Subject: [PATCH v2 4/5] x86/vpt: only try to resume timers belonging to
 enabled devices
Date: Mon, 27 Jul 2020 19:05:38 +0200
Message-ID: <20200727170539.55798-5-roger.pau@citrix.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20200727170539.55798-1-roger.pau@citrix.com>
References: <20200727170539.55798-1-roger.pau@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Jan Beulich <jbeulich@suse.com>, Roger Pau Monne <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Check whether the emulated device is actually enabled before trying to
resume the associated timers.

Thankfully all those structures are zeroed at initialization, and
since the devices are not enabled they are never populated, so the
pt->vcpu check at the beginning of pt_resume triggers and forces an
early exit from the function.

While there, limit the scope of i and make it unsigned.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
 xen/arch/x86/hvm/vpt.c | 17 +++++++++++------
 1 file changed, 11 insertions(+), 6 deletions(-)

diff --git a/xen/arch/x86/hvm/vpt.c b/xen/arch/x86/hvm/vpt.c
index 47f2c2aa64..62c87867c5 100644
--- a/xen/arch/x86/hvm/vpt.c
+++ b/xen/arch/x86/hvm/vpt.c
@@ -636,14 +636,19 @@ static void pt_resume(struct periodic_time *pt)
 
 void pt_may_unmask_irq(struct domain *d, struct periodic_time *vlapic_pt)
 {
-    int i;
-
     if ( d )
     {
-        pt_resume(&d->arch.vpit.pt0);
-        pt_resume(&d->arch.hvm.pl_time->vrtc.pt);
-        for ( i = 0; i < HPET_TIMER_NUM; i++ )
-            pt_resume(&d->arch.hvm.pl_time->vhpet.pt[i]);
+        if ( has_vpit(d) )
+            pt_resume(&d->arch.vpit.pt0);
+        if ( has_vrtc(d) )
+            pt_resume(&d->arch.hvm.pl_time->vrtc.pt);
+        if ( has_vhpet(d) )
+        {
+            unsigned int i;
+
+            for ( i = 0; i < HPET_TIMER_NUM; i++ )
+                pt_resume(&d->arch.hvm.pl_time->vhpet.pt[i]);
+        }
     }
 
     if ( vlapic_pt )
-- 
2.27.0



From xen-devel-bounces@lists.xenproject.org Mon Jul 27 17:44:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jul 2020 17:44:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k07An-0002Ep-Mv; Mon, 27 Jul 2020 17:44:25 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=qU+V=BG=citrix.com=george.dunlap@srs-us1.protection.inumbo.net>)
 id 1k07Am-0002Ek-PB
 for xen-devel@lists.xenproject.org; Mon, 27 Jul 2020 17:44:24 +0000
X-Inumbo-ID: cea9cd4a-d030-11ea-a7ba-12813bfff9fa
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id cea9cd4a-d030-11ea-a7ba-12813bfff9fa;
 Mon, 27 Jul 2020 17:44:23 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595871863;
 h=from:to:cc:subject:date:message-id:references:
 in-reply-to:content-id:content-transfer-encoding: mime-version;
 bh=bJ0kaQduqDVF+77W+mBhRVvoJakxF9BxcsDihqyVBuI=;
 b=JJ+qHXr6RFrUFILq8XLXMk7WOUeR/5vbU8ntWxP53q571gDuRcDsodvH
 05L1s3E5uf9xkJQOg3QJXgaQBQcBQ07oKYYvPzv3NokbXhjTUiNb2m4Dc
 p5UKdGaA+9gwaDeejv3q/4Q2LrvHIMmUiKC/zUXQTEPeHi7TQtX3DhzuD w=;
Authentication-Results: esa4.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: sYkvvr2wlyKV7bIgqh3sIKkYlYj28T/iMdh0Q5O7voTAIyZ/K/YtKKy0ZtwZ6k/C5qDJeoTIBv
 9ugxTSuyvIgbgmj1gNLHxnEqGVdWRbc6rbYceqbuZQOfXlYY+4p/0zlunruFSRSeh0IZyznV3c
 pwwNhNPG4n+JEjJWxZHz5fvIoJ3MaciGuefcylx6wgx4H+L+enFJMug73lDBAXM2jjGNNhZqPO
 u6H+PGdMVu3emFjvr8MjNkxcpG69mnwwU9jWW6qmXwdo4F57frDAkSC4wwGQ81SOj/+CmlUWpM
 7rc=
X-SBRS: 2.7
X-MesageID: 24147597
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,403,1589256000"; d="scan'208";a="24147597"
From: George Dunlap <George.Dunlap@citrix.com>
To: Ian Jackson <Ian.Jackson@citrix.com>
Subject: Re: [OSSTEST PATCH 14/14] duration_estimator: Move duration query
 loop into database
Thread-Topic: [OSSTEST PATCH 14/14] duration_estimator: Move duration query
 loop into database
Thread-Index: AQHWX5IXwUKXL54pjkKddmBXrxrv3qkbmlIA
Date: Mon, 27 Jul 2020 17:43:54 +0000
Message-ID: <7A4B6786-4456-44E4-A85D-9CC83B522FBB@citrix.com>
References: <20200721184205.15232-1-ian.jackson@eu.citrix.com>
 <20200721184205.15232-15-ian.jackson@eu.citrix.com>
In-Reply-To: <20200721184205.15232-15-ian.jackson@eu.citrix.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-mailer: Apple Mail (2.3608.80.23.2.2)
x-ms-exchange-messagesentrepresentingtype: 1
x-ms-exchange-transport-fromentityheader: Hosted
Content-Type: text/plain; charset="utf-8"
Content-ID: <568C883840095F499A78DF2F162C85AA@citrix.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

DQoNCj4gT24gSnVsIDIxLCAyMDIwLCBhdCA3OjQyIFBNLCBJYW4gSmFja3NvbiA8aWFuLmphY2tz
b25AZXUuY2l0cml4LmNvbT4gd3JvdGU6DQo+IA0KPiBTdHVmZiB0aGUgdHdvIHF1ZXJpZXMgdG9n
ZXRoZXI6IHdlIHVzZSB0aGUgZmlyc3R5IHF1ZXJ5IGFzIGEgV0lUSA0KPiBjbGF1c2UuICBUaGlz
IGlzIHNpZ25pZmljYW50bHkgZmFzdGVyLCBwZXJoYXBzIGJlY2F1c2UgdGhlIHF1ZXJ5DQo+IG9w
dGltaXNlciBkb2VzIGEgYmV0dGVyIGpvYiBidXQgcHJvYmFibHkganVzdCBiZWNhdXNlIGl0IHNh
dmVzIG9uDQo+IHJvdW5kIHRyaXBzLg0KPiANCj4gTm8gZnVuY3Rpb25hbCBjaGFuZ2UuDQo+IA0K
PiBQZXJmOiBzdWJqZWN0aXZlbHkgdGhpcyBzZWVtZWQgdG8gaGVscCB3aGVuIHRoZSBjYWNoZSB3
YXMgY29sZC4gIE5vdyBJDQo+IGhhdmUgYSB3YXJtIGNhY2hlIGFuZCBpdCBkb2Vzbid0IHNlZW0g
dG8gbWFrZSBtdWNoIGRpZmZlcmVuY2UuDQo+IA0KPiBQZXJmOiBydW50aW1lIG9mIG15IHRlc3Qg
Y2FzZSBub3cgfjUtN3MuDQo+IA0KPiBFeGFtcGxlIHF1ZXJpZXMgYmVmb3JlIChmcm9tIHRoZSBk
ZWJ1Z2dpbmcgb3V0cHV0KToNCj4gDQo+IFF1ZXJ5IEEgcGFydCBJOg0KPiANCj4gICAgICAgICAg
ICBTRUxFQ1QgZi5mbGlnaHQgQVMgZmxpZ2h0LA0KPiAgICAgICAgICAgICAgICAgICBqLmpvYiBB
UyBqb2IsDQo+ICAgICAgICAgICAgICAgICAgIGYuc3RhcnRlZCBBUyBzdGFydGVkLA0KPiAgICAg
ICAgICAgICAgICAgICBqLnN0YXR1cyBBUyBzdGF0dXMNCj4gICAgICAgICAgICAgICAgICAgICBG
Uk9NIGZsaWdodHMgZg0KPiAgICAgICAgICAgICAgICAgICAgIEpPSU4gam9icyBqIFVTSU5HIChm
bGlnaHQpDQo+ICAgICAgICAgICAgICAgICAgICAgSk9JTiBydW52YXJzIHINCj4gICAgICAgICAg
ICAgICAgICAgICAgICAgICAgIE9OICBmLmZsaWdodD1yLmZsaWdodA0KPiAgICAgICAgICAgICAg
ICAgICAgICAgICAgICBBTkQgIHIubmFtZT0/DQo+ICAgICAgICAgICAgICAgICAgICBXSEVSRSAg
ai5qb2I9ci5qb2INCg0KRGlkIHRoZXNlIGxhc3QgdHdvIGdldCBtaXhlZCB1cD8gIE15IGxpbWl0
ZWQgZXhwZXJpZW5jZSB3LyBKT0lOIE9OIGFuZCBXSEVSRSB3b3VsZCBsZWFkIG1lIHRvIGV4cGVj
dCB3ZeKAmXJlIGpvaW5pbmcgb24gYGYuZmxpZ2h0PXIuZmxpZ2h0IGFuZCByLmpvYiA9IGouam9i
YCwgYW5kIGhhdmluZyBgci5uYW1lID0gP2AgYXMgcGFydCBvZiB0aGUgV0hFUkUgY2xhdXNlLiAg
SSBzZWUgaXTigJlzIHRoZSBzYW1lIGluIHRoZSBjb21iaW5lZCBxdWVyeSBhcyB3ZWxsLg0KDQo+
ICAgICAgICAgICAgICAgICAgICAgIEFORCAgZi5ibGVzc2luZz0/DQo+ICAgICAgICAgICAgICAg
ICAgICAgIEFORCAgZi5icmFuY2g9Pw0KPiAgICAgICAgICAgICAgICAgICAgICBBTkQgIGouam9i
PT8NCj4gICAgICAgICAgICAgICAgICAgICAgQU5EICByLnZhbD0/DQo+ICAgICAgICAgICAgICAg
ICAgICAgIEFORCAgKGouc3RhdHVzPSdwYXNzJyBPUiBqLnN0YXR1cz0nZmFpbCcNCj4gICAgICAg
ICAgICAgICAgICAgICAgICAgICBPUiBqLnN0YXR1cz0ndHJ1bmNhdGVkJyEpDQo+ICAgICAgICAg
ICAgICAgICAgICAgIEFORCAgZi5zdGFydGVkIElTIE5PVCBOVUxMDQo+ICAgICAgICAgICAgICAg
ICAgICAgIEFORCAgZi5zdGFydGVkID49ID8NCj4gICAgICAgICAgICAgICAgIE9SREVSIEJZIGYu
c3RhcnRlZCBERVNDDQo+IA0KPiBXaXRoIGJpbmQgdmFyaWFibGVzOg0KPiAgICAgInRlc3QtYW1k
NjQtaTM4Ni14bC1wdnNoaW0iDQo+ICAgICAiZ3Vlc3Qtc3RhcnQiDQo+IA0KPiBRdWVyeSBCIHBh
cnQgSToNCj4gDQo+ICAgICAgICAgICAgU0VMRUNUIGYuZmxpZ2h0IEFTIGZsaWdodCwNCj4gICAg
ICAgICAgICAgICAgICAgcy5qb2IgQVMgam9iLA0KPiAgICAgICAgICAgICAgICAgICBOVUxMIGFz
IHN0YXJ0ZWQsDQo+ICAgICAgICAgICAgICAgICAgIE5VTEwgYXMgc3RhdHVzLA0KPiAgICAgICAg
ICAgICAgICAgICBtYXgocy5maW5pc2hlZCkgQVMgbWF4X2ZpbmlzaGVkDQo+ICAgICAgICAgICAg
ICAgICAgICAgIEZST00gc3RlcHMgcyBKT0lOIGZsaWdodHMgZg0KPiAgICAgICAgICAgICAgICAg
ICAgICAgIE9OIHMuZmxpZ2h0PWYuZmxpZ2h0DQo+ICAgICAgICAgICAgICAgICAgICAgV0hFUkUg
cy5qb2I9PyBBTkQgZi5ibGVzc2luZz0/IEFORCBmLmJyYW5jaD0/DQo+ICAgICAgICAgICAgICAg
ICAgICAgICBBTkQgcy5maW5pc2hlZCBJUyBOT1QgTlVMTA0KPiAgICAgICAgICAgICAgICAgICAg
ICAgQU5EIGYuc3RhcnRlZCBJUyBOT1QgTlVMTA0KPiAgICAgICAgICAgICAgICAgICAgICAgQU5E
IGYuc3RhcnRlZCA+PSA/DQo+ICAgICAgICAgICAgICAgICAgICAgR1JPVVAgQlkgZi5mbGlnaHQs
IHMuam9iDQo+ICAgICAgICAgICAgICAgICAgICAgT1JERVIgQlkgbWF4X2ZpbmlzaGVkIERFU0MN
Cj4gDQo+IFdpdGggYmluZCB2YXJpYWJsZXM6DQo+ICAgICJ0ZXN0LWFybWhmLWFybWhmLWxpYnZp
cnQiDQo+ICAgICdyZWFsJw0KPiAgICAieGVuLXVuc3RhYmxlIg0KPiAgICAxNTk0MTQ0NDY5DQo+
IA0KPiBRdWVyeSBjb21tb24gcGFydCBJSToNCj4gDQo+ICAgICAgICBXSVRIIHRzdGVwcyBBUw0K
PiAgICAgICAgKA0KPiAgICAgICAgICAgIFNFTEVDVCAqDQo+ICAgICAgICAgICAgICBGUk9NIHN0
ZXBzDQo+ICAgICAgICAgICAgIFdIRVJFIGZsaWdodD0/IEFORCBqb2I9Pw0KPiAgICAgICAgKQ0K
PiAgICAgICAgLCB0c3RlcHMyIEFTDQo+ICAgICAgICAoDQo+ICAgICAgICAgICAgU0VMRUNUICoN
Cj4gICAgICAgICAgICAgIEZST00gdHN0ZXBzDQo+ICAgICAgICAgICAgIFdIRVJFIGZpbmlzaGVk
IDw9DQo+ICAgICAgICAgICAgICAgICAgICAgKFNFTEVDVCBmaW5pc2hlZA0KPiAgICAgICAgICAg
ICAgICAgICAgICAgIEZST00gdHN0ZXBzDQo+ICAgICAgICAgICAgICAgICAgICAgICBXSEVSRSB0
c3RlcHMudGVzdGlkID0gPykNCj4gICAgICAgICkNCj4gICAgICAgIFNFTEVDVCAoDQo+ICAgICAg
ICAgICAgU0VMRUNUIG1heChmaW5pc2hlZCktbWluKHN0YXJ0ZWQpDQo+ICAgICAgICAgICAgICBG
Uk9NIHRzdGVwczINCj4gICAgICAgICAgKSAtICgNCj4gICAgICAgICAgICBTRUxFQ1Qgc3VtKGZp
bmlzaGVkLXN0YXJ0ZWQpDQo+ICAgICAgICAgICAgICBGUk9NIHRzdGVwczINCj4gICAgICAgICAg
ICAgV0hFUkUgc3RlcCA9ICd0cy1ob3N0cy1hbGxvY2F0ZScNCj4gICAgICAgICAgKQ0KPiAgICAg
ICAgICAgICAgICBBUyBkdXJhdGlvbg0KDQpFciwgd2FpdCDigJQgeW91IHdlcmUgZG9pbmcgYSBz
ZXBhcmF0ZSBgZHVyYXRpb25gIHF1ZXJ5IGZvciBlYWNoIHJvdyBvZiB0aGUgcHJldmlvdXMgcXVl
cnk/ICBZZWFoLCB0aGF0IHNvdW5kcyBsaWtlIGl0IGNvdWxkIGJlIGEgbG90IG9mIHJvdW5kIHRy
aXBzLiA6LSkNCg0KPiANCj4gV2l0aCBiaW5kIHZhcmlhYmxlcyBmcm9tIHByZXZpb3VzIHF1ZXJ5
LCBlZzoNCj4gICAgIDE1MjA0NQ0KPiAgICAgInRlc3QtYXJtaGYtYXJtaGYtbGlidmlydCINCj4g
ICAgICJndWVzdC1zdGFydC4yIg0KPiANCj4gQWZ0ZXI6DQo+IA0KPiBRdWVyeSBBIChjb21iaW5l
ZCk6DQo+IA0KPiAgICAgICAgICAgIFdJVEggZiBBUyAoDQo+ICAgICAgICAgICAgU0VMRUNUIGYu
ZmxpZ2h0IEFTIGZsaWdodCwNCj4gICAgICAgICAgICAgICAgICAgai5qb2IgQVMgam9iLA0KPiAg
ICAgICAgICAgICAgICAgICBmLnN0YXJ0ZWQgQVMgc3RhcnRlZCwNCj4gICAgICAgICAgICAgICAg
ICAgai5zdGF0dXMgQVMgc3RhdHVzDQo+ICAgICAgICAgICAgICAgICAgICAgRlJPTSBmbGlnaHRz
IGYNCj4gICAgICAgICAgICAgICAgICAgICBKT0lOIGpvYnMgaiBVU0lORyAoZmxpZ2h0KQ0KPiAg
ICAgICAgICAgICAgICAgICAgIEpPSU4gcnVudmFycyByDQo+ICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICBPTiAgZi5mbGlnaHQ9ci5mbGlnaHQNCj4gICAgICAgICAgICAgICAgICAgICAgICAg
ICAgQU5EICByLm5hbWU9Pw0KPiAgICAgICAgICAgICAgICAgICAgV0hFUkUgIGouam9iPXIuam9i
DQo+ICAgICAgICAgICAgICAgICAgICAgIEFORCAgZi5ibGVzc2luZz0/DQo+ICAgICAgICAgICAg
ICAgICAgICAgIEFORCAgZi5icmFuY2g9Pw0KPiAgICAgICAgICAgICAgICAgICAgICBBTkQgIGou
am9iPT8NCj4gICAgICAgICAgICAgICAgICAgICAgQU5EICByLnZhbD0/DQo+ICAgICAgICAgICAg
ICAgICAgICAgIEFORCAgKGouc3RhdHVzPSdwYXNzJyBPUiBqLnN0YXR1cz0nZmFpbCcNCj4gICAg
ICAgICAgICAgICAgICAgICAgICAgICBPUiBqLnN0YXR1cz0ndHJ1bmNhdGVkJyEpDQo+ICAgICAg
ICAgICAgICAgICAgICAgIEFORCAgZi5zdGFydGVkIElTIE5PVCBOVUxMDQo+ICAgICAgICAgICAg
ICAgICAgICAgIEFORCAgZi5zdGFydGVkID49ID8NCj4gICAgICAgICAgICAgICAgIE9SREVSIEJZ
IGYuc3RhcnRlZCBERVNDDQo+IA0KPiAgICAgICAgICAgICkNCj4gICAgICAgICAgICBTRUxFQ1Qg
ZmxpZ2h0LCBtYXhfZmluaXNoZWQsIGpvYiwgc3RhcnRlZCwgc3RhdHVzLA0KPiAgICAgICAgICAg
ICgNCj4gICAgICAgIFdJVEggdHN0ZXBzIEFTDQo+ICAgICAgICAoDQo+ICAgICAgICAgICAgU0VM
RUNUICoNCj4gICAgICAgICAgICAgIEZST00gc3RlcHMNCj4gICAgICAgICAgICAgV0hFUkUgZmxp
Z2h0PWYuZmxpZ2h0IEFORCBqb2I9Zi5qb2INCj4gICAgICAgICkNCj4gICAgICAgICwgdHN0ZXBz
MiBBUw0KPiAgICAgICAgKA0KPiAgICAgICAgICAgIFNFTEVDVCAqDQo+ICAgICAgICAgICAgICBG
Uk9NIHRzdGVwcw0KPiAgICAgICAgICAgICBXSEVSRSBmaW5pc2hlZCA8PQ0KPiAgICAgICAgICAg
ICAgICAgICAgIChTRUxFQ1QgZmluaXNoZWQNCj4gICAgICAgICAgICAgICAgICAgICAgICBGUk9N
IHRzdGVwcw0KPiAgICAgICAgICAgICAgICAgICAgICAgV0hFUkUgdHN0ZXBzLnRlc3RpZCA9ID8p
DQo+ICAgICAgICApDQo+ICAgICAgICBTRUxFQ1QgKA0KPiAgICAgICAgICAgIFNFTEVDVCBtYXgo
ZmluaXNoZWQpLW1pbihzdGFydGVkKQ0KPiAgICAgICAgICAgICAgRlJPTSB0c3RlcHMyDQo+ICAg
ICAgICAgICkgLSAoDQo+ICAgICAgICAgICAgU0VMRUNUIHN1bShmaW5pc2hlZC1zdGFydGVkKQ0K
PiAgICAgICAgICAgICAgRlJPTSB0c3RlcHMyDQo+ICAgICAgICAgICAgIFdIRVJFIHN0ZXAgPSAn
dHMtaG9zdHMtYWxsb2NhdGUnDQo+ICAgICAgICAgICkNCj4gICAgICAgICAgICAgICAgQVMgZHVy
YXRpb24NCj4gDQo+ICAgICAgICAgICAgKSBGUk9NIGYNCg0KSSBtZWFuLCBpbiBib3RoIHF1ZXJp
ZXMgKEEgYW5kIEIpLCB0aGUgdHJhbnNmb3JtIHNob3VsZCBiYXNpY2FsbHkgcmVzdWx0IGluIHRo
ZSBzYW1lIHRoaW5nIGhhcHBlbmluZywgYXMgZmFyIGFzIEkgY2FuIHRlbGwuDQoNCkkgY2FuIHRy
eSB0byBhbmFseXplIHRoZSBkdXJhdGlvbiBxdWVyeSBhbmQgc2VlIGlmIEkgY2FuIGNvbWUgdXAg
d2l0aCBhbnkgc3VnZ2VzdGlvbnMsIGJ1dCB0aGF0IHdvdWxkIGJlIGEgZGlmZmVyZW50IHBhdGNo
IGFueXdheS4NCg0KIC1HZW9yZ2UNCg0K


From xen-devel-bounces@lists.xenproject.org Mon Jul 27 19:31:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jul 2020 19:31:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k08qG-0003D1-92; Mon, 27 Jul 2020 19:31:20 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=xfpx=BG=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1k08qF-0003Cw-PV
 for xen-devel@lists.xenproject.org; Mon, 27 Jul 2020 19:31:19 +0000
X-Inumbo-ID: be5ddd8c-d03f-11ea-a7ef-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id be5ddd8c-d03f-11ea-a7ef-12813bfff9fa;
 Mon, 27 Jul 2020 19:31:18 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 71EEEB17A;
 Mon, 27 Jul 2020 19:31:28 +0000 (UTC)
Subject: Re: [PATCH v7 03/15] x86/mm: rewrite virt_to_xen_l*e
To: Hongyan Xia <hx242@xen.org>
References: <cover.1590750232.git.hongyxia@amazon.com>
 <fd5d98198d9539b232a570a83e7a24be2407e739.1590750232.git.hongyxia@amazon.com>
 <826d5a28-c391-dd30-d588-6f730b454c18@suse.com>
 <bbd18a2f7d86d451f529292c627616044955a84c.camel@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <4827e2f5-eac4-fc9b-b206-e6443213652c@suse.com>
Date: Mon, 27 Jul 2020 21:31:16 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <bbd18a2f7d86d451f529292c627616044955a84c.camel@xen.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, julien@xen.org,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, xen-devel@lists.xenproject.org,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 27.07.2020 11:09, Hongyan Xia wrote:
> On Tue, 2020-07-14 at 12:47 +0200, Jan Beulich wrote:
>> On 29.05.2020 13:11, Hongyan Xia wrote:
>>> --- a/xen/include/asm-x86/page.h
>>> +++ b/xen/include/asm-x86/page.h
>>> @@ -291,7 +291,13 @@ void copy_page_sse2(void *, const void *);
>>>   #define pfn_to_paddr(pfn)   __pfn_to_paddr(pfn)
>>>   #define paddr_to_pfn(pa)    __paddr_to_pfn(pa)
>>>   #define paddr_to_pdx(pa)    pfn_to_pdx(paddr_to_pfn(pa))
>>> -#define
>>> vmap_to_mfn(va)     _mfn(l1e_get_pfn(*virt_to_xen_l1e((unsigned
>>> long)(va))))
>>> +
>>> +#define vmap_to_mfn(va)
>>> ({                                                  \
>>> +        const l1_pgentry_t *pl1e_ = virt_to_xen_l1e((unsigned
>>> long)(va));   \
>>> +        mfn_t mfn_ =
>>> l1e_get_mfn(*pl1e_);                                   \
>>> +        unmap_domain_page(pl1e_);
>>>            \
>>> +        mfn_; })
>>
>> Just like is already the case in domain_page_map_to_mfn() I think
>> you want to add "BUG_ON(!pl1e)" here to limit the impact of any
>> problem to DoS (rather than a possible privilege escalation).
>>
>> Or actually, considering the only case where virt_to_xen_l1e()
>> would return NULL, returning INVALID_MFN here would likely be
>> even more robust. There looks to be just a single caller, which
>> would need adjusting to cope with an error coming back. In fact -
>> it already ASSERT()s, despite NULL right now never coming back
>> from vmap_to_page(). I think the loop there would better be
>>
>>      for ( i = 0; i < pages; i++ )
>>      {
>>          struct page_info *page = vmap_to_page(va + i * PAGE_SIZE);
>>
>>          if ( page )
>>              page_list_add(page, &pg_list);
>>          else
>>              printk_once(...);
>>      }
>>
>> Thoughts?
> 
> To be honest, I think the current implementation of vmap_to_mfn() is
> just incorrect. There is simply no guarantee that a vmap is mapped with
> small pages, so IMO we just cannot do virt_to_xen_x1e() here. The
> correct way is to have a generic page-table walking function which
> walks from the base, can stop at any level, and properly returns a
> code to indicate the level or any error.
> 
> I am inclined to BUG_ON() here, and upstream a proper fix later to
> vmap_to_mfn() as an individual patch.

Well, yes, in principle large pages can result from e.g. vmalloc()ing
a large enough area. However, rather than thinking of a generic
walking function as a solution, how about the simple one for the
immediate needs: Add MAP_SMALL_PAGES?
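
Jan's earlier suggestion upthread, returning INVALID_MFN when the l1 entry lookup fails rather than crashing on a NULL dereference, can be modelled in standalone C roughly as follows (the types and the lookup helper are simplified stand-ins for illustration, not Xen's real definitions):

```c
#include <stddef.h>

/* Simplified stand-ins for Xen's mfn_t / INVALID_MFN / l1 entry types. */
typedef unsigned long mfn_t;
#define INVALID_MFN (~0UL)

struct l1e { unsigned long pfn; };

/* Model of virt_to_xen_l1e(): the lookup may fail and return NULL. */
static struct l1e *lookup_l1e(struct l1e *table, size_t idx, size_t nr)
{
    return idx < nr ? &table[idx] : NULL;
}

/*
 * Model of the suggested vmap_to_mfn() behaviour: propagate lookup
 * failure as INVALID_MFN so the caller can cope, instead of BUG_ON().
 */
static mfn_t vmap_to_mfn_model(struct l1e *table, size_t idx, size_t nr)
{
    struct l1e *pl1e = lookup_l1e(table, idx, nr);

    return pl1e ? pl1e->pfn : INVALID_MFN;
}
```

A caller would then check the result against INVALID_MFN before use, mirroring the `if ( page ) ... else printk_once(...)` loop quoted above.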

Also, as a general remark: When you disagree with review feedback, I
think it would be quite reasonable to wait with sending the next
version until the disagreement gets resolved, unless this is taking
unduly long delays.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jul 27 19:33:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jul 2020 19:33:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k08sn-0003MD-Ro; Mon, 27 Jul 2020 19:33:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=xfpx=BG=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1k08sn-0003M8-7Z
 for xen-devel@lists.xenproject.org; Mon, 27 Jul 2020 19:33:57 +0000
X-Inumbo-ID: 1bb7905f-d040-11ea-8aee-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1bb7905f-d040-11ea-8aee-bc764e2007e4;
 Mon, 27 Jul 2020 19:33:56 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 138D7AD43;
 Mon, 27 Jul 2020 19:34:06 +0000 (UTC)
Subject: Re: [PATCH v7 09/15] efi: use new page table APIs in copy_mapping
To: Hongyan Xia <hx242@xen.org>
References: <cover.1590750232.git.hongyxia@amazon.com>
 <0259b645c81ecc3879240e30760b0e7641a2b602.1590750232.git.hongyxia@amazon.com>
 <bfe28c9c-af4e-96c2-9e6c-354a5bf626d8@suse.com>
 <0c421dee1729295eb8504ee81abbc8e57f220b12.camel@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <176a8e78-9924-e5af-df2a-1e2eaae681c5@suse.com>
Date: Mon, 27 Jul 2020 21:33:54 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <0c421dee1729295eb8504ee81abbc8e57f220b12.camel@xen.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, julien@xen.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 27.07.2020 14:45, Hongyan Xia wrote:
> On Tue, 2020-07-14 at 14:42 +0200, Jan Beulich wrote:
>> On 29.05.2020 13:11, Hongyan Xia wrote:
>>> @@ -1442,29 +1443,42 @@ static __init void copy_mapping(unsigned
>>> long mfn, unsigned long end,
>>>                                                    unsigned long
>>> emfn))
>>>   {
>>>       unsigned long next;
>>> +    l3_pgentry_t *l3src = NULL, *l3dst = NULL;
>>>   
>>>       for ( ; mfn < end; mfn = next )
>>>       {
>>>           l4_pgentry_t l4e = efi_l4_pgtable[l4_table_offset(mfn <<
>>> PAGE_SHIFT)];
>>> -        l3_pgentry_t *l3src, *l3dst;
>>>           unsigned long va = (unsigned long)mfn_to_virt(mfn);
>>>   
>>> +        if ( !((mfn << PAGE_SHIFT) & ((1UL << L4_PAGETABLE_SHIFT)
>>> - 1)) )
>>
>> To be in line with ...
>>
>>> +        {
>>> +            UNMAP_DOMAIN_PAGE(l3src);
>>> +            UNMAP_DOMAIN_PAGE(l3dst);
>>> +        }
>>>           next = mfn + (1UL << (L3_PAGETABLE_SHIFT - PAGE_SHIFT));
>>
>> ... this, please avoid the left shift of mfn in the if(). Judging from
> 
> What do you mean by "in line" here? It does not look to me that "next
> =" can be easily squashed into the if() condition.

I'm not thinking about squashing anything into an if(). I was talking
about avoiding the left shift of mfn, as this last quoted line does
(by instead subtracting PAGE_SHIFT from the left-shift count).
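
The arithmetic behind that remark can be checked in isolation: testing L4 alignment on the mfn directly, with the shift count reduced by PAGE_SHIFT, gives the same answer as shifting the mfn up to an address first. The constants below assume x86-64's usual 4-level paging values and are for illustration only:

```c
#define PAGE_SHIFT         12
#define L4_PAGETABLE_SHIFT 39

/* Form used in the patch: build an address, then test L4 alignment. */
static int l4_aligned_via_address(unsigned long mfn)
{
    return !((mfn << PAGE_SHIFT) & ((1UL << L4_PAGETABLE_SHIFT) - 1));
}

/* Suggested form: test the mfn itself, shift count reduced by PAGE_SHIFT. */
static int l4_aligned_via_mfn(unsigned long mfn)
{
    return !(mfn & ((1UL << (L4_PAGETABLE_SHIFT - PAGE_SHIFT)) - 1));
}
```

The second form also matches how `next` is computed from L3_PAGETABLE_SHIFT - PAGE_SHIFT in the quoted hunk.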

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jul 27 19:42:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jul 2020 19:42:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k090W-0004Em-PQ; Mon, 27 Jul 2020 19:41:56 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=xfpx=BG=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1k090V-0004Eh-7V
 for xen-devel@lists.xenproject.org; Mon, 27 Jul 2020 19:41:55 +0000
X-Inumbo-ID: 3870bde7-d041-11ea-a7f0-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 3870bde7-d041-11ea-a7f0-12813bfff9fa;
 Mon, 27 Jul 2020 19:41:54 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 8EF74AD43;
 Mon, 27 Jul 2020 19:42:03 +0000 (UTC)
Subject: Re: [PATCH 2/6] x86/iommu: add common page-table allocator
To: "Durrant, Paul" <pdurrant@amazon.co.uk>
References: <20200724164619.1245-1-paul@xen.org>
 <20200724164619.1245-3-paul@xen.org>
 <d0a0c46f-1461-144c-ca62-259b0a1894fa@citrix.com>
 <d329b845e6c348e8bf484e45f051779f@EX13D32EUC003.ant.amazon.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <344d62f5-1983-9438-853d-795dfbb8091a@suse.com>
Date: Mon, 27 Jul 2020 21:41:52 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <d329b845e6c348e8bf484e45f051779f@EX13D32EUC003.ant.amazon.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin Tian <kevin.tian@intel.com>, Wei Liu <wl@xen.org>,
 Paul Durrant <paul@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 27.07.2020 11:37, Durrant, Paul wrote:
>> From: Andrew Cooper <andrew.cooper3@citrix.com>
>> Sent: 24 July 2020 19:24
>>
>> On 24/07/2020 17:46, Paul Durrant wrote:
>>> --- a/xen/drivers/passthrough/x86/iommu.c
>>> +++ b/xen/drivers/passthrough/x86/iommu.c
>>> @@ -140,11 +140,19 @@ int arch_iommu_domain_init(struct domain *d)
>>>
>>>       spin_lock_init(&hd->arch.mapping_lock);
>>>
>>> +    INIT_PAGE_LIST_HEAD(&hd->arch.pgtables.list);
>>> +    spin_lock_init(&hd->arch.pgtables.lock);
>>> +
>>>       return 0;
>>>   }
>>>
>>>   void arch_iommu_domain_destroy(struct domain *d)
>>>   {
>>> +    struct domain_iommu *hd = dom_iommu(d);
>>> +    struct page_info *pg;
>>> +
>>> +    while ( (pg = page_list_remove_head(&hd->arch.pgtables.list)) )
>>> +        free_domheap_page(pg);
>>
>> Some of those 90 lines saved were the logic to not suffer a watchdog
>> timeout here.
>>
>> To do it like this, it needs plumbing into the relinquish resources path.
>>
> 
> Ok. It does look like there could be other potentially lengthy destruction done off the back of the RCU call. Ought we to have a restartable domain_destroy()?

I don't see how this would be (easily) feasible. Instead - why do
page tables not get cleaned up already at relinquish_resources time?
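
For context on why relinquish-resources time matters here: that path avoids watchdog timeouts by being restartable, freeing a bounded batch of pages and returning a continuation code so it gets re-invoked. A much-simplified sketch of that shape (not Xen's actual code; ERESTART stands in for Xen's hypercall-continuation handling):

```c
#include <stddef.h>

#define ERESTART 85  /* stand-in for Xen's -ERESTART continuation code */

struct page { struct page *next; };

/* Free up to `budget` pages; return -ERESTART while work remains. */
static int relinquish_pgtables(struct page **list, unsigned int budget)
{
    while (*list) {
        struct page *pg = *list;

        if (!budget--)
            return -ERESTART;  /* caller re-invokes, avoiding the watchdog */

        *list = pg->next;
        /* free_domheap_page(pg) would go here. */
    }

    return 0;
}
```

The unbounded `while ( (pg = page_list_remove_head(...)) )` loop in the quoted hunk has no such exit point, which is the concern Andrew raised.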

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jul 27 19:48:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jul 2020 19:48:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k096L-0004Px-Es; Mon, 27 Jul 2020 19:47:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=xfpx=BG=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1k096K-0004Ps-0I
 for xen-devel@lists.xenproject.org; Mon, 27 Jul 2020 19:47:56 +0000
X-Inumbo-ID: 100a1ea1-d042-11ea-8aee-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 100a1ea1-d042-11ea-8aee-bc764e2007e4;
 Mon, 27 Jul 2020 19:47:54 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 793D3ACC3;
 Mon, 27 Jul 2020 19:48:04 +0000 (UTC)
Subject: Re: [PATCH 1/4] x86: replace __ASM_{CL,ST}AC
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <58b9211a-f6dd-85da-d0bd-c927ac537a5d@suse.com>
 <fc8e042e-fef8-ac38-34d8-16b13e4b0135@suse.com>
 <20200727145526.GR7191@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <b29e4b17-8ec2-a0db-8426-94393e9eb2c0@suse.com>
Date: Mon, 27 Jul 2020 21:47:52 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20200727145526.GR7191@Air-de-Roger>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 27.07.2020 16:55, Roger Pau Monné wrote:
> On Wed, Jul 15, 2020 at 12:48:14PM +0200, Jan Beulich wrote:
>> --- /dev/null
>> +++ b/xen/include/asm-x86/asm-defns.h
> 
> Maybe this could be asm-insn.h or a different name? I find it
> confusing to have asm-defns.h and an asm_defs.h.

While I did anticipate a reply to this effect, I don't consider
asm-insn.h or asm-macros.h suitable: we don't want to limit this
header to a narrower purpose than "all sorts of definitions", I
don't think. Hence I chose that name despite its similarity to the
C header's.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jul 27 19:50:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jul 2020 19:50:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k098j-0005DO-SK; Mon, 27 Jul 2020 19:50:25 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=xfpx=BG=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1k098j-0005DJ-Gc
 for xen-devel@lists.xenproject.org; Mon, 27 Jul 2020 19:50:25 +0000
X-Inumbo-ID: 6968ec6a-d042-11ea-8aee-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6968ec6a-d042-11ea-8aee-bc764e2007e4;
 Mon, 27 Jul 2020 19:50:24 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 76758AE83;
 Mon, 27 Jul 2020 19:50:34 +0000 (UTC)
Subject: Re: [PATCH 2/4] x86: reduce CET-SS related #ifdef-ary
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <58b9211a-f6dd-85da-d0bd-c927ac537a5d@suse.com>
 <58615a18-7f81-c000-d499-1a49f4752879@suse.com>
 <20200727150002.GS7191@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <d2a33851-10b3-a1c7-646a-96a0b5783923@suse.com>
Date: Mon, 27 Jul 2020 21:50:23 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20200727150002.GS7191@Air-de-Roger>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 27.07.2020 17:00, Roger Pau Monné wrote:
> On Wed, Jul 15, 2020 at 12:48:46PM +0200, Jan Beulich wrote:
>> Commit b586a81b7a90 ("x86/CET: Fix build following c/s 43b98e7190") had
>> to introduce a number of #ifdef-s to make the build work with older tool
>> chains. Introduce an assembler macro covering for tool chains not
>> knowing of CET-SS, allowing some conditionals where just SETSSBSY is the
>> problem to be dropped again.
>>
>> No change to generated code.
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> Looks like an improvement overall in code clarity:
> 
> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks.

> Can you test on clang? Just to be on the safe side, otherwise I can
> test it.

Works with 5.<whatever> that I have on one of my boxes.

>> --- a/xen/arch/x86/x86_64/entry.S
>> +++ b/xen/arch/x86/x86_64/entry.S
>> @@ -237,9 +237,7 @@ iret_exit_to_guest:
>>    * %ss must be saved into the space left by the trampoline.
>>    */
>>   ENTRY(lstar_enter)
>> -#ifdef CONFIG_XEN_SHSTK
>>           ALTERNATIVE "", "setssbsy", X86_FEATURE_XEN_SHSTK
> 
> Should the setssbsy be quoted, or it doesn't matter? I'm asking
> because the same construction used by CLAC/STAC doesn't quote the
> instruction.

I actually thought we consistently quote these. It doesn't matter
as long as it's a single word. Quoting becomes necessary when
there are e.g. blanks involved, which happens for insns with
operands.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jul 27 19:58:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jul 2020 19:58:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k09Gm-0005T8-Nh; Mon, 27 Jul 2020 19:58:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9AKR=BG=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1k09Gl-0005T3-Bs
 for xen-devel@lists.xenproject.org; Mon, 27 Jul 2020 19:58:43 +0000
X-Inumbo-ID: 90e60830-d043-11ea-8af0-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 90e60830-d043-11ea-8af0-bc764e2007e4;
 Mon, 27 Jul 2020 19:58:40 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=qTWqzwhQ4Aef+/Cr3o6e0XfgCbsKBG7xNDc9pHZuEbc=; b=cceCU359a2XIIgdcpGqKTCuHZ
 cqA6SCDlATUky+vlrjfgBeQ1lYT18awIoKs9099TGAeXSu9iZruiF7lv5eyr/oGCeLH8m1ma6gvSX
 LMjszjffxIUQgSqYjAYnVoZa79x+S7WBcCrcyFCsM5U+w9/YJmd+t1RxQWN/K6oJc79sI=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1k09Gh-0000ZX-Sv; Mon, 27 Jul 2020 19:58:39 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1k09Gh-0005Hg-HC; Mon, 27 Jul 2020 19:58:39 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1k09Gh-0000bp-GW; Mon, 27 Jul 2020 19:58:39 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-152227-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 152227: regressions - FAIL
X-Osstest-Failures: qemu-mainline:test-amd64-amd64-qemuu-nested-intel:xen-boot/l1:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-qemuu-nested-amd:xen-boot/l1:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:windows-install:fail:regression
 qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: qemuu=194f8ca825854abef3aceca1ed7eb5a53b08751f
X-Osstest-Versions-That: qemuu=9e3903136d9acde2fb2dd9e967ba928050a6cb4a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 27 Jul 2020 19:58:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 152227 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/152227/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-nested-intel 14 xen-boot/l1       fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-qemuu-nested-amd 14 xen-boot/l1         fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-win7-amd64 10 windows-install  fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-ws16-amd64 10 windows-install  fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-ws16-amd64 10 windows-install   fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-win7-amd64 10 windows-install   fail REGR. vs. 151065

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 151065
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 151065
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                194f8ca825854abef3aceca1ed7eb5a53b08751f
baseline version:
 qemuu                9e3903136d9acde2fb2dd9e967ba928050a6cb4a

Last test of basis   151065  2020-06-12 22:27:51 Z   44 days
Failing since        151101  2020-06-14 08:32:51 Z   43 days   61 attempts
Testing same since   152227  2020-07-27 05:12:29 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Ahmed Karaman <ahmedkhaledkaraman@gmail.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.m.mail@gmail.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Richardson <Alexander.Richardson@cl.cam.ac.uk>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Boettcher <alexander.boettcher@genode-labs.com>
  Alexander Bulekov <alxndr@bu.edu>
  Alexander Duyck <alexander.h.duyck@linux.intel.com>
  Alexandre Mergnat <amergnat@baylibre.com>
  Alexey Kardashevskiy <aik@ozlabs.ru>
  Alexey Krasikov <alex-krasikov@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Allan Peramaki <aperamak@pp1.inet.fi>
  Andrew <andrew@daynix.com>
  Andrew Jones <drjones@redhat.com>
  Andrew Melnychenko <andrew@daynix.com>
  Ani Sinha <ani.sinha@nutanix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Antoine Damhet <antoine.damhet@blade-group.com>
  Ard Biesheuvel <ardb@kernel.org>
  Artyom Tarasenko <atar4qemu@gmail.com>
  Atish Patra <atish.patra@wdc.com>
  Aurelien Jarno <aurelien@aurel32.net>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Basil Salman <basil@daynix.com>
  Basil Salman <bsalman@redhat.com>
  Beata Michalska <beata.michalska@linaro.org>
  Bin Meng <bin.meng@windriver.com>
  Bin Meng <bmeng.cn@gmail.com>
  Cameron Esfahani <dirty@apple.com>
  Catherine A. Frederick <chocola@animebitch.es>
  Cathy Zhang <cathy.zhang@intel.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chenyi Qiang <chenyi.qiang@intel.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christophe de Dinechin <dinechin@redhat.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Cindy Lu <lulu@redhat.com>
  Claudio Fontana <cfontana@suse.de>
  Cleber Rosa <crosa@redhat.com>
  Colin Xu <colin.xu@intel.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniele Buono <dbuono@linux.vnet.ibm.com>
  David Carlier <devnexen@gmail.com>
  David Edmondson <david.edmondson@oracle.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Denis V. Lunev <den@openvz.org>
  Derek Su <dereksu@qnap.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Ed Robbins <E.J.C.Robbins@kent.ac.uk>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Emilio G. Cota <cota@braap.org>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  erik-smit <erik.lucas.smit@gmail.com>
  Evgeny Yakovlev <eyakovlev@virtuozzo.com>
  fangying <fangying1@huawei.com>
  Farhan Ali <alifm@linux.ibm.com>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frank Chang <frank.chang@sifive.com>
  Geoffrey McRae <geoff@hostfission.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Giuseppe Musacchio <thatlemon@gmail.com>
  Greg Kurz <groug@kaod.org>
  Guenter Roeck <linux@roeck-us.net>
  Gustavo Romero <gromero@linux.ibm.com>
  Halil Pasic <pasic@linux.ibm.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Howard Spoelstra <hsp.cat7@gmail.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Ian Jiang <ianjiang.ict@gmail.com>
  Igor Mammedov <imammedo@redhat.com>
  Jan Kiszka <jan.kiszka@siemens.com>
  Janne Grunau <j@jannau.net>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason Wang <jasowang@redhat.com>
  Jay Zhou <jianjay.zhou@huawei.com>
  Jean-Christophe Dubois <jcd@tribudubois.net>
  Jessica Clarke <jrtc27@jrtc27.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jingqi Liu <jingqi.liu@intel.com>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Joseph Myers <joseph@codesourcery.com>
  Josh Kunz <jkz@google.com>
  Juan Quintela <quintela@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Keqian Zhu <zhukeqian1@huawei.com>
  Kevin Wolf <kwolf@redhat.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Leif Lindholm <leif@nuviainc.com>
  Leonid Bloch <lb.workbox@gmail.com>
  Leonid Bloch <lbloch@janustech.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Qiang <liq3ea@gmail.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  lichun <lichun@ruijie.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  Like Xu <like.xu@linux.intel.com>
  Lingfeng Yang <lfy@google.com>
  Lingshan zhu <lingshan.zhu@intel.com>
  Liran Alon <liran.alon@oracle.com>
  Liu Yi L <yi.l.liu@intel.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Luc Michel <luc.michel@greensocs.com>
  Lukas Straub <lukasstraub2@web.de>
  Luwei Kang <luwei.kang@intel.com>
  Maciej S. Szmigiero <maciej.szmigiero@oracle.com>
  Magnus Damm <magnus.damm@gmail.com>
  Mao Zhongyi <maozhongyi@cmss.chinamobile.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marcelo Tosatti <mtosatti@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Mario Smarduch <msmarduch@digitalocean.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Masahiro Yamada <masahiroy@kernel.org>
  Matus Kysel <mkysel@tachyum.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Maxime Coquelin <maxime.coquelin@redhat.com>
  Menno Lageman <menno.lageman@oracle.com>
  Michael Rolnik <mrolnik@gmail.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michal Privoznik <mprivozn@redhat.com>
  Michele Denber <denber@mindspring.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Olaf Hering <olaf@aepfle.de>
  Pan Nengyuan <pannengyuan@huawei.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Radoslaw Biernacki <rad@semihalf.com>
  Raphael Norwitz <raphael.norwitz@nutanix.com>
  Reza Arbab <arbab@linux.ibm.com>
  Richard Henderson <richard.henderson@linaro.org>
  Richard W.M. Jones <rjones@redhat.com>
  Riku Voipio <riku.voipio@linaro.org>
  Robert Foley <robert.foley@linaro.org>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Roman Kagan <rkagan@virtuozzo.com>
  Roman Kagan <rvkagan@yandex-team.ru>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Sarah Harris <S.E.Harris@kent.ac.uk>
  Sebastian Rasmussen <sebras@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Weil <sw@weilnetz.de>
  Sven Schnelle <svens@stackframe.org>
  Tao Xu <tao3.xu@intel.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tiwei Bie <tiwei.bie@intel.com>
  Tong Ho <tong.ho@xilinx.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  WangBowen <bowen.wang@intel.com>
  Wei Huang <wei.huang2@amd.com>
  Wei Wang <wei.w.wang@intel.com>
  Wentong Wu <wentong.wu@intel.com>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xie Yongji <xieyongji@bytedance.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Ying Fang <fangying1@huawei.com>
  Yoshinori Sato <ysato@users.sourceforge.jp>
  Yuri Benditovich <yuri.benditovich@daynix.com>
  Zhang Chen <chen.zhang@intel.com>
  Zheng Chuan <zhengchuan@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 32032 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Jul 27 22:09:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jul 2020 22:09:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k0BJd-00081l-2V; Mon, 27 Jul 2020 22:09:49 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=RNOj=BG=yujala.com=srini@srs-us1.protection.inumbo.net>)
 id 1k0BJb-00081d-KT
 for xen-devel@lists.xenproject.org; Mon, 27 Jul 2020 22:09:47 +0000
X-Inumbo-ID: e14ad000-d055-11ea-8b02-bc764e2007e4
Received: from gproxy7-pub.mail.unifiedlayer.com (unknown [70.40.196.235])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e14ad000-d055-11ea-8b02-bc764e2007e4;
 Mon, 27 Jul 2020 22:09:46 +0000 (UTC)
Received: from cmgw11.unifiedlayer.com (unknown [10.9.0.11])
 by gproxy7.mail.unifiedlayer.com (Postfix) with ESMTP id 67DDA215D8C
 for <xen-devel@lists.xenproject.org>; Mon, 27 Jul 2020 16:09:45 -0600 (MDT)
Received: from md-71.webhostbox.net ([204.11.58.143]) by cmsmtp with ESMTP
 id 0BJYkbcGVpSV40BJZk586l; Mon, 27 Jul 2020 16:09:45 -0600
X-Authority-Reason: nr=8
X-Authority-Analysis: v=2.3 cv=GKEm7NFK c=1 sm=1 tr=0
 a=yS0qNmEK8ed8yKyeR8R6rg==:117 a=yS0qNmEK8ed8yKyeR8R6rg==:17
 a=dLZJa+xiwSxG16/P+YVxDGlgEgI=:19 a=8nJEP1OIZ-IA:10:nop_charset_1
 a=_RQrkK6FrEwA:10:nop_rcvd_month_year
 a=o-A10e_uY_YA:10:endurance_base64_authed_username_1 a=V_Ht6SivflJ7o7_gc2UA:9
 a=wPNLvfGTeEIA:10:nop_charset_2
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=yujala.com; 
 s=default;
 h=Content-Transfer-Encoding:Content-Type:MIME-Version:Message-ID:
 Date:Subject:In-Reply-To:References:To:From:Sender:Reply-To:Cc:Content-ID:
 Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc
 :Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:List-Subscribe:
 List-Post:List-Owner:List-Archive;
 bh=3SgGnnpZgltvxZCcexkSDs/YgGLWO8vqovnNKlIbfL8=; b=SFQ1ltdqB3XdNshHcGUlTKj0UC
 TAZ4OVnsq9iZ62JfL+M9C+vZMcKaSTSAFO4vrTTRzuaPX2pqjZooHpP/e0IwRF/dAg1VkaDLZBASg
 7Fz+o2lZRnmZ/99+vlB71GjlUo/wi13EyWWeiyb1/dwPSIzCnPoMA1y/S6Yt6erS7ck0uywlAhMtQ
 OWbeGCHXEBoUgm0Y9MR3Gtk7htgV2fMplZM7ZEG4XYHASD8JJSyc4cIjRTmTA5IQIMapJIEEAnmZG
 QFwBV34U4texTVXgpl6q2hBEtAgeqTwRIhogxuUPeabQr3yZSYnFLJmQSY+5WNeXrkzNu5VWkW0qe
 pEPD0hww==;
Received: from 162-231-240-210.lightspeed.sntcca.sbcglobal.net
 ([162.231.240.210]:56562 helo=SRINIASUSLAPTOP)
 by md-71.webhostbox.net with esmtpsa (TLS1.2) tls
 TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 (Exim 4.93)
 (envelope-from <srini@yujala.com>)
 id 1k0BJY-001dcD-9H; Mon, 27 Jul 2020 22:09:44 +0000
From: "Srinivas Bangalore" <srini@yujala.com>
To: "'Julien Grall'" <julien@xen.org>, <xen-devel@lists.xenproject.org>,
 "'Christopher Clark'" <christopher.w.clark@gmail.com>,
 "'Stefano Stabellini'" <sstabellini@kernel.org>
References: <002801d66051$90fe2300$b2fa6900$@yujala.com>
 <9736680b-1c81-652b-552b-4103341bad50@xen.org>
 <000001d661cb$45cdaa10$d168fe30$@yujala.com>
 <5f985a6a-1bd6-9e68-f35f-b0b665688cee@xen.org>
In-Reply-To: <5f985a6a-1bd6-9e68-f35f-b0b665688cee@xen.org>
Subject: RE: Porting Xen to Jetson Nano
Date: Mon, 27 Jul 2020 15:09:42 -0700
Message-ID: <002901d66462$a1dff530$e59fdf90$@yujala.com>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
X-Mailer: Microsoft Outlook 16.0
Content-Language: en-us
Thread-Index: AQIl7jaf5+ZLFToUYJ/P44Ycp83hwAFkmiVWAaCz0KsBnyrDOahXwpMQ
X-AntiAbuse: This header was added to track abuse,
 please include it with any abuse report
X-AntiAbuse: Primary Hostname - md-71.webhostbox.net
X-AntiAbuse: Original Domain - lists.xenproject.org
X-AntiAbuse: Originator/Caller UID/GID - [47 12] / [47 12]
X-AntiAbuse: Sender Address Domain - yujala.com
X-BWhitelist: no
X-Source-IP: 162.231.240.210
X-Source-L: No
X-Exim-ID: 1k0BJY-001dcD-9H
X-Source: 
X-Source-Args: 
X-Source-Dir: 
X-Source-Sender: 162-231-240-210.lightspeed.sntcca.sbcglobal.net
 (SRINIASUSLAPTOP) [162.231.240.210]:56562
X-Source-Auth: srini@yujala.com
X-Email-Count: 1
X-Source-Cap: c3JpbmlxbGw7c3JpbmlxbGw7bWQtNzEud2ViaG9zdGJveC5uZXQ=
X-Local-Domain: yes
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi,


On 24/07/2020 16:01, Srinivas Bangalore wrote:
> Hi Julien,

Hello,

>
> Thanks for the tips. Comments inline...

I struggled to find your comment inline as your e-mail client doesn't
quote my answer. Please configure your e-mail client to use some form of
quoting (the usual is '>').

[<SB>] Done! Sorry about that.

> Here's the output, truncated since it goes into an infinite loop
> printing the same info:
> [..]
> (XEN) Allocating 1:1 mappings totalling 128MB for dom0:
> (XEN) BANK[0] 0x00000088000000-0x00000090000000 (128MB)
> (XEN) Grant table range: 0x000000fec00000-0x000000fec68000
> (XEN) Loading zImage from 00000000e1000000 to
> 0000000088080000-000000008a23c808
> (XEN) Allocating PPI 16 for event channel interrupt
> (XEN) Loading dom0 DTB to 0x000000008fe00000-0x000000008fe34444
> (XEN) Scrubbing Free RAM on 1 nodes using 4 CPUs
> (XEN) ........done.
> (XEN) Initial low memory virq threshold set at 0x4000 pages.
> (XEN) Std. Loglevel: All
> (XEN) Guest Loglevel: All
> (XEN) ***************************************************
> (XEN) WARNING: CONSOLE OUTPUT IS SYNCHRONOUS
> (XEN) This option is intended to aid debugging of Xen by ensuring
> (XEN) that all output is synchronously delivered on the serial line.
> (XEN) However it can introduce SIGNIFICANT latencies and affect
> (XEN) timekeeping. It is NOT recommended for production use!
> (XEN) ***************************************************
> (XEN) 3... 2... 1...
> (XEN) *** Serial input -> DOM0 (type 'CTRL-a' three times to switch
> input to Xen)
> (XEN) Freed 296kB init memory.
> (XEN) dom0 IPA 0x0000000088080000
> (XEN) P2M @ 0000000803fc3d40 mfn:0x17f0f5
> (XEN) 0TH[0x0] = 0x004000017f0f377f
> (XEN) 1ST[0x2] = 0x02c00000800006fd
> (XEN) Mem access check
> (XEN) dom0 IPA 0x0000000088080000
> (XEN) P2M @ 0000000803fc3d40 mfn:0x17f0f5
> (XEN) 0TH[0x0] = 0x004000017f0f377f
> (XEN) 1ST[0x2] = 0x02c00000800006fd
> (XEN) Mem access check

The instruction abort issue looks normal as the mapping is marked as
non-executable.

Looking at the rest of the bits, bits 55:58 indicate the type of mapping
used. The value suggests the mapping was created using p2m_mmio_direct_c
(RW cacheable MMIO). This looks wrong to me because RAM should be mapped
using p2m_ram_rw.

Looking at your DT, it looks like the region is marked as reserved. On
Xen 4.8, reserved-memory regions are not correctly handled (IIRC this
was only fixed in Xen 4.13). You should be able to confirm this by
enabling CONFIG_DEVICE_TREE_DEBUG in your .config.

The option will print more information about the mappings created for
dom0 on your console.
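Enabling the option mentioned above amounts to a one-line edit of the Kconfig-style .config followed by a rebuild. A hedged sketch, demonstrated on a scratch file rather than a real tree (the exact build target and Kconfig spelling may differ between releases; adjust for your checkout):

```shell
# Demonstrate on a scratch copy; in a real tree, operate on xen/.config.
cfg=$(mktemp)
printf '%s\n' '# CONFIG_DEVICE_TREE_DEBUG is not set' > "$cfg"

# Flip the option on, as suggested:
sed -i 's/^# CONFIG_DEVICE_TREE_DEBUG is not set$/CONFIG_DEVICE_TREE_DEBUG=y/' "$cfg"
grep CONFIG_DEVICE_TREE_DEBUG "$cfg"

# In a real tree you would then rebuild and reinstall the hypervisor
# (e.g. `make dist-xen`; target name may vary by release) and reboot;
# the extra device-tree mapping messages appear on the Xen console.
```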

However, given you are using an old release, you are at risk of
continuing to find bugs that have already been resolved in more recent
releases. It would probably be better if you switched to Xen 4.14 and
reported any bug you find there.

[<SB>] OK, I started porting the patch series to 4.14, but it is
definitely not straightforward ;) Will take some time to do this. BTW, I
was looking at xen/arch/arm/Rules.mk in 4.14 and it is blank. The
previous releases had some board-specific stuff in this file - esp. the
EARLY_PRINTK definitions. Has this changed in 4.14?
>
> [..]
>
> I added the printk for 'Mem access check' inside the 'case FSC_FLT_PERM'
> of the switch (fsc) code following the lookup. That's what you see in the
> output above.
> So it does seem like there's a memory access fault somehow.
>
>>
>> (XEN)  HPFAR_EL2: 0000000000000000
>>
>> (XEN)    FAR_EL2: 00000000a0080000
>>
>> (XEN)
>>
>> (XEN) Guest stack trace from sp=0:
>>
>> (XEN)   Failed to convert stack to physical address
>
> [...]
>
>> It seems the DOM0 kernel did not get added to the task list...
>
>   From a look at the dump, dom0 vCPU0 has been scheduled and running
> on pCPU0.
>
>>
>> Boot args for Xen and Dom0 are here:
>> (XEN) Checking for initrd in /chosen
>>
>> (XEN) linux,initrd limits invalid: 0000000084100000 >=
>> 0000000084100000
>>
>> (XEN) RAM: 0000000080000000 - 00000000fedfffff
>>
>> (XEN) RAM: 0000000100000000 - 000000017f1fffff
>>
>> (XEN)
>>
>> (XEN) MODULE[0]: 00000000fc7f8000 - 00000000fc82d000 Device Tree
>>
>> (XEN) MODULE[1]: 00000000e1000000 - 00000000e31bc808 Kernel
>> console=hvc0 earlyprintk=xen earlycon=xen rootfstype=ext4 rw rootwait
>> root=/dev/mmcblk0p1 rdinit=/sbin/init
>
> You want to use earlycon=xenboot here.
>
>>
>> (XEN)  RESVD[0]: 0000000080000000 - 0000000080020000
>>
>> (XEN)  RESVD[1]: 00000000e3500000 - 00000000e3535000
>>
>> (XEN)  RESVD[2]: 00000000fc7f8000 - 00000000fc82d000
>>
>> (XEN)
>>
>> (XEN) Command line: console=dtuart earlyprintk=xen
>> earlycon=uart8250,mmio32,0x70006000 sync_console dom0_mem=512M
>> log_lvl=all guest_loglvl=all console_to_ring
>
> FWIW, earlyprintk and earlycon are not understood by Xen. They are
> only useful for Dom0.
>
> BTW, to Christopher's point, the dtb did have some issues. I had to
> hack the 'interrupt-controller' node to get the GICv2 working.
> I have attached the .dts file that I'm using.
[<SB>]

Regards,
Srini




From xen-devel-bounces@lists.xenproject.org Mon Jul 27 22:09:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jul 2020 22:09:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k0BJG-000818-Pn; Mon, 27 Jul 2020 22:09:26 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=i/KF=BG=oracle.com=boris.ostrovsky@srs-us1.protection.inumbo.net>)
 id 1k0BJF-000813-Rh
 for xen-devel@lists.xenproject.org; Mon, 27 Jul 2020 22:09:25 +0000
X-Inumbo-ID: d3f943d3-d055-11ea-8b02-bc764e2007e4
Received: from userp2120.oracle.com (unknown [156.151.31.85])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d3f943d3-d055-11ea-8b02-bc764e2007e4;
 Mon, 27 Jul 2020 22:09:24 +0000 (UTC)
Received: from pps.filterd (userp2120.oracle.com [127.0.0.1])
 by userp2120.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 06RM7nDB101336;
 Mon, 27 Jul 2020 22:08:53 GMT
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com;
 h=subject : to : cc :
 references : from : message-id : date : mime-version : in-reply-to :
 content-type : content-transfer-encoding; s=corp-2020-01-29;
 bh=oo4gy89JKS9v8t35Nn6iznW8h1M/3xDxBEt5v+lIvxo=;
 b=PHWvHwOUu44Kh7cWpT4r1cQmFcCVxnyU/IxbHndgtBYNQAGWpKTVGBCeFHx+ZKijYp1X
 ekdd+ur5jb5mf2iEQBBzEYbcgpUhPJMXp/I2Vd9p5gn7SPycHk9YHEou+DTrS0SK0WZw
 lQ8EW71zDHbcOJ9irK8S2NCua+Pdiev/k//apBqMiES1mgKzNUffewR81rUEnNrQ4j77
 VI24QKUhF0S3ZRx4I56XFNt2AXjm35lSAcVcc6hUtZ5nUxHOn8UWJXbienGHBu5Z4kNi
 wc6Vya6F+uWwL2bzjNsZXk+JpP/niEn6eeBFaItkQOD7crhwYhBCF4ZqsNVyiP/ROBAL Qw== 
Received: from aserp3020.oracle.com (aserp3020.oracle.com [141.146.126.70])
 by userp2120.oracle.com with ESMTP id 32hu1jc4s7-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=FAIL);
 Mon, 27 Jul 2020 22:08:53 +0000
Received: from pps.filterd (aserp3020.oracle.com [127.0.0.1])
 by aserp3020.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 06RM8RHr100084;
 Mon, 27 Jul 2020 22:08:52 GMT
Received: from userv0121.oracle.com (userv0121.oracle.com [156.151.31.72])
 by aserp3020.oracle.com with ESMTP id 32hu5rw25n-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Mon, 27 Jul 2020 22:08:52 +0000
Received: from abhmp0001.oracle.com (abhmp0001.oracle.com [141.146.116.7])
 by userv0121.oracle.com (8.14.4/8.13.8) with ESMTP id 06RM8afa004764;
 Mon, 27 Jul 2020 22:08:36 GMT
Received: from [10.39.225.118] (/10.39.225.118)
 by default (Oracle Beehive Gateway v4.0)
 with ESMTP ; Mon, 27 Jul 2020 15:08:35 -0700
Subject: Re: [PATCH v2 01/11] xen/manage: keep track of the on-going suspend
 mode
To: Stefano Stabellini <sstabellini@kernel.org>,
 Anchal Agarwal <anchalag@amazon.com>
References: <50298859-0d0e-6eb0-029b-30df2a4ecd63@oracle.com>
 <20200715204943.GB17938@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
 <0ca3c501-e69a-d2c9-a24c-f83afd4bdb8c@oracle.com>
 <20200717191009.GA3387@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
 <5464f384-d4b4-73f0-d39e-60ba9800d804@oracle.com>
 <20200721000348.GA19610@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
 <408d3ce9-2510-2950-d28d-fdfe8ee41a54@oracle.com>
 <alpine.DEB.2.21.2007211640500.17562@sstabellini-ThinkPad-T480s>
 <20200722180229.GA32316@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
 <alpine.DEB.2.21.2007221645430.17562@sstabellini-ThinkPad-T480s>
 <20200723225745.GB32316@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
 <alpine.DEB.2.21.2007241431280.17562@sstabellini-ThinkPad-T480s>
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Message-ID: <66a9b838-70ed-0807-9260-f2c31343a081@oracle.com>
Date: Mon, 27 Jul 2020 18:08:29 -0400
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2007241431280.17562@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
Content-Language: en-US
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9695
 signatures=668679
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 suspectscore=0
 adultscore=0 bulkscore=0
 malwarescore=0 mlxscore=0 spamscore=0 mlxlogscore=999 phishscore=0
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2006250000
 definitions=main-2007270148
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9695
 signatures=668679
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 clxscore=1011
 mlxlogscore=999
 malwarescore=0 impostorscore=0 priorityscore=1501 spamscore=0 phishscore=0
 suspectscore=0 bulkscore=0 mlxscore=0 lowpriorityscore=0 adultscore=0
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2006250000
 definitions=main-2007270148
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: eduval@amazon.com, len.brown@intel.com, peterz@infradead.org,
 benh@kernel.crashing.org, x86@kernel.org, linux-mm@kvack.org, pavel@ucw.cz,
 hpa@zytor.com, kamatam@amazon.com, mingo@redhat.com,
 xen-devel@lists.xenproject.org, sblbir@amazon.com, axboe@kernel.dk,
 konrad.wilk@oracle.com, bp@alien8.de, tglx@linutronix.de, jgross@suse.com,
 netdev@vger.kernel.org, linux-pm@vger.kernel.org, rjw@rjwysocki.net,
 linux-kernel@vger.kernel.org, vkuznets@redhat.com, davem@davemloft.net,
 dwmw@amazon.co.uk, roger.pau@citrix.com
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 7/24/20 7:01 PM, Stefano Stabellini wrote:
> Yes, it does, thank you. I'd rather not introduce unknown regressions so
> I would recommend to add an arch-specific check on registering
> freeze/thaw/restore handlers. Maybe something like the following:
>
> #ifdef CONFIG_X86
>     .freeze = blkfront_freeze,
>     .thaw = blkfront_restore,
>     .restore = blkfront_restore
> #endif
>
>
> maybe Boris has a better suggestion on how to do it


An alternative might be to still install the PM notifier in
drivers/xen/manage.c (I think as a result of the latest discussions we
decided we won't need it) and return -ENOTSUPP on ARM for
PM_HIBERNATION_PREPARE and friends. Would that work?


-boris




From xen-devel-bounces@lists.xenproject.org Mon Jul 27 23:24:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jul 2020 23:24:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k0CTi-0006Pv-6z; Mon, 27 Jul 2020 23:24:18 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=tQrV=BG=kernel.org=sashal@srs-us1.protection.inumbo.net>)
 id 1k0CTh-0006Pq-2Z
 for xen-devel@lists.xenproject.org; Mon, 27 Jul 2020 23:24:17 +0000
X-Inumbo-ID: 49b5bd9e-d060-11ea-a817-12813bfff9fa
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 49b5bd9e-d060-11ea-a817-12813bfff9fa;
 Mon, 27 Jul 2020 23:24:16 +0000 (UTC)
Received: from sasha-vm.mshome.net (c-73-47-72-35.hsd1.nh.comcast.net
 [73.47.72.35])
 (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id D7F8320FC3;
 Mon, 27 Jul 2020 23:24:14 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
 s=default; t=1595892255;
 bh=Ezm59BrI2xzB7EIKgvwFqNMC5HrCH6WKJKXNUOGexwc=;
 h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
 b=DfqzY1W5wnuDOLPOiPE+wY9phF3SRYlb9QuUfyqloFR6gZYZdhXWQsa6UGDXwLNpw
 KRRvkaaYdJ072Dlrgg3ImqvyUtYQReoBkim4pKJtF//OfMy/xafTI/ZNt8ZAWKMafc
 kf3tuk3uOOMdwWsnrS9yUS1R/ypYUdkM4/YNXr8g=
From: Sasha Levin <sashal@kernel.org>
To: linux-kernel@vger.kernel.org,
	stable@vger.kernel.org
Subject: [PATCH AUTOSEL 5.7 22/25] xen-netfront: fix potential deadlock in
 xennet_remove()
Date: Mon, 27 Jul 2020 19:23:42 -0400
Message-Id: <20200727232345.717432-22-sashal@kernel.org>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20200727232345.717432-1-sashal@kernel.org>
References: <20200727232345.717432-1-sashal@kernel.org>
MIME-Version: 1.0
X-stable: review
X-Patchwork-Hint: Ignore
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Sasha Levin <sashal@kernel.org>, Andrea Righi <andrea.righi@canonical.com>,
 netdev@vger.kernel.org, "David S . Miller" <davem@davemloft.net>,
 xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Andrea Righi <andrea.righi@canonical.com>

[ Upstream commit c2c633106453611be07821f53dff9e93a9d1c3f0 ]

There's a potential race in xennet_remove(); this is what the driver is
doing upon unregistering a network device:

  1. state = read bus state
  2. if state is not "Closed":
  3.    request to set state to "Closing"
  4.    wait for state to be set to "Closing"
  5.    request to set state to "Closed"
  6.    wait for state to be set to "Closed"

If the state changes to "Closed" immediately after step 1 we are stuck
forever in step 4, because the state will never go back from "Closed" to
"Closing".

Make sure to check also for state == "Closed" in step 4 to prevent the
deadlock.

Also add a 5 sec timeout any time we wait for the bus state to change,
to avoid getting stuck forever in wait_event().

Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 drivers/net/xen-netfront.c | 64 +++++++++++++++++++++++++-------------
 1 file changed, 42 insertions(+), 22 deletions(-)

diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
index 482c6c8b0fb7e..88280057e0321 100644
--- a/drivers/net/xen-netfront.c
+++ b/drivers/net/xen-netfront.c
@@ -63,6 +63,8 @@ module_param_named(max_queues, xennet_max_queues, uint, 0644);
 MODULE_PARM_DESC(max_queues,
 		 "Maximum number of queues per virtual interface");
 
+#define XENNET_TIMEOUT  (5 * HZ)
+
 static const struct ethtool_ops xennet_ethtool_ops;
 
 struct netfront_cb {
@@ -1334,12 +1336,15 @@ static struct net_device *xennet_create_dev(struct xenbus_device *dev)
 
 	netif_carrier_off(netdev);
 
-	xenbus_switch_state(dev, XenbusStateInitialising);
-	wait_event(module_wq,
-		   xenbus_read_driver_state(dev->otherend) !=
-		   XenbusStateClosed &&
-		   xenbus_read_driver_state(dev->otherend) !=
-		   XenbusStateUnknown);
+	do {
+		xenbus_switch_state(dev, XenbusStateInitialising);
+		err = wait_event_timeout(module_wq,
+				 xenbus_read_driver_state(dev->otherend) !=
+				 XenbusStateClosed &&
+				 xenbus_read_driver_state(dev->otherend) !=
+				 XenbusStateUnknown, XENNET_TIMEOUT);
+	} while (!err);
+
 	return netdev;
 
  exit:
@@ -2139,28 +2144,43 @@ static const struct attribute_group xennet_dev_group = {
 };
 #endif /* CONFIG_SYSFS */
 
-static int xennet_remove(struct xenbus_device *dev)
+static void xennet_bus_close(struct xenbus_device *dev)
 {
-	struct netfront_info *info = dev_get_drvdata(&dev->dev);
-
-	dev_dbg(&dev->dev, "%s\n", dev->nodename);
+	int ret;
 
-	if (xenbus_read_driver_state(dev->otherend) != XenbusStateClosed) {
+	if (xenbus_read_driver_state(dev->otherend) == XenbusStateClosed)
+		return;
+	do {
 		xenbus_switch_state(dev, XenbusStateClosing);
-		wait_event(module_wq,
-			   xenbus_read_driver_state(dev->otherend) ==
-			   XenbusStateClosing ||
-			   xenbus_read_driver_state(dev->otherend) ==
-			   XenbusStateUnknown);
+		ret = wait_event_timeout(module_wq,
+				   xenbus_read_driver_state(dev->otherend) ==
+				   XenbusStateClosing ||
+				   xenbus_read_driver_state(dev->otherend) ==
+				   XenbusStateClosed ||
+				   xenbus_read_driver_state(dev->otherend) ==
+				   XenbusStateUnknown,
+				   XENNET_TIMEOUT);
+	} while (!ret);
+
+	if (xenbus_read_driver_state(dev->otherend) == XenbusStateClosed)
+		return;
 
+	do {
 		xenbus_switch_state(dev, XenbusStateClosed);
-		wait_event(module_wq,
-			   xenbus_read_driver_state(dev->otherend) ==
-			   XenbusStateClosed ||
-			   xenbus_read_driver_state(dev->otherend) ==
-			   XenbusStateUnknown);
-	}
+		ret = wait_event_timeout(module_wq,
+				   xenbus_read_driver_state(dev->otherend) ==
+				   XenbusStateClosed ||
+				   xenbus_read_driver_state(dev->otherend) ==
+				   XenbusStateUnknown,
+				   XENNET_TIMEOUT);
+	} while (!ret);
+}
+
+static int xennet_remove(struct xenbus_device *dev)
+{
+	struct netfront_info *info = dev_get_drvdata(&dev->dev);
 
+	xennet_bus_close(dev);
 	xennet_disconnect_backend(info);
 
 	if (info->netdev->reg_state == NETREG_REGISTERED)
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon Jul 27 23:24:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jul 2020 23:24:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k0CU8-0006RK-Fw; Mon, 27 Jul 2020 23:24:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=tQrV=BG=kernel.org=sashal@srs-us1.protection.inumbo.net>)
 id 1k0CU7-0006R9-81
 for xen-devel@lists.xenproject.org; Mon, 27 Jul 2020 23:24:43 +0000
X-Inumbo-ID: 5864b8a4-d060-11ea-8b07-bc764e2007e4
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5864b8a4-d060-11ea-8b07-bc764e2007e4;
 Mon, 27 Jul 2020 23:24:42 +0000 (UTC)
Received: from sasha-vm.mshome.net (c-73-47-72-35.hsd1.nh.comcast.net
 [73.47.72.35])
 (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id A782621D95;
 Mon, 27 Jul 2020 23:24:39 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
 s=default; t=1595892280;
 bh=Ezm59BrI2xzB7EIKgvwFqNMC5HrCH6WKJKXNUOGexwc=;
 h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
 b=BGSteS5ITVEMj0WODuAdwdhV7JIHEQjhjWxb1V5OQkPrbIxEhBZXcMKhTVmGr0BQR
 LLGeqFAqXIDKf/51dLNZxbsDhdrY4c44vldmcDXlYPvklli2H9j+GvanZ1QMFIdV02
 an1X3o2X9Shw8xFMxuk6/N7x67eCYaLOSx40Tlow=
From: Sasha Levin <sashal@kernel.org>
To: linux-kernel@vger.kernel.org,
	stable@vger.kernel.org
Subject: [PATCH AUTOSEL 5.4 15/17] xen-netfront: fix potential deadlock in
 xennet_remove()
Date: Mon, 27 Jul 2020 19:24:18 -0400
Message-Id: <20200727232420.717684-15-sashal@kernel.org>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20200727232420.717684-1-sashal@kernel.org>
References: <20200727232420.717684-1-sashal@kernel.org>
MIME-Version: 1.0
X-stable: review
X-Patchwork-Hint: Ignore
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Sasha Levin <sashal@kernel.org>, Andrea Righi <andrea.righi@canonical.com>,
 netdev@vger.kernel.org, "David S . Miller" <davem@davemloft.net>,
 xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Andrea Righi <andrea.righi@canonical.com>

[ Upstream commit c2c633106453611be07821f53dff9e93a9d1c3f0 ]

There's a potential race in xennet_remove(); this is what the driver is
doing upon unregistering a network device:

  1. state = read bus state
  2. if state is not "Closed":
  3.    request to set state to "Closing"
  4.    wait for state to be set to "Closing"
  5.    request to set state to "Closed"
  6.    wait for state to be set to "Closed"

If the state changes to "Closed" immediately after step 1 we are stuck
forever in step 4, because the state will never go back from "Closed" to
"Closing".

Make sure to check also for state == "Closed" in step 4 to prevent the
deadlock.

Also add a 5 sec timeout any time we wait for the bus state to change,
to avoid getting stuck forever in wait_event().

Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 drivers/net/xen-netfront.c | 64 +++++++++++++++++++++++++-------------
 1 file changed, 42 insertions(+), 22 deletions(-)

diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
index 482c6c8b0fb7e..88280057e0321 100644
--- a/drivers/net/xen-netfront.c
+++ b/drivers/net/xen-netfront.c
@@ -63,6 +63,8 @@ module_param_named(max_queues, xennet_max_queues, uint, 0644);
 MODULE_PARM_DESC(max_queues,
 		 "Maximum number of queues per virtual interface");
 
+#define XENNET_TIMEOUT  (5 * HZ)
+
 static const struct ethtool_ops xennet_ethtool_ops;
 
 struct netfront_cb {
@@ -1334,12 +1336,15 @@ static struct net_device *xennet_create_dev(struct xenbus_device *dev)
 
 	netif_carrier_off(netdev);
 
-	xenbus_switch_state(dev, XenbusStateInitialising);
-	wait_event(module_wq,
-		   xenbus_read_driver_state(dev->otherend) !=
-		   XenbusStateClosed &&
-		   xenbus_read_driver_state(dev->otherend) !=
-		   XenbusStateUnknown);
+	do {
+		xenbus_switch_state(dev, XenbusStateInitialising);
+		err = wait_event_timeout(module_wq,
+				 xenbus_read_driver_state(dev->otherend) !=
+				 XenbusStateClosed &&
+				 xenbus_read_driver_state(dev->otherend) !=
+				 XenbusStateUnknown, XENNET_TIMEOUT);
+	} while (!err);
+
 	return netdev;
 
  exit:
@@ -2139,28 +2144,43 @@ static const struct attribute_group xennet_dev_group = {
 };
 #endif /* CONFIG_SYSFS */
 
-static int xennet_remove(struct xenbus_device *dev)
+static void xennet_bus_close(struct xenbus_device *dev)
 {
-	struct netfront_info *info = dev_get_drvdata(&dev->dev);
-
-	dev_dbg(&dev->dev, "%s\n", dev->nodename);
+	int ret;
 
-	if (xenbus_read_driver_state(dev->otherend) != XenbusStateClosed) {
+	if (xenbus_read_driver_state(dev->otherend) == XenbusStateClosed)
+		return;
+	do {
 		xenbus_switch_state(dev, XenbusStateClosing);
-		wait_event(module_wq,
-			   xenbus_read_driver_state(dev->otherend) ==
-			   XenbusStateClosing ||
-			   xenbus_read_driver_state(dev->otherend) ==
-			   XenbusStateUnknown);
+		ret = wait_event_timeout(module_wq,
+				   xenbus_read_driver_state(dev->otherend) ==
+				   XenbusStateClosing ||
+				   xenbus_read_driver_state(dev->otherend) ==
+				   XenbusStateClosed ||
+				   xenbus_read_driver_state(dev->otherend) ==
+				   XenbusStateUnknown,
+				   XENNET_TIMEOUT);
+	} while (!ret);
+
+	if (xenbus_read_driver_state(dev->otherend) == XenbusStateClosed)
+		return;
 
+	do {
 		xenbus_switch_state(dev, XenbusStateClosed);
-		wait_event(module_wq,
-			   xenbus_read_driver_state(dev->otherend) ==
-			   XenbusStateClosed ||
-			   xenbus_read_driver_state(dev->otherend) ==
-			   XenbusStateUnknown);
-	}
+		ret = wait_event_timeout(module_wq,
+				   xenbus_read_driver_state(dev->otherend) ==
+				   XenbusStateClosed ||
+				   xenbus_read_driver_state(dev->otherend) ==
+				   XenbusStateUnknown,
+				   XENNET_TIMEOUT);
+	} while (!ret);
+}
+
+static int xennet_remove(struct xenbus_device *dev)
+{
+	struct netfront_info *info = dev_get_drvdata(&dev->dev);
 
+	xennet_bus_close(dev);
 	xennet_disconnect_backend(info);
 
 	if (info->netdev->reg_state == NETREG_REGISTERED)
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon Jul 27 23:24:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jul 2020 23:24:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k0CUM-0006U2-Sy; Mon, 27 Jul 2020 23:24:58 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=tQrV=BG=kernel.org=sashal@srs-us1.protection.inumbo.net>)
 id 1k0CUM-0006Tt-J9
 for xen-devel@lists.xenproject.org; Mon, 27 Jul 2020 23:24:58 +0000
X-Inumbo-ID: 627a5de4-d060-11ea-a817-12813bfff9fa
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 627a5de4-d060-11ea-a817-12813bfff9fa;
 Mon, 27 Jul 2020 23:24:58 +0000 (UTC)
Received: from sasha-vm.mshome.net (c-73-47-72-35.hsd1.nh.comcast.net
 [73.47.72.35])
 (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 91BF8208E4;
 Mon, 27 Jul 2020 23:24:56 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
 s=default; t=1595892297;
 bh=U+chRqPGR8rJrwHBA7q6Oj2Gm8e0BLsd3/ugWeyaAfw=;
 h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
 b=aS9gm9q4YI0vvOm1tahQfiVySX+ojIgDSCMi/Ps3Y3MfwKdOSXRIMeZQFV+DyHLkM
 j8AxaqcvwXpv6C83Nz8W2pGQTXHsFVjHRehksxxWZt2BNFUnMk3UPyECnb0dXgETzM
 xa2vSqTx5xlfqyAQKiDtydLidOpXcDLMN5C0mQ34=
From: Sasha Levin <sashal@kernel.org>
To: linux-kernel@vger.kernel.org,
	stable@vger.kernel.org
Subject: [PATCH AUTOSEL 4.19 10/10] xen-netfront: fix potential deadlock in
 xennet_remove()
Date: Mon, 27 Jul 2020 19:24:43 -0400
Message-Id: <20200727232443.718000-10-sashal@kernel.org>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20200727232443.718000-1-sashal@kernel.org>
References: <20200727232443.718000-1-sashal@kernel.org>
MIME-Version: 1.0
X-stable: review
X-Patchwork-Hint: Ignore
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Sasha Levin <sashal@kernel.org>, Andrea Righi <andrea.righi@canonical.com>,
 netdev@vger.kernel.org, "David S . Miller" <davem@davemloft.net>,
 xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Andrea Righi <andrea.righi@canonical.com>

[ Upstream commit c2c633106453611be07821f53dff9e93a9d1c3f0 ]

There's a potential race in xennet_remove(); this is what the driver is
doing upon unregistering a network device:

  1. state = read bus state
  2. if state is not "Closed":
  3.    request to set state to "Closing"
  4.    wait for state to be set to "Closing"
  5.    request to set state to "Closed"
  6.    wait for state to be set to "Closed"

If the state changes to "Closed" immediately after step 1 we are stuck
forever in step 4, because the state will never go back from "Closed" to
"Closing".

Make sure to check also for state == "Closed" in step 4 to prevent the
deadlock.

Also add a 5 sec timeout any time we wait for the bus state to change,
to avoid getting stuck forever in wait_event().

Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 drivers/net/xen-netfront.c | 64 +++++++++++++++++++++++++-------------
 1 file changed, 42 insertions(+), 22 deletions(-)

diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
index 6b4675a9494b2..c8e84276e6397 100644
--- a/drivers/net/xen-netfront.c
+++ b/drivers/net/xen-netfront.c
@@ -63,6 +63,8 @@ module_param_named(max_queues, xennet_max_queues, uint, 0644);
 MODULE_PARM_DESC(max_queues,
 		 "Maximum number of queues per virtual interface");
 
+#define XENNET_TIMEOUT  (5 * HZ)
+
 static const struct ethtool_ops xennet_ethtool_ops;
 
 struct netfront_cb {
@@ -1337,12 +1339,15 @@ static struct net_device *xennet_create_dev(struct xenbus_device *dev)
 
 	netif_carrier_off(netdev);
 
-	xenbus_switch_state(dev, XenbusStateInitialising);
-	wait_event(module_wq,
-		   xenbus_read_driver_state(dev->otherend) !=
-		   XenbusStateClosed &&
-		   xenbus_read_driver_state(dev->otherend) !=
-		   XenbusStateUnknown);
+	do {
+		xenbus_switch_state(dev, XenbusStateInitialising);
+		err = wait_event_timeout(module_wq,
+				 xenbus_read_driver_state(dev->otherend) !=
+				 XenbusStateClosed &&
+				 xenbus_read_driver_state(dev->otherend) !=
+				 XenbusStateUnknown, XENNET_TIMEOUT);
+	} while (!err);
+
 	return netdev;
 
  exit:
@@ -2142,28 +2147,43 @@ static const struct attribute_group xennet_dev_group = {
 };
 #endif /* CONFIG_SYSFS */
 
-static int xennet_remove(struct xenbus_device *dev)
+static void xennet_bus_close(struct xenbus_device *dev)
 {
-	struct netfront_info *info = dev_get_drvdata(&dev->dev);
-
-	dev_dbg(&dev->dev, "%s\n", dev->nodename);
+	int ret;
 
-	if (xenbus_read_driver_state(dev->otherend) != XenbusStateClosed) {
+	if (xenbus_read_driver_state(dev->otherend) == XenbusStateClosed)
+		return;
+	do {
 		xenbus_switch_state(dev, XenbusStateClosing);
-		wait_event(module_wq,
-			   xenbus_read_driver_state(dev->otherend) ==
-			   XenbusStateClosing ||
-			   xenbus_read_driver_state(dev->otherend) ==
-			   XenbusStateUnknown);
+		ret = wait_event_timeout(module_wq,
+				   xenbus_read_driver_state(dev->otherend) ==
+				   XenbusStateClosing ||
+				   xenbus_read_driver_state(dev->otherend) ==
+				   XenbusStateClosed ||
+				   xenbus_read_driver_state(dev->otherend) ==
+				   XenbusStateUnknown,
+				   XENNET_TIMEOUT);
+	} while (!ret);
+
+	if (xenbus_read_driver_state(dev->otherend) == XenbusStateClosed)
+		return;
 
+	do {
 		xenbus_switch_state(dev, XenbusStateClosed);
-		wait_event(module_wq,
-			   xenbus_read_driver_state(dev->otherend) ==
-			   XenbusStateClosed ||
-			   xenbus_read_driver_state(dev->otherend) ==
-			   XenbusStateUnknown);
-	}
+		ret = wait_event_timeout(module_wq,
+				   xenbus_read_driver_state(dev->otherend) ==
+				   XenbusStateClosed ||
+				   xenbus_read_driver_state(dev->otherend) ==
+				   XenbusStateUnknown,
+				   XENNET_TIMEOUT);
+	} while (!ret);
+}
+
+static int xennet_remove(struct xenbus_device *dev)
+{
+	struct netfront_info *info = dev_get_drvdata(&dev->dev);
 
+	xennet_bus_close(dev);
 	xennet_disconnect_backend(info);
 
 	if (info->netdev->reg_state == NETREG_REGISTERED)
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon Jul 27 23:25:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jul 2020 23:25:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k0CUd-0006YG-6T; Mon, 27 Jul 2020 23:25:15 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=tQrV=BG=kernel.org=sashal@srs-us1.protection.inumbo.net>)
 id 1k0CUc-0006Y3-84
 for xen-devel@lists.xenproject.org; Mon, 27 Jul 2020 23:25:14 +0000
X-Inumbo-ID: 6bdbb3ba-d060-11ea-8b09-bc764e2007e4
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6bdbb3ba-d060-11ea-8b09-bc764e2007e4;
 Mon, 27 Jul 2020 23:25:13 +0000 (UTC)
Received: from sasha-vm.mshome.net (c-73-47-72-35.hsd1.nh.comcast.net
 [73.47.72.35])
 (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 5065520786;
 Mon, 27 Jul 2020 23:25:12 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
 s=default; t=1595892313;
 bh=7pV5o2dUOChnwgy5Ia/epCMtOeOGr9iiOJ1bOiZJNjA=;
 h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
 b=KL8zdUhiMiWYpQz3kFouS5qmkKXrjB9fl4SBcSaITFEGXulL0Ph60Qgtr2uKkIoyA
 RuW5O1kPyq5O9pxVPfF2qlrfzvOUzZzotGjx5wucXkTSZ1i2t37EDQEEFCXlkDORMw
 5mC6BJvZgLPHqcmbMNnEgiXiA+8ef/b5zOeVzKFU=
From: Sasha Levin <sashal@kernel.org>
To: linux-kernel@vger.kernel.org,
	stable@vger.kernel.org
Subject: [PATCH AUTOSEL 4.14 10/10] xen-netfront: fix potential deadlock in
 xennet_remove()
Date: Mon, 27 Jul 2020 19:24:58 -0400
Message-Id: <20200727232458.718131-10-sashal@kernel.org>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20200727232458.718131-1-sashal@kernel.org>
References: <20200727232458.718131-1-sashal@kernel.org>
MIME-Version: 1.0
X-stable: review
X-Patchwork-Hint: Ignore
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Sasha Levin <sashal@kernel.org>, Andrea Righi <andrea.righi@canonical.com>,
 netdev@vger.kernel.org, "David S . Miller" <davem@davemloft.net>,
 xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Andrea Righi <andrea.righi@canonical.com>

[ Upstream commit c2c633106453611be07821f53dff9e93a9d1c3f0 ]

There's a potential race in xennet_remove(); this is what the driver is
doing upon unregistering a network device:

  1. state = read bus state
  2. if state is not "Closed":
  3.    request to set state to "Closing"
  4.    wait for state to be set to "Closing"
  5.    request to set state to "Closed"
  6.    wait for state to be set to "Closed"

If the state changes to "Closed" immediately after step 1 we are stuck
forever in step 4, because the state will never go back from "Closed" to
"Closing".

Make sure to check also for state == "Closed" in step 4 to prevent the
deadlock.

Also add a 5 sec timeout any time we wait for the bus state to change,
to avoid getting stuck forever in wait_event().

Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 drivers/net/xen-netfront.c | 64 +++++++++++++++++++++++++-------------
 1 file changed, 42 insertions(+), 22 deletions(-)

diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
index 91bf86cee2733..1131397454bd4 100644
--- a/drivers/net/xen-netfront.c
+++ b/drivers/net/xen-netfront.c
@@ -63,6 +63,8 @@ module_param_named(max_queues, xennet_max_queues, uint, 0644);
 MODULE_PARM_DESC(max_queues,
 		 "Maximum number of queues per virtual interface");
 
+#define XENNET_TIMEOUT  (5 * HZ)
+
 static const struct ethtool_ops xennet_ethtool_ops;
 
 struct netfront_cb {
@@ -1336,12 +1338,15 @@ static struct net_device *xennet_create_dev(struct xenbus_device *dev)
 
 	netif_carrier_off(netdev);
 
-	xenbus_switch_state(dev, XenbusStateInitialising);
-	wait_event(module_wq,
-		   xenbus_read_driver_state(dev->otherend) !=
-		   XenbusStateClosed &&
-		   xenbus_read_driver_state(dev->otherend) !=
-		   XenbusStateUnknown);
+	do {
+		xenbus_switch_state(dev, XenbusStateInitialising);
+		err = wait_event_timeout(module_wq,
+				 xenbus_read_driver_state(dev->otherend) !=
+				 XenbusStateClosed &&
+				 xenbus_read_driver_state(dev->otherend) !=
+				 XenbusStateUnknown, XENNET_TIMEOUT);
+	} while (!err);
+
 	return netdev;
 
  exit:
@@ -2142,28 +2147,43 @@ static const struct attribute_group xennet_dev_group = {
 };
 #endif /* CONFIG_SYSFS */
 
-static int xennet_remove(struct xenbus_device *dev)
+static void xennet_bus_close(struct xenbus_device *dev)
 {
-	struct netfront_info *info = dev_get_drvdata(&dev->dev);
-
-	dev_dbg(&dev->dev, "%s\n", dev->nodename);
+	int ret;
 
-	if (xenbus_read_driver_state(dev->otherend) != XenbusStateClosed) {
+	if (xenbus_read_driver_state(dev->otherend) == XenbusStateClosed)
+		return;
+	do {
 		xenbus_switch_state(dev, XenbusStateClosing);
-		wait_event(module_wq,
-			   xenbus_read_driver_state(dev->otherend) ==
-			   XenbusStateClosing ||
-			   xenbus_read_driver_state(dev->otherend) ==
-			   XenbusStateUnknown);
+		ret = wait_event_timeout(module_wq,
+				   xenbus_read_driver_state(dev->otherend) ==
+				   XenbusStateClosing ||
+				   xenbus_read_driver_state(dev->otherend) ==
+				   XenbusStateClosed ||
+				   xenbus_read_driver_state(dev->otherend) ==
+				   XenbusStateUnknown,
+				   XENNET_TIMEOUT);
+	} while (!ret);
+
+	if (xenbus_read_driver_state(dev->otherend) == XenbusStateClosed)
+		return;
 
+	do {
 		xenbus_switch_state(dev, XenbusStateClosed);
-		wait_event(module_wq,
-			   xenbus_read_driver_state(dev->otherend) ==
-			   XenbusStateClosed ||
-			   xenbus_read_driver_state(dev->otherend) ==
-			   XenbusStateUnknown);
-	}
+		ret = wait_event_timeout(module_wq,
+				   xenbus_read_driver_state(dev->otherend) ==
+				   XenbusStateClosed ||
+				   xenbus_read_driver_state(dev->otherend) ==
+				   XenbusStateUnknown,
+				   XENNET_TIMEOUT);
+	} while (!ret);
+}
+
+static int xennet_remove(struct xenbus_device *dev)
+{
+	struct netfront_info *info = dev_get_drvdata(&dev->dev);
 
+	xennet_bus_close(dev);
 	xennet_disconnect_backend(info);
 
 	if (info->netdev->reg_state == NETREG_REGISTERED)
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon Jul 27 23:25:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jul 2020 23:25:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k0CUp-0006bi-Fz; Mon, 27 Jul 2020 23:25:27 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=tQrV=BG=kernel.org=sashal@srs-us1.protection.inumbo.net>)
 id 1k0CUn-0006bN-Ra
 for xen-devel@lists.xenproject.org; Mon, 27 Jul 2020 23:25:25 +0000
X-Inumbo-ID: 72c82212-d060-11ea-8b09-bc764e2007e4
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 72c82212-d060-11ea-8b09-bc764e2007e4;
 Mon, 27 Jul 2020 23:25:25 +0000 (UTC)
Received: from sasha-vm.mshome.net (c-73-47-72-35.hsd1.nh.comcast.net
 [73.47.72.35])
 (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id E996D22B40;
 Mon, 27 Jul 2020 23:25:23 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
 s=default; t=1595892324;
 bh=5A5lx/WYgPnL2PcAsnUTD1sCwUsJ/qCAZDKh9yMqyfc=;
 h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
 b=TVlNnCtQw6ltBsOhYsgGUzMptpNAT2vHABaqHnwMYX1idj06mSMdgB3aMXULpdckY
 HpVt80OY5YOSHfTvmOrCxd3eCU5xA3Mimczxx4dyKT+2f+7m2PZJPEahPspWcl8PAO
 IqgSSkxS6HfpTrdn8VbCR1aLLlrtyK61GkM4+FvU=
From: Sasha Levin <sashal@kernel.org>
To: linux-kernel@vger.kernel.org,
	stable@vger.kernel.org
Subject: [PATCH AUTOSEL 4.9 7/7] xen-netfront: fix potential deadlock in
 xennet_remove()
Date: Mon, 27 Jul 2020 19:25:14 -0400
Message-Id: <20200727232514.718265-7-sashal@kernel.org>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20200727232514.718265-1-sashal@kernel.org>
References: <20200727232514.718265-1-sashal@kernel.org>
MIME-Version: 1.0
X-stable: review
X-Patchwork-Hint: Ignore
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Sasha Levin <sashal@kernel.org>, Andrea Righi <andrea.righi@canonical.com>,
 netdev@vger.kernel.org, "David S . Miller" <davem@davemloft.net>,
 xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Andrea Righi <andrea.righi@canonical.com>

[ Upstream commit c2c633106453611be07821f53dff9e93a9d1c3f0 ]

There's a potential race in xennet_remove(); this is what the driver is
doing upon unregistering a network device:

  1. state = read bus state
  2. if state is not "Closed":
  3.    request to set state to "Closing"
  4.    wait for state to be set to "Closing"
  5.    request to set state to "Closed"
  6.    wait for state to be set to "Closed"

If the state changes to "Closed" immediately after step 1 we are stuck
forever in step 4, because the state will never go back from "Closed" to
"Closing".

Make sure to also check for state == "Closed" in step 4 to prevent the
deadlock.

Also add a 5-second timeout any time we wait for the bus state to change,
to avoid getting stuck forever in wait_event().
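The hang in step 4 comes down to the wake-up predicate never becoming true
once the backend has jumped straight to "Closed". The old and fixed
predicates can be sketched as a standalone illustration (plain C, not
kernel code; the enum names merely mirror the xenbus states named above):

```c
#include <assert.h>

/* Standalone mirror of the xenbus states involved in the close handshake. */
enum bus_state { ST_UNKNOWN, ST_INITIALISING, ST_CONNECTED, ST_CLOSING, ST_CLOSED };

/* Old step-4 condition: only "Closing" or "Unknown" end the wait, so a
 * backend that moved directly to "Closed" strands the waiter forever. */
static int old_closing_done(enum bus_state s)
{
    return s == ST_CLOSING || s == ST_UNKNOWN;
}

/* Fixed condition: "Closed" is also accepted, so the same transition now
 * wakes the waiter instead of deadlocking it. */
static int new_closing_done(enum bus_state s)
{
    return s == ST_CLOSING || s == ST_CLOSED || s == ST_UNKNOWN;
}
```

With the backend already in "Closed", `old_closing_done` stays false (the
deadlock), while `new_closing_done` returns true; the added
`wait_event_timeout()` loop in the patch is a belt-and-braces bound on top
of this predicate fix.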

Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 drivers/net/xen-netfront.c | 64 +++++++++++++++++++++++++-------------
 1 file changed, 42 insertions(+), 22 deletions(-)

diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
index 6d391a268469f..ceaf6b30d683d 100644
--- a/drivers/net/xen-netfront.c
+++ b/drivers/net/xen-netfront.c
@@ -62,6 +62,8 @@ module_param_named(max_queues, xennet_max_queues, uint, 0644);
 MODULE_PARM_DESC(max_queues,
 		 "Maximum number of queues per virtual interface");
 
+#define XENNET_TIMEOUT  (5 * HZ)
+
 static const struct ethtool_ops xennet_ethtool_ops;
 
 struct netfront_cb {
@@ -1355,12 +1357,15 @@ static struct net_device *xennet_create_dev(struct xenbus_device *dev)
 
 	netif_carrier_off(netdev);
 
-	xenbus_switch_state(dev, XenbusStateInitialising);
-	wait_event(module_wq,
-		   xenbus_read_driver_state(dev->otherend) !=
-		   XenbusStateClosed &&
-		   xenbus_read_driver_state(dev->otherend) !=
-		   XenbusStateUnknown);
+	do {
+		xenbus_switch_state(dev, XenbusStateInitialising);
+		err = wait_event_timeout(module_wq,
+				 xenbus_read_driver_state(dev->otherend) !=
+				 XenbusStateClosed &&
+				 xenbus_read_driver_state(dev->otherend) !=
+				 XenbusStateUnknown, XENNET_TIMEOUT);
+	} while (!err);
+
 	return netdev;
 
  exit:
@@ -2172,28 +2177,43 @@ static const struct attribute_group xennet_dev_group = {
 };
 #endif /* CONFIG_SYSFS */
 
-static int xennet_remove(struct xenbus_device *dev)
+static void xennet_bus_close(struct xenbus_device *dev)
 {
-	struct netfront_info *info = dev_get_drvdata(&dev->dev);
-
-	dev_dbg(&dev->dev, "%s\n", dev->nodename);
+	int ret;
 
-	if (xenbus_read_driver_state(dev->otherend) != XenbusStateClosed) {
+	if (xenbus_read_driver_state(dev->otherend) == XenbusStateClosed)
+		return;
+	do {
 		xenbus_switch_state(dev, XenbusStateClosing);
-		wait_event(module_wq,
-			   xenbus_read_driver_state(dev->otherend) ==
-			   XenbusStateClosing ||
-			   xenbus_read_driver_state(dev->otherend) ==
-			   XenbusStateUnknown);
+		ret = wait_event_timeout(module_wq,
+				   xenbus_read_driver_state(dev->otherend) ==
+				   XenbusStateClosing ||
+				   xenbus_read_driver_state(dev->otherend) ==
+				   XenbusStateClosed ||
+				   xenbus_read_driver_state(dev->otherend) ==
+				   XenbusStateUnknown,
+				   XENNET_TIMEOUT);
+	} while (!ret);
+
+	if (xenbus_read_driver_state(dev->otherend) == XenbusStateClosed)
+		return;
 
+	do {
 		xenbus_switch_state(dev, XenbusStateClosed);
-		wait_event(module_wq,
-			   xenbus_read_driver_state(dev->otherend) ==
-			   XenbusStateClosed ||
-			   xenbus_read_driver_state(dev->otherend) ==
-			   XenbusStateUnknown);
-	}
+		ret = wait_event_timeout(module_wq,
+				   xenbus_read_driver_state(dev->otherend) ==
+				   XenbusStateClosed ||
+				   xenbus_read_driver_state(dev->otherend) ==
+				   XenbusStateUnknown,
+				   XENNET_TIMEOUT);
+	} while (!ret);
+}
+
+static int xennet_remove(struct xenbus_device *dev)
+{
+	struct netfront_info *info = dev_get_drvdata(&dev->dev);
 
+	xennet_bus_close(dev);
 	xennet_disconnect_backend(info);
 
 	if (info->netdev->reg_state == NETREG_REGISTERED)
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon Jul 27 23:25:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 27 Jul 2020 23:25:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k0CUv-0006e1-P7; Mon, 27 Jul 2020 23:25:33 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=tQrV=BG=kernel.org=sashal@srs-us1.protection.inumbo.net>)
 id 1k0CUu-0006dX-Nm
 for xen-devel@lists.xenproject.org; Mon, 27 Jul 2020 23:25:32 +0000
X-Inumbo-ID: 76d5e704-d060-11ea-a818-12813bfff9fa
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 76d5e704-d060-11ea-a818-12813bfff9fa;
 Mon, 27 Jul 2020 23:25:32 +0000 (UTC)
Received: from sasha-vm.mshome.net (c-73-47-72-35.hsd1.nh.comcast.net
 [73.47.72.35])
 (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id B372720A8B;
 Mon, 27 Jul 2020 23:25:30 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
 s=default; t=1595892331;
 bh=putxGiY+5IfGT6KOGyXiieruCr/zyJjCepA4Vq/teIA=;
 h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
 b=OnSEb1WZ7bOR66Ehsu0+OAUn/cBdOk2yvp3MnPEUcSxYO8rflUEn8QAEDnt7H43UA
 7HOO7KBmGdfgVTnCo1MCxxNKWsGWGtfJ8P9JfHQ2RsdpNehH8SI/B2wbwDFfx0nE1H
 cCcqNBKu6S2tuaFt+XZVbee7rEVhchk7axJwI15Y=
From: Sasha Levin <sashal@kernel.org>
To: linux-kernel@vger.kernel.org,
	stable@vger.kernel.org
Subject: [PATCH AUTOSEL 4.4 4/4] xen-netfront: fix potential deadlock in
 xennet_remove()
Date: Mon, 27 Jul 2020 19:25:25 -0400
Message-Id: <20200727232525.718372-4-sashal@kernel.org>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20200727232525.718372-1-sashal@kernel.org>
References: <20200727232525.718372-1-sashal@kernel.org>
MIME-Version: 1.0
X-stable: review
X-Patchwork-Hint: Ignore
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Sasha Levin <sashal@kernel.org>, Andrea Righi <andrea.righi@canonical.com>,
 netdev@vger.kernel.org, "David S . Miller" <davem@davemloft.net>,
 xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Andrea Righi <andrea.righi@canonical.com>

[ Upstream commit c2c633106453611be07821f53dff9e93a9d1c3f0 ]

There's a potential race in xennet_remove(); this is what the driver is
doing upon unregistering a network device:

  1. state = read bus state
  2. if state is not "Closed":
  3.    request to set state to "Closing"
  4.    wait for state to be set to "Closing"
  5.    request to set state to "Closed"
  6.    wait for state to be set to "Closed"

If the state changes to "Closed" immediately after step 1 we are stuck
forever in step 4, because the state will never go back from "Closed" to
"Closing".

Make sure to also check for state == "Closed" in step 4 to prevent the
deadlock.

Also add a 5-second timeout any time we wait for the bus state to change,
to avoid getting stuck forever in wait_event().

Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 drivers/net/xen-netfront.c | 64 +++++++++++++++++++++++++-------------
 1 file changed, 42 insertions(+), 22 deletions(-)

diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
index 02b6a6c108400..7d4c0c46a889d 100644
--- a/drivers/net/xen-netfront.c
+++ b/drivers/net/xen-netfront.c
@@ -62,6 +62,8 @@ module_param_named(max_queues, xennet_max_queues, uint, 0644);
 MODULE_PARM_DESC(max_queues,
 		 "Maximum number of queues per virtual interface");
 
+#define XENNET_TIMEOUT  (5 * HZ)
+
 static const struct ethtool_ops xennet_ethtool_ops;
 
 struct netfront_cb {
@@ -1349,12 +1351,15 @@ static struct net_device *xennet_create_dev(struct xenbus_device *dev)
 
 	netif_carrier_off(netdev);
 
-	xenbus_switch_state(dev, XenbusStateInitialising);
-	wait_event(module_wq,
-		   xenbus_read_driver_state(dev->otherend) !=
-		   XenbusStateClosed &&
-		   xenbus_read_driver_state(dev->otherend) !=
-		   XenbusStateUnknown);
+	do {
+		xenbus_switch_state(dev, XenbusStateInitialising);
+		err = wait_event_timeout(module_wq,
+				 xenbus_read_driver_state(dev->otherend) !=
+				 XenbusStateClosed &&
+				 xenbus_read_driver_state(dev->otherend) !=
+				 XenbusStateUnknown, XENNET_TIMEOUT);
+	} while (!err);
+
 	return netdev;
 
  exit:
@@ -2166,28 +2171,43 @@ static const struct attribute_group xennet_dev_group = {
 };
 #endif /* CONFIG_SYSFS */
 
-static int xennet_remove(struct xenbus_device *dev)
+static void xennet_bus_close(struct xenbus_device *dev)
 {
-	struct netfront_info *info = dev_get_drvdata(&dev->dev);
-
-	dev_dbg(&dev->dev, "%s\n", dev->nodename);
+	int ret;
 
-	if (xenbus_read_driver_state(dev->otherend) != XenbusStateClosed) {
+	if (xenbus_read_driver_state(dev->otherend) == XenbusStateClosed)
+		return;
+	do {
 		xenbus_switch_state(dev, XenbusStateClosing);
-		wait_event(module_wq,
-			   xenbus_read_driver_state(dev->otherend) ==
-			   XenbusStateClosing ||
-			   xenbus_read_driver_state(dev->otherend) ==
-			   XenbusStateUnknown);
+		ret = wait_event_timeout(module_wq,
+				   xenbus_read_driver_state(dev->otherend) ==
+				   XenbusStateClosing ||
+				   xenbus_read_driver_state(dev->otherend) ==
+				   XenbusStateClosed ||
+				   xenbus_read_driver_state(dev->otherend) ==
+				   XenbusStateUnknown,
+				   XENNET_TIMEOUT);
+	} while (!ret);
+
+	if (xenbus_read_driver_state(dev->otherend) == XenbusStateClosed)
+		return;
 
+	do {
 		xenbus_switch_state(dev, XenbusStateClosed);
-		wait_event(module_wq,
-			   xenbus_read_driver_state(dev->otherend) ==
-			   XenbusStateClosed ||
-			   xenbus_read_driver_state(dev->otherend) ==
-			   XenbusStateUnknown);
-	}
+		ret = wait_event_timeout(module_wq,
+				   xenbus_read_driver_state(dev->otherend) ==
+				   XenbusStateClosed ||
+				   xenbus_read_driver_state(dev->otherend) ==
+				   XenbusStateUnknown,
+				   XENNET_TIMEOUT);
+	} while (!ret);
+}
+
+static int xennet_remove(struct xenbus_device *dev)
+{
+	struct netfront_info *info = dev_get_drvdata(&dev->dev);
 
+	xennet_bus_close(dev);
 	xennet_disconnect_backend(info);
 
 	if (info->netdev->reg_state == NETREG_REGISTERED)
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Tue Jul 28 00:06:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jul 2020 00:06:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k0D8X-0002Os-50; Tue, 28 Jul 2020 00:06:29 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=o87v=BH=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1k0D8V-0002On-Se
 for xen-devel@lists.xenproject.org; Tue, 28 Jul 2020 00:06:27 +0000
X-Inumbo-ID: 2e1a2790-d066-11ea-a827-12813bfff9fa
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2e1a2790-d066-11ea-a827-12813bfff9fa;
 Tue, 28 Jul 2020 00:06:27 +0000 (UTC)
Received: from localhost (c-67-164-102-47.hsd1.ca.comcast.net [67.164.102.47])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256
 bits)) (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id E4A062065E;
 Tue, 28 Jul 2020 00:06:25 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
 s=default; t=1595894786;
 bh=60CfqE119VTISR+XRxABpy6bfc54aN9vJfrZcfzKuKo=;
 h=Date:From:To:cc:Subject:In-Reply-To:References:From;
 b=KJJY3b90Tmo4mo/0HAGTcpHESMRuv8Z9SISkeIc7MBzFRivg4Q9NjgP5+1wRmYKj1
 MQ+/kt+yoe8bUKSG7OYWMnSJC4l1slFlYE06XHwI8Aw3w8z3LFXfjG6lvMp7Hki25u
 N+UjFutMyOqhHktd/9yWNgqkCmqW0C/SPRwVjs3Y=
Date: Mon, 27 Jul 2020 17:06:25 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
Subject: Re: [RFC PATCH v1 1/4] arm/pci: PCI setup and PCI host bridge
 discovery within XEN on ARM.
In-Reply-To: <20200727110648.GQ7191@Air-de-Roger>
Message-ID: <alpine.DEB.2.21.2007271411000.27071@sstabellini-ThinkPad-T480s>
References: <cover.1595511416.git.rahul.singh@arm.com>
 <64ebd4ef614b36a5844c52426a4a6a4a23b1f087.1595511416.git.rahul.singh@arm.com>
 <alpine.DEB.2.21.2007231055230.17562@sstabellini-ThinkPad-T480s>
 <9f09ff42-a930-e4e3-d1c8-612ad03698ae@xen.org>
 <alpine.DEB.2.21.2007241036460.17562@sstabellini-ThinkPad-T480s>
 <40582d63-49c7-4a51-b35b-8248dfa34b66@xen.org>
 <alpine.DEB.2.21.2007241127480.17562@sstabellini-ThinkPad-T480s>
 <CAJ=z9a3dXSnEBvhkHkZzV9URAGqSfdtJ1Lc838h_ViAWG3ZO4g@mail.gmail.com>
 <alpine.DEB.2.21.2007241353450.17562@sstabellini-ThinkPad-T480s>
 <CAJ=z9a1RWXq3EN5DC=_279yzdsq3M0nw6+CZtKD00yBzKomcaw@mail.gmail.com>
 <20200727110648.GQ7191@Air-de-Roger>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: multipart/mixed; BOUNDARY="8323329-292516064-1595884364=:27071"
Content-ID: <alpine.DEB.2.21.2007271414160.27071@sstabellini-ThinkPad-T480s>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: nd <nd@arm.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>, Jan Beulich <jbeulich@suse.com>,
 xen-devel <xen-devel@lists.xenproject.org>, Rahul Singh <rahul.singh@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Julien Grall <julien.grall.oss@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--8323329-292516064-1595884364=:27071
Content-Type: text/plain; CHARSET=UTF-8
Content-Transfer-Encoding: 8BIT
Content-ID: <alpine.DEB.2.21.2007271414161.27071@sstabellini-ThinkPad-T480s>

On Mon, 27 Jul 2020, Roger Pau Monné wrote:
> On Sat, Jul 25, 2020 at 10:59:50AM +0100, Julien Grall wrote:
> > On Sat, 25 Jul 2020 at 00:46, Stefano Stabellini <sstabellini@kernel.org> wrote:
> > >
> > > On Fri, 24 Jul 2020, Julien Grall wrote:
> > > > On Fri, 24 Jul 2020 at 19:32, Stefano Stabellini <sstabellini@kernel.org> wrote:
> > > > > > If they are not equal, then I fail to see why it would be useful to have this
> > > > > > value in Xen.
> > > > >
> > > > > I think that's because the domain is actually more convenient to use
> > > > > because a segment can span multiple PCI host bridges. So my
> > > > > understanding is that a segment alone is not sufficient to identify a
> > > > > host bridge. From a software implementation point of view it would be
> > > > > better to use domains.
> > > >
> > > > AFAICT, this would be a matter of one check vs two checks in Xen :).
> > > > But... looking at Linux, they will also use domain == segment for ACPI
> > > > (see [1]). So, I think, they still have to use (domain, bus) to do the lookup.
> 
> You have to use the (segment, bus) tuple when doing a lookup because
> MMCFG regions on ACPI are defined for a segment and a bus range, you
> can have a MMCFG region that covers segment 0 bus [0, 20) and another
> MMCFG region that covers segment 0 bus [20, 255], and those will use
> different addresses in the MMIO space.

Thanks for the clarification!
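The (segment, bus) lookup Roger describes can be sketched as follows (a
hypothetical illustration with made-up region values and names, not code
from Xen or Linux): one segment split across two ECAM windows by bus
range, so the segment alone cannot select the right MMIO base.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* One MMCFG/ECAM window: a segment plus an inclusive bus range. */
struct mmcfg_region {
    uint16_t segment;
    uint8_t  bus_start;
    uint8_t  bus_end;
    uint64_t base;      /* MMIO base of this ECAM window */
};

/* Segment 0 covered by two windows with different bases (illustrative). */
static const struct mmcfg_region regions[] = {
    { 0, 0x00, 0x13, 0xe0000000ULL },   /* segment 0, buses 0x00-0x13 */
    { 0, 0x14, 0xff, 0xf0000000ULL },   /* segment 0, buses 0x14-0xff */
};

/* Lookup must take the (segment, bus) tuple, not the segment alone. */
static const struct mmcfg_region *mmcfg_lookup(uint16_t seg, uint8_t bus)
{
    for (size_t i = 0; i < sizeof(regions) / sizeof(regions[0]); i++)
        if (regions[i].segment == seg &&
            bus >= regions[i].bus_start && bus <= regions[i].bus_end)
            return &regions[i];
    return NULL;
}
```

Looking up (0, 0x10) and (0, 0x20) here lands in different windows, which
is exactly why a segment-only key would be ambiguous.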


> > > > > > In which case, we need to use PHYSDEVOP_pci_mmcfg_reserved so
> > > > > > Dom0 and Xen can synchronize on the segment number.
> > > > >
> > > > > I was hoping we could write down the assumption somewhere that for the
> > > > > cases we care about domain == segment, and error out if it is not the
> > > > > case.
> > > >
> > > > Given that we have only the domain in hand, how would you enforce that?
> > > >
> > > > From this discussion, it also looks like there is a mismatch between the
> > > > implementation and the understanding on QEMU devel. So I am a bit
> > > > concerned that this is not stable and may change in future Linux version.
> > > >
> > > > IOW, we are now tying Xen to Linux. So could we implement
> > > > PHYSDEVOP_pci_mmcfg_reserved *or* introduce a new property that
> > > > really represent the segment?
> > >
> > > I don't think we are tying Xen to Linux. Rob has already said that
> > > linux,pci-domain is basically a generic device tree property.
> > 
> > My concern is not so much the name of the property, but the definition of it.
> > 
> > AFAICT, from this thread there can be two interpretation:
> >       - domain == segment
> >       - domain == (segment, bus)
> 
> I think domain is just an alias for segment, the difference seems to
> be that when using DT all bridges get a different segment (or domain)
> number, and thus you will always end up starting numbering at bus 0
> for each bridge?
>
> Ideally you would need a way to specify the segment and start/end bus
> numbers of each bridge; otherwise you cannot match what ACPI does. It
> might be fine as long as the OS and Xen agree on the segments and
> bus numbers that belong to each bridge (and thus each ECAM region).

That is what I thought, and it is why I was asking to clarify the naming
and/or to write a document explaining the assumptions, if any.

Then after Julien's email I followed up in the Linux codebase and
clearly there is a different assumption baked in the Linux kernel for
architectures that have CONFIG_PCI_DOMAINS enabled (including ARM64).

The assumption is that segment == domain == unique host bridge. It
looks like it is coming from IEEE Std 1275-1994 but I am not certain.
In fact, it seems that ACPI MCFG and IEEE Std 1275-1994 don't exactly
match. So I am starting to think that domain == segment for IEEE Std
1275-1994 compliant device tree based systems.
--8323329-292516064-1595884364=:27071--


From xen-devel-bounces@lists.xenproject.org Tue Jul 28 03:28:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jul 2020 03:28:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k0GHW-0003M7-0y; Tue, 28 Jul 2020 03:27:58 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=E6Nk=BH=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1k0GHV-0003M2-1A
 for xen-devel@lists.xenproject.org; Tue, 28 Jul 2020 03:27:57 +0000
X-Inumbo-ID: 53bd49d4-d082-11ea-a836-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 53bd49d4-d082-11ea-a836-12813bfff9fa;
 Tue, 28 Jul 2020 03:27:55 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=euHksG7HnSj/NJeWW/I7xS4CbW6WXLe03gqpR3jxEw8=; b=xKMHNUVG/6gUsofViMkpTqQlU
 aCafecX4h7uTHe0WH4yA7V1CIYHD5ssVSNOddLvIVcCn9xDNhdpKFlD736LKyyPTxYBSgUOJjCh1S
 qLznbtPlwFpQ/bjLLDOmEXC7R8kdiZIwQ8UNOqDDPZ6QJ1Cco0PRdRZUR7/1crfBmeakQ=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1k0GHT-0003kN-B3; Tue, 28 Jul 2020 03:27:55 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1k0GHS-00029U-V8; Tue, 28 Jul 2020 03:27:55 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1k0GHS-0006QH-UU; Tue, 28 Jul 2020 03:27:54 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-152232-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 152232: regressions - FAIL
X-Osstest-Failures: linux-linus:test-arm64-arm64-libvirt-xsm:guest-start/debian.repeat:fail:regression
 linux-linus:test-amd64-i386-pair:xen-boot/dst_host:fail:heisenbug
 linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:guest-start/redhat.repeat:fail:heisenbug
 linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: linux=92ed301919932f777713b9172e525674157e983d
X-Osstest-Versions-That: linux=1b5044021070efa3259f3e9548dc35d1eb6aa844
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 28 Jul 2020 03:27:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 152232 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/152232/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-libvirt-xsm 16 guest-start/debian.repeat fail REGR. vs. 151214

Tests which are failing intermittently (not blocking):
 test-amd64-i386-pair        11 xen-boot/dst_host fail in 152223 pass in 152232
 test-amd64-i386-qemut-rhel6hvm-intel 12 guest-start/redhat.repeat fail pass in 152223

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 151214
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 151214
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 151214
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 151214
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 151214
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 151214
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 151214
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 151214
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                92ed301919932f777713b9172e525674157e983d
baseline version:
 linux                1b5044021070efa3259f3e9548dc35d1eb6aa844

Last test of basis   151214  2020-06-18 02:27:46 Z   40 days
Failing since        151236  2020-06-19 19:10:35 Z   38 days   60 attempts
Testing same since   152223  2020-07-27 00:11:45 Z    1 days    2 attempts

------------------------------------------------------------
932 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 53544 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Jul 28 07:02:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jul 2020 07:02:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k0Jd7-00055r-S9; Tue, 28 Jul 2020 07:02:29 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=E6Nk=BH=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1k0Jd7-00055X-98
 for xen-devel@lists.xenproject.org; Tue, 28 Jul 2020 07:02:29 +0000
X-Inumbo-ID: 484448c8-d0a0-11ea-a860-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 484448c8-d0a0-11ea-a860-12813bfff9fa;
 Tue, 28 Jul 2020 07:02:21 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=YQ89ciisTGmPDz6vbgvVIvU5oKi+z3hbIGUwsY+74SQ=; b=3k52JBoAnjLA3Bf3tGlaVn8pe
 0RD1hUdfpy+PTOt5LDYluOMsBXxhrwO5iHCdYpO2y0mKI3+jW5+nZv9df24OcGVRP8pumDzXM5CYR
 +zk03B2gNLRQXogok5PkCf6w4+Llv5M7ywLqoNP7gfsykMgd1Julsy/5nhiOCYvHAK3Mo=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1k0Jcy-0000IC-Nw; Tue, 28 Jul 2020 07:02:20 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1k0Jcy-0000vp-82; Tue, 28 Jul 2020 07:02:20 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1k0Jcy-00072x-7O; Tue, 28 Jul 2020 07:02:20 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-152244-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 152244: all pass - PUSHED
X-Osstest-Versions-This: ovmf=a44f558a84c67cd88b8215d4c076123cf58438f4
X-Osstest-Versions-That: ovmf=6074f57e5b19c4cfd45a139b793191f34fa31781
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 28 Jul 2020 07:02:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 152244 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/152244/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 a44f558a84c67cd88b8215d4c076123cf58438f4
baseline version:
 ovmf                 6074f57e5b19c4cfd45a139b793191f34fa31781

Last test of basis   152225  2020-07-27 03:39:54 Z    1 days
Testing same since   152244  2020-07-28 00:40:52 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jessica Clarke <jrtc27@jrtc27.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   6074f57e5b..a44f558a84  a44f558a84c67cd88b8215d4c076123cf58438f4 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Tue Jul 28 07:11:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jul 2020 07:11:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k0JlX-0005yi-NC; Tue, 28 Jul 2020 07:11:11 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7Yh8=BH=gmail.com=jrdr.linux@srs-us1.protection.inumbo.net>)
 id 1k0JlW-0005yd-MH
 for xen-devel@lists.xenproject.org; Tue, 28 Jul 2020 07:11:10 +0000
X-Inumbo-ID: 829c1c98-d0a1-11ea-8b19-bc764e2007e4
Received: from mail-lj1-x241.google.com (unknown [2a00:1450:4864:20::241])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 829c1c98-d0a1-11ea-8b19-bc764e2007e4;
 Tue, 28 Jul 2020 07:11:09 +0000 (UTC)
Received: by mail-lj1-x241.google.com with SMTP id v4so10302220ljd.0
 for <xen-devel@lists.xenproject.org>; Tue, 28 Jul 2020 00:11:09 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc; bh=DRjmm9UeBZr7w+ollM4mmi9tSJezjZAc4TEhn0vLBdU=;
 b=KI0fDHv19vGSEykHx1Xxf2uYI1XOVLyGfQIU2/9514vqCxwHHTuFLpmtTzjcMIpBqX
 Q4N6atO6Hv6xE+kmibPoISKeJ0N8BEwd7vDhnnrta5dig7mD+QKpIMhi2LR5In0odMxD
 v/nw+xjY3Qv9PFJ5+ftNOCgEYwYFO7UBhG2UaoeCiyFYdShW2l65MLefZvXX/TuFKjTd
 fpKWungKDmOiDVtcfDpM1ne/HrIMnuanUoAnejRD22L1Nh254yuh+FQ8NFKakM4PXwQo
 WXZL7qsvYv7QZfeTgazav5diHpXeDKzma6PRjlvDlgFL5fVjpDm/UP17EucCmH9R4gtQ
 i8cA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc;
 bh=DRjmm9UeBZr7w+ollM4mmi9tSJezjZAc4TEhn0vLBdU=;
 b=oq9BKRuL2wrRWdNVlcKsjuRpam4+Qgw9I8eF9C8RkEzyvSVeZ4s4gw51R0lnpERAdv
 P3o7RGdhil2a0lxFbW3rZA89EpJivbK66XuXySTOoQvm7Pp7THxo921iAUJnSyGRRtl6
 YDKVAFa0MAHZv3qDnOU16DR+K9A8i4pShNsL0nw2IjZfqfXUlXpkhgkWskUwxRE77MOH
 8S2CD6Db/YvQi54E30sopseBLav3XxS71EYyM9hmwsXsiZZqS44hvcKAWrmBhFdjB29e
 pdSnqBTpwnCs1if44T6tnYt1BFRZl1iJsDkIYjpKMHKJvH8PlRuAYqR4dYbtsByGELXg
 r2tw==
X-Gm-Message-State: AOAM532d7VNH4xfbFX2dwEvDOJCKFd6RNkLO9AJLuT4rykjIJqOkwbKP
 SNhaC3PjuMkbUyRGZraBADnZblSetWJCl2G8jY4=
X-Google-Smtp-Source: ABdhPJyjhWJvOexjJ9Q7VOnPBUZjH1m1y72aPQWml/W5gmrJ56dELIFOcCms21hPKK+BstFoDGJXHhFoL5JEnD4D+Xc=
X-Received: by 2002:a2e:920e:: with SMTP id k14mr12349908ljg.430.1595920268329; 
 Tue, 28 Jul 2020 00:11:08 -0700 (PDT)
MIME-Version: 1.0
References: <1594525195-28345-1-git-send-email-jrdr.linux@gmail.com>
In-Reply-To: <1594525195-28345-1-git-send-email-jrdr.linux@gmail.com>
From: Souptick Joarder <jrdr.linux@gmail.com>
Date: Tue, 28 Jul 2020 12:40:57 +0530
Message-ID: <CAFqt6za8U04U2TQhe6DUCv7B4h9L-iqPtyE1DuALXUWOD=1M3A@mail.gmail.com>
Subject: Re: [PATCH v3 0/3] Few bug fixes and Convert to pin_user_pages*()
To: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Juergen Gross <jgross@suse.com>, sstabellini@kernel.org
Content-Type: text/plain; charset="UTF-8"
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org,
 Paul Durrant <xadimgnik@gmail.com>, John Hubbard <jhubbard@nvidia.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi Boris,

On Sun, Jul 12, 2020 at 9:01 AM Souptick Joarder <jrdr.linux@gmail.com> wrote:
>
> This series contains a few cleanups, minor bug fixes, and a
> conversion of get_user_pages() to pin_user_pages().
>
> I have compile-tested this, but am unable to run-time test it,
> so any testing help is much appreciated.
>
> v2:
>         Addressed a few review comments and a compile issue.
>         Patch [1/2] from v1 was split into two in v2.
> v3:
>         Addressed review comments. Added review tags.
>
> Cc: John Hubbard <jhubbard@nvidia.com>
> Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
> Cc: Paul Durrant <xadimgnik@gmail.com>
>
> Souptick Joarder (3):
>   xen/privcmd: Corrected error handling path
>   xen/privcmd: Mark pages as dirty
>   xen/privcmd: Convert get_user_pages*() to pin_user_pages*()

Does this series look good to go for 5.9?

>
>  drivers/xen/privcmd.c | 32 ++++++++++++++------------------
>  1 file changed, 14 insertions(+), 18 deletions(-)
>
> --
> 1.9.1
>


From xen-devel-bounces@lists.xenproject.org Tue Jul 28 07:38:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jul 2020 07:38:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k0KBx-0008N3-VT; Tue, 28 Jul 2020 07:38:29 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2JRE=BH=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1k0KBw-0008My-G9
 for xen-devel@lists.xenproject.org; Tue, 28 Jul 2020 07:38:28 +0000
X-Inumbo-ID: 5335b316-d0a5-11ea-8b19-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5335b316-d0a5-11ea-8b19-bc764e2007e4;
 Tue, 28 Jul 2020 07:38:27 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id A4630AD32;
 Tue, 28 Jul 2020 07:38:37 +0000 (UTC)
Subject: Re: [PATCH v3 0/3] Few bug fixes and Convert to pin_user_pages*()
To: Souptick Joarder <jrdr.linux@gmail.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>, sstabellini@kernel.org
References: <1594525195-28345-1-git-send-email-jrdr.linux@gmail.com>
 <CAFqt6za8U04U2TQhe6DUCv7B4h9L-iqPtyE1DuALXUWOD=1M3A@mail.gmail.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <d6489b26-3891-cfc2-e614-1d5677d3999f@suse.com>
Date: Tue, 28 Jul 2020 09:38:26 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <CAFqt6za8U04U2TQhe6DUCv7B4h9L-iqPtyE1DuALXUWOD=1M3A@mail.gmail.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org,
 Paul Durrant <xadimgnik@gmail.com>, John Hubbard <jhubbard@nvidia.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 28.07.20 09:10, Souptick Joarder wrote:
> Hi Boris,
> 
> On Sun, Jul 12, 2020 at 9:01 AM Souptick Joarder <jrdr.linux@gmail.com> wrote:
>>
>> This series contains a few cleanups, minor bug fixes, and a
>> conversion of get_user_pages() to pin_user_pages().
>>
>> I have compile-tested this, but am unable to run-time test it,
>> so any testing help is much appreciated.
>>
>> v2:
>>          Addressed a few review comments and a compile issue.
>>          Patch [1/2] from v1 was split into two in v2.
>> v3:
>>          Addressed review comments. Added review tags.
>>
>> Cc: John Hubbard <jhubbard@nvidia.com>
>> Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
>> Cc: Paul Durrant <xadimgnik@gmail.com>
>>
>> Souptick Joarder (3):
>>    xen/privcmd: Corrected error handling path
>>    xen/privcmd: Mark pages as dirty
>>    xen/privcmd: Convert get_user_pages*() to pin_user_pages*()
> 
> Does this series look good to go for 5.9?

Yes. I already have it in my queue.


Juergen


From xen-devel-bounces@lists.xenproject.org Tue Jul 28 07:41:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jul 2020 07:41:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k0KF5-0000ij-Hl; Tue, 28 Jul 2020 07:41:43 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=QMc3=BH=3mdeb.com=norbert.kaminski@srs-us1.protection.inumbo.net>)
 id 1k0KF4-0000ie-75
 for xen-devel@lists.xenproject.org; Tue, 28 Jul 2020 07:41:42 +0000
X-Inumbo-ID: c5f41410-d0a5-11ea-a869-12813bfff9fa
Received: from 7.mo179.mail-out.ovh.net (unknown [46.105.61.94])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c5f41410-d0a5-11ea-a869-12813bfff9fa;
 Tue, 28 Jul 2020 07:41:40 +0000 (UTC)
Received: from player770.ha.ovh.net (unknown [10.108.57.53])
 by mo179.mail-out.ovh.net (Postfix) with ESMTP id E617E16FF90
 for <xen-devel@lists.xenproject.org>; Tue, 28 Jul 2020 09:41:38 +0200 (CEST)
Received: from 3mdeb.com (85-222-117-222.dynamic.chello.pl [85.222.117.222])
 (Authenticated sender: norbert.kaminski@3mdeb.com)
 by player770.ha.ovh.net (Postfix) with ESMTPSA id 8B15E14D53567;
 Tue, 28 Jul 2020 07:41:33 +0000 (UTC)
Authentication-Results: garm.ovh; auth=pass
 (GARM-95G001336571aa-5fee-4d5d-b215-046d926df4aa,44753483405F3E1C42F8196D1C7200706683A5BB)
 smtp.auth=norbert.kaminski@3mdeb.com
From: Norbert Kaminski <norbert.kaminski@3mdeb.com>
To: xen-devel@lists.xenproject.org
Subject: fwupd support under Xen - firmware updates with the UEFI capsule
Message-ID: <497f1524-b57e-0ea1-5899-62f677bfae91@3mdeb.com>
Date: Tue, 28 Jul 2020 09:41:32 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
Content-Type: multipart/alternative;
 boundary="------------178149245AA07C14C2986652"
Content-Language: en-US
X-Ovh-Tracer-Id: 15470427670647445866
X-VR-SPAMSTATE: OK
X-VR-SPAMSCORE: -100
X-VR-SPAMCAUSE: gggruggvucftvghtrhhoucdtuddrgeduiedriedugdduvdduucetufdoteggodetrfdotffvucfrrhhofhhilhgvmecuqfggjfdpvefjgfevmfevgfenuceurghilhhouhhtmecuhedttdenucesvcftvggtihhpihgvnhhtshculddquddttddmnecujfgurhephffvuffkffgfgggtsegrtderredtfeejnecuhfhrohhmpefpohhrsggvrhhtucfmrghmihhnshhkihcuoehnohhrsggvrhhtrdhkrghmihhnshhkihesfehmuggvsgdrtghomheqnecuggftrfgrthhtvghrnheptedtheejgeeileektedvteefhfduffdtgefggfejgeeufffhudehtdevieelfeefnecuffhomhgrihhnpehgihhtlhgrsgdrtghomhdpghhithhhuhgsrdgtohhmpdefmhguvggsrdgtohhmnecukfhppedtrddtrddtrddtpdekhedrvddvvddruddujedrvddvvdenucevlhhushhtvghrufhiiigvpedunecurfgrrhgrmhepmhhouggvpehsmhhtphdqohhuthdphhgvlhhopehplhgrhigvrhejjedtrdhhrgdrohhvhhdrnhgvthdpihhnvghtpedtrddtrddtrddtpdhmrghilhhfrhhomhepnhhorhgsvghrthdrkhgrmhhinhhskhhiseefmhguvggsrdgtohhmpdhrtghpthhtohepgigvnhdquggvvhgvlheslhhishhtshdrgigvnhhprhhojhgvtghtrdhorhhg
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: andrew.cooper3@citrix.com, Maciej Pijanowski <maciej.pijanowski@3mdeb.com>,
 piotr.krol@3mdeb.com, marmarek@invisiblethingslab.com
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

This is a multi-part message in MIME format.
--------------178149245AA07C14C2986652
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit

Hello all,

I'm trying to add support for firmware updates with the UEFI capsule in
Qubes OS. I've run into trouble reading the ESRT (EFI System Resource
Table) in dom0, which is based on the EFI memory map. EFI_MEMMAP is not
enabled despite the relevant drivers (CONFIG_EFI, CONFIG_EFI_ESRT) and
the add_efi_memmap kernel cmdline parameter:

```
[    3.451249] efi: EFI_MEMMAP is not enabled.
```

fwupd relies on the ESRT entries, which provide the system firmware GUID.
The GUID is checked against LVFS metadata, which contains information
about available updates. When EFI_MEMMAP is not enabled, there are no
ESRT entries in sysfs, and fwupd has no information about the system
firmware GUID. It is therefore not possible to check whether updates are
available for the BIOS.

This is how the ESRT entries look on Ubuntu:

```
ubuntu@ubuntu:/sys/firmware/efi/esrt$ ll
total 0
drwxr-xr-x 3 root root    0 Jul 27 13:14 ./
drwxr-xr-x 6 root root    0 Jul 27 13:13 ../
drwxr-xr-x 3 root root    0 Jul 27 13:17 entries/
-r-------- 1 root root 4096 Jul 27 13:17 fw_resource_count
-r-------- 1 root root 4096 Jul 27 13:17 fw_resource_count_max
-r-------- 1 root root 4096 Jul 27 13:17 fw_resource_version
ubuntu@ubuntu:/sys/firmware/efi/esrt/entries/entry0$ ll
total 0
drwxr-xr-x 2 root root    0 Jul 27 13:17 ./
drwxr-xr-x 3 root root    0 Jul 27 13:17 ../
-r-------- 1 root root 4096 Jul 27 13:17 capsule_flags
-r-------- 1 root root 4096 Jul 27 13:17 fw_class
-r-------- 1 root root 4096 Jul 27 13:17 fw_type
-r-------- 1 root root 4096 Jul 27 13:17 fw_version
-r-------- 1 root root 4096 Jul 27 13:17 last_attempt_status
-r-------- 1 root root 4096 Jul 27 13:17 last_attempt_version
-r-------- 1 root root 4096 Jul 27 13:17 lowest_supported_fw_version
ubuntu@ubuntu:/sys/firmware/efi/esrt/entries/entry0$ sudo cat fw_class
34578c72-11dc-4378-bc7f-b643866f598c
```
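For what it's worth, the lookup fwupd performs over this layout can be
reproduced by hand. Here is a minimal Python sketch (the function name and
the base-path parameter are my own, not fwupd code) that enumerates the
sysfs entries shown above; it returns an empty result when the esrt
directory is absent, which is exactly the situation in dom0:

```python
import os

def read_esrt_entries(base="/sys/firmware/efi/esrt"):
    """Return {entry_name: {attribute: value}} for every ESRT entry.

    An empty dict means the kernel did not expose the ESRT at all --
    the symptom described above when EFI_MEMMAP is not enabled.
    """
    entries_dir = os.path.join(base, "entries")
    if not os.path.isdir(entries_dir):
        return {}
    result = {}
    for name in sorted(os.listdir(entries_dir)):
        entry_path = os.path.join(entries_dir, name)
        attrs = {}
        for attr in os.listdir(entry_path):
            # Each sysfs attribute is a small text file, e.g. fw_class
            # holds the firmware GUID that fwupd matches against LVFS.
            with open(os.path.join(entry_path, attr)) as f:
                attrs[attr] = f.read().strip()
        result[name] = attrs
    return result
```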

This is the source code of the ESRT driver, which provides those directories:

https://gitlab.com/cki-project/kernel-ark/-/blob/os-build/drivers/firmware/efi/esrt.c 


The EFI_MEMMAP dependency is at line 248:

https://gitlab.com/cki-project/kernel-ark/-/blob/os-build/drivers/firmware/efi/esrt.c#L248

I need to pass ESRT to the dom0. What would be the best way to do that?

PS: Marek Marczykowski-Górecki (Qubes Project lead) found some more
information on where the problem lies:

EFI_MEMMAP is not enabled with EFI_PARAVIRT (which I believe is the case
on Xen dom0):

https://github.com/torvalds/linux/blob/92ed301919932f777713b9172e525674157e983d/drivers/firmware/efi/memmap.c#L110

My reading of the source code suggests the Xen side to extract this info
exists, but Linux doesn't use it. Specifically, the EFI config table
address is obtained here:

https://github.com/torvalds/linux/blob/master/arch/x86/xen/efi.c#L56-L63

But then nothing uses efi_systab_xen.tables. The
efi_config_parse_tables() function should be called on those addresses:

https://github.com/torvalds/linux/blob/master/drivers/firmware/efi/efi.c#L542

But I don't think it is called in the PV dom0 boot path (not fully sure
about that yet).
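To illustrate the step that appears to be missing (a userspace sketch
under my own assumptions, not the kernel code): conceptually,
efi_config_parse_tables() walks an array of (GUID, physical address)
pairs and picks out known tables such as the ESRT. The layout below
mirrors the 64-bit config table entry, a 16-byte GUID followed by an
8-byte address, using the EFI_SYSTEM_RESOURCE_TABLE_GUID from the UEFI
spec:

```python
import struct
import uuid

# EFI_SYSTEM_RESOURCE_TABLE_GUID, as defined in the UEFI specification.
ESRT_GUID = uuid.UUID("b122a263-3661-4f68-9929-78f8b0d62180")

def find_esrt(config_tables):
    """Scan packed (GUID, address) pairs; return the ESRT address or None."""
    entry = struct.Struct("<16sQ")  # 16-byte GUID + 64-bit physical address
    for off in range(0, len(config_tables), entry.size):
        raw_guid, addr = entry.unpack_from(config_tables, off)
        # EFI GUIDs are stored with the first three fields little-endian,
        # which matches Python's bytes_le representation.
        if uuid.UUID(bytes_le=raw_guid) == ESRT_GUID:
            return addr
    return None
```

In the real dom0 boot path, the equivalent parsing would have to happen
on the table array that arch/x86/xen/efi.c already extracts.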


Best Regards,
Norbert Kamiński
Junior Embedded Systems Engineer
GPG key ID: 9E9F90AFE10F466A
3mdeb.com



--------------178149245AA07C14C2986652--


From xen-devel-bounces@lists.xenproject.org Tue Jul 28 07:43:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jul 2020 07:43:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k0KH6-0000qp-Un; Tue, 28 Jul 2020 07:43:48 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=E6Nk=BH=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1k0KH5-0000qj-QN
 for xen-devel@lists.xenproject.org; Tue, 28 Jul 2020 07:43:47 +0000
X-Inumbo-ID: 10f23262-d0a6-11ea-a869-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 10f23262-d0a6-11ea-a869-12813bfff9fa;
 Tue, 28 Jul 2020 07:43:45 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=NBZTdmWETaCOFvSUHTYVlekY/B/J45eAYazUHxCMR1E=; b=PXYv9GzxnoQqZOYMr3LZPsgRS
 UKtUb7WeGeSjGxhcfRydBfKwiRltdaY5YpvCgFnShmtn8LW43ruqG8E5eKmR6I8ZeSu18FtkSY+zR
 zCnNsWFoOTaF6t2G4jgvKEvYEAAxTR81K8nAggPMfRoQme9lK8Vm93F4YnBk5LIUhJz4U=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1k0KH2-0001A9-WF; Tue, 28 Jul 2020 07:43:45 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1k0KH2-0003O1-JV; Tue, 28 Jul 2020 07:43:44 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1k0KH2-0003XO-Iu; Tue, 28 Jul 2020 07:43:44 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-152247-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 152247: regressions - FAIL
X-Osstest-Failures: libvirt:build-i386-libvirt:libvirt-build:fail:regression
 libvirt:build-arm64-libvirt:libvirt-build:fail:regression
 libvirt:build-amd64-libvirt:libvirt-build:fail:regression
 libvirt:build-armhf-libvirt:libvirt-build:fail:regression
 libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
X-Osstest-Versions-This: libvirt=a34fab5399c0ef84051af8ce2d8881243fa01f1b
X-Osstest-Versions-That: libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 28 Jul 2020 07:43:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 152247 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/152247/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              a34fab5399c0ef84051af8ce2d8881243fa01f1b
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z   18 days
Failing since        151818  2020-07-11 04:18:52 Z   17 days   18 attempts
Testing same since   152247  2020-07-28 04:18:55 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Bastien Orivel <bastien.orivel@diateam.net>
  Bihong Yu <yubihong@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  Ján Tomko <jtomko@redhat.com>
  Laine Stump <laine@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Michal Privoznik <mprivozn@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Prathamesh Chavan <pc44800@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Ryan Schmidt <git@ryandesign.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Weblate <noreply@weblate.org>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 2956 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Jul 28 08:06:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jul 2020 08:06:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k0Kd8-0003Ab-AQ; Tue, 28 Jul 2020 08:06:34 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Ne9c=BH=arm.com=rahul.singh@srs-us1.protection.inumbo.net>)
 id 1k0Kd6-0003AV-7r
 for xen-devel@lists.xenproject.org; Tue, 28 Jul 2020 08:06:32 +0000
X-Inumbo-ID: 3c81dd1c-d0a9-11ea-8b19-bc764e2007e4
Received: from EUR05-DB8-obe.outbound.protection.outlook.com (unknown
 [2a01:111:f400:7e1a::61f])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3c81dd1c-d0a9-11ea-8b19-bc764e2007e4;
 Tue, 28 Jul 2020 08:06:27 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com; 
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=W0hkomKlqkrbPvW9HKZrovHagpLOLw5TZgQ/efsSQ1c=;
 b=KlhzIc28QhbfSOhpB/CsujAdMgkW85jVal8FpSfhtiDrclorJCNJA/u3H8wBGAGsILCZ0WVLzF5dckPlewJZfP74qCKr5Zh5/YDjDzuxoRwopgYCERadSxmQtjGsfWo3E1HaTo/gxoy+BVGzq+eOElfzf/elT17d++66wPWME2U=
Received: from AM0PR10CA0065.EURPRD10.PROD.OUTLOOK.COM (2603:10a6:208:15::18)
 by VI1PR0802MB2160.eurprd08.prod.outlook.com (2603:10a6:800:9b::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3216.23; Tue, 28 Jul
 2020 08:06:25 +0000
Received: from AM5EUR03FT037.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:208:15:cafe::48) by AM0PR10CA0065.outlook.office365.com
 (2603:10a6:208:15::18) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3216.20 via Frontend
 Transport; Tue, 28 Jul 2020 08:06:25 +0000
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org;
 dmarc=bestguesspass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT037.mail.protection.outlook.com (10.152.17.241) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3216.10 via Frontend Transport; Tue, 28 Jul 2020 08:06:25 +0000
Received: ("Tessian outbound 73b502bf693a:v62");
 Tue, 28 Jul 2020 08:06:24 +0000
X-CheckRecipientChecked: true
X-CR-MTA-CID: 048ddb5fb79246e1
X-CR-MTA-TID: 64aa7808
Received: from a695be3e63d1.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 65F6E082-9A15-4827-91B4-31973F6DEED5.1; 
 Tue, 28 Jul 2020 08:06:19 +0000
Received: from EUR03-AM5-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id a695be3e63d1.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Tue, 28 Jul 2020 08:06:19 +0000
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=f6mMnFo4Iaq/F0Zy2bwBSLUERWQHAEjgHJJjXrKAqEqzl0sWfXD7pINrvMZjggemBOHvpBfCJbVWGhdafJIwADa3x/zcnkg3m6De14DtS8tt8F0NRwdiV/TnI1D+JW/SSdAx8keEL5d35u2t28uUEeD7YZ5B7bEXo6GFyx34+sjOTaNCRbBpuEnqI/NvFqt/Y0IVlMgGGYpD1uJ+g7clBjp1ZxnO7l/ebmDlpRIgLBvQ1Wa3YbGgaCnFA6zbEdS+mcn3cmH/BXVeya1bQugJomv/0GWpKakdmYKqF5x+u1YJrTS19chTDtf+Sta7upIClllEwxLLmqmotykmSAsxnQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; 
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=W0hkomKlqkrbPvW9HKZrovHagpLOLw5TZgQ/efsSQ1c=;
 b=KWrXLokfB9YboDyi5Q0AUK0gZ8qO4dqnazX3PJsVsw14dNzLixGVaA0LVzcSg3RHJI8AXyO4XYYuuMiiIzbZT+6Des4xA0qAVinon8OwVrJ1+0WxvFMCfJyTtSVxy4jlG88+luFSs9OYpux0H7F79reycx8j0+U688Pg+Su+c0pG+rkm76ujk8Owv09NuNke4YXtXIFPgeg7ex/Jeb/2+SKDqAYLWIQOVCNZ5xWcnpjy9ZjacFdx5LklnDKsbRj8D+8rSO3C5jJWM5q1uX5M9PrAioFxE9BTXGfAhxIuU/ucRHAx/NJ3h2BYGlQ+jUK1uyMy2B7JsaUAegCnldKkMw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com; 
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=W0hkomKlqkrbPvW9HKZrovHagpLOLw5TZgQ/efsSQ1c=;
 b=KlhzIc28QhbfSOhpB/CsujAdMgkW85jVal8FpSfhtiDrclorJCNJA/u3H8wBGAGsILCZ0WVLzF5dckPlewJZfP74qCKr5Zh5/YDjDzuxoRwopgYCERadSxmQtjGsfWo3E1HaTo/gxoy+BVGzq+eOElfzf/elT17d++66wPWME2U=
Received: from AM6PR08MB3560.eurprd08.prod.outlook.com (2603:10a6:20b:4c::19)
 by AM6PR08MB4183.eurprd08.prod.outlook.com (2603:10a6:20b:a1::19)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3216.24; Tue, 28 Jul
 2020 08:06:17 +0000
Received: from AM6PR08MB3560.eurprd08.prod.outlook.com
 ([fe80::e891:3b33:766:cad5]) by AM6PR08MB3560.eurprd08.prod.outlook.com
 ([fe80::e891:3b33:766:cad5%6]) with mapi id 15.20.3216.033; Tue, 28 Jul 2020
 08:06:17 +0000
From: Rahul Singh <Rahul.Singh@arm.com>
To: =?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>
Subject: Re: [RFC PATCH v1 1/4] arm/pci: PCI setup and PCI host bridge
 discovery within XEN on ARM.
Thread-Topic: [RFC PATCH v1 1/4] arm/pci: PCI setup and PCI host bridge
 discovery within XEN on ARM.
Thread-Index: AQHWYPbO0TX+B10vbkuLRSjkBERSxqkWz9MAgAXaLgA=
Date: Tue, 28 Jul 2020 08:06:17 +0000
Message-ID: <567A8D4D-9D7F-4D53-B740-6095F5512026@arm.com>
References: <cover.1595511416.git.rahul.singh@arm.com>
 <64ebd4ef614b36a5844c52426a4a6a4a23b1f087.1595511416.git.rahul.singh@arm.com>
 <20200724144404.GJ7191@Air-de-Roger>
In-Reply-To: <20200724144404.GJ7191@Air-de-Roger>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
Authentication-Results-Original: citrix.com; dkim=none (message not signed)
 header.d=none;citrix.com; dmarc=none action=none header.from=arm.com;
x-originating-ip: [86.26.38.125]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 047bcffe-9a85-459d-09ce-08d832cd1f8e
x-ms-traffictypediagnostic: AM6PR08MB4183:|VI1PR0802MB2160:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS: <VI1PR0802MB21600C85C93F1DCD2E353B45FC730@VI1PR0802MB2160.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:5516;OLM:5516;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original: py4IQ/GMXkQBpODYdF6Z+cr+40k0hdIu/jw2SXXMdwBJiKXmRArwxv4DUGYQjdm7+EclOG9tGggB507OHGg6QljGkzeiPxzUnxoQYhReILsq1Qdhk/fmjDtp+7VOvlXDA+h7EH8KKINWItmLpnGMM1pvSeGZ1FJ2UNEjsP5zJ7NDRh0OQ0WYu4XRkS4Ro9YJKge3n0TyDWT0vzAZ42zoFa3xjrWSVtFPlNJTtY9dkvunF36I007wNnejQiSm8+WfwA1ug/v6Ujd8CBRGNnAC9fr2wOgxv7Op1c5sfNT2ZrR7WlgFqYxbu8iowNU2F0uoOybei8MjUDed+lh88vRWDG3gngzClhwUkoEL7gZ1Kx/2IuNlkxr3Mn60bhm2rPFE3nHqywhnKXQtXkcEBEqo44iWOG/vdmHbeO6ughXvDFLfr0nfwEqUaO5jPGN0LIhl
X-Forefront-Antispam-Report-Untrusted: CIP:255.255.255.255; CTRY:; LANG:en;
 SCL:1; SRV:; IPV:NLI; SFV:NSPM; H:AM6PR08MB3560.eurprd08.prod.outlook.com;
 PTR:; CAT:NONE; SFTY:;
 SFS:(4636009)(346002)(376002)(136003)(396003)(39860400002)(366004)(64756008)(66556008)(66446008)(71200400001)(66476007)(76116006)(966005)(33656002)(83380400001)(36756003)(30864003)(5660300002)(478600001)(6512007)(2616005)(86362001)(186003)(6486002)(4326008)(8936002)(26005)(91956017)(6506007)(2906002)(55236004)(6916009)(316002)(8676002)(66946007)(54906003)(53546011)(2004002)(579004)(559001);
 DIR:OUT; SFP:1101; 
x-ms-exchange-antispam-messagedata: mjb22p4dVkKb7WJXDOjevmf/8aPnAOc3Ft0bNcUU2MqRhmdKzGddpq3YsrrNaLzUcSbeiiPO0Qv+cISoIFURst+eAnEHn7afcn0WLZf3OWx3kOrnVMVk7Qa6MrOql3+Iao4uux62CsZWAv1NNQB0o4zMRcYnIJg0fHVrUKt1HJFgt+cvBQmqb3QBOkK6Q4bNM+sKXhSIDvaj+poRKqALaKTDPOVl78V2c24nh0GATfuQaji5SYZXC4b4SXye3M+YBkXKXkt1dw9iCdgLt3w6R/V9nYzfOrJ7bUfLS4qsW+NSnsW5O+pbSBpyAOhSUd4Hnnp0QCylTbT/Bx8Y/7MbKCRKDDpN79wmDEyJO4MT5iUOyai3jJstu2B35odXsyEETJ1GmmirtoDEwL2SaKcdRLoScv9VanthJCmnWmt1IoU2iAE6N60nFKqOiUohy3rmJebtyJiigu+PATkL9aoqo6qv7Ljmk5PSmLuO/UuENCA=
Content-Type: text/plain; charset="utf-8"
Content-ID: <2E487AEEB770C548B7EEDE7FFB3599BD@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM6PR08MB4183
Original-Authentication-Results: citrix.com; dkim=none (message not signed)
 header.d=none;citrix.com; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped: AM5EUR03FT037.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs: fa8349af-a2f6-4e47-797c-08d832cd1afb
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: fP8ISl+krPYROl9ANHEth6BUPVgeUUoX8bDr3Xd3H7hR7q1KsL2zc2lc1n7ZtNt4SSy0x7A3wZl+WtsEu0Bx6T6sH/A1ZDwyLvOSXB7Z7FnGfnAtkhtEI/QFY2KQDfd//a/C+1UY7Zx0q0erKBy0LdzxwHPr6EMRHFuV9R1HdOWsVFnT20ZUYDNi5yM3hs5z1ft+fbL9WP61/STla3tMW7ecRoiyWIcF6UJoQNPVpyVpKIjEXx1JZGsy98++ZvXDvCAkABfwrYVBPew3SqxifE6bWVWZwzrMjPrQbSNc477aepaeBdZcOD8UY/WFCviPQn9EvtPCzrydsd5Lfza46jxKeTD6k6Y0akc0W1H/wnx/m5GD261m/Rz3gCcpi3N9Q00PBoIxbBUo0I7JdzbJ1XPuxGZ/meuAisGp+9ECIqlweDtqMW05sc2CR7QL7eJk4xtwZ4axrwiiLbugYayauJG4my1n+pvcSM7xTHQq08bxNwjxg+euZKVxfC9wwXSP
X-Forefront-Antispam-Report: CIP:63.35.35.123; CTRY:IE; LANG:en; SCL:1; SRV:;
 IPV:CAL; SFV:NSPM; H:64aa7808-outbound-1.mta.getcheckrecipient.com;
 PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com; CAT:NONE; SFTY:;
 SFS:(4636009)(136003)(346002)(376002)(39860400002)(396003)(46966005)(6506007)(53546011)(70206006)(70586007)(2616005)(2906002)(8676002)(83380400001)(6862004)(336012)(6486002)(4326008)(33656002)(8936002)(26005)(36906005)(6512007)(54906003)(82310400002)(316002)(36756003)(356005)(186003)(966005)(86362001)(81166007)(478600001)(82740400003)(47076004)(5660300002)(107886003)(30864003)(2004002);
 DIR:OUT; SFP:1101; 
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 28 Jul 2020 08:06:25.1232 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 047bcffe-9a85-459d-09ce-08d832cd1f8e
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d; Ip=[63.35.35.123];
 Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource: AM5EUR03FT037.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR0802MB2160
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 nd <nd@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> On 24 Jul 2020, at 3:44 pm, Roger Pau Monné <roger.pau@citrix.com> wrote:
>
> On Thu, Jul 23, 2020 at 04:40:21PM +0100, Rahul Singh wrote:
>> XEN during boot will read the PCI device tree node “reg” property
>> and will map the PCI config space to the XEN memory.
>>
>> XEN will read the “linux, pci-domain” property from the device tree
>> node and configure the host bridge segment number accordingly.
>>
>> As of now "pci-host-ecam-generic" compatible board is supported.
>>
>> Change-Id: If32f7748b7dc89dd37114dc502943222a2a36c49
>> Signed-off-by: Rahul Singh <rahul.singh@arm.com>
>> ---
>> xen/arch/arm/Kconfig                |   7 +
>> xen/arch/arm/Makefile               |   1 +
>> xen/arch/arm/pci/Makefile           |   4 +
>> xen/arch/arm/pci/pci-access.c       | 101 ++++++++++++++
>> xen/arch/arm/pci/pci-host-common.c  | 198 ++++++++++++++++++++++++++++
>> xen/arch/arm/pci/pci-host-generic.c | 131 ++++++++++++++++++
>> xen/arch/arm/pci/pci.c              | 112 ++++++++++++++++
>> xen/arch/arm/setup.c                |   2 +
>> xen/include/asm-arm/device.h        |   7 +-
>> xen/include/asm-arm/pci.h           |  97 +++++++++++++-
>> 10 files changed, 654 insertions(+), 6 deletions(-)
>> create mode 100644 xen/arch/arm/pci/Makefile
>> create mode 100644 xen/arch/arm/pci/pci-access.c
>> create mode 100644 xen/arch/arm/pci/pci-host-common.c
>> create mode 100644 xen/arch/arm/pci/pci-host-generic.c
>> create mode 100644 xen/arch/arm/pci/pci.c
>>
>> diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig
>> index 2777388265..ee13339ae9 100644
>> --- a/xen/arch/arm/Kconfig
>> +++ b/xen/arch/arm/Kconfig
>> @@ -31,6 +31,13 @@ menu "Architecture Features"
>>
>> source "arch/Kconfig"
>>
>> +config ARM_PCI
>> +	bool "PCI Passthrough Support"
>> +	depends on ARM_64
>> +	---help---
>> +
>> +	  PCI passthrough support for Xen on ARM64.
>> +
>> config ACPI
>> 	bool "ACPI (Advanced Configuration and Power Interface) Support" if EXPERT
>> 	depends on ARM_64
>> diff --git a/xen/arch/arm/Makefile b/xen/arch/arm/Makefile
>> index 7e82b2178c..345cb83eed 100644
>> --- a/xen/arch/arm/Makefile
>> +++ b/xen/arch/arm/Makefile
>> @@ -6,6 +6,7 @@ ifneq ($(CONFIG_NO_PLAT),y)
>> obj-y += platforms/
>> endif
>> obj-$(CONFIG_TEE) += tee/
>> +obj-$(CONFIG_ARM_PCI) += pci/
>>
>> obj-$(CONFIG_HAS_ALTERNATIVE) += alternative.o
>> obj-y += bootfdt.init.o
>> diff --git a/xen/arch/arm/pci/Makefile b/xen/arch/arm/pci/Makefile
>> new file mode 100644
>> index 0000000000..358508b787
>> --- /dev/null
>> +++ b/xen/arch/arm/pci/Makefile
>> @@ -0,0 +1,4 @@
>> +obj-y += pci.o
>> +obj-y += pci-host-generic.o
>> +obj-y += pci-host-common.o
>> +obj-y += pci-access.o
>
> The Kconfig option mentions the support being explicitly for ARM64,
> would it be better to place the code in arch/arm/arm64 then?

Ok. As Julien also mentioned, we tested on ARM64; we still have to test on ARM32 platforms.
>
>> diff --git a/xen/arch/arm/pci/pci-access.c b/xen/arch/arm/pci/pci-access.c
>> new file mode 100644
>> index 0000000000..c53ef58336
>> --- /dev/null
>> +++ b/xen/arch/arm/pci/pci-access.c
>> @@ -0,0 +1,101 @@
>> +/*
>> + * Copyright (C) 2020 Arm Ltd.
>> + *
>> + * This program is free software; you can redistribute it and/or modify
>> + * it under the terms of the GNU General Public License version 2 as
>> + * published by the Free Software Foundation.
>> + *
>> + * This program is distributed in the hope that it will be useful,
>> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
>> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
>> + * GNU General Public License for more details.
>> + *
>> + * You should have received a copy of the GNU General Public License
>> + * along with this program.  If not, see <http://www.gnu.org/licenses/>.
>> + */
>> +
>> +#include <xen/init.h>
>> +#include <xen/pci.h>
>> +#include <asm/pci.h>
>> +#include <xen/rwlock.h>
>> +
>> +static uint32_t pci_config_read(pci_sbdf_t sbdf, unsigned int reg,
>> +                            unsigned int len)
>
> Please align with the opening parenthesis (here and everywhere in the
> patch series).

Ack.
>
>> +{
>> +    int rc;
>> +    uint32_t val = GENMASK(0, len * 8);
>
> You can just set val = ~0. The return type of the pci_conf_readXX
> helpers will already truncate the value.
>

Ack.

>> +
>> +    struct pci_host_bridge *bridge = pci_find_host_bridge(sbdf.seg, sbdf.bus);
>> +
>> +    if ( unlikely(!bridge) )
>> +    {
>> +        printk(XENLOG_ERR "Unable to find bridge for "PRI_pci"\n",
>> +                sbdf.seg, sbdf.bus, sbdf.dev, sbdf.fn);
>
> I had a patch to add a custom modifier to our printf format in
> order to handle pci_sbdf_t natively:
>
> https://patchew.org/Xen/20190822065132.48200-1-roger.pau@citrix.com/
>
> It missed maintainers' Acks and was never committed. Since you are
> doing a bunch of work here, and likely adding a lot of SBDF related
> prints, feel free to import the modifier (%pp) and use it in your code
> (do not attempt to switch existing users, or it's likely to get
> stuck again).

Ok. Will integrate that patch once it is submitted.
>
>> +        return val;
>> +    }
>> +
>> +    if ( unlikely(!bridge->ops->read) )
>> +        return val;
>> +
>> +    rc = bridge->ops->read(bridge, (uint32_t) sbdf.sbdf, reg, len, &val);
>
> There's no need for the uint32_t cast, the sbdf field is already of
> such type

Ack.

>> +    if ( rc )
>> +        printk(XENLOG_ERR "Failed to read reg %#x len %u for "PRI_pci"\n",
>> +                reg, len, sbdf.seg, sbdf.bus, sbdf.dev, sbdf.fn);
>> +
>> +    return val;
>> +}
>> +
>> +static void pci_config_write(pci_sbdf_t sbdf, unsigned int reg,
>> +        unsigned int len, uint32_t val)
>> +{
>> +    int rc;
>> +    struct pci_host_bridge *bridge = pci_find_host_bridge(sbdf.seg, sbdf.bus);
>> +
>> +    if ( unlikely(!bridge) )
>> +    {
>> +        printk(XENLOG_ERR "Unable to find bridge for "PRI_pci"\n",
>> +                sbdf.seg, sbdf.bus, sbdf.dev, sbdf.fn);
>> +        return;
>> +    }
>> +
>> +    if ( unlikely(!bridge->ops->write) )
>> +        return;
>> +
>> +    rc = bridge->ops->write(bridge, (uint32_t) sbdf.sbdf, reg, len, val);
>> +    if ( rc )
>> +        printk(XENLOG_ERR "Failed to write reg %#x len %u for "PRI_pci"\n",
>> +                reg, len, sbdf.seg, sbdf.bus, sbdf.dev, sbdf.fn);
>> +}
>> +
>> +/*
>> + * Wrappers for all PCI configuration access functions.
>> + */
>> +
>> +#define PCI_OP_WRITE(size, type) \
>> +    void pci_conf_write##size (pci_sbdf_t sbdf,unsigned int reg, type val) \
>> +{                                                   \
>> +    pci_config_write(sbdf, reg, size / 8, val);     \
>> +}
>> +
>> +#define PCI_OP_READ(size, type) \
>> +    type pci_conf_read##size (pci_sbdf_t sbdf, unsigned int reg)  \
>> +{                                                      \
>> +    return pci_config_read(sbdf, reg, size / 8);     \
>> +}
>> +
>> +PCI_OP_READ(8, u8)
>> +PCI_OP_READ(16, u16)
>> +PCI_OP_READ(32, u32)
>> +PCI_OP_WRITE(8, u8)
>> +PCI_OP_WRITE(16, u16)
>> +PCI_OP_WRITE(32, u32)
>
> Please use uintXX_t.
>
> Also, it's nice to add some kind of signals for cscope and friends so
> they can find the autogenerated functions, ie:
>
> #define pci_conf_read8
> #undef pci_conf_read8
> #define pci_conf_read16
> #undef pci_conf_read16
> ...
>
> It's tedious but helps future users find where the code is generated.

Ack.

>
>> +
>> +/*
>> + * Local variables:
>> + * mode: C
>> + * c-file-style: "BSD"
>> + * c-basic-offset: 4
>> + * tab-width: 4
>> + * indent-tabs-mode: nil
>> + * End:
>> + */
>> diff --git a/xen/arch/arm/pci/pci-host-common.c b/xen/arch/arm/pci/pci-host-common.c
>> new file mode 100644
>> index 0000000000..c5f98be698
>> --- /dev/null
>> +++ b/xen/arch/arm/pci/pci-host-common.c
>> @@ -0,0 +1,198 @@
>> +/*
>> + * Copyright (C) 2020 Arm Ltd.
>> + *
>> + * Based on Linux drivers/pci/ecam.c
>> + * Copyright 2016 Broadcom.
>> + *
>> + * Based on Linux drivers/pci/controller/pci-host-common.c
>> + * Based on Linux drivers/pci/controller/pci-host-generic.c
>> + * Copyright (C) 2014 ARM Limited Will Deacon <will.deacon@arm.com>
>> + *
>> + *
>> + * This program is free software; you can redistribute it and/or modify
>> + * it under the terms of the GNU General Public License version 2 as
>> + * published by the Free Software Foundation.
>> + *
>> + * This program is distributed in the hope that it will be useful,
>> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
>> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
>> + * GNU General Public License for more details.
>> + *
>> + * You should have received a copy of the GNU General Public License
>> + * along with this program.  If not, see <http://www.gnu.org/licenses/>.
>> + */
>> +
>> +#include <xen/init.h>
>> +#include <xen/pci.h>
>> +#include <asm/pci.h>
>> +#include <xen/rwlock.h>
>> +#include <xen/vmap.h>
>> +
>> +/*
>> + * List for all the pci host bridges.
>> + */
>> +
>> +static LIST_HEAD(pci_host_bridges);
>> +
>> +static bool __init dt_pci_parse_bus_range(struct dt_device_node *dev,
>> +        struct pci_config_window *cfg)
>> +{
>> +    const __be32 *cells;
>
> It's my impression that while based on Linux this is not a verbatim
> copy of a Linux file, and tries to adhere with the Xen coding style.
> If so please use uint32_t here.
>
>> +    uint32_t len;
>> +
>> +    cells = dt_get_property(dev, "bus-range", &len);
>> +    /* bus-range should at least be 2 cells */
>> +    if ( !cells || (len < (sizeof(*cells) * 2)) )
>> +        return false;
>> +
>> +    cfg->busn_start = dt_next_cell(1, &cells);
>> +    cfg->busn_end = dt_next_cell(1, &cells);
>> +
>> +    return true;
>> +}
>> +
>> +static inline void __iomem *pci_remap_cfgspace(paddr_t start, size_t len)
>> +{
>> +    return ioremap_nocache(start, len);
>> +}
>> +
>> +static void pci_ecam_free(struct pci_config_window *cfg)
>> +{
>> +    if ( cfg->win )
>> +        iounmap(cfg->win);
>> +
>> +    xfree(cfg);
>> +}
>> +
>> +static struct pci_config_window *gen_pci_init(struct dt_device_node *dev,
>> +        struct pci_ecam_ops *ops)
>> +{
>> +    int err;
>> +    struct pci_config_window *cfg;
>> +    paddr_t addr, size;
>> +
>> +    cfg = xzalloc(struct pci_config_window);
>> +    if ( !cfg )
>> +        return NULL;
>> +
>> +    err = dt_pci_parse_bus_range(dev, cfg);
>> +    if ( !err ) {
>
> Braces

Ack

>
>> +        cfg->busn_start = 0;
>> +        cfg->busn_end = 0xff;
>> +        printk(XENLOG_ERR "No bus range found for pci controller\n");
>> +    } else {
>> +        if ( cfg->busn_end > cfg->busn_start + 0xff )
>> +            cfg->busn_end = cfg->busn_start + 0xff;
>> +    }
>> +
>> +    /* Parse our PCI ecam register address*/
>> +    err = dt_device_get_address(dev, 0, &addr, &size);
>> +    if ( err )
>> +        goto err_exit;
>> +
>> +    cfg->phys_addr = addr;
>> +    cfg->size = size;
>> +    cfg->ops = ops;
>> +
>> +    /*
>> +     * On 64-bit systems, we do a single ioremap for the whole config space
>> +     * since we have enough virtual address range available.  On 32-bit, we
>> +     * ioremap the config space for each bus individually.
>> +     *
>> +     * As of now only 64-bit is supported 32-bit is not supported.
>> +     */
>> +    cfg->win = pci_remap_cfgspace(cfg->phys_addr, cfg->size);
>> +    if ( !cfg->win )
>> +        goto err_exit_remap;
>> +
>> +    printk("ECAM at [mem %lx-%lx] for [bus %x-%x] \n",cfg->phys_addr,
>> +            cfg->phys_addr + cfg->size - 1,cfg->busn_start,cfg->busn_end);
>> +
>> +    if ( ops->init ) {
>> +        err = ops->init(cfg);
>> +        if (err)
>> +            goto err_exit;
>> +    }
>> +
>> +    return cfg;
>> +
>> +err_exit_remap:
>> +    printk(XENLOG_ERR "ECAM ioremap failed\n");
>> +err_exit:
>> +    pci_ecam_free(cfg);
>> +    return NULL;
>> +}
>> +
>> +static struct pci_host_bridge * pci_alloc_host_bridge(void)
>                                  ^ extra space

Ack.
>> +{
>> +    struct pci_host_bridge *bridge = xzalloc(struct pci_host_bridge);
>> +
>> +    if ( !bridge )
>> +        return NULL;
>> +
>> +    INIT_LIST_HEAD(&bridge->node);
>> +    return bridge;
>> +}
>> +
>> +int pci_host_common_probe(struct dt_device_node *dev,
>> +        struct pci_ecam_ops *ops)
>> +{
>> +    struct pci_host_bridge *bridge;
>> +    struct pci_config_window *cfg;
>> +    u32 segment;
>> +
>> +    bridge = pci_alloc_host_bridge();
>> +    if ( !bridge )
>> +        return -ENOMEM;
>> +
>> +    /* Parse and map our Configuration Space windows */
>> +    cfg = gen_pci_init(dev, ops);
>> +    if ( !cfg )
>> +        return -ENOMEM;
>
> You are leaking the allocation of bridge here ...
>
>> +
>> +    bridge->dt_node = dev;
>> +    bridge->sysdata = cfg;
>> +    bridge->ops = &ops->pci_ops;
>> +
>> +    if( !dt_property_read_u32(dev, "linux,pci-domain", &segment) )
>> +    {
>> +        printk(XENLOG_ERR "\"linux,pci-domain\" property in not available in DT\n");
>> +        return -ENODEV;
>
> ... and here.
>
>> +    }
>> +
>> +    bridge->segment = (u16)segment;
>> +
>> +    list_add_tail(&bridge->node, &pci_host_bridges);
>> +
>> +    return 0;
>> +}
>> +
>> +/*
>> + * This function will lookup an hostbridge based on the segment and bus
>> + * number.
>> + */
>> +struct pci_host_bridge *pci_find_host_bridge(uint16_t segment, uint8_t bus)
>> +{
>> +    struct pci_host_bridge *bridge;
>> +    bool found = false;
>> +
>> +    list_for_each_entry( bridge, &pci_host_bridges, node )
>> +    {
>> +        if ( bridge->segment != segment )
>> +            continue;
>> +
>> +        found = true;
>> +        break;
>> +    }
>> +
>> +    return (found) ? bridge : NULL;
>
> This can be much shorter:
>
> struct pci_host_bridge *pci_find_host_bridge(uint16_t segment, uint8_t bus)
> {
>    struct pci_host_bridge *bridge;
>
>    list_for_each_entry( bridge, &pci_host_bridges, node )
>        if ( bridge->segment == segment )
>            return bridge;
>
>    return NULL;
> }
>
> Albeit I'm confused by the fact that you pass a bus number that's
> completely unused.

Ok. Will fix.
>
>> +}
>> +/*
>> + * Local variables:
>> + * mode: C
>> + * c-file-style: "BSD"
>> + * c-basic-offset: 4
>> + * tab-width: 4
>> + * indent-tabs-mode: nil
>> + * End:
>> + */
>> diff --git a/xen/arch/arm/pci/pci-host-generic.c b/xen/arch/arm/pci/pci-host-generic.c
>> new file mode 100644
>> index 0000000000..cd67b3dec6
>> --- /dev/null
>> +++ b/xen/arch/arm/pci/pci-host-generic.c
>> @@ -0,0 +1,131 @@
>> +/*
>> + * Copyright (C) 2020 Arm Ltd.
>> + *
>> + * Based on Linux drivers/pci/controller/pci-host-common.c
>> + * Based on Linux drivers/pci/controller/pci-host-generic.c
>> + * Copyright (C) 2014 ARM Limited Will Deacon <will.deacon@arm.com>
>> + *
>> + * This program is free software; you can redistribute it and/or modify
>> + * it under the terms of the GNU General Public License version 2 as
>> + * published by the Free Software Foundation.
>> + *
>> + * This program is distributed in the hope that it will be useful,
>> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
>> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
>> + * GNU General Public License for more details.
>> + *
>> + * You should have received a copy of the GNU General Public License
>> + * along with this program.  If not, see <http://www.gnu.org/licenses/>.
>> + */
>> +
>> +#include <asm/device.h>
>> +#include <asm/io.h>
>> +#include <xen/pci.h>
>> +#include <asm/pci.h>
>> +
>> +/*
>> + * Function to get the config space base.
>> + */
>> +static void __iomem *pci_config_base(struct pci_host_bridge *bridge,
>> +        uint32_t sbdf, int where)
>
> You would be better passing a pci_sbdf_t directly here. Also 'where'
> should be renamed to offset, or reg, and be made unsigned int. AFAICT
> you will never pass a negative value here.
>
> For sanity you should also assert that the offset falls between the
> PCI config space used by the device, in order to easily catch
> wrong offsets being used.

Ack.
>
>> +{
>> +    struct pci_config_window *cfg = bridge->sysdata;
>
> const

Ack.
>
>> +    unsigned int devfn_shift = cfg->ops->bus_shift - 8;
>> +
>> +    pci_sbdf_t sbdf_t = (pci_sbdf_t) sbdf ;
>> +
>> +    unsigned int busn = sbdf_t.bus;
>> +    void __iomem *base;
>
> IMO adding newlines between variable definitions is not helpful, but
> that's my taste.

Ack.
>
>> +
>> +    if ( busn < cfg->busn_start || busn > cfg->busn_end )
>> +        return NULL;
>> +
>> +    base = cfg->win + (busn << cfg->ops->bus_shift);
>> +
>> +    return base + (PCI_DEVFN(sbdf_t.dev, sbdf_t.fn) << devfn_shift) + where;
>> +}
>> +
>> +int pci_ecam_config_write(struct pci_host_bridge *bridge, uint32_t sbdf,
>> +        int where, int size, u32 val)
>> +{
>> +    void __iomem *addr;
>> +
>> +    addr = pci_config_base(bridge, sbdf, where);
>
> You can initialize at definition.

Ack.
>
>> +    if ( !addr )
>> +        return -ENODEV;
>> +
>> +    if ( size == 1 )
>> +        writeb(val, addr);
>> +    else if ( size == 2 )
>> +        writew(val, addr);
>> +    else
>> +        writel(val, addr);
>
> Please use a switch, and check against specific values. The default
> case should be a BUG();. See pci_conf_read from x86 for an example.

Ack.
>
>> +
>> +    return 0;
>> +}
>> +
>> +int pci_ecam_config_read(struct pci_host_bridge *bridge, uint32_t sbdf,
>> +        int where, int size, u32 *val)
>> +{
>> +    void __iomem *addr;
>> +
>> +    addr = pci_config_base(bridge, sbdf, where);
>> +    if ( !addr ) {
>> +        *val = ~0;
>> +        return -ENODEV;
>> +    }
>> +
>> +    if ( size == 1 )
>> +        *val = readb(addr);
>> +    else if ( size == 2 )
>> +        *val = readw(addr);
>> +    else
>> +        *val = readl(addr);
>> +
>> +    return 0;
>> +}
>> +
>> +/* ECAM ops */
>> +struct pci_ecam_ops pci_generic_ecam_ops = {
>> +    .bus_shift  = 20,
>> +    .pci_ops    = {
>> +        .read       = pci_ecam_config_read,
>> +        .write      = pci_ecam_config_write,
>> +    }
>> +};
>> +
>> +static const struct dt_device_match gen_pci_dt_match[] = {
>> +    { .compatible = "pci-host-ecam-generic",
>> +      .data =       &pci_generic_ecam_ops },
>> +
>> +    { },
>> +};
>> +
>> +static int gen_pci_dt_init(struct dt_device_node *dev, const void *data)
>> +{
>> +    const struct dt_device_match *of_id;
>> +    struct pci_ecam_ops *ops;
>> +
>> +    of_id = dt_match_node(gen_pci_dt_match, dev->dev.of_node);
>> +    ops = (struct pci_ecam_ops *) of_id->data;
>> +
>> +    printk(XENLOG_INFO "Found PCI host bridge %s compatible:%s \n",
>> +            dt_node_full_name(dev), of_id->compatible);
>> +
>> +    return pci_host_common_probe(dev, ops);
>> +}
>> +
>> +DT_DEVICE_START(pci_gen, "PCI HOST GENERIC", DEVICE_PCI)
>> +.dt_match = gen_pci_dt_match,
>> +.init = gen_pci_dt_init,
>> +DT_DEVICE_END
>> +
>> +/*
>> + * Local variables:
>> + * mode: C
>> + * c-file-style: "BSD"
>> + * c-basic-offset: 4
>> + * tab-width: 4
>> + * indent-tabs-mode: nil
>> + * End:
>> + */
>> diff --git a/xen/arch/arm/pci/pci.c b/xen/arch/arm/pci/pci.c
>> new file mode 100644
>> index 0000000000..f8cbb99591
>> --- /dev/null
>> +++ b/xen/arch/arm/pci/pci.c
>> @@ -0,0 +1,112 @@
>> +/*
>> + * Copyright (C) 2020 Arm Ltd.
>> + *
>> + * This program is free software; you can redistribute it and/or modify
>> + * it under the terms of the GNU General Public License version 2 as
>> + * published by the Free Software Foundation.
>> + *
>> + * This program is distributed in the hope that it will be useful,
>> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
>> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
>> + * GNU General Public License for more details.
>> + *
>> + * You should have received a copy of the GNU General Public License
>> + * along with this program.  If not, see <http://www.gnu.org/licenses/>.
>> + */
>> +
>> +#include <xen/acpi.h>
>> +#include <xen/device_tree.h>
>> +#include <xen/errno.h>
>> +#include <xen/init.h>
>> +#include <xen/pci.h>
>> +#include <xen/param.h>
>> +
>> +static int __init dt_pci_init(void)
>> +{
>> +    struct dt_device_node *np;
>> +    int rc;
>> +
>> +    dt_for_each_device_node(dt_host, np)
>> +    {
>> +        rc = device_init(np, DEVICE_PCI, NULL);
>> +        if( !rc )
>> +            continue;
>> +        /*
>> +         * Ignore the following error codes:
>> +         *   - EBADF: Indicate the current is not an pci
>> +         *   - ENODEV: The pci device is not present or cannot be used by
>> +         *     Xen.
>> +         */
>> +        else if ( rc != -EBADF && rc != -ENODEV )
>> +        {
>> +            printk(XENLOG_ERR "No driver found in XEN or driver init error.\n");
>> +            return rc;
>> +        }
>> +    }
>> +
>> +    return 0;
>> +}
>> +
>> +#ifdef CONFIG_ACPI
>> +static void __init acpi_pci_init(void)
>> +{
>> +    printk(XENLOG_ERR "ACPI pci init not supported \n");
>> +    return;
>> +}
>> +#else
>> +static inline void __init acpi_pci_init(void) { }
>> +#endif
>> +
>> +static bool __initdata param_pci_enable;
>> +static int __init parse_pci_param(const char *arg)
>> +{
>> +    if ( !arg )
>> +    {
>> +        param_pci_enable = false;
>> +        return 0;
>> +    }
>> +
>> +    switch ( parse_bool(arg, NULL) )
>> +    {
>> +        case 0:
>> +            param_pci_enable = false;
>> +            return 0;
>> +        case 1:
>> +            param_pci_enable = true;
>> +            return 0;
>> +    }
>> +
>> +    return -EINVAL;
>> +}
>> +custom_param("pci", parse_pci_param);
>
> You need to introduce the documentation for the parameter at
> docs/misc/xen-command-line.pandoc
>
> Albeit I'm not sure I like it, why do you need to enable PCI
> explicitly?
>
> Shouldn't it be discovered automatically and enabled by default?

Ack.
>
>> +void __init pci_init(void)
>> +{
>> +    /*
>> +     * Enable PCI when has been enabled explicitly (pci=on)
>> +     */
>> +    if ( !param_pci_enable)
>> +        goto disable;
>
> Just return here, there's no point in having a label to perform a
> return.

Ack.
>
>> +
>> +    if ( acpi_disabled )
>> +        dt_pci_init();
>> +    else
>> +        acpi_pci_init();
>
> Isn't there an enum or something that tells you whether the system
> description is coming from ACPI or from DT?
>
> This if .. else seems fragile.
>
> Also for ACPI you will get called by acpi_boot_init, and likely need
> to implement a acpi_mmcfg_init or pci_mmcfg_arch_{init,enable}. I'm
> not sure whether the code in acpi_mmcfg_init could be made shared
> between both x86 and Arm.
>
>> +
>> +#ifdef CONFIG_HAS_PCI
>> +    pci_segments_init();
>> +#endif
>> +
>> +disable:
>> +    return;
>> +}
>> +
>> +/*
>> + * Local variables:
>> + * mode: C
>> + * c-file-style: "BSD"
>> + * c-basic-offset: 4
>> + * tab-width: 4
>> + * indent-tabs-mode: nil
>> + * End:
>> + */
>> diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
>> index 7968cee47d..2d7f1db44f 100644
>> --- a/xen/arch/arm/setup.c
>> +++ b/xen/arch/arm/setup.c
>> @@ -930,6 +930,8 @@ void __init start_xen(unsigned long boot_phys_offset,
>>
>>     setup_virt_paging();
>>
>> +    pci_init();
>> +
>>     do_initcalls();
>>
>>     /*
>> diff --git a/xen/include/asm-arm/device.h b/xen/include/asm-arm/device.h
>> index ee7cff2d44..28f8049cfd 100644
>> --- a/xen/include/asm-arm/device.h
>> +++ b/xen/include/asm-arm/device.h
>> @@ -4,6 +4,7 @@
>> enum device_type
>> {
>>     DEV_DT,
>> +    DEV_PCI,
>> };
>>
>> struct dev_archdata {
>> @@ -25,15 +26,15 @@ typedef struct device device_t;
>>
>> #include <xen/device_tree.h>
>>
>> -/* TODO: Correctly implement dev_is_pci when PCI is supported on ARM */
>> -#define dev_is_pci(dev) ((void)(dev), 0)
>> -#define dev_is_dt(dev)  ((dev->type == DEV_DT)
>> +#define dev_is_pci(dev) (dev->type == DEV_PCI)
>> +#define dev_is_dt(dev)  (dev->type == DEV_DT)
>>
>> enum device_class
>> {
>>     DEVICE_SERIAL,
>>     DEVICE_IOMMU,
>>     DEVICE_GIC,
>> +    DEVICE_PCI,
>
> It seems to me like this wants to be DEVICE_PCI_HOST_BRIDGE or some
> such, since this is not used to identify all PCI devices, but just
> bridges?
>
>>     /* Use for error */
>>     DEVICE_UNKNOWN,
>> };
>> diff --git a/xen/include/asm-arm/pci.h b/xen/include/asm-arm/pci.h
>> index de13359f65..94fd00360a 100644
>> --- a/xen/include/asm-arm/pci.h
>> +++ b/xen/include/asm-arm/pci.h
>> @@ -1,7 +1,98 @@
>> -#ifndef __X86_PCI_H__
>> -#define __X86_PCI_H__
>> +/*
>> + * Copyright (C) 2020 Arm Ltd.
>> + *
>> + * Based on Linux drivers/pci/ecam.c
>> + * Copyright 2016 Broadcom.
>> + *
>> + * Based on Linux drivers/pci/controller/pci-host-common.c
>> + * Based on Linux drivers/pci/controller/pci-host-generic.c
>> + * Copyright (C) 2014 ARM Limited Will Deacon <will.deacon@arm.com>
>> + *
>> + * This program is free software; you can redistribute it and/or modify
>> + * it under the terms of the GNU General Public License version 2 as
>> + * published by the Free Software Foundation.
>> + *
>> + * This program is distributed in the hope that it will be useful,
>> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
>> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
>> + * GNU General Public License for more details.
>> + *
>> + * You should have received a copy of the GNU General Public License
>> + * along with this program.  If not, see <http://www.gnu.org/licenses/>.
>> + */
>>
>> +#ifndef __ARM_PC
SV9IX18NCj4+ICsjZGVmaW5lIF9fQVJNX1BDSV9IX18NCj4+ICsNCj4+ICsjaW5jbHVkZSA8eGVu
L3BjaS5oPg0KPj4gKyNpbmNsdWRlIDx4ZW4vZGV2aWNlX3RyZWUuaD4NCj4+ICsjaW5jbHVkZSA8
YXNtL2RldmljZS5oPg0KPj4gKw0KPj4gKyNpZmRlZiBDT05GSUdfQVJNX1BDSQ0KPj4gKw0KPj4g
Ky8qIEFyY2ggcGNpIGRldiBzdHJ1Y3QgKi8NCj4+IHN0cnVjdCBhcmNoX3BjaV9kZXYgew0KPj4g
KyAgICBzdHJ1Y3QgZGV2aWNlIGRldjsNCj4+ICt9Ow0KPiANCj4gVGhpcyBzZWVtcyB0byBiZSBj
b21wbGV0ZWx5IHVudXNlZD8NCg0KV2lsbCBiZSB1c2luZyBnb2luZyBmb3J3YXJkLg0KPiANCj4+
ICsNCj4+ICsjZGVmaW5lIFBSSV9wY2kgIiUwNHg6JTAyeDolMDJ4LiV1Ig0KPj4gKyNkZWZpbmUg
cGNpX3RvX2RldihwY2lkZXYpICgmKHBjaWRldiktPmFyY2guZGV2KQ0KPj4gKw0KPj4gKy8qDQo+
PiArICogc3RydWN0IHRvIGhvbGQgdGhlIG1hcHBpbmdzIG9mIGEgY29uZmlnIHNwYWNlIHdpbmRv
dy4gVGhpcw0KPj4gKyAqIGlzIGV4cGVjdGVkIHRvIGJlIHVzZWQgYXMgc3lzZGF0YSBmb3IgUENJ
IGNvbnRyb2xsZXJzIHRoYXQNCj4+ICsgKiB1c2UgRUNBTS4NCj4+ICsgKi8NCj4+ICtzdHJ1Y3Qg
cGNpX2NvbmZpZ193aW5kb3cgew0KPj4gKyAgICBwYWRkcl90ICAgICBwaHlzX2FkZHI7DQo+PiAr
ICAgIHBhZGRyX3QgICAgIHNpemU7DQo+PiArICAgIHVpbnQ4X3QgICAgIGJ1c25fc3RhcnQ7DQo+
PiArICAgIHVpbnQ4X3QgICAgIGJ1c25fZW5kOw0KPj4gKyAgICBzdHJ1Y3QgcGNpX2VjYW1fb3Bz
ICAgICAqb3BzOw0KPiANCj4gY29uc3Q/DQo+IA0KPj4gKyAgICB2b2lkIF9faW9tZW0gICAgICAg
ICp3aW47DQo+PiArfTsNCj4+ICsNCj4+ICsvKiBGb3J3YXJkIGRlY2xhcmF0aW9uIGFzIHBjaV9o
b3N0X2JyaWRnZSBhbmQgcGNpX29wcyBkZXBlbmQgb24gZWFjaCBvdGhlci4gKi8NCj4+ICtzdHJ1
Y3QgcGNpX2hvc3RfYnJpZGdlOw0KPj4gKw0KPj4gK3N0cnVjdCBwY2lfb3BzIHsNCj4+ICsgICAg
aW50ICgqcmVhZCkoc3RydWN0IHBjaV9ob3N0X2JyaWRnZSAqYnJpZGdlLA0KPj4gKyAgICAgICAg
ICAgICAgICAgICAgdWludDMyX3Qgc2JkZiwgaW50IHdoZXJlLCBpbnQgc2l6ZSwgdTMyICp2YWwp
Ow0KPj4gKyAgICBpbnQgKCp3cml0ZSkoc3RydWN0IHBjaV9ob3N0X2JyaWRnZSAqYnJpZGdlLA0K
Pj4gKyAgICAgICAgICAgICAgICAgICAgdWludDMyX3Qgc2JkZiwgaW50IHdoZXJlLCBpbnQgc2l6
ZSwgdTMyIHZhbCk7DQo+PiArfTsNCj4+ICsNCj4+ICsvKg0KPj4gKyAqIHN0cnVjdCB0byBob2xk
IHBjaSBvcHMgYW5kIGJ1cyBzaGlmdCBvZiB0aGUgY29uZmlnIHdpbmRvdw0KPj4gKyAqIGZvciBh
IFBDSSBjb250cm9sbGVyLg0KPj4gKyAqLw0KPj4gK3N0cnVjdCBwY2lfZWNhbV9vcHMgew0KPj4g
KyAgICB1bnNpZ25lZCBpbnQgICAgICAgICAgICBidXNfc2hpZnQ7DQo+PiArICAgIHN0cnVjdCBw
Y2lfb3BzICAgICAgICAgIHBjaV9vcHM7DQo+PiArICAgIGludCAgICAgICAgICAgICAoKmluaXQp
KHN0cnVjdCBwY2lfY29uZmlnX3dpbmRvdyAqKTsNCj4+ICt9Ow0KPj4gKw0KPj4gKy8qDQo+PiAr
ICogc3RydWN0IHRvIGhvbGQgcGNpIGhvc3QgYnJpZGdlIGluZm9ybWF0aW9uDQo+PiArICogZm9y
IGEgUENJIGNvbnRyb2xsZXIuDQo+PiArICovDQo+PiArc3RydWN0IHBjaV9ob3N0X2JyaWRnZSB7
DQo+PiArICAgIHN0cnVjdCBkdF9kZXZpY2Vfbm9kZSAqZHRfbm9kZTsgIC8qIFBvaW50ZXIgdG8g
dGhlIGFzc29jaWF0ZWQgRFQgbm9kZSAqLw0KPj4gKyAgICBzdHJ1Y3QgbGlzdF9oZWFkIG5vZGU7
ICAgICAgICAgICAvKiBOb2RlIGluIGxpc3Qgb2YgaG9zdCBicmlkZ2VzICovDQo+PiArICAgIHVp
bnQxNl90IHNlZ21lbnQ7ICAgICAgICAgICAgICAgIC8qIFNlZ21lbnQgbnVtYmVyICovDQo+PiAr
ICAgIHZvaWQgKnN5c2RhdGE7ICAgICAgICAgICAgICAgICAgIC8qIFBvaW50ZXIgdG8gdGhlIGNv
bmZpZyBzcGFjZSB3aW5kb3cqLw0KPj4gKyAgICBjb25zdCBzdHJ1Y3QgcGNpX29wcyAqb3BzOw0K
PiANCj4gWW91IHNlZW0gdG8gaW50cm9kdWNlIGEgbG90IG9mIG9wcyBzdHJ1Y3RzLCB5ZXQgdGhl
cmUncyBvbmx5IG9uZQ0KPiBpbXBsZW1lbnRhdGlvbiB0aGUgZ2VuZXJpYyBFQ0FNIG9uZSwgYW5k
IGFkZGluZyBzdWNoIGNvbXBsZXhpdHkgc2hvdWxkDQo+IElNTyBiZSBkb25lIHdoZW4gZnVydGhl
ciBpbXBsZW1lbnRhdGlvbnMgYXJlIGFkZGVkLiBBbHNvIGdpdmVuIHRoaXMgaXMNCj4gYSBmdWxs
eSBFQ0FNIGNvbXBsaWFudCBicmlkZ2UgeW91IGNvdWxkIGp1c3QgdXNlIG1vc3Qgb2YgdGhlIGV4
aXN0aW5nDQo+IGxvZ2ljIGZvciB4ODY/DQo+IA0KPiBJIHVuZGVyc3RhbmQgdGhlIGRpc2NvdmVy
eSBuZWVkcyB0byBiZSBkaWZmZXJlbnQsIGJ1dCB4ODYgTUNGRyBsb2dpYw0KPiBzaG91bGQgYWxy
ZWFkeSBiZSBjYXBhYmxlIG9mIGhhbmRsaW5nIG11bHRpcGxlIEVDQU0gcmVnaW9ucy4NCj4gDQo+
IEkgYWxzbyBhZ3JlZSB3aXRoIEp1bGllbiB0aGF0IHNwbGl0dGluZyB0aGlzIGludG8gc2VwYXJh
dGUgcGF0Y2ggd291bGQNCj4gbWFrZSBpdCBlYXNpZXIgdG8gcmV2aWV3LiBGb3IgZXhhbXBsZSB5
b3UgY2FuIHN0YXJ0IHdpdGggdGhlIGRpc2NvdmVyeQ0KPiBsb2dpYywgZm9sbG93ZWQgYnkgdGhl
IGluaXRpYWxpemF0aW9uIGFuZCB0aGUgYWRkIHRoZSBhY2Nlc3NvcnMgdG8gdGhlDQo+IGNvbmZp
ZyBzcGFjZSBsYXN0bHkuDQoNCk9rIC4gV2lsbCBzcGlsdCB0aGUgcGF0Y2hlcyBpbiB0aGUgbmV4
dCB2ZXJzaW9uIG9mIHRoZSBwYXRjaCBzZXJpZXMuDQoNCj4gDQo+IFRoYW5rcywgUm9nZXIuDQoN
Cg==


From xen-devel-bounces@lists.xenproject.org Tue Jul 28 08:10:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jul 2020 08:10:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k0KhD-0003zF-2y; Tue, 28 Jul 2020 08:10:47 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=E6Nk=BH=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1k0KhB-0003yu-QM
 for xen-devel@lists.xenproject.org; Tue, 28 Jul 2020 08:10:45 +0000
X-Inumbo-ID: d2043e34-d0a9-11ea-a878-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d2043e34-d0a9-11ea-a878-12813bfff9fa;
 Tue, 28 Jul 2020 08:10:38 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=HIc5VaAC3CmAJUv26JZ7ZvIedTD1tCYlQG10SUSrp1c=; b=2TLowJnofpEygxhfZknCqJVdq
 qW4AHP3Ku0Wew7P7owJfMhqEtNeDRbFpvs1AWP8MYlBpaXWj+VkPsf5UvmGnbC43/EknoQNTpmswM
 u8/ppPEtYLgEVi4qW6ZP7VoKymN1AQeEpJONOQmCprZiVXA8HjqqLdY/NsxMsjxnF2mRE=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1k0Kh3-0002F1-CB; Tue, 28 Jul 2020 08:10:37 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1k0Kh3-0004Oe-1K; Tue, 28 Jul 2020 08:10:37 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1k0Kh3-000347-0i; Tue, 28 Jul 2020 08:10:37 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-152233-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 152233: tolerable FAIL - PUSHED
X-Osstest-Failures: xen-unstable:test-armhf-armhf-xl-rtds:guest-start.2:fail:allowable
 xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
X-Osstest-Versions-This: xen=0562cbc14cf02b8188b9f1f37f39a4886776ce7c
X-Osstest-Versions-That: xen=8c4532f19d6925538fb0c938f7de9a97da8c5c3b
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 28 Jul 2020 08:10:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 152233 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/152233/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds     17 guest-start.2            fail REGR. vs. 152045

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 152045
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 152045
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 152045
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 152045
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 152045
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 152045
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 152045
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 152045
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop             fail like 152045
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass

version targeted for testing:
 xen                  0562cbc14cf02b8188b9f1f37f39a4886776ce7c
baseline version:
 xen                  8c4532f19d6925538fb0c938f7de9a97da8c5c3b

Last test of basis   152045  2020-07-20 13:36:39 Z    7 days
Failing since        152067  2020-07-21 06:59:07 Z    7 days    2 attempts
Testing same since   152233  2020-07-27 13:06:33 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Christian Lindig <christian.lindig@citrix.com>
  Edwin Török <edvin.torok@citrix.com>
  Elliott Mitchell <ehem+xen@m5p.com>
  George Dunlap <george.dunlap@citrix.com>
  Igor Druzhinin <igor.druzhinin@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Nick Rosbrook <rosbrookn@ainfosec.com>
  Nick Rosbrook <rosbrookn@gmail.com>
  Stefano Stabellini <sstabellini@kernel.org>
  Tim Deegan <tim@xen.org>
  Wei Liu <wl@xen.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   8c4532f19d..0562cbc14c  0562cbc14cf02b8188b9f1f37f39a4886776ce7c -> master


From xen-devel-bounces@lists.xenproject.org Tue Jul 28 08:21:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jul 2020 08:21:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k0KrK-0005GZ-6F; Tue, 28 Jul 2020 08:21:14 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ZWt7=BH=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1k0KrJ-0005GU-Hl
 for xen-devel@lists.xenproject.org; Tue, 28 Jul 2020 08:21:13 +0000
X-Inumbo-ID: 4baff204-d0ab-11ea-8b19-bc764e2007e4
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4baff204-d0ab-11ea-8b19-bc764e2007e4;
 Tue, 28 Jul 2020 08:21:12 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595924472;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:content-transfer-encoding:in-reply-to;
 bh=aZP3t9TBueTxsgu8tPwdePqIEMOiI1XOwqtOVR0zxI8=;
 b=DTnn0z0CJyesWKzCqYkrow1RZ0BLD7YSw4umNxi1LOkxBDgN9BnWDXlh
 igrp7FR5QJUi2QZNwYr9Qhfv2MxIxVF97x9/rcUw9u5WhMTKgxSlYRTzT
 vR6vzh0XWP5JoA37TsKVCUiMVC2eqgaPDvj/MMLgC7aWSKn5K1wF8LoI8 Y=;
Authentication-Results: esa1.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: 204lIH4aEN1DMKu9W6chZFoKKFrr+fR5AGV6o8C0mfa5uMecM6x6MG50KFcF71MwTE5hygykD7
 QXILzuNXpybRnAMRSul4OdYTVUthcSE2nqxYtD9bFuB8RkJ1ts6z8UcfTseXSpVZVa3iBzujoe
 ZkiUhcvTynp9KOt4VzNyZWasBrGtnIUWQlKPrxAF+acPoeWCF+9gPkRUQbEObmNRpFX0frulFQ
 VJG9IkSoLQg6fljKJQnqb+Rvf3af4wmEb2h4VcJNwmOxKYyY3eCBmnZJ3bp2aUT94hRcsvtwHq
 aCE=
X-SBRS: 2.7
X-MesageID: 23660252
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,405,1589256000"; d="scan'208";a="23660252"
Date: Tue, 28 Jul 2020 10:21:00 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Rahul Singh <Rahul.Singh@arm.com>
Subject: Re: [RFC PATCH v1 1/4] arm/pci: PCI setup and PCI host bridge
 discovery within XEN on ARM.
Message-ID: <20200728082100.GV7191@Air-de-Roger>
References: <cover.1595511416.git.rahul.singh@arm.com>
 <64ebd4ef614b36a5844c52426a4a6a4a23b1f087.1595511416.git.rahul.singh@arm.com>
 <20200724144404.GJ7191@Air-de-Roger>
 <567A8D4D-9D7F-4D53-B740-6095F5512026@arm.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <567A8D4D-9D7F-4D53-B740-6095F5512026@arm.com>
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien
 Grall <julien@xen.org>, Bertrand Marquis <Bertrand.Marquis@arm.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 nd <nd@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Tue, Jul 28, 2020 at 08:06:17AM +0000, Rahul Singh wrote:
> 
> 
> > On 24 Jul 2020, at 3:44 pm, Roger Pau Monné <roger.pau@citrix.com> wrote:
> > 
> > On Thu, Jul 23, 2020 at 04:40:21PM +0100, Rahul Singh wrote:
> >> +
> >> +    struct pci_host_bridge *bridge = pci_find_host_bridge(sbdf.seg, sbdf.bus);
> >> +
> >> +    if ( unlikely(!bridge) )
> >> +    {
> >> +        printk(XENLOG_ERR "Unable to find bridge for "PRI_pci"\n",
> >> +                sbdf.seg, sbdf.bus, sbdf.dev, sbdf.fn);
> > 
> > I had a patch to add a custom modifier to our printf format in
> > order to handle pci_sbdf_t natively:
> > 
> > https://patchew.org/Xen/20190822065132.48200-1-roger.pau@citrix.com/
> > 
> > It missed maintainers Acks and was never committed. Since you are
> > doing a bunch of work here, and likely adding a lot of SBDF related
> > prints, feel free to import the modifier (%pp) and use in your code
> > (do not attempt to switch existing users, or it's likely to get
> > stuck again).
> 
> OK, will integrate that patch once it is submitted.

I've posted an updated version to the list yesterday:

https://lore.kernel.org/xen-devel/20200727103136.53343-1-roger.pau@citrix.com/
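(For illustration only: a minimal sketch of the textual SBDF form such a modifier would produce. The real pci_sbdf_t layout and the %pp handler live in the linked patch, so the struct and helper names below are assumptions, not the Xen code.)

```c
#include <stdio.h>
#include <stdint.h>

/* Hypothetical stand-in for Xen's pci_sbdf_t (field layout assumed). */
typedef struct {
    uint16_t seg;
    uint8_t bus;
    uint8_t dev;
    uint8_t fn;
} sbdf_t;

/* Formats an SBDF the conventional way: ssss:bb:dd.f */
static int snprint_sbdf(char *buf, size_t size, sbdf_t sbdf)
{
    return snprintf(buf, size, "%04x:%02x:%02x.%u",
                    sbdf.seg, sbdf.bus, sbdf.dev, sbdf.fn);
}
```

With a native %pp modifier the four-argument printk quoted above would collapse to a single argument; the helper here only shows the expected textual form.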

> >> diff --git a/xen/include/asm-arm/pci.h b/xen/include/asm-arm/pci.h
> >> index de13359f65..94fd00360a 100644
> >> --- a/xen/include/asm-arm/pci.h
> >> +++ b/xen/include/asm-arm/pci.h
> >> @@ -1,7 +1,98 @@
> >> -#ifndef __X86_PCI_H__
> >> -#define __X86_PCI_H__
> >> +/*
> >> + * Copyright (C) 2020 Arm Ltd.
> >> + *
> >> + * Based on Linux drivers/pci/ecam.c
> >> + * Copyright 2016 Broadcom.
> >> + *
> >> + * Based on Linux drivers/pci/controller/pci-host-common.c
> >> + * Based on Linux drivers/pci/controller/pci-host-generic.c
> >> + * Copyright (C) 2014 ARM Limited Will Deacon <will.deacon@arm.com>
> >> + *
> >> + * This program is free software; you can redistribute it and/or modify
> >> + * it under the terms of the GNU General Public License version 2 as
> >> + * published by the Free Software Foundation.
> >> + *
> >> + * This program is distributed in the hope that it will be useful,
> >> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> >> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> >> + * GNU General Public License for more details.
> >> + *
> >> + * You should have received a copy of the GNU General Public License
> >> + * along with this program.  If not, see <http://www.gnu.org/licenses/>.
> >> + */
> >> 
> >> +#ifndef __ARM_PCI_H__
> >> +#define __ARM_PCI_H__
> >> +
> >> +#include <xen/pci.h>
> >> +#include <xen/device_tree.h>
> >> +#include <asm/device.h>
> >> +
> >> +#ifdef CONFIG_ARM_PCI
> >> +
> >> +/* Arch pci dev struct */
> >> struct arch_pci_dev {
> >> +    struct device dev;
> >> +};
> > 
> > This seems to be completely unused?
> 
> Will be using going forward.

Please introduce it when it's required then.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue Jul 28 08:33:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jul 2020 08:33:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k0L32-0006DA-Ag; Tue, 28 Jul 2020 08:33:20 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ZWt7=BH=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1k0L30-0006D5-Pq
 for xen-devel@lists.xenproject.org; Tue, 28 Jul 2020 08:33:18 +0000
X-Inumbo-ID: fbbd6f91-d0ac-11ea-a87c-12813bfff9fa
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id fbbd6f91-d0ac-11ea-a87c-12813bfff9fa;
 Tue, 28 Jul 2020 08:33:17 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595925197;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:content-transfer-encoding:in-reply-to;
 bh=lNz5oGkBC9VwfOlKxTcc5WvkTjpWiXUIUwt5aAYRs28=;
 b=P5xn5s/VrtgFjimmSL9q93AbLNZDDJHy/n/AQhF0wICD1cTiY6lzXvet
 KGVhFyjCX/3ojPXU9JmgAScokM6x+9jvzRIumsDeNRbgAviYHEidndZdN
 jK3hiXIJw/EskyHchhu/l3YmGTOpVNJpA4k3FxBbDD5ql4qB1Z2cHERh2 U=;
Authentication-Results: esa6.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: H/Ozf1HpDxwGYEK6VLwK1rYT9cV4P5QSSC8TJ8thbt/zQI8veXbRPFickjKgsu7s6/VdN2PcLX
 wW7oVzwolAAYeUSXXrwfaaYMuMC9EX0GiBNMPkzuPXCbUbaO+Ochmw3dxC9k2x86emMhncxVbh
 gxPOIojptryf3hC+5jAgFvGSKubx34hDJAGLu4twjN2kGG6pypJsTFRw9KtJl9uolyK0A5aGSC
 0gsi4FfMabmnZcm/mR1Jc04MHoPw7+jjaJ4Ua9YXJXFwXXTQs6ZHkpK+fkhLe1k2s+bAC+ILP0
 XVQ=
X-SBRS: 2.7
X-MesageID: 23655971
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,405,1589256000"; d="scan'208";a="23655971"
Date: Tue, 28 Jul 2020 10:33:10 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Stefano Stabellini <sstabellini@kernel.org>
Subject: Re: [RFC PATCH v1 1/4] arm/pci: PCI setup and PCI host bridge
 discovery within XEN on ARM.
Message-ID: <20200728083310.GW7191@Air-de-Roger>
References: <alpine.DEB.2.21.2007231055230.17562@sstabellini-ThinkPad-T480s>
 <9f09ff42-a930-e4e3-d1c8-612ad03698ae@xen.org>
 <alpine.DEB.2.21.2007241036460.17562@sstabellini-ThinkPad-T480s>
 <40582d63-49c7-4a51-b35b-8248dfa34b66@xen.org>
 <alpine.DEB.2.21.2007241127480.17562@sstabellini-ThinkPad-T480s>
 <CAJ=z9a3dXSnEBvhkHkZzV9URAGqSfdtJ1Lc838h_ViAWG3ZO4g@mail.gmail.com>
 <alpine.DEB.2.21.2007241353450.17562@sstabellini-ThinkPad-T480s>
 <CAJ=z9a1RWXq3EN5DC=_279yzdsq3M0nw6+CZtKD00yBzKomcaw@mail.gmail.com>
 <20200727110648.GQ7191@Air-de-Roger>
 <alpine.DEB.2.21.2007271411000.27071@sstabellini-ThinkPad-T480s>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <alpine.DEB.2.21.2007271411000.27071@sstabellini-ThinkPad-T480s>
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: nd <nd@arm.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>, Jan Beulich <jbeulich@suse.com>,
 xen-devel <xen-devel@lists.xenproject.org>, Rahul Singh <rahul.singh@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Julien Grall <julien.grall.oss@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Mon, Jul 27, 2020 at 05:06:25PM -0700, Stefano Stabellini wrote:
> On Mon, 27 Jul 2020, Roger Pau Monné wrote:
> > On Sat, Jul 25, 2020 at 10:59:50AM +0100, Julien Grall wrote:
> > > On Sat, 25 Jul 2020 at 00:46, Stefano Stabellini <sstabellini@kernel.org> wrote:
> > > >
> > > > On Fri, 24 Jul 2020, Julien Grall wrote:
> > > > > On Fri, 24 Jul 2020 at 19:32, Stefano Stabellini <sstabellini@kernel.org> wrote:
> > > > > > > If they are not equal, then I fail to see why it would be useful to have this
> > > > > > > value in Xen.
> > > > > >
> > > > > > I think that's because the domain is actually more convenient to use
> > > > > > because a segment can span multiple PCI host bridges. So my
> > > > > > understanding is that a segment alone is not sufficient to identify a
> > > > > > host bridge. From a software implementation point of view it would be
> > > > > > better to use domains.
> > > > >
> > > > > AFAICT, this would be a matter of one check vs two checks in Xen :).
> > > > > But... looking at Linux, they will also use domain == segment for ACPI
> > > > > (see [1]). So, I think, they still have to use (domain, bus) to do the lookup.
> > 
> > You have to use the (segment, bus) tuple when doing a lookup because
> > MMCFG regions on ACPI are defined for a segment and a bus range, you
> > can have a MMCFG region that covers segment 0 bus [0, 20) and another
> > MMCFG region that covers segment 0 bus [20, 255], and those will use
> > different addresses in the MMIO space.
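The lookup described above — matching both segment and bus against per-region ranges — could be sketched roughly as follows (the struct and function names are invented for illustration, loosely modelled on ACPI MCFG entries, and are not the actual Xen code):

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical MMCFG region descriptor: each region covers one
   segment and a contiguous bus range within it. */
struct mmcfg_region {
    uint64_t addr;       /* base of the ECAM window in MMIO space */
    uint16_t segment;
    uint8_t bus_start;
    uint8_t bus_end;
};

/* Both segment and bus are needed for the lookup: two regions may
   share a segment but cover disjoint bus ranges that live at
   different MMIO addresses. */
static const struct mmcfg_region *
mmcfg_find(const struct mmcfg_region *regions, size_t n,
           uint16_t segment, uint8_t bus)
{
    for ( size_t i = 0; i < n; i++ )
        if ( regions[i].segment == segment &&
             bus >= regions[i].bus_start && bus <= regions[i].bus_end )
            return &regions[i];
    return NULL;
}
```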
> 
> Thanks for the clarification!
> 
> 
> > > > > > > In which case, we need to use PHYSDEVOP_pci_mmcfg_reserved so
> > > > > > > Dom0 and Xen can synchronize on the segment number.
> > > > > >
> > > > > > I was hoping we could write down the assumption somewhere that for the
> > > > > > cases we care about domain == segment, and error out if it is not the
> > > > > > case.
> > > > >
> > > > > Given that we have only the domain in hand, how would you enforce that?
> > > > >
> > > > > From this discussion, it also looks like there is a mismatch between the
> > > > > implementation and the understanding on QEMU devel. So I am a bit
> > > > > concerned that this is not stable and may change in future Linux version.
> > > > >
> > > > > IOW, we are now tying Xen to Linux. So could we implement
> > > > > PHYSDEVOP_pci_mmcfg_reserved *or* introduce a new property that
> > > > > really represent the segment?
> > > >
> > > > I don't think we are tying Xen to Linux. Rob has already said that
> > > > linux,pci-domain is basically a generic device tree property.
> > > 
> > > My concern is not so much the name of the property, but the definition of it.
> > > 
> > > AFAICT, from this thread there can be two interpretation:
> > >       - domain == segment
> > >       - domain == (segment, bus)
> > 
> > I think domain is just an alias for segment, the difference seems to
> > be that when using DT all bridges get a different segment (or domain)
> > number, and thus you will always end up starting numbering at bus 0
> > for each bridge?
> >
> > Ideally you would need a way to specify the segment and start/end bus
> > numbers of each bridge, if not you cannot match what ACPI does. Albeit
> > it might be fine as long as the OS and Xen agree on the segments and
> > bus numbers that belong to each bridge (and thus each ECAM region).
> 
> That is what I thought and it is why I was asking to clarify the naming
> and/or writing a document to explain the assumptions, if any.
> 
> Then after Julien's email I followed up in the Linux codebase and
> clearly there is a different assumption baked in the Linux kernel for
> architectures that have CONFIG_PCI_DOMAINS enabled (including ARM64).
> 
> The assumption is that segment == domain == unique host bridge. It
> looks like it is coming from IEEE Std 1275-1994 but I am not certain.
> In fact, it seems that ACPI MCFG and IEEE Std 1275-1994 don't exactly
> match. So I am starting to think that domain == segment for IEEE Std
> 1275-1994 compliant device tree based systems.

I don't think the ACPI MCFG spec contains the notion of bridges, it
just describes ECAM (or MMCFG) regions, but those could be made up by
concatenating different bridge ECAM regions by the firmware itself, so
you could AFAICT end up with multiple bridges being aggregated into a
single ECAM region, and thus using the same segment number, which
seems not possible with the DT spec, where each bridge must get a
different segment number?

If you could assign both a segment number and a bus start and end
values to a bridge then I think it would be kind of equivalent to ACPI
MCFG.

I assume we would never support a system where Xen is getting the
hardware description from a DT and the hardware domain is using ACPI
(or the other way around)?

If so, I don't think we care that enumeration when using DT is
different than when using ACPI, as we can only guarantee consistency
when both Xen and the hardware domain use the same source for the
hardware description.

If when using DT each bridge has a unique segment number that's fine
as long as Xen and the OS agree to not change such values.

Roger.


From xen-devel-bounces@lists.xenproject.org Tue Jul 28 08:34:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jul 2020 08:34:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k0L3w-0006Hg-PI; Tue, 28 Jul 2020 08:34:16 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=uMAr=BH=amazon.com=prvs=4712fd9bf=elnikety@srs-us1.protection.inumbo.net>)
 id 1k0L3v-0006HU-9g
 for xen-devel@lists.xenproject.org; Tue, 28 Jul 2020 08:34:15 +0000
X-Inumbo-ID: 1e8022ca-d0ad-11ea-a87c-12813bfff9fa
Received: from smtp-fw-2101.amazon.com (unknown [72.21.196.25])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1e8022ca-d0ad-11ea-a87c-12813bfff9fa;
 Tue, 28 Jul 2020 08:34:14 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=amazon.com; i=@amazon.com; q=dns/txt; s=amazon201209;
 t=1595925255; x=1627461255;
 h=from:to:cc:subject:date:message-id:mime-version;
 bh=QXuCPaBFwD3FLj/A53XcKjEkryN9839jPfi9Dd1Dorg=;
 b=UkyvHY+/7y2zpkdz9P4UlnlgNkdASBcStysxA4W94GCPFIGiIYleVE2G
 5n3O0bUtlm4yQFSqAoEWoaVm9d1jBtIXzXEmtIIM3uURQlrLMXRnGshHB
 wl9G2XSHmeqdHVHmveWEOr+RbQBMQDaeolrh/fbe/b+ehzciW3kyenEYl 4=;
IronPort-SDR: 6DXbhNr046JubuV4KDNdBBx9GeCIZWksuRWojQ0/uWt1YYY3eVg54fY5aVdWaznDzJdghaWPUj
 M8/aQ3cW5CIQ==
X-IronPort-AV: E=Sophos;i="5.75,405,1589241600"; d="scan'208";a="44424179"
Received: from iad12-co-svc-p1-lb1-vlan2.amazon.com (HELO
 email-inbound-relay-2c-c6afef2e.us-west-2.amazon.com) ([10.43.8.2])
 by smtp-border-fw-out-2101.iad2.amazon.com with ESMTP;
 28 Jul 2020 08:34:14 +0000
Received: from EX13MTAUEA002.ant.amazon.com
 (pdx4-ws-svc-p6-lb7-vlan2.pdx.amazon.com [10.170.41.162])
 by email-inbound-relay-2c-c6afef2e.us-west-2.amazon.com (Postfix) with ESMTPS
 id DC4BDA2683; Tue, 28 Jul 2020 08:34:12 +0000 (UTC)
Received: from EX13D32EUB001.ant.amazon.com (10.43.166.125) by
 EX13MTAUEA002.ant.amazon.com (10.43.61.77) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Tue, 28 Jul 2020 08:34:12 +0000
Received: from EX13MTAUEA001.ant.amazon.com (10.43.61.82) by
 EX13D32EUB001.ant.amazon.com (10.43.166.125) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Tue, 28 Jul 2020 08:34:10 +0000
Received: from dev-dsk-elnikety-1b-cd63f796.eu-west-1.amazon.com (10.15.63.96)
 by mail-relay.amazon.com (10.43.61.243) with Microsoft SMTP Server id
 15.0.1497.2 via Frontend Transport; Tue, 28 Jul 2020 08:34:10 +0000
Received: by dev-dsk-elnikety-1b-cd63f796.eu-west-1.amazon.com (Postfix,
 from userid 6438462)
 id 510FFA0139; Tue, 28 Jul 2020 08:34:10 +0000 (UTC)
From: Eslam Elnikety <elnikety@amazon.com>
To: <xen-devel@lists.xenproject.org>
Subject: [PATCH] x86/vhpet: Fix type size in timer_int_route_valid
Date: Tue, 28 Jul 2020 08:33:57 +0000
Message-ID: <20200728083357.77999-1-elnikety@amazon.com>
X-Mailer: git-send-email 2.16.6
MIME-Version: 1.0
Content-Type: text/plain
Precedence: Bulk
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Eslam Elnikety <elnikety@amazon.com>, Paul Durrant <pdurrant@amazon.co.uk>,
 Jan Beulich <jbeulich@suse.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

The macro timer_int_route_cap evaluates to a 64-bit value. Extend the
size of the left side of timer_int_route_valid to match.

This bug was discovered and resolved using Coverity Static Analysis
Security Testing (SAST) by Synopsys, Inc.

Signed-off-by: Eslam Elnikety <elnikety@amazon.com>
---
 xen/arch/x86/hvm/hpet.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/arch/x86/hvm/hpet.c b/xen/arch/x86/hvm/hpet.c
index ca94e8b453..9afe6e6760 100644
--- a/xen/arch/x86/hvm/hpet.c
+++ b/xen/arch/x86/hvm/hpet.c
@@ -66,7 +66,7 @@
     MASK_EXTR(timer_config(h, n), HPET_TN_INT_ROUTE_CAP)
 
 #define timer_int_route_valid(h, n) \
-    ((1u << timer_int_route(h, n)) & timer_int_route_cap(h, n))
+    ((1ULL << timer_int_route(h, n)) & timer_int_route_cap(h, n))
 
 static inline uint64_t hpet_read_maincounter(HPETState *h, uint64_t guest_time)
 {
-- 
2.16.6



From xen-devel-bounces@lists.xenproject.org Tue Jul 28 08:37:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jul 2020 08:37:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k0L6a-0006RJ-7G; Tue, 28 Jul 2020 08:37:00 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ZWt7=BH=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1k0L6Y-0006RE-8O
 for xen-devel@lists.xenproject.org; Tue, 28 Jul 2020 08:36:58 +0000
X-Inumbo-ID: 7e847996-d0ad-11ea-a87d-12813bfff9fa
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7e847996-d0ad-11ea-a87d-12813bfff9fa;
 Tue, 28 Jul 2020 08:36:56 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595925416;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:content-transfer-encoding:in-reply-to;
 bh=Wt+2Rk35DFsivT+PLFZ3saMbD9lTBuqwKxEAHavAZuY=;
 b=iUPdUzqqDVpvJNZfyxLd57jeG5fE+9oN2+00Y6DO8tZ+/QFss8lTDcV8
 zJA75F4UFOEipTxXIciydcGvn5Z4pKjudXHahaIPT+oYQJkdfUcE9htjq
 944CgToN7mkHNn9u1BIkJmebA5MExBoYGmqCdp98m1mEWw7ppDVuU7kzG o=;
Authentication-Results: esa4.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: 0RJBp9F2bSyi1xbHsn216+P/WXCsxqXqn75hjXrC2Ufa+TSCdT/qbopDLbeouxi7+80fFIoQ3H
 PdPNQtcl6sGFrYrPxGRyHxJXQUi80hgmIu5J4jhAzE/3jh+sxheShwvfw+GKPRU+p5xirb22FB
 qYrT84PzYRALE+kXc5qmGYeeFsYbeNr3EMM16Hf4KiiN0vIPjRLPhydeokA8UOjipG4MXLF6xH
 OkjevtsFDvh+qUsxz+ol3RKoyYEGjM7gXVltt3VWp3Cz+kCtQEIQP8FexrhmLps/PXN5nUIFLr
 iBU=
X-SBRS: 2.7
X-MesageID: 24191433
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,405,1589256000"; d="scan'208";a="24191433"
Date: Tue, 28 Jul 2020 10:36:49 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Subject: Re: [PATCH 2/4] x86: reduce CET-SS related #ifdef-ary
Message-ID: <20200728083649.GX7191@Air-de-Roger>
References: <58b9211a-f6dd-85da-d0bd-c927ac537a5d@suse.com>
 <58615a18-7f81-c000-d499-1a49f4752879@suse.com>
 <20200727150002.GS7191@Air-de-Roger>
 <d2a33851-10b3-a1c7-646a-96a0b5783923@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <d2a33851-10b3-a1c7-646a-96a0b5783923@suse.com>
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Mon, Jul 27, 2020 at 09:50:23PM +0200, Jan Beulich wrote:
> On 27.07.2020 17:00, Roger Pau Monné wrote:
> > On Wed, Jul 15, 2020 at 12:48:46PM +0200, Jan Beulich wrote:
> > Should the setssbsy be quoted, or does it not matter? I'm asking
> > because the same construction used by CLAC/STAC doesn't quote the
> > instruction.
> 
> I actually thought we consistently quote these. It doesn't matter
> as long as it's a single word. Quoting becomes necessary when
> there are e.g. blanks involved, which happens for insns with
> operands.

Could you quote the usages in patch 1 please then in order to be
consistent?

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue Jul 28 08:58:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jul 2020 08:58:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k0LRR-0008C0-0t; Tue, 28 Jul 2020 08:58:33 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ZWt7=BH=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1k0LRQ-0008Bv-Kc
 for xen-devel@lists.xenproject.org; Tue, 28 Jul 2020 08:58:32 +0000
X-Inumbo-ID: 822dafce-d0b0-11ea-8b1b-bc764e2007e4
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 822dafce-d0b0-11ea-8b1b-bc764e2007e4;
 Tue, 28 Jul 2020 08:58:31 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595926710;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:in-reply-to;
 bh=sUvuB+SQAcx3A3Ddb7pG9ZwS7HsLQy3Nggy8p7965gE=;
 b=TGaisiGi0ePOPNkXq/Jj2AzhT8vFjD84xrrlU0IjrYhthTVj2Mwv7TOA
 YCJq0adrJsExUhApIzhFk+AH3EHM9FnSQDJRVfOPfX/YqwBkjuNR8E91t
 LX8KezvnjWTXIS4FEiT7UhvAHKFFahrWP0lxuIRQVcxiitNEn5x0MgJXP s=;
Authentication-Results: esa5.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: /bZkDnZuXjo0SWY7PwnfnFfGpru2vPR112n/AkE305ku0uelirJMTzl1CNgdCMVtSFeIrfCMas
 1tFUaiYgODKpcIbW7ltXyvR/Ie4ojvYaRMvJwT1r2pJ2TtTags9CgqoKPUVqd8Y9Y09tNKDRg9
 LMZHN6Ge9XQCD8ckxOmixPSKMBSZg3hBNJ4vfWIRUORJRDhU18AcJAtoLSak+lKW9skwRbCea+
 QcHDVfT4eKoODdhezggYh6X+mYxb4dNIHNbT0gR1jrsne4mJ0ce+U9XnpDfcXprbf+Xv4RakOq
 dwA=
X-SBRS: 2.7
X-MesageID: 23515422
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,405,1589256000"; d="scan'208";a="23515422"
Date: Tue, 28 Jul 2020 10:58:15 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Eslam Elnikety <elnikety@amazon.com>
Subject: Re: [PATCH] x86/vhpet: Fix type size in timer_int_route_valid
Message-ID: <20200728085815.GY7191@Air-de-Roger>
References: <20200728083357.77999-1-elnikety@amazon.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
In-Reply-To: <20200728083357.77999-1-elnikety@amazon.com>
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, Andrew Cooper <andrew.cooper3@citrix.com>,
 Wei Liu <wl@xen.org>, Jan Beulich <jbeulich@suse.com>,
 Paul Durrant <pdurrant@amazon.co.uk>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Tue, Jul 28, 2020 at 08:33:57AM +0000, Eslam Elnikety wrote:
> The macro timer_int_route_cap evaluates to a 64-bit value. Extend the
> size of the left side of timer_int_route_valid to match.

I'm very dull with these things, so forgive me.

Isn't the left side just promoted to an unsigned 64-bit value?

Also timer_int_route will strictly be <= 31, which makes the shift
safe?

I'm not opposed to switching to use unsigned long, but I think I'm not
understanding the issue.
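As a minimal sketch of the promotion question (these are invented helper names, not the Xen macros themselves): for shift counts up to 31 the 32-bit expression (1u << n) is converted to the type of the 64-bit mask before the &, so both spellings agree; 1ULL merely keeps the shift defined should the count ever exceed 31.

```c
#include <stdint.h>

/* 32-bit shift: the result of (1u << route) is promoted to uint64_t
   by the usual arithmetic conversions before the &, so this is
   correct as long as route <= 31. */
static uint64_t valid_32(unsigned int route, uint64_t cap)
{
    return (1u << route) & cap;
}

/* 64-bit shift: additionally defined for route values up to 63. */
static uint64_t valid_64(unsigned int route, uint64_t cap)
{
    return (1ULL << route) & cap;
}
```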

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue Jul 28 09:07:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jul 2020 09:07:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k0LZf-0000f8-Sd; Tue, 28 Jul 2020 09:07:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ZWt7=BH=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1k0LZe-0000f3-Ea
 for xen-devel@lists.xenproject.org; Tue, 28 Jul 2020 09:07:02 +0000
X-Inumbo-ID: b25e04ae-d0b1-11ea-8b1c-bc764e2007e4
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b25e04ae-d0b1-11ea-8b1c-bc764e2007e4;
 Tue, 28 Jul 2020 09:07:01 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595927221;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:content-transfer-encoding:in-reply-to;
 bh=PBb4O1ZjcnYgIxh0EQAN7soMcQ53JITdPUUJn86Y/z4=;
 b=Tb/YRNURN0atgUV8ELQ0XNP6lHTZb9B0UcdoxGOdBJ/vME8N2RmQEj7T
 zK6NtJrOEoU8jE7aSFunpHGGkWH6eE/cgTZNHH3oblCSGqXuSboS49p+g
 FdxnZqQ6wznpufjJuHuFFgNgmO3b5niALvAj5XCPDZWSKc4GZdMLOweVq g=;
Authentication-Results: esa6.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: uTmgI8PxOzjXzTKFv/wuH25kCpkMuPEwq+7BWgPZv2kPlz8v6EpZPXhdFEz2dNXOmtaPEzonHB
 DCqEQ+nr//gDEkJQQkWdv5HeExG4xCv+cPStmbGxdX6oY7kLO/wx94zJtSzt1e18eM0iTb/+CJ
 CCVBwOV1uUQL77JM6ap2ZVHm4vhtZX8xhWPV8WtaKVYcRVeykIdBOWJLRbMRs3meBxi4EygrUB
 wetHtNoxdoHKPRp0MLQDcOMEp5Gr2o2wXG3wvww6WsdwtU5AnVQP3uzqFsoVXyjW2CLC34QUYs
 jQ4=
X-SBRS: 2.7
X-MesageID: 23658075
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,405,1589256000"; d="scan'208";a="23658075"
Date: Tue, 28 Jul 2020 11:06:54 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Subject: Re: [PATCH 1/4] x86: replace __ASM_{CL,ST}AC
Message-ID: <20200728090618.GZ7191@Air-de-Roger>
References: <58b9211a-f6dd-85da-d0bd-c927ac537a5d@suse.com>
 <fc8e042e-fef8-ac38-34d8-16b13e4b0135@suse.com>
 <20200727145526.GR7191@Air-de-Roger>
 <b29e4b17-8ec2-a0db-8426-94393e9eb2c0@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <b29e4b17-8ec2-a0db-8426-94393e9eb2c0@suse.com>
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Mon, Jul 27, 2020 at 09:47:52PM +0200, Jan Beulich wrote:
> On 27.07.2020 16:55, Roger Pau Monné wrote:
> > On Wed, Jul 15, 2020 at 12:48:14PM +0200, Jan Beulich wrote:
> > > --- /dev/null
> > > +++ b/xen/include/asm-x86/asm-defns.h
> > 
> > Maybe this could be asm-insn.h or a different name? I find it
> > confusing to have asm-defns.h and an asm_defs.h.
> 
> While indeed I anticipated a reply to this effect, I don't consider
> asm-insn.h or asm-macros.h suitable: We don't want to limit this
> header to a more narrow purpose than "all sorts of definition", I
> don't think. Hence I chose that name despite its similarity to the
> C header's one.

I think it's confusing, but I also think the whole magic we do with
asm includes is already confusing (me), so if you and Andrew agree
this is the best name I'm certainly fine with it. FWIW:

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

Please quote the clac/stac instructions in order to match the other
usages of ALTERNATIVE.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue Jul 28 09:14:59 2020
Subject: Re: [PATCH] x86/vhpet: Fix type size in timer_int_route_valid
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Eslam Elnikety
 <elnikety@amazon.com>
References: <20200728083357.77999-1-elnikety@amazon.com>
 <20200728085815.GY7191@Air-de-Roger>
From: Eslam Elnikety <elnikety@amazon.com>
Message-ID: <8c2a7d95-c830-485c-05c2-980994806425@amazon.com>
Date: Tue, 28 Jul 2020 11:14:38 +0200
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:68.0)
 Gecko/20100101 Thunderbird/68.9.0
MIME-Version: 1.0
In-Reply-To: <20200728085815.GY7191@Air-de-Roger>
Content-Type: text/plain; charset="utf-8"; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit
Precedence: Bulk
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, Paul Durrant <pdurrant@amazon.co.uk>,
 Jan Beulich <jbeulich@suse.com>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi Roger,

On 28.07.20 10:58, Roger Pau Monné wrote:
> On Tue, Jul 28, 2020 at 08:33:57AM +0000, Eslam Elnikety wrote:
>> The macro timer_int_route_cap evaluates to a 64-bit value. Extend the
>> size of the left side of timer_int_route_valid to match.
> 
> I'm very dull with these things, so forgive me.
> 
> Isn't the left side just promoted to an unsigned 64-bit value?
> 
> Also timer_int_route will strictly be <= 31, which makes the shift
> safe?

This is all true. The size mismatch is indeed benign. The patch is only 
for code sanity.

> 
> I'm not opposed to switching to use unsigned long, but I think I'm not
> understanding the issue.
> 
> Thanks, Roger.
> 

Regards,
Eslam


From xen-devel-bounces@lists.xenproject.org Tue Jul 28 09:18:35 2020
From: Paul Durrant <paul@xen.org>
To: qemu-devel@nongnu.org,
	xen-devel@lists.xenproject.org
Subject: [PATCH] configure: define CONFIG_XEN when Xen is enabled
Date: Tue, 28 Jul 2020 10:18:28 +0100
Message-Id: <20200728091828.21702-1-paul@xen.org>
X-Mailer: git-send-email 2.20.1
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Cc: Anthony Perard <anthony.perard@citrix.com>,
 Paul Durrant <pdurrant@amazon.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>,
 Laurent Vivier <laurent@vivier.eu>

From: Paul Durrant <pdurrant@amazon.com>

The recent commit da278d58a092 "accel: Move Xen accelerator code under
accel/xen/" introduced a subtle semantic change, making xen_enabled() always
return false unless CONFIG_XEN is defined prior to inclusion of sysemu/xen.h,
which appears to be the normal case. This causes various use-cases of QEMU
with Xen to break.

This patch makes sure that CONFIG_XEN is defined if --enable-xen is passed
to configure.

Fixes: da278d58a092 ("accel: Move Xen accelerator code under accel/xen/")
Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: "Philippe Mathieu-Daudé" <philmd@redhat.com>
Cc: Laurent Vivier <laurent@vivier.eu>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Anthony Perard <anthony.perard@citrix.com>
---
 configure | 1 +
 1 file changed, 1 insertion(+)

diff --git a/configure b/configure
index 2acc4d1465..f1b9d129fd 100755
--- a/configure
+++ b/configure
@@ -7434,6 +7434,7 @@ if test "$virglrenderer" = "yes" ; then
   echo "VIRGL_LIBS=$virgl_libs" >> $config_host_mak
 fi
 if test "$xen" = "yes" ; then
+  echo "CONFIG_XEN=y" >> $config_host_mak
   echo "CONFIG_XEN_BACKEND=y" >> $config_host_mak
   echo "CONFIG_XEN_CTRL_INTERFACE_VERSION=$xen_ctrl_version" >> $config_host_mak
 fi
-- 
2.20.1
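The effect of the hunk can be reproduced in isolation; the following standalone shell sketch re-creates the configure logic against a scratch file, with hypothetical variable values (it is not part of the patch):

```shell
#!/bin/sh
# Re-create the configure hunk's logic against a scratch file.
config_host_mak=$(mktemp)
xen=yes
xen_ctrl_version=41300   # hypothetical Xen control interface version

if test "$xen" = "yes" ; then
  echo "CONFIG_XEN=y" >> $config_host_mak
  echo "CONFIG_XEN_BACKEND=y" >> $config_host_mak
  echo "CONFIG_XEN_CTRL_INTERFACE_VERSION=$xen_ctrl_version" >> $config_host_mak
fi

# With the patch applied, all three settings land in config-host.mak.
grep -c '^CONFIG_XEN' "$config_host_mak"
rm -f "$config_host_mak"
```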



From xen-devel-bounces@lists.xenproject.org Tue Jul 28 09:26:17 2020
Subject: Re: [PATCH] x86/vhpet: Fix type size in timer_int_route_valid
To: Eslam Elnikety <elnikety@amazon.com>, <xen-devel@lists.xenproject.org>
References: <20200728083357.77999-1-elnikety@amazon.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <a55fba45-a008-059e-ea8c-b7300e2e8b7d@citrix.com>
Date: Tue, 28 Jul 2020 10:26:03 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20200728083357.77999-1-elnikety@amazon.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Jan Beulich <jbeulich@suse.com>,
 Paul Durrant <pdurrant@amazon.co.uk>

On 28/07/2020 09:33, Eslam Elnikety wrote:
> The macro timer_int_route_cap evaluates to a 64-bit value. Extend the
> size of the left side of timer_int_route_valid to match.
>
> This bug was discovered and resolved using Coverity Static Analysis
> Security Testing (SAST) by Synopsys, Inc.
>
> Signed-off-by: Eslam Elnikety <elnikety@amazon.com>
> ---
>  xen/arch/x86/hvm/hpet.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/xen/arch/x86/hvm/hpet.c b/xen/arch/x86/hvm/hpet.c
> index ca94e8b453..9afe6e6760 100644
> --- a/xen/arch/x86/hvm/hpet.c
> +++ b/xen/arch/x86/hvm/hpet.c
> @@ -66,7 +66,7 @@
>      MASK_EXTR(timer_config(h, n), HPET_TN_INT_ROUTE_CAP)
>  
>  #define timer_int_route_valid(h, n) \
> -    ((1u << timer_int_route(h, n)) & timer_int_route_cap(h, n))
> +    ((1ULL << timer_int_route(h, n)) & timer_int_route_cap(h, n))
>  
>  static inline uint64_t hpet_read_maincounter(HPETState *h, uint64_t guest_time)
>  {

Does this work?

diff --git a/xen/arch/x86/hvm/hpet.c b/xen/arch/x86/hvm/hpet.c
index ca94e8b453..638f6174de 100644
--- a/xen/arch/x86/hvm/hpet.c
+++ b/xen/arch/x86/hvm/hpet.c
@@ -62,8 +62,7 @@
 
 #define timer_int_route(h, n)    MASK_EXTR(timer_config(h, n), HPET_TN_ROUTE)
 
-#define timer_int_route_cap(h, n) \
-    MASK_EXTR(timer_config(h, n), HPET_TN_INT_ROUTE_CAP)
+#define timer_int_route_cap(h, n) (h)->hpet.timers[(n)].route
 
 #define timer_int_route_valid(h, n) \
     ((1u << timer_int_route(h, n)) & timer_int_route_cap(h, n))
diff --git a/xen/include/asm-x86/hvm/vpt.h b/xen/include/asm-x86/hvm/vpt.h
index f0e0eaec83..a41fc443cc 100644
--- a/xen/include/asm-x86/hvm/vpt.h
+++ b/xen/include/asm-x86/hvm/vpt.h
@@ -73,7 +73,13 @@ struct hpet_registers {
     uint64_t isr;               /* interrupt status reg */
     uint64_t mc64;              /* main counter */
     struct {                    /* timers */
-        uint64_t config;        /* configuration/cap */
+        union {
+            uint64_t config;    /* configuration/cap */
+            struct {
+                uint32_t _;
+                uint32_t route;
+            };
+        };
         uint64_t cmp;           /* comparator */
         uint64_t fsb;           /* FSB route, not supported now */
     } timers[HPET_TIMER_NUM];



From xen-devel-bounces@lists.xenproject.org Tue Jul 28 09:27:19 2020
MIME-Version: 1.0
References: <20200728091828.21702-1-paul@xen.org>
In-Reply-To: <20200728091828.21702-1-paul@xen.org>
From: Peter Maydell <peter.maydell@linaro.org>
Date: Tue, 28 Jul 2020 10:27:06 +0100
Message-ID: <CAFEAcA_wKTFWk9Uk5HMabqfa6QkkTAdzBotmnrA_EH1BR4XjYg@mail.gmail.com>
Subject: Re: [PATCH] configure: define CONFIG_XEN when Xen is enabled
To: Paul Durrant <paul@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Paul Durrant <pdurrant@amazon.com>, QEMU Developers <qemu-devel@nongnu.org>,
 Laurent Vivier <laurent@vivier.eu>, Anthony Perard <anthony.perard@citrix.com>,
 "open list:X86" <xen-devel@lists.xenproject.org>,
 =?UTF-8?Q?Philippe_Mathieu=2DDaud=C3=A9?= <philmd@redhat.com>

On Tue, 28 Jul 2020 at 10:19, Paul Durrant <paul@xen.org> wrote:
>
> From: Paul Durrant <pdurrant@amazon.com>
>
> The recent commit da278d58a092 "accel: Move Xen accelerator code under
> accel/xen/" introduced a subtle semantic change, making xen_enabled() always
> return false unless CONFIG_XEN is defined prior to inclusion of sysemu/xen.h,
> which appears to be the normal case. This causes various use-cases of QEMU
> with Xen to break.
>
> This patch makes sure that CONFIG_XEN is defined if --enable-xen is passed
> to configure.
>
> Fixes: da278d58a092 ("accel: Move Xen accelerator code under accel/xen/")
> Signed-off-by: Paul Durrant <pdurrant@amazon.com>
> ---
> Cc: "Philippe Mathieu-Daudé" <philmd@redhat.com>
> Cc: Laurent Vivier <laurent@vivier.eu>
> Cc: Stefano Stabellini <sstabellini@kernel.org>
> Cc: Anthony Perard <anthony.perard@citrix.com>
> ---
>  configure | 1 +
>  1 file changed, 1 insertion(+)
>
> diff --git a/configure b/configure
> index 2acc4d1465..f1b9d129fd 100755
> --- a/configure
> +++ b/configure
> @@ -7434,6 +7434,7 @@ if test "$virglrenderer" = "yes" ; then
>    echo "VIRGL_LIBS=$virgl_libs" >> $config_host_mak
>  fi
>  if test "$xen" = "yes" ; then
> +  echo "CONFIG_XEN=y" >> $config_host_mak
>    echo "CONFIG_XEN_BACKEND=y" >> $config_host_mak
>    echo "CONFIG_XEN_CTRL_INTERFACE_VERSION=$xen_ctrl_version" >> $config_host_mak
>  fi

Configure already defines CONFIG_XEN as a target-specific
config define in config-target.mak for the specific targets
that Xen will work for (ie if you build --enable-xen for
x86_64-softmmu and ppc64-softmmu then CONFIG_XEN is set for
the former and not the latter). This patch makes it a
build-wide config setting by putting it in config-host.mak.

We should figure out which of those two is correct and do
just one of them, not do both at the same time.

Since CONFIG_HAX, CONFIG_KVM and other accelerator-type
config defines are also per-target, I suspect that the
correct fix for this bug is not in configure but elsewhere.

thanks
-- PMM


From xen-devel-bounces@lists.xenproject.org Tue Jul 28 09:51:22 2020
Subject: Re: [PATCH] configure: define CONFIG_XEN when Xen is enabled
To: Peter Maydell <peter.maydell@linaro.org>, Paul Durrant <paul@xen.org>,
 Paolo Bonzini <pbonzini@redhat.com>
References: <20200728091828.21702-1-paul@xen.org>
 <CAFEAcA_wKTFWk9Uk5HMabqfa6QkkTAdzBotmnrA_EH1BR4XjYg@mail.gmail.com>
From: =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@redhat.com>
Message-ID: <32ad0742-bff2-1fbc-2f7a-d078980eb171@redhat.com>
Date: Tue, 28 Jul 2020 11:51:01 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.5.0
MIME-Version: 1.0
In-Reply-To: <CAFEAcA_wKTFWk9Uk5HMabqfa6QkkTAdzBotmnrA_EH1BR4XjYg@mail.gmail.com>
Content-Language: en-US
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Paul Durrant <pdurrant@amazon.com>, QEMU Developers <qemu-devel@nongnu.org>,
 Laurent Vivier <laurent@vivier.eu>, Anthony Perard <anthony.perard@citrix.com>,
 "open list:X86" <xen-devel@lists.xenproject.org>

On 7/28/20 11:27 AM, Peter Maydell wrote:
> On Tue, 28 Jul 2020 at 10:19, Paul Durrant <paul@xen.org> wrote:
>>
>> From: Paul Durrant <pdurrant@amazon.com>
>>
>> The recent commit da278d58a092 "accel: Move Xen accelerator code under
>> accel/xen/" introduced a subtle semantic change, making xen_enabled() always
>> return false unless CONFIG_XEN is defined prior to inclusion of sysemu/xen.h,
>> which appears to be the normal case. This causes various use-cases of QEMU
>> with Xen to break.
>>
>> This patch makes sure that CONFIG_XEN is defined if --enable-xen is passed
>> to configure.
>>
>> Fixes: da278d58a092 ("accel: Move Xen accelerator code under accel/xen/")
>> Signed-off-by: Paul Durrant <pdurrant@amazon.com>
>> ---
>> Cc: "Philippe Mathieu-Daudé" <philmd@redhat.com>
>> Cc: Laurent Vivier <laurent@vivier.eu>
>> Cc: Stefano Stabellini <sstabellini@kernel.org>
>> Cc: Anthony Perard <anthony.perard@citrix.com>
>> ---
>>  configure | 1 +
>>  1 file changed, 1 insertion(+)
>>
>> diff --git a/configure b/configure
>> index 2acc4d1465..f1b9d129fd 100755
>> --- a/configure
>> +++ b/configure
>> @@ -7434,6 +7434,7 @@ if test "$virglrenderer" = "yes" ; then
>>    echo "VIRGL_LIBS=$virgl_libs" >> $config_host_mak
>>  fi
>>  if test "$xen" = "yes" ; then
>> +  echo "CONFIG_XEN=y" >> $config_host_mak
>>    echo "CONFIG_XEN_BACKEND=y" >> $config_host_mak
>>    echo "CONFIG_XEN_CTRL_INTERFACE_VERSION=$xen_ctrl_version" >> $config_host_mak
>>  fi
> 
> Configure already defines CONFIG_XEN as a target-specific
> config define in config-target.mak for the specific targets
> that Xen will work for (ie if you build --enable-xen for
> x86_64-softmmu and ppc64-softmmu then CONFIG_XEN is set for
> the former and not the latter). This patch makes it a
> build-wide config setting by putting it in config-host.mak.
> 
> We should figure out which of those two is correct and do
> just one of them, not do both at the same time.
> 
> Since CONFIG_HAX, CONFIG_KVM and other accelerator-type
> config defines are also per-target, I suspect that the
> correct fix for this bug is not in configure but elsewhere.

This might come from this change:

-#include "hw/xen/xen.h"
+#include "sysemu/xen.h"

Before Xen was target-specific, now it is a target-agnostic accelerator.

However in include/qemu/osdep.h we have:

 30 #include "config-host.h"
 31 #ifdef NEED_CPU_H
 32 #include "config-target.h"
 33 #else
 34 #include "exec/poison.h"
 35 #endif

CONFIG_XEN is generated in "config-target.h" (target-specific),
so target-agnostic code always has it undefined.

I'd rather uninline xen_enabled(), but I'm not sure whether that has a
perf penalty. Paolo, is it OK to uninline it?

Phil.



From xen-devel-bounces@lists.xenproject.org Tue Jul 28 09:53:24 2020
MIME-Version: 1.0
References: <20200728091828.21702-1-paul@xen.org>
 <CAFEAcA_wKTFWk9Uk5HMabqfa6QkkTAdzBotmnrA_EH1BR4XjYg@mail.gmail.com>
 <32ad0742-bff2-1fbc-2f7a-d078980eb171@redhat.com>
In-Reply-To: <32ad0742-bff2-1fbc-2f7a-d078980eb171@redhat.com>
From: Peter Maydell <peter.maydell@linaro.org>
Date: Tue, 28 Jul 2020 10:53:10 +0100
Message-ID: <CAFEAcA84fH3aGpbrJoA6S3qJ-FjD3NZMoj0G7jqvRneH_pS6=A@mail.gmail.com>
Subject: Re: [PATCH] configure: define CONFIG_XEN when Xen is enabled
To: =?UTF-8?Q?Philippe_Mathieu=2DDaud=C3=A9?= <philmd@redhat.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Paul Durrant <paul@xen.org>,
 Paul Durrant <pdurrant@amazon.com>, QEMU Developers <qemu-devel@nongnu.org>,
 Laurent Vivier <laurent@vivier.eu>,
 "open list:X86" <xen-devel@lists.xenproject.org>,
 Anthony Perard <anthony.perard@citrix.com>,
 Paolo Bonzini <pbonzini@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Tue, 28 Jul 2020 at 10:51, Philippe Mathieu-Daudé <philmd@redhat.com> wrote:
> I'd rather uninline xen_enabled() but I'm not sure this has perf
> penalties. Paolo is that OK to uninline it?

Can we just follow the same working pattern we already have
for kvm_enabled() etc ?

thanks
-- PMM


From xen-devel-bounces@lists.xenproject.org Tue Jul 28 09:56:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jul 2020 09:56:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k0MLC-0005Lx-BB; Tue, 28 Jul 2020 09:56:10 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=YMeI=BH=redhat.com=philmd@srs-us1.protection.inumbo.net>)
 id 1k0MLB-0005Ls-0k
 for xen-devel@lists.xenproject.org; Tue, 28 Jul 2020 09:56:09 +0000
X-Inumbo-ID: 8ec4cd5a-d0b8-11ea-a888-12813bfff9fa
Received: from us-smtp-delivery-1.mimecast.com (unknown [207.211.31.81])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 8ec4cd5a-d0b8-11ea-a888-12813bfff9fa;
 Tue, 28 Jul 2020 09:56:07 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
 s=mimecast20190719; t=1595930167;
 h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
 content-transfer-encoding:content-transfer-encoding:
 in-reply-to:in-reply-to:references:references:autocrypt:autocrypt;
 bh=aUrAS5QUx7aJBq33EdKIen4hJEFhAfzXF11a+eBiMWg=;
 b=OVkNmgQBGoh0fDpQ165MEYJgj3c4nB2sPnlB/6Wsu2l+ghY0cxHYgletVURa1u8FOSEusB
 NyC3X9p4Ut7yZclUP+SlAADfKANWGnD836KzSMcboPS6rCyVsQwsQY8MSazBuxUu/748hW
 G3JtqUyXTxFjp94Q32XoP6pAAJYoYYw=
Received: from mail-wr1-f71.google.com (mail-wr1-f71.google.com
 [209.85.221.71]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-221-4vRTAGzJMJeiAoKdcKJLvg-1; Tue, 28 Jul 2020 05:56:05 -0400
X-MC-Unique: 4vRTAGzJMJeiAoKdcKJLvg-1
Received: by mail-wr1-f71.google.com with SMTP id d6so3549775wrv.23
 for <xen-devel@lists.xenproject.org>; Tue, 28 Jul 2020 02:56:05 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:subject:to:cc:references:from:autocrypt
 :message-id:date:user-agent:mime-version:in-reply-to
 :content-language:content-transfer-encoding;
 bh=aUrAS5QUx7aJBq33EdKIen4hJEFhAfzXF11a+eBiMWg=;
 b=gtO+6xJ/rEOcVeo3XQUGLhC9+8hK2aU4+hmFymulPcmS9VuQO//CDqpdorCFdWOer6
 eyHhkqTsRGdRHWmVKLdQ7MpwC37GeSfuHwFVRRcvcQyZlnBlJmb/v2Maf0/nelEbD5tr
 Au0YkfgiJtSe/w0TzjYSOL+BAHoJOTm4FQZ4QutNaUYVFgmVsd2RtgOYT0QppxYCyfYT
 /WilE6CKJzmQWt8TJSxOc0Z1+yl5RwJgmqZ1cEb6Q6GgUabtR2HF5OSwxXo63X7hU9R/
 Nkopc/SLQmuZZdSYhkm8rDlte9cdQRJCnDTVsyx5Thm7o4yAm43tjuliVd9XJPEGMQqi
 +z1g==
X-Gm-Message-State: AOAM53379WOABCJlfF0ww3NvLsFzNhdJmCGSTjy0yA5G8JzrtuNGSYVV
 CNZwTSPhjMdi8Ku1hcesdkNqhsVKdGYpo3f7ksFtVS4bKStQVeXlc5moGykbVR2fVBB+y03F43E
 b47grJXL4tKc7eweZ8mfl0GFBYO8=
X-Received: by 2002:adf:bc07:: with SMTP id s7mr25860876wrg.254.1595930164761; 
 Tue, 28 Jul 2020 02:56:04 -0700 (PDT)
X-Google-Smtp-Source: ABdhPJzPTniTmO93DbdEYYgW3nFfiWgEddHrB861exzX5lhYD6rzP5MEcd1ayoqZ2y2R89ir2LKEvg==
X-Received: by 2002:adf:bc07:: with SMTP id s7mr25860863wrg.254.1595930164627; 
 Tue, 28 Jul 2020 02:56:04 -0700 (PDT)
Received: from [192.168.1.39] (214.red-88-21-68.staticip.rima-tde.net.
 [88.21.68.214])
 by smtp.gmail.com with ESMTPSA id x4sm17624541wru.81.2020.07.28.02.56.03
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 28 Jul 2020 02:56:04 -0700 (PDT)
Subject: Re: [PATCH] configure: define CONFIG_XEN when Xen is enabled
To: Peter Maydell <peter.maydell@linaro.org>
References: <20200728091828.21702-1-paul@xen.org>
 <CAFEAcA_wKTFWk9Uk5HMabqfa6QkkTAdzBotmnrA_EH1BR4XjYg@mail.gmail.com>
 <32ad0742-bff2-1fbc-2f7a-d078980eb171@redhat.com>
 <CAFEAcA84fH3aGpbrJoA6S3qJ-FjD3NZMoj0G7jqvRneH_pS6=A@mail.gmail.com>
From: =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@redhat.com>
Autocrypt: addr=philmd@redhat.com; keydata=
 mQINBDXML8YBEADXCtUkDBKQvNsQA7sDpw6YLE/1tKHwm24A1au9Hfy/OFmkpzo+MD+dYc+7
 bvnqWAeGweq2SDq8zbzFZ1gJBd6+e5v1a/UrTxvwBk51yEkadrpRbi+r2bDpTJwXc/uEtYAB
 GvsTZMtiQVA4kRID1KCdgLa3zztPLCj5H1VZhqZsiGvXa/nMIlhvacRXdbgllPPJ72cLUkXf
 z1Zu4AkEKpccZaJspmLWGSzGu6UTZ7UfVeR2Hcc2KI9oZB1qthmZ1+PZyGZ/Dy+z+zklC0xl
 XIpQPmnfy9+/1hj1LzJ+pe3HzEodtlVA+rdttSvA6nmHKIt8Ul6b/h1DFTmUT1lN1WbAGxmg
 CH1O26cz5nTrzdjoqC/b8PpZiT0kO5MKKgiu5S4PRIxW2+RA4H9nq7nztNZ1Y39bDpzwE5Sp
 bDHzd5owmLxMLZAINtCtQuRbSOcMjZlg4zohA9TQP9krGIk+qTR+H4CV22sWldSkVtsoTaA2
 qNeSJhfHQY0TyQvFbqRsSNIe2gTDzzEQ8itsmdHHE/yzhcCVvlUzXhAT6pIN0OT+cdsTTfif
 MIcDboys92auTuJ7U+4jWF1+WUaJ8gDL69ThAsu7mGDBbm80P3vvUZ4fQM14NkxOnuGRrJxO
 qjWNJ2ZUxgyHAh5TCxMLKWZoL5hpnvx3dF3Ti9HW2dsUUWICSQARAQABtDJQaGlsaXBwZSBN
 YXRoaWV1LURhdWTDqSAoUGhpbCkgPHBoaWxtZEByZWRoYXQuY29tPokCVQQTAQgAPwIbDwYL
 CQgHAwIGFQgCCQoLBBYCAwECHgECF4AWIQSJweePYB7obIZ0lcuio/1u3q3A3gUCXsfWwAUJ
 KtymWgAKCRCio/1u3q3A3ircD/9Vjh3aFNJ3uF3hddeoFg1H038wZr/xi8/rX27M1Vj2j9VH
 0B8Olp4KUQw/hyO6kUxqkoojmzRpmzvlpZ0cUiZJo2bQIWnvScyHxFCv33kHe+YEIqoJlaQc
 JfKYlbCoubz+02E2A6bFD9+BvCY0LBbEj5POwyKGiDMjHKCGuzSuDRbCn0Mz4kCa7nFMF5Jv
 piC+JemRdiBd6102ThqgIsyGEBXuf1sy0QIVyXgaqr9O2b/0VoXpQId7yY7OJuYYxs7kQoXI
 6WzSMpmuXGkmfxOgbc/L6YbzB0JOriX0iRClxu4dEUg8Bs2pNnr6huY2Ft+qb41RzCJvvMyu
 gS32LfN0bTZ6Qm2A8ayMtUQgnwZDSO23OKgQWZVglGliY3ezHZ6lVwC24Vjkmq/2yBSLakZE
 6DZUjZzCW1nvtRK05ebyK6tofRsx8xB8pL/kcBb9nCuh70aLR+5cmE41X4O+MVJbwfP5s/RW
 9BFSL3qgXuXso/3XuWTQjJJGgKhB6xXjMmb1J4q/h5IuVV4juv1Fem9sfmyrh+Wi5V1IzKI7
 RPJ3KVb937eBgSENk53P0gUorwzUcO+ASEo3Z1cBKkJSPigDbeEjVfXQMzNt0oDRzpQqH2vp
 apo2jHnidWt8BsckuWZpxcZ9+/9obQ55DyVQHGiTN39hkETy3Emdnz1JVHTU0Q==
Message-ID: <a09853d3-5c27-893f-54ed-63dc461bfacb@redhat.com>
Date: Tue, 28 Jul 2020 11:56:03 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.5.0
MIME-Version: 1.0
In-Reply-To: <CAFEAcA84fH3aGpbrJoA6S3qJ-FjD3NZMoj0G7jqvRneH_pS6=A@mail.gmail.com>
Content-Language: en-US
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Paul Durrant <paul@xen.org>,
 Paul Durrant <pdurrant@amazon.com>, QEMU Developers <qemu-devel@nongnu.org>,
 Laurent Vivier <laurent@vivier.eu>,
 "open list:X86" <xen-devel@lists.xenproject.org>,
 Anthony Perard <anthony.perard@citrix.com>,
 Paolo Bonzini <pbonzini@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 7/28/20 11:53 AM, Peter Maydell wrote:
> On Tue, 28 Jul 2020 at 10:51, Philippe Mathieu-Daudé <philmd@redhat.com> wrote:
>> I'd rather uninline xen_enabled() but I'm not sure this has perf
>> penalties. Paolo is that OK to uninline it?

I suppose no because it is in various hot paths:

exec.c:588:    if (xen_enabled() && memory_access_is_direct(mr, is_write)) {
exec.c:2243:        if (xen_enabled()) {
exec.c:2326:    if (xen_enabled()) {
exec.c:2478:    } else if (xen_enabled()) {
exec.c:2525:            } else if (xen_enabled()) {
exec.c:2576:    if (xen_enabled() && block->host == NULL) {
exec.c:2609:    if (xen_enabled() && block->host == NULL) {
exec.c:2657:    if (xen_enabled()) {
exec.c:3625:        if (xen_enabled()) {
exec.c:3717:    if (xen_enabled()) {
include/exec/ram_addr.h:295:    if (!mask && !xen_enabled()) {

> 
> Can we just follow the same working pattern we already have
> for kvm_enabled() etc ?

This was the idea... I'll look at what I missed.

Phil.



From xen-devel-bounces@lists.xenproject.org Tue Jul 28 10:00:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jul 2020 10:00:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k0MPX-0006Fu-Tp; Tue, 28 Jul 2020 10:00:39 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=YMeI=BH=redhat.com=philmd@srs-us1.protection.inumbo.net>)
 id 1k0MPW-0006Fp-VC
 for xen-devel@lists.xenproject.org; Tue, 28 Jul 2020 10:00:38 +0000
X-Inumbo-ID: 2fee4ea4-d0b9-11ea-8b25-bc764e2007e4
Received: from us-smtp-1.mimecast.com (unknown [207.211.31.81])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 2fee4ea4-d0b9-11ea-8b25-bc764e2007e4;
 Tue, 28 Jul 2020 10:00:38 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
 s=mimecast20190719; t=1595930437;
 h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
 content-transfer-encoding:content-transfer-encoding:
 in-reply-to:in-reply-to:references:references:autocrypt:autocrypt;
 bh=8llwRlhuTRz0kMN5En47W7+f3GRPs0B7vs9qH049YxE=;
 b=imKyPDbsNdJIRrwWr+HSAACLOetNXFIHrVAUjztq+uE1axoiPwLTiWDftlwbVD5/pZT9la
 RZCV7IIIX8/sozY1mnDXL2+PwIeJzKqY0tN74FI7DJ0BcGQYKD0EwtsDb4McmfruF201Zu
 YNB4zTTzjncEslvE7LzOcBX3sUrRub0=
Received: from mail-wr1-f70.google.com (mail-wr1-f70.google.com
 [209.85.221.70]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-79-Jx80jnCOPU6-908sOrtx7w-1; Tue, 28 Jul 2020 06:00:36 -0400
X-MC-Unique: Jx80jnCOPU6-908sOrtx7w-1
Received: by mail-wr1-f70.google.com with SMTP id w7so2804591wre.11
 for <xen-devel@lists.xenproject.org>; Tue, 28 Jul 2020 03:00:35 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:subject:from:to:cc:references:autocrypt
 :message-id:date:user-agent:mime-version:in-reply-to
 :content-language:content-transfer-encoding;
 bh=8llwRlhuTRz0kMN5En47W7+f3GRPs0B7vs9qH049YxE=;
 b=TuCksaRcAcyzjFK+x+JiuGlJe52GN//Pgnz3fzL++ima2Z9JcKz/0GxmnCt9UgE2ZV
 Hnw3poCeZU28roAaZuNhabP09jgjgGwBvLarM2N6vhBfdptbqH3wDZNbNqW+sCgWYjfU
 KVOv5uqbYq4NNx6uhgM4TFu+pSIEI5dfrwO5rsK87UZXHywOjwg2fY0kh0OA4LuTqctd
 GB7Vf+38TcbpfEcoAWGnthurGP+tNsxK8XCPq3gzhicDddomogLjEP/PWT4yhc+z8Z44
 sDfCo0x+QgNDOc+lMCQ7ojE+9iwUvHP51lwOv2bYRVhUDBoW9Sr/qLHfn88KSLUCitLQ
 31OQ==
X-Gm-Message-State: AOAM532nGINiwKW1xrp5J21hyp3LW93ViDtfyfBlw9KISveQP6fWFWSd
 Yz55ugL8naQ11QSogWk4u8sCObrch/5T5Ycir/7kKukoXcyNnL2CzcVqbWG8fbBiwKKr7WicjIl
 2tVwP69VM/XQ0oRlU+9heziQMXlQ=
X-Received: by 2002:a7b:c2a1:: with SMTP id c1mr3230777wmk.89.1595930435104;
 Tue, 28 Jul 2020 03:00:35 -0700 (PDT)
X-Google-Smtp-Source: ABdhPJx3X9HbyE+SCPrq9Yc39ZhAsG9Vz8HVwZoIaX6iNk77HjX+r29efYJXwp7uBX+x6ies5bt5LQ==
X-Received: by 2002:a7b:c2a1:: with SMTP id c1mr3230759wmk.89.1595930434887;
 Tue, 28 Jul 2020 03:00:34 -0700 (PDT)
Received: from [192.168.1.39] (214.red-88-21-68.staticip.rima-tde.net.
 [88.21.68.214])
 by smtp.gmail.com with ESMTPSA id z127sm3454400wme.44.2020.07.28.03.00.33
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 28 Jul 2020 03:00:34 -0700 (PDT)
Subject: Re: [PATCH] configure: define CONFIG_XEN when Xen is enabled
From: =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@redhat.com>
To: Peter Maydell <peter.maydell@linaro.org>
References: <20200728091828.21702-1-paul@xen.org>
 <CAFEAcA_wKTFWk9Uk5HMabqfa6QkkTAdzBotmnrA_EH1BR4XjYg@mail.gmail.com>
 <32ad0742-bff2-1fbc-2f7a-d078980eb171@redhat.com>
 <CAFEAcA84fH3aGpbrJoA6S3qJ-FjD3NZMoj0G7jqvRneH_pS6=A@mail.gmail.com>
 <a09853d3-5c27-893f-54ed-63dc461bfacb@redhat.com>
Autocrypt: addr=philmd@redhat.com; keydata=
 mQINBDXML8YBEADXCtUkDBKQvNsQA7sDpw6YLE/1tKHwm24A1au9Hfy/OFmkpzo+MD+dYc+7
 bvnqWAeGweq2SDq8zbzFZ1gJBd6+e5v1a/UrTxvwBk51yEkadrpRbi+r2bDpTJwXc/uEtYAB
 GvsTZMtiQVA4kRID1KCdgLa3zztPLCj5H1VZhqZsiGvXa/nMIlhvacRXdbgllPPJ72cLUkXf
 z1Zu4AkEKpccZaJspmLWGSzGu6UTZ7UfVeR2Hcc2KI9oZB1qthmZ1+PZyGZ/Dy+z+zklC0xl
 XIpQPmnfy9+/1hj1LzJ+pe3HzEodtlVA+rdttSvA6nmHKIt8Ul6b/h1DFTmUT1lN1WbAGxmg
 CH1O26cz5nTrzdjoqC/b8PpZiT0kO5MKKgiu5S4PRIxW2+RA4H9nq7nztNZ1Y39bDpzwE5Sp
 bDHzd5owmLxMLZAINtCtQuRbSOcMjZlg4zohA9TQP9krGIk+qTR+H4CV22sWldSkVtsoTaA2
 qNeSJhfHQY0TyQvFbqRsSNIe2gTDzzEQ8itsmdHHE/yzhcCVvlUzXhAT6pIN0OT+cdsTTfif
 MIcDboys92auTuJ7U+4jWF1+WUaJ8gDL69ThAsu7mGDBbm80P3vvUZ4fQM14NkxOnuGRrJxO
 qjWNJ2ZUxgyHAh5TCxMLKWZoL5hpnvx3dF3Ti9HW2dsUUWICSQARAQABtDJQaGlsaXBwZSBN
 YXRoaWV1LURhdWTDqSAoUGhpbCkgPHBoaWxtZEByZWRoYXQuY29tPokCVQQTAQgAPwIbDwYL
 CQgHAwIGFQgCCQoLBBYCAwECHgECF4AWIQSJweePYB7obIZ0lcuio/1u3q3A3gUCXsfWwAUJ
 KtymWgAKCRCio/1u3q3A3ircD/9Vjh3aFNJ3uF3hddeoFg1H038wZr/xi8/rX27M1Vj2j9VH
 0B8Olp4KUQw/hyO6kUxqkoojmzRpmzvlpZ0cUiZJo2bQIWnvScyHxFCv33kHe+YEIqoJlaQc
 JfKYlbCoubz+02E2A6bFD9+BvCY0LBbEj5POwyKGiDMjHKCGuzSuDRbCn0Mz4kCa7nFMF5Jv
 piC+JemRdiBd6102ThqgIsyGEBXuf1sy0QIVyXgaqr9O2b/0VoXpQId7yY7OJuYYxs7kQoXI
 6WzSMpmuXGkmfxOgbc/L6YbzB0JOriX0iRClxu4dEUg8Bs2pNnr6huY2Ft+qb41RzCJvvMyu
 gS32LfN0bTZ6Qm2A8ayMtUQgnwZDSO23OKgQWZVglGliY3ezHZ6lVwC24Vjkmq/2yBSLakZE
 6DZUjZzCW1nvtRK05ebyK6tofRsx8xB8pL/kcBb9nCuh70aLR+5cmE41X4O+MVJbwfP5s/RW
 9BFSL3qgXuXso/3XuWTQjJJGgKhB6xXjMmb1J4q/h5IuVV4juv1Fem9sfmyrh+Wi5V1IzKI7
 RPJ3KVb937eBgSENk53P0gUorwzUcO+ASEo3Z1cBKkJSPigDbeEjVfXQMzNt0oDRzpQqH2vp
 apo2jHnidWt8BsckuWZpxcZ9+/9obQ55DyVQHGiTN39hkETy3Emdnz1JVHTU0Q==
Message-ID: <ee8374bd-1257-1d29-6800-3902426b1a0b@redhat.com>
Date: Tue, 28 Jul 2020 12:00:33 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.5.0
MIME-Version: 1.0
In-Reply-To: <a09853d3-5c27-893f-54ed-63dc461bfacb@redhat.com>
Content-Language: en-US
Authentication-Results: relay.mimecast.com;
 auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=philmd@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Paul Durrant <paul@xen.org>,
 Paul Durrant <pdurrant@amazon.com>, QEMU Developers <qemu-devel@nongnu.org>,
 Laurent Vivier <laurent@vivier.eu>,
 "open list:X86" <xen-devel@lists.xenproject.org>,
 Anthony Perard <anthony.perard@citrix.com>,
 Paolo Bonzini <pbonzini@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 7/28/20 11:56 AM, Philippe Mathieu-Daudé wrote:
> On 7/28/20 11:53 AM, Peter Maydell wrote:
>> On Tue, 28 Jul 2020 at 10:51, Philippe Mathieu-Daudé <philmd@redhat.com> wrote:
>>> I'd rather uninline xen_enabled() but I'm not sure this has perf
>>> penalties. Paolo is that OK to uninline it?
> 
> I suppose no because it is in various hot paths:
> 
> exec.c:588:    if (xen_enabled() && memory_access_is_direct(mr, is_write)) {
> exec.c:2243:        if (xen_enabled()) {
> exec.c:2326:    if (xen_enabled()) {
> exec.c:2478:    } else if (xen_enabled()) {
> exec.c:2525:            } else if (xen_enabled()) {
> exec.c:2576:    if (xen_enabled() && block->host == NULL) {
> exec.c:2609:    if (xen_enabled() && block->host == NULL) {
> exec.c:2657:    if (xen_enabled()) {
> exec.c:3625:        if (xen_enabled()) {
> exec.c:3717:    if (xen_enabled()) {
> include/exec/ram_addr.h:295:    if (!mask && !xen_enabled()) {
> 
>>
>> Can we just follow the same working pattern we already have
>> for kvm_enabled() etc ?
> 
> This was the idea... I'll look at what I missed.

Apparently kvm_enabled() checks CONFIG_KVM_IS_POSSIBLE instead
of CONFIG_KVM, I suppose to bypass this limitation (from osdep.h):

 21 #ifdef NEED_CPU_H
 22 # ifdef CONFIG_KVM
 24 #  define CONFIG_KVM_IS_POSSIBLE
 25 # endif
 26 #else
 27 # define CONFIG_KVM_IS_POSSIBLE
 28 #endif
 29
 30 #ifdef CONFIG_KVM_IS_POSSIBLE
    ...

Paolo do you confirm this is the reason?

I'll prepare a similar patch.

> 
> Phil.
> 



From xen-devel-bounces@lists.xenproject.org Tue Jul 28 10:09:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jul 2020 10:09:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k0MY7-0006Uf-Rn; Tue, 28 Jul 2020 10:09:31 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=YMeI=BH=redhat.com=philmd@srs-us1.protection.inumbo.net>)
 id 1k0MY6-0006Ua-Ui
 for xen-devel@lists.xenproject.org; Tue, 28 Jul 2020 10:09:30 +0000
X-Inumbo-ID: 6d1286f0-d0ba-11ea-8b26-bc764e2007e4
Received: from us-smtp-delivery-1.mimecast.com (unknown [207.211.31.81])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 6d1286f0-d0ba-11ea-8b26-bc764e2007e4;
 Tue, 28 Jul 2020 10:09:30 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
 s=mimecast20190719; t=1595930969;
 h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
 content-transfer-encoding:content-transfer-encoding;
 bh=BhUW9ESioUiSA1BxCHyRzLsoe5bVOy2Ifgl37qfIfW8=;
 b=NtZplSVJwZ7F5rq1xoMDzu8GtJNtYQ9kJJQ6xtpmGyQmEnFTt8LXSxRU+RyYLFxdPDHZy/
 7xULRLR9URQU+kbIuSaf0zbV8bn/W8WIgaNh7BZOliVdSHaLMODlR0H0lQguE3OObM2ebL
 MTiZKb9vrSkvtAvxJcq6jKRkzojzDjY=
Received: from mail-wm1-f70.google.com (mail-wm1-f70.google.com
 [209.85.128.70]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-483-EGscLMuqPqqiQS7NiaRsog-1; Tue, 28 Jul 2020 06:09:28 -0400
X-MC-Unique: EGscLMuqPqqiQS7NiaRsog-1
Received: by mail-wm1-f70.google.com with SMTP id l5so8686845wml.7
 for <xen-devel@lists.xenproject.org>; Tue, 28 Jul 2020 03:09:28 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:to:cc:subject:date:message-id:mime-version
 :content-transfer-encoding;
 bh=BhUW9ESioUiSA1BxCHyRzLsoe5bVOy2Ifgl37qfIfW8=;
 b=rScWMQTxXXShHykeXVJ2wm7lOCm9s+y1pPSGef4+d50WZFOg4WkGbtBFBPYH064s9l
 GXhkPrsQGUzNSJGTHDQIGMMW2fjaXhjkaQlZLpX+Sr+gYnJKx03hr4Iggg7aW4HhNqCy
 YaWLEzJc4TglL3z5Jr08mBN3RBh6tb+cN+9kQytK98H90VRCElrjiaUfUYIHhIspp3Tw
 9jY2zkbQLcE5VKRCefD5lAek2dc2nipL3j/zfpBqpR5qIkU0+RwcdjjX+kYSEwEqj2WA
 csOaD3iMZcIWLFTl0oLtufTZE7PalfmLz8aJyw0BOAGYHrJ3Qej4RwfInjQ/SP7iJ6w2
 2DLQ==
X-Gm-Message-State: AOAM533KblaNqPu9Ydb39I+f6pBE1fDonaOqE3xhPgwRmtFV0X+Wehe5
 AWgeW1KcHPCbYKnntkrsgIVMC7uxlQHQCrfNK/D0VZhazpU3wuyCTaFT2bNYSiPztkPQkDzKWRy
 B56RkYzGi5Cdb9u3iCnu6BuTEdZY=
X-Received: by 2002:a7b:cc12:: with SMTP id f18mr3138781wmh.129.1595930967269; 
 Tue, 28 Jul 2020 03:09:27 -0700 (PDT)
X-Google-Smtp-Source: ABdhPJyBky3N1fiOwAUAbRDwFfDOzU+3zWFKcPh/zvwPQji/U4MbFglqtL0NILX0yFEK52pUUrByrA==
X-Received: by 2002:a7b:cc12:: with SMTP id f18mr3138766wmh.129.1595930967114; 
 Tue, 28 Jul 2020 03:09:27 -0700 (PDT)
Received: from localhost.localdomain (214.red-88-21-68.staticip.rima-tde.net.
 [88.21.68.214])
 by smtp.gmail.com with ESMTPSA id t202sm3475876wmt.20.2020.07.28.03.09.26
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 28 Jul 2020 03:09:26 -0700 (PDT)
From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>
To: qemu-devel@nongnu.org
Subject: [PATCH-for-5.1] accel/xen: Fix xen_enabled() behavior on
 target-agnostic objects
Date: Tue, 28 Jul 2020 12:09:25 +0200
Message-Id: <20200728100925.10454-1-philmd@redhat.com>
X-Mailer: git-send-email 2.21.3
MIME-Version: 1.0
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Peter Maydell <peter.maydell@linaro.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Paul Durrant <paul@xen.org>,
 Paul Durrant <pdurrant@amazon.com>, xen-devel@lists.xenproject.org,
 Anthony Perard <anthony.perard@citrix.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

CONFIG_XEN is generated by configure and stored in "config-target.h",
which is (obviously) only included for target-specific objects.
This is a problem for target-agnostic objects: CONFIG_XEN is never
defined for them, so xen_enabled() is always inlined as 'false'.

Fix by following the KVM scheme: define CONFIG_XEN_IS_POSSIBLE when
the target is not known, forcing a call to the non-inlined function
that returns the xen_allowed boolean.

Fixes: da278d58a092 ("accel: Move Xen accelerator code under accel/xen/")
Reported-by: Paul Durrant <pdurrant@amazon.com>
Suggested-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
---
 include/sysemu/xen.h | 14 +++++++++++---
 1 file changed, 11 insertions(+), 3 deletions(-)

diff --git a/include/sysemu/xen.h b/include/sysemu/xen.h
index 1ca292715e..385a1fa2bf 100644
--- a/include/sysemu/xen.h
+++ b/include/sysemu/xen.h
@@ -8,7 +8,15 @@
 #ifndef SYSEMU_XEN_H
 #define SYSEMU_XEN_H
 
-#ifdef CONFIG_XEN
+#ifdef NEED_CPU_H
+# ifdef CONFIG_XEN
+#  define CONFIG_XEN_IS_POSSIBLE
+# endif
+#else
+# define CONFIG_XEN_IS_POSSIBLE
+#endif
+
+#ifdef CONFIG_XEN_IS_POSSIBLE
 
 bool xen_enabled(void);
 
@@ -18,7 +26,7 @@ void xen_ram_alloc(ram_addr_t ram_addr, ram_addr_t size,
                    struct MemoryRegion *mr, Error **errp);
 #endif
 
-#else /* !CONFIG_XEN */
+#else /* !CONFIG_XEN_IS_POSSIBLE */
 
 #define xen_enabled() 0
 #ifndef CONFIG_USER_ONLY
@@ -33,6 +41,6 @@ static inline void xen_ram_alloc(ram_addr_t ram_addr, ram_addr_t size,
 }
 #endif
 
-#endif /* CONFIG_XEN */
+#endif /* CONFIG_XEN_IS_POSSIBLE */
 
 #endif
-- 
2.21.3



From xen-devel-bounces@lists.xenproject.org Tue Jul 28 10:12:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jul 2020 10:12:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k0Mak-0007GD-EN; Tue, 28 Jul 2020 10:12:14 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Z9cF=BH=citrix.com=george.dunlap@srs-us1.protection.inumbo.net>)
 id 1k0Mai-0007G8-P7
 for xen-devel@lists.xenproject.org; Tue, 28 Jul 2020 10:12:12 +0000
X-Inumbo-ID: cc88db48-d0ba-11ea-a88d-12813bfff9fa
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id cc88db48-d0ba-11ea-a88d-12813bfff9fa;
 Tue, 28 Jul 2020 10:12:10 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595931130;
 h=from:to:cc:subject:date:message-id:references:
 in-reply-to:content-id:content-transfer-encoding: mime-version;
 bh=wvjaajopii/5qeiGlm4RMJFm4dux7qZNErym3qfGlD8=;
 b=Ki3tUUWW01Sc9FwU9YedVkfo+hiVH6VBv+7Sp8mxEsd5P6F0X9d7mHZX
 BPlEhIklVl6wRtVuKR1rwPGWZE09avPQS3ApQpxPqE0BpBhsam1+iv8Mz
 bJGFJDggyVlRyJfFbUMcTXlcg8YsU7PwXF4IYExVcXu5N78qckRT/t/bR c=;
Authentication-Results: esa2.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: 9I0ZK56lX5DAzvZBQ8nzAh1NdX72NXPvnCcCoMj/FW6k3qYdDzLEF6Khr4an8wlkPhXKBueraU
 1WivZlftNwz9V0F9fGl9PW9NwKXbgoXNoZihQeAH/Bx/7ME/Lp42J8+MfJaq7/nEPvaFi2t4N+
 R1gMWQfv59G6GIeyLvge4MjQIVrazi7nL6OujHP0oZ5Qve1zgdw6yhtq3pHSUmjzz9oFxhKndH
 I/fIOMPmU3lCch/ia1eYEgHvhdSH0OXhirHiPXhP/drLVh9xcgmWcRyv0mTzvjMS5eodVXDCOm
 H54=
X-SBRS: 2.7
X-MesageID: 23344363
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,406,1589256000"; d="scan'208";a="23344363"
From: George Dunlap <George.Dunlap@citrix.com>
To: Julien Grall <julien@xen.org>
Subject: Re: [[XSATOOL]] repo: Add missing spaces in the configure cmdline for
 "xentools"
Thread-Topic: [[XSATOOL]] repo: Add missing spaces in the configure cmdline
 for "xentools"
Thread-Index: AQHWZC+x8Lr8eJ8LdEOQDs5vOcX3hKkcpTEA
Date: Tue, 28 Jul 2020 10:12:06 +0000
Message-ID: <7AA2EA3C-EE42-4D3A-B266-03D38C25A6DB@citrix.com>
References: <20200727160415.717-1-julien@xen.org>
 <0299389e-bb24-6ec9-9fb4-18cc7c4ec181@xen.org>
In-Reply-To: <0299389e-bb24-6ec9-9fb4-18cc7c4ec181@xen.org>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-mailer: Apple Mail (2.3608.80.23.2.2)
x-ms-exchange-messagesentrepresentingtype: 1
x-ms-exchange-transport-fromentityheader: Hosted
Content-Type: text/plain; charset="utf-8"
Content-ID: <700AE54022947B42AD1FFFF141FFD307@citrix.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Julien
 Grall <jgrall@amazon.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> On Jul 27, 2020, at 5:04 PM, Julien Grall <julien@xen.org> wrote:
> 
> Hmmm I forgot to CC George. Sorry for that.
> 
> On 27/07/2020 17:04, Julien Grall wrote:
>> From: Julien Grall <jgrall@amazon.com>
>> The operator + will just concatenate two strings. As the result, the
>> configure cmdline for "xentools" will look like:
>> ./configure --disable-stubdom --disable-qemu-traditional--with-system-qemu=/bin/false --with-system-seabios=/bin/false--disable-ovmf
>> This can be avoided by explicitely adding the spaces.
>> Signed-off-by: Julien Grall <jgrall@amazon.com>

Oops — thanks.

Reviewed-by: George Dunlap <george.dunlap@citrix.com>


From xen-devel-bounces@lists.xenproject.org Tue Jul 28 10:13:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jul 2020 10:13:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k0McJ-0007OJ-QV; Tue, 28 Jul 2020 10:13:51 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=iaET=BH=linaro.org=peter.maydell@srs-us1.protection.inumbo.net>)
 id 1k0McI-0007OD-2d
 for xen-devel@lists.xenproject.org; Tue, 28 Jul 2020 10:13:50 +0000
X-Inumbo-ID: 07944953-d0bb-11ea-8b26-bc764e2007e4
Received: from mail-oi1-x241.google.com (unknown [2607:f8b0:4864:20::241])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 07944953-d0bb-11ea-8b26-bc764e2007e4;
 Tue, 28 Jul 2020 10:13:49 +0000 (UTC)
Received: by mail-oi1-x241.google.com with SMTP id y22so16949000oie.8
 for <xen-devel@lists.xenproject.org>; Tue, 28 Jul 2020 03:13:49 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc:content-transfer-encoding;
 bh=dIGitEZzrPfalnnDRusGe28l2o0rf72O/DvG7N9N97E=;
 b=b0fbrVc8hmErEmlFhfL5lTeN9t7A3Y/NNz9ll10zrA+Rqr6Rm33Pceg/B5+x4mmt+Y
 EHGT2F4txMqMn5Sf4UTpq5A8nt+UJzjmVOCu/z27+epYaa4ProLwm0ttSKIaoJs+DyFT
 kbBZaFP9pyzsVLI3NPiAiLTCcqYJLlo/x+kQN/FH3/+MecdNkLz/QaRHqm59CAmkkUw7
 Fo047mxpm3vj+qaoPjxhWJAImYZE7NgWvBMvxu9MGwD4gzoA4He7ZYSPm00pw72HgX4j
 S/IxVEn9Fb+dRcB5NKxPulhl/XnafW/kclyefM+2AjouvvckBIyGc1R5Ns8RovWh6du0
 hyEw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc:content-transfer-encoding;
 bh=dIGitEZzrPfalnnDRusGe28l2o0rf72O/DvG7N9N97E=;
 b=E4+jIZ2KMBnql5MwQS+U4JE/Re2r4B7c23NDMDzxz3OOQuVX85hAR5UcBZqi+kyUc1
 OQSrJ+MP1FoxV3XLGAJ/IfTrfKrKLX9QKrYQg7tZBB7KIv0NyS+1ZybX1k7xPQWFz8HY
 Qz9yQnXT13feosNhcB5+d3TT7On1hHpCCEoimKNJcuTUq/umuo0lC6wIoB0eRWWexU9y
 WmXjVKfupOPUQBeZD0rgjBTqZEcRlNxk+I1brIu88p61uXhwHz2dVXEVRrjT4USYqlAo
 6btRUIV0vcMG/QvcTnAMK3B6dFZtlnnjrkOvEraboWThA4xPxhGXAi2RIIc5yupBaFkA
 N0zw==
X-Gm-Message-State: AOAM5323rjuf+tfzdS3f2mCt1dPpZU6P0mNJLCdVvucrStO9WAO/1Cv2
 5Lt1TdvBVc7XD0tW/O+biu9vTsxYSfOETHzBrjBblw==
X-Google-Smtp-Source: ABdhPJzFJC22rWeZ4x/tjI2G7ahXjg4PWDt64qOjlCb0r7cf4shJQj8p9wDx5Yd4f0r/+JUpZ9r5PzneOkT/HQbrL8w=
X-Received: by 2002:aca:4a96:: with SMTP id x144mr2814958oia.163.1595931229010; 
 Tue, 28 Jul 2020 03:13:49 -0700 (PDT)
MIME-Version: 1.0
References: <20200728091828.21702-1-paul@xen.org>
 <CAFEAcA_wKTFWk9Uk5HMabqfa6QkkTAdzBotmnrA_EH1BR4XjYg@mail.gmail.com>
 <32ad0742-bff2-1fbc-2f7a-d078980eb171@redhat.com>
 <CAFEAcA84fH3aGpbrJoA6S3qJ-FjD3NZMoj0G7jqvRneH_pS6=A@mail.gmail.com>
 <a09853d3-5c27-893f-54ed-63dc461bfacb@redhat.com>
 <ee8374bd-1257-1d29-6800-3902426b1a0b@redhat.com>
In-Reply-To: <ee8374bd-1257-1d29-6800-3902426b1a0b@redhat.com>
From: Peter Maydell <peter.maydell@linaro.org>
Date: Tue, 28 Jul 2020 11:13:38 +0100
Message-ID: <CAFEAcA9zp48p64mPxR4_NyLDdYxjtEkKE_xQz_4D+Uau7HTE3A@mail.gmail.com>
Subject: Re: [PATCH] configure: define CONFIG_XEN when Xen is enabled
To: =?UTF-8?Q?Philippe_Mathieu=2DDaud=C3=A9?= <philmd@redhat.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Paul Durrant <paul@xen.org>,
 Paul Durrant <pdurrant@amazon.com>, QEMU Developers <qemu-devel@nongnu.org>,
 Laurent Vivier <laurent@vivier.eu>,
 "open list:X86" <xen-devel@lists.xenproject.org>,
 Anthony Perard <anthony.perard@citrix.com>,
 Paolo Bonzini <pbonzini@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Tue, 28 Jul 2020 at 11:00, Philippe Mathieu-Daudé <philmd@redhat.com> wrote:
> Apparently kvm_enabled() checks CONFIG_KVM_IS_POSSIBLE instead
> of CONFIG_KVM, I suppose to bypass this limitation (from osdep.h):
>
>  21 #ifdef NEED_CPU_H
>  22 # ifdef CONFIG_KVM
>  24 #  define CONFIG_KVM_IS_POSSIBLE
>  25 # endif
>  26 #else
>  27 # define CONFIG_KVM_IS_POSSIBLE
>  28 #endif
>  29
>  30 #ifdef CONFIG_KVM_IS_POSSIBLE
>     ...

Interesting. We don't have CONFIG_WHPX_IS_POSSIBLE,
CONFIG_HVF_IS_POSSIBLE, etc -- also bugs, or do we avoid
them by happening not to check whpx_enabled(), hvf_enabled(),
etc in obj-common-compiled source files?

thanks
-- PMM


From xen-devel-bounces@lists.xenproject.org Tue Jul 28 10:16:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jul 2020 10:16:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k0MeQ-0007VW-7i; Tue, 28 Jul 2020 10:16:02 +0000
Resent-Date: Tue, 28 Jul 2020 10:16:02 +0000
Resent-Message-Id: <E1k0MeQ-0007VW-7i@lists.xenproject.org>
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PPGR=BH=patchew.org=no-reply@srs-us1.protection.inumbo.net>)
 id 1k0MeP-0007VN-Co
 for xen-devel@lists.xenproject.org; Tue, 28 Jul 2020 10:16:01 +0000
X-Inumbo-ID: 50a8014c-d0bb-11ea-a88e-12813bfff9fa
Received: from sender4-of-o57.zoho.com (unknown [136.143.188.57])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 50a8014c-d0bb-11ea-a88e-12813bfff9fa;
 Tue, 28 Jul 2020 10:15:52 +0000 (UTC)
ARC-Seal: i=1; a=rsa-sha256; t=1595931335; cv=none; 
 d=zohomail.com; s=zohoarc; 
 b=gnWIF8tKZ21krMd4CyQBMHsJ3sp85Sky3fozkOqX5IPzKyBFRAFN4rxWCfiakbA6G45qPkZAwrZOz4upub/0uCCFmGZcu5nHmwAxV9pMG40T3A/gSCtnMmRrBf6DWmf/6CJMkLtYoKINksSQr1ZKVnUhPQNsRBZ++IQ2/jDeuhQ=
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com;
 s=zohoarc; t=1595931335;
 h=Content-Type:Content-Transfer-Encoding:Cc:Date:From:In-Reply-To:MIME-Version:Message-ID:Reply-To:Subject:To;
 bh=IAuw8LyCXyP8m3ZRiPRN7vC6Q+SkTb4+JXRWw49XNSE=; 
 b=WazofzMan9KjIol6ttc45TjVf67qSJcZykkSH3/ux052HxGznA0IiMJOZtLIC4uZmF+dpbKjrLWdjiKtMKFM8gB5j7OsyBjnjSl2cM0FFe8Sq/PjLnsGZqoRptJ4x1/cxxVEzrYNH90gv+Sqi1BReUG5e/NU9bk+atiae6QTF2g=
ARC-Authentication-Results: i=1; mx.zohomail.com;
 spf=pass  smtp.mailfrom=no-reply@patchew.org;
 dmarc=pass header.from=<no-reply@patchew.org>
 header.from=<no-reply@patchew.org>
Received: from [172.17.0.3] (23.253.156.214 [23.253.156.214]) by
 mx.zohomail.com with SMTPS id 1595931333107699.9161531048355;
 Tue, 28 Jul 2020 03:15:33 -0700 (PDT)
Subject: Re: [PATCH-for-5.1] accel/xen: Fix xen_enabled() behavior on
 target-agnostic objects
Message-ID: <159593133240.22228.17592220013997022688@66eaa9a8a123>
In-Reply-To: <20200728100925.10454-1-philmd@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Resent-From: 
From: no-reply@patchew.org
To: philmd@redhat.com
Date: Tue, 28 Jul 2020 03:15:33 -0700 (PDT)
X-ZohoMailClient: External
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Reply-To: qemu-devel@nongnu.org
Cc: peter.maydell@linaro.org, sstabellini@kernel.org, paul@xen.org,
 pdurrant@amazon.com, qemu-devel@nongnu.org, pbonzini@redhat.com,
 anthony.perard@citrix.com, xen-devel@lists.xenproject.org, philmd@redhat.com
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Patchew URL: https://patchew.org/QEMU/20200728100925.10454-1-philmd@redhat.com/

Hi,

This series failed the docker-quick@centos7 build test. Please find the testing commands and
their output below. If you have Docker installed, you can probably reproduce it
locally.

=== TEST SCRIPT BEGIN ===
#!/bin/bash
make docker-image-centos7 V=1 NETWORK=1
time make docker-test-quick@centos7 SHOW_ENV=1 J=14 NETWORK=1
=== TEST SCRIPT END ===

The full log is available at
http://patchew.org/logs/20200728100925.10454-1-philmd@redhat.com/testing.docker-quick@centos7/?type=message.
---
Email generated automatically by Patchew [https://patchew.org/].
Please send your feedback to patchew-devel@redhat.com


From xen-devel-bounces@lists.xenproject.org Tue Jul 28 10:16:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jul 2020 10:16:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k0MeT-0007Vm-G1; Tue, 28 Jul 2020 10:16:05 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=K5Bo=BH=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1k0MeS-0007Vg-L0
 for xen-devel@lists.xenproject.org; Tue, 28 Jul 2020 10:16:04 +0000
X-Inumbo-ID: 571de7da-d0bb-11ea-8b26-bc764e2007e4
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 571de7da-d0bb-11ea-8b26-bc764e2007e4;
 Tue, 28 Jul 2020 10:16:03 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595931363;
 h=from:to:cc:subject:date:message-id:mime-version;
 bh=eResLKENKrnBvZCEPNYPX+GbqgM/48vwzyRSj+mBaw8=;
 b=D7ulHQGTbCsO0jZGFZcVzJtrsQ0xtyQQo58EGu8tBqteqKK/O9o5Wgi2
 eP2loA9WxmHkxgBzJmmbOxqIXcvC+tNGXrKTRJ1vEjmYw16NS9yphTf9q
 Xagk3ZV4xNBuaQe7RxV0e+fCa1QJ6AOw0yIrgY4K5B9oY+/7f602j/TpA o=;
Authentication-Results: esa2.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: 4jNe9HkyjwpDKXp9aDdQaicuN+zhaneZt6IDT4vGSS6Cmm5gSPaOg2DU4IieG74IfRsJsoMo7+
 dfDFv7uAHOBpd9gLh1Ql5WF96i95uKqYtscNhkuvppfR3jxX39ZH9YfYf6IXuF1tTVsoO37l2k
 0GxlWl8LYZ8i7GgPsoWH2ovGhe8SnUFXtgfwebiYeo2JvgQMJuQevIRYd9PEZrk528A79EHNq2
 kh4g68we4OUzBz6aujwRX4aIlDlEJGTekepNg3mdFY6iDlNzyGe5YAdR6ltYZaNA9MKbkPEQYm
 gWw=
X-SBRS: 2.7
X-MesageID: 23344596
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,406,1589256000"; d="scan'208";a="23344596"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Subject: [PATCH] public/domctl: Fix the struct xen_domctl ABI in 32bit builds
Date: Tue, 28 Jul 2020 11:15:29 +0100
Message-ID: <20200728101529.13753-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
MIME-Version: 1.0
Content-Type: text/plain
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien
 Grall <julien@xen.org>, Wei Liu <wl@xen.org>,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Jan
 Beulich <JBeulich@suse.com>, Ian Jackson <ian.jackson@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

The Xen domctl ABI currently relies on the union containing a field with
alignment of 8.

32bit projects which only copy the used subset of functionality end up with an
ABI breakage if they don't have at least one uint64_aligned_t field copied.

Insert explicit padding, and some build assertions to ensure it never changes
moving forwards.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: George Dunlap <George.Dunlap@eu.citrix.com>
CC: Ian Jackson <ian.jackson@citrix.com>
CC: Jan Beulich <JBeulich@suse.com>
CC: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Wei Liu <wl@xen.org>
CC: Julien Grall <julien@xen.org>

Further proof that C isn't an appropriate way to describe an ABI...
---
 xen/common/domctl.c         | 8 ++++++++
 xen/include/public/domctl.h | 1 +
 2 files changed, 9 insertions(+)

diff --git a/xen/common/domctl.c b/xen/common/domctl.c
index a69b3b59a8..20ef8399bd 100644
--- a/xen/common/domctl.c
+++ b/xen/common/domctl.c
@@ -959,6 +959,14 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
     return ret;
 }
 
+static void __init __maybe_unused build_assertions(void)
+{
+    struct xen_domctl d;
+
+    BUILD_BUG_ON(sizeof(d) != 16 /* header */ + 128 /* union */);
+    BUILD_BUG_ON(offsetof(typeof(d), u) != 16);
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
index 59bdc28c89..9464a9058a 100644
--- a/xen/include/public/domctl.h
+++ b/xen/include/public/domctl.h
@@ -1222,6 +1222,7 @@ struct xen_domctl {
 #define XEN_DOMCTL_gdbsx_domstatus             1003
     uint32_t interface_version; /* XEN_DOMCTL_INTERFACE_VERSION */
     domid_t  domain;
+    uint16_t _pad[3];
     union {
         struct xen_domctl_createdomain      createdomain;
         struct xen_domctl_getdomaininfo     getdomaininfo;
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Tue Jul 28 10:22:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jul 2020 10:22:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k0MkG-0008R5-6B; Tue, 28 Jul 2020 10:22:04 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=TSwU=BH=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1k0MkF-0008R0-5g
 for xen-devel@lists.xenproject.org; Tue, 28 Jul 2020 10:22:03 +0000
X-Inumbo-ID: 2d462e76-d0bc-11ea-a891-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2d462e76-d0bc-11ea-a891-12813bfff9fa;
 Tue, 28 Jul 2020 10:22:02 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=SyFuU7+M8MVwPeAkKMNBGwL2tvYp7N5iAJ2NgFRHDPo=; b=gJ3h4Z26GKcwX1YndiawYvz7gL
 vAtfb5oJmFNfw3y/x78SKY3W0qX7tVjHCIpT+wi5czZCnyNnt9jLCLzTZP4gK+uLz4aE81NLcTilx
 ORkeHg7rXgUFbKqeS/f7gMLMyq4Y4pQERb8Jc3Z/H2p+zkk0SS11Jab1oHSmuvK7fSwI=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1k0Mk9-00057a-8g; Tue, 28 Jul 2020 10:21:57 +0000
Received: from 54-240-197-239.amazon.com ([54.240.197.239]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1k0Mk9-0002gt-0D; Tue, 28 Jul 2020 10:21:57 +0000
Subject: Re: [PATCH] public/domctl: Fix the struct xen_domctl ABI in 32bit
 builds
To: Andrew Cooper <andrew.cooper3@citrix.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20200728101529.13753-1-andrew.cooper3@citrix.com>
From: Julien Grall <julien@xen.org>
Message-ID: <7ecab0e1-b218-cff0-f2dd-a5f81c5afaeb@xen.org>
Date: Tue, 28 Jul 2020 11:21:54 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:68.0)
 Gecko/20100101 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20200728101529.13753-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Jan Beulich <JBeulich@suse.com>,
 Ian Jackson <ian.jackson@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi Andrew,

On 28/07/2020 11:15, Andrew Cooper wrote:
> The Xen domctl ABI currently relies on the union containing a field with
> alignment of 8.
> 
> 32bit projects which only copy the used subset of functionality end up with an
> ABI breakage if they don't have at least one uint64_aligned_t field copied.
> 
> Insert explicit padding, and some build assertions to ensure it never changes
> moving forwards.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Acked-by: Julien Grall <jgrall@amazon.com>

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Jul 28 10:39:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jul 2020 10:39:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k0N17-00012P-Nc; Tue, 28 Jul 2020 10:39:29 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Ae7q=BH=gmail.com=alejandro.gonzalez.correo@srs-us1.protection.inumbo.net>)
 id 1k0N16-00011t-Dn
 for xen-devel@lists.xenproject.org; Tue, 28 Jul 2020 10:39:28 +0000
X-Inumbo-ID: 9c0495c6-d0be-11ea-8b26-bc764e2007e4
Received: from mail-oi1-x243.google.com (unknown [2607:f8b0:4864:20::243])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9c0495c6-d0be-11ea-8b26-bc764e2007e4;
 Tue, 28 Jul 2020 10:39:27 +0000 (UTC)
Received: by mail-oi1-x243.google.com with SMTP id w17so17035004oie.6
 for <xen-devel@lists.xenproject.org>; Tue, 28 Jul 2020 03:39:26 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc:content-transfer-encoding;
 bh=F/Ok6kXKD7ZtkHsiCRqYn+tABQDkmCdFMgWtmQnzruE=;
 b=eQpT/2ug3/kbEtXJOQ7V5qGPweHBgcxFFc7N00o3Rpa9ON4x1LK9VN2xCpFMuJZWfM
 5K5/A6HJmFxKYIwHA0c0+pJShojFAUa26adaGlNLTJBcwPebOfHm6KAX0wmT1Ny/Irhs
 6/UINtNGAnLal1iDkijIW8dslyV/JTtVtWd3AplR78s0j/eLA3xoFA2uxb3KPvhBRD7W
 mPQsB49HZL4g8w6hQEL2wFl6QJQ4Tv0wTPxCproOHcBdwigtYu+ZHsI79mxxT8OPbSYO
 D36See2l5Ykrb+t83SP+5ou+EXUTDLrWnfAZOw/sdLC21yySCI2QVxN7PsOST1lD1jNg
 gGkw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc:content-transfer-encoding;
 bh=F/Ok6kXKD7ZtkHsiCRqYn+tABQDkmCdFMgWtmQnzruE=;
 b=I0Pb8e/e85Z20arQ8KHlMBSx8s8NiyamrxvP5XOlqnB9mp8lAXknij1jegJ3a7qwWJ
 2nsCq7x7Az9a3T3jXhmzO704hGW1wJLCgiZCY1Gnydu1V+yFEuQJMwho7sLNxOakErWO
 O9JoJv8RAcWWP+hxVoLExE9Jihpzv8lqvn61heRIX8CFVc2xqOGBpv+ZAVIKju/g1MPd
 T7IViCqAbxUnYV7tzkzcngQ56pYSiQeGuceL0T553rGl/NI/1QvH3rV0eNY4cphkiPYq
 YPmkgyzme6zv2exPLHddbYYddW+27pKDIXlejkYVDvRD+St4j4taqNBnYGJYPO5fbFFJ
 fR5A==
X-Gm-Message-State: AOAM533c0ubhoDJbg2PLA9sy3HWECcg+j63uhTLRK/zjpABCwzh3GLFv
 oe/Ea6qL6xwjWKm1J4pr4NsWD8kxDqZHioIXImU=
X-Google-Smtp-Source: ABdhPJx9Jr+sG7cAAE0/zaQ15n0JwVIsrHMvEFSnuFvRl87JtK6CVw3FRDNn5yNZCc42ywq7qCcC28BN1ePQM1o+Wo0=
X-Received: by 2002:a05:6808:3b8:: with SMTP id
 n24mr2807719oie.84.1595932766515; 
 Tue, 28 Jul 2020 03:39:26 -0700 (PDT)
MIME-Version: 1.0
References: <CA+wirGqXMoRkS-aJmfFLipUv8SdY5LKV1aMrF0yKRJQaMvzs6Q@mail.gmail.com>
 <1c5cee83-295e-cc02-d018-b53ddc6e3590@xen.org>
 <CA+wirGpFvLBzYRBaq8yspJj8j9-yoLwN88bt079qM5yqPTwtcA@mail.gmail.com>
 <02b630bd-22e0-afde-6784-be068d0948ae@arm.com>
In-Reply-To: <02b630bd-22e0-afde-6784-be068d0948ae@arm.com>
From: Alejandro <alejandro.gonzalez.correo@gmail.com>
Date: Tue, 28 Jul 2020 12:39:14 +0200
Message-ID: <CA+wirGoG+im2mwb2ye6j4MpcVtfQ-prhhmVgdBTosus7hjeu=w@mail.gmail.com>
Subject: Re: dom0 Linux 5.8-rc5 kernel failing to initialize cooling maps for
 Allwinner H6 SoC
To: =?UTF-8?Q?Andr=C3=A9_Przywara?= <andre.przywara@arm.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien@xen.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hello,

On Sun, 26 Jul 2020 at 22:25, André Przywara
(<andre.przywara@arm.com>) wrote:
> So this was actually my first thought: The firmware (U-Boot SPL) sets up
> some basic CPU frequency (888 MHz for H6 [1]), which is known to never
> overheat the chip, even under full load. So any concern from your side
> about the board or SoC overheating could be dismissed, with the current
> mainline code, at least. However you lose the full speed, by quite a
> margin on the H6 (on the A64 it's only 816 vs 1200(ish) MHz).
> However, without the clock entries in the CPU node, the frequency would
> never be changed by Dom0 anyway (nor by Xen, which doesn't even know how
> to do this).
> So from a practical point of view: unless you hack Xen to pass on more
> cpu node properties, you are stuck at 888 MHz anyway, and don't need to
> worry about overheating.
Thank you. Knowing that at least it won't overheat is a relief. But
performance definitely suffers from the current situation, and quite a
bit. I'm thinking about using KVM instead: even if it does less
paravirtualization of guests, I'm sure that being able to run the CPU
at its maximum frequency would offset the additional overhead and, in
general, offer better performance. But with KVM I lose the ability to
have individual domUs dedicated to particular device drivers, which is
a nice thing to have from a security standpoint.

> Now if you would pass on the CPU clock frequency control to Dom0, you
> run into more issues: the Linux governors would probably try to setup
> both frequency and voltage based on load, BUT this would be Dom0's bogus
> perception of the actual system load. Even with pinned Dom0 vCPUs, a
> busy system might spend most of its CPU time in DomU VCPUs, which
> probably makes it look mostly idle in Dom0. Using a fixed governor
> (performance) would avoid this, at the cost of running full speed all of
> the time, probably needlessly.
>
> So fixing the CPU clocking issue is more complex and requires more
> ground work in Xen first, probably involving some enlightenend Dom0
> drivers as well. I didn't follow latest developments in this area, nor
> do I remember x86's answer to this, but it's not something easy, I would
> presume.
I understand, thanks :). I know that recent Intel CPUs (from Sandy
Bridge onwards) use P-states to manage frequencies, and even have a
mode of operation that lets the CPU select the P-states by itself. On
older processors, Xen can probably rely on ACPI data to do the
frequency scaling. But the most similar "standard thing" that my board
has, an AR100 coprocessor that, with the (work-in-progress) Crust
firmware, can be used with SCMI, doesn't even seem to support the use
case of changing the CPU frequency... and SCMI is the most promising
approach for adding DVFS support in Xen for ARM, according to this
previous work:
https://www.slideshare.net/xen_com_mgr/xpdds18-cpufreq-in-xen-on-arm-oleksandr-tyshchenko-epam-systems

> Alejandro: can you try to measure the actual CPU frequency in Dom0?
> Maybe some easy benchmark? "mhz" from lmbench does a great job in
> telling you the actual frequency, just by clever measurement. But any
> other CPU bound benchmark would do, if you compare bare metal Linux vs.
> Dom0.
I have measured the CPU frequency in Dom0 using lmbench several times
and it seems to be stuck at 888 MHz, the frequency set by U-Boot.
Overall, the system feels more sluggish than when using bare Linux,
too. It doesn't matter if I apply the "hacky fix" I mentioned before
or not.

> Also, does cpufreq come up in Dom0 at all? Can you select governors and
> frequencies?
It doesn't come up, and no sysfs entries are created for cpufreq. With
the "fix", the kernel prints an error message complaining that it
couldn't probe cpufreq-dt, but it still doesn't come up, and sysfs
entries for cpufreq aren't created either.


From xen-devel-bounces@lists.xenproject.org Tue Jul 28 11:09:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jul 2020 11:09:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k0NUE-0003at-6C; Tue, 28 Jul 2020 11:09:34 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=uMAr=BH=amazon.com=prvs=4712fd9bf=elnikety@srs-us1.protection.inumbo.net>)
 id 1k0NUD-0003ao-Nm
 for xen-devel@lists.xenproject.org; Tue, 28 Jul 2020 11:09:33 +0000
X-Inumbo-ID: d036db5c-d0c2-11ea-a8a0-12813bfff9fa
Received: from smtp-fw-9102.amazon.com (unknown [207.171.184.29])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d036db5c-d0c2-11ea-a8a0-12813bfff9fa;
 Tue, 28 Jul 2020 11:09:32 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=amazon.com; i=@amazon.com; q=dns/txt; s=amazon201209;
 t=1595934572; x=1627470572;
 h=subject:to:cc:references:from:message-id:date:
 mime-version:in-reply-to:content-transfer-encoding;
 bh=3Requ7kIPFVQtzR1jFatrXyBIb7TnieU1eKfFKFEi2Y=;
 b=mAie5v7UlX+RWi45EdTAqrIOaaX0AdGC6xcw5ZSOcZ2EaK3+OXRlTMF3
 EfSBnaccobsg3vlSnHLaUnXwCvKcBr+Yrr42O85ygDuuYQLPfFcNpxq+F
 79RLxaxl3FlTjtGX03vXj5bag4GVvG59sx72Mq9rbNvtP8yUmXp+Jn3I3 Q=;
IronPort-SDR: TxpgzrKoVN4IKCnHDU0jgqi0jdFjgJEyuTLyxPTtGoXoAEvXnXiN4TzJIywZV/NEkO2Pnm8f/H
 Pah0glIPdQfw==
X-IronPort-AV: E=Sophos;i="5.75,406,1589241600"; d="scan'208";a="63548064"
Received: from sea32-co-svc-lb4-vlan3.sea.corp.amazon.com (HELO
 email-inbound-relay-2c-2225282c.us-west-2.amazon.com) ([10.47.23.38])
 by smtp-border-fw-out-9102.sea19.amazon.com with ESMTP;
 28 Jul 2020 11:09:30 +0000
Received: from EX13MTAUEA002.ant.amazon.com
 (pdx4-ws-svc-p6-lb7-vlan3.pdx.amazon.com [10.170.41.166])
 by email-inbound-relay-2c-2225282c.us-west-2.amazon.com (Postfix) with ESMTPS
 id 3561BA1DD9; Tue, 28 Jul 2020 11:09:30 +0000 (UTC)
Received: from EX13D03EUA002.ant.amazon.com (10.43.165.166) by
 EX13MTAUEA002.ant.amazon.com (10.43.61.77) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Tue, 28 Jul 2020 11:09:29 +0000
Received: from a483e73f63b0.ant.amazon.com (10.43.162.73) by
 EX13D03EUA002.ant.amazon.com (10.43.165.166) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Tue, 28 Jul 2020 11:09:25 +0000
Subject: Re: [PATCH] x86/vhpet: Fix type size in timer_int_route_valid
To: Andrew Cooper <andrew.cooper3@citrix.com>, Eslam Elnikety
 <elnikety@amazon.com>, <xen-devel@lists.xenproject.org>
References: <20200728083357.77999-1-elnikety@amazon.com>
 <a55fba45-a008-059e-ea8c-b7300e2e8b7d@citrix.com>
From: Eslam Elnikety <elnikety@amazon.com>
Message-ID: <09b2f75d-13d7-8f53-54a1-6f10ecd7b6e2@amazon.com>
Date: Tue, 28 Jul 2020 13:09:21 +0200
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:68.0)
 Gecko/20100101 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <a55fba45-a008-059e-ea8c-b7300e2e8b7d@citrix.com>
Content-Type: text/plain; charset="utf-8"; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-Originating-IP: [10.43.162.73]
X-ClientProxiedBy: EX13D42UWB004.ant.amazon.com (10.43.161.99) To
 EX13D03EUA002.ant.amazon.com (10.43.165.166)
Precedence: Bulk
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Paul Durrant <pdurrant@amazon.co.uk>, Jan Beulich <jbeulich@suse.com>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 28.07.20 11:26, Andrew Cooper wrote:
> On 28/07/2020 09:33, Eslam Elnikety wrote:
>> The macro timer_int_route_cap evaluates to a 64-bit value. Extend the
>> size of the left side of timer_int_route_valid to match.
>>
>> This bug was discovered and resolved using Coverity Static Analysis
>> Security Testing (SAST) by Synopsys, Inc.
>>
>> Signed-off-by: Eslam Elnikety <elnikety@amazon.com>
>> ---
>>   xen/arch/x86/hvm/hpet.c | 2 +-
>>   1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/xen/arch/x86/hvm/hpet.c b/xen/arch/x86/hvm/hpet.c
>> index ca94e8b453..9afe6e6760 100644
>> --- a/xen/arch/x86/hvm/hpet.c
>> +++ b/xen/arch/x86/hvm/hpet.c
>> @@ -66,7 +66,7 @@
>>       MASK_EXTR(timer_config(h, n), HPET_TN_INT_ROUTE_CAP)
>>   
>>   #define timer_int_route_valid(h, n) \
>> -    ((1u << timer_int_route(h, n)) & timer_int_route_cap(h, n))
>> +    ((1ULL << timer_int_route(h, n)) & timer_int_route_cap(h, n))
>>   
>>   static inline uint64_t hpet_read_maincounter(HPETState *h, uint64_t guest_time)
>>   {
> 
> Does this work?

Yes! This is better than my fix (and I like that it clarifies the route
part of the config). Will you sign off and send a patch?

> 
> diff --git a/xen/arch/x86/hvm/hpet.c b/xen/arch/x86/hvm/hpet.c
> index ca94e8b453..638f6174de 100644
> --- a/xen/arch/x86/hvm/hpet.c
> +++ b/xen/arch/x86/hvm/hpet.c
> @@ -62,8 +62,7 @@
>   
>   #define timer_int_route(h, n)    MASK_EXTR(timer_config(h, n),
> HPET_TN_ROUTE)
>   
> -#define timer_int_route_cap(h, n) \
> -    MASK_EXTR(timer_config(h, n), HPET_TN_INT_ROUTE_CAP)
> +#define timer_int_route_cap(h, n) (h)->hpet.timers[(n)].route
>   
>   #define timer_int_route_valid(h, n) \
>       ((1u << timer_int_route(h, n)) & timer_int_route_cap(h, n))
> diff --git a/xen/include/asm-x86/hvm/vpt.h b/xen/include/asm-x86/hvm/vpt.h
> index f0e0eaec83..a41fc443cc 100644
> --- a/xen/include/asm-x86/hvm/vpt.h
> +++ b/xen/include/asm-x86/hvm/vpt.h
> @@ -73,7 +73,13 @@ struct hpet_registers {
>       uint64_t isr;               /* interrupt status reg */
>       uint64_t mc64;              /* main counter */
>       struct {                    /* timers */
> -        uint64_t config;        /* configuration/cap */
> +        union {
> +            uint64_t config;    /* configuration/cap */
> +            struct {
> +                uint32_t _;
> +                uint32_t route;
> +            };
> +        };
>           uint64_t cmp;           /* comparator */
>           uint64_t fsb;           /* FSB route, not supported now */
>       } timers[HPET_TIMER_NUM];
> 
> 



From xen-devel-bounces@lists.xenproject.org Tue Jul 28 11:19:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jul 2020 11:19:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k0NdC-0004Ve-8U; Tue, 28 Jul 2020 11:18:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=FtAL=BH=arm.com=andre.przywara@srs-us1.protection.inumbo.net>)
 id 1k0NdB-0004VZ-9w
 for xen-devel@lists.xenproject.org; Tue, 28 Jul 2020 11:18:49 +0000
X-Inumbo-ID: 1af5f8de-d0c4-11ea-8b27-bc764e2007e4
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 1af5f8de-d0c4-11ea-8b27-bc764e2007e4;
 Tue, 28 Jul 2020 11:18:47 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 0D6D51FB;
 Tue, 28 Jul 2020 04:18:47 -0700 (PDT)
Received: from [192.168.2.22] (unknown [172.31.20.19])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 49F283F66E;
 Tue, 28 Jul 2020 04:18:46 -0700 (PDT)
Subject: Re: dom0 Linux 5.8-rc5 kernel failing to initialize cooling maps for
 Allwinner H6 SoC
To: Alejandro <alejandro.gonzalez.correo@gmail.com>
References: <CA+wirGqXMoRkS-aJmfFLipUv8SdY5LKV1aMrF0yKRJQaMvzs6Q@mail.gmail.com>
 <1c5cee83-295e-cc02-d018-b53ddc6e3590@xen.org>
 <CA+wirGpFvLBzYRBaq8yspJj8j9-yoLwN88bt079qM5yqPTwtcA@mail.gmail.com>
 <02b630bd-22e0-afde-6784-be068d0948ae@arm.com>
 <CA+wirGoG+im2mwb2ye6j4MpcVtfQ-prhhmVgdBTosus7hjeu=w@mail.gmail.com>
From: =?UTF-8?Q?Andr=c3=a9_Przywara?= <andre.przywara@arm.com>
Autocrypt: addr=andre.przywara@arm.com; prefer-encrypt=mutual; keydata=
 xsFNBFNPCKMBEAC+6GVcuP9ri8r+gg2fHZDedOmFRZPtcrMMF2Cx6KrTUT0YEISsqPoJTKld
 tPfEG0KnRL9CWvftyHseWTnU2Gi7hKNwhRkC0oBL5Er2hhNpoi8x4VcsxQ6bHG5/dA7ctvL6
 kYvKAZw4X2Y3GTbAZIOLf+leNPiF9175S8pvqMPi0qu67RWZD5H/uT/TfLpvmmOlRzNiXMBm
 kGvewkBpL3R2clHquv7pB6KLoY3uvjFhZfEedqSqTwBVu/JVZZO7tvYCJPfyY5JG9+BjPmr+
 REe2gS6w/4DJ4D8oMWKoY3r6ZpHx3YS2hWZFUYiCYovPxfj5+bOr78sg3JleEd0OB0yYtzTT
 esiNlQpCo0oOevwHR+jUiaZevM4xCyt23L2G+euzdRsUZcK/M6qYf41Dy6Afqa+PxgMEiDto
 ITEH3Dv+zfzwdeqCuNU0VOGrQZs/vrKOUmU/QDlYL7G8OIg5Ekheq4N+Ay+3EYCROXkstQnf
 YYxRn5F1oeVeqoh1LgGH7YN9H9LeIajwBD8OgiZDVsmb67DdF6EQtklH0ycBcVodG1zTCfqM
 AavYMfhldNMBg4vaLh0cJ/3ZXZNIyDlV372GmxSJJiidxDm7E1PkgdfCnHk+pD8YeITmSNyb
 7qeU08Hqqh4ui8SSeUp7+yie9zBhJB5vVBJoO5D0MikZAODIDwARAQABzS1BbmRyZSBQcnp5
 d2FyYSAoQVJNKSA8YW5kcmUucHJ6eXdhcmFAYXJtLmNvbT7CwXsEEwECACUCGwMGCwkIBwMC
 BhUIAgkKCwQWAgMBAh4BAheABQJTWSV8AhkBAAoJEAL1yD+ydue63REP/1tPqTo/f6StS00g
 NTUpjgVqxgsPWYWwSLkgkaUZn2z9Edv86BLpqTY8OBQZ19EUwfNehcnvR+Olw+7wxNnatyxo
 D2FG0paTia1SjxaJ8Nx3e85jy6l7N2AQrTCFCtFN9lp8Pc0LVBpSbjmP+Peh5Mi7gtCBNkpz
 KShEaJE25a/+rnIrIXzJHrsbC2GwcssAF3bd03iU41J1gMTalB6HCtQUwgqSsbG8MsR/IwHW
 XruOnVp0GQRJwlw07e9T3PKTLj3LWsAPe0LHm5W1Q+euoCLsZfYwr7phQ19HAxSCu8hzp43u
 zSw0+sEQsO+9wz2nGDgQCGepCcJR1lygVn2zwRTQKbq7Hjs+IWZ0gN2nDajScuR1RsxTE4WR
 lj0+Ne6VrAmPiW6QqRhliDO+e82riI75ywSWrJb9TQw0+UkIQ2DlNr0u0TwCUTcQNN6aKnru
 ouVt3qoRlcD5MuRhLH+ttAcmNITMg7GQ6RQajWrSKuKFrt6iuDbjgO2cnaTrLbNBBKPTG4oF
 D6kX8Zea0KvVBagBsaC1CDTDQQMxYBPDBSlqYCb/b2x7KHTvTAHUBSsBRL6MKz8wwruDodTM
 4E4ToV9URl4aE/msBZ4GLTtEmUHBh4/AYwk6ACYByYKyx5r3PDG0iHnJ8bV0OeyQ9ujfgBBP
 B2t4oASNnIOeGEEcQ2rjzsFNBFNPCKMBEACm7Xqafb1Dp1nDl06aw/3O9ixWsGMv1Uhfd2B6
 it6wh1HDCn9HpekgouR2HLMvdd3Y//GG89irEasjzENZPsK82PS0bvkxxIHRFm0pikF4ljIb
 6tca2sxFr/H7CCtWYZjZzPgnOPtnagN0qVVyEM7L5f7KjGb1/o5EDkVR2SVSSjrlmNdTL2Rd
 zaPqrBoxuR/y/n856deWqS1ZssOpqwKhxT1IVlF6S47CjFJ3+fiHNjkljLfxzDyQXwXCNoZn
 BKcW9PvAMf6W1DGASoXtsMg4HHzZ5fW+vnjzvWiC4pXrcP7Ivfxx5pB+nGiOfOY+/VSUlW/9
 GdzPlOIc1bGyKc6tGREH5lErmeoJZ5k7E9cMJx+xzuDItvnZbf6RuH5fg3QsljQy8jLlr4S6
 8YwxlObySJ5K+suPRzZOG2+kq77RJVqAgZXp3Zdvdaov4a5J3H8pxzjj0yZ2JZlndM4X7Msr
 P5tfxy1WvV4Km6QeFAsjcF5gM+wWl+mf2qrlp3dRwniG1vkLsnQugQ4oNUrx0ahwOSm9p6kM
 CIiTITo+W7O9KEE9XCb4vV0ejmLlgdDV8ASVUekeTJkmRIBnz0fa4pa1vbtZoi6/LlIdAEEt
 PY6p3hgkLLtr2GRodOW/Y3vPRd9+rJHq/tLIfwc58ZhQKmRcgrhtlnuTGTmyUqGSiMNfpwAR
 AQABwsFfBBgBAgAJBQJTTwijAhsMAAoJEAL1yD+ydue64BgP/33QKczgAvSdj9XTC14wZCGE
 U8ygZwkkyNf021iNMj+o0dpLU48PIhHIMTXlM2aiiZlPWgKVlDRjlYuc9EZqGgbOOuR/pNYA
 JX9vaqszyE34JzXBL9DBKUuAui8z8GcxRcz49/xtzzP0kH3OQbBIqZWuMRxKEpRptRT0wzBL
 O31ygf4FRxs68jvPCuZjTGKELIo656/Hmk17cmjoBAJK7JHfqdGkDXk5tneeHCkB411p9WJU
 vMO2EqsHjobjuFm89hI0pSxlUoiTL0Nuk9Edemjw70W4anGNyaQtBq+qu1RdjUPBvoJec7y/
 EXJtoGxq9Y+tmm22xwApSiIOyMwUi9A1iLjQLmngLeUdsHyrEWTbEYHd2sAM2sqKoZRyBDSv
 ejRvZD6zwkY/9nRqXt02H1quVOP42xlkwOQU6gxm93o/bxd7S5tEA359Sli5gZRaucpNQkwd
 KLQdCvFdksD270r4jU/rwR2R/Ubi+txfy0dk2wGBjl1xpSf0Lbl/KMR5TQntELfLR4etizLq
 Xpd2byn96Ivi8C8u9zJruXTueHH8vt7gJ1oax3yKRGU5o2eipCRiKZ0s/T7fvkdq+8beg9ku
 fDO4SAgJMIl6H5awliCY2zQvLHysS/Wb8QuB09hmhLZ4AifdHyF1J5qeePEhgTA+BaUbiUZf
 i4aIXCH3Wv6K
Organization: ARM Ltd.
Message-ID: <e091c32f-d121-d549-a2fa-f906d28ff8f1@arm.com>
Date: Tue, 28 Jul 2020 12:17:14 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <CA+wirGoG+im2mwb2ye6j4MpcVtfQ-prhhmVgdBTosus7hjeu=w@mail.gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien@xen.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 28/07/2020 11:39, Alejandro wrote:
> Hello,
> 
> El dom., 26 jul. 2020 a las 22:25, André Przywara
> (<andre.przywara@arm.com>) escribió:
>> So this was actually my first thought: The firmware (U-Boot SPL) sets up
>> some basic CPU frequency (888 MHz for H6 [1]), which is known to never
>> overheat the chip, even under full load. So any concern from your side
>> about the board or SoC overheating could be dismissed, with the current
>> mainline code, at least. However you lose the full speed, by quite a
>> margin on the H6 (on the A64 it's only 816 vs 1200(ish) MHz).
>> However, without the clock entries in the CPU node, the frequency would
>> never be changed by Dom0 anyway (nor by Xen, which doesn't even know how
>> to do this).
>> So from a practical point of view: unless you hack Xen to pass on more
>> cpu node properties, you are stuck at 888 MHz anyway, and don't need to
>> worry about overheating.
> Thank you. Knowing that at least it won't overheat is a relief. But
> the performance definitely suffers from the current situation, and
> quite a bit. I'm thinking about using KVM instead: even if it does
> less paravirtualization of guests,

What is this statement based on? I think on ARM this never really
applied, and in general whether you use virtio or Xen front-end/back-end
drivers does not really matter. IMHO any reasoning about performance
based purely on software architecture is mostly flawed (because it's
complex, and reality might have missed some memos ;-). So just measure
your particular use case; then you know.

> I'm sure that the ability to use
> the maximum frequency of the CPU would offset the additional overhead,
> and in general offer better performance. But with KVM I lose the
> ability to have individual domU's dedicated to some device driver,
> which is a nice thing to have from a security standpoint.

I understand the theoretical merits, but a) does this really work on
your board and b) is this really more secure? What do you want to
protect against?

>> Now if you would pass on the CPU clock frequency control to Dom0, you
>> run into more issues: the Linux governors would probably try to setup
>> both frequency and voltage based on load, BUT this would be Dom0's bogus
>> perception of the actual system load. Even with pinned Dom0 vCPUs, a
>> busy system might spend most of its CPU time in DomU VCPUs, which
>> probably makes it look mostly idle in Dom0. Using a fixed governor
>> (performance) would avoid this, at the cost of running full speed all of
>> the time, probably needlessly.
>>
>> So fixing the CPU clocking issue is more complex and requires more
>> ground work in Xen first, probably involving some enlightened Dom0
>> drivers as well. I didn't follow latest developments in this area, nor
>> do I remember x86's answer to this, but it's not something easy, I would
>> presume.
> I understand, thanks :). I know that recent Intel CPUs (from Sandy
> Bridge onwards) use P-states to manage frequencies, and even have a
> mode of operation that lets the CPU select the P-states by itself. On
> older processors, Xen can probably rely on ACPI data to do the
> frequency scaling. But the most similar "standard thing" that my board
> has, an AR100 coprocessor that, with the (work-in-progress) Crust
> firmware, can be used with SCMI, doesn't even seem to support the use
> case of changing CPU frequency... and SCMI is the most promising
> approach for adding DVFS support in Xen for ARM, according to this
> previous work: https://www.slideshare.net/xen_com_mgr/xpdds18-cpufreq-in-xen-on-arm-oleksandr-tyshchenko-epam-systems

So architecturally you could run all cores at full speed, always, and
tell Crust to clock down / decrease voltage once a thermal condition
triggers. That's not power-saving, but at least should be relatively safe.
On Allwinner platforms this isn't really bullet-proof, though, since the
THS device is non-secure, so anyone with access to the MMIO region could
turn it off. Or Dom0 could just turn the THS clock off - which it
actually does, because it's not used.
In the end it's a much bigger discussion about doing those things in
firmware or in the OS. For those traditionally embedded platforms like
Allwinner there is a huge fraction that does not trust firmware,
unfortunately, so moving responsibility to firmware is not very popular
upstream (been there, done that).

>> Alejandro: can you try to measure the actual CPU frequency in Dom0?
>> Maybe some easy benchmark? "mhz" from lmbench does a great job in
>> telling you the actual frequency, just by clever measurement. But any
>> other CPU bound benchmark would do, if you compare bare metal Linux vs.
>> Dom0.
> I have measured the CPU frequency in Dom0 using lmbench several times
> and it seems to be stuck at 888 MHz, the frequency set by U-Boot.
> Overall, the system feels more sluggish than when using bare Linux,
> too. It doesn't matter if I apply the "hacky fix" I mentioned before
> or not.
>> Also, does cpufreq come up in Dom0 at all? Can you select governors and
>> frequencies?
> It doesn't come up, and no sysfs entries are created for cpufreq. With
> the "fix", the kernel prints an error message complaining that it
> couldn't probe cpufreq-dt, but it still doesn't come up, and sysfs
> entries for cpufreq aren't created either.

I see, many thanks for doing this, as this seems to confirm my assumptions.

If you have good cooling in place, or always one hand on the power plug,
you could change U-Boot to bump up the CPU frequency (make menuconfig,
search for CONFIG_SYS_CLK_FREQ). Then you could at least see if your
observed performance issues are related to the core frequency. You might
need to adjust the CPU voltage, too.
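
(For reference, the resulting U-Boot .config fragment might look roughly
like this; the value is illustrative and board-specific, and raising it
may also require a matching CPU voltage:)

```text
# U-Boot .config fragment (illustrative value only):
# 1.08 GHz instead of the default 888 MHz on H6
CONFIG_SYS_CLK_FREQ=1080000000
```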

Cheers,
Andre


From xen-devel-bounces@lists.xenproject.org Tue Jul 28 11:37:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jul 2020 11:37:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k0NvL-0006C1-0x; Tue, 28 Jul 2020 11:37:35 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=K5Bo=BH=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1k0NvJ-0006Br-G5
 for xen-devel@lists.xenproject.org; Tue, 28 Jul 2020 11:37:33 +0000
X-Inumbo-ID: b8e18ad4-d0c6-11ea-8b28-bc764e2007e4
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b8e18ad4-d0c6-11ea-8b28-bc764e2007e4;
 Tue, 28 Jul 2020 11:37:32 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595936253;
 h=from:to:cc:subject:date:message-id:in-reply-to:
 references:mime-version:content-transfer-encoding;
 bh=hLRKErlmwe4NVS9+YaW7c8jBL06IM4hgS3F+Z8g4rWY=;
 b=HZNfHV1mwLDlauzLWmUJKA5XZhlwyFlS9owkEwXwzO8hd9Fv7NKocNck
 chbTN+2ZzJfnbbaKWcIO4iYAd/DOCTV2T36fcMppE+hHUfshJS3sZlfJF
 kD1iurYe0m35l9GhRNAHAV17V+W+dtKNN5Lb4P4E/D0uVSsL113NyVX3c s=;
Authentication-Results: esa3.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: wkiniwnR8YeCE2n7IgGkdtyN/iNg7wK2C/34nL3lEyhS9OFHgUrvsEX3usdR+sM70PAJ/iRaiI
 EymaPtsnlwm3M8kzBbowTG9KbqS+F7OBNRlM5UQF13mso1MLzwOCytn1b35heDGL2ejhRFjISm
 8IfRqWq4XH7SHoQqBM5Ih5WSb4NO4l991xaBIpIlYY9ZQpiKkIBc1BRuZUen6ilHoMsg0PuFCn
 t/AVz+o0hGJ9TrS2VEZY4CttZo2t1Xxw01+6bYfu/5SgeAvXSnJnQPHq7p9+dV44I+kLhYWxhV
 JT0=
X-SBRS: 2.7
X-MesageID: 23330303
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,406,1589256000"; d="scan'208";a="23330303"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Subject: [PATCH 5/5] tools/foreignmem: Support querying the size of a resource
Date: Tue, 28 Jul 2020 12:37:11 +0100
Message-ID: <20200728113712.22966-6-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20200728113712.22966-1-andrew.cooper3@citrix.com>
References: <20200728113712.22966-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Hubert Jasudowicz <hubert.jasudowicz@cert.pl>, Wei Liu <wl@xen.org>,
 Paul Durrant <paul@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?q?Micha=C5=82=20Leszczy=C5=84ski?= <michal.leszczynski@cert.pl>,
 Ian Jackson <Ian.Jackson@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

With the Xen side of this interface fixed to return real sizes, userspace
needs to be able to make the query.

Introduce xenforeignmemory_resource_size() for the purpose, bumping the
library minor version and providing compatibility for the non-Linux builds.

It's not possible to reuse the IOCTL_PRIVCMD_MMAP_RESOURCE infrastructure,
because it depends on having already mmap()'d a suitably sized region before
it will make an XENMEM_acquire_resource hypercall to Xen.

Instead, open a xencall handle and make an XENMEM_acquire_resource hypercall
directly.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Ian Jackson <Ian.Jackson@citrix.com>
CC: Wei Liu <wl@xen.org>
CC: Paul Durrant <paul@xen.org>
CC: Michał Leszczyński <michal.leszczynski@cert.pl>
CC: Hubert Jasudowicz <hubert.jasudowicz@cert.pl>
---
 tools/libs/foreignmemory/Makefile                  |  2 +-
 tools/libs/foreignmemory/core.c                    | 14 +++++++++
 .../libs/foreignmemory/include/xenforeignmemory.h  | 15 ++++++++++
 tools/libs/foreignmemory/libxenforeignmemory.map   |  4 +++
 tools/libs/foreignmemory/linux.c                   | 35 ++++++++++++++++++++++
 tools/libs/foreignmemory/private.h                 | 14 +++++++++
 6 files changed, 83 insertions(+), 1 deletion(-)

diff --git a/tools/libs/foreignmemory/Makefile b/tools/libs/foreignmemory/Makefile
index 28f1bddc96..8e07f92c59 100644
--- a/tools/libs/foreignmemory/Makefile
+++ b/tools/libs/foreignmemory/Makefile
@@ -2,7 +2,7 @@ XEN_ROOT = $(CURDIR)/../../..
 include $(XEN_ROOT)/tools/Rules.mk
 
 MAJOR    = 1
-MINOR    = 3
+MINOR    = 4
 LIBNAME  := foreignmemory
 USELIBS  := toollog toolcore
 
diff --git a/tools/libs/foreignmemory/core.c b/tools/libs/foreignmemory/core.c
index 63f12e2450..5d95c59c48 100644
--- a/tools/libs/foreignmemory/core.c
+++ b/tools/libs/foreignmemory/core.c
@@ -53,6 +53,10 @@ xenforeignmemory_handle *xenforeignmemory_open(xentoollog_logger *logger,
         if (!fmem->logger) goto err;
     }
 
+    fmem->xcall = xencall_open(fmem->logger, 0);
+    if ( !fmem->xcall )
+        goto err;
+
     rc = osdep_xenforeignmemory_open(fmem);
     if ( rc  < 0 ) goto err;
 
@@ -61,6 +65,7 @@ xenforeignmemory_handle *xenforeignmemory_open(xentoollog_logger *logger,
 err:
     xentoolcore__deregister_active_handle(&fmem->tc_ah);
     osdep_xenforeignmemory_close(fmem);
+    xencall_close(fmem->xcall);
     xtl_logger_destroy(fmem->logger_tofree);
     free(fmem);
     return NULL;
@@ -75,6 +80,7 @@ int xenforeignmemory_close(xenforeignmemory_handle *fmem)
 
     xentoolcore__deregister_active_handle(&fmem->tc_ah);
     rc = osdep_xenforeignmemory_close(fmem);
+    xencall_close(fmem->xcall);
     xtl_logger_destroy(fmem->logger_tofree);
     free(fmem);
     return rc;
@@ -188,6 +194,14 @@ int xenforeignmemory_unmap_resource(
     return rc;
 }
 
+int xenforeignmemory_resource_size(
+    xenforeignmemory_handle *fmem, domid_t domid, unsigned int type,
+    unsigned int id, unsigned long *nr_frames)
+{
+    return osdep_xenforeignmemory_resource_size(fmem, domid, type,
+                                                id, nr_frames);
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/tools/libs/foreignmemory/include/xenforeignmemory.h b/tools/libs/foreignmemory/include/xenforeignmemory.h
index d594be8df0..1ba2f5316b 100644
--- a/tools/libs/foreignmemory/include/xenforeignmemory.h
+++ b/tools/libs/foreignmemory/include/xenforeignmemory.h
@@ -179,6 +179,21 @@ xenforeignmemory_resource_handle *xenforeignmemory_map_resource(
 int xenforeignmemory_unmap_resource(
     xenforeignmemory_handle *fmem, xenforeignmemory_resource_handle *fres);
 
+/**
+ * Determine the maximum size of a specific resource.
+ *
+ * @parm fmem handle to the open foreignmemory interface
+ * @parm domid the domain id
+ * @parm type the resource type
+ * @parm id the type-specific resource identifier
+ *
+ * Returns 0 on success and fills in *nr_frames.  Sets errno and returns -1
+ * on error.
+ */
+int xenforeignmemory_resource_size(
+    xenforeignmemory_handle *fmem, domid_t domid, unsigned int type,
+    unsigned int id, unsigned long *nr_frames);
+
 #endif
 
 /*
diff --git a/tools/libs/foreignmemory/libxenforeignmemory.map b/tools/libs/foreignmemory/libxenforeignmemory.map
index d5323c87d9..8aca341b99 100644
--- a/tools/libs/foreignmemory/libxenforeignmemory.map
+++ b/tools/libs/foreignmemory/libxenforeignmemory.map
@@ -19,3 +19,7 @@ VERS_1.3 {
 		xenforeignmemory_map_resource;
 		xenforeignmemory_unmap_resource;
 } VERS_1.2;
+VERS_1.4 {
+	global:
+		xenforeignmemory_resource_size;
+} VERS_1.3;
diff --git a/tools/libs/foreignmemory/linux.c b/tools/libs/foreignmemory/linux.c
index 8daa5828e3..67e0ca1e83 100644
--- a/tools/libs/foreignmemory/linux.c
+++ b/tools/libs/foreignmemory/linux.c
@@ -28,6 +28,8 @@
 
 #include "private.h"
 
+#include <xen/memory.h>
+
 #define ROUNDUP(_x,_w) (((unsigned long)(_x)+(1UL<<(_w))-1) & ~((1UL<<(_w))-1))
 
 #ifndef O_CLOEXEC
@@ -340,6 +342,39 @@ int osdep_xenforeignmemory_map_resource(
     return 0;
 }
 
+int osdep_xenforeignmemory_resource_size(
+    xenforeignmemory_handle *fmem, domid_t domid, unsigned int type,
+    unsigned int id, unsigned long *nr_frames)
+{
+    int rc;
+    struct xen_mem_acquire_resource *xmar =
+        xencall_alloc_buffer(fmem->xcall, sizeof(*xmar));
+
+    if ( !xmar )
+    {
+        PERROR("Could not bounce memory for acquire_resource hypercall");
+        return -1;
+    }
+
+    *xmar = (struct xen_mem_acquire_resource){
+        .domid = domid,
+        .type = type,
+        .id = id,
+    };
+
+    rc = xencall2(fmem->xcall, __HYPERVISOR_memory_op,
+                  XENMEM_acquire_resource, (uintptr_t)xmar);
+    if ( rc )
+        goto out;
+
+    *nr_frames = xmar->nr_frames;
+
+ out:
+    xencall_free_buffer(fmem->xcall, xmar);
+
+    return rc;
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/tools/libs/foreignmemory/private.h b/tools/libs/foreignmemory/private.h
index 8f1bf081ed..1a6b685f45 100644
--- a/tools/libs/foreignmemory/private.h
+++ b/tools/libs/foreignmemory/private.h
@@ -4,6 +4,7 @@
 #include <xentoollog.h>
 
 #include <xenforeignmemory.h>
+#include <xencall.h>
 
 #include <xentoolcore_internal.h>
 
@@ -20,6 +21,7 @@
 
 struct xenforeignmemory_handle {
     xentoollog_logger *logger, *logger_tofree;
+    xencall_handle *xcall;
     unsigned flags;
     int fd;
     Xentoolcore__Active_Handle tc_ah;
@@ -74,6 +76,15 @@ static inline int osdep_xenforeignmemory_unmap_resource(
 {
     return 0;
 }
+
+static inline int osdep_xenforeignmemory_resource_size(
+    xenforeignmemory_handle *fmem, domid_t domid, unsigned int type,
+    unsigned int id, unsigned long *nr_frames)
+{
+    errno = EOPNOTSUPP;
+    return -1;
+}
+
 #else
 int osdep_xenforeignmemory_restrict(xenforeignmemory_handle *fmem,
                                     domid_t domid);
@@ -81,6 +92,9 @@ int osdep_xenforeignmemory_map_resource(
     xenforeignmemory_handle *fmem, xenforeignmemory_resource_handle *fres);
 int osdep_xenforeignmemory_unmap_resource(
     xenforeignmemory_handle *fmem, xenforeignmemory_resource_handle *fres);
+int osdep_xenforeignmemory_resource_size(
+    xenforeignmemory_handle *fmem, domid_t domid, unsigned int type,
+    unsigned int id, unsigned long *nr_frames);
 #endif
 
 #define PERROR(_f...) \
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Tue Jul 28 11:37:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jul 2020 11:37:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k0NvL-0006C7-97; Tue, 28 Jul 2020 11:37:35 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=K5Bo=BH=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1k0NvJ-0006Bw-Tb
 for xen-devel@lists.xenproject.org; Tue, 28 Jul 2020 11:37:33 +0000
X-Inumbo-ID: b9750142-d0c6-11ea-a8af-12813bfff9fa
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b9750142-d0c6-11ea-a8af-12813bfff9fa;
 Tue, 28 Jul 2020 11:37:32 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595936252;
 h=from:to:cc:subject:date:message-id:in-reply-to:
 references:mime-version:content-transfer-encoding;
 bh=C30vCiBJRWVkXa2f7MgjSiHtu+zIa4xf5Yikt9EeBnM=;
 b=fdH3oDwtkYReN3a/EGRfwUz4TF2umd7FiSBCUdJo6YacrK9ZerkYKUei
 kikt1cYEfGYvtXdJyn0VsACQiib9wEJx5zungJcYBMUczjRQsWhhCg0D1
 +UxTjigSksyn7cOxJTeKXgwJUu3GcofzSxqlCrNZTcD7fH8xaTMmB5NdH 8=;
Authentication-Results: esa5.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: v8/lpfGxqn1loOuLjuecXkShCkCaTzInJqJvVSwD90xze7nR9dWxBGetJ8AkmkPDm6ggBCixtj
 fWgVUCLAw5wVADZIE/CwCCxOHO6HooXWpZyta1JLuWM5MuVJYdMxy9lBWTZvEC+zgLh1S/tlm6
 aIzCb7YW6EpP8kpY/IcUlqBqP08k+uLsOlUvjdhrILKPD1gZqLHopcC8LU6l8bC0ewQkHqdlCo
 RHpjcua+Ef5zJdHsxucOLdUb96w23iXIsKBKaYbzokWui+CgrRwBsaZcyqU1iEZz4utw3fCO4Y
 plY=
X-SBRS: 2.7
X-MesageID: 23524684
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,406,1589256000"; d="scan'208";a="23524684"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Subject: [PATCH 2/5] xen/gnttab: Rework resource acquisition
Date: Tue, 28 Jul 2020 12:37:08 +0100
Message-ID: <20200728113712.22966-3-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20200728113712.22966-1-andrew.cooper3@citrix.com>
References: <20200728113712.22966-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Hubert Jasudowicz <hubert.jasudowicz@cert.pl>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien
 Grall <julien@xen.org>, Wei Liu <wl@xen.org>,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Paul Durrant <paul@xen.org>, Jan
 Beulich <JBeulich@suse.com>,
 =?UTF-8?q?Micha=C5=82=20Leszczy=C5=84ski?= <michal.leszczynski@cert.pl>,
 Ian Jackson <ian.jackson@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

The existing logic doesn't function in the general case for mapping a guest's
grant table, due to the arbitrary 32-frame limit and the default grant table
limit being 64.

In order to start addressing this, rework the existing grant table logic by
implementing a single gnttab_acquire_resource().  This is far more efficient
than the previous acquire_grant_table() in memory.c because it doesn't take
the grant table write lock, and attempt to grow the table, for every single
frame.

The new gnttab_acquire_resource() function subsumes the previous two
gnttab_get_{shared,status}_frame() helpers.

No functional change.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: George Dunlap <George.Dunlap@eu.citrix.com>
CC: Ian Jackson <ian.jackson@citrix.com>
CC: Jan Beulich <JBeulich@suse.com>
CC: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Wei Liu <wl@xen.org>
CC: Julien Grall <julien@xen.org>
CC: Paul Durrant <paul@xen.org>
CC: Michał Leszczyński <michal.leszczynski@cert.pl>
CC: Hubert Jasudowicz <hubert.jasudowicz@cert.pl>
---
 xen/common/grant_table.c      | 93 ++++++++++++++++++++++++++++++-------------
 xen/common/memory.c           | 42 +------------------
 xen/include/xen/grant_table.h | 19 ++++-----
 3 files changed, 75 insertions(+), 79 deletions(-)

diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
index 9f0cae52c0..122d1e7596 100644
--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -4013,6 +4013,72 @@ static int gnttab_get_shared_frame_mfn(struct domain *d,
     return 0;
 }
 
+int gnttab_acquire_resource(
+    struct domain *d, unsigned int id, unsigned long frame,
+    unsigned int nr_frames, xen_pfn_t mfn_list[])
+{
+    struct grant_table *gt = d->grant_table;
+    unsigned int i = nr_frames, tot_frames;
+    void **vaddrs;
+    int rc = 0;
+
+    /* Input sanity. */
+    if ( !nr_frames )
+        return -EINVAL;
+
+    /* Overflow checks */
+    if ( frame + nr_frames < frame )
+        return -EINVAL;
+
+    tot_frames = frame + nr_frames;
+    if ( tot_frames != frame + nr_frames )
+        return -EINVAL;
+
+    /* Grow table if necessary. */
+    grant_write_lock(gt);
+    switch ( id )
+    {
+        mfn_t tmp;
+
+    case XENMEM_resource_grant_table_id_shared:
+        rc = gnttab_get_shared_frame_mfn(d, tot_frames - 1, &tmp);
+        break;
+
+    case XENMEM_resource_grant_table_id_status:
+        if ( gt->gt_version != 2 )
+        {
+    default:
+            rc = -EINVAL;
+            break;
+        }
+        rc = gnttab_get_status_frame_mfn(d, tot_frames - 1, &tmp);
+        break;
+    }
+
+    /* Any errors from growing the table? */
+    if ( rc )
+        goto out;
+
+    switch ( id )
+    {
+    case XENMEM_resource_grant_table_id_shared:
+        vaddrs = gt->shared_raw;
+        break;
+
+    case XENMEM_resource_grant_table_id_status:
+        vaddrs = (void **)gt->status;
+        break;
+    }
+
+    for ( i = 0; i < nr_frames; ++i )
+        mfn_list[i] = virt_to_mfn(vaddrs[frame + i]);
+
+ out:
+    grant_write_unlock(gt);
+
+    return rc;
+}
+
 int gnttab_map_frame(struct domain *d, unsigned long idx, gfn_t gfn, mfn_t *mfn)
 {
     int rc = 0;
@@ -4047,33 +4113,6 @@ int gnttab_map_frame(struct domain *d, unsigned long idx, gfn_t gfn, mfn_t *mfn)
     return rc;
 }
 
-int gnttab_get_shared_frame(struct domain *d, unsigned long idx,
-                            mfn_t *mfn)
-{
-    struct grant_table *gt = d->grant_table;
-    int rc;
-
-    grant_write_lock(gt);
-    rc = gnttab_get_shared_frame_mfn(d, idx, mfn);
-    grant_write_unlock(gt);
-
-    return rc;
-}
-
-int gnttab_get_status_frame(struct domain *d, unsigned long idx,
-                            mfn_t *mfn)
-{
-    struct grant_table *gt = d->grant_table;
-    int rc;
-
-    grant_write_lock(gt);
-    rc = (gt->gt_version == 2) ?
-        gnttab_get_status_frame_mfn(d, idx, mfn) : -EINVAL;
-    grant_write_unlock(gt);
-
-    return rc;
-}
-
 static void gnttab_usage_print(struct domain *rd)
 {
     int first = 1;
diff --git a/xen/common/memory.c b/xen/common/memory.c
index 714077c1e5..dc3a7248e3 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -1007,44 +1007,6 @@ static long xatp_permission_check(struct domain *d, unsigned int space)
     return xsm_add_to_physmap(XSM_TARGET, current->domain, d);
 }
 
-static int acquire_grant_table(struct domain *d, unsigned int id,
-                               unsigned long frame,
-                               unsigned int nr_frames,
-                               xen_pfn_t mfn_list[])
-{
-    unsigned int i = nr_frames;
-
-    /* Iterate backwards in case table needs to grow */
-    while ( i-- != 0 )
-    {
-        mfn_t mfn = INVALID_MFN;
-        int rc;
-
-        switch ( id )
-        {
-        case XENMEM_resource_grant_table_id_shared:
-            rc = gnttab_get_shared_frame(d, frame + i, &mfn);
-            break;
-
-        case XENMEM_resource_grant_table_id_status:
-            rc = gnttab_get_status_frame(d, frame + i, &mfn);
-            break;
-
-        default:
-            rc = -EINVAL;
-            break;
-        }
-
-        if ( rc )
-            return rc;
-
-        ASSERT(!mfn_eq(mfn, INVALID_MFN));
-        mfn_list[i] = mfn_x(mfn);
-    }
-
-    return 0;
-}
-
 static int acquire_resource(
     XEN_GUEST_HANDLE_PARAM(xen_mem_acquire_resource_t) arg)
 {
@@ -1091,8 +1053,8 @@ static int acquire_resource(
     switch ( xmar.type )
     {
     case XENMEM_resource_grant_table:
-        rc = acquire_grant_table(d, xmar.id, xmar.frame, xmar.nr_frames,
-                                 mfn_list);
+        rc = gnttab_acquire_resource(d, xmar.id, xmar.frame, xmar.nr_frames,
+                                     mfn_list);
         break;
 
     default:
diff --git a/xen/include/xen/grant_table.h b/xen/include/xen/grant_table.h
index 98603604b8..5a2c75b880 100644
--- a/xen/include/xen/grant_table.h
+++ b/xen/include/xen/grant_table.h
@@ -56,10 +56,10 @@ int mem_sharing_gref_to_gfn(struct grant_table *gt, grant_ref_t ref,
 
 int gnttab_map_frame(struct domain *d, unsigned long idx, gfn_t gfn,
                      mfn_t *mfn);
-int gnttab_get_shared_frame(struct domain *d, unsigned long idx,
-                            mfn_t *mfn);
-int gnttab_get_status_frame(struct domain *d, unsigned long idx,
-                            mfn_t *mfn);
+
+int gnttab_acquire_resource(
+    struct domain *d, unsigned int id, unsigned long frame,
+    unsigned int nr_frames, xen_pfn_t mfn_list[]);
 
 #else
 
@@ -93,14 +93,9 @@ static inline int gnttab_map_frame(struct domain *d, unsigned long idx,
     return -EINVAL;
 }
 
-static inline int gnttab_get_shared_frame(struct domain *d, unsigned long idx,
-                                          mfn_t *mfn)
-{
-    return -EINVAL;
-}
-
-static inline int gnttab_get_status_frame(struct domain *d, unsigned long idx,
-                                          mfn_t *mfn)
+static inline int gnttab_acquire_resource(
+    struct domain *d, unsigned int id, unsigned long frame,
+    unsigned int nr_frames, xen_pfn_t mfn_list[])
 {
     return -EINVAL;
 }
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Tue Jul 28 11:37:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jul 2020 11:37:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k0NvP-0006Cs-L1; Tue, 28 Jul 2020 11:37:39 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=K5Bo=BH=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1k0NvO-0006Br-Bq
 for xen-devel@lists.xenproject.org; Tue, 28 Jul 2020 11:37:38 +0000
X-Inumbo-ID: ba239a7c-d0c6-11ea-8b28-bc764e2007e4
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ba239a7c-d0c6-11ea-8b28-bc764e2007e4;
 Tue, 28 Jul 2020 11:37:33 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595936254;
 h=from:to:cc:subject:date:message-id:in-reply-to:
 references:mime-version:content-transfer-encoding;
 bh=IUokeGrxEyJUToEgizkOaWgPFondXajxhHjfPToParY=;
 b=Z3jQOnKbywGACgi4shJBcoRDnBmKektgKvqVOnkg0nMDWCUG/AiQ8O90
 Eac1cIbL4jO5+vgS9xUWKtfyFOu2oAn7A0p/q3DzrUZeGWta8lGis/XwC
 OixhgnQEdQYykr22DHMvrCwvccPxzMOS/OEygz9H1CSeho9xv+9Zh74Bi U=;
Authentication-Results: esa3.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: UY50K4X8eCfaARt6jfYigiGMp9wttQgys0GM6l06PUGOhv67KMyifBt5Koa2iVXe8HR7jZv58K
 iw/MPZgVj6r5wH237feE/FViWjXNfb9AAijjir4IgJG0JoTntdN3Xu5an9aB5H19hG96PIdVhW
 U4MDbxOvVeSQXN+sFHc36FF48NO0q26LJpN8vyvRhNA94RuiM8YhlWX0dpfyCK3+f81TS/imjS
 /ko1vmsr22NxCsI8MksVJdlCQnvitfS6O6O9C/yG/rYVjW/Z5nSMEJoJX6Ycc49nAGqFNQfyFr
 Alc=
X-SBRS: 2.7
X-MesageID: 23330308
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,406,1589256000"; d="scan'208";a="23330308"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Subject: [PATCH 4/5] xen/memory: Fix acquire_resource size semantics
Date: Tue, 28 Jul 2020 12:37:10 +0100
Message-ID: <20200728113712.22966-5-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20200728113712.22966-1-andrew.cooper3@citrix.com>
References: <20200728113712.22966-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Hubert Jasudowicz <hubert.jasudowicz@cert.pl>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien
 Grall <julien@xen.org>, Wei Liu <wl@xen.org>,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Paul Durrant <paul@xen.org>, Jan
 Beulich <JBeulich@suse.com>,
 =?UTF-8?q?Micha=C5=82=20Leszczy=C5=84ski?= <michal.leszczynski@cert.pl>,
 Ian Jackson <ian.jackson@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Calling XENMEM_acquire_resource with a NULL frame_list is a request for the
size of the resource, but the returned value of 32 (the size of Xen's internal
buffer) is bogus.

If someone tries to follow it for XENMEM_resource_ioreq_server, the acquire
call will fail because IOREQ servers currently top out at 2 frames, and 32 is
only half the size of the default grant table limit for guests.

Also, no users actually request a resource size, because it was never wired up
in the sole implementation of resource acquisition in Linux.

Introduce a new resource_max_frames() to calculate the size of a resource, and
implement it in the IOREQ and grant subsystems.

It is impossible to guarantee that a mapping call following a successful size
call will succeed (e.g. the target IOREQ server gets destroyed, or the domain
switches from grant v2 to v1).  Document the restriction, and use the
flexibility to simplify the paths to be lockless.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: George Dunlap <George.Dunlap@eu.citrix.com>
CC: Ian Jackson <ian.jackson@citrix.com>
CC: Jan Beulich <JBeulich@suse.com>
CC: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Wei Liu <wl@xen.org>
CC: Julien Grall <julien@xen.org>
CC: Paul Durrant <paul@xen.org>
CC: Michał Leszczyński <michal.leszczynski@cert.pl>
CC: Hubert Jasudowicz <hubert.jasudowicz@cert.pl>
---
 xen/arch/x86/mm.c             | 20 ++++++++++++++++
 xen/common/grant_table.c      | 19 +++++++++++++++
 xen/common/memory.c           | 55 +++++++++++++++++++++++++++++++++----------
 xen/include/asm-x86/mm.h      |  3 +++
 xen/include/public/memory.h   | 16 +++++++++----
 xen/include/xen/grant_table.h |  8 +++++++
 xen/include/xen/mm.h          |  6 +++++
 7 files changed, 110 insertions(+), 17 deletions(-)

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 82bc676553..f73a90a2ab 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -4600,6 +4600,26 @@ int xenmem_add_to_physmap_one(
     return rc;
 }
 
+unsigned int arch_resource_max_frames(
+    struct domain *d, unsigned int type, unsigned int id)
+{
+    unsigned int nr = 0;
+
+    switch ( type )
+    {
+#ifdef CONFIG_HVM
+    case XENMEM_resource_ioreq_server:
+        if ( !is_hvm_domain(d) )
+            break;
+        /* One frame for the buf-ioreq ring, and one frame per 128 vcpus. */
+        nr = 1 + DIV_ROUND_UP(d->max_vcpus * sizeof(struct ioreq), PAGE_SIZE);
+        break;
+#endif
+    }
+
+    return nr;
+}
+
 int arch_acquire_resource(struct domain *d, unsigned int type,
                           unsigned int id, unsigned long frame,
                           unsigned int nr_frames, xen_pfn_t mfn_list[])
diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
index 122d1e7596..0962fc7169 100644
--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -4013,6 +4013,25 @@ static int gnttab_get_shared_frame_mfn(struct domain *d,
     return 0;
 }
 
+unsigned int gnttab_resource_max_frames(struct domain *d, unsigned int id)
+{
+    unsigned int nr = 0;
+
+    /* Don't need the grant lock.  This limit is fixed at domain create time. */
+    switch ( id )
+    {
+    case XENMEM_resource_grant_table_id_shared:
+        nr = d->grant_table->max_grant_frames;
+        break;
+
+    case XENMEM_resource_grant_table_id_status:
+        nr = grant_to_status_frames(d->grant_table->max_grant_frames);
+        break;
+    }
+
+    return nr;
+}
+
 int gnttab_acquire_resource(
     struct domain *d, unsigned int id, unsigned long frame,
     unsigned int nr_frames, xen_pfn_t mfn_list[])
diff --git a/xen/common/memory.c b/xen/common/memory.c
index dc3a7248e3..21edabf9cc 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -1007,6 +1007,26 @@ static long xatp_permission_check(struct domain *d, unsigned int space)
     return xsm_add_to_physmap(XSM_TARGET, current->domain, d);
 }
 
+/*
+ * Return 0 on any kind of error.  Caller converts to -EINVAL.
+ *
+ * All nonzero values should be repeatable (i.e. derived from some fixed
+ * property of the domain), and describe the full resource (i.e. mapping the
+ * result of this call will be the entire resource).
+ */
+static unsigned int resource_max_frames(struct domain *d,
+                                        unsigned int type, unsigned int id)
+{
+    switch ( type )
+    {
+    case XENMEM_resource_grant_table:
+        return gnttab_resource_max_frames(d, id);
+
+    default:
+        return arch_resource_max_frames(d, type, id);
+    }
+}
+
 static int acquire_resource(
     XEN_GUEST_HANDLE_PARAM(xen_mem_acquire_resource_t) arg)
 {
@@ -1018,6 +1038,7 @@ static int acquire_resource(
      * use-cases then per-CPU arrays or heap allocations may be required.
      */
     xen_pfn_t mfn_list[32];
+    unsigned int max_frames;
     int rc;
 
     if ( copy_from_guest(&xmar, arg, 1) )
@@ -1026,19 +1047,6 @@ static int acquire_resource(
     if ( xmar.pad != 0 )
         return -EINVAL;
 
-    if ( guest_handle_is_null(xmar.frame_list) )
-    {
-        if ( xmar.nr_frames )
-            return -EINVAL;
-
-        xmar.nr_frames = ARRAY_SIZE(mfn_list);
-
-        if ( __copy_field_to_guest(arg, &xmar, nr_frames) )
-            return -EFAULT;
-
-        return 0;
-    }
-
     if ( xmar.nr_frames > ARRAY_SIZE(mfn_list) )
         return -E2BIG;
 
@@ -1050,6 +1058,27 @@ static int acquire_resource(
     if ( rc )
         goto out;
 
+    max_frames = resource_max_frames(d, xmar.type, xmar.id);
+
+    rc = -EINVAL;
+    if ( !max_frames )
+        goto out;
+
+    if ( guest_handle_is_null(xmar.frame_list) )
+    {
+        if ( xmar.nr_frames )
+            goto out;
+
+        xmar.nr_frames = max_frames;
+
+        rc = -EFAULT;
+        if ( __copy_field_to_guest(arg, &xmar, nr_frames) )
+            goto out;
+
+        rc = 0;
+        goto out;
+    }
+
     switch ( xmar.type )
     {
     case XENMEM_resource_grant_table:
diff --git a/xen/include/asm-x86/mm.h b/xen/include/asm-x86/mm.h
index 7e74996053..b0caf372a8 100644
--- a/xen/include/asm-x86/mm.h
+++ b/xen/include/asm-x86/mm.h
@@ -649,6 +649,9 @@ static inline bool arch_mfn_in_directmap(unsigned long mfn)
     return mfn <= (virt_to_mfn(eva - 1) + 1);
 }
 
+unsigned int arch_resource_max_frames(struct domain *d,
+                                      unsigned int type, unsigned int id);
+
 int arch_acquire_resource(struct domain *d, unsigned int type,
                           unsigned int id, unsigned long frame,
                           unsigned int nr_frames, xen_pfn_t mfn_list[]);
diff --git a/xen/include/public/memory.h b/xen/include/public/memory.h
index 21057ed78e..cea88cf40c 100644
--- a/xen/include/public/memory.h
+++ b/xen/include/public/memory.h
@@ -639,10 +639,18 @@ struct xen_mem_acquire_resource {
 #define XENMEM_resource_grant_table_id_status 1
 
     /*
-     * IN/OUT - As an IN parameter number of frames of the resource
-     *          to be mapped. However, if the specified value is 0 and
-     *          frame_list is NULL then this field will be set to the
-     *          maximum value supported by the implementation on return.
+     * IN/OUT
+     *
+     * As an IN parameter number of frames of the resource to be mapped.
+     *
+     * When frame_list is NULL and nr_frames is 0, this is interpreted as a
+     * request for the size of the resource, which shall be returned in the
+     * nr_frames field.
+     *
+     * The size of a resource will never be zero, but a nonzero result doesn't
+     * guarantee that a subsequent mapping request will be successful.  There
+     * are further type/id specific constraints which may change between the
+     * two calls.
      */
     uint32_t nr_frames;
     uint32_t pad;
diff --git a/xen/include/xen/grant_table.h b/xen/include/xen/grant_table.h
index 5a2c75b880..bae4d79623 100644
--- a/xen/include/xen/grant_table.h
+++ b/xen/include/xen/grant_table.h
@@ -57,6 +57,8 @@ int mem_sharing_gref_to_gfn(struct grant_table *gt, grant_ref_t ref,
 int gnttab_map_frame(struct domain *d, unsigned long idx, gfn_t gfn,
                      mfn_t *mfn);
 
+unsigned int gnttab_resource_max_frames(struct domain *d, unsigned int id);
+
 int gnttab_acquire_resource(
     struct domain *d, unsigned int id, unsigned long frame,
     unsigned int nr_frames, xen_pfn_t mfn_list[]);
@@ -93,6 +95,12 @@ static inline int gnttab_map_frame(struct domain *d, unsigned long idx,
     return -EINVAL;
 }
 
+static inline unsigned int gnttab_resource_max_frames(
+    struct domain *d, unsigned int id)
+{
+    return 0;
+}
+
 static inline int gnttab_acquire_resource(
     struct domain *d, unsigned int id, unsigned long frame,
     unsigned int nr_frames, xen_pfn_t mfn_list[])
diff --git a/xen/include/xen/mm.h b/xen/include/xen/mm.h
index 1b2c1f6b32..c184dc1db1 100644
--- a/xen/include/xen/mm.h
+++ b/xen/include/xen/mm.h
@@ -686,6 +686,12 @@ static inline void put_page_alloc_ref(struct page_info *page)
 }
 
 #ifndef CONFIG_ARCH_ACQUIRE_RESOURCE
+static inline unsigned int arch_resource_max_frames(
+    struct domain *d, unsigned int type, unsigned int id)
+{
+    return 0;
+}
+
 static inline int arch_acquire_resource(
     struct domain *d, unsigned int type, unsigned int id, unsigned long frame,
     unsigned int nr_frames, xen_pfn_t mfn_list[])
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Tue Jul 28 11:37:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jul 2020 11:37:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k0NvP-0006D0-U6; Tue, 28 Jul 2020 11:37:39 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=K5Bo=BH=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1k0NvO-0006Bw-PZ
 for xen-devel@lists.xenproject.org; Tue, 28 Jul 2020 11:37:38 +0000
X-Inumbo-ID: b98ebc40-d0c6-11ea-a8af-12813bfff9fa
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b98ebc40-d0c6-11ea-a8af-12813bfff9fa;
 Tue, 28 Jul 2020 11:37:32 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595936253;
 h=from:to:cc:subject:date:message-id:mime-version:
 content-transfer-encoding;
 bh=3No2HcQqm9xHBXNjS9kD8Wz1wiCqHR/Ty8ECtIr5Vtg=;
 b=V6P5/adkx3u7+PqSh8S3wXgaP5TvmvPKjXWo6i135bdDlJuwBzdH90sS
 aZoSuW5Sv0qHJ5ov4yLPbfsk9/1SXwRg20/uy3VVsTNr5ZDu+29mhMMuR
 IcJbGhnj7fqOJcXSpXT6uqatKmvQnOnqnVAk32CJmHEofoyIhxKNbsQ+x o=;
Authentication-Results: esa3.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: lDNkjIazj4HQryWxseOpakg+d+ncEDdqGsqTiRxwvfZROzNMooGTh9Q5S1oseHkmy1Q6Ysctvo
 u3aAEjJYUM864kQXvyL5E7bxYHNRdDIJjeM7ikIecx4ezxYMtWKEtNVzpIligCKQZ7cTrgkoqW
 VucbE9igE84WEVyZjsGkQoKKKFFDQVNYGVE7h5klzDRWP1lVOZ2zOeepG9bjxPNMaPOub0rQ0M
 g6bQWpF4LviyPqD8T0p58gvYAPE1/wyVuUrtpTI3Sy7UPyCMGYN2MSzs3WtVlzvNWHOtgwsMw+
 2eY=
X-SBRS: 2.7
X-MesageID: 23330306
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,406,1589256000"; d="scan'208";a="23330306"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Subject: [PATCH 0/5] Multiple fixes to XENMEM_acquire_resource
Date: Tue, 28 Jul 2020 12:37:06 +0100
Message-ID: <20200728113712.22966-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Hubert Jasudowicz <hubert.jasudowicz@cert.pl>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien
 Grall <julien@xen.org>, Wei Liu <wl@xen.org>, Paul Durrant <paul@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?q?Micha=C5=82=20Leszczy=C5=84ski?= <michal.leszczynski@cert.pl>,
 Jan Beulich <JBeulich@suse.com>, Ian Jackson <Ian.Jackson@citrix.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

I thought this was going to be a very simple small bugfix for Michał's
Processor Trace series.  Serves me right for expecting it not to be full of
bear traps...

The sole implementation of acquire_resource never asks for size, so it's little
surprise that Xen is broken for compat callers, and returns incorrect
information for regular callers.

Patches 1-2 are cleanup with no net functional change.  Patches 3-5 are fixes
to the size side of this interface, and allow userspace to obtain the actual
size of resources.

I'm still working on fixing the batch mapping support, which has further sharp
corners, especially for compat callers.

This is enough of a series so far to post for comments.  A branch can be
obtained from:

  https://xenbits.xen.org/gitweb/?p=people/andrewcoop/xen.git;a=shortlog;h=refs/heads/xen-acquire-size

Andrew Cooper (5):
  xen/memory: Introduce CONFIG_ARCH_ACQUIRE_RESOURCE
  xen/gnttab: Rework resource acquisition
  xen/memory: Fix compat XENMEM_acquire_resource for size requests
  xen/memory: Fix acquire_resource size semantics
  tools/foreignmem: Support querying the size of a resource

 tools/libs/foreignmemory/Makefile                  |   2 +-
 tools/libs/foreignmemory/core.c                    |  14 +++
 .../libs/foreignmemory/include/xenforeignmemory.h  |  15 +++
 tools/libs/foreignmemory/libxenforeignmemory.map   |   4 +
 tools/libs/foreignmemory/linux.c                   |  35 +++++++
 tools/libs/foreignmemory/private.h                 |  14 +++
 xen/arch/x86/Kconfig                               |   1 +
 xen/arch/x86/mm.c                                  |  20 ++++
 xen/common/Kconfig                                 |   3 +
 xen/common/compat/memory.c                         |   2 +-
 xen/common/grant_table.c                           | 112 ++++++++++++++++-----
 xen/common/memory.c                                |  85 +++++++---------
 xen/include/asm-arm/mm.h                           |   8 --
 xen/include/asm-x86/mm.h                           |   3 +
 xen/include/public/memory.h                        |  16 ++-
 xen/include/xen/grant_table.h                      |  21 ++--
 xen/include/xen/mm.h                               |  15 +++
 17 files changed, 273 insertions(+), 97 deletions(-)

-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Tue Jul 28 11:37:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jul 2020 11:37:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k0NvV-0006FD-6t; Tue, 28 Jul 2020 11:37:45 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=K5Bo=BH=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1k0NvT-0006Bw-Pe
 for xen-devel@lists.xenproject.org; Tue, 28 Jul 2020 11:37:43 +0000
X-Inumbo-ID: ba9e3aa2-d0c6-11ea-a8af-12813bfff9fa
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ba9e3aa2-d0c6-11ea-a8af-12813bfff9fa;
 Tue, 28 Jul 2020 11:37:34 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595936255;
 h=from:to:cc:subject:date:message-id:in-reply-to:
 references:mime-version:content-transfer-encoding;
 bh=9OlWrYwwBC2nlui7wolnYoUzIAvQh25bk2h1rFifnms=;
 b=L0Mz4JCHJQYSLsCUOIiX9d8Rt8Ofd/rtttuupIqrU1rBa5gOkv+x91v8
 hsrAi7I7tlEBe+zZ8p4K47qvl45IQFvREj4VV1iJ7/YcPDdLtjRkW1uWG
 cForkMqkSxWe0pBdQN+yvkgg3MEW1+TZ0nRQBcAEeNsYBeOjrwq2Hv8GO w=;
Authentication-Results: esa3.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: m9XKyherp46SXYwrhXV7kJyXn2sW1WqwI7PhNSStILK1R1a6B7s+lNNAMvY7NJ74rwDWP3UBuv
 olRkvmTVZG1Jj3Oh2Vrhb+XUWNcZJ+r06st/fSD9kjuShRn45Oci6B+3QBg4JtFCmnPhi5kBYr
 GFqrFR6o1gKt147gUkmpPn3+xD/NQ2g64DQT09uG1lKFC6EEqZIXBP224VXDMSbEInZ4m1m1Da
 Tgp3k2gNjrjces4wgCWiDqdZ8fio1q1Dp+WMUG/29LEdv8+sSVBC0v+T2KEeXlqFhqev8N0NwE
 XnA=
X-SBRS: 2.7
X-MesageID: 23330309
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,406,1589256000"; d="scan'208";a="23330309"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Subject: [PATCH 3/5] xen/memory: Fix compat XENMEM_acquire_resource for size
 requests
Date: Tue, 28 Jul 2020 12:37:09 +0100
Message-ID: <20200728113712.22966-4-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20200728113712.22966-1-andrew.cooper3@citrix.com>
References: <20200728113712.22966-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Hubert Jasudowicz <hubert.jasudowicz@cert.pl>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien
 Grall <julien@xen.org>, Wei Liu <wl@xen.org>,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Paul Durrant <paul@xen.org>, Jan
 Beulich <JBeulich@suse.com>,
 =?UTF-8?q?Micha=C5=82=20Leszczy=C5=84ski?= <michal.leszczynski@cert.pl>,
 Ian Jackson <ian.jackson@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Copy the nr_frames from the correct structure, so the caller doesn't
unconditionally receive 0.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: George Dunlap <George.Dunlap@eu.citrix.com>
CC: Ian Jackson <ian.jackson@citrix.com>
CC: Jan Beulich <JBeulich@suse.com>
CC: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Wei Liu <wl@xen.org>
CC: Julien Grall <julien@xen.org>
CC: Paul Durrant <paul@xen.org>
CC: Michał Leszczyński <michal.leszczynski@cert.pl>
CC: Hubert Jasudowicz <hubert.jasudowicz@cert.pl>
---
 xen/common/compat/memory.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/common/compat/memory.c b/xen/common/compat/memory.c
index 3851f756c7..ed92e05b08 100644
--- a/xen/common/compat/memory.c
+++ b/xen/common/compat/memory.c
@@ -599,7 +599,7 @@ int compat_memory_op(unsigned int cmd, XEN_GUEST_HANDLE_PARAM(void) compat)
                 if ( __copy_field_to_guest(
                          guest_handle_cast(compat,
                                            compat_mem_acquire_resource_t),
-                         &cmp.mar, nr_frames) )
+                         nat.mar, nr_frames) )
                     return -EFAULT;
             }
             else
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Tue Jul 28 11:37:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jul 2020 11:37:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k0NvZ-0006Gx-HM; Tue, 28 Jul 2020 11:37:49 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=K5Bo=BH=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1k0NvY-0006Bw-Pl
 for xen-devel@lists.xenproject.org; Tue, 28 Jul 2020 11:37:48 +0000
X-Inumbo-ID: bd6816ae-d0c6-11ea-a8af-12813bfff9fa
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id bd6816ae-d0c6-11ea-a8af-12813bfff9fa;
 Tue, 28 Jul 2020 11:37:39 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595936259;
 h=from:to:cc:subject:date:message-id:in-reply-to:
 references:mime-version:content-transfer-encoding;
 bh=RkXLHOS6wpf6G4+l7e752XZD9SMg/CfYFIDpBklde9o=;
 b=UcOibjlZKdsXXmgl2O37fcJAMRgllKBVnkO4kyiP5Ket5e2D8pok7zbg
 qpYun1NeYIG4176lkYGWebFwQ81vp8RuhT3/3IuLrkQiuY4b/jAXP8bj4
 ZhgPOFJ7fa38OyvXrvYfVUKABG5bH+y+F1ezBZ5Nkz6BwGtz01gRZR9n1 8=;
Authentication-Results: esa4.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: +NVwCWu+BqEKtJP+3sNKXmvYcsDwY3TNoqsX/w6ha0A1PxR8C7sdHpDcaDYy54oGf00KjegAz8
 IW5ca5hUNk8RSkFr0dn+7l59ixRfGcfoXDkblqTU/gfTTiiy82fY0uW/MGawNQ1e7ZUMd/9jEc
 3Emj+Z+Ltol8BTtjVr8OqF2uDwMwXA8Dl3+qQ6a+y5yZ+pSwKIP/GhMnbAhUP56Zoj5XhSaHER
 JwBJHoAC93xw00Jgn/T4bsYwWezZncg5UKlZbh6hE8hzUpPZ2G2WkWA0vJliIWwxHo4kYAccEb
 3s8=
X-SBRS: 2.7
X-MesageID: 24202968
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,406,1589256000"; d="scan'208";a="24202968"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Subject: [PATCH 1/5] xen/memory: Introduce CONFIG_ARCH_ACQUIRE_RESOURCE
Date: Tue, 28 Jul 2020 12:37:07 +0100
Message-ID: <20200728113712.22966-2-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20200728113712.22966-1-andrew.cooper3@citrix.com>
References: <20200728113712.22966-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Paul Durrant <paul@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?q?Micha=C5=82=20Leszczy=C5=84ski?= <michal.leszczynski@cert.pl>,
 Jan Beulich <JBeulich@suse.com>, Hubert Jasudowicz <hubert.jasudowicz@cert.pl>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

New architectures shouldn't be forced to implement no-op stubs for unused
functionality.

Introduce CONFIG_ARCH_ACQUIRE_RESOURCE, which architectures can opt in to, and
provide compatibility logic in xen/mm.h.

No functional change.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Wei Liu <wl@xen.org>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Julien Grall <julien@xen.org>
CC: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
CC: Paul Durrant <paul@xen.org>
CC: Michał Leszczyński <michal.leszczynski@cert.pl>
CC: Hubert Jasudowicz <hubert.jasudowicz@cert.pl>
---
 xen/arch/x86/Kconfig     | 1 +
 xen/common/Kconfig       | 3 +++
 xen/include/asm-arm/mm.h | 8 --------
 xen/include/xen/mm.h     | 9 +++++++++
 4 files changed, 13 insertions(+), 8 deletions(-)

diff --git a/xen/arch/x86/Kconfig b/xen/arch/x86/Kconfig
index a636a4bb1e..e7644a0a9d 100644
--- a/xen/arch/x86/Kconfig
+++ b/xen/arch/x86/Kconfig
@@ -6,6 +6,7 @@ config X86
 	select ACPI
 	select ACPI_LEGACY_TABLES_LOOKUP
 	select ARCH_SUPPORTS_INT128
+	select ARCH_ACQUIRE_RESOURCE
 	select COMPAT
 	select CORE_PARKING
 	select HAS_ALTERNATIVE
diff --git a/xen/common/Kconfig b/xen/common/Kconfig
index 15e3b79ff5..593459ea6e 100644
--- a/xen/common/Kconfig
+++ b/xen/common/Kconfig
@@ -22,6 +22,9 @@ config GRANT_TABLE
 
 	  If unsure, say Y.
 
+config ARCH_ACQUIRE_RESOURCE
+	bool
+
 config HAS_ALTERNATIVE
 	bool
 
diff --git a/xen/include/asm-arm/mm.h b/xen/include/asm-arm/mm.h
index f8ba49b118..0b7de3102e 100644
--- a/xen/include/asm-arm/mm.h
+++ b/xen/include/asm-arm/mm.h
@@ -358,14 +358,6 @@ static inline void put_page_and_type(struct page_info *page)
 
 void clear_and_clean_page(struct page_info *page);
 
-static inline
-int arch_acquire_resource(struct domain *d, unsigned int type, unsigned int id,
-                          unsigned long frame, unsigned int nr_frames,
-                          xen_pfn_t mfn_list[])
-{
-    return -EOPNOTSUPP;
-}
-
 unsigned int arch_get_dma_bitsize(void);
 
 #endif /*  __ARCH_ARM_MM__ */
diff --git a/xen/include/xen/mm.h b/xen/include/xen/mm.h
index 1061765bcd..1b2c1f6b32 100644
--- a/xen/include/xen/mm.h
+++ b/xen/include/xen/mm.h
@@ -685,4 +685,13 @@ static inline void put_page_alloc_ref(struct page_info *page)
     }
 }
 
+#ifndef CONFIG_ARCH_ACQUIRE_RESOURCE
+static inline int arch_acquire_resource(
+    struct domain *d, unsigned int type, unsigned int id, unsigned long frame,
+    unsigned int nr_frames, xen_pfn_t mfn_list[])
+{
+    return -EOPNOTSUPP;
+}
+#endif /* !CONFIG_ARCH_ACQUIRE_RESOURCE */
+
 #endif /* __XEN_MM_H__ */
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Tue Jul 28 11:38:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jul 2020 11:38:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k0Nvs-0006Qu-RF; Tue, 28 Jul 2020 11:38:08 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=fsnm=BH=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1k0Nvr-0006QY-Vg
 for xen-devel@lists.xenproject.org; Tue, 28 Jul 2020 11:38:08 +0000
X-Inumbo-ID: ce06d022-d0c6-11ea-8b28-bc764e2007e4
Received: from mail-wm1-x330.google.com (unknown [2a00:1450:4864:20::330])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ce06d022-d0c6-11ea-8b28-bc764e2007e4;
 Tue, 28 Jul 2020 11:38:07 +0000 (UTC)
Received: by mail-wm1-x330.google.com with SMTP id t142so11486410wmt.4
 for <xen-devel@lists.xenproject.org>; Tue, 28 Jul 2020 04:38:07 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
 :mime-version:content-transfer-encoding:content-language
 :thread-index; bh=GtCXrUyMcdfUK9u5L9780Nv/+GQh+rPlWS9tXEVEQ1E=;
 b=tM7aGjd0FQT3yaT7+031Tlnysibz7ocdgILc/qNcllesimpWOyi1Ny6a4L/hDJnr6K
 llwf0E9RA0hHT4WNR/XM88aVBh5S1/P8YX2Sm4gohXauAAR5fNremiNKDoN9a0RLMGvb
 hqMDnJaxM8MJ5kjXyaNPVZOZKVUJ2uNSPyGLUQBEVKBKzR14DtyKpOCWkEwOl9MtCmg6
 W6Hn/X7KlpZdQCj+hnM4LkQQkibiFPMM5SfTvW/u06BBR9GmLXgS+vWlr2M3o8dOMG3J
 30QOlaozpUpSqkYfZPsl7FiCN+4XiPh+pyBGxt9h/DC5penPyMY0RFiii1wAn3eGREBX
 zXqA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
 :subject:date:message-id:mime-version:content-transfer-encoding
 :content-language:thread-index;
 bh=GtCXrUyMcdfUK9u5L9780Nv/+GQh+rPlWS9tXEVEQ1E=;
 b=ky9wPg1pomZsADw07RhTwz8ZI9dvyew+HerZgd9aHB7rfnob3a9ApUelfgHOHvFkpl
 ejgLNZcAUrRuPaXzcMNBAhXjEe6q2jAHW3tCUGCyyk2pjtw+px3O0HS3j3pct90e4r17
 KU6QDzukmk8csinBwoiy0Y5CPcN0KJkPqqdI4bIUlUFzX/P0vcBR8pFj1MLugU0yTpPf
 7oGxfxbstz003hZJTeOBo6AyG58rVOL1+sZbuH13vgx5iTXGwJ0x1OJhXqo5lScAaIyn
 drc44epexao8pRarrgyaKZrxD50mI4vluQU5pdCV1PxiU8CKgyjC5jyel6udB3SZKt8q
 rksg==
X-Gm-Message-State: AOAM5304B5vPSV7UH3Mz/zAPNMrVPBlhnaU/MfuGSt68wyKaaDJVb3Dq
 mUfFsT7rxJ3CxxH2n80Nt0Y=
X-Google-Smtp-Source: ABdhPJz/rLdReHCjDfI4CPt0oZx4S9Edc53mF4qYVIcJnJHvTAYUGj9wZfJjhvhtT6hUxywQ/eS0bw==
X-Received: by 2002:a1c:a9c6:: with SMTP id s189mr3429083wme.166.1595936285688; 
 Tue, 28 Jul 2020 04:38:05 -0700 (PDT)
Received: from CBGR90WXYV0 (host86-143-223-30.range86-143.btcentralplus.com.
 [86.143.223.30])
 by smtp.gmail.com with ESMTPSA id x11sm3521703wmc.33.2020.07.28.04.38.04
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Tue, 28 Jul 2020 04:38:05 -0700 (PDT)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
To: =?utf-8?Q?'Philippe_Mathieu-Daud=C3=A9'?= <philmd@redhat.com>,
 <qemu-devel@nongnu.org>
References: <20200728100925.10454-1-philmd@redhat.com>
In-Reply-To: <20200728100925.10454-1-philmd@redhat.com>
Subject: RE: [PATCH-for-5.1] accel/xen: Fix xen_enabled() behavior on
 target-agnostic objects
Date: Tue, 28 Jul 2020 12:37:42 +0100
Message-ID: <000501d664d3$8e72bed0$ab583c70$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="utf-8"
Content-Transfer-Encoding: quoted-printable
X-Mailer: Microsoft Outlook 16.0
Content-Language: en-gb
Thread-Index: AQENRsOLIapjYfI4XdI3JVM2u0h2V6qvGVsw
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Reply-To: paul@xen.org
Cc: 'Peter Maydell' <peter.maydell@linaro.org>,
 'Stefano Stabellini' <sstabellini@kernel.org>,
 'Paul Durrant' <pdurrant@amazon.com>, xen-devel@lists.xenproject.org,
 'Anthony Perard' <anthony.perard@citrix.com>,
 'Paolo Bonzini' <pbonzini@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> -----Original Message-----
> From: Philippe Mathieu-Daudé <philmd@redhat.com>
> Sent: 28 July 2020 11:09
> To: qemu-devel@nongnu.org
> Cc: Paul Durrant <paul@xen.org>; Paolo Bonzini <pbonzini@redhat.com>; xen-devel@lists.xenproject.org;
> Stefano Stabellini <sstabellini@kernel.org>; Anthony Perard <anthony.perard@citrix.com>; Philippe
> Mathieu-Daudé <philmd@redhat.com>; Paul Durrant <pdurrant@amazon.com>; Peter Maydell
> <peter.maydell@linaro.org>
> Subject: [PATCH-for-5.1] accel/xen: Fix xen_enabled() behavior on target-agnostic objects
>
> CONFIG_XEN is generated by configure and stored in "config-target.h",
> which is (obviously) only included for target-specific objects.
> This is a problem for target-agnostic objects, as CONFIG_XEN is never
> defined and xen_enabled() is always inlined as 'false'.
>
> Fix by following the KVM scheme, defining CONFIG_XEN_IS_POSSIBLE
> when we don't know, to force the call of the non-inlined function
> returning the xen_allowed boolean.
>
> Fixes: da278d58a092 ("accel: Move Xen accelerator code under accel/xen/")
> Reported-by: Paul Durrant <pdurrant@amazon.com>
> Suggested-by: Peter Maydell <peter.maydell@linaro.org>
> Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>

Tested-by: Paul Durrant <paul@xen.org>

> ---
>  include/sysemu/xen.h | 14 +++++++++++---
>  1 file changed, 11 insertions(+), 3 deletions(-)
>
> diff --git a/include/sysemu/xen.h b/include/sysemu/xen.h
> index 1ca292715e..385a1fa2bf 100644
> --- a/include/sysemu/xen.h
> +++ b/include/sysemu/xen.h
> @@ -8,7 +8,15 @@
>  #ifndef SYSEMU_XEN_H
>  #define SYSEMU_XEN_H
>
> -#ifdef CONFIG_XEN
> +#ifdef NEED_CPU_H
> +# ifdef CONFIG_XEN
> +#  define CONFIG_XEN_IS_POSSIBLE
> +# endif
> +#else
> +# define CONFIG_XEN_IS_POSSIBLE
> +#endif
> +
> +#ifdef CONFIG_XEN_IS_POSSIBLE
>
>  bool xen_enabled(void);
>
> @@ -18,7 +26,7 @@ void xen_ram_alloc(ram_addr_t ram_addr, ram_addr_t size,
>                     struct MemoryRegion *mr, Error **errp);
>  #endif
>
> -#else /* !CONFIG_XEN */
> +#else /* !CONFIG_XEN_IS_POSSIBLE */
>
>  #define xen_enabled() 0
>  #ifndef CONFIG_USER_ONLY
> @@ -33,6 +41,6 @@ static inline void xen_ram_alloc(ram_addr_t ram_addr, ram_addr_t size,
>  }
>  #endif
>
> -#endif /* CONFIG_XEN */
> +#endif /* CONFIG_XEN_IS_POSSIBLE */
>
>  #endif
> --
> 2.21.3
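The gating above can be condensed into a self-contained sketch. This is illustrative C, not QEMU source: the macro names come from the patch, but xen_allowed, set_xen_allowed() and the stub bodies are stand-ins. With neither NEED_CPU_H nor CONFIG_XEN defined (modelling a target-agnostic object), the call now reaches the out-of-line function instead of a constant 'false':

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative stand-ins, not QEMU source. NEED_CPU_H marks
 * target-specific objects; CONFIG_XEN only exists in config-target.h,
 * so target-agnostic objects never see it. Neither is defined here. */

#ifdef NEED_CPU_H
# ifdef CONFIG_XEN
#  define CONFIG_XEN_IS_POSSIBLE
# endif
#else
/* Target-agnostic: cannot rule Xen out, so take the function path. */
# define CONFIG_XEN_IS_POSSIBLE
#endif

static bool xen_allowed;  /* stand-in for the accelerator's flag */

#ifdef CONFIG_XEN_IS_POSSIBLE
bool xen_enabled(void) { return xen_allowed; }
#else
# define xen_enabled() 0  /* the pre-patch trap: constant false */
#endif

/* Demo helper: flip the runtime flag. */
void set_xen_allowed(bool v) { xen_allowed = v; }
```

Before the patch, a target-agnostic object would hit the constant-false macro branch regardless of the runtime flag; after it, the runtime value is observed.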




From xen-devel-bounces@lists.xenproject.org Tue Jul 28 11:42:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jul 2020 11:42:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k0O0O-0007ZP-KK; Tue, 28 Jul 2020 11:42:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ZWt7=BH=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1k0O0N-0007ZK-2W
 for xen-devel@lists.xenproject.org; Tue, 28 Jul 2020 11:42:47 +0000
X-Inumbo-ID: 73fd8c0b-d0c7-11ea-8b28-bc764e2007e4
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 73fd8c0b-d0c7-11ea-8b28-bc764e2007e4;
 Tue, 28 Jul 2020 11:42:46 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595936566;
 h=from:to:cc:subject:date:message-id:in-reply-to:
 references:mime-version:content-transfer-encoding;
 bh=iwLhOsjZtYhl+WFa5BUY/82Z8FbB6FQLSph99aT8EGw=;
 b=BixnbRswkYIbU65CmkXJxMolgQMy1yusl99uG/1f9Tn4MIamnnF3sVRi
 YKppZ/ffq1PWUKcKKaKAT095A9+xJW+8pxWMT+bfa/7/jxA2GDcqbSWfo
 obu6mSQjCldJrZwaVS100/N/0Q3/YveWmgf9ohhvopIunKfNHPFlD8tcO c=;
Authentication-Results: esa6.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: eST4pcblEO9C4zMoJzXs8FUL90X3L5/U0rACoVj/dtaz69BM1mQPX91USa/YLWbf/s5kkvs3z3
 QoxfVkTFF9gEtfVjJ2yybcqAH46FK8ACLdf97hJ6Hifthnze4ZxYG9Nc41FuSq47pXZeBrwobb
 TZ9GbHrh1h//0/sZC1rIdcSLxLAhbUvoZpFO4DAN7DuYKGEmcrKTu+MzROy8YleM2D0DxUYTLp
 s4j/3TQR8RoAyxfz2785glGSM5s8yBtxv6NnlJwLCUT3EhorCmMCbHdUMduNkBYU4PVBX7wQYr
 O9w=
X-SBRS: 2.7
X-MesageID: 23666671
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,406,1589256000"; d="scan'208";a="23666671"
From: Roger Pau Monne <roger.pau@citrix.com>
To: <linux-kernel@vger.kernel.org>
Subject: [PATCH] xen/balloon: add header guard
Date: Tue, 28 Jul 2020 13:42:35 +0200
Message-ID: <20200728114235.58619-1-roger.pau@citrix.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20200727091342.52325-5-roger.pau@citrix.com>
References: <20200727091342.52325-5-roger.pau@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>, Stefano
 Stabellini <sstabellini@kernel.org>, Roger Pau Monne <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

In order to protect against the header being included multiple times
in the same compilation unit.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org
---
This is required as a pre-patch to use ZONE_DEVICE, or else the
fallback of including the balloon header might not work properly.
---
 include/xen/balloon.h | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/include/xen/balloon.h b/include/xen/balloon.h
index 6fb95aa19405..6dbdb0b3fd03 100644
--- a/include/xen/balloon.h
+++ b/include/xen/balloon.h
@@ -2,6 +2,8 @@
 /******************************************************************************
  * Xen balloon functionality
  */
+#ifndef _XEN_BALLOON_H
+#define _XEN_BALLOON_H
 
 #define RETRY_UNLIMITED	0
 
@@ -34,3 +36,5 @@ static inline void xen_balloon_init(void)
 {
 }
 #endif
+
+#endif	/* _XEN_BALLOON_H */
-- 
2.27.0
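The guard's effect can be shown in a few lines of illustrative C (names here are hypothetical, not the kernel header): the second expansion of a guarded region is skipped entirely, so nothing is redefined when the header is pulled in twice:

```c
#include <assert.h>

/* First "inclusion" of a guarded region (hypothetical names). */
#ifndef _DEMO_BALLOON_H
#define _DEMO_BALLOON_H
#define DEMO_RETRY_UNLIMITED 0
struct demo_balloon_stats { long current_pages; };
#endif

/* Second "inclusion": without the guard this would redefine the
 * macro and the struct; with it, the whole region is skipped. */
#ifndef _DEMO_BALLOON_H
#define _DEMO_BALLOON_H
#define DEMO_RETRY_UNLIMITED 1             /* never reached */
struct demo_balloon_stats { int clash; };  /* never reached */
#endif

int guarded_retry_value(void) { return DEMO_RETRY_UNLIMITED; }
```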



From xen-devel-bounces@lists.xenproject.org Tue Jul 28 12:08:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jul 2020 12:08:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k0OP7-00014D-6X; Tue, 28 Jul 2020 12:08:21 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=E6Nk=BH=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1k0OP5-000148-9g
 for xen-devel@lists.xenproject.org; Tue, 28 Jul 2020 12:08:19 +0000
X-Inumbo-ID: 0546bfbc-d0cb-11ea-a8b6-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0546bfbc-d0cb-11ea-a8b6-12813bfff9fa;
 Tue, 28 Jul 2020 12:08:17 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=4GykV/FJJCAbwWOJk4uwAGIo6lKI+SKaL3E2nrR5CLU=; b=KkutKignzh1EEWFGziafmnX2Q
 3scXDH28T6wpDP2284xNFSjlqlLCwJWSK9Rs8gHrsU4Lwzm7rUWffMv8VbMTa29ZulcE60nm4qeIb
 ZiPaWFdaCIckF9Xe1cdgvp2XkH2C+7+Sts6RqQOLRjool5Xv+kTmBUypfEStTK8deaHj0=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1k0OP2-0007Jw-MH; Tue, 28 Jul 2020 12:08:16 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1k0OP2-0001za-Aq; Tue, 28 Jul 2020 12:08:16 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1k0OP2-0003hs-A5; Tue, 28 Jul 2020 12:08:16 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-152249-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 152249: all pass - PUSHED
X-Osstest-Versions-This: ovmf=ffde22468e2f0e93b51f97b801e6c7a181088c61
X-Osstest-Versions-That: ovmf=a44f558a84c67cd88b8215d4c076123cf58438f4
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 28 Jul 2020 12:08:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 152249 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/152249/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 ffde22468e2f0e93b51f97b801e6c7a181088c61
baseline version:
 ovmf                 a44f558a84c67cd88b8215d4c076123cf58438f4

Last test of basis   152244  2020-07-28 00:40:52 Z    0 days
Testing same since   152249  2020-07-28 07:04:39 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Guomin Jiang <guomin.jiang@intel.com>
  Laszlo Ersek <lersek@redhat.com>
  Michael Kubacki <michael.a.kubacki@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   a44f558a84..ffde22468e  ffde22468e2f0e93b51f97b801e6c7a181088c61 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Tue Jul 28 13:46:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jul 2020 13:46:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k0Pw2-0000re-HR; Tue, 28 Jul 2020 13:46:26 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=K5Bo=BH=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1k0Pw1-0000rZ-MJ
 for xen-devel@lists.xenproject.org; Tue, 28 Jul 2020 13:46:25 +0000
X-Inumbo-ID: b98830a3-d0d8-11ea-8b54-bc764e2007e4
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b98830a3-d0d8-11ea-8b54-bc764e2007e4;
 Tue, 28 Jul 2020 13:46:24 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595943984;
 h=subject:to:cc:references:from:message-id:date:
 mime-version:in-reply-to:content-transfer-encoding;
 bh=YhoL7kY+bwriu01woWNcAHyzh1F/hQ8bxdDZSr+kIVk=;
 b=gMzFarRKKobk+EXxdCVHDHPkxfxtROcdyzmizfwRpdD+Fkh0nFa4Bs//
 5eDZA4mbj/WQi1QUWFAuvyRhbKgeaxgzCD5pnhVgIfHjzBXrJQ5IiF8S+
 x+W+S2LBs2BSOmLgUaW5VMYfvU7HIENnhWmNRyMv49EynZ2aZGPeqwxHe M=;
Authentication-Results: esa1.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: Hz2AnqTPk3jwtY+eH1yLKDlvxEPfmviLDPbxLMBzHQX9tYoaxKKsr0bpFaE/CnK/ak/W91Xotx
 ZPKZPG+ya6vDuLJu0tuZgX/B4jOY9NHpsNjaXHMskmttgm6CaOASt0yhMFgFF9ja5kNTwV1aQs
 fd6+5m8xNygCAJeQ9rO594NLdkafoCOKd5xzVOYLccQHI00JW7EoOBq5SOufoFAeKdgOjc+MsW
 5hAq/8z3NxzRyUcdEAh91howVCp6ywTtcLttCqb9d1kW8LYxDO4MdnfWpVHX4NiYNZmhtI0MsW
 XxQ=
X-SBRS: 2.7
X-MesageID: 23684190
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,406,1589256000"; d="scan'208";a="23684190"
Subject: Re: [PATCH] x86/vhpet: Fix type size in timer_int_route_valid
To: Eslam Elnikety <elnikety@amazon.com>, <xen-devel@lists.xenproject.org>
References: <20200728083357.77999-1-elnikety@amazon.com>
 <a55fba45-a008-059e-ea8c-b7300e2e8b7d@citrix.com>
 <09b2f75d-13d7-8f53-54a1-6f10ecd7b6e2@amazon.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <da2eb826-5652-6020-0738-27f55659925c@citrix.com>
Date: Tue, 28 Jul 2020 14:46:19 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <09b2f75d-13d7-8f53-54a1-6f10ecd7b6e2@amazon.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Paul Durrant <pdurrant@amazon.co.uk>, Jan Beulich <jbeulich@suse.com>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 28/07/2020 12:09, Eslam Elnikety wrote:
> On 28.07.20 11:26, Andrew Cooper wrote:
>> On 28/07/2020 09:33, Eslam Elnikety wrote:
>>> The macro timer_int_route_cap evaluates to a 64-bit value. Extend the
>>> size of the left side of timer_int_route_valid to match.
>>>
>>> This bug was discovered and resolved using Coverity Static Analysis
>>> Security Testing (SAST) by Synopsys, Inc.
>>>
>>> Signed-off-by: Eslam Elnikety <elnikety@amazon.com>
>>> ---
>>>   xen/arch/x86/hvm/hpet.c | 2 +-
>>>   1 file changed, 1 insertion(+), 1 deletion(-)
>>>
>>> diff --git a/xen/arch/x86/hvm/hpet.c b/xen/arch/x86/hvm/hpet.c
>>> index ca94e8b453..9afe6e6760 100644
>>> --- a/xen/arch/x86/hvm/hpet.c
>>> +++ b/xen/arch/x86/hvm/hpet.c
>>> @@ -66,7 +66,7 @@
>>>       MASK_EXTR(timer_config(h, n), HPET_TN_INT_ROUTE_CAP)
>>>     #define timer_int_route_valid(h, n) \
>>> -    ((1u << timer_int_route(h, n)) & timer_int_route_cap(h, n))
>>> +    ((1ULL << timer_int_route(h, n)) & timer_int_route_cap(h, n))
>>>     static inline uint64_t hpet_read_maincounter(HPETState *h,
>>> uint64_t guest_time)
>>>   {
>>
>> Does this work?
>
> Yes! This is better than my fix (and I like that it clarifies the
> route part of the config). Will you sign-off and send a patch?

Any chance I can persuade you, or someone else to do this?  Loads of the
macros can be removed by filling in proper bitfield names in place of
'_', resulting in rather better code.

~Andrew

>
>>
>> diff --git a/xen/arch/x86/hvm/hpet.c b/xen/arch/x86/hvm/hpet.c
>> index ca94e8b453..638f6174de 100644
>> --- a/xen/arch/x86/hvm/hpet.c
>> +++ b/xen/arch/x86/hvm/hpet.c
>> @@ -62,8 +62,7 @@
>>     #define timer_int_route(h, n)    MASK_EXTR(timer_config(h, n),
>> HPET_TN_ROUTE)
>>   -#define timer_int_route_cap(h, n) \
>> -    MASK_EXTR(timer_config(h, n), HPET_TN_INT_ROUTE_CAP)
>> +#define timer_int_route_cap(h, n) (h)->hpet.timers[(n)].route
>>     #define timer_int_route_valid(h, n) \
>>       ((1u << timer_int_route(h, n)) & timer_int_route_cap(h, n))
>> diff --git a/xen/include/asm-x86/hvm/vpt.h
>> b/xen/include/asm-x86/hvm/vpt.h
>> index f0e0eaec83..a41fc443cc 100644
>> --- a/xen/include/asm-x86/hvm/vpt.h
>> +++ b/xen/include/asm-x86/hvm/vpt.h
>> @@ -73,7 +73,13 @@ struct hpet_registers {
>>       uint64_t isr;               /* interrupt status reg */
>>       uint64_t mc64;              /* main counter */
>>       struct {                    /* timers */
>> -        uint64_t config;        /* configuration/cap */
>> +        union {
>> +            uint64_t config;    /* configuration/cap */
>> +            struct {
>> +                uint32_t _;
>> +                uint32_t route;
>> +            };
>> +        };
>>           uint64_t cmp;           /* comparator */
>>           uint64_t fsb;           /* FSB route, not supported now */
>>       } timers[HPET_TIMER_NUM];
>>
>>
>
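The width bug under discussion reproduces easily in isolation. In this sketch (illustrative functions, not the Xen macros), valid_old() models the pre-fix expression: 1u << n is computed as a 32-bit unsigned int before widening, so bits 32..63 of a 64-bit capability mask are unreachable; for n >= 32 the genuine 1u << n is in fact undefined behavior, so the truncation is modelled with an explicit cast to keep the demo well-defined. valid_new() performs the shift at 64-bit width, as the patch does:

```c
#include <assert.h>
#include <stdint.h>

/* Models the pre-fix expression: shift done as unsigned int (32-bit),
 * then widened. The cast makes the 32-bit truncation explicit. */
uint64_t valid_old(uint64_t cap, unsigned int n)
{
    uint32_t bit = (uint32_t)(1ULL << n);
    return (uint64_t)bit & cap;
}

/* The fixed expression: shift performed at 64-bit width. */
uint64_t valid_new(uint64_t cap, unsigned int n)
{
    return (1ULL << n) & cap;
}
```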



From xen-devel-bounces@lists.xenproject.org Tue Jul 28 13:55:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jul 2020 13:55:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k0Q4S-0001kG-Dh; Tue, 28 Jul 2020 13:55:08 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=K5Bo=BH=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1k0Q4Q-0001kB-Vo
 for xen-devel@lists.xenproject.org; Tue, 28 Jul 2020 13:55:07 +0000
X-Inumbo-ID: f0ae1aaa-d0d9-11ea-8b61-bc764e2007e4
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f0ae1aaa-d0d9-11ea-8b61-bc764e2007e4;
 Tue, 28 Jul 2020 13:55:05 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595944505;
 h=subject:to:cc:references:from:message-id:date:
 mime-version:in-reply-to:content-transfer-encoding;
 bh=dMn9RHv+dPpFytGvedx8MhPCEoe+fjDhnRX3gQvxjg8=;
 b=ffpSzJp32XfT61n8ojlOCVzXEdLGKdZmHehOPF/ZYm9gYKJBFh0baHAP
 cGmbvTsUk5L2fXIOtN12vUQLs2SviF+xM3SknIjCX+dAHaYLfFwulNTjd
 n0u3Crn/UMzwr9glCS7YvPxiJrnQuFc/0I3ZTrIU9R7Q2pMaLiG9qe0r5 M=;
Authentication-Results: esa4.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: uBlueMc8D39Fl806HQgGc7o3+lPrtdgkT7MfPgoExp6OAl1YE2dAikYcEhgWxGQntWgrXFRY6m
 s5wvSyAMIPxr4Kyj0rYT5UNmGEpQkJR5uhTKtjolHnEMoutGtUt0rekkho31kNTVsoGfgUYO/3
 hGJO7ermaVAUtSBCUdAR/SRo7FOMGtOcRkZUr4UKhtEs2dsEzIZh/zKTJTUitpqVjl+9VkP50w
 GuuwFt4V2uiE8119pZxDCx08hETQwb9AM9e/vxQ+ZuWiogmGqXp8xD36GsZFq9xemLVVdYNOAI
 hrU=
X-SBRS: 2.7
X-MesageID: 24216523
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,406,1589256000"; d="scan'208";a="24216523"
Subject: Re: [PATCH 1/4] x86: replace __ASM_{CL,ST}AC
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
 <xen-devel@lists.xenproject.org>
References: <58b9211a-f6dd-85da-d0bd-c927ac537a5d@suse.com>
 <fc8e042e-fef8-ac38-34d8-16b13e4b0135@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <ea6eeb6d-7af2-97cb-4c11-6e0a81755961@citrix.com>
Date: Tue, 28 Jul 2020 14:55:01 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <fc8e042e-fef8-ac38-34d8-16b13e4b0135@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 15/07/2020 11:48, Jan Beulich wrote:
> --- a/xen/arch/x86/arch.mk
> +++ b/xen/arch/x86/arch.mk
> @@ -20,6 +20,7 @@ $(call as-option-add,CFLAGS,CC,"rdrand %
>  $(call as-option-add,CFLAGS,CC,"rdfsbase %rax",-DHAVE_AS_FSGSBASE)
>  $(call as-option-add,CFLAGS,CC,"xsaveopt (%rax)",-DHAVE_AS_XSAVEOPT)
>  $(call as-option-add,CFLAGS,CC,"rdseed %eax",-DHAVE_AS_RDSEED)
> +$(call as-option-add,CFLAGS,CC,"clac",-DHAVE_AS_CLAC_STAC)

Kconfig please, rather than extending this legacy section.

That said, surely stac/clac support is old enough for us to start using
unconditionally?

Could we see about sorting a reasonable minimum toolchain version,
before we churn all the logic to deal with obsolete toolchains?

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue Jul 28 13:55:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jul 2020 13:55:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k0Q51-0001nB-NA; Tue, 28 Jul 2020 13:55:43 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=uMAr=BH=amazon.com=prvs=4712fd9bf=elnikety@srs-us1.protection.inumbo.net>)
 id 1k0Q4z-0001n1-Ph
 for xen-devel@lists.xenproject.org; Tue, 28 Jul 2020 13:55:41 +0000
X-Inumbo-ID: 0601aeef-d0da-11ea-8b61-bc764e2007e4
Received: from smtp-fw-6001.amazon.com (unknown [52.95.48.154])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0601aeef-d0da-11ea-8b61-bc764e2007e4;
 Tue, 28 Jul 2020 13:55:41 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=amazon.com; i=@amazon.com; q=dns/txt; s=amazon201209;
 t=1595944541; x=1627480541;
 h=subject:to:cc:references:from:message-id:date:
 mime-version:in-reply-to:content-transfer-encoding;
 bh=pzzlPohXMmuFAnHly/l4tQpx3Vs5iY/K7RWtn9IkALM=;
 b=luzQw0Ub/ZEG8XB56nG1vIOzPyA6JrfNjNU5vKbPJrGHRocmnUdE4zAZ
 +m7BrgALGfa5DTi5jISNSEN0hVJqkdDgwmV7OOBHhoa4Tpc1srkEEjxsx
 wv7kj+OxFf52qP/lWWvONfcqtcMh5tKzjCFRE/prcVfkE03RwVMqqRezz w=;
IronPort-SDR: wxGW1hOizBSJTCKJNeHfnK5OPCP0y/k3J+p298XYpIu9T4V35758RHm9JFsVkQbbgWJpEXjoas
 d4UdRxnc0/GQ==
X-IronPort-AV: E=Sophos;i="5.75,406,1589241600"; d="scan'208";a="45986696"
Received: from iad12-co-svc-p1-lb1-vlan3.amazon.com (HELO
 email-inbound-relay-1e-62350142.us-east-1.amazon.com) ([10.43.8.6])
 by smtp-border-fw-out-6001.iad6.amazon.com with ESMTP;
 28 Jul 2020 13:55:40 +0000
Received: from EX13MTAUEA002.ant.amazon.com
 (iad55-ws-svc-p15-lb9-vlan2.iad.amazon.com [10.40.159.162])
 by email-inbound-relay-1e-62350142.us-east-1.amazon.com (Postfix) with ESMTPS
 id 5D6F9A20CC; Tue, 28 Jul 2020 13:55:39 +0000 (UTC)
Received: from EX13D03EUA002.ant.amazon.com (10.43.165.166) by
 EX13MTAUEA002.ant.amazon.com (10.43.61.77) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Tue, 28 Jul 2020 13:55:38 +0000
Received: from a483e73f63b0.ant.amazon.com (10.43.160.65) by
 EX13D03EUA002.ant.amazon.com (10.43.165.166) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Tue, 28 Jul 2020 13:55:34 +0000
Subject: Re: [PATCH] x86/vhpet: Fix type size in timer_int_route_valid
To: Andrew Cooper <andrew.cooper3@citrix.com>, Eslam Elnikety
 <elnikety@amazon.com>, <xen-devel@lists.xenproject.org>
References: <20200728083357.77999-1-elnikety@amazon.com>
 <a55fba45-a008-059e-ea8c-b7300e2e8b7d@citrix.com>
 <09b2f75d-13d7-8f53-54a1-6f10ecd7b6e2@amazon.com>
 <da2eb826-5652-6020-0738-27f55659925c@citrix.com>
From: Eslam Elnikety <elnikety@amazon.com>
Message-ID: <9a3970c8-0186-45b5-d608-838f42f726d8@amazon.com>
Date: Tue, 28 Jul 2020 15:55:29 +0200
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:68.0)
 Gecko/20100101 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <da2eb826-5652-6020-0738-27f55659925c@citrix.com>
Content-Type: text/plain; charset="utf-8"; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-Originating-IP: [10.43.160.65]
X-ClientProxiedBy: EX13D28UWC002.ant.amazon.com (10.43.162.145) To
 EX13D03EUA002.ant.amazon.com (10.43.165.166)
Precedence: Bulk
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Jan Beulich <jbeulich@suse.com>,
 Paul Durrant <pdurrant@amazon.co.uk>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 28.07.20 15:46, Andrew Cooper wrote:
> On 28/07/2020 12:09, Eslam Elnikety wrote:
>> On 28.07.20 11:26, Andrew Cooper wrote:
>>> On 28/07/2020 09:33, Eslam Elnikety wrote:
>>>> The macro timer_int_route_cap evaluates to a 64-bit value. Extend the
>>>> size of the left side of timer_int_route_valid to match.
>>>>
>>>> This bug was discovered and resolved using Coverity Static Analysis
>>>> Security Testing (SAST) by Synopsys, Inc.
>>>>
>>>> Signed-off-by: Eslam Elnikety <elnikety@amazon.com>
>>>> ---
>>>>    xen/arch/x86/hvm/hpet.c | 2 +-
>>>>    1 file changed, 1 insertion(+), 1 deletion(-)
>>>>
>>>> diff --git a/xen/arch/x86/hvm/hpet.c b/xen/arch/x86/hvm/hpet.c
>>>> index ca94e8b453..9afe6e6760 100644
>>>> --- a/xen/arch/x86/hvm/hpet.c
>>>> +++ b/xen/arch/x86/hvm/hpet.c
>>>> @@ -66,7 +66,7 @@
>>>>        MASK_EXTR(timer_config(h, n), HPET_TN_INT_ROUTE_CAP)
>>>>      #define timer_int_route_valid(h, n) \
>>>> -    ((1u << timer_int_route(h, n)) & timer_int_route_cap(h, n))
>>>> +    ((1ULL << timer_int_route(h, n)) & timer_int_route_cap(h, n))
>>>>      static inline uint64_t hpet_read_maincounter(HPETState *h,
>>>> uint64_t guest_time)
>>>>    {
>>>
>>> Does this work?
>>
>> Yes! This is better than my fix (and I like that it clarifies the
>> route part of the config). Will you sign-off and send a patch?
> 
> Any chance I can persuade you, or someone else to do this?  Loads of the
> macros can be removed by filling in proper bitfield names in place of
> '_', resulting in rather better code.
> 
> ~Andrew
> 

Sure, I can send a patch for this one occurrence at hand right away -- 
and I will keep an eye on the general pattern. Since the patch will be 
mostly your diff, please send your sign-off (or another tag as you see fit).

Thanks,
Eslam

>>
>>>
>>> diff --git a/xen/arch/x86/hvm/hpet.c b/xen/arch/x86/hvm/hpet.c
>>> index ca94e8b453..638f6174de 100644
>>> --- a/xen/arch/x86/hvm/hpet.c
>>> +++ b/xen/arch/x86/hvm/hpet.c
>>> @@ -62,8 +62,7 @@
>>>      #define timer_int_route(h, n)    MASK_EXTR(timer_config(h, n),
>>> HPET_TN_ROUTE)
>>>    -#define timer_int_route_cap(h, n) \
>>> -    MASK_EXTR(timer_config(h, n), HPET_TN_INT_ROUTE_CAP)
>>> +#define timer_int_route_cap(h, n) (h)->hpet.timers[(n)].route
>>>      #define timer_int_route_valid(h, n) \
>>>        ((1u << timer_int_route(h, n)) & timer_int_route_cap(h, n))
>>> diff --git a/xen/include/asm-x86/hvm/vpt.h
>>> b/xen/include/asm-x86/hvm/vpt.h
>>> index f0e0eaec83..a41fc443cc 100644
>>> --- a/xen/include/asm-x86/hvm/vpt.h
>>> +++ b/xen/include/asm-x86/hvm/vpt.h
>>> @@ -73,7 +73,13 @@ struct hpet_registers {
>>>        uint64_t isr;               /* interrupt status reg */
>>>        uint64_t mc64;              /* main counter */
>>>        struct {                    /* timers */
>>> -        uint64_t config;        /* configuration/cap */
>>> +        union {
>>> +            uint64_t config;    /* configuration/cap */
>>> +            struct {
>>> +                uint32_t _;
>>> +                uint32_t route;
>>> +            };
>>> +        };
>>>            uint64_t cmp;           /* comparator */
>>>            uint64_t fsb;           /* FSB route, not supported now */
>>>        } timers[HPET_TIMER_NUM];
>>>
>>>
>>
> 
> 



From xen-devel-bounces@lists.xenproject.org Tue Jul 28 13:56:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jul 2020 13:56:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k0Q5o-0001tC-0x; Tue, 28 Jul 2020 13:56:32 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jBLr=BH=oracle.com=boris.ostrovsky@srs-us1.protection.inumbo.net>)
 id 1k0Q5m-0001sw-Cn
 for xen-devel@lists.xenproject.org; Tue, 28 Jul 2020 13:56:30 +0000
X-Inumbo-ID: 22ca76a0-d0da-11ea-a8eb-12813bfff9fa
Received: from aserp2120.oracle.com (unknown [141.146.126.78])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 22ca76a0-d0da-11ea-a8eb-12813bfff9fa;
 Tue, 28 Jul 2020 13:56:29 +0000 (UTC)
Received: from pps.filterd (aserp2120.oracle.com [127.0.0.1])
 by aserp2120.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 06SDpcaW046243;
 Tue, 28 Jul 2020 13:56:27 GMT
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com;
 h=subject : to : cc :
 references : from : message-id : date : mime-version : in-reply-to :
 content-type : content-transfer-encoding; s=corp-2020-01-29;
 bh=kyh+ViF7bEqz01VV8OsBLvN7HPOkXKgdVhMj2OLW28w=;
 b=UFIC8uV4nGF9MKABFg1jrHe/KFO1kRvux0l8eEQ/GNe6G+nOGYr3YjVmPckIy4FV+zOW
 5aCfFv0OZo60ynDbujYukQHkYeTiCUmyZOt6wnI6uE8tJARzthqWwZY5KaKKJ+9y4ytS
 wVz5FzB3optGCV5dtjtH9RI3Og3AxaLZY1ggCbBYPvoVIKdKA2Xy5xpmOKQOHtt++sZS
 FzVCJcWhFYPyIbacDYeWotTWtJk1u4Ic6LtfjONHgqA7j2oLMxXxcOH2UDvQBet+lwzk
 vmQrQmOKqAaXFwCZeN15ZkYdr4REYAQFvTbIeT6mifShDVHsqPc7xYLDQSFgY7YWH/DH BA== 
Received: from aserp3030.oracle.com (aserp3030.oracle.com [141.146.126.71])
 by aserp2120.oracle.com with ESMTP id 32hu1j7mp9-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=FAIL);
 Tue, 28 Jul 2020 13:56:27 +0000
Received: from pps.filterd (aserp3030.oracle.com [127.0.0.1])
 by aserp3030.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 06SDqiUQ004987;
 Tue, 28 Jul 2020 13:56:26 GMT
Received: from userv0121.oracle.com (userv0121.oracle.com [156.151.31.72])
 by aserp3030.oracle.com with ESMTP id 32hu5u50qt-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Tue, 28 Jul 2020 13:56:26 +0000
Received: from abhmp0011.oracle.com (abhmp0011.oracle.com [141.146.116.17])
 by userv0121.oracle.com (8.14.4/8.13.8) with ESMTP id 06SDuODp017512;
 Tue, 28 Jul 2020 13:56:25 GMT
Received: from [10.39.205.95] (/10.39.205.95)
 by default (Oracle Beehive Gateway v4.0)
 with ESMTP ; Tue, 28 Jul 2020 06:56:22 -0700
Subject: Re: [PATCH] xen/balloon: add header guard
To: Roger Pau Monne <roger.pau@citrix.com>, linux-kernel@vger.kernel.org
References: <20200727091342.52325-5-roger.pau@citrix.com>
 <20200728114235.58619-1-roger.pau@citrix.com>
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Message-ID: <f7740589-0076-2c11-ccbe-f727c6918315@oracle.com>
Date: Tue, 28 Jul 2020 09:56:17 -0400
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20200728114235.58619-1-roger.pau@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-US
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9695
 signatures=668679
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 suspectscore=0
 malwarescore=0
 mlxscore=0 adultscore=0 spamscore=0 phishscore=0 mlxlogscore=999
 bulkscore=0 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2006250000 definitions=main-2007280106
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9695
 signatures=668679
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 spamscore=0
 bulkscore=0 mlxlogscore=999
 lowpriorityscore=0 malwarescore=0 clxscore=1015 mlxscore=0 impostorscore=0
 phishscore=0 adultscore=0 suspectscore=0 priorityscore=1501
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2006250000
 definitions=main-2007280106
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org,
 Stefano Stabellini <sstabellini@kernel.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 7/28/20 7:42 AM, Roger Pau Monne wrote:
> In order to protect against the header being included multiple times
> in the same compilation unit.
>
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>


Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>



From xen-devel-bounces@lists.xenproject.org Tue Jul 28 13:59:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jul 2020 13:59:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k0Q8h-00024T-Fv; Tue, 28 Jul 2020 13:59:31 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=K5Bo=BH=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1k0Q8h-00024O-3G
 for xen-devel@lists.xenproject.org; Tue, 28 Jul 2020 13:59:31 +0000
X-Inumbo-ID: 8e4abcaa-d0da-11ea-8b61-bc764e2007e4
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8e4abcaa-d0da-11ea-8b61-bc764e2007e4;
 Tue, 28 Jul 2020 13:59:30 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595944771;
 h=subject:to:cc:references:from:message-id:date:
 mime-version:in-reply-to:content-transfer-encoding;
 bh=v7TjwgpIipnilB9SKQprJ0BZzCL27UD9AeH4NyXe6m4=;
 b=M/Tjz45NOZlupJFcynPLNn2xkiX6rdoDdZPQ2L8jwhxsaWVWNE+18bTP
 hBuKSI2WoZuhPfVtDS8ZAv3z6QKEDKaeDWurkjtwpvK97YMMRlnS1s+vM
 5Q1N+pouydrrq6y2xibBffSsNW2NDGSf7vmA/UBbXv+TbGJi1IjWeXdwx U=;
Authentication-Results: esa1.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: EKu8ZImCF3nFbwNS64XopW8AVXuVnC+aBrx8EODBWfsnZeqQ6rmWePstkU8Mq5A4+2lGWXnR/v
 /5E4dZDquyd19hoLD2wwXUdl510KA48SXGBemrIGhFnuO4OYAmJwZ+CTxTlm8yJsspKfluaOTS
 I/m+FlUosxMtagHL4k19OlDfY5W3EjoD0kONvGOHS5vVugmAT1iwK6d6TP4VKzpyT7P+ViDML5
 A9avXKv7Fd5llHcxMaEath0t8hXf/2Q3xhfXArPk1v8JO2/kTkf1EYUBPFWbEMgIfBCOPgC3ge
 F2Y=
X-SBRS: 2.7
X-MesageID: 23685490
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,406,1589256000"; d="scan'208";a="23685490"
Subject: Re: [PATCH 1/4] x86: replace __ASM_{CL,ST}AC
To: Jan Beulich <jbeulich@suse.com>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>
References: <58b9211a-f6dd-85da-d0bd-c927ac537a5d@suse.com>
 <fc8e042e-fef8-ac38-34d8-16b13e4b0135@suse.com>
 <20200727145526.GR7191@Air-de-Roger>
 <b29e4b17-8ec2-a0db-8426-94393e9eb2c0@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <868f864b-ae8e-0b01-8cf0-74a0fd3982ee@citrix.com>
Date: Tue, 28 Jul 2020 14:59:08 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <b29e4b17-8ec2-a0db-8426-94393e9eb2c0@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 27/07/2020 20:47, Jan Beulich wrote:
> On 27.07.2020 16:55, Roger Pau Monné wrote:
>> On Wed, Jul 15, 2020 at 12:48:14PM +0200, Jan Beulich wrote:
>>> --- /dev/null
>>> +++ b/xen/include/asm-x86/asm-defns.h
>>
>> Maybe this could be asm-insn.h or a different name? I find it
>> confusing to have asm-defns.h and an asm_defs.h.
>
> While indeed I anticipated a reply to this effect, I don't consider
> asm-insn.h or asm-macros.h suitable: We don't want to limit this
> header to a narrower purpose than "all sorts of definitions", I
> don't think. Hence I chose that name despite its similarity to the
> C header's one.

Roger is correct.  Having asm-defns.h and asm_defs.h is too confusing,
and there is already too much behind-the-scenes magic here.

What is the anticipated end result, file-wise?  That might highlight a
better way forward.

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue Jul 28 14:12:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jul 2020 14:12:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k0QKf-0003k6-PO; Tue, 28 Jul 2020 14:11:53 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=K5Bo=BH=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1k0QKe-0003k1-MB
 for xen-devel@lists.xenproject.org; Tue, 28 Jul 2020 14:11:52 +0000
X-Inumbo-ID: 471bd2f6-d0dc-11ea-a8f3-12813bfff9fa
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 471bd2f6-d0dc-11ea-a8f3-12813bfff9fa;
 Tue, 28 Jul 2020 14:11:51 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595945511;
 h=subject:to:cc:references:from:message-id:date:
 mime-version:in-reply-to:content-transfer-encoding;
 bh=OuE3A6MRBokKKxndisn6x3i03m4uvfO/NGVkQ2H3vfQ=;
 b=bRXACsoSsHzX1CpOYQFb1ktdQ4MKi7BXJFRcxEsX5ALZdPCSI/J2ksGw
 SqBnX2PG9SzwklqZRhyi+5byw2WNBjP6o7j9nH5naDHYlN5uySc/WhM80
 IQDCWF2P/VlNmTucYuer+0lbMU189duo/lO+RflPPqkB0NLOH0I0e+Crp I=;
Authentication-Results: esa3.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: +hG5mCBM1DMxBIpktzqQNzNiw+hwmS4q2Geb/9+dpT9L5KGCEc0MHMF4is4iSgH08uHAa2wq3l
 yKki94fH2rcrBhzrMUdwJfqDXJTCZNclEllFdUIqkGdlsUCFyMyZmYIIC0yKb7zzZYNK03YC3k
 n7huPsPxnBO0B8ZbKtRbAqj+gKpQHMu4Zk/Y451mUu+D4qdA91mg8nSloWj9BLl37qzO/13srY
 S+Vr1ZKGVArYfyTPdUzullG+ehyUSYAMIVidewPG6K6jOzUy3Eh1kUXFLzOgwZm96J9iemX+AX
 6Ek=
X-SBRS: 2.7
X-MesageID: 23345124
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,406,1589256000"; d="scan'208";a="23345124"
Subject: Re: [PATCH 2/5] xen/gnttab: Rework resource acquisition
To: Xen-devel <xen-devel@lists.xenproject.org>
References: <20200728113712.22966-1-andrew.cooper3@citrix.com>
 <20200728113712.22966-3-andrew.cooper3@citrix.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <c95fdb03-c5d7-ac97-1b6f-eb280979968f@citrix.com>
Date: Tue, 28 Jul 2020 15:11:20 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20200728113712.22966-3-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Hubert Jasudowicz <hubert.jasudowicz@cert.pl>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Konrad Rzeszutek
 Wilk <konrad.wilk@oracle.com>, George Dunlap <George.Dunlap@eu.citrix.com>,
 Paul Durrant <paul@xen.org>, Jan Beulich <JBeulich@suse.com>,
 =?UTF-8?Q?Micha=c5=82_Leszczy=c5=84ski?= <michal.leszczynski@cert.pl>,
 Ian Jackson <ian.jackson@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 28/07/2020 12:37, Andrew Cooper wrote:
> diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
> index 9f0cae52c0..122d1e7596 100644
> --- a/xen/common/grant_table.c
> +++ b/xen/common/grant_table.c
> @@ -4013,6 +4013,72 @@ static int gnttab_get_shared_frame_mfn(struct domain *d,
>      return 0;
>  }
>  
> +int gnttab_acquire_resource(
> +    struct domain *d, unsigned int id, unsigned long frame,
> +    unsigned int nr_frames, xen_pfn_t mfn_list[])
> +{
> +    struct grant_table *gt = d->grant_table;
> +    unsigned int i = nr_frames, tot_frames;
> +    void **vaddrs;

I've folded an "= NULL" here to fix the CentOS 6 build.

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue Jul 28 14:14:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jul 2020 14:14:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k0QNS-0003sa-7g; Tue, 28 Jul 2020 14:14:46 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=K5Bo=BH=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1k0QNR-0003sV-C2
 for xen-devel@lists.xenproject.org; Tue, 28 Jul 2020 14:14:45 +0000
X-Inumbo-ID: af3fc6b0-d0dc-11ea-a8f3-12813bfff9fa
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id af3fc6b0-d0dc-11ea-a8f3-12813bfff9fa;
 Tue, 28 Jul 2020 14:14:44 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595945684;
 h=subject:to:cc:references:from:message-id:date:
 mime-version:in-reply-to:content-transfer-encoding;
 bh=tKXRMwykMcTyq4Ew5Chd2SfSrSaptAYVUdFD9NU8Aog=;
 b=iRFGd4FkIvVk/DVnmBFVaUo/ZQlQ5EuNDY0HGmommHhLzbmn8o1ugofV
 1vjSTZYYTK9UMOQbSfIw/cO7z2GFLcXhSQ1sP8zhMtuPKCjIcIiAoQ94h
 0WwLLiQQZUBoksFyJbUYKvrRndL1Ivzs4vptQ+u6Oa1KVd1ehUCNQnAae s=;
Authentication-Results: esa5.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: q+PyR+OGHB9jSQO8KUsBg9ErfHhN9hgSOlYrykfR0Emt8JWehlWGE7BLCY9oR8Kxz9Km5E6ZQB
 o0mcEBhBX8nPx6DBgHVQMlR7KkCd1TGuZmjIAlZXZTHT65ZQmya9A8pgxEdNp+S7BMITBEpxfQ
 cGgi5X3ZhK4wNmEYlkNqVEz/ZJgz6WA4jLKzTKiqDyTwcuc26F0Vb5wM1DAWZOxeSYtCzjmA6y
 lIPBLIcHsZUNzLg10yOE/Ukc2Lc1oT91LLYZxC6LvPyQLRWq+XLLvH0s2mBEKY3ePUq6mz4fGP
 k/s=
X-SBRS: 2.7
X-MesageID: 23540281
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,406,1589256000"; d="scan'208";a="23540281"
Subject: Re: [PATCH 5/5] tools/foreignmem: Support querying the size of a
 resource
To: Xen-devel <xen-devel@lists.xenproject.org>
References: <20200728113712.22966-1-andrew.cooper3@citrix.com>
 <20200728113712.22966-6-andrew.cooper3@citrix.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <ed045b42-55aa-7b63-fda9-ff7788e03ff9@citrix.com>
Date: Tue, 28 Jul 2020 15:14:39 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20200728113712.22966-6-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <Ian.Jackson@citrix.com>,
 =?UTF-8?Q?Micha=c5=82_Leszczy=c5=84ski?= <michal.leszczynski@cert.pl>,
 Hubert Jasudowicz <hubert.jasudowicz@cert.pl>, Wei Liu <wl@xen.org>,
 Paul Durrant <paul@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 28/07/2020 12:37, Andrew Cooper wrote:
> With the Xen side of this interface fixed to return real sizes, userspace
> needs to be able to make the query.
>
> Introduce xenforeignmemory_resource_size() for the purpose, bumping the
> library minor version and providing compatibility for the non-Linux builds.
>
> It's not possible to reuse the IOCTL_PRIVCMD_MMAP_RESOURCE infrastructure,
> because it depends on having already mmap()'d a suitably sized region before
> it will make an XENMEM_acquire_resource hypercall to Xen.
>
> Instead, open a xencall handle and make an XENMEM_acquire_resource hypercall
> directly.
>
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> ---
> CC: Ian Jackson <Ian.Jackson@citrix.com>
> CC: Wei Liu <wl@xen.org>
> CC: Paul Durrant <paul@xen.org>
> CC: Michał Leszczyński <michal.leszczynski@cert.pl>
> CC: Hubert Jasudowicz <hubert.jasudowicz@cert.pl>

I've folded:

diff --git a/tools/Rules.mk b/tools/Rules.mk
index 5ed5664bf7..b8ccf03ea9 100644
--- a/tools/Rules.mk
+++ b/tools/Rules.mk
@@ -123,7 +123,7 @@ LDLIBS_libxencall = $(SHDEPS_libxencall)
$(XEN_LIBXENCALL)/libxencall$(libextens
 SHLIB_libxencall  = $(SHDEPS_libxencall) -Wl,-rpath-link=$(XEN_LIBXENCALL)
 
 CFLAGS_libxenforeignmemory = -I$(XEN_LIBXENFOREIGNMEMORY)/include
$(CFLAGS_xeninclude)
-SHDEPS_libxenforeignmemory = $(SHLIB_libxentoolcore)
+SHDEPS_libxenforeignmemory = $(SHLIB_libxentoolcore) $(SHDEPS_libxencall)
 LDLIBS_libxenforeignmemory = $(SHDEPS_libxenforeignmemory)
$(XEN_LIBXENFOREIGNMEMORY)/libxenforeignmemory$(libextension)
 SHLIB_libxenforeignmemory  = $(SHDEPS_libxenforeignmemory)
-Wl,-rpath-link=$(XEN_LIBXENFOREIGNMEMORY)
 
diff --git a/tools/libs/foreignmemory/Makefile
b/tools/libs/foreignmemory/Makefile
index 8e07f92c59..f3a61e27c7 100644
--- a/tools/libs/foreignmemory/Makefile
+++ b/tools/libs/foreignmemory/Makefile
@@ -4,7 +4,7 @@ include $(XEN_ROOT)/tools/Rules.mk
 MAJOR    = 1
 MINOR    = 4
 LIBNAME  := foreignmemory
-USELIBS  := toollog toolcore
+USELIBS  := toollog toolcore call
 
 SRCS-y                 += core.c
 SRCS-$(CONFIG_Linux)   += linux.c

to fix the build in certain containers.

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue Jul 28 14:30:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jul 2020 14:30:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k0Qc6-0004rU-Jg; Tue, 28 Jul 2020 14:29:54 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=K5Bo=BH=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1k0Qc4-0004rP-PK
 for xen-devel@lists.xenproject.org; Tue, 28 Jul 2020 14:29:52 +0000
X-Inumbo-ID: cbc8e030-d0de-11ea-8b6c-bc764e2007e4
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id cbc8e030-d0de-11ea-8b6c-bc764e2007e4;
 Tue, 28 Jul 2020 14:29:51 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595946591;
 h=subject:to:cc:references:from:message-id:date:
 mime-version:in-reply-to:content-transfer-encoding;
 bh=DU53sn8unujLT4cYR/KYPEYMCvZlJOKY+lGGmICstp0=;
 b=b6KapncVTPjXGaYJHeBGh4sRycbIMQwEcMvHBQHulsJXy0IsrWXYYAA0
 wgv9puwRj7Oio4aI9g8jOHqqNE2vR2xQfQL0ayM0X1HblnIl+pXbWqb3+
 MBxLsB4yOsZYMI8hhHJVePcYIuYuZmpfiRKe1ePyzM8VAOTKaDZC+kMar w=;
Authentication-Results: esa2.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: 4qN49BupJCa7Xl6rlLMnQJWTmnds5IOcajG79uEVeOJwNtrb85pQZt0FQmHndZnOCafupHIn2S
 y9zh08NWGema4O6Aizuf6zFaT56IzgF+LFJJ4erpkuFr2DD+aJ2kP0J2Ok73fCNLrSBa6O6Ziw
 9bDii9LF1UfE71u2ykVRwjcbat1NuT7LFoDV2xWpupn4+3jeK/GQw30c3Byd+OF8G5D00DWD8+
 vqLUCjmmjyqQqE7AvfWP0iVGLIKybZp9TYNEhwXhthh0R1rvlQBLGOwL01T8trixc7SXYyqJ2A
 OdI=
X-SBRS: 2.7
X-MesageID: 23366191
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,406,1589256000"; d="scan'208";a="23366191"
Subject: Re: [PATCH 2/4] x86: reduce CET-SS related #ifdef-ary
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
 <xen-devel@lists.xenproject.org>
References: <58b9211a-f6dd-85da-d0bd-c927ac537a5d@suse.com>
 <58615a18-7f81-c000-d499-1a49f4752879@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <5abaf9e1-c7ba-a58c-d735-47430013eb65@citrix.com>
Date: Tue, 28 Jul 2020 15:29:46 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <58615a18-7f81-c000-d499-1a49f4752879@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 15/07/2020 11:48, Jan Beulich wrote:
> Commit b586a81b7a90 ("x86/CET: Fix build following c/s 43b98e7190") had
> to introduce a number of #ifdef-s to make the build work with older tool
> chains. Introduce an assembler macro covering for tool chains not
> knowing of CET-SS, allowing some conditionals where just SETSSBSY is the
> problem to be dropped again.
>
> No change to generated code.



>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> Now that I've done this I'm no longer sure which direction is better to
> follow: On one hand this introduces dead code (even if just NOPs) into
> CET-SS-disabled builds. Otoh this is a step towards breaking the tool
> chain version dependency of the feature.

The toolchain dependency can't be broken, because of incssp and wrss in C.

There is zero value, and added complexity, in trying to partially support
legacy toolchains.  Furthermore, this adds a pile of NOPs into builds
which have specifically opted out of CONFIG_XEN_SHSTK, which isn't ideal
for embedded use cases.

As a consequence, I think it's better to keep things consistent with how
they are now.

One thing I already considered was to make cpu_has_xen_shstk return
false for !CONFIG_XEN_SHSTK, which subsumes at least one hunk in this
change.

> --- a/xen/arch/x86/x86_64/compat/entry.S
> +++ b/xen/arch/x86/x86_64/compat/entry.S
> @@ -198,9 +198,7 @@ ENTRY(cr4_pv32_restore)
>  
>  /* See lstar_enter for entry register state. */
>  ENTRY(cstar_enter)
> -#ifdef CONFIG_XEN_SHSTK
>          ALTERNATIVE "", "setssbsy", X86_FEATURE_XEN_SHSTK
> -#endif

I can't currently think of any option better than leaving these ifdef's
in place, other than perhaps

#ifdef CONFIG_XEN_SHSTK
# define MAYBE_SETSSBSY ALTERNATIVE "", "setssbsy", X86_FEATURE_XEN_SHSTK
#else
# define MAYBE_SETSSBSY
#endif

and I don't like it much.

The thing is that everything present there is semantically relevant
information, and dropping it makes the code worse rather than better.

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue Jul 28 14:51:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jul 2020 14:51:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k0Qwp-0007Ek-Bj; Tue, 28 Jul 2020 14:51:19 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=K5Bo=BH=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1k0Qwo-0007Ec-CR
 for xen-devel@lists.xenproject.org; Tue, 28 Jul 2020 14:51:18 +0000
X-Inumbo-ID: c81c34d5-d0e1-11ea-a904-12813bfff9fa
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c81c34d5-d0e1-11ea-a904-12813bfff9fa;
 Tue, 28 Jul 2020 14:51:15 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595947875;
 h=subject:to:cc:references:from:message-id:date:
 mime-version:in-reply-to:content-transfer-encoding;
 bh=lv7DvAEAMGAuUxfBUUIinPORNQOh3EyicsTN4QUqkR8=;
 b=YyMTsLsXhjTUNaxqdSZ44Z2u0eF/+KO4mjPkTB5z/OmSsmbjYR+URdzK
 uJPmiruWtiwF+Xf3vTdtGmvyg+BN7QjCvDOyTOrYojZLppOsNWtueNjS+
 iULck9bYrcocbJo9R9X+Cma3dUCFXNzx7eERe4fX+7GKz8rZ6gmJuvOV+ E=;
Authentication-Results: esa3.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: cebk8yMFoerV6c2w4zkT4m0X7eegtucwket3KNbUL0/kr50xR0JwdhLCZD+PPifBH7BwUNUZ1T
 RtLsUrT6mGQZYC1kOyD9GxrVHePVy9pbHVdB0DFYaslkTR4hv7+yYERxpZV2MEU/64pe/v6Yc1
 34RtyDQe8bA8G3hLAOPrK1nVlQLEI1l60zFBxcvf/mDfU3qr0xaOdHK+Svejsrj/D0bFG6SDnh
 G5AgrmGnMh93SLi4txo+gTRX7oZjN4TM+PI0qh0aWKLGqQejU3l+ZxF7z+48Pixh6wxhN1qPXX
 Glg=
X-SBRS: 2.7
X-MesageID: 23349954
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,406,1589256000"; d="scan'208";a="23349954"
Subject: Re: [PATCH 3/4] x86: drop ASM_{CL,ST}AC
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
 <xen-devel@lists.xenproject.org>
References: <58b9211a-f6dd-85da-d0bd-c927ac537a5d@suse.com>
 <048c3702-f0b0-6f8e-341e-bec6cfaded27@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <07750e83-6b9d-a88d-856b-20db4f63fd11@citrix.com>
Date: Tue, 28 Jul 2020 15:51:09 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <048c3702-f0b0-6f8e-341e-bec6cfaded27@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 15/07/2020 11:49, Jan Beulich wrote:
> Use ALTERNATIVE directly, such that at the use sites it is visible that
> alternative code patching is in use. Similarly avoid hiding the fact in
> SAVE_ALL.
>
> No change to generated code.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Definitely +1 to not hiding the STAC/CLAC in SAVE_ALL.  I've been
meaning to undo that mistake for ages.

OOI, what made you change your mind?  I'm pleased that you have.

>
> --- a/xen/arch/x86/traps.c
> +++ b/xen/arch/x86/traps.c
> @@ -2165,9 +2165,9 @@ void activate_debugregs(const struct vcp
>  void asm_domain_crash_synchronous(unsigned long addr)
>  {
>      /*
> -     * We need clear AC bit here because in entry.S AC is set
> -     * by ASM_STAC to temporarily allow accesses to user pages
> -     * which is prevented by SMAP by default.
> +     * We need to clear AC bit here because in entry.S it gets set to
> +     * temporarily allow accesses to user pages which is prevented by
> +     * SMAP by default.

As you're adjusting the text, it should read "We need to clear the AC
bit ..."

But I also think it would be clearer to say that exception fixup may
leave user access enabled, which we fix up here by unconditionally
disabling user access.

Preferably with this rewritten, Reviewed-by: Andrew Cooper
<andrew.cooper3@citrix.com>


From xen-devel-bounces@lists.xenproject.org Tue Jul 28 15:18:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jul 2020 15:18:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k0RNA-0000h8-L4; Tue, 28 Jul 2020 15:18:32 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=E6Nk=BH=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1k0RN9-0000gi-Ff
 for xen-devel@lists.xenproject.org; Tue, 28 Jul 2020 15:18:31 +0000
X-Inumbo-ID: 92b9fa0c-d0e5-11ea-a907-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 92b9fa0c-d0e5-11ea-a907-12813bfff9fa;
 Tue, 28 Jul 2020 15:18:21 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=HZM9ZDt7X58nRaV4E2IrM4zOR4j7qI2kPQcLkUodxlY=; b=k0AzIQn3imyS1pG72zQrUY4Vt
 GwwkBjJGdPeLF6DtYdlJSGi0mVVaSVEKHAjY8PgWXSDT/HlQBd2AeQWVg9feBhm/4g0ndSvX+Nvpy
 FwREaDilICFKYyunCLSUN/KgP9FuqUe+XLcYtwTjFyguPcPz5z44JKxXx+ZWxyXrmNvAE=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1k0RMz-0002oG-3V; Tue, 28 Jul 2020 15:18:21 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1k0RMy-0007Md-K8; Tue, 28 Jul 2020 15:18:20 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1k0RMy-0003sz-Je; Tue, 28 Jul 2020 15:18:20 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-152241-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 152241: regressions - FAIL
X-Osstest-Failures: qemu-mainline:test-amd64-amd64-qemuu-nested-intel:xen-boot/l1:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-qemuu-nested-amd:xen-boot/l1:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:windows-install:fail:regression
 qemu-mainline:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: qemuu=9303ecb658a0194560d1eecde165a1511223c2d8
X-Osstest-Versions-That: qemuu=9e3903136d9acde2fb2dd9e967ba928050a6cb4a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 28 Jul 2020 15:18:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 152241 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/152241/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-nested-intel 14 xen-boot/l1       fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-qemuu-nested-amd 14 xen-boot/l1         fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-win7-amd64 10 windows-install  fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-ws16-amd64 10 windows-install  fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-ws16-amd64 10 windows-install   fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-win7-amd64 10 windows-install   fail REGR. vs. 151065

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-rtds     16 guest-start/debian.repeat    fail  like 151065
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 151065
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 151065
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                9303ecb658a0194560d1eecde165a1511223c2d8
baseline version:
 qemuu                9e3903136d9acde2fb2dd9e967ba928050a6cb4a

Last test of basis   151065  2020-06-12 22:27:51 Z   45 days
Failing since        151101  2020-06-14 08:32:51 Z   44 days   62 attempts
Testing same since   152241  2020-07-27 20:11:37 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Ahmed Karaman <ahmedkhaledkaraman@gmail.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.m.mail@gmail.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Richardson <Alexander.Richardson@cl.cam.ac.uk>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Boettcher <alexander.boettcher@genode-labs.com>
  Alexander Bulekov <alxndr@bu.edu>
  Alexander Duyck <alexander.h.duyck@linux.intel.com>
  Alexandre Mergnat <amergnat@baylibre.com>
  Alexey Kardashevskiy <aik@ozlabs.ru>
  Alexey Krasikov <alex-krasikov@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Allan Peramaki <aperamak@pp1.inet.fi>
  Andrew <andrew@daynix.com>
  Andrew Jones <drjones@redhat.com>
  Andrew Melnychenko <andrew@daynix.com>
  Ani Sinha <ani.sinha@nutanix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Antoine Damhet <antoine.damhet@blade-group.com>
  Ard Biesheuvel <ardb@kernel.org>
  Artyom Tarasenko <atar4qemu@gmail.com>
  Atish Patra <atish.patra@wdc.com>
  Aurelien Jarno <aurelien@aurel32.net>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Basil Salman <basil@daynix.com>
  Basil Salman <bsalman@redhat.com>
  Beata Michalska <beata.michalska@linaro.org>
  Bin Meng <bin.meng@windriver.com>
  Bin Meng <bmeng.cn@gmail.com>
  Cameron Esfahani <dirty@apple.com>
  Catherine A. Frederick <chocola@animebitch.es>
  Cathy Zhang <cathy.zhang@intel.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chenyi Qiang <chenyi.qiang@intel.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christophe de Dinechin <dinechin@redhat.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Cindy Lu <lulu@redhat.com>
  Claudio Fontana <cfontana@suse.de>
  Cleber Rosa <crosa@redhat.com>
  Colin Xu <colin.xu@intel.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniele Buono <dbuono@linux.vnet.ibm.com>
  David Carlier <devnexen@gmail.com>
  David Edmondson <david.edmondson@oracle.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Denis V. Lunev <den@openvz.org>
  Derek Su <dereksu@qnap.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Ed Robbins <E.J.C.Robbins@kent.ac.uk>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Emilio G. Cota <cota@braap.org>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  erik-smit <erik.lucas.smit@gmail.com>
  Evgeny Yakovlev <eyakovlev@virtuozzo.com>
  fangying <fangying1@huawei.com>
  Farhan Ali <alifm@linux.ibm.com>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frank Chang <frank.chang@sifive.com>
  Geoffrey McRae <geoff@hostfission.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Giuseppe Musacchio <thatlemon@gmail.com>
  Greg Kurz <groug@kaod.org>
  Guenter Roeck <linux@roeck-us.net>
  Gustavo Romero <gromero@linux.ibm.com>
  Halil Pasic <pasic@linux.ibm.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Howard Spoelstra <hsp.cat7@gmail.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Ian Jiang <ianjiang.ict@gmail.com>
  Igor Mammedov <imammedo@redhat.com>
  Jan Kiszka <jan.kiszka@siemens.com>
  Janne Grunau <j@jannau.net>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason Wang <jasowang@redhat.com>
  Jay Zhou <jianjay.zhou@huawei.com>
  Jean-Christophe Dubois <jcd@tribudubois.net>
  Jessica Clarke <jrtc27@jrtc27.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jingqi Liu <jingqi.liu@intel.com>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Joseph Myers <joseph@codesourcery.com>
  Josh Kunz <jkz@google.com>
  Juan Quintela <quintela@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Keqian Zhu <zhukeqian1@huawei.com>
  Kevin Wolf <kwolf@redhat.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  KONRAD Frederic <frederic.konrad@adacore.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Leif Lindholm <leif@nuviainc.com>
  Leonid Bloch <lb.workbox@gmail.com>
  Leonid Bloch <lbloch@janustech.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Qiang <liq3ea@gmail.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  lichun <lichun@ruijie.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  Like Xu <like.xu@linux.intel.com>
  Lingfeng Yang <lfy@google.com>
  Lingshan zhu <lingshan.zhu@intel.com>
  Liran Alon <liran.alon@oracle.com>
  Liu Yi L <yi.l.liu@intel.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Luc Michel <luc.michel@greensocs.com>
  Lukas Straub <lukasstraub2@web.de>
  Luwei Kang <luwei.kang@intel.com>
  Maciej S. Szmigiero <maciej.szmigiero@oracle.com>
  Magnus Damm <magnus.damm@gmail.com>
  Mao Zhongyi <maozhongyi@cmss.chinamobile.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marcelo Tosatti <mtosatti@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Mario Smarduch <msmarduch@digitalocean.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Masahiro Yamada <masahiroy@kernel.org>
  Matus Kysel <mkysel@tachyum.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Maxime Coquelin <maxime.coquelin@redhat.com>
  Menno Lageman <menno.lageman@oracle.com>
  Michael Rolnik <mrolnik@gmail.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michal Privoznik <mprivozn@redhat.com>
  Michele Denber <denber@mindspring.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Olaf Hering <olaf@aepfle.de>
  Pan Nengyuan <pannengyuan@huawei.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Radoslaw Biernacki <rad@semihalf.com>
  Raphael Norwitz <raphael.norwitz@nutanix.com>
  Reza Arbab <arbab@linux.ibm.com>
  Richard Henderson <richard.henderson@linaro.org>
  Richard W.M. Jones <rjones@redhat.com>
  Riku Voipio <riku.voipio@linaro.org>
  Robert Foley <robert.foley@linaro.org>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Roman Kagan <rkagan@virtuozzo.com>
  Roman Kagan <rvkagan@yandex-team.ru>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Sarah Harris <S.E.Harris@kent.ac.uk>
  Sebastian Rasmussen <sebras@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Weil <sw@weilnetz.de>
  Sven Schnelle <svens@stackframe.org>
  Tao Xu <tao3.xu@intel.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tiwei Bie <tiwei.bie@intel.com>
  Tong Ho <tong.ho@xilinx.com>
  Viktor Mihajlovski <mihajlov@linux.ibm.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  WangBowen <bowen.wang@intel.com>
  Wei Huang <wei.huang2@amd.com>
  Wei Wang <wei.w.wang@intel.com>
  Wentong Wu <wentong.wu@intel.com>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xie Yongji <xieyongji@bytedance.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Ying Fang <fangying1@huawei.com>
  Yoshinori Sato <ysato@users.sourceforge.jp>
  Yuri Benditovich <yuri.benditovich@daynix.com>
  Zhang Chen <chen.zhang@intel.com>
  Zheng Chuan <zhengchuan@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 32511 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Jul 28 15:53:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jul 2020 15:53:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k0Ruf-0003x6-I3; Tue, 28 Jul 2020 15:53:09 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Kvrz=BH=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1k0Rue-0003x1-21
 for xen-devel@lists.xenproject.org; Tue, 28 Jul 2020 15:53:08 +0000
X-Inumbo-ID: 6d6b52be-d0ea-11ea-a90f-12813bfff9fa
Received: from EUR01-DB5-obe.outbound.protection.outlook.com (unknown
 [40.107.15.82]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6d6b52be-d0ea-11ea-a90f-12813bfff9fa;
 Tue, 28 Jul 2020 15:53:06 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com; 
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=o/hjoToH+YF1XXX6PP8N1wKXPnL1QZ3k4NfiXQwytUU=;
 b=pZgwnxc0My7y7lmK3Li+1ZBv897RF6UtjV0WrTBDHpLeO/mo3Ww2hBdBb6FRAJU0u6UGQwRdT5bvuanwXiJrdgWYtv0Xw52b8/zWfMjZDEvXBkS1P7OJ0ychDrPR50KDGJkfsYGwYbRZBanlD9cyiTOatyg9dEG5yHaLRkLL8bo=
Received: from MR2P264CA0100.FRAP264.PROD.OUTLOOK.COM (2603:10a6:500:33::16)
 by DB7PR08MB3003.eurprd08.prod.outlook.com (2603:10a6:5:1b::14) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3216.26; Tue, 28 Jul
 2020 15:53:05 +0000
Received: from VE1EUR03FT039.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:500:33:cafe::84) by MR2P264CA0100.outlook.office365.com
 (2603:10a6:500:33::16) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3239.16 via Frontend
 Transport; Tue, 28 Jul 2020 15:53:04 +0000
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org;
 dmarc=bestguesspass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT039.mail.protection.outlook.com (10.152.19.196) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3216.10 via Frontend Transport; Tue, 28 Jul 2020 15:53:04 +0000
Received: ("Tessian outbound 2ae7cfbcc26c:v62");
 Tue, 28 Jul 2020 15:53:04 +0000
X-CheckRecipientChecked: true
X-CR-MTA-CID: 2e05cd15b33bd27f
X-CR-MTA-TID: 64aa7808
Received: from 3d316b07d0a4.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 AC60E24A-AB0F-491B-93CA-AD2E045D730F.1; 
 Tue, 28 Jul 2020 15:52:58 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 3d316b07d0a4.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Tue, 28 Jul 2020 15:52:58 +0000
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=M4O4bOX83tzITGwMajsxCcmLbK3s8srEhWFtj4rh8PpkahhXKlGz75c7YiKgSaCozyV8AdJoxTqgYyWmLnD9FvWTTWulb/uX6RSKu60HES3X1wdXBkUyPfTkN9/1NHn4Q8JoLlOObZDmF3mwIm80Is+pe1xQAGjWRCVPF5DA2hpa5EVMVRGvp7NGLqb7VlKNe0vJYFuHQN6l9mLhPMVHCEu5BVT3BRpBgRRlWuNaFxVMv4LUNf+MuUFcCc9GnxyVrKpJvDHclnlEzi7uwbTMLUJ2lyI9W3dQZJyYod/aPnPXxbAnBNhEAKAnR+eagiQEGb/FKyrK77aIcc8ax/9bzw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; 
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=o/hjoToH+YF1XXX6PP8N1wKXPnL1QZ3k4NfiXQwytUU=;
 b=gIgu3ZodWzfTzOsSQl1ZdQxglQTtISaUBEDwmF86o14WB29n0zv2LO+T+2lzgnjL1sX3UYt+LHTGd2mF3HJg5+UmvmHyw6GA1aC9XeH3bP0ZbrQxoSh8ZNKAGI/yl600TzCzIkzTwOugm1zZ2P/Vk5bZzUahwLTvWxhUqivi7of1V9mEvgdyl2mg+QFhl9hGx6sNakMOP2WwQIrT9M+YAsQzqeEnIjCSGHHEACHQBXGnj1WHudhW2WyZai1Hv2lj8RzQ8OpOIcnVMaa4iPCwDdtVP4NIgtKn7pgT5RFWqYGYLGmpSYo2cokvw87PMrBO70i+SPXRr5CVss8rEy4GJQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com; 
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=o/hjoToH+YF1XXX6PP8N1wKXPnL1QZ3k4NfiXQwytUU=;
 b=pZgwnxc0My7y7lmK3Li+1ZBv897RF6UtjV0WrTBDHpLeO/mo3Ww2hBdBb6FRAJU0u6UGQwRdT5bvuanwXiJrdgWYtv0Xw52b8/zWfMjZDEvXBkS1P7OJ0ychDrPR50KDGJkfsYGwYbRZBanlD9cyiTOatyg9dEG5yHaLRkLL8bo=
Authentication-Results-Original: lists.xenproject.org; dkim=none (message not
 signed) header.d=none; lists.xenproject.org;
 dmarc=none action=none header.from=arm.com;
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DB7PR08MB3692.eurprd08.prod.outlook.com (2603:10a6:10:30::28) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3216.22; Tue, 28 Jul
 2020 15:52:56 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::7c65:30f9:4e87:f58a]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::7c65:30f9:4e87:f58a%3]) with mapi id 15.20.3216.033; Tue, 28 Jul 2020
 15:52:56 +0000
From: Bertrand Marquis <bertrand.marquis@arm.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v2] xen/arm: Convert runstate address during hypercall
Date: Tue, 28 Jul 2020 16:52:41 +0100
Message-Id: <4647a019c7b42d40d3c2f5b0a3685954bea7f982.1595948219.git.bertrand.marquis@arm.com>
X-Mailer: git-send-email 2.17.1
Content-Type: text/plain
X-ClientProxiedBy: SN4PR0601CA0024.namprd06.prod.outlook.com
 (2603:10b6:803:2f::34) To DB7PR08MB3689.eurprd08.prod.outlook.com
 (2603:10a6:10:79::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
Received: from e109506-lin.cambridge.arm.com (217.140.106.53) by
 SN4PR0601CA0024.namprd06.prod.outlook.com (2603:10b6:803:2f::34) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3239.16 via Frontend
 Transport; Tue, 28 Jul 2020 15:52:53 +0000
X-Mailer: git-send-email 2.17.1
X-Originating-IP: [217.140.106.53]
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: e4a5cbd4-39e9-4bf1-b54b-08d8330e507a
X-MS-TrafficTypeDiagnostic: DB7PR08MB3692:|DB7PR08MB3003:
X-Microsoft-Antispam-PRVS: <DB7PR08MB30038E16EAE3347EE0F0C6479D730@DB7PR08MB3003.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
NoDisclaimer: true
X-MS-Oob-TLC-OOBClassifiers: OLM:8882;OLM:8882;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original: PpFJ+fpfHTUWIF3xCXDVZakyobW86hbW4V+3ge2c+kKAP/nUJLHqphQdDG5OJQFg77fd8h1cstucSVXoY40Wj4Bt/U+/eLvs4DTzb/6CNuIyeb2pp36XubZtqluu2X5J+jax8eiaBSXHE2Ahm09vgAaX80IEjkedeX8VekIKBcjk+j9niJ8Fcii1HrzNMGDa91fGA3D13ELE/NCfuXHPqfSAInLsH+gIKdQN6BChTOlrjiCl/PcANw3sdK7btTMCgMYUCL43zau+4td3BELnX4V7PEAWQI5Hhr1Emg29dEOIYAxDzhnywOv1A+aFXAd2yXaeMD6YwcVA0ZGlGe1UAYEr15QRkZx9y7dK/N11enlArr7pkYpjaDsec7MVoTuL
X-Forefront-Antispam-Report-Untrusted: CIP:255.255.255.255; CTRY:; LANG:en;
 SCL:1; SRV:; IPV:NLI; SFV:NSPM; H:DB7PR08MB3689.eurprd08.prod.outlook.com;
 PTR:; CAT:NONE; SFTY:;
 SFS:(4636009)(136003)(39860400002)(366004)(376002)(346002)(396003)(66476007)(66556008)(66946007)(54906003)(52116002)(7416002)(36756003)(8936002)(86362001)(30864003)(316002)(478600001)(8676002)(956004)(2616005)(7696005)(44832011)(6666004)(186003)(2906002)(4326008)(6916009)(5660300002)(6486002)(83380400001)(26005)(16526019)(136400200001);
 DIR:OUT; SFP:1101; 
X-MS-Exchange-AntiSpam-MessageData: Ed6bvtf1IIdmtx4E5desROcBiYMj64pMlQJZHQos8lCJf1Ql0N90+zdg+45v1/vr6uQHLGE8yqCacXrzDmBY3Djtd3/Q/w+McVXFiQCdS0aSCMRb+nt5h4NHK5+S4GveMLc7yzaupj2LjqjswergJVcHoGYdVC/9+uD3HKe7XmneNud+E0HGFzOaiU6nYT7RoptXt3KoszQh4+0rLOxmZulQ7+8Y53DsAy4BFP8kAbOW2RZQ8EGpitI9Mh3g5jJaT+HINJokpeH7lFjXTpAosJ5uk+yO1PoW+NY4HOiLjeZK5x5veKs6aza6l3S18Kex9X//lTqqKh9HjxVCHlGua+nOEmmtVoiYCkD7ZZDxUYb852Ue1yoM8ubi5/ZTtIzk7XriUED49NBPzJroGjUFmfR5clhhOec75nxow47Q7PCevlEGkkD7+0H18pqz/FoIWd9YliCMjOEfb7KbcYIxjCxKDejk02f0s8JPJ+VQ/80xN+qbj82+3zTx0zmVa6aV
X-MS-Exchange-Transport-Forked: True
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB7PR08MB3692
Original-Authentication-Results: lists.xenproject.org;
 dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=none action=none
 header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped: VE1EUR03FT039.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs: 9d4149af-3aa5-4eaf-3c72-08d8330e4b5d
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: CHkPbjKcS1m6eNZLdfowgS3l+kgv8MD/Nq7inE7TwwjPDwc5QgZraQ7jpbL9/B4UVnCNHJVCGjXPNzAcfMOpWq9UslEYiD8sKAI52eop0D1DIf8X0/vVuRYgGoDcOCILiJzxppIQsTlafT6olbFZGRWL7jNl9+oHzlOMTIo2nkldIzyRKDpmnSF+bqUbVj8nZJLIpuLlGRpm9mUwbfqlJ5pVITNThov1u96Ja1l5itf56uJFsfLm1phItjh5xEujpD9HRzAoBfhgLGeIlKF/pZoy2cyFiS2bDzPewe7yagsb5yJ3sNteo94JwrnveLP8WMb+q8PYbl1CEXBd0/RSsMXhbWqvmbzHmfRM7biYSGtKaLrD3sbs0yUY/ikhh2Ygqu6TlxJsr4+lboDnx00TsXQ4PNZ7JiUm17yUJpsmTohnFHJ2DGanGIFCAPTetWsT
X-Forefront-Antispam-Report: CIP:63.35.35.123; CTRY:IE; LANG:en; SCL:1; SRV:;
 IPV:CAL; SFV:NSPM; H:64aa7808-outbound-1.mta.getcheckrecipient.com;
 PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com; CAT:NONE; SFTY:;
 SFS:(4636009)(346002)(376002)(136003)(396003)(39860400002)(46966005)(336012)(26005)(16526019)(6666004)(70206006)(30864003)(7696005)(478600001)(2616005)(70586007)(6486002)(186003)(956004)(36756003)(44832011)(81166007)(2906002)(107886003)(82310400002)(86362001)(47076004)(5660300002)(6916009)(8936002)(316002)(36906005)(83380400001)(82740400003)(356005)(54906003)(4326008)(8676002)(136400200001);
 DIR:OUT; SFP:1101; 
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 28 Jul 2020 15:53:04.3594 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: e4a5cbd4-39e9-4bf1-b54b-08d8330e507a
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d; Ip=[63.35.35.123];
 Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource: VE1EUR03FT039.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB7PR08MB3003
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 nd@arm.com, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

At the moment on Arm, a Linux guest running with KPTI enabled will
cause the following error when a context switch happens in user mode:
(XEN) p2m.c:1890: d1v0: Failed to walk page-table va 0xffffff837ebe0cd0

The error occurs because, with KPTI enabled, the virtual address of the
runstate area registered by the guest is only accessible while the guest
is running in kernel space.

To solve this issue, this patch performs the translation from virtual
address to physical address during the hypercall and maps the required
pages using vmap. This removes the virtual-to-physical conversion from
the context switch, which solves the problem with KPTI.

This is done only on the Arm architecture; the behaviour on x86 is not
modified by this patch, and the address conversion is still done during
each context switch.

This introduces several limitations compared to the previous behaviour
(on Arm only):
- If the guest remaps the area at a different physical address, Xen will
continue to update the area at the previous physical address. As the
area is in kernel space and usually defined as a global variable, this
is not expected to happen. If a guest requires it, it will have to call
the hypercall again with the new area (even if it is at the same virtual
address).
- The area needs to be mapped during the hypercall, even if it is
registered for a different vcpu. For the same reason as in the previous
case, registering an area whose virtual address is unmapped is not
expected to be done.

Inline functions in headers could not be used because the architecture
domain.h is included before the common domain.h, making it impossible
to use struct vcpu inside the architecture header.
This should not have any performance impact as the hypercall is
usually only called once per vcpu.

Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>

---
  Changes in v2
    - use vmap to map the pages during the hypercall.
    - reintroduce initial copy during hypercall.

---
 xen/arch/arm/domain.c        | 160 +++++++++++++++++++++++++++++++----
 xen/arch/x86/domain.c        |  30 ++++++-
 xen/arch/x86/x86_64/domain.c |   4 +-
 xen/common/domain.c          |  19 ++---
 xen/include/asm-arm/domain.h |   9 ++
 xen/include/asm-x86/domain.h |  16 ++++
 xen/include/xen/domain.h     |   5 ++
 xen/include/xen/sched.h      |  16 +---
 8 files changed, 207 insertions(+), 52 deletions(-)

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 31169326b2..c595438bd9 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -19,6 +19,7 @@
 #include <xen/sched.h>
 #include <xen/softirq.h>
 #include <xen/wait.h>
+#include <xen/vmap.h>
 
 #include <asm/alternative.h>
 #include <asm/cpuerrata.h>
@@ -275,36 +276,157 @@ static void ctxt_switch_to(struct vcpu *n)
     virt_timer_restore(n);
 }
 
-/* Update per-VCPU guest runstate shared memory area (if registered). */
-static void update_runstate_area(struct vcpu *v)
+static void cleanup_runstate_vcpu_locked(struct vcpu *v)
+{
+    if ( v->arch.runstate_guest )
+    {
+        vunmap((void *)((unsigned long)v->arch.runstate_guest & PAGE_MASK));
+
+        put_page(v->arch.runstate_guest_page[0]);
+
+        if ( v->arch.runstate_guest_page[1] )
+        {
+            put_page(v->arch.runstate_guest_page[1]);
+        }
+        v->arch.runstate_guest = NULL;
+    }
+}
+
+void arch_vcpu_cleanup_runstate(struct vcpu *v)
 {
-    void __user *guest_handle = NULL;
+    spin_lock(&v->arch.runstate_guest_lock);
+
+    cleanup_runstate_vcpu_locked(v);
+
+    spin_unlock(&v->arch.runstate_guest_lock);
+}
+
+static int setup_runstate_vcpu_locked(struct vcpu *v, vaddr_t vaddr)
+{
+    unsigned int offset;
+    mfn_t mfn[2];
+    struct page_info *page;
+    unsigned int numpages;
     struct vcpu_runstate_info runstate;
+    void *p;
 
-    if ( guest_handle_is_null(runstate_guest(v)) )
-        return;
+    /* user can pass a NULL address to unregister a previous area */
+    if ( vaddr == 0 )
+        return 0;
 
-    memcpy(&runstate, &v->runstate, sizeof(runstate));
+    offset = vaddr & ~PAGE_MASK;
 
-    if ( VM_ASSIST(v->domain, runstate_update_flag) )
+    /* The provided address must be 64-bit aligned */
+    if ( offset % alignof(struct vcpu_runstate_info) )
     {
-        guest_handle = &v->runstate_guest.p->state_entry_time + 1;
-        guest_handle--;
-        runstate.state_entry_time |= XEN_RUNSTATE_UPDATE;
-        __raw_copy_to_guest(guest_handle,
-                            (void *)(&runstate.state_entry_time + 1) - 1, 1);
-        smp_wmb();
+        gprintk(XENLOG_WARNING, "Cannot map runstate pointer at 0x%lx: "
+                "Invalid offset\n", vaddr);
+        return -EINVAL;
+    }
+
+    page = get_page_from_gva(v, vaddr, GV2M_WRITE);
+    if ( !page )
+    {
+        gprintk(XENLOG_WARNING, "Cannot map runstate pointer at 0x%lx: "
+                "Page is not mapped\n", vaddr);
+        return -EINVAL;
+    }
+    mfn[0] = page_to_mfn(page);
+    v->arch.runstate_guest_page[0] = page;
+
+    if ( offset > (PAGE_SIZE - sizeof(struct vcpu_runstate_info)) )
+    {
+        /* guest area is crossing pages */
+        page = get_page_from_gva(v, vaddr + PAGE_SIZE, GV2M_WRITE);
+        if ( !page )
+        {
+            put_page(v->arch.runstate_guest_page[0]);
+            gprintk(XENLOG_WARNING, "Cannot map runstate pointer at 0x%lx: "
+                    "2nd Page is not mapped\n", vaddr);
+            return -EINVAL;
+        }
+        mfn[1] = page_to_mfn(page);
+        v->arch.runstate_guest_page[1] = page;
+        numpages = 2;
+    }
+    else
+    {
+        v->arch.runstate_guest_page[1] = NULL;
+        numpages = 1;
+    }
+
+    p = vmap(mfn, numpages);
+    if ( !p )
+    {
+        put_page(v->arch.runstate_guest_page[0]);
+        if ( numpages == 2 )
+            put_page(v->arch.runstate_guest_page[1]);
+
+        gprintk(XENLOG_WARNING, "Cannot map runstate pointer at 0x%lx: "
+                "vmap error\n", vaddr);
+        return -EINVAL;
     }
 
-    __copy_to_guest(runstate_guest(v), &runstate, 1);
+    v->arch.runstate_guest = p + offset;
 
-    if ( guest_handle )
+    if ( v == current )
     {
-        runstate.state_entry_time &= ~XEN_RUNSTATE_UPDATE;
+        memcpy(v->arch.runstate_guest, &v->runstate, sizeof(v->runstate));
+    }
+    else
+    {
+        vcpu_runstate_get(v, &runstate);
+        memcpy(v->arch.runstate_guest, &runstate, sizeof(v->runstate));
+    }
+
+    return 0;
+}
+
+int arch_vcpu_setup_runstate(struct vcpu *v,
+                             struct vcpu_register_runstate_memory_area area)
+{
+    int rc;
+
+    spin_lock(&v->arch.runstate_guest_lock);
+
+    /* cleanup if we are recalled */
+    cleanup_runstate_vcpu_locked(v);
+
+    rc = setup_runstate_vcpu_locked(v, (vaddr_t)area.addr.v);
+
+    spin_unlock(&v->arch.runstate_guest_lock);
+
+    return rc;
+}
+
+
+/* Update per-VCPU guest runstate shared memory area (if registered). */
+static void update_runstate_area(struct vcpu *v)
+{
+    spin_lock(&v->arch.runstate_guest_lock);
+
+    if ( v->arch.runstate_guest )
+    {
+        if ( VM_ASSIST(v->domain, runstate_update_flag) )
+        {
+            v->runstate.state_entry_time |= XEN_RUNSTATE_UPDATE;
+            v->arch.runstate_guest->state_entry_time |= XEN_RUNSTATE_UPDATE;
+            smp_wmb();
+        }
+
+        memcpy(v->arch.runstate_guest, &v->runstate, sizeof(v->runstate));
+
+        /* copy must be done before switching the bit */
         smp_wmb();
-        __raw_copy_to_guest(guest_handle,
-                            (void *)(&runstate.state_entry_time + 1) - 1, 1);
+
+        if ( VM_ASSIST(v->domain, runstate_update_flag) )
+        {
+            v->runstate.state_entry_time &= ~XEN_RUNSTATE_UPDATE;
+            v->arch.runstate_guest->state_entry_time &= ~XEN_RUNSTATE_UPDATE;
+        }
     }
+
+    spin_unlock(&v->arch.runstate_guest_lock);
 }
 
 static void schedule_tail(struct vcpu *prev)
@@ -560,6 +682,8 @@ int arch_vcpu_create(struct vcpu *v)
     v->arch.saved_context.sp = (register_t)v->arch.cpu_info;
     v->arch.saved_context.pc = (register_t)continue_new_vcpu;
 
+    spin_lock_init(&v->arch.runstate_guest_lock);
+
     /* Idle VCPUs don't need the rest of this setup */
     if ( is_idle_vcpu(v) )
         return rc;
diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index fee6c3931a..98910b7cf1 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -1642,6 +1642,30 @@ void paravirt_ctxt_switch_to(struct vcpu *v)
         wrmsr_tsc_aux(v->arch.msrs->tsc_aux);
 }
 
+int arch_vcpu_setup_runstate(struct vcpu *v,
+                             struct vcpu_register_runstate_memory_area area)
+{
+    struct vcpu_runstate_info runstate;
+
+    runstate_guest(v) = area.addr.h;
+
+    if ( v == current )
+    {
+        __copy_to_guest(runstate_guest(v), &v->runstate, 1);
+    }
+    else
+    {
+        vcpu_runstate_get(v, &runstate);
+        __copy_to_guest(runstate_guest(v), &runstate, 1);
+    }
+    return 0;
+}
+
+void arch_vcpu_cleanup_runstate(struct vcpu *v)
+{
+    set_xen_guest_handle(runstate_guest(v), NULL);
+}
+
 /* Update per-VCPU guest runstate shared memory area (if registered). */
 bool update_runstate_area(struct vcpu *v)
 {
@@ -1660,8 +1684,8 @@ bool update_runstate_area(struct vcpu *v)
     if ( VM_ASSIST(v->domain, runstate_update_flag) )
     {
         guest_handle = has_32bit_shinfo(v->domain)
-            ? &v->runstate_guest.compat.p->state_entry_time + 1
-            : &v->runstate_guest.native.p->state_entry_time + 1;
+            ? &v->arch.runstate_guest.compat.p->state_entry_time + 1
+            : &v->arch.runstate_guest.native.p->state_entry_time + 1;
         guest_handle--;
         runstate.state_entry_time |= XEN_RUNSTATE_UPDATE;
         __raw_copy_to_guest(guest_handle,
@@ -1674,7 +1698,7 @@ bool update_runstate_area(struct vcpu *v)
         struct compat_vcpu_runstate_info info;
 
         XLAT_vcpu_runstate_info(&info, &runstate);
-        __copy_to_guest(v->runstate_guest.compat, &info, 1);
+        __copy_to_guest(v->arch.runstate_guest.compat, &info, 1);
         rc = true;
     }
     else
diff --git a/xen/arch/x86/x86_64/domain.c b/xen/arch/x86/x86_64/domain.c
index c46dccc25a..b879e8dd2c 100644
--- a/xen/arch/x86/x86_64/domain.c
+++ b/xen/arch/x86/x86_64/domain.c
@@ -36,7 +36,7 @@ arch_compat_vcpu_op(
             break;
 
         rc = 0;
-        guest_from_compat_handle(v->runstate_guest.compat, area.addr.h);
+        guest_from_compat_handle(v->arch.runstate_guest.compat, area.addr.h);
 
         if ( v == current )
         {
@@ -49,7 +49,7 @@ arch_compat_vcpu_op(
             vcpu_runstate_get(v, &runstate);
             XLAT_vcpu_runstate_info(&info, &runstate);
         }
-        __copy_to_guest(v->runstate_guest.compat, &info, 1);
+        __copy_to_guest(v->arch.runstate_guest.compat, &info, 1);
 
         break;
     }
diff --git a/xen/common/domain.c b/xen/common/domain.c
index e9be05f1d0..cd8595b186 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -727,7 +727,10 @@ int domain_kill(struct domain *d)
         if ( cpupool_move_domain(d, cpupool0) )
             return -ERESTART;
         for_each_vcpu ( d, v )
+        {
+            arch_vcpu_cleanup_runstate(v);
             unmap_vcpu_info(v);
+        }
         d->is_dying = DOMDYING_dead;
         /* Mem event cleanup has to go here because the rings 
          * have to be put before we call put_domain. */
@@ -1167,7 +1170,7 @@ int domain_soft_reset(struct domain *d)
 
     for_each_vcpu ( d, v )
     {
-        set_xen_guest_handle(runstate_guest(v), NULL);
+        arch_vcpu_cleanup_runstate(v);
         unmap_vcpu_info(v);
     }
 
@@ -1494,7 +1497,6 @@ long do_vcpu_op(int cmd, unsigned int vcpuid, XEN_GUEST_HANDLE_PARAM(void) arg)
     case VCPUOP_register_runstate_memory_area:
     {
         struct vcpu_register_runstate_memory_area area;
-        struct vcpu_runstate_info runstate;
 
         rc = -EFAULT;
         if ( copy_from_guest(&area, arg, 1) )
@@ -1503,18 +1505,7 @@ long do_vcpu_op(int cmd, unsigned int vcpuid, XEN_GUEST_HANDLE_PARAM(void) arg)
         if ( !guest_handle_okay(area.addr.h, 1) )
             break;
 
-        rc = 0;
-        runstate_guest(v) = area.addr.h;
-
-        if ( v == current )
-        {
-            __copy_to_guest(runstate_guest(v), &v->runstate, 1);
-        }
-        else
-        {
-            vcpu_runstate_get(v, &runstate);
-            __copy_to_guest(runstate_guest(v), &runstate, 1);
-        }
+        rc = arch_vcpu_setup_runstate(v, area);
 
         break;
     }
diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
index 4e2f582006..9df4d10abb 100644
--- a/xen/include/asm-arm/domain.h
+++ b/xen/include/asm-arm/domain.h
@@ -206,6 +206,15 @@ struct arch_vcpu
      */
     bool need_flush_to_ram;
 
+    /* runstate guest lock */
+    spinlock_t runstate_guest_lock;
+
+    /* runstate guest info */
+    struct vcpu_runstate_info *runstate_guest;
+
+    /* runstate pages mapped for runstate_guest */
+    struct page_info *runstate_guest_page[2];
+
 }  __cacheline_aligned;
 
 void vcpu_show_execution_state(struct vcpu *);
diff --git a/xen/include/asm-x86/domain.h b/xen/include/asm-x86/domain.h
index 6fd94c2e14..c369d22ccc 100644
--- a/xen/include/asm-x86/domain.h
+++ b/xen/include/asm-x86/domain.h
@@ -11,6 +11,11 @@
 #include <asm/x86_emulate.h>
 #include <public/vcpu.h>
 #include <public/hvm/hvm_info_table.h>
+#ifdef CONFIG_COMPAT
+#include <compat/vcpu.h>
+DEFINE_XEN_GUEST_HANDLE(vcpu_runstate_info_compat_t);
+#endif
+
 
 #define has_32bit_shinfo(d)    ((d)->arch.has_32bit_shinfo)
 
@@ -627,6 +632,17 @@ struct arch_vcpu
     struct {
         bool next_interrupt_enabled;
     } monitor;
+
+#ifndef CONFIG_COMPAT
+# define runstate_guest(v) ((v)->arch.runstate_guest)
+    XEN_GUEST_HANDLE(vcpu_runstate_info_t) runstate_guest; /* guest address */
+#else
+# define runstate_guest(v) ((v)->arch.runstate_guest.native)
+    union {
+        XEN_GUEST_HANDLE(vcpu_runstate_info_t) native;
+        XEN_GUEST_HANDLE(vcpu_runstate_info_compat_t) compat;
+    } runstate_guest;
+#endif
 };
 
 struct guest_memory_policy
diff --git a/xen/include/xen/domain.h b/xen/include/xen/domain.h
index 7e51d361de..5e8cbba31d 100644
--- a/xen/include/xen/domain.h
+++ b/xen/include/xen/domain.h
@@ -5,6 +5,7 @@
 #include <xen/types.h>
 
 #include <public/xen.h>
+#include <public/vcpu.h>
 #include <asm/domain.h>
 #include <asm/numa.h>
 
@@ -63,6 +64,10 @@ void arch_vcpu_destroy(struct vcpu *v);
 int map_vcpu_info(struct vcpu *v, unsigned long gfn, unsigned offset);
 void unmap_vcpu_info(struct vcpu *v);
 
+int arch_vcpu_setup_runstate(struct vcpu *v,
+                             struct vcpu_register_runstate_memory_area area);
+void arch_vcpu_cleanup_runstate(struct vcpu *v);
+
 int arch_domain_create(struct domain *d,
                        struct xen_domctl_createdomain *config);
 
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index ac53519d7f..fac030fb83 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -29,11 +29,6 @@
 #include <public/vcpu.h>
 #include <public/event_channel.h>
 
-#ifdef CONFIG_COMPAT
-#include <compat/vcpu.h>
-DEFINE_XEN_GUEST_HANDLE(vcpu_runstate_info_compat_t);
-#endif
-
 /*
  * Stats
  *
@@ -166,16 +161,7 @@ struct vcpu
     struct sched_unit *sched_unit;
 
     struct vcpu_runstate_info runstate;
-#ifndef CONFIG_COMPAT
-# define runstate_guest(v) ((v)->runstate_guest)
-    XEN_GUEST_HANDLE(vcpu_runstate_info_t) runstate_guest; /* guest address */
-#else
-# define runstate_guest(v) ((v)->runstate_guest.native)
-    union {
-        XEN_GUEST_HANDLE(vcpu_runstate_info_t) native;
-        XEN_GUEST_HANDLE(vcpu_runstate_info_compat_t) compat;
-    } runstate_guest; /* guest address */
-#endif
+
     unsigned int     new_state;
 
     /* Has the FPU been initialised? */
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue Jul 28 15:54:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jul 2020 15:54:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k0Rvt-00040J-Tb; Tue, 28 Jul 2020 15:54:25 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=K5Bo=BH=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1k0Rvt-00040D-Lc
 for xen-devel@lists.xenproject.org; Tue, 28 Jul 2020 15:54:25 +0000
X-Inumbo-ID: 9b42dfcc-d0ea-11ea-8b89-bc764e2007e4
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9b42dfcc-d0ea-11ea-8b89-bc764e2007e4;
 Tue, 28 Jul 2020 15:54:24 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595951664;
 h=subject:to:cc:references:from:message-id:date:
 mime-version:in-reply-to:content-transfer-encoding;
 bh=3RqD1ZWNAXPqA1HlEKynr4F4GCWiwQSbeP3zx33qDi4=;
 b=b1GLf4BxfAhCUAlSyjQGDuAg1TKi/iawApxbZS8wQgGXC9SmJdYODCjg
 4yDqMZ73kPUar6ZBO56ONUBFWHo8jVZO8CNTd9h9MYzhV1R0BXAY9BKMm
 s8Vbm1RiT8eapTGr5U8BTkp39Ytrks3wTd6JJZWcGHQFZNh1kYY49aXTZ o=;
Authentication-Results: esa5.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: fNUJfxBsaAtfnz8WwdLmKRjJy4+x5tliyGcJyedIBFbzqOIKLKwIVZD9b285TXQU2XXIL5adQJ
 owlFaLNCsN0hZcHd2uXpPb9AQzZhGpnjCEbK5HtTxM+TY3ps8A14yxPAbqBQ/svf0CLb5BDSOR
 A3W+KSbgRpTDUe7Nd8BgsaGGyYuZ9Wo60anePXdIihv5QTPUS956Oa9LpOjmRkoM3LDQgWz+Do
 JiOye1H0/4exIvUVgiMWsmrqx6JrqN/3QAEn+8rYvhTygr+oyJXK/vTL3pmnpcTpInS26PqCv2
 hko=
X-SBRS: 2.7
X-MesageID: 23551922
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,406,1589256000"; d="scan'208";a="23551922"
Subject: Re: [PATCH] x86/hvm: Clean up track_dirty_vram() calltree
To: Jan Beulich <jbeulich@suse.com>
References: <20200722151548.4000-1-andrew.cooper3@citrix.com>
 <07ecb7dd-c823-0c6a-2bcd-7fc22471af7a@suse.com>
 <822f6c64-0e63-1199-63b0-f27449fd79c6@citrix.com>
 <635385e7-81f4-138a-f8ba-269a6d2c7ddb@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <9831be29-93e6-d7af-b42a-49cd6766dcc9@citrix.com>
Date: Tue, 28 Jul 2020 16:54:09 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <635385e7-81f4-138a-f8ba-269a6d2c7ddb@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Tim Deegan <tim@xen.org>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 23/07/2020 11:25, Jan Beulich wrote:
> On 23.07.2020 11:40, Andrew Cooper wrote:
>> On 22/07/2020 17:13, Jan Beulich wrote:
>>> On 22.07.2020 17:15, Andrew Cooper wrote:
>>>>  * Rename nr to nr_frames.  A plain 'nr' is confusing to follow in the
>>>>    lower levels.
>>>>  * Use DIV_ROUND_UP() rather than opencoding it in several different ways
>>>>  * The hypercall input is capped at uint32_t, so there is no need for
>>>>    nr_frames to be unsigned long in the lower levels.
>>>>
>>>> No functional change.
>>>>
>>>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>>> Reviewed-by: Jan Beulich <jbeulich@suse.com>
>>>
>>> I'd like to note though that ...
>>>
>>>> --- a/xen/arch/x86/mm/hap/hap.c
>>>> +++ b/xen/arch/x86/mm/hap/hap.c
>>>> @@ -58,16 +58,16 @@
>>>>  
>>>>  int hap_track_dirty_vram(struct domain *d,
>>>>                           unsigned long begin_pfn,
>>>> -                         unsigned long nr,
>>>> +                         unsigned int nr_frames,
>>>>                           XEN_GUEST_HANDLE(void) guest_dirty_bitmap)
>>>>  {
>>>>      long rc = 0;
>>>>      struct sh_dirty_vram *dirty_vram;
>>>>      uint8_t *dirty_bitmap = NULL;
>>>>  
>>>> -    if ( nr )
>>>> +    if ( nr_frames )
>>>>      {
>>>> -        int size = (nr + BITS_PER_BYTE - 1) / BITS_PER_BYTE;
>>>> +        unsigned int size = DIV_ROUND_UP(nr_frames, BITS_PER_BYTE);
>>> ... with the change from long to int this construct will now no
>>> longer be correct for the (admittedly absurd) case of a hypercall
>>> input in the range of [0xfffffff9,0xffffffff]. We now fully
>>> depend on this getting properly rejected at the top level hypercall
>>> handler (which limits to 1Gb worth of tracked space).
>> I don't see how this makes any difference at all.
>>
>> Exactly the same would be true in the old code for an input in the range
>> [0xfffffffffffffff9,0xffffffffffffffff], where the aspect which protects
>> you is the fact that the hypercall ABI truncates to 32 bits.
> Exactly: The hypercall ABI won't change. The GB(1) check up the call
> tree may go away, without the then arising issue being noticed.

The ABI is equally as likely to change as the 1G limit.  Either both
issues will be fixed (almost certainly together), or neither will ever
change.

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue Jul 28 16:48:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jul 2020 16:48:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k0SmI-0000Ou-Vz; Tue, 28 Jul 2020 16:48:34 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=TSwU=BH=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1k0SmH-0000Op-L5
 for xen-devel@lists.xenproject.org; Tue, 28 Jul 2020 16:48:33 +0000
X-Inumbo-ID: 2be86eaa-d0f2-11ea-8ba1-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2be86eaa-d0f2-11ea-8ba1-bc764e2007e4;
 Tue, 28 Jul 2020 16:48:32 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=W/q3EC2xp/1uSlfzKWEodtbt8+Hkh9fk7Ot5gVkNxDI=; b=0EjygVLrDEeM7fxzEKp2ZCZfyW
 cCJP1O+If5bhP4xNNzbCXDP+PxlwyNtosVREbzqftCtJMDJxbBiLcvc4YspOaTvnKcrrHu2hK0nTJ
 1oEy0V5ch4ZjpSEb84qfhoA5KsAiF8/udmsoizbh1NJwVVwacNWHhuOak4ho4Awlprt8=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1k0SmB-00059l-9Z; Tue, 28 Jul 2020 16:48:27 +0000
Received: from 54-240-197-239.amazon.com ([54.240.197.239]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1k0SmA-0007yB-V2; Tue, 28 Jul 2020 16:48:27 +0000
Subject: Re: [PATCH v3 4/4] xen: add helpers to allocate unpopulated memory
To: Roger Pau Monne <roger.pau@citrix.com>, linux-kernel@vger.kernel.org
References: <20200727091342.52325-1-roger.pau@citrix.com>
 <20200727091342.52325-5-roger.pau@citrix.com>
From: Julien Grall <julien@xen.org>
Message-ID: <b5460659-88a5-c2aa-c339-815d5618bcb5@xen.org>
Date: Tue, 28 Jul 2020 17:48:23 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:68.0)
 Gecko/20100101 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20200727091342.52325-5-roger.pau@citrix.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>,
 David Airlie <airlied@linux.ie>, Yan Yankovskyi <yyankovskyi@gmail.com>,
 David Hildenbrand <david@redhat.com>, dri-devel@lists.freedesktop.org,
 Michal Hocko <mhocko@kernel.org>, linux-mm@kvack.org,
 Daniel Vetter <daniel@ffwll.ch>, xen-devel@lists.xenproject.org,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Dan Williams <dan.j.williams@intel.com>,
 Dan Carpenter <dan.carpenter@oracle.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi,

On 27/07/2020 10:13, Roger Pau Monne wrote:
> To be used in order to create foreign mappings. This is based on the
> ZONE_DEVICE facility which is used by persistent memory devices in
> order to create struct pages and kernel virtual mappings for the IOMEM
> areas of such devices. Note that on kernels without support for
> ZONE_DEVICE Xen will fallback to use ballooned pages in order to
> create foreign mappings.
> 
> The newly added helpers use the same parameters as the existing
> {alloc/free}_xenballooned_pages functions, which allows for in-place
> replacement of the callers. Once a memory region has been added to be
> used as scratch mapping space it will no longer be released, and pages
> returned are kept in a linked list. This allows to have a buffer of
> pages and prevents resorting to frequent additions and removals of
> regions.
> 
> If enabled (because ZONE_DEVICE is supported) the usage of the new
> functionality untangles Xen balloon and RAM hotplug from the usage of
> unpopulated physical memory ranges to map foreign pages, which is the
> correct thing to do in order to avoid mappings of foreign pages depend
> on memory hotplug.
I think this is going to break Dom0 on Arm if the kernel has been built 
with hotplug. This is because you may end up re-using a region that will 
be used for the 1:1 mapping of a foreign map.

Note that I don't know whether hotplug has been tested on Xen on Arm 
yet, so it may already be broken.

Meanwhile, my suggestion would be to make the use of hotplug in the 
balloon code conditional (maybe using CONFIG_ARM64 and CONFIG_ARM)?

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Jul 28 16:57:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jul 2020 16:57:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k0SuJ-0001GS-Ra; Tue, 28 Jul 2020 16:56:51 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=TSwU=BH=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1k0SuI-0001GN-UP
 for xen-devel@lists.xenproject.org; Tue, 28 Jul 2020 16:56:50 +0000
X-Inumbo-ID: 53fa5c9a-d0f3-11ea-a91b-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 53fa5c9a-d0f3-11ea-a91b-12813bfff9fa;
 Tue, 28 Jul 2020 16:56:49 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:To:Subject:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=Ne/5f8Dz1USlmrgYJGbkJHI1b0murad2BfKI+443u80=; b=4mxGsr4g8PgshJvwRUqY4v41Rn
 vtqGxEUykkes1fa/mEJ+5CksVi/mwLFb9sgcA38ZVZqlFmLUPeo9rdByOAImeMwliEw1ar/ZV48Cz
 1TfekSsNNetD/kHvz4WjWI8EjivFN2iaJAEXhxo//xXgNQQk5mf/r0XBLezc3eygaHDY=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1k0SuG-0005KQ-MG; Tue, 28 Jul 2020 16:56:48 +0000
Received: from 54-240-197-239.amazon.com ([54.240.197.239]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1k0SuG-0003el-EZ; Tue, 28 Jul 2020 16:56:48 +0000
Subject: Re: Porting Xen to Jetson Nano
To: Srinivas Bangalore <srini@yujala.com>, xen-devel@lists.xenproject.org,
 'Christopher Clark' <christopher.w.clark@gmail.com>,
 'Stefano Stabellini' <sstabellini@kernel.org>
References: <002801d66051$90fe2300$b2fa6900$@yujala.com>
 <9736680b-1c81-652b-552b-4103341bad50@xen.org>
 <000001d661cb$45cdaa10$d168fe30$@yujala.com>
 <5f985a6a-1bd6-9e68-f35f-b0b665688cee@xen.org>
 <002901d66462$a1dff530$e59fdf90$@yujala.com>
From: Julien Grall <julien@xen.org>
Message-ID: <f8de3b17-d8bd-884d-a37f-6e6d58bcab8c@xen.org>
Date: Tue, 28 Jul 2020 17:56:46 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:68.0)
 Gecko/20100101 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <002901d66462$a1dff530$e59fdf90$@yujala.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>



On 27/07/2020 23:09, Srinivas Bangalore wrote:
> Hi,
> 
> 
> On 24/07/2020 16:01, Srinivas Bangalore wrote:
>> Hi Julien,
> 
> Hello,
> 
>>
>> Thanks for the tips. Comments inline...
> 
> I struggled to find your comment inline as your e-mail client doesn't quote
> my answer. Please configure your e-mail client to use some form of quoting
> (the usual is '>').
> 
> [<SB>] Done! Sorry about that.

Thanks, this is a good start. Unfortunately, it doesn't fully help when 
a reply is split across multiple lines. This becomes more prominent 
after a few back-and-forths. Which e-mail client are you using?

> [<SB>] OK, I started porting the patch series to 4.14, but it is definitely
> not straightforward ;) Will take some time to do this. BTW, I was looking at
> xen/arch/arm/Rules.mk in 4.14 and it is blank. The previous releases had
> some board-specific stuff in this file - esp the EARLY_PRINTK definitions.
> Has this changed in 4.14?

earlyprintk can now be configured via Kconfig. This should be easier, 
as you configure it the same way as you would any other option.
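As a hedged illustration only, a Kconfig fragment for an Arm board might
look like the one below. The option names are taken from my reading of
xen/arch/arm/Kconfig.debug, and the UART base address is an assumption
for the Jetson Nano's Tegra 8250-compatible UART; verify both against
your tree and device tree before use:

```
# Hypothetical .config fragment for Xen on Arm early printk.
# Option names and the 0x70006000 base address are assumptions;
# check xen/arch/arm/Kconfig.debug and the board's device tree.
CONFIG_DEBUG=y
CONFIG_EARLY_PRINTK=y
CONFIG_EARLY_UART_8250=y
CONFIG_EARLY_UART_BASE_ADDRESS=0x70006000
CONFIG_EARLY_UART_8250_REG_SHIFT=2
```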

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Jul 28 16:59:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jul 2020 16:59:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k0Sws-0001OH-96; Tue, 28 Jul 2020 16:59:30 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ZWt7=BH=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1k0Swr-0001OC-02
 for xen-devel@lists.xenproject.org; Tue, 28 Jul 2020 16:59:29 +0000
X-Inumbo-ID: b23958f7-d0f3-11ea-a91b-12813bfff9fa
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b23958f7-d0f3-11ea-a91b-12813bfff9fa;
 Tue, 28 Jul 2020 16:59:28 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595955569;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:in-reply-to;
 bh=tRrG2kIPRmHZl5KYjBZDnpKOe+a1KAifCS6TcraQCvw=;
 b=bsvj2kZHsIsV/VdDQu9Ewf3LB7Ze6qu+aPDlb0qSu42JejEVQH7EPLQZ
 1InNm2Hr85rqkI/aHquXZhV3Sxou5DSCjiracnmcNbDJ/FPkGNiS7a5TU
 DICTBnGAviWnA380nELIgDpLk51v/BCsX+fwnMrimFeKzKFsLk+0sK2ev w=;
Authentication-Results: esa1.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: ek+XUNwRJ4Q5p4iyKzYjQH2lwOuZGVQ6Lx+zstg9NwpzzjRk2zB+v1/h9r6kuVVwRRNSCQGoXa
 D8++MW18YDeKCEmXhrvpH1ZyV0JbQvvIgpmIJERE5rrUKGp9cmDKMmPTr4piKhlvofOfDIfJYe
 2rK+680bc67sJoyZee3FTviqXiwfbr0qq0/CqTZ5Z0mVPLSMrngigHlxQVYUtTkhCySNVkupEs
 zSdbmNe7SgW5IpviGcOeDar3EuhU7jdXOS/8qypW3eFImgTAyx5Vym8iTQCHVMYS3dHYj3I2H8
 GeI=
X-SBRS: 2.7
X-MesageID: 23705708
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,406,1589256000"; d="scan'208";a="23705708"
Date: Tue, 28 Jul 2020 18:59:19 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Julien Grall <julien@xen.org>
Subject: Re: [PATCH v3 4/4] xen: add helpers to allocate unpopulated memory
Message-ID: <20200728165919.GA7191@Air-de-Roger>
References: <20200727091342.52325-1-roger.pau@citrix.com>
 <20200727091342.52325-5-roger.pau@citrix.com>
 <b5460659-88a5-c2aa-c339-815d5618bcb5@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
In-Reply-To: <b5460659-88a5-c2aa-c339-815d5618bcb5@xen.org>
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>, Stefano
 Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>, Oleksandr
 Andrushchenko <oleksandr_andrushchenko@epam.com>,
 David Airlie <airlied@linux.ie>, Yan Yankovskyi <yyankovskyi@gmail.com>,
 David Hildenbrand <david@redhat.com>, linux-kernel@vger.kernel.org,
 dri-devel@lists.freedesktop.org, Michal Hocko <mhocko@kernel.org>,
 linux-mm@kvack.org, Daniel Vetter <daniel@ffwll.ch>,
 xen-devel@lists.xenproject.org, Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Dan Williams <dan.j.williams@intel.com>, Dan
 Carpenter <dan.carpenter@oracle.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Tue, Jul 28, 2020 at 05:48:23PM +0100, Julien Grall wrote:
> Hi,
> 
> On 27/07/2020 10:13, Roger Pau Monne wrote:
> > To be used in order to create foreign mappings. This is based on the
> > ZONE_DEVICE facility which is used by persistent memory devices in
> > order to create struct pages and kernel virtual mappings for the IOMEM
> > areas of such devices. Note that on kernels without support for
> > ZONE_DEVICE Xen will fallback to use ballooned pages in order to
> > create foreign mappings.
> > 
> > The newly added helpers use the same parameters as the existing
> > {alloc/free}_xenballooned_pages functions, which allows for in-place
> > replacement of the callers. Once a memory region has been added to be
> > used as scratch mapping space it will no longer be released, and pages
> > returned are kept in a linked list. This allows to have a buffer of
> > pages and prevents resorting to frequent additions and removals of
> > regions.
> > 
> > If enabled (because ZONE_DEVICE is supported) the usage of the new
> > functionality untangles Xen balloon and RAM hotplug from the usage of
> > unpopulated physical memory ranges to map foreign pages, which is the
> > correct thing to do in order to avoid mappings of foreign pages depend
> > on memory hotplug.
> I think this is going to break Dom0 on Arm if the kernel has been built with
> hotplug. This is because you may end up to re-use region that will be used
> for the 1:1 mapping of a foreign map.
> 
> Note that I don't know whether hotplug has been tested on Xen on Arm yet. So
> it might be possible to be already broken.
> 
> Meanwhile, my suggestion would be to make the use of hotplug in the balloon
> code conditional (maybe using CONFIG_ARM64 and CONFIG_ARM)?

Right, this feature (allocation of unpopulated memory separated from
the balloon driver) is currently gated on CONFIG_ZONE_DEVICE, which I
think could be used on Arm.

IMO the right solution seems to be to subtract the physical memory
regions that can be used for the identity mappings of foreign pages
(all RAM on the system AFAICT) from iomem_resource, as that would make
this and the memory hotplug done in the balloon driver safe?
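A sketch of what such subtraction might look like on the Linux side,
under the assumption that Dom0 can enumerate the relevant RAM ranges
(e.g. from memblock); this is not a complete or tested patch, and the
helper name and range bounds are hypothetical:

```
/* Sketch only. Marks a RAM range busy in iomem_resource so that
 * allocate_resource() cannot hand it out as scratch space for
 * foreign mappings. 'start' and 'end' are hypothetical bounds
 * obtained from walking the known RAM ranges. */
static int __init xen_reserve_identity_range(resource_size_t start,
                                             resource_size_t end)
{
    struct resource *res = kzalloc(sizeof(*res), GFP_KERNEL);

    if (!res)
        return -ENOMEM;

    res->name  = "Xen identity-mappable RAM";
    res->start = start;
    res->end   = end;
    res->flags = IORESOURCE_MEM | IORESOURCE_BUSY;

    /* Claiming the range under iomem_resource excludes it from
     * future unpopulated-memory and hotplug allocations. */
    return insert_resource(&iomem_resource, res);
}
```

Whether claiming all RAM this way is acceptable (it shadows every other
user of iomem_resource) is exactly the open question in this thread.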

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue Jul 28 17:06:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jul 2020 17:06:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k0T3i-0002GT-1Z; Tue, 28 Jul 2020 17:06:34 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=K5Bo=BH=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1k0T3g-0002GO-Ly
 for xen-devel@lists.xenproject.org; Tue, 28 Jul 2020 17:06:32 +0000
X-Inumbo-ID: ae8cae50-d0f4-11ea-8ba1-bc764e2007e4
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ae8cae50-d0f4-11ea-8ba1-bc764e2007e4;
 Tue, 28 Jul 2020 17:06:31 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595955991;
 h=subject:to:cc:references:from:message-id:date:
 mime-version:in-reply-to:content-transfer-encoding;
 bh=NhQlpfgEzw5uIL26d7//uQgtQ8+lTO6nNMY87duUokQ=;
 b=O8Dr5+wflPoHgMNwSG90Lkw7PM5DuRN7DSm/wfSus+sfwGGQ8VnhCxFJ
 LGZDWWAx0qEdA2lsYcePWzpQ7m4oOMqHHwMBCjsZOkuhOa/aQqBvCu9eN
 gAY/ttoVvjtTnNswb4DIHtVbycRPlVcEV+VEJLRPp+Hnyzc2wl46VHTwQ Q=;
Authentication-Results: esa6.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: Zv07BbGuPhsiZuK1RAL+MXQS7+DkALPbSOLVSZU6zFhcWsNovWd2ezPA0u0r1ydAx+ftO+5rIq
 NYZIB7Ir83eaSN5NLUnEQUwpkSM93rbTHf5FdDjDIWKAeHW5XlIoRUumb0e9wguAvIvp4K7b+H
 wj1PKHL59HxgJHNABW9nSYRmjho8ubni3kkx1ZFgHlasMw2iQH4TLZsXntv6C0nojnpI0osC1H
 P4GfMlvA6ODgHDaodR3qUiDDbVxaEHvhgheUl2K9TujfOtvBGncUzSfPj+3oWbSEcATqj0RXdv
 ExM=
X-SBRS: 2.7
X-MesageID: 23700460
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,406,1589256000"; d="scan'208";a="23700460"
Subject: Re: [PATCH v3 4/4] xen: add helpers to allocate unpopulated memory
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Julien Grall
 <julien@xen.org>
References: <20200727091342.52325-1-roger.pau@citrix.com>
 <20200727091342.52325-5-roger.pau@citrix.com>
 <b5460659-88a5-c2aa-c339-815d5618bcb5@xen.org>
 <20200728165919.GA7191@Air-de-Roger>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <cb1790b3-2ad0-2c1b-a632-e4fea4b6bcfa@citrix.com>
Date: Tue, 28 Jul 2020 18:06:25 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20200728165919.GA7191@Air-de-Roger>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>,
 David Airlie <airlied@linux.ie>, Yan
 Yankovskyi <yyankovskyi@gmail.com>, David Hildenbrand <david@redhat.com>,
 linux-kernel@vger.kernel.org, dri-devel@lists.freedesktop.org, Michal
 Hocko <mhocko@kernel.org>, linux-mm@kvack.org, Daniel Vetter <daniel@ffwll.ch>,
 xen-devel@lists.xenproject.org, Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Dan Williams <dan.j.williams@intel.com>, Dan
 Carpenter <dan.carpenter@oracle.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 28/07/2020 17:59, Roger Pau Monné wrote:
> On Tue, Jul 28, 2020 at 05:48:23PM +0100, Julien Grall wrote:
>> Hi,
>>
>> On 27/07/2020 10:13, Roger Pau Monne wrote:
>>> To be used in order to create foreign mappings. This is based on the
>>> ZONE_DEVICE facility which is used by persistent memory devices in
>>> order to create struct pages and kernel virtual mappings for the IOMEM
>>> areas of such devices. Note that on kernels without support for
>>> ZONE_DEVICE Xen will fallback to use ballooned pages in order to
>>> create foreign mappings.
>>>
>>> The newly added helpers use the same parameters as the existing
>>> {alloc/free}_xenballooned_pages functions, which allows for in-place
>>> replacement of the callers. Once a memory region has been added to be
>>> used as scratch mapping space it will no longer be released, and pages
>>> returned are kept in a linked list. This allows to have a buffer of
>>> pages and prevents resorting to frequent additions and removals of
>>> regions.
>>>
>>> If enabled (because ZONE_DEVICE is supported) the usage of the new
>>> functionality untangles Xen balloon and RAM hotplug from the usage of
>>> unpopulated physical memory ranges to map foreign pages, which is the
>>> correct thing to do in order to avoid mappings of foreign pages depend
>>> on memory hotplug.
>> I think this is going to break Dom0 on Arm if the kernel has been built with
>> hotplug. This is because you may end up to re-use region that will be used
>> for the 1:1 mapping of a foreign map.
>>
>> Note that I don't know whether hotplug has been tested on Xen on Arm yet. So
>> it might be possible to be already broken.
>>
>> Meanwhile, my suggestion would be to make the use of hotplug in the balloon
>> code conditional (maybe using CONFIG_ARM64 and CONFIG_ARM)?
> Right, this feature (allocation of unpopulated memory separated from
> the balloon driver) is currently gated on CONFIG_ZONE_DEVICE, which I
> think could be used on Arm.
>
> IMO the right solution seems to be to subtract the physical memory
> regions that can be used for the identity mappings of foreign pages
> (all RAM on the system AFAICT) from iomem_resource, as that would make
> this and the memory hotplug done in the balloon driver safe?

The right solution is a mechanism for translated guests to query Xen to
find regions of guest physical address space which are unused, and can
safely be used for foreign/grant/other mappings.

Please don't waste any more time applying more duct tape to a broken
system, and instead spend the time organising some proper foundations.

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue Jul 28 17:13:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jul 2020 17:13:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k0T9u-00036U-Mn; Tue, 28 Jul 2020 17:12:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=TSwU=BH=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1k0T9t-00036P-DW
 for xen-devel@lists.xenproject.org; Tue, 28 Jul 2020 17:12:57 +0000
X-Inumbo-ID: 947c6acc-d0f5-11ea-8ba3-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 947c6acc-d0f5-11ea-8ba3-bc764e2007e4;
 Tue, 28 Jul 2020 17:12:56 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=gAxUDflyYD2Qz0uZDypM7APCGN6LsjUhjSUSWoYLPcY=; b=U8RJ5NSwZQ8iU78q1Zh5y2VxyZ
 KLqWvDt3ijkmQpa/cyxyk4D4lH1X6HTzXUZ8buyclAQa4oFopGO7GizuG8UlZlYXhKf06aO3+jWxW
 L5i2sFLebxgLmSaQiTOm/ba8eK8Vj7kcWqyUDl3TI1/IWRsEE2Nm52OaedKFXHgUc09g=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1k0T9m-0005gL-AT; Tue, 28 Jul 2020 17:12:50 +0000
Received: from 54-240-197-239.amazon.com ([54.240.197.239]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1k0T9l-0004gC-Us; Tue, 28 Jul 2020 17:12:50 +0000
Subject: Re: [PATCH v3 4/4] xen: add helpers to allocate unpopulated memory
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <20200727091342.52325-1-roger.pau@citrix.com>
 <20200727091342.52325-5-roger.pau@citrix.com>
 <b5460659-88a5-c2aa-c339-815d5618bcb5@xen.org>
 <20200728165919.GA7191@Air-de-Roger>
From: Julien Grall <julien@xen.org>
Message-ID: <b1732413-0bd0-6f58-6324-37497347ce5b@xen.org>
Date: Tue, 28 Jul 2020 18:12:46 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:68.0)
 Gecko/20100101 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20200728165919.GA7191@Air-de-Roger>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>,
 David Airlie <airlied@linux.ie>, Yan Yankovskyi <yyankovskyi@gmail.com>,
 David Hildenbrand <david@redhat.com>, linux-kernel@vger.kernel.org,
 dri-devel@lists.freedesktop.org, Michal Hocko <mhocko@kernel.org>,
 linux-mm@kvack.org, Daniel Vetter <daniel@ffwll.ch>,
 xen-devel@lists.xenproject.org, Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Dan Williams <dan.j.williams@intel.com>,
 Dan Carpenter <dan.carpenter@oracle.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi Roger,

On 28/07/2020 17:59, Roger Pau Monné wrote:
> On Tue, Jul 28, 2020 at 05:48:23PM +0100, Julien Grall wrote:
>> Hi,
>>
>> On 27/07/2020 10:13, Roger Pau Monne wrote:
>>> To be used in order to create foreign mappings. This is based on the
>>> ZONE_DEVICE facility which is used by persistent memory devices in
>>> order to create struct pages and kernel virtual mappings for the IOMEM
>>> areas of such devices. Note that on kernels without support for
>>> ZONE_DEVICE Xen will fallback to use ballooned pages in order to
>>> create foreign mappings.
>>>
>>> The newly added helpers use the same parameters as the existing
>>> {alloc/free}_xenballooned_pages functions, which allows for in-place
>>> replacement of the callers. Once a memory region has been added to be
>>> used as scratch mapping space it will no longer be released, and pages
>>> returned are kept in a linked list. This allows to have a buffer of
>>> pages and prevents resorting to frequent additions and removals of
>>> regions.
>>>
>>> If enabled (because ZONE_DEVICE is supported) the usage of the new
>>> functionality untangles Xen balloon and RAM hotplug from the usage of
>>> unpopulated physical memory ranges to map foreign pages, which is the
>>> correct thing to do in order to avoid mappings of foreign pages depend
>>> on memory hotplug.
>> I think this is going to break Dom0 on Arm if the kernel has been built with
>> hotplug. This is because you may end up to re-use region that will be used
>> for the 1:1 mapping of a foreign map.
>>
>> Note that I don't know whether hotplug has been tested on Xen on Arm yet. So
>> it might be possible to be already broken.
>>
>> Meanwhile, my suggestion would be to make the use of hotplug in the balloon
>> code conditional (maybe using CONFIG_ARM64 and CONFIG_ARM)?
> 
> Right, this feature (allocation of unpopulated memory separated from
> the balloon driver) is currently gated on CONFIG_ZONE_DEVICE, which I
> think could be used on Arm.
> 
> IMO the right solution seems to be to subtract the physical memory
> regions that can be used for the identity mappings of foreign pages
> (all RAM on the system AFAICT) from iomem_resource, as that would make
> this and the memory hotplug done in the balloon driver safe?

Dom0 doesn't know the regions used for the identity mappings as this is 
only managed by Xen. So there is nothing you can really do here.

But don't you have the same issue on x86 with "magic pages"?

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Jul 28 17:18:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jul 2020 17:18:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k0TF4-0003IT-Bb; Tue, 28 Jul 2020 17:18:18 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=UTyt=BH=yujala.com=srini@srs-us1.protection.inumbo.net>)
 id 1k0TF3-0003IO-6X
 for xen-devel@lists.xenproject.org; Tue, 28 Jul 2020 17:18:17 +0000
X-Inumbo-ID: 52a5df10-d0f6-11ea-a91d-12813bfff9fa
Received: from gproxy8-pub.mail.unifiedlayer.com (unknown [67.222.33.93])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 52a5df10-d0f6-11ea-a91d-12813bfff9fa;
 Tue, 28 Jul 2020 17:18:15 +0000 (UTC)
Received: from cmgw12.unifiedlayer.com (unknown [10.9.0.12])
 by gproxy8.mail.unifiedlayer.com (Postfix) with ESMTP id 0EBD01AB207
 for <xen-devel@lists.xenproject.org>; Tue, 28 Jul 2020 11:18:13 -0600 (MDT)
Received: from md-71.webhostbox.net ([204.11.58.143]) by cmsmtp with ESMTP
 id 0TEykTrfGWYdh0TEykGdC9; Tue, 28 Jul 2020 11:18:13 -0600
X-Authority-Reason: nr=8
X-Authority-Analysis: v=2.3 cv=XYinMrx5 c=1 sm=1 tr=0
 a=yS0qNmEK8ed8yKyeR8R6rg==:117 a=yS0qNmEK8ed8yKyeR8R6rg==:17
 a=dLZJa+xiwSxG16/P+YVxDGlgEgI=:19 a=IkcTkHD0fZMA:10:nop_charset_1
 a=_RQrkK6FrEwA:10:nop_rcvd_month_year
 a=o-A10e_uY_YA:10:endurance_base64_authed_username_1 a=Dvu1CkYPihaQVThZ2xYA:9
 a=QEXdDO2ut3YA:10:nop_charset_2
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=yujala.com; 
 s=default;
 h=Content-Transfer-Encoding:Content-Type:MIME-Version:Message-ID:
 Date:Subject:In-Reply-To:References:To:From:Sender:Reply-To:Cc:Content-ID:
 Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc
 :Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:List-Subscribe:
 List-Post:List-Owner:List-Archive;
 bh=nr81cWo7PvcaFT6hx2wfxGYolQs3JoHlEmFOEpv77N4=; b=jQiatLthwg2MqmuAaQCDa1M2yW
 QqN2t95kN9qnxxBKxgxWNi8MMopBhpbV1Tm0XMEl9LH04ofVMGauzXDNhQkeBJWaXYOqG40zNa9/c
 f15qj0iGot6jA45Mb765IottS9uJwLMoO6stf/+qsrPF+t7h+dPKYHWxYYRRondDT9zsNMyF8tqPv
 9Rp48vrchjI+0ZgSqa5HXHstfwvCTvX4m388qRjMIzZAsTW61kwPILJCDLlcf2Jqvbx7C24J9ONDy
 8AsO+8hsuoKUqoGq1rUO2cVujY5U34KA/p/vTOzLNNY8tkafpMZSTB2KBG2FwE1HhG14akCpcHd9a
 QKefgG3g==;
Received: from 162-231-240-210.lightspeed.sntcca.sbcglobal.net
 ([162.231.240.210]:57263 helo=SRINIASUSLAPTOP)
 by md-71.webhostbox.net with esmtpsa (TLS1.2) tls
 TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 (Exim 4.93)
 (envelope-from <srini@yujala.com>)
 id 1k0TEx-003F2G-V5; Tue, 28 Jul 2020 17:18:12 +0000
From: "Srinivas Bangalore" <srini@yujala.com>
To: "'Julien Grall'" <julien@xen.org>, <xen-devel@lists.xenproject.org>,
 "'Christopher Clark'" <christopher.w.clark@gmail.com>,
 "'Stefano Stabellini'" <sstabellini@kernel.org>
References: <002801d66051$90fe2300$b2fa6900$@yujala.com>
 <9736680b-1c81-652b-552b-4103341bad50@xen.org>
 <000001d661cb$45cdaa10$d168fe30$@yujala.com>
 <5f985a6a-1bd6-9e68-f35f-b0b665688cee@xen.org>
 <002901d66462$a1dff530$e59fdf90$@yujala.com>
 <f8de3b17-d8bd-884d-a37f-6e6d58bcab8c@xen.org>
In-Reply-To: <f8de3b17-d8bd-884d-a37f-6e6d58bcab8c@xen.org>
Subject: RE: Porting Xen to Jetson Nano
Date: Tue, 28 Jul 2020 10:18:11 -0700
Message-ID: <003501d66503$125388e0$36fa9aa0$@yujala.com>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
X-Mailer: Microsoft Outlook 16.0
Content-Language: en-us
Thread-Index: AQIl7jaf5+ZLFToUYJ/P44Ycp83hwAFkmiVWAaCz0KsBnyrDOQIUsjAOAt+hgOCoMWHSIA==
X-AntiAbuse: This header was added to track abuse,
 please include it with any abuse report
X-AntiAbuse: Primary Hostname - md-71.webhostbox.net
X-AntiAbuse: Original Domain - lists.xenproject.org
X-AntiAbuse: Originator/Caller UID/GID - [47 12] / [47 12]
X-AntiAbuse: Sender Address Domain - yujala.com
X-BWhitelist: no
X-Source-IP: 162.231.240.210
X-Source-L: No
X-Exim-ID: 1k0TEx-003F2G-V5
X-Source: 
X-Source-Args: 
X-Source-Dir: 
X-Source-Sender: 162-231-240-210.lightspeed.sntcca.sbcglobal.net
 (SRINIASUSLAPTOP) [162.231.240.210]:57263
X-Source-Auth: srini@yujala.com
X-Email-Count: 1
X-Source-Cap: c3JpbmlxbGw7c3JpbmlxbGw7bWQtNzEud2ViaG9zdGJveC5uZXQ=
X-Local-Domain: yes
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> I struggled to find your comment inline as your e-mail client doesn't
> quote my answer. Please configure your e-mail client to use some form
> of quoting (the usual is '>').
>
> [<SB>] Done! Sorry about that.

Thanks, this is a good start. Unfortunately, it doesn't fully help when
you have a reply split across multiple lines. This becomes more
prominent after a few rounds of back and forth. Which e-mail client are
you using?

[<SB>] I'm using Microsoft Outlook

> [<SB>] OK, I started porting the patch series to 4.14, but it is
> definitely not straightforward ;) Will take some time to do this. BTW,
> I was looking at xen/arch/arm/Rules.mk in 4.14 and it is blank. The
> previous releases had some board-specific stuff in this file - esp. the
> EARLY_PRINTK definitions.
> Has this changed in 4.14?

earlyprintk can now be configured using Kconfig. This should be easier
to configure as you can do it the same way as you would for other
options.
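For reference, the Kconfig route looks roughly like the fragment below. The option names follow xen/arch/arm/Kconfig.debug around Xen 4.14, and the UART base address for the Jetson Nano's Tegra 8250-style UART is an assumption to verify against the board's device tree:

```kconfig
# Hypothetical .config fragment for early printk on an 8250-style UART.
# CONFIG_EARLY_PRINTK and the CONFIG_EARLY_UART_* options replace the
# old EARLY_PRINTK definitions from xen/arch/arm/Rules.mk.
CONFIG_DEBUG=y
CONFIG_EARLY_PRINTK=y
CONFIG_EARLY_UART_CHOICE_8250=y
# Tegra UART-A base address -- an assumption; check your platform docs.
CONFIG_EARLY_UART_BASE_ADDRESS=0x70006000
CONFIG_EARLY_UART_8250_REG_SHIFT=2
```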

[<SB>] Thanks. Yes, I noticed this change. I'm working on applying the
patches and will get back with an update in a day or two.

Thanks,
Srini




From xen-devel-bounces@lists.xenproject.org Tue Jul 28 17:43:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jul 2020 17:43:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k0Tcs-0005jG-Jp; Tue, 28 Jul 2020 17:42:54 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ZWt7=BH=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1k0Tcq-0005jB-Nc
 for xen-devel@lists.xenproject.org; Tue, 28 Jul 2020 17:42:52 +0000
X-Inumbo-ID: c1b1a2a6-d0f9-11ea-8ba9-bc764e2007e4
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c1b1a2a6-d0f9-11ea-8ba9-bc764e2007e4;
 Tue, 28 Jul 2020 17:42:50 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595958172;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:content-transfer-encoding:in-reply-to;
 bh=bm6xpS3UPlSHgrr/AwBHxbllr8iQgva7zYaEqmfywQE=;
 b=Isvlt8g6tTyBuPgT6CzTYbpm21V5W2Qq0D78W3YuJnZTzZvyM4MLtigM
 92A2YgHALjRR9DT0V2nHkaxuWsIbgD2l1OuNmgFfhJbWeRuL/ZyD4tGSU
 NSEaH1ZCxWuEOkbFAYJ/LETnb6RZeE10gXtXtmiJMTYDDxrxbrtbZlhxg c=;
Authentication-Results: esa3.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: tqfc4/1PrubWMv5ewSP0LHdTujHFP8U+m86/9kdKUf1uh2GxSqdRLKvvIjL7M7hl/BUVdvDraQ
 7xWgR+boyI9h5kdqmKdL4Un2zB12iUpSymGIbUEEQ7O8Hw5Bf2EayFOnHW8bDGV9iixxNVgLOC
 ohHW/uZUKSlMasSdB5PVjkQ2CjgPz8+aAB4j7j0KQdWkgHS/AF8C2gxIr9YIlD7LejhmX4/3f6
 NuNWHo6AupAExmeZCKgns9LBWBBDJcckRANyYzThzC0R8PZACty0nlqgJXI7DofDVY/yjkEeEd
 sT0=
X-SBRS: 2.7
X-MesageID: 23367204
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,406,1589256000"; d="scan'208";a="23367204"
Date: Tue, 28 Jul 2020 19:42:42 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH v3 4/4] xen: add helpers to allocate unpopulated memory
Message-ID: <20200728174242.GB7191@Air-de-Roger>
References: <20200727091342.52325-1-roger.pau@citrix.com>
 <20200727091342.52325-5-roger.pau@citrix.com>
 <b5460659-88a5-c2aa-c339-815d5618bcb5@xen.org>
 <20200728165919.GA7191@Air-de-Roger>
 <cb1790b3-2ad0-2c1b-a632-e4fea4b6bcfa@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <cb1790b3-2ad0-2c1b-a632-e4fea4b6bcfa@citrix.com>
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>, Stefano
 Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Oleksandr
 Andrushchenko <oleksandr_andrushchenko@epam.com>,
 David Airlie <airlied@linux.ie>, Yan Yankovskyi <yyankovskyi@gmail.com>,
 David Hildenbrand <david@redhat.com>, linux-kernel@vger.kernel.org,
 dri-devel@lists.freedesktop.org, Michal Hocko <mhocko@kernel.org>,
 linux-mm@kvack.org, Daniel Vetter <daniel@ffwll.ch>,
 xen-devel@lists.xenproject.org, Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Dan Williams <dan.j.williams@intel.com>, Dan
 Carpenter <dan.carpenter@oracle.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Tue, Jul 28, 2020 at 06:06:25PM +0100, Andrew Cooper wrote:
> On 28/07/2020 17:59, Roger Pau Monné wrote:
> > On Tue, Jul 28, 2020 at 05:48:23PM +0100, Julien Grall wrote:
> >> Hi,
> >>
> >> On 27/07/2020 10:13, Roger Pau Monne wrote:
> >>> To be used in order to create foreign mappings. This is based on the
> >>> ZONE_DEVICE facility which is used by persistent memory devices in
> >>> order to create struct pages and kernel virtual mappings for the IOMEM
> >>> areas of such devices. Note that on kernels without support for
> >>> ZONE_DEVICE Xen will fall back to using ballooned pages in order to
> >>> create foreign mappings.
> >>>
> >>> The newly added helpers use the same parameters as the existing
> >>> {alloc/free}_xenballooned_pages functions, which allows for in-place
> >>> replacement of the callers. Once a memory region has been added to be
> >>> used as scratch mapping space it will no longer be released, and pages
> >>> returned are kept in a linked list. This allows having a buffer of
> >>> pages and avoids resorting to frequent additions and removals of
> >>> regions.
> >>>
> >>> If enabled (because ZONE_DEVICE is supported) the usage of the new
> >>> functionality untangles Xen balloon and RAM hotplug from the usage of
> >>> unpopulated physical memory ranges to map foreign pages, which is the
> >>> correct thing to do in order to avoid having mappings of foreign pages
> >>> depend on memory hotplug.
> >> I think this is going to break Dom0 on Arm if the kernel has been built with
> >> hotplug. This is because you may end up re-using a region that will be used
> >> for the 1:1 mapping of a foreign page.
> >>
> >> Note that I don't know whether hotplug has been tested on Xen on Arm yet, so
> >> it might already be broken.
> >>
> >> Meanwhile, my suggestion would be to make the use of hotplug in the balloon
> >> code conditional (maybe using CONFIG_ARM64 and CONFIG_ARM)?
> > Right, this feature (allocation of unpopulated memory separated from
> > the balloon driver) is currently gated on CONFIG_ZONE_DEVICE, which I
> > think could be used on Arm.
> >
> > IMO the right solution seems to be to subtract the physical memory
> > regions that can be used for the identity mappings of foreign pages
> > (all RAM on the system AFAICT) from iomem_resource, as that would make
> > this and the memory hotplug done in the balloon driver safe?
> 
> The right solution is a mechanism for translated guests to query Xen to
> find regions of guest physical address space which are unused, and can
> safely be used for foreign/grant/other mappings.
> 
> Please don't waste any more time applying more duct tape to a broken
> system, and instead spend the time organising some proper foundations.

The piece added here (using ZONE_DEVICE) will still be relevant when Xen
can provide the space to map foreign pages; it's just that right now it
relies on iomem_resource instead of a Xen-specific resource map that
should be provided by the hypervisor. It should indeed be fixed, but
right now this patch should allow a PVH dom0 to work slightly better.
When Xen provides such areas, Linux just needs to populate a custom Xen
resource with them and use it instead of iomem_resource.

I'm certainly not familiar with the Arm side, and can't provide much
insight there. If it's best to just disable this and continue to rely
on ballooned-out pages, that's fine.

Roger.


From xen-devel-bounces@lists.xenproject.org Tue Jul 28 17:44:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jul 2020 17:44:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k0TeZ-0005pj-Vb; Tue, 28 Jul 2020 17:44:39 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ZWt7=BH=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1k0TeY-0005pb-LW
 for xen-devel@lists.xenproject.org; Tue, 28 Jul 2020 17:44:38 +0000
X-Inumbo-ID: 00367fb1-d0fa-11ea-8ba9-bc764e2007e4
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 00367fb1-d0fa-11ea-8ba9-bc764e2007e4;
 Tue, 28 Jul 2020 17:44:37 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595958276;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:content-transfer-encoding:in-reply-to;
 bh=eHsq7Ta01QmeqnE9DLs8/mBiesJkCYGnL9YfXoA2X4o=;
 b=M2SZYfC1S2F3VwzhijMO/hp4VzBPNWYLjr/k4I7wPT7ORCo3nuZeNfQt
 eSdfuAxw3h+3efb/cGgmDomWUvjJFEJCfsN053Xv80JU2I5aAK6UNbVcg
 3q6rLunKURQ4W25aOxEwYCw0V3CDvr66XN8yRLBLuwi/AXkv76zKKYk3u 8=;
Authentication-Results: esa6.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: 2RdrO/AnzZYrb9zj19qTPY0bAooeEOHP1ELFiUghgXaTqNYu/IsfAaeOofDt0adaNBw2Xa0AC6
 t6lHyuGC5gkUWy/N7rzG8yB31qUpk6aKp4puI4SGJ8w7qoXEtqg8F+J/doKRdiBkCkcVvIxcwR
 q9J9cBAZhRCwaqR6vQaMZgUwvDwO9y4NebuTt/KmXt2MRQWqepK8FGyJefexjYCWZrVGObAaI8
 Yw/nTGNLt85irvc409hapDt3JL5ro6jwcrTY4qUVugZVoEP/hHQ79mpfhww2W8LLw7sXFLWMwl
 r/U=
X-SBRS: 2.7
X-MesageID: 23704211
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,406,1589256000"; d="scan'208";a="23704211"
Date: Tue, 28 Jul 2020 19:44:29 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Julien Grall <julien@xen.org>
Subject: Re: [PATCH v3 4/4] xen: add helpers to allocate unpopulated memory
Message-ID: <20200728174429.GC7191@Air-de-Roger>
References: <20200727091342.52325-1-roger.pau@citrix.com>
 <20200727091342.52325-5-roger.pau@citrix.com>
 <b5460659-88a5-c2aa-c339-815d5618bcb5@xen.org>
 <20200728165919.GA7191@Air-de-Roger>
 <b1732413-0bd0-6f58-6324-37497347ce5b@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <b1732413-0bd0-6f58-6324-37497347ce5b@xen.org>
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>, Stefano
 Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>, Oleksandr
 Andrushchenko <oleksandr_andrushchenko@epam.com>,
 David Airlie <airlied@linux.ie>, Yan Yankovskyi <yyankovskyi@gmail.com>,
 David Hildenbrand <david@redhat.com>, linux-kernel@vger.kernel.org,
 dri-devel@lists.freedesktop.org, Michal Hocko <mhocko@kernel.org>,
 linux-mm@kvack.org, Daniel Vetter <daniel@ffwll.ch>,
 xen-devel@lists.xenproject.org, Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Dan Williams <dan.j.williams@intel.com>, Dan
 Carpenter <dan.carpenter@oracle.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Tue, Jul 28, 2020 at 06:12:46PM +0100, Julien Grall wrote:
> Hi Roger,
> 
> On 28/07/2020 17:59, Roger Pau Monné wrote:
> > On Tue, Jul 28, 2020 at 05:48:23PM +0100, Julien Grall wrote:
> > > Hi,
> > > 
> > > On 27/07/2020 10:13, Roger Pau Monne wrote:
> > > > To be used in order to create foreign mappings. This is based on the
> > > > ZONE_DEVICE facility which is used by persistent memory devices in
> > > > order to create struct pages and kernel virtual mappings for the IOMEM
> > > > areas of such devices. Note that on kernels without support for
> > > > ZONE_DEVICE Xen will fall back to using ballooned pages in order to
> > > > create foreign mappings.
> > > > 
> > > > The newly added helpers use the same parameters as the existing
> > > > {alloc/free}_xenballooned_pages functions, which allows for in-place
> > > > replacement of the callers. Once a memory region has been added to be
> > > > used as scratch mapping space it will no longer be released, and pages
> > > > returned are kept in a linked list. This allows having a buffer of
> > > > pages and avoids resorting to frequent additions and removals of
> > > > regions.
> > > > 
> > > > If enabled (because ZONE_DEVICE is supported) the usage of the new
> > > > functionality untangles Xen balloon and RAM hotplug from the usage of
> > > > unpopulated physical memory ranges to map foreign pages, which is the
> > > > correct thing to do in order to avoid having mappings of foreign pages
> > > > depend on memory hotplug.
> > > I think this is going to break Dom0 on Arm if the kernel has been built with
> > > hotplug. This is because you may end up re-using a region that will be used
> > > for the 1:1 mapping of a foreign page.
> > > 
> > > Note that I don't know whether hotplug has been tested on Xen on Arm yet, so
> > > it might already be broken.
> > > 
> > > Meanwhile, my suggestion would be to make the use of hotplug in the balloon
> > > code conditional (maybe using CONFIG_ARM64 and CONFIG_ARM)?
> > 
> > Right, this feature (allocation of unpopulated memory separated from
> > the balloon driver) is currently gated on CONFIG_ZONE_DEVICE, which I
> > think could be used on Arm.
> > 
> > IMO the right solution seems to be to subtract the physical memory
> > regions that can be used for the identity mappings of foreign pages
> > (all RAM on the system AFAICT) from iomem_resource, as that would make
> > this and the memory hotplug done in the balloon driver safe?
> 
> Dom0 doesn't know the regions used for the identity mappings as this is only
> managed by Xen. So there is nothing you can really do here.

OK, I will add the guards to prevent this being built on Arm.

> But don't you have the same issue on x86 with "magic pages"?

Those are marked as reserved on the memory map, and hence I would
expect them to never end up in iomem_resource.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue Jul 28 17:52:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jul 2020 17:52:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k0Tle-0006hX-Oa; Tue, 28 Jul 2020 17:51:58 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Qgq5=BH=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1k0Tle-0006hS-3L
 for xen-devel@lists.xenproject.org; Tue, 28 Jul 2020 17:51:58 +0000
X-Inumbo-ID: 077d85ce-d0fb-11ea-a91f-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 077d85ce-d0fb-11ea-a91f-12813bfff9fa;
 Tue, 28 Jul 2020 17:51:57 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id AE6A2AC50;
 Tue, 28 Jul 2020 17:52:07 +0000 (UTC)
Subject: Re: [PATCH] x86/vhpet: Fix type size in timer_int_route_valid
To: Andrew Cooper <andrew.cooper3@citrix.com>,
 Eslam Elnikety <elnikety@amazon.com>
References: <20200728083357.77999-1-elnikety@amazon.com>
 <a55fba45-a008-059e-ea8c-b7300e2e8b7d@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <278f0f31-619b-a392-6627-e75e65d0d14f@suse.com>
Date: Tue, 28 Jul 2020 19:51:49 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <a55fba45-a008-059e-ea8c-b7300e2e8b7d@citrix.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu <wl@xen.org>,
 Paul Durrant <pdurrant@amazon.co.uk>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 28.07.2020 11:26, Andrew Cooper wrote:
> Does this work?
> 
> diff --git a/xen/arch/x86/hvm/hpet.c b/xen/arch/x86/hvm/hpet.c
> index ca94e8b453..638f6174de 100644
> --- a/xen/arch/x86/hvm/hpet.c
> +++ b/xen/arch/x86/hvm/hpet.c
> @@ -62,8 +62,7 @@
>   
>   #define timer_int_route(h, n)    MASK_EXTR(timer_config(h, n),
> HPET_TN_ROUTE)
>   
> -#define timer_int_route_cap(h, n) \
> -    MASK_EXTR(timer_config(h, n), HPET_TN_INT_ROUTE_CAP)
> +#define timer_int_route_cap(h, n) (h)->hpet.timers[(n)].route

Seeing that this is likely the route taken here, and hence to avoid
an extra round trip, two remarks: Here I see no need for the
parentheses inside the square brackets.

> diff --git a/xen/include/asm-x86/hvm/vpt.h b/xen/include/asm-x86/hvm/vpt.h
> index f0e0eaec83..a41fc443cc 100644
> --- a/xen/include/asm-x86/hvm/vpt.h
> +++ b/xen/include/asm-x86/hvm/vpt.h
> @@ -73,7 +73,13 @@ struct hpet_registers {
>       uint64_t isr;               /* interrupt status reg */
>       uint64_t mc64;              /* main counter */
>       struct {                    /* timers */
> -        uint64_t config;        /* configuration/cap */
> +        union {
> +            uint64_t config;    /* configuration/cap */
> +            struct {
> +                uint32_t _;
> +                uint32_t route;
> +            };
> +        };

So long as there are no static initializers for this construct
that would then suffer the old-gcc problem, this is of course a
fine arrangement to make.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Jul 28 18:00:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jul 2020 18:00:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k0TtW-0007XF-Jl; Tue, 28 Jul 2020 18:00:06 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Qgq5=BH=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1k0TtV-0007MT-HU
 for xen-devel@lists.xenproject.org; Tue, 28 Jul 2020 18:00:05 +0000
X-Inumbo-ID: 29b6c26d-d0fc-11ea-8baf-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 29b6c26d-d0fc-11ea-8baf-bc764e2007e4;
 Tue, 28 Jul 2020 18:00:04 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id DB72CABE2;
 Tue, 28 Jul 2020 18:00:14 +0000 (UTC)
Subject: Re: [PATCH] public/domctl: Fix the struct xen_domctl ABI in 32bit
 builds
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <20200728101529.13753-1-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <69eed808-a15e-b6a8-14e6-dd4d5bfaae2a@suse.com>
Date: Tue, 28 Jul 2020 20:00:03 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20200728101529.13753-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>,
 Ian Jackson <ian.jackson@citrix.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 28.07.2020 12:15, Andrew Cooper wrote:
> The Xen domctl ABI currently relies on the union containing a field with
> alignment of 8.
> 
> 32bit projects which only copy the used subset of functionality end up with an
> ABI breakage if they don't have at least one uint64_aligned_t field copied.

I'm entirely fine with the change, but I'm struggling with this justification:
How can _any_ change to the header in our tree help people using custom
derivations (i.e. including those defining only some of the union members)?

> Further proof that C isn't an appropriate way to describe an ABI...

Further proof that it requires more careful writing in C to serve as an ABI
description, I would say.

> --- a/xen/common/domctl.c
> +++ b/xen/common/domctl.c
> @@ -959,6 +959,14 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
>       return ret;
>   }
>   
> +static void __init __maybe_unused build_assertions(void)
> +{
> +    struct xen_domctl d;

Doesn't this also need __maybe_unused? Afaik ...

> +    BUILD_BUG_ON(sizeof(d) != 16 /* header */ + 128 /* union */);
> +    BUILD_BUG_ON(offsetof(typeof(d), u) != 16);
> +}

... neither sizeof() nor typeof() count as actual uses.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Jul 28 18:01:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jul 2020 18:01:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k0Tux-0007j8-VQ; Tue, 28 Jul 2020 18:01:35 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=LJbi=BH=gmail.com=persaur@srs-us1.protection.inumbo.net>)
 id 1k0Tuw-0007j0-Na
 for xen-devel@lists.xenproject.org; Tue, 28 Jul 2020 18:01:34 +0000
X-Inumbo-ID: 5f760002-d0fc-11ea-8baf-bc764e2007e4
Received: from mail-qv1-xf33.google.com (unknown [2607:f8b0:4864:20::f33])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5f760002-d0fc-11ea-8baf-bc764e2007e4;
 Tue, 28 Jul 2020 18:01:34 +0000 (UTC)
Received: by mail-qv1-xf33.google.com with SMTP id m9so9539372qvx.5
 for <xen-devel@lists.xenproject.org>; Tue, 28 Jul 2020 11:01:34 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=content-transfer-encoding:from:mime-version:subject:date:message-id
 :references:cc:in-reply-to:to;
 bh=qKkvr51SWK/oV5l7w0J8nJzLQWGszvMg6NFOQAuY5/s=;
 b=vP2zR02Lvcq118Lgiq+yYIzDPX1NTgapIF6REAgDlJsF8XKlnzxYdLOPuwmYVI0YKe
 c4BZTajC75XUKUJUKCCivQqM+Qa+tYBt/F+ZgGqiOdUYvHQnWuQynG8cf7xmLZtyGtwx
 NXUIzY/huydsRx2zypF7/zwMPE2xEExkA5azqE3G1W/JWFYt0lDWTkvCKEvxzgj3byif
 dO5Qf3+4uJrPB96YvG6rRgVXO4MjJO4fPzfE5IfcsHF5DBuIIV7sLdsnpGQ4nH9zvRYB
 lC+8Lkp2nhrDwJ1GxB3rZwk25L7U8HL+Ze6ZFY86MHp2Kd3deLRmwWdjd4RTx/d2iDWK
 BccQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:content-transfer-encoding:from:mime-version
 :subject:date:message-id:references:cc:in-reply-to:to;
 bh=qKkvr51SWK/oV5l7w0J8nJzLQWGszvMg6NFOQAuY5/s=;
 b=BuyPChRb2As8cEPS9UHjrP8tQwQShmT+N+zeS3H8xowNLK8iYhgJqVGZvCsLO2ZMbH
 Dmjg5PBWdfTKuK4B6EqpQVvKNIeXqUJZ7PixsI0DEj1YwmrTYoufulpIvU+hLTxCz2Gm
 TTCdKYEAX5JKth9ATVaqe/thEWnN1lTow0qlIWV/qB8wDQJLjiPasYlnDxXm0b/AgXER
 QnLrQTeysYVfXNXK02YE9ajGZ3UU8HXLm1ggIZof+q69tvQ7NelJXnj5CT3o2zroaXoG
 LUb2V3tNWMofCByT1OBhRyxEI+xpHEEEGYZ8blZH1SF9VhAFvngpuZ63k5erUqT8FALO
 mfPg==
X-Gm-Message-State: AOAM533A7V7D1rqA3Qf9YmnJA9Xhypn9UDLMoqGObvribfQjck3VGzxD
 HMjTnn9z9TQ6SxiLAC/5VEQ96ns+NOA=
X-Google-Smtp-Source: ABdhPJx5lhlFNbzcp8o/ph19ZB6mDryP8WY1ZjO2NJWccvsLkC6/kssdiIlgtkKtaiRIJMSumK2vcw==
X-Received: by 2002:ad4:5768:: with SMTP id r8mr29249407qvx.213.1595959293649; 
 Tue, 28 Jul 2020 11:01:33 -0700 (PDT)
Received: from [100.64.72.40] ([173.245.215.240])
 by smtp.gmail.com with ESMTPSA id t93sm18999793qtd.97.2020.07.28.11.01.32
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 28 Jul 2020 11:01:33 -0700 (PDT)
Content-Type: multipart/alternative;
 boundary=Apple-Mail-85897A5B-98E5-4512-98C6-744C7CA7FF3D
Content-Transfer-Encoding: 7bit
From: Rich Persaud <persaur@gmail.com>
Mime-Version: 1.0 (1.0)
Subject: Re: Porting Xen to Jetson Nano
Date: Tue, 28 Jul 2020 14:01:32 -0400
Message-Id: <129D8F3A-91B1-4F27-B9F0-F804B21260EB@gmail.com>
References: <003501d66503$125388e0$36fa9aa0$@yujala.com>
In-Reply-To: <003501d66503$125388e0$36fa9aa0$@yujala.com>
To: Srinivas Bangalore <srini@yujala.com>
X-Mailer: iPad Mail (17G68)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien@xen.org>,
 Christopher Clark <christopher.w.clark@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>


--Apple-Mail-85897A5B-98E5-4512-98C6-744C7CA7FF3D
Content-Type: text/plain;
	charset=utf-8
Content-Transfer-Encoding: quoted-printable

On Jul 28, 2020, at 13:19, Srinivas Bangalore <srini@yujala.com> wrote:
> 
> 
>> 
>> I struggled to find your comment inline as your e-mail client doesn't
>> quote my answer. Please configure your e-mail client to use some form
>> of quoting (the usual is '>').
>> 
>> [<SB>] Done! Sorry about that.
> 
> Thanks, this is a good start. Unfortunately, it doesn't fully help when you have a reply split across multiple lines. This becomes more prominent after a few rounds of back and forth. Which e-mail client are you using?
> 
> [<SB>] I'm using Microsoft Outlook

Might be worth trying these instructions:
https://www.slipstick.com/outlook/email/to-use-internet-style-quoting/

Rich



--Apple-Mail-85897A5B-98E5-4512-98C6-744C7CA7FF3D--


From xen-devel-bounces@lists.xenproject.org Tue Jul 28 18:14:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jul 2020 18:14:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k0U7O-0000IG-4e; Tue, 28 Jul 2020 18:14:26 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=o87v=BH=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1k0U7M-0000IB-3d
 for xen-devel@lists.xenproject.org; Tue, 28 Jul 2020 18:14:24 +0000
X-Inumbo-ID: 28cb16c7-d0fe-11ea-a922-12813bfff9fa
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 28cb16c7-d0fe-11ea-a922-12813bfff9fa;
 Tue, 28 Jul 2020 18:14:23 +0000 (UTC)
Received: from localhost (c-67-164-102-47.hsd1.ca.comcast.net [67.164.102.47])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256
 bits)) (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 1FB9E2070B;
 Tue, 28 Jul 2020 18:14:22 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
 s=default; t=1595960062;
 bh=Ji1GHAO1Gn5m+jRIp21fZ0RWs5O53yohkzmTy0JAAqI=;
 h=Date:From:To:cc:Subject:In-Reply-To:References:From;
 b=JPD7xYi4lTHBB9Rfqm25yKAV5DyXF/IO9C3qQQG1x95wTYVsePitnutw7zGg3yUAD
 OB4BxzkEb82aM9Bl+bc4OgjIKO5zRY7e3Utp0dPD1DWq543zpl+VmqouBpIR+FNidZ
 FEuFRju8LBOHnx6XpwzKZqSGuwGb9Yhjtzlijq5c=
Date: Tue, 28 Jul 2020 11:14:14 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: =?UTF-8?Q?Andr=C3=A9_Przywara?= <andre.przywara@arm.com>
Subject: Re: dom0 Linux 5.8-rc5 kernel failing to initialize cooling maps
 for Allwinner H6 SoC
In-Reply-To: <e091c32f-d121-d549-a2fa-f906d28ff8f1@arm.com>
Message-ID: <alpine.DEB.2.21.2007281054520.646@sstabellini-ThinkPad-T480s>
References: <CA+wirGqXMoRkS-aJmfFLipUv8SdY5LKV1aMrF0yKRJQaMvzs6Q@mail.gmail.com>
 <1c5cee83-295e-cc02-d018-b53ddc6e3590@xen.org>
 <CA+wirGpFvLBzYRBaq8yspJj8j9-yoLwN88bt079qM5yqPTwtcA@mail.gmail.com>
 <02b630bd-22e0-afde-6784-be068d0948ae@arm.com>
 <CA+wirGoG+im2mwb2ye6j4MpcVtfQ-prhhmVgdBTosus7hjeu=w@mail.gmail.com>
 <e091c32f-d121-d549-a2fa-f906d28ff8f1@arm.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: multipart/mixed; BOUNDARY="8323329-67695330-1595959176=:646"
Content-ID: <alpine.DEB.2.21.2007281059580.646@sstabellini-ThinkPad-T480s>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien@xen.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Alejandro <alejandro.gonzalez.correo@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--8323329-67695330-1595959176=:646
Content-Type: text/plain; CHARSET=UTF-8
Content-Transfer-Encoding: 8BIT
Content-ID: <alpine.DEB.2.21.2007281059581.646@sstabellini-ThinkPad-T480s>

On Tue, 28 Jul 2020, André Przywara wrote:
> On 28/07/2020 11:39, Alejandro wrote:
> > Hello,
> > 
> > El dom., 26 jul. 2020 a las 22:25, André Przywara
> > (<andre.przywara@arm.com>) escribió:
> >> So this was actually my first thought: The firmware (U-Boot SPL) sets up
> >> some basic CPU frequency (888 MHz for H6 [1]), which is known to never
> >> overheat the chip, even under full load. So any concern from your side
> >> about the board or SoC overheating could be dismissed, with the current
> >> mainline code, at least. However you lose the full speed, by quite a
> >> margin on the H6 (on the A64 it's only 816 vs 1200(ish) MHz).
> >> However, without the clock entries in the CPU node, the frequency would
> >> never be changed by Dom0 anyway (nor by Xen, which doesn't even know how
> >> to do this).
> >> So from a practical point of view: unless you hack Xen to pass on more
> >> cpu node properties, you are stuck at 888 MHz anyway, and don't need to
> >> worry about overheating.
> > Thank you. Knowing that at least it won't overheat is a relief. But
> > the performance definitely suffers from the current situation, and
> > quite a bit. I'm thinking about using KVM instead: even if it does
> > less paravirtualization of guests,
> 
> What is this statement based on? I think on ARM this never really
> applied, and in general whether you do virtio or xen front-end/back-end
> does not really matter. IMHO any reasoning about performance just based
> on software architecture is mostly flawed (because it's complex and
> reality might have missed some memos ;-) So just measure your particular
> use case, then you know.
> 
> > I'm sure that the ability to use
> > the maximum frequency of the CPU would offset the additional overhead,
> > and in general offer better performance. But with KVM I lose the
> > ability to have individual domU's dedicated to some device driver,
> > which is a nice thing to have from a security standpoint.
> 
> I understand the theoretical merits, but a) does this really work on
> your board and b) is this really more secure? What do you want to
> protect against?

For "does it work on your board", the main obstacle is typically IOMMU
support, needed to do device assignment properly. That's definitely
something to check. If it doesn't work today, you can try to work
around it by using direct 1:1 memory mappings [1].  However, for
security you then have to configure an MPU. I wonder whether the H6 has
an MPU and how it can be configured. In any case, something to keep in
mind in case the default IOMMU-based setup doesn't work for the
device you care about.

For "is this really more secure?", yes, it is: you are running larger
portions of the codebase in unprivileged mode, isolated from each other
with IOMMU (or MPU) protection. See what the OpenXT and Qubes OS
projects have been doing.


[1] https://marc.info/?l=xen-devel&m=158691258712815
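A driver-domain setup of the kind discussed above is usually expressed in an xl guest config. The sketch below is purely illustrative: the device-tree path, IRQ number, MMIO range, and file paths are made up, and the `dtdev`/`irqs`/`iomem` keys assume working ARM device-tree passthrough on the platform.

```text
# Hypothetical xl config for a domU acting as a network driver domain.
# All device paths and values are illustrative, not taken from a real board.
name    = "net-driver-domain"
kernel  = "/boot/vmlinuz-domu"
memory  = 256
vcpus   = 1
# Assign the NIC's device-tree node to this domU (requires IOMMU-based
# passthrough to be safe; the node path is platform-specific).
dtdev   = [ "/soc/ethernet@1c30000" ]
# The device's interrupt and MMIO registers (values are made up).
irqs    = [ 114 ]
iomem   = [ "0x01c30,1" ]
```

Without an IOMMU, the 1:1 direct-mapping approach from [1] would additionally be needed so the device's DMA addresses match the domU's view of memory.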
--8323329-67695330-1595959176=:646--


From xen-devel-bounces@lists.xenproject.org Tue Jul 28 18:33:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jul 2020 18:33:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k0UQ2-00020o-Uz; Tue, 28 Jul 2020 18:33:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=o87v=BH=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1k0UQ1-00020I-VN
 for xen-devel@lists.xenproject.org; Tue, 28 Jul 2020 18:33:41 +0000
X-Inumbo-ID: dbf552be-d100-11ea-8bbe-bc764e2007e4
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id dbf552be-d100-11ea-8bbe-bc764e2007e4;
 Tue, 28 Jul 2020 18:33:41 +0000 (UTC)
Received: from localhost (c-67-164-102-47.hsd1.ca.comcast.net [67.164.102.47])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256
 bits)) (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 2B3DD2074F;
 Tue, 28 Jul 2020 18:33:40 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
 s=default; t=1595961220;
 bh=k1P+4rZ0W4FYpwyVeMkTE5AeZBsBFWhFjpfSBsfN+A4=;
 h=Date:From:To:cc:Subject:In-Reply-To:References:From;
 b=pv9B7RalN5rzs50EIG3jUxnm126ZpZWDC+T76070LYZNQwi27UrxRjBRWLMnCMnSx
 bUj1NbgWLE2Y6xeEN98xYj6ia6maKXZI/F5VDmZxX8i+Ey4JWMYLiN9xJI6KapvbLG
 mzplzOG72osYZ8CP2Td9xD4LfcBxeyjph5pcxuEk=
Date: Tue, 28 Jul 2020 11:33:39 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
Subject: Re: [RFC PATCH v1 1/4] arm/pci: PCI setup and PCI host bridge
 discovery within XEN on ARM.
In-Reply-To: <20200728083310.GW7191@Air-de-Roger>
Message-ID: <alpine.DEB.2.21.2007281124180.646@sstabellini-ThinkPad-T480s>
References: <alpine.DEB.2.21.2007231055230.17562@sstabellini-ThinkPad-T480s>
 <9f09ff42-a930-e4e3-d1c8-612ad03698ae@xen.org>
 <alpine.DEB.2.21.2007241036460.17562@sstabellini-ThinkPad-T480s>
 <40582d63-49c7-4a51-b35b-8248dfa34b66@xen.org>
 <alpine.DEB.2.21.2007241127480.17562@sstabellini-ThinkPad-T480s>
 <CAJ=z9a3dXSnEBvhkHkZzV9URAGqSfdtJ1Lc838h_ViAWG3ZO4g@mail.gmail.com>
 <alpine.DEB.2.21.2007241353450.17562@sstabellini-ThinkPad-T480s>
 <CAJ=z9a1RWXq3EN5DC=_279yzdsq3M0nw6+CZtKD00yBzKomcaw@mail.gmail.com>
 <20200727110648.GQ7191@Air-de-Roger>
 <alpine.DEB.2.21.2007271411000.27071@sstabellini-ThinkPad-T480s>
 <20200728083310.GW7191@Air-de-Roger>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: multipart/mixed; BOUNDARY="8323329-1371224172-1595961140=:646"
Content-ID: <alpine.DEB.2.21.2007281132430.646@sstabellini-ThinkPad-T480s>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: nd <nd@arm.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>, Jan Beulich <jbeulich@suse.com>,
 xen-devel <xen-devel@lists.xenproject.org>, Rahul Singh <rahul.singh@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Julien Grall <julien.grall.oss@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--8323329-1371224172-1595961140=:646
Content-Type: text/plain; CHARSET=UTF-8
Content-Transfer-Encoding: 8BIT
Content-ID: <alpine.DEB.2.21.2007281132431.646@sstabellini-ThinkPad-T480s>

On Tue, 28 Jul 2020, Roger Pau Monné wrote:
> On Mon, Jul 27, 2020 at 05:06:25PM -0700, Stefano Stabellini wrote:
> > On Mon, 27 Jul 2020, Roger Pau Monné wrote:
> > > On Sat, Jul 25, 2020 at 10:59:50AM +0100, Julien Grall wrote:
> > > > On Sat, 25 Jul 2020 at 00:46, Stefano Stabellini <sstabellini@kernel.org> wrote:
> > > > >
> > > > > On Fri, 24 Jul 2020, Julien Grall wrote:
> > > > > > On Fri, 24 Jul 2020 at 19:32, Stefano Stabellini <sstabellini@kernel.org> wrote:
> > > > > > > > If they are not equal, then I fail to see why it would be useful to have this
> > > > > > > > value in Xen.
> > > > > > >
> > > > > > > I think that's because the domain is actually more convenient to use
> > > > > > > because a segment can span multiple PCI host bridges. So my
> > > > > > > understanding is that a segment alone is not sufficient to identify a
> > > > > > > host bridge. From a software implementation point of view it would be
> > > > > > > better to use domains.
> > > > > >
> > > > > > AFAICT, this would be a matter of one check vs two checks in Xen :).
> > > > > > But... looking at Linux, they will also use domain == segment for ACPI
> > > > > > (see [1]). So, I think, they still have to use (domain, bus) to do the lookup.
> > > 
> > > You have to use the (segment, bus) tuple when doing a lookup because
> > > MMCFG regions on ACPI are defined for a segment and a bus range, you
> > > can have a MMCFG region that covers segment 0 bus [0, 20) and another
> > > MMCFG region that covers segment 0 bus [20, 255], and those will use
> > > different addresses in the MMIO space.
> > 
> > Thanks for the clarification!
> > 
> > 
> > > > > > > > In which case, we need to use PHYSDEVOP_pci_mmcfg_reserved so
> > > > > > > > Dom0 and Xen can synchronize on the segment number.
> > > > > > >
> > > > > > > I was hoping we could write down the assumption somewhere that for the
> > > > > > > cases we care about domain == segment, and error out if it is not the
> > > > > > > case.
> > > > > >
> > > > > > Given that we have only the domain in hand, how would you enforce that?
> > > > > >
> > > > > > From this discussion, it also looks like there is a mismatch between the
> > > > > > implementation and the understanding on QEMU devel. So I am a bit
> > > > > > concerned that this is not stable and may change in future Linux version.
> > > > > >
> > > > > > IOW, we are now tying Xen to Linux. So could we implement
> > > > > > PHYSDEVOP_pci_mmcfg_reserved *or* introduce a new property that
> > > > > > really represents the segment?
> > > > >
> > > > > I don't think we are tying Xen to Linux. Rob has already said that
> > > > > linux,pci-domain is basically a generic device tree property.
> > > > 
> > > > My concern is not so much the name of the property, but the definition of it.
> > > > 
> > > > AFAICT, from this thread there can be two interpretation:
> > > >       - domain == segment
> > > >       - domain == (segment, bus)
> > > 
> > > I think domain is just an alias for segment; the difference seems to
> > > be that when using DT all bridges get a different segment (or domain)
> > > number, and thus you will always end up starting numbering at bus 0
> > > for each bridge?
> > >
> > > Ideally you would need a way to specify the segment and start/end bus
> > > numbers of each bridge, if not you cannot match what ACPI does. Albeit
> > > it might be fine as long as the OS and Xen agree on the segments and
> > > bus numbers that belong to each bridge (and thus each ECAM region).
> > 
> > That is what I thought and it is why I was asking to clarify the naming
> > and/or writing a document to explain the assumptions, if any.
> > 
> > Then after Julien's email I followed up in the Linux codebase and
> > clearly there is a different assumption baked in the Linux kernel for
> > architectures that have CONFIG_PCI_DOMAINS enabled (including ARM64).
> > 
> > The assumption is that segment == domain == unique host bridge. It
> > looks like it is coming from IEEE Std 1275-1994 but I am not certain.
> > In fact, it seems that ACPI MCFG and IEEE Std 1275-1994 don't exactly
> > match. So I am starting to think that domain == segment for IEEE Std
> > 1275-1994 compliant device tree based systems.
> 
> I don't think the ACPI MCFG spec contains the notion of bridges, it
> just describes ECAM (or MMCFG) regions, but those could be made up by
> concatenating different bridge ECAM regions by the firmware itself, so
> you could AFAICT end up with multiple bridges being aggregated into a
> single ECAM region, and thus using the same segment number, which
> seems not possible with the DT spec, where each bridge must get a
> different segment number?

Yes, that's my understanding too


> If you could assign both a segment number and a bus start and end
> values to a bridge then I think it would be kind of equivalent to ACPI
> MCFG.
> 
> I assume we would never support a system where Xen is getting the
> hardware description from a DT and the hardware domain is using ACPI
> (or the other way around)?

Yeah, I think it is a good assumption


> If so, I don't think we care that enumeration when using DT is
> different than when using ACPI, as we can only guarantee consistency
> when both Xen and the hardware domain use the same source for the
> hardware description.
> 
> If when using DT each bridge has a unique segment number that's fine
> as long as Xen and the OS agree to not change such values.

I agree
--8323329-1371224172-1595961140=:646--


From xen-devel-bounces@lists.xenproject.org Tue Jul 28 18:53:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jul 2020 18:53:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k0Uik-0003qC-1S; Tue, 28 Jul 2020 18:53:02 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ltMw=BH=gmail.com=christopher.w.clark@srs-us1.protection.inumbo.net>)
 id 1k0Uii-0003q7-PV
 for xen-devel@lists.xenproject.org; Tue, 28 Jul 2020 18:53:00 +0000
X-Inumbo-ID: 8ea9ca82-d103-11ea-8bc4-bc764e2007e4
Received: from mail-oi1-x244.google.com (unknown [2607:f8b0:4864:20::244])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8ea9ca82-d103-11ea-8bc4-bc764e2007e4;
 Tue, 28 Jul 2020 18:52:59 +0000 (UTC)
Received: by mail-oi1-x244.google.com with SMTP id w17so18434383oie.6
 for <xen-devel@lists.xenproject.org>; Tue, 28 Jul 2020 11:52:59 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc:content-transfer-encoding;
 bh=wG80olMuj0yU8vksl8FAt5NrP7SBZdTf2h2KTFtompE=;
 b=qaYo2iFntC2oTEi64xCKEney2JMwllUHbBExMWRdWr0Aa04ZpQi6UMG4SkuUFuVQ8G
 WaiUNg6VhziPQgOwSSuBB6uJFxLFxoMhvCUlmLjgr81Z7JqHF3VHs/RqLUaNttKz5UHg
 bh/eDJycn7cgM7iZf5OPirjLYmg8Eudfbi/q9JYnf4npO2nVkBzdBJXVQiP+WMJER8CA
 jtfsUOv+SONb6DKtcZByxT6UmoeP56PFXH5boijIM5KSb9EFZyvFz60feSPXaZidY2TO
 d0SodP7HHx+06r9cTtM+Bdiq+6SGl7N1BwX3WLj5fv3Roena3WmfwWhU/5/IuPgrZJSo
 qPdw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc:content-transfer-encoding;
 bh=wG80olMuj0yU8vksl8FAt5NrP7SBZdTf2h2KTFtompE=;
 b=no80378HUELiZUYJd5Ossst/T0Sb50CLqvEG2BZOhrOMZZ7Y/hH1DUf22Lfy+1ucAt
 ynFFUvRAul/wheCB2fQAJJAdkmF1JPuDBF/JcKrWigk9RxhKpbIfFKlS32tUDYQbBBj8
 IOIX5zZM24p24N5Zs2f85v3ua1Eb87ODujxtMKHSZPo/BCllk7hxuYEPk5Xcy8/ZsjlA
 xSuWV41dzSkfN7uPK9lesV4sqQbWI7zgcQ+8by5SnSt/Q09ZUpR718gFN6ytsBg8EuVu
 VxLw5o48HVlhJeiWZ9N/slC+Y42xuak3IiTp7BS9cKm8JaVminI/u9dScfAvHffESBow
 h2VQ==
X-Gm-Message-State: AOAM531o++g7lJOvSBhUufsza76CIt4ckf63/rmNFfsd/zOR5knQUjZ7
 XgRSuC68VgP715YY2BNcmIqhJJ9JnqOlV5M0ou4=
X-Google-Smtp-Source: ABdhPJxHv5OwMm7p82gJnB2GuqXDBmOS1UDi7BgNiCP1QRFRGMpPH1/LQQC/UxybKpje5kc5VZl+JNiHEVqfbAc16fM=
X-Received: by 2002:aca:380a:: with SMTP id f10mr4493818oia.161.1595962379343; 
 Tue, 28 Jul 2020 11:52:59 -0700 (PDT)
MIME-Version: 1.0
References: <CA+wirGqXMoRkS-aJmfFLipUv8SdY5LKV1aMrF0yKRJQaMvzs6Q@mail.gmail.com>
 <1c5cee83-295e-cc02-d018-b53ddc6e3590@xen.org>
 <CA+wirGpFvLBzYRBaq8yspJj8j9-yoLwN88bt079qM5yqPTwtcA@mail.gmail.com>
 <02b630bd-22e0-afde-6784-be068d0948ae@arm.com>
 <CA+wirGoG+im2mwb2ye6j4MpcVtfQ-prhhmVgdBTosus7hjeu=w@mail.gmail.com>
 <e091c32f-d121-d549-a2fa-f906d28ff8f1@arm.com>
 <alpine.DEB.2.21.2007281054520.646@sstabellini-ThinkPad-T480s>
In-Reply-To: <alpine.DEB.2.21.2007281054520.646@sstabellini-ThinkPad-T480s>
From: Christopher Clark <christopher.w.clark@gmail.com>
Date: Tue, 28 Jul 2020 11:52:40 -0700
Message-ID: <CACMJ4GYWBNV5O4otbDj2Lx3Qq6sFPWm8bX4CRABEU3g1izQraQ@mail.gmail.com>
Subject: Re: dom0 Linux 5.8-rc5 kernel failing to initialize cooling maps for
 Allwinner H6 SoC
To: Stefano Stabellini <sstabellini@kernel.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: =?UTF-8?Q?Andr=C3=A9_Przywara?= <andre.przywara@arm.com>,
 Julien Grall <julien@xen.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Alejandro <alejandro.gonzalez.correo@gmail.com>,
 xen-devel <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Tue, Jul 28, 2020 at 11:16 AM Stefano Stabellini
<sstabellini@kernel.org> wrote:
>
> On Tue, 28 Jul 2020, André Przywara wrote:
> > On 28/07/2020 11:39, Alejandro wrote:
> > > Hello,
> > >
> > > El dom., 26 jul. 2020 a las 22:25, André Przywara
> > > (<andre.przywara@arm.com>) escribió:
> > >> So this was actually my first thought: The firmware (U-Boot SPL) sets up
> > >> some basic CPU frequency (888 MHz for H6 [1]), which is known to never
> > >> overheat the chip, even under full load. So any concern from your side
> > >> about the board or SoC overheating could be dismissed, with the current
> > >> mainline code, at least. However you lose the full speed, by quite a
> > >> margin on the H6 (on the A64 it's only 816 vs 1200(ish) MHz).
> > >> However, without the clock entries in the CPU node, the frequency would
> > >> never be changed by Dom0 anyway (nor by Xen, which doesn't even know how
> > >> to do this).
> > >> So from a practical point of view: unless you hack Xen to pass on more
> > >> cpu node properties, you are stuck at 888 MHz anyway, and don't need to
> > >> worry about overheating.
> > > Thank you. Knowing that at least it won't overheat is a relief. But
> > > the performance definitely suffers from the current situation, and
> > > quite a bit. I'm thinking about using KVM instead: even if it does
> > > less paravirtualization of guests,
> >
> > What is this statement based on? I think on ARM this never really
> > applied, and in general whether you do virtio or xen front-end/back-end
> > does not really matter.

When you say "in general" here, this becomes a very broad statement
about virtio and xen front-end/back-ends being equivalent and
interchangeable, and that could cause some misunderstanding for a
newcomer.

There are important differences between the isolation properties of
classic virtio and Xen's front-end/back-ends -- and also the Argo
transport. This matters particularly for Xen because it has
prioritized support for strong isolation between execution
environments to a greater extent than some other hypervisors; it is a
critical differentiator. The importance of isolation is why Xen
4.14's headline feature was support for Linux stubdomains, upstreamed
to Xen after years of work by the Qubes and OpenXT communities.

> > IMHO any reasoning about performance just based
> > on software architecture is mostly flawed (because it's complex and
> > reality might have missed some memos ;-)

That's another pretty strong statement. Measurement is great, but
performance analysis that is informed and directed by an
understanding of the architecture under test can be more rigorous
and persuasive than work done without it.

> > So just measure your particular use case, then you know.

Hmm.

> > > I'm sure that the ability to use
> > > the maximum frequency of the CPU would offset the additional overhead,
> > > and in general offer better performance. But with KVM I lose the
> > > ability to have individual domU's dedicated to some device driver,
> > > which is a nice thing to have from a security standpoint.
> >
> > I understand the theoretical merits, but a) does this really work on
> > your board and b) is this really more secure? What do you want to
> > protect against?
>
> For "does it work on your board", the main obstacle is typically IOMMU
> support, needed to do device assignment properly. That's definitely
> something to check. If it doesn't work today, you can try to work
> around it by using direct 1:1 memory mappings [1].  However, for
> security you then have to configure an MPU. I wonder whether the H6 has
> an MPU and how it can be configured. In any case, something to keep in
> mind in case the default IOMMU-based setup doesn't work for the
> device you care about.
>
> For "is this really more secure?", yes, it is: you are running larger
> portions of the codebase in unprivileged mode, isolated from each other
> with IOMMU (or MPU) protection. See what the OpenXT and Qubes OS
> projects have been doing.

Yes. Both projects have done quite a lot of work to enable and
maintain driver domains.

thanks,

Christopher

>
>
> [1] https://marc.info/?l=xen-devel&m=158691258712815


From xen-devel-bounces@lists.xenproject.org Tue Jul 28 19:04:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jul 2020 19:04:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k0Uto-0004mB-4f; Tue, 28 Jul 2020 19:04:28 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=o87v=BH=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1k0Utn-0004m6-5o
 for xen-devel@lists.xenproject.org; Tue, 28 Jul 2020 19:04:27 +0000
X-Inumbo-ID: 2741c956-d105-11ea-a925-12813bfff9fa
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2741c956-d105-11ea-a925-12813bfff9fa;
 Tue, 28 Jul 2020 19:04:25 +0000 (UTC)
Received: from localhost (c-67-164-102-47.hsd1.ca.comcast.net [67.164.102.47])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256
 bits)) (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 62AF522C9E;
 Tue, 28 Jul 2020 19:04:24 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
 s=default; t=1595963064;
 bh=YFQxmi5He0aDdYwkXuovzDD7FsIUx1MG20jPnRJI0fo=;
 h=Date:From:To:cc:Subject:In-Reply-To:References:From;
 b=wRi4h7oV6D05WDvmsw9f63RzEm6Hc0cDWlINpRvh2oyIxmENkrGJP54eij5VCZzE7
 DMG9xNRQy7uzWFggfVtZRdvr1uEU0t8/tJPE45kCN3YZ+7eeQqxt6D1ESdqsuH2gvS
 JHWAqJXw2fryiVSVr6lhheo9gMTVL7LUXjzIBmSc=
Date: Tue, 28 Jul 2020 12:04:23 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Bertrand Marquis <bertrand.marquis@arm.com>
Subject: Re: [PATCH v2] xen/arm: Convert runstate address during hypcall
In-Reply-To: <4647a019c7b42d40d3c2f5b0a3685954bea7f982.1595948219.git.bertrand.marquis@arm.com>
Message-ID: <alpine.DEB.2.21.2007281153110.646@sstabellini-ThinkPad-T480s>
References: <4647a019c7b42d40d3c2f5b0a3685954bea7f982.1595948219.git.bertrand.marquis@arm.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 xen-devel@lists.xenproject.org, nd@arm.com,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Tue, 28 Jul 2020, Bertrand Marquis wrote:
> At the moment on Arm, a Linux guest running with KTPI enabled will
> cause the following error when a context switch happens in user mode:
> (XEN) p2m.c:1890: d1v0: Failed to walk page-table va 0xffffff837ebe0cd0
> 
> The error is caused by the fact that, with KPTI enabled, the virtual
> address of the runstate area registered by the guest is only accessible
> while the guest is running in kernel space.
> 
> To solve this issue, this patch translates the virtual address to a
> physical address during the hypercall and maps the required pages using
> vmap. This removes the virtual-to-physical conversion during the
> context switch, which solves the problem with KPTI.
> 
> This is done only on the Arm architecture; the behaviour on x86 is not
> modified by this patch and the address conversion is still done during
> each context switch.
> 
> This introduces several limitations compared to the previous behaviour
> (on Arm only):
> - if the guest remaps the area to a different physical address, Xen
> will continue to update the area at the previous physical address. As
> the area is in kernel space and usually defined as a global variable,
> this is believed not to happen. If a guest does require it, it will
> have to call the hypercall again with the new area (even if it is at
> the same virtual address).
> - the area needs to be mapped during the hypercall, for the same
> reasons as in the previous case, even if the area is registered for a
> different vcpu. Registering an area whose virtual address is unmapped
> is not believed to be something guests do.
> 
> Inline functions in headers could not be used, as the architecture
> domain.h is included before the global domain.h, making it impossible
> to use struct vcpu inside the architecture header.
> This should not have any performance impact, as the hypercall is
> usually only called once per vcpu.
> 
> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
> ---
>   Changes in v2
>     - use vmap to map the pages during the hypercall.
>     - reintroduce initial copy during hypercall.
> 
> ---
>  xen/arch/arm/domain.c        | 160 +++++++++++++++++++++++++++++++----
>  xen/arch/x86/domain.c        |  30 ++++++-
>  xen/arch/x86/x86_64/domain.c |   4 +-
>  xen/common/domain.c          |  19 ++---
>  xen/include/asm-arm/domain.h |   9 ++
>  xen/include/asm-x86/domain.h |  16 ++++
>  xen/include/xen/domain.h     |   5 ++
>  xen/include/xen/sched.h      |  16 +---
>  8 files changed, 207 insertions(+), 52 deletions(-)
> 
> diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
> index 31169326b2..c595438bd9 100644
> --- a/xen/arch/arm/domain.c
> +++ b/xen/arch/arm/domain.c
> @@ -19,6 +19,7 @@
>  #include <xen/sched.h>
>  #include <xen/softirq.h>
>  #include <xen/wait.h>
> +#include <xen/vmap.h>
>  
>  #include <asm/alternative.h>
>  #include <asm/cpuerrata.h>
> @@ -275,36 +276,157 @@ static void ctxt_switch_to(struct vcpu *n)
>      virt_timer_restore(n);
>  }
>  
> -/* Update per-VCPU guest runstate shared memory area (if registered). */
> -static void update_runstate_area(struct vcpu *v)
> +static void cleanup_runstate_vcpu_locked(struct vcpu *v)
> +{
> +    if ( v->arch.runstate_guest )
> +    {
> +        vunmap((void *)((unsigned long)v->arch.runstate_guest & PAGE_MASK));
> +
> +        put_page(v->arch.runstate_guest_page[0]);
> +
> +        if ( v->arch.runstate_guest_page[1] )
> +        {
> +            put_page(v->arch.runstate_guest_page[1]);
> +        }
> +        v->arch.runstate_guest = NULL;
> +    }
> +}
> +
> +void arch_vcpu_cleanup_runstate(struct vcpu *v)
>  {
> -    void __user *guest_handle = NULL;
> +    spin_lock(&v->arch.runstate_guest_lock);
> +
> +    cleanup_runstate_vcpu_locked(v);
> +
> +    spin_unlock(&v->arch.runstate_guest_lock);
> +}
> +
> +static int setup_runstate_vcpu_locked(struct vcpu *v, vaddr_t vaddr)
> +{
> +    unsigned int offset;
> +    mfn_t mfn[2];
> +    struct page_info *page;
> +    unsigned int numpages;
>      struct vcpu_runstate_info runstate;
> +    void *p;
>  
> -    if ( guest_handle_is_null(runstate_guest(v)) )
> -        return;
> +    /* user can pass a NULL address to unregister a previous area */
> +    if ( vaddr == 0 )
> +        return 0;
>  
> -    memcpy(&runstate, &v->runstate, sizeof(runstate));
> +    offset = vaddr & ~PAGE_MASK;
>  
> -    if ( VM_ASSIST(v->domain, runstate_update_flag) )
> +    /* provided address must be aligned to a 64bit */
> +    if ( offset % alignof(struct vcpu_runstate_info) )
>      {
> -        guest_handle = &v->runstate_guest.p->state_entry_time + 1;
> -        guest_handle--;
> -        runstate.state_entry_time |= XEN_RUNSTATE_UPDATE;
> -        __raw_copy_to_guest(guest_handle,
> -                            (void *)(&runstate.state_entry_time + 1) - 1, 1);
> -        smp_wmb();
> +        gprintk(XENLOG_WARNING, "Cannot map runstate pointer at 0x%lx: "
> +                "Invalid offset\n", vaddr);
> +        return -EINVAL;
> +    }
> +
> +    page = get_page_from_gva(v, vaddr, GV2M_WRITE);
> +    if ( !page )
> +    {
> +        gprintk(XENLOG_WARNING, "Cannot map runstate pointer at 0x%lx: "
> +                "Page is not mapped\n", vaddr);
> +        return -EINVAL;
> +    }
> +    mfn[0] = page_to_mfn(page);
> +    v->arch.runstate_guest_page[0] = page;
> +
> +    if ( offset > (PAGE_SIZE - sizeof(struct vcpu_runstate_info)) )
> +    {
> +        /* guest area is crossing pages */
> +        page = get_page_from_gva(v, vaddr + PAGE_SIZE, GV2M_WRITE);
> +        if ( !page )
> +        {
> +            put_page(v->arch.runstate_guest_page[0]);
> +            gprintk(XENLOG_WARNING, "Cannot map runstate pointer at 0x%lx: "
> +                    "2nd Page is not mapped\n", vaddr);
> +            return -EINVAL;
> +        }
> +        mfn[1] = page_to_mfn(page);
> +        v->arch.runstate_guest_page[1] = page;
> +        numpages = 2;
> +    }
> +    else
> +    {
> +        v->arch.runstate_guest_page[1] = NULL;
> +        numpages = 1;
> +    }
> +
> +    p = vmap(mfn, numpages);
> +    if ( !p )
> +    {
> +        put_page(v->arch.runstate_guest_page[0]);
> +        if ( numpages == 2 )
> +            put_page(v->arch.runstate_guest_page[1]);
> +
> +        gprintk(XENLOG_WARNING, "Cannot map runstate pointer at 0x%lx: "
> +                "vmap error\n", vaddr);
> +        return -EINVAL;
>      }
>  
> -    __copy_to_guest(runstate_guest(v), &runstate, 1);
> +    v->arch.runstate_guest = p + offset;
>  
> -    if ( guest_handle )
> +    if (v == current)
>      {
> -        runstate.state_entry_time &= ~XEN_RUNSTATE_UPDATE;
> +        memcpy(v->arch.runstate_guest, &v->runstate, sizeof(v->runstate));
> +    }
> +    else
> +    {
> +        vcpu_runstate_get(v, &runstate);
> +        memcpy(v->arch.runstate_guest, &runstate, sizeof(v->runstate));
> +    }
> +
> +    return 0;
> +}


The arm32 build breaks with:

domain.c: In function 'setup_runstate_vcpu_locked':
domain.c:322:9: error: format '%lx' expects argument of type 'long unsigned int', but argument 3 has type 'vaddr_t' [-Werror=format=]
         gprintk(XENLOG_WARNING, "Cannot map runstate pointer at 0x%lx: "
         ^
domain.c:330:9: error: format '%lx' expects argument of type 'long unsigned int', but argument 3 has type 'vaddr_t' [-Werror=format=]
         gprintk(XENLOG_WARNING, "Cannot map runstate pointer at 0x%lx: "
         ^
domain.c:344:13: error: format '%lx' expects argument of type 'long unsigned int', but argument 3 has type 'vaddr_t' [-Werror=format=]
             gprintk(XENLOG_WARNING, "Cannot map runstate pointer at 0x%lx: "
             ^
domain.c:365:9: error: format '%lx' expects argument of type 'long unsigned int', but argument 3 has type 'vaddr_t' [-Werror=format=]
         gprintk(XENLOG_WARNING, "Cannot map runstate pointer at 0x%lx: "
         ^
cc1: all warnings being treated as errors


> +int arch_vcpu_setup_runstate(struct vcpu *v,
> +                             struct vcpu_register_runstate_memory_area area)
> +{
> +    int rc;
> +
> +    spin_lock(&v->arch.runstate_guest_lock);
> +
> +    /* cleanup if we are recalled */
> +    cleanup_runstate_vcpu_locked(v);
> +
> +    rc = setup_runstate_vcpu_locked(v, (vaddr_t)area.addr.v);
> +
> +    spin_unlock(&v->arch.runstate_guest_lock);
> +
> +    return rc;
> +}
> +
> +
> +/* Update per-VCPU guest runstate shared memory area (if registered). */
> +static void update_runstate_area(struct vcpu *v)
> +{
> +    spin_lock(&v->arch.runstate_guest_lock);
> +
> +    if ( v->arch.runstate_guest )
> +    {
> +        if ( VM_ASSIST(v->domain, runstate_update_flag) )
> +        {
> +            v->runstate.state_entry_time |= XEN_RUNSTATE_UPDATE;
> +            v->arch.runstate_guest->state_entry_time |= XEN_RUNSTATE_UPDATE;

Please use write_atomic (as suggested by Julien here:
https://marc.info/?l=xen-devel&m=159225391619240)


> +            smp_wmb();
> +        }
> +
> +        memcpy(v->arch.runstate_guest, &v->runstate, sizeof(v->runstate));
> +
> +        /* copy must be done before switching the bit */
>          smp_wmb();
> -        __raw_copy_to_guest(guest_handle,
> -                            (void *)(&runstate.state_entry_time + 1) - 1, 1);
> +
> +        if ( VM_ASSIST(v->domain, runstate_update_flag) )
> +        {
> +            v->runstate.state_entry_time &= ~XEN_RUNSTATE_UPDATE;
> +            v->arch.runstate_guest->state_entry_time &= ~XEN_RUNSTATE_UPDATE;

and here too

The rest looks OK to me.


> +        }
>      }
> +
> +    spin_unlock(&v->arch.runstate_guest_lock);
>  }
>  
>  static void schedule_tail(struct vcpu *prev)


From xen-devel-bounces@lists.xenproject.org Tue Jul 28 19:06:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jul 2020 19:06:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k0Uvd-0004sO-Hu; Tue, 28 Jul 2020 19:06:21 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=E6Nk=BH=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1k0Uvc-0004ru-CY
 for xen-devel@lists.xenproject.org; Tue, 28 Jul 2020 19:06:20 +0000
X-Inumbo-ID: 684c2450-d105-11ea-8bc5-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 684c2450-d105-11ea-8bc5-bc764e2007e4;
 Tue, 28 Jul 2020 19:06:14 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=SevM+Kb/9xdcYyv8Q9okAtHdHJzc6pVZHKi34+wU4To=; b=wgxpDfbarAnad4Zhq6rtVtVOW
 9yiogJicCcdRbQkRAii1SGMDGa9QXvKVwmE5C0aBi4MM1FvxRua/TGuCn0gbuzcOI9D5fpRnnnNOA
 Az8OB9RRLCHgwHP+7t96P4N0VEGZKXsIkkN9o66wEhXBk0N1XBFdTcOLnqyWCshJZTQgw=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1k0UvV-00089n-T6; Tue, 28 Jul 2020 19:06:13 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1k0UvV-0004yI-Ib; Tue, 28 Jul 2020 19:06:13 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1k0UvV-0002EM-Ht; Tue, 28 Jul 2020 19:06:13 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-152261-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 152261: all pass - PUSHED
X-Osstest-Versions-This: ovmf=3887820e5fecdb9e948f88eb4e92298f6c3dd86f
X-Osstest-Versions-That: ovmf=ffde22468e2f0e93b51f97b801e6c7a181088c61
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 28 Jul 2020 19:06:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 152261 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/152261/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 3887820e5fecdb9e948f88eb4e92298f6c3dd86f
baseline version:
 ovmf                 ffde22468e2f0e93b51f97b801e6c7a181088c61

Last test of basis   152249  2020-07-28 07:04:39 Z    0 days
Testing same since   152261  2020-07-28 12:10:07 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Qi Zhang <qi1.zhang@intel.com>
  Zhang, Qi <qi1.zhang@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   ffde22468e..3887820e5f  3887820e5fecdb9e948f88eb4e92298f6c3dd86f -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Tue Jul 28 19:18:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jul 2020 19:18:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k0V7G-0005rS-SC; Tue, 28 Jul 2020 19:18:22 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Qgq5=BH=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1k0V7F-0005rN-MB
 for xen-devel@lists.xenproject.org; Tue, 28 Jul 2020 19:18:21 +0000
X-Inumbo-ID: 185f6824-d107-11ea-a92c-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 185f6824-d107-11ea-a92c-12813bfff9fa;
 Tue, 28 Jul 2020 19:18:19 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id ED9ADAD5B;
 Tue, 28 Jul 2020 19:18:29 +0000 (UTC)
Subject: Re: [PATCH 1/4] x86: replace __ASM_{CL,ST}AC
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <58b9211a-f6dd-85da-d0bd-c927ac537a5d@suse.com>
 <fc8e042e-fef8-ac38-34d8-16b13e4b0135@suse.com>
 <ea6eeb6d-7af2-97cb-4c11-6e0a81755961@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <9083209c-a5b5-2238-0453-31a730705365@suse.com>
Date: Tue, 28 Jul 2020 21:18:15 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <ea6eeb6d-7af2-97cb-4c11-6e0a81755961@citrix.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 28.07.2020 15:55, Andrew Cooper wrote:
> On 15/07/2020 11:48, Jan Beulich wrote:
>> --- a/xen/arch/x86/arch.mk
>> +++ b/xen/arch/x86/arch.mk
>> @@ -20,6 +20,7 @@ $(call as-option-add,CFLAGS,CC,"rdrand %
>>   $(call as-option-add,CFLAGS,CC,"rdfsbase %rax",-DHAVE_AS_FSGSBASE)
>>   $(call as-option-add,CFLAGS,CC,"xsaveopt (%rax)",-DHAVE_AS_XSAVEOPT)
>>   $(call as-option-add,CFLAGS,CC,"rdseed %eax",-DHAVE_AS_RDSEED)
>> +$(call as-option-add,CFLAGS,CC,"clac",-DHAVE_AS_CLAC_STAC)
> 
> Kconfig please, rather than extending this legacy section.

Did you forget for a moment that we're still to discuss this use of
Kconfig before we extend it to further instances? I'm pretty sure I
gave an ack to one of the respective changes of yours only on the
condition that we'd sort out whether this is indeed the way forward,
without a preset outcome (and without reasoning like "let's do it
because Linux does so").

> That said, surely stac/clac support is old enough for us to start using
> unconditionally?

Can't check right now, but I'm sure I wouldn't have introduced the
construct if we could rely on all supported tool chains to have
support for them.

> Could we see about sorting a reasonable minimum toolchain version,
> before we churn all the logic to deal with obsolete toolchains?

Who's "we" here? I see you keep proposing this every once in a
while, but I don't see who's going to do the work. The main reason
why, while I agree we should bump the base line, I don't see myself
do this is because I don't see any even just half way clear
criteria by which to decide what the new level is supposed to be.
Once again I don't think "let's follow what Linux does" is a
suitable approach.

Additionally I fear that with raising the tool chain base line,
people may start considering raising other minimum versions.
While I'm personally quite fine building my own binutils and gcc
(and maybe a few other pieces), I don't fancy having to rebuild,
say, coreutils just to be able to build Xen.

Maybe a topic for the next community call, which isn't too far
out?

Jan


From xen-devel-bounces@lists.xenproject.org Tue Jul 28 19:25:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jul 2020 19:25:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k0VDg-0006ho-Kj; Tue, 28 Jul 2020 19:25:00 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Qgq5=BH=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1k0VDf-0006hj-Ds
 for xen-devel@lists.xenproject.org; Tue, 28 Jul 2020 19:24:59 +0000
X-Inumbo-ID: 06277650-d108-11ea-8bc6-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 06277650-d108-11ea-8bc6-bc764e2007e4;
 Tue, 28 Jul 2020 19:24:58 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id EF286AD5B;
 Tue, 28 Jul 2020 19:25:08 +0000 (UTC)
Subject: Re: [PATCH 1/4] x86: replace __ASM_{CL,ST}AC
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <58b9211a-f6dd-85da-d0bd-c927ac537a5d@suse.com>
 <fc8e042e-fef8-ac38-34d8-16b13e4b0135@suse.com>
 <20200727145526.GR7191@Air-de-Roger>
 <b29e4b17-8ec2-a0db-8426-94393e9eb2c0@suse.com>
 <868f864b-ae8e-0b01-8cf0-74a0fd3982ee@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <f6b29bdb-71c0-bcc9-d524-f6c8d67fa24b@suse.com>
Date: Tue, 28 Jul 2020 21:24:56 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <868f864b-ae8e-0b01-8cf0-74a0fd3982ee@citrix.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 28.07.2020 15:59, Andrew Cooper wrote:
> On 27/07/2020 20:47, Jan Beulich wrote:
>> On 27.07.2020 16:55, Roger Pau Monné wrote:
>>> On Wed, Jul 15, 2020 at 12:48:14PM +0200, Jan Beulich wrote:
>>>> --- /dev/null
>>>> +++ b/xen/include/asm-x86/asm-defns.h
>>>
>>> Maybe this could be asm-insn.h or a different name? I find it
confusing to have asm-defns.h and an asm_defns.h.
>>
>> While indeed I anticipated a reply to this effect, I don't consider
>> asm-insn.h or asm-macros.h suitable: We don't want to limit this
>> header to a more narrow purpose than "all sorts of definition", I
>> don't think. Hence I chose that name despite its similarity to the
>> C header's one.
> 
> Roger is correct.  Having asm-defns.h and asm_defns.h is too confusing,
> and there is already too much behind the scenes magic here.
> 
> What is the anticipated end result, file wise, because that might
> highlight a better way forward.

For one I'm afraid I don't understand "file wise" here. The one meaning
I could guess can't be it: The name of the file.

And then, "the anticipated end result" is at least ambiguous too: You
can surely see what the file contains by the end of this series, so
again this can't be meant. I have no immediate plans beyond this
series, so I can only state what I did say in reply to Roger's remark
already: "all sorts of asm definitions".

I'd also like to emphasize that asm-defns.h really is a companion of
asm_defns.h, supposed to be included only by the latter (as I think
can be seen from the patches). In this role I think its name being
as similar to its "parent" as suggested makes sufficient sense.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Jul 28 19:33:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jul 2020 19:33:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k0VLq-0007a0-HX; Tue, 28 Jul 2020 19:33:26 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Qgq5=BH=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1k0VLp-0007Zv-Lz
 for xen-devel@lists.xenproject.org; Tue, 28 Jul 2020 19:33:25 +0000
X-Inumbo-ID: 33d952c0-d109-11ea-8bca-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 33d952c0-d109-11ea-8bca-bc764e2007e4;
 Tue, 28 Jul 2020 19:33:24 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 22661AD5B;
 Tue, 28 Jul 2020 19:33:35 +0000 (UTC)
Subject: Re: [PATCH 2/4] x86: reduce CET-SS related #ifdef-ary
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <58b9211a-f6dd-85da-d0bd-c927ac537a5d@suse.com>
 <58615a18-7f81-c000-d499-1a49f4752879@suse.com>
 <5abaf9e1-c7ba-a58c-d735-47430013eb65@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <d3eb260b-e9e8-1178-828e-73eb119a01ba@suse.com>
Date: Tue, 28 Jul 2020 21:33:23 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <5abaf9e1-c7ba-a58c-d735-47430013eb65@citrix.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 28.07.2020 16:29, Andrew Cooper wrote:
> On 15/07/2020 11:48, Jan Beulich wrote:
>> Now that I've done this I'm no longer sure which direction is better to
>> follow: On one hand this introduces dead code (even if just NOPs) into
>> CET-SS-disabled builds. Otoh this is a step towards breaking the tool
>> chain version dependency of the feature.
> 
> The toolchain dependency can't be broken, because of incssp and wrss in C.
> 
> There is 0 value and added complexity to trying to partially support
> legacy toolchains.

Complexity: Yes. Zero value - surely not. I'm having a hard time seeing
why you may think so. Would you mind explaining yourself?

>  Furthermore, this adds a pile of nops into builds
> which have specifically opted out of CONFIG_XEN_SHSTK, which isn't ideal
> for embedded usecases.
> 
> As a consequence, I think it's better to keep things consistent with how
> they are now.
> 
> One thing I already considered was to make cpu_has_xen_shstk return
> false for !CONFIG_XEN_SHSTK, which subsumes at least one hunk in this
> change.

One is better than nothing, but still pretty little.

>> --- a/xen/arch/x86/x86_64/compat/entry.S
>> +++ b/xen/arch/x86/x86_64/compat/entry.S
>> @@ -198,9 +198,7 @@ ENTRY(cr4_pv32_restore)
>>   
>>   /* See lstar_enter for entry register state. */
>>   ENTRY(cstar_enter)
>> -#ifdef CONFIG_XEN_SHSTK
>>           ALTERNATIVE "", "setssbsy", X86_FEATURE_XEN_SHSTK
>> -#endif
> 
> I can't currently think of any option better than leaving these ifdef's
> in place, other than perhaps
> 
> #ifdef CONFIG_XEN_SHSTK
> # define MAYBE_SETSSBSY ALTERNATIVE "", "setssbsy", X86_FEATURE_XEN_SHSTK
> #else
> # define MAYBE_SETSSBSY
> #endif
> 
> and I don't like it much.

Neither do I. Then we'd also switch STAC/CLAC to MAYBE_STAC / MAYBE_CLAC.

> The thing is that everything present there is semantically relevant
> information, and dropping it makes the code worse rather than better.

Everything? I don't see why the #ifdef-s are semantically relevant
(the needed info is already conveyed by the ALTERNATIVE and its
arguments). I consider them primarily harming readability, and thus I
think we should strive to eliminate them if we can. Hence this patch
...

Jan


From xen-devel-bounces@lists.xenproject.org Tue Jul 28 19:41:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jul 2020 19:41:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k0VTy-0008St-IM; Tue, 28 Jul 2020 19:41:50 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Qgq5=BH=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1k0VTx-0008So-6A
 for xen-devel@lists.xenproject.org; Tue, 28 Jul 2020 19:41:49 +0000
X-Inumbo-ID: 5fd127f8-d10a-11ea-a92f-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 5fd127f8-d10a-11ea-a92f-12813bfff9fa;
 Tue, 28 Jul 2020 19:41:48 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 64576AD7A;
 Tue, 28 Jul 2020 19:41:58 +0000 (UTC)
Subject: Re: [PATCH 3/4] x86: drop ASM_{CL,ST}AC
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <58b9211a-f6dd-85da-d0bd-c927ac537a5d@suse.com>
 <048c3702-f0b0-6f8e-341e-bec6cfaded27@suse.com>
 <07750e83-6b9d-a88d-856b-20db4f63fd11@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <9f60b140-a790-43af-122d-1516190af2d9@suse.com>
Date: Tue, 28 Jul 2020 21:41:46 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <07750e83-6b9d-a88d-856b-20db4f63fd11@citrix.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 28.07.2020 16:51, Andrew Cooper wrote:
> On 15/07/2020 11:49, Jan Beulich wrote:
>> Use ALTERNATIVE directly, such that at the use sites it is visible that
>> alternative code patching is in use. Similarly avoid hiding the fact in
>> SAVE_ALL.
>>
>> No change to generated code.
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> Definitely +1 to not hiding the STAC/CLAC in SAVE_ALL.  I've been
> meaning to undo that mistake for ages.
> 
> OOI, what made you change your mind?  I'm pleased that you have.

This, to me, is a direct consequence of dropping ASM_STAC / ASM_CLAC:
if we don't want the use of ALTERNATIVE to be hidden there, it also
shouldn't be hidden inside SAVE_ALL.
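For illustration, a sketch of what a use site looks like once the wrapper
macros are gone (the exact macro spelling and feature name are
approximations for this sketch, not quoted from the patch):

```asm
    /* Before: the alternative patch point is hidden behind a wrapper. */
    ASM_STAC

    /* After: ALTERNATIVE is visible at the use site; the stac/clac
     * instruction is only patched in when Xen itself uses SMAP. */
    ALTERNATIVE "", stac, X86_FEATURE_XEN_SMAP
    /* ... user accesses ... */
    ALTERNATIVE "", clac, X86_FEATURE_XEN_SMAP
```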

>> --- a/xen/arch/x86/traps.c
>> +++ b/xen/arch/x86/traps.c
>> @@ -2165,9 +2165,9 @@ void activate_debugregs(const struct vcp
>>   void asm_domain_crash_synchronous(unsigned long addr)
>>   {
>>       /*
>> -     * We need clear AC bit here because in entry.S AC is set
>> -     * by ASM_STAC to temporarily allow accesses to user pages
>> -     * which is prevented by SMAP by default.
>> +     * We need to clear AC bit here because in entry.S it gets set to
>> +     * temporarily allow accesses to user pages which is prevented by
>> +     * SMAP by default.
> 
> As you're adjusting the text, it should read "We need to clear the AC
> bit ..."
> 
> But I also think it would be clearer to say that exception fixup may
> leave user access enabled, which we fix up here by unconditionally
> disabling user access.

Can do.

> Preferably with this rewritten, Reviewed-by: Andrew Cooper
> <andrew.cooper3@citrix.com>

Thanks.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Jul 28 19:54:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jul 2020 19:54:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k0VgN-0000yl-R9; Tue, 28 Jul 2020 19:54:39 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Qgq5=BH=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1k0VgM-0000yg-Is
 for xen-devel@lists.xenproject.org; Tue, 28 Jul 2020 19:54:38 +0000
X-Inumbo-ID: 2a6babeb-d10c-11ea-8bce-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2a6babeb-d10c-11ea-8bce-bc764e2007e4;
 Tue, 28 Jul 2020 19:54:37 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id B9396AD6F;
 Tue, 28 Jul 2020 19:54:47 +0000 (UTC)
Subject: Re: [PATCH v2] xen/arm: Convert runstate address during hypcall
To: Bertrand Marquis <bertrand.marquis@arm.com>
References: <4647a019c7b42d40d3c2f5b0a3685954bea7f982.1595948219.git.bertrand.marquis@arm.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <8d2d7f03-450c-d50c-630b-8608c6d42bb9@suse.com>
Date: Tue, 28 Jul 2020 21:54:35 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <4647a019c7b42d40d3c2f5b0a3685954bea7f982.1595948219.git.bertrand.marquis@arm.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, xen-devel@lists.xenproject.org,
 nd@arm.com, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 28.07.2020 17:52, Bertrand Marquis wrote:
> At the moment on Arm, a Linux guest running with KPTI enabled will
> cause the following error when a context switch happens in user mode:
> (XEN) p2m.c:1890: d1v0: Failed to walk page-table va 0xffffff837ebe0cd0
> 
> The error is caused by the fact that, with KPTI enabled, the virtual
> address of the runstate area registered by the guest is only accessible
> while the guest is running in kernel space.
> 
> To solve this issue, this patch does the translation from virtual
> address to physical address during the hypercall and maps the
> required pages using vmap. This removes the virtual-to-physical
> conversion during the context switch, which solves the problem with
> KPTI.
> 
> This is done only on the Arm architecture; the behaviour on x86 is
> not modified by this patch, and the address conversion is still done
> during each context switch.
> 
> This introduces several limitations in comparison to the previous
> behaviour (on Arm only):
> - if the guest remaps the area to a different physical address, Xen
> will continue to update the area at the previous physical address. As
> the area is in kernel space and usually defined as a global variable,
> this is believed not to happen. If a guest requires this, it will
> have to call the hypercall again with the new area (even if it is at
> the same virtual address).
> - the area needs to be mapped during the hypercall, for the same
> reasons as in the previous case, even if the area is registered for a
> different vcpu. Registering an area through an unmapped virtual
> address is believed not to be done in practice.

Besides my view that an in-use and stable ABI can't be changed like
this, no matter what kernel code is "believed" to do or not do, I
also don't think having architectures diverge in behavior here is a
good idea. Use of commonly available interfaces shouldn't lead to
headaches or surprises when porting code from one architecture to
another. I'm pretty sure it was suggested before: why don't you
simply introduce a physical-address-based hypercall (and then also on
x86 at the same time, keeping functional parity)? I even seem to
recall giving a suggestion for how to fit this into a future
"physical addresses only" model, as long as we can settle on the
basic principles of that conversion path, which (as I understand) we
want to go down sooner or later anyway.

> --- a/xen/arch/x86/domain.c
> +++ b/xen/arch/x86/domain.c
> @@ -1642,6 +1642,30 @@ void paravirt_ctxt_switch_to(struct vcpu *v)
>           wrmsr_tsc_aux(v->arch.msrs->tsc_aux);
>   }
>   
> +int arch_vcpu_setup_runstate(struct vcpu *v,
> +                             struct vcpu_register_runstate_memory_area area)
> +{
> +    struct vcpu_runstate_info runstate;
> +
> +    runstate_guest(v) = area.addr.h;
> +
> +    if ( v == current )
> +    {
> +        __copy_to_guest(runstate_guest(v), &v->runstate, 1);
> +    }

Pointless braces (and I think there are more instances).

> +    else
> +    {
> +        vcpu_runstate_get(v, &runstate);
> +        __copy_to_guest(runstate_guest(v), &runstate, 1);
> +    }
> +    return 0;

Missing blank line before main "return".

Jan


From xen-devel-bounces@lists.xenproject.org Tue Jul 28 20:00:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jul 2020 20:00:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k0Vln-0001sV-JE; Tue, 28 Jul 2020 20:00:15 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Qgq5=BH=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1k0Vlm-0001sQ-Sd
 for xen-devel@lists.xenproject.org; Tue, 28 Jul 2020 20:00:14 +0000
X-Inumbo-ID: f2f3220a-d10c-11ea-8bd2-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f2f3220a-d10c-11ea-8bd2-bc764e2007e4;
 Tue, 28 Jul 2020 20:00:13 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 3DCE0ABD2;
 Tue, 28 Jul 2020 20:00:24 +0000 (UTC)
Subject: Re: fwupd support under Xen - firmware updates with the UEFI capsule
To: Norbert Kaminski <norbert.kaminski@3mdeb.com>
References: <497f1524-b57e-0ea1-5899-62f677bfae91@3mdeb.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <39be665c-b6c8-23e3-b18b-d38cfe5c1286@suse.com>
Date: Tue, 28 Jul 2020 22:00:12 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <497f1524-b57e-0ea1-5899-62f677bfae91@3mdeb.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, marmarek@invisiblethingslab.com,
 Maciej Pijanowski <maciej.pijanowski@3mdeb.com>, piotr.krol@3mdeb.com,
 andrew.cooper3@citrix.com
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 28.07.2020 09:41, Norbert Kaminski wrote:
> I'm trying to add support for firmware updates with the UEFI capsule
> in Qubes OS. I've run into trouble reading the ESRT (EFI System
> Resource Table) in dom0, which is based on the EFI memory map.
> EFI_MEMMAP is not enabled despite the enabled config options
> (CONFIG_EFI, CONFIG_EFI_ESRT) and kernel cmdline parameters
> (add_efi_memmap):
> 
> ```
> [    3.451249] efi: EFI_MEMMAP is not enabled.
> ```

It is, according to my understanding, a layering violation to expose
the EFI memory map to Dom0. It's not supposed to make use of this
information in any way. Hence any functionality depending on its use
also needs to be implemented in the hypervisor, with Dom0 making a
suitable hypercall to access this functionality. (And I find it
quite natural to expect that Xen gets involved in an update of the
firmware of a system.)

Jan


From xen-devel-bounces@lists.xenproject.org Tue Jul 28 21:02:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jul 2020 21:02:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k0WjE-0006t6-FU; Tue, 28 Jul 2020 21:01:40 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=K5Bo=BH=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1k0WjD-0006t1-14
 for xen-devel@lists.xenproject.org; Tue, 28 Jul 2020 21:01:39 +0000
X-Inumbo-ID: 86a91aec-d115-11ea-a945-12813bfff9fa
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 86a91aec-d115-11ea-a945-12813bfff9fa;
 Tue, 28 Jul 2020 21:01:37 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1595970098;
 h=subject:to:cc:references:from:message-id:date:
 mime-version:in-reply-to:content-transfer-encoding;
 bh=i031syynmeL6Zsizw4CYlaroB+FVuMeN3cNPF18XDto=;
 b=Sm9jrWBBK5Ao8hHjUADCC3soflhmWyAqHBjuWt9WX7tTVMenYFNRZAhn
 SRja8sZskYwSST/qvMwNOThnPZiO4M10L0Lhckqak/+7k2Zcsj7yWConP
 bYaR3xV14y/XMtIPV3lcxkz8/BtACr/LEPkrV8B3tdcq6Z0xpQn/4xdoL 0=;
Authentication-Results: esa3.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: dOjogrFoiwK0EjOIQEmM6WCIhV55c7wkfHSZGRm/eaHMRWvh0soA2pC+3y3CAg22U0klhsKdMH
 uQk509QW63fTAAbAmn4XVoiM/bp6nkpeuLceasB1H6nQt1SQWCFw4SSYnCAhxXdgVuqy8J9bcE
 SJsTdHddQkpn/0W1KlsB2WovVYkaLuy2rycJOOq9GXXyhLiMMGIRdHpeFPOI18EJDuGjkX9VQA
 CHN5YwjN+yl4T26SknO1gBNeG93X6/7c9hJ0Mrm0+Ey4P/ZmmNrOBs9nu+y++5Y3t+SWt2pzNq
 dXw=
X-SBRS: 2.7
X-MesageID: 23383797
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,407,1589256000"; d="scan'208";a="23383797"
Subject: Re: fwupd support under Xen - firmware updates with the UEFI capsule
To: Jan Beulich <jbeulich@suse.com>, Norbert Kaminski
 <norbert.kaminski@3mdeb.com>
References: <497f1524-b57e-0ea1-5899-62f677bfae91@3mdeb.com>
 <39be665c-b6c8-23e3-b18b-d38cfe5c1286@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <bbe85f76-0999-1150-3d48-c7f9e1796dac@citrix.com>
Date: Tue, 28 Jul 2020 22:01:33 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <39be665c-b6c8-23e3-b18b-d38cfe5c1286@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org,
 Maciej Pijanowski <maciej.pijanowski@3mdeb.com>, piotr.krol@3mdeb.com,
 marmarek@invisiblethingslab.com
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 28/07/2020 21:00, Jan Beulich wrote:
> On 28.07.2020 09:41, Norbert Kaminski wrote:
>> I'm trying to add support for firmware updates with the UEFI capsule
>> in Qubes OS. I've run into trouble reading the ESRT (EFI System
>> Resource Table) in dom0, which is based on the EFI memory map.
>> EFI_MEMMAP is not enabled despite the enabled config options
>> (CONFIG_EFI, CONFIG_EFI_ESRT) and kernel cmdline parameters
>> (add_efi_memmap):
>>
>> ```
>> [    3.451249] efi: EFI_MEMMAP is not enabled.
>> ```
>
> It is, according to my understanding, a layering violation to expose
> the EFI memory map to Dom0. It's not supposed to make use of this
> information in any way. Hence any functionality depending on its use
> also needs to be implemented in the hypervisor, with Dom0 making a
> suitable hypercall to access this functionality. (And I find it
> quite natural to expect that Xen gets involved in an update of the
> firmware of a system.)

The ESRT is a table (read-only, by the looks of things) which is a
catalogue of various bits of firmware in the system, including GUIDs
for identification, and version information.

It is the kind of data which the hardware domain should have access
to, and AFAICT, it behaves just like a static ACPI table.

Presumably it wants to be in an E820-reserved region so dom0 gets
direct access, and something in the EFI subsystem needs extending to
pass the ESRT address to dom0.

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue Jul 28 22:03:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jul 2020 22:03:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k0XgV-0003UP-1l; Tue, 28 Jul 2020 22:02:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=E6Nk=BH=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1k0XgT-0003UK-Va
 for xen-devel@lists.xenproject.org; Tue, 28 Jul 2020 22:02:54 +0000
X-Inumbo-ID: 149aee86-d11e-11ea-8be5-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 149aee86-d11e-11ea-8be5-bc764e2007e4;
 Tue, 28 Jul 2020 22:02:51 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=rXK/F4yvm5yHLsGkTMo1DdSmSsnF7/d7WFLnENXbnLA=; b=CGxM/CLthIqQMEI5aXx50Q8ZT
 kgGErao47N45AKOuV4k1nb1YnIprLVUti89TdozVkdsiRIZ9aR0WYbQIVW5AEqe6MXWV2X2N/jn7U
 cCNpr9HC8gVH7Opef7HsbnNjH2s9t8qufjV2Iom1H80tWveTN0kRpIiVS7z/zLF97Pihg=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1k0XgQ-0003TN-SB; Tue, 28 Jul 2020 22:02:50 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1k0XgQ-0000Ip-Ex; Tue, 28 Jul 2020 22:02:50 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1k0XgQ-0005P1-E4; Tue, 28 Jul 2020 22:02:50 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-152246-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 152246: regressions - FAIL
X-Osstest-Failures: linux-linus:test-arm64-arm64-libvirt-xsm:guest-start/debian.repeat:fail:regression
 linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:guest-start/redhat.repeat:fail:heisenbug
 linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
 linux-linus:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:heisenbug
 linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: linux=92ed301919932f777713b9172e525674157e983d
X-Osstest-Versions-That: linux=1b5044021070efa3259f3e9548dc35d1eb6aa844
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 28 Jul 2020 22:02:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 152246 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/152246/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-libvirt-xsm 16 guest-start/debian.repeat fail REGR. vs. 151214

Tests which are failing intermittently (not blocking):
 test-amd64-i386-qemut-rhel6hvm-intel 12 guest-start/redhat.repeat fail in 152232 pass in 152246
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 10 debian-hvm-install fail pass in 152232
 test-armhf-armhf-xl-rtds     16 guest-start/debian.repeat  fail pass in 152232

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 151214
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 151214
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 151214
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 151214
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 151214
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 151214
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 151214
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 151214
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                92ed301919932f777713b9172e525674157e983d
baseline version:
 linux                1b5044021070efa3259f3e9548dc35d1eb6aa844

Last test of basis   151214  2020-06-18 02:27:46 Z   40 days
Failing since        151236  2020-06-19 19:10:35 Z   39 days   61 attempts
Testing same since   152223  2020-07-27 00:11:45 Z    1 days    3 attempts

------------------------------------------------------------
932 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 53544 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Jul 28 22:17:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jul 2020 22:17:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k0Xu1-0004Tm-Dj; Tue, 28 Jul 2020 22:16:53 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=bBlf=BH=invisiblethingslab.com=marmarek@srs-us1.protection.inumbo.net>)
 id 1k0Xu0-0004Th-SY
 for xen-devel@lists.xenproject.org; Tue, 28 Jul 2020 22:16:53 +0000
X-Inumbo-ID: 093b26a8-d120-11ea-8be7-bc764e2007e4
Received: from wout3-smtp.messagingengine.com (unknown [64.147.123.19])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 093b26a8-d120-11ea-8be7-bc764e2007e4;
 Tue, 28 Jul 2020 22:16:51 +0000 (UTC)
Received: from compute3.internal (compute3.nyi.internal [10.202.2.43])
 by mailout.west.internal (Postfix) with ESMTP id 522667F8;
 Tue, 28 Jul 2020 18:16:50 -0400 (EDT)
Received: from mailfrontend1 ([10.202.2.162])
 by compute3.internal (MEProxy); Tue, 28 Jul 2020 18:16:50 -0400
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
 messagingengine.com; h=cc:content-type:date:from:in-reply-to
 :message-id:mime-version:references:subject:to:x-me-proxy
 :x-me-proxy:x-me-sender:x-me-sender:x-sasl-enc; s=fm3; bh=NJE5iT
 bRAUSREXOKgGh/V5hHtyKpPpr+YxsVQQIVXfw=; b=VAwexPhjsAYtJl+3RV8sxf
 W6L9ExR0YsPrABe6GY4zkf0pT2fj3NKMCPldVBVjlCXuamZ49UTrau8n5fwu3OFn
 HT0WIs+4KKd6bzYkwk9QmjN2U4rIvTjBavaoMXsqoouI02FkQDZYwo2c/4SmKql6
 MOWIDi4Q3fdcet40QZNj8IHaBEkRForUlhLR2cckRLoBsLxGtPitfgB1ztyzUa/n
 nBy+qf9PoFUz0F8kTDh3g/ZeP+eHGkZb8A4vHoFjPi/Wi8RrVV+YRHXy+fC6Qvfg
 JZntbHlpdbMNN5n7gcWAjb+bskOOl/oMFqfhdSfEY0lCbxhEVVaKttrvOi5lHPMg
 ==
X-ME-Sender: <xms:0aMgX_rxB4W3dg_swZGOt22JfWo-RoilDBPkmS_za_1U19oZ9Y3tDA>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgeduiedrieefgddtjecutefuodetggdotefrodftvf
 curfhrohhfihhlvgemucfhrghsthforghilhdpqfgfvfdpuffrtefokffrpgfnqfghnecu
 uegrihhlohhuthemuceftddtnecusecvtfgvtghiphhivghnthhsucdlqddutddtmdenuc
 fjughrpeffhffvuffkfhggtggujgesghdtreertddtjeenucfhrhhomhepofgrrhgvkhcu
 ofgrrhgtiiihkhhofihskhhiqdfikphrvggtkhhiuceomhgrrhhmrghrvghksehinhhvih
 hsihgslhgvthhhihhnghhslhgrsgdrtghomheqnecuggftrfgrthhtvghrnhepteevffei
 gffhkefhgfegfeffhfegveeikeettdfhheevieehieeitddugeefteffnecukfhppeelud
 drieehrdefgedrfeefnecuvehluhhsthgvrhfuihiivgeptdenucfrrghrrghmpehmrghi
 lhhfrhhomhepmhgrrhhmrghrvghksehinhhvihhsihgslhgvthhhihhnghhslhgrsgdrtg
 homh
X-ME-Proxy: <xmx:0aMgX5q_Nu4lUOz7HRbxefy9MKBiwyFesYjYoCYglKnesYM0KKpZFw>
 <xmx:0aMgX8NIzMIwLIBwnUU9TloPGpTNPfvylJIaX1rEuusyTEuncQS0mQ>
 <xmx:0aMgXy6ZAe0KL6Y8anKJdFLOYwSQZYBxEUfhn8pueXnA8RoYjZL-Aw>
 <xmx:0aMgX0RcVVylnKxWO8Jv56uzckTjdin5-cQ8Fjfnyj7aMmcPq80crQ>
Received: from mail-itl (ip5b412221.dynamic.kabel-deutschland.de [91.65.34.33])
 by mail.messagingengine.com (Postfix) with ESMTPA id 379513280060;
 Tue, 28 Jul 2020 18:16:48 -0400 (EDT)
Date: Wed, 29 Jul 2020 00:16:45 +0200
From: Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?=
 <marmarek@invisiblethingslab.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: fwupd support under Xen - firmware updates with the UEFI capsule
Message-ID: <20200728221645.GO1626@mail-itl>
References: <497f1524-b57e-0ea1-5899-62f677bfae91@3mdeb.com>
 <39be665c-b6c8-23e3-b18b-d38cfe5c1286@suse.com>
 <bbe85f76-0999-1150-3d48-c7f9e1796dac@citrix.com>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature"; boundary="QxN5xOWGsmh5a4wb"
Content-Disposition: inline
In-Reply-To: <bbe85f76-0999-1150-3d48-c7f9e1796dac@citrix.com>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: piotr.krol@3mdeb.com, xen-devel@lists.xenproject.org,
 Maciej Pijanowski <maciej.pijanowski@3mdeb.com>,
 Jan Beulich <jbeulich@suse.com>, Norbert Kaminski <norbert.kaminski@3mdeb.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>


--QxN5xOWGsmh5a4wb
Content-Type: text/plain; protected-headers=v1; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable
Subject: Re: fwupd support under Xen - firmware updates with the UEFI capsule

On Tue, Jul 28, 2020 at 10:01:33PM +0100, Andrew Cooper wrote:
> On 28/07/2020 21:00, Jan Beulich wrote:
> > On 28.07.2020 09:41, Norbert Kaminski wrote:
> >> I'm trying to add support for the firmware updates with the UEFI
> >> capsule in
> >> Qubes OS. I've got the troubles with reading ESRT (EFI System
> >> Resource Table)
> >> in the dom0, which is based on the EFI memory map. The EFI_MEMMAP is not
> >> enabled despite the loaded drivers (CONFIG_EFI, CONFIG_EFI_ESRT) and
> >> kernel
> >> cmdline parameters (add_efi_memmap):
> >>
> >> ```
> >> [    3.451249] efi: EFI_MEMMAP is not enabled.
> >> ```
> >
> > It is, according to my understanding, a layering violation to expose
> > the EFI memory map to Dom0. It's not supposed to make use of this
> > information in any way. Hence any functionality depending on its use
> > also needs to be implemented in the hypervisor, with Dom0 making a
> > suitable hypercall to access this functionality. (And I find it
> > quite natural to expect that Xen gets involved in an update of the
> > firmware of a system.)
> 
> ESRT is a table (read-only by the looks of things) which is a catalogue
> of various bits of firmware in the system, including GUIDs for
> identification, and version information.
> 
> It is the kind of data which the hardware domain should have access to,
> and AFAICT, behaves just like a static ACPI table.
> 
> Presumably it wants to be an E820 reserved region so dom0 gets direct
> access, and something in the EFI subsystem needs extending to pass the
> ESRT address to dom0.
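For reference, the table under discussion is small and self-describing. A minimal C sketch of its layout (taken from the UEFI specification's ESRT definition, not code from this thread; names and v1 entry sizes are per that spec) shows how little the driver actually needs to map:

```c
/* Hypothetical sketch of the EFI System Resource Table layout (UEFI spec,
 * v1 entries).  Not kernel code; illustrates the header-plus-entries shape
 * that drivers/firmware/efi/esrt.c maps. */
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

struct esrt_entry_v1 {
	uint8_t  fw_class[16];               /* GUID identifying the firmware */
	uint32_t fw_type;
	uint32_t fw_version;
	uint32_t lowest_supported_fw_version;
	uint32_t capsule_flags;
	uint32_t last_attempt_version;
	uint32_t last_attempt_status;
};

struct esrt_header {
	uint32_t fw_resource_count;
	uint32_t fw_resource_count_max;
	uint64_t fw_resource_version;
	/* followed by fw_resource_count entries */
};

/* Total bytes to map: fixed header plus one v1 entry per resource. */
static size_t esrt_total_size(uint32_t count)
{
	return sizeof(struct esrt_header) +
	       (size_t)count * sizeof(struct esrt_entry_v1);
}
```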

I think most (if not all) pieces in Xen are already there - there is
XENPF_firmware_info with XEN_FW_EFI_INFO + XEN_FW_EFI_CONFIG_TABLE,
which gives the address of the EFI config table. Linux saves it in
efi_systab_xen.tables (arch/x86/xen/efi.c:xen_efi_probe()).
I haven't figured out yet whether it does anything with that information,
but the content of /sys/firmware/efi/systab suggests it does.

It seems the ESRT driver in Linux uses the memmap just for some sanity
checks (whether the ESRT points at memory with EFI_MEMORY_RUNTIME and an
appropriate type). Perhaps the check (if really necessary) could be added
to Xen, and simply skipped in Linux in the dom0 case.
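The resulting control flow can be modeled in a few lines of plain C (a hypothetical simplification for illustration, not kernel code): when EFI_PARAVIRT is set, the memmap-derived sanity checks are skipped rather than aborting ESRT initialization.

```c
/* Hypothetical model of the gating that the patch below introduces in
 * efi_esrt_init(): on a Xen dom0 (EFI_PARAVIRT) the EFI memmap is
 * unavailable, so the memmap-based checks are bypassed instead of
 * causing an early return. */
#include <assert.h>
#include <stdbool.h>

/* Returns true when ESRT initialization should proceed. */
static bool esrt_should_init(bool have_memmap, bool paravirt,
                             bool table_exists, bool memmap_checks_ok)
{
	if (!have_memmap && !paravirt)
		return false;   /* no way to locate/validate the table */
	if (!table_exists)
		return false;   /* firmware published no ESRT */
	if (!paravirt && !memmap_checks_ok)
		return false;   /* native boot: memmap checks still apply */
	return true;            /* dom0: trust Xen, skip the checks */
}
```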

Norbert, if you're brave enough ;) I would suggest trying the (Linux)
patch below:

-----8<-----
diff --git a/drivers/firmware/efi/esrt.c b/drivers/firmware/efi/esrt.c
index e3d692696583..a2a5ccbb00a8 100644
--- a/drivers/firmware/efi/esrt.c
+++ b/drivers/firmware/efi/esrt.c
@@ -245,13 +245,14 @@ void __init efi_esrt_init(void)
 	int rc;
 	phys_addr_t end;
 
-	if (!efi_enabled(EFI_MEMMAP))
+	if (!efi_enabled(EFI_MEMMAP) && !efi_enabled(EFI_PARAVIRT))
 		return;
 
 	pr_debug("esrt-init: loading.\n");
 	if (!esrt_table_exists())
 		return;
 
+	if (!efi_enabled(EFI_PARAVIRT)) {
 	rc = efi_mem_desc_lookup(efi.esrt, &md);
 	if (rc < 0 ||
 	    (!(md.attribute & EFI_MEMORY_RUNTIME) &&
@@ -276,6 +277,7 @@ void __init efi_esrt_init(void)
 		       size, max);
 		return;
 	}
+	}
 
 	va = early_memremap(efi.esrt, size);
 	if (!va) {
@@ -331,7 +333,8 @@ void __init efi_esrt_init(void)
 
 	end = esrt_data + size;
 	pr_info("Reserving ESRT space from %pa to %pa.\n", &esrt_data, &end);
-	if (md.type == EFI_BOOT_SERVICES_DATA)
+
+	if (!efi_enabled(EFI_PARAVIRT) && md.type == EFI_BOOT_SERVICES_DATA)
 		efi_mem_reserve(esrt_data, esrt_data_size);
 
 	pr_debug("esrt-init: loaded.\n");
----8<-----


-- 
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?

--QxN5xOWGsmh5a4wb
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQEzBAEBCAAdFiEEhrpukzGPukRmQqkK24/THMrX1ywFAl8go80ACgkQ24/THMrX
1yxHfgf/Vba4UEFX2tdHVYoNTsDpl0iM6Z3iYuX8dpbhaZoTUIjEJNmxPIW9xh3t
kmMbaCSh4ovvwKI1ltbwoayclM+k1IFc655whzcM9NZLVF2mSsYbOPzK8K7348Um
miSjjv5fkB+dRlXg68FZPGQpKnXDa8oR3UoVUNCG0B1ezfY7gVA92TdempMgp7C2
hBaHulUhcNT7bAc0+NBpO+kvGcO0yQnCB29LI0V9RDmyP0oyhdZ6bbtWuTnpDv/W
YIhxMiN0Qqyq1rFKdYaX4lrnJes0PQDupJhLFTZQjY57l9U8h7nBeJosep1C1SvS
LYPnAOw+IFXjfTzixD9zv0iU7QZ/bQ==
=2EKX
-----END PGP SIGNATURE-----

--QxN5xOWGsmh5a4wb--


From xen-devel-bounces@lists.xenproject.org Tue Jul 28 22:19:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 28 Jul 2020 22:19:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k0XwB-0004bR-RC; Tue, 28 Jul 2020 22:19:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=E6Nk=BH=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1k0XwA-0004bL-EE
 for xen-devel@lists.xenproject.org; Tue, 28 Jul 2020 22:19:06 +0000
X-Inumbo-ID: 590ad700-d120-11ea-8be7-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 590ad700-d120-11ea-8be7-bc764e2007e4;
 Tue, 28 Jul 2020 22:19:05 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=FbFIRNa7iDXbVzHBgt08hGgncB4rnfJyV7mrkrO/Nms=; b=x/lkqFw4UjbJ4SnyHYaBb6YEP
 zMQiMdu3dLDNny/epVtg/woGTeNe3lEvfTApFhppfLo9aF4JpAcEAnzuQl0yvr7FqUGa6eTvZAfNd
 +D72I5pM2CkeCqXco8PKKB6oIZ/xbkXy1KQ08+xcqW+/RtAKhV0BTA96bMX9q/pzAZdCo=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1k0Xw8-0003nH-SA; Tue, 28 Jul 2020 22:19:04 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1k0Xw8-0001JA-Ff; Tue, 28 Jul 2020 22:19:04 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1k0Xw8-0000cl-F4; Tue, 28 Jul 2020 22:19:04 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-152269-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 152269: tolerable all pass - PUSHED
X-Osstest-Failures: xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=b071ec25e85c4aacf3da59e5258cda0b1c4df45d
X-Osstest-Versions-That: xen=c27a184225eab54d20435c8cab5ad0ef384dc2c0
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 28 Jul 2020 22:19:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 152269 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/152269/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  b071ec25e85c4aacf3da59e5258cda0b1c4df45d
baseline version:
 xen                  c27a184225eab54d20435c8cab5ad0ef384dc2c0

Last test of basis   152235  2020-07-27 14:10:29 Z    1 days
Testing same since   152269  2020-07-28 19:05:32 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Julien Grall <julien.grall@arm.com>
  Roger Pau Monne <roger.pau@citrix.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Tim Deegan <tim@xen.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   c27a184225..b071ec25e8  b071ec25e85c4aacf3da59e5258cda0b1c4df45d -> smoke


From xen-devel-bounces@lists.xenproject.org Wed Jul 29 00:21:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jul 2020 00:21:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k0Zpi-0007Ke-O3; Wed, 29 Jul 2020 00:20:34 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=uy70=BI=arm.com=andre.przywara@srs-us1.protection.inumbo.net>)
 id 1k0Zph-0007KZ-6Q
 for xen-devel@lists.xenproject.org; Wed, 29 Jul 2020 00:20:33 +0000
X-Inumbo-ID: 4fac5cd6-d131-11ea-a968-12813bfff9fa
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 4fac5cd6-d131-11ea-a968-12813bfff9fa;
 Wed, 29 Jul 2020 00:20:31 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 8B93A1FB;
 Tue, 28 Jul 2020 17:20:30 -0700 (PDT)
Received: from [192.168.2.22] (unknown [172.31.20.19])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id A9A513F71F;
 Tue, 28 Jul 2020 17:20:29 -0700 (PDT)
From: =?UTF-8?Q?Andr=c3=a9_Przywara?= <andre.przywara@arm.com>
Subject: Re: dom0 Linux 5.8-rc5 kernel failing to initialize cooling maps for
 Allwinner H6 SoC
To: Christopher Clark <christopher.w.clark@gmail.com>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <CA+wirGqXMoRkS-aJmfFLipUv8SdY5LKV1aMrF0yKRJQaMvzs6Q@mail.gmail.com>
 <1c5cee83-295e-cc02-d018-b53ddc6e3590@xen.org>
 <CA+wirGpFvLBzYRBaq8yspJj8j9-yoLwN88bt079qM5yqPTwtcA@mail.gmail.com>
 <02b630bd-22e0-afde-6784-be068d0948ae@arm.com>
 <CA+wirGoG+im2mwb2ye6j4MpcVtfQ-prhhmVgdBTosus7hjeu=w@mail.gmail.com>
 <e091c32f-d121-d549-a2fa-f906d28ff8f1@arm.com>
 <alpine.DEB.2.21.2007281054520.646@sstabellini-ThinkPad-T480s>
 <CACMJ4GYWBNV5O4otbDj2Lx3Qq6sFPWm8bX4CRABEU3g1izQraQ@mail.gmail.com>
Autocrypt: addr=andre.przywara@arm.com; prefer-encrypt=mutual; keydata=
 xsFNBFNPCKMBEAC+6GVcuP9ri8r+gg2fHZDedOmFRZPtcrMMF2Cx6KrTUT0YEISsqPoJTKld
 tPfEG0KnRL9CWvftyHseWTnU2Gi7hKNwhRkC0oBL5Er2hhNpoi8x4VcsxQ6bHG5/dA7ctvL6
 kYvKAZw4X2Y3GTbAZIOLf+leNPiF9175S8pvqMPi0qu67RWZD5H/uT/TfLpvmmOlRzNiXMBm
 kGvewkBpL3R2clHquv7pB6KLoY3uvjFhZfEedqSqTwBVu/JVZZO7tvYCJPfyY5JG9+BjPmr+
 REe2gS6w/4DJ4D8oMWKoY3r6ZpHx3YS2hWZFUYiCYovPxfj5+bOr78sg3JleEd0OB0yYtzTT
 esiNlQpCo0oOevwHR+jUiaZevM4xCyt23L2G+euzdRsUZcK/M6qYf41Dy6Afqa+PxgMEiDto
 ITEH3Dv+zfzwdeqCuNU0VOGrQZs/vrKOUmU/QDlYL7G8OIg5Ekheq4N+Ay+3EYCROXkstQnf
 YYxRn5F1oeVeqoh1LgGH7YN9H9LeIajwBD8OgiZDVsmb67DdF6EQtklH0ycBcVodG1zTCfqM
 AavYMfhldNMBg4vaLh0cJ/3ZXZNIyDlV372GmxSJJiidxDm7E1PkgdfCnHk+pD8YeITmSNyb
 7qeU08Hqqh4ui8SSeUp7+yie9zBhJB5vVBJoO5D0MikZAODIDwARAQABzS1BbmRyZSBQcnp5
 d2FyYSAoQVJNKSA8YW5kcmUucHJ6eXdhcmFAYXJtLmNvbT7CwXsEEwECACUCGwMGCwkIBwMC
 BhUIAgkKCwQWAgMBAh4BAheABQJTWSV8AhkBAAoJEAL1yD+ydue63REP/1tPqTo/f6StS00g
 NTUpjgVqxgsPWYWwSLkgkaUZn2z9Edv86BLpqTY8OBQZ19EUwfNehcnvR+Olw+7wxNnatyxo
 D2FG0paTia1SjxaJ8Nx3e85jy6l7N2AQrTCFCtFN9lp8Pc0LVBpSbjmP+Peh5Mi7gtCBNkpz
 KShEaJE25a/+rnIrIXzJHrsbC2GwcssAF3bd03iU41J1gMTalB6HCtQUwgqSsbG8MsR/IwHW
 XruOnVp0GQRJwlw07e9T3PKTLj3LWsAPe0LHm5W1Q+euoCLsZfYwr7phQ19HAxSCu8hzp43u
 zSw0+sEQsO+9wz2nGDgQCGepCcJR1lygVn2zwRTQKbq7Hjs+IWZ0gN2nDajScuR1RsxTE4WR
 lj0+Ne6VrAmPiW6QqRhliDO+e82riI75ywSWrJb9TQw0+UkIQ2DlNr0u0TwCUTcQNN6aKnru
 ouVt3qoRlcD5MuRhLH+ttAcmNITMg7GQ6RQajWrSKuKFrt6iuDbjgO2cnaTrLbNBBKPTG4oF
 D6kX8Zea0KvVBagBsaC1CDTDQQMxYBPDBSlqYCb/b2x7KHTvTAHUBSsBRL6MKz8wwruDodTM
 4E4ToV9URl4aE/msBZ4GLTtEmUHBh4/AYwk6ACYByYKyx5r3PDG0iHnJ8bV0OeyQ9ujfgBBP
 B2t4oASNnIOeGEEcQ2rjzsFNBFNPCKMBEACm7Xqafb1Dp1nDl06aw/3O9ixWsGMv1Uhfd2B6
 it6wh1HDCn9HpekgouR2HLMvdd3Y//GG89irEasjzENZPsK82PS0bvkxxIHRFm0pikF4ljIb
 6tca2sxFr/H7CCtWYZjZzPgnOPtnagN0qVVyEM7L5f7KjGb1/o5EDkVR2SVSSjrlmNdTL2Rd
 zaPqrBoxuR/y/n856deWqS1ZssOpqwKhxT1IVlF6S47CjFJ3+fiHNjkljLfxzDyQXwXCNoZn
 BKcW9PvAMf6W1DGASoXtsMg4HHzZ5fW+vnjzvWiC4pXrcP7Ivfxx5pB+nGiOfOY+/VSUlW/9
 GdzPlOIc1bGyKc6tGREH5lErmeoJZ5k7E9cMJx+xzuDItvnZbf6RuH5fg3QsljQy8jLlr4S6
 8YwxlObySJ5K+suPRzZOG2+kq77RJVqAgZXp3Zdvdaov4a5J3H8pxzjj0yZ2JZlndM4X7Msr
 P5tfxy1WvV4Km6QeFAsjcF5gM+wWl+mf2qrlp3dRwniG1vkLsnQugQ4oNUrx0ahwOSm9p6kM
 CIiTITo+W7O9KEE9XCb4vV0ejmLlgdDV8ASVUekeTJkmRIBnz0fa4pa1vbtZoi6/LlIdAEEt
 PY6p3hgkLLtr2GRodOW/Y3vPRd9+rJHq/tLIfwc58ZhQKmRcgrhtlnuTGTmyUqGSiMNfpwAR
 AQABwsFfBBgBAgAJBQJTTwijAhsMAAoJEAL1yD+ydue64BgP/33QKczgAvSdj9XTC14wZCGE
 U8ygZwkkyNf021iNMj+o0dpLU48PIhHIMTXlM2aiiZlPWgKVlDRjlYuc9EZqGgbOOuR/pNYA
 JX9vaqszyE34JzXBL9DBKUuAui8z8GcxRcz49/xtzzP0kH3OQbBIqZWuMRxKEpRptRT0wzBL
 O31ygf4FRxs68jvPCuZjTGKELIo656/Hmk17cmjoBAJK7JHfqdGkDXk5tneeHCkB411p9WJU
 vMO2EqsHjobjuFm89hI0pSxlUoiTL0Nuk9Edemjw70W4anGNyaQtBq+qu1RdjUPBvoJec7y/
 EXJtoGxq9Y+tmm22xwApSiIOyMwUi9A1iLjQLmngLeUdsHyrEWTbEYHd2sAM2sqKoZRyBDSv
 ejRvZD6zwkY/9nRqXt02H1quVOP42xlkwOQU6gxm93o/bxd7S5tEA359Sli5gZRaucpNQkwd
 KLQdCvFdksD270r4jU/rwR2R/Ubi+txfy0dk2wGBjl1xpSf0Lbl/KMR5TQntELfLR4etizLq
 Xpd2byn96Ivi8C8u9zJruXTueHH8vt7gJ1oax3yKRGU5o2eipCRiKZ0s/T7fvkdq+8beg9ku
 fDO4SAgJMIl6H5awliCY2zQvLHysS/Wb8QuB09hmhLZ4AifdHyF1J5qeePEhgTA+BaUbiUZf
 i4aIXCH3Wv6K
Organization: ARM Ltd.
Message-ID: <6b55a50a-70c1-991b-d780-f6829b0c87e8@arm.com>
Date: Wed, 29 Jul 2020 01:18:57 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <CACMJ4GYWBNV5O4otbDj2Lx3Qq6sFPWm8bX4CRABEU3g1izQraQ@mail.gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel <xen-devel@lists.xenproject.org>, Julien Grall <julien@xen.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Alejandro <alejandro.gonzalez.correo@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 28/07/2020 19:52, Christopher Clark wrote:

Hi Christopher,

wow, this quickly got out of hand. I never meant to downplay anyone's
work here, but on this particular platform some things might look a bit
different than normal. See below.

> On Tue, Jul 28, 2020 at 11:16 AM Stefano Stabellini
> <sstabellini@kernel.org> wrote:
>>
>> On Tue, 28 Jul 2020, André Przywara wrote:
>>> On 28/07/2020 11:39, Alejandro wrote:
>>>> Hello,
>>>>
>>>> El dom., 26 jul. 2020 a las 22:25, André Przywara
>>>> (<andre.przywara@arm.com>) escribió:
>>>>> So this was actually my first thought: The firmware (U-Boot SPL) sets up
>>>>> some basic CPU frequency (888 MHz for H6 [1]), which is known to never
>>>>> overheat the chip, even under full load. So any concern from your side
>>>>> about the board or SoC overheating could be dismissed, with the current
>>>>> mainline code, at least. However you lose the full speed, by quite a
>>>>> margin on the H6 (on the A64 it's only 816 vs 1200(ish) MHz).
>>>>> However, without the clock entries in the CPU node, the frequency would
>>>>> never be changed by Dom0 anyway (nor by Xen, which doesn't even know how
>>>>> to do this).
>>>>> So from a practical point of view: unless you hack Xen to pass on more
>>>>> cpu node properties, you are stuck at 888 MHz anyway, and don't need to
>>>>> worry about overheating.
>>>> Thank you. Knowing that at least it won't overheat is a relief. But
>>>> the performance definitely suffers from the current situation, and
>>>> quite a bit. I'm thinking about using KVM instead: even if it does
>>>> less paravirtualization of guests,
>>>
>>> What is this statement based on? I think on ARM this never really
>>> applied, and in general whether you do virtio or xen front-end/back-end
>>> does not really matter.
> 
> When you say "in general" here, this becomes a very broad statement
> about virtio and xen front-end/back-ends being equivalent and
> interchangeable, and that could cause some misunderstanding for a
> newcomer.
> 
> There are important differences between the isolation properties of
> classic virtio and Xen's front-end/back-ends -- and also the Argo
> transport. It's particularly important for Xen because it has
> prioritized support for stronger isolation between execution
> environments to a greater extent than some other hypervisors. It is a
> critical differentiator for it. The importance of isolation is why Xen
> 4.14's headline feature was support for Linux stubdomains, upstreamed
> to Xen after years of work by the Qubes and OpenXT communities.

He was talking about performance. My take on this was that this seems to
go back to the old days, when Xen was considered faster because of
paravirt (vs. trap&emulate h/w in QEMU). And this clearly does not apply
anymore, and never really applied to ARM.

>>> IMHO any reasoning about performance just based
>>> on software architecture is mostly flawed (because it's complex and
>>> reality might have missed some memos ;-)
> 
> That's another pretty strong statement. Measurement is great, but
> maybe performance analysis that is informed and directed by an
> understanding of the architecture under test could potentially be more
> rigorous and persuasive than work done without it?

You seem to draw quite a lot from my statement. All I was saying was that
modern systems are far too complex to reason about actual performance
based on some architectural ideas.
Also my statement was in response to some generic statement, but of
course in this particular context. Please keep in mind that we are
talking about a 5 US$ TV-box SoC here, basically a toy platform. The
chip has severe architectural issues (secure devices not being secure,
critical devices not being isolated). I/O probably means SD card at
about 25MB/s, the fastest I have seen is 80MB/s on some better (but
optional!) eMMC modules. DRAM is via a single channel 32bit path. The
cores are using an almost eight-year-old energy-efficient
micro-architecture. So whether any clever architecture really
contributes to performance on this system is somewhat questionable.

So I was suggesting that before jumping to conclusions based on broad
architectural design ideas, an actual reality check of whether those
really apply to the platform might be warranted.
Also I haven't seen what kind of performance he is actually interested
in. Is the task at hand I/O bound, memory bound, CPU bound?
The discussion so far was about the CPU clock frequency only.

>>> So just measure your particular use case, then you know.
> 
> Hmm.

Is this questioning the usefulness of actual performance measurement? He
seems to be after a particular setup, so keeping an eye on the *actual*
performance outcome seems quite reasonable to me.

>>>> I'm sure that the ability to use
>>>> the maximum frequency of the CPU would offset the additional overhead,
>>>> and in general offer better performance. But with KVM I lose the
>>>> ability to have individual domU's dedicated to some device driver,
>>>> which is a nice thing to have from a security standpoint.
>>>
>>> I understand the theoretical merits, but a) does this really work on
>>> your board and b) is this really more secure? What do you want to
>>> protect against?
>>
>> For "does it work on your board", the main obstacle is typically IOMMU
>> support to be able to do device assignment properly. That's definitely
>> something to check. If it doesn't work nowadays you can try to
>> workaround it by using direct 1:1 memory mappings [1].  However, for
>> security then you have to configure a MPU. I wonder if H6 has a MPU and
>> how it can be configured. In any case, something to keep in mind in case
>> the default IOMMU-based setup doesn't work for some reason for the
>> device you care about.

It's even worse: this SoC only provides platform devices, which all rely
on at least pinctrl, clocks and regulators to function. All of this
functionality is provided via centralised devices, probably controlled
by Dom0 (or just one domain, anyway). The MMC controller, for instance,
needs to adjust the SD bus clock to the storage card dynamically, which
requires reprogramming the CCU. So I don't see how a driver domain would
conceptually work without solving the very same problems that we just
faced with cpufreq here.

And of course this device does not have an IOMMU worth mentioning: there
is a device with that name, but it mostly provides scatter-gather
support for the video and display devices *only*.
The MMC controller has its own built-in DMA engine, so it can access the
*whole* of memory, including Xen's own.

>> For "is this really more secure?", yes it is more secure as you are
>> running larger portions of the codebase in unprivileged mode and isolated
>> from each other with IOMMU (or MPU) protection. See what the OpenXT and
>> Qubes OS guys have been doing.
> 
> Yes. Both projects have done quite a lot of work to enable and
> maintain driver domains.

Which don't work here; see above. Besides, I was genuinely interested in
the actual threat model. What do we expect to go wrong, and how would
putting the driver in its own domain help, considering the platform's
limitations?

Cheers,
Andre


From xen-devel-bounces@lists.xenproject.org Wed Jul 29 02:52:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jul 2020 02:52:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k0cCc-0004EE-QW; Wed, 29 Jul 2020 02:52:22 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ULCb=BI=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1k0cCb-0004E8-P1
 for xen-devel@lists.xenproject.org; Wed, 29 Jul 2020 02:52:21 +0000
X-Inumbo-ID: 84f7d5b8-d146-11ea-8c08-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 84f7d5b8-d146-11ea-8c08-bc764e2007e4;
 Wed, 29 Jul 2020 02:52:19 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=OnU4xwBANt6PXpE+E08teN8QUuaUWW6ajFUrYpE06F0=; b=xeY/IKyphWvSdFIjpmpHRha1h
 nxbsTDdPUfGpFixeYbu4M22Xxxp8LtNtncXnSNaZfXEX44UUKG3Xqgl/RZqwC6wBk4ZxoFOBiW6jY
 Q5goGxSaFgfEg0xmer15EbE+uDBBoC0n7MKekoacT8JGWYEfyfv6BP151ciKukxHW3z/Q=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1k0cCZ-0003Ke-6c; Wed, 29 Jul 2020 02:52:19 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1k0cCY-0005ji-2U; Wed, 29 Jul 2020 02:52:18 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1k0cCX-0005iz-Vt; Wed, 29 Jul 2020 02:52:17 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-152267-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [seabios test] 152267: tolerable FAIL - PUSHED
X-Osstest-Failures: seabios:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 seabios:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 seabios:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 seabios:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 seabios:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 seabios:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 seabios:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
X-Osstest-Versions-This: seabios=d9c812dda519a1a73e8370e1b81ddf46eb22ed16
X-Osstest-Versions-That: seabios=6ada2285d9918859699c92e09540e023e0a16054
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 29 Jul 2020 02:52:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 152267 seabios real [real]
http://logs.test-lab.xenproject.org/osstest/logs/152267/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 151947
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 151947
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 151947
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop             fail like 151947
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass

version targeted for testing:
 seabios              d9c812dda519a1a73e8370e1b81ddf46eb22ed16
baseline version:
 seabios              6ada2285d9918859699c92e09540e023e0a16054

Last test of basis   151947  2020-07-16 17:27:32 Z   12 days
Testing same since   152267  2020-07-28 15:41:06 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Kevin O'Connor <kevin@koconnor.net>
  Paul Menzel <pmenzel@molgen.mpg.de>
  Stefan Reiter <s.reiter@proxmox.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/seabios.git
   6ada228..d9c812d  d9c812dda519a1a73e8370e1b81ddf46eb22ed16 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Wed Jul 29 03:57:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jul 2020 03:57:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k0dCm-0000sJ-VD; Wed, 29 Jul 2020 03:56:36 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ULCb=BI=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1k0dCl-0000sE-Rv
 for xen-devel@lists.xenproject.org; Wed, 29 Jul 2020 03:56:35 +0000
X-Inumbo-ID: 7db730d8-d14f-11ea-8c0f-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7db730d8-d14f-11ea-8c0f-bc764e2007e4;
 Wed, 29 Jul 2020 03:56:33 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=51bzpJDo3NhQ0dSiXKghO2oP4f+DhoQ7r442XUYwON4=; b=tOJ3wsNAC8CSvXo7Cx4aWGZbg
 mp/pVwykILhsbv8jDuiqCUBQ0fAghGTo9paM76S4B0Sr7+7pQ4TXcdURWN1kM7bQAC1ZQL0hN1W0R
 DEnlKOySm6QvHMtqID7M90U75HGZW7j1w+rebAAdBkmy2MaOPhFL6hu/TsypHLzm78urI=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1k0dCi-0004cc-3K; Wed, 29 Jul 2020 03:56:32 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1k0dCh-00023Z-R9; Wed, 29 Jul 2020 03:56:31 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1k0dCh-0002ZC-Q5; Wed, 29 Jul 2020 03:56:31 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-152251-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 152251: regressions - FAIL
X-Osstest-Failures: xen-unstable:test-armhf-armhf-xl-vhd:leak-check/check:fail:regression
 xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
X-Osstest-Versions-This: xen=c27a184225eab54d20435c8cab5ad0ef384dc2c0
X-Osstest-Versions-That: xen=0562cbc14cf02b8188b9f1f37f39a4886776ce7c
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 29 Jul 2020 03:56:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 152251 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/152251/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-vhd      18 leak-check/check         fail REGR. vs. 152233

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 152233
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 152233
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 152233
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 152233
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 152233
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 152233
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 152233
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 152233
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop             fail like 152233
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass

version targeted for testing:
 xen                  c27a184225eab54d20435c8cab5ad0ef384dc2c0
baseline version:
 xen                  0562cbc14cf02b8188b9f1f37f39a4886776ce7c

Last test of basis   152233  2020-07-27 13:06:33 Z    1 days
Testing same since   152251  2020-07-28 08:13:50 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit c27a184225eab54d20435c8cab5ad0ef384dc2c0
Author: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
Date:   Wed Jul 1 10:19:23 2020 +0300

    xen/displif: Protocol version 2
    
    1. Add protocol version as an integer
    
    Version string, which is in fact an integer, is hard to handle in the
    code that supports different protocol versions. To simplify that
    also add the version as an integer.
    
    2. Pass buffer offset with XENDISPL_OP_DBUF_CREATE
    
    There are cases when display data buffer is created with non-zero
    offset to the data start. Handle such cases and provide that offset
    while creating a display buffer.
    
    3. Add XENDISPL_OP_GET_EDID command
    
    Add an optional request for reading Extended Display Identification
    Data (EDID) structure which allows better configuration of the
    display connectors over the configuration set in XenStore.
    With this change connectors may have multiple resolutions defined
    with respect to detailed timing definitions and additional properties
    normally provided by displays.
    
    If this request is not supported by the backend then visible area
    is defined by the relevant XenStore's "resolution" property.
    
    If backend provides extended display identification data (EDID) with
    XENDISPL_OP_GET_EDID request then EDID values must take precedence
    over the resolutions defined in XenStore.
    
    4. Bump protocol version to 2.
    
    Signed-off-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
    Reviewed-by: Juergen Gross <jgross@suse.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Wed Jul 29 04:12:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jul 2020 04:12:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k0dRe-0002bc-Bv; Wed, 29 Jul 2020 04:11:58 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ULCb=BI=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1k0dRd-0002ar-23
 for xen-devel@lists.xenproject.org; Wed, 29 Jul 2020 04:11:57 +0000
X-Inumbo-ID: a075e2ca-d151-11ea-a971-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a075e2ca-d151-11ea-a971-12813bfff9fa;
 Wed, 29 Jul 2020 04:11:50 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1k0dRV-00051H-Q7; Wed, 29 Jul 2020 04:11:49 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1k0dRV-00036r-DZ; Wed, 29 Jul 2020 04:11:49 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1k0dRV-0006Vi-Cq; Wed, 29 Jul 2020 04:11:49 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-152270-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 152270: all pass - PUSHED
X-Osstest-Versions-This: ovmf=744ad444e5306ef68edbe899b5f5dc87e82c146b
X-Osstest-Versions-That: ovmf=3887820e5fecdb9e948f88eb4e92298f6c3dd86f
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 29 Jul 2020 04:11:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 152270 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/152270/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 744ad444e5306ef68edbe899b5f5dc87e82c146b
baseline version:
 ovmf                 3887820e5fecdb9e948f88eb4e92298f6c3dd86f

Last test of basis   152261  2020-07-28 12:10:07 Z    0 days
Testing same since   152270  2020-07-28 19:10:31 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Michael D Kinney <michael.d.kinney@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   3887820e5f..744ad444e5  744ad444e5306ef68edbe899b5f5dc87e82c146b -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Wed Jul 29 06:48:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jul 2020 06:48:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k0fsK-00078u-Hk; Wed, 29 Jul 2020 06:47:40 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kSGO=BI=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1k0fsJ-00078p-Pf
 for xen-devel@lists.xenproject.org; Wed, 29 Jul 2020 06:47:39 +0000
X-Inumbo-ID: 64090fe0-d167-11ea-a982-12813bfff9fa
Received: from EUR04-DB3-obe.outbound.protection.outlook.com (unknown
 [40.107.6.58]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 64090fe0-d167-11ea-a982-12813bfff9fa;
 Wed, 29 Jul 2020 06:47:38 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com; 
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=+kA/oPnFXLs8cMzNjrSsVvWanCdQPpok16aLCIZbi+8=;
 b=xVQxwiMqVudEriXiv9Ki8QLz7gciarvTjJmhclPqITgsg9RTA9CQaQZnulrBgSBx1OLjFZLtiURf+l0ws7aLSh6cvdvf5OUlVbuFJXrCsDZMfu08Ix0heHLqvWbE/cQymUl/U73TLu6pvavjwgYCp4p8iVHTmptwD1+BDn8DB2A=
Received: from DB7PR05CA0072.eurprd05.prod.outlook.com (2603:10a6:10:2e::49)
 by AM4PR08MB2932.eurprd08.prod.outlook.com (2603:10a6:205:e::33) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3216.26; Wed, 29 Jul
 2020 06:47:35 +0000
Received: from DB5EUR03FT019.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:2e:cafe::9) by DB7PR05CA0072.outlook.office365.com
 (2603:10a6:10:2e::49) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3216.23 via Frontend
 Transport; Wed, 29 Jul 2020 06:47:35 +0000
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org;
 dmarc=bestguesspass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DB5EUR03FT019.mail.protection.outlook.com (10.152.20.163) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3216.10 via Frontend Transport; Wed, 29 Jul 2020 06:47:35 +0000
Received: ("Tessian outbound 7de93d801f24:v62");
 Wed, 29 Jul 2020 06:47:35 +0000
X-CheckRecipientChecked: true
X-CR-MTA-CID: 2b9d694e94dda080
X-CR-MTA-TID: 64aa7808
Received: from c041395dfa92.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 3F86F3AC-E8D1-4615-911A-E6C82B2BB5B2.1; 
 Wed, 29 Jul 2020 06:47:30 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id c041395dfa92.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 29 Jul 2020 06:47:30 +0000
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DBBPR08MB4251.eurprd08.prod.outlook.com (2603:10a6:10:d1::19) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3216.24; Wed, 29 Jul
 2020 06:47:29 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::7c65:30f9:4e87:f58a]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::7c65:30f9:4e87:f58a%3]) with mapi id 15.20.3216.033; Wed, 29 Jul 2020
 06:47:28 +0000
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Stefano Stabellini <sstabellini@kernel.org>
Subject: Re: [PATCH v2] xen/arm: Convert runstate address during hypcall
Thread-Topic: [PATCH v2] xen/arm: Convert runstate address during hypcall
Thread-Index: AQHWZPcpcJdIAbXhcEqLxJ7JubPXUKkdWeGAgADEb4A=
Date: Wed, 29 Jul 2020 06:47:28 +0000
Message-ID: <AAD684DA-CDD9-4251-A296-5612F1B53013@arm.com>
References: <4647a019c7b42d40d3c2f5b0a3685954bea7f982.1595948219.git.bertrand.marquis@arm.com>
 <alpine.DEB.2.21.2007281153110.646@sstabellini-ThinkPad-T480s>
In-Reply-To: <alpine.DEB.2.21.2007281153110.646@sstabellini-ThinkPad-T480s>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
Authentication-Results-Original: kernel.org; dkim=none (message not signed)
 header.d=none;kernel.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [2a01:cb10:b9:6f00:456:e93c:a708:a06e]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 9f2ec3e6-7b9b-4257-e43a-08d8338b46f0
x-ms-traffictypediagnostic: DBBPR08MB4251:|AM4PR08MB2932:
X-Microsoft-Antispam-PRVS: <AM4PR08MB2932FE16F0D142C68754E57A9D700@AM4PR08MB2932.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:10000;OLM:10000;
X-MS-Exchange-SenderADCheck: 1
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="iso-8859-1"
Content-ID: <E14399C20AE5954B82A4FEBBFD0DA417@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBBPR08MB4251
Original-Authentication-Results: kernel.org; dkim=none (message not signed)
 header.d=none;kernel.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped: DB5EUR03FT019.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs: 66f3df0d-6e1d-40a4-0fb3-08d8338b42e6
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 29 Jul 2020 06:47:35.6508 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 9f2ec3e6-7b9b-4257-e43a-08d8338b46f0
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d; Ip=[63.35.35.123];
 Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource: DB5EUR03FT019.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM4PR08MB2932
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Julien Grall <julien@xen.org>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Xen-devel <xen-devel@lists.xenproject.org>, nd <nd@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?iso-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>



> On 28 Jul 2020, at 21:04, Stefano Stabellini <sstabellini@kernel.org> wrote:
> 
> On Tue, 28 Jul 2020, Bertrand Marquis wrote:
>> At the moment on Arm, a Linux guest running with KPTI enabled will
>> cause the following error when a context switch happens in user mode:
>> (XEN) p2m.c:1890: d1v0: Failed to walk page-table va 0xffffff837ebe0cd0
>> 
>> The error is caused by the virtual address for the runstate area
>> registered by the guest only being accessible when the guest is running
>> in kernel space when KPTI is enabled.
>> 
>> To solve this issue, this patch is doing the translation from virtual
>> address to physical address during the hypercall and mapping the
>> required pages using vmap. This is removing the conversion from virtual
>> to physical address during the context switch which is solving the
>> problem with KPTI.
>> 
>> This is done only on arm architecture, the behaviour on x86 is not
>> modified by this patch and the address conversion is done as before
>> during each context switch.
>> 
>> This is introducing several limitations in comparison to the previous
>> behaviour (on arm only):
>> - if the guest is remapping the area at a different physical address Xen
>> will continue to update the area at the previous physical address. As
>> the area is in kernel space and usually defined as a global variable this
>> is something which is believed not to happen. If this is required by a
>> guest, it will have to call the hypercall with the new area (even if it
>> is at the same virtual address).
>> - the area needs to be mapped during the hypercall. For the same reasons
>> as for the previous case, even if the area is registered for a different
>> vcpu. It is believed that registering an area using a virtual address
>> unmapped is not something done.
>> 
>> inline functions in headers could not be used as the architecture
>> domain.h is included before the global domain.h making it impossible
>> to use the struct vcpu inside the architecture header.
>> This should not have any performance impact as the hypercall is only
>> called once per vcpu usually.
>> 
>> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
>> ---
>>  Changes in v2
>>    - use vmap to map the pages during the hypercall.
>>    - reintroduce initial copy during hypercall.
>> 
>> ---
>> xen/arch/arm/domain.c        | 160 +++++++++++++++++++++++++++++++----
>> xen/arch/x86/domain.c        |  30 ++++++-
>> xen/arch/x86/x86_64/domain.c |   4 +-
>> xen/common/domain.c          |  19 ++---
>> xen/include/asm-arm/domain.h |   9 ++
>> xen/include/asm-x86/domain.h |  16 ++++
>> xen/include/xen/domain.h     |   5 ++
>> xen/include/xen/sched.h      |  16 +---
>> 8 files changed, 207 insertions(+), 52 deletions(-)
>> 
>> diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
>> index 31169326b2..c595438bd9 100644
>> --- a/xen/arch/arm/domain.c
>> +++ b/xen/arch/arm/domain.c
>> @@ -19,6 +19,7 @@
>> #include <xen/sched.h>
>> #include <xen/softirq.h>
>> #include <xen/wait.h>
>> +#include <xen/vmap.h>
>> 
>> #include <asm/alternative.h>
>> #include <asm/cpuerrata.h>
>> @@ -275,36 +276,157 @@ static void ctxt_switch_to(struct vcpu *n)
>>     virt_timer_restore(n);
>> }
>> 
>> -/* Update per-VCPU guest runstate shared memory area (if registered). */
>> -static void update_runstate_area(struct vcpu *v)
>> +static void cleanup_runstate_vcpu_locked(struct vcpu *v)
>> +{
>> +    if ( v->arch.runstate_guest )
>> +    {
>> +        vunmap((void *)((unsigned long)v->arch.runstate_guest & PAGE_MASK));
>> +
>> +        put_page(v->arch.runstate_guest_page[0]);
>> +
>> +        if ( v->arch.runstate_guest_page[1] )
>> +        {
>> +            put_page(v->arch.runstate_guest_page[1]);
>> +        }
>> +        v->arch.runstate_guest = NULL;
>> +    }
>> +}
>> +
>> +void arch_vcpu_cleanup_runstate(struct vcpu *v)
>> {
>> -    void __user *guest_handle = NULL;
>> +    spin_lock(&v->arch.runstate_guest_lock);
>> +
>> +    cleanup_runstate_vcpu_locked(v);
>> +
>> +    spin_unlock(&v->arch.runstate_guest_lock);
>> +}
>> +
>> +static int setup_runstate_vcpu_locked(struct vcpu *v, vaddr_t vaddr)
>> +{
>> +    unsigned int offset;
>> +    mfn_t mfn[2];
>> +    struct page_info *page;
>> +    unsigned int numpages;
>>     struct vcpu_runstate_info runstate;
>> +    void *p;
>> 
>> -    if ( guest_handle_is_null(runstate_guest(v)) )
>> -        return;
>> +    /* user can pass a NULL address to unregister a previous area */
>> +    if ( vaddr == 0 )
>> +        return 0;
>> 
>> -    memcpy(&runstate, &v->runstate, sizeof(runstate));
>> +    offset = vaddr & ~PAGE_MASK;
>> 
>> -    if ( VM_ASSIST(v->domain, runstate_update_flag) )
>> +    /* provided address must be aligned to a 64bit */
>> +    if ( offset % alignof(struct vcpu_runstate_info) )
>>     {
>> -        guest_handle = &v->runstate_guest.p->state_entry_time + 1;
>> -        guest_handle--;
>> -        runstate.state_entry_time |= XEN_RUNSTATE_UPDATE;
>> -        __raw_copy_to_guest(guest_handle,
>> -                            (void *)(&runstate.state_entry_time + 1) - 1, 1);
>> -        smp_wmb();
>> +        gprintk(XENLOG_WARNING, "Cannot map runstate pointer at 0x%lx: "
>> +                "Invalid offset\n", vaddr);
>> +        return -EINVAL;
>> +    }
>> +
>> +    page = get_page_from_gva(v, vaddr, GV2M_WRITE);
>> +    if ( !page )
>> +    {
>> +        gprintk(XENLOG_WARNING, "Cannot map runstate pointer at 0x%lx: "
>> +                "Page is not mapped\n", vaddr);
>> +        return -EINVAL;
>> +    }
>> +    mfn[0] = page_to_mfn(page);
>> +    v->arch.runstate_guest_page[0] = page;
>> +
>> +    if ( offset > (PAGE_SIZE - sizeof(struct vcpu_runstate_info)) )
>> +    {
>> +        /* guest area is crossing pages */
>> +        page = get_page_from_gva(v, vaddr + PAGE_SIZE, GV2M_WRITE);
>> +        if ( !page )
>> +        {
>> +            put_page(v->arch.runstate_guest_page[0]);
>> +            gprintk(XENLOG_WARNING, "Cannot map runstate pointer at 0x%lx: "
>> +                    "2nd Page is not mapped\n", vaddr);
>> +            return -EINVAL;
>> +        }
>> +        mfn[1] = page_to_mfn(page);
>> +        v->arch.runstate_guest_page[1] = page;
>> +        numpages = 2;
>> +    }
>> +    else
>> +    {
>> +        v->arch.runstate_guest_page[1] = NULL;
>> +        numpages = 1;
>> +    }
>> +
>> +    p = vmap(mfn, numpages);
>> +    if ( !p )
>> +    {
>> +        put_page(v->arch.runstate_guest_page[0]);
>> +        if ( numpages == 2 )
>> +            put_page(v->arch.runstate_guest_page[1]);
>> +
>> +        gprintk(XENLOG_WARNING, "Cannot map runstate pointer at 0x%lx: "
>> +                "vmap error\n", vaddr);
>> +        return -EINVAL;
>>     }
>> 
>> -    __copy_to_guest(runstate_guest(v), &runstate, 1);
>> +    v->arch.runstate_guest = p + offset;
>> 
>> -    if ( guest_handle )
>> +    if (v == current)
>>     {
>> -        runstate.state_entry_time &= ~XEN_RUNSTATE_UPDATE;
>> +        memcpy(v->arch.runstate_guest, &v->runstate, sizeof(v->runstate));
>> +    }
>> +    else
>> +    {
>> +        vcpu_runstate_get(v, &runstate);
>> +        memcpy(v->arch.runstate_guest, &runstate, sizeof(v->runstate));
>> +    }
>> +
>> +    return 0;
>> +}
> 
> 
> The arm32 build breaks with:
> 
> domain.c: In function 'setup_runstate_vcpu_locked':
> domain.c:322:9: error: format '%lx' expects argument of type 'long unsigned int', but argument 3 has type 'vaddr_t' [-Werror=format=]
>         gprintk(XENLOG_WARNING, "Cannot map runstate pointer at 0x%lx: "
>         ^
> domain.c:330:9: error: format '%lx' expects argument of type 'long unsigned int', but argument 3 has type 'vaddr_t' [-Werror=format=]
>         gprintk(XENLOG_WARNING, "Cannot map runstate pointer at 0x%lx: "
>         ^
> domain.c:344:13: error: format '%lx' expects argument of type 'long unsigned int', but argument 3 has type 'vaddr_t' [-Werror=format=]
>             gprintk(XENLOG_WARNING, "Cannot map runstate pointer at 0x%lx: "
>         ^
> domain.c:365:9: error: format '%lx' expects argument of type 'long unsigned int', but argument 3 has type 'vaddr_t' [-Werror=format=]
>         gprintk(XENLOG_WARNING, "Cannot map runstate pointer at 0x%lx: "
>         ^
> cc1: all warnings being treated as errors

My bad. I tested the x86 and arm64 builds but forgot the arm32 one.
I will fix that.

> 
> 
>> +int arch_vcpu_setup_runstate(struct vcpu *v,
>> +                             struct vcpu_register_runstate_memory_area area)
>> +{
>> +    int rc;
>> +
>> +    spin_lock(&v->arch.runstate_guest_lock);
>> +
>> +    /* cleanup if we are recalled */
>> +    cleanup_runstate_vcpu_locked(v);
>> +
>> +    rc = setup_runstate_vcpu_locked(v, (vaddr_t)area.addr.v);
>> +
>> +    spin_unlock(&v->arch.runstate_guest_lock);
>> +
>> +    return rc;
>> +}
>> +
>> +
>> +/* Update per-VCPU guest runstate shared memory area (if registered). */
>> +static void update_runstate_area(struct vcpu *v)
>> +{
>> +    spin_lock(&v->arch.runstate_guest_lock);
>> +
>> +    if ( v->arch.runstate_guest )
>> +    {
>> +        if ( VM_ASSIST(v->domain, runstate_update_flag) )
>> +        {
>> +            v->runstate.state_entry_time |= XEN_RUNSTATE_UPDATE;
>> +            v->arch.runstate_guest->state_entry_time |= XEN_RUNSTATE_UPDATE;
> 
> Please use write_atomic (as suggested by Julien here:
> https://marc.info/?l=xen-devel&m=159225391619240)

I will do that.

> 
> 
>> +            smp_wmb();
>> +        }
>> +
>> +        memcpy(v->arch.runstate_guest, &v->runstate, sizeof(v->runstate));
>> +
>> +        /* copy must be done before switching the bit */
>>         smp_wmb();
>> -        __raw_copy_to_guest(guest_handle,
>> -                            (void *)(&runstate.state_entry_time + 1) - 1, 1);
>> +
>> +        if ( VM_ASSIST(v->domain, runstate_update_flag) )
>> +        {
>> +            v->runstate.state_entry_time &= ~XEN_RUNSTATE_UPDATE;
>> +            v->arch.runstate_guest->state_entry_time &= ~XEN_RUNSTATE_UPDATE;
> 
> and here too
> 
> The rest looks OK to me.

Thanks for the review.
Regards
Bertrand

> 
> 
>> +        }
>>     }
>> +
>> +    spin_unlock(&v->arch.runstate_guest_lock);
>> }
>> 
>> static void schedule_tail(struct vcpu *prev)



From xen-devel-bounces@lists.xenproject.org Wed Jul 29 07:09:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jul 2020 07:09:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k0gD8-0000X4-HW; Wed, 29 Jul 2020 07:09:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kSGO=BI=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1k0gD7-0000Wz-0Q
 for xen-devel@lists.xenproject.org; Wed, 29 Jul 2020 07:09:09 +0000
X-Inumbo-ID: 646ee81c-d16a-11ea-8c20-bc764e2007e4
Received: from EUR02-AM5-obe.outbound.protection.outlook.com (unknown
 [40.107.0.85]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 646ee81c-d16a-11ea-8c20-bc764e2007e4;
 Wed, 29 Jul 2020 07:09:07 +0000 (UTC)
Received: from AM6P192CA0094.EURP192.PROD.OUTLOOK.COM (2603:10a6:209:8d::35)
 by VI1PR08MB3024.eurprd08.prod.outlook.com (2603:10a6:803:45::13) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3239.16; Wed, 29 Jul
 2020 07:09:04 +0000
Received: from AM5EUR03FT011.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:209:8d:cafe::2a) by AM6P192CA0094.outlook.office365.com
 (2603:10a6:209:8d::35) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3216.23 via Frontend
 Transport; Wed, 29 Jul 2020 07:09:04 +0000
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org;
 dmarc=bestguesspass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT011.mail.protection.outlook.com (10.152.16.152) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3216.10 via Frontend Transport; Wed, 29 Jul 2020 07:09:04 +0000
Received: ("Tessian outbound 2ae7cfbcc26c:v62");
 Wed, 29 Jul 2020 07:09:04 +0000
X-CheckRecipientChecked: true
X-CR-MTA-CID: 9e8ac2cd3a0c897f
X-CR-MTA-TID: 64aa7808
Received: from 00240e42dc7e.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 58E921A7-443D-469E-A455-B61E884DF8FB.1; 
 Wed, 29 Jul 2020 07:08:59 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 00240e42dc7e.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 29 Jul 2020 07:08:59 +0000
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=TS69m3nXzlJyyKMa1Bgz2VZGb79s0/tshaohVikmEoL6doyij5byz3b9tEZYbFfQgheYcwDJnZsVOwg/r0kMqgsF/NcRfhI+5UkGqPisXrgMybtK3gHjpaVvqWMherqk9X2+StjL2zR05lTDYGvZ2S1uVSLEQ9B2YFDP1cKJoU0c8yw3V8w4nEyl5WSXWgwymS3fsGriiY/QFbL6yNtaxnniW8tPiVdFETE69bzj9AoMAJqaw3HI2NwpC5vrq4aTzBhKzPhybvH6zHkH/a65I+X0gYbFTSij9nWCQsCoIreSC+v9KTrcBQMTsf8itmnl/Ii1lmLpWI16gIMfHsiayQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; 
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=U2cKBtYay6SUBHhKq0BxG/6jpeaOSxBEP3HVwLHv64Y=;
 b=ERkNFSrG26/sM5yF1HHzgABmqF2zOuMm3hEkti8gJsAxoccpYVb1M2oSDsK7+B7ayJVP+p6ah9IEf+CHsr++uYDsqphh00+oGgN0K1I0kg+egdy7S5URNSZ8on3Z1EFAovNavrvG2D+h50c/przriEEFgh5KfDGX2L1Uu6KLgqBfQWBeBYWlK+qDg2LN+DVMLJg6gli/OwalhyBHtjLPMjuCO3L9WERKjDQRTHcauSNgOgVeAc15eTdEtbgTwy0XuJEQ8scKH7lwOLn7cKjRFnNVOiqzafsIMC8NUmAc35V9ZSE06Zbzhby2t+HeZInHowb+943//2XgpiU47LMxYw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com; 
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=U2cKBtYay6SUBHhKq0BxG/6jpeaOSxBEP3HVwLHv64Y=;
 b=H/3M88cx7YA03hfxYsWo37rfC+zJVWlOCYnOvjKt5r/inaqvKhHDqjIoNv+FdGj0kGwV3DjkAsAR5cIh1P+XGdHCNCiqmI4TEbxOM/thGBiZ8ttkVxv4B8kehi6HqJTOHCFMHv3VkIb5NMnZ/LLFmvfGvBzyY/YIduoU3XPCmX8=
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DB7PR08MB3161.eurprd08.prod.outlook.com (2603:10a6:5:1d::20) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3216.21; Wed, 29 Jul
 2020 07:08:57 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::7c65:30f9:4e87:f58a]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::7c65:30f9:4e87:f58a%3]) with mapi id 15.20.3216.033; Wed, 29 Jul 2020
 07:08:56 +0000
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Jan Beulich <jbeulich@suse.com>
Subject: Re: [PATCH v2] xen/arm: Convert runstate address during hypcall
Thread-Topic: [PATCH v2] xen/arm: Convert runstate address during hypcall
Thread-Index: AQHWZPcpcJdIAbXhcEqLxJ7JubPXUKkdZ+eAgAC8aQA=
Date: Wed, 29 Jul 2020 07:08:56 +0000
Message-ID: <FCAB700B-4617-4323-BE1E-B80DDA1806C1@arm.com>
References: <4647a019c7b42d40d3c2f5b0a3685954bea7f982.1595948219.git.bertrand.marquis@arm.com>
 <8d2d7f03-450c-d50c-630b-8608c6d42bb9@suse.com>
In-Reply-To: <8d2d7f03-450c-d50c-630b-8608c6d42bb9@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
Authentication-Results-Original: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=arm.com;
x-originating-ip: [2a01:cb10:b9:6f00:456:e93c:a708:a06e]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 61defd00-357c-4bb3-90c1-08d8338e4722
x-ms-traffictypediagnostic: DB7PR08MB3161:|VI1PR08MB3024:
X-Microsoft-Antispam-PRVS: <VI1PR08MB3024F95ACD36D9D04165A6D79D700@VI1PR08MB3024.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:10000;OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original: 34eYB54tesb4p30YBLba2E/PGOPHJa6+Arce25ZqoeE1fwJ1WBPABUgOt/IcuMKkOk2PxzZHWq6T8vIzvF+gjK8CrRVky7VSPZU+jk5dbelgIA+HcpQITaqPneVvtVNqaXH4VZJGHZiuXzJjgk16s3KNmj+pBkaKF4E0nePizFyrzPL1ilBV56tgkpcT24KHLqQk1959LDMrEfOYTUCwmEF2P0Y1dzeI5gPtG+BHxWd4F44Pn9yNdC2/G9+IetHQg0u1q+/L0uGRht4LSwTeyyutoZ5Nz0eNwHUlQWu0BDkIElzExLDi9PehvbdvuHdo6MncTsRGir/LzA2qK6SFV1K276bWmLUcUHMfId3my5IZ4BBkgLLBqJjy44O7pRmf
X-Forefront-Antispam-Report-Untrusted: CIP:255.255.255.255; CTRY:; LANG:en;
 SCL:1; SRV:; IPV:NLI; SFV:NSPM; H:DB7PR08MB3689.eurprd08.prod.outlook.com;
 PTR:; CAT:NONE; SFTY:;
 SFS:(4636009)(39860400002)(396003)(366004)(136003)(346002)(376002)(6512007)(6506007)(53546011)(71200400001)(54906003)(83380400001)(33656002)(8936002)(6916009)(36756003)(4326008)(186003)(6486002)(316002)(8676002)(76116006)(86362001)(64756008)(5660300002)(2906002)(66946007)(66446008)(478600001)(7416002)(2616005)(66476007)(91956017)(66556008)(21314003);
 DIR:OUT; SFP:1101; 
x-ms-exchange-antispam-messagedata: okYkwMG0R4VCtIt3KI9twYS5FUBebYecLx1bNrdEet6YrlclMcm43CspUovtV/aIZ807Q7YrRAG5NOKZP5TtastWqW5fTKKchAHNzgnxhv1BwyCCB89/lkR5Hw+aCZjOGldA0bhBR9rVtXZ2bnm/C03o4e/Z1Ki1MGjqTt6rVT1WYufx5AzLelsxa5/JfvKPnFakcSdI2a5PKL4mIm8qPJDXXz/5qs45e+Qas1BKEmc7QZNUUsvT0JRYgdueMOdB4m5sPwlXLhVwBREGc/NBKhP8GVbySNnit2w/euZzAQmvsBKAZch47DswyyZeu3ySPHNxGMroX8omejdPlSEJCHDV+9nrPrazDOmjOdreKJsXAx3r5CV769UELqz7yK+NpmY3kR5zM9k4h0mGrvz2YHs5yiyeK+cN5TnqnkkbF5Q5SEyntHubQBfvofOog3JnEoMAQJZhdORymhKql5Ot7aaI4Tjp5YJnxblZymcOJRFXuUl0u1FKsbaKnVyXn7SJ9GW4LZCS9qSPmSZlFiQmrjh5GjvWulHI4FYb2pOOSk8=
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="utf-8"
Content-ID: <30EC55053AF9844B8F10D7D301AC1229@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB7PR08MB3161
Original-Authentication-Results: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped: AM5EUR03FT011.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs: af9dd93d-a4b4-4eb8-f0aa-08d8338e4266
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: KKVXRE6i+9VlH3CDNRozMeMGz/XxR0QZbTKWV1ez7QaZIcDWfapRxM321cS2O6cUCimUKQ46Lx25oDYBocH3An17Fa6Bgf7o4d35G34hATmzCHt32lnxd+FHMGuPQlnk1p6TfMkGHmxeXXzjUxiIF09sHxKgK51muFSXq3OiijFFh9Whs1OZZCj8CExkA4KU5y/O1ZfpPpZZNX/wLOxdDWyZcQeDrozAhn5tkq1N7CkvrOZe5KTcwaRxSoBpUVfrh9j58kjb8roT8qcrib8tSSdAtrg5BlJ+T8ce5vrd9/M8AXZJUk1ABuLeK6iJS1t5loFE8eMJa0n2Y1Dn3ae/vBk+tkNAZmZcxyPdPd7UmgF4f4w2ny6iT3lbdPyYpnEQcNsghZPUmnGmCZ7/8/wdNUiYaKBcdJbNuWehyfhXTBU=
X-Forefront-Antispam-Report: CIP:63.35.35.123; CTRY:IE; LANG:en; SCL:1; SRV:;
 IPV:CAL; SFV:NSPM; H:64aa7808-outbound-1.mta.getcheckrecipient.com;
 PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com; CAT:NONE; SFTY:;
 SFS:(4636009)(136003)(376002)(346002)(39850400004)(396003)(46966005)(6862004)(6512007)(478600001)(2616005)(6486002)(336012)(4326008)(107886003)(2906002)(26005)(36756003)(8936002)(36906005)(86362001)(54906003)(53546011)(6506007)(186003)(33656002)(316002)(8676002)(82740400003)(356005)(81166007)(70586007)(82310400002)(47076004)(70206006)(5660300002)(83380400001)(21314003);
 DIR:OUT; SFP:1101; 
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 29 Jul 2020 07:09:04.4035 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 61defd00-357c-4bb3-90c1-08d8338e4722
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d; Ip=[63.35.35.123];
 Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource: AM5EUR03FT011.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR08MB3024
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Xen-devel <xen-devel@lists.xenproject.org>, nd <nd@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>



> On 28 Jul 2020, at 21:54, Jan Beulich <jbeulich@suse.com> wrote:
> 
> On 28.07.2020 17:52, Bertrand Marquis wrote:
>> At the moment on Arm, a Linux guest running with KPTI enabled will
>> cause the following error when a context switch happens in user mode:
>> (XEN) p2m.c:1890: d1v0: Failed to walk page-table va 0xffffff837ebe0cd0
>> The error is caused by the virtual address for the runstate area
>> registered by the guest only being accessible when the guest is running
>> in kernel space when KPTI is enabled.
>> To solve this issue, this patch is doing the translation from virtual
>> address to physical address during the hypercall and mapping the
>> required pages using vmap. This is removing the conversion from virtual
>> to physical address during the context switch which is solving the
>> problem with KPTI.
>> This is done only on arm architecture, the behaviour on x86 is not
>> modified by this patch and the address conversion is done as before
>> during each context switch.
>> This is introducing several limitations in comparison to the previous
>> behaviour (on arm only):
>> - if the guest is remapping the area at a different physical address Xen
>> will continue to update the area at the previous physical address. As
>> the area is in kernel space and usually defined as a global variable this
>> is something which is believed not to happen. If this is required by a
>> guest, it will have to call the hypercall with the new area (even if it
>> is at the same virtual address).
>> - the area needs to be mapped during the hypercall. For the same reasons
>> as for the previous case, even if the area is registered for a different
>> vcpu. It is believed that registering an area using a virtual address
>> unmapped is not something done.
> 
> Beside me thinking that an in-use and stable ABI can't be changed like
> this, no matter what is "believed" kernel code may or may not do, I
> also don't think having arch-es diverge in behavior here is a good
> idea. Use of commonly available interfaces shouldn't lead to
> headaches or surprises when porting code from one arch to another. I'm
> pretty sure it was suggested before: Why don't you simply introduce
> a physical address based hypercall (and then also on x86 at the same
> time, keeping functional parity)? I even seem to recall giving a
> suggestion how to fit this into a future "physical addresses only"
> model, as long as we can settle on the basic principles of that
> conversion path that we want to go sooner or later anyway (as I
> understand).

I fully agree with the “physical address only” model and I think it must be
done. Introducing a new hypercall taking a physical address as parameter
is the long term solution (and I would even volunteer to do it in a new patchset).
But this would not solve the issue here unless Linux is modified.
So I do see this patch as a “bug fix”.

> 
>> --- a/xen/arch/x86/domain.c
>> +++ b/xen/arch/x86/domain.c
>> @@ -1642,6 +1642,30 @@ void paravirt_ctxt_switch_to(struct vcpu *v)
>>          wrmsr_tsc_aux(v->arch.msrs->tsc_aux);
>>  }
>> +int arch_vcpu_setup_runstate(struct vcpu *v,
>> +                             struct vcpu_register_runstate_memory_area area)
>> +{
>> +    struct vcpu_runstate_info runstate;
>> +
>> +    runstate_guest(v) = area.addr.h;
>> +
>> +    if ( v == current )
>> +    {
>> +        __copy_to_guest(runstate_guest(v), &v->runstate, 1);
>> +    }
> 
> Pointless braces (and I think there are more instances).

So:
if cond
   instruction
else
{
   xxx
}

is something that should be done in Xen?

Sorry if I make those kinds of mistakes in the future, as I am more used to a model
where no braces is an absolute no-go. I will try to remember this.

> 
>> +    else
>> +    {
>> +        vcpu_runstate_get(v, &runstate);
>> +        __copy_to_guest(runstate_guest(v), &runstate, 1);
>> +    }
>> +    return 0;
> 
> Missing blank line before main "return".

I will fix it.
Bertrand


From xen-devel-bounces@lists.xenproject.org Wed Jul 29 08:12:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jul 2020 08:12:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k0hCS-0006lC-1b; Wed, 29 Jul 2020 08:12:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Mo4V=BI=amazon.co.uk=prvs=472d6771e=pdurrant@srs-us1.protection.inumbo.net>)
 id 1k0hCQ-0006l7-Fk
 for xen-devel@lists.xenproject.org; Wed, 29 Jul 2020 08:12:30 +0000
X-Inumbo-ID: 3ece5058-d173-11ea-8c30-bc764e2007e4
Received: from smtp-fw-9101.amazon.com (unknown [207.171.184.25])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3ece5058-d173-11ea-8c30-bc764e2007e4;
 Wed, 29 Jul 2020 08:12:29 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=amazon.co.uk; i=@amazon.co.uk; q=dns/txt;
 s=amazon201209; t=1596010350; x=1627546350;
 h=from:to:cc:date:message-id:references:in-reply-to:
 content-transfer-encoding:mime-version:subject;
 bh=6vcRAB27ls93TsENA4IBQqvbiF1CMrS2M/Z8hoki6y4=;
 b=VzRhq3HPvrT9lcVGGwYtKuVkADP7NsRIFGaDfVtvXB7u1WyMAKuTmr3u
 UBDyre35mlk7GCN/c1w8sd23WtLUE3XY74UnzjmHiLZsInef60IoBehUw
 55pAouH1Imf3ZIABNy74XKE7xREPM6LpvVFTxAi7s4eAiFd5/4yilxdee M=;
IronPort-SDR: 8CF7jLBI6RiWtox6OBzkymrdt/3XmhCfosBn7uUg5T7tNP/g9cDVb7FHBk9kIKq+RacpEikSrT
 3Kg0JOfs6LYg==
X-IronPort-AV: E=Sophos;i="5.75,409,1589241600"; d="scan'208";a="55689104"
Subject: RE: [PATCH 4/6] remove remaining uses of iommu_legacy_map/unmap
Thread-Topic: [PATCH 4/6] remove remaining uses of iommu_legacy_map/unmap
Received: from sea32-co-svc-lb4-vlan3.sea.corp.amazon.com (HELO
 email-inbound-relay-1e-97fdccfd.us-east-1.amazon.com) ([10.47.23.38])
 by smtp-border-fw-out-9101.sea19.amazon.com with ESMTP;
 29 Jul 2020 08:12:26 +0000
Received: from EX13MTAUEA002.ant.amazon.com
 (iad55-ws-svc-p15-lb9-vlan2.iad.amazon.com [10.40.159.162])
 by email-inbound-relay-1e-97fdccfd.us-east-1.amazon.com (Postfix) with ESMTPS
 id B1E00A2096; Wed, 29 Jul 2020 08:12:21 +0000 (UTC)
Received: from EX13D32EUC002.ant.amazon.com (10.43.164.94) by
 EX13MTAUEA002.ant.amazon.com (10.43.61.77) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Wed, 29 Jul 2020 08:12:21 +0000
Received: from EX13D32EUC003.ant.amazon.com (10.43.164.24) by
 EX13D32EUC002.ant.amazon.com (10.43.164.94) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Wed, 29 Jul 2020 08:12:20 +0000
Received: from EX13D32EUC003.ant.amazon.com ([10.43.164.24]) by
 EX13D32EUC003.ant.amazon.com ([10.43.164.24]) with mapi id 15.00.1497.006;
 Wed, 29 Jul 2020 08:12:20 +0000
From: "Durrant, Paul" <pdurrant@amazon.co.uk>
To: Jan Beulich <jbeulich@suse.com>, Paul Durrant <paul@xen.org>
Thread-Index: AQJEvXWV1fglpFS8p501Sb8VALJosQG5eCzFAqpVTo2oHmSP0A==
Date: Wed, 29 Jul 2020 08:12:20 +0000
Message-ID: <0bb87826ac30438d9f279bd3148ce4cc@EX13D32EUC003.ant.amazon.com>
References: <20200724164619.1245-1-paul@xen.org>
 <20200724164619.1245-5-paul@xen.org>
 <3face98c-7fa7-2baf-2fe8-b5869865203f@suse.com>
In-Reply-To: <3face98c-7fa7-2baf-2fe8-b5869865203f@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-ms-exchange-transport-fromentityheader: Hosted
x-originating-ip: [10.43.164.90]
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
MIME-Version: 1.0
Precedence: Bulk
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin Tian <kevin.tian@intel.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien
 Grall <julien@xen.org>, Wei Liu <wl@xen.org>, Andrew
 Cooper <andrew.cooper3@citrix.com>, Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jun
 Nakajima <jun.nakajima@intel.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 =?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Sent: 26 July 2020 09:36
> To: Paul Durrant <paul@xen.org>
> Cc: xen-devel@lists.xenproject.org; Durrant, Paul <pdurrant@amazon.co.uk>; Andrew Cooper
> <andrew.cooper3@citrix.com>; Wei Liu <wl@xen.org>; Roger Pau Monné <roger.pau@citrix.com>; George
> Dunlap <george.dunlap@citrix.com>; Ian Jackson <ian.jackson@eu.citrix.com>; Julien Grall
> <julien@xen.org>; Stefano Stabellini <sstabellini@kernel.org>; Jun Nakajima <jun.nakajima@intel.com>;
> Kevin Tian <kevin.tian@intel.com>
> Subject: RE: [EXTERNAL] [PATCH 4/6] remove remaining uses of iommu_legacy_map/unmap
> 
> CAUTION: This email originated from outside of the organization. Do not click links or open
> attachments unless you can confirm the sender and know the content is safe.
> 
> 
> 
> On 24.07.2020 18:46, Paul Durrant wrote:
> > ---
> >  xen/arch/x86/mm.c               | 22 +++++++++++++++-----
> >  xen/arch/x86/mm/p2m-ept.c       | 22 +++++++++++++-------
> >  xen/arch/x86/mm/p2m-pt.c        | 17 +++++++++++----
> >  xen/arch/x86/mm/p2m.c           | 28 ++++++++++++++++++-------
> >  xen/arch/x86/x86_64/mm.c        | 27 ++++++++++++++++++------
> >  xen/common/grant_table.c        | 36 +++++++++++++++++++++++++-------
> >  xen/common/memory.c             |  7 +++----
> >  xen/drivers/passthrough/iommu.c | 37 +--------------------------------
> >  xen/include/xen/iommu.h         | 20 +++++-------------
> >  9 files changed, 123 insertions(+), 93 deletions(-)
> 
> Overall more code. I wonder whether a map-and-flush function (named
> differently than the current ones) wouldn't still be worthwhile to
> have.

Agreed but an extra 30 lines is not huge. I'd still like to keep map/unmap and flush separate but I'll see if I can reduce the added lines.

> 
> > --- a/xen/common/grant_table.c
> > +++ b/xen/common/grant_table.c
> > @@ -1225,11 +1225,25 @@ map_grant_ref(
> >              kind = IOMMUF_readable;
> >          else
> >              kind = 0;
> > -        if ( kind && iommu_legacy_map(ld, _dfn(mfn_x(mfn)), mfn, 0, kind) )
> > +        if ( kind )
> >          {
> > -            double_gt_unlock(lgt, rgt);
> > -            rc = GNTST_general_error;
> > -            goto undo_out;
> > +            dfn_t dfn = _dfn(mfn_x(mfn));
> > +            unsigned int flush_flags = 0;
> > +            int err;
> > +
> > +            err = iommu_map(ld, dfn, mfn, 0, kind, &flush_flags);
> > +            if ( err )
> > +                rc = GNTST_general_error;
> > +
> > +            err = iommu_iotlb_flush(ld, dfn, 1, flush_flags);
> > +            if ( err )
> > +                rc = GNTST_general_error;
> > +
> > +            if ( rc != GNTST_okay )
> > +            {
> > +                double_gt_unlock(lgt, rgt);
> > +                goto undo_out;
> > +            }
> >          }
> 
> The mapping needs to happen with at least ld's lock held, yes. But
> is the same true also for the flushing? Can't (not necessarily
> right in this change) the flush be pulled out of the function and
> instead done once per batch that got processed?
> 

True, the locks need not be held across the flush. I'll have a look at batching too.

  Paul

> Jan


From xen-devel-bounces@lists.xenproject.org Wed Jul 29 08:43:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jul 2020 08:43:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k0hgi-0000sz-FV; Wed, 29 Jul 2020 08:43:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Mo4V=BI=amazon.co.uk=prvs=472d6771e=pdurrant@srs-us1.protection.inumbo.net>)
 id 1k0hgh-0000su-1Z
 for xen-devel@lists.xenproject.org; Wed, 29 Jul 2020 08:43:47 +0000
X-Inumbo-ID: 9d410c76-d177-11ea-8c31-bc764e2007e4
Received: from smtp-fw-9102.amazon.com (unknown [207.171.184.29])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9d410c76-d177-11ea-8c31-bc764e2007e4;
 Wed, 29 Jul 2020 08:43:46 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=amazon.co.uk; i=@amazon.co.uk; q=dns/txt;
 s=amazon201209; t=1596012227; x=1627548227;
 h=from:to:cc:date:message-id:references:in-reply-to:
 content-transfer-encoding:mime-version:subject;
 bh=OfSsSp6Dy4pjFLphQPxj8p6BUGXjTaeaZSSWXQ7GJQs=;
 b=tbLsoMDojL9awG4g2i+T8q65IRQgrZU7DwYk9jTXfi4kh4oKwrg16eEW
 EKkOsSkOj2uwAXhmrXePm6Q8pudbL7R9daqgYYja9wlq5RBODzzGEdY9X
 oMVSVzlWO4gm4snM8ZiFoxNTpdTEHNNWdg2sGAuCyg7rBufqQFaADQy/M E=;
IronPort-SDR: KEmM/0zXdwPY2MTgIlimUlUsJfrE6jB0gBLUwdH7gyE5vxDPXFVUXZ1IqUTImg7oa+jLmyve+K
 vT2MXAECWIMg==
X-IronPort-AV: E=Sophos;i="5.75,409,1589241600"; d="scan'208";a="63893741"
Subject: RE: [PATCH 5/6] iommu: remove the share_p2m operation
Thread-Topic: [PATCH 5/6] iommu: remove the share_p2m operation
Received: from sea32-co-svc-lb4-vlan3.sea.corp.amazon.com (HELO
 email-inbound-relay-2c-c6afef2e.us-west-2.amazon.com) ([10.47.23.38])
 by smtp-border-fw-out-9102.sea19.amazon.com with ESMTP;
 29 Jul 2020 08:43:44 +0000
Received: from EX13MTAUEA002.ant.amazon.com
 (pdx4-ws-svc-p6-lb7-vlan2.pdx.amazon.com [10.170.41.162])
 by email-inbound-relay-2c-c6afef2e.us-west-2.amazon.com (Postfix) with ESMTPS
 id 1682EA1758; Wed, 29 Jul 2020 08:43:43 +0000 (UTC)
Received: from EX13D32EUC004.ant.amazon.com (10.43.164.121) by
 EX13MTAUEA002.ant.amazon.com (10.43.61.77) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Wed, 29 Jul 2020 08:43:42 +0000
Received: from EX13D32EUC003.ant.amazon.com (10.43.164.24) by
 EX13D32EUC004.ant.amazon.com (10.43.164.121) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Wed, 29 Jul 2020 08:43:41 +0000
Received: from EX13D32EUC003.ant.amazon.com ([10.43.164.24]) by
 EX13D32EUC003.ant.amazon.com ([10.43.164.24]) with mapi id 15.00.1497.006;
 Wed, 29 Jul 2020 08:43:41 +0000
From: "Durrant, Paul" <pdurrant@amazon.co.uk>
To: Andrew Cooper <andrew.cooper3@citrix.com>, Paul Durrant <paul@xen.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Thread-Index: AQJEvXWV1fglpFS8p501Sb8VALJosQJLmpxbAdhpFh2oIGsK0A==
Date: Wed, 29 Jul 2020 08:43:41 +0000
Message-ID: <f739576139d143f48dfd30516a3380e5@EX13D32EUC003.ant.amazon.com>
References: <20200724164619.1245-1-paul@xen.org>
 <20200724164619.1245-6-paul@xen.org>
 <0e3d1914-2bf0-0b14-721e-7694f3657523@citrix.com>
In-Reply-To: <0e3d1914-2bf0-0b14-721e-7694f3657523@citrix.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-ms-exchange-transport-fromentityheader: Hosted
x-originating-ip: [10.43.164.90]
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
MIME-Version: 1.0
Precedence: Bulk
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Wei Liu <wl@xen.org>, Kevin Tian <kevin.tian@intel.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 =?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> -----Original Message-----
> From: Andrew Cooper <andrew.cooper3@citrix.com>
> Sent: 24 July 2020 20:01
> To: Paul Durrant <paul@xen.org>; xen-devel@lists.xenproject.org
> Cc: Durrant, Paul <pdurrant@amazon.co.uk>; Jan Beulich <jbeulich@suse.com>; George Dunlap
> <george.dunlap@citrix.com>; Wei Liu <wl@xen.org>; Roger Pau Monné <roger.pau@citrix.com>; Kevin Tian
> <kevin.tian@intel.com>
> Subject: RE: [EXTERNAL] [PATCH 5/6] iommu: remove the share_p2m operation
> 
> CAUTION: This email originated from outside of the organization. Do not click links or open
> attachments unless you can confirm the sender and know the content is safe.
> 
> 
> 
> On 24/07/2020 17:46, Paul Durrant wrote:
> > From: Paul Durrant <pdurrant@amazon.com>
> >
> > Sharing of HAP tables is VT-d specific so the operation is never defined for
> > AMD IOMMU.
> 
> It's not VT-d specific, and Xen really did use to share on AMD.
> 

Oh, I never thought that ever worked.

> In fact, a remnant of this logic is still present in the form of the
> comment for p2m_type_t explaining why p2m_ram_rw needs to be 0.
> 
> That said, I agree it shouldn't be a hook, because it is statically
> known at domain create time based on flags and hardware properties.
> 

Well, for VT-d that may well change in future ;-)

> >  There's also no need to pro-actively set vtd.pgd_maddr when using
> > shared EPT as it is straightforward to simply define a helper function to
> > return the appropriate value in the shared and non-shared cases.
> 
> It occurs to me that vtd.pgd_maddr and amd.root_table really are the
> same thing, and furthermore are common to all IOMMU implementations on
> any architecture.  Is it worth trying to make them common, rather than
> continuing to let each implementation handle them differently?

I looked at this. The problem really lies in terminology. The 'root table' in the VT-d code refers to the single root table which IIRC is called the device table in the AMD IOMMU code so, whilst I could convert both to use a single common field, I think it's not really that valuable to do so since it's likely to make the code slightly more confusing to read (for someone expecting the names to tally with the spec).

  Paul

> 
> ~Andrew


From xen-devel-bounces@lists.xenproject.org Wed Jul 29 08:45:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jul 2020 08:45:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k0hin-00010C-Vc; Wed, 29 Jul 2020 08:45:57 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Mo4V=BI=amazon.co.uk=prvs=472d6771e=pdurrant@srs-us1.protection.inumbo.net>)
 id 1k0him-000106-Ft
 for xen-devel@lists.xenproject.org; Wed, 29 Jul 2020 08:45:56 +0000
X-Inumbo-ID: ea611460-d177-11ea-a993-12813bfff9fa
Received: from smtp-fw-9102.amazon.com (unknown [207.171.184.29])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ea611460-d177-11ea-a993-12813bfff9fa;
 Wed, 29 Jul 2020 08:45:55 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=amazon.co.uk; i=@amazon.co.uk; q=dns/txt;
 s=amazon201209; t=1596012356; x=1627548356;
 h=from:to:cc:date:message-id:references:in-reply-to:
 content-transfer-encoding:mime-version:subject;
 bh=CKdJV+JgaMZyz8cY/v6LH76+8eASYE5xaX2k2Sxkxbw=;
 b=DGdPrtxSXMUCVJ1dRh2SSUhBRqrbHxa5Bo0RnfKren2za4YU5z1pF1ZA
 NgXjaQ2eYDGHJ8fsWK/EMoYSxydhF0tZgz7rskFtR2nasr7hvvnSm5W8Q
 2dGB7NfEUBMP7wrjcKx26WZ61KrNM+coeKY1kSRY4/udZhPrgd8e5oich o=;
IronPort-SDR: dw50FH2SMuBPCN040W8krWId+p1P3nB7mqTXhk+yftg+99dM24FftPei9QcxH93eSSGJijdx5W
 fzc2pbIrqSFA==
X-IronPort-AV: E=Sophos;i="5.75,409,1589241600"; d="scan'208";a="63894231"
Subject: RE: [PATCH 5/6] iommu: remove the share_p2m operation
Thread-Topic: [PATCH 5/6] iommu: remove the share_p2m operation
Received: from sea32-co-svc-lb4-vlan3.sea.corp.amazon.com (HELO
 email-inbound-relay-2b-5bdc5131.us-west-2.amazon.com) ([10.47.23.38])
 by smtp-border-fw-out-9102.sea19.amazon.com with ESMTP;
 29 Jul 2020 08:45:55 +0000
Received: from EX13MTAUEA002.ant.amazon.com
 (pdx4-ws-svc-p6-lb7-vlan3.pdx.amazon.com [10.170.41.166])
 by email-inbound-relay-2b-5bdc5131.us-west-2.amazon.com (Postfix) with ESMTPS
 id 54EDEA2BC0; Wed, 29 Jul 2020 08:45:54 +0000 (UTC)
Received: from EX13D32EUC002.ant.amazon.com (10.43.164.94) by
 EX13MTAUEA002.ant.amazon.com (10.43.61.77) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Wed, 29 Jul 2020 08:45:53 +0000
Received: from EX13D32EUC003.ant.amazon.com (10.43.164.24) by
 EX13D32EUC002.ant.amazon.com (10.43.164.94) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Wed, 29 Jul 2020 08:45:53 +0000
Received: from EX13D32EUC003.ant.amazon.com ([10.43.164.24]) by
 EX13D32EUC003.ant.amazon.com ([10.43.164.24]) with mapi id 15.00.1497.006;
 Wed, 29 Jul 2020 08:45:52 +0000
From: "Durrant, Paul" <pdurrant@amazon.co.uk>
To: Jan Beulich <jbeulich@suse.com>, Paul Durrant <paul@xen.org>
Thread-Index: AQJEvXWV1fglpFS8p501Sb8VALJosQJLmpxbAnxUoGeoG00acA==
Date: Wed, 29 Jul 2020 08:45:52 +0000
Message-ID: <319306da26bc4e848e9a46cf0caa10ea@EX13D32EUC003.ant.amazon.com>
References: <20200724164619.1245-1-paul@xen.org>
 <20200724164619.1245-6-paul@xen.org>
 <d005885d-d983-7328-ee36-efd6032e8c96@suse.com>
In-Reply-To: <d005885d-d983-7328-ee36-efd6032e8c96@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-ms-exchange-transport-fromentityheader: Hosted
x-originating-ip: [10.43.164.90]
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
MIME-Version: 1.0
Precedence: Bulk
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin Tian <kevin.tian@intel.com>, Wei Liu <wl@xen.org>, Andrew
 Cooper <andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 =?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Sent: 26 July 2020 09:50
> To: Paul Durrant <paul@xen.org>
> Cc: xen-devel@lists.xenproject.org; Durrant, Paul <pdurrant@amazon.co.uk>; Andrew Cooper
> <andrew.cooper3@citrix.com>; George Dunlap <george.dunlap@citrix.com>; Wei Liu <wl@xen.org>; Roger Pau
> Monné <roger.pau@citrix.com>; Kevin Tian <kevin.tian@intel.com>
> Subject: RE: [EXTERNAL] [PATCH 5/6] iommu: remove the share_p2m operation
> 
> CAUTION: This email originated from outside of the organization. Do not click links or open
> attachments unless you can confirm the sender and know the content is safe.
> 
> 
> 
> On 24.07.2020 18:46, Paul Durrant wrote:
> > --- a/xen/drivers/passthrough/vtd/iommu.c
> > +++ b/xen/drivers/passthrough/vtd/iommu.c
> > @@ -313,6 +313,26 @@ static u64 addr_to_dma_page_maddr(struct domain *domain, u64 addr, int alloc)
> >      return pte_maddr;
> >  }
> >
> > +static u64 domain_pgd_maddr(struct domain *d)
> 
> uint64_t please.
> 

Ok.

> > +{
> > +    struct domain_iommu *hd = dom_iommu(d);
> > +
> > +    ASSERT(spin_is_locked(&hd->arch.mapping_lock));
> > +
> > +    if ( iommu_use_hap_pt(d) )
> > +    {
> > +        mfn_t pgd_mfn =
> > +            pagetable_get_mfn(p2m_get_pagetable(p2m_get_hostp2m(d)));
> > +
> > +        return pagetable_get_paddr(pagetable_from_mfn(pgd_mfn));
> > +    }
> > +
> > +    if ( !hd->arch.vtd.pgd_maddr )
> > +        addr_to_dma_page_maddr(d, 0, 1);
> > +
> > +    return hd->arch.vtd.pgd_maddr;
> > +}
> > +
> >  static void iommu_flush_write_buffer(struct vtd_iommu *iommu)
> >  {
> >      u32 val;
> > @@ -1347,22 +1367,17 @@ int domain_context_mapping_one(
> >      {
> >          spin_lock(&hd->arch.mapping_lock);
> >
> > -        /* Ensure we have pagetables allocated down to leaf PTE. */
> > -        if ( hd->arch.vtd.pgd_maddr == 0 )
> > +        pgd_maddr = domain_pgd_maddr(domain);
> > +        if ( !pgd_maddr )
> >          {
> > -            addr_to_dma_page_maddr(domain, 0, 1);
> > -            if ( hd->arch.vtd.pgd_maddr == 0 )
> > -            {
> > -            nomem:
> > -                spin_unlock(&hd->arch.mapping_lock);
> > -                spin_unlock(&iommu->lock);
> > -                unmap_vtd_domain_page(context_entries);
> > -                return -ENOMEM;
> > -            }
> > +        nomem:
> > +            spin_unlock(&hd->arch.mapping_lock);
> > +            spin_unlock(&iommu->lock);
> > +            unmap_vtd_domain_page(context_entries);
> > +            return -ENOMEM;
> >          }
> 
> This renders all calls bogus in shared mode - the function, if
> it ended up getting called nevertheless, would then still alloc
> the root table. Therefore I'd like to suggest that at least all
> its callers get an explicit check. That's really just
> dma_pte_clear_one() as it looks.
> 

Ok, I think I may move this code out into a separate function too since the nomem label is slightly awkward, so I'll re-factor it.

  Paul

> Jan


From xen-devel-bounces@lists.xenproject.org Wed Jul 29 09:25:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jul 2020 09:25:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k0iKI-0004QX-8f; Wed, 29 Jul 2020 09:24:42 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ULCb=BI=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1k0iKH-0004QS-Mt
 for xen-devel@lists.xenproject.org; Wed, 29 Jul 2020 09:24:41 +0000
X-Inumbo-ID: 53008802-d17d-11ea-a994-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 53008802-d17d-11ea-a994-12813bfff9fa;
 Wed, 29 Jul 2020 09:24:38 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=I8JGI4A9uwGQgrsoYDKQSQM8mcpzgbP92XoXCWYAqP8=; b=B8grOlaZ9slTdI9fS5iqXQbnW
 2ac2rqhIu55FGzzAE91mEqIBEV+RQHejI4Gu5/k0d/Ko/9phL3yq54ZR7vAdxXTYZj04G9CuyNAMv
 kCmt6xsWVrxdENyg8uvB2IUvmiYnlzsMwTuC2cVjmzaaX9r55NIDAfylhBt5JWaxJrdNQ=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1k0iKD-0003vg-H6; Wed, 29 Jul 2020 09:24:37 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1k0iKD-00050l-8l; Wed, 29 Jul 2020 09:24:37 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1k0iKD-0003Nt-8B; Wed, 29 Jul 2020 09:24:37 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-152278-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 152278: regressions - FAIL
X-Osstest-Failures: libvirt:build-i386-libvirt:libvirt-build:fail:regression
 libvirt:build-arm64-libvirt:libvirt-build:fail:regression
 libvirt:build-amd64-libvirt:libvirt-build:fail:regression
 libvirt:build-armhf-libvirt:libvirt-build:fail:regression
 libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
X-Osstest-Versions-This: libvirt=42db3cc2659c706e20f0e376d721ba90e872dc2a
X-Osstest-Versions-That: libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 29 Jul 2020 09:24:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 152278 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/152278/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a

version targeted for testing:
 libvirt              42db3cc2659c706e20f0e376d721ba90e872dc2a
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z   19 days
Failing since        151818  2020-07-11 04:18:52 Z   18 days   19 attempts
Testing same since   152278  2020-07-29 04:19:11 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Bastien Orivel <bastien.orivel@diateam.net>
  Bihong Yu <yubihong@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  Ján Tomko <jtomko@redhat.com>
  Laine Stump <laine@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Michal Privoznik <mprivozn@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Ryan Schmidt <git@ryandesign.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Weblate <noreply@weblate.org>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 3045 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Jul 29 10:21:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jul 2020 10:21:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k0jDS-0000yh-JC; Wed, 29 Jul 2020 10:21:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ULCb=BI=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1k0jDR-0000yH-Db
 for xen-devel@lists.xenproject.org; Wed, 29 Jul 2020 10:21:41 +0000
X-Inumbo-ID: 47a37750-d185-11ea-8c38-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 47a37750-d185-11ea-8c38-bc764e2007e4;
 Wed, 29 Jul 2020 10:21:35 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=ve+SkCNJpYjbGsog950IRakGV/SqA1wfyx5ZBjk+UP8=; b=hMAlWPSyvKuWsTBxoI+yyuLsW
 aKeWl+YzH/9dQSWhHwoHe3Jm4Dz01qRTUuIeRNwDvaYGQSGYUXYDjUUfLsrgpIBUX69Bc24bUWVce
 9vORlz9cyyIRyRqelm7gavt7MqgTcDmNkxGpdEcdLAtXCcjrlmn5mp6Tf5T6FaTLGHIEE=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1k0jDK-00059j-NT; Wed, 29 Jul 2020 10:21:34 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1k0jDK-0008KA-BE; Wed, 29 Jul 2020 10:21:34 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1k0jDK-0001gQ-AU; Wed, 29 Jul 2020 10:21:34 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-152283-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-coverity test] 152283: all pass - PUSHED
X-Osstest-Versions-This: xen=b071ec25e85c4aacf3da59e5258cda0b1c4df45d
X-Osstest-Versions-That: xen=0562cbc14cf02b8188b9f1f37f39a4886776ce7c
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 29 Jul 2020 10:21:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 152283 xen-unstable-coverity real [real]
http://logs.test-lab.xenproject.org/osstest/logs/152283/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 xen                  b071ec25e85c4aacf3da59e5258cda0b1c4df45d
baseline version:
 xen                  0562cbc14cf02b8188b9f1f37f39a4886776ce7c

Last test of basis   152213  2020-07-26 09:18:31 Z    3 days
Testing same since   152283  2020-07-29 09:19:04 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Julien Grall <julien.grall@arm.com>
  Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
  Roger Pau Monne <roger.pau@citrix.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Tim Deegan <tim@xen.org>

jobs:
 coverity-amd64                                               pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   0562cbc14c..b071ec25e8  b071ec25e85c4aacf3da59e5258cda0b1c4df45d -> coverity-tested/smoke


From xen-devel-bounces@lists.xenproject.org Wed Jul 29 11:01:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jul 2020 11:01:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k0jpI-0004Md-Oy; Wed, 29 Jul 2020 11:00:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ULCb=BI=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1k0jpH-0004MY-T0
 for xen-devel@lists.xenproject.org; Wed, 29 Jul 2020 11:00:47 +0000
X-Inumbo-ID: c0275886-d18a-11ea-8c3f-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c0275886-d18a-11ea-8c3f-bc764e2007e4;
 Wed, 29 Jul 2020 11:00:44 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=sQE7raCFBD7MJ7GwpQ2asLyugyH/y2Wi/SQRRBoZd7o=; b=YTqOWyCOayD+G8/LMkwim4C63
 bLlzpaIY+pCPOV14wuAJFatPQfse22H+0OaLUlkERzBmMXDhxW4LbGKtBYlIfkWY+vui4AHHIm8pj
 FO+ZWm2N/f1yeSniUQygykMT+9tpLcqVx/81tlYF7aJR3+MioN+cSmdck9dS7rMKDI5o8=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1k0jpE-0005we-Bl; Wed, 29 Jul 2020 11:00:44 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1k0jpD-0002b4-U1; Wed, 29 Jul 2020 11:00:44 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1k0jpD-0001B7-TD; Wed, 29 Jul 2020 11:00:43 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-152266-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 152266: regressions - FAIL
X-Osstest-Failures: qemu-mainline:test-amd64-amd64-qemuu-nested-intel:xen-boot/l1:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-qemuu-nested-amd:xen-boot/l1:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:windows-install:fail:regression
 qemu-mainline:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: qemuu=264991512193ee50e27d43e66f832d5041cf3b28
X-Osstest-Versions-That: qemuu=9e3903136d9acde2fb2dd9e967ba928050a6cb4a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 29 Jul 2020 11:00:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 152266 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/152266/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-nested-intel 14 xen-boot/l1       fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-qemuu-nested-amd 14 xen-boot/l1         fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-win7-amd64 10 windows-install  fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-ws16-amd64 10 windows-install  fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-ws16-amd64 10 windows-install   fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-win7-amd64 10 windows-install   fail REGR. vs. 151065

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-rtds     16 guest-start/debian.repeat    fail  like 151065
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 151065
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 151065
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                264991512193ee50e27d43e66f832d5041cf3b28
baseline version:
 qemuu                9e3903136d9acde2fb2dd9e967ba928050a6cb4a

Last test of basis   151065  2020-06-12 22:27:51 Z   46 days
Failing since        151101  2020-06-14 08:32:51 Z   45 days   63 attempts
Testing same since   152266  2020-07-28 15:20:47 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Ahmed Karaman <ahmedkhaledkaraman@gmail.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.m.mail@gmail.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Richardson <Alexander.Richardson@cl.cam.ac.uk>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Boettcher <alexander.boettcher@genode-labs.com>
  Alexander Bulekov <alxndr@bu.edu>
  Alexander Duyck <alexander.h.duyck@linux.intel.com>
  Alexandre Mergnat <amergnat@baylibre.com>
  Alexey Kardashevskiy <aik@ozlabs.ru>
  Alexey Krasikov <alex-krasikov@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Allan Peramaki <aperamak@pp1.inet.fi>
  Andrew <andrew@daynix.com>
  Andrew Jones <drjones@redhat.com>
  Andrew Melnychenko <andrew@daynix.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani.sinha@nutanix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Antoine Damhet <antoine.damhet@blade-group.com>
  Ard Biesheuvel <ardb@kernel.org>
  Artyom Tarasenko <atar4qemu@gmail.com>
  Atish Patra <atish.patra@wdc.com>
  Aurelien Jarno <aurelien@aurel32.net>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Basil Salman <basil@daynix.com>
  Basil Salman <bsalman@redhat.com>
  Beata Michalska <beata.michalska@linaro.org>
  Bin Meng <bin.meng@windriver.com>
  Bin Meng <bmeng.cn@gmail.com>
  Cameron Esfahani <dirty@apple.com>
  Catherine A. Frederick <chocola@animebitch.es>
  Cathy Zhang <cathy.zhang@intel.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chenyi Qiang <chenyi.qiang@intel.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christophe de Dinechin <dinechin@redhat.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Cindy Lu <lulu@redhat.com>
  Claudio Fontana <cfontana@suse.de>
  Cleber Rosa <crosa@redhat.com>
  Colin Xu <colin.xu@intel.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniele Buono <dbuono@linux.vnet.ibm.com>
  David Carlier <devnexen@gmail.com>
  David Edmondson <david.edmondson@oracle.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Denis V. Lunev <den@openvz.org>
  Derek Su <dereksu@qnap.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Ed Robbins <E.J.C.Robbins@kent.ac.uk>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Emilio G. Cota <cota@braap.org>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  erik-smit <erik.lucas.smit@gmail.com>
  Evgeny Yakovlev <eyakovlev@virtuozzo.com>
  fangying <fangying1@huawei.com>
  Farhan Ali <alifm@linux.ibm.com>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frank Chang <frank.chang@sifive.com>
  Geoffrey McRae <geoff@hostfission.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Giuseppe Musacchio <thatlemon@gmail.com>
  Greg Kurz <groug@kaod.org>
  Guenter Roeck <linux@roeck-us.net>
  Gustavo Romero <gromero@linux.ibm.com>
  Halil Pasic <pasic@linux.ibm.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Hogan Wang <hogan.wang@huawei.com>
  Hogan Wang <king.wang@huawei.com>
  Howard Spoelstra <hsp.cat7@gmail.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Ian Jiang <ianjiang.ict@gmail.com>
  Igor Mammedov <imammedo@redhat.com>
  Jan Kiszka <jan.kiszka@siemens.com>
  Janne Grunau <j@jannau.net>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason Wang <jasowang@redhat.com>
  Jay Zhou <jianjay.zhou@huawei.com>
  Jean-Christophe Dubois <jcd@tribudubois.net>
  Jessica Clarke <jrtc27@jrtc27.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jingqi Liu <jingqi.liu@intel.com>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Joseph Myers <joseph@codesourcery.com>
  Josh Kunz <jkz@google.com>
  Juan Quintela <quintela@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Keqian Zhu <zhukeqian1@huawei.com>
  Kevin Wolf <kwolf@redhat.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  KONRAD Frederic <frederic.konrad@adacore.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Leif Lindholm <leif@nuviainc.com>
  Leonid Bloch <lb.workbox@gmail.com>
  Leonid Bloch <lbloch@janustech.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Qiang <liq3ea@gmail.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  lichun <lichun@ruijie.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  Like Xu <like.xu@linux.intel.com>
  Lingfeng Yang <lfy@google.com>
  Lingshan zhu <lingshan.zhu@intel.com>
  Liran Alon <liran.alon@oracle.com>
  Liu Yi L <yi.l.liu@intel.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Luc Michel <luc.michel@greensocs.com>
  Lukas Straub <lukasstraub2@web.de>
  Luwei Kang <luwei.kang@intel.com>
  Maciej S. Szmigiero <maciej.szmigiero@oracle.com>
  Magnus Damm <magnus.damm@gmail.com>
  Mao Zhongyi <maozhongyi@cmss.chinamobile.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marcelo Tosatti <mtosatti@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Mario Smarduch <msmarduch@digitalocean.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Masahiro Yamada <masahiroy@kernel.org>
  Matus Kysel <mkysel@tachyum.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Maxime Coquelin <maxime.coquelin@redhat.com>
  Menno Lageman <menno.lageman@oracle.com>
  Michael Rolnik <mrolnik@gmail.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michal Privoznik <mprivozn@redhat.com>
  Michele Denber <denber@mindspring.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Olaf Hering <olaf@aepfle.de>
  Pan Nengyuan <pannengyuan@huawei.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Turschmid <peter.turschm@nutanix.com>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Radoslaw Biernacki <rad@semihalf.com>
  Raphael Norwitz <raphael.norwitz@nutanix.com>
  Reza Arbab <arbab@linux.ibm.com>
  Richard Henderson <richard.henderson@linaro.org>
  Richard W.M. Jones <rjones@redhat.com>
  Riku Voipio <riku.voipio@linaro.org>
  Robert Foley <robert.foley@linaro.org>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Roman Kagan <rkagan@virtuozzo.com>
  Roman Kagan <rvkagan@yandex-team.ru>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Sarah Harris <S.E.Harris@kent.ac.uk>
  Sebastian Rasmussen <sebras@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Weil <sw@weilnetz.de>
  Sven Schnelle <svens@stackframe.org>
  Tao Xu <tao3.xu@intel.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tiwei Bie <tiwei.bie@intel.com>
  Tong Ho <tong.ho@xilinx.com>
  Viktor Mihajlovski <mihajlov@linux.ibm.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  WangBowen <bowen.wang@intel.com>
  Wei Huang <wei.huang2@amd.com>
  Wei Wang <wei.w.wang@intel.com>
  Wentong Wu <wentong.wu@intel.com>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xie Yongji <xieyongji@bytedance.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Ying Fang <fangying1@huawei.com>
  Yoshinori Sato <ysato@users.sourceforge.jp>
  Yuri Benditovich <yuri.benditovich@daynix.com>
  Zhang Chen <chen.zhang@intel.com>
  Zheng Chuan <zhengchuan@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 33362 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Jul 29 11:13:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jul 2020 11:13:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k0k1x-0005MF-48; Wed, 29 Jul 2020 11:13:53 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Wgl/=BI=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1k0k1v-0005MA-P0
 for xen-devel@lists.xenproject.org; Wed, 29 Jul 2020 11:13:51 +0000
X-Inumbo-ID: 93e4c694-d18c-11ea-a9a4-12813bfff9fa
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 93e4c694-d18c-11ea-a9a4-12813bfff9fa;
 Wed, 29 Jul 2020 11:13:50 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1596021230;
 h=from:to:cc:subject:date:message-id:mime-version:
 content-transfer-encoding;
 bh=ltCUzbEZ2+9uj7fZHbzDVa+LQOjGccZT024Lno+r4Hw=;
 b=IgGC2XarEt7HNnPZAa5DdyyU/e8EnXYUmm0UIy+STiLQ5OqBW74USwbM
 SZ+HKQcCgtkXxp+dCsI/N1JCbEkOwZlInKGtCjzikQ5k8wcpPogzr0Y1P
 FBeGmu23mJ3sXAAA486YlvlnyF6gkOJLwJACSngFQdeZTn3Wnwj/3HHSq k=;
Authentication-Results: esa2.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: nvPtda8nmZUyePAFIgJ5GzuHipQyvX5GSfEe328YF4FLzBeOV5SLGXjhTpTIuJmZHkv0sV3Scj
 hVakrvywlKyKXFiSRJAGQmaXr7Fs87YSYfUz2bWrPcsWkET1G/fn+N4ZDI7geZzn9WkanZxY/9
 mBlwDlY18mHV6E+CTT1ZQR/mzdg+zOwgxtvshZbWPijKXDN9KGdcAzL7AZBdevwx7h27J1CZoy
 XJEM7Xk3vGua0kXu1/u2xMqZW3CNnJPc2f8dyMoGpXHmdi8D35Utj2E0Mjs2fpDPYiCu56BJjN
 h+4=
X-SBRS: 2.7
X-MesageID: 23442515
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,410,1589256000"; d="scan'208";a="23442515"
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xenproject.org>
Subject: [PATCH] xen/spinlock: move debug helpers inside the locked regions
Date: Wed, 29 Jul 2020 13:13:30 +0200
Message-ID: <20200729111330.64549-1-roger.pau@citrix.com>
X-Mailer: git-send-email 2.27.0
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Igor Druzhinin <igor.druzhinin@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>, Ian
 Jackson <ian.jackson@eu.citrix.com>, George Dunlap <george.dunlap@citrix.com>,
 Jan Beulich <jbeulich@suse.com>, Roger Pau Monne <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Debug helpers such as lock profiling or the invariant pCPU assertions
must strictly be performed inside the exclusively locked region, or
else races can happen.

Note the issue was not strictly introduced by the commit in the Fixes
tag, since lock stats were already incremented before the barrier, but
that commit made it more apparent, as manipulating the cpu field could
happen outside of the locked regions and thus trigger the BUG_ON. This
is only enabled on debug builds, and thus releases are not affected.

Fixes: 80cba391a35 ('spinlocks: in debug builds store cpu holding the lock')
Reported-by: Igor Druzhinin <igor.druzhinin@citrix.com>
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
 xen/common/spinlock.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/xen/common/spinlock.c b/xen/common/spinlock.c
index 17f4519fc7..ce3106e2d3 100644
--- a/xen/common/spinlock.c
+++ b/xen/common/spinlock.c
@@ -170,9 +170,9 @@ void inline _spin_lock_cb(spinlock_t *lock, void (*cb)(void *), void *data)
             cb(data);
         arch_lock_relax();
     }
+    arch_lock_acquire_barrier();
     got_lock(&lock->debug);
     LOCK_PROFILE_GOT;
-    arch_lock_acquire_barrier();
 }
 
 void _spin_lock(spinlock_t *lock)
@@ -198,9 +198,9 @@ unsigned long _spin_lock_irqsave(spinlock_t *lock)
 
 void _spin_unlock(spinlock_t *lock)
 {
-    arch_lock_release_barrier();
     LOCK_PROFILE_REL;
     rel_lock(&lock->debug);
+    arch_lock_release_barrier();
     add_sized(&lock->tickets.head, 1);
     arch_lock_signal();
     preempt_enable();
@@ -249,15 +249,15 @@ int _spin_trylock(spinlock_t *lock)
         preempt_enable();
         return 0;
     }
+    /*
+     * cmpxchg() is a full barrier so no need for an
+     * arch_lock_acquire_barrier().
+     */
     got_lock(&lock->debug);
 #ifdef CONFIG_DEBUG_LOCK_PROFILE
     if (lock->profile)
         lock->profile->time_locked = NOW();
 #endif
-    /*
-     * cmpxchg() is a full barrier so no need for an
-     * arch_lock_acquire_barrier().
-     */
     return 1;
 }
 
-- 
2.27.0



From xen-devel-bounces@lists.xenproject.org Wed Jul 29 13:22:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jul 2020 13:22:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k0m21-0007eL-7C; Wed, 29 Jul 2020 13:22:05 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ULCb=BI=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1k0m20-0007e1-EC
 for xen-devel@lists.xenproject.org; Wed, 29 Jul 2020 13:22:04 +0000
X-Inumbo-ID: 7a705248-d19e-11ea-a9d3-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7a705248-d19e-11ea-a9d3-12813bfff9fa;
 Wed, 29 Jul 2020 13:21:57 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=NXeePRQFOlKZZKH0vaaXF+/Hb68nt34CUdMTZ/EbfII=; b=whN2Ma5uzuFHOajIPgMrw89RB
 Ex7GdoZg1u0+KXxGALi9RedjoQ+BgYJ+kfioa1rXx9yixIj47++EHr1jV/W8Icf4PyqDA4abeH6ju
 /2gVky4wIMIovGpxUnID0UWSv9e/98l/aZXbAhubE8LmjjNl/rRMYNwEK0m5UK6kvpl0I=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1k0m1t-0000NW-9z; Wed, 29 Jul 2020 13:21:57 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1k0m1s-0003dF-Tf; Wed, 29 Jul 2020 13:21:57 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1k0m1s-0007Vl-T0; Wed, 29 Jul 2020 13:21:56 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-152277-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 152277: all pass - PUSHED
X-Osstest-Versions-This: ovmf=e848b58d7c85293cd4121287abcea2d22a4f0620
X-Osstest-Versions-That: ovmf=744ad444e5306ef68edbe899b5f5dc87e82c146b
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 29 Jul 2020 13:21:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 152277 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/152277/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 e848b58d7c85293cd4121287abcea2d22a4f0620
baseline version:
 ovmf                 744ad444e5306ef68edbe899b5f5dc87e82c146b

Last test of basis   152270  2020-07-28 19:10:31 Z    0 days
Testing same since   152277  2020-07-29 04:16:17 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Abner Chang <abner.chang@hpe.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   744ad444e5..e848b58d7c  e848b58d7c85293cd4121287abcea2d22a4f0620 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Wed Jul 29 13:38:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jul 2020 13:38:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k0mHI-0000CI-Ji; Wed, 29 Jul 2020 13:37:52 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=hQvr=BI=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1k0mHG-0000CD-W1
 for xen-devel@lists.xenproject.org; Wed, 29 Jul 2020 13:37:51 +0000
X-Inumbo-ID: b1f319e2-d1a0-11ea-a9dd-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b1f319e2-d1a0-11ea-a9dd-12813bfff9fa;
 Wed, 29 Jul 2020 13:37:50 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=z8E+vyWXWo0n5LO3YM0SyvNsPVMjLfOwh8ImNv99O2A=; b=LRvUhClPw8bGd2TT2GQ2/iWvLZ
 e3VXIFFfWdAc+PXNdOBHSnUXrwH1hGL/MS9lG9t4yCUxKUNfu/MxWijHtR2oEhUUb/WHQDEQZRww1
 t1UvF/kzm+niglPtO3fvpaELZSI7bYD3ETA4SNl60W+oABeMlMNZdWcUFMLYVAIHbj2I=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1k0mHD-0000h4-6h; Wed, 29 Jul 2020 13:37:47 +0000
Received: from 54-240-197-228.amazon.com ([54.240.197.228]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1k0mHC-00070X-Tq; Wed, 29 Jul 2020 13:37:47 +0000
Subject: Re: [PATCH] xen/spinlock: move debug helpers inside the locked regions
To: Roger Pau Monne <roger.pau@citrix.com>, xen-devel@lists.xenproject.org
References: <20200729111330.64549-1-roger.pau@citrix.com>
From: Julien Grall <julien@xen.org>
Message-ID: <16dd0f04-598b-8b84-8a25-6b89af9214d7@xen.org>
Date: Wed, 29 Jul 2020 14:37:44 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:68.0)
 Gecko/20100101 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20200729111330.64549-1-roger.pau@citrix.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Igor Druzhinin <igor.druzhinin@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi Roger,

On 29/07/2020 12:13, Roger Pau Monne wrote:
> Debug helpers such as lock profiling or the invariant pCPU assertions
> must strictly be performed inside the exclusive locked region, or else
> races might happen.
> 
> Note the issue was not strictly introduced by the commit referenced in
> the Fixes tag, since lock stats were already incremented before the
> barrier, but that commit made it more apparent as manipulating the cpu
> field could happen outside of the locked regions and thus trigger the
> BUG_ON.

 From the wording, it is not entirely clear which BUG_ON() you are
referring to. I am guessing it is the one in rel_lock(). Am I correct?

Otherwise, the change looks good to me.

Cheers,

> This is only enabled on debug builds, and thus releases are
> not affected.
> 
> Fixes: 80cba391a35 ('spinlocks: in debug builds store cpu holding the lock')
> Reported-by: Igor Druzhinin <igor.druzhinin@citrix.com>
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> ---
>   xen/common/spinlock.c | 12 ++++++------
>   1 file changed, 6 insertions(+), 6 deletions(-)
> 
> diff --git a/xen/common/spinlock.c b/xen/common/spinlock.c
> index 17f4519fc7..ce3106e2d3 100644
> --- a/xen/common/spinlock.c
> +++ b/xen/common/spinlock.c
> @@ -170,9 +170,9 @@ void inline _spin_lock_cb(spinlock_t *lock, void (*cb)(void *), void *data)
>               cb(data);
>           arch_lock_relax();
>       }
> +    arch_lock_acquire_barrier();
>       got_lock(&lock->debug);
>       LOCK_PROFILE_GOT;
> -    arch_lock_acquire_barrier();
>   }
>   
>   void _spin_lock(spinlock_t *lock)
> @@ -198,9 +198,9 @@ unsigned long _spin_lock_irqsave(spinlock_t *lock)
>   
>   void _spin_unlock(spinlock_t *lock)
>   {
> -    arch_lock_release_barrier();
>       LOCK_PROFILE_REL;
>       rel_lock(&lock->debug);
> +    arch_lock_release_barrier();
>       add_sized(&lock->tickets.head, 1);
>       arch_lock_signal();
>       preempt_enable();
> @@ -249,15 +249,15 @@ int _spin_trylock(spinlock_t *lock)
>           preempt_enable();
>           return 0;
>       }
> +    /*
> +     * cmpxchg() is a full barrier so no need for an
> +     * arch_lock_acquire_barrier().
> +     */
>       got_lock(&lock->debug);
>   #ifdef CONFIG_DEBUG_LOCK_PROFILE
>       if (lock->profile)
>           lock->profile->time_locked = NOW();
>   #endif
> -    /*
> -     * cmpxchg() is a full barrier so no need for an
> -     * arch_lock_acquire_barrier().
> -     */
>       return 1;
>   }
>   
> 

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Jul 29 13:51:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jul 2020 13:51:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k0mTy-0001nU-PR; Wed, 29 Jul 2020 13:50:58 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Wgl/=BI=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1k0mTx-0001nP-JO
 for xen-devel@lists.xenproject.org; Wed, 29 Jul 2020 13:50:57 +0000
X-Inumbo-ID: 86482592-d1a2-11ea-a9e0-12813bfff9fa
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 86482592-d1a2-11ea-a9e0-12813bfff9fa;
 Wed, 29 Jul 2020 13:50:56 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1596030655;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:in-reply-to;
 bh=+qc1DfEjoiXNVWZ2IG+kEXKZSJ0lzCe82RtpTGv5Hwc=;
 b=FlY5+I4GKBEpfLKwEbT86OMqWkzxXa97n0/SI/gzpUsrHasdaU9O9O1O
 /qhxZ5udADRsm5dN/lISyqF5sg3l3Dvo5pFsUitTRxS8BjtnqJj2N5/M6
 H8nXMX2JE+JH7JIlvn6zXrHONwck59LNkvEK/KFdCFaKGwiPjAQQRponA E=;
Authentication-Results: esa6.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: Acl+VR3AtgtHSSwAQ3sQGZ/3906aHJ6E9Gt6P6wdChGqnhHnH6ILG/3X37zIshulMeA3Aj5XGK
 /F9qDUSI4QyFjgtoqFtI4EdZ/fjqR0w8XUPNH1ciYeJh3scHmrWbK3DFOnc24V4lquCv5f3rNB
 keiO2Sxn81x2ddoIK8Dp3Z56tOCwoE7Gxwl3h20ZaWybIlopFepgNDfv/QZIq/3pPZPWjlkzVF
 ltgc+zCumF+9Tz1VUdQvLxyGuACVQQgHHdHlI+gewqwsZgi0dqx2XBgqfeW/gCvMdKKhrnN5PG
 Et8=
X-SBRS: 2.7
X-MesageID: 23772965
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,410,1589256000"; d="scan'208";a="23772965"
Date: Wed, 29 Jul 2020 15:50:45 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Julien Grall <julien@xen.org>
Subject: Re: [PATCH] xen/spinlock: move debug helpers inside the locked regions
Message-ID: <20200729135045.GD7191@Air-de-Roger>
References: <20200729111330.64549-1-roger.pau@citrix.com>
 <16dd0f04-598b-8b84-8a25-6b89af9214d7@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
In-Reply-To: <16dd0f04-598b-8b84-8a25-6b89af9214d7@xen.org>
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Igor Druzhinin <igor.druzhinin@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Ian
 Jackson <ian.jackson@eu.citrix.com>, George Dunlap <george.dunlap@citrix.com>,
 Jan Beulich <jbeulich@suse.com>, xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Wed, Jul 29, 2020 at 02:37:44PM +0100, Julien Grall wrote:
> Hi Roger,
> 
> On 29/07/2020 12:13, Roger Pau Monne wrote:
> > Debug helpers such as lock profiling or the invariant pCPU assertions
> > must strictly be performed inside the exclusive locked region, or else
> > races might happen.
> > 
> > Note the issue was not strictly introduced by the commit referenced in
> > the Fixes tag, since lock stats were already incremented before the
> > barrier, but that commit made it more apparent as manipulating the cpu
> > field could happen outside of the locked regions and thus trigger the
> > BUG_ON.
> 
> From the wording, it is not entirely clear which BUG_ON() you are referring
> to. I am guessing, it is the one in rel_lock(). Am I correct?

Yes, that's right. Expanding to:

"...  and thus trigger the BUG_ON in rel_lock()." would be better.

> Otherwise, the change looks good to me.

Thanks.


From xen-devel-bounces@lists.xenproject.org Wed Jul 29 14:57:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jul 2020 14:57:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k0nWG-00077B-EK; Wed, 29 Jul 2020 14:57:24 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=hQvr=BI=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1k0nWE-000774-Na
 for xen-devel@lists.xenproject.org; Wed, 29 Jul 2020 14:57:22 +0000
X-Inumbo-ID: ce2fe8be-d1ab-11ea-aa05-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ce2fe8be-d1ab-11ea-aa05-12813bfff9fa;
 Wed, 29 Jul 2020 14:57:21 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=QhHEaHPkW4uhDh2CPLU4RGxx9scYOzNumVO2pU8mGGw=; b=1WOxbrDcGnX1WwsrimygZXLZvZ
 VtFZkOJnD5z3bXkaz/vBK37BiCKEA1LEGVogG4T385YwjvS5INB/Y2aT5AS6RORdCF/qGmBM+RNfT
 eSgqshUfzZEDvp9E6G2YwdX9Y7Z4HrC29UF6E8sReTz2EnRU5vrnFp/uuEPnK8Aa1L1Q=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1k0nWB-0002Pm-6n; Wed, 29 Jul 2020 14:57:19 +0000
Received: from 54-240-197-236.amazon.com ([54.240.197.236]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1k0nWA-0003s2-V0; Wed, 29 Jul 2020 14:57:19 +0000
Subject: Re: [PATCH] xen/spinlock: move debug helpers inside the locked regions
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <20200729111330.64549-1-roger.pau@citrix.com>
 <16dd0f04-598b-8b84-8a25-6b89af9214d7@xen.org>
 <20200729135045.GD7191@Air-de-Roger>
From: Julien Grall <julien@xen.org>
Message-ID: <bf6cdb76-e4ca-da72-182f-d61de3e92ccf@xen.org>
Date: Wed, 29 Jul 2020 15:57:16 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:68.0)
 Gecko/20100101 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20200729135045.GD7191@Air-de-Roger>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Igor Druzhinin <igor.druzhinin@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi Roger,

On 29/07/2020 14:50, Roger Pau Monné wrote:
> On Wed, Jul 29, 2020 at 02:37:44PM +0100, Julien Grall wrote:
>> Hi Roger,
>>
>> On 29/07/2020 12:13, Roger Pau Monne wrote:
>>> Debug helpers such as lock profiling or the invariant pCPU assertions
>>> must strictly be performed inside the exclusive locked region, or else
>>> races might happen.
>>>
>>> Note the issue was not strictly introduced by the commit referenced in
>>> the Fixes tag, since lock stats were already incremented before the
>>> barrier, but that commit made it more apparent as manipulating the cpu
>>> field could happen outside of the locked regions and thus trigger the
>>> BUG_ON.
>>
>>  From the wording, it is not entirely clear which BUG_ON() you are referring
>> to. I am guessing, it is the one in rel_lock(). Am I correct?
> 
> Yes, that's right. Expanding to:
> 
> "...  and thus trigger the BUG_ON in rel_lock()." would be better.

Looks good to me. With that:

Reviewed-by: Julien Grall <jgrall@amazon.com>

I am happy to do the update on commit if there are no more comments.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Jul 29 15:02:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jul 2020 15:02:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k0nai-00082i-In; Wed, 29 Jul 2020 15:02:00 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ULCb=BI=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1k0nah-00082c-4L
 for xen-devel@lists.xenproject.org; Wed, 29 Jul 2020 15:01:59 +0000
X-Inumbo-ID: 71eaf156-d1ac-11ea-aa08-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 71eaf156-d1ac-11ea-aa08-12813bfff9fa;
 Wed, 29 Jul 2020 15:01:56 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Message-Id:Subject:To:Sender:
 Reply-To:Cc:MIME-Version:Content-Type:Content-Transfer-Encoding:Content-ID:
 Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc
 :Resent-Message-ID:In-Reply-To:References:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=vQ9fkxV/Z+RgSE525Xkxh/Ug15hF3oNaUo2aBvQrv+E=; b=OgOHPpHLKQkDMIVzO1GP73F0gG
 w4cpxUIgj9cKET1kquFhRtN5WFcpNZ918MTaYCekuVIZAACb41wKj0cBWW+gpfXAtqV0rv3MEgUbd
 1JGcYsgnev9wbUKk6nLlxYOdgBpX/3WkJ/NWoSdXLhrELKhSKpDkH9ffGqfhrLJGgd4Y=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1k0nae-0002Xt-73; Wed, 29 Jul 2020 15:01:56 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1k0nad-0002Zd-Vn; Wed, 29 Jul 2020 15:01:56 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1k0nad-0006oc-VH; Wed, 29 Jul 2020 15:01:55 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Subject: [qemu-mainline bisection] complete
 test-amd64-amd64-xl-qemuu-ws16-amd64
Message-Id: <E1k0nad-0006oc-VH@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 29 Jul 2020 15:01:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

branch xen-unstable
xenbranch xen-unstable
job test-amd64-amd64-xl-qemuu-ws16-amd64
testid windows-install

Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://git.qemu.org/qemu.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  qemuu git://git.qemu.org/qemu.git
  Bug introduced:  7d3660e79830a069f1848bb4fa1cdf8f666424fb
  Bug not present: 9e3903136d9acde2fb2dd9e967ba928050a6cb4a
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/152280/


  (Revision log too long, omitted.)


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/qemu-mainline/test-amd64-amd64-xl-qemuu-ws16-amd64.windows-install.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/qemu-mainline/test-amd64-amd64-xl-qemuu-ws16-amd64.windows-install --summary-out=tmp/152285.bisection-summary --basis-template=151065 --blessings=real,real-bisect qemu-mainline test-amd64-amd64-xl-qemuu-ws16-amd64 windows-install
Searching for failure / basis pass:
 152266 fail [host=rimava1] / 151065 ok.
Failure / basis pass flights: 152266 / 151065
(tree with no url: minios)
Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://git.qemu.org/qemu.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git
Latest c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ffde22468e2f0e93b51f97b801e6c7a181088c61 3c659044118e34603161457db9934a34f816d78b 264991512193ee50e27d43e66f832d5041cf3b28 6ada2285d9918859699c92e09540e023e0a16054 0562cbc14cf02b8188b9f1f37f39a4886776ce7c
Basis pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 dafce295e6f447ed8905db4e29241e2c6c2a4389 3c659044118e34603161457db9934a34f816d78b 9e3903136d9acde2fb2dd9e967ba928050a6cb4a 2e3de6253422112ae43e608661ba94ea6b345694 058023b343d4e366864831db46e9b438e9e3a178
Generating revisions with ./adhoc-revtuple-generator  git://xenbits.xen.org/linux-pvops.git#c3038e718a19fc596f7b1baba0f83d5146dc7784-c3038e718a19fc596f7b1baba0f83d5146dc7784 git://xenbits.xen.org/osstest/linux-firmware.git#c530a75c1e6a472b0eb9558310b518f0dfcd8860-c530a75c1e6a472b0eb9558310b518f0dfcd8860 git://xenbits.xen.org/osstest/ovmf.git#dafce295e6f447ed8905db4e29241e2c6c2a4389-ffde22468e2f0e93b51f97b801e6c7a181088c61 git://xenbits.xen.org/qemu-xen-traditional.git#3c659044118e34603161457db99\
 34a34f816d78b-3c659044118e34603161457db9934a34f816d78b git://git.qemu.org/qemu.git#9e3903136d9acde2fb2dd9e967ba928050a6cb4a-264991512193ee50e27d43e66f832d5041cf3b28 git://xenbits.xen.org/osstest/seabios.git#2e3de6253422112ae43e608661ba94ea6b345694-6ada2285d9918859699c92e09540e023e0a16054 git://xenbits.xen.org/xen.git#058023b343d4e366864831db46e9b438e9e3a178-0562cbc14cf02b8188b9f1f37f39a4886776ce7c
>From git://cache:9419/git://xenbits.xen.org/osstest/ovmf
   744ad444e5..e848b58d7c  xen-tested-master -> origin/xen-tested-master
Use of uninitialized value $parents in array dereference at ./adhoc-revtuple-generator line 465.
Use of uninitialized value in concatenation (.) or string at ./adhoc-revtuple-generator line 465.
Use of uninitialized value $parents in array dereference at ./adhoc-revtuple-generator line 465.
Use of uninitialized value in concatenation (.) or string at ./adhoc-revtuple-generator line 465.
Use of uninitialized value $parents in array dereference at ./adhoc-revtuple-generator line 465.
Use of uninitialized value in concatenation (.) or string at ./adhoc-revtuple-generator line 465.
Use of uninitialized value $parents in array dereference at ./adhoc-revtuple-generator line 465.
Use of uninitialized value in concatenation (.) or string at ./adhoc-revtuple-generator line 465.
Loaded 55489 nodes in revision graph
Searching for test results:
 151101 fail irrelevant
 151065 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 dafce295e6f447ed8905db4e29241e2c6c2a4389 3c659044118e34603161457db9934a34f816d78b 9e3903136d9acde2fb2dd9e967ba928050a6cb4a 2e3de6253422112ae43e608661ba94ea6b345694 058023b343d4e366864831db46e9b438e9e3a178
 151149 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9af1064995d479df92e399a682ba7e4b2fc78415 3c659044118e34603161457db9934a34f816d78b 7d3660e79830a069f1848bb4fa1cdf8f666424fb 2e3de6253422112ae43e608661ba94ea6b345694 b91825f628c9a62cf2a3a0d972ea81484a8b7fce
 151221 fail irrelevant
 151175 fail irrelevant
 151241 fail irrelevant
 151286 fail irrelevant
 151269 fail irrelevant
 151328 fail irrelevant
 151304 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 322969adf1fb3d6cfbd613f35121315715aff2ed 3c659044118e34603161457db9934a34f816d78b 171199f56f5f9bdf1e5d670d09ef1351d8f01bae 2e3de6253422112ae43e608661ba94ea6b345694 fde76f895d0aa817a1207d844d793239c6639bc6
 151377 fail irrelevant
 151353 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1a992030522622c42aa063788b3276789c56c1e1 3c659044118e34603161457db9934a34f816d78b d4b78317b7cf8c0c635b70086503813f79ff21ec 2e3de6253422112ae43e608661ba94ea6b345694 fde76f895d0aa817a1207d844d793239c6639bc6
 151399 fail irrelevant
 151414 fail irrelevant
 151435 fail irrelevant
 151459 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 0f01cec52f4794777feb067e4fa0bfcedfdc124e 3c659044118e34603161457db9934a34f816d78b e7651153a8801dad6805d450ea8bef9b46c1adf5 88ab0c15525ced2eefe39220742efe4769089ad8 88cfd062e8318dfeb67c7d2eb50b6cd224b0738a
 151471 fail irrelevant
 151485 fail irrelevant
 151500 fail irrelevant
 151518 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 00217f1919270007d7a911f89b32e39b9dcaa907 3c659044118e34603161457db9934a34f816d78b fc1bff958998910ec8d25db86cd2f53ff125f7ab 88ab0c15525ced2eefe39220742efe4769089ad8 23ca7ec0ba620db52a646d80e22f9703a6589f66
 151547 fail irrelevant
 151598 fail irrelevant
 151577 fail irrelevant
 151622 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 627d1d6693b0594d257dbe1a3363a8d4bd4d8307 3c659044118e34603161457db9934a34f816d78b 7b7515702012219410802a168ae4aa45b72a44df 88ab0c15525ced2eefe39220742efe4769089ad8 be63d9d47f571a60d70f8fb630c03871312d9655
 151656 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 627d1d6693b0594d257dbe1a3363a8d4bd4d8307 3c659044118e34603161457db9934a34f816d78b eb6490f544388dd24c0d054a96dd304bc7284450 88ab0c15525ced2eefe39220742efe4769089ad8 be63d9d47f571a60d70f8fb630c03871312d9655
 151634 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 627d1d6693b0594d257dbe1a3363a8d4bd4d8307 3c659044118e34603161457db9934a34f816d78b eb6490f544388dd24c0d054a96dd304bc7284450 88ab0c15525ced2eefe39220742efe4769089ad8 be63d9d47f571a60d70f8fb630c03871312d9655
 151645 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 627d1d6693b0594d257dbe1a3363a8d4bd4d8307 3c659044118e34603161457db9934a34f816d78b eb6490f544388dd24c0d054a96dd304bc7284450 88ab0c15525ced2eefe39220742efe4769089ad8 be63d9d47f571a60d70f8fb630c03871312d9655
 151669 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 627d1d6693b0594d257dbe1a3363a8d4bd4d8307 3c659044118e34603161457db9934a34f816d78b eb6490f544388dd24c0d054a96dd304bc7284450 88ab0c15525ced2eefe39220742efe4769089ad8 be63d9d47f571a60d70f8fb630c03871312d9655
 151685 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 627d1d6693b0594d257dbe1a3363a8d4bd4d8307 3c659044118e34603161457db9934a34f816d78b eb6490f544388dd24c0d054a96dd304bc7284450 88ab0c15525ced2eefe39220742efe4769089ad8 be63d9d47f571a60d70f8fb630c03871312d9655
 151704 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 627d1d6693b0594d257dbe1a3363a8d4bd4d8307 3c659044118e34603161457db9934a34f816d78b eb6490f544388dd24c0d054a96dd304bc7284450 88ab0c15525ced2eefe39220742efe4769089ad8 be63d9d47f571a60d70f8fb630c03871312d9655
 151778 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 bdafda8c457eb90c65f37026589b54258300f05c 3c659044118e34603161457db9934a34f816d78b aff2caf6b3fbab1062e117a47b66d27f7fd2f272 88ab0c15525ced2eefe39220742efe4769089ad8 3fdc211b01b29f252166937238efe02d15cb5780
 151721 fail irrelevant
 151763 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 bdafda8c457eb90c65f37026589b54258300f05c 3c659044118e34603161457db9934a34f816d78b 48f22ad04ead83e61b4b35871ec6f6109779b791 88ab0c15525ced2eefe39220742efe4769089ad8 3fdc211b01b29f252166937238efe02d15cb5780
 151744 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 bdafda8c457eb90c65f37026589b54258300f05c 3c659044118e34603161457db9934a34f816d78b 8796c64ecdfd34be394ea277aaaaa53df0c76996 88ab0c15525ced2eefe39220742efe4769089ad8 3fdc211b01b29f252166937238efe02d15cb5780
 151804 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 bdafda8c457eb90c65f37026589b54258300f05c 3c659044118e34603161457db9934a34f816d78b 45db94cc90c286a9965a285ba19450f448760a09 88ab0c15525ced2eefe39220742efe4769089ad8 3fdc211b01b29f252166937238efe02d15cb5780
 151816 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 bdafda8c457eb90c65f37026589b54258300f05c 3c659044118e34603161457db9934a34f816d78b 45db94cc90c286a9965a285ba19450f448760a09 88ab0c15525ced2eefe39220742efe4769089ad8 3fdc211b01b29f252166937238efe02d15cb5780
 151833 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f45e3a4afa65a45ea1a956a7c5e7410ff40190d1 3c659044118e34603161457db9934a34f816d78b 827937158b72ce2265841ff528bba3c44a1bfbc8 88ab0c15525ced2eefe39220742efe4769089ad8 02d69864b51a4302a148c28d6d391238a6778b4b
 151855 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f45e3a4afa65a45ea1a956a7c5e7410ff40190d1 3c659044118e34603161457db9934a34f816d78b d34498309cff7560ac90c422c56e3137e6a64b19 88ab0c15525ced2eefe39220742efe4769089ad8 02d69864b51a4302a148c28d6d391238a6778b4b
 151841 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f45e3a4afa65a45ea1a956a7c5e7410ff40190d1 3c659044118e34603161457db9934a34f816d78b 2033cc6efa98b831d7839e367aa7d5aa74d0750f 88ab0c15525ced2eefe39220742efe4769089ad8 02d69864b51a4302a148c28d6d391238a6778b4b
 151849 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f45e3a4afa65a45ea1a956a7c5e7410ff40190d1 3c659044118e34603161457db9934a34f816d78b d34498309cff7560ac90c422c56e3137e6a64b19 88ab0c15525ced2eefe39220742efe4769089ad8 02d69864b51a4302a148c28d6d391238a6778b4b
 151874 fail irrelevant
 151895 fail irrelevant
 151914 fail irrelevant
 151934 fail irrelevant
 151968 fail irrelevant
 151952 fail irrelevant
 152013 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d8327496762b4f2a54c9bafd7a214314ec28e9e 3c659044118e34603161457db9934a34f816d78b 939ab64b400b9bec4b59795a87817784093e1acd 6ada2285d9918859699c92e09540e023e0a16054 fb024b779336a0f73b3aee885b2ce082e812881f
 151988 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 6ff53d2a13740e39dea110d6b3509c156c659586 3c659044118e34603161457db9934a34f816d78b b7bda69c4ef46c57480f6e378923f5215b122778 6ada2285d9918859699c92e09540e023e0a16054 f8fe3c07363d11fc81d8e7382dbcaa357c861569
 151999 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d8327496762b4f2a54c9bafd7a214314ec28e9e 3c659044118e34603161457db9934a34f816d78b 97f750becac33e3d3e446d3ff4ae9af2577b7877 6ada2285d9918859699c92e09540e023e0a16054 fb024b779336a0f73b3aee885b2ce082e812881f
 152026 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d8327496762b4f2a54c9bafd7a214314ec28e9e 3c659044118e34603161457db9934a34f816d78b 9fc87111005e8903785db40819af66b8f85b8b96 6ada2285d9918859699c92e09540e023e0a16054 fb024b779336a0f73b3aee885b2ce082e812881f
 152039 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d8327496762b4f2a54c9bafd7a214314ec28e9e 3c659044118e34603161457db9934a34f816d78b 9fc87111005e8903785db40819af66b8f85b8b96 6ada2285d9918859699c92e09540e023e0a16054 fb024b779336a0f73b3aee885b2ce082e812881f
 152058 fail irrelevant
 152076 fail irrelevant
 152108 fail irrelevant
 152144 fail irrelevant
 152171 fail irrelevant
 152200 fail irrelevant
 152242 fail irrelevant
 152250 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fd708fe0e1f813d6faf02d92ec5e8d73ce876ed1 3c659044118e34603161457db9934a34f816d78b 7d3660e79830a069f1848bb4fa1cdf8f666424fb 2e3de6253422112ae43e608661ba94ea6b345694 b91825f628c9a62cf2a3a0d972ea81484a8b7fce
 152189 fail irrelevant
 152211 fail irrelevant
 152243 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3ee4f6cb360a877d171f2f9bb76b0d46d2cfa985 3c659044118e34603161457db9934a34f816d78b 9e3903136d9acde2fb2dd9e967ba928050a6cb4a 2e3de6253422112ae43e608661ba94ea6b345694 7028534d8482d25860c4d1aa8e45f0b911abfc5a
 152259 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 567bc4b4ae8a975791382dd30ac413bc0d3ce88c 3c659044118e34603161457db9934a34f816d78b 7d3660e79830a069f1848bb4fa1cdf8f666424fb 2e3de6253422112ae43e608661ba94ea6b345694 b91825f628c9a62cf2a3a0d972ea81484a8b7fce
 152234 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 dafce295e6f447ed8905db4e29241e2c6c2a4389 3c659044118e34603161457db9934a34f816d78b 9e3903136d9acde2fb2dd9e967ba928050a6cb4a 2e3de6253422112ae43e608661ba94ea6b345694 058023b343d4e366864831db46e9b438e9e3a178
 152219 fail irrelevant
 152245 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 394e8e4bf586b4749620a48a23c816ee19f0e04a 3c659044118e34603161457db9934a34f816d78b 9e3903136d9acde2fb2dd9e967ba928050a6cb4a 2e3de6253422112ae43e608661ba94ea6b345694 2995d0afdf2d3fb44d07eada088db3613741db1e
 152227 fail irrelevant
 152236 fail irrelevant
 152271 fail irrelevant
 152248 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 567bc4b4ae8a975791382dd30ac413bc0d3ce88c 3c659044118e34603161457db9934a34f816d78b 9e3903136d9acde2fb2dd9e967ba928050a6cb4a 2e3de6253422112ae43e608661ba94ea6b345694 b91825f628c9a62cf2a3a0d972ea81484a8b7fce
 152241 fail irrelevant
 152285 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ffde22468e2f0e93b51f97b801e6c7a181088c61 3c659044118e34603161457db9934a34f816d78b 264991512193ee50e27d43e66f832d5041cf3b28 6ada2285d9918859699c92e09540e023e0a16054 0562cbc14cf02b8188b9f1f37f39a4886776ce7c
 152268 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 dafce295e6f447ed8905db4e29241e2c6c2a4389 3c659044118e34603161457db9934a34f816d78b 9e3903136d9acde2fb2dd9e967ba928050a6cb4a 2e3de6253422112ae43e608661ba94ea6b345694 058023b343d4e366864831db46e9b438e9e3a178
 152273 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 567bc4b4ae8a975791382dd30ac413bc0d3ce88c 3c659044118e34603161457db9934a34f816d78b 9e3903136d9acde2fb2dd9e967ba928050a6cb4a 2e3de6253422112ae43e608661ba94ea6b345694 b91825f628c9a62cf2a3a0d972ea81484a8b7fce
 152266 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ffde22468e2f0e93b51f97b801e6c7a181088c61 3c659044118e34603161457db9934a34f816d78b 264991512193ee50e27d43e66f832d5041cf3b28 6ada2285d9918859699c92e09540e023e0a16054 0562cbc14cf02b8188b9f1f37f39a4886776ce7c
 152274 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 567bc4b4ae8a975791382dd30ac413bc0d3ce88c 3c659044118e34603161457db9934a34f816d78b 7d3660e79830a069f1848bb4fa1cdf8f666424fb 2e3de6253422112ae43e608661ba94ea6b345694 b91825f628c9a62cf2a3a0d972ea81484a8b7fce
 152279 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 567bc4b4ae8a975791382dd30ac413bc0d3ce88c 3c659044118e34603161457db9934a34f816d78b 9e3903136d9acde2fb2dd9e967ba928050a6cb4a 2e3de6253422112ae43e608661ba94ea6b345694 b91825f628c9a62cf2a3a0d972ea81484a8b7fce
 152280 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 567bc4b4ae8a975791382dd30ac413bc0d3ce88c 3c659044118e34603161457db9934a34f816d78b 7d3660e79830a069f1848bb4fa1cdf8f666424fb 2e3de6253422112ae43e608661ba94ea6b345694 b91825f628c9a62cf2a3a0d972ea81484a8b7fce
Searching for interesting versions
 Result found: flight 151065 (pass), for basis pass
 Result found: flight 152266 (fail), for basis failure
 Repro found: flight 152268 (pass), for basis pass
 Repro found: flight 152285 (fail), for basis failure
 0 revisions at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 567bc4b4ae8a975791382dd30ac413bc0d3ce88c 3c659044118e34603161457db9934a34f816d78b 9e3903136d9acde2fb2dd9e967ba928050a6cb4a 2e3de6253422112ae43e608661ba94ea6b345694 b91825f628c9a62cf2a3a0d972ea81484a8b7fce
No revisions left to test, checking graph state.
 Result found: flight 152248 (pass), for last pass
 Result found: flight 152259 (fail), for first failure
 Repro found: flight 152273 (pass), for last pass
 Repro found: flight 152274 (fail), for first failure
 Repro found: flight 152279 (pass), for last pass
 Repro found: flight 152280 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  qemuu git://git.qemu.org/qemu.git
  Bug introduced:  7d3660e79830a069f1848bb4fa1cdf8f666424fb
  Bug not present: 9e3903136d9acde2fb2dd9e967ba928050a6cb4a
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/152280/


  (Revision log too long, omitted.)

dot: graph is too large for cairo-renderer bitmaps. Scaling by 0.546244 to fit
pnmtopng: 158 colors found
Revision graph left in /home/logs/results/bisect/qemu-mainline/test-amd64-amd64-xl-qemuu-ws16-amd64.windows-install.{dot,ps,png,html,svg}.
----------------------------------------
152285: tolerable FAIL

flight 152285 qemu-mainline real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/152285/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 test-amd64-amd64-xl-qemuu-ws16-amd64 10 windows-install fail baseline untested


jobs:
 build-amd64                                                  pass    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Wed Jul 29 17:08:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jul 2020 17:08:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k0pYt-00025X-Io; Wed, 29 Jul 2020 17:08:15 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ULCb=BI=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1k0pYr-00025S-Qj
 for xen-devel@lists.xenproject.org; Wed, 29 Jul 2020 17:08:13 +0000
X-Inumbo-ID: 14be66c2-d1be-11ea-8c9d-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 14be66c2-d1be-11ea-8c9d-bc764e2007e4;
 Wed, 29 Jul 2020 17:08:11 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=yLUXKLz/iNLHieVVAQr//JZdj9eB7OEB/bqBRe3xGy0=; b=L7QDL5aAZJ6U7YR6dBYYQlAG2
 XyLsjmlHeRbeFtp3cJ6C08GiygFI/CKyXPKEoA8R9egQJCeHrHmE1/nm0UaE5ZFwBqTbxOgZqwm+1
 0KehXx6dGf3XnTd4HbIWWndUyMEOwHh0ypdyKXcJn+tmC05t/pqhfT8MoT+u9rRWeABso=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1k0pYo-0005d0-KH; Wed, 29 Jul 2020 17:08:10 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1k0pYn-0002el-OK; Wed, 29 Jul 2020 17:08:09 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1k0pYn-00088s-Mg; Wed, 29 Jul 2020 17:08:09 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-152272-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 152272: regressions - FAIL
X-Osstest-Failures: linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:regression
 linux-linus:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:allowable
 linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: linux=6ba1b005ffc388c2aeaddae20da29e4810dea298
X-Osstest-Versions-That: linux=1b5044021070efa3259f3e9548dc35d1eb6aa844
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 29 Jul 2020 17:08:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 152272 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/152272/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 151214

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds    16 guest-start/debian.repeat fail REGR. vs. 151214

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 151214
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 151214
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 151214
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 151214
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 151214
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 151214
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 151214
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 151214
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                6ba1b005ffc388c2aeaddae20da29e4810dea298
baseline version:
 linux                1b5044021070efa3259f3e9548dc35d1eb6aa844

Last test of basis   151214  2020-06-18 02:27:46 Z   41 days
Failing since        151236  2020-06-19 19:10:35 Z   39 days   62 attempts
Testing same since   152272  2020-07-28 22:11:26 Z    0 days    1 attempts

------------------------------------------------------------
938 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 53875 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Jul 29 17:12:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jul 2020 17:12:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k0pdN-0002sx-6R; Wed, 29 Jul 2020 17:12:53 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Wgl/=BI=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1k0pdL-0002ss-OD
 for xen-devel@lists.xenproject.org; Wed, 29 Jul 2020 17:12:51 +0000
X-Inumbo-ID: bb2d8ac5-d1be-11ea-8ca1-bc764e2007e4
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id bb2d8ac5-d1be-11ea-8ca1-bc764e2007e4;
 Wed, 29 Jul 2020 17:12:50 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1596042770;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:content-transfer-encoding:in-reply-to;
 bh=7hfovDxMnNjWkR9w66TvoiFRNg1khpSwk/pSKaP5HZY=;
 b=eX9zgmPdMD5gzOa5bcjkj/abTGNjD+5MWdTadkO1AW84YWnX1Yu2Tz+D
 7G1DSNwLQMg/Q9sadhR6nFg7IL3h5BO/9IQVEPM0a1u85VG66HkjDRiw0
 LR1oQjbl/O20vDcGlht+l92ZsGfaetJM4xsvH0IOJ6IAg2s6KaPWNIC52 8=;
Authentication-Results: esa1.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: j0oc+PtaiX398gPKGe0zVIOBIhhG7t7/+UK2qNVl0BxV3t1jGFiOKVABvm3nzt3Vydonc5WXdF
 it6ChF09eM7GTmfIDfywxIKoR0RrVLSOgOogeuhVTB6HCE3sNcb0tCDfY4BHlZnU/azSw6HBRS
 9NNVZiOUQP08yMznspnqMYJSJFj5ldKUwHXvHKse31KKZsLjIC7+9C/2GzDTHfB4a01p/VDqVe
 gt3KDt9VRZXLZcxnWYck03ruVAuNjgHZrqDkWRRR7pBEQscyiWgioN5lJqGW0NsaSSuoz9AYWu
 oHY=
X-SBRS: 2.7
X-MesageID: 23801227
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,411,1589256000"; d="scan'208";a="23801227"
Date: Wed, 29 Jul 2020 19:12:40 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: <fam@euphon.net>
Subject: Re: [PATCH] x86/cpuid: Fix APIC bit clearing
Message-ID: <20200729171240.GE7191@Air-de-Roger>
References: <20200729163341.5662-1-fam@euphon.net>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20200729163341.5662-1-fam@euphon.net>
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: famzheng@amazon.com, xen-devel@lists.xenproject.org,
 Jan Beulich <jbeulich@suse.com>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Wed, Jul 29, 2020 at 05:33:41PM +0100, fam@euphon.net wrote:
> From: Fam Zheng <famzheng@amazon.com>
> 
> The bug is obvious here; other places in this function use
> "cpufeat_mask" correctly.
> 
> Signed-off-by: Fam Zheng <famzheng@amazon.com>
> Fixes: 46df8a65 ("x86/cpuid: Effectively remove pv_cpuid() and hvm_cpuid()")

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks!


From xen-devel-bounces@lists.xenproject.org Wed Jul 29 18:05:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jul 2020 18:05:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k0qRn-0007Az-FW; Wed, 29 Jul 2020 18:04:59 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=XnVs=BI=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1k0qRl-0007Au-Rf
 for xen-devel@lists.xenproject.org; Wed, 29 Jul 2020 18:04:57 +0000
X-Inumbo-ID: 0270771e-d1c6-11ea-8cae-bc764e2007e4
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0270771e-d1c6-11ea-8cae-bc764e2007e4;
 Wed, 29 Jul 2020 18:04:56 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1596045896;
 h=subject:to:cc:references:from:message-id:date:
 mime-version:in-reply-to:content-transfer-encoding;
 bh=w3Dz5LvumULsclC6LqmSvlT/fw+wPczVA22E2m8zbZI=;
 b=JtNMggkT2UhtkN3k37rVYINb/mUvIBFYIkeLQahgNcn3lceL7JPuTSss
 oNsQwtBMJsLqJ3A1R92ukbuHc3LL/HULGlri8Ea1hqExj+fT5P9gXVEvR
 WfVCsu1Sr9rnpMJZvOjmG6JvIkyXLrMsmXT4GdNeZMj86E+wmuvOo3Oyg Q=;
Authentication-Results: esa5.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: qsmU7kjoJyvZ0VhlRx6UHpYF/AIwuLltpHQ0hQhIP9cV3OfCPgWLSxfW9T5FXrNkUeS4vkFK9n
 kAUVOFh/1CBT5GhLfVXkuF5KpIKPZy2nbjfsMXYbdVPHfIMvH8Jj/15FkocEOioXKhFzQLcSNo
 GaUtVqEq1Zwv4eKQhx0xZFhaH9gcO/55JFbSZlK6MVq4VHjqffaMxaeGXUqQTm26v76DK6nbEx
 GeNil7uVgnB2iq0TqYrdzkfrGXTuNQw38yzvXHEc0h6BA879PFXirx9ppXar933PAfOTCZ52KK
 Nas=
X-SBRS: 2.7
X-MesageID: 23661488
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,411,1589256000"; d="scan'208";a="23661488"
Subject: Re: [PATCH] x86/cpuid: Fix APIC bit clearing
To: <fam@euphon.net>, <xen-devel@lists.xenproject.org>
References: <20200729163341.5662-1-fam@euphon.net>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <693e5bfd-b9ce-fb78-5044-4df0b22f93da@citrix.com>
Date: Wed, 29 Jul 2020 19:04:52 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20200729163341.5662-1-fam@euphon.net>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: famzheng@amazon.com, Jan Beulich <jbeulich@suse.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 29/07/2020 17:33, fam@euphon.net wrote:
> From: Fam Zheng <famzheng@amazon.com>
>
> The bug is obvious here, other places in this function used
> "cpufeat_mask" correctly.
>
> Signed-off-by: Fam Zheng <famzheng@amazon.com>
> Fixes: 46df8a65 ("x86/cpuid: Effectively remove pv_cpuid() and hvm_cpuid()")
> ---
>  xen/arch/x86/cpuid.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/xen/arch/x86/cpuid.c b/xen/arch/x86/cpuid.c
> index 6a4a787b68..63a03ef1e5 100644
> --- a/xen/arch/x86/cpuid.c
> +++ b/xen/arch/x86/cpuid.c
> @@ -1057,7 +1057,7 @@ void guest_cpuid(const struct vcpu *v, uint32_t leaf,
>          {
>              /* Fast-forward MSR_APIC_BASE.EN. */
>              if ( vlapic_hw_disabled(vcpu_vlapic(v)) )
> -                res->d &= ~cpufeat_bit(X86_FEATURE_APIC);
> +                res->d &= ~cpufeat_mask(X86_FEATURE_APIC);
>  
>              /*
>               * PSE36 is not supported in shadow mode.  This bit should be

Oops.  Good spot.

However, the commit your Fixes tag identifies was just code movement.  The
bug was actually introduced in b648feff8ea2c9bff250b4b262704fb100b1f9cf two
years earlier.

I've tweaked the Fixes line and committed.

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed Jul 29 18:35:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jul 2020 18:35:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k0qux-0001HY-UB; Wed, 29 Jul 2020 18:35:07 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=TMQ+=BI=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1k0quw-0001HT-TZ
 for xen-devel@lists.xenproject.org; Wed, 29 Jul 2020 18:35:06 +0000
X-Inumbo-ID: 38aac114-d1ca-11ea-aa3e-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 38aac114-d1ca-11ea-aa3e-12813bfff9fa;
 Wed, 29 Jul 2020 18:35:05 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id B9339B1CB;
 Wed, 29 Jul 2020 18:35:16 +0000 (UTC)
Subject: Re: fwupd support under Xen - firmware updates with the UEFI capsule
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <497f1524-b57e-0ea1-5899-62f677bfae91@3mdeb.com>
 <39be665c-b6c8-23e3-b18b-d38cfe5c1286@suse.com>
 <bbe85f76-0999-1150-3d48-c7f9e1796dac@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <7d2de308-774c-9ffe-de1a-3ca3caaeda8c@suse.com>
Date: Wed, 29 Jul 2020 20:35:07 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <bbe85f76-0999-1150-3d48-c7f9e1796dac@citrix.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, marmarek@invisiblethingslab.com,
 Maciej Pijanowski <maciej.pijanowski@3mdeb.com>, piotr.krol@3mdeb.com,
 Norbert Kaminski <norbert.kaminski@3mdeb.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 28.07.2020 23:01, Andrew Cooper wrote:
> On 28/07/2020 21:00, Jan Beulich wrote:
>> On 28.07.2020 09:41, Norbert Kaminski wrote:
>>> I'm trying to add support for the firmware updates with the UEFI
>>> capsule in
>>> Qubes OS. I've got the troubles with reading ESRT (EFI System
>>> Resource Table)
>>> in the dom0, which is based on the EFI memory map. The EFI_MEMMAP is not
>>> enabled despite the loaded drivers (CONFIG_EFI, CONFIG_EFI_ESRT) and
>>> kernel
>>> cmdline parameters (add_efi_memmap):
>>>
>>> ```
>>> [    3.451249] efi: EFI_MEMMAP is not enabled.
>>> ```
>>
>> It is, according to my understanding, a layering violation to expose
>> the EFI memory map to Dom0. It's not supposed to make use of this
>> information in any way. Hence any functionality depending on its use
>> also needs to be implemented in the hypervisor, with Dom0 making a
>> suitable hypercall to access this functionality. (And I find it
>> quite natural to expect that Xen gets involved in an update of the
>> firmware of a system.)
> 
> ERST is a table (read only by the looks of things) which is a catalogue
> of various bits of firmware in the system, including GUIDs for
> identification, and version information.
> 
> It is the kind of data which the hardware domain should have access to,
> and AFAICT, behaves just like a static ACPI table.

I'm unaware of us hiding this table, so Dom0 has access.

> Presumably it wants to an E820 reserved region so dom0 gets indent
> access, and something in the EFI subsystem needs extending to pass the
> ERST address to dom0.

I'm afraid I can't work out what the beginning of that sentence is
meant to say. As per above - I don't see why Dom0 wouldn't
have access to ERST. What it doesn't (and shouldn't) have access to is
the raw EFI memory map (there's no E820 map there). There is a way
for Dom0 to get at some "cooked" memory map (XENMEM_machine_memory_map),
but of course in a r/o fashion only.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jul 29 18:41:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jul 2020 18:41:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k0r0p-000286-OE; Wed, 29 Jul 2020 18:41:11 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=TMQ+=BI=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1k0r0p-000281-9t
 for xen-devel@lists.xenproject.org; Wed, 29 Jul 2020 18:41:11 +0000
X-Inumbo-ID: 11d93f10-d1cb-11ea-8cba-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 11d93f10-d1cb-11ea-8cba-bc764e2007e4;
 Wed, 29 Jul 2020 18:41:10 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 17D2FB1CA;
 Wed, 29 Jul 2020 18:41:21 +0000 (UTC)
Subject: Re: [PATCH v2] xen/arm: Convert runstate address during hypcall
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
References: <4647a019c7b42d40d3c2f5b0a3685954bea7f982.1595948219.git.bertrand.marquis@arm.com>
 <8d2d7f03-450c-d50c-630b-8608c6d42bb9@suse.com>
 <FCAB700B-4617-4323-BE1E-B80DDA1806C1@arm.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <1b046f2c-05c8-9276-a91e-fd55ec098bed@suse.com>
Date: Wed, 29 Jul 2020 20:41:10 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <FCAB700B-4617-4323-BE1E-B80DDA1806C1@arm.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Xen-devel <xen-devel@lists.xenproject.org>, nd <nd@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 29.07.2020 09:08, Bertrand Marquis wrote:
> 
> 
>> On 28 Jul 2020, at 21:54, Jan Beulich <jbeulich@suse.com> wrote:
>>
>> On 28.07.2020 17:52, Bertrand Marquis wrote:
>>> At the moment on Arm, a Linux guest running with KTPI enabled will
>>> cause the following error when a context switch happens in user mode:
>>> (XEN) p2m.c:1890: d1v0: Failed to walk page-table va 0xffffff837ebe0cd0
>>> The error is caused by the virtual address for the runstate area
>>> registered by the guest only being accessible when the guest is running
>>> in kernel space when KPTI is enabled.
>>> To solve this issue, this patch is doing the translation from virtual
>>> address to physical address during the hypercall and mapping the
>>> required pages using vmap. This is removing the conversion from virtual
>>> to physical address during the context switch which is solving the
>>> problem with KPTI.
>>> This is done only on arm architecture, the behaviour on x86 is not
>>> modified by this patch and the address conversion is done as before
>>> during each context switch.
>>> This is introducing several limitations in comparison to the previous
>>> behaviour (on arm only):
>>> - if the guest is remapping the area at a different physical address Xen
>>> will continue to update the area at the previous physical address. As
>>> the area is in kernel space and usually defined as a global variable this
>>> is something which is believed not to happen. If this is required by a
>>> guest, it will have to call the hypercall with the new area (even if it
>>> is at the same virtual address).
>>> - the area needs to be mapped during the hypercall. For the same reasons
>>> as for the previous case, even if the area is registered for a different
>>> vcpu. It is believed that registering an area using a virtual address
>>> unmapped is not something done.
>>
>> Beside me thinking that an in-use and stable ABI can't be changed like
>> this, no matter what is "believed" kernel code may or may not do, I
>> also don't think having arch-es diverge in behavior here is a good
>> idea. Use of commonly available interfaces shouldn't lead to head
>> aches or surprises when porting code from one arch to another. I'm
>> pretty sure it was suggested before: Why don't you simply introduce
>> a physical address based hypercall (and then also on x86 at the same
>> time, keeping functional parity)? I even seem to recall giving a
>> suggestion how to fit this into a future "physical addresses only"
>> model, as long as we can settle on the basic principles of that
>> conversion path that we want to go sooner or later anyway (as I
>> understand).
> 
> I fully agree with the “physical address only” model and i think it must be
> done. Introducing a new hypercall taking a physical address as parameter
> is the long term solution (and I would even volunteer to do it in a new patchset).
> But this would not solve the issue here unless linux is modified.
> So I do see this patch as a “bug fix”.

Well, my previous reply sort of implied that we won't get away without
an OS side change here. The prerequisite for getting away without one
would be that it is okay to change the behavior of a hypercall like you
do, and that it is okay to let the behavior diverge between arch-es. I
think I've made pretty clear that I don't think either is really an
option.

>>> --- a/xen/arch/x86/domain.c
>>> +++ b/xen/arch/x86/domain.c
>>> @@ -1642,6 +1642,30 @@ void paravirt_ctxt_switch_to(struct vcpu *v)
>>>           wrmsr_tsc_aux(v->arch.msrs->tsc_aux);
>>>   }
>>>   +int arch_vcpu_setup_runstate(struct vcpu *v,
>>> +                             struct vcpu_register_runstate_memory_area area)
>>> +{
>>> +    struct vcpu_runstate_info runstate;
>>> +
>>> +    runstate_guest(v) = area.addr.h;
>>> +
>>> +    if ( v == current )
>>> +    {
>>> +        __copy_to_guest(runstate_guest(v), &v->runstate, 1);
>>> +    }
>>
>> Pointless braces (and I think there are more instances).
> 
> So:
> if cond
>     instruction
> else
> {
>     xxx
> }
> 
> is something that should be done in Xen ?

Yes. Especially in coding styles that place opening braces on their own
lines this is, afaik, quite common.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jul 29 19:29:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jul 2020 19:29:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k0rl1-0005dr-Jt; Wed, 29 Jul 2020 19:28:55 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=TMQ+=BI=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1k0rl0-0005dm-5p
 for xen-devel@lists.xenproject.org; Wed, 29 Jul 2020 19:28:54 +0000
X-Inumbo-ID: bc26deea-d1d1-11ea-aa50-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id bc26deea-d1d1-11ea-aa50-12813bfff9fa;
 Wed, 29 Jul 2020 19:28:52 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id CC785AC7F;
 Wed, 29 Jul 2020 19:29:03 +0000 (UTC)
Subject: Re: [PATCH v3] print: introduce a format specifier for pci_sbdf_t
To: Roger Pau Monne <roger.pau@citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>
References: <20200727103136.53343-1-roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <ca6cd6a5-3221-4d34-08a0-8ea4b2dc92d0@suse.com>
Date: Wed, 29 Jul 2020 21:28:53 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20200727103136.53343-1-roger.pau@citrix.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin Tian <kevin.tian@intel.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Paul Durrant <paul@xen.org>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien.grall@arm.com>,
 xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Andrew,

On 27.07.2020 12:31, Roger Pau Monne wrote:
> The new format specifier is '%pp', and prints a pci_sbdf_t using the
> seg:bus:dev.func format. Replace all SBDFs printed using
> '%04x:%02x:%02x.%u' to use the new format specifier.
> 
> No functional change intended.
> 
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> Reviewed-by: Kevin Tian <kevin.tian@intel.com>
> Acked-by: Julien Grall <julien.grall@arm.com>
> For just the pieces where Jan is the only maintainer:
> Acked-by: Jan Beulich <jbeulich@suse.com>

for a change as controversial as this one I think it is particularly
important that the formal aspects are observed. With just the acks above I
don't think the change could have gone in. I would assume you simply
forgot to add yours while committing, but then I'd have expected at
least an on-list instance of it, which I don't think I've seen. (But
yes, email hasn't been as reliable here lately as one would expect
it to be, so I'm not going to exclude that I've simply missed it.)

My restricting my ack to just what's needed to avoid further stalling
of the change was for a reason, as you may recall. In particular I
wanted to make sure that the people actually supporting the approach
taken would be recognizable from the eventual commit, rather than me,
who was opposed to it.

In all reality, Roger, it looks to me as if you should have dropped
my ack, as there seems to be nothing left at this point that I'm
the only maintainer of.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jul 29 19:41:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jul 2020 19:41:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k0rxG-0007E9-Ne; Wed, 29 Jul 2020 19:41:34 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=TMQ+=BI=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1k0rxF-0007E4-4E
 for xen-devel@lists.xenproject.org; Wed, 29 Jul 2020 19:41:33 +0000
X-Inumbo-ID: 80e14378-d1d3-11ea-8ccf-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 80e14378-d1d3-11ea-8ccf-bc764e2007e4;
 Wed, 29 Jul 2020 19:41:32 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 5FF07ADD9;
 Wed, 29 Jul 2020 19:41:43 +0000 (UTC)
Subject: Re: [PATCH 1/5] xen/memory: Introduce CONFIG_ARCH_ACQUIRE_RESOURCE
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <20200728113712.22966-1-andrew.cooper3@citrix.com>
 <20200728113712.22966-2-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <49ceef5f-e353-c7ef-1e8e-0a0c10643304@suse.com>
Date: Wed, 29 Jul 2020 21:41:32 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20200728113712.22966-2-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Hubert Jasudowicz <hubert.jasudowicz@cert.pl>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Paul Durrant <paul@xen.org>,
 =?UTF-8?Q?Micha=c5=82_Leszczy=c5=84ski?= <michal.leszczynski@cert.pl>,
 Xen-devel <xen-devel@lists.xenproject.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 28.07.2020 13:37, Andrew Cooper wrote:
> New architectures shouldn't be forced to implement no-op stubs for unused
> functionality.
> 
> Introduce CONFIG_ARCH_ACQUIRE_RESOURCE which can be opted in to, and provide
> compatibility logic in xen/mm.h
> 
> No functional change.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Acked-by: Jan Beulich <jbeulich@suse.com>


From xen-devel-bounces@lists.xenproject.org Wed Jul 29 19:59:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jul 2020 19:59:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k0sE4-0008GO-8Q; Wed, 29 Jul 2020 19:58:56 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ULCb=BI=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1k0sE3-0008G3-1d
 for xen-devel@lists.xenproject.org; Wed, 29 Jul 2020 19:58:55 +0000
X-Inumbo-ID: ea1bd6da-d1d5-11ea-aa54-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ea1bd6da-d1d5-11ea-aa54-12813bfff9fa;
 Wed, 29 Jul 2020 19:58:47 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=LxOKmJkiRWVgn2qsJWKQJstXt9+EneksShC5/HgCblI=; b=LYK201PQd1QYuwKqkqaCFXq7x
 18nFyL1tPTwpVByc+J2bMyCOK4j8XFfUav9m17BHQZ0gFg+bpzOmM5m/m4kqchSLziI49FRy1SDEk
 iBSrXCn5uO+fevrLYOo++REAZ0P0OA8iqFbXEPU0TUR+k+SAXr63Emk8zZWSEn2UVgDWg=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1k0sDv-0000gh-5A; Wed, 29 Jul 2020 19:58:47 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1k0sDu-0002i7-Ru; Wed, 29 Jul 2020 19:58:46 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1k0sDu-00083E-RD; Wed, 29 Jul 2020 19:58:46 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-152275-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 152275: tolerable FAIL - PUSHED
X-Osstest-Failures: xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
X-Osstest-Versions-This: xen=b071ec25e85c4aacf3da59e5258cda0b1c4df45d
X-Osstest-Versions-That: xen=0562cbc14cf02b8188b9f1f37f39a4886776ce7c
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 29 Jul 2020 19:58:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 152275 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/152275/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 152233
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 152233
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 152233
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 152233
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 152233
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 152233
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 152233
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 152233
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop             fail like 152233
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass

version targeted for testing:
 xen                  b071ec25e85c4aacf3da59e5258cda0b1c4df45d
baseline version:
 xen                  0562cbc14cf02b8188b9f1f37f39a4886776ce7c

Last test of basis   152233  2020-07-27 13:06:33 Z    2 days
Failing since        152251  2020-07-28 08:13:50 Z    1 days    2 attempts
Testing same since   152275  2020-07-29 04:00:19 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Julien Grall <julien.grall@arm.com>
  Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
  Roger Pau Monne <roger.pau@citrix.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Tim Deegan <tim@xen.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   0562cbc14c..b071ec25e8  b071ec25e85c4aacf3da59e5258cda0b1c4df45d -> master


From xen-devel-bounces@lists.xenproject.org Wed Jul 29 20:02:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jul 2020 20:02:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k0sHF-0000rB-Au; Wed, 29 Jul 2020 20:02:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=TMQ+=BI=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1k0sHD-0000r6-M2
 for xen-devel@lists.xenproject.org; Wed, 29 Jul 2020 20:02:11 +0000
X-Inumbo-ID: 6261c439-d1d6-11ea-8cd8-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6261c439-d1d6-11ea-8cd8-bc764e2007e4;
 Wed, 29 Jul 2020 20:02:10 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id B6FBBAEB6;
 Wed, 29 Jul 2020 20:02:21 +0000 (UTC)
Subject: Re: [PATCH 2/5] xen/gnttab: Rework resource acquisition
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <20200728113712.22966-1-andrew.cooper3@citrix.com>
 <20200728113712.22966-3-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <784bf5c1-be13-2c09-5494-6eb64c400473@suse.com>
Date: Wed, 29 Jul 2020 22:02:10 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20200728113712.22966-3-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Hubert Jasudowicz <hubert.jasudowicz@cert.pl>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Paul Durrant <paul@xen.org>,
 =?UTF-8?Q?Micha=c5=82_Leszczy=c5=84ski?= <michal.leszczynski@cert.pl>,
 Ian Jackson <ian.jackson@citrix.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 28.07.2020 13:37, Andrew Cooper wrote:
> The existing logic doesn't function in the general case for mapping a guest's
> grant table, due to the arbitrary 32 frame limit, and the default grant table
> limit being 64.
> 
> In order to start addressing this, rework the existing grant table logic by
> implementing a single gnttab_acquire_resource().  This is far more efficient
> than the previous acquire_grant_table() in memory.c because it doesn't take
> the grant table write lock, and attempt to grow the table, for every single
> frame.

Among the code you replace there is a comment, "Iterate backwards in case
table needs to grow", explaining why the per-frame growing you describe
didn't actually happen.
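
The trick the comment refers to can be illustrated with a toy model (the grow_calls counter and get_frame()/acquire_backwards() helpers here are hypothetical stand-ins, not the actual Xen code): by looking up the highest frame first, the table is grown to its final size in one step, so no later iteration triggers a grow.

```c
#include <stddef.h>

/* Toy stand-in for a growable table; not the real Xen grant table. */
struct table {
    unsigned int nr_frames;   /* current size */
    unsigned int grow_calls;  /* how many times we had to grow */
};

/* Hypothetical helper: ensure frame 'idx' exists, growing if needed. */
static int get_frame(struct table *t, unsigned int idx)
{
    if ( idx >= t->nr_frames )
    {
        t->nr_frames = idx + 1;   /* grow to cover 'idx' */
        t->grow_calls++;
    }
    return 0;
}

/*
 * Iterating backwards means the first lookup (the highest index) grows
 * the table once; every later lookup already fits within the table.
 */
static unsigned int acquire_backwards(struct table *t, unsigned int nr)
{
    for ( unsigned int i = nr; i-- > 0; )
        get_frame(t, i);
    return t->grow_calls;
}
```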

> --- a/xen/common/grant_table.c
> +++ b/xen/common/grant_table.c
> @@ -4013,6 +4013,72 @@ static int gnttab_get_shared_frame_mfn(struct domain *d,
>       return 0;
>   }
>   
> +int gnttab_acquire_resource(
> +    struct domain *d, unsigned int id, unsigned long frame,
> +    unsigned int nr_frames, xen_pfn_t mfn_list[])
> +{
> +    struct grant_table *gt = d->grant_table;
> +    unsigned int i = nr_frames, tot_frames;
> +    void **vaddrs;
> +    int rc = 0;
> +
> +    /* Input sanity. */
> +    if ( !nr_frames )
> +        return -EINVAL;

I can't seem to find an equivalent of this in the old logic,
and hence this looks like an unwarranted change in behavior to me. We
have quite a few hypercall ops where some count being zero is simply
a no-op, i.e. yielding success without doing anything.

> +    /* Overflow checks */
> +    if ( frame + nr_frames < frame )
> +        return -EINVAL;
> +
> +    tot_frames = frame + nr_frames;
> +    if ( tot_frames != frame + nr_frames )
> +        return -EINVAL;

I find the naming here quite confusing. I realize part of this stems
from the code you replace, but anyway: "unsigned long frame" typically
represents a memory frame number of some sort, making the calculation
look as if it was wrong. (Initially I merely meant to ask whether this
check isn't redundant with the prior one, or vice versa.)
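
To answer the redundancy question concretely: the two checks guard against different failure modes. Here is a standalone sketch (the predicate name is mine, but the types match the quoted hunk): the first check catches wraparound of the unsigned long addition itself, while the second catches truncation when the sum is narrowed to the 32-bit tot_frames.

```c
#include <limits.h>
#include <stdbool.h>

/* Mirror of the two checks in the quoted hunk, as one predicate. */
static bool frame_range_ok(unsigned long frame, unsigned int nr_frames)
{
    unsigned long sum = frame + nr_frames;
    unsigned int tot_frames;

    /* Wraparound of the unsigned long addition itself. */
    if ( sum < frame )
        return false;

    /* Truncation when narrowing the sum to 32 bits. */
    tot_frames = sum;
    if ( tot_frames != sum )
        return false;

    return true;
}
```

On a 64-bit build the first check can only fire for huge `frame` values, while the second fires as soon as the sum exceeds UINT_MAX, so neither subsumes the other.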

> +    /* Grow table if necessary. */
> +    grant_write_lock(gt);
> +    switch ( id )
> +    {
> +        mfn_t tmp;
> +
> +    case XENMEM_resource_grant_table_id_shared:
> +        rc = gnttab_get_shared_frame_mfn(d, tot_frames - 1, &tmp);
> +        break;
> +
> +    case XENMEM_resource_grant_table_id_status:
> +        if ( gt->gt_version != 2 )
> +        {
> +    default:
> +            rc = -EINVAL;
> +            break;
> +        }
> +        rc = gnttab_get_status_frame_mfn(d, tot_frames - 1, &tmp);
> +        break;
> +    }
> +
> +    /* Any errors from growing the table? */
> +    if ( rc )
> +        goto out;
> +
> +    switch ( id )
> +    {
> +    case XENMEM_resource_grant_table_id_shared:
> +        vaddrs = gt->shared_raw;
> +        break;
> +
> +    case XENMEM_resource_grant_table_id_status:
> +        vaddrs = (void **)gt->status;

Now this is the kind of cast that I consider really dangerous, and hence
worth trying hard to avoid. With the code structure as is, I don't see
an immediate solution though.

> +        break;
> +    }

Worth having an ASSERT_UNREACHABLE() default case here?
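
The suggestion amounts to making the unhandled case explicit. A sketch (with assert() standing in for Xen's ASSERT_UNREACHABLE(), and illustrative id constants rather than the real XENMEM_resource_grant_table_id_* values): since `id` was already validated by the first switch, the default case documents that nothing else can reach here, instead of leaving `vaddrs` uninitialized.

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative ids; the real values live in the public memory interface. */
enum { RES_ID_SHARED = 0, RES_ID_STATUS = 1 };

/* Pick the frame-address array for a pre-validated resource id. */
static void **pick_vaddrs(int id, void **shared_raw, void **status)
{
    void **vaddrs = NULL;

    switch ( id )
    {
    case RES_ID_SHARED:
        vaddrs = shared_raw;
        break;

    case RES_ID_STATUS:
        vaddrs = status;
        break;

    default:
        /* ASSERT_UNREACHABLE() in Xen: id was validated earlier. */
        assert(0 && "unreachable resource id");
        break;
    }

    return vaddrs;
}
```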

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jul 29 20:09:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jul 2020 20:09:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k0sNw-00014Z-2C; Wed, 29 Jul 2020 20:09:08 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=TMQ+=BI=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1k0sNu-00014U-FM
 for xen-devel@lists.xenproject.org; Wed, 29 Jul 2020 20:09:06 +0000
X-Inumbo-ID: 5a5e63b2-d1d7-11ea-aa56-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 5a5e63b2-d1d7-11ea-aa56-12813bfff9fa;
 Wed, 29 Jul 2020 20:09:05 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id C2198AB55;
 Wed, 29 Jul 2020 20:09:16 +0000 (UTC)
Subject: Re: [PATCH 3/5] xen/memory: Fix compat XENMEM_acquire_resource for
 size requests
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <20200728113712.22966-1-andrew.cooper3@citrix.com>
 <20200728113712.22966-4-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <0c275cb5-55ec-b0b0-6ba8-cfa7ca23978b@suse.com>
Date: Wed, 29 Jul 2020 22:09:06 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20200728113712.22966-4-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Hubert Jasudowicz <hubert.jasudowicz@cert.pl>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Paul Durrant <paul@xen.org>,
 =?UTF-8?Q?Micha=c5=82_Leszczy=c5=84ski?= <michal.leszczynski@cert.pl>,
 Ian Jackson <ian.jackson@citrix.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 28.07.2020 13:37, Andrew Cooper wrote:
> Copy the nr_frames from the correct structure, so the caller doesn't
> unconditionally receive 0.

Well, no - it does get copied from the correct structure. It's just
that the field doesn't get set properly up front. Otherwise you'll
(a) build in an unchecked assumption that the native and compat
fields match in type and (b) set a bad example for people looking
here and then cloning this code, perhaps in a case where (a) is not
even true. If you agree, the alternative change of setting
cmp.mar.nr_frames from nat.mar->nr_frames before the call is

Reviewed-by: Jan Beulich <jbeulich@suse.com>

Jan
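
The alternative being proposed can be sketched with simplified stand-ins for the two structures (the struct and field layouts below are illustrative, not the real compat/memory.c definitions): the compat field is refreshed from the native result first, and the copy-back then reads only the compat structure, so this one assignment is the single place where a type mismatch between the two fields would surface.

```c
#include <stdint.h>

/* Illustrative stand-ins for the native and compat hypercall structs. */
struct nat_acquire_resource { uint64_t frame; uint32_t nr_frames; };
struct cmp_acquire_resource { uint32_t frame; uint32_t nr_frames; };

/*
 * Refresh the compat view from the native result before copying back
 * to the guest, so the caller-visible data always comes from the
 * compat structure (the conversion point, not the copy, carries the
 * assumption about the two field types).
 */
static void fixup_compat(struct cmp_acquire_resource *cmp,
                         const struct nat_acquire_resource *nat)
{
    cmp->nr_frames = nat->nr_frames;
}
```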


From xen-devel-bounces@lists.xenproject.org Wed Jul 29 21:08:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jul 2020 21:08:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k0tJS-00063m-Ly; Wed, 29 Jul 2020 21:08:34 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=hQvr=BI=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1k0tJS-00063h-1i
 for xen-devel@lists.xenproject.org; Wed, 29 Jul 2020 21:08:34 +0000
X-Inumbo-ID: a81d9e30-d1df-11ea-aa66-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a81d9e30-d1df-11ea-aa66-12813bfff9fa;
 Wed, 29 Jul 2020 21:08:31 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=zVpYACbU4MPht8AlhKCnilCOtSu+EU/xz3z2jevWnEY=; b=Bc0Kh2nJPje5D4sBton4IptPVi
 nL2DdlPls1y+HuNQXXJ8vbWKPbSKDef9miGnLv2mafd8iYOQ7Lxv1e8932uCKa+TY5MqVYDie/+89
 e+Yo04N4vx136D6lRSzPl95tvorztmMfwzTBpOMDOf5FpQ0+a0lixe7lJ6+Pn/p6Ms9o=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1k0tJO-0002GY-II; Wed, 29 Jul 2020 21:08:30 +0000
Received: from 54-240-197-236.amazon.com ([54.240.197.236]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1k0tJO-0005fQ-5c; Wed, 29 Jul 2020 21:08:30 +0000
Subject: Re: [RFC v2 1/2] arm,smmu: switch to using iommu_fwspec functions
To: Brian Woods <brian.woods@xilinx.com>,
 xen-devel <xen-devel@lists.xenproject.org>
References: <1595390431-24805-1-git-send-email-brian.woods@xilinx.com>
 <1595390431-24805-2-git-send-email-brian.woods@xilinx.com>
From: Julien Grall <julien@xen.org>
Message-ID: <572599f3-8abf-4be5-3840-73665557c631@xen.org>
Date: Wed, 29 Jul 2020 22:08:28 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:68.0)
 Gecko/20100101 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <1595390431-24805-2-git-send-email-brian.woods@xilinx.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi Brian,

On 22/07/2020 05:00, Brian Woods wrote:
> Modify the smmu driver so that it uses the iommu_fwspec helper
> functions.  This means both ARM IOMMU drivers will use the
> iommu_fwspec helper functions.
> 
> Signed-off-by: Brian Woods <brian.woods@xilinx.com>
> ---
> 
> Interested in whether combining the legacy and generic binding paths is
> worthwhile, or if Xen plans to deprecate legacy bindings at some point.

The legacy binding is technically already deprecated in Linux. However, 
I don't think we can remove it completely until Linux does it.

Ideally, I would like the legacy master to be probed as part of 
add_device rather than during IOMMU initialization (see more below).


> v1 -> v2
>      - removed MAX_MASTER_STREAMIDS
>      - removed unneeded curly brackets
> 
>   xen/drivers/passthrough/arm/smmu.c    | 81 +++++++++++++++++++----------------
>   xen/drivers/passthrough/device_tree.c |  3 ++
>   2 files changed, 47 insertions(+), 37 deletions(-)
> 
> diff --git a/xen/drivers/passthrough/arm/smmu.c b/xen/drivers/passthrough/arm/smmu.c
> index 94662a8..7a5c6cd 100644
> --- a/xen/drivers/passthrough/arm/smmu.c
> +++ b/xen/drivers/passthrough/arm/smmu.c
> @@ -49,6 +49,7 @@
>   #include <asm/atomic.h>
>   #include <asm/device.h>
>   #include <asm/io.h>
> +#include <asm/iommu_fwspec.h>
>   #include <asm/platform.h>
>   
>   /* Xen: The below defines are redefined within the file. Undef it */
> @@ -302,9 +303,6 @@ static struct iommu_group *iommu_group_get(struct device *dev)
>   
>   /***** Start of Linux SMMU code *****/
>   
> -/* Maximum number of stream IDs assigned to a single device */
> -#define MAX_MASTER_STREAMIDS		MAX_PHANDLE_ARGS
> -
>   /* Maximum number of context banks per SMMU */
>   #define ARM_SMMU_MAX_CBS		128
>   
> @@ -597,8 +595,7 @@ struct arm_smmu_smr {
>   };
>   
>   struct arm_smmu_master_cfg {
> -	int				num_streamids;
> -	u16				streamids[MAX_MASTER_STREAMIDS];
> +	struct iommu_fwspec		*fwspec;
>   	struct arm_smmu_smr		*smrs;
>   };
>   
> @@ -779,7 +776,7 @@ static int register_smmu_master(struct arm_smmu_device *smmu,

We originally intended to keep the SMMU driver as close as possible to 
Linux. It turns out to be quite difficult, as they reworked the IOMMU 
subsystem a fair bit. Still, I would like to document the changes we 
made in the code.

Could you add a comment, maybe at the top of the file, mentioning that 
you added support for fwspec?

>   				struct device *dev,
>   				struct of_phandle_args *masterspec)
>   {
> -	int i;
> +	int i, ret = 0;
>   	struct arm_smmu_master *master;
>   
>   	master = find_smmu_master(smmu, masterspec->np);
> @@ -790,34 +787,37 @@ static int register_smmu_master(struct arm_smmu_device *smmu,
>   		return -EBUSY;
>   	}
>   
> -	if (masterspec->args_count > MAX_MASTER_STREAMIDS) {
> -		dev_err(dev,
> -			"reached maximum number (%d) of stream IDs for master device %s\n",
> -			MAX_MASTER_STREAMIDS, masterspec->np->name);
> -		return -ENOSPC;
> -	}
> -
>   	master = devm_kzalloc(dev, sizeof(*master), GFP_KERNEL);
>   	if (!master)
>   		return -ENOMEM;
> +	master->of_node = masterspec->np;
>   
> -	master->of_node			= masterspec->np;
> -	master->cfg.num_streamids	= masterspec->args_count;
> +	ret = iommu_fwspec_init(&master->of_node->dev, smmu->dev);
> +	if (ret) {
> +		kfree(master);
> +		return ret;
> +	}
> +	master->cfg.fwspec = dev_iommu_fwspec_get(&master->of_node->dev);
> +
> +	/* adding the ids here */
> +	ret = iommu_fwspec_add_ids(&masterspec->np->dev,
> +				   masterspec->args,
> +				   masterspec->args_count);
> +	if (ret)
> +		return ret;
>   
>   	/* Xen: Let Xen know that the device is protected by an SMMU */
>   	dt_device_set_protected(masterspec->np);
>   
> -	for (i = 0; i < master->cfg.num_streamids; ++i) {
> -		u16 streamid = masterspec->args[i];
> -
> -		if (!(smmu->features & ARM_SMMU_FEAT_STREAM_MATCH) &&
> -		     (streamid >= smmu->num_mapping_groups)) {
> -			dev_err(dev,
> -				"stream ID for master device %s greater than maximum allowed (%d)\n",
> -				masterspec->np->name, smmu->num_mapping_groups);
> -			return -ERANGE;
> +	if (!(smmu->features & ARM_SMMU_FEAT_STREAM_MATCH)) {
> +		for (i = 0; i < master->cfg.fwspec->num_ids; ++i) {
> +			if (masterspec->args[i] >= smmu->num_mapping_groups) {
> +				dev_err(dev,
> +					"stream ID for master device %s greater than maximum allowed (%d)\n",
> +					masterspec->np->name, smmu->num_mapping_groups);
> +				return -ERANGE;
> +			}
>   		}
> -		master->cfg.streamids[i] = streamid;
>   	}
>   	return insert_smmu_master(smmu, master);
>   }
> @@ -1397,15 +1397,15 @@ static int arm_smmu_master_configure_smrs(struct arm_smmu_device *smmu,
>   	if (cfg->smrs)
>   		return -EEXIST;
>   
> -	smrs = kmalloc_array(cfg->num_streamids, sizeof(*smrs), GFP_KERNEL);
> +	smrs = kmalloc_array(cfg->fwspec->num_ids, sizeof(*smrs), GFP_KERNEL);
>   	if (!smrs) {
>   		dev_err(smmu->dev, "failed to allocate %d SMRs\n",
> -			cfg->num_streamids);
> +			cfg->fwspec->num_ids);
>   		return -ENOMEM;
>   	}
>   
>   	/* Allocate the SMRs on the SMMU */
> -	for (i = 0; i < cfg->num_streamids; ++i) {
> +	for (i = 0; i < cfg->fwspec->num_ids; ++i) {
>   		int idx = __arm_smmu_alloc_bitmap(smmu->smr_map, 0,
>   						  smmu->num_mapping_groups);
>   		if (IS_ERR_VALUE(idx)) {
> @@ -1416,12 +1416,12 @@ static int arm_smmu_master_configure_smrs(struct arm_smmu_device *smmu,
>   		smrs[i] = (struct arm_smmu_smr) {
>   			.idx	= idx,
>   			.mask	= 0, /* We don't currently share SMRs */
> -			.id	= cfg->streamids[i],
> +			.id	= cfg->fwspec->ids[i],
>   		};
>   	}
>   
>   	/* It worked! Now, poke the actual hardware */
> -	for (i = 0; i < cfg->num_streamids; ++i) {
> +	for (i = 0; i < cfg->fwspec->num_ids; ++i) {
>   		u32 reg = SMR_VALID | smrs[i].id << SMR_ID_SHIFT |
>   			  smrs[i].mask << SMR_MASK_SHIFT;
>   		writel_relaxed(reg, gr0_base + ARM_SMMU_GR0_SMR(smrs[i].idx));
> @@ -1448,7 +1448,7 @@ static void arm_smmu_master_free_smrs(struct arm_smmu_device *smmu,
>   		return;
>   
>   	/* Invalidate the SMRs before freeing back to the allocator */
> -	for (i = 0; i < cfg->num_streamids; ++i) {
> +	for (i = 0; i < cfg->fwspec->num_ids; ++i) {
>   		u8 idx = smrs[i].idx;
>   
>   		writel_relaxed(~SMR_VALID, gr0_base + ARM_SMMU_GR0_SMR(idx));
> @@ -1471,10 +1471,10 @@ static int arm_smmu_domain_add_master(struct arm_smmu_domain *smmu_domain,
>   	if (ret)
>   		return ret == -EEXIST ? 0 : ret;
>   
> -	for (i = 0; i < cfg->num_streamids; ++i) {
> +	for (i = 0; i < cfg->fwspec->num_ids; ++i) {
>   		u32 idx, s2cr;
>   
> -		idx = cfg->smrs ? cfg->smrs[i].idx : cfg->streamids[i];
> +		idx = cfg->smrs ? cfg->smrs[i].idx : cfg->fwspec->ids[i];
>   		s2cr = S2CR_TYPE_TRANS |
>   		       (smmu_domain->cfg.cbndx << S2CR_CBNDX_SHIFT);
>   		writel_relaxed(s2cr, gr0_base + ARM_SMMU_GR0_S2CR(idx));
> @@ -1499,8 +1499,8 @@ static void arm_smmu_domain_remove_master(struct arm_smmu_domain *smmu_domain,
>   	 * that it can be re-allocated immediately.
>   	 * Xen: Unlike Linux, any access to non-configured stream will fault.
>   	 */
> -	for (i = 0; i < cfg->num_streamids; ++i) {
> -		u32 idx = cfg->smrs ? cfg->smrs[i].idx : cfg->streamids[i];
> +	for (i = 0; i < cfg->fwspec->num_ids; ++i) {
> +		u32 idx = cfg->smrs ? cfg->smrs[i].idx : cfg->fwspec->ids[i];
>   
>   		writel_relaxed(S2CR_TYPE_FAULT,
>   			       gr0_base + ARM_SMMU_GR0_S2CR(idx));
> @@ -1924,14 +1924,21 @@ static int arm_smmu_add_device(struct device *dev)
>   			ret = -ENOMEM;
>   			goto out_put_group;
>   		}
> +		cfg->fwspec = kzalloc(sizeof(struct iommu_fwspec), GFP_KERNEL);
> +		if (!cfg->fwspec) {
> +			kfree(cfg);
> +			ret = -ENOMEM;
> +			goto out_put_group;
> +		}
> +		iommu_fwspec_init(dev, smmu->dev);
>   
> -		cfg->num_streamids = 1;
> +		cfg->fwspec->num_ids = 1;
>   		/*
>   		 * Assume Stream ID == Requester ID for now.
>   		 * We need a way to describe the ID mappings in FDT.
>   		 */
>   		pci_for_each_dma_alias(pdev, __arm_smmu_get_pci_sid,
> -				       &cfg->streamids[0]);
> +				       &cfg->fwspec->ids[0]);
>   		releasefn = __arm_smmu_release_pci_iommudata;
>   	} else {
>   		struct arm_smmu_master *master;
> diff --git a/xen/drivers/passthrough/device_tree.c b/xen/drivers/passthrough/device_tree.c
> index 999b831..acf6b62 100644
> --- a/xen/drivers/passthrough/device_tree.c
> +++ b/xen/drivers/passthrough/device_tree.c
> @@ -140,6 +140,9 @@ int iommu_add_dt_device(struct dt_device_node *np)
>       if ( !ops )
>           return -EINVAL;
>   
> +    if ( dt_device_is_protected(np) )
> +        return 0;

With the fwspec in place, I would have hoped we could remove the 
SMMU-specific hack from the generic DT code. Could you instead explore 
whether it is possible to initialize the legacy master from add_device?
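To make the suggestion concrete, something along these lines could work, with the protected-node check living in the driver's add_device callback rather than in device_tree.c. This is an illustrative sketch only: the types below are hypothetical stand-ins, not the real Xen structures.

```c
#include <stddef.h>

/* Hypothetical minimal types, for illustration only. */
struct dt_device_node { int is_protected; };
struct device { struct dt_device_node *of_node; };

static int dt_device_is_protected(const struct dt_device_node *np)
{
	return np != NULL && np->is_protected;
}

/*
 * Sketch: the add_device callback itself skips nodes already protected
 * via the legacy binding, so the generic DT code needs no SMMU check.
 */
static int arm_smmu_add_device_sketch(struct device *dev)
{
	if (dt_device_is_protected(dev->of_node))
		return 0; /* already registered through the legacy binding */

	/* ... otherwise build the master from the device's fwspec ... */
	return 0;
}
```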

> +
>       if ( dev_iommu_fwspec_get(dev) )
>           return -EEXIST;
>   
> 

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Jul 29 21:38:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jul 2020 21:38:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k0tlx-000082-3O; Wed, 29 Jul 2020 21:38:01 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=hQvr=BI=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1k0tlw-00007i-3D
 for xen-devel@lists.xenproject.org; Wed, 29 Jul 2020 21:38:00 +0000
X-Inumbo-ID: c4cfa557-d1e3-11ea-aa68-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c4cfa557-d1e3-11ea-aa68-12813bfff9fa;
 Wed, 29 Jul 2020 21:37:58 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=NTLuQuHmN+ZPuU1Qs58GQHwETFBehkSexxez9KdpQAM=; b=PfY69rbPxpUq8llAzouTy0NkxL
 WUTy4wdSefXgY/qPyC+Phm3TEiPz9kGaFyiN/mV0BZYgjzkLB08KmmeS1O/SCllyhBLdc8qyn5VxE
 krk55BuJHnsdZj/di7ezLSsY+HcOfcbexno3MpJWUc/L2Ax/Cb44m2woWetzixy+//xg=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1k0tlt-0002rv-4p; Wed, 29 Jul 2020 21:37:57 +0000
Received: from 54-240-197-236.amazon.com ([54.240.197.236]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1k0tls-0007Rh-PB; Wed, 29 Jul 2020 21:37:57 +0000
Subject: Re: [RFC v2 2/2] arm,smmu: add support for generic DT bindings
To: Brian Woods <brian.woods@xilinx.com>,
 xen-devel <xen-devel@lists.xenproject.org>
References: <1595390431-24805-1-git-send-email-brian.woods@xilinx.com>
 <1595390431-24805-3-git-send-email-brian.woods@xilinx.com>
From: Julien Grall <julien@xen.org>
Message-ID: <854e0671-898d-1a78-3dfd-92d8f6b82348@xen.org>
Date: Wed, 29 Jul 2020 22:37:55 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:68.0)
 Gecko/20100101 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <1595390431-24805-3-git-send-email-brian.woods@xilinx.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi Brian,

On 22/07/2020 05:00, Brian Woods wrote:
> Restructure some of the code and add supporting functions for adding
> generic device tree (DT) binding support.  

It feels to me you want to split the patch in two:
    1) Restructure the code
    2) Add support for DT bindings

> This will allow for using
> current Linux device trees with just modifying the chosen field to
> enable Xen.

So what happens if the legacy and generic bindings co-exist? Which one 
will be used?

> 
> Signed-off-by: Brian Woods <brian.woods@xilinx.com>
> ---
> 
> Just realized that I'm fairly sure I need to do some work on the SMRs.
> Other than that though, I think things should be okayish.

The SMMU code in Xen is pretty awful (I know I adapted it for Xen). It 
would be hard to make it worse :).

> 
> v1 -> v2
>      - Corrected how reading of DT is done with generic bindings
> 
> 
>   xen/drivers/passthrough/arm/smmu.c    | 102 +++++++++++++++++++++++++---------
>   xen/drivers/passthrough/device_tree.c |  17 +-----
>   2 files changed, 78 insertions(+), 41 deletions(-)
> 
> diff --git a/xen/drivers/passthrough/arm/smmu.c b/xen/drivers/passthrough/arm/smmu.c
> index 7a5c6cd..25c090a 100644
> --- a/xen/drivers/passthrough/arm/smmu.c
> +++ b/xen/drivers/passthrough/arm/smmu.c
> @@ -251,6 +251,8 @@ struct iommu_group
>   	atomic_t ref;
>   };
>   
> +static const struct arm_smmu_device *find_smmu(const struct device *dev);
> +
>   static struct iommu_group *iommu_group_alloc(void)
>   {
>   	struct iommu_group *group = xzalloc(struct iommu_group);
> @@ -772,56 +774,104 @@ static int insert_smmu_master(struct arm_smmu_device *smmu,
>   	return 0;
>   }
>   
> -static int register_smmu_master(struct arm_smmu_device *smmu,
> -				struct device *dev,
> -				struct of_phandle_args *masterspec)
> +static int arm_smmu_dt_add_device_legacy(struct arm_smmu_device *smmu,
> +					 struct device *dev,
> +					 struct iommu_fwspec *fwspec)
>   {
> -	int i, ret = 0;
> +	int i;
>   	struct arm_smmu_master *master;
> +	struct device_node *dev_node = dev_get_dev_node(dev);
>   
> -	master = find_smmu_master(smmu, masterspec->np);
> +	master = find_smmu_master(smmu, dev_node);
>   	if (master) {
>   		dev_err(dev,
>   			"rejecting multiple registrations for master device %s\n",
> -			masterspec->np->name);
> +			dev_node->name);
>   		return -EBUSY;
>   	}
>   
>   	master = devm_kzalloc(dev, sizeof(*master), GFP_KERNEL);
>   	if (!master)
>   		return -ENOMEM;
> -	master->of_node = masterspec->np;
>   
> -	ret = iommu_fwspec_init(&master->of_node->dev, smmu->dev);
> -	if (ret) {
> -		kfree(master);
> -		return ret;
> -	}
> -	master->cfg.fwspec = dev_iommu_fwspec_get(&master->of_node->dev);
> -
> -	/* adding the ids here */
> -	ret = iommu_fwspec_add_ids(&masterspec->np->dev,
> -				   masterspec->args,
> -				   masterspec->args_count);
> -	if (ret)
> -		return ret;
> +	master->of_node = dev_node;
> +	master->cfg.fwspec = fwspec;
>   
>   	/* Xen: Let Xen know that the device is protected by an SMMU */
> -	dt_device_set_protected(masterspec->np);
> +	dt_device_set_protected(dev_node);
>   
>   	if (!(smmu->features & ARM_SMMU_FEAT_STREAM_MATCH)) {
> -		for (i = 0; i < master->cfg.fwspec->num_ids; ++i) {
> -			if (masterspec->args[i] >= smmu->num_mapping_groups) {
> +		for (i = 0; i < fwspec->num_ids; ++i) {
> +			if (fwspec->ids[i] >= smmu->num_mapping_groups) {
>   				dev_err(dev,
>   					"stream ID for master device %s greater than maximum allowed (%d)\n",
> -					masterspec->np->name, smmu->num_mapping_groups);
> +					dev_node->name, smmu->num_mapping_groups);
>   				return -ERANGE;
>   			}
>   		}
>   	}
> +
>   	return insert_smmu_master(smmu, master);
>   }
>   
> +static int arm_smmu_dt_add_device_generic(u8 devfn, struct device *dev)
> +{
> +	struct arm_smmu_device *smmu;
> +	struct iommu_fwspec *fwspec;
> +
> +	fwspec = dev_iommu_fwspec_get(dev);
> +	if (fwspec == NULL)
> +		return -ENXIO;
> +
> +	smmu = (struct arm_smmu_device *) find_smmu(fwspec->iommu_dev);

Please don't use an explicit cast to remove a const. If you need 
find_smmu() to return a non-const value, then you should drop the const 
from the return type.
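For instance, keeping the lookup parameter const while returning a non-const pointer avoids the cast entirely. The following is a self-contained sketch with hypothetical stand-in types (the real lookup in the driver walks the SMMU list):

```c
#include <stddef.h>

/* Hypothetical stand-ins for the Xen types, purely to illustrate. */
struct device { int id; };
struct arm_smmu_device { const struct device *dev; };

static struct arm_smmu_device smmu_instance;

/*
 * The parameter stays const (we only read through it), but the return
 * type is non-const, so callers never need to cast the const away.
 */
static struct arm_smmu_device *find_smmu(const struct device *dev)
{
	return (smmu_instance.dev == dev) ? &smmu_instance : NULL;
}
```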


> +	if (smmu == NULL)
> +		return -ENXIO;
> +
> +	return arm_smmu_dt_add_device_legacy(smmu, dev, fwspec);

It feels a bit odd to call a "legacy" function from a "generic" one. 
How about removing "legacy" from the function name?

> +}
> +
> +static int arm_smmu_dt_xlate_generic(struct device *dev,
> +				    const struct of_phandle_args *spec)

Please use dt_phandle_args to stay consistent with the naming and the 
fact that the code is mostly Xen-specific (though derived from Linux).
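An xlate callback of this shape typically just copies the specifier cells into the device's fwspec. A minimal sketch, using hypothetical simplified types rather than the real Xen structures:

```c
#include <stddef.h>

/* Hypothetical simplified types; the real Xen structures differ. */
struct dt_phandle_args {
	int args_count;
	unsigned int args[4];
};

struct iommu_fwspec {
	int num_ids;
	unsigned int ids[8];
};

/* Sketch of an xlate callback taking dt_phandle_args, as suggested. */
static int arm_smmu_dt_xlate_sketch(struct iommu_fwspec *fwspec,
				    const struct dt_phandle_args *spec)
{
	int i;

	if (fwspec == NULL || spec->args_count > 8 - fwspec->num_ids)
		return -1; /* the real code would return -ENOSPC or similar */

	/* Append the stream IDs from the "iommus" specifier cells. */
	for (i = 0; i < spec->args_count; i++)
		fwspec->ids[fwspec->num_ids++] = spec->args[i];

	return 0;
}
```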

>   
> -static __init const struct arm_smmu_device *find_smmu(const struct device *dev)
> +static const struct arm_smmu_device *find_smmu(const struct device *dev)
>   {
>   	struct arm_smmu_device *smmu;
>   	bool found = false;
> diff --git a/xen/drivers/passthrough/device_tree.c b/xen/drivers/passthrough/device_tree.c
> index acf6b62..dd9cf65 100644
> --- a/xen/drivers/passthrough/device_tree.c
> +++ b/xen/drivers/passthrough/device_tree.c
> @@ -158,22 +158,7 @@ int iommu_add_dt_device(struct dt_device_node *np)
>            * these callback implemented.
>            */
>           if ( !ops->add_device || !ops->dt_xlate )
> -        {
> -            /*
> -             * Some Device Trees may expose both legacy SMMU and generic
> -             * IOMMU bindings together. However, the SMMU driver is only
> -             * supporting the former and will protect them during the
> -             * initialization. So we need to skip them and not return
> -             * error here.
> -             *
> -             * XXX: This can be dropped when the SMMU is able to deal
> -             * with generic bindings.
> -             */
> -            if ( dt_device_is_protected(np) )
> -                return 0;
> -            else
> -                return -EINVAL;
> -        }

I would explain in the commit message why the hack in 
iommu_add_dt_device() can now be removed.

> +            return -EINVAL;
>   
>           if ( !dt_device_is_available(iommu_spec.np) )
>               break;
> 

Cheers,


-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Jul 29 21:41:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jul 2020 21:41:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k0tpG-0000vZ-Ky; Wed, 29 Jul 2020 21:41:26 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ULCb=BI=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1k0tpF-0000uO-31
 for xen-devel@lists.xenproject.org; Wed, 29 Jul 2020 21:41:25 +0000
X-Inumbo-ID: 3b3bc1ca-d1e4-11ea-aa6a-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 3b3bc1ca-d1e4-11ea-aa6a-12813bfff9fa;
 Wed, 29 Jul 2020 21:41:16 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=omM/sdu3h7QV8x0kp356c73359gekvO4brxKtU/myZQ=; b=kKCxXxDLPnvBHGRuIVLxebQhC
 QG3MBWtRRv3Jag+3mmQldk46s95hqijzX0sSTEBlAG2DO6SXluYpHn0Nk95Fsxpjb75aGanLBKG7+
 xDPcitTQzzuEMfcNtuiZs8uIGcqnzF4iGM1ihX2U7uay8Mr+K+qcHz6Lco5q4FVHr+HYQ=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1k0tp5-0002v7-Vm; Wed, 29 Jul 2020 21:41:16 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1k0tp5-000884-HI; Wed, 29 Jul 2020 21:41:15 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1k0tp5-0000EK-Gc; Wed, 29 Jul 2020 21:41:15 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-152282-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 152282: tolerable FAIL - PUSHED
X-Osstest-Failures: linux-5.4:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
 linux-5.4:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 linux-5.4:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 linux-5.4:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 linux-5.4:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 linux-5.4:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
X-Osstest-Versions-This: linux=58a12e3368dbcadc57c6b3f5fcbecce757426f02
X-Osstest-Versions-That: linux=d811d29517d1ea05bc159579231652d3ca1c2a01
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 29 Jul 2020 21:41:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 152282 linux-5.4 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/152282/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-rtds     16 guest-start/debian.repeat    fail  like 152137
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop              fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop             fail never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop             fail never pass

version targeted for testing:
 linux                58a12e3368dbcadc57c6b3f5fcbecce757426f02
baseline version:
 linux                d811d29517d1ea05bc159579231652d3ca1c2a01

Last test of basis   152137  2020-07-23 06:56:52 Z    6 days
Testing same since   152282  2020-07-29 08:44:57 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adrian Hunter <adrian.hunter@intel.com>
  Alex Deucher <alexander.deucher@amd.com>
  Alexander Lobakin <alobakin@marvell.com>
  Andrew Morton <akpm@linux-foundation.org>
  André Almeida <andrealmeid@collabora.com>
  Andy Shevchenko <andriy.shevchenko@linux.intel.com>
  Angelo Dureghello <angelo.dureghello@timesys.com>
  Anna Schumaker <Anna.Schumaker@Netapp.com>
  Arnd Bergmann <arnd@arndb.de>
  Aurabindo Pillai <aurabindo.pillai@amd.com>
  Ben Skeggs <bskeggs@redhat.com>
  Bjorn Andersson <bjorn.andersson@linaro.org>
  Bjorn Helgaas <bhelgaas@google.com>
  Boris Burkov <boris@bur.io>
  Caiyuan Xie <caiyuan.xie@cn.alps.com>
  Charan Teja Kalla <charante@codeaurora.org>
  Charles Keepax <ckeepax@opensource.cirrus.com>
  Chen-Yu Tsai <wens@csie.org>
  Christian Brauner <christian.brauner@ubuntu.com>
  Christian König <christian.koenig@amd.com>
  Christoph Hellwig <hch@lst.de>
  Christophe JAILLET <christophe.jaillet@wanadoo.fr>
  Chu Lin <linchuyuan@google.com>
  Chunfeng Yun <chunfeng.yun@mediatek.com>
  Claire Chang <tientzu@chromium.org>
  Claudiu Manoil <claudiu.manoil@nxp.com>
  Cong Wang <xiyou.wangcong@gmail.com>
  Cristian Marussi <cristian.marussi@arm.com>
  Damien Le Moal <damien.lemoal@wdc.com>
  Dan Williams <dan.j.williams@intel.com>
  Daniel Vetter <daniel.vetter@ffwll.ch>
  Dave Anglin <dave.anglin@bell.net>
  David S. Miller <davem@davemloft.net>
  David Sterba <dsterba@suse.com>
  Derek Basehore <dbasehore@chromium.org>
  Dinghao Liu <dinghao.liu@zju.edu.cn>
  Dmitry Torokhov <dmitry.torokhov@gmail.com>
  Douglas Anderson <dianders@chromium.org>
  Eddie James <eajames@linux.ibm.com>
  Emil Renner Berthing <kernel@esmil.dk>
  Eric Biggers <ebiggers@google.com>
  Evgeny Novikov <novikov@ispras.ru>
  Fabio Estevam <festevam@gmail.com>
  Fangrui Song <maskray@google.com>
  Federico Ricchiuto <fed.ricchiuto@gmail.com>
  Felipe Balbi <balbi@kernel.org>
  Filipe Manana <fdmanana@suse.com>
  Forest Crossman <cyrozap@gmail.com>
  Gavin Shan <gshan@redhat.com>
  Geert Uytterhoeven <geert@linux-m68k.org>
  George Kennedy <george.kennedy@oracle.com>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Guenter Roeck <linux@roeck-us.net>
  guodeqing <geffrey.guo@huawei.com>
  Hans de Goede <hdegoede@redhat.com>
  Heikki Krogerus <heikki.krogerus@linux.intel.com>
  Helge Deller <deller@gmx.de>
  Helmut Grohne <helmut.grohne@intenta.de>
  Huang Guobin <huangguobin4@huawei.com>
  Huazhong Tan <tanhuazhong@huawei.com>
  Hugh Dickins <hughd@google.com>
  Ian Abbott <abbotti@mev.co.uk>
  Igor Russkikh <irusskikh@marvell.com>
  Ilya Katsnelson <me@0upti.me>
  Ingo Molnar <mingo@kernel.org>
  J. Bruce Fields <bfields@redhat.com>
  Jack Xiao <Jack.Xiao@amd.com>
  Jacky Hu <hengqing.hu@gmail.com>
  Jakub Kicinski <kuba@kernel.org>
  Jason Gunthorpe <jgg@nvidia.com>
  Jerry (Fangzhi) Zuo <Jerry.Zuo@amd.com>
  Jing Xiangfeng <jingxiangfeng@huawei.com>
  Jiri Kosina <jkosina@suse.cz>
  Joel Stanley <joel@jms.id.au>
  Joerg Roedel <jroedel@suse.de>
  Johan Hovold <johan@kernel.org>
  Johannes Berg <johannes.berg@intel.com>
  Johannes Thumshirn <johannes.thumshirn@wdc.com>
  John David Anglin <dave.anglin@bell.net>
  Joonho Wohn <doomsheart@gmail.com>
  Julian Anastasov <ja@ssi.bg>
  Kai Vehmanen <kai.vehmanen@linux.intel.com>
  Kalle Valo <kvalo@codeaurora.org>
  Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
  leilk.liu <leilk.liu@mediatek.com>
  Leon Romanovsky <leonro@mellanox.com>
  Leonid Ravich <Leonid.Ravich@emc.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Linus Walleij <linus.walleij@linaro.org>
  Liu Jian <liujian56@huawei.com>
  Luca Coelho <luciano.coelho@intel.com>
  Mans Rullgard <mans@mansr.com>
  Maor Gottlieb <maorg@mellanox.com>
  Marc Kleine-Budde <mkl@pengutronix.de>
  Marc Zyngier <maz@kernel.org>
  Mark Brown <broonie@kernel.org>
  Mark O'Donovan <shiftee@posteo.net>
  Markus Theil <markus.theil@tu-ilmenau.de>
  Martin K. Petersen <martin.petersen@oracle.com>
  Masahiro Yamada <masahiroy@kernel.org>
  Matthew Gerlach <matthew.gerlach@linux.intel.com>
  Matthew Howell <matthew.howell@sealevel.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Maxime Ripard <maxime@cerno.tech>
  Merlijn Wajer <merlijn@wizzup.org>
  Michael Chan <michael.chan@broadcom.com>
  Michael Hennerich <michael.hennerich@analog.com>
  Michael J. Ruhl <michael.j.ruhl@intel.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michal Kalderon <michal.kalderon@marvell.com>
  Mike Snitzer <snitzer@redhat.com>
  Miklos Szeredi <mszeredi@redhat.com>
  Mikulas Patocka <mpatocka redhat com>
  Mikulas Patocka <mpatocka@redhat.com>
  Moritz Fischer <mdf@kernel.org>
  Muchun Song <songmuchun@bytedance.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Navid Emamdoost <navid.emamdoost@gmail.com>
  Nick Desaulniers <ndesaulniers@google.com>
  Oleg Nesterov <oleg@redhat.com>
  Olga Kornievskaia <kolga@netapp.com>
  Pablo Neira Ayuso <pablo@netfilter.org>
  Palmer Dabbelt <palmerdabbelt@google.com>
  Pavel Shilovsky <pshilov@microsoft.com>
  Paweł Gronowski <me@woland.xyz>
  PeiSen Hou <pshou@realtek.com.tw>
  Peter Chen <peter.chen@nxp.com>
  Pi-Hsun Shih <pihsun@chromium.org>
  Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com>
  Qi Liu <liuqi115@huawei.com>
  Qiu Wenbo <qiuwenbo@phytium.com.cn>
  Qiujun Huang <hqjagain@gmail.com>
  Qu Wenruo <wqu@suse.com>
  Richard Cochran <richardcochran@gmail.com>
  Robbie Ko <robbieko@synology.com>
  Rodrigo Rivas Costa <rodrigorivascosta@gmail.com>
  Roman Gushchin <guro@fb.com>
  Ronnie Sahlberg <lsahlber@redhat.com>
  Rustam Kovhaev <rkovhaev@gmail.com>
  Sai Prakash Ranjan <saiprakash.ranjan@codeaurora.org>
  Sasha Levin <sashal@kernel.org>
  Serge Semin <Sergey.Semin@baikalelectronics.ru>
  Sergey Organov <sorganov@gmail.com>
  Shannon Nelson <snelson@pensando.io>
  Shawn Guo <shawnguo@kernel.org>
  Shik Chen <shik@chromium.org>
  Shuah Khan <skhan@linuxfoundation.org>
  Siarhei Vishniakou <svv@google.com>
  Sreekanth Reddy <sreekanth.reddy@broadcom.com>
  Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com>
  Stefan Schmidt <stefan@datenfreihafen.org>
  Stefano Garzarella <sgarzare@redhat.com>
  Steve French <stfrench@microsoft.com>
  Steve Schremmer <steve.schremmer@netapp.com>
  Sumit Semwal <sumit.semwal@linaro.org>
  Taehee Yoo <ap420073@gmail.com>
  Takashi Iwai <tiwai@suse.de>
  Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
  Thierry Reding <treding@nvidia.com>
  Thomas Gleixner <tglx@linutronix.de>
  Tim Harvey <tharvey@gateworks.com>
  Todd Kjos <tkjos@google.com>
  Tom Rix <trix@redhat.com>
  Tony Lindgren <tony@atomide.com>
  Ulf Hansson <ulf.hansson@linaro.org>
  Vasiliy Kupriakov <rublag-ns@yandex.ru>
  Vasundhara Volam <vasundhara-v.volam@broadcom.com>
  Viktor Jägersküpper <viktor_jaegerskuepper@freenet.de>
  Vinod Koul <vkoul@kernel.org>
  Vlastimil Babka <vbabka@suse.cz>
  Wang Hai <wanghai38@huawei.com>
  Will Deacon <will@kernel.org>
  Wolfram Sang <wsa+renesas@sang-engineering.com>
  Wolfram Sang <wsa@kernel.org>
  Wu Hao <hao.wu@intel.com>
  Xie He <xie.he.0141@gmail.com>
  Xu Yilun <yilun.xu@intel.com>
  Yang Shi <yang.shi@linux.alibaba.com>
  Yang Yingliang <yangyingliang@huawei.com>
  Yunsheng Lin <linyunsheng@huawei.com>
  Zhang Xiaoxu <zhangxiaoxu5@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

hint: The 'hooks/update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-receive' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
To xenbits.xen.org:/home/xen/git/linux-pvops.git
   d811d29517d1..58a12e3368db  58a12e3368dbcadc57c6b3f5fcbecce757426f02 -> tested/linux-5.4


From xen-devel-bounces@lists.xenproject.org Wed Jul 29 22:23:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jul 2020 22:23:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k0uTL-0004K7-38; Wed, 29 Jul 2020 22:22:51 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ULCb=BI=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1k0uTK-0004Jn-Co
 for xen-devel@lists.xenproject.org; Wed, 29 Jul 2020 22:22:50 +0000
X-Inumbo-ID: 0567a374-d1ea-11ea-aa70-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0567a374-d1ea-11ea-aa70-12813bfff9fa;
 Wed, 29 Jul 2020 22:22:43 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=uLB57gA70g4VsS8jO2JHgLPLNf/F4lTgzr5CnoT1z20=; b=x4AqcX+fRMO6iASxR02bG53n/
 AMfqIaX5VESiXMCrxeuF8G3lRXtKxP+FgDUeGPkNMLJZ+QW/OLHvnigGK9dvm5h63JmA2awNqDQv6
 8iNqJelqZaEbEy4FyMezIIrW0HVHEdtC70LMnhGc35BJqE6XKGq7adNbP4nf093lt1V9A=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1k0uTC-0003nB-Bh; Wed, 29 Jul 2020 22:22:42 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1k0uTC-0006AL-1i; Wed, 29 Jul 2020 22:22:42 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1k0uTC-0007x3-0y; Wed, 29 Jul 2020 22:22:42 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-152288-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 152288: regressions - FAIL
X-Osstest-Failures: xen-unstable-smoke:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
 xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=64219fa179c3e48adad12bfce3f6b3f1596cccbf
X-Osstest-Versions-That: xen=b071ec25e85c4aacf3da59e5258cda0b1c4df45d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 29 Jul 2020 22:22:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 152288 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/152288/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-xl-xsm       7 xen-boot                 fail REGR. vs. 152269

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  64219fa179c3e48adad12bfce3f6b3f1596cccbf
baseline version:
 xen                  b071ec25e85c4aacf3da59e5258cda0b1c4df45d

Last test of basis   152269  2020-07-28 19:05:32 Z    1 days
Testing same since   152288  2020-07-29 19:01:00 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Fam Zheng <famzheng@amazon.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 64219fa179c3e48adad12bfce3f6b3f1596cccbf
Author: Fam Zheng <famzheng@amazon.com>
Date:   Wed Jul 29 18:51:45 2020 +0100

    x86/cpuid: Fix APIC bit clearing
    
    The bug is obvious here, other places in this function used
    "cpufeat_mask" correctly.
    
    Fixes: b648feff8ea2 ("xen/x86: Improvements to in-hypervisor cpuid sanity checks")
    Signed-off-by: Fam Zheng <famzheng@amazon.com>
    Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Wed Jul 29 23:30:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jul 2020 23:30:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k0vWI-000114-BY; Wed, 29 Jul 2020 23:29:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=1ssD=BI=runbox.com=m.v.b@srs-us1.protection.inumbo.net>)
 id 1k0vWH-00010z-4a
 for xen-devel@lists.xenproject.org; Wed, 29 Jul 2020 23:29:57 +0000
X-Inumbo-ID: 64f84574-d1f3-11ea-8cfa-bc764e2007e4
Received: from aibo.runbox.com (unknown [91.220.196.211])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 64f84574-d1f3-11ea-8cfa-bc764e2007e4;
 Wed, 29 Jul 2020 23:29:49 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=runbox.com; 
 s=selector2;
 h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
 bh=rISNQPH3HYP5FNkch553OTLzMMgKtfy1SUBnSvFHBcA=; b=gELl8Sj7dSWfhwEdBrc7g+F2AK
 ++6Vjd0/7jXRGM7shd4FlEq9oEjxT9hKTkOb0xJnRSeXIn53MFgtW+/yM19wysR4PUFnkdUr/Js2E
 PXBO7Ze1Vnn9vTdUxfkpV7qs5yjN8B1G6E7JCCYtucSoTmtzD4yXLOb3M+VaMHNwO11oZiDtAWmHC
 eGBIiV4ng0x5uNcLs+WmdCXXKwLrmjSnNfkiFuqXmlK+BB8ag1YTnWWtBZhaFh6CsRULoi7L85Rq4
 gmTC/OLBxTmLk0znM+ORUvhSw9mHpb9T4IDtt4eVzHiyg83mtWzp9o4hOknfsk06IRkPFTg2TIwqG
 a7CDLftw==;
Received: from [10.9.9.73] (helo=submission02.runbox)
 by mailtransmit03.runbox with esmtp (Exim 4.86_2)
 (envelope-from <m.v.b@runbox.com>)
 id 1k0vW2-0004IW-Uj; Thu, 30 Jul 2020 01:29:43 +0200
Received: by submission02.runbox with esmtpsa [Authenticated alias (536975)]
 (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1)
 id 1k0vVm-00053U-Ap; Thu, 30 Jul 2020 01:29:26 +0200
Subject: Re: [PATCH] x86/S3: put data segment registers into known state upon
 resume
To: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich <jbeulich@suse.com>
References: <3cad2798-1a01-7d5e-ea55-ddb9ba6388d9@suse.com>
 <6343ad61-246f-fefd-cd12-d260807e82f0@citrix.com>
 <c726cdc7-271b-0ea7-4056-8ab86686282e@suse.com>
 <e61e34c4-38dd-d201-8035-ead79a7595c2@citrix.com>
From: "M. Vefa Bicakci" <m.v.b@runbox.com>
Message-ID: <072d2566-8640-2bf6-d660-2eeb019c8a08@runbox.com>
Date: Thu, 30 Jul 2020 02:29:23 +0300
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.0.1
MIME-Version: 1.0
In-Reply-To: <e61e34c4-38dd-d201-8035-ead79a7595c2@citrix.com>
Content-Type: text/plain; charset=windows-1252; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 7/23/20 7:00 PM, Andrew Cooper wrote:
> On 23/07/2020 16:19, Jan Beulich wrote:
>> On 23.07.2020 16:40, Andrew Cooper wrote:
>>> On 20/07/2020 16:20, Jan Beulich wrote:
>>>> wakeup_32 sets %ds and %es to BOOT_DS, while leaving %fs at what
>>>> wakeup_start did set it to, and %gs at whatever BIOS did load into it.
>>>> All of this may end up confusing the first load_segments() to run on
>>>> the BSP after resume, in particular allowing a non-nul selector value
>>>> to be left in %fs.
>>>>
>>>> Alongside %ss, also put all other data segment registers into the same
>>>> state that the boot and CPU bringup paths put them in.
>>>>
>>>> Reported-by: M. Vefa Bicakci <m.v.b@runbox.com>
>>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>>>
>>>> --- a/xen/arch/x86/acpi/wakeup_prot.S
>>>> +++ b/xen/arch/x86/acpi/wakeup_prot.S
>>>> @@ -52,6 +52,16 @@ ENTRY(s3_resume)
>>>>           mov     %eax, %ss
>>>>           mov     saved_rsp(%rip), %rsp
>>>>   
>>>> +        /*
>>>> +         * Also put other segment registers into known state, like would
>>>> +         * be done on the boot path. This is in particular necessary for
>>>> +         * the first load_segments() to work as intended.
>>>> +         */
>>> I don't think the comment is helpful, not least because it refers to a
>>> broken behaviour in load_segments() which is soon going to change anyway.
>> Well, I can drop it. I merely thought I'd be nice and comment my
>> code once in a while (and the comment could be dropped / adjusted
>> when load_segments() changes)...
>>
>>> We've literally just loaded the GDT, at which point reloading all
>>> segments *is* the expected thing to do.
>> In a way, unless some/all are assumed to already hold a nul selector.
>>
>>> I'd recommend that the diff be simply:
>>>
>>> diff --git a/xen/arch/x86/acpi/wakeup_prot.S
>>> b/xen/arch/x86/acpi/wakeup_prot.S
>>> index dcc7e2327d..a2c41c4f3f 100644
>>> --- a/xen/arch/x86/acpi/wakeup_prot.S
>>> +++ b/xen/arch/x86/acpi/wakeup_prot.S
>>> @@ -49,6 +49,10 @@ ENTRY(s3_resume)
>>>   mov %rax, %cr0
>>>   
>>>   mov $__HYPERVISOR_DS64, %eax
>>> + mov %eax, %ds
>>> + mov %eax, %es
>>> + mov %eax, %fs
>>> + mov %eax, %gs
>>>   mov %eax, %ss
>>>   mov saved_rsp(%rip), %rsp
>> So I had specifically elected to not put the addition there, to make
>> sure the stack would get established first. But seeing both Roger
>> and you ask me to do otherwise - well, so be it then.
> 
> There is no IDT. Any fault will be a triple fault, irrespective of the exact
> code layout.
> 
> This sequence actually matches what we have in __high_start().
> 
> I don't think it is wise to write code which presumes that
> __HYPERVISOR_DS64 is 0 (it happens to be, but could easily be 0xe010 as
> well), or that the trampoline has fixed behaviours for the segments.

Hello Jan and Andrew,

Is there anything I can do to help with the delivery/merging of this patch?

If it would help, I can prepare and publish a patch according to Andrew's
comments. Given that the patch is not my work though, I assume that it
would be appropriate for me to credit both of you in the commit message and
add a Signed-off-by tag in the commit message for each of you.

Vefa


From xen-devel-bounces@lists.xenproject.org Wed Jul 29 23:31:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jul 2020 23:31:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k0vYB-0001k1-Nw; Wed, 29 Jul 2020 23:31:55 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=XnVs=BI=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1k0vYA-0001jv-3G
 for xen-devel@lists.xenproject.org; Wed, 29 Jul 2020 23:31:54 +0000
X-Inumbo-ID: ae28ff7c-d1f3-11ea-aa7c-12813bfff9fa
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ae28ff7c-d1f3-11ea-aa7c-12813bfff9fa;
 Wed, 29 Jul 2020 23:31:52 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1596065512;
 h=subject:to:cc:references:from:message-id:date:
 mime-version:in-reply-to:content-transfer-encoding;
 bh=P7Bd6LYoU4Om/hsW0kYYE0Q5zLNZTDMtD0E63E93pEA=;
 b=H1PZ+plBS71MulvtiP1VuIBAeJnfyuz4PhdCqc65BvbX/uywiloBRTcs
 i6B+mzoLRvIz0tzkYUtf8QZuvOTLnMNNjEw2lvd1bdNz+be2t92sdRFTL
 KNt2896d54LqvSQ0IdjGdtZqpsKRWD8KY3a5m+U+vdisce9s7LBHkgR+c c=;
Authentication-Results: esa2.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: 8xHH3hqjO32hswaIL9iw6uG2Z/fln4W+R/S1coGMp874+y0Hq5oAD1nMK8KjT5lSP1WPZwlACI
 R1vUDxCCO6b4dvJ4O9FSE43xtXgq8U/oWcCGfPTcoDH1mr6SVJUD/w4ox6SiJUR8SstbN3F+/h
 oQUqfVASUV5PUx8uoIIHe5YLv1g/YbuzG8/WEv6DC0KouNtneqL2NFAqUzXXOOJnDHeRvpRN/z
 MOwL8sF9GanZHnjacDe+aEF9/HZgLc41cCTDosQbvVSPyPVGRtHdiTLpzbegLuP8xGLjNnk9+V
 axk=
X-SBRS: 2.7
X-MesageID: 23509159
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,412,1589256000"; d="scan'208";a="23509159"
Subject: Re: [PATCH] x86/S3: put data segment registers into known state upon
 resume
To: "M. Vefa Bicakci" <m.v.b@runbox.com>, Jan Beulich <jbeulich@suse.com>
References: <3cad2798-1a01-7d5e-ea55-ddb9ba6388d9@suse.com>
 <6343ad61-246f-fefd-cd12-d260807e82f0@citrix.com>
 <c726cdc7-271b-0ea7-4056-8ab86686282e@suse.com>
 <e61e34c4-38dd-d201-8035-ead79a7595c2@citrix.com>
 <072d2566-8640-2bf6-d660-2eeb019c8a08@runbox.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <de1f03ff-6740-d9da-8cc6-e2dd4fea3c68@citrix.com>
Date: Thu, 30 Jul 2020 00:31:47 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <072d2566-8640-2bf6-d660-2eeb019c8a08@runbox.com>
Content-Type: text/plain; charset="windows-1252"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 30/07/2020 00:29, M. Vefa Bicakci wrote:
> On 7/23/20 7:00 PM, Andrew Cooper wrote:
>> On 23/07/2020 16:19, Jan Beulich wrote:
>>> On 23.07.2020 16:40, Andrew Cooper wrote:
>>>> On 20/07/2020 16:20, Jan Beulich wrote:
>>>>> wakeup_32 sets %ds and %es to BOOT_DS, while leaving %fs at what
>>>>> wakeup_start did set it to, and %gs at whatever BIOS did load into
>>>>> it.
>>>>> All of this may end up confusing the first load_segments() to run on
>>>>> the BSP after resume, in particular allowing a non-nul selector value
>>>>> to be left in %fs.
>>>>>
>>>>> Alongside %ss, also put all other data segment registers into the
>>>>> same
>>>>> state that the boot and CPU bringup paths put them in.
>>>>>
>>>>> Reported-by: M. Vefa Bicakci <m.v.b@runbox.com>
>>>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>>>>
>>>>> --- a/xen/arch/x86/acpi/wakeup_prot.S
>>>>> +++ b/xen/arch/x86/acpi/wakeup_prot.S
>>>>> @@ -52,6 +52,16 @@ ENTRY(s3_resume)
>>>>>  mov %eax, %ss
>>>>>  mov saved_rsp(%rip), %rsp
>>>>> +        /*
>>>>> +         * Also put other segment registers into known state, like would
>>>>> +         * be done on the boot path. This is in particular necessary for
>>>>> +         * the first load_segments() to work as intended.
>>>>> +         */
>>>> I don't think the comment is helpful, not least because it refers to a
>>>> broken behaviour in load_segments() which is soon going to change
>>>> anyway.
>>> Well, I can drop it. I merely thought I'd be nice and comment my
>>> code once in a while (and the comment could be dropped / adjusted
>>> when load_segments() changes)...
>>>
>>>> We've literally just loaded the GDT, at which point reloading all
>>>> segments *is* the expected thing to do.
>>> In a way, unless some/all are assumed to already hold a nul selector.
>>>
>>>> I'd recommend that the diff be simply:
>>>>
>>>> diff --git a/xen/arch/x86/acpi/wakeup_prot.S
>>>> b/xen/arch/x86/acpi/wakeup_prot.S
>>>> index dcc7e2327d..a2c41c4f3f 100644
>>>> --- a/xen/arch/x86/acpi/wakeup_prot.S
>>>> +++ b/xen/arch/x86/acpi/wakeup_prot.S
>>>> @@ -49,6 +49,10 @@ ENTRY(s3_resume)
>>>>  mov %rax, %cr0
>>>>   mov $__HYPERVISOR_DS64, %eax
>>>> + mov %eax, %ds
>>>> + mov %eax, %es
>>>> + mov %eax, %fs
>>>> + mov %eax, %gs
>>>>  mov %eax, %ss
>>>>  mov saved_rsp(%rip), %rsp
>>> So I had specifically elected to not put the addition there, to make
>>> sure the stack would get established first. But seeing both Roger
>>> and you ask me to do otherwise - well, so be it then.
>>
>> There is no IDT. Any fault will be a triple fault, irrespective of the exact
>> code layout.
>>
>> This sequence actually matches what we have in __high_start().
>>
>> I don't think it is wise to write code which presumes that
>> __HYPERVISOR_DS64 is 0 (it happens to be, but could easily be 0xe010 as
>> well), or that the trampoline has fixed behaviours for the segments.
>
> Hello Jan and Andrew,
>
> Is there anything I can do to help with the delivery/merging of this
> patch?

It was committed last Friday.

https://xenbits.xen.org/gitweb/?p=xen.git;a=commitdiff;h=55f8c389d4348cc517946fdcb10794112458e81e

I presume Jan will backport it to stable trees when he's not OoO.

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed Jul 29 23:39:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 29 Jul 2020 23:39:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k0vfJ-0001zb-H1; Wed, 29 Jul 2020 23:39:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=1ssD=BI=runbox.com=m.v.b@srs-us1.protection.inumbo.net>)
 id 1k0vfH-0001zU-9n
 for xen-devel@lists.xenproject.org; Wed, 29 Jul 2020 23:39:15 +0000
X-Inumbo-ID: b2189dda-d1f4-11ea-8cfa-bc764e2007e4
Received: from aibo.runbox.com (unknown [91.220.196.211])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b2189dda-d1f4-11ea-8cfa-bc764e2007e4;
 Wed, 29 Jul 2020 23:39:08 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=runbox.com; 
 s=selector2;
 h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
 bh=ryw1KN8G6h/++LeW6jPFPZaW48ovXavii2BR1FFNXKI=; b=FPowg7Di5qDZCGTgKJv0O15P14
 MEiCJNiet/vWmBU9fEZyTS3psaPzs43NLu9lME8G6p2+XvrkFuXUgaEcvuJ0fgipQFu0r+yqxN/7b
 p43sESKGo2tcg2Pbuds0IoF7k+Y4lyN2EME7Cz49wW1yBi4azCzRVjuh4+yBX95UAeIE9eeH/a3Cw
 cKtwtA2sYU/nVQsqPvvJNN+4zuJtIpM/h4IvWv+//uOqpJ33oPeZloP2rMRVu5dV/X/QrPA6lTBJM
 0o2ZgMiTACk5mcMEzG65a3dPzZB0Fhb8O8KbAFOxG9n/eGEyWS0NdmLKHXsOUqSxTfHq6tk6x3tNq
 ajSdHVfQ==;
Received: from [10.9.9.73] (helo=submission02.runbox)
 by mailtransmit02.runbox with esmtp (Exim 4.86_2)
 (envelope-from <m.v.b@runbox.com>)
 id 1k0vf4-0007Wm-PR; Thu, 30 Jul 2020 01:39:02 +0200
Received: by submission02.runbox with esmtpsa [Authenticated alias (536975)]
 (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1)
 id 1k0vew-0006HT-In; Thu, 30 Jul 2020 01:38:54 +0200
Subject: Re: [PATCH] x86/S3: put data segment registers into known state upon
 resume
To: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich <jbeulich@suse.com>
References: <3cad2798-1a01-7d5e-ea55-ddb9ba6388d9@suse.com>
 <6343ad61-246f-fefd-cd12-d260807e82f0@citrix.com>
 <c726cdc7-271b-0ea7-4056-8ab86686282e@suse.com>
 <e61e34c4-38dd-d201-8035-ead79a7595c2@citrix.com>
 <072d2566-8640-2bf6-d660-2eeb019c8a08@runbox.com>
 <de1f03ff-6740-d9da-8cc6-e2dd4fea3c68@citrix.com>
From: "M. Vefa Bicakci" <m.v.b@runbox.com>
Message-ID: <b9bae032-cdbe-de26-e8c1-1980bce175d3@runbox.com>
Date: Thu, 30 Jul 2020 02:38:52 +0300
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.0.1
MIME-Version: 1.0
In-Reply-To: <de1f03ff-6740-d9da-8cc6-e2dd4fea3c68@citrix.com>
Content-Type: text/plain; charset=windows-1252; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>

On 7/30/20 2:31 AM, Andrew Cooper wrote:
> On 30/07/2020 00:29, M. Vefa Bicakci wrote:
>> On 7/23/20 7:00 PM, Andrew Cooper wrote:
>>> On 23/07/2020 16:19, Jan Beulich wrote:
>>>> On 23.07.2020 16:40, Andrew Cooper wrote:
>>>>> On 20/07/2020 16:20, Jan Beulich wrote:
>>>>>> wakeup_32 sets %ds and %es to BOOT_DS, while leaving %fs at what
>>>>>> wakeup_start did set it to, and %gs at whatever BIOS did load into
>>>>>> it.
>>>>>> All of this may end up confusing the first load_segments() to run on
>>>>>> the BSP after resume, in particular allowing a non-nul selector value
>>>>>> to be left in %fs.
>>>>>>
>>>>>> Alongside %ss, also put all other data segment registers into the
>>>>>> same state that the boot and CPU bringup paths put them in.
>>>>>>
>>>>>> Reported-by: M. Vefa Bicakci <m.v.b@runbox.com>
>>>>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>>>>>
>>>>>> --- a/xen/arch/x86/acpi/wakeup_prot.S
>>>>>> +++ b/xen/arch/x86/acpi/wakeup_prot.S
>>>>>> @@ -52,6 +52,16 @@ ENTRY(s3_resume)
>>>>>>   mov %eax, %ss
>>>>>>   mov saved_rsp(%rip), %rsp
>>>>>> +
>>>>>> + /*
>>>>>> +  * Also put other segment registers into known state, like would
>>>>>> +  * be done on the boot path. This is in particular necessary for
>>>>>> +  * the first load_segments() to work as intended.
>>>>>> +  */
>>>>> I don't think the comment is helpful, not least because it refers to a
>>>>> broken behaviour in load_segments() which is soon going to change
>>>>> anyway.
>>>> Well, I can drop it. I merely thought I'd be nice and comment my
>>>> code once in a while (and the comment could be dropped / adjusted
>>>> when load_segments() changes)...
>>>>
>>>>> We've literally just loaded the GDT, at which point reloading all
>>>>> segments *is* the expected thing to do.
>>>> In a way, unless some/all are assumed to already hold a nul selector.
>>>>
>>>>> I'd recommend that the diff be simply:
>>>>>
>>>>> diff --git a/xen/arch/x86/acpi/wakeup_prot.S b/xen/arch/x86/acpi/wakeup_prot.S
>>>>> index dcc7e2327d..a2c41c4f3f 100644
>>>>> --- a/xen/arch/x86/acpi/wakeup_prot.S
>>>>> +++ b/xen/arch/x86/acpi/wakeup_prot.S
>>>>> @@ -49,6 +49,10 @@ ENTRY(s3_resume)
>>>>>   mov %rax, %cr0
>>>>>    mov $__HYPERVISOR_DS64, %eax
>>>>> + mov %eax, %ds
>>>>> + mov %eax, %es
>>>>> + mov %eax, %fs
>>>>> + mov %eax, %gs
>>>>>   mov %eax, %ss
>>>>>   mov saved_rsp(%rip), %rsp
>>>> So I had specifically elected to not put the addition there, to make
>>>> sure the stack would get established first. But seeing both Roger
>>>> and you ask me to do otherwise - well, so be it then.
>>>
>>> There is no IDT. Any fault will be a triple fault, irrespective of the exact
>>> code layout.
>>>
>>> This sequence actually matches what we have in __high_start().
>>>
>>> I don't think it is wise to write code which presumes that
>>> __HYPERVISOR_DS64 is 0 (it happens to be, but could easily be 0xe010 as
>>> well), or that the trampoline has fixed behaviours for the segments.
>>
>> Hello Jan and Andrew,
>>
>> Is there anything I can do to help with the delivery/merging of this
>> patch?
> 
> It was committed last Friday.
> 
> https://xenbits.xen.org/gitweb/?p=xen.git;a=commitdiff;h=55f8c389d4348cc517946fdcb10794112458e81e
> 
> I presume Jan will backport it to stable trees when he's not OoO.

Great -- thanks! (And sorry for not checking the git tree prior to
sending my e-mail.)

Vefa


From xen-devel-bounces@lists.xenproject.org Thu Jul 30 01:27:42 2020
Date: Wed, 29 Jul 2020 18:27:16 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: srini@yujala.com
Subject: Re: Porting Xen to Jetson Nano
In-Reply-To: <67c102642b0932d88ab2f70e96742ef0@yujala.com>
Message-ID: <alpine.DEB.2.21.2007291756380.1767@sstabellini-ThinkPad-T480s>
References: <002801d66051$90fe2300$b2fa6900$@yujala.com>
 <9736680b-1c81-652b-552b-4103341bad50@xen.org>
 <000001d661cb$45cdaa10$d168fe30$@yujala.com>
 <5f985a6a-1bd6-9e68-f35f-b0b665688cee@xen.org>
 <67c102642b0932d88ab2f70e96742ef0@yujala.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Cc: xen-devel@lists.xenproject.org, Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien@xen.org>,
 'Christopher Clark' <christopher.w.clark@gmail.com>

On Wed, 29 Jul 2020, srini@yujala.com wrote:
> Hi Julien,
> 
> On 2020-07-24 17:25, Julien Grall wrote:
> > On 24/07/2020 16:01, Srinivas Bangalore wrote:
> > 
> > I struggled to find your comment inline as your e-mail client doesn't
> > quote my answer. Please configure your e-mail client to use some form
> > of quoting (the usual is '>').
> > 
> > 
> I have switched to a web-based email client, so I hope this is better now.

Seems better, thank you


> > > (XEN) Freed 296kB init memory.
> > > (XEN) dom0 IPA 0x0000000088080000
> > > (XEN) P2M @ 0000000803fc3d40 mfn:0x17f0f5
> > > (XEN) 0TH[0x0] = 0x004000017f0f377f
> > > (XEN) 1ST[0x2] = 0x02c00000800006fd
> > > (XEN) Mem access check
> > > (XEN) dom0 IPA 0x0000000088080000
> > > (XEN) P2M @ 0000000803fc3d40 mfn:0x17f0f5
> > > (XEN) 0TH[0x0] = 0x004000017f0f377f
> > > (XEN) 1ST[0x2] = 0x02c00000800006fd
> > > (XEN) Mem access check
> > 
> > The instruction abort issue looks normal as the mapping is marked as
> > non-executable.
> > 
> > Looking at the rest of the bits, bits 55:58 indicate the type of mapping
> > used. The value suggests the mapping has been considered to be using
> > p2m_mmio_direct_c (RW cacheable MMIO). This looks wrong to me because
> > RAM should be mapped using p2m_ram_rw.
> > 
> > Looking at your DT, it looks like the region is marked as reserved. On
> > Xen 4.8, reserved-memory regions are not correctly handled (IIRC this
> > was only fixed in Xen 4.13). This should be possible to confirm by
> > enabling CONFIG_DEVICE_TREE_DEBUG in your .config.
> > 
> > The option will print more information about the mapping to dom0 on
> > your console.
> > 
> > However, given you are using an old release, you are at risk of
> > continuing to find bugs that have been resolved in more recent releases.
> > It would probably be better if you switched to Xen 4.14 and reported any
> > bugs you find there.
> > 
> Ok. I applied the patch series to 4.14. Enabled EARLY_PRINTK, CONFIG_DEBUG and
> DEVICE_TREE_DEBUG.
> Here's the log...
> 
> ## Flattened Device Tree blob at e3500000
>    Booting using the fdt blob at 0xe3500000
>    reserving fdt memory region: addr=80000000 size=20000
>    reserving fdt memory region: addr=e3500000 size=35000
>    Loading Device Tree to 00000000fc7f8000, end 00000000fc82ffff ... OK
> 
> Starting kernel ...
> 
> - UART enabled -
> - Boot CPU booting -
> - Current EL 00000008 -
> - Initialize CPU -
> - Turning on paging -
> - Zero BSS -
> - Ready -
> (XEN) Invalid size for reg
> (XEN) fdt: node `reserved-memory': parsing failed
> (XEN)
> (XEN) MODULE[0]: 00000000e0000000 - 00000000e014b0c8 Xen
> (XEN) MODULE[1]: 00000000fc7f8000 - 00000000fc82d000 Device Tree
> (XEN)  RESVD[0]: 0000000080000000 - 0000000080020000
> (XEN)  RESVD[1]: 00000000e3500000 - 00000000e3535000
> (XEN)  RESVD[2]: 00000000fc7f8000 - 00000000fc82d000
> (XEN)  RESVD[3]: 0000000040001000 - 000000004003ffff
> (XEN)  RESVD[4]: 00000000b0000000 - 00000000b01fffff
> (XEN)
> (XEN)
> (XEN) Command line: console=dtuart sync_console dom0_mem=128M loglvl=debug
> guest_loglvl=debug console_to_ring
> (XEN) Xen BUG at page_alloc.c:398
> (XEN) ----[ Xen-4.14.0  arm64  debug=y   Not tainted ]----
> (XEN) CPU:    0
> (XEN) PC:     00000000002b7b90 alloc_boot_pages+0x38/0x9c
> (XEN) LR:     00000000002cda04
> (XEN) SP:     0000000000307d40
> (XEN) CPSR:   a00003c9 MODE:64-bit EL2h (Hypervisor, handler)
> (XEN)      X0: 000fc80000002000  X1: 0000000000002000  X2: 0000000000000000
> (XEN)      X3: 000fffffffffffff  X4: ffffffffffffffff  X5: 0000000000000000
> (XEN)      X6: 0000000000307df0  X7: 0000000000000003  X8: 0000000000000008
> (XEN)      X9: fffffffffffffffb X10: 0101010101010101 X11: 0000000000000007
> (XEN)     X12: 0000000000000004 X13: ffffffffffffffff X14: ffffffffff000000
> (XEN)     X15: ffffffffffffffff X16: 0000000000000000 X17: 0000000000000000
> (XEN)     X18: 00000000fc834dd0 X19: 00000000002b5000 X20: 00000000fc7f8000
> (XEN)     X21: 00000000fc7f8000 X22: 0000000000000000 X23: fc80000000000038
> (XEN)     X24: 00000000fed9de28 X25: ffffffffffffffff X26: fc80000002000000
> (XEN)     X27: 0000000002000000 X28: 0000000000000000  FP: 0000000000307d40
> (XEN)
> (XEN)   VTCR_EL2: 80000000
> (XEN)  VTTBR_EL2: 0000000000000000
> (XEN)
> (XEN)  SCTLR_EL2: 30cd183d
> (XEN)    HCR_EL2: 0000000000000038
> (XEN)  TTBR0_EL2: 00000000e0145000
> (XEN)
> (XEN)    ESR_EL2: f2000001
> (XEN)  HPFAR_EL2: 0000000000000000
> (XEN)    FAR_EL2: 0000000000000000
> (XEN)
> (XEN) Xen stack trace from sp=0000000000307d40:
> (XEN)    0000000000307df0 00000000002cf114 0000000000000000 0000000000307d68
> (XEN)    00000000fc7f8000 00000000002ceeb0 0000000000400000 00676e69725f6f74
> (XEN)    ffffffffffffffff 0000000000000000 0000000000000000 0000000000307df0
> (XEN)    0000000000307df0 00000000002cef58 000000003fffffff 00000000fc7f8000
> (XEN)    00000000fc7f8000 000fc80000002000 0000000000400000 0080000000000000
> (XEN)    0000000000000000 000000000003ffff 00000000fc831170 00000000002001b8
> (XEN)    00000000e0000000 00000000dfe00000 00000000fc7f8000 0000000000000000
> (XEN)    0000000000400000 00000000fed9de28 0000000000000000 0000000000000000
> (XEN)    0000000000000000 0000000000000400 0000000000000000 0000000000035000
> (XEN)    00000000fc7f8000 0000000000000000 0000000000000000 0000000000000000
> (XEN)    0000000000000000 0000000300000000 0000000000000000 00000040ffffffff
> (XEN)    00000000ffffffff 0000000000000000 0000000000000000 0000000000000000
> (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
> (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
> (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
> (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
> (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
> (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
> (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
> (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
> (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
> (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
> (XEN) Xen call trace:
> (XEN)    [<00000000002b7b90>] alloc_boot_pages+0x38/0x9c (PC)
> (XEN)    [<00000000002cda04>] setup_frametable_mappings+0xb4/0x310 (LR)
> (XEN)    [<00000000002cf114>] start_xen+0x3a0/0xc48
> (XEN)    [<00000000002001b8>] arm64/head.o#primary_switched+0x10/0x30
> (XEN)
> (XEN)
> (XEN) ****************************************
> (XEN) Panic on CPU 0:
> (XEN) Xen BUG at page_alloc.c:398
> (XEN) ****************************************
> (XEN)
> (XEN) Reboot in five seconds...
> 
> There seems to be a problem with the DT in the 'reserved-memory' node.  I
> commented out the fb0-carveout, fb1-carveout sections, recompiled and tried to
> boot again.

Yes, those reserved-memory nodes won't work correctly with Xen
unfortunately: they either use "size" instead of "reg" (vpr-carveout) or
they specify "no-map". Only regular "reg" reserved memory regions
without "no-map" are properly parsed by Xen at the moment.



> This time the log shows the device tree messages (see attached log
> file), but Xen fails at this point...
> 
> (XEN) Allocating PPI 16 for event channel interrupt
> (XEN) Create hypervisor node
> (XEN) Create PSCI node
> (XEN) Create cpus node
> (XEN) Create cpu@0 (logical CPUID: 0) node
> (XEN) Create cpu@1 (logical CPUID: 1) node
> (XEN) Create cpu@2 (logical CPUID: 2) node
> (XEN) Create cpu@3 (logical CPUID: 3) node
> (XEN) Create memory node (reg size 4, nr cells 4)
> (XEN)   Bank 0: 0xe8000000->0xf0000000
> (XEN) Create memory node (reg size 4, nr cells 8)
> (XEN)   Bank 0: 0x40001000->0x40040000
> (XEN)   Bank 1: 0xb0000000->0xb0200000
> (XEN) Loading zImage from 00000000e1000000 to
> 00000000e8080000-00000000ea23c808
> (XEN)
> (XEN) ****************************************
> (XEN) Panic on CPU 0:
> (XEN) Unable to copy the kernel in the hwdom memory
> (XEN) ****************************************
> (XEN)
> 
> Device tree and log file attached. Is there an issue with the DT? Any pointers
> on where I should be looking next?

Is it possible that the kernel image was loaded on a memory area not
recognized as RAM?

So xen/arch/arm/guestcopy.c:translate_get_page fails the check
p2m_is_ram?

That would happen for instance if a device or special node is also
covering that address range.
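For reference, the failing check boils down to something like the following. This is a simplified, hypothetical sketch; the real code in xen/arch/arm/guestcopy.c walks the p2m and handles more page types than shown here:

```c
#include <assert.h>

/* Simplified, hypothetical sketch of the check being discussed: a copy
 * into the hardware domain fails when the page backing the destination
 * guest address is not of a RAM type. The names mirror the Xen ones,
 * but this code is illustrative only. */
enum p2m_type {
    p2m_invalid,
    p2m_ram_rw,          /* normal read-write RAM */
    p2m_mmio_direct_c,   /* RW cacheable MMIO -- not RAM */
};

#define p2m_is_ram(t) ((t) == p2m_ram_rw)

/* Returns 0 on success, -1 when the destination page is not RAM, which
 * is what would surface as "Unable to copy the kernel in the hwdom
 * memory" during dom0 construction. */
static int translate_get_page(enum p2m_type t)
{
    return p2m_is_ram(t) ? 0 : -1;
}
```

So if the load address falls in a region mapped as MMIO or left invalid, the copy fails even though the address looks like ordinary memory from the device tree's point of view.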


From xen-devel-bounces@lists.xenproject.org Thu Jul 30 01:27:54 2020
Date: Wed, 29 Jul 2020 18:27:51 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: committers@xenproject.org
Subject: kernel-doc and xen.git
Message-ID: <alpine.DEB.2.21.2007291644330.1767@sstabellini-ThinkPad-T480s>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Cc: xen-devel@lists.xenproject.org, sstabellini@kernel.org,
 Bertrand.Marquis@arm.com

Hi all,

I would like to ask for your feedback on the adoption of the kernel-doc
format for in-code comments.

In the FuSa SIG we have started looking into FuSa documents for Xen. One
of the things we are investigating are ways to link these documents to
in-code comments in xen.git and vice versa.

In this context, Andrew Cooper suggested having a look at "kernel-doc"
[1] during one of the virtual beer sessions at the last Xen Summit.

I had a look at kernel-doc, and it is very promising. kernel-doc is
a script that can generate nice rst text documents from in-code
comments. (The generated rst files can then be used as input for sphinx
to generate html docs.) The comment syntax [2] is simple and similar to
Doxygen:

    /**
     * function_name() - Brief description of function.
     * @arg1: Describe the first argument.
     * @arg2: Describe the second argument.
     *        One can provide multiple line descriptions
     *        for arguments.
     */

kernel-doc is actually better than Doxygen because it is a much simpler
tool, one we could customize to our needs and with predictable output.
Specifically, we could add the tagging, numbering, and referencing
required by FuSa requirement documents.
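For a concrete illustration, here is what a documented helper could look like in kernel-doc style. The function and its name are made up for the example:

```c
#include <assert.h>

/**
 * clamp_order() - Clamp an allocation order to a supported maximum.
 * @order:     Requested allocation order (log2 of the page count).
 * @max_order: Largest order the allocator supports.
 *
 * The two-star opener marks this comment as a kernel-doc block, so the
 * kernel-doc script can extract it into rst for sphinx.
 *
 * Return: @order if it does not exceed @max_order, otherwise @max_order.
 */
static unsigned int clamp_order(unsigned int order, unsigned int max_order)
{
    return order > max_order ? max_order : order;
}
```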

I would like your feedback on whether it would be good to start
converting xen.git in-code comments to the kernel-doc format so that
proper documents can be generated out of them. One day we could import
kernel-doc into xen.git/scripts and use it to generate a set of html
documents via sphinx.

At a minimum we'll need to start the in-code comment blocks with two
stars:

    /**

There could also be other small changes required to make sure the output
is appropriate.


Feedback is welcome!

Cheers,

Stefano

[1] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/scripts/kernel-doc
[2] https://www.kernel.org/doc/html/latest/doc-guide/kernel-doc.html


From xen-devel-bounces@lists.xenproject.org Thu Jul 30 01:31:01 2020
Date: Wed, 29 Jul 2020 18:30:57 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Jan Beulich <jbeulich@suse.com>
Subject: Re: [PATCH v2] xen/arm: Convert runstate address during hypcall
In-Reply-To: <1b046f2c-05c8-9276-a91e-fd55ec098bed@suse.com>
Message-ID: <alpine.DEB.2.21.2007291356060.1767@sstabellini-ThinkPad-T480s>
References: <4647a019c7b42d40d3c2f5b0a3685954bea7f982.1595948219.git.bertrand.marquis@arm.com>
 <8d2d7f03-450c-d50c-630b-8608c6d42bb9@suse.com>
 <FCAB700B-4617-4323-BE1E-B80DDA1806C1@arm.com>
 <1b046f2c-05c8-9276-a91e-fd55ec098bed@suse.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Xen-devel <xen-devel@lists.xenproject.org>, nd <nd@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>


On Wed, 29 Jul 2020, Jan Beulich wrote:
> On 29.07.2020 09:08, Bertrand Marquis wrote:
> > > On 28 Jul 2020, at 21:54, Jan Beulich <jbeulich@suse.com> wrote:
> > > 
> > > On 28.07.2020 17:52, Bertrand Marquis wrote:
> > > > At the moment on Arm, a Linux guest running with KTPI enabled will
> > > > cause the following error when a context switch happens in user mode:
> > > > (XEN) p2m.c:1890: d1v0: Failed to walk page-table va 0xffffff837ebe0cd0
> > > > The error is caused by the virtual address for the runstate area
> > > > registered by the guest only being accessible when the guest is running
> > > > in kernel space when KPTI is enabled.
> > > > To solve this issue, this patch is doing the translation from virtual
> > > > address to physical address during the hypercall and mapping the
> > > > required pages using vmap. This is removing the conversion from virtual
> > > > to physical address during the context switch which is solving the
> > > > problem with KPTI.
> > > > This is done only on arm architecture, the behaviour on x86 is not
> > > > modified by this patch and the address conversion is done as before
> > > > during each context switch.
> > > > This is introducing several limitations in comparison to the previous
> > > > behaviour (on arm only):
> > > > - if the guest is remapping the area at a different physical address Xen
> > > > will continue to update the area at the previous physical address. As
> > > > the area is in kernel space and usually defined as a global variable
> > > > this is something which is believed not to happen. If this is required by a
> > > > guest, it will have to call the hypercall with the new area (even if it
> > > > is at the same virtual address).
> > > > - the area needs to be mapped during the hypercall. For the same reasons
> > > > as for the previous case, even if the area is registered for a different
> > > > vcpu. It is believed that registering an area using an unmapped
> > > > virtual address is not something that is done.
> > > 
> > > Beside me thinking that an in-use and stable ABI can't be changed like
> > > this, no matter what is "believed" kernel code may or may not do, I
> > > also don't think having arch-es diverge in behavior here is a good
> > > idea. Use of commonly available interfaces shouldn't lead to
> > > headaches or surprises when porting code from one arch to another. I'm
> > > pretty sure it was suggested before: Why don't you simply introduce
> > > a physical address based hypercall (and then also on x86 at the same
> > > time, keeping functional parity)? I even seem to recall giving a
> > > suggestion how to fit this into a future "physical addresses only"
> > > model, as long as we can settle on the basic principles of that
> > > conversion path that we want to go sooner or later anyway (as I
> > > understand).
> > 
> > I fully agree with the “physical address only” model and I think it must be
> > done. Introducing a new hypercall taking a physical address as parameter
> > is the long term solution (and I would even volunteer to do it in a new
> > patchset).
> > But this would not solve the issue here unless Linux is modified.
> > So I do see this patch as a “bug fix”.
> 
> Well, it is sort of implied by my previous reply that we won't get away
> without an OS-side change here. The prereq to get away without one would be
> that it is okay to change the behavior of a hypercall like you do, and
> that it is okay to make the behavior diverge between arch-es. I think
> I've made pretty clear that I don't think either is really an option.

Hi Jan,

This is a difficult problem to solve and the current situation honestly
sucks: there is no way to solve the problem without making compromises.

The new hypercall is good-to-have in any case (it is a better interface)
but it is not a full solution.  If we introduce a new hypercall we fix
new guests but don't fix existing guests. If we change Linux in any way,
we are still going to have problems with all already-released kernel
binaries. Leaving the issue unfixed is not an option either because the
problem can't be ignored.

There is no simple way out of this situation.


After careful considerations and many discussions (most of them on
xen-devel [1][2]) we came to the realization that changing the behavior
for this hypercall on ARM is the best we can do:

1) it solves the problem for existing guests
2) it has no ill effects as far as we know


Are we happy about it? We are not. It is the best we can do given the
constraints. The rule to never change hypercalls is a good one to have
and I fully support it, but in this unique circumstance it is best to
make an exception.

The idea is to minimize the impact on x86, which is why Bertrand's patch
introduces a divergence between Arm and x86 on this hypercall.
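To sketch the idea in heavily simplified, hypothetical C: the guest virtual address is translated and mapped once, when the hypercall registers the area, and the context switch then updates the runstate through that cached mapping instead of walking the guest page tables each time. In real Xen the registration would translate the guest VA to physical pages and vmap() them; here a plain pointer stands in for that mapping, and the names are illustrative:

```c
#include <assert.h>
#include <stddef.h>

struct runstate_info {
    unsigned long state_entry_time;
};

struct vcpu {
    struct runstate_info *runstate_map;  /* mapping cached at registration */
};

/* Hypercall-time registration: translate/map once, while the guest VA is
 * guaranteed to be mapped (the guest is in kernel mode making the call). */
static int register_runstate_area(struct vcpu *v, struct runstate_info *area)
{
    if (area == NULL)
        return -1;            /* translation failed, e.g. unmapped VA */
    v->runstate_map = area;
    return 0;
}

/* Context-switch-time update: no guest page-table walk, so it also works
 * when the vcpu was interrupted in user mode under KPTI. */
static void update_runstate_area(struct vcpu *v, unsigned long now)
{
    if (v->runstate_map != NULL)
        v->runstate_map->state_entry_time = now;
}
```

This is also where the documented limitation comes from: if the guest later remaps the area to different physical pages without re-issuing the hypercall, Xen keeps updating the pages it mapped at registration time.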


In summary, please consider this patch together with its context. This
has been the only time in my memory when we had to do this -- it is
certainly a unique situation.



[1] https://marc.info/?l=xen-devel&m=158946660216159
[2] https://marc.info/?l=xen-devel&m=159067967432381


From xen-devel-bounces@lists.xenproject.org Thu Jul 30 03:15:07 2020
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-152284-mainreport@xen.org>
Subject: [qemu-mainline test] 152284: regressions - FAIL
X-Osstest-Failures: qemu-mainline:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
 qemu-mainline:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
 qemu-mainline:test-armhf-armhf-xl-vhd:host-ping-check-native:fail:regression
 qemu-mainline:test-amd64-amd64-qemuu-nested-intel:xen-boot/l1:fail:regression
 qemu-mainline:test-arm64-arm64-xl-thunderx:xen-boot:fail:regression
 qemu-mainline:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
 qemu-mainline:test-arm64-arm64-xl:xen-boot:fail:regression
 qemu-mainline:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
 qemu-mainline:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-qemuu-nested-amd:xen-boot/l1:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:windows-install:fail:regression
 qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This: qemuu=5772f2b1fc5d00e7e04e01fa28e9081d6550440a
X-Osstest-Versions-That: qemuu=9e3903136d9acde2fb2dd9e967ba928050a6cb4a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 30 Jul 2020 03:14:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 152284 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/152284/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-xl-seattle   7 xen-boot                 fail REGR. vs. 151065
 test-arm64-arm64-xl-credit1   7 xen-boot                 fail REGR. vs. 151065
 test-armhf-armhf-xl-vhd       5 host-ping-check-native   fail REGR. vs. 151065
 test-amd64-amd64-qemuu-nested-intel 14 xen-boot/l1       fail REGR. vs. 151065
 test-arm64-arm64-xl-thunderx  7 xen-boot                 fail REGR. vs. 151065
 test-arm64-arm64-xl-credit2   7 xen-boot                 fail REGR. vs. 151065
 test-arm64-arm64-xl           7 xen-boot                 fail REGR. vs. 151065
 test-arm64-arm64-xl-xsm       7 xen-boot                 fail REGR. vs. 151065
 test-arm64-arm64-libvirt-xsm  7 xen-boot                 fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-qemuu-nested-amd 14 xen-boot/l1         fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-win7-amd64 10 windows-install  fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-ws16-amd64 10 windows-install  fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-ws16-amd64 10 windows-install   fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-win7-amd64 10 windows-install   fail REGR. vs. 151065

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 151065
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 151065
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                5772f2b1fc5d00e7e04e01fa28e9081d6550440a
baseline version:
 qemuu                9e3903136d9acde2fb2dd9e967ba928050a6cb4a

Last test of basis   151065  2020-06-12 22:27:51 Z   47 days
Failing since        151101  2020-06-14 08:32:51 Z   45 days   64 attempts
Testing same since   152284  2020-07-29 11:03:27 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Ahmed Karaman <ahmedkhaledkaraman@gmail.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.m.mail@gmail.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Richardson <Alexander.Richardson@cl.cam.ac.uk>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Boettcher <alexander.boettcher@genode-labs.com>
  Alexander Bulekov <alxndr@bu.edu>
  Alexander Duyck <alexander.h.duyck@linux.intel.com>
  Alexandre Mergnat <amergnat@baylibre.com>
  Alexey Kardashevskiy <aik@ozlabs.ru>
  Alexey Krasikov <alex-krasikov@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Allan Peramaki <aperamak@pp1.inet.fi>
  Andreas Schwab <schwab@suse.de>
  Andrew <andrew@daynix.com>
  Andrew Jones <drjones@redhat.com>
  Andrew Melnychenko <andrew@daynix.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani.sinha@nutanix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Antoine Damhet <antoine.damhet@blade-group.com>
  Ard Biesheuvel <ardb@kernel.org>
  Artyom Tarasenko <atar4qemu@gmail.com>
  Atish Patra <atish.patra@wdc.com>
  Aurelien Jarno <aurelien@aurel32.net>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Basil Salman <basil@daynix.com>
  Basil Salman <bsalman@redhat.com>
  Beata Michalska <beata.michalska@linaro.org>
  Bin Meng <bin.meng@windriver.com>
  Bin Meng <bmeng.cn@gmail.com>
  Bruce Rogers <brogers@suse.com>
  Cameron Esfahani <dirty@apple.com>
  Catherine A. Frederick <chocola@animebitch.es>
  Cathy Zhang <cathy.zhang@intel.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chenyi Qiang <chenyi.qiang@intel.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christophe de Dinechin <dinechin@redhat.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Cindy Lu <lulu@redhat.com>
  Claudio Fontana <cfontana@suse.de>
  Cleber Rosa <crosa@redhat.com>
  Colin Xu <colin.xu@intel.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniele Buono <dbuono@linux.vnet.ibm.com>
  David Carlier <devnexen@gmail.com>
  David Edmondson <david.edmondson@oracle.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Denis V. Lunev <den@openvz.org>
  Derek Su <dereksu@qnap.com>
  Dongjiu Geng <gengdongjiu@huawei.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Ed Robbins <E.J.C.Robbins@kent.ac.uk>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Emilio G. Cota <cota@braap.org>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  erik-smit <erik.lucas.smit@gmail.com>
  Evgeny Yakovlev <eyakovlev@virtuozzo.com>
  fangying <fangying1@huawei.com>
  Farhan Ali <alifm@linux.ibm.com>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frank Chang <frank.chang@sifive.com>
  Geoffrey McRae <geoff@hostfission.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Giuseppe Musacchio <thatlemon@gmail.com>
  Greg Kurz <groug@kaod.org>
  Guenter Roeck <linux@roeck-us.net>
  Gustavo Romero <gromero@linux.ibm.com>
  Halil Pasic <pasic@linux.ibm.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Hogan Wang <hogan.wang@huawei.com>
  Hogan Wang <king.wang@huawei.com>
  Howard Spoelstra <hsp.cat7@gmail.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Ian Jiang <ianjiang.ict@gmail.com>
  Igor Mammedov <imammedo@redhat.com>
  Jan Kiszka <jan.kiskza@siemens.com>
  Jan Kiszka <jan.kiszka@siemens.com>
  Janne Grunau <j@jannau.net>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason Wang <jasowang@redhat.com>
  Jay Zhou <jianjay.zhou@huawei.com>
  Jean-Christophe Dubois <jcd@tribudubois.net>
  Jessica Clarke <jrtc27@jrtc27.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jingqi Liu <jingqi.liu@intel.com>
  John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Joseph Myers <joseph@codesourcery.com>
  Josh Kunz <jkz@google.com>
  Juan Quintela <quintela@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Keqian Zhu <zhukeqian1@huawei.com>
  Kevin Wolf <kwolf@redhat.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  KONRAD Frederic <frederic.konrad@adacore.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Leif Lindholm <leif@nuviainc.com>
  Leonid Bloch <lb.workbox@gmail.com>
  Leonid Bloch <lbloch@janustech.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Qiang <liq3ea@gmail.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  lichun <lichun@ruijie.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  Like Xu <like.xu@linux.intel.com>
  Lingfeng Yang <lfy@google.com>
  Lingshan zhu <lingshan.zhu@intel.com>
  Liran Alon <liran.alon@oracle.com>
  Liu Yi L <yi.l.liu@intel.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Luc Michel <luc.michel@greensocs.com>
  Lukas Straub <lukasstraub2@web.de>
  Luwei Kang <luwei.kang@intel.com>
  Maciej S. Szmigiero <maciej.szmigiero@oracle.com>
  Magnus Damm <magnus.damm@gmail.com>
  Mao Zhongyi <maozhongyi@cmss.chinamobile.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marcelo Tosatti <mtosatti@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Mario Smarduch <msmarduch@digitalocean.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Masahiro Yamada <masahiroy@kernel.org>
  Matus Kysel <mkysel@tachyum.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Maxime Coquelin <maxime.coquelin@redhat.com>
  Menno Lageman <menno.lageman@oracle.com>
  Michael Rolnik <mrolnik@gmail.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michal Privoznik <mprivozn@redhat.com>
  Michele Denber <denber@mindspring.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Olaf Hering <olaf@aepfle.de>
  Pan Nengyuan <pannengyuan@huawei.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Turschmid <peter.turschm@nutanix.com>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Radoslaw Biernacki <rad@semihalf.com>
  Raphael Norwitz <raphael.norwitz@nutanix.com>
  Reza Arbab <arbab@linux.ibm.com>
  Richard Henderson <richard.henderson@linaro.org>
  Richard W.M. Jones <rjones@redhat.com>
  Riku Voipio <riku.voipio@linaro.org>
  Robert Foley <robert.foley@linaro.org>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Roman Kagan <rkagan@virtuozzo.com>
  Roman Kagan <rvkagan@yandex-team.ru>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Sarah Harris <S.E.Harris@kent.ac.uk>
  Sebastian Rasmussen <sebras@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Weil <sw@weilnetz.de>
  Sven Schnelle <svens@stackframe.org>
  Tao Xu <tao3.xu@intel.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tiwei Bie <tiwei.bie@intel.com>
  Tong Ho <tong.ho@xilinx.com>
  Viktor Mihajlovski <mihajlov@linux.ibm.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  WangBowen <bowen.wang@intel.com>
  Wei Huang <wei.huang2@amd.com>
  Wei Wang <wei.w.wang@intel.com>
  Wentong Wu <wentong.wu@intel.com>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xie Yongji <xieyongji@bytedance.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Ying Fang <fangying1@huawei.com>
  Yoshinori Sato <ysato@users.sourceforge.jp>
  Yuri Benditovich <yuri.benditovich@daynix.com>
  Zhang Chen <chen.zhang@intel.com>
  Zheng Chuan <zhengchuan@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 34292 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Jul 30 04:09:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jul 2020 04:09:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k0zsz-0001Gn-SP; Thu, 30 Jul 2020 04:09:41 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=4+cv=BI=euphon.net=fam@srs-us1.protection.inumbo.net>)
 id 1k0p1v-0007yt-OX
 for xen-devel@lists.xenproject.org; Wed, 29 Jul 2020 16:34:12 +0000
X-Inumbo-ID: 51ecf96e-d1b9-11ea-8c97-bc764e2007e4
Received: from sender2-op-o12.zoho.com.cn (unknown [163.53.93.243])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 51ecf96e-d1b9-11ea-8c97-bc764e2007e4;
 Wed, 29 Jul 2020 16:34:08 +0000 (UTC)
ARC-Seal: i=1; a=rsa-sha256; t=1596040431; cv=none; d=zoho.com.cn; s=zohoarc; 
 b=RAIS9PAY2Jt1iDjYHPgQ/6xupBEovUgXf1YflQUBodAexRxpbpob94u2bKHyOi5rG3H2EiVB7SdGqn7rm2rv3m6b1hrH97egxH428Ju7e9SFuX60Vti6/eee6SPmOt1nQX63m3xviOiTRNbr3Pjo92NuBKDc8ZrcQv5WMu1qXrg=
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zoho.com.cn;
 s=zohoarc; 
 t=1596040431; h=Cc:Date:From:Message-ID:Subject:To; 
 bh=qf+v61LzDCqXASCfRuZXnlqCP79J34yOy7yE9dChAZs=; 
 b=NuvtotFev6VztUSwIBAF0o4NuXR+80lBwaIoF0LOTDVzE2roLqxtxXIvgc5deQcare8x0nrMnQSb54gJwtQotsPkczw6V5nviKlZo5kSBqo8L49a6Xx1A5O5k9Ozco7pb1uwFHdxfQf6e8h3VmzExsWmwOYZ5Hknn7yEqJOh+fU=
ARC-Authentication-Results: i=1; mx.zoho.com.cn;
 dkim=pass  header.i=euphon.net;
 spf=pass  smtp.mailfrom=fam@euphon.net;
 dmarc=pass header.from=<fam@euphon.net> header.from=<fam@euphon.net>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; t=1596040431; 
 s=zoho; d=euphon.net; i=fam@euphon.net;
 h=From:To:Cc:Subject:Date:Message-Id;
 bh=qf+v61LzDCqXASCfRuZXnlqCP79J34yOy7yE9dChAZs=;
 b=GuHJN0yRWD2zqY0FZgjgjyKK639XfbijCxtFeopXgwFNVHYG9qwQgJNPsrtWwRj9
 QxMODejono5nNs+VZgoPKaDT7r/oAMbL0bpJB2swlcCjq6colmkyqS8Z+7b5+sgFpKB
 HdHQFhIi/4D0N/J4pExr3zgunbhnT18EW6nILx8s=
Received: from localhost (54.239.6.186 [54.239.6.186]) by mx.zoho.com.cn
 with SMTPS id 15960404289281017.0441578230091;
 Thu, 30 Jul 2020 00:33:48 +0800 (CST)
From: fam@euphon.net
To: xen-devel@lists.xenproject.org
Subject: [PATCH] x86/cpuid: Fix APIC bit clearing
Date: Wed, 29 Jul 2020 17:33:41 +0100
Message-Id: <20200729163341.5662-1-fam@euphon.net>
X-Mailer: git-send-email 2.17.1
X-ZohoCNMailClient: External
X-Mailman-Approved-At: Thu, 30 Jul 2020 04:09:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: fam@euphon.net, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 famzheng@amazon.com,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Fam Zheng <famzheng@amazon.com>

The bug is obvious here: other places in this function use
"cpufeat_mask" correctly.

Signed-off-by: Fam Zheng <famzheng@amazon.com>
Fixes: 46df8a65 ("x86/cpuid: Effectively remove pv_cpuid() and hvm_cpuid()")
---
 xen/arch/x86/cpuid.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/arch/x86/cpuid.c b/xen/arch/x86/cpuid.c
index 6a4a787b68..63a03ef1e5 100644
--- a/xen/arch/x86/cpuid.c
+++ b/xen/arch/x86/cpuid.c
@@ -1057,7 +1057,7 @@ void guest_cpuid(const struct vcpu *v, uint32_t leaf,
         {
             /* Fast-forward MSR_APIC_BASE.EN. */
             if ( vlapic_hw_disabled(vcpu_vlapic(v)) )
-                res->d &= ~cpufeat_bit(X86_FEATURE_APIC);
+                res->d &= ~cpufeat_mask(X86_FEATURE_APIC);
 
             /*
              * PSE36 is not supported in shadow mode.  This bit should be
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu Jul 30 04:09:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jul 2020 04:09:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k0zt0-0001Gt-5z; Thu, 30 Jul 2020 04:09:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=cHsj=BI=yujala.com=srini@srs-us1.protection.inumbo.net>)
 id 1k0ppg-0003q7-VE
 for xen-devel@lists.xenproject.org; Wed, 29 Jul 2020 17:25:37 +0000
X-Inumbo-ID: 82ae41a0-d1c0-11ea-8ca3-bc764e2007e4
Received: from gproxy8-pub.mail.unifiedlayer.com (unknown [67.222.33.93])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 82ae41a0-d1c0-11ea-8ca3-bc764e2007e4;
 Wed, 29 Jul 2020 17:25:34 +0000 (UTC)
Received: from cmgw14.unifiedlayer.com (unknown [10.9.0.14])
 by gproxy8.mail.unifiedlayer.com (Postfix) with ESMTP id B67CE1AB061
 for <xen-devel@lists.xenproject.org>; Wed, 29 Jul 2020 11:25:32 -0600 (MDT)
Received: from md-71.webhostbox.net ([204.11.58.143]) by cmsmtp with ESMTP
 id 0ppakkJYvwNNl0ppbkEu2N; Wed, 29 Jul 2020 11:25:32 -0600
X-Authority-Reason: nr=8
X-Authority-Analysis: v=2.3 cv=EpP8UxUA c=1 sm=1 tr=0
 a=yS0qNmEK8ed8yKyeR8R6rg==:117 a=dLZJa+xiwSxG16/P+YVxDGlgEgI=:19
 a=_RQrkK6FrEwA:10:nop_rcvd_month_year
 a=o-A10e_uY_YA:10:endurance_base64_authed_username_1 a=IunIYjxrTQwdYXDVPiEA:9
 a=CjuIK1q_8ugA:10:nop_charset_2 a=HMUB7uMQ5RUd_Y6GMyUA:9
 a=KTP4nb5cLGMSD2Hv:21 a=6ctBOVSbc-o-UnJq:21 a=PKR2JL43MSgCF-QgmYIA:9
 a=AGEyEeAJx9FPORn3:21 a=6jjH-iIKsZa2G_vv:21 a=b4qJkNQYcYxkQ8qB:21
 a=m-Z_27IZkzAA:10:nop_attachment_filename_extension_2
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=yujala.com; 
 s=default;
 h=Content-Type:Message-ID:References:In-Reply-To:Subject:Cc:To:
 From:Date:MIME-Version:Sender:Reply-To:Content-Transfer-Encoding:Content-ID:
 Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc
 :Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:List-Subscribe:
 List-Post:List-Owner:List-Archive;
 bh=kFNQZNgdyRCaVGTmG90uBgjKkWpDvuhI0+7RQ0WbWhs=; b=ZRVpnp2Rzt9XWcNRFYc/YpP5Gx
 oO3RXCpZT2zklYE70YA1nqxUOngp5I2KAqacH7FmwQqkAx5UNT2NPZXo1sf0XCOsSQ9LWEZ+vaaIb
 P58fEO+RJVcJ8l8eGV8AcijVLpWXeNQJATnOSSyHKCxTnNCwR7vUa4lyjalFNJf0Uz5JrMPa1hIJe
 km8Z4a/CRaxYFUK1baFhrX/jmEj95i0581ZaSMF3C+yJcOcYDqxmXaY/vxh+ccGxNogKAJM0mqW+9
 WweQ6aXX4FdnngigfG5EskW9tz6x13TnV+TO9SXM4mQHobVrZhXpGWKIh4HNIffb5CY1zq1dLkMlH
 G8fK7wbg==;
Received: from md-71.webhostbox.net ([204.11.58.143]:51672)
 by md-71.webhostbox.net with esmtpa (Exim 4.93)
 (envelope-from <srini@yujala.com>)
 id 1k0ppa-001cSp-Fr; Wed, 29 Jul 2020 17:25:30 +0000
MIME-Version: 1.0
Date: Wed, 29 Jul 2020 17:25:30 +0000
From: srini@yujala.com
To: Julien Grall <julien@xen.org>
Subject: Re: Porting Xen to Jetson Nano
In-Reply-To: <5f985a6a-1bd6-9e68-f35f-b0b665688cee@xen.org>
References: <002801d66051$90fe2300$b2fa6900$@yujala.com>
 <9736680b-1c81-652b-552b-4103341bad50@xen.org>
 <000001d661cb$45cdaa10$d168fe30$@yujala.com>
 <5f985a6a-1bd6-9e68-f35f-b0b665688cee@xen.org>
Message-ID: <67c102642b0932d88ab2f70e96742ef0@yujala.com>
X-Sender: srini@yujala.com
User-Agent: Roundcube Webmail/1.3.13
Content-Type: multipart/mixed;
 boundary="=_48a69f2ecb1c59fcda7c31583f854280"
X-AntiAbuse: This header was added to track abuse,
 please include it with any abuse report
X-AntiAbuse: Primary Hostname - md-71.webhostbox.net
X-AntiAbuse: Original Domain - lists.xenproject.org
X-AntiAbuse: Originator/Caller UID/GID - [47 12] / [47 12]
X-AntiAbuse: Sender Address Domain - yujala.com
X-BWhitelist: no
X-Source-IP: 204.11.58.143
X-Source-L: Yes
X-Exim-ID: 1k0ppa-001cSp-Fr
X-Source: 
X-Source-Args: 
X-Source-Dir: 
X-Source-Sender: md-71.webhostbox.net [204.11.58.143]:51672
X-Source-Auth: srini@yujala.com
X-Email-Count: 3
X-Source-Cap: c3JpbmlxbGw7c3JpbmlxbGw7bWQtNzEud2ViaG9zdGJveC5uZXQ=
X-Local-Domain: yes
X-Mailman-Approved-At: Thu, 30 Jul 2020 04:09:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, Stefano Stabellini <sstabellini@kernel.org>,
 'Christopher Clark' <christopher.w.clark@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

--=_48a69f2ecb1c59fcda7c31583f854280
Content-Transfer-Encoding: 7bit
Content-Type: text/plain; charset=US-ASCII;
 format=flowed


Hi Julien,

On 2020-07-24 17:25, Julien Grall wrote:
> On 24/07/2020 16:01, Srinivas Bangalore wrote:
> 
> I struggled to find your comment inline as your e-mail client doesn't
> quote my answer. Please configure your e-mail client to use some form
> of quoting (the usual is '>').
> 
> 
I have switched to a web-based email client, so I hope this is better 
now.

>> (XEN) Freed 296kB init memory.
>> (XEN) dom0 IPA 0x0000000088080000
>> (XEN) P2M @ 0000000803fc3d40 mfn:0x17f0f5
>> (XEN) 0TH[0x0] = 0x004000017f0f377f
>> (XEN) 1ST[0x2] = 0x02c00000800006fd
>> (XEN) Mem access check
>> (XEN) dom0 IPA 0x0000000088080000
>> (XEN) P2M @ 0000000803fc3d40 mfn:0x17f0f5
>> (XEN) 0TH[0x0] = 0x004000017f0f377f
>> (XEN) 1ST[0x2] = 0x02c00000800006fd
>> (XEN) Mem access check
> 
> The instruction abort issue looks normal as the mapping is marked as
> non-executable.
> 
> Looking at the rest of the bits, bits 55:58 indicate the type of mapping
> used. The value suggests the mapping has been treated as
> p2m_mmio_direct_c (RW cacheable MMIO). This looks wrong to me because
> RAM should be mapped using p2m_ram_rw.
> 
> Looking at your DT, it looks like the region is marked as reserved. On
> Xen 4.8, reserved-memory regions are not correctly handled (IIRC this
> was only fixed in Xen 4.13). This should be possible to confirm by
> enabling CONFIG_DEVICE_TREE_DEBUG in your .config.
> 
> The option will print more debug information about the mappings to dom0
> on your console.
> 
> However, given you are using an old release, you are at risk of
> repeatedly hitting bugs that have already been resolved in more recent
> releases. It would probably be better if you switched to Xen 4.14 and
> reported any bugs you find there.
> 
OK. I applied the patch series to 4.14 and enabled EARLY_PRINTK, 
CONFIG_DEBUG and DEVICE_TREE_DEBUG.
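For reference, here is a minimal sketch of how I toggled the Kconfig symbols (assuming a tree where CONFIG_DEBUG and CONFIG_DEVICE_TREE_DEBUG are valid Kconfig options; the early-printk setting is platform-specific and omitted here):

```shell
# Sketch only: append the debug symbols to the hypervisor's .config.
# Assumption: .config stands in for the file produced by a prior
# `make -C xen defconfig`; a real build would follow this with
# `make -C xen olddefconfig` (or menuconfig) to resolve dependencies.
touch .config
cat >> .config <<'EOF'
CONFIG_DEBUG=y
CONFIG_DEVICE_TREE_DEBUG=y
EOF
grep 'CONFIG_DEVICE_TREE_DEBUG' .config
```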
Here's the log...

## Flattened Device Tree blob at e3500000
    Booting using the fdt blob at 0xe3500000
    reserving fdt memory region: addr=80000000 size=20000
    reserving fdt memory region: addr=e3500000 size=35000
    Loading Device Tree to 00000000fc7f8000, end 00000000fc82ffff ... OK

Starting kernel ...

- UART enabled -
- Boot CPU booting -
- Current EL 00000008 -
- Initialize CPU -
- Turning on paging -
- Zero BSS -
- Ready -
(XEN) Invalid size for reg
(XEN) fdt: node `reserved-memory': parsing failed
(XEN)
(XEN) MODULE[0]: 00000000e0000000 - 00000000e014b0c8 Xen
(XEN) MODULE[1]: 00000000fc7f8000 - 00000000fc82d000 Device Tree
(XEN)  RESVD[0]: 0000000080000000 - 0000000080020000
(XEN)  RESVD[1]: 00000000e3500000 - 00000000e3535000
(XEN)  RESVD[2]: 00000000fc7f8000 - 00000000fc82d000
(XEN)  RESVD[3]: 0000000040001000 - 000000004003ffff
(XEN)  RESVD[4]: 00000000b0000000 - 00000000b01fffff
(XEN)
(XEN)
(XEN) Command line: console=dtuart sync_console dom0_mem=128M loglvl=debug guest_loglvl=debug console_to_ring
(XEN) Xen BUG at page_alloc.c:398
(XEN) ----[ Xen-4.14.0  arm64  debug=y   Not tainted ]----
(XEN) CPU:    0
(XEN) PC:     00000000002b7b90 alloc_boot_pages+0x38/0x9c
(XEN) LR:     00000000002cda04
(XEN) SP:     0000000000307d40
(XEN) CPSR:   a00003c9 MODE:64-bit EL2h (Hypervisor, handler)
(XEN)      X0: 000fc80000002000  X1: 0000000000002000  X2: 0000000000000000
(XEN)      X3: 000fffffffffffff  X4: ffffffffffffffff  X5: 0000000000000000
(XEN)      X6: 0000000000307df0  X7: 0000000000000003  X8: 0000000000000008
(XEN)      X9: fffffffffffffffb X10: 0101010101010101 X11: 0000000000000007
(XEN)     X12: 0000000000000004 X13: ffffffffffffffff X14: ffffffffff000000
(XEN)     X15: ffffffffffffffff X16: 0000000000000000 X17: 0000000000000000
(XEN)     X18: 00000000fc834dd0 X19: 00000000002b5000 X20: 00000000fc7f8000
(XEN)     X21: 00000000fc7f8000 X22: 0000000000000000 X23: fc80000000000038
(XEN)     X24: 00000000fed9de28 X25: ffffffffffffffff X26: fc80000002000000
(XEN)     X27: 0000000002000000 X28: 0000000000000000  FP: 0000000000307d40
(XEN)
(XEN)   VTCR_EL2: 80000000
(XEN)  VTTBR_EL2: 0000000000000000
(XEN)
(XEN)  SCTLR_EL2: 30cd183d
(XEN)    HCR_EL2: 0000000000000038
(XEN)  TTBR0_EL2: 00000000e0145000
(XEN)
(XEN)    ESR_EL2: f2000001
(XEN)  HPFAR_EL2: 0000000000000000
(XEN)    FAR_EL2: 0000000000000000
(XEN)
(XEN) Xen stack trace from sp=0000000000307d40:
(XEN)    0000000000307df0 00000000002cf114 0000000000000000 0000000000307d68
(XEN)    00000000fc7f8000 00000000002ceeb0 0000000000400000 00676e69725f6f74
(XEN)    ffffffffffffffff 0000000000000000 0000000000000000 0000000000307df0
(XEN)    0000000000307df0 00000000002cef58 000000003fffffff 00000000fc7f8000
(XEN)    00000000fc7f8000 000fc80000002000 0000000000400000 0080000000000000
(XEN)    0000000000000000 000000000003ffff 00000000fc831170 00000000002001b8
(XEN)    00000000e0000000 00000000dfe00000 00000000fc7f8000 0000000000000000
(XEN)    0000000000400000 00000000fed9de28 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000400 0000000000000000 0000000000035000
(XEN)    00000000fc7f8000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000300000000 0000000000000000 00000040ffffffff
(XEN)    00000000ffffffff 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN) Xen call trace:
(XEN)    [<00000000002b7b90>] alloc_boot_pages+0x38/0x9c (PC)
(XEN)    [<00000000002cda04>] setup_frametable_mappings+0xb4/0x310 (LR)
(XEN)    [<00000000002cf114>] start_xen+0x3a0/0xc48
(XEN)    [<00000000002001b8>] arm64/head.o#primary_switched+0x10/0x30
(XEN)
(XEN)
(XEN) ****************************************
(XEN) Panic on CPU 0:
(XEN) Xen BUG at page_alloc.c:398
(XEN) ****************************************
(XEN)
(XEN) Reboot in five seconds...

There seems to be a problem with the DT in the 'reserved-memory' node.
I commented out the fb0_carveout and fb1_carveout sections, recompiled,
and tried to boot again. This time the log shows the device-tree
messages (see attached log file), but Xen fails at this point...

(XEN) Allocating PPI 16 for event channel interrupt
(XEN) Create hypervisor node
(XEN) Create PSCI node
(XEN) Create cpus node
(XEN) Create cpu@0 (logical CPUID: 0) node
(XEN) Create cpu@1 (logical CPUID: 1) node
(XEN) Create cpu@2 (logical CPUID: 2) node
(XEN) Create cpu@3 (logical CPUID: 3) node
(XEN) Create memory node (reg size 4, nr cells 4)
(XEN)   Bank 0: 0xe8000000->0xf0000000
(XEN) Create memory node (reg size 4, nr cells 8)
(XEN)   Bank 0: 0x40001000->0x40040000
(XEN)   Bank 1: 0xb0000000->0xb0200000
(XEN) Loading zImage from 00000000e1000000 to 00000000e8080000-00000000ea23c808
(XEN)
(XEN) ****************************************
(XEN) Panic on CPU 0:
(XEN) Unable to copy the kernel in the hwdom memory
(XEN) ****************************************
(XEN)

Device tree and log file attached. Is there an issue with the DT? Any 
pointers on where I should be looking next?
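In case it helps, these are the two nodes I commented out, as they appear in the attached DTS (phandle properties omitted; the enclosing reserved-memory node uses #address-cells = <2> and #size-cells = <2>). My guess, not verified against the Xen source, is that their all-zero reg entries are what triggered the "Invalid size for reg" parse failure:

```dts
fb0_carveout {
	reg = <0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0>;
	reg-names = "surface", "lut";
	no-map;
};

fb1_carveout {
	reg = <0x0 0x0 0x0 0x0 0x0 0x0 0x0 0x0>;
	reg-names = "surface", "lut";
	no-map;
};
```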

Thanks for your help.

Regards,
Srini

--=_48a69f2ecb1c59fcda7c31583f854280
Content-Transfer-Encoding: base64
Content-Type: text/plain;
 name=jetson-nano-b00.dts
Content-Disposition: attachment;
 filename=jetson-nano-b00.dts;
 size=287912

L2R0cy12MS87CgovbWVtcmVzZXJ2ZS8JMHgwMDAwMDAwMDgwMDAwMDAwIDB4MDAwMDAwMDAwMDAy
MDAwMDsKLyB7Cgljb21wYXRpYmxlID0gIm52aWRpYSxwMzQ0OS0wMDAwLWIwMCtwMzQ0OC0wMDAw
LWIwMCIsICJudmlkaWEsamV0c29uLW5hbm8iLCAibnZpZGlhLHRlZ3JhMjEwIjsKCWludGVycnVw
dC1wYXJlbnQgPSA8MHgxPjsKCSNhZGRyZXNzLWNlbGxzID0gPDB4Mj47Cgkjc2l6ZS1jZWxscyA9
IDwweDI+OwoJbnZpZGlhLGR0YmJ1aWxkdGltZSA9ICJKdWwgMjMgMjAyMCIsICIxNzozMDo0OCI7
CgludmlkaWEsYm9hcmRpZHMgPSAiMzQ0OCI7CgludmlkaWEscHJvYy1ib2FyZGlkID0gIjM0NDgi
OwoJbnZpZGlhLHBtdS1ib2FyZGlkID0gIjM0NDgiOwoJbnZpZGlhLGZhc3Rib290LXVzYi1waWQg
PSA8MHhiNDQyPjsKCW1vZGVsID0gIk5WSURJQSBKZXRzb24gTmFubyBEZXZlbG9wZXIgS2l0IjsK
CW52aWRpYSxkdHNmaWxlbmFtZSA9ICIuLi9hcmNoL2FybTY0L2Jvb3QvZHRzLy4uLy4uLy4uLy4u
Ly4uLy4uL2hhcmR3YXJlL252aWRpYS9wbGF0Zm9ybS90MjEwL3Bvcmcva2VybmVsLWR0cy90ZWdy
YTIxMC1wMzQ0OC0wMDAwLXAzNDQ5LTAwMDAtYjAwLmR0cyI7CgoJdGhlcm1hbC16b25lcyB7CgoJ
CUFPLXRoZXJtIHsKCQkJc3RhdHVzID0gIm9rYXkiOwoJCQlwb2xsaW5nLWRlbGF5ID0gPDB4M2U4
PjsKCQkJcG9sbGluZy1kZWxheS1wYXNzaXZlID0gPDB4M2U4PjsKCQkJdGhlcm1hbC1zZW5zb3Jz
ID0gPDB4Mj47CgoJCQl0cmlwcyB7CgoJCQkJdHJpcF9zaHV0ZG93biB7CgkJCQkJdGVtcGVyYXR1
cmUgPSA8MHgxYWRiMD47CgkJCQkJaHlzdGVyZXNpcyA9IDwweDA+OwoJCQkJCXR5cGUgPSAiY3Jp
dGljYWwiOwoJCQkJCXdyaXRhYmxlOwoJCQkJfTsKCgkJCQlncHUtc2NhbGluZzAgewoJCQkJCXRl
bXBlcmF0dXJlID0gPDB4ZmZmZjllNTg+OwoJCQkJCWh5c3RlcmVzaXMgPSA8MHgwPjsKCQkJCQl0
eXBlID0gImFjdGl2ZSI7CgkJCQkJbGludXgscGhhbmRsZSA9IDwweGE3PjsKCQkJCQlwaGFuZGxl
ID0gPDB4YTc+OwoJCQkJfTsKCgkJCQlncHUtc2NhbGluZzEgewoJCQkJCXRlbXBlcmF0dXJlID0g
PDB4M2E5OD47CgkJCQkJaHlzdGVyZXNpcyA9IDwweDNlOD47CgkJCQkJdHlwZSA9ICJhY3RpdmUi
OwoJCQkJCWxpbnV4LHBoYW5kbGUgPSA8MHgzPjsKCQkJCQlwaGFuZGxlID0gPDB4Mz47CgkJCQl9
OwoKCQkJCWdwdS1zY2FsaW5nMiB7CgkJCQkJdGVtcGVyYXR1cmUgPSA8MHg3NTMwPjsKCQkJCQlo
eXN0ZXJlc2lzID0gPDB4M2U4PjsKCQkJCQl0eXBlID0gImFjdGl2ZSI7CgkJCQkJbGludXgscGhh
bmRsZSA9IDwweDU+OwoJCQkJCXBoYW5kbGUgPSA8MHg1PjsKCQkJCX07CgoJCQkJZ3B1LXNjYWxp
bmczIHsKCQkJCQl0ZW1wZXJhdHVyZSA9IDwweGMzNTA+OwoJCQkJCWh5c3RlcmVzaXMgPSA8MHgz
ZTg+OwoJCQkJCXR5cGUgPSAiYWN0aXZlIjsKCQkJCQlsaW51eCxwaGFuZGxlID0gPDB4Nj47CgkJ
CQkJcGhhbmRsZSA9IDwweDY+OwoJCQkJfTsKCgkJCQlncHUtc2NhbGluZzQgewoJCQkJCXRlbXBl
cmF0dXJlID0gPDB4MTExNzA+OwoJCQkJCWh5c3RlcmVzaXMgPSA8MHgzZTg+OwoJCQkJCXR5cGUg
PSAiYWN0aXZlIjsKCQkJCQlsaW51eCxwaGFuZGxlID0gPDB4Nz47CgkJCQkJcGhhbmRsZSA9IDww
eDc+OwoJCQkJfTsKCgkJCQlncHUtc2NhbGluZzUgewoJCQkJCXRlbXBlcmF0dXJlID0gPDB4MTlh
Mjg+OwoJCQkJCWh5c3RlcmVzaXMgPSA8MHgzZTg+OwoJCQkJCXR5cGUgPSAiYWN0aXZlIjsKCQkJ
CQlsaW51eCxwaGFuZGxlID0gPDB4OD47CgkJCQkJcGhhbmRsZSA9IDwweDg+OwoJCQkJfTsKCgkJ
CQlncHUtdm1heDEgewoJCQkJCXRlbXBlcmF0dXJlID0gPDB4MTQ0Mzg+OwoJCQkJCWh5c3RlcmVz
aXMgPSA8MHgzZTg+OwoJCQkJCXR5cGUgPSAiYWN0aXZlIjsKCQkJCQlsaW51eCxwaGFuZGxlID0g
PDB4OT47CgkJCQkJcGhhbmRsZSA9IDwweDk+OwoJCQkJfTsKCgkJCQljb3JlX2R2ZnNfZmxvb3Jf
dHJpcDAgewoJCQkJCXRlbXBlcmF0dXJlID0gPDB4M2E5OD47CgkJCQkJaHlzdGVyZXNpcyA9IDww
eDNlOD47CgkJCQkJdHlwZSA9ICJhY3RpdmUiOwoJCQkJCWxpbnV4LHBoYW5kbGUgPSA8MHhiPjsK
CQkJCQlwaGFuZGxlID0gPDB4Yj47CgkJCQl9OwoKCQkJCWNvcmVfZHZmc19jYXBfdHJpcDAgewoJ
CQkJCXRlbXBlcmF0dXJlID0gPDB4MTRjMDg+OwoJCQkJCWh5c3RlcmVzaXMgPSA8MHgzZTg+OwoJ
CQkJCXR5cGUgPSAiYWN0aXZlIjsKCQkJCQlsaW51eCxwaGFuZGxlID0gPDB4ZD47CgkJCQkJcGhh
bmRsZSA9IDwweGQ+OwoJCQkJfTsKCgkJCQlkZmxsLWZsb29yLXRyaXAwIHsKCQkJCQl0ZW1wZXJh
dHVyZSA9IDwweDNhOTg+OwoJCQkJCWh5c3RlcmVzaXMgPSA8MHgzZTg+OwoJCQkJCXR5cGUgPSAi
YWN0aXZlIjsKCQkJCQlsaW51eCxwaGFuZGxlID0gPDB4Zj47CgkJCQkJcGhhbmRsZSA9IDwweGY+
OwoJCQkJfTsKCQkJfTsKCgkJCXRoZXJtYWwtem9uZS1wYXJhbXMgewoJCQkJZ292ZXJub3ItbmFt
ZSA9ICJwaWRfdGhlcm1hbF9nb3YiOwoJCQl9OwoKCQkJY29vbGluZy1tYXBzIHsKCgkJCQlncHUt
c2NhbGluZy1tYXAxIHsKCQkJCQl0cmlwID0gPDB4Mz47CgkJCQkJY29vbGluZy1kZXZpY2UgPSA8
MHg0IDB4MSAweDE+OwoJCQkJfTsKCgkJCQlncHUtc2NhbGluZy1tYXAyIHsKCQkJCQl0cmlwID0g
PDB4NT47CgkJCQkJY29vbGluZy1kZXZpY2UgPSA8MHg0IDB4MiAweDI+OwoJCQkJfTsKCgkJCQln
cHVfc2NhbGluZ19tYXAzIHsKCQkJCQl0cmlwID0gPDB4Nj47CgkJCQkJY29vbGluZy1kZXZpY2Ug
PSA8MHg0IDB4MyAweDM+OwoJCQkJfTsKCgkJCQlncHUtc2NhbGluZy1tYXA0IHsKCQkJCQl0cmlw
ID0gPDB4Nz47CgkJCQkJY29vbGluZy1kZXZpY2UgPSA8MHg0IDB4NCAweDQ+OwoJCQkJfTsKCgkJ
CQlncHUtc2NhbGluZy1tYXA1IHsKCQkJCQl0cmlwID0gPDB4OD47CgkJCQkJY29vbGluZy1kZXZp
Y2UgPSA8MHg0IDB4NSAweDU+OwoJCQkJfTsKCgkJCQlncHUtdm1heC1tYXAxIHsKCQkJCQl0cmlw
ID0gPDB4OT47CgkJCQkJY29vbGluZy1kZXZpY2UgPSA8MHhhIDB4MSAweDE+OwoJCQkJfTsKCgkJ
CQljb3JlX2R2ZnNfZmxvb3JfbWFwMCB7CgkJCQkJdHJpcCA9IDwweGI+OwoJCQkJCWNvb2xpbmct
ZGV2aWNlID0gPDB4YyAweDEgMHgxPjsKCQkJCX07CgoJCQkJY29yZV9kdmZzX2NhcF9tYXAwIHsK
CQkJCQl0cmlwID0gPDB4ZD47CgkJCQkJY29vbGluZy1kZXZpY2UgPSA8MHhlIDB4MSAweDE+OwoJ
CQkJfTsKCgkJCQlkZmxsLWZsb29yLW1hcDAgewoJCQkJCXRyaXAgPSA8MHhmPjsKCQkJCQljb29s
aW5nLWRldmljZSA9IDwweDEwIDB4MSAweDE+OwoJCQkJfTsKCQkJfTsKCQl9OwoKCQlDUFUtdGhl
cm0gewoJCQlwb2xsaW5nLWRlbGF5ID0gPDB4MD47CgkJCXBvbGxpbmctZGVsYXktcGFzc2l2ZSA9
IDwweDFmND47CgkJCXRoZXJtYWwtc2Vuc29ycyA9IDwweDExIDB4MD47CgkJCXN0YXR1cyA9ICJv
a2F5IjsKCgkJCXRoZXJtYWwtem9uZS1wYXJhbXMgewoJCQkJZ292ZXJub3ItbmFtZSA9ICJzdGVw
X3dpc2UiOwoJCQkJbWF4X2Vycl90ZW1wID0gPDB4MjMyOD47CgkJCQltYXhfZXJyX2dhaW4gPSA8
MHgzZTg+OwoJCQkJZ2Fpbl9wID0gPDB4M2U4PjsKCQkJCWdhaW5fZCA9IDwweDA+OwoJCQkJdXBf
Y29tcGVuc2F0aW9uID0gPDB4MTQ+OwoJCQkJZG93bl9jb21wZW5zYXRpb24gPSA8MHgxND47CgkJ
CX07CgoJCQl0cmlwcyB7CgoJCQkJZGZsbC1jYXAtdHJpcDAgewoJCQkJCXRlbXBlcmF0dXJlID0g
PDB4MTAxZDA+OwoJCQkJCWh5c3RlcmVzaXMgPSA8MHgzZTg+OwoJCQkJCXR5cGUgPSAiYWN0aXZl
IjsKCQkJCQlsaW51eCxwaGFuZGxlID0gPDB4MTY+OwoJCQkJCXBoYW5kbGUgPSA8MHgxNj47CgkJ
CQl9OwoKCQkJCWRmbGwtY2FwLXRyaXAxIHsKCQkJCQl0ZW1wZXJhdHVyZSA9IDwweDE0ZmYwPjsK
CQkJCQloeXN0ZXJlc2lzID0gPDB4M2U4PjsKCQkJCQl0eXBlID0gImFjdGl2ZSI7CgkJCQkJbGlu
dXgscGhhbmRsZSA9IDwweDE4PjsKCQkJCQlwaGFuZGxlID0gPDB4MTg+OwoJCQkJfTsKCgkJCQlj
cHVfY3JpdGljYWwgewoJCQkJCXRlbXBlcmF0dXJlID0gPDB4MThlNzA+OwoJCQkJCWh5c3RlcmVz
aXMgPSA8MHgwPjsKCQkJCQl0eXBlID0gImNyaXRpY2FsIjsKCQkJCQl3cml0YWJsZTsKCQkJCX07
CgoJCQkJY3B1X2hlYXZ5IHsKCQkJCQl0ZW1wZXJhdHVyZSA9IDwweDE4ODk0PjsKCQkJCQloeXN0
ZXJlc2lzID0gPDB4MD47CgkJCQkJdHlwZSA9ICJob3QiOwoJCQkJCXdyaXRhYmxlOwoJCQkJCWxp
bnV4LHBoYW5kbGUgPSA8MHgxMj47CgkJCQkJcGhhbmRsZSA9IDwweDEyPjsKCQkJCX07CgoJCQkJ
Y3B1X3Rocm90dGxlIHsKCQkJCQl0ZW1wZXJhdHVyZSA9IDwweDE3YWU4PjsKCQkJCQloeXN0ZXJl
c2lzID0gPDB4MD47CgkJCQkJdHlwZSA9ICJwYXNzaXZlIjsKCQkJCQl3cml0YWJsZTsKCQkJCQls
aW51eCxwaGFuZGxlID0gPDB4MTQ+OwoJCQkJCXBoYW5kbGUgPSA8MHgxND47CgkJCQl9OwoJCQl9
OwoKCQkJY29vbGluZy1tYXBzIHsKCgkJCQltYXAxIHsKCQkJCQl0cmlwID0gPDB4MTI+OwoJCQkJ
CWNkZXYtdHlwZSA9ICJ0ZWdyYV9oZWF2eSI7CgkJCQkJY29vbGluZy1kZXZpY2UgPSA8MHgxMyAw
eDEgMHgxPjsKCQkJCX07CgoJCQkJbWFwMiB7CgkJCQkJdHJpcCA9IDwweDE0PjsKCQkJCQljZGV2
LXR5cGUgPSAiY3B1LWJhbGFuY2VkIjsKCQkJCQljb29saW5nLWRldmljZSA9IDwweDE1IDB4ZmZm
ZmZmZmYgMHhmZmZmZmZmZj47CgkJCQl9OwoKCQkJCWRmbGwtY2FwLW1hcDAgewoJCQkJCXRyaXAg
PSA8MHgxNj47CgkJCQkJY29vbGluZy1kZXZpY2UgPSA8MHgxNyAweDEgMHgxPjsKCQkJCX07CgoJ
CQkJZGZsbC1jYXAtbWFwMSB7CgkJCQkJdHJpcCA9IDwweDE4PjsKCQkJCQljb29saW5nLWRldmlj
ZSA9IDwweDE3IDB4MiAweDI+OwoJCQkJfTsKCQkJfTsKCQl9OwoKCQlHUFUtdGhlcm0gewoJCQlw
b2xsaW5nLWRlbGF5ID0gPDB4MD47CgkJCXBvbGxpbmctZGVsYXktcGFzc2l2ZSA9IDwweDFmND47
CgkJCXRoZXJtYWwtc2Vuc29ycyA9IDwweDExIDB4Mj47CgkJCXN0YXR1cyA9ICJva2F5IjsKCgkJ
CXRoZXJtYWwtem9uZS1wYXJhbXMgewoJCQkJZ292ZXJub3ItbmFtZSA9ICJzdGVwX3dpc2UiOwoJ
CQkJbWF4X2Vycl90ZW1wID0gPDB4MjMyOD47CgkJCQltYXhfZXJyX2dhaW4gPSA8MHgzZTg+OwoJ
CQkJZ2Fpbl9wID0gPDB4M2U4PjsKCQkJCWdhaW5fZCA9IDwweDA+OwoJCQkJdXBfY29tcGVuc2F0
aW9uID0gPDB4MTQ+OwoJCQkJZG93bl9jb21wZW5zYXRpb24gPSA8MHgxND47CgkJCX07CgoJCQl0
cmlwcyB7CgoJCQkJZ3B1X2NyaXRpY2FsIHsKCQkJCQl0ZW1wZXJhdHVyZSA9IDwweDE5MDY0PjsK
CQkJCQloeXN0ZXJlc2lzID0gPDB4MD47CgkJCQkJdHlwZSA9ICJjcml0aWNhbCI7CgkJCQkJd3Jp
dGFibGU7CgkJCQl9OwoKCQkJCWdwdV9oZWF2eSB7CgkJCQkJdGVtcGVyYXR1cmUgPSA8MHgxOGE4
OD47CgkJCQkJaHlzdGVyZXNpcyA9IDwweDA+OwoJCQkJCXR5cGUgPSAiaG90IjsKCQkJCQl3cml0
YWJsZTsKCQkJCQlsaW51eCxwaGFuZGxlID0gPDB4MTk+OwoJCQkJCXBoYW5kbGUgPSA8MHgxOT47
CgkJCQl9OwoKCQkJCWdwdV90aHJvdHRsZSB7CgkJCQkJdGVtcGVyYXR1cmUgPSA8MHgxN2NkYz47
CgkJCQkJaHlzdGVyZXNpcyA9IDwweDA+OwoJCQkJCXR5cGUgPSAicGFzc2l2ZSI7CgkJCQkJd3Jp
dGFibGU7CgkJCQkJbGludXgscGhhbmRsZSA9IDwweDFhPjsKCQkJCQlwaGFuZGxlID0gPDB4MWE+
OwoJCQkJfTsKCQkJfTsKCgkJCWNvb2xpbmctbWFwcyB7CgoJCQkJbWFwMSB7CgkJCQkJdHJpcCA9
IDwweDE5PjsKCQkJCQljZGV2LXR5cGUgPSAidGVncmFfaGVhdnkiOwoJCQkJCWNvb2xpbmctZGV2
aWNlID0gPDB4MTMgMHgxIDB4MT47CgkJCQl9OwoKCQkJCW1hcDIgewoJCQkJCXRyaXAgPSA8MHgx
YT47CgkJCQkJY2Rldi10eXBlID0gImdwdS1iYWxhbmNlZCI7CgkJCQkJY29vbGluZy1kZXZpY2Ug
PSA8MHgxYiAweGZmZmZmZmZmIDB4ZmZmZmZmZmY+OwoJCQkJfTsKCQkJfTsKCQl9OwoKCQlQTEwt
dGhlcm0gewoJCQlwb2xsaW5nLWRlbGF5ID0gPDB4MD47CgkJCXBvbGxpbmctZGVsYXktcGFzc2l2
ZSA9IDwweDNlOD47CgkJCXRoZXJtYWwtc2Vuc29ycyA9IDwweDExIDB4Mz47CgkJCXN0YXR1cyA9
ICJva2F5IjsKCgkJCXRoZXJtYWwtem9uZS1wYXJhbXMgewoJCQkJZ292ZXJub3ItbmFtZSA9ICJw
aWRfdGhlcm1hbF9nb3YiOwoJCQkJbWF4X2Vycl90ZW1wID0gPDB4MjMyOD47CgkJCQltYXhfZXJy
X2dhaW4gPSA8MHgzZTg+OwoJCQkJZ2Fpbl9wID0gPDB4M2U4PjsKCQkJCWdhaW5fZCA9IDwweDA+
OwoJCQkJdXBfY29tcGVuc2F0aW9uID0gPDB4MTQ+OwoJCQkJZG93bl9jb21wZW5zYXRpb24gPSA8
MHgxND47CgkJCX07CgoJCQl0cmlwcyB7CgoJCQkJZHJhbS10aHJvdHRsZSB7CgkJCQkJdGVtcGVy
YXR1cmUgPSA8MHgxMTE3MD47CgkJCQkJaHlzdGVyZXNpcyA9IDwweDNlOD47CgkJCQkJdHlwZSA9
ICJwYXNzaXZlIjsKCQkJCQl3cml0YWJsZTsKCQkJCQlsaW51eCxwaGFuZGxlID0gPDB4MWM+OwoJ
CQkJCXBoYW5kbGUgPSA8MHgxYz47CgkJCQl9OwoJCQl9OwoKCQkJY29vbGluZy1tYXBzIHsKCgkJ
CQltYXAtdGVncmEtZHJhbSB7CgkJCQkJdHJpcCA9IDwweDFjPjsKCQkJCQljb29saW5nLWRldmlj
ZSA9IDwweDFkIDB4MSAweDE+OwoJCQkJCWNkZXYtdHlwZSA9ICJ0ZWdyYS1kcmFtIjsKCQkJCX07
CgkJCX07CgkJfTsKCgkJUE1JQy1EaWUgewoJCQlwb2xsaW5nLWRlbGF5ID0gPDB4MD47CgkJCXBv
bGxpbmctZGVsYXktcGFzc2l2ZSA9IDwweDA+OwoJCQl0aGVybWFsLXNlbnNvcnMgPSA8MHgxZT47
CgoJCQl0cmlwcyB7CgoJCQkJaG90LWRpZSB7CgkJCQkJdGVtcGVyYXR1cmUgPSA8MHgxZDRjMD47
CgkJCQkJdHlwZSA9ICJhY3RpdmUiOwoJCQkJCWh5c3RlcmVzaXMgPSA8MHgwPjsKCQkJCQlsaW51
eCxwaGFuZGxlID0gPDB4MWY+OwoJCQkJCXBoYW5kbGUgPSA8MHgxZj47CgkJCQl9OwoJCQl9OwoK
CQkJY29vbGluZy1tYXBzIHsKCgkJCQltYXAwIHsKCQkJCQl0cmlwID0gPDB4MWY+OwoJCQkJCWNv
b2xpbmctZGV2aWNlID0gPDB4MjAgMHhmZmZmZmZmZiAweGZmZmZmZmZmPjsKCQkJCQljb250cmli
dXRpb24gPSA8MHg2ND47CgkJCQkJY2Rldi10eXBlID0gImVtZXJnZW5jeS1iYWxhbmNlZCI7CgkJ
CQl9OwoJCQl9OwoJCX07Cgl9OwoKCWNvcmVfZHZmc19jZGV2X2Zsb29yIHsKCQljb21wYXRpYmxl
ID0gIm52aWRpYSx0ZWdyYS1jb3JlLWNkZXYtYWN0aW9uIjsKCQljZGV2LXR5cGUgPSAiQ09SRS1m
bG9vciI7CgkJI2Nvb2xpbmctY2VsbHMgPSA8MHgyPjsKCQlsaW51eCxwaGFuZGxlID0gPDB4Yz47
CgkJcGhhbmRsZSA9IDwweGM+OwoJfTsKCgljb3JlX2R2ZnNfY2Rldl9jYXAgewoJCWNvbXBhdGli
bGUgPSAibnZpZGlhLHRlZ3JhLWNvcmUtY2Rldi1hY3Rpb24iOwoJCWNkZXYtdHlwZSA9ICJDT1JF
LWNhcCI7CgkJI2Nvb2xpbmctY2VsbHMgPSA8MHgyPjsKCQljbG9ja3MgPSA8MHgyMSAweDE5OCAw
eDIxIDB4MWExIDB4MjEgMHgxYjggMHgyMSAweDFmNiAweDIxIDB4MjA2PjsKCQljbG9jay1uYW1l
cyA9ICJjMmJ1c19jYXAiLCAiYzNidXNfY2FwIiwgInNjbGtfY2FwIiwgImhvc3QxeF9jYXAiLCAi
YWRzcF9jYXAiOwoJCWxpbnV4LHBoYW5kbGUgPSA8MHhlPjsKCQlwaGFuZGxlID0gPDB4ZT47Cgl9
OwoKCXBvd2VyLWRvbWFpbiB7CgkJY29tcGF0aWJsZSA9ICJ0ZWdyYS1wb3dlci1kb21haW5zIjsK
CgkJaG9zdDF4LXBkIHsKCQkJY29tcGF0aWJsZSA9ICJudmlkaWEsdGVncmEyMTAtaG9zdDF4LXBk
IjsKCQkJaXNfb2ZmOwoJCQlob3N0MXg7CgkJCSNwb3dlci1kb21haW4tY2VsbHMgPSA8MHgwPjsK
CQkJbGludXgscGhhbmRsZSA9IDwweDIzPjsKCQkJcGhhbmRsZSA9IDwweDIzPjsKCQl9OwoKCQlh
cGUtcGQgewoJCQljb21wYXRpYmxlID0gIm52aWRpYSx0ZWdyYTIxMC1hcGUtcGQiOwoJCQlpc19v
ZmY7CgkJCSNwb3dlci1kb21haW4tY2VsbHMgPSA8MHgwPjsKCQkJcGFydGl0aW9uLWlkID0gPDB4
MWI+OwoJCQljbG9ja3MgPSA8MHgyMSAweGM2IDB4MjEgMHg2YiAweDIxIDB4Yzc+OwoJCQljbG9j
ay1uYW1lcyA9ICJhcGUiLCAiYXBiMmFwZSIsICJhZHNwIjsKCQkJbGludXgscGhhbmRsZSA9IDww
eDIyPjsKCQkJcGhhbmRsZSA9IDwweDIyPjsKCQl9OwoKCQlhZHNwLXBkIHsKCQkJY29tcGF0aWJs
ZSA9ICJudmlkaWEsdGVncmEyMTAtYWRzcC1wZCI7CgkJCWlzX29mZjsKCQkJI3Bvd2VyLWRvbWFp
bi1jZWxscyA9IDwweDA+OwoJCQlwb3dlci1kb21haW5zID0gPDB4MjI+OwoJCQlsaW51eCxwaGFu
ZGxlID0gPDB4ZGY+OwoJCQlwaGFuZGxlID0gPDB4ZGY+OwoJCX07CgoJCXRzZWMtcGQgewoJCQlj
b21wYXRpYmxlID0gIm52aWRpYSx0ZWdyYTIxMC10c2VjLXBkIjsKCQkJaXNfb2ZmOwoJCQkjcG93
ZXItZG9tYWluLWNlbGxzID0gPDB4MD47CgkJCXBvd2VyLWRvbWFpbnMgPSA8MHgyMz47CgkJCWxp
bnV4LHBoYW5kbGUgPSA8MHg2Yj47CgkJCXBoYW5kbGUgPSA8MHg2Yj47CgkJfTsKCgkJbnZkZWMt
cGQgewoJCQljb21wYXRpYmxlID0gIm52aWRpYSx0ZWdyYTIxMC1udmRlYy1wZCI7CgkJCWlzX29m
ZjsKCQkJI3Bvd2VyLWRvbWFpbi1jZWxscyA9IDwweDA+OwoJCQlwb3dlci1kb21haW5zID0gPDB4
MjM+OwoJCQlwYXJ0aXRpb24taWQgPSA8MHgxOT47CgkJCWxpbnV4LHBoYW5kbGUgPSA8MHg2Yz47
CgkJCXBoYW5kbGUgPSA8MHg2Yz47CgkJfTsKCgkJdmUyLXBkIHsKCQkJY29tcGF0aWJsZSA9ICJu
dmlkaWEsdGVncmEyMTAtdmUyLXBkIjsKCQkJaXNfb2ZmOwoJCQkjcG93ZXItZG9tYWluLWNlbGxz
ID0gPDB4MD47CgkJCXBvd2VyLWRvbWFpbnMgPSA8MHgyMz47CgkJCXBhcnRpdGlvbi1pZCA9IDww
eDFkPjsKCQkJbGludXgscGhhbmRsZSA9IDwweDVjPjsKCQkJcGhhbmRsZSA9IDwweDVjPjsKCQl9
OwoKCQl2aWMwMy1wZCB7CgkJCWNvbXBhdGlibGUgPSAibnZpZGlhLHRlZ3JhMjEwLXZpYzAzLXBk
IjsKCQkJaXNfb2ZmOwoJCQkjcG93ZXItZG9tYWluLWNlbGxzID0gPDB4MD47CgkJCXBvd2VyLWRv
bWFpbnMgPSA8MHgyMz47CgkJCXBhcnRpdGlvbi1pZCA9IDwweDE3PjsKCQkJbGludXgscGhhbmRs
ZSA9IDwweDY5PjsKCQkJcGhhbmRsZSA9IDwweDY5PjsKCQl9OwoKCQltc2VuYy1wZCB7CgkJCWNv
bXBhdGlibGUgPSAibnZpZGlhLHRlZ3JhMjEwLW1zZW5jLXBkIjsKCQkJaXNfb2ZmOwoJCQkjcG93
ZXItZG9tYWluLWNlbGxzID0gPDB4MD47CgkJCXBvd2VyLWRvbWFpbnMgPSA8MHgyMz47CgkJCXBh
cnRpdGlvbi1pZCA9IDwweDY+OwoJCQlsaW51eCxwaGFuZGxlID0gPDB4NmE+OwoJCQlwaGFuZGxl
ID0gPDB4NmE+OwoJCX07CgoJCW52anBnLXBkIHsKCQkJY29tcGF0aWJsZSA9ICJudmlkaWEsdGVn
cmEyMTAtbnZqcGctcGQiOwoJCQlpc19vZmY7CgkJCSNwb3dlci1kb21haW4tY2VsbHMgPSA8MHgw
PjsKCQkJcG93ZXItZG9tYWlucyA9IDwweDIzPjsKCQkJcGFydGl0aW9uLWlkID0gPDB4MWE+OwoJ
CQlsaW51eCxwaGFuZGxlID0gPDB4NmQ+OwoJCQlwaGFuZGxlID0gPDB4NmQ+OwoJCX07CgoJCXBj
aWUtcGQgewoJCQljb21wYXRpYmxlID0gIm52aWRpYSx0ZWdyYTIxMC1wY2llLXBkIjsKCQkJaXNf
b2ZmOwoJCQkjcG93ZXItZG9tYWluLWNlbGxzID0gPDB4MD47CgkJCXBhcnRpdGlvbi1pZCA9IDww
eDM+OwoJCQlsaW51eCxwaGFuZGxlID0gPDB4N2E+OwoJCQlwaGFuZGxlID0gPDB4N2E+OwoJCX07
CgoJCXZlLXBkIHsKCQkJY29tcGF0aWJsZSA9ICJudmlkaWEsdGVncmEyMTAtdmUtcGQiOwoJCQlp
c19vZmY7CgkJCSNwb3dlci1kb21haW4tY2VsbHMgPSA8MHgwPjsKCQkJcG93ZXItZG9tYWlucyA9
IDwweDIzPjsKCQkJcGFydGl0aW9uLWlkID0gPDB4Mj47CgkJCWxpbnV4LHBoYW5kbGUgPSA8MHg1
OT47CgkJCXBoYW5kbGUgPSA8MHg1OT47CgkJfTsKCgkJc2F0YS1wZCB7CgkJCWNvbXBhdGlibGUg
PSAibnZpZGlhLHRlZ3JhMjEwLXNhdGEtcGQiOwoJCQkjcG93ZXItZG9tYWluLWNlbGxzID0gPDB4
MD47CgkJCXBhcnRpdGlvbi1pZCA9IDwweDg+OwoJCQlsaW51eCxwaGFuZGxlID0gPDB4ZTA+OwoJ
CQlwaGFuZGxlID0gPDB4ZTA+OwoJCX07CgoJCXNvci1wZCB7CgkJCWNvbXBhdGlibGUgPSAibnZp
ZGlhLHRlZ3JhMjEwLXNvci1wZCI7CgkJCSNwb3dlci1kb21haW4tY2VsbHMgPSA8MHgwPjsKCQkJ
cGFydGl0aW9uLWlkID0gPDB4MTE+OwoJCQlsaW51eCxwaGFuZGxlID0gPDB4ZTE+OwoJCQlwaGFu
ZGxlID0gPDB4ZTE+OwoJCX07CgoJCWRpc2EtcGQgewoJCQljb21wYXRpYmxlID0gIm52aWRpYSx0
ZWdyYTIxMC1kaXNhLXBkIjsKCQkJI3Bvd2VyLWRvbWFpbi1jZWxscyA9IDwweDA+OwoJCQlwYXJ0
aXRpb24taWQgPSA8MHgxMj47CgkJCWxpbnV4LHBoYW5kbGUgPSA8MHhlMj47CgkJCXBoYW5kbGUg
PSA8MHhlMj47CgkJfTsKCgkJZGlzYi1wZCB7CgkJCWNvbXBhdGlibGUgPSAibnZpZGlhLHRlZ3Jh
MjEwLWRpc2ItcGQiOwoJCQkjcG93ZXItZG9tYWluLWNlbGxzID0gPDB4MD47CgkJCXBhcnRpdGlv
bi1pZCA9IDwweDEzPjsKCQkJbGludXgscGhhbmRsZSA9IDwweGUzPjsKCQkJcGhhbmRsZSA9IDww
eGUzPjsKCQl9OwoKCQl4dXNiYS1wZCB7CgkJCWNvbXBhdGlibGUgPSAibnZpZGlhLHRlZ3JhMjEw
LXh1c2JhLXBkIjsKCQkJI3Bvd2VyLWRvbWFpbi1jZWxscyA9IDwweDA+OwoJCQlwYXJ0aXRpb24t
aWQgPSA8MHgxND47CgkJCWxpbnV4LHBoYW5kbGUgPSA8MHhlND47CgkJCXBoYW5kbGUgPSA8MHhl
ND47CgkJfTsKCgkJeHVzYmItcGQgewoJCQljb21wYXRpYmxlID0gIm52aWRpYSx0ZWdyYTIxMC14
dXNiYi1wZCI7CgkJCSNwb3dlci1kb21haW4tY2VsbHMgPSA8MHgwPjsKCQkJcGFydGl0aW9uLWlk
ID0gPDB4MTU+OwoJCQlsaW51eCxwaGFuZGxlID0gPDB4ZTU+OwoJCQlwaGFuZGxlID0gPDB4ZTU+
OwoJCX07CgoJCXh1c2JjLXBkIHsKCQkJY29tcGF0aWJsZSA9ICJudmlkaWEsdGVncmEyMTAteHVz
YmMtcGQiOwoJCQkjcG93ZXItZG9tYWluLWNlbGxzID0gPDB4MD47CgkJCXBhcnRpdGlvbi1pZCA9
IDwweDE2PjsKCQkJbGludXgscGhhbmRsZSA9IDwweGU2PjsKCQkJcGhhbmRsZSA9IDwweGU2PjsK
CQl9OwoJfTsKCglhY3Rtb25ANjAwMGM4MDAgewoJCXN0YXR1cyA9ICJva2F5IjsKCQkjYWRkcmVz
cy1jZWxscyA9IDwweDI+OwoJCSNzaXplLWNlbGxzID0gPDB4Mj47CgkJY29tcGF0aWJsZSA9ICJu
dmlkaWEsdGVncmEyMTAtY2FjdG1vbiI7CgkJcmVnID0gPDB4MCAweDYwMDBjODAwIDB4MCAweDQw
MD47CgkJaW50ZXJydXB0cyA9IDwweDAgMHgyZCAweDQ+OwoJCWNsb2NrcyA9IDwweDIxIDB4Nzc+
OwoJCWNsb2NrLW5hbWVzID0gImFjdG1vbiI7CgkJcmVzZXRzID0gPDB4MjEgMHg3Nz47CgkJcmVz
ZXQtbmFtZXMgPSAiYWN0bW9uIjsKCQludmlkaWEsc2FtcGxlX3BlcmlvZCA9IFsxNF07CgoJCW1j
X2FsbCB7CgkJCSNhZGRyZXNzLWNlbGxzID0gPDB4MT47CgkJCSNzaXplLWNlbGxzID0gPDB4MD47
CgkJCW52aWRpYSxjb25faWQgPSAibWNfYWxsIjsKCQkJbnZpZGlhLGRldl9pZCA9ICJhY3Rtb24i
OwoJCQludmlkaWEscmVnX29mZnMgPSA8MHgxYzA+OwoJCQludmlkaWEsaXJxX21hc2sgPSA8MHg0
MDAwMDAwPjsKCQkJbnZpZGlhLHN1c3BlbmRfZnJlcSA9IDwweDMyNGIwPjsKCQkJbnZpZGlhLGJv
b3N0X2ZyZXFfc3RlcCA9IDwweDNlODA+OwoJCQludmlkaWEsYm9vc3RfdXBfY29lZiA9IDwweGM4
PjsKCQkJbnZpZGlhLGJvb3N0X2Rvd25fY29lZiA9IDwweDMyPjsKCQkJbnZpZGlhLGJvb3N0X3Vw
X3RocmVzaG9sZCA9IDwweDNjPjsKCQkJbnZpZGlhLGJvb3N0X2Rvd25fdGhyZXNob2xkID0gPDB4
Mjg+OwoJCQludmlkaWEsdXBfd21hcmtfd2luZG93ID0gWzAxXTsKCQkJbnZpZGlhLGRvd25fd21h
cmtfd2luZG93ID0gWzAzXTsKCQkJbnZpZGlhLGF2Z193aW5kb3dfbG9nMiA9IFswN107CgkJCW52
aWRpYSxjb3VudF93ZWlnaHQgPSA8MHg0MDA+OwoJCQludmlkaWEsbWF4X2RyYW1fY2hhbm5lbHMg
PSBbMDJdOwoJCQludmlkaWEsdHlwZSA9IDwweDE+OwoJCQlzdGF0dXMgPSAib2theSI7CgkJfTsK
CX07CgoJYWxpYXNlcyB7CgkJc2RoY2kwID0gIi9zZGhjaUA3MDBiMDAwMCI7CgkJc2RoY2kxID0g
Ii9zZGhjaUA3MDBiMDIwMCI7CgkJc2RoY2kyID0gIi9zZGhjaUA3MDBiMDQwMCI7CgkJc2RoY2kz
ID0gIi9zZGhjaUA3MDBiMDYwMCI7CgkJaTJjMCA9ICIvaTJjQDcwMDBjMDAwIjsKCQlpMmMxID0g
Ii9pMmNANzAwMGM0MDAiOwoJCWkyYzIgPSAiL2kyY0A3MDAwYzUwMCI7CgkJaTJjMyA9ICIvaTJj
QDcwMDBjNzAwIjsKCQlpMmM0ID0gIi9pMmNANzAwMGQwMDAiOwoJCWkyYzUgPSAiL2kyY0A3MDAw
ZDEwMCI7CgkJaTJjNiA9ICIvaG9zdDF4L2kyY0A1NDZjMDAwMCI7CgkJc3BpMCA9ICIvc3BpQDcw
MDBkNDAwIjsKCQlzcGkxID0gIi9zcGlANzAwMGQ2MDAiOwoJCXNwaTIgPSAiL3NwaUA3MDAwZDgw
MCI7CgkJc3BpMyA9ICIvc3BpQDcwMDBkYTAwIjsKCQlxc3BpNiA9ICIvc3BpQDcwNDEwMDAwIjsK
CQlzZXJpYWwwID0gIi9zZXJpYWxANzAwMDYwMDAiOwoJCXNlcmlhbDEgPSAiL3NlcmlhbEA3MDAw
NjA0MCI7CgkJc2VyaWFsMiA9ICIvc2VyaWFsQDcwMDA2MjAwIjsKCQlzZXJpYWwzID0gIi9zZXJp
YWxANzAwMDYzMDAiOwoJCXJ0YzAgPSAiL2kyY0A3MDAwZDAwMC9tYXg3NzYyMEAzYyI7CgkJcnRj
MSA9ICIvcnRjIjsKCX07CgoJY3B1cyB7CgkJI2FkZHJlc3MtY2VsbHMgPSA8MHgyPjsKCQkjc2l6
ZS1jZWxscyA9IDwweDA+OwoJCXN0YXR1cyA9ICJva2F5IjsKCgkJY3B1QDAgewoJCQlkZXZpY2Vf
dHlwZSA9ICJjcHUiOwoJCQljb21wYXRpYmxlID0gImFybSxjb3J0ZXgtYTU3LTY0Yml0IiwgImFy
bSxhcm12OCI7CgkJCXJlZyA9IDwweDAgMHgwPjsKCQkJZW5hYmxlLW1ldGhvZCA9ICJwc2NpIjsK
CQkJY3B1LWlkbGUtc3RhdGVzID0gPDB4MjQ+OwoJCQllcnJhdGFfaHdjYXBzID0gPDB4Nz47CgkJ
CWNwdS1pcGMgPSA8MHg0MDA+OwoJCQluZXh0LWxldmVsLWNhY2hlID0gPDB4MjU+OwoJCQlzdGF0
dXMgPSAib2theSI7CgkJCWNsb2NrcyA9IDwweDIxIDB4MTI2IDB4MjEgMHgxMjcgMHgyMSAweDEw
MyAweDIxIDB4ZjcgMHgyNj47CgkJCWNsb2NrLW5hbWVzID0gImNwdV9nIiwgImNwdV9scCIsICJw
bGxfeCIsICJwbGxfcCIsICJkZmxsIjsKCQkJY2xvY2stbGF0ZW5jeSA9IDwweDQ5M2UwPjsKCQkJ
bGludXgscGhhbmRsZSA9IDwweDI3PjsKCQkJcGhhbmRsZSA9IDwweDI3PjsKCQl9OwoKCQljcHVA
MSB7CgkJCWRldmljZV90eXBlID0gImNwdSI7CgkJCWNvbXBhdGlibGUgPSAiYXJtLGNvcnRleC1h
NTctNjRiaXQiLCAiYXJtLGFybXY4IjsKCQkJcmVnID0gPDB4MCAweDE+OwoJCQllbmFibGUtbWV0
aG9kID0gInBzY2kiOwoJCQljcHUtaWRsZS1zdGF0ZXMgPSA8MHgyND47CgkJCWVycmF0YV9od2Nh
cHMgPSA8MHg3PjsKCQkJY3B1LWlwYyA9IDwweDQwMD47CgkJCW5leHQtbGV2ZWwtY2FjaGUgPSA8
MHgyNT47CgkJCXN0YXR1cyA9ICJva2F5IjsKCQkJbGludXgscGhhbmRsZSA9IDwweDI4PjsKCQkJ
cGhhbmRsZSA9IDwweDI4PjsKCQl9OwoKCQljcHVAMiB7CgkJCWRldmljZV90eXBlID0gImNwdSI7
CgkJCWNvbXBhdGlibGUgPSAiYXJtLGNvcnRleC1hNTctNjRiaXQiLCAiYXJtLGFybXY4IjsKCQkJ
cmVnID0gPDB4MCAweDI+OwoJCQllbmFibGUtbWV0aG9kID0gInBzY2kiOwoJCQljcHUtaWRsZS1z
dGF0ZXMgPSA8MHgyND47CgkJCWVycmF0YV9od2NhcHMgPSA8MHg3PjsKCQkJY3B1LWlwYyA9IDww
eDQwMD47CgkJCW5leHQtbGV2ZWwtY2FjaGUgPSA8MHgyNT47CgkJCXN0YXR1cyA9ICJva2F5IjsK
CQkJbGludXgscGhhbmRsZSA9IDwweDI5PjsKCQkJcGhhbmRsZSA9IDwweDI5PjsKCQl9OwoKCQlj
cHVAMyB7CgkJCWRldmljZV90eXBlID0gImNwdSI7CgkJCWNvbXBhdGlibGUgPSAiYXJtLGNvcnRl
eC1hNTctNjRiaXQiLCAiYXJtLGFybXY4IjsKCQkJcmVnID0gPDB4MCAweDM+OwoJCQllbmFibGUt
bWV0aG9kID0gInBzY2kiOwoJCQljcHUtaWRsZS1zdGF0ZXMgPSA8MHgyND47CgkJCWVycmF0YV9o
d2NhcHMgPSA8MHg3PjsKCQkJY3B1LWlwYyA9IDwweDQwMD47CgkJCW5leHQtbGV2ZWwtY2FjaGUg
PSA8MHgyNT47CgkJCXN0YXR1cyA9ICJva2F5IjsKCQkJbGludXgscGhhbmRsZSA9IDwweDJhPjsK
CQkJcGhhbmRsZSA9IDwweDJhPjsKCQl9OwoKCQlpZGxlLXN0YXRlcyB7CgkJCWVudHJ5LW1ldGhv
ZCA9ICJwc2NpIjsKCgkJCWM3IHsKCQkJCWNvbXBhdGlibGUgPSAiYXJtLGlkbGUtc3RhdGUiOwoJ
CQkJYXJtLHBzY2ktc3VzcGVuZC1wYXJhbSA9IDwweDQwMDAwMDA3PjsKCQkJCXdha2V1cC1sYXRl
bmN5LXVzID0gPDB4ODI+OwoJCQkJbWluLXJlc2lkZW5jeS11cyA9IDwweDNlOD47CgkJCQlpZGxl
LXN0YXRlLW5hbWUgPSAiYzctY3B1LXBvd2VyZ2F0ZWQiOwoJCQkJc3RhdHVzID0gIm9rYXkiOwoJ
CQkJbGludXgscGhhbmRsZSA9IDwweDI0PjsKCQkJCXBoYW5kbGUgPSA8MHgyND47CgkJCX07CgoJ
CQljYzYgewoJCQkJY29tcGF0aWJsZSA9ICJhcm0saWRsZS1zdGF0ZSI7CgkJCQlhcm0scHNjaS1z
dXNwZW5kLXBhcmFtID0gPDB4NDAwMDAwMTA+OwoJCQkJd2FrZXVwLWxhdGVuY3ktdXMgPSA8MHhl
Nj47CgkJCQltaW4tcmVzaWRlbmN5LXVzID0gPDB4MjcxMD47CgkJCQlpZGxlLXN0YXRlLW5hbWUg
PSAiY2M2LWNsdXN0ZXItcG93ZXJnYXRlZCI7CgkJCQlzdGF0dXMgPSAib2theSI7CgkJCQlsaW51
eCxwaGFuZGxlID0gPDB4ZTc+OwoJCQkJcGhhbmRsZSA9IDwweGU3PjsKCQkJfTsKCQl9OwoKCQls
Mi1jYWNoZSB7CgkJCWNvbXBhdGlibGUgPSAiY2FjaGUiOwoJCQlsaW51eCxwaGFuZGxlID0gPDB4
MjU+OwoJCQlwaGFuZGxlID0gPDB4MjU+OwoJCX07Cgl9OwoKCXBzY2kgewoJCWNvbXBhdGlibGUg
PSAiYXJtLHBzY2ktMS4wIjsKCQlzdGF0dXMgPSAib2theSI7CgkJbWV0aG9kID0gInNtYyI7Cgl9
OwoKCXRsayB7CgkJY29tcGF0aWJsZSA9ICJhbmRyb2lkLHRsay1kcml2ZXIiOwoJCXN0YXR1cyA9
ICJkaXNhYmxlZCI7CgoJCWxvZyB7CgkJCWNvbXBhdGlibGUgPSAiYW5kcm9pZCxvdGUtbG9nZ2Vy
IjsKCQl9OwoJfTsKCglhcm0tcG11IHsKCQljb21wYXRpYmxlID0gImFybSxhcm12OC1wbXV2MyI7
CgkJc3RhdHVzID0gIm9rYXkiOwoJCWludGVycnVwdHMgPSA8MHgwIDB4OTAgMHg0IDB4MCAweDkx
IDB4NCAweDAgMHg5MiAweDQgMHgwIDB4OTMgMHg0PjsKCQlpbnRlcnJ1cHQtYWZmaW5pdHkgPSA8
MHgyNyAweDI4IDB4MjkgMHgyYT47Cgl9OwoKCWNsb2NrIHsKCQljb21wYXRpYmxlID0gIm52aWRp
YSx0ZWdyYTIxMC1jYXIiOwoJCXJlZyA9IDwweDAgMHg2MDAwNjAwMCAweDAgMHgxMDAwPjsKCQkj
Y2xvY2stY2VsbHMgPSA8MHgxPjsKCQkjcmVzZXQtY2VsbHMgPSA8MHgxPjsKCQlzdGF0dXMgPSAi
b2theSI7CgkJbGludXgscGhhbmRsZSA9IDwweDIxPjsKCQlwaGFuZGxlID0gPDB4MjE+OwoJfTsK
Cglid21nciB7CgkJY29tcGF0aWJsZSA9ICJudmlkaWEsYndtZ3IiOwoJCWNsb2NrcyA9IDwweDIx
IDB4MjEyPjsKCQludmlkaWEsYndtZ3ItdXNlLXNoYXJlZC1tYXN0ZXI7CgkJY2xvY2stbmFtZXMg
PSAiZW1jIjsKCQlzdGF0dXMgPSAib2theSI7Cgl9OwoKCXJlc2VydmVkLW1lbW9yeSB7CgkJI2Fk
ZHJlc3MtY2VsbHMgPSA8MHgyPjsKCQkjc2l6ZS1jZWxscyA9IDwweDI+OwoJCXJhbmdlczsKCgkJ
aXJhbS1jYXJ2ZW91dCB7CgkJCWNvbXBhdGlibGUgPSAibnZpZGlhLGlyYW0tY2FydmVvdXQiOwoJ
CQlyZWcgPSA8MHgwIDB4NDAwMDEwMDAgMHgwIDB4M2YwMDA+OwoJCQluby1tYXA7CgkJCWxpbnV4
LHBoYW5kbGUgPSA8MHgyZD47CgkJCXBoYW5kbGUgPSA8MHgyZD47CgkJfTsKCgkJcmFtb29wc19j
YXJ2ZW91dCB7CgkJCWNvbXBhdGlibGUgPSAibnZpZGlhLHJhbW9vcHMiOwoJCQlyZWcgPSA8MHgw
IDB4YjAwMDAwMDAgMHgwIDB4MjAwMDAwPjsKCQkJbm8tbWFwOwoJCQlsaW51eCxwaGFuZGxlID0g
PDB4ZTg+OwoJCQlwaGFuZGxlID0gPDB4ZTg+OwoJCX07CgoJCXZwci1jYXJ2ZW91dCB7CgkJCWNv
bXBhdGlibGUgPSAibnZpZGlhLHZwci1jYXJ2ZW91dCI7CgkJCXNpemUgPSA8MHgwIDB4MTkwMDAw
MDA+OwoJCQlhbGlnbm1lbnQgPSA8MHgwIDB4NDAwMDAwPjsKCQkJYWxsb2MtcmFuZ2VzID0gPDB4
MCAweDgwMDAwMDAwIDB4MCAweDcwMDAwMDAwPjsKCQkJcmV1c2FibGU7CgkJCWxpbnV4LHBoYW5k
bGUgPSA8MHgyYz47CgkJCXBoYW5kbGUgPSA8MHgyYz47CgkJfTsKCgkJZmIwX2NhcnZlb3V0IHsK
CQkJcmVnID0gPDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDA+OwoJCQlyZWctbmFtZXMg
PSAic3VyZmFjZSIsICJsdXQiOwoJCQluby1tYXA7CgkJCWxpbnV4LHBoYW5kbGUgPSA8MHg1ZD47
CgkJCXBoYW5kbGUgPSA8MHg1ZD47CgkJfTsKCgkJZmIxX2NhcnZlb3V0IHsKCQkJcmVnID0gPDB4
MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDA+OwoJCQlyZWctbmFtZXMgPSAic3VyZmFjZSIs
ICJsdXQiOwoJCQluby1tYXA7CgkJCWxpbnV4LHBoYW5kbGUgPSA8MHg2Nj47CgkJCXBoYW5kbGUg
PSA8MHg2Nj47CgkJfTsKCgl9OwoKCXRlZ3JhLWNhcnZlb3V0cyB7CgkJY29tcGF0aWJsZSA9ICJu
dmlkaWEsY2FydmVvdXRzIjsKCQlpb21tdXMgPSA8MHgyYiAweDY+OwoJCW1lbW9yeS1yZWdpb24g
PSA8MHgyYyAweDJkPjsKCQlzdGF0dXMgPSAib2theSI7Cgl9OwoKCWlvbW11IHsKCQljb21wYXRp
YmxlID0gIm52aWRpYSx0ZWdyYTIxMC1zbW11IjsKCQlyZWcgPSA8MHgwIDB4NzAwMTkwMDAgMHgw
IDB4MTAwMCAweDAgMHg2MDAwYzAwMCAweDAgMHgxMDAwPjsKCQlzdGF0dXMgPSAib2theSI7CgkJ
I2FzaWRzID0gPDB4ODA+OwoJCWRtYS13aW5kb3cgPSA8MHgwIDB4ODAwMDAwMDAgMHgwIDB4N2Zm
MDAwMDA+OwoJCSNpb21tdS1jZWxscyA9IDwweDE+OwoJCXN3Z2lkLW1hc2sgPSA8MHgxMDBmZmYg
MHhmZmZjY2RjZj47CgkJI251bS10cmFuc2xhdGlvbi1lbmFibGUgPSA8MHg1PjsKCQkjbnVtLWFz
aWQtc2VjdXJpdHkgPSA8MHg4PjsKCQlkb21haW5zID0gPDB4MmUgMHgxMDA0MDAwIDB4NDkgMHgy
ZiAweDgwMDAwMDAwIDB4MCAweDMwIDB4MCAweDQgMHgzMSAweDQwNCAweDAgMHgzMSAweDggMHgw
IDB4MzIgMHgxIDB4MCAweDMyIDB4MjAwMDAwMCAweDAgMHgzMiAweDQwMDAwMDAgMHgwIDB4MzIg
MHg4MDAwMDAwIDB4MCAweDMyIDB4MTAwMDAwMDAgMHgwIDB4MzIgMHgyIDB4MCAweDMyIDB4MCAw
eDEwMDAwMCAweDMyIDB4ZmZmZmZmZmYgMHhmZmZmZmZmZj47CgkJbGludXgscGhhbmRsZSA9IDww
eDJiPjsKCQlwaGFuZGxlID0gPDB4MmI+OwoKCQlhZGRyZXNzLXNwYWNlLXByb3AgewoKCQkJY29t
bW9uIHsKCQkJCWlvdmEtc3RhcnQgPSA8MHgwIDB4ODAwMDAwMDA+OwoJCQkJaW92YS1zaXplID0g
PDB4MCAweDdmZjAwMDAwPjsKCQkJCW51bS1wZi1wYWdlID0gPDB4MD47CgkJCQlnYXAtcGFnZSA9
IDwweDE+OwoJCQkJbGludXgscGhhbmRsZSA9IDwweDMyPjsKCQkJCXBoYW5kbGUgPSA8MHgzMj47
CgkJCX07CgoJCQlwcGNzIHsKCQkJCWlvdmEtc3RhcnQgPSA8MHgwIDB4ODAwMDAwMDA+OwoJCQkJ
aW92YS1zaXplID0gPDB4MCAweDdmZjAwMDAwPjsKCQkJCW51bS1wZi1wYWdlID0gPDB4MT47CgkJ
CQlnYXAtcGFnZSA9IDwweDE+OwoJCQkJbGludXgscGhhbmRsZSA9IDwweDJlPjsKCQkJCXBoYW5k
bGUgPSA8MHgyZT47CgkJCX07CgoJCQlkYyB7CgkJCQlpb3ZhLXN0YXJ0ID0gPDB4MCAweDEwMDAw
PjsKCQkJCWlvdmEtc2l6ZSA9IDwweDAgMHhmZmZlZmZmZj47CgkJCQludW0tcGYtcGFnZSA9IDww
eDA+OwoJCQkJZ2FwLXBhZ2UgPSA8MHgwPjsKCQkJCWxpbnV4LHBoYW5kbGUgPSA8MHgzMT47CgkJ
CQlwaGFuZGxlID0gPDB4MzE+OwoJCQl9OwoKCQkJZ3B1IHsKCQkJCWlvdmEtc3RhcnQgPSA8MHgw
IDB4MTAwMDAwPjsKCQkJCWlvdmEtc2l6ZSA9IDwweDMgMHhmZmVmZmZmZj47CgkJCQlhbGlnbm1l
bnQgPSA8MHgyMDAwMD47CgkJCQludW0tcGYtcGFnZSA9IDwweDA+OwoJCQkJZ2FwLXBhZ2UgPSA8
MHgwPjsKCQkJCWxpbnV4LHBoYW5kbGUgPSA8MHgyZj47CgkJCQlwaGFuZGxlID0gPDB4MmY+OwoJ
CQl9OwoKCQkJYXBlIHsKCQkJCWlvdmEtc3RhcnQgPSA8MHgwIDB4NzAzMDAwMDA+OwoJCQkJaW92
YS1zaXplID0gPDB4MCAweDhmYzAwMDAwPjsKCQkJCW51bS1wZi1wYWdlID0gPDB4MD47CgkJCQln
YXAtcGFnZSA9IDwweDE+OwoJCQkJbGludXgscGhhbmRsZSA9IDwweDMwPjsKCQkJCXBoYW5kbGUg
PSA8MHgzMD47CgkJCX07CgkJfTsKCX07CgoJc21tdV90ZXN0IHsKCQljb21wYXRpYmxlID0gIm52
aWRpYSxzbW11X3Rlc3QiOwoJCWlvbW11cyA9IDwweDJiIDB4MzQ+OwoJCWxpbnV4LHBoYW5kbGUg
PSA8MHhlOT47CgkJcGhhbmRsZSA9IDwweGU5PjsKCX07CgoJZG1hX3Rlc3QgewoJCWNvbXBhdGli
bGUgPSAibnZpZGlhLGRtYV90ZXN0IjsKCQlsaW51eCxwaGFuZGxlID0gPDB4ZWE+OwoJCXBoYW5k
bGUgPSA8MHhlYT47Cgl9OwoKCWJwbXAgewoJCWNvbXBhdGlibGUgPSAibnZpZGlhLHRlZ3JhMjEw
LWJwbXAiOwoJCWNhcnZlb3V0LXN0YXJ0ID0gPDB4ODAwMDUwMDA+OwoJCWNhcnZlb3V0LXNpemUg
PSA8MHgxMDAwMD47CgkJcmVzZXRzID0gPDB4MjEgMHgxPjsKCQlyZXNldC1uYW1lcyA9ICJjb3Ai
OwoJCWNsb2NrcyA9IDwweDIxIDB4MWFlPjsKCQljbG9jay1uYW1lcyA9ICJzY2xrIjsKCQlyZWcg
PSA8MHgwIDB4NzAwMTYwMDAgMHgwIDB4MjAwMCAweDAgMHg2MDAwMTAwMCAweDAgMHgxMDAwPjsK
CQlpb21tdXMgPSA8MHgyYiAweDE+OwoJCXN0YXR1cyA9ICJkaXNhYmxlZCI7Cgl9OwoKCW1jIHsK
CQljb21wYXRpYmxlID0gIm52aWRpYSx0ZWdyYS1tYyI7CgkJcmVnLXJhbmdlcyA9IDwweGE+OwoJ
CXJlZyA9IDwweDAgMHg3MDAxOTAwMCAweDAgMHhjIDB4MCAweDcwMDE5MDUwIDB4MCAweDE5YyAw
eDAgMHg3MDAxOTIwMCAweDAgMHgyNCAweDAgMHg3MDAxOTI5YyAweDAgMHgxYjggMHgwIDB4NzAw
MTk0NjQgMHgwIDB4MTk4IDB4MCAweDcwMDE5NjA0IDB4MCAweDNiMCAweDAgMHg3MDAxOTliYyAw
eDAgMHgyMCAweDAgMHg3MDAxOTlmOCAweDAgMHg4YyAweDAgMHg3MDAxOWFlNCAweDAgMHhiMCAw
eDAgMHg3MDAxOWJhMCAweDAgMHg0NjAgMHgwIDB4NzAwMWMwMDAgMHgwIDB4YyAweDAgMHg3MDAx
YzA1MCAweDAgMHgxOTggMHgwIDB4NzAwMWMyMDAgMHgwIDB4MjQgMHgwIDB4NzAwMWMyOWMgMHgw
IDB4MWI4IDB4MCAweDcwMDFjNDY0IDB4MCAweDE5OCAweDAgMHg3MDAxYzYwNCAweDAgMHgzYjAg
MHgwIDB4NzAwMWM5YmMgMHgwIDB4MjAgMHgwIDB4NzAwMWM5ZjggMHgwIDB4OGMgMHgwIDB4NzAw
MWNhZTQgMHgwIDB4YjAgMHgwIDB4NzAwMWNiYTAgMHgwIDB4NDYwIDB4MCAweDcwMDFkMDAwIDB4
MCAweGMgMHgwIDB4NzAwMWQwNTAgMHgwIDB4MTk4IDB4MCAweDcwMDFkMjAwIDB4MCAweDI0IDB4
MCAweDcwMDFkMjljIDB4MCAweDFiOCAweDAgMHg3MDAxZDQ2NCAweDAgMHgxOTggMHgwIDB4NzAw
MWQ2MDQgMHgwIDB4M2IwIDB4MCAweDcwMDFkOWJjIDB4MCAweDIwIDB4MCAweDcwMDFkOWY4IDB4
MCAweDhjIDB4MCAweDcwMDFkYWU0IDB4MCAweGIwIDB4MCAweDcwMDFkYmEwIDB4MCAweDQ2MD47
CgkJaW50ZXJydXB0cyA9IDwweDAgMHg0ZCAweDQ+OwoJCWludF9tYXNrID0gPDB4MjNkNDA+OwoJ
CWNoYW5uZWxzID0gPDB4Mj47CgkJc3RhdHVzID0gIm9rYXkiOwoJfTsKCglpbnRlcnJ1cHQtY29u
dHJvbGxlciB7CgkJY29tcGF0aWJsZSA9ICJhcm0sY29ydGV4LWExNS1naWMiOwoJCWludGVycnVw
dC1wYXJlbnQgPSA8MHgzMz47CgkJI2ludGVycnVwdC1jZWxscyA9IDwweDM+OwoJCWludGVycnVw
dC1jb250cm9sbGVyOwoJCXJlZyA9IDwweDAgMHg1MDA0MTAwMCAweDAgMHgxMDAwIDB4MCAweDUw
MDQyMDAwIDB4MCAweDIwMDAgMHgwIDB4NTAwNDQwMDAgMHgwIDB4MjAwMCAweDAgMHg1MDA0NjAw
MCAweDAgMHgyMDAwPjsKCQlzdGF0dXMgPSAib2theSI7CgkJaW50ZXJydXB0cyA9IDwweDEgMHg5
IDB4ZjA0PjsKCQlsaW51eCxwaGFuZGxlID0gPDB4MzM+OwoJCXBoYW5kbGUgPSA8MHgzMz47Cgl9
OwoKCWludGVycnVwdC1jb250cm9sbGVyQDYwMDA0MDAwIHsKCQljb21wYXRpYmxlID0gIm52aWRp
YSx0ZWdyYTIxMC1pY3RsciI7CgkJaW50ZXJydXB0LXBhcmVudCA9IDwweDMzPjsKCQlpbnRlcnJ1
cHQtY29udHJvbGxlcjsKCQkjaW50ZXJydXB0LWNlbGxzID0gPDB4Mz47CgkJcmVnID0gPDB4MCAw
eDYwMDA0MDAwIDB4MCAweDQwIDB4MCAweDYwMDA0MTAwIDB4MCAweDQwIDB4MCAweDYwMDA0MjAw
IDB4MCAweDQwIDB4MCAweDYwMDA0MzAwIDB4MCAweDQwIDB4MCAweDYwMDA0NDAwIDB4MCAweDQw
IDB4MCAweDYwMDA0NTAwIDB4MCAweDQwPjsKCQlpbnRlcnJ1cHRzID0gPDB4MCAweDQgMHg0IDB4
MCAweDUgMHg0IDB4MCAweDcgMHg0IDB4MCAweDEyIDB4ND47CgkJb3V0Z29pbmctZG9vcmJlbGwg
PSA8MHg2PjsKCQlzdGF0dXMgPSAib2theSI7CgkJbGludXgscGhhbmRsZSA9IDwweDE+OwoJCXBo
YW5kbGUgPSA8MHgxPjsKCX07CgoJZmxvdy1jb250cm9sbGVyQDYwMDA3MDAwIHsKCQljb21wYXRp
YmxlID0gIm52aWRpYSx0ZWdyYTIxMC1mbG93Y3RybCI7CgkJcmVnID0gPDB4MCAweDYwMDA3MDAw
IDB4MCAweDEwMDA+OwoJfTsKCglhaGJANjAwMGMwMDAgewoJCWNvbXBhdGlibGUgPSAibnZpZGlh
LHRlZ3JhMjEwLWFoYiIsICJudmlkaWEsdGVncmEzMC1haGIiOwoJCXJlZyA9IDwweDAgMHg2MDAw
YzAwMCAweDAgMHgxNGY+OwoJCXN0YXR1cyA9ICJva2F5IjsKCQlsaW51eCxwaGFuZGxlID0gPDB4
ZWI+OwoJCXBoYW5kbGUgPSA8MHhlYj47Cgl9OwoKCWFjb25uZWN0QDcwMmMwMDAwIHsKCQljb21w
YXRpYmxlID0gIm52aWRpYSx0ZWdyYTIxMC1hY29ubmVjdCI7CgkJY2xvY2tzID0gPDB4MjEgMHhj
NiAweDIxIDB4NmI+OwoJCWNsb2NrLW5hbWVzID0gImFwZSIsICJhcGIyYXBlIjsKCQlwb3dlci1k
b21haW5zID0gPDB4MjI+OwoJCSNhZGRyZXNzLWNlbGxzID0gPDB4Mj47CgkJI3NpemUtY2VsbHMg
PSA8MHgyPjsKCQlyYW5nZXM7CgkJc3RhdHVzID0gIm9rYXkiOwoKCQlhZ2ljQDcwMmY5MDAwIHsK
CQkJY29tcGF0aWJsZSA9ICJudmlkaWEsdGVncmEyMTAtYWdpYyI7CgkJCSNpbnRlcnJ1cHQtY2Vs
bHMgPSA8MHg0PjsKCQkJaW50ZXJydXB0LWNvbnRyb2xsZXI7CgkJCXJlZyA9IDwweDAgMHg3MDJm
OTAwMCAweDAgMHgyMDAwIDB4MCAweDcwMmZhMDAwIDB4MCAweDIwMDA+OwoJCQlpbnRlcnJ1cHRz
ID0gPDB4MCAweDY2IDB4ZjA0PjsKCQkJY2xvY2tzID0gPDB4MjEgMHhjNj47CgkJCWNsb2NrLW5h
bWVzID0gImNsayI7CgkJCWxpbnV4LHBoYW5kbGUgPSA8MHgzND47CgkJCXBoYW5kbGUgPSA8MHgz
ND47CgkJfTsKCgkJYWRzcCB7CgkJCWNvbXBhdGlibGUgPSAibnZpZGlhLHRlZ3JhMjEwLWFkc3Ai
OwoJCQl3YWtldXAtZGlzYWJsZTsKCQkJaW50ZXJydXB0LXBhcmVudCA9IDwweDM0PjsKCQkJcmVn
ID0gPDB4MCAweDcwMmVmMDAwIDB4MCAweDEwMDAgMHgwIDB4NzAyZWMwMDAgMHgwIDB4MjAwMCAw
eDAgMHg3MDJlZTAwMCAweDAgMHgxMDAwIDB4MCAweDcwMmRjODAwIDB4MCAweDAgMHgwIDB4MCAw
eDAgMHgxIDB4MCAweDEwMDAwMDAgMHgwIDB4NmYyYzAwMDAgMHgwIDB4NzAzMDAwMDAgMHgwIDB4
OGZkMDAwMDA+OwoJCQlpb21tdXMgPSA8MHgyYiAweDIyPjsKCQkJZG1hLW1hc2sgPSA8MHgwIDB4
ZmZmMDAwMDA+OwoJCQlpb21tdS1yZXN2LXJlZ2lvbnMgPSA8MHgwIDB4MCAweDAgMHg3MDMwMDAw
MCAweDAgMHhmZmYwMDAwMCAweGZmZmZmZmZmIDB4ZmZmZmZmZmY+OwoJCQlpb21tdS1ncm91cC1p
ZCA9IDwweDI+OwoJCQludmlkaWEsYWRzcF9tZW0gPSA8MHg4MDMwMDAwMCAweDEwMDAwMDAgMHg4
MGIwMDAwMCAweDgwMDAwMCAweDQwMDAwMCAweDEwMDAwIDB4ODAzMDAwMDAgMHgyMDAwMDA+OwoJ
CQludmlkaWEsYWRzcC1ldnAtYmFzZSA9IDwweDcwMmVmNzAwIDB4NDA+OwoJCQlpbnRlcnJ1cHRz
ID0gPDB4MCAweDUgMHg0IDB4MCAweDAgMHgwIDB4NCAweDAgMHgwIDB4MmYgMHg0IDB4MCAweDAg
MHgzNCAweDQgMHgwIDB4MCAweDMyIDB4NCAweDAgMHgwIDB4MzcgMHg0IDB4MCAweDAgMHg0IDB4
NCAweDEgMHgwIDB4MSAweDQgMHgxIDB4MCAweDIgMHg0IDB4MT47CgkJCWNsb2NrcyA9IDwweDIx
IDB4MjAwIDB4MjEgMHg2YiAweDIxIDB4ZGEgMHgyMSAweGM3IDB4MjEgMHgyMDU+OwoJCQljbG9j
ay1uYW1lcyA9ICJhZHNwLmFwZSIsICJhZHNwLmFwYjJhcGUiLCAiYWRzcG5lb24iLCAiYWRzcCIs
ICJhZHNwX2NwdV9hYnVzIjsKCQkJcmVzZXRzID0gPDB4MjEgMHhlMT47CgkJCXJlc2V0LW5hbWVz
ID0gImFkc3BhbGwiOwoJCQludmlkaWEsYWRzcF91bml0X2ZwZ2FfcmVzZXQgPSA8MHgwIDB4NDA+
OwoJCQlzdGF0dXMgPSAib2theSI7CgkJfTsKCgkJYWRtYUA3MDJlMjAwMCB7CgkJCWNvbXBhdGli
bGUgPSAibnZpZGlhLHRlZ3JhMjEwLWFkbWEiOwoJCQlpbnRlcnJ1cHQtcGFyZW50ID0gPDB4MzQ+
OwoJCQlyZWcgPSA8MHgwIDB4NzAyZTIwMDAgMHgwIDB4MjAwMCAweDAgMHg3MDJlYzAwMCAweDAg
MHg3Mj47CgkJCWNsb2NrcyA9IDwweDIxIDB4NmE+OwoJCQljbG9jay1uYW1lcyA9ICJkX2F1ZGlv
IjsKCQkJaW50ZXJydXB0cyA9IDwweDAgMHgxOCAweDQgMHgwIDB4MCAweDE5IDB4NCAweDAgMHgw
IDB4MWEgMHg0IDB4MCAweDAgMHgxYiAweDQgMHgwIDB4MCAweDFjIDB4NCAweDAgMHgwIDB4MWQg
MHg0IDB4MCAweDAgMHgxZSAweDQgMHgwIDB4MCAweDFmIDB4NCAweDAgMHgwIDB4MjAgMHg0IDB4
MCAweDAgMHgyMSAweDQgMHgwIDB4MCAweDIyIDB4NCAweDAgMHgwIDB4MjMgMHg0IDB4MCAweDAg
MHgyNCAweDQgMHgwIDB4MCAweDI1IDB4NCAweDAgMHgwIDB4MjYgMHg0IDB4MCAweDAgMHgyNyAw
eDQgMHgwIDB4MCAweDI4IDB4NCAweDAgMHgwIDB4MjkgMHg0IDB4MCAweDAgMHgyYSAweDQgMHgw
IDB4MCAweDJiIDB4NCAweDAgMHgwIDB4MmMgMHg0IDB4MCAweDAgMHgyZCAweDQgMHgwPjsKCQkJ
I2RtYS1jZWxscyA9IDwweDE+OwoJCQlzdGF0dXMgPSAib2theSI7CgkJCWxpbnV4LHBoYW5kbGUg
PSA8MHgzNT47CgkJCXBoYW5kbGUgPSA8MHgzNT47CgkJfTsKCgkJYWh1YiB7CgkJCWNvbXBhdGli
bGUgPSAibnZpZGlhLHRlZ3JhMjEwLWF4YmFyIjsKCQkJd2FrZXVwLWRpc2FibGU7CgkJCXJlZyA9
IDwweDAgMHg3MDJkMDgwMCAweDAgMHg4MDA+OwoJCQljbG9ja3MgPSA8MHgyMSAweDZhIDB4MjEg
MHhmOSAweDIxIDB4YzYgMHgyMSAweDZiPjsKCQkJY2xvY2stbmFtZXMgPSAiYWh1YiIsICJwYXJl
bnQiLCAieGJhci5hcGUiLCAiYXBiMmFwZSI7CgkJCWFzc2lnbmVkLWNsb2NrcyA9IDwweDIxIDB4
NmE+OwoJCQlhc3NpZ25lZC1jbG9jay1wYXJlbnRzID0gPDB4MjEgMHhmMz47CgkJCWFzc2lnbmVk
LWNsb2NrLXJhdGVzID0gPDB4NGRkMWUwMD47CgkJCXN0YXR1cyA9ICJva2F5IjsKCQkJI2FkZHJl
c3MtY2VsbHMgPSA8MHgxPjsKCQkJI3NpemUtY2VsbHMgPSA8MHgxPjsKCQkJcmFuZ2VzID0gPDB4
NzAyZDAwMDAgMHgwIDB4NzAyZDAwMDAgMHgxMDAwMD47CgkJCWxpbnV4LHBoYW5kbGUgPSA8MHg0
ZD47CgkJCXBoYW5kbGUgPSA8MHg0ZD47CgoJCQlhZG1haWZAMHg3MDJkMDAwMCB7CgkJCQljb21w
YXRpYmxlID0gIm52aWRpYSx0ZWdyYTIxMC1hZG1haWYiOwoJCQkJcmVnID0gPDB4NzAyZDAwMDAg
MHg4MDA+OwoJCQkJZG1hcyA9IDwweDM1IDB4MSAweDM1IDB4MSAweDM1IDB4MiAweDM1IDB4MiAw
eDM1IDB4MyAweDM1IDB4MyAweDM1IDB4NCAweDM1IDB4NCAweDM1IDB4NSAweDM1IDB4NSAweDM1
IDB4NiAweDM1IDB4NiAweDM1IDB4NyAweDM1IDB4NyAweDM1IDB4OCAweDM1IDB4OCAweDM1IDB4
OSAweDM1IDB4OSAweDM1IDB4YSAweDM1IDB4YT47CgkJCQlkbWEtbmFtZXMgPSAicngxIiwgInR4
MSIsICJyeDIiLCAidHgyIiwgInJ4MyIsICJ0eDMiLCAicng0IiwgInR4NCIsICJyeDUiLCAidHg1
IiwgInJ4NiIsICJ0eDYiLCAicng3IiwgInR4NyIsICJyeDgiLCAidHg4IiwgInJ4OSIsICJ0eDki
LCAicngxMCIsICJ0eDEwIjsKCQkJCXN0YXR1cyA9ICJva2F5IjsKCQkJCWxpbnV4LHBoYW5kbGUg
PSA8MHhlYz47CgkJCQlwaGFuZGxlID0gPDB4ZWM+OwoJCQl9OwoKCQkJc2ZjQDcwMmQyMDAwIHsK
CQkJCWNvbXBhdGlibGUgPSAibnZpZGlhLHRlZ3JhMjEwLXNmYyI7CgkJCQlyZWcgPSA8MHg3MDJk
MjAwMCAweDIwMD47CgkJCQludmlkaWEsYWh1Yi1zZmMtaWQgPSA8MHgwPjsKCQkJCXN0YXR1cyA9
ICJva2F5IjsKCQkJCWxpbnV4LHBoYW5kbGUgPSA8MHhlZD47CgkJCQlwaGFuZGxlID0gPDB4ZWQ+
OwoJCQl9OwoKCQkJc2ZjQDcwMmQyMjAwIHsKCQkJCWNvbXBhdGlibGUgPSAibnZpZGlhLHRlZ3Jh
MjEwLXNmYyI7CgkJCQlyZWcgPSA8MHg3MDJkMjIwMCAweDIwMD47CgkJCQludmlkaWEsYWh1Yi1z
ZmMtaWQgPSA8MHgxPjsKCQkJCXN0YXR1cyA9ICJva2F5IjsKCQkJCWxpbnV4LHBoYW5kbGUgPSA8
MHhlZT47CgkJCQlwaGFuZGxlID0gPDB4ZWU+OwoJCQl9OwoKCQkJc2ZjQDcwMmQyNDAwIHsKCQkJ
CWNvbXBhdGlibGUgPSAibnZpZGlhLHRlZ3JhMjEwLXNmYyI7CgkJCQlyZWcgPSA8MHg3MDJkMjQw
MCAweDIwMD47CgkJCQludmlkaWEsYWh1Yi1zZmMtaWQgPSA8MHgyPjsKCQkJCXN0YXR1cyA9ICJv
a2F5IjsKCQkJCWxpbnV4LHBoYW5kbGUgPSA8MHhlZj47CgkJCQlwaGFuZGxlID0gPDB4ZWY+OwoJ
CQl9OwoKCQkJc2ZjQDcwMmQyNjAwIHsKCQkJCWNvbXBhdGlibGUgPSAibnZpZGlhLHRlZ3JhMjEw
LXNmYyI7CgkJCQlyZWcgPSA8MHg3MDJkMjYwMCAweDIwMD47CgkJCQludmlkaWEsYWh1Yi1zZmMt
aWQgPSA8MHgzPjsKCQkJCXN0YXR1cyA9ICJva2F5IjsKCQkJCWxpbnV4LHBoYW5kbGUgPSA8MHhm
MD47CgkJCQlwaGFuZGxlID0gPDB4ZjA+OwoJCQl9OwoKCQkJc3BrcHJvdEA3MDJkOGMwMCB7CgkJ
CQljb21wYXRpYmxlID0gIm52aWRpYSx0ZWdyYTIxMC1zcGtwcm90IjsKCQkJCXJlZyA9IDwweDcw
MmQ4YzAwIDB4NDAwPjsKCQkJCW52aWRpYSxhaHViLXNwa3Byb3QtaWQgPSA8MHgwPjsKCQkJCXN0
YXR1cyA9ICJva2F5IjsKCQkJfTsKCgkJCWFtaXhlckA3MDJkYmIwMCB7CgkJCQljb21wYXRpYmxl
ID0gIm52aWRpYSx0ZWdyYTIxMC1hbWl4ZXIiOwoJCQkJcmVnID0gPDB4NzAyZGJiMDAgMHg4MDA+
OwoJCQkJbnZpZGlhLGFodWItYW1peGVyLWlkID0gPDB4MD47CgkJCQlzdGF0dXMgPSAib2theSI7
CgkJCQlsaW51eCxwaGFuZGxlID0gPDB4ZjE+OwoJCQkJcGhhbmRsZSA9IDwweGYxPjsKCQkJfTsK
CgkJCWkyc0A3MDJkMTAwMCB7CgkJCQljb21wYXRpYmxlID0gIm52aWRpYSx0ZWdyYTIxMC1pMnMi
OwoJCQkJcmVnID0gPDB4NzAyZDEwMDAgMHgxMDA+OwoJCQkJbnZpZGlhLGFodWItaTJzLWlkID0g
PDB4MD47CgkJCQlzdGF0dXMgPSAiZGlzYWJsZWQiOwoJCQkJY2xvY2tzID0gPDB4MjEgMHgxZSAw
eDIxIDB4ZjkgMHgyMSAweDEwOSAweDIxIDB4MTVlPjsKCQkJCWNsb2NrLW5hbWVzID0gImkycyIs
ICJpMnNfY2xrX3BhcmVudCIsICJleHRfYXVkaW9fc3luYyIsICJhdWRpb19zeW5jIjsKCQkJCWFz
c2lnbmVkLWNsb2NrcyA9IDwweDIxIDB4MWU+OwoJCQkJYXNzaWduZWQtY2xvY2stcGFyZW50cyA9
IDwweDIxIDB4Zjk+OwoJCQkJYXNzaWduZWQtY2xvY2stcmF0ZXMgPSA8MHgxNzcwMDA+OwoJCQkJ
cGluY3RybC1uYW1lcyA9ICJkYXBfYWN0aXZlIiwgImRhcF9pbmFjdGl2ZSI7CgkJCQlwaW5jdHJs
LTA7CgkJCQlwaW5jdHJsLTE7CgkJCQlsaW51eCxwaGFuZGxlID0gPDB4YWU+OwoJCQkJcGhhbmRs
ZSA9IDwweGFlPjsKCQkJfTsKCgkJCWkyc0A3MDJkMTEwMCB7CgkJCQljb21wYXRpYmxlID0gIm52
aWRpYSx0ZWdyYTIxMC1pMnMiOwoJCQkJcmVnID0gPDB4NzAyZDExMDAgMHgxMDA+OwoJCQkJbnZp
ZGlhLGFodWItaTJzLWlkID0gPDB4MT47CgkJCQlzdGF0dXMgPSAiZGlzYWJsZWQiOwoJCQkJY2xv
Y2tzID0gPDB4MjEgMHhiIDB4MjEgMHhmOSAweDIxIDB4MTBhIDB4MjEgMHgxNWY+OwoJCQkJY2xv
Y2stbmFtZXMgPSAiaTJzIiwgImkyc19jbGtfcGFyZW50IiwgImV4dF9hdWRpb19zeW5jIiwgImF1
ZGlvX3N5bmMiOwoJCQkJYXNzaWduZWQtY2xvY2tzID0gPDB4MjEgMHhiPjsKCQkJCWFzc2lnbmVk
LWNsb2NrLXBhcmVudHMgPSA8MHgyMSAweGY5PjsKCQkJCWFzc2lnbmVkLWNsb2NrLXJhdGVzID0g
PDB4MTc3MDAwPjsKCQkJCXBpbmN0cmwtbmFtZXMgPSAiZGFwX2FjdGl2ZSIsICJkYXBfaW5hY3Rp
dmUiOwoJCQkJcGluY3RybC0wOwoJCQkJcGluY3RybC0xOwoJCQkJbGludXgscGhhbmRsZSA9IDww
eGYyPjsKCQkJCXBoYW5kbGUgPSA8MHhmMj47CgkJCX07CgoJCQlpMnNANzAyZDEyMDAgewoJCQkJ
Y29tcGF0aWJsZSA9ICJudmlkaWEsdGVncmEyMTAtaTJzIjsKCQkJCXJlZyA9IDwweDcwMmQxMjAw
IDB4MTAwPjsKCQkJCW52aWRpYSxhaHViLWkycy1pZCA9IDwweDI+OwoJCQkJc3RhdHVzID0gIm9r
YXkiOwoJCQkJY2xvY2tzID0gPDB4MjEgMHgxMiAweDIxIDB4ZjkgMHgyMSAweDEwYiAweDIxIDB4
MTYwPjsKCQkJCWNsb2NrLW5hbWVzID0gImkycyIsICJpMnNfY2xrX3BhcmVudCIsICJleHRfYXVk
aW9fc3luYyIsICJhdWRpb19zeW5jIjsKCQkJCWFzc2lnbmVkLWNsb2NrcyA9IDwweDIxIDB4MTI+
OwoJCQkJYXNzaWduZWQtY2xvY2stcGFyZW50cyA9IDwweDIxIDB4Zjk+OwoJCQkJYXNzaWduZWQt
Y2xvY2stcmF0ZXMgPSA8MHgxNzcwMDA+OwoJCQkJcHJvZC1uYW1lID0gImkyczJfcHJvZCI7CgkJ
CQlwaW5jdHJsLW5hbWVzID0gImRhcF9hY3RpdmUiLCAiZGFwX2luYWN0aXZlIjsKCQkJCXBpbmN0
cmwtMDsKCQkJCXBpbmN0cmwtMTsKCQkJCXJlZ3VsYXRvci1zdXBwbGllcyA9ICJ2ZGQtMXY4LWRt
aWMiOwoJCQkJdmRkLTF2OC1kbWljLXN1cHBseSA9IDwweDM2PjsKCQkJCWZzeW5jLXdpZHRoID0g
PDB4Zj47CgkJCQlsaW51eCxwaGFuZGxlID0gPDB4NTA+OwoJCQkJcGhhbmRsZSA9IDwweDUwPjsK
CQkJfTsKCgkJCWkyc0A3MDJkMTMwMCB7CgkJCQljb21wYXRpYmxlID0gIm52aWRpYSx0ZWdyYTIx
MC1pMnMiOwoJCQkJcmVnID0gPDB4NzAyZDEzMDAgMHgxMDA+OwoJCQkJbnZpZGlhLGFodWItaTJz
LWlkID0gPDB4Mz47CgkJCQlzdGF0dXMgPSAib2theSI7CgkJCQljbG9ja3MgPSA8MHgyMSAweDY1
IDB4MjEgMHhmOSAweDIxIDB4MTBjIDB4MjEgMHgxNjE+OwoJCQkJY2xvY2stbmFtZXMgPSAiaTJz
IiwgImkyc19jbGtfcGFyZW50IiwgImV4dF9hdWRpb19zeW5jIiwgImF1ZGlvX3N5bmMiOwoJCQkJ
YXNzaWduZWQtY2xvY2tzID0gPDB4MjEgMHg2NT47CgkJCQlhc3NpZ25lZC1jbG9jay1wYXJlbnRz
ID0gPDB4MjEgMHhmOT47CgkJCQlhc3NpZ25lZC1jbG9jay1yYXRlcyA9IDwweDE3NzAwMD47CgkJ
CQlwaW5jdHJsLW5hbWVzID0gImRhcF9hY3RpdmUiLCAiZGFwX2luYWN0aXZlIjsKCQkJCXBpbmN0
cmwtMDsKCQkJCXBpbmN0cmwtMTsKCQkJCXJlZ3VsYXRvci1zdXBwbGllcyA9ICJ2ZGRpby11YXJ0
IjsKCQkJCXZkZGlvLXVhcnQtc3VwcGx5ID0gPDB4MzY+OwoJCQkJZnN5bmMtd2lkdGggPSA8MHhm
PjsKCQkJCWVuYWJsZS1jeWE7CgkJCQlsaW51eCxwaGFuZGxlID0gPDB4NGU+OwoJCQkJcGhhbmRs
ZSA9IDwweDRlPjsKCQkJfTsKCgkJCWkyc0A3MDJkMTQwMCB7CgkJCQljb21wYXRpYmxlID0gIm52
aWRpYSx0ZWdyYTIxMC1pMnMiOwoJCQkJcmVnID0gPDB4NzAyZDE0MDAgMHgxMDA+OwoJCQkJbnZp
ZGlhLGFodWItaTJzLWlkID0gPDB4ND47CgkJCQlzdGF0dXMgPSAiZGlzYWJsZWQiOwoJCQkJY2xv
Y2tzID0gPDB4MjEgMHg2NiAweDIxIDB4ZjkgMHgyMSAweDEwZCAweDIxIDB4MTYyPjsKCQkJCWNs
b2NrLW5hbWVzID0gImkycyIsICJpMnNfY2xrX3BhcmVudCIsICJleHRfYXVkaW9fc3luYyIsICJh
dWRpb19zeW5jIjsKCQkJCWFzc2lnbmVkLWNsb2NrcyA9IDwweDIxIDB4NjY+OwoJCQkJYXNzaWdu
ZWQtY2xvY2stcGFyZW50cyA9IDwweDIxIDB4Zjk+OwoJCQkJYXNzaWduZWQtY2xvY2stcmF0ZXMg
PSA8MHgxNzcwMDA+OwoJCQkJcGluY3RybC1uYW1lcyA9ICJkYXBfYWN0aXZlIiwgImRhcF9pbmFj
dGl2ZSI7CgkJCQlwaW5jdHJsLTA7CgkJCQlwaW5jdHJsLTE7CgkJCQlsaW51eCxwaGFuZGxlID0g
PDB4ZjM+OwoJCQkJcGhhbmRsZSA9IDwweGYzPjsKCQkJfTsKCgkJCWFteEA3MDJkMzAwMCB7CgkJ
CQljb21wYXRpYmxlID0gIm52aWRpYSx0ZWdyYTIxMC1hbXgiOwoJCQkJcmVnID0gPDB4NzAyZDMw
MDAgMHgxMDA+OwoJCQkJbnZpZGlhLGFodWItYW14LWlkID0gPDB4MD47CgkJCQlzdGF0dXMgPSAi
b2theSI7CgkJCQlsaW51eCxwaGFuZGxlID0gPDB4ZjQ+OwoJCQkJcGhhbmRsZSA9IDwweGY0PjsK
CQkJfTsKCgkJCWFteEA3MDJkMzEwMCB7CgkJCQljb21wYXRpYmxlID0gIm52aWRpYSx0ZWdyYTIx
MC1hbXgiOwoJCQkJcmVnID0gPDB4NzAyZDMxMDAgMHgxMDA+OwoJCQkJbnZpZGlhLGFodWItYW14
LWlkID0gPDB4MT47CgkJCQlzdGF0dXMgPSAib2theSI7CgkJCQlsaW51eCxwaGFuZGxlID0gPDB4
ZjU+OwoJCQkJcGhhbmRsZSA9IDwweGY1PjsKCQkJfTsKCgkJCWFkeEA3MDJkMzgwMCB7CgkJCQlj
b21wYXRpYmxlID0gIm52aWRpYSx0ZWdyYTIxMC1hZHgiOwoJCQkJcmVnID0gPDB4NzAyZDM4MDAg
MHgxMDA+OwoJCQkJbnZpZGlhLGFodWItYWR4LWlkID0gPDB4MD47CgkJCQlzdGF0dXMgPSAib2th
eSI7CgkJCQlsaW51eCxwaGFuZGxlID0gPDB4ZjY+OwoJCQkJcGhhbmRsZSA9IDwweGY2PjsKCQkJ
fTsKCgkJCWFkeEA3MDJkMzkwMCB7CgkJCQljb21wYXRpYmxlID0gIm52aWRpYSx0ZWdyYTIxMC1h
ZHgiOwoJCQkJcmVnID0gPDB4NzAyZDM5MDAgMHgxMDA+OwoJCQkJbnZpZGlhLGFodWItYWR4LWlk
ID0gPDB4MT47CgkJCQlzdGF0dXMgPSAib2theSI7CgkJCQlsaW51eCxwaGFuZGxlID0gPDB4Zjc+
OwoJCQkJcGhhbmRsZSA9IDwweGY3PjsKCQkJfTsKCgkJCWRtaWNANzAyZDQwMDAgewoJCQkJY29t
cGF0aWJsZSA9ICJudmlkaWEsdGVncmEyMTAtZG1pYyI7CgkJCQlyZWcgPSA8MHg3MDJkNDAwMCAw
eDEwMD47CgkJCQludmlkaWEsYWh1Yi1kbWljLWlkID0gPDB4MD47CgkJCQlzdGF0dXMgPSAib2th
eSI7CgkJCQljbG9ja3MgPSA8MHgyMSAweGExIDB4MjEgMHhmOT47CgkJCQljbG9jay1uYW1lcyA9
ICJkbWljIiwgInBhcmVudCI7CgkJCQlhc3NpZ25lZC1jbG9ja3MgPSA8MHgyMSAweGExPjsKCQkJ
CWFzc2lnbmVkLWNsb2NrLXBhcmVudHMgPSA8MHgyMSAweGY5PjsKCQkJCWFzc2lnbmVkLWNsb2Nr
LXJhdGVzID0gPDB4MmVlMDAwPjsKCQkJCXJlZ3VsYXRvci1zdXBwbGllcyA9ICJ2ZGQtMXY4LWRt
aWMiOwoJCQkJdmRkLTF2OC1kbWljLXN1cHBseSA9IDwweDM2PjsKCQkJCWxpbnV4LHBoYW5kbGUg
PSA8MHg1Mj47CgkJCQlwaGFuZGxlID0gPDB4NTI+OwoJCQl9OwoKCQkJZG1pY0A3MDJkNDEwMCB7
CgkJCQljb21wYXRpYmxlID0gIm52aWRpYSx0ZWdyYTIxMC1kbWljIjsKCQkJCXJlZyA9IDwweDcw
MmQ0MTAwIDB4MTAwPjsKCQkJCW52aWRpYSxhaHViLWRtaWMtaWQgPSA8MHgxPjsKCQkJCXN0YXR1
cyA9ICJva2F5IjsKCQkJCWNsb2NrcyA9IDwweDIxIDB4YTIgMHgyMSAweGY5PjsKCQkJCWNsb2Nr
LW5hbWVzID0gImRtaWMiLCAicGFyZW50IjsKCQkJCWFzc2lnbmVkLWNsb2NrcyA9IDwweDIxIDB4
YTI+OwoJCQkJYXNzaWduZWQtY2xvY2stcGFyZW50cyA9IDwweDIxIDB4Zjk+OwoJCQkJYXNzaWdu
ZWQtY2xvY2stcmF0ZXMgPSA8MHgyZWUwMDA+OwoJCQkJcmVndWxhdG9yLXN1cHBsaWVzID0gInZk
ZC0xdjgtZG1pYyI7CgkJCQl2ZGQtMXY4LWRtaWMtc3VwcGx5ID0gPDB4MzY+OwoJCQkJbGludXgs
cGhhbmRsZSA9IDwweDU0PjsKCQkJCXBoYW5kbGUgPSA8MHg1ND47CgkJCX07CgoJCQlkbWljQDcw
MmQ0MjAwIHsKCQkJCWNvbXBhdGlibGUgPSAibnZpZGlhLHRlZ3JhMjEwLWRtaWMiOwoJCQkJcmVn
ID0gPDB4NzAyZDQyMDAgMHgxMDA+OwoJCQkJbnZpZGlhLGFodWItZG1pYy1pZCA9IDwweDI+OwoJ
CQkJc3RhdHVzID0gImRpc2FibGVkIjsKCQkJCWNsb2NrcyA9IDwweDIxIDB4YzUgMHgyMSAweGY5
PjsKCQkJCWNsb2NrLW5hbWVzID0gImRtaWMiLCAicGFyZW50IjsKCQkJCWFzc2lnbmVkLWNsb2Nr
cyA9IDwweDIxIDB4YzU+OwoJCQkJYXNzaWduZWQtY2xvY2stcGFyZW50cyA9IDwweDIxIDB4Zjk+
OwoJCQkJYXNzaWduZWQtY2xvY2stcmF0ZXMgPSA8MHgyZWUwMDA+OwoJCQkJbGludXgscGhhbmRs
ZSA9IDwweGY4PjsKCQkJCXBoYW5kbGUgPSA8MHhmOD47CgkJCX07CgoJCQlhZmNANzAyZDcwMDAg
ewoJCQkJY29tcGF0aWJsZSA9ICJudmlkaWEsdGVncmEyMTAtYWZjIjsKCQkJCXJlZyA9IDwweDcw
MmQ3MDAwIDB4MTAwPjsKCQkJCW52aWRpYSxhaHViLWFmYy1pZCA9IDwweDA+OwoJCQkJc3RhdHVz
ID0gIm9rYXkiOwoJCQkJbGludXgscGhhbmRsZSA9IDwweGY5PjsKCQkJCXBoYW5kbGUgPSA8MHhm
OT47CgkJCX07CgoJCQlhZmNANzAyZDcxMDAgewoJCQkJY29tcGF0aWJsZSA9ICJudmlkaWEsdGVn
cmEyMTAtYWZjIjsKCQkJCXJlZyA9IDwweDcwMmQ3MTAwIDB4MTAwPjsKCQkJCW52aWRpYSxhaHVi
LWFmYy1pZCA9IDwweDE+OwoJCQkJc3RhdHVzID0gIm9rYXkiOwoJCQkJbGludXgscGhhbmRsZSA9
IDwweGZhPjsKCQkJCXBoYW5kbGUgPSA8MHhmYT47CgkJCX07CgoJCQlhZmNANzAyZDcyMDAgewoJ
CQkJY29tcGF0aWJsZSA9ICJudmlkaWEsdGVncmEyMTAtYWZjIjsKCQkJCXJlZyA9IDwweDcwMmQ3
MjAwIDB4MTAwPjsKCQkJCW52aWRpYSxhaHViLWFmYy1pZCA9IDwweDI+OwoJCQkJc3RhdHVzID0g
Im9rYXkiOwoJCQkJbGludXgscGhhbmRsZSA9IDwweGZiPjsKCQkJCXBoYW5kbGUgPSA8MHhmYj47
CgkJCX07CgoJCQlhZmNANzAyZDczMDAgewoJCQkJY29tcGF0aWJsZSA9ICJudmlkaWEsdGVncmEy
MTAtYWZjIjsKCQkJCXJlZyA9IDwweDcwMmQ3MzAwIDB4MTAwPjsKCQkJCW52aWRpYSxhaHViLWFm
Yy1pZCA9IDwweDM+OwoJCQkJc3RhdHVzID0gIm9rYXkiOwoJCQkJbGludXgscGhhbmRsZSA9IDww
eGZjPjsKCQkJCXBoYW5kbGUgPSA8MHhmYz47CgkJCX07CgoJCQlhZmNANzAyZDc0MDAgewoJCQkJ
Y29tcGF0aWJsZSA9ICJudmlkaWEsdGVncmEyMTAtYWZjIjsKCQkJCXJlZyA9IDwweDcwMmQ3NDAw
IDB4MTAwPjsKCQkJCW52aWRpYSxhaHViLWFmYy1pZCA9IDwweDQ+OwoJCQkJc3RhdHVzID0gIm9r
YXkiOwoJCQkJbGludXgscGhhbmRsZSA9IDwweGZkPjsKCQkJCXBoYW5kbGUgPSA8MHhmZD47CgkJ
CX07CgoJCQlhZmNANzAyZDc1MDAgewoJCQkJY29tcGF0aWJsZSA9ICJudmlkaWEsdGVncmEyMTAt
YWZjIjsKCQkJCXJlZyA9IDwweDcwMmQ3NTAwIDB4MTAwPjsKCQkJCW52aWRpYSxhaHViLWFmYy1p
ZCA9IDwweDU+OwoJCQkJc3RhdHVzID0gIm9rYXkiOwoJCQkJbGludXgscGhhbmRsZSA9IDwweGZl
PjsKCQkJCXBoYW5kbGUgPSA8MHhmZT47CgkJCX07CgoJCQltdmNANzAyZGEwMDAgewoJCQkJY29t
cGF0aWJsZSA9ICJudmlkaWEsdGVncmEyMTAtbXZjIjsKCQkJCXJlZyA9IDwweDcwMmRhMDAwIDB4
MjAwPjsKCQkJCW52aWRpYSxhaHViLW12Yy1pZCA9IDwweDA+OwoJCQkJc3RhdHVzID0gIm9rYXki
OwoJCQkJbGludXgscGhhbmRsZSA9IDwweGZmPjsKCQkJCXBoYW5kbGUgPSA8MHhmZj47CgkJCX07
CgoJCQltdmNANzAyZGEyMDAgewoJCQkJY29tcGF0aWJsZSA9ICJudmlkaWEsdGVncmEyMTAtbXZj
IjsKCQkJCXJlZyA9IDwweDcwMmRhMjAwIDB4MjAwPjsKCQkJCW52aWRpYSxhaHViLW12Yy1pZCA9
IDwweDE+OwoJCQkJc3RhdHVzID0gIm9rYXkiOwoJCQkJbGludXgscGhhbmRsZSA9IDwweDEwMD47
CgkJCQlwaGFuZGxlID0gPDB4MTAwPjsKCQkJfTsKCgkJCWlxY0A3MDJkZTAwMCB7CgkJCQljb21w
YXRpYmxlID0gIm52aWRpYSx0ZWdyYTIxMC1pcWMiOwoJCQkJcmVnID0gPDB4NzAyZGUwMDAgMHgy
MDA+OwoJCQkJbnZpZGlhLGFodWItaXFjLWlkID0gPDB4MD47CgkJCQlzdGF0dXMgPSAiZGlzYWJs
ZWQiOwoJCQkJbGludXgscGhhbmRsZSA9IDwweDEwMT47CgkJCQlwaGFuZGxlID0gPDB4MTAxPjsK
CQkJfTsKCgkJCWlxY0A3MDJkZTIwMCB7CgkJCQljb21wYXRpYmxlID0gIm52aWRpYSx0ZWdyYTIx
MC1pcWMiOwoJCQkJcmVnID0gPDB4NzAyZGUyMDAgMHgyMDA+OwoJCQkJbnZpZGlhLGFodWItaXFj
LWlkID0gPDB4MT47CgkJCQlzdGF0dXMgPSAiZGlzYWJsZWQiOwoJCQkJbGludXgscGhhbmRsZSA9
IDwweDEwMj47CgkJCQlwaGFuZGxlID0gPDB4MTAyPjsKCQkJfTsKCgkJCW9wZUA3MDJkODAwMCB7
CgkJCQljb21wYXRpYmxlID0gIm52aWRpYSx0ZWdyYTIxMC1vcGUiOwoJCQkJcmVnID0gPDB4NzAy
ZDgwMDAgMHgxMDAgMHg3MDJkODEwMCAweDEwMCAweDcwMmQ4MjAwIDB4MjAwPjsKCQkJCW52aWRp
YSxhaHViLW9wZS1pZCA9IDwweDA+OwoJCQkJc3RhdHVzID0gIm9rYXkiOwoJCQkJbGludXgscGhh
bmRsZSA9IDwweDEwMz47CgkJCQlwaGFuZGxlID0gPDB4MTAzPjsKCgkJCQlwZXFANzAyZDgxMDAg
ewoJCQkJCXN0YXR1cyA9ICJva2F5IjsKCQkJCX07CgoJCQkJbWJkcmNANzAyZDgyMDAgewoJCQkJ
CXN0YXR1cyA9ICJva2F5IjsKCQkJCX07CgkJCX07CgoJCQlvcGVANzAyZDg0MDAgewoJCQkJY29t
cGF0aWJsZSA9ICJudmlkaWEsdGVncmEyMTAtb3BlIjsKCQkJCXJlZyA9IDwweDcwMmQ4NDAwIDB4
MTAwIDB4NzAyZDg1MDAgMHgxMDAgMHg3MDJkODYwMCAweDIwMD47CgkJCQludmlkaWEsYWh1Yi1v
cGUtaWQgPSA8MHgxPjsKCQkJCXN0YXR1cyA9ICJva2F5IjsKCQkJCWxpbnV4LHBoYW5kbGUgPSA8
MHgxMDQ+OwoJCQkJcGhhbmRsZSA9IDwweDEwND47CgoJCQkJcGVxQDcwMmQ4NTAwIHsKCQkJCQlz
dGF0dXMgPSAib2theSI7CgkJCQl9OwoKCQkJCW1iZHJjQDcwMmQ4NjAwIHsKCQkJCQlzdGF0dXMg
PSAib2theSI7CgkJCQl9OwoJCQl9OwoKCQkJbXZjQDB4NzAyZGEyMDAgewoJCQkJc3RhdHVzID0g
Im9rYXkiOwoJCQl9OwoJCX07CgoJCWFkc3BfYXVkaW8gewoJCQljb21wYXRpYmxlID0gIm52aWRp
YSx0ZWdyYTIxMC1hZHNwLWF1ZGlvIjsKCQkJd2FrZXVwLWRpc2FibGU7CgkJCWlvbW11cyA9IDww
eDJiIDB4MjI+OwoJCQlpb21tdS1yZXN2LXJlZ2lvbnMgPSA8MHgwIDB4MCAweDAgMHg3MDMwMDAw
MCAweDAgMHhmZmYwMDAwMCAweGZmZmZmZmZmIDB4ZmZmZmZmZmY+OwoJCQlpb21tdS1ncm91cC1p
ZCA9IDwweDI+OwoJCQludmlkaWEsYWRtYV9jaF9zdGFydCA9IDwweGI+OwoJCQludmlkaWEsYWRt
YV9jaF9jbnQgPSA8MHhiPjsKCQkJaW50ZXJydXB0LXBhcmVudCA9IDwweDM0PjsKCQkJaW50ZXJy
dXB0cyA9IDwweDAgMHgyMyAweDQgMHgxIDB4MCAweDI0IDB4NCAweDEgMHgwIDB4MjUgMHg0IDB4
MSAweDAgMHgyNiAweDQgMHgxIDB4MCAweDI3IDB4NCAweDEgMHgwIDB4MjggMHg0IDB4MSAweDAg
MHgyOSAweDQgMHgxIDB4MCAweDJhIDB4NCAweDEgMHgwIDB4MmIgMHg0IDB4MSAweDAgMHgyYyAw
eDQgMHgxIDB4MCAweDJkIDB4NCAweDE+OwoJCQljbG9ja3MgPSA8MHgyMSAweDZhIDB4MjEgMHhj
Nj47CgkJCWNsb2NrLW5hbWVzID0gImFodWIiLCAiYXBlIjsKCQkJc3RhdHVzID0gIm9rYXkiOwoJ
CQlsaW51eCxwaGFuZGxlID0gPDB4MTA1PjsKCQkJcGhhbmRsZSA9IDwweDEwNT47CgkJfTsKCX07
CgoJdGltZXIgewoJCWNvbXBhdGlibGUgPSAiYXJtLGFybXY4LXRpbWVyIjsKCQlpbnRlcnJ1cHQt
cGFyZW50ID0gPDB4MzM+OwoJCWludGVycnVwdHMgPSA8MHgxIDB4ZCAweGYwOCAweDEgMHhlIDB4
ZjA4IDB4MSAweGIgMHhmMDggMHgxIDB4YSAweGYwOD47CgkJY2xvY2stZnJlcXVlbmN5ID0gPDB4
MTI0ZjgwMD47CgkJc3RhdHVzID0gIm9rYXkiOwoJfTsKCgl0aW1lckA2MDAwNTAwMCB7CgkJY29t
cGF0aWJsZSA9ICJudmlkaWEsdGVncmEyMTAtdGltZXIiLCAibnZpZGlhLHRlZ3JhMzAtdGltZXIi
LCAibnZpZGlhLHRlZ3JhMzAtdGltZXItd2R0IjsKCQlyZWcgPSA8MHgwIDB4NjAwMDUwMDAgMHgw
IDB4NDAwPjsKCQlpbnRlcnJ1cHRzID0gPDB4MCAweGIwIDB4NCAweDAgMHhiMSAweDQgMHgwIDB4
YjIgMHg0IDB4MCAweGIzIDB4ND47CgkJY2xvY2tzID0gPDB4MjEgMHg1PjsKCQlzdGF0dXMgPSAi
b2theSI7Cgl9OwoKCXJ0YyB7CgkJY29tcGF0aWJsZSA9ICJudmlkaWEsdGVncmEtcnRjIjsKCQly
ZWcgPSA8MHgwIDB4NzAwMGUwMDAgMHgwIDB4MTAwPjsKCQlpbnRlcnJ1cHRzID0gPDB4MCAweDIg
MHg0PjsKCQlzdGF0dXMgPSAib2theSI7CgkJbnZpZGlhLHBtYy13YWtldXAgPSA8MHgzNyAweDEg
MHgxMCAweDQ+OwoJfTsKCglkbWFANjAwMjAwMDAgewoJCWNvbXBhdGlibGUgPSAibnZpZGlhLHRl
Z3JhMTQ4LWFwYmRtYSI7CgkJcmVnID0gPDB4MCAweDYwMDIwMDAwIDB4MCAweDE0MDA+OwoJCWNs
b2NrcyA9IDwweDIxIDB4MjI+OwoJCWNsb2NrLW5hbWVzID0gImRtYSI7CgkJcmVzZXRzID0gPDB4
MjEgMHgyMj47CgkJcmVzZXQtbmFtZXMgPSAiZG1hIjsKCQlpbnRlcnJ1cHRzID0gPDB4MCAweDY4
IDB4NCAweDAgMHg2OSAweDQgMHgwIDB4NmEgMHg0IDB4MCAweDZiIDB4NCAweDAgMHg2YyAweDQg
MHgwIDB4NmQgMHg0IDB4MCAweDZlIDB4NCAweDAgMHg2ZiAweDQgMHgwIDB4NzAgMHg0IDB4MCAw
eDcxIDB4NCAweDAgMHg3MiAweDQgMHgwIDB4NzMgMHg0IDB4MCAweDc0IDB4NCAweDAgMHg3NSAw
eDQgMHgwIDB4NzYgMHg0IDB4MCAweDc3IDB4NCAweDAgMHg4MCAweDQgMHgwIDB4ODEgMHg0IDB4
MCAweDgyIDB4NCAweDAgMHg4MyAweDQgMHgwIDB4ODQgMHg0IDB4MCAweDg1IDB4NCAweDAgMHg4
NiAweDQgMHgwIDB4ODcgMHg0IDB4MCAweDg4IDB4NCAweDAgMHg4OSAweDQgMHgwIDB4OGEgMHg0
IDB4MCAweDhiIDB4NCAweDAgMHg4YyAweDQgMHgwIDB4OGQgMHg0IDB4MCAweDhlIDB4NCAweDAg
MHg4ZiAweDQ+OwoJCSNkbWEtY2VsbHMgPSA8MHgxPjsKCQlzdGF0dXMgPSAib2theSI7CgkJbGlu
dXgscGhhbmRsZSA9IDwweDRjPjsKCQlwaGFuZGxlID0gPDB4NGM+OwoJfTsKCglwaW5tdXhANzAw
MDA4ZDQgewoJCWNvbXBhdGlibGUgPSAibnZpZGlhLHRlZ3JhMjEwLXBpbm11eCI7CgkJcmVnID0g
PDB4MCAweDcwMDAwOGQ0IDB4MCAweDJhNSAweDAgMHg3MDAwMzAwMCAweDAgMHgyOTQ+OwoJCSNn
cGlvLXJhbmdlLWNlbGxzID0gPDB4Mz47CgkJc3RhdHVzID0gIm9rYXkiOwoJCXBpbmN0cmwtbmFt
ZXMgPSAiZGVmYXVsdCIsICJkcml2ZSIsICJ1bnVzZWQiOwoJCXBpbmN0cmwtMCA9IDwweDM4PjsK
CQlwaW5jdHJsLTEgPSA8MHgzOT47CgkJcGluY3RybC0yID0gPDB4M2E+OwoJCWxpbnV4LHBoYW5k
bGUgPSA8MHgzYj47CgkJcGhhbmRsZSA9IDwweDNiPjsKCgkJY2xrcmVxXzBfYmlfZGlyIHsKCQkJ
bGludXgscGhhbmRsZSA9IDwweDdiPjsKCQkJcGhhbmRsZSA9IDwweDdiPjsKCgkJCWNsa3JlcTAg
ewoJCQkJbnZpZGlhLHBpbnMgPSAicGV4X2wwX2Nsa3JlcV9uX3BhMSI7CgkJCQludmlkaWEsdHJp
c3RhdGUgPSA8MHgwPjsKCQkJfTsKCQl9OwoKCQljbGtyZXFfMV9iaV9kaXIgewoJCQlsaW51eCxw
aGFuZGxlID0gPDB4N2M+OwoJCQlwaGFuZGxlID0gPDB4N2M+OwoKCQkJY2xrcmVxMSB7CgkJCQlu
dmlkaWEscGlucyA9ICJwZXhfbDFfY2xrcmVxX25fcGE0IjsKCQkJCW52aWRpYSx0cmlzdGF0ZSA9
IDwweDA+OwoJCQl9OwoJCX07CgoJCWNsa3JlcV8wX2luX2RpciB7CgkJCWxpbnV4LHBoYW5kbGUg
PSA8MHg3ZD47CgkJCXBoYW5kbGUgPSA8MHg3ZD47CgoJCQljbGtyZXEwIHsKCQkJCW52aWRpYSxw
aW5zID0gInBleF9sMF9jbGtyZXFfbl9wYTEiOwoJCQkJbnZpZGlhLHRyaXN0YXRlID0gPDB4MT47
CgkJCX07CgkJfTsKCgkJY2xrcmVxXzFfaW5fZGlyIHsKCQkJbGludXgscGhhbmRsZSA9IDwweDdl
PjsKCQkJcGhhbmRsZSA9IDwweDdlPjsKCgkJCWNsa3JlcTEgewoJCQkJbnZpZGlhLHBpbnMgPSAi
cGV4X2wxX2Nsa3JlcV9uX3BhNCI7CgkJCQludmlkaWEsdHJpc3RhdGUgPSA8MHgxPjsKCQkJfTsK
CQl9OwoKCQlwcm9kLXNldHRpbmdzIHsKCQkJI3Byb2QtY2VsbHMgPSA8MHg0PjsKCgkJCXByb2Qg
ewoJCQkJc3RhdHVzID0gIm9rYXkiOwoJCQkJbnZpZGlhLHByb2QtYm9vdC1pbml0OwoJCQkJcHJv
ZCA9IDwweDAgMHgxYzQgMHhmN2Y3ZjAwMCAweDUxMjEyMDAwIDB4MCAweDEyOCAweDFmMWYwMDAg
MHgxMDEwMDAwIDB4MCAweDEyYyAweDFmMWYwMDAgMHgxMDEwMDAwIDB4MCAweDFjOCAweGYwMDAz
ZmZkIDB4MTA0MCAweDAgMHgxZGMgMHhmN2Y3ZjAwMCAweDUxMjEyMDAwIDB4MCAweDFlMCAweGYw
MDAzZmZkIDB4MTA0MCAweDAgMHgyM2MgMHgxZjFmMDAwIDB4MWYxZjAwMCAweDAgMHgyMCAweDFm
MWYwMDAgMHgxMDEwMDAwIDB4MCAweDQ0IDB4MWYxZjAwMCAweDEwMTAwMDAgMHgwIDB4NTAgMHgx
ZjFmMDAwIDB4MTAxMDAwMCAweDAgMHg1OCAweDFmMWYwMDAgMHgxMDEwMDAwIDB4MCAweDVjIDB4
MWYxZjAwMCAweDEwMTAwMDAgMHgwIDB4YTAgMHgxZjFmMDAwIDB4MTAxMDAwMCAweDAgMHhhNCAw
eDFmMWYwMDAgMHgxMDEwMDAwIDB4MCAweGE4IDB4MWYxZjAwMCAweDEwMTAwMDAgMHgwIDB4YWMg
MHgxZjFmMDAwIDB4MTAxMDAwMCAweDAgMHhiMCAweDFmMWYwMDAgMHgxZjFmMDAwIDB4MCAweGI0
IDB4MWYxZjAwMCAweDFmMWYwMDAgMHgwIDB4YjggMHgxZjFmMDAwIDB4MWYxZjAwMCAweDAgMHhi
YyAweDFmMWYwMDAgMHgxZjFmMDAwIDB4MCAweGMwIDB4MWYxZjAwMCAweDFmMWYwMDAgMHgwIDB4
YzQgMHgxZjFmMDAwIDB4MWYxZjAwMCAweDEgMHgwIDB4NzIwMCAweDIwMDAgMHgxIDB4NCAweDcy
MDAgMHgyMDAwIDB4MSAweDggMHg3MjAwIDB4MjAwMCAweDEgMHhjIDB4NzIwMCAweDIwMDAgMHgx
IDB4MTAgMHg3MjAwIDB4MjAwMCAweDEgMHgxNCAweDcyMDAgMHgyMDAwIDB4MSAweDFjIDB4NzIw
MCAweDIwMDAgMHgxIDB4MjAgMHg3MjAwIDB4MjAwMCAweDEgMHgyNCAweDcyMDAgMHgyMDAwIDB4
MSAweDI4IDB4NzIwMCAweDIwMDAgMHgxIDB4MmMgMHg3MjAwIDB4MjAwMCAweDEgMHgzMCAweDcy
MDAgMHgyMDAwIDB4MSAweDE2MCAweDEwMDAgMHgxMDAwPjsKCQkJfTsKCgkJCWkyczJfcHJvZCB7
CgkJCQlwcm9kID0gPDB4MCAweGIwIDB4MWYxZjAwMCAweDEwMTAwMDAgMHgwIDB4YjQgMHgxZjFm
MDAwIDB4MTAxMDAwMCAweDAgMHhiOCAweDFmMWYwMDAgMHgxMDEwMDAwIDB4MCAweGJjIDB4MWYx
ZjAwMCAweDEwMTAwMDA+OwoJCQl9OwoKCQkJc3BpMV9wcm9kIHsKCQkJCW52aWRpYSxwcm9kLWJv
b3QtaW5pdDsKCQkJCXByb2QgPSA8MHgwIDB4MjAwIDB4ZjAwMDAwMDAgMHg1MDAwMDAwMCAweDAg
MHgyMDQgMHhmMDAwMDAwMCAweDUwMDAwMDAwIDB4MCAweDIwOCAweGYwMDAwMDAwIDB4NTAwMDAw
MDAgMHgwIDB4MjBjIDB4ZjAwMDAwMDAgMHg1MDAwMDAwMCAweDAgMHgyMTAgMHhmMDAwMDAwMCAw
eDUwMDAwMDAwIDB4MSAweDUwIDB4NjAwMCAweDYwNDAgMHgxIDB4NTQgMHg2MDAwIDB4NjA0MCAw
eDEgMHg1OCAweDYwMDAgMHg2MDQwIDB4MSAweDVjIDB4NjAwMCAweDYwNDAgMHgxIDB4NjAgMHg2
MDAwIDB4NjA0MD47CgkJCX07CgoJCQlzcGkyX3Byb2QgewoJCQkJbnZpZGlhLHByb2QtYm9vdC1p
bml0OwoJCQkJcHJvZCA9IDwweDAgMHgyMTQgMHhmMDAwMDAwMCAweGQwMDAwMDAwIDB4MCAweDIx
OCAweGYwMDAwMDAwIDB4ZDAwMDAwMDAgMHgwIDB4MjFjIDB4ZjAwMDAwMDAgMHhkMDAwMDAwMCAw
eDAgMHgyMjAgMHhmMDAwMDAwMCAweGQwMDAwMDAwIDB4MCAweDIyNCAweGYwMDAwMDAwIDB4ZDAw
MDAwMDAgMHgxIDB4NjQgMHg2MDAwIDB4NjA0MCAweDEgMHg2OCAweDYwMDAgMHg2MDQwIDB4MSAw
eDZjIDB4NjAwMCAweDYwNDAgMHgxIDB4NzAgMHg2MDAwIDB4NjA0MCAweDEgMHg3NCAweDYwMDAg
MHg2MDQwPjsKCQkJfTsKCgkJCXNwaTNfcHJvZCB7CgkJCQludmlkaWEscHJvZC1ib290LWluaXQ7
CgkJCQlwcm9kID0gPDB4MCAweGNjIDB4MTQwNDAwMCAweDE0MTQwMDAgMHgwIDB4ZDAgMHgxNDA0
MDAwIDB4MTQxNDAwMCAweDAgMHgxNDAgMHgxNDA0MDAwIDB4MTQxNDAwMCAweDAgMHgxNDQgMHgx
NDA0MDAwIDB4MTQxNDAwMD47CgkJCX07CgoJCQlzcGk0X3Byb2QgewoJCQkJbnZpZGlhLHByb2Qt
Ym9vdC1pbml0OwoJCQkJcHJvZCA9IDwweDAgMHgyNjggMHgxNDA0MDAwIDB4MTQxNDAwMCAweDAg
MHgyNmMgMHgxNDA0MDAwIDB4MTQxNDAwMCAweDAgMHgyNzAgMHgxNDA0MDAwIDB4MTQxNDAwMCAw
eDAgMHgyNzQgMHgxNDA0MDAwIDB4MTQxNDAwMD47CgkJCX07CgoJCQlpMmMwX3Byb2QgewoJCQkJ
bnZpZGlhLHByb2QtYm9vdC1pbml0OwoJCQkJcHJvZCA9IDwweDAgMHhkNCAweDFmMWYwMDAgMHgx
ZjAwMCAweDAgMHhkOCAweDFmMWYwMDAgMHgxZjAwMCAweDEgMHhiYyAweDExMDAgMHgwIDB4MSAw
eGMwIDB4MTEwMCAweDA+OwoJCQl9OwoKCQkJaTJjMV9wcm9kIHsKCQkJCW52aWRpYSxwcm9kLWJv
b3QtaW5pdDsKCQkJCXByb2QgPSA8MHgwIDB4ZGMgMHgxZjFmMDAwIDB4MWYwMDAgMHgwIDB4ZTAg
MHgxZjFmMDAwIDB4MWYwMDAgMHgxIDB4YzQgMHgxMTAwIDB4MCAweDEgMHhjOCAweDExMDAgMHgw
PjsKCQkJfTsKCgkJCWkyYzJfcHJvZCB7CgkJCQludmlkaWEscHJvZC1ib290LWluaXQ7CgkJCQlw
cm9kID0gPDB4MCAweGU0IDB4MWYxZjAwMCAweDFmMDAwIDB4MCAweGU4IDB4MWYxZjAwMCAweDFm
MDAwIDB4MSAweGNjIDB4MTEwMCAweDAgMHgxIDB4ZDAgMHgxMTAwIDB4MCAweDAgMHg2MCAweDFm
MWYwMDAgMHgxZjAwMCAweDAgMHg2NCAweDFmMWYwMDAgMHgxZjAwMCAweDEgMHhkNCAweDExMDAg
MHgwIDB4MSAweGQ4IDB4MTEwMCAweDA+OwoJCQl9OwoKCQkJaTJjNF9wcm9kIHsKCQkJCW52aWRp
YSxwcm9kLWJvb3QtaW5pdDsKCQkJCXByb2QgPSA8MHgwIDB4MTk4IDB4MWYxZjAwMCAweDFmMDAw
IDB4MCAweDE5YyAweDFmMWYwMDAgMHgxZjAwMCAweDEgMHhkYyAweDExMDAgMHgwIDB4MSAweGUw
IDB4MTEwMCAweDA+OwoJCQl9OwoKCQkJaTJjMF9oc19wcm9kIHsKCQkJCXByb2QgPSA8MHgwIDB4
ZDQgMHgxZjFmMDAwIDB4MWYxZjAwMCAweDAgMHhkOCAweDFmMWYwMDAgMHgxZjFmMDAwIDB4MSAw
eGJjIDB4MTEwMCAweDEwMDAgMHgxIDB4YzAgMHgxMTAwIDB4MTAwMD47CgkJCX07CgoJCQlpMmMx
X2hzX3Byb2QgewoJCQkJcHJvZCA9IDwweDAgMHhkYyAweDFmMWYwMDAgMHgxZjFmMDAwIDB4MCAw
eGUwIDB4MWYxZjAwMCAweDFmMWYwMDAgMHgxIDB4YzQgMHgxMTAwIDB4MTAwMCAweDEgMHhjOCAw
eDExMDAgMHgxMDAwPjsKCQkJfTsKCgkJCWkyYzJfaHNfcHJvZCB7CgkJCQlwcm9kID0gPDB4MCAw
eGU0IDB4MWYxZjAwMCAweDFmMWYwMDAgMHgwIDB4ZTggMHgxZjFmMDAwIDB4MWYxZjAwMCAweDEg
MHhjYyAweDExMDAgMHgxMDAwIDB4MSAweGQwIDB4MTEwMCAweDEwMDAgMHgwIDB4NjAgMHgxZjFm
MDAwIDB4MWYxZjAwMCAweDAgMHg2NCAweDFmMWYwMDAgMHgxZjFmMDAwIDB4MSAweGQ0IDB4MTEw
MCAweDEwMDAgMHgxIDB4ZDggMHgxMTAwIDB4MTAwMD47CgkJCX07CgoJCQlpMmM0X2hzX3Byb2Qg
ewoJCQkJcHJvZCA9IDwweDAgMHgxOTggMHgxZjFmMDAwIDB4MWYxZjAwMCAweDAgMHgxOWMgMHgx
ZjFmMDAwIDB4MWYxZjAwMCAweDEgMHhkYyAweDExMDAgMHgxMDAwIDB4MSAweGUwIDB4MTEwMCAw
eDEwMDA+OwoJCQl9OwoJCX07CgoJCXNkbW1jMV9zY2htaXR0X2VuYWJsZSB7CgkJCWxpbnV4LHBo
YW5kbGUgPSA8MHg5MD47CgkJCXBoYW5kbGUgPSA8MHg5MD47CgoJCQlzZG1tYzEgewoJCQkJbnZp
ZGlhLHBpbnMgPSAic2RtbWMxX2NtZF9wbTEiLCAic2RtbWMxX2RhdDBfcG01IiwgInNkbW1jMV9k
YXQxX3BtNCIsICJzZG1tYzFfZGF0Ml9wbTMiLCAic2RtbWMxX2RhdDNfcG0yIjsKCQkJCW52aWRp
YSxzY2htaXR0ID0gPDB4MT47CgkJCX07CgkJfTsKCgkJc2RtbWMxX3NjaG1pdHRfZGlzYWJsZSB7
CgkJCWxpbnV4LHBoYW5kbGUgPSA8MHg5MT47CgkJCXBoYW5kbGUgPSA8MHg5MT47CgoJCQlzZG1t
YzEgewoJCQkJbnZpZGlhLHBpbnMgPSAic2RtbWMxX2NtZF9wbTEiLCAic2RtbWMxX2RhdDBfcG01
IiwgInNkbW1jMV9kYXQxX3BtNCIsICJzZG1tYzFfZGF0Ml9wbTMiLCAic2RtbWMxX2RhdDNfcG0y
IjsKCQkJCW52aWRpYSxzY2htaXR0ID0gPDB4MD47CgkJCX07CgkJfTsKCgkJc2RtbWMxX2Nsa19z
Y2htaXR0X2VuYWJsZSB7CgkJCWxpbnV4LHBoYW5kbGUgPSA8MHg5Mj47CgkJCXBoYW5kbGUgPSA8
MHg5Mj47CgoJCQlzZG1tYzEgewoJCQkJbnZpZGlhLHBpbnMgPSAic2RtbWMxX2Nsa19wbTAiOwoJ
CQkJbnZpZGlhLHNjaG1pdHQgPSA8MHgxPjsKCQkJfTsKCQl9OwoKCQlzZG1tYzFfY2xrX3NjaG1p
dHRfZGlzYWJsZSB7CgkJCWxpbnV4LHBoYW5kbGUgPSA8MHg5Mz47CgkJCXBoYW5kbGUgPSA8MHg5
Mz47CgoJCQlzZG1tYzEgewoJCQkJbnZpZGlhLHBpbnMgPSAic2RtbWMxX2Nsa19wbTAiOwoJCQkJ
bnZpZGlhLHNjaG1pdHQgPSA8MHgwPjsKCQkJfTsKCQl9OwoKCQlzZG1tYzFfZHJ2X2NvZGUgewoJ
CQlsaW51eCxwaGFuZGxlID0gPDB4OTQ+OwoJCQlwaGFuZGxlID0gPDB4OTQ+OwoKCQkJc2RtbWMx
IHsKCQkJCW52aWRpYSxwaW5zID0gImRyaXZlX3NkbW1jMSI7CgkJCQludmlkaWEscHVsbC1kb3du
LXN0cmVuZ3RoID0gPDB4MTU+OwoJCQkJbnZpZGlhLHB1bGwtdXAtc3RyZW5ndGggPSA8MHgxMT47
CgkJCX07CgkJfTsKCgkJc2RtbWMxX2RlZmF1bHRfZHJ2X2NvZGUgewoJCQlsaW51eCxwaGFuZGxl
ID0gPDB4OTU+OwoJCQlwaGFuZGxlID0gPDB4OTU+OwoKCQkJc2RtbWMxIHsKCQkJCW52aWRpYSxw
aW5zID0gImRyaXZlX3NkbW1jMSI7CgkJCQludmlkaWEscHVsbC1kb3duLXN0cmVuZ3RoID0gPDB4
MTI+OwoJCQkJbnZpZGlhLHB1bGwtdXAtc3RyZW5ndGggPSA8MHgxMj47CgkJCX07CgkJfTsKCgkJ
c2RtbWMzX3NjaG1pdHRfZW5hYmxlIHsKCQkJbGludXgscGhhbmRsZSA9IDwweDg4PjsKCQkJcGhh
bmRsZSA9IDwweDg4PjsKCgkJCXNkbW1jMyB7CgkJCQludmlkaWEscGlucyA9ICJzZG1tYzNfY21k
X3BwMSIsICJzZG1tYzNfZGF0MF9wcDUiLCAic2RtbWMzX2RhdDFfcHA0IiwgInNkbW1jM19kYXQy
X3BwMyIsICJzZG1tYzNfZGF0M19wcDIiOwoJCQkJbnZpZGlhLHNjaG1pdHQgPSA8MHgxPjsKCQkJ
fTsKCQl9OwoKCQlzZG1tYzNfc2NobWl0dF9kaXNhYmxlIHsKCQkJbGludXgscGhhbmRsZSA9IDww
eDg5PjsKCQkJcGhhbmRsZSA9IDwweDg5PjsKCgkJCXNkbW1jMyB7CgkJCQludmlkaWEscGlucyA9
ICJzZG1tYzNfY21kX3BwMSIsICJzZG1tYzNfZGF0MF9wcDUiLCAic2RtbWMzX2RhdDFfcHA0Iiwg
InNkbW1jM19kYXQyX3BwMyIsICJzZG1tYzNfZGF0M19wcDIiOwoJCQkJbnZpZGlhLHNjaG1pdHQg
PSA8MHgwPjsKCQkJfTsKCQl9OwoKCQlzZG1tYzNfY2xrX3NjaG1pdHRfZW5hYmxlIHsKCQkJbGlu
dXgscGhhbmRsZSA9IDwweDhhPjsKCQkJcGhhbmRsZSA9IDwweDhhPjsKCgkJCXNkbW1jMyB7CgkJ
CQludmlkaWEscGlucyA9ICJzZG1tYzNfY2xrX3BwMCI7CgkJCQludmlkaWEsc2NobWl0dCA9IDww
eDE+OwoJCQl9OwoJCX07CgoJCXNkbW1jM19jbGtfc2NobWl0dF9kaXNhYmxlIHsKCQkJbGludXgs
cGhhbmRsZSA9IDwweDhiPjsKCQkJcGhhbmRsZSA9IDwweDhiPjsKCgkJCXNkbW1jMyB7CgkJCQlu
dmlkaWEscGlucyA9ICJzZG1tYzNfY2xrX3BwMCI7CgkJCQludmlkaWEsc2NobWl0dCA9IDwweDA+
OwoJCQl9OwoJCX07CgoJCXNkbW1jM19kcnZfY29kZSB7CgkJCWxpbnV4LHBoYW5kbGUgPSA8MHg4
Yz47CgkJCXBoYW5kbGUgPSA8MHg4Yz47CgoJCQlzZG1tYzMgewoJCQkJbnZpZGlhLHBpbnMgPSAi
ZHJpdmVfc2RtbWMzIjsKCQkJCW52aWRpYSxwdWxsLWRvd24tc3RyZW5ndGggPSA8MHgxNT47CgkJ
CQludmlkaWEscHVsbC11cC1zdHJlbmd0aCA9IDwweDExPjsKCQkJfTsKCQl9OwoKCQlzZG1tYzNf
ZGVmYXVsdF9kcnZfY29kZSB7CgkJCWxpbnV4LHBoYW5kbGUgPSA8MHg4ZD47CgkJCXBoYW5kbGUg
PSA8MHg4ZD47CgoJCQlzZG1tYzMgewoJCQkJbnZpZGlhLHBpbnMgPSAiZHJpdmVfc2RtbWMzIjsK
CQkJCW52aWRpYSxwdWxsLWRvd24tc3RyZW5ndGggPSA8MHgxMj47CgkJCQludmlkaWEscHVsbC11
cC1zdHJlbmd0aCA9IDwweDEyPjsKCQkJfTsKCQl9OwoKCQlkdmZzX3B3bV9hY3RpdmUgewoJCQls
aW51eCxwaGFuZGxlID0gPDB4OWI+OwoJCQlwaGFuZGxlID0gPDB4OWI+OwoKCQkJZHZmc19wd21f
cGJiMSB7CgkJCQludmlkaWEscGlucyA9ICJkdmZzX3B3bV9wYmIxIjsKCQkJCW52aWRpYSx0cmlz
dGF0ZSA9IDwweDA+OwoJCQl9OwoJCX07CgoJCWR2ZnNfcHdtX2luYWN0aXZlIHsKCQkJbGludXgs
cGhhbmRsZSA9IDwweDljPjsKCQkJcGhhbmRsZSA9IDwweDljPjsKCgkJCWR2ZnNfcHdtX3BiYjEg
ewoJCQkJbnZpZGlhLHBpbnMgPSAiZHZmc19wd21fcGJiMSI7CgkJCQludmlkaWEsdHJpc3RhdGUg
PSA8MHgxPjsKCQkJfTsKCQl9OwoKCQljb21tb24gewoJCQlsaW51eCxwaGFuZGxlID0gPDB4Mzg+
OwoJCQlwaGFuZGxlID0gPDB4Mzg+OwoKCQkJZHZmc19wd21fcGJiMSB7CgkJCQludmlkaWEscGlu
cyA9ICJkdmZzX3B3bV9wYmIxIjsKCQkJCW52aWRpYSxmdW5jdGlvbiA9ICJjbGR2ZnMiOwoJCQkJ
bnZpZGlhLHB1bGwgPSA8MHgwPjsKCQkJCW52aWRpYSx0cmlzdGF0ZSA9IDwweDE+OwoJCQkJbnZp
ZGlhLGVuYWJsZS1pbnB1dCA9IDwweDA+OwoJCQl9OwoKCQkJZG1pYzFfY2xrX3BlMCB7CgkJCQlu
dmlkaWEscGlucyA9ICJkbWljMV9jbGtfcGUwIjsKCQkJCW52aWRpYSxmdW5jdGlvbiA9ICJpMnMz
IjsKCQkJCW52aWRpYSxwdWxsID0gPDB4MD47CgkJCQludmlkaWEsdHJpc3RhdGUgPSA8MHgwPjsK
CQkJCW52aWRpYSxlbmFibGUtaW5wdXQgPSA8MHgxPjsKCQkJfTsKCgkJCWRtaWMxX2RhdF9wZTEg
ewoJCQkJbnZpZGlhLHBpbnMgPSAiZG1pYzFfZGF0X3BlMSI7CgkJCQludmlkaWEsZnVuY3Rpb24g
PSAiaTJzMyI7CgkJCQludmlkaWEscHVsbCA9IDwweDA+OwoJCQkJbnZpZGlhLHRyaXN0YXRlID0g
PDB4MD47CgkJCQludmlkaWEsZW5hYmxlLWlucHV0ID0gPDB4MT47CgkJCX07CgoJCQlkbWljMl9j
bGtfcGUyIHsKCQkJCW52aWRpYSxwaW5zID0gImRtaWMyX2Nsa19wZTIiOwoJCQkJbnZpZGlhLGZ1
bmN0aW9uID0gImkyczMiOwoJCQkJbnZpZGlhLHB1bGwgPSA8MHgwPjsKCQkJCW52aWRpYSx0cmlz
dGF0ZSA9IDwweDA+OwoJCQkJbnZpZGlhLGVuYWJsZS1pbnB1dCA9IDwweDE+OwoJCQl9OwoKCQkJ
ZG1pYzJfZGF0X3BlMyB7CgkJCQludmlkaWEscGlucyA9ICJkbWljMl9kYXRfcGUzIjsKCQkJCW52
aWRpYSxmdW5jdGlvbiA9ICJpMnMzIjsKCQkJCW52aWRpYSxwdWxsID0gPDB4MD47CgkJCQludmlk
aWEsdHJpc3RhdGUgPSA8MHgwPjsKCQkJCW52aWRpYSxlbmFibGUtaW5wdXQgPSA8MHgxPjsKCQkJ
fTsKCgkJCXBlNyB7CgkJCQludmlkaWEscGlucyA9ICJwZTciOwoJCQkJbnZpZGlhLGZ1bmN0aW9u
ID0gInB3bTMiOwoJCQkJbnZpZGlhLHB1bGwgPSA8MHgwPjsKCQkJCW52aWRpYSx0cmlzdGF0ZSA9
IDwweDA+OwoJCQkJbnZpZGlhLGVuYWJsZS1pbnB1dCA9IDwweDA+OwoJCQl9OwoKCQkJZ2VuM19p
MmNfc2NsX3BmMCB7CgkJCQludmlkaWEscGlucyA9ICJnZW4zX2kyY19zY2xfcGYwIjsKCQkJCW52
aWRpYSxmdW5jdGlvbiA9ICJpMmMzIjsKCQkJCW52aWRpYSxwdWxsID0gPDB4MD47CgkJCQludmlk
aWEsdHJpc3RhdGUgPSA8MHgwPjsKCQkJCW52aWRpYSxlbmFibGUtaW5wdXQgPSA8MHgxPjsKCQkJ
CW52aWRpYSxpby1oaWdoLXZvbHRhZ2UgPSA8MHgwPjsKCQkJfTsKCgkJCWdlbjNfaTJjX3NkYV9w
ZjEgewoJCQkJbnZpZGlhLHBpbnMgPSAiZ2VuM19pMmNfc2RhX3BmMSI7CgkJCQludmlkaWEsZnVu
Y3Rpb24gPSAiaTJjMyI7CgkJCQludmlkaWEscHVsbCA9IDwweDA+OwoJCQkJbnZpZGlhLHRyaXN0
YXRlID0gPDB4MD47CgkJCQludmlkaWEsZW5hYmxlLWlucHV0ID0gPDB4MT47CgkJCQludmlkaWEs
aW8taGlnaC12b2x0YWdlID0gPDB4MD47CgkJCX07CgoJCQljYW1faTJjX3NjbF9wczIgewoJCQkJ
bnZpZGlhLHBpbnMgPSAiY2FtX2kyY19zY2xfcHMyIjsKCQkJCW52aWRpYSxmdW5jdGlvbiA9ICJp
MmN2aSI7CgkJCQludmlkaWEscHVsbCA9IDwweDA+OwoJCQkJbnZpZGlhLHRyaXN0YXRlID0gPDB4
MD47CgkJCQludmlkaWEsZW5hYmxlLWlucHV0ID0gPDB4MT47CgkJCQludmlkaWEsaW8taGlnaC12
b2x0YWdlID0gPDB4MT47CgkJCX07CgoJCQljYW1faTJjX3NkYV9wczMgewoJCQkJbnZpZGlhLHBp
bnMgPSAiY2FtX2kyY19zZGFfcHMzIjsKCQkJCW52aWRpYSxmdW5jdGlvbiA9ICJpMmN2aSI7CgkJ
CQludmlkaWEscHVsbCA9IDwweDA+OwoJCQkJbnZpZGlhLHRyaXN0YXRlID0gPDB4MD47CgkJCQlu
dmlkaWEsZW5hYmxlLWlucHV0ID0gPDB4MT47CgkJCQludmlkaWEsaW8taGlnaC12b2x0YWdlID0g
PDB4MT47CgkJCX07CgoJCQljYW0xX21jbGtfcHMwIHsKCQkJCW52aWRpYSxwaW5zID0gImNhbTFf
bWNsa19wczAiOwoJCQkJbnZpZGlhLGZ1bmN0aW9uID0gImV4dHBlcmlwaDMiOwoJCQkJbnZpZGlh
LHB1bGwgPSA8MHgwPjsKCQkJCW52aWRpYSx0cmlzdGF0ZSA9IDwweDA+OwoJCQkJbnZpZGlhLGVu
YWJsZS1pbnB1dCA9IDwweDA+OwoJCQl9OwoKCQkJY2FtMl9tY2xrX3BzMSB7CgkJCQludmlkaWEs
cGlucyA9ICJjYW0yX21jbGtfcHMxIjsKCQkJCW52aWRpYSxmdW5jdGlvbiA9ICJleHRwZXJpcGgz
IjsKCQkJCW52aWRpYSxwdWxsID0gPDB4MD47CgkJCQludmlkaWEsdHJpc3RhdGUgPSA8MHgwPjsK
CQkJCW52aWRpYSxlbmFibGUtaW5wdXQgPSA8MHgwPjsKCQkJfTsKCgkJCXBleF9sMF9jbGtyZXFf
bl9wYTEgewoJCQkJbnZpZGlhLHBpbnMgPSAicGV4X2wwX2Nsa3JlcV9uX3BhMSI7CgkJCQludmlk
aWEsZnVuY3Rpb24gPSAicGUwIjsKCQkJCW52aWRpYSxwdWxsID0gPDB4MD47CgkJCQludmlkaWEs
dHJpc3RhdGUgPSA8MHgwPjsKCQkJCW52aWRpYSxlbmFibGUtaW5wdXQgPSA8MHgxPjsKCQkJCW52
aWRpYSxpby1oaWdoLXZvbHRhZ2UgPSA8MHgxPjsKCQkJfTsKCgkJCXBleF9sMF9yc3Rfbl9wYTAg
ewoJCQkJbnZpZGlhLHBpbnMgPSAicGV4X2wwX3JzdF9uX3BhMCI7CgkJCQludmlkaWEsZnVuY3Rp
b24gPSAicGUwIjsKCQkJCW52aWRpYSxwdWxsID0gPDB4MD47CgkJCQludmlkaWEsdHJpc3RhdGUg
PSA8MHgwPjsKCQkJCW52aWRpYSxlbmFibGUtaW5wdXQgPSA8MHgwPjsKCQkJCW52aWRpYSxpby1o
aWdoLXZvbHRhZ2UgPSA8MHgxPjsKCQkJfTsKCgkJCXBleF9sMV9jbGtyZXFfbl9wYTQgewoJCQkJ
bnZpZGlhLHBpbnMgPSAicGV4X2wxX2Nsa3JlcV9uX3BhNCI7CgkJCQludmlkaWEsZnVuY3Rpb24g
PSAicGUxIjsKCQkJCW52aWRpYSxwdWxsID0gPDB4MD47CgkJCQludmlkaWEsdHJpc3RhdGUgPSA8
MHgwPjsKCQkJCW52aWRpYSxlbmFibGUtaW5wdXQgPSA8MHgxPjsKCQkJCW52aWRpYSxpby1oaWdo
LXZvbHRhZ2UgPSA8MHgxPjsKCQkJfTsKCgkJCXBleF9sMV9yc3Rfbl9wYTMgewoJCQkJbnZpZGlh
LHBpbnMgPSAicGV4X2wxX3JzdF9uX3BhMyI7CgkJCQludmlkaWEsZnVuY3Rpb24gPSAicGUxIjsK
CQkJCW52aWRpYSxwdWxsID0gPDB4MD47CgkJCQludmlkaWEsdHJpc3RhdGUgPSA8MHgwPjsKCQkJ
CW52aWRpYSxlbmFibGUtaW5wdXQgPSA8MHgwPjsKCQkJCW52aWRpYSxpby1oaWdoLXZvbHRhZ2Ug
PSA8MHgxPjsKCQkJfTsKCgkJCXBleF93YWtlX25fcGEyIHsKCQkJCW52aWRpYSxwaW5zID0gInBl
eF93YWtlX25fcGEyIjsKCQkJCW52aWRpYSxmdW5jdGlvbiA9ICJwZSI7CgkJCQludmlkaWEscHVs
bCA9IDwweDA+OwoJCQkJbnZpZGlhLHRyaXN0YXRlID0gPDB4MD47CgkJCQludmlkaWEsZW5hYmxl
LWlucHV0ID0gPDB4MT47CgkJCQludmlkaWEsaW8taGlnaC12b2x0YWdlID0gPDB4MT47CgkJCX07
CgoJCQlzZG1tYzFfY2xrX3BtMCB7CgkJCQludmlkaWEscGlucyA9ICJzZG1tYzFfY2xrX3BtMCI7
CgkJCQludmlkaWEsZnVuY3Rpb24gPSAic2RtbWMxIjsKCQkJCW52aWRpYSxwdWxsID0gPDB4MD47
CgkJCQludmlkaWEsdHJpc3RhdGUgPSA8MHgwPjsKCQkJCW52aWRpYSxlbmFibGUtaW5wdXQgPSA8
MHgxPjsKCQkJfTsKCgkJCXNkbW1jMV9jbWRfcG0xIHsKCQkJCW52aWRpYSxwaW5zID0gInNkbW1j
MV9jbWRfcG0xIjsKCQkJCW52aWRpYSxmdW5jdGlvbiA9ICJzZG1tYzEiOwoJCQkJbnZpZGlhLHB1
bGwgPSA8MHgyPjsKCQkJCW52aWRpYSx0cmlzdGF0ZSA9IDwweDA+OwoJCQkJbnZpZGlhLGVuYWJs
ZS1pbnB1dCA9IDwweDE+OwoJCQl9OwoKCQkJc2RtbWMxX2RhdDBfcG01IHsKCQkJCW52aWRpYSxw
aW5zID0gInNkbW1jMV9kYXQwX3BtNSI7CgkJCQludmlkaWEsZnVuY3Rpb24gPSAic2RtbWMxIjsK
CQkJCW52aWRpYSxwdWxsID0gPDB4Mj47CgkJCQludmlkaWEsdHJpc3RhdGUgPSA8MHgwPjsKCQkJ
CW52aWRpYSxlbmFibGUtaW5wdXQgPSA8MHgxPjsKCQkJfTsKCgkJCXNkbW1jMV9kYXQxX3BtNCB7
CgkJCQludmlkaWEscGlucyA9ICJzZG1tYzFfZGF0MV9wbTQiOwoJCQkJbnZpZGlhLGZ1bmN0aW9u
ID0gInNkbW1jMSI7CgkJCQludmlkaWEscHVsbCA9IDwweDI+OwoJCQkJbnZpZGlhLHRyaXN0YXRl
ID0gPDB4MD47CgkJCQludmlkaWEsZW5hYmxlLWlucHV0ID0gPDB4MT47CgkJCX07CgoJCQlzZG1t
YzFfZGF0Ml9wbTMgewoJCQkJbnZpZGlhLHBpbnMgPSAic2RtbWMxX2RhdDJfcG0zIjsKCQkJCW52
aWRpYSxmdW5jdGlvbiA9ICJzZG1tYzEiOwoJCQkJbnZpZGlhLHB1bGwgPSA8MHgyPjsKCQkJCW52
aWRpYSx0cmlzdGF0ZSA9IDwweDA+OwoJCQkJbnZpZGlhLGVuYWJsZS1pbnB1dCA9IDwweDE+OwoJ
CQl9OwoKCQkJc2RtbWMxX2RhdDNfcG0yIHsKCQkJCW52aWRpYSxwaW5zID0gInNkbW1jMV9kYXQz
X3BtMiI7CgkJCQludmlkaWEsZnVuY3Rpb24gPSAic2RtbWMxIjsKCQkJCW52aWRpYSxwdWxsID0g
PDB4Mj47CgkJCQludmlkaWEsdHJpc3RhdGUgPSA8MHgwPjsKCQkJCW52aWRpYSxlbmFibGUtaW5w
dXQgPSA8MHgxPjsKCQkJfTsKCgkJCXNkbW1jM19jbGtfcHAwIHsKCQkJCW52aWRpYSxwaW5zID0g
InNkbW1jM19jbGtfcHAwIjsKCQkJCW52aWRpYSxmdW5jdGlvbiA9ICJzZG1tYzMiOwoJCQkJbnZp
ZGlhLHB1bGwgPSA8MHgwPjsKCQkJCW52aWRpYSx0cmlzdGF0ZSA9IDwweDA+OwoJCQkJbnZpZGlh
LGVuYWJsZS1pbnB1dCA9IDwweDE+OwoJCQl9OwoKCQkJc2RtbWMzX2NtZF9wcDEgewoJCQkJbnZp
ZGlhLHBpbnMgPSAic2RtbWMzX2NtZF9wcDEiOwoJCQkJbnZpZGlhLGZ1bmN0aW9uID0gInNkbW1j
MyI7CgkJCQludmlkaWEscHVsbCA9IDwweDI+OwoJCQkJbnZpZGlhLHRyaXN0YXRlID0gPDB4MD47
CgkJCQludmlkaWEsZW5hYmxlLWlucHV0ID0gPDB4MT47CgkJCX07CgoJCQlzZG1tYzNfZGF0MF9w
cDUgewoJCQkJbnZpZGlhLHBpbnMgPSAic2RtbWMzX2RhdDBfcHA1IjsKCQkJCW52aWRpYSxmdW5j
dGlvbiA9ICJzZG1tYzMiOwoJCQkJbnZpZGlhLHB1bGwgPSA8MHgyPjsKCQkJCW52aWRpYSx0cmlz
dGF0ZSA9IDwweDA+OwoJCQkJbnZpZGlhLGVuYWJsZS1pbnB1dCA9IDwweDE+OwoJCQl9OwoKCQkJ
c2RtbWMzX2RhdDFfcHA0IHsKCQkJCW52aWRpYSxwaW5zID0gInNkbW1jM19kYXQxX3BwNCI7CgkJ
CQludmlkaWEsZnVuY3Rpb24gPSAic2RtbWMzIjsKCQkJCW52aWRpYSxwdWxsID0gPDB4Mj47CgkJ
CQludmlkaWEsdHJpc3RhdGUgPSA8MHgwPjsKCQkJCW52aWRpYSxlbmFibGUtaW5wdXQgPSA8MHgx
PjsKCQkJfTsKCgkJCXNkbW1jM19kYXQyX3BwMyB7CgkJCQludmlkaWEscGlucyA9ICJzZG1tYzNf
ZGF0Ml9wcDMiOwoJCQkJbnZpZGlhLGZ1bmN0aW9uID0gInNkbW1jMyI7CgkJCQludmlkaWEscHVs
bCA9IDwweDI+OwoJCQkJbnZpZGlhLHRyaXN0YXRlID0gPDB4MD47CgkJCQludmlkaWEsZW5hYmxl
LWlucHV0ID0gPDB4MT47CgkJCX07CgoJCQlzZG1tYzNfZGF0M19wcDIgewoJCQkJbnZpZGlhLHBp
bnMgPSAic2RtbWMzX2RhdDNfcHAyIjsKCQkJCW52aWRpYSxmdW5jdGlvbiA9ICJzZG1tYzMiOwoJ
CQkJbnZpZGlhLHB1bGwgPSA8MHgyPjsKCQkJCW52aWRpYSx0cmlzdGF0ZSA9IDwweDA+OwoJCQkJ
bnZpZGlhLGVuYWJsZS1pbnB1dCA9IDwweDE+OwoJCQl9OwoKCQkJc2h1dGRvd24gewoJCQkJbnZp
ZGlhLHBpbnMgPSAic2h1dGRvd24iOwoJCQkJbnZpZGlhLGZ1bmN0aW9uID0gInNodXRkb3duIjsK
CQkJCW52aWRpYSxwdWxsID0gPDB4MD47CgkJCQludmlkaWEsdHJpc3RhdGUgPSA8MHgwPjsKCQkJ
CW52aWRpYSxlbmFibGUtaW5wdXQgPSA8MHgwPjsKCQkJfTsKCgkJCWxjZF9ncGlvMl9wdjQgewoJ
CQkJbnZpZGlhLHBpbnMgPSAibGNkX2dwaW8yX3B2NCI7CgkJCQludmlkaWEsZnVuY3Rpb24gPSAi
cHdtMSI7CgkJCQludmlkaWEscHVsbCA9IDwweDA+OwoJCQkJbnZpZGlhLHRyaXN0YXRlID0gPDB4
MD47CgkJCQludmlkaWEsZW5hYmxlLWlucHV0ID0gPDB4MD47CgkJCX07CgoJCQlwd3JfaTJjX3Nj
bF9weTMgewoJCQkJbnZpZGlhLHBpbnMgPSAicHdyX2kyY19zY2xfcHkzIjsKCQkJCW52aWRpYSxm
dW5jdGlvbiA9ICJpMmNwbXUiOwoJCQkJbnZpZGlhLHB1bGwgPSA8MHgwPjsKCQkJCW52aWRpYSx0
cmlzdGF0ZSA9IDwweDA+OwoJCQkJbnZpZGlhLGVuYWJsZS1pbnB1dCA9IDwweDE+OwoJCQkJbnZp
ZGlhLGlvLWhpZ2gtdm9sdGFnZSA9IDwweDA+OwoJCQl9OwoKCQkJcHdyX2kyY19zZGFfcHk0IHsK
CQkJCW52aWRpYSxwaW5zID0gInB3cl9pMmNfc2RhX3B5NCI7CgkJCQludmlkaWEsZnVuY3Rpb24g
PSAiaTJjcG11IjsKCQkJCW52aWRpYSxwdWxsID0gPDB4MD47CgkJCQludmlkaWEsdHJpc3RhdGUg
PSA8MHgwPjsKCQkJCW52aWRpYSxlbmFibGUtaW5wdXQgPSA8MHgxPjsKCQkJCW52aWRpYSxpby1o
aWdoLXZvbHRhZ2UgPSA8MHgwPjsKCQkJfTsKCgkJCWNsa18zMmtfaW4gewoJCQkJbnZpZGlhLHBp
bnMgPSAiY2xrXzMya19pbiI7CgkJCQludmlkaWEsZnVuY3Rpb24gPSAiY2xrIjsKCQkJCW52aWRp
YSxwdWxsID0gPDB4MD47CgkJCQludmlkaWEsdHJpc3RhdGUgPSA8MHgwPjsKCQkJCW52aWRpYSxl
bmFibGUtaW5wdXQgPSA8MHgxPjsKCQkJfTsKCgkJCWNsa18zMmtfb3V0X3B5NSB7CgkJCQludmlk
aWEscGlucyA9ICJjbGtfMzJrX291dF9weTUiOwoJCQkJbnZpZGlhLGZ1bmN0aW9uID0gInNvYyI7
CgkJCQludmlkaWEscHVsbCA9IDwweDI+OwoJCQkJbnZpZGlhLHRyaXN0YXRlID0gPDB4MD47CgkJ
CQludmlkaWEsZW5hYmxlLWlucHV0ID0gPDB4MT47CgkJCX07CgoJCQlwejEgewoJCQkJbnZpZGlh
LHBpbnMgPSAicHoxIjsKCQkJCW52aWRpYSxmdW5jdGlvbiA9ICJzZG1tYzEiOwoJCQkJbnZpZGlh
LHB1bGwgPSA8MHgyPjsKCQkJCW52aWRpYSx0cmlzdGF0ZSA9IDwweDA+OwoJCQkJbnZpZGlhLGVu
YWJsZS1pbnB1dCA9IDwweDE+OwoJCQl9OwoKCQkJcHo1IHsKCQkJCW52aWRpYSxwaW5zID0gInB6
NSI7CgkJCQludmlkaWEsZnVuY3Rpb24gPSAic29jIjsKCQkJCW52aWRpYSxwdWxsID0gPDB4Mj47
CgkJCQludmlkaWEsdHJpc3RhdGUgPSA8MHgwPjsKCQkJCW52aWRpYSxlbmFibGUtaW5wdXQgPSA8
MHgxPjsKCQkJfTsKCgkJCWNvcmVfcHdyX3JlcSB7CgkJCQludmlkaWEscGlucyA9ICJjb3JlX3B3
cl9yZXEiOwoJCQkJbnZpZGlhLGZ1bmN0aW9uID0gImNvcmUiOwoJCQkJbnZpZGlhLHB1bGwgPSA8
MHgwPjsKCQkJCW52aWRpYSx0cmlzdGF0ZSA9IDwweDA+OwoJCQkJbnZpZGlhLGVuYWJsZS1pbnB1
dCA9IDwweDA+OwoJCQl9OwoKCQkJcHdyX2ludF9uIHsKCQkJCW52aWRpYSxwaW5zID0gInB3cl9p
bnRfbiI7CgkJCQludmlkaWEsZnVuY3Rpb24gPSAicG1pIjsKCQkJCW52aWRpYSxwdWxsID0gPDB4
Mj47CgkJCQludmlkaWEsdHJpc3RhdGUgPSA8MHgwPjsKCQkJCW52aWRpYSxlbmFibGUtaW5wdXQg
PSA8MHgxPjsKCQkJfTsKCgkJCWdlbjFfaTJjX3NjbF9wajEgewoJCQkJbnZpZGlhLHBpbnMgPSAi
Z2VuMV9pMmNfc2NsX3BqMSI7CgkJCQludmlkaWEsZnVuY3Rpb24gPSAiaTJjMSI7CgkJCQludmlk
aWEscHVsbCA9IDwweDA+OwoJCQkJbnZpZGlhLHRyaXN0YXRlID0gPDB4MD47CgkJCQludmlkaWEs
ZW5hYmxlLWlucHV0ID0gPDB4MT47CgkJCQludmlkaWEsaW8taGlnaC12b2x0YWdlID0gPDB4MT47
CgkJCX07CgoJCQlnZW4xX2kyY19zZGFfcGowIHsKCQkJCW52aWRpYSxwaW5zID0gImdlbjFfaTJj
X3NkYV9wajAiOwoJCQkJbnZpZGlhLGZ1bmN0aW9uID0gImkyYzEiOwoJCQkJbnZpZGlhLHB1bGwg
PSA8MHgwPjsKCQkJCW52aWRpYSx0cmlzdGF0ZSA9IDwweDA+OwoJCQkJbnZpZGlhLGVuYWJsZS1p
bnB1dCA9IDwweDE+OwoJCQkJbnZpZGlhLGlvLWhpZ2gtdm9sdGFnZSA9IDwweDE+OwoJCQl9OwoK
CQkJZ2VuMl9pMmNfc2NsX3BqMiB7CgkJCQludmlkaWEscGlucyA9ICJnZW4yX2kyY19zY2xfcGoy
IjsKCQkJCW52aWRpYSxmdW5jdGlvbiA9ICJpMmMyIjsKCQkJCW52aWRpYSxwdWxsID0gPDB4MD47
CgkJCQludmlkaWEsdHJpc3RhdGUgPSA8MHgwPjsKCQkJCW52aWRpYSxlbmFibGUtaW5wdXQgPSA8
MHgxPjsKCQkJCW52aWRpYSxpby1oaWdoLXZvbHRhZ2UgPSA8MHgxPjsKCQkJfTsKCgkJCWdlbjJf
aTJjX3NkYV9wajMgewoJCQkJbnZpZGlhLHBpbnMgPSAiZ2VuMl9pMmNfc2RhX3BqMyI7CgkJCQlu
dmlkaWEsZnVuY3Rpb24gPSAiaTJjMiI7CgkJCQludmlkaWEscHVsbCA9IDwweDA+OwoJCQkJbnZp
ZGlhLHRyaXN0YXRlID0gPDB4MD47CgkJCQludmlkaWEsZW5hYmxlLWlucHV0ID0gPDB4MT47CgkJ
CQludmlkaWEsaW8taGlnaC12b2x0YWdlID0gPDB4MT47CgkJCX07CgoJCQl1YXJ0Ml90eF9wZzAg
ewoJCQkJbnZpZGlhLHBpbnMgPSAidWFydDJfdHhfcGcwIjsKCQkJCW52aWRpYSxmdW5jdGlvbiA9
ICJ1YXJ0YiI7CgkJCQludmlkaWEscHVsbCA9IDwweDA+OwoJCQkJbnZpZGlhLHRyaXN0YXRlID0g
PDB4MD47CgkJCQludmlkaWEsZW5hYmxlLWlucHV0ID0gPDB4MD47CgkJCX07CgoJCQl1YXJ0Ml9y
eF9wZzEgewoJCQkJbnZpZGlhLHBpbnMgPSAidWFydDJfcnhfcGcxIjsKCQkJCW52aWRpYSxmdW5j
dGlvbiA9ICJ1YXJ0YiI7CgkJCQludmlkaWEscHVsbCA9IDwweDE+OwoJCQkJbnZpZGlhLHRyaXN0
YXRlID0gPDB4MD47CgkJCQludmlkaWEsZW5hYmxlLWlucHV0ID0gPDB4MT47CgkJCX07CgoJCQl1
YXJ0MV90eF9wdTAgewoJCQkJbnZpZGlhLHBpbnMgPSAidWFydDFfdHhfcHUwIjsKCQkJCW52aWRp
YSxmdW5jdGlvbiA9ICJ1YXJ0YSI7CgkJCQludmlkaWEscHVsbCA9IDwweDA+OwoJCQkJbnZpZGlh
LHRyaXN0YXRlID0gPDB4MD47CgkJCQludmlkaWEsZW5hYmxlLWlucHV0ID0gPDB4MD47CgkJCX07
CgoJCQl1YXJ0MV9yeF9wdTEgewoJCQkJbnZpZGlhLHBpbnMgPSAidWFydDFfcnhfcHUxIjsKCQkJ
CW52aWRpYSxmdW5jdGlvbiA9ICJ1YXJ0YSI7CgkJCQludmlkaWEscHVsbCA9IDwweDA+OwoJCQkJ
bnZpZGlhLHRyaXN0YXRlID0gPDB4MD47CgkJCQludmlkaWEsZW5hYmxlLWlucHV0ID0gPDB4MT47
CgkJCX07CgoJCQlqdGFnX3J0Y2sgewoJCQkJbnZpZGlhLHBpbnMgPSAianRhZ19ydGNrIjsKCQkJ
CW52aWRpYSxmdW5jdGlvbiA9ICJqdGFnIjsKCQkJCW52aWRpYSxwdWxsID0gPDB4MD47CgkJCQlu
dmlkaWEsdHJpc3RhdGUgPSA8MHgwPjsKCQkJCW52aWRpYSxlbmFibGUtaW5wdXQgPSA8MHgwPjsK
CQkJfTsKCgkJCXVhcnQzX3R4X3BkMSB7CgkJCQludmlkaWEscGlucyA9ICJ1YXJ0M190eF9wZDEi
OwoJCQkJbnZpZGlhLGZ1bmN0aW9uID0gInVhcnRjIjsKCQkJCW52aWRpYSxwdWxsID0gPDB4MD47
CgkJCQludmlkaWEsdHJpc3RhdGUgPSA8MHgwPjsKCQkJCW52aWRpYSxlbmFibGUtaW5wdXQgPSA8
MHgwPjsKCQkJfTsKCgkJCXVhcnQzX3J4X3BkMiB7CgkJCQludmlkaWEscGlucyA9ICJ1YXJ0M19y
eF9wZDIiOwoJCQkJbnZpZGlhLGZ1bmN0aW9uID0gInVhcnRjIjsKCQkJCW52aWRpYSxwdWxsID0g
PDB4MD47CgkJCQludmlkaWEsdHJpc3RhdGUgPSA8MHgwPjsKCQkJCW52aWRpYSxlbmFibGUtaW5w
dXQgPSA8MHgxPjsKCQkJfTsKCgkJCXVhcnQzX3J0c19wZDMgewoJCQkJbnZpZGlhLHBpbnMgPSAi
dWFydDNfcnRzX3BkMyI7CgkJCQludmlkaWEsZnVuY3Rpb24gPSAidWFydGMiOwoJCQkJbnZpZGlh
LHB1bGwgPSA8MHgyPjsKCQkJCW52aWRpYSx0cmlzdGF0ZSA9IDwweDA+OwoJCQkJbnZpZGlhLGVu
YWJsZS1pbnB1dCA9IDwweDA+OwoJCQl9OwoKCQkJdWFydDNfY3RzX3BkNCB7CgkJCQludmlkaWEs
cGlucyA9ICJ1YXJ0M19jdHNfcGQ0IjsKCQkJCW52aWRpYSxmdW5jdGlvbiA9ICJ1YXJ0YyI7CgkJ
CQludmlkaWEscHVsbCA9IDwweDI+OwoJCQkJbnZpZGlhLHRyaXN0YXRlID0gPDB4MD47CgkJCQlu
dmlkaWEsZW5hYmxlLWlucHV0ID0gPDB4MT47CgkJCX07CgoJCQl1YXJ0NF90eF9waTQgewoJCQkJ
bnZpZGlhLHBpbnMgPSAidWFydDRfdHhfcGk0IjsKCQkJCW52aWRpYSxmdW5jdGlvbiA9ICJ1YXJ0
ZCI7CgkJCQludmlkaWEscHVsbCA9IDwweDA+OwoJCQkJbnZpZGlhLHRyaXN0YXRlID0gPDB4MD47
CgkJCQludmlkaWEsZW5hYmxlLWlucHV0ID0gPDB4MD47CgkJCX07CgoJCQl1YXJ0NF9yeF9waTUg
ewoJCQkJbnZpZGlhLHBpbnMgPSAidWFydDRfcnhfcGk1IjsKCQkJCW52aWRpYSxmdW5jdGlvbiA9
ICJ1YXJ0ZCI7CgkJCQludmlkaWEscHVsbCA9IDwweDA+OwoJCQkJbnZpZGlhLHRyaXN0YXRlID0g
PDB4MD47CgkJCQludmlkaWEsZW5hYmxlLWlucHV0ID0gPDB4MT47CgkJCX07CgoJCQl1YXJ0NF9y
dHNfcGk2IHsKCQkJCW52aWRpYSxwaW5zID0gInVhcnQ0X3J0c19waTYiOwoJCQkJbnZpZGlhLGZ1
bmN0aW9uID0gInVhcnRkIjsKCQkJCW52aWRpYSxwdWxsID0gPDB4MD47CgkJCQludmlkaWEsdHJp
c3RhdGUgPSA8MHgwPjsKCQkJCW52aWRpYSxlbmFibGUtaW5wdXQgPSA8MHgwPjsKCQkJfTsKCgkJ
CXVhcnQ0X2N0c19waTcgewoJCQkJbnZpZGlhLHBpbnMgPSAidWFydDRfY3RzX3BpNyI7CgkJCQlu
dmlkaWEsZnVuY3Rpb24gPSAidWFydGQiOwoJCQkJbnZpZGlhLHB1bGwgPSA8MHgyPjsKCQkJCW52
aWRpYSx0cmlzdGF0ZSA9IDwweDA+OwoJCQkJbnZpZGlhLGVuYWJsZS1pbnB1dCA9IDwweDE+OwoJ
CQl9OwoKCQkJcXNwaV9pbzBfcGVlMiB7CgkJCQludmlkaWEscGlucyA9ICJxc3BpX2lvMF9wZWUy
IjsKCQkJCW52aWRpYSxmdW5jdGlvbiA9ICJxc3BpIjsKCQkJCW52aWRpYSxwdWxsID0gPDB4MD47
CgkJCQludmlkaWEsdHJpc3RhdGUgPSA8MHgwPjsKCQkJCW52aWRpYSxlbmFibGUtaW5wdXQgPSA8
MHgxPjsKCQkJfTsKCgkJCXFzcGlfaW8xX3BlZTMgewoJCQkJbnZpZGlhLHBpbnMgPSAicXNwaV9p
bzFfcGVlMyI7CgkJCQludmlkaWEsZnVuY3Rpb24gPSAicXNwaSI7CgkJCQludmlkaWEscHVsbCA9
IDwweDA+OwoJCQkJbnZpZGlhLHRyaXN0YXRlID0gPDB4MD47CgkJCQludmlkaWEsZW5hYmxlLWlu
cHV0ID0gPDB4MT47CgkJCX07CgoJCQlxc3BpX3Nja19wZWUwIHsKCQkJCW52aWRpYSxwaW5zID0g
InFzcGlfc2NrX3BlZTAiOwoJCQkJbnZpZGlhLGZ1bmN0aW9uID0gInFzcGkiOwoJCQkJbnZpZGlh
LHB1bGwgPSA8MHgwPjsKCQkJCW52aWRpYSx0cmlzdGF0ZSA9IDwweDA+OwoJCQkJbnZpZGlhLGVu
YWJsZS1pbnB1dCA9IDwweDE+OwoJCQl9OwoKCQkJcXNwaV9jc19uX3BlZTEgewoJCQkJbnZpZGlh
LHBpbnMgPSAicXNwaV9jc19uX3BlZTEiOwoJCQkJbnZpZGlhLGZ1bmN0aW9uID0gInFzcGkiOwoJ
CQkJbnZpZGlhLHB1bGwgPSA8MHgwPjsKCQkJCW52aWRpYSx0cmlzdGF0ZSA9IDwweDA+OwoJCQkJ
bnZpZGlhLGVuYWJsZS1pbnB1dCA9IDwweDA+OwoJCQl9OwoKCQkJcXNwaV9pbzJfcGVlNCB7CgkJ
CQludmlkaWEscGlucyA9ICJxc3BpX2lvMl9wZWU0IjsKCQkJCW52aWRpYSxmdW5jdGlvbiA9ICJx
c3BpIjsKCQkJCW52aWRpYSxwdWxsID0gPDB4MD47CgkJCQludmlkaWEsdHJpc3RhdGUgPSA8MHgw
PjsKCQkJCW52aWRpYSxlbmFibGUtaW5wdXQgPSA8MHgxPjsKCQkJfTsKCgkJCXFzcGlfaW8zX3Bl
ZTUgewoJCQkJbnZpZGlhLHBpbnMgPSAicXNwaV9pbzNfcGVlNSI7CgkJCQludmlkaWEsZnVuY3Rp
b24gPSAicXNwaSI7CgkJCQludmlkaWEscHVsbCA9IDwweDA+OwoJCQkJbnZpZGlhLHRyaXN0YXRl
ID0gPDB4MD47CgkJCQludmlkaWEsZW5hYmxlLWlucHV0ID0gPDB4MT47CgkJCX07CgoJCQlkYXAy
X2Rpbl9wYWEyIHsKCQkJCW52aWRpYSxwaW5zID0gImRhcDJfZGluX3BhYTIiOwoJCQkJbnZpZGlh
LGZ1bmN0aW9uID0gImkyczIiOwoJCQkJbnZpZGlhLHB1bGwgPSA8MHgwPjsKCQkJCW52aWRpYSx0
cmlzdGF0ZSA9IDwweDA+OwoJCQkJbnZpZGlhLGVuYWJsZS1pbnB1dCA9IDwweDE+OwoJCQl9OwoK
CQkJZGFwMl9kb3V0X3BhYTMgewoJCQkJbnZpZGlhLHBpbnMgPSAiZGFwMl9kb3V0X3BhYTMiOwoJ
CQkJbnZpZGlhLGZ1bmN0aW9uID0gImkyczIiOwoJCQkJbnZpZGlhLHB1bGwgPSA8MHgwPjsKCQkJ
CW52aWRpYSx0cmlzdGF0ZSA9IDwweDA+OwoJCQkJbnZpZGlhLGVuYWJsZS1pbnB1dCA9IDwweDE+
OwoJCQl9OwoKCQkJZGFwMl9mc19wYWEwIHsKCQkJCW52aWRpYSxwaW5zID0gImRhcDJfZnNfcGFh
MCI7CgkJCQludmlkaWEsZnVuY3Rpb24gPSAiaTJzMiI7CgkJCQludmlkaWEscHVsbCA9IDwweDA+
OwoJCQkJbnZpZGlhLHRyaXN0YXRlID0gPDB4MD47CgkJCQludmlkaWEsZW5hYmxlLWlucHV0ID0g
PDB4MT47CgkJCX07CgoJCQlkYXAyX3NjbGtfcGFhMSB7CgkJCQludmlkaWEscGlucyA9ICJkYXAy
X3NjbGtfcGFhMSI7CgkJCQludmlkaWEsZnVuY3Rpb24gPSAiaTJzMiI7CgkJCQludmlkaWEscHVs
bCA9IDwweDA+OwoJCQkJbnZpZGlhLHRyaXN0YXRlID0gPDB4MD47CgkJCQludmlkaWEsZW5hYmxl
LWlucHV0ID0gPDB4MT47CgkJCX07CgoJCQlkcF9ocGQwX3BjYzYgewoJCQkJbnZpZGlhLHBpbnMg
PSAiZHBfaHBkMF9wY2M2IjsKCQkJCW52aWRpYSxmdW5jdGlvbiA9ICJkcCI7CgkJCQludmlkaWEs
cHVsbCA9IDwweDA+OwoJCQkJbnZpZGlhLHRyaXN0YXRlID0gPDB4MD47CgkJCQludmlkaWEsZW5h
YmxlLWlucHV0ID0gPDB4MT47CgkJCX07CgoJCQloZG1pX2ludF9kcF9ocGRfcGNjMSB7CgkJCQlu
dmlkaWEscGlucyA9ICJoZG1pX2ludF9kcF9ocGRfcGNjMSI7CgkJCQludmlkaWEsZnVuY3Rpb24g
PSAiZHAiOwoJCQkJbnZpZGlhLHB1bGwgPSA8MHgwPjsKCQkJCW52aWRpYSx0cmlzdGF0ZSA9IDww
eDA+OwoJCQkJbnZpZGlhLGVuYWJsZS1pbnB1dCA9IDwweDE+OwoJCQkJbnZpZGlhLGlvLWhpZ2gt
dm9sdGFnZSA9IDwweDA+OwoJCQl9OwoKCQkJaGRtaV9jZWNfcGNjMCB7CgkJCQludmlkaWEscGlu
cyA9ICJoZG1pX2NlY19wY2MwIjsKCQkJCW52aWRpYSxmdW5jdGlvbiA9ICJjZWMiOwoJCQkJbnZp
ZGlhLHB1bGwgPSA8MHgwPjsKCQkJCW52aWRpYSx0cmlzdGF0ZSA9IDwweDA+OwoJCQkJbnZpZGlh
LGVuYWJsZS1pbnB1dCA9IDwweDE+OwoJCQkJbnZpZGlhLGlvLWhpZ2gtdm9sdGFnZSA9IDwweDE+
OwoJCQl9OwoKCQkJY2FtMV9wd2RuX3BzNyB7CgkJCQludmlkaWEscGlucyA9ICJjYW0xX3B3ZG5f
cHM3IjsKCQkJCW52aWRpYSxmdW5jdGlvbiA9ICJyc3ZkMSI7CgkJCQludmlkaWEscHVsbCA9IDww
eDA+OwoJCQkJbnZpZGlhLHRyaXN0YXRlID0gPDB4MD47CgkJCQludmlkaWEsZW5hYmxlLWlucHV0
ID0gPDB4MD47CgkJCX07CgoJCQljYW0yX3B3ZG5fcHQwIHsKCQkJCW52aWRpYSxwaW5zID0gImNh
bTJfcHdkbl9wdDAiOwoJCQkJbnZpZGlhLGZ1bmN0aW9uID0gInJzdmQxIjsKCQkJCW52aWRpYSxw
dWxsID0gPDB4MD47CgkJCQludmlkaWEsdHJpc3RhdGUgPSA8MHgwPjsKCQkJCW52aWRpYSxlbmFi
bGUtaW5wdXQgPSA8MHgwPjsKCQkJfTsKCgkJCXNhdGFfbGVkX2FjdGl2ZV9wYTUgewoJCQkJbnZp
ZGlhLHBpbnMgPSAic2F0YV9sZWRfYWN0aXZlX3BhNSI7CgkJCQludmlkaWEsZnVuY3Rpb24gPSAi
cnN2ZDEiOwoJCQkJbnZpZGlhLHB1bGwgPSA8MHgyPjsKCQkJCW52aWRpYSx0cmlzdGF0ZSA9IDww
eDA+OwoJCQkJbnZpZGlhLGVuYWJsZS1pbnB1dCA9IDwweDE+OwoJCQl9OwoKCQkJcGE2IHsKCQkJ
CW52aWRpYSxwaW5zID0gInBhNiI7CgkJCQludmlkaWEsZnVuY3Rpb24gPSAicnN2ZDEiOwoJCQkJ
bnZpZGlhLHB1bGwgPSA8MHgwPjsKCQkJCW52aWRpYSx0cmlzdGF0ZSA9IDwweDA+OwoJCQkJbnZp
ZGlhLGVuYWJsZS1pbnB1dCA9IDwweDA+OwoJCQl9OwoKCQkJYWxzX3Byb3hfaW50X3B4MyB7CgkJ
CQludmlkaWEscGlucyA9ICJhbHNfcHJveF9pbnRfcHgzIjsKCQkJCW52aWRpYSxmdW5jdGlvbiA9
ICJyc3ZkMCI7CgkJCQludmlkaWEscHVsbCA9IDwweDA+OwoJCQkJbnZpZGlhLHRyaXN0YXRlID0g
PDB4MD47CgkJCQludmlkaWEsZW5hYmxlLWlucHV0ID0gPDB4MD47CgkJCX07CgoJCQl0ZW1wX2Fs
ZXJ0X3B4NCB7CgkJCQludmlkaWEscGlucyA9ICJ0ZW1wX2FsZXJ0X3B4NCI7CgkJCQludmlkaWEs
ZnVuY3Rpb24gPSAicnN2ZDAiOwoJCQkJbnZpZGlhLHB1bGwgPSA8MHgyPjsKCQkJCW52aWRpYSx0
cmlzdGF0ZSA9IDwweDA+OwoJCQkJbnZpZGlhLGVuYWJsZS1pbnB1dCA9IDwweDE+OwoJCQl9OwoK
CQkJYnV0dG9uX3Bvd2VyX29uX3B4NSB7CgkJCQludmlkaWEscGlucyA9ICJidXR0b25fcG93ZXJf
b25fcHg1IjsKCQkJCW52aWRpYSxmdW5jdGlvbiA9ICJyc3ZkMCI7CgkJCQludmlkaWEscHVsbCA9
IDwweDI+OwoJCQkJbnZpZGlhLHRyaXN0YXRlID0gPDB4MD47CgkJCQludmlkaWEsZW5hYmxlLWlu
cHV0ID0gPDB4MT47CgkJCX07CgoJCQlidXR0b25fdm9sX3VwX3B4NiB7CgkJCQludmlkaWEscGlu
cyA9ICJidXR0b25fdm9sX3VwX3B4NiI7CgkJCQludmlkaWEsZnVuY3Rpb24gPSAicnN2ZDAiOwoJ
CQkJbnZpZGlhLHB1bGwgPSA8MHgyPjsKCQkJCW52aWRpYSx0cmlzdGF0ZSA9IDwweDA+OwoJCQkJ
bnZpZGlhLGVuYWJsZS1pbnB1dCA9IDwweDE+OwoJCQl9OwoKCQkJYnV0dG9uX2hvbWVfcHkxIHsK
CQkJCW52aWRpYSxwaW5zID0gImJ1dHRvbl9ob21lX3B5MSI7CgkJCQludmlkaWEsZnVuY3Rpb24g
PSAicnN2ZDAiOwoJCQkJbnZpZGlhLHB1bGwgPSA8MHgyPjsKCQkJCW52aWRpYSx0cmlzdGF0ZSA9
IDwweDA+OwoJCQkJbnZpZGlhLGVuYWJsZS1pbnB1dCA9IDwweDE+OwoJCQl9OwoKCQkJbGNkX2Js
X2VuX3B2MSB7CgkJCQludmlkaWEscGlucyA9ICJsY2RfYmxfZW5fcHYxIjsKCQkJCW52aWRpYSxm
dW5jdGlvbiA9ICJyc3ZkMCI7CgkJCQludmlkaWEscHVsbCA9IDwweDA+OwoJCQkJbnZpZGlhLHRy
aXN0YXRlID0gPDB4MD47CgkJCQludmlkaWEsZW5hYmxlLWlucHV0ID0gPDB4MT47CgkJCX07CgoJ
CQlwejIgewoJCQkJbnZpZGlhLHBpbnMgPSAicHoyIjsKCQkJCW52aWRpYSxmdW5jdGlvbiA9ICJy
c3ZkMiI7CgkJCQludmlkaWEscHVsbCA9IDwweDI+OwoJCQkJbnZpZGlhLHRyaXN0YXRlID0gPDB4
MD47CgkJCQludmlkaWEsZW5hYmxlLWlucHV0ID0gPDB4MT47CgkJCX07CgoJCQlwejMgewoJCQkJ
bnZpZGlhLHBpbnMgPSAicHozIjsKCQkJCW52aWRpYSxmdW5jdGlvbiA9ICJyc3ZkMSI7CgkJCQlu
dmlkaWEscHVsbCA9IDwweDA+OwoJCQkJbnZpZGlhLHRyaXN0YXRlID0gPDB4MD47CgkJCQludmlk
aWEsZW5hYmxlLWlucHV0ID0gPDB4MD47CgkJCX07CgoJCQl3aWZpX2VuX3BoMCB7CgkJCQludmlk
aWEscGlucyA9ICJ3aWZpX2VuX3BoMCI7CgkJCQludmlkaWEsZnVuY3Rpb24gPSAicnN2ZDAiOwoJ
CQkJbnZpZGlhLHB1bGwgPSA8MHgwPjsKCQkJCW52aWRpYSx0cmlzdGF0ZSA9IDwweDA+OwoJCQkJ
bnZpZGlhLGVuYWJsZS1pbnB1dCA9IDwweDA+OwoJCQl9OwoKCQkJd2lmaV93YWtlX2FwX3BoMiB7
CgkJCQludmlkaWEscGlucyA9ICJ3aWZpX3dha2VfYXBfcGgyIjsKCQkJCW52aWRpYSxmdW5jdGlv
biA9ICJyc3ZkMCI7CgkJCQludmlkaWEscHVsbCA9IDwweDE+OwoJCQkJbnZpZGlhLHRyaXN0YXRl
ID0gPDB4MD47CgkJCQludmlkaWEsZW5hYmxlLWlucHV0ID0gPDB4MT47CgkJCX07CgoJCQlhcF93
YWtlX2J0X3BoMyB7CgkJCQludmlkaWEscGlucyA9ICJhcF93YWtlX2J0X3BoMyI7CgkJCQludmlk
aWEsZnVuY3Rpb24gPSAicnN2ZDAiOwoJCQkJbnZpZGlhLHB1bGwgPSA8MHgwPjsKCQkJCW52aWRp
YSx0cmlzdGF0ZSA9IDwweDA+OwoJCQkJbnZpZGlhLGVuYWJsZS1pbnB1dCA9IDwweDA+OwoJCQl9
OwoKCQkJYnRfcnN0X3BoNCB7CgkJCQludmlkaWEscGlucyA9ICJidF9yc3RfcGg0IjsKCQkJCW52
aWRpYSxmdW5jdGlvbiA9ICJyc3ZkMCI7CgkJCQludmlkaWEscHVsbCA9IDwweDA+OwoJCQkJbnZp
ZGlhLHRyaXN0YXRlID0gPDB4MD47CgkJCQludmlkaWEsZW5hYmxlLWlucHV0ID0gPDB4MD47CgkJ
CX07CgoJCQlidF93YWtlX2FwX3BoNSB7CgkJCQludmlkaWEscGlucyA9ICJidF93YWtlX2FwX3Bo
NSI7CgkJCQludmlkaWEsZnVuY3Rpb24gPSAicnN2ZDAiOwoJCQkJbnZpZGlhLHB1bGwgPSA8MHgy
PjsKCQkJCW52aWRpYSx0cmlzdGF0ZSA9IDwweDA+OwoJCQkJbnZpZGlhLGVuYWJsZS1pbnB1dCA9
IDwweDE+OwoJCQl9OwoKCQkJcGg2IHsKCQkJCW52aWRpYSxwaW5zID0gInBoNiI7CgkJCQludmlk
aWEsZnVuY3Rpb24gPSAicnN2ZDAiOwoJCQkJbnZpZGlhLHB1bGwgPSA8MHgyPjsKCQkJCW52aWRp
YSx0cmlzdGF0ZSA9IDwweDA+OwoJCQkJbnZpZGlhLGVuYWJsZS1pbnB1dCA9IDwweDE+OwoJCQl9
OwoKCQkJYXBfd2FrZV9uZmNfcGg3IHsKCQkJCW52aWRpYSxwaW5zID0gImFwX3dha2VfbmZjX3Bo
NyI7CgkJCQludmlkaWEsZnVuY3Rpb24gPSAicnN2ZDAiOwoJCQkJbnZpZGlhLHB1bGwgPSA8MHgw
PjsKCQkJCW52aWRpYSx0cmlzdGF0ZSA9IDwweDA+OwoJCQkJbnZpZGlhLGVuYWJsZS1pbnB1dCA9
IDwweDA+OwoJCQl9OwoKCQkJbmZjX2VuX3BpMCB7CgkJCQludmlkaWEscGlucyA9ICJuZmNfZW5f
cGkwIjsKCQkJCW52aWRpYSxmdW5jdGlvbiA9ICJyc3ZkMCI7CgkJCQludmlkaWEscHVsbCA9IDww
eDA+OwoJCQkJbnZpZGlhLHRyaXN0YXRlID0gPDB4MD47CgkJCQludmlkaWEsZW5hYmxlLWlucHV0
ID0gPDB4MD47CgkJCX07CgoJCQluZmNfaW50X3BpMSB7CgkJCQludmlkaWEscGlucyA9ICJuZmNf
aW50X3BpMSI7CgkJCQludmlkaWEsZnVuY3Rpb24gPSAicnN2ZDAiOwoJCQkJbnZpZGlhLHB1bGwg
PSA8MHgwPjsKCQkJCW52aWRpYSx0cmlzdGF0ZSA9IDwweDE+OwoJCQkJbnZpZGlhLGVuYWJsZS1p
bnB1dCA9IDwweDE+OwoJCQl9OwoKCQkJZ3BzX2VuX3BpMiB7CgkJCQludmlkaWEscGlucyA9ICJn
cHNfZW5fcGkyIjsKCQkJCW52aWRpYSxmdW5jdGlvbiA9ICJyc3ZkMCI7CgkJCQludmlkaWEscHVs
bCA9IDwweDA+OwoJCQkJbnZpZGlhLHRyaXN0YXRlID0gPDB4MD47CgkJCQludmlkaWEsZW5hYmxl
LWlucHV0ID0gPDB4MD47CgkJCX07CgoJCQlwY2M3IHsKCQkJCW52aWRpYSxwaW5zID0gInBjYzci
OwoJCQkJbnZpZGlhLGZ1bmN0aW9uID0gInJzdmQwIjsKCQkJCW52aWRpYSxwdWxsID0gPDB4MD47
CgkJCQludmlkaWEsdHJpc3RhdGUgPSA8MHgwPjsKCQkJCW52aWRpYSxlbmFibGUtaW5wdXQgPSA8
MHgwPjsKCQkJCW52aWRpYSxpby1oaWdoLXZvbHRhZ2UgPSA8MHgxPjsKCQkJfTsKCgkJCXVzYl92
YnVzX2VuMF9wY2M0IHsKCQkJCW52aWRpYSxwaW5zID0gInVzYl92YnVzX2VuMF9wY2M0IjsKCQkJ
CW52aWRpYSxmdW5jdGlvbiA9ICJyc3ZkMSI7CgkJCQludmlkaWEscHVsbCA9IDwweDI+OwoJCQkJ
bnZpZGlhLHRyaXN0YXRlID0gPDB4MD47CgkJCQludmlkaWEsZW5hYmxlLWlucHV0ID0gPDB4MT47
CgkJCQludmlkaWEsaW8taGlnaC12b2x0YWdlID0gPDB4MD47CgkJCX07CgkJfTsKCgkJdW51c2Vk
X2xvd3Bvd2VyIHsKCQkJbGludXgscGhhbmRsZSA9IDwweDNhPjsKCQkJcGhhbmRsZSA9IDwweDNh
PjsKCgkJCWF1ZF9tY2xrX3BiYjAgewoJCQkJbnZpZGlhLHBpbnMgPSAiYXVkX21jbGtfcGJiMCI7
CgkJCQludmlkaWEsZnVuY3Rpb24gPSAicnN2ZDEiOwoJCQkJbnZpZGlhLHB1bGwgPSA8MHgxPjsK
CQkJCW52aWRpYSx0cmlzdGF0ZSA9IDwweDE+OwoJCQkJbnZpZGlhLGVuYWJsZS1pbnB1dCA9IDww
eDA+OwoJCQl9OwoKCQkJZHZmc19jbGtfcGJiMiB7CgkJCQludmlkaWEscGlucyA9ICJkdmZzX2Ns
a19wYmIyIjsKCQkJCW52aWRpYSxmdW5jdGlvbiA9ICJyc3ZkMCI7CgkJCQludmlkaWEscHVsbCA9
IDwweDE+OwoJCQkJbnZpZGlhLHRyaXN0YXRlID0gPDB4MT47CgkJCQludmlkaWEsZW5hYmxlLWlu
cHV0ID0gPDB4MD47CgkJCX07CgoJCQlncGlvX3gxX2F1ZF9wYmIzIHsKCQkJCW52aWRpYSxwaW5z
ID0gImdwaW9feDFfYXVkX3BiYjMiOwoJCQkJbnZpZGlhLGZ1bmN0aW9uID0gInJzdmQwIjsKCQkJ
CW52aWRpYSxwdWxsID0gPDB4MT47CgkJCQludmlkaWEsdHJpc3RhdGUgPSA8MHgxPjsKCQkJCW52
aWRpYSxlbmFibGUtaW5wdXQgPSA8MHgwPjsKCQkJfTsKCgkJCWdwaW9feDNfYXVkX3BiYjQgewoJ
CQkJbnZpZGlhLHBpbnMgPSAiZ3Bpb194M19hdWRfcGJiNCI7CgkJCQludmlkaWEsZnVuY3Rpb24g
PSAicnN2ZDAiOwoJCQkJbnZpZGlhLHB1bGwgPSA8MHgxPjsKCQkJCW52aWRpYSx0cmlzdGF0ZSA9
IDwweDE+OwoJCQkJbnZpZGlhLGVuYWJsZS1pbnB1dCA9IDwweDA+OwoJCQl9OwoKCQkJZGFwMV9k
aW5fcGIxIHsKCQkJCW52aWRpYSxwaW5zID0gImRhcDFfZGluX3BiMSI7CgkJCQludmlkaWEsZnVu
Y3Rpb24gPSAicnN2ZDEiOwoJCQkJbnZpZGlhLHB1bGwgPSA8MHgxPjsKCQkJCW52aWRpYSx0cmlz
dGF0ZSA9IDwweDE+OwoJCQkJbnZpZGlhLGVuYWJsZS1pbnB1dCA9IDwweDA+OwoJCQl9OwoKCQkJ
ZGFwMV9kb3V0X3BiMiB7CgkJCQludmlkaWEscGlucyA9ICJkYXAxX2RvdXRfcGIyIjsKCQkJCW52
aWRpYSxmdW5jdGlvbiA9ICJyc3ZkMSI7CgkJCQludmlkaWEscHVsbCA9IDwweDE+OwoJCQkJbnZp
ZGlhLHRyaXN0YXRlID0gPDB4MT47CgkJCQludmlkaWEsZW5hYmxlLWlucHV0ID0gPDB4MD47CgkJ
CX07CgoJCQlkYXAxX2ZzX3BiMCB7CgkJCQludmlkaWEscGlucyA9ICJkYXAxX2ZzX3BiMCI7CgkJ
CQludmlkaWEsZnVuY3Rpb24gPSAicnN2ZDEiOwoJCQkJbnZpZGlhLHB1bGwgPSA8MHgxPjsKCQkJ
CW52aWRpYSx0cmlzdGF0ZSA9IDwweDE+OwoJCQkJbnZpZGlhLGVuYWJsZS1pbnB1dCA9IDwweDA+
OwoJCQl9OwoKCQkJZGFwMV9zY2xrX3BiMyB7CgkJCQludmlkaWEscGlucyA9ICJkYXAxX3NjbGtf
cGIzIjsKCQkJCW52aWRpYSxmdW5jdGlvbiA9ICJyc3ZkMSI7CgkJCQludmlkaWEscHVsbCA9IDww
eDE+OwoJCQkJbnZpZGlhLHRyaXN0YXRlID0gPDB4MT47CgkJCQludmlkaWEsZW5hYmxlLWlucHV0
ID0gPDB4MD47CgkJCX07CgoJCQlzcGkyX21vc2lfcGI0IHsKCQkJCW52aWRpYSxwaW5zID0gInNw
aTJfbW9zaV9wYjQiOwoJCQkJbnZpZGlhLGZ1bmN0aW9uID0gInJzdmQyIjsKCQkJCW52aWRpYSxw
dWxsID0gPDB4MT47CgkJCQludmlkaWEsdHJpc3RhdGUgPSA8MHgxPjsKCQkJCW52aWRpYSxlbmFi
bGUtaW5wdXQgPSA8MHgwPjsKCQkJfTsKCgkJCXNwaTJfbWlzb19wYjUgewoJCQkJbnZpZGlhLHBp
bnMgPSAic3BpMl9taXNvX3BiNSI7CgkJCQludmlkaWEsZnVuY3Rpb24gPSAicnN2ZDIiOwoJCQkJ
bnZpZGlhLHB1bGwgPSA8MHgxPjsKCQkJCW52aWRpYSx0cmlzdGF0ZSA9IDwweDE+OwoJCQkJbnZp
ZGlhLGVuYWJsZS1pbnB1dCA9IDwweDA+OwoJCQl9OwoKCQkJc3BpMl9zY2tfcGI2IHsKCQkJCW52
aWRpYSxwaW5zID0gInNwaTJfc2NrX3BiNiI7CgkJCQludmlkaWEsZnVuY3Rpb24gPSAicnN2ZDIi
OwoJCQkJbnZpZGlhLHB1bGwgPSA8MHgxPjsKCQkJCW52aWRpYSx0cmlzdGF0ZSA9IDwweDE+OwoJ
CQkJbnZpZGlhLGVuYWJsZS1pbnB1dCA9IDwweDA+OwoJCQl9OwoKCQkJc3BpMl9jczBfcGI3IHsK
CQkJCW52aWRpYSxwaW5zID0gInNwaTJfY3MwX3BiNyI7CgkJCQludmlkaWEsZnVuY3Rpb24gPSAi
cnN2ZDIiOwoJCQkJbnZpZGlhLHB1bGwgPSA8MHgxPjsKCQkJCW52aWRpYSx0cmlzdGF0ZSA9IDww
eDE+OwoJCQkJbnZpZGlhLGVuYWJsZS1pbnB1dCA9IDwweDA+OwoJCQl9OwoKCQkJc3BpMl9jczFf
cGRkMCB7CgkJCQludmlkaWEscGlucyA9ICJzcGkyX2NzMV9wZGQwIjsKCQkJCW52aWRpYSxmdW5j
dGlvbiA9ICJyc3ZkMSI7CgkJCQludmlkaWEscHVsbCA9IDwweDE+OwoJCQkJbnZpZGlhLHRyaXN0
YXRlID0gPDB4MT47CgkJCQludmlkaWEsZW5hYmxlLWlucHV0ID0gPDB4MD47CgkJCX07CgoJCQlk
bWljM19jbGtfcGU0IHsKCQkJCW52aWRpYSxwaW5zID0gImRtaWMzX2Nsa19wZTQiOwoJCQkJbnZp
ZGlhLGZ1bmN0aW9uID0gInJzdmQyIjsKCQkJCW52aWRpYSxwdWxsID0gPDB4MT47CgkJCQludmlk
aWEsdHJpc3RhdGUgPSA8MHgxPjsKCQkJCW52aWRpYSxlbmFibGUtaW5wdXQgPSA8MHgwPjsKCQkJ
fTsKCgkJCWRtaWMzX2RhdF9wZTUgewoJCQkJbnZpZGlhLHBpbnMgPSAiZG1pYzNfZGF0X3BlNSI7
CgkJCQludmlkaWEsZnVuY3Rpb24gPSAicnN2ZDIiOwoJCQkJbnZpZGlhLHB1bGwgPSA8MHgxPjsK
CQkJCW52aWRpYSx0cmlzdGF0ZSA9IDwweDE+OwoJCQkJbnZpZGlhLGVuYWJsZS1pbnB1dCA9IDww
eDA+OwoJCQl9OwoKCQkJcGU2IHsKCQkJCW52aWRpYSxwaW5zID0gInBlNiI7CgkJCQludmlkaWEs
ZnVuY3Rpb24gPSAicnN2ZDAiOwoJCQkJbnZpZGlhLHB1bGwgPSA8MHgxPjsKCQkJCW52aWRpYSx0
cmlzdGF0ZSA9IDwweDE+OwoJCQkJbnZpZGlhLGVuYWJsZS1pbnB1dCA9IDwweDA+OwoJCQl9OwoK
CQkJY2FtX3JzdF9wczQgewoJCQkJbnZpZGlhLHBpbnMgPSAiY2FtX3JzdF9wczQiOwoJCQkJbnZp
ZGlhLGZ1bmN0aW9uID0gInJzdmQxIjsKCQkJCW52aWRpYSxwdWxsID0gPDB4MT47CgkJCQludmlk
aWEsdHJpc3RhdGUgPSA8MHgxPjsKCQkJCW52aWRpYSxlbmFibGUtaW5wdXQgPSA8MHgwPjsKCQkJ
fTsKCgkJCWNhbV9hZl9lbl9wczUgewoJCQkJbnZpZGlhLHBpbnMgPSAiY2FtX2FmX2VuX3BzNSI7
CgkJCQludmlkaWEsZnVuY3Rpb24gPSAicnN2ZDIiOwoJCQkJbnZpZGlhLHB1bGwgPSA8MHgxPjsK
CQkJCW52aWRpYSx0cmlzdGF0ZSA9IDwweDE+OwoJCQkJbnZpZGlhLGVuYWJsZS1pbnB1dCA9IDww
eDA+OwoJCQl9OwoKCQkJY2FtX2ZsYXNoX2VuX3BzNiB7CgkJCQludmlkaWEscGlucyA9ICJjYW1f
Zmxhc2hfZW5fcHM2IjsKCQkJCW52aWRpYSxmdW5jdGlvbiA9ICJyc3ZkMiI7CgkJCQludmlkaWEs
cHVsbCA9IDwweDE+OwoJCQkJbnZpZGlhLHRyaXN0YXRlID0gPDB4MT47CgkJCQludmlkaWEsZW5h
YmxlLWlucHV0ID0gPDB4MD47CgkJCX07CgoJCQljYW0xX3N0cm9iZV9wdDEgewoJCQkJbnZpZGlh
LHBpbnMgPSAiY2FtMV9zdHJvYmVfcHQxIjsKCQkJCW52aWRpYSxmdW5jdGlvbiA9ICJyc3ZkMSI7
CgkJCQludmlkaWEscHVsbCA9IDwweDE+OwoJCQkJbnZpZGlhLHRyaXN0YXRlID0gPDB4MT47CgkJ
CQludmlkaWEsZW5hYmxlLWlucHV0ID0gPDB4MD47CgkJCX07CgoJCQltb3Rpb25faW50X3B4MiB7
CgkJCQludmlkaWEscGlucyA9ICJtb3Rpb25faW50X3B4MiI7CgkJCQludmlkaWEsZnVuY3Rpb24g
PSAicnN2ZDAiOwoJCQkJbnZpZGlhLHB1bGwgPSA8MHgxPjsKCQkJCW52aWRpYSx0cmlzdGF0ZSA9
IDwweDE+OwoJCQkJbnZpZGlhLGVuYWJsZS1pbnB1dCA9IDwweDA+OwoJCQl9OwoKCQkJdG91Y2hf
cnN0X3B2NiB7CgkJCQludmlkaWEscGlucyA9ICJ0b3VjaF9yc3RfcHY2IjsKCQkJCW52aWRpYSxm
dW5jdGlvbiA9ICJyc3ZkMCI7CgkJCQludmlkaWEscHVsbCA9IDwweDE+OwoJCQkJbnZpZGlhLHRy
aXN0YXRlID0gPDB4MT47CgkJCQludmlkaWEsZW5hYmxlLWlucHV0ID0gPDB4MD47CgkJCX07CgoJ
CQl0b3VjaF9jbGtfcHY3IHsKCQkJCW52aWRpYSxwaW5zID0gInRvdWNoX2Nsa19wdjciOwoJCQkJ
bnZpZGlhLGZ1bmN0aW9uID0gInJzdmQxIjsKCQkJCW52aWRpYSxwdWxsID0gPDB4MT47CgkJCQlu
dmlkaWEsdHJpc3RhdGUgPSA8MHgxPjsKCQkJCW52aWRpYSxlbmFibGUtaW5wdXQgPSA8MHgwPjsK
CQkJfTsKCgkJCXRvdWNoX2ludF9weDEgewoJCQkJbnZpZGlhLHBpbnMgPSAidG91Y2hfaW50X3B4
MSI7CgkJCQludmlkaWEsZnVuY3Rpb24gPSAicnN2ZDAiOwoJCQkJbnZpZGlhLHB1bGwgPSA8MHgx
PjsKCQkJCW52aWRpYSx0cmlzdGF0ZSA9IDwweDE+OwoJCQkJbnZpZGlhLGVuYWJsZS1pbnB1dCA9
IDwweDA+OwoJCQl9OwoKCQkJbW9kZW1fd2FrZV9hcF9weDAgewoJCQkJbnZpZGlhLHBpbnMgPSAi
bW9kZW1fd2FrZV9hcF9weDAiOwoJCQkJbnZpZGlhLGZ1bmN0aW9uID0gInJzdmQwIjsKCQkJCW52
aWRpYSxwdWxsID0gPDB4MT47CgkJCQludmlkaWEsdHJpc3RhdGUgPSA8MHgxPjsKCQkJCW52aWRp
YSxlbmFibGUtaW5wdXQgPSA8MHgwPjsKCQkJfTsKCgkJCWJ1dHRvbl92b2xfZG93bl9weDcgewoJ
CQkJbnZpZGlhLHBpbnMgPSAiYnV0dG9uX3ZvbF9kb3duX3B4NyI7CgkJCQludmlkaWEsZnVuY3Rp
b24gPSAicnN2ZDAiOwoJCQkJbnZpZGlhLHB1bGwgPSA8MHgxPjsKCQkJCW52aWRpYSx0cmlzdGF0
ZSA9IDwweDE+OwoJCQkJbnZpZGlhLGVuYWJsZS1pbnB1dCA9IDwweDA+OwoJCQl9OwoKCQkJYnV0
dG9uX3NsaWRlX3N3X3B5MCB7CgkJCQludmlkaWEscGlucyA9ICJidXR0b25fc2xpZGVfc3dfcHkw
IjsKCQkJCW52aWRpYSxmdW5jdGlvbiA9ICJyc3ZkMCI7CgkJCQludmlkaWEscHVsbCA9IDwweDE+
OwoJCQkJbnZpZGlhLHRyaXN0YXRlID0gPDB4MT47CgkJCQludmlkaWEsZW5hYmxlLWlucHV0ID0g
PDB4MD47CgkJCX07CgoJCQlsY2RfdGVfcHkyIHsKCQkJCW52aWRpYSxwaW5zID0gImxjZF90ZV9w
eTIiOwoJCQkJbnZpZGlhLGZ1bmN0aW9uID0gInJzdmQxIjsKCQkJCW52aWRpYSxwdWxsID0gPDB4
MT47CgkJCQludmlkaWEsdHJpc3RhdGUgPSA8MHgxPjsKCQkJCW52aWRpYSxlbmFibGUtaW5wdXQg
PSA8MHgwPjsKCQkJfTsKCgkJCWxjZF9ibF9wd21fcHYwIHsKCQkJCW52aWRpYSxwaW5zID0gImxj
ZF9ibF9wd21fcHYwIjsKCQkJCW52aWRpYSxmdW5jdGlvbiA9ICJyc3ZkMyI7CgkJCQludmlkaWEs
cHVsbCA9IDwweDE+OwoJCQkJbnZpZGlhLHRyaXN0YXRlID0gPDB4MT47CgkJCQludmlkaWEsZW5h
YmxlLWlucHV0ID0gPDB4MD47CgkJCX07CgoJCQlsY2RfcnN0X3B2MiB7CgkJCQludmlkaWEscGlu
cyA9ICJsY2RfcnN0X3B2MiI7CgkJCQludmlkaWEsZnVuY3Rpb24gPSAicnN2ZDAiOwoJCQkJbnZp
ZGlhLHB1bGwgPSA8MHgxPjsKCQkJCW52aWRpYSx0cmlzdGF0ZSA9IDwweDE+OwoJCQkJbnZpZGlh
LGVuYWJsZS1pbnB1dCA9IDwweDA+OwoJCQl9OwoKCQkJbGNkX2dwaW8xX3B2MyB7CgkJCQludmlk
aWEscGlucyA9ICJsY2RfZ3BpbzFfcHYzIjsKCQkJCW52aWRpYSxmdW5jdGlvbiA9ICJyc3ZkMSI7
CgkJCQludmlkaWEscHVsbCA9IDwweDE+OwoJCQkJbnZpZGlhLHRyaXN0YXRlID0gPDB4MT47CgkJ
CQludmlkaWEsZW5hYmxlLWlucHV0ID0gPDB4MD47CgkJCX07CgoJCQlhcF9yZWFkeV9wdjUgewoJ
CQkJbnZpZGlhLHBpbnMgPSAiYXBfcmVhZHlfcHY1IjsKCQkJCW52aWRpYSxmdW5jdGlvbiA9ICJy
c3ZkMCI7CgkJCQludmlkaWEscHVsbCA9IDwweDE+OwoJCQkJbnZpZGlhLHRyaXN0YXRlID0gPDB4
MT47CgkJCQludmlkaWEsZW5hYmxlLWlucHV0ID0gPDB4MD47CgkJCX07CgoJCQlwejAgewoJCQkJ
bnZpZGlhLHBpbnMgPSAicHowIjsKCQkJCW52aWRpYSxmdW5jdGlvbiA9ICJyc3ZkMSI7CgkJCQlu
dmlkaWEscHVsbCA9IDwweDE+OwoJCQkJbnZpZGlhLHRyaXN0YXRlID0gPDB4MT47CgkJCQludmlk
aWEsZW5hYmxlLWlucHV0ID0gPDB4MD47CgkJCX07CgoJCQlwejQgewoJCQkJbnZpZGlhLHBpbnMg
PSAicHo0IjsKCQkJCW52aWRpYSxmdW5jdGlvbiA9ICJyc3ZkMSI7CgkJCQludmlkaWEscHVsbCA9
IDwweDE+OwoJCQkJbnZpZGlhLHRyaXN0YXRlID0gPDB4MT47CgkJCQludmlkaWEsZW5hYmxlLWlu
cHV0ID0gPDB4MD47CgkJCX07CgoJCQljbGtfcmVxIHsKCQkJCW52aWRpYSxwaW5zID0gImNsa19y
ZXEiOwoJCQkJbnZpZGlhLGZ1bmN0aW9uID0gInJzdmQxIjsKCQkJCW52aWRpYSxwdWxsID0gPDB4
MT47CgkJCQludmlkaWEsdHJpc3RhdGUgPSA8MHgxPjsKCQkJCW52aWRpYSxlbmFibGUtaW5wdXQg
PSA8MHgwPjsKCQkJfTsKCgkJCWNwdV9wd3JfcmVxIHsKCQkJCW52aWRpYSxwaW5zID0gImNwdV9w
d3JfcmVxIjsKCQkJCW52aWRpYSxmdW5jdGlvbiA9ICJyc3ZkMSI7CgkJCQludmlkaWEscHVsbCA9
IDwweDE+OwoJCQkJbnZpZGlhLHRyaXN0YXRlID0gPDB4MT47CgkJCQludmlkaWEsZW5hYmxlLWlu
cHV0ID0gPDB4MD47CgkJCX07CgoJCQlkYXA0X2Rpbl9wajUgewoJCQkJbnZpZGlhLHBpbnMgPSAi
ZGFwNF9kaW5fcGo1IjsKCQkJCW52aWRpYSxmdW5jdGlvbiA9ICJyc3ZkMSI7CgkJCQludmlkaWEs
cHVsbCA9IDwweDE+OwoJCQkJbnZpZGlhLHRyaXN0YXRlID0gPDB4MT47CgkJCQludmlkaWEsZW5h
YmxlLWlucHV0ID0gPDB4MD47CgkJCX07CgoJCQlkYXA0X2RvdXRfcGo2IHsKCQkJCW52aWRpYSxw
aW5zID0gImRhcDRfZG91dF9wajYiOwoJCQkJbnZpZGlhLGZ1bmN0aW9uID0gInJzdmQxIjsKCQkJ
CW52aWRpYSxwdWxsID0gPDB4MT47CgkJCQludmlkaWEsdHJpc3RhdGUgPSA8MHgxPjsKCQkJCW52
aWRpYSxlbmFibGUtaW5wdXQgPSA8MHgwPjsKCQkJfTsKCgkJCWRhcDRfZnNfcGo0IHsKCQkJCW52
aWRpYSxwaW5zID0gImRhcDRfZnNfcGo0IjsKCQkJCW52aWRpYSxmdW5jdGlvbiA9ICJyc3ZkMSI7
CgkJCQludmlkaWEscHVsbCA9IDwweDE+OwoJCQkJbnZpZGlhLHRyaXN0YXRlID0gPDB4MT47CgkJ
CQludmlkaWEsZW5hYmxlLWlucHV0ID0gPDB4MD47CgkJCX07CgoJCQlkYXA0X3NjbGtfcGo3IHsK
CQkJCW52aWRpYSxwaW5zID0gImRhcDRfc2Nsa19wajciOwoJCQkJbnZpZGlhLGZ1bmN0aW9uID0g
InJzdmQxIjsKCQkJCW52aWRpYSxwdWxsID0gPDB4MT47CgkJCQludmlkaWEsdHJpc3RhdGUgPSA8
MHgxPjsKCQkJCW52aWRpYSxlbmFibGUtaW5wdXQgPSA8MHgwPjsKCQkJfTsKCgkJCXVhcnQyX3J0
c19wZzIgewoJCQkJbnZpZGlhLHBpbnMgPSAidWFydDJfcnRzX3BnMiI7CgkJCQludmlkaWEsZnVu
Y3Rpb24gPSAicnN2ZDIiOwoJCQkJbnZpZGlhLHB1bGwgPSA8MHgxPjsKCQkJCW52aWRpYSx0cmlz
dGF0ZSA9IDwweDE+OwoJCQkJbnZpZGlhLGVuYWJsZS1pbnB1dCA9IDwweDA+OwoJCQl9OwoKCQkJ
dWFydDJfY3RzX3BnMyB7CgkJCQludmlkaWEscGlucyA9ICJ1YXJ0Ml9jdHNfcGczIjsKCQkJCW52
aWRpYSxmdW5jdGlvbiA9ICJyc3ZkMiI7CgkJCQludmlkaWEscHVsbCA9IDwweDE+OwoJCQkJbnZp
ZGlhLHRyaXN0YXRlID0gPDB4MT47CgkJCQludmlkaWEsZW5hYmxlLWlucHV0ID0gPDB4MD47CgkJ
CX07CgoJCQl1YXJ0MV9ydHNfcHUyIHsKCQkJCW52aWRpYSxwaW5zID0gInVhcnQxX3J0c19wdTIi
OwoJCQkJbnZpZGlhLGZ1bmN0aW9uID0gInJzdmQxIjsKCQkJCW52aWRpYSxwdWxsID0gPDB4MT47
CgkJCQludmlkaWEsdHJpc3RhdGUgPSA8MHgxPjsKCQkJCW52aWRpYSxlbmFibGUtaW5wdXQgPSA8
MHgwPjsKCQkJfTsKCgkJCXVhcnQxX2N0c19wdTMgewoJCQkJbnZpZGlhLHBpbnMgPSAidWFydDFf
Y3RzX3B1MyI7CgkJCQludmlkaWEsZnVuY3Rpb24gPSAicnN2ZDEiOwoJCQkJbnZpZGlhLHB1bGwg
PSA8MHgxPjsKCQkJCW52aWRpYSx0cmlzdGF0ZSA9IDwweDE+OwoJCQkJbnZpZGlhLGVuYWJsZS1p
bnB1dCA9IDwweDA+OwoJCQl9OwoKCQkJcGswIHsKCQkJCW52aWRpYSxwaW5zID0gInBrMCI7CgkJ
CQludmlkaWEsZnVuY3Rpb24gPSAicnN2ZDIiOwoJCQkJbnZpZGlhLHB1bGwgPSA8MHgxPjsKCQkJ
CW52aWRpYSx0cmlzdGF0ZSA9IDwweDE+OwoJCQkJbnZpZGlhLGVuYWJsZS1pbnB1dCA9IDwweDA+
OwoJCQl9OwoKCQkJcGsxIHsKCQkJCW52aWRpYSxwaW5zID0gInBrMSI7CgkJCQludmlkaWEsZnVu
Y3Rpb24gPSAicnN2ZDIiOwoJCQkJbnZpZGlhLHB1bGwgPSA8MHgxPjsKCQkJCW52aWRpYSx0cmlz
dGF0ZSA9IDwweDE+OwoJCQkJbnZpZGlhLGVuYWJsZS1pbnB1dCA9IDwweDA+OwoJCQl9OwoKCQkJ
cGsyIHsKCQkJCW52aWRpYSxwaW5zID0gInBrMiI7CgkJCQludmlkaWEsZnVuY3Rpb24gPSAicnN2
ZDIiOwoJCQkJbnZpZGlhLHB1bGwgPSA8MHgxPjsKCQkJCW52aWRpYSx0cmlzdGF0ZSA9IDwweDE+
OwoJCQkJbnZpZGlhLGVuYWJsZS1pbnB1dCA9IDwweDA+OwoJCQl9OwoKCQkJcGszIHsKCQkJCW52
aWRpYSxwaW5zID0gInBrMyI7CgkJCQludmlkaWEsZnVuY3Rpb24gPSAicnN2ZDIiOwoJCQkJbnZp
ZGlhLHB1bGwgPSA8MHgxPjsKCQkJCW52aWRpYSx0cmlzdGF0ZSA9IDwweDE+OwoJCQkJbnZpZGlh
LGVuYWJsZS1pbnB1dCA9IDwweDA+OwoJCQl9OwoKCQkJcGs0IHsKCQkJCW52aWRpYSxwaW5zID0g
InBrNCI7CgkJCQludmlkaWEsZnVuY3Rpb24gPSAicnN2ZDEiOwoJCQkJbnZpZGlhLHB1bGwgPSA8
MHgxPjsKCQkJCW52aWRpYSx0cmlzdGF0ZSA9IDwweDE+OwoJCQkJbnZpZGlhLGVuYWJsZS1pbnB1
dCA9IDwweDA+OwoJCQl9OwoKCQkJcGs1IHsKCQkJCW52aWRpYSxwaW5zID0gInBrNSI7CgkJCQlu
dmlkaWEsZnVuY3Rpb24gPSAicnN2ZDEiOwoJCQkJbnZpZGlhLHB1bGwgPSA8MHgxPjsKCQkJCW52
aWRpYSx0cmlzdGF0ZSA9IDwweDE+OwoJCQkJbnZpZGlhLGVuYWJsZS1pbnB1dCA9IDwweDA+OwoJ
CQl9OwoKCQkJcGs2IHsKCQkJCW52aWRpYSxwaW5zID0gInBrNiI7CgkJCQludmlkaWEsZnVuY3Rp
b24gPSAicnN2ZDEiOwoJCQkJbnZpZGlhLHB1bGwgPSA8MHgxPjsKCQkJCW52aWRpYSx0cmlzdGF0
ZSA9IDwweDE+OwoJCQkJbnZpZGlhLGVuYWJsZS1pbnB1dCA9IDwweDA+OwoJCQl9OwoKCQkJcGs3
IHsKCQkJCW52aWRpYSxwaW5zID0gInBrNyI7CgkJCQludmlkaWEsZnVuY3Rpb24gPSAicnN2ZDEi
OwoJCQkJbnZpZGlhLHB1bGwgPSA8MHgxPjsKCQkJCW52aWRpYSx0cmlzdGF0ZSA9IDwweDE+OwoJ
CQkJbnZpZGlhLGVuYWJsZS1pbnB1dCA9IDwweDA+OwoJCQl9OwoKCQkJcGwwIHsKCQkJCW52aWRp
YSxwaW5zID0gInBsMCI7CgkJCQludmlkaWEsZnVuY3Rpb24gPSAicnN2ZDAiOwoJCQkJbnZpZGlh
LHB1bGwgPSA8MHgxPjsKCQkJCW52aWRpYSx0cmlzdGF0ZSA9IDwweDE+OwoJCQkJbnZpZGlhLGVu
YWJsZS1pbnB1dCA9IDwweDA+OwoJCQl9OwoKCQkJcGwxIHsKCQkJCW52aWRpYSxwaW5zID0gInBs
MSI7CgkJCQludmlkaWEsZnVuY3Rpb24gPSAicnN2ZDEiOwoJCQkJbnZpZGlhLHB1bGwgPSA8MHgx
PjsKCQkJCW52aWRpYSx0cmlzdGF0ZSA9IDwweDE+OwoJCQkJbnZpZGlhLGVuYWJsZS1pbnB1dCA9
IDwweDA+OwoJCQl9OwoKCQkJc3BpMV9tb3NpX3BjMCB7CgkJCQludmlkaWEscGlucyA9ICJzcGkx
X21vc2lfcGMwIjsKCQkJCW52aWRpYSxmdW5jdGlvbiA9ICJyc3ZkMSI7CgkJCQludmlkaWEscHVs
bCA9IDwweDE+OwoJCQkJbnZpZGlhLHRyaXN0YXRlID0gPDB4MT47CgkJCQludmlkaWEsZW5hYmxl
LWlucHV0ID0gPDB4MD47CgkJCX07CgoJCQlzcGkxX21pc29fcGMxIHsKCQkJCW52aWRpYSxwaW5z
ID0gInNwaTFfbWlzb19wYzEiOwoJCQkJbnZpZGlhLGZ1bmN0aW9uID0gInJzdmQxIjsKCQkJCW52
aWRpYSxwdWxsID0gPDB4MT47CgkJCQludmlkaWEsdHJpc3RhdGUgPSA8MHgxPjsKCQkJCW52aWRp
YSxlbmFibGUtaW5wdXQgPSA8MHgwPjsKCQkJfTsKCgkJCXNwaTFfc2NrX3BjMiB7CgkJCQludmlk
aWEscGlucyA9ICJzcGkxX3Nja19wYzIiOwoJCQkJbnZpZGlhLGZ1bmN0aW9uID0gInJzdmQxIjsK
CQkJCW52aWRpYSxwdWxsID0gPDB4MT47CgkJCQludmlkaWEsdHJpc3RhdGUgPSA8MHgxPjsKCQkJ
CW52aWRpYSxlbmFibGUtaW5wdXQgPSA8MHgwPjsKCQkJfTsKCgkJCXNwaTFfY3MwX3BjMyB7CgkJ
CQludmlkaWEscGlucyA9ICJzcGkxX2NzMF9wYzMiOwoJCQkJbnZpZGlhLGZ1bmN0aW9uID0gInJz
dmQxIjsKCQkJCW52aWRpYSxwdWxsID0gPDB4MT47CgkJCQludmlkaWEsdHJpc3RhdGUgPSA8MHgx
PjsKCQkJCW52aWRpYSxlbmFibGUtaW5wdXQgPSA8MHgwPjsKCQkJfTsKCgkJCXNwaTFfY3MxX3Bj
NCB7CgkJCQludmlkaWEscGlucyA9ICJzcGkxX2NzMV9wYzQiOwoJCQkJbnZpZGlhLGZ1bmN0aW9u
ID0gInJzdmQxIjsKCQkJCW52aWRpYSxwdWxsID0gPDB4MT47CgkJCQludmlkaWEsdHJpc3RhdGUg
PSA8MHgxPjsKCQkJCW52aWRpYSxlbmFibGUtaW5wdXQgPSA8MHgwPjsKCQkJfTsKCgkJCXNwaTRf
bW9zaV9wYzcgewoJCQkJbnZpZGlhLHBpbnMgPSAic3BpNF9tb3NpX3BjNyI7CgkJCQludmlkaWEs
ZnVuY3Rpb24gPSAicnN2ZDEiOwoJCQkJbnZpZGlhLHB1bGwgPSA8MHgxPjsKCQkJCW52aWRpYSx0
cmlzdGF0ZSA9IDwweDE+OwoJCQkJbnZpZGlhLGVuYWJsZS1pbnB1dCA9IDwweDA+OwoJCQl9OwoK
CQkJc3BpNF9taXNvX3BkMCB7CgkJCQludmlkaWEscGlucyA9ICJzcGk0X21pc29fcGQwIjsKCQkJ
CW52aWRpYSxmdW5jdGlvbiA9ICJyc3ZkMSI7CgkJCQludmlkaWEscHVsbCA9IDwweDE+OwoJCQkJ
bnZpZGlhLHRyaXN0YXRlID0gPDB4MT47CgkJCQludmlkaWEsZW5hYmxlLWlucHV0ID0gPDB4MD47
CgkJCX07CgoJCQlzcGk0X3Nja19wYzUgewoJCQkJbnZpZGlhLHBpbnMgPSAic3BpNF9zY2tfcGM1
IjsKCQkJCW52aWRpYSxmdW5jdGlvbiA9ICJyc3ZkMSI7CgkJCQludmlkaWEscHVsbCA9IDwweDE+
OwoJCQkJbnZpZGlhLHRyaXN0YXRlID0gPDB4MT47CgkJCQludmlkaWEsZW5hYmxlLWlucHV0ID0g
PDB4MD47CgkJCX07CgoJCQlzcGk0X2NzMF9wYzYgewoJCQkJbnZpZGlhLHBpbnMgPSAic3BpNF9j
czBfcGM2IjsKCQkJCW52aWRpYSxmdW5jdGlvbiA9ICJyc3ZkMSI7CgkJCQludmlkaWEscHVsbCA9
IDwweDE+OwoJCQkJbnZpZGlhLHRyaXN0YXRlID0gPDB4MT47CgkJCQludmlkaWEsZW5hYmxlLWlu
cHV0ID0gPDB4MD47CgkJCX07CgoJCQl3aWZpX3JzdF9waDEgewoJCQkJbnZpZGlhLHBpbnMgPSAi
d2lmaV9yc3RfcGgxIjsKCQkJCW52aWRpYSxmdW5jdGlvbiA9ICJyc3ZkMCI7CgkJCQludmlkaWEs
cHVsbCA9IDwweDE+OwoJCQkJbnZpZGlhLHRyaXN0YXRlID0gPDB4MT47CgkJCQludmlkaWEsZW5h
YmxlLWlucHV0ID0gPDB4MD47CgkJCX07CgoJCQlncHNfcnN0X3BpMyB7CgkJCQludmlkaWEscGlu
cyA9ICJncHNfcnN0X3BpMyI7CgkJCQludmlkaWEsZnVuY3Rpb24gPSAicnN2ZDAiOwoJCQkJbnZp
ZGlhLHB1bGwgPSA8MHgxPjsKCQkJCW52aWRpYSx0cmlzdGF0ZSA9IDwweDE+OwoJCQkJbnZpZGlh
LGVuYWJsZS1pbnB1dCA9IDwweDA+OwoJCQl9OwoKCQkJc3BkaWZfb3V0X3BjYzIgewoJCQkJbnZp
ZGlhLHBpbnMgPSAic3BkaWZfb3V0X3BjYzIiOwoJCQkJbnZpZGlhLGZ1bmN0aW9uID0gInJzdmQx
IjsKCQkJCW52aWRpYSxwdWxsID0gPDB4MT47CgkJCQludmlkaWEsdHJpc3RhdGUgPSA8MHgxPjsK
CQkJCW52aWRpYSxlbmFibGUtaW5wdXQgPSA8MHgwPjsKCQkJfTsKCgkJCXNwZGlmX2luX3BjYzMg
ewoJCQkJbnZpZGlhLHBpbnMgPSAic3BkaWZfaW5fcGNjMyI7CgkJCQludmlkaWEsZnVuY3Rpb24g
PSAicnN2ZDEiOwoJCQkJbnZpZGlhLHB1bGwgPSA8MHgxPjsKCQkJCW52aWRpYSx0cmlzdGF0ZSA9
IDwweDE+OwoJCQkJbnZpZGlhLGVuYWJsZS1pbnB1dCA9IDwweDA+OwoJCQl9OwoKCQkJdXNiX3Zi
dXNfZW4xX3BjYzUgewoJCQkJbnZpZGlhLHBpbnMgPSAidXNiX3ZidXNfZW4xX3BjYzUiOwoJCQkJ
bnZpZGlhLGZ1bmN0aW9uID0gInJzdmQxIjsKCQkJCW52aWRpYSxwdWxsID0gPDB4MT47CgkJCQlu
dmlkaWEsdHJpc3RhdGUgPSA8MHgxPjsKCQkJCW52aWRpYSxlbmFibGUtaW5wdXQgPSA8MHgwPjsK
CQkJCW52aWRpYSxpby1oaWdoLXZvbHRhZ2UgPSA8MHgwPjsKCQkJfTsKCQl9OwoKCQlkcml2ZSB7
CgkJCWxpbnV4LHBoYW5kbGUgPSA8MHgzOT47CgkJCXBoYW5kbGUgPSA8MHgzOT47CgkJfTsKCX07
CgoJZ3Bpb0A2MDAwZDAwMCB7CgkJY29tcGF0aWJsZSA9ICJudmlkaWEsdGVncmEyMTAtZ3BpbyIs
ICJudmlkaWEsdGVncmExMjQtZ3BpbyIsICJudmlkaWEsdGVncmEzMC1ncGlvIjsKCQlyZWcgPSA8
MHgwIDB4NjAwMGQwMDAgMHgwIDB4MTAwMD47CgkJaW50ZXJydXB0cyA9IDwweDAgMHgyMCAweDQg
MHgwIDB4MjEgMHg0IDB4MCAweDIyIDB4NCAweDAgMHgyMyAweDQgMHgwIDB4MzcgMHg0IDB4MCAw
eDU3IDB4NCAweDAgMHg1OSAweDQgMHgwIDB4N2QgMHg0PjsKCQkjZ3Bpby1jZWxscyA9IDwweDI+
OwoJCWdwaW8tY29udHJvbGxlcjsKCQkjaW50ZXJydXB0LWNlbGxzID0gPDB4Mj47CgkJaW50ZXJy
dXB0LWNvbnRyb2xsZXI7CgkJZ3Bpby1yYW5nZXMgPSA8MHgzYiAweDAgMHgwIDB4ZjY+OwoJCXN0
YXR1cyA9ICJva2F5IjsKCQlncGlvLWluaXQtbmFtZXMgPSAiZGVmYXVsdCI7CgkJZ3Bpby1pbml0
LTAgPSA8MHgzYz47CgkJZ3Bpby1saW5lLW5hbWVzID0gWzAwIDAwIDAwIDAwIDAwIDAwIDAwIDAw
IDAwIDAwIDAwIDAwIDUzIDUwIDQ5IDMxIDVmIDRkIDRmIDUzIDQ5IDAwIDUzIDUwIDQ5IDMxIDVm
IDRkIDQ5IDUzIDRmIDAwIDUzIDUwIDQ5IDMxIDVmIDUzIDQzIDRiIDAwIDUzIDUwIDQ5IDMxIDVm
IDQzIDUzIDMwIDAwIDUzIDUwIDQ5IDMwIDVmIDRkIDRmIDUzIDQ5IDAwIDUzIDUwIDQ5IDMwIDVm
IDRkIDQ5IDUzIDRmIDAwIDUzIDUwIDQ5IDMwIDVmIDUzIDQzIDRiIDAwIDUzIDUwIDQ5IDMwIDVm
IDQzIDUzIDMwIDAwIDUzIDUwIDQ5IDMwIDVmIDQzIDUzIDMxIDAwIDAwIDAwIDAwIDAwIDAwIDAw
IDAwIDAwIDAwIDAwIDAwIDAwIDAwIDAwIDAwIDAwIDAwIDQ3IDUwIDQ5IDRmIDMxIDMzIDAwIDAw
IDAwIDAwIDAwIDAwIDAwIDAwIDAwIDAwIDAwIDAwIDU1IDQxIDUyIDU0IDMxIDVmIDUyIDU0IDUz
IDAwIDU1IDQxIDUyIDU0IDMxIDVmIDQzIDU0IDUzIDAwIDAwIDAwIDAwIDAwIDAwIDAwIDAwIDAw
IDAwIDAwIDAwIDAwIDAwIDAwIDAwIDAwIDAwIDAwIDAwIDAwIDAwIDAwIDAwIDAwIDQ5IDMyIDUz
IDMwIDVmIDQ2IDUzIDAwIDQ5IDMyIDUzIDMwIDVmIDQ0IDQ5IDRlIDAwIDQ5IDMyIDUzIDMwIDVm
IDQ0IDRmIDU1IDU0IDAwIDQ5IDMyIDUzIDMwIDVmIDUzIDQzIDRjIDRiIDAwIDAwIDAwIDAwIDAw
IDAwIDAwIDAwIDAwIDAwIDAwIDAwIDAwIDAwIDAwIDAwIDAwIDAwIDAwIDAwIDAwIDAwIDAwIDAw
IDAwIDAwIDAwIDAwIDAwIDAwIDAwIDAwIDAwIDAwIDAwIDAwIDAwIDAwIDAwIDAwIDAwIDAwIDAw
IDAwIDAwIDAwIDAwIDAwIDAwIDAwIDAwIDAwIDAwIDAwIDAwIDAwIDAwIDAwIDAwIDAwIDAwIDAw
IDAwIDAwIDAwIDAwIDAwIDAwIDAwIDAwIDQ3IDUwIDQ5IDRmIDMwIDMxIDAwIDAwIDAwIDAwIDAw
IDAwIDAwIDAwIDAwIDAwIDAwIDAwIDAwIDAwIDAwIDAwIDAwIDAwIDAwIDQ3IDUwIDQ5IDRmIDMw
IDM3IDAwIDAwIDAwIDAwIDAwIDAwIDAwIDAwIDAwIDAwIDAwIDAwIDAwIDAwIDAwIDAwIDAwIDAw
IDAwIDAwIDAwIDAwIDAwIDAwIDAwIDAwIDQ3IDUwIDQ5IDRmIDMxIDMyIDAwIDAwIDAwIDAwIDAw
IDAwIDQ3IDUwIDQ5IDRmIDMxIDMxIDAwIDAwIDAwIDAwIDAwIDAwIDAwIDAwIDAwIDAwIDAwIDAw
IDAwIDAwIDAwIDAwIDQ3IDUwIDQ5IDRmIDMwIDM5IDAwIDAwIDAwIDAwIDAwIDAwIDAwIDAwIDAw
IDAwIDAwIDAwIDAwIDAwIDAwIDAwIDUzIDUwIDQ5IDMxIDVmIDQzIDUzIDMxIDAwIDAwIDAwIDAw
IDAwIDAwIDAwIDAwXTsKCQlsaW51eCxwaGFuZGxlID0gPDB4NTY+OwoJCXBoYW5kbGUgPSA8MHg1
Nj47CgoJCWNhbWVyYS1jb250cm9sLW91dHB1dC1sb3cgewoJCQlncGlvLWhvZzsKCQkJb3V0cHV0
LWxvdzsKCQkJZ3Bpb3MgPSA8MHg5NyAweDAgMHg5OCAweDA+OwoJCQlsYWJlbCA9ICJjYW0xLXB3
ZG4iLCAiY2FtMi1wd2RuIjsKCQl9OwoKCQllMjYxNC1ydDU2NTgtYXVkaW8gewoJCQlncGlvLWhv
ZzsKCQkJZnVuY3Rpb247CgkJCWdwaW9zID0gPDB4NGMgMHgwIDB4NGQgMHgwIDB4NGUgMHgwIDB4
NGYgMHgwIDB4ZDggMHgwIDB4OTUgMHgwPjsKCQkJbGFiZWwgPSAiSTJTNF9MUkNMSyIsICJJMlM0
X1NESU4iLCAiSTJTNF9TRE9VVCIsICJJMlM0X0NMSyIsICJBVURJT19NQ0xLIiwgIkFVRF9SU1Qi
OwoJCQlzdGF0dXMgPSAiZGlzYWJsZWQiOwoJCQlsaW51eCxwaGFuZGxlID0gPDB4ZGI+OwoJCQlw
aGFuZGxlID0gPDB4ZGI+OwoJCX07CgoJCXN5c3RlbS1zdXNwZW5kLWdwaW8gewoJCQlzdGF0dXMg
PSAib2theSI7CgkJCWdwaW8taG9nOwoJCQlvdXRwdXQtaGlnaDsKCQkJZ3Bpby1zdXNwZW5kOwoJ
CQlzdXNwZW5kLW91dHB1dC1sb3c7CgkJCWdwaW9zID0gPDB4NiAweDA+OwoJCQlsaW51eCxwaGFu
ZGxlID0gPDB4YjI+OwoJCQlwaGFuZGxlID0gPDB4YjI+OwoJCX07CgoJCWRlZmF1bHQgewoJCQln
cGlvLWlucHV0ID0gPDB4NSAweGJjIDB4YmQgMHhiZSAweGMxIDB4YTkgMHhjYSAweDNhIDB4M2Qg
MHgzZSAweDQxIDB4ZTQ+OwoJCQlncGlvLW91dHB1dC1sb3cgPSA8MHg5NyAweDk4IDB4Y2IgMHgz
OCAweDNiIDB4M2MgMHgzZiAweDQwIDB4NDI+OwoJCQlncGlvLW91dHB1dC1oaWdoID0gPDB4NiAw
eGJiIDB4ZTc+OwoJCQlsaW51eCxwaGFuZGxlID0gPDB4M2M+OwoJCQlwaGFuZGxlID0gPDB4M2M+
OwoJCX07Cgl9OwoKCXhvdGcgewoJCWNvbXBhdGlibGUgPSAibnZpZGlhLHRlZ3JhMjEwLXhvdGci
OwoJCWludGVycnVwdHMgPSA8MHgwIDB4MzEgMHg0IDB4MCAweDE0IDB4ND47CgkJc3RhdHVzID0g
ImRpc2FibGVkIjsKCQkjZXh0Y29uLWNlbGxzID0gPDB4MT47Cgl9OwoKCW1haWxib3hANzAwOTgw
MDAgewoJCWNvbXBhdGlibGUgPSAibnZpZGlhLHRlZ3JhMjEwLXh1c2ItbWJveCI7CgkJcmVnID0g
PDB4MCAweDcwMDk4MDAwIDB4MCAweDEwMDA+OwoJCWludGVycnVwdHMgPSA8MHgwIDB4MjggMHg0
PjsKCQkjbWJveC1jZWxscyA9IDwweDA+OwoJCXN0YXR1cyA9ICJva2F5IjsKCQlsaW51eCxwaGFu
ZGxlID0gPDB4NDY+OwoJCXBoYW5kbGUgPSA8MHg0Nj47Cgl9OwoKCXh1c2JfcGFkY3RsQDcwMDlm
MDAwIHsKCQljb21wYXRpYmxlID0gIm52aWRpYSx0ZWdyYTIxMC14dXNiLXBhZGN0bCI7CgkJcmVn
ID0gPDB4MCAweDcwMDlmMDAwIDB4MCAweDEwMDA+OwoJCXJlZy1uYW1lcyA9ICJwYWRjdGwiOwoJ
CXJlc2V0cyA9IDwweDIxIDB4OGU+OwoJCXJlc2V0LW5hbWVzID0gInBhZGN0bCI7CgkJc3RhdHVz
ID0gIm9rYXkiOwoJCXZkZGlvLWhzaWMtc3VwcGx5ID0gPDB4M2Q+OwoJCWF2ZGRfcGxsX3VlcmVm
ZS1zdXBwbHkgPSA8MHgzZT47CgkJaHZkZF9wZXhfcGxsX2Utc3VwcGx5ID0gPDB4MzY+OwoJCWR2
ZGRfcGV4X3BsbC1zdXBwbHkgPSA8MHgzZj47CgkJaHZkZGlvX3BleC1zdXBwbHkgPSA8MHgzNj47
CgkJZHZkZGlvX3BleC1zdXBwbHkgPSA8MHgzZj47CgkJaHZkZF9zYXRhLXN1cHBseSA9IDwweDM2
PjsKCQlkdmRkX3NhdGFfcGxsLXN1cHBseSA9IDwweDQwPjsKCQlodmRkaW9fc2F0YS1zdXBwbHkg
PSA8MHgzNj47CgkJZHZkZGlvX3NhdGEtc3VwcGx5ID0gPDB4NDA+OwoJCWxpbnV4LHBoYW5kbGUg
PSA8MHg0ND47CgkJcGhhbmRsZSA9IDwweDQ0PjsKCgkJcGFkcyB7CgoJCQl1c2IyIHsKCQkJCWNs
b2NrcyA9IDwweDIxIDB4ZDI+OwoJCQkJY2xvY2stbmFtZXMgPSAidHJrIjsKCQkJCXN0YXR1cyA9
ICJva2F5IjsKCgkJCQlsYW5lcyB7CgoJCQkJCXVzYjItMCB7CgkJCQkJCXN0YXR1cyA9ICJva2F5
IjsKCQkJCQkJI3BoeS1jZWxscyA9IDwweDA+OwoJCQkJCQludmlkaWEsZnVuY3Rpb24gPSAieHVz
YiI7CgkJCQkJCWxpbnV4LHBoYW5kbGUgPSA8MHg0NT47CgkJCQkJCXBoYW5kbGUgPSA8MHg0NT47
CgkJCQkJfTsKCgkJCQkJdXNiMi0xIHsKCQkJCQkJc3RhdHVzID0gIm9rYXkiOwoJCQkJCQkjcGh5
LWNlbGxzID0gPDB4MD47CgkJCQkJCW52aWRpYSxmdW5jdGlvbiA9ICJ4dXNiIjsKCQkJCQkJbGlu
dXgscGhhbmRsZSA9IDwweDQ5PjsKCQkJCQkJcGhhbmRsZSA9IDwweDQ5PjsKCQkJCQl9OwoKCQkJ
CQl1c2IyLTIgewoJCQkJCQlzdGF0dXMgPSAib2theSI7CgkJCQkJCSNwaHktY2VsbHMgPSA8MHgw
PjsKCQkJCQkJbnZpZGlhLGZ1bmN0aW9uID0gInh1c2IiOwoJCQkJCQlsaW51eCxwaGFuZGxlID0g
PDB4NGE+OwoJCQkJCQlwaGFuZGxlID0gPDB4NGE+OwoJCQkJCX07CgoJCQkJCXVzYjItMyB7CgkJ
CQkJCXN0YXR1cyA9ICJkaXNhYmxlZCI7CgkJCQkJCSNwaHktY2VsbHMgPSA8MHgwPjsKCQkJCQl9
OwoJCQkJfTsKCQkJfTsKCgkJCXBjaWUgewoJCQkJY2xvY2tzID0gPDB4MjEgMHgxMDc+OwoJCQkJ
Y2xvY2stbmFtZXMgPSAicGxsIjsKCQkJCXJlc2V0cyA9IDwweDIxIDB4Y2Q+OwoJCQkJcmVzZXQt
bmFtZXMgPSAicGh5IjsKCQkJCXN0YXR1cyA9ICJva2F5IjsKCgkJCQlsYW5lcyB7CgoJCQkJCXBj
aWUtMCB7CgkJCQkJCXN0YXR1cyA9ICJva2F5IjsKCQkJCQkJI3BoeS1jZWxscyA9IDwweDA+OwoJ
CQkJCQludmlkaWEsZnVuY3Rpb24gPSAicGNpZS14MSI7CgkJCQkJCWxpbnV4LHBoYW5kbGUgPSA8
MHg4NT47CgkJCQkJCXBoYW5kbGUgPSA8MHg4NT47CgkJCQkJfTsKCgkJCQkJcGNpZS0xIHsKCQkJ
CQkJc3RhdHVzID0gIm9rYXkiOwoJCQkJCQkjcGh5LWNlbGxzID0gPDB4MD47CgkJCQkJCW52aWRp
YSxmdW5jdGlvbiA9ICJwY2llLXg0IjsKCQkJCQkJbGludXgscGhhbmRsZSA9IDwweDgxPjsKCQkJ
CQkJcGhhbmRsZSA9IDwweDgxPjsKCQkJCQl9OwoKCQkJCQlwY2llLTIgewoJCQkJCQlzdGF0dXMg
PSAib2theSI7CgkJCQkJCSNwaHktY2VsbHMgPSA8MHgwPjsKCQkJCQkJbnZpZGlhLGZ1bmN0aW9u
ID0gInBjaWUteDQiOwoJCQkJCQlsaW51eCxwaGFuZGxlID0gPDB4ODI+OwoJCQkJCQlwaGFuZGxl
ID0gPDB4ODI+OwoJCQkJCX07CgoJCQkJCXBjaWUtMyB7CgkJCQkJCXN0YXR1cyA9ICJva2F5IjsK
CQkJCQkJI3BoeS1jZWxscyA9IDwweDA+OwoJCQkJCQludmlkaWEsZnVuY3Rpb24gPSAicGNpZS14
NCI7CgkJCQkJCWxpbnV4LHBoYW5kbGUgPSA8MHg4Mz47CgkJCQkJCXBoYW5kbGUgPSA8MHg4Mz47
CgkJCQkJfTsKCgkJCQkJcGNpZS00IHsKCQkJCQkJc3RhdHVzID0gIm9rYXkiOwoJCQkJCQkjcGh5
LWNlbGxzID0gPDB4MD47CgkJCQkJCW52aWRpYSxmdW5jdGlvbiA9ICJwY2llLXg0IjsKCQkJCQkJ
bGludXgscGhhbmRsZSA9IDwweDg0PjsKCQkJCQkJcGhhbmRsZSA9IDwweDg0PjsKCQkJCQl9OwoK
CQkJCQlwY2llLTUgewoJCQkJCQlzdGF0dXMgPSAib2theSI7CgkJCQkJCSNwaHktY2VsbHMgPSA8
MHgwPjsKCQkJCQkJbnZpZGlhLGZ1bmN0aW9uID0gInh1c2IiOwoJCQkJCX07CgoJCQkJCXBjaWUt
NiB7CgkJCQkJCXN0YXR1cyA9ICJva2F5IjsKCQkJCQkJI3BoeS1jZWxscyA9IDwweDA+OwoJCQkJ
CQludmlkaWEsZnVuY3Rpb24gPSAieHVzYiI7CgkJCQkJCWxpbnV4LHBoYW5kbGUgPSA8MHg0Yj47
CgkJCQkJCXBoYW5kbGUgPSA8MHg0Yj47CgkJCQkJfTsKCQkJCX07CgkJCX07CgoJCQlzYXRhIHsK
CQkJCWNsb2NrcyA9IDwweDIxIDB4MTA3PjsKCQkJCWNsb2NrLW5hbWVzID0gInBsbCI7CgkJCQly
ZXNldHMgPSA8MHgyMSAweGNjPjsKCQkJCXJlc2V0LW5hbWVzID0gInBoeSI7CgkJCQlzdGF0dXMg
PSAiZGlzYWJsZWQiOwoKCQkJCWxhbmVzIHsKCgkJCQkJc2F0YS0wIHsKCQkJCQkJc3RhdHVzID0g
ImRpc2FibGVkIjsKCQkJCQkJI3BoeS1jZWxscyA9IDwweDA+OwoJCQkJCX07CgkJCQl9OwoJCQl9
OwoKCQkJaHNpYyB7CgkJCQljbG9ja3MgPSA8MHgyMSAweGQxPjsKCQkJCWNsb2NrLW5hbWVzID0g
InRyayI7CgkJCQlzdGF0dXMgPSAiZGlzYWJsZWQiOwoKCQkJCWxhbmVzIHsKCgkJCQkJaHNpYy0w
IHsKCQkJCQkJc3RhdHVzID0gImRpc2FibGVkIjsKCQkJCQkJI3BoeS1jZWxscyA9IDwweDA+OwoJ
CQkJCX07CgkJCQl9OwoJCQl9OwoJCX07CgoJCXBvcnRzIHsKCgkJCXVzYjItMCB7CgkJCQlzdGF0
dXMgPSAib2theSI7CgkJCQl2YnVzLXN1cHBseSA9IDwweDQxPjsKCQkJCW1vZGUgPSAib3RnIjsK
CQkJCW52aWRpYSx1c2IzLXBvcnQtZmFrZSA9IDwweDM+OwoJCQl9OwoKCQkJdXNiMi0xIHsKCQkJ
CXN0YXR1cyA9ICJva2F5IjsKCQkJCXZidXMtc3VwcGx5ID0gPDB4NDI+OwoJCQkJbW9kZSA9ICJo
b3N0IjsKCQkJCWxpbnV4LHBoYW5kbGUgPSA8MHhiND47CgkJCQlwaGFuZGxlID0gPDB4YjQ+OwoJ
CQl9OwoKCQkJdXNiMi0yIHsKCQkJCXN0YXR1cyA9ICJva2F5IjsKCQkJCXZidXMtc3VwcGx5ID0g
PDB4NDM+OwoJCQkJbW9kZSA9ICJob3N0IjsKCQkJfTsKCgkJCXVzYjItMyB7CgkJCQlzdGF0dXMg
PSAiZGlzYWJsZWQiOwoJCQl9OwoKCQkJdXNiMy0wIHsKCQkJCXN0YXR1cyA9ICJva2F5IjsKCQkJ
CW52aWRpYSx1c2IyLWNvbXBhbmlvbiA9IDwweDE+OwoJCQl9OwoKCQkJdXNiMy0xIHsKCQkJCXN0
YXR1cyA9ICJkaXNhYmxlZCI7CgkJCX07CgoJCQl1c2IzLTIgewoJCQkJc3RhdHVzID0gImRpc2Fi
bGVkIjsKCQkJfTsKCgkJCXVzYjMtMyB7CgkJCQlzdGF0dXMgPSAiZGlzYWJsZWQiOwoJCQl9OwoK
CQkJaHNpYy0wIHsKCQkJCXN0YXR1cyA9ICJkaXNhYmxlZCI7CgkJCX07CgkJfTsKCgkJcHJvZC1z
ZXR0aW5ncyB7CgkJCSNwcm9kLWNlbGxzID0gPDB4ND47CgoJCQlwcm9kX2NfYmlhcyB7CgkJCQlw
cm9kID0gPDB4MCAweDI4NCAweDNmIDB4M2E+OwoJCQl9OwoKCQkJcHJvZF9jX2JpYXNfYTAyIHsK
CQkJCXByb2QgPSA8MHgwIDB4Mjg0IDB4M2YgMHgzOD47CgkJCX07CgoJCQlwcm9kX2NfdXRtaTAg
ewoJCQkJcHJvZCA9IDwweDAgMHg4NCAweDIwIDB4NDA+OwoJCQl9OwoKCQkJcHJvZF9jX3V0bWkx
IHsKCQkJCXByb2QgPSA8MHgwIDB4YzQgMHgyMCAweDQwPjsKCQkJfTsKCgkJCXByb2RfY191dG1p
MiB7CgkJCQlwcm9kID0gPDB4MCAweDEwNCAweDIwIDB4NDA+OwoJCQl9OwoKCQkJcHJvZF9jX3V0
bWkzIHsKCQkJCXByb2QgPSA8MHgwIDB4MTQ0IDB4MjAgMHg0MD47CgkJCX07CgoJCQlwcm9kX2Nf
c3MwIHsKCQkJCXByb2QgPSA8MHgwIDB4YTYwIDB4MzAwMDAgMHgyMDAwMCAweDAgMHhhNjQgMHhm
ZmZmIDB4ZmMgMHgwIDB4YTY4IDB4ZmZmZmZmZmYgMHhjMDA3N2YxZiAweDAgMHhhNzQgMHhmZmZm
ZmZmZiAweGZjZjAxMzY4PjsKCQkJfTsKCgkJCXByb2RfY19zczEgewoJCQkJcHJvZCA9IDwweDAg
MHhhYTAgMHgzMDAwMCAweDIwMDAwIDB4MCAweGFhNCAweGZmZmYgMHhmYyAweDAgMHhhYTggMHhm
ZmZmZmZmZiAweGMwMDc3ZjFmIDB4MCAweGFiNCAweGZmZmZmZmZmIDB4ZmNmMDEzNjg+OwoJCQl9
OwoKCQkJcHJvZF9jX3NzMiB7CgkJCQlwcm9kID0gPDB4MCAweGFlMCAweDMwMDAwIDB4MjAwMDAg
MHgwIDB4YWU0IDB4ZmZmZiAweGZjIDB4MCAweGFlOCAweGZmZmZmZmZmIDB4YzAwNzdmMWYgMHgw
IDB4YWY0IDB4ZmZmZmZmZmYgMHhmY2YwMTM2OD47CgkJCX07CgoJCQlwcm9kX2Nfc3MzIHsKCQkJ
CXByb2QgPSA8MHgwIDB4YjIwIDB4MzAwMDAgMHgyMDAwMCAweDAgMHhiMjQgMHhmZmZmIDB4ZmMg
MHgwIDB4YjI4IDB4ZmZmZmZmZmYgMHhjMDA3N2YxZiAweDAgMHhiMzQgMHhmZmZmZmZmZiAweGZj
ZjAxMzY4PjsKCQkJfTsKCgkJCXByb2RfY19oc2ljMCB7CgkJCQlwcm9kID0gPDB4MCAweDM0NCAw
eDFmIDB4MWM+OwoJCQl9OwoKCQkJcHJvZF9jX2hzaWMxIHsKCQkJCXByb2QgPSA8MHgwIDB4MzQ0
IDB4MWYgMHgxYz47CgkJCX07CgkJfTsKCX07CgoJdXNiX2NkIHsKCQljb21wYXRpYmxlID0gIm52
aWRpYSx0ZWdyYTIxMC11c2ItY2QiOwoJCW52aWRpYSx4dXNiLXBhZGN0bCA9IDwweDQ0PjsKCQlz
dGF0dXMgPSAiZGlzYWJsZWQiOwoJCXJlZyA9IDwweDAgMHg3MDA5ZjAwMCAweDAgMHgxMDAwPjsK
CQkjZXh0Y29uLWNlbGxzID0gPDB4MT47CgkJZHQtb3ZlcnJpZGUtc3RhdHVzLW9kbS1kYXRhID0g
PDB4MTAwMDAwMCAweDEwMDAwMDA+OwoJCXBoeXMgPSA8MHg0NT47CgkJcGh5LW5hbWVzID0gIm90
Zy1waHkiOwoJCWxpbnV4LHBoYW5kbGUgPSA8MHg5YT47CgkJcGhhbmRsZSA9IDwweDlhPjsKCX07
CgoJcGluY3RybEA3MDA5ZjAwMCB7CgkJY29tcGF0aWJsZSA9ICJudmlkaWEsdGVncmEyMXgtcGFk
Y3RsLXVwaHkiOwoJCXJlZyA9IDwweDAgMHg3MDA5ZjAwMCAweDAgMHgxMDAwPjsKCQlyZWctbmFt
ZXMgPSAicGFkY3RsIjsKCQlyZXNldHMgPSA8MHgyMSAweDhlIDB4MjEgMHhjYyAweDIxIDB4Y2Q+
OwoJCXJlc2V0LW5hbWVzID0gInBhZGN0bCIsICJzYXRhX3VwaHkiLCAicGV4X3VwaHkiOwoJCWNs
b2NrcyA9IDwweDIxIDB4ZDEgMHgyMSAweGQyIDB4MjEgMHgxMDc+OwoJCWNsb2NrLW5hbWVzID0g
ImhzaWNfdHJrIiwgInVzYjJfdHJrIiwgInBsbF9lIjsKCQlpbnRlcnJ1cHRzID0gPDB4MCAweDMx
IDB4ND47CgkJbWJveGVzID0gPDB4NDY+OwoJCW1ib3gtbmFtZXMgPSAieHVzYiI7CgkJI3BoeS1j
ZWxscyA9IDwweDE+OwoJCXN0YXR1cyA9ICJkaXNhYmxlZCI7CgkJbGludXgscGhhbmRsZSA9IDww
eDEwNj47CgkJcGhhbmRsZSA9IDwweDEwNj47Cgl9OwoKCXh1c2JANzAwOTAwMDAgewoJCWNvbXBh
dGlibGUgPSAibnZpZGlhLHRlZ3JhMjEwLXhoY2kiOwoJCXJlZyA9IDwweDAgMHg3MDA5MDAwMCAw
eDAgMHg4MDAwIDB4MCAweDcwMDk4MDAwIDB4MCAweDEwMDAgMHgwIDB4NzAwOTkwMDAgMHgwIDB4
MTAwMD47CgkJaW50ZXJydXB0cyA9IDwweDAgMHgyNyAweDQgMHgwIDB4MjggMHg0IDB4MCAweDMx
IDB4ND47CgkJbnZpZGlhLHh1c2ItcGFkY3RsID0gPDB4NDQ+OwoJCWNsb2NrcyA9IDwweDIxIDB4
NTkgMHgyMSAweDExZCAweDIxIDB4OWMgMHgyMSAweDExZiAweDIxIDB4MTIyIDB4MjEgMHgxMWUg
MHgyMSAweGZmIDB4MjEgMHhlOSAweDIxIDB4MTA3PjsKCQljbG9jay1uYW1lcyA9ICJ4dXNiX2hv
c3QiLCAieHVzYl9mYWxjb25fc3JjIiwgInh1c2Jfc3MiLCAieHVzYl9zc19zcmMiLCAieHVzYl9o
c19zcmMiLCAieHVzYl9mc19zcmMiLCAicGxsX3VfNDgwbSIsICJjbGtfbSIsICJwbGxfZSI7CgkJ
aW9tbXVzID0gPDB4MmIgMHgxND47CgkJc3RhdHVzID0gIm9rYXkiOwoJCWh2ZGRfdXNiLXN1cHBs
eSA9IDwweDQ3PjsKCQlhdmRkX3BsbF91dG1pcC1zdXBwbHkgPSA8MHgzNj47CgkJdmRkaW9faHNp
Yy1zdXBwbHkgPSA8MHgzZD47CgkJYXZkZGlvX3VzYi1zdXBwbHkgPSA8MHgzZj47CgkJZHZkZF9z
YXRhLXN1cHBseSA9IDwweDQwPjsKCQlhdmRkaW9fcGxsX3VlcmVmZS1zdXBwbHkgPSA8MHgzZT47
CgkJZXh0Y29uLWNhYmxlcyA9IDwweDQ4IDB4MT47CgkJZXh0Y29uLWNhYmxlLW5hbWVzID0gImlk
IjsKCQlwaHlzID0gPDB4NDUgMHg0OSAweDRhIDB4NGI+OwoJCXBoeS1uYW1lcyA9ICJ1c2IyLTAi
LCAidXNiMi0xIiwgInVzYjItMiIsICJ1c2IzLTAiOwoJCSNleHRjb24tY2VsbHMgPSA8MHgxPjsK
CQludmlkaWEscG1jLXdha2V1cCA9IDwweDM3IDB4MSAweDI3IDB4NCAweDM3IDB4MSAweDI4IDB4
NCAweDM3IDB4MSAweDI5IDB4NCAweDM3IDB4MSAweDJhIDB4NCAweDM3IDB4MSAweDJjIDB4ND47
CgkJbnZpZGlhLGJvb3N0X2NwdV9mcmVxID0gPDB4NGIwPjsKCX07CgoJbWF4MTY5ODQtY2RwIHsK
CQljb21wYXRpYmxlID0gIm1heGltLG1heDE2OTg0LXRlZ3JhMjEwLWNkcC1waHkiOwoJCSNwaHkt
Y2VsbHMgPSA8MHgxPjsKCQlzdGF0dXMgPSAiZGlzYWJsZWQiOwoJCWxpbnV4LHBoYW5kbGUgPSA8
MHgxMDc+OwoJCXBoYW5kbGUgPSA8MHgxMDc+OwoJfTsKCglzZXJpYWxANzAwMDYwMDAgewoJCWNv
bXBhdGlibGUgPSAibnZpZGlhLHRlZ3JhMjEwLXVhcnQiLCAibnZpZGlhLHRlZ3JhMTE0LWhzdWFy
dCIsICJudmlkaWEsdGVncmEyMC11YXJ0IjsKCQlyZWcgPSA8MHgwIDB4NzAwMDYwMDAgMHgwIDB4
NDA+OwoJCXJlZy1zaGlmdCA9IDwweDI+OwoJCWludGVycnVwdHMgPSA8MHgwIDB4MjQgMHg0PjsK
CQlpb21tdXMgPSA8MHgyYiAweGU+OwoJCWRtYXMgPSA8MHg0YyAweDggMHg0YyAweDg+OwoJCWRt
YS1uYW1lcyA9ICJyeCIsICJ0eCI7CgkJY2xvY2tzID0gPDB4MjEgMHg2IDB4MjEgMHhmMz47CgkJ
Y2xvY2stbmFtZXMgPSAic2VyaWFsIiwgInBhcmVudCI7CgkJbnZpZGlhLGFkanVzdC1iYXVkLXJh
dGVzID0gPDB4MWMyMDAgMHgxYzIwMCAweDY0PjsKCQlzdGF0dXMgPSAib2theSI7CgkJY29uc29s
ZS1wb3J0OwoJCXNxYS1hdXRvbWF0aW9uLXBvcnQ7CgkJZW5hYmxlLXJ4LXBvbGwtdGltZXI7CgkJ
bGludXgscGhhbmRsZSA9IDwweDEwOD47CgkJcGhhbmRsZSA9IDwweDEwOD47Cgl9OwoKCXNlcmlh
bEA3MDAwNjA0MCB7CgkJY29tcGF0aWJsZSA9ICJudmlkaWEsdGVncmExMTQtaHN1YXJ0IjsKCQly
ZWcgPSA8MHgwIDB4NzAwMDYwNDAgMHgwIDB4NDA+OwoJCXJlZy1zaGlmdCA9IDwweDI+OwoJCWlu
dGVycnVwdHMgPSA8MHgwIDB4MjUgMHg0PjsKCQlpb21tdXMgPSA8MHgyYiAweGU+OwoJCWRtYXMg
PSA8MHg0YyAweDkgMHg0YyAweDk+OwoJCWRtYS1uYW1lcyA9ICJyeCIsICJ0eCI7CgkJY2xvY2tz
ID0gPDB4MjEgMHhlMCAweDIxIDB4ZjM+OwoJCWNsb2NrLW5hbWVzID0gInNlcmlhbCIsICJwYXJl
bnQiOwoJCXJlc2V0cyA9IDwweDIxIDB4Nz47CgkJcmVzZXQtbmFtZXMgPSAic2VyaWFsIjsKCQlu
dmlkaWEsYWRqdXN0LWJhdWQtcmF0ZXMgPSA8MHgxYzIwMCAweDFjMjAwIDB4NjQ+OwoJCXN0YXR1
cyA9ICJva2F5IjsKCQlsaW51eCxwaGFuZGxlID0gPDB4MTA5PjsKCQlwaGFuZGxlID0gPDB4MTA5
PjsKCX07CgoJc2VyaWFsQDcwMDA2MjAwIHsKCQljb21wYXRpYmxlID0gIm52aWRpYSx0ZWdyYTEx
NC1oc3VhcnQiOwoJCXJlZyA9IDwweDAgMHg3MDAwNjIwMCAweDAgMHg0MD47CgkJcmVnLXNoaWZ0
ID0gPDB4Mj47CgkJaW50ZXJydXB0cyA9IDwweDAgMHgyZSAweDQ+OwoJCWlvbW11cyA9IDwweDJi
IDB4ZT47CgkJZG1hcyA9IDwweDRjIDB4YSAweDRjIDB4YT47CgkJZG1hLW5hbWVzID0gInR4IjsK
CQljbG9ja3MgPSA8MHgyMSAweDM3IDB4MjEgMHhmMz47CgkJY2xvY2stbmFtZXMgPSAic2VyaWFs
IiwgInBhcmVudCI7CgkJcmVzZXRzID0gPDB4MjEgMHgzNz47CgkJcmVzZXQtbmFtZXMgPSAic2Vy
aWFsIjsKCQludmlkaWEsYWRqdXN0LWJhdWQtcmF0ZXMgPSA8MHhlMTAwMCAweGUxMDAwIDB4NjQ+
OwoJCXN0YXR1cyA9ICJva2F5IjsKCQlsaW51eCxwaGFuZGxlID0gPDB4MTBhPjsKCQlwaGFuZGxl
ID0gPDB4MTBhPjsKCX07CgoJc2VyaWFsQDcwMDA2MzAwIHsKCQljb21wYXRpYmxlID0gIm52aWRp
YSx0ZWdyYTExNC1oc3VhcnQiOwoJCXJlZyA9IDwweDAgMHg3MDAwNjMwMCAweDAgMHg0MD47CgkJ
cmVnLXNoaWZ0ID0gPDB4Mj47CgkJaW50ZXJydXB0cyA9IDwweDAgMHg1YSAweDQ+OwoJCWlvbW11
cyA9IDwweDJiIDB4ZT47CgkJZG1hcyA9IDwweDRjIDB4MTMgMHg0YyAweDEzPjsKCQlkbWEtbmFt
ZXMgPSAicngiLCAidHgiOwoJCWNsb2NrcyA9IDwweDIxIDB4NDEgMHgyMSAweGYzPjsKCQljbG9j
ay1uYW1lcyA9ICJzZXJpYWwiLCAicGFyZW50IjsKCQlyZXNldHMgPSA8MHgyMSAweDQxPjsKCQly
ZXNldC1uYW1lcyA9ICJzZXJpYWwiOwoJCW52aWRpYSxhZGp1c3QtYmF1ZC1yYXRlcyA9IDwweDFj
MjAwIDB4MWMyMDAgMHg2ND47CgkJc3RhdHVzID0gImRpc2FibGVkIjsKCQlsaW51eCxwaGFuZGxl
ID0gPDB4MTBiPjsKCQlwaGFuZGxlID0gPDB4MTBiPjsKCX07CgoJc291bmQgewoJCWlvbW11cyA9
IDwweDJiIDB4MjI+OwoJCWRtYS1tYXNrID0gPDB4MCAweGZmZjAwMDAwPjsKCQlpb21tdS1yZXN2
LXJlZ2lvbnMgPSA8MHgwIDB4MCAweDAgMHg3MDMwMDAwMCAweDAgMHhmZmYwMDAwMCAweGZmZmZm
ZmZmIDB4ZmZmZmZmZmY+OwoJCWlvbW11LWdyb3VwLWlkID0gPDB4Mj47CgkJc3RhdHVzID0gIm9r
YXkiOwoJCWNvbXBhdGlibGUgPSAibnZpZGlhLHRlZ3JhLWF1ZGlvLXQyMTByZWYtbW9iaWxlLXJ0
NTY1eCI7CgkJbnZpZGlhLG1vZGVsID0gInRlZ3JhLXNuZC10MjEwcmVmLW1vYmlsZS1ydDU2NXgi
OwoJCWNsb2NrcyA9IDwweDIxIDB4ZjggMHgyMSAweGY5IDB4MjEgMHg3OD47CgkJY2xvY2stbmFt
ZXMgPSAicGxsX2EiLCAicGxsX2Ffb3V0MCIsICJleHRlcm4xIjsKCQlhc3NpZ25lZC1jbG9ja3Mg
PSA8MHgyMSAweDc4PjsKCQlhc3NpZ25lZC1jbG9jay1wYXJlbnRzID0gPDB4MjEgMHhmOT47CgkJ
bnZpZGlhLG51bS1jb2RlYy1saW5rID0gPDB4ND47CgkJbnZpZGlhLGF1ZGlvLXJvdXRpbmcgPSAi
eCBIZWFkcGhvbmUiLCAieCBPVVQiLCAieCBJTiIsICJ4IE1pYyIsICJ5IEhlYWRwaG9uZSIsICJ5
IE9VVCIsICJ5IElOIiwgInkgTWljIiwgImEgSU4iLCAiYSBNaWMiLCAiYiBJTiIsICJiIE1pYyI7
CgkJbnZpZGlhLHhiYXIgPSA8MHg0ZD47CgkJbWNsay1mcyA9IDwweDEwMD47CgkJbGludXgscGhh
bmRsZSA9IDwweGFmPjsKCQlwaGFuZGxlID0gPDB4YWY+OwoKCQludmlkaWEsZGFpLWxpbmstMSB7
CgkJCWxpbmstbmFtZSA9ICJzcGRpZi1kaXQtMCI7CgkJCWNwdS1kYWkgPSA8MHg0ZT47CgkJCWNv
ZGVjLWRhaSA9IDwweDRmPjsKCQkJY3B1LWRhaS1uYW1lID0gIkkyUzQiOwoJCQljb2RlYy1kYWkt
bmFtZSA9ICJkaXQtaGlmaSI7CgkJCWZvcm1hdCA9ICJpMnMiOwoJCQliaXRjbG9jay1zbGF2ZTsK
CQkJZnJhbWUtc2xhdmU7CgkJCWJpdGNsb2NrLW5vbmludmVyc2lvbjsKCQkJZnJhbWUtbm9uaW52
ZXJzaW9uOwoJCQliaXQtZm9ybWF0ID0gInMxNl9sZSI7CgkJCXNyYXRlID0gPDB4YmI4MD47CgkJ
CW51bS1jaGFubmVsID0gPDB4Mj47CgkJCWlnbm9yZV9zdXNwZW5kOwoJCQluYW1lLXByZWZpeCA9
IFs3OCAwMF07CgkJCXN0YXR1cyA9ICJva2F5IjsKCQkJbGludXgscGhhbmRsZSA9IDwweGRhPjsK
CQkJcGhhbmRsZSA9IDwweGRhPjsKCQl9OwoKCQludmlkaWEsZGFpLWxpbmstMiB7CgkJCWxpbmst
bmFtZSA9ICJzcGRpZi1kaXQtMSI7CgkJCWNwdS1kYWkgPSA8MHg1MD47CgkJCWNvZGVjLWRhaSA9
IDwweDUxPjsKCQkJY3B1LWRhaS1uYW1lID0gIkkyUzMiOwoJCQljb2RlYy1kYWktbmFtZSA9ICJk
aXQtaGlmaSI7CgkJCWZvcm1hdCA9ICJpMnMiOwoJCQliaXRjbG9jay1zbGF2ZTsKCQkJZnJhbWUt
c2xhdmU7CgkJCWJpdGNsb2NrLW5vbmludmVyc2lvbjsKCQkJZnJhbWUtbm9uaW52ZXJzaW9uOwoJ
CQliaXQtZm9ybWF0ID0gInMxNl9sZSI7CgkJCXNyYXRlID0gPDB4YmI4MD47CgkJCW51bS1jaGFu
bmVsID0gPDB4Mj47CgkJCWlnbm9yZV9zdXNwZW5kOwoJCQluYW1lLXByZWZpeCA9IFs3OSAwMF07
CgkJCXN0YXR1cyA9ICJva2F5IjsKCQl9OwoKCQludmlkaWEsZGFpLWxpbmstMyB7CgkJCWxpbmst
bmFtZSA9ICJzcGRpZi1kaXQtMiI7CgkJCWNwdS1kYWkgPSA8MHg1Mj47CgkJCWNvZGVjLWRhaSA9
IDwweDUzPjsKCQkJY3B1LWRhaS1uYW1lID0gIkRNSUMxIjsKCQkJY29kZWMtZGFpLW5hbWUgPSAi
ZGl0LWhpZmkiOwoJCQlmb3JtYXQgPSAiaTJzIjsKCQkJYml0LWZvcm1hdCA9ICJzMTZfbGUiOwoJ
CQlzcmF0ZSA9IDwweGJiODA+OwoJCQlpZ25vcmVfc3VzcGVuZDsKCQkJbnVtLWNoYW5uZWwgPSA8
MHgyPjsKCQkJbmFtZS1wcmVmaXggPSBbNjEgMDBdOwoJCQlzdGF0dXMgPSAib2theSI7CgkJfTsK
CgkJbnZpZGlhLGRhaS1saW5rLTQgewoJCQlsaW5rLW5hbWUgPSAic3BkaWYtZGl0LTMiOwoJCQlj
cHUtZGFpID0gPDB4NTQ+OwoJCQljb2RlYy1kYWkgPSA8MHg1NT47CgkJCWNwdS1kYWktbmFtZSA9
ICJETUlDMiI7CgkJCWNvZGVjLWRhaS1uYW1lID0gImRpdC1oaWZpIjsKCQkJZm9ybWF0ID0gImky
cyI7CgkJCWJpdC1mb3JtYXQgPSAiczE2X2xlIjsKCQkJc3JhdGUgPSA8MHhiYjgwPjsKCQkJaWdu
b3JlX3N1c3BlbmQ7CgkJCW51bS1jaGFubmVsID0gPDB4Mj47CgkJCW5hbWUtcHJlZml4ID0gWzYy
IDAwXTsKCQkJc3RhdHVzID0gIm9rYXkiOwoJCX07Cgl9OwoKCXNvdW5kX3JlZiB7CgkJaW9tbXVz
ID0gPDB4MmIgMHgyMj47CgkJZG1hLW1hc2sgPSA8MHgwIDB4ZmZmMDAwMDA+OwoJCWlvbW11LXJl
c3YtcmVnaW9ucyA9IDwweDAgMHgwIDB4MCAweDcwMzAwMDAwIDB4MCAweGZmZjAwMDAwIDB4ZmZm
ZmZmZmYgMHhmZmZmZmZmZj47CgkJaW9tbXUtZ3JvdXAtaWQgPSA8MHgyPjsKCQlzdGF0dXMgPSAi
b2theSI7Cgl9OwoKCXB3bUA3MDAwYTAwMCB7CgkJY29tcGF0aWJsZSA9ICJudmlkaWEsdGVncmEx
MjQtcHdtIiwgIm52aWRpYSx0ZWdyYTIwLXB3bSI7CgkJcmVnID0gPDB4MCAweDcwMDBhMDAwIDB4
MCAweDEwMD47CgkJI3B3bS1jZWxscyA9IDwweDI+OwoJCXN0YXR1cyA9ICJva2F5IjsKCQljbG9j
a3MgPSA8MHgyMSAweDExIDB4MjEgMHhmMz47CgkJY2xvY2stbmFtZXMgPSAicHdtIiwgInBhcmVu
dCI7CgkJcmVzZXRzID0gPDB4MjEgMHgxMT47CgkJcmVzZXQtbmFtZXMgPSAicHdtIjsKCQludmlk
aWEsbm8tY2xrLXNsZWVwaW5nLWluLW9wczsKCQlsaW51eCxwaGFuZGxlID0gPDB4YTU+OwoJCXBo
YW5kbGUgPSA8MHhhNT47Cgl9OwoKCXNwaUA3MDAwZDQwMCB7CgkJY29tcGF0aWJsZSA9ICJudmlk
aWEsdGVncmEyMTAtc3BpIjsKCQlyZWcgPSA8MHgwIDB4NzAwMGQ0MDAgMHgwIDB4MjAwPjsKCQlp
bnRlcnJ1cHRzID0gPDB4MCAweDNiIDB4ND47CgkJaW9tbXVzID0gPDB4MmIgMHhlPjsKCQkjYWRk
cmVzcy1jZWxscyA9IDwweDE+OwoJCSNzaXplLWNlbGxzID0gPDB4MD47CgkJZG1hcyA9IDwweDRj
IDB4ZiAweDRjIDB4Zj47CgkJZG1hLW5hbWVzID0gInJ4IiwgInR4IjsKCQludmlkaWEsY2xrLXBh
cmVudHMgPSAicGxsX3AiLCAiY2xrX20iOwoJCWNsb2NrcyA9IDwweDIxIDB4MjkgMHgyMSAweGYz
IDB4MjEgMHhlOT47CgkJY2xvY2stbmFtZXMgPSAic3BpIiwgInBsbF9wIiwgImNsa19tIjsKCQly
ZXNldHMgPSA8MHgyMSAweDI5PjsKCQlyZXNldC1uYW1lcyA9ICJzcGkiOwoJCXN0YXR1cyA9ICJv
a2F5IjsKCQlsaW51eCxwaGFuZGxlID0gPDB4MTBjPjsKCQlwaGFuZGxlID0gPDB4MTBjPjsKCgkJ
cHJvZC1zZXR0aW5ncyB7CgkJCSNwcm9kLWNlbGxzID0gPDB4Mz47CgoJCQlwcm9kIHsKCQkJCXBy
b2QgPSA8MHg0IDB4ZmZmIDB4MD47CgkJCX07CgoJCQlwcm9kX2NfZmxhc2ggewoJCQkJc3RhdHVz
ID0gImRpc2FibGVkIjsKCQkJCXByb2QgPSA8MHg0IDB4M2YgMHg3PjsKCQkJfTsKCgkJCXByb2Rf
Y19sb29wIHsKCQkJCXN0YXR1cyA9ICJkaXNhYmxlZCI7CgkJCQlwcm9kID0gPDB4NCAweGZmZiAw
eDQ0Yj47CgkJCX07CgkJfTsKCgkJc3BpQDAgewoJCQljb21wYXRpYmxlID0gInNwaWRldiI7CgkJ
CXJlZyA9IDwweDA+OwoJCQlzcGktbWF4LWZyZXF1ZW5jeSA9IDwweDFmNzhhNDA+OwoKCQkJY29u
dHJvbGxlci1kYXRhIHsKCQkJCW52aWRpYSxlbmFibGUtaHctYmFzZWQtY3M7CgkJCQludmlkaWEs
cngtY2xrLXRhcC1kZWxheSA9IDwweDc+OwoJCQl9OwoJCX07CgoJCXNwaUAxIHsKCQkJY29tcGF0
aWJsZSA9ICJzcGlkZXYiOwoJCQlyZWcgPSA8MHgxPjsKCQkJc3BpLW1heC1mcmVxdWVuY3kgPSA8
MHgxZjc4YTQwPjsKCgkJCWNvbnRyb2xsZXItZGF0YSB7CgkJCQludmlkaWEsZW5hYmxlLWh3LWJh
c2VkLWNzOwoJCQkJbnZpZGlhLHJ4LWNsay10YXAtZGVsYXkgPSA8MHg3PjsKCQkJfTsKCQl9OwoJ
fTsKCglzcGlANzAwMGQ2MDAgewoJCWNvbXBhdGlibGUgPSAibnZpZGlhLHRlZ3JhMjEwLXNwaSI7
CgkJcmVnID0gPDB4MCAweDcwMDBkNjAwIDB4MCAweDIwMD47CgkJaW50ZXJydXB0cyA9IDwweDAg
MHg1MiAweDQ+OwoJCWlvbW11cyA9IDwweDJiIDB4ZT47CgkJI2FkZHJlc3MtY2VsbHMgPSA8MHgx
PjsKCQkjc2l6ZS1jZWxscyA9IDwweDA+OwoJCWRtYXMgPSA8MHg0YyAweDEwIDB4NGMgMHgxMD47
CgkJZG1hLW5hbWVzID0gInJ4IiwgInR4IjsKCQludmlkaWEsY2xrLXBhcmVudHMgPSAicGxsX3Ai
LCAiY2xrX20iOwoJCWNsb2NrcyA9IDwweDIxIDB4MmMgMHgyMSAweGYzIDB4MjEgMHhlOT47CgkJ
Y2xvY2stbmFtZXMgPSAic3BpIiwgInBsbF9wIiwgImNsa19tIjsKCQlyZXNldHMgPSA8MHgyMSAw
eDJjPjsKCQlyZXNldC1uYW1lcyA9ICJzcGkiOwoJCXN0YXR1cyA9ICJva2F5IjsKCQlsaW51eCxw
aGFuZGxlID0gPDB4MTBkPjsKCQlwaGFuZGxlID0gPDB4MTBkPjsKCgkJcHJvZC1zZXR0aW5ncyB7
CgkJCSNwcm9kLWNlbGxzID0gPDB4Mz47CgoJCQlwcm9kIHsKCQkJCXByb2QgPSA8MHg0IDB4ZmZm
IDB4MD47CgkJCX07CgoJCQlwcm9kX2NfZmxhc2ggewoJCQkJc3RhdHVzID0gImRpc2FibGVkIjsK
CQkJCXByb2QgPSA8MHg0IDB4M2YgMHg2PjsKCQkJfTsKCgkJCXByb2RfY19sb29wIHsKCQkJCXN0
YXR1cyA9ICJkaXNhYmxlZCI7CgkJCQlwcm9kID0gPDB4NCAweGZmZiAweDQ0Yj47CgkJCX07CgkJ
fTsKCgkJc3BpQDAgewoJCQljb21wYXRpYmxlID0gInNwaWRldiI7CgkJCXJlZyA9IDwweDA+OwoJ
CQlzcGktbWF4LWZyZXF1ZW5jeSA9IDwweDFmNzhhNDA+OwoKCQkJY29udHJvbGxlci1kYXRhIHsK
CQkJCW52aWRpYSxlbmFibGUtaHctYmFzZWQtY3M7CgkJCQludmlkaWEscngtY2xrLXRhcC1kZWxh
eSA9IDwweDY+OwoJCQl9OwoJCX07CgoJCXNwaUAxIHsKCQkJY29tcGF0aWJsZSA9ICJzcGlkZXYi
OwoJCQlyZWcgPSA8MHgxPjsKCQkJc3BpLW1heC1mcmVxdWVuY3kgPSA8MHgxZjc4YTQwPjsKCgkJ
CWNvbnRyb2xsZXItZGF0YSB7CgkJCQludmlkaWEsZW5hYmxlLWh3LWJhc2VkLWNzOwoJCQkJbnZp
ZGlhLHJ4LWNsay10YXAtZGVsYXkgPSA8MHg2PjsKCQkJfTsKCQl9OwoJfTsKCglzcGlANzAwMGQ4
MDAgewoJCWNvbXBhdGlibGUgPSAibnZpZGlhLHRlZ3JhMjEwLXNwaSI7CgkJcmVnID0gPDB4MCAw
eDcwMDBkODAwIDB4MCAweDIwMD47CgkJaW50ZXJydXB0cyA9IDwweDAgMHg1MyAweDQ+OwoJCWlv
bW11cyA9IDwweDJiIDB4ZT47CgkJI2FkZHJlc3MtY2VsbHMgPSA8MHgxPjsKCQkjc2l6ZS1jZWxs
cyA9IDwweDA+OwoJCWRtYXMgPSA8MHg0YyAweDExIDB4NGMgMHgxMT47CgkJZG1hLW5hbWVzID0g
InJ4IiwgInR4IjsKCQludmlkaWEsY2xrLXBhcmVudHMgPSAicGxsX3AiLCAiY2xrX20iOwoJCWNs
b2NrcyA9IDwweDIxIDB4MmUgMHgyMSAweGYzIDB4MjEgMHhlOT47CgkJY2xvY2stbmFtZXMgPSAi
c3BpIiwgInBsbF9wIiwgImNsa19tIjsKCQlyZXNldHMgPSA8MHgyMSAweDJlPjsKCQlyZXNldC1u
YW1lcyA9ICJzcGkiOwoJCXN0YXR1cyA9ICJkaXNhYmxlZCI7CgkJbGludXgscGhhbmRsZSA9IDww
eDEwZT47CgkJcGhhbmRsZSA9IDwweDEwZT47CgoJCXByb2Qtc2V0dGluZ3MgewoJCQkjcHJvZC1j
ZWxscyA9IDwweDM+OwoKCQkJcHJvZCB7CgkJCQlwcm9kID0gPDB4NCAweGZmZiAweDA+OwoJCQl9
OwoKCQkJcHJvZF9jX2ZsYXNoIHsKCQkJCXN0YXR1cyA9ICJkaXNhYmxlZCI7CgkJCQlwcm9kID0g
PDB4NCAweDNmIDB4OD47CgkJCX07CgoJCQlwcm9kX2NfbG9vcCB7CgkJCQlzdGF0dXMgPSAiZGlz
YWJsZWQiOwoJCQkJcHJvZCA9IDwweDQgMHhmZmYgMHg0NGI+OwoJCQl9OwoJCX07Cgl9OwoKCXNw
aUA3MDAwZGEwMCB7CgkJY29tcGF0aWJsZSA9ICJudmlkaWEsdGVncmEyMTAtc3BpIjsKCQlyZWcg
PSA8MHgwIDB4NzAwMGRhMDAgMHgwIDB4MjAwPjsKCQlpbnRlcnJ1cHRzID0gPDB4MCAweDVkIDB4
ND47CgkJaW9tbXVzID0gPDB4MmIgMHhlPjsKCQkjYWRkcmVzcy1jZWxscyA9IDwweDE+OwoJCSNz
aXplLWNlbGxzID0gPDB4MD47CgkJZG1hcyA9IDwweDRjIDB4MTIgMHg0YyAweDEyPjsKCQlkbWEt
bmFtZXMgPSAicngiLCAidHgiOwoJCW52aWRpYSxjbGstcGFyZW50cyA9ICJwbGxfcCIsICJjbGtf
bSI7CgkJY2xvY2tzID0gPDB4MjEgMHg0NCAweDIxIDB4ZjMgMHgyMSAweGU5PjsKCQljbG9jay1u
YW1lcyA9ICJzcGkiLCAicGxsX3AiLCAiY2xrX20iOwoJCXJlc2V0cyA9IDwweDIxIDB4NDQ+OwoJ
CXJlc2V0LW5hbWVzID0gInNwaSI7CgkJc3RhdHVzID0gImRpc2FibGVkIjsKCQlzcGktbWF4LWZy
ZXF1ZW5jeSA9IDwweGI3MWIwMD47CgkJbGludXgscGhhbmRsZSA9IDwweDEwZj47CgkJcGhhbmRs
ZSA9IDwweDEwZj47CgoJCXByb2Qtc2V0dGluZ3MgewoJCQkjcHJvZC1jZWxscyA9IDwweDM+OwoK
CQkJcHJvZCB7CgkJCQlwcm9kID0gPDB4NCAweGZmZiAweDA+OwoJCQl9OwoKCQkJcHJvZF9jX2Zs
YXNoIHsKCQkJCXN0YXR1cyA9ICJkaXNhYmxlZCI7CgkJCQlwcm9kID0gPDB4NCAweGZmZiAweDQ0
Yj47CgkJCX07CgoJCQlwcm9kX2NfY3MwIHsKCQkJCXByb2QgPSA8MHg0IDB4ZmMwIDB4NDAwPjsK
CQkJfTsKCQl9OwoKCQlzcGktdG91Y2gxOXgxMkAwIHsKCQkJY29tcGF0aWJsZSA9ICJyYXlkaXVt
LHJtX3RzX3NwaWRldiI7CgkJCXN0YXR1cyA9ICJkaXNhYmxlZCI7CgkJCXJlZyA9IDwweDA+OwoJ
CQlzcGktbWF4LWZyZXF1ZW5jeSA9IDwweGI3MWIwMD47CgkJCWludGVycnVwdC1wYXJlbnQgPSA8
MHg1Nj47CgkJCWludGVycnVwdHMgPSA8MHhiOSAweDE+OwoJCQlyZXNldC1ncGlvID0gPDB4NTYg
MHhhZSAweDA+OwoJCQljb25maWcgPSA8MHgwPjsKCQkJcGxhdGZvcm0taWQgPSA8MHhkPjsKCQkJ
bmFtZS1vZi1jbG9jayA9ICJjbGtfb3V0XzIiOwoJCQluYW1lLW9mLWNsb2NrLWNvbiA9ICJleHRl
cm4yIjsKCQkJYXZkZC1zdXBwbHkgPSA8MHg1Nz47CgkJCWR2ZGQtc3VwcGx5ID0gPDB4NTg+OwoJ
CX07CgoJCXNwaS10b3VjaDI1eDE2QDAgewoJCQljb21wYXRpYmxlID0gInJheWRpdW0scm1fdHNf
c3BpZGV2IjsKCQkJc3RhdHVzID0gImRpc2FibGVkIjsKCQkJcmVnID0gPDB4MD47CgkJCXNwaS1t
YXgtZnJlcXVlbmN5ID0gPDB4YjcxYjAwPjsKCQkJaW50ZXJydXB0LXBhcmVudCA9IDwweDU2PjsK
CQkJaW50ZXJydXB0cyA9IDwweGI5IDB4MT47CgkJCXJlc2V0LWdwaW8gPSA8MHg1NiAweGFlIDB4
MD47CgkJCWNvbmZpZyA9IDwweDA+OwoJCQlwbGF0Zm9ybS1pZCA9IDwweDg+OwoJCQluYW1lLW9m
LWNsb2NrID0gImNsa19vdXRfMiI7CgkJCW5hbWUtb2YtY2xvY2stY29uID0gImV4dGVybjIiOwoJ
CQlhdmRkLXN1cHBseSA9IDwweDU3PjsKCQkJZHZkZC1zdXBwbHkgPSA8MHg1OD47CgkJfTsKCgkJ
c3BpLXRvdWNoMTR4OEAwIHsKCQkJY29tcGF0aWJsZSA9ICJyYXlkaXVtLHJtX3RzX3NwaWRldiI7
CgkJCXN0YXR1cyA9ICJkaXNhYmxlZCI7CgkJCXJlZyA9IDwweDA+OwoJCQlzcGktbWF4LWZyZXF1
ZW5jeSA9IDwweGI3MWIwMD47CgkJCWludGVycnVwdC1wYXJlbnQgPSA8MHg1Nj47CgkJCWludGVy
cnVwdHMgPSA8MHhiOSAweDE+OwoJCQlyZXNldC1ncGlvID0gPDB4NTYgMHhhZSAweDA+OwoJCQlj
b25maWcgPSA8MHgwPjsKCQkJcGxhdGZvcm0taWQgPSA8MHg5PjsKCQkJbmFtZS1vZi1jbG9jayA9
ICJjbGtfb3V0XzIiOwoJCQluYW1lLW9mLWNsb2NrLWNvbiA9ICJleHRlcm4yIjsKCQkJYXZkZC1z
dXBwbHkgPSA8MHg1Nz47CgkJCWR2ZGQtc3VwcGx5ID0gPDB4NTg+OwoJCX07Cgl9OwoKCXNwaUA3
MDQxMDAwMCB7CgkJY29tcGF0aWJsZSA9ICJudmlkaWEsdGVncmEyMTAtcXNwaSI7CgkJcmVnID0g
PDB4MCAweDcwNDEwMDAwIDB4MCAweDEwMDA+OwoJCWludGVycnVwdHMgPSA8MHgwIDB4YSAweDQ+
OwoJCWlvbW11cyA9IDwweDJiIDB4ZT47CgkJI2FkZHJlc3MtY2VsbHMgPSA8MHgxPjsKCQkjc2l6
ZS1jZWxscyA9IDwweDA+OwoJCWRtYXMgPSA8MHg0YyAweDUgMHg0YyAweDU+OwoJCWRtYS1uYW1l
cyA9ICJyeCIsICJ0eCI7CgkJY2xvY2tzID0gPDB4MjEgMHhkMyAweDIxIDB4MTE5PjsKCQljbG9j
ay1uYW1lcyA9ICJxc3BpIiwgInFzcGlfb3V0IjsKCQlyZXNldHMgPSA8MHgyMSAweGQzPjsKCQly
ZXNldC1uYW1lcyA9ICJxc3BpIjsKCQlzdGF0dXMgPSAib2theSI7CgkJc3BpLW1heC1mcmVxdWVu
Y3kgPSA8MHg2MzJlYTAwPjsKCQlsaW51eCxwaGFuZGxlID0gPDB4MTEwPjsKCQlwaGFuZGxlID0g
PDB4MTEwPjsKCgkJc3BpZmxhc2hAMCB7CgkJCSNhZGRyZXNzLWNlbGxzID0gPDB4MT47CgkJCSNz
aXplLWNlbGxzID0gPDB4MT47CgkJCWNvbXBhdGlibGUgPSAiTVgyNVUzMjM1RiI7CgkJCXJlZyA9
IDwweDA+OwoJCQlzcGktbWF4LWZyZXF1ZW5jeSA9IDwweDYzMmVhMDA+OwoKCQkJY29udHJvbGxl
ci1kYXRhIHsKCQkJCW52aWRpYSx4MS1sZW4tbGltaXQgPSA8MHgxMD47CgkJCQludmlkaWEseDEt
YnVzLXNwZWVkID0gPDB4NjMyZWEwMD47CgkJCQludmlkaWEseDEtZHltbXktY3ljbGUgPSA8MHg4
PjsKCQkJCW52aWRpYSxjdHJsLWJ1cy1jbGstcmF0aW8gPSBbMDFdOwoJCQkJbnZpZGlhLHg0LWJ1
cy1zcGVlZCA9IDwweDYzMmVhMDA+OwoJCQkJbnZpZGlhLHg0LWR5bW15LWN5Y2xlID0gPDB4OD47
CgkJCX07CgkJfTsKCX07CgoJaG9zdDF4IHsKCQljb21wYXRpYmxlID0gIm52aWRpYSx0ZWdyYTIx
MC1ob3N0MXgiLCAic2ltcGxlLWJ1cyI7CgkJcG93ZXItZG9tYWlucyA9IDwweDIzPjsKCQl3YWtl
dXAtY2FwYWJsZTsKCQlyZWcgPSA8MHgwIDB4NTAwMDAwMDAgMHgwIDB4MzQwMDA+OwoJCWludGVy
cnVwdHMgPSA8MHgwIDB4NDEgMHg0IDB4MCAweDQzIDB4ND47CgkJaW9tbXVzID0gPDB4MmIgMHg2
PjsKCQkjYWRkcmVzcy1jZWxscyA9IDwweDI+OwoJCSNzaXplLWNlbGxzID0gPDB4Mj47CgkJc3Rh
dHVzID0gIm9rYXkiOwoJCXJhbmdlczsKCQljbG9ja3MgPSA8MHgyMSAweDFmMiAweDIxIDB4Nzc+
OwoJCWNsb2NrLW5hbWVzID0gImhvc3QxeCIsICJhY3Rtb24iOwoJCXJlc2V0cyA9IDwweDIxIDB4
MWM+OwoJCW52aWRpYSxjaC1iYXNlID0gPDB4MD47CgkJbnZpZGlhLG5iLWNoYW5uZWxzID0gPDB4
Yz47CgkJbnZpZGlhLG5iLWh3LXB0cyA9IDwweGMwPjsKCQludmlkaWEscHRzLWJhc2UgPSA8MHgw
PjsKCQludmlkaWEsbmItcHRzID0gPDB4YzA+OwoJCWFzc2lnbmVkLWNsb2NrcyA9IDwweDIxIDB4
N2EgMHgyMSAweDkyIDB4MjEgMHg5MSAweDIxIDB4OTAgMHgyMSAweGQwIDB4MjEgMHgxNjYgMHgy
MSAweGU0IDB4MjEgMHgxNDIgMHgyMSAweDM+OwoJCWFzc2lnbmVkLWNsb2NrLXBhcmVudHMgPSA8
MHgyMSAweGYzIDB4MjEgMHhmMyAweDIxIDB4ZjMgMHgyMSAweGYzIDB4MjEgMHhmMyAweDIxIDB4
N2EgMHgyMSAweGVkIDB4MjEgMHhlZCAweDIxIDB4MTQyPjsKCQlhc3NpZ25lZC1jbG9jay1yYXRl
cyA9IDwweDE2ZTM2MDAgMHg2MTQ2NTgwIDB4NjE0NjU4MCAweDYxNDY1ODAgMHg2MTQ2NTgwIDB4
MTZlMzYwMCAweDE4NTE5NjAwIDB4MTg1MTk2MDAgMHgwPjsKCQlsaW51eCxwaGFuZGxlID0gPDB4
Nzg+OwoJCXBoYW5kbGUgPSA8MHg3OD47CgoJCXZpIHsKCQkJY29tcGF0aWJsZSA9ICJudmlkaWEs
dGVncmEyMTAtdmkiLCAic2ltcGxlLWJ1cyI7CgkJCXBvd2VyLWRvbWFpbnMgPSA8MHg1OT47CgkJ
CXJlZyA9IDwweDAgMHg1NDA4MDAwMCAweDAgMHg0MDAwMD47CgkJCWludGVycnVwdHMgPSA8MHgw
IDB4NDUgMHg0PjsKCQkJaW9tbXVzID0gPDB4MmIgMHgxMj47CgkJCXN0YXR1cyA9ICJva2F5IjsK
CQkJY2xvY2tzID0gPDB4MjEgMHgyMTAgMHgyMSAweDM0IDB4MjEgMHg5MCAweDIxIDB4OTEgMHgy
MSAweDkyIDB4MjEgMHhkMCAweDIxIDB4NTEgMHgyMSAweGZhIDB4MjEgMHgxMzM+OwoJCQljbG9j
ay1uYW1lcyA9ICJ2aSIsICJjc2kiLCAiY2lsYWIiLCAiY2lsY2QiLCAiY2lsZSIsICJ2aWkyYyIs
ICJpMmNzbG93IiwgInBsbF9kIiwgInBsbF9kX2RzaV9vdXQiOwoJCQlyZXNldHMgPSA8MHgyMSAw
eDE0PjsKCQkJcmVzZXQtbmFtZXMgPSAidmkiOwoJCQkjYWRkcmVzcy1jZWxscyA9IDwweDE+OwoJ
CQkjc2l6ZS1jZWxscyA9IDwweDA+OwoJCQlhdmRkX2RzaV9jc2ktc3VwcGx5ID0gPDB4MzY+OwoJ
CQludW0tY2hhbm5lbHMgPSA8MHgyPjsKCQkJbGludXgscGhhbmRsZSA9IDwweGJkPjsKCQkJcGhh
bmRsZSA9IDwweGJkPjsKCgkJCXBvcnRzIHsKCQkJCSNhZGRyZXNzLWNlbGxzID0gPDB4MT47CgkJ
CQkjc2l6ZS1jZWxscyA9IDwweDA+OwoKCQkJCXBvcnRAMCB7CgkJCQkJcmVnID0gPDB4MD47CgkJ
CQkJbGludXgscGhhbmRsZSA9IDwweGJlPjsKCQkJCQlwaGFuZGxlID0gPDB4YmU+OwoKCQkJCQll
bmRwb2ludCB7CgkJCQkJCXBvcnQtaW5kZXggPSA8MHgwPjsKCQkJCQkJYnVzLXdpZHRoID0gPDB4
Mj47CgkJCQkJCXJlbW90ZS1lbmRwb2ludCA9IDwweDVhPjsKCQkJCQkJbGludXgscGhhbmRsZSA9
IDwweDc1PjsKCQkJCQkJcGhhbmRsZSA9IDwweDc1PjsKCQkJCQl9OwoJCQkJfTsKCgkJCQlwb3J0
QDEgewoJCQkJCXJlZyA9IDwweDE+OwoJCQkJCWxpbnV4LHBoYW5kbGUgPSA8MHhjZD47CgkJCQkJ
cGhhbmRsZSA9IDwweGNkPjsKCgkJCQkJZW5kcG9pbnQgewoJCQkJCQlwb3J0LWluZGV4ID0gPDB4
ND47CgkJCQkJCWJ1cy13aWR0aCA9IDwweDI+OwoJCQkJCQlyZW1vdGUtZW5kcG9pbnQgPSA8MHg1
Yj47CgkJCQkJCWxpbnV4LHBoYW5kbGUgPSA8MHg3Nz47CgkJCQkJCXBoYW5kbGUgPSA8MHg3Nz47
CgkJCQkJfTsKCQkJCX07CgkJCX07CgkJfTsKCgkJdmktYnlwYXNzIHsKCQkJY29tcGF0aWJsZSA9
ICJudmlkaWEsdGVncmEyMTAtdmktYnlwYXNzIjsKCQkJc3RhdHVzID0gIm9rYXkiOwoJCX07CgoJ
CWlzcEA1NDYwMDAwMCB7CgkJCWNvbXBhdGlibGUgPSAibnZpZGlhLHRlZ3JhMjEwLWlzcCI7CgkJ
CXBvd2VyLWRvbWFpbnMgPSA8MHg1OT47CgkJCXJlZyA9IDwweDAgMHg1NDYwMDAwMCAweDAgMHg0
MDAwMD47CgkJCWludGVycnVwdHMgPSA8MHgwIDB4NDcgMHg0PjsKCQkJaW9tbXVzID0gPDB4MmIg
MHg4PjsKCQkJc3RhdHVzID0gIm9rYXkiOwoJCQljbG9ja3MgPSA8MHgyMSAweDFhYj47CgkJCWNs
b2NrLW5hbWVzID0gImlzcGEiOwoJCQlyZXNldHMgPSA8MHgyMSAweDE3PjsKCQl9OwoKCQlpc3BA
NTQ2ODAwMDAgewoJCQljb21wYXRpYmxlID0gIm52aWRpYSx0ZWdyYTIxMC1pc3AiOwoJCQlwb3dl
ci1kb21haW5zID0gPDB4NWM+OwoJCQlyZWcgPSA8MHgwIDB4NTQ2ODAwMDAgMHgwIDB4NDAwMDA+
OwoJCQlpbnRlcnJ1cHRzID0gPDB4MCAweDQ2IDB4ND47CgkJCWlvbW11cyA9IDwweDJiIDB4MWQ+
OwoJCQlzdGF0dXMgPSAib2theSI7CgkJCWNsb2NrcyA9IDwweDIxIDB4MWFjPjsKCQkJY2xvY2st
bmFtZXMgPSAiaXNwYiI7CgkJCXJlc2V0cyA9IDwweDIxIDB4Mz47CgkJfTsKCgkJZGNANTQyMDAw
MDAgewoJCQljb21wYXRpYmxlID0gIm52aWRpYSx0ZWdyYTIxMC1kYyI7CgkJCWF1eC1kZXZpY2Ut
bmFtZSA9ICJ0ZWdyYWRjLjAiOwoJCQlyZWcgPSA8MHgwIDB4NTQyMDAwMDAgMHgwIDB4NDAwMDA+
OwoJCQlpbnRlcnJ1cHRzID0gPDB4MCAweDQ5IDB4ND47CgkJCXdpbi1tYXNrID0gPDB4Nz47CgkJ
CW52aWRpYSxmYi13aW4gPSA8MHgwPjsKCQkJaW9tbXVzID0gPDB4MmIgMHgyIDB4MmIgMHhhPjsK
CQkJY2xvY2tzID0gPDB4MjEgMHgxYiAweDIxIDB4NSAweDIxIDB4MWM1IDB4MjEgMHgxYzcgMHgy
MSAweGY2IDB4MjEgMHhmYiAweDIxIDB4ZmE+OwoJCQljbG9jay1uYW1lcyA9ICJkaXNwMSIsICJ0
aW1lciIsICJkaXNwMV9lbWMiLCAiZGlzcDFfbGFfZW1jIiwgInBsbF9wX291dDMiLCAicGxsX2Rf
b3V0MCIsICJwbGxfZCI7CgkJCXJlc2V0cyA9IDwweDIxIDB4MWI+OwoJCQlyZXNldC1uYW1lcyA9
ICJkY19yc3QiOwoJCQlzdGF0dXMgPSAib2theSI7CgkJCW52aWRpYSxkYy1jdHJsbnVtID0gPDB4
MD47CgkJCWZiX3Jlc2VydmVkID0gPDB4NWQ+OwoJCQlpb21tdS1kaXJlY3QtcmVnaW9ucyA9IDww
eDVkPjsKCQkJcGluY3RybC1uYW1lcyA9ICJkc2ktZHBkLWRpc2FibGUiLCAiZHNpLWRwZC1lbmFi
bGUiLCAiZHNpYi1kcGQtZGlzYWJsZSIsICJkc2liLWRwZC1lbmFibGUiLCAiaGRtaS1kcGQtZGlz
YWJsZSIsICJoZG1pLWRwZC1lbmFibGUiOwoJCQlwaW5jdHJsLTAgPSA8MHg1ZT47CgkJCXBpbmN0
cmwtMSA9IDwweDVmPjsKCQkJcGluY3RybC0yID0gPDB4NjA+OwoJCQlwaW5jdHJsLTMgPSA8MHg2
MT47CgkJCXBpbmN0cmwtNCA9IDwweDYyPjsKCQkJcGluY3RybC01ID0gPDB4NjM+OwoJCQlhdmRk
X2hkbWktc3VwcGx5ID0gPDB4NDA+OwoJCQlhdmRkX2hkbWlfcGxsLXN1cHBseSA9IDwweDM2PjsK
CQkJdmRkX2hkbWlfNXYwLXN1cHBseSA9IDwweDY0PjsKCQkJbnZpZGlhLGRjLWZsYWdzID0gPDB4
MT47CgkJCW52aWRpYSxlbWMtY2xrLXJhdGUgPSA8MHgxMWUxYTMwMD47CgkJCW52aWRpYSxjbXUt
ZW5hYmxlID0gPDB4MT47CgkJCW52aWRpYSxmYi1icHAgPSA8MHgyMD47CgkJCW52aWRpYSxmYi1m
bGFncyA9IDwweDE+OwoJCQludmlkaWEsZGMtb3Itbm9kZSA9ICIvaG9zdDF4L3NvcjEiOwoJCQlu
dmlkaWEsZGMtY29ubmVjdG9yID0gPDB4NjU+OwoJCQlsaW51eCxwaGFuZGxlID0gPDB4MTExPjsK
CQkJcGhhbmRsZSA9IDwweDExMT47CgoJCQlyZ2IgewoJCQkJc3RhdHVzID0gImRpc2FibGVkIjsK
CQkJfTsKCQl9OwoKCQlkY0A1NDI0MDAwMCB7CgkJCWNvbXBhdGlibGUgPSAibnZpZGlhLHRlZ3Jh
MjEwLWRjIjsKCQkJYXV4LWRldmljZS1uYW1lID0gInRlZ3JhZGMuMSI7CgkJCXJlZyA9IDwweDAg
MHg1NDI0MDAwMCAweDAgMHg0MDAwMD47CgkJCWludGVycnVwdHMgPSA8MHgwIDB4NGEgMHg0PjsK
CQkJd2luLW1hc2sgPSA8MHg3PjsKCQkJbnZpZGlhLGZiLXdpbiA9IDwweDA+OwoJCQlpb21tdXMg
PSA8MHgyYiAweDM+OwoJCQlzdGF0dXMgPSAib2theSI7CgkJCW52aWRpYSxkYy1jdHJsbnVtID0g
PDB4MT47CgkJCWNsb2NrcyA9IDwweDIxIDB4MWEgMHgyMSAweDUgMHgyMSAweDFjNiAweDIxIDB4
MWM4IDB4MjEgMHhmZCAweDIxIDB4ZmM+OwoJCQljbG9jay1uYW1lcyA9ICJkaXNwMiIsICJ0aW1l
ciIsICJkaXNwMl9lbWMiLCAiZGlzcDJfbGFfZW1jIiwgInBsbF9kMl9vdXQwIiwgInBsbF9kMiI7
CgkJCXJlc2V0cyA9IDwweDIxIDB4MWE+OwoJCQlyZXNldC1uYW1lcyA9ICJkY19yc3QiOwoJCQlm
Yl9yZXNlcnZlZCA9IDwweDY2PjsKCQkJaW9tbXUtZGlyZWN0LXJlZ2lvbnMgPSA8MHg2Nj47CgkJ
CXBpbmN0cmwtbmFtZXMgPSAiZHNpLWRwZC1kaXNhYmxlIiwgImRzaS1kcGQtZW5hYmxlIiwgImRz
aWItZHBkLWRpc2FibGUiLCAiZHNpYi1kcGQtZW5hYmxlIiwgImhkbWktZHBkLWRpc2FibGUiLCAi
aGRtaS1kcGQtZW5hYmxlIjsKCQkJcGluY3RybC0wID0gPDB4NWU+OwoJCQlwaW5jdHJsLTEgPSA8
MHg1Zj47CgkJCXBpbmN0cmwtMiA9IDwweDYwPjsKCQkJcGluY3RybC0zID0gPDB4NjE+OwoJCQlw
aW5jdHJsLTQgPSA8MHg2Mj47CgkJCXBpbmN0cmwtNSA9IDwweDYzPjsKCQkJdmRkLWRwLXB3ci1z
dXBwbHkgPSA8MHg2Nz47CgkJCWF2ZGQtZHAtcGxsLXN1cHBseSA9IDwweDM2PjsKCQkJdmRkLWVk
cC1zZWMtbW9kZS1zdXBwbHkgPSA8MHg0Mj47CgkJCXZkZC1kcC1wYWQtc3VwcGx5ID0gPDB4NDI+
OwoJCQl2ZGRfaGRtaV81djAtc3VwcGx5ID0gPDB4NjQ+OwoJCQludmlkaWEsZGMtZmxhZ3MgPSA8
MHgxPjsKCQkJbnZpZGlhLGVtYy1jbGstcmF0ZSA9IDwweDExZTFhMzAwPjsKCQkJbnZpZGlhLGNt
dS1lbmFibGUgPSA8MHgxPjsKCQkJbnZpZGlhLGZiLWJwcCA9IDwweDIwPjsKCQkJbnZpZGlhLGZi
LWZsYWdzID0gPDB4MT47CgkJCW52aWRpYSxkYy1vci1ub2RlID0gIi9ob3N0MXgvc29yIjsKCQkJ
bnZpZGlhLGRjLWNvbm5lY3RvciA9IDwweDY4PjsKCQkJbGludXgscGhhbmRsZSA9IDwweDExMj47
CgkJCXBoYW5kbGUgPSA8MHgxMTI+OwoKCQkJcmdiIHsKCQkJCXN0YXR1cyA9ICJkaXNhYmxlZCI7
CgkJCX07CgkJfTsKCgkJZHNpIHsKCQkJY29tcGF0aWJsZSA9ICJudmlkaWEsdGVncmEyMTAtZHNp
IjsKCQkJcmVnID0gPDB4MCAweDU0MzAwMDAwIDB4MCAweDQwMDAwIDB4MCAweDU0NDAwMDAwIDB4
MCAweDQwMDAwPjsKCQkJY2xvY2tzID0gPDB4MjEgMHgzMCAweDIxIDB4OTMgMHgyMSAweDUyIDB4
MjEgMHg5NCAweDIxIDB4ZjYgMHgyMSAweGIxPjsKCQkJY2xvY2stbmFtZXMgPSAiZHNpIiwgImRz
aWFfbHAiLCAiZHNpYiIsICJkc2liX2xwIiwgInBsbF9wX291dDMiLCAiY2xrNzJtaHoiOwoJCQly
ZXNldHMgPSA8MHgyMSAweDMwIDB4MjEgMHg1Mj47CgkJCXJlc2V0LW5hbWVzID0gImRzaWEiLCAi
ZHNpYiI7CgkJCXN0YXR1cyA9ICJkaXNhYmxlZCI7CgkJCWxpbnV4LHBoYW5kbGUgPSA8MHgxMTM+
OwoJCQlwaGFuZGxlID0gPDB4MTEzPjsKCQl9OwoKCQl2aWMgewoJCQljb21wYXRpYmxlID0gIm52
aWRpYSx0ZWdyYTIxMC12aWMiOwoJCQlwb3dlci1kb21haW5zID0gPDB4Njk+OwoJCQlyZWcgPSA8
MHgwIDB4NTQzNDAwMDAgMHgwIDB4NDAwMDA+OwoJCQlpb21tdXMgPSA8MHgyYiAweDEzPjsKCQkJ
aW9tbXUtZ3JvdXAtaWQgPSA8MHgxPjsKCQkJc3RhdHVzID0gIm9rYXkiOwoJCQljbG9ja3MgPSA8
MHgyMSAweDE5MyAweDIxIDB4MWUyIDB4MjEgMHgxOWYgMHgyMSAweDFlMz47CgkJCWNsb2NrLW5h
bWVzID0gInZpYzAzIiwgImVtYyIsICJ2aWNfZmxvb3IiLCAiZW1jX3NoYXJlZCI7CgkJCXJlc2V0
cyA9IDwweDIxIDB4YjI+OwoJCX07CgoJCW52ZW5jIHsKCQkJY29tcGF0aWJsZSA9ICJudmlkaWEs
dGVncmEyMTAtbnZlbmMiOwoJCQlwb3dlci1kb21haW5zID0gPDB4NmE+OwoJCQlyZWcgPSA8MHgw
IDB4NTQ0YzAwMDAgMHgwIDB4NDAwMDA+OwoJCQljbG9ja3MgPSA8MHgyMSAweDE5ZD47CgkJCWNs
b2NrLW5hbWVzID0gIm1zZW5jIjsKCQkJcmVzZXRzID0gPDB4MjEgMHhkYj47CgkJCWlvbW11cyA9
IDwweDJiIDB4Yj47CgkJCWlvbW11LWdyb3VwLWlkID0gPDB4MT47CgkJCXN0YXR1cyA9ICJva2F5
IjsKCQl9OwoKCQl0c2VjIHsKCQkJY29tcGF0aWJsZSA9ICJudmlkaWEsdGVncmEyMTAtdHNlYyI7
CgkJCXBvd2VyLWRvbWFpbnMgPSA8MHg2Yj47CgkJCXJlZyA9IDwweDAgMHg1NDUwMDAwMCAweDAg
MHg0MDAwMD47CgkJCWNsb2NrcyA9IDwweDIxIDB4NTM+OwoJCQljbG9jay1uYW1lcyA9ICJ0c2Vj
IjsKCQkJcmVzZXRzID0gPDB4MjEgMHg1Mz47CgkJCWlvbW11cyA9IDwweDJiIDB4MTc+OwoJCQlp
b21tdS1ncm91cC1pZCA9IDwweDE+OwoJCQlzdGF0dXMgPSAib2theSI7CgkJfTsKCgkJdHNlY2Ig
ewoJCQljb21wYXRpYmxlID0gIm52aWRpYSx0ZWdyYTIxMC10c2VjIjsKCQkJcG93ZXItZG9tYWlu
cyA9IDwweDZiPjsKCQkJcmVnID0gPDB4MCAweDU0MTAwMDAwIDB4MCAweDQwMDAwPjsKCQkJY2xv
Y2tzID0gPDB4MjEgMHgxOTY+OwoJCQljbG9jay1uYW1lcyA9ICJ0c2VjYiI7CgkJCXJlc2V0cyA9
IDwweDIxIDB4Y2U+OwoJCQlpb21tdXMgPSA8MHgyYiAweDI5PjsKCQkJaW9tbXUtZ3JvdXAtaWQg
PSA8MHgxPjsKCQkJc3RhdHVzID0gIm9rYXkiOwoJCX07CgoJCW52ZGVjIHsKCQkJY29tcGF0aWJs
ZSA9ICJudmlkaWEsdGVncmEyMTAtbnZkZWMiOwoJCQlwb3dlci1kb21haW5zID0gPDB4NmM+OwoJ
CQlyZWcgPSA8MHgwIDB4NTQ0ODAwMDAgMHgwIDB4NDAwMDA+OwoJCQljbG9ja3MgPSA8MHgyMSAw
eDE5ZT47CgkJCWNsb2NrLW5hbWVzID0gIm52ZGVjIjsKCQkJcmVzZXRzID0gPDB4MjEgMHhjMj47
CgkJCWlvbW11cyA9IDwweDJiIDB4MjE+OwoJCQlpb21tdS1ncm91cC1pZCA9IDwweDE+OwoJCQlz
dGF0dXMgPSAib2theSI7CgkJfTsKCgkJbnZqcGcgewoJCQljb21wYXRpYmxlID0gIm52aWRpYSx0
ZWdyYTIxMC1udmpwZyI7CgkJCXBvd2VyLWRvbWFpbnMgPSA8MHg2ZD47CgkJCXJlZyA9IDwweDAg
MHg1NDM4MDAwMCAweDAgMHg0MDAwMD47CgkJCWNsb2NrcyA9IDwweDIxIDB4MTk0PjsKCQkJY2xv
Y2stbmFtZXMgPSAibnZqcGciOwoJCQlyZXNldHMgPSA8MHgyMSAweGMzPjsKCQkJaW9tbXVzID0g
PDB4MmIgMHgyND47CgkJCWlvbW11LWdyb3VwLWlkID0gPDB4MT47CgkJCXN0YXR1cyA9ICJva2F5
IjsKCQl9OwoKCQlzb3IgewoJCQljb21wYXRpYmxlID0gIm52aWRpYSx0ZWdyYTIxMC1zb3IiOwoJ
CQlyZWcgPSA8MHgwIDB4NTQ1NDAwMDAgMHgwIDB4NDAwMDA+OwoJCQlyZWctbmFtZXMgPSAic29y
IjsKCQkJc3RhdHVzID0gIm9rYXkiOwoJCQludmlkaWEsc29yLWN0cmxudW0gPSA8MHgwPjsKCQkJ
bnZpZGlhLGRwYXV4ID0gPDB4NmU+OwoJCQludmlkaWEseGJhci1jdHJsID0gPDB4MiAweDEgMHgw
IDB4MyAweDQ+OwoJCQljbG9ja3MgPSA8MHgyMSAweGRlIDB4MjEgMHhiNiAweDIxIDB4MTJmPjsK
CQkJY2xvY2stbmFtZXMgPSAic29yX3NhZmUiLCAic29yMCIsICJwbGxfZHAiOwoJCQlyZXNldHMg
PSA8MHgyMSAweGI2PjsKCQkJcmVzZXQtbmFtZXMgPSAic29yMCI7CgkJCW52aWRpYSxzb3ItYXVk
aW8tbm90LXN1cHBvcnRlZDsKCQkJbnZpZGlhLHNvcjEtb3V0cHV0LXR5cGUgPSAiZHAiOwoJCQlu
dmlkaWEsYWN0aXZlLXBhbmVsID0gPDB4NmY+OwoJCQlsaW51eCxwaGFuZGxlID0gPDB4Njg+OwoJ
CQlwaGFuZGxlID0gPDB4Njg+OwoKCQkJaGRtaS1kaXNwbGF5IHsKCQkJCWNvbXBhdGlibGUgPSAi
aGRtaSxkaXNwbGF5IjsKCQkJCXN0YXR1cyA9ICJkaXNhYmxlZCI7CgkJCQlsaW51eCxwaGFuZGxl
ID0gPDB4MTE0PjsKCQkJCXBoYW5kbGUgPSA8MHgxMTQ+OwoJCQl9OwoKCQkJZHAtZGlzcGxheSB7
CgkJCQljb21wYXRpYmxlID0gImRwLCBkaXNwbGF5IjsKCQkJCXN0YXR1cyA9ICJva2F5IjsKCQkJ
CW52aWRpYSxocGQtZ3BpbyA9IDwweDU2IDB4ZTEgMHgxPjsKCQkJCW52aWRpYSxpc19leHRfZHBf
cGFuZWwgPSA8MHgxPjsKCQkJCWxpbnV4LHBoYW5kbGUgPSA8MHg2Zj47CgkJCQlwaGFuZGxlID0g
PDB4NmY+OwoKCQkJCWRpc3AtZGVmYXVsdC1vdXQgewoJCQkJCW52aWRpYSxvdXQtdHlwZSA9IDww
eDM+OwoJCQkJCW52aWRpYSxvdXQtYWxpZ24gPSA8MHgwPjsKCQkJCQludmlkaWEsb3V0LW9yZGVy
ID0gPDB4MD47CgkJCQkJbnZpZGlhLG91dC1mbGFncyA9IDwweDA+OwoJCQkJCW52aWRpYSxvdXQt
cGlucyA9IDwweDEgMHgwIDB4MiAweDAgMHgzIDB4MCAweDAgMHgxPjsKCQkJCQludmlkaWEsb3V0
LXBhcmVudC1jbGsgPSAicGxsX2Rfb3V0MCI7CgkJCQl9OwoKCQkJCWRwLWx0LXNldHRpbmdzIHsK
CgkJCQkJbHQtc2V0dGluZ0AwIHsKCQkJCQkJbnZpZGlhLGRyaXZlLWN1cnJlbnQgPSA8MHgwIDB4
MCAweDAgMHgwPjsKCQkJCQkJbnZpZGlhLGxhbmUtcHJlZW1waGFzaXMgPSA8MHgwIDB4MCAweDAg
MHgwPjsKCQkJCQkJbnZpZGlhLHBvc3QtY3Vyc29yID0gPDB4MCAweDAgMHgwIDB4MD47CgkJCQkJ
CW52aWRpYSx0eC1wdSA9IDwweDA+OwoJCQkJCQludmlkaWEsbG9hZC1hZGogPSA8MHgzPjsKCQkJ
CQl9OwoKCQkJCQlsdC1zZXR0aW5nQDEgewoJCQkJCQludmlkaWEsZHJpdmUtY3VycmVudCA9IDww
eDAgMHgwIDB4MCAweDA+OwoJCQkJCQludmlkaWEsbGFuZS1wcmVlbXBoYXNpcyA9IDwweDAgMHgw
IDB4MCAweDA+OwoJCQkJCQludmlkaWEscG9zdC1jdXJzb3IgPSA8MHgwIDB4MCAweDAgMHgwPjsK
CQkJCQkJbnZpZGlhLHR4LXB1ID0gPDB4MD47CgkJCQkJCW52aWRpYSxsb2FkLWFkaiA9IDwweDQ+
OwoJCQkJCX07CgoJCQkJCWx0LXNldHRpbmdAMiB7CgkJCQkJCW52aWRpYSxkcml2ZS1jdXJyZW50
ID0gPDB4MCAweDAgMHgwIDB4MD47CgkJCQkJCW52aWRpYSxsYW5lLXByZWVtcGhhc2lzID0gPDB4
MSAweDEgMHgxIDB4MT47CgkJCQkJCW52aWRpYSxwb3N0LWN1cnNvciA9IDwweDAgMHgwIDB4MCAw
eDA+OwoJCQkJCQludmlkaWEsdHgtcHUgPSA8MHgwPjsKCQkJCQkJbnZpZGlhLGxvYWQtYWRqID0g
PDB4Nj47CgkJCQkJfTsKCQkJCX07CgkJCX07CgoJCQlwcm9kLXNldHRpbmdzIHsKCQkJCSNwcm9k
LWNlbGxzID0gPDB4Mz47CgoJCQkJcHJvZF9jX2RwIHsKCQkJCQlwcm9kID0gPDB4NWMgMHhmMDAw
ZjEwIDB4MTAwMDMxMCAweDYwIDB4M2YwMDEwMCAweDQwMDEwMCAweDY4IDB4MjAwMCAweDIwMDAg
MHg3MCAweGZmZmZmZmZmIDB4MCAweDE4MCAweDEgMHgxPjsKCQkJCX07CgkJCX07CgkJfTsKCgkJ
c29yMSB7CgkJCWNvbXBhdGlibGUgPSAibnZpZGlhLHRlZ3JhMjEwLXNvcjEiOwoJCQlyZWcgPSA8
MHgwIDB4NTQ1ODAwMDAgMHgwIDB4NDAwMDA+OwoJCQlyZWctbmFtZXMgPSAic29yIjsKCQkJaW50
ZXJydXB0cyA9IDwweDAgMHg0YyAweDQ+OwoJCQlzdGF0dXMgPSAib2theSI7CgkJCW52aWRpYSxz
b3ItY3RybG51bSA9IDwweDE+OwoJCQludmlkaWEsZHBhdXggPSA8MHg3MD47CgkJCW52aWRpYSx4
YmFyLWN0cmwgPSA8MHgwIDB4MSAweDIgMHgzIDB4ND47CgkJCWNsb2NrcyA9IDwweDIxIDB4MTZm
IDB4MjEgMHhkZSAweDIxIDB4MTZlIDB4MjEgMHhiNyAweDIxIDB4MTJmIDB4MjEgMHhmMyAweDIx
IDB4Y2EgMHgyMSAweDdkIDB4MjEgMHg2ZiAweDIxIDB4ODA+OwoJCQljbG9jay1uYW1lcyA9ICJz
b3IxX3JlZiIsICJzb3Jfc2FmZSIsICJzb3IxX3BhZF9jbGtvdXQiLCAic29yMSIsICJwbGxfZHAi
LCAicGxsX3AiLCAibWF1ZCIsICJoZGEiLCAiaGRhMmNvZGVjXzJ4IiwgImhkYTJoZG1pIjsKCQkJ
cmVzZXRzID0gPDB4MjEgMHhiNyAweDIxIDB4N2QgMHgyMSAweDZmIDB4MjEgMHg4MD47CgkJCXJl
c2V0LW5hbWVzID0gInNvcjEiLCAiaGRhX3JzdCIsICJoZGEyY29kZWNfMnhfcnN0IiwgImhkYTJo
ZG1pX3JzdCI7CgkJCW52aWRpYSxkZGMtaTJjLWJ1cyA9IDwweDcxPjsKCQkJbnZpZGlhLGhwZC1n
cGlvID0gPDB4NTYgMHhlMSAweDE+OwoJCQludmlkaWEsYWN0aXZlLXBhbmVsID0gPDB4NzI+OwoJ
CQlsaW51eCxwaGFuZGxlID0gPDB4NjU+OwoJCQlwaGFuZGxlID0gPDB4NjU+OwoKCQkJaGRtaS1k
aXNwbGF5IHsKCQkJCWNvbXBhdGlibGUgPSAiaGRtaSxkaXNwbGF5IjsKCQkJCXN0YXR1cyA9ICJv
a2F5IjsKCQkJCWdlbmVyaWMtaW5mb2ZyYW1lLXR5cGUgPSA8MHg4Nz47CgkJCQlsaW51eCxwaGFu
ZGxlID0gPDB4NzI+OwoJCQkJcGhhbmRsZSA9IDwweDcyPjsKCgkJCQlkaXNwLWRlZmF1bHQtb3V0
IHsKCQkJCQludmlkaWEsb3V0LXhyZXMgPSA8MHgxMDAwPjsKCQkJCQludmlkaWEsb3V0LXlyZXMg
PSA8MHg4NzA+OwoJCQkJCW52aWRpYSxvdXQtdHlwZSA9IDwweDE+OwoJCQkJCW52aWRpYSxvdXQt
ZmxhZ3MgPSA8MHgyPjsKCQkJCQludmlkaWEsb3V0LXBhcmVudC1jbGsgPSAicGxsX2QyIjsKCQkJ
CQludmlkaWEsb3V0LWFsaWduID0gPDB4MD47CgkJCQkJbnZpZGlhLG91dC1vcmRlciA9IDwweDA+
OwoJCQkJfTsKCQkJfTsKCgkJCWRwLWRpc3BsYXkgewoJCQkJY29tcGF0aWJsZSA9ICJkcCwgZGlz
cGxheSI7CgkJCQlzdGF0dXMgPSAiZGlzYWJsZWQiOwoJCQkJbGludXgscGhhbmRsZSA9IDwweDEx
NT47CgkJCQlwaGFuZGxlID0gPDB4MTE1PjsKCQkJfTsKCgkJCXByb2Qtc2V0dGluZ3MgewoJCQkJ
I3Byb2QtY2VsbHMgPSA8MHgzPjsKCQkJCXByb2RfbGlzdF9oZG1pX3NvYyA9ICJwcm9kX2NfaGRt
aV8wbV81NG0iLCAicHJvZF9jX2hkbWlfNTRtXzExMW0iLCAicHJvZF9jX2hkbWlfMTExbV8yMjNt
IiwgInByb2RfY19oZG1pXzIyM21fMzAwbSIsICJwcm9kX2NfaGRtaV8zMDBtXzYwMG0iOwoJCQkJ
cHJvZF9saXN0X2hkbWlfYm9hcmQgPSAicHJvZF9jX2hkbWlfMG1fNTRtIiwgInByb2RfY19oZG1p
XzU0bV83NW0iLCAicHJvZF9jX2hkbWlfNzVtXzE1MG0iLCAicHJvZF9jX2hkbWlfMTUwbV8zMDBt
IiwgInByb2RfY19oZG1pXzMwMG1fNjAwbSI7CgoJCQkJcHJvZCB7CgkJCQkJcHJvZCA9IDwweDNh
MCAweDEgMHgxIDB4NWMgMHhmMDAwNzAwIDB4MTAwMDAwMCAweDYwIDB4ZjAxZjAwIDB4MzAwZjgw
IDB4NjggMHhmMDAwMDAwIDB4ZTAwMDAwMCAweDEzOCAweGZmZmZmZmZmIDB4M2MzYzNjM2MgMHgx
NDggMHhmZmZmZmZmZiAweDAgMHgxNzAgMHg0MGZmMDAgMHg0MDEwMDA+OwoJCQkJfTsKCgkJCQlw
cm9kX2NfaGRtaV8wbV81NG0gewoJCQkJCXByb2QgPSA8MHgzYTAgMHgyIDB4MiAweDVjIDB4ZjAw
MDcwMCAweDUwMDAzMTAgMHg2MCAweGYwMWYwMCAweDExMDAgMHg2OCAweGYwMDAwMDAgMHg4MDAw
MDAwIDB4MTM4IDB4ZmZmZmZmZmYgMHgyZDJmMmYyZiAweDE0OCAweGZmZmZmZmZmIDB4MCAweDE3
MCAweGYwNDBmZjAwIDB4ODA0MDY2MDA+OwoJCQkJfTsKCgkJCQlwcm9kX2NfaGRtaV81NG1fMTEx
bSB7CgkJCQkJcHJvZCA9IDwweDNhMCAweDIgMHgyIDB4NWMgMHhmMDAwNzAwIDB4MTAwMDEwMCAw
eDYwIDB4ZjAxZjAwIDB4NDAxMzgwIDB4NjggMHhmMDAwMDAwIDB4ODAwMDAwMCAweDEzOCAweGZm
ZmZmZmZmIDB4MzMzYTNhM2EgMHgxNDggMHhmZmZmZmZmZiAweDAgMHgxNzAgMHg0MGZmMDAgMHg0
MDQwMDA+OwoJCQkJfTsKCgkJCQlwcm9kX2NfaGRtaV8xMTFtXzIyM20gewoJCQkJCXByb2QgPSA8
MHgzYTAgMHgyIDB4MCAweDVjIDB4ZjAwMDcwMCAweDEwMDAzMDAgMHg2MCAweGZmMGZlMGZmIDB4
NDAxMzgwIDB4NjggMHhmMDAwMDAwIDB4ODAwMDAwMCAweDEzOCAweGZmZmZmZmZmIDB4MzMzYTNh
M2EgMHgxNDggMHhmZmZmZmZmZiAweDAgMHgxNzAgMHg0MGZmMDAgMHg0MDY2MDA+OwoJCQkJfTsK
CgkJCQlwcm9kX2NfaGRtaV8yMjNtXzMwMG0gewoJCQkJCXByb2QgPSA8MHgzYTAgMHgyIDB4MCAw
eDVjIDB4ZjAwMDcwMCAweDEwMDAzMDAgMHg2MCAweGYwMWYwMCAweDQwMTM4MCAweDY4IDB4ZjAw
MDAwMCAweGEwMDAwMDAgMHgxMzggMHhmZmZmZmZmZiAweDMzM2YzZjNmIDB4MTQ4IDB4ZmZmZmZm
ZmYgMHgxNzE3MTcgMHgxNzAgMHg0MGZmMDAgMHg0MDY2MDA+OwoJCQkJfTsKCgkJCQlwcm9kX2Nf
aGRtaV8zMDBtXzYwMG0gewoJCQkJCXByb2QgPSA8MHgzYTAgMHgyIDB4MiAweDVjIDB4ZjAwMDcw
MCAweDUwMDAzMTAgMHg2MCAweGYwMWYwMCAweDMwMGYwMCAweDY4IDB4ZjAwMDAwMCAweDgwMDAw
MDAgMHgxMzggMHhmZmZmZmZmZiAweDMwMzUzNTM3IDB4MTQ4IDB4ZmZmZmZmZmYgMHgwIDB4MTcw
IDB4NDBmZjAwIDB4NDA2MDAwPjsKCQkJCX07CgoJCQkJcHJvZF9jXzU0TSB7CgkJCQkJcHJvZCA9
IDwweDNhMCAweDIgMHgyIDB4NWMgMHhmMDAwNzAwIDB4MTAwMDAwMCAweDYwIDB4ZjAxZjAwIDB4
NDAxMzgwIDB4NjggMHhmMDAwMDAwIDB4ODAwMDAwMCAweDEzOCAweGZmZmZmZmZmIDB4MzMzYTNh
M2EgMHgxNDggMHhmZmZmZmZmZiAweDAgMHgxNzAgMHg0MGZmMDAgMHg0MDEwMDA+OwoJCQkJfTsK
CgkJCQlwcm9kX2NfNzVNIHsKCQkJCQlwcm9kID0gPDB4M2EwIDB4MiAweDIgMHg1YyAweGYwMDA3
MDAgMHgxMDAwMTAwIDB4NjAgMHhmMDFmMDAgMHg0MDEzODAgMHg2OCAweGYwMDAwMDAgMHg4MDAw
MDAwIDB4MTM4IDB4ZmZmZmZmZmYgMHgzMzNhM2EzYSAweDE0OCAweGZmZmZmZmZmIDB4MCAweDE3
MCAweDQwZmYwMCAweDQwNDAwMD47CgkJCQl9OwoKCQkJCXByb2RfY18xNTBNIHsKCQkJCQlwcm9k
ID0gPDB4M2EwIDB4MiAweDAgMHg1YyAweGYwMDA3MDAgMHgxMDAwMzAwIDB4NjAgMHhmZjBmZTBm
ZiAweDQwMTM4MCAweDY4IDB4ZjAwMDAwMCAweDgwMDAwMDAgMHgxMzggMHhmZmZmZmZmZiAweDMz
M2EzYTNhIDB4MTQ4IDB4ZmZmZmZmZmYgMHgwIDB4MTcwIDB4NDBmZjAwIDB4NDA2NjAwPjsKCQkJ
CX07CgoJCQkJcHJvZF9jXzMwME0gewoJCQkJCXByb2QgPSA8MHgzYTAgMHgyIDB4MCAweDVjIDB4
ZjAwMDcwMCAweDEwMDAzMDAgMHg2MCAweGYwMWYwMCAweDQwMTM4MCAweDY4IDB4ZjAwMDAwMCAw
eGEwMDAwMDAgMHgxMzggMHhmZmZmZmZmZiAweDMzM2YzZjNmIDB4MTQ4IDB4ZmZmZmZmZmYgMHgx
NzE3MTcgMHgxNzAgMHg0MGZmMDAgMHg0MDY2MDA+OwoJCQkJfTsKCgkJCQlwcm9kX2NfNjAwTSB7
CgkJCQkJcHJvZCA9IDwweDNhMCAweDIgMHgyIDB4NWMgMHhmMDAwNzAwIDB4MTAwMDMwMCAweDYw
IDB4ZjAxZjAwIDB4NDAxMzgwIDB4NjggMHhmMDAwMDAwIDB4ODAwMDAwMCAweDEzOCAweGZmZmZm
ZmZmIDB4MzMzZjNmM2YgMHgxNDggMHhmZmZmZmZmZiAweDAgMHgxNzAgMHg0MGZmMDAgMHg0MDY2
MDA+OwoJCQkJfTsKCgkJCQlwcm9kX2NfZHAgewoJCQkJCXByb2QgPSA8MHg1YyAweGYwMDBmMTAg
MHgxMDAwMzEwIDB4NjAgMHgzZjAwMTAwIDB4NDAwMTAwIDB4NjggMHgyMDAwIDB4MjAwMCAweDcw
IDB4ZmZmZmZmZmYgMHgwIDB4MTgwIDB4MSAweDE+OwoJCQkJfTsKCgkJCQlwcm9kX2NfaGRtaV81
NG1fNzVtIHsKCQkJCQlwcm9kID0gPDB4M2EwIDB4MiAweDIgMHg1YyAweGYwMDA3MDAgMHg1MDAw
MzEwIDB4NjAgMHhmMDFmMDAgMHgzMDE1MDAgMHg2OCAweGYwMDAwMDAgMHg4MDAwMDAwIDB4MTM4
IDB4ZmZmZmZmZmYgMHgyZDMwMzAzMCAweDE0OCAweGZmZmZmZmZmIDB4MCAweDE3MCAweGYwNDBm
ZjAwIDB4ODA0MDY2MDA+OwoJCQkJfTsKCgkJCQlwcm9kX2NfaGRtaV83NW1fMTUwbSB7CgkJCQkJ
cHJvZCA9IDwweDNhMCAweDIgMHgwIDB4NWMgMHhmMDAwNzAwIDB4MTAwMDMwMCAweDYwIDB4ZjAx
ZjAwIDB4MzA5MzAwIDB4NjggMHhmMDAwMDAwIDB4ODAwMDAwMCAweDEzOCAweGZmZmZmZmZmIDB4
MmQzMDMwMzAgMHgxNDggMHhmZmZmZmZmZiAweDAgMHgxNzAgMHhmMDQwZmYwMCAweDgwNDA2NjAw
PjsKCQkJCX07CgoJCQkJcHJvZF9jX2hkbWlfMTUwbV8zMDBtIHsKCQkJCQlwcm9kID0gPDB4M2Ew
IDB4MiAweDAgMHg1YyAweGYwMDA3MDAgMHgxMDAwMzAwIDB4NjAgMHhmMDFmMDAgMHgzMDkzMDAg
MHg2OCAweGYwMDAwMDAgMHg4MDAwMDAwIDB4MTM4IDB4ZmZmZmZmZmYgMHgyZDMwMzQzMCAweDE0
OCAweGZmZmZmZmZmIDB4MCAweDE3MCAweGYwNDBmZjAwIDB4ODA0MDY2MDA+OwoJCQkJfTsKCQkJ
fTsKCQl9OwoKCQlkcGF1eCB7CgkJCWNvbXBhdGlibGUgPSAibnZpZGlhLHRlZ3JhMjEwLWRwYXV4
IjsKCQkJcmVnID0gPDB4MCAweDU0NWMwMDAwIDB4MCAweDQwMDAwPjsKCQkJaW50ZXJydXB0cyA9
IDwweDAgMHg5ZiAweDQ+OwoJCQludmlkaWEsZHBhdXgtY3RybG51bSA9IDwweDA+OwoJCQlzdGF0
dXMgPSAib2theSI7CgkJCWNsb2NrcyA9IDwweDIxIDB4YjU+OwoJCQljbG9jay1uYW1lcyA9ICJk
cGF1eCI7CgkJCXJlc2V0cyA9IDwweDIxIDB4YjU+OwoJCQlyZXNldC1uYW1lcyA9ICJkcGF1eCI7
CgkJCWxpbnV4LHBoYW5kbGUgPSA8MHg2ZT47CgkJCXBoYW5kbGUgPSA8MHg2ZT47CgoJCQlwcm9k
LXNldHRpbmdzIHsKCQkJCSNwcm9kLWNlbGxzID0gPDB4Mz47CgoJCQkJcHJvZF9jX2RwYXV4X2Rw
IHsKCQkJCQlwcm9kID0gPDB4MTI0IDB4MzdmZSAweDI0YzI+OwoJCQkJfTsKCgkJCQlwcm9kX2Nf
ZHBhdXhfaGRtaSB7CgkJCQkJcHJvZCA9IDwweDEyNCAweDcwMCAweDQwMD47CgkJCQl9OwoJCQl9
OwoJCX07CgoJCWRwYXV4MSB7CgkJCWNvbXBhdGlibGUgPSAibnZpZGlhLHRlZ3JhMjEwLWRwYXV4
MSI7CgkJCXJlZyA9IDwweDAgMHg1NDA0MDAwMCAweDAgMHg0MDAwMD47CgkJCWludGVycnVwdHMg
PSA8MHgwIDB4YiAweDQ+OwoJCQludmlkaWEsZHBhdXgtY3RybG51bSA9IDwweDE+OwoJCQlzdGF0
dXMgPSAib2theSI7CgkJCWNsb2NrcyA9IDwweDIxIDB4Y2Y+OwoJCQljbG9jay1uYW1lcyA9ICJk
cGF1eDEiOwoJCQlyZXNldHMgPSA8MHgyMSAweGNmPjsKCQkJcmVzZXQtbmFtZXMgPSAiZHBhdXgx
IjsKCQkJbGludXgscGhhbmRsZSA9IDwweDcwPjsKCQkJcGhhbmRsZSA9IDwweDcwPjsKCgkJCXBy
b2Qtc2V0dGluZ3MgewoJCQkJI3Byb2QtY2VsbHMgPSA8MHgzPjsKCgkJCQlwcm9kX2NfZHBhdXhf
ZHAgewoJCQkJCXByb2QgPSA8MHgxMjQgMHgzN2ZlIDB4MjRjMj47CgkJCQl9OwoKCQkJCXByb2Rf
Y19kcGF1eF9oZG1pIHsKCQkJCQlwcm9kID0gPDB4MTI0IDB4NzAwIDB4NDAwPjsKCQkJCX07CgkJ
CX07CgkJfTsKCgkJaTJjQDU0NmMwMDAwIHsKCQkJI2FkZHJlc3MtY2VsbHMgPSA8MHgxPjsKCQkJ
I3NpemUtY2VsbHMgPSA8MHgwPjsKCQkJY29tcGF0aWJsZSA9ICJudmlkaWEsdGVncmEyMTAtdmlp
MmMiOwoJCQlyZWcgPSA8MHgwIDB4NTQ2YzAwMDAgMHgwIDB4MzQwMDA+OwoJCQlpb21tdXMgPSA8
MHgyYiAweDEyPjsKCQkJaW50ZXJydXB0cyA9IDwweDAgMHgxMSAweDQ+OwoJCQlzY2wtZ3BpbyA9
IDwweDU2IDB4OTIgMHgwPjsKCQkJc2RhLWdwaW8gPSA8MHg1NiAweDkzIDB4MD47CgkJCXN0YXR1
cyA9ICJva2F5IjsKCQkJY2xvY2tzID0gPDB4MjEgMHhkMCAweDIxIDB4NTEgMHgyMSAweDFjPjsK
CQkJY2xvY2stbmFtZXMgPSAidmlpMmMiLCAiaTJjc2xvdyIsICJob3N0MXgiOwoJCQlyZXNldHMg
PSA8MHgyMSAweGQwPjsKCQkJcmVzZXQtbmFtZXMgPSAidmlpMmMiOwoJCQljbG9jay1mcmVxdWVu
Y3kgPSA8MHg2MWE4MD47CgkJCWJ1cy1wdWxsdXAtc3VwcGx5ID0gPDB4NDI+OwoJCQlhdmRkX2Rz
aV9jc2ktc3VwcGx5ID0gPDB4MzY+OwoJCQlsaW51eCxwaGFuZGxlID0gPDB4YTg+OwoJCQlwaGFu
ZGxlID0gPDB4YTg+OwoKCQkJcmJwY3YyX2lteDIxOV9hQDEwIHsKCQkJCWNvbXBhdGlibGUgPSAi
bnZpZGlhLGlteDIxOSI7CgkJCQlyZWcgPSA8MHgxMD47CgkJCQlkZXZub2RlID0gInZpZGVvMCI7
CgkJCQlwaHlzaWNhbF93ID0gIjMuNjgwIjsKCQkJCXBoeXNpY2FsX2ggPSAiMi43NjAiOwoJCQkJ
c2Vuc29yX21vZGVsID0gImlteDIxOSI7CgkJCQl1c2Vfc2Vuc29yX21vZGVfaWQgPSAidHJ1ZSI7
CgkJCQlzdGF0dXMgPSAiZGlzYWJsZWQiOwoJCQkJcmVzZXQtZ3Bpb3MgPSA8MHg1NiAweDk3IDB4
MD47CgkJCQlsaW51eCxwaGFuZGxlID0gPDB4Yjk+OwoJCQkJcGhhbmRsZSA9IDwweGI5PjsKCgkJ
CQltb2RlMCB7CgkJCQkJbWNsa19raHogPSAiMjQwMDAiOwoJCQkJCW51bV9sYW5lcyA9IFszMiAw
MF07CgkJCQkJdGVncmFfc2ludGVyZmFjZSA9ICJzZXJpYWxfYSI7CgkJCQkJcGh5X21vZGUgPSAi
RFBIWSI7CgkJCQkJZGlzY29udGludW91c19jbGsgPSAieWVzIjsKCQkJCQlkcGNtX2VuYWJsZSA9
ICJmYWxzZSI7CgkJCQkJY2lsX3NldHRsZXRpbWUgPSBbMzAgMDBdOwoJCQkJCWFjdGl2ZV93ID0g
IjMyNjQiOwoJCQkJCWFjdGl2ZV9oID0gIjI0NjQiOwoJCQkJCXBpeGVsX3QgPSAiYmF5ZXJfcmdn
YiI7CgkJCQkJcmVhZG91dF9vcmllbnRhdGlvbiA9ICI5MCI7CgkJCQkJbGluZV9sZW5ndGggPSAi
MzQ0OCI7CgkJCQkJaW5oZXJlbnRfZ2FpbiA9IFszMSAwMF07CgkJCQkJbWNsa19tdWx0aXBsaWVy
ID0gIjkuMzMiOwoJCQkJCXBpeF9jbGtfaHogPSAiMTgyNDAwMDAwIjsKCQkJCQlnYWluX2ZhY3Rv
ciA9ICIxNiI7CgkJCQkJZnJhbWVyYXRlX2ZhY3RvciA9ICIxMDAwMDAwIjsKCQkJCQlleHBvc3Vy
ZV9mYWN0b3IgPSAiMTAwMDAwMCI7CgkJCQkJbWluX2dhaW5fdmFsID0gIjE2IjsKCQkJCQltYXhf
Z2Fpbl92YWwgPSAiMTcwIjsKCQkJCQlzdGVwX2dhaW5fdmFsID0gWzMxIDAwXTsKCQkJCQlkZWZh
dWx0X2dhaW4gPSAiMTYiOwoJCQkJCW1pbl9oZHJfcmF0aW8gPSBbMzEgMDBdOwoJCQkJCW1heF9o
ZHJfcmF0aW8gPSBbMzEgMDBdOwoJCQkJCW1pbl9mcmFtZXJhdGUgPSAiMjAwMDAwMCI7CgkJCQkJ
bWF4X2ZyYW1lcmF0ZSA9ICIyMTAwMDAwMCI7CgkJCQkJc3RlcF9mcmFtZXJhdGUgPSBbMzEgMDBd
OwoJCQkJCWRlZmF1bHRfZnJhbWVyYXRlID0gIjIxMDAwMDAwIjsKCQkJCQltaW5fZXhwX3RpbWUg
PSAiMTMiOwoJCQkJCW1heF9leHBfdGltZSA9ICI2ODM3MDkiOwoJCQkJCXN0ZXBfZXhwX3RpbWUg
PSBbMzEgMDBdOwoJCQkJCWRlZmF1bHRfZXhwX3RpbWUgPSAiMjQ5NSI7CgkJCQkJZW1iZWRkZWRf
bWV0YWRhdGFfaGVpZ2h0ID0gWzMyIDAwXTsKCQkJCX07CgoJCQkJbW9kZTEgewoJCQkJCW1jbGtf
a2h6ID0gIjI0MDAwIjsKCQkJCQludW1fbGFuZXMgPSBbMzIgMDBdOwoJCQkJCXRlZ3JhX3NpbnRl
cmZhY2UgPSAic2VyaWFsX2EiOwoJCQkJCXBoeV9tb2RlID0gIkRQSFkiOwoJCQkJCWRpc2NvbnRp
bnVvdXNfY2xrID0gInllcyI7CgkJCQkJZHBjbV9lbmFibGUgPSAiZmFsc2UiOwoJCQkJCWNpbF9z
ZXR0bGV0aW1lID0gWzMwIDAwXTsKCQkJCQlhY3RpdmVfdyA9ICIzMjY0IjsKCQkJCQlhY3RpdmVf
aCA9ICIxODQ4IjsKCQkJCQlwaXhlbF90ID0gImJheWVyX3JnZ2IiOwoJCQkJCXJlYWRvdXRfb3Jp
ZW50YXRpb24gPSAiOTAiOwoJCQkJCWxpbmVfbGVuZ3RoID0gIjM0NDgiOwoJCQkJCWluaGVyZW50
X2dhaW4gPSBbMzEgMDBdOwoJCQkJCW1jbGtfbXVsdGlwbGllciA9ICI5LjMzIjsKCQkJCQlwaXhf
Y2xrX2h6ID0gIjE4MjQwMDAwMCI7CgkJCQkJZ2Fpbl9mYWN0b3IgPSAiMTYiOwoJCQkJCWZyYW1l
cmF0ZV9mYWN0b3IgPSAiMTAwMDAwMCI7CgkJCQkJZXhwb3N1cmVfZmFjdG9yID0gIjEwMDAwMDAi
OwoJCQkJCW1pbl9nYWluX3ZhbCA9ICIxNiI7CgkJCQkJbWF4X2dhaW5fdmFsID0gIjE3MCI7CgkJ
CQkJc3RlcF9nYWluX3ZhbCA9IFszMSAwMF07CgkJCQkJZGVmYXVsdF9nYWluID0gIjE2IjsKCQkJ
CQltaW5faGRyX3JhdGlvID0gWzMxIDAwXTsKCQkJCQltYXhfaGRyX3JhdGlvID0gWzMxIDAwXTsK
CQkJCQltaW5fZnJhbWVyYXRlID0gIjIwMDAwMDAiOwoJCQkJCW1heF9mcmFtZXJhdGUgPSAiMjgw
MDAwMDAiOwoJCQkJCXN0ZXBfZnJhbWVyYXRlID0gWzMxIDAwXTsKCQkJCQlkZWZhdWx0X2ZyYW1l
cmF0ZSA9ICIyODAwMDAwMCI7CgkJCQkJbWluX2V4cF90aW1lID0gIjEzIjsKCQkJCQltYXhfZXhw
X3RpbWUgPSAiNjgzNzA5IjsKCQkJCQlzdGVwX2V4cF90aW1lID0gWzMxIDAwXTsKCQkJCQlkZWZh
dWx0X2V4cF90aW1lID0gIjI0OTUiOwoJCQkJCWVtYmVkZGVkX21ldGFkYXRhX2hlaWdodCA9IFsz
MiAwMF07CgkJCQl9OwoKCQkJCW1vZGUyIHsKCQkJCQltY2xrX2toeiA9ICIyNDAwMCI7CgkJCQkJ
bnVtX2xhbmVzID0gWzMyIDAwXTsKCQkJCQl0ZWdyYV9zaW50ZXJmYWNlID0gInNlcmlhbF9hIjsK
CQkJCQlwaHlfbW9kZSA9ICJEUEhZIjsKCQkJCQlkaXNjb250aW51b3VzX2NsayA9ICJ5ZXMiOwoJ
CQkJCWRwY21fZW5hYmxlID0gImZhbHNlIjsKCQkJCQljaWxfc2V0dGxldGltZSA9IFszMCAwMF07
CgkJCQkJYWN0aXZlX3cgPSAiMTkyMCI7CgkJCQkJYWN0aXZlX2ggPSAiMTA4MCI7CgkJCQkJcGl4
ZWxfdCA9ICJiYXllcl9yZ2diIjsKCQkJCQlyZWFkb3V0X29yaWVudGF0aW9uID0gIjkwIjsKCQkJ
CQlsaW5lX2xlbmd0aCA9ICIzNDQ4IjsKCQkJCQlpbmhlcmVudF9nYWluID0gWzMxIDAwXTsKCQkJ
CQltY2xrX211bHRpcGxpZXIgPSAiOS4zMyI7CgkJCQkJcGl4X2Nsa19oeiA9ICIxODI0MDAwMDAi
OwoJCQkJCWdhaW5fZmFjdG9yID0gIjE2IjsKCQkJCQlmcmFtZXJhdGVfZmFjdG9yID0gIjEwMDAw
MDAiOwoJCQkJCWV4cG9zdXJlX2ZhY3RvciA9ICIxMDAwMDAwIjsKCQkJCQltaW5fZ2Fpbl92YWwg
PSAiMTYiOwoJCQkJCW1heF9nYWluX3ZhbCA9ICIxNzAiOwoJCQkJCXN0ZXBfZ2Fpbl92YWwgPSBb
MzEgMDBdOwoJCQkJCWRlZmF1bHRfZ2FpbiA9ICIxNiI7CgkJCQkJbWluX2hkcl9yYXRpbyA9IFsz
MSAwMF07CgkJCQkJbWF4X2hkcl9yYXRpbyA9IFszMSAwMF07CgkJCQkJbWluX2ZyYW1lcmF0ZSA9
ICIyMDAwMDAwIjsKCQkJCQltYXhfZnJhbWVyYXRlID0gIjMwMDAwMDAwIjsKCQkJCQlzdGVwX2Zy
YW1lcmF0ZSA9IFszMSAwMF07CgkJCQkJZGVmYXVsdF9mcmFtZXJhdGUgPSAiMzAwMDAwMDAiOwoJ
CQkJCW1pbl9leHBfdGltZSA9ICIxMyI7CgkJCQkJbWF4X2V4cF90aW1lID0gIjY4MzcwOSI7CgkJ
CQkJc3RlcF9leHBfdGltZSA9IFszMSAwMF07CgkJCQkJZGVmYXVsdF9leHBfdGltZSA9ICIyNDk1
IjsKCQkJCQllbWJlZGRlZF9tZXRhZGF0YV9oZWlnaHQgPSBbMzIgMDBdOwoJCQkJfTsKCgkJCQlt
b2RlMyB7CgkJCQkJbWNsa19raHogPSAiMjQwMDAiOwoJCQkJCW51bV9sYW5lcyA9IFszMiAwMF07
CgkJCQkJdGVncmFfc2ludGVyZmFjZSA9ICJzZXJpYWxfYSI7CgkJCQkJcGh5X21vZGUgPSAiRFBI
WSI7CgkJCQkJZGlzY29udGludW91c19jbGsgPSAieWVzIjsKCQkJCQlkcGNtX2VuYWJsZSA9ICJm
YWxzZSI7CgkJCQkJY2lsX3NldHRsZXRpbWUgPSBbMzAgMDBdOwoJCQkJCWFjdGl2ZV93ID0gIjEy
ODAiOwoJCQkJCWFjdGl2ZV9oID0gIjcyMCI7CgkJCQkJcGl4ZWxfdCA9ICJiYXllcl9yZ2diIjsK
CQkJCQlyZWFkb3V0X29yaWVudGF0aW9uID0gIjkwIjsKCQkJCQlsaW5lX2xlbmd0aCA9ICIzNDQ4
IjsKCQkJCQlpbmhlcmVudF9nYWluID0gWzMxIDAwXTsKCQkJCQltY2xrX211bHRpcGxpZXIgPSAi
OS4zMyI7CgkJCQkJcGl4X2Nsa19oeiA9ICIxODI0MDAwMDAiOwoJCQkJCWdhaW5fZmFjdG9yID0g
IjE2IjsKCQkJCQlmcmFtZXJhdGVfZmFjdG9yID0gIjEwMDAwMDAiOwoJCQkJCWV4cG9zdXJlX2Zh
Y3RvciA9ICIxMDAwMDAwIjsKCQkJCQltaW5fZ2Fpbl92YWwgPSAiMTYiOwoJCQkJCW1heF9nYWlu
X3ZhbCA9ICIxNzAiOwoJCQkJCXN0ZXBfZ2Fpbl92YWwgPSBbMzEgMDBdOwoJCQkJCWRlZmF1bHRf
Z2FpbiA9ICIxNiI7CgkJCQkJbWluX2hkcl9yYXRpbyA9IFszMSAwMF07CgkJCQkJbWF4X2hkcl9y
YXRpbyA9IFszMSAwMF07CgkJCQkJbWluX2ZyYW1lcmF0ZSA9ICIyMDAwMDAwIjsKCQkJCQltYXhf
ZnJhbWVyYXRlID0gIjYwMDAwMDAwIjsKCQkJCQlzdGVwX2ZyYW1lcmF0ZSA9IFszMSAwMF07CgkJ
CQkJZGVmYXVsdF9mcmFtZXJhdGUgPSAiNjAwMDAwMDAiOwoJCQkJCW1pbl9leHBfdGltZSA9ICIx
MyI7CgkJCQkJbWF4X2V4cF90aW1lID0gIjY4MzcwOSI7CgkJCQkJc3RlcF9leHBfdGltZSA9IFsz
MSAwMF07CgkJCQkJZGVmYXVsdF9leHBfdGltZSA9ICIyNDk1IjsKCQkJCQllbWJlZGRlZF9tZXRh
ZGF0YV9oZWlnaHQgPSBbMzIgMDBdOwoJCQkJfTsKCgkJCQltb2RlNCB7CgkJCQkJbWNsa19raHog
PSAiMjQwMDAiOwoJCQkJCW51bV9sYW5lcyA9IFszMiAwMF07CgkJCQkJdGVncmFfc2ludGVyZmFj
ZSA9ICJzZXJpYWxfYSI7CgkJCQkJcGh5X21vZGUgPSAiRFBIWSI7CgkJCQkJZGlzY29udGludW91
c19jbGsgPSAieWVzIjsKCQkJCQlkcGNtX2VuYWJsZSA9ICJmYWxzZSI7CgkJCQkJY2lsX3NldHRs
ZXRpbWUgPSBbMzAgMDBdOwoJCQkJCWFjdGl2ZV93ID0gIjEyODAiOwoJCQkJCWFjdGl2ZV9oID0g
IjcyMCI7CgkJCQkJcGl4ZWxfdCA9ICJiYXllcl9yZ2diIjsKCQkJCQlyZWFkb3V0X29yaWVudGF0
aW9uID0gIjkwIjsKCQkJCQlsaW5lX2xlbmd0aCA9ICIzNDQ4IjsKCQkJCQlpbmhlcmVudF9nYWlu
ID0gWzMxIDAwXTsKCQkJCQltY2xrX211bHRpcGxpZXIgPSAiOS4zMyI7CgkJCQkJcGl4X2Nsa19o
eiA9ICIxNjk2MDAwMDAiOwoJCQkJCWdhaW5fZmFjdG9yID0gIjE2IjsKCQkJCQlmcmFtZXJhdGVf
ZmFjdG9yID0gIjEwMDAwMDAiOwoJCQkJCWV4cG9zdXJlX2ZhY3RvciA9ICIxMDAwMDAwIjsKCQkJ
CQltaW5fZ2Fpbl92YWwgPSAiMTYiOwoJCQkJCW1heF9nYWluX3ZhbCA9ICIxNzAiOwoJCQkJCXN0
ZXBfZ2Fpbl92YWwgPSBbMzEgMDBdOwoJCQkJCWRlZmF1bHRfZ2FpbiA9ICIxNiI7CgkJCQkJbWlu
X2hkcl9yYXRpbyA9IFszMSAwMF07CgkJCQkJbWF4X2hkcl9yYXRpbyA9IFszMSAwMF07CgkJCQkJ
bWluX2ZyYW1lcmF0ZSA9ICIyMDAwMDAwIjsKCQkJCQltYXhfZnJhbWVyYXRlID0gIjEyMDAwMDAw
MCI7CgkJCQkJc3RlcF9mcmFtZXJhdGUgPSBbMzEgMDBdOwoJCQkJCWRlZmF1bHRfZnJhbWVyYXRl
ID0gIjEyMDAwMDAwMCI7CgkJCQkJbWluX2V4cF90aW1lID0gIjEzIjsKCQkJCQltYXhfZXhwX3Rp
bWUgPSAiNjgzNzA5IjsKCQkJCQlzdGVwX2V4cF90aW1lID0gWzMxIDAwXTsKCQkJCQlkZWZhdWx0
X2V4cF90aW1lID0gIjI0OTUiOwoJCQkJCWVtYmVkZGVkX21ldGFkYXRhX2hlaWdodCA9IFszMiAw
MF07CgkJCQl9OwoKCQkJCXBvcnRzIHsKCQkJCQkjYWRkcmVzcy1jZWxscyA9IDwweDE+OwoJCQkJ
CSNzaXplLWNlbGxzID0gPDB4MD47CgoJCQkJCXBvcnRAMCB7CgkJCQkJCXJlZyA9IDwweDA+OwoK
CQkJCQkJZW5kcG9pbnQgewoJCQkJCQkJcG9ydC1pbmRleCA9IDwweDA+OwoJCQkJCQkJYnVzLXdp
ZHRoID0gPDB4Mj47CgkJCQkJCQlyZW1vdGUtZW5kcG9pbnQgPSA8MHg3Mz47CgkJCQkJCQlsaW51
eCxwaGFuZGxlID0gPDB4YzI+OwoJCQkJCQkJcGhhbmRsZSA9IDwweGMyPjsKCQkJCQkJfTsKCQkJ
CQl9OwoJCQkJfTsKCQkJfTsKCgkJCWluYTMyMjF4QDQwIHsKCQkJCWNvbXBhdGlibGUgPSAidGks
aW5hMzIyMXgiOwoJCQkJcmVnID0gPDB4NDA+OwoJCQkJc3RhdHVzID0gIm9rYXkiOwoJCQkJdGks
dHJpZ2dlci1jb25maWcgPSA8MHg3MDAzPjsKCQkJCXRpLGNvbnRpbnVvdXMtY29uZmlnID0gPDB4
NzYwNz47CgkJCQl0aSxlbmFibGUtZm9yY2VkLWNvbnRpbnVvdXM7CgkJCQkjaW8tY2hhbm5lbC1j
ZWxscyA9IDwweDE+OwoJCQkJI2FkZHJlc3MtY2VsbHMgPSA8MHgxPjsKCQkJCSNzaXplLWNlbGxz
ID0gPDB4MD47CgkJCQlsaW51eCxwaGFuZGxlID0gPDB4YWQ+OwoJCQkJcGhhbmRsZSA9IDwweGFk
PjsKCgkJCQljaGFubmVsQDAgewoJCQkJCXJlZyA9IDwweDA+OwoJCQkJCXRpLHJhaWwtbmFtZSA9
ICJQT01fNVZfR1BVIjsKCQkJCQl0aSxzaHVudC1yZXNpc3Rvci1tb2htID0gPDB4NT47CgkJCQl9
OwoKCQkJCWNoYW5uZWxAMSB7CgkJCQkJcmVnID0gPDB4MT47CgkJCQkJdGkscmFpbC1uYW1lID0g
IlBPTV81Vl9JTiI7CgkJCQkJdGksc2h1bnQtcmVzaXN0b3ItbW9obSA9IDwweDU+OwoJCQkJfTsK
CgkJCQljaGFubmVsQDIgewoJCQkJCXJlZyA9IDwweDI+OwoJCQkJCXRpLHJhaWwtbmFtZSA9ICJQ
T01fNVZfQ1BVIjsKCQkJCQl0aSxzaHVudC1yZXNpc3Rvci1tb2htID0gPDB4NT47CgkJCQl9OwoJ
CQl9OwoJCX07CgoJCW52Y3NpIHsKCQkJbnVtLWNoYW5uZWxzID0gPDB4Mj47CgkJCSNhZGRyZXNz
LWNlbGxzID0gPDB4MT47CgkJCSNzaXplLWNlbGxzID0gPDB4MD47CgkJCWxpbnV4LHBoYW5kbGUg
PSA8MHhiZj47CgkJCXBoYW5kbGUgPSA8MHhiZj47CgoJCQljaGFubmVsQDAgewoJCQkJcmVnID0g
PDB4MD47CgkJCQlsaW51eCxwaGFuZGxlID0gPDB4YzA+OwoJCQkJcGhhbmRsZSA9IDwweGMwPjsK
CgkJCQlwb3J0cyB7CgkJCQkJI2FkZHJlc3MtY2VsbHMgPSA8MHgxPjsKCQkJCQkjc2l6ZS1jZWxs
cyA9IDwweDA+OwoKCQkJCQlwb3J0QDAgewoJCQkJCQlyZWcgPSA8MHgwPjsKCQkJCQkJbGludXgs
cGhhbmRsZSA9IDwweGMxPjsKCQkJCQkJcGhhbmRsZSA9IDwweGMxPjsKCgkJCQkJCWVuZHBvaW50
QDAgewoJCQkJCQkJcG9ydC1pbmRleCA9IDwweDA+OwoJCQkJCQkJYnVzLXdpZHRoID0gPDB4Mj47
CgkJCQkJCQlyZW1vdGUtZW5kcG9pbnQgPSA8MHg3ND47CgkJCQkJCQlsaW51eCxwaGFuZGxlID0g
PDB4NzM+OwoJCQkJCQkJcGhhbmRsZSA9IDwweDczPjsKCQkJCQkJfTsKCQkJCQl9OwoKCQkJCQlw
b3J0QDEgewoJCQkJCQlyZWcgPSA8MHgxPjsKCQkJCQkJbGludXgscGhhbmRsZSA9IDwweGMzPjsK
CQkJCQkJcGhhbmRsZSA9IDwweGMzPjsKCgkJCQkJCWVuZHBvaW50QDEgewoJCQkJCQkJcmVtb3Rl
LWVuZHBvaW50ID0gPDB4NzU+OwoJCQkJCQkJbGludXgscGhhbmRsZSA9IDwweDVhPjsKCQkJCQkJ
CXBoYW5kbGUgPSA8MHg1YT47CgkJCQkJCX07CgkJCQkJfTsKCQkJCX07CgkJCX07CgoJCQljaGFu
bmVsQDEgewoJCQkJcmVnID0gPDB4MT47CgkJCQlsaW51eCxwaGFuZGxlID0gPDB4Y2U+OwoJCQkJ
cGhhbmRsZSA9IDwweGNlPjsKCgkJCQlwb3J0cyB7CgkJCQkJI2FkZHJlc3MtY2VsbHMgPSA8MHgx
PjsKCQkJCQkjc2l6ZS1jZWxscyA9IDwweDA+OwoKCQkJCQlwb3J0QDIgewoJCQkJCQlyZWcgPSA8
MHgwPjsKCQkJCQkJbGludXgscGhhbmRsZSA9IDwweGNmPjsKCQkJCQkJcGhhbmRsZSA9IDwweGNm
PjsKCgkJCQkJCWVuZHBvaW50QDIgewoJCQkJCQkJcG9ydC1pbmRleCA9IDwweDQ+OwoJCQkJCQkJ
YnVzLXdpZHRoID0gPDB4Mj47CgkJCQkJCQlyZW1vdGUtZW5kcG9pbnQgPSA8MHg3Nj47CgkJCQkJ
CQlsaW51eCxwaGFuZGxlID0gPDB4YTk+OwoJCQkJCQkJcGhhbmRsZSA9IDwweGE5PjsKCQkJCQkJ
fTsKCQkJCQl9OwoKCQkJCQlwb3J0QDMgewoJCQkJCQlyZWcgPSA8MHgxPjsKCQkJCQkJbGludXgs
cGhhbmRsZSA9IDwweGQwPjsKCQkJCQkJcGhhbmRsZSA9IDwweGQwPjsKCgkJCQkJCWVuZHBvaW50
QDMgewoJCQkJCQkJcmVtb3RlLWVuZHBvaW50ID0gPDB4Nzc+OwoJCQkJCQkJbGludXgscGhhbmRs
ZSA9IDwweDViPjsKCQkJCQkJCXBoYW5kbGUgPSA8MHg1Yj47CgkJCQkJCX07CgkJCQkJfTsKCQkJ
CX07CgkJCX07CgkJfTsKCX07CgoJZ3B1IHsKCQljb21wYXRpYmxlID0gIm52aWRpYSx0ZWdyYTIx
MC1nbTIwYiIsICJudmlkaWEsZ20yMGIiOwoJCW52aWRpYSxob3N0MXggPSA8MHg3OD47CgkJcmVn
ID0gPDB4MCAweDU3MDAwMDAwIDB4MCAweDEwMDAwMDAgMHgwIDB4NTgwMDAwMDAgMHgwIDB4MTAw
MDAwMCAweDAgMHg1MzhmMDAwMCAweDAgMHgxMDAwPjsKCQlpbnRlcnJ1cHRzID0gPDB4MCAweDlk
IDB4NCAweDAgMHg5ZSAweDQ+OwoJCWludGVycnVwdC1uYW1lcyA9ICJzdGFsbCIsICJub25zdGFs
bCI7CgkJaW9tbXVzID0gPDB4MmIgMHgxZj47CgkJYWNjZXNzLXZwci1waHlzOwoJCXN0YXR1cyA9
ICJva2F5IjsKCQlyZXNldHMgPSA8MHgyMSAweGI4PjsKCQlyZXNldC1uYW1lcyA9ICJncHUiOwoJ
fTsKCgltaXBpY2FsIHsKCQljb21wYXRpYmxlID0gIm52aWRpYSx0ZWdyYTIxMC1taXBpY2FsIjsK
CQlyZWcgPSA8MHgwIDB4NzAwZTMwMDAgMHgwIDB4MTAwPjsKCQljbG9ja3MgPSA8MHgyMSAweDM4
IDB4MjEgMHhiMT47CgkJY2xvY2stbmFtZXMgPSAibWlwaV9jYWwiLCAidWFydF9taXBpX2NhbCI7
CgkJc3RhdHVzID0gIm9rYXkiOwoJCWFzc2lnbmVkLWNsb2NrcyA9IDwweDIxIDB4YjE+OwoJCWFz
c2lnbmVkLWNsb2NrLXBhcmVudHMgPSA8MHgyMSAweGYzPjsKCQlhc3NpZ25lZC1jbG9jay1yYXRl
cyA9IDwweDQwZDk5MDA+OwoKCQlwcm9kLXNldHRpbmdzIHsKCQkJI3Byb2QtY2VsbHMgPSA8MHgz
PjsKCgkJCXByb2RfY19kcGh5X2RzaSB7CgkJCQlwcm9kID0gPDB4MzggMHgxZjAwIDB4MjAwIDB4
M2MgMHgxZjAwIDB4MjAwIDB4NDAgMHgxZjAwIDB4MjAwIDB4NDQgMHgxZjAwIDB4MjAwIDB4NWMg
MHhmMDAgMHgzMDAgMHg2MCAweGYwMGYwIDB4MTAwMTAgMHg2NCAweDFmIDB4MiAweDY4IDB4MWYg
MHgyIDB4NzAgMHgxZiAweDIgMHg3NCAweDFmIDB4Mj47CgkJCX07CgkJfTsKCX07CgoJcG1jQDcw
MDBlNDAwIHsKCQljb21wYXRpYmxlID0gIm52aWRpYSx0ZWdyYTIxMC1wbWMiOwoJCXJlZyA9IDww
eDAgMHg3MDAwZTQwMCAweDAgMHg0MDA+OwoJCSNwYWRjb250cm9sbGVyLWNlbGxzID0gPDB4MT47
CgkJc3RhdHVzID0gIm9rYXkiOwoJCWNsb2NrcyA9IDwweDIxIDB4MTI1PjsKCQljbG9jay1uYW1l
cyA9ICJwY2xrIjsKCQludmlkaWEsc2VjdXJlLXBtYzsKCQljbGVhci1hbGwtaW8tcGFkcy1kcGQ7
CgkJcGluY3RybC1uYW1lcyA9ICJkZWZhdWx0IjsKCQlwaW5jdHJsLTAgPSA8MHg3OT47CgkJbnZp
ZGlhLHJlc3RyaWN0LXZvbHRhZ2Utc3dpdGNoOwoJCSNudmlkaWEsd2FrZS1jZWxscyA9IDwweDM+
OwoJCW52aWRpYSxpbnZlcnQtaW50ZXJydXB0OwoJCW52aWRpYSxzdXNwZW5kLW1vZGUgPSA8MHgw
PjsKCQludmlkaWEsY3B1LXB3ci1nb29kLXRpbWUgPSA8MHgwPjsKCQludmlkaWEsY3B1LXB3ci1v
ZmYtdGltZSA9IDwweDA+OwoJCW52aWRpYSxjb3JlLXB3ci1nb29kLXRpbWUgPSA8MHgxMWViIDB4
ZjI0PjsKCQludmlkaWEsY29yZS1wd3Itb2ZmLXRpbWUgPSA8MHg5ODk5PjsKCQludmlkaWEsY29y
ZS1wd3ItcmVxLWFjdGl2ZS1oaWdoOwoJCW52aWRpYSxzeXMtY2xvY2stcmVxLWFjdGl2ZS1oaWdo
OwoJCWxpbnV4LHBoYW5kbGUgPSA8MHgzNz47CgkJcGhhbmRsZSA9IDwweDM3PjsKCgkJcGV4X2Vu
IHsKCQkJbGludXgscGhhbmRsZSA9IDwweDdmPjsKCQkJcGhhbmRsZSA9IDwweDdmPjsKCgkJCXBl
eC1pby1kcGQtc2lnbmFscy1kaXMgewoJCQkJcGlucyA9ICJwZXgtYmlhcyIsICJwZXgtY2xrMSIs
ICJwZXgtY2xrMiI7CgkJCQlsb3ctcG93ZXItZGlzYWJsZTsKCQkJfTsKCQl9OwoKCQlwZXhfZGlz
IHsKCQkJbGludXgscGhhbmRsZSA9IDwweDgwPjsKCQkJcGhhbmRsZSA9IDwweDgwPjsKCgkJCXBl
eC1pby1kcGQtc2lnbmFscy1lbiB7CgkJCQlwaW5zID0gInBleC1iaWFzIiwgInBleC1jbGsxIiwg
InBleC1jbGsyIjsKCQkJCWxvdy1wb3dlci1lbmFibGU7CgkJCX07CgkJfTsKCgkJaGRtaS1kcGQt
ZW5hYmxlIHsKCQkJbGludXgscGhhbmRsZSA9IDwweDYzPjsKCQkJcGhhbmRsZSA9IDwweDYzPjsK
CgkJCWhkbWktcGFkLWxvd3Bvd2VyLWVuYWJsZSB7CgkJCQlwaW5zID0gImhkbWkiOwoJCQkJbG93
LXBvd2VyLWVuYWJsZTsKCQkJfTsKCQl9OwoKCQloZG1pLWRwZC1kaXNhYmxlIHsKCQkJbGludXgs
cGhhbmRsZSA9IDwweDYyPjsKCQkJcGhhbmRsZSA9IDwweDYyPjsKCgkJCWhkbWktcGFkLWxvd3Bv
d2VyLWRpc2FibGUgewoJCQkJcGlucyA9ICJoZG1pIjsKCQkJCWxvdy1wb3dlci1kaXNhYmxlOwoJ
CQl9OwoJCX07CgoJCWRzaS1kcGQtZW5hYmxlIHsKCQkJbGludXgscGhhbmRsZSA9IDwweDVmPjsK
CQkJcGhhbmRsZSA9IDwweDVmPjsKCgkJCWRzaS1wYWQtbG93cG93ZXItZW5hYmxlIHsKCQkJCXBp
bnMgPSAiZHNpIjsKCQkJCWxvdy1wb3dlci1lbmFibGU7CgkJCX07CgkJfTsKCgkJZHNpLWRwZC1k
aXNhYmxlIHsKCQkJbGludXgscGhhbmRsZSA9IDwweDVlPjsKCQkJcGhhbmRsZSA9IDwweDVlPjsK
CgkJCWRzaS1wYWQtbG93cG93ZXItZGlzYWJsZSB7CgkJCQlwaW5zID0gImRzaSI7CgkJCQlsb3ct
cG93ZXItZGlzYWJsZTsKCQkJfTsKCQl9OwoKCQlkc2liLWRwZC1lbmFibGUgewoJCQlsaW51eCxw
aGFuZGxlID0gPDB4NjE+OwoJCQlwaGFuZGxlID0gPDB4NjE+OwoKCQkJZHNpYi1wYWQtbG93cG93
ZXItZW5hYmxlIHsKCQkJCXBpbnMgPSAiZHNpYiI7CgkJCQlsb3ctcG93ZXItZW5hYmxlOwoJCQl9
OwoJCX07CgoJCWRzaWItZHBkLWRpc2FibGUgewoJCQlsaW51eCxwaGFuZGxlID0gPDB4NjA+OwoJ
CQlwaGFuZGxlID0gPDB4NjA+OwoKCQkJZHNpYi1wYWQtbG93cG93ZXItZGlzYWJsZSB7CgkJCQlw
aW5zID0gImRzaWIiOwoJCQkJbG93LXBvd2VyLWRpc2FibGU7CgkJCX07CgkJfTsKCgkJaW9wYWQt
ZGVmYXVsdHMgewoJCQlsaW51eCxwaGFuZGxlID0gPDB4Nzk+OwoJCQlwaGFuZGxlID0gPDB4Nzk+
OwoKCQkJYXVkaW8tcGFkcyB7CgkJCQlwaW5zID0gImF1ZGlvIjsKCQkJCW52aWRpYSxwb3dlci1z
b3VyY2Utdm9sdGFnZSA9IDwweDA+OwoJCQl9OwoKCQkJY2FtLXBhZHMgewoJCQkJcGlucyA9ICJj
YW0iOwoJCQkJbnZpZGlhLHBvd2VyLXNvdXJjZS12b2x0YWdlID0gPDB4MD47CgkJCX07CgoJCQlk
YmctcGFkcyB7CgkJCQlwaW5zID0gImRiZyI7CgkJCQludmlkaWEscG93ZXItc291cmNlLXZvbHRh
Z2UgPSA8MHgwPjsKCQkJfTsKCgkJCWRtaWMtcGFkcyB7CgkJCQlwaW5zID0gImRtaWMiOwoJCQkJ
bnZpZGlhLHBvd2VyLXNvdXJjZS12b2x0YWdlID0gPDB4MD47CgkJCX07CgoJCQlwZXgtY3RybC1w
YWRzIHsKCQkJCXBpbnMgPSAicGV4LWN0cmwiOwoJCQkJbnZpZGlhLHBvd2VyLXNvdXJjZS12b2x0
YWdlID0gPDB4MD47CgkJCX07CgoJCQlzcGktcGFkcyB7CgkJCQlwaW5zID0gInNwaSI7CgkJCQlu
dmlkaWEscG93ZXItc291cmNlLXZvbHRhZ2UgPSA8MHgwPjsKCQkJfTsKCgkJCXVhcnQtcGFkcyB7
CgkJCQlwaW5zID0gInVhcnQiOwoJCQkJbnZpZGlhLHBvd2VyLXNvdXJjZS12b2x0YWdlID0gPDB4
MD47CgkJCX07CgoJCQlwZXgtaW8tcGFkcyB7CgkJCQlwaW5zID0gInBleC1iaWFzIiwgInBleC1j
bGsxIiwgInBleC1jbGsyIjsKCQkJCWxvdy1wb3dlci1lbmFibGU7CgkJCX07CgoJCQlhdWRpby1o
di1wYWRzIHsKCQkJCXBpbnMgPSAiYXVkaW8taHYiOwoJCQkJbnZpZGlhLHBvd2VyLXNvdXJjZS12
b2x0YWdlID0gPDB4MD47CgkJCX07CgoJCQlzcGktaHYtcGFkcyB7CgkJCQlwaW5zID0gInNwaS1o
diI7CgkJCQludmlkaWEscG93ZXItc291cmNlLXZvbHRhZ2UgPSA8MHgwPjsKCQkJfTsKCgkJCWdw
aW8tcGFkcyB7CgkJCQlwaW5zID0gImdwaW8iOwoJCQkJbnZpZGlhLHBvd2VyLXNvdXJjZS12b2x0
YWdlID0gPDB4MD47CgkJCX07CgoJCQlzZG1tYy1pby1wYWRzIHsKCQkJCXBpbnMgPSAic2RtbWMx
IiwgInNkbW1jMyI7CgkJCQludmlkaWEsZW5hYmxlLXZvbHRhZ2Utc3dpdGNoaW5nOwoJCQl9OwoJ
CX07CgoJCWJvb3Ryb20tY29tbWFuZHMgewoJCQludmlkaWEsY29tbWFuZC1yZXRyaWVzLWNvdW50
ID0gPDB4Mj47CgkJCW52aWRpYSxkZWxheS1iZXR3ZWVuLWNvbW1hbmRzLXVzID0gPDB4YT47CgkJ
CW52aWRpYSx3YWl0LXN0YXJ0LWJ1cy1jbGVhci11cyA9IDwweGE+OwoJCQkjYWRkcmVzcy1jZWxs
cyA9IDwweDE+OwoJCQkjc2l6ZS1jZWxscyA9IDwweDA+OwoKCQkJcmVzZXQtY29tbWFuZHMgewoJ
CQkJbnZpZGlhLGNvbW1hbmQtcmV0cmllcy1jb3VudCA9IDwweDI+OwoJCQkJbnZpZGlhLGRlbGF5
LWJldHdlZW4tY29tbWFuZHMtdXMgPSA8MHhhPjsKCQkJCW52aWRpYSx3YWl0LXN0YXJ0LWJ1cy1j
bGVhci11cyA9IDwweGE+OwoJCQkJI2FkZHJlc3MtY2VsbHMgPSA8MHgxPjsKCQkJCSNzaXplLWNl
bGxzID0gPDB4MD47CgoJCQkJY29tbWFuZHNANC0wMDNjIHsKCQkJCQludmlkaWEsY29tbWFuZC1u
YW1lcyA9ICJwbWljLXJhaWxzIjsKCQkJCQlyZWcgPSA8MHgzYz47CgkJCQkJbnZpZGlhLGVuYWJs
ZS04Yml0LXJlZ2lzdGVyOwoJCQkJCW52aWRpYSxlbmFibGUtOGJpdC1kYXRhOwoJCQkJCW52aWRp
YSxjb250cm9sbGVyLXR5cGUtaTJjOwoJCQkJCW52aWRpYSxjb250cm9sbGVyLWlkID0gPDB4ND47
CgkJCQkJbnZpZGlhLGVuYWJsZS1jb250cm9sbGVyLXJlc2V0OwoJCQkJCW52aWRpYSx3cml0ZS1j
b21tYW5kcyA9IDwweDE2IDB4MjA+OwoJCQkJfTsKCQkJfTsKCgkJCXBvd2VyLW9mZi1jb21tYW5k
cyB7CgkJCQludmlkaWEsY29tbWFuZC1yZXRyaWVzLWNvdW50ID0gPDB4Mj47CgkJCQludmlkaWEs
ZGVsYXktYmV0d2Vlbi1jb21tYW5kcy11cyA9IDwweGE+OwoJCQkJbnZpZGlhLHdhaXQtc3RhcnQt
YnVzLWNsZWFyLXVzID0gPDB4YT47CgkJCQkjYWRkcmVzcy1jZWxscyA9IDwweDE+OwoJCQkJI3Np
emUtY2VsbHMgPSA8MHgwPjsKCgkJCQljb21tYW5kc0A0LTAwM2MgewoJCQkJCW52aWRpYSxjb21t
YW5kLW5hbWVzID0gInBtaWMtcmFpbHMiOwoJCQkJCXJlZyA9IDwweDNjPjsKCQkJCQludmlkaWEs
ZW5hYmxlLThiaXQtcmVnaXN0ZXI7CgkJCQkJbnZpZGlhLGVuYWJsZS04Yml0LWRhdGE7CgkJCQkJ
bnZpZGlhLGNvbnRyb2xsZXItdHlwZS1pMmM7CgkJCQkJbnZpZGlhLGNvbnRyb2xsZXItaWQgPSA8
MHg0PjsKCQkJCQludmlkaWEsZW5hYmxlLWNvbnRyb2xsZXItcmVzZXQ7CgkJCQkJbnZpZGlhLHdy
aXRlLWNvbW1hbmRzID0gPDB4M2IgMHgxIDB4NDIgMHg1YiAweDQxIDB4Zjg+OwoJCQkJfTsKCQkJ
fTsKCQl9OwoKCQlzZG1tYzFfZV8zM1ZfZW5hYmxlIHsKCQkJbGludXgscGhhbmRsZSA9IDwweDk2
PjsKCQkJcGhhbmRsZSA9IDwweDk2PjsKCgkJCXNkbW1jMSB7CgkJCQlwaW5zID0gInNkbW1jMSI7
CgkJCQludmlkaWEscG93ZXItc291cmNlLXZvbHRhZ2UgPSA8MHgxPjsKCQkJfTsKCQl9OwoKCQlz
ZG1tYzFfZV8zM1ZfZGlzYWJsZSB7CgkJCWxpbnV4LHBoYW5kbGUgPSA8MHg5Nz47CgkJCXBoYW5k
bGUgPSA8MHg5Nz47CgoJCQlzZG1tYzEgewoJCQkJcGlucyA9ICJzZG1tYzEiOwoJCQkJbnZpZGlh
LHBvd2VyLXNvdXJjZS12b2x0YWdlID0gPDB4MD47CgkJCX07CgkJfTsKCgkJc2RtbWMzX2VfMzNW
X2VuYWJsZSB7CgkJCWxpbnV4LHBoYW5kbGUgPSA8MHg4ZT47CgkJCXBoYW5kbGUgPSA8MHg4ZT47
CgoJCQlzZG1tYzMgewoJCQkJcGlucyA9ICJzZG1tYzMiOwoJCQkJbnZpZGlhLHBvd2VyLXNvdXJj
ZS12b2x0YWdlID0gPDB4MT47CgkJCX07CgkJfTsKCgkJc2RtbWMzX2VfMzNWX2Rpc2FibGUgewoJ
CQlsaW51eCxwaGFuZGxlID0gPDB4OGY+OwoJCQlwaGFuZGxlID0gPDB4OGY+OwoKCQkJc2RtbWMz
IHsKCQkJCXBpbnMgPSAic2RtbWMzIjsKCQkJCW52aWRpYSxwb3dlci1zb3VyY2Utdm9sdGFnZSA9
IDwweDA+OwoJCQl9OwoJCX07Cgl9OwoKCXNlQDcwMDEyMDAwIHsKCQljb21wYXRpYmxlID0gIm52
aWRpYSx0ZWdyYTIxMC1zZSI7CgkJcmVnID0gPDB4MCAweDcwMDEyMDAwIDB4MCAweDIwMDA+OwoJ
CWlvbW11cyA9IDwweDJiIDB4MjMgMHgyYiAweDI2PjsKCQlpb21tdS1ncm91cC1pZCA9IDwweDQ+
OwoJCWludGVycnVwdHMgPSA8MHgwIDB4M2EgMHg0PjsKCQljbG9ja3MgPSA8MHgyMSAweDE5NSAw
eDIxIDB4OTU+OwoJCWNsb2NrLW5hbWVzID0gInNlIiwgImVudHJvcHkiOwoJCXN0YXR1cyA9ICJv
a2F5IjsKCQlzdXBwb3J0ZWQtYWxnb3MgPSAiYWVzIiwgImRyYmciLCAicnNhIiwgInNoYSI7CgkJ
bGludXgscGhhbmRsZSA9IDwweDExNj47CgkJcGhhbmRsZSA9IDwweDExNj47Cgl9OwoKCWhkYUA3
MDAzMDAwMCB7CgkJY29tcGF0aWJsZSA9ICJudmlkaWEsdGVncmEzMC1oZGEiOwoJCXJlZyA9IDww
eDAgMHg3MDAzMDAwMCAweDAgMHgxMDAwMD47CgkJaW50ZXJydXB0cyA9IDwweDAgMHg1MSAweDQ+
OwoJCWNsb2NrcyA9IDwweDIxIDB4N2QgMHgyMSAweDgwIDB4MjEgMHg2ZiAweDIxIDB4Y2E+OwoJ
CWNsb2NrLW5hbWVzID0gImhkYSIsICJoZGEyaGRtaSIsICJoZGEyY29kZWNfMngiLCAibWF1ZCI7
CgkJc3RhdHVzID0gIm9rYXkiOwoJfTsKCglwY2llQDEwMDMwMDAgewoJCWNvbXBhdGlibGUgPSAi
bnZpZGlhLHRlZ3JhMjEwLXBjaWUiLCAibnZpZGlhLHRlZ3JhMTI0LXBjaWUiOwoJCXBvd2VyLWRv
bWFpbnMgPSA8MHg3YT47CgkJZGV2aWNlX3R5cGUgPSAicGNpIjsKCQlyZWcgPSA8MHgwIDB4MTAw
MzAwMCAweDAgMHg4MDAgMHgwIDB4MTAwMzgwMCAweDAgMHg4MDAgMHgwIDB4MTFmZmYwMDAgMHgw
IDB4MTAwMD47CgkJcmVnLW5hbWVzID0gInBhZHMiLCAiYWZpIiwgImNzIjsKCQlpbnRlcnJ1cHRz
ID0gPDB4MCAweDYyIDB4NCAweDAgMHg2MyAweDQ+OwoJCWludGVycnVwdC1uYW1lcyA9ICJpbnRy
IiwgIm1zaSI7CgkJY2xvY2tzID0gPDB4MjEgMHg0NiAweDIxIDB4NDggMHgyMSAweDEwNyAweDIx
IDB4MTJjIDB4MjEgMHg2Mz47CgkJY2xvY2stbmFtZXMgPSAicGV4IiwgImFmaSIsICJwbGxfZSIs
ICJjbWwiLCAibXNlbGVjdCI7CgkJcmVzZXRzID0gPDB4MjEgMHg0NiAweDIxIDB4NDggMHgyMSAw
eDRhPjsKCQlyZXNldC1uYW1lcyA9ICJwZXgiLCAiYWZpIiwgInBjaWVfeCI7CgkJI2ludGVycnVw
dC1jZWxscyA9IDwweDE+OwoJCWludGVycnVwdC1tYXAtbWFzayA9IDwweDAgMHgwIDB4MCAweDA+
OwoJCWludGVycnVwdC1tYXAgPSA8MHgwIDB4MCAweDAgMHgwIDB4MzMgMHgwIDB4NjIgMHg0PjsK
CQlwaW5jdHJsLW5hbWVzID0gImNsa3JlcS0wLWJpLWRpci1lbmFibGUiLCAiY2xrcmVxLTEtYmkt
ZGlyLWVuYWJsZSIsICJjbGtyZXEtMC1pbi1kaXItZW5hYmxlIiwgImNsa3JlcS0xLWluLWRpci1l
bmFibGUiLCAicGV4LWlvLWRwZC1kaXMiLCAicGV4LWlvLWRwZC1lbiI7CgkJcGluY3RybC0wID0g
PDB4N2I+OwoJCXBpbmN0cmwtMSA9IDwweDdjPjsKCQlwaW5jdHJsLTIgPSA8MHg3ZD47CgkJcGlu
Y3RybC0zID0gPDB4N2U+OwoJCXBpbmN0cmwtNCA9IDwweDdmPjsKCQlwaW5jdHJsLTUgPSA8MHg4
MD47CgkJYnVzLXJhbmdlID0gPDB4MCAweGZmPjsKCQkjYWRkcmVzcy1jZWxscyA9IDwweDM+OwoJ
CSNzaXplLWNlbGxzID0gPDB4Mj47CgkJcmFuZ2VzID0gPDB4ODIwMDAwMDAgMHgwIDB4MTAwMDAw
MCAweDAgMHgxMDAwMDAwIDB4MCAweDEwMDAgMHg4MjAwMDAwMCAweDAgMHgxMDAxMDAwIDB4MCAw
eDEwMDEwMDAgMHgwIDB4MTAwMCAweDgxMDAwMDAwIDB4MCAweDAgMHgwIDB4MTIwMDAwMDAgMHgw
IDB4MTAwMDAgMHg4MjAwMDAwMCAweDAgMHgxMzAwMDAwMCAweDAgMHgxMzAwMDAwMCAweDAgMHhk
MDAwMDAwIDB4YzIwMDAwMDAgMHgwIDB4MjAwMDAwMDAgMHgwIDB4MjAwMDAwMDAgMHgwIDB4MjAw
MDAwMDA+OwoJCXN0YXR1cyA9ICJva2F5IjsKCQludmlkaWEsd2FrZS1ncGlvID0gPDB4NTYgMHgy
IDB4MD47CgkJbnZpZGlhLHBtYy13YWtldXAgPSA8MHgzNyAweDEgMHgwIDB4OD47CgkJYXZkZC1w
bGwtdWVyZWZlLXN1cHBseSA9IDwweDNlPjsKCQlodmRkaW8tcGV4LXN1cHBseSA9IDwweDM2PjsK
CQlkdmRkaW8tcGV4LXN1cHBseSA9IDwweDNmPjsKCQlkdmRkLXBleC1wbGwtc3VwcGx5ID0gPDB4
M2Y+OwoJCWh2ZGQtcGV4LXBsbC1lLXN1cHBseSA9IDwweDM2PjsKCQl2ZGRpby1wZXgtY3RsLXN1
cHBseSA9IDwweDM2PjsKCgkJcGNpQDEsMCB7CgkJCWRldmljZV90eXBlID0gInBjaSI7CgkJCWFz
c2lnbmVkLWFkZHJlc3NlcyA9IDwweDgyMDAwODAwIDB4MCAweDEwMDAwMDAgMHgwIDB4MTAwMD47
CgkJCXJlZyA9IDwweDgwMCAweDAgMHgwIDB4MCAweDA+OwoJCQlzdGF0dXMgPSAib2theSI7CgkJ
CSNhZGRyZXNzLWNlbGxzID0gPDB4Mz47CgkJCSNzaXplLWNlbGxzID0gPDB4Mj47CgkJCXJhbmdl
czsKCQkJbnZpZGlhLG51bS1sYW5lcyA9IDwweDQ+OwoJCQludmlkaWEsYWZpLWN0bC1vZmZzZXQg
PSA8MHgxMTA+OwoJCQludmlkaWEsZGlzYWJsZS1hc3BtLXN0YXRlcyA9IDwweGY+OwoJCQlwaHlz
ID0gPDB4ODEgMHg4MiAweDgzIDB4ODQ+OwoJCQlwaHktbmFtZXMgPSAicGNpZS0wIiwgInBjaWUt
MSIsICJwY2llLTIiLCAicGNpZS0zIjsKCQl9OwoKCQlwY2lAMiwwIHsKCQkJZGV2aWNlX3R5cGUg
PSAicGNpIjsKCQkJYXNzaWduZWQtYWRkcmVzc2VzID0gPDB4ODIwMDEwMDAgMHgwIDB4MTAwMTAw
MCAweDAgMHgxMDAwPjsKCQkJcmVnID0gPDB4MTAwMCAweDAgMHgwIDB4MCAweDA+OwoJCQlzdGF0
dXMgPSAib2theSI7CgkJCSNhZGRyZXNzLWNlbGxzID0gPDB4Mz47CgkJCSNzaXplLWNlbGxzID0g
PDB4Mj47CgkJCXJhbmdlczsKCQkJbnZpZGlhLG51bS1sYW5lcyA9IDwweDE+OwoJCQludmlkaWEs
YWZpLWN0bC1vZmZzZXQgPSA8MHgxMTg+OwoJCQludmlkaWEsZGlzYWJsZS1hc3BtLXN0YXRlcyA9
IDwweGY+OwoJCQlwaHlzID0gPDB4ODU+OwoJCQlwaHktbmFtZXMgPSAicGNpZS0wIjsKCQkJbnZp
ZGlhLHBsYXQtZ3Bpb3MgPSA8MHg1NiAweGJiIDB4MD47CgkJCWxpbnV4LHBoYW5kbGUgPSA8MHhj
Nj47CgkJCXBoYW5kbGUgPSA8MHhjNj47CgoJCQlldGhlcm5ldEAwLDAgewoJCQkJcmVnID0gPDB4
MCAweDAgMHgwIDB4MCAweDA+OwoJCQkJbGludXgscGhhbmRsZSA9IDwweGQ0PjsKCQkJCXBoYW5k
bGUgPSA8MHhkND47CgkJCX07CgkJfTsKCgkJcHJvZC1zZXR0aW5ncyB7CgkJCSNwcm9kLWNlbGxz
ID0gPDB4Mz47CgoJCQlwcm9kX2NfcGFkIHsKCQkJCXByb2QgPSA8MHhjOCAweGZmZmZmZmZmIDB4
OTBiODkwYjg+OwoJCQl9OwoKCQkJcHJvZF9jX3JwIHsKCQkJCXByb2QgPSA8MHhlODQgMHhmZmZm
IDB4ZiAweGVhNCAweGZmZmYgMHg4ZiAweGU5MCAweGZmZmZmZmZmIDB4NTUwMTAwMDAgMHhlOTQg
MHhmZmZmZmZmZiAweDEgMHhlYjAgMHhmZmZmZmZmZiAweDU1MDEwMDAwIDB4ZWI0IDB4ZmZmZmZm
ZmYgMHgxIDB4ZThjIDB4ZmZmZjAwMDAgMHg2NzAwMDAgMHhlYWMgMHhmZmZmMDAwMCAweGM3MDAw
MD47CgkJCX07CgkJfTsKCX07CgoJaTJjQDcwMDBjMDAwIHsKCQkjYWRkcmVzcy1jZWxscyA9IDww
eDE+OwoJCSNzaXplLWNlbGxzID0gPDB4MD47CgkJY29tcGF0aWJsZSA9ICJudmlkaWEsdGVncmEy
MTAtaTJjIjsKCQlyZWcgPSA8MHgwIDB4NzAwMGMwMDAgMHgwIDB4MTAwPjsKCQlpbnRlcnJ1cHRz
ID0gPDB4MCAweDI2IDB4ND47CgkJaW9tbXVzID0gPDB4MmIgMHhlPjsKCQlzdGF0dXMgPSAib2th
eSI7CgkJY2xvY2stZnJlcXVlbmN5ID0gPDB4NjFhODA+OwoJCWRtYXMgPSA8MHg0YyAweDE1IDB4
NGMgMHgxNT47CgkJZG1hLW5hbWVzID0gInJ4IiwgInR4IjsKCQljbG9ja3MgPSA8MHgyMSAweGMg
MHgyMSAweGYzPjsKCQljbG9jay1uYW1lcyA9ICJkaXYtY2xrIiwgInBhcmVudCI7CgkJcmVzZXRz
ID0gPDB4MjEgMHhjPjsKCQlyZXNldC1uYW1lcyA9ICJpMmMiOwoJCWxpbnV4LHBoYW5kbGUgPSA8
MHhhYj47CgkJcGhhbmRsZSA9IDwweGFiPjsKCgkJdGVtcC1zZW5zb3JANGMgewoJCQkjdGhlcm1h
bC1zZW5zb3ItY2VsbHMgPSA8MHgxPjsKCQkJY29tcGF0aWJsZSA9ICJ0aSx0bXA0NTEiOwoJCQly
ZWcgPSA8MHg0Yz47CgkJCXNlbnNvci1uYW1lID0gInRlZ3JhIjsKCQkJc3VwcG9ydGVkLWh3cmV2
ID0gPDB4MT47CgkJCW9mZnNldCA9IDwweDA+OwoJCQljb252LXJhdGUgPSA8MHg2PjsKCQkJZXh0
ZW5kZWQtcmFnZSA9IDwweDE+OwoJCQlpbnRlcnJ1cHQtcGFyZW50ID0gPDB4NTY+OwoJCQlpbnRl
cnJ1cHRzID0gPDB4YmMgMHg4PjsKCQkJdmRkLXN1cHBseSA9IDwweDM2PjsKCQkJdGVtcC1hbGVy
dC1ncGlvID0gPDB4NTYgMHhiYyAweDA+OwoJCQlzdGF0dXMgPSAiZGlzYWJsZWQiOwoJCQlsaW51
eCxwaGFuZGxlID0gPDB4MTE3PjsKCQkJcGhhbmRsZSA9IDwweDExNz47CgoJCQlsb2MgewoJCQkJ
c2h1dGRvd24tbGltaXQgPSA8MHg3OD47CgkJCX07CgoJCQlleHQgewoJCQkJc2h1dGRvd24tbGlt
aXQgPSA8MHg2OT47CgkJCX07CgkJfTsKCX07CgoJaTJjQDcwMDBjNDAwIHsKCQkjYWRkcmVzcy1j
ZWxscyA9IDwweDE+OwoJCSNzaXplLWNlbGxzID0gPDB4MD47CgkJY29tcGF0aWJsZSA9ICJudmlk
aWEsdGVncmEyMTAtaTJjIjsKCQlyZWcgPSA8MHgwIDB4NzAwMGM0MDAgMHgwIDB4MTAwPjsKCQlp
bnRlcnJ1cHRzID0gPDB4MCAweDU0IDB4ND47CgkJaW9tbXVzID0gPDB4MmIgMHhlPjsKCQlzdGF0
dXMgPSAib2theSI7CgkJY2xvY2stZnJlcXVlbmN5ID0gPDB4MTg2YTA+OwoJCWRtYXMgPSA8MHg0
YyAweDE2IDB4NGMgMHgxNj47CgkJZG1hLW5hbWVzID0gInJ4IiwgInR4IjsKCQljbG9ja3MgPSA8
MHgyMSAweDM2IDB4MjEgMHhmMz47CgkJY2xvY2stbmFtZXMgPSAiZGl2LWNsayIsICJwYXJlbnQi
OwoJCXJlc2V0cyA9IDwweDIxIDB4MzY+OwoJCXJlc2V0LW5hbWVzID0gImkyYyI7CgkJbGludXgs
cGhhbmRsZSA9IDwweDExOD47CgkJcGhhbmRsZSA9IDwweDExOD47CgoJCWkyY211eEA3MCB7CgkJ
CWNvbXBhdGlibGUgPSAibnhwLHBjYTk1NDYiOwoJCQlyZWcgPSA8MHg3MD47CgkJCSNhZGRyZXNz
LWNlbGxzID0gPDB4MT47CgkJCSNzaXplLWNlbGxzID0gPDB4MD47CgkJCXZjYy1zdXBwbHkgPSA8
MHg0Nz47CgkJCXZjYy1wdWxsdXAtc3VwcGx5ID0gPDB4NDc+OwoJCQlzdGF0dXMgPSAiZGlzYWJs
ZWQiOwoJCQlsaW51eCxwaGFuZGxlID0gPDB4ZDU+OwoJCQlwaGFuZGxlID0gPDB4ZDU+OwoKCQkJ
aTJjQDAgewoJCQkJcmVnID0gPDB4MD47CgkJCQlpMmMtbXV4LGRlc2VsZWN0LW9uLWV4aXQ7CgkJ
CQkjYWRkcmVzcy1jZWxscyA9IDwweDE+OwoJCQkJI3NpemUtY2VsbHMgPSA8MHgwPjsKCQkJfTsK
CgkJCWkyY0AxIHsKCQkJCXJlZyA9IDwweDE+OwoJCQkJaTJjLW11eCxkZXNlbGVjdC1vbi1leGl0
OwoJCQkJI2FkZHJlc3MtY2VsbHMgPSA8MHgxPjsKCQkJCSNzaXplLWNlbGxzID0gPDB4MD47CgoJ
CQkJaW5hMzIyMXhANDAgewoJCQkJCWNvbXBhdGlibGUgPSAidGksaW5hMzIyMXgiOwoJCQkJCXJl
ZyA9IDwweDQwPjsKCQkJCQl0aSx0cmlnZ2VyLWNvbmZpZyA9IDwweDcwMDM+OwoJCQkJCXRpLGNv
bnRpbnVvdXMtY29uZmlnID0gPDB4N2MwNz47CgkJCQkJdGksZW5hYmxlLWZvcmNlZC1jb250aW51
b3VzOwoJCQkJCSNhZGRyZXNzLWNlbGxzID0gPDB4MT47CgkJCQkJI3NpemUtY2VsbHMgPSA8MHgw
PjsKCgkJCQkJY2hhbm5lbEAwIHsKCQkJCQkJcmVnID0gPDB4MD47CgkJCQkJCXRpLHJhaWwtbmFt
ZSA9ICJWRERfNVYiOwoJCQkJCQl0aSxzaHVudC1yZXNpc3Rvci1tb2htID0gPDB4YT47CgkJCQkJ
fTsKCgkJCQkJY2hhbm5lbEAxIHsKCQkJCQkJcmVnID0gPDB4MT47CgkJCQkJCXRpLHJhaWwtbmFt
ZSA9ICJWRERfM1YzIjsKCQkJCQkJdGksc2h1bnQtcmVzaXN0b3ItbW9obSA9IDwweGE+OwoJCQkJ
CX07CgoJCQkJCWNoYW5uZWxAMiB7CgkJCQkJCXJlZyA9IDwweDI+OwoJCQkJCQl0aSxyYWlsLW5h
bWUgPSAiVkREXzFWOCI7CgkJCQkJCXRpLHNodW50LXJlc2lzdG9yLW1vaG0gPSA8MHgxPjsKCQkJ
CQl9OwoJCQkJfTsKCgkJCQlpbmEzMjIxeEA0MSB7CgkJCQkJY29tcGF0aWJsZSA9ICJ0aSxpbmEz
MjIxeCI7CgkJCQkJcmVnID0gPDB4NDE+OwoJCQkJCXRpLHRyaWdnZXItY29uZmlnID0gPDB4NzAw
Mz47CgkJCQkJdGksY29udGludW91cy1jb25maWcgPSA8MHg3YzA3PjsKCQkJCQl0aSxlbmFibGUt
Zm9yY2VkLWNvbnRpbnVvdXM7CgkJCQkJI2FkZHJlc3MtY2VsbHMgPSA8MHgxPjsKCQkJCQkjc2l6
ZS1jZWxscyA9IDwweDA+OwoKCQkJCQljaGFubmVsQDAgewoJCQkJCQlyZWcgPSA8MHgwPjsKCQkJ
CQkJdGkscmFpbC1uYW1lID0gIlZERF81Vl9BVUQiOwoJCQkJCQl0aSxzaHVudC1yZXNpc3Rvci1t
b2htID0gPDB4MT47CgkJCQkJfTsKCgkJCQkJY2hhbm5lbEAxIHsKCQkJCQkJcmVnID0gPDB4MT47
CgkJCQkJCXRpLHJhaWwtbmFtZSA9ICJWRERfM1YzX0FVRCI7CgkJCQkJCXRpLHNodW50LXJlc2lz
dG9yLW1vaG0gPSA8MHhhPjsKCQkJCQl9OwoKCQkJCQljaGFubmVsQDIgewoJCQkJCQlyZWcgPSA8
MHgyPjsKCQkJCQkJdGkscmFpbC1uYW1lID0gIlZERF8xVjhfQVVEIjsKCQkJCQkJdGksc2h1bnQt
cmVzaXN0b3ItbW9obSA9IDwweGE+OwoJCQkJCX07CgkJCQl9OwoKCQkJCWluYTMyMjF4QDQyIHsK
CQkJCQljb21wYXRpYmxlID0gInRpLGluYTMyMjF4IjsKCQkJCQlyZWcgPSA8MHg0Mj47CgkJCQkJ
dGksdHJpZ2dlci1jb25maWcgPSA8MHg3MDAzPjsKCQkJCQl0aSxjb250aW51b3VzLWNvbmZpZyA9
IDwweDdjMDc+OwoJCQkJCXRpLGVuYWJsZS1mb3JjZWQtY29udGludW91czsKCQkJCQkjYWRkcmVz
cy1jZWxscyA9IDwweDE+OwoJCQkJCSNzaXplLWNlbGxzID0gPDB4MD47CgoJCQkJCWNoYW5uZWxA
MCB7CgkJCQkJCXJlZyA9IDwweDA+OwoJCQkJCQl0aSxyYWlsLW5hbWUgPSAiVkREXzNWM19HUFMi
OwoJCQkJCQl0aSxzaHVudC1yZXNpc3Rvci1tb2htID0gPDB4YT47CgkJCQkJfTsKCgkJCQkJY2hh
bm5lbEAxIHsKCQkJCQkJcmVnID0gPDB4MT47CgkJCQkJCXRpLHJhaWwtbmFtZSA9ICJWRERfM1Yz
X05GQyI7CgkJCQkJCXRpLHNodW50LXJlc2lzdG9yLW1vaG0gPSA8MHhhPjsKCQkJCQl9OwoKCQkJ
CQljaGFubmVsQDIgewoJCQkJCQlyZWcgPSA8MHgyPjsKCQkJCQkJdGkscmFpbC1uYW1lID0gIlZE
RF8zVjNfR1lSTyI7CgkJCQkJCXRpLHNodW50LXJlc2lzdG9yLW1vaG0gPSA8MHhhPjsKCQkJCQl9
OwoJCQkJfTsKCQkJfTsKCgkJCWkyY0AyIHsKCQkJCXJlZyA9IDwweDI+OwoJCQkJaTJjLW11eCxk
ZXNlbGVjdC1vbi1leGl0OwoJCQkJI2FkZHJlc3MtY2VsbHMgPSA8MHgxPjsKCQkJCSNzaXplLWNl
bGxzID0gPDB4MD47CgkJCX07CgoJCQlpMmNAMyB7CgkJCQlyZWcgPSA8MHgzPjsKCQkJCWkyYy1t
dXgsZGVzZWxlY3Qtb24tZXhpdDsKCQkJCSNhZGRyZXNzLWNlbGxzID0gPDB4MT47CgkJCQkjc2l6
ZS1jZWxscyA9IDwweDA+OwoKCQkJCXJ0NTY1OS4xMi0wMDFhQDFhIHsKCQkJCQljb21wYXRpYmxl
ID0gInJlYWx0ZWsscnQ1NjU4IjsKCQkJCQlyZWcgPSA8MHgxYT47CgkJCQkJc3RhdHVzID0gImRp
c2FibGVkIjsKCQkJCQlncGlvcyA9IDwweDU2IDB4ZSAweDA+OwoJCQkJCXJlYWx0ZWssamQtc3Jj
ID0gPDB4MT47CgkJCQkJcmVhbHRlayxkbWljMS1kYXRhLXBpbiA9IDwweDI+OwoJCQkJCWxpbnV4
LHBoYW5kbGUgPSA8MHhkZD47CgkJCQkJcGhhbmRsZSA9IDwweGRkPjsKCQkJCX07CgkJCX07CgkJ
fTsKCgkJZ3Bpb0AyMCB7CgkJCWNvbXBhdGlibGUgPSAidGksdGNhNjQxNiI7CgkJCXJlZyA9IDww
eDIwPjsKCQkJZ3Bpby1jb250cm9sbGVyOwoJCQkjZ3Bpby1jZWxscyA9IDwweDI+OwoJCQl2Y2Mt
c3VwcGx5ID0gPDB4NDc+OwoJCQlzdGF0dXMgPSAiZGlzYWJsZWQiOwoJCQlsaW51eCxwaGFuZGxl
ID0gPDB4ZDY+OwoJCQlwaGFuZGxlID0gPDB4ZDY+OwoJCX07CgoJCWljbTIwNjI4QDY4IHsKCQkJ
Y29tcGF0aWJsZSA9ICJpbnZlbnNlbnNlLG1wdTZ4eHgiOwoJCQlyZWcgPSA8MHg2OD47CgkJCWlu
dGVycnVwdC1wYXJlbnQgPSA8MHg1Nj47CgkJCWludGVycnVwdHMgPSA8MHhjOCAweDE+OwoJCQlh
Y2NlbGVyb21ldGVyX21hdHJpeCA9IFswMSAwMCAwMCAwMCAwMSAwMCAwMCAwMCAwMV07CgkJCWd5
cm9zY29wZV9tYXRyaXggPSBbMDEgMDAgMDAgMDAgMDEgMDAgMDAgMDAgMDFdOwoJCQlnZW9tYWdu
ZXRpY19yb3RhdGlvbl92ZWN0b3JfZGlzYWJsZSA9IDwweDE+OwoJCQlneXJvc2NvcGVfdW5jYWxp
YnJhdGVkX2Rpc2FibGUgPSA8MHgxPjsKCQkJcXVhdGVybmlvbl9kaXNhYmxlID0gPDB4MT47CgkJ
CXN0YXR1cyA9ICJkaXNhYmxlZCI7CgkJCWxpbnV4LHBoYW5kbGUgPSA8MHhkNz47CgkJCXBoYW5k
bGUgPSA8MHhkNz47CgkJfTsKCgkJYWs4OTYzQDBkIHsKCQkJY29tcGF0aWJsZSA9ICJhayxhazg5
eHgiOwoJCQlyZWcgPSA8MHhkPjsKCQkJbWFnbmV0aWNfZmllbGRfbWF0cml4ID0gWzAxIDAwIDAw
IDAwIDAxIDAwIDAwIDAwIDAxXTsKCQkJc3RhdHVzID0gImRpc2FibGVkIjsKCQkJbGludXgscGhh
bmRsZSA9IDwweGQ4PjsKCQkJcGhhbmRsZSA9IDwweGQ4PjsKCQl9OwoKCQljbTMyMTgwQDQ4IHsK
CQkJY29tcGF0aWJsZSA9ICJjYXBlbGxhLGNtMzIxODAiOwoJCQlyZWcgPSA8MHg0OD47CgkJCWdw
aW9faXJxID0gPDB4NTYgMHhjIDB4MT47CgkJCWxpZ2h0X3VuY2FsaWJyYXRlZF9sbyA9IDwweDE+
OwoJCQlsaWdodF9jYWxpYnJhdGVkX2xvID0gPDB4OTY+OwoJCQlsaWdodF91bmNhbGlicmF0ZWRf
aGkgPSA8MHgxNzMxOD47CgkJCWxpZ2h0X2NhbGlicmF0ZWRfaGkgPSA8MHgxYWIzZjA+OwoJCQlz
dGF0dXMgPSAiZGlzYWJsZWQiOwoJCQlsaW51eCxwaGFuZGxlID0gPDB4ZDk+OwoJCQlwaGFuZGxl
ID0gPDB4ZDk+OwoJCX07CgoJCWlxczI2M0A0NCB7CgkJCXN0YXR1cyA9ICJkaXNhYmxlZCI7CgkJ
CWxpbnV4LHBoYW5kbGUgPSA8MHgxMTk+OwoJCQlwaGFuZGxlID0gPDB4MTE5PjsKCQl9OwoKCQly
dDU2NTkuMS0wMDFhQDFhIHsKCQkJY29tcGF0aWJsZSA9ICJyZWFsdGVrLHJ0NTY1OCI7CgkJCXJl
ZyA9IDwweDFhPjsKCQkJc3RhdHVzID0gImRpc2FibGVkIjsKCQkJZ3Bpb3MgPSA8MHg1NiAweGUg
MHgwPjsKCQkJcmVhbHRlayxqZC1zcmMgPSA8MHgxPjsKCQkJcmVhbHRlayxkbWljMS1kYXRhLXBp
biA9IDwweDI+OwoJCQlsaW51eCxwaGFuZGxlID0gPDB4ZGM+OwoJCQlwaGFuZGxlID0gPDB4ZGM+
OwoJCX07Cgl9OwoKCWkyY0A3MDAwYzUwMCB7CgkJI2FkZHJlc3MtY2VsbHMgPSA8MHgxPjsKCQkj
c2l6ZS1jZWxscyA9IDwweDA+OwoJCWNvbXBhdGlibGUgPSAibnZpZGlhLHRlZ3JhMjEwLWkyYyI7
CgkJcmVnID0gPDB4MCAweDcwMDBjNTAwIDB4MCAweDEwMD47CgkJaW50ZXJydXB0cyA9IDwweDAg
MHg1YyAweDQ+OwoJCWlvbW11cyA9IDwweDJiIDB4ZT47CgkJc3RhdHVzID0gIm9rYXkiOwoJCWNs
b2NrLWZyZXF1ZW5jeSA9IDwweDYxYTgwPjsKCQlkbWFzID0gPDB4NGMgMHgxNyAweDRjIDB4MTc+
OwoJCWRtYS1uYW1lcyA9ICJyeCIsICJ0eCI7CgkJY2xvY2tzID0gPDB4MjEgMHg0MyAweDIxIDB4
ZjM+OwoJCWNsb2NrLW5hbWVzID0gImRpdi1jbGsiLCAicGFyZW50IjsKCQlyZXNldHMgPSA8MHgy
MSAweDQzPjsKCQlyZXNldC1uYW1lcyA9ICJpMmMiOwoJCWxpbnV4LHBoYW5kbGUgPSA8MHhhYz47
CgkJcGhhbmRsZSA9IDwweGFjPjsKCgkJYmF0dGVyeS1jaGFyZ2VyQDZiIHsKCQkJc3RhdHVzID0g
ImRpc2FibGVkIjsKCQl9OwoJfTsKCglpMmNANzAwMGM3MDAgewoJCSNhZGRyZXNzLWNlbGxzID0g
PDB4MT47CgkJI3NpemUtY2VsbHMgPSA8MHgwPjsKCQljb21wYXRpYmxlID0gIm52aWRpYSx0ZWdy
YTIxMC1pMmMiOwoJCXJlZyA9IDwweDAgMHg3MDAwYzcwMCAweDAgMHgxMDA+OwoJCWludGVycnVw
dHMgPSA8MHgwIDB4NzggMHg0PjsKCQlpb21tdXMgPSA8MHgyYiAweGU+OwoJCXN0YXR1cyA9ICJv
a2F5IjsKCQljbG9jay1mcmVxdWVuY3kgPSA8MHgxODZhMD47CgkJZG1hcyA9IDwweDRjIDB4MWEg
MHg0YyAweDFhPjsKCQlkbWEtbmFtZXMgPSAicngiLCAidHgiOwoJCWNsb2NrcyA9IDwweDIxIDB4
NjcgMHgyMSAweGYzPjsKCQljbG9jay1uYW1lcyA9ICJkaXYtY2xrIiwgInBhcmVudCI7CgkJcmVz
ZXRzID0gPDB4MjEgMHg2Nz47CgkJcmVzZXQtbmFtZXMgPSAiaTJjIjsKCQludmlkaWEscmVzdHJp
Y3QtY2xrLWNoYW5nZTsKCQlwcmludC1yYXRlLWxpbWl0ID0gPDB4NzggMHgxPjsKCQlsaW51eCxw
aGFuZGxlID0gPDB4NzE+OwoJCXBoYW5kbGUgPSA8MHg3MT47Cgl9OwoKCWkyY0A3MDAwZDAwMCB7
CgkJI2FkZHJlc3MtY2VsbHMgPSA8MHgxPjsKCQkjc2l6ZS1jZWxscyA9IDwweDA+OwoJCWNvbXBh
dGlibGUgPSAibnZpZGlhLHRlZ3JhMjEwLWkyYyI7CgkJcmVnID0gPDB4MCAweDcwMDBkMDAwIDB4
MCAweDEwMD47CgkJaW50ZXJydXB0cyA9IDwweDAgMHgzNSAweDQ+OwoJCXNjbC1ncGlvID0gPDB4
NTYgMHhjMyAweDA+OwoJCXNkYS1ncGlvID0gPDB4NTYgMHhjNCAweDA+OwoJCW52aWRpYSxyZXF1
aXJlLWNsZHZmcy1jbG9jazsKCQlpb21tdXMgPSA8MHgyYiAweGU+OwoJCXN0YXR1cyA9ICJva2F5
IjsKCQljbG9jay1mcmVxdWVuY3kgPSA8MHhmNDI0MD47CgkJZG1hcyA9IDwweDRjIDB4MTggMHg0
YyAweDE4PjsKCQlkbWEtbmFtZXMgPSAicngiLCAidHgiOwoJCWNsb2NrcyA9IDwweDIxIDB4MmYg
MHgyMSAweGYzPjsKCQljbG9jay1uYW1lcyA9ICJkaXYtY2xrIiwgInBhcmVudCI7CgkJcmVzZXRz
ID0gPDB4MjEgMHgyZj47CgkJcmVzZXQtbmFtZXMgPSAiaTJjIjsKCQludmlkaWEsYml0LWJhbmct
YWZ0ZXItc2h1dGRvd247CgkJbGludXgscGhhbmRsZSA9IDwweDExYT47CgkJcGhhbmRsZSA9IDww
eDExYT47CgoJCW1heDc3NjIwQDNjIHsKCQkJY29tcGF0aWJsZSA9ICJtYXhpbSxtYXg3NzYyMCI7
CgkJCXJlZyA9IDwweDNjPjsKCQkJaW50ZXJydXB0cyA9IDwweDAgMHg1NiAweDA+OwoJCQludmlk
aWEscG1jLXdha2V1cCA9IDwweDM3IDB4MSAweDMzIDB4OD47CgkJCSNpbnRlcnJ1cHQtY2VsbHMg
PSA8MHgyPjsKCQkJaW50ZXJydXB0LWNvbnRyb2xsZXI7CgkJCWdwaW8tY29udHJvbGxlcjsKCQkJ
I2dwaW8tY2VsbHMgPSA8MHgyPjsKCQkJbWF4aW0sZW5hYmxlLWNsb2NrMzJrLW91dDsKCQkJbWF4
aW0sc3lzdGVtLXBtaWMtcG93ZXItb2ZmOwoJCQltYXhpbSxob3QtZGllLXRocmVzaG9sZC10ZW1w
ID0gPDB4MWFkYjA+OwoJCQkjdGhlcm1hbC1zZW5zb3ItY2VsbHMgPSA8MHgwPjsKCQkJcGluY3Ry
bC1uYW1lcyA9ICJkZWZhdWx0IjsKCQkJcGluY3RybC0wID0gPDB4ODY+OwoJCQltYXhpbSxwb3dl
ci1zaHV0ZG93bi1ncGlvLXN0YXRlcyA9IDwweDEgMHgwPjsKCQkJbGludXgscGhhbmRsZSA9IDww
eDFlPjsKCQkJcGhhbmRsZSA9IDwweDFlPjsKCgkJCXBpbm11eEAwIHsKCQkJCWxpbnV4LHBoYW5k
bGUgPSA8MHg4Nj47CgkJCQlwaGFuZGxlID0gPDB4ODY+OwoKCQkJCXBpbl9ncGlvMCB7CgkJCQkJ
cGlucyA9ICJncGlvMCI7CgkJCQkJZnVuY3Rpb24gPSAiZ3BpbyI7CgkJCQl9OwoKCQkJCXBpbl9n
cGlvMSB7CgkJCQkJcGlucyA9ICJncGlvMSI7CgkJCQkJZnVuY3Rpb24gPSAiZ3BpbyI7CgkJCQkJ
ZHJpdmUtb3Blbi1kcmFpbiA9IDwweDE+OwoJCQkJCW1heGltLGFjdGl2ZS1mcHMtc291cmNlID0g
PDB4Mz47CgkJCQkJbWF4aW0sYWN0aXZlLWZwcy1wb3dlci11cC1zbG90ID0gPDB4MD47CgkJCQkJ
bWF4aW0sYWN0aXZlLWZwcy1wb3dlci1kb3duLXNsb3QgPSA8MHg3PjsKCQkJCX07CgoJCQkJcGlu
X2dwaW8yIHsKCQkJCQlwaW5zID0gImdwaW8yIjsKCQkJCQltYXhpbSxhY3RpdmUtZnBzLXNvdXJj
ZSA9IDwweDA+OwoJCQkJCW1heGltLGFjdGl2ZS1mcHMtcG93ZXItdXAtc2xvdCA9IDwweDA+OwoJ
CQkJCW1heGltLGFjdGl2ZS1mcHMtcG93ZXItZG93bi1zbG90ID0gPDB4Nz47CgkJCQl9OwoKCQkJ
CXBpbl9ncGlvMyB7CgkJCQkJcGlucyA9ICJncGlvMyI7CgkJCQkJbWF4aW0sYWN0aXZlLWZwcy1z
b3VyY2UgPSA8MHgwPjsKCQkJCQltYXhpbSxhY3RpdmUtZnBzLXBvd2VyLXVwLXNsb3QgPSA8MHg0
PjsKCQkJCQltYXhpbSxhY3RpdmUtZnBzLXBvd2VyLWRvd24tc2xvdCA9IDwweDM+OwoJCQkJfTsK
CgkJCQlwaW5fZ3BpbzJfMyB7CgkJCQkJcGlucyA9ICJncGlvMiIsICJncGlvMyI7CgkJCQkJZnVu
Y3Rpb24gPSAiZnBzLW91dCI7CgkJCQkJZHJpdmUtb3Blbi1kcmFpbiA9IDwweDE+OwoJCQkJCW1h
eGltLGFjdGl2ZS1mcHMtc291cmNlID0gPDB4MD47CgkJCQl9OwoKCQkJCXBpbl9ncGlvNCB7CgkJ
CQkJcGlucyA9ICJncGlvNCI7CgkJCQkJZnVuY3Rpb24gPSAiMzJrLW91dDEiOwoJCQkJfTsKCgkJ
CQlwaW5fZ3BpbzVfNl83IHsKCQkJCQlwaW5zID0gImdwaW81IiwgImdwaW82IiwgImdwaW83IjsK
CQkJCQlmdW5jdGlvbiA9ICJncGlvIjsKCQkJCQlkcml2ZS1wdXNoLXB1bGwgPSA8MHgxPjsKCQkJ
CX07CgkJCX07CgoJCQlzcG1pYy1kZWZhdWx0LW91dHB1dC1oaWdoIHsKCQkJCWdwaW8taG9nOwoJ
CQkJb3V0cHV0LWhpZ2g7CgkJCQlncGlvcyA9IDwweDEgMHgwPjsKCQkJCWxhYmVsID0gInNwbWlj
LWRlZmF1bHQtb3V0cHV0LWhpZ2giOwoJCQl9OwoKCQkJd2F0Y2hkb2cgewoJCQkJbWF4aW0sd2R0
LXRpbWVvdXQgPSA8MHgxMD47CgkJCQltYXhpbSx3ZHQtY2xlYXItdGltZSA9IDwweGQ+OwoJCQkJ
c3RhdHVzID0gImRpc2FibGVkIjsKCQkJCWR0LW92ZXJyaWRlLXN0YXR1cy1vZG0tZGF0YSA9IDww
eDIwMDAwIDB4MjAwMDA+OwoJCQkJbGludXgscGhhbmRsZSA9IDwweGI2PjsKCQkJCXBoYW5kbGUg
PSA8MHhiNj47CgkJCX07CgoJCQlmcHMgewoJCQkJI2FkZHJlc3MtY2VsbHMgPSA8MHgxPjsKCQkJ
CSNzaXplLWNlbGxzID0gPDB4MD47CgoJCQkJZnBzMCB7CgkJCQkJcmVnID0gPDB4MD47CgkJCQkJ
bWF4aW0sc2h1dGRvd24tZnBzLXRpbWUtcGVyaW9kaS11cyA9IDwweDUwMD47CgkJCQkJbWF4aW0s
ZnBzLWV2ZW50LXNvdXJjZSA9IDwweDA+OwoJCQkJfTsKCgkJCQlmcHMxIHsKCQkJCQlyZWcgPSA8
MHgxPjsKCQkJCQltYXhpbSxzaHV0ZG93bi1mcHMtdGltZS1wZXJpb2QtdXMgPSA8MHg1MDA+OwoJ
CQkJCW1heGltLGZwcy1ldmVudC1zb3VyY2UgPSA8MHgxPjsKCQkJCQltYXhpbSxkZXZpY2Utc3Rh
dGUtb24tZGlzYWJsZWQtZXZlbnQgPSA8MHgwPjsKCQkJCX07CgoJCQkJZnBzMiB7CgkJCQkJcmVn
ID0gPDB4Mj47CgkJCQkJbWF4aW0sZnBzLWV2ZW50LXNvdXJjZSA9IDwweDA+OwoJCQkJfTsKCQkJ
fTsKCgkJCWJhY2t1cC1iYXR0ZXJ5IHsKCQkJCW1heGltLGJhY2t1cC1iYXR0ZXJ5LWNoYXJnaW5n
LWN1cnJlbnQgPSA8MHg2ND47CgkJCQltYXhpbSxiYWNrdXAtYmF0dGVyeS1jaGFyZ2luZy12b2x0
YWdlID0gPDB4MmRjNmMwPjsKCQkJCW1heGltLGJhY2t1cC1iYXR0ZXJ5LW91dHB1dC1yZXNpc3Rl
ciA9IDwweDY0PjsKCQkJfTsKCgkJCXJlZ3VsYXRvcnMgewoJCQkJaW4tbGRvMC0xLXN1cHBseSA9
IDwweDg3PjsKCQkJCWluLWxkbzctOC1zdXBwbHkgPSA8MHg4Nz47CgoJCQkJc2QwIHsKCQkJCQly
ZWd1bGF0b3ItbmFtZSA9ICJ2ZGQtY29yZSI7CgkJCQkJcmVndWxhdG9yLW1pbi1taWNyb3ZvbHQg
PSA8MHhmNDI0MD47CgkJCQkJcmVndWxhdG9yLW1heC1taWNyb3ZvbHQgPSA8MHgxMWRhNTA+OwoJ
CQkJCXJlZ3VsYXRvci1ib290LW9uOwoJCQkJCXJlZ3VsYXRvci1hbHdheXMtb247CgkJCQkJbWF4
aW0sYWN0aXZlLWZwcy1zb3VyY2UgPSA8MHgxPjsKCQkJCQlyZWd1bGF0b3ItaW5pdC1tb2RlID0g
PDB4Mj47CgkJCQkJbWF4aW0sYWN0aXZlLWZwcy1wb3dlci11cC1zbG90ID0gPDB4MT47CgkJCQkJ
bWF4aW0sYWN0aXZlLWZwcy1wb3dlci1kb3duLXNsb3QgPSA8MHg2PjsKCQkJCQlyZWd1bGF0b3It
ZW5hYmxlLXJhbXAtZGVsYXkgPSA8MHg5Mj47CgkJCQkJcmVndWxhdG9yLWRpc2FibGUtcmFtcC1k
ZWxheSA9IDwweGZmMD47CgkJCQkJcmVndWxhdG9yLXJhbXAtZGVsYXkgPSA8MHg2YjZjPjsKCQkJ
CQlyZWd1bGF0b3ItcmFtcC1kZWxheS1zY2FsZSA9IDwweDEyYz47CgkJCQkJbGludXgscGhhbmRs
ZSA9IDwweGExPjsKCQkJCQlwaGFuZGxlID0gPDB4YTE+OwoJCQkJfTsKCgkJCQlzZDEgewoJCQkJ
CXJlZ3VsYXRvci1uYW1lID0gInZkZC1kZHItMXYxIjsKCQkJCQlyZWd1bGF0b3ItYWx3YXlzLW9u
OwoJCQkJCXJlZ3VsYXRvci1ib290LW9uOwoJCQkJCXJlZ3VsYXRvci1pbml0LW1vZGUgPSA8MHgy
PjsKCQkJCQltYXhpbSxhY3RpdmUtZnBzLXNvdXJjZSA9IDwweDA+OwoJCQkJCW1heGltLGFjdGl2
ZS1mcHMtcG93ZXItdXAtc2xvdCA9IDwweDU+OwoJCQkJCW1heGltLGFjdGl2ZS1mcHMtcG93ZXIt
ZG93bi1zbG90ID0gPDB4Mj47CgkJCQkJcmVndWxhdG9yLW1pbi1taWNyb3ZvbHQgPSA8MHgxMThj
MzA+OwoJCQkJCXJlZ3VsYXRvci1tYXgtbWljcm92b2x0ID0gPDB4MTE4YzMwPjsKCQkJCQlyZWd1
bGF0b3ItZW5hYmxlLXJhbXAtZGVsYXkgPSA8MHg4Mj47CgkJCQkJcmVndWxhdG9yLWRpc2FibGUt
cmFtcC1kZWxheSA9IDwweDIzOTg4PjsKCQkJCQlyZWd1bGF0b3ItcmFtcC1kZWxheSA9IDwweDZi
NmM+OwoJCQkJCXJlZ3VsYXRvci1yYW1wLWRlbGF5LXNjYWxlID0gPDB4MTJjPjsKCQkJCQlsaW51
eCxwaGFuZGxlID0gPDB4MTFiPjsKCQkJCQlwaGFuZGxlID0gPDB4MTFiPjsKCQkJCX07CgoJCQkJ
c2QyIHsKCQkJCQlyZWd1bGF0b3ItbmFtZSA9ICJ2ZGQtcHJlLXJlZy0xdjM1IjsKCQkJCQlyZWd1
bGF0b3ItbWluLW1pY3Jvdm9sdCA9IDwweDE0OTk3MD47CgkJCQkJcmVndWxhdG9yLW1heC1taWNy
b3ZvbHQgPSA8MHgxNDk5NzA+OwoJCQkJCXJlZ3VsYXRvci1hbHdheXMtb247CgkJCQkJcmVndWxh
dG9yLWJvb3Qtb247CgkJCQkJbWF4aW0sYWN0aXZlLWZwcy1zb3VyY2UgPSA8MHgxPjsKCQkJCQlt
YXhpbSxhY3RpdmUtZnBzLXBvd2VyLXVwLXNsb3QgPSA8MHgyPjsKCQkJCQltYXhpbSxhY3RpdmUt
ZnBzLXBvd2VyLWRvd24tc2xvdCA9IDwweDU+OwoJCQkJCXJlZ3VsYXRvci1lbmFibGUtcmFtcC1k
ZWxheSA9IDwweGIwPjsKCQkJCQlyZWd1bGF0b3ItZGlzYWJsZS1yYW1wLWRlbGF5ID0gPDB4N2Qw
MD47CgkJCQkJcmVndWxhdG9yLXJhbXAtZGVsYXkgPSA8MHg2YjZjPjsKCQkJCQlyZWd1bGF0b3It
cmFtcC1kZWxheS1zY2FsZSA9IDwweDE1ZT47CgkJCQkJbGludXgscGhhbmRsZSA9IDwweDg3PjsK
CQkJCQlwaGFuZGxlID0gPDB4ODc+OwoJCQkJfTsKCgkJCQlzZDMgewoJCQkJCXJlZ3VsYXRvci1u
YW1lID0gInZkZC0xdjgiOwoJCQkJCXJlZ3VsYXRvci1taW4tbWljcm92b2x0ID0gPDB4MWI3NzQw
PjsKCQkJCQlyZWd1bGF0b3ItbWF4LW1pY3Jvdm9sdCA9IDwweDFiNzc0MD47CgkJCQkJcmVndWxh
dG9yLWFsd2F5cy1vbjsKCQkJCQlyZWd1bGF0b3ItYm9vdC1vbjsKCQkJCQltYXhpbSxhY3RpdmUt
ZnBzLXNvdXJjZSA9IDwweDA+OwoJCQkJCXJlZ3VsYXRvci1pbml0LW1vZGUgPSA8MHgyPjsKCQkJ
CQltYXhpbSxhY3RpdmUtZnBzLXBvd2VyLXVwLXNsb3QgPSA8MHgzPjsKCQkJCQltYXhpbSxhY3Rp
dmUtZnBzLXBvd2VyLWRvd24tc2xvdCA9IDwweDQ+OwoJCQkJCXJlZ3VsYXRvci1lbmFibGUtcmFt
cC1kZWxheSA9IDwweGYyPjsKCQkJCQlyZWd1bGF0b3ItZGlzYWJsZS1yYW1wLWRlbGF5ID0gPDB4
MWNjZjA+OwoJCQkJCXJlZ3VsYXRvci1yYW1wLWRlbGF5ID0gPDB4NmI2Yz47CgkJCQkJcmVndWxh
dG9yLXJhbXAtZGVsYXktc2NhbGUgPSA8MHgxNjg+OwoJCQkJCWxpbnV4LHBoYW5kbGUgPSA8MHgz
Nj47CgkJCQkJcGhhbmRsZSA9IDwweDM2PjsKCQkJCX07CgoJCQkJbGRvMCB7CgkJCQkJcmVndWxh
dG9yLW5hbWUgPSAiYXZkZC1zeXMtMXYyIjsKCQkJCQlyZWd1bGF0b3ItbWluLW1pY3Jvdm9sdCA9
IDwweDEyNGY4MD47CgkJCQkJcmVndWxhdG9yLW1heC1taWNyb3ZvbHQgPSA8MHgxMjRmODA+OwoJ
CQkJCXJlZ3VsYXRvci1ib290LW9uOwoJCQkJCW1heGltLGFjdGl2ZS1mcHMtc291cmNlID0gPDB4
Mz47CgkJCQkJbWF4aW0sYWN0aXZlLWZwcy1wb3dlci11cC1zbG90ID0gPDB4MD47CgkJCQkJbWF4
aW0sYWN0aXZlLWZwcy1wb3dlci1kb3duLXNsb3QgPSA8MHg3PjsKCQkJCQlyZWd1bGF0b3ItZW5h
YmxlLXJhbXAtZGVsYXkgPSA8MHgxYT47CgkJCQkJcmVndWxhdG9yLWRpc2FibGUtcmFtcC1kZWxh
eSA9IDwweDI3Mj47CgkJCQkJcmVndWxhdG9yLXJhbXAtZGVsYXkgPSA8MHgxODZhMD47CgkJCQkJ
cmVndWxhdG9yLXJhbXAtZGVsYXktc2NhbGUgPSA8MHhjOD47CgkJCQkJbGludXgscGhhbmRsZSA9
IDwweDNkPjsKCQkJCQlwaGFuZGxlID0gPDB4M2Q+OwoJCQkJfTsKCgkJCQlsZG8xIHsKCQkJCQly
ZWd1bGF0b3ItbmFtZSA9ICJ2ZGQtcGV4LTF2MCI7CgkJCQkJcmVndWxhdG9yLW1pbi1taWNyb3Zv
bHQgPSA8MHgxMDA1OTA+OwoJCQkJCXJlZ3VsYXRvci1tYXgtbWljcm92b2x0ID0gPDB4MTAwNTkw
PjsKCQkJCQlyZWd1bGF0b3ItYWx3YXlzLW9uOwoJCQkJCW1heGltLGFjdGl2ZS1mcHMtc291cmNl
ID0gPDB4Mz47CgkJCQkJbWF4aW0sYWN0aXZlLWZwcy1wb3dlci11cC1zbG90ID0gPDB4MD47CgkJ
CQkJbWF4aW0sYWN0aXZlLWZwcy1wb3dlci1kb3duLXNsb3QgPSA8MHg3PjsKCQkJCQlyZWd1bGF0
b3ItZW5hYmxlLXJhbXAtZGVsYXkgPSA8MHgxNj47CgkJCQkJcmVndWxhdG9yLWRpc2FibGUtcmFt
cC1kZWxheSA9IDwweDI3Nj47CgkJCQkJcmVndWxhdG9yLXJhbXAtZGVsYXkgPSA8MHgxODZhMD47
CgkJCQkJcmVndWxhdG9yLXJhbXAtZGVsYXktc2NhbGUgPSA8MHhjOD47CgkJCQkJbGludXgscGhh
bmRsZSA9IDwweDNmPjsKCQkJCQlwaGFuZGxlID0gPDB4M2Y+OwoJCQkJfTsKCgkJCQlsZG8yIHsK
CQkJCQlyZWd1bGF0b3ItbmFtZSA9ICJ2ZGRpby1zZG1tYy1hcCI7CgkJCQkJcmVndWxhdG9yLW1p
bi1taWNyb3ZvbHQgPSA8MHgxYjc3NDA+OwoJCQkJCXJlZ3VsYXRvci1tYXgtbWljcm92b2x0ID0g
PDB4MzI1YWEwPjsKCQkJCQltYXhpbSxhY3RpdmUtZnBzLXNvdXJjZSA9IDwweDM+OwoJCQkJCW1h
eGltLGFjdGl2ZS1mcHMtcG93ZXItdXAtc2xvdCA9IDwweDA+OwoJCQkJCW1heGltLGFjdGl2ZS1m
cHMtcG93ZXItZG93bi1zbG90ID0gPDB4Nz47CgkJCQkJcmVndWxhdG9yLWVuYWJsZS1yYW1wLWRl
bGF5ID0gPDB4M2U+OwoJCQkJCXJlZ3VsYXRvci1kaXNhYmxlLXJhbXAtZGVsYXkgPSA8MHgyOGE+
OwoJCQkJCXJlZ3VsYXRvci1yYW1wLWRlbGF5ID0gPDB4MTg2YTA+OwoJCQkJCXJlZ3VsYXRvci1y
YW1wLWRlbGF5LXNjYWxlID0gPDB4Yzg+OwoJCQkJCWxpbnV4LHBoYW5kbGUgPSA8MHg5OD47CgkJ
CQkJcGhhbmRsZSA9IDwweDk4PjsKCQkJCX07CgoJCQkJbGRvMyB7CgkJCQkJcmVndWxhdG9yLW5h
bWUgPSAidmRkLWxkbzMiOwoJCQkJCXJlZ3VsYXRvci1taW4tbWljcm92b2x0ID0gPDB4MmFiOTgw
PjsKCQkJCQlyZWd1bGF0b3ItbWF4LW1pY3Jvdm9sdCA9IDwweDJhYjk4MD47CgkJCQkJbWF4aW0s
YWN0aXZlLWZwcy1zb3VyY2UgPSA8MHgzPjsKCQkJCQltYXhpbSxhY3RpdmUtZnBzLXBvd2VyLXVw
LXNsb3QgPSA8MHgwPjsKCQkJCQltYXhpbSxhY3RpdmUtZnBzLXBvd2VyLWRvd24tc2xvdCA9IDww
eDc+OwoJCQkJCXJlZ3VsYXRvci1lbmFibGUtcmFtcC1kZWxheSA9IDwweDMyPjsKCQkJCQlyZWd1
bGF0b3ItZGlzYWJsZS1yYW1wLWRlbGF5ID0gPDB4NDU2PjsKCQkJCQlyZWd1bGF0b3ItcmFtcC1k
ZWxheSA9IDwweDE4NmEwPjsKCQkJCQlyZWd1bGF0b3ItcmFtcC1kZWxheS1zY2FsZSA9IDwweGM4
PjsKCQkJCQlzdGF0dXMgPSAiZGlzYWJsZWQiOwoJCQkJCWxpbnV4LHBoYW5kbGUgPSA8MHgxMWM+
OwoJCQkJCXBoYW5kbGUgPSA8MHgxMWM+OwoJCQkJfTsKCgkJCQlsZG80IHsKCQkJCQlyZWd1bGF0
b3ItbmFtZSA9ICJ2ZGQtcnRjIjsKCQkJCQlyZWd1bGF0b3ItbWluLW1pY3Jvdm9sdCA9IDwweGNm
ODUwPjsKCQkJCQlyZWd1bGF0b3ItbWF4LW1pY3Jvdm9sdCA9IDwweDEwYzhlMD47CgkJCQkJcmVn
dWxhdG9yLWFsd2F5cy1vbjsKCQkJCQlyZWd1bGF0b3ItYm9vdC1vbjsKCQkJCQltYXhpbSxhY3Rp
dmUtZnBzLXNvdXJjZSA9IDwweDA+OwoJCQkJCW1heGltLGFjdGl2ZS1mcHMtcG93ZXItdXAtc2xv
dCA9IDwweDE+OwoJCQkJCW1heGltLGFjdGl2ZS1mcHMtcG93ZXItZG93bi1zbG90ID0gPDB4Nj47
CgkJCQkJcmVndWxhdG9yLWVuYWJsZS1yYW1wLWRlbGF5ID0gPDB4MTY+OwoJCQkJCXJlZ3VsYXRv
ci1kaXNhYmxlLXJhbXAtZGVsYXkgPSA8MHgyNjI+OwoJCQkJCXJlZ3VsYXRvci1yYW1wLWRlbGF5
ID0gPDB4MTg2YTA+OwoJCQkJCXJlZ3VsYXRvci1yYW1wLWRlbGF5LXNjYWxlID0gPDB4Yzg+OwoJ
CQkJCXJlZ3VsYXRvci1kaXNhYmxlLWFjdGl2ZS1kaXNjaGFyZ2U7CgkJCQkJbGludXgscGhhbmRs
ZSA9IDwweDExZD47CgkJCQkJcGhhbmRsZSA9IDwweDExZD47CgkJCQl9OwoKCQkJCWxkbzUgewoJ
CQkJCXJlZ3VsYXRvci1uYW1lID0gInZkZC1sZG81IjsKCQkJCQlyZWd1bGF0b3ItbWluLW1pY3Jv
dm9sdCA9IDwweDMyNWFhMD47CgkJCQkJcmVndWxhdG9yLW1heC1taWNyb3ZvbHQgPSA8MHgzMjVh
YTA+OwoJCQkJCW1heGltLGFjdGl2ZS1mcHMtc291cmNlID0gPDB4Mz47CgkJCQkJbWF4aW0sYWN0
aXZlLWZwcy1wb3dlci11cC1zbG90ID0gPDB4MD47CgkJCQkJbWF4aW0sYWN0aXZlLWZwcy1wb3dl
ci1kb3duLXNsb3QgPSA8MHg3PjsKCQkJCQlyZWd1bGF0b3ItZW5hYmxlLXJhbXAtZGVsYXkgPSA8
MHgzZT47CgkJCQkJcmVndWxhdG9yLWRpc2FibGUtcmFtcC1kZWxheSA9IDwweDI4MD47CgkJCQkJ
cmVndWxhdG9yLXJhbXAtZGVsYXkgPSA8MHgxODZhMD47CgkJCQkJcmVndWxhdG9yLXJhbXAtZGVs
YXktc2NhbGUgPSA8MHhjOD47CgkJCQkJc3RhdHVzID0gImRpc2FibGVkIjsKCQkJCQlsaW51eCxw
aGFuZGxlID0gPDB4NTc+OwoJCQkJCXBoYW5kbGUgPSA8MHg1Nz47CgkJCQl9OwoKCQkJCWxkbzYg
ewoJCQkJCXJlZ3VsYXRvci1uYW1lID0gInZkZGlvLXNkbW1jMy1hcCI7CgkJCQkJcmVndWxhdG9y
LW1pbi1taWNyb3ZvbHQgPSA8MHgxYjc3NDA+OwoJCQkJCXJlZ3VsYXRvci1tYXgtbWljcm92b2x0
ID0gPDB4MzI1YWEwPjsKCQkJCQlyZWd1bGF0b3ItYm9vdC1vbjsKCQkJCQltYXhpbSxhY3RpdmUt
ZnBzLXNvdXJjZSA9IDwweDM+OwoJCQkJCW1heGltLGFjdGl2ZS1mcHMtcG93ZXItdXAtc2xvdCA9
IDwweDA+OwoJCQkJCW1heGltLGFjdGl2ZS1mcHMtcG93ZXItZG93bi1zbG90ID0gPDB4Nz47CgkJ
CQkJcmVndWxhdG9yLWVuYWJsZS1yYW1wLWRlbGF5ID0gPDB4MjQ+OwoJCQkJCXJlZ3VsYXRvci1k
aXNhYmxlLXJhbXAtZGVsYXkgPSA8MHgyYTI+OwoJCQkJCXJlZ3VsYXRvci1yYW1wLWRlbGF5ID0g
PDB4MTg2YTA+OwoJCQkJCXJlZ3VsYXRvci1yYW1wLWRlbGF5LXNjYWxlID0gPDB4Yzg+OwoJCQkJ
CWxpbnV4LHBoYW5kbGUgPSA8MHg1OD47CgkJCQkJcGhhbmRsZSA9IDwweDU4PjsKCQkJCX07CgoJ
CQkJbGRvNyB7CgkJCQkJcmVndWxhdG9yLW5hbWUgPSAiYXZkZC0xdjA1LXBsbCI7CgkJCQkJcmVn
dWxhdG9yLW1pbi1taWNyb3ZvbHQgPSA8MHgxMDA1OTA+OwoJCQkJCXJlZ3VsYXRvci1tYXgtbWlj
cm92b2x0ID0gPDB4MTAwNTkwPjsKCQkJCQlyZWd1bGF0b3ItYWx3YXlzLW9uOwoJCQkJCXJlZ3Vs
YXRvci1ib290LW9uOwoJCQkJCW1heGltLGFjdGl2ZS1mcHMtc291cmNlID0gPDB4MT47CgkJCQkJ
bWF4aW0sYWN0aXZlLWZwcy1wb3dlci11cC1zbG90ID0gPDB4Mz47CgkJCQkJbWF4aW0sYWN0aXZl
LWZwcy1wb3dlci1kb3duLXNsb3QgPSA8MHg0PjsKCQkJCQlyZWd1bGF0b3ItZW5hYmxlLXJhbXAt
ZGVsYXkgPSA8MHgxOD47CgkJCQkJcmVndWxhdG9yLWRpc2FibGUtcmFtcC1kZWxheSA9IDwweGFk
MD47CgkJCQkJcmVndWxhdG9yLXJhbXAtZGVsYXkgPSA8MHgxODZhMD47CgkJCQkJcmVndWxhdG9y
LXJhbXAtZGVsYXktc2NhbGUgPSA8MHhjOD47CgkJCQkJbGludXgscGhhbmRsZSA9IDwweDNlPjsK
CQkJCQlwaGFuZGxlID0gPDB4M2U+OwoJCQkJfTsKCgkJCQlsZG84IHsKCQkJCQlyZWd1bGF0b3It
bmFtZSA9ICJhdmRkLWlvLWhkbWktZHAiOwoJCQkJCXJlZ3VsYXRvci1taW4tbWljcm92b2x0ID0g
PDB4MTAwNTkwPjsKCQkJCQlyZWd1bGF0b3ItbWF4LW1pY3Jvdm9sdCA9IDwweDEwMDU5MD47CgkJ
CQkJcmVndWxhdG9yLWJvb3Qtb247CgkJCQkJcmVndWxhdG9yLWFsd2F5cy1vbjsKCQkJCQltYXhp
bSxhY3RpdmUtZnBzLXNvdXJjZSA9IDwweDE+OwoJCQkJCW1heGltLGFjdGl2ZS1mcHMtcG93ZXIt
dXAtc2xvdCA9IDwweDY+OwoJCQkJCW1heGltLGFjdGl2ZS1mcHMtcG93ZXItZG93bi1zbG90ID0g
PDB4MT47CgkJCQkJcmVndWxhdG9yLWVuYWJsZS1yYW1wLWRlbGF5ID0gPDB4MTY+OwoJCQkJCXJl
Z3VsYXRvci1kaXNhYmxlLXJhbXAtZGVsYXkgPSA8MHg0ODg+OwoJCQkJCXJlZ3VsYXRvci1yYW1w
LWRlbGF5ID0gPDB4MTg2YTA+OwoJCQkJCXJlZ3VsYXRvci1yYW1wLWRlbGF5LXNjYWxlID0gPDB4
Yzg+OwoJCQkJCWxpbnV4LHBoYW5kbGUgPSA8MHg0MD47CgkJCQkJcGhhbmRsZSA9IDwweDQwPjsK
CQkJCX07CgkJCX07CgoJCQlsb3ctYmF0dGVyeS1tb25pdG9yIHsKCQkJCW1heGltLGxvdy1iYXR0
ZXJ5LXNodXRkb3duLWVuYWJsZTsKCQkJfTsKCQl9OwoJfTsKCglpMmNANzAwMGQxMDAgewoJCSNh
ZGRyZXNzLWNlbGxzID0gPDB4MT47CgkJI3NpemUtY2VsbHMgPSA8MHgwPjsKCQljb21wYXRpYmxl
ID0gIm52aWRpYSx0ZWdyYTIxMC1pMmMiOwoJCXJlZyA9IDwweDAgMHg3MDAwZDEwMCAweDAgMHgx
MDA+OwoJCWludGVycnVwdHMgPSA8MHgwIDB4M2YgMHg0PjsKCQlpb21tdXMgPSA8MHgyYiAweGU+
OwoJCXN0YXR1cyA9ICJva2F5IjsKCQljbG9jay1mcmVxdWVuY3kgPSA8MHg2MWE4MD47CgkJZG1h
cyA9IDwweDRjIDB4MWUgMHg0YyAweDFlPjsKCQlkbWEtbmFtZXMgPSAicngiLCAidHgiOwoJCWNs
b2NrcyA9IDwweDIxIDB4YTYgMHgyMSAweGYzPjsKCQljbG9jay1uYW1lcyA9ICJkaXYtY2xrIiwg
InBhcmVudCI7CgkJcmVzZXRzID0gPDB4MjEgMHhhNj47CgkJcmVzZXQtbmFtZXMgPSAiaTJjIjsK
CQlsaW51eCxwaGFuZGxlID0gPDB4MTFlPjsKCQlwaGFuZGxlID0gPDB4MTFlPjsKCX07CgoJc2Ro
Y2lANzAwYjA2MDAgewoJCWNvbXBhdGlibGUgPSAibnZpZGlhLHRlZ3JhMjEwLXNkaGNpIjsKCQly
ZWcgPSA8MHgwIDB4NzAwYjA2MDAgMHgwIDB4MjAwPjsKCQlpbnRlcnJ1cHRzID0gPDB4MCAweDFm
IDB4ND47CgkJYXV4LWRldmljZS1uYW1lID0gInNkaGNpLXRlZ3JhLjMiOwoJCWlvbW11cyA9IDww
eDJiIDB4MWM+OwoJCW52aWRpYSxydW50aW1lLXBtLXR5cGUgPSA8MHgxPjsKCQljbG9ja3MgPSA8
MHgyMSAweGYgMHgyMSAweGYzIDB4MjEgMHgxMzQgMHgyMSAweGMxPjsKCQljbG9jay1uYW1lcyA9
ICJzZG1tYyIsICJwbGxfcCIsICJwbGxfYzRfb3V0MCIsICJzZG1tY19sZWdhY3lfdG0iOwoJCXJl
c2V0cyA9IDwweDIxIDB4Zj47CgkJcmVzZXQtbmFtZXMgPSAic2RoY2kiOwoJCXN0YXR1cyA9ICJk
aXNhYmxlZCI7CgkJdGFwLWRlbGF5ID0gPDB4ND47CgkJdHJpbS1kZWxheSA9IDwweDg+OwoJCW52
aWRpYSxpcy1kZHItdGFwLWRlbGF5OwoJCW52aWRpYSxkZHItdGFwLWRlbGF5ID0gPDB4MD47CgkJ
bW1jLW9jci1tYXNrID0gPDB4MD47CgkJbWF4LWNsay1saW1pdCA9IDwweGJlYmMyMDA+OwoJCWJ1
cy13aWR0aCA9IDwweDg+OwoJCWJ1aWx0LWluOwoJCWNhbGliLTN2My1vZmZzZXRzID0gPDB4NTA1
PjsKCQljYWxpYi0xdjgtb2Zmc2V0cyA9IDwweDUwNT47CgkJY29tcGFkLXZyZWYtM3YzID0gPDB4
Nz47CgkJY29tcGFkLXZyZWYtMXY4ID0gPDB4Nz47CgkJbnZpZGlhLGVuLWlvLXRyaW0tdm9sdDsK
CQludmlkaWEsaXMtZW1tYzsKCQludmlkaWEsZW5hYmxlLWNxOwoJCWlnbm9yZS1wbS1ub3RpZnk7
CgkJa2VlcC1wb3dlci1pbi1zdXNwZW5kOwoJCW5vbi1yZW1vdmFibGU7CgkJY2FwLW1tYy1oaWdo
c3BlZWQ7CgkJY2FwLXNkLWhpZ2hzcGVlZDsKCQltbWMtZGRyLTFfOHY7CgkJbW1jLWhzMjAwLTFf
OHY7CgkJbW1jLWhzNDAwLTFfOHY7CgkJbnZpZGlhLGVuYWJsZS1zdHJvYmUtbW9kZTsKCQltbWMt
aHM0MDAtZW5oYW5jZWQtc3Ryb2JlOwoJCW52aWRpYSxtaW4tdGFwLWRlbGF5ID0gPDB4NmE+OwoJ
CW52aWRpYSxtYXgtdGFwLWRlbGF5ID0gPDB4Yjk+OwoJCXBsbF9zb3VyY2UgPSAicGxsX3AiLCAi
cGxsX2M0X291dDIiOwoJCXZxbW1jLXN1cHBseSA9IDwweDM2PjsKCQl2bW1jLXN1cHBseSA9IDww
eDQ3PjsKCQl1aHMtbWFzayA9IDwweDA+OwoJCXBvd2VyLW9mZi1yYWlsOwoJCW5vLXNkaW87CgkJ
bm8tc2Q7CgkJbGludXgscGhhbmRsZSA9IDwweGIwPjsKCQlwaGFuZGxlID0gPDB4YjA+OwoKCQlw
cm9kLXNldHRpbmdzIHsKCQkJI3Byb2QtY2VsbHMgPSA8MHgzPjsKCgkJCXByb2RfY19kcyB7CgkJ
CQlwcm9kID0gPDB4MTAwIDB4ZmYwMDAwIDB4NDAwMDAgMHgxZTAgMHhmIDB4NyAweDFlNCAweDMw
MDc3ZjdmIDB4MzAwMDA1MDU+OwoJCQl9OwoKCQkJcHJvZF9jX2hzIHsKCQkJCXByb2QgPSA8MHgx
MDAgMHhmZjAwMDAgMHg0MDAwMCAweDFlMCAweGYgMHg3IDB4MWU0IDB4MzAwNzdmN2YgMHgzMDAw
MDUwNT47CgkJCX07CgoJCQlwcm9kX2NfZGRyNTIgewoJCQkJcHJvZCA9IDwweDEwMCAweDFmZmYw
MDAwIDB4MCAweDFlMCAweGYgMHg3IDB4MWU0IDB4MzAwNzdmN2YgMHgzMDAwMDUwNT47CgkJCX07
CgoJCQlwcm9kX2NfaHMyMDAgewoJCQkJcHJvZCA9IDwweDEwMCAweGZmMDAwMCAweDQwMDAwIDB4
MWMwIDB4ZTAwMCAweDQwMDAgMHgxZTAgMHhmIDB4NyAweDFlNCAweDMwMDc3ZjdmIDB4MzAwMDA1
MDU+OwoJCQl9OwoKCQkJcHJvZF9jX2hzNDAwIHsKCQkJCXByb2QgPSA8MHgxMDAgMHhmZjAwMDAg
MHg0MDAwMCAweDFjMCAweGUwMDAgMHg0MDAwIDB4MWUwIDB4ZiAweDcgMHgxZTQgMHgzMDA3N2Y3
ZiAweDMwMDAwNTA1PjsKCQkJfTsKCgkJCXByb2RfY19oczUzMyB7CgkJCQlwcm9kID0gPDB4MTAw
IDB4ZmYwMDAwIDB4NDAwMDAgMHgxYzAgMHhlMDAwIDB4MjAwMCAweDFlMCAweGYgMHg3IDB4MWU0
IDB4MzAwMDA1MDUgMHgzMDAwMDUwNT47CgkJCX07CgoJCQlwcm9kIHsKCQkJCXByb2QgPSA8MHgx
MDAgMHgxZmZmMDAwZSAweDgwOTAwMjggMHgxMGMgMHgzZjAwIDB4MjgwMCAweDFjMCAweDgwMDFm
YzAgMHg4MDAwMDQwIDB4MWM0IDB4NzcgMHgwIDB4MTIwIDB4MjAwMDEgMHgxIDB4MTI4IDB4NDMw
MDAwMDAgMHgwIDB4MWYwIDB4ODAwMDAgMHg4MDAwMD47CgkJCX07CgkJfTsKCX07CgoJc2RoY2lA
NzAwYjA0MDAgewoJCWNvbXBhdGlibGUgPSAibnZpZGlhLHRlZ3JhMjEwLXNkaGNpIjsKCQlyZWcg
PSA8MHgwIDB4NzAwYjA0MDAgMHgwIDB4MjAwPjsKCQlpbnRlcnJ1cHRzID0gPDB4MCAweDEzIDB4
ND47CgkJYXV4LWRldmljZS1uYW1lID0gInNkaGNpLXRlZ3JhLjIiOwoJCWlvbW11cyA9IDwweDJi
IDB4MWI+OwoJCW52aWRpYSxydW50aW1lLXBtLXR5cGUgPSA8MHgwPjsKCQljbG9ja3MgPSA8MHgy
MSAweDQ1IDB4MjEgMHhmMyAweDIxIDB4MTM2IDB4MjEgMHhjMT47CgkJY2xvY2stbmFtZXMgPSAi
c2RtbWMiLCAicGxsX3AiLCAicGxsX2M0X291dDIiLCAic2RtbWNfbGVnYWN5X3RtIjsKCQlyZXNl
dHMgPSA8MHgyMSAweDQ1PjsKCQlyZXNldC1uYW1lcyA9ICJzZGhjaSI7CgkJc3RhdHVzID0gImRp
c2FibGVkIjsKCQl0YXAtZGVsYXkgPSA8MHgzPjsKCQl0cmltLWRlbGF5ID0gPDB4Mz47CgkJbW1j
LW9jci1tYXNrID0gPDB4Mz47CgkJbWF4LWNsay1saW1pdCA9IDwweDYxYTgwPjsKCQlkZHItY2xr
LWxpbWl0ID0gPDB4MmRjNmMwMD47CgkJYnVzLXdpZHRoID0gPDB4ND47CgkJY2FsaWItM3YzLW9m
ZnNldHMgPSA8MHg3ZD47CgkJY2FsaWItMXY4LW9mZnNldHMgPSA8MHg3YjdiPjsKCQljb21wYWQt
dnJlZi0zdjMgPSA8MHg3PjsKCQljb21wYWQtdnJlZi0xdjggPSA8MHg3PjsKCQlwbGxfc291cmNl
ID0gInBsbF9wIiwgInBsbF9jNF9vdXQyIjsKCQlpZ25vcmUtcG0tbm90aWZ5OwoJCWNhcC1tbWMt
aGlnaHNwZWVkOwoJCWNhcC1zZC1oaWdoc3BlZWQ7CgkJbnZpZGlhLGVuLWlvLXRyaW0tdm9sdDsK
CQludmlkaWEsZW4tcGVyaW9kaWMtY2FsaWI7CgkJY2QtaW52ZXJ0ZWQ7CgkJd3AtaW52ZXJ0ZWQ7
CgkJcHdyZGV0LXN1cHBvcnQ7CgkJbnZpZGlhLG1pbi10YXAtZGVsYXkgPSA8MHg2YT47CgkJbnZp
ZGlhLG1heC10YXAtZGVsYXkgPSA8MHhiOT47CgkJcGluY3RybC1uYW1lcyA9ICJzZG1tY19zY2ht
aXR0X2VuYWJsZSIsICJzZG1tY19zY2htaXR0X2Rpc2FibGUiLCAic2RtbWNfY2xrX3NjaG1pdHRf
ZW5hYmxlIiwgInNkbW1jX2Nsa19zY2htaXR0X2Rpc2FibGUiLCAic2RtbWNfZHJ2X2NvZGUiLCAi
c2RtbWNfZGVmYXVsdF9kcnZfY29kZSIsICJzZG1tY19lXzMzdl9lbmFibGUiLCAic2RtbWNfZV8z
M3ZfZGlzYWJsZSI7CgkJcGluY3RybC0wID0gPDB4ODg+OwoJCXBpbmN0cmwtMSA9IDwweDg5PjsK
CQlwaW5jdHJsLTIgPSA8MHg4YT47CgkJcGluY3RybC0zID0gPDB4OGI+OwoJCXBpbmN0cmwtNCA9
IDwweDhjPjsKCQlwaW5jdHJsLTUgPSA8MHg4ZD47CgkJcGluY3RybC02ID0gPDB4OGU+OwoJCXBp
bmN0cmwtNyA9IDwweDhmPjsKCQl2cW1tYy1zdXBwbHkgPSA8MHgzNj47CgkJdm1tYy1zdXBwbHkg
PSA8MHg0Nz47CgkJbW1jLWRkci0xXzh2OwoJCXVocy1tYXNrID0gPDB4MD47CgkJbGludXgscGhh
bmRsZSA9IDwweGI4PjsKCQlwaGFuZGxlID0gPDB4Yjg+OwoKCQlwcm9kLXNldHRpbmdzIHsKCQkJ
I3Byb2QtY2VsbHMgPSA8MHgzPjsKCgkJCXByb2RfY19kcyB7CgkJCQlwcm9kID0gPDB4MTAwIDB4
ZmYwMDAwIDB4MTAwMDAgMHgxZTAgMHhmIDB4NyAweDFlNCAweDMwMDc3ZjdmIDB4MzAwMDAwN2Q+
OwoJCQl9OwoKCQkJcHJvZF9jX2hzIHsKCQkJCXByb2QgPSA8MHgxMDAgMHhmZjAwMDAgMHgxMDAw
MCAweDFlMCAweGYgMHg3IDB4MWU0IDB4MzAwNzdmN2YgMHgzMDAwMDA3ZD47CgkJCX07CgoJCQlw
cm9kX2Nfc2RyMTIgewoJCQkJcHJvZCA9IDwweDEwMCAweGZmMDAwMCAweDEwMDAwIDB4MWUwIDB4
ZiAweDcgMHgxZTQgMHgzMDA3N2Y3ZiAweDMwMDA3YjdiPjsKCQkJfTsKCgkJCXByb2RfY19zZHIy
NSB7CgkJCQlwcm9kID0gPDB4MTAwIDB4ZmYwMDAwIDB4MTAwMDAgMHgxZTAgMHhmIDB4NyAweDFl
NCAweDMwMDc3ZjdmIDB4MzAwMDdiN2I+OwoJCQl9OwoKCQkJcHJvZF9jX3NkcjUwIHsKCQkJCXBy
b2QgPSA8MHgxMDAgMHhmZjAwMDAgMHgxMDAwMCAweDFjMCAweGUwMDAgMHg4MDAwIDB4MWUwIDB4
ZiAweDcgMHgxZTQgMHgzMDA3N2Y3ZiAweDMwMDA3YjdiPjsKCQkJfTsKCgkJCXByb2RfY19zZHIx
MDQgewoJCQkJcHJvZCA9IDwweDEwMCAweGZmMDAwMCAweDEwMDAwIDB4MWMwIDB4ZTAwMCAweDQw
MDAgMHgxZTAgMHhmIDB4NyAweDFlNCAweDMwMDc3ZjdmIDB4MzAwMDdiN2I+OwoJCQl9OwoKCQkJ
cHJvZF9jX2RkcjUyIHsKCQkJCXByb2QgPSA8MHgxMDAgMHgxZmZmMDAwMCAweDAgMHgxZTAgMHhm
IDB4NyAweDFlNCAweDMwMDc3ZjdmIDB4MzAwMDdiN2I+OwoJCQl9OwoKCQkJcHJvZCB7CgkJCQlw
cm9kID0gPDB4MTAwIDB4MWZmZjAwMGUgMHgzMDkwMDI4IDB4MWMwIDB4ODAwMWZjMCAweDgwMDAw
NDAgMHgxYzQgMHg3NyAweDAgMHgxMjAgMHgyMDAwMSAweDEgMHgxMjggMHg0MzAwMDAwMCAweDAg
MHgxZjAgMHg4MDAwMCAweDgwMDAwPjsKCQkJfTsKCQl9OwoJfTsKCglzZGhjaUA3MDBiMDIwMCB7
CgkJY29tcGF0aWJsZSA9ICJudmlkaWEsdGVncmEyMTAtc2RoY2kiOwoJCXJlZyA9IDwweDAgMHg3
MDBiMDIwMCAweDAgMHgyMDA+OwoJCWludGVycnVwdHMgPSA8MHgwIDB4ZiAweDQ+OwoJCWF1eC1k
ZXZpY2UtbmFtZSA9ICJzZGhjaS10ZWdyYS4xIjsKCQludmlkaWEscnVudGltZS1wbS10eXBlID0g
PDB4MT47CgkJY2xvY2tzID0gPDB4MjEgMHg5IDB4MjEgMHhmMyAweDIxIDB4YzE+OwoJCWNsb2Nr
LW5hbWVzID0gInNkbW1jIiwgInBsbF9wIiwgInNkbW1jX2xlZ2FjeV90bSI7CgkJcmVzZXRzID0g
PDB4MjEgMHg5PjsKCQlyZXNldC1uYW1lcyA9ICJzZGhjaSI7CgkJc3RhdHVzID0gImRpc2FibGVk
IjsKCQl0YXAtZGVsYXkgPSA8MHg0PjsKCQl0cmltLWRlbGF5ID0gPDB4OD47CgkJbW1jLW9jci1t
YXNrID0gPDB4MD47CgkJbWF4LWNsay1saW1pdCA9IDwweGMyOGNiMDA+OwoJCWRkci1jbGstbGlt
aXQgPSA8MHgyNzE5YzQwPjsKCQlidXMtd2lkdGggPSA8MHg0PjsKCQljYWxpYi0zdjMtb2Zmc2V0
cyA9IDwweDUwNT47CgkJY2FsaWItMXY4LW9mZnNldHMgPSA8MHg1MDU+OwoJCWNvbXBhZC12cmVm
LTN2MyA9IDwweDc+OwoJCWNvbXBhZC12cmVmLTF2OCA9IDwweDc+OwoJCWRlZmF1bHQtZHJpdmUt
dHlwZSA9IDwweDE+OwoJCW52aWRpYSxtaW4tdGFwLWRlbGF5ID0gPDB4NmE+OwoJCW52aWRpYSxt
YXgtdGFwLWRlbGF5ID0gPDB4Yjk+OwoJCXBsbF9zb3VyY2UgPSAicGxsX3AiOwoJCW5vbi1yZW1v
dmFibGU7CgkJY2FwLW1tYy1oaWdoc3BlZWQ7CgkJY2FwLXNkLWhpZ2hzcGVlZDsKCQlrZWVwLXBv
d2VyLWluLXN1c3BlbmQ7CgkJaWdub3JlLXBtLW5vdGlmeTsKCQludmlkaWEsZW4taW8tdHJpbS12
b2x0OwoJCXZxbW1jLXN1cHBseSA9IDwweDM2PjsKCQl2bW1jLXN1cHBseSA9IDwweDQ3PjsKCQl1
aHMtbWFzayA9IDwweDg+OwoJCXBvd2VyLW9mZi1yYWlsOwoJCWZvcmNlLW5vbi1yZW1vdmFibGUt
cmVzY2FuOwoJCWxpbnV4LHBoYW5kbGUgPSA8MHgxMWY+OwoJCXBoYW5kbGUgPSA8MHgxMWY+OwoK
CQlwcm9kLXNldHRpbmdzIHsKCQkJI3Byb2QtY2VsbHMgPSA8MHgzPjsKCgkJCXByb2RfY19kcyB7
CgkJCQlwcm9kID0gPDB4MTAwIDB4ZmYwMDAwIDB4NDAwMDAgMHgxZTAgMHhmIDB4NyAweDFlNCAw
eDMwMDc3ZjdmIDB4MzAwMDA1MDU+OwoJCQl9OwoKCQkJcHJvZF9jX2hzIHsKCQkJCXByb2QgPSA8
MHgxMDAgMHhmZjAwMDAgMHg0MDAwMCAweDFlMCAweGYgMHg3IDB4MWU0IDB4MzAwNzdmN2YgMHgz
MDAwMDUwNT47CgkJCX07CgoJCQlwcm9kX2Nfc2RyMTIgewoJCQkJcHJvZCA9IDwweDEwMCAweGZm
MDAwMCAweDQwMDAwIDB4MWUwIDB4ZiAweDcgMHgxZTQgMHgzMDA3N2Y3ZiAweDMwMDAwNTA1PjsK
CQkJfTsKCgkJCXByb2RfY19zZHIyNSB7CgkJCQlwcm9kID0gPDB4MTAwIDB4ZmYwMDAwIDB4NDAw
MDAgMHgxZTAgMHhmIDB4NyAweDFlNCAweDMwMDc3ZjdmIDB4MzAwMDA1MDU+OwoJCQl9OwoKCQkJ
cHJvZF9jX3NkcjUwIHsKCQkJCXByb2QgPSA8MHgxMDAgMHhmZjAwMDAgMHg0MDAwMCAweDFjMCAw
eGUwMDAgMHg4MDAwIDB4MWUwIDB4ZiAweDcgMHgxZTQgMHgzMDA3N2Y3ZiAweDMwMDAwNTA1PjsK
CQkJfTsKCgkJCXByb2RfY19zZHIxMDQgewoJCQkJcHJvZCA9IDwweDEwMCAweGZmMDAwMCAweDQw
MDAwIDB4MWMwIDB4ZTAwMCAweDQwMDAgMHgxZTAgMHhmIDB4NyAweDFlNCAweDMwMDc3ZjdmIDB4
MzAwMDA1MDU+OwoJCQl9OwoKCQkJcHJvZF9jX2RkcjUyIHsKCQkJCXByb2QgPSA8MHgxMDAgMHgx
ZmZmMDAwMCAweDQwMDAwIDB4MWUwIDB4ZiAweDcgMHgxZTQgMHgzMDA3N2Y3ZiAweDMwMDAwNTA1
PjsKCQkJfTsKCgkJCXByb2RfY19oczIwMCB7CgkJCQlwcm9kID0gPDB4MTAwIDB4ZmYwMDAwIDB4
NDAwMDAgMHgxYzAgMHhlMDAwIDB4NDAwMCAweDFlMCAweGYgMHg3IDB4MWU0IDB4MzAwNzdmN2Yg
MHgzMDAwMDUwNT47CgkJCX07CgoJCQlwcm9kX2NfaHM0MDAgewoJCQkJcHJvZCA9IDwweDEwMCAw
eGZmMDAwMCAweDQwMDAwIDB4MWMwIDB4ZTAwMCAweDQwMDAgMHgxZTAgMHhmIDB4NyAweDFlNCAw
eDMwMDc3ZjdmIDB4MzAwMDA1MDU+OwoJCQl9OwoKCQkJcHJvZF9jX2hzNTMzIHsKCQkJCXByb2Qg
PSA8MHgxMDAgMHhmZjAwMDAgMHg0MDAwMCAweDFjMCAweGUwMDAgMHgyMDAwIDB4MWUwIDB4ZiAw
eDcgMHgxZTQgMHgzMDAwMDUwNSAweDMwMDAwNTA1PjsKCQkJfTsKCgkJCXByb2QgewoJCQkJcHJv
ZCA9IDwweDEwMCAweDFmZmYwMDBlIDB4ODA5MDAyOCAweDFjMCAweDgwMDFmYzAgMHg4MDAwMDQw
IDB4MWM0IDB4NzcgMHgwIDB4MTIwIDB4MjAwMDEgMHgxIDB4MTI4IDB4NDMwMDAwMDAgMHgwIDB4
MWYwIDB4ODAwMDAgMHg4MDAwMD47CgkJCX07CgkJfTsKCX07CgoJc2RoY2lANzAwYjAwMDAgewoJ
CWNvbXBhdGlibGUgPSAibnZpZGlhLHRlZ3JhMjEwLXNkaGNpIjsKCQlyZWcgPSA8MHgwIDB4NzAw
YjAwMDAgMHgwIDB4MjAwPjsKCQlpbnRlcnJ1cHRzID0gPDB4MCAweGUgMHg0PjsKCQlhdXgtZGV2
aWNlLW5hbWUgPSAic2RoY2ktdGVncmEuMCI7CgkJaW9tbXVzID0gPDB4MmIgMHgxOT47CgkJbnZp
ZGlhLHJ1bnRpbWUtcG0tdHlwZSA9IDwweDE+OwoJCWNsb2NrcyA9IDwweDIxIDB4ZSAweDIxIDB4
ZjMgMHgyMSAweGMxPjsKCQljbG9jay1uYW1lcyA9ICJzZG1tYyIsICJwbGxfcCIsICJzZG1tY19s
ZWdhY3lfdG0iOwoJCXJlc2V0cyA9IDwweDIxIDB4ZT47CgkJcmVzZXQtbmFtZXMgPSAic2RoY2ki
OwoJCXN0YXR1cyA9ICJva2F5IjsKCQl0YXAtZGVsYXkgPSA8MHg0PjsKCQl0cmltLWRlbGF5ID0g
PDB4Mj47CgkJbWF4LWNsay1saW1pdCA9IDwweGMyOGNiMDA+OwoJCWRkci1jbGstbGltaXQgPSA8
MHgyZGM2YzAwPjsKCQlidXMtd2lkdGggPSA8MHg0PjsKCQltbWMtb2NyLW1hc2sgPSA8MHgzPjsK
CQljYWxpYi0zdjMtb2Zmc2V0cyA9IDwweDdkPjsKCQljYWxpYi0xdjgtb2Zmc2V0cyA9IDwweDdi
N2I+OwoJCWNvbXBhZC12cmVmLTN2MyA9IDwweDc+OwoJCWNvbXBhZC12cmVmLTF2OCA9IDwweDc+
OwoJCWNkLWdwaW9zID0gPDB4NTYgMHhjOSAweDA+OwoJCXBsbF9zb3VyY2UgPSAicGxsX3AiOwoJ
CWNhcC1tbWMtaGlnaHNwZWVkOwoJCWNhcC1zZC1oaWdoc3BlZWQ7CgkJbnZpZGlhLGVuLWlvLXRy
aW0tdm9sdDsKCQludmlkaWEsZW4tcGVyaW9kaWMtY2FsaWI7CgkJa2VlcC1wb3dlci1pbi1zdXNw
ZW5kOwoJCWlnbm9yZS1wbS1ub3RpZnk7CgkJY2QtaW52ZXJ0ZWQ7CgkJd3AtaW52ZXJ0ZWQ7CgkJ
bnZpZGlhLG1pbi10YXAtZGVsYXkgPSA8MHg2YT47CgkJbnZpZGlhLG1heC10YXAtZGVsYXkgPSA8
MHhiOT47CgkJcHdyZGV0LXN1cHBvcnQ7CgkJcGluY3RybC1uYW1lcyA9ICJzZG1tY19zY2htaXR0
X2VuYWJsZSIsICJzZG1tY19zY2htaXR0X2Rpc2FibGUiLCAic2RtbWNfY2xrX3NjaG1pdHRfZW5h
YmxlIiwgInNkbW1jX2Nsa19zY2htaXR0X2Rpc2FibGUiLCAic2RtbWNfZHJ2X2NvZGUiLCAic2Rt
bWNfZGVmYXVsdF9kcnZfY29kZSIsICJzZG1tY19lXzMzdl9lbmFibGUiLCAic2RtbWNfZV8zM3Zf
ZGlzYWJsZSI7CgkJcGluY3RybC0wID0gPDB4OTA+OwoJCXBpbmN0cmwtMSA9IDwweDkxPjsKCQlw
aW5jdHJsLTIgPSA8MHg5Mj47CgkJcGluY3RybC0zID0gPDB4OTM+OwoJCXBpbmN0cmwtNCA9IDww
eDk0PjsKCQlwaW5jdHJsLTUgPSA8MHg5NT47CgkJcGluY3RybC02ID0gPDB4OTY+OwoJCXBpbmN0
cmwtNyA9IDwweDk3PjsKCQl2cW1tYy1zdXBwbHkgPSA8MHg5OD47CgkJdm1tYy1zdXBwbHkgPSA8
MHg5OT47CgkJZGVmYXVsdC1kcnYtdHlwZSA9IDwweDE+OwoJCXNkLXVocy1zZHIxMDQ7CgkJc2Qt
dWhzLXNkcjUwOwoJCXNkLXVocy1zZHIyNTsKCQlzZC11aHMtc2RyMTI7CgkJbW1jLWRkci0xXzh2
OwoJCW1tYy1oczIwMC0xXzh2OwoJCW52aWRpYSxjZC13YWtldXAtY2FwYWJsZTsKCQludmlkaWEs
dXBkYXRlLXBpbmN0cmwtc2V0dGluZ3M7CgkJbnZpZGlhLHBtYy13YWtldXAgPSA8MHgzNyAweDAg
MHgyMyAweDA+OwoJCXVocy1tYXNrID0gPDB4Yz47CgkJbm8tc2RpbzsKCQluby1tbWM7CgkJZGlz
YWJsZS13cDsKCQlsaW51eCxwaGFuZGxlID0gPDB4YjE+OwoJCXBoYW5kbGUgPSA8MHhiMT47CgoJ
CXByb2Qtc2V0dGluZ3MgewoJCQkjcHJvZC1jZWxscyA9IDwweDM+OwoKCQkJcHJvZF9jX2RzIHsK
CQkJCXByb2QgPSA8MHgxMDAgMHhmZjAwMDAgMHg0MDAwMCAweDFlMCAweGYgMHg3IDB4MWU0IDB4
MzAwNzdmN2YgMHgzMDAwMDA3ZD47CgkJCX07CgoJCQlwcm9kX2NfaHMgewoJCQkJcHJvZCA9IDww
eDEwMCAweGZmMDAwMCAweDQwMDAwIDB4MWUwIDB4ZiAweDcgMHgxZTQgMHgzMDA3N2Y3ZiAweDMw
MDAwMDdkPjsKCQkJfTsKCgkJCXByb2RfY19zZHIxMiB7CgkJCQlwcm9kID0gPDB4MTAwIDB4ZmYw
MDAwIDB4NDAwMDAgMHgxZTAgMHhmIDB4NyAweDFlNCAweDMwMDc3ZjdmIDB4MzAwMDdiN2I+OwoJ
CQl9OwoKCQkJcHJvZF9jX3NkcjI1IHsKCQkJCXByb2QgPSA8MHgxMDAgMHhmZjAwMDAgMHg0MDAw
MCAweDFlMCAweGYgMHg3IDB4MWU0IDB4MzAwNzdmN2YgMHgzMDAwN2I3Yj47CgkJCX07CgoJCQlw
cm9kX2Nfc2RyNTAgewoJCQkJcHJvZCA9IDwweDEwMCAweGZmMDAwMCAweDQwMDAwIDB4MWMwIDB4
ZTAwMCAweDgwMDAgMHgxZTAgMHhmIDB4NyAweDFlNCAweDMwMDc3ZjdmIDB4MzAwMDdiN2I+OwoJ
CQl9OwoKCQkJcHJvZF9jX3NkcjEwNCB7CgkJCQlwcm9kID0gPDB4MTAwIDB4ZmYwMDAwIDB4NDAw
MDAgMHgxYzAgMHhlMDAwIDB4NDAwMCAweDFlMCAweGYgMHg3IDB4MWU0IDB4MzAwNzdmN2YgMHgz
MDAwN2I3Yj47CgkJCX07CgoJCQlwcm9kX2NfZGRyNTIgewoJCQkJcHJvZCA9IDwweDEwMCAweDFm
ZmYwMDAwIDB4MCAweDFlMCAweGYgMHg3IDB4MWU0IDB4MzAwNzdmN2YgMHgzMDAwN2I3Yj47CgkJ
CX07CgoJCQlwcm9kIHsKCQkJCXByb2QgPSA8MHgxMDAgMHgxZmZmMDAwZSAweDIwOTAwMjggMHgx
YzAgMHg4MDAxZmMwIDB4ODAwMDA0MCAweDFjNCAweDc3IDB4MCAweDEyMCAweDIwMDAxIDB4MSAw
eDEyOCAweDQzMDAwMDAwIDB4MCAweDFmMCAweDgwMDAwIDB4ODAwMDA+OwoJCQl9OwoJCX07Cgl9
OwoKCWVmdXNlQDcwMDBmODAwIHsKCQljb21wYXRpYmxlID0gIm52aWRpYSx0ZWdyYTIxMC1lZnVz
ZSI7CgkJcmVnID0gPDB4MCAweDcwMDBmODAwIDB4MCAweDQwMD47CgkJY2xvY2tzID0gPDB4MjEg
MHhlNj47CgkJY2xvY2stbmFtZXMgPSAiZnVzZSI7CgkJbnZpZGlhLGNsb2NrLWFsd2F5cy1vbjsK
CQlzdGF0dXMgPSAib2theSI7CgkJdnBwX2Z1c2Utc3VwcGx5ID0gPDB4MzY+OwoKCQllZnVzZS1i
dXJuIHsKCQkJY29tcGF0aWJsZSA9ICJudmlkaWEsdGVncmEyMTAtZWZ1c2UtYnVybiI7CgkJCWNs
b2NrcyA9IDwweDIxIDB4ZTk+OwoJCQljbG9jay1uYW1lcyA9ICJjbGtfbSI7CgkJCXN0YXR1cyA9
ICJva2F5IjsKCQl9OwoJfTsKCglrZnVzZUA3MDAwZmMwMCB7CgkJY29tcGF0aWJsZSA9ICJudmlk
aWEsdGVncmEyMTAta2Z1c2UiOwoJCXJlZyA9IDwweDAgMHg3MDAwZmMwMCAweDAgMHg0MDA+OwoJ
CWNsb2NrcyA9IDwweDIxIDB4Mjg+OwoJCWNsb2NrLW5hbWVzID0gImtmdXNlIjsKCQlzdGF0dXMg
PSAib2theSI7Cgl9OwoKCXBtYy1pb3Bvd2VyIHsKCQljb21wYXRpYmxlID0gIm52aWRpYSx0ZWdy
YTIxMC1wbWMtaW9wb3dlciI7CgkJcGFkLWNvbnRyb2xsZXJzID0gPDB4MzcgMHgzMiAweDM3IDB4
MmIgMHgzNyAweDAgMHgzNyAweDIgMHgzNyAweDIyIDB4MzcgMHgyMyAweDM3IDB4MjYgMHgzNyAw
eDMzIDB4MzcgMHgxIDB4MzcgMHhhIDB4MzcgMHhjIDB4MzcgMHgxNSAweDM3IDB4MjkgMHgzNyAw
eDJhIDB4MzcgMHhmIDB4MzcgMHgxMCAweDM3IDB4MTEgMHgzNyAweDEyIDB4MzcgMHgxNz47CgkJ
cGFkLW5hbWVzID0gInN5cyIsICJ1YXJ0IiwgImF1ZGlvIiwgImNhbSIsICJwZXgtY3RybCIsICJz
ZG1tYzEiLCAic2RtbWMzIiwgImh2IiwgImF1ZGlvLWh2IiwgImRlYnVnIiwgImRtaWMiLCAiZ3Bp
byIsICJzcGkiLCAic3BpLWh2IiwgImRzaWEiLCAiZHNpYiIsICJkc2ljIiwgImRzaWQiLCAiaGRt
aSI7CgkJc3RhdHVzID0gIm9rYXkiOwoJCWlvcG93ZXItc3lzLXN1cHBseSA9IDwweDM2PjsKCQlp
b3Bvd2VyLXVhcnQtc3VwcGx5ID0gPDB4MzY+OwoJCWlvcG93ZXItYXVkaW8tc3VwcGx5ID0gPDB4
MzY+OwoJCWlvcG93ZXItY2FtLXN1cHBseSA9IDwweDM2PjsKCQlpb3Bvd2VyLXBleC1jdHJsLXN1
cHBseSA9IDwweDM2PjsKCQlpb3Bvd2VyLXNkbW1jMS1zdXBwbHkgPSA8MHg5OD47CgkJaW9wb3dl
ci1zZG1tYzMtc3VwcGx5ID0gPDB4MzY+OwoJCWlvcG93ZXItc2RtbWM0LXN1cHBseSA9IDwweDM2
PjsKCQlpb3Bvd2VyLWF1ZGlvLWh2LXN1cHBseSA9IDwweDM2PjsKCQlpb3Bvd2VyLWRlYnVnLXN1
cHBseSA9IDwweDM2PjsKCQlpb3Bvd2VyLWRtaWMtc3VwcGx5ID0gPDB4MzY+OwoJCWlvcG93ZXIt
Z3Bpby1zdXBwbHkgPSA8MHgzNj47CgkJaW9wb3dlci1zcGktc3VwcGx5ID0gPDB4MzY+OwoJCWlv
cG93ZXItc3BpLWh2LXN1cHBseSA9IDwweDM2PjsKCQlpb3Bvd2VyLXNkbW1jMi1zdXBwbHkgPSA8
MHgzNj47CgkJaW9wb3dlci1kcC1zdXBwbHkgPSA8MHgzNj47Cgl9OwoKCWR0dkA3MDAwYzMwMCB7
CgkJY29tcGF0aWJsZSA9ICJudmlkaWEsdGVncmEyMTAtZHR2IjsKCQlyZWcgPSA8MHgwIDB4NzAw
MGMzMDAgMHgwIDB4MTAwPjsKCQlkbWFzID0gPDB4NGMgMHhiPjsKCQlkbWEtbmFtZXMgPSAicngi
OwoJCXN0YXR1cyA9ICJkaXNhYmxlZCI7Cgl9OwoKCXh1ZGNANzAwZDAwMDAgewoJCWNvbXBhdGli
bGUgPSAibnZpZGlhLHRlZ3JhMjEwLXh1ZGMiOwoJCXJlZyA9IDwweDAgMHg3MDBkMDAwMCAweDAg
MHg4MDAwIDB4MCAweDcwMGQ4MDAwIDB4MCAweDEwMDAgMHgwIDB4NzAwZDkwMDAgMHgwIDB4MTAw
MD47CgkJaW50ZXJydXB0cyA9IDwweDAgMHgyYyAweDQ+OwoJCWNsb2NrcyA9IDwweDIxIDB4MTIx
IDB4MjEgMHg5YyAweDIxIDB4MTNlIDB4MjEgMHgxMjIgMHgyMSAweDExZT47CgkJbnZpZGlhLHh1
c2ItcGFkY3RsID0gPDB4NDQ+OwoJCWlvbW11cyA9IDwweDJiIDB4MTU+OwoJCXN0YXR1cyA9ICJv
a2F5IjsKCQljaGFyZ2VyLWRldGVjdG9yID0gPDB4OWE+OwoJCWh2ZGRfdXNiLXN1cHBseSA9IDww
eDQ3PjsKCQlhdmRkX3BsbF91dG1pcC1zdXBwbHkgPSA8MHgzNj47CgkJYXZkZGlvX3VzYi1zdXBw
bHkgPSA8MHgzZj47CgkJYXZkZGlvX3BsbF91ZXJlZmUtc3VwcGx5ID0gPDB4M2U+OwoJCWV4dGNv
bi1jYWJsZXMgPSA8MHg0OCAweDA+OwoJCWV4dGNvbi1jYWJsZS1uYW1lcyA9ICJ2YnVzIjsKCQlw
aHlzID0gPDB4NDU+OwoJCXBoeS1uYW1lcyA9ICJ1c2IyIjsKCQkjZXh0Y29uLWNlbGxzID0gPDB4
MT47Cgl9OwoKCW1lbW9yeS1jb250cm9sbGVyQDcwMDE5MDAwIHsKCQljb21wYXRpYmxlID0gIm52
aWRpYSx0ZWdyYTIxMC1tYyI7CgkJcmVnID0gPDB4MCAweDcwMDE5MDAwIDB4MCAweDEwMDA+OwoJ
CWNsb2NrcyA9IDwweDIxIDB4MjAgMHgyMSAweDM5PjsKCQljbG9jay1uYW1lcyA9ICJtYyIsICJl
bWMiOwoJCWludGVycnVwdHMgPSA8MHgwIDB4NGQgMHg0PjsKCQkjaW9tbXUtY2VsbHMgPSA8MHgx
PjsKCQkjcmVzZXQtY2VsbHMgPSA8MHgxPjsKCQlzdGF0dXMgPSAib2theSI7CgkJbGludXgscGhh
bmRsZSA9IDwweDEyMD47CgkJcGhhbmRsZSA9IDwweDEyMD47Cgl9OwoKCXB3bUA3MDExMDAwMCB7
CgkJY29tcGF0aWJsZSA9ICJudmlkaWEsdGVncmEyMTAtZGZsbC1wd20iOwoJCXJlZyA9IDwweDAg
MHg3MDExMDAwMCAweDAgMHg0MDA+OwoJCWNsb2NrcyA9IDwweDIxIDB4MTI4PjsKCQljbG9jay1u
YW1lcyA9ICJyZWYiOwoJCXBpbmN0cmwtbmFtZXMgPSAiZHZmc19wd21fZW5hYmxlIiwgImR2ZnNf
cHdtX2Rpc2FibGUiOwoJCSNwd20tY2VsbHMgPSA8MHgyPjsKCQlzdGF0dXMgPSAib2theSI7CgkJ
cGluY3RybC0wID0gPDB4OWI+OwoJCXBpbmN0cmwtMSA9IDwweDljPjsKCQlwd20tcmVndWxhdG9y
ID0gPDB4OWQ+OwoJCWxpbnV4LHBoYW5kbGUgPSA8MHhkZT47CgkJcGhhbmRsZSA9IDwweGRlPjsK
CX07CgoJY2xvY2tANzAxMTAwMDAgewoJCWNvbXBhdGlibGUgPSAibnZpZGlhLHRlZ3JhMjEwLWRm
bGwiOwoJCXJlZyA9IDwweDAgMHg3MDExMDAwMCAweDAgMHgxMDAgMHgwIDB4NzAxMTAwMDAgMHgw
IDB4MTAwIDB4MCAweDcwMTEwMTAwIDB4MCAweDEwMCAweDAgMHg3MDExMDIwMCAweDAgMHgxMDA+
OwoJCWludGVycnVwdHMgPSA8MHgwIDB4M2UgMHg0PjsKCQljbG9ja3MgPSA8MHgyMSAweDEyOSAw
eDIxIDB4MTI4IDB4MjEgMHgyZj47CgkJY2xvY2stbmFtZXMgPSAic29jIiwgInJlZiIsICJpMmMi
OwoJCXJlc2V0cyA9IDwweDIxIDB4ZTA+OwoJCXJlc2V0LW5hbWVzID0gImR2Y28iOwoJCSNjbG9j
ay1jZWxscyA9IDwweDA+OwoJCWNsb2NrLW91dHB1dC1uYW1lcyA9ICJkZmxsQ1BVX291dCI7CgkJ
b3V0LWNsb2NrLW5hbWUgPSAiZGZsbF9jcHUiOwoJCXN0YXR1cyA9ICJva2F5IjsKCQl2ZGQtY3B1
LXN1cHBseSA9IDwweDlkPjsKCQludmlkaWEsZGZsbC1tYXgtZnJlcS1raHogPSA8MHgxNjkxNTg+
OwoJCW52aWRpYSxwd20tdG8tcG1pYzsKCQludmlkaWEsaW5pdC11diA9IDwweGY0MjQwPjsKCQlu
dmlkaWEsc2FtcGxlLXJhdGUgPSA8MHg2MWE4PjsKCQludmlkaWEsZHJvb3AtY3RybCA9IDwweGYw
MD47CgkJbnZpZGlhLGZvcmNlLW1vZGUgPSA8MHgxPjsKCQludmlkaWEsY2YgPSA8MHg2PjsKCQlu
dmlkaWEsY2kgPSA8MHgwPjsKCQludmlkaWEsY2cgPSA8MHgyPjsKCQludmlkaWEsaWRsZS1vdmVy
cmlkZTsKCQludmlkaWEsb25lLXNob3QtY2FsaWJyYXRlOwoJCW52aWRpYSxwd20tcGVyaW9kID0g
PDB4OWM0PjsKCQlwaW5jdHJsLW5hbWVzID0gImR2ZnNfcHdtX2VuYWJsZSIsICJkdmZzX3B3bV9k
aXNhYmxlIjsKCQlwaW5jdHJsLTAgPSA8MHg5Yj47CgkJcGluY3RybC0xID0gPDB4OWM+OwoJCW52
aWRpYSxhbGlnbi1vZmZzZXQtdXYgPSA8MHhhY2RhMD47CgkJbnZpZGlhLGFsaWduLXN0ZXAtdXYg
PSA8MHg0YjAwPjsKCQlsaW51eCxwaGFuZGxlID0gPDB4MjY+OwoJCXBoYW5kbGUgPSA8MHgyNj47
Cgl9OwoKCXNvY3RoZXJtQDB4NzAwRTIwMDAgewoJCWNvbXBhdGlibGUgPSAibnZpZGlhLHRlZ3Jh
LXNvY3RoZXJtIiwgIm52aWRpYSx0ZWdyYTIxMC1zb2N0aGVybSI7CgkJcmVnID0gPDB4MCAweDcw
MGUyMDAwIDB4MCAweDYwMCAweDAgMHg2MDAwNjAwMCAweDAgMHg0MDAgMHgwIDB4NzAwNDAwMDAg
MHgwIDB4MjAwPjsKCQlyZWctbmFtZXMgPSAic29jdGhlcm0tcmVnIiwgImNhci1yZWciLCAiY2Ny
b2MtcmVnIjsKCQlpbnRlcnJ1cHRzID0gPDB4MCAweDMwIDB4NCAweDAgMHgzMyAweDQ+OwoJCWNs
b2NrcyA9IDwweDIxIDB4NjQgMHgyMSAweDRlPjsKCQljbG9jay1uYW1lcyA9ICJ0c2Vuc29yIiwg
InNvY3RoZXJtIjsKCQlyZXNldHMgPSA8MHgyMSAweDRlPjsKCQlyZXNldC1uYW1lcyA9ICJzb2N0
aGVybSI7CgkJI3RoZXJtYWwtc2Vuc29yLWNlbGxzID0gPDB4MT47CgkJc3RhdHVzID0gIm9rYXki
OwoJCWludGVycnVwdC1jb250cm9sbGVyOwoJCSNpbnRlcnJ1cHQtY2VsbHMgPSA8MHgyPjsKCQlz
b2N0aGVybS1jbG9jay1mcmVxdWVuY3kgPSA8MHgzMGEzMmMwPjsKCQl0c2Vuc29yLWNsb2NrLWZy
ZXF1ZW5jeSA9IDwweDYxYTgwPjsKCQlzZW5zb3ItcGFyYW1zLXRhbGwgPSA8MHgzZmFjPjsKCQlz
ZW5zb3ItcGFyYW1zLXRpZGRxID0gPDB4MT47CgkJc2Vuc29yLXBhcmFtcy10ZW4tY291bnQgPSA8
MHgxPjsKCQlzZW5zb3ItcGFyYW1zLXRzYW1wbGUgPSA8MHg3OD47CgkJc2Vuc29yLXBhcmFtcy1w
ZGl2ID0gPDB4OD47CgkJc2Vuc29yLXBhcmFtcy10c2FtcC1hdGUgPSA8MHgxZTA+OwoJCXNlbnNv
ci1wYXJhbXMtcGRpdi1hdGUgPSA8MHg4PjsKCQlody1wbGx4LW9mZnNldHMgPSA8MHgwIDB4M2U4
IDB4MWI1OCAweDIgMHg3ZDAgMHhmYTA+OwoJCW52aWRpYSx0aGVybXRyaXBzID0gPDB4MCAweDE5
MDY0IDB4MiAweDE5MjU4PjsKCQlsaW51eCxwaGFuZGxlID0gPDB4MTE+OwoJCXBoYW5kbGUgPSA8
MHgxMT47CgoJCXRocm90dGxlLWNmZ3MgewoKCQkJaGVhdnkgewoJCQkJbnZpZGlhLHByaW9yaXR5
ID0gPDB4NjQ+OwoJCQkJbnZpZGlhLGNwdS10aHJvdC1wZXJjZW50ID0gPDB4NTU+OwoJCQkJbnZp
ZGlhLGdwdS10aHJvdC1sZXZlbCA9IDwweDM+OwoJCQkJI2Nvb2xpbmctY2VsbHMgPSA8MHgyPjsK
CQkJCWxpbnV4LHBoYW5kbGUgPSA8MHgxMz47CgkJCQlwaGFuZGxlID0gPDB4MTM+OwoJCQl9OwoK
CQkJb2MxIHsKCQkJCW52aWRpYSxwcmlvcml0eSA9IDwweDA+OwoJCQkJbnZpZGlhLHBvbGFyaXR5
LWFjdGl2ZS1sb3cgPSA8MHgwPjsKCQkJCW52aWRpYSxjb3VudC10aHJlc2hvbGQgPSA8MHgwPjsK
CQkJCW52aWRpYSxhbGFybS1maWx0ZXIgPSA8MHgwPjsKCQkJCW52aWRpYSxhbGFybS1wZXJpb2Qg
PSA8MHgwPjsKCQkJCW52aWRpYSxjcHUtdGhyb3QtcGVyY2VudCA9IDwweDA+OwoJCQkJbnZpZGlh
LGdwdS10aHJvdC1sZXZlbCA9IDwweDA+OwoJCQkJbGludXgscGhhbmRsZSA9IDwweGM3PjsKCQkJ
CXBoYW5kbGUgPSA8MHhjNz47CgkJCX07CgoJCQlvYzMgewoJCQkJbnZpZGlhLHByaW9yaXR5ID0g
PDB4Mjg+OwoJCQkJbnZpZGlhLHBvbGFyaXR5LWFjdGl2ZS1sb3cgPSA8MHgxPjsKCQkJCW52aWRp
YSxjb3VudC10aHJlc2hvbGQgPSA8MHhmPjsKCQkJCW52aWRpYSxhbGFybS1maWx0ZXIgPSA8MHg0
ZGQxZTA+OwoJCQkJbnZpZGlhLGFsYXJtLXBlcmlvZCA9IDwweDA+OwoJCQkJbnZpZGlhLGNwdS10
aHJvdC1wZXJjZW50ID0gPDB4NGI+OwoJCQkJbnZpZGlhLGdwdS10aHJvdC1sZXZlbCA9IDwweDI+
OwoJCQkJbGludXgscGhhbmRsZSA9IDwweDEyMT47CgkJCQlwaGFuZGxlID0gPDB4MTIxPjsKCQkJ
fTsKCQl9OwoKCQlmdXNlX3dhckBmdXNlX3Jldl8wXzEgewoJCQlkZXZpY2VfdHlwZSA9ICJmdXNl
X3dhciI7CgkJCW1hdGNoX2Z1c2VfcmV2ID0gPDB4MCAweDE+OwoJCQljcHUwID0gPDB4MTA5Y2Jj
IDB4NjExMjA+OwoJCQljcHUxID0gPDB4MTA3MTYwIDB4MTA2MDMwPjsKCQkJY3B1MiA9IDwweDEw
ZGJhMCAweGZmZjY5NWQ4PjsKCQkJY3B1MyA9IDwweDEwYjBhOCAweGZmZjIzZmIwPjsKCQkJbWVt
MCA9IDwweDEwOGY3NCAweGZmZmU3ZmEwPjsKCQkJbWVtMSA9IDwweDEwZGJhMCAweGZmZmU3ZDQ4
PjsKCQkJZ3B1ID0gPDB4MTA5MTY4IDB4ZmZlZjQwZTQ+OwoJCQlwbGx4ID0gPDB4MTA3NjEwIDB4
ZmZmMjY4YjQ+OwoJCX07CgoJCWZ1c2Vfd2FyQGZ1c2VfcmV2XzIgewoJCQlkZXZpY2VfdHlwZSA9
ICJmdXNlX3dhciI7CgkJCW1hdGNoX2Z1c2VfcmV2ID0gPDB4Mj47CgkJCWNwdTAgPSA8MHgxMDhl
NDggMHgzMTgwYTg+OwoJCQljcHUxID0gPDB4MTEyZjM4IDB4ZmZmZWY4NTQ+OwoJCQljcHUyID0g
PDB4MTBjMmEwIDB4MjI1OTVjPjsKCQkJY3B1MyA9IDwweDEwZTgyMCAweDkzMjRjPjsKCQkJbWVt
MCA9IDwweDEwNTA5MCAweDM2MmFjYz47CgkJCW1lbTEgPSA8MHgxMWU4YzQgMHhmZmEwNmNkMD47
CgkJCWdwdSA9IDwweDEwNjQ3YyAweDI5YmIzND47CgkJCXBsbHggPSA8MHhmZGQ1NCAweDY4MzQy
Yz47CgkJfTsKCgkJdGhyb3R0bGVAY3JpdGljYWwgewoJCQlkZXZpY2VfdHlwZSA9ICJ0aHJvdHRs
ZWN0bCI7CgkJCWNkZXYtdHlwZSA9ICJ0ZWdyYS1zaHV0ZG93biI7CgkJCWNvb2xpbmctbWluLXN0
YXRlID0gPDB4MD47CgkJCWNvb2xpbmctbWF4LXN0YXRlID0gPDB4Mz47CgkJCSNjb29saW5nLWNl
bGxzID0gPDB4Mj47CgkJfTsKCgkJdGhyb3R0bGVAaGVhdnkgewoJCQlkZXZpY2VfdHlwZSA9ICJ0
aHJvdHRsZWN0bCI7CgkJCWNkZXYtdHlwZSA9ICJ0ZWdyYS1oZWF2eSI7CgkJCWNvb2xpbmctbWlu
LXN0YXRlID0gPDB4MD47CgkJCWNvb2xpbmctbWF4LXN0YXRlID0gPDB4Mz47CgkJCSNjb29saW5n
LWNlbGxzID0gPDB4Mj47CgkJCXByaW9yaXR5ID0gPDB4NjQ+OwoJCQl0aHJvdHRsZV9kZXYgPSA8
MHg5ZSAweDlmPjsKCQl9OwoKCQl0aHJvdHRsZV9kZXZAY3B1X2hpZ2ggewoJCQlkZXB0aCA9IDww
eDU1PjsKCQkJbGludXgscGhhbmRsZSA9IDwweDllPjsKCQkJcGhhbmRsZSA9IDwweDllPjsKCQl9
OwoKCQl0aHJvdHRsZV9kZXZAZ3B1X2hpZ2ggewoJCQlsZXZlbCA9ICJoZWF2eV90aHJvdHRsaW5n
IjsKCQkJbGludXgscGhhbmRsZSA9IDwweDlmPjsKCQkJcGhhbmRsZSA9IDwweDlmPjsKCQl9OwoJ
fTsKCgl0ZWdyYS1hb3RhZyB7CgkJY29tcGF0aWJsZSA9ICJudmlkaWEsdGVncmEyMXgtYW90YWci
OwoJCXBhcmVudC1ibG9jayA9IDwweDM3PjsKCQlzdGF0dXMgPSAib2theSI7CgkJc2Vuc29yLXBh
cmFtcy10YWxsID0gPDB4NGM+OwoJCXNlbnNvci1wYXJhbXMtdGlkZHEgPSA8MHgxPjsKCQlzZW5z
b3ItcGFyYW1zLXRlbi1jb3VudCA9IDwweDEwPjsKCQlzZW5zb3ItcGFyYW1zLXRzYW1wbGUgPSA8
MHg5PjsKCQlzZW5zb3ItcGFyYW1zLXBkaXYgPSA8MHg4PjsKCQlzZW5zb3ItcGFyYW1zLXRzYW1w
LWF0ZSA9IDwweDI3PjsKCQlzZW5zb3ItcGFyYW1zLXBkaXYtYXRlID0gPDB4OD47CgkJI3RoZXJt
YWwtc2Vuc29yLWNlbGxzID0gPDB4MD47CgkJc2Vuc29yLW5hbWUgPSAiYW90YWcwIjsKCQlzZW5z
b3ItaWQgPSA8MHgwPjsKCQlhZHZlcnRpc2VkLXNlbnNvci1pZCA9IDwweDk+OwoJCXNlbnNvci1u
b21pbmFsLXRlbXAtY3AgPSA8MHgxOT47CgkJc2Vuc29yLW5vbWluYWwtdGVtcC1mdCA9IDwweDY5
PjsKCQlzZW5zb3ItY29tcGVuc2F0aW9uLWEgPSA8MHgyOTg4PjsKCQlzZW5zb3ItY29tcGVuc2F0
aW9uLWIgPSA8MHhmZmZlZjg1ZT47CgkJbGludXgscGhhbmRsZSA9IDwweDI+OwoJCXBoYW5kbGUg
PSA8MHgyPjsKCX07CgoJdGVncmFfY2VjIHsKCQljb21wYXRpYmxlID0gIm52aWRpYSx0ZWdyYTIx
MC1jZWMiOwoJCXJlZyA9IDwweDAgMHg3MDAxNTAwMCAweDAgMHgxMDAwPjsKCQlpbnRlcnJ1cHRz
ID0gPDB4MCAweDMgMHg0PjsKCQljbG9ja3MgPSA8MHgyMSAweDg4PjsKCQljbG9jay1uYW1lcyA9
ICJjZWMiOwoJCXN0YXR1cyA9ICJva2F5IjsKCX07CgoJd2F0Y2hkb2dANjAwMDUxMDAgewoJCWNv
bXBhdGlibGUgPSAibnZpZGlhLHRlZ3JhLXdkdC10MjF4IjsKCQlyZWcgPSA8MHgwIDB4NjAwMDUx
MDAgMHgwIDB4MjAgMHgwIDB4NjAwMDUwODggMHgwIDB4OD47CgkJaW50ZXJydXB0cyA9IDwweDAg
MHg3YiAweDQ+OwoJCW52aWRpYSxleHBpcnktY291bnQgPSA8MHg0PjsKCQludmlkaWEsdGltZXIt
aW5kZXggPSA8MHg3PjsKCQludmlkaWEsZW5hYmxlLW9uLWluaXQ7CgkJc3RhdHVzID0gImRpc2Fi
bGVkIjsKCQlkdC1vdmVycmlkZS1zdGF0dXMtb2RtLWRhdGEgPSA8MHgxMDAwMCAweDEwMDAwPjsK
CQl0aW1lb3V0LXNlYyA9IDwweDc4PjsKCQlsaW51eCxwaGFuZGxlID0gPDB4YjU+OwoJCXBoYW5k
bGUgPSA8MHhiNT47Cgl9OwoKCXRlZ3JhX2ZpcV9kZWJ1Z2dlciB7CgkJY29tcGF0aWJsZSA9ICJu
dmlkaWEsZmlxLWRlYnVnZ2VyIjsKCQl1c2UtY29uc29sZS1wb3J0OwoJCWludGVycnVwdHMgPSA8
MHgwIDB4N2IgMHg0PjsKCX07CgoJcHRtIHsKCQljb21wYXRpYmxlID0gIm52aWRpYSxwdG0iOwoJ
CXJlZyA9IDwweDAgMHg3MjAxMDAwMCAweDAgMHgxMDAwIDB4MCAweDcyMDMwMDAwIDB4MCAweDEw
MDAgMHgwIDB4NzIwNDAwMDAgMHgwIDB4MTAwMCAweDAgMHg3MjA1MDAwMCAweDAgMHgxMDAwIDB4
MCAweDcyMDYwMDAwIDB4MCAweDEwMDAgMHgwIDB4NzMwMTAwMDAgMHgwIDB4MTAwMCAweDAgMHg3
MzQ0MDAwMCAweDAgMHgxMDAwIDB4MCAweDczNTQwMDAwIDB4MCAweDEwMDAgMHgwIDB4NzM2NDAw
MDAgMHgwIDB4MTAwMCAweDAgMHg3Mzc0MDAwMCAweDAgMHgxMDAwIDB4MCAweDcyODIwMDAwIDB4
MCAweDEwMDAgMHgwIDB4NzJhMWMwMDAgMHgwIDB4MTAwMD47CgkJc3RhdHVzID0gIm9rYXkiOwoJ
fTsKCgltc2VsZWN0IHsKCQljb21wYXRpYmxlID0gIm52aWRpYSx0ZWdyYS1tc2VsZWN0IjsKCQlp
bnRlcnJ1cHRzID0gPDB4MCAweGFmIDB4ND47CgkJcmVnID0gPDB4MCAweDUwMDYwMDAwIDB4MCAw
eDEwMDA+OwoJCXN0YXR1cyA9ICJkaXNhYmxlZCI7Cgl9OwoKCWNwdWlkbGUgewoJCWNvbXBhdGli
bGUgPSAibnZpZGlhLHRlZ3JhMjEwLWNwdWlkbGUiOwoJCWNjNC1uby1yZXRlbnRpb247Cgl9OwoK
CWFwYm1pc2NANzAwMDA4MDAgewoJCWNvbXBhdGlibGUgPSAibnZpZGlhLHRlZ3JhMjEwLWFwYm1p
c2MiLCAibnZpZGlhLHRlZ3JhMjAtYXBibWlzYyI7CgkJcmVnID0gPDB4MCAweDcwMDAwODAwIDB4
MCAweDY0IDB4MCAweDcwMDAwMDA4IDB4MCAweDQ+OwoJfTsKCgludmR1bXBlciB7CgkJY29tcGF0
aWJsZSA9ICJudmlkaWEsdGVncmEyMTAtbnZkdW1wZXIiOwoJCXN0YXR1cyA9ICJkaXNhYmxlZCI7
Cgl9OwoKCXRlZ3JhLXBtYy1ibGluay1wd20gewoJCWNvbXBhdGlibGUgPSAibnZpZGlhLHRlZ3Jh
MjEwLXBtYy1ibGluay1wd20iOwoJCXN0YXR1cyA9ICJkaXNhYmxlZCI7Cgl9OwoKCW52cG1vZGVs
IHsKCQljb21wYXRpYmxlID0gIm52aWRpYSxudnBtb2RlbCI7CgkJc3RhdHVzID0gIm9rYXkiOwoJ
fTsKCglleHRjb24gewoJCWNvbXBhdGlibGUgPSAic2ltcGxlLWJ1cyI7CgkJZGV2aWNlX3R5cGUg
PSAiZXh0ZXJuYWwtY29ubmVjdGlvbiI7CgkJI2FkZHJlc3MtY2VsbHMgPSA8MHgxPjsKCQkjc2l6
ZS1jZWxscyA9IDwweDA+OwoKCQlkaXNwLXN0YXRlIHsKCQkJY29tcGF0aWJsZSA9ICJleHRjb24t
ZGlzcC1zdGF0ZSI7CgkJCSNleHRjb24tY2VsbHMgPSA8MHgxPjsKCQl9OwoKCQlleHRjb25AMCB7
CgkJCWNvbXBhdGlibGUgPSAiZXh0Y29uLWdwaW8iOwoJCQlyZWcgPSA8MHgwPjsKCQkJZXh0Y29u
LWdwaW8sbmFtZSA9ICJJRCI7CgkJCWdwaW8gPSA8MHgxZSAweDAgMHgwPjsKCQkJZXh0Y29uLWdw
aW8sY29ubmVjdGlvbi1zdGF0ZS1sb3c7CgkJCWV4dGNvbi1ncGlvLGNhYmxlLW5hbWUgPSAiVVNC
LUhvc3QiOwoJCQkjZXh0Y29uLWNlbGxzID0gPDB4MT47CgkJCXN0YXR1cyA9ICJkaXNhYmxlZCI7
CgkJCWxpbnV4LHBoYW5kbGUgPSA8MHgxMjI+OwoJCQlwaGFuZGxlID0gPDB4MTIyPjsKCQl9OwoK
CQlleHRjb25AMSB7CgkJCWNvbXBhdGlibGUgPSAiZXh0Y29uLWdwaW8tc3RhdGVzIjsKCQkJcmVn
ID0gPDB4MT47CgkJCWV4dGNvbi1ncGlvLG5hbWUgPSAiVkJVUyI7CgkJCWV4dGNvbi1ncGlvLGNh
YmxlLXN0YXRlcyA9IDwweDAgMHgxIDB4MSAweDA+OwoJCQlncGlvcyA9IDwweDU2IDB4ZTQgMHgw
PjsKCQkJZXh0Y29uLWdwaW8sb3V0LWNhYmxlLW5hbWVzID0gPDB4MSAweDIgMHgwPjsKCQkJd2Fr
ZXVwLXNvdXJjZTsKCQkJI2V4dGNvbi1jZWxscyA9IDwweDE+OwoJCQludmlkaWEscG1jLXdha2V1
cCA9IDwweDM3IDB4MCAweDM2IDB4MD47CgkJCWxpbnV4LHBoYW5kbGUgPSA8MHg0OD47CgkJCXBo
YW5kbGUgPSA8MHg0OD47CgkJfTsKCX07CgoJYnRocm90X2NkZXYgewoJCWNvbXBhdGlibGUgPSAi
bnZpZGlhLHRlZ3JhLWJhbGFuY2VkLXRocm90dGxlIjsKCQljbG9ja3MgPSA8MHgyMSAweDEyNiAw
eDIxIDB4MWVjIDB4MjEgMHgxOTkgMHgyMSAweDFhMiAweDIxIDB4MWI5IDB4MjEgMHgxZDI+OwoJ
CWNsb2NrLW5hbWVzID0gImNjbGtfZyIsICJncHUiLCAiY2FwLnRocm90dGxlLmMyYnVzIiwgImNh
cC50aHJvdHRsZS5jM2J1cyIsICJjYXAudGhyb3R0bGUuc2NsayIsICJlbWMiOwoKCQlza2luX2Jh
bGFuY2VkIHsKCQkJY2Rldi10eXBlID0gInNraW4tYmFsYW5jZWQiOwoJCQludW1fc3RhdGVzID0g
PDB4NDI+OwoJCQljb29saW5nLW1pbi1zdGF0ZSA9IDwweDA+OwoJCQljb29saW5nLW1heC1zdGF0
ZSA9IDwweDQyPjsKCQkJI2Nvb2xpbmctY2VsbHMgPSA8MHgyPjsKCQkJc3RhdHVzID0gIm9rYXki
OwoJCQl0aHJvdHRsZV90YWJsZSA9IDwweDE2MzU4YyAweGQ0Y2IwIDB4NzUzMDAgMHg3ZDAwMCAw
eDVkYzAwIDB4ZmZmZmZmZmYgMHgxNWU5YmMgMHhkMWM5ZiAweDc1MzAwIDB4N2QwMDAgMHg1ZGMw
MCAweGZmZmZmZmZmIDB4MTU5ZGVkIDB4Y2VjOGYgMHg3NTMwMCAweDdkMDAwIDB4NWRjMDAgMHhm
ZmZmZmZmZiAweDE1NTIxZCAweGNiYzdlIDB4NzUzMDAgMHg3ZDAwMCAweDVkYzAwIDB4ZmZmZmZm
ZmYgMHgxNTA2NGQgMHhjOGM2ZSAweDc1MzAwIDB4N2QwMDAgMHg1ZGMwMCAweGZmZmZmZmZmIDB4
MTRiYTdlIDB4YzVjNWQgMHg3NTMwMCAweDdkMDAwIDB4NWRjMDAgMHhmZmZmZmZmZiAweDE0NmVh
ZSAweGMyYzRjIDB4NzUzMDAgMHg3ZDAwMCAweDVkYzAwIDB4ZmZmZmZmZmYgMHgxNDIyZGUgMHhi
ZmMzYyAweDc1MzAwIDB4N2QwMDAgMHg1ZGMwMCAweGZmZmZmZmZmIDB4MTNkNzBlIDB4YmNjMmIg
MHg3NTMwMCAweDdkMDAwIDB4NWRjMDAgMHhmZmZmZmZmZiAweDEzOGIzZiAweGI5YzFhIDB4NzUz
MDAgMHg3ZDAwMCAweDVkYzAwIDB4ZmZmZmZmZmYgMHgxMzNmNmYgMHhiNmMwYSAweDc1MzAwIDB4
N2QwMDAgMHg1ZGMwMCAweGZmZmZmZmZmIDB4MTJmMzlmIDB4YjNiZjkgMHg3NTMwMCAweDdkMDAw
IDB4NWRjMDAgMHhmZmZmZmZmZiAweDEyYTdkMCAweGIwYmU5IDB4NzUzMDAgMHg3ZDAwMCAweDVk
YzAwIDB4ZmZmZmZmZmYgMHgxMjVjMDAgMHhhZGJkOCAweDc1MzAwIDB4N2QwMDAgMHg1ZGMwMCAw
eGZmZmZmZmZmIDB4MTIxMDMwIDB4YWFiYzcgMHg3NTMwMCAweDdkMDAwIDB4NWRjMDAgMHhmZmZm
ZmZmZiAweDExYzQ2MSAweGE3YmI3IDB4NzUzMDAgMHg3ZDAwMCAweDVkYzAwIDB4ZmZmZmZmZmYg
MHgxMTc4OTEgMHhhNGJhNiAweDc1MzAwIDB4N2QwMDAgMHg1ZGMwMCAweGZmZmZmZmZmIDB4MTEy
Y2MxIDB4YTFiOTYgMHg3NTMwMCAweDdkMDAwIDB4NWRjMDAgMHhmZmZmZmZmZiAweDEwZTBmMiAw
eDllYjg1IDB4NzUzMDAgMHg3ZDAwMCAweDVkYzAwIDB4ZmZmZmZmZmYgMHgxMDk1MjIgMHg5YmI3
NCAweDc1MzAwIDB4N2QwMDAgMHg1ZGMwMCAweGZmZmZmZmZmIDB4MTA0OTUyIDB4OThiNjQgMHg3
NTMwMCAweDdkMDAwIDB4NWRjMDAgMHhmZmZmZmZmZiAweGZmZDgyIDB4OTViNTMgMHg3NTMwMCAw
eDdkMDAwIDB4NWRjMDAgMHhmZmZmZmZmZiAweGZiMWIzIDB4OTJiNDIgMHg3NTMwMCAweDdkMDAw
IDB4NWRjMDAgMHhmZmZmZmZmZiAweGY2NWUzIDB4OGZiMzIgMHg3NTMwMCAweDdkMDAwIDB4NWRj
MDAgMHhmZmZmZmZmZiAweGYxYTEzIDB4OGNiMjEgMHg3NTMwMCAweDdkMDAwIDB4NWRjMDAgMHhm
ZmZmZmZmZiAweGVjZTQ0IDB4ODliMTEgMHg3NTMwMCAweDdkMDAwIDB4NWRjMDAgMHhmZmZmZmZm
ZiAweGU4Mjc0IDB4ODZiMDAgMHg3NTMwMCAweDdkMDAwIDB4NWRjMDAgMHhmZmZmZmZmZiAweGUz
NmE0IDB4ODNhZWYgMHg3NTMwMCAweDdkMDAwIDB4NWRjMDAgMHhmZmZmZmZmZiAweGRlYWQ1IDB4
ODBhZGYgMHg3NTMwMCAweDdkMDAwIDB4NWRjMDAgMHhmZmZmZmZmZiAweGQ5ZjA1IDB4N2RhY2Ug
MHg3NTMwMCAweDdkMDAwIDB4NWRjMDAgMHhmZmZmZmZmZiAweGQ1MzM1IDB4N2FhYmUgMHg3NTMw
MCAweDdkMDAwIDB4NWRjMDAgMHhmZmZmZmZmZiAweGQwNzY2IDB4NzdhYWQgMHg3NTMwMCAweDdk
MDAwIDB4NWRjMDAgMHhmZmZmZmZmZiAweGNiYjk2IDB4NzRhOWMgMHg3NTMwMCAweDdkMDAwIDB4
NWRjMDAgMHhmZmZmZmZmZiAweGM2ZmM2IDB4NzFhOGMgMHg3NTMwMCAweDdkMDAwIDB4NWRjMDAg
MHhmZmZmZmZmZiAweGMyM2Y2IDB4NmVhN2IgMHg3NTMwMCAweDdkMDAwIDB4NWRjMDAgMHhmZmZm
ZmZmZiAweGJkODI3IDB4NmJhNmEgMHg3NTMwMCAweDc2YzAwIDB4NWRjMDAgMHhmZmZmZmZmZiAw
eGI4YzU3IDB4NjhhNWEgMHg3NTMwMCAweDc2YzAwIDB4NWRjMDAgMHhmZmZmZmZmZiAweGI0MDg3
IDB4NjVhNDkgMHg3NTMwMCAweDc2YzAwIDB4NWRjMDAgMHhmZmZmZmZmZiAweGFmNGI4IDB4NjJh
MzkgMHg3NTMwMCAweDc2YzAwIDB4NWRjMDAgMHhmZmZmZmZmZiAweGFhOGU4IDB4NWZhMjggMHg3
NTMwMCAweDc2YzAwIDB4NWRjMDAgMHhmZmZmZmZmZiAweGE1ZDE4IDB4NWNhMTcgMHg3NTMwMCAw
eDc2YzAwIDB4NWRjMDAgMHhmZmZmZmZmZiAweGExMTQ5IDB4NTlhMDcgMHg3NTMwMCAweDc2YzAw
IDB4NWRjMDAgMHhmZmZmZmZmZiAweDljNTc5IDB4NTY5ZjYgMHg3NTMwMCAweDc2YzAwIDB4NWRj
MDAgMHhmZmZmZmZmZiAweDk3OWE5IDB4NTM5ZTYgMHg3NTMwMCAweDc2YzAwIDB4NWRjMDAgMHhm
ZmZmZmZmZiAweDkyZGRhIDB4NTA5ZDUgMHg3NTMwMCAweDczYTAwIDB4NWRjMDAgMHhmZmZmZmZm
ZiAweDhlMjBhIDB4NGQ5YzQgMHg3NTMwMCAweDczYTAwIDB4NWRjMDAgMHhmZmZmZmZmZiAweDg5
NjNhIDB4NGE5YjQgMHg3NTMwMCAweDczYTAwIDB4NWRjMDAgMHhmZmZmZmZmZiAweDg0YTZhIDB4
NDc5YTMgMHg3NTMwMCAweDczYTAwIDB4NWRjMDAgMHhmZmZmZmZmZiAweDdmZTliIDB4NDQ5OTIg
MHg3NTMwMCAweDczYTAwIDB4NWRjMDAgMHhmZmZmZmZmZiAweDdiMmNiIDB4NDE5ODIgMHg3NTMw
MCAweDczYTAwIDB4NWRjMDAgMHhmZmZmZmZmZiAweDc2NmZiIDB4M2U5NzEgMHg3NTMwMCAweDcz
YTAwIDB4NWRjMDAgMHhmZmZmZmZmZiAweDcxYjJjIDB4M2I5NjEgMHg3NTMwMCAweDZhNDAwIDB4
NWRjMDAgMHhmZmZmZmZmZiAweDZjZjVjIDB4Mzg5NTAgMHg3NTMwMCAweDZhNDAwIDB4NWRjMDAg
MHhmZmZmZmZmZiAweDY4MzhjIDB4MzU5M2YgMHg3NTMwMCAweDZhNDAwIDB4NWRjMDAgMHhmZmZm
ZmZmZiAweDYzN2JkIDB4MzI5MmYgMHg3NTMwMCAweDZhNDAwIDB4NWRjMDAgMHhmZmZmZmZmZiAw
eDVlYmVkIDB4MmY5MWUgMHg3NTMwMCAweDZhNDAwIDB4NWRjMDAgMHhmZmZmZmZmZiAweDVhMDFk
IDB4MmM5MGUgMHg3NTMwMCAweDZhNDAwIDB4NWRjMDAgMHhmZmZmZmZmZiAweDU1NDRlIDB4Mjk4
ZmQgMHg3NTMwMCAweDYwZTAwIDB4NWRjMDAgMHhmZmZmZmZmZiAweDUwODdlIDB4MjY4ZWMgMHg3
NTMwMCAweDYwZTAwIDB4NWRjMDAgMHhmZmZmZmZmZiAweDRiY2FlIDB4MjM4ZGMgMHg3NTMwMCAw
eDYwZTAwIDB4NWRjMDAgMHhmZmZmZmZmZiAweDQ3MGRlIDB4MjA4Y2IgMHg3NTMwMCAweDYwZTAw
IDB4NWRjMDAgMHhmZmZmZmZmZiAweDQyNTBmIDB4MWQ4YmEgMHg3NTMwMCAweDYwZTAwIDB4NWRj
MDAgMHhmZmZmZmZmZiAweDNkOTNmIDB4MWE4YWEgMHg3NTMwMCAweDYwZTAwIDB4NWRjMDAgMHhm
ZmZmZmZmZiAweDM4ZDZmIDB4MTc4OTkgMHg3NTMwMCAweDYwZTAwIDB4NWRjMDAgMHhmZmZmZmZm
ZiAweDM0MWEwIDB4MTQ4ODkgMHg3NTMwMCAweDYwZTAwIDB4NWRjMDAgMHhmZmZmZmZmZiAweDJm
NWQwIDB4MTE4NzggMHg3NTMwMCAweDYwZTAwIDB4NWRjMDAgMHhmZmZmZmZmZj47CgkJfTsKCgkJ
Z3B1X2JhbGFuY2VkIHsKCQkJY2Rldi10eXBlID0gImdwdS1iYWxhbmNlZCI7CgkJCW51bV9zdGF0
ZXMgPSA8MHg0Mj47CgkJCWNvb2xpbmctbWluLXN0YXRlID0gPDB4MD47CgkJCWNvb2xpbmctbWF4
LXN0YXRlID0gPDB4NDI+OwoJCQkjY29vbGluZy1jZWxscyA9IDwweDI+OwoJCQlzdGF0dXMgPSAi
b2theSI7CgkJCXRocm90dGxlX3RhYmxlID0gPDB4MTYzNThjIDB4Y2ViMDggMHg3NTMwMCAweDdk
MDAwIDB4NWRjMDAgMHhmZmZmZmZmZiAweDE1ZTliYyAweGNhZmIwIDB4NzUzMDAgMHg3ZDAwMCAw
eDVkYzAwIDB4ZmZmZmZmZmYgMHgxNTlkZWQgMHhjNzQ1OCAweDc1MzAwIDB4N2QwMDAgMHg1ZGMw
MCAweGZmZmZmZmZmIDB4MTU1MjFkIDB4YzM5MDAgMHg3NTMwMCAweDdkMDAwIDB4NWRjMDAgMHhm
ZmZmZmZmZiAweDE1MDY0ZCAweGJmZGE3IDB4NzUzMDAgMHg3ZDAwMCAweDVkYzAwIDB4ZmZmZmZm
ZmYgMHgxNGJhN2UgMHhiYzI0ZiAweDc1MzAwIDB4N2QwMDAgMHg1ZGMwMCAweGZmZmZmZmZmIDB4
MTQ2ZWFlIDB4Yjg2ZjcgMHg3NTMwMCAweDdkMDAwIDB4NWRjMDAgMHhmZmZmZmZmZiAweDE0MjJk
ZSAweGI0YjlmIDB4NzUzMDAgMHg3ZDAwMCAweDVkYzAwIDB4ZmZmZmZmZmYgMHgxM2Q3MGUgMHhi
MTA0NyAweDc1MzAwIDB4N2QwMDAgMHg1ZGMwMCAweGZmZmZmZmZmIDB4MTM4YjNmIDB4YWQ0ZWYg
MHg3NTMwMCAweDdkMDAwIDB4NWRjMDAgMHhmZmZmZmZmZiAweDEzM2Y2ZiAweGE5OTk2IDB4NzUz
MDAgMHg3ZDAwMCAweDVkYzAwIDB4ZmZmZmZmZmYgMHgxMmYzOWYgMHhhNWUzZSAweDc1MzAwIDB4
N2QwMDAgMHg1ZGMwMCAweGZmZmZmZmZmIDB4MTJhN2QwIDB4YTIyZTYgMHg3NTMwMCAweDdkMDAw
IDB4NWRjMDAgMHhmZmZmZmZmZiAweDEyNWMwMCAweDllNzhlIDB4NzUzMDAgMHg3ZDAwMCAweDVk
YzAwIDB4ZmZmZmZmZmYgMHgxMjEwMzAgMHg5YWMzNiAweDc1MzAwIDB4N2QwMDAgMHg1ZGMwMCAw
eGZmZmZmZmZmIDB4MTFjNDYxIDB4OTcwZGUgMHg3NTMwMCAweDdkMDAwIDB4NWRjMDAgMHhmZmZm
ZmZmZiAweDExNzg5MSAweDkzNTg1IDB4NzUzMDAgMHg3ZDAwMCAweDVkYzAwIDB4ZmZmZmZmZmYg
MHgxMTJjYzEgMHg4ZmEyZCAweDc1MzAwIDB4N2QwMDAgMHg1ZGMwMCAweGZmZmZmZmZmIDB4MTBl
MGYyIDB4OGJlZDUgMHg3NTMwMCAweDdkMDAwIDB4NWRjMDAgMHhmZmZmZmZmZiAweDEwOTUyMiAw
eDg4MzdkIDB4NzUzMDAgMHg3ZDAwMCAweDVkYzAwIDB4ZmZmZmZmZmYgMHgxMDQ5NTIgMHg4NDgy
NSAweDc1MzAwIDB4N2QwMDAgMHg1ZGMwMCAweGZmZmZmZmZmIDB4ZmZkODIgMHg4MGNjZCAweDc1
MzAwIDB4N2QwMDAgMHg1ZGMwMCAweGZmZmZmZmZmIDB4ZmIxYjMgMHg3ZDE3NSAweDc1MzAwIDB4
N2QwMDAgMHg1ZGMwMCAweGZmZmZmZmZmIDB4ZjY1ZTMgMHg3OTYxYyAweDc1MzAwIDB4N2QwMDAg
MHg1ZGMwMCAweGZmZmZmZmZmIDB4ZjFhMTMgMHg3NWFjNCAweDc1MzAwIDB4N2QwMDAgMHg1ZGMw
MCAweGZmZmZmZmZmIDB4ZWNlNDQgMHg3MWY2YyAweDc1MzAwIDB4N2QwMDAgMHg1ZGMwMCAweGZm
ZmZmZmZmIDB4ZTgyNzQgMHg2ZTQxNCAweDc1MzAwIDB4N2QwMDAgMHg1ZGMwMCAweGZmZmZmZmZm
IDB4ZTM2YTQgMHg2YThiYyAweDc1MzAwIDB4N2QwMDAgMHg1ZGMwMCAweGZmZmZmZmZmIDB4ZGVh
ZDUgMHg2NmQ2NCAweDc1MzAwIDB4N2QwMDAgMHg1ZGMwMCAweGZmZmZmZmZmIDB4ZDlmMDUgMHg2
MzIwYiAweDc1MzAwIDB4N2QwMDAgMHg1ZGMwMCAweGZmZmZmZmZmIDB4ZDUzMzUgMHg1ZjZiMyAw
eDc1MzAwIDB4N2QwMDAgMHg1ZGMwMCAweGZmZmZmZmZmIDB4ZDA3NjYgMHg1YmI1YiAweDc1MzAw
IDB4N2QwMDAgMHg1ZGMwMCAweGZmZmZmZmZmIDB4Y2JiOTYgMHg1ODAwMyAweDc1MzAwIDB4N2Qw
MDAgMHg1ZGMwMCAweGZmZmZmZmZmIDB4YzZmYzYgMHg1NDRhYiAweDc1MzAwIDB4N2QwMDAgMHg1
ZGMwMCAweGZmZmZmZmZmIDB4YzIzZjYgMHg1MDk1MyAweDc1MzAwIDB4N2QwMDAgMHg1ZGMwMCAw
eGZmZmZmZmZmIDB4YmQ4MjcgMHg0Y2RmYiAweDc1MzAwIDB4NzZjMDAgMHg1ZGMwMCAweGZmZmZm
ZmZmIDB4YjhjNTcgMHg0OTJhMiAweDc1MzAwIDB4NzZjMDAgMHg1ZGMwMCAweGZmZmZmZmZmIDB4
YjQwODcgMHg0NTc0YSAweDc1MzAwIDB4NzZjMDAgMHg1ZGMwMCAweGZmZmZmZmZmIDB4YWY0Yjgg
MHg0MWJmMiAweDc1MzAwIDB4NzZjMDAgMHg1ZGMwMCAweGZmZmZmZmZmIDB4YWE4ZTggMHgzZTA5
YSAweDc1MzAwIDB4NzZjMDAgMHg1ZGMwMCAweGZmZmZmZmZmIDB4YTVkMTggMHgzYTU0MiAweDc1
MzAwIDB4NzZjMDAgMHg1ZGMwMCAweGZmZmZmZmZmIDB4YTExNDkgMHgzNjllYSAweDc1MzAwIDB4
NzZjMDAgMHg1ZGMwMCAweGZmZmZmZmZmIDB4OWM1NzkgMHgzMmU5MSAweDc1MzAwIDB4NzZjMDAg
MHg1ZGMwMCAweGZmZmZmZmZmIDB4OTc5YTkgMHgyZjMzOSAweDc1MzAwIDB4NzZjMDAgMHg1ZGMw
MCAweGZmZmZmZmZmIDB4OTJkZGEgMHgyYjdlMSAweDc1MzAwIDB4NzNhMDAgMHg1ZGMwMCAweGZm
ZmZmZmZmIDB4OGUyMGEgMHgyN2M4OSAweDc1MzAwIDB4NzNhMDAgMHg1ZGMwMCAweGZmZmZmZmZm
IDB4ODk2M2EgMHgyNDEzMSAweDc1MzAwIDB4NzNhMDAgMHg1ZGMwMCAweGZmZmZmZmZmIDB4ODRh
NmEgMHgyMDVkOSAweDc1MzAwIDB4NzNhMDAgMHg1ZGMwMCAweGZmZmZmZmZmIDB4N2ZlOWIgMHgx
Y2E4MCAweDc1MzAwIDB4NzNhMDAgMHg1ZGMwMCAweGZmZmZmZmZmIDB4N2IyY2IgMHgxOGYyOCAw
eDc1MzAwIDB4NzNhMDAgMHg1ZGMwMCAweGZmZmZmZmZmIDB4NzY2ZmIgMHgxNTNkMCAweDc1MzAw
IDB4NzNhMDAgMHg1ZGMwMCAweGZmZmZmZmZmIDB4NzFiMmMgMHhmMTY4IDB4NzUzMDAgMHg2YTQw
MCAweDVkYzAwIDB4ZmZmZmZmZmY+OwoJCQlsaW51eCxwaGFuZGxlID0gPDB4MWI+OwoJCQlwaGFu
ZGxlID0gPDB4MWI+OwoJCX07CgoJCWNwdV9iYWxhbmNlZCB7CgkJCWNkZXYtdHlwZSA9ICJjcHUt
YmFsYW5jZWQiOwoJCQludW1fc3RhdGVzID0gPDB4NDI+OwoJCQljb29saW5nLW1pbi1zdGF0ZSA9
IDwweDA+OwoJCQljb29saW5nLW1heC1zdGF0ZSA9IDwweDQyPjsKCQkJI2Nvb2xpbmctY2VsbHMg
PSA8MHgyPjsKCQkJc3RhdHVzID0gIm9rYXkiOwoJCQl0aHJvdHRsZV90YWJsZSA9IDwweDE2MzU4
YyAweGZmZmZmZmZmIDB4NzUzMDAgMHg3ZDAwMCAweDVkYzAwIDB4ZmZmZmZmZmYgMHgxNWU5YmMg
MHhmZmZmZmZmZiAweDc1MzAwIDB4N2QwMDAgMHg1ZGMwMCAweGZmZmZmZmZmIDB4MTU5ZGVkIDB4
ZmZmZmZmZmYgMHg3NTMwMCAweDdkMDAwIDB4NWRjMDAgMHhmZmZmZmZmZiAweDE1NTIxZCAweGZm
ZmZmZmZmIDB4NzUzMDAgMHg3ZDAwMCAweDVkYzAwIDB4ZmZmZmZmZmYgMHgxNTA2NGQgMHhmZmZm
ZmZmZiAweDc1MzAwIDB4N2QwMDAgMHg1ZGMwMCAweGZmZmZmZmZmIDB4MTRiYTdlIDB4ZmZmZmZm
ZmYgMHg3NTMwMCAweDdkMDAwIDB4NWRjMDAgMHhmZmZmZmZmZiAweDE0NmVhZSAweGZmZmZmZmZm
IDB4NzUzMDAgMHg3ZDAwMCAweDVkYzAwIDB4ZmZmZmZmZmYgMHgxNDIyZGUgMHhmZmZmZmZmZiAw
eDc1MzAwIDB4N2QwMDAgMHg1ZGMwMCAweGZmZmZmZmZmIDB4MTNkNzBlIDB4ZmZmZmZmZmYgMHg3
NTMwMCAweDdkMDAwIDB4NWRjMDAgMHhmZmZmZmZmZiAweDEzOGIzZiAweGZmZmZmZmZmIDB4NzUz
MDAgMHg3ZDAwMCAweDVkYzAwIDB4ZmZmZmZmZmYgMHgxMzNmNmYgMHhlMTAwMCAweDc1MzAwIDB4
N2QwMDAgMHg1ZGMwMCAweGZmZmZmZmZmIDB4MTJmMzlmIDB4ZGQzYTUgMHg3NTMwMCAweDdkMDAw
IDB4NWRjMDAgMHhmZmZmZmZmZiAweDEyYTdkMCAweGQ5NzRhIDB4NzUzMDAgMHg3ZDAwMCAweDVk
YzAwIDB4ZmZmZmZmZmYgMHgxMjVjMDAgMHhkNWFlZiAweDc1MzAwIDB4N2QwMDAgMHg1ZGMwMCAw
eGZmZmZmZmZmIDB4MTIxMDMwIDB4ZDFlOTQgMHg3NTMwMCAweDdkMDAwIDB4NWRjMDAgMHhmZmZm
ZmZmZiAweDExYzQ2MSAweGNlMjM5IDB4NzUzMDAgMHg3ZDAwMCAweDVkYzAwIDB4ZmZmZmZmZmYg
MHgxMTc4OTEgMHhjYTVkZiAweDc1MzAwIDB4N2QwMDAgMHg1ZGMwMCAweGZmZmZmZmZmIDB4MTEy
Y2MxIDB4YzY5ODQgMHg3NTMwMCAweDdkMDAwIDB4NWRjMDAgMHhmZmZmZmZmZiAweDEwZTBmMiAw
eGMyZDI5IDB4NzUzMDAgMHg3ZDAwMCAweDVkYzAwIDB4ZmZmZmZmZmYgMHgxMDk1MjIgMHhiZjBj
ZSAweDc1MzAwIDB4N2QwMDAgMHg1ZGMwMCAweGZmZmZmZmZmIDB4MTA0OTUyIDB4YmI0NzMgMHg3
NTMwMCAweDdkMDAwIDB4NWRjMDAgMHhmZmZmZmZmZiAweGZmZDgyIDB4Yjc4MTggMHg3NTMwMCAw
eDdkMDAwIDB4NWRjMDAgMHhmZmZmZmZmZiAweGZiMWIzIDB4YjNiYmQgMHg3NTMwMCAweDdkMDAw
IDB4NWRjMDAgMHhmZmZmZmZmZiAweGY2NWUzIDB4YWZmNjIgMHg3NTMwMCAweDdkMDAwIDB4NWRj
MDAgMHhmZmZmZmZmZiAweGYxYTEzIDB4YWMzMDcgMHg3NTMwMCAweDdkMDAwIDB4NWRjMDAgMHhm
ZmZmZmZmZiAweGVjZTQ0IDB4YTg2YWMgMHg3NTMwMCAweDdkMDAwIDB4NWRjMDAgMHhmZmZmZmZm
ZiAweGU4Mjc0IDB4YTRhNTEgMHg3NTMwMCAweDdkMDAwIDB4NWRjMDAgMHhmZmZmZmZmZiAweGUz
NmE0IDB4YTBkZjcgMHg3NTMwMCAweDdkMDAwIDB4NWRjMDAgMHhmZmZmZmZmZiAweGRlYWQ1IDB4
OWQxOWMgMHg3NTMwMCAweDdkMDAwIDB4NWRjMDAgMHhmZmZmZmZmZiAweGQ5ZjA1IDB4OTk1NDEg
MHg3NTMwMCAweDdkMDAwIDB4NWRjMDAgMHhmZmZmZmZmZiAweGQ1MzM1IDB4OTU4ZTYgMHg3NTMw
MCAweDdkMDAwIDB4NWRjMDAgMHhmZmZmZmZmZiAweGQwNzY2IDB4OTFjOGIgMHg3NTMwMCAweDdk
MDAwIDB4NWRjMDAgMHhmZmZmZmZmZiAweGNiYjk2IDB4OGUwMzAgMHg3NTMwMCAweDdkMDAwIDB4
NWRjMDAgMHhmZmZmZmZmZiAweGM2ZmM2IDB4OGEzZDUgMHg3NTMwMCAweDdkMDAwIDB4NWRjMDAg
MHhmZmZmZmZmZiAweGMyM2Y2IDB4ODY3N2EgMHg3NTMwMCAweDdkMDAwIDB4NWRjMDAgMHhmZmZm
ZmZmZiAweGJkODI3IDB4ODJiMWYgMHg3NTMwMCAweDc2YzAwIDB4NWRjMDAgMHhmZmZmZmZmZiAw
eGI4YzU3IDB4N2VlYzQgMHg3NTMwMCAweDc2YzAwIDB4NWRjMDAgMHhmZmZmZmZmZiAweGI0MDg3
IDB4N2IyNjkgMHg3NTMwMCAweDc2YzAwIDB4NWRjMDAgMHhmZmZmZmZmZiAweGFmNGI4IDB4Nzc2
MGYgMHg3NTMwMCAweDc2YzAwIDB4NWRjMDAgMHhmZmZmZmZmZiAweGFhOGU4IDB4NzM5YjQgMHg3
NTMwMCAweDc2YzAwIDB4NWRjMDAgMHhmZmZmZmZmZiAweGE1ZDE4IDB4NmZkNTkgMHg3NTMwMCAw
eDc2YzAwIDB4NWRjMDAgMHhmZmZmZmZmZiAweGExMTQ5IDB4NmMwZmUgMHg3NTMwMCAweDc2YzAw
IDB4NWRjMDAgMHhmZmZmZmZmZiAweDljNTc5IDB4Njg0YTMgMHg3NTMwMCAweDc2YzAwIDB4NWRj
MDAgMHhmZmZmZmZmZiAweDk3OWE5IDB4NjQ4NDggMHg3NTMwMCAweDc2YzAwIDB4NWRjMDAgMHhm
ZmZmZmZmZiAweDkyZGRhIDB4NjBiZWQgMHg3NTMwMCAweDczYTAwIDB4NWRjMDAgMHhmZmZmZmZm
ZiAweDhlMjBhIDB4NWNmOTIgMHg3NTMwMCAweDczYTAwIDB4NWRjMDAgMHhmZmZmZmZmZiAweDg5
NjNhIDB4NTkzMzcgMHg3NTMwMCAweDczYTAwIDB4NWRjMDAgMHhmZmZmZmZmZiAweDg0YTZhIDB4
NTU2ZGMgMHg3NTMwMCAweDczYTAwIDB4NWRjMDAgMHhmZmZmZmZmZiAweDdmZTliIDB4NTFhODEg
MHg3NTMwMCAweDczYTAwIDB4NWRjMDAgMHhmZmZmZmZmZiAweDdiMmNiIDB4NGRlMjcgMHg3NTMw
MCAweDczYTAwIDB4NWRjMDAgMHhmZmZmZmZmZiAweDc2NmZiIDB4NGExY2MgMHg3NTMwMCAweDcz
YTAwIDB4NWRjMDAgMHhmZmZmZmZmZiAweDcxYjJjIDB4NDY1NzEgMHg3NTMwMCAweDZhNDAwIDB4
NWRjMDAgMHhmZmZmZmZmZiAweDZjZjVjIDB4NDI5MTYgMHg3NTMwMCAweDZhNDAwIDB4NWRjMDAg
MHhmZmZmZmZmZiAweDY4MzhjIDB4M2VjYmIgMHg3NTMwMCAweDZhNDAwIDB4NWRjMDAgMHhmZmZm
ZmZmZiAweDYzN2JkIDB4M2IwNjAgMHg3NTMwMCAweDZhNDAwIDB4NWRjMDAgMHhmZmZmZmZmZiAw
eDVlYmVkIDB4Mzc0MDUgMHg3NTMwMCAweDZhNDAwIDB4NWRjMDAgMHhmZmZmZmZmZiAweDVhMDFk
IDB4MzM3YWEgMHg3NTMwMCAweDZhNDAwIDB4NWRjMDAgMHhmZmZmZmZmZiAweDU1NDRlIDB4MmZi
NGYgMHg3NTMwMCAweDYwZTAwIDB4NWRjMDAgMHhmZmZmZmZmZiAweDUwODdlIDB4MmJlZjQgMHg3
NTMwMCAweDYwZTAwIDB4NWRjMDAgMHhmZmZmZmZmZiAweDRiY2FlIDB4MjgyOTkgMHg3NTMwMCAw
eDYwZTAwIDB4NWRjMDAgMHhmZmZmZmZmZiAweDQ3MGRlIDB4MjQ2M2YgMHg3NTMwMCAweDYwZTAw
IDB4NWRjMDAgMHhmZmZmZmZmZiAweDQyNTBmIDB4MjA5ZTQgMHg3NTMwMCAweDYwZTAwIDB4NWRj
MDAgMHhmZmZmZmZmZiAweDNkOTNmIDB4MWNkODkgMHg3NTMwMCAweDYwZTAwIDB4NWRjMDAgMHhm
ZmZmZmZmZiAweDM4ZDZmIDB4MTkxMmUgMHg3NTMwMCAweDYwZTAwIDB4NWRjMDAgMHhmZmZmZmZm
ZiAweDM0MWEwIDB4MTU0ZDMgMHg3NTMwMCAweDYwZTAwIDB4NWRjMDAgMHhmZmZmZmZmZiAweDJm
NWQwIDB4MTE4NzggMHg3NTMwMCAweDYwZTAwIDB4NWRjMDAgMHhmZmZmZmZmZj47CgkJCWxpbnV4
LHBoYW5kbGUgPSA8MHgxNT47CgkJCXBoYW5kbGUgPSA8MHgxNT47CgkJfTsKCgkJZW1lcmdlbmN5
X2JhbGFuY2VkIHsKCQkJY2Rldi10eXBlID0gImVtZXJnZW5jeS1iYWxhbmNlZCI7CgkJCW51bV9z
dGF0ZXMgPSA8MHgxPjsKCQkJY29vbGluZy1taW4tc3RhdGUgPSA8MHgwPjsKCQkJY29vbGluZy1t
YXgtc3RhdGUgPSA8MHgxPjsKCQkJI2Nvb2xpbmctY2VsbHMgPSA8MHgyPjsKCQkJc3RhdHVzID0g
Im9rYXkiOwoJCQl0aHJvdHRsZV90YWJsZSA9IDwweDExMWVkMCAweDVmNzU4IDB4NDY1MDAgMHg2
NjhhMCAweDNkODYwIDB4NjBhZTA+OwoJCQlsaW51eCxwaGFuZGxlID0gPDB4MjA+OwoJCQlwaGFu
ZGxlID0gPDB4MjA+OwoJCX07Cgl9OwoKCWFnaWMtY29udHJvbGxlciB7CgkJc3RhdHVzID0gIm9r
YXkiOwoJfTsKCglhZG1hQDcwMmUyMDAwIHsKCQlzdGF0dXMgPSAib2theSI7Cgl9OwoKCWFodWIg
ewoJCXN0YXR1cyA9ICJkaXNhYmxlZCI7CgoJCWFkbWFpZkAweDcwMmQwMDAwIHsKCQkJc3RhdHVz
ID0gImRpc2FibGVkIjsKCQl9OwoKCQlzZmNANzAyZDIwMDAgewoJCQlzdGF0dXMgPSAiZGlzYWJs
ZWQiOwoJCX07CgoJCXNmY0A3MDJkMjIwMCB7CgkJCXN0YXR1cyA9ICJkaXNhYmxlZCI7CgkJfTsK
CgkJc2ZjQDcwMmQyNDAwIHsKCQkJc3RhdHVzID0gImRpc2FibGVkIjsKCQl9OwoKCQlzZmNANzAy
ZDI2MDAgewoJCQlzdGF0dXMgPSAiZGlzYWJsZWQiOwoJCX07CgoJCXNwa3Byb3RANzAyZDhjMDAg
ewoJCQlzdGF0dXMgPSAiZGlzYWJsZWQiOwoJCX07CgoJCWFtaXhlckA3MDJkYmIwMCB7CgkJCXN0
YXR1cyA9ICJkaXNhYmxlZCI7CgkJfTsKCgkJaTJzQDcwMmQxMDAwIHsKCQkJc3RhdHVzID0gImRp
c2FibGVkIjsKCQl9OwoKCQlpMnNANzAyZDExMDAgewoJCQlzdGF0dXMgPSAiZGlzYWJsZWQiOwoJ
CX07CgoJCWkyc0A3MDJkMTIwMCB7CgkJCXN0YXR1cyA9ICJkaXNhYmxlZCI7CgkJfTsKCgkJaTJz
QDcwMmQxMzAwIHsKCQkJc3RhdHVzID0gImRpc2FibGVkIjsKCQl9OwoKCQlpMnNANzAyZDE0MDAg
ewoJCQlzdGF0dXMgPSAiZGlzYWJsZWQiOwoJCX07CgoJCWFteEA3MDJkMzAwMCB7CgkJCXN0YXR1
cyA9ICJkaXNhYmxlZCI7CgkJfTsKCgkJYW14QDcwMmQzMTAwIHsKCQkJc3RhdHVzID0gImRpc2Fi
bGVkIjsKCQl9OwoKCQlhZHhANzAyZDM4MDAgewoJCQlzdGF0dXMgPSAiZGlzYWJsZWQiOwoJCX07
CgoJCWFkeEA3MDJkMzkwMCB7CgkJCXN0YXR1cyA9ICJkaXNhYmxlZCI7CgkJfTsKCgkJZG1pY0A3
MDJkNDAwMCB7CgkJCXN0YXR1cyA9ICJkaXNhYmxlZCI7CgkJfTsKCgkJZG1pY0A3MDJkNDEwMCB7
CgkJCXN0YXR1cyA9ICJkaXNhYmxlZCI7CgkJfTsKCgkJZG1pY0A3MDJkNDIwMCB7CgkJCXN0YXR1
cyA9ICJkaXNhYmxlZCI7CgkJfTsKCgkJYWZjQDcwMmQ3MDAwIHsKCQkJc3RhdHVzID0gImRpc2Fi
bGVkIjsKCQl9OwoKCQlhZmNANzAyZDcxMDAgewoJCQlzdGF0dXMgPSAiZGlzYWJsZWQiOwoJCX07
CgoJCWFmY0A3MDJkNzIwMCB7CgkJCXN0YXR1cyA9ICJkaXNhYmxlZCI7CgkJfTsKCgkJYWZjQDcw
MmQ3MzAwIHsKCQkJc3RhdHVzID0gImRpc2FibGVkIjsKCQl9OwoKCQlhZmNANzAyZDc0MDAgewoJ
CQlzdGF0dXMgPSAiZGlzYWJsZWQiOwoJCX07CgoJCWFmY0A3MDJkNzUwMCB7CgkJCXN0YXR1cyA9
ICJkaXNhYmxlZCI7CgkJfTsKCgkJbXZjQDcwMmRhMDAwIHsKCQkJc3RhdHVzID0gImRpc2FibGVk
IjsKCQl9OwoKCQltdmNANzAyZGEyMDAgewoJCQlzdGF0dXMgPSAiZGlzYWJsZWQiOwoJCX07CgoJ
CWlxY0A3MDJkZTAwMCB7CgkJCXN0YXR1cyA9ICJkaXNhYmxlZCI7CgkJfTsKCgkJaXFjQDcwMmRl
MjAwIHsKCQkJc3RhdHVzID0gImRpc2FibGVkIjsKCQl9OwoKCQlvcGVANzAyZDgwMDAgewoJCQlz
dGF0dXMgPSAiZGlzYWJsZWQiOwoJCX07CgoJCW9wZUA3MDJkODQwMCB7CgkJCXN0YXR1cyA9ICJk
aXNhYmxlZCI7CgkJfTsKCX07CgoJYWRzcF9hdWRpbyB7CgkJc3RhdHVzID0gImRpc2FibGVkIjsK
CX07CgoJc2F0YUA3MDAyMDAwMCB7CgkJc3RhdHVzID0gImRpc2FibGVkIjsKCQlodmRkX3NhdGEt
c3VwcGx5ID0gPDB4MzY+OwoJCWh2ZGRfcGV4X3BsbF9lLXN1cHBseSA9IDwweDM2PjsKCQlsMF9o
dmRkaW9fc2F0YS1zdXBwbHkgPSA8MHgzNj47CgkJbDBfZHZkZGlvX3NhdGEtc3VwcGx5ID0gPDB4
NDA+OwoJCWR2ZGRfc2F0YV9wbGwtc3VwcGx5ID0gPDB4NDA+OwoKCQlwcm9kLXNldHRpbmdzIHsK
CQkJI3Byb2QtY2VsbHMgPSA8MHg0PjsKCgkJCXByb2QgewoJCQkJcHJvZCA9IDwweDAgMHg2ODAg
MHgxIDB4MSAweDAgMHg2OTAgMHhmZmYgMHg3MTUgMHgwIDB4Njk0IDB4ZmYwZmYgMHhlMDFiIDB4
MCAweDZkMCAweGZmZmZmZmZmIDB4YWIwMDBmIDB4MCAweDE3MCAweGYwMDAgMHg3MDAwIDB4MiAw
eDk2MCAweDMwMDAwMDAgMHgxMDAwMDAwPjsKCQkJfTsKCQl9OwoJfTsKCgltb2RlbSB7CgkJY29t
cGF0aWJsZSA9ICJudmlkaWEsaWNlcmEtaTUwMCI7CgkJc3RhdHVzID0gImRpc2FibGVkIjsKCQlu
dmlkaWEsYm9vdC1ncGlvID0gPDB4NTYgMHg1NiAweDE+OwoJCW52aWRpYSxtZG0tcG93ZXItcmVw
b3J0LWdwaW8gPSA8MHg1NiAweDU5IDB4MT47CgkJbnZpZGlhLHJlc2V0LWdwaW8gPSA8MHg1NiAw
eDU4IDB4MT47CgkJbnZpZGlhLG1kbS1lbi1ncGlvID0gPDB4NTYgMHg1NyAweDA+OwoJCW52aWRp
YSxudW0tdGVtcC1zZW5zb3JzID0gPDB4Mz47CgoJCW52aWRpYSxwaHktZWhjaS1oc2ljIHsKCQkJ
c3RhdHVzID0gImRpc2FibGVkIjsKCQl9OwoKCQludmlkaWEscGh5LXhoY2ktaHNpYyB7CgkJCXN0
YXR1cyA9ICJkaXNhYmxlZCI7CgkJfTsKCgkJbnZpZGlhLHBoeS14aGNpLXV0bWkgewoJCQlzdGF0
dXMgPSAiZGlzYWJsZWQiOwoJCX07Cgl9OwoKCXRydXN0eSB7CgkJY29tcGF0aWJsZSA9ICJhbmRy
b2lkLHRydXN0eS1zbWMtdjEiOwoJCXJhbmdlczsKCQkjYWRkcmVzcy1jZWxscyA9IDwweDI+OwoJ
CSNzaXplLWNlbGxzID0gPDB4Mj47CgkJc3RhdHVzID0gImRpc2FibGVkIjsKCgkJaXJxIHsKCQkJ
Y29tcGF0aWJsZSA9ICJhbmRyb2lkLHRydXN0eS1pcnEtdjEiOwoJCQlpbnRlcnJ1cHQtdGVtcGxh
dGVzID0gPDB4YTAgMHgwIDB4MzMgMHgxIDB4MSAweDAgMHgzMyAweDEgMHgwIDB4MD47CgkJCWlu
dGVycnVwdC1yYW5nZXMgPSA8MHgwIDB4ZiAweDAgMHgxMCAweDFmIDB4MSAweDIwIDB4ZGYgMHgy
PjsKCQl9OwoKCQlmaXEgewoJCQljb21wYXRpYmxlID0gImFuZHJvaWQsdHJ1c3R5LWZpcS12MSI7
CgkJfTsKCgkJdmlydGlvIHsKCQkJY29tcGF0aWJsZSA9ICJhbmRyb2lkLHRydXN0eS12aXJ0aW8t
djEiOwoJCX07CgoJCWxvZyB7CgkJCWNvbXBhdGlibGUgPSAiYW5kcm9pZCx0cnVzdHktbG9nLXYx
IjsKCQl9OwoJfTsKCglzbXAtY3VzdG9tLWlwaSB7CgkJY29tcGF0aWJsZSA9ICJhbmRyb2lkLEN1
c3RvbUlQSSI7CgkJI2ludGVycnVwdC1jZWxscyA9IDwweDE+OwoJCWludGVycnVwdC1jb250cm9s
bGVyOwoJCWxpbnV4LHBoYW5kbGUgPSA8MHhhMD47CgkJcGhhbmRsZSA9IDwweGEwPjsKCX07CgoJ
cHN5X2V4dGNvbl94dWRjIHsKCQljb21wYXRpYmxlID0gInBvd2VyLXN1cHBseS1leHRjb24iOwoJ
CWV4dGNvbi1jYWJsZXMgPSA8MHg5YSAweDEgMHg5YSAweDIgMHg5YSAweDMgMHg5YSAweDQgMHg5
YSAweDUgMHg5YSAweDYgMHg5YSAweDcgMHg5YSAweDggMHg5YSAweDk+OwoJCWV4dGNvbi1jYWJs
ZS1uYW1lcyA9ICJ1c2ItY2hhcmdlciIsICJ0YS1jaGFyZ2VyIiwgIm1heGltLWNoYXJnZXIiLCAi
cWMyLWNoYXJnZXIiLCAiZG93bnN0cmVhbS1jaGFyZ2VyIiwgInNsb3ctY2hhcmdlciIsICJhcHBs
ZS01MDBtYSIsICJhcHBsZS0xYSIsICJhcHBsZS0yYSI7CgkJc3RhdHVzID0gImRpc2FibGVkIjsK
CX07CgoJdGVncmEtc3VwcGx5LXRlc3RzIHsKCQljb21wYXRpYmxlID0gIm52aWRpYSx0ZWdyYS1z
dXBwbHktdGVzdHMiOwoJCXZkZC1jb3JlLXN1cHBseSA9IDwweGExPjsKCX07CgoJY2FtZXJhLXBj
bCB7CgoJCWRwZCB7CgkJCWNvbXBhdGlibGUgPSAibnZpZGlhLGNzaS1kcGQiOwoJCQkjYWRkcmVz
cy1jZWxscyA9IDwweDE+OwoJCQkjc2l6ZS1jZWxscyA9IDwweDA+OwoJCQludW0gPSA8MHg2PjsK
CgkJCWNzaWEgewoJCQkJcmVnID0gPDB4MCAweDAgMHgwIDB4MD47CgkJCX07CgoJCQljc2liIHsK
CQkJCXJlZyA9IDwweDAgMHgxIDB4MCAweDA+OwoJCQl9OwoKCQkJY3NpYyB7CgkJCQlyZWcgPSA8
MHgxIDB4YSAweDAgMHgwPjsKCQkJfTsKCgkJCWNzaWQgewoJCQkJcmVnID0gPDB4MSAweGIgMHgw
IDB4MD47CgkJCX07CgoJCQljc2llIHsKCQkJCXJlZyA9IDwweDEgMHhjIDB4MCAweDA+OwoJCQl9
OwoKCQkJY3NpZiB7CgkJCQlyZWcgPSA8MHgxIDB4ZCAweDAgMHgwPjsKCQkJfTsKCQl9OwoJfTsK
Cglyb2xsYmFjay1wcm90ZWN0aW9uIHsKCQlkZXZpY2UtbmFtZSA9ICJzZG1tYyI7CgkJZGV2aWNl
LW1ldGhvZCA9IDwweDEgMHgyPjsKCQlzdGF0dXMgPSAib2theSI7Cgl9OwoKCWV4dGVybmFsLW1l
bW9yeS1jb250cm9sbGVyQDcwMDFiMDAwIHsKCQkjY29vbGluZy1jZWxscyA9IDwweDI+OwoJCWNv
bXBhdGlibGUgPSAibnZpZGlhLHRlZ3JhMjEtZW1jIiwgIm52aWRpYSx0ZWdyYTIxMC1lbWMiOwoJ
CXJlZyA9IDwweDAgMHg3MDAxYjAwMCAweDAgMHgxMDAwIDB4MCAweDcwMDFlMDAwIDB4MCAweDEw
MDAgMHgwIDB4NzAwMWYwMDAgMHgwIDB4MTAwMD47CgkJY2xvY2tzID0gPDB4MjEgMHgzOSAweDIx
IDB4ZjEgMHgyMSAweGVkIDB4MjEgMHhmMyAweDIxIDB4ZTkgMHgyMSAweDEzMSAweDIxIDB4MTQw
IDB4MjEgMHgxNDEgMHgyMSAweDFlMD47CgkJY2xvY2stbmFtZXMgPSAiZW1jIiwgInBsbF9tIiwg
InBsbF9jIiwgInBsbF9wIiwgImNsa19tIiwgInBsbF9tYiIsICJwbGxfbWJfdWQiLCAicGxsX3Bf
dWQiLCAiZW1jX292ZXJyaWRlIjsKCQkjdGhlcm1hbC1zZW5zb3ItY2VsbHMgPSA8MHgwPjsKCQkj
YWRkcmVzcy1jZWxscyA9IDwweDE+OwoJCSNzaXplLWNlbGxzID0gPDB4MD47CgkJbnZpZGlhLHVz
ZS1yYW0tY29kZTsKCQlsaW51eCxwaGFuZGxlID0gPDB4MWQ+OwoJCXBoYW5kbGUgPSA8MHgxZD47
CgoJCWVtYy10YWJsZUAwIHsKCQkJbnZpZGlhLHJhbS1jb2RlID0gPDB4MD47CgoJCQllbWMtdGFi
bGVAMjA0MDAwIHsKCQkJCWNvbXBhdGlibGUgPSAibnZpZGlhLHRlZ3JhMjEtZW1jLXRhYmxlIjsK
CQkJCW52aWRpYSxyZXZpc2lvbiA9IDwweDc+OwoJCQkJbnZpZGlhLGR2ZnMtdmVyc2lvbiA9ICIx
M18yMDQwMDBfMTJfVjkuOC43X1YxLjYiOwoJCQkJY2xvY2stZnJlcXVlbmN5ID0gPDB4MzFjZTA+
OwoJCQkJbnZpZGlhLGVtYy1taW4tbXYgPSA8MHgzMjA+OwoJCQkJbnZpZGlhLGdrMjBhLW1pbi1t
diA9IDwweDQ0Yz47CgkJCQludmlkaWEsc291cmNlID0gInBsbHBfb3V0MCI7CgkJCQludmlkaWEs
c3JjLXNlbC1yZWcgPSA8MHg0MDE4ODAwMj47CgkJCQludmlkaWEsbmVlZHMtdHJhaW5pbmcgPSA8
MHgwPjsKCQkJCW52aWRpYSx0cmFpbmluZ19wYXR0ZXJuID0gPDB4MD47CgkJCQludmlkaWEsdHJh
aW5lZCA9IDwweDA+OwoJCQkJbnZpZGlhLHBlcmlvZGljX3RyYWluaW5nID0gPDB4MD47CgkJCQlu
dmlkaWEsdHJhaW5lZF9kcmFtX2Nsa3RyZWVfYzBkMHUwID0gPDB4MD47CgkJCQludmlkaWEsdHJh
aW5lZF9kcmFtX2Nsa3RyZWVfYzBkMHUxID0gPDB4MD47CgkJCQludmlkaWEsdHJhaW5lZF9kcmFt
X2Nsa3RyZWVfYzBkMXUwID0gPDB4MD47CgkJCQludmlkaWEsdHJhaW5lZF9kcmFtX2Nsa3RyZWVf
YzBkMXUxID0gPDB4MD47CgkJCQludmlkaWEsdHJhaW5lZF9kcmFtX2Nsa3RyZWVfYzFkMHUwID0g
PDB4MD47CgkJCQludmlkaWEsdHJhaW5lZF9kcmFtX2Nsa3RyZWVfYzFkMHUxID0gPDB4MD47CgkJ
CQludmlkaWEsdHJhaW5lZF9kcmFtX2Nsa3RyZWVfYzFkMXUwID0gPDB4MD47CgkJCQludmlkaWEs
dHJhaW5lZF9kcmFtX2Nsa3RyZWVfYzFkMXUxID0gPDB4MD47CgkJCQludmlkaWEsY3VycmVudF9k
cmFtX2Nsa3RyZWVfYzBkMHUwID0gPDB4MD47CgkJCQludmlkaWEsY3VycmVudF9kcmFtX2Nsa3Ry
ZWVfYzBkMHUxID0gPDB4MD47CgkJCQludmlkaWEsY3VycmVudF9kcmFtX2Nsa3RyZWVfYzBkMXUw
ID0gPDB4MD47CgkJCQludmlkaWEsY3VycmVudF9kcmFtX2Nsa3RyZWVfYzBkMXUxID0gPDB4MD47
CgkJCQludmlkaWEsY3VycmVudF9kcmFtX2Nsa3RyZWVfYzFkMHUwID0gPDB4MD47CgkJCQludmlk
aWEsY3VycmVudF9kcmFtX2Nsa3RyZWVfYzFkMHUxID0gPDB4MD47CgkJCQludmlkaWEsY3VycmVu
dF9kcmFtX2Nsa3RyZWVfYzFkMXUwID0gPDB4MD47CgkJCQludmlkaWEsY3VycmVudF9kcmFtX2Ns
a3RyZWVfYzFkMXUxID0gPDB4MD47CgkJCQludmlkaWEscnVuX2Nsb2NrcyA9IDwweGQ+OwoJCQkJ
bnZpZGlhLHRyZWVfbWFyZ2luID0gPDB4MT47CgkJCQludmlkaWEsYnVyc3QtcmVncy1udW0gPSA8
MHhkZD47CgkJCQludmlkaWEsYnVyc3QtcmVncy1wZXItY2gtbnVtID0gPDB4OD47CgkJCQludmlk
aWEsdHJpbS1yZWdzLW51bSA9IDwweDhhPjsKCQkJCW52aWRpYSx0cmltLXJlZ3MtcGVyLWNoLW51
bSA9IDwweGE+OwoJCQkJbnZpZGlhLGJ1cnN0LW1jLXJlZ3MtbnVtID0gPDB4MjE+OwoJCQkJbnZp
ZGlhLGxhLXNjYWxlLXJlZ3MtbnVtID0gPDB4MTg+OwoJCQkJbnZpZGlhLHZyZWYtcmVncy1udW0g
PSA8MHg0PjsKCQkJCW52aWRpYSx0cmFpbmluZy1tb2QtcmVncy1udW0gPSA8MHgxND47CgkJCQlu
dmlkaWEsZHJhbS10aW1pbmctcmVncy1udW0gPSA8MHg1PjsKCQkJCW52aWRpYSxtaW4tbXJzLXdh
aXQgPSA8MHgxNj47CgkJCQludmlkaWEsZW1jLW1ydyA9IDwweDg4MDEwMDA0PjsKCQkJCW52aWRp
YSxlbWMtbXJ3MiA9IDwweDg4MDIwMDAwPjsKCQkJCW52aWRpYSxlbWMtbXJ3MyA9IDwweDg4MGQw
MDAwPjsKCQkJCW52aWRpYSxlbWMtbXJ3NCA9IDwweGMwMDAwMDAwPjsKCQkJCW52aWRpYSxlbWMt
bXJ3OSA9IDwweDhjMGU3MjcyPjsKCQkJCW52aWRpYSxlbWMtbXJzID0gPDB4MD47CgkJCQludmlk
aWEsZW1jLWVtcnMgPSA8MHgwPjsKCQkJCW52aWRpYSxlbWMtZW1yczIgPSA8MHgwPjsKCQkJCW52
aWRpYSxlbWMtYXV0by1jYWwtY29uZmlnID0gPDB4YTAxYTUxZDg+OwoJCQkJbnZpZGlhLGVtYy1h
dXRvLWNhbC1jb25maWcyID0gPDB4NTUwMDAwMD47CgkJCQludmlkaWEsZW1jLWF1dG8tY2FsLWNv
bmZpZzMgPSA8MHg3NzAwMDA+OwoJCQkJbnZpZGlhLGVtYy1hdXRvLWNhbC1jb25maWc0ID0gPDB4
NzcwMDAwPjsKCQkJCW52aWRpYSxlbWMtYXV0by1jYWwtY29uZmlnNSA9IDwweDc3MDAwMD47CgkJ
CQludmlkaWEsZW1jLWF1dG8tY2FsLWNvbmZpZzYgPSA8MHg3NzAwMDA+OwoJCQkJbnZpZGlhLGVt
Yy1hdXRvLWNhbC1jb25maWc3ID0gPDB4NzcwMDAwPjsKCQkJCW52aWRpYSxlbWMtYXV0by1jYWwt
Y29uZmlnOCA9IDwweDc3MDAwMD47CgkJCQludmlkaWEsZW1jLWNmZy0yID0gPDB4MTEwODA1PjsK
CQkJCW52aWRpYSxlbWMtc2VsLWRwZC1jdHJsID0gPDB4NDAwMDg+OwoJCQkJbnZpZGlhLGVtYy1m
ZHBkLWN0cmwtY21kLW5vLXJhbXAgPSA8MHgxPjsKCQkJCW52aWRpYSxkbGwtY2xrLXNyYyA9IDww
eDQwMTg4MDAyPjsKCQkJCW52aWRpYSxjbGstb3V0LWVuYi14LTAtY2xrLWVuYi1lbWMtZGxsID0g
PDB4MT47CgkJCQludmlkaWEsZW1jLWNsb2NrLWxhdGVuY3ktY2hhbmdlID0gPDB4ZDVjPjsKCQkJ
CW52aWRpYSxwdGZ2ID0gPDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHhhIDB4YSAw
eGEgMHgxPjsKCQkJCW52aWRpYSxlbWMtcmVnaXN0ZXJzID0gPDB4ZCAweDNhIDB4MWQgMHgwIDB4
MCAweDkgMHg0IDB4YiAweGQgMHg4IDB4YiAweDAgMHg0IDB4MjAgMHg2IDB4NiAweDYgMHgzIDB4
MCAweDYgMHg0IDB4MiAweDAgMHg0IDB4OCAweGQgMHg2IDB4NSAweDAgMHgwIDB4MyAweDg4MDM3
MTcxIDB4YyAweDEgMHhhIDB4MTAwMDAgMHgxMiAweDE0IDB4MTYgMHgxMiAweDE0IDB4MzA0IDB4
MCAweGMxIDB4OCAweDggMHgzIDB4MyAweDMgMHgxNCAweDUgMHgyIDB4ZCAweDNiIDB4M2IgMHg1
IDB4NSAweDQgMHg5IDB4NSAweDQgMHg5IDB4YzgwMzcxNzEgMHgzMWMgMHgwIDB4OTE2MGEwMGQg
MHgzYmJmIDB4MmMwMGEwIDB4ODAwMCAweGJlIDB4ZmZmMGZmZiAweGZmZjBmZmYgMHgwIDB4MCAw
eDAgMHgwIDB4ODgwYjAwMDAgMHhlMDAxNyAweDFjMDAxNCAweDQ1MDAzMSAweDNmMDAyYiAweDNk
MDAyOCAweDNkMDAzMSAweGIgMHgxMDAwMDQgMHg0NTAwMzEgMHgzZjAwMmIgMHgzZDAwMjggMHgz
ZDAwMzEgMHhiIDB4MTAwMDA0IDB4MTcwMDE3IDB4ZTAwMGUgMHgxNDAwMTQgMHgxYzAwMWMgMHgx
NyAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgw
IDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAg
MHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHg4MDIwMjIx
ZiAweDIyMGY0MGYgMHgxMiAweDY0MDAwIDB4OTAwY2MgMHhjYzAwMTYgMHgzMzAwMGEgMHhjMWUw
MDMwMyAweDFmMTM0MTJmIDB4MTAwMTQgMHg4MDQgMHg1NTAgMHhmMzIwMDAwMCAweGZmZjBmZmYg
MHg3MTMgMHhhIDB4MCAweDAgMHgxYiAweDFiIDB4MjAwMDAgMHg1MDAzNyAweDAgMHgxMCAweDMw
MDAgMHhhMDAwMDAwIDB4MjAwMDExMSAweDggMHgzMDgwOCAweDE1YzAwIDB4MTAxMDEwIDB4MTYw
MCAweDAgMHgwIDB4MCAweDM0IDB4NDAgMHgxODAwMDgwMCAweDgwMDA4MDAgMHg4MDAwODAwIDB4
ODAwMDgwMCAweDQwMDA4MCAweDg4MDEwMDQgMHgyMCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAw
eGVmZmZlZmZmIDB4YzBjMGMwYzAgMHhjMGMwYzBjMCAweGRjZGNkY2RjIDB4YTBhMGEwYSAweGEw
YTBhMGEgMHhhMGEwYTBhIDB4MTE4NjAzMyAweDAgMHgwIDB4MTQgMHhhIDB4MTYgMHg4ODE2MTQx
NCAweDEyIDB4MTAwMDAgMHg5MDgwIDB4NzA3MDQwNCAweDQwMDY1IDB4NTEzODAxZiAweDFmMTAx
MTAwIDB4MTQgMHgxMDcyNDAgMHgxMTI0MDAwIDB4MTEyNWI2YSAweGYwODEwMDAgMHgxMDU4MDAg
MHgxMTEwZmMwMCAweGYwODEzMDAgMHgxMDU4MDAgMHgxMTE0ZmMwMCAweDcwMDAzMDAgMHgxMDcy
NDAgMHg1NTU1M2M1YSAweGM4MTYxNDE0PjsKCQkJCW52aWRpYSxlbWMtYnVyc3QtcmVncy1wZXIt
Y2ggPSA8MHg4ODBjNzI3MiAweDg4MGM3MjcyIDB4YzgwYzcyNzIgMHhjODBjNzI3MiAweDhjMGU3
MjcyIDB4OGMwZTcyNzIgMHg0YzBlNzI3MiAweDRjMGU3MjcyPjsKCQkJCW52aWRpYSxlbWMtc2hh
ZG93LXJlZ3MtY2EtdHJhaW4gPSA8MHhkIDB4M2EgMHgxZCAweDAgMHgwIDB4OSAweDQgMHhiIDB4
ZCAweDggMHhiIDB4MCAweDQgMHgyMCAweDYgMHg2IDB4NiAweDMgMHgwIDB4NiAweDQgMHgyIDB4
MCAweDQgMHg4IDB4ZCAweDYgMHg1IDB4MCAweDAgMHgzIDB4ODgwMzcxNzEgMHhjIDB4MSAweGEg
MHgxMDAwMCAweDEyIDB4MTQgMHgxNiAweDEyIDB4MTQgMHgzMDQgMHgwIDB4YzEgMHg4IDB4OCAw
eDMgMHgzIDB4MyAweDE0IDB4NSAweDIgMHhkIDB4M2IgMHgzYiAweDUgMHg1IDB4NCAweDkgMHg1
IDB4NCAweDkgMHhjODAzNzE3MSAweDMxYyAweDAgMHg5OTYwYTAwZCAweDNiYmYgMHgyYzAwYTAg
MHg4MDAwIDB4NTUgMHhmZmYwZmZmIDB4ZmZmMGZmZiAweDAgMHgwIDB4MCAweDAgMHg4ODBiMDAw
MCAweGUwMDE3IDB4MWMwMDE0IDB4NDUwMDMxIDB4M2YwMDJiIDB4M2QwMDI4IDB4M2QwMDMxIDB4
YiAweDEwMDAwNCAweDQ1MDAzMSAweDNmMDAyYiAweDNkMDAyOCAweDNkMDAzMSAweGIgMHgxMDAw
MDQgMHgxNzAwMTcgMHhlMDAwZSAweDE0MDAxNCAweDFjMDAxYyAweDE3IDB4MCAweDAgMHgwIDB4
MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgw
IDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAg
MHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDgwMjAyMjFmIDB4MjIwZjQwZiAweDEy
IDB4NjQwMDAgMHg5MDBjYyAweGNjMDAxNiAweDMzMDAwYSAweGMxZTAwMzAzIDB4MWYxMzQxMmYg
MHgxMDAxNCAweDgwNCAweDU1MCAweGYzMjAwMDAwIDB4ZmZmMGZmZiAweDcxMyAweGEgMHgwIDB4
MCAweDFiIDB4MWIgMHgyMDAwMCAweDUwNTgwMzMgMHg1MDUwMDAwIDB4MCAweDMwMDAgMHhhMDAw
MDAwIDB4MjAwMDExMSAweDggMHgzMDgwOCAweDE1YzAwIDB4MTAxMDEwIDB4MTYwMCAweDAgMHgw
IDB4MCAweDM0IDB4NDAgMHgxODAwMDgwMCAweDgwMDA4MDAgMHg4MDAwODAwIDB4ODAwMDgwMCAw
eDQwMDA4MCAweDg4MDEwMDQgMHgyMCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweGVmZmZlZmZm
IDB4YzBjMGMwYzAgMHhjMGMwYzBjMCAweGRjZGNkY2RjIDB4YTBhMGEwYSAweGEwYTBhMGEgMHhh
MGEwYTBhIDB4MTE4NjAzMyAweDEgMHgxZiAweDE4IDB4OCAweDFhIDB4ODgxNjE0MTQgMHgxMCAw
eDEwMDAwIDB4OTA4MCAweDcwNzA0MDQgMHg0MDA2NSAweDUxMzgwMWYgMHgxZjEwMTEwMCAweDE0
IDB4MTA3MjQwIDB4MTEyNDAwMCAweDExMjViNmEgMHhmMDgxMDAwIDB4MTA1ODAwIDB4MTExMGZj
MDAgMHhmMDgxMzAwIDB4MTA1ODAwIDB4MTExNGZjMDAgMHg3MDAwMzAwIDB4MTA3MjQwIDB4NTU1
NTNjNWEgMHhjODE2MTQxND47CgkJCQludmlkaWEsZW1jLXNoYWRvdy1yZWdzLXF1c2UtdHJhaW4g
PSA8MHhkIDB4M2EgMHgxZCAweDAgMHgwIDB4OSAweDQgMHhhIDB4ZCAweDggMHhiIDB4MCAweDQg
MHgyMCAweDYgMHg2IDB4NiAweGMgMHgwIDB4NiAweDQgMHgyIDB4MCAweDQgMHg4IDB4ZCAweDMg
MHgyIDB4MTAwMDAwMDAgMHgwIDB4MyAweDg4MDM3MTcxIDB4YiAweDEgMHg4MDAwMDAwMCAweDQw
MDAwIDB4MTIgMHgxNCAweDE2IDB4MTIgMHgxNCAweDMwNCAweDAgMHhjMSAweDggMHg4IDB4MyAw
eDMgMHgzIDB4MTQgMHg1IDB4MiAweGQgMHgzYiAweDNiIDB4NSAweDUgMHg0IDB4OSAweDUgMHg0
IDB4OSAweGM4MDM3MTcxIDB4MzFjIDB4MCAweDkxNjA0MDBkIDB4M2JiZiAweDJjMDBhMCAweDgw
MDAgMHhiZSAweGZmZjBmZmYgMHhmZmYwZmZmIDB4MCAweDAgMHgwIDB4MCAweDg4MGIwMDAwIDB4
ZTAwMTcgMHgxYzAwMTQgMHg0NTAwMzEgMHgzZjAwMmIgMHgzZDAwMjggMHgzZDAwMzEgMHhiIDB4
MTAwMDA0IDB4NDUwMDMxIDB4M2YwMDJiIDB4M2QwMDI4IDB4M2QwMDMxIDB4YiAweDEwMDAwNCAw
eDE3MDAxNyAweGUwMDBlIDB4MTQwMDE0IDB4MWMwMDFjIDB4MTcgMHgwIDB4MCAweDAgMHgwIDB4
MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgw
IDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAg
MHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4ODAyMDIyMWYgMHgyMjBmNDBmIDB4MTIgMHg2
NDAwMCAweDkwMGNjIDB4Y2MwMDE2IDB4MzMwMDBhIDB4YzFlMDAzMDMgMHgxZjEzNDEyZiAweDEw
MDE0IDB4ODA0IDB4NTUwIDB4ZjMyMDAwMDAgMHhmZmYwZmZmIDB4NzEzIDB4YSAweDAgMHgwIDB4
MWIgMHgxYiAweDMwMDIwMDAwIDB4NTgwMzcgMHgwIDB4MTAgMHgzMDAwIDB4YTAwMDAwMCAweDIw
MDAxMTEgMHg4IDB4MzA4MDggMHgxNWMwMCAweDEwMTAxMCAweDE2MDAgMHgwIDB4MCAweDAgMHgz
NCAweDQwIDB4MTgwMDA4MDAgMHg4MDAwODAwIDB4ODAwMDgwMCAweDgwMDA4MDAgMHg0MDAwODAg
MHg4ODAxMDA0IDB4MjAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHhlZmZmZWZmZiAweGMwYzBj
MGMwIDB4YzBjMGMwYzAgMHhkY2RjZGNkYyAweGEwYTBhMGEgMHhhMGEwYTBhIDB4YTBhMGEwYSAw
eDExODYwMzMgMHgxIDB4MWYgMHgxZSAweDE0IDB4MjAgMHg4ODE2MTQxNCAweDFjIDB4NDAwMDAg
MHg5MDgwIDB4NzA3MDQwNCAweDQwMDY1IDB4NTEzODAxZiAweDFmMTAxMTAwIDB4MTQgMHgxMDcy
NDAgMHgxMTI0MDAwIDB4MTEyNWI2YSAweGYwODEwMDAgMHgxMDU4MDAgMHgxMTEwZmMwMCAweGYw
ODEzMDAgMHgxMDU4MDAgMHgxMTE0ZmMwMCAweDcwMDAzMDAgMHgxMDcyNDAgMHg1NTU1M2M1YSAw
eGM4MTYxNDE0PjsKCQkJCW52aWRpYSxlbWMtc2hhZG93LXJlZ3MtcmR3ci10cmFpbiA9IDwweGQg
MHgzYSAweDFkIDB4MCAweDAgMHg5IDB4NCAweGUgMHhkIDB4OCAweGIgMHgwIDB4NCAweDIwIDB4
NiAweDYgMHg2IDB4MTIgMHgxMyAweDYgMHg0IDB4MiAweDAgMHg0IDB4OCAweGQgMHg2IDB4NSAw
eDEwMDAwMDAwIDB4MzAwMDAwMDIgMHgzIDB4ODgwMzcxNzEgMHhjIDB4MSAweGEgMHg0MDAwMCAw
eDEyIDB4MTQgMHgxNiAweDEyIDB4MTQgMHgzMDQgMHgwIDB4YzEgMHg4IDB4OCAweDMgMHgzIDB4
MyAweDE0IDB4NSAweDIgMHhkIDB4M2IgMHgzYiAweDUgMHg1IDB4NCAweDkgMHg1IDB4NCAweDkg
MHhjODAzNzE3MSAweDMxYyAweDAgMHg5MTYwYTAwZCAweDNiYmYgMHgyYzAwYTAgMHg4MDAwIDB4
YmUgMHhmZmYwZmZmIDB4ZmZmMGZmZiAweDAgMHgwIDB4MCAweDAgMHg4ODBiMDAwMCAweGUwMDE3
IDB4MWMwMDE0IDB4NDUwMDMxIDB4M2YwMDJiIDB4M2QwMDI4IDB4M2QwMDMxIDB4YiAweDEwMDAw
NCAweDQ1MDAzMSAweDNmMDAyYiAweDNkMDAyOCAweDNkMDAzMSAweGIgMHgxMDAwMDQgMHgxNzAw
MTcgMHhlMDAwZSAweDE0MDAxNCAweDFjMDAxYyAweDE3IDB4MCAweDAgMHgwIDB4MCAweDAgMHgw
IDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAg
MHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAw
eDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDgwMjAyMjFmIDB4MjIwZjQwZiAweDEyIDB4NjQwMDAg
MHg5MDBjYyAweGNjMDAxNiAweDMzMDAwYSAweGMxZTAwMzAzIDB4MWYxMzQxMmYgMHgxMDAxNCAw
eDgwNCAweDU1MCAweGYzMjAwMDAwIDB4ZmZmMGZmZiAweDcxMyAweGEgMHgwIDB4MCAweDFiIDB4
MWIgMHgyMDAwMCAweDUwMDM3IDB4MCAweDEwIDB4MzAwMCAweGEwMDAwMDAgMHgyMDAwMTExIDB4
OCAweDMwODA4IDB4MTVjMDAgMHgxMDEwMTAgMHgxNjAwIDB4MCAweDAgMHgwIDB4MzQgMHg0MCAw
eDE4MDAwODAwIDB4ODAwMDgwMCAweDgwMDA4MDAgMHg4MDAwODAwIDB4NDAwMDgwIDB4ODgwMTAw
NCAweDIwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4ZWZmZmVmZmYgMHhjMGMwYzBjMCAweGMw
YzBjMGMwIDB4ZGNkY2RjZGMgMHhhMGEwYTBhIDB4YTBhMGEwYSAweGEwYTBhMGEgMHgxMTg2MDMz
IDB4MSAweDAgMHgxNCAweGEgMHgxNiAweDg4MTYxNDE0IDB4MTIgMHg0MDAwMCAweGIwODAgMHg3
MDcwNDA0IDB4NDAwNjUgMHg1MTM4MDFmIDB4MWYxMDExMDAgMHgxNCAweDEwNzI0MCAweDExMjQw
MDAgMHgxMTI1YjZhIDB4ZjA4MTAwMCAweDEwNTgwMCAweDExMTBmYzAwIDB4ZjA4MTMwMCAweDEw
NTgwMCAweDExMTRmYzAwIDB4NzAwMDMwMCAweDEwNzI0MCAweDU1NTUzYzVhIDB4YzgxNjE0MTQ+
OwoJCQkJbnZpZGlhLGVtYy10cmltLXJlZ3MgPSA8MHgyODAwMjggMHgyODAwMjggMHgyODAwMjgg
MHgyODAwMjggMHgyODAwMjggMHgyODAwMjggMHgyODAwMjggMHgyODAwMjggMHgwIDB4MCAweDAg
MHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAw
eDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4
MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgw
IDB4MCAweDAgMHgxMTExMTExMSAweDExMTExMTExIDB4MjgyODI4MjggMHgyODI4MjgyOCAweDAg
MHgwIDB4MCAweDAgMHhlMDAxNyAweDFjMDAxNCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAg
MHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAw
eDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4
MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgw
IDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAg
MHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MD47CgkJCQludmlkaWEsZW1jLXRyaW0tcmVn
cy1wZXItY2ggPSA8MHgwIDB4MCAweDI0OTI0OSAweDI0OTI0OSAweDI0OTI0OSAweDI0OTI0OSAw
eDAgMHgwIDB4MCAweDA+OwoJCQkJbnZpZGlhLGVtYy12cmVmLXJlZ3MgPSA8MHgwIDB4MCAweDAg
MHgwPjsKCQkJCW52aWRpYSxlbWMtZHJhbS10aW1pbmctcmVncyA9IDwweDEyIDB4MTA0IDB4MTE4
IDB4MTggMHg2PjsKCQkJCW52aWRpYSxlbWMtdHJhaW5pbmctbW9kLXJlZ3MgPSA8MHgwIDB4MCAw
eDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4
MCAweDAgMHgwIDB4MD47CgkJCQludmlkaWEsZW1jLXNhdmUtcmVzdG9yZS1tb2QtcmVncyA9IDww
eDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MD47CgkJCQludmlk
aWEsZW1jLWJ1cnN0LW1jLXJlZ3MgPSA8MHg4MDAwMDAxIDB4ODAwMDAwNGMgMHhhMTAyMCAweDgw
MDAxMDI4IDB4MSAweDAgMHgzIDB4MSAweDIgMHgxIDB4MiAweDUgMHgxIDB4MSAweDQgMHg4IDB4
NSAweDcgMHgyMDIwMDAwIDB4MzAyMDEgMHg3MmEzMDUwNCAweDcwMDAwZjBmIDB4MCAweDFmMDAw
MCAweDgwMDAxYSAweDgwMDAxYSAweDgwMDAxYSAweDgwMDAxYSAweDgwMDAxYSAweDgwMDAxYSAw
eDgwMDAxYSAweDgwMDAxYSAweDgwMDAxYT47CgkJCQludmlkaWEsZW1jLWxhLXNjYWxlLXJlZ3Mg
PSA8MHgxYiAweDgwMDAxYSAweDI0YyAweGZmMDBiMiAweGZmMDBkYSAweGZmMDA5ZCAweGZmMDBm
ZiAweGZmMDAwYyAweGZmMDBmZiAweGZmMDAwYyAweDdmMDA0OSAweGZmMDA4MCAweGZmMDAwNCAw
eDgwMGFkIDB4ZmYgMHhmZjAwMDQgMHhmZjAwYzYgMHhmZjAwYzYgMHhmZjAwNmQgMHhmZjAwZmYg
MHhmZjAwZTIgMHhmZiAweDgwIDB4ZmYwMGZmPjsKCQkJfTsKCgkJCWVtYy10YWJsZUAxNjAwMDAw
IHsKCQkJCWNvbXBhdGlibGUgPSAibnZpZGlhLHRlZ3JhMjEtZW1jLXRhYmxlIjsKCQkJCW52aWRp
YSxyZXZpc2lvbiA9IDwweDc+OwoJCQkJbnZpZGlhLGR2ZnMtdmVyc2lvbiA9ICIxM18xNjAwMDAw
XzEyX1Y5LjguN19WMS42IjsKCQkJCWNsb2NrLWZyZXF1ZW5jeSA9IDwweDE4NmEwMD47CgkJCQlu
dmlkaWEsZW1jLW1pbi1tdiA9IDwweDM3Nz47CgkJCQludmlkaWEsZ2syMGEtbWluLW12ID0gPDB4
NDRjPjsKCQkJCW52aWRpYSxzb3VyY2UgPSAicGxsbV91ZCI7CgkJCQludmlkaWEsc3JjLXNlbC1y
ZWcgPSA8MHg4MDE4ODAwMD47CgkJCQludmlkaWEsbmVlZHMtdHJhaW5pbmcgPSA8MHgyZjA+OwoJ
CQkJbnZpZGlhLHRyYWluaW5nX3BhdHRlcm4gPSA8MHgwPjsKCQkJCW52aWRpYSx0cmFpbmVkID0g
PDB4MD47CgkJCQludmlkaWEscGVyaW9kaWNfdHJhaW5pbmcgPSA8MHgxPjsKCQkJCW52aWRpYSx0
cmFpbmVkX2RyYW1fY2xrdHJlZV9jMGQwdTAgPSA8MHgwPjsKCQkJCW52aWRpYSx0cmFpbmVkX2Ry
YW1fY2xrdHJlZV9jMGQwdTEgPSA8MHgwPjsKCQkJCW52aWRpYSx0cmFpbmVkX2RyYW1fY2xrdHJl
ZV9jMGQxdTAgPSA8MHgwPjsKCQkJCW52aWRpYSx0cmFpbmVkX2RyYW1fY2xrdHJlZV9jMGQxdTEg
PSA8MHgwPjsKCQkJCW52aWRpYSx0cmFpbmVkX2RyYW1fY2xrdHJlZV9jMWQwdTAgPSA8MHgwPjsK
CQkJCW52aWRpYSx0cmFpbmVkX2RyYW1fY2xrdHJlZV9jMWQwdTEgPSA8MHgwPjsKCQkJCW52aWRp
YSx0cmFpbmVkX2RyYW1fY2xrdHJlZV9jMWQxdTAgPSA8MHgwPjsKCQkJCW52aWRpYSx0cmFpbmVk
X2RyYW1fY2xrdHJlZV9jMWQxdTEgPSA8MHgwPjsKCQkJCW52aWRpYSxjdXJyZW50X2RyYW1fY2xr
dHJlZV9jMGQwdTAgPSA8MHgwPjsKCQkJCW52aWRpYSxjdXJyZW50X2RyYW1fY2xrdHJlZV9jMGQw
dTEgPSA8MHgwPjsKCQkJCW52aWRpYSxjdXJyZW50X2RyYW1fY2xrdHJlZV9jMGQxdTAgPSA8MHgw
PjsKCQkJCW52aWRpYSxjdXJyZW50X2RyYW1fY2xrdHJlZV9jMGQxdTEgPSA8MHgwPjsKCQkJCW52
aWRpYSxjdXJyZW50X2RyYW1fY2xrdHJlZV9jMWQwdTAgPSA8MHgwPjsKCQkJCW52aWRpYSxjdXJy
ZW50X2RyYW1fY2xrdHJlZV9jMWQwdTEgPSA8MHgwPjsKCQkJCW52aWRpYSxjdXJyZW50X2RyYW1f
Y2xrdHJlZV9jMWQxdTAgPSA8MHgwPjsKCQkJCW52aWRpYSxjdXJyZW50X2RyYW1fY2xrdHJlZV9j
MWQxdTEgPSA8MHgwPjsKCQkJCW52aWRpYSxydW5fY2xvY2tzID0gPDB4NDA+OwoJCQkJbnZpZGlh
LHRyZWVfbWFyZ2luID0gPDB4MT47CgkJCQludmlkaWEsYnVyc3QtcmVncy1udW0gPSA8MHhkZD47
CgkJCQludmlkaWEsYnVyc3QtcmVncy1wZXItY2gtbnVtID0gPDB4OD47CgkJCQludmlkaWEsdHJp
bS1yZWdzLW51bSA9IDwweDhhPjsKCQkJCW52aWRpYSx0cmltLXJlZ3MtcGVyLWNoLW51bSA9IDww
eGE+OwoJCQkJbnZpZGlhLGJ1cnN0LW1jLXJlZ3MtbnVtID0gPDB4MjE+OwoJCQkJbnZpZGlhLGxh
LXNjYWxlLXJlZ3MtbnVtID0gPDB4MTg+OwoJCQkJbnZpZGlhLHZyZWYtcmVncy1udW0gPSA8MHg0
PjsKCQkJCW52aWRpYSx0cmFpbmluZy1tb2QtcmVncy1udW0gPSA8MHgxND47CgkJCQludmlkaWEs
ZHJhbS10aW1pbmctcmVncy1udW0gPSA8MHg1PjsKCQkJCW52aWRpYSxtaW4tbXJzLXdhaXQgPSA8
MHgzMD47CgkJCQludmlkaWEsZW1jLW1ydyA9IDwweDg4MDEwMDU0PjsKCQkJCW52aWRpYSxlbWMt
bXJ3MiA9IDwweDg4MDIwMDJkPjsKCQkJCW52aWRpYSxlbWMtbXJ3MyA9IDwweDg4MGQwMDAwPjsK
CQkJCW52aWRpYSxlbWMtbXJ3NCA9IDwweGMwMDAwMDAwPjsKCQkJCW52aWRpYSxlbWMtbXJ3OSA9
IDwweDhjMGU0ODQ4PjsKCQkJCW52aWRpYSxlbWMtbXJzID0gPDB4MD47CgkJCQludmlkaWEsZW1j
LWVtcnMgPSA8MHgwPjsKCQkJCW52aWRpYSxlbWMtZW1yczIgPSA8MHgwPjsKCQkJCW52aWRpYSxl
bWMtYXV0by1jYWwtY29uZmlnID0gPDB4YTAxYTUxZDg+OwoJCQkJbnZpZGlhLGVtYy1hdXRvLWNh
bC1jb25maWcyID0gPDB4NTUwMDAwMD47CgkJCQludmlkaWEsZW1jLWF1dG8tY2FsLWNvbmZpZzMg
PSA8MHg3NzAwMDA+OwoJCQkJbnZpZGlhLGVtYy1hdXRvLWNhbC1jb25maWc0ID0gPDB4NzcwMDAw
PjsKCQkJCW52aWRpYSxlbWMtYXV0by1jYWwtY29uZmlnNSA9IDwweDc3MDAwMD47CgkJCQludmlk
aWEsZW1jLWF1dG8tY2FsLWNvbmZpZzYgPSA8MHg3NzAwMDA+OwoJCQkJbnZpZGlhLGVtYy1hdXRv
LWNhbC1jb25maWc3ID0gPDB4NzcwMDAwPjsKCQkJCW52aWRpYSxlbWMtYXV0by1jYWwtY29uZmln
OCA9IDwweDc3MDAwMD47CgkJCQludmlkaWEsZW1jLWNmZy0yID0gPDB4MTEwODM1PjsKCQkJCW52
aWRpYSxlbWMtc2VsLWRwZC1jdHJsID0gPDB4NDAwMDA+OwoJCQkJbnZpZGlhLGVtYy1mZHBkLWN0
cmwtY21kLW5vLXJhbXAgPSA8MHgxPjsKCQkJCW52aWRpYSxkbGwtY2xrLXNyYyA9IDwweDgwMTg4
MDAwPjsKCQkJCW52aWRpYSxjbGstb3V0LWVuYi14LTAtY2xrLWVuYi1lbWMtZGxsID0gPDB4MD47
CgkJCQludmlkaWEsZW1jLWNsb2NrLWxhdGVuY3ktY2hhbmdlID0gPDB4NDljPjsKCQkJCW52aWRp
YSxwdGZ2ID0gPDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHhhIDB4YSAweGEgMHgx
PjsKCQkJCW52aWRpYSxlbWMtcmVnaXN0ZXJzID0gPDB4NjAgMHgxYzAgMHhlMCAweDAgMHgwIDB4
NDQgMHgxZCAweDI5IDB4MjEgMHhjIDB4MmQgMHgwIDB4NCAweDIwIDB4MWQgMHgxZCAweDEwIDB4
MTcgMHgxNiAweDYgMHhlIDB4YyAweGEgMHhlIDB4OCAweGQgMHgyNCAweDggMHgxMDAwMDAxYyAw
eDEwMDAwMDAyIDB4MTQgMHg4ODAzZjFmMSAweDFjIDB4MWYgMHhkIDB4NjAwMGMgMHgzMyAweDNi
IDB4M2QgMHgzOSAweDNiIDB4MTgyMCAweDAgMHg2MDggMHgxMCAweDEwIDB4MyAweDMgMHgzIDB4
MzggMHhlIDB4MiAweDJlIDB4MWNjIDB4MWNjIDB4ZCAweDE4IDB4YyAweDQwIDB4MjIgMHg0IDB4
MTQgMHhjODAzZjFmMSAweDE4NjAgMHgwIDB4OTk2MGEwMGQgMHgzYmZmIDB4YzAwMDAxYmIgMHg4
MDAwIDB4NTUgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHg4ODBiNjY2NiAweDgwMDBlIDB4MTEw
MDBjIDB4MWMwMDFjIDB4MWMwMDFjIDB4MWMwMDFjIDB4MWMwMDFjIDB4NyAweDkwMDAyIDB4MWMw
MDFjIDB4MWMwMDFjIDB4MWMwMDFjIDB4MWMwMDFjIDB4NyAweDkwMDAyIDB4ZTAwMGUgMHg4MDAw
OCAweGMwMDBjIDB4MTEwMDExIDB4ZSAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4
MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgw
IDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAg
MHgwIDB4MCAweDAgMHg4MDIwMjIxZiAweDIyMGY0MGYgMHgxMiAweDY0MDAwIDB4MzEwNjQwIDB4
NjQwMDAzMCAweDE5MDAwMTcgMHhjMWUwMDMwYSAweDFmMTM2MTJmIDB4MTQgMHg4MGQgMHg1NTAg
MHhmMzIwMDAwMCAweDAgMHgzMDhjIDB4MmIgMHgwIDB4MCAweDFiIDB4MWIgMHgyMDAwMCAweDMz
IDB4MCAweDExIDB4MzAwMCAweDIwMDAwMDAgMHgyMDAwMTAxIDB4NyAweDMwODA4IDB4MTVjMDAg
MHgxMDIwMjAgMHgxZmZmMWZmZiAweDAgMHgwIDB4MCAweDM0IDB4NDAgMHgxODAwMDgwMCAweDgw
MDA4MDAgMHg4MDAwODAwIDB4ODAwMDgwMCAweDQwMDA4MCAweDg4MDEwMDQgMHgyMCAweDAgMHgw
IDB4MCAweDAgMHgwIDB4MCAweGVmZmYyMjEwIDB4MCAweDAgMHhkY2RjZGNkYyAweGEwYTBhMGEg
MHhhMGEwYTBhIDB4YTBhMGEwYSAweDExODYxOTAgMHgwIDB4MCAweDNiIDB4MmIgMHgzZCAweDg4
MTYxNDE0IDB4MzMgMHg2MDAwYyAweDkwODAgMHg3MDcwNDA0IDB4NDAzMjAgMHg1MTM4MDFmIDB4
MWYxMDExMDAgMHgxNCAweDEwMzIwMCAweDExMjQwMDAgMHgxMTI1YjZhIDB4ZjA4MTAwMCAweDEw
NTgwMCAweDExMTBmYzAwIDB4ZjA4NTMwMCAweDEwNTgwMCAweDExMTRmYzAwIDB4NzAwNDMwMCAw
eDEwMzIwMCAweDU1NTUzYzVhIDB4YzgxNjE0MTQ+OwoJCQkJbnZpZGlhLGVtYy1idXJzdC1yZWdz
LXBlci1jaCA9IDwweDg4MGM0ODQ4IDB4ODgwYzQ4NDggMHhjODBjNDg0OCAweGM4MGM0ODQ4IDB4
OGMwZTQ4NDggMHg4YzBlNDg0OCAweDRjMGU0ODQ4IDB4NGMwZTQ4NDg+OwoJCQkJbnZpZGlhLGVt
Yy1zaGFkb3ctcmVncy1jYS10cmFpbiA9IDwweDYwIDB4MWMwIDB4ZTAgMHgwIDB4MCAweDQ0IDB4
MWQgMHgyOSAweDIxIDB4YyAweDJkIDB4MCAweDQgMHgyMCAweDFkIDB4MWQgMHgxMCAweDE3IDB4
MTYgMHg2IDB4ZSAweGMgMHhhIDB4ZSAweDggMHhkIDB4MjQgMHg4IDB4MTAwMDAwMWMgMHgxMDAw
MDAwMiAweDE0IDB4ODgwM2YxZjEgMHgxYyAweDFmIDB4ZCAweDYwMDBjIDB4MzMgMHgzYiAweDNk
IDB4MzkgMHgzYiAweDE4MjAgMHgwIDB4NjA4IDB4MTAgMHgxMCAweDMgMHgzIDB4MyAweDM4IDB4
ZSAweDIgMHgyZSAweDFjYyAweDFjYyAweGQgMHgxOCAweGMgMHg0MCAweDIyIDB4NCAweDE0IDB4
YzgwM2YxZjEgMHgxODYwIDB4MCAweDk5NjBhMDBkIDB4M2JmZiAweGMwMDAwMWJiIDB4ODAwMCAw
eDU1IDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4ODgwYjY2NjYgMHg4MDAwZSAweDExMDAwYyAw
eDFjMDAxYyAweDFjMDAxYyAweDFjMDAxYyAweDFjMDAxYyAweDcgMHg5MDAwMiAweDFjMDAxYyAw
eDFjMDAxYyAweDFjMDAxYyAweDFjMDAxYyAweDcgMHg5MDAwMiAweGUwMDBlIDB4ODAwMDggMHhj
MDAwYyAweDExMDAxMSAweGUgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgw
IDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAg
MHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAw
eDAgMHgwIDB4ODAyMDIyMWYgMHgyMjBmNDBmIDB4MTIgMHg2NDAwMCAweDMxMDY0MCAweDY0MDAw
MzAgMHgxOTAwMDE3IDB4YzFlMDAzMGEgMHgxZjEzNjEyZiAweDE0IDB4ODBkIDB4NTUwIDB4ZjMy
MDAwMDAgMHgwIDB4MzA4YyAweDJiIDB4MCAweDAgMHgxYiAweDFiIDB4MjAwMDAgMHg4MDMzIDB4
MCAweDAgMHgzMDAwIDB4MjAwMDAwMCAweDIwMDAxMDEgMHg3IDB4MzA4MDggMHgxNWMwMCAweDEw
MjAyMCAweDFmZmYxZmZmIDB4MCAweDAgMHgwIDB4MzQgMHg0MCAweDE4MDAwODAwIDB4ODAwMDgw
MCAweDgwMDA4MDAgMHg4MDAwODAwIDB4NDAwMDgwIDB4ODgwMTAwNCAweDIwIDB4MCAweDAgMHgw
IDB4MCAweDAgMHgwIDB4ZWZmZjIyMTAgMHgwIDB4MCAweGRjZGNkY2RjIDB4YTBhMGEwYSAweGEw
YTBhMGEgMHhhMGEwYTBhIDB4MTE4NjE5MCAweDEgMHgxZiAweDQxIDB4MmIgMHg0MyAweDg4MTYx
NDE0IDB4MzMgMHg2MDAwYyAweDkwODAgMHg3MDcwNDA0IDB4NDAzMjAgMHg1MTM4MDFmIDB4MWYx
MDExMDAgMHgxNCAweDEwMzIwMCAweDExMjQwMDAgMHgxMTI1YjZhIDB4ZjA4MTAwMCAweDEwNTgw
MCAweDExMTBmYzAwIDB4ZjA4NTMwMCAweDEwNTgwMCAweDExMTRmYzAwIDB4NzAwNDMwMCAweDEw
MzIwMCAweDU1NTUzYzVhIDB4YzgxNjE0MTQ+OwoJCQkJbnZpZGlhLGVtYy1zaGFkb3ctcmVncy1x
dXNlLXRyYWluID0gPDB4NjAgMHgxYzAgMHhlMCAweDAgMHgwIDB4NDQgMHgxZCAweDI4IDB4MjEg
MHhjIDB4MmQgMHgwIDB4NCAweDIwIDB4MWQgMHgxZCAweDEwIDB4MTEgMHgxNiAweDYgMHhlIDB4
YyAweGEgMHhlIDB4OCAweGQgMHgyMSAweDIgMHgxMDAwMDAxYyAweDEwMDAwMDAyIDB4MTQgMHg4
ODAzZjFmMSAweDFiIDB4MSAweDgwMDAwMDAwIDB4NjAwMGMgMHgzMyAweDNiIDB4M2QgMHgzOSAw
eDNiIDB4MTgyMCAweDAgMHg2MDggMHgxMCAweDEwIDB4MyAweDMgMHgzIDB4MzggMHhlIDB4MiAw
eDJlIDB4MWNjIDB4MWNjIDB4ZCAweDE4IDB4YyAweDQwIDB4MjIgMHg0IDB4MTQgMHhjODAzZjFm
MSAweDE4NjAgMHgwIDB4OTk2MDQwMGQgMHgzYmZmIDB4YzAwMDAxYmIgMHg4MDAwIDB4NTUgMHgw
IDB4MCAweDAgMHgwIDB4MCAweDAgMHg4ODBiNjY2NiAweDgwMDBlIDB4MTEwMDBjIDB4MWMwMDFj
IDB4MWMwMDFjIDB4MWMwMDFjIDB4MWMwMDFjIDB4NyAweDkwMDAyIDB4MWMwMDFjIDB4MWMwMDFj
IDB4MWMwMDFjIDB4MWMwMDFjIDB4NyAweDkwMDAyIDB4ZTAwMGUgMHg4MDAwOCAweGMwMDBjIDB4
MTEwMDExIDB4ZSAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4
MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgw
IDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAg
MHg4MDIwMjIxZiAweDIyMGY0MGYgMHgxMiAweDY0MDAwIDB4MzEwNjQwIDB4NjQwMDAzMCAweDE5
MDAwMTcgMHhjMWUwMDMwYSAweDFmMTM2MTJmIDB4MTQgMHg4MGQgMHg1NTAgMHhmMzIwMDAwMCAw
eDAgMHgzMDhjIDB4MmIgMHgwIDB4MCAweDFiIDB4MWIgMHgzMDAyMDAwMCAweDgwMzMgMHgwIDB4
MTEgMHgzMDAwIDB4MjAwMDAwMCAweDIwMDAxMDEgMHg3IDB4MzA4MDggMHgxNWMwMCAweDEwMjAy
MCAweDFmZmYxZmZmIDB4MCAweDAgMHgwIDB4MzQgMHg0MCAweDE4MDAwODAwIDB4ODAwMDgwMCAw
eDgwMDA4MDAgMHg4MDAwODAwIDB4NDAwMDgwIDB4ODgwMTAwNCAweDIwIDB4MCAweDAgMHgwIDB4
MCAweDAgMHgwIDB4ZWZmZjIyMTAgMHgwIDB4MCAweGRjZGNkY2RjIDB4YTBhMGEwYSAweGEwYTBh
MGEgMHhhMGEwYTBhIDB4MTE4NjE5MCAweDEgMHgxZiAweDQ1IDB4MzUgMHg0NyAweDg4MTYxNDE0
IDB4M2QgMHg2MDAwYyAweDkwODAgMHg3MDcwNDA0IDB4NDAzMjAgMHg1MTM4MDFmIDB4MWYxMDEx
MDAgMHgxNCAweDEwMzIwMCAweDExMjQwMDAgMHgxMTI1YjZhIDB4ZjA4MTAwMCAweDEwNTgwMCAw
eDExMTBmYzAwIDB4ZjA4NTMwMCAweDEwNTgwMCAweDExMTRmYzAwIDB4NzAwNDMwMCAweDEwMzIw
MCAweDU1NTUzYzVhIDB4YzgxNjE0MTQ+OwoJCQkJbnZpZGlhLGVtYy1zaGFkb3ctcmVncy1yZHdy
LXRyYWluID0gPDB4NjAgMHgxYzAgMHhlMCAweDAgMHgwIDB4NDQgMHgxZCAweDI5IDB4MjEgMHhj
IDB4MmQgMHgwIDB4NCAweDIwIDB4MWQgMHgxZCAweDEwIDB4MTcgMHgxNiAweDYgMHhlIDB4YyAw
eGEgMHhlIDB4OCAweGQgMHgyNCAweDggMHgxMDAwMDAxYyAweDEwMDAwMDAyIDB4MTQgMHg4ODAz
ZjFmMSAweDFjIDB4MWYgMHhkIDB4NjAwMGMgMHgzMyAweDNiIDB4M2QgMHgzOSAweDNiIDB4MTgy
MCAweDAgMHg2MDggMHgxMCAweDEwIDB4MyAweDMgMHgzIDB4MzggMHhlIDB4MiAweDJlIDB4MWNj
IDB4MWNjIDB4ZCAweDE4IDB4YyAweDQwIDB4MjIgMHg0IDB4MTQgMHhjODAzZjFmMSAweDE4NjAg
MHgwIDB4OTk2MGEwMGQgMHgzYmZmIDB4YzAwMDAxYmIgMHg4MDAwIDB4NTUgMHgwIDB4MCAweDAg
MHgwIDB4MCAweDAgMHg4ODBiNjY2NiAweDgwMDBlIDB4MTEwMDBjIDB4MWMwMDFjIDB4MWMwMDFj
IDB4MWMwMDFjIDB4MWMwMDFjIDB4NyAweDkwMDAyIDB4MWMwMDFjIDB4MWMwMDFjIDB4MWMwMDFj
IDB4MWMwMDFjIDB4NyAweDkwMDAyIDB4ZTAwMGUgMHg4MDAwOCAweGMwMDBjIDB4MTEwMDExIDB4
ZSAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgw
IDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAg
MHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHg4MDIwMjIx
ZiAweDIyMGY0MGYgMHgxMiAweDY0MDAwIDB4MzEwNjQwIDB4NjQwMDAzMCAweDE5MDAwMTcgMHhj
MWUwMDMwYSAweDFmMTM2MTJmIDB4MTQgMHg4MGQgMHg1NTAgMHhmMzIwMDAwMCAweDAgMHgzMDhj
IDB4MmIgMHgwIDB4MCAweDFiIDB4MWIgMHgyMDAwMCAweDMzIDB4MCAweDExIDB4MzAwMCAweDIw
MDAwMDAgMHgyMDAwMTAxIDB4NyAweDMwODA4IDB4MTVjMDAgMHgxMDIwMjAgMHgxZmZmMWZmZiAw
eDAgMHgwIDB4MCAweDM0IDB4NDAgMHgxODAwMDgwMCAweDgwMDA4MDAgMHg4MDAwODAwIDB4ODAw
MDgwMCAweDQwMDA4MCAweDg4MDEwMDQgMHgyMCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweGVm
ZmYyMjEwIDB4MCAweDAgMHhkY2RjZGNkYyAweGEwYTBhMGEgMHhhMGEwYTBhIDB4YTBhMGEwYSAw
eDExODYxOTAgMHgxIDB4MCAweDNiIDB4MmIgMHgzZCAweDg4MTYxNDE0IDB4MzMgMHg2MDAwYyAw
eGIwODAgMHg3MDcwNDA0IDB4NDAzMjAgMHg1MTM4MDFmIDB4MWYxMDExMDAgMHgxNCAweDEwMzIw
MCAweDExMjQwMDAgMHgxMTI1YjZhIDB4ZjA4MTAwMCAweDEwNTgwMCAweDExMTBmYzAwIDB4ZjA4
NTMwMCAweDEwNTgwMCAweDExMTRmYzAwIDB4NzAwNDMwMCAweDEwMzIwMCAweDU1NTUzYzVhIDB4
YzgxNjE0MTQ+OwoJCQkJbnZpZGlhLGVtYy10cmltLXJlZ3MgPSA8MHgyMDAwMjAgMHgyMDAwMjAg
MHgyMDAwMjAgMHgyMDAwMjAgMHgyMDAwMjAgMHgyMDAwMjAgMHgyMDAwMjAgMHgyMDAwMjAgMHgw
IDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAg
MHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAw
eDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4
MCAweDAgMHgwIDB4MCAweDAgMHgxMTExMTExMSAweDExMTExMTExIDB4MTExMTExMTEgMHgxMTEx
MTExMSAweDJiMDAyMiAweDJiMDAyNiAweDI2MDAyNSAweDI2MDAyNiAweDgwMDBlIDB4MTEwMDBj
IDB4MmIwMDIyIDB4MmIwMDI2IDB4MjYwMDI1IDB4MjYwMDI2IDB4MCAweDAgMHgwIDB4MCAweDAg
MHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAw
eDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4
MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgw
IDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAg
MHgwIDB4MCAweDAgMHgwIDB4MCAweDA+OwoJCQkJbnZpZGlhLGVtYy10cmltLXJlZ3MtcGVyLWNo
ID0gPDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MD47CgkJCQludmlkaWEs
ZW1jLXZyZWYtcmVncyA9IDwweDAgMHgwIDB4MCAweDA+OwoJCQkJbnZpZGlhLGVtYy1kcmFtLXRp
bWluZy1yZWdzID0gPDB4MTIgMHgxMDQgMHgxMTggMHg3IDB4MjA+OwoJCQkJbnZpZGlhLGVtYy10
cmFpbmluZy1tb2QtcmVncyA9IDwweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAw
eDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwPjsKCQkJCW52aWRpYSxl
bWMtc2F2ZS1yZXN0b3JlLW1vZC1yZWdzID0gPDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAw
eDAgMHg0IDB4NCAweDQgMHg0PjsKCQkJCW52aWRpYSxlbWMtYnVyc3QtbWMtcmVncyA9IDwweGMg
MHg4MDAwMDA4MCAweGExMDIwIDB4ODAwMDEwMjggMHg2IDB4NyAweDE4IDB4ZiAweGYgMHgzIDB4
MyAweGQgMHgxIDB4MSAweGMgMHg4IDB4YSAweDM3IDB4NTA2MDAwMCAweGQwODBjIDB4NzI2YzI0
MTkgMHg3MDAwMGYwZiAweDAgMHgxZjAwMDAgMHg4MDAwMWEgMHg4MDAwMWEgMHg4MDAwMWEgMHg4
MDAwMWEgMHg4MDAwMWEgMHg4MDAwMWEgMHg4MDAwMWEgMHg4MDAwMWEgMHg4MDAwMWE+OwoJCQkJ
bnZpZGlhLGVtYy1sYS1zY2FsZS1yZWdzID0gPDB4ZDAgMHg4MDAwMWEgMHgxMjAzIDB4ODAwMDNk
IDB4ODAwMDM4IDB4ODAwMDQxIDB4ODAwMDkwIDB4ODAwMDA1IDB4ODAwMDkwIDB4ODAwMDA1IDB4
MzQwMDQ5IDB4ODAwMDgwIDB4ODAwMDA0IDB4ODAwMTYgMHg4MCAweDgwMDAwNCAweDgwMDAxOSAw
eDgwMDAxOSAweDgwMDAxOCAweDgwMDA5NSAweDgwMDAxZCAweDgwIDB4MmMgMHg4MDAwODA+OwoJ
CQl9OwoKCQkJZW1jLXRhYmxlLWRlcmF0ZWRAMjA0MDAwIHsKCQkJCWNvbXBhdGlibGUgPSAibnZp
ZGlhLHRlZ3JhMjEtZW1jLXRhYmxlLWRlcmF0ZWQiOwoJCQkJbnZpZGlhLHJldmlzaW9uID0gPDB4
Nz47CgkJCQludmlkaWEsZHZmcy12ZXJzaW9uID0gIjEzX2RlcmF0aW5nXzIwNDAwMF9WMTNfVjEz
IjsKCQkJCWNsb2NrLWZyZXF1ZW5jeSA9IDwweDMxY2UwPjsKCQkJCW52aWRpYSxlbWMtbWluLW12
ID0gPDB4MzIwPjsKCQkJCW52aWRpYSxnazIwYS1taW4tbXYgPSA8MHg0NGM+OwoJCQkJbnZpZGlh
LHNvdXJjZSA9ICJwbGxwX291dDAiOwoJCQkJbnZpZGlhLHNyYy1zZWwtcmVnID0gPDB4NDAxODgw
MDI+OwoJCQkJbnZpZGlhLG5lZWRzLXRyYWluaW5nID0gPDB4MD47CgkJCQludmlkaWEsdHJhaW5p
bmdfcGF0dGVybiA9IDwweDA+OwoJCQkJbnZpZGlhLHRyYWluZWQgPSA8MHgwPjsKCQkJCW52aWRp
YSxwZXJpb2RpY190cmFpbmluZyA9IDwweDA+OwoJCQkJbnZpZGlhLHRyYWluZWRfZHJhbV9jbGt0
cmVlX2MwZDB1MCA9IDwweDA+OwoJCQkJbnZpZGlhLHRyYWluZWRfZHJhbV9jbGt0cmVlX2MwZDB1
MSA9IDwweDA+OwoJCQkJbnZpZGlhLHRyYWluZWRfZHJhbV9jbGt0cmVlX2MwZDF1MCA9IDwweDA+
OwoJCQkJbnZpZGlhLHRyYWluZWRfZHJhbV9jbGt0cmVlX2MwZDF1MSA9IDwweDA+OwoJCQkJbnZp
ZGlhLHRyYWluZWRfZHJhbV9jbGt0cmVlX2MxZDB1MCA9IDwweDA+OwoJCQkJbnZpZGlhLHRyYWlu
ZWRfZHJhbV9jbGt0cmVlX2MxZDB1MSA9IDwweDA+OwoJCQkJbnZpZGlhLHRyYWluZWRfZHJhbV9j
bGt0cmVlX2MxZDF1MCA9IDwweDA+OwoJCQkJbnZpZGlhLHRyYWluZWRfZHJhbV9jbGt0cmVlX2Mx
ZDF1MSA9IDwweDA+OwoJCQkJbnZpZGlhLGN1cnJlbnRfZHJhbV9jbGt0cmVlX2MwZDB1MCA9IDww
eDA+OwoJCQkJbnZpZGlhLGN1cnJlbnRfZHJhbV9jbGt0cmVlX2MwZDB1MSA9IDwweDA+OwoJCQkJ
bnZpZGlhLGN1cnJlbnRfZHJhbV9jbGt0cmVlX2MwZDF1MCA9IDwweDA+OwoJCQkJbnZpZGlhLGN1
cnJlbnRfZHJhbV9jbGt0cmVlX2MwZDF1MSA9IDwweDA+OwoJCQkJbnZpZGlhLGN1cnJlbnRfZHJh
bV9jbGt0cmVlX2MxZDB1MCA9IDwweDA+OwoJCQkJbnZpZGlhLGN1cnJlbnRfZHJhbV9jbGt0cmVl
X2MxZDB1MSA9IDwweDA+OwoJCQkJbnZpZGlhLGN1cnJlbnRfZHJhbV9jbGt0cmVlX2MxZDF1MCA9
IDwweDA+OwoJCQkJbnZpZGlhLGN1cnJlbnRfZHJhbV9jbGt0cmVlX2MxZDF1MSA9IDwweDA+OwoJ
CQkJbnZpZGlhLHJ1bl9jbG9ja3MgPSA8MHhkPjsKCQkJCW52aWRpYSx0cmVlX21hcmdpbiA9IDww
eDE+OwoJCQkJbnZpZGlhLGJ1cnN0LXJlZ3MtbnVtID0gPDB4ZGQ+OwoJCQkJbnZpZGlhLGJ1cnN0
LXJlZ3MtcGVyLWNoLW51bSA9IDwweDg+OwoJCQkJbnZpZGlhLHRyaW0tcmVncy1udW0gPSA8MHg4
YT47CgkJCQludmlkaWEsdHJpbS1yZWdzLXBlci1jaC1udW0gPSA8MHhhPjsKCQkJCW52aWRpYSxi
dXJzdC1tYy1yZWdzLW51bSA9IDwweDIxPjsKCQkJCW52aWRpYSxsYS1zY2FsZS1yZWdzLW51bSA9
IDwweDE4PjsKCQkJCW52aWRpYSx2cmVmLXJlZ3MtbnVtID0gPDB4ND47CgkJCQludmlkaWEsdHJh
aW5pbmctbW9kLXJlZ3MtbnVtID0gPDB4MTQ+OwoJCQkJbnZpZGlhLGRyYW0tdGltaW5nLXJlZ3Mt
bnVtID0gPDB4NT47CgkJCQludmlkaWEsbWluLW1ycy13YWl0ID0gPDB4MTY+OwoJCQkJbnZpZGlh
LGVtYy1tcncgPSA8MHg4ODAxMDAwND47CgkJCQludmlkaWEsZW1jLW1ydzIgPSA8MHg4ODAyMDAw
MD47CgkJCQludmlkaWEsZW1jLW1ydzMgPSA8MHg4ODBkMDAwMD47CgkJCQludmlkaWEsZW1jLW1y
dzQgPSA8MHhjMDAwMDAwMD47CgkJCQludmlkaWEsZW1jLW1ydzkgPSA8MHg4YzBlNzI3Mj47CgkJ
CQludmlkaWEsZW1jLW1ycyA9IDwweDA+OwoJCQkJbnZpZGlhLGVtYy1lbXJzID0gPDB4MD47CgkJ
CQludmlkaWEsZW1jLWVtcnMyID0gPDB4MD47CgkJCQludmlkaWEsZW1jLWF1dG8tY2FsLWNvbmZp
ZyA9IDwweGEwMWE1MWQ4PjsKCQkJCW52aWRpYSxlbWMtYXV0by1jYWwtY29uZmlnMiA9IDwweDU1
MDAwMDA+OwoJCQkJbnZpZGlhLGVtYy1hdXRvLWNhbC1jb25maWczID0gPDB4NzcwMDAwPjsKCQkJ
CW52aWRpYSxlbWMtYXV0by1jYWwtY29uZmlnNCA9IDwweDc3MDAwMD47CgkJCQludmlkaWEsZW1j
LWF1dG8tY2FsLWNvbmZpZzUgPSA8MHg3NzAwMDA+OwoJCQkJbnZpZGlhLGVtYy1hdXRvLWNhbC1j
b25maWc2ID0gPDB4NzcwMDAwPjsKCQkJCW52aWRpYSxlbWMtYXV0by1jYWwtY29uZmlnNyA9IDww
eDc3MDAwMD47CgkJCQludmlkaWEsZW1jLWF1dG8tY2FsLWNvbmZpZzggPSA8MHg3NzAwMDA+OwoJ
CQkJbnZpZGlhLGVtYy1jZmctMiA9IDwweDExMDgwNT47CgkJCQludmlkaWEsZW1jLXNlbC1kcGQt
Y3RybCA9IDwweDQwMDA4PjsKCQkJCW52aWRpYSxlbWMtZmRwZC1jdHJsLWNtZC1uby1yYW1wID0g
PDB4MT47CgkJCQludmlkaWEsZGxsLWNsay1zcmMgPSA8MHg0MDE4ODAwMj47CgkJCQludmlkaWEs
Y2xrLW91dC1lbmIteC0wLWNsay1lbmItZW1jLWRsbCA9IDwweDE+OwoJCQkJbnZpZGlhLGVtYy1j
bG9jay1sYXRlbmN5LWNoYW5nZSA9IDwweGQ1Yz47CgkJCQludmlkaWEscHRmdiA9IDwweDAgMHgw
IDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4YSAweGEgMHhhIDB4MT47CgkJCQludmlkaWEsZW1j
LXJlZ2lzdGVycyA9IDwweGQgMHgzYSAweDFkIDB4MCAweDAgMHg5IDB4NSAweGIgMHhkIDB4OCAw
eGIgMHgwIDB4NCAweDIwIDB4NiAweDYgMHg2IDB4MyAweDAgMHg2IDB4NCAweDIgMHgwIDB4NCAw
eDggMHhkIDB4NSAweDYgMHgwIDB4MCAweDIgMHg4ODAzNzE3MSAweGQgMHgwIDB4YiAweDEwMDAw
IDB4MTIgMHgxNCAweDE2IDB4MTIgMHgxNCAweGMxIDB4MCAweDMwIDB4OCAweDggMHgzIDB4MyAw
eDMgMHgxNCAweDUgMHgyIDB4ZSAweDNiIDB4M2IgMHg1IDB4NSAweDQgMHg5IDB4NSAweDQgMHg5
IDB4YzgwMzcxNzEgMHgzMWMgMHgwIDB4OTE2MGEwMGQgMHgzYmJmIDB4MmMwMGEwIDB4ODAwMCAw
eGJlIDB4ZmZmMGZmZiAweGZmZjBmZmYgMHgwIDB4MCAweDAgMHgwIDB4ODgwYjAwMDAgMHhlMDAx
NyAweDFjMDAxNCAweDQ1MDAzMSAweDNmMDAyYiAweDNkMDAyOCAweDNkMDAzMSAweGIgMHgxMDAw
MDQgMHg0NTAwMzEgMHgzZjAwMmIgMHgzZDAwMjggMHgzZDAwMzEgMHhiIDB4MTAwMDA0IDB4MTcw
MDE3IDB4ZTAwMGUgMHgxNDAwMTQgMHgxYzAwMWMgMHgxNyAweDAgMHgwIDB4MCAweDAgMHgwIDB4
MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgw
IDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAg
MHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHg4MDIwMjIxZiAweDIyMGY0MGYgMHgxMiAweDY0MDAw
IDB4OTAwY2MgMHhjYzAwMTYgMHgzMzAwMGEgMHhjMWUwMDMwMyAweDFmMTM0MTJmIDB4MTAwMTQg
MHg4MDQgMHg1NTAgMHhmMzIwMDAwMCAweGZmZjBmZmYgMHgyODcgMHhhIDB4MCAweDAgMHgxYiAw
eDFiIDB4MjAwMDAgMHg1MDAzNyAweDAgMHgxMCAweDMwMDAgMHhhMDAwMDAwIDB4MjAwMDExMSAw
eDggMHgzMDgwOCAweDE1YzAwIDB4MTAxMDEwIDB4MTYwMCAweDAgMHgwIDB4MCAweDM0IDB4NDAg
MHgxODAwMDgwMCAweDgwMDA4MDAgMHg4MDAwODAwIDB4ODAwMDgwMCAweDQwMDA4MCAweDg4MDEw
MDQgMHgyMCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweGVmZmZlZmZmIDB4YzBjMGMwYzAgMHhj
MGMwYzBjMCAweGRjZGNkY2RjIDB4YTBhMGEwYSAweGEwYTBhMGEgMHhhMGEwYTBhIDB4MTE4NjAz
MyAweDAgMHgwIDB4MTQgMHhhIDB4MTYgMHg4ODE2MTQxNCAweDEyIDB4MTAwMDAgMHg5MDgwIDB4
NzA3MDQwNCAweDQwMDY1IDB4NTEzODAxZiAweDFmMTAxMTAwIDB4MTQgMHgxMDcyNDAgMHgxMTI0
MDAwIDB4MTEyNWI2YSAweGYwODEwMDAgMHgxMDU4MDAgMHgxMTEwZmMwMCAweGYwODEzMDAgMHgx
MDU4MDAgMHgxMTE0ZmMwMCAweDcwMDAzMDAgMHgxMDcyNDAgMHg1NTU1M2M1YSAweGM4MTYxNDE0
PjsKCQkJCW52aWRpYSxlbWMtYnVyc3QtcmVncy1wZXItY2ggPSA8MHg4ODBjNzI3MiAweDg4MGM3
MjcyIDB4YzgwYzcyNzIgMHhjODBjNzI3MiAweDhjMGU3MjcyIDB4OGMwZTcyNzIgMHg0YzBlNzI3
MiAweDRjMGU3MjcyPjsKCQkJCW52aWRpYSxlbWMtc2hhZG93LXJlZ3MtY2EtdHJhaW4gPSA8MHhk
IDB4M2EgMHgxZCAweDAgMHgwIDB4OSAweDUgMHhiIDB4ZCAweDggMHhiIDB4MCAweDQgMHgyMCAw
eDYgMHg2IDB4NiAweDMgMHgwIDB4NiAweDQgMHgyIDB4MCAweDQgMHg4IDB4ZCAweDUgMHg2IDB4
MCAweDAgMHgyIDB4ODgwMzcxNzEgMHhkIDB4MCAweGIgMHgxMDAwMCAweDEyIDB4MTQgMHgxNiAw
eDEyIDB4MTQgMHhjMSAweDAgMHgzMCAweDggMHg4IDB4MyAweDMgMHgzIDB4MTQgMHg1IDB4MiAw
eGUgMHgzYiAweDNiIDB4NSAweDUgMHg0IDB4OSAweDUgMHg0IDB4OSAweGM4MDM3MTcxIDB4MzFj
IDB4MCAweDk5NjBhMDBkIDB4M2JiZiAweDJjMDBhMCAweDgwMDAgMHg1NSAweGZmZjBmZmYgMHhm
ZmYwZmZmIDB4MCAweDAgMHgwIDB4MCAweDg4MGIwMDAwIDB4ZTAwMTcgMHgxYzAwMTQgMHg0NTAw
MzEgMHgzZjAwMmIgMHgzZDAwMjggMHgzZDAwMzEgMHhiIDB4MTAwMDA0IDB4NDUwMDMxIDB4M2Yw
MDJiIDB4M2QwMDI4IDB4M2QwMDMxIDB4YiAweDEwMDAwNCAweDE3MDAxNyAweGUwMDBlIDB4MTQw
MDE0IDB4MWMwMDFjIDB4MTcgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgw
IDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAg
MHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAw
eDAgMHgwIDB4ODAyMDIyMWYgMHgyMjBmNDBmIDB4MTIgMHg2NDAwMCAweDkwMGNjIDB4Y2MwMDE2
IDB4MzMwMDBhIDB4YzFlMDAzMDMgMHgxZjEzNDEyZiAweDEwMDE0IDB4ODA0IDB4NTUwIDB4ZjMy
MDAwMDAgMHhmZmYwZmZmIDB4Mjg3IDB4YSAweDAgMHgwIDB4MWIgMHgxYiAweDIwMDAwIDB4NTA1
ODAzMyAweDUwNTAwMDAgMHgwIDB4MzAwMCAweGEwMDAwMDAgMHgyMDAwMTExIDB4OCAweDMwODA4
IDB4MTVjMDAgMHgxMDEwMTAgMHgxNjAwIDB4MCAweDAgMHgwIDB4MzQgMHg0MCAweDE4MDAwODAw
IDB4ODAwMDgwMCAweDgwMDA4MDAgMHg4MDAwODAwIDB4NDAwMDgwIDB4ODgwMTAwNCAweDIwIDB4
MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4ZWZmZmVmZmYgMHhjMGMwYzBjMCAweGMwYzBjMGMwIDB4
ZGNkY2RjZGMgMHhhMGEwYTBhIDB4YTBhMGEwYSAweGEwYTBhMGEgMHgxMTg2MDMzIDB4MSAweDFm
IDB4MTggMHg4IDB4MWEgMHg4ODE2MTQxNCAweDEwIDB4MTAwMDAgMHg5MDgwIDB4NzA3MDQwNCAw
eDQwMDY1IDB4NTEzODAxZiAweDFmMTAxMTAwIDB4MTQgMHgxMDcyNDAgMHgxMTI0MDAwIDB4MTEy
NWI2YSAweGYwODEwMDAgMHgxMDU4MDAgMHgxMTEwZmMwMCAweGYwODEzMDAgMHgxMDU4MDAgMHgx
MTE0ZmMwMCAweDcwMDAzMDAgMHgxMDcyNDAgMHg1NTU1M2M1YSAweGM4MTYxNDE0PjsKCQkJCW52
aWRpYSxlbWMtc2hhZG93LXJlZ3MtcXVzZS10cmFpbiA9IDwweGQgMHgzYSAweDFkIDB4MCAweDAg
MHg5IDB4NSAweGEgMHhkIDB4OCAweGIgMHgwIDB4NCAweDIwIDB4NiAweDYgMHg2IDB4YyAweDAg
MHg2IDB4NCAweDIgMHgwIDB4NCAweDggMHhkIDB4MyAweDIgMHgxMDAwMDAwMCAweDAgMHgzIDB4
ODgwMzcxNzEgMHhiIDB4MSAweDgwMDAwMDAwIDB4NDAwMDAgMHgxMiAweDE0IDB4MTYgMHgxMiAw
eDE0IDB4YzEgMHgwIDB4MzAgMHg4IDB4OCAweDMgMHgzIDB4MyAweDE0IDB4NSAweDIgMHhlIDB4
M2IgMHgzYiAweDUgMHg1IDB4NCAweDkgMHg1IDB4NCAweDkgMHhjODAzNzE3MSAweDMxYyAweDAg
MHg5MTYwNDAwZCAweDNiYmYgMHgyYzAwYTAgMHg4MDAwIDB4YmUgMHhmZmYwZmZmIDB4ZmZmMGZm
ZiAweDAgMHgwIDB4MCAweDAgMHg4ODBiMDAwMCAweGUwMDE3IDB4MWMwMDE0IDB4NDUwMDMxIDB4
M2YwMDJiIDB4M2QwMDI4IDB4M2QwMDMxIDB4YiAweDEwMDAwNCAweDQ1MDAzMSAweDNmMDAyYiAw
eDNkMDAyOCAweDNkMDAzMSAweGIgMHgxMDAwMDQgMHgxNzAwMTcgMHhlMDAwZSAweDE0MDAxNCAw
eDFjMDAxYyAweDE3IDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAg
MHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAw
eDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4
MCAweDgwMjAyMjFmIDB4MjIwZjQwZiAweDEyIDB4NjQwMDAgMHg5MDBjYyAweGNjMDAxNiAweDMz
MDAwYSAweGMxZTAwMzAzIDB4MWYxMzQxMmYgMHgxMDAxNCAweDgwNCAweDU1MCAweGYzMjAwMDAw
IDB4ZmZmMGZmZiAweDI4NyAweGEgMHgwIDB4MCAweDFiIDB4MWIgMHgzMDAyMDAwMCAweDU4MDM3
IDB4MCAweDEwIDB4MzAwMCAweGEwMDAwMDAgMHgyMDAwMTExIDB4OCAweDMwODA4IDB4MTVjMDAg
MHgxMDEwMTAgMHgxNjAwIDB4MCAweDAgMHgwIDB4MzQgMHg0MCAweDE4MDAwODAwIDB4ODAwMDgw
MCAweDgwMDA4MDAgMHg4MDAwODAwIDB4NDAwMDgwIDB4ODgwMTAwNCAweDIwIDB4MCAweDAgMHgw
IDB4MCAweDAgMHgwIDB4ZWZmZmVmZmYgMHhjMGMwYzBjMCAweGMwYzBjMGMwIDB4ZGNkY2RjZGMg
MHhhMGEwYTBhIDB4YTBhMGEwYSAweGEwYTBhMGEgMHgxMTg2MDMzIDB4MSAweDFmIDB4MWUgMHgx
NCAweDIwIDB4ODgxNjE0MTQgMHgxYyAweDQwMDAwIDB4OTA4MCAweDcwNzA0MDQgMHg0MDA2NSAw
eDUxMzgwMWYgMHgxZjEwMTEwMCAweDE0IDB4MTA3MjQwIDB4MTEyNDAwMCAweDExMjViNmEgMHhm
MDgxMDAwIDB4MTA1ODAwIDB4MTExMGZjMDAgMHhmMDgxMzAwIDB4MTA1ODAwIDB4MTExNGZjMDAg
MHg3MDAwMzAwIDB4MTA3MjQwIDB4NTU1NTNjNWEgMHhjODE2MTQxND47CgkJCQludmlkaWEsZW1j
LXNoYWRvdy1yZWdzLXJkd3ItdHJhaW4gPSA8MHhkIDB4M2EgMHgxZCAweDAgMHgwIDB4OSAweDUg
MHhlIDB4ZCAweDggMHhiIDB4MCAweDQgMHgyMCAweDYgMHg2IDB4NiAweDEzIDB4MTMgMHg2IDB4
NCAweDIgMHgwIDB4NCAweDggMHhkIDB4NSAweDYgMHgzMDAwMDAwMCAweDMwMDAwMDAyIDB4MiAw
eDg4MDM3MTcxIDB4ZCAweDAgMHhiIDB4NDAwMDAgMHgxMiAweDE0IDB4MTYgMHgxMiAweDE0IDB4
YzEgMHgwIDB4MzAgMHg4IDB4OCAweDMgMHgzIDB4MyAweDE0IDB4NSAweDIgMHhlIDB4M2IgMHgz
YiAweDUgMHg1IDB4NCAweDkgMHg1IDB4NCAweDkgMHhjODAzNzE3MSAweDMxYyAweDAgMHg5MTYw
YTAwZCAweDNiYmYgMHgyYzAwYTAgMHg4MDAwIDB4YmUgMHhmZmYwZmZmIDB4ZmZmMGZmZiAweDAg
MHgwIDB4MCAweDAgMHg4ODBiMDAwMCAweGUwMDE3IDB4MWMwMDE0IDB4NDUwMDMxIDB4M2YwMDJi
IDB4M2QwMDI4IDB4M2QwMDMxIDB4YiAweDEwMDAwNCAweDQ1MDAzMSAweDNmMDAyYiAweDNkMDAy
OCAweDNkMDAzMSAweGIgMHgxMDAwMDQgMHgxNzAwMTcgMHhlMDAwZSAweDE0MDAxNCAweDFjMDAx
YyAweDE3IDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4
MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgw
IDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDgw
MjAyMjFmIDB4MjIwZjQwZiAweDEyIDB4NjQwMDAgMHg5MDBjYyAweGNjMDAxNiAweDMzMDAwYSAw
eGMxZTAwMzAzIDB4MWYxMzQxMmYgMHgxMDAxNCAweDgwNCAweDU1MCAweGYzMjAwMDAwIDB4ZmZm
MGZmZiAweDI4NyAweGEgMHgwIDB4MCAweDFiIDB4MWIgMHgyMDAwMCAweDUwMDM3IDB4MCAweDEw
IDB4MzAwMCAweGEwMDAwMDAgMHgyMDAwMTExIDB4OCAweDMwODA4IDB4MTVjMDAgMHgxMDEwMTAg
MHgxNjAwIDB4MCAweDAgMHgwIDB4MzQgMHg0MCAweDE4MDAwODAwIDB4ODAwMDgwMCAweDgwMDA4
MDAgMHg4MDAwODAwIDB4NDAwMDgwIDB4ODgwMTAwNCAweDIwIDB4MCAweDAgMHgwIDB4MCAweDAg
MHgwIDB4ZWZmZmVmZmYgMHhjMGMwYzBjMCAweGMwYzBjMGMwIDB4ZGNkY2RjZGMgMHhhMGEwYTBh
IDB4YTBhMGEwYSAweGEwYTBhMGEgMHgxMTg2MDMzIDB4MSAweDAgMHgxNCAweGEgMHgxNiAweDg4
MTYxNDE0IDB4MTIgMHg0MDAwMCAweGIwODAgMHg3MDcwNDA0IDB4NDAwNjUgMHg1MTM4MDFmIDB4
MWYxMDExMDAgMHgxNCAweDEwNzI0MCAweDExMjQwMDAgMHgxMTI1YjZhIDB4ZjA4MTAwMCAweDEw
NTgwMCAweDExMTBmYzAwIDB4ZjA4MTMwMCAweDEwNTgwMCAweDExMTRmYzAwIDB4NzAwMDMwMCAw
eDEwNzI0MCAweDU1NTUzYzVhIDB4YzgxNjE0MTQ+OwoJCQkJbnZpZGlhLGVtYy10cmltLXJlZ3Mg
PSA8MHgyODAwMjggMHgyODAwMjggMHgyODAwMjggMHgyODAwMjggMHgyODAwMjggMHgyODAwMjgg
MHgyODAwMjggMHgyODAwMjggMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgw
IDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAg
MHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAw
eDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgxMTExMTExMSAweDExMTEx
MTExIDB4MjgyODI4MjggMHgyODI4MjgyOCAweDAgMHgwIDB4MCAweDAgMHhlMDAxNyAweDFjMDAx
NCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgw
IDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAg
MHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAw
eDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4
MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgw
IDB4MD47CgkJCQludmlkaWEsZW1jLXRyaW0tcmVncy1wZXItY2ggPSA8MHgwIDB4MCAweDI0OTI0
OSAweDI0OTI0OSAweDI0OTI0OSAweDI0OTI0OSAweDAgMHgwIDB4MCAweDA+OwoJCQkJbnZpZGlh
LGVtYy12cmVmLXJlZ3MgPSA8MHgwIDB4MCAweDAgMHgwPjsKCQkJCW52aWRpYSxlbWMtZHJhbS10
aW1pbmctcmVncyA9IDwweDEzIDB4MTA0IDB4MTE4IDB4MTggMHg2PjsKCQkJCW52aWRpYSxlbWMt
dHJhaW5pbmctbW9kLXJlZ3MgPSA8MHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAg
MHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MD47CgkJCQludmlkaWEs
ZW1jLXNhdmUtcmVzdG9yZS1tb2QtcmVncyA9IDwweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAg
MHgwIDB4MCAweDAgMHgwIDB4MD47CgkJCQludmlkaWEsZW1jLWJ1cnN0LW1jLXJlZ3MgPSA8MHg4
MDAwMDAxIDB4ODAwMDAwNGMgMHhhMTAyMCAweDgwMDAxMDI4IDB4MSAweDEgMHgzIDB4MSAweDIg
MHgxIDB4MiAweDQgMHgxIDB4MSAweDQgMHg4IDB4NSAweDcgMHgyMDIwMDAwIDB4MzAyMDEgMHg3
MmEzMDUwNCAweDcwMDAwZjBmIDB4MCAweDFmMDAwMCAweDgwMDAxYSAweDgwMDAxYSAweDgwMDAx
YSAweDgwMDAxYSAweDgwMDAxYSAweDgwMDAxYSAweDgwMDAxYSAweDgwMDAxYSAweDgwMDAxYT47
CgkJCQludmlkaWEsZW1jLWxhLXNjYWxlLXJlZ3MgPSA8MHgxYiAweDgwMDAxYSAweDI0YyAweGZm
MDBiMiAweGZmMDBkYSAweGZmMDA5ZCAweGZmMDBmZiAweGZmMDAwYyAweGZmMDBmZiAweGZmMDAw
YyAweDdmMDA0OSAweGZmMDA4MCAweGZmMDAwNCAweDgwMGFkIDB4ZmYgMHhmZjAwMDQgMHhmZjAw
YzYgMHhmZjAwYzYgMHhmZjAwNmQgMHhmZjAwZmYgMHhmZjAwZTIgMHhmZiAweDgwIDB4ZmYwMGZm
PjsKCQkJfTsKCgkJCWVtYy10YWJsZS1kZXJhdGVkQDE2MDAwMDAgewoJCQkJY29tcGF0aWJsZSA9
ICJudmlkaWEsdGVncmEyMS1lbWMtdGFibGUtZGVyYXRlZCI7CgkJCQludmlkaWEscmV2aXNpb24g
PSA8MHg3PjsKCQkJCW52aWRpYSxkdmZzLXZlcnNpb24gPSAiMTNfZGVyYXRpbmdfMTYwMDAwMF9W
MTNfVjEzIjsKCQkJCWNsb2NrLWZyZXF1ZW5jeSA9IDwweDE4NmEwMD47CgkJCQludmlkaWEsZW1j
LW1pbi1tdiA9IDwweDM3Nz47CgkJCQludmlkaWEsZ2syMGEtbWluLW12ID0gPDB4NDRjPjsKCQkJ
CW52aWRpYSxzb3VyY2UgPSAicGxsbV91ZCI7CgkJCQludmlkaWEsc3JjLXNlbC1yZWcgPSA8MHg4
MDE4ODAwMD47CgkJCQludmlkaWEsbmVlZHMtdHJhaW5pbmcgPSA8MHgyZjA+OwoJCQkJbnZpZGlh
LHRyYWluaW5nX3BhdHRlcm4gPSA8MHgwPjsKCQkJCW52aWRpYSx0cmFpbmVkID0gPDB4MD47CgkJ
CQludmlkaWEscGVyaW9kaWNfdHJhaW5pbmcgPSA8MHgxPjsKCQkJCW52aWRpYSx0cmFpbmVkX2Ry
YW1fY2xrdHJlZV9jMGQwdTAgPSA8MHgwPjsKCQkJCW52aWRpYSx0cmFpbmVkX2RyYW1fY2xrdHJl
ZV9jMGQwdTEgPSA8MHgwPjsKCQkJCW52aWRpYSx0cmFpbmVkX2RyYW1fY2xrdHJlZV9jMGQxdTAg
PSA8MHgwPjsKCQkJCW52aWRpYSx0cmFpbmVkX2RyYW1fY2xrdHJlZV9jMGQxdTEgPSA8MHgwPjsK
CQkJCW52aWRpYSx0cmFpbmVkX2RyYW1fY2xrdHJlZV9jMWQwdTAgPSA8MHgwPjsKCQkJCW52aWRp
YSx0cmFpbmVkX2RyYW1fY2xrdHJlZV9jMWQwdTEgPSA8MHgwPjsKCQkJCW52aWRpYSx0cmFpbmVk
X2RyYW1fY2xrdHJlZV9jMWQxdTAgPSA8MHgwPjsKCQkJCW52aWRpYSx0cmFpbmVkX2RyYW1fY2xr
dHJlZV9jMWQxdTEgPSA8MHgwPjsKCQkJCW52aWRpYSxjdXJyZW50X2RyYW1fY2xrdHJlZV9jMGQw
dTAgPSA8MHgwPjsKCQkJCW52aWRpYSxjdXJyZW50X2RyYW1fY2xrdHJlZV9jMGQwdTEgPSA8MHgw
PjsKCQkJCW52aWRpYSxjdXJyZW50X2RyYW1fY2xrdHJlZV9jMGQxdTAgPSA8MHgwPjsKCQkJCW52
aWRpYSxjdXJyZW50X2RyYW1fY2xrdHJlZV9jMGQxdTEgPSA8MHgwPjsKCQkJCW52aWRpYSxjdXJy
ZW50X2RyYW1fY2xrdHJlZV9jMWQwdTAgPSA8MHgwPjsKCQkJCW52aWRpYSxjdXJyZW50X2RyYW1f
Y2xrdHJlZV9jMWQwdTEgPSA8MHgwPjsKCQkJCW52aWRpYSxjdXJyZW50X2RyYW1fY2xrdHJlZV9j
MWQxdTAgPSA8MHgwPjsKCQkJCW52aWRpYSxjdXJyZW50X2RyYW1fY2xrdHJlZV9jMWQxdTEgPSA8
MHgwPjsKCQkJCW52aWRpYSxydW5fY2xvY2tzID0gPDB4NDA+OwoJCQkJbnZpZGlhLHRyZWVfbWFy
Z2luID0gPDB4MT47CgkJCQludmlkaWEsYnVyc3QtcmVncy1udW0gPSA8MHhkZD47CgkJCQludmlk
aWEsYnVyc3QtcmVncy1wZXItY2gtbnVtID0gPDB4OD47CgkJCQludmlkaWEsdHJpbS1yZWdzLW51
bSA9IDwweDhhPjsKCQkJCW52aWRpYSx0cmltLXJlZ3MtcGVyLWNoLW51bSA9IDwweGE+OwoJCQkJ
bnZpZGlhLGJ1cnN0LW1jLXJlZ3MtbnVtID0gPDB4MjE+OwoJCQkJbnZpZGlhLGxhLXNjYWxlLXJl
Z3MtbnVtID0gPDB4MTg+OwoJCQkJbnZpZGlhLHZyZWYtcmVncy1udW0gPSA8MHg0PjsKCQkJCW52
aWRpYSx0cmFpbmluZy1tb2QtcmVncy1udW0gPSA8MHgxND47CgkJCQludmlkaWEsZHJhbS10aW1p
bmctcmVncy1udW0gPSA8MHg1PjsKCQkJCW52aWRpYSxtaW4tbXJzLXdhaXQgPSA8MHgzMD47CgkJ
CQludmlkaWEsZW1jLW1ydyA9IDwweDg4MDEwMDU0PjsKCQkJCW52aWRpYSxlbWMtbXJ3MiA9IDww
eDg4MDIwMDJkPjsKCQkJCW52aWRpYSxlbWMtbXJ3MyA9IDwweDg4MGQwMDAwPjsKCQkJCW52aWRp
YSxlbWMtbXJ3NCA9IDwweGMwMDAwMDAwPjsKCQkJCW52aWRpYSxlbWMtbXJ3OSA9IDwweDhjMGU0
ODQ4PjsKCQkJCW52aWRpYSxlbWMtbXJzID0gPDB4MD47CgkJCQludmlkaWEsZW1jLWVtcnMgPSA8
MHgwPjsKCQkJCW52aWRpYSxlbWMtZW1yczIgPSA8MHgwPjsKCQkJCW52aWRpYSxlbWMtYXV0by1j
YWwtY29uZmlnID0gPDB4YTAxYTUxZDg+OwoJCQkJbnZpZGlhLGVtYy1hdXRvLWNhbC1jb25maWcy
ID0gPDB4NTUwMDAwMD47CgkJCQludmlkaWEsZW1jLWF1dG8tY2FsLWNvbmZpZzMgPSA8MHg3NzAw
MDA+OwoJCQkJbnZpZGlhLGVtYy1hdXRvLWNhbC1jb25maWc0ID0gPDB4NzcwMDAwPjsKCQkJCW52
aWRpYSxlbWMtYXV0by1jYWwtY29uZmlnNSA9IDwweDc3MDAwMD47CgkJCQludmlkaWEsZW1jLWF1
dG8tY2FsLWNvbmZpZzYgPSA8MHg3NzAwMDA+OwoJCQkJbnZpZGlhLGVtYy1hdXRvLWNhbC1jb25m
aWc3ID0gPDB4NzcwMDAwPjsKCQkJCW52aWRpYSxlbWMtYXV0by1jYWwtY29uZmlnOCA9IDwweDc3
MDAwMD47CgkJCQludmlkaWEsZW1jLWNmZy0yID0gPDB4MTEwODM1PjsKCQkJCW52aWRpYSxlbWMt
c2VsLWRwZC1jdHJsID0gPDB4NDAwMDA+OwoJCQkJbnZpZGlhLGVtYy1mZHBkLWN0cmwtY21kLW5v
LXJhbXAgPSA8MHgxPjsKCQkJCW52aWRpYSxkbGwtY2xrLXNyYyA9IDwweDgwMTg4MDAwPjsKCQkJ
CW52aWRpYSxjbGstb3V0LWVuYi14LTAtY2xrLWVuYi1lbWMtZGxsID0gPDB4MD47CgkJCQludmlk
aWEsZW1jLWNsb2NrLWxhdGVuY3ktY2hhbmdlID0gPDB4NDljPjsKCQkJCW52aWRpYSxwdGZ2ID0g
PDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHhhIDB4YSAweGEgMHgxPjsKCQkJCW52
aWRpYSxlbWMtcmVnaXN0ZXJzID0gPDB4NjYgMHgxYzAgMHhlMCAweDAgMHgwIDB4NDcgMHgyMCAw
eDI5IDB4MjEgMHhjIDB4MmQgMHgwIDB4NCAweDIwIDB4MjAgMHgyMCAweDEzIDB4MTcgMHgxNiAw
eDYgMHhlIDB4YyAweGEgMHhlIDB4OCAweGQgMHgyNCAweDggMHgxMDAwMDAxYyAweDEwMDAwMDAy
IDB4MTQgMHg4ODAzZjFmMSAweDFjIDB4MWYgMHhkIDB4NjAwMGMgMHgzMyAweDNiIDB4M2QgMHgz
OSAweDNiIDB4NWU5IDB4MCAweDE3YSAweDEwIDB4MTAgMHgzIDB4MyAweDMgMHgzOCAweGUgMHgy
IDB4MzEgMHgxY2MgMHgxY2MgMHhkIDB4MTggMHhjIDB4NDAgMHgyNSAweDQgMHgxNCAweGM4MDNm
MWYxIDB4MTg2MCAweDAgMHg5OTYwYTAwZCAweDNiZmYgMHhjMDAwMDFiYiAweDgwMDAgMHg1NSAw
eDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDg4MGI2NjY2IDB4ODAwMGUgMHgxMTAwMGMgMHgxYzAw
MWMgMHgxYzAwMWMgMHgxYzAwMWMgMHgxYzAwMWMgMHg3IDB4OTAwMDIgMHgxYzAwMWMgMHgxYzAw
MWMgMHgxYzAwMWMgMHgxYzAwMWMgMHg3IDB4OTAwMDIgMHhlMDAwZSAweDgwMDA4IDB4YzAwMGMg
MHgxMTAwMTEgMHhlIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAg
MHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAw
eDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4
MCAweDgwMjAyMjFmIDB4MjIwZjQwZiAweDEyIDB4NjQwMDAgMHgzMTA2NDAgMHg2NDAwMDMwIDB4
MTkwMDAxNyAweGMxZTAwMzBhIDB4MWYxMzYxMmYgMHgxNCAweDgwZCAweDU1MCAweGYzMjAwMDAw
IDB4MCAweGNlNiAweDJiIDB4MCAweDAgMHgxYiAweDFiIDB4MjAwMDAgMHgzMyAweDAgMHgxMSAw
eDMwMDAgMHgyMDAwMDAwIDB4MjAwMDEwMSAweDcgMHgzMDgwOCAweDE1YzAwIDB4MTAyMDIwIDB4
MWZmZjFmZmYgMHgwIDB4MCAweDAgMHgzNCAweDQwIDB4MTgwMDA4MDAgMHg4MDAwODAwIDB4ODAw
MDgwMCAweDgwMDA4MDAgMHg0MDAwODAgMHg4ODAxMDA0IDB4MjAgMHgwIDB4MCAweDAgMHgwIDB4
MCAweDAgMHhlZmZmMjIxMCAweDAgMHgwIDB4ZGNkY2RjZGMgMHhhMGEwYTBhIDB4YTBhMGEwYSAw
eGEwYTBhMGEgMHgxMTg2MTkwIDB4MCAweDAgMHgzYiAweDJiIDB4M2QgMHg4ODE2MTQxNCAweDMz
IDB4NjAwMGMgMHg5MDgwIDB4NzA3MDQwNCAweDQwMzIwIDB4NTEzODAxZiAweDFmMTAxMTAwIDB4
MTQgMHgxMDMyMDAgMHgxMTI0MDAwIDB4MTEyNWI2YSAweGYwODEwMDAgMHgxMDU4MDAgMHgxMTEw
ZmMwMCAweGYwODUzMDAgMHgxMDU4MDAgMHgxMTE0ZmMwMCAweDcwMDQzMDAgMHgxMDMyMDAgMHg1
NTU1M2M1YSAweGM4MTYxNDE0PjsKCQkJCW52aWRpYSxlbWMtYnVyc3QtcmVncy1wZXItY2ggPSA8
MHg4ODBjNDg0OCAweDg4MGM0ODQ4IDB4YzgwYzQ4NDggMHhjODBjNDg0OCAweDhjMGU0ODQ4IDB4
OGMwZTQ4NDggMHg0YzBlNDg0OCAweDRjMGU0ODQ4PjsKCQkJCW52aWRpYSxlbWMtc2hhZG93LXJl
Z3MtY2EtdHJhaW4gPSA8MHg2NiAweDFjMCAweGUwIDB4MCAweDAgMHg0NyAweDIwIDB4MjkgMHgy
MSAweGMgMHgyZCAweDAgMHg0IDB4MjAgMHgyMCAweDIwIDB4MTMgMHgxNyAweDE2IDB4NiAweGUg
MHhjIDB4YSAweGUgMHg4IDB4ZCAweDI0IDB4OCAweDEwMDAwMDFjIDB4MTAwMDAwMDIgMHgxNCAw
eDg4MDNmMWYxIDB4MWMgMHgxZiAweGQgMHg2MDAwYyAweDMzIDB4M2IgMHgzZCAweDM5IDB4M2Ig
MHg1ZTkgMHgwIDB4MTdhIDB4MTAgMHgxMCAweDMgMHgzIDB4MyAweDM4IDB4ZSAweDIgMHgzMSAw
eDFjYyAweDFjYyAweGQgMHgxOCAweGMgMHg0MCAweDI1IDB4NCAweDE0IDB4YzgwM2YxZjEgMHgx
ODYwIDB4MCAweDk5NjBhMDBkIDB4M2JmZiAweGMwMDAwMWJiIDB4ODAwMCAweDU1IDB4MCAweDAg
MHgwIDB4MCAweDAgMHgwIDB4ODgwYjY2NjYgMHg4MDAwZSAweDExMDAwYyAweDFjMDAxYyAweDFj
MDAxYyAweDFjMDAxYyAweDFjMDAxYyAweDcgMHg5MDAwMiAweDFjMDAxYyAweDFjMDAxYyAweDFj
MDAxYyAweDFjMDAxYyAweDcgMHg5MDAwMiAweGUwMDBlIDB4ODAwMDggMHhjMDAwYyAweDExMDAx
MSAweGUgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgw
IDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAg
MHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4ODAy
MDIyMWYgMHgyMjBmNDBmIDB4MTIgMHg2NDAwMCAweDMxMDY0MCAweDY0MDAwMzAgMHgxOTAwMDE3
IDB4YzFlMDAzMGEgMHgxZjEzNjEyZiAweDE0IDB4ODBkIDB4NTUwIDB4ZjMyMDAwMDAgMHgwIDB4
Y2U2IDB4MmIgMHgwIDB4MCAweDFiIDB4MWIgMHgyMDAwMCAweDgwMzMgMHgwIDB4MCAweDMwMDAg
MHgyMDAwMDAwIDB4MjAwMDEwMSAweDcgMHgzMDgwOCAweDE1YzAwIDB4MTAyMDIwIDB4MWZmZjFm
ZmYgMHgwIDB4MCAweDAgMHgzNCAweDQwIDB4MTgwMDA4MDAgMHg4MDAwODAwIDB4ODAwMDgwMCAw
eDgwMDA4MDAgMHg0MDAwODAgMHg4ODAxMDA0IDB4MjAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAg
MHhlZmZmMjIxMCAweDAgMHgwIDB4ZGNkY2RjZGMgMHhhMGEwYTBhIDB4YTBhMGEwYSAweGEwYTBh
MGEgMHgxMTg2MTkwIDB4MSAweDFmIDB4NDEgMHgyYiAweDQzIDB4ODgxNjE0MTQgMHgzMyAweDYw
MDBjIDB4OTA4MCAweDcwNzA0MDQgMHg0MDMyMCAweDUxMzgwMWYgMHgxZjEwMTEwMCAweDE0IDB4
MTAzMjAwIDB4MTEyNDAwMCAweDExMjViNmEgMHhmMDgxMDAwIDB4MTA1ODAwIDB4MTExMGZjMDAg
MHhmMDg1MzAwIDB4MTA1ODAwIDB4MTExNGZjMDAgMHg3MDA0MzAwIDB4MTAzMjAwIDB4NTU1NTNj
NWEgMHhjODE2MTQxND47CgkJCQludmlkaWEsZW1jLXNoYWRvdy1yZWdzLXF1c2UtdHJhaW4gPSA8
MHg2NiAweDFjMCAweGUwIDB4MCAweDAgMHg0NyAweDIwIDB4MjggMHgyMSAweGMgMHgyZCAweDAg
MHg0IDB4MjAgMHgyMCAweDIwIDB4MTMgMHgxMSAweDE2IDB4NiAweGUgMHhjIDB4YSAweGUgMHg4
IDB4ZCAweDIxIDB4MiAweDEwMDAwMDFjIDB4MTAwMDAwMDIgMHgxNCAweDg4MDNmMWYxIDB4MWIg
MHgxIDB4ODAwMDAwMDAgMHg2MDAwYyAweDMzIDB4M2IgMHgzZCAweDM5IDB4M2IgMHg1ZTkgMHgw
IDB4MTdhIDB4MTAgMHgxMCAweDMgMHgzIDB4MyAweDM4IDB4ZSAweDIgMHgzMSAweDFjYyAweDFj
YyAweGQgMHgxOCAweGMgMHg0MCAweDI1IDB4NCAweDE0IDB4YzgwM2YxZjEgMHgxODYwIDB4MCAw
eDk5NjA0MDBkIDB4M2JmZiAweGMwMDAwMWJiIDB4ODAwMCAweDU1IDB4MCAweDAgMHgwIDB4MCAw
eDAgMHgwIDB4ODgwYjY2NjYgMHg4MDAwZSAweDExMDAwYyAweDFjMDAxYyAweDFjMDAxYyAweDFj
MDAxYyAweDFjMDAxYyAweDcgMHg5MDAwMiAweDFjMDAxYyAweDFjMDAxYyAweDFjMDAxYyAweDFj
MDAxYyAweDcgMHg5MDAwMiAweGUwMDBlIDB4ODAwMDggMHhjMDAwYyAweDExMDAxMSAweGUgMHgw
IDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAg
MHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAw
eDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4ODAyMDIyMWYgMHgy
MjBmNDBmIDB4MTIgMHg2NDAwMCAweDMxMDY0MCAweDY0MDAwMzAgMHgxOTAwMDE3IDB4YzFlMDAz
MGEgMHgxZjEzNjEyZiAweDE0IDB4ODBkIDB4NTUwIDB4ZjMyMDAwMDAgMHgwIDB4Y2U2IDB4MmIg
MHgwIDB4MCAweDFiIDB4MWIgMHgzMDAyMDAwMCAweDgwMzMgMHgwIDB4MTEgMHgzMDAwIDB4MjAw
MDAwMCAweDIwMDAxMDEgMHg3IDB4MzA4MDggMHgxNWMwMCAweDEwMjAyMCAweDFmZmYxZmZmIDB4
MCAweDAgMHgwIDB4MzQgMHg0MCAweDE4MDAwODAwIDB4ODAwMDgwMCAweDgwMDA4MDAgMHg4MDAw
ODAwIDB4NDAwMDgwIDB4ODgwMTAwNCAweDIwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4ZWZm
ZjIyMTAgMHgwIDB4MCAweGRjZGNkY2RjIDB4YTBhMGEwYSAweGEwYTBhMGEgMHhhMGEwYTBhIDB4
MTE4NjE5MCAweDEgMHgxZiAweDQ1IDB4MzUgMHg0NyAweDg4MTYxNDE0IDB4M2QgMHg2MDAwYyAw
eDkwODAgMHg3MDcwNDA0IDB4NDAzMjAgMHg1MTM4MDFmIDB4MWYxMDExMDAgMHgxNCAweDEwMzIw
MCAweDExMjQwMDAgMHgxMTI1YjZhIDB4ZjA4MTAwMCAweDEwNTgwMCAweDExMTBmYzAwIDB4ZjA4
NTMwMCAweDEwNTgwMCAweDExMTRmYzAwIDB4NzAwNDMwMCAweDEwMzIwMCAweDU1NTUzYzVhIDB4
YzgxNjE0MTQ+OwoJCQkJbnZpZGlhLGVtYy1zaGFkb3ctcmVncy1yZHdyLXRyYWluID0gPDB4NjYg
MHgxYzAgMHhlMCAweDAgMHgwIDB4NDcgMHgyMCAweDI5IDB4MjEgMHhjIDB4MmQgMHgwIDB4NCAw
eDIwIDB4MjAgMHgyMCAweDEzIDB4MTcgMHgxNiAweDYgMHhlIDB4YyAweGEgMHhlIDB4OCAweGQg
MHgyNCAweDggMHgxMDAwMDAxYyAweDEwMDAwMDAyIDB4MTQgMHg4ODAzZjFmMSAweDFjIDB4MWYg
MHhkIDB4NjAwMGMgMHgzMyAweDNiIDB4M2QgMHgzOSAweDNiIDB4NWU5IDB4MCAweDE3YSAweDEw
IDB4MTAgMHgzIDB4MyAweDMgMHgzOCAweGUgMHgyIDB4MzEgMHgxY2MgMHgxY2MgMHhkIDB4MTgg
MHhjIDB4NDAgMHgyNSAweDQgMHgxNCAweGM4MDNmMWYxIDB4MTg2MCAweDAgMHg5OTYwYTAwZCAw
eDNiZmYgMHhjMDAwMDFiYiAweDgwMDAgMHg1NSAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDg4
MGI2NjY2IDB4ODAwMGUgMHgxMTAwMGMgMHgxYzAwMWMgMHgxYzAwMWMgMHgxYzAwMWMgMHgxYzAw
MWMgMHg3IDB4OTAwMDIgMHgxYzAwMWMgMHgxYzAwMWMgMHgxYzAwMWMgMHgxYzAwMWMgMHg3IDB4
OTAwMDIgMHhlMDAwZSAweDgwMDA4IDB4YzAwMGMgMHgxMTAwMTEgMHhlIDB4MCAweDAgMHgwIDB4
MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgw
IDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAg
MHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDgwMjAyMjFmIDB4MjIwZjQwZiAweDEy
IDB4NjQwMDAgMHgzMTA2NDAgMHg2NDAwMDMwIDB4MTkwMDAxNyAweGMxZTAwMzBhIDB4MWYxMzYx
MmYgMHgxNCAweDgwZCAweDU1MCAweGYzMjAwMDAwIDB4MCAweGNlNiAweDJiIDB4MCAweDAgMHgx
YiAweDFiIDB4MjAwMDAgMHgzMyAweDAgMHgxMSAweDMwMDAgMHgyMDAwMDAwIDB4MjAwMDEwMSAw
eDcgMHgzMDgwOCAweDE1YzAwIDB4MTAyMDIwIDB4MWZmZjFmZmYgMHgwIDB4MCAweDAgMHgzNCAw
eDQwIDB4MTgwMDA4MDAgMHg4MDAwODAwIDB4ODAwMDgwMCAweDgwMDA4MDAgMHg0MDAwODAgMHg4
ODAxMDA0IDB4MjAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHhlZmZmMjIxMCAweDAgMHgwIDB4
ZGNkY2RjZGMgMHhhMGEwYTBhIDB4YTBhMGEwYSAweGEwYTBhMGEgMHgxMTg2MTkwIDB4MSAweDAg
MHgzYiAweDJiIDB4M2QgMHg4ODE2MTQxNCAweDMzIDB4NjAwMGMgMHhiMDgwIDB4NzA3MDQwNCAw
eDQwMzIwIDB4NTEzODAxZiAweDFmMTAxMTAwIDB4MTQgMHgxMDMyMDAgMHgxMTI0MDAwIDB4MTEy
NWI2YSAweGYwODEwMDAgMHgxMDU4MDAgMHgxMTEwZmMwMCAweGYwODUzMDAgMHgxMDU4MDAgMHgx
MTE0ZmMwMCAweDcwMDQzMDAgMHgxMDMyMDAgMHg1NTU1M2M1YSAweGM4MTYxNDE0PjsKCQkJCW52
aWRpYSxlbWMtdHJpbS1yZWdzID0gPDB4MjAwMDIwIDB4MjAwMDIwIDB4MjAwMDIwIDB4MjAwMDIw
IDB4MjAwMDIwIDB4MjAwMDIwIDB4MjAwMDIwIDB4MjAwMDIwIDB4MCAweDAgMHgwIDB4MCAweDAg
MHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAw
eDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4
MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgw
IDB4MTExMTExMTEgMHgxMTExMTExMSAweDExMTExMTExIDB4MTExMTExMTEgMHgyYjAwMjIgMHgy
YjAwMjYgMHgyNjAwMjUgMHgyNjAwMjYgMHg4MDAwZSAweDExMDAwYyAweDJiMDAyMiAweDJiMDAy
NiAweDI2MDAyNSAweDI2MDAyNiAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAw
eDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4
MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgw
IDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAg
MHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAw
eDAgMHgwPjsKCQkJCW52aWRpYSxlbWMtdHJpbS1yZWdzLXBlci1jaCA9IDwweDAgMHgwIDB4MCAw
eDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDA+OwoJCQkJbnZpZGlhLGVtYy12cmVmLXJlZ3MgPSA8
MHgwIDB4MCAweDAgMHgwPjsKCQkJCW52aWRpYSxlbWMtZHJhbS10aW1pbmctcmVncyA9IDwweDEz
IDB4MTA0IDB4MTE4IDB4NyAweDIwPjsKCQkJCW52aWRpYSxlbWMtdHJhaW5pbmctbW9kLXJlZ3Mg
PSA8MHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4
MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MD47CgkJCQludmlkaWEsZW1jLXNhdmUtcmVzdG9yZS1t
b2QtcmVncyA9IDwweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4NCAweDQgMHg0IDB4
ND47CgkJCQludmlkaWEsZW1jLWJ1cnN0LW1jLXJlZ3MgPSA8MHhjIDB4ODAwMDAwODAgMHhhMTAy
MCAweDgwMDAxMDI4IDB4NiAweDcgMHgxOSAweDEwIDB4ZiAweDQgMHgzIDB4ZSAweDEgMHgxIDB4
YyAweDggMHhhIDB4MzcgMHg1MDYwMDAwIDB4ZTA5MGMgMHg3MjZjMjQxYSAweDcwMDAwZjBmIDB4
MCAweDFmMDAwMCAweDgwMDAxYSAweDgwMDAxYSAweDgwMDAxYSAweDgwMDAxYSAweDgwMDAxYSAw
eDgwMDAxYSAweDgwMDAxYSAweDgwMDAxYSAweDgwMDAxYT47CgkJCQludmlkaWEsZW1jLWxhLXNj
YWxlLXJlZ3MgPSA8MHhkMCAweDgwMDAxYSAweDEyMDMgMHg4MDAwM2QgMHg4MDAwMzggMHg4MDAw
NDEgMHg4MDAwOTAgMHg4MDAwMDUgMHg4MDAwOTAgMHg4MDAwMDUgMHgzNDAwNDkgMHg4MDAwODAg
MHg4MDAwMDQgMHg4MDAxNiAweDgwIDB4ODAwMDA0IDB4ODAwMDE5IDB4ODAwMDE5IDB4ODAwMDE4
IDB4ODAwMDk1IDB4ODAwMDFkIDB4ODAgMHgyYyAweDgwMDA4MD47CgkJCX07CgkJfTsKCX07CgoJ
ZHVtbXktY29vbC1kZXYgewoJCWNvbXBhdGlibGUgPSAiZHVtbXktY29vbGluZy1kZXYiOwoJCSNj
b29saW5nLWNlbGxzID0gPDB4Mj47CgkJbGludXgscGhhbmRsZSA9IDwweDEyMz47CgkJcGhhbmRs
ZSA9IDwweDEyMz47Cgl9OwoKCXJlZ3VsYXRvcnMgewoJCWNvbXBhdGlibGUgPSAic2ltcGxlLWJ1
cyI7CgkJZGV2aWNlX3R5cGUgPSAiZml4ZWQtcmVndWxhdG9ycyI7CgkJI2FkZHJlc3MtY2VsbHMg
PSA8MHgxPjsKCQkjc2l6ZS1jZWxscyA9IDwweDA+OwoKCQlyZWd1bGF0b3JAMCB7CgkJCWNvbXBh
dGlibGUgPSAicmVndWxhdG9yLWZpeGVkIjsKCQkJcmVnID0gPDB4MD47CgkJCXJlZ3VsYXRvci1u
YW1lID0gInZkZC1hYy1iYXQiOwoJCQlyZWd1bGF0b3ItbWluLW1pY3Jvdm9sdCA9IDwweDRjNGI0
MD47CgkJCXJlZ3VsYXRvci1tYXgtbWljcm92b2x0ID0gPDB4NGM0YjQwPjsKCQkJcmVndWxhdG9y
LWFsd2F5cy1vbjsKCQkJbGludXgscGhhbmRsZSA9IDwweDQyPjsKCQkJcGhhbmRsZSA9IDwweDQy
PjsKCQl9OwoKCQlyZWd1bGF0b3JAMSB7CgkJCWNvbXBhdGlibGUgPSAicmVndWxhdG9yLWZpeGVk
IjsKCQkJcmVnID0gPDB4MT47CgkJCXJlZ3VsYXRvci1uYW1lID0gInZkZC01djAtc3lzIjsKCQkJ
cmVndWxhdG9yLW1pbi1taWNyb3ZvbHQgPSA8MHg0YzRiNDA+OwoJCQlyZWd1bGF0b3ItbWF4LW1p
Y3Jvdm9sdCA9IDwweDRjNGI0MD47CgkJCXJlZ3VsYXRvci1lbmFibGUtcmFtcC1kZWxheSA9IDww
eGEwPjsKCQkJcmVndWxhdG9yLWRpc2FibGUtcmFtcC1kZWxheSA9IDwweDI3MTA+OwoJCQlsaW51
eCxwaGFuZGxlID0gPDB4YTI+OwoJCQlwaGFuZGxlID0gPDB4YTI+OwoJCX07CgoJCXJlZ3VsYXRv
ckAyIHsKCQkJY29tcGF0aWJsZSA9ICJyZWd1bGF0b3ItZml4ZWQtc3luYyI7CgkJCXJlZyA9IDww
eDI+OwoJCQlyZWd1bGF0b3ItbmFtZSA9ICJ2ZGQtM3YzLXN5cyI7CgkJCXJlZ3VsYXRvci1taW4t
bWljcm92b2x0ID0gPDB4MzI1YWEwPjsKCQkJcmVndWxhdG9yLW1heC1taWNyb3ZvbHQgPSA8MHgz
MjVhYTA+OwoJCQlncGlvID0gPDB4MWUgMHgzIDB4MD47CgkJCWVuYWJsZS1hY3RpdmUtaGlnaDsK
CQkJdmluLXN1cHBseSA9IDwweGEyPjsKCQkJcmVndWxhdG9yLWVuYWJsZS1yYW1wLWRlbGF5ID0g
PDB4ZjA+OwoJCQlyZWd1bGF0b3ItZGlzYWJsZS1yYW1wLWRlbGF5ID0gPDB4MmM0Yz47CgkJCWxp
bnV4LHBoYW5kbGUgPSA8MHg0Nz47CgkJCXBoYW5kbGUgPSA8MHg0Nz47CgkJfTsKCgkJcmVndWxh
dG9yQDMgewoJCQljb21wYXRpYmxlID0gInJlZ3VsYXRvci1maXhlZC1zeW5jIjsKCQkJcmVnID0g
PDB4Mz47CgkJCXJlZ3VsYXRvci1uYW1lID0gInZkZC0zdjMtc2QiOwoJCQlyZWd1bGF0b3ItbWlu
LW1pY3Jvdm9sdCA9IDwweDMyNWFhMD47CgkJCXJlZ3VsYXRvci1tYXgtbWljcm92b2x0ID0gPDB4
MzI1YWEwPjsKCQkJZ3BpbyA9IDwweDU2IDB4Y2IgMHgwPjsKCQkJZW5hYmxlLWFjdGl2ZS1oaWdo
OwoJCQlyZWd1bGF0b3ItYm9vdC1vbjsKCQkJdmluLXN1cHBseSA9IDwweDQ3PjsKCQkJbGludXgs
cGhhbmRsZSA9IDwweDk5PjsKCQkJcGhhbmRsZSA9IDwweDk5PjsKCQl9OwoKCQlyZWd1bGF0b3JA
NCB7CgkJCWNvbXBhdGlibGUgPSAicmVndWxhdG9yLWZpeGVkLXN5bmMiOwoJCQlyZWcgPSA8MHg0
PjsKCQkJcmVndWxhdG9yLW5hbWUgPSAiYXZkZC1pby1lZHAtMXYwNSI7CgkJCXJlZ3VsYXRvci1t
aW4tbWljcm92b2x0ID0gPDB4MTAwNTkwPjsKCQkJcmVndWxhdG9yLW1heC1taWNyb3ZvbHQgPSA8
MHgxMDA1OTA+OwoJCQlncGlvID0gPDB4MWUgMHg3IDB4MD47CgkJCWVuYWJsZS1hY3RpdmUtaGln
aDsKCQkJdmluLXN1cHBseSA9IDwweDNlPjsKCQkJbGludXgscGhhbmRsZSA9IDwweDY3PjsKCQkJ
cGhhbmRsZSA9IDwweDY3PjsKCQl9OwoKCQlyZWd1bGF0b3JANSB7CgkJCWNvbXBhdGlibGUgPSAi
cmVndWxhdG9yLWZpeGVkIjsKCQkJcmVnID0gPDB4NT47CgkJCXJlZ3VsYXRvci1uYW1lID0gInZk
ZC01djAtaGRtaSI7CgkJCXJlZ3VsYXRvci1taW4tbWljcm92b2x0ID0gPDB4NGM0YjQwPjsKCQkJ
cmVndWxhdG9yLW1heC1taWNyb3ZvbHQgPSA8MHg0YzRiNDA+OwoJCQl2aW4tc3VwcGx5ID0gPDB4
YTI+OwoJCQlsaW51eCxwaGFuZGxlID0gPDB4NjQ+OwoJCQlwaGFuZGxlID0gPDB4NjQ+OwoJCX07
CgoJCXJlZ3VsYXRvckA2IHsKCQkJY29tcGF0aWJsZSA9ICJyZWd1bGF0b3ItZml4ZWQiOwoJCQly
ZWcgPSA8MHg2PjsKCQkJcmVndWxhdG9yLW5hbWUgPSAidmRkLTF2OC1zeXMiOwoJCQlyZWd1bGF0
b3ItbWluLW1pY3Jvdm9sdCA9IDwweDFiNzc0MD47CgkJCXJlZ3VsYXRvci1tYXgtbWljcm92b2x0
ID0gPDB4MWI3NzQwPjsKCQkJdmluLXN1cHBseSA9IDwweDQ3PjsKCQkJbGludXgscGhhbmRsZSA9
IDwweGEzPjsKCQkJcGhhbmRsZSA9IDwweGEzPjsKCQl9OwoKCQlyZWd1bGF0b3JANyB7CgkJCWNv
bXBhdGlibGUgPSAicmVndWxhdG9yLWZpeGVkIjsKCQkJcmVnID0gPDB4Nz47CgkJCXJlZ3VsYXRv
ci1uYW1lID0gInZkZC1mYW4iOwoJCQlyZWd1bGF0b3ItbWluLW1pY3Jvdm9sdCA9IDwweDRjNGI0
MD47CgkJCXJlZ3VsYXRvci1tYXgtbWljcm92b2x0ID0gPDB4NGM0YjQwPjsKCQkJdmluLXN1cHBs
eSA9IDwweGEyPjsKCQkJbGludXgscGhhbmRsZSA9IDwweGE0PjsKCQkJcGhhbmRsZSA9IDwweGE0
PjsKCQl9OwoKCQlyZWd1bGF0b3JAOCB7CgkJCWNvbXBhdGlibGUgPSAicmVndWxhdG9yLWZpeGVk
IjsKCQkJcmVnID0gPDB4OD47CgkJCXJlZ3VsYXRvci1uYW1lID0gInZkZC11c2ItdmJ1cyI7CgkJ
CXJlZ3VsYXRvci1taW4tbWljcm92b2x0ID0gPDB4NGM0YjQwPjsKCQkJcmVndWxhdG9yLW1heC1t
aWNyb3ZvbHQgPSA8MHg0YzRiNDA+OwoJCQl2aW4tc3VwcGx5ID0gPDB4YTI+OwoJCQlsaW51eCxw
aGFuZGxlID0gPDB4NDE+OwoJCQlwaGFuZGxlID0gPDB4NDE+OwoJCX07CgoJCXJlZ3VsYXRvckA5
IHsKCQkJY29tcGF0aWJsZSA9ICJyZWd1bGF0b3ItZml4ZWQtc3luYyI7CgkJCXJlZyA9IDwweDk+
OwoJCQlyZWd1bGF0b3ItbmFtZSA9ICJ2ZGQtdXNiLWh1Yi1lbiI7CgkJCXJlZ3VsYXRvci1taW4t
bWljcm92b2x0ID0gPDB4NGM0YjQwPjsKCQkJcmVndWxhdG9yLW1heC1taWNyb3ZvbHQgPSA8MHg0
YzRiNDA+OwoJCQl2aW4tc3VwcGx5ID0gPDB4YTM+OwoJCQlsaW51eCxwaGFuZGxlID0gPDB4YjM+
OwoJCQlwaGFuZGxlID0gPDB4YjM+OwoJCX07CgoJCXJlZ3VsYXRvckAxMCB7CgkJCWNvbXBhdGli
bGUgPSAicmVndWxhdG9yLWZpeGVkIjsKCQkJcmVnID0gPDB4YT47CgkJCXJlZ3VsYXRvci1uYW1l
ID0gInZkZC11c2ItdmJ1czIiOwoJCQlyZWd1bGF0b3ItbWluLW1pY3Jvdm9sdCA9IDwweDRjNGI0
MD47CgkJCXJlZ3VsYXRvci1tYXgtbWljcm92b2x0ID0gPDB4NGM0YjQwPjsKCQkJdmluLXN1cHBs
eSA9IDwweDQ3PjsKCQkJbGludXgscGhhbmRsZSA9IDwweDQzPjsKCQkJcGhhbmRsZSA9IDwweDQz
PjsKCQl9OwoJfTsKCglwd20tZmFuIHsKCQl2ZGQtZmFuLXN1cHBseSA9IDwweGE0PjsKCQljb21w
YXRpYmxlID0gInB3bS1mYW4iOwoJCXN0YXR1cyA9ICJva2F5IjsKCQlwd21zID0gPDB4YTUgMHgz
IDB4YjExNj47CgkJc2hhcmVkX2RhdGEgPSA8MHhhNj47CgkJYWN0aXZlX3B3bSA9IDwweDAgMHg1
MCAweDc4IDB4YTAgMHhmZiAweGZmIDB4ZmYgMHhmZiAweGZmIDB4ZmY+OwoJfTsKCglkdmZzX3Jh
aWxzIHsKCQljb21wYXRpYmxlID0gInNpbXBsZS1idXMiOwoJCSNhZGRyZXNzLWNlbGxzID0gPDB4
MT47CgkJI3NpemUtY2VsbHMgPSA8MHgwPjsKCgkJdmRkLWdwdS1zY2FsaW5nLWNkZXZANyB7CgkJ
CXN0YXR1cyA9ICJva2F5IjsKCQkJcmVnID0gPDB4Nz47CgkJCWNvb2xpbmctbWluLXN0YXRlID0g
PDB4MD47CgkJCWNvb2xpbmctbWF4LXN0YXRlID0gPDB4NT47CgkJCSNjb29saW5nLWNlbGxzID0g
PDB4Mj47CgkJCWNvbXBhdGlibGUgPSAibnZpZGlhLHRlZ3JhMjEwLXJhaWwtc2NhbGluZy1jZGV2
IjsKCQkJY2Rldi10eXBlID0gImdwdV9zY2FsaW5nIjsKCQkJbnZpZGlhLGNvbnN0cmFpbnQ7CgkJ
CW52aWRpYSx0cmlwcyA9IDwweGE3IDB4M2I2IDB4MyAweDAgMHg1IDB4MCAweDYgMHgwIDB4NyAw
eDAgMHg4IDB4MD47CgkJCWxpbnV4LHBoYW5kbGUgPSA8MHg0PjsKCQkJcGhhbmRsZSA9IDwweDQ+
OwoJCX07CgoJCXZkZC1ncHUtdm1heC1jZGV2QDkgewoJCQlzdGF0dXMgPSAib2theSI7CgkJCXJl
ZyA9IDwweDk+OwoJCQljb29saW5nLW1pbi1zdGF0ZSA9IDwweDA+OwoJCQljb29saW5nLW1heC1z
dGF0ZSA9IDwweDE+OwoJCQkjY29vbGluZy1jZWxscyA9IDwweDI+OwoJCQljb21wYXRpYmxlID0g
Im52aWRpYSx0ZWdyYTIxMC1yYWlsLXZtYXgtY2RldiI7CgkJCWNkZXYtdHlwZSA9ICJHUFUtY2Fw
IjsKCQkJbnZpZGlhLGNvbnN0cmFpbnQtdWNtMjsKCQkJbnZpZGlhLHRyaXBzID0gPDB4OSAweDQ2
YyAweDQ0Mj47CgkJCWNsb2NrcyA9IDwweDIxIDB4MWViPjsKCQkJY2xvY2stbmFtZXMgPSAiY2Fw
LWNsayI7CgkJCWxpbnV4LHBoYW5kbGUgPSA8MHhhPjsKCQkJcGhhbmRsZSA9IDwweGE+OwoJCX07
Cgl9OwoKCXBmc2QgewoJCW51bV9yZXNvdXJjZXMgPSA8MHgwPjsKCQlzZWNyZXQgPSA8MHgyZj47
CgkJYWN0aXZlX3N0ZXBzID0gPDB4YT47CgkJYWN0aXZlX3JwbSA9IDwweDAgMHgzZTggMHg3ZDAg
MHhiYjggMHhmYTAgMHgxMzg4IDB4MTc3MCAweDFiNTggMHgyNzEwIDB4MmFmOD47CgkJYWN0aXZl
X3JydSA9IDwweDI4IDB4MiAweDEgMHgxIDB4MSAweDEgMHgxIDB4MSAweDEgMHgxPjsKCQlhY3Rp
dmVfcnJkID0gPDB4MjggMHgyIDB4MSAweDEgMHgxIDB4MSAweDEgMHgxIDB4MSAweDE+OwoJCXN0
YXRlX2NhcF9sb29rdXAgPSA8MHgyIDB4MiAweDIgMHgyIDB4MyAweDMgMHgzIDB4NCAweDQgMHg0
PjsKCQlwd21fcGVyaW9kID0gPDB4YjExNj47CgkJcHdtX2lkID0gPDB4Mz47CgkJc3RlcF90aW1l
ID0gPDB4NjQ+OwoJCXN0YXRlX2NhcCA9IDwweDc+OwoJCWFjdGl2ZV9wd21fbWF4ID0gPDB4ZmY+
OwoJCXRhY2hfZ3BpbyA9IDwweDU2IDB4Y2EgMHgxPjsKCQlwd21fZ3BpbyA9IDwweDU2IDB4Mjcg
MHgxPjsKCQlwd21fcG9sYXJpdHkgPSA8MHgwPjsKCQlzdXNwZW5kX3N0YXRlID0gPDB4MD47CgkJ
dGFjaF9wZXJpb2QgPSA8MHgzZTg+OwoJCWxpbnV4LHBoYW5kbGUgPSA8MHhhNj47CgkJcGhhbmRs
ZSA9IDwweGE2PjsKCX07CgoJdGVncmEtY2FtZXJhLXBsYXRmb3JtIHsKCQljb21wYXRpYmxlID0g
Im52aWRpYSwgdGVncmEtY2FtZXJhLXBsYXRmb3JtIjsKCQlzdGF0dXMgPSAib2theSI7CgkJbnVt
X2NzaV9sYW5lcyA9IDwweDQ+OwoJCW1heF9sYW5lX3NwZWVkID0gPDB4MTZlMzYwPjsKCQltaW5f
Yml0c19wZXJfcGl4ZWwgPSA8MHhhPjsKCQl2aV9wZWFrX2J5dGVfcGVyX3BpeGVsID0gPDB4Mj47
CgkJdmlfYndfbWFyZ2luX3BjdCA9IDwweDE5PjsKCQltYXhfcGl4ZWxfcmF0ZSA9IDwweDNhOTgw
PjsKCQlpc3BfcGVha19ieXRlX3Blcl9waXhlbCA9IDwweDU+OwoJCWlzcF9id19tYXJnaW5fcGN0
ID0gPDB4MTk+OwoJCWxpbnV4LHBoYW5kbGUgPSA8MHhjND47CgkJcGhhbmRsZSA9IDwweGM0PjsK
CgkJbW9kdWxlcyB7CgoJCQltb2R1bGUwIHsKCQkJCWJhZGdlID0gInBvcmdfZnJvbnRfUkJQQ1Yy
IjsKCQkJCXBvc2l0aW9uID0gImZyb250IjsKCQkJCW9yaWVudGF0aW9uID0gWzMxIDAwXTsKCQkJ
CWxpbnV4LHBoYW5kbGUgPSA8MHhiYT47CgkJCQlwaGFuZGxlID0gPDB4YmE+OwoKCQkJCWRyaXZl
cm5vZGUwIHsKCQkJCQlwY2xfaWQgPSAidjRsMl9zZW5zb3IiOwoJCQkJCWRldm5hbWUgPSAiaW14
MjE5IDctMDAxMCI7CgkJCQkJcHJvYy1kZXZpY2UtdHJlZSA9ICIvcHJvYy9kZXZpY2UtdHJlZS9j
YW1faTJjbXV4L2kyY0AwL3JicGN2Ml9pbXgyMTlfYUAxMCI7CgkJCQkJbGludXgscGhhbmRsZSA9
IDwweGJiPjsKCQkJCQlwaGFuZGxlID0gPDB4YmI+OwoJCQkJfTsKCgkJCQlkcml2ZXJub2RlMSB7
CgkJCQkJcGNsX2lkID0gInY0bDJfbGVucyI7CgkJCQkJcHJvYy1kZXZpY2UtdHJlZSA9ICIvcHJv
Yy9kZXZpY2UtdHJlZS9sZW5zX2lteDIxOUBSQlBDVjIvIjsKCQkJCQlsaW51eCxwaGFuZGxlID0g
PDB4YmM+OwoJCQkJCXBoYW5kbGUgPSA8MHhiYz47CgkJCQl9OwoJCQl9OwoKCQkJbW9kdWxlMSB7
CgkJCQliYWRnZSA9ICJwb3JnX3JlYXJfUkJQQ1YyIjsKCQkJCXBvc2l0aW9uID0gInJlYXIiOwoJ
CQkJb3JpZW50YXRpb24gPSBbMzEgMDBdOwoJCQkJbGludXgscGhhbmRsZSA9IDwweGM1PjsKCQkJ
CXBoYW5kbGUgPSA8MHhjNT47CgoJCQkJZHJpdmVybm9kZTAgewoJCQkJCXBjbF9pZCA9ICJ2NGwy
X3NlbnNvciI7CgkJCQkJZGV2bmFtZSA9ICJpbXgyMTkgOC0wMDEwIjsKCQkJCQlwcm9jLWRldmlj
ZS10cmVlID0gIi9wcm9jL2RldmljZS10cmVlL2NhbV9pMmNtdXgvaTJjQDEvcmJwY3YyX2lteDIx
OV9lQDEwIjsKCQkJCQlsaW51eCxwaGFuZGxlID0gPDB4Y2I+OwoJCQkJCXBoYW5kbGUgPSA8MHhj
Yj47CgkJCQl9OwoKCQkJCWRyaXZlcm5vZGUxIHsKCQkJCQlwY2xfaWQgPSAidjRsMl9sZW5zIjsK
CQkJCQlwcm9jLWRldmljZS10cmVlID0gIi9wcm9jL2RldmljZS10cmVlL2xlbnNfaW14MjE5QFJC
UENWMi8iOwoJCQkJCWxpbnV4LHBoYW5kbGUgPSA8MHhjYz47CgkJCQkJcGhhbmRsZSA9IDwweGNj
PjsKCQkJCX07CgkJCX07CgkJfTsKCX07CgoJbGVuc19pbXgyMTlAUkJQQ1YyIHsKCQltaW5fZm9j
dXNfZGlzdGFuY2UgPSAiMC4wIjsKCQloeXBlcl9mb2NhbCA9ICIwLjAiOwoJCWZvY2FsX2xlbmd0
aCA9ICIzLjA0IjsKCQlmX251bWJlciA9ICIyLjAiOwoJCWFwZXJ0dXJlID0gIjAuMCI7Cgl9OwoK
CWNhbV9pMmNtdXggewoJCWNvbXBhdGlibGUgPSAiaTJjLW11eC1ncGlvIjsKCQkjYWRkcmVzcy1j
ZWxscyA9IDwweDE+OwoJCSNzaXplLWNlbGxzID0gPDB4MD47CgkJbXV4LWdwaW9zID0gPDB4NTYg
MHg0MCAweDA+OwoJCWkyYy1wYXJlbnQgPSA8MHhhOD47CgkJc3RhdHVzID0gImRpc2FibGVkIjsK
CQlsaW51eCxwaGFuZGxlID0gPDB4ZDE+OwoJCXBoYW5kbGUgPSA8MHhkMT47CgoJCWkyY0AwIHsK
CQkJc3RhdHVzID0gImRpc2FibGVkIjsKCQkJcmVnID0gPDB4MD47CgkJCSNhZGRyZXNzLWNlbGxz
ID0gPDB4MT47CgkJCSNzaXplLWNlbGxzID0gPDB4MD47CgkJCWxpbnV4LHBoYW5kbGUgPSA8MHhk
Mj47CgkJCXBoYW5kbGUgPSA8MHhkMj47CgoJCQlyYnBjdjJfaW14MjE5X2FAMTAgewoJCQkJY29t
cGF0aWJsZSA9ICJudmlkaWEsaW14MjE5IjsKCQkJCXJlZyA9IDwweDEwPjsKCQkJCWRldm5vZGUg
PSAidmlkZW8wIjsKCQkJCXBoeXNpY2FsX3cgPSAiMy42ODAiOwoJCQkJcGh5c2ljYWxfaCA9ICIy
Ljc2MCI7CgkJCQlzZW5zb3JfbW9kZWwgPSAiaW14MjE5IjsKCQkJCXVzZV9zZW5zb3JfbW9kZV9p
ZCA9ICJ0cnVlIjsKCQkJCXN0YXR1cyA9ICJkaXNhYmxlZCI7CgkJCQlyZXNldC1ncGlvcyA9IDww
eDU2IDB4OTcgMHgwPjsKCQkJCWxpbnV4LHBoYW5kbGUgPSA8MHhjOT47CgkJCQlwaGFuZGxlID0g
PDB4Yzk+OwoKCQkJCW1vZGUwIHsKCQkJCQltY2xrX2toeiA9ICIyNDAwMCI7CgkJCQkJbnVtX2xh
bmVzID0gWzMyIDAwXTsKCQkJCQl0ZWdyYV9zaW50ZXJmYWNlID0gInNlcmlhbF9hIjsKCQkJCQlw
aHlfbW9kZSA9ICJEUEhZIjsKCQkJCQlkaXNjb250aW51b3VzX2NsayA9ICJ5ZXMiOwoJCQkJCWRw
Y21fZW5hYmxlID0gImZhbHNlIjsKCQkJCQljaWxfc2V0dGxldGltZSA9IFszMCAwMF07CgkJCQkJ
YWN0aXZlX3cgPSAiMzI2NCI7CgkJCQkJYWN0aXZlX2ggPSAiMjQ2NCI7CgkJCQkJcGl4ZWxfdCA9
ICJiYXllcl9yZ2diIjsKCQkJCQlyZWFkb3V0X29yaWVudGF0aW9uID0gIjkwIjsKCQkJCQlsaW5l
X2xlbmd0aCA9ICIzNDQ4IjsKCQkJCQlpbmhlcmVudF9nYWluID0gWzMxIDAwXTsKCQkJCQltY2xr
X211bHRpcGxpZXIgPSAiOS4zMyI7CgkJCQkJcGl4X2Nsa19oeiA9ICIxODI0MDAwMDAiOwoJCQkJ
CWdhaW5fZmFjdG9yID0gIjE2IjsKCQkJCQlmcmFtZXJhdGVfZmFjdG9yID0gIjEwMDAwMDAiOwoJ
CQkJCWV4cG9zdXJlX2ZhY3RvciA9ICIxMDAwMDAwIjsKCQkJCQltaW5fZ2Fpbl92YWwgPSAiMTYi
OwoJCQkJCW1heF9nYWluX3ZhbCA9ICIxNzAiOwoJCQkJCXN0ZXBfZ2Fpbl92YWwgPSBbMzEgMDBd
OwoJCQkJCWRlZmF1bHRfZ2FpbiA9ICIxNiI7CgkJCQkJbWluX2hkcl9yYXRpbyA9IFszMSAwMF07
CgkJCQkJbWF4X2hkcl9yYXRpbyA9IFszMSAwMF07CgkJCQkJbWluX2ZyYW1lcmF0ZSA9ICIyMDAw
MDAwIjsKCQkJCQltYXhfZnJhbWVyYXRlID0gIjIxMDAwMDAwIjsKCQkJCQlzdGVwX2ZyYW1lcmF0
ZSA9IFszMSAwMF07CgkJCQkJZGVmYXVsdF9mcmFtZXJhdGUgPSAiMjEwMDAwMDAiOwoJCQkJCW1p
bl9leHBfdGltZSA9ICIxMyI7CgkJCQkJbWF4X2V4cF90aW1lID0gIjY4MzcwOSI7CgkJCQkJc3Rl
cF9leHBfdGltZSA9IFszMSAwMF07CgkJCQkJZGVmYXVsdF9leHBfdGltZSA9ICIyNDk1IjsKCQkJ
CQllbWJlZGRlZF9tZXRhZGF0YV9oZWlnaHQgPSBbMzIgMDBdOwoJCQkJfTsKCgkJCQltb2RlMSB7
CgkJCQkJbWNsa19raHogPSAiMjQwMDAiOwoJCQkJCW51bV9sYW5lcyA9IFszMiAwMF07CgkJCQkJ
dGVncmFfc2ludGVyZmFjZSA9ICJzZXJpYWxfYSI7CgkJCQkJcGh5X21vZGUgPSAiRFBIWSI7CgkJ
CQkJZGlzY29udGludW91c19jbGsgPSAieWVzIjsKCQkJCQlkcGNtX2VuYWJsZSA9ICJmYWxzZSI7
CgkJCQkJY2lsX3NldHRsZXRpbWUgPSBbMzAgMDBdOwoJCQkJCWFjdGl2ZV93ID0gIjMyNjQiOwoJ
CQkJCWFjdGl2ZV9oID0gIjE4NDgiOwoJCQkJCXBpeGVsX3QgPSAiYmF5ZXJfcmdnYiI7CgkJCQkJ
cmVhZG91dF9vcmllbnRhdGlvbiA9ICI5MCI7CgkJCQkJbGluZV9sZW5ndGggPSAiMzQ0OCI7CgkJ
CQkJaW5oZXJlbnRfZ2FpbiA9IFszMSAwMF07CgkJCQkJbWNsa19tdWx0aXBsaWVyID0gIjkuMzMi
OwoJCQkJCXBpeF9jbGtfaHogPSAiMTgyNDAwMDAwIjsKCQkJCQlnYWluX2ZhY3RvciA9ICIxNiI7
CgkJCQkJZnJhbWVyYXRlX2ZhY3RvciA9ICIxMDAwMDAwIjsKCQkJCQlleHBvc3VyZV9mYWN0b3Ig
PSAiMTAwMDAwMCI7CgkJCQkJbWluX2dhaW5fdmFsID0gIjE2IjsKCQkJCQltYXhfZ2Fpbl92YWwg
PSAiMTcwIjsKCQkJCQlzdGVwX2dhaW5fdmFsID0gWzMxIDAwXTsKCQkJCQlkZWZhdWx0X2dhaW4g
PSAiMTYiOwoJCQkJCW1pbl9oZHJfcmF0aW8gPSBbMzEgMDBdOwoJCQkJCW1heF9oZHJfcmF0aW8g
PSBbMzEgMDBdOwoJCQkJCW1pbl9mcmFtZXJhdGUgPSAiMjAwMDAwMCI7CgkJCQkJbWF4X2ZyYW1l
cmF0ZSA9ICIyODAwMDAwMCI7CgkJCQkJc3RlcF9mcmFtZXJhdGUgPSBbMzEgMDBdOwoJCQkJCWRl
ZmF1bHRfZnJhbWVyYXRlID0gIjI4MDAwMDAwIjsKCQkJCQltaW5fZXhwX3RpbWUgPSAiMTMiOwoJ
CQkJCW1heF9leHBfdGltZSA9ICI2ODM3MDkiOwoJCQkJCXN0ZXBfZXhwX3RpbWUgPSBbMzEgMDBd
OwoJCQkJCWRlZmF1bHRfZXhwX3RpbWUgPSAiMjQ5NSI7CgkJCQkJZW1iZWRkZWRfbWV0YWRhdGFf
aGVpZ2h0ID0gWzMyIDAwXTsKCQkJCX07CgoJCQkJbW9kZTIgewoJCQkJCW1jbGtfa2h6ID0gIjI0
MDAwIjsKCQkJCQludW1fbGFuZXMgPSBbMzIgMDBdOwoJCQkJCXRlZ3JhX3NpbnRlcmZhY2UgPSAi
c2VyaWFsX2EiOwoJCQkJCXBoeV9tb2RlID0gIkRQSFkiOwoJCQkJCWRpc2NvbnRpbnVvdXNfY2xr
ID0gInllcyI7CgkJCQkJZHBjbV9lbmFibGUgPSAiZmFsc2UiOwoJCQkJCWNpbF9zZXR0bGV0aW1l
ID0gWzMwIDAwXTsKCQkJCQlhY3RpdmVfdyA9ICIxOTIwIjsKCQkJCQlhY3RpdmVfaCA9ICIxMDgw
IjsKCQkJCQlwaXhlbF90ID0gImJheWVyX3JnZ2IiOwoJCQkJCXJlYWRvdXRfb3JpZW50YXRpb24g
PSAiOTAiOwoJCQkJCWxpbmVfbGVuZ3RoID0gIjM0NDgiOwoJCQkJCWluaGVyZW50X2dhaW4gPSBb
MzEgMDBdOwoJCQkJCW1jbGtfbXVsdGlwbGllciA9ICI5LjMzIjsKCQkJCQlwaXhfY2xrX2h6ID0g
IjE4MjQwMDAwMCI7CgkJCQkJZ2Fpbl9mYWN0b3IgPSAiMTYiOwoJCQkJCWZyYW1lcmF0ZV9mYWN0
b3IgPSAiMTAwMDAwMCI7CgkJCQkJZXhwb3N1cmVfZmFjdG9yID0gIjEwMDAwMDAiOwoJCQkJCW1p
bl9nYWluX3ZhbCA9ICIxNiI7CgkJCQkJbWF4X2dhaW5fdmFsID0gIjE3MCI7CgkJCQkJc3RlcF9n
YWluX3ZhbCA9IFszMSAwMF07CgkJCQkJZGVmYXVsdF9nYWluID0gIjE2IjsKCQkJCQltaW5faGRy
X3JhdGlvID0gWzMxIDAwXTsKCQkJCQltYXhfaGRyX3JhdGlvID0gWzMxIDAwXTsKCQkJCQltaW5f
ZnJhbWVyYXRlID0gIjIwMDAwMDAiOwoJCQkJCW1heF9mcmFtZXJhdGUgPSAiMzAwMDAwMDAiOwoJ
CQkJCXN0ZXBfZnJhbWVyYXRlID0gWzMxIDAwXTsKCQkJCQlkZWZhdWx0X2ZyYW1lcmF0ZSA9ICIz
MDAwMDAwMCI7CgkJCQkJbWluX2V4cF90aW1lID0gIjEzIjsKCQkJCQltYXhfZXhwX3RpbWUgPSAi
NjgzNzA5IjsKCQkJCQlzdGVwX2V4cF90aW1lID0gWzMxIDAwXTsKCQkJCQlkZWZhdWx0X2V4cF90
aW1lID0gIjI0OTUiOwoJCQkJCWVtYmVkZGVkX21ldGFkYXRhX2hlaWdodCA9IFszMiAwMF07CgkJ
CQl9OwoKCQkJCW1vZGUzIHsKCQkJCQltY2xrX2toeiA9ICIyNDAwMCI7CgkJCQkJbnVtX2xhbmVz
ID0gWzMyIDAwXTsKCQkJCQl0ZWdyYV9zaW50ZXJmYWNlID0gInNlcmlhbF9hIjsKCQkJCQlwaHlf
bW9kZSA9ICJEUEhZIjsKCQkJCQlkaXNjb250aW51b3VzX2NsayA9ICJ5ZXMiOwoJCQkJCWRwY21f
ZW5hYmxlID0gImZhbHNlIjsKCQkJCQljaWxfc2V0dGxldGltZSA9IFszMCAwMF07CgkJCQkJYWN0
aXZlX3cgPSAiMTI4MCI7CgkJCQkJYWN0aXZlX2ggPSAiNzIwIjsKCQkJCQlwaXhlbF90ID0gImJh
eWVyX3JnZ2IiOwoJCQkJCXJlYWRvdXRfb3JpZW50YXRpb24gPSAiOTAiOwoJCQkJCWxpbmVfbGVu
Z3RoID0gIjM0NDgiOwoJCQkJCWluaGVyZW50X2dhaW4gPSBbMzEgMDBdOwoJCQkJCW1jbGtfbXVs
dGlwbGllciA9ICI5LjMzIjsKCQkJCQlwaXhfY2xrX2h6ID0gIjE4MjQwMDAwMCI7CgkJCQkJZ2Fp
bl9mYWN0b3IgPSAiMTYiOwoJCQkJCWZyYW1lcmF0ZV9mYWN0b3IgPSAiMTAwMDAwMCI7CgkJCQkJ
ZXhwb3N1cmVfZmFjdG9yID0gIjEwMDAwMDAiOwoJCQkJCW1pbl9nYWluX3ZhbCA9ICIxNiI7CgkJ
CQkJbWF4X2dhaW5fdmFsID0gIjE3MCI7CgkJCQkJc3RlcF9nYWluX3ZhbCA9IFszMSAwMF07CgkJ
CQkJZGVmYXVsdF9nYWluID0gIjE2IjsKCQkJCQltaW5faGRyX3JhdGlvID0gWzMxIDAwXTsKCQkJ
CQltYXhfaGRyX3JhdGlvID0gWzMxIDAwXTsKCQkJCQltaW5fZnJhbWVyYXRlID0gIjIwMDAwMDAi
OwoJCQkJCW1heF9mcmFtZXJhdGUgPSAiNjAwMDAwMDAiOwoJCQkJCXN0ZXBfZnJhbWVyYXRlID0g
WzMxIDAwXTsKCQkJCQlkZWZhdWx0X2ZyYW1lcmF0ZSA9ICI2MDAwMDAwMCI7CgkJCQkJbWluX2V4
cF90aW1lID0gIjEzIjsKCQkJCQltYXhfZXhwX3RpbWUgPSAiNjgzNzA5IjsKCQkJCQlzdGVwX2V4
cF90aW1lID0gWzMxIDAwXTsKCQkJCQlkZWZhdWx0X2V4cF90aW1lID0gIjI0OTUiOwoJCQkJCWVt
YmVkZGVkX21ldGFkYXRhX2hlaWdodCA9IFszMiAwMF07CgkJCQl9OwoKCQkJCW1vZGU0IHsKCQkJ
CQltY2xrX2toeiA9ICIyNDAwMCI7CgkJCQkJbnVtX2xhbmVzID0gWzMyIDAwXTsKCQkJCQl0ZWdy
YV9zaW50ZXJmYWNlID0gInNlcmlhbF9hIjsKCQkJCQlwaHlfbW9kZSA9ICJEUEhZIjsKCQkJCQlk
aXNjb250aW51b3VzX2NsayA9ICJ5ZXMiOwoJCQkJCWRwY21fZW5hYmxlID0gImZhbHNlIjsKCQkJ
CQljaWxfc2V0dGxldGltZSA9IFszMCAwMF07CgkJCQkJYWN0aXZlX3cgPSAiMTI4MCI7CgkJCQkJ
YWN0aXZlX2ggPSAiNzIwIjsKCQkJCQlwaXhlbF90ID0gImJheWVyX3JnZ2IiOwoJCQkJCXJlYWRv
dXRfb3JpZW50YXRpb24gPSAiOTAiOwoJCQkJCWxpbmVfbGVuZ3RoID0gIjM0NDgiOwoJCQkJCWlu
aGVyZW50X2dhaW4gPSBbMzEgMDBdOwoJCQkJCW1jbGtfbXVsdGlwbGllciA9ICI5LjMzIjsKCQkJ
CQlwaXhfY2xrX2h6ID0gIjE2OTYwMDAwMCI7CgkJCQkJZ2Fpbl9mYWN0b3IgPSAiMTYiOwoJCQkJ
CWZyYW1lcmF0ZV9mYWN0b3IgPSAiMTAwMDAwMCI7CgkJCQkJZXhwb3N1cmVfZmFjdG9yID0gIjEw
MDAwMDAiOwoJCQkJCW1pbl9nYWluX3ZhbCA9ICIxNiI7CgkJCQkJbWF4X2dhaW5fdmFsID0gIjE3
MCI7CgkJCQkJc3RlcF9nYWluX3ZhbCA9IFszMSAwMF07CgkJCQkJZGVmYXVsdF9nYWluID0gIjE2
IjsKCQkJCQltaW5faGRyX3JhdGlvID0gWzMxIDAwXTsKCQkJCQltYXhfaGRyX3JhdGlvID0gWzMx
IDAwXTsKCQkJCQltaW5fZnJhbWVyYXRlID0gIjIwMDAwMDAiOwoJCQkJCW1heF9mcmFtZXJhdGUg
PSAiMTIwMDAwMDAwIjsKCQkJCQlzdGVwX2ZyYW1lcmF0ZSA9IFszMSAwMF07CgkJCQkJZGVmYXVs
dF9mcmFtZXJhdGUgPSAiMTIwMDAwMDAwIjsKCQkJCQltaW5fZXhwX3RpbWUgPSAiMTMiOwoJCQkJ
CW1heF9leHBfdGltZSA9ICI2ODM3MDkiOwoJCQkJCXN0ZXBfZXhwX3RpbWUgPSBbMzEgMDBdOwoJ
CQkJCWRlZmF1bHRfZXhwX3RpbWUgPSAiMjQ5NSI7CgkJCQkJZW1iZWRkZWRfbWV0YWRhdGFfaGVp
Z2h0ID0gWzMyIDAwXTsKCQkJCX07CgoJCQkJcG9ydHMgewoJCQkJCSNhZGRyZXNzLWNlbGxzID0g
PDB4MT47CgkJCQkJI3NpemUtY2VsbHMgPSA8MHgwPjsKCgkJCQkJcG9ydEAwIHsKCQkJCQkJcmVn
ID0gPDB4MD47CgoJCQkJCQllbmRwb2ludCB7CgkJCQkJCQlwb3J0LWluZGV4ID0gPDB4MD47CgkJ
CQkJCQlidXMtd2lkdGggPSA8MHgyPjsKCQkJCQkJCXJlbW90ZS1lbmRwb2ludCA9IDwweDczPjsK
CQkJCQkJCWxpbnV4LHBoYW5kbGUgPSA8MHg3ND47CgkJCQkJCQlwaGFuZGxlID0gPDB4NzQ+OwoJ
CQkJCQl9OwoJCQkJCX07CgkJCQl9OwoJCQl9OwoJCX07CgoJCWkyY0AxIHsKCQkJc3RhdHVzID0g
ImRpc2FibGVkIjsKCQkJcmVnID0gPDB4MT47CgkJCSNhZGRyZXNzLWNlbGxzID0gPDB4MT47CgkJ
CSNzaXplLWNlbGxzID0gPDB4MD47CgkJCWxpbnV4LHBoYW5kbGUgPSA8MHhkMz47CgkJCXBoYW5k
bGUgPSA8MHhkMz47CgoJCQlyYnBjdjJfaW14MjE5X2VAMTAgewoJCQkJY29tcGF0aWJsZSA9ICJu
dmlkaWEsaW14MjE5IjsKCQkJCXJlZyA9IDwweDEwPjsKCQkJCWRldm5vZGUgPSAidmlkZW8xIjsK
CQkJCXBoeXNpY2FsX3cgPSAiMy42ODAiOwoJCQkJcGh5c2ljYWxfaCA9ICIyLjc2MCI7CgkJCQlz
ZW5zb3JfbW9kZWwgPSAiaW14MjE5IjsKCQkJCXVzZV9zZW5zb3JfbW9kZV9pZCA9ICJ0cnVlIjsK
CQkJCXN0YXR1cyA9ICJkaXNhYmxlZCI7CgkJCQlyZXNldC1ncGlvcyA9IDwweDU2IDB4OTggMHgw
PjsKCQkJCWxpbnV4LHBoYW5kbGUgPSA8MHhjYT47CgkJCQlwaGFuZGxlID0gPDB4Y2E+OwoKCQkJ
CW1vZGUwIHsKCQkJCQltY2xrX2toeiA9ICIyNDAwMCI7CgkJCQkJbnVtX2xhbmVzID0gWzMyIDAw
XTsKCQkJCQl0ZWdyYV9zaW50ZXJmYWNlID0gInNlcmlhbF9lIjsKCQkJCQlwaHlfbW9kZSA9ICJE
UEhZIjsKCQkJCQlkaXNjb250aW51b3VzX2NsayA9ICJ5ZXMiOwoJCQkJCWRwY21fZW5hYmxlID0g
ImZhbHNlIjsKCQkJCQljaWxfc2V0dGxldGltZSA9IFszMCAwMF07CgkJCQkJYWN0aXZlX3cgPSAi
MzI2NCI7CgkJCQkJYWN0aXZlX2ggPSAiMjQ2NCI7CgkJCQkJcGl4ZWxfdCA9ICJiYXllcl9yZ2di
IjsKCQkJCQlyZWFkb3V0X29yaWVudGF0aW9uID0gIjkwIjsKCQkJCQlsaW5lX2xlbmd0aCA9ICIz
NDQ4IjsKCQkJCQlpbmhlcmVudF9nYWluID0gWzMxIDAwXTsKCQkJCQltY2xrX211bHRpcGxpZXIg
PSAiOS4zMyI7CgkJCQkJcGl4X2Nsa19oeiA9ICIxODI0MDAwMDAiOwoJCQkJCWdhaW5fZmFjdG9y
ID0gIjE2IjsKCQkJCQlmcmFtZXJhdGVfZmFjdG9yID0gIjEwMDAwMDAiOwoJCQkJCWV4cG9zdXJl
X2ZhY3RvciA9ICIxMDAwMDAwIjsKCQkJCQltaW5fZ2Fpbl92YWwgPSAiMTYiOwoJCQkJCW1heF9n
YWluX3ZhbCA9ICIxNzAiOwoJCQkJCXN0ZXBfZ2Fpbl92YWwgPSBbMzEgMDBdOwoJCQkJCWRlZmF1
bHRfZ2FpbiA9ICIxNiI7CgkJCQkJbWluX2hkcl9yYXRpbyA9IFszMSAwMF07CgkJCQkJbWF4X2hk
cl9yYXRpbyA9IFszMSAwMF07CgkJCQkJbWluX2ZyYW1lcmF0ZSA9ICIyMDAwMDAwIjsKCQkJCQlt
YXhfZnJhbWVyYXRlID0gIjIxMDAwMDAwIjsKCQkJCQlzdGVwX2ZyYW1lcmF0ZSA9IFszMSAwMF07
CgkJCQkJZGVmYXVsdF9mcmFtZXJhdGUgPSAiMjEwMDAwMDAiOwoJCQkJCW1pbl9leHBfdGltZSA9
ICIxMyI7CgkJCQkJbWF4X2V4cF90aW1lID0gIjY4MzcwOSI7CgkJCQkJc3RlcF9leHBfdGltZSA9
IFszMSAwMF07CgkJCQkJZGVmYXVsdF9leHBfdGltZSA9ICIyNDk1IjsKCQkJCQllbWJlZGRlZF9t
ZXRhZGF0YV9oZWlnaHQgPSBbMzIgMDBdOwoJCQkJfTsKCgkJCQltb2RlMSB7CgkJCQkJbWNsa19r
aHogPSAiMjQwMDAiOwoJCQkJCW51bV9sYW5lcyA9IFszMiAwMF07CgkJCQkJdGVncmFfc2ludGVy
ZmFjZSA9ICJzZXJpYWxfZSI7CgkJCQkJcGh5X21vZGUgPSAiRFBIWSI7CgkJCQkJZGlzY29udGlu
dW91c19jbGsgPSAieWVzIjsKCQkJCQlkcGNtX2VuYWJsZSA9ICJmYWxzZSI7CgkJCQkJY2lsX3Nl
dHRsZXRpbWUgPSBbMzAgMDBdOwoJCQkJCWFjdGl2ZV93ID0gIjMyNjQiOwoJCQkJCWFjdGl2ZV9o
ID0gIjE4NDgiOwoJCQkJCXBpeGVsX3QgPSAiYmF5ZXJfcmdnYiI7CgkJCQkJcmVhZG91dF9vcmll
bnRhdGlvbiA9ICI5MCI7CgkJCQkJbGluZV9sZW5ndGggPSAiMzQ0OCI7CgkJCQkJaW5oZXJlbnRf
Z2FpbiA9IFszMSAwMF07CgkJCQkJbWNsa19tdWx0aXBsaWVyID0gIjkuMzMiOwoJCQkJCXBpeF9j
bGtfaHogPSAiMTgyNDAwMDAwIjsKCQkJCQlnYWluX2ZhY3RvciA9ICIxNiI7CgkJCQkJZnJhbWVy
YXRlX2ZhY3RvciA9ICIxMDAwMDAwIjsKCQkJCQlleHBvc3VyZV9mYWN0b3IgPSAiMTAwMDAwMCI7
CgkJCQkJbWluX2dhaW5fdmFsID0gIjE2IjsKCQkJCQltYXhfZ2Fpbl92YWwgPSAiMTcwIjsKCQkJ
CQlzdGVwX2dhaW5fdmFsID0gWzMxIDAwXTsKCQkJCQlkZWZhdWx0X2dhaW4gPSAiMTYiOwoJCQkJ
CW1pbl9oZHJfcmF0aW8gPSBbMzEgMDBdOwoJCQkJCW1heF9oZHJfcmF0aW8gPSBbMzEgMDBdOwoJ
CQkJCW1pbl9mcmFtZXJhdGUgPSAiMjAwMDAwMCI7CgkJCQkJbWF4X2ZyYW1lcmF0ZSA9ICIyODAw
MDAwMCI7CgkJCQkJc3RlcF9mcmFtZXJhdGUgPSBbMzEgMDBdOwoJCQkJCWRlZmF1bHRfZnJhbWVy
YXRlID0gIjI4MDAwMDAwIjsKCQkJCQltaW5fZXhwX3RpbWUgPSAiMTMiOwoJCQkJCW1heF9leHBf
dGltZSA9ICI2ODM3MDkiOwoJCQkJCXN0ZXBfZXhwX3RpbWUgPSBbMzEgMDBdOwoJCQkJCWRlZmF1
bHRfZXhwX3RpbWUgPSAiMjQ5NSI7CgkJCQkJZW1iZWRkZWRfbWV0YWRhdGFfaGVpZ2h0ID0gWzMy
IDAwXTsKCQkJCX07CgoJCQkJbW9kZTIgewoJCQkJCW1jbGtfa2h6ID0gIjI0MDAwIjsKCQkJCQlu
dW1fbGFuZXMgPSBbMzIgMDBdOwoJCQkJCXRlZ3JhX3NpbnRlcmZhY2UgPSAic2VyaWFsX2UiOwoJ
CQkJCXBoeV9tb2RlID0gIkRQSFkiOwoJCQkJCWRpc2NvbnRpbnVvdXNfY2xrID0gInllcyI7CgkJ
CQkJZHBjbV9lbmFibGUgPSAiZmFsc2UiOwoJCQkJCWNpbF9zZXR0bGV0aW1lID0gWzMwIDAwXTsK
CQkJCQlhY3RpdmVfdyA9ICIxOTIwIjsKCQkJCQlhY3RpdmVfaCA9ICIxMDgwIjsKCQkJCQlwaXhl
bF90ID0gImJheWVyX3JnZ2IiOwoJCQkJCXJlYWRvdXRfb3JpZW50YXRpb24gPSAiOTAiOwoJCQkJ
CWxpbmVfbGVuZ3RoID0gIjM0NDgiOwoJCQkJCWluaGVyZW50X2dhaW4gPSBbMzEgMDBdOwoJCQkJ
CW1jbGtfbXVsdGlwbGllciA9ICI5LjMzIjsKCQkJCQlwaXhfY2xrX2h6ID0gIjE4MjQwMDAwMCI7
CgkJCQkJZ2Fpbl9mYWN0b3IgPSAiMTYiOwoJCQkJCWZyYW1lcmF0ZV9mYWN0b3IgPSAiMTAwMDAw
MCI7CgkJCQkJZXhwb3N1cmVfZmFjdG9yID0gIjEwMDAwMDAiOwoJCQkJCW1pbl9nYWluX3ZhbCA9
ICIxNiI7CgkJCQkJbWF4X2dhaW5fdmFsID0gIjE3MCI7CgkJCQkJc3RlcF9nYWluX3ZhbCA9IFsz
MSAwMF07CgkJCQkJZGVmYXVsdF9nYWluID0gIjE2IjsKCQkJCQltaW5faGRyX3JhdGlvID0gWzMx
IDAwXTsKCQkJCQltYXhfaGRyX3JhdGlvID0gWzMxIDAwXTsKCQkJCQltaW5fZnJhbWVyYXRlID0g
IjIwMDAwMDAiOwoJCQkJCW1heF9mcmFtZXJhdGUgPSAiMzAwMDAwMDAiOwoJCQkJCXN0ZXBfZnJh
bWVyYXRlID0gWzMxIDAwXTsKCQkJCQlkZWZhdWx0X2ZyYW1lcmF0ZSA9ICIzMDAwMDAwMCI7CgkJ
CQkJbWluX2V4cF90aW1lID0gIjEzIjsKCQkJCQltYXhfZXhwX3RpbWUgPSAiNjgzNzA5IjsKCQkJ
CQlzdGVwX2V4cF90aW1lID0gWzMxIDAwXTsKCQkJCQlkZWZhdWx0X2V4cF90aW1lID0gIjI0OTUi
OwoJCQkJCWVtYmVkZGVkX21ldGFkYXRhX2hlaWdodCA9IFszMiAwMF07CgkJCQl9OwoKCQkJCW1v
ZGUzIHsKCQkJCQltY2xrX2toeiA9ICIyNDAwMCI7CgkJCQkJbnVtX2xhbmVzID0gWzMyIDAwXTsK
CQkJCQl0ZWdyYV9zaW50ZXJmYWNlID0gInNlcmlhbF9lIjsKCQkJCQlwaHlfbW9kZSA9ICJEUEhZ
IjsKCQkJCQlkaXNjb250aW51b3VzX2NsayA9ICJ5ZXMiOwoJCQkJCWRwY21fZW5hYmxlID0gImZh
bHNlIjsKCQkJCQljaWxfc2V0dGxldGltZSA9IFszMCAwMF07CgkJCQkJYWN0aXZlX3cgPSAiMTI4
MCI7CgkJCQkJYWN0aXZlX2ggPSAiNzIwIjsKCQkJCQlwaXhlbF90ID0gImJheWVyX3JnZ2IiOwoJ
CQkJCXJlYWRvdXRfb3JpZW50YXRpb24gPSAiOTAiOwoJCQkJCWxpbmVfbGVuZ3RoID0gIjM0NDgi
OwoJCQkJCWluaGVyZW50X2dhaW4gPSBbMzEgMDBdOwoJCQkJCW1jbGtfbXVsdGlwbGllciA9ICI5
LjMzIjsKCQkJCQlwaXhfY2xrX2h6ID0gIjE4MjQwMDAwMCI7CgkJCQkJZ2Fpbl9mYWN0b3IgPSAi
MTYiOwoJCQkJCWZyYW1lcmF0ZV9mYWN0b3IgPSAiMTAwMDAwMCI7CgkJCQkJZXhwb3N1cmVfZmFj
dG9yID0gIjEwMDAwMDAiOwoJCQkJCW1pbl9nYWluX3ZhbCA9ICIxNiI7CgkJCQkJbWF4X2dhaW5f
dmFsID0gIjE3MCI7CgkJCQkJc3RlcF9nYWluX3ZhbCA9IFszMSAwMF07CgkJCQkJZGVmYXVsdF9n
YWluID0gIjE2IjsKCQkJCQltaW5faGRyX3JhdGlvID0gWzMxIDAwXTsKCQkJCQltYXhfaGRyX3Jh
dGlvID0gWzMxIDAwXTsKCQkJCQltaW5fZnJhbWVyYXRlID0gIjIwMDAwMDAiOwoJCQkJCW1heF9m
cmFtZXJhdGUgPSAiNjAwMDAwMDAiOwoJCQkJCXN0ZXBfZnJhbWVyYXRlID0gWzMxIDAwXTsKCQkJ
CQlkZWZhdWx0X2ZyYW1lcmF0ZSA9ICI2MDAwMDAwMCI7CgkJCQkJbWluX2V4cF90aW1lID0gIjEz
IjsKCQkJCQltYXhfZXhwX3RpbWUgPSAiNjgzNzA5IjsKCQkJCQlzdGVwX2V4cF90aW1lID0gWzMx
IDAwXTsKCQkJCQlkZWZhdWx0X2V4cF90aW1lID0gIjI0OTUiOwoJCQkJCWVtYmVkZGVkX21ldGFk
YXRhX2hlaWdodCA9IFszMiAwMF07CgkJCQl9OwoKCQkJCW1vZGU0IHsKCQkJCQltY2xrX2toeiA9
ICIyNDAwMCI7CgkJCQkJbnVtX2xhbmVzID0gWzMyIDAwXTsKCQkJCQl0ZWdyYV9zaW50ZXJmYWNl
ID0gInNlcmlhbF9lIjsKCQkJCQlwaHlfbW9kZSA9ICJEUEhZIjsKCQkJCQlkaXNjb250aW51b3Vz
X2NsayA9ICJ5ZXMiOwoJCQkJCWRwY21fZW5hYmxlID0gImZhbHNlIjsKCQkJCQljaWxfc2V0dGxl
dGltZSA9IFszMCAwMF07CgkJCQkJYWN0aXZlX3cgPSAiMTI4MCI7CgkJCQkJYWN0aXZlX2ggPSAi
NzIwIjsKCQkJCQlwaXhlbF90ID0gImJheWVyX3JnZ2IiOwoJCQkJCXJlYWRvdXRfb3JpZW50YXRp
b24gPSAiOTAiOwoJCQkJCWxpbmVfbGVuZ3RoID0gIjM0NDgiOwoJCQkJCWluaGVyZW50X2dhaW4g
PSBbMzEgMDBdOwoJCQkJCW1jbGtfbXVsdGlwbGllciA9ICI5LjMzIjsKCQkJCQlwaXhfY2xrX2h6
ID0gIjE2OTYwMDAwMCI7CgkJCQkJZ2Fpbl9mYWN0b3IgPSAiMTYiOwoJCQkJCWZyYW1lcmF0ZV9m
YWN0b3IgPSAiMTAwMDAwMCI7CgkJCQkJZXhwb3N1cmVfZmFjdG9yID0gIjEwMDAwMDAiOwoJCQkJ
CW1pbl9nYWluX3ZhbCA9ICIxNiI7CgkJCQkJbWF4X2dhaW5fdmFsID0gIjE3MCI7CgkJCQkJc3Rl
cF9nYWluX3ZhbCA9IFszMSAwMF07CgkJCQkJZGVmYXVsdF9nYWluID0gIjE2IjsKCQkJCQltaW5f
aGRyX3JhdGlvID0gWzMxIDAwXTsKCQkJCQltYXhfaGRyX3JhdGlvID0gWzMxIDAwXTsKCQkJCQlt
aW5fZnJhbWVyYXRlID0gIjIwMDAwMDAiOwoJCQkJCW1heF9mcmFtZXJhdGUgPSAiMTIwMDAwMDAw
IjsKCQkJCQlzdGVwX2ZyYW1lcmF0ZSA9IFszMSAwMF07CgkJCQkJZGVmYXVsdF9mcmFtZXJhdGUg
PSAiMTIwMDAwMDAwIjsKCQkJCQltaW5fZXhwX3RpbWUgPSAiMTMiOwoJCQkJCW1heF9leHBfdGlt
ZSA9ICI2ODM3MDkiOwoJCQkJCXN0ZXBfZXhwX3RpbWUgPSBbMzEgMDBdOwoJCQkJCWRlZmF1bHRf
ZXhwX3RpbWUgPSAiMjQ5NSI7CgkJCQkJZW1iZWRkZWRfbWV0YWRhdGFfaGVpZ2h0ID0gWzMyIDAw
XTsKCQkJCX07CgoJCQkJcG9ydHMgewoJCQkJCSNhZGRyZXNzLWNlbGxzID0gPDB4MT47CgkJCQkJ
I3NpemUtY2VsbHMgPSA8MHgwPjsKCgkJCQkJcG9ydEAwIHsKCQkJCQkJcmVnID0gPDB4MD47CgoJ
CQkJCQllbmRwb2ludCB7CgkJCQkJCQlwb3J0LWluZGV4ID0gPDB4ND47CgkJCQkJCQlidXMtd2lk
dGggPSA8MHgyPjsKCQkJCQkJCXJlbW90ZS1lbmRwb2ludCA9IDwweGE5PjsKCQkJCQkJCWxpbnV4
LHBoYW5kbGUgPSA8MHg3Nj47CgkJCQkJCQlwaGFuZGxlID0gPDB4NzY+OwoJCQkJCQl9OwoJCQkJ
CX07CgkJCQl9OwoJCQl9OwoJCX07Cgl9OwoKCXRmZXNkIHsKCQlzZWNyZXQgPSA8MHgyNT47CgkJ
dG9mZnNldCA9IDwweDA+OwoJCXBvbGxpbmdfcGVyaW9kID0gPDB4NDRjPjsKCQluZGV2cyA9IDww
eDI+OwoJCWNkZXZfdHlwZSA9ICJwd20tZmFuIjsKCQl0enBfZ292ZXJub3JfbmFtZSA9ICJwaWRf
dGhlcm1hbF9nb3YiOwoJCWxpbnV4LHBoYW5kbGUgPSA8MHhhYT47CgkJcGhhbmRsZSA9IDwweGFh
PjsKCgkJZGV2MSB7CgkJCWRldl9kYXRhID0gIkNQVS10aGVybSI7CgkJCWNvZWZmcyA9IDwweDMy
IDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAg
MHgwIDB4MCAweDAgMHgwIDB4MD47CgkJfTsKCgkJZGV2MiB7CgkJCWRldl9kYXRhID0gIkdQVS10
aGVybSI7CgkJCWNvZWZmcyA9IDwweDMyIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAg
MHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MCAweDAgMHgwIDB4MD47CgkJfTsKCX07CgoJ
dGhlcm1hbC1mYW4tZXN0IHsKCQljb21wYXRpYmxlID0gInRoZXJtYWwtZmFuLWVzdCI7CgkJc3Rh
dHVzID0gIm9rYXkiOwoJCW51bV9yZXNvdXJjZXMgPSA8MHgwPjsKCQlzaGFyZWRfZGF0YSA9IDww
eGFhPjsKCQl0cmlwX2xlbmd0aCA9IDwweGE+OwoJCWFjdGl2ZV90cmlwX3RlbXBzID0gPDB4MCAw
eGM3MzggMHhlZTQ4IDB4MTE1NTggMHgxNDA1MCAweDIyMmUwIDB4MjQ5ZjAgMHgyNzEwMCAweDI5
ODEwIDB4MmJmMjA+OwoJCWFjdGl2ZV9oeXN0ZXJlc2lzID0gPDB4MCAweDNhOTggMHgyMzI4IDB4
MjMyOCAweDI3MTAgMHgwIDB4MCAweDAgMHgwIDB4MD47Cgl9OwoKCWdwaW8ta2V5cyB7CgkJY29t
cGF0aWJsZSA9ICJncGlvLWtleXMiOwoJCWdwaW8ta2V5cyxuYW1lID0gImdwaW8ta2V5cyI7CgkJ
c3RhdHVzID0gIm9rYXkiOwoJCWRpc2FibGUtb24tcmVjb3Zlcnkta2VybmVsOwoKCQlwb3dlciB7
CgkJCWxhYmVsID0gIlBvd2VyIjsKCQkJZ3Bpb3MgPSA8MHg1NiAweGJkIDB4MT47CgkJCWxpbnV4
LGNvZGUgPSA8MHg3ND47CgkJCWdwaW8ta2V5LHdha2V1cDsKCQkJZGVib3VuY2UtaW50ZXJ2YWwg
PSA8MHgxZT47CgkJCW52aWRpYSxwbWMtd2FrZXVwID0gPDB4MzcgMHgwIDB4MTggMHgwPjsKCQl9
OwoKCQlmb3JjZXJlY292ZXJ5IHsKCQkJbGFiZWwgPSAiRm9yY2VyZWNvdmVyeSI7CgkJCWdwaW9z
ID0gPDB4NTYgMHhiZSAweDE+OwoJCQlsaW51eCxjb2RlID0gPDB4NzQ+OwoJCQlncGlvLWtleSx3
YWtldXA7CgkJCWRlYm91bmNlLWludGVydmFsID0gPDB4MWU+OwoJCX07Cgl9OwoKCWdwaW8tdGlt
ZWQta2V5cyB7CgkJY29tcGF0aWJsZSA9ICJncGlvLXRpbWVkLWtleXMiOwoJCWdwaW8ta2V5cyxu
YW1lID0gImdwaW8tdGltZWQta2V5cyI7CgkJc3RhdHVzID0gImRpc2FibGVkIjsKCQlkaXNhYmxl
LW9uLXJlY292ZXJ5LWtlcm5lbDsKCgkJcG93ZXIgewoJCQlsYWJlbCA9ICJQb3dlciI7CgkJCWdw
aW9zID0gPDB4NTYgMHhiZCAweDE+OwoJCQlsaW51eCxudW1fY29kZXMgPSA8MHgzPjsKCQkJbGlu
dXgscHJlc3MtdGltZS1zZWNzID0gPDB4MSAweDMgMHg1PjsKCQkJbGludXgsa2V5LWNvZGVzID0g
PDB4NmMgMHgxYWYgMHgxYz47CgkJCWdwaW8ta2V5LHdha2V1cDsKCQl9OwoJfTsKCglzcGRpZi1k
aXQuMEAwIHsKCQljb21wYXRpYmxlID0gImxpbnV4LHNwZGlmLWRpdCI7CgkJcmVnID0gPDB4MCAw
eDAgMHgwIDB4MD47CgkJbGludXgscGhhbmRsZSA9IDwweDRmPjsKCQlwaGFuZGxlID0gPDB4NGY+
OwoJfTsKCglzcGRpZi1kaXQuMUAxIHsKCQljb21wYXRpYmxlID0gImxpbnV4LHNwZGlmLWRpdCI7
CgkJcmVnID0gPDB4MCAweDEgMHgwIDB4MD47CgkJbGludXgscGhhbmRsZSA9IDwweDUxPjsKCQlw
aGFuZGxlID0gPDB4NTE+OwoJfTsKCglzcGRpZi1kaXQuMkAyIHsKCQljb21wYXRpYmxlID0gImxp
bnV4LHNwZGlmLWRpdCI7CgkJcmVnID0gPDB4MCAweDIgMHgwIDB4MD47CgkJbGludXgscGhhbmRs
ZSA9IDwweDUzPjsKCQlwaGFuZGxlID0gPDB4NTM+OwoJfTsKCglzcGRpZi1kaXQuM0AzIHsKCQlj
b21wYXRpYmxlID0gImxpbnV4LHNwZGlmLWRpdCI7CgkJcmVnID0gPDB4MCAweDMgMHgwIDB4MD47
CgkJbGludXgscGhhbmRsZSA9IDwweDU1PjsKCQlwaGFuZGxlID0gPDB4NTU+OwoJfTsKCglzcGRp
Zi1kaXQuNEA0IHsKCQljb21wYXRpYmxlID0gImxpbnV4LHNwZGlmLWRpdCI7CgkJcmVnID0gPDB4
MCAweDQgMHgwIDB4MD47CgkJbGludXgscGhhbmRsZSA9IDwweDEyND47CgkJcGhhbmRsZSA9IDww
eDEyND47Cgl9OwoKCXNwZGlmLWRpdC41QDUgewoJCWNvbXBhdGlibGUgPSAibGludXgsc3BkaWYt
ZGl0IjsKCQlyZWcgPSA8MHgwIDB4NSAweDAgMHgwPjsKCQlsaW51eCxwaGFuZGxlID0gPDB4MTI1
PjsKCQlwaGFuZGxlID0gPDB4MTI1PjsKCX07CgoJc3BkaWYtZGl0LjZANiB7CgkJY29tcGF0aWJs
ZSA9ICJsaW51eCxzcGRpZi1kaXQiOwoJCXJlZyA9IDwweDAgMHg2IDB4MCAweDA+OwoJCWxpbnV4
LHBoYW5kbGUgPSA8MHgxMjY+OwoJCXBoYW5kbGUgPSA8MHgxMjY+OwoJfTsKCglzcGRpZi1kaXQu
N0A3IHsKCQljb21wYXRpYmxlID0gImxpbnV4LHNwZGlmLWRpdCI7CgkJcmVnID0gPDB4MCAweDcg
MHgwIDB4MD47CgkJbGludXgscGhhbmRsZSA9IDwweDEyNz47CgkJcGhhbmRsZSA9IDwweDEyNz47
Cgl9OwoKCWNwdWZyZXEgewoJCWNvbXBhdGlibGUgPSAibnZpZGlhLHRlZ3JhMjEwLWNwdWZyZXEi
OwoKCQljcHUtc2NhbGluZy1kYXRhIHsKCQkJZnJlcS10YWJsZSA9IDwweDE4ZTcwIDB4MzFjZTAg
MHg0YjAwMCAweDYyNzAwIDB4N2U5MDAgMHg5NjAwMCAweGFkNzAwIDB4Yzk5MDAgMHhlMTAwMCAw
eGZkMjAwIDB4MTE0OTAwIDB4MTJhZDQwIDB4MTQzYmIwIDB4MTVjYTIwIDB4MTY5MTU4IDB4MTc1
ODkwIDB4MThlNzAwIDB4MWE3NTcwIDB4MWMwM2UwIDB4MWQyZWI0IDB4MWViZDI0PjsKCQkJcHJl
c2VydmUtYWNyb3NzLXN1c3BlbmQ7CgkJfTsKCgkJZW1jLXNjYWxpbmctZGF0YSB7CgkJCWVtYy1j
cHUtbGltaXQtdGFibGUgPSA8MHgxOGU3MCAweDEwOWEwIDB4MzIwMDAgMHgxOGU3MCAweDRiMDAw
IDB4MzFjZTAgMHg2MjYzOCAweDYzOWMwIDB4YWQ3MDAgMHhhMjgwMCAweGZkMjAwIDB4MTg2YTAw
PjsKCQl9OwoJfTsKCgllZXByb20tbWFuYWdlciB7CgkJZGF0YS1zaXplID0gPDB4MTAwPjsKCgkJ
YnVzQDAgewoJCQlpMmMtYnVzID0gPDB4YWI+OwoJCQl3b3JkLWFkZHJlc3MtMS1ieXRlLXNsYXZl
LWFkZHJlc3NlcyA9IDwweDUwPjsKCQl9OwoKCQlidXNAMSB7CgkJCWkyYy1idXMgPSA8MHhhYz47
CgkJCXdvcmQtYWRkcmVzcy0xLWJ5dGUtc2xhdmUtYWRkcmVzc2VzID0gPDB4NTAgMHg1Nz47CgkJ
fTsKCX07CgoJcGx1Z2luLW1hbmFnZXIgewoKCQlmcmFnZW1lbnRAMCB7CgkJCWlkcyA9ICI+PTM0
NDgtMDAwMC0xMDAiLCAiPj0zNDQ4LTAwMDItMTAwIjsKCgkJCW92ZXJyaWRlQDAgewoJCQkJdGFy
Z2V0ID0gPDB4YWQ+OwoKCQkJCV9vdmVybGF5XyB7CgoJCQkJCWNoYW5uZWxAMCB7CgkJCQkJCXRp
LHJhaWwtbmFtZSA9ICJQT01fNVZfSU4iOwoJCQkJCX07CgoJCQkJCWNoYW5uZWxAMSB7CgkJCQkJ
CXRpLHJhaWwtbmFtZSA9ICJQT01fNVZfR1BVIjsKCQkJCQl9OwoJCQkJfTsKCQkJfTsKCQl9OwoK
CQlmcmFnbWVudEAxIHsKCQkJaWRzID0gIj49MzQ0OC0wMDAwLTEwMSIsICI+PTM0NDgtMDAwMi0x
MDEiOwoKCQkJb3ZlcnJpZGVAMCB7CgkJCQl0YXJnZXQgPSA8MHhhMT47CgoJCQkJX292ZXJsYXlf
IHsKCQkJCQlyZWd1bGF0b3ItbWluLW1pY3Jvdm9sdCA9IDwweDkyN2MwPjsKCQkJCX07CgkJCX07
CgkJfTsKCgkJZnJhZ21lbnRAMiB7CgkJCWlkcyA9ICI8MzQ0OC0wMDAwLTIwMCIsICI8MzQ0OC0w
MDAyLTIwMCI7CgoJCQlvdmVycmlkZUAwIHsKCQkJCXRhcmdldCA9IDwweGFlPjsKCgkJCQlfb3Zl
cmxheV8gewoJCQkJCXJlZ3VsYXRvci1zdXBwbGllcyA9ICJ2ZGQtMXY4LWF1ZGlvLWh2IiwgInZk
ZC0xdjgtYXVkaW8taHYtYmlhcyI7CgkJCQkJdmRkLTF2OC1hdWRpby1odi1zdXBwbHkgPSA8MHgz
Nj47CgkJCQkJdmRkLTF2OC1hdWRpby1odi1iaWFzLXN1cHBseSA9IDwweDM2PjsKCQkJCQlmc3lu
Yy13aWR0aCA9IDwweGY+OwoJCQkJCXN0YXR1cyA9ICJva2F5IjsKCQkJCX07CgkJCX07CgoJCQlv
dmVycmlkZUAxIHsKCQkJCXRhcmdldCA9IDwweDRlPjsKCgkJCQlfb3ZlcmxheV8gewoJCQkJCXN0
YXR1cyA9ICJkaXNhYmxlZCI7CgkJCQl9OwoJCQl9OwoKCQkJb3ZlcnJpZGVAMiB7CgkJCQl0YXJn
ZXQgPSA8MHhhZj47CgoJCQkJX292ZXJsYXlfIHsKCgkJCQkJbnZpZGlhLGRhaS1saW5rLTEgewoJ
CQkJCQljcHUtZGFpID0gPDB4YWU+OwoJCQkJCQljcHUtZGFpLW5hbWUgPSAiSTJTMSI7CgkJCQkJ
fTsKCQkJCX07CgkJCX07CgkJfTsKCgkJZnJhZ21lbnRAMyB7CgkJCWlkcyA9ICI+PTM0NDgtMDAw
Mi0xMDAiOwoKCQkJb3ZlcnJpZGVAMCB7CgkJCQl0YXJnZXQgPSA8MHhiMD47CgoJCQkJX292ZXJs
YXlfIHsKCQkJCQlzdGF0dXMgPSAib2theSI7CgkJCQl9OwoJCQl9OwoKCQkJb3ZlcnJpZGVAMSB7
CgkJCQl0YXJnZXQgPSA8MHhiMT47CgoJCQkJX292ZXJsYXlfIHsKCQkJCQlzdGF0dXMgPSAiZGlz
YWJsZWQiOwoJCQkJfTsKCQkJfTsKCQl9OwoKCQlmcmFnbWVudEA0IHsKCQkJaWRzID0gIjM0NDkt
MDAwMC0wMDAiOwoKCQkJb3ZlcnJpZGVAMCB7CgkJCQl0YXJnZXQgPSA8MHhiMj47CgoJCQkJX292
ZXJsYXlfIHsKCQkJCQlzdGF0dXMgPSAiZGlzYWJsZWQiOwoJCQkJfTsKCQkJfTsKCgkJCW92ZXJy
aWRlQDEgewoJCQkJdGFyZ2V0ID0gPDB4YjM+OwoKCQkJCV9vdmVybGF5XyB7CgkJCQkJZ3BpbyA9
IDwweDU2IDB4NiAweDA+OwoJCQkJCWVuYWJsZS1hY3RpdmUtbG93OwoJCQkJCWdwaW8tb3Blbi1k
cmFpbjsKCQkJCX07CgkJCX07CgoJCQlvdmVycmlkZUAyIHsKCQkJCXRhcmdldCA9IDwweGI0PjsK
CgkJCQlfb3ZlcmxheV8gewoJCQkJCXZidXMtc3VwcGx5ID0gPDB4YjM+OwoJCQkJfTsKCQkJfTsK
CQl9OwoKCQlmcmFnbWVudEA1IHsKCQkJaWRzID0gIjM0NDktMDAwMC0xMDAiLCAiMzQ0OS0wMDAw
LTIwMCI7CgoJCQlvdmVycmlkZUAwIHsKCQkJCXRhcmdldCA9IDwweGIyPjsKCgkJCQlfb3Zlcmxh
eV8gewoJCQkJCXN0YXR1cyA9ICJkaXNhYmxlZCI7CgkJCQl9OwoJCQl9OwoKCQkJb3ZlcnJpZGVA
MSB7CgkJCQl0YXJnZXQgPSA8MHhiMz47CgoJCQkJX292ZXJsYXlfIHsKCQkJCQlncGlvID0gPDB4
NTYgMHg2IDB4MD47CgkJCQkJZW5hYmxlLWFjdGl2ZS1oaWdoOwoJCQkJfTsKCQkJfTsKCgkJCW92
ZXJyaWRlQDIgewoJCQkJdGFyZ2V0ID0gPDB4YjQ+OwoKCQkJCV9vdmVybGF5XyB7CgkJCQkJdmJ1
cy1zdXBwbHkgPSA8MHhiMz47CgkJCQl9OwoJCQl9OwoJCX07CgoJCWZyYWdlbWVudEA2IHsKCQkJ
b2RtLWRhdGEgPSAiZW5hYmxlLXRlZ3JhLXdkdCI7CgoJCQlvdmVycmlkZUAwIHsKCQkJCXRhcmdl
dCA9IDwweGI1PjsKCgkJCQlfb3ZlcmxheV8gewoJCQkJCXN0YXR1cyA9ICJva2F5IjsKCQkJCX07
CgkJCX07CgkJfTsKCgkJZnJhZ2VtZW50QDcgewoJCQlvZG0tZGF0YSA9ICJlbmFibGUtcG1pYy13
ZHQiOwoKCQkJb3ZlcnJpZGVAMCB7CgkJCQl0YXJnZXQgPSA8MHhiNj47CgoJCQkJX292ZXJsYXlf
IHsKCQkJCQlzdGF0dXMgPSAib2theSI7CgkJCQl9OwoJCQl9OwoJCX07CgoJCWZyYWdlbWVudEA4
IHsKCQkJb2RtLWRhdGEgPSAiZW5hYmxlLXBtaWMtd2R0IiwgImVuYWJsZS10ZWdyYS13ZHQiOwoK
CQkJb3ZlcnJpZGVAMCB7CgkJCQl0YXJnZXQgPSA8MHhiNz47CgoJCQkJX292ZXJsYXlfIHsKCQkJ
CQlzdGF0dXMgPSAiZGlzYWJsZWQiOwoJCQkJfTsKCQkJfTsKCQl9OwoKCQlmcmFnZW1lbnRAOSB7
CgkJCWlkcyA9ICI8MzQ0OC0wMDAwLTMwMCIsICI8MzQ0OC0wMDAyLTMwMCI7CgoJCQlvdmVycmlk
ZUAwIHsKCQkJCXRhcmdldCA9IDwweDU4PjsKCgkJCQlfb3ZlcmxheV8gewoJCQkJCXN0YXR1cyA9
ICJkaXNhYmxlZCI7CgkJCQl9OwoJCQl9OwoKCQkJb3ZlcnJpZGVAMSB7CgkJCQl0YXJnZXQgPSA8
MHhiOD47CgoJCQkJX292ZXJsYXlfIHsKCQkJCQlrZWVwLXBvd2VyLWluLXN1c3BlbmQ7CgkJCQkJ
bm9uLXJlbW92YWJsZTsKCQkJCX07CgkJCX07CgoJCQlvdmVycmlkZUAyIHsKCQkJCXRhcmdldCA9
IDwweGI5PjsKCgkJCQlfb3ZlcmxheV8gewoJCQkJCXN0YXR1cyA9ICJva2F5IjsKCQkJCX07CgkJ
CX07CgoJCQlvdmVycmlkZUAzIHsKCQkJCXRhcmdldCA9IDwweGJhPjsKCgkJCQlfb3ZlcmxheV8g
ewoJCQkJCXN0YXR1cyA9ICJva2F5IjsKCQkJCQliYWRnZSA9ICJwb3JnX2Zyb250X1JCUENWMiI7
CgkJCQkJcG9zaXRpb24gPSAiZnJvbnQiOwoJCQkJCW9yaWVudGF0aW9uID0gWzMxIDAwXTsKCQkJ
CX07CgkJCX07CgoJCQlvdmVycmlkZUA0IHsKCQkJCXRhcmdldCA9IDwweGJiPjsKCgkJCQlfb3Zl
cmxheV8gewoJCQkJCXN0YXR1cyA9ICJva2F5IjsKCQkJCQlwY2xfaWQgPSAidjRsMl9zZW5zb3Ii
OwoJCQkJCWRldm5hbWUgPSAiaW14MjE5IDYtMDAxMCI7CgkJCQkJcHJvYy1kZXZpY2UtdHJlZSA9
ICIvcHJvYy9kZXZpY2UtdHJlZS9ob3N0MXgvaTJjQDU0NmMwMDAwL3JicGN2Ml9pbXgyMTlfYUAx
MCI7CgkJCQl9OwoJCQl9OwoKCQkJb3ZlcnJpZGVANSB7CgkJCQl0YXJnZXQgPSA8MHhiYz47CgoJ
CQkJX292ZXJsYXlfIHsKCQkJCQlzdGF0dXMgPSAib2theSI7CgkJCQkJcGNsX2lkID0gInY0bDJf
bGVucyI7CgkJCQkJcHJvYy1kZXZpY2UtdHJlZSA9ICIvcHJvYy9kZXZpY2UtdHJlZS9sZW5zX2lt
eDIxOUBSQlBDVjIvIjsKCQkJCX07CgkJCX07CgoJCQlvdmVycmlkZUA2IHsKCQkJCXRhcmdldCA9
IDwweGJkPjsKCgkJCQlfb3ZlcmxheV8gewoJCQkJCW51bS1jaGFubmVscyA9IDwweDE+OwoJCQkJ
fTsKCQkJfTsKCgkJCW92ZXJyaWRlQDcgewoJCQkJdGFyZ2V0ID0gPDB4YmU+OwoKCQkJCV9vdmVy
bGF5XyB7CgkJCQkJc3RhdHVzID0gIm9rYXkiOwoJCQkJfTsKCQkJfTsKCgkJCW92ZXJyaWRlQDgg
ewoJCQkJdGFyZ2V0ID0gPDB4NzU+OwoKCQkJCV9vdmVybGF5XyB7CgkJCQkJc3RhdHVzID0gIm9r
YXkiOwoJCQkJCXBvcnQtaW5kZXggPSA8MHgwPjsKCQkJCQlidXMtd2lkdGggPSA8MHgyPjsKCQkJ
CQlyZW1vdGUtZW5kcG9pbnQgPSA8MHg1YT47CgkJCQl9OwoJCQl9OwoKCQkJb3ZlcnJpZGVAOSB7
CgkJCQl0YXJnZXQgPSA8MHhiZj47CgoJCQkJX292ZXJsYXlfIHsKCQkJCQludW0tY2hhbm5lbHMg
PSA8MHgxPjsKCQkJCX07CgkJCX07CgoJCQlvdmVycmlkZUAxMCB7CgkJCQl0YXJnZXQgPSA8MHhj
MD47CgoJCQkJX292ZXJsYXlfIHsKCQkJCQlzdGF0dXMgPSAib2theSI7CgkJCQl9OwoJCQl9OwoK
CQkJb3ZlcnJpZGVAMTEgewoJCQkJdGFyZ2V0ID0gPDB4YzE+OwoKCQkJCV9vdmVybGF5XyB7CgkJ
CQkJc3RhdHVzID0gIm9rYXkiOwoJCQkJfTsKCQkJfTsKCgkJCW92ZXJyaWRlQDEyIHsKCQkJCXRh
cmdldCA9IDwweDczPjsKCgkJCQlfb3ZlcmxheV8gewoJCQkJCXN0YXR1cyA9ICJva2F5IjsKCQkJ
CQlwb3J0LWluZGV4ID0gPDB4MD47CgkJCQkJYnVzLXdpZHRoID0gPDB4Mj47CgkJCQkJcmVtb3Rl
LWVuZHBvaW50ID0gPDB4YzI+OwoJCQkJfTsKCQkJfTsKCgkJCW92ZXJyaWRlQDEzIHsKCQkJCXRh
cmdldCA9IDwweGMzPjsKCgkJCQlfb3ZlcmxheV8gewoJCQkJCXN0YXR1cyA9ICJva2F5IjsKCQkJ
CX07CgkJCX07CgoJCQlvdmVycmlkZUAxNCB7CgkJCQl0YXJnZXQgPSA8MHg1YT47CgoJCQkJX292
ZXJsYXlfIHsKCQkJCQlzdGF0dXMgPSAib2theSI7CgkJCQkJcmVtb3RlLWVuZHBvaW50ID0gPDB4
NzU+OwoJCQkJfTsKCQkJfTsKCgkJCW92ZXJyaWRlQDE1IHsKCQkJCXRhcmdldCA9IDwweGM0PjsK
CgkJCQlfb3ZlcmxheV8gewoJCQkJCW51bV9jc2lfbGFuZXMgPSA8MHgyPjsKCQkJCQltYXhfbGFu
ZV9zcGVlZCA9IDwweDE2ZTM2MD47CgkJCQkJbWluX2JpdHNfcGVyX3BpeGVsID0gPDB4YT47CgkJ
CQkJdmlfcGVha19ieXRlX3Blcl9waXhlbCA9IDwweDI+OwoJCQkJCXZpX2J3X21hcmdpbl9wY3Qg
PSA8MHgxOT47CgkJCQkJbWF4X3BpeGVsX3JhdGUgPSA8MHgzYTk4MD47CgkJCQkJaXNwX3BlYWtf
Ynl0ZV9wZXJfcGl4ZWwgPSA8MHg1PjsKCQkJCQlpc3BfYndfbWFyZ2luX3BjdCA9IDwweDE5PjsK
CQkJCX07CgkJCX07CgoJCQlvdmVycmlkZUAxNiB7CgkJCQl0YXJnZXQgPSA8MHhjNT47CgoJCQkJ
X292ZXJsYXlfIHsKCQkJCQlzdGF0dXMgPSAiZGlzYWJsZWQiOwoJCQkJfTsKCQkJfTsKCQl9OwoK
CQlmcmFnZW1lbnRAMTAgewoJCQlpZHMgPSAiPj0zNDQ4LTAwMDAtMzAwIiwgIj49MzQ0OC0wMDAy
LTMwMCI7CgoJCQlvdmVycmlkZUAwIHsKCQkJCXRhcmdldCA9IDwweGM2PjsKCgkJCQlfb3Zlcmxh
eV8gewoJCQkJCW52aWRpYSxwbGF0LWdwaW9zID0gPDB4NTYgMHhlNyAweDA+OwoJCQkJfTsKCQkJ
fTsKCgkJCW92ZXJyaWRlQDEgewoJCQkJdGFyZ2V0ID0gPDB4Yjg+OwoKCQkJCV9vdmVybGF5XyB7
CgkJCQkJdnFtbWMtc3VwcGx5ID0gPDB4NTg+OwoJCQkJCW5vLXNkaW87CgkJCQkJbm8tbW1jOwoJ
CQkJCXNkLXVocy1zZHIxMDQ7CgkJCQkJc2QtdWhzLXNkcjUwOwoJCQkJCXNkLXVocy1zZHIyNTsK
CQkJCQlzZC11aHMtc2RyMTI7CgkJCQl9OwoJCQl9OwoKCQkJb3ZlcnJpZGVAMiB7CgkJCQl0YXJn
ZXQgPSA8MHhjNz47CgoJCQkJX292ZXJsYXlfIHsKCQkJCQludmlkaWEscHJpb3JpdHkgPSA8MHgz
Mj47CgkJCQkJbnZpZGlhLHBvbGFyaXR5LWFjdGl2ZS1sb3cgPSA8MHgxPjsKCQkJCQludmlkaWEs
Y291bnQtdGhyZXNob2xkID0gPDB4MT47CgkJCQkJbnZpZGlhLGFsYXJtLWZpbHRlciA9IDwweDRk
ZDFlMD47CgkJCQkJbnZpZGlhLGFsYXJtLXBlcmlvZCA9IDwweDA+OwoJCQkJCW52aWRpYSxjcHUt
dGhyb3QtcGVyY2VudCA9IDwweDRiPjsKCQkJCQludmlkaWEsZ3B1LXRocm90LWxldmVsID0gPDB4
Mz47CgkJCQl9OwoJCQl9OwoKCQkJb3ZlcnJpZGVAMyB7CgkJCQl0YXJnZXQgPSA8MHhjOD47CgoJ
CQkJX292ZXJsYXlfIHsKCQkJCQlzdGF0dXMgPSAib2theSI7CgkJCQl9OwoJCQl9OwoKCQkJb3Zl
cnJpZGVANCB7CgkJCQl0YXJnZXQgPSA8MHhjOT47CgoJCQkJX292ZXJsYXlfIHsKCQkJCQlzdGF0
dXMgPSAib2theSI7CgkJCQl9OwoJCQl9OwoKCQkJb3ZlcnJpZGVANSB7CgkJCQl0YXJnZXQgPSA8
MHhiYT47CgoJCQkJX292ZXJsYXlfIHsKCQkJCQlzdGF0dXMgPSAib2theSI7CgkJCQkJYmFkZ2Ug
PSAicG9yZ19mcm9udF9SQlBDVjIiOwoJCQkJCXBvc2l0aW9uID0gImZyb250IjsKCQkJCQlvcmll
bnRhdGlvbiA9IFszMSAwMF07CgkJCQl9OwoJCQl9OwoKCQkJb3ZlcnJpZGVANiB7CgkJCQl0YXJn
ZXQgPSA8MHhiYj47CgoJCQkJX292ZXJsYXlfIHsKCQkJCQlzdGF0dXMgPSAib2theSI7CgkJCQkJ
cGNsX2lkID0gInY0bDJfc2Vuc29yIjsKCQkJCQlkZXZuYW1lID0gImlteDIxOSA3LTAwMTAiOwoJ
CQkJCXByb2MtZGV2aWNlLXRyZWUgPSAiL3Byb2MvZGV2aWNlLXRyZWUvY2FtX2kyY211eC9pMmNA
MC9yYnBjdjJfaW14MjE5X2FAMTAiOwoJCQkJfTsKCQkJfTsKCgkJCW92ZXJyaWRlQDcgewoJCQkJ
dGFyZ2V0ID0gPDB4YmM+OwoKCQkJCV9vdmVybGF5XyB7CgkJCQkJc3RhdHVzID0gIm9rYXkiOwoJ
CQkJCXBjbF9pZCA9ICJ2NGwyX2xlbnMiOwoJCQkJCXByb2MtZGV2aWNlLXRyZWUgPSAiL3Byb2Mv
ZGV2aWNlLXRyZWUvbGVuc19pbXgyMTlAUkJQQ1YyLyI7CgkJCQl9OwoJCQl9OwoKCQkJb3ZlcnJp
ZGVAOCB7CgkJCQl0YXJnZXQgPSA8MHhjYT47CgoJCQkJX292ZXJsYXlfIHsKCQkJCQlzdGF0dXMg
PSAib2theSI7CgkJCQl9OwoJCQl9OwoKCQkJb3ZlcnJpZGVAOSB7CgkJCQl0YXJnZXQgPSA8MHhj
NT47CgoJCQkJX292ZXJsYXlfIHsKCQkJCQlzdGF0dXMgPSAib2theSI7CgkJCQkJYmFkZ2UgPSAi
cG9yZ19yZWFyX1JCUENWMiI7CgkJCQkJcG9zaXRpb24gPSAicmVhciI7CgkJCQkJb3JpZW50YXRp
b24gPSBbMzEgMDBdOwoJCQkJfTsKCQkJfTsKCgkJCW92ZXJyaWRlQDEwIHsKCQkJCXRhcmdldCA9
IDwweGNiPjsKCgkJCQlfb3ZlcmxheV8gewoJCQkJCXN0YXR1cyA9ICJva2F5IjsKCQkJCQlwY2xf
aWQgPSAidjRsMl9zZW5zb3IiOwoJCQkJCWRldm5hbWUgPSAiaW14MjE5IDgtMDAxMCI7CgkJCQkJ
cHJvYy1kZXZpY2UtdHJlZSA9ICIvcHJvYy9kZXZpY2UtdHJlZS9jYW1faTJjbXV4L2kyY0AxL3Ji
cGN2Ml9pbXgyMTlfZUAxMCI7CgkJCQl9OwoJCQl9OwoKCQkJb3ZlcnJpZGVAMTEgewoJCQkJdGFy
Z2V0ID0gPDB4Y2M+OwoKCQkJCV9vdmVybGF5XyB7CgkJCQkJc3RhdHVzID0gIm9rYXkiOwoJCQkJ
CXBjbF9pZCA9ICJ2NGwyX2xlbnMiOwoJCQkJCXByb2MtZGV2aWNlLXRyZWUgPSAiL3Byb2MvZGV2
aWNlLXRyZWUvbGVuc19pbXgyMTlAUkJQQ1YyLyI7CgkJCQl9OwoJCQl9OwoKCQkJb3ZlcnJpZGVA
MTIgewoJCQkJdGFyZ2V0ID0gPDB4YmQ+OwoKCQkJCV9vdmVybGF5XyB7CgkJCQkJbnVtLWNoYW5u
ZWxzID0gPDB4Mj47CgkJCQl9OwoJCQl9OwoKCQkJb3ZlcnJpZGVAMTMgewoJCQkJdGFyZ2V0ID0g
PDB4YmU+OwoKCQkJCV9vdmVybGF5XyB7CgkJCQkJc3RhdHVzID0gIm9rYXkiOwoJCQkJfTsKCQkJ
fTsKCgkJCW92ZXJyaWRlQDE0IHsKCQkJCXRhcmdldCA9IDwweGNkPjsKCgkJCQlfb3ZlcmxheV8g
ewoJCQkJCXN0YXR1cyA9ICJva2F5IjsKCQkJCX07CgkJCX07CgoJCQlvdmVycmlkZUAxNSB7CgkJ
CQl0YXJnZXQgPSA8MHg3NT47CgoJCQkJX292ZXJsYXlfIHsKCQkJCQlzdGF0dXMgPSAib2theSI7
CgkJCQkJcG9ydC1pbmRleCA9IDwweDA+OwoJCQkJCWJ1cy13aWR0aCA9IDwweDI+OwoJCQkJCXJl
bW90ZS1lbmRwb2ludCA9IDwweDVhPjsKCQkJCX07CgkJCX07CgoJCQlvdmVycmlkZUAxNiB7CgkJ
CQl0YXJnZXQgPSA8MHg3Nz47CgoJCQkJX292ZXJsYXlfIHsKCQkJCQlzdGF0dXMgPSAib2theSI7
CgkJCQkJcG9ydC1pbmRleCA9IDwweDQ+OwoJCQkJCWJ1cy13aWR0aCA9IDwweDI+OwoJCQkJCXJl
bW90ZS1lbmRwb2ludCA9IDwweDViPjsKCQkJCX07CgkJCX07CgoJCQlvdmVycmlkZUAxNyB7CgkJ
CQl0YXJnZXQgPSA8MHhiZj47CgoJCQkJX292ZXJsYXlfIHsKCQkJCQludW0tY2hhbm5lbHMgPSA8
MHgyPjsKCQkJCX07CgkJCX07CgoJCQlvdmVycmlkZUAxOCB7CgkJCQl0YXJnZXQgPSA8MHhjMD47
CgoJCQkJX292ZXJsYXlfIHsKCQkJCQlzdGF0dXMgPSAib2theSI7CgkJCQl9OwoJCQl9OwoKCQkJ
b3ZlcnJpZGVAMTkgewoJCQkJdGFyZ2V0ID0gPDB4YzE+OwoKCQkJCV9vdmVybGF5XyB7CgkJCQkJ
c3RhdHVzID0gIm9rYXkiOwoJCQkJfTsKCQkJfTsKCgkJCW92ZXJyaWRlQDIwIHsKCQkJCXRhcmdl
dCA9IDwweDczPjsKCgkJCQlfb3ZlcmxheV8gewoJCQkJCXN0YXR1cyA9ICJva2F5IjsKCQkJCQlw
b3J0LWluZGV4ID0gPDB4MD47CgkJCQkJYnVzLXdpZHRoID0gPDB4Mj47CgkJCQkJcmVtb3RlLWVu
ZHBvaW50ID0gPDB4NzQ+OwoJCQkJfTsKCQkJfTsKCgkJCW92ZXJyaWRlQDIxIHsKCQkJCXRhcmdl
dCA9IDwweGMzPjsKCgkJCQlfb3ZlcmxheV8gewoJCQkJCXN0YXR1cyA9ICJva2F5IjsKCQkJCX07
CgkJCX07CgoJCQlvdmVycmlkZUAyMiB7CgkJCQl0YXJnZXQgPSA8MHg1YT47CgoJCQkJX292ZXJs
YXlfIHsKCQkJCQlzdGF0dXMgPSAib2theSI7CgkJCQl9OwoJCQl9OwoKCQkJb3ZlcnJpZGVAMjMg
ewoJCQkJdGFyZ2V0ID0gPDB4Y2U+OwoKCQkJCV9vdmVybGF5XyB7CgkJCQkJc3RhdHVzID0gIm9r
YXkiOwoJCQkJfTsKCQkJfTsKCgkJCW92ZXJyaWRlQDI0IHsKCQkJCXRhcmdldCA9IDwweGNmPjsK
CgkJCQlfb3ZlcmxheV8gewoJCQkJCXN0YXR1cyA9ICJva2F5IjsKCQkJCX07CgkJCX07CgoJCQlv
dmVycmlkZUAyNSB7CgkJCQl0YXJnZXQgPSA8MHhhOT47CgoJCQkJX292ZXJsYXlfIHsKCQkJCQlz
dGF0dXMgPSAib2theSI7CgkJCQkJcG9ydC1pbmRleCA9IDwweDQ+OwoJCQkJCWJ1cy13aWR0aCA9
IDwweDI+OwoJCQkJCXJlbW90ZS1lbmRwb2ludCA9IDwweDc2PjsKCQkJCX07CgkJCX07CgoJCQlv
dmVycmlkZUAyNiB7CgkJCQl0YXJnZXQgPSA8MHhkMD47CgoJCQkJX292ZXJsYXlfIHsKCQkJCQlz
dGF0dXMgPSAib2theSI7CgkJCQl9OwoJCQl9OwoKCQkJb3ZlcnJpZGVAMjcgewoJCQkJdGFyZ2V0
ID0gPDB4NWI+OwoKCQkJCV9vdmVybGF5XyB7CgkJCQkJc3RhdHVzID0gIm9rYXkiOwoJCQkJfTsK
CQkJfTsKCgkJCW92ZXJyaWRlQDI4IHsKCQkJCXRhcmdldCA9IDwweGM0PjsKCgkJCQlfb3Zlcmxh
eV8gewoJCQkJCW51bV9jc2lfbGFuZXMgPSA8MHg0PjsKCQkJCQltYXhfbGFuZV9zcGVlZCA9IDww
eDE2ZTM2MD47CgkJCQkJbWluX2JpdHNfcGVyX3BpeGVsID0gPDB4YT47CgkJCQkJdmlfcGVha19i
eXRlX3Blcl9waXhlbCA9IDwweDI+OwoJCQkJCXZpX2J3X21hcmdpbl9wY3QgPSA8MHgxOT47CgkJ
CQkJbWF4X3BpeGVsX3JhdGUgPSA8MHgzYTk4MD47CgkJCQkJaXNwX3BlYWtfYnl0ZV9wZXJfcGl4
ZWwgPSA8MHg1PjsKCQkJCQlpc3BfYndfbWFyZ2luX3BjdCA9IDwweDE5PjsKCQkJCX07CgkJCX07
CgoJCQlvdmVycmlkZUAyOSB7CgkJCQl0YXJnZXQgPSA8MHhkMT47CgoJCQkJX292ZXJsYXlfIHsK
CQkJCQlzdGF0dXMgPSAib2theSI7CgkJCQl9OwoJCQl9OwoKCQkJb3ZlcnJpZGVAMzAgewoJCQkJ
dGFyZ2V0ID0gPDB4ZDI+OwoKCQkJCV9vdmVybGF5XyB7CgkJCQkJc3RhdHVzID0gIm9rYXkiOwoJ
CQkJfTsKCQkJfTsKCgkJCW92ZXJyaWRlQDMxIHsKCQkJCXRhcmdldCA9IDwweGQzPjsKCgkJCQlf
b3ZlcmxheV8gewoJCQkJCXN0YXR1cyA9ICJva2F5IjsKCQkJCX07CgkJCX07CgkJfTsKCgkJZnJh
Z2VtZW50QDExIHsKCQkJaWRzID0gIj49MzQ0OC0wMDAwLTMwMCIsICI+PTM0NDgtMDAwMi0zMDAi
OwoKCQkJb3ZlcnJpZGVAMCB7CgkJCQl0YXJnZXQgPSA8MHhkND47CgoJCQkJX292ZXJsYXlfIHsK
CQkJCQllbmFibGUtYXNwbTsKCQkJCX07CgkJCX07CgkJfTsKCgkJZnJhZ21lbnQtZTI2MTQtY29t
bW9uQDAgewoJCQlpZHMgPSAiMjYxNC0wMDAwLSoiOwoKCQkJb3ZlcnJpZGVzQDAgewoJCQkJdGFy
Z2V0ID0gPDB4ZDU+OwoKCQkJCV9vdmVybGF5XyB7CgkJCQkJc3RhdHVzID0gIm9rYXkiOwoJCQkJ
fTsKCQkJfTsKCgkJCW92ZXJyaWRlc0AxIHsKCQkJCXRhcmdldCA9IDwweGQ2PjsKCgkJCQlfb3Zl
cmxheV8gewoJCQkJCXN0YXR1cyA9ICJva2F5IjsKCQkJCX07CgkJCX07CgoJCQlvdmVycmlkZXNA
MiB7CgkJCQl0YXJnZXQgPSA8MHhkNz47CgoJCQkJX292ZXJsYXlfIHsKCQkJCQlzdGF0dXMgPSAi
b2theSI7CgkJCQl9OwoJCQl9OwoKCQkJb3ZlcnJpZGVzQDMgewoJCQkJdGFyZ2V0ID0gPDB4ZDg+
OwoKCQkJCV9vdmVybGF5XyB7CgkJCQkJc3RhdHVzID0gIm9rYXkiOwoJCQkJfTsKCQkJfTsKCgkJ
CW92ZXJyaWRlc0A0IHsKCQkJCXRhcmdldCA9IDwweGQ4PjsKCgkJCQlfb3ZlcmxheV8gewoJCQkJ
CXN0YXR1cyA9ICJva2F5IjsKCQkJCX07CgkJCX07CgoJCQlvdmVycmlkZXNANiB7CgkJCQl0YXJn
ZXQgPSA8MHhkOT47CgoJCQkJX292ZXJsYXlfIHsKCQkJCQlzdGF0dXMgPSAib2theSI7CgkJCQl9
OwoJCQl9OwoKCQkJb3ZlcnJpZGVzQDcgewoJCQkJdGFyZ2V0ID0gPDB4ZDc+OwoKCQkJCV9vdmVy
bGF5XyB7CgkJCQkJc3RhdHVzID0gIm9rYXkiOwoJCQkJfTsKCQkJfTsKCgkJCW92ZXJyaWRlc0A4
IHsKCQkJCXRhcmdldCA9IDwweGQ4PjsKCgkJCQlfb3ZlcmxheV8gewoJCQkJCXN0YXR1cyA9ICJv
a2F5IjsKCQkJCX07CgkJCX07CgoJCQlvdmVycmlkZXNAOSB7CgkJCQl0YXJnZXQgPSA8MHhhZj47
CgoJCQkJX292ZXJsYXlfIHsKCQkJCQludmlkaWEsYXVkaW8tcm91dGluZyA9ICJ4IEhlYWRwaG9u
ZSBKYWNrIiwgInggSFBPIEwgUGxheWJhY2siLCAieCBIZWFkcGhvbmUgSmFjayIsICJ4IEhQTyBS
IFBsYXliYWNrIiwgInggSU4xUCIsICJ4IE1pYyBKYWNrIiwgInggSW50IFNwayIsICJ4IFNQTyBQ
bGF5YmFjayIsICJ4IERNSUMgTDEiLCAieCBJbnQgTWljIiwgInggRE1JQyBMMiIsICJ4IEludCBN
aWMiLCAieCBETUlDIFIxIiwgInggSW50IE1pYyIsICJ4IERNSUMgUjIiLCAieCBJbnQgTWljIiwg
InkgSGVhZHBob25lIiwgInkgT1VUIiwgInkgSU4iLCAieSBNaWMiLCAiYSBJTiIsICJhIE1pYyIs
ICJiIElOIiwgImIgTWljIjsKCQkJCX07CgkJCX07CgoJCQlvdmVycmlkZXNAMTAgewoJCQkJdGFy
Z2V0ID0gPDB4ZGE+OwoKCQkJCV9vdmVybGF5XyB7CgkJCQkJbGluay1uYW1lID0gInJ0NTY1eC1w
bGF5YmFjayI7CgkJCQkJY29kZWMtZGFpLW5hbWUgPSAicnQ1NjU5LWFpZjEiOwoJCQkJfTsKCQkJ
fTsKCgkJCW92ZXJyaWRlc0AxMSB7CgkJCQl0YXJnZXQgPSA8MHhkOT47CgoJCQkJX292ZXJsYXlf
IHsKCQkJCQlzdGF0dXMgPSAib2theSI7CgkJCQl9OwoJCQl9OwoKCQkJb3ZlcnJpZGVAMTIgewoJ
CQkJdGFyZ2V0ID0gPDB4ZGI+OwoKCQkJCV9vdmVybGF5XyB7CgkJCQkJc3RhdHVzID0gIm9rYXki
OwoJCQkJfTsKCQkJfTsKCQl9OwoKCQlmcmFnbWVudC1lMjYxNC1hMDBAMSB7CgkJCWlkcyA9ICIy
NjE0LTAwMDAtMDAwIjsKCgkJCW92ZXJyaWRlc0AwIHsKCQkJCXRhcmdldCA9IDwweGRjPjsKCgkJ
CQlfb3ZlcmxheV8gewoJCQkJCXN0YXR1cyA9ICJva2F5IjsKCQkJCX07CgkJCX07CgoJCQlvdmVy
cmlkZUAxIHsKCQkJCXRhcmdldCA9IDwweGFmPjsKCgkJCQlfb3ZlcmxheV8gewoKCQkJCQludmlk
aWEsZGFpLWxpbmstMSB7CgkJCQkJCWNvZGVjLWRhaSA9IDwweGRjPjsKCQkJCQl9OwoJCQkJfTsK
CQkJfTsKCQl9OwoKCQlmcmFnbWVudC1lMjYxNC1iMDBAMiB7CgkJCWlkcyA9ICIyNjE0LTAwMDAt
MTAwIjsKCgkJCW92ZXJyaWRlc0AwIHsKCQkJCXRhcmdldCA9IDwweGRkPjsKCgkJCQlfb3Zlcmxh
eV8gewoJCQkJCXN0YXR1cyA9ICJva2F5IjsKCQkJCX07CgkJCX07CgoJCQlvdmVycmlkZUAxIHsK
CQkJCXRhcmdldCA9IDwweGFmPjsKCgkJCQlfb3ZlcmxheV8gewoKCQkJCQludmlkaWEsZGFpLWxp
bmstMSB7CgkJCQkJCWNvZGVjLWRhaSA9IDwweGRkPjsKCQkJCQl9OwoJCQkJfTsKCQkJfTsKCQl9
OwoKCQlmcmFnbWVudC1lMjYxNC1waW5zQDMgewoJCQlpZHMgPSAiPDM0NDgtMDAwMC0yMDAiOwoK
CQkJb3ZlcnJpZGVzQDAgewoJCQkJdGFyZ2V0ID0gPDB4ZGI+OwoKCQkJCV9vdmVybGF5XyB7CgkJ
CQkJZ3Bpb3MgPSA8MHg4IDB4MCAweDkgMHgwIDB4YSAweDAgMHhiIDB4MCAweGQ4IDB4MCAweDk1
IDB4MD47CgkJCQkJbGFiZWwgPSAiSTJTMV9MUkNMSyIsICJJMlMxX1NESU4iLCAiSTJTMV9TRE9V
VCIsICJJMlMxX0NMSyIsICJBVURJT19NQ0xLIiwgIkFVRF9SU1QiOwoJCQkJfTsKCQkJfTsKCQl9
OwoJfTsKCgltb2RzLXNpbXBsZS1idXMgewoJCWNvbXBhdGlibGUgPSAic2ltcGxlLWJ1cyI7CgkJ
ZGV2aWNlX3R5cGUgPSAibW9kcy1zaW1wbGUtYnVzIjsKCQkjYWRkcmVzcy1jZWxscyA9IDwweDE+
OwoJCSNzaXplLWNlbGxzID0gPDB4MD47CgoJCW1vZHMtY2xvY2tzIHsKCQkJY29tcGF0aWJsZSA9
ICJudmlkaWEsbW9kcy1jbG9ja3MiOwoJCQlzdGF0dXMgPSAiZGlzYWJsZWQiOwoJCQljbG9ja3Mg
PSA8MHgyMSAweDMgMHgyMSAweDQgMHgyMSAweDUgMHgyMSAweDYgMHgyMSAweDggMHgyMSAweDkg
MHgyMSAweGIgMHgyMSAweGMgMHgyMSAweGUgMHgyMSAweGYgMHgyMSAweDExIDB4MjEgMHgxMiAw
eDIxIDB4ZTQgMHgyMSAweDE2IDB4MjEgMHgxNyAweDIxIDB4MWEgMHgyMSAweDFiIDB4MjEgMHgx
YyAweDIxIDB4MWUgMHgyMSAweDIwIDB4MjEgMHgyMSAweDIxIDB4MjIgMHgyMSAweDI2IDB4MjEg
MHgyOCAweDIxIDB4MjkgMHgyMSAweDJjIDB4MjEgMHgyZSAweDIxIDB4MmYgMHgyMSAweDMwIDB4
MjEgMHgzNCAweDIxIDB4MzYgMHgyMSAweDM3IDB4MjEgMHgzOCAweDIxIDB4MzkgMHgyMSAweDNh
IDB4MjEgMHgzZiAweDIxIDB4NDEgMHgyMSAweDQzIDB4MjEgMHg0NCAweDIxIDB4NDUgMHgyMSAw
eDQ2IDB4MjEgMHg0NyAweDIxIDB4NDggMHgyMSAweDQ5IDB4MjEgMHg0YyAweDIxIDB4NGUgMHgy
MSAweDRmIDB4MjEgMHg1MSAweDIxIDB4NTIgMHgyMSAweDUzIDB4MjEgMHg1OSAweDIxIDB4NWMg
MHgyMSAweDYzIDB4MjEgMHg2NCAweDIxIDB4NjUgMHgyMSAweDY2IDB4MjEgMHg2NyAweDIxIDB4
NmEgMHgyMSAweDZiIDB4MjEgMHg2ZiAweDIxIDB4NzYgMHgyMSAweDc3IDB4MjEgMHg3OCAweDIx
IDB4NzkgMHgyMSAweDdhIDB4MjEgMHg3YiAweDIxIDB4N2MgMHgyMSAweDdkIDB4MjEgMHg3ZiAw
eDIxIDB4ODAgMHgyMSAweDgxIDB4MjEgMHg4OCAweDIxIDB4OGYgMHgyMSAweDkwIDB4MjEgMHg5
MSAweDIxIDB4OTIgMHgyMSAweDkzIDB4MjEgMHg5NCAweDIxIDB4OTUgMHgyMSAweDk4IDB4MjEg
MHg5YyAweDIxIDB4YTEgMHgyMSAweGEyIDB4MjEgMHhhNiAweDIxIDB4YTcgMHgyMSAweGE4IDB4
MjEgMHhhYiAweDIxIDB4YWQgMHgyMSAweGIxIDB4MjEgMHhiMiAweDIxIDB4YjUgMHgyMSAweGI2
IDB4MjEgMHhiNyAweDIxIDB4YjggMHgyMSAweGI5IDB4MjEgMHhiYiAweDIxIDB4YmQgMHgyMSAw
eGMxIDB4MjEgMHhjMiAweDIxIDB4YzMgMHgyMSAweGM1IDB4MjEgMHhjNiAweDIxIDB4YzcgMHgy
MSAweGM4IDB4MjEgMHhjOSAweDIxIDB4Y2EgMHgyMSAweGNlIDB4MjEgMHhjZiAweDIxIDB4ZDAg
MHgyMSAweGQxIDB4MjEgMHhkMiAweDIxIDB4ZDMgMHgyMSAweGQ0IDB4MjEgMHhkYSAweDIxIDB4
ZGIgMHgyMSAweGRjIDB4MjEgMHhkZCAweDIxIDB4ZGUgMHgyMSAweGRmIDB4MjEgMHhlMCAweDIx
IDB4ZTEgMHgyMSAweGUyIDB4MjEgMHhlMyAweDIxIDB4ZTUgMHgyMSAweGU2IDB4MjEgMHhlNyAw
eDIxIDB4ZTggMHgyMSAweGU5IDB4MjEgMHhlYSAweDIxIDB4ZWIgMHgyMSAweGVjIDB4MjEgMHhl
ZCAweDIxIDB4ZWUgMHgyMSAweGVmIDB4MjEgMHhmMCAweDIxIDB4ZjEgMHgyMSAweGYyIDB4MjEg
MHhmMyAweDIxIDB4ZjQgMHgyMSAweGY1IDB4MjEgMHhmNiAweDIxIDB4ZjcgMHgyMSAweGY4IDB4
MjEgMHhmOSAweDIxIDB4ZmEgMHgyMSAweGZiIDB4MjEgMHhmYyAweDIxIDB4ZmQgMHgyMSAweGZl
IDB4MjEgMHhmZiAweDIxIDB4MTAwIDB4MjEgMHgxMDEgMHgyMSAweDEwMyAweDIxIDB4MTA0IDB4
MjEgMHgxMDUgMHgyMSAweDEwNiAweDIxIDB4MTA3IDB4MjEgMHgxMDggMHgyMSAweDEwOSAweDIx
IDB4MTBhIDB4MjEgMHgxMGIgMHgyMSAweDEwYyAweDIxIDB4MTBkIDB4MjEgMHgxMGUgMHgyMSAw
eDEwZiAweDIxIDB4MTEwIDB4MjEgMHgxMTEgMHgyMSAweDExMiAweDIxIDB4MTEzIDB4MjEgMHgx
MTQgMHgyMSAweDExNSAweDIxIDB4MTE2IDB4MjEgMHgxMTcgMHgyMSAweDExOCAweDIxIDB4MTE5
IDB4MjEgMHgxMWMgMHgyMSAweDExZCAweDIxIDB4MTFlIDB4MjEgMHgxMWYgMHgyMSAweDEyMCAw
eDIxIDB4MTIxIDB4MjEgMHgxMjIgMHgyMSAweDEyMyAweDIxIDB4MTI0IDB4MjEgMHgxMjUgMHgy
MSAweDEyNiAweDIxIDB4MTI3IDB4MjEgMHgxMjggMHgyMSAweDEyOSAweDIxIDB4MTJhIDB4MjEg
MHgxMmIgMHgyMSAweDEyYyAweDIxIDB4MTJkIDB4MjEgMHgxMmUgMHgyMSAweDEyZiAweDIxIDB4
MTMwIDB4MjEgMHgxMzEgMHgyMSAweDEzMiAweDIxIDB4MTMzIDB4MjEgMHgxMzQgMHgyMSAweDEz
NSAweDIxIDB4MTM2IDB4MjEgMHgxMzcgMHgyMSAweDEzOCAweDIxIDB4MTM5IDB4MjEgMHgxM2Eg
MHgyMSAweDEzYiAweDIxIDB4MTNjIDB4MjEgMHgxM2QgMHgyMSAweDEzZSAweDIxIDB4MTNmIDB4
MjEgMHgxNDAgMHgyMSAweDE0MSAweDIxIDB4MTQyIDB4MjEgMHgxNDMgMHgyMSAweDE0NCAweDIx
IDB4MTVlIDB4MjEgMHgxNWYgMHgyMSAweDE2MCAweDIxIDB4MTYxIDB4MjEgMHgxNjIgMHgyMSAw
eDE2MyAweDIxIDB4MTY0IDB4MjEgMHgxNjUgMHgyMSAweDE2NiAweDIxIDB4MTY3IDB4MjEgMHgx
NjggMHgyMSAweDE2OSAweDIxIDB4MTZhIDB4MjEgMHgxNmIgMHgyMSAweDE2YyAweDIxIDB4MTZk
IDB4MjEgMHgxNmUgMHgyMSAweDE2ZiAweDIxIDB4MTcwIDB4MjEgMHgxNzEgMHgyMSAweDE3MiAw
eDIxIDB4MTczIDB4MjEgMHgxNzQgMHgyMSAweDE3NSAweDIxIDB4MTc2IDB4MjEgMHgxNzcgMHgy
MSAweDE3OCAweDIxIDB4MTc5IDB4MjEgMHgxN2EgMHgyMSAweDE3YiAweDIxIDB4MTdjIDB4MjEg
MHgxN2QgMHgyMSAweDE3ZSAweDIxIDB4MTdmIDB4MjEgMHgxODAgMHgyMSAweDE4MSAweDIxIDB4
MTgyIDB4MjEgMHgxODMgMHgyMSAweDE4NCAweDIxIDB4MTg1IDB4MjEgMHgxODYgMHgyMSAweDE4
NyAweDIxIDB4MTg4IDB4MjEgMHgxODkgMHgyMSAweDE4YSAweDIxIDB4MTkxIDB4MjEgMHgxOTIg
MHgyMSAweDE5MyAweDIxIDB4MTk0IDB4MjEgMHgxOTUgMHgyMSAweDE5NiAweDIxIDB4MTk3IDB4
MjEgMHgxOTggMHgyMSAweDE5OSAweDIxIDB4MTlhIDB4MjEgMHgxOWIgMHgyMSAweDE5YyAweDIx
IDB4MTlkIDB4MjEgMHgxOWUgMHgyMSAweDE5ZiAweDIxIDB4MWEwIDB4MjEgMHgxYTEgMHgyMSAw
eDFhMiAweDIxIDB4MWEzIDB4MjEgMHgxYTQgMHgyMSAweDFhNSAweDIxIDB4MWE2IDB4MjEgMHgx
YTcgMHgyMSAweDFhOCAweDIxIDB4MWE5IDB4MjEgMHgxYWEgMHgyMSAweDFhYiAweDIxIDB4MWFj
IDB4MjEgMHgxYWQgMHgyMSAweDFhZSAweDIxIDB4MWFmIDB4MjEgMHgxYjAgMHgyMSAweDFiMSAw
eDIxIDB4MWIyIDB4MjEgMHgxYjMgMHgyMSAweDFiNCAweDIxIDB4MWI1IDB4MjEgMHgxYjYgMHgy
MSAweDFiNyAweDIxIDB4MWI4IDB4MjEgMHgxYjkgMHgyMSAweDFiYSAweDIxIDB4MWJiIDB4MjEg
MHgxYmMgMHgyMSAweDFiZCAweDIxIDB4MWJlIDB4MjEgMHgxYmYgMHgyMSAweDFjMCAweDIxIDB4
MWMxIDB4MjEgMHgxYzIgMHgyMSAweDFjMyAweDIxIDB4MWM0IDB4MjEgMHgxYzUgMHgyMSAweDFj
NiAweDIxIDB4MWM3IDB4MjEgMHgxYzggMHgyMSAweDFjOSAweDIxIDB4MWNhIDB4MjEgMHgxY2Ig
MHgyMSAweDFjYyAweDIxIDB4MWNkIDB4MjEgMHgxY2UgMHgyMSAweDFjZiAweDIxIDB4MWQwIDB4
MjEgMHgxZDEgMHgyMSAweDFkMiAweDIxIDB4MWQzIDB4MjEgMHgxZDQgMHgyMSAweDFkNSAweDIx
IDB4MWQ2IDB4MjEgMHgxZDcgMHgyMSAweDFkOCAweDIxIDB4MWQ5IDB4MjEgMHgxZGEgMHgyMSAw
eDFkYiAweDIxIDB4MWRjIDB4MjEgMHgxZGQgMHgyMSAweDFkZSAweDIxIDB4MWRmIDB4MjEgMHgx
ZTAgMHgyMSAweDFlMSAweDIxIDB4MWUyIDB4MjEgMHgxZTMgMHgyMSAweDFlNCAweDIxIDB4MWU1
IDB4MjEgMHgxZTYgMHgyMSAweDFlNyAweDIxIDB4MWU4IDB4MjEgMHgxZTkgMHgyMSAweDFlYSAw
eDIxIDB4MWViIDB4MjEgMHgxZWMgMHgyMSAweDFlZCAweDIxIDB4MWVlIDB4MjEgMHgxZWYgMHgy
MSAweDFmMCAweDIxIDB4MWYxIDB4MjEgMHgxZjIgMHgyMSAweDFmMyAweDIxIDB4MWY0IDB4MjEg
MHgxZjUgMHgyMSAweDFmNiAweDIxIDB4MWY3IDB4MjEgMHgxZjggMHgyMSAweDFmOSAweDIxIDB4
MWZhIDB4MjEgMHgxZmIgMHgyMSAweDFmYyAweDIxIDB4MWZkIDB4MjEgMHgxZmUgMHgyMSAweDFm
ZiAweDIxIDB4MjAwIDB4MjEgMHgyMDEgMHgyMSAweDIwMiAweDIxIDB4MjAzIDB4MjEgMHgyMDQg
MHgyMSAweDIwNSAweDIxIDB4MjA2IDB4MjEgMHgyMDcgMHgyMSAweDIwOCAweDIxIDB4MjA5IDB4
MjEgMHgyMGEgMHgyMSAweDIwYiAweDIxIDB4MjBjIDB4MjEgMHgyMGQgMHgyMSAweDIwZSAweDIx
IDB4MjBmPjsKCQkJY2xvY2stbmFtZXMgPSAiaXNwYiIsICJydGMiLCAidGltZXIiLCAidWFydGEi
LCAiZ3BpbyIsICJzZG1tYzIiLCAiaTJzMSIsICJpMmMxIiwgInNkbW1jMSIsICJzZG1tYzQiLCAi
cHdtIiwgImkyczIiLCAidmkiLCAidXNiZCIsICJpc3BhIiwgImRpc3AyIiwgImRpc3AxIiwgImhv
c3QxeCIsICJpMnMwIiwgIm1jIiwgImFoYmRtYSIsICJhcGJkbWEiLCAicG1jIiwgImtmdXNlIiwg
InNiYzEiLCAic2JjMiIsICJzYmMzIiwgImkyYzUiLCAiZHNpYSIsICJjc2kiLCAiaTJjMiIsICJ1
YXJ0YyIsICJtaXBpX2NhbCIsICJlbWMiLCAidXNiMiIsICJic2V2IiwgInVhcnRkIiwgImkyYzMi
LCAic2JjNCIsICJzZG1tYzMiLCAicGNpZSIsICJvd3IiLCAiYWZpIiwgImNzaXRlIiwgImxhIiwg
InNvY190aGVybSIsICJkdHYiLCAiaTJjc2xvdyIsICJkc2liIiwgInRzZWMiLCAieHVzYl9ob3N0
IiwgImNzdXMiLCAibXNlbGVjdCIsICJ0c2Vuc29yIiwgImkyczMiLCAiaTJzNCIsICJpMmM0Iiwg
ImRfYXVkaW8iLCAiYXBiMmFwZSIsICJoZGEyY29kZWNfMngiLCAic3BkaWZfMngiLCAiYWN0bW9u
IiwgImV4dGVybjEiLCAiZXh0ZXJuMiIsICJleHRlcm4zIiwgInNhdGFfb29iIiwgInNhdGEiLCAi
aGRhIiwgInNlIiwgImhkYTJoZG1pIiwgInNhdGFfY29sZCIsICJjZWMiLCAieHVzYl9nYXRlIiwg
ImNpbGFiIiwgImNpbGNkIiwgImNpbGUiLCAiZHNpYWxwIiwgImRzaWJscCIsICJlbnRyb3B5Iiwg
ImRwMiIsICJ4dXNiX3NzIiwgImRtaWMxIiwgImRtaWMyIiwgImkyYzYiLCAibWNfY2FwYSIsICJt
Y19jYnBhIiwgInZpbTJfY2xrIiwgIm1pcGliaWYiLCAiY2xrNzJtaHoiLCAidmljMDMiLCAiZHBh
dXgiLCAic29yMCIsICJzb3IxIiwgImdwdSIsICJkYmdhcGIiLCAicGxsX3Bfb3V0X2Fkc3AiLCAi
cGxsX2dfcmVmIiwgInNkbW1jX2xlZ2FjeSIsICJudmRlYyIsICJudmpwZyIsICJkbWljMyIsICJh
cGUiLCAiYWRzcCIsICJtY19jZHBhIiwgIm1jX2NjcGEiLCAibWF1ZCIsICJ0c2VjYiIsICJkcGF1
eDEiLCAidmlfaTJjIiwgImhzaWNfdHJrIiwgInVzYjJfdHJrIiwgInFzcGkiLCAidWFydGFwZSIs
ICJhZHNwX25lb24iLCAibnZlbmMiLCAiaXFjMiIsICJpcWMxIiwgInNvcl9zYWZlIiwgInBsbF9w
X291dF9jcHUiLCAidWFydGIiLCAidmZpciIsICJzcGRpZl9pbiIsICJzcGRpZl9vdXQiLCAidmlf
c2Vuc29yIiwgImZ1c2UiLCAiZnVzZV9idXJuIiwgImNsa18zMmsiLCAiY2xrX20iLCAiY2xrX21f
ZGl2MiIsICJjbGtfbV9kaXY0IiwgInBsbF9yZWYiLCAicGxsX2MiLCAicGxsX2Nfb3V0MSIsICJw
bGxfYzIiLCAicGxsX2MzIiwgInBsbF9tIiwgInBsbF9tX291dDEiLCAicGxsX3AiLCAicGxsX3Bf
b3V0MSIsICJwbGxfcF9vdXQyIiwgInBsbF9wX291dDMiLCAicGxsX3Bfb3V0NCIsICJwbGxfYSIs
ICJwbGxfYV9vdXQwIiwgInBsbF9kIiwgInBsbF9kX291dDAiLCAicGxsX2QyIiwgInBsbF9kMl9v
dXQwIiwgInBsbF91IiwgInBsbF91XzQ4MG0iLCAicGxsX3VfNjBtIiwgInBsbF91XzQ4bSIsICJw
bGxfeCIsICJwbGxfeF9vdXQwIiwgInBsbF9yZV92Y28iLCAicGxsX3JlX291dCIsICJwbGxfZSIs
ICJzcGRpZl9pbl9zeW5jIiwgImkyczBfc3luYyIsICJpMnMxX3N5bmMiLCAiaTJzMl9zeW5jIiwg
ImkyczNfc3luYyIsICJpMnM0X3N5bmMiLCAidmltY2xrX3N5bmMiLCAiYXVkaW8wIiwgImF1ZGlv
MSIsICJhdWRpbzIiLCAiYXVkaW8zIiwgImF1ZGlvNCIsICJzcGRpZiIsICJjbGtfb3V0XzEiLCAi
Y2xrX291dF8yIiwgImNsa19vdXRfMyIsICJibGluayIsICJxc3BpX291dCIsICJ4dXNiX2hvc3Rf
c3JjIiwgInh1c2JfZmFsY29uX3NyYyIsICJ4dXNiX2ZzX3NyYyIsICJ4dXNiX3NzX3NyYyIsICJ4
dXNiX2Rldl9zcmMiLCAieHVzYl9kZXYiLCAieHVzYl9oc19zcmMiLCAic2NsayIsICJoY2xrIiwg
InBjbGsiLCAiY2Nsa19nIiwgImNjbGtfbHAiLCAiZGZsbF9yZWYiLCAiZGZsbF9zb2MiLCAidmlf
c2Vuc29yMiIsICJwbGxfcF9vdXQ1IiwgImNtbDAiLCAiY21sMSIsICJwbGxfYzQiLCAicGxsX2Rw
IiwgInBsbF9lX211eCIsICJwbGxfbWIiLCAicGxsX2ExIiwgInBsbF9kX2RzaV9vdXQiLCAicGxs
X2M0X291dDAiLCAicGxsX2M0X291dDEiLCAicGxsX2M0X291dDIiLCAicGxsX2M0X291dDMiLCAi
cGxsX3Vfb3V0IiwgInBsbF91X291dDEiLCAicGxsX3Vfb3V0MiIsICJ1c2IyX2hzaWNfdHJrIiwg
InBsbF9wX291dF9oc2lvIiwgInBsbF9wX291dF94dXNiIiwgInh1c2Jfc3NwX3NyYyIsICJwbGxf
cmVfb3V0MSIsICJwbGxfbWJfdWQiLCAicGxsX3BfdWQiLCAiaXNwIiwgInBsbF9hX291dF9hZHNw
IiwgInBsbF9hX291dDBfb3V0X2Fkc3AiLCAiYXVkaW8wX211eCIsICJhdWRpbzFfbXV4IiwgImF1
ZGlvMl9tdXgiLCAiYXVkaW8zX211eCIsICJhdWRpbzRfbXV4IiwgInNwZGlmX211eCIsICJjbGtf
b3V0XzFfbXV4IiwgImNsa19vdXRfMl9tdXgiLCAiY2xrX291dF8zX211eCIsICJkc2lhX211eCIs
ICJkc2liX211eCIsICJzb3IwX2x2ZHMiLCAieHVzYl9zc19kaXYyIiwgInBsbF9tX3VkIiwgInBs
bF9jX3VkIiwgInNjbGtfbXV4IiwgInNvcjFfYnJpY2siLCAic29yMV9tdXgiLCAicGQydmkiLCAi
dmlfb3V0cHV0IiwgImFjbGsiLCAic2Nsa19za2lwcGVyIiwgImRpc3AxX3NsY2dfb3ZyIiwgImRp
c3AyX3NsY2dfb3ZyIiwgInZpX3NsY2dfb3ZyIiwgImlzcGFfc2xjZ19vdnIiLCAiaXNwYl9zbGNn
X292ciIsICJudmRlY19zbGNnX292ciIsICJudmVuY19zbGNnX292ciIsICJudmpwZ19zbGNnX292
ciIsICJ2aWMwM19zbGNnX292ciIsICJ4dXNiX2Rldl9zbGNnX292ciIsICJ4dXNiX2hvc3Rfc2xj
Z19vdnIiLCAiZF9hdWRpb19zbGNnX292ciIsICJhcGVfc2xjZ19vdnIiLCAic2F0YV9zbGNnX292
ciIsICJzYXRhX3NsY2dfb3ZyX2lwZnMiLCAic2F0YV9zbGNnX292cl9mcGNpIiwgImRtaWMxX3N5
bmNfY2xrIiwgImRtaWMxX3N5bmNfY2xrX211eCIsICJkbWljMl9zeW5jX2NsayIsICJkbWljMl9z
eW5jX2Nsa19tdXgiLCAiZG1pYzNfc3luY19jbGsiLCAiZG1pYzNfc3luY19jbGtfbXV4IiwgImFj
bGtfc2xjZ19vdnIiLCAiYzJidXMiLCAiYzNidXMiLCAidmljMDNfY2J1cyIsICJudmpwZ19jYnVz
IiwgInNlX2NidXMiLCAidHNlY2JfY2J1cyIsICJjYXBfYzJidXMiLCAiY2FwX3Zjb3JlX2MyYnVz
IiwgImNhcF90aHJvdHRsZV9jMmJ1cyIsICJmbG9vcl9jMmJ1cyIsICJvdmVycmlkZV9jMmJ1cyIs
ICJlZHBfYzJidXMiLCAibnZlbmNfY2J1cyIsICJudmRlY19jYnVzIiwgInZpY19mbG9vcl9jYnVz
IiwgImNhcF9jM2J1cyIsICJjYXBfdmNvcmVfYzNidXMiLCAiY2FwX3Rocm90dGxlX2MzYnVzIiwg
ImZsb29yX2MzYnVzIiwgIm92ZXJyaWRlX2MzYnVzIiwgInZpX2NidXMiLCAiaXNwX2NidXMiLCAi
b3ZlcnJpZGVfY2J1cyIsICJjYXBfdmNvcmVfY2J1cyIsICJ2aWFfdmlfY2J1cyIsICJ2aWJfdmlf
Y2J1cyIsICJpc3BhX2lzcF9jYnVzIiwgImlzcGJfaXNwX2NidXMiLCAic2J1cyIsICJhdnBfc2Ns
ayIsICJic2VhX3NjbGsiLCAidXNiZF9zY2xrIiwgInVzYjFfc2NsayIsICJ1c2IyX3NjbGsiLCAi
dXNiM19zY2xrIiwgIndha2Vfc2NsayIsICJjYW1lcmFfc2NsayIsICJtb25fYXZwIiwgImNhcF9z
Y2xrIiwgImNhcF92Y29yZV9zY2xrIiwgImNhcF90aHJvdHRsZV9zY2xrIiwgImZsb29yX3NjbGsi
LCAib3ZlcnJpZGVfc2NsayIsICJzYmMxX3NjbGsiLCAic2JjMl9zY2xrIiwgInNiYzNfc2NsayIs
ICJzYmM0X3NjbGsiLCAicXNwaV9zY2xrIiwgImJvb3RfYXBiX3NjbGsiLCAiZW1jX21hc3RlciIs
ICJhdnBfZW1jIiwgImNwdV9lbWMiLCAiZGlzcDFfZW1jIiwgImRpc3AyX2VtYyIsICJkaXNwMV9s
YV9lbWMiLCAiZGlzcDJfbGFfZW1jIiwgInVzYmRfZW1jIiwgInVzYjFfZW1jIiwgInVzYjJfZW1j
IiwgInVzYjNfZW1jIiwgInNkbW1jM19lbWMiLCAic2RtbWM0X2VtYyIsICJtb25fZW1jIiwgImNh
cF9lbWMiLCAiY2FwX3Zjb3JlX2VtYyIsICJjYXBfdGhyb3R0bGVfZW1jIiwgImdyM2RfZW1jIiwg
Im52ZW5jX2VtYyIsICJudmpwZ19lbWMiLCAibnZkZWNfZW1jIiwgInRzZWNfZW1jIiwgInRzZWNi
X2VtYyIsICJjYW1lcmFfZW1jIiwgInZpYV9lbWMiLCAidmliX2VtYyIsICJpc3BhX2VtYyIsICJp
c3BiX2VtYyIsICJpc29fZW1jIiwgImZsb29yX2VtYyIsICJvdmVycmlkZV9lbWMiLCAiZWRwX2Vt
YyIsICJ2aWNfZW1jIiwgInZpY19zaGFyZWRfZW1jIiwgImFwZV9lbWMiLCAicGNpZV9lbWMiLCAi
eHVzYl9lbWMiLCAiZ2J1cyIsICJnbTIwYl9nYnVzIiwgImNhcF9nYnVzIiwgImVkcF9nYnVzIiwg
ImNhcF92Z3B1X2didXMiLCAiY2FwX3Rocm90dGxlX2didXMiLCAiY2FwX3Byb2ZpbGVfZ2J1cyIs
ICJvdmVycmlkZV9nYnVzIiwgImZsb29yX2didXMiLCAiZmxvb3JfcHJvZmlsZV9nYnVzIiwgImhv
c3QxeF9tYXN0ZXIiLCAibnZfaG9zdDF4IiwgInZpX2hvc3QxeCIsICJ2aWkyY19ob3N0MXgiLCAi
Y2FwX2hvc3QxeCIsICJjYXBfdmNvcmVfaG9zdDF4IiwgImZsb29yX2hvc3QxeCIsICJvdmVycmlk
ZV9ob3N0MXgiLCAibXNlbGVjdF9tYXN0ZXIiLCAiY3B1X21zZWxlY3QiLCAicGNpZV9tc2VsZWN0
IiwgImNhcF92Y29yZV9tc2VsZWN0IiwgIm92ZXJyaWRlX21zZWxlY3QiLCAiYXBlX21hc3RlciIs
ICJhZG1hX2FwZSIsICJhZHNwX2FwZSIsICJ4YmFyX2FwZSIsICJjYXBfdmNvcmVfYXBlIiwgIm92
ZXJyaWRlX2FwZSIsICJhYnVzIiwgImFkc3BfY3B1X2FidXMiLCAiY2FwX3Zjb3JlX2FidXMiLCAi
b3ZlcnJpZGVfYWJ1cyIsICJ2Y21fc2NsayIsICJ2Y21fYWhiX3NjbGsiLCAidmNtX2FwYl9zY2xr
IiwgImFoYl9zY2xrIiwgImFwYl9zY2xrIiwgInNkbW1jNF9haGJfc2NsayIsICJiYXR0ZXJ5X2Vt
YyIsICJjYnVzIjsKCQkJcmVzZXRzID0gPDB4MjEgMHgzIDB4MjEgMHg0IDB4MjEgMHg1IDB4MjEg
MHg2IDB4MjEgMHg4IDB4MjEgMHg5IDB4MjEgMHhiIDB4MjEgMHhjIDB4MjEgMHhlIDB4MjEgMHhm
IDB4MjEgMHgxMSAweDIxIDB4MTIgMHgyMSAweGU0IDB4MjEgMHgxNiAweDIxIDB4MTcgMHgyMSAw
eDFhIDB4MjEgMHgxYiAweDIxIDB4MWMgMHgyMSAweDFlIDB4MjEgMHgyMCAweDIxIDB4MjEgMHgy
MSAweDIyIDB4MjEgMHgyNiAweDIxIDB4MjggMHgyMSAweDI5IDB4MjEgMHgyYyAweDIxIDB4MmUg
MHgyMSAweDJmIDB4MjEgMHgzMCAweDIxIDB4MzQgMHgyMSAweDM2IDB4MjEgMHgzNyAweDIxIDB4
MzggMHgyMSAweDM5IDB4MjEgMHgzYSAweDIxIDB4M2YgMHgyMSAweDQxIDB4MjEgMHg0MyAweDIx
IDB4NDQgMHgyMSAweDQ1IDB4MjEgMHg0NiAweDIxIDB4NDcgMHgyMSAweDQ4IDB4MjEgMHg0OSAw
eDIxIDB4NGMgMHgyMSAweDRlIDB4MjEgMHg0ZiAweDIxIDB4NTEgMHgyMSAweDUyIDB4MjEgMHg1
MyAweDIxIDB4NTkgMHgyMSAweDVjIDB4MjEgMHg2MyAweDIxIDB4NjQgMHgyMSAweDY1IDB4MjEg
MHg2NiAweDIxIDB4NjcgMHgyMSAweDZhIDB4MjEgMHg2YiAweDIxIDB4NmYgMHgyMSAweDc2IDB4
MjEgMHg3NyAweDIxIDB4NzggMHgyMSAweDc5IDB4MjEgMHg3YSAweDIxIDB4N2IgMHgyMSAweDdj
IDB4MjEgMHg3ZCAweDIxIDB4N2YgMHgyMSAweDgwIDB4MjEgMHg4MSAweDIxIDB4ODggMHgyMSAw
eDhmIDB4MjEgMHg5MCAweDIxIDB4OTEgMHgyMSAweDkyIDB4MjEgMHg5MyAweDIxIDB4OTQgMHgy
MSAweDk1IDB4MjEgMHg5OCAweDIxIDB4OWMgMHgyMSAweGExIDB4MjEgMHhhMiAweDIxIDB4YTYg
MHgyMSAweGE3IDB4MjEgMHhhOCAweDIxIDB4YWIgMHgyMSAweGFkIDB4MjEgMHhiMSAweDIxIDB4
YjIgMHgyMSAweGI1IDB4MjEgMHhiNiAweDIxIDB4YjcgMHgyMSAweGI4IDB4MjEgMHhiOSAweDIx
IDB4YmIgMHgyMSAweGJkIDB4MjEgMHhjMSAweDIxIDB4YzIgMHgyMSAweGMzIDB4MjEgMHhjNSAw
eDIxIDB4YzYgMHgyMSAweGM3IDB4MjEgMHhjOCAweDIxIDB4YzkgMHgyMSAweGNhIDB4MjEgMHhj
ZSAweDIxIDB4Y2YgMHgyMSAweGQwIDB4MjEgMHhkMSAweDIxIDB4ZDIgMHgyMSAweGQzIDB4MjEg
MHhkNCAweDIxIDB4ZGEgMHgyMSAweGRiIDB4MjEgMHhkYyAweDIxIDB4ZGQgMHgyMSAweGRlIDB4
MjEgMHhkZiAweDIxIDB4NyAweDIxIDB4ZTEgMHgyMSAweGUyIDB4MjEgMHhlMyAweDIxIDB4ZTUg
MHgyMSAweGU2IDB4MjEgMHhlNyAweDIxIDB4ZTggMHgyMSAweGU5IDB4MjEgMHhlYSAweDIxIDB4
ZWIgMHgyMSAweGVjIDB4MjEgMHhlZCAweDIxIDB4ZWUgMHgyMSAweGVmIDB4MjEgMHhmMCAweDIx
IDB4ZjEgMHgyMSAweGYyIDB4MjEgMHhmMyAweDIxIDB4ZjQgMHgyMSAweGY1IDB4MjEgMHhmNiAw
eDIxIDB4ZjcgMHgyMSAweGY4IDB4MjEgMHhmOSAweDIxIDB4ZmEgMHgyMSAweGZiIDB4MjEgMHhm
YyAweDIxIDB4ZmQgMHgyMSAweGZlIDB4MjEgMHhmZiAweDIxIDB4MTAwIDB4MjEgMHgxMDEgMHgy
MSAweDEwMyAweDIxIDB4MTA0IDB4MjEgMHgxMDUgMHgyMSAweDEwNiAweDIxIDB4MTA3IDB4MjEg
MHgxMDggMHgyMSAweDEwOSAweDIxIDB4MTBhIDB4MjEgMHgxMGIgMHgyMSAweDEwYyAweDIxIDB4
MTBkIDB4MjEgMHgxMGUgMHgyMSAweDEwZiAweDIxIDB4MTEwIDB4MjEgMHgxMTEgMHgyMSAweDEx
MiAweDIxIDB4MTEzIDB4MjEgMHgxMTQgMHgyMSAweDExNSAweDIxIDB4MTE2IDB4MjEgMHgxMTcg
MHgyMSAweDExOCAweDIxIDB4MTE5IDB4MjEgMHgxMWMgMHgyMSAweDExZCAweDIxIDB4MTFlIDB4
MjEgMHgxMWYgMHgyMSAweDEyMCAweDIxIDB4NWYgMHgyMSAweDEyMiAweDIxIDB4MTIzIDB4MjEg
MHgxMjQgMHgyMSAweDEyNSAweDIxIDB4MTI2IDB4MjEgMHgxMjcgMHgyMSAweDEyOCAweDIxIDB4
MTI5IDB4MjEgMHgxMmEgMHgyMSAweDEyYiAweDIxIDB4MTJjIDB4MjEgMHgxMmQgMHgyMSAweDEy
ZSAweDIxIDB4MTJmIDB4MjEgMHgxMzAgMHgyMSAweDEzMSAweDIxIDB4MTMyIDB4MjEgMHgxMzMg
MHgyMSAweDEzNCAweDIxIDB4MTM1IDB4MjEgMHgxMzYgMHgyMSAweDEzNyAweDIxIDB4MTM4IDB4
MjEgMHgxMzkgMHgyMSAweDEzYSAweDIxIDB4MTNiIDB4MjEgMHgxM2MgMHgyMSAweDEzZCAweDIx
IDB4MTNlIDB4MjEgMHgxM2YgMHgyMSAweDE0MCAweDIxIDB4MTQxIDB4MjEgMHgxNDIgMHgyMSAw
eDE0MyAweDIxIDB4MTQ0IDB4MjEgMHgxNWUgMHgyMSAweDE1ZiAweDIxIDB4MTYwIDB4MjEgMHgx
NjEgMHgyMSAweDE2MiAweDIxIDB4MTYzIDB4MjEgMHgxNjQgMHgyMSAweDE2NSAweDIxIDB4MTY2
IDB4MjEgMHgxNjcgMHgyMSAweDE2OCAweDIxIDB4MTY5IDB4MjEgMHgxNmEgMHgyMSAweDE2YiAw
eDIxIDB4MTZjIDB4MjEgMHgxNmQgMHgyMSAweDE2ZSAweDIxIDB4MTZmIDB4MjEgMHgxNzAgMHgy
MSAweDE3MSAweDIxIDB4MTcyIDB4MjEgMHgxNzMgMHgyMSAweDE3NCAweDIxIDB4MTc1IDB4MjEg
MHgxNzYgMHgyMSAweDE3NyAweDIxIDB4MTc4IDB4MjEgMHgxNzkgMHgyMSAweDE3YSAweDIxIDB4
MTdiIDB4MjEgMHgxN2MgMHgyMSAweDE3ZCAweDIxIDB4MTdlIDB4MjEgMHgxN2YgMHgyMSAweDE4
MCAweDIxIDB4MTgxIDB4MjEgMHgxODIgMHgyMSAweDE4MyAweDIxIDB4MTg0IDB4MjEgMHgxODUg
MHgyMSAweDE4NiAweDIxIDB4MTg3IDB4MjEgMHgxODggMHgyMSAweDE4OSAweDIxIDB4MThhIDB4
MjEgMHgxOTEgMHgyMSAweDE5MiAweDIxIDB4MTkzIDB4MjEgMHgxOTQgMHgyMSAweDE5NSAweDIx
IDB4MTk2IDB4MjEgMHgxOTcgMHgyMSAweDE5OCAweDIxIDB4MTk5IDB4MjEgMHgxOWEgMHgyMSAw
eDE5YiAweDIxIDB4MTljIDB4MjEgMHgxOWQgMHgyMSAweDE5ZSAweDIxIDB4MTlmIDB4MjEgMHgx
YTAgMHgyMSAweDFhMSAweDIxIDB4MWEyIDB4MjEgMHgxYTMgMHgyMSAweDFhNCAweDIxIDB4MWE1
IDB4MjEgMHgxYTYgMHgyMSAweDFhNyAweDIxIDB4MWE4IDB4MjEgMHgxYTkgMHgyMSAweDFhYSAw
eDIxIDB4MWFiIDB4MjEgMHgxYWMgMHgyMSAweDFhZCAweDIxIDB4MWFlIDB4MjEgMHgxYWYgMHgy
MSAweDFiMCAweDIxIDB4MWIxIDB4MjEgMHgxYjIgMHgyMSAweDFiMyAweDIxIDB4MWI0IDB4MjEg
MHgxYjUgMHgyMSAweDFiNiAweDIxIDB4MWI3IDB4MjEgMHgxYjggMHgyMSAweDFiOSAweDIxIDB4
MWJhIDB4MjEgMHgxYmIgMHgyMSAweDFiYyAweDIxIDB4MWJkIDB4MjEgMHgxYmUgMHgyMSAweDFi
ZiAweDIxIDB4MWMwIDB4MjEgMHgxYzEgMHgyMSAweDFjMiAweDIxIDB4MWMzIDB4MjEgMHgxYzQg
MHgyMSAweDFjNSAweDIxIDB4MWM2IDB4MjEgMHgxYzcgMHgyMSAweDFjOCAweDIxIDB4MWM5IDB4
MjEgMHgxY2EgMHgyMSAweDFjYiAweDIxIDB4MWNjIDB4MjEgMHgxY2QgMHgyMSAweDFjZSAweDIx
IDB4MWNmIDB4MjEgMHgxZDAgMHgyMSAweDFkMSAweDIxIDB4MWQyIDB4MjEgMHgxZDMgMHgyMSAw
eDFkNCAweDIxIDB4MWQ1IDB4MjEgMHgxZDYgMHgyMSAweDFkNyAweDIxIDB4MWQ4IDB4MjEgMHgx
ZDkgMHgyMSAweDFkYSAweDIxIDB4MWRiIDB4MjEgMHgxZGMgMHgyMSAweDFkZCAweDIxIDB4MWRl
IDB4MjEgMHgxZGYgMHgyMSAweDFlMCAweDIxIDB4MWUxIDB4MjEgMHgxZTIgMHgyMSAweDFlMyAw
eDIxIDB4MWU0IDB4MjEgMHgxZTUgMHgyMSAweDFlNiAweDIxIDB4MWU3IDB4MjEgMHgxZTggMHgy
MSAweDFlOSAweDIxIDB4MWVhIDB4MjEgMHgxZWIgMHgyMSAweDFlYyAweDIxIDB4MWVkIDB4MjEg
MHgxZWUgMHgyMSAweDFlZiAweDIxIDB4MWYwIDB4MjEgMHgxZjEgMHgyMSAweDFmMiAweDIxIDB4
MWYzIDB4MjEgMHgxZjQgMHgyMSAweDFmNSAweDIxIDB4MWY2IDB4MjEgMHgxZjcgMHgyMSAweDFm
OCAweDIxIDB4MWY5IDB4MjEgMHgxZmEgMHgyMSAweDFmYiAweDIxIDB4MWZjIDB4MjEgMHgxZmQg
MHgyMSAweDFmZSAweDIxIDB4MWZmIDB4MjEgMHgyMDAgMHgyMSAweDIwMSAweDIxIDB4MjAyIDB4
MjEgMHgyMDMgMHgyMSAweDIwNCAweDIxIDB4MjA1IDB4MjEgMHgyMDYgMHgyMSAweDIwNyAweDIx
IDB4MjA4IDB4MjEgMHgyMDkgMHgyMSAweDIwYSAweDIxIDB4MjBiIDB4MjEgMHgyMGMgMHgyMSAw
eDIwZCAweDIxIDB4MjBlIDB4MjEgMHgyMGY+OwoJCQlyZXNldC1uYW1lcyA9ICJpc3BiIiwgInJ0
YyIsICJ0aW1lciIsICJ1YXJ0YSIsICJncGlvIiwgInNkbW1jMiIsICJpMnMxIiwgImkyYzEiLCAi
c2RtbWMxIiwgInNkbW1jNCIsICJwd20iLCAiaTJzMiIsICJ2aSIsICJ1c2JkIiwgImlzcGEiLCAi
ZGlzcDIiLCAiZGlzcDEiLCAiaG9zdDF4IiwgImkyczAiLCAibWMiLCAiYWhiZG1hIiwgImFwYmRt
YSIsICJwbWMiLCAia2Z1c2UiLCAic2JjMSIsICJzYmMyIiwgInNiYzMiLCAiaTJjNSIsICJkc2lh
IiwgImNzaSIsICJpMmMyIiwgInVhcnRjIiwgIm1pcGlfY2FsIiwgImVtYyIsICJ1c2IyIiwgImJz
ZXYiLCAidWFydGQiLCAiaTJjMyIsICJzYmM0IiwgInNkbW1jMyIsICJwY2llIiwgIm93ciIsICJh
ZmkiLCAiY3NpdGUiLCAibGEiLCAic29jX3RoZXJtIiwgImR0diIsICJpMmNzbG93IiwgImRzaWIi
LCAidHNlYyIsICJ4dXNiX2hvc3QiLCAiY3N1cyIsICJtc2VsZWN0IiwgInRzZW5zb3IiLCAiaTJz
MyIsICJpMnM0IiwgImkyYzQiLCAiZF9hdWRpbyIsICJhcGIyYXBlIiwgImhkYTJjb2RlY18yeCIs
ICJzcGRpZl8yeCIsICJhY3Rtb24iLCAiZXh0ZXJuMSIsICJleHRlcm4yIiwgImV4dGVybjMiLCAi
c2F0YV9vb2IiLCAic2F0YSIsICJoZGEiLCAic2UiLCAiaGRhMmhkbWkiLCAic2F0YV9jb2xkIiwg
ImNlYyIsICJ4dXNiX2dhdGUiLCAiY2lsYWIiLCAiY2lsY2QiLCAiY2lsZSIsICJkc2lhbHAiLCAi
ZHNpYmxwIiwgImVudHJvcHkiLCAiZHAyIiwgInh1c2Jfc3MiLCAiZG1pYzEiLCAiZG1pYzIiLCAi
aTJjNiIsICJtY19jYXBhIiwgIm1jX2NicGEiLCAidmltMl9jbGsiLCAibWlwaWJpZiIsICJjbGs3
Mm1oeiIsICJ2aWMwMyIsICJkcGF1eCIsICJzb3IwIiwgInNvcjEiLCAiZ3B1IiwgImRiZ2FwYiIs
ICJwbGxfcF9vdXRfYWRzcCIsICJwbGxfZ19yZWYiLCAic2RtbWNfbGVnYWN5IiwgIm52ZGVjIiwg
Im52anBnIiwgImRtaWMzIiwgImFwZSIsICJhZHNwIiwgIm1jX2NkcGEiLCAibWNfY2NwYSIsICJt
YXVkIiwgInRzZWNiIiwgImRwYXV4MSIsICJ2aV9pMmMiLCAiaHNpY190cmsiLCAidXNiMl90cmsi
LCAicXNwaSIsICJ1YXJ0YXBlIiwgImFkc3BfbmVvbiIsICJudmVuYyIsICJpcWMyIiwgImlxYzEi
LCAic29yX3NhZmUiLCAicGxsX3Bfb3V0X2NwdSIsICJ1YXJ0YiIsICJ2ZmlyIiwgInNwZGlmX2lu
IiwgInNwZGlmX291dCIsICJ2aV9zZW5zb3IiLCAiZnVzZSIsICJmdXNlX2J1cm4iLCAiY2xrXzMy
ayIsICJjbGtfbSIsICJjbGtfbV9kaXYyIiwgImNsa19tX2RpdjQiLCAicGxsX3JlZiIsICJwbGxf
YyIsICJwbGxfY19vdXQxIiwgInBsbF9jMiIsICJwbGxfYzMiLCAicGxsX20iLCAicGxsX21fb3V0
MSIsICJwbGxfcCIsICJwbGxfcF9vdXQxIiwgInBsbF9wX291dDIiLCAicGxsX3Bfb3V0MyIsICJw
bGxfcF9vdXQ0IiwgInBsbF9hIiwgInBsbF9hX291dDAiLCAicGxsX2QiLCAicGxsX2Rfb3V0MCIs
ICJwbGxfZDIiLCAicGxsX2QyX291dDAiLCAicGxsX3UiLCAicGxsX3VfNDgwbSIsICJwbGxfdV82
MG0iLCAicGxsX3VfNDhtIiwgInBsbF94IiwgInBsbF94X291dDAiLCAicGxsX3JlX3ZjbyIsICJw
bGxfcmVfb3V0IiwgInBsbF9lIiwgInNwZGlmX2luX3N5bmMiLCAiaTJzMF9zeW5jIiwgImkyczFf
c3luYyIsICJpMnMyX3N5bmMiLCAiaTJzM19zeW5jIiwgImkyczRfc3luYyIsICJ2aW1jbGtfc3lu
YyIsICJhdWRpbzAiLCAiYXVkaW8xIiwgImF1ZGlvMiIsICJhdWRpbzMiLCAiYXVkaW80IiwgInNw
ZGlmIiwgImNsa19vdXRfMSIsICJjbGtfb3V0XzIiLCAiY2xrX291dF8zIiwgImJsaW5rIiwgInFz
cGlfb3V0IiwgInh1c2JfaG9zdF9zcmMiLCAieHVzYl9mYWxjb25fc3JjIiwgInh1c2JfZnNfc3Jj
IiwgInh1c2Jfc3Nfc3JjIiwgInh1c2JfZGV2X3NyYyIsICJ4dXNiX2RldiIsICJ4dXNiX2hzX3Ny
YyIsICJzY2xrIiwgImhjbGsiLCAicGNsayIsICJjY2xrX2ciLCAiY2Nsa19scCIsICJkZmxsX3Jl
ZiIsICJkZmxsX3NvYyIsICJ2aV9zZW5zb3IyIiwgInBsbF9wX291dDUiLCAiY21sMCIsICJjbWwx
IiwgInBsbF9jNCIsICJwbGxfZHAiLCAicGxsX2VfbXV4IiwgInBsbF9tYiIsICJwbGxfYTEiLCAi
cGxsX2RfZHNpX291dCIsICJwbGxfYzRfb3V0MCIsICJwbGxfYzRfb3V0MSIsICJwbGxfYzRfb3V0
MiIsICJwbGxfYzRfb3V0MyIsICJwbGxfdV9vdXQiLCAicGxsX3Vfb3V0MSIsICJwbGxfdV9vdXQy
IiwgInVzYjJfaHNpY190cmsiLCAicGxsX3Bfb3V0X2hzaW8iLCAicGxsX3Bfb3V0X3h1c2IiLCAi
eHVzYl9zc3Bfc3JjIiwgInBsbF9yZV9vdXQxIiwgInBsbF9wX3VkIiwgImlzcCIsICJwbGxfYV9v
dXRfYWRzcCIsICJwbGxfYV9vdXQwX291dF9hZHNwIiwgImF1ZGlvMF9tdXgiLCAiYXVkaW8xX211
eCIsICJhdWRpbzJfbXV4IiwgImF1ZGlvM19tdXgiLCAiYXVkaW80X211eCIsICJzcGRpZl9tdXgi
LCAiY2xrX291dF8xX211eCIsICJjbGtfb3V0XzJfbXV4IiwgImNsa19vdXRfM19tdXgiLCAiZHNp
YV9tdXgiLCAiZHNpYl9tdXgiLCAic29yMF9sdmRzIiwgInh1c2Jfc3NfZGl2MiIsICJwbGxfbV91
ZCIsICJwbGxfY191ZCIsICJzY2xrX211eCIsICJzb3IxX2JyaWNrIiwgInNvcjFfbXV4IiwgInBk
MnZpIiwgInZpX291dHB1dCIsICJhY2xrIiwgInNjbGtfc2tpcHBlciIsICJkaXNwMV9zbGNnX292
ciIsICJkaXNwMl9zbGNnX292ciIsICJ2aV9zbGNnX292ciIsICJpc3BhX3NsY2dfb3ZyIiwgImlz
cGJfc2xjZ19vdnIiLCAibnZkZWNfc2xjZ19vdnIiLCAibnZlbmNfc2xjZ19vdnIiLCAibnZqcGdf
c2xjZ19vdnIiLCAidmljMDNfc2xjZ19vdnIiLCAieHVzYl9kZXZfc2xjZ19vdnIiLCAieHVzYl9o
b3N0X3NsY2dfb3ZyIiwgImRfYXVkaW9fc2xjZ19vdnIiLCAiYXBlX3NsY2dfb3ZyIiwgInNhdGFf
c2xjZ19vdnIiLCAic2F0YV9zbGNnX292cl9pcGZzIiwgInNhdGFfc2xjZ19vdnJfZnBjaSIsICJk
bWljMV9zeW5jX2NsayIsICJkbWljMV9zeW5jX2Nsa19tdXgiLCAiZG1pYzJfc3luY19jbGsiLCAi
ZG1pYzJfc3luY19jbGtfbXV4IiwgImRtaWMzX3N5bmNfY2xrIiwgImRtaWMzX3N5bmNfY2xrX211
eCIsICJhY2xrX3NsY2dfb3ZyIiwgImMyYnVzIiwgImMzYnVzIiwgInZpYzAzX2NidXMiLCAibnZq
cGdfY2J1cyIsICJzZV9jYnVzIiwgInRzZWNiX2NidXMiLCAiY2FwX2MyYnVzIiwgImNhcF92Y29y
ZV9jMmJ1cyIsICJjYXBfdGhyb3R0bGVfYzJidXMiLCAiZmxvb3JfYzJidXMiLCAib3ZlcnJpZGVf
YzJidXMiLCAiZWRwX2MyYnVzIiwgIm52ZW5jX2NidXMiLCAibnZkZWNfY2J1cyIsICJ2aWNfZmxv
b3JfY2J1cyIsICJjYXBfYzNidXMiLCAiY2FwX3Zjb3JlX2MzYnVzIiwgImNhcF90aHJvdHRsZV9j
M2J1cyIsICJmbG9vcl9jM2J1cyIsICJvdmVycmlkZV9jM2J1cyIsICJ2aV9jYnVzIiwgImlzcF9j
YnVzIiwgIm92ZXJyaWRlX2NidXMiLCAiY2FwX3Zjb3JlX2NidXMiLCAidmlhX3ZpX2NidXMiLCAi
dmliX3ZpX2NidXMiLCAiaXNwYV9pc3BfY2J1cyIsICJpc3BiX2lzcF9jYnVzIiwgInNidXMiLCAi
YXZwX3NjbGsiLCAiYnNlYV9zY2xrIiwgInVzYmRfc2NsayIsICJ1c2IxX3NjbGsiLCAidXNiMl9z
Y2xrIiwgInVzYjNfc2NsayIsICJ3YWtlX3NjbGsiLCAiY2FtZXJhX3NjbGsiLCAibW9uX2F2cCIs
ICJjYXBfc2NsayIsICJjYXBfdmNvcmVfc2NsayIsICJjYXBfdGhyb3R0bGVfc2NsayIsICJmbG9v
cl9zY2xrIiwgIm92ZXJyaWRlX3NjbGsiLCAic2JjMV9zY2xrIiwgInNiYzJfc2NsayIsICJzYmMz
X3NjbGsiLCAic2JjNF9zY2xrIiwgInFzcGlfc2NsayIsICJib290X2FwYl9zY2xrIiwgImVtY19t
YXN0ZXIiLCAiYXZwX2VtYyIsICJjcHVfZW1jIiwgImRpc3AxX2VtYyIsICJkaXNwMl9lbWMiLCAi
ZGlzcDFfbGFfZW1jIiwgImRpc3AyX2xhX2VtYyIsICJ1c2JkX2VtYyIsICJ1c2IxX2VtYyIsICJ1
c2IyX2VtYyIsICJ1c2IzX2VtYyIsICJzZG1tYzNfZW1jIiwgInNkbW1jNF9lbWMiLCAibW9uX2Vt
YyIsICJjYXBfZW1jIiwgImNhcF92Y29yZV9lbWMiLCAiY2FwX3Rocm90dGxlX2VtYyIsICJncjNk
X2VtYyIsICJudmVuY19lbWMiLCAibnZqcGdfZW1jIiwgIm52ZGVjX2VtYyIsICJ0c2VjX2VtYyIs
ICJ0c2VjYl9lbWMiLCAiY2FtZXJhX2VtYyIsICJ2aWFfZW1jIiwgInZpYl9lbWMiLCAiaXNwYV9l
bWMiLCAiaXNwYl9lbWMiLCAiaXNvX2VtYyIsICJmbG9vcl9lbWMiLCAib3ZlcnJpZGVfZW1jIiwg
ImVkcF9lbWMiLCAidmljX2VtYyIsICJ2aWNfc2hhcmVkX2VtYyIsICJhcGVfZW1jIiwgInBjaWVf
ZW1jIiwgInh1c2JfZW1jIiwgImdidXMiLCAiZ20yMGJfZ2J1cyIsICJjYXBfZ2J1cyIsICJlZHBf
Z2J1cyIsICJjYXBfdmdwdV9nYnVzIiwgImNhcF90aHJvdHRsZV9nYnVzIiwgImNhcF9wcm9maWxl
X2didXMiLCAib3ZlcnJpZGVfZ2J1cyIsICJmbG9vcl9nYnVzIiwgImZsb29yX3Byb2ZpbGVfZ2J1
cyIsICJob3N0MXhfbWFzdGVyIiwgIm52X2hvc3QxeCIsICJ2aV9ob3N0MXgiLCAidmlpMmNfaG9z
dDF4IiwgImNhcF9ob3N0MXgiLCAiY2FwX3Zjb3JlX2hvc3QxeCIsICJmbG9vcl9ob3N0MXgiLCAi
b3ZlcnJpZGVfaG9zdDF4IiwgIm1zZWxlY3RfbWFzdGVyIiwgImNwdV9tc2VsZWN0IiwgInBjaWVf
bXNlbGVjdCIsICJjYXBfdmNvcmVfbXNlbGVjdCIsICJvdmVycmlkZV9tc2VsZWN0IiwgImFwZV9t
YXN0ZXIiLCAiYWRtYV9hcGUiLCAiYWRzcF9hcGUiLCAieGJhcl9hcGUiLCAiY2FwX3Zjb3JlX2Fw
ZSIsICJvdmVycmlkZV9hcGUiLCAiYWJ1cyIsICJhZHNwX2NwdV9hYnVzIiwgImNhcF92Y29yZV9h
YnVzIiwgIm92ZXJyaWRlX2FidXMiLCAidmNtX3NjbGsiLCAidmNtX2FoYl9zY2xrIiwgInZjbV9h
cGJfc2NsayIsICJhaGJfc2NsayIsICJhcGJfc2NsayIsICJzZG1tYzRfYWhiX3NjbGsiLCAiYmF0
dGVyeV9lbWMiLCAiY2J1cyI7CgkJfTsKCX07CgoJZ3BzX3dha2UgewoJCWNvbXBhdGlibGUgPSAi
Z3BzLXdha2UiOwoJCWdwcy1lbmFibGUtZ3BpbyA9IDwweGQ2IDB4OCAweDA+OwoJCWdwcy13YWtl
dXAtZ3BpbyA9IDwweDU2IDB4MjYgMHgwPjsKCQlzdGF0dXMgPSAiZGlzYWJsZWQiOwoJCWxpbnV4
LHBoYW5kbGUgPSA8MHgxMjg+OwoJCXBoYW5kbGUgPSA8MHgxMjg+OwoJfTsKCgljaG9zZW4gewoJ
CW52aWRpYSx0ZWdyYS1wb3JnLXNrdTsKCQlzdGRvdXQtcGF0aCA9ICIvc2VyaWFsQDcwMDA2MDAw
IjsKCQludmlkaWEsdGVncmEtYWx3YXlzLW9uLXBlcnNvbmFsaXR5OwoJCW5vLXRuaWQtc247CgkJ
Ym9vdGFyZ3MgPSAiZWFybHljb249dWFydDgyNTAsbW1pbzMyLDB4NzAwMDYwMDAiOwoJCW52aWRp
YSxib290bG9hZGVyLXh1c2ItZW5hYmxlOwoJCW52aWRpYSxib290bG9hZGVyLXZidXMtZW5hYmxl
ID0gPDB4MT47CgkJbnZpZGlhLGZhc3Rib290X3dpdGhvdXRfdXNiOwoJCW52aWRpYSxncHUtZGlz
YWJsZS1wb3dlci1zYXZpbmc7CgkJYm9hcmQtaGFzLWVlcHJvbTsKCQlmaXJtd2FyZS1ibG9iLXBh
cnRpdGlvbiA9ICJSUDQiOwoKCQl2ZXJpZmllZC1ib290IHsKCQkJcG93ZXJvZmYtb24tcmVkLXN0
YXRlOwoJCX07Cgl9OwoKCWdwdS1kdmZzLXJld29yayB7CgkJc3RhdHVzID0gIm9rYXkiOwoJfTsK
Cglwd21fcmVndWxhdG9ycyB7CgkJY29tcGF0aWJsZSA9ICJzaW1wbGUtYnVzIjsKCQkjYWRkcmVz
cy1jZWxscyA9IDwweDE+OwoJCSNzaXplLWNlbGxzID0gPDB4MD47CgoJCXB3bS1yZWd1bGF0b3JA
MCB7CgkJCXN0YXR1cyA9ICJva2F5IjsKCQkJcmVnID0gPDB4MD47CgkJCWNvbXBhdGlibGUgPSAi
cHdtLXJlZ3VsYXRvciI7CgkJCXB3bXMgPSA8MHhkZSAweDAgMHg5YzQ+OwoJCQlyZWd1bGF0b3It
bmFtZSA9ICJ2ZGQtY3B1IjsKCQkJcmVndWxhdG9yLW1pbi1taWNyb3ZvbHQgPSA8MHhhY2RhMD47
CgkJCXJlZ3VsYXRvci1tYXgtbWljcm92b2x0ID0gPDB4MTQzMTg4PjsKCQkJcmVndWxhdG9yLWFs
d2F5cy1vbjsKCQkJcmVndWxhdG9yLWJvb3Qtb247CgkJCXZvbHRhZ2UtdGFibGUgPSA8MHhhY2Rh
MCAweDAgMHhiMThhMCAweDEgMHhiNjNhMCAweDIgMHhiYWVhMCAweDMgMHhiZjlhMCAweDQgMHhj
NDRhMCAweDUgMHhjOGZhMCAweDYgMHhjZGFhMCAweDcgMHhkMjVhMCAweDggMHhkNzBhMCAweDkg
MHhkYmJhMCAweGEgMHhlMDZhMCAweGIgMHhlNTFhMCAweGMgMHhlOWNhMCAweGQgMHhlZTdhMCAw
eGUgMHhmMzJhMCAweGYgMHhmN2RhMCAweDEwIDB4ZmM4YTAgMHgxMSAweDEwMTNhMCAweDEyIDB4
MTA1ZWEwIDB4MTMgMHgxMGE5YTAgMHgxNCAweDEwZjRhMCAweDE1IDB4MTEzZmEwIDB4MTYgMHgx
MThhYTAgMHgxNyAweDExZDVhMCAweDE4IDB4MTIyMGEwIDB4MTkgMHgxMjZiYTAgMHgxYSAweDEy
YjZhMCAweDFiIDB4MTMwMWEwIDB4MWMgMHgxMzRjYTAgMHgxZCAweDEzOTdhMCAweDFlIDB4MTNl
MmEwIDB4MWYgMHgxNDJkYTAgMHgyMD47CgkJCWxpbnV4LHBoYW5kbGUgPSA8MHg5ZD47CgkJCXBo
YW5kbGUgPSA8MHg5ZD47CgkJfTsKCgkJcHdtLXJlZ3VsYXRvckAxIHsKCQkJc3RhdHVzID0gIm9r
YXkiOwoJCQlyZWcgPSA8MHgxPjsKCQkJY29tcGF0aWJsZSA9ICJwd20tcmVndWxhdG9yIjsKCQkJ
cHdtcyA9IDwweGE1IDB4MSAweDFmNDA+OwoJCQlyZWd1bGF0b3ItbmFtZSA9ICJ2ZGQtZ3B1IjsK
CQkJcmVndWxhdG9yLW1pbi1taWNyb3ZvbHQgPSA8MHhhY2RhMD47CgkJCXJlZ3VsYXRvci1tYXgt
bWljcm92b2x0ID0gPDB4MTQzMTg4PjsKCQkJcmVndWxhdG9yLWluaXQtbWljcm92b2x0ID0gPDB4
ZjQyNDA+OwoJCQlyZWd1bGF0b3Itbi12b2x0YWdlcyA9IDwweDNlPjsKCQkJcmVndWxhdG9yLWVu
YWJsZS1yYW1wLWRlbGF5ID0gPDB4N2QwPjsKCQkJZW5hYmxlLWdwaW8gPSA8MHgxZSAweDYgMHgw
PjsKCQkJcmVndWxhdG9yLXNldHRsaW5nLXRpbWUtdXMgPSA8MHhhMD47CgkJfTsKCX07CgoJZGZs
bC1tYXg3NzYyMUA3MDExMDAwMCB7CgoJCWRmbGwtbWF4Nzc2MjEtaW50ZWdyYXRpb24gewoJCQlp
MmMtZnMtcmF0ZSA9IDwweGY0MjQwPjsKCQkJcG1pYy1pMmMtYWRkcmVzcyA9IDwweDM2PjsKCQkJ
cG1pYy1pMmMtdm9sdGFnZS1yZWdpc3RlciA9IDwweDE+OwoJCQlzZWwtY29udmVyc2lvbi1zbG9w
ZSA9IDwweDE+OwoJCQlsaW51eCxwaGFuZGxlID0gPDB4MTI5PjsKCQkJcGhhbmRsZSA9IDwweDEy
OT47CgkJfTsKCgkJZGZsbC1tYXg3NzYyMS1ib2FyZC1wYXJhbXMgewoJCQlzYW1wbGUtcmF0ZSA9
IDwweDMwZDQ+OwoJCQlmaXhlZC1vdXRwdXQtZm9yY2luZzsKCQkJY2YgPSA8MHhhPjsKCQkJY2kg
PSA8MHgwPjsKCQkJY2cgPSA8MHgyPjsKCQkJZHJvb3AtY3V0LXZhbHVlID0gPDB4Zj47CgkJCWRy
b29wLXJlc3RvcmUtcmFtcCA9IDwweDA+OwoJCQlzY2FsZS1vdXQtcmFtcCA9IDwweDA+OwoJCQls
aW51eCxwaGFuZGxlID0gPDB4MTJhPjsKCQkJcGhhbmRsZSA9IDwweDEyYT47CgkJfTsKCX07CgoJ
ZGZsbC1jZGV2LWNhcCB7CgkJY29tcGF0aWJsZSA9ICJudmlkaWEsdGVncmEtZGZsbC1jZGV2LWFj
dGlvbiI7CgkJYWN0LWRldiA9IDwweDI2PjsKCQljZGV2LXR5cGUgPSAiREZMTC1jYXAiOwoJCSNj
b29saW5nLWNlbGxzID0gPDB4Mj47CgkJbGludXgscGhhbmRsZSA9IDwweDE3PjsKCQlwaGFuZGxl
ID0gPDB4MTc+OwoJfTsKCglkZmxsLWNkZXYtZmxvb3IgewoJCWNvbXBhdGlibGUgPSAibnZpZGlh
LHRlZ3JhLWRmbGwtY2Rldi1hY3Rpb24iOwoJCWFjdC1kZXYgPSA8MHgyNj47CgkJY2Rldi10eXBl
ID0gIkRGTEwtZmxvb3IiOwoJCSNjb29saW5nLWNlbGxzID0gPDB4Mj47CgkJbGludXgscGhhbmRs
ZSA9IDwweDEwPjsKCQlwaGFuZGxlID0gPDB4MTA+OwoJfTsKCglkdmZzIHsKCQljb21wYXRpYmxl
ID0gIm52aWRpYSx0ZWdyYTIxMC1kdmZzIjsKCQl2ZGQtY3B1LXN1cHBseSA9IDwweDlkPjsKCQlu
dmlkaWEsZ3B1LW1heC1mcmVxLWtoeiA9IDwweGUxMDAwPjsKCX07CgoJcjgxNjggewoJCWlzb2xh
dGUtZ3BpbyA9IDwweDU2IDB4YmIgMHgwPjsKCX07CgoJdGVncmFfdWRybSB7CgkJY29tcGF0aWJs
ZSA9ICJudmlkaWEsdGVncmEtdWRybSI7CgkJbGludXgscGhhbmRsZSA9IDwweDEyYj47CgkJcGhh
bmRsZSA9IDwweDEyYj47Cgl9OwoKCXNvZnRfd2F0Y2hkb2cgewoJCXN0YXR1cyA9ICJva2F5IjsK
CQlsaW51eCxwaGFuZGxlID0gPDB4Yjc+OwoJCXBoYW5kbGUgPSA8MHhiNz47Cgl9OwoKCWxlZHMg
ewoJCWNvbXBhdGlibGUgPSAiZ3Bpby1sZWRzIjsKCQlzdGF0dXMgPSAiZGlzYWJsZWQiOwoJCWxp
bnV4LHBoYW5kbGUgPSA8MHhjOD47CgkJcGhhbmRsZSA9IDwweGM4PjsKCgkJcHdyIHsKCQkJZ3Bp
b3MgPSA8MHg1NiAweDQxIDB4MD47CgkJCWRlZmF1bHQtc3RhdGUgPSAib24iOwoJCQlsaW51eCxk
ZWZhdWx0LXRyaWdnZXIgPSAic3lzdGVtLXRocm90dGxlIjsKCQl9OwoJfTsKCgltZW1vcnlAODAw
MDAwMDAgewoJCWRldmljZV90eXBlID0gIm1lbW9yeSI7CgkJcmVnID0gPDB4MCAweDgwMDAwMDAw
IDB4MCAweDgwMDAwMDAwPjsKCX07CgoJY3B1X2VkcCB7CgkJc3RhdHVzID0gIm9rYXkiOwoJCW52
aWRpYSxlZHBfbGltaXQgPSA8MHg2MWE4PjsKCX07CgoJZ3B1X2VkcCB7CgkJc3RhdHVzID0gIm9r
YXkiOwoJCW52aWRpYSxlZHBfbGltaXQgPSA8MHg0ZTIwPjsKCX07CgoJX19zeW1ib2xzX18gewoJ
CWdwdV9zY2FsaW5nMCA9ICIvdGhlcm1hbC16b25lcy9BTy10aGVybS90cmlwcy9ncHUtc2NhbGlu
ZzAiOwoJCWdwdV9zY2FsaW5nMSA9ICIvdGhlcm1hbC16b25lcy9BTy10aGVybS90cmlwcy9ncHUt
c2NhbGluZzEiOwoJCWdwdV9zY2FsaW5nMiA9ICIvdGhlcm1hbC16b25lcy9BTy10aGVybS90cmlw
cy9ncHUtc2NhbGluZzIiOwoJCWdwdV9zY2FsaW5nMyA9ICIvdGhlcm1hbC16b25lcy9BTy10aGVy
bS90cmlwcy9ncHUtc2NhbGluZzMiOwoJCWdwdV9zY2FsaW5nNCA9ICIvdGhlcm1hbC16b25lcy9B
Ty10aGVybS90cmlwcy9ncHUtc2NhbGluZzQiOwoJCWdwdV9zY2FsaW5nNSA9ICIvdGhlcm1hbC16
b25lcy9BTy10aGVybS90cmlwcy9ncHUtc2NhbGluZzUiOwoJCWdwdV92bWF4MSA9ICIvdGhlcm1h
bC16b25lcy9BTy10aGVybS90cmlwcy9ncHUtdm1heDEiOwoJCWNvcmVfZHZmc19mbG9vcl90cmlw
MCA9ICIvdGhlcm1hbC16b25lcy9BTy10aGVybS90cmlwcy9jb3JlX2R2ZnNfZmxvb3JfdHJpcDAi
OwoJCWNvcmVfZHZmc19jYXBfdHJpcDAgPSAiL3RoZXJtYWwtem9uZXMvQU8tdGhlcm0vdHJpcHMv
Y29yZV9kdmZzX2NhcF90cmlwMCI7CgkJZGZsbF9mbG9vcl90cmlwMCA9ICIvdGhlcm1hbC16b25l
cy9BTy10aGVybS90cmlwcy9kZmxsLWZsb29yLXRyaXAwIjsKCQlkZmxsX2NhcF90cmlwMCA9ICIv
dGhlcm1hbC16b25lcy9DUFUtdGhlcm0vdHJpcHMvZGZsbC1jYXAtdHJpcDAiOwoJCWRmbGxfY2Fw
X3RyaXAxID0gIi90aGVybWFsLXpvbmVzL0NQVS10aGVybS90cmlwcy9kZmxsLWNhcC10cmlwMSI7
CgkJcGxsX2RyYW1fdGhyb3R0bGUgPSAiL3RoZXJtYWwtem9uZXMvUExMLXRoZXJtL3RyaXBzL2Ry
YW0tdGhyb3R0bGUiOwoJCWRpZV90ZW1wX3RocmVzaCA9ICIvdGhlcm1hbC16b25lcy9QTUlDLURp
ZS90cmlwcy9ob3QtZGllIjsKCQljb3JlX2R2ZnNfZmxvb3IgPSAiL2NvcmVfZHZmc19jZGV2X2Zs
b29yIjsKCQljb3JlX2R2ZnNfY2FwID0gIi9jb3JlX2R2ZnNfY2Rldl9jYXAiOwoJCWhvc3QxeF9w
ZCA9ICIvcG93ZXItZG9tYWluL2hvc3QxeC1wZCI7CgkJcGRfYXVkaW8gPSAiL3Bvd2VyLWRvbWFp
bi9hcGUtcGQiOwoJCWFkc3BfcGQgPSAiL3Bvd2VyLWRvbWFpbi9hZHNwLXBkIjsKCQl0c2VjX3Bk
ID0gIi9wb3dlci1kb21haW4vdHNlYy1wZCI7CgkJcGRfbnZkZWMgPSAiL3Bvd2VyLWRvbWFpbi9u
dmRlYy1wZCI7CgkJcGRfdmUyID0gIi9wb3dlci1kb21haW4vdmUyLXBkIjsKCQlwZF92aWMgPSAi
L3Bvd2VyLWRvbWFpbi92aWMwMy1wZCI7CgkJcGRfbnZlbmMgPSAiL3Bvd2VyLWRvbWFpbi9tc2Vu
Yy1wZCI7CgkJcGRfbnZqcGcgPSAiL3Bvd2VyLWRvbWFpbi9udmpwZy1wZCI7CgkJcGRfcGNpZSA9
ICIvcG93ZXItZG9tYWluL3BjaWUtcGQiOwoJCXZlX3BkID0gIi9wb3dlci1kb21haW4vdmUtcGQi
OwoJCXNhdGFfcGQgPSAiL3Bvd2VyLWRvbWFpbi9zYXRhLXBkIjsKCQlzb3JfcGQgPSAiL3Bvd2Vy
LWRvbWFpbi9zb3ItcGQiOwoJCWRpc2FfcGQgPSAiL3Bvd2VyLWRvbWFpbi9kaXNhLXBkIjsKCQlk
aXNiX3BkID0gIi9wb3dlci1kb21haW4vZGlzYi1wZCI7CgkJeHVzYmFfcGQgPSAiL3Bvd2VyLWRv
bWFpbi94dXNiYS1wZCI7CgkJeHVzYmJfcGQgPSAiL3Bvd2VyLWRvbWFpbi94dXNiYi1wZCI7CgkJ
eHVzYmNfcGQgPSAiL3Bvd2VyLWRvbWFpbi94dXNiYy1wZCI7CgkJQzcgPSAiL2NwdXMvaWRsZS1z
dGF0ZXMvYzciOwoJCUNDNiA9ICIvY3B1cy9pZGxlLXN0YXRlcy9jYzYiOwoJCUwyID0gIi9jcHVz
L2wyLWNhY2hlIjsKCQl0ZWdyYV9jYXIgPSAiL2Nsb2NrIjsKCQlpcmFtID0gIi9yZXNlcnZlZC1t
ZW1vcnkvaXJhbS1jYXJ2ZW91dCI7CgkJcmFtb29wc19yZXNlcnZlZCA9ICIvcmVzZXJ2ZWQtbWVt
b3J5L3JhbW9vcHNfY2FydmVvdXQiOwoJCXZwciA9ICIvcmVzZXJ2ZWQtbWVtb3J5L3Zwci1jYXJ2
ZW91dCI7CgkJZmIwX3Jlc2VydmVkID0gIi9yZXNlcnZlZC1tZW1vcnkvZmIwX2NhcnZlb3V0IjsK
CQlmYjFfcmVzZXJ2ZWQgPSAiL3Jlc2VydmVkLW1lbW9yeS9mYjFfY2FydmVvdXQiOwoJCXNtbXUg
PSAiL2lvbW11IjsKCQljb21tb25fYXMgPSAiL2lvbW11L2FkZHJlc3Mtc3BhY2UtcHJvcC9jb21t
b24iOwoJCXBwY3NfYXMgPSAiL2lvbW11L2FkZHJlc3Mtc3BhY2UtcHJvcC9wcGNzIjsKCQlkY19h
cyA9ICIvaW9tbXUvYWRkcmVzcy1zcGFjZS1wcm9wL2RjIjsKCQlncHVfYXMgPSAiL2lvbW11L2Fk
ZHJlc3Mtc3BhY2UtcHJvcC9ncHUiOwoJCWFwZV9hcyA9ICIvaW9tbXUvYWRkcmVzcy1zcGFjZS1w
cm9wL2FwZSI7CgkJc21tdV90ZXN0ID0gIi9zbW11X3Rlc3QiOwoJCWRtYV90ZXN0ID0gIi9kbWFf
dGVzdCI7CgkJaW50YyA9ICIvaW50ZXJydXB0LWNvbnRyb2xsZXIiOwoJCWxpYyA9ICIvaW50ZXJy
dXB0LWNvbnRyb2xsZXJANjAwMDQwMDAiOwoJCWFoYiA9ICIvYWhiQDYwMDBjMDAwIjsKCQl0ZWdy
YV9hZ2ljID0gIi9hY29ubmVjdEA3MDJjMDAwMC9hZ2ljQDcwMmY5MDAwIjsKCQlhZG1hID0gIi9h
Y29ubmVjdEA3MDJjMDAwMC9hZG1hQDcwMmUyMDAwIjsKCQl0ZWdyYV9heGJhciA9ICIvYWNvbm5l
Y3RANzAyYzAwMDAvYWh1YiI7CgkJdGVncmFfYWRtYWlmID0gIi9hY29ubmVjdEA3MDJjMDAwMC9h
aHViL2FkbWFpZkAweDcwMmQwMDAwIjsKCQl0ZWdyYV9zZmMxID0gIi9hY29ubmVjdEA3MDJjMDAw
MC9haHViL3NmY0A3MDJkMjAwMCI7CgkJdGVncmFfc2ZjMiA9ICIvYWNvbm5lY3RANzAyYzAwMDAv
YWh1Yi9zZmNANzAyZDIyMDAiOwoJCXRlZ3JhX3NmYzMgPSAiL2Fjb25uZWN0QDcwMmMwMDAwL2Fo
dWIvc2ZjQDcwMmQyNDAwIjsKCQl0ZWdyYV9zZmM0ID0gIi9hY29ubmVjdEA3MDJjMDAwMC9haHVi
L3NmY0A3MDJkMjYwMCI7CgkJdGVncmFfYW1peGVyID0gIi9hY29ubmVjdEA3MDJjMDAwMC9haHVi
L2FtaXhlckA3MDJkYmIwMCI7CgkJdGVncmFfaTJzMSA9ICIvYWNvbm5lY3RANzAyYzAwMDAvYWh1
Yi9pMnNANzAyZDEwMDAiOwoJCXRlZ3JhX2kyczIgPSAiL2Fjb25uZWN0QDcwMmMwMDAwL2FodWIv
aTJzQDcwMmQxMTAwIjsKCQl0ZWdyYV9pMnMzID0gIi9hY29ubmVjdEA3MDJjMDAwMC9haHViL2ky
c0A3MDJkMTIwMCI7CgkJdGVncmFfaTJzNCA9ICIvYWNvbm5lY3RANzAyYzAwMDAvYWh1Yi9pMnNA
NzAyZDEzMDAiOwoJCXRlZ3JhX2kyczUgPSAiL2Fjb25uZWN0QDcwMmMwMDAwL2FodWIvaTJzQDcw
MmQxNDAwIjsKCQl0ZWdyYV9hbXgxID0gIi9hY29ubmVjdEA3MDJjMDAwMC9haHViL2FteEA3MDJk
MzAwMCI7CgkJdGVncmFfYW14MiA9ICIvYWNvbm5lY3RANzAyYzAwMDAvYWh1Yi9hbXhANzAyZDMx
MDAiOwoJCXRlZ3JhX2FkeDEgPSAiL2Fjb25uZWN0QDcwMmMwMDAwL2FodWIvYWR4QDcwMmQzODAw
IjsKCQl0ZWdyYV9hZHgyID0gIi9hY29ubmVjdEA3MDJjMDAwMC9haHViL2FkeEA3MDJkMzkwMCI7
CgkJdGVncmFfZG1pYzEgPSAiL2Fjb25uZWN0QDcwMmMwMDAwL2FodWIvZG1pY0A3MDJkNDAwMCI7
CgkJdGVncmFfZG1pYzIgPSAiL2Fjb25uZWN0QDcwMmMwMDAwL2FodWIvZG1pY0A3MDJkNDEwMCI7
CgkJdGVncmFfZG1pYzMgPSAiL2Fjb25uZWN0QDcwMmMwMDAwL2FodWIvZG1pY0A3MDJkNDIwMCI7
CgkJdGVncmFfYWZjMSA9ICIvYWNvbm5lY3RANzAyYzAwMDAvYWh1Yi9hZmNANzAyZDcwMDAiOwoJ
CXRlZ3JhX2FmYzIgPSAiL2Fjb25uZWN0QDcwMmMwMDAwL2FodWIvYWZjQDcwMmQ3MTAwIjsKCQl0
ZWdyYV9hZmMzID0gIi9hY29ubmVjdEA3MDJjMDAwMC9haHViL2FmY0A3MDJkNzIwMCI7CgkJdGVn
cmFfYWZjNCA9ICIvYWNvbm5lY3RANzAyYzAwMDAvYWh1Yi9hZmNANzAyZDczMDAiOwoJCXRlZ3Jh
X2FmYzUgPSAiL2Fjb25uZWN0QDcwMmMwMDAwL2FodWIvYWZjQDcwMmQ3NDAwIjsKCQl0ZWdyYV9h
ZmM2ID0gIi9hY29ubmVjdEA3MDJjMDAwMC9haHViL2FmY0A3MDJkNzUwMCI7CgkJdGVncmFfbXZj
MSA9ICIvYWNvbm5lY3RANzAyYzAwMDAvYWh1Yi9tdmNANzAyZGEwMDAiOwoJCXRlZ3JhX212YzIg
PSAiL2Fjb25uZWN0QDcwMmMwMDAwL2FodWIvbXZjQDcwMmRhMjAwIjsKCQl0ZWdyYV9pcWMxID0g
Ii9hY29ubmVjdEA3MDJjMDAwMC9haHViL2lxY0A3MDJkZTAwMCI7CgkJdGVncmFfaXFjMiA9ICIv
YWNvbm5lY3RANzAyYzAwMDAvYWh1Yi9pcWNANzAyZGUyMDAiOwoJCXRlZ3JhX29wZTEgPSAiL2Fj
b25uZWN0QDcwMmMwMDAwL2FodWIvb3BlQDcwMmQ4MDAwIjsKCQl0ZWdyYV9vcGUyID0gIi9hY29u
bmVjdEA3MDJjMDAwMC9haHViL29wZUA3MDJkODQwMCI7CgkJdGVncmFfYWRzcF9hdWRpbyA9ICIv
YWNvbm5lY3RANzAyYzAwMDAvYWRzcF9hdWRpbyI7CgkJYXBiZG1hID0gIi9kbWFANjAwMjAwMDAi
OwoJCXBpbm11eCA9ICIvcGlubXV4QDcwMDAwOGQ0IjsKCQljbGtyZXFfMF9iaV9kaXJfc3RhdGUg
PSAiL3Bpbm11eEA3MDAwMDhkNC9jbGtyZXFfMF9iaV9kaXIiOwoJCWNsa3JlcV8xX2JpX2Rpcl9z
dGF0ZSA9ICIvcGlubXV4QDcwMDAwOGQ0L2Nsa3JlcV8xX2JpX2RpciI7CgkJY2xrcmVxXzBfaW5f
ZGlyX3N0YXRlID0gIi9waW5tdXhANzAwMDA4ZDQvY2xrcmVxXzBfaW5fZGlyIjsKCQljbGtyZXFf
MV9pbl9kaXJfc3RhdGUgPSAiL3Bpbm11eEA3MDAwMDhkNC9jbGtyZXFfMV9pbl9kaXIiOwoJCXNk
bW1jMV9zY2htaXR0X2VuYWJsZV9zdGF0ZSA9ICIvcGlubXV4QDcwMDAwOGQ0L3NkbW1jMV9zY2ht
aXR0X2VuYWJsZSI7CgkJc2RtbWMxX3NjaG1pdHRfZGlzYWJsZV9zdGF0ZSA9ICIvcGlubXV4QDcw
MDAwOGQ0L3NkbW1jMV9zY2htaXR0X2Rpc2FibGUiOwoJCXNkbW1jMV9jbGtfc2NobWl0dF9lbmFi
bGVfc3RhdGUgPSAiL3Bpbm11eEA3MDAwMDhkNC9zZG1tYzFfY2xrX3NjaG1pdHRfZW5hYmxlIjsK
CQlzZG1tYzFfY2xrX3NjaG1pdHRfZGlzYWJsZV9zdGF0ZSA9ICIvcGlubXV4QDcwMDAwOGQ0L3Nk
bW1jMV9jbGtfc2NobWl0dF9kaXNhYmxlIjsKCQlzZG1tYzFfZHJ2X2NvZGVfMV84ViA9ICIvcGlu
bXV4QDcwMDAwOGQ0L3NkbW1jMV9kcnZfY29kZSI7CgkJc2RtbWMxX2RlZmF1bHRfZHJ2X2NvZGVf
M18zViA9ICIvcGlubXV4QDcwMDAwOGQ0L3NkbW1jMV9kZWZhdWx0X2Rydl9jb2RlIjsKCQlzZG1t
YzNfc2NobWl0dF9lbmFibGVfc3RhdGUgPSAiL3Bpbm11eEA3MDAwMDhkNC9zZG1tYzNfc2NobWl0
dF9lbmFibGUiOwoJCXNkbW1jM19zY2htaXR0X2Rpc2FibGVfc3RhdGUgPSAiL3Bpbm11eEA3MDAw
MDhkNC9zZG1tYzNfc2NobWl0dF9kaXNhYmxlIjsKCQlzZG1tYzNfY2xrX3NjaG1pdHRfZW5hYmxl
X3N0YXRlID0gIi9waW5tdXhANzAwMDA4ZDQvc2RtbWMzX2Nsa19zY2htaXR0X2VuYWJsZSI7CgkJ
c2RtbWMzX2Nsa19zY2htaXR0X2Rpc2FibGVfc3RhdGUgPSAiL3Bpbm11eEA3MDAwMDhkNC9zZG1t
YzNfY2xrX3NjaG1pdHRfZGlzYWJsZSI7CgkJc2RtbWMzX2Rydl9jb2RlXzFfOFYgPSAiL3Bpbm11
eEA3MDAwMDhkNC9zZG1tYzNfZHJ2X2NvZGUiOwoJCXNkbW1jM19kZWZhdWx0X2Rydl9jb2RlXzNf
M1YgPSAiL3Bpbm11eEA3MDAwMDhkNC9zZG1tYzNfZGVmYXVsdF9kcnZfY29kZSI7CgkJZHZmc19w
d21fYWN0aXZlX3N0YXRlID0gIi9waW5tdXhANzAwMDA4ZDQvZHZmc19wd21fYWN0aXZlIjsKCQlk
dmZzX3B3bV9pbmFjdGl2ZV9zdGF0ZSA9ICIvcGlubXV4QDcwMDAwOGQ0L2R2ZnNfcHdtX2luYWN0
aXZlIjsKCQlwaW5tdXhfZGVmYXVsdCA9ICIvcGlubXV4QDcwMDAwOGQ0L2NvbW1vbiI7CgkJcGlu
bXV4X3VudXNlZF9sb3dwb3dlciA9ICIvcGlubXV4QDcwMDAwOGQ0L3VudXNlZF9sb3dwb3dlciI7
CgkJZHJpdmVfZGVmYXVsdCA9ICIvcGlubXV4QDcwMDAwOGQ0L2RyaXZlIjsKCQlncGlvID0gIi9n
cGlvQDYwMDBkMDAwIjsKCQllMjYxNF9hdWRpb19waW5zID0gIi9ncGlvQDYwMDBkMDAwL2UyNjE0
LXJ0NTY1OC1hdWRpbyI7CgkJc3VzcGVuZF9ncGlvID0gIi9ncGlvQDYwMDBkMDAwL3N5c3RlbS1z
dXNwZW5kLWdwaW8iOwoJCWdwaW9fZGVmYXVsdCA9ICIvZ3Bpb0A2MDAwZDAwMC9kZWZhdWx0IjsK
CQl4dXNiX21ib3ggPSAiL21haWxib3hANzAwOTgwMDAiOwoJCXh1c2JfcGFkY3RsID0gIi94dXNi
X3BhZGN0bEA3MDA5ZjAwMCI7CgkJdGVncmFfdXNiX2NkID0gIi91c2JfY2QiOwoJCXRlZ3JhX3Bh
ZGN0bF91cGh5ID0gIi9waW5jdHJsQDcwMDlmMDAwIjsKCQl0ZWdyYV9leHRfY2RwID0gIi9tYXgx
Njk4NC1jZHAiOwoJCXVhcnRhID0gIi9zZXJpYWxANzAwMDYwMDAiOwoJCXVhcnRiID0gIi9zZXJp
YWxANzAwMDYwNDAiOwoJCXVhcnRjID0gIi9zZXJpYWxANzAwMDYyMDAiOwoJCXVhcnRkID0gIi9z
ZXJpYWxANzAwMDYzMDAiOwoJCXRlZ3JhX3NvdW5kID0gIi9zb3VuZCI7CgkJaGRyNDBfc25kX2xp
bmtfaTJzID0gIi9zb3VuZC9udmlkaWEsZGFpLWxpbmstMSI7CgkJaTJzX2RhaV9saW5rMSA9ICIv
c291bmQvbnZpZGlhLGRhaS1saW5rLTEiOwoJCXRlZ3JhX3B3bSA9ICIvcHdtQDcwMDBhMDAwIjsK
CQlzcGkwID0gIi9zcGlANzAwMGQ0MDAiOwoJCXNwaTEgPSAiL3NwaUA3MDAwZDYwMCI7CgkJc3Bp
MiA9ICIvc3BpQDcwMDBkODAwIjsKCQlzcGkzID0gIi9zcGlANzAwMGRhMDAiOwoJCXFzcGk2ID0g
Ii9zcGlANzA0MTAwMDAiOwoJCWhvc3QxeCA9ICIvaG9zdDF4IjsKCQl2aV9iYXNlID0gIi9ob3N0
MXgvdmkiOwoJCXZpX3BvcnQwID0gIi9ob3N0MXgvdmkvcG9ydHMvcG9ydEAwIjsKCQlyYnBjdjJf
aW14MjE5X3ZpX2luMCA9ICIvaG9zdDF4L3ZpL3BvcnRzL3BvcnRAMC9lbmRwb2ludCI7CgkJdmlf
cG9ydDEgPSAiL2hvc3QxeC92aS9wb3J0cy9wb3J0QDEiOwoJCXJicGN2Ml9pbXgyMTlfdmlfaW4x
ID0gIi9ob3N0MXgvdmkvcG9ydHMvcG9ydEAxL2VuZHBvaW50IjsKCQloZWFkMCA9ICIvaG9zdDF4
L2RjQDU0MjAwMDAwIjsKCQloZWFkMSA9ICIvaG9zdDF4L2RjQDU0MjQwMDAwIjsKCQlkc2kgPSAi
L2hvc3QxeC9kc2kiOwoJCXNvcjAgPSAiL2hvc3QxeC9zb3IiOwoJCXNvcjBfaGRtaV9kaXNwbGF5
ID0gIi9ob3N0MXgvc29yL2hkbWktZGlzcGxheSI7CgkJc29yMF9kcF9kaXNwbGF5ID0gIi9ob3N0
MXgvc29yL2RwLWRpc3BsYXkiOwoJCXNvcjEgPSAiL2hvc3QxeC9zb3IxIjsKCQlzb3IxX2hkbWlf
ZGlzcGxheSA9ICIvaG9zdDF4L3NvcjEvaGRtaS1kaXNwbGF5IjsKCQlzb3IxX2RwX2Rpc3BsYXkg
PSAiL2hvc3QxeC9zb3IxL2RwLWRpc3BsYXkiOwoJCWRwYXV4MCA9ICIvaG9zdDF4L2RwYXV4IjsK
CQlkcGF1eDEgPSAiL2hvc3QxeC9kcGF1eDEiOwoJCWkyYzcgPSAiL2hvc3QxeC9pMmNANTQ2YzAw
MDAiOwoJCWlteDIxOV9zaW5nbGVfY2FtMCA9ICIvaG9zdDF4L2kyY0A1NDZjMDAwMC9yYnBjdjJf
aW14MjE5X2FAMTAiOwoJCXJicGN2Ml9pbXgyMTlfb3V0MCA9ICIvaG9zdDF4L2kyY0A1NDZjMDAw
MC9yYnBjdjJfaW14MjE5X2FAMTAvcG9ydHMvcG9ydEAwL2VuZHBvaW50IjsKCQlpbmEzMjIxeCA9
ICIvaG9zdDF4L2kyY0A1NDZjMDAwMC9pbmEzMjIxeEA0MCI7CgkJY3NpX2Jhc2UgPSAiL2hvc3Qx
eC9udmNzaSI7CgkJY3NpX2NoYW4wID0gIi9ob3N0MXgvbnZjc2kvY2hhbm5lbEAwIjsKCQljc2lf
Y2hhbjBfcG9ydDAgPSAiL2hvc3QxeC9udmNzaS9jaGFubmVsQDAvcG9ydHMvcG9ydEAwIjsKCQly
YnBjdjJfaW14MjE5X2NzaV9pbjAgPSAiL2hvc3QxeC9udmNzaS9jaGFubmVsQDAvcG9ydHMvcG9y
dEAwL2VuZHBvaW50QDAiOwoJCWNzaV9jaGFuMF9wb3J0MSA9ICIvaG9zdDF4L252Y3NpL2NoYW5u
ZWxAMC9wb3J0cy9wb3J0QDEiOwoJCXJicGN2Ml9pbXgyMTlfY3NpX291dDAgPSAiL2hvc3QxeC9u
dmNzaS9jaGFubmVsQDAvcG9ydHMvcG9ydEAxL2VuZHBvaW50QDEiOwoJCWNzaV9jaGFuMSA9ICIv
aG9zdDF4L252Y3NpL2NoYW5uZWxAMSI7CgkJY3NpX2NoYW4xX3BvcnQwID0gIi9ob3N0MXgvbnZj
c2kvY2hhbm5lbEAxL3BvcnRzL3BvcnRAMiI7CgkJcmJwY3YyX2lteDIxOV9jc2lfaW4xID0gIi9o
b3N0MXgvbnZjc2kvY2hhbm5lbEAxL3BvcnRzL3BvcnRAMi9lbmRwb2ludEAyIjsKCQljc2lfY2hh
bjFfcG9ydDEgPSAiL2hvc3QxeC9udmNzaS9jaGFubmVsQDEvcG9ydHMvcG9ydEAzIjsKCQlyYnBj
djJfaW14MjE5X2NzaV9vdXQxID0gIi9ob3N0MXgvbnZjc2kvY2hhbm5lbEAxL3BvcnRzL3BvcnRA
My9lbmRwb2ludEAzIjsKCQl0ZWdyYV9wbWMgPSAiL3BtY0A3MDAwZTQwMCI7CgkJcGV4X2lvX2Rw
ZF9kaXNhYmxlX3N0YXRlID0gIi9wbWNANzAwMGU0MDAvcGV4X2VuIjsKCQlwZXhfaW9fZHBkX2Vu
YWJsZV9zdGF0ZSA9ICIvcG1jQDcwMDBlNDAwL3BleF9kaXMiOwoJCWhkbWlfZHBkX2VuYWJsZSA9
ICIvcG1jQDcwMDBlNDAwL2hkbWktZHBkLWVuYWJsZSI7CgkJaGRtaV9kcGRfZGlzYWJsZSA9ICIv
cG1jQDcwMDBlNDAwL2hkbWktZHBkLWRpc2FibGUiOwoJCWRzaV9kcGRfZW5hYmxlID0gIi9wbWNA
NzAwMGU0MDAvZHNpLWRwZC1lbmFibGUiOwoJCWRzaV9kcGRfZGlzYWJsZSA9ICIvcG1jQDcwMDBl
NDAwL2RzaS1kcGQtZGlzYWJsZSI7CgkJZHNpYl9kcGRfZW5hYmxlID0gIi9wbWNANzAwMGU0MDAv
ZHNpYi1kcGQtZW5hYmxlIjsKCQlkc2liX2RwZF9kaXNhYmxlID0gIi9wbWNANzAwMGU0MDAvZHNp
Yi1kcGQtZGlzYWJsZSI7CgkJcGluY3RybF9pb3BhZF9kZWZhdWx0ID0gIi9wbWNANzAwMGU0MDAv
aW9wYWQtZGVmYXVsdHMiOwoJCXNkbW1jMV9lXzMzVl9lbmFibGUgPSAiL3BtY0A3MDAwZTQwMC9z
ZG1tYzFfZV8zM1ZfZW5hYmxlIjsKCQlzZG1tYzFfZV8zM1ZfZGlzYWJsZSA9ICIvcG1jQDcwMDBl
NDAwL3NkbW1jMV9lXzMzVl9kaXNhYmxlIjsKCQlzZG1tYzNfZV8zM1ZfZW5hYmxlID0gIi9wbWNA
NzAwMGU0MDAvc2RtbWMzX2VfMzNWX2VuYWJsZSI7CgkJc2RtbWMzX2VfMzNWX2Rpc2FibGUgPSAi
L3BtY0A3MDAwZTQwMC9zZG1tYzNfZV8zM1ZfZGlzYWJsZSI7CgkJc2UgPSAiL3NlQDcwMDEyMDAw
IjsKCQloZHI0MF9pMmMwID0gIi9pMmNANzAwMGMwMDAiOwoJCWkyYzEgPSAiL2kyY0A3MDAwYzAw
MCI7CgkJdGVncmFfbmN0NzIgPSAiL2kyY0A3MDAwYzAwMC90ZW1wLXNlbnNvckA0YyI7CgkJaGRy
NDBfaTJjMSA9ICIvaTJjQDcwMDBjNDAwIjsKCQlpMmMyID0gIi9pMmNANzAwMGM0MDAiOwoJCWUy
NjE0X2kyY19tdXggPSAiL2kyY0A3MDAwYzQwMC9pMmNtdXhANzAiOwoJCWUyNjE0X3J0NTY1OF9i
MDAgPSAiL2kyY0A3MDAwYzQwMC9pMmNtdXhANzAvaTJjQDMvcnQ1NjU5LjEyLTAwMWFAMWEiOwoJ
CWUyNjE0X2dwaW9faTJjXzFfMjAgPSAiL2kyY0A3MDAwYzQwMC9ncGlvQDIwIjsKCQllMjYxNF9p
Y20yMDYyOCA9ICIvaTJjQDcwMDBjNDAwL2ljbTIwNjI4QDY4IjsKCQllMjYxNF9hazg5NjMgPSAi
L2kyY0A3MDAwYzQwMC9hazg5NjNAMGQiOwoJCWUyNjE0X2NtMzIxODAgPSAiL2kyY0A3MDAwYzQw
MC9jbTMyMTgwQDQ4IjsKCQllMjYxNF9pcXMyNjMgPSAiL2kyY0A3MDAwYzQwMC9pcXMyNjNANDQi
OwoJCWUyNjE0X3J0NTY1OF9hMDAgPSAiL2kyY0A3MDAwYzQwMC9ydDU2NTkuMS0wMDFhQDFhIjsK
CQlpMmMzID0gIi9pMmNANzAwMGM1MDAiOwoJCWhkbWlfZGRjID0gIi9pMmNANzAwMGM3MDAiOwoJ
CWkyYzQgPSAiL2kyY0A3MDAwYzcwMCI7CgkJaTJjNSA9ICIvaTJjQDcwMDBkMDAwIjsKCQltYXg3
NzYyMCA9ICIvaTJjQDcwMDBkMDAwL21heDc3NjIwQDNjIjsKCQltYXg3NzYyMF9kZWZhdWx0ID0g
Ii9pMmNANzAwMGQwMDAvbWF4Nzc2MjBAM2MvcGlubXV4QDAiOwoJCXNwbWljX3dkdCA9ICIvaTJj
QDcwMDBkMDAwL21heDc3NjIwQDNjL3dhdGNoZG9nIjsKCQltYXg3NzYyMF9zZDAgPSAiL2kyY0A3
MDAwZDAwMC9tYXg3NzYyMEAzYy9yZWd1bGF0b3JzL3NkMCI7CgkJbWF4Nzc2MjBfc2QxID0gIi9p
MmNANzAwMGQwMDAvbWF4Nzc2MjBAM2MvcmVndWxhdG9ycy9zZDEiOwoJCW1heDc3NjIwX3NkMiA9
ICIvaTJjQDcwMDBkMDAwL21heDc3NjIwQDNjL3JlZ3VsYXRvcnMvc2QyIjsKCQltYXg3NzYyMF9z
ZDMgPSAiL2kyY0A3MDAwZDAwMC9tYXg3NzYyMEAzYy9yZWd1bGF0b3JzL3NkMyI7CgkJbWF4Nzc2
MjBfbGRvMCA9ICIvaTJjQDcwMDBkMDAwL21heDc3NjIwQDNjL3JlZ3VsYXRvcnMvbGRvMCI7CgkJ
bWF4Nzc2MjBfbGRvMSA9ICIvaTJjQDcwMDBkMDAwL21heDc3NjIwQDNjL3JlZ3VsYXRvcnMvbGRv
MSI7CgkJbWF4Nzc2MjBfbGRvMiA9ICIvaTJjQDcwMDBkMDAwL21heDc3NjIwQDNjL3JlZ3VsYXRv
cnMvbGRvMiI7CgkJbWF4Nzc2MjBfbGRvMyA9ICIvaTJjQDcwMDBkMDAwL21heDc3NjIwQDNjL3Jl
Z3VsYXRvcnMvbGRvMyI7CgkJbWF4Nzc2MjBfbGRvNCA9ICIvaTJjQDcwMDBkMDAwL21heDc3NjIw
QDNjL3JlZ3VsYXRvcnMvbGRvNCI7CgkJbWF4Nzc2MjBfbGRvNSA9ICIvaTJjQDcwMDBkMDAwL21h
eDc3NjIwQDNjL3JlZ3VsYXRvcnMvbGRvNSI7CgkJbWF4Nzc2MjBfbGRvNiA9ICIvaTJjQDcwMDBk
MDAwL21heDc3NjIwQDNjL3JlZ3VsYXRvcnMvbGRvNiI7CgkJbWF4Nzc2MjBfbGRvNyA9ICIvaTJj
QDcwMDBkMDAwL21heDc3NjIwQDNjL3JlZ3VsYXRvcnMvbGRvNyI7CgkJbWF4Nzc2MjBfbGRvOCA9
ICIvaTJjQDcwMDBkMDAwL21heDc3NjIwQDNjL3JlZ3VsYXRvcnMvbGRvOCI7CgkJaTJjNiA9ICIv
aTJjQDcwMDBkMTAwIjsKCQlzZG1tYzQgPSAiL3NkaGNpQDcwMGIwNjAwIjsKCQlzZGhjaTMgPSAi
L3NkaGNpQDcwMGIwNjAwIjsKCQlzZG1tYzMgPSAiL3NkaGNpQDcwMGIwNDAwIjsKCQlzZGhjaTIg
PSAiL3NkaGNpQDcwMGIwNDAwIjsKCQlzZG1tYzIgPSAiL3NkaGNpQDcwMGIwMjAwIjsKCQlzZGhj
aTEgPSAiL3NkaGNpQDcwMGIwMjAwIjsKCQlzZG1tYzEgPSAiL3NkaGNpQDcwMGIwMDAwIjsKCQlz
ZGhjaTAgPSAiL3NkaGNpQDcwMGIwMDAwIjsKCQl0ZWdyYV9tYyA9ICIvbWVtb3J5LWNvbnRyb2xs
ZXJANzAwMTkwMDAiOwoJCXRlZ3JhX3B3bV9kZmxsID0gIi9wd21ANzAxMTAwMDAiOwoJCXRlZ3Jh
X2Nsa19kZmxsID0gIi9jbG9ja0A3MDExMDAwMCI7CgkJc29jdGhlcm0gPSAiL3NvY3RoZXJtQDB4
NzAwRTIwMDAiOwoJCXRocm90dGxlX2hlYXZ5ID0gIi9zb2N0aGVybUAweDcwMEUyMDAwL3Rocm90
dGxlLWNmZ3MvaGVhdnkiOwoJCXRocm90dGxlX29jMSA9ICIvc29jdGhlcm1AMHg3MDBFMjAwMC90
aHJvdHRsZS1jZmdzL29jMSI7CgkJdGhyb3R0bGVfb2MzID0gIi9zb2N0aGVybUAweDcwMEUyMDAw
L3Rocm90dGxlLWNmZ3Mvb2MzIjsKCQl0ZWdyYV93ZHQgPSAiL3dhdGNoZG9nQDYwMDA1MTAwIjsK
CQl0ZWdyYV93YXRjaGRvZyA9ICIvd2F0Y2hkb2dANjAwMDUxMDAiOwoJCWlkX2dwaW9fZXh0Y29u
ID0gIi9leHRjb24vZXh0Y29uQDAiOwoJCXZidXNfaWRfZ3Bpb19leHRjb24gPSAiL2V4dGNvbi9l
eHRjb25AMSI7CgkJSVBJID0gIi9zbXAtY3VzdG9tLWlwaSI7CgkJdGVncmEyMTBfZW1jX2RyYW1f
Y2RldiA9ICIvZXh0ZXJuYWwtbWVtb3J5LWNvbnRyb2xsZXJANzAwMWIwMDAiOwoJCWR1bW15X2Nv
b2xfZGV2ID0gIi9kdW1teS1jb29sLWRldiI7CgkJYmF0dGVyeV9yZWcgPSAiL3JlZ3VsYXRvcnMv
cmVndWxhdG9yQDAiOwoJCWhkcjQwX3ZkZF81djAgPSAiL3JlZ3VsYXRvcnMvcmVndWxhdG9yQDEi
OwoJCXAzNDQ5X3ZkZF81djBfc3lzID0gIi9yZWd1bGF0b3JzL3JlZ3VsYXRvckAxIjsKCQloZHI0
MF92ZGRfM3YzID0gIi9yZWd1bGF0b3JzL3JlZ3VsYXRvckAyIjsKCQlwMzQ0OF92ZGRfM3YzX3N5
cyA9ICIvcmVndWxhdG9ycy9yZWd1bGF0b3JAMiI7CgkJcDM0NDhfdmRkXzN2M19zZCA9ICIvcmVn
dWxhdG9ycy9yZWd1bGF0b3JAMyI7CgkJcDM0NDhfYXZkZF9pb19lZHAgPSAiL3JlZ3VsYXRvcnMv
cmVndWxhdG9yQDQiOwoJCXAzNDQ5X3ZkZF9oZG1pID0gIi9yZWd1bGF0b3JzL3JlZ3VsYXRvckA1
IjsKCQlwMzQ0OV92ZGRfMXY4ID0gIi9yZWd1bGF0b3JzL3JlZ3VsYXRvckA2IjsKCQlwMzQ0OV92
ZGRfZmFuID0gIi9yZWd1bGF0b3JzL3JlZ3VsYXRvckA3IjsKCQlwMzQ0OV92ZGRfdXNiX3ZidXMg
PSAiL3JlZ3VsYXRvcnMvcmVndWxhdG9yQDgiOwoJCXAzNDQ5X3ZkZF91c2JfaHViX2VuID0gIi9y
ZWd1bGF0b3JzL3JlZ3VsYXRvckA5IjsKCQlwMzQ0OV92ZGRfdXNiX3ZidXMyID0gIi9yZWd1bGF0
b3JzL3JlZ3VsYXRvckAxMCI7CgkJZ3B1X3NjYWxpbmdfY2RldiA9ICIvZHZmc19yYWlscy92ZGQt
Z3B1LXNjYWxpbmctY2RldkA3IjsKCQlncHVfdm1heF9jZGV2ID0gIi9kdmZzX3JhaWxzL3ZkZC1n
cHUtdm1heC1jZGV2QDkiOwoJCXB3bV9mYW5fc2hhcmVkX2RhdGEgPSAiL3Bmc2QiOwoJCXRjcCA9
ICIvdGVncmEtY2FtZXJhLXBsYXRmb3JtIjsKCQljYW1fbW9kdWxlMCA9ICIvdGVncmEtY2FtZXJh
LXBsYXRmb3JtL21vZHVsZXMvbW9kdWxlMCI7CgkJY2FtX21vZHVsZTBfZHJpdmVybm9kZTAgPSAi
L3RlZ3JhLWNhbWVyYS1wbGF0Zm9ybS9tb2R1bGVzL21vZHVsZTAvZHJpdmVybm9kZTAiOwoJCWNh
bV9tb2R1bGUwX2RyaXZlcm5vZGUxID0gIi90ZWdyYS1jYW1lcmEtcGxhdGZvcm0vbW9kdWxlcy9t
b2R1bGUwL2RyaXZlcm5vZGUxIjsKCQljYW1fbW9kdWxlMSA9ICIvdGVncmEtY2FtZXJhLXBsYXRm
b3JtL21vZHVsZXMvbW9kdWxlMSI7CgkJY2FtX21vZHVsZTFfZHJpdmVybm9kZTAgPSAiL3RlZ3Jh
LWNhbWVyYS1wbGF0Zm9ybS9tb2R1bGVzL21vZHVsZTEvZHJpdmVybm9kZTAiOwoJCWNhbV9tb2R1
bGUxX2RyaXZlcm5vZGUxID0gIi90ZWdyYS1jYW1lcmEtcGxhdGZvcm0vbW9kdWxlcy9tb2R1bGUx
L2RyaXZlcm5vZGUxIjsKCQlpMmNfMCA9ICIvY2FtX2kyY211eC9pMmNAMCI7CgkJaW14MjE5X2Nh
bTAgPSAiL2NhbV9pMmNtdXgvaTJjQDAvcmJwY3YyX2lteDIxOV9hQDEwIjsKCQlyYnBjdjJfaW14
MjE5X2R1YWxfb3V0MCA9ICIvY2FtX2kyY211eC9pMmNAMC9yYnBjdjJfaW14MjE5X2FAMTAvcG9y
dHMvcG9ydEAwL2VuZHBvaW50IjsKCQlpMmNfMSA9ICIvY2FtX2kyY211eC9pMmNAMSI7CgkJaW14
MjE5X2NhbTEgPSAiL2NhbV9pMmNtdXgvaTJjQDEvcmJwY3YyX2lteDIxOV9lQDEwIjsKCQlyYnBj
djJfaW14MjE5X291dDEgPSAiL2NhbV9pMmNtdXgvaTJjQDEvcmJwY3YyX2lteDIxOV9lQDEwL3Bv
cnRzL3BvcnRAMC9lbmRwb2ludCI7CgkJdGhlcm1hbF9mYW5fZXN0X3NoYXJlZF9kYXRhID0gIi90
ZmVzZCI7CgkJc3BkaWZfZGl0MCA9ICIvc3BkaWYtZGl0LjBAMCI7CgkJc3BkaWZfZGl0MSA9ICIv
c3BkaWYtZGl0LjFAMSI7CgkJc3BkaWZfZGl0MiA9ICIvc3BkaWYtZGl0LjJAMiI7CgkJc3BkaWZf
ZGl0MyA9ICIvc3BkaWYtZGl0LjNAMyI7CgkJc3BkaWZfZGl0NCA9ICIvc3BkaWYtZGl0LjRANCI7
CgkJc3BkaWZfZGl0NSA9ICIvc3BkaWYtZGl0LjVANSI7CgkJc3BkaWZfZGl0NiA9ICIvc3BkaWYt
ZGl0LjZANiI7CgkJc3BkaWZfZGl0NyA9ICIvc3BkaWYtZGl0LjdANyI7CgkJZTI2MTRfZ3BzX3dh
a2UgPSAiL2dwc193YWtlIjsKCQljcHVfb3ZyX3JlZyA9ICIvcHdtX3JlZ3VsYXRvcnMvcHdtLXJl
Z3VsYXRvckAwIjsKCQlpMmNfZGZsbCA9ICIvZGZsbC1tYXg3NzYyMUA3MDExMDAwMC9kZmxsLW1h
eDc3NjIxLWludGVncmF0aW9uIjsKCQlkZmxsX21heDc3NjIxX3Bhcm1zID0gIi9kZmxsLW1heDc3
NjIxQDcwMTEwMDAwL2RmbGwtbWF4Nzc2MjEtYm9hcmQtcGFyYW1zIjsKCQlkZmxsX2NhcCA9ICIv
ZGZsbC1jZGV2LWNhcCI7CgkJZGZsbF9mbG9vciA9ICIvZGZsbC1jZGV2LWZsb29yIjsKCQl0ZWdy
YV91ZHJtID0gIi90ZWdyYV91ZHJtIjsKCQlzb2Z0X3dkdCA9ICIvc29mdF93YXRjaGRvZyI7Cgl9
Owp9Owo=
--=_48a69f2ecb1c59fcda7c31583f854280
Content-Transfer-Encoding: base64
Content-Type: text/plain;
 name="jetson-nano-log-with fdt-fix.txt"
Content-Disposition: attachment;
 filename="jetson-nano-log-with fdt-fix.txt";
 size=659573

DQojIyBGbGF0dGVuZWQgRGV2aWNlIFRyZWUgYmxvYiBhdCBlMzUwMDAwMA0KICAgQm9vdGluZyB1
c2luZyB0aGUgZmR0IGJsb2IgYXQgMHhlMzUwMDAwMA0KICAgcmVzZXJ2aW5nIGZkdCBtZW1vcnkg
cmVnaW9uOiBhZGRyPTgwMDAwMDAwIHNpemU9MjAwMDANCiAgIHJlc2VydmluZyBmZHQgbWVtb3J5
IHJlZ2lvbjogYWRkcj1lMzUwMDAwMCBzaXplPTM1MDAwDQogICBMb2FkaW5nIERldmljZSBUcmVl
IHRvIDAwMDAwMDAwZmM3ZjgwMDAsIGVuZCAwMDAwMDAwMGZjODJmZmZmIC4uLiBPSw0KDQpTdGFy
dGluZyBrZXJuZWwgLi4uDQoNCi0gVUFSVCBlbmFibGVkIC0NCi0gQm9vdCBDUFUgYm9vdGluZyAt
DQotIEN1cnJlbnQgRUwgMDAwMDAwMDggLQ0KLSBJbml0aWFsaXplIENQVSAtDQotIFR1cm5pbmcg
b24gcGFnaW5nIC0NCi0gWmVybyBCU1MgLQ0KLSBSZWFkeSAtDQooWEVOKSBDaGVja2luZyBmb3Ig
aW5pdHJkIGluIC9jaG9zZW4NCihYRU4pIGxpbnV4LGluaXRyZCBsaW1pdHMgaW52YWxpZDogMDAw
MDAwMDA4NDEwMDAwMCA+PSAwMDAwMDAwMDg0MTAwMDAwDQooWEVOKSBSQU06IDAwMDAwMDAwODAw
MDAwMDAgLSAwMDAwMDAwMGZlZGZmZmZmDQooWEVOKSBSQU06IDAwMDAwMDAxMDAwMDAwMDAgLSAw
MDAwMDAwMTdmMWZmZmZmDQooWEVOKSANCihYRU4pIE1PRFVMRVswXTogMDAwMDAwMDBlMDAwMDAw
MCAtIDAwMDAwMDAwZTAxNGIwYzggWGVuICAgICAgICAgDQooWEVOKSBNT0RVTEVbMV06IDAwMDAw
MDAwZmM3ZjgwMDAgLSAwMDAwMDAwMGZjODJkMDAwIERldmljZSBUcmVlIA0KKFhFTikgTU9EVUxF
WzJdOiAwMDAwMDAwMGUxMDAwMDAwIC0gMDAwMDAwMDBlMzFiYzgwOCBLZXJuZWwgICAgICANCihY
RU4pICBSRVNWRFswXTogMDAwMDAwMDA4MDAwMDAwMCAtIDAwMDAwMDAwODAwMjAwMDANCihYRU4p
ICBSRVNWRFsxXTogMDAwMDAwMDBlMzUwMDAwMCAtIDAwMDAwMDAwZTM1MzUwMDANCihYRU4pICBS
RVNWRFsyXTogMDAwMDAwMDBmYzdmODAwMCAtIDAwMDAwMDAwZmM4MmQwMDANCihYRU4pICBSRVNW
RFszXTogMDAwMDAwMDA0MDAwMTAwMCAtIDAwMDAwMDAwNDAwM2ZmZmYNCihYRU4pICBSRVNWRFs0
XTogMDAwMDAwMDBiMDAwMDAwMCAtIDAwMDAwMDAwYjAxZmZmZmYNCihYRU4pIA0KKFhFTikgQ01E
TElORVswMDAwMDAwMGUxMDAwMDAwXTpjaG9zZW4gY29uc29sZT1odmMwIGVhcmx5Y29uPXhlbmJv
b3Qgcm9vdGZzdHlwZT1leHQ0IHJ3IHJvb3R3YWl0IHJvb3Q9L2Rldi9tbWNibGswcDEgcmRpbml0
PS9zYmluL2luaXQgY2xrX2lnbm9yZV91bnVzZWQNCihYRU4pIA0KKFhFTikgQ29tbWFuZCBsaW5l
OiBjb25zb2xlPWR0dWFydCBzeW5jX2NvbnNvbGUgZG9tMF9tZW09MTI4TSBsb2dsdmw9ZGVidWcg
Z3Vlc3RfbG9nbHZsPWRlYnVnIGNvbnNvbGVfdG9fcmluZw0KKFhFTikgRG9tYWluIGhlYXAgaW5p
dGlhbGlzZWQNCihYRU4pIEJvb3RpbmcgdXNpbmcgRGV2aWNlIFRyZWUNCihYRU4pICAtPiB1bmZs
YXR0ZW5fZGV2aWNlX3RyZWUoKQ0KKFhFTikgVW5mbGF0dGVuaW5nIGRldmljZSB0cmVlOg0KKFhF
TikgbWFnaWM6IDB4ZDAwZGZlZWQNCihYRU4pIHNpemU6IDB4MDM1MDAwDQooWEVOKSB2ZXJzaW9u
OiAweDAwMDAxMQ0KKFhFTikgICBzaXplIGlzIDB4NmViOTAgYWxsb2NhdGluZy4uLg0KKFhFTikg
ICB1bmZsYXR0ZW5pbmcgODAwMGZmMTAwMDAwLi4uDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciAg
LT4gDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciB0aGVybWFsLXpvbmVzIC0+IHRoZXJtYWwtem9u
ZXMNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIEFPLXRoZXJtIC0+IEFPLXRoZXJtDQooWEVOKSBm
aXhlZCB1cCBuYW1lIGZvciB0cmlwcyAtPiB0cmlwcw0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3Ig
dHJpcF9zaHV0ZG93biAtPiB0cmlwX3NodXRkb3duDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBn
cHUtc2NhbGluZzAgLT4gZ3B1LXNjYWxpbmcwDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBncHUt
c2NhbGluZzEgLT4gZ3B1LXNjYWxpbmcxDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBncHUtc2Nh
bGluZzIgLT4gZ3B1LXNjYWxpbmcyDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBncHUtc2NhbGlu
ZzMgLT4gZ3B1LXNjYWxpbmczDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBncHUtc2NhbGluZzQg
LT4gZ3B1LXNjYWxpbmc0DQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBncHUtc2NhbGluZzUgLT4g
Z3B1LXNjYWxpbmc1DQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBncHUtdm1heDEgLT4gZ3B1LXZt
YXgxDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBjb3JlX2R2ZnNfZmxvb3JfdHJpcDAgLT4gY29y
ZV9kdmZzX2Zsb29yX3RyaXAwDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBjb3JlX2R2ZnNfY2Fw
X3RyaXAwIC0+IGNvcmVfZHZmc19jYXBfdHJpcDANCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIGRm
bGwtZmxvb3ItdHJpcDAgLT4gZGZsbC1mbG9vci10cmlwMA0KKFhFTikgZml4ZWQgdXAgbmFtZSBm
b3IgdGhlcm1hbC16b25lLXBhcmFtcyAtPiB0aGVybWFsLXpvbmUtcGFyYW1zDQooWEVOKSBmaXhl
ZCB1cCBuYW1lIGZvciBjb29saW5nLW1hcHMgLT4gY29vbGluZy1tYXBzDQooWEVOKSBmaXhlZCB1
cCBuYW1lIGZvciBncHUtc2NhbGluZy1tYXAxIC0+IGdwdS1zY2FsaW5nLW1hcDENCihYRU4pIGZp
eGVkIHVwIG5hbWUgZm9yIGdwdS1zY2FsaW5nLW1hcDIgLT4gZ3B1LXNjYWxpbmctbWFwMg0KKFhF
TikgZml4ZWQgdXAgbmFtZSBmb3IgZ3B1X3NjYWxpbmdfbWFwMyAtPiBncHVfc2NhbGluZ19tYXAz
DQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBncHUtc2NhbGluZy1tYXA0IC0+IGdwdS1zY2FsaW5n
LW1hcDQNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIGdwdS1zY2FsaW5nLW1hcDUgLT4gZ3B1LXNj
YWxpbmctbWFwNQ0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgZ3B1LXZtYXgtbWFwMSAtPiBncHUt
dm1heC1tYXAxDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBjb3JlX2R2ZnNfZmxvb3JfbWFwMCAt
PiBjb3JlX2R2ZnNfZmxvb3JfbWFwMA0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgY29yZV9kdmZz
X2NhcF9tYXAwIC0+IGNvcmVfZHZmc19jYXBfbWFwMA0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3Ig
ZGZsbC1mbG9vci1tYXAwIC0+IGRmbGwtZmxvb3ItbWFwMA0KKFhFTikgZml4ZWQgdXAgbmFtZSBm
b3IgQ1BVLXRoZXJtIC0+IENQVS10aGVybQ0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgdGhlcm1h
bC16b25lLXBhcmFtcyAtPiB0aGVybWFsLXpvbmUtcGFyYW1zDQooWEVOKSBmaXhlZCB1cCBuYW1l
IGZvciB0cmlwcyAtPiB0cmlwcw0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgZGZsbC1jYXAtdHJp
cDAgLT4gZGZsbC1jYXAtdHJpcDANCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIGRmbGwtY2FwLXRy
aXAxIC0+IGRmbGwtY2FwLXRyaXAxDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBjcHVfY3JpdGlj
YWwgLT4gY3B1X2NyaXRpY2FsDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBjcHVfaGVhdnkgLT4g
Y3B1X2hlYXZ5DQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBjcHVfdGhyb3R0bGUgLT4gY3B1X3Ro
cm90dGxlDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBjb29saW5nLW1hcHMgLT4gY29vbGluZy1t
YXBzDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBtYXAxIC0+IG1hcDENCihYRU4pIGZpeGVkIHVw
IG5hbWUgZm9yIG1hcDIgLT4gbWFwMg0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgZGZsbC1jYXAt
bWFwMCAtPiBkZmxsLWNhcC1tYXAwDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBkZmxsLWNhcC1t
YXAxIC0+IGRmbGwtY2FwLW1hcDENCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIEdQVS10aGVybSAt
PiBHUFUtdGhlcm0NCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHRoZXJtYWwtem9uZS1wYXJhbXMg
LT4gdGhlcm1hbC16b25lLXBhcmFtcw0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgdHJpcHMgLT4g
dHJpcHMNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIGdwdV9jcml0aWNhbCAtPiBncHVfY3JpdGlj
YWwNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIGdwdV9oZWF2eSAtPiBncHVfaGVhdnkNCihYRU4p
IGZpeGVkIHVwIG5hbWUgZm9yIGdwdV90aHJvdHRsZSAtPiBncHVfdGhyb3R0bGUNCihYRU4pIGZp
eGVkIHVwIG5hbWUgZm9yIGNvb2xpbmctbWFwcyAtPiBjb29saW5nLW1hcHMNCihYRU4pIGZpeGVk
IHVwIG5hbWUgZm9yIG1hcDEgLT4gbWFwMQ0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgbWFwMiAt
PiBtYXAyDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBQTEwtdGhlcm0gLT4gUExMLXRoZXJtDQoo
WEVOKSBmaXhlZCB1cCBuYW1lIGZvciB0aGVybWFsLXpvbmUtcGFyYW1zIC0+IHRoZXJtYWwtem9u
ZS1wYXJhbXMNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHRyaXBzIC0+IHRyaXBzDQooWEVOKSBm
aXhlZCB1cCBuYW1lIGZvciBkcmFtLXRocm90dGxlIC0+IGRyYW0tdGhyb3R0bGUNCihYRU4pIGZp
eGVkIHVwIG5hbWUgZm9yIGNvb2xpbmctbWFwcyAtPiBjb29saW5nLW1hcHMNCihYRU4pIGZpeGVk
IHVwIG5hbWUgZm9yIG1hcC10ZWdyYS1kcmFtIC0+IG1hcC10ZWdyYS1kcmFtDQooWEVOKSBmaXhl
ZCB1cCBuYW1lIGZvciBQTUlDLURpZSAtPiBQTUlDLURpZQ0KKFhFTikgZml4ZWQgdXAgbmFtZSBm
b3IgdHJpcHMgLT4gdHJpcHMNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIGhvdC1kaWUgLT4gaG90
LWRpZQ0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgY29vbGluZy1tYXBzIC0+IGNvb2xpbmctbWFw
cw0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgbWFwMCAtPiBtYXAwDQooWEVOKSBmaXhlZCB1cCBu
YW1lIGZvciBjb3JlX2R2ZnNfY2Rldl9mbG9vciAtPiBjb3JlX2R2ZnNfY2Rldl9mbG9vcg0KKFhF
TikgZml4ZWQgdXAgbmFtZSBmb3IgY29yZV9kdmZzX2NkZXZfY2FwIC0+IGNvcmVfZHZmc19jZGV2
X2NhcA0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgcG93ZXItZG9tYWluIC0+IHBvd2VyLWRvbWFp
bg0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgaG9zdDF4LXBkIC0+IGhvc3QxeC1wZA0KKFhFTikg
Zml4ZWQgdXAgbmFtZSBmb3IgYXBlLXBkIC0+IGFwZS1wZA0KKFhFTikgZml4ZWQgdXAgbmFtZSBm
b3IgYWRzcC1wZCAtPiBhZHNwLXBkDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciB0c2VjLXBkIC0+
IHRzZWMtcGQNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIG52ZGVjLXBkIC0+IG52ZGVjLXBkDQoo
WEVOKSBmaXhlZCB1cCBuYW1lIGZvciB2ZTItcGQgLT4gdmUyLXBkDQooWEVOKSBmaXhlZCB1cCBu
YW1lIGZvciB2aWMwMy1wZCAtPiB2aWMwMy1wZA0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgbXNl
bmMtcGQgLT4gbXNlbmMtcGQNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIG52anBnLXBkIC0+IG52
anBnLXBkDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBwY2llLXBkIC0+IHBjaWUtcGQNCihYRU4p
IGZpeGVkIHVwIG5hbWUgZm9yIHZlLXBkIC0+IHZlLXBkDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZv
ciBzYXRhLXBkIC0+IHNhdGEtcGQNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHNvci1wZCAtPiBz
b3ItcGQNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIGRpc2EtcGQgLT4gZGlzYS1wZA0KKFhFTikg
Zml4ZWQgdXAgbmFtZSBmb3IgZGlzYi1wZCAtPiBkaXNiLXBkDQooWEVOKSBmaXhlZCB1cCBuYW1l
IGZvciB4dXNiYS1wZCAtPiB4dXNiYS1wZA0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgeHVzYmIt
cGQgLT4geHVzYmItcGQNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHh1c2JjLXBkIC0+IHh1c2Jj
LXBkDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBhY3Rtb25ANjAwMGM4MDAgLT4gYWN0bW9uDQoo
WEVOKSBmaXhlZCB1cCBuYW1lIGZvciBtY19hbGwgLT4gbWNfYWxsDQooWEVOKSBmaXhlZCB1cCBu
YW1lIGZvciBhbGlhc2VzIC0+IGFsaWFzZXMNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIGNwdXMg
LT4gY3B1cw0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgY3B1QDAgLT4gY3B1DQooWEVOKSBmaXhl
ZCB1cCBuYW1lIGZvciBjcHVAMSAtPiBjcHUNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIGNwdUAy
IC0+IGNwdQ0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgY3B1QDMgLT4gY3B1DQooWEVOKSBmaXhl
ZCB1cCBuYW1lIGZvciBpZGxlLXN0YXRlcyAtPiBpZGxlLXN0YXRlcw0KKFhFTikgZml4ZWQgdXAg
bmFtZSBmb3IgYzcgLT4gYzcNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIGNjNiAtPiBjYzYNCihY
RU4pIGZpeGVkIHVwIG5hbWUgZm9yIGwyLWNhY2hlIC0+IGwyLWNhY2hlDQooWEVOKSBmaXhlZCB1
cCBuYW1lIGZvciBwc2NpIC0+IHBzY2kNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHRsayAtPiB0
bGsNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIGxvZyAtPiBsb2cNCihYRU4pIGZpeGVkIHVwIG5h
bWUgZm9yIGFybS1wbXUgLT4gYXJtLXBtdQ0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgY2xvY2sg
LT4gY2xvY2sNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIGJ3bWdyIC0+IGJ3bWdyDQooWEVOKSBm
aXhlZCB1cCBuYW1lIGZvciByZXNlcnZlZC1tZW1vcnkgLT4gcmVzZXJ2ZWQtbWVtb3J5DQooWEVO
KSBmaXhlZCB1cCBuYW1lIGZvciBpcmFtLWNhcnZlb3V0IC0+IGlyYW0tY2FydmVvdXQNCihYRU4p
IGZpeGVkIHVwIG5hbWUgZm9yIHJhbW9vcHNfY2FydmVvdXQgLT4gcmFtb29wc19jYXJ2ZW91dA0K
KFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgdnByLWNhcnZlb3V0IC0+IHZwci1jYXJ2ZW91dA0KKFhF
TikgZml4ZWQgdXAgbmFtZSBmb3IgdGVncmEtY2FydmVvdXRzIC0+IHRlZ3JhLWNhcnZlb3V0cw0K
KFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgaW9tbXUgLT4gaW9tbXUNCihYRU4pIGZpeGVkIHVwIG5h
bWUgZm9yIGFkZHJlc3Mtc3BhY2UtcHJvcCAtPiBhZGRyZXNzLXNwYWNlLXByb3ANCihYRU4pIGZp
eGVkIHVwIG5hbWUgZm9yIGNvbW1vbiAtPiBjb21tb24NCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9y
IHBwY3MgLT4gcHBjcw0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgZGMgLT4gZGMNCihYRU4pIGZp
eGVkIHVwIG5hbWUgZm9yIGdwdSAtPiBncHUNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIGFwZSAt
PiBhcGUNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHNtbXVfdGVzdCAtPiBzbW11X3Rlc3QNCihY
RU4pIGZpeGVkIHVwIG5hbWUgZm9yIGRtYV90ZXN0IC0+IGRtYV90ZXN0DQooWEVOKSBmaXhlZCB1
cCBuYW1lIGZvciBicG1wIC0+IGJwbXANCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIG1jIC0+IG1j
DQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBpbnRlcnJ1cHQtY29udHJvbGxlciAtPiBpbnRlcnJ1
cHQtY29udHJvbGxlcg0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgaW50ZXJydXB0LWNvbnRyb2xs
ZXJANjAwMDQwMDAgLT4gaW50ZXJydXB0LWNvbnRyb2xsZXINCihYRU4pIGZpeGVkIHVwIG5hbWUg
Zm9yIGZsb3ctY29udHJvbGxlckA2MDAwNzAwMCAtPiBmbG93LWNvbnRyb2xsZXINCihYRU4pIGZp
eGVkIHVwIG5hbWUgZm9yIGFoYkA2MDAwYzAwMCAtPiBhaGINCihYRU4pIGZpeGVkIHVwIG5hbWUg
Zm9yIGFjb25uZWN0QDcwMmMwMDAwIC0+IGFjb25uZWN0DQooWEVOKSBmaXhlZCB1cCBuYW1lIGZv
ciBhZ2ljQDcwMmY5MDAwIC0+IGFnaWMNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIGFkc3AgLT4g
YWRzcA0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgYWRtYUA3MDJlMjAwMCAtPiBhZG1hDQooWEVO
KSBmaXhlZCB1cCBuYW1lIGZvciBhaHViIC0+IGFodWINCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9y
IGFkbWFpZkAweDcwMmQwMDAwIC0+IGFkbWFpZg0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3Igc2Zj
QDcwMmQyMDAwIC0+IHNmYw0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3Igc2ZjQDcwMmQyMjAwIC0+
IHNmYw0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3Igc2ZjQDcwMmQyNDAwIC0+IHNmYw0KKFhFTikg
Zml4ZWQgdXAgbmFtZSBmb3Igc2ZjQDcwMmQyNjAwIC0+IHNmYw0KKFhFTikgZml4ZWQgdXAgbmFt
ZSBmb3Igc3BrcHJvdEA3MDJkOGMwMCAtPiBzcGtwcm90DQooWEVOKSBmaXhlZCB1cCBuYW1lIGZv
ciBhbWl4ZXJANzAyZGJiMDAgLT4gYW1peGVyDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBpMnNA
NzAyZDEwMDAgLT4gaTJzDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBpMnNANzAyZDExMDAgLT4g
aTJzDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBpMnNANzAyZDEyMDAgLT4gaTJzDQooWEVOKSBm
aXhlZCB1cCBuYW1lIGZvciBpMnNANzAyZDEzMDAgLT4gaTJzDQooWEVOKSBmaXhlZCB1cCBuYW1l
IGZvciBpMnNANzAyZDE0MDAgLT4gaTJzDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBhbXhANzAy
ZDMwMDAgLT4gYW14DQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBhbXhANzAyZDMxMDAgLT4gYW14
DQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBhZHhANzAyZDM4MDAgLT4gYWR4DQooWEVOKSBmaXhl
ZCB1cCBuYW1lIGZvciBhZHhANzAyZDM5MDAgLT4gYWR4DQooWEVOKSBmaXhlZCB1cCBuYW1lIGZv
ciBkbWljQDcwMmQ0MDAwIC0+IGRtaWMNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIGRtaWNANzAy
ZDQxMDAgLT4gZG1pYw0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgZG1pY0A3MDJkNDIwMCAtPiBk
bWljDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBhZmNANzAyZDcwMDAgLT4gYWZjDQooWEVOKSBm
aXhlZCB1cCBuYW1lIGZvciBhZmNANzAyZDcxMDAgLT4gYWZjDQooWEVOKSBmaXhlZCB1cCBuYW1l
IGZvciBhZmNANzAyZDcyMDAgLT4gYWZjDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBhZmNANzAy
ZDczMDAgLT4gYWZjDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBhZmNANzAyZDc0MDAgLT4gYWZj
DQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBhZmNANzAyZDc1MDAgLT4gYWZjDQooWEVOKSBmaXhl
ZCB1cCBuYW1lIGZvciBtdmNANzAyZGEwMDAgLT4gbXZjDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZv
ciBtdmNANzAyZGEyMDAgLT4gbXZjDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBpcWNANzAyZGUw
MDAgLT4gaXFjDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBpcWNANzAyZGUyMDAgLT4gaXFjDQoo
WEVOKSBmaXhlZCB1cCBuYW1lIGZvciBvcGVANzAyZDgwMDAgLT4gb3BlDQooWEVOKSBmaXhlZCB1
cCBuYW1lIGZvciBwZXFANzAyZDgxMDAgLT4gcGVxDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBt
YmRyY0A3MDJkODIwMCAtPiBtYmRyYw0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3Igb3BlQDcwMmQ4
NDAwIC0+IG9wZQ0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgcGVxQDcwMmQ4NTAwIC0+IHBlcQ0K
KFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgbWJkcmNANzAyZDg2MDAgLT4gbWJkcmMNCihYRU4pIGZp
eGVkIHVwIG5hbWUgZm9yIG12Y0AweDcwMmRhMjAwIC0+IG12Yw0KKFhFTikgZml4ZWQgdXAgbmFt
ZSBmb3IgYWRzcF9hdWRpbyAtPiBhZHNwX2F1ZGlvDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciB0
aW1lciAtPiB0aW1lcg0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgdGltZXJANjAwMDUwMDAgLT4g
dGltZXINCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHJ0YyAtPiBydGMNCihYRU4pIGZpeGVkIHVw
IG5hbWUgZm9yIGRtYUA2MDAyMDAwMCAtPiBkbWENCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHBp
bm11eEA3MDAwMDhkNCAtPiBwaW5tdXgNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIGNsa3JlcV8w
X2JpX2RpciAtPiBjbGtyZXFfMF9iaV9kaXINCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIGNsa3Jl
cTAgLT4gY2xrcmVxMA0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgY2xrcmVxXzFfYmlfZGlyIC0+
IGNsa3JlcV8xX2JpX2Rpcg0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgY2xrcmVxMSAtPiBjbGty
ZXExDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBjbGtyZXFfMF9pbl9kaXIgLT4gY2xrcmVxXzBf
aW5fZGlyDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBjbGtyZXEwIC0+IGNsa3JlcTANCihYRU4p
IGZpeGVkIHVwIG5hbWUgZm9yIGNsa3JlcV8xX2luX2RpciAtPiBjbGtyZXFfMV9pbl9kaXINCihY
RU4pIGZpeGVkIHVwIG5hbWUgZm9yIGNsa3JlcTEgLT4gY2xrcmVxMQ0KKFhFTikgZml4ZWQgdXAg
bmFtZSBmb3IgcHJvZC1zZXR0aW5ncyAtPiBwcm9kLXNldHRpbmdzDQooWEVOKSBmaXhlZCB1cCBu
YW1lIGZvciBwcm9kIC0+IHByb2QNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIGkyczJfcHJvZCAt
PiBpMnMyX3Byb2QNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHNwaTFfcHJvZCAtPiBzcGkxX3By
b2QNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHNwaTJfcHJvZCAtPiBzcGkyX3Byb2QNCihYRU4p
IGZpeGVkIHVwIG5hbWUgZm9yIHNwaTNfcHJvZCAtPiBzcGkzX3Byb2QNCihYRU4pIGZpeGVkIHVw
IG5hbWUgZm9yIHNwaTRfcHJvZCAtPiBzcGk0X3Byb2QNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9y
IGkyYzBfcHJvZCAtPiBpMmMwX3Byb2QNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIGkyYzFfcHJv
ZCAtPiBpMmMxX3Byb2QNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIGkyYzJfcHJvZCAtPiBpMmMy
X3Byb2QNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIGkyYzRfcHJvZCAtPiBpMmM0X3Byb2QNCihY
RU4pIGZpeGVkIHVwIG5hbWUgZm9yIGkyYzBfaHNfcHJvZCAtPiBpMmMwX2hzX3Byb2QNCihYRU4p
IGZpeGVkIHVwIG5hbWUgZm9yIGkyYzFfaHNfcHJvZCAtPiBpMmMxX2hzX3Byb2QNCihYRU4pIGZp
eGVkIHVwIG5hbWUgZm9yIGkyYzJfaHNfcHJvZCAtPiBpMmMyX2hzX3Byb2QNCihYRU4pIGZpeGVk
IHVwIG5hbWUgZm9yIGkyYzRfaHNfcHJvZCAtPiBpMmM0X2hzX3Byb2QNCihYRU4pIGZpeGVkIHVw
IG5hbWUgZm9yIHNkbW1jMV9zY2htaXR0X2VuYWJsZSAtPiBzZG1tYzFfc2NobWl0dF9lbmFibGUN
CihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHNkbW1jMSAtPiBzZG1tYzENCihYRU4pIGZpeGVkIHVw
IG5hbWUgZm9yIHNkbW1jMV9zY2htaXR0X2Rpc2FibGUgLT4gc2RtbWMxX3NjaG1pdHRfZGlzYWJs
ZQ0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3Igc2RtbWMxIC0+IHNkbW1jMQ0KKFhFTikgZml4ZWQg
dXAgbmFtZSBmb3Igc2RtbWMxX2Nsa19zY2htaXR0X2VuYWJsZSAtPiBzZG1tYzFfY2xrX3NjaG1p
dHRfZW5hYmxlDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBzZG1tYzEgLT4gc2RtbWMxDQooWEVO
KSBmaXhlZCB1cCBuYW1lIGZvciBzZG1tYzFfY2xrX3NjaG1pdHRfZGlzYWJsZSAtPiBzZG1tYzFf
Y2xrX3NjaG1pdHRfZGlzYWJsZQ0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3Igc2RtbWMxIC0+IHNk
bW1jMQ0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3Igc2RtbWMxX2Rydl9jb2RlIC0+IHNkbW1jMV9k
cnZfY29kZQ0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3Igc2RtbWMxIC0+IHNkbW1jMQ0KKFhFTikg
Zml4ZWQgdXAgbmFtZSBmb3Igc2RtbWMxX2RlZmF1bHRfZHJ2X2NvZGUgLT4gc2RtbWMxX2RlZmF1
bHRfZHJ2X2NvZGUNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHNkbW1jMSAtPiBzZG1tYzENCihY
RU4pIGZpeGVkIHVwIG5hbWUgZm9yIHNkbW1jM19zY2htaXR0X2VuYWJsZSAtPiBzZG1tYzNfc2No
bWl0dF9lbmFibGUNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHNkbW1jMyAtPiBzZG1tYzMNCihY
RU4pIGZpeGVkIHVwIG5hbWUgZm9yIHNkbW1jM19zY2htaXR0X2Rpc2FibGUgLT4gc2RtbWMzX3Nj
aG1pdHRfZGlzYWJsZQ0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3Igc2RtbWMzIC0+IHNkbW1jMw0K
KFhFTikgZml4ZWQgdXAgbmFtZSBmb3Igc2RtbWMzX2Nsa19zY2htaXR0X2VuYWJsZSAtPiBzZG1t
YzNfY2xrX3NjaG1pdHRfZW5hYmxlDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBzZG1tYzMgLT4g
c2RtbWMzDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBzZG1tYzNfY2xrX3NjaG1pdHRfZGlzYWJs
ZSAtPiBzZG1tYzNfY2xrX3NjaG1pdHRfZGlzYWJsZQ0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3Ig
c2RtbWMzIC0+IHNkbW1jMw0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3Igc2RtbWMzX2Rydl9jb2Rl
IC0+IHNkbW1jM19kcnZfY29kZQ0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3Igc2RtbWMzIC0+IHNk
bW1jMw0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3Igc2RtbWMzX2RlZmF1bHRfZHJ2X2NvZGUgLT4g
c2RtbWMzX2RlZmF1bHRfZHJ2X2NvZGUNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHNkbW1jMyAt
PiBzZG1tYzMNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIGR2ZnNfcHdtX2FjdGl2ZSAtPiBkdmZz
X3B3bV9hY3RpdmUNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIGR2ZnNfcHdtX3BiYjEgLT4gZHZm
c19wd21fcGJiMQ0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgZHZmc19wd21faW5hY3RpdmUgLT4g
ZHZmc19wd21faW5hY3RpdmUNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIGR2ZnNfcHdtX3BiYjEg
LT4gZHZmc19wd21fcGJiMQ0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgY29tbW9uIC0+IGNvbW1v
bg0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgZHZmc19wd21fcGJiMSAtPiBkdmZzX3B3bV9wYmIx
DQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBkbWljMV9jbGtfcGUwIC0+IGRtaWMxX2Nsa19wZTAN
CihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIGRtaWMxX2RhdF9wZTEgLT4gZG1pYzFfZGF0X3BlMQ0K
KFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgZG1pYzJfY2xrX3BlMiAtPiBkbWljMl9jbGtfcGUyDQoo
WEVOKSBmaXhlZCB1cCBuYW1lIGZvciBkbWljMl9kYXRfcGUzIC0+IGRtaWMyX2RhdF9wZTMNCihY
RU4pIGZpeGVkIHVwIG5hbWUgZm9yIHBlNyAtPiBwZTcNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9y
IGdlbjNfaTJjX3NjbF9wZjAgLT4gZ2VuM19pMmNfc2NsX3BmMA0KKFhFTikgZml4ZWQgdXAgbmFt
ZSBmb3IgZ2VuM19pMmNfc2RhX3BmMSAtPiBnZW4zX2kyY19zZGFfcGYxDQooWEVOKSBmaXhlZCB1
cCBuYW1lIGZvciBjYW1faTJjX3NjbF9wczIgLT4gY2FtX2kyY19zY2xfcHMyDQooWEVOKSBmaXhl
ZCB1cCBuYW1lIGZvciBjYW1faTJjX3NkYV9wczMgLT4gY2FtX2kyY19zZGFfcHMzDQooWEVOKSBm
aXhlZCB1cCBuYW1lIGZvciBjYW0xX21jbGtfcHMwIC0+IGNhbTFfbWNsa19wczANCihYRU4pIGZp
eGVkIHVwIG5hbWUgZm9yIGNhbTJfbWNsa19wczEgLT4gY2FtMl9tY2xrX3BzMQ0KKFhFTikgZml4
ZWQgdXAgbmFtZSBmb3IgcGV4X2wwX2Nsa3JlcV9uX3BhMSAtPiBwZXhfbDBfY2xrcmVxX25fcGEx
DQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBwZXhfbDBfcnN0X25fcGEwIC0+IHBleF9sMF9yc3Rf
bl9wYTANCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHBleF9sMV9jbGtyZXFfbl9wYTQgLT4gcGV4
X2wxX2Nsa3JlcV9uX3BhNA0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgcGV4X2wxX3JzdF9uX3Bh
MyAtPiBwZXhfbDFfcnN0X25fcGEzDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBwZXhfd2FrZV9u
X3BhMiAtPiBwZXhfd2FrZV9uX3BhMg0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3Igc2RtbWMxX2Ns
a19wbTAgLT4gc2RtbWMxX2Nsa19wbTANCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHNkbW1jMV9j
bWRfcG0xIC0+IHNkbW1jMV9jbWRfcG0xDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBzZG1tYzFf
ZGF0MF9wbTUgLT4gc2RtbWMxX2RhdDBfcG01DQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBzZG1t
YzFfZGF0MV9wbTQgLT4gc2RtbWMxX2RhdDFfcG00DQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBz
ZG1tYzFfZGF0Ml9wbTMgLT4gc2RtbWMxX2RhdDJfcG0zDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZv
ciBzZG1tYzFfZGF0M19wbTIgLT4gc2RtbWMxX2RhdDNfcG0yDQooWEVOKSBmaXhlZCB1cCBuYW1l
IGZvciBzZG1tYzNfY2xrX3BwMCAtPiBzZG1tYzNfY2xrX3BwMA0KKFhFTikgZml4ZWQgdXAgbmFt
ZSBmb3Igc2RtbWMzX2NtZF9wcDEgLT4gc2RtbWMzX2NtZF9wcDENCihYRU4pIGZpeGVkIHVwIG5h
bWUgZm9yIHNkbW1jM19kYXQwX3BwNSAtPiBzZG1tYzNfZGF0MF9wcDUNCihYRU4pIGZpeGVkIHVw
IG5hbWUgZm9yIHNkbW1jM19kYXQxX3BwNCAtPiBzZG1tYzNfZGF0MV9wcDQNCihYRU4pIGZpeGVk
IHVwIG5hbWUgZm9yIHNkbW1jM19kYXQyX3BwMyAtPiBzZG1tYzNfZGF0Ml9wcDMNCihYRU4pIGZp
eGVkIHVwIG5hbWUgZm9yIHNkbW1jM19kYXQzX3BwMiAtPiBzZG1tYzNfZGF0M19wcDINCihYRU4p
IGZpeGVkIHVwIG5hbWUgZm9yIHNodXRkb3duIC0+IHNodXRkb3duDQooWEVOKSBmaXhlZCB1cCBu
YW1lIGZvciBsY2RfZ3BpbzJfcHY0IC0+IGxjZF9ncGlvMl9wdjQNCihYRU4pIGZpeGVkIHVwIG5h
bWUgZm9yIHB3cl9pMmNfc2NsX3B5MyAtPiBwd3JfaTJjX3NjbF9weTMNCihYRU4pIGZpeGVkIHVw
IG5hbWUgZm9yIHB3cl9pMmNfc2RhX3B5NCAtPiBwd3JfaTJjX3NkYV9weTQNCihYRU4pIGZpeGVk
IHVwIG5hbWUgZm9yIGNsa18zMmtfaW4gLT4gY2xrXzMya19pbg0KKFhFTikgZml4ZWQgdXAgbmFt
ZSBmb3IgY2xrXzMya19vdXRfcHk1IC0+IGNsa18zMmtfb3V0X3B5NQ0KKFhFTikgZml4ZWQgdXAg
bmFtZSBmb3IgcHoxIC0+IHB6MQ0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgcHo1IC0+IHB6NQ0K
KFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgY29yZV9wd3JfcmVxIC0+IGNvcmVfcHdyX3JlcQ0KKFhF
TikgZml4ZWQgdXAgbmFtZSBmb3IgcHdyX2ludF9uIC0+IHB3cl9pbnRfbg0KKFhFTikgZml4ZWQg
dXAgbmFtZSBmb3IgZ2VuMV9pMmNfc2NsX3BqMSAtPiBnZW4xX2kyY19zY2xfcGoxDQooWEVOKSBm
aXhlZCB1cCBuYW1lIGZvciBnZW4xX2kyY19zZGFfcGowIC0+IGdlbjFfaTJjX3NkYV9wajANCihY
RU4pIGZpeGVkIHVwIG5hbWUgZm9yIGdlbjJfaTJjX3NjbF9wajIgLT4gZ2VuMl9pMmNfc2NsX3Bq
Mg0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgZ2VuMl9pMmNfc2RhX3BqMyAtPiBnZW4yX2kyY19z
ZGFfcGozDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciB1YXJ0Ml90eF9wZzAgLT4gdWFydDJfdHhf
cGcwDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciB1YXJ0Ml9yeF9wZzEgLT4gdWFydDJfcnhfcGcx
DQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciB1YXJ0MV90eF9wdTAgLT4gdWFydDFfdHhfcHUwDQoo
WEVOKSBmaXhlZCB1cCBuYW1lIGZvciB1YXJ0MV9yeF9wdTEgLT4gdWFydDFfcnhfcHUxDQooWEVO
KSBmaXhlZCB1cCBuYW1lIGZvciBqdGFnX3J0Y2sgLT4ganRhZ19ydGNrDQooWEVOKSBmaXhlZCB1
cCBuYW1lIGZvciB1YXJ0M190eF9wZDEgLT4gdWFydDNfdHhfcGQxDQooWEVOKSBmaXhlZCB1cCBu
YW1lIGZvciB1YXJ0M19yeF9wZDIgLT4gdWFydDNfcnhfcGQyDQooWEVOKSBmaXhlZCB1cCBuYW1l
IGZvciB1YXJ0M19ydHNfcGQzIC0+IHVhcnQzX3J0c19wZDMNCihYRU4pIGZpeGVkIHVwIG5hbWUg
Zm9yIHVhcnQzX2N0c19wZDQgLT4gdWFydDNfY3RzX3BkNA0KKFhFTikgZml4ZWQgdXAgbmFtZSBm
b3IgdWFydDRfdHhfcGk0IC0+IHVhcnQ0X3R4X3BpNA0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3Ig
dWFydDRfcnhfcGk1IC0+IHVhcnQ0X3J4X3BpNQ0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgdWFy
dDRfcnRzX3BpNiAtPiB1YXJ0NF9ydHNfcGk2DQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciB1YXJ0
NF9jdHNfcGk3IC0+IHVhcnQ0X2N0c19waTcNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHFzcGlf
aW8wX3BlZTIgLT4gcXNwaV9pbzBfcGVlMg0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgcXNwaV9p
bzFfcGVlMyAtPiBxc3BpX2lvMV9wZWUzDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBxc3BpX3Nj
a19wZWUwIC0+IHFzcGlfc2NrX3BlZTANCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHFzcGlfY3Nf
bl9wZWUxIC0+IHFzcGlfY3Nfbl9wZWUxDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBxc3BpX2lv
Ml9wZWU0IC0+IHFzcGlfaW8yX3BlZTQNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHFzcGlfaW8z
X3BlZTUgLT4gcXNwaV9pbzNfcGVlNQ0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgZGFwMl9kaW5f
cGFhMiAtPiBkYXAyX2Rpbl9wYWEyDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBkYXAyX2RvdXRf
cGFhMyAtPiBkYXAyX2RvdXRfcGFhMw0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgZGFwMl9mc19w
YWEwIC0+IGRhcDJfZnNfcGFhMA0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgZGFwMl9zY2xrX3Bh
YTEgLT4gZGFwMl9zY2xrX3BhYTENCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIGRwX2hwZDBfcGNj
NiAtPiBkcF9ocGQwX3BjYzYNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIGhkbWlfaW50X2RwX2hw
ZF9wY2MxIC0+IGhkbWlfaW50X2RwX2hwZF9wY2MxDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBo
ZG1pX2NlY19wY2MwIC0+IGhkbWlfY2VjX3BjYzANCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIGNh
bTFfcHdkbl9wczcgLT4gY2FtMV9wd2RuX3BzNw0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgY2Ft
Ml9wd2RuX3B0MCAtPiBjYW0yX3B3ZG5fcHQwDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBzYXRh
X2xlZF9hY3RpdmVfcGE1IC0+IHNhdGFfbGVkX2FjdGl2ZV9wYTUNCihYRU4pIGZpeGVkIHVwIG5h
bWUgZm9yIHBhNiAtPiBwYTYNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIGFsc19wcm94X2ludF9w
eDMgLT4gYWxzX3Byb3hfaW50X3B4Mw0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgdGVtcF9hbGVy
dF9weDQgLT4gdGVtcF9hbGVydF9weDQNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIGJ1dHRvbl9w
b3dlcl9vbl9weDUgLT4gYnV0dG9uX3Bvd2VyX29uX3B4NQ0KKFhFTikgZml4ZWQgdXAgbmFtZSBm
b3IgYnV0dG9uX3ZvbF91cF9weDYgLT4gYnV0dG9uX3ZvbF91cF9weDYNCihYRU4pIGZpeGVkIHVw
IG5hbWUgZm9yIGJ1dHRvbl9ob21lX3B5MSAtPiBidXR0b25faG9tZV9weTENCihYRU4pIGZpeGVk
IHVwIG5hbWUgZm9yIGxjZF9ibF9lbl9wdjEgLT4gbGNkX2JsX2VuX3B2MQ0KKFhFTikgZml4ZWQg
dXAgbmFtZSBmb3IgcHoyIC0+IHB6Mg0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgcHozIC0+IHB6
Mw0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3Igd2lmaV9lbl9waDAgLT4gd2lmaV9lbl9waDANCihY
RU4pIGZpeGVkIHVwIG5hbWUgZm9yIHdpZmlfd2FrZV9hcF9waDIgLT4gd2lmaV93YWtlX2FwX3Bo
Mg0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgYXBfd2FrZV9idF9waDMgLT4gYXBfd2FrZV9idF9w
aDMNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIGJ0X3JzdF9waDQgLT4gYnRfcnN0X3BoNA0KKFhF
TikgZml4ZWQgdXAgbmFtZSBmb3IgYnRfd2FrZV9hcF9waDUgLT4gYnRfd2FrZV9hcF9waDUNCihY
RU4pIGZpeGVkIHVwIG5hbWUgZm9yIHBoNiAtPiBwaDYNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9y
IGFwX3dha2VfbmZjX3BoNyAtPiBhcF93YWtlX25mY19waDcNCihYRU4pIGZpeGVkIHVwIG5hbWUg
Zm9yIG5mY19lbl9waTAgLT4gbmZjX2VuX3BpMA0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgbmZj
X2ludF9waTEgLT4gbmZjX2ludF9waTENCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIGdwc19lbl9w
aTIgLT4gZ3BzX2VuX3BpMg0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgcGNjNyAtPiBwY2M3DQoo
WEVOKSBmaXhlZCB1cCBuYW1lIGZvciB1c2JfdmJ1c19lbjBfcGNjNCAtPiB1c2JfdmJ1c19lbjBf
cGNjNA0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgdW51c2VkX2xvd3Bvd2VyIC0+IHVudXNlZF9s
b3dwb3dlcg0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgYXVkX21jbGtfcGJiMCAtPiBhdWRfbWNs
a19wYmIwDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBkdmZzX2Nsa19wYmIyIC0+IGR2ZnNfY2xr
X3BiYjINCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIGdwaW9feDFfYXVkX3BiYjMgLT4gZ3Bpb194
MV9hdWRfcGJiMw0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgZ3Bpb194M19hdWRfcGJiNCAtPiBn
cGlvX3gzX2F1ZF9wYmI0DQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBkYXAxX2Rpbl9wYjEgLT4g
ZGFwMV9kaW5fcGIxDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBkYXAxX2RvdXRfcGIyIC0+IGRh
cDFfZG91dF9wYjINCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIGRhcDFfZnNfcGIwIC0+IGRhcDFf
ZnNfcGIwDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBkYXAxX3NjbGtfcGIzIC0+IGRhcDFfc2Ns
a19wYjMNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHNwaTJfbW9zaV9wYjQgLT4gc3BpMl9tb3Np
X3BiNA0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3Igc3BpMl9taXNvX3BiNSAtPiBzcGkyX21pc29f
cGI1DQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBzcGkyX3Nja19wYjYgLT4gc3BpMl9zY2tfcGI2
DQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBzcGkyX2NzMF9wYjcgLT4gc3BpMl9jczBfcGI3DQoo
WEVOKSBmaXhlZCB1cCBuYW1lIGZvciBzcGkyX2NzMV9wZGQwIC0+IHNwaTJfY3MxX3BkZDANCihY
RU4pIGZpeGVkIHVwIG5hbWUgZm9yIGRtaWMzX2Nsa19wZTQgLT4gZG1pYzNfY2xrX3BlNA0KKFhF
TikgZml4ZWQgdXAgbmFtZSBmb3IgZG1pYzNfZGF0X3BlNSAtPiBkbWljM19kYXRfcGU1DQooWEVO
KSBmaXhlZCB1cCBuYW1lIGZvciBwZTYgLT4gcGU2DQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBj
YW1fcnN0X3BzNCAtPiBjYW1fcnN0X3BzNA0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgY2FtX2Fm
X2VuX3BzNSAtPiBjYW1fYWZfZW5fcHM1DQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBjYW1fZmxh
c2hfZW5fcHM2IC0+IGNhbV9mbGFzaF9lbl9wczYNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIGNh
bTFfc3Ryb2JlX3B0MSAtPiBjYW0xX3N0cm9iZV9wdDENCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9y
IG1vdGlvbl9pbnRfcHgyIC0+IG1vdGlvbl9pbnRfcHgyDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZv
ciB0b3VjaF9yc3RfcHY2IC0+IHRvdWNoX3JzdF9wdjYNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9y
IHRvdWNoX2Nsa19wdjcgLT4gdG91Y2hfY2xrX3B2Nw0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3Ig
dG91Y2hfaW50X3B4MSAtPiB0b3VjaF9pbnRfcHgxDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBt
b2RlbV93YWtlX2FwX3B4MCAtPiBtb2RlbV93YWtlX2FwX3B4MA0KKFhFTikgZml4ZWQgdXAgbmFt
ZSBmb3IgYnV0dG9uX3ZvbF9kb3duX3B4NyAtPiBidXR0b25fdm9sX2Rvd25fcHg3DQooWEVOKSBm
aXhlZCB1cCBuYW1lIGZvciBidXR0b25fc2xpZGVfc3dfcHkwIC0+IGJ1dHRvbl9zbGlkZV9zd19w
eTANCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIGxjZF90ZV9weTIgLT4gbGNkX3RlX3B5Mg0KKFhF
TikgZml4ZWQgdXAgbmFtZSBmb3IgbGNkX2JsX3B3bV9wdjAgLT4gbGNkX2JsX3B3bV9wdjANCihY
RU4pIGZpeGVkIHVwIG5hbWUgZm9yIGxjZF9yc3RfcHYyIC0+IGxjZF9yc3RfcHYyDQooWEVOKSBm
aXhlZCB1cCBuYW1lIGZvciBsY2RfZ3BpbzFfcHYzIC0+IGxjZF9ncGlvMV9wdjMNCihYRU4pIGZp
eGVkIHVwIG5hbWUgZm9yIGFwX3JlYWR5X3B2NSAtPiBhcF9yZWFkeV9wdjUNCihYRU4pIGZpeGVk
IHVwIG5hbWUgZm9yIHB6MCAtPiBwejANCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHB6NCAtPiBw
ejQNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIGNsa19yZXEgLT4gY2xrX3JlcQ0KKFhFTikgZml4
ZWQgdXAgbmFtZSBmb3IgY3B1X3B3cl9yZXEgLT4gY3B1X3B3cl9yZXENCihYRU4pIGZpeGVkIHVw
IG5hbWUgZm9yIGRhcDRfZGluX3BqNSAtPiBkYXA0X2Rpbl9wajUNCihYRU4pIGZpeGVkIHVwIG5h
bWUgZm9yIGRhcDRfZG91dF9wajYgLT4gZGFwNF9kb3V0X3BqNg0KKFhFTikgZml4ZWQgdXAgbmFt
ZSBmb3IgZGFwNF9mc19wajQgLT4gZGFwNF9mc19wajQNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9y
IGRhcDRfc2Nsa19wajcgLT4gZGFwNF9zY2xrX3BqNw0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3Ig
dWFydDJfcnRzX3BnMiAtPiB1YXJ0Ml9ydHNfcGcyDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciB1
YXJ0Ml9jdHNfcGczIC0+IHVhcnQyX2N0c19wZzMNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHVh
cnQxX3J0c19wdTIgLT4gdWFydDFfcnRzX3B1Mg0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgdWFy
dDFfY3RzX3B1MyAtPiB1YXJ0MV9jdHNfcHUzDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBwazAg
LT4gcGswDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBwazEgLT4gcGsxDQooWEVOKSBmaXhlZCB1
cCBuYW1lIGZvciBwazIgLT4gcGsyDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBwazMgLT4gcGsz
DQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBwazQgLT4gcGs0DQooWEVOKSBmaXhlZCB1cCBuYW1l
IGZvciBwazUgLT4gcGs1DQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBwazYgLT4gcGs2DQooWEVO
KSBmaXhlZCB1cCBuYW1lIGZvciBwazcgLT4gcGs3DQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBw
bDAgLT4gcGwwDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBwbDEgLT4gcGwxDQooWEVOKSBmaXhl
ZCB1cCBuYW1lIGZvciBzcGkxX21vc2lfcGMwIC0+IHNwaTFfbW9zaV9wYzANCihYRU4pIGZpeGVk
IHVwIG5hbWUgZm9yIHNwaTFfbWlzb19wYzEgLT4gc3BpMV9taXNvX3BjMQ0KKFhFTikgZml4ZWQg
dXAgbmFtZSBmb3Igc3BpMV9zY2tfcGMyIC0+IHNwaTFfc2NrX3BjMg0KKFhFTikgZml4ZWQgdXAg
bmFtZSBmb3Igc3BpMV9jczBfcGMzIC0+IHNwaTFfY3MwX3BjMw0KKFhFTikgZml4ZWQgdXAgbmFt
ZSBmb3Igc3BpMV9jczFfcGM0IC0+IHNwaTFfY3MxX3BjNA0KKFhFTikgZml4ZWQgdXAgbmFtZSBm
b3Igc3BpNF9tb3NpX3BjNyAtPiBzcGk0X21vc2lfcGM3DQooWEVOKSBmaXhlZCB1cCBuYW1lIGZv
ciBzcGk0X21pc29fcGQwIC0+IHNwaTRfbWlzb19wZDANCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9y
IHNwaTRfc2NrX3BjNSAtPiBzcGk0X3Nja19wYzUNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHNw
aTRfY3MwX3BjNiAtPiBzcGk0X2NzMF9wYzYNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHdpZmlf
cnN0X3BoMSAtPiB3aWZpX3JzdF9waDENCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIGdwc19yc3Rf
cGkzIC0+IGdwc19yc3RfcGkzDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBzcGRpZl9vdXRfcGNj
MiAtPiBzcGRpZl9vdXRfcGNjMg0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3Igc3BkaWZfaW5fcGNj
MyAtPiBzcGRpZl9pbl9wY2MzDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciB1c2JfdmJ1c19lbjFf
cGNjNSAtPiB1c2JfdmJ1c19lbjFfcGNjNQ0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgZHJpdmUg
LT4gZHJpdmUNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIGdwaW9ANjAwMGQwMDAgLT4gZ3Bpbw0K
KFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgY2FtZXJhLWNvbnRyb2wtb3V0cHV0LWxvdyAtPiBjYW1l
cmEtY29udHJvbC1vdXRwdXQtbG93DQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBlMjYxNC1ydDU2
NTgtYXVkaW8gLT4gZTI2MTQtcnQ1NjU4LWF1ZGlvDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBz
eXN0ZW0tc3VzcGVuZC1ncGlvIC0+IHN5c3RlbS1zdXNwZW5kLWdwaW8NCihYRU4pIGZpeGVkIHVw
IG5hbWUgZm9yIGRlZmF1bHQgLT4gZGVmYXVsdA0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgeG90
ZyAtPiB4b3RnDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBtYWlsYm94QDcwMDk4MDAwIC0+IG1h
aWxib3gNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHh1c2JfcGFkY3RsQDcwMDlmMDAwIC0+IHh1
c2JfcGFkY3RsDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBwYWRzIC0+IHBhZHMNCihYRU4pIGZp
eGVkIHVwIG5hbWUgZm9yIHVzYjIgLT4gdXNiMg0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgbGFu
ZXMgLT4gbGFuZXMNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHVzYjItMCAtPiB1c2IyLTANCihY
RU4pIGZpeGVkIHVwIG5hbWUgZm9yIHVzYjItMSAtPiB1c2IyLTENCihYRU4pIGZpeGVkIHVwIG5h
bWUgZm9yIHVzYjItMiAtPiB1c2IyLTINCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHVzYjItMyAt
PiB1c2IyLTMNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHBjaWUgLT4gcGNpZQ0KKFhFTikgZml4
ZWQgdXAgbmFtZSBmb3IgbGFuZXMgLT4gbGFuZXMNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHBj
aWUtMCAtPiBwY2llLTANCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHBjaWUtMSAtPiBwY2llLTEN
CihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHBjaWUtMiAtPiBwY2llLTINCihYRU4pIGZpeGVkIHVw
IG5hbWUgZm9yIHBjaWUtMyAtPiBwY2llLTMNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHBjaWUt
NCAtPiBwY2llLTQNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHBjaWUtNSAtPiBwY2llLTUNCihY
RU4pIGZpeGVkIHVwIG5hbWUgZm9yIHBjaWUtNiAtPiBwY2llLTYNCihYRU4pIGZpeGVkIHVwIG5h
bWUgZm9yIHNhdGEgLT4gc2F0YQ0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgbGFuZXMgLT4gbGFu
ZXMNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHNhdGEtMCAtPiBzYXRhLTANCihYRU4pIGZpeGVk
IHVwIG5hbWUgZm9yIGhzaWMgLT4gaHNpYw0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgbGFuZXMg
LT4gbGFuZXMNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIGhzaWMtMCAtPiBoc2ljLTANCihYRU4p
IGZpeGVkIHVwIG5hbWUgZm9yIHBvcnRzIC0+IHBvcnRzDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZv
ciB1c2IyLTAgLT4gdXNiMi0wDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciB1c2IyLTEgLT4gdXNi
Mi0xDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciB1c2IyLTIgLT4gdXNiMi0yDQooWEVOKSBmaXhl
ZCB1cCBuYW1lIGZvciB1c2IyLTMgLT4gdXNiMi0zDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciB1
c2IzLTAgLT4gdXNiMy0wDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciB1c2IzLTEgLT4gdXNiMy0x
DQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciB1c2IzLTIgLT4gdXNiMy0yDQooWEVOKSBmaXhlZCB1
cCBuYW1lIGZvciB1c2IzLTMgLT4gdXNiMy0zDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBoc2lj
LTAgLT4gaHNpYy0wDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBwcm9kLXNldHRpbmdzIC0+IHBy
b2Qtc2V0dGluZ3MNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHByb2RfY19iaWFzIC0+IHByb2Rf
Y19iaWFzDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBwcm9kX2NfYmlhc19hMDIgLT4gcHJvZF9j
X2JpYXNfYTAyDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBwcm9kX2NfdXRtaTAgLT4gcHJvZF9j
X3V0bWkwDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBwcm9kX2NfdXRtaTEgLT4gcHJvZF9jX3V0
bWkxDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBwcm9kX2NfdXRtaTIgLT4gcHJvZF9jX3V0bWky
DQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBwcm9kX2NfdXRtaTMgLT4gcHJvZF9jX3V0bWkzDQoo
WEVOKSBmaXhlZCB1cCBuYW1lIGZvciBwcm9kX2Nfc3MwIC0+IHByb2RfY19zczANCihYRU4pIGZp
eGVkIHVwIG5hbWUgZm9yIHByb2RfY19zczEgLT4gcHJvZF9jX3NzMQ0KKFhFTikgZml4ZWQgdXAg
bmFtZSBmb3IgcHJvZF9jX3NzMiAtPiBwcm9kX2Nfc3MyDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZv
ciBwcm9kX2Nfc3MzIC0+IHByb2RfY19zczMNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHByb2Rf
Y19oc2ljMCAtPiBwcm9kX2NfaHNpYzANCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHByb2RfY19o
c2ljMSAtPiBwcm9kX2NfaHNpYzENCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHVzYl9jZCAtPiB1
c2JfY2QNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHBpbmN0cmxANzAwOWYwMDAgLT4gcGluY3Ry
bA0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgeHVzYkA3MDA5MDAwMCAtPiB4dXNiDQooWEVOKSBm
aXhlZCB1cCBuYW1lIGZvciBtYXgxNjk4NC1jZHAgLT4gbWF4MTY5ODQtY2RwDQooWEVOKSBmaXhl
ZCB1cCBuYW1lIGZvciBzZXJpYWxANzAwMDYwMDAgLT4gc2VyaWFsDQooWEVOKSBmaXhlZCB1cCBu
YW1lIGZvciBzZXJpYWxANzAwMDYwNDAgLT4gc2VyaWFsDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZv
ciBzZXJpYWxANzAwMDYyMDAgLT4gc2VyaWFsDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBzZXJp
YWxANzAwMDYzMDAgLT4gc2VyaWFsDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBzb3VuZCAtPiBz
b3VuZA0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgbnZpZGlhLGRhaS1saW5rLTEgLT4gbnZpZGlh
LGRhaS1saW5rLTENCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIG52aWRpYSxkYWktbGluay0yIC0+
IG52aWRpYSxkYWktbGluay0yDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBudmlkaWEsZGFpLWxp
bmstMyAtPiBudmlkaWEsZGFpLWxpbmstMw0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgbnZpZGlh
LGRhaS1saW5rLTQgLT4gbnZpZGlhLGRhaS1saW5rLTQNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9y
IHNvdW5kX3JlZiAtPiBzb3VuZF9yZWYNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHB3bUA3MDAw
YTAwMCAtPiBwd20NCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHNwaUA3MDAwZDQwMCAtPiBzcGkN
CihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHByb2Qtc2V0dGluZ3MgLT4gcHJvZC1zZXR0aW5ncw0K
KFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgcHJvZCAtPiBwcm9kDQooWEVOKSBmaXhlZCB1cCBuYW1l
IGZvciBwcm9kX2NfZmxhc2ggLT4gcHJvZF9jX2ZsYXNoDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZv
ciBwcm9kX2NfbG9vcCAtPiBwcm9kX2NfbG9vcA0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3Igc3Bp
QDAgLT4gc3BpDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBjb250cm9sbGVyLWRhdGEgLT4gY29u
dHJvbGxlci1kYXRhDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBzcGlAMSAtPiBzcGkNCihYRU4p
IGZpeGVkIHVwIG5hbWUgZm9yIGNvbnRyb2xsZXItZGF0YSAtPiBjb250cm9sbGVyLWRhdGENCihY
RU4pIGZpeGVkIHVwIG5hbWUgZm9yIHNwaUA3MDAwZDYwMCAtPiBzcGkNCihYRU4pIGZpeGVkIHVw
IG5hbWUgZm9yIHByb2Qtc2V0dGluZ3MgLT4gcHJvZC1zZXR0aW5ncw0KKFhFTikgZml4ZWQgdXAg
bmFtZSBmb3IgcHJvZCAtPiBwcm9kDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBwcm9kX2NfZmxh
c2ggLT4gcHJvZF9jX2ZsYXNoDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBwcm9kX2NfbG9vcCAt
PiBwcm9kX2NfbG9vcA0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3Igc3BpQDAgLT4gc3BpDQooWEVO
KSBmaXhlZCB1cCBuYW1lIGZvciBjb250cm9sbGVyLWRhdGEgLT4gY29udHJvbGxlci1kYXRhDQoo
WEVOKSBmaXhlZCB1cCBuYW1lIGZvciBzcGlAMSAtPiBzcGkNCihYRU4pIGZpeGVkIHVwIG5hbWUg
Zm9yIGNvbnRyb2xsZXItZGF0YSAtPiBjb250cm9sbGVyLWRhdGENCihYRU4pIGZpeGVkIHVwIG5h
bWUgZm9yIHNwaUA3MDAwZDgwMCAtPiBzcGkNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHByb2Qt
c2V0dGluZ3MgLT4gcHJvZC1zZXR0aW5ncw0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgcHJvZCAt
PiBwcm9kDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBwcm9kX2NfZmxhc2ggLT4gcHJvZF9jX2Zs
YXNoDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBwcm9kX2NfbG9vcCAtPiBwcm9kX2NfbG9vcA0K
KFhFTikgZml4ZWQgdXAgbmFtZSBmb3Igc3BpQDcwMDBkYTAwIC0+IHNwaQ0KKFhFTikgZml4ZWQg
dXAgbmFtZSBmb3IgcHJvZC1zZXR0aW5ncyAtPiBwcm9kLXNldHRpbmdzDQooWEVOKSBmaXhlZCB1
cCBuYW1lIGZvciBwcm9kIC0+IHByb2QNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHByb2RfY19m
bGFzaCAtPiBwcm9kX2NfZmxhc2gNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHByb2RfY19jczAg
LT4gcHJvZF9jX2NzMA0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3Igc3BpLXRvdWNoMTl4MTJAMCAt
PiBzcGktdG91Y2gxOXgxMg0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3Igc3BpLXRvdWNoMjV4MTZA
MCAtPiBzcGktdG91Y2gyNXgxNg0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3Igc3BpLXRvdWNoMTR4
OEAwIC0+IHNwaS10b3VjaDE0eDgNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHNwaUA3MDQxMDAw
MCAtPiBzcGkNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHNwaWZsYXNoQDAgLT4gc3BpZmxhc2gN
CihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIGNvbnRyb2xsZXItZGF0YSAtPiBjb250cm9sbGVyLWRh
dGENCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIGhvc3QxeCAtPiBob3N0MXgNCihYRU4pIGZpeGVk
IHVwIG5hbWUgZm9yIHZpIC0+IHZpDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBwb3J0cyAtPiBw
b3J0cw0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgcG9ydEAwIC0+IHBvcnQNCihYRU4pIGZpeGVk
IHVwIG5hbWUgZm9yIGVuZHBvaW50IC0+IGVuZHBvaW50DQooWEVOKSBmaXhlZCB1cCBuYW1lIGZv
ciBwb3J0QDEgLT4gcG9ydA0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgZW5kcG9pbnQgLT4gZW5k
cG9pbnQNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHZpLWJ5cGFzcyAtPiB2aS1ieXBhc3MNCihY
RU4pIGZpeGVkIHVwIG5hbWUgZm9yIGlzcEA1NDYwMDAwMCAtPiBpc3ANCihYRU4pIGZpeGVkIHVw
IG5hbWUgZm9yIGlzcEA1NDY4MDAwMCAtPiBpc3ANCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIGRj
QDU0MjAwMDAwIC0+IGRjDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciByZ2IgLT4gcmdiDQooWEVO
KSBmaXhlZCB1cCBuYW1lIGZvciBkY0A1NDI0MDAwMCAtPiBkYw0KKFhFTikgZml4ZWQgdXAgbmFt
ZSBmb3IgcmdiIC0+IHJnYg0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgZHNpIC0+IGRzaQ0KKFhF
TikgZml4ZWQgdXAgbmFtZSBmb3IgdmljIC0+IHZpYw0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3Ig
bnZlbmMgLT4gbnZlbmMNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHRzZWMgLT4gdHNlYw0KKFhF
TikgZml4ZWQgdXAgbmFtZSBmb3IgdHNlY2IgLT4gdHNlY2INCihYRU4pIGZpeGVkIHVwIG5hbWUg
Zm9yIG52ZGVjIC0+IG52ZGVjDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBudmpwZyAtPiBudmpw
Zw0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3Igc29yIC0+IHNvcg0KKFhFTikgZml4ZWQgdXAgbmFt
ZSBmb3IgaGRtaS1kaXNwbGF5IC0+IGhkbWktZGlzcGxheQ0KKFhFTikgZml4ZWQgdXAgbmFtZSBm
b3IgZHAtZGlzcGxheSAtPiBkcC1kaXNwbGF5DQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBkaXNw
LWRlZmF1bHQtb3V0IC0+IGRpc3AtZGVmYXVsdC1vdXQNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9y
IGRwLWx0LXNldHRpbmdzIC0+IGRwLWx0LXNldHRpbmdzDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZv
ciBsdC1zZXR0aW5nQDAgLT4gbHQtc2V0dGluZw0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgbHQt
c2V0dGluZ0AxIC0+IGx0LXNldHRpbmcNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIGx0LXNldHRp
bmdAMiAtPiBsdC1zZXR0aW5nDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBwcm9kLXNldHRpbmdz
IC0+IHByb2Qtc2V0dGluZ3MNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHByb2RfY19kcCAtPiBw
cm9kX2NfZHANCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHNvcjEgLT4gc29yMQ0KKFhFTikgZml4
ZWQgdXAgbmFtZSBmb3IgaGRtaS1kaXNwbGF5IC0+IGhkbWktZGlzcGxheQ0KKFhFTikgZml4ZWQg
dXAgbmFtZSBmb3IgZGlzcC1kZWZhdWx0LW91dCAtPiBkaXNwLWRlZmF1bHQtb3V0DQooWEVOKSBm
aXhlZCB1cCBuYW1lIGZvciBkcC1kaXNwbGF5IC0+IGRwLWRpc3BsYXkNCihYRU4pIGZpeGVkIHVw
IG5hbWUgZm9yIHByb2Qtc2V0dGluZ3MgLT4gcHJvZC1zZXR0aW5ncw0KKFhFTikgZml4ZWQgdXAg
bmFtZSBmb3IgcHJvZCAtPiBwcm9kDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBwcm9kX2NfaGRt
aV8wbV81NG0gLT4gcHJvZF9jX2hkbWlfMG1fNTRtDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBw
cm9kX2NfaGRtaV81NG1fMTExbSAtPiBwcm9kX2NfaGRtaV81NG1fMTExbQ0KKFhFTikgZml4ZWQg
dXAgbmFtZSBmb3IgcHJvZF9jX2hkbWlfMTExbV8yMjNtIC0+IHByb2RfY19oZG1pXzExMW1fMjIz
bQ0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgcHJvZF9jX2hkbWlfMjIzbV8zMDBtIC0+IHByb2Rf
Y19oZG1pXzIyM21fMzAwbQ0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgcHJvZF9jX2hkbWlfMzAw
bV82MDBtIC0+IHByb2RfY19oZG1pXzMwMG1fNjAwbQ0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3Ig
cHJvZF9jXzU0TSAtPiBwcm9kX2NfNTRNDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBwcm9kX2Nf
NzVNIC0+IHByb2RfY183NU0NCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHByb2RfY18xNTBNIC0+
IHByb2RfY18xNTBNDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBwcm9kX2NfMzAwTSAtPiBwcm9k
X2NfMzAwTQ0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgcHJvZF9jXzYwME0gLT4gcHJvZF9jXzYw
ME0NCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHByb2RfY19kcCAtPiBwcm9kX2NfZHANCihYRU4p
IGZpeGVkIHVwIG5hbWUgZm9yIHByb2RfY19oZG1pXzU0bV83NW0gLT4gcHJvZF9jX2hkbWlfNTRt
Xzc1bQ0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgcHJvZF9jX2hkbWlfNzVtXzE1MG0gLT4gcHJv
ZF9jX2hkbWlfNzVtXzE1MG0NCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHByb2RfY19oZG1pXzE1
MG1fMzAwbSAtPiBwcm9kX2NfaGRtaV8xNTBtXzMwMG0NCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9y
IGRwYXV4IC0+IGRwYXV4DQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBwcm9kLXNldHRpbmdzIC0+
IHByb2Qtc2V0dGluZ3MNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHByb2RfY19kcGF1eF9kcCAt
PiBwcm9kX2NfZHBhdXhfZHANCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHByb2RfY19kcGF1eF9o
ZG1pIC0+IHByb2RfY19kcGF1eF9oZG1pDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBkcGF1eDEg
LT4gZHBhdXgxDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBwcm9kLXNldHRpbmdzIC0+IHByb2Qt
c2V0dGluZ3MNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHByb2RfY19kcGF1eF9kcCAtPiBwcm9k
X2NfZHBhdXhfZHANCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHByb2RfY19kcGF1eF9oZG1pIC0+
IHByb2RfY19kcGF1eF9oZG1pDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBpMmNANTQ2YzAwMDAg
LT4gaTJjDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciByYnBjdjJfaW14MjE5X2FAMTAgLT4gcmJw
Y3YyX2lteDIxOV9hDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBtb2RlMCAtPiBtb2RlMA0KKFhF
TikgZml4ZWQgdXAgbmFtZSBmb3IgbW9kZTEgLT4gbW9kZTENCihYRU4pIGZpeGVkIHVwIG5hbWUg
Zm9yIG1vZGUyIC0+IG1vZGUyDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBtb2RlMyAtPiBtb2Rl
Mw0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgbW9kZTQgLT4gbW9kZTQNCihYRU4pIGZpeGVkIHVw
IG5hbWUgZm9yIHBvcnRzIC0+IHBvcnRzDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBwb3J0QDAg
LT4gcG9ydA0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgZW5kcG9pbnQgLT4gZW5kcG9pbnQNCihY
RU4pIGZpeGVkIHVwIG5hbWUgZm9yIGluYTMyMjF4QDQwIC0+IGluYTMyMjF4DQooWEVOKSBmaXhl
ZCB1cCBuYW1lIGZvciBjaGFubmVsQDAgLT4gY2hhbm5lbA0KKFhFTikgZml4ZWQgdXAgbmFtZSBm
b3IgY2hhbm5lbEAxIC0+IGNoYW5uZWwNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIGNoYW5uZWxA
MiAtPiBjaGFubmVsDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBudmNzaSAtPiBudmNzaQ0KKFhF
TikgZml4ZWQgdXAgbmFtZSBmb3IgY2hhbm5lbEAwIC0+IGNoYW5uZWwNCihYRU4pIGZpeGVkIHVw
IG5hbWUgZm9yIHBvcnRzIC0+IHBvcnRzDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBwb3J0QDAg
LT4gcG9ydA0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgZW5kcG9pbnRAMCAtPiBlbmRwb2ludA0K
KFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgcG9ydEAxIC0+IHBvcnQNCihYRU4pIGZpeGVkIHVwIG5h
bWUgZm9yIGVuZHBvaW50QDEgLT4gZW5kcG9pbnQNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIGNo
YW5uZWxAMSAtPiBjaGFubmVsDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBwb3J0cyAtPiBwb3J0
cw0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgcG9ydEAyIC0+IHBvcnQNCihYRU4pIGZpeGVkIHVw
IG5hbWUgZm9yIGVuZHBvaW50QDIgLT4gZW5kcG9pbnQNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9y
IHBvcnRAMyAtPiBwb3J0DQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBlbmRwb2ludEAzIC0+IGVu
ZHBvaW50DQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBncHUgLT4gZ3B1DQooWEVOKSBmaXhlZCB1
cCBuYW1lIGZvciBtaXBpY2FsIC0+IG1pcGljYWwNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHBy
b2Qtc2V0dGluZ3MgLT4gcHJvZC1zZXR0aW5ncw0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgcHJv
ZF9jX2RwaHlfZHNpIC0+IHByb2RfY19kcGh5X2RzaQ0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3Ig
cG1jQDcwMDBlNDAwIC0+IHBtYw0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgcGV4X2VuIC0+IHBl
eF9lbg0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgcGV4LWlvLWRwZC1zaWduYWxzLWRpcyAtPiBw
ZXgtaW8tZHBkLXNpZ25hbHMtZGlzDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBwZXhfZGlzIC0+
IHBleF9kaXMNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHBleC1pby1kcGQtc2lnbmFscy1lbiAt
PiBwZXgtaW8tZHBkLXNpZ25hbHMtZW4NCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIGhkbWktZHBk
LWVuYWJsZSAtPiBoZG1pLWRwZC1lbmFibGUNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIGhkbWkt
cGFkLWxvd3Bvd2VyLWVuYWJsZSAtPiBoZG1pLXBhZC1sb3dwb3dlci1lbmFibGUNCihYRU4pIGZp
eGVkIHVwIG5hbWUgZm9yIGhkbWktZHBkLWRpc2FibGUgLT4gaGRtaS1kcGQtZGlzYWJsZQ0KKFhF
TikgZml4ZWQgdXAgbmFtZSBmb3IgaGRtaS1wYWQtbG93cG93ZXItZGlzYWJsZSAtPiBoZG1pLXBh
ZC1sb3dwb3dlci1kaXNhYmxlDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBkc2ktZHBkLWVuYWJs
ZSAtPiBkc2ktZHBkLWVuYWJsZQ0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgZHNpLXBhZC1sb3dw
b3dlci1lbmFibGUgLT4gZHNpLXBhZC1sb3dwb3dlci1lbmFibGUNCihYRU4pIGZpeGVkIHVwIG5h
bWUgZm9yIGRzaS1kcGQtZGlzYWJsZSAtPiBkc2ktZHBkLWRpc2FibGUNCihYRU4pIGZpeGVkIHVw
IG5hbWUgZm9yIGRzaS1wYWQtbG93cG93ZXItZGlzYWJsZSAtPiBkc2ktcGFkLWxvd3Bvd2VyLWRp
c2FibGUNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIGRzaWItZHBkLWVuYWJsZSAtPiBkc2liLWRw
ZC1lbmFibGUNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIGRzaWItcGFkLWxvd3Bvd2VyLWVuYWJs
ZSAtPiBkc2liLXBhZC1sb3dwb3dlci1lbmFibGUNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIGRz
aWItZHBkLWRpc2FibGUgLT4gZHNpYi1kcGQtZGlzYWJsZQ0KKFhFTikgZml4ZWQgdXAgbmFtZSBm
b3IgZHNpYi1wYWQtbG93cG93ZXItZGlzYWJsZSAtPiBkc2liLXBhZC1sb3dwb3dlci1kaXNhYmxl
DQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBpb3BhZC1kZWZhdWx0cyAtPiBpb3BhZC1kZWZhdWx0
cw0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgYXVkaW8tcGFkcyAtPiBhdWRpby1wYWRzDQooWEVO
KSBmaXhlZCB1cCBuYW1lIGZvciBjYW0tcGFkcyAtPiBjYW0tcGFkcw0KKFhFTikgZml4ZWQgdXAg
bmFtZSBmb3IgZGJnLXBhZHMgLT4gZGJnLXBhZHMNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIGRt
aWMtcGFkcyAtPiBkbWljLXBhZHMNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHBleC1jdHJsLXBh
ZHMgLT4gcGV4LWN0cmwtcGFkcw0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3Igc3BpLXBhZHMgLT4g
c3BpLXBhZHMNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHVhcnQtcGFkcyAtPiB1YXJ0LXBhZHMN
CihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHBleC1pby1wYWRzIC0+IHBleC1pby1wYWRzDQooWEVO
KSBmaXhlZCB1cCBuYW1lIGZvciBhdWRpby1odi1wYWRzIC0+IGF1ZGlvLWh2LXBhZHMNCihYRU4p
IGZpeGVkIHVwIG5hbWUgZm9yIHNwaS1odi1wYWRzIC0+IHNwaS1odi1wYWRzDQooWEVOKSBmaXhl
ZCB1cCBuYW1lIGZvciBncGlvLXBhZHMgLT4gZ3Bpby1wYWRzDQooWEVOKSBmaXhlZCB1cCBuYW1l
IGZvciBzZG1tYy1pby1wYWRzIC0+IHNkbW1jLWlvLXBhZHMNCihYRU4pIGZpeGVkIHVwIG5hbWUg
Zm9yIGJvb3Ryb20tY29tbWFuZHMgLT4gYm9vdHJvbS1jb21tYW5kcw0KKFhFTikgZml4ZWQgdXAg
bmFtZSBmb3IgcmVzZXQtY29tbWFuZHMgLT4gcmVzZXQtY29tbWFuZHMNCihYRU4pIGZpeGVkIHVw
IG5hbWUgZm9yIGNvbW1hbmRzQDQtMDAzYyAtPiBjb21tYW5kcw0KKFhFTikgZml4ZWQgdXAgbmFt
ZSBmb3IgcG93ZXItb2ZmLWNvbW1hbmRzIC0+IHBvd2VyLW9mZi1jb21tYW5kcw0KKFhFTikgZml4
ZWQgdXAgbmFtZSBmb3IgY29tbWFuZHNANC0wMDNjIC0+IGNvbW1hbmRzDQooWEVOKSBmaXhlZCB1
cCBuYW1lIGZvciBzZG1tYzFfZV8zM1ZfZW5hYmxlIC0+IHNkbW1jMV9lXzMzVl9lbmFibGUNCihY
RU4pIGZpeGVkIHVwIG5hbWUgZm9yIHNkbW1jMSAtPiBzZG1tYzENCihYRU4pIGZpeGVkIHVwIG5h
bWUgZm9yIHNkbW1jMV9lXzMzVl9kaXNhYmxlIC0+IHNkbW1jMV9lXzMzVl9kaXNhYmxlDQooWEVO
KSBmaXhlZCB1cCBuYW1lIGZvciBzZG1tYzEgLT4gc2RtbWMxDQooWEVOKSBmaXhlZCB1cCBuYW1l
IGZvciBzZG1tYzNfZV8zM1ZfZW5hYmxlIC0+IHNkbW1jM19lXzMzVl9lbmFibGUNCihYRU4pIGZp
eGVkIHVwIG5hbWUgZm9yIHNkbW1jMyAtPiBzZG1tYzMNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9y
IHNkbW1jM19lXzMzVl9kaXNhYmxlIC0+IHNkbW1jM19lXzMzVl9kaXNhYmxlDQooWEVOKSBmaXhl
ZCB1cCBuYW1lIGZvciBzZG1tYzMgLT4gc2RtbWMzDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBz
ZUA3MDAxMjAwMCAtPiBzZQ0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgaGRhQDcwMDMwMDAwIC0+
IGhkYQ0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgcGNpZUAxMDAzMDAwIC0+IHBjaWUNCihYRU4p
IGZpeGVkIHVwIG5hbWUgZm9yIHBjaUAxLDAgLT4gcGNpDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZv
ciBwY2lAMiwwIC0+IHBjaQ0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgZXRoZXJuZXRAMCwwIC0+
IGV0aGVybmV0DQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBwcm9kLXNldHRpbmdzIC0+IHByb2Qt
c2V0dGluZ3MNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHByb2RfY19wYWQgLT4gcHJvZF9jX3Bh
ZA0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgcHJvZF9jX3JwIC0+IHByb2RfY19ycA0KKFhFTikg
Zml4ZWQgdXAgbmFtZSBmb3IgaTJjQDcwMDBjMDAwIC0+IGkyYw0KKFhFTikgZml4ZWQgdXAgbmFt
ZSBmb3IgdGVtcC1zZW5zb3JANGMgLT4gdGVtcC1zZW5zb3INCihYRU4pIGZpeGVkIHVwIG5hbWUg
Zm9yIGxvYyAtPiBsb2MNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIGV4dCAtPiBleHQNCihYRU4p
IGZpeGVkIHVwIG5hbWUgZm9yIGkyY0A3MDAwYzQwMCAtPiBpMmMNCihYRU4pIGZpeGVkIHVwIG5h
bWUgZm9yIGkyY211eEA3MCAtPiBpMmNtdXgNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIGkyY0Aw
IC0+IGkyYw0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgaTJjQDEgLT4gaTJjDQooWEVOKSBmaXhl
ZCB1cCBuYW1lIGZvciBpbmEzMjIxeEA0MCAtPiBpbmEzMjIxeA0KKFhFTikgZml4ZWQgdXAgbmFt
ZSBmb3IgY2hhbm5lbEAwIC0+IGNoYW5uZWwNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIGNoYW5u
ZWxAMSAtPiBjaGFubmVsDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBjaGFubmVsQDIgLT4gY2hh
bm5lbA0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgaW5hMzIyMXhANDEgLT4gaW5hMzIyMXgNCihY
RU4pIGZpeGVkIHVwIG5hbWUgZm9yIGNoYW5uZWxAMCAtPiBjaGFubmVsDQooWEVOKSBmaXhlZCB1
cCBuYW1lIGZvciBjaGFubmVsQDEgLT4gY2hhbm5lbA0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3Ig
Y2hhbm5lbEAyIC0+IGNoYW5uZWwNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIGluYTMyMjF4QDQy
IC0+IGluYTMyMjF4DQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBjaGFubmVsQDAgLT4gY2hhbm5l
bA0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgY2hhbm5lbEAxIC0+IGNoYW5uZWwNCihYRU4pIGZp
eGVkIHVwIG5hbWUgZm9yIGNoYW5uZWxAMiAtPiBjaGFubmVsDQooWEVOKSBmaXhlZCB1cCBuYW1l
IGZvciBpMmNAMiAtPiBpMmMNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIGkyY0AzIC0+IGkyYw0K
KFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgcnQ1NjU5LjEyLTAwMWFAMWEgLT4gcnQ1NjU5LjEyLTAw
MWENCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIGdwaW9AMjAgLT4gZ3Bpbw0KKFhFTikgZml4ZWQg
dXAgbmFtZSBmb3IgaWNtMjA2MjhANjggLT4gaWNtMjA2MjgNCihYRU4pIGZpeGVkIHVwIG5hbWUg
Zm9yIGFrODk2M0AwZCAtPiBhazg5NjMNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIGNtMzIxODBA
NDggLT4gY20zMjE4MA0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgaXFzMjYzQDQ0IC0+IGlxczI2
Mw0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgcnQ1NjU5LjEtMDAxYUAxYSAtPiBydDU2NTkuMS0w
MDFhDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBpMmNANzAwMGM1MDAgLT4gaTJjDQooWEVOKSBm
aXhlZCB1cCBuYW1lIGZvciBiYXR0ZXJ5LWNoYXJnZXJANmIgLT4gYmF0dGVyeS1jaGFyZ2VyDQoo
WEVOKSBmaXhlZCB1cCBuYW1lIGZvciBpMmNANzAwMGM3MDAgLT4gaTJjDQooWEVOKSBmaXhlZCB1
cCBuYW1lIGZvciBpMmNANzAwMGQwMDAgLT4gaTJjDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBt
YXg3NzYyMEAzYyAtPiBtYXg3NzYyMA0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgcGlubXV4QDAg
LT4gcGlubXV4DQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBwaW5fZ3BpbzAgLT4gcGluX2dwaW8w
DQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBwaW5fZ3BpbzEgLT4gcGluX2dwaW8xDQooWEVOKSBm
aXhlZCB1cCBuYW1lIGZvciBwaW5fZ3BpbzIgLT4gcGluX2dwaW8yDQooWEVOKSBmaXhlZCB1cCBu
YW1lIGZvciBwaW5fZ3BpbzMgLT4gcGluX2dwaW8zDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBw
aW5fZ3BpbzJfMyAtPiBwaW5fZ3BpbzJfMw0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgcGluX2dw
aW80IC0+IHBpbl9ncGlvNA0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgcGluX2dwaW81XzZfNyAt
PiBwaW5fZ3BpbzVfNl83DQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBzcG1pYy1kZWZhdWx0LW91
dHB1dC1oaWdoIC0+IHNwbWljLWRlZmF1bHQtb3V0cHV0LWhpZ2gNCihYRU4pIGZpeGVkIHVwIG5h
bWUgZm9yIHdhdGNoZG9nIC0+IHdhdGNoZG9nDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBmcHMg
LT4gZnBzDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBmcHMwIC0+IGZwczANCihYRU4pIGZpeGVk
IHVwIG5hbWUgZm9yIGZwczEgLT4gZnBzMQ0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgZnBzMiAt
PiBmcHMyDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBiYWNrdXAtYmF0dGVyeSAtPiBiYWNrdXAt
YmF0dGVyeQ0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgcmVndWxhdG9ycyAtPiByZWd1bGF0b3Jz
DQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBzZDAgLT4gc2QwDQooWEVOKSBmaXhlZCB1cCBuYW1l
IGZvciBzZDEgLT4gc2QxDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBzZDIgLT4gc2QyDQooWEVO
KSBmaXhlZCB1cCBuYW1lIGZvciBzZDMgLT4gc2QzDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBs
ZG8wIC0+IGxkbzANCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIGxkbzEgLT4gbGRvMQ0KKFhFTikg
Zml4ZWQgdXAgbmFtZSBmb3IgbGRvMiAtPiBsZG8yDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBs
ZG8zIC0+IGxkbzMNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIGxkbzQgLT4gbGRvNA0KKFhFTikg
Zml4ZWQgdXAgbmFtZSBmb3IgbGRvNSAtPiBsZG81DQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBs
ZG82IC0+IGxkbzYNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIGxkbzcgLT4gbGRvNw0KKFhFTikg
Zml4ZWQgdXAgbmFtZSBmb3IgbGRvOCAtPiBsZG84DQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBs
b3ctYmF0dGVyeS1tb25pdG9yIC0+IGxvdy1iYXR0ZXJ5LW1vbml0b3INCihYRU4pIGZpeGVkIHVw
IG5hbWUgZm9yIGkyY0A3MDAwZDEwMCAtPiBpMmMNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHNk
aGNpQDcwMGIwNjAwIC0+IHNkaGNpDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBwcm9kLXNldHRp
bmdzIC0+IHByb2Qtc2V0dGluZ3MNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHByb2RfY19kcyAt
PiBwcm9kX2NfZHMNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHByb2RfY19ocyAtPiBwcm9kX2Nf
aHMNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHByb2RfY19kZHI1MiAtPiBwcm9kX2NfZGRyNTIN
CihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHByb2RfY19oczIwMCAtPiBwcm9kX2NfaHMyMDANCihY
RU4pIGZpeGVkIHVwIG5hbWUgZm9yIHByb2RfY19oczQwMCAtPiBwcm9kX2NfaHM0MDANCihYRU4p
IGZpeGVkIHVwIG5hbWUgZm9yIHByb2RfY19oczUzMyAtPiBwcm9kX2NfaHM1MzMNCihYRU4pIGZp
eGVkIHVwIG5hbWUgZm9yIHByb2QgLT4gcHJvZA0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3Igc2Ro
Y2lANzAwYjA0MDAgLT4gc2RoY2kNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHByb2Qtc2V0dGlu
Z3MgLT4gcHJvZC1zZXR0aW5ncw0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgcHJvZF9jX2RzIC0+
IHByb2RfY19kcw0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgcHJvZF9jX2hzIC0+IHByb2RfY19o
cw0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgcHJvZF9jX3NkcjEyIC0+IHByb2RfY19zZHIxMg0K
KFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgcHJvZF9jX3NkcjI1IC0+IHByb2RfY19zZHIyNQ0KKFhF
TikgZml4ZWQgdXAgbmFtZSBmb3IgcHJvZF9jX3NkcjUwIC0+IHByb2RfY19zZHI1MA0KKFhFTikg
Zml4ZWQgdXAgbmFtZSBmb3IgcHJvZF9jX3NkcjEwNCAtPiBwcm9kX2Nfc2RyMTA0DQooWEVOKSBm
aXhlZCB1cCBuYW1lIGZvciBwcm9kX2NfZGRyNTIgLT4gcHJvZF9jX2RkcjUyDQooWEVOKSBmaXhl
ZCB1cCBuYW1lIGZvciBwcm9kIC0+IHByb2QNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHNkaGNp
QDcwMGIwMjAwIC0+IHNkaGNpDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBwcm9kLXNldHRpbmdz
IC0+IHByb2Qtc2V0dGluZ3MNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHByb2RfY19kcyAtPiBw
cm9kX2NfZHMNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHByb2RfY19ocyAtPiBwcm9kX2NfaHMN
CihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHByb2RfY19zZHIxMiAtPiBwcm9kX2Nfc2RyMTINCihY
RU4pIGZpeGVkIHVwIG5hbWUgZm9yIHByb2RfY19zZHIyNSAtPiBwcm9kX2Nfc2RyMjUNCihYRU4p
IGZpeGVkIHVwIG5hbWUgZm9yIHByb2RfY19zZHI1MCAtPiBwcm9kX2Nfc2RyNTANCihYRU4pIGZp
eGVkIHVwIG5hbWUgZm9yIHByb2RfY19zZHIxMDQgLT4gcHJvZF9jX3NkcjEwNA0KKFhFTikgZml4
ZWQgdXAgbmFtZSBmb3IgcHJvZF9jX2RkcjUyIC0+IHByb2RfY19kZHI1Mg0KKFhFTikgZml4ZWQg
dXAgbmFtZSBmb3IgcHJvZF9jX2hzMjAwIC0+IHByb2RfY19oczIwMA0KKFhFTikgZml4ZWQgdXAg
bmFtZSBmb3IgcHJvZF9jX2hzNDAwIC0+IHByb2RfY19oczQwMA0KKFhFTikgZml4ZWQgdXAgbmFt
ZSBmb3IgcHJvZF9jX2hzNTMzIC0+IHByb2RfY19oczUzMw0KKFhFTikgZml4ZWQgdXAgbmFtZSBm
b3IgcHJvZCAtPiBwcm9kDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBzZGhjaUA3MDBiMDAwMCAt
PiBzZGhjaQ0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgcHJvZC1zZXR0aW5ncyAtPiBwcm9kLXNl
dHRpbmdzDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBwcm9kX2NfZHMgLT4gcHJvZF9jX2RzDQoo
WEVOKSBmaXhlZCB1cCBuYW1lIGZvciBwcm9kX2NfaHMgLT4gcHJvZF9jX2hzDQooWEVOKSBmaXhl
ZCB1cCBuYW1lIGZvciBwcm9kX2Nfc2RyMTIgLT4gcHJvZF9jX3NkcjEyDQooWEVOKSBmaXhlZCB1
cCBuYW1lIGZvciBwcm9kX2Nfc2RyMjUgLT4gcHJvZF9jX3NkcjI1DQooWEVOKSBmaXhlZCB1cCBu
YW1lIGZvciBwcm9kX2Nfc2RyNTAgLT4gcHJvZF9jX3NkcjUwDQooWEVOKSBmaXhlZCB1cCBuYW1l
IGZvciBwcm9kX2Nfc2RyMTA0IC0+IHByb2RfY19zZHIxMDQNCihYRU4pIGZpeGVkIHVwIG5hbWUg
Zm9yIHByb2RfY19kZHI1MiAtPiBwcm9kX2NfZGRyNTINCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9y
IHByb2QgLT4gcHJvZA0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgZWZ1c2VANzAwMGY4MDAgLT4g
ZWZ1c2UNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIGVmdXNlLWJ1cm4gLT4gZWZ1c2UtYnVybg0K
KFhFTikgZml4ZWQgdXAgbmFtZSBmb3Iga2Z1c2VANzAwMGZjMDAgLT4ga2Z1c2UNCihYRU4pIGZp
eGVkIHVwIG5hbWUgZm9yIHBtYy1pb3Bvd2VyIC0+IHBtYy1pb3Bvd2VyDQooWEVOKSBmaXhlZCB1
cCBuYW1lIGZvciBkdHZANzAwMGMzMDAgLT4gZHR2DQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciB4
dWRjQDcwMGQwMDAwIC0+IHh1ZGMNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIG1lbW9yeS1jb250
cm9sbGVyQDcwMDE5MDAwIC0+IG1lbW9yeS1jb250cm9sbGVyDQooWEVOKSBmaXhlZCB1cCBuYW1l
IGZvciBwd21ANzAxMTAwMDAgLT4gcHdtDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBjbG9ja0A3
MDExMDAwMCAtPiBjbG9jaw0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3Igc29jdGhlcm1AMHg3MDBF
MjAwMCAtPiBzb2N0aGVybQ0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgdGhyb3R0bGUtY2ZncyAt
PiB0aHJvdHRsZS1jZmdzDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBoZWF2eSAtPiBoZWF2eQ0K
KFhFTikgZml4ZWQgdXAgbmFtZSBmb3Igb2MxIC0+IG9jMQ0KKFhFTikgZml4ZWQgdXAgbmFtZSBm
b3Igb2MzIC0+IG9jMw0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgZnVzZV93YXJAZnVzZV9yZXZf
MF8xIC0+IGZ1c2Vfd2FyDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBmdXNlX3dhckBmdXNlX3Jl
dl8yIC0+IGZ1c2Vfd2FyDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciB0aHJvdHRsZUBjcml0aWNh
bCAtPiB0aHJvdHRsZQ0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgdGhyb3R0bGVAaGVhdnkgLT4g
dGhyb3R0bGUNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHRocm90dGxlX2RldkBjcHVfaGlnaCAt
PiB0aHJvdHRsZV9kZXYNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHRocm90dGxlX2RldkBncHVf
aGlnaCAtPiB0aHJvdHRsZV9kZXYNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHRlZ3JhLWFvdGFn
IC0+IHRlZ3JhLWFvdGFnDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciB0ZWdyYV9jZWMgLT4gdGVn
cmFfY2VjDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciB3YXRjaGRvZ0A2MDAwNTEwMCAtPiB3YXRj
aGRvZw0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgdGVncmFfZmlxX2RlYnVnZ2VyIC0+IHRlZ3Jh
X2ZpcV9kZWJ1Z2dlcg0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgcHRtIC0+IHB0bQ0KKFhFTikg
Zml4ZWQgdXAgbmFtZSBmb3IgbXNlbGVjdCAtPiBtc2VsZWN0DQooWEVOKSBmaXhlZCB1cCBuYW1l
IGZvciBjcHVpZGxlIC0+IGNwdWlkbGUNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIGFwYm1pc2NA
NzAwMDA4MDAgLT4gYXBibWlzYw0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgbnZkdW1wZXIgLT4g
bnZkdW1wZXINCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHRlZ3JhLXBtYy1ibGluay1wd20gLT4g
dGVncmEtcG1jLWJsaW5rLXB3bQ0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgbnZwbW9kZWwgLT4g
bnZwbW9kZWwNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIGV4dGNvbiAtPiBleHRjb24NCihYRU4p
IGZpeGVkIHVwIG5hbWUgZm9yIGRpc3Atc3RhdGUgLT4gZGlzcC1zdGF0ZQ0KKFhFTikgZml4ZWQg
dXAgbmFtZSBmb3IgZXh0Y29uQDAgLT4gZXh0Y29uDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBl
eHRjb25AMSAtPiBleHRjb24NCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIGJ0aHJvdF9jZGV2IC0+
IGJ0aHJvdF9jZGV2DQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBza2luX2JhbGFuY2VkIC0+IHNr
aW5fYmFsYW5jZWQNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIGdwdV9iYWxhbmNlZCAtPiBncHVf
YmFsYW5jZWQNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIGNwdV9iYWxhbmNlZCAtPiBjcHVfYmFs
YW5jZWQNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIGVtZXJnZW5jeV9iYWxhbmNlZCAtPiBlbWVy
Z2VuY3lfYmFsYW5jZWQNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIGFnaWMtY29udHJvbGxlciAt
PiBhZ2ljLWNvbnRyb2xsZXINCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIGFkbWFANzAyZTIwMDAg
LT4gYWRtYQ0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgYWh1YiAtPiBhaHViDQooWEVOKSBmaXhl
ZCB1cCBuYW1lIGZvciBhZG1haWZAMHg3MDJkMDAwMCAtPiBhZG1haWYNCihYRU4pIGZpeGVkIHVw
IG5hbWUgZm9yIHNmY0A3MDJkMjAwMCAtPiBzZmMNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHNm
Y0A3MDJkMjIwMCAtPiBzZmMNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHNmY0A3MDJkMjQwMCAt
PiBzZmMNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHNmY0A3MDJkMjYwMCAtPiBzZmMNCihYRU4p
IGZpeGVkIHVwIG5hbWUgZm9yIHNwa3Byb3RANzAyZDhjMDAgLT4gc3BrcHJvdA0KKFhFTikgZml4
ZWQgdXAgbmFtZSBmb3IgYW1peGVyQDcwMmRiYjAwIC0+IGFtaXhlcg0KKFhFTikgZml4ZWQgdXAg
bmFtZSBmb3IgaTJzQDcwMmQxMDAwIC0+IGkycw0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgaTJz
QDcwMmQxMTAwIC0+IGkycw0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgaTJzQDcwMmQxMjAwIC0+
IGkycw0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgaTJzQDcwMmQxMzAwIC0+IGkycw0KKFhFTikg
Zml4ZWQgdXAgbmFtZSBmb3IgaTJzQDcwMmQxNDAwIC0+IGkycw0KKFhFTikgZml4ZWQgdXAgbmFt
ZSBmb3IgYW14QDcwMmQzMDAwIC0+IGFteA0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgYW14QDcw
MmQzMTAwIC0+IGFteA0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgYWR4QDcwMmQzODAwIC0+IGFk
eA0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgYWR4QDcwMmQzOTAwIC0+IGFkeA0KKFhFTikgZml4
ZWQgdXAgbmFtZSBmb3IgZG1pY0A3MDJkNDAwMCAtPiBkbWljDQooWEVOKSBmaXhlZCB1cCBuYW1l
IGZvciBkbWljQDcwMmQ0MTAwIC0+IGRtaWMNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIGRtaWNA
NzAyZDQyMDAgLT4gZG1pYw0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgYWZjQDcwMmQ3MDAwIC0+
IGFmYw0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgYWZjQDcwMmQ3MTAwIC0+IGFmYw0KKFhFTikg
Zml4ZWQgdXAgbmFtZSBmb3IgYWZjQDcwMmQ3MjAwIC0+IGFmYw0KKFhFTikgZml4ZWQgdXAgbmFt
ZSBmb3IgYWZjQDcwMmQ3MzAwIC0+IGFmYw0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgYWZjQDcw
MmQ3NDAwIC0+IGFmYw0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgYWZjQDcwMmQ3NTAwIC0+IGFm
Yw0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgbXZjQDcwMmRhMDAwIC0+IG12Yw0KKFhFTikgZml4
ZWQgdXAgbmFtZSBmb3IgbXZjQDcwMmRhMjAwIC0+IG12Yw0KKFhFTikgZml4ZWQgdXAgbmFtZSBm
b3IgaXFjQDcwMmRlMDAwIC0+IGlxYw0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgaXFjQDcwMmRl
MjAwIC0+IGlxYw0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3Igb3BlQDcwMmQ4MDAwIC0+IG9wZQ0K
KFhFTikgZml4ZWQgdXAgbmFtZSBmb3Igb3BlQDcwMmQ4NDAwIC0+IG9wZQ0KKFhFTikgZml4ZWQg
dXAgbmFtZSBmb3IgYWRzcF9hdWRpbyAtPiBhZHNwX2F1ZGlvDQooWEVOKSBmaXhlZCB1cCBuYW1l
IGZvciBzYXRhQDcwMDIwMDAwIC0+IHNhdGENCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHByb2Qt
c2V0dGluZ3MgLT4gcHJvZC1zZXR0aW5ncw0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgcHJvZCAt
PiBwcm9kDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBtb2RlbSAtPiBtb2RlbQ0KKFhFTikgZml4
ZWQgdXAgbmFtZSBmb3IgbnZpZGlhLHBoeS1laGNpLWhzaWMgLT4gbnZpZGlhLHBoeS1laGNpLWhz
aWMNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIG52aWRpYSxwaHkteGhjaS1oc2ljIC0+IG52aWRp
YSxwaHkteGhjaS1oc2ljDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBudmlkaWEscGh5LXhoY2kt
dXRtaSAtPiBudmlkaWEscGh5LXhoY2ktdXRtaQ0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgdHJ1
c3R5IC0+IHRydXN0eQ0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgaXJxIC0+IGlycQ0KKFhFTikg
Zml4ZWQgdXAgbmFtZSBmb3IgZmlxIC0+IGZpcQ0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3Igdmly
dGlvIC0+IHZpcnRpbw0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgbG9nIC0+IGxvZw0KKFhFTikg
Zml4ZWQgdXAgbmFtZSBmb3Igc21wLWN1c3RvbS1pcGkgLT4gc21wLWN1c3RvbS1pcGkNCihYRU4p
IGZpeGVkIHVwIG5hbWUgZm9yIHBzeV9leHRjb25feHVkYyAtPiBwc3lfZXh0Y29uX3h1ZGMNCihY
RU4pIGZpeGVkIHVwIG5hbWUgZm9yIHRlZ3JhLXN1cHBseS10ZXN0cyAtPiB0ZWdyYS1zdXBwbHkt
dGVzdHMNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIGNhbWVyYS1wY2wgLT4gY2FtZXJhLXBjbA0K
KFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgZHBkIC0+IGRwZA0KKFhFTikgZml4ZWQgdXAgbmFtZSBm
b3IgY3NpYSAtPiBjc2lhDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBjc2liIC0+IGNzaWINCihY
RU4pIGZpeGVkIHVwIG5hbWUgZm9yIGNzaWMgLT4gY3NpYw0KKFhFTikgZml4ZWQgdXAgbmFtZSBm
b3IgY3NpZCAtPiBjc2lkDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBjc2llIC0+IGNzaWUNCihY
RU4pIGZpeGVkIHVwIG5hbWUgZm9yIGNzaWYgLT4gY3NpZg0KKFhFTikgZml4ZWQgdXAgbmFtZSBm
b3Igcm9sbGJhY2stcHJvdGVjdGlvbiAtPiByb2xsYmFjay1wcm90ZWN0aW9uDQooWEVOKSBmaXhl
ZCB1cCBuYW1lIGZvciBleHRlcm5hbC1tZW1vcnktY29udHJvbGxlckA3MDAxYjAwMCAtPiBleHRl
cm5hbC1tZW1vcnktY29udHJvbGxlcg0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgZW1jLXRhYmxl
QDAgLT4gZW1jLXRhYmxlDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBlbWMtdGFibGVAMjA0MDAw
IC0+IGVtYy10YWJsZQ0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgZW1jLXRhYmxlQDE2MDAwMDAg
LT4gZW1jLXRhYmxlDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBlbWMtdGFibGUtZGVyYXRlZEAy
MDQwMDAgLT4gZW1jLXRhYmxlLWRlcmF0ZWQNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIGVtYy10
YWJsZS1kZXJhdGVkQDE2MDAwMDAgLT4gZW1jLXRhYmxlLWRlcmF0ZWQNCihYRU4pIGZpeGVkIHVw
IG5hbWUgZm9yIGR1bW15LWNvb2wtZGV2IC0+IGR1bW15LWNvb2wtZGV2DQooWEVOKSBmaXhlZCB1
cCBuYW1lIGZvciByZWd1bGF0b3JzIC0+IHJlZ3VsYXRvcnMNCihYRU4pIGZpeGVkIHVwIG5hbWUg
Zm9yIHJlZ3VsYXRvckAwIC0+IHJlZ3VsYXRvcg0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgcmVn
dWxhdG9yQDEgLT4gcmVndWxhdG9yDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciByZWd1bGF0b3JA
MiAtPiByZWd1bGF0b3INCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHJlZ3VsYXRvckAzIC0+IHJl
Z3VsYXRvcg0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgcmVndWxhdG9yQDQgLT4gcmVndWxhdG9y
DQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciByZWd1bGF0b3JANSAtPiByZWd1bGF0b3INCihYRU4p
IGZpeGVkIHVwIG5hbWUgZm9yIHJlZ3VsYXRvckA2IC0+IHJlZ3VsYXRvcg0KKFhFTikgZml4ZWQg
dXAgbmFtZSBmb3IgcmVndWxhdG9yQDcgLT4gcmVndWxhdG9yDQooWEVOKSBmaXhlZCB1cCBuYW1l
IGZvciByZWd1bGF0b3JAOCAtPiByZWd1bGF0b3INCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHJl
Z3VsYXRvckA5IC0+IHJlZ3VsYXRvcg0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgcmVndWxhdG9y
QDEwIC0+IHJlZ3VsYXRvcg0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgcHdtLWZhbiAtPiBwd20t
ZmFuDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBkdmZzX3JhaWxzIC0+IGR2ZnNfcmFpbHMNCihY
RU4pIGZpeGVkIHVwIG5hbWUgZm9yIHZkZC1ncHUtc2NhbGluZy1jZGV2QDcgLT4gdmRkLWdwdS1z
Y2FsaW5nLWNkZXYNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHZkZC1ncHUtdm1heC1jZGV2QDkg
LT4gdmRkLWdwdS12bWF4LWNkZXYNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHBmc2QgLT4gcGZz
ZA0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgdGVncmEtY2FtZXJhLXBsYXRmb3JtIC0+IHRlZ3Jh
LWNhbWVyYS1wbGF0Zm9ybQ0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgbW9kdWxlcyAtPiBtb2R1
bGVzDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBtb2R1bGUwIC0+IG1vZHVsZTANCihYRU4pIGZp
eGVkIHVwIG5hbWUgZm9yIGRyaXZlcm5vZGUwIC0+IGRyaXZlcm5vZGUwDQooWEVOKSBmaXhlZCB1
cCBuYW1lIGZvciBkcml2ZXJub2RlMSAtPiBkcml2ZXJub2RlMQ0KKFhFTikgZml4ZWQgdXAgbmFt
ZSBmb3IgbW9kdWxlMSAtPiBtb2R1bGUxDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBkcml2ZXJu
b2RlMCAtPiBkcml2ZXJub2RlMA0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgZHJpdmVybm9kZTEg
LT4gZHJpdmVybm9kZTENCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIGxlbnNfaW14MjE5QFJCUENW
MiAtPiBsZW5zX2lteDIxOQ0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgY2FtX2kyY211eCAtPiBj
YW1faTJjbXV4DQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBpMmNAMCAtPiBpMmMNCihYRU4pIGZp
eGVkIHVwIG5hbWUgZm9yIHJicGN2Ml9pbXgyMTlfYUAxMCAtPiByYnBjdjJfaW14MjE5X2ENCihY
RU4pIGZpeGVkIHVwIG5hbWUgZm9yIG1vZGUwIC0+IG1vZGUwDQooWEVOKSBmaXhlZCB1cCBuYW1l
IGZvciBtb2RlMSAtPiBtb2RlMQ0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgbW9kZTIgLT4gbW9k
ZTINCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIG1vZGUzIC0+IG1vZGUzDQooWEVOKSBmaXhlZCB1
cCBuYW1lIGZvciBtb2RlNCAtPiBtb2RlNA0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgcG9ydHMg
LT4gcG9ydHMNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHBvcnRAMCAtPiBwb3J0DQooWEVOKSBm
aXhlZCB1cCBuYW1lIGZvciBlbmRwb2ludCAtPiBlbmRwb2ludA0KKFhFTikgZml4ZWQgdXAgbmFt
ZSBmb3IgaTJjQDEgLT4gaTJjDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciByYnBjdjJfaW14MjE5
X2VAMTAgLT4gcmJwY3YyX2lteDIxOV9lDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBtb2RlMCAt
PiBtb2RlMA0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgbW9kZTEgLT4gbW9kZTENCihYRU4pIGZp
eGVkIHVwIG5hbWUgZm9yIG1vZGUyIC0+IG1vZGUyDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBt
b2RlMyAtPiBtb2RlMw0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgbW9kZTQgLT4gbW9kZTQNCihY
RU4pIGZpeGVkIHVwIG5hbWUgZm9yIHBvcnRzIC0+IHBvcnRzDQooWEVOKSBmaXhlZCB1cCBuYW1l
IGZvciBwb3J0QDAgLT4gcG9ydA0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgZW5kcG9pbnQgLT4g
ZW5kcG9pbnQNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHRmZXNkIC0+IHRmZXNkDQooWEVOKSBm
aXhlZCB1cCBuYW1lIGZvciBkZXYxIC0+IGRldjENCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIGRl
djIgLT4gZGV2Mg0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgdGhlcm1hbC1mYW4tZXN0IC0+IHRo
ZXJtYWwtZmFuLWVzdA0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgZ3Bpby1rZXlzIC0+IGdwaW8t
a2V5cw0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgcG93ZXIgLT4gcG93ZXINCihYRU4pIGZpeGVk
IHVwIG5hbWUgZm9yIGZvcmNlcmVjb3ZlcnkgLT4gZm9yY2VyZWNvdmVyeQ0KKFhFTikgZml4ZWQg
dXAgbmFtZSBmb3IgZ3Bpby10aW1lZC1rZXlzIC0+IGdwaW8tdGltZWQta2V5cw0KKFhFTikgZml4
ZWQgdXAgbmFtZSBmb3IgcG93ZXIgLT4gcG93ZXINCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHNw
ZGlmLWRpdC4wQDAgLT4gc3BkaWYtZGl0LjANCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHNwZGlm
LWRpdC4xQDEgLT4gc3BkaWYtZGl0LjENCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHNwZGlmLWRp
dC4yQDIgLT4gc3BkaWYtZGl0LjINCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHNwZGlmLWRpdC4z
QDMgLT4gc3BkaWYtZGl0LjMNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHNwZGlmLWRpdC40QDQg
LT4gc3BkaWYtZGl0LjQNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHNwZGlmLWRpdC41QDUgLT4g
c3BkaWYtZGl0LjUNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHNwZGlmLWRpdC42QDYgLT4gc3Bk
aWYtZGl0LjYNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHNwZGlmLWRpdC43QDcgLT4gc3BkaWYt
ZGl0LjcNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIGNwdWZyZXEgLT4gY3B1ZnJlcQ0KKFhFTikg
Zml4ZWQgdXAgbmFtZSBmb3IgY3B1LXNjYWxpbmctZGF0YSAtPiBjcHUtc2NhbGluZy1kYXRhDQoo
WEVOKSBmaXhlZCB1cCBuYW1lIGZvciBlbWMtc2NhbGluZy1kYXRhIC0+IGVtYy1zY2FsaW5nLWRh
dGENCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIGVlcHJvbS1tYW5hZ2VyIC0+IGVlcHJvbS1tYW5h
Z2VyDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBidXNAMCAtPiBidXMNCihYRU4pIGZpeGVkIHVw
IG5hbWUgZm9yIGJ1c0AxIC0+IGJ1cw0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgcGx1Z2luLW1h
bmFnZXIgLT4gcGx1Z2luLW1hbmFnZXINCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIGZyYWdlbWVu
dEAwIC0+IGZyYWdlbWVudA0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3Igb3ZlcnJpZGVAMCAtPiBv
dmVycmlkZQ0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgX292ZXJsYXlfIC0+IF9vdmVybGF5Xw0K
KFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgY2hhbm5lbEAwIC0+IGNoYW5uZWwNCihYRU4pIGZpeGVk
IHVwIG5hbWUgZm9yIGNoYW5uZWxAMSAtPiBjaGFubmVsDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZv
ciBmcmFnbWVudEAxIC0+IGZyYWdtZW50DQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBvdmVycmlk
ZUAwIC0+IG92ZXJyaWRlDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBfb3ZlcmxheV8gLT4gX292
ZXJsYXlfDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBmcmFnbWVudEAyIC0+IGZyYWdtZW50DQoo
WEVOKSBmaXhlZCB1cCBuYW1lIGZvciBvdmVycmlkZUAwIC0+IG92ZXJyaWRlDQooWEVOKSBmaXhl
ZCB1cCBuYW1lIGZvciBfb3ZlcmxheV8gLT4gX292ZXJsYXlfDQooWEVOKSBmaXhlZCB1cCBuYW1l
IGZvciBvdmVycmlkZUAxIC0+IG92ZXJyaWRlDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBfb3Zl
cmxheV8gLT4gX292ZXJsYXlfDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBvdmVycmlkZUAyIC0+
IG92ZXJyaWRlDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBfb3ZlcmxheV8gLT4gX292ZXJsYXlf
DQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBudmlkaWEsZGFpLWxpbmstMSAtPiBudmlkaWEsZGFp
LWxpbmstMQ0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgZnJhZ21lbnRAMyAtPiBmcmFnbWVudA0K
KFhFTikgZml4ZWQgdXAgbmFtZSBmb3Igb3ZlcnJpZGVAMCAtPiBvdmVycmlkZQ0KKFhFTikgZml4
ZWQgdXAgbmFtZSBmb3IgX292ZXJsYXlfIC0+IF9vdmVybGF5Xw0KKFhFTikgZml4ZWQgdXAgbmFt
ZSBmb3Igb3ZlcnJpZGVAMSAtPiBvdmVycmlkZQ0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgX292
ZXJsYXlfIC0+IF9vdmVybGF5Xw0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgZnJhZ21lbnRANCAt
PiBmcmFnbWVudA0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3Igb3ZlcnJpZGVAMCAtPiBvdmVycmlk
ZQ0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgX292ZXJsYXlfIC0+IF9vdmVybGF5Xw0KKFhFTikg
Zml4ZWQgdXAgbmFtZSBmb3Igb3ZlcnJpZGVAMSAtPiBvdmVycmlkZQ0KKFhFTikgZml4ZWQgdXAg
bmFtZSBmb3IgX292ZXJsYXlfIC0+IF9vdmVybGF5Xw0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3Ig
b3ZlcnJpZGVAMiAtPiBvdmVycmlkZQ0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgX292ZXJsYXlf
IC0+IF9vdmVybGF5Xw0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgZnJhZ21lbnRANSAtPiBmcmFn
bWVudA0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3Igb3ZlcnJpZGVAMCAtPiBvdmVycmlkZQ0KKFhF
TikgZml4ZWQgdXAgbmFtZSBmb3IgX292ZXJsYXlfIC0+IF9vdmVybGF5Xw0KKFhFTikgZml4ZWQg
dXAgbmFtZSBmb3Igb3ZlcnJpZGVAMSAtPiBvdmVycmlkZQ0KKFhFTikgZml4ZWQgdXAgbmFtZSBm
b3IgX292ZXJsYXlfIC0+IF9vdmVybGF5Xw0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3Igb3ZlcnJp
ZGVAMiAtPiBvdmVycmlkZQ0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgX292ZXJsYXlfIC0+IF9v
dmVybGF5Xw0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgZnJhZ2VtZW50QDYgLT4gZnJhZ2VtZW50
DQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBvdmVycmlkZUAwIC0+IG92ZXJyaWRlDQooWEVOKSBm
aXhlZCB1cCBuYW1lIGZvciBfb3ZlcmxheV8gLT4gX292ZXJsYXlfDQooWEVOKSBmaXhlZCB1cCBu
YW1lIGZvciBmcmFnZW1lbnRANyAtPiBmcmFnZW1lbnQNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9y
IG92ZXJyaWRlQDAgLT4gb3ZlcnJpZGUNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIF9vdmVybGF5
XyAtPiBfb3ZlcmxheV8NCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIGZyYWdlbWVudEA4IC0+IGZy
YWdlbWVudA0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3Igb3ZlcnJpZGVAMCAtPiBvdmVycmlkZQ0K
KFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgX292ZXJsYXlfIC0+IF9vdmVybGF5Xw0KKFhFTikgZml4
ZWQgdXAgbmFtZSBmb3IgZnJhZ2VtZW50QDkgLT4gZnJhZ2VtZW50DQooWEVOKSBmaXhlZCB1cCBu
YW1lIGZvciBvdmVycmlkZUAwIC0+IG92ZXJyaWRlDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBf
b3ZlcmxheV8gLT4gX292ZXJsYXlfDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBvdmVycmlkZUAx
IC0+IG92ZXJyaWRlDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBfb3ZlcmxheV8gLT4gX292ZXJs
YXlfDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBvdmVycmlkZUAyIC0+IG92ZXJyaWRlDQooWEVO
KSBmaXhlZCB1cCBuYW1lIGZvciBfb3ZlcmxheV8gLT4gX292ZXJsYXlfDQooWEVOKSBmaXhlZCB1
cCBuYW1lIGZvciBvdmVycmlkZUAzIC0+IG92ZXJyaWRlDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZv
ciBfb3ZlcmxheV8gLT4gX292ZXJsYXlfDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBvdmVycmlk
ZUA0IC0+IG92ZXJyaWRlDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBfb3ZlcmxheV8gLT4gX292
ZXJsYXlfDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBvdmVycmlkZUA1IC0+IG92ZXJyaWRlDQoo
WEVOKSBmaXhlZCB1cCBuYW1lIGZvciBfb3ZlcmxheV8gLT4gX292ZXJsYXlfDQooWEVOKSBmaXhl
ZCB1cCBuYW1lIGZvciBvdmVycmlkZUA2IC0+IG92ZXJyaWRlDQooWEVOKSBmaXhlZCB1cCBuYW1l
IGZvciBfb3ZlcmxheV8gLT4gX292ZXJsYXlfDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBvdmVy
cmlkZUA3IC0+IG92ZXJyaWRlDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBfb3ZlcmxheV8gLT4g
X292ZXJsYXlfDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBvdmVycmlkZUA4IC0+IG92ZXJyaWRl
DQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBfb3ZlcmxheV8gLT4gX292ZXJsYXlfDQooWEVOKSBm
aXhlZCB1cCBuYW1lIGZvciBvdmVycmlkZUA5IC0+IG92ZXJyaWRlDQooWEVOKSBmaXhlZCB1cCBu
YW1lIGZvciBfb3ZlcmxheV8gLT4gX292ZXJsYXlfDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBv
dmVycmlkZUAxMCAtPiBvdmVycmlkZQ0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgX292ZXJsYXlf
IC0+IF9vdmVybGF5Xw0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3Igb3ZlcnJpZGVAMTEgLT4gb3Zl
cnJpZGUNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIF9vdmVybGF5XyAtPiBfb3ZlcmxheV8NCihY
RU4pIGZpeGVkIHVwIG5hbWUgZm9yIG92ZXJyaWRlQDEyIC0+IG92ZXJyaWRlDQooWEVOKSBmaXhl
ZCB1cCBuYW1lIGZvciBfb3ZlcmxheV8gLT4gX292ZXJsYXlfDQooWEVOKSBmaXhlZCB1cCBuYW1l
IGZvciBvdmVycmlkZUAxMyAtPiBvdmVycmlkZQ0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgX292
ZXJsYXlfIC0+IF9vdmVybGF5Xw0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3Igb3ZlcnJpZGVAMTQg
LT4gb3ZlcnJpZGUNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIF9vdmVybGF5XyAtPiBfb3Zlcmxh
eV8NCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIG92ZXJyaWRlQDE1IC0+IG92ZXJyaWRlDQooWEVO
KSBmaXhlZCB1cCBuYW1lIGZvciBfb3ZlcmxheV8gLT4gX292ZXJsYXlfDQooWEVOKSBmaXhlZCB1
cCBuYW1lIGZvciBvdmVycmlkZUAxNiAtPiBvdmVycmlkZQ0KKFhFTikgZml4ZWQgdXAgbmFtZSBm
b3IgX292ZXJsYXlfIC0+IF9vdmVybGF5Xw0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgZnJhZ2Vt
ZW50QDEwIC0+IGZyYWdlbWVudA0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3Igb3ZlcnJpZGVAMCAt
PiBvdmVycmlkZQ0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgX292ZXJsYXlfIC0+IF9vdmVybGF5
Xw0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3Igb3ZlcnJpZGVAMSAtPiBvdmVycmlkZQ0KKFhFTikg
Zml4ZWQgdXAgbmFtZSBmb3IgX292ZXJsYXlfIC0+IF9vdmVybGF5Xw0KKFhFTikgZml4ZWQgdXAg
bmFtZSBmb3Igb3ZlcnJpZGVAMiAtPiBvdmVycmlkZQ0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3Ig
X292ZXJsYXlfIC0+IF9vdmVybGF5Xw0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3Igb3ZlcnJpZGVA
MyAtPiBvdmVycmlkZQ0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgX292ZXJsYXlfIC0+IF9vdmVy
bGF5Xw0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3Igb3ZlcnJpZGVANCAtPiBvdmVycmlkZQ0KKFhF
TikgZml4ZWQgdXAgbmFtZSBmb3IgX292ZXJsYXlfIC0+IF9vdmVybGF5Xw0KKFhFTikgZml4ZWQg
dXAgbmFtZSBmb3Igb3ZlcnJpZGVANSAtPiBvdmVycmlkZQ0KKFhFTikgZml4ZWQgdXAgbmFtZSBm
b3IgX292ZXJsYXlfIC0+IF9vdmVybGF5Xw0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3Igb3ZlcnJp
ZGVANiAtPiBvdmVycmlkZQ0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgX292ZXJsYXlfIC0+IF9v
dmVybGF5Xw0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3Igb3ZlcnJpZGVANyAtPiBvdmVycmlkZQ0K
KFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgX292ZXJsYXlfIC0+IF9vdmVybGF5Xw0KKFhFTikgZml4
ZWQgdXAgbmFtZSBmb3Igb3ZlcnJpZGVAOCAtPiBvdmVycmlkZQ0KKFhFTikgZml4ZWQgdXAgbmFt
ZSBmb3IgX292ZXJsYXlfIC0+IF9vdmVybGF5Xw0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3Igb3Zl
cnJpZGVAOSAtPiBvdmVycmlkZQ0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgX292ZXJsYXlfIC0+
IF9vdmVybGF5Xw0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3Igb3ZlcnJpZGVAMTAgLT4gb3ZlcnJp
ZGUNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIF9vdmVybGF5XyAtPiBfb3ZlcmxheV8NCihYRU4p
IGZpeGVkIHVwIG5hbWUgZm9yIG92ZXJyaWRlQDExIC0+IG92ZXJyaWRlDQooWEVOKSBmaXhlZCB1
cCBuYW1lIGZvciBfb3ZlcmxheV8gLT4gX292ZXJsYXlfDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZv
ciBvdmVycmlkZUAxMiAtPiBvdmVycmlkZQ0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgX292ZXJs
YXlfIC0+IF9vdmVybGF5Xw0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3Igb3ZlcnJpZGVAMTMgLT4g
b3ZlcnJpZGUNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIF9vdmVybGF5XyAtPiBfb3ZlcmxheV8N
CihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIG92ZXJyaWRlQDE0IC0+IG92ZXJyaWRlDQooWEVOKSBm
aXhlZCB1cCBuYW1lIGZvciBfb3ZlcmxheV8gLT4gX292ZXJsYXlfDQooWEVOKSBmaXhlZCB1cCBu
YW1lIGZvciBvdmVycmlkZUAxNSAtPiBvdmVycmlkZQ0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3Ig
X292ZXJsYXlfIC0+IF9vdmVybGF5Xw0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3Igb3ZlcnJpZGVA
MTYgLT4gb3ZlcnJpZGUNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIF9vdmVybGF5XyAtPiBfb3Zl
cmxheV8NCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIG92ZXJyaWRlQDE3IC0+IG92ZXJyaWRlDQoo
WEVOKSBmaXhlZCB1cCBuYW1lIGZvciBfb3ZlcmxheV8gLT4gX292ZXJsYXlfDQooWEVOKSBmaXhl
ZCB1cCBuYW1lIGZvciBvdmVycmlkZUAxOCAtPiBvdmVycmlkZQ0KKFhFTikgZml4ZWQgdXAgbmFt
ZSBmb3IgX292ZXJsYXlfIC0+IF9vdmVybGF5Xw0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3Igb3Zl
cnJpZGVAMTkgLT4gb3ZlcnJpZGUNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIF9vdmVybGF5XyAt
PiBfb3ZlcmxheV8NCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIG92ZXJyaWRlQDIwIC0+IG92ZXJy
aWRlDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBfb3ZlcmxheV8gLT4gX292ZXJsYXlfDQooWEVO
KSBmaXhlZCB1cCBuYW1lIGZvciBvdmVycmlkZUAyMSAtPiBvdmVycmlkZQ0KKFhFTikgZml4ZWQg
dXAgbmFtZSBmb3IgX292ZXJsYXlfIC0+IF9vdmVybGF5Xw0KKFhFTikgZml4ZWQgdXAgbmFtZSBm
b3Igb3ZlcnJpZGVAMjIgLT4gb3ZlcnJpZGUNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIF9vdmVy
bGF5XyAtPiBfb3ZlcmxheV8NCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIG92ZXJyaWRlQDIzIC0+
IG92ZXJyaWRlDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBfb3ZlcmxheV8gLT4gX292ZXJsYXlf
DQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBvdmVycmlkZUAyNCAtPiBvdmVycmlkZQ0KKFhFTikg
Zml4ZWQgdXAgbmFtZSBmb3IgX292ZXJsYXlfIC0+IF9vdmVybGF5Xw0KKFhFTikgZml4ZWQgdXAg
bmFtZSBmb3Igb3ZlcnJpZGVAMjUgLT4gb3ZlcnJpZGUNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9y
IF9vdmVybGF5XyAtPiBfb3ZlcmxheV8NCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIG92ZXJyaWRl
QDI2IC0+IG92ZXJyaWRlDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBfb3ZlcmxheV8gLT4gX292
ZXJsYXlfDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBvdmVycmlkZUAyNyAtPiBvdmVycmlkZQ0K
KFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgX292ZXJsYXlfIC0+IF9vdmVybGF5Xw0KKFhFTikgZml4
ZWQgdXAgbmFtZSBmb3Igb3ZlcnJpZGVAMjggLT4gb3ZlcnJpZGUNCihYRU4pIGZpeGVkIHVwIG5h
bWUgZm9yIF9vdmVybGF5XyAtPiBfb3ZlcmxheV8NCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIG92
ZXJyaWRlQDI5IC0+IG92ZXJyaWRlDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBfb3ZlcmxheV8g
LT4gX292ZXJsYXlfDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBvdmVycmlkZUAzMCAtPiBvdmVy
cmlkZQ0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgX292ZXJsYXlfIC0+IF9vdmVybGF5Xw0KKFhF
TikgZml4ZWQgdXAgbmFtZSBmb3Igb3ZlcnJpZGVAMzEgLT4gb3ZlcnJpZGUNCihYRU4pIGZpeGVk
IHVwIG5hbWUgZm9yIF9vdmVybGF5XyAtPiBfb3ZlcmxheV8NCihYRU4pIGZpeGVkIHVwIG5hbWUg
Zm9yIGZyYWdlbWVudEAxMSAtPiBmcmFnZW1lbnQNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIG92
ZXJyaWRlQDAgLT4gb3ZlcnJpZGUNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIF9vdmVybGF5XyAt
PiBfb3ZlcmxheV8NCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIGZyYWdtZW50LWUyNjE0LWNvbW1v
bkAwIC0+IGZyYWdtZW50LWUyNjE0LWNvbW1vbg0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3Igb3Zl
cnJpZGVzQDAgLT4gb3ZlcnJpZGVzDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBfb3ZlcmxheV8g
LT4gX292ZXJsYXlfDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBvdmVycmlkZXNAMSAtPiBvdmVy
cmlkZXMNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIF9vdmVybGF5XyAtPiBfb3ZlcmxheV8NCihY
RU4pIGZpeGVkIHVwIG5hbWUgZm9yIG92ZXJyaWRlc0AyIC0+IG92ZXJyaWRlcw0KKFhFTikgZml4
ZWQgdXAgbmFtZSBmb3IgX292ZXJsYXlfIC0+IF9vdmVybGF5Xw0KKFhFTikgZml4ZWQgdXAgbmFt
ZSBmb3Igb3ZlcnJpZGVzQDMgLT4gb3ZlcnJpZGVzDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBf
b3ZlcmxheV8gLT4gX292ZXJsYXlfDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBvdmVycmlkZXNA
NCAtPiBvdmVycmlkZXMNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIF9vdmVybGF5XyAtPiBfb3Zl
cmxheV8NCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIG92ZXJyaWRlc0A2IC0+IG92ZXJyaWRlcw0K
KFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgX292ZXJsYXlfIC0+IF9vdmVybGF5Xw0KKFhFTikgZml4
ZWQgdXAgbmFtZSBmb3Igb3ZlcnJpZGVzQDcgLT4gb3ZlcnJpZGVzDQooWEVOKSBmaXhlZCB1cCBu
YW1lIGZvciBfb3ZlcmxheV8gLT4gX292ZXJsYXlfDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBv
dmVycmlkZXNAOCAtPiBvdmVycmlkZXMNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIF9vdmVybGF5
XyAtPiBfb3ZlcmxheV8NCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIG92ZXJyaWRlc0A5IC0+IG92
ZXJyaWRlcw0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgX292ZXJsYXlfIC0+IF9vdmVybGF5Xw0K
KFhFTikgZml4ZWQgdXAgbmFtZSBmb3Igb3ZlcnJpZGVzQDEwIC0+IG92ZXJyaWRlcw0KKFhFTikg
Zml4ZWQgdXAgbmFtZSBmb3IgX292ZXJsYXlfIC0+IF9vdmVybGF5Xw0KKFhFTikgZml4ZWQgdXAg
bmFtZSBmb3Igb3ZlcnJpZGVzQDExIC0+IG92ZXJyaWRlcw0KKFhFTikgZml4ZWQgdXAgbmFtZSBm
b3IgX292ZXJsYXlfIC0+IF9vdmVybGF5Xw0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3Igb3ZlcnJp
ZGVAMTIgLT4gb3ZlcnJpZGUNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIF9vdmVybGF5XyAtPiBf
b3ZlcmxheV8NCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIGZyYWdtZW50LWUyNjE0LWEwMEAxIC0+
IGZyYWdtZW50LWUyNjE0LWEwMA0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3Igb3ZlcnJpZGVzQDAg
LT4gb3ZlcnJpZGVzDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBfb3ZlcmxheV8gLT4gX292ZXJs
YXlfDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBvdmVycmlkZUAxIC0+IG92ZXJyaWRlDQooWEVO
KSBmaXhlZCB1cCBuYW1lIGZvciBfb3ZlcmxheV8gLT4gX292ZXJsYXlfDQooWEVOKSBmaXhlZCB1
cCBuYW1lIGZvciBudmlkaWEsZGFpLWxpbmstMSAtPiBudmlkaWEsZGFpLWxpbmstMQ0KKFhFTikg
Zml4ZWQgdXAgbmFtZSBmb3IgZnJhZ21lbnQtZTI2MTQtYjAwQDIgLT4gZnJhZ21lbnQtZTI2MTQt
YjAwDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBvdmVycmlkZXNAMCAtPiBvdmVycmlkZXMNCihY
RU4pIGZpeGVkIHVwIG5hbWUgZm9yIF9vdmVybGF5XyAtPiBfb3ZlcmxheV8NCihYRU4pIGZpeGVk
IHVwIG5hbWUgZm9yIG92ZXJyaWRlQDEgLT4gb3ZlcnJpZGUNCihYRU4pIGZpeGVkIHVwIG5hbWUg
Zm9yIF9vdmVybGF5XyAtPiBfb3ZlcmxheV8NCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIG52aWRp
YSxkYWktbGluay0xIC0+IG52aWRpYSxkYWktbGluay0xDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZv
ciBmcmFnbWVudC1lMjYxNC1waW5zQDMgLT4gZnJhZ21lbnQtZTI2MTQtcGlucw0KKFhFTikgZml4
ZWQgdXAgbmFtZSBmb3Igb3ZlcnJpZGVzQDAgLT4gb3ZlcnJpZGVzDQooWEVOKSBmaXhlZCB1cCBu
YW1lIGZvciBfb3ZlcmxheV8gLT4gX292ZXJsYXlfDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBt
b2RzLXNpbXBsZS1idXMgLT4gbW9kcy1zaW1wbGUtYnVzDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZv
ciBtb2RzLWNsb2NrcyAtPiBtb2RzLWNsb2Nrcw0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgZ3Bz
X3dha2UgLT4gZ3BzX3dha2UNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIGNob3NlbiAtPiBjaG9z
ZW4NCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHBtdS1ib2FyZCAtPiBwbXUtYm9hcmQNCihYRU4p
IGZpeGVkIHVwIG5hbWUgZm9yIHByb2MtYm9hcmQgLT4gcHJvYy1ib2FyZA0KKFhFTikgZml4ZWQg
dXAgbmFtZSBmb3IgZGlzcGxheS1ib2FyZCAtPiBkaXNwbGF5LWJvYXJkDQooWEVOKSBmaXhlZCB1
cCBuYW1lIGZvciByZXNldCAtPiByZXNldA0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgcGx1Z2lu
LW1hbmFnZXIgLT4gcGx1Z2luLW1hbmFnZXINCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIGlkcyAt
PiBpZHMNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIGNvbm5lY3Rpb24gLT4gY29ubmVjdGlvbg0K
KFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgaTJjQDcwMDBjNTAwIC0+IGkyYw0KKFhFTikgZml4ZWQg
dXAgbmFtZSBmb3IgbW9kdWxlQDB4NTAgLT4gbW9kdWxlDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZv
ciBtb2R1bGVAMHg1NyAtPiBtb2R1bGUNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIG9kbS1kYXRh
IC0+IG9kbS1kYXRhDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBtb2R1bGVAMCAtPiBtb2R1bGUN
CihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHZlcmlmaWVkLWJvb3QgLT4gdmVyaWZpZWQtYm9vdA0K
KFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgZ3B1LWR2ZnMtcmV3b3JrIC0+IGdwdS1kdmZzLXJld29y
aw0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgcHdtX3JlZ3VsYXRvcnMgLT4gcHdtX3JlZ3VsYXRv
cnMNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHB3bS1yZWd1bGF0b3JAMCAtPiBwd20tcmVndWxh
dG9yDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBwd20tcmVndWxhdG9yQDEgLT4gcHdtLXJlZ3Vs
YXRvcg0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgZGZsbC1tYXg3NzYyMUA3MDExMDAwMCAtPiBk
ZmxsLW1heDc3NjIxDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBkZmxsLW1heDc3NjIxLWludGVn
cmF0aW9uIC0+IGRmbGwtbWF4Nzc2MjEtaW50ZWdyYXRpb24NCihYRU4pIGZpeGVkIHVwIG5hbWUg
Zm9yIGRmbGwtbWF4Nzc2MjEtYm9hcmQtcGFyYW1zIC0+IGRmbGwtbWF4Nzc2MjEtYm9hcmQtcGFy
YW1zDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBkZmxsLWNkZXYtY2FwIC0+IGRmbGwtY2Rldi1j
YXANCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIGRmbGwtY2Rldi1mbG9vciAtPiBkZmxsLWNkZXYt
Zmxvb3INCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIGR2ZnMgLT4gZHZmcw0KKFhFTikgZml4ZWQg
dXAgbmFtZSBmb3IgcjgxNjggLT4gcjgxNjgNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHRlZ3Jh
X3Vkcm0gLT4gdGVncmFfdWRybQ0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3Igc29mdF93YXRjaGRv
ZyAtPiBzb2Z0X3dhdGNoZG9nDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBsZWRzIC0+IGxlZHMN
CihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHB3ciAtPiBwd3INCihYRU4pIGZpeGVkIHVwIG5hbWUg
Zm9yIG1lbW9yeUA4MDAwMDAwMCAtPiBtZW1vcnkNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIGNw
dV9lZHAgLT4gY3B1X2VkcA0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgZ3B1X2VkcCAtPiBncHVf
ZWRwDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBfX3N5bWJvbHNfXyAtPiBfX3N5bWJvbHNfXw0K
KFhFTikgIDwtIHVuZmxhdHRlbl9kZXZpY2VfdHJlZSgpDQooWEVOKSBhZGRpbmcgRFQgYWxpYXM6
c2RoY2kwOiBzdGVtPXNkaGNpIGlkPTAgbm9kZT0vc2RoY2lANzAwYjAwMDANCihYRU4pIGFkZGlu
ZyBEVCBhbGlhczpzZGhjaTE6IHN0ZW09c2RoY2kgaWQ9MSBub2RlPS9zZGhjaUA3MDBiMDIwMA0K
KFhFTikgYWRkaW5nIERUIGFsaWFzOnNkaGNpMjogc3RlbT1zZGhjaSBpZD0yIG5vZGU9L3NkaGNp
QDcwMGIwNDAwDQooWEVOKSBhZGRpbmcgRFQgYWxpYXM6c2RoY2kzOiBzdGVtPXNkaGNpIGlkPTMg
bm9kZT0vc2RoY2lANzAwYjA2MDANCihYRU4pIGFkZGluZyBEVCBhbGlhczppMmMwOiBzdGVtPWky
YyBpZD0wIG5vZGU9L2kyY0A3MDAwYzAwMA0KKFhFTikgYWRkaW5nIERUIGFsaWFzOmkyYzE6IHN0
ZW09aTJjIGlkPTEgbm9kZT0vaTJjQDcwMDBjNDAwDQooWEVOKSBhZGRpbmcgRFQgYWxpYXM6aTJj
Mjogc3RlbT1pMmMgaWQ9MiBub2RlPS9pMmNANzAwMGM1MDANCihYRU4pIGFkZGluZyBEVCBhbGlh
czppMmMzOiBzdGVtPWkyYyBpZD0zIG5vZGU9L2kyY0A3MDAwYzcwMA0KKFhFTikgYWRkaW5nIERU
IGFsaWFzOmkyYzQ6IHN0ZW09aTJjIGlkPTQgbm9kZT0vaTJjQDcwMDBkMDAwDQooWEVOKSBhZGRp
bmcgRFQgYWxpYXM6aTJjNTogc3RlbT1pMmMgaWQ9NSBub2RlPS9pMmNANzAwMGQxMDANCihYRU4p
IGFkZGluZyBEVCBhbGlhczppMmM2OiBzdGVtPWkyYyBpZD02IG5vZGU9L2hvc3QxeC9pMmNANTQ2
YzAwMDANCihYRU4pIGFkZGluZyBEVCBhbGlhczpzcGkwOiBzdGVtPXNwaSBpZD0wIG5vZGU9L3Nw
aUA3MDAwZDQwMA0KKFhFTikgYWRkaW5nIERUIGFsaWFzOnNwaTE6IHN0ZW09c3BpIGlkPTEgbm9k
ZT0vc3BpQDcwMDBkNjAwDQooWEVOKSBhZGRpbmcgRFQgYWxpYXM6c3BpMjogc3RlbT1zcGkgaWQ9
MiBub2RlPS9zcGlANzAwMGQ4MDANCihYRU4pIGFkZGluZyBEVCBhbGlhczpzcGkzOiBzdGVtPXNw
aSBpZD0zIG5vZGU9L3NwaUA3MDAwZGEwMA0KKFhFTikgYWRkaW5nIERUIGFsaWFzOnFzcGk2OiBz
dGVtPXFzcGkgaWQ9NiBub2RlPS9zcGlANzA0MTAwMDANCihYRU4pIGFkZGluZyBEVCBhbGlhczpz
ZXJpYWwwOiBzdGVtPXNlcmlhbCBpZD0wIG5vZGU9L3NlcmlhbEA3MDAwNjAwMA0KKFhFTikgYWRk
aW5nIERUIGFsaWFzOnNlcmlhbDE6IHN0ZW09c2VyaWFsIGlkPTEgbm9kZT0vc2VyaWFsQDcwMDA2
MDQwDQooWEVOKSBhZGRpbmcgRFQgYWxpYXM6c2VyaWFsMjogc3RlbT1zZXJpYWwgaWQ9MiBub2Rl
PS9zZXJpYWxANzAwMDYyMDANCihYRU4pIGFkZGluZyBEVCBhbGlhczpzZXJpYWwzOiBzdGVtPXNl
cmlhbCBpZD0zIG5vZGU9L3NlcmlhbEA3MDAwNjMwMA0KKFhFTikgYWRkaW5nIERUIGFsaWFzOnJ0
YzA6IHN0ZW09cnRjIGlkPTAgbm9kZT0vaTJjQDcwMDBkMDAwL21heDc3NjIwQDNjDQooWEVOKSBh
ZGRpbmcgRFQgYWxpYXM6cnRjMTogc3RlbT1ydGMgaWQ9MSBub2RlPS9ydGMNCihYRU4pIFBsYXRm
b3JtOiBHZW5lcmljIFN5c3RlbQ0KKFhFTikgVGFraW5nIGR0dWFydCBjb25maWd1cmF0aW9uIGZy
b20gL2Nob3Nlbi9zdGRvdXQtcGF0aA0KKFhFTikgTG9va2luZyBmb3IgZHR1YXJ0IGF0ICIvc2Vy
aWFsQDcwMDA2MDAwIiwgb3B0aW9ucyAiIg0KKFhFTikgRFQ6ICoqIHRyYW5zbGF0aW9uIGZvciBk
ZXZpY2UgL3NlcmlhbEA3MDAwNjAwMCAqKg0KKFhFTikgRFQ6IGJ1cyBpcyBkZWZhdWx0IChuYT0y
LCBucz0yKSBvbiAvDQooWEVOKSBEVDogdHJhbnNsYXRpbmcgYWRkcmVzczo8Mz4gMDAwMDAwMDA8
Mz4gNzAwMDYwMDA8Mz4NCihYRU4pIERUOiByZWFjaGVkIHJvb3Qgbm9kZQ0KKFhFTikgZHRfZGV2
aWNlX2dldF9yYXdfaXJxOiBkZXY9L3NlcmlhbEA3MDAwNjAwMCwgaW5kZXg9MA0KKFhFTikgIHVz
aW5nICdpbnRlcnJ1cHRzJyBwcm9wZXJ0eQ0KKFhFTikgIGludHNwZWM9MCBpbnRsZW49Mw0KKFhF
TikgIGludHNpemU9MyBpbnRsZW49Mw0KKFhFTikgZHRfaXJxX21hcF9yYXc6IHBhcj0vaW50ZXJy
dXB0LWNvbnRyb2xsZXJANjAwMDQwMDAsaW50c3BlYz1bMHgwMDAwMDAwMCAweDAwMDAwMDI0Li4u
XSxvaW50c2l6ZT0zDQooWEVOKSBkdF9pcnFfbWFwX3JhdzogaXBhcj0vaW50ZXJydXB0LWNvbnRy
b2xsZXJANjAwMDQwMDAsIHNpemU9Mw0KKFhFTikgIC0+IGFkZHJzaXplPTINIFhlbiA0LjE0LjAN
CihYRU4pIFhlbiB2ZXJzaW9uIDQuMTQuMCAoc3Jpbml2YXNAKSAoYWFyY2g2NC1saW51eC1nbnUt
Z2NjIChVYnVudHUvTGluYXJvIDcuNS4wLTN1YnVudHUxfjE4LjA0KSA3LjUuMCkgZGVidWc9eSAg
V2VkIEp1bCAyOSAwODowMzozOSBQRFQgMjAyMA0KKFhFTikgTGF0ZXN0IENoYW5nZVNldDogVGh1
IEp1bCAyMyAxNjowNzo1MSAyMDIwICswMTAwIGdpdDo0NTY5NTdhYWExLWRpcnR5DQooWEVOKSBi
dWlsZC1pZDogNzFkOWQzZDg0YzU3ZGNkZWY1NzU5NjZhMjRhNjdjNWE4MTlhNmU1YQ0KKFhFTikg
Q29uc29sZSBvdXRwdXQgaXMgc3luY2hyb25vdXMuDQooWEVOKSBQcm9jZXNzb3I6IDQxMWZkMDcx
OiAiQVJNIExpbWl0ZWQiLCB2YXJpYW50OiAweDEsIHBhcnQgMHhkMDcsIHJldiAweDENCihYRU4p
IDY0LWJpdCBFeGVjdXRpb246DQooWEVOKSAgIFByb2Nlc3NvciBGZWF0dXJlczogMDAwMDAwMDAw
MDAwMjIyMiAwMDAwMDAwMDAwMDAwMDAwDQooWEVOKSAgICAgRXhjZXB0aW9uIExldmVsczogRUwz
OjY0KzMyIEVMMjo2NCszMiBFTDE6NjQrMzIgRUwwOjY0KzMyDQooWEVOKSAgICAgRXh0ZW5zaW9u
czogRmxvYXRpbmdQb2ludCBBZHZhbmNlZFNJTUQNCihYRU4pICAgRGVidWcgRmVhdHVyZXM6IDAw
MDAwMDAwMTAzMDUxMDYgMDAwMDAwMDAwMDAwMDAwMA0KKFhFTikgICBBdXhpbGlhcnkgRmVhdHVy
ZXM6IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAwMDAwMA0KKFhFTikgICBNZW1vcnkgTW9k
ZWwgRmVhdHVyZXM6IDAwMDAwMDAwMDAwMDExMjQgMDAwMDAwMDAwMDAwMDAwMA0KKFhFTikgICBJ
U0EgRmVhdHVyZXM6ICAwMDAwMDAwMDAwMDExMTIwIDAwMDAwMDAwMDAwMDAwMDANCihYRU4pIDMy
LWJpdCBFeGVjdXRpb246DQooWEVOKSAgIFByb2Nlc3NvciBGZWF0dXJlczogMDAwMDAxMzE6MDAw
MTEwMTENCihYRU4pICAgICBJbnN0cnVjdGlvbiBTZXRzOiBBQXJjaDMyIEEzMiBUaHVtYiBUaHVt
Yi0yIEphemVsbGUNCihYRU4pICAgICBFeHRlbnNpb25zOiBHZW5lcmljVGltZXIgU2VjdXJpdHkN
CihYRU4pICAgRGVidWcgRmVhdHVyZXM6IDAzMDEwMDY2DQooWEVOKSAgIEF1eGlsaWFyeSBGZWF0
dXJlczogMDAwMDAwMDANCihYRU4pICAgTWVtb3J5IE1vZGVsIEZlYXR1cmVzOiAxMDEwMTEwNSA0
MDAwMDAwMCAwMTI2MDAwMCAwMjEwMjIxMQ0KKFhFTikgIElTQSBGZWF0dXJlczogMDIxMDExMTAg
MTMxMTIxMTEgMjEyMzIwNDIgMDExMTIxMzEgMDAwMTExNDIgMDAwMTExMjENCihYRU4pIFVzaW5n
IFNNQyBDYWxsaW5nIENvbnZlbnRpb24gdjEuMQ0KKFhFTikgVXNpbmcgUFNDSSB2MS4wDQooWEVO
KSBTTVA6IEFsbG93aW5nIDQgQ1BVcw0KKFhFTikgZW5hYmxlZCB3b3JrYXJvdW5kIGZvcjogQVJN
IGVycmF0dW0gODMyMDc1DQooWEVOKSBlbmFibGVkIHdvcmthcm91bmQgZm9yOiBBUk0gZXJyYXR1
bSA4MzQyMjANCihYRU4pIGVuYWJsZWQgd29ya2Fyb3VuZCBmb3I6IEFSTSBlcnJhdHVtIDEzMTkz
NjcNCihYRU4pIGR0X2RldmljZV9nZXRfcmF3X2lycTogZGV2PS90aW1lciwgaW5kZXg9MA0KKFhF
TikgIHVzaW5nICdpbnRlcnJ1cHRzJyBwcm9wZXJ0eQ0KKFhFTikgIGludHNwZWM9MSBpbnRsZW49
MTINCihYRU4pICBpbnRzaXplPTMgaW50bGVuPTEyDQooWEVOKSBkdF9pcnFfbWFwX3JhdzogcGFy
PS9pbnRlcnJ1cHQtY29udHJvbGxlcixpbnRzcGVjPVsweDAwMDAwMDAxIDB4MDAwMDAwMGQuLi5d
LG9pbnRzaXplPTMNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBpcGFyPS9pbnRlcnJ1cHQtY29udHJv
bGxlciwgc2l6ZT0zDQooWEVOKSAgLT4gYWRkcnNpemU9Mg0KKFhFTikgIC0+IGdvdCBpdCAhDQoo
WEVOKSBkdF9kZXZpY2VfZ2V0X3Jhd19pcnE6IGRldj0vdGltZXIsIGluZGV4PTENCihYRU4pICB1
c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTEgaW50bGVuPTEyDQoo
WEVOKSAgaW50c2l6ZT0zIGludGxlbj0xMg0KKFhFTikgZHRfaXJxX21hcF9yYXc6IHBhcj0vaW50
ZXJydXB0LWNvbnRyb2xsZXIsaW50c3BlYz1bMHgwMDAwMDAwMSAweDAwMDAwMDBlLi4uXSxvaW50
c2l6ZT0zDQooWEVOKSBkdF9pcnFfbWFwX3JhdzogaXBhcj0vaW50ZXJydXB0LWNvbnRyb2xsZXIs
IHNpemU9Mw0KKFhFTikgIC0+IGFkZHJzaXplPTINCihYRU4pICAtPiBnb3QgaXQgIQ0KKFhFTikg
ZHRfZGV2aWNlX2dldF9yYXdfaXJxOiBkZXY9L3RpbWVyLCBpbmRleD0yDQooWEVOKSAgdXNpbmcg
J2ludGVycnVwdHMnIHByb3BlcnR5DQooWEVOKSAgaW50c3BlYz0xIGludGxlbj0xMg0KKFhFTikg
IGludHNpemU9MyBpbnRsZW49MTINCihYRU4pIGR0X2lycV9tYXBfcmF3OiBwYXI9L2ludGVycnVw
dC1jb250cm9sbGVyLGludHNwZWM9WzB4MDAwMDAwMDEgMHgwMDAwMDAwYi4uLl0sb2ludHNpemU9
Mw0KKFhFTikgZHRfaXJxX21hcF9yYXc6IGlwYXI9L2ludGVycnVwdC1jb250cm9sbGVyLCBzaXpl
PTMNCihYRU4pICAtPiBhZGRyc2l6ZT0yDQooWEVOKSAgLT4gZ290IGl0ICENCihYRU4pIGR0X2Rl
dmljZV9nZXRfcmF3X2lycTogZGV2PS90aW1lciwgaW5kZXg9Mw0KKFhFTikgIHVzaW5nICdpbnRl
cnJ1cHRzJyBwcm9wZXJ0eQ0KKFhFTikgIGludHNwZWM9MSBpbnRsZW49MTINCihYRU4pICBpbnRz
aXplPTMgaW50bGVuPTEyDQooWEVOKSBkdF9pcnFfbWFwX3JhdzogcGFyPS9pbnRlcnJ1cHQtY29u
dHJvbGxlcixpbnRzcGVjPVsweDAwMDAwMDAxIDB4MDAwMDAwMGEuLi5dLG9pbnRzaXplPTMNCihY
RU4pIGR0X2lycV9tYXBfcmF3OiBpcGFyPS9pbnRlcnJ1cHQtY29udHJvbGxlciwgc2l6ZT0zDQoo
WEVOKSAgLT4gYWRkcnNpemU9Mg0KKFhFTikgIC0+IGdvdCBpdCAhDQooWEVOKSBHZW5lcmljIFRp
bWVyIElSUTogcGh5cz0zMCBoeXA9MjYgdmlydD0yNyBGcmVxOiAxOTIwMCBLSHoNCihYRU4pIERU
OiAqKiB0cmFuc2xhdGlvbiBmb3IgZGV2aWNlIC9pbnRlcnJ1cHQtY29udHJvbGxlciAqKg0KKFhF
TikgRFQ6IGJ1cyBpcyBkZWZhdWx0IChuYT0yLCBucz0yKSBvbiAvDQooWEVOKSBEVDogdHJhbnNs
YXRpbmcgYWRkcmVzczo8Mz4gMDAwMDAwMDA8Mz4gNTAwNDEwMDA8Mz4NCihYRU4pIERUOiByZWFj
aGVkIHJvb3Qgbm9kZQ0KKFhFTikgRFQ6ICoqIHRyYW5zbGF0aW9uIGZvciBkZXZpY2UgL2ludGVy
cnVwdC1jb250cm9sbGVyICoqDQooWEVOKSBEVDogYnVzIGlzIGRlZmF1bHQgKG5hPTIsIG5zPTIp
IG9uIC8NCihYRU4pIERUOiB0cmFuc2xhdGluZyBhZGRyZXNzOjwzPiAwMDAwMDAwMDwzPiA1MDA0
MjAwMDwzPg0KKFhFTikgRFQ6IHJlYWNoZWQgcm9vdCBub2RlDQooWEVOKSBEVDogKiogdHJhbnNs
YXRpb24gZm9yIGRldmljZSAvaW50ZXJydXB0LWNvbnRyb2xsZXIgKioNCihYRU4pIERUOiBidXMg
aXMgZGVmYXVsdCAobmE9MiwgbnM9Mikgb24gLw0KKFhFTikgRFQ6IHRyYW5zbGF0aW5nIGFkZHJl
c3M6PDM+IDAwMDAwMDAwPDM+IDUwMDQ0MDAwPDM+DQooWEVOKSBEVDogcmVhY2hlZCByb290IG5v
ZGUNCihYRU4pIERUOiAqKiB0cmFuc2xhdGlvbiBmb3IgZGV2aWNlIC9pbnRlcnJ1cHQtY29udHJv
bGxlciAqKg0KKFhFTikgRFQ6IGJ1cyBpcyBkZWZhdWx0IChuYT0yLCBucz0yKSBvbiAvDQooWEVO
KSBEVDogdHJhbnNsYXRpbmcgYWRkcmVzczo8Mz4gMDAwMDAwMDA8Mz4gNTAwNDYwMDA8Mz4NCihY
RU4pIERUOiByZWFjaGVkIHJvb3Qgbm9kZQ0KKFhFTikgZHRfZGV2aWNlX2dldF9yYXdfaXJxOiBk
ZXY9L2ludGVycnVwdC1jb250cm9sbGVyLCBpbmRleD0wDQooWEVOKSAgdXNpbmcgJ2ludGVycnVw
dHMnIHByb3BlcnR5DQooWEVOKSAgaW50c3BlYz0xIGludGxlbj0zDQooWEVOKSAgaW50c2l6ZT0z
IGludGxlbj0zDQooWEVOKSBkdF9pcnFfbWFwX3JhdzogcGFyPS9pbnRlcnJ1cHQtY29udHJvbGxl
cixpbnRzcGVjPVsweDAwMDAwMDAxIDB4MDAwMDAwMDkuLi5dLG9pbnRzaXplPTMNCihYRU4pIGR0
X2lycV9tYXBfcmF3OiBpcGFyPS9pbnRlcnJ1cHQtY29udHJvbGxlciwgc2l6ZT0zDQooWEVOKSAg
LT4gYWRkcnNpemU9Mg0KKFhFTikgIC0+IGdvdCBpdCAhDQooWEVOKSBHSUN2MiBpbml0aWFsaXph
dGlvbjoNCihYRU4pICAgICAgICAgZ2ljX2Rpc3RfYWRkcj0wMDAwMDAwMDUwMDQxMDAwDQooWEVO
KSAgICAgICAgIGdpY19jcHVfYWRkcj0wMDAwMDAwMDUwMDQyMDAwDQooWEVOKSAgICAgICAgIGdp
Y19oeXBfYWRkcj0wMDAwMDAwMDUwMDQ0MDAwDQooWEVOKSAgICAgICAgIGdpY192Y3B1X2FkZHI9
MDAwMDAwMDA1MDA0NjAwMA0KKFhFTikgICAgICAgICBnaWNfbWFpbnRlbmFuY2VfaXJxPTI1DQoo
WEVOKSBHSUN2MjogMjI0IGxpbmVzLCA0IGNwdXMsIHNlY3VyZSAoSUlEIDAyMDAxNDNiKS4NCihY
RU4pIFhTTSBGcmFtZXdvcmsgdjEuMC4wIGluaXRpYWxpemVkDQooWEVOKSBJbml0aWFsaXNpbmcg
WFNNIFNJTE8gbW9kZQ0KKFhFTikgVXNpbmcgc2NoZWR1bGVyOiBTTVAgQ3JlZGl0IFNjaGVkdWxl
ciByZXYyIChjcmVkaXQyKQ0KKFhFTikgSW5pdGlhbGl6aW5nIENyZWRpdDIgc2NoZWR1bGVyDQoo
WEVOKSAgbG9hZF9wcmVjaXNpb25fc2hpZnQ6IDE4DQooWEVOKSAgbG9hZF93aW5kb3dfc2hpZnQ6
IDMwDQooWEVOKSAgdW5kZXJsb2FkX2JhbGFuY2VfdG9sZXJhbmNlOiAwDQooWEVOKSAgb3Zlcmxv
YWRfYmFsYW5jZV90b2xlcmFuY2U6IC0zDQooWEVOKSAgcnVucXVldWVzIGFycmFuZ2VtZW50OiBz
b2NrZXQNCihYRU4pICBjYXAgZW5mb3JjZW1lbnQgZ3JhbnVsYXJpdHk6IDEwbXMNCihYRU4pIGxv
YWQgdHJhY2tpbmcgd2luZG93IGxlbmd0aCAxMDczNzQxODI0IG5zDQooWEVOKSBBbGxvY2F0ZWQg
Y29uc29sZSByaW5nIG9mIDMyIEtpQi4NCihYRU4pIENQVTA6IEd1ZXN0IGF0b21pY3Mgd2lsbCB0
cnkgOCB0aW1lcyBiZWZvcmUgcGF1c2luZyB0aGUgZG9tYWluDQooWEVOKSBCcmluZ2luZyB1cCBD
UFUxDQotIENQVSAwMDAwMDAwMSBib290aW5nIC0NCi0gQ3VycmVudCBFTCAwMDAwMDAwOCAtDQot
IEluaXRpYWxpemUgQ1BVIC0NCi0gVHVybmluZyBvbiBwYWdpbmcgLQ0KLSBSZWFkeSAtDQooWEVO
KSBDUFUxOiBHdWVzdCBhdG9taWNzIHdpbGwgdHJ5IDggdGltZXMgYmVmb3JlIHBhdXNpbmcgdGhl
IGRvbWFpbg0KKFhFTikgQ1BVIDEgYm9vdGVkLg0KKFhFTikgQnJpbmdpbmcgdXAgQ1BVMg0KLSBD
UFUgMDAwMDAwMDIgYm9vdGluZyAtDQotIEN1cnJlbnQgRUwgMDAwMDAwMDggLQ0KLSBJbml0aWFs
aXplIENQVSAtDQotIFR1cm5pbmcgb24gcGFnaW5nIC0NCi0gUmVhZHkgLQ0KKFhFTikgQ1BVMjog
R3Vlc3QgYXRvbWljcyB3aWxsIHRyeSA5IHRpbWVzIGJlZm9yZSBwYXVzaW5nIHRoZSBkb21haW4N
CihYRU4pIENQVSAyIGJvb3RlZC4NCihYRU4pIEJyaW5naW5nIHVwIENQVTMNCi0gQ1BVIDAwMDAw
MDAzIGJvb3RpbmcgLQ0KLSBDdXJyZW50IEVMIDAwMDAwMDA4IC0NCi0gSW5pdGlhbGl6ZSBDUFUg
LQ0KLSBUdXJuaW5nIG9uIHBhZ2luZyAtDQotIFJlYWR5IC0NCihYRU4pIENQVTM6IEd1ZXN0IGF0
b21pY3Mgd2lsbCB0cnkgOCB0aW1lcyBiZWZvcmUgcGF1c2luZyB0aGUgZG9tYWluDQooWEVOKSBD
UFUgMyBib290ZWQuDQooWEVOKSBCcm91Z2h0IHVwIDQgQ1BVcw0KKFhFTikgSS9PIHZpcnR1YWxp
c2F0aW9uIGRpc2FibGVkDQooWEVOKSBQMk06IDQ0LWJpdCBJUEEgd2l0aCA0NC1iaXQgUEEgYW5k
IDgtYml0IFZNSUQNCihYRU4pIFAyTTogNCBsZXZlbHMgd2l0aCBvcmRlci0wIHJvb3QsIFZUQ1Ig
MHg4MDA0MzU5NA0KKFhFTikgU2NoZWR1bGluZyBncmFudWxhcml0eTogY3B1LCAxIENQVSBwZXIg
c2NoZWQtcmVzb3VyY2UNCihYRU4pIEFkZGluZyBjcHUgMCB0byBydW5xdWV1ZSAwDQooWEVOKSAg
Rmlyc3QgY3B1IG9uIHJ1bnF1ZXVlLCBhY3RpdmF0aW5nDQooWEVOKSBBZGRpbmcgY3B1IDEgdG8g
cnVucXVldWUgMA0KKFhFTikgQWRkaW5nIGNwdSAyIHRvIHJ1bnF1ZXVlIDANCihYRU4pIEFkZGlu
ZyBjcHUgMyB0byBydW5xdWV1ZSAwDQooWEVOKSBhbHRlcm5hdGl2ZXM6IFBhdGNoaW5nIHdpdGgg
YWx0IHRhYmxlIDAwMDAwMDAwMDAyZGM0MDAgLT4gMDAwMDAwMDAwMDJkY2FmMA0KKFhFTikgQ1BV
MiB3aWxsIGNhbGwgQVJNX1NNQ0NDX0FSQ0hfV09SS0FST1VORF8xIG9uIGV4Y2VwdGlvbiBlbnRy
eQ0KKFhFTikgQ1BVMSB3aWxsIGNhbGwgQVJNX1NNQ0NDX0FSQ0hfV09SS0FST1VORF8xIG9uIGV4
Y2VwdGlvbiBlbnRyeQ0KKFhFTikgQ1BVMCB3aWxsIGNhbGwgQVJNX1NNQ0NDX0FSQ0hfV09SS0FS
T1VORF8xIG9uIGV4Y2VwdGlvbiBlbnRyeQ0KKFhFTikgQ1BVMyB3aWxsIGNhbGwgQVJNX1NNQ0ND
X0FSQ0hfV09SS0FST1VORF8xIG9uIGV4Y2VwdGlvbiBlbnRyeQ0KKFhFTikgKioqIExPQURJTkcg
RE9NQUlOIDAgKioqDQooWEVOKSBMb2FkaW5nIGQwIGtlcm5lbCBmcm9tIGJvb3QgbW9kdWxlIEAg
MDAwMDAwMDBlMTAwMDAwMA0KKFhFTikgQWxsb2NhdGluZyAxOjEgbWFwcGluZ3MgdG90YWxsaW5n
IDEyOE1CIGZvciBkb20wOg0KKFhFTikgQkFOS1swXSAweDAwMDAwMGU4MDAwMDAwLTB4MDAwMDAw
ZjAwMDAwMDAgKDEyOE1CKQ0KKFhFTikgR3JhbnQgdGFibGUgcmFuZ2U6IDB4MDAwMDAwZTAwMDAw
MDAtMHgwMDAwMDBlMDA0MDAwMA0KKFhFTikgaGFuZGxlIC8NCihYRU4pIGR0X2lycV9udW1iZXI6
IGRldj0vDQooWEVOKSAvIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlm
IC8gaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBk
ZXY9Lw0KKFhFTikgaGFuZGxlIC90aGVybWFsLXpvbmVzDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBk
ZXY9L3RoZXJtYWwtem9uZXMNCihYRU4pIC90aGVybWFsLXpvbmVzIHBhc3N0aHJvdWdoID0gMSBu
YWRkciA9IDANCihYRU4pIENoZWNrIGlmIC90aGVybWFsLXpvbmVzIGlzIGJlaGluZCB0aGUgSU9N
TVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS90aGVybWFsLXpvbmVzDQoo
WEVOKSBoYW5kbGUgL3RoZXJtYWwtem9uZXMvQU8tdGhlcm0NCihYRU4pIGR0X2lycV9udW1iZXI6
IGRldj0vdGhlcm1hbC16b25lcy9BTy10aGVybQ0KKFhFTikgL3RoZXJtYWwtem9uZXMvQU8tdGhl
cm0gcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3RoZXJtYWwtem9u
ZXMvQU8tdGhlcm0gaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFf
bnVtYmVyOiBkZXY9L3RoZXJtYWwtem9uZXMvQU8tdGhlcm0NCihYRU4pIGhhbmRsZSAvdGhlcm1h
bC16b25lcy9BTy10aGVybS90cmlwcw0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS90aGVybWFs
LXpvbmVzL0FPLXRoZXJtL3RyaXBzDQooWEVOKSAvdGhlcm1hbC16b25lcy9BTy10aGVybS90cmlw
cyBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvdGhlcm1hbC16b25l
cy9BTy10aGVybS90cmlwcyBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0
X2lycV9udW1iZXI6IGRldj0vdGhlcm1hbC16b25lcy9BTy10aGVybS90cmlwcw0KKFhFTikgaGFu
ZGxlIC90aGVybWFsLXpvbmVzL0FPLXRoZXJtL3RyaXBzL3RyaXBfc2h1dGRvd24NCihYRU4pIGR0
X2lycV9udW1iZXI6IGRldj0vdGhlcm1hbC16b25lcy9BTy10aGVybS90cmlwcy90cmlwX3NodXRk
b3duDQooWEVOKSAvdGhlcm1hbC16b25lcy9BTy10aGVybS90cmlwcy90cmlwX3NodXRkb3duIHBh
c3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC90aGVybWFsLXpvbmVzL0FP
LXRoZXJtL3RyaXBzL3RyaXBfc2h1dGRvd24gaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0
DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3RoZXJtYWwtem9uZXMvQU8tdGhlcm0vdHJpcHMv
dHJpcF9zaHV0ZG93bg0KKFhFTikgaGFuZGxlIC90aGVybWFsLXpvbmVzL0FPLXRoZXJtL3RyaXBz
L2dwdS1zY2FsaW5nMA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS90aGVybWFsLXpvbmVzL0FP
LXRoZXJtL3RyaXBzL2dwdS1zY2FsaW5nMA0KKFhFTikgL3RoZXJtYWwtem9uZXMvQU8tdGhlcm0v
dHJpcHMvZ3B1LXNjYWxpbmcwIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNr
IGlmIC90aGVybWFsLXpvbmVzL0FPLXRoZXJtL3RyaXBzL2dwdS1zY2FsaW5nMCBpcyBiZWhpbmQg
dGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vdGhlcm1hbC16
b25lcy9BTy10aGVybS90cmlwcy9ncHUtc2NhbGluZzANCihYRU4pIGhhbmRsZSAvdGhlcm1hbC16
b25lcy9BTy10aGVybS90cmlwcy9ncHUtc2NhbGluZzENCihYRU4pIGR0X2lycV9udW1iZXI6IGRl
dj0vdGhlcm1hbC16b25lcy9BTy10aGVybS90cmlwcy9ncHUtc2NhbGluZzENCihYRU4pIC90aGVy
bWFsLXpvbmVzL0FPLXRoZXJtL3RyaXBzL2dwdS1zY2FsaW5nMSBwYXNzdGhyb3VnaCA9IDEgbmFk
ZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvdGhlcm1hbC16b25lcy9BTy10aGVybS90cmlwcy9ncHUt
c2NhbGluZzEgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVt
YmVyOiBkZXY9L3RoZXJtYWwtem9uZXMvQU8tdGhlcm0vdHJpcHMvZ3B1LXNjYWxpbmcxDQooWEVO
KSBoYW5kbGUgL3RoZXJtYWwtem9uZXMvQU8tdGhlcm0vdHJpcHMvZ3B1LXNjYWxpbmcyDQooWEVO
KSBkdF9pcnFfbnVtYmVyOiBkZXY9L3RoZXJtYWwtem9uZXMvQU8tdGhlcm0vdHJpcHMvZ3B1LXNj
YWxpbmcyDQooWEVOKSAvdGhlcm1hbC16b25lcy9BTy10aGVybS90cmlwcy9ncHUtc2NhbGluZzIg
cGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3RoZXJtYWwtem9uZXMv
QU8tdGhlcm0vdHJpcHMvZ3B1LXNjYWxpbmcyIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBp
dA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS90aGVybWFsLXpvbmVzL0FPLXRoZXJtL3RyaXBz
L2dwdS1zY2FsaW5nMg0KKFhFTikgaGFuZGxlIC90aGVybWFsLXpvbmVzL0FPLXRoZXJtL3RyaXBz
L2dwdS1zY2FsaW5nMw0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS90aGVybWFsLXpvbmVzL0FP
LXRoZXJtL3RyaXBzL2dwdS1zY2FsaW5nMw0KKFhFTikgL3RoZXJtYWwtem9uZXMvQU8tdGhlcm0v
dHJpcHMvZ3B1LXNjYWxpbmczIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNr
IGlmIC90aGVybWFsLXpvbmVzL0FPLXRoZXJtL3RyaXBzL2dwdS1zY2FsaW5nMyBpcyBiZWhpbmQg
dGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vdGhlcm1hbC16
b25lcy9BTy10aGVybS90cmlwcy9ncHUtc2NhbGluZzMNCihYRU4pIGhhbmRsZSAvdGhlcm1hbC16
b25lcy9BTy10aGVybS90cmlwcy9ncHUtc2NhbGluZzQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRl
dj0vdGhlcm1hbC16b25lcy9BTy10aGVybS90cmlwcy9ncHUtc2NhbGluZzQNCihYRU4pIC90aGVy
bWFsLXpvbmVzL0FPLXRoZXJtL3RyaXBzL2dwdS1zY2FsaW5nNCBwYXNzdGhyb3VnaCA9IDEgbmFk
ZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvdGhlcm1hbC16b25lcy9BTy10aGVybS90cmlwcy9ncHUt
c2NhbGluZzQgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVt
YmVyOiBkZXY9L3RoZXJtYWwtem9uZXMvQU8tdGhlcm0vdHJpcHMvZ3B1LXNjYWxpbmc0DQooWEVO
KSBoYW5kbGUgL3RoZXJtYWwtem9uZXMvQU8tdGhlcm0vdHJpcHMvZ3B1LXNjYWxpbmc1DQooWEVO
KSBkdF9pcnFfbnVtYmVyOiBkZXY9L3RoZXJtYWwtem9uZXMvQU8tdGhlcm0vdHJpcHMvZ3B1LXNj
YWxpbmc1DQooWEVOKSAvdGhlcm1hbC16b25lcy9BTy10aGVybS90cmlwcy9ncHUtc2NhbGluZzUg
cGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3RoZXJtYWwtem9uZXMv
QU8tdGhlcm0vdHJpcHMvZ3B1LXNjYWxpbmc1IGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBp
dA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS90aGVybWFsLXpvbmVzL0FPLXRoZXJtL3RyaXBz
L2dwdS1zY2FsaW5nNQ0KKFhFTikgaGFuZGxlIC90aGVybWFsLXpvbmVzL0FPLXRoZXJtL3RyaXBz
L2dwdS12bWF4MQ0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS90aGVybWFsLXpvbmVzL0FPLXRo
ZXJtL3RyaXBzL2dwdS12bWF4MQ0KKFhFTikgL3RoZXJtYWwtem9uZXMvQU8tdGhlcm0vdHJpcHMv
Z3B1LXZtYXgxIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC90aGVy
bWFsLXpvbmVzL0FPLXRoZXJtL3RyaXBzL2dwdS12bWF4MSBpcyBiZWhpbmQgdGhlIElPTU1VIGFu
ZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vdGhlcm1hbC16b25lcy9BTy10aGVy
bS90cmlwcy9ncHUtdm1heDENCihYRU4pIGhhbmRsZSAvdGhlcm1hbC16b25lcy9BTy10aGVybS90
cmlwcy9jb3JlX2R2ZnNfZmxvb3JfdHJpcDANCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vdGhl
cm1hbC16b25lcy9BTy10aGVybS90cmlwcy9jb3JlX2R2ZnNfZmxvb3JfdHJpcDANCihYRU4pIC90
aGVybWFsLXpvbmVzL0FPLXRoZXJtL3RyaXBzL2NvcmVfZHZmc19mbG9vcl90cmlwMCBwYXNzdGhy
b3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvdGhlcm1hbC16b25lcy9BTy10aGVy
bS90cmlwcy9jb3JlX2R2ZnNfZmxvb3JfdHJpcDAgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRk
IGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3RoZXJtYWwtem9uZXMvQU8tdGhlcm0vdHJp
cHMvY29yZV9kdmZzX2Zsb29yX3RyaXAwDQooWEVOKSBoYW5kbGUgL3RoZXJtYWwtem9uZXMvQU8t
dGhlcm0vdHJpcHMvY29yZV9kdmZzX2NhcF90cmlwMA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2
PS90aGVybWFsLXpvbmVzL0FPLXRoZXJtL3RyaXBzL2NvcmVfZHZmc19jYXBfdHJpcDANCihYRU4p
IC90aGVybWFsLXpvbmVzL0FPLXRoZXJtL3RyaXBzL2NvcmVfZHZmc19jYXBfdHJpcDAgcGFzc3Ro
cm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3RoZXJtYWwtem9uZXMvQU8tdGhl
cm0vdHJpcHMvY29yZV9kdmZzX2NhcF90cmlwMCBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQg
aXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vdGhlcm1hbC16b25lcy9BTy10aGVybS90cmlw
cy9jb3JlX2R2ZnNfY2FwX3RyaXAwDQooWEVOKSBoYW5kbGUgL3RoZXJtYWwtem9uZXMvQU8tdGhl
cm0vdHJpcHMvZGZsbC1mbG9vci10cmlwMA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS90aGVy
bWFsLXpvbmVzL0FPLXRoZXJtL3RyaXBzL2RmbGwtZmxvb3ItdHJpcDANCihYRU4pIC90aGVybWFs
LXpvbmVzL0FPLXRoZXJtL3RyaXBzL2RmbGwtZmxvb3ItdHJpcDAgcGFzc3Rocm91Z2ggPSAxIG5h
ZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3RoZXJtYWwtem9uZXMvQU8tdGhlcm0vdHJpcHMvZGZs
bC1mbG9vci10cmlwMCBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2ly
cV9udW1iZXI6IGRldj0vdGhlcm1hbC16b25lcy9BTy10aGVybS90cmlwcy9kZmxsLWZsb29yLXRy
aXAwDQooWEVOKSBoYW5kbGUgL3RoZXJtYWwtem9uZXMvQU8tdGhlcm0vdGhlcm1hbC16b25lLXBh
cmFtcw0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS90aGVybWFsLXpvbmVzL0FPLXRoZXJtL3Ro
ZXJtYWwtem9uZS1wYXJhbXMNCihYRU4pIC90aGVybWFsLXpvbmVzL0FPLXRoZXJtL3RoZXJtYWwt
em9uZS1wYXJhbXMgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3Ro
ZXJtYWwtem9uZXMvQU8tdGhlcm0vdGhlcm1hbC16b25lLXBhcmFtcyBpcyBiZWhpbmQgdGhlIElP
TU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vdGhlcm1hbC16b25lcy9B
Ty10aGVybS90aGVybWFsLXpvbmUtcGFyYW1zDQooWEVOKSBoYW5kbGUgL3RoZXJtYWwtem9uZXMv
QU8tdGhlcm0vY29vbGluZy1tYXBzDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3RoZXJtYWwt
em9uZXMvQU8tdGhlcm0vY29vbGluZy1tYXBzDQooWEVOKSAvdGhlcm1hbC16b25lcy9BTy10aGVy
bS9jb29saW5nLW1hcHMgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYg
L3RoZXJtYWwtem9uZXMvQU8tdGhlcm0vY29vbGluZy1tYXBzIGlzIGJlaGluZCB0aGUgSU9NTVUg
YW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS90aGVybWFsLXpvbmVzL0FPLXRo
ZXJtL2Nvb2xpbmctbWFwcw0KKFhFTikgaGFuZGxlIC90aGVybWFsLXpvbmVzL0FPLXRoZXJtL2Nv
b2xpbmctbWFwcy9ncHUtc2NhbGluZy1tYXAxDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3Ro
ZXJtYWwtem9uZXMvQU8tdGhlcm0vY29vbGluZy1tYXBzL2dwdS1zY2FsaW5nLW1hcDENCihYRU4p
IC90aGVybWFsLXpvbmVzL0FPLXRoZXJtL2Nvb2xpbmctbWFwcy9ncHUtc2NhbGluZy1tYXAxIHBh
c3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC90aGVybWFsLXpvbmVzL0FP
LXRoZXJtL2Nvb2xpbmctbWFwcy9ncHUtc2NhbGluZy1tYXAxIGlzIGJlaGluZCB0aGUgSU9NTVUg
YW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS90aGVybWFsLXpvbmVzL0FPLXRo
ZXJtL2Nvb2xpbmctbWFwcy9ncHUtc2NhbGluZy1tYXAxDQooWEVOKSBoYW5kbGUgL3RoZXJtYWwt
em9uZXMvQU8tdGhlcm0vY29vbGluZy1tYXBzL2dwdS1zY2FsaW5nLW1hcDINCihYRU4pIGR0X2ly
cV9udW1iZXI6IGRldj0vdGhlcm1hbC16b25lcy9BTy10aGVybS9jb29saW5nLW1hcHMvZ3B1LXNj
YWxpbmctbWFwMg0KKFhFTikgL3RoZXJtYWwtem9uZXMvQU8tdGhlcm0vY29vbGluZy1tYXBzL2dw
dS1zY2FsaW5nLW1hcDIgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYg
L3RoZXJtYWwtem9uZXMvQU8tdGhlcm0vY29vbGluZy1tYXBzL2dwdS1zY2FsaW5nLW1hcDIgaXMg
YmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3Ro
ZXJtYWwtem9uZXMvQU8tdGhlcm0vY29vbGluZy1tYXBzL2dwdS1zY2FsaW5nLW1hcDINCihYRU4p
IGhhbmRsZSAvdGhlcm1hbC16b25lcy9BTy10aGVybS9jb29saW5nLW1hcHMvZ3B1X3NjYWxpbmdf
bWFwMw0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS90aGVybWFsLXpvbmVzL0FPLXRoZXJtL2Nv
b2xpbmctbWFwcy9ncHVfc2NhbGluZ19tYXAzDQooWEVOKSAvdGhlcm1hbC16b25lcy9BTy10aGVy
bS9jb29saW5nLW1hcHMvZ3B1X3NjYWxpbmdfbWFwMyBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAw
DQooWEVOKSBDaGVjayBpZiAvdGhlcm1hbC16b25lcy9BTy10aGVybS9jb29saW5nLW1hcHMvZ3B1
X3NjYWxpbmdfbWFwMyBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2ly
cV9udW1iZXI6IGRldj0vdGhlcm1hbC16b25lcy9BTy10aGVybS9jb29saW5nLW1hcHMvZ3B1X3Nj
YWxpbmdfbWFwMw0KKFhFTikgaGFuZGxlIC90aGVybWFsLXpvbmVzL0FPLXRoZXJtL2Nvb2xpbmct
bWFwcy9ncHUtc2NhbGluZy1tYXA0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3RoZXJtYWwt
em9uZXMvQU8tdGhlcm0vY29vbGluZy1tYXBzL2dwdS1zY2FsaW5nLW1hcDQNCihYRU4pIC90aGVy
bWFsLXpvbmVzL0FPLXRoZXJtL2Nvb2xpbmctbWFwcy9ncHUtc2NhbGluZy1tYXA0IHBhc3N0aHJv
dWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC90aGVybWFsLXpvbmVzL0FPLXRoZXJt
L2Nvb2xpbmctbWFwcy9ncHUtc2NhbGluZy1tYXA0IGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFk
ZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS90aGVybWFsLXpvbmVzL0FPLXRoZXJtL2Nv
b2xpbmctbWFwcy9ncHUtc2NhbGluZy1tYXA0DQooWEVOKSBoYW5kbGUgL3RoZXJtYWwtem9uZXMv
QU8tdGhlcm0vY29vbGluZy1tYXBzL2dwdS1zY2FsaW5nLW1hcDUNCihYRU4pIGR0X2lycV9udW1i
ZXI6IGRldj0vdGhlcm1hbC16b25lcy9BTy10aGVybS9jb29saW5nLW1hcHMvZ3B1LXNjYWxpbmct
bWFwNQ0KKFhFTikgL3RoZXJtYWwtem9uZXMvQU8tdGhlcm0vY29vbGluZy1tYXBzL2dwdS1zY2Fs
aW5nLW1hcDUgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3RoZXJt
YWwtem9uZXMvQU8tdGhlcm0vY29vbGluZy1tYXBzL2dwdS1zY2FsaW5nLW1hcDUgaXMgYmVoaW5k
IHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3RoZXJtYWwt
em9uZXMvQU8tdGhlcm0vY29vbGluZy1tYXBzL2dwdS1zY2FsaW5nLW1hcDUNCihYRU4pIGhhbmRs
ZSAvdGhlcm1hbC16b25lcy9BTy10aGVybS9jb29saW5nLW1hcHMvZ3B1LXZtYXgtbWFwMQ0KKFhF
TikgZHRfaXJxX251bWJlcjogZGV2PS90aGVybWFsLXpvbmVzL0FPLXRoZXJtL2Nvb2xpbmctbWFw
cy9ncHUtdm1heC1tYXAxDQooWEVOKSAvdGhlcm1hbC16b25lcy9BTy10aGVybS9jb29saW5nLW1h
cHMvZ3B1LXZtYXgtbWFwMSBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBp
ZiAvdGhlcm1hbC16b25lcy9BTy10aGVybS9jb29saW5nLW1hcHMvZ3B1LXZtYXgtbWFwMSBpcyBi
ZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vdGhl
cm1hbC16b25lcy9BTy10aGVybS9jb29saW5nLW1hcHMvZ3B1LXZtYXgtbWFwMQ0KKFhFTikgaGFu
ZGxlIC90aGVybWFsLXpvbmVzL0FPLXRoZXJtL2Nvb2xpbmctbWFwcy9jb3JlX2R2ZnNfZmxvb3Jf
bWFwMA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS90aGVybWFsLXpvbmVzL0FPLXRoZXJtL2Nv
b2xpbmctbWFwcy9jb3JlX2R2ZnNfZmxvb3JfbWFwMA0KKFhFTikgL3RoZXJtYWwtem9uZXMvQU8t
dGhlcm0vY29vbGluZy1tYXBzL2NvcmVfZHZmc19mbG9vcl9tYXAwIHBhc3N0aHJvdWdoID0gMSBu
YWRkciA9IDANCihYRU4pIENoZWNrIGlmIC90aGVybWFsLXpvbmVzL0FPLXRoZXJtL2Nvb2xpbmct
bWFwcy9jb3JlX2R2ZnNfZmxvb3JfbWFwMCBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQN
CihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vdGhlcm1hbC16b25lcy9BTy10aGVybS9jb29saW5n
LW1hcHMvY29yZV9kdmZzX2Zsb29yX21hcDANCihYRU4pIGhhbmRsZSAvdGhlcm1hbC16b25lcy9B
Ty10aGVybS9jb29saW5nLW1hcHMvY29yZV9kdmZzX2NhcF9tYXAwDQooWEVOKSBkdF9pcnFfbnVt
YmVyOiBkZXY9L3RoZXJtYWwtem9uZXMvQU8tdGhlcm0vY29vbGluZy1tYXBzL2NvcmVfZHZmc19j
YXBfbWFwMA0KKFhFTikgL3RoZXJtYWwtem9uZXMvQU8tdGhlcm0vY29vbGluZy1tYXBzL2NvcmVf
ZHZmc19jYXBfbWFwMCBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAv
dGhlcm1hbC16b25lcy9BTy10aGVybS9jb29saW5nLW1hcHMvY29yZV9kdmZzX2NhcF9tYXAwIGlz
IGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS90
aGVybWFsLXpvbmVzL0FPLXRoZXJtL2Nvb2xpbmctbWFwcy9jb3JlX2R2ZnNfY2FwX21hcDANCihY
RU4pIGhhbmRsZSAvdGhlcm1hbC16b25lcy9BTy10aGVybS9jb29saW5nLW1hcHMvZGZsbC1mbG9v
ci1tYXAwDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3RoZXJtYWwtem9uZXMvQU8tdGhlcm0v
Y29vbGluZy1tYXBzL2RmbGwtZmxvb3ItbWFwMA0KKFhFTikgL3RoZXJtYWwtem9uZXMvQU8tdGhl
cm0vY29vbGluZy1tYXBzL2RmbGwtZmxvb3ItbWFwMCBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAw
DQooWEVOKSBDaGVjayBpZiAvdGhlcm1hbC16b25lcy9BTy10aGVybS9jb29saW5nLW1hcHMvZGZs
bC1mbG9vci1tYXAwIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJx
X251bWJlcjogZGV2PS90aGVybWFsLXpvbmVzL0FPLXRoZXJtL2Nvb2xpbmctbWFwcy9kZmxsLWZs
b29yLW1hcDANCihYRU4pIGhhbmRsZSAvdGhlcm1hbC16b25lcy9DUFUtdGhlcm0NCihYRU4pIGR0
X2lycV9udW1iZXI6IGRldj0vdGhlcm1hbC16b25lcy9DUFUtdGhlcm0NCihYRU4pIC90aGVybWFs
LXpvbmVzL0NQVS10aGVybSBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBp
ZiAvdGhlcm1hbC16b25lcy9DUFUtdGhlcm0gaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0
DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3RoZXJtYWwtem9uZXMvQ1BVLXRoZXJtDQooWEVO
KSBoYW5kbGUgL3RoZXJtYWwtem9uZXMvQ1BVLXRoZXJtL3RoZXJtYWwtem9uZS1wYXJhbXMNCihY
RU4pIGR0X2lycV9udW1iZXI6IGRldj0vdGhlcm1hbC16b25lcy9DUFUtdGhlcm0vdGhlcm1hbC16
b25lLXBhcmFtcw0KKFhFTikgL3RoZXJtYWwtem9uZXMvQ1BVLXRoZXJtL3RoZXJtYWwtem9uZS1w
YXJhbXMgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3RoZXJtYWwt
em9uZXMvQ1BVLXRoZXJtL3RoZXJtYWwtem9uZS1wYXJhbXMgaXMgYmVoaW5kIHRoZSBJT01NVSBh
bmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3RoZXJtYWwtem9uZXMvQ1BVLXRo
ZXJtL3RoZXJtYWwtem9uZS1wYXJhbXMNCihYRU4pIGhhbmRsZSAvdGhlcm1hbC16b25lcy9DUFUt
dGhlcm0vdHJpcHMNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vdGhlcm1hbC16b25lcy9DUFUt
dGhlcm0vdHJpcHMNCihYRU4pIC90aGVybWFsLXpvbmVzL0NQVS10aGVybS90cmlwcyBwYXNzdGhy
b3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvdGhlcm1hbC16b25lcy9DUFUtdGhl
cm0vdHJpcHMgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVt
YmVyOiBkZXY9L3RoZXJtYWwtem9uZXMvQ1BVLXRoZXJtL3RyaXBzDQooWEVOKSBoYW5kbGUgL3Ro
ZXJtYWwtem9uZXMvQ1BVLXRoZXJtL3RyaXBzL2RmbGwtY2FwLXRyaXAwDQooWEVOKSBkdF9pcnFf
bnVtYmVyOiBkZXY9L3RoZXJtYWwtem9uZXMvQ1BVLXRoZXJtL3RyaXBzL2RmbGwtY2FwLXRyaXAw
DQooWEVOKSAvdGhlcm1hbC16b25lcy9DUFUtdGhlcm0vdHJpcHMvZGZsbC1jYXAtdHJpcDAgcGFz
c3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3RoZXJtYWwtem9uZXMvQ1BV
LXRoZXJtL3RyaXBzL2RmbGwtY2FwLXRyaXAwIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBp
dA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS90aGVybWFsLXpvbmVzL0NQVS10aGVybS90cmlw
cy9kZmxsLWNhcC10cmlwMA0KKFhFTikgaGFuZGxlIC90aGVybWFsLXpvbmVzL0NQVS10aGVybS90
cmlwcy9kZmxsLWNhcC10cmlwMQ0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS90aGVybWFsLXpv
bmVzL0NQVS10aGVybS90cmlwcy9kZmxsLWNhcC10cmlwMQ0KKFhFTikgL3RoZXJtYWwtem9uZXMv
Q1BVLXRoZXJtL3RyaXBzL2RmbGwtY2FwLXRyaXAxIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDAN
CihYRU4pIENoZWNrIGlmIC90aGVybWFsLXpvbmVzL0NQVS10aGVybS90cmlwcy9kZmxsLWNhcC10
cmlwMSBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6
IGRldj0vdGhlcm1hbC16b25lcy9DUFUtdGhlcm0vdHJpcHMvZGZsbC1jYXAtdHJpcDENCihYRU4p
IGhhbmRsZSAvdGhlcm1hbC16b25lcy9DUFUtdGhlcm0vdHJpcHMvY3B1X2NyaXRpY2FsDQooWEVO
KSBkdF9pcnFfbnVtYmVyOiBkZXY9L3RoZXJtYWwtem9uZXMvQ1BVLXRoZXJtL3RyaXBzL2NwdV9j
cml0aWNhbA0KKFhFTikgL3RoZXJtYWwtem9uZXMvQ1BVLXRoZXJtL3RyaXBzL2NwdV9jcml0aWNh
bCBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvdGhlcm1hbC16b25l
cy9DUFUtdGhlcm0vdHJpcHMvY3B1X2NyaXRpY2FsIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFk
ZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS90aGVybWFsLXpvbmVzL0NQVS10aGVybS90
cmlwcy9jcHVfY3JpdGljYWwNCihYRU4pIGhhbmRsZSAvdGhlcm1hbC16b25lcy9DUFUtdGhlcm0v
dHJpcHMvY3B1X2hlYXZ5DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3RoZXJtYWwtem9uZXMv
Q1BVLXRoZXJtL3RyaXBzL2NwdV9oZWF2eQ0KKFhFTikgL3RoZXJtYWwtem9uZXMvQ1BVLXRoZXJt
L3RyaXBzL2NwdV9oZWF2eSBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBp
ZiAvdGhlcm1hbC16b25lcy9DUFUtdGhlcm0vdHJpcHMvY3B1X2hlYXZ5IGlzIGJlaGluZCB0aGUg
SU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS90aGVybWFsLXpvbmVz
L0NQVS10aGVybS90cmlwcy9jcHVfaGVhdnkNCihYRU4pIGhhbmRsZSAvdGhlcm1hbC16b25lcy9D
UFUtdGhlcm0vdHJpcHMvY3B1X3Rocm90dGxlDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3Ro
ZXJtYWwtem9uZXMvQ1BVLXRoZXJtL3RyaXBzL2NwdV90aHJvdHRsZQ0KKFhFTikgL3RoZXJtYWwt
em9uZXMvQ1BVLXRoZXJtL3RyaXBzL2NwdV90aHJvdHRsZSBwYXNzdGhyb3VnaCA9IDEgbmFkZHIg
PSAwDQooWEVOKSBDaGVjayBpZiAvdGhlcm1hbC16b25lcy9DUFUtdGhlcm0vdHJpcHMvY3B1X3Ro
cm90dGxlIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJl
cjogZGV2PS90aGVybWFsLXpvbmVzL0NQVS10aGVybS90cmlwcy9jcHVfdGhyb3R0bGUNCihYRU4p
IGhhbmRsZSAvdGhlcm1hbC16b25lcy9DUFUtdGhlcm0vY29vbGluZy1tYXBzDQooWEVOKSBkdF9p
cnFfbnVtYmVyOiBkZXY9L3RoZXJtYWwtem9uZXMvQ1BVLXRoZXJtL2Nvb2xpbmctbWFwcw0KKFhF
TikgL3RoZXJtYWwtem9uZXMvQ1BVLXRoZXJtL2Nvb2xpbmctbWFwcyBwYXNzdGhyb3VnaCA9IDEg
bmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvdGhlcm1hbC16b25lcy9DUFUtdGhlcm0vY29vbGlu
Zy1tYXBzIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJl
cjogZGV2PS90aGVybWFsLXpvbmVzL0NQVS10aGVybS9jb29saW5nLW1hcHMNCihYRU4pIGhhbmRs
ZSAvdGhlcm1hbC16b25lcy9DUFUtdGhlcm0vY29vbGluZy1tYXBzL21hcDENCihYRU4pIGR0X2ly
cV9udW1iZXI6IGRldj0vdGhlcm1hbC16b25lcy9DUFUtdGhlcm0vY29vbGluZy1tYXBzL21hcDEN
CihYRU4pIC90aGVybWFsLXpvbmVzL0NQVS10aGVybS9jb29saW5nLW1hcHMvbWFwMSBwYXNzdGhy
b3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvdGhlcm1hbC16b25lcy9DUFUtdGhl
cm0vY29vbGluZy1tYXBzL21hcDEgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVO
KSBkdF9pcnFfbnVtYmVyOiBkZXY9L3RoZXJtYWwtem9uZXMvQ1BVLXRoZXJtL2Nvb2xpbmctbWFw
cy9tYXAxDQooWEVOKSBoYW5kbGUgL3RoZXJtYWwtem9uZXMvQ1BVLXRoZXJtL2Nvb2xpbmctbWFw
cy9tYXAyDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3RoZXJtYWwtem9uZXMvQ1BVLXRoZXJt
L2Nvb2xpbmctbWFwcy9tYXAyDQooWEVOKSAvdGhlcm1hbC16b25lcy9DUFUtdGhlcm0vY29vbGlu
Zy1tYXBzL21hcDIgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3Ro
ZXJtYWwtem9uZXMvQ1BVLXRoZXJtL2Nvb2xpbmctbWFwcy9tYXAyIGlzIGJlaGluZCB0aGUgSU9N
TVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS90aGVybWFsLXpvbmVzL0NQ
VS10aGVybS9jb29saW5nLW1hcHMvbWFwMg0KKFhFTikgaGFuZGxlIC90aGVybWFsLXpvbmVzL0NQ
VS10aGVybS9jb29saW5nLW1hcHMvZGZsbC1jYXAtbWFwMA0KKFhFTikgZHRfaXJxX251bWJlcjog
ZGV2PS90aGVybWFsLXpvbmVzL0NQVS10aGVybS9jb29saW5nLW1hcHMvZGZsbC1jYXAtbWFwMA0K
KFhFTikgL3RoZXJtYWwtem9uZXMvQ1BVLXRoZXJtL2Nvb2xpbmctbWFwcy9kZmxsLWNhcC1tYXAw
IHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC90aGVybWFsLXpvbmVz
L0NQVS10aGVybS9jb29saW5nLW1hcHMvZGZsbC1jYXAtbWFwMCBpcyBiZWhpbmQgdGhlIElPTU1V
IGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vdGhlcm1hbC16b25lcy9DUFUt
dGhlcm0vY29vbGluZy1tYXBzL2RmbGwtY2FwLW1hcDANCihYRU4pIGhhbmRsZSAvdGhlcm1hbC16
b25lcy9DUFUtdGhlcm0vY29vbGluZy1tYXBzL2RmbGwtY2FwLW1hcDENCihYRU4pIGR0X2lycV9u
dW1iZXI6IGRldj0vdGhlcm1hbC16b25lcy9DUFUtdGhlcm0vY29vbGluZy1tYXBzL2RmbGwtY2Fw
LW1hcDENCihYRU4pIC90aGVybWFsLXpvbmVzL0NQVS10aGVybS9jb29saW5nLW1hcHMvZGZsbC1j
YXAtbWFwMSBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvdGhlcm1h
bC16b25lcy9DUFUtdGhlcm0vY29vbGluZy1tYXBzL2RmbGwtY2FwLW1hcDEgaXMgYmVoaW5kIHRo
ZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3RoZXJtYWwtem9u
ZXMvQ1BVLXRoZXJtL2Nvb2xpbmctbWFwcy9kZmxsLWNhcC1tYXAxDQooWEVOKSBoYW5kbGUgL3Ro
ZXJtYWwtem9uZXMvR1BVLXRoZXJtDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3RoZXJtYWwt
em9uZXMvR1BVLXRoZXJtDQooWEVOKSAvdGhlcm1hbC16b25lcy9HUFUtdGhlcm0gcGFzc3Rocm91
Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3RoZXJtYWwtem9uZXMvR1BVLXRoZXJt
IGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2
PS90aGVybWFsLXpvbmVzL0dQVS10aGVybQ0KKFhFTikgaGFuZGxlIC90aGVybWFsLXpvbmVzL0dQ
VS10aGVybS90aGVybWFsLXpvbmUtcGFyYW1zDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3Ro
ZXJtYWwtem9uZXMvR1BVLXRoZXJtL3RoZXJtYWwtem9uZS1wYXJhbXMNCihYRU4pIC90aGVybWFs
LXpvbmVzL0dQVS10aGVybS90aGVybWFsLXpvbmUtcGFyYW1zIHBhc3N0aHJvdWdoID0gMSBuYWRk
ciA9IDANCihYRU4pIENoZWNrIGlmIC90aGVybWFsLXpvbmVzL0dQVS10aGVybS90aGVybWFsLXpv
bmUtcGFyYW1zIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251
bWJlcjogZGV2PS90aGVybWFsLXpvbmVzL0dQVS10aGVybS90aGVybWFsLXpvbmUtcGFyYW1zDQoo
WEVOKSBoYW5kbGUgL3RoZXJtYWwtem9uZXMvR1BVLXRoZXJtL3RyaXBzDQooWEVOKSBkdF9pcnFf
bnVtYmVyOiBkZXY9L3RoZXJtYWwtem9uZXMvR1BVLXRoZXJtL3RyaXBzDQooWEVOKSAvdGhlcm1h
bC16b25lcy9HUFUtdGhlcm0vdHJpcHMgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikg
Q2hlY2sgaWYgL3RoZXJtYWwtem9uZXMvR1BVLXRoZXJtL3RyaXBzIGlzIGJlaGluZCB0aGUgSU9N
TVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS90aGVybWFsLXpvbmVzL0dQ
VS10aGVybS90cmlwcw0KKFhFTikgaGFuZGxlIC90aGVybWFsLXpvbmVzL0dQVS10aGVybS90cmlw
cy9ncHVfY3JpdGljYWwNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vdGhlcm1hbC16b25lcy9H
UFUtdGhlcm0vdHJpcHMvZ3B1X2NyaXRpY2FsDQooWEVOKSAvdGhlcm1hbC16b25lcy9HUFUtdGhl
cm0vdHJpcHMvZ3B1X2NyaXRpY2FsIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENo
ZWNrIGlmIC90aGVybWFsLXpvbmVzL0dQVS10aGVybS90cmlwcy9ncHVfY3JpdGljYWwgaXMgYmVo
aW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3RoZXJt
YWwtem9uZXMvR1BVLXRoZXJtL3RyaXBzL2dwdV9jcml0aWNhbA0KKFhFTikgaGFuZGxlIC90aGVy
bWFsLXpvbmVzL0dQVS10aGVybS90cmlwcy9ncHVfaGVhdnkNCihYRU4pIGR0X2lycV9udW1iZXI6
IGRldj0vdGhlcm1hbC16b25lcy9HUFUtdGhlcm0vdHJpcHMvZ3B1X2hlYXZ5DQooWEVOKSAvdGhl
cm1hbC16b25lcy9HUFUtdGhlcm0vdHJpcHMvZ3B1X2hlYXZ5IHBhc3N0aHJvdWdoID0gMSBuYWRk
ciA9IDANCihYRU4pIENoZWNrIGlmIC90aGVybWFsLXpvbmVzL0dQVS10aGVybS90cmlwcy9ncHVf
aGVhdnkgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVy
OiBkZXY9L3RoZXJtYWwtem9uZXMvR1BVLXRoZXJtL3RyaXBzL2dwdV9oZWF2eQ0KKFhFTikgaGFu
ZGxlIC90aGVybWFsLXpvbmVzL0dQVS10aGVybS90cmlwcy9ncHVfdGhyb3R0bGUNCihYRU4pIGR0
X2lycV9udW1iZXI6IGRldj0vdGhlcm1hbC16b25lcy9HUFUtdGhlcm0vdHJpcHMvZ3B1X3Rocm90
dGxlDQooWEVOKSAvdGhlcm1hbC16b25lcy9HUFUtdGhlcm0vdHJpcHMvZ3B1X3Rocm90dGxlIHBh
c3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC90aGVybWFsLXpvbmVzL0dQ
VS10aGVybS90cmlwcy9ncHVfdGhyb3R0bGUgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0
DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3RoZXJtYWwtem9uZXMvR1BVLXRoZXJtL3RyaXBz
L2dwdV90aHJvdHRsZQ0KKFhFTikgaGFuZGxlIC90aGVybWFsLXpvbmVzL0dQVS10aGVybS9jb29s
aW5nLW1hcHMNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vdGhlcm1hbC16b25lcy9HUFUtdGhl
cm0vY29vbGluZy1tYXBzDQooWEVOKSAvdGhlcm1hbC16b25lcy9HUFUtdGhlcm0vY29vbGluZy1t
YXBzIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC90aGVybWFsLXpv
bmVzL0dQVS10aGVybS9jb29saW5nLW1hcHMgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0
DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3RoZXJtYWwtem9uZXMvR1BVLXRoZXJtL2Nvb2xp
bmctbWFwcw0KKFhFTikgaGFuZGxlIC90aGVybWFsLXpvbmVzL0dQVS10aGVybS9jb29saW5nLW1h
cHMvbWFwMQ0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS90aGVybWFsLXpvbmVzL0dQVS10aGVy
bS9jb29saW5nLW1hcHMvbWFwMQ0KKFhFTikgL3RoZXJtYWwtem9uZXMvR1BVLXRoZXJtL2Nvb2xp
bmctbWFwcy9tYXAxIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC90
aGVybWFsLXpvbmVzL0dQVS10aGVybS9jb29saW5nLW1hcHMvbWFwMSBpcyBiZWhpbmQgdGhlIElP
TU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vdGhlcm1hbC16b25lcy9H
UFUtdGhlcm0vY29vbGluZy1tYXBzL21hcDENCihYRU4pIGhhbmRsZSAvdGhlcm1hbC16b25lcy9H
UFUtdGhlcm0vY29vbGluZy1tYXBzL21hcDINCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vdGhl
cm1hbC16b25lcy9HUFUtdGhlcm0vY29vbGluZy1tYXBzL21hcDINCihYRU4pIC90aGVybWFsLXpv
bmVzL0dQVS10aGVybS9jb29saW5nLW1hcHMvbWFwMiBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAw
DQooWEVOKSBDaGVjayBpZiAvdGhlcm1hbC16b25lcy9HUFUtdGhlcm0vY29vbGluZy1tYXBzL21h
cDIgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBk
ZXY9L3RoZXJtYWwtem9uZXMvR1BVLXRoZXJtL2Nvb2xpbmctbWFwcy9tYXAyDQooWEVOKSBoYW5k
bGUgL3RoZXJtYWwtem9uZXMvUExMLXRoZXJtDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3Ro
ZXJtYWwtem9uZXMvUExMLXRoZXJtDQooWEVOKSAvdGhlcm1hbC16b25lcy9QTEwtdGhlcm0gcGFz
c3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3RoZXJtYWwtem9uZXMvUExM
LXRoZXJtIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJl
cjogZGV2PS90aGVybWFsLXpvbmVzL1BMTC10aGVybQ0KKFhFTikgaGFuZGxlIC90aGVybWFsLXpv
bmVzL1BMTC10aGVybS90aGVybWFsLXpvbmUtcGFyYW1zDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBk
ZXY9L3RoZXJtYWwtem9uZXMvUExMLXRoZXJtL3RoZXJtYWwtem9uZS1wYXJhbXMNCihYRU4pIC90
aGVybWFsLXpvbmVzL1BMTC10aGVybS90aGVybWFsLXpvbmUtcGFyYW1zIHBhc3N0aHJvdWdoID0g
MSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC90aGVybWFsLXpvbmVzL1BMTC10aGVybS90aGVy
bWFsLXpvbmUtcGFyYW1zIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRf
aXJxX251bWJlcjogZGV2PS90aGVybWFsLXpvbmVzL1BMTC10aGVybS90aGVybWFsLXpvbmUtcGFy
YW1zDQooWEVOKSBoYW5kbGUgL3RoZXJtYWwtem9uZXMvUExMLXRoZXJtL3RyaXBzDQooWEVOKSBk
dF9pcnFfbnVtYmVyOiBkZXY9L3RoZXJtYWwtem9uZXMvUExMLXRoZXJtL3RyaXBzDQooWEVOKSAv
dGhlcm1hbC16b25lcy9QTEwtdGhlcm0vdHJpcHMgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0K
KFhFTikgQ2hlY2sgaWYgL3RoZXJtYWwtem9uZXMvUExMLXRoZXJtL3RyaXBzIGlzIGJlaGluZCB0
aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS90aGVybWFsLXpv
bmVzL1BMTC10aGVybS90cmlwcw0KKFhFTikgaGFuZGxlIC90aGVybWFsLXpvbmVzL1BMTC10aGVy
bS90cmlwcy9kcmFtLXRocm90dGxlDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3RoZXJtYWwt
em9uZXMvUExMLXRoZXJtL3RyaXBzL2RyYW0tdGhyb3R0bGUNCihYRU4pIC90aGVybWFsLXpvbmVz
L1BMTC10aGVybS90cmlwcy9kcmFtLXRocm90dGxlIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDAN
CihYRU4pIENoZWNrIGlmIC90aGVybWFsLXpvbmVzL1BMTC10aGVybS90cmlwcy9kcmFtLXRocm90
dGxlIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjog
ZGV2PS90aGVybWFsLXpvbmVzL1BMTC10aGVybS90cmlwcy9kcmFtLXRocm90dGxlDQooWEVOKSBo
YW5kbGUgL3RoZXJtYWwtem9uZXMvUExMLXRoZXJtL2Nvb2xpbmctbWFwcw0KKFhFTikgZHRfaXJx
X251bWJlcjogZGV2PS90aGVybWFsLXpvbmVzL1BMTC10aGVybS9jb29saW5nLW1hcHMNCihYRU4p
IC90aGVybWFsLXpvbmVzL1BMTC10aGVybS9jb29saW5nLW1hcHMgcGFzc3Rocm91Z2ggPSAxIG5h
ZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3RoZXJtYWwtem9uZXMvUExMLXRoZXJtL2Nvb2xpbmct
bWFwcyBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6
IGRldj0vdGhlcm1hbC16b25lcy9QTEwtdGhlcm0vY29vbGluZy1tYXBzDQooWEVOKSBoYW5kbGUg
L3RoZXJtYWwtem9uZXMvUExMLXRoZXJtL2Nvb2xpbmctbWFwcy9tYXAtdGVncmEtZHJhbQ0KKFhF
TikgZHRfaXJxX251bWJlcjogZGV2PS90aGVybWFsLXpvbmVzL1BMTC10aGVybS9jb29saW5nLW1h
cHMvbWFwLXRlZ3JhLWRyYW0NCihYRU4pIC90aGVybWFsLXpvbmVzL1BMTC10aGVybS9jb29saW5n
LW1hcHMvbWFwLXRlZ3JhLWRyYW0gcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hl
Y2sgaWYgL3RoZXJtYWwtem9uZXMvUExMLXRoZXJtL2Nvb2xpbmctbWFwcy9tYXAtdGVncmEtZHJh
bSBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRl
dj0vdGhlcm1hbC16b25lcy9QTEwtdGhlcm0vY29vbGluZy1tYXBzL21hcC10ZWdyYS1kcmFtDQoo
WEVOKSBoYW5kbGUgL3RoZXJtYWwtem9uZXMvUE1JQy1EaWUNCihYRU4pIGR0X2lycV9udW1iZXI6
IGRldj0vdGhlcm1hbC16b25lcy9QTUlDLURpZQ0KKFhFTikgL3RoZXJtYWwtem9uZXMvUE1JQy1E
aWUgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3RoZXJtYWwtem9u
ZXMvUE1JQy1EaWUgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFf
bnVtYmVyOiBkZXY9L3RoZXJtYWwtem9uZXMvUE1JQy1EaWUNCihYRU4pIGhhbmRsZSAvdGhlcm1h
bC16b25lcy9QTUlDLURpZS90cmlwcw0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS90aGVybWFs
LXpvbmVzL1BNSUMtRGllL3RyaXBzDQooWEVOKSAvdGhlcm1hbC16b25lcy9QTUlDLURpZS90cmlw
cyBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvdGhlcm1hbC16b25l
cy9QTUlDLURpZS90cmlwcyBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0
X2lycV9udW1iZXI6IGRldj0vdGhlcm1hbC16b25lcy9QTUlDLURpZS90cmlwcw0KKFhFTikgaGFu
ZGxlIC90aGVybWFsLXpvbmVzL1BNSUMtRGllL3RyaXBzL2hvdC1kaWUNCihYRU4pIGR0X2lycV9u
dW1iZXI6IGRldj0vdGhlcm1hbC16b25lcy9QTUlDLURpZS90cmlwcy9ob3QtZGllDQooWEVOKSAv
dGhlcm1hbC16b25lcy9QTUlDLURpZS90cmlwcy9ob3QtZGllIHBhc3N0aHJvdWdoID0gMSBuYWRk
ciA9IDANCihYRU4pIENoZWNrIGlmIC90aGVybWFsLXpvbmVzL1BNSUMtRGllL3RyaXBzL2hvdC1k
aWUgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBk
ZXY9L3RoZXJtYWwtem9uZXMvUE1JQy1EaWUvdHJpcHMvaG90LWRpZQ0KKFhFTikgaGFuZGxlIC90
aGVybWFsLXpvbmVzL1BNSUMtRGllL2Nvb2xpbmctbWFwcw0KKFhFTikgZHRfaXJxX251bWJlcjog
ZGV2PS90aGVybWFsLXpvbmVzL1BNSUMtRGllL2Nvb2xpbmctbWFwcw0KKFhFTikgL3RoZXJtYWwt
em9uZXMvUE1JQy1EaWUvY29vbGluZy1tYXBzIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihY
RU4pIENoZWNrIGlmIC90aGVybWFsLXpvbmVzL1BNSUMtRGllL2Nvb2xpbmctbWFwcyBpcyBiZWhp
bmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vdGhlcm1h
bC16b25lcy9QTUlDLURpZS9jb29saW5nLW1hcHMNCihYRU4pIGhhbmRsZSAvdGhlcm1hbC16b25l
cy9QTUlDLURpZS9jb29saW5nLW1hcHMvbWFwMA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS90
aGVybWFsLXpvbmVzL1BNSUMtRGllL2Nvb2xpbmctbWFwcy9tYXAwDQooWEVOKSAvdGhlcm1hbC16
b25lcy9QTUlDLURpZS9jb29saW5nLW1hcHMvbWFwMCBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAw
DQooWEVOKSBDaGVjayBpZiAvdGhlcm1hbC16b25lcy9QTUlDLURpZS9jb29saW5nLW1hcHMvbWFw
MCBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRl
dj0vdGhlcm1hbC16b25lcy9QTUlDLURpZS9jb29saW5nLW1hcHMvbWFwMA0KKFhFTikgaGFuZGxl
IC9jb3JlX2R2ZnNfY2Rldl9mbG9vcg0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9jb3JlX2R2
ZnNfY2Rldl9mbG9vcg0KKFhFTikgL2NvcmVfZHZmc19jZGV2X2Zsb29yIHBhc3N0aHJvdWdoID0g
MSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9jb3JlX2R2ZnNfY2Rldl9mbG9vciBpcyBiZWhp
bmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vY29yZV9k
dmZzX2NkZXZfZmxvb3INCihYRU4pIGhhbmRsZSAvY29yZV9kdmZzX2NkZXZfY2FwDQooWEVOKSBk
dF9pcnFfbnVtYmVyOiBkZXY9L2NvcmVfZHZmc19jZGV2X2NhcA0KKFhFTikgL2NvcmVfZHZmc19j
ZGV2X2NhcCBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvY29yZV9k
dmZzX2NkZXZfY2FwIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJx
X251bWJlcjogZGV2PS9jb3JlX2R2ZnNfY2Rldl9jYXANCihYRU4pIGhhbmRsZSAvcG93ZXItZG9t
YWluDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3Bvd2VyLWRvbWFpbg0KKFhFTikgL3Bvd2Vy
LWRvbWFpbiBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvcG93ZXIt
ZG9tYWluIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJl
cjogZGV2PS9wb3dlci1kb21haW4NCihYRU4pIGhhbmRsZSAvcG93ZXItZG9tYWluL2hvc3QxeC1w
ZA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9wb3dlci1kb21haW4vaG9zdDF4LXBkDQooWEVO
KSAvcG93ZXItZG9tYWluL2hvc3QxeC1wZCBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVO
KSBDaGVjayBpZiAvcG93ZXItZG9tYWluL2hvc3QxeC1wZCBpcyBiZWhpbmQgdGhlIElPTU1VIGFu
ZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcG93ZXItZG9tYWluL2hvc3QxeC1w
ZA0KKFhFTikgaGFuZGxlIC9wb3dlci1kb21haW4vYXBlLXBkDQooWEVOKSBkdF9pcnFfbnVtYmVy
OiBkZXY9L3Bvd2VyLWRvbWFpbi9hcGUtcGQNCihYRU4pIC9wb3dlci1kb21haW4vYXBlLXBkIHBh
c3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9wb3dlci1kb21haW4vYXBl
LXBkIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjog
ZGV2PS9wb3dlci1kb21haW4vYXBlLXBkDQooWEVOKSBoYW5kbGUgL3Bvd2VyLWRvbWFpbi9hZHNw
LXBkDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3Bvd2VyLWRvbWFpbi9hZHNwLXBkDQooWEVO
KSAvcG93ZXItZG9tYWluL2Fkc3AtcGQgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikg
Q2hlY2sgaWYgL3Bvd2VyLWRvbWFpbi9hZHNwLXBkIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFk
ZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9wb3dlci1kb21haW4vYWRzcC1wZA0KKFhF
TikgaGFuZGxlIC9wb3dlci1kb21haW4vdHNlYy1wZA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2
PS9wb3dlci1kb21haW4vdHNlYy1wZA0KKFhFTikgL3Bvd2VyLWRvbWFpbi90c2VjLXBkIHBhc3N0
aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9wb3dlci1kb21haW4vdHNlYy1w
ZCBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRl
dj0vcG93ZXItZG9tYWluL3RzZWMtcGQNCihYRU4pIGhhbmRsZSAvcG93ZXItZG9tYWluL252ZGVj
LXBkDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3Bvd2VyLWRvbWFpbi9udmRlYy1wZA0KKFhF
TikgL3Bvd2VyLWRvbWFpbi9udmRlYy1wZCBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVO
KSBDaGVjayBpZiAvcG93ZXItZG9tYWluL252ZGVjLXBkIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5k
IGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9wb3dlci1kb21haW4vbnZkZWMtcGQN
CihYRU4pIGhhbmRsZSAvcG93ZXItZG9tYWluL3ZlMi1wZA0KKFhFTikgZHRfaXJxX251bWJlcjog
ZGV2PS9wb3dlci1kb21haW4vdmUyLXBkDQooWEVOKSAvcG93ZXItZG9tYWluL3ZlMi1wZCBwYXNz
dGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvcG93ZXItZG9tYWluL3ZlMi1w
ZCBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRl
dj0vcG93ZXItZG9tYWluL3ZlMi1wZA0KKFhFTikgaGFuZGxlIC9wb3dlci1kb21haW4vdmljMDMt
cGQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcG93ZXItZG9tYWluL3ZpYzAzLXBkDQooWEVO
KSAvcG93ZXItZG9tYWluL3ZpYzAzLXBkIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4p
IENoZWNrIGlmIC9wb3dlci1kb21haW4vdmljMDMtcGQgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQg
YWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3Bvd2VyLWRvbWFpbi92aWMwMy1wZA0K
KFhFTikgaGFuZGxlIC9wb3dlci1kb21haW4vbXNlbmMtcGQNCihYRU4pIGR0X2lycV9udW1iZXI6
IGRldj0vcG93ZXItZG9tYWluL21zZW5jLXBkDQooWEVOKSAvcG93ZXItZG9tYWluL21zZW5jLXBk
IHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9wb3dlci1kb21haW4v
bXNlbmMtcGQgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVt
YmVyOiBkZXY9L3Bvd2VyLWRvbWFpbi9tc2VuYy1wZA0KKFhFTikgaGFuZGxlIC9wb3dlci1kb21h
aW4vbnZqcGctcGQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcG93ZXItZG9tYWluL252anBn
LXBkDQooWEVOKSAvcG93ZXItZG9tYWluL252anBnLXBkIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9
IDANCihYRU4pIENoZWNrIGlmIC9wb3dlci1kb21haW4vbnZqcGctcGQgaXMgYmVoaW5kIHRoZSBJ
T01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3Bvd2VyLWRvbWFpbi9u
dmpwZy1wZA0KKFhFTikgaGFuZGxlIC9wb3dlci1kb21haW4vcGNpZS1wZA0KKFhFTikgZHRfaXJx
X251bWJlcjogZGV2PS9wb3dlci1kb21haW4vcGNpZS1wZA0KKFhFTikgL3Bvd2VyLWRvbWFpbi9w
Y2llLXBkIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9wb3dlci1k
b21haW4vcGNpZS1wZCBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2ly
cV9udW1iZXI6IGRldj0vcG93ZXItZG9tYWluL3BjaWUtcGQNCihYRU4pIGhhbmRsZSAvcG93ZXIt
ZG9tYWluL3ZlLXBkDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3Bvd2VyLWRvbWFpbi92ZS1w
ZA0KKFhFTikgL3Bvd2VyLWRvbWFpbi92ZS1wZCBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQoo
WEVOKSBDaGVjayBpZiAvcG93ZXItZG9tYWluL3ZlLXBkIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5k
IGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9wb3dlci1kb21haW4vdmUtcGQNCihY
RU4pIGhhbmRsZSAvcG93ZXItZG9tYWluL3NhdGEtcGQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRl
dj0vcG93ZXItZG9tYWluL3NhdGEtcGQNCihYRU4pIC9wb3dlci1kb21haW4vc2F0YS1wZCBwYXNz
dGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvcG93ZXItZG9tYWluL3NhdGEt
cGQgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBk
ZXY9L3Bvd2VyLWRvbWFpbi9zYXRhLXBkDQooWEVOKSBoYW5kbGUgL3Bvd2VyLWRvbWFpbi9zb3It
cGQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcG93ZXItZG9tYWluL3Nvci1wZA0KKFhFTikg
L3Bvd2VyLWRvbWFpbi9zb3ItcGQgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hl
Y2sgaWYgL3Bvd2VyLWRvbWFpbi9zb3ItcGQgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0
DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3Bvd2VyLWRvbWFpbi9zb3ItcGQNCihYRU4pIGhh
bmRsZSAvcG93ZXItZG9tYWluL2Rpc2EtcGQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcG93
ZXItZG9tYWluL2Rpc2EtcGQNCihYRU4pIC9wb3dlci1kb21haW4vZGlzYS1wZCBwYXNzdGhyb3Vn
aCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvcG93ZXItZG9tYWluL2Rpc2EtcGQgaXMg
YmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3Bv
d2VyLWRvbWFpbi9kaXNhLXBkDQooWEVOKSBoYW5kbGUgL3Bvd2VyLWRvbWFpbi9kaXNiLXBkDQoo
WEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3Bvd2VyLWRvbWFpbi9kaXNiLXBkDQooWEVOKSAvcG93
ZXItZG9tYWluL2Rpc2ItcGQgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sg
aWYgL3Bvd2VyLWRvbWFpbi9kaXNiLXBkIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0K
KFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9wb3dlci1kb21haW4vZGlzYi1wZA0KKFhFTikgaGFu
ZGxlIC9wb3dlci1kb21haW4veHVzYmEtcGQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcG93
ZXItZG9tYWluL3h1c2JhLXBkDQooWEVOKSAvcG93ZXItZG9tYWluL3h1c2JhLXBkIHBhc3N0aHJv
dWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9wb3dlci1kb21haW4veHVzYmEtcGQg
aXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9
L3Bvd2VyLWRvbWFpbi94dXNiYS1wZA0KKFhFTikgaGFuZGxlIC9wb3dlci1kb21haW4veHVzYmIt
cGQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcG93ZXItZG9tYWluL3h1c2JiLXBkDQooWEVO
KSAvcG93ZXItZG9tYWluL3h1c2JiLXBkIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4p
IENoZWNrIGlmIC9wb3dlci1kb21haW4veHVzYmItcGQgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQg
YWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3Bvd2VyLWRvbWFpbi94dXNiYi1wZA0K
KFhFTikgaGFuZGxlIC9wb3dlci1kb21haW4veHVzYmMtcGQNCihYRU4pIGR0X2lycV9udW1iZXI6
IGRldj0vcG93ZXItZG9tYWluL3h1c2JjLXBkDQooWEVOKSAvcG93ZXItZG9tYWluL3h1c2JjLXBk
IHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9wb3dlci1kb21haW4v
eHVzYmMtcGQgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVt
YmVyOiBkZXY9L3Bvd2VyLWRvbWFpbi94dXNiYy1wZA0KKFhFTikgaGFuZGxlIC9hY3Rtb25ANjAw
MGM4MDANCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vYWN0bW9uQDYwMDBjODAwDQooWEVOKSAg
dXNpbmcgJ2ludGVycnVwdHMnIHByb3BlcnR5DQooWEVOKSAgaW50c3BlYz0wIGludGxlbj0zDQoo
WEVOKSAgaW50c2l6ZT0zIGludGxlbj0zDQooWEVOKSBkdF9kZXZpY2VfZ2V0X3Jhd19pcnE6IGRl
dj0vYWN0bW9uQDYwMDBjODAwLCBpbmRleD0wDQooWEVOKSAgdXNpbmcgJ2ludGVycnVwdHMnIHBy
b3BlcnR5DQooWEVOKSAgaW50c3BlYz0wIGludGxlbj0zDQooWEVOKSAgaW50c2l6ZT0zIGludGxl
bj0zDQooWEVOKSBkdF9pcnFfbWFwX3JhdzogcGFyPS9pbnRlcnJ1cHQtY29udHJvbGxlckA2MDAw
NDAwMCxpbnRzcGVjPVsweDAwMDAwMDAwIDB4MDAwMDAwMmQuLi5dLG9pbnRzaXplPTMNCihYRU4p
IGR0X2lycV9tYXBfcmF3OiBpcGFyPS9pbnRlcnJ1cHQtY29udHJvbGxlckA2MDAwNDAwMCwgc2l6
ZT0zDQooWEVOKSAgLT4gYWRkcnNpemU9Mg0KKFhFTikgIC0+IGdvdCBpdCAhDQooWEVOKSAvYWN0
bW9uQDYwMDBjODAwIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDENCihYRU4pIENoZWNrIGlmIC9h
Y3Rtb25ANjAwMGM4MDAgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9p
cnFfbnVtYmVyOiBkZXY9L2FjdG1vbkA2MDAwYzgwMA0KKFhFTikgIHVzaW5nICdpbnRlcnJ1cHRz
JyBwcm9wZXJ0eQ0KKFhFTikgIGludHNwZWM9MCBpbnRsZW49Mw0KKFhFTikgIGludHNpemU9MyBp
bnRsZW49Mw0KKFhFTikgZHRfZGV2aWNlX2dldF9yYXdfaXJxOiBkZXY9L2FjdG1vbkA2MDAwYzgw
MCwgaW5kZXg9MA0KKFhFTikgIHVzaW5nICdpbnRlcnJ1cHRzJyBwcm9wZXJ0eQ0KKFhFTikgIGlu
dHNwZWM9MCBpbnRsZW49Mw0KKFhFTikgIGludHNpemU9MyBpbnRsZW49Mw0KKFhFTikgZHRfaXJx
X21hcF9yYXc6IHBhcj0vaW50ZXJydXB0LWNvbnRyb2xsZXJANjAwMDQwMDAsaW50c3BlYz1bMHgw
MDAwMDAwMCAweDAwMDAwMDJkLi4uXSxvaW50c2l6ZT0zDQooWEVOKSBkdF9pcnFfbWFwX3Jhdzog
aXBhcj0vaW50ZXJydXB0LWNvbnRyb2xsZXJANjAwMDQwMDAsIHNpemU9Mw0KKFhFTikgIC0+IGFk
ZHJzaXplPTINCihYRU4pICAtPiBnb3QgaXQgIQ0KKFhFTikgaXJxIDAgbm90IChkaXJlY3RseSBv
ciBpbmRpcmVjdGx5KSBjb25uZWN0ZWQgdG8gcHJpbWFyeWNvbnRyb2xsZXIuIENvbm5lY3RlZCB0
byAvaW50ZXJydXB0LWNvbnRyb2xsZXJANjAwMDQwMDANCihYRU4pIERUOiAqKiB0cmFuc2xhdGlv
biBmb3IgZGV2aWNlIC9hY3Rtb25ANjAwMGM4MDAgKioNCihYRU4pIERUOiBidXMgaXMgZGVmYXVs
dCAobmE9MiwgbnM9Mikgb24gLw0KKFhFTikgRFQ6IHRyYW5zbGF0aW5nIGFkZHJlc3M6PDM+IDAw
MDAwMDAwPDM+IDYwMDBjODAwPDM+DQooWEVOKSBEVDogcmVhY2hlZCByb290IG5vZGUNCihYRU4p
ICAgLSBNTUlPOiAwMDYwMDBjODAwIC0gMDA2MDAwY2MwMCBQMk1UeXBlPTUNCihYRU4pIGhhbmRs
ZSAvYWN0bW9uQDYwMDBjODAwL21jX2FsbA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9hY3Rt
b25ANjAwMGM4MDAvbWNfYWxsDQooWEVOKSAvYWN0bW9uQDYwMDBjODAwL21jX2FsbCBwYXNzdGhy
b3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvYWN0bW9uQDYwMDBjODAwL21jX2Fs
bCBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRl
dj0vYWN0bW9uQDYwMDBjODAwL21jX2FsbA0KKFhFTikgaGFuZGxlIC9hbGlhc2VzDQooWEVOKSBk
dF9pcnFfbnVtYmVyOiBkZXY9L2FsaWFzZXMNCihYRU4pIC9hbGlhc2VzIHBhc3N0aHJvdWdoID0g
MSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9hbGlhc2VzIGlzIGJlaGluZCB0aGUgSU9NTVUg
YW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9hbGlhc2VzDQooWEVOKSBoYW5k
bGUgL2NwdXMNCihYRU4pICAgU2tpcCBpdCAobWF0Y2hlZCkNCihYRU4pIGhhbmRsZSAvcHNjaQ0K
KFhFTikgICBTa2lwIGl0IChtYXRjaGVkKQ0KKFhFTikgaGFuZGxlIC90bGsNCihYRU4pIGR0X2ly
cV9udW1iZXI6IGRldj0vdGxrDQooWEVOKSAvdGxrIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDAN
CihYRU4pIENoZWNrIGlmIC90bGsgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVO
KSBkdF9pcnFfbnVtYmVyOiBkZXY9L3Rsaw0KKFhFTikgaGFuZGxlIC90bGsvbG9nDQooWEVOKSBk
dF9pcnFfbnVtYmVyOiBkZXY9L3Rsay9sb2cNCihYRU4pIC90bGsvbG9nIHBhc3N0aHJvdWdoID0g
MSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC90bGsvbG9nIGlzIGJlaGluZCB0aGUgSU9NTVUg
YW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS90bGsvbG9nDQooWEVOKSBoYW5k
bGUgL2FybS1wbXUNCihYRU4pICAgU2tpcCBpdCAobWF0Y2hlZCkNCihYRU4pIGhhbmRsZSAvY2xv
Y2sNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vY2xvY2sNCihYRU4pIC9jbG9jayBwYXNzdGhy
b3VnaCA9IDEgbmFkZHIgPSAxDQooWEVOKSBDaGVjayBpZiAvY2xvY2sgaXMgYmVoaW5kIHRoZSBJ
T01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2Nsb2NrDQooWEVOKSBE
VDogKiogdHJhbnNsYXRpb24gZm9yIGRldmljZSAvY2xvY2sgKioNCihYRU4pIERUOiBidXMgaXMg
ZGVmYXVsdCAobmE9MiwgbnM9Mikgb24gLw0KKFhFTikgRFQ6IHRyYW5zbGF0aW5nIGFkZHJlc3M6
PDM+IDAwMDAwMDAwPDM+IDYwMDA2MDAwPDM+DQooWEVOKSBEVDogcmVhY2hlZCByb290IG5vZGUN
CihYRU4pICAgLSBNTUlPOiAwMDYwMDA2MDAwIC0gMDA2MDAwNzAwMCBQMk1UeXBlPTUNCihYRU4p
IGhhbmRsZSAvYndtZ3INCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vYndtZ3INCihYRU4pIC9i
d21nciBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvYndtZ3IgaXMg
YmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2J3
bWdyDQooWEVOKSBoYW5kbGUgL3Jlc2VydmVkLW1lbW9yeQ0KKFhFTikgZHRfaXJxX251bWJlcjog
ZGV2PS9yZXNlcnZlZC1tZW1vcnkNCihYRU4pIC9yZXNlcnZlZC1tZW1vcnkgcGFzc3Rocm91Z2gg
PSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3Jlc2VydmVkLW1lbW9yeSBpcyBiZWhpbmQg
dGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcmVzZXJ2ZWQt
bWVtb3J5DQooWEVOKSBoYW5kbGUgL3Jlc2VydmVkLW1lbW9yeS9pcmFtLWNhcnZlb3V0DQooWEVO
KSBkdF9pcnFfbnVtYmVyOiBkZXY9L3Jlc2VydmVkLW1lbW9yeS9pcmFtLWNhcnZlb3V0DQooWEVO
KSAvcmVzZXJ2ZWQtbWVtb3J5L2lyYW0tY2FydmVvdXQgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0g
MQ0KKFhFTikgQ2hlY2sgaWYgL3Jlc2VydmVkLW1lbW9yeS9pcmFtLWNhcnZlb3V0IGlzIGJlaGlu
ZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9yZXNlcnZl
ZC1tZW1vcnkvaXJhbS1jYXJ2ZW91dA0KKFhFTikgRFQ6ICoqIHRyYW5zbGF0aW9uIGZvciBkZXZp
Y2UgL3Jlc2VydmVkLW1lbW9yeS9pcmFtLWNhcnZlb3V0ICoqDQooWEVOKSBEVDogYnVzIGlzIGRl
ZmF1bHQgKG5hPTIsIG5zPTIpIG9uIC9yZXNlcnZlZC1tZW1vcnkNCihYRU4pIERUOiB0cmFuc2xh
dGluZyBhZGRyZXNzOjwzPiAwMDAwMDAwMDwzPiA0MDAwMTAwMDwzPg0KKFhFTikgRFQ6IHBhcmVu
dCBidXMgaXMgZGVmYXVsdCAobmE9MiwgbnM9Mikgb24gLw0KKFhFTikgRFQ6IGVtcHR5IHJhbmdl
czsgMToxIHRyYW5zbGF0aW9uDQooWEVOKSBEVDogcGFyZW50IHRyYW5zbGF0aW9uIGZvcjo8Mz4g
MDAwMDAwMDA8Mz4gMDAwMDAwMDA8Mz4NCihYRU4pIERUOiB3aXRoIG9mZnNldDogNDAwMDEwMDAN
CihYRU4pIERUOiBvbmUgbGV2ZWwgdHJhbnNsYXRpb246PDM+IDAwMDAwMDAwPDM+IDQwMDAxMDAw
PDM+DQooWEVOKSBEVDogcmVhY2hlZCByb290IG5vZGUNCihYRU4pICAgLSBNTUlPOiAwMDQwMDAx
MDAwIC0gMDA0MDA0MDAwMCBQMk1UeXBlPTUNCihYRU4pIGhhbmRsZSAvcmVzZXJ2ZWQtbWVtb3J5
L3JhbW9vcHNfY2FydmVvdXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcmVzZXJ2ZWQtbWVt
b3J5L3JhbW9vcHNfY2FydmVvdXQNCihYRU4pIC9yZXNlcnZlZC1tZW1vcnkvcmFtb29wc19jYXJ2
ZW91dCBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAxDQooWEVOKSBDaGVjayBpZiAvcmVzZXJ2ZWQt
bWVtb3J5L3JhbW9vcHNfY2FydmVvdXQgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQoo
WEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3Jlc2VydmVkLW1lbW9yeS9yYW1vb3BzX2NhcnZlb3V0
DQooWEVOKSBEVDogKiogdHJhbnNsYXRpb24gZm9yIGRldmljZSAvcmVzZXJ2ZWQtbWVtb3J5L3Jh
bW9vcHNfY2FydmVvdXQgKioNCihYRU4pIERUOiBidXMgaXMgZGVmYXVsdCAobmE9MiwgbnM9Mikg
b24gL3Jlc2VydmVkLW1lbW9yeQ0KKFhFTikgRFQ6IHRyYW5zbGF0aW5nIGFkZHJlc3M6PDM+IDAw
MDAwMDAwPDM+IGIwMDAwMDAwPDM+DQooWEVOKSBEVDogcGFyZW50IGJ1cyBpcyBkZWZhdWx0IChu
YT0yLCBucz0yKSBvbiAvDQooWEVOKSBEVDogZW1wdHkgcmFuZ2VzOyAxOjEgdHJhbnNsYXRpb24N
CihYRU4pIERUOiBwYXJlbnQgdHJhbnNsYXRpb24gZm9yOjwzPiAwMDAwMDAwMDwzPiAwMDAwMDAw
MDwzPg0KKFhFTikgRFQ6IHdpdGggb2Zmc2V0OiBiMDAwMDAwMA0KKFhFTikgRFQ6IG9uZSBsZXZl
bCB0cmFuc2xhdGlvbjo8Mz4gMDAwMDAwMDA8Mz4gYjAwMDAwMDA8Mz4NCihYRU4pIERUOiByZWFj
aGVkIHJvb3Qgbm9kZQ0KKFhFTikgICAtIE1NSU86IDAwYjAwMDAwMDAgLSAwMGIwMjAwMDAwIFAy
TVR5cGU9NQ0KKFhFTikgaGFuZGxlIC9yZXNlcnZlZC1tZW1vcnkvdnByLWNhcnZlb3V0DQooWEVO
KSBkdF9pcnFfbnVtYmVyOiBkZXY9L3Jlc2VydmVkLW1lbW9yeS92cHItY2FydmVvdXQNCihYRU4p
IC9yZXNlcnZlZC1tZW1vcnkvdnByLWNhcnZlb3V0IHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDAN
CihYRU4pIENoZWNrIGlmIC9yZXNlcnZlZC1tZW1vcnkvdnByLWNhcnZlb3V0IGlzIGJlaGluZCB0
aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9yZXNlcnZlZC1t
ZW1vcnkvdnByLWNhcnZlb3V0DQooWEVOKSBoYW5kbGUgL3RlZ3JhLWNhcnZlb3V0cw0KKFhFTikg
ZHRfaXJxX251bWJlcjogZGV2PS90ZWdyYS1jYXJ2ZW91dHMNCihYRU4pIC90ZWdyYS1jYXJ2ZW91
dHMgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3RlZ3JhLWNhcnZl
b3V0cyBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6
IGRldj0vdGVncmEtY2FydmVvdXRzDQooWEVOKSBoYW5kbGUgL2lvbW11DQooWEVOKSBkdF9pcnFf
bnVtYmVyOiBkZXY9L2lvbW11DQooWEVOKSAvaW9tbXUgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0g
Mg0KKFhFTikgQ2hlY2sgaWYgL2lvbW11IGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0K
KFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9pb21tdQ0KKFhFTikgRFQ6ICoqIHRyYW5zbGF0aW9u
IGZvciBkZXZpY2UgL2lvbW11ICoqDQooWEVOKSBEVDogYnVzIGlzIGRlZmF1bHQgKG5hPTIsIG5z
PTIpIG9uIC8NCihYRU4pIERUOiB0cmFuc2xhdGluZyBhZGRyZXNzOjwzPiAwMDAwMDAwMDwzPiA3
MDAxOTAwMDwzPg0KKFhFTikgRFQ6IHJlYWNoZWQgcm9vdCBub2RlDQooWEVOKSAgIC0gTU1JTzog
MDA3MDAxOTAwMCAtIDAwNzAwMWEwMDAgUDJNVHlwZT01DQooWEVOKSBEVDogKiogdHJhbnNsYXRp
b24gZm9yIGRldmljZSAvaW9tbXUgKioNCihYRU4pIERUOiBidXMgaXMgZGVmYXVsdCAobmE9Miwg
bnM9Mikgb24gLw0KKFhFTikgRFQ6IHRyYW5zbGF0aW5nIGFkZHJlc3M6PDM+IDAwMDAwMDAwPDM+
IDYwMDBjMDAwPDM+DQooWEVOKSBEVDogcmVhY2hlZCByb290IG5vZGUNCihYRU4pICAgLSBNTUlP
OiAwMDYwMDBjMDAwIC0gMDA2MDAwZDAwMCBQMk1UeXBlPTUNCihYRU4pIGhhbmRsZSAvaW9tbXUv
YWRkcmVzcy1zcGFjZS1wcm9wDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2lvbW11L2FkZHJl
c3Mtc3BhY2UtcHJvcA0KKFhFTikgL2lvbW11L2FkZHJlc3Mtc3BhY2UtcHJvcCBwYXNzdGhyb3Vn
aCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvaW9tbXUvYWRkcmVzcy1zcGFjZS1wcm9w
IGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2
PS9pb21tdS9hZGRyZXNzLXNwYWNlLXByb3ANCihYRU4pIGhhbmRsZSAvaW9tbXUvYWRkcmVzcy1z
cGFjZS1wcm9wL2NvbW1vbg0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9pb21tdS9hZGRyZXNz
LXNwYWNlLXByb3AvY29tbW9uDQooWEVOKSAvaW9tbXUvYWRkcmVzcy1zcGFjZS1wcm9wL2NvbW1v
biBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvaW9tbXUvYWRkcmVz
cy1zcGFjZS1wcm9wL2NvbW1vbiBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4p
IGR0X2lycV9udW1iZXI6IGRldj0vaW9tbXUvYWRkcmVzcy1zcGFjZS1wcm9wL2NvbW1vbg0KKFhF
TikgaGFuZGxlIC9pb21tdS9hZGRyZXNzLXNwYWNlLXByb3AvcHBjcw0KKFhFTikgZHRfaXJxX251
bWJlcjogZGV2PS9pb21tdS9hZGRyZXNzLXNwYWNlLXByb3AvcHBjcw0KKFhFTikgL2lvbW11L2Fk
ZHJlc3Mtc3BhY2UtcHJvcC9wcGNzIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENo
ZWNrIGlmIC9pb21tdS9hZGRyZXNzLXNwYWNlLXByb3AvcHBjcyBpcyBiZWhpbmQgdGhlIElPTU1V
IGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vaW9tbXUvYWRkcmVzcy1zcGFj
ZS1wcm9wL3BwY3MNCihYRU4pIGhhbmRsZSAvaW9tbXUvYWRkcmVzcy1zcGFjZS1wcm9wL2RjDQoo
WEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2lvbW11L2FkZHJlc3Mtc3BhY2UtcHJvcC9kYw0KKFhF
TikgL2lvbW11L2FkZHJlc3Mtc3BhY2UtcHJvcC9kYyBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAw
DQooWEVOKSBDaGVjayBpZiAvaW9tbXUvYWRkcmVzcy1zcGFjZS1wcm9wL2RjIGlzIGJlaGluZCB0
aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9pb21tdS9hZGRy
ZXNzLXNwYWNlLXByb3AvZGMNCihYRU4pIGhhbmRsZSAvaW9tbXUvYWRkcmVzcy1zcGFjZS1wcm9w
L2dwdQ0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9pb21tdS9hZGRyZXNzLXNwYWNlLXByb3Av
Z3B1DQooWEVOKSAvaW9tbXUvYWRkcmVzcy1zcGFjZS1wcm9wL2dwdSBwYXNzdGhyb3VnaCA9IDEg
bmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvaW9tbXUvYWRkcmVzcy1zcGFjZS1wcm9wL2dwdSBp
cyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0v
aW9tbXUvYWRkcmVzcy1zcGFjZS1wcm9wL2dwdQ0KKFhFTikgaGFuZGxlIC9pb21tdS9hZGRyZXNz
LXNwYWNlLXByb3AvYXBlDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2lvbW11L2FkZHJlc3Mt
c3BhY2UtcHJvcC9hcGUNCihYRU4pIC9pb21tdS9hZGRyZXNzLXNwYWNlLXByb3AvYXBlIHBhc3N0
aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9pb21tdS9hZGRyZXNzLXNwYWNl
LXByb3AvYXBlIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251
bWJlcjogZGV2PS9pb21tdS9hZGRyZXNzLXNwYWNlLXByb3AvYXBlDQooWEVOKSBoYW5kbGUgL3Nt
bXVfdGVzdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9zbW11X3Rlc3QNCihYRU4pIC9zbW11
X3Rlc3QgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3NtbXVfdGVz
dCBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRl
dj0vc21tdV90ZXN0DQooWEVOKSBoYW5kbGUgL2RtYV90ZXN0DQooWEVOKSBkdF9pcnFfbnVtYmVy
OiBkZXY9L2RtYV90ZXN0DQooWEVOKSAvZG1hX3Rlc3QgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0g
MA0KKFhFTikgQ2hlY2sgaWYgL2RtYV90ZXN0IGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBp
dA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9kbWFfdGVzdA0KKFhFTikgaGFuZGxlIC9icG1w
DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2JwbXANCihYRU4pIC9icG1wIHBhc3N0aHJvdWdo
ID0gMSBuYWRkciA9IDINCihYRU4pIENoZWNrIGlmIC9icG1wIGlzIGJlaGluZCB0aGUgSU9NTVUg
YW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9icG1wDQooWEVOKSBEVDogKiog
dHJhbnNsYXRpb24gZm9yIGRldmljZSAvYnBtcCAqKg0KKFhFTikgRFQ6IGJ1cyBpcyBkZWZhdWx0
IChuYT0yLCBucz0yKSBvbiAvDQooWEVOKSBEVDogdHJhbnNsYXRpbmcgYWRkcmVzczo8Mz4gMDAw
MDAwMDA8Mz4gNzAwMTYwMDA8Mz4NCihYRU4pIERUOiByZWFjaGVkIHJvb3Qgbm9kZQ0KKFhFTikg
ICAtIE1NSU86IDAwNzAwMTYwMDAgLSAwMDcwMDE4MDAwIFAyTVR5cGU9NQ0KKFhFTikgRFQ6ICoq
IHRyYW5zbGF0aW9uIGZvciBkZXZpY2UgL2JwbXAgKioNCihYRU4pIERUOiBidXMgaXMgZGVmYXVs
dCAobmE9MiwgbnM9Mikgb24gLw0KKFhFTikgRFQ6IHRyYW5zbGF0aW5nIGFkZHJlc3M6PDM+IDAw
MDAwMDAwPDM+IDYwMDAxMDAwPDM+DQooWEVOKSBEVDogcmVhY2hlZCByb290IG5vZGUNCihYRU4p
ICAgLSBNTUlPOiAwMDYwMDAxMDAwIC0gMDA2MDAwMjAwMCBQMk1UeXBlPTUNCihYRU4pIGhhbmRs
ZSAvbWMNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vbWMNCihYRU4pICB1c2luZyAnaW50ZXJy
dXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTMNCihYRU4pICBpbnRzaXpl
PTMgaW50bGVuPTMNCihYRU4pIGR0X2RldmljZV9nZXRfcmF3X2lycTogZGV2PS9tYywgaW5kZXg9
MA0KKFhFTikgIHVzaW5nICdpbnRlcnJ1cHRzJyBwcm9wZXJ0eQ0KKFhFTikgIGludHNwZWM9MCBp
bnRsZW49Mw0KKFhFTikgIGludHNpemU9MyBpbnRsZW49Mw0KKFhFTikgZHRfaXJxX21hcF9yYXc6
IHBhcj0vaW50ZXJydXB0LWNvbnRyb2xsZXJANjAwMDQwMDAsaW50c3BlYz1bMHgwMDAwMDAwMCAw
eDAwMDAwMDRkLi4uXSxvaW50c2l6ZT0zDQooWEVOKSBkdF9pcnFfbWFwX3JhdzogaXBhcj0vaW50
ZXJydXB0LWNvbnRyb2xsZXJANjAwMDQwMDAsIHNpemU9Mw0KKFhFTikgIC0+IGFkZHJzaXplPTIN
CihYRU4pICAtPiBnb3QgaXQgIQ0KKFhFTikgL21jIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDMw
DQooWEVOKSBDaGVjayBpZiAvbWMgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVO
KSBkdF9pcnFfbnVtYmVyOiBkZXY9L21jDQooWEVOKSAgdXNpbmcgJ2ludGVycnVwdHMnIHByb3Bl
cnR5DQooWEVOKSAgaW50c3BlYz0wIGludGxlbj0zDQooWEVOKSAgaW50c2l6ZT0zIGludGxlbj0z
DQooWEVOKSBkdF9kZXZpY2VfZ2V0X3Jhd19pcnE6IGRldj0vbWMsIGluZGV4PTANCihYRU4pICB1
c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTMNCihY
RU4pICBpbnRzaXplPTMgaW50bGVuPTMNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBwYXI9L2ludGVy
cnVwdC1jb250cm9sbGVyQDYwMDA0MDAwLGludHNwZWM9WzB4MDAwMDAwMDAgMHgwMDAwMDA0ZC4u
Ll0sb2ludHNpemU9Mw0KKFhFTikgZHRfaXJxX21hcF9yYXc6IGlwYXI9L2ludGVycnVwdC1jb250
cm9sbGVyQDYwMDA0MDAwLCBzaXplPTMNCihYRU4pICAtPiBhZGRyc2l6ZT0yDQooWEVOKSAgLT4g
Z290IGl0ICENCihYRU4pIGlycSAwIG5vdCAoZGlyZWN0bHkgb3IgaW5kaXJlY3RseSkgY29ubmVj
dGVkIHRvIHByaW1hcnljb250cm9sbGVyLiBDb25uZWN0ZWQgdG8gL2ludGVycnVwdC1jb250cm9s
bGVyQDYwMDA0MDAwDQooWEVOKSBEVDogKiogdHJhbnNsYXRpb24gZm9yIGRldmljZSAvbWMgKioN
CihYRU4pIERUOiBidXMgaXMgZGVmYXVsdCAobmE9MiwgbnM9Mikgb24gLw0KKFhFTikgRFQ6IHRy
YW5zbGF0aW5nIGFkZHJlc3M6PDM+IDAwMDAwMDAwPDM+IDcwMDE5MDAwPDM+DQooWEVOKSBEVDog
cmVhY2hlZCByb290IG5vZGUNCihYRU4pICAgLSBNTUlPOiAwMDcwMDE5MDAwIC0gMDA3MDAxOTAw
YyBQMk1UeXBlPTUNCihYRU4pIERUOiAqKiB0cmFuc2xhdGlvbiBmb3IgZGV2aWNlIC9tYyAqKg0K
KFhFTikgRFQ6IGJ1cyBpcyBkZWZhdWx0IChuYT0yLCBucz0yKSBvbiAvDQooWEVOKSBEVDogdHJh
bnNsYXRpbmcgYWRkcmVzczo8Mz4gMDAwMDAwMDA8Mz4gNzAwMTkwNTA8Mz4NCihYRU4pIERUOiBy
ZWFjaGVkIHJvb3Qgbm9kZQ0KKFhFTikgICAtIE1NSU86IDAwNzAwMTkwNTAgLSAwMDcwMDE5MWVj
IFAyTVR5cGU9NQ0KKFhFTikgRFQ6ICoqIHRyYW5zbGF0aW9uIGZvciBkZXZpY2UgL21jICoqDQoo
WEVOKSBEVDogYnVzIGlzIGRlZmF1bHQgKG5hPTIsIG5zPTIpIG9uIC8NCihYRU4pIERUOiB0cmFu
c2xhdGluZyBhZGRyZXNzOjwzPiAwMDAwMDAwMDwzPiA3MDAxOTIwMDwzPg0KKFhFTikgRFQ6IHJl
YWNoZWQgcm9vdCBub2RlDQooWEVOKSAgIC0gTU1JTzogMDA3MDAxOTIwMCAtIDAwNzAwMTkyMjQg
UDJNVHlwZT01DQooWEVOKSBEVDogKiogdHJhbnNsYXRpb24gZm9yIGRldmljZSAvbWMgKioNCihY
RU4pIERUOiBidXMgaXMgZGVmYXVsdCAobmE9MiwgbnM9Mikgb24gLw0KKFhFTikgRFQ6IHRyYW5z
bGF0aW5nIGFkZHJlc3M6PDM+IDAwMDAwMDAwPDM+IDcwMDE5MjljPDM+DQooWEVOKSBEVDogcmVh
Y2hlZCByb290IG5vZGUNCihYRU4pICAgLSBNTUlPOiAwMDcwMDE5MjljIC0gMDA3MDAxOTQ1NCBQ
Mk1UeXBlPTUNCihYRU4pIERUOiAqKiB0cmFuc2xhdGlvbiBmb3IgZGV2aWNlIC9tYyAqKg0KKFhF
TikgRFQ6IGJ1cyBpcyBkZWZhdWx0IChuYT0yLCBucz0yKSBvbiAvDQooWEVOKSBEVDogdHJhbnNs
YXRpbmcgYWRkcmVzczo8Mz4gMDAwMDAwMDA8Mz4gNzAwMTk0NjQ8Mz4NCihYRU4pIERUOiByZWFj
aGVkIHJvb3Qgbm9kZQ0KKFhFTikgICAtIE1NSU86IDAwNzAwMTk0NjQgLSAwMDcwMDE5NWZjIFAy
TVR5cGU9NQ0KKFhFTikgRFQ6ICoqIHRyYW5zbGF0aW9uIGZvciBkZXZpY2UgL21jICoqDQooWEVO
KSBEVDogYnVzIGlzIGRlZmF1bHQgKG5hPTIsIG5zPTIpIG9uIC8NCihYRU4pIERUOiB0cmFuc2xh
dGluZyBhZGRyZXNzOjwzPiAwMDAwMDAwMDwzPiA3MDAxOTYwNDwzPg0KKFhFTikgRFQ6IHJlYWNo
ZWQgcm9vdCBub2RlDQooWEVOKSAgIC0gTU1JTzogMDA3MDAxOTYwNCAtIDAwNzAwMTk5YjQgUDJN
VHlwZT01DQooWEVOKSBEVDogKiogdHJhbnNsYXRpb24gZm9yIGRldmljZSAvbWMgKioNCihYRU4p
IERUOiBidXMgaXMgZGVmYXVsdCAobmE9MiwgbnM9Mikgb24gLw0KKFhFTikgRFQ6IHRyYW5zbGF0
aW5nIGFkZHJlc3M6PDM+IDAwMDAwMDAwPDM+IDcwMDE5OWJjPDM+DQooWEVOKSBEVDogcmVhY2hl
ZCByb290IG5vZGUNCihYRU4pICAgLSBNTUlPOiAwMDcwMDE5OWJjIC0gMDA3MDAxOTlkYyBQMk1U
eXBlPTUNCihYRU4pIERUOiAqKiB0cmFuc2xhdGlvbiBmb3IgZGV2aWNlIC9tYyAqKg0KKFhFTikg
RFQ6IGJ1cyBpcyBkZWZhdWx0IChuYT0yLCBucz0yKSBvbiAvDQooWEVOKSBEVDogdHJhbnNsYXRp
bmcgYWRkcmVzczo8Mz4gMDAwMDAwMDA8Mz4gNzAwMTk5Zjg8Mz4NCihYRU4pIERUOiByZWFjaGVk
IHJvb3Qgbm9kZQ0KKFhFTikgICAtIE1NSU86IDAwNzAwMTk5ZjggLSAwMDcwMDE5YTg0IFAyTVR5
cGU9NQ0KKFhFTikgRFQ6ICoqIHRyYW5zbGF0aW9uIGZvciBkZXZpY2UgL21jICoqDQooWEVOKSBE
VDogYnVzIGlzIGRlZmF1bHQgKG5hPTIsIG5zPTIpIG9uIC8NCihYRU4pIERUOiB0cmFuc2xhdGlu
ZyBhZGRyZXNzOjwzPiAwMDAwMDAwMDwzPiA3MDAxOWFlNDwzPg0KKFhFTikgRFQ6IHJlYWNoZWQg
cm9vdCBub2RlDQooWEVOKSAgIC0gTU1JTzogMDA3MDAxOWFlNCAtIDAwNzAwMTliOTQgUDJNVHlw
ZT01DQooWEVOKSBEVDogKiogdHJhbnNsYXRpb24gZm9yIGRldmljZSAvbWMgKioNCihYRU4pIERU
OiBidXMgaXMgZGVmYXVsdCAobmE9MiwgbnM9Mikgb24gLw0KKFhFTikgRFQ6IHRyYW5zbGF0aW5n
IGFkZHJlc3M6PDM+IDAwMDAwMDAwPDM+IDcwMDE5YmEwPDM+DQooWEVOKSBEVDogcmVhY2hlZCBy
b290IG5vZGUNCihYRU4pICAgLSBNTUlPOiAwMDcwMDE5YmEwIC0gMDA3MDAxYTAwMCBQMk1UeXBl
PTUNCihYRU4pIERUOiAqKiB0cmFuc2xhdGlvbiBmb3IgZGV2aWNlIC9tYyAqKg0KKFhFTikgRFQ6
IGJ1cyBpcyBkZWZhdWx0IChuYT0yLCBucz0yKSBvbiAvDQooWEVOKSBEVDogdHJhbnNsYXRpbmcg
YWRkcmVzczo8Mz4gMDAwMDAwMDA8Mz4gNzAwMWMwMDA8Mz4NCihYRU4pIERUOiByZWFjaGVkIHJv
b3Qgbm9kZQ0KKFhFTikgICAtIE1NSU86IDAwNzAwMWMwMDAgLSAwMDcwMDFjMDBjIFAyTVR5cGU9
NQ0KKFhFTikgRFQ6ICoqIHRyYW5zbGF0aW9uIGZvciBkZXZpY2UgL21jICoqDQooWEVOKSBEVDog
YnVzIGlzIGRlZmF1bHQgKG5hPTIsIG5zPTIpIG9uIC8NCihYRU4pIERUOiB0cmFuc2xhdGluZyBh
ZGRyZXNzOjwzPiAwMDAwMDAwMDwzPiA3MDAxYzA1MDwzPg0KKFhFTikgRFQ6IHJlYWNoZWQgcm9v
dCBub2RlDQooWEVOKSAgIC0gTU1JTzogMDA3MDAxYzA1MCAtIDAwNzAwMWMxZTggUDJNVHlwZT01
DQooWEVOKSBEVDogKiogdHJhbnNsYXRpb24gZm9yIGRldmljZSAvbWMgKioNCihYRU4pIERUOiBi
dXMgaXMgZGVmYXVsdCAobmE9MiwgbnM9Mikgb24gLw0KKFhFTikgRFQ6IHRyYW5zbGF0aW5nIGFk
ZHJlc3M6PDM+IDAwMDAwMDAwPDM+IDcwMDFjMjAwPDM+DQooWEVOKSBEVDogcmVhY2hlZCByb290
IG5vZGUNCihYRU4pICAgLSBNTUlPOiAwMDcwMDFjMjAwIC0gMDA3MDAxYzIyNCBQMk1UeXBlPTUN
CihYRU4pIERUOiAqKiB0cmFuc2xhdGlvbiBmb3IgZGV2aWNlIC9tYyAqKg0KKFhFTikgRFQ6IGJ1
cyBpcyBkZWZhdWx0IChuYT0yLCBucz0yKSBvbiAvDQooWEVOKSBEVDogdHJhbnNsYXRpbmcgYWRk
cmVzczo8Mz4gMDAwMDAwMDA8Mz4gNzAwMWMyOWM8Mz4NCihYRU4pIERUOiByZWFjaGVkIHJvb3Qg
bm9kZQ0KKFhFTikgICAtIE1NSU86IDAwNzAwMWMyOWMgLSAwMDcwMDFjNDU0IFAyTVR5cGU9NQ0K
KFhFTikgRFQ6ICoqIHRyYW5zbGF0aW9uIGZvciBkZXZpY2UgL21jICoqDQooWEVOKSBEVDogYnVz
IGlzIGRlZmF1bHQgKG5hPTIsIG5zPTIpIG9uIC8NCihYRU4pIERUOiB0cmFuc2xhdGluZyBhZGRy
ZXNzOjwzPiAwMDAwMDAwMDwzPiA3MDAxYzQ2NDwzPg0KKFhFTikgRFQ6IHJlYWNoZWQgcm9vdCBu
b2RlDQooWEVOKSAgIC0gTU1JTzogMDA3MDAxYzQ2NCAtIDAwNzAwMWM1ZmMgUDJNVHlwZT01DQoo
WEVOKSBEVDogKiogdHJhbnNsYXRpb24gZm9yIGRldmljZSAvbWMgKioNCihYRU4pIERUOiBidXMg
aXMgZGVmYXVsdCAobmE9MiwgbnM9Mikgb24gLw0KKFhFTikgRFQ6IHRyYW5zbGF0aW5nIGFkZHJl
c3M6PDM+IDAwMDAwMDAwPDM+IDcwMDFjNjA0PDM+DQooWEVOKSBEVDogcmVhY2hlZCByb290IG5v
ZGUNCihYRU4pICAgLSBNTUlPOiAwMDcwMDFjNjA0IC0gMDA3MDAxYzliNCBQMk1UeXBlPTUNCihY
RU4pIERUOiAqKiB0cmFuc2xhdGlvbiBmb3IgZGV2aWNlIC9tYyAqKg0KKFhFTikgRFQ6IGJ1cyBp
cyBkZWZhdWx0IChuYT0yLCBucz0yKSBvbiAvDQooWEVOKSBEVDogdHJhbnNsYXRpbmcgYWRkcmVz
czo8Mz4gMDAwMDAwMDA8Mz4gNzAwMWM5YmM8Mz4NCihYRU4pIERUOiByZWFjaGVkIHJvb3Qgbm9k
ZQ0KKFhFTikgICAtIE1NSU86IDAwNzAwMWM5YmMgLSAwMDcwMDFjOWRjIFAyTVR5cGU9NQ0KKFhF
TikgRFQ6ICoqIHRyYW5zbGF0aW9uIGZvciBkZXZpY2UgL21jICoqDQooWEVOKSBEVDogYnVzIGlz
IGRlZmF1bHQgKG5hPTIsIG5zPTIpIG9uIC8NCihYRU4pIERUOiB0cmFuc2xhdGluZyBhZGRyZXNz
OjwzPiAwMDAwMDAwMDwzPiA3MDAxYzlmODwzPg0KKFhFTikgRFQ6IHJlYWNoZWQgcm9vdCBub2Rl
DQooWEVOKSAgIC0gTU1JTzogMDA3MDAxYzlmOCAtIDAwNzAwMWNhODQgUDJNVHlwZT01DQooWEVO
KSBEVDogKiogdHJhbnNsYXRpb24gZm9yIGRldmljZSAvbWMgKioNCihYRU4pIERUOiBidXMgaXMg
ZGVmYXVsdCAobmE9MiwgbnM9Mikgb24gLw0KKFhFTikgRFQ6IHRyYW5zbGF0aW5nIGFkZHJlc3M6
PDM+IDAwMDAwMDAwPDM+IDcwMDFjYWU0PDM+DQooWEVOKSBEVDogcmVhY2hlZCByb290IG5vZGUN
CihYRU4pICAgLSBNTUlPOiAwMDcwMDFjYWU0IC0gMDA3MDAxY2I5NCBQMk1UeXBlPTUNCihYRU4p
IERUOiAqKiB0cmFuc2xhdGlvbiBmb3IgZGV2aWNlIC9tYyAqKg0KKFhFTikgRFQ6IGJ1cyBpcyBk
ZWZhdWx0IChuYT0yLCBucz0yKSBvbiAvDQooWEVOKSBEVDogdHJhbnNsYXRpbmcgYWRkcmVzczo8
Mz4gMDAwMDAwMDA8Mz4gNzAwMWNiYTA8Mz4NCihYRU4pIERUOiByZWFjaGVkIHJvb3Qgbm9kZQ0K
KFhFTikgICAtIE1NSU86IDAwNzAwMWNiYTAgLSAwMDcwMDFkMDAwIFAyTVR5cGU9NQ0KKFhFTikg
RFQ6ICoqIHRyYW5zbGF0aW9uIGZvciBkZXZpY2UgL21jICoqDQooWEVOKSBEVDogYnVzIGlzIGRl
ZmF1bHQgKG5hPTIsIG5zPTIpIG9uIC8NCihYRU4pIERUOiB0cmFuc2xhdGluZyBhZGRyZXNzOjwz
PiAwMDAwMDAwMDwzPiA3MDAxZDAwMDwzPg0KKFhFTikgRFQ6IHJlYWNoZWQgcm9vdCBub2RlDQoo
WEVOKSAgIC0gTU1JTzogMDA3MDAxZDAwMCAtIDAwNzAwMWQwMGMgUDJNVHlwZT01DQooWEVOKSBE
VDogKiogdHJhbnNsYXRpb24gZm9yIGRldmljZSAvbWMgKioNCihYRU4pIERUOiBidXMgaXMgZGVm
YXVsdCAobmE9MiwgbnM9Mikgb24gLw0KKFhFTikgRFQ6IHRyYW5zbGF0aW5nIGFkZHJlc3M6PDM+
IDAwMDAwMDAwPDM+IDcwMDFkMDUwPDM+DQooWEVOKSBEVDogcmVhY2hlZCByb290IG5vZGUNCihY
RU4pICAgLSBNTUlPOiAwMDcwMDFkMDUwIC0gMDA3MDAxZDFlOCBQMk1UeXBlPTUNCihYRU4pIERU
OiAqKiB0cmFuc2xhdGlvbiBmb3IgZGV2aWNlIC9tYyAqKg0KKFhFTikgRFQ6IGJ1cyBpcyBkZWZh
dWx0IChuYT0yLCBucz0yKSBvbiAvDQooWEVOKSBEVDogdHJhbnNsYXRpbmcgYWRkcmVzczo8Mz4g
MDAwMDAwMDA8Mz4gNzAwMWQyMDA8Mz4NCihYRU4pIERUOiByZWFjaGVkIHJvb3Qgbm9kZQ0KKFhF
TikgICAtIE1NSU86IDAwNzAwMWQyMDAgLSAwMDcwMDFkMjI0IFAyTVR5cGU9NQ0KKFhFTikgRFQ6
ICoqIHRyYW5zbGF0aW9uIGZvciBkZXZpY2UgL21jICoqDQooWEVOKSBEVDogYnVzIGlzIGRlZmF1
bHQgKG5hPTIsIG5zPTIpIG9uIC8NCihYRU4pIERUOiB0cmFuc2xhdGluZyBhZGRyZXNzOjwzPiAw
MDAwMDAwMDwzPiA3MDAxZDI5YzwzPg0KKFhFTikgRFQ6IHJlYWNoZWQgcm9vdCBub2RlDQooWEVO
KSAgIC0gTU1JTzogMDA3MDAxZDI5YyAtIDAwNzAwMWQ0NTQgUDJNVHlwZT01DQooWEVOKSBEVDog
KiogdHJhbnNsYXRpb24gZm9yIGRldmljZSAvbWMgKioNCihYRU4pIERUOiBidXMgaXMgZGVmYXVs
dCAobmE9MiwgbnM9Mikgb24gLw0KKFhFTikgRFQ6IHRyYW5zbGF0aW5nIGFkZHJlc3M6PDM+IDAw
MDAwMDAwPDM+IDcwMDFkNDY0PDM+DQooWEVOKSBEVDogcmVhY2hlZCByb290IG5vZGUNCihYRU4p
ICAgLSBNTUlPOiAwMDcwMDFkNDY0IC0gMDA3MDAxZDVmYyBQMk1UeXBlPTUNCihYRU4pIERUOiAq
KiB0cmFuc2xhdGlvbiBmb3IgZGV2aWNlIC9tYyAqKg0KKFhFTikgRFQ6IGJ1cyBpcyBkZWZhdWx0
IChuYT0yLCBucz0yKSBvbiAvDQooWEVOKSBEVDogdHJhbnNsYXRpbmcgYWRkcmVzczo8Mz4gMDAw
MDAwMDA8Mz4gNzAwMWQ2MDQ8Mz4NCihYRU4pIERUOiByZWFjaGVkIHJvb3Qgbm9kZQ0KKFhFTikg
ICAtIE1NSU86IDAwNzAwMWQ2MDQgLSAwMDcwMDFkOWI0IFAyTVR5cGU9NQ0KKFhFTikgRFQ6ICoq
IHRyYW5zbGF0aW9uIGZvciBkZXZpY2UgL21jICoqDQooWEVOKSBEVDogYnVzIGlzIGRlZmF1bHQg
KG5hPTIsIG5zPTIpIG9uIC8NCihYRU4pIERUOiB0cmFuc2xhdGluZyBhZGRyZXNzOjwzPiAwMDAw
MDAwMDwzPiA3MDAxZDliYzwzPg0KKFhFTikgRFQ6IHJlYWNoZWQgcm9vdCBub2RlDQooWEVOKSAg
IC0gTU1JTzogMDA3MDAxZDliYyAtIDAwNzAwMWQ5ZGMgUDJNVHlwZT01DQooWEVOKSBEVDogKiog
dHJhbnNsYXRpb24gZm9yIGRldmljZSAvbWMgKioNCihYRU4pIERUOiBidXMgaXMgZGVmYXVsdCAo
bmE9MiwgbnM9Mikgb24gLw0KKFhFTikgRFQ6IHRyYW5zbGF0aW5nIGFkZHJlc3M6PDM+IDAwMDAw
MDAwPDM+IDcwMDFkOWY4PDM+DQooWEVOKSBEVDogcmVhY2hlZCByb290IG5vZGUNCihYRU4pICAg
LSBNTUlPOiAwMDcwMDFkOWY4IC0gMDA3MDAxZGE4NCBQMk1UeXBlPTUNCihYRU4pIERUOiAqKiB0
cmFuc2xhdGlvbiBmb3IgZGV2aWNlIC9tYyAqKg0KKFhFTikgRFQ6IGJ1cyBpcyBkZWZhdWx0IChu
YT0yLCBucz0yKSBvbiAvDQooWEVOKSBEVDogdHJhbnNsYXRpbmcgYWRkcmVzczo8Mz4gMDAwMDAw
MDA8Mz4gNzAwMWRhZTQ8Mz4NCihYRU4pIERUOiByZWFjaGVkIHJvb3Qgbm9kZQ0KKFhFTikgICAt
IE1NSU86IDAwNzAwMWRhZTQgLSAwMDcwMDFkYjk0IFAyTVR5cGU9NQ0KKFhFTikgRFQ6ICoqIHRy
YW5zbGF0aW9uIGZvciBkZXZpY2UgL21jICoqDQooWEVOKSBEVDogYnVzIGlzIGRlZmF1bHQgKG5h
PTIsIG5zPTIpIG9uIC8NCihYRU4pIERUOiB0cmFuc2xhdGluZyBhZGRyZXNzOjwzPiAwMDAwMDAw
MDwzPiA3MDAxZGJhMDwzPg0KKFhFTikgRFQ6IHJlYWNoZWQgcm9vdCBub2RlDQooWEVOKSAgIC0g
TU1JTzogMDA3MDAxZGJhMCAtIDAwNzAwMWUwMDAgUDJNVHlwZT01DQooWEVOKSBoYW5kbGUgL2lu
dGVycnVwdC1jb250cm9sbGVyDQooWEVOKSBDcmVhdGUgZ2ljIG5vZGUNCihYRU4pICAgU2V0IHBo
YW5kbGUgPSAweDMzDQooWEVOKSBoYW5kbGUgL2ludGVycnVwdC1jb250cm9sbGVyQDYwMDA0MDAw
DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2ludGVycnVwdC1jb250cm9sbGVyQDYwMDA0MDAw
DQooWEVOKSAgdXNpbmcgJ2ludGVycnVwdHMnIHByb3BlcnR5DQooWEVOKSAgaW50c3BlYz0wIGlu
dGxlbj0xMg0KKFhFTikgIGludHNpemU9MyBpbnRsZW49MTINCihYRU4pIGR0X2RldmljZV9nZXRf
cmF3X2lycTogZGV2PS9pbnRlcnJ1cHQtY29udHJvbGxlckA2MDAwNDAwMCwgaW5kZXg9MA0KKFhF
TikgIHVzaW5nICdpbnRlcnJ1cHRzJyBwcm9wZXJ0eQ0KKFhFTikgIGludHNwZWM9MCBpbnRsZW49
MTINCihYRU4pICBpbnRzaXplPTMgaW50bGVuPTEyDQooWEVOKSBkdF9pcnFfbWFwX3JhdzogcGFy
PS9pbnRlcnJ1cHQtY29udHJvbGxlcixpbnRzcGVjPVsweDAwMDAwMDAwIDB4MDAwMDAwMDQuLi5d
LG9pbnRzaXplPTMNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBpcGFyPS9pbnRlcnJ1cHQtY29udHJv
bGxlciwgc2l6ZT0zDQooWEVOKSAgLT4gYWRkcnNpemU9Mg0KKFhFTikgIC0+IGdvdCBpdCAhDQoo
WEVOKSBkdF9kZXZpY2VfZ2V0X3Jhd19pcnE6IGRldj0vaW50ZXJydXB0LWNvbnRyb2xsZXJANjAw
MDQwMDAsIGluZGV4PTENCihYRU4pICB1c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihYRU4p
ICBpbnRzcGVjPTAgaW50bGVuPTEyDQooWEVOKSAgaW50c2l6ZT0zIGludGxlbj0xMg0KKFhFTikg
ZHRfaXJxX21hcF9yYXc6IHBhcj0vaW50ZXJydXB0LWNvbnRyb2xsZXIsaW50c3BlYz1bMHgwMDAw
MDAwMCAweDAwMDAwMDA1Li4uXSxvaW50c2l6ZT0zDQooWEVOKSBkdF9pcnFfbWFwX3JhdzogaXBh
cj0vaW50ZXJydXB0LWNvbnRyb2xsZXIsIHNpemU9Mw0KKFhFTikgIC0+IGFkZHJzaXplPTINCihY
RU4pICAtPiBnb3QgaXQgIQ0KKFhFTikgZHRfZGV2aWNlX2dldF9yYXdfaXJxOiBkZXY9L2ludGVy
cnVwdC1jb250cm9sbGVyQDYwMDA0MDAwLCBpbmRleD0yDQooWEVOKSAgdXNpbmcgJ2ludGVycnVw
dHMnIHByb3BlcnR5DQooWEVOKSAgaW50c3BlYz0wIGludGxlbj0xMg0KKFhFTikgIGludHNpemU9
MyBpbnRsZW49MTINCihYRU4pIGR0X2lycV9tYXBfcmF3OiBwYXI9L2ludGVycnVwdC1jb250cm9s
bGVyLGludHNwZWM9WzB4MDAwMDAwMDAgMHgwMDAwMDAwNy4uLl0sb2ludHNpemU9Mw0KKFhFTikg
ZHRfaXJxX21hcF9yYXc6IGlwYXI9L2ludGVycnVwdC1jb250cm9sbGVyLCBzaXplPTMNCihYRU4p
ICAtPiBhZGRyc2l6ZT0yDQooWEVOKSAgLT4gZ290IGl0ICENCihYRU4pIGR0X2RldmljZV9nZXRf
cmF3X2lycTogZGV2PS9pbnRlcnJ1cHQtY29udHJvbGxlckA2MDAwNDAwMCwgaW5kZXg9Mw0KKFhF
TikgIHVzaW5nICdpbnRlcnJ1cHRzJyBwcm9wZXJ0eQ0KKFhFTikgIGludHNwZWM9MCBpbnRsZW49
MTINCihYRU4pICBpbnRzaXplPTMgaW50bGVuPTEyDQooWEVOKSBkdF9pcnFfbWFwX3JhdzogcGFy
PS9pbnRlcnJ1cHQtY29udHJvbGxlcixpbnRzcGVjPVsweDAwMDAwMDAwIDB4MDAwMDAwMTIuLi5d
LG9pbnRzaXplPTMNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBpcGFyPS9pbnRlcnJ1cHQtY29udHJv
bGxlciwgc2l6ZT0zDQooWEVOKSAgLT4gYWRkcnNpemU9Mg0KKFhFTikgIC0+IGdvdCBpdCAhDQoo
WEVOKSAvaW50ZXJydXB0LWNvbnRyb2xsZXJANjAwMDQwMDAgcGFzc3Rocm91Z2ggPSAxIG5hZGRy
ID0gNg0KKFhFTikgQ2hlY2sgaWYgL2ludGVycnVwdC1jb250cm9sbGVyQDYwMDA0MDAwIGlzIGJl
aGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9pbnRl
cnJ1cHQtY29udHJvbGxlckA2MDAwNDAwMA0KKFhFTikgIHVzaW5nICdpbnRlcnJ1cHRzJyBwcm9w
ZXJ0eQ0KKFhFTikgIGludHNwZWM9MCBpbnRsZW49MTINCihYRU4pICBpbnRzaXplPTMgaW50bGVu
PTEyDQooWEVOKSBkdF9kZXZpY2VfZ2V0X3Jhd19pcnE6IGRldj0vaW50ZXJydXB0LWNvbnRyb2xs
ZXJANjAwMDQwMDAsIGluZGV4PTANCihYRU4pICB1c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkN
CihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTEyDQooWEVOKSAgaW50c2l6ZT0zIGludGxlbj0xMg0K
KFhFTikgZHRfaXJxX21hcF9yYXc6IHBhcj0vaW50ZXJydXB0LWNvbnRyb2xsZXIsaW50c3BlYz1b
MHgwMDAwMDAwMCAweDAwMDAwMDA0Li4uXSxvaW50c2l6ZT0zDQooWEVOKSBkdF9pcnFfbWFwX3Jh
dzogaXBhcj0vaW50ZXJydXB0LWNvbnRyb2xsZXIsIHNpemU9Mw0KKFhFTikgIC0+IGFkZHJzaXpl
PTINCihYRU4pICAtPiBnb3QgaXQgIQ0KKFhFTikgZHRfZGV2aWNlX2dldF9yYXdfaXJxOiBkZXY9
L2ludGVycnVwdC1jb250cm9sbGVyQDYwMDA0MDAwLCBpbmRleD0wDQooWEVOKSAgdXNpbmcgJ2lu
dGVycnVwdHMnIHByb3BlcnR5DQooWEVOKSAgaW50c3BlYz0wIGludGxlbj0xMg0KKFhFTikgIGlu
dHNpemU9MyBpbnRsZW49MTINCihYRU4pIGR0X2lycV9tYXBfcmF3OiBwYXI9L2ludGVycnVwdC1j
b250cm9sbGVyLGludHNwZWM9WzB4MDAwMDAwMDAgMHgwMDAwMDAwNC4uLl0sb2ludHNpemU9Mw0K
KFhFTikgZHRfaXJxX21hcF9yYXc6IGlwYXI9L2ludGVycnVwdC1jb250cm9sbGVyLCBzaXplPTMN
CihYRU4pICAtPiBhZGRyc2l6ZT0yDQooWEVOKSAgLT4gZ290IGl0ICENCihYRU4pICAgLSBJUlE6
IDM2DQooWEVOKSBkdF9kZXZpY2VfZ2V0X3Jhd19pcnE6IGRldj0vaW50ZXJydXB0LWNvbnRyb2xs
ZXJANjAwMDQwMDAsIGluZGV4PTENCihYRU4pICB1c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkN
CihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTEyDQooWEVOKSAgaW50c2l6ZT0zIGludGxlbj0xMg0K
KFhFTikgZHRfaXJxX21hcF9yYXc6IHBhcj0vaW50ZXJydXB0LWNvbnRyb2xsZXIsaW50c3BlYz1b
MHgwMDAwMDAwMCAweDAwMDAwMDA1Li4uXSxvaW50c2l6ZT0zDQooWEVOKSBkdF9pcnFfbWFwX3Jh
dzogaXBhcj0vaW50ZXJydXB0LWNvbnRyb2xsZXIsIHNpemU9Mw0KKFhFTikgIC0+IGFkZHJzaXpl
PTINCihYRU4pICAtPiBnb3QgaXQgIQ0KKFhFTikgZHRfZGV2aWNlX2dldF9yYXdfaXJxOiBkZXY9
L2ludGVycnVwdC1jb250cm9sbGVyQDYwMDA0MDAwLCBpbmRleD0xDQooWEVOKSAgdXNpbmcgJ2lu
dGVycnVwdHMnIHByb3BlcnR5DQooWEVOKSAgaW50c3BlYz0wIGludGxlbj0xMg0KKFhFTikgIGlu
dHNpemU9MyBpbnRsZW49MTINCihYRU4pIGR0X2lycV9tYXBfcmF3OiBwYXI9L2ludGVycnVwdC1j
b250cm9sbGVyLGludHNwZWM9WzB4MDAwMDAwMDAgMHgwMDAwMDAwNS4uLl0sb2ludHNpemU9Mw0K
KFhFTikgZHRfaXJxX21hcF9yYXc6IGlwYXI9L2ludGVycnVwdC1jb250cm9sbGVyLCBzaXplPTMN
CihYRU4pICAtPiBhZGRyc2l6ZT0yDQooWEVOKSAgLT4gZ290IGl0ICENCihYRU4pICAgLSBJUlE6
IDM3DQooWEVOKSBkdF9kZXZpY2VfZ2V0X3Jhd19pcnE6IGRldj0vaW50ZXJydXB0LWNvbnRyb2xs
ZXJANjAwMDQwMDAsIGluZGV4PTINCihYRU4pICB1c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkN
CihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTEyDQooWEVOKSAgaW50c2l6ZT0zIGludGxlbj0xMg0K
KFhFTikgZHRfaXJxX21hcF9yYXc6IHBhcj0vaW50ZXJydXB0LWNvbnRyb2xsZXIsaW50c3BlYz1b
MHgwMDAwMDAwMCAweDAwMDAwMDA3Li4uXSxvaW50c2l6ZT0zDQooWEVOKSBkdF9pcnFfbWFwX3Jh
dzogaXBhcj0vaW50ZXJydXB0LWNvbnRyb2xsZXIsIHNpemU9Mw0KKFhFTikgIC0+IGFkZHJzaXpl
PTINCihYRU4pICAtPiBnb3QgaXQgIQ0KKFhFTikgZHRfZGV2aWNlX2dldF9yYXdfaXJxOiBkZXY9
L2ludGVycnVwdC1jb250cm9sbGVyQDYwMDA0MDAwLCBpbmRleD0yDQooWEVOKSAgdXNpbmcgJ2lu
dGVycnVwdHMnIHByb3BlcnR5DQooWEVOKSAgaW50c3BlYz0wIGludGxlbj0xMg0KKFhFTikgIGlu
dHNpemU9MyBpbnRsZW49MTINCihYRU4pIGR0X2lycV9tYXBfcmF3OiBwYXI9L2ludGVycnVwdC1j
b250cm9sbGVyLGludHNwZWM9WzB4MDAwMDAwMDAgMHgwMDAwMDAwNy4uLl0sb2ludHNpemU9Mw0K
KFhFTikgZHRfaXJxX21hcF9yYXc6IGlwYXI9L2ludGVycnVwdC1jb250cm9sbGVyLCBzaXplPTMN
CihYRU4pICAtPiBhZGRyc2l6ZT0yDQooWEVOKSAgLT4gZ290IGl0ICENCihYRU4pICAgLSBJUlE6
IDM5DQooWEVOKSBkdF9kZXZpY2VfZ2V0X3Jhd19pcnE6IGRldj0vaW50ZXJydXB0LWNvbnRyb2xs
ZXJANjAwMDQwMDAsIGluZGV4PTMNCihYRU4pICB1c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkN
CihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTEyDQooWEVOKSAgaW50c2l6ZT0zIGludGxlbj0xMg0K
KFhFTikgZHRfaXJxX21hcF9yYXc6IHBhcj0vaW50ZXJydXB0LWNvbnRyb2xsZXIsaW50c3BlYz1b
MHgwMDAwMDAwMCAweDAwMDAwMDEyLi4uXSxvaW50c2l6ZT0zDQooWEVOKSBkdF9pcnFfbWFwX3Jh
dzogaXBhcj0vaW50ZXJydXB0LWNvbnRyb2xsZXIsIHNpemU9Mw0KKFhFTikgIC0+IGFkZHJzaXpl
PTINCihYRU4pICAtPiBnb3QgaXQgIQ0KKFhFTikgZHRfZGV2aWNlX2dldF9yYXdfaXJxOiBkZXY9
L2ludGVycnVwdC1jb250cm9sbGVyQDYwMDA0MDAwLCBpbmRleD0zDQooWEVOKSAgdXNpbmcgJ2lu
dGVycnVwdHMnIHByb3BlcnR5DQooWEVOKSAgaW50c3BlYz0wIGludGxlbj0xMg0KKFhFTikgIGlu
dHNpemU9MyBpbnRsZW49MTINCihYRU4pIGR0X2lycV9tYXBfcmF3OiBwYXI9L2ludGVycnVwdC1j
b250cm9sbGVyLGludHNwZWM9WzB4MDAwMDAwMDAgMHgwMDAwMDAxMi4uLl0sb2ludHNpemU9Mw0K
KFhFTikgZHRfaXJxX21hcF9yYXc6IGlwYXI9L2ludGVycnVwdC1jb250cm9sbGVyLCBzaXplPTMN
CihYRU4pICAtPiBhZGRyc2l6ZT0yDQooWEVOKSAgLT4gZ290IGl0ICENCihYRU4pICAgLSBJUlE6
IDUwDQooWEVOKSBEVDogKiogdHJhbnNsYXRpb24gZm9yIGRldmljZSAvaW50ZXJydXB0LWNvbnRy
b2xsZXJANjAwMDQwMDAgKioNCihYRU4pIERUOiBidXMgaXMgZGVmYXVsdCAobmE9MiwgbnM9Mikg
b24gLw0KKFhFTikgRFQ6IHRyYW5zbGF0aW5nIGFkZHJlc3M6PDM+IDAwMDAwMDAwPDM+IDYwMDA0
MDAwPDM+DQooWEVOKSBEVDogcmVhY2hlZCByb290IG5vZGUNCihYRU4pICAgLSBNTUlPOiAwMDYw
MDA0MDAwIC0gMDA2MDAwNDA0MCBQMk1UeXBlPTUNCihYRU4pIERUOiAqKiB0cmFuc2xhdGlvbiBm
b3IgZGV2aWNlIC9pbnRlcnJ1cHQtY29udHJvbGxlckA2MDAwNDAwMCAqKg0KKFhFTikgRFQ6IGJ1
cyBpcyBkZWZhdWx0IChuYT0yLCBucz0yKSBvbiAvDQooWEVOKSBEVDogdHJhbnNsYXRpbmcgYWRk
cmVzczo8Mz4gMDAwMDAwMDA8Mz4gNjAwMDQxMDA8Mz4NCihYRU4pIERUOiByZWFjaGVkIHJvb3Qg
bm9kZQ0KKFhFTikgICAtIE1NSU86IDAwNjAwMDQxMDAgLSAwMDYwMDA0MTQwIFAyTVR5cGU9NQ0K
KFhFTikgRFQ6ICoqIHRyYW5zbGF0aW9uIGZvciBkZXZpY2UgL2ludGVycnVwdC1jb250cm9sbGVy
QDYwMDA0MDAwICoqDQooWEVOKSBEVDogYnVzIGlzIGRlZmF1bHQgKG5hPTIsIG5zPTIpIG9uIC8N
CihYRU4pIERUOiB0cmFuc2xhdGluZyBhZGRyZXNzOjwzPiAwMDAwMDAwMDwzPiA2MDAwNDIwMDwz
Pg0KKFhFTikgRFQ6IHJlYWNoZWQgcm9vdCBub2RlDQooWEVOKSAgIC0gTU1JTzogMDA2MDAwNDIw
MCAtIDAwNjAwMDQyNDAgUDJNVHlwZT01DQooWEVOKSBEVDogKiogdHJhbnNsYXRpb24gZm9yIGRl
dmljZSAvaW50ZXJydXB0LWNvbnRyb2xsZXJANjAwMDQwMDAgKioNCihYRU4pIERUOiBidXMgaXMg
ZGVmYXVsdCAobmE9MiwgbnM9Mikgb24gLw0KKFhFTikgRFQ6IHRyYW5zbGF0aW5nIGFkZHJlc3M6
PDM+IDAwMDAwMDAwPDM+IDYwMDA0MzAwPDM+DQooWEVOKSBEVDogcmVhY2hlZCByb290IG5vZGUN
CihYRU4pICAgLSBNTUlPOiAwMDYwMDA0MzAwIC0gMDA2MDAwNDM0MCBQMk1UeXBlPTUNCihYRU4p
IERUOiAqKiB0cmFuc2xhdGlvbiBmb3IgZGV2aWNlIC9pbnRlcnJ1cHQtY29udHJvbGxlckA2MDAw
NDAwMCAqKg0KKFhFTikgRFQ6IGJ1cyBpcyBkZWZhdWx0IChuYT0yLCBucz0yKSBvbiAvDQooWEVO
KSBEVDogdHJhbnNsYXRpbmcgYWRkcmVzczo8Mz4gMDAwMDAwMDA8Mz4gNjAwMDQ0MDA8Mz4NCihY
RU4pIERUOiByZWFjaGVkIHJvb3Qgbm9kZQ0KKFhFTikgICAtIE1NSU86IDAwNjAwMDQ0MDAgLSAw
MDYwMDA0NDQwIFAyTVR5cGU9NQ0KKFhFTikgRFQ6ICoqIHRyYW5zbGF0aW9uIGZvciBkZXZpY2Ug
L2ludGVycnVwdC1jb250cm9sbGVyQDYwMDA0MDAwICoqDQooWEVOKSBEVDogYnVzIGlzIGRlZmF1
bHQgKG5hPTIsIG5zPTIpIG9uIC8NCihYRU4pIERUOiB0cmFuc2xhdGluZyBhZGRyZXNzOjwzPiAw
MDAwMDAwMDwzPiA2MDAwNDUwMDwzPg0KKFhFTikgRFQ6IHJlYWNoZWQgcm9vdCBub2RlDQooWEVO
KSAgIC0gTU1JTzogMDA2MDAwNDUwMCAtIDAwNjAwMDQ1NDAgUDJNVHlwZT01DQooWEVOKSBoYW5k
bGUgL2Zsb3ctY29udHJvbGxlckA2MDAwNzAwMA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9m
bG93LWNvbnRyb2xsZXJANjAwMDcwMDANCihYRU4pIC9mbG93LWNvbnRyb2xsZXJANjAwMDcwMDAg
cGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMQ0KKFhFTikgQ2hlY2sgaWYgL2Zsb3ctY29udHJvbGxl
ckA2MDAwNzAwMCBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9u
dW1iZXI6IGRldj0vZmxvdy1jb250cm9sbGVyQDYwMDA3MDAwDQooWEVOKSBEVDogKiogdHJhbnNs
YXRpb24gZm9yIGRldmljZSAvZmxvdy1jb250cm9sbGVyQDYwMDA3MDAwICoqDQooWEVOKSBEVDog
YnVzIGlzIGRlZmF1bHQgKG5hPTIsIG5zPTIpIG9uIC8NCihYRU4pIERUOiB0cmFuc2xhdGluZyBh
ZGRyZXNzOjwzPiAwMDAwMDAwMDwzPiA2MDAwNzAwMDwzPg0KKFhFTikgRFQ6IHJlYWNoZWQgcm9v
dCBub2RlDQooWEVOKSAgIC0gTU1JTzogMDA2MDAwNzAwMCAtIDAwNjAwMDgwMDAgUDJNVHlwZT01
DQooWEVOKSBoYW5kbGUgL2FoYkA2MDAwYzAwMA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9h
aGJANjAwMGMwMDANCihYRU4pIC9haGJANjAwMGMwMDAgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0g
MQ0KKFhFTikgQ2hlY2sgaWYgL2FoYkA2MDAwYzAwMCBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBh
ZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vYWhiQDYwMDBjMDAwDQooWEVOKSBEVDog
KiogdHJhbnNsYXRpb24gZm9yIGRldmljZSAvYWhiQDYwMDBjMDAwICoqDQooWEVOKSBEVDogYnVz
IGlzIGRlZmF1bHQgKG5hPTIsIG5zPTIpIG9uIC8NCihYRU4pIERUOiB0cmFuc2xhdGluZyBhZGRy
ZXNzOjwzPiAwMDAwMDAwMDwzPiA2MDAwYzAwMDwzPg0KKFhFTikgRFQ6IHJlYWNoZWQgcm9vdCBu
b2RlDQooWEVOKSAgIC0gTU1JTzogMDA2MDAwYzAwMCAtIDAwNjAwMGMxNGYgUDJNVHlwZT01DQoo
WEVOKSBoYW5kbGUgL2Fjb25uZWN0QDcwMmMwMDAwDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9
L2Fjb25uZWN0QDcwMmMwMDAwDQooWEVOKSAvYWNvbm5lY3RANzAyYzAwMDAgcGFzc3Rocm91Z2gg
PSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL2Fjb25uZWN0QDcwMmMwMDAwIGlzIGJlaGlu
ZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9hY29ubmVj
dEA3MDJjMDAwMA0KKFhFTikgaGFuZGxlIC9hY29ubmVjdEA3MDJjMDAwMC9hZ2ljQDcwMmY5MDAw
DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2Fjb25uZWN0QDcwMmMwMDAwL2FnaWNANzAyZjkw
MDANCihYRU4pICB1c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAg
aW50bGVuPTMNCihYRU4pICBpbnRzaXplPTMgaW50bGVuPTMNCihYRU4pIGR0X2RldmljZV9nZXRf
cmF3X2lycTogZGV2PS9hY29ubmVjdEA3MDJjMDAwMC9hZ2ljQDcwMmY5MDAwLCBpbmRleD0wDQoo
WEVOKSAgdXNpbmcgJ2ludGVycnVwdHMnIHByb3BlcnR5DQooWEVOKSAgaW50c3BlYz0wIGludGxl
bj0zDQooWEVOKSAgaW50c2l6ZT0zIGludGxlbj0zDQooWEVOKSBkdF9pcnFfbWFwX3JhdzogcGFy
PS9pbnRlcnJ1cHQtY29udHJvbGxlckA2MDAwNDAwMCxpbnRzcGVjPVsweDAwMDAwMDAwIDB4MDAw
MDAwNjYuLi5dLG9pbnRzaXplPTMNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBpcGFyPS9pbnRlcnJ1
cHQtY29udHJvbGxlckA2MDAwNDAwMCwgc2l6ZT0zDQooWEVOKSAgLT4gYWRkcnNpemU9Mg0KKFhF
TikgIC0+IGdvdCBpdCAhDQooWEVOKSAvYWNvbm5lY3RANzAyYzAwMDAvYWdpY0A3MDJmOTAwMCBw
YXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAyDQooWEVOKSBDaGVjayBpZiAvYWNvbm5lY3RANzAyYzAw
MDAvYWdpY0A3MDJmOTAwMCBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0
X2lycV9udW1iZXI6IGRldj0vYWNvbm5lY3RANzAyYzAwMDAvYWdpY0A3MDJmOTAwMA0KKFhFTikg
IHVzaW5nICdpbnRlcnJ1cHRzJyBwcm9wZXJ0eQ0KKFhFTikgIGludHNwZWM9MCBpbnRsZW49Mw0K
KFhFTikgIGludHNpemU9MyBpbnRsZW49Mw0KKFhFTikgZHRfZGV2aWNlX2dldF9yYXdfaXJxOiBk
ZXY9L2Fjb25uZWN0QDcwMmMwMDAwL2FnaWNANzAyZjkwMDAsIGluZGV4PTANCihYRU4pICB1c2lu
ZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTMNCihYRU4p
ICBpbnRzaXplPTMgaW50bGVuPTMNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBwYXI9L2ludGVycnVw
dC1jb250cm9sbGVyQDYwMDA0MDAwLGludHNwZWM9WzB4MDAwMDAwMDAgMHgwMDAwMDA2Ni4uLl0s
b2ludHNpemU9Mw0KKFhFTikgZHRfaXJxX21hcF9yYXc6IGlwYXI9L2ludGVycnVwdC1jb250cm9s
bGVyQDYwMDA0MDAwLCBzaXplPTMNCihYRU4pICAtPiBhZGRyc2l6ZT0yDQooWEVOKSAgLT4gZ290
IGl0ICENCihYRU4pIGlycSAwIG5vdCAoZGlyZWN0bHkgb3IgaW5kaXJlY3RseSkgY29ubmVjdGVk
IHRvIHByaW1hcnljb250cm9sbGVyLiBDb25uZWN0ZWQgdG8gL2ludGVycnVwdC1jb250cm9sbGVy
QDYwMDA0MDAwDQooWEVOKSBEVDogKiogdHJhbnNsYXRpb24gZm9yIGRldmljZSAvYWNvbm5lY3RA
NzAyYzAwMDAvYWdpY0A3MDJmOTAwMCAqKg0KKFhFTikgRFQ6IGJ1cyBpcyBkZWZhdWx0IChuYT0y
LCBucz0yKSBvbiAvYWNvbm5lY3RANzAyYzAwMDANCihYRU4pIERUOiB0cmFuc2xhdGluZyBhZGRy
ZXNzOjwzPiAwMDAwMDAwMDwzPiA3MDJmOTAwMDwzPg0KKFhFTikgRFQ6IHBhcmVudCBidXMgaXMg
ZGVmYXVsdCAobmE9MiwgbnM9Mikgb24gLw0KKFhFTikgRFQ6IGVtcHR5IHJhbmdlczsgMToxIHRy
YW5zbGF0aW9uDQooWEVOKSBEVDogcGFyZW50IHRyYW5zbGF0aW9uIGZvcjo8Mz4gMDAwMDAwMDA8
Mz4gMDAwMDAwMDA8Mz4NCihYRU4pIERUOiB3aXRoIG9mZnNldDogNzAyZjkwMDANCihYRU4pIERU
OiBvbmUgbGV2ZWwgdHJhbnNsYXRpb246PDM+IDAwMDAwMDAwPDM+IDcwMmY5MDAwPDM+DQooWEVO
KSBEVDogcmVhY2hlZCByb290IG5vZGUNCihYRU4pICAgLSBNTUlPOiAwMDcwMmY5MDAwIC0gMDA3
MDJmYjAwMCBQMk1UeXBlPTUNCihYRU4pIERUOiAqKiB0cmFuc2xhdGlvbiBmb3IgZGV2aWNlIC9h
Y29ubmVjdEA3MDJjMDAwMC9hZ2ljQDcwMmY5MDAwICoqDQooWEVOKSBEVDogYnVzIGlzIGRlZmF1
bHQgKG5hPTIsIG5zPTIpIG9uIC9hY29ubmVjdEA3MDJjMDAwMA0KKFhFTikgRFQ6IHRyYW5zbGF0
aW5nIGFkZHJlc3M6PDM+IDAwMDAwMDAwPDM+IDcwMmZhMDAwPDM+DQooWEVOKSBEVDogcGFyZW50
IGJ1cyBpcyBkZWZhdWx0IChuYT0yLCBucz0yKSBvbiAvDQooWEVOKSBEVDogZW1wdHkgcmFuZ2Vz
OyAxOjEgdHJhbnNsYXRpb24NCihYRU4pIERUOiBwYXJlbnQgdHJhbnNsYXRpb24gZm9yOjwzPiAw
MDAwMDAwMDwzPiAwMDAwMDAwMDwzPg0KKFhFTikgRFQ6IHdpdGggb2Zmc2V0OiA3MDJmYTAwMA0K
KFhFTikgRFQ6IG9uZSBsZXZlbCB0cmFuc2xhdGlvbjo8Mz4gMDAwMDAwMDA8Mz4gNzAyZmEwMDA8
Mz4NCihYRU4pIERUOiByZWFjaGVkIHJvb3Qgbm9kZQ0KKFhFTikgICAtIE1NSU86IDAwNzAyZmEw
MDAgLSAwMDcwMmZjMDAwIFAyTVR5cGU9NQ0KKFhFTikgaGFuZGxlIC9hY29ubmVjdEA3MDJjMDAw
MC9hZHNwDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2Fjb25uZWN0QDcwMmMwMDAwL2Fkc3AN
CihYRU4pICB1c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50
bGVuPTM2DQooWEVOKSAgaW50c2l6ZT00IGludGxlbj0zNg0KKFhFTikgZHRfZGV2aWNlX2dldF9y
YXdfaXJxOiBkZXY9L2Fjb25uZWN0QDcwMmMwMDAwL2Fkc3AsIGluZGV4PTANCihYRU4pICB1c2lu
ZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTM2DQooWEVO
KSAgaW50c2l6ZT00IGludGxlbj0zNg0KKFhFTikgZHRfaXJxX21hcF9yYXc6IHBhcj0vYWNvbm5l
Y3RANzAyYzAwMDAvYWdpY0A3MDJmOTAwMCxpbnRzcGVjPVsweDAwMDAwMDAwIDB4MDAwMDAwMDUu
Li5dLG9pbnRzaXplPTQNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBpcGFyPS9hY29ubmVjdEA3MDJj
MDAwMC9hZ2ljQDcwMmY5MDAwLCBzaXplPTQNCihYRU4pICAtPiBhZGRyc2l6ZT0yDQooWEVOKSAg
LT4gZ290IGl0ICENCihYRU4pIGR0X2RldmljZV9nZXRfcmF3X2lycTogZGV2PS9hY29ubmVjdEA3
MDJjMDAwMC9hZHNwLCBpbmRleD0xDQooWEVOKSAgdXNpbmcgJ2ludGVycnVwdHMnIHByb3BlcnR5
DQooWEVOKSAgaW50c3BlYz0wIGludGxlbj0zNg0KKFhFTikgIGludHNpemU9NCBpbnRsZW49MzYN
CihYRU4pIGR0X2lycV9tYXBfcmF3OiBwYXI9L2Fjb25uZWN0QDcwMmMwMDAwL2FnaWNANzAyZjkw
MDAsaW50c3BlYz1bMHgwMDAwMDAwMCAweDAwMDAwMDAwLi4uXSxvaW50c2l6ZT00DQooWEVOKSBk
dF9pcnFfbWFwX3JhdzogaXBhcj0vYWNvbm5lY3RANzAyYzAwMDAvYWdpY0A3MDJmOTAwMCwgc2l6
ZT00DQooWEVOKSAgLT4gYWRkcnNpemU9Mg0KKFhFTikgIC0+IGdvdCBpdCAhDQooWEVOKSBkdF9k
ZXZpY2VfZ2V0X3Jhd19pcnE6IGRldj0vYWNvbm5lY3RANzAyYzAwMDAvYWRzcCwgaW5kZXg9Mg0K
KFhFTikgIHVzaW5nICdpbnRlcnJ1cHRzJyBwcm9wZXJ0eQ0KKFhFTikgIGludHNwZWM9MCBpbnRs
ZW49MzYNCihYRU4pICBpbnRzaXplPTQgaW50bGVuPTM2DQooWEVOKSBkdF9pcnFfbWFwX3Jhdzog
cGFyPS9hY29ubmVjdEA3MDJjMDAwMC9hZ2ljQDcwMmY5MDAwLGludHNwZWM9WzB4MDAwMDAwMDAg
MHgwMDAwMDAyZi4uLl0sb2ludHNpemU9NA0KKFhFTikgZHRfaXJxX21hcF9yYXc6IGlwYXI9L2Fj
b25uZWN0QDcwMmMwMDAwL2FnaWNANzAyZjkwMDAsIHNpemU9NA0KKFhFTikgIC0+IGFkZHJzaXpl
PTINCihYRU4pICAtPiBnb3QgaXQgIQ0KKFhFTikgZHRfZGV2aWNlX2dldF9yYXdfaXJxOiBkZXY9
L2Fjb25uZWN0QDcwMmMwMDAwL2Fkc3AsIGluZGV4PTMNCihYRU4pICB1c2luZyAnaW50ZXJydXB0
cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTM2DQooWEVOKSAgaW50c2l6ZT00
IGludGxlbj0zNg0KKFhFTikgZHRfaXJxX21hcF9yYXc6IHBhcj0vYWNvbm5lY3RANzAyYzAwMDAv
YWdpY0A3MDJmOTAwMCxpbnRzcGVjPVsweDAwMDAwMDAwIDB4MDAwMDAwMzQuLi5dLG9pbnRzaXpl
PTQNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBpcGFyPS9hY29ubmVjdEA3MDJjMDAwMC9hZ2ljQDcw
MmY5MDAwLCBzaXplPTQNCihYRU4pICAtPiBhZGRyc2l6ZT0yDQooWEVOKSAgLT4gZ290IGl0ICEN
CihYRU4pIGR0X2RldmljZV9nZXRfcmF3X2lycTogZGV2PS9hY29ubmVjdEA3MDJjMDAwMC9hZHNw
LCBpbmRleD00DQooWEVOKSAgdXNpbmcgJ2ludGVycnVwdHMnIHByb3BlcnR5DQooWEVOKSAgaW50
c3BlYz0wIGludGxlbj0zNg0KKFhFTikgIGludHNpemU9NCBpbnRsZW49MzYNCihYRU4pIGR0X2ly
cV9tYXBfcmF3OiBwYXI9L2Fjb25uZWN0QDcwMmMwMDAwL2FnaWNANzAyZjkwMDAsaW50c3BlYz1b
MHgwMDAwMDAwMCAweDAwMDAwMDMyLi4uXSxvaW50c2l6ZT00DQooWEVOKSBkdF9pcnFfbWFwX3Jh
dzogaXBhcj0vYWNvbm5lY3RANzAyYzAwMDAvYWdpY0A3MDJmOTAwMCwgc2l6ZT00DQooWEVOKSAg
LT4gYWRkcnNpemU9Mg0KKFhFTikgIC0+IGdvdCBpdCAhDQooWEVOKSBkdF9kZXZpY2VfZ2V0X3Jh
d19pcnE6IGRldj0vYWNvbm5lY3RANzAyYzAwMDAvYWRzcCwgaW5kZXg9NQ0KKFhFTikgIHVzaW5n
ICdpbnRlcnJ1cHRzJyBwcm9wZXJ0eQ0KKFhFTikgIGludHNwZWM9MCBpbnRsZW49MzYNCihYRU4p
ICBpbnRzaXplPTQgaW50bGVuPTM2DQooWEVOKSBkdF9pcnFfbWFwX3JhdzogcGFyPS9hY29ubmVj
dEA3MDJjMDAwMC9hZ2ljQDcwMmY5MDAwLGludHNwZWM9WzB4MDAwMDAwMDAgMHgwMDAwMDAzNy4u
Ll0sb2ludHNpemU9NA0KKFhFTikgZHRfaXJxX21hcF9yYXc6IGlwYXI9L2Fjb25uZWN0QDcwMmMw
MDAwL2FnaWNANzAyZjkwMDAsIHNpemU9NA0KKFhFTikgIC0+IGFkZHJzaXplPTINCihYRU4pICAt
PiBnb3QgaXQgIQ0KKFhFTikgZHRfZGV2aWNlX2dldF9yYXdfaXJxOiBkZXY9L2Fjb25uZWN0QDcw
MmMwMDAwL2Fkc3AsIGluZGV4PTYNCihYRU4pICB1c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkN
CihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTM2DQooWEVOKSAgaW50c2l6ZT00IGludGxlbj0zNg0K
KFhFTikgZHRfaXJxX21hcF9yYXc6IHBhcj0vYWNvbm5lY3RANzAyYzAwMDAvYWdpY0A3MDJmOTAw
MCxpbnRzcGVjPVsweDAwMDAwMDAwIDB4MDAwMDAwMDQuLi5dLG9pbnRzaXplPTQNCihYRU4pIGR0
X2lycV9tYXBfcmF3OiBpcGFyPS9hY29ubmVjdEA3MDJjMDAwMC9hZ2ljQDcwMmY5MDAwLCBzaXpl
PTQNCihYRU4pICAtPiBhZGRyc2l6ZT0yDQooWEVOKSAgLT4gZ290IGl0ICENCihYRU4pIGR0X2Rl
dmljZV9nZXRfcmF3X2lycTogZGV2PS9hY29ubmVjdEA3MDJjMDAwMC9hZHNwLCBpbmRleD03DQoo
WEVOKSAgdXNpbmcgJ2ludGVycnVwdHMnIHByb3BlcnR5DQooWEVOKSAgaW50c3BlYz0wIGludGxl
bj0zNg0KKFhFTikgIGludHNpemU9NCBpbnRsZW49MzYNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBw
YXI9L2Fjb25uZWN0QDcwMmMwMDAwL2FnaWNANzAyZjkwMDAsaW50c3BlYz1bMHgwMDAwMDAwMCAw
eDAwMDAwMDAxLi4uXSxvaW50c2l6ZT00DQooWEVOKSBkdF9pcnFfbWFwX3JhdzogaXBhcj0vYWNv
bm5lY3RANzAyYzAwMDAvYWdpY0A3MDJmOTAwMCwgc2l6ZT00DQooWEVOKSAgLT4gYWRkcnNpemU9
Mg0KKFhFTikgIC0+IGdvdCBpdCAhDQooWEVOKSBkdF9kZXZpY2VfZ2V0X3Jhd19pcnE6IGRldj0v
YWNvbm5lY3RANzAyYzAwMDAvYWRzcCwgaW5kZXg9OA0KKFhFTikgIHVzaW5nICdpbnRlcnJ1cHRz
JyBwcm9wZXJ0eQ0KKFhFTikgIGludHNwZWM9MCBpbnRsZW49MzYNCihYRU4pICBpbnRzaXplPTQg
aW50bGVuPTM2DQooWEVOKSBkdF9pcnFfbWFwX3JhdzogcGFyPS9hY29ubmVjdEA3MDJjMDAwMC9h
Z2ljQDcwMmY5MDAwLGludHNwZWM9WzB4MDAwMDAwMDAgMHgwMDAwMDAwMi4uLl0sb2ludHNpemU9
NA0KKFhFTikgZHRfaXJxX21hcF9yYXc6IGlwYXI9L2Fjb25uZWN0QDcwMmMwMDAwL2FnaWNANzAy
ZjkwMDAsIHNpemU9NA0KKFhFTikgIC0+IGFkZHJzaXplPTINCihYRU4pICAtPiBnb3QgaXQgIQ0K
KFhFTikgL2Fjb25uZWN0QDcwMmMwMDAwL2Fkc3AgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gNw0K
KFhFTikgQ2hlY2sgaWYgL2Fjb25uZWN0QDcwMmMwMDAwL2Fkc3AgaXMgYmVoaW5kIHRoZSBJT01N
VSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2Fjb25uZWN0QDcwMmMwMDAw
L2Fkc3ANCihYRU4pICB1c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVj
PTAgaW50bGVuPTM2DQooWEVOKSAgaW50c2l6ZT00IGludGxlbj0zNg0KKFhFTikgZHRfZGV2aWNl
X2dldF9yYXdfaXJxOiBkZXY9L2Fjb25uZWN0QDcwMmMwMDAwL2Fkc3AsIGluZGV4PTANCihYRU4p
ICB1c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTM2
DQooWEVOKSAgaW50c2l6ZT00IGludGxlbj0zNg0KKFhFTikgZHRfaXJxX21hcF9yYXc6IHBhcj0v
YWNvbm5lY3RANzAyYzAwMDAvYWdpY0A3MDJmOTAwMCxpbnRzcGVjPVsweDAwMDAwMDAwIDB4MDAw
MDAwMDUuLi5dLG9pbnRzaXplPTQNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBpcGFyPS9hY29ubmVj
dEA3MDJjMDAwMC9hZ2ljQDcwMmY5MDAwLCBzaXplPTQNCihYRU4pICAtPiBhZGRyc2l6ZT0yDQoo
WEVOKSAgLT4gZ290IGl0ICENCihYRU4pIGlycSAwIG5vdCAoZGlyZWN0bHkgb3IgaW5kaXJlY3Rs
eSkgY29ubmVjdGVkIHRvIHByaW1hcnljb250cm9sbGVyLiBDb25uZWN0ZWQgdG8gL2Fjb25uZWN0
QDcwMmMwMDAwL2FnaWNANzAyZjkwMDANCihYRU4pIGR0X2RldmljZV9nZXRfcmF3X2lycTogZGV2
PS9hY29ubmVjdEA3MDJjMDAwMC9hZHNwLCBpbmRleD0xDQooWEVOKSAgdXNpbmcgJ2ludGVycnVw
dHMnIHByb3BlcnR5DQooWEVOKSAgaW50c3BlYz0wIGludGxlbj0zNg0KKFhFTikgIGludHNpemU9
NCBpbnRsZW49MzYNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBwYXI9L2Fjb25uZWN0QDcwMmMwMDAw
L2FnaWNANzAyZjkwMDAsaW50c3BlYz1bMHgwMDAwMDAwMCAweDAwMDAwMDAwLi4uXSxvaW50c2l6
ZT00DQooWEVOKSBkdF9pcnFfbWFwX3JhdzogaXBhcj0vYWNvbm5lY3RANzAyYzAwMDAvYWdpY0A3
MDJmOTAwMCwgc2l6ZT00DQooWEVOKSAgLT4gYWRkcnNpemU9Mg0KKFhFTikgIC0+IGdvdCBpdCAh
DQooWEVOKSBpcnEgMSBub3QgKGRpcmVjdGx5IG9yIGluZGlyZWN0bHkpIGNvbm5lY3RlZCB0byBw
cmltYXJ5Y29udHJvbGxlci4gQ29ubmVjdGVkIHRvIC9hY29ubmVjdEA3MDJjMDAwMC9hZ2ljQDcw
MmY5MDAwDQooWEVOKSBkdF9kZXZpY2VfZ2V0X3Jhd19pcnE6IGRldj0vYWNvbm5lY3RANzAyYzAw
MDAvYWRzcCwgaW5kZXg9Mg0KKFhFTikgIHVzaW5nICdpbnRlcnJ1cHRzJyBwcm9wZXJ0eQ0KKFhF
TikgIGludHNwZWM9MCBpbnRsZW49MzYNCihYRU4pICBpbnRzaXplPTQgaW50bGVuPTM2DQooWEVO
KSBkdF9pcnFfbWFwX3JhdzogcGFyPS9hY29ubmVjdEA3MDJjMDAwMC9hZ2ljQDcwMmY5MDAwLGlu
dHNwZWM9WzB4MDAwMDAwMDAgMHgwMDAwMDAyZi4uLl0sb2ludHNpemU9NA0KKFhFTikgZHRfaXJx
X21hcF9yYXc6IGlwYXI9L2Fjb25uZWN0QDcwMmMwMDAwL2FnaWNANzAyZjkwMDAsIHNpemU9NA0K
KFhFTikgIC0+IGFkZHJzaXplPTINCihYRU4pICAtPiBnb3QgaXQgIQ0KKFhFTikgaXJxIDIgbm90
IChkaXJlY3RseSBvciBpbmRpcmVjdGx5KSBjb25uZWN0ZWQgdG8gcHJpbWFyeWNvbnRyb2xsZXIu
IENvbm5lY3RlZCB0byAvYWNvbm5lY3RANzAyYzAwMDAvYWdpY0A3MDJmOTAwMA0KKFhFTikgZHRf
ZGV2aWNlX2dldF9yYXdfaXJxOiBkZXY9L2Fjb25uZWN0QDcwMmMwMDAwL2Fkc3AsIGluZGV4PTMN
CihYRU4pICB1c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50
bGVuPTM2DQooWEVOKSAgaW50c2l6ZT00IGludGxlbj0zNg0KKFhFTikgZHRfaXJxX21hcF9yYXc6
IHBhcj0vYWNvbm5lY3RANzAyYzAwMDAvYWdpY0A3MDJmOTAwMCxpbnRzcGVjPVsweDAwMDAwMDAw
IDB4MDAwMDAwMzQuLi5dLG9pbnRzaXplPTQNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBpcGFyPS9h
Y29ubmVjdEA3MDJjMDAwMC9hZ2ljQDcwMmY5MDAwLCBzaXplPTQNCihYRU4pICAtPiBhZGRyc2l6
ZT0yDQooWEVOKSAgLT4gZ290IGl0ICENCihYRU4pIGlycSAzIG5vdCAoZGlyZWN0bHkgb3IgaW5k
aXJlY3RseSkgY29ubmVjdGVkIHRvIHByaW1hcnljb250cm9sbGVyLiBDb25uZWN0ZWQgdG8gL2Fj
b25uZWN0QDcwMmMwMDAwL2FnaWNANzAyZjkwMDANCihYRU4pIGR0X2RldmljZV9nZXRfcmF3X2ly
cTogZGV2PS9hY29ubmVjdEA3MDJjMDAwMC9hZHNwLCBpbmRleD00DQooWEVOKSAgdXNpbmcgJ2lu
dGVycnVwdHMnIHByb3BlcnR5DQooWEVOKSAgaW50c3BlYz0wIGludGxlbj0zNg0KKFhFTikgIGlu
dHNpemU9NCBpbnRsZW49MzYNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBwYXI9L2Fjb25uZWN0QDcw
MmMwMDAwL2FnaWNANzAyZjkwMDAsaW50c3BlYz1bMHgwMDAwMDAwMCAweDAwMDAwMDMyLi4uXSxv
aW50c2l6ZT00DQooWEVOKSBkdF9pcnFfbWFwX3JhdzogaXBhcj0vYWNvbm5lY3RANzAyYzAwMDAv
YWdpY0A3MDJmOTAwMCwgc2l6ZT00DQooWEVOKSAgLT4gYWRkcnNpemU9Mg0KKFhFTikgIC0+IGdv
dCBpdCAhDQooWEVOKSBpcnEgNCBub3QgKGRpcmVjdGx5IG9yIGluZGlyZWN0bHkpIGNvbm5lY3Rl
ZCB0byBwcmltYXJ5Y29udHJvbGxlci4gQ29ubmVjdGVkIHRvIC9hY29ubmVjdEA3MDJjMDAwMC9h
Z2ljQDcwMmY5MDAwDQooWEVOKSBkdF9kZXZpY2VfZ2V0X3Jhd19pcnE6IGRldj0vYWNvbm5lY3RA
NzAyYzAwMDAvYWRzcCwgaW5kZXg9NQ0KKFhFTikgIHVzaW5nICdpbnRlcnJ1cHRzJyBwcm9wZXJ0
eQ0KKFhFTikgIGludHNwZWM9MCBpbnRsZW49MzYNCihYRU4pICBpbnRzaXplPTQgaW50bGVuPTM2
DQooWEVOKSBkdF9pcnFfbWFwX3JhdzogcGFyPS9hY29ubmVjdEA3MDJjMDAwMC9hZ2ljQDcwMmY5
MDAwLGludHNwZWM9WzB4MDAwMDAwMDAgMHgwMDAwMDAzNy4uLl0sb2ludHNpemU9NA0KKFhFTikg
ZHRfaXJxX21hcF9yYXc6IGlwYXI9L2Fjb25uZWN0QDcwMmMwMDAwL2FnaWNANzAyZjkwMDAsIHNp
emU9NA0KKFhFTikgIC0+IGFkZHJzaXplPTINCihYRU4pICAtPiBnb3QgaXQgIQ0KKFhFTikgaXJx
IDUgbm90IChkaXJlY3RseSBvciBpbmRpcmVjdGx5KSBjb25uZWN0ZWQgdG8gcHJpbWFyeWNvbnRy
b2xsZXIuIENvbm5lY3RlZCB0byAvYWNvbm5lY3RANzAyYzAwMDAvYWdpY0A3MDJmOTAwMA0KKFhF
TikgZHRfZGV2aWNlX2dldF9yYXdfaXJxOiBkZXY9L2Fjb25uZWN0QDcwMmMwMDAwL2Fkc3AsIGlu
ZGV4PTYNCihYRU4pICB1c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVj
PTAgaW50bGVuPTM2DQooWEVOKSAgaW50c2l6ZT00IGludGxlbj0zNg0KKFhFTikgZHRfaXJxX21h
cF9yYXc6IHBhcj0vYWNvbm5lY3RANzAyYzAwMDAvYWdpY0A3MDJmOTAwMCxpbnRzcGVjPVsweDAw
MDAwMDAwIDB4MDAwMDAwMDQuLi5dLG9pbnRzaXplPTQNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBp
cGFyPS9hY29ubmVjdEA3MDJjMDAwMC9hZ2ljQDcwMmY5MDAwLCBzaXplPTQNCihYRU4pICAtPiBh
ZGRyc2l6ZT0yDQooWEVOKSAgLT4gZ290IGl0ICENCihYRU4pIGlycSA2IG5vdCAoZGlyZWN0bHkg
b3IgaW5kaXJlY3RseSkgY29ubmVjdGVkIHRvIHByaW1hcnljb250cm9sbGVyLiBDb25uZWN0ZWQg
dG8gL2Fjb25uZWN0QDcwMmMwMDAwL2FnaWNANzAyZjkwMDANCihYRU4pIGR0X2RldmljZV9nZXRf
cmF3X2lycTogZGV2PS9hY29ubmVjdEA3MDJjMDAwMC9hZHNwLCBpbmRleD03DQooWEVOKSAgdXNp
bmcgJ2ludGVycnVwdHMnIHByb3BlcnR5DQooWEVOKSAgaW50c3BlYz0wIGludGxlbj0zNg0KKFhF
TikgIGludHNpemU9NCBpbnRsZW49MzYNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBwYXI9L2Fjb25u
ZWN0QDcwMmMwMDAwL2FnaWNANzAyZjkwMDAsaW50c3BlYz1bMHgwMDAwMDAwMCAweDAwMDAwMDAx
Li4uXSxvaW50c2l6ZT00DQooWEVOKSBkdF9pcnFfbWFwX3JhdzogaXBhcj0vYWNvbm5lY3RANzAy
YzAwMDAvYWdpY0A3MDJmOTAwMCwgc2l6ZT00DQooWEVOKSAgLT4gYWRkcnNpemU9Mg0KKFhFTikg
IC0+IGdvdCBpdCAhDQooWEVOKSBpcnEgNyBub3QgKGRpcmVjdGx5IG9yIGluZGlyZWN0bHkpIGNv
bm5lY3RlZCB0byBwcmltYXJ5Y29udHJvbGxlci4gQ29ubmVjdGVkIHRvIC9hY29ubmVjdEA3MDJj
MDAwMC9hZ2ljQDcwMmY5MDAwDQooWEVOKSBkdF9kZXZpY2VfZ2V0X3Jhd19pcnE6IGRldj0vYWNv
bm5lY3RANzAyYzAwMDAvYWRzcCwgaW5kZXg9OA0KKFhFTikgIHVzaW5nICdpbnRlcnJ1cHRzJyBw
cm9wZXJ0eQ0KKFhFTikgIGludHNwZWM9MCBpbnRsZW49MzYNCihYRU4pICBpbnRzaXplPTQgaW50
bGVuPTM2DQooWEVOKSBkdF9pcnFfbWFwX3JhdzogcGFyPS9hY29ubmVjdEA3MDJjMDAwMC9hZ2lj
QDcwMmY5MDAwLGludHNwZWM9WzB4MDAwMDAwMDAgMHgwMDAwMDAwMi4uLl0sb2ludHNpemU9NA0K
KFhFTikgZHRfaXJxX21hcF9yYXc6IGlwYXI9L2Fjb25uZWN0QDcwMmMwMDAwL2FnaWNANzAyZjkw
MDAsIHNpemU9NA0KKFhFTikgIC0+IGFkZHJzaXplPTINCihYRU4pICAtPiBnb3QgaXQgIQ0KKFhF
TikgaXJxIDggbm90IChkaXJlY3RseSBvciBpbmRpcmVjdGx5KSBjb25uZWN0ZWQgdG8gcHJpbWFy
eWNvbnRyb2xsZXIuIENvbm5lY3RlZCB0byAvYWNvbm5lY3RANzAyYzAwMDAvYWdpY0A3MDJmOTAw
MA0KKFhFTikgRFQ6ICoqIHRyYW5zbGF0aW9uIGZvciBkZXZpY2UgL2Fjb25uZWN0QDcwMmMwMDAw
L2Fkc3AgKioNCihYRU4pIERUOiBidXMgaXMgZGVmYXVsdCAobmE9MiwgbnM9Mikgb24gL2Fjb25u
ZWN0QDcwMmMwMDAwDQooWEVOKSBEVDogdHJhbnNsYXRpbmcgYWRkcmVzczo8Mz4gMDAwMDAwMDA8
Mz4gNzAyZWYwMDA8Mz4NCihYRU4pIERUOiBwYXJlbnQgYnVzIGlzIGRlZmF1bHQgKG5hPTIsIG5z
PTIpIG9uIC8NCihYRU4pIERUOiBlbXB0eSByYW5nZXM7IDE6MSB0cmFuc2xhdGlvbg0KKFhFTikg
RFQ6IHBhcmVudCB0cmFuc2xhdGlvbiBmb3I6PDM+IDAwMDAwMDAwPDM+IDAwMDAwMDAwPDM+DQoo
WEVOKSBEVDogd2l0aCBvZmZzZXQ6IDcwMmVmMDAwDQooWEVOKSBEVDogb25lIGxldmVsIHRyYW5z
bGF0aW9uOjwzPiAwMDAwMDAwMDwzPiA3MDJlZjAwMDwzPg0KKFhFTikgRFQ6IHJlYWNoZWQgcm9v
dCBub2RlDQooWEVOKSAgIC0gTU1JTzogMDA3MDJlZjAwMCAtIDAwNzAyZjAwMDAgUDJNVHlwZT01
DQooWEVOKSBEVDogKiogdHJhbnNsYXRpb24gZm9yIGRldmljZSAvYWNvbm5lY3RANzAyYzAwMDAv
YWRzcCAqKg0KKFhFTikgRFQ6IGJ1cyBpcyBkZWZhdWx0IChuYT0yLCBucz0yKSBvbiAvYWNvbm5l
Y3RANzAyYzAwMDANCihYRU4pIERUOiB0cmFuc2xhdGluZyBhZGRyZXNzOjwzPiAwMDAwMDAwMDwz
PiA3MDJlYzAwMDwzPg0KKFhFTikgRFQ6IHBhcmVudCBidXMgaXMgZGVmYXVsdCAobmE9MiwgbnM9
Mikgb24gLw0KKFhFTikgRFQ6IGVtcHR5IHJhbmdlczsgMToxIHRyYW5zbGF0aW9uDQooWEVOKSBE
VDogcGFyZW50IHRyYW5zbGF0aW9uIGZvcjo8Mz4gMDAwMDAwMDA8Mz4gMDAwMDAwMDA8Mz4NCihY
RU4pIERUOiB3aXRoIG9mZnNldDogNzAyZWMwMDANCihYRU4pIERUOiBvbmUgbGV2ZWwgdHJhbnNs
YXRpb246PDM+IDAwMDAwMDAwPDM+IDcwMmVjMDAwPDM+DQooWEVOKSBEVDogcmVhY2hlZCByb290
IG5vZGUNCihYRU4pICAgLSBNTUlPOiAwMDcwMmVjMDAwIC0gMDA3MDJlZTAwMCBQMk1UeXBlPTUN
CihYRU4pIERUOiAqKiB0cmFuc2xhdGlvbiBmb3IgZGV2aWNlIC9hY29ubmVjdEA3MDJjMDAwMC9h
ZHNwICoqDQooWEVOKSBEVDogYnVzIGlzIGRlZmF1bHQgKG5hPTIsIG5zPTIpIG9uIC9hY29ubmVj
dEA3MDJjMDAwMA0KKFhFTikgRFQ6IHRyYW5zbGF0aW5nIGFkZHJlc3M6PDM+IDAwMDAwMDAwPDM+
IDcwMmVlMDAwPDM+DQooWEVOKSBEVDogcGFyZW50IGJ1cyBpcyBkZWZhdWx0IChuYT0yLCBucz0y
KSBvbiAvDQooWEVOKSBEVDogZW1wdHkgcmFuZ2VzOyAxOjEgdHJhbnNsYXRpb24NCihYRU4pIERU
OiBwYXJlbnQgdHJhbnNsYXRpb24gZm9yOjwzPiAwMDAwMDAwMDwzPiAwMDAwMDAwMDwzPg0KKFhF
TikgRFQ6IHdpdGggb2Zmc2V0OiA3MDJlZTAwMA0KKFhFTikgRFQ6IG9uZSBsZXZlbCB0cmFuc2xh
dGlvbjo8Mz4gMDAwMDAwMDA8Mz4gNzAyZWUwMDA8Mz4NCihYRU4pIERUOiByZWFjaGVkIHJvb3Qg
bm9kZQ0KKFhFTikgICAtIE1NSU86IDAwNzAyZWUwMDAgLSAwMDcwMmVmMDAwIFAyTVR5cGU9NQ0K
KFhFTikgRFQ6ICoqIHRyYW5zbGF0aW9uIGZvciBkZXZpY2UgL2Fjb25uZWN0QDcwMmMwMDAwL2Fk
c3AgKioNCihYRU4pIERUOiBidXMgaXMgZGVmYXVsdCAobmE9MiwgbnM9Mikgb24gL2Fjb25uZWN0
QDcwMmMwMDAwDQooWEVOKSBEVDogdHJhbnNsYXRpbmcgYWRkcmVzczo8Mz4gMDAwMDAwMDA8Mz4g
NzAyZGM4MDA8Mz4NCihYRU4pIERUOiBwYXJlbnQgYnVzIGlzIGRlZmF1bHQgKG5hPTIsIG5zPTIp
IG9uIC8NCihYRU4pIERUOiBlbXB0eSByYW5nZXM7IDE6MSB0cmFuc2xhdGlvbg0KKFhFTikgRFQ6
IHBhcmVudCB0cmFuc2xhdGlvbiBmb3I6PDM+IDAwMDAwMDAwPDM+IDAwMDAwMDAwPDM+DQooWEVO
KSBEVDogd2l0aCBvZmZzZXQ6IDcwMmRjODAwDQooWEVOKSBEVDogb25lIGxldmVsIHRyYW5zbGF0
aW9uOjwzPiAwMDAwMDAwMDwzPiA3MDJkYzgwMDwzPg0KKFhFTikgRFQ6IHJlYWNoZWQgcm9vdCBu
b2RlDQooWEVOKSAgIC0gTU1JTzogMDA3MDJkYzgwMCAtIDAwNzAyZGM4MDAgUDJNVHlwZT01DQoo
WEVOKSBEVDogKiogdHJhbnNsYXRpb24gZm9yIGRldmljZSAvYWNvbm5lY3RANzAyYzAwMDAvYWRz
cCAqKg0KKFhFTikgRFQ6IGJ1cyBpcyBkZWZhdWx0IChuYT0yLCBucz0yKSBvbiAvYWNvbm5lY3RA
NzAyYzAwMDANCihYRU4pIERUOiB0cmFuc2xhdGluZyBhZGRyZXNzOjwzPiAwMDAwMDAwMDwzPiAw
MDAwMDAwMDwzPg0KKFhFTikgRFQ6IHBhcmVudCBidXMgaXMgZGVmYXVsdCAobmE9MiwgbnM9Mikg
b24gLw0KKFhFTikgRFQ6IGVtcHR5IHJhbmdlczsgMToxIHRyYW5zbGF0aW9uDQooWEVOKSBEVDog
cGFyZW50IHRyYW5zbGF0aW9uIGZvcjo8Mz4gMDAwMDAwMDA8Mz4gMDAwMDAwMDA8Mz4NCihYRU4p
IERUOiB3aXRoIG9mZnNldDogMA0KKFhFTikgRFQ6IG9uZSBsZXZlbCB0cmFuc2xhdGlvbjo8Mz4g
MDAwMDAwMDA8Mz4gMDAwMDAwMDA8Mz4NCihYRU4pIERUOiByZWFjaGVkIHJvb3Qgbm9kZQ0KKFhF
TikgICAtIE1NSU86IDAwMDAwMDAwMDAgLSAwMDAwMDAwMDAxIFAyTVR5cGU9NQ0KKFhFTikgRFQ6
ICoqIHRyYW5zbGF0aW9uIGZvciBkZXZpY2UgL2Fjb25uZWN0QDcwMmMwMDAwL2Fkc3AgKioNCihY
RU4pIERUOiBidXMgaXMgZGVmYXVsdCAobmE9MiwgbnM9Mikgb24gL2Fjb25uZWN0QDcwMmMwMDAw
DQooWEVOKSBEVDogdHJhbnNsYXRpbmcgYWRkcmVzczo8Mz4gMDAwMDAwMDA8Mz4gMDEwMDAwMDA8
Mz4NCihYRU4pIERUOiBwYXJlbnQgYnVzIGlzIGRlZmF1bHQgKG5hPTIsIG5zPTIpIG9uIC8NCihY
RU4pIERUOiBlbXB0eSByYW5nZXM7IDE6MSB0cmFuc2xhdGlvbg0KKFhFTikgRFQ6IHBhcmVudCB0
cmFuc2xhdGlvbiBmb3I6PDM+IDAwMDAwMDAwPDM+IDAwMDAwMDAwPDM+DQooWEVOKSBEVDogd2l0
aCBvZmZzZXQ6IDEwMDAwMDANCihYRU4pIERUOiBvbmUgbGV2ZWwgdHJhbnNsYXRpb246PDM+IDAw
MDAwMDAwPDM+IDAxMDAwMDAwPDM+DQooWEVOKSBEVDogcmVhY2hlZCByb290IG5vZGUNCihYRU4p
ICAgLSBNTUlPOiAwMDAxMDAwMDAwIC0gMDA3MDJjMDAwMCBQMk1UeXBlPTUNCihYRU4pIERUOiAq
KiB0cmFuc2xhdGlvbiBmb3IgZGV2aWNlIC9hY29ubmVjdEA3MDJjMDAwMC9hZHNwICoqDQooWEVO
KSBEVDogYnVzIGlzIGRlZmF1bHQgKG5hPTIsIG5zPTIpIG9uIC9hY29ubmVjdEA3MDJjMDAwMA0K
KFhFTikgRFQ6IHRyYW5zbGF0aW5nIGFkZHJlc3M6PDM+IDAwMDAwMDAwPDM+IDcwMzAwMDAwPDM+
DQooWEVOKSBEVDogcGFyZW50IGJ1cyBpcyBkZWZhdWx0IChuYT0yLCBucz0yKSBvbiAvDQooWEVO
KSBEVDogZW1wdHkgcmFuZ2VzOyAxOjEgdHJhbnNsYXRpb24NCihYRU4pIERUOiBwYXJlbnQgdHJh
bnNsYXRpb24gZm9yOjwzPiAwMDAwMDAwMDwzPiAwMDAwMDAwMDwzPg0KKFhFTikgRFQ6IHdpdGgg
b2Zmc2V0OiA3MDMwMDAwMA0KKFhFTikgRFQ6IG9uZSBsZXZlbCB0cmFuc2xhdGlvbjo8Mz4gMDAw
MDAwMDA8Mz4gNzAzMDAwMDA8Mz4NCihYRU4pIERUOiByZWFjaGVkIHJvb3Qgbm9kZQ0KKFhFTikg
ICAtIE1NSU86IDAwNzAzMDAwMDAgLSAwMTAwMDAwMDAwIFAyTVR5cGU9NQ0KKFhFTikgaGFuZGxl
IC9hY29ubmVjdEA3MDJjMDAwMC9hZG1hQDcwMmUyMDAwDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBk
ZXY9L2Fjb25uZWN0QDcwMmMwMDAwL2FkbWFANzAyZTIwMDANCihYRU4pICB1c2luZyAnaW50ZXJy
dXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTg4DQooWEVOKSAgaW50c2l6
ZT00IGludGxlbj04OA0KKFhFTikgZHRfZGV2aWNlX2dldF9yYXdfaXJxOiBkZXY9L2Fjb25uZWN0
QDcwMmMwMDAwL2FkbWFANzAyZTIwMDAsIGluZGV4PTANCihYRU4pICB1c2luZyAnaW50ZXJydXB0
cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTg4DQooWEVOKSAgaW50c2l6ZT00
IGludGxlbj04OA0KKFhFTikgZHRfaXJxX21hcF9yYXc6IHBhcj0vYWNvbm5lY3RANzAyYzAwMDAv
YWdpY0A3MDJmOTAwMCxpbnRzcGVjPVsweDAwMDAwMDAwIDB4MDAwMDAwMTguLi5dLG9pbnRzaXpl
PTQNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBpcGFyPS9hY29ubmVjdEA3MDJjMDAwMC9hZ2ljQDcw
MmY5MDAwLCBzaXplPTQNCihYRU4pICAtPiBhZGRyc2l6ZT0yDQooWEVOKSAgLT4gZ290IGl0ICEN
CihYRU4pIGR0X2RldmljZV9nZXRfcmF3X2lycTogZGV2PS9hY29ubmVjdEA3MDJjMDAwMC9hZG1h
QDcwMmUyMDAwLCBpbmRleD0xDQooWEVOKSAgdXNpbmcgJ2ludGVycnVwdHMnIHByb3BlcnR5DQoo
WEVOKSAgaW50c3BlYz0wIGludGxlbj04OA0KKFhFTikgIGludHNpemU9NCBpbnRsZW49ODgNCihY
RU4pIGR0X2lycV9tYXBfcmF3OiBwYXI9L2Fjb25uZWN0QDcwMmMwMDAwL2FnaWNANzAyZjkwMDAs
aW50c3BlYz1bMHgwMDAwMDAwMCAweDAwMDAwMDE5Li4uXSxvaW50c2l6ZT00DQooWEVOKSBkdF9p
cnFfbWFwX3JhdzogaXBhcj0vYWNvbm5lY3RANzAyYzAwMDAvYWdpY0A3MDJmOTAwMCwgc2l6ZT00
DQooWEVOKSAgLT4gYWRkcnNpemU9Mg0KKFhFTikgIC0+IGdvdCBpdCAhDQooWEVOKSBkdF9kZXZp
Y2VfZ2V0X3Jhd19pcnE6IGRldj0vYWNvbm5lY3RANzAyYzAwMDAvYWRtYUA3MDJlMjAwMCwgaW5k
ZXg9Mg0KKFhFTikgIHVzaW5nICdpbnRlcnJ1cHRzJyBwcm9wZXJ0eQ0KKFhFTikgIGludHNwZWM9
MCBpbnRsZW49ODgNCihYRU4pICBpbnRzaXplPTQgaW50bGVuPTg4DQooWEVOKSBkdF9pcnFfbWFw
X3JhdzogcGFyPS9hY29ubmVjdEA3MDJjMDAwMC9hZ2ljQDcwMmY5MDAwLGludHNwZWM9WzB4MDAw
MDAwMDAgMHgwMDAwMDAxYS4uLl0sb2ludHNpemU9NA0KKFhFTikgZHRfaXJxX21hcF9yYXc6IGlw
YXI9L2Fjb25uZWN0QDcwMmMwMDAwL2FnaWNANzAyZjkwMDAsIHNpemU9NA0KKFhFTikgIC0+IGFk
ZHJzaXplPTINCihYRU4pICAtPiBnb3QgaXQgIQ0KKFhFTikgZHRfZGV2aWNlX2dldF9yYXdfaXJx
OiBkZXY9L2Fjb25uZWN0QDcwMmMwMDAwL2FkbWFANzAyZTIwMDAsIGluZGV4PTMNCihYRU4pICB1
c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTg4DQoo
WEVOKSAgaW50c2l6ZT00IGludGxlbj04OA0KKFhFTikgZHRfaXJxX21hcF9yYXc6IHBhcj0vYWNv
bm5lY3RANzAyYzAwMDAvYWdpY0A3MDJmOTAwMCxpbnRzcGVjPVsweDAwMDAwMDAwIDB4MDAwMDAw
MWIuLi5dLG9pbnRzaXplPTQNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBpcGFyPS9hY29ubmVjdEA3
MDJjMDAwMC9hZ2ljQDcwMmY5MDAwLCBzaXplPTQNCihYRU4pICAtPiBhZGRyc2l6ZT0yDQooWEVO
KSAgLT4gZ290IGl0ICENCihYRU4pIGR0X2RldmljZV9nZXRfcmF3X2lycTogZGV2PS9hY29ubmVj
dEA3MDJjMDAwMC9hZG1hQDcwMmUyMDAwLCBpbmRleD00DQooWEVOKSAgdXNpbmcgJ2ludGVycnVw
dHMnIHByb3BlcnR5DQooWEVOKSAgaW50c3BlYz0wIGludGxlbj04OA0KKFhFTikgIGludHNpemU9
NCBpbnRsZW49ODgNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBwYXI9L2Fjb25uZWN0QDcwMmMwMDAw
L2FnaWNANzAyZjkwMDAsaW50c3BlYz1bMHgwMDAwMDAwMCAweDAwMDAwMDFjLi4uXSxvaW50c2l6
ZT00DQooWEVOKSBkdF9pcnFfbWFwX3JhdzogaXBhcj0vYWNvbm5lY3RANzAyYzAwMDAvYWdpY0A3
MDJmOTAwMCwgc2l6ZT00DQooWEVOKSAgLT4gYWRkcnNpemU9Mg0KKFhFTikgIC0+IGdvdCBpdCAh
DQooWEVOKSBkdF9kZXZpY2VfZ2V0X3Jhd19pcnE6IGRldj0vYWNvbm5lY3RANzAyYzAwMDAvYWRt
YUA3MDJlMjAwMCwgaW5kZXg9NQ0KKFhFTikgIHVzaW5nICdpbnRlcnJ1cHRzJyBwcm9wZXJ0eQ0K
KFhFTikgIGludHNwZWM9MCBpbnRsZW49ODgNCihYRU4pICBpbnRzaXplPTQgaW50bGVuPTg4DQoo
WEVOKSBkdF9pcnFfbWFwX3JhdzogcGFyPS9hY29ubmVjdEA3MDJjMDAwMC9hZ2ljQDcwMmY5MDAw
LGludHNwZWM9WzB4MDAwMDAwMDAgMHgwMDAwMDAxZC4uLl0sb2ludHNpemU9NA0KKFhFTikgZHRf
aXJxX21hcF9yYXc6IGlwYXI9L2Fjb25uZWN0QDcwMmMwMDAwL2FnaWNANzAyZjkwMDAsIHNpemU9
NA0KKFhFTikgIC0+IGFkZHJzaXplPTINCihYRU4pICAtPiBnb3QgaXQgIQ0KKFhFTikgZHRfZGV2
aWNlX2dldF9yYXdfaXJxOiBkZXY9L2Fjb25uZWN0QDcwMmMwMDAwL2FkbWFANzAyZTIwMDAsIGlu
ZGV4PTYNCihYRU4pICB1c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVj
PTAgaW50bGVuPTg4DQooWEVOKSAgaW50c2l6ZT00IGludGxlbj04OA0KKFhFTikgZHRfaXJxX21h
cF9yYXc6IHBhcj0vYWNvbm5lY3RANzAyYzAwMDAvYWdpY0A3MDJmOTAwMCxpbnRzcGVjPVsweDAw
MDAwMDAwIDB4MDAwMDAwMWUuLi5dLG9pbnRzaXplPTQNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBp
cGFyPS9hY29ubmVjdEA3MDJjMDAwMC9hZ2ljQDcwMmY5MDAwLCBzaXplPTQNCihYRU4pICAtPiBh
ZGRyc2l6ZT0yDQooWEVOKSAgLT4gZ290IGl0ICENCihYRU4pIGR0X2RldmljZV9nZXRfcmF3X2ly
cTogZGV2PS9hY29ubmVjdEA3MDJjMDAwMC9hZG1hQDcwMmUyMDAwLCBpbmRleD03DQooWEVOKSAg
dXNpbmcgJ2ludGVycnVwdHMnIHByb3BlcnR5DQooWEVOKSAgaW50c3BlYz0wIGludGxlbj04OA0K
KFhFTikgIGludHNpemU9NCBpbnRsZW49ODgNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBwYXI9L2Fj
b25uZWN0QDcwMmMwMDAwL2FnaWNANzAyZjkwMDAsaW50c3BlYz1bMHgwMDAwMDAwMCAweDAwMDAw
MDFmLi4uXSxvaW50c2l6ZT00DQooWEVOKSBkdF9pcnFfbWFwX3JhdzogaXBhcj0vYWNvbm5lY3RA
NzAyYzAwMDAvYWdpY0A3MDJmOTAwMCwgc2l6ZT00DQooWEVOKSAgLT4gYWRkcnNpemU9Mg0KKFhF
TikgIC0+IGdvdCBpdCAhDQooWEVOKSBkdF9kZXZpY2VfZ2V0X3Jhd19pcnE6IGRldj0vYWNvbm5l
Y3RANzAyYzAwMDAvYWRtYUA3MDJlMjAwMCwgaW5kZXg9OA0KKFhFTikgIHVzaW5nICdpbnRlcnJ1
cHRzJyBwcm9wZXJ0eQ0KKFhFTikgIGludHNwZWM9MCBpbnRsZW49ODgNCihYRU4pICBpbnRzaXpl
PTQgaW50bGVuPTg4DQooWEVOKSBkdF9pcnFfbWFwX3JhdzogcGFyPS9hY29ubmVjdEA3MDJjMDAw
MC9hZ2ljQDcwMmY5MDAwLGludHNwZWM9WzB4MDAwMDAwMDAgMHgwMDAwMDAyMC4uLl0sb2ludHNp
emU9NA0KKFhFTikgZHRfaXJxX21hcF9yYXc6IGlwYXI9L2Fjb25uZWN0QDcwMmMwMDAwL2FnaWNA
NzAyZjkwMDAsIHNpemU9NA0KKFhFTikgIC0+IGFkZHJzaXplPTINCihYRU4pICAtPiBnb3QgaXQg
IQ0KKFhFTikgZHRfZGV2aWNlX2dldF9yYXdfaXJxOiBkZXY9L2Fjb25uZWN0QDcwMmMwMDAwL2Fk
bWFANzAyZTIwMDAsIGluZGV4PTkNCihYRU4pICB1c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkN
CihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTg4DQooWEVOKSAgaW50c2l6ZT00IGludGxlbj04OA0K
KFhFTikgZHRfaXJxX21hcF9yYXc6IHBhcj0vYWNvbm5lY3RANzAyYzAwMDAvYWdpY0A3MDJmOTAw
MCxpbnRzcGVjPVsweDAwMDAwMDAwIDB4MDAwMDAwMjEuLi5dLG9pbnRzaXplPTQNCihYRU4pIGR0
X2lycV9tYXBfcmF3OiBpcGFyPS9hY29ubmVjdEA3MDJjMDAwMC9hZ2ljQDcwMmY5MDAwLCBzaXpl
PTQNCihYRU4pICAtPiBhZGRyc2l6ZT0yDQooWEVOKSAgLT4gZ290IGl0ICENCihYRU4pIGR0X2Rl
dmljZV9nZXRfcmF3X2lycTogZGV2PS9hY29ubmVjdEA3MDJjMDAwMC9hZG1hQDcwMmUyMDAwLCBp
bmRleD0xMA0KKFhFTikgIHVzaW5nICdpbnRlcnJ1cHRzJyBwcm9wZXJ0eQ0KKFhFTikgIGludHNw
ZWM9MCBpbnRsZW49ODgNCihYRU4pICBpbnRzaXplPTQgaW50bGVuPTg4DQooWEVOKSBkdF9pcnFf
bWFwX3JhdzogcGFyPS9hY29ubmVjdEA3MDJjMDAwMC9hZ2ljQDcwMmY5MDAwLGludHNwZWM9WzB4
MDAwMDAwMDAgMHgwMDAwMDAyMi4uLl0sb2ludHNpemU9NA0KKFhFTikgZHRfaXJxX21hcF9yYXc6
IGlwYXI9L2Fjb25uZWN0QDcwMmMwMDAwL2FnaWNANzAyZjkwMDAsIHNpemU9NA0KKFhFTikgIC0+
IGFkZHJzaXplPTINCihYRU4pICAtPiBnb3QgaXQgIQ0KKFhFTikgZHRfZGV2aWNlX2dldF9yYXdf
aXJxOiBkZXY9L2Fjb25uZWN0QDcwMmMwMDAwL2FkbWFANzAyZTIwMDAsIGluZGV4PTExDQooWEVO
KSAgdXNpbmcgJ2ludGVycnVwdHMnIHByb3BlcnR5DQooWEVOKSAgaW50c3BlYz0wIGludGxlbj04
OA0KKFhFTikgIGludHNpemU9NCBpbnRsZW49ODgNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBwYXI9
L2Fjb25uZWN0QDcwMmMwMDAwL2FnaWNANzAyZjkwMDAsaW50c3BlYz1bMHgwMDAwMDAwMCAweDAw
MDAwMDIzLi4uXSxvaW50c2l6ZT00DQooWEVOKSBkdF9pcnFfbWFwX3JhdzogaXBhcj0vYWNvbm5l
Y3RANzAyYzAwMDAvYWdpY0A3MDJmOTAwMCwgc2l6ZT00DQooWEVOKSAgLT4gYWRkcnNpemU9Mg0K
KFhFTikgIC0+IGdvdCBpdCAhDQooWEVOKSBkdF9kZXZpY2VfZ2V0X3Jhd19pcnE6IGRldj0vYWNv
bm5lY3RANzAyYzAwMDAvYWRtYUA3MDJlMjAwMCwgaW5kZXg9MTINCihYRU4pICB1c2luZyAnaW50
ZXJydXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTg4DQooWEVOKSAgaW50
c2l6ZT00IGludGxlbj04OA0KKFhFTikgZHRfaXJxX21hcF9yYXc6IHBhcj0vYWNvbm5lY3RANzAy
YzAwMDAvYWdpY0A3MDJmOTAwMCxpbnRzcGVjPVsweDAwMDAwMDAwIDB4MDAwMDAwMjQuLi5dLG9p
bnRzaXplPTQNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBpcGFyPS9hY29ubmVjdEA3MDJjMDAwMC9h
Z2ljQDcwMmY5MDAwLCBzaXplPTQNCihYRU4pICAtPiBhZGRyc2l6ZT0yDQooWEVOKSAgLT4gZ290
IGl0ICENCihYRU4pIGR0X2RldmljZV9nZXRfcmF3X2lycTogZGV2PS9hY29ubmVjdEA3MDJjMDAw
MC9hZG1hQDcwMmUyMDAwLCBpbmRleD0xMw0KKFhFTikgIHVzaW5nICdpbnRlcnJ1cHRzJyBwcm9w
ZXJ0eQ0KKFhFTikgIGludHNwZWM9MCBpbnRsZW49ODgNCihYRU4pICBpbnRzaXplPTQgaW50bGVu
PTg4DQooWEVOKSBkdF9pcnFfbWFwX3JhdzogcGFyPS9hY29ubmVjdEA3MDJjMDAwMC9hZ2ljQDcw
MmY5MDAwLGludHNwZWM9WzB4MDAwMDAwMDAgMHgwMDAwMDAyNS4uLl0sb2ludHNpemU9NA0KKFhF
TikgZHRfaXJxX21hcF9yYXc6IGlwYXI9L2Fjb25uZWN0QDcwMmMwMDAwL2FnaWNANzAyZjkwMDAs
IHNpemU9NA0KKFhFTikgIC0+IGFkZHJzaXplPTINCihYRU4pICAtPiBnb3QgaXQgIQ0KKFhFTikg
ZHRfZGV2aWNlX2dldF9yYXdfaXJxOiBkZXY9L2Fjb25uZWN0QDcwMmMwMDAwL2FkbWFANzAyZTIw
MDAsIGluZGV4PTE0DQooWEVOKSAgdXNpbmcgJ2ludGVycnVwdHMnIHByb3BlcnR5DQooWEVOKSAg
aW50c3BlYz0wIGludGxlbj04OA0KKFhFTikgIGludHNpemU9NCBpbnRsZW49ODgNCihYRU4pIGR0
X2lycV9tYXBfcmF3OiBwYXI9L2Fjb25uZWN0QDcwMmMwMDAwL2FnaWNANzAyZjkwMDAsaW50c3Bl
Yz1bMHgwMDAwMDAwMCAweDAwMDAwMDI2Li4uXSxvaW50c2l6ZT00DQooWEVOKSBkdF9pcnFfbWFw
X3JhdzogaXBhcj0vYWNvbm5lY3RANzAyYzAwMDAvYWdpY0A3MDJmOTAwMCwgc2l6ZT00DQooWEVO
KSAgLT4gYWRkcnNpemU9Mg0KKFhFTikgIC0+IGdvdCBpdCAhDQooWEVOKSBkdF9kZXZpY2VfZ2V0
X3Jhd19pcnE6IGRldj0vYWNvbm5lY3RANzAyYzAwMDAvYWRtYUA3MDJlMjAwMCwgaW5kZXg9MTUN
CihYRU4pICB1c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50
bGVuPTg4DQooWEVOKSAgaW50c2l6ZT00IGludGxlbj04OA0KKFhFTikgZHRfaXJxX21hcF9yYXc6
IHBhcj0vYWNvbm5lY3RANzAyYzAwMDAvYWdpY0A3MDJmOTAwMCxpbnRzcGVjPVsweDAwMDAwMDAw
IDB4MDAwMDAwMjcuLi5dLG9pbnRzaXplPTQNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBpcGFyPS9h
Y29ubmVjdEA3MDJjMDAwMC9hZ2ljQDcwMmY5MDAwLCBzaXplPTQNCihYRU4pICAtPiBhZGRyc2l6
ZT0yDQooWEVOKSAgLT4gZ290IGl0ICENCihYRU4pIGR0X2RldmljZV9nZXRfcmF3X2lycTogZGV2
PS9hY29ubmVjdEA3MDJjMDAwMC9hZG1hQDcwMmUyMDAwLCBpbmRleD0xNg0KKFhFTikgIHVzaW5n
ICdpbnRlcnJ1cHRzJyBwcm9wZXJ0eQ0KKFhFTikgIGludHNwZWM9MCBpbnRsZW49ODgNCihYRU4p
ICBpbnRzaXplPTQgaW50bGVuPTg4DQooWEVOKSBkdF9pcnFfbWFwX3JhdzogcGFyPS9hY29ubmVj
dEA3MDJjMDAwMC9hZ2ljQDcwMmY5MDAwLGludHNwZWM9WzB4MDAwMDAwMDAgMHgwMDAwMDAyOC4u
Ll0sb2ludHNpemU9NA0KKFhFTikgZHRfaXJxX21hcF9yYXc6IGlwYXI9L2Fjb25uZWN0QDcwMmMw
MDAwL2FnaWNANzAyZjkwMDAsIHNpemU9NA0KKFhFTikgIC0+IGFkZHJzaXplPTINCihYRU4pICAt
PiBnb3QgaXQgIQ0KKFhFTikgZHRfZGV2aWNlX2dldF9yYXdfaXJxOiBkZXY9L2Fjb25uZWN0QDcw
MmMwMDAwL2FkbWFANzAyZTIwMDAsIGluZGV4PTE3DQooWEVOKSAgdXNpbmcgJ2ludGVycnVwdHMn
IHByb3BlcnR5DQooWEVOKSAgaW50c3BlYz0wIGludGxlbj04OA0KKFhFTikgIGludHNpemU9NCBp
bnRsZW49ODgNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBwYXI9L2Fjb25uZWN0QDcwMmMwMDAwL2Fn
aWNANzAyZjkwMDAsaW50c3BlYz1bMHgwMDAwMDAwMCAweDAwMDAwMDI5Li4uXSxvaW50c2l6ZT00
DQooWEVOKSBkdF9pcnFfbWFwX3JhdzogaXBhcj0vYWNvbm5lY3RANzAyYzAwMDAvYWdpY0A3MDJm
OTAwMCwgc2l6ZT00DQooWEVOKSAgLT4gYWRkcnNpemU9Mg0KKFhFTikgIC0+IGdvdCBpdCAhDQoo
WEVOKSBkdF9kZXZpY2VfZ2V0X3Jhd19pcnE6IGRldj0vYWNvbm5lY3RANzAyYzAwMDAvYWRtYUA3
MDJlMjAwMCwgaW5kZXg9MTgNCihYRU4pICB1c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihY
RU4pICBpbnRzcGVjPTAgaW50bGVuPTg4DQooWEVOKSAgaW50c2l6ZT00IGludGxlbj04OA0KKFhF
TikgZHRfaXJxX21hcF9yYXc6IHBhcj0vYWNvbm5lY3RANzAyYzAwMDAvYWdpY0A3MDJmOTAwMCxp
bnRzcGVjPVsweDAwMDAwMDAwIDB4MDAwMDAwMmEuLi5dLG9pbnRzaXplPTQNCihYRU4pIGR0X2ly
cV9tYXBfcmF3OiBpcGFyPS9hY29ubmVjdEA3MDJjMDAwMC9hZ2ljQDcwMmY5MDAwLCBzaXplPTQN
CihYRU4pICAtPiBhZGRyc2l6ZT0yDQooWEVOKSAgLT4gZ290IGl0ICENCihYRU4pIGR0X2Rldmlj
ZV9nZXRfcmF3X2lycTogZGV2PS9hY29ubmVjdEA3MDJjMDAwMC9hZG1hQDcwMmUyMDAwLCBpbmRl
eD0xOQ0KKFhFTikgIHVzaW5nICdpbnRlcnJ1cHRzJyBwcm9wZXJ0eQ0KKFhFTikgIGludHNwZWM9
MCBpbnRsZW49ODgNCihYRU4pICBpbnRzaXplPTQgaW50bGVuPTg4DQooWEVOKSBkdF9pcnFfbWFw
X3JhdzogcGFyPS9hY29ubmVjdEA3MDJjMDAwMC9hZ2ljQDcwMmY5MDAwLGludHNwZWM9WzB4MDAw
MDAwMDAgMHgwMDAwMDAyYi4uLl0sb2ludHNpemU9NA0KKFhFTikgZHRfaXJxX21hcF9yYXc6IGlw
YXI9L2Fjb25uZWN0QDcwMmMwMDAwL2FnaWNANzAyZjkwMDAsIHNpemU9NA0KKFhFTikgIC0+IGFk
ZHJzaXplPTINCihYRU4pICAtPiBnb3QgaXQgIQ0KKFhFTikgZHRfZGV2aWNlX2dldF9yYXdfaXJx
OiBkZXY9L2Fjb25uZWN0QDcwMmMwMDAwL2FkbWFANzAyZTIwMDAsIGluZGV4PTIwDQooWEVOKSAg
dXNpbmcgJ2ludGVycnVwdHMnIHByb3BlcnR5DQooWEVOKSAgaW50c3BlYz0wIGludGxlbj04OA0K
KFhFTikgIGludHNpemU9NCBpbnRsZW49ODgNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBwYXI9L2Fj
b25uZWN0QDcwMmMwMDAwL2FnaWNANzAyZjkwMDAsaW50c3BlYz1bMHgwMDAwMDAwMCAweDAwMDAw
MDJjLi4uXSxvaW50c2l6ZT00DQooWEVOKSBkdF9pcnFfbWFwX3JhdzogaXBhcj0vYWNvbm5lY3RA
NzAyYzAwMDAvYWdpY0A3MDJmOTAwMCwgc2l6ZT00DQooWEVOKSAgLT4gYWRkcnNpemU9Mg0KKFhF
TikgIC0+IGdvdCBpdCAhDQooWEVOKSBkdF9kZXZpY2VfZ2V0X3Jhd19pcnE6IGRldj0vYWNvbm5l
Y3RANzAyYzAwMDAvYWRtYUA3MDJlMjAwMCwgaW5kZXg9MjENCihYRU4pICB1c2luZyAnaW50ZXJy
dXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTg4DQooWEVOKSAgaW50c2l6
ZT00IGludGxlbj04OA0KKFhFTikgZHRfaXJxX21hcF9yYXc6IHBhcj0vYWNvbm5lY3RANzAyYzAw
MDAvYWdpY0A3MDJmOTAwMCxpbnRzcGVjPVsweDAwMDAwMDAwIDB4MDAwMDAwMmQuLi5dLG9pbnRz
aXplPTQNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBpcGFyPS9hY29ubmVjdEA3MDJjMDAwMC9hZ2lj
QDcwMmY5MDAwLCBzaXplPTQNCihYRU4pICAtPiBhZGRyc2l6ZT0yDQooWEVOKSAgLT4gZ290IGl0
ICENCihYRU4pIC9hY29ubmVjdEA3MDJjMDAwMC9hZG1hQDcwMmUyMDAwIHBhc3N0aHJvdWdoID0g
MSBuYWRkciA9IDINCihYRU4pIENoZWNrIGlmIC9hY29ubmVjdEA3MDJjMDAwMC9hZG1hQDcwMmUy
MDAwIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjog
ZGV2PS9hY29ubmVjdEA3MDJjMDAwMC9hZG1hQDcwMmUyMDAwDQooWEVOKSAgdXNpbmcgJ2ludGVy
cnVwdHMnIHByb3BlcnR5DQooWEVOKSAgaW50c3BlYz0wIGludGxlbj04OA0KKFhFTikgIGludHNp
emU9NCBpbnRsZW49ODgNCihYRU4pIGR0X2RldmljZV9nZXRfcmF3X2lycTogZGV2PS9hY29ubmVj
dEA3MDJjMDAwMC9hZG1hQDcwMmUyMDAwLCBpbmRleD0wDQooWEVOKSAgdXNpbmcgJ2ludGVycnVw
dHMnIHByb3BlcnR5DQooWEVOKSAgaW50c3BlYz0wIGludGxlbj04OA0KKFhFTikgIGludHNpemU9
NCBpbnRsZW49ODgNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBwYXI9L2Fjb25uZWN0QDcwMmMwMDAw
L2FnaWNANzAyZjkwMDAsaW50c3BlYz1bMHgwMDAwMDAwMCAweDAwMDAwMDE4Li4uXSxvaW50c2l6
ZT00DQooWEVOKSBkdF9pcnFfbWFwX3JhdzogaXBhcj0vYWNvbm5lY3RANzAyYzAwMDAvYWdpY0A3
MDJmOTAwMCwgc2l6ZT00DQooWEVOKSAgLT4gYWRkcnNpemU9Mg0KKFhFTikgIC0+IGdvdCBpdCAh
DQooWEVOKSBpcnEgMCBub3QgKGRpcmVjdGx5IG9yIGluZGlyZWN0bHkpIGNvbm5lY3RlZCB0byBw
cmltYXJ5Y29udHJvbGxlci4gQ29ubmVjdGVkIHRvIC9hY29ubmVjdEA3MDJjMDAwMC9hZ2ljQDcw
MmY5MDAwDQooWEVOKSBkdF9kZXZpY2VfZ2V0X3Jhd19pcnE6IGRldj0vYWNvbm5lY3RANzAyYzAw
MDAvYWRtYUA3MDJlMjAwMCwgaW5kZXg9MQ0KKFhFTikgIHVzaW5nICdpbnRlcnJ1cHRzJyBwcm9w
ZXJ0eQ0KKFhFTikgIGludHNwZWM9MCBpbnRsZW49ODgNCihYRU4pICBpbnRzaXplPTQgaW50bGVu
PTg4DQooWEVOKSBkdF9pcnFfbWFwX3JhdzogcGFyPS9hY29ubmVjdEA3MDJjMDAwMC9hZ2ljQDcw
MmY5MDAwLGludHNwZWM9WzB4MDAwMDAwMDAgMHgwMDAwMDAxOS4uLl0sb2ludHNpemU9NA0KKFhF
TikgZHRfaXJxX21hcF9yYXc6IGlwYXI9L2Fjb25uZWN0QDcwMmMwMDAwL2FnaWNANzAyZjkwMDAs
IHNpemU9NA0KKFhFTikgIC0+IGFkZHJzaXplPTINCihYRU4pICAtPiBnb3QgaXQgIQ0KKFhFTikg
aXJxIDEgbm90IChkaXJlY3RseSBvciBpbmRpcmVjdGx5KSBjb25uZWN0ZWQgdG8gcHJpbWFyeWNv
bnRyb2xsZXIuIENvbm5lY3RlZCB0byAvYWNvbm5lY3RANzAyYzAwMDAvYWdpY0A3MDJmOTAwMA0K
KFhFTikgZHRfZGV2aWNlX2dldF9yYXdfaXJxOiBkZXY9L2Fjb25uZWN0QDcwMmMwMDAwL2FkbWFA
NzAyZTIwMDAsIGluZGV4PTINCihYRU4pICB1c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihY
RU4pICBpbnRzcGVjPTAgaW50bGVuPTg4DQooWEVOKSAgaW50c2l6ZT00IGludGxlbj04OA0KKFhF
TikgZHRfaXJxX21hcF9yYXc6IHBhcj0vYWNvbm5lY3RANzAyYzAwMDAvYWdpY0A3MDJmOTAwMCxp
bnRzcGVjPVsweDAwMDAwMDAwIDB4MDAwMDAwMWEuLi5dLG9pbnRzaXplPTQNCihYRU4pIGR0X2ly
cV9tYXBfcmF3OiBpcGFyPS9hY29ubmVjdEA3MDJjMDAwMC9hZ2ljQDcwMmY5MDAwLCBzaXplPTQN
CihYRU4pICAtPiBhZGRyc2l6ZT0yDQooWEVOKSAgLT4gZ290IGl0ICENCihYRU4pIGlycSAyIG5v
dCAoZGlyZWN0bHkgb3IgaW5kaXJlY3RseSkgY29ubmVjdGVkIHRvIHByaW1hcnljb250cm9sbGVy
LiBDb25uZWN0ZWQgdG8gL2Fjb25uZWN0QDcwMmMwMDAwL2FnaWNANzAyZjkwMDANCihYRU4pIGR0
X2RldmljZV9nZXRfcmF3X2lycTogZGV2PS9hY29ubmVjdEA3MDJjMDAwMC9hZG1hQDcwMmUyMDAw
LCBpbmRleD0zDQooWEVOKSAgdXNpbmcgJ2ludGVycnVwdHMnIHByb3BlcnR5DQooWEVOKSAgaW50
c3BlYz0wIGludGxlbj04OA0KKFhFTikgIGludHNpemU9NCBpbnRsZW49ODgNCihYRU4pIGR0X2ly
cV9tYXBfcmF3OiBwYXI9L2Fjb25uZWN0QDcwMmMwMDAwL2FnaWNANzAyZjkwMDAsaW50c3BlYz1b
MHgwMDAwMDAwMCAweDAwMDAwMDFiLi4uXSxvaW50c2l6ZT00DQooWEVOKSBkdF9pcnFfbWFwX3Jh
dzogaXBhcj0vYWNvbm5lY3RANzAyYzAwMDAvYWdpY0A3MDJmOTAwMCwgc2l6ZT00DQooWEVOKSAg
LT4gYWRkcnNpemU9Mg0KKFhFTikgIC0+IGdvdCBpdCAhDQooWEVOKSBpcnEgMyBub3QgKGRpcmVj
dGx5IG9yIGluZGlyZWN0bHkpIGNvbm5lY3RlZCB0byBwcmltYXJ5Y29udHJvbGxlci4gQ29ubmVj
dGVkIHRvIC9hY29ubmVjdEA3MDJjMDAwMC9hZ2ljQDcwMmY5MDAwDQooWEVOKSBkdF9kZXZpY2Vf
Z2V0X3Jhd19pcnE6IGRldj0vYWNvbm5lY3RANzAyYzAwMDAvYWRtYUA3MDJlMjAwMCwgaW5kZXg9
NA0KKFhFTikgIHVzaW5nICdpbnRlcnJ1cHRzJyBwcm9wZXJ0eQ0KKFhFTikgIGludHNwZWM9MCBp
bnRsZW49ODgNCihYRU4pICBpbnRzaXplPTQgaW50bGVuPTg4DQooWEVOKSBkdF9pcnFfbWFwX3Jh
dzogcGFyPS9hY29ubmVjdEA3MDJjMDAwMC9hZ2ljQDcwMmY5MDAwLGludHNwZWM9WzB4MDAwMDAw
MDAgMHgwMDAwMDAxYy4uLl0sb2ludHNpemU9NA0KKFhFTikgZHRfaXJxX21hcF9yYXc6IGlwYXI9
L2Fjb25uZWN0QDcwMmMwMDAwL2FnaWNANzAyZjkwMDAsIHNpemU9NA0KKFhFTikgIC0+IGFkZHJz
aXplPTINCihYRU4pICAtPiBnb3QgaXQgIQ0KKFhFTikgaXJxIDQgbm90IChkaXJlY3RseSBvciBp
bmRpcmVjdGx5KSBjb25uZWN0ZWQgdG8gcHJpbWFyeWNvbnRyb2xsZXIuIENvbm5lY3RlZCB0byAv
YWNvbm5lY3RANzAyYzAwMDAvYWdpY0A3MDJmOTAwMA0KKFhFTikgZHRfZGV2aWNlX2dldF9yYXdf
aXJxOiBkZXY9L2Fjb25uZWN0QDcwMmMwMDAwL2FkbWFANzAyZTIwMDAsIGluZGV4PTUNCihYRU4p
ICB1c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTg4
DQooWEVOKSAgaW50c2l6ZT00IGludGxlbj04OA0KKFhFTikgZHRfaXJxX21hcF9yYXc6IHBhcj0v
YWNvbm5lY3RANzAyYzAwMDAvYWdpY0A3MDJmOTAwMCxpbnRzcGVjPVsweDAwMDAwMDAwIDB4MDAw
MDAwMWQuLi5dLG9pbnRzaXplPTQNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBpcGFyPS9hY29ubmVj
dEA3MDJjMDAwMC9hZ2ljQDcwMmY5MDAwLCBzaXplPTQNCihYRU4pICAtPiBhZGRyc2l6ZT0yDQoo
WEVOKSAgLT4gZ290IGl0ICENCihYRU4pIGlycSA1IG5vdCAoZGlyZWN0bHkgb3IgaW5kaXJlY3Rs
eSkgY29ubmVjdGVkIHRvIHByaW1hcnljb250cm9sbGVyLiBDb25uZWN0ZWQgdG8gL2Fjb25uZWN0
QDcwMmMwMDAwL2FnaWNANzAyZjkwMDANCihYRU4pIGR0X2RldmljZV9nZXRfcmF3X2lycTogZGV2
PS9hY29ubmVjdEA3MDJjMDAwMC9hZG1hQDcwMmUyMDAwLCBpbmRleD02DQooWEVOKSAgdXNpbmcg
J2ludGVycnVwdHMnIHByb3BlcnR5DQooWEVOKSAgaW50c3BlYz0wIGludGxlbj04OA0KKFhFTikg
IGludHNpemU9NCBpbnRsZW49ODgNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBwYXI9L2Fjb25uZWN0
QDcwMmMwMDAwL2FnaWNANzAyZjkwMDAsaW50c3BlYz1bMHgwMDAwMDAwMCAweDAwMDAwMDFlLi4u
XSxvaW50c2l6ZT00DQooWEVOKSBkdF9pcnFfbWFwX3JhdzogaXBhcj0vYWNvbm5lY3RANzAyYzAw
MDAvYWdpY0A3MDJmOTAwMCwgc2l6ZT00DQooWEVOKSAgLT4gYWRkcnNpemU9Mg0KKFhFTikgIC0+
IGdvdCBpdCAhDQooWEVOKSBpcnEgNiBub3QgKGRpcmVjdGx5IG9yIGluZGlyZWN0bHkpIGNvbm5l
Y3RlZCB0byBwcmltYXJ5Y29udHJvbGxlci4gQ29ubmVjdGVkIHRvIC9hY29ubmVjdEA3MDJjMDAw
MC9hZ2ljQDcwMmY5MDAwDQooWEVOKSBkdF9kZXZpY2VfZ2V0X3Jhd19pcnE6IGRldj0vYWNvbm5l
Y3RANzAyYzAwMDAvYWRtYUA3MDJlMjAwMCwgaW5kZXg9Nw0KKFhFTikgIHVzaW5nICdpbnRlcnJ1
cHRzJyBwcm9wZXJ0eQ0KKFhFTikgIGludHNwZWM9MCBpbnRsZW49ODgNCihYRU4pICBpbnRzaXpl
PTQgaW50bGVuPTg4DQooWEVOKSBkdF9pcnFfbWFwX3JhdzogcGFyPS9hY29ubmVjdEA3MDJjMDAw
MC9hZ2ljQDcwMmY5MDAwLGludHNwZWM9WzB4MDAwMDAwMDAgMHgwMDAwMDAxZi4uLl0sb2ludHNp
emU9NA0KKFhFTikgZHRfaXJxX21hcF9yYXc6IGlwYXI9L2Fjb25uZWN0QDcwMmMwMDAwL2FnaWNA
NzAyZjkwMDAsIHNpemU9NA0KKFhFTikgIC0+IGFkZHJzaXplPTINCihYRU4pICAtPiBnb3QgaXQg
IQ0KKFhFTikgaXJxIDcgbm90IChkaXJlY3RseSBvciBpbmRpcmVjdGx5KSBjb25uZWN0ZWQgdG8g
cHJpbWFyeWNvbnRyb2xsZXIuIENvbm5lY3RlZCB0byAvYWNvbm5lY3RANzAyYzAwMDAvYWdpY0A3
MDJmOTAwMA0KKFhFTikgZHRfZGV2aWNlX2dldF9yYXdfaXJxOiBkZXY9L2Fjb25uZWN0QDcwMmMw
MDAwL2FkbWFANzAyZTIwMDAsIGluZGV4PTgNCihYRU4pICB1c2luZyAnaW50ZXJydXB0cycgcHJv
cGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTg4DQooWEVOKSAgaW50c2l6ZT00IGludGxl
bj04OA0KKFhFTikgZHRfaXJxX21hcF9yYXc6IHBhcj0vYWNvbm5lY3RANzAyYzAwMDAvYWdpY0A3
MDJmOTAwMCxpbnRzcGVjPVsweDAwMDAwMDAwIDB4MDAwMDAwMjAuLi5dLG9pbnRzaXplPTQNCihY
RU4pIGR0X2lycV9tYXBfcmF3OiBpcGFyPS9hY29ubmVjdEA3MDJjMDAwMC9hZ2ljQDcwMmY5MDAw
LCBzaXplPTQNCihYRU4pICAtPiBhZGRyc2l6ZT0yDQooWEVOKSAgLT4gZ290IGl0ICENCihYRU4p
IGlycSA4IG5vdCAoZGlyZWN0bHkgb3IgaW5kaXJlY3RseSkgY29ubmVjdGVkIHRvIHByaW1hcnlj
b250cm9sbGVyLiBDb25uZWN0ZWQgdG8gL2Fjb25uZWN0QDcwMmMwMDAwL2FnaWNANzAyZjkwMDAN
CihYRU4pIGR0X2RldmljZV9nZXRfcmF3X2lycTogZGV2PS9hY29ubmVjdEA3MDJjMDAwMC9hZG1h
QDcwMmUyMDAwLCBpbmRleD05DQooWEVOKSAgdXNpbmcgJ2ludGVycnVwdHMnIHByb3BlcnR5DQoo
WEVOKSAgaW50c3BlYz0wIGludGxlbj04OA0KKFhFTikgIGludHNpemU9NCBpbnRsZW49ODgNCihY
RU4pIGR0X2lycV9tYXBfcmF3OiBwYXI9L2Fjb25uZWN0QDcwMmMwMDAwL2FnaWNANzAyZjkwMDAs
aW50c3BlYz1bMHgwMDAwMDAwMCAweDAwMDAwMDIxLi4uXSxvaW50c2l6ZT00DQooWEVOKSBkdF9p
cnFfbWFwX3JhdzogaXBhcj0vYWNvbm5lY3RANzAyYzAwMDAvYWdpY0A3MDJmOTAwMCwgc2l6ZT00
DQooWEVOKSAgLT4gYWRkcnNpemU9Mg0KKFhFTikgIC0+IGdvdCBpdCAhDQooWEVOKSBpcnEgOSBu
b3QgKGRpcmVjdGx5IG9yIGluZGlyZWN0bHkpIGNvbm5lY3RlZCB0byBwcmltYXJ5Y29udHJvbGxl
ci4gQ29ubmVjdGVkIHRvIC9hY29ubmVjdEA3MDJjMDAwMC9hZ2ljQDcwMmY5MDAwDQooWEVOKSBk
dF9kZXZpY2VfZ2V0X3Jhd19pcnE6IGRldj0vYWNvbm5lY3RANzAyYzAwMDAvYWRtYUA3MDJlMjAw
MCwgaW5kZXg9MTANCihYRU4pICB1c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihYRU4pICBp
bnRzcGVjPTAgaW50bGVuPTg4DQooWEVOKSAgaW50c2l6ZT00IGludGxlbj04OA0KKFhFTikgZHRf
aXJxX21hcF9yYXc6IHBhcj0vYWNvbm5lY3RANzAyYzAwMDAvYWdpY0A3MDJmOTAwMCxpbnRzcGVj
PVsweDAwMDAwMDAwIDB4MDAwMDAwMjIuLi5dLG9pbnRzaXplPTQNCihYRU4pIGR0X2lycV9tYXBf
cmF3OiBpcGFyPS9hY29ubmVjdEA3MDJjMDAwMC9hZ2ljQDcwMmY5MDAwLCBzaXplPTQNCihYRU4p
ICAtPiBhZGRyc2l6ZT0yDQooWEVOKSAgLT4gZ290IGl0ICENCihYRU4pIGlycSAxMCBub3QgKGRp
cmVjdGx5IG9yIGluZGlyZWN0bHkpIGNvbm5lY3RlZCB0byBwcmltYXJ5Y29udHJvbGxlci4gQ29u
bmVjdGVkIHRvIC9hY29ubmVjdEA3MDJjMDAwMC9hZ2ljQDcwMmY5MDAwDQooWEVOKSBkdF9kZXZp
Y2VfZ2V0X3Jhd19pcnE6IGRldj0vYWNvbm5lY3RANzAyYzAwMDAvYWRtYUA3MDJlMjAwMCwgaW5k
ZXg9MTENCihYRU4pICB1c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVj
PTAgaW50bGVuPTg4DQooWEVOKSAgaW50c2l6ZT00IGludGxlbj04OA0KKFhFTikgZHRfaXJxX21h
cF9yYXc6IHBhcj0vYWNvbm5lY3RANzAyYzAwMDAvYWdpY0A3MDJmOTAwMCxpbnRzcGVjPVsweDAw
MDAwMDAwIDB4MDAwMDAwMjMuLi5dLG9pbnRzaXplPTQNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBp
cGFyPS9hY29ubmVjdEA3MDJjMDAwMC9hZ2ljQDcwMmY5MDAwLCBzaXplPTQNCihYRU4pICAtPiBh
ZGRyc2l6ZT0yDQooWEVOKSAgLT4gZ290IGl0ICENCihYRU4pIGlycSAxMSBub3QgKGRpcmVjdGx5
IG9yIGluZGlyZWN0bHkpIGNvbm5lY3RlZCB0byBwcmltYXJ5Y29udHJvbGxlci4gQ29ubmVjdGVk
IHRvIC9hY29ubmVjdEA3MDJjMDAwMC9hZ2ljQDcwMmY5MDAwDQooWEVOKSBkdF9kZXZpY2VfZ2V0
X3Jhd19pcnE6IGRldj0vYWNvbm5lY3RANzAyYzAwMDAvYWRtYUA3MDJlMjAwMCwgaW5kZXg9MTIN
CihYRU4pICB1c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50
bGVuPTg4DQooWEVOKSAgaW50c2l6ZT00IGludGxlbj04OA0KKFhFTikgZHRfaXJxX21hcF9yYXc6
IHBhcj0vYWNvbm5lY3RANzAyYzAwMDAvYWdpY0A3MDJmOTAwMCxpbnRzcGVjPVsweDAwMDAwMDAw
IDB4MDAwMDAwMjQuLi5dLG9pbnRzaXplPTQNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBpcGFyPS9h
Y29ubmVjdEA3MDJjMDAwMC9hZ2ljQDcwMmY5MDAwLCBzaXplPTQNCihYRU4pICAtPiBhZGRyc2l6
ZT0yDQooWEVOKSAgLT4gZ290IGl0ICENCihYRU4pIGlycSAxMiBub3QgKGRpcmVjdGx5IG9yIGlu
ZGlyZWN0bHkpIGNvbm5lY3RlZCB0byBwcmltYXJ5Y29udHJvbGxlci4gQ29ubmVjdGVkIHRvIC9h
Y29ubmVjdEA3MDJjMDAwMC9hZ2ljQDcwMmY5MDAwDQooWEVOKSBkdF9kZXZpY2VfZ2V0X3Jhd19p
cnE6IGRldj0vYWNvbm5lY3RANzAyYzAwMDAvYWRtYUA3MDJlMjAwMCwgaW5kZXg9MTMNCihYRU4p
ICB1c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTg4
DQooWEVOKSAgaW50c2l6ZT00IGludGxlbj04OA0KKFhFTikgZHRfaXJxX21hcF9yYXc6IHBhcj0v
YWNvbm5lY3RANzAyYzAwMDAvYWdpY0A3MDJmOTAwMCxpbnRzcGVjPVsweDAwMDAwMDAwIDB4MDAw
MDAwMjUuLi5dLG9pbnRzaXplPTQNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBpcGFyPS9hY29ubmVj
dEA3MDJjMDAwMC9hZ2ljQDcwMmY5MDAwLCBzaXplPTQNCihYRU4pICAtPiBhZGRyc2l6ZT0yDQoo
WEVOKSAgLT4gZ290IGl0ICENCihYRU4pIGlycSAxMyBub3QgKGRpcmVjdGx5IG9yIGluZGlyZWN0
bHkpIGNvbm5lY3RlZCB0byBwcmltYXJ5Y29udHJvbGxlci4gQ29ubmVjdGVkIHRvIC9hY29ubmVj
dEA3MDJjMDAwMC9hZ2ljQDcwMmY5MDAwDQooWEVOKSBkdF9kZXZpY2VfZ2V0X3Jhd19pcnE6IGRl
dj0vYWNvbm5lY3RANzAyYzAwMDAvYWRtYUA3MDJlMjAwMCwgaW5kZXg9MTQNCihYRU4pICB1c2lu
ZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTg4DQooWEVO
KSAgaW50c2l6ZT00IGludGxlbj04OA0KKFhFTikgZHRfaXJxX21hcF9yYXc6IHBhcj0vYWNvbm5l
Y3RANzAyYzAwMDAvYWdpY0A3MDJmOTAwMCxpbnRzcGVjPVsweDAwMDAwMDAwIDB4MDAwMDAwMjYu
Li5dLG9pbnRzaXplPTQNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBpcGFyPS9hY29ubmVjdEA3MDJj
MDAwMC9hZ2ljQDcwMmY5MDAwLCBzaXplPTQNCihYRU4pICAtPiBhZGRyc2l6ZT0yDQooWEVOKSAg
LT4gZ290IGl0ICENCihYRU4pIGlycSAxNCBub3QgKGRpcmVjdGx5IG9yIGluZGlyZWN0bHkpIGNv
bm5lY3RlZCB0byBwcmltYXJ5Y29udHJvbGxlci4gQ29ubmVjdGVkIHRvIC9hY29ubmVjdEA3MDJj
MDAwMC9hZ2ljQDcwMmY5MDAwDQooWEVOKSBkdF9kZXZpY2VfZ2V0X3Jhd19pcnE6IGRldj0vYWNv
bm5lY3RANzAyYzAwMDAvYWRtYUA3MDJlMjAwMCwgaW5kZXg9MTUNCihYRU4pICB1c2luZyAnaW50
ZXJydXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTg4DQooWEVOKSAgaW50
c2l6ZT00IGludGxlbj04OA0KKFhFTikgZHRfaXJxX21hcF9yYXc6IHBhcj0vYWNvbm5lY3RANzAy
YzAwMDAvYWdpY0A3MDJmOTAwMCxpbnRzcGVjPVsweDAwMDAwMDAwIDB4MDAwMDAwMjcuLi5dLG9p
bnRzaXplPTQNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBpcGFyPS9hY29ubmVjdEA3MDJjMDAwMC9h
Z2ljQDcwMmY5MDAwLCBzaXplPTQNCihYRU4pICAtPiBhZGRyc2l6ZT0yDQooWEVOKSAgLT4gZ290
IGl0ICENCihYRU4pIGlycSAxNSBub3QgKGRpcmVjdGx5IG9yIGluZGlyZWN0bHkpIGNvbm5lY3Rl
ZCB0byBwcmltYXJ5Y29udHJvbGxlci4gQ29ubmVjdGVkIHRvIC9hY29ubmVjdEA3MDJjMDAwMC9h
Z2ljQDcwMmY5MDAwDQooWEVOKSBkdF9kZXZpY2VfZ2V0X3Jhd19pcnE6IGRldj0vYWNvbm5lY3RA
NzAyYzAwMDAvYWRtYUA3MDJlMjAwMCwgaW5kZXg9MTYNCihYRU4pICB1c2luZyAnaW50ZXJydXB0
cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTg4DQooWEVOKSAgaW50c2l6ZT00
IGludGxlbj04OA0KKFhFTikgZHRfaXJxX21hcF9yYXc6IHBhcj0vYWNvbm5lY3RANzAyYzAwMDAv
YWdpY0A3MDJmOTAwMCxpbnRzcGVjPVsweDAwMDAwMDAwIDB4MDAwMDAwMjguLi5dLG9pbnRzaXpl
PTQNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBpcGFyPS9hY29ubmVjdEA3MDJjMDAwMC9hZ2ljQDcw
MmY5MDAwLCBzaXplPTQNCihYRU4pICAtPiBhZGRyc2l6ZT0yDQooWEVOKSAgLT4gZ290IGl0ICEN
CihYRU4pIGlycSAxNiBub3QgKGRpcmVjdGx5IG9yIGluZGlyZWN0bHkpIGNvbm5lY3RlZCB0byBw
cmltYXJ5Y29udHJvbGxlci4gQ29ubmVjdGVkIHRvIC9hY29ubmVjdEA3MDJjMDAwMC9hZ2ljQDcw
MmY5MDAwDQooWEVOKSBkdF9kZXZpY2VfZ2V0X3Jhd19pcnE6IGRldj0vYWNvbm5lY3RANzAyYzAw
MDAvYWRtYUA3MDJlMjAwMCwgaW5kZXg9MTcNCihYRU4pICB1c2luZyAnaW50ZXJydXB0cycgcHJv
cGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTg4DQooWEVOKSAgaW50c2l6ZT00IGludGxl
bj04OA0KKFhFTikgZHRfaXJxX21hcF9yYXc6IHBhcj0vYWNvbm5lY3RANzAyYzAwMDAvYWdpY0A3
MDJmOTAwMCxpbnRzcGVjPVsweDAwMDAwMDAwIDB4MDAwMDAwMjkuLi5dLG9pbnRzaXplPTQNCihY
RU4pIGR0X2lycV9tYXBfcmF3OiBpcGFyPS9hY29ubmVjdEA3MDJjMDAwMC9hZ2ljQDcwMmY5MDAw
LCBzaXplPTQNCihYRU4pICAtPiBhZGRyc2l6ZT0yDQooWEVOKSAgLT4gZ290IGl0ICENCihYRU4p
IGlycSAxNyBub3QgKGRpcmVjdGx5IG9yIGluZGlyZWN0bHkpIGNvbm5lY3RlZCB0byBwcmltYXJ5
Y29udHJvbGxlci4gQ29ubmVjdGVkIHRvIC9hY29ubmVjdEA3MDJjMDAwMC9hZ2ljQDcwMmY5MDAw
DQooWEVOKSBkdF9kZXZpY2VfZ2V0X3Jhd19pcnE6IGRldj0vYWNvbm5lY3RANzAyYzAwMDAvYWRt
YUA3MDJlMjAwMCwgaW5kZXg9MTgNCihYRU4pICB1c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkN
CihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTg4DQooWEVOKSAgaW50c2l6ZT00IGludGxlbj04OA0K
KFhFTikgZHRfaXJxX21hcF9yYXc6IHBhcj0vYWNvbm5lY3RANzAyYzAwMDAvYWdpY0A3MDJmOTAw
MCxpbnRzcGVjPVsweDAwMDAwMDAwIDB4MDAwMDAwMmEuLi5dLG9pbnRzaXplPTQNCihYRU4pIGR0
X2lycV9tYXBfcmF3OiBpcGFyPS9hY29ubmVjdEA3MDJjMDAwMC9hZ2ljQDcwMmY5MDAwLCBzaXpl
PTQNCihYRU4pICAtPiBhZGRyc2l6ZT0yDQooWEVOKSAgLT4gZ290IGl0ICENCihYRU4pIGlycSAx
OCBub3QgKGRpcmVjdGx5IG9yIGluZGlyZWN0bHkpIGNvbm5lY3RlZCB0byBwcmltYXJ5Y29udHJv
bGxlci4gQ29ubmVjdGVkIHRvIC9hY29ubmVjdEA3MDJjMDAwMC9hZ2ljQDcwMmY5MDAwDQooWEVO
KSBkdF9kZXZpY2VfZ2V0X3Jhd19pcnE6IGRldj0vYWNvbm5lY3RANzAyYzAwMDAvYWRtYUA3MDJl
MjAwMCwgaW5kZXg9MTkNCihYRU4pICB1c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihYRU4p
ICBpbnRzcGVjPTAgaW50bGVuPTg4DQooWEVOKSAgaW50c2l6ZT00IGludGxlbj04OA0KKFhFTikg
ZHRfaXJxX21hcF9yYXc6IHBhcj0vYWNvbm5lY3RANzAyYzAwMDAvYWdpY0A3MDJmOTAwMCxpbnRz
cGVjPVsweDAwMDAwMDAwIDB4MDAwMDAwMmIuLi5dLG9pbnRzaXplPTQNCihYRU4pIGR0X2lycV9t
YXBfcmF3OiBpcGFyPS9hY29ubmVjdEA3MDJjMDAwMC9hZ2ljQDcwMmY5MDAwLCBzaXplPTQNCihY
RU4pICAtPiBhZGRyc2l6ZT0yDQooWEVOKSAgLT4gZ290IGl0ICENCihYRU4pIGlycSAxOSBub3Qg
KGRpcmVjdGx5IG9yIGluZGlyZWN0bHkpIGNvbm5lY3RlZCB0byBwcmltYXJ5Y29udHJvbGxlci4g
Q29ubmVjdGVkIHRvIC9hY29ubmVjdEA3MDJjMDAwMC9hZ2ljQDcwMmY5MDAwDQooWEVOKSBkdF9k
ZXZpY2VfZ2V0X3Jhd19pcnE6IGRldj0vYWNvbm5lY3RANzAyYzAwMDAvYWRtYUA3MDJlMjAwMCwg
aW5kZXg9MjANCihYRU4pICB1c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRz
cGVjPTAgaW50bGVuPTg4DQooWEVOKSAgaW50c2l6ZT00IGludGxlbj04OA0KKFhFTikgZHRfaXJx
X21hcF9yYXc6IHBhcj0vYWNvbm5lY3RANzAyYzAwMDAvYWdpY0A3MDJmOTAwMCxpbnRzcGVjPVsw
eDAwMDAwMDAwIDB4MDAwMDAwMmMuLi5dLG9pbnRzaXplPTQNCihYRU4pIGR0X2lycV9tYXBfcmF3
OiBpcGFyPS9hY29ubmVjdEA3MDJjMDAwMC9hZ2ljQDcwMmY5MDAwLCBzaXplPTQNCihYRU4pICAt
PiBhZGRyc2l6ZT0yDQooWEVOKSAgLT4gZ290IGl0ICENCihYRU4pIGlycSAyMCBub3QgKGRpcmVj
dGx5IG9yIGluZGlyZWN0bHkpIGNvbm5lY3RlZCB0byBwcmltYXJ5Y29udHJvbGxlci4gQ29ubmVj
dGVkIHRvIC9hY29ubmVjdEA3MDJjMDAwMC9hZ2ljQDcwMmY5MDAwDQooWEVOKSBkdF9kZXZpY2Vf
Z2V0X3Jhd19pcnE6IGRldj0vYWNvbm5lY3RANzAyYzAwMDAvYWRtYUA3MDJlMjAwMCwgaW5kZXg9
MjENCihYRU4pICB1c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAg
aW50bGVuPTg4DQooWEVOKSAgaW50c2l6ZT00IGludGxlbj04OA0KKFhFTikgZHRfaXJxX21hcF9y
YXc6IHBhcj0vYWNvbm5lY3RANzAyYzAwMDAvYWdpY0A3MDJmOTAwMCxpbnRzcGVjPVsweDAwMDAw
MDAwIDB4MDAwMDAwMmQuLi5dLG9pbnRzaXplPTQNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBpcGFy
PS9hY29ubmVjdEA3MDJjMDAwMC9hZ2ljQDcwMmY5MDAwLCBzaXplPTQNCihYRU4pICAtPiBhZGRy
c2l6ZT0yDQooWEVOKSAgLT4gZ290IGl0ICENCihYRU4pIGlycSAyMSBub3QgKGRpcmVjdGx5IG9y
IGluZGlyZWN0bHkpIGNvbm5lY3RlZCB0byBwcmltYXJ5Y29udHJvbGxlci4gQ29ubmVjdGVkIHRv
IC9hY29ubmVjdEA3MDJjMDAwMC9hZ2ljQDcwMmY5MDAwDQooWEVOKSBEVDogKiogdHJhbnNsYXRp
b24gZm9yIGRldmljZSAvYWNvbm5lY3RANzAyYzAwMDAvYWRtYUA3MDJlMjAwMCAqKg0KKFhFTikg
RFQ6IGJ1cyBpcyBkZWZhdWx0IChuYT0yLCBucz0yKSBvbiAvYWNvbm5lY3RANzAyYzAwMDANCihY
RU4pIERUOiB0cmFuc2xhdGluZyBhZGRyZXNzOjwzPiAwMDAwMDAwMDwzPiA3MDJlMjAwMDwzPg0K
KFhFTikgRFQ6IHBhcmVudCBidXMgaXMgZGVmYXVsdCAobmE9MiwgbnM9Mikgb24gLw0KKFhFTikg
RFQ6IGVtcHR5IHJhbmdlczsgMToxIHRyYW5zbGF0aW9uDQooWEVOKSBEVDogcGFyZW50IHRyYW5z
bGF0aW9uIGZvcjo8Mz4gMDAwMDAwMDA8Mz4gMDAwMDAwMDA8Mz4NCihYRU4pIERUOiB3aXRoIG9m
ZnNldDogNzAyZTIwMDANCihYRU4pIERUOiBvbmUgbGV2ZWwgdHJhbnNsYXRpb246PDM+IDAwMDAw
MDAwPDM+IDcwMmUyMDAwPDM+DQooWEVOKSBEVDogcmVhY2hlZCByb290IG5vZGUNCihYRU4pICAg
LSBNTUlPOiAwMDcwMmUyMDAwIC0gMDA3MDJlNDAwMCBQMk1UeXBlPTUNCihYRU4pIERUOiAqKiB0
cmFuc2xhdGlvbiBmb3IgZGV2aWNlIC9hY29ubmVjdEA3MDJjMDAwMC9hZG1hQDcwMmUyMDAwICoq
DQooWEVOKSBEVDogYnVzIGlzIGRlZmF1bHQgKG5hPTIsIG5zPTIpIG9uIC9hY29ubmVjdEA3MDJj
MDAwMA0KKFhFTikgRFQ6IHRyYW5zbGF0aW5nIGFkZHJlc3M6PDM+IDAwMDAwMDAwPDM+IDcwMmVj
MDAwPDM+DQooWEVOKSBEVDogcGFyZW50IGJ1cyBpcyBkZWZhdWx0IChuYT0yLCBucz0yKSBvbiAv
DQooWEVOKSBEVDogZW1wdHkgcmFuZ2VzOyAxOjEgdHJhbnNsYXRpb24NCihYRU4pIERUOiBwYXJl
bnQgdHJhbnNsYXRpb24gZm9yOjwzPiAwMDAwMDAwMDwzPiAwMDAwMDAwMDwzPg0KKFhFTikgRFQ6
IHdpdGggb2Zmc2V0OiA3MDJlYzAwMA0KKFhFTikgRFQ6IG9uZSBsZXZlbCB0cmFuc2xhdGlvbjo8
Mz4gMDAwMDAwMDA8Mz4gNzAyZWMwMDA8Mz4NCihYRU4pIERUOiByZWFjaGVkIHJvb3Qgbm9kZQ0K
KFhFTikgICAtIE1NSU86IDAwNzAyZWMwMDAgLSAwMDcwMmVjMDcyIFAyTVR5cGU9NQ0KKFhFTikg
aGFuZGxlIC9hY29ubmVjdEA3MDJjMDAwMC9haHViDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9
L2Fjb25uZWN0QDcwMmMwMDAwL2FodWINCihYRU4pIC9hY29ubmVjdEA3MDJjMDAwMC9haHViIHBh
c3N0aHJvdWdoID0gMSBuYWRkciA9IDENCihYRU4pIENoZWNrIGlmIC9hY29ubmVjdEA3MDJjMDAw
MC9haHViIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJl
cjogZGV2PS9hY29ubmVjdEA3MDJjMDAwMC9haHViDQooWEVOKSBEVDogKiogdHJhbnNsYXRpb24g
Zm9yIGRldmljZSAvYWNvbm5lY3RANzAyYzAwMDAvYWh1YiAqKg0KKFhFTikgRFQ6IGJ1cyBpcyBk
ZWZhdWx0IChuYT0yLCBucz0yKSBvbiAvYWNvbm5lY3RANzAyYzAwMDANCihYRU4pIERUOiB0cmFu
c2xhdGluZyBhZGRyZXNzOjwzPiAwMDAwMDAwMDwzPiA3MDJkMDgwMDwzPg0KKFhFTikgRFQ6IHBh
cmVudCBidXMgaXMgZGVmYXVsdCAobmE9MiwgbnM9Mikgb24gLw0KKFhFTikgRFQ6IGVtcHR5IHJh
bmdlczsgMToxIHRyYW5zbGF0aW9uDQooWEVOKSBEVDogcGFyZW50IHRyYW5zbGF0aW9uIGZvcjo8
Mz4gMDAwMDAwMDA8Mz4gMDAwMDAwMDA8Mz4NCihYRU4pIERUOiB3aXRoIG9mZnNldDogNzAyZDA4
MDANCihYRU4pIERUOiBvbmUgbGV2ZWwgdHJhbnNsYXRpb246PDM+IDAwMDAwMDAwPDM+IDcwMmQw
ODAwPDM+DQooWEVOKSBEVDogcmVhY2hlZCByb290IG5vZGUNCihYRU4pICAgLSBNTUlPOiAwMDcw
MmQwODAwIC0gMDA3MDJkMTAwMCBQMk1UeXBlPTUNCihYRU4pIGhhbmRsZSAvYWNvbm5lY3RANzAy
YzAwMDAvYWh1Yi9hZG1haWZAMHg3MDJkMDAwMA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9h
Y29ubmVjdEA3MDJjMDAwMC9haHViL2FkbWFpZkAweDcwMmQwMDAwDQooWEVOKSAvYWNvbm5lY3RA
NzAyYzAwMDAvYWh1Yi9hZG1haWZAMHg3MDJkMDAwMCBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAx
DQooWEVOKSBDaGVjayBpZiAvYWNvbm5lY3RANzAyYzAwMDAvYWh1Yi9hZG1haWZAMHg3MDJkMDAw
MCBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRl
dj0vYWNvbm5lY3RANzAyYzAwMDAvYWh1Yi9hZG1haWZAMHg3MDJkMDAwMA0KKFhFTikgRFQ6ICoq
IHRyYW5zbGF0aW9uIGZvciBkZXZpY2UgL2Fjb25uZWN0QDcwMmMwMDAwL2FodWIvYWRtYWlmQDB4
NzAyZDAwMDAgKioNCihYRU4pIERUOiBidXMgaXMgZGVmYXVsdCAobmE9MSwgbnM9MSkgb24gL2Fj
b25uZWN0QDcwMmMwMDAwL2FodWINCihYRU4pIERUOiB0cmFuc2xhdGluZyBhZGRyZXNzOjwzPiA3
MDJkMDAwMDwzPg0KKFhFTikgRFQ6IHBhcmVudCBidXMgaXMgZGVmYXVsdCAobmE9MiwgbnM9Mikg
b24gL2Fjb25uZWN0QDcwMmMwMDAwDQooWEVOKSBEVDogd2Fsa2luZyByYW5nZXMuLi4NCihYRU4p
IERUOiBkZWZhdWx0IG1hcCwgY3A9NzAyZDAwMDAsIHM9MTAwMDAsIGRhPTcwMmQwMDAwDQooWEVO
KSBEVDogcGFyZW50IHRyYW5zbGF0aW9uIGZvcjo8Mz4gMDAwMDAwMDA8Mz4gNzAyZDAwMDA8Mz4N
CihYRU4pIERUOiB3aXRoIG9mZnNldDogMA0KKFhFTikgRFQ6IG9uZSBsZXZlbCB0cmFuc2xhdGlv
bjo8Mz4gMDAwMDAwMDA8Mz4gNzAyZDAwMDA8Mz4NCihYRU4pIERUOiBwYXJlbnQgYnVzIGlzIGRl
ZmF1bHQgKG5hPTIsIG5zPTIpIG9uIC8NCihYRU4pIERUOiBlbXB0eSByYW5nZXM7IDE6MSB0cmFu
c2xhdGlvbg0KKFhFTikgRFQ6IHBhcmVudCB0cmFuc2xhdGlvbiBmb3I6PDM+IDAwMDAwMDAwPDM+
IDAwMDAwMDAwPDM+DQooWEVOKSBEVDogd2l0aCBvZmZzZXQ6IDcwMmQwMDAwDQooWEVOKSBEVDog
b25lIGxldmVsIHRyYW5zbGF0aW9uOjwzPiAwMDAwMDAwMDwzPiA3MDJkMDAwMDwzPg0KKFhFTikg
RFQ6IHJlYWNoZWQgcm9vdCBub2RlDQooWEVOKSAgIC0gTU1JTzogMDA3MDJkMDAwMCAtIDAwNzAy
ZDA4MDAgUDJNVHlwZT01DQooWEVOKSBoYW5kbGUgL2Fjb25uZWN0QDcwMmMwMDAwL2FodWIvc2Zj
QDcwMmQyMDAwDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2Fjb25uZWN0QDcwMmMwMDAwL2Fo
dWIvc2ZjQDcwMmQyMDAwDQooWEVOKSAvYWNvbm5lY3RANzAyYzAwMDAvYWh1Yi9zZmNANzAyZDIw
MDAgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMQ0KKFhFTikgQ2hlY2sgaWYgL2Fjb25uZWN0QDcw
MmMwMDAwL2FodWIvc2ZjQDcwMmQyMDAwIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0K
KFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9hY29ubmVjdEA3MDJjMDAwMC9haHViL3NmY0A3MDJk
MjAwMA0KKFhFTikgRFQ6ICoqIHRyYW5zbGF0aW9uIGZvciBkZXZpY2UgL2Fjb25uZWN0QDcwMmMw
MDAwL2FodWIvc2ZjQDcwMmQyMDAwICoqDQooWEVOKSBEVDogYnVzIGlzIGRlZmF1bHQgKG5hPTEs
IG5zPTEpIG9uIC9hY29ubmVjdEA3MDJjMDAwMC9haHViDQooWEVOKSBEVDogdHJhbnNsYXRpbmcg
YWRkcmVzczo8Mz4gNzAyZDIwMDA8Mz4NCihYRU4pIERUOiBwYXJlbnQgYnVzIGlzIGRlZmF1bHQg
KG5hPTIsIG5zPTIpIG9uIC9hY29ubmVjdEA3MDJjMDAwMA0KKFhFTikgRFQ6IHdhbGtpbmcgcmFu
Z2VzLi4uDQooWEVOKSBEVDogZGVmYXVsdCBtYXAsIGNwPTcwMmQwMDAwLCBzPTEwMDAwLCBkYT03
MDJkMjAwMA0KKFhFTikgRFQ6IHBhcmVudCB0cmFuc2xhdGlvbiBmb3I6PDM+IDAwMDAwMDAwPDM+
IDcwMmQwMDAwPDM+DQooWEVOKSBEVDogd2l0aCBvZmZzZXQ6IDIwMDANCihYRU4pIERUOiBvbmUg
bGV2ZWwgdHJhbnNsYXRpb246PDM+IDAwMDAwMDAwPDM+IDcwMmQyMDAwPDM+DQooWEVOKSBEVDog
cGFyZW50IGJ1cyBpcyBkZWZhdWx0IChuYT0yLCBucz0yKSBvbiAvDQooWEVOKSBEVDogZW1wdHkg
cmFuZ2VzOyAxOjEgdHJhbnNsYXRpb24NCihYRU4pIERUOiBwYXJlbnQgdHJhbnNsYXRpb24gZm9y
OjwzPiAwMDAwMDAwMDwzPiAwMDAwMDAwMDwzPg0KKFhFTikgRFQ6IHdpdGggb2Zmc2V0OiA3MDJk
MjAwMA0KKFhFTikgRFQ6IG9uZSBsZXZlbCB0cmFuc2xhdGlvbjo8Mz4gMDAwMDAwMDA8Mz4gNzAy
ZDIwMDA8Mz4NCihYRU4pIERUOiByZWFjaGVkIHJvb3Qgbm9kZQ0KKFhFTikgICAtIE1NSU86IDAw
NzAyZDIwMDAgLSAwMDcwMmQyMjAwIFAyTVR5cGU9NQ0KKFhFTikgaGFuZGxlIC9hY29ubmVjdEA3
MDJjMDAwMC9haHViL3NmY0A3MDJkMjIwMA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9hY29u
bmVjdEA3MDJjMDAwMC9haHViL3NmY0A3MDJkMjIwMA0KKFhFTikgL2Fjb25uZWN0QDcwMmMwMDAw
L2FodWIvc2ZjQDcwMmQyMjAwIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDENCihYRU4pIENoZWNr
IGlmIC9hY29ubmVjdEA3MDJjMDAwMC9haHViL3NmY0A3MDJkMjIwMCBpcyBiZWhpbmQgdGhlIElP
TU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vYWNvbm5lY3RANzAyYzAw
MDAvYWh1Yi9zZmNANzAyZDIyMDANCihYRU4pIERUOiAqKiB0cmFuc2xhdGlvbiBmb3IgZGV2aWNl
IC9hY29ubmVjdEA3MDJjMDAwMC9haHViL3NmY0A3MDJkMjIwMCAqKg0KKFhFTikgRFQ6IGJ1cyBp
cyBkZWZhdWx0IChuYT0xLCBucz0xKSBvbiAvYWNvbm5lY3RANzAyYzAwMDAvYWh1Yg0KKFhFTikg
RFQ6IHRyYW5zbGF0aW5nIGFkZHJlc3M6PDM+IDcwMmQyMjAwPDM+DQooWEVOKSBEVDogcGFyZW50
IGJ1cyBpcyBkZWZhdWx0IChuYT0yLCBucz0yKSBvbiAvYWNvbm5lY3RANzAyYzAwMDANCihYRU4p
IERUOiB3YWxraW5nIHJhbmdlcy4uLg0KKFhFTikgRFQ6IGRlZmF1bHQgbWFwLCBjcD03MDJkMDAw
MCwgcz0xMDAwMCwgZGE9NzAyZDIyMDANCihYRU4pIERUOiBwYXJlbnQgdHJhbnNsYXRpb24gZm9y
OjwzPiAwMDAwMDAwMDwzPiA3MDJkMDAwMDwzPg0KKFhFTikgRFQ6IHdpdGggb2Zmc2V0OiAyMjAw
DQooWEVOKSBEVDogb25lIGxldmVsIHRyYW5zbGF0aW9uOjwzPiAwMDAwMDAwMDwzPiA3MDJkMjIw
MDwzPg0KKFhFTikgRFQ6IHBhcmVudCBidXMgaXMgZGVmYXVsdCAobmE9MiwgbnM9Mikgb24gLw0K
KFhFTikgRFQ6IGVtcHR5IHJhbmdlczsgMToxIHRyYW5zbGF0aW9uDQooWEVOKSBEVDogcGFyZW50
IHRyYW5zbGF0aW9uIGZvcjo8Mz4gMDAwMDAwMDA8Mz4gMDAwMDAwMDA8Mz4NCihYRU4pIERUOiB3
aXRoIG9mZnNldDogNzAyZDIyMDANCihYRU4pIERUOiBvbmUgbGV2ZWwgdHJhbnNsYXRpb246PDM+
IDAwMDAwMDAwPDM+IDcwMmQyMjAwPDM+DQooWEVOKSBEVDogcmVhY2hlZCByb290IG5vZGUNCihY
RU4pICAgLSBNTUlPOiAwMDcwMmQyMjAwIC0gMDA3MDJkMjQwMCBQMk1UeXBlPTUNCihYRU4pIGhh
bmRsZSAvYWNvbm5lY3RANzAyYzAwMDAvYWh1Yi9zZmNANzAyZDI0MDANCihYRU4pIGR0X2lycV9u
dW1iZXI6IGRldj0vYWNvbm5lY3RANzAyYzAwMDAvYWh1Yi9zZmNANzAyZDI0MDANCihYRU4pIC9h
Y29ubmVjdEA3MDJjMDAwMC9haHViL3NmY0A3MDJkMjQwMCBwYXNzdGhyb3VnaCA9IDEgbmFkZHIg
PSAxDQooWEVOKSBDaGVjayBpZiAvYWNvbm5lY3RANzAyYzAwMDAvYWh1Yi9zZmNANzAyZDI0MDAg
aXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9
L2Fjb25uZWN0QDcwMmMwMDAwL2FodWIvc2ZjQDcwMmQyNDAwDQooWEVOKSBEVDogKiogdHJhbnNs
YXRpb24gZm9yIGRldmljZSAvYWNvbm5lY3RANzAyYzAwMDAvYWh1Yi9zZmNANzAyZDI0MDAgKioN
CihYRU4pIERUOiBidXMgaXMgZGVmYXVsdCAobmE9MSwgbnM9MSkgb24gL2Fjb25uZWN0QDcwMmMw
MDAwL2FodWINCihYRU4pIERUOiB0cmFuc2xhdGluZyBhZGRyZXNzOjwzPiA3MDJkMjQwMDwzPg0K
KFhFTikgRFQ6IHBhcmVudCBidXMgaXMgZGVmYXVsdCAobmE9MiwgbnM9Mikgb24gL2Fjb25uZWN0
QDcwMmMwMDAwDQooWEVOKSBEVDogd2Fsa2luZyByYW5nZXMuLi4NCihYRU4pIERUOiBkZWZhdWx0
IG1hcCwgY3A9NzAyZDAwMDAsIHM9MTAwMDAsIGRhPTcwMmQyNDAwDQooWEVOKSBEVDogcGFyZW50
IHRyYW5zbGF0aW9uIGZvcjo8Mz4gMDAwMDAwMDA8Mz4gNzAyZDAwMDA8Mz4NCihYRU4pIERUOiB3
aXRoIG9mZnNldDogMjQwMA0KKFhFTikgRFQ6IG9uZSBsZXZlbCB0cmFuc2xhdGlvbjo8Mz4gMDAw
MDAwMDA8Mz4gNzAyZDI0MDA8Mz4NCihYRU4pIERUOiBwYXJlbnQgYnVzIGlzIGRlZmF1bHQgKG5h
PTIsIG5zPTIpIG9uIC8NCihYRU4pIERUOiBlbXB0eSByYW5nZXM7IDE6MSB0cmFuc2xhdGlvbg0K
KFhFTikgRFQ6IHBhcmVudCB0cmFuc2xhdGlvbiBmb3I6PDM+IDAwMDAwMDAwPDM+IDAwMDAwMDAw
PDM+DQooWEVOKSBEVDogd2l0aCBvZmZzZXQ6IDcwMmQyNDAwDQooWEVOKSBEVDogb25lIGxldmVs
IHRyYW5zbGF0aW9uOjwzPiAwMDAwMDAwMDwzPiA3MDJkMjQwMDwzPg0KKFhFTikgRFQ6IHJlYWNo
ZWQgcm9vdCBub2RlDQooWEVOKSAgIC0gTU1JTzogMDA3MDJkMjQwMCAtIDAwNzAyZDI2MDAgUDJN
VHlwZT01DQooWEVOKSBoYW5kbGUgL2Fjb25uZWN0QDcwMmMwMDAwL2FodWIvc2ZjQDcwMmQyNjAw
DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2Fjb25uZWN0QDcwMmMwMDAwL2FodWIvc2ZjQDcw
MmQyNjAwDQooWEVOKSAvYWNvbm5lY3RANzAyYzAwMDAvYWh1Yi9zZmNANzAyZDI2MDAgcGFzc3Ro
cm91Z2ggPSAxIG5hZGRyID0gMQ0KKFhFTikgQ2hlY2sgaWYgL2Fjb25uZWN0QDcwMmMwMDAwL2Fo
dWIvc2ZjQDcwMmQyNjAwIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRf
aXJxX251bWJlcjogZGV2PS9hY29ubmVjdEA3MDJjMDAwMC9haHViL3NmY0A3MDJkMjYwMA0KKFhF
TikgRFQ6ICoqIHRyYW5zbGF0aW9uIGZvciBkZXZpY2UgL2Fjb25uZWN0QDcwMmMwMDAwL2FodWIv
c2ZjQDcwMmQyNjAwICoqDQooWEVOKSBEVDogYnVzIGlzIGRlZmF1bHQgKG5hPTEsIG5zPTEpIG9u
IC9hY29ubmVjdEA3MDJjMDAwMC9haHViDQooWEVOKSBEVDogdHJhbnNsYXRpbmcgYWRkcmVzczo8
Mz4gNzAyZDI2MDA8Mz4NCihYRU4pIERUOiBwYXJlbnQgYnVzIGlzIGRlZmF1bHQgKG5hPTIsIG5z
PTIpIG9uIC9hY29ubmVjdEA3MDJjMDAwMA0KKFhFTikgRFQ6IHdhbGtpbmcgcmFuZ2VzLi4uDQoo
WEVOKSBEVDogZGVmYXVsdCBtYXAsIGNwPTcwMmQwMDAwLCBzPTEwMDAwLCBkYT03MDJkMjYwMA0K
KFhFTikgRFQ6IHBhcmVudCB0cmFuc2xhdGlvbiBmb3I6PDM+IDAwMDAwMDAwPDM+IDcwMmQwMDAw
PDM+DQooWEVOKSBEVDogd2l0aCBvZmZzZXQ6IDI2MDANCihYRU4pIERUOiBvbmUgbGV2ZWwgdHJh
bnNsYXRpb246PDM+IDAwMDAwMDAwPDM+IDcwMmQyNjAwPDM+DQooWEVOKSBEVDogcGFyZW50IGJ1
cyBpcyBkZWZhdWx0IChuYT0yLCBucz0yKSBvbiAvDQooWEVOKSBEVDogZW1wdHkgcmFuZ2VzOyAx
OjEgdHJhbnNsYXRpb24NCihYRU4pIERUOiBwYXJlbnQgdHJhbnNsYXRpb24gZm9yOjwzPiAwMDAw
MDAwMDwzPiAwMDAwMDAwMDwzPg0KKFhFTikgRFQ6IHdpdGggb2Zmc2V0OiA3MDJkMjYwMA0KKFhF
TikgRFQ6IG9uZSBsZXZlbCB0cmFuc2xhdGlvbjo8Mz4gMDAwMDAwMDA8Mz4gNzAyZDI2MDA8Mz4N
CihYRU4pIERUOiByZWFjaGVkIHJvb3Qgbm9kZQ0KKFhFTikgICAtIE1NSU86IDAwNzAyZDI2MDAg
LSAwMDcwMmQyODAwIFAyTVR5cGU9NQ0KKFhFTikgaGFuZGxlIC9hY29ubmVjdEA3MDJjMDAwMC9h
aHViL3Nwa3Byb3RANzAyZDhjMDANCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vYWNvbm5lY3RA
NzAyYzAwMDAvYWh1Yi9zcGtwcm90QDcwMmQ4YzAwDQooWEVOKSAvYWNvbm5lY3RANzAyYzAwMDAv
YWh1Yi9zcGtwcm90QDcwMmQ4YzAwIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDENCihYRU4pIENo
ZWNrIGlmIC9hY29ubmVjdEA3MDJjMDAwMC9haHViL3Nwa3Byb3RANzAyZDhjMDAgaXMgYmVoaW5k
IHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2Fjb25uZWN0
QDcwMmMwMDAwL2FodWIvc3BrcHJvdEA3MDJkOGMwMA0KKFhFTikgRFQ6ICoqIHRyYW5zbGF0aW9u
IGZvciBkZXZpY2UgL2Fjb25uZWN0QDcwMmMwMDAwL2FodWIvc3BrcHJvdEA3MDJkOGMwMCAqKg0K
KFhFTikgRFQ6IGJ1cyBpcyBkZWZhdWx0IChuYT0xLCBucz0xKSBvbiAvYWNvbm5lY3RANzAyYzAw
MDAvYWh1Yg0KKFhFTikgRFQ6IHRyYW5zbGF0aW5nIGFkZHJlc3M6PDM+IDcwMmQ4YzAwPDM+DQoo
WEVOKSBEVDogcGFyZW50IGJ1cyBpcyBkZWZhdWx0IChuYT0yLCBucz0yKSBvbiAvYWNvbm5lY3RA
NzAyYzAwMDANCihYRU4pIERUOiB3YWxraW5nIHJhbmdlcy4uLg0KKFhFTikgRFQ6IGRlZmF1bHQg
bWFwLCBjcD03MDJkMDAwMCwgcz0xMDAwMCwgZGE9NzAyZDhjMDANCihYRU4pIERUOiBwYXJlbnQg
dHJhbnNsYXRpb24gZm9yOjwzPiAwMDAwMDAwMDwzPiA3MDJkMDAwMDwzPg0KKFhFTikgRFQ6IHdp
dGggb2Zmc2V0OiA4YzAwDQooWEVOKSBEVDogb25lIGxldmVsIHRyYW5zbGF0aW9uOjwzPiAwMDAw
MDAwMDwzPiA3MDJkOGMwMDwzPg0KKFhFTikgRFQ6IHBhcmVudCBidXMgaXMgZGVmYXVsdCAobmE9
MiwgbnM9Mikgb24gLw0KKFhFTikgRFQ6IGVtcHR5IHJhbmdlczsgMToxIHRyYW5zbGF0aW9uDQoo
WEVOKSBEVDogcGFyZW50IHRyYW5zbGF0aW9uIGZvcjo8Mz4gMDAwMDAwMDA8Mz4gMDAwMDAwMDA8
Mz4NCihYRU4pIERUOiB3aXRoIG9mZnNldDogNzAyZDhjMDANCihYRU4pIERUOiBvbmUgbGV2ZWwg
dHJhbnNsYXRpb246PDM+IDAwMDAwMDAwPDM+IDcwMmQ4YzAwPDM+DQooWEVOKSBEVDogcmVhY2hl
ZCByb290IG5vZGUNCihYRU4pICAgLSBNTUlPOiAwMDcwMmQ4YzAwIC0gMDA3MDJkOTAwMCBQMk1U
eXBlPTUNCihYRU4pIGhhbmRsZSAvYWNvbm5lY3RANzAyYzAwMDAvYWh1Yi9hbWl4ZXJANzAyZGJi
MDANCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vYWNvbm5lY3RANzAyYzAwMDAvYWh1Yi9hbWl4
ZXJANzAyZGJiMDANCihYRU4pIC9hY29ubmVjdEA3MDJjMDAwMC9haHViL2FtaXhlckA3MDJkYmIw
MCBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAxDQooWEVOKSBDaGVjayBpZiAvYWNvbm5lY3RANzAy
YzAwMDAvYWh1Yi9hbWl4ZXJANzAyZGJiMDAgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0
DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2Fjb25uZWN0QDcwMmMwMDAwL2FodWIvYW1peGVy
QDcwMmRiYjAwDQooWEVOKSBEVDogKiogdHJhbnNsYXRpb24gZm9yIGRldmljZSAvYWNvbm5lY3RA
NzAyYzAwMDAvYWh1Yi9hbWl4ZXJANzAyZGJiMDAgKioNCihYRU4pIERUOiBidXMgaXMgZGVmYXVs
dCAobmE9MSwgbnM9MSkgb24gL2Fjb25uZWN0QDcwMmMwMDAwL2FodWINCihYRU4pIERUOiB0cmFu
c2xhdGluZyBhZGRyZXNzOjwzPiA3MDJkYmIwMDwzPg0KKFhFTikgRFQ6IHBhcmVudCBidXMgaXMg
ZGVmYXVsdCAobmE9MiwgbnM9Mikgb24gL2Fjb25uZWN0QDcwMmMwMDAwDQooWEVOKSBEVDogd2Fs
a2luZyByYW5nZXMuLi4NCihYRU4pIERUOiBkZWZhdWx0IG1hcCwgY3A9NzAyZDAwMDAsIHM9MTAw
MDAsIGRhPTcwMmRiYjAwDQooWEVOKSBEVDogcGFyZW50IHRyYW5zbGF0aW9uIGZvcjo8Mz4gMDAw
MDAwMDA8Mz4gNzAyZDAwMDA8Mz4NCihYRU4pIERUOiB3aXRoIG9mZnNldDogYmIwMA0KKFhFTikg
RFQ6IG9uZSBsZXZlbCB0cmFuc2xhdGlvbjo8Mz4gMDAwMDAwMDA8Mz4gNzAyZGJiMDA8Mz4NCihY
RU4pIERUOiBwYXJlbnQgYnVzIGlzIGRlZmF1bHQgKG5hPTIsIG5zPTIpIG9uIC8NCihYRU4pIERU
OiBlbXB0eSByYW5nZXM7IDE6MSB0cmFuc2xhdGlvbg0KKFhFTikgRFQ6IHBhcmVudCB0cmFuc2xh
dGlvbiBmb3I6PDM+IDAwMDAwMDAwPDM+IDAwMDAwMDAwPDM+DQooWEVOKSBEVDogd2l0aCBvZmZz
ZXQ6IDcwMmRiYjAwDQooWEVOKSBEVDogb25lIGxldmVsIHRyYW5zbGF0aW9uOjwzPiAwMDAwMDAw
MDwzPiA3MDJkYmIwMDwzPg0KKFhFTikgRFQ6IHJlYWNoZWQgcm9vdCBub2RlDQooWEVOKSAgIC0g
TU1JTzogMDA3MDJkYmIwMCAtIDAwNzAyZGMzMDAgUDJNVHlwZT01DQooWEVOKSBoYW5kbGUgL2Fj
b25uZWN0QDcwMmMwMDAwL2FodWIvaTJzQDcwMmQxMDAwDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBk
ZXY9L2Fjb25uZWN0QDcwMmMwMDAwL2FodWIvaTJzQDcwMmQxMDAwDQooWEVOKSAvYWNvbm5lY3RA
NzAyYzAwMDAvYWh1Yi9pMnNANzAyZDEwMDAgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMQ0KKFhF
TikgQ2hlY2sgaWYgL2Fjb25uZWN0QDcwMmMwMDAwL2FodWIvaTJzQDcwMmQxMDAwIGlzIGJlaGlu
ZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9hY29ubmVj
dEA3MDJjMDAwMC9haHViL2kyc0A3MDJkMTAwMA0KKFhFTikgRFQ6ICoqIHRyYW5zbGF0aW9uIGZv
ciBkZXZpY2UgL2Fjb25uZWN0QDcwMmMwMDAwL2FodWIvaTJzQDcwMmQxMDAwICoqDQooWEVOKSBE
VDogYnVzIGlzIGRlZmF1bHQgKG5hPTEsIG5zPTEpIG9uIC9hY29ubmVjdEA3MDJjMDAwMC9haHVi
DQooWEVOKSBEVDogdHJhbnNsYXRpbmcgYWRkcmVzczo8Mz4gNzAyZDEwMDA8Mz4NCihYRU4pIERU
OiBwYXJlbnQgYnVzIGlzIGRlZmF1bHQgKG5hPTIsIG5zPTIpIG9uIC9hY29ubmVjdEA3MDJjMDAw
MA0KKFhFTikgRFQ6IHdhbGtpbmcgcmFuZ2VzLi4uDQooWEVOKSBEVDogZGVmYXVsdCBtYXAsIGNw
PTcwMmQwMDAwLCBzPTEwMDAwLCBkYT03MDJkMTAwMA0KKFhFTikgRFQ6IHBhcmVudCB0cmFuc2xh
dGlvbiBmb3I6PDM+IDAwMDAwMDAwPDM+IDcwMmQwMDAwPDM+DQooWEVOKSBEVDogd2l0aCBvZmZz
ZXQ6IDEwMDANCihYRU4pIERUOiBvbmUgbGV2ZWwgdHJhbnNsYXRpb246PDM+IDAwMDAwMDAwPDM+
IDcwMmQxMDAwPDM+DQooWEVOKSBEVDogcGFyZW50IGJ1cyBpcyBkZWZhdWx0IChuYT0yLCBucz0y
KSBvbiAvDQooWEVOKSBEVDogZW1wdHkgcmFuZ2VzOyAxOjEgdHJhbnNsYXRpb24NCihYRU4pIERU
OiBwYXJlbnQgdHJhbnNsYXRpb24gZm9yOjwzPiAwMDAwMDAwMDwzPiAwMDAwMDAwMDwzPg0KKFhF
TikgRFQ6IHdpdGggb2Zmc2V0OiA3MDJkMTAwMA0KKFhFTikgRFQ6IG9uZSBsZXZlbCB0cmFuc2xh
dGlvbjo8Mz4gMDAwMDAwMDA8Mz4gNzAyZDEwMDA8Mz4NCihYRU4pIERUOiByZWFjaGVkIHJvb3Qg
bm9kZQ0KKFhFTikgICAtIE1NSU86IDAwNzAyZDEwMDAgLSAwMDcwMmQxMTAwIFAyTVR5cGU9NQ0K
KFhFTikgaGFuZGxlIC9hY29ubmVjdEA3MDJjMDAwMC9haHViL2kyc0A3MDJkMTEwMA0KKFhFTikg
ZHRfaXJxX251bWJlcjogZGV2PS9hY29ubmVjdEA3MDJjMDAwMC9haHViL2kyc0A3MDJkMTEwMA0K
KFhFTikgL2Fjb25uZWN0QDcwMmMwMDAwL2FodWIvaTJzQDcwMmQxMTAwIHBhc3N0aHJvdWdoID0g
MSBuYWRkciA9IDENCihYRU4pIENoZWNrIGlmIC9hY29ubmVjdEA3MDJjMDAwMC9haHViL2kyc0A3
MDJkMTEwMCBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1i
ZXI6IGRldj0vYWNvbm5lY3RANzAyYzAwMDAvYWh1Yi9pMnNANzAyZDExMDANCihYRU4pIERUOiAq
KiB0cmFuc2xhdGlvbiBmb3IgZGV2aWNlIC9hY29ubmVjdEA3MDJjMDAwMC9haHViL2kyc0A3MDJk
MTEwMCAqKg0KKFhFTikgRFQ6IGJ1cyBpcyBkZWZhdWx0IChuYT0xLCBucz0xKSBvbiAvYWNvbm5l
Y3RANzAyYzAwMDAvYWh1Yg0KKFhFTikgRFQ6IHRyYW5zbGF0aW5nIGFkZHJlc3M6PDM+IDcwMmQx
MTAwPDM+DQooWEVOKSBEVDogcGFyZW50IGJ1cyBpcyBkZWZhdWx0IChuYT0yLCBucz0yKSBvbiAv
YWNvbm5lY3RANzAyYzAwMDANCihYRU4pIERUOiB3YWxraW5nIHJhbmdlcy4uLg0KKFhFTikgRFQ6
IGRlZmF1bHQgbWFwLCBjcD03MDJkMDAwMCwgcz0xMDAwMCwgZGE9NzAyZDExMDANCihYRU4pIERU
OiBwYXJlbnQgdHJhbnNsYXRpb24gZm9yOjwzPiAwMDAwMDAwMDwzPiA3MDJkMDAwMDwzPg0KKFhF
TikgRFQ6IHdpdGggb2Zmc2V0OiAxMTAwDQooWEVOKSBEVDogb25lIGxldmVsIHRyYW5zbGF0aW9u
OjwzPiAwMDAwMDAwMDwzPiA3MDJkMTEwMDwzPg0KKFhFTikgRFQ6IHBhcmVudCBidXMgaXMgZGVm
YXVsdCAobmE9MiwgbnM9Mikgb24gLw0KKFhFTikgRFQ6IGVtcHR5IHJhbmdlczsgMToxIHRyYW5z
bGF0aW9uDQooWEVOKSBEVDogcGFyZW50IHRyYW5zbGF0aW9uIGZvcjo8Mz4gMDAwMDAwMDA8Mz4g
MDAwMDAwMDA8Mz4NCihYRU4pIERUOiB3aXRoIG9mZnNldDogNzAyZDExMDANCihYRU4pIERUOiBv
bmUgbGV2ZWwgdHJhbnNsYXRpb246PDM+IDAwMDAwMDAwPDM+IDcwMmQxMTAwPDM+DQooWEVOKSBE
VDogcmVhY2hlZCByb290IG5vZGUNCihYRU4pICAgLSBNTUlPOiAwMDcwMmQxMTAwIC0gMDA3MDJk
MTIwMCBQMk1UeXBlPTUNCihYRU4pIGhhbmRsZSAvYWNvbm5lY3RANzAyYzAwMDAvYWh1Yi9pMnNA
NzAyZDEyMDANCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vYWNvbm5lY3RANzAyYzAwMDAvYWh1
Yi9pMnNANzAyZDEyMDANCihYRU4pIC9hY29ubmVjdEA3MDJjMDAwMC9haHViL2kyc0A3MDJkMTIw
MCBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAxDQooWEVOKSBDaGVjayBpZiAvYWNvbm5lY3RANzAy
YzAwMDAvYWh1Yi9pMnNANzAyZDEyMDAgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQoo
WEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2Fjb25uZWN0QDcwMmMwMDAwL2FodWIvaTJzQDcwMmQx
MjAwDQooWEVOKSBEVDogKiogdHJhbnNsYXRpb24gZm9yIGRldmljZSAvYWNvbm5lY3RANzAyYzAw
MDAvYWh1Yi9pMnNANzAyZDEyMDAgKioNCihYRU4pIERUOiBidXMgaXMgZGVmYXVsdCAobmE9MSwg
bnM9MSkgb24gL2Fjb25uZWN0QDcwMmMwMDAwL2FodWINCihYRU4pIERUOiB0cmFuc2xhdGluZyBh
ZGRyZXNzOjwzPiA3MDJkMTIwMDwzPg0KKFhFTikgRFQ6IHBhcmVudCBidXMgaXMgZGVmYXVsdCAo
bmE9MiwgbnM9Mikgb24gL2Fjb25uZWN0QDcwMmMwMDAwDQooWEVOKSBEVDogd2Fsa2luZyByYW5n
ZXMuLi4NCihYRU4pIERUOiBkZWZhdWx0IG1hcCwgY3A9NzAyZDAwMDAsIHM9MTAwMDAsIGRhPTcw
MmQxMjAwDQooWEVOKSBEVDogcGFyZW50IHRyYW5zbGF0aW9uIGZvcjo8Mz4gMDAwMDAwMDA8Mz4g
NzAyZDAwMDA8Mz4NCihYRU4pIERUOiB3aXRoIG9mZnNldDogMTIwMA0KKFhFTikgRFQ6IG9uZSBs
ZXZlbCB0cmFuc2xhdGlvbjo8Mz4gMDAwMDAwMDA8Mz4gNzAyZDEyMDA8Mz4NCihYRU4pIERUOiBw
YXJlbnQgYnVzIGlzIGRlZmF1bHQgKG5hPTIsIG5zPTIpIG9uIC8NCihYRU4pIERUOiBlbXB0eSBy
YW5nZXM7IDE6MSB0cmFuc2xhdGlvbg0KKFhFTikgRFQ6IHBhcmVudCB0cmFuc2xhdGlvbiBmb3I6
PDM+IDAwMDAwMDAwPDM+IDAwMDAwMDAwPDM+DQooWEVOKSBEVDogd2l0aCBvZmZzZXQ6IDcwMmQx
MjAwDQooWEVOKSBEVDogb25lIGxldmVsIHRyYW5zbGF0aW9uOjwzPiAwMDAwMDAwMDwzPiA3MDJk
MTIwMDwzPg0KKFhFTikgRFQ6IHJlYWNoZWQgcm9vdCBub2RlDQooWEVOKSAgIC0gTU1JTzogMDA3
MDJkMTIwMCAtIDAwNzAyZDEzMDAgUDJNVHlwZT01DQooWEVOKSBoYW5kbGUgL2Fjb25uZWN0QDcw
MmMwMDAwL2FodWIvaTJzQDcwMmQxMzAwDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2Fjb25u
ZWN0QDcwMmMwMDAwL2FodWIvaTJzQDcwMmQxMzAwDQooWEVOKSAvYWNvbm5lY3RANzAyYzAwMDAv
YWh1Yi9pMnNANzAyZDEzMDAgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMQ0KKFhFTikgQ2hlY2sg
aWYgL2Fjb25uZWN0QDcwMmMwMDAwL2FodWIvaTJzQDcwMmQxMzAwIGlzIGJlaGluZCB0aGUgSU9N
TVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9hY29ubmVjdEA3MDJjMDAw
MC9haHViL2kyc0A3MDJkMTMwMA0KKFhFTikgRFQ6ICoqIHRyYW5zbGF0aW9uIGZvciBkZXZpY2Ug
L2Fjb25uZWN0QDcwMmMwMDAwL2FodWIvaTJzQDcwMmQxMzAwICoqDQooWEVOKSBEVDogYnVzIGlz
IGRlZmF1bHQgKG5hPTEsIG5zPTEpIG9uIC9hY29ubmVjdEA3MDJjMDAwMC9haHViDQooWEVOKSBE
VDogdHJhbnNsYXRpbmcgYWRkcmVzczo8Mz4gNzAyZDEzMDA8Mz4NCihYRU4pIERUOiBwYXJlbnQg
YnVzIGlzIGRlZmF1bHQgKG5hPTIsIG5zPTIpIG9uIC9hY29ubmVjdEA3MDJjMDAwMA0KKFhFTikg
RFQ6IHdhbGtpbmcgcmFuZ2VzLi4uDQooWEVOKSBEVDogZGVmYXVsdCBtYXAsIGNwPTcwMmQwMDAw
LCBzPTEwMDAwLCBkYT03MDJkMTMwMA0KKFhFTikgRFQ6IHBhcmVudCB0cmFuc2xhdGlvbiBmb3I6
PDM+IDAwMDAwMDAwPDM+IDcwMmQwMDAwPDM+DQooWEVOKSBEVDogd2l0aCBvZmZzZXQ6IDEzMDAN
CihYRU4pIERUOiBvbmUgbGV2ZWwgdHJhbnNsYXRpb246PDM+IDAwMDAwMDAwPDM+IDcwMmQxMzAw
PDM+DQooWEVOKSBEVDogcGFyZW50IGJ1cyBpcyBkZWZhdWx0IChuYT0yLCBucz0yKSBvbiAvDQoo
WEVOKSBEVDogZW1wdHkgcmFuZ2VzOyAxOjEgdHJhbnNsYXRpb24NCihYRU4pIERUOiBwYXJlbnQg
dHJhbnNsYXRpb24gZm9yOjwzPiAwMDAwMDAwMDwzPiAwMDAwMDAwMDwzPg0KKFhFTikgRFQ6IHdp
dGggb2Zmc2V0OiA3MDJkMTMwMA0KKFhFTikgRFQ6IG9uZSBsZXZlbCB0cmFuc2xhdGlvbjo8Mz4g
MDAwMDAwMDA8Mz4gNzAyZDEzMDA8Mz4NCihYRU4pIERUOiByZWFjaGVkIHJvb3Qgbm9kZQ0KKFhF
TikgICAtIE1NSU86IDAwNzAyZDEzMDAgLSAwMDcwMmQxNDAwIFAyTVR5cGU9NQ0KKFhFTikgaGFu
ZGxlIC9hY29ubmVjdEA3MDJjMDAwMC9haHViL2kyc0A3MDJkMTQwMA0KKFhFTikgZHRfaXJxX251
bWJlcjogZGV2PS9hY29ubmVjdEA3MDJjMDAwMC9haHViL2kyc0A3MDJkMTQwMA0KKFhFTikgL2Fj
b25uZWN0QDcwMmMwMDAwL2FodWIvaTJzQDcwMmQxNDAwIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9
IDENCihYRU4pIENoZWNrIGlmIC9hY29ubmVjdEA3MDJjMDAwMC9haHViL2kyc0A3MDJkMTQwMCBp
cyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0v
YWNvbm5lY3RANzAyYzAwMDAvYWh1Yi9pMnNANzAyZDE0MDANCihYRU4pIERUOiAqKiB0cmFuc2xh
dGlvbiBmb3IgZGV2aWNlIC9hY29ubmVjdEA3MDJjMDAwMC9haHViL2kyc0A3MDJkMTQwMCAqKg0K
KFhFTikgRFQ6IGJ1cyBpcyBkZWZhdWx0IChuYT0xLCBucz0xKSBvbiAvYWNvbm5lY3RANzAyYzAw
MDAvYWh1Yg0KKFhFTikgRFQ6IHRyYW5zbGF0aW5nIGFkZHJlc3M6PDM+IDcwMmQxNDAwPDM+DQoo
WEVOKSBEVDogcGFyZW50IGJ1cyBpcyBkZWZhdWx0IChuYT0yLCBucz0yKSBvbiAvYWNvbm5lY3RA
NzAyYzAwMDANCihYRU4pIERUOiB3YWxraW5nIHJhbmdlcy4uLg0KKFhFTikgRFQ6IGRlZmF1bHQg
bWFwLCBjcD03MDJkMDAwMCwgcz0xMDAwMCwgZGE9NzAyZDE0MDANCihYRU4pIERUOiBwYXJlbnQg
dHJhbnNsYXRpb24gZm9yOjwzPiAwMDAwMDAwMDwzPiA3MDJkMDAwMDwzPg0KKFhFTikgRFQ6IHdp
dGggb2Zmc2V0OiAxNDAwDQooWEVOKSBEVDogb25lIGxldmVsIHRyYW5zbGF0aW9uOjwzPiAwMDAw
MDAwMDwzPiA3MDJkMTQwMDwzPg0KKFhFTikgRFQ6IHBhcmVudCBidXMgaXMgZGVmYXVsdCAobmE9
MiwgbnM9Mikgb24gLw0KKFhFTikgRFQ6IGVtcHR5IHJhbmdlczsgMToxIHRyYW5zbGF0aW9uDQoo
WEVOKSBEVDogcGFyZW50IHRyYW5zbGF0aW9uIGZvcjo8Mz4gMDAwMDAwMDA8Mz4gMDAwMDAwMDA8
Mz4NCihYRU4pIERUOiB3aXRoIG9mZnNldDogNzAyZDE0MDANCihYRU4pIERUOiBvbmUgbGV2ZWwg
dHJhbnNsYXRpb246PDM+IDAwMDAwMDAwPDM+IDcwMmQxNDAwPDM+DQooWEVOKSBEVDogcmVhY2hl
ZCByb290IG5vZGUNCihYRU4pICAgLSBNTUlPOiAwMDcwMmQxNDAwIC0gMDA3MDJkMTUwMCBQMk1U
eXBlPTUNCihYRU4pIGhhbmRsZSAvYWNvbm5lY3RANzAyYzAwMDAvYWh1Yi9hbXhANzAyZDMwMDAN
CihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vYWNvbm5lY3RANzAyYzAwMDAvYWh1Yi9hbXhANzAy
ZDMwMDANCihYRU4pIC9hY29ubmVjdEA3MDJjMDAwMC9haHViL2FteEA3MDJkMzAwMCBwYXNzdGhy
b3VnaCA9IDEgbmFkZHIgPSAxDQooWEVOKSBDaGVjayBpZiAvYWNvbm5lY3RANzAyYzAwMDAvYWh1
Yi9hbXhANzAyZDMwMDAgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9p
cnFfbnVtYmVyOiBkZXY9L2Fjb25uZWN0QDcwMmMwMDAwL2FodWIvYW14QDcwMmQzMDAwDQooWEVO
KSBEVDogKiogdHJhbnNsYXRpb24gZm9yIGRldmljZSAvYWNvbm5lY3RANzAyYzAwMDAvYWh1Yi9h
bXhANzAyZDMwMDAgKioNCihYRU4pIERUOiBidXMgaXMgZGVmYXVsdCAobmE9MSwgbnM9MSkgb24g
L2Fjb25uZWN0QDcwMmMwMDAwL2FodWINCihYRU4pIERUOiB0cmFuc2xhdGluZyBhZGRyZXNzOjwz
PiA3MDJkMzAwMDwzPg0KKFhFTikgRFQ6IHBhcmVudCBidXMgaXMgZGVmYXVsdCAobmE9MiwgbnM9
Mikgb24gL2Fjb25uZWN0QDcwMmMwMDAwDQooWEVOKSBEVDogd2Fsa2luZyByYW5nZXMuLi4NCihY
RU4pIERUOiBkZWZhdWx0IG1hcCwgY3A9NzAyZDAwMDAsIHM9MTAwMDAsIGRhPTcwMmQzMDAwDQoo
WEVOKSBEVDogcGFyZW50IHRyYW5zbGF0aW9uIGZvcjo8Mz4gMDAwMDAwMDA8Mz4gNzAyZDAwMDA8
Mz4NCihYRU4pIERUOiB3aXRoIG9mZnNldDogMzAwMA0KKFhFTikgRFQ6IG9uZSBsZXZlbCB0cmFu
c2xhdGlvbjo8Mz4gMDAwMDAwMDA8Mz4gNzAyZDMwMDA8Mz4NCihYRU4pIERUOiBwYXJlbnQgYnVz
IGlzIGRlZmF1bHQgKG5hPTIsIG5zPTIpIG9uIC8NCihYRU4pIERUOiBlbXB0eSByYW5nZXM7IDE6
MSB0cmFuc2xhdGlvbg0KKFhFTikgRFQ6IHBhcmVudCB0cmFuc2xhdGlvbiBmb3I6PDM+IDAwMDAw
MDAwPDM+IDAwMDAwMDAwPDM+DQooWEVOKSBEVDogd2l0aCBvZmZzZXQ6IDcwMmQzMDAwDQooWEVO
KSBEVDogb25lIGxldmVsIHRyYW5zbGF0aW9uOjwzPiAwMDAwMDAwMDwzPiA3MDJkMzAwMDwzPg0K
KFhFTikgRFQ6IHJlYWNoZWQgcm9vdCBub2RlDQooWEVOKSAgIC0gTU1JTzogMDA3MDJkMzAwMCAt
IDAwNzAyZDMxMDAgUDJNVHlwZT01DQooWEVOKSBoYW5kbGUgL2Fjb25uZWN0QDcwMmMwMDAwL2Fo
dWIvYW14QDcwMmQzMTAwDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2Fjb25uZWN0QDcwMmMw
MDAwL2FodWIvYW14QDcwMmQzMTAwDQooWEVOKSAvYWNvbm5lY3RANzAyYzAwMDAvYWh1Yi9hbXhA
NzAyZDMxMDAgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMQ0KKFhFTikgQ2hlY2sgaWYgL2Fjb25u
ZWN0QDcwMmMwMDAwL2FodWIvYW14QDcwMmQzMTAwIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFk
ZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9hY29ubmVjdEA3MDJjMDAwMC9haHViL2Ft
eEA3MDJkMzEwMA0KKFhFTikgRFQ6ICoqIHRyYW5zbGF0aW9uIGZvciBkZXZpY2UgL2Fjb25uZWN0
QDcwMmMwMDAwL2FodWIvYW14QDcwMmQzMTAwICoqDQooWEVOKSBEVDogYnVzIGlzIGRlZmF1bHQg
KG5hPTEsIG5zPTEpIG9uIC9hY29ubmVjdEA3MDJjMDAwMC9haHViDQooWEVOKSBEVDogdHJhbnNs
YXRpbmcgYWRkcmVzczo8Mz4gNzAyZDMxMDA8Mz4NCihYRU4pIERUOiBwYXJlbnQgYnVzIGlzIGRl
ZmF1bHQgKG5hPTIsIG5zPTIpIG9uIC9hY29ubmVjdEA3MDJjMDAwMA0KKFhFTikgRFQ6IHdhbGtp
bmcgcmFuZ2VzLi4uDQooWEVOKSBEVDogZGVmYXVsdCBtYXAsIGNwPTcwMmQwMDAwLCBzPTEwMDAw
LCBkYT03MDJkMzEwMA0KKFhFTikgRFQ6IHBhcmVudCB0cmFuc2xhdGlvbiBmb3I6PDM+IDAwMDAw
MDAwPDM+IDcwMmQwMDAwPDM+DQooWEVOKSBEVDogd2l0aCBvZmZzZXQ6IDMxMDANCihYRU4pIERU
OiBvbmUgbGV2ZWwgdHJhbnNsYXRpb246PDM+IDAwMDAwMDAwPDM+IDcwMmQzMTAwPDM+DQooWEVO
KSBEVDogcGFyZW50IGJ1cyBpcyBkZWZhdWx0IChuYT0yLCBucz0yKSBvbiAvDQooWEVOKSBEVDog
ZW1wdHkgcmFuZ2VzOyAxOjEgdHJhbnNsYXRpb24NCihYRU4pIERUOiBwYXJlbnQgdHJhbnNsYXRp
b24gZm9yOjwzPiAwMDAwMDAwMDwzPiAwMDAwMDAwMDwzPg0KKFhFTikgRFQ6IHdpdGggb2Zmc2V0
OiA3MDJkMzEwMA0KKFhFTikgRFQ6IG9uZSBsZXZlbCB0cmFuc2xhdGlvbjo8Mz4gMDAwMDAwMDA8
Mz4gNzAyZDMxMDA8Mz4NCihYRU4pIERUOiByZWFjaGVkIHJvb3Qgbm9kZQ0KKFhFTikgICAtIE1N
SU86IDAwNzAyZDMxMDAgLSAwMDcwMmQzMjAwIFAyTVR5cGU9NQ0KKFhFTikgaGFuZGxlIC9hY29u
bmVjdEA3MDJjMDAwMC9haHViL2FkeEA3MDJkMzgwMA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2
PS9hY29ubmVjdEA3MDJjMDAwMC9haHViL2FkeEA3MDJkMzgwMA0KKFhFTikgL2Fjb25uZWN0QDcw
MmMwMDAwL2FodWIvYWR4QDcwMmQzODAwIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDENCihYRU4p
IENoZWNrIGlmIC9hY29ubmVjdEA3MDJjMDAwMC9haHViL2FkeEA3MDJkMzgwMCBpcyBiZWhpbmQg
dGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vYWNvbm5lY3RA
NzAyYzAwMDAvYWh1Yi9hZHhANzAyZDM4MDANCihYRU4pIERUOiAqKiB0cmFuc2xhdGlvbiBmb3Ig
ZGV2aWNlIC9hY29ubmVjdEA3MDJjMDAwMC9haHViL2FkeEA3MDJkMzgwMCAqKg0KKFhFTikgRFQ6
IGJ1cyBpcyBkZWZhdWx0IChuYT0xLCBucz0xKSBvbiAvYWNvbm5lY3RANzAyYzAwMDAvYWh1Yg0K
KFhFTikgRFQ6IHRyYW5zbGF0aW5nIGFkZHJlc3M6PDM+IDcwMmQzODAwPDM+DQooWEVOKSBEVDog
cGFyZW50IGJ1cyBpcyBkZWZhdWx0IChuYT0yLCBucz0yKSBvbiAvYWNvbm5lY3RANzAyYzAwMDAN
CihYRU4pIERUOiB3YWxraW5nIHJhbmdlcy4uLg0KKFhFTikgRFQ6IGRlZmF1bHQgbWFwLCBjcD03
MDJkMDAwMCwgcz0xMDAwMCwgZGE9NzAyZDM4MDANCihYRU4pIERUOiBwYXJlbnQgdHJhbnNsYXRp
b24gZm9yOjwzPiAwMDAwMDAwMDwzPiA3MDJkMDAwMDwzPg0KKFhFTikgRFQ6IHdpdGggb2Zmc2V0
OiAzODAwDQooWEVOKSBEVDogb25lIGxldmVsIHRyYW5zbGF0aW9uOjwzPiAwMDAwMDAwMDwzPiA3
MDJkMzgwMDwzPg0KKFhFTikgRFQ6IHBhcmVudCBidXMgaXMgZGVmYXVsdCAobmE9MiwgbnM9Mikg
b24gLw0KKFhFTikgRFQ6IGVtcHR5IHJhbmdlczsgMToxIHRyYW5zbGF0aW9uDQooWEVOKSBEVDog
cGFyZW50IHRyYW5zbGF0aW9uIGZvcjo8Mz4gMDAwMDAwMDA8Mz4gMDAwMDAwMDA8Mz4NCihYRU4p
IERUOiB3aXRoIG9mZnNldDogNzAyZDM4MDANCihYRU4pIERUOiBvbmUgbGV2ZWwgdHJhbnNsYXRp
b246PDM+IDAwMDAwMDAwPDM+IDcwMmQzODAwPDM+DQooWEVOKSBEVDogcmVhY2hlZCByb290IG5v
ZGUNCihYRU4pICAgLSBNTUlPOiAwMDcwMmQzODAwIC0gMDA3MDJkMzkwMCBQMk1UeXBlPTUNCihY
RU4pIGhhbmRsZSAvYWNvbm5lY3RANzAyYzAwMDAvYWh1Yi9hZHhANzAyZDM5MDANCihYRU4pIGR0
X2lycV9udW1iZXI6IGRldj0vYWNvbm5lY3RANzAyYzAwMDAvYWh1Yi9hZHhANzAyZDM5MDANCihY
RU4pIC9hY29ubmVjdEA3MDJjMDAwMC9haHViL2FkeEA3MDJkMzkwMCBwYXNzdGhyb3VnaCA9IDEg
bmFkZHIgPSAxDQooWEVOKSBDaGVjayBpZiAvYWNvbm5lY3RANzAyYzAwMDAvYWh1Yi9hZHhANzAy
ZDM5MDAgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVy
OiBkZXY9L2Fjb25uZWN0QDcwMmMwMDAwL2FodWIvYWR4QDcwMmQzOTAwDQooWEVOKSBEVDogKiog
dHJhbnNsYXRpb24gZm9yIGRldmljZSAvYWNvbm5lY3RANzAyYzAwMDAvYWh1Yi9hZHhANzAyZDM5
MDAgKioNCihYRU4pIERUOiBidXMgaXMgZGVmYXVsdCAobmE9MSwgbnM9MSkgb24gL2Fjb25uZWN0
QDcwMmMwMDAwL2FodWINCihYRU4pIERUOiB0cmFuc2xhdGluZyBhZGRyZXNzOjwzPiA3MDJkMzkw
MDwzPg0KKFhFTikgRFQ6IHBhcmVudCBidXMgaXMgZGVmYXVsdCAobmE9MiwgbnM9Mikgb24gL2Fj
b25uZWN0QDcwMmMwMDAwDQooWEVOKSBEVDogd2Fsa2luZyByYW5nZXMuLi4NCihYRU4pIERUOiBk
ZWZhdWx0IG1hcCwgY3A9NzAyZDAwMDAsIHM9MTAwMDAsIGRhPTcwMmQzOTAwDQooWEVOKSBEVDog
cGFyZW50IHRyYW5zbGF0aW9uIGZvcjo8Mz4gMDAwMDAwMDA8Mz4gNzAyZDAwMDA8Mz4NCihYRU4p
IERUOiB3aXRoIG9mZnNldDogMzkwMA0KKFhFTikgRFQ6IG9uZSBsZXZlbCB0cmFuc2xhdGlvbjo8
Mz4gMDAwMDAwMDA8Mz4gNzAyZDM5MDA8Mz4NCihYRU4pIERUOiBwYXJlbnQgYnVzIGlzIGRlZmF1
bHQgKG5hPTIsIG5zPTIpIG9uIC8NCihYRU4pIERUOiBlbXB0eSByYW5nZXM7IDE6MSB0cmFuc2xh
dGlvbg0KKFhFTikgRFQ6IHBhcmVudCB0cmFuc2xhdGlvbiBmb3I6PDM+IDAwMDAwMDAwPDM+IDAw
MDAwMDAwPDM+DQooWEVOKSBEVDogd2l0aCBvZmZzZXQ6IDcwMmQzOTAwDQooWEVOKSBEVDogb25l
IGxldmVsIHRyYW5zbGF0aW9uOjwzPiAwMDAwMDAwMDwzPiA3MDJkMzkwMDwzPg0KKFhFTikgRFQ6
IHJlYWNoZWQgcm9vdCBub2RlDQooWEVOKSAgIC0gTU1JTzogMDA3MDJkMzkwMCAtIDAwNzAyZDNh
MDAgUDJNVHlwZT01DQooWEVOKSBoYW5kbGUgL2Fjb25uZWN0QDcwMmMwMDAwL2FodWIvZG1pY0A3
MDJkNDAwMA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9hY29ubmVjdEA3MDJjMDAwMC9haHVi
L2RtaWNANzAyZDQwMDANCihYRU4pIC9hY29ubmVjdEA3MDJjMDAwMC9haHViL2RtaWNANzAyZDQw
MDAgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMQ0KKFhFTikgQ2hlY2sgaWYgL2Fjb25uZWN0QDcw
MmMwMDAwL2FodWIvZG1pY0A3MDJkNDAwMCBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQN
CihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vYWNvbm5lY3RANzAyYzAwMDAvYWh1Yi9kbWljQDcw
MmQ0MDAwDQooWEVOKSBEVDogKiogdHJhbnNsYXRpb24gZm9yIGRldmljZSAvYWNvbm5lY3RANzAy
YzAwMDAvYWh1Yi9kbWljQDcwMmQ0MDAwICoqDQooWEVOKSBEVDogYnVzIGlzIGRlZmF1bHQgKG5h
PTEsIG5zPTEpIG9uIC9hY29ubmVjdEA3MDJjMDAwMC9haHViDQooWEVOKSBEVDogdHJhbnNsYXRp
bmcgYWRkcmVzczo8Mz4gNzAyZDQwMDA8Mz4NCihYRU4pIERUOiBwYXJlbnQgYnVzIGlzIGRlZmF1
bHQgKG5hPTIsIG5zPTIpIG9uIC9hY29ubmVjdEA3MDJjMDAwMA0KKFhFTikgRFQ6IHdhbGtpbmcg
cmFuZ2VzLi4uDQooWEVOKSBEVDogZGVmYXVsdCBtYXAsIGNwPTcwMmQwMDAwLCBzPTEwMDAwLCBk
YT03MDJkNDAwMA0KKFhFTikgRFQ6IHBhcmVudCB0cmFuc2xhdGlvbiBmb3I6PDM+IDAwMDAwMDAw
PDM+IDcwMmQwMDAwPDM+DQooWEVOKSBEVDogd2l0aCBvZmZzZXQ6IDQwMDANCihYRU4pIERUOiBv
bmUgbGV2ZWwgdHJhbnNsYXRpb246PDM+IDAwMDAwMDAwPDM+IDcwMmQ0MDAwPDM+DQooWEVOKSBE
VDogcGFyZW50IGJ1cyBpcyBkZWZhdWx0IChuYT0yLCBucz0yKSBvbiAvDQooWEVOKSBEVDogZW1w
dHkgcmFuZ2VzOyAxOjEgdHJhbnNsYXRpb24NCihYRU4pIERUOiBwYXJlbnQgdHJhbnNsYXRpb24g
Zm9yOjwzPiAwMDAwMDAwMDwzPiAwMDAwMDAwMDwzPg0KKFhFTikgRFQ6IHdpdGggb2Zmc2V0OiA3
MDJkNDAwMA0KKFhFTikgRFQ6IG9uZSBsZXZlbCB0cmFuc2xhdGlvbjo8Mz4gMDAwMDAwMDA8Mz4g
NzAyZDQwMDA8Mz4NCihYRU4pIERUOiByZWFjaGVkIHJvb3Qgbm9kZQ0KKFhFTikgICAtIE1NSU86
IDAwNzAyZDQwMDAgLSAwMDcwMmQ0MTAwIFAyTVR5cGU9NQ0KKFhFTikgaGFuZGxlIC9hY29ubmVj
dEA3MDJjMDAwMC9haHViL2RtaWNANzAyZDQxMDANCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0v
YWNvbm5lY3RANzAyYzAwMDAvYWh1Yi9kbWljQDcwMmQ0MTAwDQooWEVOKSAvYWNvbm5lY3RANzAy
YzAwMDAvYWh1Yi9kbWljQDcwMmQ0MTAwIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDENCihYRU4p
IENoZWNrIGlmIC9hY29ubmVjdEA3MDJjMDAwMC9haHViL2RtaWNANzAyZDQxMDAgaXMgYmVoaW5k
IHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2Fjb25uZWN0
QDcwMmMwMDAwL2FodWIvZG1pY0A3MDJkNDEwMA0KKFhFTikgRFQ6ICoqIHRyYW5zbGF0aW9uIGZv
ciBkZXZpY2UgL2Fjb25uZWN0QDcwMmMwMDAwL2FodWIvZG1pY0A3MDJkNDEwMCAqKg0KKFhFTikg
RFQ6IGJ1cyBpcyBkZWZhdWx0IChuYT0xLCBucz0xKSBvbiAvYWNvbm5lY3RANzAyYzAwMDAvYWh1
Yg0KKFhFTikgRFQ6IHRyYW5zbGF0aW5nIGFkZHJlc3M6PDM+IDcwMmQ0MTAwPDM+DQooWEVOKSBE
VDogcGFyZW50IGJ1cyBpcyBkZWZhdWx0IChuYT0yLCBucz0yKSBvbiAvYWNvbm5lY3RANzAyYzAw
MDANCihYRU4pIERUOiB3YWxraW5nIHJhbmdlcy4uLg0KKFhFTikgRFQ6IGRlZmF1bHQgbWFwLCBj
cD03MDJkMDAwMCwgcz0xMDAwMCwgZGE9NzAyZDQxMDANCihYRU4pIERUOiBwYXJlbnQgdHJhbnNs
YXRpb24gZm9yOjwzPiAwMDAwMDAwMDwzPiA3MDJkMDAwMDwzPg0KKFhFTikgRFQ6IHdpdGggb2Zm
c2V0OiA0MTAwDQooWEVOKSBEVDogb25lIGxldmVsIHRyYW5zbGF0aW9uOjwzPiAwMDAwMDAwMDwz
PiA3MDJkNDEwMDwzPg0KKFhFTikgRFQ6IHBhcmVudCBidXMgaXMgZGVmYXVsdCAobmE9MiwgbnM9
Mikgb24gLw0KKFhFTikgRFQ6IGVtcHR5IHJhbmdlczsgMToxIHRyYW5zbGF0aW9uDQooWEVOKSBE
VDogcGFyZW50IHRyYW5zbGF0aW9uIGZvcjo8Mz4gMDAwMDAwMDA8Mz4gMDAwMDAwMDA8Mz4NCihY
RU4pIERUOiB3aXRoIG9mZnNldDogNzAyZDQxMDANCihYRU4pIERUOiBvbmUgbGV2ZWwgdHJhbnNs
YXRpb246PDM+IDAwMDAwMDAwPDM+IDcwMmQ0MTAwPDM+DQooWEVOKSBEVDogcmVhY2hlZCByb290
IG5vZGUNCihYRU4pICAgLSBNTUlPOiAwMDcwMmQ0MTAwIC0gMDA3MDJkNDIwMCBQMk1UeXBlPTUN
CihYRU4pIGhhbmRsZSAvYWNvbm5lY3RANzAyYzAwMDAvYWh1Yi9kbWljQDcwMmQ0MjAwDQooWEVO
KSBkdF9pcnFfbnVtYmVyOiBkZXY9L2Fjb25uZWN0QDcwMmMwMDAwL2FodWIvZG1pY0A3MDJkNDIw
MA0KKFhFTikgL2Fjb25uZWN0QDcwMmMwMDAwL2FodWIvZG1pY0A3MDJkNDIwMCBwYXNzdGhyb3Vn
aCA9IDEgbmFkZHIgPSAxDQooWEVOKSBDaGVjayBpZiAvYWNvbm5lY3RANzAyYzAwMDAvYWh1Yi9k
bWljQDcwMmQ0MjAwIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJx
X251bWJlcjogZGV2PS9hY29ubmVjdEA3MDJjMDAwMC9haHViL2RtaWNANzAyZDQyMDANCihYRU4p
IERUOiAqKiB0cmFuc2xhdGlvbiBmb3IgZGV2aWNlIC9hY29ubmVjdEA3MDJjMDAwMC9haHViL2Rt
aWNANzAyZDQyMDAgKioNCihYRU4pIERUOiBidXMgaXMgZGVmYXVsdCAobmE9MSwgbnM9MSkgb24g
L2Fjb25uZWN0QDcwMmMwMDAwL2FodWINCihYRU4pIERUOiB0cmFuc2xhdGluZyBhZGRyZXNzOjwz
PiA3MDJkNDIwMDwzPg0KKFhFTikgRFQ6IHBhcmVudCBidXMgaXMgZGVmYXVsdCAobmE9MiwgbnM9
Mikgb24gL2Fjb25uZWN0QDcwMmMwMDAwDQooWEVOKSBEVDogd2Fsa2luZyByYW5nZXMuLi4NCihY
RU4pIERUOiBkZWZhdWx0IG1hcCwgY3A9NzAyZDAwMDAsIHM9MTAwMDAsIGRhPTcwMmQ0MjAwDQoo
WEVOKSBEVDogcGFyZW50IHRyYW5zbGF0aW9uIGZvcjo8Mz4gMDAwMDAwMDA8Mz4gNzAyZDAwMDA8
Mz4NCihYRU4pIERUOiB3aXRoIG9mZnNldDogNDIwMA0KKFhFTikgRFQ6IG9uZSBsZXZlbCB0cmFu
c2xhdGlvbjo8Mz4gMDAwMDAwMDA8Mz4gNzAyZDQyMDA8Mz4NCihYRU4pIERUOiBwYXJlbnQgYnVz
IGlzIGRlZmF1bHQgKG5hPTIsIG5zPTIpIG9uIC8NCihYRU4pIERUOiBlbXB0eSByYW5nZXM7IDE6
MSB0cmFuc2xhdGlvbg0KKFhFTikgRFQ6IHBhcmVudCB0cmFuc2xhdGlvbiBmb3I6PDM+IDAwMDAw
MDAwPDM+IDAwMDAwMDAwPDM+DQooWEVOKSBEVDogd2l0aCBvZmZzZXQ6IDcwMmQ0MjAwDQooWEVO
KSBEVDogb25lIGxldmVsIHRyYW5zbGF0aW9uOjwzPiAwMDAwMDAwMDwzPiA3MDJkNDIwMDwzPg0K
KFhFTikgRFQ6IHJlYWNoZWQgcm9vdCBub2RlDQooWEVOKSAgIC0gTU1JTzogMDA3MDJkNDIwMCAt
IDAwNzAyZDQzMDAgUDJNVHlwZT01DQooWEVOKSBoYW5kbGUgL2Fjb25uZWN0QDcwMmMwMDAwL2Fo
dWIvYWZjQDcwMmQ3MDAwDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2Fjb25uZWN0QDcwMmMw
MDAwL2FodWIvYWZjQDcwMmQ3MDAwDQooWEVOKSAvYWNvbm5lY3RANzAyYzAwMDAvYWh1Yi9hZmNA
NzAyZDcwMDAgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMQ0KKFhFTikgQ2hlY2sgaWYgL2Fjb25u
ZWN0QDcwMmMwMDAwL2FodWIvYWZjQDcwMmQ3MDAwIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFk
ZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9hY29ubmVjdEA3MDJjMDAwMC9haHViL2Fm
Y0A3MDJkNzAwMA0KKFhFTikgRFQ6ICoqIHRyYW5zbGF0aW9uIGZvciBkZXZpY2UgL2Fjb25uZWN0
QDcwMmMwMDAwL2FodWIvYWZjQDcwMmQ3MDAwICoqDQooWEVOKSBEVDogYnVzIGlzIGRlZmF1bHQg
KG5hPTEsIG5zPTEpIG9uIC9hY29ubmVjdEA3MDJjMDAwMC9haHViDQooWEVOKSBEVDogdHJhbnNs
YXRpbmcgYWRkcmVzczo8Mz4gNzAyZDcwMDA8Mz4NCihYRU4pIERUOiBwYXJlbnQgYnVzIGlzIGRl
ZmF1bHQgKG5hPTIsIG5zPTIpIG9uIC9hY29ubmVjdEA3MDJjMDAwMA0KKFhFTikgRFQ6IHdhbGtp
bmcgcmFuZ2VzLi4uDQooWEVOKSBEVDogZGVmYXVsdCBtYXAsIGNwPTcwMmQwMDAwLCBzPTEwMDAw
LCBkYT03MDJkNzAwMA0KKFhFTikgRFQ6IHBhcmVudCB0cmFuc2xhdGlvbiBmb3I6PDM+IDAwMDAw
MDAwPDM+IDcwMmQwMDAwPDM+DQooWEVOKSBEVDogd2l0aCBvZmZzZXQ6IDcwMDANCihYRU4pIERU
OiBvbmUgbGV2ZWwgdHJhbnNsYXRpb246PDM+IDAwMDAwMDAwPDM+IDcwMmQ3MDAwPDM+DQooWEVO
KSBEVDogcGFyZW50IGJ1cyBpcyBkZWZhdWx0IChuYT0yLCBucz0yKSBvbiAvDQooWEVOKSBEVDog
ZW1wdHkgcmFuZ2VzOyAxOjEgdHJhbnNsYXRpb24NCihYRU4pIERUOiBwYXJlbnQgdHJhbnNsYXRp
b24gZm9yOjwzPiAwMDAwMDAwMDwzPiAwMDAwMDAwMDwzPg0KKFhFTikgRFQ6IHdpdGggb2Zmc2V0
OiA3MDJkNzAwMA0KKFhFTikgRFQ6IG9uZSBsZXZlbCB0cmFuc2xhdGlvbjo8Mz4gMDAwMDAwMDA8
Mz4gNzAyZDcwMDA8Mz4NCihYRU4pIERUOiByZWFjaGVkIHJvb3Qgbm9kZQ0KKFhFTikgICAtIE1N
SU86IDAwNzAyZDcwMDAgLSAwMDcwMmQ3MTAwIFAyTVR5cGU9NQ0KKFhFTikgaGFuZGxlIC9hY29u
bmVjdEA3MDJjMDAwMC9haHViL2FmY0A3MDJkNzEwMA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2
PS9hY29ubmVjdEA3MDJjMDAwMC9haHViL2FmY0A3MDJkNzEwMA0KKFhFTikgL2Fjb25uZWN0QDcw
MmMwMDAwL2FodWIvYWZjQDcwMmQ3MTAwIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDENCihYRU4p
IENoZWNrIGlmIC9hY29ubmVjdEA3MDJjMDAwMC9haHViL2FmY0A3MDJkNzEwMCBpcyBiZWhpbmQg
dGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vYWNvbm5lY3RA
NzAyYzAwMDAvYWh1Yi9hZmNANzAyZDcxMDANCihYRU4pIERUOiAqKiB0cmFuc2xhdGlvbiBmb3Ig
ZGV2aWNlIC9hY29ubmVjdEA3MDJjMDAwMC9haHViL2FmY0A3MDJkNzEwMCAqKg0KKFhFTikgRFQ6
IGJ1cyBpcyBkZWZhdWx0IChuYT0xLCBucz0xKSBvbiAvYWNvbm5lY3RANzAyYzAwMDAvYWh1Yg0K
KFhFTikgRFQ6IHRyYW5zbGF0aW5nIGFkZHJlc3M6PDM+IDcwMmQ3MTAwPDM+DQooWEVOKSBEVDog
cGFyZW50IGJ1cyBpcyBkZWZhdWx0IChuYT0yLCBucz0yKSBvbiAvYWNvbm5lY3RANzAyYzAwMDAN
CihYRU4pIERUOiB3YWxraW5nIHJhbmdlcy4uLg0KKFhFTikgRFQ6IGRlZmF1bHQgbWFwLCBjcD03
MDJkMDAwMCwgcz0xMDAwMCwgZGE9NzAyZDcxMDANCihYRU4pIERUOiBwYXJlbnQgdHJhbnNsYXRp
b24gZm9yOjwzPiAwMDAwMDAwMDwzPiA3MDJkMDAwMDwzPg0KKFhFTikgRFQ6IHdpdGggb2Zmc2V0
OiA3MTAwDQooWEVOKSBEVDogb25lIGxldmVsIHRyYW5zbGF0aW9uOjwzPiAwMDAwMDAwMDwzPiA3
MDJkNzEwMDwzPg0KKFhFTikgRFQ6IHBhcmVudCBidXMgaXMgZGVmYXVsdCAobmE9MiwgbnM9Mikg
b24gLw0KKFhFTikgRFQ6IGVtcHR5IHJhbmdlczsgMToxIHRyYW5zbGF0aW9uDQooWEVOKSBEVDog
cGFyZW50IHRyYW5zbGF0aW9uIGZvcjo8Mz4gMDAwMDAwMDA8Mz4gMDAwMDAwMDA8Mz4NCihYRU4p
IERUOiB3aXRoIG9mZnNldDogNzAyZDcxMDANCihYRU4pIERUOiBvbmUgbGV2ZWwgdHJhbnNsYXRp
b246PDM+IDAwMDAwMDAwPDM+IDcwMmQ3MTAwPDM+DQooWEVOKSBEVDogcmVhY2hlZCByb290IG5v
ZGUNCihYRU4pICAgLSBNTUlPOiAwMDcwMmQ3MTAwIC0gMDA3MDJkNzIwMCBQMk1UeXBlPTUNCihY
RU4pIGhhbmRsZSAvYWNvbm5lY3RANzAyYzAwMDAvYWh1Yi9hZmNANzAyZDcyMDANCihYRU4pIGR0
X2lycV9udW1iZXI6IGRldj0vYWNvbm5lY3RANzAyYzAwMDAvYWh1Yi9hZmNANzAyZDcyMDANCihY
RU4pIC9hY29ubmVjdEA3MDJjMDAwMC9haHViL2FmY0A3MDJkNzIwMCBwYXNzdGhyb3VnaCA9IDEg
bmFkZHIgPSAxDQooWEVOKSBDaGVjayBpZiAvYWNvbm5lY3RANzAyYzAwMDAvYWh1Yi9hZmNANzAy
ZDcyMDAgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVy
OiBkZXY9L2Fjb25uZWN0QDcwMmMwMDAwL2FodWIvYWZjQDcwMmQ3MjAwDQooWEVOKSBEVDogKiog
dHJhbnNsYXRpb24gZm9yIGRldmljZSAvYWNvbm5lY3RANzAyYzAwMDAvYWh1Yi9hZmNANzAyZDcy
MDAgKioNCihYRU4pIERUOiBidXMgaXMgZGVmYXVsdCAobmE9MSwgbnM9MSkgb24gL2Fjb25uZWN0
QDcwMmMwMDAwL2FodWINCihYRU4pIERUOiB0cmFuc2xhdGluZyBhZGRyZXNzOjwzPiA3MDJkNzIw
MDwzPg0KKFhFTikgRFQ6IHBhcmVudCBidXMgaXMgZGVmYXVsdCAobmE9MiwgbnM9Mikgb24gL2Fj
b25uZWN0QDcwMmMwMDAwDQooWEVOKSBEVDogd2Fsa2luZyByYW5nZXMuLi4NCihYRU4pIERUOiBk
ZWZhdWx0IG1hcCwgY3A9NzAyZDAwMDAsIHM9MTAwMDAsIGRhPTcwMmQ3MjAwDQooWEVOKSBEVDog
cGFyZW50IHRyYW5zbGF0aW9uIGZvcjo8Mz4gMDAwMDAwMDA8Mz4gNzAyZDAwMDA8Mz4NCihYRU4p
IERUOiB3aXRoIG9mZnNldDogNzIwMA0KKFhFTikgRFQ6IG9uZSBsZXZlbCB0cmFuc2xhdGlvbjo8
Mz4gMDAwMDAwMDA8Mz4gNzAyZDcyMDA8Mz4NCihYRU4pIERUOiBwYXJlbnQgYnVzIGlzIGRlZmF1
bHQgKG5hPTIsIG5zPTIpIG9uIC8NCihYRU4pIERUOiBlbXB0eSByYW5nZXM7IDE6MSB0cmFuc2xh
dGlvbg0KKFhFTikgRFQ6IHBhcmVudCB0cmFuc2xhdGlvbiBmb3I6PDM+IDAwMDAwMDAwPDM+IDAw
MDAwMDAwPDM+DQooWEVOKSBEVDogd2l0aCBvZmZzZXQ6IDcwMmQ3MjAwDQooWEVOKSBEVDogb25l
IGxldmVsIHRyYW5zbGF0aW9uOjwzPiAwMDAwMDAwMDwzPiA3MDJkNzIwMDwzPg0KKFhFTikgRFQ6
IHJlYWNoZWQgcm9vdCBub2RlDQooWEVOKSAgIC0gTU1JTzogMDA3MDJkNzIwMCAtIDAwNzAyZDcz
MDAgUDJNVHlwZT01DQooWEVOKSBoYW5kbGUgL2Fjb25uZWN0QDcwMmMwMDAwL2FodWIvYWZjQDcw
MmQ3MzAwDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2Fjb25uZWN0QDcwMmMwMDAwL2FodWIv
YWZjQDcwMmQ3MzAwDQooWEVOKSAvYWNvbm5lY3RANzAyYzAwMDAvYWh1Yi9hZmNANzAyZDczMDAg
cGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMQ0KKFhFTikgQ2hlY2sgaWYgL2Fjb25uZWN0QDcwMmMw
MDAwL2FodWIvYWZjQDcwMmQ3MzAwIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhF
TikgZHRfaXJxX251bWJlcjogZGV2PS9hY29ubmVjdEA3MDJjMDAwMC9haHViL2FmY0A3MDJkNzMw
MA0KKFhFTikgRFQ6ICoqIHRyYW5zbGF0aW9uIGZvciBkZXZpY2UgL2Fjb25uZWN0QDcwMmMwMDAw
L2FodWIvYWZjQDcwMmQ3MzAwICoqDQooWEVOKSBEVDogYnVzIGlzIGRlZmF1bHQgKG5hPTEsIG5z
PTEpIG9uIC9hY29ubmVjdEA3MDJjMDAwMC9haHViDQooWEVOKSBEVDogdHJhbnNsYXRpbmcgYWRk
cmVzczo8Mz4gNzAyZDczMDA8Mz4NCihYRU4pIERUOiBwYXJlbnQgYnVzIGlzIGRlZmF1bHQgKG5h
PTIsIG5zPTIpIG9uIC9hY29ubmVjdEA3MDJjMDAwMA0KKFhFTikgRFQ6IHdhbGtpbmcgcmFuZ2Vz
Li4uDQooWEVOKSBEVDogZGVmYXVsdCBtYXAsIGNwPTcwMmQwMDAwLCBzPTEwMDAwLCBkYT03MDJk
NzMwMA0KKFhFTikgRFQ6IHBhcmVudCB0cmFuc2xhdGlvbiBmb3I6PDM+IDAwMDAwMDAwPDM+IDcw
MmQwMDAwPDM+DQooWEVOKSBEVDogd2l0aCBvZmZzZXQ6IDczMDANCihYRU4pIERUOiBvbmUgbGV2
ZWwgdHJhbnNsYXRpb246PDM+IDAwMDAwMDAwPDM+IDcwMmQ3MzAwPDM+DQooWEVOKSBEVDogcGFy
ZW50IGJ1cyBpcyBkZWZhdWx0IChuYT0yLCBucz0yKSBvbiAvDQooWEVOKSBEVDogZW1wdHkgcmFu
Z2VzOyAxOjEgdHJhbnNsYXRpb24NCihYRU4pIERUOiBwYXJlbnQgdHJhbnNsYXRpb24gZm9yOjwz
PiAwMDAwMDAwMDwzPiAwMDAwMDAwMDwzPg0KKFhFTikgRFQ6IHdpdGggb2Zmc2V0OiA3MDJkNzMw
MA0KKFhFTikgRFQ6IG9uZSBsZXZlbCB0cmFuc2xhdGlvbjo8Mz4gMDAwMDAwMDA8Mz4gNzAyZDcz
MDA8Mz4NCihYRU4pIERUOiByZWFjaGVkIHJvb3Qgbm9kZQ0KKFhFTikgICAtIE1NSU86IDAwNzAy
ZDczMDAgLSAwMDcwMmQ3NDAwIFAyTVR5cGU9NQ0KKFhFTikgaGFuZGxlIC9hY29ubmVjdEA3MDJj
MDAwMC9haHViL2FmY0A3MDJkNzQwMA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9hY29ubmVj
dEA3MDJjMDAwMC9haHViL2FmY0A3MDJkNzQwMA0KKFhFTikgL2Fjb25uZWN0QDcwMmMwMDAwL2Fo
dWIvYWZjQDcwMmQ3NDAwIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDENCihYRU4pIENoZWNrIGlm
IC9hY29ubmVjdEA3MDJjMDAwMC9haHViL2FmY0A3MDJkNzQwMCBpcyBiZWhpbmQgdGhlIElPTU1V
IGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vYWNvbm5lY3RANzAyYzAwMDAv
YWh1Yi9hZmNANzAyZDc0MDANCihYRU4pIERUOiAqKiB0cmFuc2xhdGlvbiBmb3IgZGV2aWNlIC9h
Y29ubmVjdEA3MDJjMDAwMC9haHViL2FmY0A3MDJkNzQwMCAqKg0KKFhFTikgRFQ6IGJ1cyBpcyBk
ZWZhdWx0IChuYT0xLCBucz0xKSBvbiAvYWNvbm5lY3RANzAyYzAwMDAvYWh1Yg0KKFhFTikgRFQ6
IHRyYW5zbGF0aW5nIGFkZHJlc3M6PDM+IDcwMmQ3NDAwPDM+DQooWEVOKSBEVDogcGFyZW50IGJ1
cyBpcyBkZWZhdWx0IChuYT0yLCBucz0yKSBvbiAvYWNvbm5lY3RANzAyYzAwMDANCihYRU4pIERU
OiB3YWxraW5nIHJhbmdlcy4uLg0KKFhFTikgRFQ6IGRlZmF1bHQgbWFwLCBjcD03MDJkMDAwMCwg
cz0xMDAwMCwgZGE9NzAyZDc0MDANCihYRU4pIERUOiBwYXJlbnQgdHJhbnNsYXRpb24gZm9yOjwz
PiAwMDAwMDAwMDwzPiA3MDJkMDAwMDwzPg0KKFhFTikgRFQ6IHdpdGggb2Zmc2V0OiA3NDAwDQoo
WEVOKSBEVDogb25lIGxldmVsIHRyYW5zbGF0aW9uOjwzPiAwMDAwMDAwMDwzPiA3MDJkNzQwMDwz
Pg0KKFhFTikgRFQ6IHBhcmVudCBidXMgaXMgZGVmYXVsdCAobmE9MiwgbnM9Mikgb24gLw0KKFhF
TikgRFQ6IGVtcHR5IHJhbmdlczsgMToxIHRyYW5zbGF0aW9uDQooWEVOKSBEVDogcGFyZW50IHRy
YW5zbGF0aW9uIGZvcjo8Mz4gMDAwMDAwMDA8Mz4gMDAwMDAwMDA8Mz4NCihYRU4pIERUOiB3aXRo
IG9mZnNldDogNzAyZDc0MDANCihYRU4pIERUOiBvbmUgbGV2ZWwgdHJhbnNsYXRpb246PDM+IDAw
MDAwMDAwPDM+IDcwMmQ3NDAwPDM+DQooWEVOKSBEVDogcmVhY2hlZCByb290IG5vZGUNCihYRU4p
ICAgLSBNTUlPOiAwMDcwMmQ3NDAwIC0gMDA3MDJkNzUwMCBQMk1UeXBlPTUNCihYRU4pIGhhbmRs
ZSAvYWNvbm5lY3RANzAyYzAwMDAvYWh1Yi9hZmNANzAyZDc1MDANCihYRU4pIGR0X2lycV9udW1i
ZXI6IGRldj0vYWNvbm5lY3RANzAyYzAwMDAvYWh1Yi9hZmNANzAyZDc1MDANCihYRU4pIC9hY29u
bmVjdEA3MDJjMDAwMC9haHViL2FmY0A3MDJkNzUwMCBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAx
DQooWEVOKSBDaGVjayBpZiAvYWNvbm5lY3RANzAyYzAwMDAvYWh1Yi9hZmNANzAyZDc1MDAgaXMg
YmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2Fj
b25uZWN0QDcwMmMwMDAwL2FodWIvYWZjQDcwMmQ3NTAwDQooWEVOKSBEVDogKiogdHJhbnNsYXRp
b24gZm9yIGRldmljZSAvYWNvbm5lY3RANzAyYzAwMDAvYWh1Yi9hZmNANzAyZDc1MDAgKioNCihY
RU4pIERUOiBidXMgaXMgZGVmYXVsdCAobmE9MSwgbnM9MSkgb24gL2Fjb25uZWN0QDcwMmMwMDAw
L2FodWINCihYRU4pIERUOiB0cmFuc2xhdGluZyBhZGRyZXNzOjwzPiA3MDJkNzUwMDwzPg0KKFhF
TikgRFQ6IHBhcmVudCBidXMgaXMgZGVmYXVsdCAobmE9MiwgbnM9Mikgb24gL2Fjb25uZWN0QDcw
MmMwMDAwDQooWEVOKSBEVDogd2Fsa2luZyByYW5nZXMuLi4NCihYRU4pIERUOiBkZWZhdWx0IG1h
cCwgY3A9NzAyZDAwMDAsIHM9MTAwMDAsIGRhPTcwMmQ3NTAwDQooWEVOKSBEVDogcGFyZW50IHRy
YW5zbGF0aW9uIGZvcjo8Mz4gMDAwMDAwMDA8Mz4gNzAyZDAwMDA8Mz4NCihYRU4pIERUOiB3aXRo
IG9mZnNldDogNzUwMA0KKFhFTikgRFQ6IG9uZSBsZXZlbCB0cmFuc2xhdGlvbjo8Mz4gMDAwMDAw
MDA8Mz4gNzAyZDc1MDA8Mz4NCihYRU4pIERUOiBwYXJlbnQgYnVzIGlzIGRlZmF1bHQgKG5hPTIs
IG5zPTIpIG9uIC8NCihYRU4pIERUOiBlbXB0eSByYW5nZXM7IDE6MSB0cmFuc2xhdGlvbg0KKFhF
TikgRFQ6IHBhcmVudCB0cmFuc2xhdGlvbiBmb3I6PDM+IDAwMDAwMDAwPDM+IDAwMDAwMDAwPDM+
DQooWEVOKSBEVDogd2l0aCBvZmZzZXQ6IDcwMmQ3NTAwDQooWEVOKSBEVDogb25lIGxldmVsIHRy
YW5zbGF0aW9uOjwzPiAwMDAwMDAwMDwzPiA3MDJkNzUwMDwzPg0KKFhFTikgRFQ6IHJlYWNoZWQg
cm9vdCBub2RlDQooWEVOKSAgIC0gTU1JTzogMDA3MDJkNzUwMCAtIDAwNzAyZDc2MDAgUDJNVHlw
ZT01DQooWEVOKSBoYW5kbGUgL2Fjb25uZWN0QDcwMmMwMDAwL2FodWIvbXZjQDcwMmRhMDAwDQoo
WEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2Fjb25uZWN0QDcwMmMwMDAwL2FodWIvbXZjQDcwMmRh
MDAwDQooWEVOKSAvYWNvbm5lY3RANzAyYzAwMDAvYWh1Yi9tdmNANzAyZGEwMDAgcGFzc3Rocm91
Z2ggPSAxIG5hZGRyID0gMQ0KKFhFTikgQ2hlY2sgaWYgL2Fjb25uZWN0QDcwMmMwMDAwL2FodWIv
bXZjQDcwMmRhMDAwIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJx
X251bWJlcjogZGV2PS9hY29ubmVjdEA3MDJjMDAwMC9haHViL212Y0A3MDJkYTAwMA0KKFhFTikg
RFQ6ICoqIHRyYW5zbGF0aW9uIGZvciBkZXZpY2UgL2Fjb25uZWN0QDcwMmMwMDAwL2FodWIvbXZj
QDcwMmRhMDAwICoqDQooWEVOKSBEVDogYnVzIGlzIGRlZmF1bHQgKG5hPTEsIG5zPTEpIG9uIC9h
Y29ubmVjdEA3MDJjMDAwMC9haHViDQooWEVOKSBEVDogdHJhbnNsYXRpbmcgYWRkcmVzczo8Mz4g
NzAyZGEwMDA8Mz4NCihYRU4pIERUOiBwYXJlbnQgYnVzIGlzIGRlZmF1bHQgKG5hPTIsIG5zPTIp
IG9uIC9hY29ubmVjdEA3MDJjMDAwMA0KKFhFTikgRFQ6IHdhbGtpbmcgcmFuZ2VzLi4uDQooWEVO
KSBEVDogZGVmYXVsdCBtYXAsIGNwPTcwMmQwMDAwLCBzPTEwMDAwLCBkYT03MDJkYTAwMA0KKFhF
TikgRFQ6IHBhcmVudCB0cmFuc2xhdGlvbiBmb3I6PDM+IDAwMDAwMDAwPDM+IDcwMmQwMDAwPDM+
DQooWEVOKSBEVDogd2l0aCBvZmZzZXQ6IGEwMDANCihYRU4pIERUOiBvbmUgbGV2ZWwgdHJhbnNs
YXRpb246PDM+IDAwMDAwMDAwPDM+IDcwMmRhMDAwPDM+DQooWEVOKSBEVDogcGFyZW50IGJ1cyBp
cyBkZWZhdWx0IChuYT0yLCBucz0yKSBvbiAvDQooWEVOKSBEVDogZW1wdHkgcmFuZ2VzOyAxOjEg
dHJhbnNsYXRpb24NCihYRU4pIERUOiBwYXJlbnQgdHJhbnNsYXRpb24gZm9yOjwzPiAwMDAwMDAw
MDwzPiAwMDAwMDAwMDwzPg0KKFhFTikgRFQ6IHdpdGggb2Zmc2V0OiA3MDJkYTAwMA0KKFhFTikg
RFQ6IG9uZSBsZXZlbCB0cmFuc2xhdGlvbjo8Mz4gMDAwMDAwMDA8Mz4gNzAyZGEwMDA8Mz4NCihY
RU4pIERUOiByZWFjaGVkIHJvb3Qgbm9kZQ0KKFhFTikgICAtIE1NSU86IDAwNzAyZGEwMDAgLSAw
MDcwMmRhMjAwIFAyTVR5cGU9NQ0KKFhFTikgaGFuZGxlIC9hY29ubmVjdEA3MDJjMDAwMC9haHVi
L212Y0A3MDJkYTIwMA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9hY29ubmVjdEA3MDJjMDAw
MC9haHViL212Y0A3MDJkYTIwMA0KKFhFTikgL2Fjb25uZWN0QDcwMmMwMDAwL2FodWIvbXZjQDcw
MmRhMjAwIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDENCihYRU4pIENoZWNrIGlmIC9hY29ubmVj
dEA3MDJjMDAwMC9haHViL212Y0A3MDJkYTIwMCBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQg
aXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vYWNvbm5lY3RANzAyYzAwMDAvYWh1Yi9tdmNA
NzAyZGEyMDANCihYRU4pIERUOiAqKiB0cmFuc2xhdGlvbiBmb3IgZGV2aWNlIC9hY29ubmVjdEA3
MDJjMDAwMC9haHViL212Y0A3MDJkYTIwMCAqKg0KKFhFTikgRFQ6IGJ1cyBpcyBkZWZhdWx0IChu
YT0xLCBucz0xKSBvbiAvYWNvbm5lY3RANzAyYzAwMDAvYWh1Yg0KKFhFTikgRFQ6IHRyYW5zbGF0
aW5nIGFkZHJlc3M6PDM+IDcwMmRhMjAwPDM+DQooWEVOKSBEVDogcGFyZW50IGJ1cyBpcyBkZWZh
dWx0IChuYT0yLCBucz0yKSBvbiAvYWNvbm5lY3RANzAyYzAwMDANCihYRU4pIERUOiB3YWxraW5n
IHJhbmdlcy4uLg0KKFhFTikgRFQ6IGRlZmF1bHQgbWFwLCBjcD03MDJkMDAwMCwgcz0xMDAwMCwg
ZGE9NzAyZGEyMDANCihYRU4pIERUOiBwYXJlbnQgdHJhbnNsYXRpb24gZm9yOjwzPiAwMDAwMDAw
MDwzPiA3MDJkMDAwMDwzPg0KKFhFTikgRFQ6IHdpdGggb2Zmc2V0OiBhMjAwDQooWEVOKSBEVDog
b25lIGxldmVsIHRyYW5zbGF0aW9uOjwzPiAwMDAwMDAwMDwzPiA3MDJkYTIwMDwzPg0KKFhFTikg
RFQ6IHBhcmVudCBidXMgaXMgZGVmYXVsdCAobmE9MiwgbnM9Mikgb24gLw0KKFhFTikgRFQ6IGVt
cHR5IHJhbmdlczsgMToxIHRyYW5zbGF0aW9uDQooWEVOKSBEVDogcGFyZW50IHRyYW5zbGF0aW9u
IGZvcjo8Mz4gMDAwMDAwMDA8Mz4gMDAwMDAwMDA8Mz4NCihYRU4pIERUOiB3aXRoIG9mZnNldDog
NzAyZGEyMDANCihYRU4pIERUOiBvbmUgbGV2ZWwgdHJhbnNsYXRpb246PDM+IDAwMDAwMDAwPDM+
IDcwMmRhMjAwPDM+DQooWEVOKSBEVDogcmVhY2hlZCByb290IG5vZGUNCihYRU4pICAgLSBNTUlP
OiAwMDcwMmRhMjAwIC0gMDA3MDJkYTQwMCBQMk1UeXBlPTUNCihYRU4pIGhhbmRsZSAvYWNvbm5l
Y3RANzAyYzAwMDAvYWh1Yi9pcWNANzAyZGUwMDANCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0v
YWNvbm5lY3RANzAyYzAwMDAvYWh1Yi9pcWNANzAyZGUwMDANCihYRU4pIC9hY29ubmVjdEA3MDJj
MDAwMC9haHViL2lxY0A3MDJkZTAwMCBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAxDQooWEVOKSBD
aGVjayBpZiAvYWNvbm5lY3RANzAyYzAwMDAvYWh1Yi9pcWNANzAyZGUwMDAgaXMgYmVoaW5kIHRo
ZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2Fjb25uZWN0QDcw
MmMwMDAwL2FodWIvaXFjQDcwMmRlMDAwDQooWEVOKSBEVDogKiogdHJhbnNsYXRpb24gZm9yIGRl
dmljZSAvYWNvbm5lY3RANzAyYzAwMDAvYWh1Yi9pcWNANzAyZGUwMDAgKioNCihYRU4pIERUOiBi
dXMgaXMgZGVmYXVsdCAobmE9MSwgbnM9MSkgb24gL2Fjb25uZWN0QDcwMmMwMDAwL2FodWINCihY
RU4pIERUOiB0cmFuc2xhdGluZyBhZGRyZXNzOjwzPiA3MDJkZTAwMDwzPg0KKFhFTikgRFQ6IHBh
cmVudCBidXMgaXMgZGVmYXVsdCAobmE9MiwgbnM9Mikgb24gL2Fjb25uZWN0QDcwMmMwMDAwDQoo
WEVOKSBEVDogd2Fsa2luZyByYW5nZXMuLi4NCihYRU4pIERUOiBkZWZhdWx0IG1hcCwgY3A9NzAy
ZDAwMDAsIHM9MTAwMDAsIGRhPTcwMmRlMDAwDQooWEVOKSBEVDogcGFyZW50IHRyYW5zbGF0aW9u
IGZvcjo8Mz4gMDAwMDAwMDA8Mz4gNzAyZDAwMDA8Mz4NCihYRU4pIERUOiB3aXRoIG9mZnNldDog
ZTAwMA0KKFhFTikgRFQ6IG9uZSBsZXZlbCB0cmFuc2xhdGlvbjo8Mz4gMDAwMDAwMDA8Mz4gNzAy
ZGUwMDA8Mz4NCihYRU4pIERUOiBwYXJlbnQgYnVzIGlzIGRlZmF1bHQgKG5hPTIsIG5zPTIpIG9u
IC8NCihYRU4pIERUOiBlbXB0eSByYW5nZXM7IDE6MSB0cmFuc2xhdGlvbg0KKFhFTikgRFQ6IHBh
cmVudCB0cmFuc2xhdGlvbiBmb3I6PDM+IDAwMDAwMDAwPDM+IDAwMDAwMDAwPDM+DQooWEVOKSBE
VDogd2l0aCBvZmZzZXQ6IDcwMmRlMDAwDQooWEVOKSBEVDogb25lIGxldmVsIHRyYW5zbGF0aW9u
OjwzPiAwMDAwMDAwMDwzPiA3MDJkZTAwMDwzPg0KKFhFTikgRFQ6IHJlYWNoZWQgcm9vdCBub2Rl
DQooWEVOKSAgIC0gTU1JTzogMDA3MDJkZTAwMCAtIDAwNzAyZGUyMDAgUDJNVHlwZT01DQooWEVO
KSBoYW5kbGUgL2Fjb25uZWN0QDcwMmMwMDAwL2FodWIvaXFjQDcwMmRlMjAwDQooWEVOKSBkdF9p
cnFfbnVtYmVyOiBkZXY9L2Fjb25uZWN0QDcwMmMwMDAwL2FodWIvaXFjQDcwMmRlMjAwDQooWEVO
KSAvYWNvbm5lY3RANzAyYzAwMDAvYWh1Yi9pcWNANzAyZGUyMDAgcGFzc3Rocm91Z2ggPSAxIG5h
ZGRyID0gMQ0KKFhFTikgQ2hlY2sgaWYgL2Fjb25uZWN0QDcwMmMwMDAwL2FodWIvaXFjQDcwMmRl
MjAwIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjog
ZGV2PS9hY29ubmVjdEA3MDJjMDAwMC9haHViL2lxY0A3MDJkZTIwMA0KKFhFTikgRFQ6ICoqIHRy
YW5zbGF0aW9uIGZvciBkZXZpY2UgL2Fjb25uZWN0QDcwMmMwMDAwL2FodWIvaXFjQDcwMmRlMjAw
ICoqDQooWEVOKSBEVDogYnVzIGlzIGRlZmF1bHQgKG5hPTEsIG5zPTEpIG9uIC9hY29ubmVjdEA3
MDJjMDAwMC9haHViDQooWEVOKSBEVDogdHJhbnNsYXRpbmcgYWRkcmVzczo8Mz4gNzAyZGUyMDA8
Mz4NCihYRU4pIERUOiBwYXJlbnQgYnVzIGlzIGRlZmF1bHQgKG5hPTIsIG5zPTIpIG9uIC9hY29u
bmVjdEA3MDJjMDAwMA0KKFhFTikgRFQ6IHdhbGtpbmcgcmFuZ2VzLi4uDQooWEVOKSBEVDogZGVm
YXVsdCBtYXAsIGNwPTcwMmQwMDAwLCBzPTEwMDAwLCBkYT03MDJkZTIwMA0KKFhFTikgRFQ6IHBh
cmVudCB0cmFuc2xhdGlvbiBmb3I6PDM+IDAwMDAwMDAwPDM+IDcwMmQwMDAwPDM+DQooWEVOKSBE
VDogd2l0aCBvZmZzZXQ6IGUyMDANCihYRU4pIERUOiBvbmUgbGV2ZWwgdHJhbnNsYXRpb246PDM+
IDAwMDAwMDAwPDM+IDcwMmRlMjAwPDM+DQooWEVOKSBEVDogcGFyZW50IGJ1cyBpcyBkZWZhdWx0
IChuYT0yLCBucz0yKSBvbiAvDQooWEVOKSBEVDogZW1wdHkgcmFuZ2VzOyAxOjEgdHJhbnNsYXRp
b24NCihYRU4pIERUOiBwYXJlbnQgdHJhbnNsYXRpb24gZm9yOjwzPiAwMDAwMDAwMDwzPiAwMDAw
MDAwMDwzPg0KKFhFTikgRFQ6IHdpdGggb2Zmc2V0OiA3MDJkZTIwMA0KKFhFTikgRFQ6IG9uZSBs
ZXZlbCB0cmFuc2xhdGlvbjo8Mz4gMDAwMDAwMDA8Mz4gNzAyZGUyMDA8Mz4NCihYRU4pIERUOiBy
ZWFjaGVkIHJvb3Qgbm9kZQ0KKFhFTikgICAtIE1NSU86IDAwNzAyZGUyMDAgLSAwMDcwMmRlNDAw
IFAyTVR5cGU9NQ0KKFhFTikgaGFuZGxlIC9hY29ubmVjdEA3MDJjMDAwMC9haHViL29wZUA3MDJk
ODAwMA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9hY29ubmVjdEA3MDJjMDAwMC9haHViL29w
ZUA3MDJkODAwMA0KKFhFTikgL2Fjb25uZWN0QDcwMmMwMDAwL2FodWIvb3BlQDcwMmQ4MDAwIHBh
c3N0aHJvdWdoID0gMSBuYWRkciA9IDMNCihYRU4pIENoZWNrIGlmIC9hY29ubmVjdEA3MDJjMDAw
MC9haHViL29wZUA3MDJkODAwMCBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4p
IGR0X2lycV9udW1iZXI6IGRldj0vYWNvbm5lY3RANzAyYzAwMDAvYWh1Yi9vcGVANzAyZDgwMDAN
CihYRU4pIERUOiAqKiB0cmFuc2xhdGlvbiBmb3IgZGV2aWNlIC9hY29ubmVjdEA3MDJjMDAwMC9h
aHViL29wZUA3MDJkODAwMCAqKg0KKFhFTikgRFQ6IGJ1cyBpcyBkZWZhdWx0IChuYT0xLCBucz0x
KSBvbiAvYWNvbm5lY3RANzAyYzAwMDAvYWh1Yg0KKFhFTikgRFQ6IHRyYW5zbGF0aW5nIGFkZHJl
c3M6PDM+IDcwMmQ4MDAwPDM+DQooWEVOKSBEVDogcGFyZW50IGJ1cyBpcyBkZWZhdWx0IChuYT0y
LCBucz0yKSBvbiAvYWNvbm5lY3RANzAyYzAwMDANCihYRU4pIERUOiB3YWxraW5nIHJhbmdlcy4u
Lg0KKFhFTikgRFQ6IGRlZmF1bHQgbWFwLCBjcD03MDJkMDAwMCwgcz0xMDAwMCwgZGE9NzAyZDgw
MDANCihYRU4pIERUOiBwYXJlbnQgdHJhbnNsYXRpb24gZm9yOjwzPiAwMDAwMDAwMDwzPiA3MDJk
MDAwMDwzPg0KKFhFTikgRFQ6IHdpdGggb2Zmc2V0OiA4MDAwDQooWEVOKSBEVDogb25lIGxldmVs
IHRyYW5zbGF0aW9uOjwzPiAwMDAwMDAwMDwzPiA3MDJkODAwMDwzPg0KKFhFTikgRFQ6IHBhcmVu
dCBidXMgaXMgZGVmYXVsdCAobmE9MiwgbnM9Mikgb24gLw0KKFhFTikgRFQ6IGVtcHR5IHJhbmdl
czsgMToxIHRyYW5zbGF0aW9uDQooWEVOKSBEVDogcGFyZW50IHRyYW5zbGF0aW9uIGZvcjo8Mz4g
MDAwMDAwMDA8Mz4gMDAwMDAwMDA8Mz4NCihYRU4pIERUOiB3aXRoIG9mZnNldDogNzAyZDgwMDAN
CihYRU4pIERUOiBvbmUgbGV2ZWwgdHJhbnNsYXRpb246PDM+IDAwMDAwMDAwPDM+IDcwMmQ4MDAw
PDM+DQooWEVOKSBEVDogcmVhY2hlZCByb290IG5vZGUNCihYRU4pICAgLSBNTUlPOiAwMDcwMmQ4
MDAwIC0gMDA3MDJkODEwMCBQMk1UeXBlPTUNCihYRU4pIERUOiAqKiB0cmFuc2xhdGlvbiBmb3Ig
ZGV2aWNlIC9hY29ubmVjdEA3MDJjMDAwMC9haHViL29wZUA3MDJkODAwMCAqKg0KKFhFTikgRFQ6
IGJ1cyBpcyBkZWZhdWx0IChuYT0xLCBucz0xKSBvbiAvYWNvbm5lY3RANzAyYzAwMDAvYWh1Yg0K
KFhFTikgRFQ6IHRyYW5zbGF0aW5nIGFkZHJlc3M6PDM+IDcwMmQ4MTAwPDM+DQooWEVOKSBEVDog
cGFyZW50IGJ1cyBpcyBkZWZhdWx0IChuYT0yLCBucz0yKSBvbiAvYWNvbm5lY3RANzAyYzAwMDAN
CihYRU4pIERUOiB3YWxraW5nIHJhbmdlcy4uLg0KKFhFTikgRFQ6IGRlZmF1bHQgbWFwLCBjcD03
MDJkMDAwMCwgcz0xMDAwMCwgZGE9NzAyZDgxMDANCihYRU4pIERUOiBwYXJlbnQgdHJhbnNsYXRp
b24gZm9yOjwzPiAwMDAwMDAwMDwzPiA3MDJkMDAwMDwzPg0KKFhFTikgRFQ6IHdpdGggb2Zmc2V0
OiA4MTAwDQooWEVOKSBEVDogb25lIGxldmVsIHRyYW5zbGF0aW9uOjwzPiAwMDAwMDAwMDwzPiA3
MDJkODEwMDwzPg0KKFhFTikgRFQ6IHBhcmVudCBidXMgaXMgZGVmYXVsdCAobmE9MiwgbnM9Mikg
b24gLw0KKFhFTikgRFQ6IGVtcHR5IHJhbmdlczsgMToxIHRyYW5zbGF0aW9uDQooWEVOKSBEVDog
cGFyZW50IHRyYW5zbGF0aW9uIGZvcjo8Mz4gMDAwMDAwMDA8Mz4gMDAwMDAwMDA8Mz4NCihYRU4p
IERUOiB3aXRoIG9mZnNldDogNzAyZDgxMDANCihYRU4pIERUOiBvbmUgbGV2ZWwgdHJhbnNsYXRp
b246PDM+IDAwMDAwMDAwPDM+IDcwMmQ4MTAwPDM+DQooWEVOKSBEVDogcmVhY2hlZCByb290IG5v
ZGUNCihYRU4pICAgLSBNTUlPOiAwMDcwMmQ4MTAwIC0gMDA3MDJkODIwMCBQMk1UeXBlPTUNCihY
RU4pIERUOiAqKiB0cmFuc2xhdGlvbiBmb3IgZGV2aWNlIC9hY29ubmVjdEA3MDJjMDAwMC9haHVi
L29wZUA3MDJkODAwMCAqKg0KKFhFTikgRFQ6IGJ1cyBpcyBkZWZhdWx0IChuYT0xLCBucz0xKSBv
biAvYWNvbm5lY3RANzAyYzAwMDAvYWh1Yg0KKFhFTikgRFQ6IHRyYW5zbGF0aW5nIGFkZHJlc3M6
PDM+IDcwMmQ4MjAwPDM+DQooWEVOKSBEVDogcGFyZW50IGJ1cyBpcyBkZWZhdWx0IChuYT0yLCBu
cz0yKSBvbiAvYWNvbm5lY3RANzAyYzAwMDANCihYRU4pIERUOiB3YWxraW5nIHJhbmdlcy4uLg0K
KFhFTikgRFQ6IGRlZmF1bHQgbWFwLCBjcD03MDJkMDAwMCwgcz0xMDAwMCwgZGE9NzAyZDgyMDAN
CihYRU4pIERUOiBwYXJlbnQgdHJhbnNsYXRpb24gZm9yOjwzPiAwMDAwMDAwMDwzPiA3MDJkMDAw
MDwzPg0KKFhFTikgRFQ6IHdpdGggb2Zmc2V0OiA4MjAwDQooWEVOKSBEVDogb25lIGxldmVsIHRy
YW5zbGF0aW9uOjwzPiAwMDAwMDAwMDwzPiA3MDJkODIwMDwzPg0KKFhFTikgRFQ6IHBhcmVudCBi
dXMgaXMgZGVmYXVsdCAobmE9MiwgbnM9Mikgb24gLw0KKFhFTikgRFQ6IGVtcHR5IHJhbmdlczsg
MToxIHRyYW5zbGF0aW9uDQooWEVOKSBEVDogcGFyZW50IHRyYW5zbGF0aW9uIGZvcjo8Mz4gMDAw
MDAwMDA8Mz4gMDAwMDAwMDA8Mz4NCihYRU4pIERUOiB3aXRoIG9mZnNldDogNzAyZDgyMDANCihY
RU4pIERUOiBvbmUgbGV2ZWwgdHJhbnNsYXRpb246PDM+IDAwMDAwMDAwPDM+IDcwMmQ4MjAwPDM+
DQooWEVOKSBEVDogcmVhY2hlZCByb290IG5vZGUNCihYRU4pICAgLSBNTUlPOiAwMDcwMmQ4MjAw
IC0gMDA3MDJkODQwMCBQMk1UeXBlPTUNCihYRU4pIGhhbmRsZSAvYWNvbm5lY3RANzAyYzAwMDAv
YWh1Yi9vcGVANzAyZDgwMDAvcGVxQDcwMmQ4MTAwDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9
L2Fjb25uZWN0QDcwMmMwMDAwL2FodWIvb3BlQDcwMmQ4MDAwL3BlcUA3MDJkODEwMA0KKFhFTikg
L2Fjb25uZWN0QDcwMmMwMDAwL2FodWIvb3BlQDcwMmQ4MDAwL3BlcUA3MDJkODEwMCBwYXNzdGhy
b3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvYWNvbm5lY3RANzAyYzAwMDAvYWh1
Yi9vcGVANzAyZDgwMDAvcGVxQDcwMmQ4MTAwIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBp
dA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9hY29ubmVjdEA3MDJjMDAwMC9haHViL29wZUA3
MDJkODAwMC9wZXFANzAyZDgxMDANCihYRU4pIGhhbmRsZSAvYWNvbm5lY3RANzAyYzAwMDAvYWh1
Yi9vcGVANzAyZDgwMDAvbWJkcmNANzAyZDgyMDANCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0v
YWNvbm5lY3RANzAyYzAwMDAvYWh1Yi9vcGVANzAyZDgwMDAvbWJkcmNANzAyZDgyMDANCihYRU4p
IC9hY29ubmVjdEA3MDJjMDAwMC9haHViL29wZUA3MDJkODAwMC9tYmRyY0A3MDJkODIwMCBwYXNz
dGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvYWNvbm5lY3RANzAyYzAwMDAv
YWh1Yi9vcGVANzAyZDgwMDAvbWJkcmNANzAyZDgyMDAgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQg
YWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2Fjb25uZWN0QDcwMmMwMDAwL2FodWIv
b3BlQDcwMmQ4MDAwL21iZHJjQDcwMmQ4MjAwDQooWEVOKSBoYW5kbGUgL2Fjb25uZWN0QDcwMmMw
MDAwL2FodWIvb3BlQDcwMmQ4NDAwDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2Fjb25uZWN0
QDcwMmMwMDAwL2FodWIvb3BlQDcwMmQ4NDAwDQooWEVOKSAvYWNvbm5lY3RANzAyYzAwMDAvYWh1
Yi9vcGVANzAyZDg0MDAgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMw0KKFhFTikgQ2hlY2sgaWYg
L2Fjb25uZWN0QDcwMmMwMDAwL2FodWIvb3BlQDcwMmQ4NDAwIGlzIGJlaGluZCB0aGUgSU9NTVUg
YW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9hY29ubmVjdEA3MDJjMDAwMC9h
aHViL29wZUA3MDJkODQwMA0KKFhFTikgRFQ6ICoqIHRyYW5zbGF0aW9uIGZvciBkZXZpY2UgL2Fj
b25uZWN0QDcwMmMwMDAwL2FodWIvb3BlQDcwMmQ4NDAwICoqDQooWEVOKSBEVDogYnVzIGlzIGRl
ZmF1bHQgKG5hPTEsIG5zPTEpIG9uIC9hY29ubmVjdEA3MDJjMDAwMC9haHViDQooWEVOKSBEVDog
dHJhbnNsYXRpbmcgYWRkcmVzczo8Mz4gNzAyZDg0MDA8Mz4NCihYRU4pIERUOiBwYXJlbnQgYnVz
IGlzIGRlZmF1bHQgKG5hPTIsIG5zPTIpIG9uIC9hY29ubmVjdEA3MDJjMDAwMA0KKFhFTikgRFQ6
IHdhbGtpbmcgcmFuZ2VzLi4uDQooWEVOKSBEVDogZGVmYXVsdCBtYXAsIGNwPTcwMmQwMDAwLCBz
PTEwMDAwLCBkYT03MDJkODQwMA0KKFhFTikgRFQ6IHBhcmVudCB0cmFuc2xhdGlvbiBmb3I6PDM+
IDAwMDAwMDAwPDM+IDcwMmQwMDAwPDM+DQooWEVOKSBEVDogd2l0aCBvZmZzZXQ6IDg0MDANCihY
RU4pIERUOiBvbmUgbGV2ZWwgdHJhbnNsYXRpb246PDM+IDAwMDAwMDAwPDM+IDcwMmQ4NDAwPDM+
DQooWEVOKSBEVDogcGFyZW50IGJ1cyBpcyBkZWZhdWx0IChuYT0yLCBucz0yKSBvbiAvDQooWEVO
KSBEVDogZW1wdHkgcmFuZ2VzOyAxOjEgdHJhbnNsYXRpb24NCihYRU4pIERUOiBwYXJlbnQgdHJh
bnNsYXRpb24gZm9yOjwzPiAwMDAwMDAwMDwzPiAwMDAwMDAwMDwzPg0KKFhFTikgRFQ6IHdpdGgg
b2Zmc2V0OiA3MDJkODQwMA0KKFhFTikgRFQ6IG9uZSBsZXZlbCB0cmFuc2xhdGlvbjo8Mz4gMDAw
MDAwMDA8Mz4gNzAyZDg0MDA8Mz4NCihYRU4pIERUOiByZWFjaGVkIHJvb3Qgbm9kZQ0KKFhFTikg
ICAtIE1NSU86IDAwNzAyZDg0MDAgLSAwMDcwMmQ4NTAwIFAyTVR5cGU9NQ0KKFhFTikgRFQ6ICoq
IHRyYW5zbGF0aW9uIGZvciBkZXZpY2UgL2Fjb25uZWN0QDcwMmMwMDAwL2FodWIvb3BlQDcwMmQ4
NDAwICoqDQooWEVOKSBEVDogYnVzIGlzIGRlZmF1bHQgKG5hPTEsIG5zPTEpIG9uIC9hY29ubmVj
dEA3MDJjMDAwMC9haHViDQooWEVOKSBEVDogdHJhbnNsYXRpbmcgYWRkcmVzczo8Mz4gNzAyZDg1
MDA8Mz4NCihYRU4pIERUOiBwYXJlbnQgYnVzIGlzIGRlZmF1bHQgKG5hPTIsIG5zPTIpIG9uIC9h
Y29ubmVjdEA3MDJjMDAwMA0KKFhFTikgRFQ6IHdhbGtpbmcgcmFuZ2VzLi4uDQooWEVOKSBEVDog
ZGVmYXVsdCBtYXAsIGNwPTcwMmQwMDAwLCBzPTEwMDAwLCBkYT03MDJkODUwMA0KKFhFTikgRFQ6
IHBhcmVudCB0cmFuc2xhdGlvbiBmb3I6PDM+IDAwMDAwMDAwPDM+IDcwMmQwMDAwPDM+DQooWEVO
KSBEVDogd2l0aCBvZmZzZXQ6IDg1MDANCihYRU4pIERUOiBvbmUgbGV2ZWwgdHJhbnNsYXRpb246
PDM+IDAwMDAwMDAwPDM+IDcwMmQ4NTAwPDM+DQooWEVOKSBEVDogcGFyZW50IGJ1cyBpcyBkZWZh
dWx0IChuYT0yLCBucz0yKSBvbiAvDQooWEVOKSBEVDogZW1wdHkgcmFuZ2VzOyAxOjEgdHJhbnNs
YXRpb24NCihYRU4pIERUOiBwYXJlbnQgdHJhbnNsYXRpb24gZm9yOjwzPiAwMDAwMDAwMDwzPiAw
MDAwMDAwMDwzPg0KKFhFTikgRFQ6IHdpdGggb2Zmc2V0OiA3MDJkODUwMA0KKFhFTikgRFQ6IG9u
ZSBsZXZlbCB0cmFuc2xhdGlvbjo8Mz4gMDAwMDAwMDA8Mz4gNzAyZDg1MDA8Mz4NCihYRU4pIERU
OiByZWFjaGVkIHJvb3Qgbm9kZQ0KKFhFTikgICAtIE1NSU86IDAwNzAyZDg1MDAgLSAwMDcwMmQ4
NjAwIFAyTVR5cGU9NQ0KKFhFTikgRFQ6ICoqIHRyYW5zbGF0aW9uIGZvciBkZXZpY2UgL2Fjb25u
ZWN0QDcwMmMwMDAwL2FodWIvb3BlQDcwMmQ4NDAwICoqDQooWEVOKSBEVDogYnVzIGlzIGRlZmF1
bHQgKG5hPTEsIG5zPTEpIG9uIC9hY29ubmVjdEA3MDJjMDAwMC9haHViDQooWEVOKSBEVDogdHJh
bnNsYXRpbmcgYWRkcmVzczo8Mz4gNzAyZDg2MDA8Mz4NCihYRU4pIERUOiBwYXJlbnQgYnVzIGlz
IGRlZmF1bHQgKG5hPTIsIG5zPTIpIG9uIC9hY29ubmVjdEA3MDJjMDAwMA0KKFhFTikgRFQ6IHdh
bGtpbmcgcmFuZ2VzLi4uDQooWEVOKSBEVDogZGVmYXVsdCBtYXAsIGNwPTcwMmQwMDAwLCBzPTEw
MDAwLCBkYT03MDJkODYwMA0KKFhFTikgRFQ6IHBhcmVudCB0cmFuc2xhdGlvbiBmb3I6PDM+IDAw
MDAwMDAwPDM+IDcwMmQwMDAwPDM+DQooWEVOKSBEVDogd2l0aCBvZmZzZXQ6IDg2MDANCihYRU4p
IERUOiBvbmUgbGV2ZWwgdHJhbnNsYXRpb246PDM+IDAwMDAwMDAwPDM+IDcwMmQ4NjAwPDM+DQoo
WEVOKSBEVDogcGFyZW50IGJ1cyBpcyBkZWZhdWx0IChuYT0yLCBucz0yKSBvbiAvDQooWEVOKSBE
VDogZW1wdHkgcmFuZ2VzOyAxOjEgdHJhbnNsYXRpb24NCihYRU4pIERUOiBwYXJlbnQgdHJhbnNs
YXRpb24gZm9yOjwzPiAwMDAwMDAwMDwzPiAwMDAwMDAwMDwzPg0KKFhFTikgRFQ6IHdpdGggb2Zm
c2V0OiA3MDJkODYwMA0KKFhFTikgRFQ6IG9uZSBsZXZlbCB0cmFuc2xhdGlvbjo8Mz4gMDAwMDAw
MDA8Mz4gNzAyZDg2MDA8Mz4NCihYRU4pIERUOiByZWFjaGVkIHJvb3Qgbm9kZQ0KKFhFTikgICAt
IE1NSU86IDAwNzAyZDg2MDAgLSAwMDcwMmQ4ODAwIFAyTVR5cGU9NQ0KKFhFTikgaGFuZGxlIC9h
Y29ubmVjdEA3MDJjMDAwMC9haHViL29wZUA3MDJkODQwMC9wZXFANzAyZDg1MDANCihYRU4pIGR0
X2lycV9udW1iZXI6IGRldj0vYWNvbm5lY3RANzAyYzAwMDAvYWh1Yi9vcGVANzAyZDg0MDAvcGVx
QDcwMmQ4NTAwDQooWEVOKSAvYWNvbm5lY3RANzAyYzAwMDAvYWh1Yi9vcGVANzAyZDg0MDAvcGVx
QDcwMmQ4NTAwIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9hY29u
bmVjdEA3MDJjMDAwMC9haHViL29wZUA3MDJkODQwMC9wZXFANzAyZDg1MDAgaXMgYmVoaW5kIHRo
ZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2Fjb25uZWN0QDcw
MmMwMDAwL2FodWIvb3BlQDcwMmQ4NDAwL3BlcUA3MDJkODUwMA0KKFhFTikgaGFuZGxlIC9hY29u
bmVjdEA3MDJjMDAwMC9haHViL29wZUA3MDJkODQwMC9tYmRyY0A3MDJkODYwMA0KKFhFTikgZHRf
aXJxX251bWJlcjogZGV2PS9hY29ubmVjdEA3MDJjMDAwMC9haHViL29wZUA3MDJkODQwMC9tYmRy
Y0A3MDJkODYwMA0KKFhFTikgL2Fjb25uZWN0QDcwMmMwMDAwL2FodWIvb3BlQDcwMmQ4NDAwL21i
ZHJjQDcwMmQ4NjAwIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9h
Y29ubmVjdEA3MDJjMDAwMC9haHViL29wZUA3MDJkODQwMC9tYmRyY0A3MDJkODYwMCBpcyBiZWhp
bmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vYWNvbm5l
Y3RANzAyYzAwMDAvYWh1Yi9vcGVANzAyZDg0MDAvbWJkcmNANzAyZDg2MDANCihYRU4pIGhhbmRs
ZSAvYWNvbm5lY3RANzAyYzAwMDAvYWh1Yi9tdmNAMHg3MDJkYTIwMA0KKFhFTikgZHRfaXJxX251
bWJlcjogZGV2PS9hY29ubmVjdEA3MDJjMDAwMC9haHViL212Y0AweDcwMmRhMjAwDQooWEVOKSAv
YWNvbm5lY3RANzAyYzAwMDAvYWh1Yi9tdmNAMHg3MDJkYTIwMCBwYXNzdGhyb3VnaCA9IDEgbmFk
ZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvYWNvbm5lY3RANzAyYzAwMDAvYWh1Yi9tdmNAMHg3MDJk
YTIwMCBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6
IGRldj0vYWNvbm5lY3RANzAyYzAwMDAvYWh1Yi9tdmNAMHg3MDJkYTIwMA0KKFhFTikgaGFuZGxl
IC9hY29ubmVjdEA3MDJjMDAwMC9hZHNwX2F1ZGlvDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9
L2Fjb25uZWN0QDcwMmMwMDAwL2Fkc3BfYXVkaW8NCihYRU4pICB1c2luZyAnaW50ZXJydXB0cycg
cHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTQ0DQooWEVOKSAgaW50c2l6ZT00IGlu
dGxlbj00NA0KKFhFTikgZHRfZGV2aWNlX2dldF9yYXdfaXJxOiBkZXY9L2Fjb25uZWN0QDcwMmMw
MDAwL2Fkc3BfYXVkaW8sIGluZGV4PTANCihYRU4pICB1c2luZyAnaW50ZXJydXB0cycgcHJvcGVy
dHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTQ0DQooWEVOKSAgaW50c2l6ZT00IGludGxlbj00
NA0KKFhFTikgZHRfaXJxX21hcF9yYXc6IHBhcj0vYWNvbm5lY3RANzAyYzAwMDAvYWdpY0A3MDJm
OTAwMCxpbnRzcGVjPVsweDAwMDAwMDAwIDB4MDAwMDAwMjMuLi5dLG9pbnRzaXplPTQNCihYRU4p
IGR0X2lycV9tYXBfcmF3OiBpcGFyPS9hY29ubmVjdEA3MDJjMDAwMC9hZ2ljQDcwMmY5MDAwLCBz
aXplPTQNCihYRU4pICAtPiBhZGRyc2l6ZT0yDQooWEVOKSAgLT4gZ290IGl0ICENCihYRU4pIGR0
X2RldmljZV9nZXRfcmF3X2lycTogZGV2PS9hY29ubmVjdEA3MDJjMDAwMC9hZHNwX2F1ZGlvLCBp
bmRleD0xDQooWEVOKSAgdXNpbmcgJ2ludGVycnVwdHMnIHByb3BlcnR5DQooWEVOKSAgaW50c3Bl
Yz0wIGludGxlbj00NA0KKFhFTikgIGludHNpemU9NCBpbnRsZW49NDQNCihYRU4pIGR0X2lycV9t
YXBfcmF3OiBwYXI9L2Fjb25uZWN0QDcwMmMwMDAwL2FnaWNANzAyZjkwMDAsaW50c3BlYz1bMHgw
MDAwMDAwMCAweDAwMDAwMDI0Li4uXSxvaW50c2l6ZT00DQooWEVOKSBkdF9pcnFfbWFwX3Jhdzog
aXBhcj0vYWNvbm5lY3RANzAyYzAwMDAvYWdpY0A3MDJmOTAwMCwgc2l6ZT00DQooWEVOKSAgLT4g
YWRkcnNpemU9Mg0KKFhFTikgIC0+IGdvdCBpdCAhDQooWEVOKSBkdF9kZXZpY2VfZ2V0X3Jhd19p
cnE6IGRldj0vYWNvbm5lY3RANzAyYzAwMDAvYWRzcF9hdWRpbywgaW5kZXg9Mg0KKFhFTikgIHVz
aW5nICdpbnRlcnJ1cHRzJyBwcm9wZXJ0eQ0KKFhFTikgIGludHNwZWM9MCBpbnRsZW49NDQNCihY
RU4pICBpbnRzaXplPTQgaW50bGVuPTQ0DQooWEVOKSBkdF9pcnFfbWFwX3JhdzogcGFyPS9hY29u
bmVjdEA3MDJjMDAwMC9hZ2ljQDcwMmY5MDAwLGludHNwZWM9WzB4MDAwMDAwMDAgMHgwMDAwMDAy
NS4uLl0sb2ludHNpemU9NA0KKFhFTikgZHRfaXJxX21hcF9yYXc6IGlwYXI9L2Fjb25uZWN0QDcw
MmMwMDAwL2FnaWNANzAyZjkwMDAsIHNpemU9NA0KKFhFTikgIC0+IGFkZHJzaXplPTINCihYRU4p
ICAtPiBnb3QgaXQgIQ0KKFhFTikgZHRfZGV2aWNlX2dldF9yYXdfaXJxOiBkZXY9L2Fjb25uZWN0
QDcwMmMwMDAwL2Fkc3BfYXVkaW8sIGluZGV4PTMNCihYRU4pICB1c2luZyAnaW50ZXJydXB0cycg
cHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTQ0DQooWEVOKSAgaW50c2l6ZT00IGlu
dGxlbj00NA0KKFhFTikgZHRfaXJxX21hcF9yYXc6IHBhcj0vYWNvbm5lY3RANzAyYzAwMDAvYWdp
Y0A3MDJmOTAwMCxpbnRzcGVjPVsweDAwMDAwMDAwIDB4MDAwMDAwMjYuLi5dLG9pbnRzaXplPTQN
CihYRU4pIGR0X2lycV9tYXBfcmF3OiBpcGFyPS9hY29ubmVjdEA3MDJjMDAwMC9hZ2ljQDcwMmY5
MDAwLCBzaXplPTQNCihYRU4pICAtPiBhZGRyc2l6ZT0yDQooWEVOKSAgLT4gZ290IGl0ICENCihY
RU4pIGR0X2RldmljZV9nZXRfcmF3X2lycTogZGV2PS9hY29ubmVjdEA3MDJjMDAwMC9hZHNwX2F1
ZGlvLCBpbmRleD00DQooWEVOKSAgdXNpbmcgJ2ludGVycnVwdHMnIHByb3BlcnR5DQooWEVOKSAg
aW50c3BlYz0wIGludGxlbj00NA0KKFhFTikgIGludHNpemU9NCBpbnRsZW49NDQNCihYRU4pIGR0
X2lycV9tYXBfcmF3OiBwYXI9L2Fjb25uZWN0QDcwMmMwMDAwL2FnaWNANzAyZjkwMDAsaW50c3Bl
Yz1bMHgwMDAwMDAwMCAweDAwMDAwMDI3Li4uXSxvaW50c2l6ZT00DQooWEVOKSBkdF9pcnFfbWFw
X3JhdzogaXBhcj0vYWNvbm5lY3RANzAyYzAwMDAvYWdpY0A3MDJmOTAwMCwgc2l6ZT00DQooWEVO
KSAgLT4gYWRkcnNpemU9Mg0KKFhFTikgIC0+IGdvdCBpdCAhDQooWEVOKSBkdF9kZXZpY2VfZ2V0
X3Jhd19pcnE6IGRldj0vYWNvbm5lY3RANzAyYzAwMDAvYWRzcF9hdWRpbywgaW5kZXg9NQ0KKFhF
TikgIHVzaW5nICdpbnRlcnJ1cHRzJyBwcm9wZXJ0eQ0KKFhFTikgIGludHNwZWM9MCBpbnRsZW49
NDQNCihYRU4pICBpbnRzaXplPTQgaW50bGVuPTQ0DQooWEVOKSBkdF9pcnFfbWFwX3JhdzogcGFy
PS9hY29ubmVjdEA3MDJjMDAwMC9hZ2ljQDcwMmY5MDAwLGludHNwZWM9WzB4MDAwMDAwMDAgMHgw
MDAwMDAyOC4uLl0sb2ludHNpemU9NA0KKFhFTikgZHRfaXJxX21hcF9yYXc6IGlwYXI9L2Fjb25u
ZWN0QDcwMmMwMDAwL2FnaWNANzAyZjkwMDAsIHNpemU9NA0KKFhFTikgIC0+IGFkZHJzaXplPTIN
CihYRU4pICAtPiBnb3QgaXQgIQ0KKFhFTikgZHRfZGV2aWNlX2dldF9yYXdfaXJxOiBkZXY9L2Fj
b25uZWN0QDcwMmMwMDAwL2Fkc3BfYXVkaW8sIGluZGV4PTYNCihYRU4pICB1c2luZyAnaW50ZXJy
dXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTQ0DQooWEVOKSAgaW50c2l6
ZT00IGludGxlbj00NA0KKFhFTikgZHRfaXJxX21hcF9yYXc6IHBhcj0vYWNvbm5lY3RANzAyYzAw
MDAvYWdpY0A3MDJmOTAwMCxpbnRzcGVjPVsweDAwMDAwMDAwIDB4MDAwMDAwMjkuLi5dLG9pbnRz
aXplPTQNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBpcGFyPS9hY29ubmVjdEA3MDJjMDAwMC9hZ2lj
QDcwMmY5MDAwLCBzaXplPTQNCihYRU4pICAtPiBhZGRyc2l6ZT0yDQooWEVOKSAgLT4gZ290IGl0
ICENCihYRU4pIGR0X2RldmljZV9nZXRfcmF3X2lycTogZGV2PS9hY29ubmVjdEA3MDJjMDAwMC9h
ZHNwX2F1ZGlvLCBpbmRleD03DQooWEVOKSAgdXNpbmcgJ2ludGVycnVwdHMnIHByb3BlcnR5DQoo
WEVOKSAgaW50c3BlYz0wIGludGxlbj00NA0KKFhFTikgIGludHNpemU9NCBpbnRsZW49NDQNCihY
RU4pIGR0X2lycV9tYXBfcmF3OiBwYXI9L2Fjb25uZWN0QDcwMmMwMDAwL2FnaWNANzAyZjkwMDAs
aW50c3BlYz1bMHgwMDAwMDAwMCAweDAwMDAwMDJhLi4uXSxvaW50c2l6ZT00DQooWEVOKSBkdF9p
cnFfbWFwX3JhdzogaXBhcj0vYWNvbm5lY3RANzAyYzAwMDAvYWdpY0A3MDJmOTAwMCwgc2l6ZT00
DQooWEVOKSAgLT4gYWRkcnNpemU9Mg0KKFhFTikgIC0+IGdvdCBpdCAhDQooWEVOKSBkdF9kZXZp
Y2VfZ2V0X3Jhd19pcnE6IGRldj0vYWNvbm5lY3RANzAyYzAwMDAvYWRzcF9hdWRpbywgaW5kZXg9
OA0KKFhFTikgIHVzaW5nICdpbnRlcnJ1cHRzJyBwcm9wZXJ0eQ0KKFhFTikgIGludHNwZWM9MCBp
bnRsZW49NDQNCihYRU4pICBpbnRzaXplPTQgaW50bGVuPTQ0DQooWEVOKSBkdF9pcnFfbWFwX3Jh
dzogcGFyPS9hY29ubmVjdEA3MDJjMDAwMC9hZ2ljQDcwMmY5MDAwLGludHNwZWM9WzB4MDAwMDAw
MDAgMHgwMDAwMDAyYi4uLl0sb2ludHNpemU9NA0KKFhFTikgZHRfaXJxX21hcF9yYXc6IGlwYXI9
L2Fjb25uZWN0QDcwMmMwMDAwL2FnaWNANzAyZjkwMDAsIHNpemU9NA0KKFhFTikgIC0+IGFkZHJz
aXplPTINCihYRU4pICAtPiBnb3QgaXQgIQ0KKFhFTikgZHRfZGV2aWNlX2dldF9yYXdfaXJxOiBk
ZXY9L2Fjb25uZWN0QDcwMmMwMDAwL2Fkc3BfYXVkaW8sIGluZGV4PTkNCihYRU4pICB1c2luZyAn
aW50ZXJydXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTQ0DQooWEVOKSAg
aW50c2l6ZT00IGludGxlbj00NA0KKFhFTikgZHRfaXJxX21hcF9yYXc6IHBhcj0vYWNvbm5lY3RA
NzAyYzAwMDAvYWdpY0A3MDJmOTAwMCxpbnRzcGVjPVsweDAwMDAwMDAwIDB4MDAwMDAwMmMuLi5d
LG9pbnRzaXplPTQNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBpcGFyPS9hY29ubmVjdEA3MDJjMDAw
MC9hZ2ljQDcwMmY5MDAwLCBzaXplPTQNCihYRU4pICAtPiBhZGRyc2l6ZT0yDQooWEVOKSAgLT4g
Z290IGl0ICENCihYRU4pIGR0X2RldmljZV9nZXRfcmF3X2lycTogZGV2PS9hY29ubmVjdEA3MDJj
MDAwMC9hZHNwX2F1ZGlvLCBpbmRleD0xMA0KKFhFTikgIHVzaW5nICdpbnRlcnJ1cHRzJyBwcm9w
ZXJ0eQ0KKFhFTikgIGludHNwZWM9MCBpbnRsZW49NDQNCihYRU4pICBpbnRzaXplPTQgaW50bGVu
PTQ0DQooWEVOKSBkdF9pcnFfbWFwX3JhdzogcGFyPS9hY29ubmVjdEA3MDJjMDAwMC9hZ2ljQDcw
MmY5MDAwLGludHNwZWM9WzB4MDAwMDAwMDAgMHgwMDAwMDAyZC4uLl0sb2ludHNpemU9NA0KKFhF
TikgZHRfaXJxX21hcF9yYXc6IGlwYXI9L2Fjb25uZWN0QDcwMmMwMDAwL2FnaWNANzAyZjkwMDAs
IHNpemU9NA0KKFhFTikgIC0+IGFkZHJzaXplPTINCihYRU4pICAtPiBnb3QgaXQgIQ0KKFhFTikg
L2Fjb25uZWN0QDcwMmMwMDAwL2Fkc3BfYXVkaW8gcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0K
KFhFTikgQ2hlY2sgaWYgL2Fjb25uZWN0QDcwMmMwMDAwL2Fkc3BfYXVkaW8gaXMgYmVoaW5kIHRo
ZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2Fjb25uZWN0QDcw
MmMwMDAwL2Fkc3BfYXVkaW8NCihYRU4pICB1c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihY
RU4pICBpbnRzcGVjPTAgaW50bGVuPTQ0DQooWEVOKSAgaW50c2l6ZT00IGludGxlbj00NA0KKFhF
TikgZHRfZGV2aWNlX2dldF9yYXdfaXJxOiBkZXY9L2Fjb25uZWN0QDcwMmMwMDAwL2Fkc3BfYXVk
aW8sIGluZGV4PTANCihYRU4pICB1c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihYRU4pICBp
bnRzcGVjPTAgaW50bGVuPTQ0DQooWEVOKSAgaW50c2l6ZT00IGludGxlbj00NA0KKFhFTikgZHRf
aXJxX21hcF9yYXc6IHBhcj0vYWNvbm5lY3RANzAyYzAwMDAvYWdpY0A3MDJmOTAwMCxpbnRzcGVj
PVsweDAwMDAwMDAwIDB4MDAwMDAwMjMuLi5dLG9pbnRzaXplPTQNCihYRU4pIGR0X2lycV9tYXBf
cmF3OiBpcGFyPS9hY29ubmVjdEA3MDJjMDAwMC9hZ2ljQDcwMmY5MDAwLCBzaXplPTQNCihYRU4p
ICAtPiBhZGRyc2l6ZT0yDQooWEVOKSAgLT4gZ290IGl0ICENCihYRU4pIGlycSAwIG5vdCAoZGly
ZWN0bHkgb3IgaW5kaXJlY3RseSkgY29ubmVjdGVkIHRvIHByaW1hcnljb250cm9sbGVyLiBDb25u
ZWN0ZWQgdG8gL2Fjb25uZWN0QDcwMmMwMDAwL2FnaWNANzAyZjkwMDANCihYRU4pIGR0X2Rldmlj
ZV9nZXRfcmF3X2lycTogZGV2PS9hY29ubmVjdEA3MDJjMDAwMC9hZHNwX2F1ZGlvLCBpbmRleD0x
DQooWEVOKSAgdXNpbmcgJ2ludGVycnVwdHMnIHByb3BlcnR5DQooWEVOKSAgaW50c3BlYz0wIGlu
dGxlbj00NA0KKFhFTikgIGludHNpemU9NCBpbnRsZW49NDQNCihYRU4pIGR0X2lycV9tYXBfcmF3
OiBwYXI9L2Fjb25uZWN0QDcwMmMwMDAwL2FnaWNANzAyZjkwMDAsaW50c3BlYz1bMHgwMDAwMDAw
MCAweDAwMDAwMDI0Li4uXSxvaW50c2l6ZT00DQooWEVOKSBkdF9pcnFfbWFwX3JhdzogaXBhcj0v
YWNvbm5lY3RANzAyYzAwMDAvYWdpY0A3MDJmOTAwMCwgc2l6ZT00DQooWEVOKSAgLT4gYWRkcnNp
emU9Mg0KKFhFTikgIC0+IGdvdCBpdCAhDQooWEVOKSBpcnEgMSBub3QgKGRpcmVjdGx5IG9yIGlu
ZGlyZWN0bHkpIGNvbm5lY3RlZCB0byBwcmltYXJ5Y29udHJvbGxlci4gQ29ubmVjdGVkIHRvIC9h
Y29ubmVjdEA3MDJjMDAwMC9hZ2ljQDcwMmY5MDAwDQooWEVOKSBkdF9kZXZpY2VfZ2V0X3Jhd19p
cnE6IGRldj0vYWNvbm5lY3RANzAyYzAwMDAvYWRzcF9hdWRpbywgaW5kZXg9Mg0KKFhFTikgIHVz
aW5nICdpbnRlcnJ1cHRzJyBwcm9wZXJ0eQ0KKFhFTikgIGludHNwZWM9MCBpbnRsZW49NDQNCihY
RU4pICBpbnRzaXplPTQgaW50bGVuPTQ0DQooWEVOKSBkdF9pcnFfbWFwX3JhdzogcGFyPS9hY29u
bmVjdEA3MDJjMDAwMC9hZ2ljQDcwMmY5MDAwLGludHNwZWM9WzB4MDAwMDAwMDAgMHgwMDAwMDAy
NS4uLl0sb2ludHNpemU9NA0KKFhFTikgZHRfaXJxX21hcF9yYXc6IGlwYXI9L2Fjb25uZWN0QDcw
MmMwMDAwL2FnaWNANzAyZjkwMDAsIHNpemU9NA0KKFhFTikgIC0+IGFkZHJzaXplPTINCihYRU4p
ICAtPiBnb3QgaXQgIQ0KKFhFTikgaXJxIDIgbm90IChkaXJlY3RseSBvciBpbmRpcmVjdGx5KSBj
b25uZWN0ZWQgdG8gcHJpbWFyeWNvbnRyb2xsZXIuIENvbm5lY3RlZCB0byAvYWNvbm5lY3RANzAy
YzAwMDAvYWdpY0A3MDJmOTAwMA0KKFhFTikgZHRfZGV2aWNlX2dldF9yYXdfaXJxOiBkZXY9L2Fj
b25uZWN0QDcwMmMwMDAwL2Fkc3BfYXVkaW8sIGluZGV4PTMNCihYRU4pICB1c2luZyAnaW50ZXJy
dXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTQ0DQooWEVOKSAgaW50c2l6
ZT00IGludGxlbj00NA0KKFhFTikgZHRfaXJxX21hcF9yYXc6IHBhcj0vYWNvbm5lY3RANzAyYzAw
MDAvYWdpY0A3MDJmOTAwMCxpbnRzcGVjPVsweDAwMDAwMDAwIDB4MDAwMDAwMjYuLi5dLG9pbnRz
aXplPTQNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBpcGFyPS9hY29ubmVjdEA3MDJjMDAwMC9hZ2lj
QDcwMmY5MDAwLCBzaXplPTQNCihYRU4pICAtPiBhZGRyc2l6ZT0yDQooWEVOKSAgLT4gZ290IGl0
ICENCihYRU4pIGlycSAzIG5vdCAoZGlyZWN0bHkgb3IgaW5kaXJlY3RseSkgY29ubmVjdGVkIHRv
IHByaW1hcnljb250cm9sbGVyLiBDb25uZWN0ZWQgdG8gL2Fjb25uZWN0QDcwMmMwMDAwL2FnaWNA
NzAyZjkwMDANCihYRU4pIGR0X2RldmljZV9nZXRfcmF3X2lycTogZGV2PS9hY29ubmVjdEA3MDJj
MDAwMC9hZHNwX2F1ZGlvLCBpbmRleD00DQooWEVOKSAgdXNpbmcgJ2ludGVycnVwdHMnIHByb3Bl
cnR5DQooWEVOKSAgaW50c3BlYz0wIGludGxlbj00NA0KKFhFTikgIGludHNpemU9NCBpbnRsZW49
NDQNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBwYXI9L2Fjb25uZWN0QDcwMmMwMDAwL2FnaWNANzAy
ZjkwMDAsaW50c3BlYz1bMHgwMDAwMDAwMCAweDAwMDAwMDI3Li4uXSxvaW50c2l6ZT00DQooWEVO
KSBkdF9pcnFfbWFwX3JhdzogaXBhcj0vYWNvbm5lY3RANzAyYzAwMDAvYWdpY0A3MDJmOTAwMCwg
c2l6ZT00DQooWEVOKSAgLT4gYWRkcnNpemU9Mg0KKFhFTikgIC0+IGdvdCBpdCAhDQooWEVOKSBp
cnEgNCBub3QgKGRpcmVjdGx5IG9yIGluZGlyZWN0bHkpIGNvbm5lY3RlZCB0byBwcmltYXJ5Y29u
dHJvbGxlci4gQ29ubmVjdGVkIHRvIC9hY29ubmVjdEA3MDJjMDAwMC9hZ2ljQDcwMmY5MDAwDQoo
WEVOKSBkdF9kZXZpY2VfZ2V0X3Jhd19pcnE6IGRldj0vYWNvbm5lY3RANzAyYzAwMDAvYWRzcF9h
dWRpbywgaW5kZXg9NQ0KKFhFTikgIHVzaW5nICdpbnRlcnJ1cHRzJyBwcm9wZXJ0eQ0KKFhFTikg
IGludHNwZWM9MCBpbnRsZW49NDQNCihYRU4pICBpbnRzaXplPTQgaW50bGVuPTQ0DQooWEVOKSBk
dF9pcnFfbWFwX3JhdzogcGFyPS9hY29ubmVjdEA3MDJjMDAwMC9hZ2ljQDcwMmY5MDAwLGludHNw
ZWM9WzB4MDAwMDAwMDAgMHgwMDAwMDAyOC4uLl0sb2ludHNpemU9NA0KKFhFTikgZHRfaXJxX21h
cF9yYXc6IGlwYXI9L2Fjb25uZWN0QDcwMmMwMDAwL2FnaWNANzAyZjkwMDAsIHNpemU9NA0KKFhF
TikgIC0+IGFkZHJzaXplPTINCihYRU4pICAtPiBnb3QgaXQgIQ0KKFhFTikgaXJxIDUgbm90IChk
aXJlY3RseSBvciBpbmRpcmVjdGx5KSBjb25uZWN0ZWQgdG8gcHJpbWFyeWNvbnRyb2xsZXIuIENv
bm5lY3RlZCB0byAvYWNvbm5lY3RANzAyYzAwMDAvYWdpY0A3MDJmOTAwMA0KKFhFTikgZHRfZGV2
aWNlX2dldF9yYXdfaXJxOiBkZXY9L2Fjb25uZWN0QDcwMmMwMDAwL2Fkc3BfYXVkaW8sIGluZGV4
PTYNCihYRU4pICB1c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAg
aW50bGVuPTQ0DQooWEVOKSAgaW50c2l6ZT00IGludGxlbj00NA0KKFhFTikgZHRfaXJxX21hcF9y
YXc6IHBhcj0vYWNvbm5lY3RANzAyYzAwMDAvYWdpY0A3MDJmOTAwMCxpbnRzcGVjPVsweDAwMDAw
MDAwIDB4MDAwMDAwMjkuLi5dLG9pbnRzaXplPTQNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBpcGFy
PS9hY29ubmVjdEA3MDJjMDAwMC9hZ2ljQDcwMmY5MDAwLCBzaXplPTQNCihYRU4pICAtPiBhZGRy
c2l6ZT0yDQooWEVOKSAgLT4gZ290IGl0ICENCihYRU4pIGlycSA2IG5vdCAoZGlyZWN0bHkgb3Ig
aW5kaXJlY3RseSkgY29ubmVjdGVkIHRvIHByaW1hcnljb250cm9sbGVyLiBDb25uZWN0ZWQgdG8g
L2Fjb25uZWN0QDcwMmMwMDAwL2FnaWNANzAyZjkwMDANCihYRU4pIGR0X2RldmljZV9nZXRfcmF3
X2lycTogZGV2PS9hY29ubmVjdEA3MDJjMDAwMC9hZHNwX2F1ZGlvLCBpbmRleD03DQooWEVOKSAg
dXNpbmcgJ2ludGVycnVwdHMnIHByb3BlcnR5DQooWEVOKSAgaW50c3BlYz0wIGludGxlbj00NA0K
KFhFTikgIGludHNpemU9NCBpbnRsZW49NDQNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBwYXI9L2Fj
b25uZWN0QDcwMmMwMDAwL2FnaWNANzAyZjkwMDAsaW50c3BlYz1bMHgwMDAwMDAwMCAweDAwMDAw
MDJhLi4uXSxvaW50c2l6ZT00DQooWEVOKSBkdF9pcnFfbWFwX3JhdzogaXBhcj0vYWNvbm5lY3RA
NzAyYzAwMDAvYWdpY0A3MDJmOTAwMCwgc2l6ZT00DQooWEVOKSAgLT4gYWRkcnNpemU9Mg0KKFhF
TikgIC0+IGdvdCBpdCAhDQooWEVOKSBpcnEgNyBub3QgKGRpcmVjdGx5IG9yIGluZGlyZWN0bHkp
IGNvbm5lY3RlZCB0byBwcmltYXJ5Y29udHJvbGxlci4gQ29ubmVjdGVkIHRvIC9hY29ubmVjdEA3
MDJjMDAwMC9hZ2ljQDcwMmY5MDAwDQooWEVOKSBkdF9kZXZpY2VfZ2V0X3Jhd19pcnE6IGRldj0v
YWNvbm5lY3RANzAyYzAwMDAvYWRzcF9hdWRpbywgaW5kZXg9OA0KKFhFTikgIHVzaW5nICdpbnRl
cnJ1cHRzJyBwcm9wZXJ0eQ0KKFhFTikgIGludHNwZWM9MCBpbnRsZW49NDQNCihYRU4pICBpbnRz
aXplPTQgaW50bGVuPTQ0DQooWEVOKSBkdF9pcnFfbWFwX3JhdzogcGFyPS9hY29ubmVjdEA3MDJj
MDAwMC9hZ2ljQDcwMmY5MDAwLGludHNwZWM9WzB4MDAwMDAwMDAgMHgwMDAwMDAyYi4uLl0sb2lu
dHNpemU9NA0KKFhFTikgZHRfaXJxX21hcF9yYXc6IGlwYXI9L2Fjb25uZWN0QDcwMmMwMDAwL2Fn
aWNANzAyZjkwMDAsIHNpemU9NA0KKFhFTikgIC0+IGFkZHJzaXplPTINCihYRU4pICAtPiBnb3Qg
aXQgIQ0KKFhFTikgaXJxIDggbm90IChkaXJlY3RseSBvciBpbmRpcmVjdGx5KSBjb25uZWN0ZWQg
dG8gcHJpbWFyeWNvbnRyb2xsZXIuIENvbm5lY3RlZCB0byAvYWNvbm5lY3RANzAyYzAwMDAvYWdp
Y0A3MDJmOTAwMA0KKFhFTikgZHRfZGV2aWNlX2dldF9yYXdfaXJxOiBkZXY9L2Fjb25uZWN0QDcw
MmMwMDAwL2Fkc3BfYXVkaW8sIGluZGV4PTkNCihYRU4pICB1c2luZyAnaW50ZXJydXB0cycgcHJv
cGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTQ0DQooWEVOKSAgaW50c2l6ZT00IGludGxl
bj00NA0KKFhFTikgZHRfaXJxX21hcF9yYXc6IHBhcj0vYWNvbm5lY3RANzAyYzAwMDAvYWdpY0A3
MDJmOTAwMCxpbnRzcGVjPVsweDAwMDAwMDAwIDB4MDAwMDAwMmMuLi5dLG9pbnRzaXplPTQNCihY
RU4pIGR0X2lycV9tYXBfcmF3OiBpcGFyPS9hY29ubmVjdEA3MDJjMDAwMC9hZ2ljQDcwMmY5MDAw
LCBzaXplPTQNCihYRU4pICAtPiBhZGRyc2l6ZT0yDQooWEVOKSAgLT4gZ290IGl0ICENCihYRU4p
IGlycSA5IG5vdCAoZGlyZWN0bHkgb3IgaW5kaXJlY3RseSkgY29ubmVjdGVkIHRvIHByaW1hcnlj
b250cm9sbGVyLiBDb25uZWN0ZWQgdG8gL2Fjb25uZWN0QDcwMmMwMDAwL2FnaWNANzAyZjkwMDAN
CihYRU4pIGR0X2RldmljZV9nZXRfcmF3X2lycTogZGV2PS9hY29ubmVjdEA3MDJjMDAwMC9hZHNw
X2F1ZGlvLCBpbmRleD0xMA0KKFhFTikgIHVzaW5nICdpbnRlcnJ1cHRzJyBwcm9wZXJ0eQ0KKFhF
TikgIGludHNwZWM9MCBpbnRsZW49NDQNCihYRU4pICBpbnRzaXplPTQgaW50bGVuPTQ0DQooWEVO
KSBkdF9pcnFfbWFwX3JhdzogcGFyPS9hY29ubmVjdEA3MDJjMDAwMC9hZ2ljQDcwMmY5MDAwLGlu
dHNwZWM9WzB4MDAwMDAwMDAgMHgwMDAwMDAyZC4uLl0sb2ludHNpemU9NA0KKFhFTikgZHRfaXJx
X21hcF9yYXc6IGlwYXI9L2Fjb25uZWN0QDcwMmMwMDAwL2FnaWNANzAyZjkwMDAsIHNpemU9NA0K
KFhFTikgIC0+IGFkZHJzaXplPTINCihYRU4pICAtPiBnb3QgaXQgIQ0KKFhFTikgaXJxIDEwIG5v
dCAoZGlyZWN0bHkgb3IgaW5kaXJlY3RseSkgY29ubmVjdGVkIHRvIHByaW1hcnljb250cm9sbGVy
LiBDb25uZWN0ZWQgdG8gL2Fjb25uZWN0QDcwMmMwMDAwL2FnaWNANzAyZjkwMDANCihYRU4pIGhh
bmRsZSAvdGltZXINCihYRU4pIENyZWF0ZSB0aW1lciBub2RlDQooWEVOKSAgIFNlY3VyZSBpbnRl
cnJ1cHQgMjkNCihYRU4pICAgTm9uIHNlY3VyZSBpbnRlcnJ1cHQgMzANCihYRU4pICAgVmlydCBp
bnRlcnJ1cHQgMjcNCihYRU4pIGhhbmRsZSAvdGltZXJANjAwMDUwMDANCihYRU4pIGR0X2lycV9u
dW1iZXI6IGRldj0vdGltZXJANjAwMDUwMDANCihYRU4pICB1c2luZyAnaW50ZXJydXB0cycgcHJv
cGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTEyDQooWEVOKSAgaW50c2l6ZT0zIGludGxl
bj0xMg0KKFhFTikgZHRfZGV2aWNlX2dldF9yYXdfaXJxOiBkZXY9L3RpbWVyQDYwMDA1MDAwLCBp
bmRleD0wDQooWEVOKSAgdXNpbmcgJ2ludGVycnVwdHMnIHByb3BlcnR5DQooWEVOKSAgaW50c3Bl
Yz0wIGludGxlbj0xMg0KKFhFTikgIGludHNpemU9MyBpbnRsZW49MTINCihYRU4pIGR0X2lycV9t
YXBfcmF3OiBwYXI9L2ludGVycnVwdC1jb250cm9sbGVyQDYwMDA0MDAwLGludHNwZWM9WzB4MDAw
MDAwMDAgMHgwMDAwMDBiMC4uLl0sb2ludHNpemU9Mw0KKFhFTikgZHRfaXJxX21hcF9yYXc6IGlw
YXI9L2ludGVycnVwdC1jb250cm9sbGVyQDYwMDA0MDAwLCBzaXplPTMNCihYRU4pICAtPiBhZGRy
c2l6ZT0yDQooWEVOKSAgLT4gZ290IGl0ICENCihYRU4pIGR0X2RldmljZV9nZXRfcmF3X2lycTog
ZGV2PS90aW1lckA2MDAwNTAwMCwgaW5kZXg9MQ0KKFhFTikgIHVzaW5nICdpbnRlcnJ1cHRzJyBw
cm9wZXJ0eQ0KKFhFTikgIGludHNwZWM9MCBpbnRsZW49MTINCihYRU4pICBpbnRzaXplPTMgaW50
bGVuPTEyDQooWEVOKSBkdF9pcnFfbWFwX3JhdzogcGFyPS9pbnRlcnJ1cHQtY29udHJvbGxlckA2
MDAwNDAwMCxpbnRzcGVjPVsweDAwMDAwMDAwIDB4MDAwMDAwYjEuLi5dLG9pbnRzaXplPTMNCihY
RU4pIGR0X2lycV9tYXBfcmF3OiBpcGFyPS9pbnRlcnJ1cHQtY29udHJvbGxlckA2MDAwNDAwMCwg
c2l6ZT0zDQooWEVOKSAgLT4gYWRkcnNpemU9Mg0KKFhFTikgIC0+IGdvdCBpdCAhDQooWEVOKSBk
dF9kZXZpY2VfZ2V0X3Jhd19pcnE6IGRldj0vdGltZXJANjAwMDUwMDAsIGluZGV4PTINCihYRU4p
ICB1c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTEy
DQooWEVOKSAgaW50c2l6ZT0zIGludGxlbj0xMg0KKFhFTikgZHRfaXJxX21hcF9yYXc6IHBhcj0v
aW50ZXJydXB0LWNvbnRyb2xsZXJANjAwMDQwMDAsaW50c3BlYz1bMHgwMDAwMDAwMCAweDAwMDAw
MGIyLi4uXSxvaW50c2l6ZT0zDQooWEVOKSBkdF9pcnFfbWFwX3JhdzogaXBhcj0vaW50ZXJydXB0
LWNvbnRyb2xsZXJANjAwMDQwMDAsIHNpemU9Mw0KKFhFTikgIC0+IGFkZHJzaXplPTINCihYRU4p
ICAtPiBnb3QgaXQgIQ0KKFhFTikgZHRfZGV2aWNlX2dldF9yYXdfaXJxOiBkZXY9L3RpbWVyQDYw
MDA1MDAwLCBpbmRleD0zDQooWEVOKSAgdXNpbmcgJ2ludGVycnVwdHMnIHByb3BlcnR5DQooWEVO
KSAgaW50c3BlYz0wIGludGxlbj0xMg0KKFhFTikgIGludHNpemU9MyBpbnRsZW49MTINCihYRU4p
IGR0X2lycV9tYXBfcmF3OiBwYXI9L2ludGVycnVwdC1jb250cm9sbGVyQDYwMDA0MDAwLGludHNw
ZWM9WzB4MDAwMDAwMDAgMHgwMDAwMDBiMy4uLl0sb2ludHNpemU9Mw0KKFhFTikgZHRfaXJxX21h
cF9yYXc6IGlwYXI9L2ludGVycnVwdC1jb250cm9sbGVyQDYwMDA0MDAwLCBzaXplPTMNCihYRU4p
ICAtPiBhZGRyc2l6ZT0yDQooWEVOKSAgLT4gZ290IGl0ICENCihYRU4pIC90aW1lckA2MDAwNTAw
MCBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAxDQooWEVOKSBDaGVjayBpZiAvdGltZXJANjAwMDUw
MDAgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBk
ZXY9L3RpbWVyQDYwMDA1MDAwDQooWEVOKSAgdXNpbmcgJ2ludGVycnVwdHMnIHByb3BlcnR5DQoo
WEVOKSAgaW50c3BlYz0wIGludGxlbj0xMg0KKFhFTikgIGludHNpemU9MyBpbnRsZW49MTINCihY
RU4pIGR0X2RldmljZV9nZXRfcmF3X2lycTogZGV2PS90aW1lckA2MDAwNTAwMCwgaW5kZXg9MA0K
KFhFTikgIHVzaW5nICdpbnRlcnJ1cHRzJyBwcm9wZXJ0eQ0KKFhFTikgIGludHNwZWM9MCBpbnRs
ZW49MTINCihYRU4pICBpbnRzaXplPTMgaW50bGVuPTEyDQooWEVOKSBkdF9pcnFfbWFwX3Jhdzog
cGFyPS9pbnRlcnJ1cHQtY29udHJvbGxlckA2MDAwNDAwMCxpbnRzcGVjPVsweDAwMDAwMDAwIDB4
MDAwMDAwYjAuLi5dLG9pbnRzaXplPTMNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBpcGFyPS9pbnRl
cnJ1cHQtY29udHJvbGxlckA2MDAwNDAwMCwgc2l6ZT0zDQooWEVOKSAgLT4gYWRkcnNpemU9Mg0K
KFhFTikgIC0+IGdvdCBpdCAhDQooWEVOKSBpcnEgMCBub3QgKGRpcmVjdGx5IG9yIGluZGlyZWN0
bHkpIGNvbm5lY3RlZCB0byBwcmltYXJ5Y29udHJvbGxlci4gQ29ubmVjdGVkIHRvIC9pbnRlcnJ1
cHQtY29udHJvbGxlckA2MDAwNDAwMA0KKFhFTikgZHRfZGV2aWNlX2dldF9yYXdfaXJxOiBkZXY9
L3RpbWVyQDYwMDA1MDAwLCBpbmRleD0xDQooWEVOKSAgdXNpbmcgJ2ludGVycnVwdHMnIHByb3Bl
cnR5DQooWEVOKSAgaW50c3BlYz0wIGludGxlbj0xMg0KKFhFTikgIGludHNpemU9MyBpbnRsZW49
MTINCihYRU4pIGR0X2lycV9tYXBfcmF3OiBwYXI9L2ludGVycnVwdC1jb250cm9sbGVyQDYwMDA0
MDAwLGludHNwZWM9WzB4MDAwMDAwMDAgMHgwMDAwMDBiMS4uLl0sb2ludHNpemU9Mw0KKFhFTikg
ZHRfaXJxX21hcF9yYXc6IGlwYXI9L2ludGVycnVwdC1jb250cm9sbGVyQDYwMDA0MDAwLCBzaXpl
PTMNCihYRU4pICAtPiBhZGRyc2l6ZT0yDQooWEVOKSAgLT4gZ290IGl0ICENCihYRU4pIGlycSAx
IG5vdCAoZGlyZWN0bHkgb3IgaW5kaXJlY3RseSkgY29ubmVjdGVkIHRvIHByaW1hcnljb250cm9s
bGVyLiBDb25uZWN0ZWQgdG8gL2ludGVycnVwdC1jb250cm9sbGVyQDYwMDA0MDAwDQooWEVOKSBk
dF9kZXZpY2VfZ2V0X3Jhd19pcnE6IGRldj0vdGltZXJANjAwMDUwMDAsIGluZGV4PTINCihYRU4p
ICB1c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTEy
DQooWEVOKSAgaW50c2l6ZT0zIGludGxlbj0xMg0KKFhFTikgZHRfaXJxX21hcF9yYXc6IHBhcj0v
aW50ZXJydXB0LWNvbnRyb2xsZXJANjAwMDQwMDAsaW50c3BlYz1bMHgwMDAwMDAwMCAweDAwMDAw
MGIyLi4uXSxvaW50c2l6ZT0zDQooWEVOKSBkdF9pcnFfbWFwX3JhdzogaXBhcj0vaW50ZXJydXB0
LWNvbnRyb2xsZXJANjAwMDQwMDAsIHNpemU9Mw0KKFhFTikgIC0+IGFkZHJzaXplPTINCihYRU4p
ICAtPiBnb3QgaXQgIQ0KKFhFTikgaXJxIDIgbm90IChkaXJlY3RseSBvciBpbmRpcmVjdGx5KSBj
b25uZWN0ZWQgdG8gcHJpbWFyeWNvbnRyb2xsZXIuIENvbm5lY3RlZCB0byAvaW50ZXJydXB0LWNv
bnRyb2xsZXJANjAwMDQwMDANCihYRU4pIGR0X2RldmljZV9nZXRfcmF3X2lycTogZGV2PS90aW1l
ckA2MDAwNTAwMCwgaW5kZXg9Mw0KKFhFTikgIHVzaW5nICdpbnRlcnJ1cHRzJyBwcm9wZXJ0eQ0K
KFhFTikgIGludHNwZWM9MCBpbnRsZW49MTINCihYRU4pICBpbnRzaXplPTMgaW50bGVuPTEyDQoo
WEVOKSBkdF9pcnFfbWFwX3JhdzogcGFyPS9pbnRlcnJ1cHQtY29udHJvbGxlckA2MDAwNDAwMCxp
bnRzcGVjPVsweDAwMDAwMDAwIDB4MDAwMDAwYjMuLi5dLG9pbnRzaXplPTMNCihYRU4pIGR0X2ly
cV9tYXBfcmF3OiBpcGFyPS9pbnRlcnJ1cHQtY29udHJvbGxlckA2MDAwNDAwMCwgc2l6ZT0zDQoo
WEVOKSAgLT4gYWRkcnNpemU9Mg0KKFhFTikgIC0+IGdvdCBpdCAhDQooWEVOKSBpcnEgMyBub3Qg
KGRpcmVjdGx5IG9yIGluZGlyZWN0bHkpIGNvbm5lY3RlZCB0byBwcmltYXJ5Y29udHJvbGxlci4g
Q29ubmVjdGVkIHRvIC9pbnRlcnJ1cHQtY29udHJvbGxlckA2MDAwNDAwMA0KKFhFTikgRFQ6ICoq
IHRyYW5zbGF0aW9uIGZvciBkZXZpY2UgL3RpbWVyQDYwMDA1MDAwICoqDQooWEVOKSBEVDogYnVz
IGlzIGRlZmF1bHQgKG5hPTIsIG5zPTIpIG9uIC8NCihYRU4pIERUOiB0cmFuc2xhdGluZyBhZGRy
ZXNzOjwzPiAwMDAwMDAwMDwzPiA2MDAwNTAwMDwzPg0KKFhFTikgRFQ6IHJlYWNoZWQgcm9vdCBu
b2RlDQooWEVOKSAgIC0gTU1JTzogMDA2MDAwNTAwMCAtIDAwNjAwMDU0MDAgUDJNVHlwZT01DQoo
WEVOKSBoYW5kbGUgL3J0Yw0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9ydGMNCihYRU4pICB1
c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTMNCihY
RU4pICBpbnRzaXplPTMgaW50bGVuPTMNCihYRU4pIGR0X2RldmljZV9nZXRfcmF3X2lycTogZGV2
PS9ydGMsIGluZGV4PTANCihYRU4pICB1c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihYRU4p
ICBpbnRzcGVjPTAgaW50bGVuPTMNCihYRU4pICBpbnRzaXplPTMgaW50bGVuPTMNCihYRU4pIGR0
X2lycV9tYXBfcmF3OiBwYXI9L2ludGVycnVwdC1jb250cm9sbGVyQDYwMDA0MDAwLGludHNwZWM9
WzB4MDAwMDAwMDAgMHgwMDAwMDAwMi4uLl0sb2ludHNpemU9Mw0KKFhFTikgZHRfaXJxX21hcF9y
YXc6IGlwYXI9L2ludGVycnVwdC1jb250cm9sbGVyQDYwMDA0MDAwLCBzaXplPTMNCihYRU4pICAt
PiBhZGRyc2l6ZT0yDQooWEVOKSAgLT4gZ290IGl0ICENCihYRU4pIC9ydGMgcGFzc3Rocm91Z2gg
PSAxIG5hZGRyID0gMQ0KKFhFTikgQ2hlY2sgaWYgL3J0YyBpcyBiZWhpbmQgdGhlIElPTU1VIGFu
ZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcnRjDQooWEVOKSAgdXNpbmcgJ2lu
dGVycnVwdHMnIHByb3BlcnR5DQooWEVOKSAgaW50c3BlYz0wIGludGxlbj0zDQooWEVOKSAgaW50
c2l6ZT0zIGludGxlbj0zDQooWEVOKSBkdF9kZXZpY2VfZ2V0X3Jhd19pcnE6IGRldj0vcnRjLCBp
bmRleD0wDQooWEVOKSAgdXNpbmcgJ2ludGVycnVwdHMnIHByb3BlcnR5DQooWEVOKSAgaW50c3Bl
Yz0wIGludGxlbj0zDQooWEVOKSAgaW50c2l6ZT0zIGludGxlbj0zDQooWEVOKSBkdF9pcnFfbWFw
X3JhdzogcGFyPS9pbnRlcnJ1cHQtY29udHJvbGxlckA2MDAwNDAwMCxpbnRzcGVjPVsweDAwMDAw
MDAwIDB4MDAwMDAwMDIuLi5dLG9pbnRzaXplPTMNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBpcGFy
PS9pbnRlcnJ1cHQtY29udHJvbGxlckA2MDAwNDAwMCwgc2l6ZT0zDQooWEVOKSAgLT4gYWRkcnNp
emU9Mg0KKFhFTikgIC0+IGdvdCBpdCAhDQooWEVOKSBpcnEgMCBub3QgKGRpcmVjdGx5IG9yIGlu
ZGlyZWN0bHkpIGNvbm5lY3RlZCB0byBwcmltYXJ5Y29udHJvbGxlci4gQ29ubmVjdGVkIHRvIC9p
bnRlcnJ1cHQtY29udHJvbGxlckA2MDAwNDAwMA0KKFhFTikgRFQ6ICoqIHRyYW5zbGF0aW9uIGZv
ciBkZXZpY2UgL3J0YyAqKg0KKFhFTikgRFQ6IGJ1cyBpcyBkZWZhdWx0IChuYT0yLCBucz0yKSBv
biAvDQooWEVOKSBEVDogdHJhbnNsYXRpbmcgYWRkcmVzczo8Mz4gMDAwMDAwMDA8Mz4gNzAwMGUw
MDA8Mz4NCihYRU4pIERUOiByZWFjaGVkIHJvb3Qgbm9kZQ0KKFhFTikgICAtIE1NSU86IDAwNzAw
MGUwMDAgLSAwMDcwMDBlMTAwIFAyTVR5cGU9NQ0KKFhFTikgaGFuZGxlIC9kbWFANjAwMjAwMDAN
CihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vZG1hQDYwMDIwMDAwDQooWEVOKSAgdXNpbmcgJ2lu
dGVycnVwdHMnIHByb3BlcnR5DQooWEVOKSAgaW50c3BlYz0wIGludGxlbj05Ng0KKFhFTikgIGlu
dHNpemU9MyBpbnRsZW49OTYNCihYRU4pIGR0X2RldmljZV9nZXRfcmF3X2lycTogZGV2PS9kbWFA
NjAwMjAwMDAsIGluZGV4PTANCihYRU4pICB1c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihY
RU4pICBpbnRzcGVjPTAgaW50bGVuPTk2DQooWEVOKSAgaW50c2l6ZT0zIGludGxlbj05Ng0KKFhF
TikgZHRfaXJxX21hcF9yYXc6IHBhcj0vaW50ZXJydXB0LWNvbnRyb2xsZXJANjAwMDQwMDAsaW50
c3BlYz1bMHgwMDAwMDAwMCAweDAwMDAwMDY4Li4uXSxvaW50c2l6ZT0zDQooWEVOKSBkdF9pcnFf
bWFwX3JhdzogaXBhcj0vaW50ZXJydXB0LWNvbnRyb2xsZXJANjAwMDQwMDAsIHNpemU9Mw0KKFhF
TikgIC0+IGFkZHJzaXplPTINCihYRU4pICAtPiBnb3QgaXQgIQ0KKFhFTikgZHRfZGV2aWNlX2dl
dF9yYXdfaXJxOiBkZXY9L2RtYUA2MDAyMDAwMCwgaW5kZXg9MQ0KKFhFTikgIHVzaW5nICdpbnRl
cnJ1cHRzJyBwcm9wZXJ0eQ0KKFhFTikgIGludHNwZWM9MCBpbnRsZW49OTYNCihYRU4pICBpbnRz
aXplPTMgaW50bGVuPTk2DQooWEVOKSBkdF9pcnFfbWFwX3JhdzogcGFyPS9pbnRlcnJ1cHQtY29u
dHJvbGxlckA2MDAwNDAwMCxpbnRzcGVjPVsweDAwMDAwMDAwIDB4MDAwMDAwNjkuLi5dLG9pbnRz
aXplPTMNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBpcGFyPS9pbnRlcnJ1cHQtY29udHJvbGxlckA2
MDAwNDAwMCwgc2l6ZT0zDQooWEVOKSAgLT4gYWRkcnNpemU9Mg0KKFhFTikgIC0+IGdvdCBpdCAh
DQooWEVOKSBkdF9kZXZpY2VfZ2V0X3Jhd19pcnE6IGRldj0vZG1hQDYwMDIwMDAwLCBpbmRleD0y
DQooWEVOKSAgdXNpbmcgJ2ludGVycnVwdHMnIHByb3BlcnR5DQooWEVOKSAgaW50c3BlYz0wIGlu
dGxlbj05Ng0KKFhFTikgIGludHNpemU9MyBpbnRsZW49OTYNCihYRU4pIGR0X2lycV9tYXBfcmF3
OiBwYXI9L2ludGVycnVwdC1jb250cm9sbGVyQDYwMDA0MDAwLGludHNwZWM9WzB4MDAwMDAwMDAg
MHgwMDAwMDA2YS4uLl0sb2ludHNpemU9Mw0KKFhFTikgZHRfaXJxX21hcF9yYXc6IGlwYXI9L2lu
dGVycnVwdC1jb250cm9sbGVyQDYwMDA0MDAwLCBzaXplPTMNCihYRU4pICAtPiBhZGRyc2l6ZT0y
DQooWEVOKSAgLT4gZ290IGl0ICENCihYRU4pIGR0X2RldmljZV9nZXRfcmF3X2lycTogZGV2PS9k
bWFANjAwMjAwMDAsIGluZGV4PTMNCihYRU4pICB1c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkN
CihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTk2DQooWEVOKSAgaW50c2l6ZT0zIGludGxlbj05Ng0K
KFhFTikgZHRfaXJxX21hcF9yYXc6IHBhcj0vaW50ZXJydXB0LWNvbnRyb2xsZXJANjAwMDQwMDAs
aW50c3BlYz1bMHgwMDAwMDAwMCAweDAwMDAwMDZiLi4uXSxvaW50c2l6ZT0zDQooWEVOKSBkdF9p
cnFfbWFwX3JhdzogaXBhcj0vaW50ZXJydXB0LWNvbnRyb2xsZXJANjAwMDQwMDAsIHNpemU9Mw0K
KFhFTikgIC0+IGFkZHJzaXplPTINCihYRU4pICAtPiBnb3QgaXQgIQ0KKFhFTikgZHRfZGV2aWNl
X2dldF9yYXdfaXJxOiBkZXY9L2RtYUA2MDAyMDAwMCwgaW5kZXg9NA0KKFhFTikgIHVzaW5nICdp
bnRlcnJ1cHRzJyBwcm9wZXJ0eQ0KKFhFTikgIGludHNwZWM9MCBpbnRsZW49OTYNCihYRU4pICBp
bnRzaXplPTMgaW50bGVuPTk2DQooWEVOKSBkdF9pcnFfbWFwX3JhdzogcGFyPS9pbnRlcnJ1cHQt
Y29udHJvbGxlckA2MDAwNDAwMCxpbnRzcGVjPVsweDAwMDAwMDAwIDB4MDAwMDAwNmMuLi5dLG9p
bnRzaXplPTMNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBpcGFyPS9pbnRlcnJ1cHQtY29udHJvbGxl
ckA2MDAwNDAwMCwgc2l6ZT0zDQooWEVOKSAgLT4gYWRkcnNpemU9Mg0KKFhFTikgIC0+IGdvdCBp
dCAhDQooWEVOKSBkdF9kZXZpY2VfZ2V0X3Jhd19pcnE6IGRldj0vZG1hQDYwMDIwMDAwLCBpbmRl
eD01DQooWEVOKSAgdXNpbmcgJ2ludGVycnVwdHMnIHByb3BlcnR5DQooWEVOKSAgaW50c3BlYz0w
IGludGxlbj05Ng0KKFhFTikgIGludHNpemU9MyBpbnRsZW49OTYNCihYRU4pIGR0X2lycV9tYXBf
cmF3OiBwYXI9L2ludGVycnVwdC1jb250cm9sbGVyQDYwMDA0MDAwLGludHNwZWM9WzB4MDAwMDAw
MDAgMHgwMDAwMDA2ZC4uLl0sb2ludHNpemU9Mw0KKFhFTikgZHRfaXJxX21hcF9yYXc6IGlwYXI9
L2ludGVycnVwdC1jb250cm9sbGVyQDYwMDA0MDAwLCBzaXplPTMNCihYRU4pICAtPiBhZGRyc2l6
ZT0yDQooWEVOKSAgLT4gZ290IGl0ICENCihYRU4pIGR0X2RldmljZV9nZXRfcmF3X2lycTogZGV2
PS9kbWFANjAwMjAwMDAsIGluZGV4PTYNCihYRU4pICB1c2luZyAnaW50ZXJydXB0cycgcHJvcGVy
dHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTk2DQooWEVOKSAgaW50c2l6ZT0zIGludGxlbj05
Ng0KKFhFTikgZHRfaXJxX21hcF9yYXc6IHBhcj0vaW50ZXJydXB0LWNvbnRyb2xsZXJANjAwMDQw
MDAsaW50c3BlYz1bMHgwMDAwMDAwMCAweDAwMDAwMDZlLi4uXSxvaW50c2l6ZT0zDQooWEVOKSBk
dF9pcnFfbWFwX3JhdzogaXBhcj0vaW50ZXJydXB0LWNvbnRyb2xsZXJANjAwMDQwMDAsIHNpemU9
Mw0KKFhFTikgIC0+IGFkZHJzaXplPTINCihYRU4pICAtPiBnb3QgaXQgIQ0KKFhFTikgZHRfZGV2
aWNlX2dldF9yYXdfaXJxOiBkZXY9L2RtYUA2MDAyMDAwMCwgaW5kZXg9Nw0KKFhFTikgIHVzaW5n
ICdpbnRlcnJ1cHRzJyBwcm9wZXJ0eQ0KKFhFTikgIGludHNwZWM9MCBpbnRsZW49OTYNCihYRU4p
ICBpbnRzaXplPTMgaW50bGVuPTk2DQooWEVOKSBkdF9pcnFfbWFwX3JhdzogcGFyPS9pbnRlcnJ1
cHQtY29udHJvbGxlckA2MDAwNDAwMCxpbnRzcGVjPVsweDAwMDAwMDAwIDB4MDAwMDAwNmYuLi5d
LG9pbnRzaXplPTMNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBpcGFyPS9pbnRlcnJ1cHQtY29udHJv
bGxlckA2MDAwNDAwMCwgc2l6ZT0zDQooWEVOKSAgLT4gYWRkcnNpemU9Mg0KKFhFTikgIC0+IGdv
dCBpdCAhDQooWEVOKSBkdF9kZXZpY2VfZ2V0X3Jhd19pcnE6IGRldj0vZG1hQDYwMDIwMDAwLCBp
bmRleD04DQooWEVOKSAgdXNpbmcgJ2ludGVycnVwdHMnIHByb3BlcnR5DQooWEVOKSAgaW50c3Bl
Yz0wIGludGxlbj05Ng0KKFhFTikgIGludHNpemU9MyBpbnRsZW49OTYNCihYRU4pIGR0X2lycV9t
YXBfcmF3OiBwYXI9L2ludGVycnVwdC1jb250cm9sbGVyQDYwMDA0MDAwLGludHNwZWM9WzB4MDAw
MDAwMDAgMHgwMDAwMDA3MC4uLl0sb2ludHNpemU9Mw0KKFhFTikgZHRfaXJxX21hcF9yYXc6IGlw
YXI9L2ludGVycnVwdC1jb250cm9sbGVyQDYwMDA0MDAwLCBzaXplPTMNCihYRU4pICAtPiBhZGRy
c2l6ZT0yDQooWEVOKSAgLT4gZ290IGl0ICENCihYRU4pIGR0X2RldmljZV9nZXRfcmF3X2lycTog
ZGV2PS9kbWFANjAwMjAwMDAsIGluZGV4PTkNCihYRU4pICB1c2luZyAnaW50ZXJydXB0cycgcHJv
cGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTk2DQooWEVOKSAgaW50c2l6ZT0zIGludGxl
bj05Ng0KKFhFTikgZHRfaXJxX21hcF9yYXc6IHBhcj0vaW50ZXJydXB0LWNvbnRyb2xsZXJANjAw
MDQwMDAsaW50c3BlYz1bMHgwMDAwMDAwMCAweDAwMDAwMDcxLi4uXSxvaW50c2l6ZT0zDQooWEVO
KSBkdF9pcnFfbWFwX3JhdzogaXBhcj0vaW50ZXJydXB0LWNvbnRyb2xsZXJANjAwMDQwMDAsIHNp
emU9Mw0KKFhFTikgIC0+IGFkZHJzaXplPTINCihYRU4pICAtPiBnb3QgaXQgIQ0KKFhFTikgZHRf
ZGV2aWNlX2dldF9yYXdfaXJxOiBkZXY9L2RtYUA2MDAyMDAwMCwgaW5kZXg9MTANCihYRU4pICB1
c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTk2DQoo
WEVOKSAgaW50c2l6ZT0zIGludGxlbj05Ng0KKFhFTikgZHRfaXJxX21hcF9yYXc6IHBhcj0vaW50
ZXJydXB0LWNvbnRyb2xsZXJANjAwMDQwMDAsaW50c3BlYz1bMHgwMDAwMDAwMCAweDAwMDAwMDcy
Li4uXSxvaW50c2l6ZT0zDQooWEVOKSBkdF9pcnFfbWFwX3JhdzogaXBhcj0vaW50ZXJydXB0LWNv
bnRyb2xsZXJANjAwMDQwMDAsIHNpemU9Mw0KKFhFTikgIC0+IGFkZHJzaXplPTINCihYRU4pICAt
PiBnb3QgaXQgIQ0KKFhFTikgZHRfZGV2aWNlX2dldF9yYXdfaXJxOiBkZXY9L2RtYUA2MDAyMDAw
MCwgaW5kZXg9MTENCihYRU4pICB1c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihYRU4pICBp
bnRzcGVjPTAgaW50bGVuPTk2DQooWEVOKSAgaW50c2l6ZT0zIGludGxlbj05Ng0KKFhFTikgZHRf
aXJxX21hcF9yYXc6IHBhcj0vaW50ZXJydXB0LWNvbnRyb2xsZXJANjAwMDQwMDAsaW50c3BlYz1b
MHgwMDAwMDAwMCAweDAwMDAwMDczLi4uXSxvaW50c2l6ZT0zDQooWEVOKSBkdF9pcnFfbWFwX3Jh
dzogaXBhcj0vaW50ZXJydXB0LWNvbnRyb2xsZXJANjAwMDQwMDAsIHNpemU9Mw0KKFhFTikgIC0+
IGFkZHJzaXplPTINCihYRU4pICAtPiBnb3QgaXQgIQ0KKFhFTikgZHRfZGV2aWNlX2dldF9yYXdf
aXJxOiBkZXY9L2RtYUA2MDAyMDAwMCwgaW5kZXg9MTINCihYRU4pICB1c2luZyAnaW50ZXJydXB0
cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTk2DQooWEVOKSAgaW50c2l6ZT0z
IGludGxlbj05Ng0KKFhFTikgZHRfaXJxX21hcF9yYXc6IHBhcj0vaW50ZXJydXB0LWNvbnRyb2xs
ZXJANjAwMDQwMDAsaW50c3BlYz1bMHgwMDAwMDAwMCAweDAwMDAwMDc0Li4uXSxvaW50c2l6ZT0z
DQooWEVOKSBkdF9pcnFfbWFwX3JhdzogaXBhcj0vaW50ZXJydXB0LWNvbnRyb2xsZXJANjAwMDQw
MDAsIHNpemU9Mw0KKFhFTikgIC0+IGFkZHJzaXplPTINCihYRU4pICAtPiBnb3QgaXQgIQ0KKFhF
TikgZHRfZGV2aWNlX2dldF9yYXdfaXJxOiBkZXY9L2RtYUA2MDAyMDAwMCwgaW5kZXg9MTMNCihY
RU4pICB1c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVu
PTk2DQooWEVOKSAgaW50c2l6ZT0zIGludGxlbj05Ng0KKFhFTikgZHRfaXJxX21hcF9yYXc6IHBh
cj0vaW50ZXJydXB0LWNvbnRyb2xsZXJANjAwMDQwMDAsaW50c3BlYz1bMHgwMDAwMDAwMCAweDAw
MDAwMDc1Li4uXSxvaW50c2l6ZT0zDQooWEVOKSBkdF9pcnFfbWFwX3JhdzogaXBhcj0vaW50ZXJy
dXB0LWNvbnRyb2xsZXJANjAwMDQwMDAsIHNpemU9Mw0KKFhFTikgIC0+IGFkZHJzaXplPTINCihY
RU4pICAtPiBnb3QgaXQgIQ0KKFhFTikgZHRfZGV2aWNlX2dldF9yYXdfaXJxOiBkZXY9L2RtYUA2
MDAyMDAwMCwgaW5kZXg9MTQNCihYRU4pICB1c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihY
RU4pICBpbnRzcGVjPTAgaW50bGVuPTk2DQooWEVOKSAgaW50c2l6ZT0zIGludGxlbj05Ng0KKFhF
TikgZHRfaXJxX21hcF9yYXc6IHBhcj0vaW50ZXJydXB0LWNvbnRyb2xsZXJANjAwMDQwMDAsaW50
c3BlYz1bMHgwMDAwMDAwMCAweDAwMDAwMDc2Li4uXSxvaW50c2l6ZT0zDQooWEVOKSBkdF9pcnFf
bWFwX3JhdzogaXBhcj0vaW50ZXJydXB0LWNvbnRyb2xsZXJANjAwMDQwMDAsIHNpemU9Mw0KKFhF
TikgIC0+IGFkZHJzaXplPTINCihYRU4pICAtPiBnb3QgaXQgIQ0KKFhFTikgZHRfZGV2aWNlX2dl
dF9yYXdfaXJxOiBkZXY9L2RtYUA2MDAyMDAwMCwgaW5kZXg9MTUNCihYRU4pICB1c2luZyAnaW50
ZXJydXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTk2DQooWEVOKSAgaW50
c2l6ZT0zIGludGxlbj05Ng0KKFhFTikgZHRfaXJxX21hcF9yYXc6IHBhcj0vaW50ZXJydXB0LWNv
bnRyb2xsZXJANjAwMDQwMDAsaW50c3BlYz1bMHgwMDAwMDAwMCAweDAwMDAwMDc3Li4uXSxvaW50
c2l6ZT0zDQooWEVOKSBkdF9pcnFfbWFwX3JhdzogaXBhcj0vaW50ZXJydXB0LWNvbnRyb2xsZXJA
NjAwMDQwMDAsIHNpemU9Mw0KKFhFTikgIC0+IGFkZHJzaXplPTINCihYRU4pICAtPiBnb3QgaXQg
IQ0KKFhFTikgZHRfZGV2aWNlX2dldF9yYXdfaXJxOiBkZXY9L2RtYUA2MDAyMDAwMCwgaW5kZXg9
MTYNCihYRU4pICB1c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAg
aW50bGVuPTk2DQooWEVOKSAgaW50c2l6ZT0zIGludGxlbj05Ng0KKFhFTikgZHRfaXJxX21hcF9y
YXc6IHBhcj0vaW50ZXJydXB0LWNvbnRyb2xsZXJANjAwMDQwMDAsaW50c3BlYz1bMHgwMDAwMDAw
MCAweDAwMDAwMDgwLi4uXSxvaW50c2l6ZT0zDQooWEVOKSBkdF9pcnFfbWFwX3JhdzogaXBhcj0v
aW50ZXJydXB0LWNvbnRyb2xsZXJANjAwMDQwMDAsIHNpemU9Mw0KKFhFTikgIC0+IGFkZHJzaXpl
PTINCihYRU4pICAtPiBnb3QgaXQgIQ0KKFhFTikgZHRfZGV2aWNlX2dldF9yYXdfaXJxOiBkZXY9
L2RtYUA2MDAyMDAwMCwgaW5kZXg9MTcNCihYRU4pICB1c2luZyAnaW50ZXJydXB0cycgcHJvcGVy
dHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTk2DQooWEVOKSAgaW50c2l6ZT0zIGludGxlbj05
Ng0KKFhFTikgZHRfaXJxX21hcF9yYXc6IHBhcj0vaW50ZXJydXB0LWNvbnRyb2xsZXJANjAwMDQw
MDAsaW50c3BlYz1bMHgwMDAwMDAwMCAweDAwMDAwMDgxLi4uXSxvaW50c2l6ZT0zDQooWEVOKSBk
dF9pcnFfbWFwX3JhdzogaXBhcj0vaW50ZXJydXB0LWNvbnRyb2xsZXJANjAwMDQwMDAsIHNpemU9
Mw0KKFhFTikgIC0+IGFkZHJzaXplPTINCihYRU4pICAtPiBnb3QgaXQgIQ0KKFhFTikgZHRfZGV2
aWNlX2dldF9yYXdfaXJxOiBkZXY9L2RtYUA2MDAyMDAwMCwgaW5kZXg9MTgNCihYRU4pICB1c2lu
ZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTk2DQooWEVO
KSAgaW50c2l6ZT0zIGludGxlbj05Ng0KKFhFTikgZHRfaXJxX21hcF9yYXc6IHBhcj0vaW50ZXJy
dXB0LWNvbnRyb2xsZXJANjAwMDQwMDAsaW50c3BlYz1bMHgwMDAwMDAwMCAweDAwMDAwMDgyLi4u
XSxvaW50c2l6ZT0zDQooWEVOKSBkdF9pcnFfbWFwX3JhdzogaXBhcj0vaW50ZXJydXB0LWNvbnRy
b2xsZXJANjAwMDQwMDAsIHNpemU9Mw0KKFhFTikgIC0+IGFkZHJzaXplPTINCihYRU4pICAtPiBn
b3QgaXQgIQ0KKFhFTikgZHRfZGV2aWNlX2dldF9yYXdfaXJxOiBkZXY9L2RtYUA2MDAyMDAwMCwg
aW5kZXg9MTkNCihYRU4pICB1c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRz
cGVjPTAgaW50bGVuPTk2DQooWEVOKSAgaW50c2l6ZT0zIGludGxlbj05Ng0KKFhFTikgZHRfaXJx
X21hcF9yYXc6IHBhcj0vaW50ZXJydXB0LWNvbnRyb2xsZXJANjAwMDQwMDAsaW50c3BlYz1bMHgw
MDAwMDAwMCAweDAwMDAwMDgzLi4uXSxvaW50c2l6ZT0zDQooWEVOKSBkdF9pcnFfbWFwX3Jhdzog
aXBhcj0vaW50ZXJydXB0LWNvbnRyb2xsZXJANjAwMDQwMDAsIHNpemU9Mw0KKFhFTikgIC0+IGFk
ZHJzaXplPTINCihYRU4pICAtPiBnb3QgaXQgIQ0KKFhFTikgZHRfZGV2aWNlX2dldF9yYXdfaXJx
OiBkZXY9L2RtYUA2MDAyMDAwMCwgaW5kZXg9MjANCihYRU4pICB1c2luZyAnaW50ZXJydXB0cycg
cHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTk2DQooWEVOKSAgaW50c2l6ZT0zIGlu
dGxlbj05Ng0KKFhFTikgZHRfaXJxX21hcF9yYXc6IHBhcj0vaW50ZXJydXB0LWNvbnRyb2xsZXJA
NjAwMDQwMDAsaW50c3BlYz1bMHgwMDAwMDAwMCAweDAwMDAwMDg0Li4uXSxvaW50c2l6ZT0zDQoo
WEVOKSBkdF9pcnFfbWFwX3JhdzogaXBhcj0vaW50ZXJydXB0LWNvbnRyb2xsZXJANjAwMDQwMDAs
IHNpemU9Mw0KKFhFTikgIC0+IGFkZHJzaXplPTINCihYRU4pICAtPiBnb3QgaXQgIQ0KKFhFTikg
ZHRfZGV2aWNlX2dldF9yYXdfaXJxOiBkZXY9L2RtYUA2MDAyMDAwMCwgaW5kZXg9MjENCihYRU4p
ICB1c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTk2
DQooWEVOKSAgaW50c2l6ZT0zIGludGxlbj05Ng0KKFhFTikgZHRfaXJxX21hcF9yYXc6IHBhcj0v
aW50ZXJydXB0LWNvbnRyb2xsZXJANjAwMDQwMDAsaW50c3BlYz1bMHgwMDAwMDAwMCAweDAwMDAw
MDg1Li4uXSxvaW50c2l6ZT0zDQooWEVOKSBkdF9pcnFfbWFwX3JhdzogaXBhcj0vaW50ZXJydXB0
LWNvbnRyb2xsZXJANjAwMDQwMDAsIHNpemU9Mw0KKFhFTikgIC0+IGFkZHJzaXplPTINCihYRU4p
ICAtPiBnb3QgaXQgIQ0KKFhFTikgZHRfZGV2aWNlX2dldF9yYXdfaXJxOiBkZXY9L2RtYUA2MDAy
MDAwMCwgaW5kZXg9MjINCihYRU4pICB1c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihYRU4p
ICBpbnRzcGVjPTAgaW50bGVuPTk2DQooWEVOKSAgaW50c2l6ZT0zIGludGxlbj05Ng0KKFhFTikg
ZHRfaXJxX21hcF9yYXc6IHBhcj0vaW50ZXJydXB0LWNvbnRyb2xsZXJANjAwMDQwMDAsaW50c3Bl
Yz1bMHgwMDAwMDAwMCAweDAwMDAwMDg2Li4uXSxvaW50c2l6ZT0zDQooWEVOKSBkdF9pcnFfbWFw
X3JhdzogaXBhcj0vaW50ZXJydXB0LWNvbnRyb2xsZXJANjAwMDQwMDAsIHNpemU9Mw0KKFhFTikg
IC0+IGFkZHJzaXplPTINCihYRU4pICAtPiBnb3QgaXQgIQ0KKFhFTikgZHRfZGV2aWNlX2dldF9y
YXdfaXJxOiBkZXY9L2RtYUA2MDAyMDAwMCwgaW5kZXg9MjMNCihYRU4pICB1c2luZyAnaW50ZXJy
dXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTk2DQooWEVOKSAgaW50c2l6
ZT0zIGludGxlbj05Ng0KKFhFTikgZHRfaXJxX21hcF9yYXc6IHBhcj0vaW50ZXJydXB0LWNvbnRy
b2xsZXJANjAwMDQwMDAsaW50c3BlYz1bMHgwMDAwMDAwMCAweDAwMDAwMDg3Li4uXSxvaW50c2l6
ZT0zDQooWEVOKSBkdF9pcnFfbWFwX3JhdzogaXBhcj0vaW50ZXJydXB0LWNvbnRyb2xsZXJANjAw
MDQwMDAsIHNpemU9Mw0KKFhFTikgIC0+IGFkZHJzaXplPTINCihYRU4pICAtPiBnb3QgaXQgIQ0K
KFhFTikgZHRfZGV2aWNlX2dldF9yYXdfaXJxOiBkZXY9L2RtYUA2MDAyMDAwMCwgaW5kZXg9MjQN
CihYRU4pICB1c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50
bGVuPTk2DQooWEVOKSAgaW50c2l6ZT0zIGludGxlbj05Ng0KKFhFTikgZHRfaXJxX21hcF9yYXc6
IHBhcj0vaW50ZXJydXB0LWNvbnRyb2xsZXJANjAwMDQwMDAsaW50c3BlYz1bMHgwMDAwMDAwMCAw
eDAwMDAwMDg4Li4uXSxvaW50c2l6ZT0zDQooWEVOKSBkdF9pcnFfbWFwX3JhdzogaXBhcj0vaW50
ZXJydXB0LWNvbnRyb2xsZXJANjAwMDQwMDAsIHNpemU9Mw0KKFhFTikgIC0+IGFkZHJzaXplPTIN
CihYRU4pICAtPiBnb3QgaXQgIQ0KKFhFTikgZHRfZGV2aWNlX2dldF9yYXdfaXJxOiBkZXY9L2Rt
YUA2MDAyMDAwMCwgaW5kZXg9MjUNCihYRU4pICB1c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkN
CihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTk2DQooWEVOKSAgaW50c2l6ZT0zIGludGxlbj05Ng0K
KFhFTikgZHRfaXJxX21hcF9yYXc6IHBhcj0vaW50ZXJydXB0LWNvbnRyb2xsZXJANjAwMDQwMDAs
aW50c3BlYz1bMHgwMDAwMDAwMCAweDAwMDAwMDg5Li4uXSxvaW50c2l6ZT0zDQooWEVOKSBkdF9p
cnFfbWFwX3JhdzogaXBhcj0vaW50ZXJydXB0LWNvbnRyb2xsZXJANjAwMDQwMDAsIHNpemU9Mw0K
KFhFTikgIC0+IGFkZHJzaXplPTINCihYRU4pICAtPiBnb3QgaXQgIQ0KKFhFTikgZHRfZGV2aWNl
X2dldF9yYXdfaXJxOiBkZXY9L2RtYUA2MDAyMDAwMCwgaW5kZXg9MjYNCihYRU4pICB1c2luZyAn
aW50ZXJydXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTk2DQooWEVOKSAg
aW50c2l6ZT0zIGludGxlbj05Ng0KKFhFTikgZHRfaXJxX21hcF9yYXc6IHBhcj0vaW50ZXJydXB0
LWNvbnRyb2xsZXJANjAwMDQwMDAsaW50c3BlYz1bMHgwMDAwMDAwMCAweDAwMDAwMDhhLi4uXSxv
aW50c2l6ZT0zDQooWEVOKSBkdF9pcnFfbWFwX3JhdzogaXBhcj0vaW50ZXJydXB0LWNvbnRyb2xs
ZXJANjAwMDQwMDAsIHNpemU9Mw0KKFhFTikgIC0+IGFkZHJzaXplPTINCihYRU4pICAtPiBnb3Qg
aXQgIQ0KKFhFTikgZHRfZGV2aWNlX2dldF9yYXdfaXJxOiBkZXY9L2RtYUA2MDAyMDAwMCwgaW5k
ZXg9MjcNCihYRU4pICB1c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVj
PTAgaW50bGVuPTk2DQooWEVOKSAgaW50c2l6ZT0zIGludGxlbj05Ng0KKFhFTikgZHRfaXJxX21h
cF9yYXc6IHBhcj0vaW50ZXJydXB0LWNvbnRyb2xsZXJANjAwMDQwMDAsaW50c3BlYz1bMHgwMDAw
MDAwMCAweDAwMDAwMDhiLi4uXSxvaW50c2l6ZT0zDQooWEVOKSBkdF9pcnFfbWFwX3JhdzogaXBh
cj0vaW50ZXJydXB0LWNvbnRyb2xsZXJANjAwMDQwMDAsIHNpemU9Mw0KKFhFTikgIC0+IGFkZHJz
aXplPTINCihYRU4pICAtPiBnb3QgaXQgIQ0KKFhFTikgZHRfZGV2aWNlX2dldF9yYXdfaXJxOiBk
ZXY9L2RtYUA2MDAyMDAwMCwgaW5kZXg9MjgNCihYRU4pICB1c2luZyAnaW50ZXJydXB0cycgcHJv
cGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTk2DQooWEVOKSAgaW50c2l6ZT0zIGludGxl
bj05Ng0KKFhFTikgZHRfaXJxX21hcF9yYXc6IHBhcj0vaW50ZXJydXB0LWNvbnRyb2xsZXJANjAw
MDQwMDAsaW50c3BlYz1bMHgwMDAwMDAwMCAweDAwMDAwMDhjLi4uXSxvaW50c2l6ZT0zDQooWEVO
KSBkdF9pcnFfbWFwX3JhdzogaXBhcj0vaW50ZXJydXB0LWNvbnRyb2xsZXJANjAwMDQwMDAsIHNp
emU9Mw0KKFhFTikgIC0+IGFkZHJzaXplPTINCihYRU4pICAtPiBnb3QgaXQgIQ0KKFhFTikgZHRf
ZGV2aWNlX2dldF9yYXdfaXJxOiBkZXY9L2RtYUA2MDAyMDAwMCwgaW5kZXg9MjkNCihYRU4pICB1
c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTk2DQoo
WEVOKSAgaW50c2l6ZT0zIGludGxlbj05Ng0KKFhFTikgZHRfaXJxX21hcF9yYXc6IHBhcj0vaW50
ZXJydXB0LWNvbnRyb2xsZXJANjAwMDQwMDAsaW50c3BlYz1bMHgwMDAwMDAwMCAweDAwMDAwMDhk
Li4uXSxvaW50c2l6ZT0zDQooWEVOKSBkdF9pcnFfbWFwX3JhdzogaXBhcj0vaW50ZXJydXB0LWNv
bnRyb2xsZXJANjAwMDQwMDAsIHNpemU9Mw0KKFhFTikgIC0+IGFkZHJzaXplPTINCihYRU4pICAt
PiBnb3QgaXQgIQ0KKFhFTikgZHRfZGV2aWNlX2dldF9yYXdfaXJxOiBkZXY9L2RtYUA2MDAyMDAw
MCwgaW5kZXg9MzANCihYRU4pICB1c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihYRU4pICBp
bnRzcGVjPTAgaW50bGVuPTk2DQooWEVOKSAgaW50c2l6ZT0zIGludGxlbj05Ng0KKFhFTikgZHRf
aXJxX21hcF9yYXc6IHBhcj0vaW50ZXJydXB0LWNvbnRyb2xsZXJANjAwMDQwMDAsaW50c3BlYz1b
MHgwMDAwMDAwMCAweDAwMDAwMDhlLi4uXSxvaW50c2l6ZT0zDQooWEVOKSBkdF9pcnFfbWFwX3Jh
dzogaXBhcj0vaW50ZXJydXB0LWNvbnRyb2xsZXJANjAwMDQwMDAsIHNpemU9Mw0KKFhFTikgIC0+
IGFkZHJzaXplPTINCihYRU4pICAtPiBnb3QgaXQgIQ0KKFhFTikgZHRfZGV2aWNlX2dldF9yYXdf
aXJxOiBkZXY9L2RtYUA2MDAyMDAwMCwgaW5kZXg9MzENCihYRU4pICB1c2luZyAnaW50ZXJydXB0
cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTk2DQooWEVOKSAgaW50c2l6ZT0z
IGludGxlbj05Ng0KKFhFTikgZHRfaXJxX21hcF9yYXc6IHBhcj0vaW50ZXJydXB0LWNvbnRyb2xs
ZXJANjAwMDQwMDAsaW50c3BlYz1bMHgwMDAwMDAwMCAweDAwMDAwMDhmLi4uXSxvaW50c2l6ZT0z
DQooWEVOKSBkdF9pcnFfbWFwX3JhdzogaXBhcj0vaW50ZXJydXB0LWNvbnRyb2xsZXJANjAwMDQw
MDAsIHNpemU9Mw0KKFhFTikgIC0+IGFkZHJzaXplPTINCihYRU4pICAtPiBnb3QgaXQgIQ0KKFhF
TikgL2RtYUA2MDAyMDAwMCBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAxDQooWEVOKSBDaGVjayBp
ZiAvZG1hQDYwMDIwMDAwIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRf
aXJxX251bWJlcjogZGV2PS9kbWFANjAwMjAwMDANCihYRU4pICB1c2luZyAnaW50ZXJydXB0cycg
cHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTk2DQooWEVOKSAgaW50c2l6ZT0zIGlu
dGxlbj05Ng0KKFhFTikgZHRfZGV2aWNlX2dldF9yYXdfaXJxOiBkZXY9L2RtYUA2MDAyMDAwMCwg
aW5kZXg9MA0KKFhFTikgIHVzaW5nICdpbnRlcnJ1cHRzJyBwcm9wZXJ0eQ0KKFhFTikgIGludHNw
ZWM9MCBpbnRsZW49OTYNCihYRU4pICBpbnRzaXplPTMgaW50bGVuPTk2DQooWEVOKSBkdF9pcnFf
bWFwX3JhdzogcGFyPS9pbnRlcnJ1cHQtY29udHJvbGxlckA2MDAwNDAwMCxpbnRzcGVjPVsweDAw
MDAwMDAwIDB4MDAwMDAwNjguLi5dLG9pbnRzaXplPTMNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBp
cGFyPS9pbnRlcnJ1cHQtY29udHJvbGxlckA2MDAwNDAwMCwgc2l6ZT0zDQooWEVOKSAgLT4gYWRk
cnNpemU9Mg0KKFhFTikgIC0+IGdvdCBpdCAhDQooWEVOKSBpcnEgMCBub3QgKGRpcmVjdGx5IG9y
IGluZGlyZWN0bHkpIGNvbm5lY3RlZCB0byBwcmltYXJ5Y29udHJvbGxlci4gQ29ubmVjdGVkIHRv
IC9pbnRlcnJ1cHQtY29udHJvbGxlckA2MDAwNDAwMA0KKFhFTikgZHRfZGV2aWNlX2dldF9yYXdf
aXJxOiBkZXY9L2RtYUA2MDAyMDAwMCwgaW5kZXg9MQ0KKFhFTikgIHVzaW5nICdpbnRlcnJ1cHRz
JyBwcm9wZXJ0eQ0KKFhFTikgIGludHNwZWM9MCBpbnRsZW49OTYNCihYRU4pICBpbnRzaXplPTMg
aW50bGVuPTk2DQooWEVOKSBkdF9pcnFfbWFwX3JhdzogcGFyPS9pbnRlcnJ1cHQtY29udHJvbGxl
ckA2MDAwNDAwMCxpbnRzcGVjPVsweDAwMDAwMDAwIDB4MDAwMDAwNjkuLi5dLG9pbnRzaXplPTMN
CihYRU4pIGR0X2lycV9tYXBfcmF3OiBpcGFyPS9pbnRlcnJ1cHQtY29udHJvbGxlckA2MDAwNDAw
MCwgc2l6ZT0zDQooWEVOKSAgLT4gYWRkcnNpemU9Mg0KKFhFTikgIC0+IGdvdCBpdCAhDQooWEVO
KSBpcnEgMSBub3QgKGRpcmVjdGx5IG9yIGluZGlyZWN0bHkpIGNvbm5lY3RlZCB0byBwcmltYXJ5
Y29udHJvbGxlci4gQ29ubmVjdGVkIHRvIC9pbnRlcnJ1cHQtY29udHJvbGxlckA2MDAwNDAwMA0K
KFhFTikgZHRfZGV2aWNlX2dldF9yYXdfaXJxOiBkZXY9L2RtYUA2MDAyMDAwMCwgaW5kZXg9Mg0K
KFhFTikgIHVzaW5nICdpbnRlcnJ1cHRzJyBwcm9wZXJ0eQ0KKFhFTikgIGludHNwZWM9MCBpbnRs
ZW49OTYNCihYRU4pICBpbnRzaXplPTMgaW50bGVuPTk2DQooWEVOKSBkdF9pcnFfbWFwX3Jhdzog
cGFyPS9pbnRlcnJ1cHQtY29udHJvbGxlckA2MDAwNDAwMCxpbnRzcGVjPVsweDAwMDAwMDAwIDB4
MDAwMDAwNmEuLi5dLG9pbnRzaXplPTMNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBpcGFyPS9pbnRl
cnJ1cHQtY29udHJvbGxlckA2MDAwNDAwMCwgc2l6ZT0zDQooWEVOKSAgLT4gYWRkcnNpemU9Mg0K
KFhFTikgIC0+IGdvdCBpdCAhDQooWEVOKSBpcnEgMiBub3QgKGRpcmVjdGx5IG9yIGluZGlyZWN0
bHkpIGNvbm5lY3RlZCB0byBwcmltYXJ5Y29udHJvbGxlci4gQ29ubmVjdGVkIHRvIC9pbnRlcnJ1
cHQtY29udHJvbGxlckA2MDAwNDAwMA0KKFhFTikgZHRfZGV2aWNlX2dldF9yYXdfaXJxOiBkZXY9
L2RtYUA2MDAyMDAwMCwgaW5kZXg9Mw0KKFhFTikgIHVzaW5nICdpbnRlcnJ1cHRzJyBwcm9wZXJ0
eQ0KKFhFTikgIGludHNwZWM9MCBpbnRsZW49OTYNCihYRU4pICBpbnRzaXplPTMgaW50bGVuPTk2
DQooWEVOKSBkdF9pcnFfbWFwX3JhdzogcGFyPS9pbnRlcnJ1cHQtY29udHJvbGxlckA2MDAwNDAw
MCxpbnRzcGVjPVsweDAwMDAwMDAwIDB4MDAwMDAwNmIuLi5dLG9pbnRzaXplPTMNCihYRU4pIGR0
X2lycV9tYXBfcmF3OiBpcGFyPS9pbnRlcnJ1cHQtY29udHJvbGxlckA2MDAwNDAwMCwgc2l6ZT0z
DQooWEVOKSAgLT4gYWRkcnNpemU9Mg0KKFhFTikgIC0+IGdvdCBpdCAhDQooWEVOKSBpcnEgMyBu
b3QgKGRpcmVjdGx5IG9yIGluZGlyZWN0bHkpIGNvbm5lY3RlZCB0byBwcmltYXJ5Y29udHJvbGxl
ci4gQ29ubmVjdGVkIHRvIC9pbnRlcnJ1cHQtY29udHJvbGxlckA2MDAwNDAwMA0KKFhFTikgZHRf
device_get_raw_irq: dev=/dma@60020000, index=4
(XEN)  using 'interrupts' property
(XEN)  intspec=0 intlen=96
(XEN)  intsize=3 intlen=96
(XEN) dt_irq_map_raw: par=/interrupt-controller@60004000,intspec=[0x00000000 0x0000006c...],ointsize=3
(XEN) dt_irq_map_raw: ipar=/interrupt-controller@60004000, size=3
(XEN)  -> addrsize=2
(XEN)  -> got it !
(XEN) irq 4 not (directly or indirectly) connected to primarycontroller. Connected to /interrupt-controller@60004000
(XEN) dt_device_get_raw_irq: dev=/dma@60020000, index=5
(XEN)  using 'interrupts' property
(XEN)  intspec=0 intlen=96
(XEN)  intsize=3 intlen=96
(XEN) dt_irq_map_raw: par=/interrupt-controller@60004000,intspec=[0x00000000 0x0000006d...],ointsize=3
(XEN) dt_irq_map_raw: ipar=/interrupt-controller@60004000, size=3
(XEN)  -> addrsize=2
(XEN)  -> got it !
(XEN) irq 5 not (directly or indirectly) connected to primarycontroller. Connected to /interrupt-controller@60004000
(XEN) dt_device_get_raw_irq: dev=/dma@60020000, index=6
(XEN)  using 'interrupts' property
(XEN)  intspec=0 intlen=96
(XEN)  intsize=3 intlen=96
(XEN) dt_irq_map_raw: par=/interrupt-controller@60004000,intspec=[0x00000000 0x0000006e...],ointsize=3
(XEN) dt_irq_map_raw: ipar=/interrupt-controller@60004000, size=3
(XEN)  -> addrsize=2
(XEN)  -> got it !
(XEN) irq 6 not (directly or indirectly) connected to primarycontroller. Connected to /interrupt-controller@60004000
(XEN) dt_device_get_raw_irq: dev=/dma@60020000, index=7
(XEN)  using 'interrupts' property
(XEN)  intspec=0 intlen=96
(XEN)  intsize=3 intlen=96
(XEN) dt_irq_map_raw: par=/interrupt-controller@60004000,intspec=[0x00000000 0x0000006f...],ointsize=3
(XEN) dt_irq_map_raw: ipar=/interrupt-controller@60004000, size=3
(XEN)  -> addrsize=2
(XEN)  -> got it !
(XEN) irq 7 not (directly or indirectly) connected to primarycontroller. Connected to /interrupt-controller@60004000
(XEN) dt_device_get_raw_irq: dev=/dma@60020000, index=8
(XEN)  using 'interrupts' property
(XEN)  intspec=0 intlen=96
(XEN)  intsize=3 intlen=96
(XEN) dt_irq_map_raw: par=/interrupt-controller@60004000,intspec=[0x00000000 0x00000070...],ointsize=3
(XEN) dt_irq_map_raw: ipar=/interrupt-controller@60004000, size=3
(XEN)  -> addrsize=2
(XEN)  -> got it !
(XEN) irq 8 not (directly or indirectly) connected to primarycontroller. Connected to /interrupt-controller@60004000
(XEN) dt_device_get_raw_irq: dev=/dma@60020000, index=9
(XEN)  using 'interrupts' property
(XEN)  intspec=0 intlen=96
(XEN)  intsize=3 intlen=96
(XEN) dt_irq_map_raw: par=/interrupt-controller@60004000,intspec=[0x00000000 0x00000071...],ointsize=3
(XEN) dt_irq_map_raw: ipar=/interrupt-controller@60004000, size=3
(XEN)  -> addrsize=2
(XEN)  -> got it !
(XEN) irq 9 not (directly or indirectly) connected to primarycontroller. Connected to /interrupt-controller@60004000
(XEN) dt_device_get_raw_irq: dev=/dma@60020000, index=10
(XEN)  using 'interrupts' property
(XEN)  intspec=0 intlen=96
(XEN)  intsize=3 intlen=96
(XEN) dt_irq_map_raw: par=/interrupt-controller@60004000,intspec=[0x00000000 0x00000072...],ointsize=3
(XEN) dt_irq_map_raw: ipar=/interrupt-controller@60004000, size=3
(XEN)  -> addrsize=2
(XEN)  -> got it !
(XEN) irq 10 not (directly or indirectly) connected to primarycontroller. Connected to /interrupt-controller@60004000
(XEN) dt_device_get_raw_irq: dev=/dma@60020000, index=11
(XEN)  using 'interrupts' property
(XEN)  intspec=0 intlen=96
(XEN)  intsize=3 intlen=96
(XEN) dt_irq_map_raw: par=/interrupt-controller@60004000,intspec=[0x00000000 0x00000073...],ointsize=3
(XEN) dt_irq_map_raw: ipar=/interrupt-controller@60004000, size=3
(XEN)  -> addrsize=2
(XEN)  -> got it !
(XEN) irq 11 not (directly or indirectly) connected to primarycontroller. Connected to /interrupt-controller@60004000
(XEN) dt_device_get_raw_irq: dev=/dma@60020000, index=12
(XEN)  using 'interrupts' property
(XEN)  intspec=0 intlen=96
(XEN)  intsize=3 intlen=96
(XEN) dt_irq_map_raw: par=/interrupt-controller@60004000,intspec=[0x00000000 0x00000074...],ointsize=3
(XEN) dt_irq_map_raw: ipar=/interrupt-controller@60004000, size=3
(XEN)  -> addrsize=2
(XEN)  -> got it !
(XEN) irq 12 not (directly or indirectly) connected to primarycontroller. Connected to /interrupt-controller@60004000
(XEN) dt_device_get_raw_irq: dev=/dma@60020000, index=13
(XEN)  using 'interrupts' property
(XEN)  intspec=0 intlen=96
(XEN)  intsize=3 intlen=96
(XEN) dt_irq_map_raw: par=/interrupt-controller@60004000,intspec=[0x00000000 0x00000075...],ointsize=3
(XEN) dt_irq_map_raw: ipar=/interrupt-controller@60004000, size=3
(XEN)  -> addrsize=2
(XEN)  -> got it !
(XEN) irq 13 not (directly or indirectly) connected to primarycontroller. Connected to /interrupt-controller@60004000
(XEN) dt_device_get_raw_irq: dev=/dma@60020000, index=14
(XEN)  using 'interrupts' property
(XEN)  intspec=0 intlen=96
(XEN)  intsize=3 intlen=96
(XEN) dt_irq_map_raw: par=/interrupt-controller@60004000,intspec=[0x00000000 0x00000076...],ointsize=3
(XEN) dt_irq_map_raw: ipar=/interrupt-controller@60004000, size=3
(XEN)  -> addrsize=2
(XEN)  -> got it !
(XEN) irq 14 not (directly or indirectly) connected to primarycontroller. Connected to /interrupt-controller@60004000
(XEN) dt_device_get_raw_irq: dev=/dma@60020000, index=15
(XEN)  using 'interrupts' property
(XEN)  intspec=0 intlen=96
(XEN)  intsize=3 intlen=96
(XEN) dt_irq_map_raw: par=/interrupt-controller@60004000,intspec=[0x00000000 0x00000077...],ointsize=3
(XEN) dt_irq_map_raw: ipar=/interrupt-controller@60004000, size=3
(XEN)  -> addrsize=2
(XEN)  -> got it !
(XEN) irq 15 not (directly or indirectly) connected to primarycontroller. Connected to /interrupt-controller@60004000
(XEN) dt_device_get_raw_irq: dev=/dma@60020000, index=16
(XEN)  using 'interrupts' property
(XEN)  intspec=0 intlen=96
(XEN)  intsize=3 intlen=96
(XEN) dt_irq_map_raw: par=/interrupt-controller@60004000,intspec=[0x00000000 0x00000080...],ointsize=3
(XEN) dt_irq_map_raw: ipar=/interrupt-controller@60004000, size=3
(XEN)  -> addrsize=2
(XEN)  -> got it !
(XEN) irq 16 not (directly or indirectly) connected to primarycontroller. Connected to /interrupt-controller@60004000
(XEN) dt_device_get_raw_irq: dev=/dma@60020000, index=17
(XEN)  using 'interrupts' property
(XEN)  intspec=0 intlen=96
(XEN)  intsize=3 intlen=96
(XEN) dt_irq_map_raw: par=/interrupt-controller@60004000,intspec=[0x00000000 0x00000081...],ointsize=3
(XEN) dt_irq_map_raw: ipar=/interrupt-controller@60004000, size=3
(XEN)  -> addrsize=2
(XEN)  -> got it !
(XEN) irq 17 not (directly or indirectly) connected to primarycontroller. Connected to /interrupt-controller@60004000
(XEN) dt_device_get_raw_irq: dev=/dma@60020000, index=18
(XEN)  using 'interrupts' property
(XEN)  intspec=0 intlen=96
(XEN)  intsize=3 intlen=96
(XEN) dt_irq_map_raw: par=/interrupt-controller@60004000,intspec=[0x00000000 0x00000082...],ointsize=3
(XEN) dt_irq_map_raw: ipar=/interrupt-controller@60004000, size=3
(XEN)  -> addrsize=2
(XEN)  -> got it !
(XEN) irq 18 not (directly or indirectly) connected to primarycontroller. Connected to /interrupt-controller@60004000
(XEN) dt_device_get_raw_irq: dev=/dma@60020000, index=19
(XEN)  using 'interrupts' property
(XEN)  intspec=0 intlen=96
(XEN)  intsize=3 intlen=96
(XEN) dt_irq_map_raw: par=/interrupt-controller@60004000,intspec=[0x00000000 0x00000083...],ointsize=3
(XEN) dt_irq_map_raw: ipar=/interrupt-controller@60004000, size=3
(XEN)  -> addrsize=2
(XEN)  -> got it !
(XEN) irq 19 not (directly or indirectly) connected to primarycontroller. Connected to /interrupt-controller@60004000
(XEN) dt_device_get_raw_irq: dev=/dma@60020000, index=20
(XEN)  using 'interrupts' property
(XEN)  intspec=0 intlen=96
(XEN)  intsize=3 intlen=96
(XEN) dt_irq_map_raw: par=/interrupt-controller@60004000,intspec=[0x00000000 0x00000084...],ointsize=3
(XEN) dt_irq_map_raw: ipar=/interrupt-controller@60004000, size=3
(XEN)  -> addrsize=2
(XEN)  -> got it !
(XEN) irq 20 not (directly or indirectly) connected to primarycontroller. Connected to /interrupt-controller@60004000
(XEN) dt_device_get_raw_irq: dev=/dma@60020000, index=21
(XEN)  using 'interrupts' property
(XEN)  intspec=0 intlen=96
(XEN)  intsize=3 intlen=96
(XEN) dt_irq_map_raw: par=/interrupt-controller@60004000,intspec=[0x00000000 0x00000085...],ointsize=3
(XEN) dt_irq_map_raw: ipar=/interrupt-controller@60004000, size=3
(XEN)  -> addrsize=2
(XEN)  -> got it !
(XEN) irq 21 not (directly or indirectly) connected to primarycontroller. Connected to /interrupt-controller@60004000
(XEN) dt_device_get_raw_irq: dev=/dma@60020000, index=22
(XEN)  using 'interrupts' property
(XEN)  intspec=0 intlen=96
(XEN)  intsize=3 intlen=96
(XEN) dt_irq_map_raw: par=/interrupt-controller@60004000,intspec=[0x00000000 0x00000086...],ointsize=3
(XEN) dt_irq_map_raw: ipar=/interrupt-controller@60004000, size=3
(XEN)  -> addrsize=2
(XEN)  -> got it !
(XEN) irq 22 not (directly or indirectly) connected to primarycontroller. Connected to /interrupt-controller@60004000
(XEN) dt_device_get_raw_irq: dev=/dma@60020000, index=23
(XEN)  using 'interrupts' property
(XEN)  intspec=0 intlen=96
(XEN)  intsize=3 intlen=96
(XEN) dt_irq_map_raw: par=/interrupt-controller@60004000,intspec=[0x00000000 0x00000087...],ointsize=3
(XEN) dt_irq_map_raw: ipar=/interrupt-controller@60004000, size=3
(XEN)  -> addrsize=2
(XEN)  -> got it !
(XEN) irq 23 not (directly or indirectly) connected to primarycontroller. Connected to /interrupt-controller@60004000
(XEN) dt_device_get_raw_irq: dev=/dma@60020000, index=24
(XEN)  using 'interrupts' property
(XEN)  intspec=0 intlen=96
(XEN)  intsize=3 intlen=96
(XEN) dt_irq_map_raw: par=/interrupt-controller@60004000,intspec=[0x00000000 0x00000088...],ointsize=3
(XEN) dt_irq_map_raw: ipar=/interrupt-controller@60004000, size=3
(XEN)  -> addrsize=2
(XEN)  -> got it !
(XEN) irq 24 not (directly or indirectly) connected to primarycontroller. Connected to /interrupt-controller@60004000
(XEN) dt_device_get_raw_irq: dev=/dma@60020000, index=25
(XEN)  using 'interrupts' property
(XEN)  intspec=0 intlen=96
(XEN)  intsize=3 intlen=96
(XEN) dt_irq_map_raw: par=/interrupt-controller@60004000,intspec=[0x00000000 0x00000089...],ointsize=3
(XEN) dt_irq_map_raw: ipar=/interrupt-controller@60004000, size=3
(XEN)  -> addrsize=2
(XEN)  -> got it !
(XEN) irq 25 not (directly or indirectly) connected to primarycontroller. Connected to /interrupt-controller@60004000
(XEN) dt_device_get_raw_irq: dev=/dma@60020000, index=26
(XEN)  using 'interrupts' property
(XEN)  intspec=0 intlen=96
(XEN)  intsize=3 intlen=96
(XEN) dt_irq_map_raw: par=/interrupt-controller@60004000,intspec=[0x00000000 0x0000008a...],ointsize=3
(XEN) dt_irq_map_raw: ipar=/interrupt-controller@60004000, size=3
(XEN)  -> addrsize=2
(XEN)  -> got it !
(XEN) irq 26 not (directly or indirectly) connected to primarycontroller. Connected to /interrupt-controller@60004000
(XEN) dt_device_get_raw_irq: dev=/dma@60020000, index=27
(XEN)  using 'interrupts' property
(XEN)  intspec=0 intlen=96
(XEN)  intsize=3 intlen=96
(XEN) dt_irq_map_raw: par=/interrupt-controller@60004000,intspec=[0x00000000 0x0000008b...],ointsize=3
(XEN) dt_irq_map_raw: ipar=/interrupt-controller@60004000, size=3
(XEN)  -> addrsize=2
(XEN)  -> got it !
(XEN) irq 27 not (directly or indirectly) connected to primarycontroller. Connected to /interrupt-controller@60004000
(XEN) dt_device_get_raw_irq: dev=/dma@60020000, index=28
(XEN)  using 'interrupts' property
(XEN)  intspec=0 intlen=96
(XEN)  intsize=3 intlen=96
(XEN) dt_irq_map_raw: par=/interrupt-controller@60004000,intspec=[0x00000000 0x0000008c...],ointsize=3
(XEN) dt_irq_map_raw: ipar=/interrupt-controller@60004000, size=3
(XEN)  -> addrsize=2
(XEN)  -> got it !
(XEN) irq 28 not (directly or indirectly) connected to primarycontroller. Connected to /interrupt-controller@60004000
(XEN) dt_device_get_raw_irq: dev=/dma@60020000, index=29
(XEN)  using 'interrupts' property
(XEN)  intspec=0 intlen=96
(XEN)  intsize=3 intlen=96
(XEN) dt_irq_map_raw: par=/interrupt-controller@60004000,intspec=[0x00000000 0x0000008d...],ointsize=3
(XEN) dt_irq_map_raw: ipar=/interrupt-controller@60004000, size=3
(XEN)  -> addrsize=2
(XEN)  -> got it !
(XEN) irq 29 not (directly or indirectly) connected to primarycontroller. Connected to /interrupt-controller@60004000
(XEN) dt_device_get_raw_irq: dev=/dma@60020000, index=30
(XEN)  using 'interrupts' property
(XEN)  intspec=0 intlen=96
(XEN)  intsize=3 intlen=96
(XEN) dt_irq_map_raw: par=/interrupt-controller@60004000,intspec=[0x00000000 0x0000008e...],ointsize=3
(XEN) dt_irq_map_raw: ipar=/interrupt-controller@60004000, size=3
(XEN)  -> addrsize=2
(XEN)  -> got it !
(XEN) irq 30 not (directly or indirectly) connected to primarycontroller. Connected to /interrupt-controller@60004000
(XEN) dt_device_get_raw_irq: dev=/dma@60020000, index=31
(XEN)  using 'interrupts' property
(XEN)  intspec=0 intlen=96
(XEN)  intsize=3 intlen=96
(XEN) dt_irq_map_raw: par=/interrupt-controller@60004000,intspec=[0x00000000 0x0000008f...],ointsize=3
(XEN) dt_irq_map_raw: ipar=/interrupt-controller@60004000, size=3
(XEN)  -> addrsize=2
(XEN)  -> got it !
(XEN) irq 31 not (directly or indirectly) connected to primarycontroller. Connected to /interrupt-controller@60004000
(XEN) DT: ** translation for device /dma@60020000 **
(XEN) DT: bus is default (na=2, ns=2) on /
(XEN) DT: translating address:<3> 00000000<3> 60020000<3>
(XEN) DT: reached root node
(XEN)   - MMIO: 0060020000 - 0060021400 P2MType=5
(XEN) handle /pinmux@700008d4
(XEN) dt_irq_number: dev=/pinmux@700008d4
(XEN) /pinmux@700008d4 passthrough = 1 naddr = 2
(XEN) Check if /pinmux@700008d4 is behind the IOMMU and add it
(XEN) dt_irq_number: dev=/pinmux@700008d4
(XEN) DT: ** translation for device /pinmux@700008d4 **
(XEN) DT: bus is default (na=2, ns=2) on /
(XEN) DT: translating address:<3> 00000000<3> 700008d4<3>
(XEN) DT: reached root node
(XEN)   - MMIO: 00700008d4 - 0070000b79 P2MType=5
(XEN) DT: ** translation for device /pinmux@700008d4 **
(XEN) DT: bus is default (na=2, ns=2) on /
(XEN) DT: translating address:<3> 00000000<3> 70003000<3>
(XEN) DT: reached root node
(XEN)   - MMIO: 0070003000 - 0070003294 P2MType=5
(XEN) handle /pinmux@700008d4/clkreq_0_bi_dir
(XEN) dt_irq_number: dev=/pinmux@700008d4/clkreq_0_bi_dir
(XEN) /pinmux@700008d4/clkreq_0_bi_dir passthrough = 1 naddr = 0
(XEN) Check if /pinmux@700008d4/clkreq_0_bi_dir is behind the IOMMU and add it
(XEN) dt_irq_number: dev=/pinmux@700008d4/clkreq_0_bi_dir
(XEN) handle /pinmux@700008d4/clkreq_0_bi_dir/clkreq0
(XEN) dt_irq_number: dev=/pinmux@700008d4/clkreq_0_bi_dir/clkreq0
(XEN) /pinmux@700008d4/clkreq_0_bi_dir/clkreq0 passthrough = 1 naddr = 0
(XEN) Check if /pinmux@700008d4/clkreq_0_bi_dir/clkreq0 is behind the IOMMU and add it
(XEN) dt_irq_number: dev=/pinmux@700008d4/clkreq_0_bi_dir/clkreq0
(XEN) handle /pinmux@700008d4/clkreq_1_bi_dir
(XEN) dt_irq_number: dev=/pinmux@700008d4/clkreq_1_bi_dir
(XEN) /pinmux@700008d4/clkreq_1_bi_dir passthrough = 1 naddr = 0
(XEN) Check if /pinmux@700008d4/clkreq_1_bi_dir is behind the IOMMU and add it
(XEN) dt_irq_number: dev=/pinmux@700008d4/clkreq_1_bi_dir
(XEN) handle /pinmux@700008d4/clkreq_1_bi_dir/clkreq1
(XEN) dt_irq_number: dev=/pinmux@700008d4/clkreq_1_bi_dir/clkreq1
(XEN) /pinmux@700008d4/clkreq_1_bi_dir/clkreq1 passthrough = 1 naddr = 0
(XEN) Check if /pinmux@700008d4/clkreq_1_bi_dir/clkreq1 is behind the IOMMU and add it
(XEN) dt_irq_number: dev=/pinmux@700008d4/clkreq_1_bi_dir/clkreq1
(XEN) handle /pinmux@700008d4/clkreq_0_in_dir
(XEN) dt_irq_number: dev=/pinmux@700008d4/clkreq_0_in_dir
(XEN) /pinmux@700008d4/clkreq_0_in_dir passthrough = 1 naddr = 0
(XEN) Check if /pinmux@700008d4/clkreq_0_in_dir is behind the IOMMU and add it
(XEN) dt_irq_number: dev=/pinmux@700008d4/clkreq_0_in_dir
(XEN) handle /pinmux@700008d4/clkreq_0_in_dir/clkreq0
(XEN) dt_irq_number: dev=/pinmux@700008d4/clkreq_0_in_dir/clkreq0
(XEN) /pinmux@700008d4/clkreq_0_in_dir/clkreq0 passthrough = 1 naddr = 0
(XEN) Check if /pinmux@700008d4/clkreq_0_in_dir/clkreq0 is behind the IOMMU and add it
(XEN) dt_irq_number: dev=/pinmux@700008d4/clkreq_0_in_dir/clkreq0
(XEN) handle /pinmux@700008d4/clkreq_1_in_dir
(XEN) dt_irq_number: dev=/pinmux@700008d4/clkreq_1_in_dir
(XEN) /pinmux@700008d4/clkreq_1_in_dir passthrough = 1 naddr = 0
(XEN) Check if /pinmux@700008d4/clkreq_1_in_dir is behind the IOMMU and add it
(XEN) dt_irq_number: dev=/pinmux@700008d4/clkreq_1_in_dir
(XEN) handle /pinmux@700008d4/clkreq_1_in_dir/clkreq1
(XEN) dt_irq_number: dev=/pinmux@700008d4/clkreq_1_in_dir/clkreq1
(XEN) /pinmux@700008d4/clkreq_1_in_dir/clkreq1 passthrough = 1 naddr = 0
(XEN) Check if /pinmux@700008d4/clkreq_1_in_dir/clkreq1 is behind the IOMMU and add it
(XEN) dt_irq_number: dev=/pinmux@700008d4/clkreq_1_in_dir/clkreq1
(XEN) handle /pinmux@700008d4/prod-settings
(XEN) dt_irq_number: dev=/pinmux@700008d4/prod-settings
(XEN) /pinmux@700008d4/prod-settings passthrough = 1 naddr = 0
(XEN) Check if /pinmux@700008d4/prod-settings is behind the IOMMU and add it
(XEN) dt_irq_number: dev=/pinmux@700008d4/prod-settings
(XEN) handle /pinmux@700008d4/prod-settings/prod
(XEN) dt_irq_number: dev=/pinmux@700008d4/prod-settings/prod
(XEN) /pinmux@700008d4/prod-settings/prod passthrough = 1 naddr = 0
(XEN) Check if /pinmux@700008d4/prod-settings/prod is behind the IOMMU and add it
(XEN) dt_irq_number: dev=/pinmux@700008d4/prod-settings/prod
(XEN) handle /pinmux@700008d4/prod-settings/i2s2_prod
(XEN) dt_irq_number: dev=/pinmux@700008d4/prod-settings/i2s2_prod
(XEN) /pinmux@700008d4/prod-settings/i2s2_prod passthrough = 1 naddr = 0
(XEN) Check if /pinmux@700008d4/prod-settings/i2s2_prod is behind the IOMMU and add it
(XEN) dt_irq_number: dev=/pinmux@700008d4/prod-settings/i2s2_prod
(XEN) handle /pinmux@700008d4/prod-settings/spi1_prod
(XEN) dt_irq_number: dev=/pinmux@700008d4/prod-settings/spi1_prod
(XEN) /pinmux@700008d4/prod-settings/spi1_prod passthrough = 1 naddr = 0
(XEN) Check if /pinmux@700008d4/prod-settings/spi1_prod is behind the IOMMU and add it
(XEN) dt_irq_number: dev=/pinmux@700008d4/prod-settings/spi1_prod
(XEN) handle /pinmux@700008d4/prod-settings/spi2_prod
(XEN) dt_irq_number: dev=/pinmux@700008d4/prod-settings/spi2_prod
(XEN) /pinmux@700008d4/prod-settings/spi2_prod passthrough = 1 naddr = 0
(XEN) Check if /pinmux@700008d4/prod-settings/spi2_prod is behind the IOMMU and add it
(XEN) dt_irq_number: dev=/pinmux@700008d4/prod-settings/spi2_prod
(XEN) handle /pinmux@700008d4/prod-settings/spi3_prod
(XEN) dt_irq_number: dev=/pinmux@700008d4/prod-settings/spi3_prod
(XEN) /pinmux@700008d4/prod-settings/spi3_prod passthrough = 1 naddr = 0
(XEN) Check if /pinmux@700008d4/prod-settings/spi3_prod is behind the IOMMU and add it
(XEN) dt_irq_number: dev=/pinmux@700008d4/prod-settings/spi3_prod
(XEN) handle /pinmux@700008d4/prod-settings/spi4_prod
(XEN) dt_irq_number: dev=/pinmux@700008d4/prod-settings/spi4_prod
(XEN) /pinmux@700008d4/prod-settings/spi4_prod passthrough = 1 naddr = 0
(XEN) Check if /pinmux@700008d4/prod-settings/spi4_prod is behind the IOMMU and add it
(XEN) dt_irq_number: dev=/pinmux@700008d4/prod-settings/spi4_prod
(XEN) handle /pinmux@700008d4/prod-settings/i2c0_prod
(XEN) dt_irq_number: dev=/pinmux@700008d4/prod-settings/i2c0_prod
(XEN) /pinmux@700008d4/prod-settings/i2c0_prod passthrough = 1 naddr = 0
(XEN) Check if /pinmux@700008d4/prod-settings/i2c0_prod is behind the IOMMU and add it
(XEN) dt_irq_number: dev=/pinmux@700008d4/prod-settings/i2c0_prod
(XEN) handle /pinmux@700008d4/prod-settings/i2c1_prod
(XEN) dt_irq_number: dev=/pinmux@700008d4/prod-settings/i2c1_prod
(XEN) /pinmux@700008d4/prod-settings/i2c1_prod passthrough = 1 naddr = 0
(XEN) Check if /pinmux@700008d4/prod-settings/i2c1_prod is behind the IOMMU and add it
(XEN) dt_irq_number: dev=/pinmux@700008d4/prod-settings/i2c1_prod
(XEN) handle /pinmux@700008d4/prod-settings/i2c2_prod
(XEN) dt_irq_number: dev=/pinmux@700008d4/prod-settings/i2c2_prod
(XEN) /pinmux@700008d4/prod-settings/i2c2_prod passthrough = 1 naddr = 0
(XEN) Check if /pinmux@700008d4/prod-settings/i2c2_prod is behind the IOMMU and add it
(XEN) dt_irq_number: dev=/pinmux@700008d4/prod-settings/i2c2_prod
(XEN) handle /pinmux@700008d4/prod-settings/i2c4_prod
(XEN) dt_irq_number: dev=/pinmux@700008d4/prod-settings/i2c4_prod
(XEN) /pinmux@700008d4/prod-settings/i2c4_prod passthrough = 1 naddr = 0
(XEN) Check if /pinmux@700008d4/prod-settings/i2c4_prod is behind the IOMMU and add it
(XEN) dt_irq_number: dev=/pinmux@700008d4/prod-settings/i2c4_prod
(XEN) handle /pinmux@700008d4/prod-settings/i2c0_hs_prod
(XEN) dt_irq_number: dev=/pinmux@700008d4/prod-settings/i2c0_hs_prod
(XEN) /pinmux@700008d4/prod-settings/i2c0_hs_prod passthrough = 1 naddr = 0
(XEN) Check if /pinmux@700008d4/prod-settings/i2c0_hs_prod is behind the IOMMU and add it
(XEN) dt_irq_number: dev=/pinmux@700008d4/prod-settings/i2c0_hs_prod
(XEN) handle /pinmux@700008d4/prod-settings/i2c1_hs_prod
(XEN) dt_irq_number: dev=/pinmux@700008d4/prod-settings/i2c1_hs_prod
(XEN) /pinmux@700008d4/prod-settings/i2c1_hs_prod passthrough = 1 naddr = 0
(XEN) Check if /pinmux@700008d4/prod-settings/i2c1_hs_prod is behind the IOMMU and add it
(XEN) dt_irq_number: dev=/pinmux@700008d4/prod-settings/i2c1_hs_prod
(XEN) handle /pinmux@700008d4/prod-settings/i2c2_hs_prod
(XEN) dt_irq_number: dev=/pinmux@700008d4/prod-settings/i2c2_hs_prod
(XEN) /pinmux@700008d4/prod-settings/i2c2_hs_prod passthrough = 1 naddr = 0
(XEN) Check if /pinmux@700008d4/prod-settings/i2c2_hs_prod is behind the IOMMU and add it
(XEN) dt_irq_number: dev=/pinmux@700008d4/prod-settings/i2c2_hs_prod
(XEN) handle /pinmux@700008d4/prod-settings/i2c4_hs_prod
(XEN) dt_irq_number: dev=/pinmux@700008d4/prod-settings/i2c4_hs_prod
(XEN) /pinmux@700008d4/prod-settings/i2c4_hs_prod passthrough = 1 naddr = 0
(XEN) Check if /pinmux@700008d4/prod-settings/i2c4_hs_prod is behind the IOMMU and add it
(XEN) dt_irq_number: dev=/pinmux@700008d4/prod-settings/i2c4_hs_prod
(XEN) handle /pinmux@700008d4/sdmmc1_schmitt_enable
(XEN) dt_irq_number: dev=/pinmux@700008d4/sdmmc1_schmitt_enable
(XEN) /pinmux@700008d4/sdmmc1_schmitt_enable passthrough = 1 naddr = 0
(XEN) Check if /pinmux@700008d4/sdmmc1_schmitt_enable is behind the IOMMU and add it
(XEN) dt_irq_number: dev=/pinmux@700008d4/sdmmc1_schmitt_enable
(XEN) handle /pinmux@700008d4/sdmmc1_schmitt_enable/sdmmc1
(XEN) dt_irq_number: dev=/pinmux@700008d4/sdmmc1_schmitt_enable/sdmmc1
(XEN) /pinmux@700008d4/sdmmc1_schmitt_enable/sdmmc1 passthrough = 1 naddr = 0
(XEN) Check if /pinmux@700008d4/sdmmc1_schmitt_enable/sdmmc1 is behind the IOMMU and add it
(XEN) dt_irq_number: dev=/pinmux@700008d4/sdmmc1_schmitt_enable/sdmmc1
(XEN) handle /pinmux@700008d4/sdmmc1_schmitt_disable
(XEN) dt_irq_number: dev=/pinmux@700008d4/sdmmc1_schmitt_disable
(XEN) /pinmux@700008d4/sdmmc1_schmitt_disable passthrough = 1 naddr = 0
(XEN) Check if /pinmux@700008d4/sdmmc1_schmitt_disable is behind the IOMMU and add it
(XEN) dt_irq_number: dev=/pinmux@700008d4/sdmmc1_schmitt_disable
(XEN) handle /pinmux@700008d4/sdmmc1_schmitt_disable/sdmmc1
(XEN) dt_irq_number: dev=/pinmux@700008d4/sdmmc1_schmitt_disable/sdmmc1
(XEN) /pinmux@700008d4/sdmmc1_schmitt_disable/sdmmc1 passthrough = 1 naddr = 0
(XEN) Check if /pinmux@700008d4/sdmmc1_schmitt_disable/sdmmc1 is behind the IOMMU and add it
(XEN) dt_irq_number: dev=/pinmux@700008d4/sdmmc1_schmitt_disable/sdmmc1
(XEN) handle /pinmux@700008d4/sdmmc1_clk_schmitt_enable
(XEN) dt_irq_number: dev=/pinmux@700008d4/sdmmc1_clk_schmitt_enable
(XEN) /pinmux@700008d4/sdmmc1_clk_schmitt_enable passthrough = 1 naddr = 0
(XEN) Check if /pinmux@700008d4/sdmmc1_clk_schmitt_enable is behind the IOMMU and add it
(XEN) dt_irq_number: dev=/pinmux@700008d4/sdmmc1_clk_schmitt_enable
(XEN) handle /pinmux@700008d4/sdmmc1_clk_schmitt_enable/sdmmc1
(XEN) dt_irq_number: dev=/pinmux@700008d4/sdmmc1_clk_schmitt_enable/sdmmc1
(XEN) /pinmux@700008d4/sdmmc1_clk_schmitt_enable/sdmmc1 passthrough = 1 naddr = 0
(XEN) Check if /pinmux@700008d4/sdmmc1_clk_schmitt_enable/sdmmc1 is behind the IOMMU and add it
(XEN) dt_irq_number: dev=/pinmux@700008d4/sdmmc1_clk_schmitt_enable/sdmmc1
(XEN) handle /pinmux@700008d4/sdmmc1_clk_schmitt_disable
(XEN) dt_irq_number: dev=/pinmux@700008d4/sdmmc1_clk_schmitt_disable
(XEN) /pinmux@700008d4/sdmmc1_clk_schmitt_disable passthrough = 1 naddr = 0
(XEN) Check if /pinmux@700008d4/sdmmc1_clk_schmitt_disable is behind the IOMMU and add it
(XEN) dt_irq_number: dev=/pinmux@700008d4/sdmmc1_clk_schmitt_disable
(XEN) handle /pinmux@700008d4/sdmmc1_clk_schmitt_disable/sdmmc1
(XEN) dt_irq_number: dev=/pinmux@700008d4/sdmmc1_clk_schmitt_disable/sdmmc1
(XEN) /pinmux@700008d4/sdmmc1_clk_schmitt_disable/sdmmc1 passthrough = 1 naddr = 0
(XEN) Check if /pinmux@700008d4/sdmmc1_clk_schmitt_disable/sdmmc1 is behind the IOMMU and add it
(XEN) dt_irq_number: dev=/pinmux@700008d4/sdmmc1_clk_schmitt_disable/sdmmc1
(XEN) handle /pinmux@700008d4/sdmmc1_drv_code
(XEN) dt_irq_number: dev=/pinmux@700008d4/sdmmc1_drv_code
(XEN) /pinmux@700008d4/sdmmc1_drv_code passthrough = 1 naddr = 0
(XEN) Check if /pinmux@700008d4/sdmmc1_drv_code is behind the IOMMU and add it
(XEN) dt_irq_number: dev=/pinmux@700008d4/sdmmc1_drv_code
(XEN) handle /pinmux@700008d4/sdmmc1_drv_code/sdmmc1
(XEN) dt_irq_number: dev=/pinmux@700008d4/sdmmc1_drv_code/sdmmc1
(XEN) /pinmux@700008d4/sdmmc1_drv_code/sdmmc1 passthrough = 1 naddr = 0
(XEN) Check if /pinmux@700008d4/sdmmc1_drv_code/sdmmc1 is behind the IOMMU and add it
(XEN) dt_irq_number: dev=/pinmux@700008d4/sdmmc1_drv_code/sdmmc1
(XEN) handle /pinmux@700008d4/sdmmc1_default_drv_code
(XEN) dt_irq_number: dev=/pinmux@700008d4/sdmmc1_default_drv_code
(XEN) /pinmux@700008d4/sdmmc1_default_drv_code passthrough = 1 naddr = 0
(XEN) Check if /pinmux@700008d4/sdmmc1_default_drv_code is behind the IOMMU and add it
(XEN) dt_irq_number: dev=/pinmux@700008d4/sdmmc1_default_drv_code
(XEN) handle /pinmux@700008d4/sdmmc1_default_drv_code/sdmmc1
(XEN) dt_irq_number: dev=/pinmux@700008d4/sdmmc1_default_drv_code/sdmmc1
(XEN) /pinmux@700008d4/sdmmc1_default_drv_code/sdmmc1 passthrough = 1 naddr = 0
(XEN) Check if /pinmux@700008d4/sdmmc1_default_drv_code/sdmmc1 is behind the IOMMU and add it
(XEN) dt_irq_number: dev=/pinmux@700008d4/sdmmc1_default_drv_code/s
ZG1tYzENCihYRU4pIGhhbmRsZSAvcGlubXV4QDcwMDAwOGQ0L3NkbW1jM19zY2htaXR0X2VuYWJs
ZQ0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9waW5tdXhANzAwMDA4ZDQvc2RtbWMzX3NjaG1p
dHRfZW5hYmxlDQooWEVOKSAvcGlubXV4QDcwMDAwOGQ0L3NkbW1jM19zY2htaXR0X2VuYWJsZSBw
YXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvcGlubXV4QDcwMDAwOGQ0
L3NkbW1jM19zY2htaXR0X2VuYWJsZSBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihY
RU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGlubXV4QDcwMDAwOGQ0L3NkbW1jM19zY2htaXR0X2Vu
YWJsZQ0KKFhFTikgaGFuZGxlIC9waW5tdXhANzAwMDA4ZDQvc2RtbWMzX3NjaG1pdHRfZW5hYmxl
L3NkbW1jMw0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9waW5tdXhANzAwMDA4ZDQvc2RtbWMz
X3NjaG1pdHRfZW5hYmxlL3NkbW1jMw0KKFhFTikgL3Bpbm11eEA3MDAwMDhkNC9zZG1tYzNfc2No
bWl0dF9lbmFibGUvc2RtbWMzIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNr
IGlmIC9waW5tdXhANzAwMDA4ZDQvc2RtbWMzX3NjaG1pdHRfZW5hYmxlL3NkbW1jMyBpcyBiZWhp
bmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGlubXV4
QDcwMDAwOGQ0L3NkbW1jM19zY2htaXR0X2VuYWJsZS9zZG1tYzMNCihYRU4pIGhhbmRsZSAvcGlu
bXV4QDcwMDAwOGQ0L3NkbW1jM19zY2htaXR0X2Rpc2FibGUNCihYRU4pIGR0X2lycV9udW1iZXI6
IGRldj0vcGlubXV4QDcwMDAwOGQ0L3NkbW1jM19zY2htaXR0X2Rpc2FibGUNCihYRU4pIC9waW5t
dXhANzAwMDA4ZDQvc2RtbWMzX3NjaG1pdHRfZGlzYWJsZSBwYXNzdGhyb3VnaCA9IDEgbmFkZHIg
PSAwDQooWEVOKSBDaGVjayBpZiAvcGlubXV4QDcwMDAwOGQ0L3NkbW1jM19zY2htaXR0X2Rpc2Fi
bGUgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBk
ZXY9L3Bpbm11eEA3MDAwMDhkNC9zZG1tYzNfc2NobWl0dF9kaXNhYmxlDQooWEVOKSBoYW5kbGUg
L3Bpbm11eEA3MDAwMDhkNC9zZG1tYzNfc2NobWl0dF9kaXNhYmxlL3NkbW1jMw0KKFhFTikgZHRf
aXJxX251bWJlcjogZGV2PS9waW5tdXhANzAwMDA4ZDQvc2RtbWMzX3NjaG1pdHRfZGlzYWJsZS9z
ZG1tYzMNCihYRU4pIC9waW5tdXhANzAwMDA4ZDQvc2RtbWMzX3NjaG1pdHRfZGlzYWJsZS9zZG1t
YzMgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3Bpbm11eEA3MDAw
MDhkNC9zZG1tYzNfc2NobWl0dF9kaXNhYmxlL3NkbW1jMyBpcyBiZWhpbmQgdGhlIElPTU1VIGFu
ZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGlubXV4QDcwMDAwOGQ0L3NkbW1j
M19zY2htaXR0X2Rpc2FibGUvc2RtbWMzDQooWEVOKSBoYW5kbGUgL3Bpbm11eEA3MDAwMDhkNC9z
ZG1tYzNfY2xrX3NjaG1pdHRfZW5hYmxlDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3Bpbm11
eEA3MDAwMDhkNC9zZG1tYzNfY2xrX3NjaG1pdHRfZW5hYmxlDQooWEVOKSAvcGlubXV4QDcwMDAw
OGQ0L3NkbW1jM19jbGtfc2NobWl0dF9lbmFibGUgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0K
KFhFTikgQ2hlY2sgaWYgL3Bpbm11eEA3MDAwMDhkNC9zZG1tYzNfY2xrX3NjaG1pdHRfZW5hYmxl
IGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2
PS9waW5tdXhANzAwMDA4ZDQvc2RtbWMzX2Nsa19zY2htaXR0X2VuYWJsZQ0KKFhFTikgaGFuZGxl
IC9waW5tdXhANzAwMDA4ZDQvc2RtbWMzX2Nsa19zY2htaXR0X2VuYWJsZS9zZG1tYzMNCihYRU4p
IGR0X2lycV9udW1iZXI6IGRldj0vcGlubXV4QDcwMDAwOGQ0L3NkbW1jM19jbGtfc2NobWl0dF9l
bmFibGUvc2RtbWMzDQooWEVOKSAvcGlubXV4QDcwMDAwOGQ0L3NkbW1jM19jbGtfc2NobWl0dF9l
bmFibGUvc2RtbWMzIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9w
aW5tdXhANzAwMDA4ZDQvc2RtbWMzX2Nsa19zY2htaXR0X2VuYWJsZS9zZG1tYzMgaXMgYmVoaW5k
IHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3Bpbm11eEA3
MDAwMDhkNC9zZG1tYzNfY2xrX3NjaG1pdHRfZW5hYmxlL3NkbW1jMw0KKFhFTikgaGFuZGxlIC9w
aW5tdXhANzAwMDA4ZDQvc2RtbWMzX2Nsa19zY2htaXR0X2Rpc2FibGUNCihYRU4pIGR0X2lycV9u
dW1iZXI6IGRldj0vcGlubXV4QDcwMDAwOGQ0L3NkbW1jM19jbGtfc2NobWl0dF9kaXNhYmxlDQoo
WEVOKSAvcGlubXV4QDcwMDAwOGQ0L3NkbW1jM19jbGtfc2NobWl0dF9kaXNhYmxlIHBhc3N0aHJv
dWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9waW5tdXhANzAwMDA4ZDQvc2RtbWMz
X2Nsa19zY2htaXR0X2Rpc2FibGUgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVO
KSBkdF9pcnFfbnVtYmVyOiBkZXY9L3Bpbm11eEA3MDAwMDhkNC9zZG1tYzNfY2xrX3NjaG1pdHRf
ZGlzYWJsZQ0KKFhFTikgaGFuZGxlIC9waW5tdXhANzAwMDA4ZDQvc2RtbWMzX2Nsa19zY2htaXR0
X2Rpc2FibGUvc2RtbWMzDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3Bpbm11eEA3MDAwMDhk
NC9zZG1tYzNfY2xrX3NjaG1pdHRfZGlzYWJsZS9zZG1tYzMNCihYRU4pIC9waW5tdXhANzAwMDA4
ZDQvc2RtbWMzX2Nsa19zY2htaXR0X2Rpc2FibGUvc2RtbWMzIHBhc3N0aHJvdWdoID0gMSBuYWRk
ciA9IDANCihYRU4pIENoZWNrIGlmIC9waW5tdXhANzAwMDA4ZDQvc2RtbWMzX2Nsa19zY2htaXR0
X2Rpc2FibGUvc2RtbWMzIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRf
aXJxX251bWJlcjogZGV2PS9waW5tdXhANzAwMDA4ZDQvc2RtbWMzX2Nsa19zY2htaXR0X2Rpc2Fi
bGUvc2RtbWMzDQooWEVOKSBoYW5kbGUgL3Bpbm11eEA3MDAwMDhkNC9zZG1tYzNfZHJ2X2NvZGUN
CihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGlubXV4QDcwMDAwOGQ0L3NkbW1jM19kcnZfY29k
ZQ0KKFhFTikgL3Bpbm11eEA3MDAwMDhkNC9zZG1tYzNfZHJ2X2NvZGUgcGFzc3Rocm91Z2ggPSAx
IG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3Bpbm11eEA3MDAwMDhkNC9zZG1tYzNfZHJ2X2Nv
ZGUgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBk
ZXY9L3Bpbm11eEA3MDAwMDhkNC9zZG1tYzNfZHJ2X2NvZGUNCihYRU4pIGhhbmRsZSAvcGlubXV4
QDcwMDAwOGQ0L3NkbW1jM19kcnZfY29kZS9zZG1tYzMNCihYRU4pIGR0X2lycV9udW1iZXI6IGRl
dj0vcGlubXV4QDcwMDAwOGQ0L3NkbW1jM19kcnZfY29kZS9zZG1tYzMNCihYRU4pIC9waW5tdXhA
NzAwMDA4ZDQvc2RtbWMzX2Rydl9jb2RlL3NkbW1jMyBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAw
DQooWEVOKSBDaGVjayBpZiAvcGlubXV4QDcwMDAwOGQ0L3NkbW1jM19kcnZfY29kZS9zZG1tYzMg
aXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9
L3Bpbm11eEA3MDAwMDhkNC9zZG1tYzNfZHJ2X2NvZGUvc2RtbWMzDQooWEVOKSBoYW5kbGUgL3Bp
bm11eEA3MDAwMDhkNC9zZG1tYzNfZGVmYXVsdF9kcnZfY29kZQ0KKFhFTikgZHRfaXJxX251bWJl
cjogZGV2PS9waW5tdXhANzAwMDA4ZDQvc2RtbWMzX2RlZmF1bHRfZHJ2X2NvZGUNCihYRU4pIC9w
aW5tdXhANzAwMDA4ZDQvc2RtbWMzX2RlZmF1bHRfZHJ2X2NvZGUgcGFzc3Rocm91Z2ggPSAxIG5h
ZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3Bpbm11eEA3MDAwMDhkNC9zZG1tYzNfZGVmYXVsdF9k
cnZfY29kZSBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1i
ZXI6IGRldj0vcGlubXV4QDcwMDAwOGQ0L3NkbW1jM19kZWZhdWx0X2Rydl9jb2RlDQooWEVOKSBo
YW5kbGUgL3Bpbm11eEA3MDAwMDhkNC9zZG1tYzNfZGVmYXVsdF9kcnZfY29kZS9zZG1tYzMNCihY
RU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGlubXV4QDcwMDAwOGQ0L3NkbW1jM19kZWZhdWx0X2Ry
dl9jb2RlL3NkbW1jMw0KKFhFTikgL3Bpbm11eEA3MDAwMDhkNC9zZG1tYzNfZGVmYXVsdF9kcnZf
Y29kZS9zZG1tYzMgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3Bp
bm11eEA3MDAwMDhkNC9zZG1tYzNfZGVmYXVsdF9kcnZfY29kZS9zZG1tYzMgaXMgYmVoaW5kIHRo
ZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3Bpbm11eEA3MDAw
MDhkNC9zZG1tYzNfZGVmYXVsdF9kcnZfY29kZS9zZG1tYzMNCihYRU4pIGhhbmRsZSAvcGlubXV4
QDcwMDAwOGQ0L2R2ZnNfcHdtX2FjdGl2ZQ0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9waW5t
dXhANzAwMDA4ZDQvZHZmc19wd21fYWN0aXZlDQooWEVOKSAvcGlubXV4QDcwMDAwOGQ0L2R2ZnNf
cHdtX2FjdGl2ZSBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvcGlu
bXV4QDcwMDAwOGQ0L2R2ZnNfcHdtX2FjdGl2ZSBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQg
aXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGlubXV4QDcwMDAwOGQ0L2R2ZnNfcHdtX2Fj
dGl2ZQ0KKFhFTikgaGFuZGxlIC9waW5tdXhANzAwMDA4ZDQvZHZmc19wd21fYWN0aXZlL2R2ZnNf
cHdtX3BiYjENCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGlubXV4QDcwMDAwOGQ0L2R2ZnNf
cHdtX2FjdGl2ZS9kdmZzX3B3bV9wYmIxDQooWEVOKSAvcGlubXV4QDcwMDAwOGQ0L2R2ZnNfcHdt
X2FjdGl2ZS9kdmZzX3B3bV9wYmIxIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENo
ZWNrIGlmIC9waW5tdXhANzAwMDA4ZDQvZHZmc19wd21fYWN0aXZlL2R2ZnNfcHdtX3BiYjEgaXMg
YmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3Bp
bm11eEA3MDAwMDhkNC9kdmZzX3B3bV9hY3RpdmUvZHZmc19wd21fcGJiMQ0KKFhFTikgaGFuZGxl
IC9waW5tdXhANzAwMDA4ZDQvZHZmc19wd21faW5hY3RpdmUNCihYRU4pIGR0X2lycV9udW1iZXI6
IGRldj0vcGlubXV4QDcwMDAwOGQ0L2R2ZnNfcHdtX2luYWN0aXZlDQooWEVOKSAvcGlubXV4QDcw
MDAwOGQ0L2R2ZnNfcHdtX2luYWN0aXZlIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4p
IENoZWNrIGlmIC9waW5tdXhANzAwMDA4ZDQvZHZmc19wd21faW5hY3RpdmUgaXMgYmVoaW5kIHRo
ZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3Bpbm11eEA3MDAw
MDhkNC9kdmZzX3B3bV9pbmFjdGl2ZQ0KKFhFTikgaGFuZGxlIC9waW5tdXhANzAwMDA4ZDQvZHZm
c19wd21faW5hY3RpdmUvZHZmc19wd21fcGJiMQ0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9w
aW5tdXhANzAwMDA4ZDQvZHZmc19wd21faW5hY3RpdmUvZHZmc19wd21fcGJiMQ0KKFhFTikgL3Bp
bm11eEA3MDAwMDhkNC9kdmZzX3B3bV9pbmFjdGl2ZS9kdmZzX3B3bV9wYmIxIHBhc3N0aHJvdWdo
ID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9waW5tdXhANzAwMDA4ZDQvZHZmc19wd21f
aW5hY3RpdmUvZHZmc19wd21fcGJiMSBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihY
RU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGlubXV4QDcwMDAwOGQ0L2R2ZnNfcHdtX2luYWN0aXZl
L2R2ZnNfcHdtX3BiYjENCihYRU4pIGhhbmRsZSAvcGlubXV4QDcwMDAwOGQ0L2NvbW1vbg0KKFhF
TikgZHRfaXJxX251bWJlcjogZGV2PS9waW5tdXhANzAwMDA4ZDQvY29tbW9uDQooWEVOKSAvcGlu
bXV4QDcwMDAwOGQ0L2NvbW1vbiBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVj
ayBpZiAvcGlubXV4QDcwMDAwOGQ0L2NvbW1vbiBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQg
aXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGlubXV4QDcwMDAwOGQ0L2NvbW1vbg0KKFhF
TikgaGFuZGxlIC9waW5tdXhANzAwMDA4ZDQvY29tbW9uL2R2ZnNfcHdtX3BiYjENCihYRU4pIGR0
X2lycV9udW1iZXI6IGRldj0vcGlubXV4QDcwMDAwOGQ0L2NvbW1vbi9kdmZzX3B3bV9wYmIxDQoo
WEVOKSAvcGlubXV4QDcwMDAwOGQ0L2NvbW1vbi9kdmZzX3B3bV9wYmIxIHBhc3N0aHJvdWdoID0g
MSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9waW5tdXhANzAwMDA4ZDQvY29tbW9uL2R2ZnNf
cHdtX3BiYjEgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVt
YmVyOiBkZXY9L3Bpbm11eEA3MDAwMDhkNC9jb21tb24vZHZmc19wd21fcGJiMQ0KKFhFTikgaGFu
ZGxlIC9waW5tdXhANzAwMDA4ZDQvY29tbW9uL2RtaWMxX2Nsa19wZTANCihYRU4pIGR0X2lycV9u
dW1iZXI6IGRldj0vcGlubXV4QDcwMDAwOGQ0L2NvbW1vbi9kbWljMV9jbGtfcGUwDQooWEVOKSAv
cGlubXV4QDcwMDAwOGQ0L2NvbW1vbi9kbWljMV9jbGtfcGUwIHBhc3N0aHJvdWdoID0gMSBuYWRk
ciA9IDANCihYRU4pIENoZWNrIGlmIC9waW5tdXhANzAwMDA4ZDQvY29tbW9uL2RtaWMxX2Nsa19w
ZTAgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBk
ZXY9L3Bpbm11eEA3MDAwMDhkNC9jb21tb24vZG1pYzFfY2xrX3BlMA0KKFhFTikgaGFuZGxlIC9w
aW5tdXhANzAwMDA4ZDQvY29tbW9uL2RtaWMxX2RhdF9wZTENCihYRU4pIGR0X2lycV9udW1iZXI6
IGRldj0vcGlubXV4QDcwMDAwOGQ0L2NvbW1vbi9kbWljMV9kYXRfcGUxDQooWEVOKSAvcGlubXV4
QDcwMDAwOGQ0L2NvbW1vbi9kbWljMV9kYXRfcGUxIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDAN
CihYRU4pIENoZWNrIGlmIC9waW5tdXhANzAwMDA4ZDQvY29tbW9uL2RtaWMxX2RhdF9wZTEgaXMg
YmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3Bp
bm11eEA3MDAwMDhkNC9jb21tb24vZG1pYzFfZGF0X3BlMQ0KKFhFTikgaGFuZGxlIC9waW5tdXhA
NzAwMDA4ZDQvY29tbW9uL2RtaWMyX2Nsa19wZTINCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0v
cGlubXV4QDcwMDAwOGQ0L2NvbW1vbi9kbWljMl9jbGtfcGUyDQooWEVOKSAvcGlubXV4QDcwMDAw
OGQ0L2NvbW1vbi9kbWljMl9jbGtfcGUyIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4p
IENoZWNrIGlmIC9waW5tdXhANzAwMDA4ZDQvY29tbW9uL2RtaWMyX2Nsa19wZTIgaXMgYmVoaW5k
IHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3Bpbm11eEA3
MDAwMDhkNC9jb21tb24vZG1pYzJfY2xrX3BlMg0KKFhFTikgaGFuZGxlIC9waW5tdXhANzAwMDA4
ZDQvY29tbW9uL2RtaWMyX2RhdF9wZTMNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGlubXV4
QDcwMDAwOGQ0L2NvbW1vbi9kbWljMl9kYXRfcGUzDQooWEVOKSAvcGlubXV4QDcwMDAwOGQ0L2Nv
bW1vbi9kbWljMl9kYXRfcGUzIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNr
IGlmIC9waW5tdXhANzAwMDA4ZDQvY29tbW9uL2RtaWMyX2RhdF9wZTMgaXMgYmVoaW5kIHRoZSBJ
T01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3Bpbm11eEA3MDAwMDhk
NC9jb21tb24vZG1pYzJfZGF0X3BlMw0KKFhFTikgaGFuZGxlIC9waW5tdXhANzAwMDA4ZDQvY29t
bW9uL3BlNw0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9waW5tdXhANzAwMDA4ZDQvY29tbW9u
L3BlNw0KKFhFTikgL3Bpbm11eEA3MDAwMDhkNC9jb21tb24vcGU3IHBhc3N0aHJvdWdoID0gMSBu
YWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9waW5tdXhANzAwMDA4ZDQvY29tbW9uL3BlNyBpcyBi
ZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGlu
bXV4QDcwMDAwOGQ0L2NvbW1vbi9wZTcNCihYRU4pIGhhbmRsZSAvcGlubXV4QDcwMDAwOGQ0L2Nv
bW1vbi9nZW4zX2kyY19zY2xfcGYwDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3Bpbm11eEA3
MDAwMDhkNC9jb21tb24vZ2VuM19pMmNfc2NsX3BmMA0KKFhFTikgL3Bpbm11eEA3MDAwMDhkNC9j
b21tb24vZ2VuM19pMmNfc2NsX3BmMCBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBD
aGVjayBpZiAvcGlubXV4QDcwMDAwOGQ0L2NvbW1vbi9nZW4zX2kyY19zY2xfcGYwIGlzIGJlaGlu
ZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9waW5tdXhA
NzAwMDA4ZDQvY29tbW9uL2dlbjNfaTJjX3NjbF9wZjANCihYRU4pIGhhbmRsZSAvcGlubXV4QDcw
MDAwOGQ0L2NvbW1vbi9nZW4zX2kyY19zZGFfcGYxDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9
L3Bpbm11eEA3MDAwMDhkNC9jb21tb24vZ2VuM19pMmNfc2RhX3BmMQ0KKFhFTikgL3Bpbm11eEA3
MDAwMDhkNC9jb21tb24vZ2VuM19pMmNfc2RhX3BmMSBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAw
DQooWEVOKSBDaGVjayBpZiAvcGlubXV4QDcwMDAwOGQ0L2NvbW1vbi9nZW4zX2kyY19zZGFfcGYx
IGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2
PS9waW5tdXhANzAwMDA4ZDQvY29tbW9uL2dlbjNfaTJjX3NkYV9wZjENCihYRU4pIGhhbmRsZSAv
cGlubXV4QDcwMDAwOGQ0L2NvbW1vbi9jYW1faTJjX3NjbF9wczINCihYRU4pIGR0X2lycV9udW1i
ZXI6IGRldj0vcGlubXV4QDcwMDAwOGQ0L2NvbW1vbi9jYW1faTJjX3NjbF9wczINCihYRU4pIC9w
aW5tdXhANzAwMDA4ZDQvY29tbW9uL2NhbV9pMmNfc2NsX3BzMiBwYXNzdGhyb3VnaCA9IDEgbmFk
ZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvcGlubXV4QDcwMDAwOGQ0L2NvbW1vbi9jYW1faTJjX3Nj
bF9wczIgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVy
OiBkZXY9L3Bpbm11eEA3MDAwMDhkNC9jb21tb24vY2FtX2kyY19zY2xfcHMyDQooWEVOKSBoYW5k
bGUgL3Bpbm11eEA3MDAwMDhkNC9jb21tb24vY2FtX2kyY19zZGFfcHMzDQooWEVOKSBkdF9pcnFf
bnVtYmVyOiBkZXY9L3Bpbm11eEA3MDAwMDhkNC9jb21tb24vY2FtX2kyY19zZGFfcHMzDQooWEVO
KSAvcGlubXV4QDcwMDAwOGQ0L2NvbW1vbi9jYW1faTJjX3NkYV9wczMgcGFzc3Rocm91Z2ggPSAx
IG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3Bpbm11eEA3MDAwMDhkNC9jb21tb24vY2FtX2ky
Y19zZGFfcHMzIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251
bWJlcjogZGV2PS9waW5tdXhANzAwMDA4ZDQvY29tbW9uL2NhbV9pMmNfc2RhX3BzMw0KKFhFTikg
aGFuZGxlIC9waW5tdXhANzAwMDA4ZDQvY29tbW9uL2NhbTFfbWNsa19wczANCihYRU4pIGR0X2ly
cV9udW1iZXI6IGRldj0vcGlubXV4QDcwMDAwOGQ0L2NvbW1vbi9jYW0xX21jbGtfcHMwDQooWEVO
KSAvcGlubXV4QDcwMDAwOGQ0L2NvbW1vbi9jYW0xX21jbGtfcHMwIHBhc3N0aHJvdWdoID0gMSBu
YWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9waW5tdXhANzAwMDA4ZDQvY29tbW9uL2NhbTFfbWNs
a19wczAgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVy
OiBkZXY9L3Bpbm11eEA3MDAwMDhkNC9jb21tb24vY2FtMV9tY2xrX3BzMA0KKFhFTikgaGFuZGxl
IC9waW5tdXhANzAwMDA4ZDQvY29tbW9uL2NhbTJfbWNsa19wczENCihYRU4pIGR0X2lycV9udW1i
ZXI6IGRldj0vcGlubXV4QDcwMDAwOGQ0L2NvbW1vbi9jYW0yX21jbGtfcHMxDQooWEVOKSAvcGlu
bXV4QDcwMDAwOGQ0L2NvbW1vbi9jYW0yX21jbGtfcHMxIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9
IDANCihYRU4pIENoZWNrIGlmIC9waW5tdXhANzAwMDA4ZDQvY29tbW9uL2NhbTJfbWNsa19wczEg
aXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9
L3Bpbm11eEA3MDAwMDhkNC9jb21tb24vY2FtMl9tY2xrX3BzMQ0KKFhFTikgaGFuZGxlIC9waW5t
dXhANzAwMDA4ZDQvY29tbW9uL3BleF9sMF9jbGtyZXFfbl9wYTENCihYRU4pIGR0X2lycV9udW1i
ZXI6IGRldj0vcGlubXV4QDcwMDAwOGQ0L2NvbW1vbi9wZXhfbDBfY2xrcmVxX25fcGExDQooWEVO
KSAvcGlubXV4QDcwMDAwOGQ0L2NvbW1vbi9wZXhfbDBfY2xrcmVxX25fcGExIHBhc3N0aHJvdWdo
ID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9waW5tdXhANzAwMDA4ZDQvY29tbW9uL3Bl
eF9sMF9jbGtyZXFfbl9wYTEgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBk
dF9pcnFfbnVtYmVyOiBkZXY9L3Bpbm11eEA3MDAwMDhkNC9jb21tb24vcGV4X2wwX2Nsa3JlcV9u
X3BhMQ0KKFhFTikgaGFuZGxlIC9waW5tdXhANzAwMDA4ZDQvY29tbW9uL3BleF9sMF9yc3Rfbl9w
YTANCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGlubXV4QDcwMDAwOGQ0L2NvbW1vbi9wZXhf
bDBfcnN0X25fcGEwDQooWEVOKSAvcGlubXV4QDcwMDAwOGQ0L2NvbW1vbi9wZXhfbDBfcnN0X25f
cGEwIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9waW5tdXhANzAw
MDA4ZDQvY29tbW9uL3BleF9sMF9yc3Rfbl9wYTAgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRk
IGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3Bpbm11eEA3MDAwMDhkNC9jb21tb24vcGV4
X2wwX3JzdF9uX3BhMA0KKFhFTikgaGFuZGxlIC9waW5tdXhANzAwMDA4ZDQvY29tbW9uL3BleF9s
MV9jbGtyZXFfbl9wYTQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGlubXV4QDcwMDAwOGQ0
L2NvbW1vbi9wZXhfbDFfY2xrcmVxX25fcGE0DQooWEVOKSAvcGlubXV4QDcwMDAwOGQ0L2NvbW1v
bi9wZXhfbDFfY2xrcmVxX25fcGE0IHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENo
ZWNrIGlmIC9waW5tdXhANzAwMDA4ZDQvY29tbW9uL3BleF9sMV9jbGtyZXFfbl9wYTQgaXMgYmVo
aW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3Bpbm11
eEA3MDAwMDhkNC9jb21tb24vcGV4X2wxX2Nsa3JlcV9uX3BhNA0KKFhFTikgaGFuZGxlIC9waW5t
dXhANzAwMDA4ZDQvY29tbW9uL3BleF9sMV9yc3Rfbl9wYTMNCihYRU4pIGR0X2lycV9udW1iZXI6
IGRldj0vcGlubXV4QDcwMDAwOGQ0L2NvbW1vbi9wZXhfbDFfcnN0X25fcGEzDQooWEVOKSAvcGlu
bXV4QDcwMDAwOGQ0L2NvbW1vbi9wZXhfbDFfcnN0X25fcGEzIHBhc3N0aHJvdWdoID0gMSBuYWRk
ciA9IDANCihYRU4pIENoZWNrIGlmIC9waW5tdXhANzAwMDA4ZDQvY29tbW9uL3BleF9sMV9yc3Rf
bl9wYTMgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVy
OiBkZXY9L3Bpbm11eEA3MDAwMDhkNC9jb21tb24vcGV4X2wxX3JzdF9uX3BhMw0KKFhFTikgaGFu
ZGxlIC9waW5tdXhANzAwMDA4ZDQvY29tbW9uL3BleF93YWtlX25fcGEyDQooWEVOKSBkdF9pcnFf
bnVtYmVyOiBkZXY9L3Bpbm11eEA3MDAwMDhkNC9jb21tb24vcGV4X3dha2Vfbl9wYTINCihYRU4p
IC9waW5tdXhANzAwMDA4ZDQvY29tbW9uL3BleF93YWtlX25fcGEyIHBhc3N0aHJvdWdoID0gMSBu
YWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9waW5tdXhANzAwMDA4ZDQvY29tbW9uL3BleF93YWtl
X25fcGEyIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJl
cjogZGV2PS9waW5tdXhANzAwMDA4ZDQvY29tbW9uL3BleF93YWtlX25fcGEyDQooWEVOKSBoYW5k
bGUgL3Bpbm11eEA3MDAwMDhkNC9jb21tb24vc2RtbWMxX2Nsa19wbTANCihYRU4pIGR0X2lycV9u
dW1iZXI6IGRldj0vcGlubXV4QDcwMDAwOGQ0L2NvbW1vbi9zZG1tYzFfY2xrX3BtMA0KKFhFTikg
L3Bpbm11eEA3MDAwMDhkNC9jb21tb24vc2RtbWMxX2Nsa19wbTAgcGFzc3Rocm91Z2ggPSAxIG5h
ZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3Bpbm11eEA3MDAwMDhkNC9jb21tb24vc2RtbWMxX2Ns
a19wbTAgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVy
OiBkZXY9L3Bpbm11eEA3MDAwMDhkNC9jb21tb24vc2RtbWMxX2Nsa19wbTANCihYRU4pIGhhbmRs
ZSAvcGlubXV4QDcwMDAwOGQ0L2NvbW1vbi9zZG1tYzFfY21kX3BtMQ0KKFhFTikgZHRfaXJxX251
bWJlcjogZGV2PS9waW5tdXhANzAwMDA4ZDQvY29tbW9uL3NkbW1jMV9jbWRfcG0xDQooWEVOKSAv
cGlubXV4QDcwMDAwOGQ0L2NvbW1vbi9zZG1tYzFfY21kX3BtMSBwYXNzdGhyb3VnaCA9IDEgbmFk
ZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvcGlubXV4QDcwMDAwOGQ0L2NvbW1vbi9zZG1tYzFfY21k
X3BtMSBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6
IGRldj0vcGlubXV4QDcwMDAwOGQ0L2NvbW1vbi9zZG1tYzFfY21kX3BtMQ0KKFhFTikgaGFuZGxl
IC9waW5tdXhANzAwMDA4ZDQvY29tbW9uL3NkbW1jMV9kYXQwX3BtNQ0KKFhFTikgZHRfaXJxX251
bWJlcjogZGV2PS9waW5tdXhANzAwMDA4ZDQvY29tbW9uL3NkbW1jMV9kYXQwX3BtNQ0KKFhFTikg
L3Bpbm11eEA3MDAwMDhkNC9jb21tb24vc2RtbWMxX2RhdDBfcG01IHBhc3N0aHJvdWdoID0gMSBu
YWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9waW5tdXhANzAwMDA4ZDQvY29tbW9uL3NkbW1jMV9k
YXQwX3BtNSBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1i
ZXI6IGRldj0vcGlubXV4QDcwMDAwOGQ0L2NvbW1vbi9zZG1tYzFfZGF0MF9wbTUNCihYRU4pIGhh
bmRsZSAvcGlubXV4QDcwMDAwOGQ0L2NvbW1vbi9zZG1tYzFfZGF0MV9wbTQNCihYRU4pIGR0X2ly
cV9udW1iZXI6IGRldj0vcGlubXV4QDcwMDAwOGQ0L2NvbW1vbi9zZG1tYzFfZGF0MV9wbTQNCihY
RU4pIC9waW5tdXhANzAwMDA4ZDQvY29tbW9uL3NkbW1jMV9kYXQxX3BtNCBwYXNzdGhyb3VnaCA9
IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvcGlubXV4QDcwMDAwOGQ0L2NvbW1vbi9zZG1t
YzFfZGF0MV9wbTQgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFf
bnVtYmVyOiBkZXY9L3Bpbm11eEA3MDAwMDhkNC9jb21tb24vc2RtbWMxX2RhdDFfcG00DQooWEVO
KSBoYW5kbGUgL3Bpbm11eEA3MDAwMDhkNC9jb21tb24vc2RtbWMxX2RhdDJfcG0zDQooWEVOKSBk
dF9pcnFfbnVtYmVyOiBkZXY9L3Bpbm11eEA3MDAwMDhkNC9jb21tb24vc2RtbWMxX2RhdDJfcG0z
DQooWEVOKSAvcGlubXV4QDcwMDAwOGQ0L2NvbW1vbi9zZG1tYzFfZGF0Ml9wbTMgcGFzc3Rocm91
Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3Bpbm11eEA3MDAwMDhkNC9jb21tb24v
c2RtbWMxX2RhdDJfcG0zIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRf
aXJxX251bWJlcjogZGV2PS9waW5tdXhANzAwMDA4ZDQvY29tbW9uL3NkbW1jMV9kYXQyX3BtMw0K
KFhFTikgaGFuZGxlIC9waW5tdXhANzAwMDA4ZDQvY29tbW9uL3NkbW1jMV9kYXQzX3BtMg0KKFhF
TikgZHRfaXJxX251bWJlcjogZGV2PS9waW5tdXhANzAwMDA4ZDQvY29tbW9uL3NkbW1jMV9kYXQz
X3BtMg0KKFhFTikgL3Bpbm11eEA3MDAwMDhkNC9jb21tb24vc2RtbWMxX2RhdDNfcG0yIHBhc3N0
aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9waW5tdXhANzAwMDA4ZDQvY29t
bW9uL3NkbW1jMV9kYXQzX3BtMiBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4p
IGR0X2lycV9udW1iZXI6IGRldj0vcGlubXV4QDcwMDAwOGQ0L2NvbW1vbi9zZG1tYzFfZGF0M19w
bTINCihYRU4pIGhhbmRsZSAvcGlubXV4QDcwMDAwOGQ0L2NvbW1vbi9zZG1tYzNfY2xrX3BwMA0K
KFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9waW5tdXhANzAwMDA4ZDQvY29tbW9uL3NkbW1jM19j
bGtfcHAwDQooWEVOKSAvcGlubXV4QDcwMDAwOGQ0L2NvbW1vbi9zZG1tYzNfY2xrX3BwMCBwYXNz
dGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvcGlubXV4QDcwMDAwOGQ0L2Nv
bW1vbi9zZG1tYzNfY2xrX3BwMCBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4p
IGR0X2lycV9udW1iZXI6IGRldj0vcGlubXV4QDcwMDAwOGQ0L2NvbW1vbi9zZG1tYzNfY2xrX3Bw
MA0KKFhFTikgaGFuZGxlIC9waW5tdXhANzAwMDA4ZDQvY29tbW9uL3NkbW1jM19jbWRfcHAxDQoo
WEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3Bpbm11eEA3MDAwMDhkNC9jb21tb24vc2RtbWMzX2Nt
ZF9wcDENCihYRU4pIC9waW5tdXhANzAwMDA4ZDQvY29tbW9uL3NkbW1jM19jbWRfcHAxIHBhc3N0
aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9waW5tdXhANzAwMDA4ZDQvY29t
bW9uL3NkbW1jM19jbWRfcHAxIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikg
ZHRfaXJxX251bWJlcjogZGV2PS9waW5tdXhANzAwMDA4ZDQvY29tbW9uL3NkbW1jM19jbWRfcHAx
DQooWEVOKSBoYW5kbGUgL3Bpbm11eEA3MDAwMDhkNC9jb21tb24vc2RtbWMzX2RhdDBfcHA1DQoo
WEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3Bpbm11eEA3MDAwMDhkNC9jb21tb24vc2RtbWMzX2Rh
dDBfcHA1DQooWEVOKSAvcGlubXV4QDcwMDAwOGQ0L2NvbW1vbi9zZG1tYzNfZGF0MF9wcDUgcGFz
c3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3Bpbm11eEA3MDAwMDhkNC9j
b21tb24vc2RtbWMzX2RhdDBfcHA1IGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhF
TikgZHRfaXJxX251bWJlcjogZGV2PS9waW5tdXhANzAwMDA4ZDQvY29tbW9uL3NkbW1jM19kYXQw
X3BwNQ0KKFhFTikgaGFuZGxlIC9waW5tdXhANzAwMDA4ZDQvY29tbW9uL3NkbW1jM19kYXQxX3Bw
NA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9waW5tdXhANzAwMDA4ZDQvY29tbW9uL3NkbW1j
M19kYXQxX3BwNA0KKFhFTikgL3Bpbm11eEA3MDAwMDhkNC9jb21tb24vc2RtbWMzX2RhdDFfcHA0
IHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9waW5tdXhANzAwMDA4
ZDQvY29tbW9uL3NkbW1jM19kYXQxX3BwNCBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQN
CihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGlubXV4QDcwMDAwOGQ0L2NvbW1vbi9zZG1tYzNf
ZGF0MV9wcDQNCihYRU4pIGhhbmRsZSAvcGlubXV4QDcwMDAwOGQ0L2NvbW1vbi9zZG1tYzNfZGF0
Ml9wcDMNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGlubXV4QDcwMDAwOGQ0L2NvbW1vbi9z
ZG1tYzNfZGF0Ml9wcDMNCihYRU4pIC9waW5tdXhANzAwMDA4ZDQvY29tbW9uL3NkbW1jM19kYXQy
X3BwMyBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvcGlubXV4QDcw
MDAwOGQ0L2NvbW1vbi9zZG1tYzNfZGF0Ml9wcDMgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRk
IGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3Bpbm11eEA3MDAwMDhkNC9jb21tb24vc2Rt
bWMzX2RhdDJfcHAzDQooWEVOKSBoYW5kbGUgL3Bpbm11eEA3MDAwMDhkNC9jb21tb24vc2RtbWMz
X2RhdDNfcHAyDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3Bpbm11eEA3MDAwMDhkNC9jb21t
b24vc2RtbWMzX2RhdDNfcHAyDQooWEVOKSAvcGlubXV4QDcwMDAwOGQ0L2NvbW1vbi9zZG1tYzNf
ZGF0M19wcDIgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3Bpbm11
eEA3MDAwMDhkNC9jb21tb24vc2RtbWMzX2RhdDNfcHAyIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5k
IGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9waW5tdXhANzAwMDA4ZDQvY29tbW9u
L3NkbW1jM19kYXQzX3BwMg0KKFhFTikgaGFuZGxlIC9waW5tdXhANzAwMDA4ZDQvY29tbW9uL3No
dXRkb3duDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3Bpbm11eEA3MDAwMDhkNC9jb21tb24v
c2h1dGRvd24NCihYRU4pIC9waW5tdXhANzAwMDA4ZDQvY29tbW9uL3NodXRkb3duIHBhc3N0aHJv
dWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9waW5tdXhANzAwMDA4ZDQvY29tbW9u
L3NodXRkb3duIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251
bWJlcjogZGV2PS9waW5tdXhANzAwMDA4ZDQvY29tbW9uL3NodXRkb3duDQooWEVOKSBoYW5kbGUg
L3Bpbm11eEA3MDAwMDhkNC9jb21tb24vbGNkX2dwaW8yX3B2NA0KKFhFTikgZHRfaXJxX251bWJl
cjogZGV2PS9waW5tdXhANzAwMDA4ZDQvY29tbW9uL2xjZF9ncGlvMl9wdjQNCihYRU4pIC9waW5t
dXhANzAwMDA4ZDQvY29tbW9uL2xjZF9ncGlvMl9wdjQgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0g
MA0KKFhFTikgQ2hlY2sgaWYgL3Bpbm11eEA3MDAwMDhkNC9jb21tb24vbGNkX2dwaW8yX3B2NCBp
cyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0v
cGlubXV4QDcwMDAwOGQ0L2NvbW1vbi9sY2RfZ3BpbzJfcHY0DQooWEVOKSBoYW5kbGUgL3Bpbm11
eEA3MDAwMDhkNC9jb21tb24vcHdyX2kyY19zY2xfcHkzDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBk
ZXY9L3Bpbm11eEA3MDAwMDhkNC9jb21tb24vcHdyX2kyY19zY2xfcHkzDQooWEVOKSAvcGlubXV4
QDcwMDAwOGQ0L2NvbW1vbi9wd3JfaTJjX3NjbF9weTMgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0g
MA0KKFhFTikgQ2hlY2sgaWYgL3Bpbm11eEA3MDAwMDhkNC9jb21tb24vcHdyX2kyY19zY2xfcHkz
IGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2
PS9waW5tdXhANzAwMDA4ZDQvY29tbW9uL3B3cl9pMmNfc2NsX3B5Mw0KKFhFTikgaGFuZGxlIC9w
aW5tdXhANzAwMDA4ZDQvY29tbW9uL3B3cl9pMmNfc2RhX3B5NA0KKFhFTikgZHRfaXJxX251bWJl
cjogZGV2PS9waW5tdXhANzAwMDA4ZDQvY29tbW9uL3B3cl9pMmNfc2RhX3B5NA0KKFhFTikgL3Bp
bm11eEA3MDAwMDhkNC9jb21tb24vcHdyX2kyY19zZGFfcHk0IHBhc3N0aHJvdWdoID0gMSBuYWRk
ciA9IDANCihYRU4pIENoZWNrIGlmIC9waW5tdXhANzAwMDA4ZDQvY29tbW9uL3B3cl9pMmNfc2Rh
X3B5NCBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6
IGRldj0vcGlubXV4QDcwMDAwOGQ0L2NvbW1vbi9wd3JfaTJjX3NkYV9weTQNCihYRU4pIGhhbmRs
ZSAvcGlubXV4QDcwMDAwOGQ0L2NvbW1vbi9jbGtfMzJrX2luDQooWEVOKSBkdF9pcnFfbnVtYmVy
OiBkZXY9L3Bpbm11eEA3MDAwMDhkNC9jb21tb24vY2xrXzMya19pbg0KKFhFTikgL3Bpbm11eEA3
MDAwMDhkNC9jb21tb24vY2xrXzMya19pbiBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVO
KSBDaGVjayBpZiAvcGlubXV4QDcwMDAwOGQ0L2NvbW1vbi9jbGtfMzJrX2luIGlzIGJlaGluZCB0
aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9waW5tdXhANzAw
MDA4ZDQvY29tbW9uL2Nsa18zMmtfaW4NCihYRU4pIGhhbmRsZSAvcGlubXV4QDcwMDAwOGQ0L2Nv
bW1vbi9jbGtfMzJrX291dF9weTUNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGlubXV4QDcw
MDAwOGQ0L2NvbW1vbi9jbGtfMzJrX291dF9weTUNCihYRU4pIC9waW5tdXhANzAwMDA4ZDQvY29t
bW9uL2Nsa18zMmtfb3V0X3B5NSBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVj
ayBpZiAvcGlubXV4QDcwMDAwOGQ0L2NvbW1vbi9jbGtfMzJrX291dF9weTUgaXMgYmVoaW5kIHRo
ZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3Bpbm11eEA3MDAw
MDhkNC9jb21tb24vY2xrXzMya19vdXRfcHk1DQooWEVOKSBoYW5kbGUgL3Bpbm11eEA3MDAwMDhk
NC9jb21tb24vcHoxDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3Bpbm11eEA3MDAwMDhkNC9j
b21tb24vcHoxDQooWEVOKSAvcGlubXV4QDcwMDAwOGQ0L2NvbW1vbi9wejEgcGFzc3Rocm91Z2gg
PSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3Bpbm11eEA3MDAwMDhkNC9jb21tb24vcHox
IGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2
PS9waW5tdXhANzAwMDA4ZDQvY29tbW9uL3B6MQ0KKFhFTikgaGFuZGxlIC9waW5tdXhANzAwMDA4
ZDQvY29tbW9uL3B6NQ0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9waW5tdXhANzAwMDA4ZDQv
Y29tbW9uL3B6NQ0KKFhFTikgL3Bpbm11eEA3MDAwMDhkNC9jb21tb24vcHo1IHBhc3N0aHJvdWdo
ID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9waW5tdXhANzAwMDA4ZDQvY29tbW9uL3B6
NSBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRl
dj0vcGlubXV4QDcwMDAwOGQ0L2NvbW1vbi9wejUNCihYRU4pIGhhbmRsZSAvcGlubXV4QDcwMDAw
OGQ0L2NvbW1vbi9jb3JlX3B3cl9yZXENCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGlubXV4
QDcwMDAwOGQ0L2NvbW1vbi9jb3JlX3B3cl9yZXENCihYRU4pIC9waW5tdXhANzAwMDA4ZDQvY29t
bW9uL2NvcmVfcHdyX3JlcSBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBp
ZiAvcGlubXV4QDcwMDAwOGQ0L2NvbW1vbi9jb3JlX3B3cl9yZXEgaXMgYmVoaW5kIHRoZSBJT01N
VSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3Bpbm11eEA3MDAwMDhkNC9j
b21tb24vY29yZV9wd3JfcmVxDQooWEVOKSBoYW5kbGUgL3Bpbm11eEA3MDAwMDhkNC9jb21tb24v
cHdyX2ludF9uDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3Bpbm11eEA3MDAwMDhkNC9jb21t
b24vcHdyX2ludF9uDQooWEVOKSAvcGlubXV4QDcwMDAwOGQ0L2NvbW1vbi9wd3JfaW50X24gcGFz
c3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3Bpbm11eEA3MDAwMDhkNC9j
b21tb24vcHdyX2ludF9uIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRf
aXJxX251bWJlcjogZGV2PS9waW5tdXhANzAwMDA4ZDQvY29tbW9uL3B3cl9pbnRfbg0KKFhFTikg
aGFuZGxlIC9waW5tdXhANzAwMDA4ZDQvY29tbW9uL2dlbjFfaTJjX3NjbF9wajENCihYRU4pIGR0
X2lycV9udW1iZXI6IGRldj0vcGlubXV4QDcwMDAwOGQ0L2NvbW1vbi9nZW4xX2kyY19zY2xfcGox
DQooWEVOKSAvcGlubXV4QDcwMDAwOGQ0L2NvbW1vbi9nZW4xX2kyY19zY2xfcGoxIHBhc3N0aHJv
dWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9waW5tdXhANzAwMDA4ZDQvY29tbW9u
L2dlbjFfaTJjX3NjbF9wajEgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBk
dF9pcnFfbnVtYmVyOiBkZXY9L3Bpbm11eEA3MDAwMDhkNC9jb21tb24vZ2VuMV9pMmNfc2NsX3Bq
MQ0KKFhFTikgaGFuZGxlIC9waW5tdXhANzAwMDA4ZDQvY29tbW9uL2dlbjFfaTJjX3NkYV9wajAN
CihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGlubXV4QDcwMDAwOGQ0L2NvbW1vbi9nZW4xX2ky
Y19zZGFfcGowDQooWEVOKSAvcGlubXV4QDcwMDAwOGQ0L2NvbW1vbi9nZW4xX2kyY19zZGFfcGow
IHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9waW5tdXhANzAwMDA4
ZDQvY29tbW9uL2dlbjFfaTJjX3NkYV9wajAgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0
DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3Bpbm11eEA3MDAwMDhkNC9jb21tb24vZ2VuMV9p
MmNfc2RhX3BqMA0KKFhFTikgaGFuZGxlIC9waW5tdXhANzAwMDA4ZDQvY29tbW9uL2dlbjJfaTJj
X3NjbF9wajINCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGlubXV4QDcwMDAwOGQ0L2NvbW1v
bi9nZW4yX2kyY19zY2xfcGoyDQooWEVOKSAvcGlubXV4QDcwMDAwOGQ0L2NvbW1vbi9nZW4yX2ky
Y19zY2xfcGoyIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9waW5t
dXhANzAwMDA4ZDQvY29tbW9uL2dlbjJfaTJjX3NjbF9wajIgaXMgYmVoaW5kIHRoZSBJT01NVSBh
bmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3Bpbm11eEA3MDAwMDhkNC9jb21t
b24vZ2VuMl9pMmNfc2NsX3BqMg0KKFhFTikgaGFuZGxlIC9waW5tdXhANzAwMDA4ZDQvY29tbW9u
L2dlbjJfaTJjX3NkYV9wajMNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGlubXV4QDcwMDAw
OGQ0L2NvbW1vbi9nZW4yX2kyY19zZGFfcGozDQooWEVOKSAvcGlubXV4QDcwMDAwOGQ0L2NvbW1v
bi9nZW4yX2kyY19zZGFfcGozIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNr
IGlmIC9waW5tdXhANzAwMDA4ZDQvY29tbW9uL2dlbjJfaTJjX3NkYV9wajMgaXMgYmVoaW5kIHRo
ZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3Bpbm11eEA3MDAw
MDhkNC9jb21tb24vZ2VuMl9pMmNfc2RhX3BqMw0KKFhFTikgaGFuZGxlIC9waW5tdXhANzAwMDA4
ZDQvY29tbW9uL3VhcnQyX3R4X3BnMA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9waW5tdXhA
NzAwMDA4ZDQvY29tbW9uL3VhcnQyX3R4X3BnMA0KKFhFTikgL3Bpbm11eEA3MDAwMDhkNC9jb21t
b24vdWFydDJfdHhfcGcwIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlm
IC9waW5tdXhANzAwMDA4ZDQvY29tbW9uL3VhcnQyX3R4X3BnMCBpcyBiZWhpbmQgdGhlIElPTU1V
IGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGlubXV4QDcwMDAwOGQ0L2Nv
bW1vbi91YXJ0Ml90eF9wZzANCihYRU4pIGhhbmRsZSAvcGlubXV4QDcwMDAwOGQ0L2NvbW1vbi91
YXJ0Ml9yeF9wZzENCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGlubXV4QDcwMDAwOGQ0L2Nv
bW1vbi91YXJ0Ml9yeF9wZzENCihYRU4pIC9waW5tdXhANzAwMDA4ZDQvY29tbW9uL3VhcnQyX3J4
X3BnMSBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvcGlubXV4QDcw
MDAwOGQ0L2NvbW1vbi91YXJ0Ml9yeF9wZzEgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0
DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3Bpbm11eEA3MDAwMDhkNC9jb21tb24vdWFydDJf
cnhfcGcxDQooWEVOKSBoYW5kbGUgL3Bpbm11eEA3MDAwMDhkNC9jb21tb24vdWFydDFfdHhfcHUw
DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3Bpbm11eEA3MDAwMDhkNC9jb21tb24vdWFydDFf
dHhfcHUwDQooWEVOKSAvcGlubXV4QDcwMDAwOGQ0L2NvbW1vbi91YXJ0MV90eF9wdTAgcGFzc3Ro
cm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3Bpbm11eEA3MDAwMDhkNC9jb21t
b24vdWFydDFfdHhfcHUwIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRf
aXJxX251bWJlcjogZGV2PS9waW5tdXhANzAwMDA4ZDQvY29tbW9uL3VhcnQxX3R4X3B1MA0KKFhF
TikgaGFuZGxlIC9waW5tdXhANzAwMDA4ZDQvY29tbW9uL3VhcnQxX3J4X3B1MQ0KKFhFTikgZHRf
aXJxX251bWJlcjogZGV2PS9waW5tdXhANzAwMDA4ZDQvY29tbW9uL3VhcnQxX3J4X3B1MQ0KKFhF
TikgL3Bpbm11eEA3MDAwMDhkNC9jb21tb24vdWFydDFfcnhfcHUxIHBhc3N0aHJvdWdoID0gMSBu
YWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9waW5tdXhANzAwMDA4ZDQvY29tbW9uL3VhcnQxX3J4
X3B1MSBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6
IGRldj0vcGlubXV4QDcwMDAwOGQ0L2NvbW1vbi91YXJ0MV9yeF9wdTENCihYRU4pIGhhbmRsZSAv
cGlubXV4QDcwMDAwOGQ0L2NvbW1vbi9qdGFnX3J0Y2sNCihYRU4pIGR0X2lycV9udW1iZXI6IGRl
dj0vcGlubXV4QDcwMDAwOGQ0L2NvbW1vbi9qdGFnX3J0Y2sNCihYRU4pIC9waW5tdXhANzAwMDA4
ZDQvY29tbW9uL2p0YWdfcnRjayBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVj
ayBpZiAvcGlubXV4QDcwMDAwOGQ0L2NvbW1vbi9qdGFnX3J0Y2sgaXMgYmVoaW5kIHRoZSBJT01N
VSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3Bpbm11eEA3MDAwMDhkNC9j
b21tb24vanRhZ19ydGNrDQooWEVOKSBoYW5kbGUgL3Bpbm11eEA3MDAwMDhkNC9jb21tb24vdWFy
dDNfdHhfcGQxDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3Bpbm11eEA3MDAwMDhkNC9jb21t
b24vdWFydDNfdHhfcGQxDQooWEVOKSAvcGlubXV4QDcwMDAwOGQ0L2NvbW1vbi91YXJ0M190eF9w
ZDEgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3Bpbm11eEA3MDAw
MDhkNC9jb21tb24vdWFydDNfdHhfcGQxIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0K
KFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9waW5tdXhANzAwMDA4ZDQvY29tbW9uL3VhcnQzX3R4
X3BkMQ0KKFhFTikgaGFuZGxlIC9waW5tdXhANzAwMDA4ZDQvY29tbW9uL3VhcnQzX3J4X3BkMg0K
KFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9waW5tdXhANzAwMDA4ZDQvY29tbW9uL3VhcnQzX3J4
X3BkMg0KKFhFTikgL3Bpbm11eEA3MDAwMDhkNC9jb21tb24vdWFydDNfcnhfcGQyIHBhc3N0aHJv
dWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9waW5tdXhANzAwMDA4ZDQvY29tbW9u
L3VhcnQzX3J4X3BkMiBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2ly
cV9udW1iZXI6IGRldj0vcGlubXV4QDcwMDAwOGQ0L2NvbW1vbi91YXJ0M19yeF9wZDINCihYRU4p
IGhhbmRsZSAvcGlubXV4QDcwMDAwOGQ0L2NvbW1vbi91YXJ0M19ydHNfcGQzDQooWEVOKSBkdF9p
cnFfbnVtYmVyOiBkZXY9L3Bpbm11eEA3MDAwMDhkNC9jb21tb24vdWFydDNfcnRzX3BkMw0KKFhF
TikgL3Bpbm11eEA3MDAwMDhkNC9jb21tb24vdWFydDNfcnRzX3BkMyBwYXNzdGhyb3VnaCA9IDEg
bmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvcGlubXV4QDcwMDAwOGQ0L2NvbW1vbi91YXJ0M19y
dHNfcGQzIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJl
cjogZGV2PS9waW5tdXhANzAwMDA4ZDQvY29tbW9uL3VhcnQzX3J0c19wZDMNCihYRU4pIGhhbmRs
ZSAvcGlubXV4QDcwMDAwOGQ0L2NvbW1vbi91YXJ0M19jdHNfcGQ0DQooWEVOKSBkdF9pcnFfbnVt
YmVyOiBkZXY9L3Bpbm11eEA3MDAwMDhkNC9jb21tb24vdWFydDNfY3RzX3BkNA0KKFhFTikgL3Bp
bm11eEA3MDAwMDhkNC9jb21tb24vdWFydDNfY3RzX3BkNCBwYXNzdGhyb3VnaCA9IDEgbmFkZHIg
PSAwDQooWEVOKSBDaGVjayBpZiAvcGlubXV4QDcwMDAwOGQ0L2NvbW1vbi91YXJ0M19jdHNfcGQ0
IGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2
PS9waW5tdXhANzAwMDA4ZDQvY29tbW9uL3VhcnQzX2N0c19wZDQNCihYRU4pIGhhbmRsZSAvcGlu
bXV4QDcwMDAwOGQ0L2NvbW1vbi91YXJ0NF90eF9waTQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRl
dj0vcGlubXV4QDcwMDAwOGQ0L2NvbW1vbi91YXJ0NF90eF9waTQNCihYRU4pIC9waW5tdXhANzAw
MDA4ZDQvY29tbW9uL3VhcnQ0X3R4X3BpNCBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVO
KSBDaGVjayBpZiAvcGlubXV4QDcwMDAwOGQ0L2NvbW1vbi91YXJ0NF90eF9waTQgaXMgYmVoaW5k
IHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3Bpbm11eEA3
MDAwMDhkNC9jb21tb24vdWFydDRfdHhfcGk0DQooWEVOKSBoYW5kbGUgL3Bpbm11eEA3MDAwMDhk
NC9jb21tb24vdWFydDRfcnhfcGk1DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3Bpbm11eEA3
MDAwMDhkNC9jb21tb24vdWFydDRfcnhfcGk1DQooWEVOKSAvcGlubXV4QDcwMDAwOGQ0L2NvbW1v
bi91YXJ0NF9yeF9waTUgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYg
L3Bpbm11eEA3MDAwMDhkNC9jb21tb24vdWFydDRfcnhfcGk1IGlzIGJlaGluZCB0aGUgSU9NTVUg
YW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9waW5tdXhANzAwMDA4ZDQvY29t
bW9uL3VhcnQ0X3J4X3BpNQ0KKFhFTikgaGFuZGxlIC9waW5tdXhANzAwMDA4ZDQvY29tbW9uL3Vh
cnQ0X3J0c19waTYNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGlubXV4QDcwMDAwOGQ0L2Nv
bW1vbi91YXJ0NF9ydHNfcGk2DQooWEVOKSAvcGlubXV4QDcwMDAwOGQ0L2NvbW1vbi91YXJ0NF9y
dHNfcGk2IHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9waW5tdXhA
NzAwMDA4ZDQvY29tbW9uL3VhcnQ0X3J0c19waTYgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRk
IGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3Bpbm11eEA3MDAwMDhkNC9jb21tb24vdWFy
dDRfcnRzX3BpNg0KKFhFTikgaGFuZGxlIC9waW5tdXhANzAwMDA4ZDQvY29tbW9uL3VhcnQ0X2N0
c19waTcNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGlubXV4QDcwMDAwOGQ0L2NvbW1vbi91
YXJ0NF9jdHNfcGk3DQooWEVOKSAvcGlubXV4QDcwMDAwOGQ0L2NvbW1vbi91YXJ0NF9jdHNfcGk3
IHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9waW5tdXhANzAwMDA4
ZDQvY29tbW9uL3VhcnQ0X2N0c19waTcgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQoo
WEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3Bpbm11eEA3MDAwMDhkNC9jb21tb24vdWFydDRfY3Rz
X3BpNw0KKFhFTikgaGFuZGxlIC9waW5tdXhANzAwMDA4ZDQvY29tbW9uL3FzcGlfaW8wX3BlZTIN
CihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGlubXV4QDcwMDAwOGQ0L2NvbW1vbi9xc3BpX2lv
MF9wZWUyDQooWEVOKSAvcGlubXV4QDcwMDAwOGQ0L2NvbW1vbi9xc3BpX2lvMF9wZWUyIHBhc3N0
aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9waW5tdXhANzAwMDA4ZDQvY29t
bW9uL3FzcGlfaW8wX3BlZTIgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBk
dF9pcnFfbnVtYmVyOiBkZXY9L3Bpbm11eEA3MDAwMDhkNC9jb21tb24vcXNwaV9pbzBfcGVlMg0K
KFhFTikgaGFuZGxlIC9waW5tdXhANzAwMDA4ZDQvY29tbW9uL3FzcGlfaW8xX3BlZTMNCihYRU4p
IGR0X2lycV9udW1iZXI6IGRldj0vcGlubXV4QDcwMDAwOGQ0L2NvbW1vbi9xc3BpX2lvMV9wZWUz
DQooWEVOKSAvcGlubXV4QDcwMDAwOGQ0L2NvbW1vbi9xc3BpX2lvMV9wZWUzIHBhc3N0aHJvdWdo
ID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9waW5tdXhANzAwMDA4ZDQvY29tbW9uL3Fz
cGlfaW8xX3BlZTMgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFf
bnVtYmVyOiBkZXY9L3Bpbm11eEA3MDAwMDhkNC9jb21tb24vcXNwaV9pbzFfcGVlMw0KKFhFTikg
aGFuZGxlIC9waW5tdXhANzAwMDA4ZDQvY29tbW9uL3FzcGlfc2NrX3BlZTANCihYRU4pIGR0X2ly
cV9udW1iZXI6IGRldj0vcGlubXV4QDcwMDAwOGQ0L2NvbW1vbi9xc3BpX3Nja19wZWUwDQooWEVO
KSAvcGlubXV4QDcwMDAwOGQ0L2NvbW1vbi9xc3BpX3Nja19wZWUwIHBhc3N0aHJvdWdoID0gMSBu
YWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9waW5tdXhANzAwMDA4ZDQvY29tbW9uL3FzcGlfc2Nr
X3BlZTAgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVy
OiBkZXY9L3Bpbm11eEA3MDAwMDhkNC9jb21tb24vcXNwaV9zY2tfcGVlMA0KKFhFTikgaGFuZGxl
IC9waW5tdXhANzAwMDA4ZDQvY29tbW9uL3FzcGlfY3Nfbl9wZWUxDQooWEVOKSBkdF9pcnFfbnVt
YmVyOiBkZXY9L3Bpbm11eEA3MDAwMDhkNC9jb21tb24vcXNwaV9jc19uX3BlZTENCihYRU4pIC9w
aW5tdXhANzAwMDA4ZDQvY29tbW9uL3FzcGlfY3Nfbl9wZWUxIHBhc3N0aHJvdWdoID0gMSBuYWRk
ciA9IDANCihYRU4pIENoZWNrIGlmIC9waW5tdXhANzAwMDA4ZDQvY29tbW9uL3FzcGlfY3Nfbl9w
ZWUxIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjog
ZGV2PS9waW5tdXhANzAwMDA4ZDQvY29tbW9uL3FzcGlfY3Nfbl9wZWUxDQooWEVOKSBoYW5kbGUg
L3Bpbm11eEA3MDAwMDhkNC9jb21tb24vcXNwaV9pbzJfcGVlNA0KKFhFTikgZHRfaXJxX251bWJl
cjogZGV2PS9waW5tdXhANzAwMDA4ZDQvY29tbW9uL3FzcGlfaW8yX3BlZTQNCihYRU4pIC9waW5t
dXhANzAwMDA4ZDQvY29tbW9uL3FzcGlfaW8yX3BlZTQgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0g
MA0KKFhFTikgQ2hlY2sgaWYgL3Bpbm11eEA3MDAwMDhkNC9jb21tb24vcXNwaV9pbzJfcGVlNCBp
cyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0v
cGlubXV4QDcwMDAwOGQ0L2NvbW1vbi9xc3BpX2lvMl9wZWU0DQooWEVOKSBoYW5kbGUgL3Bpbm11
eEA3MDAwMDhkNC9jb21tb24vcXNwaV9pbzNfcGVlNQ0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2
PS9waW5tdXhANzAwMDA4ZDQvY29tbW9uL3FzcGlfaW8zX3BlZTUNCihYRU4pIC9waW5tdXhANzAw
MDA4ZDQvY29tbW9uL3FzcGlfaW8zX3BlZTUgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhF
TikgQ2hlY2sgaWYgL3Bpbm11eEA3MDAwMDhkNC9jb21tb24vcXNwaV9pbzNfcGVlNSBpcyBiZWhp
bmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGlubXV4
QDcwMDAwOGQ0L2NvbW1vbi9xc3BpX2lvM19wZWU1DQooWEVOKSBoYW5kbGUgL3Bpbm11eEA3MDAw
MDhkNC9jb21tb24vZGFwMl9kaW5fcGFhMg0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9waW5t
dXhANzAwMDA4ZDQvY29tbW9uL2RhcDJfZGluX3BhYTINCihYRU4pIC9waW5tdXhANzAwMDA4ZDQv
Y29tbW9uL2RhcDJfZGluX3BhYTIgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hl
Y2sgaWYgL3Bpbm11eEA3MDAwMDhkNC9jb21tb24vZGFwMl9kaW5fcGFhMiBpcyBiZWhpbmQgdGhl
IElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGlubXV4QDcwMDAw
OGQ0L2NvbW1vbi9kYXAyX2Rpbl9wYWEyDQooWEVOKSBoYW5kbGUgL3Bpbm11eEA3MDAwMDhkNC9j
b21tb24vZGFwMl9kb3V0X3BhYTMNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGlubXV4QDcw
MDAwOGQ0L2NvbW1vbi9kYXAyX2RvdXRfcGFhMw0KKFhFTikgL3Bpbm11eEA3MDAwMDhkNC9jb21t
b24vZGFwMl9kb3V0X3BhYTMgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sg
aWYgL3Bpbm11eEA3MDAwMDhkNC9jb21tb24vZGFwMl9kb3V0X3BhYTMgaXMgYmVoaW5kIHRoZSBJ
T01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3Bpbm11eEA3MDAwMDhk
NC9jb21tb24vZGFwMl9kb3V0X3BhYTMNCihYRU4pIGhhbmRsZSAvcGlubXV4QDcwMDAwOGQ0L2Nv
bW1vbi9kYXAyX2ZzX3BhYTANCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGlubXV4QDcwMDAw
OGQ0L2NvbW1vbi9kYXAyX2ZzX3BhYTANCihYRU4pIC9waW5tdXhANzAwMDA4ZDQvY29tbW9uL2Rh
cDJfZnNfcGFhMCBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvcGlu
bXV4QDcwMDAwOGQ0L2NvbW1vbi9kYXAyX2ZzX3BhYTAgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQg
YWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3Bpbm11eEA3MDAwMDhkNC9jb21tb24v
ZGFwMl9mc19wYWEwDQooWEVOKSBoYW5kbGUgL3Bpbm11eEA3MDAwMDhkNC9jb21tb24vZGFwMl9z
Y2xrX3BhYTENCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGlubXV4QDcwMDAwOGQ0L2NvbW1v
bi9kYXAyX3NjbGtfcGFhMQ0KKFhFTikgL3Bpbm11eEA3MDAwMDhkNC9jb21tb24vZGFwMl9zY2xr
X3BhYTEgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3Bpbm11eEA3
MDAwMDhkNC9jb21tb24vZGFwMl9zY2xrX3BhYTEgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRk
IGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3Bpbm11eEA3MDAwMDhkNC9jb21tb24vZGFw
Ml9zY2xrX3BhYTENCihYRU4pIGhhbmRsZSAvcGlubXV4QDcwMDAwOGQ0L2NvbW1vbi9kcF9ocGQw
X3BjYzYNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGlubXV4QDcwMDAwOGQ0L2NvbW1vbi9k
cF9ocGQwX3BjYzYNCihYRU4pIC9waW5tdXhANzAwMDA4ZDQvY29tbW9uL2RwX2hwZDBfcGNjNiBw
YXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvcGlubXV4QDcwMDAwOGQ0
L2NvbW1vbi9kcF9ocGQwX3BjYzYgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVO
KSBkdF9pcnFfbnVtYmVyOiBkZXY9L3Bpbm11eEA3MDAwMDhkNC9jb21tb24vZHBfaHBkMF9wY2M2
DQooWEVOKSBoYW5kbGUgL3Bpbm11eEA3MDAwMDhkNC9jb21tb24vaGRtaV9pbnRfZHBfaHBkX3Bj
YzENCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGlubXV4QDcwMDAwOGQ0L2NvbW1vbi9oZG1p
X2ludF9kcF9ocGRfcGNjMQ0KKFhFTikgL3Bpbm11eEA3MDAwMDhkNC9jb21tb24vaGRtaV9pbnRf
ZHBfaHBkX3BjYzEgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3Bp
bm11eEA3MDAwMDhkNC9jb21tb24vaGRtaV9pbnRfZHBfaHBkX3BjYzEgaXMgYmVoaW5kIHRoZSBJ
T01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3Bpbm11eEA3MDAwMDhk
NC9jb21tb24vaGRtaV9pbnRfZHBfaHBkX3BjYzENCihYRU4pIGhhbmRsZSAvcGlubXV4QDcwMDAw
OGQ0L2NvbW1vbi9oZG1pX2NlY19wY2MwDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3Bpbm11
eEA3MDAwMDhkNC9jb21tb24vaGRtaV9jZWNfcGNjMA0KKFhFTikgL3Bpbm11eEA3MDAwMDhkNC9j
b21tb24vaGRtaV9jZWNfcGNjMCBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVj
ayBpZiAvcGlubXV4QDcwMDAwOGQ0L2NvbW1vbi9oZG1pX2NlY19wY2MwIGlzIGJlaGluZCB0aGUg
SU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9waW5tdXhANzAwMDA4
ZDQvY29tbW9uL2hkbWlfY2VjX3BjYzANCihYRU4pIGhhbmRsZSAvcGlubXV4QDcwMDAwOGQ0L2Nv
bW1vbi9jYW0xX3B3ZG5fcHM3DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3Bpbm11eEA3MDAw
MDhkNC9jb21tb24vY2FtMV9wd2RuX3BzNw0KKFhFTikgL3Bpbm11eEA3MDAwMDhkNC9jb21tb24v
Y2FtMV9wd2RuX3BzNyBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAv
cGlubXV4QDcwMDAwOGQ0L2NvbW1vbi9jYW0xX3B3ZG5fcHM3IGlzIGJlaGluZCB0aGUgSU9NTVUg
YW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9waW5tdXhANzAwMDA4ZDQvY29t
bW9uL2NhbTFfcHdkbl9wczcNCihYRU4pIGhhbmRsZSAvcGlubXV4QDcwMDAwOGQ0L2NvbW1vbi9j
YW0yX3B3ZG5fcHQwDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3Bpbm11eEA3MDAwMDhkNC9j
b21tb24vY2FtMl9wd2RuX3B0MA0KKFhFTikgL3Bpbm11eEA3MDAwMDhkNC9jb21tb24vY2FtMl9w
d2RuX3B0MCBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvcGlubXV4
QDcwMDAwOGQ0L2NvbW1vbi9jYW0yX3B3ZG5fcHQwIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFk
ZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9waW5tdXhANzAwMDA4ZDQvY29tbW9uL2Nh
bTJfcHdkbl9wdDANCihYRU4pIGhhbmRsZSAvcGlubXV4QDcwMDAwOGQ0L2NvbW1vbi9zYXRhX2xl
ZF9hY3RpdmVfcGE1DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3Bpbm11eEA3MDAwMDhkNC9j
b21tb24vc2F0YV9sZWRfYWN0aXZlX3BhNQ0KKFhFTikgL3Bpbm11eEA3MDAwMDhkNC9jb21tb24v
c2F0YV9sZWRfYWN0aXZlX3BhNSBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVj
ayBpZiAvcGlubXV4QDcwMDAwOGQ0L2NvbW1vbi9zYXRhX2xlZF9hY3RpdmVfcGE1IGlzIGJlaGlu
ZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9waW5tdXhA
NzAwMDA4ZDQvY29tbW9uL3NhdGFfbGVkX2FjdGl2ZV9wYTUNCihYRU4pIGhhbmRsZSAvcGlubXV4
QDcwMDAwOGQ0L2NvbW1vbi9wYTYNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGlubXV4QDcw
MDAwOGQ0L2NvbW1vbi9wYTYNCihYRU4pIC9waW5tdXhANzAwMDA4ZDQvY29tbW9uL3BhNiBwYXNz
dGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvcGlubXV4QDcwMDAwOGQ0L2Nv
bW1vbi9wYTYgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVt
YmVyOiBkZXY9L3Bpbm11eEA3MDAwMDhkNC9jb21tb24vcGE2DQooWEVOKSBoYW5kbGUgL3Bpbm11
eEA3MDAwMDhkNC9jb21tb24vYWxzX3Byb3hfaW50X3B4Mw0KKFhFTikgZHRfaXJxX251bWJlcjog
ZGV2PS9waW5tdXhANzAwMDA4ZDQvY29tbW9uL2Fsc19wcm94X2ludF9weDMNCihYRU4pIC9waW5t
dXhANzAwMDA4ZDQvY29tbW9uL2Fsc19wcm94X2ludF9weDMgcGFzc3Rocm91Z2ggPSAxIG5hZGRy
ID0gMA0KKFhFTikgQ2hlY2sgaWYgL3Bpbm11eEA3MDAwMDhkNC9jb21tb24vYWxzX3Byb3hfaW50
X3B4MyBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6
IGRldj0vcGlubXV4QDcwMDAwOGQ0L2NvbW1vbi9hbHNfcHJveF9pbnRfcHgzDQooWEVOKSBoYW5k
bGUgL3Bpbm11eEA3MDAwMDhkNC9jb21tb24vdGVtcF9hbGVydF9weDQNCihYRU4pIGR0X2lycV9u
dW1iZXI6IGRldj0vcGlubXV4QDcwMDAwOGQ0L2NvbW1vbi90ZW1wX2FsZXJ0X3B4NA0KKFhFTikg
L3Bpbm11eEA3MDAwMDhkNC9jb21tb24vdGVtcF9hbGVydF9weDQgcGFzc3Rocm91Z2ggPSAxIG5h
ZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3Bpbm11eEA3MDAwMDhkNC9jb21tb24vdGVtcF9hbGVy
dF9weDQgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVy
OiBkZXY9L3Bpbm11eEA3MDAwMDhkNC9jb21tb24vdGVtcF9hbGVydF9weDQNCihYRU4pIGhhbmRs
ZSAvcGlubXV4QDcwMDAwOGQ0L2NvbW1vbi9idXR0b25fcG93ZXJfb25fcHg1DQooWEVOKSBkdF9p
cnFfbnVtYmVyOiBkZXY9L3Bpbm11eEA3MDAwMDhkNC9jb21tb24vYnV0dG9uX3Bvd2VyX29uX3B4
NQ0KKFhFTikgL3Bpbm11eEA3MDAwMDhkNC9jb21tb24vYnV0dG9uX3Bvd2VyX29uX3B4NSBwYXNz
dGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvcGlubXV4QDcwMDAwOGQ0L2Nv
bW1vbi9idXR0b25fcG93ZXJfb25fcHg1IGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0K
KFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9waW5tdXhANzAwMDA4ZDQvY29tbW9uL2J1dHRvbl9w
b3dlcl9vbl9weDUNCihYRU4pIGhhbmRsZSAvcGlubXV4QDcwMDAwOGQ0L2NvbW1vbi9idXR0b25f
dm9sX3VwX3B4Ng0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9waW5tdXhANzAwMDA4ZDQvY29t
bW9uL2J1dHRvbl92b2xfdXBfcHg2DQooWEVOKSAvcGlubXV4QDcwMDAwOGQ0L2NvbW1vbi9idXR0
b25fdm9sX3VwX3B4NiBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAv
cGlubXV4QDcwMDAwOGQ0L2NvbW1vbi9idXR0b25fdm9sX3VwX3B4NiBpcyBiZWhpbmQgdGhlIElP
TU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGlubXV4QDcwMDAwOGQ0
L2NvbW1vbi9idXR0b25fdm9sX3VwX3B4Ng0KKFhFTikgaGFuZGxlIC9waW5tdXhANzAwMDA4ZDQv
Y29tbW9uL2J1dHRvbl9ob21lX3B5MQ0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9waW5tdXhA
NzAwMDA4ZDQvY29tbW9uL2J1dHRvbl9ob21lX3B5MQ0KKFhFTikgL3Bpbm11eEA3MDAwMDhkNC9j
b21tb24vYnV0dG9uX2hvbWVfcHkxIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENo
ZWNrIGlmIC9waW5tdXhANzAwMDA4ZDQvY29tbW9uL2J1dHRvbl9ob21lX3B5MSBpcyBiZWhpbmQg
dGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGlubXV4QDcw
MDAwOGQ0L2NvbW1vbi9idXR0b25faG9tZV9weTENCihYRU4pIGhhbmRsZSAvcGlubXV4QDcwMDAw
OGQ0L2NvbW1vbi9sY2RfYmxfZW5fcHYxDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3Bpbm11
eEA3MDAwMDhkNC9jb21tb24vbGNkX2JsX2VuX3B2MQ0KKFhFTikgL3Bpbm11eEA3MDAwMDhkNC9j
b21tb24vbGNkX2JsX2VuX3B2MSBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVj
ayBpZiAvcGlubXV4QDcwMDAwOGQ0L2NvbW1vbi9sY2RfYmxfZW5fcHYxIGlzIGJlaGluZCB0aGUg
SU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9waW5tdXhANzAwMDA4
ZDQvY29tbW9uL2xjZF9ibF9lbl9wdjENCihYRU4pIGhhbmRsZSAvcGlubXV4QDcwMDAwOGQ0L2Nv
bW1vbi9wejINCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGlubXV4QDcwMDAwOGQ0L2NvbW1v
bi9wejINCihYRU4pIC9waW5tdXhANzAwMDA4ZDQvY29tbW9uL3B6MiBwYXNzdGhyb3VnaCA9IDEg
bmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvcGlubXV4QDcwMDAwOGQ0L2NvbW1vbi9wejIgaXMg
YmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3Bp
bm11eEA3MDAwMDhkNC9jb21tb24vcHoyDQooWEVOKSBoYW5kbGUgL3Bpbm11eEA3MDAwMDhkNC9j
b21tb24vcHozDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3Bpbm11eEA3MDAwMDhkNC9jb21t
b24vcHozDQooWEVOKSAvcGlubXV4QDcwMDAwOGQ0L2NvbW1vbi9wejMgcGFzc3Rocm91Z2ggPSAx
IG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3Bpbm11eEA3MDAwMDhkNC9jb21tb24vcHozIGlz
IGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9w
aW5tdXhANzAwMDA4ZDQvY29tbW9uL3B6Mw0KKFhFTikgaGFuZGxlIC9waW5tdXhANzAwMDA4ZDQv
Y29tbW9uL3dpZmlfZW5fcGgwDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3Bpbm11eEA3MDAw
MDhkNC9jb21tb24vd2lmaV9lbl9waDANCihYRU4pIC9waW5tdXhANzAwMDA4ZDQvY29tbW9uL3dp
ZmlfZW5fcGgwIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9waW5t
dXhANzAwMDA4ZDQvY29tbW9uL3dpZmlfZW5fcGgwIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFk
ZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9waW5tdXhANzAwMDA4ZDQvY29tbW9uL3dp
ZmlfZW5fcGgwDQooWEVOKSBoYW5kbGUgL3Bpbm11eEA3MDAwMDhkNC9jb21tb24vd2lmaV93YWtl
X2FwX3BoMg0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9waW5tdXhANzAwMDA4ZDQvY29tbW9u
L3dpZmlfd2FrZV9hcF9waDINCihYRU4pIC9waW5tdXhANzAwMDA4ZDQvY29tbW9uL3dpZmlfd2Fr
ZV9hcF9waDIgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3Bpbm11
eEA3MDAwMDhkNC9jb21tb24vd2lmaV93YWtlX2FwX3BoMiBpcyBiZWhpbmQgdGhlIElPTU1VIGFu
ZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGlubXV4QDcwMDAwOGQ0L2NvbW1v
bi93aWZpX3dha2VfYXBfcGgyDQooWEVOKSBoYW5kbGUgL3Bpbm11eEA3MDAwMDhkNC9jb21tb24v
YXBfd2FrZV9idF9waDMNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGlubXV4QDcwMDAwOGQ0
L2NvbW1vbi9hcF93YWtlX2J0X3BoMw0KKFhFTikgL3Bpbm11eEA3MDAwMDhkNC9jb21tb24vYXBf
d2FrZV9idF9waDMgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3Bp
bm11eEA3MDAwMDhkNC9jb21tb24vYXBfd2FrZV9idF9waDMgaXMgYmVoaW5kIHRoZSBJT01NVSBh
bmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3Bpbm11eEA3MDAwMDhkNC9jb21t
b24vYXBfd2FrZV9idF9waDMNCihYRU4pIGhhbmRsZSAvcGlubXV4QDcwMDAwOGQ0L2NvbW1vbi9i
dF9yc3RfcGg0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3Bpbm11eEA3MDAwMDhkNC9jb21t
b24vYnRfcnN0X3BoNA0KKFhFTikgL3Bpbm11eEA3MDAwMDhkNC9jb21tb24vYnRfcnN0X3BoNCBw
YXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvcGlubXV4QDcwMDAwOGQ0
L2NvbW1vbi9idF9yc3RfcGg0IGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikg
ZHRfaXJxX251bWJlcjogZGV2PS9waW5tdXhANzAwMDA4ZDQvY29tbW9uL2J0X3JzdF9waDQNCihY
RU4pIGhhbmRsZSAvcGlubXV4QDcwMDAwOGQ0L2NvbW1vbi9idF93YWtlX2FwX3BoNQ0KKFhFTikg
ZHRfaXJxX251bWJlcjogZGV2PS9waW5tdXhANzAwMDA4ZDQvY29tbW9uL2J0X3dha2VfYXBfcGg1
DQooWEVOKSAvcGlubXV4QDcwMDAwOGQ0L2NvbW1vbi9idF93YWtlX2FwX3BoNSBwYXNzdGhyb3Vn
aCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvcGlubXV4QDcwMDAwOGQ0L2NvbW1vbi9i
dF93YWtlX2FwX3BoNSBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2ly
cV9udW1iZXI6IGRldj0vcGlubXV4QDcwMDAwOGQ0L2NvbW1vbi9idF93YWtlX2FwX3BoNQ0KKFhF
TikgaGFuZGxlIC9waW5tdXhANzAwMDA4ZDQvY29tbW9uL3BoNg0KKFhFTikgZHRfaXJxX251bWJl
cjogZGV2PS9waW5tdXhANzAwMDA4ZDQvY29tbW9uL3BoNg0KKFhFTikgL3Bpbm11eEA3MDAwMDhk
NC9jb21tb24vcGg2IHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9w
aW5tdXhANzAwMDA4ZDQvY29tbW9uL3BoNiBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQN
CihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGlubXV4QDcwMDAwOGQ0L2NvbW1vbi9waDYNCihY
RU4pIGhhbmRsZSAvcGlubXV4QDcwMDAwOGQ0L2NvbW1vbi9hcF93YWtlX25mY19waDcNCihYRU4p
IGR0X2lycV9udW1iZXI6IGRldj0vcGlubXV4QDcwMDAwOGQ0L2NvbW1vbi9hcF93YWtlX25mY19w
aDcNCihYRU4pIC9waW5tdXhANzAwMDA4ZDQvY29tbW9uL2FwX3dha2VfbmZjX3BoNyBwYXNzdGhy
b3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvcGlubXV4QDcwMDAwOGQ0L2NvbW1v
bi9hcF93YWtlX25mY19waDcgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBk
dF9pcnFfbnVtYmVyOiBkZXY9L3Bpbm11eEA3MDAwMDhkNC9jb21tb24vYXBfd2FrZV9uZmNfcGg3
DQooWEVOKSBoYW5kbGUgL3Bpbm11eEA3MDAwMDhkNC9jb21tb24vbmZjX2VuX3BpMA0KKFhFTikg
ZHRfaXJxX251bWJlcjogZGV2PS9waW5tdXhANzAwMDA4ZDQvY29tbW9uL25mY19lbl9waTANCihY
RU4pIC9waW5tdXhANzAwMDA4ZDQvY29tbW9uL25mY19lbl9waTAgcGFzc3Rocm91Z2ggPSAxIG5h
ZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3Bpbm11eEA3MDAwMDhkNC9jb21tb24vbmZjX2VuX3Bp
MCBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRl
dj0vcGlubXV4QDcwMDAwOGQ0L2NvbW1vbi9uZmNfZW5fcGkwDQooWEVOKSBoYW5kbGUgL3Bpbm11
eEA3MDAwMDhkNC9jb21tb24vbmZjX2ludF9waTENCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0v
cGlubXV4QDcwMDAwOGQ0L2NvbW1vbi9uZmNfaW50X3BpMQ0KKFhFTikgL3Bpbm11eEA3MDAwMDhk
NC9jb21tb24vbmZjX2ludF9waTEgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hl
Y2sgaWYgL3Bpbm11eEA3MDAwMDhkNC9jb21tb24vbmZjX2ludF9waTEgaXMgYmVoaW5kIHRoZSBJ
T01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3Bpbm11eEA3MDAwMDhk
NC9jb21tb24vbmZjX2ludF9waTENCihYRU4pIGhhbmRsZSAvcGlubXV4QDcwMDAwOGQ0L2NvbW1v
bi9ncHNfZW5fcGkyDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3Bpbm11eEA3MDAwMDhkNC9j
b21tb24vZ3BzX2VuX3BpMg0KKFhFTikgL3Bpbm11eEA3MDAwMDhkNC9jb21tb24vZ3BzX2VuX3Bp
MiBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvcGlubXV4QDcwMDAw
OGQ0L2NvbW1vbi9ncHNfZW5fcGkyIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhF
TikgZHRfaXJxX251bWJlcjogZGV2PS9waW5tdXhANzAwMDA4ZDQvY29tbW9uL2dwc19lbl9waTIN
CihYRU4pIGhhbmRsZSAvcGlubXV4QDcwMDAwOGQ0L2NvbW1vbi9wY2M3DQooWEVOKSBkdF9pcnFf
bnVtYmVyOiBkZXY9L3Bpbm11eEA3MDAwMDhkNC9jb21tb24vcGNjNw0KKFhFTikgL3Bpbm11eEA3
MDAwMDhkNC9jb21tb24vcGNjNyBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVj
ayBpZiAvcGlubXV4QDcwMDAwOGQ0L2NvbW1vbi9wY2M3IGlzIGJlaGluZCB0aGUgSU9NTVUgYW5k
IGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9waW5tdXhANzAwMDA4ZDQvY29tbW9u
L3BjYzcNCihYRU4pIGhhbmRsZSAvcGlubXV4QDcwMDAwOGQ0L2NvbW1vbi91c2JfdmJ1c19lbjBf
cGNjNA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9waW5tdXhANzAwMDA4ZDQvY29tbW9uL3Vz
Yl92YnVzX2VuMF9wY2M0DQooWEVOKSAvcGlubXV4QDcwMDAwOGQ0L2NvbW1vbi91c2JfdmJ1c19l
bjBfcGNjNCBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvcGlubXV4
QDcwMDAwOGQ0L2NvbW1vbi91c2JfdmJ1c19lbjBfcGNjNCBpcyBiZWhpbmQgdGhlIElPTU1VIGFu
ZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGlubXV4QDcwMDAwOGQ0L2NvbW1v
bi91c2JfdmJ1c19lbjBfcGNjNA0KKFhFTikgaGFuZGxlIC9waW5tdXhANzAwMDA4ZDQvdW51c2Vk
X2xvd3Bvd2VyDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3Bpbm11eEA3MDAwMDhkNC91bnVz
ZWRfbG93cG93ZXINCihYRU4pIC9waW5tdXhANzAwMDA4ZDQvdW51c2VkX2xvd3Bvd2VyIHBhc3N0
aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9waW5tdXhANzAwMDA4ZDQvdW51
c2VkX2xvd3Bvd2VyIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJx
X251bWJlcjogZGV2PS9waW5tdXhANzAwMDA4ZDQvdW51c2VkX2xvd3Bvd2VyDQooWEVOKSBoYW5k
bGUgL3Bpbm11eEA3MDAwMDhkNC91bnVzZWRfbG93cG93ZXIvYXVkX21jbGtfcGJiMA0KKFhFTikg
ZHRfaXJxX251bWJlcjogZGV2PS9waW5tdXhANzAwMDA4ZDQvdW51c2VkX2xvd3Bvd2VyL2F1ZF9t
Y2xrX3BiYjANCihYRU4pIC9waW5tdXhANzAwMDA4ZDQvdW51c2VkX2xvd3Bvd2VyL2F1ZF9tY2xr
X3BiYjAgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3Bpbm11eEA3
MDAwMDhkNC91bnVzZWRfbG93cG93ZXIvYXVkX21jbGtfcGJiMCBpcyBiZWhpbmQgdGhlIElPTU1V
IGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGlubXV4QDcwMDAwOGQ0L3Vu
dXNlZF9sb3dwb3dlci9hdWRfbWNsa19wYmIwDQooWEVOKSBoYW5kbGUgL3Bpbm11eEA3MDAwMDhk
NC91bnVzZWRfbG93cG93ZXIvZHZmc19jbGtfcGJiMg0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2
PS9waW5tdXhANzAwMDA4ZDQvdW51c2VkX2xvd3Bvd2VyL2R2ZnNfY2xrX3BiYjINCihYRU4pIC9w
aW5tdXhANzAwMDA4ZDQvdW51c2VkX2xvd3Bvd2VyL2R2ZnNfY2xrX3BiYjIgcGFzc3Rocm91Z2gg
PSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3Bpbm11eEA3MDAwMDhkNC91bnVzZWRfbG93
cG93ZXIvZHZmc19jbGtfcGJiMiBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4p
IGR0X2lycV9udW1iZXI6IGRldj0vcGlubXV4QDcwMDAwOGQ0L3VudXNlZF9sb3dwb3dlci9kdmZz
X2Nsa19wYmIyDQooWEVOKSBoYW5kbGUgL3Bpbm11eEA3MDAwMDhkNC91bnVzZWRfbG93cG93ZXIv
Z3Bpb194MV9hdWRfcGJiMw0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9waW5tdXhANzAwMDA4
ZDQvdW51c2VkX2xvd3Bvd2VyL2dwaW9feDFfYXVkX3BiYjMNCihYRU4pIC9waW5tdXhANzAwMDA4
ZDQvdW51c2VkX2xvd3Bvd2VyL2dwaW9feDFfYXVkX3BiYjMgcGFzc3Rocm91Z2ggPSAxIG5hZGRy
ID0gMA0KKFhFTikgQ2hlY2sgaWYgL3Bpbm11eEA3MDAwMDhkNC91bnVzZWRfbG93cG93ZXIvZ3Bp
b194MV9hdWRfcGJiMyBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2ly
cV9udW1iZXI6IGRldj0vcGlubXV4QDcwMDAwOGQ0L3VudXNlZF9sb3dwb3dlci9ncGlvX3gxX2F1
ZF9wYmIzDQooWEVOKSBoYW5kbGUgL3Bpbm11eEA3MDAwMDhkNC91bnVzZWRfbG93cG93ZXIvZ3Bp
b194M19hdWRfcGJiNA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9waW5tdXhANzAwMDA4ZDQv
dW51c2VkX2xvd3Bvd2VyL2dwaW9feDNfYXVkX3BiYjQNCihYRU4pIC9waW5tdXhANzAwMDA4ZDQv
dW51c2VkX2xvd3Bvd2VyL2dwaW9feDNfYXVkX3BiYjQgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0g
MA0KKFhFTikgQ2hlY2sgaWYgL3Bpbm11eEA3MDAwMDhkNC91bnVzZWRfbG93cG93ZXIvZ3Bpb194
M19hdWRfcGJiNCBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9u
dW1iZXI6IGRldj0vcGlubXV4QDcwMDAwOGQ0L3VudXNlZF9sb3dwb3dlci9ncGlvX3gzX2F1ZF9w
YmI0DQooWEVOKSBoYW5kbGUgL3Bpbm11eEA3MDAwMDhkNC91bnVzZWRfbG93cG93ZXIvZGFwMV9k
aW5fcGIxDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3Bpbm11eEA3MDAwMDhkNC91bnVzZWRf
bG93cG93ZXIvZGFwMV9kaW5fcGIxDQooWEVOKSAvcGlubXV4QDcwMDAwOGQ0L3VudXNlZF9sb3dw
b3dlci9kYXAxX2Rpbl9wYjEgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sg
aWYgL3Bpbm11eEA3MDAwMDhkNC91bnVzZWRfbG93cG93ZXIvZGFwMV9kaW5fcGIxIGlzIGJlaGlu
ZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9waW5tdXhA
NzAwMDA4ZDQvdW51c2VkX2xvd3Bvd2VyL2RhcDFfZGluX3BiMQ0KKFhFTikgaGFuZGxlIC9waW5t
dXhANzAwMDA4ZDQvdW51c2VkX2xvd3Bvd2VyL2RhcDFfZG91dF9wYjINCihYRU4pIGR0X2lycV9u
dW1iZXI6IGRldj0vcGlubXV4QDcwMDAwOGQ0L3VudXNlZF9sb3dwb3dlci9kYXAxX2RvdXRfcGIy
DQooWEVOKSAvcGlubXV4QDcwMDAwOGQ0L3VudXNlZF9sb3dwb3dlci9kYXAxX2RvdXRfcGIyIHBh
c3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9waW5tdXhANzAwMDA4ZDQv
dW51c2VkX2xvd3Bvd2VyL2RhcDFfZG91dF9wYjIgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRk
IGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3Bpbm11eEA3MDAwMDhkNC91bnVzZWRfbG93
cG93ZXIvZGFwMV9kb3V0X3BiMg0KKFhFTikgaGFuZGxlIC9waW5tdXhANzAwMDA4ZDQvdW51c2Vk
X2xvd3Bvd2VyL2RhcDFfZnNfcGIwDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3Bpbm11eEA3
MDAwMDhkNC91bnVzZWRfbG93cG93ZXIvZGFwMV9mc19wYjANCihYRU4pIC9waW5tdXhANzAwMDA4
ZDQvdW51c2VkX2xvd3Bvd2VyL2RhcDFfZnNfcGIwIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDAN
CihYRU4pIENoZWNrIGlmIC9waW5tdXhANzAwMDA4ZDQvdW51c2VkX2xvd3Bvd2VyL2RhcDFfZnNf
cGIwIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjog
ZGV2PS9waW5tdXhANzAwMDA4ZDQvdW51c2VkX2xvd3Bvd2VyL2RhcDFfZnNfcGIwDQooWEVOKSBo
YW5kbGUgL3Bpbm11eEA3MDAwMDhkNC91bnVzZWRfbG93cG93ZXIvZGFwMV9zY2xrX3BiMw0KKFhF
TikgZHRfaXJxX251bWJlcjogZGV2PS9waW5tdXhANzAwMDA4ZDQvdW51c2VkX2xvd3Bvd2VyL2Rh
cDFfc2Nsa19wYjMNCihYRU4pIC9waW5tdXhANzAwMDA4ZDQvdW51c2VkX2xvd3Bvd2VyL2RhcDFf
c2Nsa19wYjMgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3Bpbm11
eEA3MDAwMDhkNC91bnVzZWRfbG93cG93ZXIvZGFwMV9zY2xrX3BiMyBpcyBiZWhpbmQgdGhlIElP
TU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGlubXV4QDcwMDAwOGQ0
L3VudXNlZF9sb3dwb3dlci9kYXAxX3NjbGtfcGIzDQooWEVOKSBoYW5kbGUgL3Bpbm11eEA3MDAw
MDhkNC91bnVzZWRfbG93cG93ZXIvc3BpMl9tb3NpX3BiNA0KKFhFTikgZHRfaXJxX251bWJlcjog
ZGV2PS9waW5tdXhANzAwMDA4ZDQvdW51c2VkX2xvd3Bvd2VyL3NwaTJfbW9zaV9wYjQNCihYRU4p
IC9waW5tdXhANzAwMDA4ZDQvdW51c2VkX2xvd3Bvd2VyL3NwaTJfbW9zaV9wYjQgcGFzc3Rocm91
Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3Bpbm11eEA3MDAwMDhkNC91bnVzZWRf
bG93cG93ZXIvc3BpMl9tb3NpX3BiNCBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihY
RU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGlubXV4QDcwMDAwOGQ0L3VudXNlZF9sb3dwb3dlci9z
cGkyX21vc2lfcGI0DQooWEVOKSBoYW5kbGUgL3Bpbm11eEA3MDAwMDhkNC91bnVzZWRfbG93cG93
ZXIvc3BpMl9taXNvX3BiNQ0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9waW5tdXhANzAwMDA4
ZDQvdW51c2VkX2xvd3Bvd2VyL3NwaTJfbWlzb19wYjUNCihYRU4pIC9waW5tdXhANzAwMDA4ZDQv
dW51c2VkX2xvd3Bvd2VyL3NwaTJfbWlzb19wYjUgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0K
KFhFTikgQ2hlY2sgaWYgL3Bpbm11eEA3MDAwMDhkNC91bnVzZWRfbG93cG93ZXIvc3BpMl9taXNv
X3BiNSBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6
IGRldj0vcGlubXV4QDcwMDAwOGQ0L3VudXNlZF9sb3dwb3dlci9zcGkyX21pc29fcGI1DQooWEVO
KSBoYW5kbGUgL3Bpbm11eEA3MDAwMDhkNC91bnVzZWRfbG93cG93ZXIvc3BpMl9zY2tfcGI2DQoo
WEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3Bpbm11eEA3MDAwMDhkNC91bnVzZWRfbG93cG93ZXIv
c3BpMl9zY2tfcGI2DQooWEVOKSAvcGlubXV4QDcwMDAwOGQ0L3VudXNlZF9sb3dwb3dlci9zcGky
X3Nja19wYjYgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3Bpbm11
eEA3MDAwMDhkNC91bnVzZWRfbG93cG93ZXIvc3BpMl9zY2tfcGI2IGlzIGJlaGluZCB0aGUgSU9N
TVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9waW5tdXhANzAwMDA4ZDQv
dW51c2VkX2xvd3Bvd2VyL3NwaTJfc2NrX3BiNg0KKFhFTikgaGFuZGxlIC9waW5tdXhANzAwMDA4
ZDQvdW51c2VkX2xvd3Bvd2VyL3NwaTJfY3MwX3BiNw0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2
PS9waW5tdXhANzAwMDA4ZDQvdW51c2VkX2xvd3Bvd2VyL3NwaTJfY3MwX3BiNw0KKFhFTikgL3Bp
bm11eEA3MDAwMDhkNC91bnVzZWRfbG93cG93ZXIvc3BpMl9jczBfcGI3IHBhc3N0aHJvdWdoID0g
MSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9waW5tdXhANzAwMDA4ZDQvdW51c2VkX2xvd3Bv
d2VyL3NwaTJfY3MwX3BiNyBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0
X2lycV9udW1iZXI6IGRldj0vcGlubXV4QDcwMDAwOGQ0L3VudXNlZF9sb3dwb3dlci9zcGkyX2Nz
MF9wYjcNCihYRU4pIGhhbmRsZSAvcGlubXV4QDcwMDAwOGQ0L3VudXNlZF9sb3dwb3dlci9zcGky
X2NzMV9wZGQwDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3Bpbm11eEA3MDAwMDhkNC91bnVz
ZWRfbG93cG93ZXIvc3BpMl9jczFfcGRkMA0KKFhFTikgL3Bpbm11eEA3MDAwMDhkNC91bnVzZWRf
bG93cG93ZXIvc3BpMl9jczFfcGRkMCBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBD
aGVjayBpZiAvcGlubXV4QDcwMDAwOGQ0L3VudXNlZF9sb3dwb3dlci9zcGkyX2NzMV9wZGQwIGlz
IGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9w
aW5tdXhANzAwMDA4ZDQvdW51c2VkX2xvd3Bvd2VyL3NwaTJfY3MxX3BkZDANCihYRU4pIGhhbmRs
ZSAvcGlubXV4QDcwMDAwOGQ0L3VudXNlZF9sb3dwb3dlci9kbWljM19jbGtfcGU0DQooWEVOKSBk
dF9pcnFfbnVtYmVyOiBkZXY9L3Bpbm11eEA3MDAwMDhkNC91bnVzZWRfbG93cG93ZXIvZG1pYzNf
Y2xrX3BlNA0KKFhFTikgL3Bpbm11eEA3MDAwMDhkNC91bnVzZWRfbG93cG93ZXIvZG1pYzNfY2xr
X3BlNCBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvcGlubXV4QDcw
MDAwOGQ0L3VudXNlZF9sb3dwb3dlci9kbWljM19jbGtfcGU0IGlzIGJlaGluZCB0aGUgSU9NTVUg
YW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9waW5tdXhANzAwMDA4ZDQvdW51
c2VkX2xvd3Bvd2VyL2RtaWMzX2Nsa19wZTQNCihYRU4pIGhhbmRsZSAvcGlubXV4QDcwMDAwOGQ0
L3VudXNlZF9sb3dwb3dlci9kbWljM19kYXRfcGU1DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9
L3Bpbm11eEA3MDAwMDhkNC91bnVzZWRfbG93cG93ZXIvZG1pYzNfZGF0X3BlNQ0KKFhFTikgL3Bp
bm11eEA3MDAwMDhkNC91bnVzZWRfbG93cG93ZXIvZG1pYzNfZGF0X3BlNSBwYXNzdGhyb3VnaCA9
IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvcGlubXV4QDcwMDAwOGQ0L3VudXNlZF9sb3dw
b3dlci9kbWljM19kYXRfcGU1IGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikg
ZHRfaXJxX251bWJlcjogZGV2PS9waW5tdXhANzAwMDA4ZDQvdW51c2VkX2xvd3Bvd2VyL2RtaWMz
X2RhdF9wZTUNCihYRU4pIGhhbmRsZSAvcGlubXV4QDcwMDAwOGQ0L3VudXNlZF9sb3dwb3dlci9w
ZTYNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGlubXV4QDcwMDAwOGQ0L3VudXNlZF9sb3dw
b3dlci9wZTYNCihYRU4pIC9waW5tdXhANzAwMDA4ZDQvdW51c2VkX2xvd3Bvd2VyL3BlNiBwYXNz
dGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvcGlubXV4QDcwMDAwOGQ0L3Vu
dXNlZF9sb3dwb3dlci9wZTYgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBk
dF9pcnFfbnVtYmVyOiBkZXY9L3Bpbm11eEA3MDAwMDhkNC91bnVzZWRfbG93cG93ZXIvcGU2DQoo
WEVOKSBoYW5kbGUgL3Bpbm11eEA3MDAwMDhkNC91bnVzZWRfbG93cG93ZXIvY2FtX3JzdF9wczQN
CihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGlubXV4QDcwMDAwOGQ0L3VudXNlZF9sb3dwb3dl
ci9jYW1fcnN0X3BzNA0KKFhFTikgL3Bpbm11eEA3MDAwMDhkNC91bnVzZWRfbG93cG93ZXIvY2Ft
X3JzdF9wczQgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3Bpbm11
eEA3MDAwMDhkNC91bnVzZWRfbG93cG93ZXIvY2FtX3JzdF9wczQgaXMgYmVoaW5kIHRoZSBJT01N
VSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3Bpbm11eEA3MDAwMDhkNC91
bnVzZWRfbG93cG93ZXIvY2FtX3JzdF9wczQNCihYRU4pIGhhbmRsZSAvcGlubXV4QDcwMDAwOGQ0
L3VudXNlZF9sb3dwb3dlci9jYW1fYWZfZW5fcHM1DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9
L3Bpbm11eEA3MDAwMDhkNC91bnVzZWRfbG93cG93ZXIvY2FtX2FmX2VuX3BzNQ0KKFhFTikgL3Bp
bm11eEA3MDAwMDhkNC91bnVzZWRfbG93cG93ZXIvY2FtX2FmX2VuX3BzNSBwYXNzdGhyb3VnaCA9
IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvcGlubXV4QDcwMDAwOGQ0L3VudXNlZF9sb3dw
b3dlci9jYW1fYWZfZW5fcHM1IGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikg
ZHRfaXJxX251bWJlcjogZGV2PS9waW5tdXhANzAwMDA4ZDQvdW51c2VkX2xvd3Bvd2VyL2NhbV9h
Zl9lbl9wczUNCihYRU4pIGhhbmRsZSAvcGlubXV4QDcwMDAwOGQ0L3VudXNlZF9sb3dwb3dlci9j
YW1fZmxhc2hfZW5fcHM2DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3Bpbm11eEA3MDAwMDhk
NC91bnVzZWRfbG93cG93ZXIvY2FtX2ZsYXNoX2VuX3BzNg0KKFhFTikgL3Bpbm11eEA3MDAwMDhk
NC91bnVzZWRfbG93cG93ZXIvY2FtX2ZsYXNoX2VuX3BzNiBwYXNzdGhyb3VnaCA9IDEgbmFkZHIg
PSAwDQooWEVOKSBDaGVjayBpZiAvcGlubXV4QDcwMDAwOGQ0L3VudXNlZF9sb3dwb3dlci9jYW1f
Zmxhc2hfZW5fcHM2IGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJx
X251bWJlcjogZGV2PS9waW5tdXhANzAwMDA4ZDQvdW51c2VkX2xvd3Bvd2VyL2NhbV9mbGFzaF9l
bl9wczYNCihYRU4pIGhhbmRsZSAvcGlubXV4QDcwMDAwOGQ0L3VudXNlZF9sb3dwb3dlci9jYW0x
X3N0cm9iZV9wdDENCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGlubXV4QDcwMDAwOGQ0L3Vu
dXNlZF9sb3dwb3dlci9jYW0xX3N0cm9iZV9wdDENCihYRU4pIC9waW5tdXhANzAwMDA4ZDQvdW51
c2VkX2xvd3Bvd2VyL2NhbTFfc3Ryb2JlX3B0MSBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQoo
WEVOKSBDaGVjayBpZiAvcGlubXV4QDcwMDAwOGQ0L3VudXNlZF9sb3dwb3dlci9jYW0xX3N0cm9i
ZV9wdDEgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVy
OiBkZXY9L3Bpbm11eEA3MDAwMDhkNC91bnVzZWRfbG93cG93ZXIvY2FtMV9zdHJvYmVfcHQxDQoo
WEVOKSBoYW5kbGUgL3Bpbm11eEA3MDAwMDhkNC91bnVzZWRfbG93cG93ZXIvbW90aW9uX2ludF9w
eDINCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGlubXV4QDcwMDAwOGQ0L3VudXNlZF9sb3dw
b3dlci9tb3Rpb25faW50X3B4Mg0KKFhFTikgL3Bpbm11eEA3MDAwMDhkNC91bnVzZWRfbG93cG93
ZXIvbW90aW9uX2ludF9weDIgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sg
aWYgL3Bpbm11eEA3MDAwMDhkNC91bnVzZWRfbG93cG93ZXIvbW90aW9uX2ludF9weDIgaXMgYmVo
aW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3Bpbm11
eEA3MDAwMDhkNC91bnVzZWRfbG93cG93ZXIvbW90aW9uX2ludF9weDINCihYRU4pIGhhbmRsZSAv
cGlubXV4QDcwMDAwOGQ0L3VudXNlZF9sb3dwb3dlci90b3VjaF9yc3RfcHY2DQooWEVOKSBkdF9p
cnFfbnVtYmVyOiBkZXY9L3Bpbm11eEA3MDAwMDhkNC91bnVzZWRfbG93cG93ZXIvdG91Y2hfcnN0
X3B2Ng0KKFhFTikgL3Bpbm11eEA3MDAwMDhkNC91bnVzZWRfbG93cG93ZXIvdG91Y2hfcnN0X3B2
NiBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvcGlubXV4QDcwMDAw
OGQ0L3VudXNlZF9sb3dwb3dlci90b3VjaF9yc3RfcHY2IGlzIGJlaGluZCB0aGUgSU9NTVUgYW5k
IGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9waW5tdXhANzAwMDA4ZDQvdW51c2Vk
X2xvd3Bvd2VyL3RvdWNoX3JzdF9wdjYNCihYRU4pIGhhbmRsZSAvcGlubXV4QDcwMDAwOGQ0L3Vu
dXNlZF9sb3dwb3dlci90b3VjaF9jbGtfcHY3DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3Bp
bm11eEA3MDAwMDhkNC91bnVzZWRfbG93cG93ZXIvdG91Y2hfY2xrX3B2Nw0KKFhFTikgL3Bpbm11
eEA3MDAwMDhkNC91bnVzZWRfbG93cG93ZXIvdG91Y2hfY2xrX3B2NyBwYXNzdGhyb3VnaCA9IDEg
bmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvcGlubXV4QDcwMDAwOGQ0L3VudXNlZF9sb3dwb3dl
ci90b3VjaF9jbGtfcHY3IGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRf
aXJxX251bWJlcjogZGV2PS9waW5tdXhANzAwMDA4ZDQvdW51c2VkX2xvd3Bvd2VyL3RvdWNoX2Ns
a19wdjcNCihYRU4pIGhhbmRsZSAvcGlubXV4QDcwMDAwOGQ0L3VudXNlZF9sb3dwb3dlci90b3Vj
aF9pbnRfcHgxDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3Bpbm11eEA3MDAwMDhkNC91bnVz
ZWRfbG93cG93ZXIvdG91Y2hfaW50X3B4MQ0KKFhFTikgL3Bpbm11eEA3MDAwMDhkNC91bnVzZWRf
bG93cG93ZXIvdG91Y2hfaW50X3B4MSBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBD
aGVjayBpZiAvcGlubXV4QDcwMDAwOGQ0L3VudXNlZF9sb3dwb3dlci90b3VjaF9pbnRfcHgxIGlz
IGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9w
aW5tdXhANzAwMDA4ZDQvdW51c2VkX2xvd3Bvd2VyL3RvdWNoX2ludF9weDENCihYRU4pIGhhbmRs
ZSAvcGlubXV4QDcwMDAwOGQ0L3VudXNlZF9sb3dwb3dlci9tb2RlbV93YWtlX2FwX3B4MA0KKFhF
TikgZHRfaXJxX251bWJlcjogZGV2PS9waW5tdXhANzAwMDA4ZDQvdW51c2VkX2xvd3Bvd2VyL21v
ZGVtX3dha2VfYXBfcHgwDQooWEVOKSAvcGlubXV4QDcwMDAwOGQ0L3VudXNlZF9sb3dwb3dlci9t
b2RlbV93YWtlX2FwX3B4MCBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBp
ZiAvcGlubXV4QDcwMDAwOGQ0L3VudXNlZF9sb3dwb3dlci9tb2RlbV93YWtlX2FwX3B4MCBpcyBi
ZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGlu
bXV4QDcwMDAwOGQ0L3VudXNlZF9sb3dwb3dlci9tb2RlbV93YWtlX2FwX3B4MA0KKFhFTikgaGFu
ZGxlIC9waW5tdXhANzAwMDA4ZDQvdW51c2VkX2xvd3Bvd2VyL2J1dHRvbl92b2xfZG93bl9weDcN
CihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGlubXV4QDcwMDAwOGQ0L3VudXNlZF9sb3dwb3dl
ci9idXR0b25fdm9sX2Rvd25fcHg3DQooWEVOKSAvcGlubXV4QDcwMDAwOGQ0L3VudXNlZF9sb3dw
b3dlci9idXR0b25fdm9sX2Rvd25fcHg3IHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4p
IENoZWNrIGlmIC9waW5tdXhANzAwMDA4ZDQvdW51c2VkX2xvd3Bvd2VyL2J1dHRvbl92b2xfZG93
bl9weDcgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVy
OiBkZXY9L3Bpbm11eEA3MDAwMDhkNC91bnVzZWRfbG93cG93ZXIvYnV0dG9uX3ZvbF9kb3duX3B4
Nw0KKFhFTikgaGFuZGxlIC9waW5tdXhANzAwMDA4ZDQvdW51c2VkX2xvd3Bvd2VyL2J1dHRvbl9z
bGlkZV9zd19weTANCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGlubXV4QDcwMDAwOGQ0L3Vu
dXNlZF9sb3dwb3dlci9idXR0b25fc2xpZGVfc3dfcHkwDQooWEVOKSAvcGlubXV4QDcwMDAwOGQ0
L3VudXNlZF9sb3dwb3dlci9idXR0b25fc2xpZGVfc3dfcHkwIHBhc3N0aHJvdWdoID0gMSBuYWRk
ciA9IDANCihYRU4pIENoZWNrIGlmIC9waW5tdXhANzAwMDA4ZDQvdW51c2VkX2xvd3Bvd2VyL2J1
dHRvbl9zbGlkZV9zd19weTAgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBk
dF9pcnFfbnVtYmVyOiBkZXY9L3Bpbm11eEA3MDAwMDhkNC91bnVzZWRfbG93cG93ZXIvYnV0dG9u
X3NsaWRlX3N3X3B5MA0KKFhFTikgaGFuZGxlIC9waW5tdXhANzAwMDA4ZDQvdW51c2VkX2xvd3Bv
d2VyL2xjZF90ZV9weTINCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGlubXV4QDcwMDAwOGQ0
L3VudXNlZF9sb3dwb3dlci9sY2RfdGVfcHkyDQooWEVOKSAvcGlubXV4QDcwMDAwOGQ0L3VudXNl
ZF9sb3dwb3dlci9sY2RfdGVfcHkyIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENo
ZWNrIGlmIC9waW5tdXhANzAwMDA4ZDQvdW51c2VkX2xvd3Bvd2VyL2xjZF90ZV9weTIgaXMgYmVo
aW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3Bpbm11
eEA3MDAwMDhkNC91bnVzZWRfbG93cG93ZXIvbGNkX3RlX3B5Mg0KKFhFTikgaGFuZGxlIC9waW5t
dXhANzAwMDA4ZDQvdW51c2VkX2xvd3Bvd2VyL2xjZF9ibF9wd21fcHYwDQooWEVOKSBkdF9pcnFf
bnVtYmVyOiBkZXY9L3Bpbm11eEA3MDAwMDhkNC91bnVzZWRfbG93cG93ZXIvbGNkX2JsX3B3bV9w
djANCihYRU4pIC9waW5tdXhANzAwMDA4ZDQvdW51c2VkX2xvd3Bvd2VyL2xjZF9ibF9wd21fcHYw
IHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9waW5tdXhANzAwMDA4
ZDQvdW51c2VkX2xvd3Bvd2VyL2xjZF9ibF9wd21fcHYwIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5k
IGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9waW5tdXhANzAwMDA4ZDQvdW51c2Vk
X2xvd3Bvd2VyL2xjZF9ibF9wd21fcHYwDQooWEVOKSBoYW5kbGUgL3Bpbm11eEA3MDAwMDhkNC91
bnVzZWRfbG93cG93ZXIvbGNkX3JzdF9wdjINCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGlu
bXV4QDcwMDAwOGQ0L3VudXNlZF9sb3dwb3dlci9sY2RfcnN0X3B2Mg0KKFhFTikgL3Bpbm11eEA3
MDAwMDhkNC91bnVzZWRfbG93cG93ZXIvbGNkX3JzdF9wdjIgcGFzc3Rocm91Z2ggPSAxIG5hZGRy
ID0gMA0KKFhFTikgQ2hlY2sgaWYgL3Bpbm11eEA3MDAwMDhkNC91bnVzZWRfbG93cG93ZXIvbGNk
X3JzdF9wdjIgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVt
YmVyOiBkZXY9L3Bpbm11eEA3MDAwMDhkNC91bnVzZWRfbG93cG93ZXIvbGNkX3JzdF9wdjINCihY
RU4pIGhhbmRsZSAvcGlubXV4QDcwMDAwOGQ0L3VudXNlZF9sb3dwb3dlci9sY2RfZ3BpbzFfcHYz
DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3Bpbm11eEA3MDAwMDhkNC91bnVzZWRfbG93cG93
ZXIvbGNkX2dwaW8xX3B2Mw0KKFhFTikgL3Bpbm11eEA3MDAwMDhkNC91bnVzZWRfbG93cG93ZXIv
bGNkX2dwaW8xX3B2MyBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAv
cGlubXV4QDcwMDAwOGQ0L3VudXNlZF9sb3dwb3dlci9sY2RfZ3BpbzFfcHYzIGlzIGJlaGluZCB0
aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9waW5tdXhANzAw
MDA4ZDQvdW51c2VkX2xvd3Bvd2VyL2xjZF9ncGlvMV9wdjMNCihYRU4pIGhhbmRsZSAvcGlubXV4
QDcwMDAwOGQ0L3VudXNlZF9sb3dwb3dlci9hcF9yZWFkeV9wdjUNCihYRU4pIGR0X2lycV9udW1i
ZXI6IGRldj0vcGlubXV4QDcwMDAwOGQ0L3VudXNlZF9sb3dwb3dlci9hcF9yZWFkeV9wdjUNCihY
RU4pIC9waW5tdXhANzAwMDA4ZDQvdW51c2VkX2xvd3Bvd2VyL2FwX3JlYWR5X3B2NSBwYXNzdGhy
b3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvcGlubXV4QDcwMDAwOGQ0L3VudXNl
ZF9sb3dwb3dlci9hcF9yZWFkeV9wdjUgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQoo
WEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3Bpbm11eEA3MDAwMDhkNC91bnVzZWRfbG93cG93ZXIv
YXBfcmVhZHlfcHY1DQooWEVOKSBoYW5kbGUgL3Bpbm11eEA3MDAwMDhkNC91bnVzZWRfbG93cG93
ZXIvcHowDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3Bpbm11eEA3MDAwMDhkNC91bnVzZWRf
bG93cG93ZXIvcHowDQooWEVOKSAvcGlubXV4QDcwMDAwOGQ0L3VudXNlZF9sb3dwb3dlci9wejAg
cGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3Bpbm11eEA3MDAwMDhk
NC91bnVzZWRfbG93cG93ZXIvcHowIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhF
TikgZHRfaXJxX251bWJlcjogZGV2PS9waW5tdXhANzAwMDA4ZDQvdW51c2VkX2xvd3Bvd2VyL3B6
MA0KKFhFTikgaGFuZGxlIC9waW5tdXhANzAwMDA4ZDQvdW51c2VkX2xvd3Bvd2VyL3B6NA0KKFhF
TikgZHRfaXJxX251bWJlcjogZGV2PS9waW5tdXhANzAwMDA4ZDQvdW51c2VkX2xvd3Bvd2VyL3B6
NA0KKFhFTikgL3Bpbm11eEA3MDAwMDhkNC91bnVzZWRfbG93cG93ZXIvcHo0IHBhc3N0aHJvdWdo
ID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9waW5tdXhANzAwMDA4ZDQvdW51c2VkX2xv
d3Bvd2VyL3B6NCBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9u
dW1iZXI6IGRldj0vcGlubXV4QDcwMDAwOGQ0L3VudXNlZF9sb3dwb3dlci9wejQNCihYRU4pIGhh
bmRsZSAvcGlubXV4QDcwMDAwOGQ0L3VudXNlZF9sb3dwb3dlci9jbGtfcmVxDQooWEVOKSBkdF9p
cnFfbnVtYmVyOiBkZXY9L3Bpbm11eEA3MDAwMDhkNC91bnVzZWRfbG93cG93ZXIvY2xrX3JlcQ0K
KFhFTikgL3Bpbm11eEA3MDAwMDhkNC91bnVzZWRfbG93cG93ZXIvY2xrX3JlcSBwYXNzdGhyb3Vn
aCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvcGlubXV4QDcwMDAwOGQ0L3VudXNlZF9s
b3dwb3dlci9jbGtfcmVxIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRf
aXJxX251bWJlcjogZGV2PS9waW5tdXhANzAwMDA4ZDQvdW51c2VkX2xvd3Bvd2VyL2Nsa19yZXEN
CihYRU4pIGhhbmRsZSAvcGlubXV4QDcwMDAwOGQ0L3VudXNlZF9sb3dwb3dlci9jcHVfcHdyX3Jl
cQ0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9waW5tdXhANzAwMDA4ZDQvdW51c2VkX2xvd3Bv
d2VyL2NwdV9wd3JfcmVxDQooWEVOKSAvcGlubXV4QDcwMDAwOGQ0L3VudXNlZF9sb3dwb3dlci9j
cHVfcHdyX3JlcSBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvcGlu
bXV4QDcwMDAwOGQ0L3VudXNlZF9sb3dwb3dlci9jcHVfcHdyX3JlcSBpcyBiZWhpbmQgdGhlIElP
TU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGlubXV4QDcwMDAwOGQ0
L3VudXNlZF9sb3dwb3dlci9jcHVfcHdyX3JlcQ0KKFhFTikgaGFuZGxlIC9waW5tdXhANzAwMDA4
ZDQvdW51c2VkX2xvd3Bvd2VyL2RhcDRfZGluX3BqNQ0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2
PS9waW5tdXhANzAwMDA4ZDQvdW51c2VkX2xvd3Bvd2VyL2RhcDRfZGluX3BqNQ0KKFhFTikgL3Bp
bm11eEA3MDAwMDhkNC91bnVzZWRfbG93cG93ZXIvZGFwNF9kaW5fcGo1IHBhc3N0aHJvdWdoID0g
MSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9waW5tdXhANzAwMDA4ZDQvdW51c2VkX2xvd3Bv
d2VyL2RhcDRfZGluX3BqNSBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0
X2lycV9udW1iZXI6IGRldj0vcGlubXV4QDcwMDAwOGQ0L3VudXNlZF9sb3dwb3dlci9kYXA0X2Rp
bl9wajUNCihYRU4pIGhhbmRsZSAvcGlubXV4QDcwMDAwOGQ0L3VudXNlZF9sb3dwb3dlci9kYXA0
X2RvdXRfcGo2DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3Bpbm11eEA3MDAwMDhkNC91bnVz
ZWRfbG93cG93ZXIvZGFwNF9kb3V0X3BqNg0KKFhFTikgL3Bpbm11eEA3MDAwMDhkNC91bnVzZWRf
bG93cG93ZXIvZGFwNF9kb3V0X3BqNiBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBD
aGVjayBpZiAvcGlubXV4QDcwMDAwOGQ0L3VudXNlZF9sb3dwb3dlci9kYXA0X2RvdXRfcGo2IGlz
IGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9w
aW5tdXhANzAwMDA4ZDQvdW51c2VkX2xvd3Bvd2VyL2RhcDRfZG91dF9wajYNCihYRU4pIGhhbmRs
ZSAvcGlubXV4QDcwMDAwOGQ0L3VudXNlZF9sb3dwb3dlci9kYXA0X2ZzX3BqNA0KKFhFTikgZHRf
aXJxX251bWJlcjogZGV2PS9waW5tdXhANzAwMDA4ZDQvdW51c2VkX2xvd3Bvd2VyL2RhcDRfZnNf
cGo0DQooWEVOKSAvcGlubXV4QDcwMDAwOGQ0L3VudXNlZF9sb3dwb3dlci9kYXA0X2ZzX3BqNCBw
YXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvcGlubXV4QDcwMDAwOGQ0
L3VudXNlZF9sb3dwb3dlci9kYXA0X2ZzX3BqNCBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQg
aXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGlubXV4QDcwMDAwOGQ0L3VudXNlZF9sb3dw
b3dlci9kYXA0X2ZzX3BqNA0KKFhFTikgaGFuZGxlIC9waW5tdXhANzAwMDA4ZDQvdW51c2VkX2xv
d3Bvd2VyL2RhcDRfc2Nsa19wajcNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGlubXV4QDcw
MDAwOGQ0L3VudXNlZF9sb3dwb3dlci9kYXA0X3NjbGtfcGo3DQooWEVOKSAvcGlubXV4QDcwMDAw
OGQ0L3VudXNlZF9sb3dwb3dlci9kYXA0X3NjbGtfcGo3IHBhc3N0aHJvdWdoID0gMSBuYWRkciA9
IDANCihYRU4pIENoZWNrIGlmIC9waW5tdXhANzAwMDA4ZDQvdW51c2VkX2xvd3Bvd2VyL2RhcDRf
c2Nsa19wajcgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVt
YmVyOiBkZXY9L3Bpbm11eEA3MDAwMDhkNC91bnVzZWRfbG93cG93ZXIvZGFwNF9zY2xrX3BqNw0K
KFhFTikgaGFuZGxlIC9waW5tdXhANzAwMDA4ZDQvdW51c2VkX2xvd3Bvd2VyL3VhcnQyX3J0c19w
ZzINCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGlubXV4QDcwMDAwOGQ0L3VudXNlZF9sb3dw
b3dlci91YXJ0Ml9ydHNfcGcyDQooWEVOKSAvcGlubXV4QDcwMDAwOGQ0L3VudXNlZF9sb3dwb3dl
ci91YXJ0Ml9ydHNfcGcyIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlm
IC9waW5tdXhANzAwMDA4ZDQvdW51c2VkX2xvd3Bvd2VyL3VhcnQyX3J0c19wZzIgaXMgYmVoaW5k
IHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3Bpbm11eEA3
MDAwMDhkNC91bnVzZWRfbG93cG93ZXIvdWFydDJfcnRzX3BnMg0KKFhFTikgaGFuZGxlIC9waW5t
dXhANzAwMDA4ZDQvdW51c2VkX2xvd3Bvd2VyL3VhcnQyX2N0c19wZzMNCihYRU4pIGR0X2lycV9u
dW1iZXI6IGRldj0vcGlubXV4QDcwMDAwOGQ0L3VudXNlZF9sb3dwb3dlci91YXJ0Ml9jdHNfcGcz
DQooWEVOKSAvcGlubXV4QDcwMDAwOGQ0L3VudXNlZF9sb3dwb3dlci91YXJ0Ml9jdHNfcGczIHBh
c3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9waW5tdXhANzAwMDA4ZDQv
dW51c2VkX2xvd3Bvd2VyL3VhcnQyX2N0c19wZzMgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRk
IGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3Bpbm11eEA3MDAwMDhkNC91bnVzZWRfbG93
cG93ZXIvdWFydDJfY3RzX3BnMw0KKFhFTikgaGFuZGxlIC9waW5tdXhANzAwMDA4ZDQvdW51c2Vk
X2xvd3Bvd2VyL3VhcnQxX3J0c19wdTINCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGlubXV4
QDcwMDAwOGQ0L3VudXNlZF9sb3dwb3dlci91YXJ0MV9ydHNfcHUyDQooWEVOKSAvcGlubXV4QDcw
MDAwOGQ0L3VudXNlZF9sb3dwb3dlci91YXJ0MV9ydHNfcHUyIHBhc3N0aHJvdWdoID0gMSBuYWRk
ciA9IDANCihYRU4pIENoZWNrIGlmIC9waW5tdXhANzAwMDA4ZDQvdW51c2VkX2xvd3Bvd2VyL3Vh
cnQxX3J0c19wdTIgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFf
bnVtYmVyOiBkZXY9L3Bpbm11eEA3MDAwMDhkNC91bnVzZWRfbG93cG93ZXIvdWFydDFfcnRzX3B1
Mg0KKFhFTikgaGFuZGxlIC9waW5tdXhANzAwMDA4ZDQvdW51c2VkX2xvd3Bvd2VyL3VhcnQxX2N0
c19wdTMNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGlubXV4QDcwMDAwOGQ0L3VudXNlZF9s
b3dwb3dlci91YXJ0MV9jdHNfcHUzDQooWEVOKSAvcGlubXV4QDcwMDAwOGQ0L3VudXNlZF9sb3dw
b3dlci91YXJ0MV9jdHNfcHUzIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNr
IGlmIC9waW5tdXhANzAwMDA4ZDQvdW51c2VkX2xvd3Bvd2VyL3VhcnQxX2N0c19wdTMgaXMgYmVo
aW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3Bpbm11
eEA3MDAwMDhkNC91bnVzZWRfbG93cG93ZXIvdWFydDFfY3RzX3B1Mw0KKFhFTikgaGFuZGxlIC9w
aW5tdXhANzAwMDA4ZDQvdW51c2VkX2xvd3Bvd2VyL3BrMA0KKFhFTikgZHRfaXJxX251bWJlcjog
ZGV2PS9waW5tdXhANzAwMDA4ZDQvdW51c2VkX2xvd3Bvd2VyL3BrMA0KKFhFTikgL3Bpbm11eEA3
MDAwMDhkNC91bnVzZWRfbG93cG93ZXIvcGswIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihY
RU4pIENoZWNrIGlmIC9waW5tdXhANzAwMDA4ZDQvdW51c2VkX2xvd3Bvd2VyL3BrMCBpcyBiZWhp
bmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGlubXV4
QDcwMDAwOGQ0L3VudXNlZF9sb3dwb3dlci9wazANCihYRU4pIGhhbmRsZSAvcGlubXV4QDcwMDAw
OGQ0L3VudXNlZF9sb3dwb3dlci9wazENCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGlubXV4
QDcwMDAwOGQ0L3VudXNlZF9sb3dwb3dlci9wazENCihYRU4pIC9waW5tdXhANzAwMDA4ZDQvdW51
c2VkX2xvd3Bvd2VyL3BrMSBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBp
ZiAvcGlubXV4QDcwMDAwOGQ0L3VudXNlZF9sb3dwb3dlci9wazEgaXMgYmVoaW5kIHRoZSBJT01N
VSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3Bpbm11eEA3MDAwMDhkNC91
bnVzZWRfbG93cG93ZXIvcGsxDQooWEVOKSBoYW5kbGUgL3Bpbm11eEA3MDAwMDhkNC91bnVzZWRf
bG93cG93ZXIvcGsyDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3Bpbm11eEA3MDAwMDhkNC91
bnVzZWRfbG93cG93ZXIvcGsyDQooWEVOKSAvcGlubXV4QDcwMDAwOGQ0L3VudXNlZF9sb3dwb3dl
ci9wazIgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3Bpbm11eEA3
MDAwMDhkNC91bnVzZWRfbG93cG93ZXIvcGsyIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBp
dA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9waW5tdXhANzAwMDA4ZDQvdW51c2VkX2xvd3Bv
d2VyL3BrMg0KKFhFTikgaGFuZGxlIC9waW5tdXhANzAwMDA4ZDQvdW51c2VkX2xvd3Bvd2VyL3Br
Mw0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9waW5tdXhANzAwMDA4ZDQvdW51c2VkX2xvd3Bv
d2VyL3BrMw0KKFhFTikgL3Bpbm11eEA3MDAwMDhkNC91bnVzZWRfbG93cG93ZXIvcGszIHBhc3N0
aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9waW5tdXhANzAwMDA4ZDQvdW51
c2VkX2xvd3Bvd2VyL3BrMyBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0
X2lycV9udW1iZXI6IGRldj0vcGlubXV4QDcwMDAwOGQ0L3VudXNlZF9sb3dwb3dlci9wazMNCihY
RU4pIGhhbmRsZSAvcGlubXV4QDcwMDAwOGQ0L3VudXNlZF9sb3dwb3dlci9wazQNCihYRU4pIGR0
X2lycV9udW1iZXI6IGRldj0vcGlubXV4QDcwMDAwOGQ0L3VudXNlZF9sb3dwb3dlci9wazQNCihY
RU4pIC9waW5tdXhANzAwMDA4ZDQvdW51c2VkX2xvd3Bvd2VyL3BrNCBwYXNzdGhyb3VnaCA9IDEg
bmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvcGlubXV4QDcwMDAwOGQ0L3VudXNlZF9sb3dwb3dl
ci9wazQgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVy
OiBkZXY9L3Bpbm11eEA3MDAwMDhkNC91bnVzZWRfbG93cG93ZXIvcGs0DQooWEVOKSBoYW5kbGUg
L3Bpbm11eEA3MDAwMDhkNC91bnVzZWRfbG93cG93ZXIvcGs1DQooWEVOKSBkdF9pcnFfbnVtYmVy
OiBkZXY9L3Bpbm11eEA3MDAwMDhkNC91bnVzZWRfbG93cG93ZXIvcGs1DQooWEVOKSAvcGlubXV4
QDcwMDAwOGQ0L3VudXNlZF9sb3dwb3dlci9wazUgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0K
KFhFTikgQ2hlY2sgaWYgL3Bpbm11eEA3MDAwMDhkNC91bnVzZWRfbG93cG93ZXIvcGs1IGlzIGJl
aGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9waW5t
dXhANzAwMDA4ZDQvdW51c2VkX2xvd3Bvd2VyL3BrNQ0KKFhFTikgaGFuZGxlIC9waW5tdXhANzAw
MDA4ZDQvdW51c2VkX2xvd3Bvd2VyL3BrNg0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9waW5t
dXhANzAwMDA4ZDQvdW51c2VkX2xvd3Bvd2VyL3BrNg0KKFhFTikgL3Bpbm11eEA3MDAwMDhkNC91
bnVzZWRfbG93cG93ZXIvcGs2IHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNr
IGlmIC9waW5tdXhANzAwMDA4ZDQvdW51c2VkX2xvd3Bvd2VyL3BrNiBpcyBiZWhpbmQgdGhlIElP
TU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGlubXV4QDcwMDAwOGQ0
L3VudXNlZF9sb3dwb3dlci9wazYNCihYRU4pIGhhbmRsZSAvcGlubXV4QDcwMDAwOGQ0L3VudXNl
ZF9sb3dwb3dlci9wazcNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGlubXV4QDcwMDAwOGQ0
L3VudXNlZF9sb3dwb3dlci9wazcNCihYRU4pIC9waW5tdXhANzAwMDA4ZDQvdW51c2VkX2xvd3Bv
d2VyL3BrNyBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvcGlubXV4
QDcwMDAwOGQ0L3VudXNlZF9sb3dwb3dlci9wazcgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRk
IGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3Bpbm11eEA3MDAwMDhkNC91bnVzZWRfbG93
cG93ZXIvcGs3DQooWEVOKSBoYW5kbGUgL3Bpbm11eEA3MDAwMDhkNC91bnVzZWRfbG93cG93ZXIv
cGwwDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3Bpbm11eEA3MDAwMDhkNC91bnVzZWRfbG93
cG93ZXIvcGwwDQooWEVOKSAvcGlubXV4QDcwMDAwOGQ0L3VudXNlZF9sb3dwb3dlci9wbDAgcGFz
c3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3Bpbm11eEA3MDAwMDhkNC91
bnVzZWRfbG93cG93ZXIvcGwwIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikg
ZHRfaXJxX251bWJlcjogZGV2PS9waW5tdXhANzAwMDA4ZDQvdW51c2VkX2xvd3Bvd2VyL3BsMA0K
KFhFTikgaGFuZGxlIC9waW5tdXhANzAwMDA4ZDQvdW51c2VkX2xvd3Bvd2VyL3BsMQ0KKFhFTikg
ZHRfaXJxX251bWJlcjogZGV2PS9waW5tdXhANzAwMDA4ZDQvdW51c2VkX2xvd3Bvd2VyL3BsMQ0K
KFhFTikgL3Bpbm11eEA3MDAwMDhkNC91bnVzZWRfbG93cG93ZXIvcGwxIHBhc3N0aHJvdWdoID0g
MSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9waW5tdXhANzAwMDA4ZDQvdW51c2VkX2xvd3Bv
d2VyL3BsMSBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1i
ZXI6IGRldj0vcGlubXV4QDcwMDAwOGQ0L3VudXNlZF9sb3dwb3dlci9wbDENCihYRU4pIGhhbmRs
ZSAvcGlubXV4QDcwMDAwOGQ0L3VudXNlZF9sb3dwb3dlci9zcGkxX21vc2lfcGMwDQooWEVOKSBk
dF9pcnFfbnVtYmVyOiBkZXY9L3Bpbm11eEA3MDAwMDhkNC91bnVzZWRfbG93cG93ZXIvc3BpMV9t
b3NpX3BjMA0KKFhFTikgL3Bpbm11eEA3MDAwMDhkNC91bnVzZWRfbG93cG93ZXIvc3BpMV9tb3Np
X3BjMCBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvcGlubXV4QDcw
MDAwOGQ0L3VudXNlZF9sb3dwb3dlci9zcGkxX21vc2lfcGMwIGlzIGJlaGluZCB0aGUgSU9NTVUg
YW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9waW5tdXhANzAwMDA4ZDQvdW51
c2VkX2xvd3Bvd2VyL3NwaTFfbW9zaV9wYzANCihYRU4pIGhhbmRsZSAvcGlubXV4QDcwMDAwOGQ0
L3VudXNlZF9sb3dwb3dlci9zcGkxX21pc29fcGMxDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9
L3Bpbm11eEA3MDAwMDhkNC91bnVzZWRfbG93cG93ZXIvc3BpMV9taXNvX3BjMQ0KKFhFTikgL3Bp
bm11eEA3MDAwMDhkNC91bnVzZWRfbG93cG93ZXIvc3BpMV9taXNvX3BjMSBwYXNzdGhyb3VnaCA9
IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvcGlubXV4QDcwMDAwOGQ0L3VudXNlZF9sb3dw
b3dlci9zcGkxX21pc29fcGMxIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikg
ZHRfaXJxX251bWJlcjogZGV2PS9waW5tdXhANzAwMDA4ZDQvdW51c2VkX2xvd3Bvd2VyL3NwaTFf
bWlzb19wYzENCihYRU4pIGhhbmRsZSAvcGlubXV4QDcwMDAwOGQ0L3VudXNlZF9sb3dwb3dlci9z
cGkxX3Nja19wYzINCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGlubXV4QDcwMDAwOGQ0L3Vu
dXNlZF9sb3dwb3dlci9zcGkxX3Nja19wYzINCihYRU4pIC9waW5tdXhANzAwMDA4ZDQvdW51c2Vk
X2xvd3Bvd2VyL3NwaTFfc2NrX3BjMiBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBD
aGVjayBpZiAvcGlubXV4QDcwMDAwOGQ0L3VudXNlZF9sb3dwb3dlci9zcGkxX3Nja19wYzIgaXMg
YmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3Bp
bm11eEA3MDAwMDhkNC91bnVzZWRfbG93cG93ZXIvc3BpMV9zY2tfcGMyDQooWEVOKSBoYW5kbGUg
L3Bpbm11eEA3MDAwMDhkNC91bnVzZWRfbG93cG93ZXIvc3BpMV9jczBfcGMzDQooWEVOKSBkdF9p
cnFfbnVtYmVyOiBkZXY9L3Bpbm11eEA3MDAwMDhkNC91bnVzZWRfbG93cG93ZXIvc3BpMV9jczBf
cGMzDQooWEVOKSAvcGlubXV4QDcwMDAwOGQ0L3VudXNlZF9sb3dwb3dlci9zcGkxX2NzMF9wYzMg
cGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3Bpbm11eEA3MDAwMDhk
NC91bnVzZWRfbG93cG93ZXIvc3BpMV9jczBfcGMzIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFk
ZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9waW5tdXhANzAwMDA4ZDQvdW51c2VkX2xv
d3Bvd2VyL3NwaTFfY3MwX3BjMw0KKFhFTikgaGFuZGxlIC9waW5tdXhANzAwMDA4ZDQvdW51c2Vk
X2xvd3Bvd2VyL3NwaTFfY3MxX3BjNA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9waW5tdXhA
NzAwMDA4ZDQvdW51c2VkX2xvd3Bvd2VyL3NwaTFfY3MxX3BjNA0KKFhFTikgL3Bpbm11eEA3MDAw
MDhkNC91bnVzZWRfbG93cG93ZXIvc3BpMV9jczFfcGM0IHBhc3N0aHJvdWdoID0gMSBuYWRkciA9
IDANCihYRU4pIENoZWNrIGlmIC9waW5tdXhANzAwMDA4ZDQvdW51c2VkX2xvd3Bvd2VyL3NwaTFf
Y3MxX3BjNCBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1i
ZXI6IGRldj0vcGlubXV4QDcwMDAwOGQ0L3VudXNlZF9sb3dwb3dlci9zcGkxX2NzMV9wYzQNCihY
RU4pIGhhbmRsZSAvcGlubXV4QDcwMDAwOGQ0L3VudXNlZF9sb3dwb3dlci9zcGk0X21vc2lfcGM3
DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3Bpbm11eEA3MDAwMDhkNC91bnVzZWRfbG93cG93
ZXIvc3BpNF9tb3NpX3BjNw0KKFhFTikgL3Bpbm11eEA3MDAwMDhkNC91bnVzZWRfbG93cG93ZXIv
c3BpNF9tb3NpX3BjNyBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAv
cGlubXV4QDcwMDAwOGQ0L3VudXNlZF9sb3dwb3dlci9zcGk0X21vc2lfcGM3IGlzIGJlaGluZCB0
aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9waW5tdXhANzAw
MDA4ZDQvdW51c2VkX2xvd3Bvd2VyL3NwaTRfbW9zaV9wYzcNCihYRU4pIGhhbmRsZSAvcGlubXV4
QDcwMDAwOGQ0L3VudXNlZF9sb3dwb3dlci9zcGk0X21pc29fcGQwDQooWEVOKSBkdF9pcnFfbnVt
YmVyOiBkZXY9L3Bpbm11eEA3MDAwMDhkNC91bnVzZWRfbG93cG93ZXIvc3BpNF9taXNvX3BkMA0K
KFhFTikgL3Bpbm11eEA3MDAwMDhkNC91bnVzZWRfbG93cG93ZXIvc3BpNF9taXNvX3BkMCBwYXNz
dGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvcGlubXV4QDcwMDAwOGQ0L3Vu
dXNlZF9sb3dwb3dlci9zcGk0X21pc29fcGQwIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBp
dA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9waW5tdXhANzAwMDA4ZDQvdW51c2VkX2xvd3Bv
d2VyL3NwaTRfbWlzb19wZDANCihYRU4pIGhhbmRsZSAvcGlubXV4QDcwMDAwOGQ0L3VudXNlZF9s
b3dwb3dlci9zcGk0X3Nja19wYzUNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGlubXV4QDcw
MDAwOGQ0L3VudXNlZF9sb3dwb3dlci9zcGk0X3Nja19wYzUNCihYRU4pIC9waW5tdXhANzAwMDA4
ZDQvdW51c2VkX2xvd3Bvd2VyL3NwaTRfc2NrX3BjNSBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAw
DQooWEVOKSBDaGVjayBpZiAvcGlubXV4QDcwMDAwOGQ0L3VudXNlZF9sb3dwb3dlci9zcGk0X3Nj
a19wYzUgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVy
OiBkZXY9L3Bpbm11eEA3MDAwMDhkNC91bnVzZWRfbG93cG93ZXIvc3BpNF9zY2tfcGM1DQooWEVO
KSBoYW5kbGUgL3Bpbm11eEA3MDAwMDhkNC91bnVzZWRfbG93cG93ZXIvc3BpNF9jczBfcGM2DQoo
WEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3Bpbm11eEA3MDAwMDhkNC91bnVzZWRfbG93cG93ZXIv
c3BpNF9jczBfcGM2DQooWEVOKSAvcGlubXV4QDcwMDAwOGQ0L3VudXNlZF9sb3dwb3dlci9zcGk0
X2NzMF9wYzYgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3Bpbm11
eEA3MDAwMDhkNC91bnVzZWRfbG93cG93ZXIvc3BpNF9jczBfcGM2IGlzIGJlaGluZCB0aGUgSU9N
TVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9waW5tdXhANzAwMDA4ZDQv
dW51c2VkX2xvd3Bvd2VyL3NwaTRfY3MwX3BjNg0KKFhFTikgaGFuZGxlIC9waW5tdXhANzAwMDA4
ZDQvdW51c2VkX2xvd3Bvd2VyL3dpZmlfcnN0X3BoMQ0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2
PS9waW5tdXhANzAwMDA4ZDQvdW51c2VkX2xvd3Bvd2VyL3dpZmlfcnN0X3BoMQ0KKFhFTikgL3Bp
bm11eEA3MDAwMDhkNC91bnVzZWRfbG93cG93ZXIvd2lmaV9yc3RfcGgxIHBhc3N0aHJvdWdoID0g
MSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9waW5tdXhANzAwMDA4ZDQvdW51c2VkX2xvd3Bv
d2VyL3dpZmlfcnN0X3BoMSBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0
X2lycV9udW1iZXI6IGRldj0vcGlubXV4QDcwMDAwOGQ0L3VudXNlZF9sb3dwb3dlci93aWZpX3Jz
dF9waDENCihYRU4pIGhhbmRsZSAvcGlubXV4QDcwMDAwOGQ0L3VudXNlZF9sb3dwb3dlci9ncHNf
cnN0X3BpMw0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9waW5tdXhANzAwMDA4ZDQvdW51c2Vk
X2xvd3Bvd2VyL2dwc19yc3RfcGkzDQooWEVOKSAvcGlubXV4QDcwMDAwOGQ0L3VudXNlZF9sb3dw
b3dlci9ncHNfcnN0X3BpMyBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBp
ZiAvcGlubXV4QDcwMDAwOGQ0L3VudXNlZF9sb3dwb3dlci9ncHNfcnN0X3BpMyBpcyBiZWhpbmQg
dGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGlubXV4QDcw
MDAwOGQ0L3VudXNlZF9sb3dwb3dlci9ncHNfcnN0X3BpMw0KKFhFTikgaGFuZGxlIC9waW5tdXhA
NzAwMDA4ZDQvdW51c2VkX2xvd3Bvd2VyL3NwZGlmX291dF9wY2MyDQooWEVOKSBkdF9pcnFfbnVt
YmVyOiBkZXY9L3Bpbm11eEA3MDAwMDhkNC91bnVzZWRfbG93cG93ZXIvc3BkaWZfb3V0X3BjYzIN
CihYRU4pIC9waW5tdXhANzAwMDA4ZDQvdW51c2VkX2xvd3Bvd2VyL3NwZGlmX291dF9wY2MyIHBh
c3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9waW5tdXhANzAwMDA4ZDQv
dW51c2VkX2xvd3Bvd2VyL3NwZGlmX291dF9wY2MyIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFk
ZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9waW5tdXhANzAwMDA4ZDQvdW51c2VkX2xv
d3Bvd2VyL3NwZGlmX291dF9wY2MyDQooWEVOKSBoYW5kbGUgL3Bpbm11eEA3MDAwMDhkNC91bnVz
ZWRfbG93cG93ZXIvc3BkaWZfaW5fcGNjMw0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9waW5t
dXhANzAwMDA4ZDQvdW51c2VkX2xvd3Bvd2VyL3NwZGlmX2luX3BjYzMNCihYRU4pIC9waW5tdXhA
NzAwMDA4ZDQvdW51c2VkX2xvd3Bvd2VyL3NwZGlmX2luX3BjYzMgcGFzc3Rocm91Z2ggPSAxIG5h
ZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3Bpbm11eEA3MDAwMDhkNC91bnVzZWRfbG93cG93ZXIv
c3BkaWZfaW5fcGNjMyBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2ly
cV9udW1iZXI6IGRldj0vcGlubXV4QDcwMDAwOGQ0L3VudXNlZF9sb3dwb3dlci9zcGRpZl9pbl9w
Y2MzDQooWEVOKSBoYW5kbGUgL3Bpbm11eEA3MDAwMDhkNC91bnVzZWRfbG93cG93ZXIvdXNiX3Zi
dXNfZW4xX3BjYzUNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGlubXV4QDcwMDAwOGQ0L3Vu
dXNlZF9sb3dwb3dlci91c2JfdmJ1c19lbjFfcGNjNQ0KKFhFTikgL3Bpbm11eEA3MDAwMDhkNC91
bnVzZWRfbG93cG93ZXIvdXNiX3ZidXNfZW4xX3BjYzUgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0g
MA0KKFhFTikgQ2hlY2sgaWYgL3Bpbm11eEA3MDAwMDhkNC91bnVzZWRfbG93cG93ZXIvdXNiX3Zi
dXNfZW4xX3BjYzUgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFf
bnVtYmVyOiBkZXY9L3Bpbm11eEA3MDAwMDhkNC91bnVzZWRfbG93cG93ZXIvdXNiX3ZidXNfZW4x
X3BjYzUNCihYRU4pIGhhbmRsZSAvcGlubXV4QDcwMDAwOGQ0L2RyaXZlDQooWEVOKSBkdF9pcnFf
bnVtYmVyOiBkZXY9L3Bpbm11eEA3MDAwMDhkNC9kcml2ZQ0KKFhFTikgL3Bpbm11eEA3MDAwMDhk
NC9kcml2ZSBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvcGlubXV4
QDcwMDAwOGQ0L2RyaXZlIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRf
aXJxX251bWJlcjogZGV2PS9waW5tdXhANzAwMDA4ZDQvZHJpdmUNCihYRU4pIGhhbmRsZSAvZ3Bp
b0A2MDAwZDAwMA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9ncGlvQDYwMDBkMDAwDQooWEVO
KSAgdXNpbmcgJ2ludGVycnVwdHMnIHByb3BlcnR5DQooWEVOKSAgaW50c3BlYz0wIGludGxlbj0y
NA0KKFhFTikgIGludHNpemU9MyBpbnRsZW49MjQNCihYRU4pIGR0X2RldmljZV9nZXRfcmF3X2ly
cTogZGV2PS9ncGlvQDYwMDBkMDAwLCBpbmRleD0wDQooWEVOKSAgdXNpbmcgJ2ludGVycnVwdHMn
IHByb3BlcnR5DQooWEVOKSAgaW50c3BlYz0wIGludGxlbj0yNA0KKFhFTikgIGludHNpemU9MyBp
bnRsZW49MjQNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBwYXI9L2ludGVycnVwdC1jb250cm9sbGVy
QDYwMDA0MDAwLGludHNwZWM9WzB4MDAwMDAwMDAgMHgwMDAwMDAyMC4uLl0sb2ludHNpemU9Mw0K
KFhFTikgZHRfaXJxX21hcF9yYXc6IGlwYXI9L2ludGVycnVwdC1jb250cm9sbGVyQDYwMDA0MDAw
LCBzaXplPTMNCihYRU4pICAtPiBhZGRyc2l6ZT0yDQooWEVOKSAgLT4gZ290IGl0ICENCihYRU4p
IGR0X2RldmljZV9nZXRfcmF3X2lycTogZGV2PS9ncGlvQDYwMDBkMDAwLCBpbmRleD0xDQooWEVO
KSAgdXNpbmcgJ2ludGVycnVwdHMnIHByb3BlcnR5DQooWEVOKSAgaW50c3BlYz0wIGludGxlbj0y
NA0KKFhFTikgIGludHNpemU9MyBpbnRsZW49MjQNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBwYXI9
L2ludGVycnVwdC1jb250cm9sbGVyQDYwMDA0MDAwLGludHNwZWM9WzB4MDAwMDAwMDAgMHgwMDAw
MDAyMS4uLl0sb2ludHNpemU9Mw0KKFhFTikgZHRfaXJxX21hcF9yYXc6IGlwYXI9L2ludGVycnVw
dC1jb250cm9sbGVyQDYwMDA0MDAwLCBzaXplPTMNCihYRU4pICAtPiBhZGRyc2l6ZT0yDQooWEVO
KSAgLT4gZ290IGl0ICENCihYRU4pIGR0X2RldmljZV9nZXRfcmF3X2lycTogZGV2PS9ncGlvQDYw
MDBkMDAwLCBpbmRleD0yDQooWEVOKSAgdXNpbmcgJ2ludGVycnVwdHMnIHByb3BlcnR5DQooWEVO
KSAgaW50c3BlYz0wIGludGxlbj0yNA0KKFhFTikgIGludHNpemU9MyBpbnRsZW49MjQNCihYRU4p
IGR0X2lycV9tYXBfcmF3OiBwYXI9L2ludGVycnVwdC1jb250cm9sbGVyQDYwMDA0MDAwLGludHNw
ZWM9WzB4MDAwMDAwMDAgMHgwMDAwMDAyMi4uLl0sb2ludHNpemU9Mw0KKFhFTikgZHRfaXJxX21h
cF9yYXc6IGlwYXI9L2ludGVycnVwdC1jb250cm9sbGVyQDYwMDA0MDAwLCBzaXplPTMNCihYRU4p
ICAtPiBhZGRyc2l6ZT0yDQooWEVOKSAgLT4gZ290IGl0ICENCihYRU4pIGR0X2RldmljZV9nZXRf
cmF3X2lycTogZGV2PS9ncGlvQDYwMDBkMDAwLCBpbmRleD0zDQooWEVOKSAgdXNpbmcgJ2ludGVy
cnVwdHMnIHByb3BlcnR5DQooWEVOKSAgaW50c3BlYz0wIGludGxlbj0yNA0KKFhFTikgIGludHNp
emU9MyBpbnRsZW49MjQNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBwYXI9L2ludGVycnVwdC1jb250
cm9sbGVyQDYwMDA0MDAwLGludHNwZWM9WzB4MDAwMDAwMDAgMHgwMDAwMDAyMy4uLl0sb2ludHNp
emU9Mw0KKFhFTikgZHRfaXJxX21hcF9yYXc6IGlwYXI9L2ludGVycnVwdC1jb250cm9sbGVyQDYw
MDA0MDAwLCBzaXplPTMNCihYRU4pICAtPiBhZGRyc2l6ZT0yDQooWEVOKSAgLT4gZ290IGl0ICEN
CihYRU4pIGR0X2RldmljZV9nZXRfcmF3X2lycTogZGV2PS9ncGlvQDYwMDBkMDAwLCBpbmRleD00
DQooWEVOKSAgdXNpbmcgJ2ludGVycnVwdHMnIHByb3BlcnR5DQooWEVOKSAgaW50c3BlYz0wIGlu
dGxlbj0yNA0KKFhFTikgIGludHNpemU9MyBpbnRsZW49MjQNCihYRU4pIGR0X2lycV9tYXBfcmF3
OiBwYXI9L2ludGVycnVwdC1jb250cm9sbGVyQDYwMDA0MDAwLGludHNwZWM9WzB4MDAwMDAwMDAg
MHgwMDAwMDAzNy4uLl0sb2ludHNpemU9Mw0KKFhFTikgZHRfaXJxX21hcF9yYXc6IGlwYXI9L2lu
dGVycnVwdC1jb250cm9sbGVyQDYwMDA0MDAwLCBzaXplPTMNCihYRU4pICAtPiBhZGRyc2l6ZT0y
DQooWEVOKSAgLT4gZ290IGl0ICENCihYRU4pIGR0X2RldmljZV9nZXRfcmF3X2lycTogZGV2PS9n
cGlvQDYwMDBkMDAwLCBpbmRleD01DQooWEVOKSAgdXNpbmcgJ2ludGVycnVwdHMnIHByb3BlcnR5
DQooWEVOKSAgaW50c3BlYz0wIGludGxlbj0yNA0KKFhFTikgIGludHNpemU9MyBpbnRsZW49MjQN
CihYRU4pIGR0X2lycV9tYXBfcmF3OiBwYXI9L2ludGVycnVwdC1jb250cm9sbGVyQDYwMDA0MDAw
LGludHNwZWM9WzB4MDAwMDAwMDAgMHgwMDAwMDA1Ny4uLl0sb2ludHNpemU9Mw0KKFhFTikgZHRf
aXJxX21hcF9yYXc6IGlwYXI9L2ludGVycnVwdC1jb250cm9sbGVyQDYwMDA0MDAwLCBzaXplPTMN
CihYRU4pICAtPiBhZGRyc2l6ZT0yDQooWEVOKSAgLT4gZ290IGl0ICENCihYRU4pIGR0X2Rldmlj
ZV9nZXRfcmF3X2lycTogZGV2PS9ncGlvQDYwMDBkMDAwLCBpbmRleD02DQooWEVOKSAgdXNpbmcg
J2ludGVycnVwdHMnIHByb3BlcnR5DQooWEVOKSAgaW50c3BlYz0wIGludGxlbj0yNA0KKFhFTikg
IGludHNpemU9MyBpbnRsZW49MjQNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBwYXI9L2ludGVycnVw
dC1jb250cm9sbGVyQDYwMDA0MDAwLGludHNwZWM9WzB4MDAwMDAwMDAgMHgwMDAwMDA1OS4uLl0s
b2ludHNpemU9Mw0KKFhFTikgZHRfaXJxX21hcF9yYXc6IGlwYXI9L2ludGVycnVwdC1jb250cm9s
bGVyQDYwMDA0MDAwLCBzaXplPTMNCihYRU4pICAtPiBhZGRyc2l6ZT0yDQooWEVOKSAgLT4gZ290
IGl0ICENCihYRU4pIGR0X2RldmljZV9nZXRfcmF3X2lycTogZGV2PS9ncGlvQDYwMDBkMDAwLCBp
bmRleD03DQooWEVOKSAgdXNpbmcgJ2ludGVycnVwdHMnIHByb3BlcnR5DQooWEVOKSAgaW50c3Bl
Yz0wIGludGxlbj0yNA0KKFhFTikgIGludHNpemU9MyBpbnRsZW49MjQNCihYRU4pIGR0X2lycV9t
YXBfcmF3OiBwYXI9L2ludGVycnVwdC1jb250cm9sbGVyQDYwMDA0MDAwLGludHNwZWM9WzB4MDAw
MDAwMDAgMHgwMDAwMDA3ZC4uLl0sb2ludHNpemU9Mw0KKFhFTikgZHRfaXJxX21hcF9yYXc6IGlw
YXI9L2ludGVycnVwdC1jb250cm9sbGVyQDYwMDA0MDAwLCBzaXplPTMNCihYRU4pICAtPiBhZGRy
c2l6ZT0yDQooWEVOKSAgLT4gZ290IGl0ICENCihYRU4pIC9ncGlvQDYwMDBkMDAwIHBhc3N0aHJv
dWdoID0gMSBuYWRkciA9IDENCihYRU4pIENoZWNrIGlmIC9ncGlvQDYwMDBkMDAwIGlzIGJlaGlu
ZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9ncGlvQDYw
MDBkMDAwDQooWEVOKSAgdXNpbmcgJ2ludGVycnVwdHMnIHByb3BlcnR5DQooWEVOKSAgaW50c3Bl
Yz0wIGludGxlbj0yNA0KKFhFTikgIGludHNpemU9MyBpbnRsZW49MjQNCihYRU4pIGR0X2Rldmlj
ZV9nZXRfcmF3X2lycTogZGV2PS9ncGlvQDYwMDBkMDAwLCBpbmRleD0wDQooWEVOKSAgdXNpbmcg
J2ludGVycnVwdHMnIHByb3BlcnR5DQooWEVOKSAgaW50c3BlYz0wIGludGxlbj0yNA0KKFhFTikg
IGludHNpemU9MyBpbnRsZW49MjQNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBwYXI9L2ludGVycnVw
dC1jb250cm9sbGVyQDYwMDA0MDAwLGludHNwZWM9WzB4MDAwMDAwMDAgMHgwMDAwMDAyMC4uLl0s
b2ludHNpemU9Mw0KKFhFTikgZHRfaXJxX21hcF9yYXc6IGlwYXI9L2ludGVycnVwdC1jb250cm9s
bGVyQDYwMDA0MDAwLCBzaXplPTMNCihYRU4pICAtPiBhZGRyc2l6ZT0yDQooWEVOKSAgLT4gZ290
IGl0ICENCihYRU4pIGlycSAwIG5vdCAoZGlyZWN0bHkgb3IgaW5kaXJlY3RseSkgY29ubmVjdGVk
IHRvIHByaW1hcnljb250cm9sbGVyLiBDb25uZWN0ZWQgdG8gL2ludGVycnVwdC1jb250cm9sbGVy
QDYwMDA0MDAwDQooWEVOKSBkdF9kZXZpY2VfZ2V0X3Jhd19pcnE6IGRldj0vZ3Bpb0A2MDAwZDAw
MCwgaW5kZXg9MQ0KKFhFTikgIHVzaW5nICdpbnRlcnJ1cHRzJyBwcm9wZXJ0eQ0KKFhFTikgIGlu
dHNwZWM9MCBpbnRsZW49MjQNCihYRU4pICBpbnRzaXplPTMgaW50bGVuPTI0DQooWEVOKSBkdF9p
cnFfbWFwX3JhdzogcGFyPS9pbnRlcnJ1cHQtY29udHJvbGxlckA2MDAwNDAwMCxpbnRzcGVjPVsw
eDAwMDAwMDAwIDB4MDAwMDAwMjEuLi5dLG9pbnRzaXplPTMNCihYRU4pIGR0X2lycV9tYXBfcmF3
OiBpcGFyPS9pbnRlcnJ1cHQtY29udHJvbGxlckA2MDAwNDAwMCwgc2l6ZT0zDQooWEVOKSAgLT4g
YWRkcnNpemU9Mg0KKFhFTikgIC0+IGdvdCBpdCAhDQooWEVOKSBpcnEgMSBub3QgKGRpcmVjdGx5
IG9yIGluZGlyZWN0bHkpIGNvbm5lY3RlZCB0byBwcmltYXJ5Y29udHJvbGxlci4gQ29ubmVjdGVk
IHRvIC9pbnRlcnJ1cHQtY29udHJvbGxlckA2MDAwNDAwMA0KKFhFTikgZHRfZGV2aWNlX2dldF9y
YXdfaXJxOiBkZXY9L2dwaW9ANjAwMGQwMDAsIGluZGV4PTINCihYRU4pICB1c2luZyAnaW50ZXJy
dXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTI0DQooWEVOKSAgaW50c2l6
ZT0zIGludGxlbj0yNA0KKFhFTikgZHRfaXJxX21hcF9yYXc6IHBhcj0vaW50ZXJydXB0LWNvbnRy
b2xsZXJANjAwMDQwMDAsaW50c3BlYz1bMHgwMDAwMDAwMCAweDAwMDAwMDIyLi4uXSxvaW50c2l6
ZT0zDQooWEVOKSBkdF9pcnFfbWFwX3JhdzogaXBhcj0vaW50ZXJydXB0LWNvbnRyb2xsZXJANjAw
MDQwMDAsIHNpemU9Mw0KKFhFTikgIC0+IGFkZHJzaXplPTINCihYRU4pICAtPiBnb3QgaXQgIQ0K
KFhFTikgaXJxIDIgbm90IChkaXJlY3RseSBvciBpbmRpcmVjdGx5KSBjb25uZWN0ZWQgdG8gcHJp
bWFyeWNvbnRyb2xsZXIuIENvbm5lY3RlZCB0byAvaW50ZXJydXB0LWNvbnRyb2xsZXJANjAwMDQw
MDANCihYRU4pIGR0X2RldmljZV9nZXRfcmF3X2lycTogZGV2PS9ncGlvQDYwMDBkMDAwLCBpbmRl
eD0zDQooWEVOKSAgdXNpbmcgJ2ludGVycnVwdHMnIHByb3BlcnR5DQooWEVOKSAgaW50c3BlYz0w
IGludGxlbj0yNA0KKFhFTikgIGludHNpemU9MyBpbnRsZW49MjQNCihYRU4pIGR0X2lycV9tYXBf
cmF3OiBwYXI9L2ludGVycnVwdC1jb250cm9sbGVyQDYwMDA0MDAwLGludHNwZWM9WzB4MDAwMDAw
MDAgMHgwMDAwMDAyMy4uLl0sb2ludHNpemU9Mw0KKFhFTikgZHRfaXJxX21hcF9yYXc6IGlwYXI9
L2ludGVycnVwdC1jb250cm9sbGVyQDYwMDA0MDAwLCBzaXplPTMNCihYRU4pICAtPiBhZGRyc2l6
ZT0yDQooWEVOKSAgLT4gZ290IGl0ICENCihYRU4pIGlycSAzIG5vdCAoZGlyZWN0bHkgb3IgaW5k
aXJlY3RseSkgY29ubmVjdGVkIHRvIHByaW1hcnljb250cm9sbGVyLiBDb25uZWN0ZWQgdG8gL2lu
dGVycnVwdC1jb250cm9sbGVyQDYwMDA0MDAwDQooWEVOKSBkdF9kZXZpY2VfZ2V0X3Jhd19pcnE6
IGRldj0vZ3Bpb0A2MDAwZDAwMCwgaW5kZXg9NA0KKFhFTikgIHVzaW5nICdpbnRlcnJ1cHRzJyBw
cm9wZXJ0eQ0KKFhFTikgIGludHNwZWM9MCBpbnRsZW49MjQNCihYRU4pICBpbnRzaXplPTMgaW50
bGVuPTI0DQooWEVOKSBkdF9pcnFfbWFwX3JhdzogcGFyPS9pbnRlcnJ1cHQtY29udHJvbGxlckA2
MDAwNDAwMCxpbnRzcGVjPVsweDAwMDAwMDAwIDB4MDAwMDAwMzcuLi5dLG9pbnRzaXplPTMNCihY
RU4pIGR0X2lycV9tYXBfcmF3OiBpcGFyPS9pbnRlcnJ1cHQtY29udHJvbGxlckA2MDAwNDAwMCwg
c2l6ZT0zDQooWEVOKSAgLT4gYWRkcnNpemU9Mg0KKFhFTikgIC0+IGdvdCBpdCAhDQooWEVOKSBp
cnEgNCBub3QgKGRpcmVjdGx5IG9yIGluZGlyZWN0bHkpIGNvbm5lY3RlZCB0byBwcmltYXJ5Y29u
dHJvbGxlci4gQ29ubmVjdGVkIHRvIC9pbnRlcnJ1cHQtY29udHJvbGxlckA2MDAwNDAwMA0KKFhF
TikgZHRfZGV2aWNlX2dldF9yYXdfaXJxOiBkZXY9L2dwaW9ANjAwMGQwMDAsIGluZGV4PTUNCihY
RU4pICB1c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVu
PTI0DQooWEVOKSAgaW50c2l6ZT0zIGludGxlbj0yNA0KKFhFTikgZHRfaXJxX21hcF9yYXc6IHBh
cj0vaW50ZXJydXB0LWNvbnRyb2xsZXJANjAwMDQwMDAsaW50c3BlYz1bMHgwMDAwMDAwMCAweDAw
MDAwMDU3Li4uXSxvaW50c2l6ZT0zDQooWEVOKSBkdF9pcnFfbWFwX3JhdzogaXBhcj0vaW50ZXJy
dXB0LWNvbnRyb2xsZXJANjAwMDQwMDAsIHNpemU9Mw0KKFhFTikgIC0+IGFkZHJzaXplPTINCihY
RU4pICAtPiBnb3QgaXQgIQ0KKFhFTikgaXJxIDUgbm90IChkaXJlY3RseSBvciBpbmRpcmVjdGx5
KSBjb25uZWN0ZWQgdG8gcHJpbWFyeWNvbnRyb2xsZXIuIENvbm5lY3RlZCB0byAvaW50ZXJydXB0
LWNvbnRyb2xsZXJANjAwMDQwMDANCihYRU4pIGR0X2RldmljZV9nZXRfcmF3X2lycTogZGV2PS9n
cGlvQDYwMDBkMDAwLCBpbmRleD02DQooWEVOKSAgdXNpbmcgJ2ludGVycnVwdHMnIHByb3BlcnR5
DQooWEVOKSAgaW50c3BlYz0wIGludGxlbj0yNA0KKFhFTikgIGludHNpemU9MyBpbnRsZW49MjQN
CihYRU4pIGR0X2lycV9tYXBfcmF3OiBwYXI9L2ludGVycnVwdC1jb250cm9sbGVyQDYwMDA0MDAw
LGludHNwZWM9WzB4MDAwMDAwMDAgMHgwMDAwMDA1OS4uLl0sb2ludHNpemU9Mw0KKFhFTikgZHRf
aXJxX21hcF9yYXc6IGlwYXI9L2ludGVycnVwdC1jb250cm9sbGVyQDYwMDA0MDAwLCBzaXplPTMN
CihYRU4pICAtPiBhZGRyc2l6ZT0yDQooWEVOKSAgLT4gZ290IGl0ICENCihYRU4pIGlycSA2IG5v
dCAoZGlyZWN0bHkgb3IgaW5kaXJlY3RseSkgY29ubmVjdGVkIHRvIHByaW1hcnljb250cm9sbGVy
LiBDb25uZWN0ZWQgdG8gL2ludGVycnVwdC1jb250cm9sbGVyQDYwMDA0MDAwDQooWEVOKSBkdF9k
ZXZpY2VfZ2V0X3Jhd19pcnE6IGRldj0vZ3Bpb0A2MDAwZDAwMCwgaW5kZXg9Nw0KKFhFTikgIHVz
aW5nICdpbnRlcnJ1cHRzJyBwcm9wZXJ0eQ0KKFhFTikgIGludHNwZWM9MCBpbnRsZW49MjQNCihY
RU4pICBpbnRzaXplPTMgaW50bGVuPTI0DQooWEVOKSBkdF9pcnFfbWFwX3JhdzogcGFyPS9pbnRl
cnJ1cHQtY29udHJvbGxlckA2MDAwNDAwMCxpbnRzcGVjPVsweDAwMDAwMDAwIDB4MDAwMDAwN2Qu
Li5dLG9pbnRzaXplPTMNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBpcGFyPS9pbnRlcnJ1cHQtY29u
dHJvbGxlckA2MDAwNDAwMCwgc2l6ZT0zDQooWEVOKSAgLT4gYWRkcnNpemU9Mg0KKFhFTikgIC0+
IGdvdCBpdCAhDQooWEVOKSBpcnEgNyBub3QgKGRpcmVjdGx5IG9yIGluZGlyZWN0bHkpIGNvbm5l
Y3RlZCB0byBwcmltYXJ5Y29udHJvbGxlci4gQ29ubmVjdGVkIHRvIC9pbnRlcnJ1cHQtY29udHJv
bGxlckA2MDAwNDAwMA0KKFhFTikgRFQ6ICoqIHRyYW5zbGF0aW9uIGZvciBkZXZpY2UgL2dwaW9A
NjAwMGQwMDAgKioNCihYRU4pIERUOiBidXMgaXMgZGVmYXVsdCAobmE9MiwgbnM9Mikgb24gLw0K
KFhFTikgRFQ6IHRyYW5zbGF0aW5nIGFkZHJlc3M6PDM+IDAwMDAwMDAwPDM+IDYwMDBkMDAwPDM+
DQooWEVOKSBEVDogcmVhY2hlZCByb290IG5vZGUNCihYRU4pICAgLSBNTUlPOiAwMDYwMDBkMDAw
IC0gMDA2MDAwZTAwMCBQMk1UeXBlPTUNCihYRU4pIGhhbmRsZSAvZ3Bpb0A2MDAwZDAwMC9jYW1l
cmEtY29udHJvbC1vdXRwdXQtbG93DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2dwaW9ANjAw
MGQwMDAvY2FtZXJhLWNvbnRyb2wtb3V0cHV0LWxvdw0KKFhFTikgL2dwaW9ANjAwMGQwMDAvY2Ft
ZXJhLWNvbnRyb2wtb3V0cHV0LWxvdyBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBD
aGVjayBpZiAvZ3Bpb0A2MDAwZDAwMC9jYW1lcmEtY29udHJvbC1vdXRwdXQtbG93IGlzIGJlaGlu
ZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9ncGlvQDYw
MDBkMDAwL2NhbWVyYS1jb250cm9sLW91dHB1dC1sb3cNCihYRU4pIGhhbmRsZSAvZ3Bpb0A2MDAw
ZDAwMC9lMjYxNC1ydDU2NTgtYXVkaW8NCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vZ3Bpb0A2
MDAwZDAwMC9lMjYxNC1ydDU2NTgtYXVkaW8NCihYRU4pIC9ncGlvQDYwMDBkMDAwL2UyNjE0LXJ0
NTY1OC1hdWRpbyBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvZ3Bp
b0A2MDAwZDAwMC9lMjYxNC1ydDU2NTgtYXVkaW8gaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRk
IGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2dwaW9ANjAwMGQwMDAvZTI2MTQtcnQ1NjU4
LWF1ZGlvDQooWEVOKSBoYW5kbGUgL2dwaW9ANjAwMGQwMDAvc3lzdGVtLXN1c3BlbmQtZ3Bpbw0K
KFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9ncGlvQDYwMDBkMDAwL3N5c3RlbS1zdXNwZW5kLWdw
aW8NCihYRU4pIC9ncGlvQDYwMDBkMDAwL3N5c3RlbS1zdXNwZW5kLWdwaW8gcGFzc3Rocm91Z2gg
PSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL2dwaW9ANjAwMGQwMDAvc3lzdGVtLXN1c3Bl
bmQtZ3BpbyBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1i
ZXI6IGRldj0vZ3Bpb0A2MDAwZDAwMC9zeXN0ZW0tc3VzcGVuZC1ncGlvDQooWEVOKSBoYW5kbGUg
L2dwaW9ANjAwMGQwMDAvZGVmYXVsdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9ncGlvQDYw
MDBkMDAwL2RlZmF1bHQNCihYRU4pIC9ncGlvQDYwMDBkMDAwL2RlZmF1bHQgcGFzc3Rocm91Z2gg
PSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL2dwaW9ANjAwMGQwMDAvZGVmYXVsdCBpcyBi
ZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vZ3Bp
b0A2MDAwZDAwMC9kZWZhdWx0DQooWEVOKSBoYW5kbGUgL3hvdGcNCihYRU4pIGR0X2lycV9udW1i
ZXI6IGRldj0veG90Zw0KKFhFTikgIHVzaW5nICdpbnRlcnJ1cHRzJyBwcm9wZXJ0eQ0KKFhFTikg
IGludHNwZWM9MCBpbnRsZW49Ng0KKFhFTikgIGludHNpemU9MyBpbnRsZW49Ng0KKFhFTikgZHRf
ZGV2aWNlX2dldF9yYXdfaXJxOiBkZXY9L3hvdGcsIGluZGV4PTANCihYRU4pICB1c2luZyAnaW50
ZXJydXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTYNCihYRU4pICBpbnRz
aXplPTMgaW50bGVuPTYNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBwYXI9L2ludGVycnVwdC1jb250
cm9sbGVyQDYwMDA0MDAwLGludHNwZWM9WzB4MDAwMDAwMDAgMHgwMDAwMDAzMS4uLl0sb2ludHNp
emU9Mw0KKFhFTikgZHRfaXJxX21hcF9yYXc6IGlwYXI9L2ludGVycnVwdC1jb250cm9sbGVyQDYw
MDA0MDAwLCBzaXplPTMNCihYRU4pICAtPiBhZGRyc2l6ZT0yDQooWEVOKSAgLT4gZ290IGl0ICEN
CihYRU4pIGR0X2RldmljZV9nZXRfcmF3X2lycTogZGV2PS94b3RnLCBpbmRleD0xDQooWEVOKSAg
dXNpbmcgJ2ludGVycnVwdHMnIHByb3BlcnR5DQooWEVOKSAgaW50c3BlYz0wIGludGxlbj02DQoo
WEVOKSAgaW50c2l6ZT0zIGludGxlbj02DQooWEVOKSBkdF9pcnFfbWFwX3JhdzogcGFyPS9pbnRl
cnJ1cHQtY29udHJvbGxlckA2MDAwNDAwMCxpbnRzcGVjPVsweDAwMDAwMDAwIDB4MDAwMDAwMTQu
Li5dLG9pbnRzaXplPTMNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBpcGFyPS9pbnRlcnJ1cHQtY29u
dHJvbGxlckA2MDAwNDAwMCwgc2l6ZT0zDQooWEVOKSAgLT4gYWRkcnNpemU9Mg0KKFhFTikgIC0+
IGdvdCBpdCAhDQooWEVOKSAveG90ZyBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBD
aGVjayBpZiAveG90ZyBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2ly
cV9udW1iZXI6IGRldj0veG90Zw0KKFhFTikgIHVzaW5nICdpbnRlcnJ1cHRzJyBwcm9wZXJ0eQ0K
KFhFTikgIGludHNwZWM9MCBpbnRsZW49Ng0KKFhFTikgIGludHNpemU9MyBpbnRsZW49Ng0KKFhF
TikgZHRfZGV2aWNlX2dldF9yYXdfaXJxOiBkZXY9L3hvdGcsIGluZGV4PTANCihYRU4pICB1c2lu
ZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTYNCihYRU4p
ICBpbnRzaXplPTMgaW50bGVuPTYNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBwYXI9L2ludGVycnVw
dC1jb250cm9sbGVyQDYwMDA0MDAwLGludHNwZWM9WzB4MDAwMDAwMDAgMHgwMDAwMDAzMS4uLl0s
b2ludHNpemU9Mw0KKFhFTikgZHRfaXJxX21hcF9yYXc6IGlwYXI9L2ludGVycnVwdC1jb250cm9s
bGVyQDYwMDA0MDAwLCBzaXplPTMNCihYRU4pICAtPiBhZGRyc2l6ZT0yDQooWEVOKSAgLT4gZ290
IGl0ICENCihYRU4pIGlycSAwIG5vdCAoZGlyZWN0bHkgb3IgaW5kaXJlY3RseSkgY29ubmVjdGVk
IHRvIHByaW1hcnljb250cm9sbGVyLiBDb25uZWN0ZWQgdG8gL2ludGVycnVwdC1jb250cm9sbGVy
QDYwMDA0MDAwDQooWEVOKSBkdF9kZXZpY2VfZ2V0X3Jhd19pcnE6IGRldj0veG90ZywgaW5kZXg9
MQ0KKFhFTikgIHVzaW5nICdpbnRlcnJ1cHRzJyBwcm9wZXJ0eQ0KKFhFTikgIGludHNwZWM9MCBp
bnRsZW49Ng0KKFhFTikgIGludHNpemU9MyBpbnRsZW49Ng0KKFhFTikgZHRfaXJxX21hcF9yYXc6
IHBhcj0vaW50ZXJydXB0LWNvbnRyb2xsZXJANjAwMDQwMDAsaW50c3BlYz1bMHgwMDAwMDAwMCAw
eDAwMDAwMDE0Li4uXSxvaW50c2l6ZT0zDQooWEVOKSBkdF9pcnFfbWFwX3JhdzogaXBhcj0vaW50
ZXJydXB0LWNvbnRyb2xsZXJANjAwMDQwMDAsIHNpemU9Mw0KKFhFTikgIC0+IGFkZHJzaXplPTIN
CihYRU4pICAtPiBnb3QgaXQgIQ0KKFhFTikgaXJxIDEgbm90IChkaXJlY3RseSBvciBpbmRpcmVj
dGx5KSBjb25uZWN0ZWQgdG8gcHJpbWFyeWNvbnRyb2xsZXIuIENvbm5lY3RlZCB0byAvaW50ZXJy
dXB0LWNvbnRyb2xsZXJANjAwMDQwMDANCihYRU4pIGhhbmRsZSAvbWFpbGJveEA3MDA5ODAwMA0K
KFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9tYWlsYm94QDcwMDk4MDAwDQooWEVOKSAgdXNpbmcg
J2ludGVycnVwdHMnIHByb3BlcnR5DQooWEVOKSAgaW50c3BlYz0wIGludGxlbj0zDQooWEVOKSAg
aW50c2l6ZT0zIGludGxlbj0zDQooWEVOKSBkdF9kZXZpY2VfZ2V0X3Jhd19pcnE6IGRldj0vbWFp
bGJveEA3MDA5ODAwMCwgaW5kZXg9MA0KKFhFTikgIHVzaW5nICdpbnRlcnJ1cHRzJyBwcm9wZXJ0
eQ0KKFhFTikgIGludHNwZWM9MCBpbnRsZW49Mw0KKFhFTikgIGludHNpemU9MyBpbnRsZW49Mw0K
KFhFTikgZHRfaXJxX21hcF9yYXc6IHBhcj0vaW50ZXJydXB0LWNvbnRyb2xsZXJANjAwMDQwMDAs
aW50c3BlYz1bMHgwMDAwMDAwMCAweDAwMDAwMDI4Li4uXSxvaW50c2l6ZT0zDQooWEVOKSBkdF9p
cnFfbWFwX3JhdzogaXBhcj0vaW50ZXJydXB0LWNvbnRyb2xsZXJANjAwMDQwMDAsIHNpemU9Mw0K
KFhFTikgIC0+IGFkZHJzaXplPTINCihYRU4pICAtPiBnb3QgaXQgIQ0KKFhFTikgL21haWxib3hA
NzAwOTgwMDAgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMQ0KKFhFTikgQ2hlY2sgaWYgL21haWxi
b3hANzAwOTgwMDAgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFf
bnVtYmVyOiBkZXY9L21haWxib3hANzAwOTgwMDANCihYRU4pICB1c2luZyAnaW50ZXJydXB0cycg
cHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTMNCihYRU4pICBpbnRzaXplPTMgaW50
bGVuPTMNCihYRU4pIGR0X2RldmljZV9nZXRfcmF3X2lycTogZGV2PS9tYWlsYm94QDcwMDk4MDAw
LCBpbmRleD0wDQooWEVOKSAgdXNpbmcgJ2ludGVycnVwdHMnIHByb3BlcnR5DQooWEVOKSAgaW50
c3BlYz0wIGludGxlbj0zDQooWEVOKSAgaW50c2l6ZT0zIGludGxlbj0zDQooWEVOKSBkdF9pcnFf
bWFwX3JhdzogcGFyPS9pbnRlcnJ1cHQtY29udHJvbGxlckA2MDAwNDAwMCxpbnRzcGVjPVsweDAw
MDAwMDAwIDB4MDAwMDAwMjguLi5dLG9pbnRzaXplPTMNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBp
cGFyPS9pbnRlcnJ1cHQtY29udHJvbGxlckA2MDAwNDAwMCwgc2l6ZT0zDQooWEVOKSAgLT4gYWRk
cnNpemU9Mg0KKFhFTikgIC0+IGdvdCBpdCAhDQooWEVOKSBpcnEgMCBub3QgKGRpcmVjdGx5IG9y
IGluZGlyZWN0bHkpIGNvbm5lY3RlZCB0byBwcmltYXJ5Y29udHJvbGxlci4gQ29ubmVjdGVkIHRv
IC9pbnRlcnJ1cHQtY29udHJvbGxlckA2MDAwNDAwMA0KKFhFTikgRFQ6ICoqIHRyYW5zbGF0aW9u
IGZvciBkZXZpY2UgL21haWxib3hANzAwOTgwMDAgKioNCihYRU4pIERUOiBidXMgaXMgZGVmYXVs
dCAobmE9MiwgbnM9Mikgb24gLw0KKFhFTikgRFQ6IHRyYW5zbGF0aW5nIGFkZHJlc3M6PDM+IDAw
MDAwMDAwPDM+IDcwMDk4MDAwPDM+DQooWEVOKSBEVDogcmVhY2hlZCByb290IG5vZGUNCihYRU4p
ICAgLSBNTUlPOiAwMDcwMDk4MDAwIC0gMDA3MDA5OTAwMCBQMk1UeXBlPTUNCihYRU4pIGhhbmRs
ZSAveHVzYl9wYWRjdGxANzAwOWYwMDANCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0veHVzYl9w
YWRjdGxANzAwOWYwMDANCihYRU4pIC94dXNiX3BhZGN0bEA3MDA5ZjAwMCBwYXNzdGhyb3VnaCA9
IDEgbmFkZHIgPSAxDQooWEVOKSBDaGVjayBpZiAveHVzYl9wYWRjdGxANzAwOWYwMDAgaXMgYmVo
aW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3h1c2Jf
cGFkY3RsQDcwMDlmMDAwDQooWEVOKSBEVDogKiogdHJhbnNsYXRpb24gZm9yIGRldmljZSAveHVz
Yl9wYWRjdGxANzAwOWYwMDAgKioNCihYRU4pIERUOiBidXMgaXMgZGVmYXVsdCAobmE9MiwgbnM9
Mikgb24gLw0KKFhFTikgRFQ6IHRyYW5zbGF0aW5nIGFkZHJlc3M6PDM+IDAwMDAwMDAwPDM+IDcw
MDlmMDAwPDM+DQooWEVOKSBEVDogcmVhY2hlZCByb290IG5vZGUNCihYRU4pICAgLSBNTUlPOiAw
MDcwMDlmMDAwIC0gMDA3MDBhMDAwMCBQMk1UeXBlPTUNCihYRU4pIGhhbmRsZSAveHVzYl9wYWRj
dGxANzAwOWYwMDAvcGFkcw0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS94dXNiX3BhZGN0bEA3
MDA5ZjAwMC9wYWRzDQooWEVOKSAveHVzYl9wYWRjdGxANzAwOWYwMDAvcGFkcyBwYXNzdGhyb3Vn
aCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAveHVzYl9wYWRjdGxANzAwOWYwMDAvcGFk
cyBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRl
dj0veHVzYl9wYWRjdGxANzAwOWYwMDAvcGFkcw0KKFhFTikgaGFuZGxlIC94dXNiX3BhZGN0bEA3
MDA5ZjAwMC9wYWRzL3VzYjINCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0veHVzYl9wYWRjdGxA
NzAwOWYwMDAvcGFkcy91c2IyDQooWEVOKSAveHVzYl9wYWRjdGxANzAwOWYwMDAvcGFkcy91c2Iy
IHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC94dXNiX3BhZGN0bEA3
MDA5ZjAwMC9wYWRzL3VzYjIgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBk
dF9pcnFfbnVtYmVyOiBkZXY9L3h1c2JfcGFkY3RsQDcwMDlmMDAwL3BhZHMvdXNiMg0KKFhFTikg
aGFuZGxlIC94dXNiX3BhZGN0bEA3MDA5ZjAwMC9wYWRzL3VzYjIvbGFuZXMNCihYRU4pIGR0X2ly
cV9udW1iZXI6IGRldj0veHVzYl9wYWRjdGxANzAwOWYwMDAvcGFkcy91c2IyL2xhbmVzDQooWEVO
KSAveHVzYl9wYWRjdGxANzAwOWYwMDAvcGFkcy91c2IyL2xhbmVzIHBhc3N0aHJvdWdoID0gMSBu
YWRkciA9IDANCihYRU4pIENoZWNrIGlmIC94dXNiX3BhZGN0bEA3MDA5ZjAwMC9wYWRzL3VzYjIv
bGFuZXMgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVy
OiBkZXY9L3h1c2JfcGFkY3RsQDcwMDlmMDAwL3BhZHMvdXNiMi9sYW5lcw0KKFhFTikgaGFuZGxl
IC94dXNiX3BhZGN0bEA3MDA5ZjAwMC9wYWRzL3VzYjIvbGFuZXMvdXNiMi0wDQooWEVOKSBkdF9p
cnFfbnVtYmVyOiBkZXY9L3h1c2JfcGFkY3RsQDcwMDlmMDAwL3BhZHMvdXNiMi9sYW5lcy91c2Iy
LTANCihYRU4pIC94dXNiX3BhZGN0bEA3MDA5ZjAwMC9wYWRzL3VzYjIvbGFuZXMvdXNiMi0wIHBh
c3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC94dXNiX3BhZGN0bEA3MDA5
ZjAwMC9wYWRzL3VzYjIvbGFuZXMvdXNiMi0wIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBp
dA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS94dXNiX3BhZGN0bEA3MDA5ZjAwMC9wYWRzL3Vz
YjIvbGFuZXMvdXNiMi0wDQooWEVOKSBoYW5kbGUgL3h1c2JfcGFkY3RsQDcwMDlmMDAwL3BhZHMv
dXNiMi9sYW5lcy91c2IyLTENCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0veHVzYl9wYWRjdGxA
NzAwOWYwMDAvcGFkcy91c2IyL2xhbmVzL3VzYjItMQ0KKFhFTikgL3h1c2JfcGFkY3RsQDcwMDlm
MDAwL3BhZHMvdXNiMi9sYW5lcy91c2IyLTEgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhF
TikgQ2hlY2sgaWYgL3h1c2JfcGFkY3RsQDcwMDlmMDAwL3BhZHMvdXNiMi9sYW5lcy91c2IyLTEg
aXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9
L3h1c2JfcGFkY3RsQDcwMDlmMDAwL3BhZHMvdXNiMi9sYW5lcy91c2IyLTENCihYRU4pIGhhbmRs
ZSAveHVzYl9wYWRjdGxANzAwOWYwMDAvcGFkcy91c2IyL2xhbmVzL3VzYjItMg0KKFhFTikgZHRf
aXJxX251bWJlcjogZGV2PS94dXNiX3BhZGN0bEA3MDA5ZjAwMC9wYWRzL3VzYjIvbGFuZXMvdXNi
Mi0yDQooWEVOKSAveHVzYl9wYWRjdGxANzAwOWYwMDAvcGFkcy91c2IyL2xhbmVzL3VzYjItMiBw
YXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAveHVzYl9wYWRjdGxANzAw
OWYwMDAvcGFkcy91c2IyL2xhbmVzL3VzYjItMiBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQg
aXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0veHVzYl9wYWRjdGxANzAwOWYwMDAvcGFkcy91
c2IyL2xhbmVzL3VzYjItMg0KKFhFTikgaGFuZGxlIC94dXNiX3BhZGN0bEA3MDA5ZjAwMC9wYWRz
L3VzYjIvbGFuZXMvdXNiMi0zDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3h1c2JfcGFkY3Rs
QDcwMDlmMDAwL3BhZHMvdXNiMi9sYW5lcy91c2IyLTMNCihYRU4pIC94dXNiX3BhZGN0bEA3MDA5
ZjAwMC9wYWRzL3VzYjIvbGFuZXMvdXNiMi0zIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihY
RU4pIENoZWNrIGlmIC94dXNiX3BhZGN0bEA3MDA5ZjAwMC9wYWRzL3VzYjIvbGFuZXMvdXNiMi0z
IGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2
PS94dXNiX3BhZGN0bEA3MDA5ZjAwMC9wYWRzL3VzYjIvbGFuZXMvdXNiMi0zDQooWEVOKSBoYW5k
bGUgL3h1c2JfcGFkY3RsQDcwMDlmMDAwL3BhZHMvcGNpZQ0KKFhFTikgZHRfaXJxX251bWJlcjog
ZGV2PS94dXNiX3BhZGN0bEA3MDA5ZjAwMC9wYWRzL3BjaWUNCihYRU4pIC94dXNiX3BhZGN0bEA3
MDA5ZjAwMC9wYWRzL3BjaWUgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sg
aWYgL3h1c2JfcGFkY3RsQDcwMDlmMDAwL3BhZHMvcGNpZSBpcyBiZWhpbmQgdGhlIElPTU1VIGFu
ZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0veHVzYl9wYWRjdGxANzAwOWYwMDAv
cGFkcy9wY2llDQooWEVOKSBoYW5kbGUgL3h1c2JfcGFkY3RsQDcwMDlmMDAwL3BhZHMvcGNpZS9s
YW5lcw0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS94dXNiX3BhZGN0bEA3MDA5ZjAwMC9wYWRz
L3BjaWUvbGFuZXMNCihYRU4pIC94dXNiX3BhZGN0bEA3MDA5ZjAwMC9wYWRzL3BjaWUvbGFuZXMg
cGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3h1c2JfcGFkY3RsQDcw
MDlmMDAwL3BhZHMvcGNpZS9sYW5lcyBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihY
RU4pIGR0X2lycV9udW1iZXI6IGRldj0veHVzYl9wYWRjdGxANzAwOWYwMDAvcGFkcy9wY2llL2xh
bmVzDQooWEVOKSBoYW5kbGUgL3h1c2JfcGFkY3RsQDcwMDlmMDAwL3BhZHMvcGNpZS9sYW5lcy9w
Y2llLTANCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0veHVzYl9wYWRjdGxANzAwOWYwMDAvcGFk
cy9wY2llL2xhbmVzL3BjaWUtMA0KKFhFTikgL3h1c2JfcGFkY3RsQDcwMDlmMDAwL3BhZHMvcGNp
ZS9sYW5lcy9wY2llLTAgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYg
L3h1c2JfcGFkY3RsQDcwMDlmMDAwL3BhZHMvcGNpZS9sYW5lcy9wY2llLTAgaXMgYmVoaW5kIHRo
ZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3h1c2JfcGFkY3Rs
QDcwMDlmMDAwL3BhZHMvcGNpZS9sYW5lcy9wY2llLTANCihYRU4pIGhhbmRsZSAveHVzYl9wYWRj
dGxANzAwOWYwMDAvcGFkcy9wY2llL2xhbmVzL3BjaWUtMQ0KKFhFTikgZHRfaXJxX251bWJlcjog
ZGV2PS94dXNiX3BhZGN0bEA3MDA5ZjAwMC9wYWRzL3BjaWUvbGFuZXMvcGNpZS0xDQooWEVOKSAv
eHVzYl9wYWRjdGxANzAwOWYwMDAvcGFkcy9wY2llL2xhbmVzL3BjaWUtMSBwYXNzdGhyb3VnaCA9
IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAveHVzYl9wYWRjdGxANzAwOWYwMDAvcGFkcy9w
Y2llL2xhbmVzL3BjaWUtMSBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0
X2lycV9udW1iZXI6IGRldj0veHVzYl9wYWRjdGxANzAwOWYwMDAvcGFkcy9wY2llL2xhbmVzL3Bj
aWUtMQ0KKFhFTikgaGFuZGxlIC94dXNiX3BhZGN0bEA3MDA5ZjAwMC9wYWRzL3BjaWUvbGFuZXMv
cGNpZS0yDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3h1c2JfcGFkY3RsQDcwMDlmMDAwL3Bh
ZHMvcGNpZS9sYW5lcy9wY2llLTINCihYRU4pIC94dXNiX3BhZGN0bEA3MDA5ZjAwMC9wYWRzL3Bj
aWUvbGFuZXMvcGNpZS0yIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlm
IC94dXNiX3BhZGN0bEA3MDA5ZjAwMC9wYWRzL3BjaWUvbGFuZXMvcGNpZS0yIGlzIGJlaGluZCB0
aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS94dXNiX3BhZGN0
bEA3MDA5ZjAwMC9wYWRzL3BjaWUvbGFuZXMvcGNpZS0yDQooWEVOKSBoYW5kbGUgL3h1c2JfcGFk
Y3RsQDcwMDlmMDAwL3BhZHMvcGNpZS9sYW5lcy9wY2llLTMNCihYRU4pIGR0X2lycV9udW1iZXI6
IGRldj0veHVzYl9wYWRjdGxANzAwOWYwMDAvcGFkcy9wY2llL2xhbmVzL3BjaWUtMw0KKFhFTikg
L3h1c2JfcGFkY3RsQDcwMDlmMDAwL3BhZHMvcGNpZS9sYW5lcy9wY2llLTMgcGFzc3Rocm91Z2gg
PSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3h1c2JfcGFkY3RsQDcwMDlmMDAwL3BhZHMv
cGNpZS9sYW5lcy9wY2llLTMgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBk
dF9pcnFfbnVtYmVyOiBkZXY9L3h1c2JfcGFkY3RsQDcwMDlmMDAwL3BhZHMvcGNpZS9sYW5lcy9w
Y2llLTMNCihYRU4pIGhhbmRsZSAveHVzYl9wYWRjdGxANzAwOWYwMDAvcGFkcy9wY2llL2xhbmVz
L3BjaWUtNA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS94dXNiX3BhZGN0bEA3MDA5ZjAwMC9w
YWRzL3BjaWUvbGFuZXMvcGNpZS00DQooWEVOKSAveHVzYl9wYWRjdGxANzAwOWYwMDAvcGFkcy9w
Y2llL2xhbmVzL3BjaWUtNCBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBp
ZiAveHVzYl9wYWRjdGxANzAwOWYwMDAvcGFkcy9wY2llL2xhbmVzL3BjaWUtNCBpcyBiZWhpbmQg
dGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0veHVzYl9wYWRj
dGxANzAwOWYwMDAvcGFkcy9wY2llL2xhbmVzL3BjaWUtNA0KKFhFTikgaGFuZGxlIC94dXNiX3Bh
ZGN0bEA3MDA5ZjAwMC9wYWRzL3BjaWUvbGFuZXMvcGNpZS01DQooWEVOKSBkdF9pcnFfbnVtYmVy
OiBkZXY9L3h1c2JfcGFkY3RsQDcwMDlmMDAwL3BhZHMvcGNpZS9sYW5lcy9wY2llLTUNCihYRU4p
IC94dXNiX3BhZGN0bEA3MDA5ZjAwMC9wYWRzL3BjaWUvbGFuZXMvcGNpZS01IHBhc3N0aHJvdWdo
ID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC94dXNiX3BhZGN0bEA3MDA5ZjAwMC9wYWRz
L3BjaWUvbGFuZXMvcGNpZS01IGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikg
ZHRfaXJxX251bWJlcjogZGV2PS94dXNiX3BhZGN0bEA3MDA5ZjAwMC9wYWRzL3BjaWUvbGFuZXMv
cGNpZS01DQooWEVOKSBoYW5kbGUgL3h1c2JfcGFkY3RsQDcwMDlmMDAwL3BhZHMvcGNpZS9sYW5l
cy9wY2llLTYNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0veHVzYl9wYWRjdGxANzAwOWYwMDAv
cGFkcy9wY2llL2xhbmVzL3BjaWUtNg0KKFhFTikgL3h1c2JfcGFkY3RsQDcwMDlmMDAwL3BhZHMv
cGNpZS9sYW5lcy9wY2llLTYgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sg
aWYgL3h1c2JfcGFkY3RsQDcwMDlmMDAwL3BhZHMvcGNpZS9sYW5lcy9wY2llLTYgaXMgYmVoaW5k
IHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3h1c2JfcGFk
Y3RsQDcwMDlmMDAwL3BhZHMvcGNpZS9sYW5lcy9wY2llLTYNCihYRU4pIGhhbmRsZSAveHVzYl9w
YWRjdGxANzAwOWYwMDAvcGFkcy9zYXRhDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3h1c2Jf
cGFkY3RsQDcwMDlmMDAwL3BhZHMvc2F0YQ0KKFhFTikgL3h1c2JfcGFkY3RsQDcwMDlmMDAwL3Bh
ZHMvc2F0YSBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAveHVzYl9w
YWRjdGxANzAwOWYwMDAvcGFkcy9zYXRhIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0K
KFhFTikgZHRfaXJxX251bWJlcjogZGV2PS94dXNiX3BhZGN0bEA3MDA5ZjAwMC9wYWRzL3NhdGEN
CihYRU4pIGhhbmRsZSAveHVzYl9wYWRjdGxANzAwOWYwMDAvcGFkcy9zYXRhL2xhbmVzDQooWEVO
KSBkdF9pcnFfbnVtYmVyOiBkZXY9L3h1c2JfcGFkY3RsQDcwMDlmMDAwL3BhZHMvc2F0YS9sYW5l
cw0KKFhFTikgL3h1c2JfcGFkY3RsQDcwMDlmMDAwL3BhZHMvc2F0YS9sYW5lcyBwYXNzdGhyb3Vn
aCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAveHVzYl9wYWRjdGxANzAwOWYwMDAvcGFk
cy9zYXRhL2xhbmVzIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJx
X251bWJlcjogZGV2PS94dXNiX3BhZGN0bEA3MDA5ZjAwMC9wYWRzL3NhdGEvbGFuZXMNCihYRU4p
IGhhbmRsZSAveHVzYl9wYWRjdGxANzAwOWYwMDAvcGFkcy9zYXRhL2xhbmVzL3NhdGEtMA0KKFhF
TikgZHRfaXJxX251bWJlcjogZGV2PS94dXNiX3BhZGN0bEA3MDA5ZjAwMC9wYWRzL3NhdGEvbGFu
ZXMvc2F0YS0wDQooWEVOKSAveHVzYl9wYWRjdGxANzAwOWYwMDAvcGFkcy9zYXRhL2xhbmVzL3Nh
dGEtMCBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAveHVzYl9wYWRj
dGxANzAwOWYwMDAvcGFkcy9zYXRhL2xhbmVzL3NhdGEtMCBpcyBiZWhpbmQgdGhlIElPTU1VIGFu
ZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0veHVzYl9wYWRjdGxANzAwOWYwMDAv
cGFkcy9zYXRhL2xhbmVzL3NhdGEtMA0KKFhFTikgaGFuZGxlIC94dXNiX3BhZGN0bEA3MDA5ZjAw
MC9wYWRzL2hzaWMNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0veHVzYl9wYWRjdGxANzAwOWYw
MDAvcGFkcy9oc2ljDQooWEVOKSAveHVzYl9wYWRjdGxANzAwOWYwMDAvcGFkcy9oc2ljIHBhc3N0
aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC94dXNiX3BhZGN0bEA3MDA5ZjAw
MC9wYWRzL2hzaWMgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFf
bnVtYmVyOiBkZXY9L3h1c2JfcGFkY3RsQDcwMDlmMDAwL3BhZHMvaHNpYw0KKFhFTikgaGFuZGxl
IC94dXNiX3BhZGN0bEA3MDA5ZjAwMC9wYWRzL2hzaWMvbGFuZXMNCihYRU4pIGR0X2lycV9udW1i
ZXI6IGRldj0veHVzYl9wYWRjdGxANzAwOWYwMDAvcGFkcy9oc2ljL2xhbmVzDQooWEVOKSAveHVz
Yl9wYWRjdGxANzAwOWYwMDAvcGFkcy9oc2ljL2xhbmVzIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9
IDANCihYRU4pIENoZWNrIGlmIC94dXNiX3BhZGN0bEA3MDA5ZjAwMC9wYWRzL2hzaWMvbGFuZXMg
aXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9
L3h1c2JfcGFkY3RsQDcwMDlmMDAwL3BhZHMvaHNpYy9sYW5lcw0KKFhFTikgaGFuZGxlIC94dXNi
X3BhZGN0bEA3MDA5ZjAwMC9wYWRzL2hzaWMvbGFuZXMvaHNpYy0wDQooWEVOKSBkdF9pcnFfbnVt
YmVyOiBkZXY9L3h1c2JfcGFkY3RsQDcwMDlmMDAwL3BhZHMvaHNpYy9sYW5lcy9oc2ljLTANCihY
RU4pIC94dXNiX3BhZGN0bEA3MDA5ZjAwMC9wYWRzL2hzaWMvbGFuZXMvaHNpYy0wIHBhc3N0aHJv
dWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC94dXNiX3BhZGN0bEA3MDA5ZjAwMC9w
YWRzL2hzaWMvbGFuZXMvaHNpYy0wIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhF
TikgZHRfaXJxX251bWJlcjogZGV2PS94dXNiX3BhZGN0bEA3MDA5ZjAwMC9wYWRzL2hzaWMvbGFu
ZXMvaHNpYy0wDQooWEVOKSBoYW5kbGUgL3h1c2JfcGFkY3RsQDcwMDlmMDAwL3BvcnRzDQooWEVO
KSBkdF9pcnFfbnVtYmVyOiBkZXY9L3h1c2JfcGFkY3RsQDcwMDlmMDAwL3BvcnRzDQooWEVOKSAv
eHVzYl9wYWRjdGxANzAwOWYwMDAvcG9ydHMgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhF
TikgQ2hlY2sgaWYgL3h1c2JfcGFkY3RsQDcwMDlmMDAwL3BvcnRzIGlzIGJlaGluZCB0aGUgSU9N
TVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS94dXNiX3BhZGN0bEA3MDA5
ZjAwMC9wb3J0cw0KKFhFTikgaGFuZGxlIC94dXNiX3BhZGN0bEA3MDA5ZjAwMC9wb3J0cy91c2Iy
LTANCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0veHVzYl9wYWRjdGxANzAwOWYwMDAvcG9ydHMv
dXNiMi0wDQooWEVOKSAveHVzYl9wYWRjdGxANzAwOWYwMDAvcG9ydHMvdXNiMi0wIHBhc3N0aHJv
dWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC94dXNiX3BhZGN0bEA3MDA5ZjAwMC9w
b3J0cy91c2IyLTAgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFf
bnVtYmVyOiBkZXY9L3h1c2JfcGFkY3RsQDcwMDlmMDAwL3BvcnRzL3VzYjItMA0KKFhFTikgaGFu
ZGxlIC94dXNiX3BhZGN0bEA3MDA5ZjAwMC9wb3J0cy91c2IyLTENCihYRU4pIGR0X2lycV9udW1i
ZXI6IGRldj0veHVzYl9wYWRjdGxANzAwOWYwMDAvcG9ydHMvdXNiMi0xDQooWEVOKSAveHVzYl9w
YWRjdGxANzAwOWYwMDAvcG9ydHMvdXNiMi0xIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihY
RU4pIENoZWNrIGlmIC94dXNiX3BhZGN0bEA3MDA5ZjAwMC9wb3J0cy91c2IyLTEgaXMgYmVoaW5k
IHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3h1c2JfcGFk
Y3RsQDcwMDlmMDAwL3BvcnRzL3VzYjItMQ0KKFhFTikgaGFuZGxlIC94dXNiX3BhZGN0bEA3MDA5
ZjAwMC9wb3J0cy91c2IyLTINCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0veHVzYl9wYWRjdGxA
NzAwOWYwMDAvcG9ydHMvdXNiMi0yDQooWEVOKSAveHVzYl9wYWRjdGxANzAwOWYwMDAvcG9ydHMv
dXNiMi0yIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC94dXNiX3Bh
ZGN0bEA3MDA5ZjAwMC9wb3J0cy91c2IyLTIgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0
DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3h1c2JfcGFkY3RsQDcwMDlmMDAwL3BvcnRzL3Vz
YjItMg0KKFhFTikgaGFuZGxlIC94dXNiX3BhZGN0bEA3MDA5ZjAwMC9wb3J0cy91c2IyLTMNCihY
RU4pIGR0X2lycV9udW1iZXI6IGRldj0veHVzYl9wYWRjdGxANzAwOWYwMDAvcG9ydHMvdXNiMi0z
DQooWEVOKSAveHVzYl9wYWRjdGxANzAwOWYwMDAvcG9ydHMvdXNiMi0zIHBhc3N0aHJvdWdoID0g
MSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC94dXNiX3BhZGN0bEA3MDA5ZjAwMC9wb3J0cy91
c2IyLTMgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVy
OiBkZXY9L3h1c2JfcGFkY3RsQDcwMDlmMDAwL3BvcnRzL3VzYjItMw0KKFhFTikgaGFuZGxlIC94
dXNiX3BhZGN0bEA3MDA5ZjAwMC9wb3J0cy91c2IzLTANCihYRU4pIGR0X2lycV9udW1iZXI6IGRl
dj0veHVzYl9wYWRjdGxANzAwOWYwMDAvcG9ydHMvdXNiMy0wDQooWEVOKSAveHVzYl9wYWRjdGxA
NzAwOWYwMDAvcG9ydHMvdXNiMy0wIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENo
ZWNrIGlmIC94dXNiX3BhZGN0bEA3MDA5ZjAwMC9wb3J0cy91c2IzLTAgaXMgYmVoaW5kIHRoZSBJ
T01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3h1c2JfcGFkY3RsQDcw
MDlmMDAwL3BvcnRzL3VzYjMtMA0KKFhFTikgaGFuZGxlIC94dXNiX3BhZGN0bEA3MDA5ZjAwMC9w
b3J0cy91c2IzLTENCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0veHVzYl9wYWRjdGxANzAwOWYw
MDAvcG9ydHMvdXNiMy0xDQooWEVOKSAveHVzYl9wYWRjdGxANzAwOWYwMDAvcG9ydHMvdXNiMy0x
IHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC94dXNiX3BhZGN0bEA3
MDA5ZjAwMC9wb3J0cy91c2IzLTEgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVO
KSBkdF9pcnFfbnVtYmVyOiBkZXY9L3h1c2JfcGFkY3RsQDcwMDlmMDAwL3BvcnRzL3VzYjMtMQ0K
KFhFTikgaGFuZGxlIC94dXNiX3BhZGN0bEA3MDA5ZjAwMC9wb3J0cy91c2IzLTINCihYRU4pIGR0
X2lycV9udW1iZXI6IGRldj0veHVzYl9wYWRjdGxANzAwOWYwMDAvcG9ydHMvdXNiMy0yDQooWEVO
KSAveHVzYl9wYWRjdGxANzAwOWYwMDAvcG9ydHMvdXNiMy0yIHBhc3N0aHJvdWdoID0gMSBuYWRk
ciA9IDANCihYRU4pIENoZWNrIGlmIC94dXNiX3BhZGN0bEA3MDA5ZjAwMC9wb3J0cy91c2IzLTIg
aXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9
L3h1c2JfcGFkY3RsQDcwMDlmMDAwL3BvcnRzL3VzYjMtMg0KKFhFTikgaGFuZGxlIC94dXNiX3Bh
ZGN0bEA3MDA5ZjAwMC9wb3J0cy91c2IzLTMNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0veHVz
Yl9wYWRjdGxANzAwOWYwMDAvcG9ydHMvdXNiMy0zDQooWEVOKSAveHVzYl9wYWRjdGxANzAwOWYw
MDAvcG9ydHMvdXNiMy0zIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlm
IC94dXNiX3BhZGN0bEA3MDA5ZjAwMC9wb3J0cy91c2IzLTMgaXMgYmVoaW5kIHRoZSBJT01NVSBh
bmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3h1c2JfcGFkY3RsQDcwMDlmMDAw
L3BvcnRzL3VzYjMtMw0KKFhFTikgaGFuZGxlIC94dXNiX3BhZGN0bEA3MDA5ZjAwMC9wb3J0cy9o
c2ljLTANCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0veHVzYl9wYWRjdGxANzAwOWYwMDAvcG9y
dHMvaHNpYy0wDQooWEVOKSAveHVzYl9wYWRjdGxANzAwOWYwMDAvcG9ydHMvaHNpYy0wIHBhc3N0
aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC94dXNiX3BhZGN0bEA3MDA5ZjAw
MC9wb3J0cy9oc2ljLTAgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9p
cnFfbnVtYmVyOiBkZXY9L3h1c2JfcGFkY3RsQDcwMDlmMDAwL3BvcnRzL2hzaWMtMA0KKFhFTikg
aGFuZGxlIC94dXNiX3BhZGN0bEA3MDA5ZjAwMC9wcm9kLXNldHRpbmdzDQooWEVOKSBkdF9pcnFf
bnVtYmVyOiBkZXY9L3h1c2JfcGFkY3RsQDcwMDlmMDAwL3Byb2Qtc2V0dGluZ3MNCihYRU4pIC94
dXNiX3BhZGN0bEA3MDA5ZjAwMC9wcm9kLXNldHRpbmdzIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9
IDANCihYRU4pIENoZWNrIGlmIC94dXNiX3BhZGN0bEA3MDA5ZjAwMC9wcm9kLXNldHRpbmdzIGlz
IGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS94
dXNiX3BhZGN0bEA3MDA5ZjAwMC9wcm9kLXNldHRpbmdzDQooWEVOKSBoYW5kbGUgL3h1c2JfcGFk
Y3RsQDcwMDlmMDAwL3Byb2Qtc2V0dGluZ3MvcHJvZF9jX2JpYXMNCihYRU4pIGR0X2lycV9udW1i
ZXI6IGRldj0veHVzYl9wYWRjdGxANzAwOWYwMDAvcHJvZC1zZXR0aW5ncy9wcm9kX2NfYmlhcw0K
KFhFTikgL3h1c2JfcGFkY3RsQDcwMDlmMDAwL3Byb2Qtc2V0dGluZ3MvcHJvZF9jX2JpYXMgcGFz
c3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3h1c2JfcGFkY3RsQDcwMDlm
MDAwL3Byb2Qtc2V0dGluZ3MvcHJvZF9jX2JpYXMgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRk
IGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3h1c2JfcGFkY3RsQDcwMDlmMDAwL3Byb2Qt
c2V0dGluZ3MvcHJvZF9jX2JpYXMNCihYRU4pIGhhbmRsZSAveHVzYl9wYWRjdGxANzAwOWYwMDAv
cHJvZC1zZXR0aW5ncy9wcm9kX2NfYmlhc19hMDINCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0v
eHVzYl9wYWRjdGxANzAwOWYwMDAvcHJvZC1zZXR0aW5ncy9wcm9kX2NfYmlhc19hMDINCihYRU4p
IC94dXNiX3BhZGN0bEA3MDA5ZjAwMC9wcm9kLXNldHRpbmdzL3Byb2RfY19iaWFzX2EwMiBwYXNz
dGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAveHVzYl9wYWRjdGxANzAwOWYw
MDAvcHJvZC1zZXR0aW5ncy9wcm9kX2NfYmlhc19hMDIgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQg
YWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3h1c2JfcGFkY3RsQDcwMDlmMDAwL3By
b2Qtc2V0dGluZ3MvcHJvZF9jX2JpYXNfYTAyDQooWEVOKSBoYW5kbGUgL3h1c2JfcGFkY3RsQDcw
MDlmMDAwL3Byb2Qtc2V0dGluZ3MvcHJvZF9jX3V0bWkwDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBk
ZXY9L3h1c2JfcGFkY3RsQDcwMDlmMDAwL3Byb2Qtc2V0dGluZ3MvcHJvZF9jX3V0bWkwDQooWEVO
KSAveHVzYl9wYWRjdGxANzAwOWYwMDAvcHJvZC1zZXR0aW5ncy9wcm9kX2NfdXRtaTAgcGFzc3Ro
cm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3h1c2JfcGFkY3RsQDcwMDlmMDAw
L3Byb2Qtc2V0dGluZ3MvcHJvZF9jX3V0bWkwIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBp
dA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS94dXNiX3BhZGN0bEA3MDA5ZjAwMC9wcm9kLXNl
dHRpbmdzL3Byb2RfY191dG1pMA0KKFhFTikgaGFuZGxlIC94dXNiX3BhZGN0bEA3MDA5ZjAwMC9w
cm9kLXNldHRpbmdzL3Byb2RfY191dG1pMQ0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS94dXNi
X3BhZGN0bEA3MDA5ZjAwMC9wcm9kLXNldHRpbmdzL3Byb2RfY191dG1pMQ0KKFhFTikgL3h1c2Jf
cGFkY3RsQDcwMDlmMDAwL3Byb2Qtc2V0dGluZ3MvcHJvZF9jX3V0bWkxIHBhc3N0aHJvdWdoID0g
MSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC94dXNiX3BhZGN0bEA3MDA5ZjAwMC9wcm9kLXNl
dHRpbmdzL3Byb2RfY191dG1pMSBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4p
IGR0X2lycV9udW1iZXI6IGRldj0veHVzYl9wYWRjdGxANzAwOWYwMDAvcHJvZC1zZXR0aW5ncy9w
cm9kX2NfdXRtaTENCihYRU4pIGhhbmRsZSAveHVzYl9wYWRjdGxANzAwOWYwMDAvcHJvZC1zZXR0
aW5ncy9wcm9kX2NfdXRtaTINCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0veHVzYl9wYWRjdGxA
NzAwOWYwMDAvcHJvZC1zZXR0aW5ncy9wcm9kX2NfdXRtaTINCihYRU4pIC94dXNiX3BhZGN0bEA3
MDA5ZjAwMC9wcm9kLXNldHRpbmdzL3Byb2RfY191dG1pMiBwYXNzdGhyb3VnaCA9IDEgbmFkZHIg
PSAwDQooWEVOKSBDaGVjayBpZiAveHVzYl9wYWRjdGxANzAwOWYwMDAvcHJvZC1zZXR0aW5ncy9w
cm9kX2NfdXRtaTIgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFf
bnVtYmVyOiBkZXY9L3h1c2JfcGFkY3RsQDcwMDlmMDAwL3Byb2Qtc2V0dGluZ3MvcHJvZF9jX3V0
bWkyDQooWEVOKSBoYW5kbGUgL3h1c2JfcGFkY3RsQDcwMDlmMDAwL3Byb2Qtc2V0dGluZ3MvcHJv
ZF9jX3V0bWkzDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3h1c2JfcGFkY3RsQDcwMDlmMDAw
L3Byb2Qtc2V0dGluZ3MvcHJvZF9jX3V0bWkzDQooWEVOKSAveHVzYl9wYWRjdGxANzAwOWYwMDAv
cHJvZC1zZXR0aW5ncy9wcm9kX2NfdXRtaTMgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhF
TikgQ2hlY2sgaWYgL3h1c2JfcGFkY3RsQDcwMDlmMDAwL3Byb2Qtc2V0dGluZ3MvcHJvZF9jX3V0
bWkzIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjog
ZGV2PS94dXNiX3BhZGN0bEA3MDA5ZjAwMC9wcm9kLXNldHRpbmdzL3Byb2RfY191dG1pMw0KKFhF
TikgaGFuZGxlIC94dXNiX3BhZGN0bEA3MDA5ZjAwMC9wcm9kLXNldHRpbmdzL3Byb2RfY19zczAN
CihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0veHVzYl9wYWRjdGxANzAwOWYwMDAvcHJvZC1zZXR0
aW5ncy9wcm9kX2Nfc3MwDQooWEVOKSAveHVzYl9wYWRjdGxANzAwOWYwMDAvcHJvZC1zZXR0aW5n
cy9wcm9kX2Nfc3MwIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC94
dXNiX3BhZGN0bEA3MDA5ZjAwMC9wcm9kLXNldHRpbmdzL3Byb2RfY19zczAgaXMgYmVoaW5kIHRo
ZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3h1c2JfcGFkY3Rs
QDcwMDlmMDAwL3Byb2Qtc2V0dGluZ3MvcHJvZF9jX3NzMA0KKFhFTikgaGFuZGxlIC94dXNiX3Bh
ZGN0bEA3MDA5ZjAwMC9wcm9kLXNldHRpbmdzL3Byb2RfY19zczENCihYRU4pIGR0X2lycV9udW1i
ZXI6IGRldj0veHVzYl9wYWRjdGxANzAwOWYwMDAvcHJvZC1zZXR0aW5ncy9wcm9kX2Nfc3MxDQoo
WEVOKSAveHVzYl9wYWRjdGxANzAwOWYwMDAvcHJvZC1zZXR0aW5ncy9wcm9kX2Nfc3MxIHBhc3N0
aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC94dXNiX3BhZGN0bEA3MDA5ZjAw
MC9wcm9kLXNldHRpbmdzL3Byb2RfY19zczEgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0
DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3h1c2JfcGFkY3RsQDcwMDlmMDAwL3Byb2Qtc2V0
dGluZ3MvcHJvZF9jX3NzMQ0KKFhFTikgaGFuZGxlIC94dXNiX3BhZGN0bEA3MDA5ZjAwMC9wcm9k
LXNldHRpbmdzL3Byb2RfY19zczINCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0veHVzYl9wYWRj
dGxANzAwOWYwMDAvcHJvZC1zZXR0aW5ncy9wcm9kX2Nfc3MyDQooWEVOKSAveHVzYl9wYWRjdGxA
NzAwOWYwMDAvcHJvZC1zZXR0aW5ncy9wcm9kX2Nfc3MyIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9
IDANCihYRU4pIENoZWNrIGlmIC94dXNiX3BhZGN0bEA3MDA5ZjAwMC9wcm9kLXNldHRpbmdzL3By
b2RfY19zczIgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVt
YmVyOiBkZXY9L3h1c2JfcGFkY3RsQDcwMDlmMDAwL3Byb2Qtc2V0dGluZ3MvcHJvZF9jX3NzMg0K
KFhFTikgaGFuZGxlIC94dXNiX3BhZGN0bEA3MDA5ZjAwMC9wcm9kLXNldHRpbmdzL3Byb2RfY19z
czMNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0veHVzYl9wYWRjdGxANzAwOWYwMDAvcHJvZC1z
ZXR0aW5ncy9wcm9kX2Nfc3MzDQooWEVOKSAveHVzYl9wYWRjdGxANzAwOWYwMDAvcHJvZC1zZXR0
aW5ncy9wcm9kX2Nfc3MzIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlm
IC94dXNiX3BhZGN0bEA3MDA5ZjAwMC9wcm9kLXNldHRpbmdzL3Byb2RfY19zczMgaXMgYmVoaW5k
IHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3h1c2JfcGFk
Y3RsQDcwMDlmMDAwL3Byb2Qtc2V0dGluZ3MvcHJvZF9jX3NzMw0KKFhFTikgaGFuZGxlIC94dXNi
X3BhZGN0bEA3MDA5ZjAwMC9wcm9kLXNldHRpbmdzL3Byb2RfY19oc2ljMA0KKFhFTikgZHRfaXJx
X251bWJlcjogZGV2PS94dXNiX3BhZGN0bEA3MDA5ZjAwMC9wcm9kLXNldHRpbmdzL3Byb2RfY19o
c2ljMA0KKFhFTikgL3h1c2JfcGFkY3RsQDcwMDlmMDAwL3Byb2Qtc2V0dGluZ3MvcHJvZF9jX2hz
aWMwIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC94dXNiX3BhZGN0
bEA3MDA5ZjAwMC9wcm9kLXNldHRpbmdzL3Byb2RfY19oc2ljMCBpcyBiZWhpbmQgdGhlIElPTU1V
IGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0veHVzYl9wYWRjdGxANzAwOWYw
MDAvcHJvZC1zZXR0aW5ncy9wcm9kX2NfaHNpYzANCihYRU4pIGhhbmRsZSAveHVzYl9wYWRjdGxA
NzAwOWYwMDAvcHJvZC1zZXR0aW5ncy9wcm9kX2NfaHNpYzENCihYRU4pIGR0X2lycV9udW1iZXI6
IGRldj0veHVzYl9wYWRjdGxANzAwOWYwMDAvcHJvZC1zZXR0aW5ncy9wcm9kX2NfaHNpYzENCihY
RU4pIC94dXNiX3BhZGN0bEA3MDA5ZjAwMC9wcm9kLXNldHRpbmdzL3Byb2RfY19oc2ljMSBwYXNz
dGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAveHVzYl9wYWRjdGxANzAwOWYw
MDAvcHJvZC1zZXR0aW5ncy9wcm9kX2NfaHNpYzEgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRk
IGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3h1c2JfcGFkY3RsQDcwMDlmMDAwL3Byb2Qt
c2V0dGluZ3MvcHJvZF9jX2hzaWMxDQooWEVOKSBoYW5kbGUgL3VzYl9jZA0KKFhFTikgZHRfaXJx
X251bWJlcjogZGV2PS91c2JfY2QNCihYRU4pIC91c2JfY2QgcGFzc3Rocm91Z2ggPSAxIG5hZGRy
ID0gMQ0KKFhFTikgQ2hlY2sgaWYgL3VzYl9jZCBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQg
aXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vdXNiX2NkDQooWEVOKSBEVDogKiogdHJhbnNs
YXRpb24gZm9yIGRldmljZSAvdXNiX2NkICoqDQooWEVOKSBEVDogYnVzIGlzIGRlZmF1bHQgKG5h
PTIsIG5zPTIpIG9uIC8NCihYRU4pIERUOiB0cmFuc2xhdGluZyBhZGRyZXNzOjwzPiAwMDAwMDAw
MDwzPiA3MDA5ZjAwMDwzPg0KKFhFTikgRFQ6IHJlYWNoZWQgcm9vdCBub2RlDQooWEVOKSAgIC0g
TU1JTzogMDA3MDA5ZjAwMCAtIDAwNzAwYTAwMDAgUDJNVHlwZT01DQooWEVOKSBoYW5kbGUgL3Bp
bmN0cmxANzAwOWYwMDANCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGluY3RybEA3MDA5ZjAw
MA0KKFhFTikgIHVzaW5nICdpbnRlcnJ1cHRzJyBwcm9wZXJ0eQ0KKFhFTikgIGludHNwZWM9MCBp
bnRsZW49Mw0KKFhFTikgIGludHNpemU9MyBpbnRsZW49Mw0KKFhFTikgZHRfZGV2aWNlX2dldF9y
YXdfaXJxOiBkZXY9L3BpbmN0cmxANzAwOWYwMDAsIGluZGV4PTANCihYRU4pICB1c2luZyAnaW50
ZXJydXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTMNCihYRU4pICBpbnRz
aXplPTMgaW50bGVuPTMNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBwYXI9L2ludGVycnVwdC1jb250
cm9sbGVyQDYwMDA0MDAwLGludHNwZWM9WzB4MDAwMDAwMDAgMHgwMDAwMDAzMS4uLl0sb2ludHNp
emU9Mw0KKFhFTikgZHRfaXJxX21hcF9yYXc6IGlwYXI9L2ludGVycnVwdC1jb250cm9sbGVyQDYw
MDA0MDAwLCBzaXplPTMNCihYRU4pICAtPiBhZGRyc2l6ZT0yDQooWEVOKSAgLT4gZ290IGl0ICEN
CihYRU4pIC9waW5jdHJsQDcwMDlmMDAwIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDENCihYRU4p
IENoZWNrIGlmIC9waW5jdHJsQDcwMDlmMDAwIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBp
dA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9waW5jdHJsQDcwMDlmMDAwDQooWEVOKSAgdXNp
bmcgJ2ludGVycnVwdHMnIHByb3BlcnR5DQooWEVOKSAgaW50c3BlYz0wIGludGxlbj0zDQooWEVO
KSAgaW50c2l6ZT0zIGludGxlbj0zDQooWEVOKSBkdF9kZXZpY2VfZ2V0X3Jhd19pcnE6IGRldj0v
cGluY3RybEA3MDA5ZjAwMCwgaW5kZXg9MA0KKFhFTikgIHVzaW5nICdpbnRlcnJ1cHRzJyBwcm9w
ZXJ0eQ0KKFhFTikgIGludHNwZWM9MCBpbnRsZW49Mw0KKFhFTikgIGludHNpemU9MyBpbnRsZW49
Mw0KKFhFTikgZHRfaXJxX21hcF9yYXc6IHBhcj0vaW50ZXJydXB0LWNvbnRyb2xsZXJANjAwMDQw
MDAsaW50c3BlYz1bMHgwMDAwMDAwMCAweDAwMDAwMDMxLi4uXSxvaW50c2l6ZT0zDQooWEVOKSBk
dF9pcnFfbWFwX3JhdzogaXBhcj0vaW50ZXJydXB0LWNvbnRyb2xsZXJANjAwMDQwMDAsIHNpemU9
Mw0KKFhFTikgIC0+IGFkZHJzaXplPTINCihYRU4pICAtPiBnb3QgaXQgIQ0KKFhFTikgaXJxIDAg
bm90IChkaXJlY3RseSBvciBpbmRpcmVjdGx5KSBjb25uZWN0ZWQgdG8gcHJpbWFyeWNvbnRyb2xs
ZXIuIENvbm5lY3RlZCB0byAvaW50ZXJydXB0LWNvbnRyb2xsZXJANjAwMDQwMDANCihYRU4pIERU
OiAqKiB0cmFuc2xhdGlvbiBmb3IgZGV2aWNlIC9waW5jdHJsQDcwMDlmMDAwICoqDQooWEVOKSBE
VDogYnVzIGlzIGRlZmF1bHQgKG5hPTIsIG5zPTIpIG9uIC8NCihYRU4pIERUOiB0cmFuc2xhdGlu
ZyBhZGRyZXNzOjwzPiAwMDAwMDAwMDwzPiA3MDA5ZjAwMDwzPg0KKFhFTikgRFQ6IHJlYWNoZWQg
cm9vdCBub2RlDQooWEVOKSAgIC0gTU1JTzogMDA3MDA5ZjAwMCAtIDAwNzAwYTAwMDAgUDJNVHlw
ZT01DQooWEVOKSBoYW5kbGUgL3h1c2JANzAwOTAwMDANCihYRU4pIGR0X2lycV9udW1iZXI6IGRl
dj0veHVzYkA3MDA5MDAwMA0KKFhFTikgIHVzaW5nICdpbnRlcnJ1cHRzJyBwcm9wZXJ0eQ0KKFhF
TikgIGludHNwZWM9MCBpbnRsZW49OQ0KKFhFTikgIGludHNpemU9MyBpbnRsZW49OQ0KKFhFTikg
ZHRfZGV2aWNlX2dldF9yYXdfaXJxOiBkZXY9L3h1c2JANzAwOTAwMDAsIGluZGV4PTANCihYRU4p
ICB1c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTkN
CihYRU4pICBpbnRzaXplPTMgaW50bGVuPTkNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBwYXI9L2lu
dGVycnVwdC1jb250cm9sbGVyQDYwMDA0MDAwLGludHNwZWM9WzB4MDAwMDAwMDAgMHgwMDAwMDAy
Ny4uLl0sb2ludHNpemU9Mw0KKFhFTikgZHRfaXJxX21hcF9yYXc6IGlwYXI9L2ludGVycnVwdC1j
b250cm9sbGVyQDYwMDA0MDAwLCBzaXplPTMNCihYRU4pICAtPiBhZGRyc2l6ZT0yDQooWEVOKSAg
LT4gZ290IGl0ICENCihYRU4pIGR0X2RldmljZV9nZXRfcmF3X2lycTogZGV2PS94dXNiQDcwMDkw
MDAwLCBpbmRleD0xDQooWEVOKSAgdXNpbmcgJ2ludGVycnVwdHMnIHByb3BlcnR5DQooWEVOKSAg
aW50c3BlYz0wIGludGxlbj05DQooWEVOKSAgaW50c2l6ZT0zIGludGxlbj05DQooWEVOKSBkdF9p
cnFfbWFwX3JhdzogcGFyPS9pbnRlcnJ1cHQtY29udHJvbGxlckA2MDAwNDAwMCxpbnRzcGVjPVsw
eDAwMDAwMDAwIDB4MDAwMDAwMjguLi5dLG9pbnRzaXplPTMNCihYRU4pIGR0X2lycV9tYXBfcmF3
OiBpcGFyPS9pbnRlcnJ1cHQtY29udHJvbGxlckA2MDAwNDAwMCwgc2l6ZT0zDQooWEVOKSAgLT4g
YWRkcnNpemU9Mg0KKFhFTikgIC0+IGdvdCBpdCAhDQooWEVOKSBkdF9kZXZpY2VfZ2V0X3Jhd19p
cnE6IGRldj0veHVzYkA3MDA5MDAwMCwgaW5kZXg9Mg0KKFhFTikgIHVzaW5nICdpbnRlcnJ1cHRz
JyBwcm9wZXJ0eQ0KKFhFTikgIGludHNwZWM9MCBpbnRsZW49OQ0KKFhFTikgIGludHNpemU9MyBp
bnRsZW49OQ0KKFhFTikgZHRfaXJxX21hcF9yYXc6IHBhcj0vaW50ZXJydXB0LWNvbnRyb2xsZXJA
NjAwMDQwMDAsaW50c3BlYz1bMHgwMDAwMDAwMCAweDAwMDAwMDMxLi4uXSxvaW50c2l6ZT0zDQoo
WEVOKSBkdF9pcnFfbWFwX3JhdzogaXBhcj0vaW50ZXJydXB0LWNvbnRyb2xsZXJANjAwMDQwMDAs
IHNpemU9Mw0KKFhFTikgIC0+IGFkZHJzaXplPTINCihYRU4pICAtPiBnb3QgaXQgIQ0KKFhFTikg
L3h1c2JANzAwOTAwMDAgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMw0KKFhFTikgQ2hlY2sgaWYg
L3h1c2JANzAwOTAwMDAgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9p
cnFfbnVtYmVyOiBkZXY9L3h1c2JANzAwOTAwMDANCihYRU4pICB1c2luZyAnaW50ZXJydXB0cycg
cHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTkNCihYRU4pICBpbnRzaXplPTMgaW50
bGVuPTkNCihYRU4pIGR0X2RldmljZV9nZXRfcmF3X2lycTogZGV2PS94dXNiQDcwMDkwMDAwLCBp
bmRleD0wDQooWEVOKSAgdXNpbmcgJ2ludGVycnVwdHMnIHByb3BlcnR5DQooWEVOKSAgaW50c3Bl
Yz0wIGludGxlbj05DQooWEVOKSAgaW50c2l6ZT0zIGludGxlbj05DQooWEVOKSBkdF9pcnFfbWFw
X3JhdzogcGFyPS9pbnRlcnJ1cHQtY29udHJvbGxlckA2MDAwNDAwMCxpbnRzcGVjPVsweDAwMDAw
MDAwIDB4MDAwMDAwMjcuLi5dLG9pbnRzaXplPTMNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBpcGFy
PS9pbnRlcnJ1cHQtY29udHJvbGxlckA2MDAwNDAwMCwgc2l6ZT0zDQooWEVOKSAgLT4gYWRkcnNp
emU9Mg0KKFhFTikgIC0+IGdvdCBpdCAhDQooWEVOKSBpcnEgMCBub3QgKGRpcmVjdGx5IG9yIGlu
ZGlyZWN0bHkpIGNvbm5lY3RlZCB0byBwcmltYXJ5Y29udHJvbGxlci4gQ29ubmVjdGVkIHRvIC9p
bnRlcnJ1cHQtY29udHJvbGxlckA2MDAwNDAwMA0KKFhFTikgZHRfZGV2aWNlX2dldF9yYXdfaXJx
OiBkZXY9L3h1c2JANzAwOTAwMDAsIGluZGV4PTENCihYRU4pICB1c2luZyAnaW50ZXJydXB0cycg
cHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTkNCihYRU4pICBpbnRzaXplPTMgaW50
bGVuPTkNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBwYXI9L2ludGVycnVwdC1jb250cm9sbGVyQDYw
MDA0MDAwLGludHNwZWM9WzB4MDAwMDAwMDAgMHgwMDAwMDAyOC4uLl0sb2ludHNpemU9Mw0KKFhF
TikgZHRfaXJxX21hcF9yYXc6IGlwYXI9L2ludGVycnVwdC1jb250cm9sbGVyQDYwMDA0MDAwLCBz
aXplPTMNCihYRU4pICAtPiBhZGRyc2l6ZT0yDQooWEVOKSAgLT4gZ290IGl0ICENCihYRU4pIGly
cSAxIG5vdCAoZGlyZWN0bHkgb3IgaW5kaXJlY3RseSkgY29ubmVjdGVkIHRvIHByaW1hcnljb250
cm9sbGVyLiBDb25uZWN0ZWQgdG8gL2ludGVycnVwdC1jb250cm9sbGVyQDYwMDA0MDAwDQooWEVO
KSBkdF9kZXZpY2VfZ2V0X3Jhd19pcnE6IGRldj0veHVzYkA3MDA5MDAwMCwgaW5kZXg9Mg0KKFhF
TikgIHVzaW5nICdpbnRlcnJ1cHRzJyBwcm9wZXJ0eQ0KKFhFTikgIGludHNwZWM9MCBpbnRsZW49
OQ0KKFhFTikgIGludHNpemU9MyBpbnRsZW49OQ0KKFhFTikgZHRfaXJxX21hcF9yYXc6IHBhcj0v
aW50ZXJydXB0LWNvbnRyb2xsZXJANjAwMDQwMDAsaW50c3BlYz1bMHgwMDAwMDAwMCAweDAwMDAw
MDMxLi4uXSxvaW50c2l6ZT0zDQooWEVOKSBkdF9pcnFfbWFwX3JhdzogaXBhcj0vaW50ZXJydXB0
LWNvbnRyb2xsZXJANjAwMDQwMDAsIHNpemU9Mw0KKFhFTikgIC0+IGFkZHJzaXplPTINCihYRU4p
ICAtPiBnb3QgaXQgIQ0KKFhFTikgaXJxIDIgbm90IChkaXJlY3RseSBvciBpbmRpcmVjdGx5KSBj
b25uZWN0ZWQgdG8gcHJpbWFyeWNvbnRyb2xsZXIuIENvbm5lY3RlZCB0byAvaW50ZXJydXB0LWNv
bnRyb2xsZXJANjAwMDQwMDANCihYRU4pIERUOiAqKiB0cmFuc2xhdGlvbiBmb3IgZGV2aWNlIC94
dXNiQDcwMDkwMDAwICoqDQooWEVOKSBEVDogYnVzIGlzIGRlZmF1bHQgKG5hPTIsIG5zPTIpIG9u
IC8NCihYRU4pIERUOiB0cmFuc2xhdGluZyBhZGRyZXNzOjwzPiAwMDAwMDAwMDwzPiA3MDA5MDAw
MDwzPg0KKFhFTikgRFQ6IHJlYWNoZWQgcm9vdCBub2RlDQooWEVOKSAgIC0gTU1JTzogMDA3MDA5
MDAwMCAtIDAwNzAwOTgwMDAgUDJNVHlwZT01DQooWEVOKSBEVDogKiogdHJhbnNsYXRpb24gZm9y
IGRldmljZSAveHVzYkA3MDA5MDAwMCAqKg0KKFhFTikgRFQ6IGJ1cyBpcyBkZWZhdWx0IChuYT0y
LCBucz0yKSBvbiAvDQooWEVOKSBEVDogdHJhbnNsYXRpbmcgYWRkcmVzczo8Mz4gMDAwMDAwMDA8
Mz4gNzAwOTgwMDA8Mz4NCihYRU4pIERUOiByZWFjaGVkIHJvb3Qgbm9kZQ0KKFhFTikgICAtIE1N
SU86IDAwNzAwOTgwMDAgLSAwMDcwMDk5MDAwIFAyTVR5cGU9NQ0KKFhFTikgRFQ6ICoqIHRyYW5z
bGF0aW9uIGZvciBkZXZpY2UgL3h1c2JANzAwOTAwMDAgKioNCihYRU4pIERUOiBidXMgaXMgZGVm
YXVsdCAobmE9MiwgbnM9Mikgb24gLw0KKFhFTikgRFQ6IHRyYW5zbGF0aW5nIGFkZHJlc3M6PDM+
IDAwMDAwMDAwPDM+IDcwMDk5MDAwPDM+DQooWEVOKSBEVDogcmVhY2hlZCByb290IG5vZGUNCihY
RU4pICAgLSBNTUlPOiAwMDcwMDk5MDAwIC0gMDA3MDA5YTAwMCBQMk1UeXBlPTUNCihYRU4pIGhh
bmRsZSAvbWF4MTY5ODQtY2RwDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L21heDE2OTg0LWNk
cA0KKFhFTikgL21heDE2OTg0LWNkcCBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBD
aGVjayBpZiAvbWF4MTY5ODQtY2RwIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhF
TikgZHRfaXJxX251bWJlcjogZGV2PS9tYXgxNjk4NC1jZHANCihYRU4pIGhhbmRsZSAvc2VyaWFs
QDcwMDA2MDAwDQooWEVOKSAgIFNraXAgaXQgKHVzZWQgYnkgWGVuKQ0KKFhFTikgaGFuZGxlIC9z
ZXJpYWxANzAwMDYwNDANCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vc2VyaWFsQDcwMDA2MDQw
DQooWEVOKSAgdXNpbmcgJ2ludGVycnVwdHMnIHByb3BlcnR5DQooWEVOKSAgaW50c3BlYz0wIGlu
dGxlbj0zDQooWEVOKSAgaW50c2l6ZT0zIGludGxlbj0zDQooWEVOKSBkdF9kZXZpY2VfZ2V0X3Jh
d19pcnE6IGRldj0vc2VyaWFsQDcwMDA2MDQwLCBpbmRleD0wDQooWEVOKSAgdXNpbmcgJ2ludGVy
cnVwdHMnIHByb3BlcnR5DQooWEVOKSAgaW50c3BlYz0wIGludGxlbj0zDQooWEVOKSAgaW50c2l6
ZT0zIGludGxlbj0zDQooWEVOKSBkdF9pcnFfbWFwX3JhdzogcGFyPS9pbnRlcnJ1cHQtY29udHJv
bGxlckA2MDAwNDAwMCxpbnRzcGVjPVsweDAwMDAwMDAwIDB4MDAwMDAwMjUuLi5dLG9pbnRzaXpl
PTMNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBpcGFyPS9pbnRlcnJ1cHQtY29udHJvbGxlckA2MDAw
NDAwMCwgc2l6ZT0zDQooWEVOKSAgLT4gYWRkcnNpemU9Mg0KKFhFTikgIC0+IGdvdCBpdCAhDQoo
WEVOKSAvc2VyaWFsQDcwMDA2MDQwIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDENCihYRU4pIENo
ZWNrIGlmIC9zZXJpYWxANzAwMDYwNDAgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQoo
WEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3NlcmlhbEA3MDAwNjA0MA0KKFhFTikgIHVzaW5nICdp
bnRlcnJ1cHRzJyBwcm9wZXJ0eQ0KKFhFTikgIGludHNwZWM9MCBpbnRsZW49Mw0KKFhFTikgIGlu
dHNpemU9MyBpbnRsZW49Mw0KKFhFTikgZHRfZGV2aWNlX2dldF9yYXdfaXJxOiBkZXY9L3Nlcmlh
bEA3MDAwNjA0MCwgaW5kZXg9MA0KKFhFTikgIHVzaW5nICdpbnRlcnJ1cHRzJyBwcm9wZXJ0eQ0K
KFhFTikgIGludHNwZWM9MCBpbnRsZW49Mw0KKFhFTikgIGludHNpemU9MyBpbnRsZW49Mw0KKFhF
TikgZHRfaXJxX21hcF9yYXc6IHBhcj0vaW50ZXJydXB0LWNvbnRyb2xsZXJANjAwMDQwMDAsaW50
c3BlYz1bMHgwMDAwMDAwMCAweDAwMDAwMDI1Li4uXSxvaW50c2l6ZT0zDQooWEVOKSBkdF9pcnFf
bWFwX3JhdzogaXBhcj0vaW50ZXJydXB0LWNvbnRyb2xsZXJANjAwMDQwMDAsIHNpemU9Mw0KKFhF
TikgIC0+IGFkZHJzaXplPTINCihYRU4pICAtPiBnb3QgaXQgIQ0KKFhFTikgaXJxIDAgbm90IChk
aXJlY3RseSBvciBpbmRpcmVjdGx5KSBjb25uZWN0ZWQgdG8gcHJpbWFyeWNvbnRyb2xsZXIuIENv
bm5lY3RlZCB0byAvaW50ZXJydXB0LWNvbnRyb2xsZXJANjAwMDQwMDANCihYRU4pIERUOiAqKiB0
cmFuc2xhdGlvbiBmb3IgZGV2aWNlIC9zZXJpYWxANzAwMDYwNDAgKioNCihYRU4pIERUOiBidXMg
aXMgZGVmYXVsdCAobmE9MiwgbnM9Mikgb24gLw0KKFhFTikgRFQ6IHRyYW5zbGF0aW5nIGFkZHJl
c3M6PDM+IDAwMDAwMDAwPDM+IDcwMDA2MDQwPDM+DQooWEVOKSBEVDogcmVhY2hlZCByb290IG5v
ZGUNCihYRU4pICAgLSBNTUlPOiAwMDcwMDA2MDQwIC0gMDA3MDAwNjA4MCBQMk1UeXBlPTUNCihY
RU4pIGhhbmRsZSAvc2VyaWFsQDcwMDA2MjAwDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3Nl
cmlhbEA3MDAwNjIwMA0KKFhFTikgIHVzaW5nICdpbnRlcnJ1cHRzJyBwcm9wZXJ0eQ0KKFhFTikg
IGludHNwZWM9MCBpbnRsZW49Mw0KKFhFTikgIGludHNpemU9MyBpbnRsZW49Mw0KKFhFTikgZHRf
ZGV2aWNlX2dldF9yYXdfaXJxOiBkZXY9L3NlcmlhbEA3MDAwNjIwMCwgaW5kZXg9MA0KKFhFTikg
IHVzaW5nICdpbnRlcnJ1cHRzJyBwcm9wZXJ0eQ0KKFhFTikgIGludHNwZWM9MCBpbnRsZW49Mw0K
KFhFTikgIGludHNpemU9MyBpbnRsZW49Mw0KKFhFTikgZHRfaXJxX21hcF9yYXc6IHBhcj0vaW50
ZXJydXB0LWNvbnRyb2xsZXJANjAwMDQwMDAsaW50c3BlYz1bMHgwMDAwMDAwMCAweDAwMDAwMDJl
Li4uXSxvaW50c2l6ZT0zDQooWEVOKSBkdF9pcnFfbWFwX3JhdzogaXBhcj0vaW50ZXJydXB0LWNv
bnRyb2xsZXJANjAwMDQwMDAsIHNpemU9Mw0KKFhFTikgIC0+IGFkZHJzaXplPTINCihYRU4pICAt
PiBnb3QgaXQgIQ0KKFhFTikgL3NlcmlhbEA3MDAwNjIwMCBwYXNzdGhyb3VnaCA9IDEgbmFkZHIg
PSAxDQooWEVOKSBDaGVjayBpZiAvc2VyaWFsQDcwMDA2MjAwIGlzIGJlaGluZCB0aGUgSU9NTVUg
YW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9zZXJpYWxANzAwMDYyMDANCihY
RU4pICB1c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVu
PTMNCihYRU4pICBpbnRzaXplPTMgaW50bGVuPTMNCihYRU4pIGR0X2RldmljZV9nZXRfcmF3X2ly
cTogZGV2PS9zZXJpYWxANzAwMDYyMDAsIGluZGV4PTANCihYRU4pICB1c2luZyAnaW50ZXJydXB0
cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTMNCihYRU4pICBpbnRzaXplPTMg
aW50bGVuPTMNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBwYXI9L2ludGVycnVwdC1jb250cm9sbGVy
QDYwMDA0MDAwLGludHNwZWM9WzB4MDAwMDAwMDAgMHgwMDAwMDAyZS4uLl0sb2ludHNpemU9Mw0K
KFhFTikgZHRfaXJxX21hcF9yYXc6IGlwYXI9L2ludGVycnVwdC1jb250cm9sbGVyQDYwMDA0MDAw
LCBzaXplPTMNCihYRU4pICAtPiBhZGRyc2l6ZT0yDQooWEVOKSAgLT4gZ290IGl0ICENCihYRU4p
IGlycSAwIG5vdCAoZGlyZWN0bHkgb3IgaW5kaXJlY3RseSkgY29ubmVjdGVkIHRvIHByaW1hcnlj
b250cm9sbGVyLiBDb25uZWN0ZWQgdG8gL2ludGVycnVwdC1jb250cm9sbGVyQDYwMDA0MDAwDQoo
WEVOKSBEVDogKiogdHJhbnNsYXRpb24gZm9yIGRldmljZSAvc2VyaWFsQDcwMDA2MjAwICoqDQoo
WEVOKSBEVDogYnVzIGlzIGRlZmF1bHQgKG5hPTIsIG5zPTIpIG9uIC8NCihYRU4pIERUOiB0cmFu
c2xhdGluZyBhZGRyZXNzOjwzPiAwMDAwMDAwMDwzPiA3MDAwNjIwMDwzPg0KKFhFTikgRFQ6IHJl
YWNoZWQgcm9vdCBub2RlDQooWEVOKSAgIC0gTU1JTzogMDA3MDAwNjIwMCAtIDAwNzAwMDYyNDAg
UDJNVHlwZT01DQooWEVOKSBoYW5kbGUgL3NlcmlhbEA3MDAwNjMwMA0KKFhFTikgZHRfaXJxX251
bWJlcjogZGV2PS9zZXJpYWxANzAwMDYzMDANCihYRU4pICB1c2luZyAnaW50ZXJydXB0cycgcHJv
cGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTMNCihYRU4pICBpbnRzaXplPTMgaW50bGVu
PTMNCihYRU4pIGR0X2RldmljZV9nZXRfcmF3X2lycTogZGV2PS9zZXJpYWxANzAwMDYzMDAsIGlu
ZGV4PTANCihYRU4pICB1c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVj
PTAgaW50bGVuPTMNCihYRU4pICBpbnRzaXplPTMgaW50bGVuPTMNCihYRU4pIGR0X2lycV9tYXBf
cmF3OiBwYXI9L2ludGVycnVwdC1jb250cm9sbGVyQDYwMDA0MDAwLGludHNwZWM9WzB4MDAwMDAw
MDAgMHgwMDAwMDA1YS4uLl0sb2ludHNpemU9Mw0KKFhFTikgZHRfaXJxX21hcF9yYXc6IGlwYXI9
L2ludGVycnVwdC1jb250cm9sbGVyQDYwMDA0MDAwLCBzaXplPTMNCihYRU4pICAtPiBhZGRyc2l6
ZT0yDQooWEVOKSAgLT4gZ290IGl0ICENCihYRU4pIC9zZXJpYWxANzAwMDYzMDAgcGFzc3Rocm91
Z2ggPSAxIG5hZGRyID0gMQ0KKFhFTikgQ2hlY2sgaWYgL3NlcmlhbEA3MDAwNjMwMCBpcyBiZWhp
bmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vc2VyaWFs
QDcwMDA2MzAwDQooWEVOKSAgdXNpbmcgJ2ludGVycnVwdHMnIHByb3BlcnR5DQooWEVOKSAgaW50
c3BlYz0wIGludGxlbj0zDQooWEVOKSAgaW50c2l6ZT0zIGludGxlbj0zDQooWEVOKSBkdF9kZXZp
Y2VfZ2V0X3Jhd19pcnE6IGRldj0vc2VyaWFsQDcwMDA2MzAwLCBpbmRleD0wDQooWEVOKSAgdXNp
bmcgJ2ludGVycnVwdHMnIHByb3BlcnR5DQooWEVOKSAgaW50c3BlYz0wIGludGxlbj0zDQooWEVO
KSAgaW50c2l6ZT0zIGludGxlbj0zDQooWEVOKSBkdF9pcnFfbWFwX3JhdzogcGFyPS9pbnRlcnJ1
cHQtY29udHJvbGxlckA2MDAwNDAwMCxpbnRzcGVjPVsweDAwMDAwMDAwIDB4MDAwMDAwNWEuLi5d
LG9pbnRzaXplPTMNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBpcGFyPS9pbnRlcnJ1cHQtY29udHJv
bGxlckA2MDAwNDAwMCwgc2l6ZT0zDQooWEVOKSAgLT4gYWRkcnNpemU9Mg0KKFhFTikgIC0+IGdv
dCBpdCAhDQooWEVOKSBpcnEgMCBub3QgKGRpcmVjdGx5IG9yIGluZGlyZWN0bHkpIGNvbm5lY3Rl
ZCB0byBwcmltYXJ5Y29udHJvbGxlci4gQ29ubmVjdGVkIHRvIC9pbnRlcnJ1cHQtY29udHJvbGxl
ckA2MDAwNDAwMA0KKFhFTikgRFQ6ICoqIHRyYW5zbGF0aW9uIGZvciBkZXZpY2UgL3NlcmlhbEA3
MDAwNjMwMCAqKg0KKFhFTikgRFQ6IGJ1cyBpcyBkZWZhdWx0IChuYT0yLCBucz0yKSBvbiAvDQoo
WEVOKSBEVDogdHJhbnNsYXRpbmcgYWRkcmVzczo8Mz4gMDAwMDAwMDA8Mz4gNzAwMDYzMDA8Mz4N
CihYRU4pIERUOiByZWFjaGVkIHJvb3Qgbm9kZQ0KKFhFTikgICAtIE1NSU86IDAwNzAwMDYzMDAg
LSAwMDcwMDA2MzQwIFAyTVR5cGU9NQ0KKFhFTikgaGFuZGxlIC9zb3VuZA0KKFhFTikgZHRfaXJx
X251bWJlcjogZGV2PS9zb3VuZA0KKFhFTikgL3NvdW5kIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9
IDANCihYRU4pIENoZWNrIGlmIC9zb3VuZCBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQN
CihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vc291bmQNCihYRU4pIGhhbmRsZSAvc291bmQvbnZp
ZGlhLGRhaS1saW5rLTENCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vc291bmQvbnZpZGlhLGRh
aS1saW5rLTENCihYRU4pIC9zb3VuZC9udmlkaWEsZGFpLWxpbmstMSBwYXNzdGhyb3VnaCA9IDEg
bmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvc291bmQvbnZpZGlhLGRhaS1saW5rLTEgaXMgYmVo
aW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3NvdW5k
L252aWRpYSxkYWktbGluay0xDQooWEVOKSBoYW5kbGUgL3NvdW5kL252aWRpYSxkYWktbGluay0y
DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3NvdW5kL252aWRpYSxkYWktbGluay0yDQooWEVO
KSAvc291bmQvbnZpZGlhLGRhaS1saW5rLTIgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhF
TikgQ2hlY2sgaWYgL3NvdW5kL252aWRpYSxkYWktbGluay0yIGlzIGJlaGluZCB0aGUgSU9NTVUg
YW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9zb3VuZC9udmlkaWEsZGFpLWxp
bmstMg0KKFhFTikgaGFuZGxlIC9zb3VuZC9udmlkaWEsZGFpLWxpbmstMw0KKFhFTikgZHRfaXJx
X251bWJlcjogZGV2PS9zb3VuZC9udmlkaWEsZGFpLWxpbmstMw0KKFhFTikgL3NvdW5kL252aWRp
YSxkYWktbGluay0zIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9z
b3VuZC9udmlkaWEsZGFpLWxpbmstMyBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihY
RU4pIGR0X2lycV9udW1iZXI6IGRldj0vc291bmQvbnZpZGlhLGRhaS1saW5rLTMNCihYRU4pIGhh
bmRsZSAvc291bmQvbnZpZGlhLGRhaS1saW5rLTQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0v
c291bmQvbnZpZGlhLGRhaS1saW5rLTQNCihYRU4pIC9zb3VuZC9udmlkaWEsZGFpLWxpbmstNCBw
YXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvc291bmQvbnZpZGlhLGRh
aS1saW5rLTQgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVt
YmVyOiBkZXY9L3NvdW5kL252aWRpYSxkYWktbGluay00DQooWEVOKSBoYW5kbGUgL3NvdW5kX3Jl
Zg0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9zb3VuZF9yZWYNCihYRU4pIC9zb3VuZF9yZWYg
cGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3NvdW5kX3JlZiBpcyBi
ZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vc291
bmRfcmVmDQooWEVOKSBoYW5kbGUgL3B3bUA3MDAwYTAwMA0KKFhFTikgZHRfaXJxX251bWJlcjog
ZGV2PS9wd21ANzAwMGEwMDANCihYRU4pIC9wd21ANzAwMGEwMDAgcGFzc3Rocm91Z2ggPSAxIG5h
ZGRyID0gMQ0KKFhFTikgQ2hlY2sgaWYgL3B3bUA3MDAwYTAwMCBpcyBiZWhpbmQgdGhlIElPTU1V
IGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcHdtQDcwMDBhMDAwDQooWEVO
KSBEVDogKiogdHJhbnNsYXRpb24gZm9yIGRldmljZSAvcHdtQDcwMDBhMDAwICoqDQooWEVOKSBE
VDogYnVzIGlzIGRlZmF1bHQgKG5hPTIsIG5zPTIpIG9uIC8NCihYRU4pIERUOiB0cmFuc2xhdGlu
ZyBhZGRyZXNzOjwzPiAwMDAwMDAwMDwzPiA3MDAwYTAwMDwzPg0KKFhFTikgRFQ6IHJlYWNoZWQg
cm9vdCBub2RlDQooWEVOKSAgIC0gTU1JTzogMDA3MDAwYTAwMCAtIDAwNzAwMGExMDAgUDJNVHlw
ZT01DQooWEVOKSBoYW5kbGUgL3NwaUA3MDAwZDQwMA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2
PS9zcGlANzAwMGQ0MDANCihYRU4pICB1c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihYRU4p
ICBpbnRzcGVjPTAgaW50bGVuPTMNCihYRU4pICBpbnRzaXplPTMgaW50bGVuPTMNCihYRU4pIGR0
X2RldmljZV9nZXRfcmF3X2lycTogZGV2PS9zcGlANzAwMGQ0MDAsIGluZGV4PTANCihYRU4pICB1
c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTMNCihY
RU4pICBpbnRzaXplPTMgaW50bGVuPTMNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBwYXI9L2ludGVy
cnVwdC1jb250cm9sbGVyQDYwMDA0MDAwLGludHNwZWM9WzB4MDAwMDAwMDAgMHgwMDAwMDAzYi4u
Ll0sb2ludHNpemU9Mw0KKFhFTikgZHRfaXJxX21hcF9yYXc6IGlwYXI9L2ludGVycnVwdC1jb250
cm9sbGVyQDYwMDA0MDAwLCBzaXplPTMNCihYRU4pICAtPiBhZGRyc2l6ZT0yDQooWEVOKSAgLT4g
Z290IGl0ICENCihYRU4pIC9zcGlANzAwMGQ0MDAgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMQ0K
KFhFTikgQ2hlY2sgaWYgL3NwaUA3MDAwZDQwMCBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQg
aXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vc3BpQDcwMDBkNDAwDQooWEVOKSAgdXNpbmcg
J2ludGVycnVwdHMnIHByb3BlcnR5DQooWEVOKSAgaW50c3BlYz0wIGludGxlbj0zDQooWEVOKSAg
aW50c2l6ZT0zIGludGxlbj0zDQooWEVOKSBkdF9kZXZpY2VfZ2V0X3Jhd19pcnE6IGRldj0vc3Bp
QDcwMDBkNDAwLCBpbmRleD0wDQooWEVOKSAgdXNpbmcgJ2ludGVycnVwdHMnIHByb3BlcnR5DQoo
WEVOKSAgaW50c3BlYz0wIGludGxlbj0zDQooWEVOKSAgaW50c2l6ZT0zIGludGxlbj0zDQooWEVO
KSBkdF9pcnFfbWFwX3JhdzogcGFyPS9pbnRlcnJ1cHQtY29udHJvbGxlckA2MDAwNDAwMCxpbnRz
cGVjPVsweDAwMDAwMDAwIDB4MDAwMDAwM2IuLi5dLG9pbnRzaXplPTMNCihYRU4pIGR0X2lycV9t
YXBfcmF3OiBpcGFyPS9pbnRlcnJ1cHQtY29udHJvbGxlckA2MDAwNDAwMCwgc2l6ZT0zDQooWEVO
KSAgLT4gYWRkcnNpemU9Mg0KKFhFTikgIC0+IGdvdCBpdCAhDQooWEVOKSBpcnEgMCBub3QgKGRp
cmVjdGx5IG9yIGluZGlyZWN0bHkpIGNvbm5lY3RlZCB0byBwcmltYXJ5Y29udHJvbGxlci4gQ29u
bmVjdGVkIHRvIC9pbnRlcnJ1cHQtY29udHJvbGxlckA2MDAwNDAwMA0KKFhFTikgRFQ6ICoqIHRy
YW5zbGF0aW9uIGZvciBkZXZpY2UgL3NwaUA3MDAwZDQwMCAqKg0KKFhFTikgRFQ6IGJ1cyBpcyBk
ZWZhdWx0IChuYT0yLCBucz0yKSBvbiAvDQooWEVOKSBEVDogdHJhbnNsYXRpbmcgYWRkcmVzczo8
Mz4gMDAwMDAwMDA8Mz4gNzAwMGQ0MDA8Mz4NCihYRU4pIERUOiByZWFjaGVkIHJvb3Qgbm9kZQ0K
KFhFTikgICAtIE1NSU86IDAwNzAwMGQ0MDAgLSAwMDcwMDBkNjAwIFAyTVR5cGU9NQ0KKFhFTikg
aGFuZGxlIC9zcGlANzAwMGQ0MDAvcHJvZC1zZXR0aW5ncw0KKFhFTikgZHRfaXJxX251bWJlcjog
ZGV2PS9zcGlANzAwMGQ0MDAvcHJvZC1zZXR0aW5ncw0KKFhFTikgL3NwaUA3MDAwZDQwMC9wcm9k
LXNldHRpbmdzIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9zcGlA
NzAwMGQ0MDAvcHJvZC1zZXR0aW5ncyBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihY
RU4pIGR0X2lycV9udW1iZXI6IGRldj0vc3BpQDcwMDBkNDAwL3Byb2Qtc2V0dGluZ3MNCihYRU4p
IGhhbmRsZSAvc3BpQDcwMDBkNDAwL3Byb2Qtc2V0dGluZ3MvcHJvZA0KKFhFTikgZHRfaXJxX251
bWJlcjogZGV2PS9zcGlANzAwMGQ0MDAvcHJvZC1zZXR0aW5ncy9wcm9kDQooWEVOKSAvc3BpQDcw
MDBkNDAwL3Byb2Qtc2V0dGluZ3MvcHJvZCBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVO
KSBDaGVjayBpZiAvc3BpQDcwMDBkNDAwL3Byb2Qtc2V0dGluZ3MvcHJvZCBpcyBiZWhpbmQgdGhl
IElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vc3BpQDcwMDBkNDAw
L3Byb2Qtc2V0dGluZ3MvcHJvZA0KKFhFTikgaGFuZGxlIC9zcGlANzAwMGQ0MDAvcHJvZC1zZXR0
aW5ncy9wcm9kX2NfZmxhc2gNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vc3BpQDcwMDBkNDAw
L3Byb2Qtc2V0dGluZ3MvcHJvZF9jX2ZsYXNoDQooWEVOKSAvc3BpQDcwMDBkNDAwL3Byb2Qtc2V0
dGluZ3MvcHJvZF9jX2ZsYXNoIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNr
IGlmIC9zcGlANzAwMGQ0MDAvcHJvZC1zZXR0aW5ncy9wcm9kX2NfZmxhc2ggaXMgYmVoaW5kIHRo
ZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3NwaUA3MDAwZDQw
MC9wcm9kLXNldHRpbmdzL3Byb2RfY19mbGFzaA0KKFhFTikgaGFuZGxlIC9zcGlANzAwMGQ0MDAv
cHJvZC1zZXR0aW5ncy9wcm9kX2NfbG9vcA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9zcGlA
NzAwMGQ0MDAvcHJvZC1zZXR0aW5ncy9wcm9kX2NfbG9vcA0KKFhFTikgL3NwaUA3MDAwZDQwMC9w
cm9kLXNldHRpbmdzL3Byb2RfY19sb29wIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4p
IENoZWNrIGlmIC9zcGlANzAwMGQ0MDAvcHJvZC1zZXR0aW5ncy9wcm9kX2NfbG9vcCBpcyBiZWhp
bmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vc3BpQDcw
MDBkNDAwL3Byb2Qtc2V0dGluZ3MvcHJvZF9jX2xvb3ANCihYRU4pIGhhbmRsZSAvc3BpQDcwMDBk
NDAwL3NwaUAwDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3NwaUA3MDAwZDQwMC9zcGlAMA0K
KFhFTikgL3NwaUA3MDAwZDQwMC9zcGlAMCBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVO
KSBDaGVjayBpZiAvc3BpQDcwMDBkNDAwL3NwaUAwIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFk
ZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9zcGlANzAwMGQ0MDAvc3BpQDANCihYRU4p
IGhhbmRsZSAvc3BpQDcwMDBkNDAwL3NwaUAwL2NvbnRyb2xsZXItZGF0YQ0KKFhFTikgZHRfaXJx
X251bWJlcjogZGV2PS9zcGlANzAwMGQ0MDAvc3BpQDAvY29udHJvbGxlci1kYXRhDQooWEVOKSAv
c3BpQDcwMDBkNDAwL3NwaUAwL2NvbnRyb2xsZXItZGF0YSBwYXNzdGhyb3VnaCA9IDEgbmFkZHIg
PSAwDQooWEVOKSBDaGVjayBpZiAvc3BpQDcwMDBkNDAwL3NwaUAwL2NvbnRyb2xsZXItZGF0YSBp
cyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0v
c3BpQDcwMDBkNDAwL3NwaUAwL2NvbnRyb2xsZXItZGF0YQ0KKFhFTikgaGFuZGxlIC9zcGlANzAw
MGQ0MDAvc3BpQDENCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vc3BpQDcwMDBkNDAwL3NwaUAx
DQooWEVOKSAvc3BpQDcwMDBkNDAwL3NwaUAxIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihY
RU4pIENoZWNrIGlmIC9zcGlANzAwMGQ0MDAvc3BpQDEgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQg
YWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3NwaUA3MDAwZDQwMC9zcGlAMQ0KKFhF
TikgaGFuZGxlIC9zcGlANzAwMGQ0MDAvc3BpQDEvY29udHJvbGxlci1kYXRhDQooWEVOKSBkdF9p
cnFfbnVtYmVyOiBkZXY9L3NwaUA3MDAwZDQwMC9zcGlAMS9jb250cm9sbGVyLWRhdGENCihYRU4p
IC9zcGlANzAwMGQ0MDAvc3BpQDEvY29udHJvbGxlci1kYXRhIHBhc3N0aHJvdWdoID0gMSBuYWRk
ciA9IDANCihYRU4pIENoZWNrIGlmIC9zcGlANzAwMGQ0MDAvc3BpQDEvY29udHJvbGxlci1kYXRh
IGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2
PS9zcGlANzAwMGQ0MDAvc3BpQDEvY29udHJvbGxlci1kYXRhDQooWEVOKSBoYW5kbGUgL3NwaUA3
MDAwZDYwMA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9zcGlANzAwMGQ2MDANCihYRU4pICB1
c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTMNCihY
RU4pICBpbnRzaXplPTMgaW50bGVuPTMNCihYRU4pIGR0X2RldmljZV9nZXRfcmF3X2lycTogZGV2
PS9zcGlANzAwMGQ2MDAsIGluZGV4PTANCihYRU4pICB1c2luZyAnaW50ZXJydXB0cycgcHJvcGVy
dHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTMNCihYRU4pICBpbnRzaXplPTMgaW50bGVuPTMN
CihYRU4pIGR0X2lycV9tYXBfcmF3OiBwYXI9L2ludGVycnVwdC1jb250cm9sbGVyQDYwMDA0MDAw
LGludHNwZWM9WzB4MDAwMDAwMDAgMHgwMDAwMDA1Mi4uLl0sb2ludHNpemU9Mw0KKFhFTikgZHRf
aXJxX21hcF9yYXc6IGlwYXI9L2ludGVycnVwdC1jb250cm9sbGVyQDYwMDA0MDAwLCBzaXplPTMN
CihYRU4pICAtPiBhZGRyc2l6ZT0yDQooWEVOKSAgLT4gZ290IGl0ICENCihYRU4pIC9zcGlANzAw
MGQ2MDAgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMQ0KKFhFTikgQ2hlY2sgaWYgL3NwaUA3MDAw
ZDYwMCBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6
IGRldj0vc3BpQDcwMDBkNjAwDQooWEVOKSAgdXNpbmcgJ2ludGVycnVwdHMnIHByb3BlcnR5DQoo
WEVOKSAgaW50c3BlYz0wIGludGxlbj0zDQooWEVOKSAgaW50c2l6ZT0zIGludGxlbj0zDQooWEVO
KSBkdF9kZXZpY2VfZ2V0X3Jhd19pcnE6IGRldj0vc3BpQDcwMDBkNjAwLCBpbmRleD0wDQooWEVO
KSAgdXNpbmcgJ2ludGVycnVwdHMnIHByb3BlcnR5DQooWEVOKSAgaW50c3BlYz0wIGludGxlbj0z
DQooWEVOKSAgaW50c2l6ZT0zIGludGxlbj0zDQooWEVOKSBkdF9pcnFfbWFwX3JhdzogcGFyPS9p
bnRlcnJ1cHQtY29udHJvbGxlckA2MDAwNDAwMCxpbnRzcGVjPVsweDAwMDAwMDAwIDB4MDAwMDAw
NTIuLi5dLG9pbnRzaXplPTMNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBpcGFyPS9pbnRlcnJ1cHQt
Y29udHJvbGxlckA2MDAwNDAwMCwgc2l6ZT0zDQooWEVOKSAgLT4gYWRkcnNpemU9Mg0KKFhFTikg
IC0+IGdvdCBpdCAhDQooWEVOKSBpcnEgMCBub3QgKGRpcmVjdGx5IG9yIGluZGlyZWN0bHkpIGNv
bm5lY3RlZCB0byBwcmltYXJ5Y29udHJvbGxlci4gQ29ubmVjdGVkIHRvIC9pbnRlcnJ1cHQtY29u
dHJvbGxlckA2MDAwNDAwMA0KKFhFTikgRFQ6ICoqIHRyYW5zbGF0aW9uIGZvciBkZXZpY2UgL3Nw
aUA3MDAwZDYwMCAqKg0KKFhFTikgRFQ6IGJ1cyBpcyBkZWZhdWx0IChuYT0yLCBucz0yKSBvbiAv
DQooWEVOKSBEVDogdHJhbnNsYXRpbmcgYWRkcmVzczo8Mz4gMDAwMDAwMDA8Mz4gNzAwMGQ2MDA8
Mz4NCihYRU4pIERUOiByZWFjaGVkIHJvb3Qgbm9kZQ0KKFhFTikgICAtIE1NSU86IDAwNzAwMGQ2
MDAgLSAwMDcwMDBkODAwIFAyTVR5cGU9NQ0KKFhFTikgaGFuZGxlIC9zcGlANzAwMGQ2MDAvcHJv
ZC1zZXR0aW5ncw0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9zcGlANzAwMGQ2MDAvcHJvZC1z
ZXR0aW5ncw0KKFhFTikgL3NwaUA3MDAwZDYwMC9wcm9kLXNldHRpbmdzIHBhc3N0aHJvdWdoID0g
MSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9zcGlANzAwMGQ2MDAvcHJvZC1zZXR0aW5ncyBp
cyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0v
c3BpQDcwMDBkNjAwL3Byb2Qtc2V0dGluZ3MNCihYRU4pIGhhbmRsZSAvc3BpQDcwMDBkNjAwL3By
b2Qtc2V0dGluZ3MvcHJvZA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9zcGlANzAwMGQ2MDAv
cHJvZC1zZXR0aW5ncy9wcm9kDQooWEVOKSAvc3BpQDcwMDBkNjAwL3Byb2Qtc2V0dGluZ3MvcHJv
ZCBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvc3BpQDcwMDBkNjAw
L3Byb2Qtc2V0dGluZ3MvcHJvZCBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4p
IGR0X2lycV9udW1iZXI6IGRldj0vc3BpQDcwMDBkNjAwL3Byb2Qtc2V0dGluZ3MvcHJvZA0KKFhF
TikgaGFuZGxlIC9zcGlANzAwMGQ2MDAvcHJvZC1zZXR0aW5ncy9wcm9kX2NfZmxhc2gNCihYRU4p
IGR0X2lycV9udW1iZXI6IGRldj0vc3BpQDcwMDBkNjAwL3Byb2Qtc2V0dGluZ3MvcHJvZF9jX2Zs
YXNoDQooWEVOKSAvc3BpQDcwMDBkNjAwL3Byb2Qtc2V0dGluZ3MvcHJvZF9jX2ZsYXNoIHBhc3N0
aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9zcGlANzAwMGQ2MDAvcHJvZC1z
ZXR0aW5ncy9wcm9kX2NfZmxhc2ggaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVO
KSBkdF9pcnFfbnVtYmVyOiBkZXY9L3NwaUA3MDAwZDYwMC9wcm9kLXNldHRpbmdzL3Byb2RfY19m
bGFzaA0KKFhFTikgaGFuZGxlIC9zcGlANzAwMGQ2MDAvcHJvZC1zZXR0aW5ncy9wcm9kX2NfbG9v
cA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9zcGlANzAwMGQ2MDAvcHJvZC1zZXR0aW5ncy9w
cm9kX2NfbG9vcA0KKFhFTikgL3NwaUA3MDAwZDYwMC9wcm9kLXNldHRpbmdzL3Byb2RfY19sb29w
IHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9zcGlANzAwMGQ2MDAv
cHJvZC1zZXR0aW5ncy9wcm9kX2NfbG9vcCBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQN
CihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vc3BpQDcwMDBkNjAwL3Byb2Qtc2V0dGluZ3MvcHJv
ZF9jX2xvb3ANCihYRU4pIGhhbmRsZSAvc3BpQDcwMDBkNjAwL3NwaUAwDQooWEVOKSBkdF9pcnFf
bnVtYmVyOiBkZXY9L3NwaUA3MDAwZDYwMC9zcGlAMA0KKFhFTikgL3NwaUA3MDAwZDYwMC9zcGlA
MCBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvc3BpQDcwMDBkNjAw
L3NwaUAwIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJl
cjogZGV2PS9zcGlANzAwMGQ2MDAvc3BpQDANCihYRU4pIGhhbmRsZSAvc3BpQDcwMDBkNjAwL3Nw
aUAwL2NvbnRyb2xsZXItZGF0YQ0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9zcGlANzAwMGQ2
MDAvc3BpQDAvY29udHJvbGxlci1kYXRhDQooWEVOKSAvc3BpQDcwMDBkNjAwL3NwaUAwL2NvbnRy
b2xsZXItZGF0YSBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvc3Bp
QDcwMDBkNjAwL3NwaUAwL2NvbnRyb2xsZXItZGF0YSBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBh
ZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vc3BpQDcwMDBkNjAwL3NwaUAwL2NvbnRy
b2xsZXItZGF0YQ0KKFhFTikgaGFuZGxlIC9zcGlANzAwMGQ2MDAvc3BpQDENCihYRU4pIGR0X2ly
cV9udW1iZXI6IGRldj0vc3BpQDcwMDBkNjAwL3NwaUAxDQooWEVOKSAvc3BpQDcwMDBkNjAwL3Nw
aUAxIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9zcGlANzAwMGQ2
MDAvc3BpQDEgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVt
YmVyOiBkZXY9L3NwaUA3MDAwZDYwMC9zcGlAMQ0KKFhFTikgaGFuZGxlIC9zcGlANzAwMGQ2MDAv
c3BpQDEvY29udHJvbGxlci1kYXRhDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3NwaUA3MDAw
ZDYwMC9zcGlAMS9jb250cm9sbGVyLWRhdGENCihYRU4pIC9zcGlANzAwMGQ2MDAvc3BpQDEvY29u
dHJvbGxlci1kYXRhIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9z
cGlANzAwMGQ2MDAvc3BpQDEvY29udHJvbGxlci1kYXRhIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5k
IGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9zcGlANzAwMGQ2MDAvc3BpQDEvY29u
dHJvbGxlci1kYXRhDQooWEVOKSBoYW5kbGUgL3NwaUA3MDAwZDgwMA0KKFhFTikgZHRfaXJxX251
bWJlcjogZGV2PS9zcGlANzAwMGQ4MDANCihYRU4pICB1c2luZyAnaW50ZXJydXB0cycgcHJvcGVy
dHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTMNCihYRU4pICBpbnRzaXplPTMgaW50bGVuPTMN
CihYRU4pIGR0X2RldmljZV9nZXRfcmF3X2lycTogZGV2PS9zcGlANzAwMGQ4MDAsIGluZGV4PTAN
CihYRU4pICB1c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50
bGVuPTMNCihYRU4pICBpbnRzaXplPTMgaW50bGVuPTMNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBw
YXI9L2ludGVycnVwdC1jb250cm9sbGVyQDYwMDA0MDAwLGludHNwZWM9WzB4MDAwMDAwMDAgMHgw
MDAwMDA1My4uLl0sb2ludHNpemU9Mw0KKFhFTikgZHRfaXJxX21hcF9yYXc6IGlwYXI9L2ludGVy
cnVwdC1jb250cm9sbGVyQDYwMDA0MDAwLCBzaXplPTMNCihYRU4pICAtPiBhZGRyc2l6ZT0yDQoo
WEVOKSAgLT4gZ290IGl0ICENCihYRU4pIC9zcGlANzAwMGQ4MDAgcGFzc3Rocm91Z2ggPSAxIG5h
ZGRyID0gMQ0KKFhFTikgQ2hlY2sgaWYgL3NwaUA3MDAwZDgwMCBpcyBiZWhpbmQgdGhlIElPTU1V
IGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vc3BpQDcwMDBkODAwDQooWEVO
KSAgdXNpbmcgJ2ludGVycnVwdHMnIHByb3BlcnR5DQooWEVOKSAgaW50c3BlYz0wIGludGxlbj0z
DQooWEVOKSAgaW50c2l6ZT0zIGludGxlbj0zDQooWEVOKSBkdF9kZXZpY2VfZ2V0X3Jhd19pcnE6
IGRldj0vc3BpQDcwMDBkODAwLCBpbmRleD0wDQooWEVOKSAgdXNpbmcgJ2ludGVycnVwdHMnIHBy
b3BlcnR5DQooWEVOKSAgaW50c3BlYz0wIGludGxlbj0zDQooWEVOKSAgaW50c2l6ZT0zIGludGxl
bj0zDQooWEVOKSBkdF9pcnFfbWFwX3JhdzogcGFyPS9pbnRlcnJ1cHQtY29udHJvbGxlckA2MDAw
NDAwMCxpbnRzcGVjPVsweDAwMDAwMDAwIDB4MDAwMDAwNTMuLi5dLG9pbnRzaXplPTMNCihYRU4p
IGR0X2lycV9tYXBfcmF3OiBpcGFyPS9pbnRlcnJ1cHQtY29udHJvbGxlckA2MDAwNDAwMCwgc2l6
ZT0zDQooWEVOKSAgLT4gYWRkcnNpemU9Mg0KKFhFTikgIC0+IGdvdCBpdCAhDQooWEVOKSBpcnEg
MCBub3QgKGRpcmVjdGx5IG9yIGluZGlyZWN0bHkpIGNvbm5lY3RlZCB0byBwcmltYXJ5Y29udHJv
bGxlci4gQ29ubmVjdGVkIHRvIC9pbnRlcnJ1cHQtY29udHJvbGxlckA2MDAwNDAwMA0KKFhFTikg
RFQ6ICoqIHRyYW5zbGF0aW9uIGZvciBkZXZpY2UgL3NwaUA3MDAwZDgwMCAqKg0KKFhFTikgRFQ6
IGJ1cyBpcyBkZWZhdWx0IChuYT0yLCBucz0yKSBvbiAvDQooWEVOKSBEVDogdHJhbnNsYXRpbmcg
YWRkcmVzczo8Mz4gMDAwMDAwMDA8Mz4gNzAwMGQ4MDA8Mz4NCihYRU4pIERUOiByZWFjaGVkIHJv
b3Qgbm9kZQ0KKFhFTikgICAtIE1NSU86IDAwNzAwMGQ4MDAgLSAwMDcwMDBkYTAwIFAyTVR5cGU9
NQ0KKFhFTikgaGFuZGxlIC9zcGlANzAwMGQ4MDAvcHJvZC1zZXR0aW5ncw0KKFhFTikgZHRfaXJx
X251bWJlcjogZGV2PS9zcGlANzAwMGQ4MDAvcHJvZC1zZXR0aW5ncw0KKFhFTikgL3NwaUA3MDAw
ZDgwMC9wcm9kLXNldHRpbmdzIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNr
IGlmIC9zcGlANzAwMGQ4MDAvcHJvZC1zZXR0aW5ncyBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBh
ZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vc3BpQDcwMDBkODAwL3Byb2Qtc2V0dGlu
Z3MNCihYRU4pIGhhbmRsZSAvc3BpQDcwMDBkODAwL3Byb2Qtc2V0dGluZ3MvcHJvZA0KKFhFTikg
ZHRfaXJxX251bWJlcjogZGV2PS9zcGlANzAwMGQ4MDAvcHJvZC1zZXR0aW5ncy9wcm9kDQooWEVO
KSAvc3BpQDcwMDBkODAwL3Byb2Qtc2V0dGluZ3MvcHJvZCBwYXNzdGhyb3VnaCA9IDEgbmFkZHIg
PSAwDQooWEVOKSBDaGVjayBpZiAvc3BpQDcwMDBkODAwL3Byb2Qtc2V0dGluZ3MvcHJvZCBpcyBi
ZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vc3Bp
QDcwMDBkODAwL3Byb2Qtc2V0dGluZ3MvcHJvZA0KKFhFTikgaGFuZGxlIC9zcGlANzAwMGQ4MDAv
cHJvZC1zZXR0aW5ncy9wcm9kX2NfZmxhc2gNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vc3Bp
QDcwMDBkODAwL3Byb2Qtc2V0dGluZ3MvcHJvZF9jX2ZsYXNoDQooWEVOKSAvc3BpQDcwMDBkODAw
L3Byb2Qtc2V0dGluZ3MvcHJvZF9jX2ZsYXNoIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihY
RU4pIENoZWNrIGlmIC9zcGlANzAwMGQ4MDAvcHJvZC1zZXR0aW5ncy9wcm9kX2NfZmxhc2ggaXMg
YmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3Nw
aUA3MDAwZDgwMC9wcm9kLXNldHRpbmdzL3Byb2RfY19mbGFzaA0KKFhFTikgaGFuZGxlIC9zcGlA
NzAwMGQ4MDAvcHJvZC1zZXR0aW5ncy9wcm9kX2NfbG9vcA0KKFhFTikgZHRfaXJxX251bWJlcjog
ZGV2PS9zcGlANzAwMGQ4MDAvcHJvZC1zZXR0aW5ncy9wcm9kX2NfbG9vcA0KKFhFTikgL3NwaUA3
MDAwZDgwMC9wcm9kLXNldHRpbmdzL3Byb2RfY19sb29wIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9
IDANCihYRU4pIENoZWNrIGlmIC9zcGlANzAwMGQ4MDAvcHJvZC1zZXR0aW5ncy9wcm9kX2NfbG9v
cCBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRl
dj0vc3BpQDcwMDBkODAwL3Byb2Qtc2V0dGluZ3MvcHJvZF9jX2xvb3ANCihYRU4pIGhhbmRsZSAv
c3BpQDcwMDBkYTAwDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3NwaUA3MDAwZGEwMA0KKFhF
TikgIHVzaW5nICdpbnRlcnJ1cHRzJyBwcm9wZXJ0eQ0KKFhFTikgIGludHNwZWM9MCBpbnRsZW49
Mw0KKFhFTikgIGludHNpemU9MyBpbnRsZW49Mw0KKFhFTikgZHRfZGV2aWNlX2dldF9yYXdfaXJx
OiBkZXY9L3NwaUA3MDAwZGEwMCwgaW5kZXg9MA0KKFhFTikgIHVzaW5nICdpbnRlcnJ1cHRzJyBw
cm9wZXJ0eQ0KKFhFTikgIGludHNwZWM9MCBpbnRsZW49Mw0KKFhFTikgIGludHNpemU9MyBpbnRs
ZW49Mw0KKFhFTikgZHRfaXJxX21hcF9yYXc6IHBhcj0vaW50ZXJydXB0LWNvbnRyb2xsZXJANjAw
MDQwMDAsaW50c3BlYz1bMHgwMDAwMDAwMCAweDAwMDAwMDVkLi4uXSxvaW50c2l6ZT0zDQooWEVO
KSBkdF9pcnFfbWFwX3JhdzogaXBhcj0vaW50ZXJydXB0LWNvbnRyb2xsZXJANjAwMDQwMDAsIHNp
emU9Mw0KKFhFTikgIC0+IGFkZHJzaXplPTINCihYRU4pICAtPiBnb3QgaXQgIQ0KKFhFTikgL3Nw
aUA3MDAwZGEwMCBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAxDQooWEVOKSBDaGVjayBpZiAvc3Bp
QDcwMDBkYTAwIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251
bWJlcjogZGV2PS9zcGlANzAwMGRhMDANCihYRU4pICB1c2luZyAnaW50ZXJydXB0cycgcHJvcGVy
dHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTMNCihYRU4pICBpbnRzaXplPTMgaW50bGVuPTMN
CihYRU4pIGR0X2RldmljZV9nZXRfcmF3X2lycTogZGV2PS9zcGlANzAwMGRhMDAsIGluZGV4PTAN
CihYRU4pICB1c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50
bGVuPTMNCihYRU4pICBpbnRzaXplPTMgaW50bGVuPTMNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBw
YXI9L2ludGVycnVwdC1jb250cm9sbGVyQDYwMDA0MDAwLGludHNwZWM9WzB4MDAwMDAwMDAgMHgw
MDAwMDA1ZC4uLl0sb2ludHNpemU9Mw0KKFhFTikgZHRfaXJxX21hcF9yYXc6IGlwYXI9L2ludGVy
cnVwdC1jb250cm9sbGVyQDYwMDA0MDAwLCBzaXplPTMNCihYRU4pICAtPiBhZGRyc2l6ZT0yDQoo
WEVOKSAgLT4gZ290IGl0ICENCihYRU4pIGlycSAwIG5vdCAoZGlyZWN0bHkgb3IgaW5kaXJlY3Rs
eSkgY29ubmVjdGVkIHRvIHByaW1hcnljb250cm9sbGVyLiBDb25uZWN0ZWQgdG8gL2ludGVycnVw
dC1jb250cm9sbGVyQDYwMDA0MDAwDQooWEVOKSBEVDogKiogdHJhbnNsYXRpb24gZm9yIGRldmlj
ZSAvc3BpQDcwMDBkYTAwICoqDQooWEVOKSBEVDogYnVzIGlzIGRlZmF1bHQgKG5hPTIsIG5zPTIp
IG9uIC8NCihYRU4pIERUOiB0cmFuc2xhdGluZyBhZGRyZXNzOjwzPiAwMDAwMDAwMDwzPiA3MDAw
ZGEwMDwzPg0KKFhFTikgRFQ6IHJlYWNoZWQgcm9vdCBub2RlDQooWEVOKSAgIC0gTU1JTzogMDA3
MDAwZGEwMCAtIDAwNzAwMGRjMDAgUDJNVHlwZT01DQooWEVOKSBoYW5kbGUgL3NwaUA3MDAwZGEw
MC9wcm9kLXNldHRpbmdzDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3NwaUA3MDAwZGEwMC9w
cm9kLXNldHRpbmdzDQooWEVOKSAvc3BpQDcwMDBkYTAwL3Byb2Qtc2V0dGluZ3MgcGFzc3Rocm91
Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3NwaUA3MDAwZGEwMC9wcm9kLXNldHRp
bmdzIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjog
ZGV2PS9zcGlANzAwMGRhMDAvcHJvZC1zZXR0aW5ncw0KKFhFTikgaGFuZGxlIC9zcGlANzAwMGRh
MDAvcHJvZC1zZXR0aW5ncy9wcm9kDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3NwaUA3MDAw
ZGEwMC9wcm9kLXNldHRpbmdzL3Byb2QNCihYRU4pIC9zcGlANzAwMGRhMDAvcHJvZC1zZXR0aW5n
cy9wcm9kIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9zcGlANzAw
MGRhMDAvcHJvZC1zZXR0aW5ncy9wcm9kIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0K
KFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9zcGlANzAwMGRhMDAvcHJvZC1zZXR0aW5ncy9wcm9k
DQooWEVOKSBoYW5kbGUgL3NwaUA3MDAwZGEwMC9wcm9kLXNldHRpbmdzL3Byb2RfY19mbGFzaA0K
KFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9zcGlANzAwMGRhMDAvcHJvZC1zZXR0aW5ncy9wcm9k
X2NfZmxhc2gNCihYRU4pIC9zcGlANzAwMGRhMDAvcHJvZC1zZXR0aW5ncy9wcm9kX2NfZmxhc2gg
cGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3NwaUA3MDAwZGEwMC9w
cm9kLXNldHRpbmdzL3Byb2RfY19mbGFzaCBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQN
CihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vc3BpQDcwMDBkYTAwL3Byb2Qtc2V0dGluZ3MvcHJv
ZF9jX2ZsYXNoDQooWEVOKSBoYW5kbGUgL3NwaUA3MDAwZGEwMC9wcm9kLXNldHRpbmdzL3Byb2Rf
Y19jczANCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vc3BpQDcwMDBkYTAwL3Byb2Qtc2V0dGlu
Z3MvcHJvZF9jX2NzMA0KKFhFTikgL3NwaUA3MDAwZGEwMC9wcm9kLXNldHRpbmdzL3Byb2RfY19j
czAgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3NwaUA3MDAwZGEw
MC9wcm9kLXNldHRpbmdzL3Byb2RfY19jczAgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0
DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3NwaUA3MDAwZGEwMC9wcm9kLXNldHRpbmdzL3By
b2RfY19jczANCihYRU4pIGhhbmRsZSAvc3BpQDcwMDBkYTAwL3NwaS10b3VjaDE5eDEyQDANCihY
RU4pIGR0X2lycV9udW1iZXI6IGRldj0vc3BpQDcwMDBkYTAwL3NwaS10b3VjaDE5eDEyQDANCihY
RU4pICB1c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTE4NSBpbnRs
ZW49Mg0KKFhFTikgIGludHNpemU9MiBpbnRsZW49Mg0KKFhFTikgZHRfZGV2aWNlX2dldF9yYXdf
aXJxOiBkZXY9L3NwaUA3MDAwZGEwMC9zcGktdG91Y2gxOXgxMkAwLCBpbmRleD0wDQooWEVOKSAg
dXNpbmcgJ2ludGVycnVwdHMnIHByb3BlcnR5DQooWEVOKSAgaW50c3BlYz0xODUgaW50bGVuPTIN
CihYRU4pICBpbnRzaXplPTIgaW50bGVuPTINCihYRU4pIGR0X2lycV9tYXBfcmF3OiBwYXI9L2dw
aW9ANjAwMGQwMDAsaW50c3BlYz1bMHgwMDAwMDBiOSAweDAwMDAwMDAxLi4uXSxvaW50c2l6ZT0y
DQooWEVOKSBkdF9pcnFfbWFwX3JhdzogaXBhcj0vZ3Bpb0A2MDAwZDAwMCwgc2l6ZT0yDQooWEVO
KSAgLT4gYWRkcnNpemU9Mg0KKFhFTikgIC0+IGdvdCBpdCAhDQooWEVOKSAvc3BpQDcwMDBkYTAw
L3NwaS10b3VjaDE5eDEyQDAgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sg
aWYgL3NwaUA3MDAwZGEwMC9zcGktdG91Y2gxOXgxMkAwIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5k
IGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9zcGlANzAwMGRhMDAvc3BpLXRvdWNo
MTl4MTJAMA0KKFhFTikgIHVzaW5nICdpbnRlcnJ1cHRzJyBwcm9wZXJ0eQ0KKFhFTikgIGludHNw
ZWM9MTg1IGludGxlbj0yDQooWEVOKSAgaW50c2l6ZT0yIGludGxlbj0yDQooWEVOKSBkdF9kZXZp
Y2VfZ2V0X3Jhd19pcnE6IGRldj0vc3BpQDcwMDBkYTAwL3NwaS10b3VjaDE5eDEyQDAsIGluZGV4
PTANCihYRU4pICB1c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTE4
NSBpbnRsZW49Mg0KKFhFTikgIGludHNpemU9MiBpbnRsZW49Mg0KKFhFTikgZHRfaXJxX21hcF9y
YXc6IHBhcj0vZ3Bpb0A2MDAwZDAwMCxpbnRzcGVjPVsweDAwMDAwMGI5IDB4MDAwMDAwMDEuLi5d
LG9pbnRzaXplPTINCihYRU4pIGR0X2lycV9tYXBfcmF3OiBpcGFyPS9ncGlvQDYwMDBkMDAwLCBz
aXplPTINCihYRU4pICAtPiBhZGRyc2l6ZT0yDQooWEVOKSAgLT4gZ290IGl0ICENCihYRU4pIGly
cSAwIG5vdCAoZGlyZWN0bHkgb3IgaW5kaXJlY3RseSkgY29ubmVjdGVkIHRvIHByaW1hcnljb250
cm9sbGVyLiBDb25uZWN0ZWQgdG8gL2dwaW9ANjAwMGQwMDANCihYRU4pIGhhbmRsZSAvc3BpQDcw
MDBkYTAwL3NwaS10b3VjaDI1eDE2QDANCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vc3BpQDcw
MDBkYTAwL3NwaS10b3VjaDI1eDE2QDANCihYRU4pICB1c2luZyAnaW50ZXJydXB0cycgcHJvcGVy
dHkNCihYRU4pICBpbnRzcGVjPTE4NSBpbnRsZW49Mg0KKFhFTikgIGludHNpemU9MiBpbnRsZW49
Mg0KKFhFTikgZHRfZGV2aWNlX2dldF9yYXdfaXJxOiBkZXY9L3NwaUA3MDAwZGEwMC9zcGktdG91
Y2gyNXgxNkAwLCBpbmRleD0wDQooWEVOKSAgdXNpbmcgJ2ludGVycnVwdHMnIHByb3BlcnR5DQoo
WEVOKSAgaW50c3BlYz0xODUgaW50bGVuPTINCihYRU4pICBpbnRzaXplPTIgaW50bGVuPTINCihY
RU4pIGR0X2lycV9tYXBfcmF3OiBwYXI9L2dwaW9ANjAwMGQwMDAsaW50c3BlYz1bMHgwMDAwMDBi
OSAweDAwMDAwMDAxLi4uXSxvaW50c2l6ZT0yDQooWEVOKSBkdF9pcnFfbWFwX3JhdzogaXBhcj0v
Z3Bpb0A2MDAwZDAwMCwgc2l6ZT0yDQooWEVOKSAgLT4gYWRkcnNpemU9Mg0KKFhFTikgIC0+IGdv
dCBpdCAhDQooWEVOKSAvc3BpQDcwMDBkYTAwL3NwaS10b3VjaDI1eDE2QDAgcGFzc3Rocm91Z2gg
PSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3NwaUA3MDAwZGEwMC9zcGktdG91Y2gyNXgx
NkAwIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjog
ZGV2PS9zcGlANzAwMGRhMDAvc3BpLXRvdWNoMjV4MTZAMA0KKFhFTikgIHVzaW5nICdpbnRlcnJ1
cHRzJyBwcm9wZXJ0eQ0KKFhFTikgIGludHNwZWM9MTg1IGludGxlbj0yDQooWEVOKSAgaW50c2l6
ZT0yIGludGxlbj0yDQooWEVOKSBkdF9kZXZpY2VfZ2V0X3Jhd19pcnE6IGRldj0vc3BpQDcwMDBk
YTAwL3NwaS10b3VjaDI1eDE2QDAsIGluZGV4PTANCihYRU4pICB1c2luZyAnaW50ZXJydXB0cycg
cHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTE4NSBpbnRsZW49Mg0KKFhFTikgIGludHNpemU9MiBp
bnRsZW49Mg0KKFhFTikgZHRfaXJxX21hcF9yYXc6IHBhcj0vZ3Bpb0A2MDAwZDAwMCxpbnRzcGVj
PVsweDAwMDAwMGI5IDB4MDAwMDAwMDEuLi5dLG9pbnRzaXplPTINCihYRU4pIGR0X2lycV9tYXBf
cmF3OiBpcGFyPS9ncGlvQDYwMDBkMDAwLCBzaXplPTINCihYRU4pICAtPiBhZGRyc2l6ZT0yDQoo
WEVOKSAgLT4gZ290IGl0ICENCihYRU4pIGlycSAwIG5vdCAoZGlyZWN0bHkgb3IgaW5kaXJlY3Rs
eSkgY29ubmVjdGVkIHRvIHByaW1hcnljb250cm9sbGVyLiBDb25uZWN0ZWQgdG8gL2dwaW9ANjAw
MGQwMDANCihYRU4pIGhhbmRsZSAvc3BpQDcwMDBkYTAwL3NwaS10b3VjaDE0eDhAMA0KKFhFTikg
ZHRfaXJxX251bWJlcjogZGV2PS9zcGlANzAwMGRhMDAvc3BpLXRvdWNoMTR4OEAwDQooWEVOKSAg
dXNpbmcgJ2ludGVycnVwdHMnIHByb3BlcnR5DQooWEVOKSAgaW50c3BlYz0xODUgaW50bGVuPTIN
CihYRU4pICBpbnRzaXplPTIgaW50bGVuPTINCihYRU4pIGR0X2RldmljZV9nZXRfcmF3X2lycTog
ZGV2PS9zcGlANzAwMGRhMDAvc3BpLXRvdWNoMTR4OEAwLCBpbmRleD0wDQooWEVOKSAgdXNpbmcg
J2ludGVycnVwdHMnIHByb3BlcnR5DQooWEVOKSAgaW50c3BlYz0xODUgaW50bGVuPTINCihYRU4p
ICBpbnRzaXplPTIgaW50bGVuPTINCihYRU4pIGR0X2lycV9tYXBfcmF3OiBwYXI9L2dwaW9ANjAw
MGQwMDAsaW50c3BlYz1bMHgwMDAwMDBiOSAweDAwMDAwMDAxLi4uXSxvaW50c2l6ZT0yDQooWEVO
KSBkdF9pcnFfbWFwX3JhdzogaXBhcj0vZ3Bpb0A2MDAwZDAwMCwgc2l6ZT0yDQooWEVOKSAgLT4g
YWRkcnNpemU9Mg0KKFhFTikgIC0+IGdvdCBpdCAhDQooWEVOKSAvc3BpQDcwMDBkYTAwL3NwaS10
b3VjaDE0eDhAMCBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvc3Bp
QDcwMDBkYTAwL3NwaS10b3VjaDE0eDhAMCBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQN
CihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vc3BpQDcwMDBkYTAwL3NwaS10b3VjaDE0eDhAMA0K
KFhFTikgIHVzaW5nICdpbnRlcnJ1cHRzJyBwcm9wZXJ0eQ0KKFhFTikgIGludHNwZWM9MTg1IGlu
dGxlbj0yDQooWEVOKSAgaW50c2l6ZT0yIGludGxlbj0yDQooWEVOKSBkdF9kZXZpY2VfZ2V0X3Jh
d19pcnE6IGRldj0vc3BpQDcwMDBkYTAwL3NwaS10b3VjaDE0eDhAMCwgaW5kZXg9MA0KKFhFTikg
IHVzaW5nICdpbnRlcnJ1cHRzJyBwcm9wZXJ0eQ0KKFhFTikgIGludHNwZWM9MTg1IGludGxlbj0y
DQooWEVOKSAgaW50c2l6ZT0yIGludGxlbj0yDQooWEVOKSBkdF9pcnFfbWFwX3JhdzogcGFyPS9n
cGlvQDYwMDBkMDAwLGludHNwZWM9WzB4MDAwMDAwYjkgMHgwMDAwMDAwMS4uLl0sb2ludHNpemU9
Mg0KKFhFTikgZHRfaXJxX21hcF9yYXc6IGlwYXI9L2dwaW9ANjAwMGQwMDAsIHNpemU9Mg0KKFhF
TikgIC0+IGFkZHJzaXplPTINCihYRU4pICAtPiBnb3QgaXQgIQ0KKFhFTikgaXJxIDAgbm90IChk
aXJlY3RseSBvciBpbmRpcmVjdGx5KSBjb25uZWN0ZWQgdG8gcHJpbWFyeWNvbnRyb2xsZXIuIENv
bm5lY3RlZCB0byAvZ3Bpb0A2MDAwZDAwMA0KKFhFTikgaGFuZGxlIC9zcGlANzA0MTAwMDANCihY
RU4pIGR0X2lycV9udW1iZXI6IGRldj0vc3BpQDcwNDEwMDAwDQooWEVOKSAgdXNpbmcgJ2ludGVy
cnVwdHMnIHByb3BlcnR5DQooWEVOKSAgaW50c3BlYz0wIGludGxlbj0zDQooWEVOKSAgaW50c2l6
ZT0zIGludGxlbj0zDQooWEVOKSBkdF9kZXZpY2VfZ2V0X3Jhd19pcnE6IGRldj0vc3BpQDcwNDEw
MDAwLCBpbmRleD0wDQooWEVOKSAgdXNpbmcgJ2ludGVycnVwdHMnIHByb3BlcnR5DQooWEVOKSAg
aW50c3BlYz0wIGludGxlbj0zDQooWEVOKSAgaW50c2l6ZT0zIGludGxlbj0zDQooWEVOKSBkdF9p
cnFfbWFwX3JhdzogcGFyPS9pbnRlcnJ1cHQtY29udHJvbGxlckA2MDAwNDAwMCxpbnRzcGVjPVsw
eDAwMDAwMDAwIDB4MDAwMDAwMGEuLi5dLG9pbnRzaXplPTMNCihYRU4pIGR0X2lycV9tYXBfcmF3
OiBpcGFyPS9pbnRlcnJ1cHQtY29udHJvbGxlckA2MDAwNDAwMCwgc2l6ZT0zDQooWEVOKSAgLT4g
YWRkcnNpemU9Mg0KKFhFTikgIC0+IGdvdCBpdCAhDQooWEVOKSAvc3BpQDcwNDEwMDAwIHBhc3N0
aHJvdWdoID0gMSBuYWRkciA9IDENCihYRU4pIENoZWNrIGlmIC9zcGlANzA0MTAwMDAgaXMgYmVo
aW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3NwaUA3
MDQxMDAwMA0KKFhFTikgIHVzaW5nICdpbnRlcnJ1cHRzJyBwcm9wZXJ0eQ0KKFhFTikgIGludHNw
ZWM9MCBpbnRsZW49Mw0KKFhFTikgIGludHNpemU9MyBpbnRsZW49Mw0KKFhFTikgZHRfZGV2aWNl
X2dldF9yYXdfaXJxOiBkZXY9L3NwaUA3MDQxMDAwMCwgaW5kZXg9MA0KKFhFTikgIHVzaW5nICdp
bnRlcnJ1cHRzJyBwcm9wZXJ0eQ0KKFhFTikgIGludHNwZWM9MCBpbnRsZW49Mw0KKFhFTikgIGlu
dHNpemU9MyBpbnRsZW49Mw0KKFhFTikgZHRfaXJxX21hcF9yYXc6IHBhcj0vaW50ZXJydXB0LWNv
bnRyb2xsZXJANjAwMDQwMDAsaW50c3BlYz1bMHgwMDAwMDAwMCAweDAwMDAwMDBhLi4uXSxvaW50
c2l6ZT0zDQooWEVOKSBkdF9pcnFfbWFwX3JhdzogaXBhcj0vaW50ZXJydXB0LWNvbnRyb2xsZXJA
NjAwMDQwMDAsIHNpemU9Mw0KKFhFTikgIC0+IGFkZHJzaXplPTINCihYRU4pICAtPiBnb3QgaXQg
IQ0KKFhFTikgaXJxIDAgbm90IChkaXJlY3RseSBvciBpbmRpcmVjdGx5KSBjb25uZWN0ZWQgdG8g
cHJpbWFyeWNvbnRyb2xsZXIuIENvbm5lY3RlZCB0byAvaW50ZXJydXB0LWNvbnRyb2xsZXJANjAw
MDQwMDANCihYRU4pIERUOiAqKiB0cmFuc2xhdGlvbiBmb3IgZGV2aWNlIC9zcGlANzA0MTAwMDAg
KioNCihYRU4pIERUOiBidXMgaXMgZGVmYXVsdCAobmE9MiwgbnM9Mikgb24gLw0KKFhFTikgRFQ6
IHRyYW5zbGF0aW5nIGFkZHJlc3M6PDM+IDAwMDAwMDAwPDM+IDcwNDEwMDAwPDM+DQooWEVOKSBE
VDogcmVhY2hlZCByb290IG5vZGUNCihYRU4pICAgLSBNTUlPOiAwMDcwNDEwMDAwIC0gMDA3MDQx
MTAwMCBQMk1UeXBlPTUNCihYRU4pIGhhbmRsZSAvc3BpQDcwNDEwMDAwL3NwaWZsYXNoQDANCihY
RU4pIGR0X2lycV9udW1iZXI6IGRldj0vc3BpQDcwNDEwMDAwL3NwaWZsYXNoQDANCihYRU4pIC9z
cGlANzA0MTAwMDAvc3BpZmxhc2hAMCBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBD
aGVjayBpZiAvc3BpQDcwNDEwMDAwL3NwaWZsYXNoQDAgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQg
YWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3NwaUA3MDQxMDAwMC9zcGlmbGFzaEAw
DQooWEVOKSBoYW5kbGUgL3NwaUA3MDQxMDAwMC9zcGlmbGFzaEAwL2NvbnRyb2xsZXItZGF0YQ0K
KFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9zcGlANzA0MTAwMDAvc3BpZmxhc2hAMC9jb250cm9s
bGVyLWRhdGENCihYRU4pIC9zcGlANzA0MTAwMDAvc3BpZmxhc2hAMC9jb250cm9sbGVyLWRhdGEg
cGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3NwaUA3MDQxMDAwMC9z
cGlmbGFzaEAwL2NvbnRyb2xsZXItZGF0YSBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQN
CihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vc3BpQDcwNDEwMDAwL3NwaWZsYXNoQDAvY29udHJv
bGxlci1kYXRhDQooWEVOKSBoYW5kbGUgL2hvc3QxeA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2
PS9ob3N0MXgNCihYRU4pICB1c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRz
cGVjPTAgaW50bGVuPTYNCihYRU4pICBpbnRzaXplPTMgaW50bGVuPTYNCihYRU4pIGR0X2Rldmlj
ZV9nZXRfcmF3X2lycTogZGV2PS9ob3N0MXgsIGluZGV4PTANCihYRU4pICB1c2luZyAnaW50ZXJy
dXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTYNCihYRU4pICBpbnRzaXpl
PTMgaW50bGVuPTYNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBwYXI9L2ludGVycnVwdC1jb250cm9s
bGVyQDYwMDA0MDAwLGludHNwZWM9WzB4MDAwMDAwMDAgMHgwMDAwMDA0MS4uLl0sb2ludHNpemU9
Mw0KKFhFTikgZHRfaXJxX21hcF9yYXc6IGlwYXI9L2ludGVycnVwdC1jb250cm9sbGVyQDYwMDA0
MDAwLCBzaXplPTMNCihYRU4pICAtPiBhZGRyc2l6ZT0yDQooWEVOKSAgLT4gZ290IGl0ICENCihY
RU4pIGR0X2RldmljZV9nZXRfcmF3X2lycTogZGV2PS9ob3N0MXgsIGluZGV4PTENCihYRU4pICB1
c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTYNCihY
RU4pICBpbnRzaXplPTMgaW50bGVuPTYNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBwYXI9L2ludGVy
cnVwdC1jb250cm9sbGVyQDYwMDA0MDAwLGludHNwZWM9WzB4MDAwMDAwMDAgMHgwMDAwMDA0My4u
Ll0sb2ludHNpemU9Mw0KKFhFTikgZHRfaXJxX21hcF9yYXc6IGlwYXI9L2ludGVycnVwdC1jb250
cm9sbGVyQDYwMDA0MDAwLCBzaXplPTMNCihYRU4pICAtPiBhZGRyc2l6ZT0yDQooWEVOKSAgLT4g
Z290IGl0ICENCihYRU4pIC9ob3N0MXggcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMQ0KKFhFTikg
Q2hlY2sgaWYgL2hvc3QxeCBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0
X2lycV9udW1iZXI6IGRldj0vaG9zdDF4DQooWEVOKSAgdXNpbmcgJ2ludGVycnVwdHMnIHByb3Bl
cnR5DQooWEVOKSAgaW50c3BlYz0wIGludGxlbj02DQooWEVOKSAgaW50c2l6ZT0zIGludGxlbj02
DQooWEVOKSBkdF9kZXZpY2VfZ2V0X3Jhd19pcnE6IGRldj0vaG9zdDF4LCBpbmRleD0wDQooWEVO
KSAgdXNpbmcgJ2ludGVycnVwdHMnIHByb3BlcnR5DQooWEVOKSAgaW50c3BlYz0wIGludGxlbj02
DQooWEVOKSAgaW50c2l6ZT0zIGludGxlbj02DQooWEVOKSBkdF9pcnFfbWFwX3JhdzogcGFyPS9p
bnRlcnJ1cHQtY29udHJvbGxlckA2MDAwNDAwMCxpbnRzcGVjPVsweDAwMDAwMDAwIDB4MDAwMDAw
NDEuLi5dLG9pbnRzaXplPTMNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBpcGFyPS9pbnRlcnJ1cHQt
Y29udHJvbGxlckA2MDAwNDAwMCwgc2l6ZT0zDQooWEVOKSAgLT4gYWRkcnNpemU9Mg0KKFhFTikg
IC0+IGdvdCBpdCAhDQooWEVOKSBpcnEgMCBub3QgKGRpcmVjdGx5IG9yIGluZGlyZWN0bHkpIGNv
bm5lY3RlZCB0byBwcmltYXJ5Y29udHJvbGxlci4gQ29ubmVjdGVkIHRvIC9pbnRlcnJ1cHQtY29u
dHJvbGxlckA2MDAwNDAwMA0KKFhFTikgZHRfZGV2aWNlX2dldF9yYXdfaXJxOiBkZXY9L2hvc3Qx
eCwgaW5kZXg9MQ0KKFhFTikgIHVzaW5nICdpbnRlcnJ1cHRzJyBwcm9wZXJ0eQ0KKFhFTikgIGlu
dHNwZWM9MCBpbnRsZW49Ng0KKFhFTikgIGludHNpemU9MyBpbnRsZW49Ng0KKFhFTikgZHRfaXJx
X21hcF9yYXc6IHBhcj0vaW50ZXJydXB0LWNvbnRyb2xsZXJANjAwMDQwMDAsaW50c3BlYz1bMHgw
MDAwMDAwMCAweDAwMDAwMDQzLi4uXSxvaW50c2l6ZT0zDQooWEVOKSBkdF9pcnFfbWFwX3Jhdzog
aXBhcj0vaW50ZXJydXB0LWNvbnRyb2xsZXJANjAwMDQwMDAsIHNpemU9Mw0KKFhFTikgIC0+IGFk
ZHJzaXplPTINCihYRU4pICAtPiBnb3QgaXQgIQ0KKFhFTikgaXJxIDEgbm90IChkaXJlY3RseSBv
ciBpbmRpcmVjdGx5KSBjb25uZWN0ZWQgdG8gcHJpbWFyeWNvbnRyb2xsZXIuIENvbm5lY3RlZCB0
byAvaW50ZXJydXB0LWNvbnRyb2xsZXJANjAwMDQwMDANCihYRU4pIERUOiAqKiB0cmFuc2xhdGlv
biBmb3IgZGV2aWNlIC9ob3N0MXggKioNCihYRU4pIERUOiBidXMgaXMgZGVmYXVsdCAobmE9Miwg
bnM9Mikgb24gLw0KKFhFTikgRFQ6IHRyYW5zbGF0aW5nIGFkZHJlc3M6PDM+IDAwMDAwMDAwPDM+
IDUwMDAwMDAwPDM+DQooWEVOKSBEVDogcmVhY2hlZCByb290IG5vZGUNCihYRU4pICAgLSBNTUlP
OiAwMDUwMDAwMDAwIC0gMDA1MDAzNDAwMCBQMk1UeXBlPTUNCihYRU4pIGhhbmRsZSAvaG9zdDF4
L3ZpDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2hvc3QxeC92aQ0KKFhFTikgIHVzaW5nICdp
bnRlcnJ1cHRzJyBwcm9wZXJ0eQ0KKFhFTikgIGludHNwZWM9MCBpbnRsZW49Mw0KKFhFTikgIGlu
dHNpemU9MyBpbnRsZW49Mw0KKFhFTikgZHRfZGV2aWNlX2dldF9yYXdfaXJxOiBkZXY9L2hvc3Qx
eC92aSwgaW5kZXg9MA0KKFhFTikgIHVzaW5nICdpbnRlcnJ1cHRzJyBwcm9wZXJ0eQ0KKFhFTikg
IGludHNwZWM9MCBpbnRsZW49Mw0KKFhFTikgIGludHNpemU9MyBpbnRsZW49Mw0KKFhFTikgZHRf
aXJxX21hcF9yYXc6IHBhcj0vaW50ZXJydXB0LWNvbnRyb2xsZXJANjAwMDQwMDAsaW50c3BlYz1b
MHgwMDAwMDAwMCAweDAwMDAwMDQ1Li4uXSxvaW50c2l6ZT0zDQooWEVOKSBkdF9pcnFfbWFwX3Jh
dzogaXBhcj0vaW50ZXJydXB0LWNvbnRyb2xsZXJANjAwMDQwMDAsIHNpemU9Mw0KKFhFTikgIC0+
IGFkZHJzaXplPTINCihYRU4pICAtPiBnb3QgaXQgIQ0KKFhFTikgL2hvc3QxeC92aSBwYXNzdGhy
b3VnaCA9IDEgbmFkZHIgPSAxDQooWEVOKSBDaGVjayBpZiAvaG9zdDF4L3ZpIGlzIGJlaGluZCB0
aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9ob3N0MXgvdmkN
CihYRU4pICB1c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50
bGVuPTMNCihYRU4pICBpbnRzaXplPTMgaW50bGVuPTMNCihYRU4pIGR0X2RldmljZV9nZXRfcmF3
X2lycTogZGV2PS9ob3N0MXgvdmksIGluZGV4PTANCihYRU4pICB1c2luZyAnaW50ZXJydXB0cycg
cHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTMNCihYRU4pICBpbnRzaXplPTMgaW50
bGVuPTMNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBwYXI9L2ludGVycnVwdC1jb250cm9sbGVyQDYw
MDA0MDAwLGludHNwZWM9WzB4MDAwMDAwMDAgMHgwMDAwMDA0NS4uLl0sb2ludHNpemU9Mw0KKFhF
TikgZHRfaXJxX21hcF9yYXc6IGlwYXI9L2ludGVycnVwdC1jb250cm9sbGVyQDYwMDA0MDAwLCBz
aXplPTMNCihYRU4pICAtPiBhZGRyc2l6ZT0yDQooWEVOKSAgLT4gZ290IGl0ICENCihYRU4pIGly
cSAwIG5vdCAoZGlyZWN0bHkgb3IgaW5kaXJlY3RseSkgY29ubmVjdGVkIHRvIHByaW1hcnljb250
cm9sbGVyLiBDb25uZWN0ZWQgdG8gL2ludGVycnVwdC1jb250cm9sbGVyQDYwMDA0MDAwDQooWEVO
KSBEVDogKiogdHJhbnNsYXRpb24gZm9yIGRldmljZSAvaG9zdDF4L3ZpICoqDQooWEVOKSBEVDog
YnVzIGlzIGRlZmF1bHQgKG5hPTIsIG5zPTIpIG9uIC9ob3N0MXgNCihYRU4pIERUOiB0cmFuc2xh
dGluZyBhZGRyZXNzOjwzPiAwMDAwMDAwMDwzPiA1NDA4MDAwMDwzPg0KKFhFTikgRFQ6IHBhcmVu
dCBidXMgaXMgZGVmYXVsdCAobmE9MiwgbnM9Mikgb24gLw0KKFhFTikgRFQ6IGVtcHR5IHJhbmdl
czsgMToxIHRyYW5zbGF0aW9uDQooWEVOKSBEVDogcGFyZW50IHRyYW5zbGF0aW9uIGZvcjo8Mz4g
MDAwMDAwMDA8Mz4gMDAwMDAwMDA8Mz4NCihYRU4pIERUOiB3aXRoIG9mZnNldDogNTQwODAwMDAN
CihYRU4pIERUOiBvbmUgbGV2ZWwgdHJhbnNsYXRpb246PDM+IDAwMDAwMDAwPDM+IDU0MDgwMDAw
PDM+DQooWEVOKSBEVDogcmVhY2hlZCByb290IG5vZGUNCihYRU4pICAgLSBNTUlPOiAwMDU0MDgw
MDAwIC0gMDA1NDBjMDAwMCBQMk1UeXBlPTUNCihYRU4pIGhhbmRsZSAvaG9zdDF4L3ZpL3BvcnRz
DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2hvc3QxeC92aS9wb3J0cw0KKFhFTikgL2hvc3Qx
eC92aS9wb3J0cyBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvaG9z
dDF4L3ZpL3BvcnRzIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJx
X251bWJlcjogZGV2PS9ob3N0MXgvdmkvcG9ydHMNCihYRU4pIGhhbmRsZSAvaG9zdDF4L3ZpL3Bv
cnRzL3BvcnRAMA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9ob3N0MXgvdmkvcG9ydHMvcG9y
dEAwDQooWEVOKSAvaG9zdDF4L3ZpL3BvcnRzL3BvcnRAMCBwYXNzdGhyb3VnaCA9IDEgbmFkZHIg
PSAwDQooWEVOKSBDaGVjayBpZiAvaG9zdDF4L3ZpL3BvcnRzL3BvcnRAMCBpcyBiZWhpbmQgdGhl
IElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vaG9zdDF4L3ZpL3Bv
cnRzL3BvcnRAMA0KKFhFTikgaGFuZGxlIC9ob3N0MXgvdmkvcG9ydHMvcG9ydEAwL2VuZHBvaW50
DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2hvc3QxeC92aS9wb3J0cy9wb3J0QDAvZW5kcG9p
bnQNCihYRU4pIC9ob3N0MXgvdmkvcG9ydHMvcG9ydEAwL2VuZHBvaW50IHBhc3N0aHJvdWdoID0g
MSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9ob3N0MXgvdmkvcG9ydHMvcG9ydEAwL2VuZHBv
aW50IGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjog
ZGV2PS9ob3N0MXgvdmkvcG9ydHMvcG9ydEAwL2VuZHBvaW50DQooWEVOKSBoYW5kbGUgL2hvc3Qx
eC92aS9wb3J0cy9wb3J0QDENCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vaG9zdDF4L3ZpL3Bv
cnRzL3BvcnRAMQ0KKFhFTikgL2hvc3QxeC92aS9wb3J0cy9wb3J0QDEgcGFzc3Rocm91Z2ggPSAx
IG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL2hvc3QxeC92aS9wb3J0cy9wb3J0QDEgaXMgYmVo
aW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2hvc3Qx
eC92aS9wb3J0cy9wb3J0QDENCihYRU4pIGhhbmRsZSAvaG9zdDF4L3ZpL3BvcnRzL3BvcnRAMS9l
bmRwb2ludA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9ob3N0MXgvdmkvcG9ydHMvcG9ydEAx
L2VuZHBvaW50DQooWEVOKSAvaG9zdDF4L3ZpL3BvcnRzL3BvcnRAMS9lbmRwb2ludCBwYXNzdGhy
b3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvaG9zdDF4L3ZpL3BvcnRzL3BvcnRA
MS9lbmRwb2ludCBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9u
dW1iZXI6IGRldj0vaG9zdDF4L3ZpL3BvcnRzL3BvcnRAMS9lbmRwb2ludA0KKFhFTikgaGFuZGxl
IC9ob3N0MXgvdmktYnlwYXNzDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2hvc3QxeC92aS1i
eXBhc3MNCihYRU4pIC9ob3N0MXgvdmktYnlwYXNzIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDAN
CihYRU4pIENoZWNrIGlmIC9ob3N0MXgvdmktYnlwYXNzIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5k
IGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9ob3N0MXgvdmktYnlwYXNzDQooWEVO
KSBoYW5kbGUgL2hvc3QxeC9pc3BANTQ2MDAwMDANCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0v
aG9zdDF4L2lzcEA1NDYwMDAwMA0KKFhFTikgIHVzaW5nICdpbnRlcnJ1cHRzJyBwcm9wZXJ0eQ0K
KFhFTikgIGludHNwZWM9MCBpbnRsZW49Mw0KKFhFTikgIGludHNpemU9MyBpbnRsZW49Mw0KKFhF
TikgZHRfZGV2aWNlX2dldF9yYXdfaXJxOiBkZXY9L2hvc3QxeC9pc3BANTQ2MDAwMDAsIGluZGV4
PTANCihYRU4pICB1c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAg
aW50bGVuPTMNCihYRU4pICBpbnRzaXplPTMgaW50bGVuPTMNCihYRU4pIGR0X2lycV9tYXBfcmF3
OiBwYXI9L2ludGVycnVwdC1jb250cm9sbGVyQDYwMDA0MDAwLGludHNwZWM9WzB4MDAwMDAwMDAg
MHgwMDAwMDA0Ny4uLl0sb2ludHNpemU9Mw0KKFhFTikgZHRfaXJxX21hcF9yYXc6IGlwYXI9L2lu
dGVycnVwdC1jb250cm9sbGVyQDYwMDA0MDAwLCBzaXplPTMNCihYRU4pICAtPiBhZGRyc2l6ZT0y
DQooWEVOKSAgLT4gZ290IGl0ICENCihYRU4pIC9ob3N0MXgvaXNwQDU0NjAwMDAwIHBhc3N0aHJv
dWdoID0gMSBuYWRkciA9IDENCihYRU4pIENoZWNrIGlmIC9ob3N0MXgvaXNwQDU0NjAwMDAwIGlz
IGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9o
b3N0MXgvaXNwQDU0NjAwMDAwDQooWEVOKSAgdXNpbmcgJ2ludGVycnVwdHMnIHByb3BlcnR5DQoo
WEVOKSAgaW50c3BlYz0wIGludGxlbj0zDQooWEVOKSAgaW50c2l6ZT0zIGludGxlbj0zDQooWEVO
KSBkdF9kZXZpY2VfZ2V0X3Jhd19pcnE6IGRldj0vaG9zdDF4L2lzcEA1NDYwMDAwMCwgaW5kZXg9
MA0KKFhFTikgIHVzaW5nICdpbnRlcnJ1cHRzJyBwcm9wZXJ0eQ0KKFhFTikgIGludHNwZWM9MCBp
bnRsZW49Mw0KKFhFTikgIGludHNpemU9MyBpbnRsZW49Mw0KKFhFTikgZHRfaXJxX21hcF9yYXc6
IHBhcj0vaW50ZXJydXB0LWNvbnRyb2xsZXJANjAwMDQwMDAsaW50c3BlYz1bMHgwMDAwMDAwMCAw
eDAwMDAwMDQ3Li4uXSxvaW50c2l6ZT0zDQooWEVOKSBkdF9pcnFfbWFwX3JhdzogaXBhcj0vaW50
ZXJydXB0LWNvbnRyb2xsZXJANjAwMDQwMDAsIHNpemU9Mw0KKFhFTikgIC0+IGFkZHJzaXplPTIN
CihYRU4pICAtPiBnb3QgaXQgIQ0KKFhFTikgaXJxIDAgbm90IChkaXJlY3RseSBvciBpbmRpcmVj
dGx5KSBjb25uZWN0ZWQgdG8gcHJpbWFyeWNvbnRyb2xsZXIuIENvbm5lY3RlZCB0byAvaW50ZXJy
dXB0LWNvbnRyb2xsZXJANjAwMDQwMDANCihYRU4pIERUOiAqKiB0cmFuc2xhdGlvbiBmb3IgZGV2
aWNlIC9ob3N0MXgvaXNwQDU0NjAwMDAwICoqDQooWEVOKSBEVDogYnVzIGlzIGRlZmF1bHQgKG5h
PTIsIG5zPTIpIG9uIC9ob3N0MXgNCihYRU4pIERUOiB0cmFuc2xhdGluZyBhZGRyZXNzOjwzPiAw
MDAwMDAwMDwzPiA1NDYwMDAwMDwzPg0KKFhFTikgRFQ6IHBhcmVudCBidXMgaXMgZGVmYXVsdCAo
bmE9MiwgbnM9Mikgb24gLw0KKFhFTikgRFQ6IGVtcHR5IHJhbmdlczsgMToxIHRyYW5zbGF0aW9u
DQooWEVOKSBEVDogcGFyZW50IHRyYW5zbGF0aW9uIGZvcjo8Mz4gMDAwMDAwMDA8Mz4gMDAwMDAw
MDA8Mz4NCihYRU4pIERUOiB3aXRoIG9mZnNldDogNTQ2MDAwMDANCihYRU4pIERUOiBvbmUgbGV2
ZWwgdHJhbnNsYXRpb246PDM+IDAwMDAwMDAwPDM+IDU0NjAwMDAwPDM+DQooWEVOKSBEVDogcmVh
Y2hlZCByb290IG5vZGUNCihYRU4pICAgLSBNTUlPOiAwMDU0NjAwMDAwIC0gMDA1NDY0MDAwMCBQ
Mk1UeXBlPTUNCihYRU4pIGhhbmRsZSAvaG9zdDF4L2lzcEA1NDY4MDAwMA0KKFhFTikgZHRfaXJx
X251bWJlcjogZGV2PS9ob3N0MXgvaXNwQDU0NjgwMDAwDQooWEVOKSAgdXNpbmcgJ2ludGVycnVw
dHMnIHByb3BlcnR5DQooWEVOKSAgaW50c3BlYz0wIGludGxlbj0zDQooWEVOKSAgaW50c2l6ZT0z
IGludGxlbj0zDQooWEVOKSBkdF9kZXZpY2VfZ2V0X3Jhd19pcnE6IGRldj0vaG9zdDF4L2lzcEA1
NDY4MDAwMCwgaW5kZXg9MA0KKFhFTikgIHVzaW5nICdpbnRlcnJ1cHRzJyBwcm9wZXJ0eQ0KKFhF
TikgIGludHNwZWM9MCBpbnRsZW49Mw0KKFhFTikgIGludHNpemU9MyBpbnRsZW49Mw0KKFhFTikg
ZHRfaXJxX21hcF9yYXc6IHBhcj0vaW50ZXJydXB0LWNvbnRyb2xsZXJANjAwMDQwMDAsaW50c3Bl
Yz1bMHgwMDAwMDAwMCAweDAwMDAwMDQ2Li4uXSxvaW50c2l6ZT0zDQooWEVOKSBkdF9pcnFfbWFw
X3JhdzogaXBhcj0vaW50ZXJydXB0LWNvbnRyb2xsZXJANjAwMDQwMDAsIHNpemU9Mw0KKFhFTikg
IC0+IGFkZHJzaXplPTINCihYRU4pICAtPiBnb3QgaXQgIQ0KKFhFTikgL2hvc3QxeC9pc3BANTQ2
ODAwMDAgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMQ0KKFhFTikgQ2hlY2sgaWYgL2hvc3QxeC9p
c3BANTQ2ODAwMDAgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFf
bnVtYmVyOiBkZXY9L2hvc3QxeC9pc3BANTQ2ODAwMDANCihYRU4pICB1c2luZyAnaW50ZXJydXB0
cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTMNCihYRU4pICBpbnRzaXplPTMg
aW50bGVuPTMNCihYRU4pIGR0X2RldmljZV9nZXRfcmF3X2lycTogZGV2PS9ob3N0MXgvaXNwQDU0
NjgwMDAwLCBpbmRleD0wDQooWEVOKSAgdXNpbmcgJ2ludGVycnVwdHMnIHByb3BlcnR5DQooWEVO
KSAgaW50c3BlYz0wIGludGxlbj0zDQooWEVOKSAgaW50c2l6ZT0zIGludGxlbj0zDQooWEVOKSBk
dF9pcnFfbWFwX3JhdzogcGFyPS9pbnRlcnJ1cHQtY29udHJvbGxlckA2MDAwNDAwMCxpbnRzcGVj
PVsweDAwMDAwMDAwIDB4MDAwMDAwNDYuLi5dLG9pbnRzaXplPTMNCihYRU4pIGR0X2lycV9tYXBf
cmF3OiBpcGFyPS9pbnRlcnJ1cHQtY29udHJvbGxlckA2MDAwNDAwMCwgc2l6ZT0zDQooWEVOKSAg
LT4gYWRkcnNpemU9Mg0KKFhFTikgIC0+IGdvdCBpdCAhDQooWEVOKSBpcnEgMCBub3QgKGRpcmVj
dGx5IG9yIGluZGlyZWN0bHkpIGNvbm5lY3RlZCB0byBwcmltYXJ5Y29udHJvbGxlci4gQ29ubmVj
dGVkIHRvIC9pbnRlcnJ1cHQtY29udHJvbGxlckA2MDAwNDAwMA0KKFhFTikgRFQ6ICoqIHRyYW5z
bGF0aW9uIGZvciBkZXZpY2UgL2hvc3QxeC9pc3BANTQ2ODAwMDAgKioNCihYRU4pIERUOiBidXMg
aXMgZGVmYXVsdCAobmE9MiwgbnM9Mikgb24gL2hvc3QxeA0KKFhFTikgRFQ6IHRyYW5zbGF0aW5n
IGFkZHJlc3M6PDM+IDAwMDAwMDAwPDM+IDU0NjgwMDAwPDM+DQooWEVOKSBEVDogcGFyZW50IGJ1
cyBpcyBkZWZhdWx0IChuYT0yLCBucz0yKSBvbiAvDQooWEVOKSBEVDogZW1wdHkgcmFuZ2VzOyAx
OjEgdHJhbnNsYXRpb24NCihYRU4pIERUOiBwYXJlbnQgdHJhbnNsYXRpb24gZm9yOjwzPiAwMDAw
MDAwMDwzPiAwMDAwMDAwMDwzPg0KKFhFTikgRFQ6IHdpdGggb2Zmc2V0OiA1NDY4MDAwMA0KKFhF
TikgRFQ6IG9uZSBsZXZlbCB0cmFuc2xhdGlvbjo8Mz4gMDAwMDAwMDA8Mz4gNTQ2ODAwMDA8Mz4N
CihYRU4pIERUOiByZWFjaGVkIHJvb3Qgbm9kZQ0KKFhFTikgICAtIE1NSU86IDAwNTQ2ODAwMDAg
LSAwMDU0NmMwMDAwIFAyTVR5cGU9NQ0KKFhFTikgaGFuZGxlIC9ob3N0MXgvZGNANTQyMDAwMDAN
CihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vaG9zdDF4L2RjQDU0MjAwMDAwDQooWEVOKSAgdXNp
bmcgJ2ludGVycnVwdHMnIHByb3BlcnR5DQooWEVOKSAgaW50c3BlYz0wIGludGxlbj0zDQooWEVO
KSAgaW50c2l6ZT0zIGludGxlbj0zDQooWEVOKSBkdF9kZXZpY2VfZ2V0X3Jhd19pcnE6IGRldj0v
aG9zdDF4L2RjQDU0MjAwMDAwLCBpbmRleD0wDQooWEVOKSAgdXNpbmcgJ2ludGVycnVwdHMnIHBy
b3BlcnR5DQooWEVOKSAgaW50c3BlYz0wIGludGxlbj0zDQooWEVOKSAgaW50c2l6ZT0zIGludGxl
bj0zDQooWEVOKSBkdF9pcnFfbWFwX3JhdzogcGFyPS9pbnRlcnJ1cHQtY29udHJvbGxlckA2MDAw
NDAwMCxpbnRzcGVjPVsweDAwMDAwMDAwIDB4MDAwMDAwNDkuLi5dLG9pbnRzaXplPTMNCihYRU4p
IGR0X2lycV9tYXBfcmF3OiBpcGFyPS9pbnRlcnJ1cHQtY29udHJvbGxlckA2MDAwNDAwMCwgc2l6
ZT0zDQooWEVOKSAgLT4gYWRkcnNpemU9Mg0KKFhFTikgIC0+IGdvdCBpdCAhDQooWEVOKSAvaG9z
dDF4L2RjQDU0MjAwMDAwIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDENCihYRU4pIENoZWNrIGlm
IC9ob3N0MXgvZGNANTQyMDAwMDAgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVO
KSBkdF9pcnFfbnVtYmVyOiBkZXY9L2hvc3QxeC9kY0A1NDIwMDAwMA0KKFhFTikgIHVzaW5nICdp
bnRlcnJ1cHRzJyBwcm9wZXJ0eQ0KKFhFTikgIGludHNwZWM9MCBpbnRsZW49Mw0KKFhFTikgIGlu
dHNpemU9MyBpbnRsZW49Mw0KKFhFTikgZHRfZGV2aWNlX2dldF9yYXdfaXJxOiBkZXY9L2hvc3Qx
eC9kY0A1NDIwMDAwMCwgaW5kZXg9MA0KKFhFTikgIHVzaW5nICdpbnRlcnJ1cHRzJyBwcm9wZXJ0
eQ0KKFhFTikgIGludHNwZWM9MCBpbnRsZW49Mw0KKFhFTikgIGludHNpemU9MyBpbnRsZW49Mw0K
KFhFTikgZHRfaXJxX21hcF9yYXc6IHBhcj0vaW50ZXJydXB0LWNvbnRyb2xsZXJANjAwMDQwMDAs
aW50c3BlYz1bMHgwMDAwMDAwMCAweDAwMDAwMDQ5Li4uXSxvaW50c2l6ZT0zDQooWEVOKSBkdF9p
cnFfbWFwX3JhdzogaXBhcj0vaW50ZXJydXB0LWNvbnRyb2xsZXJANjAwMDQwMDAsIHNpemU9Mw0K
KFhFTikgIC0+IGFkZHJzaXplPTINCihYRU4pICAtPiBnb3QgaXQgIQ0KKFhFTikgaXJxIDAgbm90
IChkaXJlY3RseSBvciBpbmRpcmVjdGx5KSBjb25uZWN0ZWQgdG8gcHJpbWFyeWNvbnRyb2xsZXIu
IENvbm5lY3RlZCB0byAvaW50ZXJydXB0LWNvbnRyb2xsZXJANjAwMDQwMDANCihYRU4pIERUOiAq
KiB0cmFuc2xhdGlvbiBmb3IgZGV2aWNlIC9ob3N0MXgvZGNANTQyMDAwMDAgKioNCihYRU4pIERU
OiBidXMgaXMgZGVmYXVsdCAobmE9MiwgbnM9Mikgb24gL2hvc3QxeA0KKFhFTikgRFQ6IHRyYW5z
bGF0aW5nIGFkZHJlc3M6PDM+IDAwMDAwMDAwPDM+IDU0MjAwMDAwPDM+DQooWEVOKSBEVDogcGFy
ZW50IGJ1cyBpcyBkZWZhdWx0IChuYT0yLCBucz0yKSBvbiAvDQooWEVOKSBEVDogZW1wdHkgcmFu
Z2VzOyAxOjEgdHJhbnNsYXRpb24NCihYRU4pIERUOiBwYXJlbnQgdHJhbnNsYXRpb24gZm9yOjwz
PiAwMDAwMDAwMDwzPiAwMDAwMDAwMDwzPg0KKFhFTikgRFQ6IHdpdGggb2Zmc2V0OiA1NDIwMDAw
MA0KKFhFTikgRFQ6IG9uZSBsZXZlbCB0cmFuc2xhdGlvbjo8Mz4gMDAwMDAwMDA8Mz4gNTQyMDAw
MDA8Mz4NCihYRU4pIERUOiByZWFjaGVkIHJvb3Qgbm9kZQ0KKFhFTikgICAtIE1NSU86IDAwNTQy
MDAwMDAgLSAwMDU0MjQwMDAwIFAyTVR5cGU9NQ0KKFhFTikgaGFuZGxlIC9ob3N0MXgvZGNANTQy
MDAwMDAvcmdiDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2hvc3QxeC9kY0A1NDIwMDAwMC9y
Z2INCihYRU4pIC9ob3N0MXgvZGNANTQyMDAwMDAvcmdiIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9
IDANCihYRU4pIENoZWNrIGlmIC9ob3N0MXgvZGNANTQyMDAwMDAvcmdiIGlzIGJlaGluZCB0aGUg
SU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9ob3N0MXgvZGNANTQy
MDAwMDAvcmdiDQooWEVOKSBoYW5kbGUgL2hvc3QxeC9kY0A1NDI0MDAwMA0KKFhFTikgZHRfaXJx
X251bWJlcjogZGV2PS9ob3N0MXgvZGNANTQyNDAwMDANCihYRU4pICB1c2luZyAnaW50ZXJydXB0
cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTMNCihYRU4pICBpbnRzaXplPTMg
aW50bGVuPTMNCihYRU4pIGR0X2RldmljZV9nZXRfcmF3X2lycTogZGV2PS9ob3N0MXgvZGNANTQy
NDAwMDAsIGluZGV4PTANCihYRU4pICB1c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihYRU4p
ICBpbnRzcGVjPTAgaW50bGVuPTMNCihYRU4pICBpbnRzaXplPTMgaW50bGVuPTMNCihYRU4pIGR0
X2lycV9tYXBfcmF3OiBwYXI9L2ludGVycnVwdC1jb250cm9sbGVyQDYwMDA0MDAwLGludHNwZWM9
WzB4MDAwMDAwMDAgMHgwMDAwMDA0YS4uLl0sb2ludHNpemU9Mw0KKFhFTikgZHRfaXJxX21hcF9y
YXc6IGlwYXI9L2ludGVycnVwdC1jb250cm9sbGVyQDYwMDA0MDAwLCBzaXplPTMNCihYRU4pICAt
PiBhZGRyc2l6ZT0yDQooWEVOKSAgLT4gZ290IGl0ICENCihYRU4pIC9ob3N0MXgvZGNANTQyNDAw
MDAgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMQ0KKFhFTikgQ2hlY2sgaWYgL2hvc3QxeC9kY0A1
NDI0MDAwMCBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1i
ZXI6IGRldj0vaG9zdDF4L2RjQDU0MjQwMDAwDQooWEVOKSAgdXNpbmcgJ2ludGVycnVwdHMnIHBy
b3BlcnR5DQooWEVOKSAgaW50c3BlYz0wIGludGxlbj0zDQooWEVOKSAgaW50c2l6ZT0zIGludGxl
bj0zDQooWEVOKSBkdF9kZXZpY2VfZ2V0X3Jhd19pcnE6IGRldj0vaG9zdDF4L2RjQDU0MjQwMDAw
LCBpbmRleD0wDQooWEVOKSAgdXNpbmcgJ2ludGVycnVwdHMnIHByb3BlcnR5DQooWEVOKSAgaW50
c3BlYz0wIGludGxlbj0zDQooWEVOKSAgaW50c2l6ZT0zIGludGxlbj0zDQooWEVOKSBkdF9pcnFf
bWFwX3JhdzogcGFyPS9pbnRlcnJ1cHQtY29udHJvbGxlckA2MDAwNDAwMCxpbnRzcGVjPVsweDAw
MDAwMDAwIDB4MDAwMDAwNGEuLi5dLG9pbnRzaXplPTMNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBp
cGFyPS9pbnRlcnJ1cHQtY29udHJvbGxlckA2MDAwNDAwMCwgc2l6ZT0zDQooWEVOKSAgLT4gYWRk
cnNpemU9Mg0KKFhFTikgIC0+IGdvdCBpdCAhDQooWEVOKSBpcnEgMCBub3QgKGRpcmVjdGx5IG9y
IGluZGlyZWN0bHkpIGNvbm5lY3RlZCB0byBwcmltYXJ5Y29udHJvbGxlci4gQ29ubmVjdGVkIHRv
IC9pbnRlcnJ1cHQtY29udHJvbGxlckA2MDAwNDAwMA0KKFhFTikgRFQ6ICoqIHRyYW5zbGF0aW9u
IGZvciBkZXZpY2UgL2hvc3QxeC9kY0A1NDI0MDAwMCAqKg0KKFhFTikgRFQ6IGJ1cyBpcyBkZWZh
dWx0IChuYT0yLCBucz0yKSBvbiAvaG9zdDF4DQooWEVOKSBEVDogdHJhbnNsYXRpbmcgYWRkcmVz
czo8Mz4gMDAwMDAwMDA8Mz4gNTQyNDAwMDA8Mz4NCihYRU4pIERUOiBwYXJlbnQgYnVzIGlzIGRl
ZmF1bHQgKG5hPTIsIG5zPTIpIG9uIC8NCihYRU4pIERUOiBlbXB0eSByYW5nZXM7IDE6MSB0cmFu
c2xhdGlvbg0KKFhFTikgRFQ6IHBhcmVudCB0cmFuc2xhdGlvbiBmb3I6PDM+IDAwMDAwMDAwPDM+
IDAwMDAwMDAwPDM+DQooWEVOKSBEVDogd2l0aCBvZmZzZXQ6IDU0MjQwMDAwDQooWEVOKSBEVDog
b25lIGxldmVsIHRyYW5zbGF0aW9uOjwzPiAwMDAwMDAwMDwzPiA1NDI0MDAwMDwzPg0KKFhFTikg
RFQ6IHJlYWNoZWQgcm9vdCBub2RlDQooWEVOKSAgIC0gTU1JTzogMDA1NDI0MDAwMCAtIDAwNTQy
ODAwMDAgUDJNVHlwZT01DQooWEVOKSBoYW5kbGUgL2hvc3QxeC9kY0A1NDI0MDAwMC9yZ2INCihY
RU4pIGR0X2lycV9udW1iZXI6IGRldj0vaG9zdDF4L2RjQDU0MjQwMDAwL3JnYg0KKFhFTikgL2hv
c3QxeC9kY0A1NDI0MDAwMC9yZ2IgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hl
Y2sgaWYgL2hvc3QxeC9kY0A1NDI0MDAwMC9yZ2IgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRk
IGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2hvc3QxeC9kY0A1NDI0MDAwMC9yZ2INCihY
RU4pIGhhbmRsZSAvaG9zdDF4L2RzaQ0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9ob3N0MXgv
ZHNpDQooWEVOKSAvaG9zdDF4L2RzaSBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAyDQooWEVOKSBD
aGVjayBpZiAvaG9zdDF4L2RzaSBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4p
IGR0X2lycV9udW1iZXI6IGRldj0vaG9zdDF4L2RzaQ0KKFhFTikgRFQ6ICoqIHRyYW5zbGF0aW9u
IGZvciBkZXZpY2UgL2hvc3QxeC9kc2kgKioNCihYRU4pIERUOiBidXMgaXMgZGVmYXVsdCAobmE9
MiwgbnM9Mikgb24gL2hvc3QxeA0KKFhFTikgRFQ6IHRyYW5zbGF0aW5nIGFkZHJlc3M6PDM+IDAw
MDAwMDAwPDM+IDU0MzAwMDAwPDM+DQooWEVOKSBEVDogcGFyZW50IGJ1cyBpcyBkZWZhdWx0IChu
YT0yLCBucz0yKSBvbiAvDQooWEVOKSBEVDogZW1wdHkgcmFuZ2VzOyAxOjEgdHJhbnNsYXRpb24N
CihYRU4pIERUOiBwYXJlbnQgdHJhbnNsYXRpb24gZm9yOjwzPiAwMDAwMDAwMDwzPiAwMDAwMDAw
MDwzPg0KKFhFTikgRFQ6IHdpdGggb2Zmc2V0OiA1NDMwMDAwMA0KKFhFTikgRFQ6IG9uZSBsZXZl
bCB0cmFuc2xhdGlvbjo8Mz4gMDAwMDAwMDA8Mz4gNTQzMDAwMDA8Mz4NCihYRU4pIERUOiByZWFj
aGVkIHJvb3Qgbm9kZQ0KKFhFTikgICAtIE1NSU86IDAwNTQzMDAwMDAgLSAwMDU0MzQwMDAwIFAy
TVR5cGU9NQ0KKFhFTikgRFQ6ICoqIHRyYW5zbGF0aW9uIGZvciBkZXZpY2UgL2hvc3QxeC9kc2kg
KioNCihYRU4pIERUOiBidXMgaXMgZGVmYXVsdCAobmE9MiwgbnM9Mikgb24gL2hvc3QxeA0KKFhF
TikgRFQ6IHRyYW5zbGF0aW5nIGFkZHJlc3M6PDM+IDAwMDAwMDAwPDM+IDU0NDAwMDAwPDM+DQoo
WEVOKSBEVDogcGFyZW50IGJ1cyBpcyBkZWZhdWx0IChuYT0yLCBucz0yKSBvbiAvDQooWEVOKSBE
VDogZW1wdHkgcmFuZ2VzOyAxOjEgdHJhbnNsYXRpb24NCihYRU4pIERUOiBwYXJlbnQgdHJhbnNs
YXRpb24gZm9yOjwzPiAwMDAwMDAwMDwzPiAwMDAwMDAwMDwzPg0KKFhFTikgRFQ6IHdpdGggb2Zm
c2V0OiA1NDQwMDAwMA0KKFhFTikgRFQ6IG9uZSBsZXZlbCB0cmFuc2xhdGlvbjo8Mz4gMDAwMDAw
MDA8Mz4gNTQ0MDAwMDA8Mz4NCihYRU4pIERUOiByZWFjaGVkIHJvb3Qgbm9kZQ0KKFhFTikgICAt
IE1NSU86IDAwNTQ0MDAwMDAgLSAwMDU0NDQwMDAwIFAyTVR5cGU9NQ0KKFhFTikgaGFuZGxlIC9o
b3N0MXgvdmljDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2hvc3QxeC92aWMNCihYRU4pIC9o
b3N0MXgvdmljIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDENCihYRU4pIENoZWNrIGlmIC9ob3N0
MXgvdmljIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJl
cjogZGV2PS9ob3N0MXgvdmljDQooWEVOKSBEVDogKiogdHJhbnNsYXRpb24gZm9yIGRldmljZSAv
aG9zdDF4L3ZpYyAqKg0KKFhFTikgRFQ6IGJ1cyBpcyBkZWZhdWx0IChuYT0yLCBucz0yKSBvbiAv
aG9zdDF4DQooWEVOKSBEVDogdHJhbnNsYXRpbmcgYWRkcmVzczo8Mz4gMDAwMDAwMDA8Mz4gNTQz
NDAwMDA8Mz4NCihYRU4pIERUOiBwYXJlbnQgYnVzIGlzIGRlZmF1bHQgKG5hPTIsIG5zPTIpIG9u
IC8NCihYRU4pIERUOiBlbXB0eSByYW5nZXM7IDE6MSB0cmFuc2xhdGlvbg0KKFhFTikgRFQ6IHBh
cmVudCB0cmFuc2xhdGlvbiBmb3I6PDM+IDAwMDAwMDAwPDM+IDAwMDAwMDAwPDM+DQooWEVOKSBE
VDogd2l0aCBvZmZzZXQ6IDU0MzQwMDAwDQooWEVOKSBEVDogb25lIGxldmVsIHRyYW5zbGF0aW9u
OjwzPiAwMDAwMDAwMDwzPiA1NDM0MDAwMDwzPg0KKFhFTikgRFQ6IHJlYWNoZWQgcm9vdCBub2Rl
DQooWEVOKSAgIC0gTU1JTzogMDA1NDM0MDAwMCAtIDAwNTQzODAwMDAgUDJNVHlwZT01DQooWEVO
KSBoYW5kbGUgL2hvc3QxeC9udmVuYw0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9ob3N0MXgv
bnZlbmMNCihYRU4pIC9ob3N0MXgvbnZlbmMgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMQ0KKFhF
TikgQ2hlY2sgaWYgL2hvc3QxeC9udmVuYyBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQN
CihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vaG9zdDF4L252ZW5jDQooWEVOKSBEVDogKiogdHJh
bnNsYXRpb24gZm9yIGRldmljZSAvaG9zdDF4L252ZW5jICoqDQooWEVOKSBEVDogYnVzIGlzIGRl
ZmF1bHQgKG5hPTIsIG5zPTIpIG9uIC9ob3N0MXgNCihYRU4pIERUOiB0cmFuc2xhdGluZyBhZGRy
ZXNzOjwzPiAwMDAwMDAwMDwzPiA1NDRjMDAwMDwzPg0KKFhFTikgRFQ6IHBhcmVudCBidXMgaXMg
ZGVmYXVsdCAobmE9MiwgbnM9Mikgb24gLw0KKFhFTikgRFQ6IGVtcHR5IHJhbmdlczsgMToxIHRy
YW5zbGF0aW9uDQooWEVOKSBEVDogcGFyZW50IHRyYW5zbGF0aW9uIGZvcjo8Mz4gMDAwMDAwMDA8
Mz4gMDAwMDAwMDA8Mz4NCihYRU4pIERUOiB3aXRoIG9mZnNldDogNTQ0YzAwMDANCihYRU4pIERU
OiBvbmUgbGV2ZWwgdHJhbnNsYXRpb246PDM+IDAwMDAwMDAwPDM+IDU0NGMwMDAwPDM+DQooWEVO
KSBEVDogcmVhY2hlZCByb290IG5vZGUNCihYRU4pICAgLSBNTUlPOiAwMDU0NGMwMDAwIC0gMDA1
NDUwMDAwMCBQMk1UeXBlPTUNCihYRU4pIGhhbmRsZSAvaG9zdDF4L3RzZWMNCihYRU4pIGR0X2ly
cV9udW1iZXI6IGRldj0vaG9zdDF4L3RzZWMNCihYRU4pIC9ob3N0MXgvdHNlYyBwYXNzdGhyb3Vn
aCA9IDEgbmFkZHIgPSAxDQooWEVOKSBDaGVjayBpZiAvaG9zdDF4L3RzZWMgaXMgYmVoaW5kIHRo
ZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2hvc3QxeC90c2Vj
DQooWEVOKSBEVDogKiogdHJhbnNsYXRpb24gZm9yIGRldmljZSAvaG9zdDF4L3RzZWMgKioNCihY
RU4pIERUOiBidXMgaXMgZGVmYXVsdCAobmE9MiwgbnM9Mikgb24gL2hvc3QxeA0KKFhFTikgRFQ6
IHRyYW5zbGF0aW5nIGFkZHJlc3M6PDM+IDAwMDAwMDAwPDM+IDU0NTAwMDAwPDM+DQooWEVOKSBE
VDogcGFyZW50IGJ1cyBpcyBkZWZhdWx0IChuYT0yLCBucz0yKSBvbiAvDQooWEVOKSBEVDogZW1w
dHkgcmFuZ2VzOyAxOjEgdHJhbnNsYXRpb24NCihYRU4pIERUOiBwYXJlbnQgdHJhbnNsYXRpb24g
Zm9yOjwzPiAwMDAwMDAwMDwzPiAwMDAwMDAwMDwzPg0KKFhFTikgRFQ6IHdpdGggb2Zmc2V0OiA1
NDUwMDAwMA0KKFhFTikgRFQ6IG9uZSBsZXZlbCB0cmFuc2xhdGlvbjo8Mz4gMDAwMDAwMDA8Mz4g
NTQ1MDAwMDA8Mz4NCihYRU4pIERUOiByZWFjaGVkIHJvb3Qgbm9kZQ0KKFhFTikgICAtIE1NSU86
IDAwNTQ1MDAwMDAgLSAwMDU0NTQwMDAwIFAyTVR5cGU9NQ0KKFhFTikgaGFuZGxlIC9ob3N0MXgv
dHNlY2INCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vaG9zdDF4L3RzZWNiDQooWEVOKSAvaG9z
dDF4L3RzZWNiIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDENCihYRU4pIENoZWNrIGlmIC9ob3N0
MXgvdHNlY2IgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVt
YmVyOiBkZXY9L2hvc3QxeC90c2VjYg0KKFhFTikgRFQ6ICoqIHRyYW5zbGF0aW9uIGZvciBkZXZp
Y2UgL2hvc3QxeC90c2VjYiAqKg0KKFhFTikgRFQ6IGJ1cyBpcyBkZWZhdWx0IChuYT0yLCBucz0y
KSBvbiAvaG9zdDF4DQooWEVOKSBEVDogdHJhbnNsYXRpbmcgYWRkcmVzczo8Mz4gMDAwMDAwMDA8
Mz4gNTQxMDAwMDA8Mz4NCihYRU4pIERUOiBwYXJlbnQgYnVzIGlzIGRlZmF1bHQgKG5hPTIsIG5z
PTIpIG9uIC8NCihYRU4pIERUOiBlbXB0eSByYW5nZXM7IDE6MSB0cmFuc2xhdGlvbg0KKFhFTikg
RFQ6IHBhcmVudCB0cmFuc2xhdGlvbiBmb3I6PDM+IDAwMDAwMDAwPDM+IDAwMDAwMDAwPDM+DQoo
WEVOKSBEVDogd2l0aCBvZmZzZXQ6IDU0MTAwMDAwDQooWEVOKSBEVDogb25lIGxldmVsIHRyYW5z
bGF0aW9uOjwzPiAwMDAwMDAwMDwzPiA1NDEwMDAwMDwzPg0KKFhFTikgRFQ6IHJlYWNoZWQgcm9v
dCBub2RlDQooWEVOKSAgIC0gTU1JTzogMDA1NDEwMDAwMCAtIDAwNTQxNDAwMDAgUDJNVHlwZT01
DQooWEVOKSBoYW5kbGUgL2hvc3QxeC9udmRlYw0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9o
b3N0MXgvbnZkZWMNCihYRU4pIC9ob3N0MXgvbnZkZWMgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0g
MQ0KKFhFTikgQ2hlY2sgaWYgL2hvc3QxeC9udmRlYyBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBh
ZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vaG9zdDF4L252ZGVjDQooWEVOKSBEVDog
KiogdHJhbnNsYXRpb24gZm9yIGRldmljZSAvaG9zdDF4L252ZGVjICoqDQooWEVOKSBEVDogYnVz
IGlzIGRlZmF1bHQgKG5hPTIsIG5zPTIpIG9uIC9ob3N0MXgNCihYRU4pIERUOiB0cmFuc2xhdGlu
ZyBhZGRyZXNzOjwzPiAwMDAwMDAwMDwzPiA1NDQ4MDAwMDwzPg0KKFhFTikgRFQ6IHBhcmVudCBi
dXMgaXMgZGVmYXVsdCAobmE9MiwgbnM9Mikgb24gLw0KKFhFTikgRFQ6IGVtcHR5IHJhbmdlczsg
MToxIHRyYW5zbGF0aW9uDQooWEVOKSBEVDogcGFyZW50IHRyYW5zbGF0aW9uIGZvcjo8Mz4gMDAw
MDAwMDA8Mz4gMDAwMDAwMDA8Mz4NCihYRU4pIERUOiB3aXRoIG9mZnNldDogNTQ0ODAwMDANCihY
RU4pIERUOiBvbmUgbGV2ZWwgdHJhbnNsYXRpb246PDM+IDAwMDAwMDAwPDM+IDU0NDgwMDAwPDM+
DQooWEVOKSBEVDogcmVhY2hlZCByb290IG5vZGUNCihYRU4pICAgLSBNTUlPOiAwMDU0NDgwMDAw
IC0gMDA1NDRjMDAwMCBQMk1UeXBlPTUNCihYRU4pIGhhbmRsZSAvaG9zdDF4L252anBnDQooWEVO
KSBkdF9pcnFfbnVtYmVyOiBkZXY9L2hvc3QxeC9udmpwZw0KKFhFTikgL2hvc3QxeC9udmpwZyBw
YXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAxDQooWEVOKSBDaGVjayBpZiAvaG9zdDF4L252anBnIGlz
IGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9o
b3N0MXgvbnZqcGcNCihYRU4pIERUOiAqKiB0cmFuc2xhdGlvbiBmb3IgZGV2aWNlIC9ob3N0MXgv
bnZqcGcgKioNCihYRU4pIERUOiBidXMgaXMgZGVmYXVsdCAobmE9MiwgbnM9Mikgb24gL2hvc3Qx
eA0KKFhFTikgRFQ6IHRyYW5zbGF0aW5nIGFkZHJlc3M6PDM+IDAwMDAwMDAwPDM+IDU0MzgwMDAw
PDM+DQooWEVOKSBEVDogcGFyZW50IGJ1cyBpcyBkZWZhdWx0IChuYT0yLCBucz0yKSBvbiAvDQoo
WEVOKSBEVDogZW1wdHkgcmFuZ2VzOyAxOjEgdHJhbnNsYXRpb24NCihYRU4pIERUOiBwYXJlbnQg
dHJhbnNsYXRpb24gZm9yOjwzPiAwMDAwMDAwMDwzPiAwMDAwMDAwMDwzPg0KKFhFTikgRFQ6IHdp
dGggb2Zmc2V0OiA1NDM4MDAwMA0KKFhFTikgRFQ6IG9uZSBsZXZlbCB0cmFuc2xhdGlvbjo8Mz4g
MDAwMDAwMDA8Mz4gNTQzODAwMDA8Mz4NCihYRU4pIERUOiByZWFjaGVkIHJvb3Qgbm9kZQ0KKFhF
TikgICAtIE1NSU86IDAwNTQzODAwMDAgLSAwMDU0M2MwMDAwIFAyTVR5cGU9NQ0KKFhFTikgaGFu
ZGxlIC9ob3N0MXgvc29yDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2hvc3QxeC9zb3INCihY
RU4pIC9ob3N0MXgvc29yIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDENCihYRU4pIENoZWNrIGlm
IC9ob3N0MXgvc29yIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJx
X251bWJlcjogZGV2PS9ob3N0MXgvc29yDQooWEVOKSBEVDogKiogdHJhbnNsYXRpb24gZm9yIGRl
dmljZSAvaG9zdDF4L3NvciAqKg0KKFhFTikgRFQ6IGJ1cyBpcyBkZWZhdWx0IChuYT0yLCBucz0y
KSBvbiAvaG9zdDF4DQooWEVOKSBEVDogdHJhbnNsYXRpbmcgYWRkcmVzczo8Mz4gMDAwMDAwMDA8
Mz4gNTQ1NDAwMDA8Mz4NCihYRU4pIERUOiBwYXJlbnQgYnVzIGlzIGRlZmF1bHQgKG5hPTIsIG5z
PTIpIG9uIC8NCihYRU4pIERUOiBlbXB0eSByYW5nZXM7IDE6MSB0cmFuc2xhdGlvbg0KKFhFTikg
RFQ6IHBhcmVudCB0cmFuc2xhdGlvbiBmb3I6PDM+IDAwMDAwMDAwPDM+IDAwMDAwMDAwPDM+DQoo
WEVOKSBEVDogd2l0aCBvZmZzZXQ6IDU0NTQwMDAwDQooWEVOKSBEVDogb25lIGxldmVsIHRyYW5z
bGF0aW9uOjwzPiAwMDAwMDAwMDwzPiA1NDU0MDAwMDwzPg0KKFhFTikgRFQ6IHJlYWNoZWQgcm9v
dCBub2RlDQooWEVOKSAgIC0gTU1JTzogMDA1NDU0MDAwMCAtIDAwNTQ1ODAwMDAgUDJNVHlwZT01
DQooWEVOKSBoYW5kbGUgL2hvc3QxeC9zb3IvaGRtaS1kaXNwbGF5DQooWEVOKSBkdF9pcnFfbnVt
YmVyOiBkZXY9L2hvc3QxeC9zb3IvaGRtaS1kaXNwbGF5DQooWEVOKSAvaG9zdDF4L3Nvci9oZG1p
LWRpc3BsYXkgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL2hvc3Qx
eC9zb3IvaGRtaS1kaXNwbGF5IGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikg
ZHRfaXJxX251bWJlcjogZGV2PS9ob3N0MXgvc29yL2hkbWktZGlzcGxheQ0KKFhFTikgaGFuZGxl
IC9ob3N0MXgvc29yL2RwLWRpc3BsYXkNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vaG9zdDF4
L3Nvci9kcC1kaXNwbGF5DQooWEVOKSAvaG9zdDF4L3Nvci9kcC1kaXNwbGF5IHBhc3N0aHJvdWdo
ID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9ob3N0MXgvc29yL2RwLWRpc3BsYXkgaXMg
YmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2hv
c3QxeC9zb3IvZHAtZGlzcGxheQ0KKFhFTikgaGFuZGxlIC9ob3N0MXgvc29yL2RwLWRpc3BsYXkv
ZGlzcC1kZWZhdWx0LW91dA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9ob3N0MXgvc29yL2Rw
LWRpc3BsYXkvZGlzcC1kZWZhdWx0LW91dA0KKFhFTikgL2hvc3QxeC9zb3IvZHAtZGlzcGxheS9k
aXNwLWRlZmF1bHQtb3V0IHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlm
IC9ob3N0MXgvc29yL2RwLWRpc3BsYXkvZGlzcC1kZWZhdWx0LW91dCBpcyBiZWhpbmQgdGhlIElP
TU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vaG9zdDF4L3Nvci9kcC1k
aXNwbGF5L2Rpc3AtZGVmYXVsdC1vdXQNCihYRU4pIGhhbmRsZSAvaG9zdDF4L3Nvci9kcC1kaXNw
bGF5L2RwLWx0LXNldHRpbmdzDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2hvc3QxeC9zb3Iv
ZHAtZGlzcGxheS9kcC1sdC1zZXR0aW5ncw0KKFhFTikgL2hvc3QxeC9zb3IvZHAtZGlzcGxheS9k
cC1sdC1zZXR0aW5ncyBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAv
aG9zdDF4L3Nvci9kcC1kaXNwbGF5L2RwLWx0LXNldHRpbmdzIGlzIGJlaGluZCB0aGUgSU9NTVUg
YW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9ob3N0MXgvc29yL2RwLWRpc3Bs
YXkvZHAtbHQtc2V0dGluZ3MNCihYRU4pIGhhbmRsZSAvaG9zdDF4L3Nvci9kcC1kaXNwbGF5L2Rw
LWx0LXNldHRpbmdzL2x0LXNldHRpbmdAMA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9ob3N0
MXgvc29yL2RwLWRpc3BsYXkvZHAtbHQtc2V0dGluZ3MvbHQtc2V0dGluZ0AwDQooWEVOKSAvaG9z
dDF4L3Nvci9kcC1kaXNwbGF5L2RwLWx0LXNldHRpbmdzL2x0LXNldHRpbmdAMCBwYXNzdGhyb3Vn
aCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvaG9zdDF4L3Nvci9kcC1kaXNwbGF5L2Rw
LWx0LXNldHRpbmdzL2x0LXNldHRpbmdAMCBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQN
CihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vaG9zdDF4L3Nvci9kcC1kaXNwbGF5L2RwLWx0LXNl
dHRpbmdzL2x0LXNldHRpbmdAMA0KKFhFTikgaGFuZGxlIC9ob3N0MXgvc29yL2RwLWRpc3BsYXkv
ZHAtbHQtc2V0dGluZ3MvbHQtc2V0dGluZ0AxDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2hv
c3QxeC9zb3IvZHAtZGlzcGxheS9kcC1sdC1zZXR0aW5ncy9sdC1zZXR0aW5nQDENCihYRU4pIC9o
b3N0MXgvc29yL2RwLWRpc3BsYXkvZHAtbHQtc2V0dGluZ3MvbHQtc2V0dGluZ0AxIHBhc3N0aHJv
dWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9ob3N0MXgvc29yL2RwLWRpc3BsYXkv
ZHAtbHQtc2V0dGluZ3MvbHQtc2V0dGluZ0AxIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBp
dA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9ob3N0MXgvc29yL2RwLWRpc3BsYXkvZHAtbHQt
c2V0dGluZ3MvbHQtc2V0dGluZ0AxDQooWEVOKSBoYW5kbGUgL2hvc3QxeC9zb3IvZHAtZGlzcGxh
eS9kcC1sdC1zZXR0aW5ncy9sdC1zZXR0aW5nQDINCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0v
aG9zdDF4L3Nvci9kcC1kaXNwbGF5L2RwLWx0LXNldHRpbmdzL2x0LXNldHRpbmdAMg0KKFhFTikg
L2hvc3QxeC9zb3IvZHAtZGlzcGxheS9kcC1sdC1zZXR0aW5ncy9sdC1zZXR0aW5nQDIgcGFzc3Ro
cm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL2hvc3QxeC9zb3IvZHAtZGlzcGxh
eS9kcC1sdC1zZXR0aW5ncy9sdC1zZXR0aW5nQDIgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRk
IGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2hvc3QxeC9zb3IvZHAtZGlzcGxheS9kcC1s
dC1zZXR0aW5ncy9sdC1zZXR0aW5nQDINCihYRU4pIGhhbmRsZSAvaG9zdDF4L3Nvci9wcm9kLXNl
dHRpbmdzDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2hvc3QxeC9zb3IvcHJvZC1zZXR0aW5n
cw0KKFhFTikgL2hvc3QxeC9zb3IvcHJvZC1zZXR0aW5ncyBwYXNzdGhyb3VnaCA9IDEgbmFkZHIg
PSAwDQooWEVOKSBDaGVjayBpZiAvaG9zdDF4L3Nvci9wcm9kLXNldHRpbmdzIGlzIGJlaGluZCB0
aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9ob3N0MXgvc29y
L3Byb2Qtc2V0dGluZ3MNCihYRU4pIGhhbmRsZSAvaG9zdDF4L3Nvci9wcm9kLXNldHRpbmdzL3By
b2RfY19kcA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9ob3N0MXgvc29yL3Byb2Qtc2V0dGlu
Z3MvcHJvZF9jX2RwDQooWEVOKSAvaG9zdDF4L3Nvci9wcm9kLXNldHRpbmdzL3Byb2RfY19kcCBw
YXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvaG9zdDF4L3Nvci9wcm9k
LXNldHRpbmdzL3Byb2RfY19kcCBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4p
IGR0X2lycV9udW1iZXI6IGRldj0vaG9zdDF4L3Nvci9wcm9kLXNldHRpbmdzL3Byb2RfY19kcA0K
KFhFTikgaGFuZGxlIC9ob3N0MXgvc29yMQ0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9ob3N0
MXgvc29yMQ0KKFhFTikgIHVzaW5nICdpbnRlcnJ1cHRzJyBwcm9wZXJ0eQ0KKFhFTikgIGludHNw
ZWM9MCBpbnRsZW49Mw0KKFhFTikgIGludHNpemU9MyBpbnRsZW49Mw0KKFhFTikgZHRfZGV2aWNl
X2dldF9yYXdfaXJxOiBkZXY9L2hvc3QxeC9zb3IxLCBpbmRleD0wDQooWEVOKSAgdXNpbmcgJ2lu
dGVycnVwdHMnIHByb3BlcnR5DQooWEVOKSAgaW50c3BlYz0wIGludGxlbj0zDQooWEVOKSAgaW50
c2l6ZT0zIGludGxlbj0zDQooWEVOKSBkdF9pcnFfbWFwX3JhdzogcGFyPS9pbnRlcnJ1cHQtY29u
dHJvbGxlckA2MDAwNDAwMCxpbnRzcGVjPVsweDAwMDAwMDAwIDB4MDAwMDAwNGMuLi5dLG9pbnRz
aXplPTMNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBpcGFyPS9pbnRlcnJ1cHQtY29udHJvbGxlckA2
MDAwNDAwMCwgc2l6ZT0zDQooWEVOKSAgLT4gYWRkcnNpemU9Mg0KKFhFTikgIC0+IGdvdCBpdCAh
DQooWEVOKSAvaG9zdDF4L3NvcjEgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMQ0KKFhFTikgQ2hl
Y2sgaWYgL2hvc3QxeC9zb3IxIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikg
ZHRfaXJxX251bWJlcjogZGV2PS9ob3N0MXgvc29yMQ0KKFhFTikgIHVzaW5nICdpbnRlcnJ1cHRz
JyBwcm9wZXJ0eQ0KKFhFTikgIGludHNwZWM9MCBpbnRsZW49Mw0KKFhFTikgIGludHNpemU9MyBp
bnRsZW49Mw0KKFhFTikgZHRfZGV2aWNlX2dldF9yYXdfaXJxOiBkZXY9L2hvc3QxeC9zb3IxLCBp
bmRleD0wDQooWEVOKSAgdXNpbmcgJ2ludGVycnVwdHMnIHByb3BlcnR5DQooWEVOKSAgaW50c3Bl
Yz0wIGludGxlbj0zDQooWEVOKSAgaW50c2l6ZT0zIGludGxlbj0zDQooWEVOKSBkdF9pcnFfbWFw
X3JhdzogcGFyPS9pbnRlcnJ1cHQtY29udHJvbGxlckA2MDAwNDAwMCxpbnRzcGVjPVsweDAwMDAw
MDAwIDB4MDAwMDAwNGMuLi5dLG9pbnRzaXplPTMNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBpcGFy
PS9pbnRlcnJ1cHQtY29udHJvbGxlckA2MDAwNDAwMCwgc2l6ZT0zDQooWEVOKSAgLT4gYWRkcnNp
emU9Mg0KKFhFTikgIC0+IGdvdCBpdCAhDQooWEVOKSBpcnEgMCBub3QgKGRpcmVjdGx5IG9yIGlu
ZGlyZWN0bHkpIGNvbm5lY3RlZCB0byBwcmltYXJ5Y29udHJvbGxlci4gQ29ubmVjdGVkIHRvIC9p
bnRlcnJ1cHQtY29udHJvbGxlckA2MDAwNDAwMA0KKFhFTikgRFQ6ICoqIHRyYW5zbGF0aW9uIGZv
ciBkZXZpY2UgL2hvc3QxeC9zb3IxICoqDQooWEVOKSBEVDogYnVzIGlzIGRlZmF1bHQgKG5hPTIs
IG5zPTIpIG9uIC9ob3N0MXgNCihYRU4pIERUOiB0cmFuc2xhdGluZyBhZGRyZXNzOjwzPiAwMDAw
MDAwMDwzPiA1NDU4MDAwMDwzPg0KKFhFTikgRFQ6IHBhcmVudCBidXMgaXMgZGVmYXVsdCAobmE9
MiwgbnM9Mikgb24gLw0KKFhFTikgRFQ6IGVtcHR5IHJhbmdlczsgMToxIHRyYW5zbGF0aW9uDQoo
WEVOKSBEVDogcGFyZW50IHRyYW5zbGF0aW9uIGZvcjo8Mz4gMDAwMDAwMDA8Mz4gMDAwMDAwMDA8
Mz4NCihYRU4pIERUOiB3aXRoIG9mZnNldDogNTQ1ODAwMDANCihYRU4pIERUOiBvbmUgbGV2ZWwg
dHJhbnNsYXRpb246PDM+IDAwMDAwMDAwPDM+IDU0NTgwMDAwPDM+DQooWEVOKSBEVDogcmVhY2hl
ZCByb290IG5vZGUNCihYRU4pICAgLSBNTUlPOiAwMDU0NTgwMDAwIC0gMDA1NDVjMDAwMCBQMk1U
eXBlPTUNCihYRU4pIGhhbmRsZSAvaG9zdDF4L3NvcjEvaGRtaS1kaXNwbGF5DQooWEVOKSBkdF9p
cnFfbnVtYmVyOiBkZXY9L2hvc3QxeC9zb3IxL2hkbWktZGlzcGxheQ0KKFhFTikgL2hvc3QxeC9z
b3IxL2hkbWktZGlzcGxheSBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBp
ZiAvaG9zdDF4L3NvcjEvaGRtaS1kaXNwbGF5IGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBp
dA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9ob3N0MXgvc29yMS9oZG1pLWRpc3BsYXkNCihY
RU4pIGhhbmRsZSAvaG9zdDF4L3NvcjEvaGRtaS1kaXNwbGF5L2Rpc3AtZGVmYXVsdC1vdXQNCihY
RU4pIGR0X2lycV9udW1iZXI6IGRldj0vaG9zdDF4L3NvcjEvaGRtaS1kaXNwbGF5L2Rpc3AtZGVm
YXVsdC1vdXQNCihYRU4pIC9ob3N0MXgvc29yMS9oZG1pLWRpc3BsYXkvZGlzcC1kZWZhdWx0LW91
dCBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvaG9zdDF4L3NvcjEv
aGRtaS1kaXNwbGF5L2Rpc3AtZGVmYXVsdC1vdXQgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRk
IGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2hvc3QxeC9zb3IxL2hkbWktZGlzcGxheS9k
aXNwLWRlZmF1bHQtb3V0DQooWEVOKSBoYW5kbGUgL2hvc3QxeC9zb3IxL2RwLWRpc3BsYXkNCihY
RU4pIGR0X2lycV9udW1iZXI6IGRldj0vaG9zdDF4L3NvcjEvZHAtZGlzcGxheQ0KKFhFTikgL2hv
c3QxeC9zb3IxL2RwLWRpc3BsYXkgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hl
Y2sgaWYgL2hvc3QxeC9zb3IxL2RwLWRpc3BsYXkgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRk
IGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2hvc3QxeC9zb3IxL2RwLWRpc3BsYXkNCihY
RU4pIGhhbmRsZSAvaG9zdDF4L3NvcjEvcHJvZC1zZXR0aW5ncw0KKFhFTikgZHRfaXJxX251bWJl
cjogZGV2PS9ob3N0MXgvc29yMS9wcm9kLXNldHRpbmdzDQooWEVOKSAvaG9zdDF4L3NvcjEvcHJv
ZC1zZXR0aW5ncyBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvaG9z
dDF4L3NvcjEvcHJvZC1zZXR0aW5ncyBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihY
RU4pIGR0X2lycV9udW1iZXI6IGRldj0vaG9zdDF4L3NvcjEvcHJvZC1zZXR0aW5ncw0KKFhFTikg
aGFuZGxlIC9ob3N0MXgvc29yMS9wcm9kLXNldHRpbmdzL3Byb2QNCihYRU4pIGR0X2lycV9udW1i
ZXI6IGRldj0vaG9zdDF4L3NvcjEvcHJvZC1zZXR0aW5ncy9wcm9kDQooWEVOKSAvaG9zdDF4L3Nv
cjEvcHJvZC1zZXR0aW5ncy9wcm9kIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENo
ZWNrIGlmIC9ob3N0MXgvc29yMS9wcm9kLXNldHRpbmdzL3Byb2QgaXMgYmVoaW5kIHRoZSBJT01N
VSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2hvc3QxeC9zb3IxL3Byb2Qt
c2V0dGluZ3MvcHJvZA0KKFhFTikgaGFuZGxlIC9ob3N0MXgvc29yMS9wcm9kLXNldHRpbmdzL3By
b2RfY19oZG1pXzBtXzU0bQ0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9ob3N0MXgvc29yMS9w
cm9kLXNldHRpbmdzL3Byb2RfY19oZG1pXzBtXzU0bQ0KKFhFTikgL2hvc3QxeC9zb3IxL3Byb2Qt
c2V0dGluZ3MvcHJvZF9jX2hkbWlfMG1fNTRtIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihY
RU4pIENoZWNrIGlmIC9ob3N0MXgvc29yMS9wcm9kLXNldHRpbmdzL3Byb2RfY19oZG1pXzBtXzU0
bSBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRl
dj0vaG9zdDF4L3NvcjEvcHJvZC1zZXR0aW5ncy9wcm9kX2NfaGRtaV8wbV81NG0NCihYRU4pIGhh
bmRsZSAvaG9zdDF4L3NvcjEvcHJvZC1zZXR0aW5ncy9wcm9kX2NfaGRtaV81NG1fMTExbQ0KKFhF
TikgZHRfaXJxX251bWJlcjogZGV2PS9ob3N0MXgvc29yMS9wcm9kLXNldHRpbmdzL3Byb2RfY19o
ZG1pXzU0bV8xMTFtDQooWEVOKSAvaG9zdDF4L3NvcjEvcHJvZC1zZXR0aW5ncy9wcm9kX2NfaGRt
aV81NG1fMTExbSBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvaG9z
dDF4L3NvcjEvcHJvZC1zZXR0aW5ncy9wcm9kX2NfaGRtaV81NG1fMTExbSBpcyBiZWhpbmQgdGhl
IElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vaG9zdDF4L3NvcjEv
cHJvZC1zZXR0aW5ncy9wcm9kX2NfaGRtaV81NG1fMTExbQ0KKFhFTikgaGFuZGxlIC9ob3N0MXgv
c29yMS9wcm9kLXNldHRpbmdzL3Byb2RfY19oZG1pXzExMW1fMjIzbQ0KKFhFTikgZHRfaXJxX251
bWJlcjogZGV2PS9ob3N0MXgvc29yMS9wcm9kLXNldHRpbmdzL3Byb2RfY19oZG1pXzExMW1fMjIz
bQ0KKFhFTikgL2hvc3QxeC9zb3IxL3Byb2Qtc2V0dGluZ3MvcHJvZF9jX2hkbWlfMTExbV8yMjNt
IHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9ob3N0MXgvc29yMS9w
cm9kLXNldHRpbmdzL3Byb2RfY19oZG1pXzExMW1fMjIzbSBpcyBiZWhpbmQgdGhlIElPTU1VIGFu
ZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vaG9zdDF4L3NvcjEvcHJvZC1zZXR0
aW5ncy9wcm9kX2NfaGRtaV8xMTFtXzIyM20NCihYRU4pIGhhbmRsZSAvaG9zdDF4L3NvcjEvcHJv
ZC1zZXR0aW5ncy9wcm9kX2NfaGRtaV8yMjNtXzMwMG0NCihYRU4pIGR0X2lycV9udW1iZXI6IGRl
dj0vaG9zdDF4L3NvcjEvcHJvZC1zZXR0aW5ncy9wcm9kX2NfaGRtaV8yMjNtXzMwMG0NCihYRU4p
IC9ob3N0MXgvc29yMS9wcm9kLXNldHRpbmdzL3Byb2RfY19oZG1pXzIyM21fMzAwbSBwYXNzdGhy
b3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvaG9zdDF4L3NvcjEvcHJvZC1zZXR0
aW5ncy9wcm9kX2NfaGRtaV8yMjNtXzMwMG0gaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0
DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2hvc3QxeC9zb3IxL3Byb2Qtc2V0dGluZ3MvcHJv
ZF9jX2hkbWlfMjIzbV8zMDBtDQooWEVOKSBoYW5kbGUgL2hvc3QxeC9zb3IxL3Byb2Qtc2V0dGlu
Z3MvcHJvZF9jX2hkbWlfMzAwbV82MDBtDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2hvc3Qx
eC9zb3IxL3Byb2Qtc2V0dGluZ3MvcHJvZF9jX2hkbWlfMzAwbV82MDBtDQooWEVOKSAvaG9zdDF4
L3NvcjEvcHJvZC1zZXR0aW5ncy9wcm9kX2NfaGRtaV8zMDBtXzYwMG0gcGFzc3Rocm91Z2ggPSAx
IG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL2hvc3QxeC9zb3IxL3Byb2Qtc2V0dGluZ3MvcHJv
ZF9jX2hkbWlfMzAwbV82MDBtIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikg
ZHRfaXJxX251bWJlcjogZGV2PS9ob3N0MXgvc29yMS9wcm9kLXNldHRpbmdzL3Byb2RfY19oZG1p
XzMwMG1fNjAwbQ0KKFhFTikgaGFuZGxlIC9ob3N0MXgvc29yMS9wcm9kLXNldHRpbmdzL3Byb2Rf
Y181NE0NCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vaG9zdDF4L3NvcjEvcHJvZC1zZXR0aW5n
cy9wcm9kX2NfNTRNDQooWEVOKSAvaG9zdDF4L3NvcjEvcHJvZC1zZXR0aW5ncy9wcm9kX2NfNTRN
IHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9ob3N0MXgvc29yMS9w
cm9kLXNldHRpbmdzL3Byb2RfY181NE0gaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQoo
WEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2hvc3QxeC9zb3IxL3Byb2Qtc2V0dGluZ3MvcHJvZF9j
XzU0TQ0KKFhFTikgaGFuZGxlIC9ob3N0MXgvc29yMS9wcm9kLXNldHRpbmdzL3Byb2RfY183NU0N
CihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vaG9zdDF4L3NvcjEvcHJvZC1zZXR0aW5ncy9wcm9k
X2NfNzVNDQooWEVOKSAvaG9zdDF4L3NvcjEvcHJvZC1zZXR0aW5ncy9wcm9kX2NfNzVNIHBhc3N0
aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9ob3N0MXgvc29yMS9wcm9kLXNl
dHRpbmdzL3Byb2RfY183NU0gaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBk
dF9pcnFfbnVtYmVyOiBkZXY9L2hvc3QxeC9zb3IxL3Byb2Qtc2V0dGluZ3MvcHJvZF9jXzc1TQ0K
KFhFTikgaGFuZGxlIC9ob3N0MXgvc29yMS9wcm9kLXNldHRpbmdzL3Byb2RfY18xNTBNDQooWEVO
KSBkdF9pcnFfbnVtYmVyOiBkZXY9L2hvc3QxeC9zb3IxL3Byb2Qtc2V0dGluZ3MvcHJvZF9jXzE1
ME0NCihYRU4pIC9ob3N0MXgvc29yMS9wcm9kLXNldHRpbmdzL3Byb2RfY18xNTBNIHBhc3N0aHJv
dWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9ob3N0MXgvc29yMS9wcm9kLXNldHRp
bmdzL3Byb2RfY18xNTBNIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRf
aXJxX251bWJlcjogZGV2PS9ob3N0MXgvc29yMS9wcm9kLXNldHRpbmdzL3Byb2RfY18xNTBNDQoo
WEVOKSBoYW5kbGUgL2hvc3QxeC9zb3IxL3Byb2Qtc2V0dGluZ3MvcHJvZF9jXzMwME0NCihYRU4p
IGR0X2lycV9udW1iZXI6IGRldj0vaG9zdDF4L3NvcjEvcHJvZC1zZXR0aW5ncy9wcm9kX2NfMzAw
TQ0KKFhFTikgL2hvc3QxeC9zb3IxL3Byb2Qtc2V0dGluZ3MvcHJvZF9jXzMwME0gcGFzc3Rocm91
Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL2hvc3QxeC9zb3IxL3Byb2Qtc2V0dGlu
Z3MvcHJvZF9jXzMwME0gaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9p
cnFfbnVtYmVyOiBkZXY9L2hvc3QxeC9zb3IxL3Byb2Qtc2V0dGluZ3MvcHJvZF9jXzMwME0NCihY
RU4pIGhhbmRsZSAvaG9zdDF4L3NvcjEvcHJvZC1zZXR0aW5ncy9wcm9kX2NfNjAwTQ0KKFhFTikg
ZHRfaXJxX251bWJlcjogZGV2PS9ob3N0MXgvc29yMS9wcm9kLXNldHRpbmdzL3Byb2RfY182MDBN
DQooWEVOKSAvaG9zdDF4L3NvcjEvcHJvZC1zZXR0aW5ncy9wcm9kX2NfNjAwTSBwYXNzdGhyb3Vn
aCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvaG9zdDF4L3NvcjEvcHJvZC1zZXR0aW5n
cy9wcm9kX2NfNjAwTSBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2ly
cV9udW1iZXI6IGRldj0vaG9zdDF4L3NvcjEvcHJvZC1zZXR0aW5ncy9wcm9kX2NfNjAwTQ0KKFhF
TikgaGFuZGxlIC9ob3N0MXgvc29yMS9wcm9kLXNldHRpbmdzL3Byb2RfY19kcA0KKFhFTikgZHRf
aXJxX251bWJlcjogZGV2PS9ob3N0MXgvc29yMS9wcm9kLXNldHRpbmdzL3Byb2RfY19kcA0KKFhF
TikgL2hvc3QxeC9zb3IxL3Byb2Qtc2V0dGluZ3MvcHJvZF9jX2RwIHBhc3N0aHJvdWdoID0gMSBu
YWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9ob3N0MXgvc29yMS9wcm9kLXNldHRpbmdzL3Byb2Rf
Y19kcCBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6
IGRldj0vaG9zdDF4L3NvcjEvcHJvZC1zZXR0aW5ncy9wcm9kX2NfZHANCihYRU4pIGhhbmRsZSAv
aG9zdDF4L3NvcjEvcHJvZC1zZXR0aW5ncy9wcm9kX2NfaGRtaV81NG1fNzVtDQooWEVOKSBkdF9p
cnFfbnVtYmVyOiBkZXY9L2hvc3QxeC9zb3IxL3Byb2Qtc2V0dGluZ3MvcHJvZF9jX2hkbWlfNTRt
Xzc1bQ0KKFhFTikgL2hvc3QxeC9zb3IxL3Byb2Qtc2V0dGluZ3MvcHJvZF9jX2hkbWlfNTRtXzc1
bSBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvaG9zdDF4L3NvcjEv
cHJvZC1zZXR0aW5ncy9wcm9kX2NfaGRtaV81NG1fNzVtIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5k
IGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9ob3N0MXgvc29yMS9wcm9kLXNldHRp
bmdzL3Byb2RfY19oZG1pXzU0bV83NW0NCihYRU4pIGhhbmRsZSAvaG9zdDF4L3NvcjEvcHJvZC1z
ZXR0aW5ncy9wcm9kX2NfaGRtaV83NW1fMTUwbQ0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9o
b3N0MXgvc29yMS9wcm9kLXNldHRpbmdzL3Byb2RfY19oZG1pXzc1bV8xNTBtDQooWEVOKSAvaG9z
dDF4L3NvcjEvcHJvZC1zZXR0aW5ncy9wcm9kX2NfaGRtaV83NW1fMTUwbSBwYXNzdGhyb3VnaCA9
IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvaG9zdDF4L3NvcjEvcHJvZC1zZXR0aW5ncy9w
cm9kX2NfaGRtaV83NW1fMTUwbSBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4p
IGR0X2lycV9udW1iZXI6IGRldj0vaG9zdDF4L3NvcjEvcHJvZC1zZXR0aW5ncy9wcm9kX2NfaGRt
aV83NW1fMTUwbQ0KKFhFTikgaGFuZGxlIC9ob3N0MXgvc29yMS9wcm9kLXNldHRpbmdzL3Byb2Rf
Y19oZG1pXzE1MG1fMzAwbQ0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9ob3N0MXgvc29yMS9w
cm9kLXNldHRpbmdzL3Byb2RfY19oZG1pXzE1MG1fMzAwbQ0KKFhFTikgL2hvc3QxeC9zb3IxL3By
b2Qtc2V0dGluZ3MvcHJvZF9jX2hkbWlfMTUwbV8zMDBtIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9
IDANCihYRU4pIENoZWNrIGlmIC9ob3N0MXgvc29yMS9wcm9kLXNldHRpbmdzL3Byb2RfY19oZG1p
XzE1MG1fMzAwbSBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9u
dW1iZXI6IGRldj0vaG9zdDF4L3NvcjEvcHJvZC1zZXR0aW5ncy9wcm9kX2NfaGRtaV8xNTBtXzMw
MG0NCihYRU4pIGhhbmRsZSAvaG9zdDF4L2RwYXV4DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9
L2hvc3QxeC9kcGF1eA0KKFhFTikgIHVzaW5nICdpbnRlcnJ1cHRzJyBwcm9wZXJ0eQ0KKFhFTikg
IGludHNwZWM9MCBpbnRsZW49Mw0KKFhFTikgIGludHNpemU9MyBpbnRsZW49Mw0KKFhFTikgZHRf
ZGV2aWNlX2dldF9yYXdfaXJxOiBkZXY9L2hvc3QxeC9kcGF1eCwgaW5kZXg9MA0KKFhFTikgIHVz
aW5nICdpbnRlcnJ1cHRzJyBwcm9wZXJ0eQ0KKFhFTikgIGludHNwZWM9MCBpbnRsZW49Mw0KKFhF
TikgIGludHNpemU9MyBpbnRsZW49Mw0KKFhFTikgZHRfaXJxX21hcF9yYXc6IHBhcj0vaW50ZXJy
dXB0LWNvbnRyb2xsZXJANjAwMDQwMDAsaW50c3BlYz1bMHgwMDAwMDAwMCAweDAwMDAwMDlmLi4u
XSxvaW50c2l6ZT0zDQooWEVOKSBkdF9pcnFfbWFwX3JhdzogaXBhcj0vaW50ZXJydXB0LWNvbnRy
b2xsZXJANjAwMDQwMDAsIHNpemU9Mw0KKFhFTikgIC0+IGFkZHJzaXplPTINCihYRU4pICAtPiBn
b3QgaXQgIQ0KKFhFTikgL2hvc3QxeC9kcGF1eCBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAxDQoo
WEVOKSBDaGVjayBpZiAvaG9zdDF4L2RwYXV4IGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBp
dA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9ob3N0MXgvZHBhdXgNCihYRU4pICB1c2luZyAn
aW50ZXJydXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTMNCihYRU4pICBp
bnRzaXplPTMgaW50bGVuPTMNCihYRU4pIGR0X2RldmljZV9nZXRfcmF3X2lycTogZGV2PS9ob3N0
MXgvZHBhdXgsIGluZGV4PTANCihYRU4pICB1c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihY
RU4pICBpbnRzcGVjPTAgaW50bGVuPTMNCihYRU4pICBpbnRzaXplPTMgaW50bGVuPTMNCihYRU4p
IGR0X2lycV9tYXBfcmF3OiBwYXI9L2ludGVycnVwdC1jb250cm9sbGVyQDYwMDA0MDAwLGludHNw
ZWM9WzB4MDAwMDAwMDAgMHgwMDAwMDA5Zi4uLl0sb2ludHNpemU9Mw0KKFhFTikgZHRfaXJxX21h
cF9yYXc6IGlwYXI9L2ludGVycnVwdC1jb250cm9sbGVyQDYwMDA0MDAwLCBzaXplPTMNCihYRU4p
ICAtPiBhZGRyc2l6ZT0yDQooWEVOKSAgLT4gZ290IGl0ICENCihYRU4pIGlycSAwIG5vdCAoZGly
ZWN0bHkgb3IgaW5kaXJlY3RseSkgY29ubmVjdGVkIHRvIHByaW1hcnljb250cm9sbGVyLiBDb25u
ZWN0ZWQgdG8gL2ludGVycnVwdC1jb250cm9sbGVyQDYwMDA0MDAwDQooWEVOKSBEVDogKiogdHJh
bnNsYXRpb24gZm9yIGRldmljZSAvaG9zdDF4L2RwYXV4ICoqDQooWEVOKSBEVDogYnVzIGlzIGRl
ZmF1bHQgKG5hPTIsIG5zPTIpIG9uIC9ob3N0MXgNCihYRU4pIERUOiB0cmFuc2xhdGluZyBhZGRy
ZXNzOjwzPiAwMDAwMDAwMDwzPiA1NDVjMDAwMDwzPg0KKFhFTikgRFQ6IHBhcmVudCBidXMgaXMg
ZGVmYXVsdCAobmE9MiwgbnM9Mikgb24gLw0KKFhFTikgRFQ6IGVtcHR5IHJhbmdlczsgMToxIHRy
YW5zbGF0aW9uDQooWEVOKSBEVDogcGFyZW50IHRyYW5zbGF0aW9uIGZvcjo8Mz4gMDAwMDAwMDA8
Mz4gMDAwMDAwMDA8Mz4NCihYRU4pIERUOiB3aXRoIG9mZnNldDogNTQ1YzAwMDANCihYRU4pIERU
OiBvbmUgbGV2ZWwgdHJhbnNsYXRpb246PDM+IDAwMDAwMDAwPDM+IDU0NWMwMDAwPDM+DQooWEVO
KSBEVDogcmVhY2hlZCByb290IG5vZGUNCihYRU4pICAgLSBNTUlPOiAwMDU0NWMwMDAwIC0gMDA1
NDYwMDAwMCBQMk1UeXBlPTUNCihYRU4pIGhhbmRsZSAvaG9zdDF4L2RwYXV4L3Byb2Qtc2V0dGlu
Z3MNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vaG9zdDF4L2RwYXV4L3Byb2Qtc2V0dGluZ3MN
CihYRU4pIC9ob3N0MXgvZHBhdXgvcHJvZC1zZXR0aW5ncyBwYXNzdGhyb3VnaCA9IDEgbmFkZHIg
PSAwDQooWEVOKSBDaGVjayBpZiAvaG9zdDF4L2RwYXV4L3Byb2Qtc2V0dGluZ3MgaXMgYmVoaW5k
IHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2hvc3QxeC9k
cGF1eC9wcm9kLXNldHRpbmdzDQooWEVOKSBoYW5kbGUgL2hvc3QxeC9kcGF1eC9wcm9kLXNldHRp
bmdzL3Byb2RfY19kcGF1eF9kcA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9ob3N0MXgvZHBh
dXgvcHJvZC1zZXR0aW5ncy9wcm9kX2NfZHBhdXhfZHANCihYRU4pIC9ob3N0MXgvZHBhdXgvcHJv
ZC1zZXR0aW5ncy9wcm9kX2NfZHBhdXhfZHAgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhF
TikgQ2hlY2sgaWYgL2hvc3QxeC9kcGF1eC9wcm9kLXNldHRpbmdzL3Byb2RfY19kcGF1eF9kcCBp
cyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0v
aG9zdDF4L2RwYXV4L3Byb2Qtc2V0dGluZ3MvcHJvZF9jX2RwYXV4X2RwDQooWEVOKSBoYW5kbGUg
L2hvc3QxeC9kcGF1eC9wcm9kLXNldHRpbmdzL3Byb2RfY19kcGF1eF9oZG1pDQooWEVOKSBkdF9p
cnFfbnVtYmVyOiBkZXY9L2hvc3QxeC9kcGF1eC9wcm9kLXNldHRpbmdzL3Byb2RfY19kcGF1eF9o
ZG1pDQooWEVOKSAvaG9zdDF4L2RwYXV4L3Byb2Qtc2V0dGluZ3MvcHJvZF9jX2RwYXV4X2hkbWkg
cGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL2hvc3QxeC9kcGF1eC9w
cm9kLXNldHRpbmdzL3Byb2RfY19kcGF1eF9oZG1pIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFk
ZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9ob3N0MXgvZHBhdXgvcHJvZC1zZXR0aW5n
cy9wcm9kX2NfZHBhdXhfaGRtaQ0KKFhFTikgaGFuZGxlIC9ob3N0MXgvZHBhdXgxDQooWEVOKSBk
dF9pcnFfbnVtYmVyOiBkZXY9L2hvc3QxeC9kcGF1eDENCihYRU4pICB1c2luZyAnaW50ZXJydXB0
cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTMNCihYRU4pICBpbnRzaXplPTMg
aW50bGVuPTMNCihYRU4pIGR0X2RldmljZV9nZXRfcmF3X2lycTogZGV2PS9ob3N0MXgvZHBhdXgx
LCBpbmRleD0wDQooWEVOKSAgdXNpbmcgJ2ludGVycnVwdHMnIHByb3BlcnR5DQooWEVOKSAgaW50
c3BlYz0wIGludGxlbj0zDQooWEVOKSAgaW50c2l6ZT0zIGludGxlbj0zDQooWEVOKSBkdF9pcnFf
bWFwX3JhdzogcGFyPS9pbnRlcnJ1cHQtY29udHJvbGxlckA2MDAwNDAwMCxpbnRzcGVjPVsweDAw
MDAwMDAwIDB4MDAwMDAwMGIuLi5dLG9pbnRzaXplPTMNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBp
cGFyPS9pbnRlcnJ1cHQtY29udHJvbGxlckA2MDAwNDAwMCwgc2l6ZT0zDQooWEVOKSAgLT4gYWRk
cnNpemU9Mg0KKFhFTikgIC0+IGdvdCBpdCAhDQooWEVOKSAvaG9zdDF4L2RwYXV4MSBwYXNzdGhy
b3VnaCA9IDEgbmFkZHIgPSAxDQooWEVOKSBDaGVjayBpZiAvaG9zdDF4L2RwYXV4MSBpcyBiZWhp
bmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vaG9zdDF4
L2RwYXV4MQ0KKFhFTikgIHVzaW5nICdpbnRlcnJ1cHRzJyBwcm9wZXJ0eQ0KKFhFTikgIGludHNw
ZWM9MCBpbnRsZW49Mw0KKFhFTikgIGludHNpemU9MyBpbnRsZW49Mw0KKFhFTikgZHRfZGV2aWNl
X2dldF9yYXdfaXJxOiBkZXY9L2hvc3QxeC9kcGF1eDEsIGluZGV4PTANCihYRU4pICB1c2luZyAn
aW50ZXJydXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTMNCihYRU4pICBp
bnRzaXplPTMgaW50bGVuPTMNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBwYXI9L2ludGVycnVwdC1j
b250cm9sbGVyQDYwMDA0MDAwLGludHNwZWM9WzB4MDAwMDAwMDAgMHgwMDAwMDAwYi4uLl0sb2lu
dHNpemU9Mw0KKFhFTikgZHRfaXJxX21hcF9yYXc6IGlwYXI9L2ludGVycnVwdC1jb250cm9sbGVy
QDYwMDA0MDAwLCBzaXplPTMNCihYRU4pICAtPiBhZGRyc2l6ZT0yDQooWEVOKSAgLT4gZ290IGl0
ICENCihYRU4pIGlycSAwIG5vdCAoZGlyZWN0bHkgb3IgaW5kaXJlY3RseSkgY29ubmVjdGVkIHRv
IHByaW1hcnljb250cm9sbGVyLiBDb25uZWN0ZWQgdG8gL2ludGVycnVwdC1jb250cm9sbGVyQDYw
MDA0MDAwDQooWEVOKSBEVDogKiogdHJhbnNsYXRpb24gZm9yIGRldmljZSAvaG9zdDF4L2RwYXV4
MSAqKg0KKFhFTikgRFQ6IGJ1cyBpcyBkZWZhdWx0IChuYT0yLCBucz0yKSBvbiAvaG9zdDF4DQoo
WEVOKSBEVDogdHJhbnNsYXRpbmcgYWRkcmVzczo8Mz4gMDAwMDAwMDA8Mz4gNTQwNDAwMDA8Mz4N
CihYRU4pIERUOiBwYXJlbnQgYnVzIGlzIGRlZmF1bHQgKG5hPTIsIG5zPTIpIG9uIC8NCihYRU4p
IERUOiBlbXB0eSByYW5nZXM7IDE6MSB0cmFuc2xhdGlvbg0KKFhFTikgRFQ6IHBhcmVudCB0cmFu
c2xhdGlvbiBmb3I6PDM+IDAwMDAwMDAwPDM+IDAwMDAwMDAwPDM+DQooWEVOKSBEVDogd2l0aCBv
ZmZzZXQ6IDU0MDQwMDAwDQooWEVOKSBEVDogb25lIGxldmVsIHRyYW5zbGF0aW9uOjwzPiAwMDAw
MDAwMDwzPiA1NDA0MDAwMDwzPg0KKFhFTikgRFQ6IHJlYWNoZWQgcm9vdCBub2RlDQooWEVOKSAg
IC0gTU1JTzogMDA1NDA0MDAwMCAtIDAwNTQwODAwMDAgUDJNVHlwZT01DQooWEVOKSBoYW5kbGUg
L2hvc3QxeC9kcGF1eDEvcHJvZC1zZXR0aW5ncw0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9o
b3N0MXgvZHBhdXgxL3Byb2Qtc2V0dGluZ3MNCihYRU4pIC9ob3N0MXgvZHBhdXgxL3Byb2Qtc2V0
dGluZ3MgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL2hvc3QxeC9k
cGF1eDEvcHJvZC1zZXR0aW5ncyBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4p
IGR0X2lycV9udW1iZXI6IGRldj0vaG9zdDF4L2RwYXV4MS9wcm9kLXNldHRpbmdzDQooWEVOKSBo
YW5kbGUgL2hvc3QxeC9kcGF1eDEvcHJvZC1zZXR0aW5ncy9wcm9kX2NfZHBhdXhfZHANCihYRU4p
IGR0X2lycV9udW1iZXI6IGRldj0vaG9zdDF4L2RwYXV4MS9wcm9kLXNldHRpbmdzL3Byb2RfY19k
cGF1eF9kcA0KKFhFTikgL2hvc3QxeC9kcGF1eDEvcHJvZC1zZXR0aW5ncy9wcm9kX2NfZHBhdXhf
ZHAgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL2hvc3QxeC9kcGF1
eDEvcHJvZC1zZXR0aW5ncy9wcm9kX2NfZHBhdXhfZHAgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQg
YWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2hvc3QxeC9kcGF1eDEvcHJvZC1zZXR0
aW5ncy9wcm9kX2NfZHBhdXhfZHANCihYRU4pIGhhbmRsZSAvaG9zdDF4L2RwYXV4MS9wcm9kLXNl
dHRpbmdzL3Byb2RfY19kcGF1eF9oZG1pDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2hvc3Qx
eC9kcGF1eDEvcHJvZC1zZXR0aW5ncy9wcm9kX2NfZHBhdXhfaGRtaQ0KKFhFTikgL2hvc3QxeC9k
cGF1eDEvcHJvZC1zZXR0aW5ncy9wcm9kX2NfZHBhdXhfaGRtaSBwYXNzdGhyb3VnaCA9IDEgbmFk
ZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvaG9zdDF4L2RwYXV4MS9wcm9kLXNldHRpbmdzL3Byb2Rf
Y19kcGF1eF9oZG1pIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJx
X251bWJlcjogZGV2PS9ob3N0MXgvZHBhdXgxL3Byb2Qtc2V0dGluZ3MvcHJvZF9jX2RwYXV4X2hk
bWkNCihYRU4pIGhhbmRsZSAvaG9zdDF4L2kyY0A1NDZjMDAwMA0KKFhFTikgZHRfaXJxX251bWJl
cjogZGV2PS9ob3N0MXgvaTJjQDU0NmMwMDAwDQooWEVOKSAgdXNpbmcgJ2ludGVycnVwdHMnIHBy
b3BlcnR5DQooWEVOKSAgaW50c3BlYz0wIGludGxlbj0zDQooWEVOKSAgaW50c2l6ZT0zIGludGxl
bj0zDQooWEVOKSBkdF9kZXZpY2VfZ2V0X3Jhd19pcnE6IGRldj0vaG9zdDF4L2kyY0A1NDZjMDAw
MCwgaW5kZXg9MA0KKFhFTikgIHVzaW5nICdpbnRlcnJ1cHRzJyBwcm9wZXJ0eQ0KKFhFTikgIGlu
dHNwZWM9MCBpbnRsZW49Mw0KKFhFTikgIGludHNpemU9MyBpbnRsZW49Mw0KKFhFTikgZHRfaXJx
X21hcF9yYXc6IHBhcj0vaW50ZXJydXB0LWNvbnRyb2xsZXJANjAwMDQwMDAsaW50c3BlYz1bMHgw
MDAwMDAwMCAweDAwMDAwMDExLi4uXSxvaW50c2l6ZT0zDQooWEVOKSBkdF9pcnFfbWFwX3Jhdzog
aXBhcj0vaW50ZXJydXB0LWNvbnRyb2xsZXJANjAwMDQwMDAsIHNpemU9Mw0KKFhFTikgIC0+IGFk
ZHJzaXplPTINCihYRU4pICAtPiBnb3QgaXQgIQ0KKFhFTikgL2hvc3QxeC9pMmNANTQ2YzAwMDAg
cGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMQ0KKFhFTikgQ2hlY2sgaWYgL2hvc3QxeC9pMmNANTQ2
YzAwMDAgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVy
OiBkZXY9L2hvc3QxeC9pMmNANTQ2YzAwMDANCihYRU4pICB1c2luZyAnaW50ZXJydXB0cycgcHJv
cGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTMNCihYRU4pICBpbnRzaXplPTMgaW50bGVu
PTMNCihYRU4pIGR0X2RldmljZV9nZXRfcmF3X2lycTogZGV2PS9ob3N0MXgvaTJjQDU0NmMwMDAw
LCBpbmRleD0wDQooWEVOKSAgdXNpbmcgJ2ludGVycnVwdHMnIHByb3BlcnR5DQooWEVOKSAgaW50
c3BlYz0wIGludGxlbj0zDQooWEVOKSAgaW50c2l6ZT0zIGludGxlbj0zDQooWEVOKSBkdF9pcnFf
bWFwX3JhdzogcGFyPS9pbnRlcnJ1cHQtY29udHJvbGxlckA2MDAwNDAwMCxpbnRzcGVjPVsweDAw
MDAwMDAwIDB4MDAwMDAwMTEuLi5dLG9pbnRzaXplPTMNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBp
cGFyPS9pbnRlcnJ1cHQtY29udHJvbGxlckA2MDAwNDAwMCwgc2l6ZT0zDQooWEVOKSAgLT4gYWRk
cnNpemU9Mg0KKFhFTikgIC0+IGdvdCBpdCAhDQooWEVOKSBpcnEgMCBub3QgKGRpcmVjdGx5IG9y
IGluZGlyZWN0bHkpIGNvbm5lY3RlZCB0byBwcmltYXJ5Y29udHJvbGxlci4gQ29ubmVjdGVkIHRv
IC9pbnRlcnJ1cHQtY29udHJvbGxlckA2MDAwNDAwMA0KKFhFTikgRFQ6ICoqIHRyYW5zbGF0aW9u
IGZvciBkZXZpY2UgL2hvc3QxeC9pMmNANTQ2YzAwMDAgKioNCihYRU4pIERUOiBidXMgaXMgZGVm
YXVsdCAobmE9MiwgbnM9Mikgb24gL2hvc3QxeA0KKFhFTikgRFQ6IHRyYW5zbGF0aW5nIGFkZHJl
c3M6PDM+IDAwMDAwMDAwPDM+IDU0NmMwMDAwPDM+DQooWEVOKSBEVDogcGFyZW50IGJ1cyBpcyBk
ZWZhdWx0IChuYT0yLCBucz0yKSBvbiAvDQooWEVOKSBEVDogZW1wdHkgcmFuZ2VzOyAxOjEgdHJh
bnNsYXRpb24NCihYRU4pIERUOiBwYXJlbnQgdHJhbnNsYXRpb24gZm9yOjwzPiAwMDAwMDAwMDwz
PiAwMDAwMDAwMDwzPg0KKFhFTikgRFQ6IHdpdGggb2Zmc2V0OiA1NDZjMDAwMA0KKFhFTikgRFQ6
IG9uZSBsZXZlbCB0cmFuc2xhdGlvbjo8Mz4gMDAwMDAwMDA8Mz4gNTQ2YzAwMDA8Mz4NCihYRU4p
IERUOiByZWFjaGVkIHJvb3Qgbm9kZQ0KKFhFTikgICAtIE1NSU86IDAwNTQ2YzAwMDAgLSAwMDU0
NmY0MDAwIFAyTVR5cGU9NQ0KKFhFTikgaGFuZGxlIC9ob3N0MXgvaTJjQDU0NmMwMDAwL3JicGN2
Ml9pbXgyMTlfYUAxMA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9ob3N0MXgvaTJjQDU0NmMw
MDAwL3JicGN2Ml9pbXgyMTlfYUAxMA0KKFhFTikgL2hvc3QxeC9pMmNANTQ2YzAwMDAvcmJwY3Yy
X2lteDIxOV9hQDEwIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9o
b3N0MXgvaTJjQDU0NmMwMDAwL3JicGN2Ml9pbXgyMTlfYUAxMCBpcyBiZWhpbmQgdGhlIElPTU1V
IGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vaG9zdDF4L2kyY0A1NDZjMDAw
MC9yYnBjdjJfaW14MjE5X2FAMTANCihYRU4pIGhhbmRsZSAvaG9zdDF4L2kyY0A1NDZjMDAwMC9y
YnBjdjJfaW14MjE5X2FAMTAvbW9kZTANCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vaG9zdDF4
L2kyY0A1NDZjMDAwMC9yYnBjdjJfaW14MjE5X2FAMTAvbW9kZTANCihYRU4pIC9ob3N0MXgvaTJj
QDU0NmMwMDAwL3JicGN2Ml9pbXgyMTlfYUAxMC9tb2RlMCBwYXNzdGhyb3VnaCA9IDEgbmFkZHIg
PSAwDQooWEVOKSBDaGVjayBpZiAvaG9zdDF4L2kyY0A1NDZjMDAwMC9yYnBjdjJfaW14MjE5X2FA
MTAvbW9kZTAgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVt
YmVyOiBkZXY9L2hvc3QxeC9pMmNANTQ2YzAwMDAvcmJwY3YyX2lteDIxOV9hQDEwL21vZGUwDQoo
WEVOKSBoYW5kbGUgL2hvc3QxeC9pMmNANTQ2YzAwMDAvcmJwY3YyX2lteDIxOV9hQDEwL21vZGUx
DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2hvc3QxeC9pMmNANTQ2YzAwMDAvcmJwY3YyX2lt
eDIxOV9hQDEwL21vZGUxDQooWEVOKSAvaG9zdDF4L2kyY0A1NDZjMDAwMC9yYnBjdjJfaW14MjE5
X2FAMTAvbW9kZTEgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL2hv
c3QxeC9pMmNANTQ2YzAwMDAvcmJwY3YyX2lteDIxOV9hQDEwL21vZGUxIGlzIGJlaGluZCB0aGUg
SU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9ob3N0MXgvaTJjQDU0
NmMwMDAwL3JicGN2Ml9pbXgyMTlfYUAxMC9tb2RlMQ0KKFhFTikgaGFuZGxlIC9ob3N0MXgvaTJj
QDU0NmMwMDAwL3JicGN2Ml9pbXgyMTlfYUAxMC9tb2RlMg0KKFhFTikgZHRfaXJxX251bWJlcjog
ZGV2PS9ob3N0MXgvaTJjQDU0NmMwMDAwL3JicGN2Ml9pbXgyMTlfYUAxMC9tb2RlMg0KKFhFTikg
L2hvc3QxeC9pMmNANTQ2YzAwMDAvcmJwY3YyX2lteDIxOV9hQDEwL21vZGUyIHBhc3N0aHJvdWdo
ID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9ob3N0MXgvaTJjQDU0NmMwMDAwL3JicGN2
Ml9pbXgyMTlfYUAxMC9tb2RlMiBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4p
IGR0X2lycV9udW1iZXI6IGRldj0vaG9zdDF4L2kyY0A1NDZjMDAwMC9yYnBjdjJfaW14MjE5X2FA
MTAvbW9kZTINCihYRU4pIGhhbmRsZSAvaG9zdDF4L2kyY0A1NDZjMDAwMC9yYnBjdjJfaW14MjE5
X2FAMTAvbW9kZTMNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vaG9zdDF4L2kyY0A1NDZjMDAw
MC9yYnBjdjJfaW14MjE5X2FAMTAvbW9kZTMNCihYRU4pIC9ob3N0MXgvaTJjQDU0NmMwMDAwL3Ji
cGN2Ml9pbXgyMTlfYUAxMC9tb2RlMyBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBD
aGVjayBpZiAvaG9zdDF4L2kyY0A1NDZjMDAwMC9yYnBjdjJfaW14MjE5X2FAMTAvbW9kZTMgaXMg
YmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2hv
c3QxeC9pMmNANTQ2YzAwMDAvcmJwY3YyX2lteDIxOV9hQDEwL21vZGUzDQooWEVOKSBoYW5kbGUg
L2hvc3QxeC9pMmNANTQ2YzAwMDAvcmJwY3YyX2lteDIxOV9hQDEwL21vZGU0DQooWEVOKSBkdF9p
cnFfbnVtYmVyOiBkZXY9L2hvc3QxeC9pMmNANTQ2YzAwMDAvcmJwY3YyX2lteDIxOV9hQDEwL21v
ZGU0DQooWEVOKSAvaG9zdDF4L2kyY0A1NDZjMDAwMC9yYnBjdjJfaW14MjE5X2FAMTAvbW9kZTQg
cGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL2hvc3QxeC9pMmNANTQ2
YzAwMDAvcmJwY3YyX2lteDIxOV9hQDEwL21vZGU0IGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFk
ZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9ob3N0MXgvaTJjQDU0NmMwMDAwL3JicGN2
Ml9pbXgyMTlfYUAxMC9tb2RlNA0KKFhFTikgaGFuZGxlIC9ob3N0MXgvaTJjQDU0NmMwMDAwL3Ji
cGN2Ml9pbXgyMTlfYUAxMC9wb3J0cw0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9ob3N0MXgv
aTJjQDU0NmMwMDAwL3JicGN2Ml9pbXgyMTlfYUAxMC9wb3J0cw0KKFhFTikgL2hvc3QxeC9pMmNA
NTQ2YzAwMDAvcmJwY3YyX2lteDIxOV9hQDEwL3BvcnRzIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9
IDANCihYRU4pIENoZWNrIGlmIC9ob3N0MXgvaTJjQDU0NmMwMDAwL3JicGN2Ml9pbXgyMTlfYUAx
MC9wb3J0cyBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1i
ZXI6IGRldj0vaG9zdDF4L2kyY0A1NDZjMDAwMC9yYnBjdjJfaW14MjE5X2FAMTAvcG9ydHMNCihY
RU4pIGhhbmRsZSAvaG9zdDF4L2kyY0A1NDZjMDAwMC9yYnBjdjJfaW14MjE5X2FAMTAvcG9ydHMv
cG9ydEAwDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2hvc3QxeC9pMmNANTQ2YzAwMDAvcmJw
Y3YyX2lteDIxOV9hQDEwL3BvcnRzL3BvcnRAMA0KKFhFTikgL2hvc3QxeC9pMmNANTQ2YzAwMDAv
cmJwY3YyX2lteDIxOV9hQDEwL3BvcnRzL3BvcnRAMCBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAw
DQooWEVOKSBDaGVjayBpZiAvaG9zdDF4L2kyY0A1NDZjMDAwMC9yYnBjdjJfaW14MjE5X2FAMTAv
cG9ydHMvcG9ydEAwIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJx
X251bWJlcjogZGV2PS9ob3N0MXgvaTJjQDU0NmMwMDAwL3JicGN2Ml9pbXgyMTlfYUAxMC9wb3J0
cy9wb3J0QDANCihYRU4pIGhhbmRsZSAvaG9zdDF4L2kyY0A1NDZjMDAwMC9yYnBjdjJfaW14MjE5
X2FAMTAvcG9ydHMvcG9ydEAwL2VuZHBvaW50DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2hv
c3QxeC9pMmNANTQ2YzAwMDAvcmJwY3YyX2lteDIxOV9hQDEwL3BvcnRzL3BvcnRAMC9lbmRwb2lu
dA0KKFhFTikgL2hvc3QxeC9pMmNANTQ2YzAwMDAvcmJwY3YyX2lteDIxOV9hQDEwL3BvcnRzL3Bv
cnRAMC9lbmRwb2ludCBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAv
aG9zdDF4L2kyY0A1NDZjMDAwMC9yYnBjdjJfaW14MjE5X2FAMTAvcG9ydHMvcG9ydEAwL2VuZHBv
aW50IGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjog
ZGV2PS9ob3N0MXgvaTJjQDU0NmMwMDAwL3JicGN2Ml9pbXgyMTlfYUAxMC9wb3J0cy9wb3J0QDAv
ZW5kcG9pbnQNCihYRU4pIGhhbmRsZSAvaG9zdDF4L2kyY0A1NDZjMDAwMC9pbmEzMjIxeEA0MA0K
KFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9ob3N0MXgvaTJjQDU0NmMwMDAwL2luYTMyMjF4QDQw
DQooWEVOKSAvaG9zdDF4L2kyY0A1NDZjMDAwMC9pbmEzMjIxeEA0MCBwYXNzdGhyb3VnaCA9IDEg
bmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvaG9zdDF4L2kyY0A1NDZjMDAwMC9pbmEzMjIxeEA0
MCBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRl
dj0vaG9zdDF4L2kyY0A1NDZjMDAwMC9pbmEzMjIxeEA0MA0KKFhFTikgaGFuZGxlIC9ob3N0MXgv
aTJjQDU0NmMwMDAwL2luYTMyMjF4QDQwL2NoYW5uZWxAMA0KKFhFTikgZHRfaXJxX251bWJlcjog
ZGV2PS9ob3N0MXgvaTJjQDU0NmMwMDAwL2luYTMyMjF4QDQwL2NoYW5uZWxAMA0KKFhFTikgL2hv
c3QxeC9pMmNANTQ2YzAwMDAvaW5hMzIyMXhANDAvY2hhbm5lbEAwIHBhc3N0aHJvdWdoID0gMSBu
YWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9ob3N0MXgvaTJjQDU0NmMwMDAwL2luYTMyMjF4QDQw
L2NoYW5uZWxAMCBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9u
dW1iZXI6IGRldj0vaG9zdDF4L2kyY0A1NDZjMDAwMC9pbmEzMjIxeEA0MC9jaGFubmVsQDANCihY
RU4pIGhhbmRsZSAvaG9zdDF4L2kyY0A1NDZjMDAwMC9pbmEzMjIxeEA0MC9jaGFubmVsQDENCihY
RU4pIGR0X2lycV9udW1iZXI6IGRldj0vaG9zdDF4L2kyY0A1NDZjMDAwMC9pbmEzMjIxeEA0MC9j
aGFubmVsQDENCihYRU4pIC9ob3N0MXgvaTJjQDU0NmMwMDAwL2luYTMyMjF4QDQwL2NoYW5uZWxA
MSBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvaG9zdDF4L2kyY0A1
NDZjMDAwMC9pbmEzMjIxeEA0MC9jaGFubmVsQDEgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRk
IGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2hvc3QxeC9pMmNANTQ2YzAwMDAvaW5hMzIy
MXhANDAvY2hhbm5lbEAxDQooWEVOKSBoYW5kbGUgL2hvc3QxeC9pMmNANTQ2YzAwMDAvaW5hMzIy
MXhANDAvY2hhbm5lbEAyDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2hvc3QxeC9pMmNANTQ2
YzAwMDAvaW5hMzIyMXhANDAvY2hhbm5lbEAyDQooWEVOKSAvaG9zdDF4L2kyY0A1NDZjMDAwMC9p
bmEzMjIxeEA0MC9jaGFubmVsQDIgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hl
Y2sgaWYgL2hvc3QxeC9pMmNANTQ2YzAwMDAvaW5hMzIyMXhANDAvY2hhbm5lbEAyIGlzIGJlaGlu
ZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9ob3N0MXgv
aTJjQDU0NmMwMDAwL2luYTMyMjF4QDQwL2NoYW5uZWxAMg0KKFhFTikgaGFuZGxlIC9ob3N0MXgv
bnZjc2kNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vaG9zdDF4L252Y3NpDQooWEVOKSAvaG9z
dDF4L252Y3NpIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9ob3N0
MXgvbnZjc2kgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVt
YmVyOiBkZXY9L2hvc3QxeC9udmNzaQ0KKFhFTikgaGFuZGxlIC9ob3N0MXgvbnZjc2kvY2hhbm5l
bEAwDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2hvc3QxeC9udmNzaS9jaGFubmVsQDANCihY
RU4pIC9ob3N0MXgvbnZjc2kvY2hhbm5lbEAwIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihY
RU4pIENoZWNrIGlmIC9ob3N0MXgvbnZjc2kvY2hhbm5lbEAwIGlzIGJlaGluZCB0aGUgSU9NTVUg
YW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9ob3N0MXgvbnZjc2kvY2hhbm5l
bEAwDQooWEVOKSBoYW5kbGUgL2hvc3QxeC9udmNzaS9jaGFubmVsQDAvcG9ydHMNCihYRU4pIGR0
X2lycV9udW1iZXI6IGRldj0vaG9zdDF4L252Y3NpL2NoYW5uZWxAMC9wb3J0cw0KKFhFTikgL2hv
c3QxeC9udmNzaS9jaGFubmVsQDAvcG9ydHMgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhF
TikgQ2hlY2sgaWYgL2hvc3QxeC9udmNzaS9jaGFubmVsQDAvcG9ydHMgaXMgYmVoaW5kIHRoZSBJ
T01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2hvc3QxeC9udmNzaS9j
aGFubmVsQDAvcG9ydHMNCihYRU4pIGhhbmRsZSAvaG9zdDF4L252Y3NpL2NoYW5uZWxAMC9wb3J0
cy9wb3J0QDANCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vaG9zdDF4L252Y3NpL2NoYW5uZWxA
MC9wb3J0cy9wb3J0QDANCihYRU4pIC9ob3N0MXgvbnZjc2kvY2hhbm5lbEAwL3BvcnRzL3BvcnRA
MCBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvaG9zdDF4L252Y3Np
L2NoYW5uZWxAMC9wb3J0cy9wb3J0QDAgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQoo
WEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2hvc3QxeC9udmNzaS9jaGFubmVsQDAvcG9ydHMvcG9y
dEAwDQooWEVOKSBoYW5kbGUgL2hvc3QxeC9udmNzaS9jaGFubmVsQDAvcG9ydHMvcG9ydEAwL2Vu
ZHBvaW50QDANCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vaG9zdDF4L252Y3NpL2NoYW5uZWxA
MC9wb3J0cy9wb3J0QDAvZW5kcG9pbnRAMA0KKFhFTikgL2hvc3QxeC9udmNzaS9jaGFubmVsQDAv
cG9ydHMvcG9ydEAwL2VuZHBvaW50QDAgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikg
Q2hlY2sgaWYgL2hvc3QxeC9udmNzaS9jaGFubmVsQDAvcG9ydHMvcG9ydEAwL2VuZHBvaW50QDAg
aXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9
L2hvc3QxeC9udmNzaS9jaGFubmVsQDAvcG9ydHMvcG9ydEAwL2VuZHBvaW50QDANCihYRU4pIGhh
bmRsZSAvaG9zdDF4L252Y3NpL2NoYW5uZWxAMC9wb3J0cy9wb3J0QDENCihYRU4pIGR0X2lycV9u
dW1iZXI6IGRldj0vaG9zdDF4L252Y3NpL2NoYW5uZWxAMC9wb3J0cy9wb3J0QDENCihYRU4pIC9o
b3N0MXgvbnZjc2kvY2hhbm5lbEAwL3BvcnRzL3BvcnRAMSBwYXNzdGhyb3VnaCA9IDEgbmFkZHIg
PSAwDQooWEVOKSBDaGVjayBpZiAvaG9zdDF4L252Y3NpL2NoYW5uZWxAMC9wb3J0cy9wb3J0QDEg
aXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9
L2hvc3QxeC9udmNzaS9jaGFubmVsQDAvcG9ydHMvcG9ydEAxDQooWEVOKSBoYW5kbGUgL2hvc3Qx
eC9udmNzaS9jaGFubmVsQDAvcG9ydHMvcG9ydEAxL2VuZHBvaW50QDENCihYRU4pIGR0X2lycV9u
dW1iZXI6IGRldj0vaG9zdDF4L252Y3NpL2NoYW5uZWxAMC9wb3J0cy9wb3J0QDEvZW5kcG9pbnRA
MQ0KKFhFTikgL2hvc3QxeC9udmNzaS9jaGFubmVsQDAvcG9ydHMvcG9ydEAxL2VuZHBvaW50QDEg
cGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL2hvc3QxeC9udmNzaS9j
aGFubmVsQDAvcG9ydHMvcG9ydEAxL2VuZHBvaW50QDEgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQg
YWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2hvc3QxeC9udmNzaS9jaGFubmVsQDAv
cG9ydHMvcG9ydEAxL2VuZHBvaW50QDENCihYRU4pIGhhbmRsZSAvaG9zdDF4L252Y3NpL2NoYW5u
ZWxAMQ0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9ob3N0MXgvbnZjc2kvY2hhbm5lbEAxDQoo
WEVOKSAvaG9zdDF4L252Y3NpL2NoYW5uZWxAMSBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQoo
WEVOKSBDaGVjayBpZiAvaG9zdDF4L252Y3NpL2NoYW5uZWxAMSBpcyBiZWhpbmQgdGhlIElPTU1V
IGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vaG9zdDF4L252Y3NpL2NoYW5u
ZWxAMQ0KKFhFTikgaGFuZGxlIC9ob3N0MXgvbnZjc2kvY2hhbm5lbEAxL3BvcnRzDQooWEVOKSBk
dF9pcnFfbnVtYmVyOiBkZXY9L2hvc3QxeC9udmNzaS9jaGFubmVsQDEvcG9ydHMNCihYRU4pIC9o
b3N0MXgvbnZjc2kvY2hhbm5lbEAxL3BvcnRzIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihY
RU4pIENoZWNrIGlmIC9ob3N0MXgvbnZjc2kvY2hhbm5lbEAxL3BvcnRzIGlzIGJlaGluZCB0aGUg
SU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9ob3N0MXgvbnZjc2kv
Y2hhbm5lbEAxL3BvcnRzDQooWEVOKSBoYW5kbGUgL2hvc3QxeC9udmNzaS9jaGFubmVsQDEvcG9y
dHMvcG9ydEAyDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2hvc3QxeC9udmNzaS9jaGFubmVs
QDEvcG9ydHMvcG9ydEAyDQooWEVOKSAvaG9zdDF4L252Y3NpL2NoYW5uZWxAMS9wb3J0cy9wb3J0
QDIgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL2hvc3QxeC9udmNz
aS9jaGFubmVsQDEvcG9ydHMvcG9ydEAyIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0K
KFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9ob3N0MXgvbnZjc2kvY2hhbm5lbEAxL3BvcnRzL3Bv
cnRAMg0KKFhFTikgaGFuZGxlIC9ob3N0MXgvbnZjc2kvY2hhbm5lbEAxL3BvcnRzL3BvcnRAMi9l
bmRwb2ludEAyDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2hvc3QxeC9udmNzaS9jaGFubmVs
QDEvcG9ydHMvcG9ydEAyL2VuZHBvaW50QDINCihYRU4pIC9ob3N0MXgvbnZjc2kvY2hhbm5lbEAx
L3BvcnRzL3BvcnRAMi9lbmRwb2ludEAyIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4p
IENoZWNrIGlmIC9ob3N0MXgvbnZjc2kvY2hhbm5lbEAxL3BvcnRzL3BvcnRAMi9lbmRwb2ludEAy
IGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2
PS9ob3N0MXgvbnZjc2kvY2hhbm5lbEAxL3BvcnRzL3BvcnRAMi9lbmRwb2ludEAyDQooWEVOKSBo
YW5kbGUgL2hvc3QxeC9udmNzaS9jaGFubmVsQDEvcG9ydHMvcG9ydEAzDQooWEVOKSBkdF9pcnFf
bnVtYmVyOiBkZXY9L2hvc3QxeC9udmNzaS9jaGFubmVsQDEvcG9ydHMvcG9ydEAzDQooWEVOKSAv
aG9zdDF4L252Y3NpL2NoYW5uZWxAMS9wb3J0cy9wb3J0QDMgcGFzc3Rocm91Z2ggPSAxIG5hZGRy
ID0gMA0KKFhFTikgQ2hlY2sgaWYgL2hvc3QxeC9udmNzaS9jaGFubmVsQDEvcG9ydHMvcG9ydEAz
IGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2
PS9ob3N0MXgvbnZjc2kvY2hhbm5lbEAxL3BvcnRzL3BvcnRAMw0KKFhFTikgaGFuZGxlIC9ob3N0
MXgvbnZjc2kvY2hhbm5lbEAxL3BvcnRzL3BvcnRAMy9lbmRwb2ludEAzDQooWEVOKSBkdF9pcnFf
bnVtYmVyOiBkZXY9L2hvc3QxeC9udmNzaS9jaGFubmVsQDEvcG9ydHMvcG9ydEAzL2VuZHBvaW50
QDMNCihYRU4pIC9ob3N0MXgvbnZjc2kvY2hhbm5lbEAxL3BvcnRzL3BvcnRAMy9lbmRwb2ludEAz
IHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9ob3N0MXgvbnZjc2kv
Y2hhbm5lbEAxL3BvcnRzL3BvcnRAMy9lbmRwb2ludEAzIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5k
IGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9ob3N0MXgvbnZjc2kvY2hhbm5lbEAx
L3BvcnRzL3BvcnRAMy9lbmRwb2ludEAzDQooWEVOKSBoYW5kbGUgL2dwdQ0KKFhFTikgZHRfaXJx
X251bWJlcjogZGV2PS9ncHUNCihYRU4pICB1c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihY
RU4pICBpbnRzcGVjPTAgaW50bGVuPTYNCihYRU4pICBpbnRzaXplPTMgaW50bGVuPTYNCihYRU4p
IGR0X2RldmljZV9nZXRfcmF3X2lycTogZGV2PS9ncHUsIGluZGV4PTANCihYRU4pICB1c2luZyAn
aW50ZXJydXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTYNCihYRU4pICBp
bnRzaXplPTMgaW50bGVuPTYNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBwYXI9L2ludGVycnVwdC1j
b250cm9sbGVyQDYwMDA0MDAwLGludHNwZWM9WzB4MDAwMDAwMDAgMHgwMDAwMDA5ZC4uLl0sb2lu
dHNpemU9Mw0KKFhFTikgZHRfaXJxX21hcF9yYXc6IGlwYXI9L2ludGVycnVwdC1jb250cm9sbGVy
QDYwMDA0MDAwLCBzaXplPTMNCihYRU4pICAtPiBhZGRyc2l6ZT0yDQooWEVOKSAgLT4gZ290IGl0
ICENCihYRU4pIGR0X2RldmljZV9nZXRfcmF3X2lycTogZGV2PS9ncHUsIGluZGV4PTENCihYRU4p
ICB1c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTYN
CihYRU4pICBpbnRzaXplPTMgaW50bGVuPTYNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBwYXI9L2lu
dGVycnVwdC1jb250cm9sbGVyQDYwMDA0MDAwLGludHNwZWM9WzB4MDAwMDAwMDAgMHgwMDAwMDA5
ZS4uLl0sb2ludHNpemU9Mw0KKFhFTikgZHRfaXJxX21hcF9yYXc6IGlwYXI9L2ludGVycnVwdC1j
b250cm9sbGVyQDYwMDA0MDAwLCBzaXplPTMNCihYRU4pICAtPiBhZGRyc2l6ZT0yDQooWEVOKSAg
LT4gZ290IGl0ICENCihYRU4pIC9ncHUgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMw0KKFhFTikg
Q2hlY2sgaWYgL2dwdSBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2ly
cV9udW1iZXI6IGRldj0vZ3B1DQooWEVOKSAgdXNpbmcgJ2ludGVycnVwdHMnIHByb3BlcnR5DQoo
WEVOKSAgaW50c3BlYz0wIGludGxlbj02DQooWEVOKSAgaW50c2l6ZT0zIGludGxlbj02DQooWEVO
KSBkdF9kZXZpY2VfZ2V0X3Jhd19pcnE6IGRldj0vZ3B1LCBpbmRleD0wDQooWEVOKSAgdXNpbmcg
J2ludGVycnVwdHMnIHByb3BlcnR5DQooWEVOKSAgaW50c3BlYz0wIGludGxlbj02DQooWEVOKSAg
aW50c2l6ZT0zIGludGxlbj02DQooWEVOKSBkdF9pcnFfbWFwX3JhdzogcGFyPS9pbnRlcnJ1cHQt
Y29udHJvbGxlckA2MDAwNDAwMCxpbnRzcGVjPVsweDAwMDAwMDAwIDB4MDAwMDAwOWQuLi5dLG9p
bnRzaXplPTMNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBpcGFyPS9pbnRlcnJ1cHQtY29udHJvbGxl
ckA2MDAwNDAwMCwgc2l6ZT0zDQooWEVOKSAgLT4gYWRkcnNpemU9Mg0KKFhFTikgIC0+IGdvdCBp
dCAhDQooWEVOKSBpcnEgMCBub3QgKGRpcmVjdGx5IG9yIGluZGlyZWN0bHkpIGNvbm5lY3RlZCB0
byBwcmltYXJ5Y29udHJvbGxlci4gQ29ubmVjdGVkIHRvIC9pbnRlcnJ1cHQtY29udHJvbGxlckA2
MDAwNDAwMA0KKFhFTikgZHRfZGV2aWNlX2dldF9yYXdfaXJxOiBkZXY9L2dwdSwgaW5kZXg9MQ0K
KFhFTikgIHVzaW5nICdpbnRlcnJ1cHRzJyBwcm9wZXJ0eQ0KKFhFTikgIGludHNwZWM9MCBpbnRs
ZW49Ng0KKFhFTikgIGludHNpemU9MyBpbnRsZW49Ng0KKFhFTikgZHRfaXJxX21hcF9yYXc6IHBh
cj0vaW50ZXJydXB0LWNvbnRyb2xsZXJANjAwMDQwMDAsaW50c3BlYz1bMHgwMDAwMDAwMCAweDAw
MDAwMDllLi4uXSxvaW50c2l6ZT0zDQooWEVOKSBkdF9pcnFfbWFwX3JhdzogaXBhcj0vaW50ZXJy
dXB0LWNvbnRyb2xsZXJANjAwMDQwMDAsIHNpemU9Mw0KKFhFTikgIC0+IGFkZHJzaXplPTINCihY
RU4pICAtPiBnb3QgaXQgIQ0KKFhFTikgaXJxIDEgbm90IChkaXJlY3RseSBvciBpbmRpcmVjdGx5
KSBjb25uZWN0ZWQgdG8gcHJpbWFyeWNvbnRyb2xsZXIuIENvbm5lY3RlZCB0byAvaW50ZXJydXB0
LWNvbnRyb2xsZXJANjAwMDQwMDANCihYRU4pIERUOiAqKiB0cmFuc2xhdGlvbiBmb3IgZGV2aWNl
IC9ncHUgKioNCihYRU4pIERUOiBidXMgaXMgZGVmYXVsdCAobmE9MiwgbnM9Mikgb24gLw0KKFhF
TikgRFQ6IHRyYW5zbGF0aW5nIGFkZHJlc3M6PDM+IDAwMDAwMDAwPDM+IDU3MDAwMDAwPDM+DQoo
WEVOKSBEVDogcmVhY2hlZCByb290IG5vZGUNCihYRU4pICAgLSBNTUlPOiAwMDU3MDAwMDAwIC0g
MDA1ODAwMDAwMCBQMk1UeXBlPTUNCihYRU4pIERUOiAqKiB0cmFuc2xhdGlvbiBmb3IgZGV2aWNl
IC9ncHUgKioNCihYRU4pIERUOiBidXMgaXMgZGVmYXVsdCAobmE9MiwgbnM9Mikgb24gLw0KKFhF
TikgRFQ6IHRyYW5zbGF0aW5nIGFkZHJlc3M6PDM+IDAwMDAwMDAwPDM+IDU4MDAwMDAwPDM+DQoo
WEVOKSBEVDogcmVhY2hlZCByb290IG5vZGUNCihYRU4pICAgLSBNTUlPOiAwMDU4MDAwMDAwIC0g
MDA1OTAwMDAwMCBQMk1UeXBlPTUNCihYRU4pIERUOiAqKiB0cmFuc2xhdGlvbiBmb3IgZGV2aWNl
IC9ncHUgKioNCihYRU4pIERUOiBidXMgaXMgZGVmYXVsdCAobmE9MiwgbnM9Mikgb24gLw0KKFhF
TikgRFQ6IHRyYW5zbGF0aW5nIGFkZHJlc3M6PDM+IDAwMDAwMDAwPDM+IDUzOGYwMDAwPDM+DQoo
WEVOKSBEVDogcmVhY2hlZCByb290IG5vZGUNCihYRU4pICAgLSBNTUlPOiAwMDUzOGYwMDAwIC0g
MDA1MzhmMTAwMCBQMk1UeXBlPTUNCihYRU4pIGhhbmRsZSAvbWlwaWNhbA0KKFhFTikgZHRfaXJx
X251bWJlcjogZGV2PS9taXBpY2FsDQooWEVOKSAvbWlwaWNhbCBwYXNzdGhyb3VnaCA9IDEgbmFk
ZHIgPSAxDQooWEVOKSBDaGVjayBpZiAvbWlwaWNhbCBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBh
ZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vbWlwaWNhbA0KKFhFTikgRFQ6ICoqIHRy
YW5zbGF0aW9uIGZvciBkZXZpY2UgL21pcGljYWwgKioNCihYRU4pIERUOiBidXMgaXMgZGVmYXVs
dCAobmE9MiwgbnM9Mikgb24gLw0KKFhFTikgRFQ6IHRyYW5zbGF0aW5nIGFkZHJlc3M6PDM+IDAw
MDAwMDAwPDM+IDcwMGUzMDAwPDM+DQooWEVOKSBEVDogcmVhY2hlZCByb290IG5vZGUNCihYRU4p
ICAgLSBNTUlPOiAwMDcwMGUzMDAwIC0gMDA3MDBlMzEwMCBQMk1UeXBlPTUNCihYRU4pIGhhbmRs
ZSAvbWlwaWNhbC9wcm9kLXNldHRpbmdzDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L21pcGlj
YWwvcHJvZC1zZXR0aW5ncw0KKFhFTikgL21pcGljYWwvcHJvZC1zZXR0aW5ncyBwYXNzdGhyb3Vn
aCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvbWlwaWNhbC9wcm9kLXNldHRpbmdzIGlz
IGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9t
aXBpY2FsL3Byb2Qtc2V0dGluZ3MNCihYRU4pIGhhbmRsZSAvbWlwaWNhbC9wcm9kLXNldHRpbmdz
L3Byb2RfY19kcGh5X2RzaQ0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9taXBpY2FsL3Byb2Qt
c2V0dGluZ3MvcHJvZF9jX2RwaHlfZHNpDQooWEVOKSAvbWlwaWNhbC9wcm9kLXNldHRpbmdzL3By
b2RfY19kcGh5X2RzaSBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAv
bWlwaWNhbC9wcm9kLXNldHRpbmdzL3Byb2RfY19kcGh5X2RzaSBpcyBiZWhpbmQgdGhlIElPTU1V
IGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vbWlwaWNhbC9wcm9kLXNldHRp
bmdzL3Byb2RfY19kcGh5X2RzaQ0KKFhFTikgaGFuZGxlIC9wbWNANzAwMGU0MDANCihYRU4pIGR0
X2lycV9udW1iZXI6IGRldj0vcG1jQDcwMDBlNDAwDQooWEVOKSAvcG1jQDcwMDBlNDAwIHBhc3N0
aHJvdWdoID0gMSBuYWRkciA9IDENCihYRU4pIENoZWNrIGlmIC9wbWNANzAwMGU0MDAgaXMgYmVo
aW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3BtY0A3
MDAwZTQwMA0KKFhFTikgRFQ6ICoqIHRyYW5zbGF0aW9uIGZvciBkZXZpY2UgL3BtY0A3MDAwZTQw
MCAqKg0KKFhFTikgRFQ6IGJ1cyBpcyBkZWZhdWx0IChuYT0yLCBucz0yKSBvbiAvDQooWEVOKSBE
VDogdHJhbnNsYXRpbmcgYWRkcmVzczo8Mz4gMDAwMDAwMDA8Mz4gNzAwMGU0MDA8Mz4NCihYRU4p
IERUOiByZWFjaGVkIHJvb3Qgbm9kZQ0KKFhFTikgICAtIE1NSU86IDAwNzAwMGU0MDAgLSAwMDcw
MDBlODAwIFAyTVR5cGU9NQ0KKFhFTikgaGFuZGxlIC9wbWNANzAwMGU0MDAvcGV4X2VuDQooWEVO
KSBkdF9pcnFfbnVtYmVyOiBkZXY9L3BtY0A3MDAwZTQwMC9wZXhfZW4NCihYRU4pIC9wbWNANzAw
MGU0MDAvcGV4X2VuIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9w
bWNANzAwMGU0MDAvcGV4X2VuIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikg
ZHRfaXJxX251bWJlcjogZGV2PS9wbWNANzAwMGU0MDAvcGV4X2VuDQooWEVOKSBoYW5kbGUgL3Bt
Y0A3MDAwZTQwMC9wZXhfZW4vcGV4LWlvLWRwZC1zaWduYWxzLWRpcw0KKFhFTikgZHRfaXJxX251
bWJlcjogZGV2PS9wbWNANzAwMGU0MDAvcGV4X2VuL3BleC1pby1kcGQtc2lnbmFscy1kaXMNCihY
RU4pIC9wbWNANzAwMGU0MDAvcGV4X2VuL3BleC1pby1kcGQtc2lnbmFscy1kaXMgcGFzc3Rocm91
Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3BtY0A3MDAwZTQwMC9wZXhfZW4vcGV4
LWlvLWRwZC1zaWduYWxzLWRpcyBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4p
IGR0X2lycV9udW1iZXI6IGRldj0vcG1jQDcwMDBlNDAwL3BleF9lbi9wZXgtaW8tZHBkLXNpZ25h
bHMtZGlzDQooWEVOKSBoYW5kbGUgL3BtY0A3MDAwZTQwMC9wZXhfZGlzDQooWEVOKSBkdF9pcnFf
bnVtYmVyOiBkZXY9L3BtY0A3MDAwZTQwMC9wZXhfZGlzDQooWEVOKSAvcG1jQDcwMDBlNDAwL3Bl
eF9kaXMgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3BtY0A3MDAw
ZTQwMC9wZXhfZGlzIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJx
X251bWJlcjogZGV2PS9wbWNANzAwMGU0MDAvcGV4X2Rpcw0KKFhFTikgaGFuZGxlIC9wbWNANzAw
MGU0MDAvcGV4X2Rpcy9wZXgtaW8tZHBkLXNpZ25hbHMtZW4NCihYRU4pIGR0X2lycV9udW1iZXI6
IGRldj0vcG1jQDcwMDBlNDAwL3BleF9kaXMvcGV4LWlvLWRwZC1zaWduYWxzLWVuDQooWEVOKSAv
cG1jQDcwMDBlNDAwL3BleF9kaXMvcGV4LWlvLWRwZC1zaWduYWxzLWVuIHBhc3N0aHJvdWdoID0g
MSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9wbWNANzAwMGU0MDAvcGV4X2Rpcy9wZXgtaW8t
ZHBkLXNpZ25hbHMtZW4gaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9p
cnFfbnVtYmVyOiBkZXY9L3BtY0A3MDAwZTQwMC9wZXhfZGlzL3BleC1pby1kcGQtc2lnbmFscy1l
bg0KKFhFTikgaGFuZGxlIC9wbWNANzAwMGU0MDAvaGRtaS1kcGQtZW5hYmxlDQooWEVOKSBkdF9p
cnFfbnVtYmVyOiBkZXY9L3BtY0A3MDAwZTQwMC9oZG1pLWRwZC1lbmFibGUNCihYRU4pIC9wbWNA
NzAwMGU0MDAvaGRtaS1kcGQtZW5hYmxlIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4p
IENoZWNrIGlmIC9wbWNANzAwMGU0MDAvaGRtaS1kcGQtZW5hYmxlIGlzIGJlaGluZCB0aGUgSU9N
TVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9wbWNANzAwMGU0MDAvaGRt
aS1kcGQtZW5hYmxlDQooWEVOKSBoYW5kbGUgL3BtY0A3MDAwZTQwMC9oZG1pLWRwZC1lbmFibGUv
aGRtaS1wYWQtbG93cG93ZXItZW5hYmxlDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3BtY0A3
MDAwZTQwMC9oZG1pLWRwZC1lbmFibGUvaGRtaS1wYWQtbG93cG93ZXItZW5hYmxlDQooWEVOKSAv
cG1jQDcwMDBlNDAwL2hkbWktZHBkLWVuYWJsZS9oZG1pLXBhZC1sb3dwb3dlci1lbmFibGUgcGFz
c3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3BtY0A3MDAwZTQwMC9oZG1p
LWRwZC1lbmFibGUvaGRtaS1wYWQtbG93cG93ZXItZW5hYmxlIGlzIGJlaGluZCB0aGUgSU9NTVUg
YW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9wbWNANzAwMGU0MDAvaGRtaS1k
cGQtZW5hYmxlL2hkbWktcGFkLWxvd3Bvd2VyLWVuYWJsZQ0KKFhFTikgaGFuZGxlIC9wbWNANzAw
MGU0MDAvaGRtaS1kcGQtZGlzYWJsZQ0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9wbWNANzAw
MGU0MDAvaGRtaS1kcGQtZGlzYWJsZQ0KKFhFTikgL3BtY0A3MDAwZTQwMC9oZG1pLWRwZC1kaXNh
YmxlIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9wbWNANzAwMGU0
MDAvaGRtaS1kcGQtZGlzYWJsZSBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4p
IGR0X2lycV9udW1iZXI6IGRldj0vcG1jQDcwMDBlNDAwL2hkbWktZHBkLWRpc2FibGUNCihYRU4p
IGhhbmRsZSAvcG1jQDcwMDBlNDAwL2hkbWktZHBkLWRpc2FibGUvaGRtaS1wYWQtbG93cG93ZXIt
ZGlzYWJsZQ0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9wbWNANzAwMGU0MDAvaGRtaS1kcGQt
ZGlzYWJsZS9oZG1pLXBhZC1sb3dwb3dlci1kaXNhYmxlDQooWEVOKSAvcG1jQDcwMDBlNDAwL2hk
bWktZHBkLWRpc2FibGUvaGRtaS1wYWQtbG93cG93ZXItZGlzYWJsZSBwYXNzdGhyb3VnaCA9IDEg
bmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvcG1jQDcwMDBlNDAwL2hkbWktZHBkLWRpc2FibGUv
aGRtaS1wYWQtbG93cG93ZXItZGlzYWJsZSBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQN
CihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcG1jQDcwMDBlNDAwL2hkbWktZHBkLWRpc2FibGUv
aGRtaS1wYWQtbG93cG93ZXItZGlzYWJsZQ0KKFhFTikgaGFuZGxlIC9wbWNANzAwMGU0MDAvZHNp
LWRwZC1lbmFibGUNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcG1jQDcwMDBlNDAwL2RzaS1k
cGQtZW5hYmxlDQooWEVOKSAvcG1jQDcwMDBlNDAwL2RzaS1kcGQtZW5hYmxlIHBhc3N0aHJvdWdo
ID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9wbWNANzAwMGU0MDAvZHNpLWRwZC1lbmFi
bGUgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBk
ZXY9L3BtY0A3MDAwZTQwMC9kc2ktZHBkLWVuYWJsZQ0KKFhFTikgaGFuZGxlIC9wbWNANzAwMGU0
MDAvZHNpLWRwZC1lbmFibGUvZHNpLXBhZC1sb3dwb3dlci1lbmFibGUNCihYRU4pIGR0X2lycV9u
dW1iZXI6IGRldj0vcG1jQDcwMDBlNDAwL2RzaS1kcGQtZW5hYmxlL2RzaS1wYWQtbG93cG93ZXIt
ZW5hYmxlDQooWEVOKSAvcG1jQDcwMDBlNDAwL2RzaS1kcGQtZW5hYmxlL2RzaS1wYWQtbG93cG93
ZXItZW5hYmxlIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9wbWNA
NzAwMGU0MDAvZHNpLWRwZC1lbmFibGUvZHNpLXBhZC1sb3dwb3dlci1lbmFibGUgaXMgYmVoaW5k
IHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3BtY0A3MDAw
ZTQwMC9kc2ktZHBkLWVuYWJsZS9kc2ktcGFkLWxvd3Bvd2VyLWVuYWJsZQ0KKFhFTikgaGFuZGxl
IC9wbWNANzAwMGU0MDAvZHNpLWRwZC1kaXNhYmxlDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9
L3BtY0A3MDAwZTQwMC9kc2ktZHBkLWRpc2FibGUNCihYRU4pIC9wbWNANzAwMGU0MDAvZHNpLWRw
ZC1kaXNhYmxlIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9wbWNA
NzAwMGU0MDAvZHNpLWRwZC1kaXNhYmxlIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0K
KFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9wbWNANzAwMGU0MDAvZHNpLWRwZC1kaXNhYmxlDQoo
WEVOKSBoYW5kbGUgL3BtY0A3MDAwZTQwMC9kc2ktZHBkLWRpc2FibGUvZHNpLXBhZC1sb3dwb3dl
ci1kaXNhYmxlDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3BtY0A3MDAwZTQwMC9kc2ktZHBk
LWRpc2FibGUvZHNpLXBhZC1sb3dwb3dlci1kaXNhYmxlDQooWEVOKSAvcG1jQDcwMDBlNDAwL2Rz
aS1kcGQtZGlzYWJsZS9kc2ktcGFkLWxvd3Bvd2VyLWRpc2FibGUgcGFzc3Rocm91Z2ggPSAxIG5h
ZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3BtY0A3MDAwZTQwMC9kc2ktZHBkLWRpc2FibGUvZHNp
LXBhZC1sb3dwb3dlci1kaXNhYmxlIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhF
TikgZHRfaXJxX251bWJlcjogZGV2PS9wbWNANzAwMGU0MDAvZHNpLWRwZC1kaXNhYmxlL2RzaS1w
YWQtbG93cG93ZXItZGlzYWJsZQ0KKFhFTikgaGFuZGxlIC9wbWNANzAwMGU0MDAvZHNpYi1kcGQt
ZW5hYmxlDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3BtY0A3MDAwZTQwMC9kc2liLWRwZC1l
bmFibGUNCihYRU4pIC9wbWNANzAwMGU0MDAvZHNpYi1kcGQtZW5hYmxlIHBhc3N0aHJvdWdoID0g
MSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9wbWNANzAwMGU0MDAvZHNpYi1kcGQtZW5hYmxl
IGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2
PS9wbWNANzAwMGU0MDAvZHNpYi1kcGQtZW5hYmxlDQooWEVOKSBoYW5kbGUgL3BtY0A3MDAwZTQw
MC9kc2liLWRwZC1lbmFibGUvZHNpYi1wYWQtbG93cG93ZXItZW5hYmxlDQooWEVOKSBkdF9pcnFf
bnVtYmVyOiBkZXY9L3BtY0A3MDAwZTQwMC9kc2liLWRwZC1lbmFibGUvZHNpYi1wYWQtbG93cG93
ZXItZW5hYmxlDQooWEVOKSAvcG1jQDcwMDBlNDAwL2RzaWItZHBkLWVuYWJsZS9kc2liLXBhZC1s
b3dwb3dlci1lbmFibGUgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYg
L3BtY0A3MDAwZTQwMC9kc2liLWRwZC1lbmFibGUvZHNpYi1wYWQtbG93cG93ZXItZW5hYmxlIGlz
IGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9w
bWNANzAwMGU0MDAvZHNpYi1kcGQtZW5hYmxlL2RzaWItcGFkLWxvd3Bvd2VyLWVuYWJsZQ0KKFhF
TikgaGFuZGxlIC9wbWNANzAwMGU0MDAvZHNpYi1kcGQtZGlzYWJsZQ0KKFhFTikgZHRfaXJxX251
bWJlcjogZGV2PS9wbWNANzAwMGU0MDAvZHNpYi1kcGQtZGlzYWJsZQ0KKFhFTikgL3BtY0A3MDAw
ZTQwMC9kc2liLWRwZC1kaXNhYmxlIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENo
ZWNrIGlmIC9wbWNANzAwMGU0MDAvZHNpYi1kcGQtZGlzYWJsZSBpcyBiZWhpbmQgdGhlIElPTU1V
IGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcG1jQDcwMDBlNDAwL2RzaWIt
ZHBkLWRpc2FibGUNCihYRU4pIGhhbmRsZSAvcG1jQDcwMDBlNDAwL2RzaWItZHBkLWRpc2FibGUv
ZHNpYi1wYWQtbG93cG93ZXItZGlzYWJsZQ0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9wbWNA
NzAwMGU0MDAvZHNpYi1kcGQtZGlzYWJsZS9kc2liLXBhZC1sb3dwb3dlci1kaXNhYmxlDQooWEVO
KSAvcG1jQDcwMDBlNDAwL2RzaWItZHBkLWRpc2FibGUvZHNpYi1wYWQtbG93cG93ZXItZGlzYWJs
ZSBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvcG1jQDcwMDBlNDAw
L2RzaWItZHBkLWRpc2FibGUvZHNpYi1wYWQtbG93cG93ZXItZGlzYWJsZSBpcyBiZWhpbmQgdGhl
IElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcG1jQDcwMDBlNDAw
L2RzaWItZHBkLWRpc2FibGUvZHNpYi1wYWQtbG93cG93ZXItZGlzYWJsZQ0KKFhFTikgaGFuZGxl
IC9wbWNANzAwMGU0MDAvaW9wYWQtZGVmYXVsdHMNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0v
cG1jQDcwMDBlNDAwL2lvcGFkLWRlZmF1bHRzDQooWEVOKSAvcG1jQDcwMDBlNDAwL2lvcGFkLWRl
ZmF1bHRzIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9wbWNANzAw
MGU0MDAvaW9wYWQtZGVmYXVsdHMgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVO
KSBkdF9pcnFfbnVtYmVyOiBkZXY9L3BtY0A3MDAwZTQwMC9pb3BhZC1kZWZhdWx0cw0KKFhFTikg
aGFuZGxlIC9wbWNANzAwMGU0MDAvaW9wYWQtZGVmYXVsdHMvYXVkaW8tcGFkcw0KKFhFTikgZHRf
aXJxX251bWJlcjogZGV2PS9wbWNANzAwMGU0MDAvaW9wYWQtZGVmYXVsdHMvYXVkaW8tcGFkcw0K
KFhFTikgL3BtY0A3MDAwZTQwMC9pb3BhZC1kZWZhdWx0cy9hdWRpby1wYWRzIHBhc3N0aHJvdWdo
ID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9wbWNANzAwMGU0MDAvaW9wYWQtZGVmYXVs
dHMvYXVkaW8tcGFkcyBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2ly
cV9udW1iZXI6IGRldj0vcG1jQDcwMDBlNDAwL2lvcGFkLWRlZmF1bHRzL2F1ZGlvLXBhZHMNCihY
RU4pIGhhbmRsZSAvcG1jQDcwMDBlNDAwL2lvcGFkLWRlZmF1bHRzL2NhbS1wYWRzDQooWEVOKSBk
dF9pcnFfbnVtYmVyOiBkZXY9L3BtY0A3MDAwZTQwMC9pb3BhZC1kZWZhdWx0cy9jYW0tcGFkcw0K
KFhFTikgL3BtY0A3MDAwZTQwMC9pb3BhZC1kZWZhdWx0cy9jYW0tcGFkcyBwYXNzdGhyb3VnaCA9
IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvcG1jQDcwMDBlNDAwL2lvcGFkLWRlZmF1bHRz
L2NhbS1wYWRzIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251
bWJlcjogZGV2PS9wbWNANzAwMGU0MDAvaW9wYWQtZGVmYXVsdHMvY2FtLXBhZHMNCihYRU4pIGhh
bmRsZSAvcG1jQDcwMDBlNDAwL2lvcGFkLWRlZmF1bHRzL2RiZy1wYWRzDQooWEVOKSBkdF9pcnFf
bnVtYmVyOiBkZXY9L3BtY0A3MDAwZTQwMC9pb3BhZC1kZWZhdWx0cy9kYmctcGFkcw0KKFhFTikg
L3BtY0A3MDAwZTQwMC9pb3BhZC1kZWZhdWx0cy9kYmctcGFkcyBwYXNzdGhyb3VnaCA9IDEgbmFk
ZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvcG1jQDcwMDBlNDAwL2lvcGFkLWRlZmF1bHRzL2RiZy1w
YWRzIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjog
ZGV2PS9wbWNANzAwMGU0MDAvaW9wYWQtZGVmYXVsdHMvZGJnLXBhZHMNCihYRU4pIGhhbmRsZSAv
cG1jQDcwMDBlNDAwL2lvcGFkLWRlZmF1bHRzL2RtaWMtcGFkcw0KKFhFTikgZHRfaXJxX251bWJl
cjogZGV2PS9wbWNANzAwMGU0MDAvaW9wYWQtZGVmYXVsdHMvZG1pYy1wYWRzDQooWEVOKSAvcG1j
QDcwMDBlNDAwL2lvcGFkLWRlZmF1bHRzL2RtaWMtcGFkcyBwYXNzdGhyb3VnaCA9IDEgbmFkZHIg
PSAwDQooWEVOKSBDaGVjayBpZiAvcG1jQDcwMDBlNDAwL2lvcGFkLWRlZmF1bHRzL2RtaWMtcGFk
cyBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRl
dj0vcG1jQDcwMDBlNDAwL2lvcGFkLWRlZmF1bHRzL2RtaWMtcGFkcw0KKFhFTikgaGFuZGxlIC9w
bWNANzAwMGU0MDAvaW9wYWQtZGVmYXVsdHMvcGV4LWN0cmwtcGFkcw0KKFhFTikgZHRfaXJxX251
bWJlcjogZGV2PS9wbWNANzAwMGU0MDAvaW9wYWQtZGVmYXVsdHMvcGV4LWN0cmwtcGFkcw0KKFhF
TikgL3BtY0A3MDAwZTQwMC9pb3BhZC1kZWZhdWx0cy9wZXgtY3RybC1wYWRzIHBhc3N0aHJvdWdo
ID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9wbWNANzAwMGU0MDAvaW9wYWQtZGVmYXVs
dHMvcGV4LWN0cmwtcGFkcyBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0
X2lycV9udW1iZXI6IGRldj0vcG1jQDcwMDBlNDAwL2lvcGFkLWRlZmF1bHRzL3BleC1jdHJsLXBh
ZHMNCihYRU4pIGhhbmRsZSAvcG1jQDcwMDBlNDAwL2lvcGFkLWRlZmF1bHRzL3NwaS1wYWRzDQoo
WEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3BtY0A3MDAwZTQwMC9pb3BhZC1kZWZhdWx0cy9zcGkt
cGFkcw0KKFhFTikgL3BtY0A3MDAwZTQwMC9pb3BhZC1kZWZhdWx0cy9zcGktcGFkcyBwYXNzdGhy
b3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvcG1jQDcwMDBlNDAwL2lvcGFkLWRl
ZmF1bHRzL3NwaS1wYWRzIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRf
aXJxX251bWJlcjogZGV2PS9wbWNANzAwMGU0MDAvaW9wYWQtZGVmYXVsdHMvc3BpLXBhZHMNCihY
RU4pIGhhbmRsZSAvcG1jQDcwMDBlNDAwL2lvcGFkLWRlZmF1bHRzL3VhcnQtcGFkcw0KKFhFTikg
ZHRfaXJxX251bWJlcjogZGV2PS9wbWNANzAwMGU0MDAvaW9wYWQtZGVmYXVsdHMvdWFydC1wYWRz
DQooWEVOKSAvcG1jQDcwMDBlNDAwL2lvcGFkLWRlZmF1bHRzL3VhcnQtcGFkcyBwYXNzdGhyb3Vn
aCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvcG1jQDcwMDBlNDAwL2lvcGFkLWRlZmF1
bHRzL3VhcnQtcGFkcyBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2ly
cV9udW1iZXI6IGRldj0vcG1jQDcwMDBlNDAwL2lvcGFkLWRlZmF1bHRzL3VhcnQtcGFkcw0KKFhF
TikgaGFuZGxlIC9wbWNANzAwMGU0MDAvaW9wYWQtZGVmYXVsdHMvcGV4LWlvLXBhZHMNCihYRU4p
IGR0X2lycV9udW1iZXI6IGRldj0vcG1jQDcwMDBlNDAwL2lvcGFkLWRlZmF1bHRzL3BleC1pby1w
YWRzDQooWEVOKSAvcG1jQDcwMDBlNDAwL2lvcGFkLWRlZmF1bHRzL3BleC1pby1wYWRzIHBhc3N0
aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9wbWNANzAwMGU0MDAvaW9wYWQt
ZGVmYXVsdHMvcGV4LWlvLXBhZHMgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVO
KSBkdF9pcnFfbnVtYmVyOiBkZXY9L3BtY0A3MDAwZTQwMC9pb3BhZC1kZWZhdWx0cy9wZXgtaW8t
cGFkcw0KKFhFTikgaGFuZGxlIC9wbWNANzAwMGU0MDAvaW9wYWQtZGVmYXVsdHMvYXVkaW8taHYt
cGFkcw0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9wbWNANzAwMGU0MDAvaW9wYWQtZGVmYXVs
dHMvYXVkaW8taHYtcGFkcw0KKFhFTikgL3BtY0A3MDAwZTQwMC9pb3BhZC1kZWZhdWx0cy9hdWRp
by1odi1wYWRzIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9wbWNA
NzAwMGU0MDAvaW9wYWQtZGVmYXVsdHMvYXVkaW8taHYtcGFkcyBpcyBiZWhpbmQgdGhlIElPTU1V
IGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcG1jQDcwMDBlNDAwL2lvcGFk
LWRlZmF1bHRzL2F1ZGlvLWh2LXBhZHMNCihYRU4pIGhhbmRsZSAvcG1jQDcwMDBlNDAwL2lvcGFk
LWRlZmF1bHRzL3NwaS1odi1wYWRzDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3BtY0A3MDAw
ZTQwMC9pb3BhZC1kZWZhdWx0cy9zcGktaHYtcGFkcw0KKFhFTikgL3BtY0A3MDAwZTQwMC9pb3Bh
ZC1kZWZhdWx0cy9zcGktaHYtcGFkcyBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBD
aGVjayBpZiAvcG1jQDcwMDBlNDAwL2lvcGFkLWRlZmF1bHRzL3NwaS1odi1wYWRzIGlzIGJlaGlu
ZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9wbWNANzAw
MGU0MDAvaW9wYWQtZGVmYXVsdHMvc3BpLWh2LXBhZHMNCihYRU4pIGhhbmRsZSAvcG1jQDcwMDBl
NDAwL2lvcGFkLWRlZmF1bHRzL2dwaW8tcGFkcw0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9w
bWNANzAwMGU0MDAvaW9wYWQtZGVmYXVsdHMvZ3Bpby1wYWRzDQooWEVOKSAvcG1jQDcwMDBlNDAw
L2lvcGFkLWRlZmF1bHRzL2dwaW8tcGFkcyBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVO
KSBDaGVjayBpZiAvcG1jQDcwMDBlNDAwL2lvcGFkLWRlZmF1bHRzL2dwaW8tcGFkcyBpcyBiZWhp
bmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcG1jQDcw
MDBlNDAwL2lvcGFkLWRlZmF1bHRzL2dwaW8tcGFkcw0KKFhFTikgaGFuZGxlIC9wbWNANzAwMGU0
MDAvaW9wYWQtZGVmYXVsdHMvc2RtbWMtaW8tcGFkcw0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2
PS9wbWNANzAwMGU0MDAvaW9wYWQtZGVmYXVsdHMvc2RtbWMtaW8tcGFkcw0KKFhFTikgL3BtY0A3
MDAwZTQwMC9pb3BhZC1kZWZhdWx0cy9zZG1tYy1pby1wYWRzIHBhc3N0aHJvdWdoID0gMSBuYWRk
ciA9IDANCihYRU4pIENoZWNrIGlmIC9wbWNANzAwMGU0MDAvaW9wYWQtZGVmYXVsdHMvc2RtbWMt
aW8tcGFkcyBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1i
ZXI6IGRldj0vcG1jQDcwMDBlNDAwL2lvcGFkLWRlZmF1bHRzL3NkbW1jLWlvLXBhZHMNCihYRU4p
IGhhbmRsZSAvcG1jQDcwMDBlNDAwL2Jvb3Ryb20tY29tbWFuZHMNCihYRU4pIGR0X2lycV9udW1i
ZXI6IGRldj0vcG1jQDcwMDBlNDAwL2Jvb3Ryb20tY29tbWFuZHMNCihYRU4pIC9wbWNANzAwMGU0
MDAvYm9vdHJvbS1jb21tYW5kcyBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVj
ayBpZiAvcG1jQDcwMDBlNDAwL2Jvb3Ryb20tY29tbWFuZHMgaXMgYmVoaW5kIHRoZSBJT01NVSBh
bmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3BtY0A3MDAwZTQwMC9ib290cm9t
LWNvbW1hbmRzDQooWEVOKSBoYW5kbGUgL3BtY0A3MDAwZTQwMC9ib290cm9tLWNvbW1hbmRzL3Jl
c2V0LWNvbW1hbmRzDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3BtY0A3MDAwZTQwMC9ib290
cm9tLWNvbW1hbmRzL3Jlc2V0LWNvbW1hbmRzDQooWEVOKSAvcG1jQDcwMDBlNDAwL2Jvb3Ryb20t
Y29tbWFuZHMvcmVzZXQtY29tbWFuZHMgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikg
Q2hlY2sgaWYgL3BtY0A3MDAwZTQwMC9ib290cm9tLWNvbW1hbmRzL3Jlc2V0LWNvbW1hbmRzIGlz
IGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9w
bWNANzAwMGU0MDAvYm9vdHJvbS1jb21tYW5kcy9yZXNldC1jb21tYW5kcw0KKFhFTikgaGFuZGxl
IC9wbWNANzAwMGU0MDAvYm9vdHJvbS1jb21tYW5kcy9yZXNldC1jb21tYW5kcy9jb21tYW5kc0A0
LTAwM2MNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcG1jQDcwMDBlNDAwL2Jvb3Ryb20tY29t
bWFuZHMvcmVzZXQtY29tbWFuZHMvY29tbWFuZHNANC0wMDNjDQooWEVOKSAvcG1jQDcwMDBlNDAw
L2Jvb3Ryb20tY29tbWFuZHMvcmVzZXQtY29tbWFuZHMvY29tbWFuZHNANC0wMDNjIHBhc3N0aHJv
dWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9wbWNANzAwMGU0MDAvYm9vdHJvbS1j
b21tYW5kcy9yZXNldC1jb21tYW5kcy9jb21tYW5kc0A0LTAwM2MgaXMgYmVoaW5kIHRoZSBJT01N
VSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3BtY0A3MDAwZTQwMC9ib290
cm9tLWNvbW1hbmRzL3Jlc2V0LWNvbW1hbmRzL2NvbW1hbmRzQDQtMDAzYw0KKFhFTikgaGFuZGxl
IC9wbWNANzAwMGU0MDAvYm9vdHJvbS1jb21tYW5kcy9wb3dlci1vZmYtY29tbWFuZHMNCihYRU4p
IGR0X2lycV9udW1iZXI6IGRldj0vcG1jQDcwMDBlNDAwL2Jvb3Ryb20tY29tbWFuZHMvcG93ZXIt
b2ZmLWNvbW1hbmRzDQooWEVOKSAvcG1jQDcwMDBlNDAwL2Jvb3Ryb20tY29tbWFuZHMvcG93ZXIt
b2ZmLWNvbW1hbmRzIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9w
bWNANzAwMGU0MDAvYm9vdHJvbS1jb21tYW5kcy9wb3dlci1vZmYtY29tbWFuZHMgaXMgYmVoaW5k
IHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3BtY0A3MDAw
ZTQwMC9ib290cm9tLWNvbW1hbmRzL3Bvd2VyLW9mZi1jb21tYW5kcw0KKFhFTikgaGFuZGxlIC9w
bWNANzAwMGU0MDAvYm9vdHJvbS1jb21tYW5kcy9wb3dlci1vZmYtY29tbWFuZHMvY29tbWFuZHNA
NC0wMDNjDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3BtY0A3MDAwZTQwMC9ib290cm9tLWNv
bW1hbmRzL3Bvd2VyLW9mZi1jb21tYW5kcy9jb21tYW5kc0A0LTAwM2MNCihYRU4pIC9wbWNANzAw
MGU0MDAvYm9vdHJvbS1jb21tYW5kcy9wb3dlci1vZmYtY29tbWFuZHMvY29tbWFuZHNANC0wMDNj
IHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9wbWNANzAwMGU0MDAv
Ym9vdHJvbS1jb21tYW5kcy9wb3dlci1vZmYtY29tbWFuZHMvY29tbWFuZHNANC0wMDNjIGlzIGJl
aGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9wbWNA
NzAwMGU0MDAvYm9vdHJvbS1jb21tYW5kcy9wb3dlci1vZmYtY29tbWFuZHMvY29tbWFuZHNANC0w
MDNjDQooWEVOKSBoYW5kbGUgL3BtY0A3MDAwZTQwMC9zZG1tYzFfZV8zM1ZfZW5hYmxlDQooWEVO
KSBkdF9pcnFfbnVtYmVyOiBkZXY9L3BtY0A3MDAwZTQwMC9zZG1tYzFfZV8zM1ZfZW5hYmxlDQoo
WEVOKSAvcG1jQDcwMDBlNDAwL3NkbW1jMV9lXzMzVl9lbmFibGUgcGFzc3Rocm91Z2ggPSAxIG5h
ZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3BtY0A3MDAwZTQwMC9zZG1tYzFfZV8zM1ZfZW5hYmxl
IGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2
PS9wbWNANzAwMGU0MDAvc2RtbWMxX2VfMzNWX2VuYWJsZQ0KKFhFTikgaGFuZGxlIC9wbWNANzAw
MGU0MDAvc2RtbWMxX2VfMzNWX2VuYWJsZS9zZG1tYzENCihYRU4pIGR0X2lycV9udW1iZXI6IGRl
dj0vcG1jQDcwMDBlNDAwL3NkbW1jMV9lXzMzVl9lbmFibGUvc2RtbWMxDQooWEVOKSAvcG1jQDcw
MDBlNDAwL3NkbW1jMV9lXzMzVl9lbmFibGUvc2RtbWMxIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9
IDANCihYRU4pIENoZWNrIGlmIC9wbWNANzAwMGU0MDAvc2RtbWMxX2VfMzNWX2VuYWJsZS9zZG1t
YzEgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBk
ZXY9L3BtY0A3MDAwZTQwMC9zZG1tYzFfZV8zM1ZfZW5hYmxlL3NkbW1jMQ0KKFhFTikgaGFuZGxl
IC9wbWNANzAwMGU0MDAvc2RtbWMxX2VfMzNWX2Rpc2FibGUNCihYRU4pIGR0X2lycV9udW1iZXI6
IGRldj0vcG1jQDcwMDBlNDAwL3NkbW1jMV9lXzMzVl9kaXNhYmxlDQooWEVOKSAvcG1jQDcwMDBl
NDAwL3NkbW1jMV9lXzMzVl9kaXNhYmxlIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4p
IENoZWNrIGlmIC9wbWNANzAwMGU0MDAvc2RtbWMxX2VfMzNWX2Rpc2FibGUgaXMgYmVoaW5kIHRo
ZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3BtY0A3MDAwZTQw
MC9zZG1tYzFfZV8zM1ZfZGlzYWJsZQ0KKFhFTikgaGFuZGxlIC9wbWNANzAwMGU0MDAvc2RtbWMx
X2VfMzNWX2Rpc2FibGUvc2RtbWMxDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3BtY0A3MDAw
ZTQwMC9zZG1tYzFfZV8zM1ZfZGlzYWJsZS9zZG1tYzENCihYRU4pIC9wbWNANzAwMGU0MDAvc2Rt
bWMxX2VfMzNWX2Rpc2FibGUvc2RtbWMxIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4p
IENoZWNrIGlmIC9wbWNANzAwMGU0MDAvc2RtbWMxX2VfMzNWX2Rpc2FibGUvc2RtbWMxIGlzIGJl
aGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9wbWNA
NzAwMGU0MDAvc2RtbWMxX2VfMzNWX2Rpc2FibGUvc2RtbWMxDQooWEVOKSBoYW5kbGUgL3BtY0A3
MDAwZTQwMC9zZG1tYzNfZV8zM1ZfZW5hYmxlDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3Bt
Y0A3MDAwZTQwMC9zZG1tYzNfZV8zM1ZfZW5hYmxlDQooWEVOKSAvcG1jQDcwMDBlNDAwL3NkbW1j
M19lXzMzVl9lbmFibGUgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYg
L3BtY0A3MDAwZTQwMC9zZG1tYzNfZV8zM1ZfZW5hYmxlIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5k
IGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9wbWNANzAwMGU0MDAvc2RtbWMzX2Vf
MzNWX2VuYWJsZQ0KKFhFTikgaGFuZGxlIC9wbWNANzAwMGU0MDAvc2RtbWMzX2VfMzNWX2VuYWJs
ZS9zZG1tYzMNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcG1jQDcwMDBlNDAwL3NkbW1jM19l
XzMzVl9lbmFibGUvc2RtbWMzDQooWEVOKSAvcG1jQDcwMDBlNDAwL3NkbW1jM19lXzMzVl9lbmFi
bGUvc2RtbWMzIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9wbWNA
NzAwMGU0MDAvc2RtbWMzX2VfMzNWX2VuYWJsZS9zZG1tYzMgaXMgYmVoaW5kIHRoZSBJT01NVSBh
bmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3BtY0A3MDAwZTQwMC9zZG1tYzNf
ZV8zM1ZfZW5hYmxlL3NkbW1jMw0KKFhFTikgaGFuZGxlIC9wbWNANzAwMGU0MDAvc2RtbWMzX2Vf
MzNWX2Rpc2FibGUNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcG1jQDcwMDBlNDAwL3NkbW1j
M19lXzMzVl9kaXNhYmxlDQooWEVOKSAvcG1jQDcwMDBlNDAwL3NkbW1jM19lXzMzVl9kaXNhYmxl
IHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9wbWNANzAwMGU0MDAv
c2RtbWMzX2VfMzNWX2Rpc2FibGUgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVO
KSBkdF9pcnFfbnVtYmVyOiBkZXY9L3BtY0A3MDAwZTQwMC9zZG1tYzNfZV8zM1ZfZGlzYWJsZQ0K
KFhFTikgaGFuZGxlIC9wbWNANzAwMGU0MDAvc2RtbWMzX2VfMzNWX2Rpc2FibGUvc2RtbWMzDQoo
WEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3BtY0A3MDAwZTQwMC9zZG1tYzNfZV8zM1ZfZGlzYWJs
ZS9zZG1tYzMNCihYRU4pIC9wbWNANzAwMGU0MDAvc2RtbWMzX2VfMzNWX2Rpc2FibGUvc2RtbWMz
IHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9wbWNANzAwMGU0MDAv
c2RtbWMzX2VfMzNWX2Rpc2FibGUvc2RtbWMzIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBp
dA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9wbWNANzAwMGU0MDAvc2RtbWMzX2VfMzNWX2Rp
c2FibGUvc2RtbWMzDQooWEVOKSBoYW5kbGUgL3NlQDcwMDEyMDAwDQooWEVOKSBkdF9pcnFfbnVt
YmVyOiBkZXY9L3NlQDcwMDEyMDAwDQooWEVOKSAgdXNpbmcgJ2ludGVycnVwdHMnIHByb3BlcnR5
DQooWEVOKSAgaW50c3BlYz0wIGludGxlbj0zDQooWEVOKSAgaW50c2l6ZT0zIGludGxlbj0zDQoo
WEVOKSBkdF9kZXZpY2VfZ2V0X3Jhd19pcnE6IGRldj0vc2VANzAwMTIwMDAsIGluZGV4PTANCihY
RU4pICB1c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVu
PTMNCihYRU4pICBpbnRzaXplPTMgaW50bGVuPTMNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBwYXI9
L2ludGVycnVwdC1jb250cm9sbGVyQDYwMDA0MDAwLGludHNwZWM9WzB4MDAwMDAwMDAgMHgwMDAw
MDAzYS4uLl0sb2ludHNpemU9Mw0KKFhFTikgZHRfaXJxX21hcF9yYXc6IGlwYXI9L2ludGVycnVw
dC1jb250cm9sbGVyQDYwMDA0MDAwLCBzaXplPTMNCihYRU4pICAtPiBhZGRyc2l6ZT0yDQooWEVO
KSAgLT4gZ290IGl0ICENCihYRU4pIC9zZUA3MDAxMjAwMCBwYXNzdGhyb3VnaCA9IDEgbmFkZHIg
PSAxDQooWEVOKSBDaGVjayBpZiAvc2VANzAwMTIwMDAgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQg
YWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3NlQDcwMDEyMDAwDQooWEVOKSAgdXNp
bmcgJ2ludGVycnVwdHMnIHByb3BlcnR5DQooWEVOKSAgaW50c3BlYz0wIGludGxlbj0zDQooWEVO
KSAgaW50c2l6ZT0zIGludGxlbj0zDQooWEVOKSBkdF9kZXZpY2VfZ2V0X3Jhd19pcnE6IGRldj0v
c2VANzAwMTIwMDAsIGluZGV4PTANCihYRU4pICB1c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkN
CihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTMNCihYRU4pICBpbnRzaXplPTMgaW50bGVuPTMNCihY
RU4pIGR0X2lycV9tYXBfcmF3OiBwYXI9L2ludGVycnVwdC1jb250cm9sbGVyQDYwMDA0MDAwLGlu
dHNwZWM9WzB4MDAwMDAwMDAgMHgwMDAwMDAzYS4uLl0sb2ludHNpemU9Mw0KKFhFTikgZHRfaXJx
X21hcF9yYXc6IGlwYXI9L2ludGVycnVwdC1jb250cm9sbGVyQDYwMDA0MDAwLCBzaXplPTMNCihY
RU4pICAtPiBhZGRyc2l6ZT0yDQooWEVOKSAgLT4gZ290IGl0ICENCihYRU4pIGlycSAwIG5vdCAo
ZGlyZWN0bHkgb3IgaW5kaXJlY3RseSkgY29ubmVjdGVkIHRvIHByaW1hcnljb250cm9sbGVyLiBD
b25uZWN0ZWQgdG8gL2ludGVycnVwdC1jb250cm9sbGVyQDYwMDA0MDAwDQooWEVOKSBEVDogKiog
dHJhbnNsYXRpb24gZm9yIGRldmljZSAvc2VANzAwMTIwMDAgKioNCihYRU4pIERUOiBidXMgaXMg
ZGVmYXVsdCAobmE9MiwgbnM9Mikgb24gLw0KKFhFTikgRFQ6IHRyYW5zbGF0aW5nIGFkZHJlc3M6
PDM+IDAwMDAwMDAwPDM+IDcwMDEyMDAwPDM+DQooWEVOKSBEVDogcmVhY2hlZCByb290IG5vZGUN
CihYRU4pICAgLSBNTUlPOiAwMDcwMDEyMDAwIC0gMDA3MDAxNDAwMCBQMk1UeXBlPTUNCihYRU4p
IGhhbmRsZSAvaGRhQDcwMDMwMDAwDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2hkYUA3MDAz
MDAwMA0KKFhFTikgIHVzaW5nICdpbnRlcnJ1cHRzJyBwcm9wZXJ0eQ0KKFhFTikgIGludHNwZWM9
MCBpbnRsZW49Mw0KKFhFTikgIGludHNpemU9MyBpbnRsZW49Mw0KKFhFTikgZHRfZGV2aWNlX2dl
dF9yYXdfaXJxOiBkZXY9L2hkYUA3MDAzMDAwMCwgaW5kZXg9MA0KKFhFTikgIHVzaW5nICdpbnRl
cnJ1cHRzJyBwcm9wZXJ0eQ0KKFhFTikgIGludHNwZWM9MCBpbnRsZW49Mw0KKFhFTikgIGludHNp
emU9MyBpbnRsZW49Mw0KKFhFTikgZHRfaXJxX21hcF9yYXc6IHBhcj0vaW50ZXJydXB0LWNvbnRy
b2xsZXJANjAwMDQwMDAsaW50c3BlYz1bMHgwMDAwMDAwMCAweDAwMDAwMDUxLi4uXSxvaW50c2l6
ZT0zDQooWEVOKSBkdF9pcnFfbWFwX3JhdzogaXBhcj0vaW50ZXJydXB0LWNvbnRyb2xsZXJANjAw
MDQwMDAsIHNpemU9Mw0KKFhFTikgIC0+IGFkZHJzaXplPTINCihYRU4pICAtPiBnb3QgaXQgIQ0K
KFhFTikgL2hkYUA3MDAzMDAwMCBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAxDQooWEVOKSBDaGVj
ayBpZiAvaGRhQDcwMDMwMDAwIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikg
ZHRfaXJxX251bWJlcjogZGV2PS9oZGFANzAwMzAwMDANCihYRU4pICB1c2luZyAnaW50ZXJydXB0
cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTMNCihYRU4pICBpbnRzaXplPTMg
aW50bGVuPTMNCihYRU4pIGR0X2RldmljZV9nZXRfcmF3X2lycTogZGV2PS9oZGFANzAwMzAwMDAs
IGluZGV4PTANCihYRU4pICB1c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRz
cGVjPTAgaW50bGVuPTMNCihYRU4pICBpbnRzaXplPTMgaW50bGVuPTMNCihYRU4pIGR0X2lycV9t
YXBfcmF3OiBwYXI9L2ludGVycnVwdC1jb250cm9sbGVyQDYwMDA0MDAwLGludHNwZWM9WzB4MDAw
MDAwMDAgMHgwMDAwMDA1MS4uLl0sb2ludHNpemU9Mw0KKFhFTikgZHRfaXJxX21hcF9yYXc6IGlw
YXI9L2ludGVycnVwdC1jb250cm9sbGVyQDYwMDA0MDAwLCBzaXplPTMNCihYRU4pICAtPiBhZGRy
c2l6ZT0yDQooWEVOKSAgLT4gZ290IGl0ICENCihYRU4pIGlycSAwIG5vdCAoZGlyZWN0bHkgb3Ig
aW5kaXJlY3RseSkgY29ubmVjdGVkIHRvIHByaW1hcnljb250cm9sbGVyLiBDb25uZWN0ZWQgdG8g
L2ludGVycnVwdC1jb250cm9sbGVyQDYwMDA0MDAwDQooWEVOKSBEVDogKiogdHJhbnNsYXRpb24g
Zm9yIGRldmljZSAvaGRhQDcwMDMwMDAwICoqDQooWEVOKSBEVDogYnVzIGlzIGRlZmF1bHQgKG5h
PTIsIG5zPTIpIG9uIC8NCihYRU4pIERUOiB0cmFuc2xhdGluZyBhZGRyZXNzOjwzPiAwMDAwMDAw
MDwzPiA3MDAzMDAwMDwzPg0KKFhFTikgRFQ6IHJlYWNoZWQgcm9vdCBub2RlDQooWEVOKSAgIC0g
TU1JTzogMDA3MDAzMDAwMCAtIDAwNzAwNDAwMDAgUDJNVHlwZT01DQooWEVOKSBoYW5kbGUgL3Bj
aWVAMTAwMzAwMA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9wY2llQDEwMDMwMDANCihYRU4p
ICB1c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTYN
CihYRU4pICBpbnRzaXplPTMgaW50bGVuPTYNCihYRU4pIGR0X2RldmljZV9nZXRfcmF3X2lycTog
ZGV2PS9wY2llQDEwMDMwMDAsIGluZGV4PTANCihYRU4pICB1c2luZyAnaW50ZXJydXB0cycgcHJv
cGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTYNCihYRU4pICBpbnRzaXplPTMgaW50bGVu
PTYNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBwYXI9L2ludGVycnVwdC1jb250cm9sbGVyQDYwMDA0
MDAwLGludHNwZWM9WzB4MDAwMDAwMDAgMHgwMDAwMDA2Mi4uLl0sb2ludHNpemU9Mw0KKFhFTikg
ZHRfaXJxX21hcF9yYXc6IGlwYXI9L2ludGVycnVwdC1jb250cm9sbGVyQDYwMDA0MDAwLCBzaXpl
PTMNCihYRU4pICAtPiBhZGRyc2l6ZT0yDQooWEVOKSAgLT4gZ290IGl0ICENCihYRU4pIGR0X2Rl
dmljZV9nZXRfcmF3X2lycTogZGV2PS9wY2llQDEwMDMwMDAsIGluZGV4PTENCihYRU4pICB1c2lu
ZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTYNCihYRU4p
ICBpbnRzaXplPTMgaW50bGVuPTYNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBwYXI9L2ludGVycnVw
dC1jb250cm9sbGVyQDYwMDA0MDAwLGludHNwZWM9WzB4MDAwMDAwMDAgMHgwMDAwMDA2My4uLl0s
b2ludHNpemU9Mw0KKFhFTikgZHRfaXJxX21hcF9yYXc6IGlwYXI9L2ludGVycnVwdC1jb250cm9s
bGVyQDYwMDA0MDAwLCBzaXplPTMNCihYRU4pICAtPiBhZGRyc2l6ZT0yDQooWEVOKSAgLT4gZ290
IGl0ICENCihYRU4pIC9wY2llQDEwMDMwMDAgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMw0KKFhF
TikgQ2hlY2sgaWYgL3BjaWVAMTAwMzAwMCBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQN
CihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGNpZUAxMDAzMDAwDQooWEVOKSAgdXNpbmcgJ2lu
dGVycnVwdHMnIHByb3BlcnR5DQooWEVOKSAgaW50c3BlYz0wIGludGxlbj02DQooWEVOKSAgaW50
c2l6ZT0zIGludGxlbj02DQooWEVOKSBkdF9kZXZpY2VfZ2V0X3Jhd19pcnE6IGRldj0vcGNpZUAx
MDAzMDAwLCBpbmRleD0wDQooWEVOKSAgdXNpbmcgJ2ludGVycnVwdHMnIHByb3BlcnR5DQooWEVO
KSAgaW50c3BlYz0wIGludGxlbj02DQooWEVOKSAgaW50c2l6ZT0zIGludGxlbj02DQooWEVOKSBk
dF9pcnFfbWFwX3JhdzogcGFyPS9pbnRlcnJ1cHQtY29udHJvbGxlckA2MDAwNDAwMCxpbnRzcGVj
PVsweDAwMDAwMDAwIDB4MDAwMDAwNjIuLi5dLG9pbnRzaXplPTMNCihYRU4pIGR0X2lycV9tYXBf
cmF3OiBpcGFyPS9pbnRlcnJ1cHQtY29udHJvbGxlckA2MDAwNDAwMCwgc2l6ZT0zDQooWEVOKSAg
LT4gYWRkcnNpemU9Mg0KKFhFTikgIC0+IGdvdCBpdCAhDQooWEVOKSBpcnEgMCBub3QgKGRpcmVj
dGx5IG9yIGluZGlyZWN0bHkpIGNvbm5lY3RlZCB0byBwcmltYXJ5Y29udHJvbGxlci4gQ29ubmVj
dGVkIHRvIC9pbnRlcnJ1cHQtY29udHJvbGxlckA2MDAwNDAwMA0KKFhFTikgZHRfZGV2aWNlX2dl
dF9yYXdfaXJxOiBkZXY9L3BjaWVAMTAwMzAwMCwgaW5kZXg9MQ0KKFhFTikgIHVzaW5nICdpbnRl
cnJ1cHRzJyBwcm9wZXJ0eQ0KKFhFTikgIGludHNwZWM9MCBpbnRsZW49Ng0KKFhFTikgIGludHNp
emU9MyBpbnRsZW49Ng0KKFhFTikgZHRfaXJxX21hcF9yYXc6IHBhcj0vaW50ZXJydXB0LWNvbnRy
b2xsZXJANjAwMDQwMDAsaW50c3BlYz1bMHgwMDAwMDAwMCAweDAwMDAwMDYzLi4uXSxvaW50c2l6
ZT0zDQooWEVOKSBkdF9pcnFfbWFwX3JhdzogaXBhcj0vaW50ZXJydXB0LWNvbnRyb2xsZXJANjAw
MDQwMDAsIHNpemU9Mw0KKFhFTikgIC0+IGFkZHJzaXplPTINCihYRU4pICAtPiBnb3QgaXQgIQ0K
KFhFTikgaXJxIDEgbm90IChkaXJlY3RseSBvciBpbmRpcmVjdGx5KSBjb25uZWN0ZWQgdG8gcHJp
bWFyeWNvbnRyb2xsZXIuIENvbm5lY3RlZCB0byAvaW50ZXJydXB0LWNvbnRyb2xsZXJANjAwMDQw
MDANCihYRU4pIERUOiAqKiB0cmFuc2xhdGlvbiBmb3IgZGV2aWNlIC9wY2llQDEwMDMwMDAgKioN
CihYRU4pIERUOiBidXMgaXMgZGVmYXVsdCAobmE9MiwgbnM9Mikgb24gLw0KKFhFTikgRFQ6IHRy
YW5zbGF0aW5nIGFkZHJlc3M6PDM+IDAwMDAwMDAwPDM+IDAxMDAzMDAwPDM+DQooWEVOKSBEVDog
cmVhY2hlZCByb290IG5vZGUNCihYRU4pICAgLSBNTUlPOiAwMDAxMDAzMDAwIC0gMDAwMTAwMzgw
MCBQMk1UeXBlPTUNCihYRU4pIERUOiAqKiB0cmFuc2xhdGlvbiBmb3IgZGV2aWNlIC9wY2llQDEw
MDMwMDAgKioNCihYRU4pIERUOiBidXMgaXMgZGVmYXVsdCAobmE9MiwgbnM9Mikgb24gLw0KKFhF
TikgRFQ6IHRyYW5zbGF0aW5nIGFkZHJlc3M6PDM+IDAwMDAwMDAwPDM+IDAxMDAzODAwPDM+DQoo
WEVOKSBEVDogcmVhY2hlZCByb290IG5vZGUNCihYRU4pICAgLSBNTUlPOiAwMDAxMDAzODAwIC0g
MDAwMTAwNDAwMCBQMk1UeXBlPTUNCihYRU4pIERUOiAqKiB0cmFuc2xhdGlvbiBmb3IgZGV2aWNl
IC9wY2llQDEwMDMwMDAgKioNCihYRU4pIERUOiBidXMgaXMgZGVmYXVsdCAobmE9MiwgbnM9Mikg
b24gLw0KKFhFTikgRFQ6IHRyYW5zbGF0aW5nIGFkZHJlc3M6PDM+IDAwMDAwMDAwPDM+IDExZmZm
MDAwPDM+DQooWEVOKSBEVDogcmVhY2hlZCByb290IG5vZGUNCihYRU4pICAgLSBNTUlPOiAwMDEx
ZmZmMDAwIC0gMDAxMjAwMDAwMCBQMk1UeXBlPTUNCihYRU4pIE1hcHBpbmcgY2hpbGRyZW4gb2Yg
L3BjaWVAMTAwMzAwMCB0byBndWVzdA0KKFhFTikgZHRfZm9yX2VhY2hfaXJxX21hcDogcGFyPS9w
Y2llQDEwMDMwMDAgY2I9MDAwMDAwMDAwMDJjYTAzYyBkYXRhPTAwMDA4MDAwZmYxZGQwMDANCihY
RU4pIGR0X2Zvcl9lYWNoX2lycV9tYXA6IGlwYXI9L3BjaWVAMTAwMzAwMCwgc2l6ZT0xDQooWEVO
KSAgLT4gYWRkcnNpemU9Mw0KKFhFTikgIC0+IGlwYXIgaW50ZXJydXB0LWNvbnRyb2xsZXINCihY
RU4pICAtPiBwaW50c2l6ZT0zLCBwYWRkcnNpemU9MA0KKFhFTikgICAtIElSUTogMTMwDQooWEVO
KSAgLT4gaW1hcGxlbj0wDQooWEVOKSBkdF9mb3JfZWFjaF9yYW5nZTogZGV2PXBjaWUsIGJ1cz1w
Y2ksIHBhcmVudD0sIHJsZW49MzUsIHJvbmU9Nw0KKFhFTikgRFQ6ICoqIHRyYW5zbGF0aW9uIGZv
ciBkZXZpY2UgL3BjaWVAMTAwMzAwMCAqKg0KKFhFTikgRFQ6IGJ1cyBpcyBkZWZhdWx0IChuYT0y
LCBucz0yKSBvbiAvDQooWEVOKSBEVDogdHJhbnNsYXRpbmcgYWRkcmVzczo8Mz4gMDAwMDAwMDA8
Mz4gMDEwMDAwMDA8Mz4NCihYRU4pIERUOiByZWFjaGVkIHJvb3Qgbm9kZQ0KKFhFTikgICAtIE1N
SU86IDAwMDEwMDAwMDAgLSAwMDAxMDAxMDAwIFAyTVR5cGU9NQ0KKFhFTikgRFQ6ICoqIHRyYW5z
bGF0aW9uIGZvciBkZXZpY2UgL3BjaWVAMTAwMzAwMCAqKg0KKFhFTikgRFQ6IGJ1cyBpcyBkZWZh
dWx0IChuYT0yLCBucz0yKSBvbiAvDQooWEVOKSBEVDogdHJhbnNsYXRpbmcgYWRkcmVzczo8Mz4g
MDAwMDAwMDA8Mz4gMDEwMDEwMDA8Mz4NCihYRU4pIERUOiByZWFjaGVkIHJvb3Qgbm9kZQ0KKFhF
TikgICAtIE1NSU86IDAwMDEwMDEwMDAgLSAwMDAxMDAyMDAwIFAyTVR5cGU9NQ0KKFhFTikgRFQ6
ICoqIHRyYW5zbGF0aW9uIGZvciBkZXZpY2UgL3BjaWVAMTAwMzAwMCAqKg0KKFhFTikgRFQ6IGJ1
cyBpcyBkZWZhdWx0IChuYT0yLCBucz0yKSBvbiAvDQooWEVOKSBEVDogdHJhbnNsYXRpbmcgYWRk
cmVzczo8Mz4gMDAwMDAwMDA8Mz4gMTIwMDAwMDA8Mz4NCihYRU4pIERUOiByZWFjaGVkIHJvb3Qg
bm9kZQ0KKFhFTikgICAtIE1NSU86IDAwMTIwMDAwMDAgLSAwMDEyMDEwMDAwIFAyTVR5cGU9NQ0K
KFhFTikgRFQ6ICoqIHRyYW5zbGF0aW9uIGZvciBkZXZpY2UgL3BjaWVAMTAwMzAwMCAqKg0KKFhF
TikgRFQ6IGJ1cyBpcyBkZWZhdWx0IChuYT0yLCBucz0yKSBvbiAvDQooWEVOKSBEVDogdHJhbnNs
YXRpbmcgYWRkcmVzczo8Mz4gMDAwMDAwMDA8Mz4gMTMwMDAwMDA8Mz4NCihYRU4pIERUOiByZWFj
aGVkIHJvb3Qgbm9kZQ0KKFhFTikgICAtIE1NSU86IDAwMTMwMDAwMDAgLSAwMDIwMDAwMDAwIFAy
TVR5cGU9NQ0KKFhFTikgRFQ6ICoqIHRyYW5zbGF0aW9uIGZvciBkZXZpY2UgL3BjaWVAMTAwMzAw
MCAqKg0KKFhFTikgRFQ6IGJ1cyBpcyBkZWZhdWx0IChuYT0yLCBucz0yKSBvbiAvDQooWEVOKSBE
VDogdHJhbnNsYXRpbmcgYWRkcmVzczo8Mz4gMDAwMDAwMDA8Mz4gMjAwMDAwMDA8Mz4NCihYRU4p
IERUOiByZWFjaGVkIHJvb3Qgbm9kZQ0KKFhFTikgICAtIE1NSU86IDAwMjAwMDAwMDAgLSAwMDQw
MDAwMDAwIFAyTVR5cGU9NQ0KKFhFTikgaGFuZGxlIC9wY2llQDEwMDMwMDAvcGNpQDEsMA0KKFhF
TikgZHRfaXJxX251bWJlcjogZGV2PS9wY2llQDEwMDMwMDAvcGNpQDEsMA0KKFhFTikgL3BjaWVA
MTAwMzAwMC9wY2lAMSwwIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDENCihYRU4pIENoZWNrIGlm
IC9wY2llQDEwMDMwMDAvcGNpQDEsMCBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihY
RU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGNpZUAxMDAzMDAwL3BjaUAxLDANCihYRU4pIERUOiAq
KiB0cmFuc2xhdGlvbiBmb3IgZGV2aWNlIC9wY2llQDEwMDMwMDAvcGNpQDEsMCAqKg0KKFhFTikg
RFQ6IGJ1cyBpcyBwY2kgKG5hPTMsIG5zPTIpIG9uIC9wY2llQDEwMDMwMDANCihYRU4pIERUOiB0
cmFuc2xhdGluZyBhZGRyZXNzOjwzPiA4MjAwMDgwMDwzPiAwMDAwMDAwMDwzPiAwMTAwMDAwMDwz
Pg0KKFhFTikgRFQ6IHBhcmVudCBidXMgaXMgZGVmYXVsdCAobmE9MiwgbnM9Mikgb24gLw0KKFhF
TikgRFQ6IHdhbGtpbmcgcmFuZ2VzLi4uDQooWEVOKSBEVDogUENJIG1hcCwgY3A9MTAwMDAwMCwg
cz0xMDAwLCBkYT0xMDAwMDAwDQooWEVOKSBEVDogcGFyZW50IHRyYW5zbGF0aW9uIGZvcjo8Mz4g
MDAwMDAwMDA8Mz4gMDEwMDAwMDA8Mz4NCihYRU4pIERUOiB3aXRoIG9mZnNldDogMA0KKFhFTikg
RFQ6IG9uZSBsZXZlbCB0cmFuc2xhdGlvbjo8Mz4gMDAwMDAwMDA8Mz4gMDEwMDAwMDA8Mz4NCihY
RU4pIERUOiByZWFjaGVkIHJvb3Qgbm9kZQ0KKFhFTikgICAtIE1NSU86IDAwMDEwMDAwMDAgLSAw
MDAxMDAxMDAwIFAyTVR5cGU9NQ0KKFhFTikgTWFwcGluZyBjaGlsZHJlbiBvZiAvcGNpZUAxMDAz
MDAwL3BjaUAxLDAgdG8gZ3Vlc3QNCihYRU4pIGR0X2Zvcl9lYWNoX2lycV9tYXA6IHBhcj0vcGNp
ZUAxMDAzMDAwL3BjaUAxLDAgY2I9MDAwMDAwMDAwMDJjYTAzYyBkYXRhPTAwMDA4MDAwZmYxZGQw
MDANCihYRU4pIGR0X2Zvcl9lYWNoX2lycV9tYXA6IGlwYXI9L3BjaWVAMTAwMzAwMCwgc2l6ZT0x
DQooWEVOKSAgLT4gYWRkcnNpemU9Mw0KKFhFTikgIC0+IG5vIG1hcCwgaWdub3JpbmcNCihYRU4p
IGhhbmRsZSAvcGNpZUAxMDAzMDAwL3BjaUAyLDANCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0v
cGNpZUAxMDAzMDAwL3BjaUAyLDANCihYRU4pIC9wY2llQDEwMDMwMDAvcGNpQDIsMCBwYXNzdGhy
b3VnaCA9IDEgbmFkZHIgPSAxDQooWEVOKSBDaGVjayBpZiAvcGNpZUAxMDAzMDAwL3BjaUAyLDAg
aXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9
L3BjaWVAMTAwMzAwMC9wY2lAMiwwDQooWEVOKSBEVDogKiogdHJhbnNsYXRpb24gZm9yIGRldmlj
ZSAvcGNpZUAxMDAzMDAwL3BjaUAyLDAgKioNCihYRU4pIERUOiBidXMgaXMgcGNpIChuYT0zLCBu
cz0yKSBvbiAvcGNpZUAxMDAzMDAwDQooWEVOKSBEVDogdHJhbnNsYXRpbmcgYWRkcmVzczo8Mz4g
ODIwMDEwMDA8Mz4gMDAwMDAwMDA8Mz4gMDEwMDEwMDA8Mz4NCihYRU4pIERUOiBwYXJlbnQgYnVz
IGlzIGRlZmF1bHQgKG5hPTIsIG5zPTIpIG9uIC8NCihYRU4pIERUOiB3YWxraW5nIHJhbmdlcy4u
Lg0KKFhFTikgRFQ6IFBDSSBtYXAsIGNwPTEwMDAwMDAsIHM9MTAwMCwgZGE9MTAwMTAwMA0KKFhF
TikgRFQ6IFBDSSBtYXAsIGNwPTEwMDEwMDAsIHM9MTAwMCwgZGE9MTAwMTAwMA0KKFhFTikgRFQ6
IHBhcmVudCB0cmFuc2xhdGlvbiBmb3I6PDM+IDAwMDAwMDAwPDM+IDAxMDAxMDAwPDM+DQooWEVO
KSBEVDogd2l0aCBvZmZzZXQ6IDANCihYRU4pIERUOiBvbmUgbGV2ZWwgdHJhbnNsYXRpb246PDM+
IDAwMDAwMDAwPDM+IDAxMDAxMDAwPDM+DQooWEVOKSBEVDogcmVhY2hlZCByb290IG5vZGUNCihY
RU4pICAgLSBNTUlPOiAwMDAxMDAxMDAwIC0gMDAwMTAwMjAwMCBQMk1UeXBlPTUNCihYRU4pIE1h
cHBpbmcgY2hpbGRyZW4gb2YgL3BjaWVAMTAwMzAwMC9wY2lAMiwwIHRvIGd1ZXN0DQooWEVOKSBk
dF9mb3JfZWFjaF9pcnFfbWFwOiBwYXI9L3BjaWVAMTAwMzAwMC9wY2lAMiwwIGNiPTAwMDAwMDAw
MDAyY2EwM2MgZGF0YT0wMDAwODAwMGZmMWRkMDAwDQooWEVOKSBkdF9mb3JfZWFjaF9pcnFfbWFw
OiBpcGFyPS9wY2llQDEwMDMwMDAsIHNpemU9MQ0KKFhFTikgIC0+IGFkZHJzaXplPTMNCihYRU4p
ICAtPiBubyBtYXAsIGlnbm9yaW5nDQooWEVOKSBoYW5kbGUgL3BjaWVAMTAwMzAwMC9wY2lAMiww
L2V0aGVybmV0QDAsMA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9wY2llQDEwMDMwMDAvcGNp
QDIsMC9ldGhlcm5ldEAwLDANCihYRU4pIC9wY2llQDEwMDMwMDAvcGNpQDIsMC9ldGhlcm5ldEAw
LDAgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3BjaWVAMTAwMzAw
MC9wY2lAMiwwL2V0aGVybmV0QDAsMCBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihY
RU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGNpZUAxMDAzMDAwL3BjaUAyLDAvZXRoZXJuZXRAMCww
DQooWEVOKSBoYW5kbGUgL3BjaWVAMTAwMzAwMC9wcm9kLXNldHRpbmdzDQooWEVOKSBkdF9pcnFf
bnVtYmVyOiBkZXY9L3BjaWVAMTAwMzAwMC9wcm9kLXNldHRpbmdzDQooWEVOKSAvcGNpZUAxMDAz
MDAwL3Byb2Qtc2V0dGluZ3MgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sg
aWYgL3BjaWVAMTAwMzAwMC9wcm9kLXNldHRpbmdzIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFk
ZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9wY2llQDEwMDMwMDAvcHJvZC1zZXR0aW5n
cw0KKFhFTikgaGFuZGxlIC9wY2llQDEwMDMwMDAvcHJvZC1zZXR0aW5ncy9wcm9kX2NfcGFkDQoo
WEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3BjaWVAMTAwMzAwMC9wcm9kLXNldHRpbmdzL3Byb2Rf
Y19wYWQNCihYRU4pIC9wY2llQDEwMDMwMDAvcHJvZC1zZXR0aW5ncy9wcm9kX2NfcGFkIHBhc3N0
aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9wY2llQDEwMDMwMDAvcHJvZC1z
ZXR0aW5ncy9wcm9kX2NfcGFkIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikg
ZHRfaXJxX251bWJlcjogZGV2PS9wY2llQDEwMDMwMDAvcHJvZC1zZXR0aW5ncy9wcm9kX2NfcGFk
DQooWEVOKSBoYW5kbGUgL3BjaWVAMTAwMzAwMC9wcm9kLXNldHRpbmdzL3Byb2RfY19ycA0KKFhF
TikgZHRfaXJxX251bWJlcjogZGV2PS9wY2llQDEwMDMwMDAvcHJvZC1zZXR0aW5ncy9wcm9kX2Nf
cnANCihYRU4pIC9wY2llQDEwMDMwMDAvcHJvZC1zZXR0aW5ncy9wcm9kX2NfcnAgcGFzc3Rocm91
Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3BjaWVAMTAwMzAwMC9wcm9kLXNldHRp
bmdzL3Byb2RfY19ycCBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2ly
cV9udW1iZXI6IGRldj0vcGNpZUAxMDAzMDAwL3Byb2Qtc2V0dGluZ3MvcHJvZF9jX3JwDQooWEVO
KSBoYW5kbGUgL2kyY0A3MDAwYzAwMA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9pMmNANzAw
MGMwMDANCihYRU4pICB1c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVj
PTAgaW50bGVuPTMNCihYRU4pICBpbnRzaXplPTMgaW50bGVuPTMNCihYRU4pIGR0X2RldmljZV9n
ZXRfcmF3X2lycTogZGV2PS9pMmNANzAwMGMwMDAsIGluZGV4PTANCihYRU4pICB1c2luZyAnaW50
ZXJydXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTMNCihYRU4pICBpbnRz
aXplPTMgaW50bGVuPTMNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBwYXI9L2ludGVycnVwdC1jb250
cm9sbGVyQDYwMDA0MDAwLGludHNwZWM9WzB4MDAwMDAwMDAgMHgwMDAwMDAyNi4uLl0sb2ludHNp
emU9Mw0KKFhFTikgZHRfaXJxX21hcF9yYXc6IGlwYXI9L2ludGVycnVwdC1jb250cm9sbGVyQDYw
MDA0MDAwLCBzaXplPTMNCihYRU4pICAtPiBhZGRyc2l6ZT0yDQooWEVOKSAgLT4gZ290IGl0ICEN
CihYRU4pIC9pMmNANzAwMGMwMDAgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMQ0KKFhFTikgQ2hl
Y2sgaWYgL2kyY0A3MDAwYzAwMCBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4p
IGR0X2lycV9udW1iZXI6IGRldj0vaTJjQDcwMDBjMDAwDQooWEVOKSAgdXNpbmcgJ2ludGVycnVw
dHMnIHByb3BlcnR5DQooWEVOKSAgaW50c3BlYz0wIGludGxlbj0zDQooWEVOKSAgaW50c2l6ZT0z
IGludGxlbj0zDQooWEVOKSBkdF9kZXZpY2VfZ2V0X3Jhd19pcnE6IGRldj0vaTJjQDcwMDBjMDAw
LCBpbmRleD0wDQooWEVOKSAgdXNpbmcgJ2ludGVycnVwdHMnIHByb3BlcnR5DQooWEVOKSAgaW50
c3BlYz0wIGludGxlbj0zDQooWEVOKSAgaW50c2l6ZT0zIGludGxlbj0zDQooWEVOKSBkdF9pcnFf
bWFwX3JhdzogcGFyPS9pbnRlcnJ1cHQtY29udHJvbGxlckA2MDAwNDAwMCxpbnRzcGVjPVsweDAw
MDAwMDAwIDB4MDAwMDAwMjYuLi5dLG9pbnRzaXplPTMNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBp
cGFyPS9pbnRlcnJ1cHQtY29udHJvbGxlckA2MDAwNDAwMCwgc2l6ZT0zDQooWEVOKSAgLT4gYWRk
cnNpemU9Mg0KKFhFTikgIC0+IGdvdCBpdCAhDQooWEVOKSBpcnEgMCBub3QgKGRpcmVjdGx5IG9y
IGluZGlyZWN0bHkpIGNvbm5lY3RlZCB0byBwcmltYXJ5Y29udHJvbGxlci4gQ29ubmVjdGVkIHRv
IC9pbnRlcnJ1cHQtY29udHJvbGxlckA2MDAwNDAwMA0KKFhFTikgRFQ6ICoqIHRyYW5zbGF0aW9u
IGZvciBkZXZpY2UgL2kyY0A3MDAwYzAwMCAqKg0KKFhFTikgRFQ6IGJ1cyBpcyBkZWZhdWx0IChu
YT0yLCBucz0yKSBvbiAvDQooWEVOKSBEVDogdHJhbnNsYXRpbmcgYWRkcmVzczo8Mz4gMDAwMDAw
MDA8Mz4gNzAwMGMwMDA8Mz4NCihYRU4pIERUOiByZWFjaGVkIHJvb3Qgbm9kZQ0KKFhFTikgICAt
IE1NSU86IDAwNzAwMGMwMDAgLSAwMDcwMDBjMTAwIFAyTVR5cGU9NQ0KKFhFTikgaGFuZGxlIC9p
MmNANzAwMGMwMDAvdGVtcC1zZW5zb3JANGMNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vaTJj
QDcwMDBjMDAwL3RlbXAtc2Vuc29yQDRjDQooWEVOKSAgdXNpbmcgJ2ludGVycnVwdHMnIHByb3Bl
cnR5DQooWEVOKSAgaW50c3BlYz0xODggaW50bGVuPTINCihYRU4pICBpbnRzaXplPTIgaW50bGVu
PTINCihYRU4pIGR0X2RldmljZV9nZXRfcmF3X2lycTogZGV2PS9pMmNANzAwMGMwMDAvdGVtcC1z
ZW5zb3JANGMsIGluZGV4PTANCihYRU4pICB1c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihY
RU4pICBpbnRzcGVjPTE4OCBpbnRsZW49Mg0KKFhFTikgIGludHNpemU9MiBpbnRsZW49Mg0KKFhF
TikgZHRfaXJxX21hcF9yYXc6IHBhcj0vZ3Bpb0A2MDAwZDAwMCxpbnRzcGVjPVsweDAwMDAwMGJj
IDB4MDAwMDAwMDguLi5dLG9pbnRzaXplPTINCihYRU4pIGR0X2lycV9tYXBfcmF3OiBpcGFyPS9n
cGlvQDYwMDBkMDAwLCBzaXplPTINCihYRU4pICAtPiBhZGRyc2l6ZT0yDQooWEVOKSAgLT4gZ290
IGl0ICENCihYRU4pIC9pMmNANzAwMGMwMDAvdGVtcC1zZW5zb3JANGMgcGFzc3Rocm91Z2ggPSAx
IG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL2kyY0A3MDAwYzAwMC90ZW1wLXNlbnNvckA0YyBp
cyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0v
aTJjQDcwMDBjMDAwL3RlbXAtc2Vuc29yQDRjDQooWEVOKSAgdXNpbmcgJ2ludGVycnVwdHMnIHBy
b3BlcnR5DQooWEVOKSAgaW50c3BlYz0xODggaW50bGVuPTINCihYRU4pICBpbnRzaXplPTIgaW50
bGVuPTINCihYRU4pIGR0X2RldmljZV9nZXRfcmF3X2lycTogZGV2PS9pMmNANzAwMGMwMDAvdGVt
cC1zZW5zb3JANGMsIGluZGV4PTANCihYRU4pICB1c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkN
CihYRU4pICBpbnRzcGVjPTE4OCBpbnRsZW49Mg0KKFhFTikgIGludHNpemU9MiBpbnRsZW49Mg0K
KFhFTikgZHRfaXJxX21hcF9yYXc6IHBhcj0vZ3Bpb0A2MDAwZDAwMCxpbnRzcGVjPVsweDAwMDAw
MGJjIDB4MDAwMDAwMDguLi5dLG9pbnRzaXplPTINCihYRU4pIGR0X2lycV9tYXBfcmF3OiBpcGFy
PS9ncGlvQDYwMDBkMDAwLCBzaXplPTINCihYRU4pICAtPiBhZGRyc2l6ZT0yDQooWEVOKSAgLT4g
Z290IGl0ICENCihYRU4pIGlycSAwIG5vdCAoZGlyZWN0bHkgb3IgaW5kaXJlY3RseSkgY29ubmVj
dGVkIHRvIHByaW1hcnljb250cm9sbGVyLiBDb25uZWN0ZWQgdG8gL2dwaW9ANjAwMGQwMDANCihY
RU4pIGhhbmRsZSAvaTJjQDcwMDBjMDAwL3RlbXAtc2Vuc29yQDRjL2xvYw0KKFhFTikgZHRfaXJx
X251bWJlcjogZGV2PS9pMmNANzAwMGMwMDAvdGVtcC1zZW5zb3JANGMvbG9jDQooWEVOKSAvaTJj
QDcwMDBjMDAwL3RlbXAtc2Vuc29yQDRjL2xvYyBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQoo
WEVOKSBDaGVjayBpZiAvaTJjQDcwMDBjMDAwL3RlbXAtc2Vuc29yQDRjL2xvYyBpcyBiZWhpbmQg
dGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vaTJjQDcwMDBj
MDAwL3RlbXAtc2Vuc29yQDRjL2xvYw0KKFhFTikgaGFuZGxlIC9pMmNANzAwMGMwMDAvdGVtcC1z
ZW5zb3JANGMvZXh0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2kyY0A3MDAwYzAwMC90ZW1w
LXNlbnNvckA0Yy9leHQNCihYRU4pIC9pMmNANzAwMGMwMDAvdGVtcC1zZW5zb3JANGMvZXh0IHBh
c3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9pMmNANzAwMGMwMDAvdGVt
cC1zZW5zb3JANGMvZXh0IGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRf
aXJxX251bWJlcjogZGV2PS9pMmNANzAwMGMwMDAvdGVtcC1zZW5zb3JANGMvZXh0DQooWEVOKSBo
YW5kbGUgL2kyY0A3MDAwYzQwMA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9pMmNANzAwMGM0
MDANCihYRU4pICB1c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAg
aW50bGVuPTMNCihYRU4pICBpbnRzaXplPTMgaW50bGVuPTMNCihYRU4pIGR0X2RldmljZV9nZXRf
cmF3X2lycTogZGV2PS9pMmNANzAwMGM0MDAsIGluZGV4PTANCihYRU4pICB1c2luZyAnaW50ZXJy
dXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTMNCihYRU4pICBpbnRzaXpl
PTMgaW50bGVuPTMNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBwYXI9L2ludGVycnVwdC1jb250cm9s
bGVyQDYwMDA0MDAwLGludHNwZWM9WzB4MDAwMDAwMDAgMHgwMDAwMDA1NC4uLl0sb2ludHNpemU9
Mw0KKFhFTikgZHRfaXJxX21hcF9yYXc6IGlwYXI9L2ludGVycnVwdC1jb250cm9sbGVyQDYwMDA0
MDAwLCBzaXplPTMNCihYRU4pICAtPiBhZGRyc2l6ZT0yDQooWEVOKSAgLT4gZ290IGl0ICENCihY
RU4pIC9pMmNANzAwMGM0MDAgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMQ0KKFhFTikgQ2hlY2sg
aWYgL2kyY0A3MDAwYzQwMCBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0
X2lycV9udW1iZXI6IGRldj0vaTJjQDcwMDBjNDAwDQooWEVOKSAgdXNpbmcgJ2ludGVycnVwdHMn
IHByb3BlcnR5DQooWEVOKSAgaW50c3BlYz0wIGludGxlbj0zDQooWEVOKSAgaW50c2l6ZT0zIGlu
dGxlbj0zDQooWEVOKSBkdF9kZXZpY2VfZ2V0X3Jhd19pcnE6IGRldj0vaTJjQDcwMDBjNDAwLCBp
bmRleD0wDQooWEVOKSAgdXNpbmcgJ2ludGVycnVwdHMnIHByb3BlcnR5DQooWEVOKSAgaW50c3Bl
Yz0wIGludGxlbj0zDQooWEVOKSAgaW50c2l6ZT0zIGludGxlbj0zDQooWEVOKSBkdF9pcnFfbWFw
X3JhdzogcGFyPS9pbnRlcnJ1cHQtY29udHJvbGxlckA2MDAwNDAwMCxpbnRzcGVjPVsweDAwMDAw
MDAwIDB4MDAwMDAwNTQuLi5dLG9pbnRzaXplPTMNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBpcGFy
PS9pbnRlcnJ1cHQtY29udHJvbGxlckA2MDAwNDAwMCwgc2l6ZT0zDQooWEVOKSAgLT4gYWRkcnNp
emU9Mg0KKFhFTikgIC0+IGdvdCBpdCAhDQooWEVOKSBpcnEgMCBub3QgKGRpcmVjdGx5IG9yIGlu
ZGlyZWN0bHkpIGNvbm5lY3RlZCB0byBwcmltYXJ5Y29udHJvbGxlci4gQ29ubmVjdGVkIHRvIC9p
bnRlcnJ1cHQtY29udHJvbGxlckA2MDAwNDAwMA0KKFhFTikgRFQ6ICoqIHRyYW5zbGF0aW9uIGZv
ciBkZXZpY2UgL2kyY0A3MDAwYzQwMCAqKg0KKFhFTikgRFQ6IGJ1cyBpcyBkZWZhdWx0IChuYT0y
LCBucz0yKSBvbiAvDQooWEVOKSBEVDogdHJhbnNsYXRpbmcgYWRkcmVzczo8Mz4gMDAwMDAwMDA8
Mz4gNzAwMGM0MDA8Mz4NCihYRU4pIERUOiByZWFjaGVkIHJvb3Qgbm9kZQ0KKFhFTikgICAtIE1N
SU86IDAwNzAwMGM0MDAgLSAwMDcwMDBjNTAwIFAyTVR5cGU9NQ0KKFhFTikgaGFuZGxlIC9pMmNA
NzAwMGM0MDAvaTJjbXV4QDcwDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2kyY0A3MDAwYzQw
MC9pMmNtdXhANzANCihYRU4pIC9pMmNANzAwMGM0MDAvaTJjbXV4QDcwIHBhc3N0aHJvdWdoID0g
MSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9pMmNANzAwMGM0MDAvaTJjbXV4QDcwIGlzIGJl
aGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9pMmNA
NzAwMGM0MDAvaTJjbXV4QDcwDQooWEVOKSBoYW5kbGUgL2kyY0A3MDAwYzQwMC9pMmNtdXhANzAv
aTJjQDANCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vaTJjQDcwMDBjNDAwL2kyY211eEA3MC9p
MmNAMA0KKFhFTikgL2kyY0A3MDAwYzQwMC9pMmNtdXhANzAvaTJjQDAgcGFzc3Rocm91Z2ggPSAx
IG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL2kyY0A3MDAwYzQwMC9pMmNtdXhANzAvaTJjQDAg
aXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9
L2kyY0A3MDAwYzQwMC9pMmNtdXhANzAvaTJjQDANCihYRU4pIGhhbmRsZSAvaTJjQDcwMDBjNDAw
L2kyY211eEA3MC9pMmNAMQ0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9pMmNANzAwMGM0MDAv
aTJjbXV4QDcwL2kyY0AxDQooWEVOKSAvaTJjQDcwMDBjNDAwL2kyY211eEA3MC9pMmNAMSBwYXNz
dGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvaTJjQDcwMDBjNDAwL2kyY211
eEA3MC9pMmNAMSBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9u
dW1iZXI6IGRldj0vaTJjQDcwMDBjNDAwL2kyY211eEA3MC9pMmNAMQ0KKFhFTikgaGFuZGxlIC9p
MmNANzAwMGM0MDAvaTJjbXV4QDcwL2kyY0AxL2luYTMyMjF4QDQwDQooWEVOKSBkdF9pcnFfbnVt
YmVyOiBkZXY9L2kyY0A3MDAwYzQwMC9pMmNtdXhANzAvaTJjQDEvaW5hMzIyMXhANDANCihYRU4p
IC9pMmNANzAwMGM0MDAvaTJjbXV4QDcwL2kyY0AxL2luYTMyMjF4QDQwIHBhc3N0aHJvdWdoID0g
MSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9pMmNANzAwMGM0MDAvaTJjbXV4QDcwL2kyY0Ax
L2luYTMyMjF4QDQwIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJx
X251bWJlcjogZGV2PS9pMmNANzAwMGM0MDAvaTJjbXV4QDcwL2kyY0AxL2luYTMyMjF4QDQwDQoo
WEVOKSBoYW5kbGUgL2kyY0A3MDAwYzQwMC9pMmNtdXhANzAvaTJjQDEvaW5hMzIyMXhANDAvY2hh
bm5lbEAwDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2kyY0A3MDAwYzQwMC9pMmNtdXhANzAv
aTJjQDEvaW5hMzIyMXhANDAvY2hhbm5lbEAwDQooWEVOKSAvaTJjQDcwMDBjNDAwL2kyY211eEA3
MC9pMmNAMS9pbmEzMjIxeEA0MC9jaGFubmVsQDAgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0K
KFhFTikgQ2hlY2sgaWYgL2kyY0A3MDAwYzQwMC9pMmNtdXhANzAvaTJjQDEvaW5hMzIyMXhANDAv
Y2hhbm5lbEAwIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251
bWJlcjogZGV2PS9pMmNANzAwMGM0MDAvaTJjbXV4QDcwL2kyY0AxL2luYTMyMjF4QDQwL2NoYW5u
ZWxAMA0KKFhFTikgaGFuZGxlIC9pMmNANzAwMGM0MDAvaTJjbXV4QDcwL2kyY0AxL2luYTMyMjF4
QDQwL2NoYW5uZWxAMQ0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9pMmNANzAwMGM0MDAvaTJj
bXV4QDcwL2kyY0AxL2luYTMyMjF4QDQwL2NoYW5uZWxAMQ0KKFhFTikgL2kyY0A3MDAwYzQwMC9p
MmNtdXhANzAvaTJjQDEvaW5hMzIyMXhANDAvY2hhbm5lbEAxIHBhc3N0aHJvdWdoID0gMSBuYWRk
ciA9IDANCihYRU4pIENoZWNrIGlmIC9pMmNANzAwMGM0MDAvaTJjbXV4QDcwL2kyY0AxL2luYTMy
MjF4QDQwL2NoYW5uZWxAMSBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0
X2lycV9udW1iZXI6IGRldj0vaTJjQDcwMDBjNDAwL2kyY211eEA3MC9pMmNAMS9pbmEzMjIxeEA0
MC9jaGFubmVsQDENCihYRU4pIGhhbmRsZSAvaTJjQDcwMDBjNDAwL2kyY211eEA3MC9pMmNAMS9p
bmEzMjIxeEA0MC9jaGFubmVsQDINCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vaTJjQDcwMDBj
NDAwL2kyY211eEA3MC9pMmNAMS9pbmEzMjIxeEA0MC9jaGFubmVsQDINCihYRU4pIC9pMmNANzAw
MGM0MDAvaTJjbXV4QDcwL2kyY0AxL2luYTMyMjF4QDQwL2NoYW5uZWxAMiBwYXNzdGhyb3VnaCA9
IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvaTJjQDcwMDBjNDAwL2kyY211eEA3MC9pMmNA
MS9pbmEzMjIxeEA0MC9jaGFubmVsQDIgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQoo
WEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2kyY0A3MDAwYzQwMC9pMmNtdXhANzAvaTJjQDEvaW5h
MzIyMXhANDAvY2hhbm5lbEAyDQooWEVOKSBoYW5kbGUgL2kyY0A3MDAwYzQwMC9pMmNtdXhANzAv
aTJjQDEvaW5hMzIyMXhANDENCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vaTJjQDcwMDBjNDAw
L2kyY211eEA3MC9pMmNAMS9pbmEzMjIxeEA0MQ0KKFhFTikgL2kyY0A3MDAwYzQwMC9pMmNtdXhA
NzAvaTJjQDEvaW5hMzIyMXhANDEgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hl
Y2sgaWYgL2kyY0A3MDAwYzQwMC9pMmNtdXhANzAvaTJjQDEvaW5hMzIyMXhANDEgaXMgYmVoaW5k
IHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2kyY0A3MDAw
YzQwMC9pMmNtdXhANzAvaTJjQDEvaW5hMzIyMXhANDENCihYRU4pIGhhbmRsZSAvaTJjQDcwMDBj
NDAwL2kyY211eEA3MC9pMmNAMS9pbmEzMjIxeEA0MS9jaGFubmVsQDANCihYRU4pIGR0X2lycV9u
dW1iZXI6IGRldj0vaTJjQDcwMDBjNDAwL2kyY211eEA3MC9pMmNAMS9pbmEzMjIxeEA0MS9jaGFu
bmVsQDANCihYRU4pIC9pMmNANzAwMGM0MDAvaTJjbXV4QDcwL2kyY0AxL2luYTMyMjF4QDQxL2No
YW5uZWxAMCBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvaTJjQDcw
MDBjNDAwL2kyY211eEA3MC9pMmNAMS9pbmEzMjIxeEA0MS9jaGFubmVsQDAgaXMgYmVoaW5kIHRo
ZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2kyY0A3MDAwYzQw
MC9pMmNtdXhANzAvaTJjQDEvaW5hMzIyMXhANDEvY2hhbm5lbEAwDQooWEVOKSBoYW5kbGUgL2ky
Y0A3MDAwYzQwMC9pMmNtdXhANzAvaTJjQDEvaW5hMzIyMXhANDEvY2hhbm5lbEAxDQooWEVOKSBk
dF9pcnFfbnVtYmVyOiBkZXY9L2kyY0A3MDAwYzQwMC9pMmNtdXhANzAvaTJjQDEvaW5hMzIyMXhA
NDEvY2hhbm5lbEAxDQooWEVOKSAvaTJjQDcwMDBjNDAwL2kyY211eEA3MC9pMmNAMS9pbmEzMjIx
eEA0MS9jaGFubmVsQDEgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYg
L2kyY0A3MDAwYzQwMC9pMmNtdXhANzAvaTJjQDEvaW5hMzIyMXhANDEvY2hhbm5lbEAxIGlzIGJl
aGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9pMmNA
NzAwMGM0MDAvaTJjbXV4QDcwL2kyY0AxL2luYTMyMjF4QDQxL2NoYW5uZWxAMQ0KKFhFTikgaGFu
ZGxlIC9pMmNANzAwMGM0MDAvaTJjbXV4QDcwL2kyY0AxL2luYTMyMjF4QDQxL2NoYW5uZWxAMg0K
KFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9pMmNANzAwMGM0MDAvaTJjbXV4QDcwL2kyY0AxL2lu
YTMyMjF4QDQxL2NoYW5uZWxAMg0KKFhFTikgL2kyY0A3MDAwYzQwMC9pMmNtdXhANzAvaTJjQDEv
aW5hMzIyMXhANDEvY2hhbm5lbEAyIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENo
ZWNrIGlmIC9pMmNANzAwMGM0MDAvaTJjbXV4QDcwL2kyY0AxL2luYTMyMjF4QDQxL2NoYW5uZWxA
MiBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRl
dj0vaTJjQDcwMDBjNDAwL2kyY211eEA3MC9pMmNAMS9pbmEzMjIxeEA0MS9jaGFubmVsQDINCihY
RU4pIGhhbmRsZSAvaTJjQDcwMDBjNDAwL2kyY211eEA3MC9pMmNAMS9pbmEzMjIxeEA0Mg0KKFhF
TikgZHRfaXJxX251bWJlcjogZGV2PS9pMmNANzAwMGM0MDAvaTJjbXV4QDcwL2kyY0AxL2luYTMy
MjF4QDQyDQooWEVOKSAvaTJjQDcwMDBjNDAwL2kyY211eEA3MC9pMmNAMS9pbmEzMjIxeEA0MiBw
YXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvaTJjQDcwMDBjNDAwL2ky
Y211eEA3MC9pMmNAMS9pbmEzMjIxeEA0MiBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQN
CihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vaTJjQDcwMDBjNDAwL2kyY211eEA3MC9pMmNAMS9p
bmEzMjIxeEA0Mg0KKFhFTikgaGFuZGxlIC9pMmNANzAwMGM0MDAvaTJjbXV4QDcwL2kyY0AxL2lu
YTMyMjF4QDQyL2NoYW5uZWxAMA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9pMmNANzAwMGM0
MDAvaTJjbXV4QDcwL2kyY0AxL2luYTMyMjF4QDQyL2NoYW5uZWxAMA0KKFhFTikgL2kyY0A3MDAw
YzQwMC9pMmNtdXhANzAvaTJjQDEvaW5hMzIyMXhANDIvY2hhbm5lbEAwIHBhc3N0aHJvdWdoID0g
MSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9pMmNANzAwMGM0MDAvaTJjbXV4QDcwL2kyY0Ax
L2luYTMyMjF4QDQyL2NoYW5uZWxAMCBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihY
RU4pIGR0X2lycV9udW1iZXI6IGRldj0vaTJjQDcwMDBjNDAwL2kyY211eEA3MC9pMmNAMS9pbmEz
MjIxeEA0Mi9jaGFubmVsQDANCihYRU4pIGhhbmRsZSAvaTJjQDcwMDBjNDAwL2kyY211eEA3MC9p
MmNAMS9pbmEzMjIxeEA0Mi9jaGFubmVsQDENCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vaTJj
QDcwMDBjNDAwL2kyY211eEA3MC9pMmNAMS9pbmEzMjIxeEA0Mi9jaGFubmVsQDENCihYRU4pIC9p
MmNANzAwMGM0MDAvaTJjbXV4QDcwL2kyY0AxL2luYTMyMjF4QDQyL2NoYW5uZWxAMSBwYXNzdGhy
b3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvaTJjQDcwMDBjNDAwL2kyY211eEA3
MC9pMmNAMS9pbmEzMjIxeEA0Mi9jaGFubmVsQDEgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRk
IGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2kyY0A3MDAwYzQwMC9pMmNtdXhANzAvaTJj
QDEvaW5hMzIyMXhANDIvY2hhbm5lbEAxDQooWEVOKSBoYW5kbGUgL2kyY0A3MDAwYzQwMC9pMmNt
dXhANzAvaTJjQDEvaW5hMzIyMXhANDIvY2hhbm5lbEAyDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBk
ZXY9L2kyY0A3MDAwYzQwMC9pMmNtdXhANzAvaTJjQDEvaW5hMzIyMXhANDIvY2hhbm5lbEAyDQoo
WEVOKSAvaTJjQDcwMDBjNDAwL2kyY211eEA3MC9pMmNAMS9pbmEzMjIxeEA0Mi9jaGFubmVsQDIg
cGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL2kyY0A3MDAwYzQwMC9p
MmNtdXhANzAvaTJjQDEvaW5hMzIyMXhANDIvY2hhbm5lbEAyIGlzIGJlaGluZCB0aGUgSU9NTVUg
YW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9pMmNANzAwMGM0MDAvaTJjbXV4
QDcwL2kyY0AxL2luYTMyMjF4QDQyL2NoYW5uZWxAMg0KKFhFTikgaGFuZGxlIC9pMmNANzAwMGM0
MDAvaTJjbXV4QDcwL2kyY0AyDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2kyY0A3MDAwYzQw
MC9pMmNtdXhANzAvaTJjQDINCihYRU4pIC9pMmNANzAwMGM0MDAvaTJjbXV4QDcwL2kyY0AyIHBh
c3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9pMmNANzAwMGM0MDAvaTJj
bXV4QDcwL2kyY0AyIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJx
X251bWJlcjogZGV2PS9pMmNANzAwMGM0MDAvaTJjbXV4QDcwL2kyY0AyDQooWEVOKSBoYW5kbGUg
L2kyY0A3MDAwYzQwMC9pMmNtdXhANzAvaTJjQDMNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0v
aTJjQDcwMDBjNDAwL2kyY211eEA3MC9pMmNAMw0KKFhFTikgL2kyY0A3MDAwYzQwMC9pMmNtdXhA
NzAvaTJjQDMgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL2kyY0A3
MDAwYzQwMC9pMmNtdXhANzAvaTJjQDMgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQoo
WEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2kyY0A3MDAwYzQwMC9pMmNtdXhANzAvaTJjQDMNCihY
RU4pIGhhbmRsZSAvaTJjQDcwMDBjNDAwL2kyY211eEA3MC9pMmNAMy9ydDU2NTkuMTItMDAxYUAx
YQ0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9pMmNANzAwMGM0MDAvaTJjbXV4QDcwL2kyY0Az
L3J0NTY1OS4xMi0wMDFhQDFhDQooWEVOKSAvaTJjQDcwMDBjNDAwL2kyY211eEA3MC9pMmNAMy9y
dDU2NTkuMTItMDAxYUAxYSBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBp
ZiAvaTJjQDcwMDBjNDAwL2kyY211eEA3MC9pMmNAMy9ydDU2NTkuMTItMDAxYUAxYSBpcyBiZWhp
bmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vaTJjQDcw
MDBjNDAwL2kyY211eEA3MC9pMmNAMy9ydDU2NTkuMTItMDAxYUAxYQ0KKFhFTikgaGFuZGxlIC9p
MmNANzAwMGM0MDAvZ3Bpb0AyMA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9pMmNANzAwMGM0
MDAvZ3Bpb0AyMA0KKFhFTikgL2kyY0A3MDAwYzQwMC9ncGlvQDIwIHBhc3N0aHJvdWdoID0gMSBu
YWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9pMmNANzAwMGM0MDAvZ3Bpb0AyMCBpcyBiZWhpbmQg
dGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vaTJjQDcwMDBj
NDAwL2dwaW9AMjANCihYRU4pIGhhbmRsZSAvaTJjQDcwMDBjNDAwL2ljbTIwNjI4QDY4DQooWEVO
KSBkdF9pcnFfbnVtYmVyOiBkZXY9L2kyY0A3MDAwYzQwMC9pY20yMDYyOEA2OA0KKFhFTikgIHVz
aW5nICdpbnRlcnJ1cHRzJyBwcm9wZXJ0eQ0KKFhFTikgIGludHNwZWM9MjAwIGludGxlbj0yDQoo
WEVOKSAgaW50c2l6ZT0yIGludGxlbj0yDQooWEVOKSBkdF9kZXZpY2VfZ2V0X3Jhd19pcnE6IGRl
dj0vaTJjQDcwMDBjNDAwL2ljbTIwNjI4QDY4LCBpbmRleD0wDQooWEVOKSAgdXNpbmcgJ2ludGVy
cnVwdHMnIHByb3BlcnR5DQooWEVOKSAgaW50c3BlYz0yMDAgaW50bGVuPTINCihYRU4pICBpbnRz
aXplPTIgaW50bGVuPTINCihYRU4pIGR0X2lycV9tYXBfcmF3OiBwYXI9L2dwaW9ANjAwMGQwMDAs
aW50c3BlYz1bMHgwMDAwMDBjOCAweDAwMDAwMDAxLi4uXSxvaW50c2l6ZT0yDQooWEVOKSBkdF9p
cnFfbWFwX3JhdzogaXBhcj0vZ3Bpb0A2MDAwZDAwMCwgc2l6ZT0yDQooWEVOKSAgLT4gYWRkcnNp
emU9Mg0KKFhFTikgIC0+IGdvdCBpdCAhDQooWEVOKSAvaTJjQDcwMDBjNDAwL2ljbTIwNjI4QDY4
IHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9pMmNANzAwMGM0MDAv
aWNtMjA2MjhANjggaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFf
bnVtYmVyOiBkZXY9L2kyY0A3MDAwYzQwMC9pY20yMDYyOEA2OA0KKFhFTikgIHVzaW5nICdpbnRl
cnJ1cHRzJyBwcm9wZXJ0eQ0KKFhFTikgIGludHNwZWM9MjAwIGludGxlbj0yDQooWEVOKSAgaW50
c2l6ZT0yIGludGxlbj0yDQooWEVOKSBkdF9kZXZpY2VfZ2V0X3Jhd19pcnE6IGRldj0vaTJjQDcw
MDBjNDAwL2ljbTIwNjI4QDY4LCBpbmRleD0wDQooWEVOKSAgdXNpbmcgJ2ludGVycnVwdHMnIHBy
b3BlcnR5DQooWEVOKSAgaW50c3BlYz0yMDAgaW50bGVuPTINCihYRU4pICBpbnRzaXplPTIgaW50
bGVuPTINCihYRU4pIGR0X2lycV9tYXBfcmF3OiBwYXI9L2dwaW9ANjAwMGQwMDAsaW50c3BlYz1b
MHgwMDAwMDBjOCAweDAwMDAwMDAxLi4uXSxvaW50c2l6ZT0yDQooWEVOKSBkdF9pcnFfbWFwX3Jh
dzogaXBhcj0vZ3Bpb0A2MDAwZDAwMCwgc2l6ZT0yDQooWEVOKSAgLT4gYWRkcnNpemU9Mg0KKFhF
TikgIC0+IGdvdCBpdCAhDQooWEVOKSBpcnEgMCBub3QgKGRpcmVjdGx5IG9yIGluZGlyZWN0bHkp
IGNvbm5lY3RlZCB0byBwcmltYXJ5Y29udHJvbGxlci4gQ29ubmVjdGVkIHRvIC9ncGlvQDYwMDBk
MDAwDQooWEVOKSBoYW5kbGUgL2kyY0A3MDAwYzQwMC9hazg5NjNAMGQNCihYRU4pIGR0X2lycV9u
dW1iZXI6IGRldj0vaTJjQDcwMDBjNDAwL2FrODk2M0AwZA0KKFhFTikgL2kyY0A3MDAwYzQwMC9h
azg5NjNAMGQgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL2kyY0A3
MDAwYzQwMC9hazg5NjNAMGQgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBk
dF9pcnFfbnVtYmVyOiBkZXY9L2kyY0A3MDAwYzQwMC9hazg5NjNAMGQNCihYRU4pIGhhbmRsZSAv
aTJjQDcwMDBjNDAwL2NtMzIxODBANDgNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vaTJjQDcw
MDBjNDAwL2NtMzIxODBANDgNCihYRU4pIC9pMmNANzAwMGM0MDAvY20zMjE4MEA0OCBwYXNzdGhy
b3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvaTJjQDcwMDBjNDAwL2NtMzIxODBA
NDggaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBk
ZXY9L2kyY0A3MDAwYzQwMC9jbTMyMTgwQDQ4DQooWEVOKSBoYW5kbGUgL2kyY0A3MDAwYzQwMC9p
cXMyNjNANDQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vaTJjQDcwMDBjNDAwL2lxczI2M0A0
NA0KKFhFTikgL2kyY0A3MDAwYzQwMC9pcXMyNjNANDQgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0g
MA0KKFhFTikgQ2hlY2sgaWYgL2kyY0A3MDAwYzQwMC9pcXMyNjNANDQgaXMgYmVoaW5kIHRoZSBJ
T01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2kyY0A3MDAwYzQwMC9p
cXMyNjNANDQNCihYRU4pIGhhbmRsZSAvaTJjQDcwMDBjNDAwL3J0NTY1OS4xLTAwMWFAMWENCihY
RU4pIGR0X2lycV9udW1iZXI6IGRldj0vaTJjQDcwMDBjNDAwL3J0NTY1OS4xLTAwMWFAMWENCihY
RU4pIC9pMmNANzAwMGM0MDAvcnQ1NjU5LjEtMDAxYUAxYSBwYXNzdGhyb3VnaCA9IDEgbmFkZHIg
PSAwDQooWEVOKSBDaGVjayBpZiAvaTJjQDcwMDBjNDAwL3J0NTY1OS4xLTAwMWFAMWEgaXMgYmVo
aW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2kyY0A3
MDAwYzQwMC9ydDU2NTkuMS0wMDFhQDFhDQooWEVOKSBoYW5kbGUgL2kyY0A3MDAwYzUwMA0KKFhF
TikgZHRfaXJxX251bWJlcjogZGV2PS9pMmNANzAwMGM1MDANCihYRU4pICB1c2luZyAnaW50ZXJy
dXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTMNCihYRU4pICBpbnRzaXpl
PTMgaW50bGVuPTMNCihYRU4pIGR0X2RldmljZV9nZXRfcmF3X2lycTogZGV2PS9pMmNANzAwMGM1
MDAsIGluZGV4PTANCihYRU4pICB1c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihYRU4pICBp
bnRzcGVjPTAgaW50bGVuPTMNCihYRU4pICBpbnRzaXplPTMgaW50bGVuPTMNCihYRU4pIGR0X2ly
cV9tYXBfcmF3OiBwYXI9L2ludGVycnVwdC1jb250cm9sbGVyQDYwMDA0MDAwLGludHNwZWM9WzB4
MDAwMDAwMDAgMHgwMDAwMDA1Yy4uLl0sb2ludHNpemU9Mw0KKFhFTikgZHRfaXJxX21hcF9yYXc6
IGlwYXI9L2ludGVycnVwdC1jb250cm9sbGVyQDYwMDA0MDAwLCBzaXplPTMNCihYRU4pICAtPiBh
ZGRyc2l6ZT0yDQooWEVOKSAgLT4gZ290IGl0ICENCihYRU4pIC9pMmNANzAwMGM1MDAgcGFzc3Ro
cm91Z2ggPSAxIG5hZGRyID0gMQ0KKFhFTikgQ2hlY2sgaWYgL2kyY0A3MDAwYzUwMCBpcyBiZWhp
bmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vaTJjQDcw
MDBjNTAwDQooWEVOKSAgdXNpbmcgJ2ludGVycnVwdHMnIHByb3BlcnR5DQooWEVOKSAgaW50c3Bl
Yz0wIGludGxlbj0zDQooWEVOKSAgaW50c2l6ZT0zIGludGxlbj0zDQooWEVOKSBkdF9kZXZpY2Vf
Z2V0X3Jhd19pcnE6IGRldj0vaTJjQDcwMDBjNTAwLCBpbmRleD0wDQooWEVOKSAgdXNpbmcgJ2lu
dGVycnVwdHMnIHByb3BlcnR5DQooWEVOKSAgaW50c3BlYz0wIGludGxlbj0zDQooWEVOKSAgaW50
c2l6ZT0zIGludGxlbj0zDQooWEVOKSBkdF9pcnFfbWFwX3JhdzogcGFyPS9pbnRlcnJ1cHQtY29u
dHJvbGxlckA2MDAwNDAwMCxpbnRzcGVjPVsweDAwMDAwMDAwIDB4MDAwMDAwNWMuLi5dLG9pbnRz
aXplPTMNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBpcGFyPS9pbnRlcnJ1cHQtY29udHJvbGxlckA2
MDAwNDAwMCwgc2l6ZT0zDQooWEVOKSAgLT4gYWRkcnNpemU9Mg0KKFhFTikgIC0+IGdvdCBpdCAh
DQooWEVOKSBpcnEgMCBub3QgKGRpcmVjdGx5IG9yIGluZGlyZWN0bHkpIGNvbm5lY3RlZCB0byBw
cmltYXJ5Y29udHJvbGxlci4gQ29ubmVjdGVkIHRvIC9pbnRlcnJ1cHQtY29udHJvbGxlckA2MDAw
NDAwMA0KKFhFTikgRFQ6ICoqIHRyYW5zbGF0aW9uIGZvciBkZXZpY2UgL2kyY0A3MDAwYzUwMCAq
Kg0KKFhFTikgRFQ6IGJ1cyBpcyBkZWZhdWx0IChuYT0yLCBucz0yKSBvbiAvDQooWEVOKSBEVDog
dHJhbnNsYXRpbmcgYWRkcmVzczo8Mz4gMDAwMDAwMDA8Mz4gNzAwMGM1MDA8Mz4NCihYRU4pIERU
OiByZWFjaGVkIHJvb3Qgbm9kZQ0KKFhFTikgICAtIE1NSU86IDAwNzAwMGM1MDAgLSAwMDcwMDBj
NjAwIFAyTVR5cGU9NQ0KKFhFTikgaGFuZGxlIC9pMmNANzAwMGM1MDAvYmF0dGVyeS1jaGFyZ2Vy
QDZiDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2kyY0A3MDAwYzUwMC9iYXR0ZXJ5LWNoYXJn
ZXJANmINCihYRU4pIC9pMmNANzAwMGM1MDAvYmF0dGVyeS1jaGFyZ2VyQDZiIHBhc3N0aHJvdWdo
ID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9pMmNANzAwMGM1MDAvYmF0dGVyeS1jaGFy
Z2VyQDZiIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJl
cjogZGV2PS9pMmNANzAwMGM1MDAvYmF0dGVyeS1jaGFyZ2VyQDZiDQooWEVOKSBoYW5kbGUgL2ky
Y0A3MDAwYzcwMA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9pMmNANzAwMGM3MDANCihYRU4p
ICB1c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTMN
CihYRU4pICBpbnRzaXplPTMgaW50bGVuPTMNCihYRU4pIGR0X2RldmljZV9nZXRfcmF3X2lycTog
ZGV2PS9pMmNANzAwMGM3MDAsIGluZGV4PTANCihYRU4pICB1c2luZyAnaW50ZXJydXB0cycgcHJv
cGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTMNCihYRU4pICBpbnRzaXplPTMgaW50bGVu
PTMNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBwYXI9L2ludGVycnVwdC1jb250cm9sbGVyQDYwMDA0
MDAwLGludHNwZWM9WzB4MDAwMDAwMDAgMHgwMDAwMDA3OC4uLl0sb2ludHNpemU9Mw0KKFhFTikg
ZHRfaXJxX21hcF9yYXc6IGlwYXI9L2ludGVycnVwdC1jb250cm9sbGVyQDYwMDA0MDAwLCBzaXpl
PTMNCihYRU4pICAtPiBhZGRyc2l6ZT0yDQooWEVOKSAgLT4gZ290IGl0ICENCihYRU4pIC9pMmNA
NzAwMGM3MDAgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMQ0KKFhFTikgQ2hlY2sgaWYgL2kyY0A3
MDAwYzcwMCBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1i
ZXI6IGRldj0vaTJjQDcwMDBjNzAwDQooWEVOKSAgdXNpbmcgJ2ludGVycnVwdHMnIHByb3BlcnR5
DQooWEVOKSAgaW50c3BlYz0wIGludGxlbj0zDQooWEVOKSAgaW50c2l6ZT0zIGludGxlbj0zDQoo
WEVOKSBkdF9kZXZpY2VfZ2V0X3Jhd19pcnE6IGRldj0vaTJjQDcwMDBjNzAwLCBpbmRleD0wDQoo
WEVOKSAgdXNpbmcgJ2ludGVycnVwdHMnIHByb3BlcnR5DQooWEVOKSAgaW50c3BlYz0wIGludGxl
bj0zDQooWEVOKSAgaW50c2l6ZT0zIGludGxlbj0zDQooWEVOKSBkdF9pcnFfbWFwX3JhdzogcGFy
PS9pbnRlcnJ1cHQtY29udHJvbGxlckA2MDAwNDAwMCxpbnRzcGVjPVsweDAwMDAwMDAwIDB4MDAw
MDAwNzguLi5dLG9pbnRzaXplPTMNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBpcGFyPS9pbnRlcnJ1
cHQtY29udHJvbGxlckA2MDAwNDAwMCwgc2l6ZT0zDQooWEVOKSAgLT4gYWRkcnNpemU9Mg0KKFhF
TikgIC0+IGdvdCBpdCAhDQooWEVOKSBpcnEgMCBub3QgKGRpcmVjdGx5IG9yIGluZGlyZWN0bHkp
IGNvbm5lY3RlZCB0byBwcmltYXJ5Y29udHJvbGxlci4gQ29ubmVjdGVkIHRvIC9pbnRlcnJ1cHQt
Y29udHJvbGxlckA2MDAwNDAwMA0KKFhFTikgRFQ6ICoqIHRyYW5zbGF0aW9uIGZvciBkZXZpY2Ug
L2kyY0A3MDAwYzcwMCAqKg0KKFhFTikgRFQ6IGJ1cyBpcyBkZWZhdWx0IChuYT0yLCBucz0yKSBv
biAvDQooWEVOKSBEVDogdHJhbnNsYXRpbmcgYWRkcmVzczo8Mz4gMDAwMDAwMDA8Mz4gNzAwMGM3
MDA8Mz4NCihYRU4pIERUOiByZWFjaGVkIHJvb3Qgbm9kZQ0KKFhFTikgICAtIE1NSU86IDAwNzAw
MGM3MDAgLSAwMDcwMDBjODAwIFAyTVR5cGU9NQ0KKFhFTikgaGFuZGxlIC9pMmNANzAwMGQwMDAN
CihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vaTJjQDcwMDBkMDAwDQooWEVOKSAgdXNpbmcgJ2lu
dGVycnVwdHMnIHByb3BlcnR5DQooWEVOKSAgaW50c3BlYz0wIGludGxlbj0zDQooWEVOKSAgaW50
c2l6ZT0zIGludGxlbj0zDQooWEVOKSBkdF9kZXZpY2VfZ2V0X3Jhd19pcnE6IGRldj0vaTJjQDcw
MDBkMDAwLCBpbmRleD0wDQooWEVOKSAgdXNpbmcgJ2ludGVycnVwdHMnIHByb3BlcnR5DQooWEVO
KSAgaW50c3BlYz0wIGludGxlbj0zDQooWEVOKSAgaW50c2l6ZT0zIGludGxlbj0zDQooWEVOKSBk
dF9pcnFfbWFwX3JhdzogcGFyPS9pbnRlcnJ1cHQtY29udHJvbGxlckA2MDAwNDAwMCxpbnRzcGVj
PVsweDAwMDAwMDAwIDB4MDAwMDAwMzUuLi5dLG9pbnRzaXplPTMNCihYRU4pIGR0X2lycV9tYXBf
cmF3OiBpcGFyPS9pbnRlcnJ1cHQtY29udHJvbGxlckA2MDAwNDAwMCwgc2l6ZT0zDQooWEVOKSAg
LT4gYWRkcnNpemU9Mg0KKFhFTikgIC0+IGdvdCBpdCAhDQooWEVOKSAvaTJjQDcwMDBkMDAwIHBh
c3N0aHJvdWdoID0gMSBuYWRkciA9IDENCihYRU4pIENoZWNrIGlmIC9pMmNANzAwMGQwMDAgaXMg
YmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2ky
Y0A3MDAwZDAwMA0KKFhFTikgIHVzaW5nICdpbnRlcnJ1cHRzJyBwcm9wZXJ0eQ0KKFhFTikgIGlu
dHNwZWM9MCBpbnRsZW49Mw0KKFhFTikgIGludHNpemU9MyBpbnRsZW49Mw0KKFhFTikgZHRfZGV2
aWNlX2dldF9yYXdfaXJxOiBkZXY9L2kyY0A3MDAwZDAwMCwgaW5kZXg9MA0KKFhFTikgIHVzaW5n
ICdpbnRlcnJ1cHRzJyBwcm9wZXJ0eQ0KKFhFTikgIGludHNwZWM9MCBpbnRsZW49Mw0KKFhFTikg
IGludHNpemU9MyBpbnRsZW49Mw0KKFhFTikgZHRfaXJxX21hcF9yYXc6IHBhcj0vaW50ZXJydXB0
LWNvbnRyb2xsZXJANjAwMDQwMDAsaW50c3BlYz1bMHgwMDAwMDAwMCAweDAwMDAwMDM1Li4uXSxv
aW50c2l6ZT0zDQooWEVOKSBkdF9pcnFfbWFwX3JhdzogaXBhcj0vaW50ZXJydXB0LWNvbnRyb2xs
ZXJANjAwMDQwMDAsIHNpemU9Mw0KKFhFTikgIC0+IGFkZHJzaXplPTINCihYRU4pICAtPiBnb3Qg
aXQgIQ0KKFhFTikgaXJxIDAgbm90IChkaXJlY3RseSBvciBpbmRpcmVjdGx5KSBjb25uZWN0ZWQg
dG8gcHJpbWFyeWNvbnRyb2xsZXIuIENvbm5lY3RlZCB0byAvaW50ZXJydXB0LWNvbnRyb2xsZXJA
NjAwMDQwMDANCihYRU4pIERUOiAqKiB0cmFuc2xhdGlvbiBmb3IgZGV2aWNlIC9pMmNANzAwMGQw
MDAgKioNCihYRU4pIERUOiBidXMgaXMgZGVmYXVsdCAobmE9MiwgbnM9Mikgb24gLw0KKFhFTikg
RFQ6IHRyYW5zbGF0aW5nIGFkZHJlc3M6PDM+IDAwMDAwMDAwPDM+IDcwMDBkMDAwPDM+DQooWEVO
KSBEVDogcmVhY2hlZCByb290IG5vZGUNCihYRU4pICAgLSBNTUlPOiAwMDcwMDBkMDAwIC0gMDA3
MDAwZDEwMCBQMk1UeXBlPTUNCihYRU4pIGhhbmRsZSAvaTJjQDcwMDBkMDAwL21heDc3NjIwQDNj
DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2kyY0A3MDAwZDAwMC9tYXg3NzYyMEAzYw0KKFhF
TikgIHVzaW5nICdpbnRlcnJ1cHRzJyBwcm9wZXJ0eQ0KKFhFTikgIGludHNwZWM9MCBpbnRsZW49
Mw0KKFhFTikgIGludHNpemU9MyBpbnRsZW49Mw0KKFhFTikgZHRfZGV2aWNlX2dldF9yYXdfaXJx
OiBkZXY9L2kyY0A3MDAwZDAwMC9tYXg3NzYyMEAzYywgaW5kZXg9MA0KKFhFTikgIHVzaW5nICdp
bnRlcnJ1cHRzJyBwcm9wZXJ0eQ0KKFhFTikgIGludHNwZWM9MCBpbnRsZW49Mw0KKFhFTikgIGlu
dHNpemU9MyBpbnRsZW49Mw0KKFhFTikgZHRfaXJxX21hcF9yYXc6IHBhcj0vaW50ZXJydXB0LWNv
bnRyb2xsZXJANjAwMDQwMDAsaW50c3BlYz1bMHgwMDAwMDAwMCAweDAwMDAwMDU2Li4uXSxvaW50
c2l6ZT0zDQooWEVOKSBkdF9pcnFfbWFwX3JhdzogaXBhcj0vaW50ZXJydXB0LWNvbnRyb2xsZXJA
NjAwMDQwMDAsIHNpemU9Mw0KKFhFTikgIC0+IGFkZHJzaXplPTINCihYRU4pICAtPiBnb3QgaXQg
IQ0KKFhFTikgL2kyY0A3MDAwZDAwMC9tYXg3NzYyMEAzYyBwYXNzdGhyb3VnaCA9IDEgbmFkZHIg
PSAwDQooWEVOKSBDaGVjayBpZiAvaTJjQDcwMDBkMDAwL21heDc3NjIwQDNjIGlzIGJlaGluZCB0
aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9pMmNANzAwMGQw
MDAvbWF4Nzc2MjBAM2MNCihYRU4pICB1c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihYRU4p
ICBpbnRzcGVjPTAgaW50bGVuPTMNCihYRU4pICBpbnRzaXplPTMgaW50bGVuPTMNCihYRU4pIGR0
X2RldmljZV9nZXRfcmF3X2lycTogZGV2PS9pMmNANzAwMGQwMDAvbWF4Nzc2MjBAM2MsIGluZGV4
PTANCihYRU4pICB1c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAg
aW50bGVuPTMNCihYRU4pICBpbnRzaXplPTMgaW50bGVuPTMNCihYRU4pIGR0X2lycV9tYXBfcmF3
OiBwYXI9L2ludGVycnVwdC1jb250cm9sbGVyQDYwMDA0MDAwLGludHNwZWM9WzB4MDAwMDAwMDAg
MHgwMDAwMDA1Ni4uLl0sb2ludHNpemU9Mw0KKFhFTikgZHRfaXJxX21hcF9yYXc6IGlwYXI9L2lu
dGVycnVwdC1jb250cm9sbGVyQDYwMDA0MDAwLCBzaXplPTMNCihYRU4pICAtPiBhZGRyc2l6ZT0y
DQooWEVOKSAgLT4gZ290IGl0ICENCihYRU4pIGlycSAwIG5vdCAoZGlyZWN0bHkgb3IgaW5kaXJl
Y3RseSkgY29ubmVjdGVkIHRvIHByaW1hcnljb250cm9sbGVyLiBDb25uZWN0ZWQgdG8gL2ludGVy
cnVwdC1jb250cm9sbGVyQDYwMDA0MDAwDQooWEVOKSBoYW5kbGUgL2kyY0A3MDAwZDAwMC9tYXg3
NzYyMEAzYy9waW5tdXhAMA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9pMmNANzAwMGQwMDAv
bWF4Nzc2MjBAM2MvcGlubXV4QDANCihYRU4pIC9pMmNANzAwMGQwMDAvbWF4Nzc2MjBAM2MvcGlu
bXV4QDAgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL2kyY0A3MDAw
ZDAwMC9tYXg3NzYyMEAzYy9waW5tdXhAMCBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQN
CihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vaTJjQDcwMDBkMDAwL21heDc3NjIwQDNjL3Bpbm11
eEAwDQooWEVOKSBoYW5kbGUgL2kyY0A3MDAwZDAwMC9tYXg3NzYyMEAzYy9waW5tdXhAMC9waW5f
Z3BpbzANCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vaTJjQDcwMDBkMDAwL21heDc3NjIwQDNj
L3Bpbm11eEAwL3Bpbl9ncGlvMA0KKFhFTikgL2kyY0A3MDAwZDAwMC9tYXg3NzYyMEAzYy9waW5t
dXhAMC9waW5fZ3BpbzAgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYg
L2kyY0A3MDAwZDAwMC9tYXg3NzYyMEAzYy9waW5tdXhAMC9waW5fZ3BpbzAgaXMgYmVoaW5kIHRo
ZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2kyY0A3MDAwZDAw
MC9tYXg3NzYyMEAzYy9waW5tdXhAMC9waW5fZ3BpbzANCihYRU4pIGhhbmRsZSAvaTJjQDcwMDBk
MDAwL21heDc3NjIwQDNjL3Bpbm11eEAwL3Bpbl9ncGlvMQ0KKFhFTikgZHRfaXJxX251bWJlcjog
ZGV2PS9pMmNANzAwMGQwMDAvbWF4Nzc2MjBAM2MvcGlubXV4QDAvcGluX2dwaW8xDQooWEVOKSAv
aTJjQDcwMDBkMDAwL21heDc3NjIwQDNjL3Bpbm11eEAwL3Bpbl9ncGlvMSBwYXNzdGhyb3VnaCA9
IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvaTJjQDcwMDBkMDAwL21heDc3NjIwQDNjL3Bp
bm11eEAwL3Bpbl9ncGlvMSBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0
X2lycV9udW1iZXI6IGRldj0vaTJjQDcwMDBkMDAwL21heDc3NjIwQDNjL3Bpbm11eEAwL3Bpbl9n
cGlvMQ0KKFhFTikgaGFuZGxlIC9pMmNANzAwMGQwMDAvbWF4Nzc2MjBAM2MvcGlubXV4QDAvcGlu
X2dwaW8yDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2kyY0A3MDAwZDAwMC9tYXg3NzYyMEAz
Yy9waW5tdXhAMC9waW5fZ3BpbzINCihYRU4pIC9pMmNANzAwMGQwMDAvbWF4Nzc2MjBAM2MvcGlu
bXV4QDAvcGluX2dwaW8yIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlm
IC9pMmNANzAwMGQwMDAvbWF4Nzc2MjBAM2MvcGlubXV4QDAvcGluX2dwaW8yIGlzIGJlaGluZCB0
aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9pMmNANzAwMGQw
MDAvbWF4Nzc2MjBAM2MvcGlubXV4QDAvcGluX2dwaW8yDQooWEVOKSBoYW5kbGUgL2kyY0A3MDAw
ZDAwMC9tYXg3NzYyMEAzYy9waW5tdXhAMC9waW5fZ3BpbzMNCihYRU4pIGR0X2lycV9udW1iZXI6
IGRldj0vaTJjQDcwMDBkMDAwL21heDc3NjIwQDNjL3Bpbm11eEAwL3Bpbl9ncGlvMw0KKFhFTikg
L2kyY0A3MDAwZDAwMC9tYXg3NzYyMEAzYy9waW5tdXhAMC9waW5fZ3BpbzMgcGFzc3Rocm91Z2gg
PSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL2kyY0A3MDAwZDAwMC9tYXg3NzYyMEAzYy9w
aW5tdXhAMC9waW5fZ3BpbzMgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBk
dF9pcnFfbnVtYmVyOiBkZXY9L2kyY0A3MDAwZDAwMC9tYXg3NzYyMEAzYy9waW5tdXhAMC9waW5f
Z3BpbzMNCihYRU4pIGhhbmRsZSAvaTJjQDcwMDBkMDAwL21heDc3NjIwQDNjL3Bpbm11eEAwL3Bp
bl9ncGlvMl8zDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2kyY0A3MDAwZDAwMC9tYXg3NzYy
MEAzYy9waW5tdXhAMC9waW5fZ3BpbzJfMw0KKFhFTikgL2kyY0A3MDAwZDAwMC9tYXg3NzYyMEAz
Yy9waW5tdXhAMC9waW5fZ3BpbzJfMyBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBD
aGVjayBpZiAvaTJjQDcwMDBkMDAwL21heDc3NjIwQDNjL3Bpbm11eEAwL3Bpbl9ncGlvMl8zIGlz
IGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9p
MmNANzAwMGQwMDAvbWF4Nzc2MjBAM2MvcGlubXV4QDAvcGluX2dwaW8yXzMNCihYRU4pIGhhbmRs
ZSAvaTJjQDcwMDBkMDAwL21heDc3NjIwQDNjL3Bpbm11eEAwL3Bpbl9ncGlvNA0KKFhFTikgZHRf
aXJxX251bWJlcjogZGV2PS9pMmNANzAwMGQwMDAvbWF4Nzc2MjBAM2MvcGlubXV4QDAvcGluX2dw
aW80DQooWEVOKSAvaTJjQDcwMDBkMDAwL21heDc3NjIwQDNjL3Bpbm11eEAwL3Bpbl9ncGlvNCBw
YXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvaTJjQDcwMDBkMDAwL21h
eDc3NjIwQDNjL3Bpbm11eEAwL3Bpbl9ncGlvNCBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQg
aXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vaTJjQDcwMDBkMDAwL21heDc3NjIwQDNjL3Bp
bm11eEAwL3Bpbl9ncGlvNA0KKFhFTikgaGFuZGxlIC9pMmNANzAwMGQwMDAvbWF4Nzc2MjBAM2Mv
cGlubXV4QDAvcGluX2dwaW81XzZfNw0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9pMmNANzAw
MGQwMDAvbWF4Nzc2MjBAM2MvcGlubXV4QDAvcGluX2dwaW81XzZfNw0KKFhFTikgL2kyY0A3MDAw
ZDAwMC9tYXg3NzYyMEAzYy9waW5tdXhAMC9waW5fZ3BpbzVfNl83IHBhc3N0aHJvdWdoID0gMSBu
YWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9pMmNANzAwMGQwMDAvbWF4Nzc2MjBAM2MvcGlubXV4
QDAvcGluX2dwaW81XzZfNyBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0
X2lycV9udW1iZXI6IGRldj0vaTJjQDcwMDBkMDAwL21heDc3NjIwQDNjL3Bpbm11eEAwL3Bpbl9n
cGlvNV82XzcNCihYRU4pIGhhbmRsZSAvaTJjQDcwMDBkMDAwL21heDc3NjIwQDNjL3NwbWljLWRl
ZmF1bHQtb3V0cHV0LWhpZ2gNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vaTJjQDcwMDBkMDAw
L21heDc3NjIwQDNjL3NwbWljLWRlZmF1bHQtb3V0cHV0LWhpZ2gNCihYRU4pIC9pMmNANzAwMGQw
MDAvbWF4Nzc2MjBAM2Mvc3BtaWMtZGVmYXVsdC1vdXRwdXQtaGlnaCBwYXNzdGhyb3VnaCA9IDEg
bmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvaTJjQDcwMDBkMDAwL21heDc3NjIwQDNjL3NwbWlj
LWRlZmF1bHQtb3V0cHV0LWhpZ2ggaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVO
KSBkdF9pcnFfbnVtYmVyOiBkZXY9L2kyY0A3MDAwZDAwMC9tYXg3NzYyMEAzYy9zcG1pYy1kZWZh
dWx0LW91dHB1dC1oaWdoDQooWEVOKSBoYW5kbGUgL2kyY0A3MDAwZDAwMC9tYXg3NzYyMEAzYy93
YXRjaGRvZw0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9pMmNANzAwMGQwMDAvbWF4Nzc2MjBA
M2Mvd2F0Y2hkb2cNCihYRU4pIC9pMmNANzAwMGQwMDAvbWF4Nzc2MjBAM2Mvd2F0Y2hkb2cgcGFz
c3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL2kyY0A3MDAwZDAwMC9tYXg3
NzYyMEAzYy93YXRjaGRvZyBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0
X2lycV9udW1iZXI6IGRldj0vaTJjQDcwMDBkMDAwL21heDc3NjIwQDNjL3dhdGNoZG9nDQooWEVO
KSBoYW5kbGUgL2kyY0A3MDAwZDAwMC9tYXg3NzYyMEAzYy9mcHMNCihYRU4pIGR0X2lycV9udW1i
ZXI6IGRldj0vaTJjQDcwMDBkMDAwL21heDc3NjIwQDNjL2Zwcw0KKFhFTikgL2kyY0A3MDAwZDAw
MC9tYXg3NzYyMEAzYy9mcHMgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sg
aWYgL2kyY0A3MDAwZDAwMC9tYXg3NzYyMEAzYy9mcHMgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQg
YWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2kyY0A3MDAwZDAwMC9tYXg3NzYyMEAz
Yy9mcHMNCihYRU4pIGhhbmRsZSAvaTJjQDcwMDBkMDAwL21heDc3NjIwQDNjL2Zwcy9mcHMwDQoo
WEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2kyY0A3MDAwZDAwMC9tYXg3NzYyMEAzYy9mcHMvZnBz
MA0KKFhFTikgL2kyY0A3MDAwZDAwMC9tYXg3NzYyMEAzYy9mcHMvZnBzMCBwYXNzdGhyb3VnaCA9
IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvaTJjQDcwMDBkMDAwL21heDc3NjIwQDNjL2Zw
cy9mcHMwIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJl
cjogZGV2PS9pMmNANzAwMGQwMDAvbWF4Nzc2MjBAM2MvZnBzL2ZwczANCihYRU4pIGhhbmRsZSAv
aTJjQDcwMDBkMDAwL21heDc3NjIwQDNjL2Zwcy9mcHMxDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBk
ZXY9L2kyY0A3MDAwZDAwMC9tYXg3NzYyMEAzYy9mcHMvZnBzMQ0KKFhFTikgL2kyY0A3MDAwZDAw
MC9tYXg3NzYyMEAzYy9mcHMvZnBzMSBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBD
aGVjayBpZiAvaTJjQDcwMDBkMDAwL21heDc3NjIwQDNjL2Zwcy9mcHMxIGlzIGJlaGluZCB0aGUg
SU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9pMmNANzAwMGQwMDAv
bWF4Nzc2MjBAM2MvZnBzL2ZwczENCihYRU4pIGhhbmRsZSAvaTJjQDcwMDBkMDAwL21heDc3NjIw
QDNjL2Zwcy9mcHMyDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2kyY0A3MDAwZDAwMC9tYXg3
NzYyMEAzYy9mcHMvZnBzMg0KKFhFTikgL2kyY0A3MDAwZDAwMC9tYXg3NzYyMEAzYy9mcHMvZnBz
MiBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvaTJjQDcwMDBkMDAw
L21heDc3NjIwQDNjL2Zwcy9mcHMyIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhF
TikgZHRfaXJxX251bWJlcjogZGV2PS9pMmNANzAwMGQwMDAvbWF4Nzc2MjBAM2MvZnBzL2ZwczIN
CihYRU4pIGhhbmRsZSAvaTJjQDcwMDBkMDAwL21heDc3NjIwQDNjL2JhY2t1cC1iYXR0ZXJ5DQoo
WEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2kyY0A3MDAwZDAwMC9tYXg3NzYyMEAzYy9iYWNrdXAt
YmF0dGVyeQ0KKFhFTikgL2kyY0A3MDAwZDAwMC9tYXg3NzYyMEAzYy9iYWNrdXAtYmF0dGVyeSBw
YXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvaTJjQDcwMDBkMDAwL21h
eDc3NjIwQDNjL2JhY2t1cC1iYXR0ZXJ5IGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0K
KFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9pMmNANzAwMGQwMDAvbWF4Nzc2MjBAM2MvYmFja3Vw
LWJhdHRlcnkNCihYRU4pIGhhbmRsZSAvaTJjQDcwMDBkMDAwL21heDc3NjIwQDNjL3JlZ3VsYXRv
cnMNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vaTJjQDcwMDBkMDAwL21heDc3NjIwQDNjL3Jl
Z3VsYXRvcnMNCihYRU4pIC9pMmNANzAwMGQwMDAvbWF4Nzc2MjBAM2MvcmVndWxhdG9ycyBwYXNz
dGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvaTJjQDcwMDBkMDAwL21heDc3
NjIwQDNjL3JlZ3VsYXRvcnMgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBk
dF9pcnFfbnVtYmVyOiBkZXY9L2kyY0A3MDAwZDAwMC9tYXg3NzYyMEAzYy9yZWd1bGF0b3JzDQoo
WEVOKSBoYW5kbGUgL2kyY0A3MDAwZDAwMC9tYXg3NzYyMEAzYy9yZWd1bGF0b3JzL3NkMA0KKFhF
TikgZHRfaXJxX251bWJlcjogZGV2PS9pMmNANzAwMGQwMDAvbWF4Nzc2MjBAM2MvcmVndWxhdG9y
cy9zZDANCihYRU4pIC9pMmNANzAwMGQwMDAvbWF4Nzc2MjBAM2MvcmVndWxhdG9ycy9zZDAgcGFz
c3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL2kyY0A3MDAwZDAwMC9tYXg3
NzYyMEAzYy9yZWd1bGF0b3JzL3NkMCBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihY
RU4pIGR0X2lycV9udW1iZXI6IGRldj0vaTJjQDcwMDBkMDAwL21heDc3NjIwQDNjL3JlZ3VsYXRv
cnMvc2QwDQooWEVOKSBoYW5kbGUgL2kyY0A3MDAwZDAwMC9tYXg3NzYyMEAzYy9yZWd1bGF0b3Jz
L3NkMQ0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9pMmNANzAwMGQwMDAvbWF4Nzc2MjBAM2Mv
cmVndWxhdG9ycy9zZDENCihYRU4pIC9pMmNANzAwMGQwMDAvbWF4Nzc2MjBAM2MvcmVndWxhdG9y
cy9zZDEgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL2kyY0A3MDAw
ZDAwMC9tYXg3NzYyMEAzYy9yZWd1bGF0b3JzL3NkMSBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBh
ZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vaTJjQDcwMDBkMDAwL21heDc3NjIwQDNj
L3JlZ3VsYXRvcnMvc2QxDQooWEVOKSBoYW5kbGUgL2kyY0A3MDAwZDAwMC9tYXg3NzYyMEAzYy9y
ZWd1bGF0b3JzL3NkMg0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9pMmNANzAwMGQwMDAvbWF4
Nzc2MjBAM2MvcmVndWxhdG9ycy9zZDINCihYRU4pIC9pMmNANzAwMGQwMDAvbWF4Nzc2MjBAM2Mv
cmVndWxhdG9ycy9zZDIgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYg
L2kyY0A3MDAwZDAwMC9tYXg3NzYyMEAzYy9yZWd1bGF0b3JzL3NkMiBpcyBiZWhpbmQgdGhlIElP
TU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vaTJjQDcwMDBkMDAwL21h
eDc3NjIwQDNjL3JlZ3VsYXRvcnMvc2QyDQooWEVOKSBoYW5kbGUgL2kyY0A3MDAwZDAwMC9tYXg3
NzYyMEAzYy9yZWd1bGF0b3JzL3NkMw0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9pMmNANzAw
MGQwMDAvbWF4Nzc2MjBAM2MvcmVndWxhdG9ycy9zZDMNCihYRU4pIC9pMmNANzAwMGQwMDAvbWF4
Nzc2MjBAM2MvcmVndWxhdG9ycy9zZDMgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikg
Q2hlY2sgaWYgL2kyY0A3MDAwZDAwMC9tYXg3NzYyMEAzYy9yZWd1bGF0b3JzL3NkMyBpcyBiZWhp
bmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vaTJjQDcw
MDBkMDAwL21heDc3NjIwQDNjL3JlZ3VsYXRvcnMvc2QzDQooWEVOKSBoYW5kbGUgL2kyY0A3MDAw
ZDAwMC9tYXg3NzYyMEAzYy9yZWd1bGF0b3JzL2xkbzANCihYRU4pIGR0X2lycV9udW1iZXI6IGRl
dj0vaTJjQDcwMDBkMDAwL21heDc3NjIwQDNjL3JlZ3VsYXRvcnMvbGRvMA0KKFhFTikgL2kyY0A3
MDAwZDAwMC9tYXg3NzYyMEAzYy9yZWd1bGF0b3JzL2xkbzAgcGFzc3Rocm91Z2ggPSAxIG5hZGRy
ID0gMA0KKFhFTikgQ2hlY2sgaWYgL2kyY0A3MDAwZDAwMC9tYXg3NzYyMEAzYy9yZWd1bGF0b3Jz
L2xkbzAgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVy
OiBkZXY9L2kyY0A3MDAwZDAwMC9tYXg3NzYyMEAzYy9yZWd1bGF0b3JzL2xkbzANCihYRU4pIGhh
bmRsZSAvaTJjQDcwMDBkMDAwL21heDc3NjIwQDNjL3JlZ3VsYXRvcnMvbGRvMQ0KKFhFTikgZHRf
aXJxX251bWJlcjogZGV2PS9pMmNANzAwMGQwMDAvbWF4Nzc2MjBAM2MvcmVndWxhdG9ycy9sZG8x
DQooWEVOKSAvaTJjQDcwMDBkMDAwL21heDc3NjIwQDNjL3JlZ3VsYXRvcnMvbGRvMSBwYXNzdGhy
b3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvaTJjQDcwMDBkMDAwL21heDc3NjIw
QDNjL3JlZ3VsYXRvcnMvbGRvMSBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4p
IGR0X2lycV9udW1iZXI6IGRldj0vaTJjQDcwMDBkMDAwL21heDc3NjIwQDNjL3JlZ3VsYXRvcnMv
bGRvMQ0KKFhFTikgaGFuZGxlIC9pMmNANzAwMGQwMDAvbWF4Nzc2MjBAM2MvcmVndWxhdG9ycy9s
ZG8yDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2kyY0A3MDAwZDAwMC9tYXg3NzYyMEAzYy9y
ZWd1bGF0b3JzL2xkbzINCihYRU4pIC9pMmNANzAwMGQwMDAvbWF4Nzc2MjBAM2MvcmVndWxhdG9y
cy9sZG8yIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9pMmNANzAw
MGQwMDAvbWF4Nzc2MjBAM2MvcmVndWxhdG9ycy9sZG8yIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5k
IGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9pMmNANzAwMGQwMDAvbWF4Nzc2MjBA
M2MvcmVndWxhdG9ycy9sZG8yDQooWEVOKSBoYW5kbGUgL2kyY0A3MDAwZDAwMC9tYXg3NzYyMEAz
Yy9yZWd1bGF0b3JzL2xkbzMNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vaTJjQDcwMDBkMDAw
L21heDc3NjIwQDNjL3JlZ3VsYXRvcnMvbGRvMw0KKFhFTikgL2kyY0A3MDAwZDAwMC9tYXg3NzYy
MEAzYy9yZWd1bGF0b3JzL2xkbzMgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hl
Y2sgaWYgL2kyY0A3MDAwZDAwMC9tYXg3NzYyMEAzYy9yZWd1bGF0b3JzL2xkbzMgaXMgYmVoaW5k
IHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2kyY0A3MDAw
ZDAwMC9tYXg3NzYyMEAzYy9yZWd1bGF0b3JzL2xkbzMNCihYRU4pIGhhbmRsZSAvaTJjQDcwMDBk
MDAwL21heDc3NjIwQDNjL3JlZ3VsYXRvcnMvbGRvNA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2
PS9pMmNANzAwMGQwMDAvbWF4Nzc2MjBAM2MvcmVndWxhdG9ycy9sZG80DQooWEVOKSAvaTJjQDcw
MDBkMDAwL21heDc3NjIwQDNjL3JlZ3VsYXRvcnMvbGRvNCBwYXNzdGhyb3VnaCA9IDEgbmFkZHIg
PSAwDQooWEVOKSBDaGVjayBpZiAvaTJjQDcwMDBkMDAwL21heDc3NjIwQDNjL3JlZ3VsYXRvcnMv
bGRvNCBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6
IGRldj0vaTJjQDcwMDBkMDAwL21heDc3NjIwQDNjL3JlZ3VsYXRvcnMvbGRvNA0KKFhFTikgaGFu
ZGxlIC9pMmNANzAwMGQwMDAvbWF4Nzc2MjBAM2MvcmVndWxhdG9ycy9sZG81DQooWEVOKSBkdF9p
cnFfbnVtYmVyOiBkZXY9L2kyY0A3MDAwZDAwMC9tYXg3NzYyMEAzYy9yZWd1bGF0b3JzL2xkbzUN
CihYRU4pIC9pMmNANzAwMGQwMDAvbWF4Nzc2MjBAM2MvcmVndWxhdG9ycy9sZG81IHBhc3N0aHJv
dWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9pMmNANzAwMGQwMDAvbWF4Nzc2MjBA
M2MvcmVndWxhdG9ycy9sZG81IGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikg
ZHRfaXJxX251bWJlcjogZGV2PS9pMmNANzAwMGQwMDAvbWF4Nzc2MjBAM2MvcmVndWxhdG9ycy9s
ZG81DQooWEVOKSBoYW5kbGUgL2kyY0A3MDAwZDAwMC9tYXg3NzYyMEAzYy9yZWd1bGF0b3JzL2xk
bzYNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vaTJjQDcwMDBkMDAwL21heDc3NjIwQDNjL3Jl
Z3VsYXRvcnMvbGRvNg0KKFhFTikgL2kyY0A3MDAwZDAwMC9tYXg3NzYyMEAzYy9yZWd1bGF0b3Jz
L2xkbzYgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL2kyY0A3MDAw
ZDAwMC9tYXg3NzYyMEAzYy9yZWd1bGF0b3JzL2xkbzYgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQg
YWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2kyY0A3MDAwZDAwMC9tYXg3NzYyMEAz
Yy9yZWd1bGF0b3JzL2xkbzYNCihYRU4pIGhhbmRsZSAvaTJjQDcwMDBkMDAwL21heDc3NjIwQDNj
L3JlZ3VsYXRvcnMvbGRvNw0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9pMmNANzAwMGQwMDAv
bWF4Nzc2MjBAM2MvcmVndWxhdG9ycy9sZG83DQooWEVOKSAvaTJjQDcwMDBkMDAwL21heDc3NjIw
QDNjL3JlZ3VsYXRvcnMvbGRvNyBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVj
ayBpZiAvaTJjQDcwMDBkMDAwL21heDc3NjIwQDNjL3JlZ3VsYXRvcnMvbGRvNyBpcyBiZWhpbmQg
dGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vaTJjQDcwMDBk
MDAwL21heDc3NjIwQDNjL3JlZ3VsYXRvcnMvbGRvNw0KKFhFTikgaGFuZGxlIC9pMmNANzAwMGQw
MDAvbWF4Nzc2MjBAM2MvcmVndWxhdG9ycy9sZG84DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9
L2kyY0A3MDAwZDAwMC9tYXg3NzYyMEAzYy9yZWd1bGF0b3JzL2xkbzgNCihYRU4pIC9pMmNANzAw
MGQwMDAvbWF4Nzc2MjBAM2MvcmVndWxhdG9ycy9sZG84IHBhc3N0aHJvdWdoID0gMSBuYWRkciA9
IDANCihYRU4pIENoZWNrIGlmIC9pMmNANzAwMGQwMDAvbWF4Nzc2MjBAM2MvcmVndWxhdG9ycy9s
ZG84IGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjog
ZGV2PS9pMmNANzAwMGQwMDAvbWF4Nzc2MjBAM2MvcmVndWxhdG9ycy9sZG84DQooWEVOKSBoYW5k
bGUgL2kyY0A3MDAwZDAwMC9tYXg3NzYyMEAzYy9sb3ctYmF0dGVyeS1tb25pdG9yDQooWEVOKSBk
dF9pcnFfbnVtYmVyOiBkZXY9L2kyY0A3MDAwZDAwMC9tYXg3NzYyMEAzYy9sb3ctYmF0dGVyeS1t
b25pdG9yDQooWEVOKSAvaTJjQDcwMDBkMDAwL21heDc3NjIwQDNjL2xvdy1iYXR0ZXJ5LW1vbml0
b3IgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL2kyY0A3MDAwZDAw
MC9tYXg3NzYyMEAzYy9sb3ctYmF0dGVyeS1tb25pdG9yIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5k
IGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9pMmNANzAwMGQwMDAvbWF4Nzc2MjBA
M2MvbG93LWJhdHRlcnktbW9uaXRvcg0KKFhFTikgaGFuZGxlIC9pMmNANzAwMGQxMDANCihYRU4p
IGR0X2lycV9udW1iZXI6IGRldj0vaTJjQDcwMDBkMTAwDQooWEVOKSAgdXNpbmcgJ2ludGVycnVw
dHMnIHByb3BlcnR5DQooWEVOKSAgaW50c3BlYz0wIGludGxlbj0zDQooWEVOKSAgaW50c2l6ZT0z
IGludGxlbj0zDQooWEVOKSBkdF9kZXZpY2VfZ2V0X3Jhd19pcnE6IGRldj0vaTJjQDcwMDBkMTAw
LCBpbmRleD0wDQooWEVOKSAgdXNpbmcgJ2ludGVycnVwdHMnIHByb3BlcnR5DQooWEVOKSAgaW50
c3BlYz0wIGludGxlbj0zDQooWEVOKSAgaW50c2l6ZT0zIGludGxlbj0zDQooWEVOKSBkdF9pcnFf
bWFwX3JhdzogcGFyPS9pbnRlcnJ1cHQtY29udHJvbGxlckA2MDAwNDAwMCxpbnRzcGVjPVsweDAw
MDAwMDAwIDB4MDAwMDAwM2YuLi5dLG9pbnRzaXplPTMNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBp
cGFyPS9pbnRlcnJ1cHQtY29udHJvbGxlckA2MDAwNDAwMCwgc2l6ZT0zDQooWEVOKSAgLT4gYWRk
cnNpemU9Mg0KKFhFTikgIC0+IGdvdCBpdCAhDQooWEVOKSAvaTJjQDcwMDBkMTAwIHBhc3N0aHJv
dWdoID0gMSBuYWRkciA9IDENCihYRU4pIENoZWNrIGlmIC9pMmNANzAwMGQxMDAgaXMgYmVoaW5k
IHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2kyY0A3MDAw
ZDEwMA0KKFhFTikgIHVzaW5nICdpbnRlcnJ1cHRzJyBwcm9wZXJ0eQ0KKFhFTikgIGludHNwZWM9
MCBpbnRsZW49Mw0KKFhFTikgIGludHNpemU9MyBpbnRsZW49Mw0KKFhFTikgZHRfZGV2aWNlX2dl
dF9yYXdfaXJxOiBkZXY9L2kyY0A3MDAwZDEwMCwgaW5kZXg9MA0KKFhFTikgIHVzaW5nICdpbnRl
cnJ1cHRzJyBwcm9wZXJ0eQ0KKFhFTikgIGludHNwZWM9MCBpbnRsZW49Mw0KKFhFTikgIGludHNp
emU9MyBpbnRsZW49Mw0KKFhFTikgZHRfaXJxX21hcF9yYXc6IHBhcj0vaW50ZXJydXB0LWNvbnRy
b2xsZXJANjAwMDQwMDAsaW50c3BlYz1bMHgwMDAwMDAwMCAweDAwMDAwMDNmLi4uXSxvaW50c2l6
ZT0zDQooWEVOKSBkdF9pcnFfbWFwX3JhdzogaXBhcj0vaW50ZXJydXB0LWNvbnRyb2xsZXJANjAw
MDQwMDAsIHNpemU9Mw0KKFhFTikgIC0+IGFkZHJzaXplPTINCihYRU4pICAtPiBnb3QgaXQgIQ0K
KFhFTikgaXJxIDAgbm90IChkaXJlY3RseSBvciBpbmRpcmVjdGx5KSBjb25uZWN0ZWQgdG8gcHJp
bWFyeWNvbnRyb2xsZXIuIENvbm5lY3RlZCB0byAvaW50ZXJydXB0LWNvbnRyb2xsZXJANjAwMDQw
MDANCihYRU4pIERUOiAqKiB0cmFuc2xhdGlvbiBmb3IgZGV2aWNlIC9pMmNANzAwMGQxMDAgKioN
CihYRU4pIERUOiBidXMgaXMgZGVmYXVsdCAobmE9MiwgbnM9Mikgb24gLw0KKFhFTikgRFQ6IHRy
YW5zbGF0aW5nIGFkZHJlc3M6PDM+IDAwMDAwMDAwPDM+IDcwMDBkMTAwPDM+DQooWEVOKSBEVDog
cmVhY2hlZCByb290IG5vZGUNCihYRU4pICAgLSBNTUlPOiAwMDcwMDBkMTAwIC0gMDA3MDAwZDIw
MCBQMk1UeXBlPTUNCihYRU4pIGhhbmRsZSAvc2RoY2lANzAwYjA2MDANCihYRU4pIGR0X2lycV9u
dW1iZXI6IGRldj0vc2RoY2lANzAwYjA2MDANCihYRU4pICB1c2luZyAnaW50ZXJydXB0cycgcHJv
cGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTMNCihYRU4pICBpbnRzaXplPTMgaW50bGVu
PTMNCihYRU4pIGR0X2RldmljZV9nZXRfcmF3X2lycTogZGV2PS9zZGhjaUA3MDBiMDYwMCwgaW5k
ZXg9MA0KKFhFTikgIHVzaW5nICdpbnRlcnJ1cHRzJyBwcm9wZXJ0eQ0KKFhFTikgIGludHNwZWM9
MCBpbnRsZW49Mw0KKFhFTikgIGludHNpemU9MyBpbnRsZW49Mw0KKFhFTikgZHRfaXJxX21hcF9y
YXc6IHBhcj0vaW50ZXJydXB0LWNvbnRyb2xsZXJANjAwMDQwMDAsaW50c3BlYz1bMHgwMDAwMDAw
MCAweDAwMDAwMDFmLi4uXSxvaW50c2l6ZT0zDQooWEVOKSBkdF9pcnFfbWFwX3JhdzogaXBhcj0v
aW50ZXJydXB0LWNvbnRyb2xsZXJANjAwMDQwMDAsIHNpemU9Mw0KKFhFTikgIC0+IGFkZHJzaXpl
PTINCihYRU4pICAtPiBnb3QgaXQgIQ0KKFhFTikgL3NkaGNpQDcwMGIwNjAwIHBhc3N0aHJvdWdo
ID0gMSBuYWRkciA9IDENCihYRU4pIENoZWNrIGlmIC9zZGhjaUA3MDBiMDYwMCBpcyBiZWhpbmQg
dGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vc2RoY2lANzAw
YjA2MDANCihYRU4pICB1c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVj
PTAgaW50bGVuPTMNCihYRU4pICBpbnRzaXplPTMgaW50bGVuPTMNCihYRU4pIGR0X2RldmljZV9n
ZXRfcmF3X2lycTogZGV2PS9zZGhjaUA3MDBiMDYwMCwgaW5kZXg9MA0KKFhFTikgIHVzaW5nICdp
bnRlcnJ1cHRzJyBwcm9wZXJ0eQ0KKFhFTikgIGludHNwZWM9MCBpbnRsZW49Mw0KKFhFTikgIGlu
dHNpemU9MyBpbnRsZW49Mw0KKFhFTikgZHRfaXJxX21hcF9yYXc6IHBhcj0vaW50ZXJydXB0LWNv
bnRyb2xsZXJANjAwMDQwMDAsaW50c3BlYz1bMHgwMDAwMDAwMCAweDAwMDAwMDFmLi4uXSxvaW50
c2l6ZT0zDQooWEVOKSBkdF9pcnFfbWFwX3JhdzogaXBhcj0vaW50ZXJydXB0LWNvbnRyb2xsZXJA
NjAwMDQwMDAsIHNpemU9Mw0KKFhFTikgIC0+IGFkZHJzaXplPTINCihYRU4pICAtPiBnb3QgaXQg
IQ0KKFhFTikgaXJxIDAgbm90IChkaXJlY3RseSBvciBpbmRpcmVjdGx5KSBjb25uZWN0ZWQgdG8g
cHJpbWFyeWNvbnRyb2xsZXIuIENvbm5lY3RlZCB0byAvaW50ZXJydXB0LWNvbnRyb2xsZXJANjAw
MDQwMDANCihYRU4pIERUOiAqKiB0cmFuc2xhdGlvbiBmb3IgZGV2aWNlIC9zZGhjaUA3MDBiMDYw
MCAqKg0KKFhFTikgRFQ6IGJ1cyBpcyBkZWZhdWx0IChuYT0yLCBucz0yKSBvbiAvDQooWEVOKSBE
VDogdHJhbnNsYXRpbmcgYWRkcmVzczo8Mz4gMDAwMDAwMDA8Mz4gNzAwYjA2MDA8Mz4NCihYRU4p
IERUOiByZWFjaGVkIHJvb3Qgbm9kZQ0KKFhFTikgICAtIE1NSU86IDAwNzAwYjA2MDAgLSAwMDcw
MGIwODAwIFAyTVR5cGU9NQ0KKFhFTikgaGFuZGxlIC9zZGhjaUA3MDBiMDYwMC9wcm9kLXNldHRp
bmdzDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3NkaGNpQDcwMGIwNjAwL3Byb2Qtc2V0dGlu
Z3MNCihYRU4pIC9zZGhjaUA3MDBiMDYwMC9wcm9kLXNldHRpbmdzIHBhc3N0aHJvdWdoID0gMSBu
YWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9zZGhjaUA3MDBiMDYwMC9wcm9kLXNldHRpbmdzIGlz
IGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9z
ZGhjaUA3MDBiMDYwMC9wcm9kLXNldHRpbmdzDQooWEVOKSBoYW5kbGUgL3NkaGNpQDcwMGIwNjAw
L3Byb2Qtc2V0dGluZ3MvcHJvZF9jX2RzDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3NkaGNp
QDcwMGIwNjAwL3Byb2Qtc2V0dGluZ3MvcHJvZF9jX2RzDQooWEVOKSAvc2RoY2lANzAwYjA2MDAv
cHJvZC1zZXR0aW5ncy9wcm9kX2NfZHMgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikg
Q2hlY2sgaWYgL3NkaGNpQDcwMGIwNjAwL3Byb2Qtc2V0dGluZ3MvcHJvZF9jX2RzIGlzIGJlaGlu
ZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9zZGhjaUA3
MDBiMDYwMC9wcm9kLXNldHRpbmdzL3Byb2RfY19kcw0KKFhFTikgaGFuZGxlIC9zZGhjaUA3MDBi
MDYwMC9wcm9kLXNldHRpbmdzL3Byb2RfY19ocw0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9z
ZGhjaUA3MDBiMDYwMC9wcm9kLXNldHRpbmdzL3Byb2RfY19ocw0KKFhFTikgL3NkaGNpQDcwMGIw
NjAwL3Byb2Qtc2V0dGluZ3MvcHJvZF9jX2hzIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihY
RU4pIENoZWNrIGlmIC9zZGhjaUA3MDBiMDYwMC9wcm9kLXNldHRpbmdzL3Byb2RfY19ocyBpcyBi
ZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vc2Ro
Y2lANzAwYjA2MDAvcHJvZC1zZXR0aW5ncy9wcm9kX2NfaHMNCihYRU4pIGhhbmRsZSAvc2RoY2lA
NzAwYjA2MDAvcHJvZC1zZXR0aW5ncy9wcm9kX2NfZGRyNTINCihYRU4pIGR0X2lycV9udW1iZXI6
IGRldj0vc2RoY2lANzAwYjA2MDAvcHJvZC1zZXR0aW5ncy9wcm9kX2NfZGRyNTINCihYRU4pIC9z
ZGhjaUA3MDBiMDYwMC9wcm9kLXNldHRpbmdzL3Byb2RfY19kZHI1MiBwYXNzdGhyb3VnaCA9IDEg
bmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvc2RoY2lANzAwYjA2MDAvcHJvZC1zZXR0aW5ncy9w
cm9kX2NfZGRyNTIgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFf
bnVtYmVyOiBkZXY9L3NkaGNpQDcwMGIwNjAwL3Byb2Qtc2V0dGluZ3MvcHJvZF9jX2RkcjUyDQoo
WEVOKSBoYW5kbGUgL3NkaGNpQDcwMGIwNjAwL3Byb2Qtc2V0dGluZ3MvcHJvZF9jX2hzMjAwDQoo
WEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3NkaGNpQDcwMGIwNjAwL3Byb2Qtc2V0dGluZ3MvcHJv
ZF9jX2hzMjAwDQooWEVOKSAvc2RoY2lANzAwYjA2MDAvcHJvZC1zZXR0aW5ncy9wcm9kX2NfaHMy
MDAgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3NkaGNpQDcwMGIw
NjAwL3Byb2Qtc2V0dGluZ3MvcHJvZF9jX2hzMjAwIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFk
ZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9zZGhjaUA3MDBiMDYwMC9wcm9kLXNldHRp
bmdzL3Byb2RfY19oczIwMA0KKFhFTikgaGFuZGxlIC9zZGhjaUA3MDBiMDYwMC9wcm9kLXNldHRp
bmdzL3Byb2RfY19oczQwMA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9zZGhjaUA3MDBiMDYw
MC9wcm9kLXNldHRpbmdzL3Byb2RfY19oczQwMA0KKFhFTikgL3NkaGNpQDcwMGIwNjAwL3Byb2Qt
c2V0dGluZ3MvcHJvZF9jX2hzNDAwIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENo
ZWNrIGlmIC9zZGhjaUA3MDBiMDYwMC9wcm9kLXNldHRpbmdzL3Byb2RfY19oczQwMCBpcyBiZWhp
bmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vc2RoY2lA
NzAwYjA2MDAvcHJvZC1zZXR0aW5ncy9wcm9kX2NfaHM0MDANCihYRU4pIGhhbmRsZSAvc2RoY2lA
NzAwYjA2MDAvcHJvZC1zZXR0aW5ncy9wcm9kX2NfaHM1MzMNCihYRU4pIGR0X2lycV9udW1iZXI6
IGRldj0vc2RoY2lANzAwYjA2MDAvcHJvZC1zZXR0aW5ncy9wcm9kX2NfaHM1MzMNCihYRU4pIC9z
ZGhjaUA3MDBiMDYwMC9wcm9kLXNldHRpbmdzL3Byb2RfY19oczUzMyBwYXNzdGhyb3VnaCA9IDEg
bmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvc2RoY2lANzAwYjA2MDAvcHJvZC1zZXR0aW5ncy9w
cm9kX2NfaHM1MzMgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFf
bnVtYmVyOiBkZXY9L3NkaGNpQDcwMGIwNjAwL3Byb2Qtc2V0dGluZ3MvcHJvZF9jX2hzNTMzDQoo
WEVOKSBoYW5kbGUgL3NkaGNpQDcwMGIwNjAwL3Byb2Qtc2V0dGluZ3MvcHJvZA0KKFhFTikgZHRf
aXJxX251bWJlcjogZGV2PS9zZGhjaUA3MDBiMDYwMC9wcm9kLXNldHRpbmdzL3Byb2QNCihYRU4p
IC9zZGhjaUA3MDBiMDYwMC9wcm9kLXNldHRpbmdzL3Byb2QgcGFzc3Rocm91Z2ggPSAxIG5hZGRy
ID0gMA0KKFhFTikgQ2hlY2sgaWYgL3NkaGNpQDcwMGIwNjAwL3Byb2Qtc2V0dGluZ3MvcHJvZCBp
cyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0v
c2RoY2lANzAwYjA2MDAvcHJvZC1zZXR0aW5ncy9wcm9kDQooWEVOKSBoYW5kbGUgL3NkaGNpQDcw
MGIwNDAwDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3NkaGNpQDcwMGIwNDAwDQooWEVOKSAg
dXNpbmcgJ2ludGVycnVwdHMnIHByb3BlcnR5DQooWEVOKSAgaW50c3BlYz0wIGludGxlbj0zDQoo
WEVOKSAgaW50c2l6ZT0zIGludGxlbj0zDQooWEVOKSBkdF9kZXZpY2VfZ2V0X3Jhd19pcnE6IGRl
dj0vc2RoY2lANzAwYjA0MDAsIGluZGV4PTANCihYRU4pICB1c2luZyAnaW50ZXJydXB0cycgcHJv
cGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTMNCihYRU4pICBpbnRzaXplPTMgaW50bGVu
PTMNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBwYXI9L2ludGVycnVwdC1jb250cm9sbGVyQDYwMDA0
MDAwLGludHNwZWM9WzB4MDAwMDAwMDAgMHgwMDAwMDAxMy4uLl0sb2ludHNpemU9Mw0KKFhFTikg
ZHRfaXJxX21hcF9yYXc6IGlwYXI9L2ludGVycnVwdC1jb250cm9sbGVyQDYwMDA0MDAwLCBzaXpl
PTMNCihYRU4pICAtPiBhZGRyc2l6ZT0yDQooWEVOKSAgLT4gZ290IGl0ICENCihYRU4pIC9zZGhj
aUA3MDBiMDQwMCBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAxDQooWEVOKSBDaGVjayBpZiAvc2Ro
Y2lANzAwYjA0MDAgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFf
bnVtYmVyOiBkZXY9L3NkaGNpQDcwMGIwNDAwDQooWEVOKSAgdXNpbmcgJ2ludGVycnVwdHMnIHBy
b3BlcnR5DQooWEVOKSAgaW50c3BlYz0wIGludGxlbj0zDQooWEVOKSAgaW50c2l6ZT0zIGludGxl
bj0zDQooWEVOKSBkdF9kZXZpY2VfZ2V0X3Jhd19pcnE6IGRldj0vc2RoY2lANzAwYjA0MDAsIGlu
ZGV4PTANCihYRU4pICB1c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVj
PTAgaW50bGVuPTMNCihYRU4pICBpbnRzaXplPTMgaW50bGVuPTMNCihYRU4pIGR0X2lycV9tYXBf
cmF3OiBwYXI9L2ludGVycnVwdC1jb250cm9sbGVyQDYwMDA0MDAwLGludHNwZWM9WzB4MDAwMDAw
MDAgMHgwMDAwMDAxMy4uLl0sb2ludHNpemU9Mw0KKFhFTikgZHRfaXJxX21hcF9yYXc6IGlwYXI9
L2ludGVycnVwdC1jb250cm9sbGVyQDYwMDA0MDAwLCBzaXplPTMNCihYRU4pICAtPiBhZGRyc2l6
ZT0yDQooWEVOKSAgLT4gZ290IGl0ICENCihYRU4pIGlycSAwIG5vdCAoZGlyZWN0bHkgb3IgaW5k
aXJlY3RseSkgY29ubmVjdGVkIHRvIHByaW1hcnljb250cm9sbGVyLiBDb25uZWN0ZWQgdG8gL2lu
dGVycnVwdC1jb250cm9sbGVyQDYwMDA0MDAwDQooWEVOKSBEVDogKiogdHJhbnNsYXRpb24gZm9y
IGRldmljZSAvc2RoY2lANzAwYjA0MDAgKioNCihYRU4pIERUOiBidXMgaXMgZGVmYXVsdCAobmE9
MiwgbnM9Mikgb24gLw0KKFhFTikgRFQ6IHRyYW5zbGF0aW5nIGFkZHJlc3M6PDM+IDAwMDAwMDAw
PDM+IDcwMGIwNDAwPDM+DQooWEVOKSBEVDogcmVhY2hlZCByb290IG5vZGUNCihYRU4pICAgLSBN
TUlPOiAwMDcwMGIwNDAwIC0gMDA3MDBiMDYwMCBQMk1UeXBlPTUNCihYRU4pIGhhbmRsZSAvc2Ro
Y2lANzAwYjA0MDAvcHJvZC1zZXR0aW5ncw0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9zZGhj
aUA3MDBiMDQwMC9wcm9kLXNldHRpbmdzDQooWEVOKSAvc2RoY2lANzAwYjA0MDAvcHJvZC1zZXR0
aW5ncyBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvc2RoY2lANzAw
YjA0MDAvcHJvZC1zZXR0aW5ncyBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4p
IGR0X2lycV9udW1iZXI6IGRldj0vc2RoY2lANzAwYjA0MDAvcHJvZC1zZXR0aW5ncw0KKFhFTikg
aGFuZGxlIC9zZGhjaUA3MDBiMDQwMC9wcm9kLXNldHRpbmdzL3Byb2RfY19kcw0KKFhFTikgZHRf
aXJxX251bWJlcjogZGV2PS9zZGhjaUA3MDBiMDQwMC9wcm9kLXNldHRpbmdzL3Byb2RfY19kcw0K
KFhFTikgL3NkaGNpQDcwMGIwNDAwL3Byb2Qtc2V0dGluZ3MvcHJvZF9jX2RzIHBhc3N0aHJvdWdo
ID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9zZGhjaUA3MDBiMDQwMC9wcm9kLXNldHRp
bmdzL3Byb2RfY19kcyBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2ly
cV9udW1iZXI6IGRldj0vc2RoY2lANzAwYjA0MDAvcHJvZC1zZXR0aW5ncy9wcm9kX2NfZHMNCihY
RU4pIGhhbmRsZSAvc2RoY2lANzAwYjA0MDAvcHJvZC1zZXR0aW5ncy9wcm9kX2NfaHMNCihYRU4p
IGR0X2lycV9udW1iZXI6IGRldj0vc2RoY2lANzAwYjA0MDAvcHJvZC1zZXR0aW5ncy9wcm9kX2Nf
aHMNCihYRU4pIC9zZGhjaUA3MDBiMDQwMC9wcm9kLXNldHRpbmdzL3Byb2RfY19ocyBwYXNzdGhy
b3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvc2RoY2lANzAwYjA0MDAvcHJvZC1z
ZXR0aW5ncy9wcm9kX2NfaHMgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBk
dF9pcnFfbnVtYmVyOiBkZXY9L3NkaGNpQDcwMGIwNDAwL3Byb2Qtc2V0dGluZ3MvcHJvZF9jX2hz
DQooWEVOKSBoYW5kbGUgL3NkaGNpQDcwMGIwNDAwL3Byb2Qtc2V0dGluZ3MvcHJvZF9jX3NkcjEy
DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3NkaGNpQDcwMGIwNDAwL3Byb2Qtc2V0dGluZ3Mv
cHJvZF9jX3NkcjEyDQooWEVOKSAvc2RoY2lANzAwYjA0MDAvcHJvZC1zZXR0aW5ncy9wcm9kX2Nf
c2RyMTIgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3NkaGNpQDcw
MGIwNDAwL3Byb2Qtc2V0dGluZ3MvcHJvZF9jX3NkcjEyIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5k
IGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9zZGhjaUA3MDBiMDQwMC9wcm9kLXNl
dHRpbmdzL3Byb2RfY19zZHIxMg0KKFhFTikgaGFuZGxlIC9zZGhjaUA3MDBiMDQwMC9wcm9kLXNl
dHRpbmdzL3Byb2RfY19zZHIyNQ0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9zZGhjaUA3MDBi
MDQwMC9wcm9kLXNldHRpbmdzL3Byb2RfY19zZHIyNQ0KKFhFTikgL3NkaGNpQDcwMGIwNDAwL3By
b2Qtc2V0dGluZ3MvcHJvZF9jX3NkcjI1IHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4p
IENoZWNrIGlmIC9zZGhjaUA3MDBiMDQwMC9wcm9kLXNldHRpbmdzL3Byb2RfY19zZHIyNSBpcyBi
ZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vc2Ro
Y2lANzAwYjA0MDAvcHJvZC1zZXR0aW5ncy9wcm9kX2Nfc2RyMjUNCihYRU4pIGhhbmRsZSAvc2Ro
Y2lANzAwYjA0MDAvcHJvZC1zZXR0aW5ncy9wcm9kX2Nfc2RyNTANCihYRU4pIGR0X2lycV9udW1i
ZXI6IGRldj0vc2RoY2lANzAwYjA0MDAvcHJvZC1zZXR0aW5ncy9wcm9kX2Nfc2RyNTANCihYRU4p
IC9zZGhjaUA3MDBiMDQwMC9wcm9kLXNldHRpbmdzL3Byb2RfY19zZHI1MCBwYXNzdGhyb3VnaCA9
IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvc2RoY2lANzAwYjA0MDAvcHJvZC1zZXR0aW5n
cy9wcm9kX2Nfc2RyNTAgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9p
cnFfbnVtYmVyOiBkZXY9L3NkaGNpQDcwMGIwNDAwL3Byb2Qtc2V0dGluZ3MvcHJvZF9jX3NkcjUw
DQooWEVOKSBoYW5kbGUgL3NkaGNpQDcwMGIwNDAwL3Byb2Qtc2V0dGluZ3MvcHJvZF9jX3NkcjEw
NA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9zZGhjaUA3MDBiMDQwMC9wcm9kLXNldHRpbmdz
L3Byb2RfY19zZHIxMDQNCihYRU4pIC9zZGhjaUA3MDBiMDQwMC9wcm9kLXNldHRpbmdzL3Byb2Rf
Y19zZHIxMDQgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3NkaGNp
QDcwMGIwNDAwL3Byb2Qtc2V0dGluZ3MvcHJvZF9jX3NkcjEwNCBpcyBiZWhpbmQgdGhlIElPTU1V
IGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vc2RoY2lANzAwYjA0MDAvcHJv
ZC1zZXR0aW5ncy9wcm9kX2Nfc2RyMTA0DQooWEVOKSBoYW5kbGUgL3NkaGNpQDcwMGIwNDAwL3By
b2Qtc2V0dGluZ3MvcHJvZF9jX2RkcjUyDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3NkaGNp
QDcwMGIwNDAwL3Byb2Qtc2V0dGluZ3MvcHJvZF9jX2RkcjUyDQooWEVOKSAvc2RoY2lANzAwYjA0
MDAvcHJvZC1zZXR0aW5ncy9wcm9kX2NfZGRyNTIgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0K
KFhFTikgQ2hlY2sgaWYgL3NkaGNpQDcwMGIwNDAwL3Byb2Qtc2V0dGluZ3MvcHJvZF9jX2RkcjUy
IGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2
PS9zZGhjaUA3MDBiMDQwMC9wcm9kLXNldHRpbmdzL3Byb2RfY19kZHI1Mg0KKFhFTikgaGFuZGxl
IC9zZGhjaUA3MDBiMDQwMC9wcm9kLXNldHRpbmdzL3Byb2QNCihYRU4pIGR0X2lycV9udW1iZXI6
IGRldj0vc2RoY2lANzAwYjA0MDAvcHJvZC1zZXR0aW5ncy9wcm9kDQooWEVOKSAvc2RoY2lANzAw
YjA0MDAvcHJvZC1zZXR0aW5ncy9wcm9kIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4p
IENoZWNrIGlmIC9zZGhjaUA3MDBiMDQwMC9wcm9kLXNldHRpbmdzL3Byb2QgaXMgYmVoaW5kIHRo
ZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3NkaGNpQDcwMGIw
NDAwL3Byb2Qtc2V0dGluZ3MvcHJvZA0KKFhFTikgaGFuZGxlIC9zZGhjaUA3MDBiMDIwMA0KKFhF
TikgZHRfaXJxX251bWJlcjogZGV2PS9zZGhjaUA3MDBiMDIwMA0KKFhFTikgIHVzaW5nICdpbnRl
cnJ1cHRzJyBwcm9wZXJ0eQ0KKFhFTikgIGludHNwZWM9MCBpbnRsZW49Mw0KKFhFTikgIGludHNp
emU9MyBpbnRsZW49Mw0KKFhFTikgZHRfZGV2aWNlX2dldF9yYXdfaXJxOiBkZXY9L3NkaGNpQDcw
MGIwMjAwLCBpbmRleD0wDQooWEVOKSAgdXNpbmcgJ2ludGVycnVwdHMnIHByb3BlcnR5DQooWEVO
KSAgaW50c3BlYz0wIGludGxlbj0zDQooWEVOKSAgaW50c2l6ZT0zIGludGxlbj0zDQooWEVOKSBk
dF9pcnFfbWFwX3JhdzogcGFyPS9pbnRlcnJ1cHQtY29udHJvbGxlckA2MDAwNDAwMCxpbnRzcGVj
PVsweDAwMDAwMDAwIDB4MDAwMDAwMGYuLi5dLG9pbnRzaXplPTMNCihYRU4pIGR0X2lycV9tYXBf
cmF3OiBpcGFyPS9pbnRlcnJ1cHQtY29udHJvbGxlckA2MDAwNDAwMCwgc2l6ZT0zDQooWEVOKSAg
LT4gYWRkcnNpemU9Mg0KKFhFTikgIC0+IGdvdCBpdCAhDQooWEVOKSAvc2RoY2lANzAwYjAyMDAg
cGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMQ0KKFhFTikgQ2hlY2sgaWYgL3NkaGNpQDcwMGIwMjAw
IGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2
PS9zZGhjaUA3MDBiMDIwMA0KKFhFTikgIHVzaW5nICdpbnRlcnJ1cHRzJyBwcm9wZXJ0eQ0KKFhF
TikgIGludHNwZWM9MCBpbnRsZW49Mw0KKFhFTikgIGludHNpemU9MyBpbnRsZW49Mw0KKFhFTikg
ZHRfZGV2aWNlX2dldF9yYXdfaXJxOiBkZXY9L3NkaGNpQDcwMGIwMjAwLCBpbmRleD0wDQooWEVO
KSAgdXNpbmcgJ2ludGVycnVwdHMnIHByb3BlcnR5DQooWEVOKSAgaW50c3BlYz0wIGludGxlbj0z
DQooWEVOKSAgaW50c2l6ZT0zIGludGxlbj0zDQooWEVOKSBkdF9pcnFfbWFwX3JhdzogcGFyPS9p
bnRlcnJ1cHQtY29udHJvbGxlckA2MDAwNDAwMCxpbnRzcGVjPVsweDAwMDAwMDAwIDB4MDAwMDAw
MGYuLi5dLG9pbnRzaXplPTMNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBpcGFyPS9pbnRlcnJ1cHQt
Y29udHJvbGxlckA2MDAwNDAwMCwgc2l6ZT0zDQooWEVOKSAgLT4gYWRkcnNpemU9Mg0KKFhFTikg
IC0+IGdvdCBpdCAhDQooWEVOKSBpcnEgMCBub3QgKGRpcmVjdGx5IG9yIGluZGlyZWN0bHkpIGNv
bm5lY3RlZCB0byBwcmltYXJ5Y29udHJvbGxlci4gQ29ubmVjdGVkIHRvIC9pbnRlcnJ1cHQtY29u
dHJvbGxlckA2MDAwNDAwMA0KKFhFTikgRFQ6ICoqIHRyYW5zbGF0aW9uIGZvciBkZXZpY2UgL3Nk
aGNpQDcwMGIwMjAwICoqDQooWEVOKSBEVDogYnVzIGlzIGRlZmF1bHQgKG5hPTIsIG5zPTIpIG9u
IC8NCihYRU4pIERUOiB0cmFuc2xhdGluZyBhZGRyZXNzOjwzPiAwMDAwMDAwMDwzPiA3MDBiMDIw
MDwzPg0KKFhFTikgRFQ6IHJlYWNoZWQgcm9vdCBub2RlDQooWEVOKSAgIC0gTU1JTzogMDA3MDBi
MDIwMCAtIDAwNzAwYjA0MDAgUDJNVHlwZT01DQooWEVOKSBoYW5kbGUgL3NkaGNpQDcwMGIwMjAw
L3Byb2Qtc2V0dGluZ3MNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vc2RoY2lANzAwYjAyMDAv
cHJvZC1zZXR0aW5ncw0KKFhFTikgL3NkaGNpQDcwMGIwMjAwL3Byb2Qtc2V0dGluZ3MgcGFzc3Ro
cm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3NkaGNpQDcwMGIwMjAwL3Byb2Qt
c2V0dGluZ3MgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVt
YmVyOiBkZXY9L3NkaGNpQDcwMGIwMjAwL3Byb2Qtc2V0dGluZ3MNCihYRU4pIGhhbmRsZSAvc2Ro
Y2lANzAwYjAyMDAvcHJvZC1zZXR0aW5ncy9wcm9kX2NfZHMNCihYRU4pIGR0X2lycV9udW1iZXI6
IGRldj0vc2RoY2lANzAwYjAyMDAvcHJvZC1zZXR0aW5ncy9wcm9kX2NfZHMNCihYRU4pIC9zZGhj
aUA3MDBiMDIwMC9wcm9kLXNldHRpbmdzL3Byb2RfY19kcyBwYXNzdGhyb3VnaCA9IDEgbmFkZHIg
PSAwDQooWEVOKSBDaGVjayBpZiAvc2RoY2lANzAwYjAyMDAvcHJvZC1zZXR0aW5ncy9wcm9kX2Nf
ZHMgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBk
ZXY9L3NkaGNpQDcwMGIwMjAwL3Byb2Qtc2V0dGluZ3MvcHJvZF9jX2RzDQooWEVOKSBoYW5kbGUg
L3NkaGNpQDcwMGIwMjAwL3Byb2Qtc2V0dGluZ3MvcHJvZF9jX2hzDQooWEVOKSBkdF9pcnFfbnVt
YmVyOiBkZXY9L3NkaGNpQDcwMGIwMjAwL3Byb2Qtc2V0dGluZ3MvcHJvZF9jX2hzDQooWEVOKSAv
c2RoY2lANzAwYjAyMDAvcHJvZC1zZXR0aW5ncy9wcm9kX2NfaHMgcGFzc3Rocm91Z2ggPSAxIG5h
ZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3NkaGNpQDcwMGIwMjAwL3Byb2Qtc2V0dGluZ3MvcHJv
ZF9jX2hzIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJl
cjogZGV2PS9zZGhjaUA3MDBiMDIwMC9wcm9kLXNldHRpbmdzL3Byb2RfY19ocw0KKFhFTikgaGFu
ZGxlIC9zZGhjaUA3MDBiMDIwMC9wcm9kLXNldHRpbmdzL3Byb2RfY19zZHIxMg0KKFhFTikgZHRf
aXJxX251bWJlcjogZGV2PS9zZGhjaUA3MDBiMDIwMC9wcm9kLXNldHRpbmdzL3Byb2RfY19zZHIx
Mg0KKFhFTikgL3NkaGNpQDcwMGIwMjAwL3Byb2Qtc2V0dGluZ3MvcHJvZF9jX3NkcjEyIHBhc3N0
aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9zZGhjaUA3MDBiMDIwMC9wcm9k
LXNldHRpbmdzL3Byb2RfY19zZHIxMiBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihY
RU4pIGR0X2lycV9udW1iZXI6IGRldj0vc2RoY2lANzAwYjAyMDAvcHJvZC1zZXR0aW5ncy9wcm9k
X2Nfc2RyMTINCihYRU4pIGhhbmRsZSAvc2RoY2lANzAwYjAyMDAvcHJvZC1zZXR0aW5ncy9wcm9k
X2Nfc2RyMjUNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vc2RoY2lANzAwYjAyMDAvcHJvZC1z
ZXR0aW5ncy9wcm9kX2Nfc2RyMjUNCihYRU4pIC9zZGhjaUA3MDBiMDIwMC9wcm9kLXNldHRpbmdz
L3Byb2RfY19zZHIyNSBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAv
c2RoY2lANzAwYjAyMDAvcHJvZC1zZXR0aW5ncy9wcm9kX2Nfc2RyMjUgaXMgYmVoaW5kIHRoZSBJ
T01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3NkaGNpQDcwMGIwMjAw
L3Byb2Qtc2V0dGluZ3MvcHJvZF9jX3NkcjI1DQooWEVOKSBoYW5kbGUgL3NkaGNpQDcwMGIwMjAw
L3Byb2Qtc2V0dGluZ3MvcHJvZF9jX3NkcjUwDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3Nk
aGNpQDcwMGIwMjAwL3Byb2Qtc2V0dGluZ3MvcHJvZF9jX3NkcjUwDQooWEVOKSAvc2RoY2lANzAw
YjAyMDAvcHJvZC1zZXR0aW5ncy9wcm9kX2Nfc2RyNTAgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0g
MA0KKFhFTikgQ2hlY2sgaWYgL3NkaGNpQDcwMGIwMjAwL3Byb2Qtc2V0dGluZ3MvcHJvZF9jX3Nk
cjUwIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjog
ZGV2PS9zZGhjaUA3MDBiMDIwMC9wcm9kLXNldHRpbmdzL3Byb2RfY19zZHI1MA0KKFhFTikgaGFu
ZGxlIC9zZGhjaUA3MDBiMDIwMC9wcm9kLXNldHRpbmdzL3Byb2RfY19zZHIxMDQNCihYRU4pIGR0
X2lycV9udW1iZXI6IGRldj0vc2RoY2lANzAwYjAyMDAvcHJvZC1zZXR0aW5ncy9wcm9kX2Nfc2Ry
MTA0DQooWEVOKSAvc2RoY2lANzAwYjAyMDAvcHJvZC1zZXR0aW5ncy9wcm9kX2Nfc2RyMTA0IHBh
c3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9zZGhjaUA3MDBiMDIwMC9w
cm9kLXNldHRpbmdzL3Byb2RfY19zZHIxMDQgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0
DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3NkaGNpQDcwMGIwMjAwL3Byb2Qtc2V0dGluZ3Mv
cHJvZF9jX3NkcjEwNA0KKFhFTikgaGFuZGxlIC9zZGhjaUA3MDBiMDIwMC9wcm9kLXNldHRpbmdz
L3Byb2RfY19kZHI1Mg0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9zZGhjaUA3MDBiMDIwMC9w
cm9kLXNldHRpbmdzL3Byb2RfY19kZHI1Mg0KKFhFTikgL3NkaGNpQDcwMGIwMjAwL3Byb2Qtc2V0
dGluZ3MvcHJvZF9jX2RkcjUyIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNr
IGlmIC9zZGhjaUA3MDBiMDIwMC9wcm9kLXNldHRpbmdzL3Byb2RfY19kZHI1MiBpcyBiZWhpbmQg
dGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vc2RoY2lANzAw
YjAyMDAvcHJvZC1zZXR0aW5ncy9wcm9kX2NfZGRyNTINCihYRU4pIGhhbmRsZSAvc2RoY2lANzAw
YjAyMDAvcHJvZC1zZXR0aW5ncy9wcm9kX2NfaHMyMDANCihYRU4pIGR0X2lycV9udW1iZXI6IGRl
dj0vc2RoY2lANzAwYjAyMDAvcHJvZC1zZXR0aW5ncy9wcm9kX2NfaHMyMDANCihYRU4pIC9zZGhj
aUA3MDBiMDIwMC9wcm9kLXNldHRpbmdzL3Byb2RfY19oczIwMCBwYXNzdGhyb3VnaCA9IDEgbmFk
ZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvc2RoY2lANzAwYjAyMDAvcHJvZC1zZXR0aW5ncy9wcm9k
X2NfaHMyMDAgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVt
YmVyOiBkZXY9L3NkaGNpQDcwMGIwMjAwL3Byb2Qtc2V0dGluZ3MvcHJvZF9jX2hzMjAwDQooWEVO
KSBoYW5kbGUgL3NkaGNpQDcwMGIwMjAwL3Byb2Qtc2V0dGluZ3MvcHJvZF9jX2hzNDAwDQooWEVO
KSBkdF9pcnFfbnVtYmVyOiBkZXY9L3NkaGNpQDcwMGIwMjAwL3Byb2Qtc2V0dGluZ3MvcHJvZF9j
X2hzNDAwDQooWEVOKSAvc2RoY2lANzAwYjAyMDAvcHJvZC1zZXR0aW5ncy9wcm9kX2NfaHM0MDAg
cGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3NkaGNpQDcwMGIwMjAw
L3Byb2Qtc2V0dGluZ3MvcHJvZF9jX2hzNDAwIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBp
dA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9zZGhjaUA3MDBiMDIwMC9wcm9kLXNldHRpbmdz
L3Byb2RfY19oczQwMA0KKFhFTikgaGFuZGxlIC9zZGhjaUA3MDBiMDIwMC9wcm9kLXNldHRpbmdz
L3Byb2RfY19oczUzMw0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9zZGhjaUA3MDBiMDIwMC9w
cm9kLXNldHRpbmdzL3Byb2RfY19oczUzMw0KKFhFTikgL3NkaGNpQDcwMGIwMjAwL3Byb2Qtc2V0
dGluZ3MvcHJvZF9jX2hzNTMzIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNr
IGlmIC9zZGhjaUA3MDBiMDIwMC9wcm9kLXNldHRpbmdzL3Byb2RfY19oczUzMyBpcyBiZWhpbmQg
dGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vc2RoY2lANzAw
YjAyMDAvcHJvZC1zZXR0aW5ncy9wcm9kX2NfaHM1MzMNCihYRU4pIGhhbmRsZSAvc2RoY2lANzAw
YjAyMDAvcHJvZC1zZXR0aW5ncy9wcm9kDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3NkaGNp
QDcwMGIwMjAwL3Byb2Qtc2V0dGluZ3MvcHJvZA0KKFhFTikgL3NkaGNpQDcwMGIwMjAwL3Byb2Qt
c2V0dGluZ3MvcHJvZCBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAv
c2RoY2lANzAwYjAyMDAvcHJvZC1zZXR0aW5ncy9wcm9kIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5k
IGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9zZGhjaUA3MDBiMDIwMC9wcm9kLXNl
dHRpbmdzL3Byb2QNCihYRU4pIGhhbmRsZSAvc2RoY2lANzAwYjAwMDANCihYRU4pIGR0X2lycV9u
dW1iZXI6IGRldj0vc2RoY2lANzAwYjAwMDANCihYRU4pICB1c2luZyAnaW50ZXJydXB0cycgcHJv
cGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTMNCihYRU4pICBpbnRzaXplPTMgaW50bGVu
PTMNCihYRU4pIGR0X2RldmljZV9nZXRfcmF3X2lycTogZGV2PS9zZGhjaUA3MDBiMDAwMCwgaW5k
ZXg9MA0KKFhFTikgIHVzaW5nICdpbnRlcnJ1cHRzJyBwcm9wZXJ0eQ0KKFhFTikgIGludHNwZWM9
MCBpbnRsZW49Mw0KKFhFTikgIGludHNpemU9MyBpbnRsZW49Mw0KKFhFTikgZHRfaXJxX21hcF9y
YXc6IHBhcj0vaW50ZXJydXB0LWNvbnRyb2xsZXJANjAwMDQwMDAsaW50c3BlYz1bMHgwMDAwMDAw
MCAweDAwMDAwMDBlLi4uXSxvaW50c2l6ZT0zDQooWEVOKSBkdF9pcnFfbWFwX3JhdzogaXBhcj0v
aW50ZXJydXB0LWNvbnRyb2xsZXJANjAwMDQwMDAsIHNpemU9Mw0KKFhFTikgIC0+IGFkZHJzaXpl
PTINCihYRU4pICAtPiBnb3QgaXQgIQ0KKFhFTikgL3NkaGNpQDcwMGIwMDAwIHBhc3N0aHJvdWdo
ID0gMSBuYWRkciA9IDENCihYRU4pIENoZWNrIGlmIC9zZGhjaUA3MDBiMDAwMCBpcyBiZWhpbmQg
dGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vc2RoY2lANzAw
YjAwMDANCihYRU4pICB1c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVj
PTAgaW50bGVuPTMNCihYRU4pICBpbnRzaXplPTMgaW50bGVuPTMNCihYRU4pIGR0X2RldmljZV9n
ZXRfcmF3X2lycTogZGV2PS9zZGhjaUA3MDBiMDAwMCwgaW5kZXg9MA0KKFhFTikgIHVzaW5nICdp
bnRlcnJ1cHRzJyBwcm9wZXJ0eQ0KKFhFTikgIGludHNwZWM9MCBpbnRsZW49Mw0KKFhFTikgIGlu
dHNpemU9MyBpbnRsZW49Mw0KKFhFTikgZHRfaXJxX21hcF9yYXc6IHBhcj0vaW50ZXJydXB0LWNv
bnRyb2xsZXJANjAwMDQwMDAsaW50c3BlYz1bMHgwMDAwMDAwMCAweDAwMDAwMDBlLi4uXSxvaW50
c2l6ZT0zDQooWEVOKSBkdF9pcnFfbWFwX3JhdzogaXBhcj0vaW50ZXJydXB0LWNvbnRyb2xsZXJA
NjAwMDQwMDAsIHNpemU9Mw0KKFhFTikgIC0+IGFkZHJzaXplPTINCihYRU4pICAtPiBnb3QgaXQg
IQ0KKFhFTikgaXJxIDAgbm90IChkaXJlY3RseSBvciBpbmRpcmVjdGx5KSBjb25uZWN0ZWQgdG8g
cHJpbWFyeWNvbnRyb2xsZXIuIENvbm5lY3RlZCB0byAvaW50ZXJydXB0LWNvbnRyb2xsZXJANjAw
MDQwMDANCihYRU4pIERUOiAqKiB0cmFuc2xhdGlvbiBmb3IgZGV2aWNlIC9zZGhjaUA3MDBiMDAw
MCAqKg0KKFhFTikgRFQ6IGJ1cyBpcyBkZWZhdWx0IChuYT0yLCBucz0yKSBvbiAvDQooWEVOKSBE
VDogdHJhbnNsYXRpbmcgYWRkcmVzczo8Mz4gMDAwMDAwMDA8Mz4gNzAwYjAwMDA8Mz4NCihYRU4p
IERUOiByZWFjaGVkIHJvb3Qgbm9kZQ0KKFhFTikgICAtIE1NSU86IDAwNzAwYjAwMDAgLSAwMDcw
MGIwMjAwIFAyTVR5cGU9NQ0KKFhFTikgaGFuZGxlIC9zZGhjaUA3MDBiMDAwMC9wcm9kLXNldHRp
bmdzDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3NkaGNpQDcwMGIwMDAwL3Byb2Qtc2V0dGlu
Z3MNCihYRU4pIC9zZGhjaUA3MDBiMDAwMC9wcm9kLXNldHRpbmdzIHBhc3N0aHJvdWdoID0gMSBu
YWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9zZGhjaUA3MDBiMDAwMC9wcm9kLXNldHRpbmdzIGlz
IGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9z
ZGhjaUA3MDBiMDAwMC9wcm9kLXNldHRpbmdzDQooWEVOKSBoYW5kbGUgL3NkaGNpQDcwMGIwMDAw
L3Byb2Qtc2V0dGluZ3MvcHJvZF9jX2RzDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3NkaGNp
QDcwMGIwMDAwL3Byb2Qtc2V0dGluZ3MvcHJvZF9jX2RzDQooWEVOKSAvc2RoY2lANzAwYjAwMDAv
cHJvZC1zZXR0aW5ncy9wcm9kX2NfZHMgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikg
Q2hlY2sgaWYgL3NkaGNpQDcwMGIwMDAwL3Byb2Qtc2V0dGluZ3MvcHJvZF9jX2RzIGlzIGJlaGlu
ZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9zZGhjaUA3
MDBiMDAwMC9wcm9kLXNldHRpbmdzL3Byb2RfY19kcw0KKFhFTikgaGFuZGxlIC9zZGhjaUA3MDBi
MDAwMC9wcm9kLXNldHRpbmdzL3Byb2RfY19ocw0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9z
ZGhjaUA3MDBiMDAwMC9wcm9kLXNldHRpbmdzL3Byb2RfY19ocw0KKFhFTikgL3NkaGNpQDcwMGIw
MDAwL3Byb2Qtc2V0dGluZ3MvcHJvZF9jX2hzIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihY
RU4pIENoZWNrIGlmIC9zZGhjaUA3MDBiMDAwMC9wcm9kLXNldHRpbmdzL3Byb2RfY19ocyBpcyBi
ZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vc2Ro
Y2lANzAwYjAwMDAvcHJvZC1zZXR0aW5ncy9wcm9kX2NfaHMNCihYRU4pIGhhbmRsZSAvc2RoY2lA
NzAwYjAwMDAvcHJvZC1zZXR0aW5ncy9wcm9kX2Nfc2RyMTINCihYRU4pIGR0X2lycV9udW1iZXI6
IGRldj0vc2RoY2lANzAwYjAwMDAvcHJvZC1zZXR0aW5ncy9wcm9kX2Nfc2RyMTINCihYRU4pIC9z
ZGhjaUA3MDBiMDAwMC9wcm9kLXNldHRpbmdzL3Byb2RfY19zZHIxMiBwYXNzdGhyb3VnaCA9IDEg
bmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvc2RoY2lANzAwYjAwMDAvcHJvZC1zZXR0aW5ncy9w
cm9kX2Nfc2RyMTIgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFf
bnVtYmVyOiBkZXY9L3NkaGNpQDcwMGIwMDAwL3Byb2Qtc2V0dGluZ3MvcHJvZF9jX3NkcjEyDQoo
WEVOKSBoYW5kbGUgL3NkaGNpQDcwMGIwMDAwL3Byb2Qtc2V0dGluZ3MvcHJvZF9jX3NkcjI1DQoo
WEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3NkaGNpQDcwMGIwMDAwL3Byb2Qtc2V0dGluZ3MvcHJv
ZF9jX3NkcjI1DQooWEVOKSAvc2RoY2lANzAwYjAwMDAvcHJvZC1zZXR0aW5ncy9wcm9kX2Nfc2Ry
MjUgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3NkaGNpQDcwMGIw
MDAwL3Byb2Qtc2V0dGluZ3MvcHJvZF9jX3NkcjI1IGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFk
ZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9zZGhjaUA3MDBiMDAwMC9wcm9kLXNldHRp
bmdzL3Byb2RfY19zZHIyNQ0KKFhFTikgaGFuZGxlIC9zZGhjaUA3MDBiMDAwMC9wcm9kLXNldHRp
bmdzL3Byb2RfY19zZHI1MA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9zZGhjaUA3MDBiMDAw
MC9wcm9kLXNldHRpbmdzL3Byb2RfY19zZHI1MA0KKFhFTikgL3NkaGNpQDcwMGIwMDAwL3Byb2Qt
c2V0dGluZ3MvcHJvZF9jX3NkcjUwIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENo
ZWNrIGlmIC9zZGhjaUA3MDBiMDAwMC9wcm9kLXNldHRpbmdzL3Byb2RfY19zZHI1MCBpcyBiZWhp
bmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vc2RoY2lA
NzAwYjAwMDAvcHJvZC1zZXR0aW5ncy9wcm9kX2Nfc2RyNTANCihYRU4pIGhhbmRsZSAvc2RoY2lA
NzAwYjAwMDAvcHJvZC1zZXR0aW5ncy9wcm9kX2Nfc2RyMTA0DQooWEVOKSBkdF9pcnFfbnVtYmVy
OiBkZXY9L3NkaGNpQDcwMGIwMDAwL3Byb2Qtc2V0dGluZ3MvcHJvZF9jX3NkcjEwNA0KKFhFTikg
L3NkaGNpQDcwMGIwMDAwL3Byb2Qtc2V0dGluZ3MvcHJvZF9jX3NkcjEwNCBwYXNzdGhyb3VnaCA9
IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvc2RoY2lANzAwYjAwMDAvcHJvZC1zZXR0aW5n
cy9wcm9kX2Nfc2RyMTA0IGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRf
aXJxX251bWJlcjogZGV2PS9zZGhjaUA3MDBiMDAwMC9wcm9kLXNldHRpbmdzL3Byb2RfY19zZHIx
MDQNCihYRU4pIGhhbmRsZSAvc2RoY2lANzAwYjAwMDAvcHJvZC1zZXR0aW5ncy9wcm9kX2NfZGRy
NTINCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vc2RoY2lANzAwYjAwMDAvcHJvZC1zZXR0aW5n
cy9wcm9kX2NfZGRyNTINCihYRU4pIC9zZGhjaUA3MDBiMDAwMC9wcm9kLXNldHRpbmdzL3Byb2Rf
Y19kZHI1MiBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvc2RoY2lA
NzAwYjAwMDAvcHJvZC1zZXR0aW5ncy9wcm9kX2NfZGRyNTIgaXMgYmVoaW5kIHRoZSBJT01NVSBh
bmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3NkaGNpQDcwMGIwMDAwL3Byb2Qt
c2V0dGluZ3MvcHJvZF9jX2RkcjUyDQooWEVOKSBoYW5kbGUgL3NkaGNpQDcwMGIwMDAwL3Byb2Qt
c2V0dGluZ3MvcHJvZA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9zZGhjaUA3MDBiMDAwMC9w
cm9kLXNldHRpbmdzL3Byb2QNCihYRU4pIC9zZGhjaUA3MDBiMDAwMC9wcm9kLXNldHRpbmdzL3By
b2QgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3NkaGNpQDcwMGIw
MDAwL3Byb2Qtc2V0dGluZ3MvcHJvZCBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihY
RU4pIGR0X2lycV9udW1iZXI6IGRldj0vc2RoY2lANzAwYjAwMDAvcHJvZC1zZXR0aW5ncy9wcm9k
DQooWEVOKSBoYW5kbGUgL2VmdXNlQDcwMDBmODAwDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9
L2VmdXNlQDcwMDBmODAwDQooWEVOKSAvZWZ1c2VANzAwMGY4MDAgcGFzc3Rocm91Z2ggPSAxIG5h
ZGRyID0gMQ0KKFhFTikgQ2hlY2sgaWYgL2VmdXNlQDcwMDBmODAwIGlzIGJlaGluZCB0aGUgSU9N
TVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9lZnVzZUA3MDAwZjgwMA0K
KFhFTikgRFQ6ICoqIHRyYW5zbGF0aW9uIGZvciBkZXZpY2UgL2VmdXNlQDcwMDBmODAwICoqDQoo
WEVOKSBEVDogYnVzIGlzIGRlZmF1bHQgKG5hPTIsIG5zPTIpIG9uIC8NCihYRU4pIERUOiB0cmFu
c2xhdGluZyBhZGRyZXNzOjwzPiAwMDAwMDAwMDwzPiA3MDAwZjgwMDwzPg0KKFhFTikgRFQ6IHJl
YWNoZWQgcm9vdCBub2RlDQooWEVOKSAgIC0gTU1JTzogMDA3MDAwZjgwMCAtIDAwNzAwMGZjMDAg
UDJNVHlwZT01DQooWEVOKSBoYW5kbGUgL2VmdXNlQDcwMDBmODAwL2VmdXNlLWJ1cm4NCihYRU4p
IGR0X2lycV9udW1iZXI6IGRldj0vZWZ1c2VANzAwMGY4MDAvZWZ1c2UtYnVybg0KKFhFTikgL2Vm
dXNlQDcwMDBmODAwL2VmdXNlLWJ1cm4gcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikg
Q2hlY2sgaWYgL2VmdXNlQDcwMDBmODAwL2VmdXNlLWJ1cm4gaXMgYmVoaW5kIHRoZSBJT01NVSBh
bmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2VmdXNlQDcwMDBmODAwL2VmdXNl
LWJ1cm4NCihYRU4pIGhhbmRsZSAva2Z1c2VANzAwMGZjMDANCihYRU4pIGR0X2lycV9udW1iZXI6
IGRldj0va2Z1c2VANzAwMGZjMDANCihYRU4pIC9rZnVzZUA3MDAwZmMwMCBwYXNzdGhyb3VnaCA9
IDEgbmFkZHIgPSAxDQooWEVOKSBDaGVjayBpZiAva2Z1c2VANzAwMGZjMDAgaXMgYmVoaW5kIHRo
ZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2tmdXNlQDcwMDBm
YzAwDQooWEVOKSBEVDogKiogdHJhbnNsYXRpb24gZm9yIGRldmljZSAva2Z1c2VANzAwMGZjMDAg
KioNCihYRU4pIERUOiBidXMgaXMgZGVmYXVsdCAobmE9MiwgbnM9Mikgb24gLw0KKFhFTikgRFQ6
IHRyYW5zbGF0aW5nIGFkZHJlc3M6PDM+IDAwMDAwMDAwPDM+IDcwMDBmYzAwPDM+DQooWEVOKSBE
VDogcmVhY2hlZCByb290IG5vZGUNCihYRU4pICAgLSBNTUlPOiAwMDcwMDBmYzAwIC0gMDA3MDAx
MDAwMCBQMk1UeXBlPTUNCihYRU4pIGhhbmRsZSAvcG1jLWlvcG93ZXINCihYRU4pIGR0X2lycV9u
dW1iZXI6IGRldj0vcG1jLWlvcG93ZXINCihYRU4pIC9wbWMtaW9wb3dlciBwYXNzdGhyb3VnaCA9
IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvcG1jLWlvcG93ZXIgaXMgYmVoaW5kIHRoZSBJ
T01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3BtYy1pb3Bvd2VyDQoo
WEVOKSBoYW5kbGUgL2R0dkA3MDAwYzMwMA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9kdHZA
NzAwMGMzMDANCihYRU4pIC9kdHZANzAwMGMzMDAgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMQ0K
KFhFTikgQ2hlY2sgaWYgL2R0dkA3MDAwYzMwMCBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQg
aXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vZHR2QDcwMDBjMzAwDQooWEVOKSBEVDogKiog
dHJhbnNsYXRpb24gZm9yIGRldmljZSAvZHR2QDcwMDBjMzAwICoqDQooWEVOKSBEVDogYnVzIGlz
IGRlZmF1bHQgKG5hPTIsIG5zPTIpIG9uIC8NCihYRU4pIERUOiB0cmFuc2xhdGluZyBhZGRyZXNz
OjwzPiAwMDAwMDAwMDwzPiA3MDAwYzMwMDwzPg0KKFhFTikgRFQ6IHJlYWNoZWQgcm9vdCBub2Rl
DQooWEVOKSAgIC0gTU1JTzogMDA3MDAwYzMwMCAtIDAwNzAwMGM0MDAgUDJNVHlwZT01DQooWEVO
KSBoYW5kbGUgL3h1ZGNANzAwZDAwMDANCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0veHVkY0A3
MDBkMDAwMA0KKFhFTikgIHVzaW5nICdpbnRlcnJ1cHRzJyBwcm9wZXJ0eQ0KKFhFTikgIGludHNw
ZWM9MCBpbnRsZW49Mw0KKFhFTikgIGludHNpemU9MyBpbnRsZW49Mw0KKFhFTikgZHRfZGV2aWNl
X2dldF9yYXdfaXJxOiBkZXY9L3h1ZGNANzAwZDAwMDAsIGluZGV4PTANCihYRU4pICB1c2luZyAn
aW50ZXJydXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTMNCihYRU4pICBp
bnRzaXplPTMgaW50bGVuPTMNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBwYXI9L2ludGVycnVwdC1j
b250cm9sbGVyQDYwMDA0MDAwLGludHNwZWM9WzB4MDAwMDAwMDAgMHgwMDAwMDAyYy4uLl0sb2lu
dHNpemU9Mw0KKFhFTikgZHRfaXJxX21hcF9yYXc6IGlwYXI9L2ludGVycnVwdC1jb250cm9sbGVy
QDYwMDA0MDAwLCBzaXplPTMNCihYRU4pICAtPiBhZGRyc2l6ZT0yDQooWEVOKSAgLT4gZ290IGl0
ICENCihYRU4pIC94dWRjQDcwMGQwMDAwIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDMNCihYRU4p
IENoZWNrIGlmIC94dWRjQDcwMGQwMDAwIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0K
KFhFTikgZHRfaXJxX251bWJlcjogZGV2PS94dWRjQDcwMGQwMDAwDQooWEVOKSAgdXNpbmcgJ2lu
dGVycnVwdHMnIHByb3BlcnR5DQooWEVOKSAgaW50c3BlYz0wIGludGxlbj0zDQooWEVOKSAgaW50
c2l6ZT0zIGludGxlbj0zDQooWEVOKSBkdF9kZXZpY2VfZ2V0X3Jhd19pcnE6IGRldj0veHVkY0A3
MDBkMDAwMCwgaW5kZXg9MA0KKFhFTikgIHVzaW5nICdpbnRlcnJ1cHRzJyBwcm9wZXJ0eQ0KKFhF
TikgIGludHNwZWM9MCBpbnRsZW49Mw0KKFhFTikgIGludHNpemU9MyBpbnRsZW49Mw0KKFhFTikg
ZHRfaXJxX21hcF9yYXc6IHBhcj0vaW50ZXJydXB0LWNvbnRyb2xsZXJANjAwMDQwMDAsaW50c3Bl
Yz1bMHgwMDAwMDAwMCAweDAwMDAwMDJjLi4uXSxvaW50c2l6ZT0zDQooWEVOKSBkdF9pcnFfbWFw
X3JhdzogaXBhcj0vaW50ZXJydXB0LWNvbnRyb2xsZXJANjAwMDQwMDAsIHNpemU9Mw0KKFhFTikg
IC0+IGFkZHJzaXplPTINCihYRU4pICAtPiBnb3QgaXQgIQ0KKFhFTikgaXJxIDAgbm90IChkaXJl
Y3RseSBvciBpbmRpcmVjdGx5KSBjb25uZWN0ZWQgdG8gcHJpbWFyeWNvbnRyb2xsZXIuIENvbm5l
Y3RlZCB0byAvaW50ZXJydXB0LWNvbnRyb2xsZXJANjAwMDQwMDANCihYRU4pIERUOiAqKiB0cmFu
c2xhdGlvbiBmb3IgZGV2aWNlIC94dWRjQDcwMGQwMDAwICoqDQooWEVOKSBEVDogYnVzIGlzIGRl
ZmF1bHQgKG5hPTIsIG5zPTIpIG9uIC8NCihYRU4pIERUOiB0cmFuc2xhdGluZyBhZGRyZXNzOjwz
PiAwMDAwMDAwMDwzPiA3MDBkMDAwMDwzPg0KKFhFTikgRFQ6IHJlYWNoZWQgcm9vdCBub2RlDQoo
WEVOKSAgIC0gTU1JTzogMDA3MDBkMDAwMCAtIDAwNzAwZDgwMDAgUDJNVHlwZT01DQooWEVOKSBE
VDogKiogdHJhbnNsYXRpb24gZm9yIGRldmljZSAveHVkY0A3MDBkMDAwMCAqKg0KKFhFTikgRFQ6
IGJ1cyBpcyBkZWZhdWx0IChuYT0yLCBucz0yKSBvbiAvDQooWEVOKSBEVDogdHJhbnNsYXRpbmcg
YWRkcmVzczo8Mz4gMDAwMDAwMDA8Mz4gNzAwZDgwMDA8Mz4NCihYRU4pIERUOiByZWFjaGVkIHJv
b3Qgbm9kZQ0KKFhFTikgICAtIE1NSU86IDAwNzAwZDgwMDAgLSAwMDcwMGQ5MDAwIFAyTVR5cGU9
NQ0KKFhFTikgRFQ6ICoqIHRyYW5zbGF0aW9uIGZvciBkZXZpY2UgL3h1ZGNANzAwZDAwMDAgKioN
CihYRU4pIERUOiBidXMgaXMgZGVmYXVsdCAobmE9MiwgbnM9Mikgb24gLw0KKFhFTikgRFQ6IHRy
YW5zbGF0aW5nIGFkZHJlc3M6PDM+IDAwMDAwMDAwPDM+IDcwMGQ5MDAwPDM+DQooWEVOKSBEVDog
cmVhY2hlZCByb290IG5vZGUNCihYRU4pICAgLSBNTUlPOiAwMDcwMGQ5MDAwIC0gMDA3MDBkYTAw
MCBQMk1UeXBlPTUNCihYRU4pIGhhbmRsZSAvbWVtb3J5LWNvbnRyb2xsZXJANzAwMTkwMDANCihY
RU4pIGR0X2lycV9udW1iZXI6IGRldj0vbWVtb3J5LWNvbnRyb2xsZXJANzAwMTkwMDANCihYRU4p
ICB1c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTMN
CihYRU4pICBpbnRzaXplPTMgaW50bGVuPTMNCihYRU4pIGR0X2RldmljZV9nZXRfcmF3X2lycTog
ZGV2PS9tZW1vcnktY29udHJvbGxlckA3MDAxOTAwMCwgaW5kZXg9MA0KKFhFTikgIHVzaW5nICdp
bnRlcnJ1cHRzJyBwcm9wZXJ0eQ0KKFhFTikgIGludHNwZWM9MCBpbnRsZW49Mw0KKFhFTikgIGlu
dHNpemU9MyBpbnRsZW49Mw0KKFhFTikgZHRfaXJxX21hcF9yYXc6IHBhcj0vaW50ZXJydXB0LWNv
bnRyb2xsZXJANjAwMDQwMDAsaW50c3BlYz1bMHgwMDAwMDAwMCAweDAwMDAwMDRkLi4uXSxvaW50
c2l6ZT0zDQooWEVOKSBkdF9pcnFfbWFwX3JhdzogaXBhcj0vaW50ZXJydXB0LWNvbnRyb2xsZXJA
NjAwMDQwMDAsIHNpemU9Mw0KKFhFTikgIC0+IGFkZHJzaXplPTINCihYRU4pICAtPiBnb3QgaXQg
IQ0KKFhFTikgL21lbW9yeS1jb250cm9sbGVyQDcwMDE5MDAwIHBhc3N0aHJvdWdoID0gMSBuYWRk
ciA9IDENCihYRU4pIENoZWNrIGlmIC9tZW1vcnktY29udHJvbGxlckA3MDAxOTAwMCBpcyBiZWhp
bmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vbWVtb3J5
LWNvbnRyb2xsZXJANzAwMTkwMDANCihYRU4pICB1c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkN
CihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTMNCihYRU4pICBpbnRzaXplPTMgaW50bGVuPTMNCihY
RU4pIGR0X2RldmljZV9nZXRfcmF3X2lycTogZGV2PS9tZW1vcnktY29udHJvbGxlckA3MDAxOTAw
MCwgaW5kZXg9MA0KKFhFTikgIHVzaW5nICdpbnRlcnJ1cHRzJyBwcm9wZXJ0eQ0KKFhFTikgIGlu
dHNwZWM9MCBpbnRsZW49Mw0KKFhFTikgIGludHNpemU9MyBpbnRsZW49Mw0KKFhFTikgZHRfaXJx
X21hcF9yYXc6IHBhcj0vaW50ZXJydXB0LWNvbnRyb2xsZXJANjAwMDQwMDAsaW50c3BlYz1bMHgw
MDAwMDAwMCAweDAwMDAwMDRkLi4uXSxvaW50c2l6ZT0zDQooWEVOKSBkdF9pcnFfbWFwX3Jhdzog
aXBhcj0vaW50ZXJydXB0LWNvbnRyb2xsZXJANjAwMDQwMDAsIHNpemU9Mw0KKFhFTikgIC0+IGFk
ZHJzaXplPTINCihYRU4pICAtPiBnb3QgaXQgIQ0KKFhFTikgaXJxIDAgbm90IChkaXJlY3RseSBv
ciBpbmRpcmVjdGx5KSBjb25uZWN0ZWQgdG8gcHJpbWFyeWNvbnRyb2xsZXIuIENvbm5lY3RlZCB0
byAvaW50ZXJydXB0LWNvbnRyb2xsZXJANjAwMDQwMDANCihYRU4pIERUOiAqKiB0cmFuc2xhdGlv
biBmb3IgZGV2aWNlIC9tZW1vcnktY29udHJvbGxlckA3MDAxOTAwMCAqKg0KKFhFTikgRFQ6IGJ1
cyBpcyBkZWZhdWx0IChuYT0yLCBucz0yKSBvbiAvDQooWEVOKSBEVDogdHJhbnNsYXRpbmcgYWRk
cmVzczo8Mz4gMDAwMDAwMDA8Mz4gNzAwMTkwMDA8Mz4NCihYRU4pIERUOiByZWFjaGVkIHJvb3Qg
bm9kZQ0KKFhFTikgICAtIE1NSU86IDAwNzAwMTkwMDAgLSAwMDcwMDFhMDAwIFAyTVR5cGU9NQ0K
KFhFTikgaGFuZGxlIC9wd21ANzAxMTAwMDANCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcHdt
QDcwMTEwMDAwDQooWEVOKSAvcHdtQDcwMTEwMDAwIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDEN
CihYRU4pIENoZWNrIGlmIC9wd21ANzAxMTAwMDAgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRk
IGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3B3bUA3MDExMDAwMA0KKFhFTikgRFQ6ICoq
IHRyYW5zbGF0aW9uIGZvciBkZXZpY2UgL3B3bUA3MDExMDAwMCAqKg0KKFhFTikgRFQ6IGJ1cyBp
cyBkZWZhdWx0IChuYT0yLCBucz0yKSBvbiAvDQooWEVOKSBEVDogdHJhbnNsYXRpbmcgYWRkcmVz
czo8Mz4gMDAwMDAwMDA8Mz4gNzAxMTAwMDA8Mz4NCihYRU4pIERUOiByZWFjaGVkIHJvb3Qgbm9k
ZQ0KKFhFTikgICAtIE1NSU86IDAwNzAxMTAwMDAgLSAwMDcwMTEwNDAwIFAyTVR5cGU9NQ0KKFhF
TikgaGFuZGxlIC9jbG9ja0A3MDExMDAwMA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9jbG9j
a0A3MDExMDAwMA0KKFhFTikgIHVzaW5nICdpbnRlcnJ1cHRzJyBwcm9wZXJ0eQ0KKFhFTikgIGlu
dHNwZWM9MCBpbnRsZW49Mw0KKFhFTikgIGludHNpemU9MyBpbnRsZW49Mw0KKFhFTikgZHRfZGV2
aWNlX2dldF9yYXdfaXJxOiBkZXY9L2Nsb2NrQDcwMTEwMDAwLCBpbmRleD0wDQooWEVOKSAgdXNp
bmcgJ2ludGVycnVwdHMnIHByb3BlcnR5DQooWEVOKSAgaW50c3BlYz0wIGludGxlbj0zDQooWEVO
KSAgaW50c2l6ZT0zIGludGxlbj0zDQooWEVOKSBkdF9pcnFfbWFwX3JhdzogcGFyPS9pbnRlcnJ1
cHQtY29udHJvbGxlckA2MDAwNDAwMCxpbnRzcGVjPVsweDAwMDAwMDAwIDB4MDAwMDAwM2UuLi5d
LG9pbnRzaXplPTMNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBpcGFyPS9pbnRlcnJ1cHQtY29udHJv
bGxlckA2MDAwNDAwMCwgc2l6ZT0zDQooWEVOKSAgLT4gYWRkcnNpemU9Mg0KKFhFTikgIC0+IGdv
dCBpdCAhDQooWEVOKSAvY2xvY2tANzAxMTAwMDAgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gNA0K
KFhFTikgQ2hlY2sgaWYgL2Nsb2NrQDcwMTEwMDAwIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFk
ZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9jbG9ja0A3MDExMDAwMA0KKFhFTikgIHVz
aW5nICdpbnRlcnJ1cHRzJyBwcm9wZXJ0eQ0KKFhFTikgIGludHNwZWM9MCBpbnRsZW49Mw0KKFhF
TikgIGludHNpemU9MyBpbnRsZW49Mw0KKFhFTikgZHRfZGV2aWNlX2dldF9yYXdfaXJxOiBkZXY9
L2Nsb2NrQDcwMTEwMDAwLCBpbmRleD0wDQooWEVOKSAgdXNpbmcgJ2ludGVycnVwdHMnIHByb3Bl
cnR5DQooWEVOKSAgaW50c3BlYz0wIGludGxlbj0zDQooWEVOKSAgaW50c2l6ZT0zIGludGxlbj0z
DQooWEVOKSBkdF9pcnFfbWFwX3JhdzogcGFyPS9pbnRlcnJ1cHQtY29udHJvbGxlckA2MDAwNDAw
MCxpbnRzcGVjPVsweDAwMDAwMDAwIDB4MDAwMDAwM2UuLi5dLG9pbnRzaXplPTMNCihYRU4pIGR0
X2lycV9tYXBfcmF3OiBpcGFyPS9pbnRlcnJ1cHQtY29udHJvbGxlckA2MDAwNDAwMCwgc2l6ZT0z
DQooWEVOKSAgLT4gYWRkcnNpemU9Mg0KKFhFTikgIC0+IGdvdCBpdCAhDQooWEVOKSBpcnEgMCBu
b3QgKGRpcmVjdGx5IG9yIGluZGlyZWN0bHkpIGNvbm5lY3RlZCB0byBwcmltYXJ5Y29udHJvbGxl
ci4gQ29ubmVjdGVkIHRvIC9pbnRlcnJ1cHQtY29udHJvbGxlckA2MDAwNDAwMA0KKFhFTikgRFQ6
ICoqIHRyYW5zbGF0aW9uIGZvciBkZXZpY2UgL2Nsb2NrQDcwMTEwMDAwICoqDQooWEVOKSBEVDog
YnVzIGlzIGRlZmF1bHQgKG5hPTIsIG5zPTIpIG9uIC8NCihYRU4pIERUOiB0cmFuc2xhdGluZyBh
ZGRyZXNzOjwzPiAwMDAwMDAwMDwzPiA3MDExMDAwMDwzPg0KKFhFTikgRFQ6IHJlYWNoZWQgcm9v
dCBub2RlDQooWEVOKSAgIC0gTU1JTzogMDA3MDExMDAwMCAtIDAwNzAxMTAxMDAgUDJNVHlwZT01
DQooWEVOKSBEVDogKiogdHJhbnNsYXRpb24gZm9yIGRldmljZSAvY2xvY2tANzAxMTAwMDAgKioN
CihYRU4pIERUOiBidXMgaXMgZGVmYXVsdCAobmE9MiwgbnM9Mikgb24gLw0KKFhFTikgRFQ6IHRy
YW5zbGF0aW5nIGFkZHJlc3M6PDM+IDAwMDAwMDAwPDM+IDcwMTEwMDAwPDM+DQooWEVOKSBEVDog
cmVhY2hlZCByb290IG5vZGUNCihYRU4pICAgLSBNTUlPOiAwMDcwMTEwMDAwIC0gMDA3MDExMDEw
MCBQMk1UeXBlPTUNCihYRU4pIERUOiAqKiB0cmFuc2xhdGlvbiBmb3IgZGV2aWNlIC9jbG9ja0A3
MDExMDAwMCAqKg0KKFhFTikgRFQ6IGJ1cyBpcyBkZWZhdWx0IChuYT0yLCBucz0yKSBvbiAvDQoo
WEVOKSBEVDogdHJhbnNsYXRpbmcgYWRkcmVzczo8Mz4gMDAwMDAwMDA8Mz4gNzAxMTAxMDA8Mz4N
CihYRU4pIERUOiByZWFjaGVkIHJvb3Qgbm9kZQ0KKFhFTikgICAtIE1NSU86IDAwNzAxMTAxMDAg
LSAwMDcwMTEwMjAwIFAyTVR5cGU9NQ0KKFhFTikgRFQ6ICoqIHRyYW5zbGF0aW9uIGZvciBkZXZp
Y2UgL2Nsb2NrQDcwMTEwMDAwICoqDQooWEVOKSBEVDogYnVzIGlzIGRlZmF1bHQgKG5hPTIsIG5z
PTIpIG9uIC8NCihYRU4pIERUOiB0cmFuc2xhdGluZyBhZGRyZXNzOjwzPiAwMDAwMDAwMDwzPiA3
MDExMDIwMDwzPg0KKFhFTikgRFQ6IHJlYWNoZWQgcm9vdCBub2RlDQooWEVOKSAgIC0gTU1JTzog
MDA3MDExMDIwMCAtIDAwNzAxMTAzMDAgUDJNVHlwZT01DQooWEVOKSBoYW5kbGUgL3NvY3RoZXJt
QDB4NzAwRTIwMDANCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vc29jdGhlcm1AMHg3MDBFMjAw
MA0KKFhFTikgIHVzaW5nICdpbnRlcnJ1cHRzJyBwcm9wZXJ0eQ0KKFhFTikgIGludHNwZWM9MCBp
bnRsZW49Ng0KKFhFTikgIGludHNpemU9MyBpbnRsZW49Ng0KKFhFTikgZHRfZGV2aWNlX2dldF9y
YXdfaXJxOiBkZXY9L3NvY3RoZXJtQDB4NzAwRTIwMDAsIGluZGV4PTANCihYRU4pICB1c2luZyAn
aW50ZXJydXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTYNCihYRU4pICBp
bnRzaXplPTMgaW50bGVuPTYNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBwYXI9L2ludGVycnVwdC1j
b250cm9sbGVyQDYwMDA0MDAwLGludHNwZWM9WzB4MDAwMDAwMDAgMHgwMDAwMDAzMC4uLl0sb2lu
dHNpemU9Mw0KKFhFTikgZHRfaXJxX21hcF9yYXc6IGlwYXI9L2ludGVycnVwdC1jb250cm9sbGVy
QDYwMDA0MDAwLCBzaXplPTMNCihYRU4pICAtPiBhZGRyc2l6ZT0yDQooWEVOKSAgLT4gZ290IGl0
ICENCihYRU4pIGR0X2RldmljZV9nZXRfcmF3X2lycTogZGV2PS9zb2N0aGVybUAweDcwMEUyMDAw
LCBpbmRleD0xDQooWEVOKSAgdXNpbmcgJ2ludGVycnVwdHMnIHByb3BlcnR5DQooWEVOKSAgaW50
c3BlYz0wIGludGxlbj02DQooWEVOKSAgaW50c2l6ZT0zIGludGxlbj02DQooWEVOKSBkdF9pcnFf
bWFwX3JhdzogcGFyPS9pbnRlcnJ1cHQtY29udHJvbGxlckA2MDAwNDAwMCxpbnRzcGVjPVsweDAw
MDAwMDAwIDB4MDAwMDAwMzMuLi5dLG9pbnRzaXplPTMNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBp
cGFyPS9pbnRlcnJ1cHQtY29udHJvbGxlckA2MDAwNDAwMCwgc2l6ZT0zDQooWEVOKSAgLT4gYWRk
cnNpemU9Mg0KKFhFTikgIC0+IGdvdCBpdCAhDQooWEVOKSAvc29jdGhlcm1AMHg3MDBFMjAwMCBw
YXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAzDQooWEVOKSBDaGVjayBpZiAvc29jdGhlcm1AMHg3MDBF
MjAwMCBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6
IGRldj0vc29jdGhlcm1AMHg3MDBFMjAwMA0KKFhFTikgIHVzaW5nICdpbnRlcnJ1cHRzJyBwcm9w
ZXJ0eQ0KKFhFTikgIGludHNwZWM9MCBpbnRsZW49Ng0KKFhFTikgIGludHNpemU9MyBpbnRsZW49
Ng0KKFhFTikgZHRfZGV2aWNlX2dldF9yYXdfaXJxOiBkZXY9L3NvY3RoZXJtQDB4NzAwRTIwMDAs
IGluZGV4PTANCihYRU4pICB1c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRz
cGVjPTAgaW50bGVuPTYNCihYRU4pICBpbnRzaXplPTMgaW50bGVuPTYNCihYRU4pIGR0X2lycV9t
YXBfcmF3OiBwYXI9L2ludGVycnVwdC1jb250cm9sbGVyQDYwMDA0MDAwLGludHNwZWM9WzB4MDAw
MDAwMDAgMHgwMDAwMDAzMC4uLl0sb2ludHNpemU9Mw0KKFhFTikgZHRfaXJxX21hcF9yYXc6IGlw
YXI9L2ludGVycnVwdC1jb250cm9sbGVyQDYwMDA0MDAwLCBzaXplPTMNCihYRU4pICAtPiBhZGRy
c2l6ZT0yDQooWEVOKSAgLT4gZ290IGl0ICENCihYRU4pIGlycSAwIG5vdCAoZGlyZWN0bHkgb3Ig
aW5kaXJlY3RseSkgY29ubmVjdGVkIHRvIHByaW1hcnljb250cm9sbGVyLiBDb25uZWN0ZWQgdG8g
L2ludGVycnVwdC1jb250cm9sbGVyQDYwMDA0MDAwDQooWEVOKSBkdF9kZXZpY2VfZ2V0X3Jhd19p
cnE6IGRldj0vc29jdGhlcm1AMHg3MDBFMjAwMCwgaW5kZXg9MQ0KKFhFTikgIHVzaW5nICdpbnRl
cnJ1cHRzJyBwcm9wZXJ0eQ0KKFhFTikgIGludHNwZWM9MCBpbnRsZW49Ng0KKFhFTikgIGludHNp
emU9MyBpbnRsZW49Ng0KKFhFTikgZHRfaXJxX21hcF9yYXc6IHBhcj0vaW50ZXJydXB0LWNvbnRy
b2xsZXJANjAwMDQwMDAsaW50c3BlYz1bMHgwMDAwMDAwMCAweDAwMDAwMDMzLi4uXSxvaW50c2l6
ZT0zDQooWEVOKSBkdF9pcnFfbWFwX3JhdzogaXBhcj0vaW50ZXJydXB0LWNvbnRyb2xsZXJANjAw
MDQwMDAsIHNpemU9Mw0KKFhFTikgIC0+IGFkZHJzaXplPTINCihYRU4pICAtPiBnb3QgaXQgIQ0K
KFhFTikgaXJxIDEgbm90IChkaXJlY3RseSBvciBpbmRpcmVjdGx5KSBjb25uZWN0ZWQgdG8gcHJp
bWFyeWNvbnRyb2xsZXIuIENvbm5lY3RlZCB0byAvaW50ZXJydXB0LWNvbnRyb2xsZXJANjAwMDQw
MDANCihYRU4pIERUOiAqKiB0cmFuc2xhdGlvbiBmb3IgZGV2aWNlIC9zb2N0aGVybUAweDcwMEUy
MDAwICoqDQooWEVOKSBEVDogYnVzIGlzIGRlZmF1bHQgKG5hPTIsIG5zPTIpIG9uIC8NCihYRU4p
IERUOiB0cmFuc2xhdGluZyBhZGRyZXNzOjwzPiAwMDAwMDAwMDwzPiA3MDBlMjAwMDwzPg0KKFhF
TikgRFQ6IHJlYWNoZWQgcm9vdCBub2RlDQooWEVOKSAgIC0gTU1JTzogMDA3MDBlMjAwMCAtIDAw
NzAwZTI2MDAgUDJNVHlwZT01DQooWEVOKSBEVDogKiogdHJhbnNsYXRpb24gZm9yIGRldmljZSAv
c29jdGhlcm1AMHg3MDBFMjAwMCAqKg0KKFhFTikgRFQ6IGJ1cyBpcyBkZWZhdWx0IChuYT0yLCBu
cz0yKSBvbiAvDQooWEVOKSBEVDogdHJhbnNsYXRpbmcgYWRkcmVzczo8Mz4gMDAwMDAwMDA8Mz4g
NjAwMDYwMDA8Mz4NCihYRU4pIERUOiByZWFjaGVkIHJvb3Qgbm9kZQ0KKFhFTikgICAtIE1NSU86
IDAwNjAwMDYwMDAgLSAwMDYwMDA2NDAwIFAyTVR5cGU9NQ0KKFhFTikgRFQ6ICoqIHRyYW5zbGF0
aW9uIGZvciBkZXZpY2UgL3NvY3RoZXJtQDB4NzAwRTIwMDAgKioNCihYRU4pIERUOiBidXMgaXMg
ZGVmYXVsdCAobmE9MiwgbnM9Mikgb24gLw0KKFhFTikgRFQ6IHRyYW5zbGF0aW5nIGFkZHJlc3M6
PDM+IDAwMDAwMDAwPDM+IDcwMDQwMDAwPDM+DQooWEVOKSBEVDogcmVhY2hlZCByb290IG5vZGUN
CihYRU4pICAgLSBNTUlPOiAwMDcwMDQwMDAwIC0gMDA3MDA0MDIwMCBQMk1UeXBlPTUNCihYRU4p
IGhhbmRsZSAvc29jdGhlcm1AMHg3MDBFMjAwMC90aHJvdHRsZS1jZmdzDQooWEVOKSBkdF9pcnFf
bnVtYmVyOiBkZXY9L3NvY3RoZXJtQDB4NzAwRTIwMDAvdGhyb3R0bGUtY2Zncw0KKFhFTikgL3Nv
Y3RoZXJtQDB4NzAwRTIwMDAvdGhyb3R0bGUtY2ZncyBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAw
DQooWEVOKSBDaGVjayBpZiAvc29jdGhlcm1AMHg3MDBFMjAwMC90aHJvdHRsZS1jZmdzIGlzIGJl
aGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9zb2N0
aGVybUAweDcwMEUyMDAwL3Rocm90dGxlLWNmZ3MNCihYRU4pIGhhbmRsZSAvc29jdGhlcm1AMHg3
MDBFMjAwMC90aHJvdHRsZS1jZmdzL2hlYXZ5DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3Nv
Y3RoZXJtQDB4NzAwRTIwMDAvdGhyb3R0bGUtY2Zncy9oZWF2eQ0KKFhFTikgL3NvY3RoZXJtQDB4
NzAwRTIwMDAvdGhyb3R0bGUtY2Zncy9oZWF2eSBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQoo
WEVOKSBDaGVjayBpZiAvc29jdGhlcm1AMHg3MDBFMjAwMC90aHJvdHRsZS1jZmdzL2hlYXZ5IGlz
IGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9z
b2N0aGVybUAweDcwMEUyMDAwL3Rocm90dGxlLWNmZ3MvaGVhdnkNCihYRU4pIGhhbmRsZSAvc29j
dGhlcm1AMHg3MDBFMjAwMC90aHJvdHRsZS1jZmdzL29jMQ0KKFhFTikgZHRfaXJxX251bWJlcjog
ZGV2PS9zb2N0aGVybUAweDcwMEUyMDAwL3Rocm90dGxlLWNmZ3Mvb2MxDQooWEVOKSAvc29jdGhl
cm1AMHg3MDBFMjAwMC90aHJvdHRsZS1jZmdzL29jMSBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAw
DQooWEVOKSBDaGVjayBpZiAvc29jdGhlcm1AMHg3MDBFMjAwMC90aHJvdHRsZS1jZmdzL29jMSBp
cyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0v
c29jdGhlcm1AMHg3MDBFMjAwMC90aHJvdHRsZS1jZmdzL29jMQ0KKFhFTikgaGFuZGxlIC9zb2N0
aGVybUAweDcwMEUyMDAwL3Rocm90dGxlLWNmZ3Mvb2MzDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBk
ZXY9L3NvY3RoZXJtQDB4NzAwRTIwMDAvdGhyb3R0bGUtY2Zncy9vYzMNCihYRU4pIC9zb2N0aGVy
bUAweDcwMEUyMDAwL3Rocm90dGxlLWNmZ3Mvb2MzIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDAN
CihYRU4pIENoZWNrIGlmIC9zb2N0aGVybUAweDcwMEUyMDAwL3Rocm90dGxlLWNmZ3Mvb2MzIGlz
IGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9z
b2N0aGVybUAweDcwMEUyMDAwL3Rocm90dGxlLWNmZ3Mvb2MzDQooWEVOKSBoYW5kbGUgL3NvY3Ro
ZXJtQDB4NzAwRTIwMDAvZnVzZV93YXJAZnVzZV9yZXZfMF8xDQooWEVOKSBkdF9pcnFfbnVtYmVy
OiBkZXY9L3NvY3RoZXJtQDB4NzAwRTIwMDAvZnVzZV93YXJAZnVzZV9yZXZfMF8xDQooWEVOKSAv
c29jdGhlcm1AMHg3MDBFMjAwMC9mdXNlX3dhckBmdXNlX3Jldl8wXzEgcGFzc3Rocm91Z2ggPSAx
IG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3NvY3RoZXJtQDB4NzAwRTIwMDAvZnVzZV93YXJA
ZnVzZV9yZXZfMF8xIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJx
X251bWJlcjogZGV2PS9zb2N0aGVybUAweDcwMEUyMDAwL2Z1c2Vfd2FyQGZ1c2VfcmV2XzBfMQ0K
KFhFTikgaGFuZGxlIC9zb2N0aGVybUAweDcwMEUyMDAwL2Z1c2Vfd2FyQGZ1c2VfcmV2XzINCihY
RU4pIGR0X2lycV9udW1iZXI6IGRldj0vc29jdGhlcm1AMHg3MDBFMjAwMC9mdXNlX3dhckBmdXNl
X3Jldl8yDQooWEVOKSAvc29jdGhlcm1AMHg3MDBFMjAwMC9mdXNlX3dhckBmdXNlX3Jldl8yIHBh
c3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9zb2N0aGVybUAweDcwMEUy
MDAwL2Z1c2Vfd2FyQGZ1c2VfcmV2XzIgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQoo
WEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3NvY3RoZXJtQDB4NzAwRTIwMDAvZnVzZV93YXJAZnVz
ZV9yZXZfMg0KKFhFTikgaGFuZGxlIC9zb2N0aGVybUAweDcwMEUyMDAwL3Rocm90dGxlQGNyaXRp
Y2FsDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3NvY3RoZXJtQDB4NzAwRTIwMDAvdGhyb3R0
bGVAY3JpdGljYWwNCihYRU4pIC9zb2N0aGVybUAweDcwMEUyMDAwL3Rocm90dGxlQGNyaXRpY2Fs
IHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9zb2N0aGVybUAweDcw
MEUyMDAwL3Rocm90dGxlQGNyaXRpY2FsIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0K
KFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9zb2N0aGVybUAweDcwMEUyMDAwL3Rocm90dGxlQGNy
aXRpY2FsDQooWEVOKSBoYW5kbGUgL3NvY3RoZXJtQDB4NzAwRTIwMDAvdGhyb3R0bGVAaGVhdnkN
CihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vc29jdGhlcm1AMHg3MDBFMjAwMC90aHJvdHRsZUBo
ZWF2eQ0KKFhFTikgL3NvY3RoZXJtQDB4NzAwRTIwMDAvdGhyb3R0bGVAaGVhdnkgcGFzc3Rocm91
Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3NvY3RoZXJtQDB4NzAwRTIwMDAvdGhy
b3R0bGVAaGVhdnkgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFf
bnVtYmVyOiBkZXY9L3NvY3RoZXJtQDB4NzAwRTIwMDAvdGhyb3R0bGVAaGVhdnkNCihYRU4pIGhh
bmRsZSAvc29jdGhlcm1AMHg3MDBFMjAwMC90aHJvdHRsZV9kZXZAY3B1X2hpZ2gNCihYRU4pIGR0
X2lycV9udW1iZXI6IGRldj0vc29jdGhlcm1AMHg3MDBFMjAwMC90aHJvdHRsZV9kZXZAY3B1X2hp
Z2gNCihYRU4pIC9zb2N0aGVybUAweDcwMEUyMDAwL3Rocm90dGxlX2RldkBjcHVfaGlnaCBwYXNz
dGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvc29jdGhlcm1AMHg3MDBFMjAw
MC90aHJvdHRsZV9kZXZAY3B1X2hpZ2ggaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQoo
WEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3NvY3RoZXJtQDB4NzAwRTIwMDAvdGhyb3R0bGVfZGV2
QGNwdV9oaWdoDQooWEVOKSBoYW5kbGUgL3NvY3RoZXJtQDB4NzAwRTIwMDAvdGhyb3R0bGVfZGV2
QGdwdV9oaWdoDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3NvY3RoZXJtQDB4NzAwRTIwMDAv
dGhyb3R0bGVfZGV2QGdwdV9oaWdoDQooWEVOKSAvc29jdGhlcm1AMHg3MDBFMjAwMC90aHJvdHRs
ZV9kZXZAZ3B1X2hpZ2ggcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYg
L3NvY3RoZXJtQDB4NzAwRTIwMDAvdGhyb3R0bGVfZGV2QGdwdV9oaWdoIGlzIGJlaGluZCB0aGUg
SU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9zb2N0aGVybUAweDcw
MEUyMDAwL3Rocm90dGxlX2RldkBncHVfaGlnaA0KKFhFTikgaGFuZGxlIC90ZWdyYS1hb3RhZw0K
KFhFTikgZHRfaXJxX251bWJlcjogZGV2PS90ZWdyYS1hb3RhZw0KKFhFTikgL3RlZ3JhLWFvdGFn
IHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC90ZWdyYS1hb3RhZyBp
cyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0v
dGVncmEtYW90YWcNCihYRU4pIGhhbmRsZSAvdGVncmFfY2VjDQooWEVOKSBkdF9pcnFfbnVtYmVy
OiBkZXY9L3RlZ3JhX2NlYw0KKFhFTikgIHVzaW5nICdpbnRlcnJ1cHRzJyBwcm9wZXJ0eQ0KKFhF
TikgIGludHNwZWM9MCBpbnRsZW49Mw0KKFhFTikgIGludHNpemU9MyBpbnRsZW49Mw0KKFhFTikg
ZHRfZGV2aWNlX2dldF9yYXdfaXJxOiBkZXY9L3RlZ3JhX2NlYywgaW5kZXg9MA0KKFhFTikgIHVz
aW5nICdpbnRlcnJ1cHRzJyBwcm9wZXJ0eQ0KKFhFTikgIGludHNwZWM9MCBpbnRsZW49Mw0KKFhF
TikgIGludHNpemU9MyBpbnRsZW49Mw0KKFhFTikgZHRfaXJxX21hcF9yYXc6IHBhcj0vaW50ZXJy
dXB0LWNvbnRyb2xsZXJANjAwMDQwMDAsaW50c3BlYz1bMHgwMDAwMDAwMCAweDAwMDAwMDAzLi4u
XSxvaW50c2l6ZT0zDQooWEVOKSBkdF9pcnFfbWFwX3JhdzogaXBhcj0vaW50ZXJydXB0LWNvbnRy
b2xsZXJANjAwMDQwMDAsIHNpemU9Mw0KKFhFTikgIC0+IGFkZHJzaXplPTINCihYRU4pICAtPiBn
b3QgaXQgIQ0KKFhFTikgL3RlZ3JhX2NlYyBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAxDQooWEVO
KSBDaGVjayBpZiAvdGVncmFfY2VjIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhF
TikgZHRfaXJxX251bWJlcjogZGV2PS90ZWdyYV9jZWMNCihYRU4pICB1c2luZyAnaW50ZXJydXB0
cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTMNCihYRU4pICBpbnRzaXplPTMg
aW50bGVuPTMNCihYRU4pIGR0X2RldmljZV9nZXRfcmF3X2lycTogZGV2PS90ZWdyYV9jZWMsIGlu
ZGV4PTANCihYRU4pICB1c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVj
PTAgaW50bGVuPTMNCihYRU4pICBpbnRzaXplPTMgaW50bGVuPTMNCihYRU4pIGR0X2lycV9tYXBf
cmF3OiBwYXI9L2ludGVycnVwdC1jb250cm9sbGVyQDYwMDA0MDAwLGludHNwZWM9WzB4MDAwMDAw
MDAgMHgwMDAwMDAwMy4uLl0sb2ludHNpemU9Mw0KKFhFTikgZHRfaXJxX21hcF9yYXc6IGlwYXI9
L2ludGVycnVwdC1jb250cm9sbGVyQDYwMDA0MDAwLCBzaXplPTMNCihYRU4pICAtPiBhZGRyc2l6
ZT0yDQooWEVOKSAgLT4gZ290IGl0ICENCihYRU4pIGlycSAwIG5vdCAoZGlyZWN0bHkgb3IgaW5k
aXJlY3RseSkgY29ubmVjdGVkIHRvIHByaW1hcnljb250cm9sbGVyLiBDb25uZWN0ZWQgdG8gL2lu
dGVycnVwdC1jb250cm9sbGVyQDYwMDA0MDAwDQooWEVOKSBEVDogKiogdHJhbnNsYXRpb24gZm9y
IGRldmljZSAvdGVncmFfY2VjICoqDQooWEVOKSBEVDogYnVzIGlzIGRlZmF1bHQgKG5hPTIsIG5z
PTIpIG9uIC8NCihYRU4pIERUOiB0cmFuc2xhdGluZyBhZGRyZXNzOjwzPiAwMDAwMDAwMDwzPiA3
MDAxNTAwMDwzPg0KKFhFTikgRFQ6IHJlYWNoZWQgcm9vdCBub2RlDQooWEVOKSAgIC0gTU1JTzog
MDA3MDAxNTAwMCAtIDAwNzAwMTYwMDAgUDJNVHlwZT01DQooWEVOKSBoYW5kbGUgL3dhdGNoZG9n
QDYwMDA1MTAwDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3dhdGNoZG9nQDYwMDA1MTAwDQoo
WEVOKSAgdXNpbmcgJ2ludGVycnVwdHMnIHByb3BlcnR5DQooWEVOKSAgaW50c3BlYz0wIGludGxl
bj0zDQooWEVOKSAgaW50c2l6ZT0zIGludGxlbj0zDQooWEVOKSBkdF9kZXZpY2VfZ2V0X3Jhd19p
cnE6IGRldj0vd2F0Y2hkb2dANjAwMDUxMDAsIGluZGV4PTANCihYRU4pICB1c2luZyAnaW50ZXJy
dXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTMNCihYRU4pICBpbnRzaXpl
PTMgaW50bGVuPTMNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBwYXI9L2ludGVycnVwdC1jb250cm9s
bGVyQDYwMDA0MDAwLGludHNwZWM9WzB4MDAwMDAwMDAgMHgwMDAwMDA3Yi4uLl0sb2ludHNpemU9
Mw0KKFhFTikgZHRfaXJxX21hcF9yYXc6IGlwYXI9L2ludGVycnVwdC1jb250cm9sbGVyQDYwMDA0
MDAwLCBzaXplPTMNCihYRU4pICAtPiBhZGRyc2l6ZT0yDQooWEVOKSAgLT4gZ290IGl0ICENCihY
RU4pIC93YXRjaGRvZ0A2MDAwNTEwMCBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAyDQooWEVOKSBD
aGVjayBpZiAvd2F0Y2hkb2dANjAwMDUxMDAgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0
DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3dhdGNoZG9nQDYwMDA1MTAwDQooWEVOKSAgdXNp
bmcgJ2ludGVycnVwdHMnIHByb3BlcnR5DQooWEVOKSAgaW50c3BlYz0wIGludGxlbj0zDQooWEVO
KSAgaW50c2l6ZT0zIGludGxlbj0zDQooWEVOKSBkdF9kZXZpY2VfZ2V0X3Jhd19pcnE6IGRldj0v
d2F0Y2hkb2dANjAwMDUxMDAsIGluZGV4PTANCihYRU4pICB1c2luZyAnaW50ZXJydXB0cycgcHJv
cGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTMNCihYRU4pICBpbnRzaXplPTMgaW50bGVu
PTMNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBwYXI9L2ludGVycnVwdC1jb250cm9sbGVyQDYwMDA0
MDAwLGludHNwZWM9WzB4MDAwMDAwMDAgMHgwMDAwMDA3Yi4uLl0sb2ludHNpemU9Mw0KKFhFTikg
ZHRfaXJxX21hcF9yYXc6IGlwYXI9L2ludGVycnVwdC1jb250cm9sbGVyQDYwMDA0MDAwLCBzaXpl
PTMNCihYRU4pICAtPiBhZGRyc2l6ZT0yDQooWEVOKSAgLT4gZ290IGl0ICENCihYRU4pIGlycSAw
IG5vdCAoZGlyZWN0bHkgb3IgaW5kaXJlY3RseSkgY29ubmVjdGVkIHRvIHByaW1hcnljb250cm9s
bGVyLiBDb25uZWN0ZWQgdG8gL2ludGVycnVwdC1jb250cm9sbGVyQDYwMDA0MDAwDQooWEVOKSBE
VDogKiogdHJhbnNsYXRpb24gZm9yIGRldmljZSAvd2F0Y2hkb2dANjAwMDUxMDAgKioNCihYRU4p
IERUOiBidXMgaXMgZGVmYXVsdCAobmE9MiwgbnM9Mikgb24gLw0KKFhFTikgRFQ6IHRyYW5zbGF0
aW5nIGFkZHJlc3M6PDM+IDAwMDAwMDAwPDM+IDYwMDA1MTAwPDM+DQooWEVOKSBEVDogcmVhY2hl
ZCByb290IG5vZGUNCihYRU4pICAgLSBNTUlPOiAwMDYwMDA1MTAwIC0gMDA2MDAwNTEyMCBQMk1U
eXBlPTUNCihYRU4pIERUOiAqKiB0cmFuc2xhdGlvbiBmb3IgZGV2aWNlIC93YXRjaGRvZ0A2MDAw
NTEwMCAqKg0KKFhFTikgRFQ6IGJ1cyBpcyBkZWZhdWx0IChuYT0yLCBucz0yKSBvbiAvDQooWEVO
KSBEVDogdHJhbnNsYXRpbmcgYWRkcmVzczo8Mz4gMDAwMDAwMDA8Mz4gNjAwMDUwODg8Mz4NCihY
RU4pIERUOiByZWFjaGVkIHJvb3Qgbm9kZQ0KKFhFTikgICAtIE1NSU86IDAwNjAwMDUwODggLSAw
MDYwMDA1MDkwIFAyTVR5cGU9NQ0KKFhFTikgaGFuZGxlIC90ZWdyYV9maXFfZGVidWdnZXINCihY
RU4pIGR0X2lycV9udW1iZXI6IGRldj0vdGVncmFfZmlxX2RlYnVnZ2VyDQooWEVOKSAgdXNpbmcg
J2ludGVycnVwdHMnIHByb3BlcnR5DQooWEVOKSAgaW50c3BlYz0wIGludGxlbj0zDQooWEVOKSAg
aW50c2l6ZT0zIGludGxlbj0zDQooWEVOKSBkdF9kZXZpY2VfZ2V0X3Jhd19pcnE6IGRldj0vdGVn
cmFfZmlxX2RlYnVnZ2VyLCBpbmRleD0wDQooWEVOKSAgdXNpbmcgJ2ludGVycnVwdHMnIHByb3Bl
cnR5DQooWEVOKSAgaW50c3BlYz0wIGludGxlbj0zDQooWEVOKSAgaW50c2l6ZT0zIGludGxlbj0z
DQooWEVOKSBkdF9pcnFfbWFwX3JhdzogcGFyPS9pbnRlcnJ1cHQtY29udHJvbGxlckA2MDAwNDAw
MCxpbnRzcGVjPVsweDAwMDAwMDAwIDB4MDAwMDAwN2IuLi5dLG9pbnRzaXplPTMNCihYRU4pIGR0
X2lycV9tYXBfcmF3OiBpcGFyPS9pbnRlcnJ1cHQtY29udHJvbGxlckA2MDAwNDAwMCwgc2l6ZT0z
DQooWEVOKSAgLT4gYWRkcnNpemU9Mg0KKFhFTikgIC0+IGdvdCBpdCAhDQooWEVOKSAvdGVncmFf
ZmlxX2RlYnVnZ2VyIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC90
ZWdyYV9maXFfZGVidWdnZXIgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBk
dF9pcnFfbnVtYmVyOiBkZXY9L3RlZ3JhX2ZpcV9kZWJ1Z2dlcg0KKFhFTikgIHVzaW5nICdpbnRl
cnJ1cHRzJyBwcm9wZXJ0eQ0KKFhFTikgIGludHNwZWM9MCBpbnRsZW49Mw0KKFhFTikgIGludHNp
emU9MyBpbnRsZW49Mw0KKFhFTikgZHRfZGV2aWNlX2dldF9yYXdfaXJxOiBkZXY9L3RlZ3JhX2Zp
cV9kZWJ1Z2dlciwgaW5kZXg9MA0KKFhFTikgIHVzaW5nICdpbnRlcnJ1cHRzJyBwcm9wZXJ0eQ0K
KFhFTikgIGludHNwZWM9MCBpbnRsZW49Mw0KKFhFTikgIGludHNpemU9MyBpbnRsZW49Mw0KKFhF
TikgZHRfaXJxX21hcF9yYXc6IHBhcj0vaW50ZXJydXB0LWNvbnRyb2xsZXJANjAwMDQwMDAsaW50
c3BlYz1bMHgwMDAwMDAwMCAweDAwMDAwMDdiLi4uXSxvaW50c2l6ZT0zDQooWEVOKSBkdF9pcnFf
bWFwX3JhdzogaXBhcj0vaW50ZXJydXB0LWNvbnRyb2xsZXJANjAwMDQwMDAsIHNpemU9Mw0KKFhF
TikgIC0+IGFkZHJzaXplPTINCihYRU4pICAtPiBnb3QgaXQgIQ0KKFhFTikgaXJxIDAgbm90IChk
aXJlY3RseSBvciBpbmRpcmVjdGx5KSBjb25uZWN0ZWQgdG8gcHJpbWFyeWNvbnRyb2xsZXIuIENv
bm5lY3RlZCB0byAvaW50ZXJydXB0LWNvbnRyb2xsZXJANjAwMDQwMDANCihYRU4pIGhhbmRsZSAv
cHRtDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3B0bQ0KKFhFTikgL3B0bSBwYXNzdGhyb3Vn
aCA9IDEgbmFkZHIgPSAxMg0KKFhFTikgQ2hlY2sgaWYgL3B0bSBpcyBiZWhpbmQgdGhlIElPTU1V
IGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcHRtDQooWEVOKSBEVDogKiog
dHJhbnNsYXRpb24gZm9yIGRldmljZSAvcHRtICoqDQooWEVOKSBEVDogYnVzIGlzIGRlZmF1bHQg
KG5hPTIsIG5zPTIpIG9uIC8NCihYRU4pIERUOiB0cmFuc2xhdGluZyBhZGRyZXNzOjwzPiAwMDAw
MDAwMDwzPiA3MjAxMDAwMDwzPg0KKFhFTikgRFQ6IHJlYWNoZWQgcm9vdCBub2RlDQooWEVOKSAg
IC0gTU1JTzogMDA3MjAxMDAwMCAtIDAwNzIwMTEwMDAgUDJNVHlwZT01DQooWEVOKSBEVDogKiog
dHJhbnNsYXRpb24gZm9yIGRldmljZSAvcHRtICoqDQooWEVOKSBEVDogYnVzIGlzIGRlZmF1bHQg
KG5hPTIsIG5zPTIpIG9uIC8NCihYRU4pIERUOiB0cmFuc2xhdGluZyBhZGRyZXNzOjwzPiAwMDAw
MDAwMDwzPiA3MjAzMDAwMDwzPg0KKFhFTikgRFQ6IHJlYWNoZWQgcm9vdCBub2RlDQooWEVOKSAg
IC0gTU1JTzogMDA3MjAzMDAwMCAtIDAwNzIwMzEwMDAgUDJNVHlwZT01DQooWEVOKSBEVDogKiog
dHJhbnNsYXRpb24gZm9yIGRldmljZSAvcHRtICoqDQooWEVOKSBEVDogYnVzIGlzIGRlZmF1bHQg
KG5hPTIsIG5zPTIpIG9uIC8NCihYRU4pIERUOiB0cmFuc2xhdGluZyBhZGRyZXNzOjwzPiAwMDAw
MDAwMDwzPiA3MjA0MDAwMDwzPg0KKFhFTikgRFQ6IHJlYWNoZWQgcm9vdCBub2RlDQooWEVOKSAg
IC0gTU1JTzogMDA3MjA0MDAwMCAtIDAwNzIwNDEwMDAgUDJNVHlwZT01DQooWEVOKSBEVDogKiog
dHJhbnNsYXRpb24gZm9yIGRldmljZSAvcHRtICoqDQooWEVOKSBEVDogYnVzIGlzIGRlZmF1bHQg
KG5hPTIsIG5zPTIpIG9uIC8NCihYRU4pIERUOiB0cmFuc2xhdGluZyBhZGRyZXNzOjwzPiAwMDAw
MDAwMDwzPiA3MjA1MDAwMDwzPg0KKFhFTikgRFQ6IHJlYWNoZWQgcm9vdCBub2RlDQooWEVOKSAg
IC0gTU1JTzogMDA3MjA1MDAwMCAtIDAwNzIwNTEwMDAgUDJNVHlwZT01DQooWEVOKSBEVDogKiog
dHJhbnNsYXRpb24gZm9yIGRldmljZSAvcHRtICoqDQooWEVOKSBEVDogYnVzIGlzIGRlZmF1bHQg
KG5hPTIsIG5zPTIpIG9uIC8NCihYRU4pIERUOiB0cmFuc2xhdGluZyBhZGRyZXNzOjwzPiAwMDAw
MDAwMDwzPiA3MjA2MDAwMDwzPg0KKFhFTikgRFQ6IHJlYWNoZWQgcm9vdCBub2RlDQooWEVOKSAg
IC0gTU1JTzogMDA3MjA2MDAwMCAtIDAwNzIwNjEwMDAgUDJNVHlwZT01DQooWEVOKSBEVDogKiog
dHJhbnNsYXRpb24gZm9yIGRldmljZSAvcHRtICoqDQooWEVOKSBEVDogYnVzIGlzIGRlZmF1bHQg
KG5hPTIsIG5zPTIpIG9uIC8NCihYRU4pIERUOiB0cmFuc2xhdGluZyBhZGRyZXNzOjwzPiAwMDAw
MDAwMDwzPiA3MzAxMDAwMDwzPg0KKFhFTikgRFQ6IHJlYWNoZWQgcm9vdCBub2RlDQooWEVOKSAg
IC0gTU1JTzogMDA3MzAxMDAwMCAtIDAwNzMwMTEwMDAgUDJNVHlwZT01DQooWEVOKSBEVDogKiog
dHJhbnNsYXRpb24gZm9yIGRldmljZSAvcHRtICoqDQooWEVOKSBEVDogYnVzIGlzIGRlZmF1bHQg
KG5hPTIsIG5zPTIpIG9uIC8NCihYRU4pIERUOiB0cmFuc2xhdGluZyBhZGRyZXNzOjwzPiAwMDAw
MDAwMDwzPiA3MzQ0MDAwMDwzPg0KKFhFTikgRFQ6IHJlYWNoZWQgcm9vdCBub2RlDQooWEVOKSAg
IC0gTU1JTzogMDA3MzQ0MDAwMCAtIDAwNzM0NDEwMDAgUDJNVHlwZT01DQooWEVOKSBEVDogKiog
dHJhbnNsYXRpb24gZm9yIGRldmljZSAvcHRtICoqDQooWEVOKSBEVDogYnVzIGlzIGRlZmF1bHQg
KG5hPTIsIG5zPTIpIG9uIC8NCihYRU4pIERUOiB0cmFuc2xhdGluZyBhZGRyZXNzOjwzPiAwMDAw
MDAwMDwzPiA3MzU0MDAwMDwzPg0KKFhFTikgRFQ6IHJlYWNoZWQgcm9vdCBub2RlDQooWEVOKSAg
IC0gTU1JTzogMDA3MzU0MDAwMCAtIDAwNzM1NDEwMDAgUDJNVHlwZT01DQooWEVOKSBEVDogKiog
dHJhbnNsYXRpb24gZm9yIGRldmljZSAvcHRtICoqDQooWEVOKSBEVDogYnVzIGlzIGRlZmF1bHQg
KG5hPTIsIG5zPTIpIG9uIC8NCihYRU4pIERUOiB0cmFuc2xhdGluZyBhZGRyZXNzOjwzPiAwMDAw
MDAwMDwzPiA3MzY0MDAwMDwzPg0KKFhFTikgRFQ6IHJlYWNoZWQgcm9vdCBub2RlDQooWEVOKSAg
IC0gTU1JTzogMDA3MzY0MDAwMCAtIDAwNzM2NDEwMDAgUDJNVHlwZT01DQooWEVOKSBEVDogKiog
dHJhbnNsYXRpb24gZm9yIGRldmljZSAvcHRtICoqDQooWEVOKSBEVDogYnVzIGlzIGRlZmF1bHQg
KG5hPTIsIG5zPTIpIG9uIC8NCihYRU4pIERUOiB0cmFuc2xhdGluZyBhZGRyZXNzOjwzPiAwMDAw
MDAwMDwzPiA3Mzc0MDAwMDwzPg0KKFhFTikgRFQ6IHJlYWNoZWQgcm9vdCBub2RlDQooWEVOKSAg
IC0gTU1JTzogMDA3Mzc0MDAwMCAtIDAwNzM3NDEwMDAgUDJNVHlwZT01DQooWEVOKSBEVDogKiog
dHJhbnNsYXRpb24gZm9yIGRldmljZSAvcHRtICoqDQooWEVOKSBEVDogYnVzIGlzIGRlZmF1bHQg
KG5hPTIsIG5zPTIpIG9uIC8NCihYRU4pIERUOiB0cmFuc2xhdGluZyBhZGRyZXNzOjwzPiAwMDAw
MDAwMDwzPiA3MjgyMDAwMDwzPg0KKFhFTikgRFQ6IHJlYWNoZWQgcm9vdCBub2RlDQooWEVOKSAg
IC0gTU1JTzogMDA3MjgyMDAwMCAtIDAwNzI4MjEwMDAgUDJNVHlwZT01DQooWEVOKSBEVDogKiog
dHJhbnNsYXRpb24gZm9yIGRldmljZSAvcHRtICoqDQooWEVOKSBEVDogYnVzIGlzIGRlZmF1bHQg
KG5hPTIsIG5zPTIpIG9uIC8NCihYRU4pIERUOiB0cmFuc2xhdGluZyBhZGRyZXNzOjwzPiAwMDAw
MDAwMDwzPiA3MmExYzAwMDwzPg0KKFhFTikgRFQ6IHJlYWNoZWQgcm9vdCBub2RlDQooWEVOKSAg
IC0gTU1JTzogMDA3MmExYzAwMCAtIDAwNzJhMWQwMDAgUDJNVHlwZT01DQooWEVOKSBoYW5kbGUg
L21zZWxlY3QNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vbXNlbGVjdA0KKFhFTikgIHVzaW5n
ICdpbnRlcnJ1cHRzJyBwcm9wZXJ0eQ0KKFhFTikgIGludHNwZWM9MCBpbnRsZW49Mw0KKFhFTikg
IGludHNpemU9MyBpbnRsZW49Mw0KKFhFTikgZHRfZGV2aWNlX2dldF9yYXdfaXJxOiBkZXY9L21z
ZWxlY3QsIGluZGV4PTANCihYRU4pICB1c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihYRU4p
ICBpbnRzcGVjPTAgaW50bGVuPTMNCihYRU4pICBpbnRzaXplPTMgaW50bGVuPTMNCihYRU4pIGR0
X2lycV9tYXBfcmF3OiBwYXI9L2ludGVycnVwdC1jb250cm9sbGVyQDYwMDA0MDAwLGludHNwZWM9
WzB4MDAwMDAwMDAgMHgwMDAwMDBhZi4uLl0sb2ludHNpemU9Mw0KKFhFTikgZHRfaXJxX21hcF9y
YXc6IGlwYXI9L2ludGVycnVwdC1jb250cm9sbGVyQDYwMDA0MDAwLCBzaXplPTMNCihYRU4pICAt
PiBhZGRyc2l6ZT0yDQooWEVOKSAgLT4gZ290IGl0ICENCihYRU4pIC9tc2VsZWN0IHBhc3N0aHJv
dWdoID0gMSBuYWRkciA9IDENCihYRU4pIENoZWNrIGlmIC9tc2VsZWN0IGlzIGJlaGluZCB0aGUg
SU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9tc2VsZWN0DQooWEVO
KSAgdXNpbmcgJ2ludGVycnVwdHMnIHByb3BlcnR5DQooWEVOKSAgaW50c3BlYz0wIGludGxlbj0z
DQooWEVOKSAgaW50c2l6ZT0zIGludGxlbj0zDQooWEVOKSBkdF9kZXZpY2VfZ2V0X3Jhd19pcnE6
IGRldj0vbXNlbGVjdCwgaW5kZXg9MA0KKFhFTikgIHVzaW5nICdpbnRlcnJ1cHRzJyBwcm9wZXJ0
eQ0KKFhFTikgIGludHNwZWM9MCBpbnRsZW49Mw0KKFhFTikgIGludHNpemU9MyBpbnRsZW49Mw0K
KFhFTikgZHRfaXJxX21hcF9yYXc6IHBhcj0vaW50ZXJydXB0LWNvbnRyb2xsZXJANjAwMDQwMDAs
aW50c3BlYz1bMHgwMDAwMDAwMCAweDAwMDAwMGFmLi4uXSxvaW50c2l6ZT0zDQooWEVOKSBkdF9p
cnFfbWFwX3JhdzogaXBhcj0vaW50ZXJydXB0LWNvbnRyb2xsZXJANjAwMDQwMDAsIHNpemU9Mw0K
KFhFTikgIC0+IGFkZHJzaXplPTINCihYRU4pICAtPiBnb3QgaXQgIQ0KKFhFTikgaXJxIDAgbm90
IChkaXJlY3RseSBvciBpbmRpcmVjdGx5KSBjb25uZWN0ZWQgdG8gcHJpbWFyeWNvbnRyb2xsZXIu
IENvbm5lY3RlZCB0byAvaW50ZXJydXB0LWNvbnRyb2xsZXJANjAwMDQwMDANCihYRU4pIERUOiAq
KiB0cmFuc2xhdGlvbiBmb3IgZGV2aWNlIC9tc2VsZWN0ICoqDQooWEVOKSBEVDogYnVzIGlzIGRl
ZmF1bHQgKG5hPTIsIG5zPTIpIG9uIC8NCihYRU4pIERUOiB0cmFuc2xhdGluZyBhZGRyZXNzOjwz
PiAwMDAwMDAwMDwzPiA1MDA2MDAwMDwzPg0KKFhFTikgRFQ6IHJlYWNoZWQgcm9vdCBub2RlDQoo
WEVOKSAgIC0gTU1JTzogMDA1MDA2MDAwMCAtIDAwNTAwNjEwMDAgUDJNVHlwZT01DQooWEVOKSBo
YW5kbGUgL2NwdWlkbGUNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vY3B1aWRsZQ0KKFhFTikg
L2NwdWlkbGUgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL2NwdWlk
bGUgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBk
ZXY9L2NwdWlkbGUNCihYRU4pIGhhbmRsZSAvYXBibWlzY0A3MDAwMDgwMA0KKFhFTikgZHRfaXJx
X251bWJlcjogZGV2PS9hcGJtaXNjQDcwMDAwODAwDQooWEVOKSAvYXBibWlzY0A3MDAwMDgwMCBw
YXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAyDQooWEVOKSBDaGVjayBpZiAvYXBibWlzY0A3MDAwMDgw
MCBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRl
dj0vYXBibWlzY0A3MDAwMDgwMA0KKFhFTikgRFQ6ICoqIHRyYW5zbGF0aW9uIGZvciBkZXZpY2Ug
L2FwYm1pc2NANzAwMDA4MDAgKioNCihYRU4pIERUOiBidXMgaXMgZGVmYXVsdCAobmE9MiwgbnM9
Mikgb24gLw0KKFhFTikgRFQ6IHRyYW5zbGF0aW5nIGFkZHJlc3M6PDM+IDAwMDAwMDAwPDM+IDcw
MDAwODAwPDM+DQooWEVOKSBEVDogcmVhY2hlZCByb290IG5vZGUNCihYRU4pICAgLSBNTUlPOiAw
MDcwMDAwODAwIC0gMDA3MDAwMDg2NCBQMk1UeXBlPTUNCihYRU4pIERUOiAqKiB0cmFuc2xhdGlv
biBmb3IgZGV2aWNlIC9hcGJtaXNjQDcwMDAwODAwICoqDQooWEVOKSBEVDogYnVzIGlzIGRlZmF1
bHQgKG5hPTIsIG5zPTIpIG9uIC8NCihYRU4pIERUOiB0cmFuc2xhdGluZyBhZGRyZXNzOjwzPiAw
MDAwMDAwMDwzPiA3MDAwMDAwODwzPg0KKFhFTikgRFQ6IHJlYWNoZWQgcm9vdCBub2RlDQooWEVO
KSAgIC0gTU1JTzogMDA3MDAwMDAwOCAtIDAwNzAwMDAwMGMgUDJNVHlwZT01DQooWEVOKSBoYW5k
bGUgL252ZHVtcGVyDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L252ZHVtcGVyDQooWEVOKSAv
bnZkdW1wZXIgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL252ZHVt
cGVyIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjog
ZGV2PS9udmR1bXBlcg0KKFhFTikgaGFuZGxlIC90ZWdyYS1wbWMtYmxpbmstcHdtDQooWEVOKSBk
dF9pcnFfbnVtYmVyOiBkZXY9L3RlZ3JhLXBtYy1ibGluay1wd20NCihYRU4pIC90ZWdyYS1wbWMt
YmxpbmstcHdtIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC90ZWdy
YS1wbWMtYmxpbmstcHdtIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRf
aXJxX251bWJlcjogZGV2PS90ZWdyYS1wbWMtYmxpbmstcHdtDQooWEVOKSBoYW5kbGUgL252cG1v
ZGVsDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L252cG1vZGVsDQooWEVOKSAvbnZwbW9kZWwg
cGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL252cG1vZGVsIGlzIGJl
aGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9udnBt
b2RlbA0KKFhFTikgaGFuZGxlIC9leHRjb24NCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vZXh0
Y29uDQooWEVOKSAvZXh0Y29uIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNr
IGlmIC9leHRjb24gaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFf
bnVtYmVyOiBkZXY9L2V4dGNvbg0KKFhFTikgaGFuZGxlIC9leHRjb24vZGlzcC1zdGF0ZQ0KKFhF
TikgZHRfaXJxX251bWJlcjogZGV2PS9leHRjb24vZGlzcC1zdGF0ZQ0KKFhFTikgL2V4dGNvbi9k
aXNwLXN0YXRlIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9leHRj
b24vZGlzcC1zdGF0ZSBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2ly
cV9udW1iZXI6IGRldj0vZXh0Y29uL2Rpc3Atc3RhdGUNCihYRU4pIGhhbmRsZSAvZXh0Y29uL2V4
dGNvbkAwDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2V4dGNvbi9leHRjb25AMA0KKFhFTikg
L2V4dGNvbi9leHRjb25AMCBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBp
ZiAvZXh0Y29uL2V4dGNvbkAwIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikg
ZHRfaXJxX251bWJlcjogZGV2PS9leHRjb24vZXh0Y29uQDANCihYRU4pIGhhbmRsZSAvZXh0Y29u
L2V4dGNvbkAxDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2V4dGNvbi9leHRjb25AMQ0KKFhF
TikgL2V4dGNvbi9leHRjb25AMSBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVj
ayBpZiAvZXh0Y29uL2V4dGNvbkAxIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhF
TikgZHRfaXJxX251bWJlcjogZGV2PS9leHRjb24vZXh0Y29uQDENCihYRU4pIGhhbmRsZSAvYnRo
cm90X2NkZXYNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vYnRocm90X2NkZXYNCihYRU4pIC9i
dGhyb3RfY2RldiBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvYnRo
cm90X2NkZXYgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVt
YmVyOiBkZXY9L2J0aHJvdF9jZGV2DQooWEVOKSBoYW5kbGUgL2J0aHJvdF9jZGV2L3NraW5fYmFs
YW5jZWQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vYnRocm90X2NkZXYvc2tpbl9iYWxhbmNl
ZA0KKFhFTikgL2J0aHJvdF9jZGV2L3NraW5fYmFsYW5jZWQgcGFzc3Rocm91Z2ggPSAxIG5hZGRy
ID0gMA0KKFhFTikgQ2hlY2sgaWYgL2J0aHJvdF9jZGV2L3NraW5fYmFsYW5jZWQgaXMgYmVoaW5k
IHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2J0aHJvdF9j
ZGV2L3NraW5fYmFsYW5jZWQNCihYRU4pIGhhbmRsZSAvYnRocm90X2NkZXYvZ3B1X2JhbGFuY2Vk
DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2J0aHJvdF9jZGV2L2dwdV9iYWxhbmNlZA0KKFhF
TikgL2J0aHJvdF9jZGV2L2dwdV9iYWxhbmNlZCBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQoo
WEVOKSBDaGVjayBpZiAvYnRocm90X2NkZXYvZ3B1X2JhbGFuY2VkIGlzIGJlaGluZCB0aGUgSU9N
TVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9idGhyb3RfY2Rldi9ncHVf
YmFsYW5jZWQNCihYRU4pIGhhbmRsZSAvYnRocm90X2NkZXYvY3B1X2JhbGFuY2VkDQooWEVOKSBk
dF9pcnFfbnVtYmVyOiBkZXY9L2J0aHJvdF9jZGV2L2NwdV9iYWxhbmNlZA0KKFhFTikgL2J0aHJv
dF9jZGV2L2NwdV9iYWxhbmNlZCBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVj
ayBpZiAvYnRocm90X2NkZXYvY3B1X2JhbGFuY2VkIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFk
ZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9idGhyb3RfY2Rldi9jcHVfYmFsYW5jZWQN
CihYRU4pIGhhbmRsZSAvYnRocm90X2NkZXYvZW1lcmdlbmN5X2JhbGFuY2VkDQooWEVOKSBkdF9p
cnFfbnVtYmVyOiBkZXY9L2J0aHJvdF9jZGV2L2VtZXJnZW5jeV9iYWxhbmNlZA0KKFhFTikgL2J0
aHJvdF9jZGV2L2VtZXJnZW5jeV9iYWxhbmNlZCBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQoo
WEVOKSBDaGVjayBpZiAvYnRocm90X2NkZXYvZW1lcmdlbmN5X2JhbGFuY2VkIGlzIGJlaGluZCB0
aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9idGhyb3RfY2Rl
di9lbWVyZ2VuY3lfYmFsYW5jZWQNCihYRU4pIGhhbmRsZSAvYWdpYy1jb250cm9sbGVyDQooWEVO
KSBkdF9pcnFfbnVtYmVyOiBkZXY9L2FnaWMtY29udHJvbGxlcg0KKFhFTikgL2FnaWMtY29udHJv
bGxlciBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvYWdpYy1jb250
cm9sbGVyIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJl
cjogZGV2PS9hZ2ljLWNvbnRyb2xsZXINCihYRU4pIGhhbmRsZSAvYWRtYUA3MDJlMjAwMA0KKFhF
TikgZHRfaXJxX251bWJlcjogZGV2PS9hZG1hQDcwMmUyMDAwDQooWEVOKSAvYWRtYUA3MDJlMjAw
MCBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvYWRtYUA3MDJlMjAw
MCBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRl
dj0vYWRtYUA3MDJlMjAwMA0KKFhFTikgaGFuZGxlIC9haHViDQooWEVOKSBkdF9pcnFfbnVtYmVy
OiBkZXY9L2FodWINCihYRU4pIC9haHViIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4p
IENoZWNrIGlmIC9haHViIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRf
aXJxX251bWJlcjogZGV2PS9haHViDQooWEVOKSBoYW5kbGUgL2FodWIvYWRtYWlmQDB4NzAyZDAw
MDANCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vYWh1Yi9hZG1haWZAMHg3MDJkMDAwMA0KKFhF
TikgL2FodWIvYWRtYWlmQDB4NzAyZDAwMDAgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhF
TikgQ2hlY2sgaWYgL2FodWIvYWRtYWlmQDB4NzAyZDAwMDAgaXMgYmVoaW5kIHRoZSBJT01NVSBh
bmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2FodWIvYWRtYWlmQDB4NzAyZDAw
MDANCihYRU4pIGhhbmRsZSAvYWh1Yi9zZmNANzAyZDIwMDANCihYRU4pIGR0X2lycV9udW1iZXI6
IGRldj0vYWh1Yi9zZmNANzAyZDIwMDANCihYRU4pIC9haHViL3NmY0A3MDJkMjAwMCBwYXNzdGhy
b3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvYWh1Yi9zZmNANzAyZDIwMDAgaXMg
YmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2Fo
dWIvc2ZjQDcwMmQyMDAwDQooWEVOKSBoYW5kbGUgL2FodWIvc2ZjQDcwMmQyMjAwDQooWEVOKSBk
dF9pcnFfbnVtYmVyOiBkZXY9L2FodWIvc2ZjQDcwMmQyMjAwDQooWEVOKSAvYWh1Yi9zZmNANzAy
ZDIyMDAgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL2FodWIvc2Zj
QDcwMmQyMjAwIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251
bWJlcjogZGV2PS9haHViL3NmY0A3MDJkMjIwMA0KKFhFTikgaGFuZGxlIC9haHViL3NmY0A3MDJk
MjQwMA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9haHViL3NmY0A3MDJkMjQwMA0KKFhFTikg
L2FodWIvc2ZjQDcwMmQyNDAwIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNr
IGlmIC9haHViL3NmY0A3MDJkMjQwMCBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihY
RU4pIGR0X2lycV9udW1iZXI6IGRldj0vYWh1Yi9zZmNANzAyZDI0MDANCihYRU4pIGhhbmRsZSAv
YWh1Yi9zZmNANzAyZDI2MDANCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vYWh1Yi9zZmNANzAy
ZDI2MDANCihYRU4pIC9haHViL3NmY0A3MDJkMjYwMCBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAw
DQooWEVOKSBDaGVjayBpZiAvYWh1Yi9zZmNANzAyZDI2MDAgaXMgYmVoaW5kIHRoZSBJT01NVSBh
bmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2FodWIvc2ZjQDcwMmQyNjAwDQoo
WEVOKSBoYW5kbGUgL2FodWIvc3BrcHJvdEA3MDJkOGMwMA0KKFhFTikgZHRfaXJxX251bWJlcjog
ZGV2PS9haHViL3Nwa3Byb3RANzAyZDhjMDANCihYRU4pIC9haHViL3Nwa3Byb3RANzAyZDhjMDAg
cGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL2FodWIvc3BrcHJvdEA3
MDJkOGMwMCBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1i
ZXI6IGRldj0vYWh1Yi9zcGtwcm90QDcwMmQ4YzAwDQooWEVOKSBoYW5kbGUgL2FodWIvYW1peGVy
QDcwMmRiYjAwDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2FodWIvYW1peGVyQDcwMmRiYjAw
DQooWEVOKSAvYWh1Yi9hbWl4ZXJANzAyZGJiMDAgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0K
KFhFTikgQ2hlY2sgaWYgL2FodWIvYW1peGVyQDcwMmRiYjAwIGlzIGJlaGluZCB0aGUgSU9NTVUg
YW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9haHViL2FtaXhlckA3MDJkYmIw
MA0KKFhFTikgaGFuZGxlIC9haHViL2kyc0A3MDJkMTAwMA0KKFhFTikgZHRfaXJxX251bWJlcjog
ZGV2PS9haHViL2kyc0A3MDJkMTAwMA0KKFhFTikgL2FodWIvaTJzQDcwMmQxMDAwIHBhc3N0aHJv
dWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9haHViL2kyc0A3MDJkMTAwMCBpcyBi
ZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vYWh1
Yi9pMnNANzAyZDEwMDANCihYRU4pIGhhbmRsZSAvYWh1Yi9pMnNANzAyZDExMDANCihYRU4pIGR0
X2lycV9udW1iZXI6IGRldj0vYWh1Yi9pMnNANzAyZDExMDANCihYRU4pIC9haHViL2kyc0A3MDJk
MTEwMCBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvYWh1Yi9pMnNA
NzAyZDExMDAgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVt
YmVyOiBkZXY9L2FodWIvaTJzQDcwMmQxMTAwDQooWEVOKSBoYW5kbGUgL2FodWIvaTJzQDcwMmQx
MjAwDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2FodWIvaTJzQDcwMmQxMjAwDQooWEVOKSAv
YWh1Yi9pMnNANzAyZDEyMDAgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sg
aWYgL2FodWIvaTJzQDcwMmQxMjAwIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhF
TikgZHRfaXJxX251bWJlcjogZGV2PS9haHViL2kyc0A3MDJkMTIwMA0KKFhFTikgaGFuZGxlIC9h
aHViL2kyc0A3MDJkMTMwMA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9haHViL2kyc0A3MDJk
MTMwMA0KKFhFTikgL2FodWIvaTJzQDcwMmQxMzAwIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDAN
CihYRU4pIENoZWNrIGlmIC9haHViL2kyc0A3MDJkMTMwMCBpcyBiZWhpbmQgdGhlIElPTU1VIGFu
ZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vYWh1Yi9pMnNANzAyZDEzMDANCihY
RU4pIGhhbmRsZSAvYWh1Yi9pMnNANzAyZDE0MDANCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0v
YWh1Yi9pMnNANzAyZDE0MDANCihYRU4pIC9haHViL2kyc0A3MDJkMTQwMCBwYXNzdGhyb3VnaCA9
IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvYWh1Yi9pMnNANzAyZDE0MDAgaXMgYmVoaW5k
IHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2FodWIvaTJz
QDcwMmQxNDAwDQooWEVOKSBoYW5kbGUgL2FodWIvYW14QDcwMmQzMDAwDQooWEVOKSBkdF9pcnFf
bnVtYmVyOiBkZXY9L2FodWIvYW14QDcwMmQzMDAwDQooWEVOKSAvYWh1Yi9hbXhANzAyZDMwMDAg
cGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL2FodWIvYW14QDcwMmQz
MDAwIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjog
ZGV2PS9haHViL2FteEA3MDJkMzAwMA0KKFhFTikgaGFuZGxlIC9haHViL2FteEA3MDJkMzEwMA0K
KFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9haHViL2FteEA3MDJkMzEwMA0KKFhFTikgL2FodWIv
YW14QDcwMmQzMTAwIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9h
aHViL2FteEA3MDJkMzEwMCBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0
X2lycV9udW1iZXI6IGRldj0vYWh1Yi9hbXhANzAyZDMxMDANCihYRU4pIGhhbmRsZSAvYWh1Yi9h
ZHhANzAyZDM4MDANCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vYWh1Yi9hZHhANzAyZDM4MDAN
CihYRU4pIC9haHViL2FkeEA3MDJkMzgwMCBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVO
KSBDaGVjayBpZiAvYWh1Yi9hZHhANzAyZDM4MDAgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRk
IGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2FodWIvYWR4QDcwMmQzODAwDQooWEVOKSBo
YW5kbGUgL2FodWIvYWR4QDcwMmQzOTAwDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2FodWIv
YWR4QDcwMmQzOTAwDQooWEVOKSAvYWh1Yi9hZHhANzAyZDM5MDAgcGFzc3Rocm91Z2ggPSAxIG5h
ZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL2FodWIvYWR4QDcwMmQzOTAwIGlzIGJlaGluZCB0aGUg
SU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9haHViL2FkeEA3MDJk
MzkwMA0KKFhFTikgaGFuZGxlIC9haHViL2RtaWNANzAyZDQwMDANCihYRU4pIGR0X2lycV9udW1i
ZXI6IGRldj0vYWh1Yi9kbWljQDcwMmQ0MDAwDQooWEVOKSAvYWh1Yi9kbWljQDcwMmQ0MDAwIHBh
c3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9haHViL2RtaWNANzAyZDQw
MDAgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBk
ZXY9L2FodWIvZG1pY0A3MDJkNDAwMA0KKFhFTikgaGFuZGxlIC9haHViL2RtaWNANzAyZDQxMDAN
CihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vYWh1Yi9kbWljQDcwMmQ0MTAwDQooWEVOKSAvYWh1
Yi9kbWljQDcwMmQ0MTAwIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlm
IC9haHViL2RtaWNANzAyZDQxMDAgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVO
KSBkdF9pcnFfbnVtYmVyOiBkZXY9L2FodWIvZG1pY0A3MDJkNDEwMA0KKFhFTikgaGFuZGxlIC9h
aHViL2RtaWNANzAyZDQyMDANCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vYWh1Yi9kbWljQDcw
MmQ0MjAwDQooWEVOKSAvYWh1Yi9kbWljQDcwMmQ0MjAwIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9
IDANCihYRU4pIENoZWNrIGlmIC9haHViL2RtaWNANzAyZDQyMDAgaXMgYmVoaW5kIHRoZSBJT01N
VSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2FodWIvZG1pY0A3MDJkNDIw
MA0KKFhFTikgaGFuZGxlIC9haHViL2FmY0A3MDJkNzAwMA0KKFhFTikgZHRfaXJxX251bWJlcjog
ZGV2PS9haHViL2FmY0A3MDJkNzAwMA0KKFhFTikgL2FodWIvYWZjQDcwMmQ3MDAwIHBhc3N0aHJv
dWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9haHViL2FmY0A3MDJkNzAwMCBpcyBi
ZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vYWh1
Yi9hZmNANzAyZDcwMDANCihYRU4pIGhhbmRsZSAvYWh1Yi9hZmNANzAyZDcxMDANCihYRU4pIGR0
X2lycV9udW1iZXI6IGRldj0vYWh1Yi9hZmNANzAyZDcxMDANCihYRU4pIC9haHViL2FmY0A3MDJk
NzEwMCBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvYWh1Yi9hZmNA
NzAyZDcxMDAgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVt
YmVyOiBkZXY9L2FodWIvYWZjQDcwMmQ3MTAwDQooWEVOKSBoYW5kbGUgL2FodWIvYWZjQDcwMmQ3
MjAwDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2FodWIvYWZjQDcwMmQ3MjAwDQooWEVOKSAv
YWh1Yi9hZmNANzAyZDcyMDAgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sg
aWYgL2FodWIvYWZjQDcwMmQ3MjAwIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhF
TikgZHRfaXJxX251bWJlcjogZGV2PS9haHViL2FmY0A3MDJkNzIwMA0KKFhFTikgaGFuZGxlIC9h
aHViL2FmY0A3MDJkNzMwMA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9haHViL2FmY0A3MDJk
NzMwMA0KKFhFTikgL2FodWIvYWZjQDcwMmQ3MzAwIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDAN
CihYRU4pIENoZWNrIGlmIC9haHViL2FmY0A3MDJkNzMwMCBpcyBiZWhpbmQgdGhlIElPTU1VIGFu
ZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vYWh1Yi9hZmNANzAyZDczMDANCihY
RU4pIGhhbmRsZSAvYWh1Yi9hZmNANzAyZDc0MDANCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0v
YWh1Yi9hZmNANzAyZDc0MDANCihYRU4pIC9haHViL2FmY0A3MDJkNzQwMCBwYXNzdGhyb3VnaCA9
IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvYWh1Yi9hZmNANzAyZDc0MDAgaXMgYmVoaW5k
IHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2FodWIvYWZj
QDcwMmQ3NDAwDQooWEVOKSBoYW5kbGUgL2FodWIvYWZjQDcwMmQ3NTAwDQooWEVOKSBkdF9pcnFf
bnVtYmVyOiBkZXY9L2FodWIvYWZjQDcwMmQ3NTAwDQooWEVOKSAvYWh1Yi9hZmNANzAyZDc1MDAg
cGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL2FodWIvYWZjQDcwMmQ3
NTAwIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjog
ZGV2PS9haHViL2FmY0A3MDJkNzUwMA0KKFhFTikgaGFuZGxlIC9haHViL212Y0A3MDJkYTAwMA0K
KFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9haHViL212Y0A3MDJkYTAwMA0KKFhFTikgL2FodWIv
bXZjQDcwMmRhMDAwIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9h
aHViL212Y0A3MDJkYTAwMCBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0
X2lycV9udW1iZXI6IGRldj0vYWh1Yi9tdmNANzAyZGEwMDANCihYRU4pIGhhbmRsZSAvYWh1Yi9t
dmNANzAyZGEyMDANCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vYWh1Yi9tdmNANzAyZGEyMDAN
CihYRU4pIC9haHViL212Y0A3MDJkYTIwMCBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVO
KSBDaGVjayBpZiAvYWh1Yi9tdmNANzAyZGEyMDAgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRk
IGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2FodWIvbXZjQDcwMmRhMjAwDQooWEVOKSBo
YW5kbGUgL2FodWIvaXFjQDcwMmRlMDAwDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2FodWIv
aXFjQDcwMmRlMDAwDQooWEVOKSAvYWh1Yi9pcWNANzAyZGUwMDAgcGFzc3Rocm91Z2ggPSAxIG5h
ZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL2FodWIvaXFjQDcwMmRlMDAwIGlzIGJlaGluZCB0aGUg
SU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9haHViL2lxY0A3MDJk
ZTAwMA0KKFhFTikgaGFuZGxlIC9haHViL2lxY0A3MDJkZTIwMA0KKFhFTikgZHRfaXJxX251bWJl
cjogZGV2PS9haHViL2lxY0A3MDJkZTIwMA0KKFhFTikgL2FodWIvaXFjQDcwMmRlMjAwIHBhc3N0
aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9haHViL2lxY0A3MDJkZTIwMCBp
cyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0v
YWh1Yi9pcWNANzAyZGUyMDANCihYRU4pIGhhbmRsZSAvYWh1Yi9vcGVANzAyZDgwMDANCihYRU4p
IGR0X2lycV9udW1iZXI6IGRldj0vYWh1Yi9vcGVANzAyZDgwMDANCihYRU4pIC9haHViL29wZUA3
MDJkODAwMCBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvYWh1Yi9v
cGVANzAyZDgwMDAgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFf
bnVtYmVyOiBkZXY9L2FodWIvb3BlQDcwMmQ4MDAwDQooWEVOKSBoYW5kbGUgL2FodWIvb3BlQDcw
MmQ4NDAwDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2FodWIvb3BlQDcwMmQ4NDAwDQooWEVO
KSAvYWh1Yi9vcGVANzAyZDg0MDAgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hl
Y2sgaWYgL2FodWIvb3BlQDcwMmQ4NDAwIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0K
KFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9haHViL29wZUA3MDJkODQwMA0KKFhFTikgaGFuZGxl
IC9hZHNwX2F1ZGlvDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2Fkc3BfYXVkaW8NCihYRU4p
IC9hZHNwX2F1ZGlvIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9h
ZHNwX2F1ZGlvIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251
bWJlcjogZGV2PS9hZHNwX2F1ZGlvDQooWEVOKSBoYW5kbGUgL3NhdGFANzAwMjAwMDANCihYRU4p
IGR0X2lycV9udW1iZXI6IGRldj0vc2F0YUA3MDAyMDAwMA0KKFhFTikgL3NhdGFANzAwMjAwMDAg
cGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3NhdGFANzAwMjAwMDAg
aXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9
L3NhdGFANzAwMjAwMDANCihYRU4pIGhhbmRsZSAvc2F0YUA3MDAyMDAwMC9wcm9kLXNldHRpbmdz
DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3NhdGFANzAwMjAwMDAvcHJvZC1zZXR0aW5ncw0K
KFhFTikgL3NhdGFANzAwMjAwMDAvcHJvZC1zZXR0aW5ncyBwYXNzdGhyb3VnaCA9IDEgbmFkZHIg
PSAwDQooWEVOKSBDaGVjayBpZiAvc2F0YUA3MDAyMDAwMC9wcm9kLXNldHRpbmdzIGlzIGJlaGlu
ZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9zYXRhQDcw
MDIwMDAwL3Byb2Qtc2V0dGluZ3MNCihYRU4pIGhhbmRsZSAvc2F0YUA3MDAyMDAwMC9wcm9kLXNl
dHRpbmdzL3Byb2QNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vc2F0YUA3MDAyMDAwMC9wcm9k
LXNldHRpbmdzL3Byb2QNCihYRU4pIC9zYXRhQDcwMDIwMDAwL3Byb2Qtc2V0dGluZ3MvcHJvZCBw
YXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvc2F0YUA3MDAyMDAwMC9w
cm9kLXNldHRpbmdzL3Byb2QgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBk
dF9pcnFfbnVtYmVyOiBkZXY9L3NhdGFANzAwMjAwMDAvcHJvZC1zZXR0aW5ncy9wcm9kDQooWEVO
KSBoYW5kbGUgL21vZGVtDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L21vZGVtDQooWEVOKSAv
bW9kZW0gcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL21vZGVtIGlz
IGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9t
b2RlbQ0KKFhFTikgaGFuZGxlIC9tb2RlbS9udmlkaWEscGh5LWVoY2ktaHNpYw0KKFhFTikgZHRf
aXJxX251bWJlcjogZGV2PS9tb2RlbS9udmlkaWEscGh5LWVoY2ktaHNpYw0KKFhFTikgL21vZGVt
L252aWRpYSxwaHktZWhjaS1oc2ljIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENo
ZWNrIGlmIC9tb2RlbS9udmlkaWEscGh5LWVoY2ktaHNpYyBpcyBiZWhpbmQgdGhlIElPTU1VIGFu
ZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vbW9kZW0vbnZpZGlhLHBoeS1laGNp
LWhzaWMNCihYRU4pIGhhbmRsZSAvbW9kZW0vbnZpZGlhLHBoeS14aGNpLWhzaWMNCihYRU4pIGR0
X2lycV9udW1iZXI6IGRldj0vbW9kZW0vbnZpZGlhLHBoeS14aGNpLWhzaWMNCihYRU4pIC9tb2Rl
bS9udmlkaWEscGh5LXhoY2ktaHNpYyBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBD
aGVjayBpZiAvbW9kZW0vbnZpZGlhLHBoeS14aGNpLWhzaWMgaXMgYmVoaW5kIHRoZSBJT01NVSBh
bmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L21vZGVtL252aWRpYSxwaHkteGhj
aS1oc2ljDQooWEVOKSBoYW5kbGUgL21vZGVtL252aWRpYSxwaHkteGhjaS11dG1pDQooWEVOKSBk
dF9pcnFfbnVtYmVyOiBkZXY9L21vZGVtL252aWRpYSxwaHkteGhjaS11dG1pDQooWEVOKSAvbW9k
ZW0vbnZpZGlhLHBoeS14aGNpLXV0bWkgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikg
Q2hlY2sgaWYgL21vZGVtL252aWRpYSxwaHkteGhjaS11dG1pIGlzIGJlaGluZCB0aGUgSU9NTVUg
YW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9tb2RlbS9udmlkaWEscGh5LXho
Y2ktdXRtaQ0KKFhFTikgaGFuZGxlIC90cnVzdHkNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0v
dHJ1c3R5DQooWEVOKSAvdHJ1c3R5IHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENo
ZWNrIGlmIC90cnVzdHkgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9p
cnFfbnVtYmVyOiBkZXY9L3RydXN0eQ0KKFhFTikgaGFuZGxlIC90cnVzdHkvaXJxDQooWEVOKSBk
dF9pcnFfbnVtYmVyOiBkZXY9L3RydXN0eS9pcnENCihYRU4pIC90cnVzdHkvaXJxIHBhc3N0aHJv
dWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC90cnVzdHkvaXJxIGlzIGJlaGluZCB0
aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS90cnVzdHkvaXJx
DQooWEVOKSBoYW5kbGUgL3RydXN0eS9maXENCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vdHJ1
c3R5L2ZpcQ0KKFhFTikgL3RydXN0eS9maXEgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhF
TikgQ2hlY2sgaWYgL3RydXN0eS9maXEgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQoo
WEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3RydXN0eS9maXENCihYRU4pIGhhbmRsZSAvdHJ1c3R5
L3ZpcnRpbw0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS90cnVzdHkvdmlydGlvDQooWEVOKSAv
dHJ1c3R5L3ZpcnRpbyBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAv
dHJ1c3R5L3ZpcnRpbyBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2ly
cV9udW1iZXI6IGRldj0vdHJ1c3R5L3ZpcnRpbw0KKFhFTikgaGFuZGxlIC90cnVzdHkvbG9nDQoo
WEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3RydXN0eS9sb2cNCihYRU4pIC90cnVzdHkvbG9nIHBh
c3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC90cnVzdHkvbG9nIGlzIGJl
aGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS90cnVz
dHkvbG9nDQooWEVOKSBoYW5kbGUgL3NtcC1jdXN0b20taXBpDQooWEVOKSBkdF9pcnFfbnVtYmVy
OiBkZXY9L3NtcC1jdXN0b20taXBpDQooWEVOKSAvc21wLWN1c3RvbS1pcGkgcGFzc3Rocm91Z2gg
PSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3NtcC1jdXN0b20taXBpIGlzIGJlaGluZCB0
aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9zbXAtY3VzdG9t
LWlwaQ0KKFhFTikgaGFuZGxlIC9wc3lfZXh0Y29uX3h1ZGMNCihYRU4pIGR0X2lycV9udW1iZXI6
IGRldj0vcHN5X2V4dGNvbl94dWRjDQooWEVOKSAvcHN5X2V4dGNvbl94dWRjIHBhc3N0aHJvdWdo
ID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9wc3lfZXh0Y29uX3h1ZGMgaXMgYmVoaW5k
IHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3BzeV9leHRj
b25feHVkYw0KKFhFTikgaGFuZGxlIC90ZWdyYS1zdXBwbHktdGVzdHMNCihYRU4pIGR0X2lycV9u
dW1iZXI6IGRldj0vdGVncmEtc3VwcGx5LXRlc3RzDQooWEVOKSAvdGVncmEtc3VwcGx5LXRlc3Rz
IHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC90ZWdyYS1zdXBwbHkt
dGVzdHMgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVy
OiBkZXY9L3RlZ3JhLXN1cHBseS10ZXN0cw0KKFhFTikgaGFuZGxlIC9jYW1lcmEtcGNsDQooWEVO
KSBkdF9pcnFfbnVtYmVyOiBkZXY9L2NhbWVyYS1wY2wNCihYRU4pIC9jYW1lcmEtcGNsIHBhc3N0
aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9jYW1lcmEtcGNsIGlzIGJlaGlu
ZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9jYW1lcmEt
cGNsDQooWEVOKSBoYW5kbGUgL2NhbWVyYS1wY2wvZHBkDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBk
ZXY9L2NhbWVyYS1wY2wvZHBkDQooWEVOKSAvY2FtZXJhLXBjbC9kcGQgcGFzc3Rocm91Z2ggPSAx
IG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL2NhbWVyYS1wY2wvZHBkIGlzIGJlaGluZCB0aGUg
SU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9jYW1lcmEtcGNsL2Rw
ZA0KKFhFTikgaGFuZGxlIC9jYW1lcmEtcGNsL2RwZC9jc2lhDQooWEVOKSBkdF9pcnFfbnVtYmVy
OiBkZXY9L2NhbWVyYS1wY2wvZHBkL2NzaWENCihYRU4pIC9jYW1lcmEtcGNsL2RwZC9jc2lhIHBh
c3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9jYW1lcmEtcGNsL2RwZC9j
c2lhIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjog
ZGV2PS9jYW1lcmEtcGNsL2RwZC9jc2lhDQooWEVOKSBoYW5kbGUgL2NhbWVyYS1wY2wvZHBkL2Nz
aWINCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vY2FtZXJhLXBjbC9kcGQvY3NpYg0KKFhFTikg
L2NhbWVyYS1wY2wvZHBkL2NzaWIgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hl
Y2sgaWYgL2NhbWVyYS1wY2wvZHBkL2NzaWIgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0
DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2NhbWVyYS1wY2wvZHBkL2NzaWINCihYRU4pIGhh
bmRsZSAvY2FtZXJhLXBjbC9kcGQvY3NpYw0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9jYW1l
cmEtcGNsL2RwZC9jc2ljDQooWEVOKSAvY2FtZXJhLXBjbC9kcGQvY3NpYyBwYXNzdGhyb3VnaCA9
IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvY2FtZXJhLXBjbC9kcGQvY3NpYyBpcyBiZWhp
bmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vY2FtZXJh
LXBjbC9kcGQvY3NpYw0KKFhFTikgaGFuZGxlIC9jYW1lcmEtcGNsL2RwZC9jc2lkDQooWEVOKSBk
dF9pcnFfbnVtYmVyOiBkZXY9L2NhbWVyYS1wY2wvZHBkL2NzaWQNCihYRU4pIC9jYW1lcmEtcGNs
L2RwZC9jc2lkIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9jYW1l
cmEtcGNsL2RwZC9jc2lkIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRf
aXJxX251bWJlcjogZGV2PS9jYW1lcmEtcGNsL2RwZC9jc2lkDQooWEVOKSBoYW5kbGUgL2NhbWVy
YS1wY2wvZHBkL2NzaWUNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vY2FtZXJhLXBjbC9kcGQv
Y3NpZQ0KKFhFTikgL2NhbWVyYS1wY2wvZHBkL2NzaWUgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0g
MA0KKFhFTikgQ2hlY2sgaWYgL2NhbWVyYS1wY2wvZHBkL2NzaWUgaXMgYmVoaW5kIHRoZSBJT01N
VSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2NhbWVyYS1wY2wvZHBkL2Nz
aWUNCihYRU4pIGhhbmRsZSAvY2FtZXJhLXBjbC9kcGQvY3NpZg0KKFhFTikgZHRfaXJxX251bWJl
cjogZGV2PS9jYW1lcmEtcGNsL2RwZC9jc2lmDQooWEVOKSAvY2FtZXJhLXBjbC9kcGQvY3NpZiBw
YXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvY2FtZXJhLXBjbC9kcGQv
Y3NpZiBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6
IGRldj0vY2FtZXJhLXBjbC9kcGQvY3NpZg0KKFhFTikgaGFuZGxlIC9yb2xsYmFjay1wcm90ZWN0
aW9uDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3JvbGxiYWNrLXByb3RlY3Rpb24NCihYRU4p
IC9yb2xsYmFjay1wcm90ZWN0aW9uIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENo
ZWNrIGlmIC9yb2xsYmFjay1wcm90ZWN0aW9uIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBp
dA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9yb2xsYmFjay1wcm90ZWN0aW9uDQooWEVOKSBo
YW5kbGUgL2V4dGVybmFsLW1lbW9yeS1jb250cm9sbGVyQDcwMDFiMDAwDQooWEVOKSBkdF9pcnFf
bnVtYmVyOiBkZXY9L2V4dGVybmFsLW1lbW9yeS1jb250cm9sbGVyQDcwMDFiMDAwDQooWEVOKSAv
ZXh0ZXJuYWwtbWVtb3J5LWNvbnRyb2xsZXJANzAwMWIwMDAgcGFzc3Rocm91Z2ggPSAxIG5hZGRy
ID0gMw0KKFhFTikgQ2hlY2sgaWYgL2V4dGVybmFsLW1lbW9yeS1jb250cm9sbGVyQDcwMDFiMDAw
IGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2
PS9leHRlcm5hbC1tZW1vcnktY29udHJvbGxlckA3MDAxYjAwMA0KKFhFTikgRFQ6ICoqIHRyYW5z
bGF0aW9uIGZvciBkZXZpY2UgL2V4dGVybmFsLW1lbW9yeS1jb250cm9sbGVyQDcwMDFiMDAwICoq
DQooWEVOKSBEVDogYnVzIGlzIGRlZmF1bHQgKG5hPTIsIG5zPTIpIG9uIC8NCihYRU4pIERUOiB0
cmFuc2xhdGluZyBhZGRyZXNzOjwzPiAwMDAwMDAwMDwzPiA3MDAxYjAwMDwzPg0KKFhFTikgRFQ6
IHJlYWNoZWQgcm9vdCBub2RlDQooWEVOKSAgIC0gTU1JTzogMDA3MDAxYjAwMCAtIDAwNzAwMWMw
MDAgUDJNVHlwZT01DQooWEVOKSBEVDogKiogdHJhbnNsYXRpb24gZm9yIGRldmljZSAvZXh0ZXJu
YWwtbWVtb3J5LWNvbnRyb2xsZXJANzAwMWIwMDAgKioNCihYRU4pIERUOiBidXMgaXMgZGVmYXVs
dCAobmE9MiwgbnM9Mikgb24gLw0KKFhFTikgRFQ6IHRyYW5zbGF0aW5nIGFkZHJlc3M6PDM+IDAw
MDAwMDAwPDM+IDcwMDFlMDAwPDM+DQooWEVOKSBEVDogcmVhY2hlZCByb290IG5vZGUNCihYRU4p
ICAgLSBNTUlPOiAwMDcwMDFlMDAwIC0gMDA3MDAxZjAwMCBQMk1UeXBlPTUNCihYRU4pIERUOiAq
KiB0cmFuc2xhdGlvbiBmb3IgZGV2aWNlIC9leHRlcm5hbC1tZW1vcnktY29udHJvbGxlckA3MDAx
YjAwMCAqKg0KKFhFTikgRFQ6IGJ1cyBpcyBkZWZhdWx0IChuYT0yLCBucz0yKSBvbiAvDQooWEVO
KSBEVDogdHJhbnNsYXRpbmcgYWRkcmVzczo8Mz4gMDAwMDAwMDA8Mz4gNzAwMWYwMDA8Mz4NCihY
RU4pIERUOiByZWFjaGVkIHJvb3Qgbm9kZQ0KKFhFTikgICAtIE1NSU86IDAwNzAwMWYwMDAgLSAw
MDcwMDIwMDAwIFAyTVR5cGU9NQ0KKFhFTikgaGFuZGxlIC9leHRlcm5hbC1tZW1vcnktY29udHJv
bGxlckA3MDAxYjAwMC9lbWMtdGFibGVAMA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9leHRl
cm5hbC1tZW1vcnktY29udHJvbGxlckA3MDAxYjAwMC9lbWMtdGFibGVAMA0KKFhFTikgL2V4dGVy
bmFsLW1lbW9yeS1jb250cm9sbGVyQDcwMDFiMDAwL2VtYy10YWJsZUAwIHBhc3N0aHJvdWdoID0g
MSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9leHRlcm5hbC1tZW1vcnktY29udHJvbGxlckA3
MDAxYjAwMC9lbWMtdGFibGVAMCBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4p
IGR0X2lycV9udW1iZXI6IGRldj0vZXh0ZXJuYWwtbWVtb3J5LWNvbnRyb2xsZXJANzAwMWIwMDAv
ZW1jLXRhYmxlQDANCihYRU4pIGhhbmRsZSAvZXh0ZXJuYWwtbWVtb3J5LWNvbnRyb2xsZXJANzAw
MWIwMDAvZW1jLXRhYmxlQDAvZW1jLXRhYmxlQDIwNDAwMA0KKFhFTikgZHRfaXJxX251bWJlcjog
ZGV2PS9leHRlcm5hbC1tZW1vcnktY29udHJvbGxlckA3MDAxYjAwMC9lbWMtdGFibGVAMC9lbWMt
dGFibGVAMjA0MDAwDQooWEVOKSAvZXh0ZXJuYWwtbWVtb3J5LWNvbnRyb2xsZXJANzAwMWIwMDAv
ZW1jLXRhYmxlQDAvZW1jLXRhYmxlQDIwNDAwMCBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQoo
WEVOKSBDaGVjayBpZiAvZXh0ZXJuYWwtbWVtb3J5LWNvbnRyb2xsZXJANzAwMWIwMDAvZW1jLXRh
YmxlQDAvZW1jLXRhYmxlQDIwNDAwMCBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihY
RU4pIGR0X2lycV9udW1iZXI6IGRldj0vZXh0ZXJuYWwtbWVtb3J5LWNvbnRyb2xsZXJANzAwMWIw
MDAvZW1jLXRhYmxlQDAvZW1jLXRhYmxlQDIwNDAwMA0KKFhFTikgaGFuZGxlIC9leHRlcm5hbC1t
ZW1vcnktY29udHJvbGxlckA3MDAxYjAwMC9lbWMtdGFibGVAMC9lbWMtdGFibGVAMTYwMDAwMA0K
KFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9leHRlcm5hbC1tZW1vcnktY29udHJvbGxlckA3MDAx
YjAwMC9lbWMtdGFibGVAMC9lbWMtdGFibGVAMTYwMDAwMA0KKFhFTikgL2V4dGVybmFsLW1lbW9y
eS1jb250cm9sbGVyQDcwMDFiMDAwL2VtYy10YWJsZUAwL2VtYy10YWJsZUAxNjAwMDAwIHBhc3N0
aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9leHRlcm5hbC1tZW1vcnktY29u
dHJvbGxlckA3MDAxYjAwMC9lbWMtdGFibGVAMC9lbWMtdGFibGVAMTYwMDAwMCBpcyBiZWhpbmQg
dGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vZXh0ZXJuYWwt
bWVtb3J5LWNvbnRyb2xsZXJANzAwMWIwMDAvZW1jLXRhYmxlQDAvZW1jLXRhYmxlQDE2MDAwMDAN
CihYRU4pIGhhbmRsZSAvZXh0ZXJuYWwtbWVtb3J5LWNvbnRyb2xsZXJANzAwMWIwMDAvZW1jLXRh
YmxlQDAvZW1jLXRhYmxlLWRlcmF0ZWRAMjA0MDAwDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9
L2V4dGVybmFsLW1lbW9yeS1jb250cm9sbGVyQDcwMDFiMDAwL2VtYy10YWJsZUAwL2VtYy10YWJs
ZS1kZXJhdGVkQDIwNDAwMA0KKFhFTikgL2V4dGVybmFsLW1lbW9yeS1jb250cm9sbGVyQDcwMDFi
MDAwL2VtYy10YWJsZUAwL2VtYy10YWJsZS1kZXJhdGVkQDIwNDAwMCBwYXNzdGhyb3VnaCA9IDEg
bmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvZXh0ZXJuYWwtbWVtb3J5LWNvbnRyb2xsZXJANzAw
MWIwMDAvZW1jLXRhYmxlQDAvZW1jLXRhYmxlLWRlcmF0ZWRAMjA0MDAwIGlzIGJlaGluZCB0aGUg
SU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9leHRlcm5hbC1tZW1v
cnktY29udHJvbGxlckA3MDAxYjAwMC9lbWMtdGFibGVAMC9lbWMtdGFibGUtZGVyYXRlZEAyMDQw
MDANCihYRU4pIGhhbmRsZSAvZXh0ZXJuYWwtbWVtb3J5LWNvbnRyb2xsZXJANzAwMWIwMDAvZW1j
LXRhYmxlQDAvZW1jLXRhYmxlLWRlcmF0ZWRAMTYwMDAwMA0KKFhFTikgZHRfaXJxX251bWJlcjog
ZGV2PS9leHRlcm5hbC1tZW1vcnktY29udHJvbGxlckA3MDAxYjAwMC9lbWMtdGFibGVAMC9lbWMt
dGFibGUtZGVyYXRlZEAxNjAwMDAwDQooWEVOKSAvZXh0ZXJuYWwtbWVtb3J5LWNvbnRyb2xsZXJA
NzAwMWIwMDAvZW1jLXRhYmxlQDAvZW1jLXRhYmxlLWRlcmF0ZWRAMTYwMDAwMCBwYXNzdGhyb3Vn
aCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvZXh0ZXJuYWwtbWVtb3J5LWNvbnRyb2xs
ZXJANzAwMWIwMDAvZW1jLXRhYmxlQDAvZW1jLXRhYmxlLWRlcmF0ZWRAMTYwMDAwMCBpcyBiZWhp
bmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vZXh0ZXJu
YWwtbWVtb3J5LWNvbnRyb2xsZXJANzAwMWIwMDAvZW1jLXRhYmxlQDAvZW1jLXRhYmxlLWRlcmF0
ZWRAMTYwMDAwMA0KKFhFTikgaGFuZGxlIC9kdW1teS1jb29sLWRldg0KKFhFTikgZHRfaXJxX251
bWJlcjogZGV2PS9kdW1teS1jb29sLWRldg0KKFhFTikgL2R1bW15LWNvb2wtZGV2IHBhc3N0aHJv
dWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9kdW1teS1jb29sLWRldiBpcyBiZWhp
bmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vZHVtbXkt
Y29vbC1kZXYNCihYRU4pIGhhbmRsZSAvcmVndWxhdG9ycw0KKFhFTikgZHRfaXJxX251bWJlcjog
ZGV2PS9yZWd1bGF0b3JzDQooWEVOKSAvcmVndWxhdG9ycyBwYXNzdGhyb3VnaCA9IDEgbmFkZHIg
PSAwDQooWEVOKSBDaGVjayBpZiAvcmVndWxhdG9ycyBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBh
ZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcmVndWxhdG9ycw0KKFhFTikgaGFuZGxl
IC9yZWd1bGF0b3JzL3JlZ3VsYXRvckAwDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3JlZ3Vs
YXRvcnMvcmVndWxhdG9yQDANCihYRU4pIC9yZWd1bGF0b3JzL3JlZ3VsYXRvckAwIHBhc3N0aHJv
dWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9yZWd1bGF0b3JzL3JlZ3VsYXRvckAw
IGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2
PS9yZWd1bGF0b3JzL3JlZ3VsYXRvckAwDQooWEVOKSBoYW5kbGUgL3JlZ3VsYXRvcnMvcmVndWxh
dG9yQDENCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcmVndWxhdG9ycy9yZWd1bGF0b3JAMQ0K
KFhFTikgL3JlZ3VsYXRvcnMvcmVndWxhdG9yQDEgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0K
KFhFTikgQ2hlY2sgaWYgL3JlZ3VsYXRvcnMvcmVndWxhdG9yQDEgaXMgYmVoaW5kIHRoZSBJT01N
VSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3JlZ3VsYXRvcnMvcmVndWxh
dG9yQDENCihYRU4pIGhhbmRsZSAvcmVndWxhdG9ycy9yZWd1bGF0b3JAMg0KKFhFTikgZHRfaXJx
X251bWJlcjogZGV2PS9yZWd1bGF0b3JzL3JlZ3VsYXRvckAyDQooWEVOKSAvcmVndWxhdG9ycy9y
ZWd1bGF0b3JAMiBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvcmVn
dWxhdG9ycy9yZWd1bGF0b3JAMiBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4p
IGR0X2lycV9udW1iZXI6IGRldj0vcmVndWxhdG9ycy9yZWd1bGF0b3JAMg0KKFhFTikgaGFuZGxl
IC9yZWd1bGF0b3JzL3JlZ3VsYXRvckAzDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3JlZ3Vs
YXRvcnMvcmVndWxhdG9yQDMNCihYRU4pIC9yZWd1bGF0b3JzL3JlZ3VsYXRvckAzIHBhc3N0aHJv
dWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9yZWd1bGF0b3JzL3JlZ3VsYXRvckAz
IGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2
PS9yZWd1bGF0b3JzL3JlZ3VsYXRvckAzDQooWEVOKSBoYW5kbGUgL3JlZ3VsYXRvcnMvcmVndWxh
dG9yQDQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcmVndWxhdG9ycy9yZWd1bGF0b3JANA0K
KFhFTikgL3JlZ3VsYXRvcnMvcmVndWxhdG9yQDQgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0K
KFhFTikgQ2hlY2sgaWYgL3JlZ3VsYXRvcnMvcmVndWxhdG9yQDQgaXMgYmVoaW5kIHRoZSBJT01N
VSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3JlZ3VsYXRvcnMvcmVndWxh
dG9yQDQNCihYRU4pIGhhbmRsZSAvcmVndWxhdG9ycy9yZWd1bGF0b3JANQ0KKFhFTikgZHRfaXJx
X251bWJlcjogZGV2PS9yZWd1bGF0b3JzL3JlZ3VsYXRvckA1DQooWEVOKSAvcmVndWxhdG9ycy9y
ZWd1bGF0b3JANSBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvcmVn
dWxhdG9ycy9yZWd1bGF0b3JANSBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4p
IGR0X2lycV9udW1iZXI6IGRldj0vcmVndWxhdG9ycy9yZWd1bGF0b3JANQ0KKFhFTikgaGFuZGxl
IC9yZWd1bGF0b3JzL3JlZ3VsYXRvckA2DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3JlZ3Vs
YXRvcnMvcmVndWxhdG9yQDYNCihYRU4pIC9yZWd1bGF0b3JzL3JlZ3VsYXRvckA2IHBhc3N0aHJv
dWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9yZWd1bGF0b3JzL3JlZ3VsYXRvckA2
IGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2
PS9yZWd1bGF0b3JzL3JlZ3VsYXRvckA2DQooWEVOKSBoYW5kbGUgL3JlZ3VsYXRvcnMvcmVndWxh
dG9yQDcNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcmVndWxhdG9ycy9yZWd1bGF0b3JANw0K
KFhFTikgL3JlZ3VsYXRvcnMvcmVndWxhdG9yQDcgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0K
KFhFTikgQ2hlY2sgaWYgL3JlZ3VsYXRvcnMvcmVndWxhdG9yQDcgaXMgYmVoaW5kIHRoZSBJT01N
VSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3JlZ3VsYXRvcnMvcmVndWxh
dG9yQDcNCihYRU4pIGhhbmRsZSAvcmVndWxhdG9ycy9yZWd1bGF0b3JAOA0KKFhFTikgZHRfaXJx
X251bWJlcjogZGV2PS9yZWd1bGF0b3JzL3JlZ3VsYXRvckA4DQooWEVOKSAvcmVndWxhdG9ycy9y
ZWd1bGF0b3JAOCBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvcmVn
dWxhdG9ycy9yZWd1bGF0b3JAOCBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4p
IGR0X2lycV9udW1iZXI6IGRldj0vcmVndWxhdG9ycy9yZWd1bGF0b3JAOA0KKFhFTikgaGFuZGxl
IC9yZWd1bGF0b3JzL3JlZ3VsYXRvckA5DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3JlZ3Vs
YXRvcnMvcmVndWxhdG9yQDkNCihYRU4pIC9yZWd1bGF0b3JzL3JlZ3VsYXRvckA5IHBhc3N0aHJv
dWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9yZWd1bGF0b3JzL3JlZ3VsYXRvckA5
IGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2
PS9yZWd1bGF0b3JzL3JlZ3VsYXRvckA5DQooWEVOKSBoYW5kbGUgL3JlZ3VsYXRvcnMvcmVndWxh
dG9yQDEwDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3JlZ3VsYXRvcnMvcmVndWxhdG9yQDEw
DQooWEVOKSAvcmVndWxhdG9ycy9yZWd1bGF0b3JAMTAgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0g
MA0KKFhFTikgQ2hlY2sgaWYgL3JlZ3VsYXRvcnMvcmVndWxhdG9yQDEwIGlzIGJlaGluZCB0aGUg
SU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9yZWd1bGF0b3JzL3Jl
Z3VsYXRvckAxMA0KKFhFTikgaGFuZGxlIC9wd20tZmFuDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBk
ZXY9L3B3bS1mYW4NCihYRU4pIC9wd20tZmFuIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihY
RU4pIENoZWNrIGlmIC9wd20tZmFuIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhF
TikgZHRfaXJxX251bWJlcjogZGV2PS9wd20tZmFuDQooWEVOKSBoYW5kbGUgL2R2ZnNfcmFpbHMN
CihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vZHZmc19yYWlscw0KKFhFTikgL2R2ZnNfcmFpbHMg
cGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL2R2ZnNfcmFpbHMgaXMg
YmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2R2
ZnNfcmFpbHMNCihYRU4pIGhhbmRsZSAvZHZmc19yYWlscy92ZGQtZ3B1LXNjYWxpbmctY2RldkA3
DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2R2ZnNfcmFpbHMvdmRkLWdwdS1zY2FsaW5nLWNk
ZXZANw0KKFhFTikgL2R2ZnNfcmFpbHMvdmRkLWdwdS1zY2FsaW5nLWNkZXZANyBwYXNzdGhyb3Vn
aCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvZHZmc19yYWlscy92ZGQtZ3B1LXNjYWxp
bmctY2RldkA3IGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251
bWJlcjogZGV2PS9kdmZzX3JhaWxzL3ZkZC1ncHUtc2NhbGluZy1jZGV2QDcNCihYRU4pIGhhbmRs
ZSAvZHZmc19yYWlscy92ZGQtZ3B1LXZtYXgtY2RldkA5DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBk
ZXY9L2R2ZnNfcmFpbHMvdmRkLWdwdS12bWF4LWNkZXZAOQ0KKFhFTikgL2R2ZnNfcmFpbHMvdmRk
LWdwdS12bWF4LWNkZXZAOSBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBp
ZiAvZHZmc19yYWlscy92ZGQtZ3B1LXZtYXgtY2RldkA5IGlzIGJlaGluZCB0aGUgSU9NTVUgYW5k
IGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9kdmZzX3JhaWxzL3ZkZC1ncHUtdm1h
eC1jZGV2QDkNCihYRU4pIGhhbmRsZSAvcGZzZA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9w
ZnNkDQooWEVOKSAvcGZzZCBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBp
ZiAvcGZzZCBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1i
ZXI6IGRldj0vcGZzZA0KKFhFTikgaGFuZGxlIC90ZWdyYS1jYW1lcmEtcGxhdGZvcm0NCihYRU4p
IGR0X2lycV9udW1iZXI6IGRldj0vdGVncmEtY2FtZXJhLXBsYXRmb3JtDQooWEVOKSAvdGVncmEt
Y2FtZXJhLXBsYXRmb3JtIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlm
IC90ZWdyYS1jYW1lcmEtcGxhdGZvcm0gaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQoo
WEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3RlZ3JhLWNhbWVyYS1wbGF0Zm9ybQ0KKFhFTikgaGFu
ZGxlIC90ZWdyYS1jYW1lcmEtcGxhdGZvcm0vbW9kdWxlcw0KKFhFTikgZHRfaXJxX251bWJlcjog
ZGV2PS90ZWdyYS1jYW1lcmEtcGxhdGZvcm0vbW9kdWxlcw0KKFhFTikgL3RlZ3JhLWNhbWVyYS1w
bGF0Zm9ybS9tb2R1bGVzIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlm
IC90ZWdyYS1jYW1lcmEtcGxhdGZvcm0vbW9kdWxlcyBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBh
ZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vdGVncmEtY2FtZXJhLXBsYXRmb3JtL21v
ZHVsZXMNCihYRU4pIGhhbmRsZSAvdGVncmEtY2FtZXJhLXBsYXRmb3JtL21vZHVsZXMvbW9kdWxl
MA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS90ZWdyYS1jYW1lcmEtcGxhdGZvcm0vbW9kdWxl
cy9tb2R1bGUwDQooWEVOKSAvdGVncmEtY2FtZXJhLXBsYXRmb3JtL21vZHVsZXMvbW9kdWxlMCBw
YXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvdGVncmEtY2FtZXJhLXBs
YXRmb3JtL21vZHVsZXMvbW9kdWxlMCBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihY
RU4pIGR0X2lycV9udW1iZXI6IGRldj0vdGVncmEtY2FtZXJhLXBsYXRmb3JtL21vZHVsZXMvbW9k
dWxlMA0KKFhFTikgaGFuZGxlIC90ZWdyYS1jYW1lcmEtcGxhdGZvcm0vbW9kdWxlcy9tb2R1bGUw
L2RyaXZlcm5vZGUwDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3RlZ3JhLWNhbWVyYS1wbGF0
Zm9ybS9tb2R1bGVzL21vZHVsZTAvZHJpdmVybm9kZTANCihYRU4pIC90ZWdyYS1jYW1lcmEtcGxh
dGZvcm0vbW9kdWxlcy9tb2R1bGUwL2RyaXZlcm5vZGUwIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9
IDANCihYRU4pIENoZWNrIGlmIC90ZWdyYS1jYW1lcmEtcGxhdGZvcm0vbW9kdWxlcy9tb2R1bGUw
L2RyaXZlcm5vZGUwIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJx
X251bWJlcjogZGV2PS90ZWdyYS1jYW1lcmEtcGxhdGZvcm0vbW9kdWxlcy9tb2R1bGUwL2RyaXZl
cm5vZGUwDQooWEVOKSBoYW5kbGUgL3RlZ3JhLWNhbWVyYS1wbGF0Zm9ybS9tb2R1bGVzL21vZHVs
ZTAvZHJpdmVybm9kZTENCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vdGVncmEtY2FtZXJhLXBs
YXRmb3JtL21vZHVsZXMvbW9kdWxlMC9kcml2ZXJub2RlMQ0KKFhFTikgL3RlZ3JhLWNhbWVyYS1w
bGF0Zm9ybS9tb2R1bGVzL21vZHVsZTAvZHJpdmVybm9kZTEgcGFzc3Rocm91Z2ggPSAxIG5hZGRy
ID0gMA0KKFhFTikgQ2hlY2sgaWYgL3RlZ3JhLWNhbWVyYS1wbGF0Zm9ybS9tb2R1bGVzL21vZHVs
ZTAvZHJpdmVybm9kZTEgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9p
cnFfbnVtYmVyOiBkZXY9L3RlZ3JhLWNhbWVyYS1wbGF0Zm9ybS9tb2R1bGVzL21vZHVsZTAvZHJp
dmVybm9kZTENCihYRU4pIGhhbmRsZSAvdGVncmEtY2FtZXJhLXBsYXRmb3JtL21vZHVsZXMvbW9k
dWxlMQ0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS90ZWdyYS1jYW1lcmEtcGxhdGZvcm0vbW9k
dWxlcy9tb2R1bGUxDQooWEVOKSAvdGVncmEtY2FtZXJhLXBsYXRmb3JtL21vZHVsZXMvbW9kdWxl
MSBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvdGVncmEtY2FtZXJh
LXBsYXRmb3JtL21vZHVsZXMvbW9kdWxlMSBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQN
CihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vdGVncmEtY2FtZXJhLXBsYXRmb3JtL21vZHVsZXMv
bW9kdWxlMQ0KKFhFTikgaGFuZGxlIC90ZWdyYS1jYW1lcmEtcGxhdGZvcm0vbW9kdWxlcy9tb2R1
bGUxL2RyaXZlcm5vZGUwDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3RlZ3JhLWNhbWVyYS1w
bGF0Zm9ybS9tb2R1bGVzL21vZHVsZTEvZHJpdmVybm9kZTANCihYRU4pIC90ZWdyYS1jYW1lcmEt
cGxhdGZvcm0vbW9kdWxlcy9tb2R1bGUxL2RyaXZlcm5vZGUwIHBhc3N0aHJvdWdoID0gMSBuYWRk
ciA9IDANCihYRU4pIENoZWNrIGlmIC90ZWdyYS1jYW1lcmEtcGxhdGZvcm0vbW9kdWxlcy9tb2R1
bGUxL2RyaXZlcm5vZGUwIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRf
aXJxX251bWJlcjogZGV2PS90ZWdyYS1jYW1lcmEtcGxhdGZvcm0vbW9kdWxlcy9tb2R1bGUxL2Ry
aXZlcm5vZGUwDQooWEVOKSBoYW5kbGUgL3RlZ3JhLWNhbWVyYS1wbGF0Zm9ybS9tb2R1bGVzL21v
ZHVsZTEvZHJpdmVybm9kZTENCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vdGVncmEtY2FtZXJh
LXBsYXRmb3JtL21vZHVsZXMvbW9kdWxlMS9kcml2ZXJub2RlMQ0KKFhFTikgL3RlZ3JhLWNhbWVy
YS1wbGF0Zm9ybS9tb2R1bGVzL21vZHVsZTEvZHJpdmVybm9kZTEgcGFzc3Rocm91Z2ggPSAxIG5h
ZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3RlZ3JhLWNhbWVyYS1wbGF0Zm9ybS9tb2R1bGVzL21v
ZHVsZTEvZHJpdmVybm9kZTEgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBk
dF9pcnFfbnVtYmVyOiBkZXY9L3RlZ3JhLWNhbWVyYS1wbGF0Zm9ybS9tb2R1bGVzL21vZHVsZTEv
ZHJpdmVybm9kZTENCihYRU4pIGhhbmRsZSAvbGVuc19pbXgyMTlAUkJQQ1YyDQooWEVOKSBkdF9p
cnFfbnVtYmVyOiBkZXY9L2xlbnNfaW14MjE5QFJCUENWMg0KKFhFTikgL2xlbnNfaW14MjE5QFJC
UENWMiBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvbGVuc19pbXgy
MTlAUkJQQ1YyIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251
bWJlcjogZGV2PS9sZW5zX2lteDIxOUBSQlBDVjINCihYRU4pIGhhbmRsZSAvY2FtX2kyY211eA0K
KFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9jYW1faTJjbXV4DQooWEVOKSAvY2FtX2kyY211eCBw
YXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvY2FtX2kyY211eCBpcyBi
ZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vY2Ft
X2kyY211eA0KKFhFTikgaGFuZGxlIC9jYW1faTJjbXV4L2kyY0AwDQooWEVOKSBkdF9pcnFfbnVt
YmVyOiBkZXY9L2NhbV9pMmNtdXgvaTJjQDANCihYRU4pIC9jYW1faTJjbXV4L2kyY0AwIHBhc3N0
aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9jYW1faTJjbXV4L2kyY0AwIGlz
IGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9j
YW1faTJjbXV4L2kyY0AwDQooWEVOKSBoYW5kbGUgL2NhbV9pMmNtdXgvaTJjQDAvcmJwY3YyX2lt
eDIxOV9hQDEwDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2NhbV9pMmNtdXgvaTJjQDAvcmJw
Y3YyX2lteDIxOV9hQDEwDQooWEVOKSAvY2FtX2kyY211eC9pMmNAMC9yYnBjdjJfaW14MjE5X2FA
MTAgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL2NhbV9pMmNtdXgv
aTJjQDAvcmJwY3YyX2lteDIxOV9hQDEwIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0K
KFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9jYW1faTJjbXV4L2kyY0AwL3JicGN2Ml9pbXgyMTlf
YUAxMA0KKFhFTikgaGFuZGxlIC9jYW1faTJjbXV4L2kyY0AwL3JicGN2Ml9pbXgyMTlfYUAxMC9t
b2RlMA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9jYW1faTJjbXV4L2kyY0AwL3JicGN2Ml9p
bXgyMTlfYUAxMC9tb2RlMA0KKFhFTikgL2NhbV9pMmNtdXgvaTJjQDAvcmJwY3YyX2lteDIxOV9h
QDEwL21vZGUwIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9jYW1f
aTJjbXV4L2kyY0AwL3JicGN2Ml9pbXgyMTlfYUAxMC9tb2RlMCBpcyBiZWhpbmQgdGhlIElPTU1V
IGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vY2FtX2kyY211eC9pMmNAMC9y
YnBjdjJfaW14MjE5X2FAMTAvbW9kZTANCihYRU4pIGhhbmRsZSAvY2FtX2kyY211eC9pMmNAMC9y
YnBjdjJfaW14MjE5X2FAMTAvbW9kZTENCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vY2FtX2ky
Y211eC9pMmNAMC9yYnBjdjJfaW14MjE5X2FAMTAvbW9kZTENCihYRU4pIC9jYW1faTJjbXV4L2ky
Y0AwL3JicGN2Ml9pbXgyMTlfYUAxMC9tb2RlMSBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQoo
WEVOKSBDaGVjayBpZiAvY2FtX2kyY211eC9pMmNAMC9yYnBjdjJfaW14MjE5X2FAMTAvbW9kZTEg
aXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9
L2NhbV9pMmNtdXgvaTJjQDAvcmJwY3YyX2lteDIxOV9hQDEwL21vZGUxDQooWEVOKSBoYW5kbGUg
L2NhbV9pMmNtdXgvaTJjQDAvcmJwY3YyX2lteDIxOV9hQDEwL21vZGUyDQooWEVOKSBkdF9pcnFf
bnVtYmVyOiBkZXY9L2NhbV9pMmNtdXgvaTJjQDAvcmJwY3YyX2lteDIxOV9hQDEwL21vZGUyDQoo
WEVOKSAvY2FtX2kyY211eC9pMmNAMC9yYnBjdjJfaW14MjE5X2FAMTAvbW9kZTIgcGFzc3Rocm91
Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL2NhbV9pMmNtdXgvaTJjQDAvcmJwY3Yy
X2lteDIxOV9hQDEwL21vZGUyIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikg
ZHRfaXJxX251bWJlcjogZGV2PS9jYW1faTJjbXV4L2kyY0AwL3JicGN2Ml9pbXgyMTlfYUAxMC9t
b2RlMg0KKFhFTikgaGFuZGxlIC9jYW1faTJjbXV4L2kyY0AwL3JicGN2Ml9pbXgyMTlfYUAxMC9t
b2RlMw0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9jYW1faTJjbXV4L2kyY0AwL3JicGN2Ml9p
bXgyMTlfYUAxMC9tb2RlMw0KKFhFTikgL2NhbV9pMmNtdXgvaTJjQDAvcmJwY3YyX2lteDIxOV9h
QDEwL21vZGUzIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9jYW1f
aTJjbXV4L2kyY0AwL3JicGN2Ml9pbXgyMTlfYUAxMC9tb2RlMyBpcyBiZWhpbmQgdGhlIElPTU1V
IGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vY2FtX2kyY211eC9pMmNAMC9y
YnBjdjJfaW14MjE5X2FAMTAvbW9kZTMNCihYRU4pIGhhbmRsZSAvY2FtX2kyY211eC9pMmNAMC9y
YnBjdjJfaW14MjE5X2FAMTAvbW9kZTQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vY2FtX2ky
Y211eC9pMmNAMC9yYnBjdjJfaW14MjE5X2FAMTAvbW9kZTQNCihYRU4pIC9jYW1faTJjbXV4L2ky
Y0AwL3JicGN2Ml9pbXgyMTlfYUAxMC9tb2RlNCBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQoo
WEVOKSBDaGVjayBpZiAvY2FtX2kyY211eC9pMmNAMC9yYnBjdjJfaW14MjE5X2FAMTAvbW9kZTQg
aXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9
L2NhbV9pMmNtdXgvaTJjQDAvcmJwY3YyX2lteDIxOV9hQDEwL21vZGU0DQooWEVOKSBoYW5kbGUg
L2NhbV9pMmNtdXgvaTJjQDAvcmJwY3YyX2lteDIxOV9hQDEwL3BvcnRzDQooWEVOKSBkdF9pcnFf
bnVtYmVyOiBkZXY9L2NhbV9pMmNtdXgvaTJjQDAvcmJwY3YyX2lteDIxOV9hQDEwL3BvcnRzDQoo
WEVOKSAvY2FtX2kyY211eC9pMmNAMC9yYnBjdjJfaW14MjE5X2FAMTAvcG9ydHMgcGFzc3Rocm91
Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL2NhbV9pMmNtdXgvaTJjQDAvcmJwY3Yy
X2lteDIxOV9hQDEwL3BvcnRzIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikg
ZHRfaXJxX251bWJlcjogZGV2PS9jYW1faTJjbXV4L2kyY0AwL3JicGN2Ml9pbXgyMTlfYUAxMC9w
b3J0cw0KKFhFTikgaGFuZGxlIC9jYW1faTJjbXV4L2kyY0AwL3JicGN2Ml9pbXgyMTlfYUAxMC9w
b3J0cy9wb3J0QDANCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vY2FtX2kyY211eC9pMmNAMC9y
YnBjdjJfaW14MjE5X2FAMTAvcG9ydHMvcG9ydEAwDQooWEVOKSAvY2FtX2kyY211eC9pMmNAMC9y
YnBjdjJfaW14MjE5X2FAMTAvcG9ydHMvcG9ydEAwIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDAN
CihYRU4pIENoZWNrIGlmIC9jYW1faTJjbXV4L2kyY0AwL3JicGN2Ml9pbXgyMTlfYUAxMC9wb3J0
cy9wb3J0QDAgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVt
YmVyOiBkZXY9L2NhbV9pMmNtdXgvaTJjQDAvcmJwY3YyX2lteDIxOV9hQDEwL3BvcnRzL3BvcnRA
MA0KKFhFTikgaGFuZGxlIC9jYW1faTJjbXV4L2kyY0AwL3JicGN2Ml9pbXgyMTlfYUAxMC9wb3J0
cy9wb3J0QDAvZW5kcG9pbnQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vY2FtX2kyY211eC9p
MmNAMC9yYnBjdjJfaW14MjE5X2FAMTAvcG9ydHMvcG9ydEAwL2VuZHBvaW50DQooWEVOKSAvY2Ft
X2kyY211eC9pMmNAMC9yYnBjdjJfaW14MjE5X2FAMTAvcG9ydHMvcG9ydEAwL2VuZHBvaW50IHBh
c3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9jYW1faTJjbXV4L2kyY0Aw
L3JicGN2Ml9pbXgyMTlfYUAxMC9wb3J0cy9wb3J0QDAvZW5kcG9pbnQgaXMgYmVoaW5kIHRoZSBJ
T01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2NhbV9pMmNtdXgvaTJj
QDAvcmJwY3YyX2lteDIxOV9hQDEwL3BvcnRzL3BvcnRAMC9lbmRwb2ludA0KKFhFTikgaGFuZGxl
IC9jYW1faTJjbXV4L2kyY0AxDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2NhbV9pMmNtdXgv
aTJjQDENCihYRU4pIC9jYW1faTJjbXV4L2kyY0AxIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDAN
CihYRU4pIENoZWNrIGlmIC9jYW1faTJjbXV4L2kyY0AxIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5k
IGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9jYW1faTJjbXV4L2kyY0AxDQooWEVO
KSBoYW5kbGUgL2NhbV9pMmNtdXgvaTJjQDEvcmJwY3YyX2lteDIxOV9lQDEwDQooWEVOKSBkdF9p
cnFfbnVtYmVyOiBkZXY9L2NhbV9pMmNtdXgvaTJjQDEvcmJwY3YyX2lteDIxOV9lQDEwDQooWEVO
KSAvY2FtX2kyY211eC9pMmNAMS9yYnBjdjJfaW14MjE5X2VAMTAgcGFzc3Rocm91Z2ggPSAxIG5h
ZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL2NhbV9pMmNtdXgvaTJjQDEvcmJwY3YyX2lteDIxOV9l
QDEwIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjog
ZGV2PS9jYW1faTJjbXV4L2kyY0AxL3JicGN2Ml9pbXgyMTlfZUAxMA0KKFhFTikgaGFuZGxlIC9j
YW1faTJjbXV4L2kyY0AxL3JicGN2Ml9pbXgyMTlfZUAxMC9tb2RlMA0KKFhFTikgZHRfaXJxX251
bWJlcjogZGV2PS9jYW1faTJjbXV4L2kyY0AxL3JicGN2Ml9pbXgyMTlfZUAxMC9tb2RlMA0KKFhF
TikgL2NhbV9pMmNtdXgvaTJjQDEvcmJwY3YyX2lteDIxOV9lQDEwL21vZGUwIHBhc3N0aHJvdWdo
ID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9jYW1faTJjbXV4L2kyY0AxL3JicGN2Ml9p
bXgyMTlfZUAxMC9tb2RlMCBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0
X2lycV9udW1iZXI6IGRldj0vY2FtX2kyY211eC9pMmNAMS9yYnBjdjJfaW14MjE5X2VAMTAvbW9k
ZTANCihYRU4pIGhhbmRsZSAvY2FtX2kyY211eC9pMmNAMS9yYnBjdjJfaW14MjE5X2VAMTAvbW9k
ZTENCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vY2FtX2kyY211eC9pMmNAMS9yYnBjdjJfaW14
MjE5X2VAMTAvbW9kZTENCihYRU4pIC9jYW1faTJjbXV4L2kyY0AxL3JicGN2Ml9pbXgyMTlfZUAx
MC9tb2RlMSBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvY2FtX2ky
Y211eC9pMmNAMS9yYnBjdjJfaW14MjE5X2VAMTAvbW9kZTEgaXMgYmVoaW5kIHRoZSBJT01NVSBh
bmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2NhbV9pMmNtdXgvaTJjQDEvcmJw
Y3YyX2lteDIxOV9lQDEwL21vZGUxDQooWEVOKSBoYW5kbGUgL2NhbV9pMmNtdXgvaTJjQDEvcmJw
Y3YyX2lteDIxOV9lQDEwL21vZGUyDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2NhbV9pMmNt
dXgvaTJjQDEvcmJwY3YyX2lteDIxOV9lQDEwL21vZGUyDQooWEVOKSAvY2FtX2kyY211eC9pMmNA
MS9yYnBjdjJfaW14MjE5X2VAMTAvbW9kZTIgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhF
TikgQ2hlY2sgaWYgL2NhbV9pMmNtdXgvaTJjQDEvcmJwY3YyX2lteDIxOV9lQDEwL21vZGUyIGlz
IGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9j
YW1faTJjbXV4L2kyY0AxL3JicGN2Ml9pbXgyMTlfZUAxMC9tb2RlMg0KKFhFTikgaGFuZGxlIC9j
YW1faTJjbXV4L2kyY0AxL3JicGN2Ml9pbXgyMTlfZUAxMC9tb2RlMw0KKFhFTikgZHRfaXJxX251
bWJlcjogZGV2PS9jYW1faTJjbXV4L2kyY0AxL3JicGN2Ml9pbXgyMTlfZUAxMC9tb2RlMw0KKFhF
TikgL2NhbV9pMmNtdXgvaTJjQDEvcmJwY3YyX2lteDIxOV9lQDEwL21vZGUzIHBhc3N0aHJvdWdo
ID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9jYW1faTJjbXV4L2kyY0AxL3JicGN2Ml9p
bXgyMTlfZUAxMC9tb2RlMyBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0
X2lycV9udW1iZXI6IGRldj0vY2FtX2kyY211eC9pMmNAMS9yYnBjdjJfaW14MjE5X2VAMTAvbW9k
ZTMNCihYRU4pIGhhbmRsZSAvY2FtX2kyY211eC9pMmNAMS9yYnBjdjJfaW14MjE5X2VAMTAvbW9k
ZTQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vY2FtX2kyY211eC9pMmNAMS9yYnBjdjJfaW14
MjE5X2VAMTAvbW9kZTQNCihYRU4pIC9jYW1faTJjbXV4L2kyY0AxL3JicGN2Ml9pbXgyMTlfZUAx
MC9tb2RlNCBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvY2FtX2ky
Y211eC9pMmNAMS9yYnBjdjJfaW14MjE5X2VAMTAvbW9kZTQgaXMgYmVoaW5kIHRoZSBJT01NVSBh
bmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2NhbV9pMmNtdXgvaTJjQDEvcmJw
Y3YyX2lteDIxOV9lQDEwL21vZGU0DQooWEVOKSBoYW5kbGUgL2NhbV9pMmNtdXgvaTJjQDEvcmJw
Y3YyX2lteDIxOV9lQDEwL3BvcnRzDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2NhbV9pMmNt
dXgvaTJjQDEvcmJwY3YyX2lteDIxOV9lQDEwL3BvcnRzDQooWEVOKSAvY2FtX2kyY211eC9pMmNA
MS9yYnBjdjJfaW14MjE5X2VAMTAvcG9ydHMgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhF
TikgQ2hlY2sgaWYgL2NhbV9pMmNtdXgvaTJjQDEvcmJwY3YyX2lteDIxOV9lQDEwL3BvcnRzIGlz
IGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9j
YW1faTJjbXV4L2kyY0AxL3JicGN2Ml9pbXgyMTlfZUAxMC9wb3J0cw0KKFhFTikgaGFuZGxlIC9j
YW1faTJjbXV4L2kyY0AxL3JicGN2Ml9pbXgyMTlfZUAxMC9wb3J0cy9wb3J0QDANCihYRU4pIGR0
X2lycV9udW1iZXI6IGRldj0vY2FtX2kyY211eC9pMmNAMS9yYnBjdjJfaW14MjE5X2VAMTAvcG9y
dHMvcG9ydEAwDQooWEVOKSAvY2FtX2kyY211eC9pMmNAMS9yYnBjdjJfaW14MjE5X2VAMTAvcG9y
dHMvcG9ydEAwIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9jYW1f
aTJjbXV4L2kyY0AxL3JicGN2Ml9pbXgyMTlfZUAxMC9wb3J0cy9wb3J0QDAgaXMgYmVoaW5kIHRo
ZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2NhbV9pMmNtdXgv
aTJjQDEvcmJwY3YyX2lteDIxOV9lQDEwL3BvcnRzL3BvcnRAMA0KKFhFTikgaGFuZGxlIC9jYW1f
aTJjbXV4L2kyY0AxL3JicGN2Ml9pbXgyMTlfZUAxMC9wb3J0cy9wb3J0QDAvZW5kcG9pbnQNCihY
RU4pIGR0X2lycV9udW1iZXI6IGRldj0vY2FtX2kyY211eC9pMmNAMS9yYnBjdjJfaW14MjE5X2VA
MTAvcG9ydHMvcG9ydEAwL2VuZHBvaW50DQooWEVOKSAvY2FtX2kyY211eC9pMmNAMS9yYnBjdjJf
aW14MjE5X2VAMTAvcG9ydHMvcG9ydEAwL2VuZHBvaW50IHBhc3N0aHJvdWdoID0gMSBuYWRkciA9
IDANCihYRU4pIENoZWNrIGlmIC9jYW1faTJjbXV4L2kyY0AxL3JicGN2Ml9pbXgyMTlfZUAxMC9w
b3J0cy9wb3J0QDAvZW5kcG9pbnQgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVO
KSBkdF9pcnFfbnVtYmVyOiBkZXY9L2NhbV9pMmNtdXgvaTJjQDEvcmJwY3YyX2lteDIxOV9lQDEw
L3BvcnRzL3BvcnRAMC9lbmRwb2ludA0KKFhFTikgaGFuZGxlIC90ZmVzZA0KKFhFTikgZHRfaXJx
X251bWJlcjogZGV2PS90ZmVzZA0KKFhFTikgL3RmZXNkIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9
IDANCihYRU4pIENoZWNrIGlmIC90ZmVzZCBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQN
CihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vdGZlc2QNCihYRU4pIGhhbmRsZSAvdGZlc2QvZGV2
MQ0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS90ZmVzZC9kZXYxDQooWEVOKSAvdGZlc2QvZGV2
MSBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvdGZlc2QvZGV2MSBp
cyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0v
dGZlc2QvZGV2MQ0KKFhFTikgaGFuZGxlIC90ZmVzZC9kZXYyDQooWEVOKSBkdF9pcnFfbnVtYmVy
OiBkZXY9L3RmZXNkL2RldjINCihYRU4pIC90ZmVzZC9kZXYyIHBhc3N0aHJvdWdoID0gMSBuYWRk
ciA9IDANCihYRU4pIENoZWNrIGlmIC90ZmVzZC9kZXYyIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5k
IGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS90ZmVzZC9kZXYyDQooWEVOKSBoYW5k
bGUgL3RoZXJtYWwtZmFuLWVzdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS90aGVybWFsLWZh
bi1lc3QNCihYRU4pIC90aGVybWFsLWZhbi1lc3QgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0K
KFhFTikgQ2hlY2sgaWYgL3RoZXJtYWwtZmFuLWVzdCBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBh
ZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vdGhlcm1hbC1mYW4tZXN0DQooWEVOKSBo
YW5kbGUgL2dwaW8ta2V5cw0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9ncGlvLWtleXMNCihY
RU4pIC9ncGlvLWtleXMgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYg
L2dwaW8ta2V5cyBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9u
dW1iZXI6IGRldj0vZ3Bpby1rZXlzDQooWEVOKSBoYW5kbGUgL2dwaW8ta2V5cy9wb3dlcg0KKFhF
TikgZHRfaXJxX251bWJlcjogZGV2PS9ncGlvLWtleXMvcG93ZXINCihYRU4pIC9ncGlvLWtleXMv
cG93ZXIgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL2dwaW8ta2V5
cy9wb3dlciBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1i
ZXI6IGRldj0vZ3Bpby1rZXlzL3Bvd2VyDQooWEVOKSBoYW5kbGUgL2dwaW8ta2V5cy9mb3JjZXJl
Y292ZXJ5DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2dwaW8ta2V5cy9mb3JjZXJlY292ZXJ5
DQooWEVOKSAvZ3Bpby1rZXlzL2ZvcmNlcmVjb3ZlcnkgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0g
MA0KKFhFTikgQ2hlY2sgaWYgL2dwaW8ta2V5cy9mb3JjZXJlY292ZXJ5IGlzIGJlaGluZCB0aGUg
SU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9ncGlvLWtleXMvZm9y
Y2VyZWNvdmVyeQ0KKFhFTikgaGFuZGxlIC9ncGlvLXRpbWVkLWtleXMNCihYRU4pIGR0X2lycV9u
dW1iZXI6IGRldj0vZ3Bpby10aW1lZC1rZXlzDQooWEVOKSAvZ3Bpby10aW1lZC1rZXlzIHBhc3N0
aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9ncGlvLXRpbWVkLWtleXMgaXMg
YmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2dw
aW8tdGltZWQta2V5cw0KKFhFTikgaGFuZGxlIC9ncGlvLXRpbWVkLWtleXMvcG93ZXINCihYRU4p
IGR0X2lycV9udW1iZXI6IGRldj0vZ3Bpby10aW1lZC1rZXlzL3Bvd2VyDQooWEVOKSAvZ3Bpby10
aW1lZC1rZXlzL3Bvd2VyIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlm
IC9ncGlvLXRpbWVkLWtleXMvcG93ZXIgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQoo
WEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2dwaW8tdGltZWQta2V5cy9wb3dlcg0KKFhFTikgaGFu
ZGxlIC9zcGRpZi1kaXQuMEAwDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3NwZGlmLWRpdC4w
QDANCihYRU4pIC9zcGRpZi1kaXQuMEAwIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDENCihYRU4p
IENoZWNrIGlmIC9zcGRpZi1kaXQuMEAwIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0K
KFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9zcGRpZi1kaXQuMEAwDQooWEVOKSBEVDogKiogdHJh
bnNsYXRpb24gZm9yIGRldmljZSAvc3BkaWYtZGl0LjBAMCAqKg0KKFhFTikgRFQ6IGJ1cyBpcyBk
ZWZhdWx0IChuYT0yLCBucz0yKSBvbiAvDQooWEVOKSBEVDogdHJhbnNsYXRpbmcgYWRkcmVzczo8
Mz4gMDAwMDAwMDA8Mz4gMDAwMDAwMDA8Mz4NCihYRU4pIERUOiByZWFjaGVkIHJvb3Qgbm9kZQ0K
KFhFTikgICAtIE1NSU86IDAwMDAwMDAwMDAgLSAwMDAwMDAwMDAwIFAyTVR5cGU9NQ0KKFhFTikg
aGFuZGxlIC9zcGRpZi1kaXQuMUAxDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3NwZGlmLWRp
dC4xQDENCihYRU4pIC9zcGRpZi1kaXQuMUAxIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDENCihY
RU4pIENoZWNrIGlmIC9zcGRpZi1kaXQuMUAxIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBp
dA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9zcGRpZi1kaXQuMUAxDQooWEVOKSBEVDogKiog
dHJhbnNsYXRpb24gZm9yIGRldmljZSAvc3BkaWYtZGl0LjFAMSAqKg0KKFhFTikgRFQ6IGJ1cyBp
cyBkZWZhdWx0IChuYT0yLCBucz0yKSBvbiAvDQooWEVOKSBEVDogdHJhbnNsYXRpbmcgYWRkcmVz
czo8Mz4gMDAwMDAwMDA8Mz4gMDAwMDAwMDE8Mz4NCihYRU4pIERUOiByZWFjaGVkIHJvb3Qgbm9k
ZQ0KKFhFTikgICAtIE1NSU86IDAwMDAwMDAwMDEgLSAwMDAwMDAwMDAxIFAyTVR5cGU9NQ0KKFhF
TikgaGFuZGxlIC9zcGRpZi1kaXQuMkAyDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3NwZGlm
LWRpdC4yQDINCihYRU4pIC9zcGRpZi1kaXQuMkAyIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDEN
CihYRU4pIENoZWNrIGlmIC9zcGRpZi1kaXQuMkAyIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFk
ZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9zcGRpZi1kaXQuMkAyDQooWEVOKSBEVDog
KiogdHJhbnNsYXRpb24gZm9yIGRldmljZSAvc3BkaWYtZGl0LjJAMiAqKg0KKFhFTikgRFQ6IGJ1
cyBpcyBkZWZhdWx0IChuYT0yLCBucz0yKSBvbiAvDQooWEVOKSBEVDogdHJhbnNsYXRpbmcgYWRk
cmVzczo8Mz4gMDAwMDAwMDA8Mz4gMDAwMDAwMDI8Mz4NCihYRU4pIERUOiByZWFjaGVkIHJvb3Qg
bm9kZQ0KKFhFTikgICAtIE1NSU86IDAwMDAwMDAwMDIgLSAwMDAwMDAwMDAyIFAyTVR5cGU9NQ0K
KFhFTikgaGFuZGxlIC9zcGRpZi1kaXQuM0AzDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3Nw
ZGlmLWRpdC4zQDMNCihYRU4pIC9zcGRpZi1kaXQuM0AzIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9
IDENCihYRU4pIENoZWNrIGlmIC9zcGRpZi1kaXQuM0AzIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5k
IGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9zcGRpZi1kaXQuM0AzDQooWEVOKSBE
VDogKiogdHJhbnNsYXRpb24gZm9yIGRldmljZSAvc3BkaWYtZGl0LjNAMyAqKg0KKFhFTikgRFQ6
IGJ1cyBpcyBkZWZhdWx0IChuYT0yLCBucz0yKSBvbiAvDQooWEVOKSBEVDogdHJhbnNsYXRpbmcg
YWRkcmVzczo8Mz4gMDAwMDAwMDA8Mz4gMDAwMDAwMDM8Mz4NCihYRU4pIERUOiByZWFjaGVkIHJv
b3Qgbm9kZQ0KKFhFTikgICAtIE1NSU86IDAwMDAwMDAwMDMgLSAwMDAwMDAwMDAzIFAyTVR5cGU9
NQ0KKFhFTikgaGFuZGxlIC9zcGRpZi1kaXQuNEA0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9
L3NwZGlmLWRpdC40QDQNCihYRU4pIC9zcGRpZi1kaXQuNEA0IHBhc3N0aHJvdWdoID0gMSBuYWRk
ciA9IDENCihYRU4pIENoZWNrIGlmIC9zcGRpZi1kaXQuNEA0IGlzIGJlaGluZCB0aGUgSU9NTVUg
YW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9zcGRpZi1kaXQuNEA0DQooWEVO
KSBEVDogKiogdHJhbnNsYXRpb24gZm9yIGRldmljZSAvc3BkaWYtZGl0LjRANCAqKg0KKFhFTikg
RFQ6IGJ1cyBpcyBkZWZhdWx0IChuYT0yLCBucz0yKSBvbiAvDQooWEVOKSBEVDogdHJhbnNsYXRp
bmcgYWRkcmVzczo8Mz4gMDAwMDAwMDA8Mz4gMDAwMDAwMDQ8Mz4NCihYRU4pIERUOiByZWFjaGVk
IHJvb3Qgbm9kZQ0KKFhFTikgICAtIE1NSU86IDAwMDAwMDAwMDQgLSAwMDAwMDAwMDA0IFAyTVR5
cGU9NQ0KKFhFTikgaGFuZGxlIC9zcGRpZi1kaXQuNUA1DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBk
ZXY9L3NwZGlmLWRpdC41QDUNCihYRU4pIC9zcGRpZi1kaXQuNUA1IHBhc3N0aHJvdWdoID0gMSBu
YWRkciA9IDENCihYRU4pIENoZWNrIGlmIC9zcGRpZi1kaXQuNUA1IGlzIGJlaGluZCB0aGUgSU9N
TVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9zcGRpZi1kaXQuNUA1DQoo
WEVOKSBEVDogKiogdHJhbnNsYXRpb24gZm9yIGRldmljZSAvc3BkaWYtZGl0LjVANSAqKg0KKFhF
TikgRFQ6IGJ1cyBpcyBkZWZhdWx0IChuYT0yLCBucz0yKSBvbiAvDQooWEVOKSBEVDogdHJhbnNs
YXRpbmcgYWRkcmVzczo8Mz4gMDAwMDAwMDA8Mz4gMDAwMDAwMDU8Mz4NCihYRU4pIERUOiByZWFj
aGVkIHJvb3Qgbm9kZQ0KKFhFTikgICAtIE1NSU86IDAwMDAwMDAwMDUgLSAwMDAwMDAwMDA1IFAy
TVR5cGU9NQ0KKFhFTikgaGFuZGxlIC9zcGRpZi1kaXQuNkA2DQooWEVOKSBkdF9pcnFfbnVtYmVy
OiBkZXY9L3NwZGlmLWRpdC42QDYNCihYRU4pIC9zcGRpZi1kaXQuNkA2IHBhc3N0aHJvdWdoID0g
MSBuYWRkciA9IDENCihYRU4pIENoZWNrIGlmIC9zcGRpZi1kaXQuNkA2IGlzIGJlaGluZCB0aGUg
SU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9zcGRpZi1kaXQuNkA2
DQooWEVOKSBEVDogKiogdHJhbnNsYXRpb24gZm9yIGRldmljZSAvc3BkaWYtZGl0LjZANiAqKg0K
KFhFTikgRFQ6IGJ1cyBpcyBkZWZhdWx0IChuYT0yLCBucz0yKSBvbiAvDQooWEVOKSBEVDogdHJh
bnNsYXRpbmcgYWRkcmVzczo8Mz4gMDAwMDAwMDA8Mz4gMDAwMDAwMDY8Mz4NCihYRU4pIERUOiBy
ZWFjaGVkIHJvb3Qgbm9kZQ0KKFhFTikgICAtIE1NSU86IDAwMDAwMDAwMDYgLSAwMDAwMDAwMDA2
IFAyTVR5cGU9NQ0KKFhFTikgaGFuZGxlIC9zcGRpZi1kaXQuN0A3DQooWEVOKSBkdF9pcnFfbnVt
YmVyOiBkZXY9L3NwZGlmLWRpdC43QDcNCihYRU4pIC9zcGRpZi1kaXQuN0A3IHBhc3N0aHJvdWdo
ID0gMSBuYWRkciA9IDENCihYRU4pIENoZWNrIGlmIC9zcGRpZi1kaXQuN0A3IGlzIGJlaGluZCB0
aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9zcGRpZi1kaXQu
N0A3DQooWEVOKSBEVDogKiogdHJhbnNsYXRpb24gZm9yIGRldmljZSAvc3BkaWYtZGl0LjdANyAq
Kg0KKFhFTikgRFQ6IGJ1cyBpcyBkZWZhdWx0IChuYT0yLCBucz0yKSBvbiAvDQooWEVOKSBEVDog
dHJhbnNsYXRpbmcgYWRkcmVzczo8Mz4gMDAwMDAwMDA8Mz4gMDAwMDAwMDc8Mz4NCihYRU4pIERU
OiByZWFjaGVkIHJvb3Qgbm9kZQ0KKFhFTikgICAtIE1NSU86IDAwMDAwMDAwMDcgLSAwMDAwMDAw
MDA3IFAyTVR5cGU9NQ0KKFhFTikgaGFuZGxlIC9jcHVmcmVxDQooWEVOKSBkdF9pcnFfbnVtYmVy
OiBkZXY9L2NwdWZyZXENCihYRU4pIC9jcHVmcmVxIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDAN
CihYRU4pIENoZWNrIGlmIC9jcHVmcmVxIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0K
KFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9jcHVmcmVxDQooWEVOKSBoYW5kbGUgL2NwdWZyZXEv
Y3B1LXNjYWxpbmctZGF0YQ0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9jcHVmcmVxL2NwdS1z
Y2FsaW5nLWRhdGENCihYRU4pIC9jcHVmcmVxL2NwdS1zY2FsaW5nLWRhdGEgcGFzc3Rocm91Z2gg
PSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL2NwdWZyZXEvY3B1LXNjYWxpbmctZGF0YSBp
cyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0v
Y3B1ZnJlcS9jcHUtc2NhbGluZy1kYXRhDQooWEVOKSBoYW5kbGUgL2NwdWZyZXEvZW1jLXNjYWxp
bmctZGF0YQ0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9jcHVmcmVxL2VtYy1zY2FsaW5nLWRh
dGENCihYRU4pIC9jcHVmcmVxL2VtYy1zY2FsaW5nLWRhdGEgcGFzc3Rocm91Z2ggPSAxIG5hZGRy
ID0gMA0KKFhFTikgQ2hlY2sgaWYgL2NwdWZyZXEvZW1jLXNjYWxpbmctZGF0YSBpcyBiZWhpbmQg
dGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vY3B1ZnJlcS9l
bWMtc2NhbGluZy1kYXRhDQooWEVOKSBoYW5kbGUgL2VlcHJvbS1tYW5hZ2VyDQooWEVOKSBkdF9p
cnFfbnVtYmVyOiBkZXY9L2VlcHJvbS1tYW5hZ2VyDQooWEVOKSAvZWVwcm9tLW1hbmFnZXIgcGFz
c3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL2VlcHJvbS1tYW5hZ2VyIGlz
IGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9l
ZXByb20tbWFuYWdlcg0KKFhFTikgaGFuZGxlIC9lZXByb20tbWFuYWdlci9idXNAMA0KKFhFTikg
ZHRfaXJxX251bWJlcjogZGV2PS9lZXByb20tbWFuYWdlci9idXNAMA0KKFhFTikgL2VlcHJvbS1t
YW5hZ2VyL2J1c0AwIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9l
ZXByb20tbWFuYWdlci9idXNAMCBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4p
IGR0X2lycV9udW1iZXI6IGRldj0vZWVwcm9tLW1hbmFnZXIvYnVzQDANCihYRU4pIGhhbmRsZSAv
ZWVwcm9tLW1hbmFnZXIvYnVzQDENCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vZWVwcm9tLW1h
bmFnZXIvYnVzQDENCihYRU4pIC9lZXByb20tbWFuYWdlci9idXNAMSBwYXNzdGhyb3VnaCA9IDEg
bmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvZWVwcm9tLW1hbmFnZXIvYnVzQDEgaXMgYmVoaW5k
IHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2VlcHJvbS1t
YW5hZ2VyL2J1c0AxDQooWEVOKSBoYW5kbGUgL3BsdWdpbi1tYW5hZ2VyDQooWEVOKSBkdF9pcnFf
bnVtYmVyOiBkZXY9L3BsdWdpbi1tYW5hZ2VyDQooWEVOKSAvcGx1Z2luLW1hbmFnZXIgcGFzc3Ro
cm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3BsdWdpbi1tYW5hZ2VyIGlzIGJl
aGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9wbHVn
aW4tbWFuYWdlcg0KKFhFTikgaGFuZGxlIC9wbHVnaW4tbWFuYWdlci9mcmFnZW1lbnRAMA0KKFhF
TikgZHRfaXJxX251bWJlcjogZGV2PS9wbHVnaW4tbWFuYWdlci9mcmFnZW1lbnRAMA0KKFhFTikg
L3BsdWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEAwIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihY
RU4pIENoZWNrIGlmIC9wbHVnaW4tbWFuYWdlci9mcmFnZW1lbnRAMCBpcyBiZWhpbmQgdGhlIElP
TU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGx1Z2luLW1hbmFnZXIv
ZnJhZ2VtZW50QDANCihYRU4pIGhhbmRsZSAvcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDAvb3Zl
cnJpZGVAMA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9wbHVnaW4tbWFuYWdlci9mcmFnZW1l
bnRAMC9vdmVycmlkZUAwDQooWEVOKSAvcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDAvb3ZlcnJp
ZGVAMCBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvcGx1Z2luLW1h
bmFnZXIvZnJhZ2VtZW50QDAvb3ZlcnJpZGVAMCBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQg
aXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDAv
b3ZlcnJpZGVAMA0KKFhFTikgaGFuZGxlIC9wbHVnaW4tbWFuYWdlci9mcmFnZW1lbnRAMC9vdmVy
cmlkZUAwL19vdmVybGF5Xw0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9wbHVnaW4tbWFuYWdl
ci9mcmFnZW1lbnRAMC9vdmVycmlkZUAwL19vdmVybGF5Xw0KKFhFTikgL3BsdWdpbi1tYW5hZ2Vy
L2ZyYWdlbWVudEAwL292ZXJyaWRlQDAvX292ZXJsYXlfIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9
IDANCihYRU4pIENoZWNrIGlmIC9wbHVnaW4tbWFuYWdlci9mcmFnZW1lbnRAMC9vdmVycmlkZUAw
L19vdmVybGF5XyBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9u
dW1iZXI6IGRldj0vcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDAvb3ZlcnJpZGVAMC9fb3Zlcmxh
eV8NCihYRU4pIGhhbmRsZSAvcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDAvb3ZlcnJpZGVAMC9f
b3ZlcmxheV8vY2hhbm5lbEAwDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3BsdWdpbi1tYW5h
Z2VyL2ZyYWdlbWVudEAwL292ZXJyaWRlQDAvX292ZXJsYXlfL2NoYW5uZWxAMA0KKFhFTikgL3Bs
dWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEAwL292ZXJyaWRlQDAvX292ZXJsYXlfL2NoYW5uZWxAMCBw
YXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvcGx1Z2luLW1hbmFnZXIv
ZnJhZ2VtZW50QDAvb3ZlcnJpZGVAMC9fb3ZlcmxheV8vY2hhbm5lbEAwIGlzIGJlaGluZCB0aGUg
SU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9wbHVnaW4tbWFuYWdl
ci9mcmFnZW1lbnRAMC9vdmVycmlkZUAwL19vdmVybGF5Xy9jaGFubmVsQDANCihYRU4pIGhhbmRs
ZSAvcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDAvb3ZlcnJpZGVAMC9fb3ZlcmxheV8vY2hhbm5l
bEAxDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3BsdWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEAw
L292ZXJyaWRlQDAvX292ZXJsYXlfL2NoYW5uZWxAMQ0KKFhFTikgL3BsdWdpbi1tYW5hZ2VyL2Zy
YWdlbWVudEAwL292ZXJyaWRlQDAvX292ZXJsYXlfL2NoYW5uZWxAMSBwYXNzdGhyb3VnaCA9IDEg
bmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDAvb3Zl
cnJpZGVAMC9fb3ZlcmxheV8vY2hhbm5lbEAxIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBp
dA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9wbHVnaW4tbWFuYWdlci9mcmFnZW1lbnRAMC9v
dmVycmlkZUAwL19vdmVybGF5Xy9jaGFubmVsQDENCihYRU4pIGhhbmRsZSAvcGx1Z2luLW1hbmFn
ZXIvZnJhZ21lbnRAMQ0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9wbHVnaW4tbWFuYWdlci9m
cmFnbWVudEAxDQooWEVOKSAvcGx1Z2luLW1hbmFnZXIvZnJhZ21lbnRAMSBwYXNzdGhyb3VnaCA9
IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvcGx1Z2luLW1hbmFnZXIvZnJhZ21lbnRAMSBp
cyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0v
cGx1Z2luLW1hbmFnZXIvZnJhZ21lbnRAMQ0KKFhFTikgaGFuZGxlIC9wbHVnaW4tbWFuYWdlci9m
cmFnbWVudEAxL292ZXJyaWRlQDANCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGx1Z2luLW1h
bmFnZXIvZnJhZ21lbnRAMS9vdmVycmlkZUAwDQooWEVOKSAvcGx1Z2luLW1hbmFnZXIvZnJhZ21l
bnRAMS9vdmVycmlkZUAwIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlm
IC9wbHVnaW4tbWFuYWdlci9mcmFnbWVudEAxL292ZXJyaWRlQDAgaXMgYmVoaW5kIHRoZSBJT01N
VSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3BsdWdpbi1tYW5hZ2VyL2Zy
YWdtZW50QDEvb3ZlcnJpZGVAMA0KKFhFTikgaGFuZGxlIC9wbHVnaW4tbWFuYWdlci9mcmFnbWVu
dEAxL292ZXJyaWRlQDAvX292ZXJsYXlfDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3BsdWdp
bi1tYW5hZ2VyL2ZyYWdtZW50QDEvb3ZlcnJpZGVAMC9fb3ZlcmxheV8NCihYRU4pIC9wbHVnaW4t
bWFuYWdlci9mcmFnbWVudEAxL292ZXJyaWRlQDAvX292ZXJsYXlfIHBhc3N0aHJvdWdoID0gMSBu
YWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9wbHVnaW4tbWFuYWdlci9mcmFnbWVudEAxL292ZXJy
aWRlQDAvX292ZXJsYXlfIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRf
aXJxX251bWJlcjogZGV2PS9wbHVnaW4tbWFuYWdlci9mcmFnbWVudEAxL292ZXJyaWRlQDAvX292
ZXJsYXlfDQooWEVOKSBoYW5kbGUgL3BsdWdpbi1tYW5hZ2VyL2ZyYWdtZW50QDINCihYRU4pIGR0
X2lycV9udW1iZXI6IGRldj0vcGx1Z2luLW1hbmFnZXIvZnJhZ21lbnRAMg0KKFhFTikgL3BsdWdp
bi1tYW5hZ2VyL2ZyYWdtZW50QDIgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hl
Y2sgaWYgL3BsdWdpbi1tYW5hZ2VyL2ZyYWdtZW50QDIgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQg
YWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3BsdWdpbi1tYW5hZ2VyL2ZyYWdtZW50
QDINCihYRU4pIGhhbmRsZSAvcGx1Z2luLW1hbmFnZXIvZnJhZ21lbnRAMi9vdmVycmlkZUAwDQoo
WEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3BsdWdpbi1tYW5hZ2VyL2ZyYWdtZW50QDIvb3ZlcnJp
ZGVAMA0KKFhFTikgL3BsdWdpbi1tYW5hZ2VyL2ZyYWdtZW50QDIvb3ZlcnJpZGVAMCBwYXNzdGhy
b3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvcGx1Z2luLW1hbmFnZXIvZnJhZ21l
bnRAMi9vdmVycmlkZUAwIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRf
aXJxX251bWJlcjogZGV2PS9wbHVnaW4tbWFuYWdlci9mcmFnbWVudEAyL292ZXJyaWRlQDANCihY
RU4pIGhhbmRsZSAvcGx1Z2luLW1hbmFnZXIvZnJhZ21lbnRAMi9vdmVycmlkZUAwL19vdmVybGF5
Xw0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9wbHVnaW4tbWFuYWdlci9mcmFnbWVudEAyL292
ZXJyaWRlQDAvX292ZXJsYXlfDQooWEVOKSAvcGx1Z2luLW1hbmFnZXIvZnJhZ21lbnRAMi9vdmVy
cmlkZUAwL19vdmVybGF5XyBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBp
ZiAvcGx1Z2luLW1hbmFnZXIvZnJhZ21lbnRAMi9vdmVycmlkZUAwL19vdmVybGF5XyBpcyBiZWhp
bmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGx1Z2lu
LW1hbmFnZXIvZnJhZ21lbnRAMi9vdmVycmlkZUAwL19vdmVybGF5Xw0KKFhFTikgaGFuZGxlIC9w
bHVnaW4tbWFuYWdlci9mcmFnbWVudEAyL292ZXJyaWRlQDENCihYRU4pIGR0X2lycV9udW1iZXI6
IGRldj0vcGx1Z2luLW1hbmFnZXIvZnJhZ21lbnRAMi9vdmVycmlkZUAxDQooWEVOKSAvcGx1Z2lu
LW1hbmFnZXIvZnJhZ21lbnRAMi9vdmVycmlkZUAxIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDAN
CihYRU4pIENoZWNrIGlmIC9wbHVnaW4tbWFuYWdlci9mcmFnbWVudEAyL292ZXJyaWRlQDEgaXMg
YmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3Bs
dWdpbi1tYW5hZ2VyL2ZyYWdtZW50QDIvb3ZlcnJpZGVAMQ0KKFhFTikgaGFuZGxlIC9wbHVnaW4t
bWFuYWdlci9mcmFnbWVudEAyL292ZXJyaWRlQDEvX292ZXJsYXlfDQooWEVOKSBkdF9pcnFfbnVt
YmVyOiBkZXY9L3BsdWdpbi1tYW5hZ2VyL2ZyYWdtZW50QDIvb3ZlcnJpZGVAMS9fb3ZlcmxheV8N
CihYRU4pIC9wbHVnaW4tbWFuYWdlci9mcmFnbWVudEAyL292ZXJyaWRlQDEvX292ZXJsYXlfIHBh
c3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9wbHVnaW4tbWFuYWdlci9m
cmFnbWVudEAyL292ZXJyaWRlQDEvX292ZXJsYXlfIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFk
ZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9wbHVnaW4tbWFuYWdlci9mcmFnbWVudEAy
L292ZXJyaWRlQDEvX292ZXJsYXlfDQooWEVOKSBoYW5kbGUgL3BsdWdpbi1tYW5hZ2VyL2ZyYWdt
ZW50QDIvb3ZlcnJpZGVAMg0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9wbHVnaW4tbWFuYWdl
ci9mcmFnbWVudEAyL292ZXJyaWRlQDINCihYRU4pIC9wbHVnaW4tbWFuYWdlci9mcmFnbWVudEAy
L292ZXJyaWRlQDIgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3Bs
dWdpbi1tYW5hZ2VyL2ZyYWdtZW50QDIvb3ZlcnJpZGVAMiBpcyBiZWhpbmQgdGhlIElPTU1VIGFu
ZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGx1Z2luLW1hbmFnZXIvZnJhZ21l
bnRAMi9vdmVycmlkZUAyDQooWEVOKSBoYW5kbGUgL3BsdWdpbi1tYW5hZ2VyL2ZyYWdtZW50QDIv
b3ZlcnJpZGVAMi9fb3ZlcmxheV8NCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGx1Z2luLW1h
bmFnZXIvZnJhZ21lbnRAMi9vdmVycmlkZUAyL19vdmVybGF5Xw0KKFhFTikgL3BsdWdpbi1tYW5h
Z2VyL2ZyYWdtZW50QDIvb3ZlcnJpZGVAMi9fb3ZlcmxheV8gcGFzc3Rocm91Z2ggPSAxIG5hZGRy
ID0gMA0KKFhFTikgQ2hlY2sgaWYgL3BsdWdpbi1tYW5hZ2VyL2ZyYWdtZW50QDIvb3ZlcnJpZGVA
Mi9fb3ZlcmxheV8gaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFf
bnVtYmVyOiBkZXY9L3BsdWdpbi1tYW5hZ2VyL2ZyYWdtZW50QDIvb3ZlcnJpZGVAMi9fb3Zlcmxh
eV8NCihYRU4pIGhhbmRsZSAvcGx1Z2luLW1hbmFnZXIvZnJhZ21lbnRAMi9vdmVycmlkZUAyL19v
dmVybGF5Xy9udmlkaWEsZGFpLWxpbmstMQ0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9wbHVn
aW4tbWFuYWdlci9mcmFnbWVudEAyL292ZXJyaWRlQDIvX292ZXJsYXlfL252aWRpYSxkYWktbGlu
ay0xDQooWEVOKSAvcGx1Z2luLW1hbmFnZXIvZnJhZ21lbnRAMi9vdmVycmlkZUAyL19vdmVybGF5
Xy9udmlkaWEsZGFpLWxpbmstMSBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVj
ayBpZiAvcGx1Z2luLW1hbmFnZXIvZnJhZ21lbnRAMi9vdmVycmlkZUAyL19vdmVybGF5Xy9udmlk
aWEsZGFpLWxpbmstMSBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2ly
cV9udW1iZXI6IGRldj0vcGx1Z2luLW1hbmFnZXIvZnJhZ21lbnRAMi9vdmVycmlkZUAyL19vdmVy
bGF5Xy9udmlkaWEsZGFpLWxpbmstMQ0KKFhFTikgaGFuZGxlIC9wbHVnaW4tbWFuYWdlci9mcmFn
bWVudEAzDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3BsdWdpbi1tYW5hZ2VyL2ZyYWdtZW50
QDMNCihYRU4pIC9wbHVnaW4tbWFuYWdlci9mcmFnbWVudEAzIHBhc3N0aHJvdWdoID0gMSBuYWRk
ciA9IDANCihYRU4pIENoZWNrIGlmIC9wbHVnaW4tbWFuYWdlci9mcmFnbWVudEAzIGlzIGJlaGlu
ZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9wbHVnaW4t
bWFuYWdlci9mcmFnbWVudEAzDQooWEVOKSBoYW5kbGUgL3BsdWdpbi1tYW5hZ2VyL2ZyYWdtZW50
QDMvb3ZlcnJpZGVAMA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9wbHVnaW4tbWFuYWdlci9m
cmFnbWVudEAzL292ZXJyaWRlQDANCihYRU4pIC9wbHVnaW4tbWFuYWdlci9mcmFnbWVudEAzL292
ZXJyaWRlQDAgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3BsdWdp
bi1tYW5hZ2VyL2ZyYWdtZW50QDMvb3ZlcnJpZGVAMCBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBh
ZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGx1Z2luLW1hbmFnZXIvZnJhZ21lbnRA
My9vdmVycmlkZUAwDQooWEVOKSBoYW5kbGUgL3BsdWdpbi1tYW5hZ2VyL2ZyYWdtZW50QDMvb3Zl
cnJpZGVAMC9fb3ZlcmxheV8NCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGx1Z2luLW1hbmFn
ZXIvZnJhZ21lbnRAMy9vdmVycmlkZUAwL19vdmVybGF5Xw0KKFhFTikgL3BsdWdpbi1tYW5hZ2Vy
L2ZyYWdtZW50QDMvb3ZlcnJpZGVAMC9fb3ZlcmxheV8gcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0g
MA0KKFhFTikgQ2hlY2sgaWYgL3BsdWdpbi1tYW5hZ2VyL2ZyYWdtZW50QDMvb3ZlcnJpZGVAMC9f
b3ZlcmxheV8gaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVt
YmVyOiBkZXY9L3BsdWdpbi1tYW5hZ2VyL2ZyYWdtZW50QDMvb3ZlcnJpZGVAMC9fb3ZlcmxheV8N
CihYRU4pIGhhbmRsZSAvcGx1Z2luLW1hbmFnZXIvZnJhZ21lbnRAMy9vdmVycmlkZUAxDQooWEVO
KSBkdF9pcnFfbnVtYmVyOiBkZXY9L3BsdWdpbi1tYW5hZ2VyL2ZyYWdtZW50QDMvb3ZlcnJpZGVA
MQ0KKFhFTikgL3BsdWdpbi1tYW5hZ2VyL2ZyYWdtZW50QDMvb3ZlcnJpZGVAMSBwYXNzdGhyb3Vn
aCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvcGx1Z2luLW1hbmFnZXIvZnJhZ21lbnRA
My9vdmVycmlkZUAxIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJx
X251bWJlcjogZGV2PS9wbHVnaW4tbWFuYWdlci9mcmFnbWVudEAzL292ZXJyaWRlQDENCihYRU4p
IGhhbmRsZSAvcGx1Z2luLW1hbmFnZXIvZnJhZ21lbnRAMy9vdmVycmlkZUAxL19vdmVybGF5Xw0K
KFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9wbHVnaW4tbWFuYWdlci9mcmFnbWVudEAzL292ZXJy
aWRlQDEvX292ZXJsYXlfDQooWEVOKSAvcGx1Z2luLW1hbmFnZXIvZnJhZ21lbnRAMy9vdmVycmlk
ZUAxL19vdmVybGF5XyBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAv
cGx1Z2luLW1hbmFnZXIvZnJhZ21lbnRAMy9vdmVycmlkZUAxL19vdmVybGF5XyBpcyBiZWhpbmQg
dGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGx1Z2luLW1h
bmFnZXIvZnJhZ21lbnRAMy9vdmVycmlkZUAxL19vdmVybGF5Xw0KKFhFTikgaGFuZGxlIC9wbHVn
aW4tbWFuYWdlci9mcmFnbWVudEA0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3BsdWdpbi1t
YW5hZ2VyL2ZyYWdtZW50QDQNCihYRU4pIC9wbHVnaW4tbWFuYWdlci9mcmFnbWVudEA0IHBhc3N0
aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9wbHVnaW4tbWFuYWdlci9mcmFn
bWVudEA0IGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJl
cjogZGV2PS9wbHVnaW4tbWFuYWdlci9mcmFnbWVudEA0DQooWEVOKSBoYW5kbGUgL3BsdWdpbi1t
YW5hZ2VyL2ZyYWdtZW50QDQvb3ZlcnJpZGVAMA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9w
bHVnaW4tbWFuYWdlci9mcmFnbWVudEA0L292ZXJyaWRlQDANCihYRU4pIC9wbHVnaW4tbWFuYWdl
ci9mcmFnbWVudEA0L292ZXJyaWRlQDAgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikg
Q2hlY2sgaWYgL3BsdWdpbi1tYW5hZ2VyL2ZyYWdtZW50QDQvb3ZlcnJpZGVAMCBpcyBiZWhpbmQg
dGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGx1Z2luLW1h
bmFnZXIvZnJhZ21lbnRANC9vdmVycmlkZUAwDQooWEVOKSBoYW5kbGUgL3BsdWdpbi1tYW5hZ2Vy
L2ZyYWdtZW50QDQvb3ZlcnJpZGVAMC9fb3ZlcmxheV8NCihYRU4pIGR0X2lycV9udW1iZXI6IGRl
dj0vcGx1Z2luLW1hbmFnZXIvZnJhZ21lbnRANC9vdmVycmlkZUAwL19vdmVybGF5Xw0KKFhFTikg
L3BsdWdpbi1tYW5hZ2VyL2ZyYWdtZW50QDQvb3ZlcnJpZGVAMC9fb3ZlcmxheV8gcGFzc3Rocm91
Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3BsdWdpbi1tYW5hZ2VyL2ZyYWdtZW50
QDQvb3ZlcnJpZGVAMC9fb3ZlcmxheV8gaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQoo
WEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3BsdWdpbi1tYW5hZ2VyL2ZyYWdtZW50QDQvb3ZlcnJp
ZGVAMC9fb3ZlcmxheV8NCihYRU4pIGhhbmRsZSAvcGx1Z2luLW1hbmFnZXIvZnJhZ21lbnRANC9v
dmVycmlkZUAxDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3BsdWdpbi1tYW5hZ2VyL2ZyYWdt
ZW50QDQvb3ZlcnJpZGVAMQ0KKFhFTikgL3BsdWdpbi1tYW5hZ2VyL2ZyYWdtZW50QDQvb3ZlcnJp
ZGVAMSBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvcGx1Z2luLW1h
bmFnZXIvZnJhZ21lbnRANC9vdmVycmlkZUAxIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBp
dA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9wbHVnaW4tbWFuYWdlci9mcmFnbWVudEA0L292
ZXJyaWRlQDENCihYRU4pIGhhbmRsZSAvcGx1Z2luLW1hbmFnZXIvZnJhZ21lbnRANC9vdmVycmlk
ZUAxL19vdmVybGF5Xw0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9wbHVnaW4tbWFuYWdlci9m
cmFnbWVudEA0L292ZXJyaWRlQDEvX292ZXJsYXlfDQooWEVOKSAvcGx1Z2luLW1hbmFnZXIvZnJh
Z21lbnRANC9vdmVycmlkZUAxL19vdmVybGF5XyBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQoo
WEVOKSBDaGVjayBpZiAvcGx1Z2luLW1hbmFnZXIvZnJhZ21lbnRANC9vdmVycmlkZUAxL19vdmVy
bGF5XyBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6
IGRldj0vcGx1Z2luLW1hbmFnZXIvZnJhZ21lbnRANC9vdmVycmlkZUAxL19vdmVybGF5Xw0KKFhF
TikgaGFuZGxlIC9wbHVnaW4tbWFuYWdlci9mcmFnbWVudEA0L292ZXJyaWRlQDINCihYRU4pIGR0
X2lycV9udW1iZXI6IGRldj0vcGx1Z2luLW1hbmFnZXIvZnJhZ21lbnRANC9vdmVycmlkZUAyDQoo
WEVOKSAvcGx1Z2luLW1hbmFnZXIvZnJhZ21lbnRANC9vdmVycmlkZUAyIHBhc3N0aHJvdWdoID0g
MSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9wbHVnaW4tbWFuYWdlci9mcmFnbWVudEA0L292
ZXJyaWRlQDIgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVt
YmVyOiBkZXY9L3BsdWdpbi1tYW5hZ2VyL2ZyYWdtZW50QDQvb3ZlcnJpZGVAMg0KKFhFTikgaGFu
ZGxlIC9wbHVnaW4tbWFuYWdlci9mcmFnbWVudEA0L292ZXJyaWRlQDIvX292ZXJsYXlfDQooWEVO
KSBkdF9pcnFfbnVtYmVyOiBkZXY9L3BsdWdpbi1tYW5hZ2VyL2ZyYWdtZW50QDQvb3ZlcnJpZGVA
Mi9fb3ZlcmxheV8NCihYRU4pIC9wbHVnaW4tbWFuYWdlci9mcmFnbWVudEA0L292ZXJyaWRlQDIv
X292ZXJsYXlfIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9wbHVn
aW4tbWFuYWdlci9mcmFnbWVudEA0L292ZXJyaWRlQDIvX292ZXJsYXlfIGlzIGJlaGluZCB0aGUg
SU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9wbHVnaW4tbWFuYWdl
ci9mcmFnbWVudEA0L292ZXJyaWRlQDIvX292ZXJsYXlfDQooWEVOKSBoYW5kbGUgL3BsdWdpbi1t
YW5hZ2VyL2ZyYWdtZW50QDUNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGx1Z2luLW1hbmFn
ZXIvZnJhZ21lbnRANQ0KKFhFTikgL3BsdWdpbi1tYW5hZ2VyL2ZyYWdtZW50QDUgcGFzc3Rocm91
Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3BsdWdpbi1tYW5hZ2VyL2ZyYWdtZW50
QDUgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBk
ZXY9L3BsdWdpbi1tYW5hZ2VyL2ZyYWdtZW50QDUNCihYRU4pIGhhbmRsZSAvcGx1Z2luLW1hbmFn
ZXIvZnJhZ21lbnRANS9vdmVycmlkZUAwDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3BsdWdp
bi1tYW5hZ2VyL2ZyYWdtZW50QDUvb3ZlcnJpZGVAMA0KKFhFTikgL3BsdWdpbi1tYW5hZ2VyL2Zy
YWdtZW50QDUvb3ZlcnJpZGVAMCBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVj
ayBpZiAvcGx1Z2luLW1hbmFnZXIvZnJhZ21lbnRANS9vdmVycmlkZUAwIGlzIGJlaGluZCB0aGUg
SU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9wbHVnaW4tbWFuYWdl
ci9mcmFnbWVudEA1L292ZXJyaWRlQDANCihYRU4pIGhhbmRsZSAvcGx1Z2luLW1hbmFnZXIvZnJh
Z21lbnRANS9vdmVycmlkZUAwL19vdmVybGF5Xw0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9w
bHVnaW4tbWFuYWdlci9mcmFnbWVudEA1L292ZXJyaWRlQDAvX292ZXJsYXlfDQooWEVOKSAvcGx1
Z2luLW1hbmFnZXIvZnJhZ21lbnRANS9vdmVycmlkZUAwL19vdmVybGF5XyBwYXNzdGhyb3VnaCA9
IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvcGx1Z2luLW1hbmFnZXIvZnJhZ21lbnRANS9v
dmVycmlkZUAwL19vdmVybGF5XyBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4p
IGR0X2lycV9udW1iZXI6IGRldj0vcGx1Z2luLW1hbmFnZXIvZnJhZ21lbnRANS9vdmVycmlkZUAw
L19vdmVybGF5Xw0KKFhFTikgaGFuZGxlIC9wbHVnaW4tbWFuYWdlci9mcmFnbWVudEA1L292ZXJy
aWRlQDENCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGx1Z2luLW1hbmFnZXIvZnJhZ21lbnRA
NS9vdmVycmlkZUAxDQooWEVOKSAvcGx1Z2luLW1hbmFnZXIvZnJhZ21lbnRANS9vdmVycmlkZUAx
IHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9wbHVnaW4tbWFuYWdl
ci9mcmFnbWVudEA1L292ZXJyaWRlQDEgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQoo
WEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3BsdWdpbi1tYW5hZ2VyL2ZyYWdtZW50QDUvb3ZlcnJp
ZGVAMQ0KKFhFTikgaGFuZGxlIC9wbHVnaW4tbWFuYWdlci9mcmFnbWVudEA1L292ZXJyaWRlQDEv
X292ZXJsYXlfDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3BsdWdpbi1tYW5hZ2VyL2ZyYWdt
ZW50QDUvb3ZlcnJpZGVAMS9fb3ZlcmxheV8NCihYRU4pIC9wbHVnaW4tbWFuYWdlci9mcmFnbWVu
dEA1L292ZXJyaWRlQDEvX292ZXJsYXlfIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4p
IENoZWNrIGlmIC9wbHVnaW4tbWFuYWdlci9mcmFnbWVudEA1L292ZXJyaWRlQDEvX292ZXJsYXlf
IGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2
PS9wbHVnaW4tbWFuYWdlci9mcmFnbWVudEA1L292ZXJyaWRlQDEvX292ZXJsYXlfDQooWEVOKSBo
YW5kbGUgL3BsdWdpbi1tYW5hZ2VyL2ZyYWdtZW50QDUvb3ZlcnJpZGVAMg0KKFhFTikgZHRfaXJx
X251bWJlcjogZGV2PS9wbHVnaW4tbWFuYWdlci9mcmFnbWVudEA1L292ZXJyaWRlQDINCihYRU4p
IC9wbHVnaW4tbWFuYWdlci9mcmFnbWVudEA1L292ZXJyaWRlQDIgcGFzc3Rocm91Z2ggPSAxIG5h
ZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3BsdWdpbi1tYW5hZ2VyL2ZyYWdtZW50QDUvb3ZlcnJp
ZGVAMiBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6
IGRldj0vcGx1Z2luLW1hbmFnZXIvZnJhZ21lbnRANS9vdmVycmlkZUAyDQooWEVOKSBoYW5kbGUg
L3BsdWdpbi1tYW5hZ2VyL2ZyYWdtZW50QDUvb3ZlcnJpZGVAMi9fb3ZlcmxheV8NCihYRU4pIGR0
X2lycV9udW1iZXI6IGRldj0vcGx1Z2luLW1hbmFnZXIvZnJhZ21lbnRANS9vdmVycmlkZUAyL19v
dmVybGF5Xw0KKFhFTikgL3BsdWdpbi1tYW5hZ2VyL2ZyYWdtZW50QDUvb3ZlcnJpZGVAMi9fb3Zl
cmxheV8gcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3BsdWdpbi1t
YW5hZ2VyL2ZyYWdtZW50QDUvb3ZlcnJpZGVAMi9fb3ZlcmxheV8gaXMgYmVoaW5kIHRoZSBJT01N
VSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3BsdWdpbi1tYW5hZ2VyL2Zy
YWdtZW50QDUvb3ZlcnJpZGVAMi9fb3ZlcmxheV8NCihYRU4pIGhhbmRsZSAvcGx1Z2luLW1hbmFn
ZXIvZnJhZ2VtZW50QDYNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGx1Z2luLW1hbmFnZXIv
ZnJhZ2VtZW50QDYNCihYRU4pIC9wbHVnaW4tbWFuYWdlci9mcmFnZW1lbnRANiBwYXNzdGhyb3Vn
aCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50
QDYgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBk
ZXY9L3BsdWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEA2DQooWEVOKSBoYW5kbGUgL3BsdWdpbi1tYW5h
Z2VyL2ZyYWdlbWVudEA2L292ZXJyaWRlQDANCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGx1
Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDYvb3ZlcnJpZGVAMA0KKFhFTikgL3BsdWdpbi1tYW5hZ2Vy
L2ZyYWdlbWVudEA2L292ZXJyaWRlQDAgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikg
Q2hlY2sgaWYgL3BsdWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEA2L292ZXJyaWRlQDAgaXMgYmVoaW5k
IHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3BsdWdpbi1t
YW5hZ2VyL2ZyYWdlbWVudEA2L292ZXJyaWRlQDANCihYRU4pIGhhbmRsZSAvcGx1Z2luLW1hbmFn
ZXIvZnJhZ2VtZW50QDYvb3ZlcnJpZGVAMC9fb3ZlcmxheV8NCihYRU4pIGR0X2lycV9udW1iZXI6
IGRldj0vcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDYvb3ZlcnJpZGVAMC9fb3ZlcmxheV8NCihY
RU4pIC9wbHVnaW4tbWFuYWdlci9mcmFnZW1lbnRANi9vdmVycmlkZUAwL19vdmVybGF5XyBwYXNz
dGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvcGx1Z2luLW1hbmFnZXIvZnJh
Z2VtZW50QDYvb3ZlcnJpZGVAMC9fb3ZlcmxheV8gaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRk
IGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3BsdWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEA2
L292ZXJyaWRlQDAvX292ZXJsYXlfDQooWEVOKSBoYW5kbGUgL3BsdWdpbi1tYW5hZ2VyL2ZyYWdl
bWVudEA3DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3BsdWdpbi1tYW5hZ2VyL2ZyYWdlbWVu
dEA3DQooWEVOKSAvcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDcgcGFzc3Rocm91Z2ggPSAxIG5h
ZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3BsdWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEA3IGlzIGJl
aGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9wbHVn
aW4tbWFuYWdlci9mcmFnZW1lbnRANw0KKFhFTikgaGFuZGxlIC9wbHVnaW4tbWFuYWdlci9mcmFn
ZW1lbnRANy9vdmVycmlkZUAwDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3BsdWdpbi1tYW5h
Z2VyL2ZyYWdlbWVudEA3L292ZXJyaWRlQDANCihYRU4pIC9wbHVnaW4tbWFuYWdlci9mcmFnZW1l
bnRANy9vdmVycmlkZUAwIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlm
IC9wbHVnaW4tbWFuYWdlci9mcmFnZW1lbnRANy9vdmVycmlkZUAwIGlzIGJlaGluZCB0aGUgSU9N
TVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9wbHVnaW4tbWFuYWdlci9m
cmFnZW1lbnRANy9vdmVycmlkZUAwDQooWEVOKSBoYW5kbGUgL3BsdWdpbi1tYW5hZ2VyL2ZyYWdl
bWVudEA3L292ZXJyaWRlQDAvX292ZXJsYXlfDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3Bs
dWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEA3L292ZXJyaWRlQDAvX292ZXJsYXlfDQooWEVOKSAvcGx1
Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDcvb3ZlcnJpZGVAMC9fb3ZlcmxheV8gcGFzc3Rocm91Z2gg
PSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3BsdWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEA3
L292ZXJyaWRlQDAvX292ZXJsYXlfIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhF
TikgZHRfaXJxX251bWJlcjogZGV2PS9wbHVnaW4tbWFuYWdlci9mcmFnZW1lbnRANy9vdmVycmlk
ZUAwL19vdmVybGF5Xw0KKFhFTikgaGFuZGxlIC9wbHVnaW4tbWFuYWdlci9mcmFnZW1lbnRAOA0K
KFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9wbHVnaW4tbWFuYWdlci9mcmFnZW1lbnRAOA0KKFhF
TikgL3BsdWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEA4IHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDAN
CihYRU4pIENoZWNrIGlmIC9wbHVnaW4tbWFuYWdlci9mcmFnZW1lbnRAOCBpcyBiZWhpbmQgdGhl
IElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGx1Z2luLW1hbmFn
ZXIvZnJhZ2VtZW50QDgNCihYRU4pIGhhbmRsZSAvcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDgv
b3ZlcnJpZGVAMA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9wbHVnaW4tbWFuYWdlci9mcmFn
ZW1lbnRAOC9vdmVycmlkZUAwDQooWEVOKSAvcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDgvb3Zl
cnJpZGVAMCBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvcGx1Z2lu
LW1hbmFnZXIvZnJhZ2VtZW50QDgvb3ZlcnJpZGVAMCBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBh
ZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50
QDgvb3ZlcnJpZGVAMA0KKFhFTikgaGFuZGxlIC9wbHVnaW4tbWFuYWdlci9mcmFnZW1lbnRAOC9v
dmVycmlkZUAwL19vdmVybGF5Xw0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9wbHVnaW4tbWFu
YWdlci9mcmFnZW1lbnRAOC9vdmVycmlkZUAwL19vdmVybGF5Xw0KKFhFTikgL3BsdWdpbi1tYW5h
Z2VyL2ZyYWdlbWVudEA4L292ZXJyaWRlQDAvX292ZXJsYXlfIHBhc3N0aHJvdWdoID0gMSBuYWRk
ciA9IDANCihYRU4pIENoZWNrIGlmIC9wbHVnaW4tbWFuYWdlci9mcmFnZW1lbnRAOC9vdmVycmlk
ZUAwL19vdmVybGF5XyBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2ly
cV9udW1iZXI6IGRldj0vcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDgvb3ZlcnJpZGVAMC9fb3Zl
cmxheV8NCihYRU4pIGhhbmRsZSAvcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDkNCihYRU4pIGR0
X2lycV9udW1iZXI6IGRldj0vcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDkNCihYRU4pIC9wbHVn
aW4tbWFuYWdlci9mcmFnZW1lbnRAOSBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBD
aGVjayBpZiAvcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDkgaXMgYmVoaW5kIHRoZSBJT01NVSBh
bmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3BsdWdpbi1tYW5hZ2VyL2ZyYWdl
bWVudEA5DQooWEVOKSBoYW5kbGUgL3BsdWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEA5L292ZXJyaWRl
QDANCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDkv
b3ZlcnJpZGVAMA0KKFhFTikgL3BsdWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEA5L292ZXJyaWRlQDAg
cGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3BsdWdpbi1tYW5hZ2Vy
L2ZyYWdlbWVudEA5L292ZXJyaWRlQDAgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQoo
WEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3BsdWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEA5L292ZXJy
aWRlQDANCihYRU4pIGhhbmRsZSAvcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDkvb3ZlcnJpZGVA
MC9fb3ZlcmxheV8NCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGx1Z2luLW1hbmFnZXIvZnJh
Z2VtZW50QDkvb3ZlcnJpZGVAMC9fb3ZlcmxheV8NCihYRU4pIC9wbHVnaW4tbWFuYWdlci9mcmFn
ZW1lbnRAOS9vdmVycmlkZUAwL19vdmVybGF5XyBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQoo
WEVOKSBDaGVjayBpZiAvcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDkvb3ZlcnJpZGVAMC9fb3Zl
cmxheV8gaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVy
OiBkZXY9L3BsdWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEA5L292ZXJyaWRlQDAvX292ZXJsYXlfDQoo
WEVOKSBoYW5kbGUgL3BsdWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEA5L292ZXJyaWRlQDENCihYRU4p
IGR0X2lycV9udW1iZXI6IGRldj0vcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDkvb3ZlcnJpZGVA
MQ0KKFhFTikgL3BsdWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEA5L292ZXJyaWRlQDEgcGFzc3Rocm91
Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3BsdWdpbi1tYW5hZ2VyL2ZyYWdlbWVu
dEA5L292ZXJyaWRlQDEgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9p
cnFfbnVtYmVyOiBkZXY9L3BsdWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEA5L292ZXJyaWRlQDENCihY
RU4pIGhhbmRsZSAvcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDkvb3ZlcnJpZGVAMS9fb3Zlcmxh
eV8NCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDkv
b3ZlcnJpZGVAMS9fb3ZlcmxheV8NCihYRU4pIC9wbHVnaW4tbWFuYWdlci9mcmFnZW1lbnRAOS9v
dmVycmlkZUAxL19vdmVybGF5XyBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVj
ayBpZiAvcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDkvb3ZlcnJpZGVAMS9fb3ZlcmxheV8gaXMg
YmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3Bs
dWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEA5L292ZXJyaWRlQDEvX292ZXJsYXlfDQooWEVOKSBoYW5k
bGUgL3BsdWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEA5L292ZXJyaWRlQDINCihYRU4pIGR0X2lycV9u
dW1iZXI6IGRldj0vcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDkvb3ZlcnJpZGVAMg0KKFhFTikg
L3BsdWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEA5L292ZXJyaWRlQDIgcGFzc3Rocm91Z2ggPSAxIG5h
ZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3BsdWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEA5L292ZXJy
aWRlQDIgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVy
OiBkZXY9L3BsdWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEA5L292ZXJyaWRlQDINCihYRU4pIGhhbmRs
ZSAvcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDkvb3ZlcnJpZGVAMi9fb3ZlcmxheV8NCihYRU4p
IGR0X2lycV9udW1iZXI6IGRldj0vcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDkvb3ZlcnJpZGVA
Mi9fb3ZlcmxheV8NCihYRU4pIC9wbHVnaW4tbWFuYWdlci9mcmFnZW1lbnRAOS9vdmVycmlkZUAy
L19vdmVybGF5XyBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvcGx1
Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDkvb3ZlcnJpZGVAMi9fb3ZlcmxheV8gaXMgYmVoaW5kIHRo
ZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3BsdWdpbi1tYW5h
Z2VyL2ZyYWdlbWVudEA5L292ZXJyaWRlQDIvX292ZXJsYXlfDQooWEVOKSBoYW5kbGUgL3BsdWdp
bi1tYW5hZ2VyL2ZyYWdlbWVudEA5L292ZXJyaWRlQDMNCihYRU4pIGR0X2lycV9udW1iZXI6IGRl
dj0vcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDkvb3ZlcnJpZGVAMw0KKFhFTikgL3BsdWdpbi1t
YW5hZ2VyL2ZyYWdlbWVudEA5L292ZXJyaWRlQDMgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0K
KFhFTikgQ2hlY2sgaWYgL3BsdWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEA5L292ZXJyaWRlQDMgaXMg
YmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3Bs
dWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEA5L292ZXJyaWRlQDMNCihYRU4pIGhhbmRsZSAvcGx1Z2lu
LW1hbmFnZXIvZnJhZ2VtZW50QDkvb3ZlcnJpZGVAMy9fb3ZlcmxheV8NCihYRU4pIGR0X2lycV9u
dW1iZXI6IGRldj0vcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDkvb3ZlcnJpZGVAMy9fb3Zlcmxh
eV8NCihYRU4pIC9wbHVnaW4tbWFuYWdlci9mcmFnZW1lbnRAOS9vdmVycmlkZUAzL19vdmVybGF5
XyBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvcGx1Z2luLW1hbmFn
ZXIvZnJhZ2VtZW50QDkvb3ZlcnJpZGVAMy9fb3ZlcmxheV8gaXMgYmVoaW5kIHRoZSBJT01NVSBh
bmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3BsdWdpbi1tYW5hZ2VyL2ZyYWdl
bWVudEA5L292ZXJyaWRlQDMvX292ZXJsYXlfDQooWEVOKSBoYW5kbGUgL3BsdWdpbi1tYW5hZ2Vy
L2ZyYWdlbWVudEA5L292ZXJyaWRlQDQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGx1Z2lu
LW1hbmFnZXIvZnJhZ2VtZW50QDkvb3ZlcnJpZGVANA0KKFhFTikgL3BsdWdpbi1tYW5hZ2VyL2Zy
YWdlbWVudEA5L292ZXJyaWRlQDQgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hl
Y2sgaWYgL3BsdWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEA5L292ZXJyaWRlQDQgaXMgYmVoaW5kIHRo
ZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3BsdWdpbi1tYW5h
Z2VyL2ZyYWdlbWVudEA5L292ZXJyaWRlQDQNCihYRU4pIGhhbmRsZSAvcGx1Z2luLW1hbmFnZXIv
ZnJhZ2VtZW50QDkvb3ZlcnJpZGVANC9fb3ZlcmxheV8NCihYRU4pIGR0X2lycV9udW1iZXI6IGRl
dj0vcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDkvb3ZlcnJpZGVANC9fb3ZlcmxheV8NCihYRU4p
IC9wbHVnaW4tbWFuYWdlci9mcmFnZW1lbnRAOS9vdmVycmlkZUA0L19vdmVybGF5XyBwYXNzdGhy
b3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvcGx1Z2luLW1hbmFnZXIvZnJhZ2Vt
ZW50QDkvb3ZlcnJpZGVANC9fb3ZlcmxheV8gaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0
DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3BsdWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEA5L292
ZXJyaWRlQDQvX292ZXJsYXlfDQooWEVOKSBoYW5kbGUgL3BsdWdpbi1tYW5hZ2VyL2ZyYWdlbWVu
dEA5L292ZXJyaWRlQDUNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGx1Z2luLW1hbmFnZXIv
ZnJhZ2VtZW50QDkvb3ZlcnJpZGVANQ0KKFhFTikgL3BsdWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEA5
L292ZXJyaWRlQDUgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3Bs
dWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEA5L292ZXJyaWRlQDUgaXMgYmVoaW5kIHRoZSBJT01NVSBh
bmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3BsdWdpbi1tYW5hZ2VyL2ZyYWdl
bWVudEA5L292ZXJyaWRlQDUNCihYRU4pIGhhbmRsZSAvcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50
QDkvb3ZlcnJpZGVANS9fb3ZlcmxheV8NCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGx1Z2lu
LW1hbmFnZXIvZnJhZ2VtZW50QDkvb3ZlcnJpZGVANS9fb3ZlcmxheV8NCihYRU4pIC9wbHVnaW4t
bWFuYWdlci9mcmFnZW1lbnRAOS9vdmVycmlkZUA1L19vdmVybGF5XyBwYXNzdGhyb3VnaCA9IDEg
bmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDkvb3Zl
cnJpZGVANS9fb3ZlcmxheV8gaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBk
dF9pcnFfbnVtYmVyOiBkZXY9L3BsdWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEA5L292ZXJyaWRlQDUv
X292ZXJsYXlfDQooWEVOKSBoYW5kbGUgL3BsdWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEA5L292ZXJy
aWRlQDYNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50
QDkvb3ZlcnJpZGVANg0KKFhFTikgL3BsdWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEA5L292ZXJyaWRl
QDYgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3BsdWdpbi1tYW5h
Z2VyL2ZyYWdlbWVudEA5L292ZXJyaWRlQDYgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0
DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3BsdWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEA5L292
ZXJyaWRlQDYNCihYRU4pIGhhbmRsZSAvcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDkvb3ZlcnJp
ZGVANi9fb3ZlcmxheV8NCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGx1Z2luLW1hbmFnZXIv
ZnJhZ2VtZW50QDkvb3ZlcnJpZGVANi9fb3ZlcmxheV8NCihYRU4pIC9wbHVnaW4tbWFuYWdlci9m
cmFnZW1lbnRAOS9vdmVycmlkZUA2L19vdmVybGF5XyBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAw
DQooWEVOKSBDaGVjayBpZiAvcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDkvb3ZlcnJpZGVANi9f
b3ZlcmxheV8gaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVt
YmVyOiBkZXY9L3BsdWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEA5L292ZXJyaWRlQDYvX292ZXJsYXlf
DQooWEVOKSBoYW5kbGUgL3BsdWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEA5L292ZXJyaWRlQDcNCihY
RU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDkvb3ZlcnJp
ZGVANw0KKFhFTikgL3BsdWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEA5L292ZXJyaWRlQDcgcGFzc3Ro
cm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3BsdWdpbi1tYW5hZ2VyL2ZyYWdl
bWVudEA5L292ZXJyaWRlQDcgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBk
dF9pcnFfbnVtYmVyOiBkZXY9L3BsdWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEA5L292ZXJyaWRlQDcN
CihYRU4pIGhhbmRsZSAvcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDkvb3ZlcnJpZGVANy9fb3Zl
cmxheV8NCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50
QDkvb3ZlcnJpZGVANy9fb3ZlcmxheV8NCihYRU4pIC9wbHVnaW4tbWFuYWdlci9mcmFnZW1lbnRA
OS9vdmVycmlkZUA3L19vdmVybGF5XyBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBD
aGVjayBpZiAvcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDkvb3ZlcnJpZGVANy9fb3ZlcmxheV8g
aXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9
L3BsdWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEA5L292ZXJyaWRlQDcvX292ZXJsYXlfDQooWEVOKSBo
YW5kbGUgL3BsdWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEA5L292ZXJyaWRlQDgNCihYRU4pIGR0X2ly
cV9udW1iZXI6IGRldj0vcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDkvb3ZlcnJpZGVAOA0KKFhF
TikgL3BsdWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEA5L292ZXJyaWRlQDggcGFzc3Rocm91Z2ggPSAx
IG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3BsdWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEA5L292
ZXJyaWRlQDggaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVt
YmVyOiBkZXY9L3BsdWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEA5L292ZXJyaWRlQDgNCihYRU4pIGhh
bmRsZSAvcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDkvb3ZlcnJpZGVAOC9fb3ZlcmxheV8NCihY
RU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDkvb3ZlcnJp
ZGVAOC9fb3ZlcmxheV8NCihYRU4pIC9wbHVnaW4tbWFuYWdlci9mcmFnZW1lbnRAOS9vdmVycmlk
ZUA4L19vdmVybGF5XyBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAv
cGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDkvb3ZlcnJpZGVAOC9fb3ZlcmxheV8gaXMgYmVoaW5k
IHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3BsdWdpbi1t
YW5hZ2VyL2ZyYWdlbWVudEA5L292ZXJyaWRlQDgvX292ZXJsYXlfDQooWEVOKSBoYW5kbGUgL3Bs
dWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEA5L292ZXJyaWRlQDkNCihYRU4pIGR0X2lycV9udW1iZXI6
IGRldj0vcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDkvb3ZlcnJpZGVAOQ0KKFhFTikgL3BsdWdp
bi1tYW5hZ2VyL2ZyYWdlbWVudEA5L292ZXJyaWRlQDkgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0g
MA0KKFhFTikgQ2hlY2sgaWYgL3BsdWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEA5L292ZXJyaWRlQDkg
aXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9
L3BsdWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEA5L292ZXJyaWRlQDkNCihYRU4pIGhhbmRsZSAvcGx1
Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDkvb3ZlcnJpZGVAOS9fb3ZlcmxheV8NCihYRU4pIGR0X2ly
cV9udW1iZXI6IGRldj0vcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDkvb3ZlcnJpZGVAOS9fb3Zl
cmxheV8NCihYRU4pIC9wbHVnaW4tbWFuYWdlci9mcmFnZW1lbnRAOS9vdmVycmlkZUA5L19vdmVy
bGF5XyBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvcGx1Z2luLW1h
bmFnZXIvZnJhZ2VtZW50QDkvb3ZlcnJpZGVAOS9fb3ZlcmxheV8gaXMgYmVoaW5kIHRoZSBJT01N
VSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3BsdWdpbi1tYW5hZ2VyL2Zy
YWdlbWVudEA5L292ZXJyaWRlQDkvX292ZXJsYXlfDQooWEVOKSBoYW5kbGUgL3BsdWdpbi1tYW5h
Z2VyL2ZyYWdlbWVudEA5L292ZXJyaWRlQDEwDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3Bs
dWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEA5L292ZXJyaWRlQDEwDQooWEVOKSAvcGx1Z2luLW1hbmFn
ZXIvZnJhZ2VtZW50QDkvb3ZlcnJpZGVAMTAgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhF
TikgQ2hlY2sgaWYgL3BsdWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEA5L292ZXJyaWRlQDEwIGlzIGJl
aGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9wbHVn
aW4tbWFuYWdlci9mcmFnZW1lbnRAOS9vdmVycmlkZUAxMA0KKFhFTikgaGFuZGxlIC9wbHVnaW4t
bWFuYWdlci9mcmFnZW1lbnRAOS9vdmVycmlkZUAxMC9fb3ZlcmxheV8NCihYRU4pIGR0X2lycV9u
dW1iZXI6IGRldj0vcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDkvb3ZlcnJpZGVAMTAvX292ZXJs
YXlfDQooWEVOKSAvcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDkvb3ZlcnJpZGVAMTAvX292ZXJs
YXlfIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9wbHVnaW4tbWFu
YWdlci9mcmFnZW1lbnRAOS9vdmVycmlkZUAxMC9fb3ZlcmxheV8gaXMgYmVoaW5kIHRoZSBJT01N
VSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3BsdWdpbi1tYW5hZ2VyL2Zy
YWdlbWVudEA5L292ZXJyaWRlQDEwL19vdmVybGF5Xw0KKFhFTikgaGFuZGxlIC9wbHVnaW4tbWFu
YWdlci9mcmFnZW1lbnRAOS9vdmVycmlkZUAxMQ0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9w
bHVnaW4tbWFuYWdlci9mcmFnZW1lbnRAOS9vdmVycmlkZUAxMQ0KKFhFTikgL3BsdWdpbi1tYW5h
Z2VyL2ZyYWdlbWVudEA5L292ZXJyaWRlQDExIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihY
RU4pIENoZWNrIGlmIC9wbHVnaW4tbWFuYWdlci9mcmFnZW1lbnRAOS9vdmVycmlkZUAxMSBpcyBi
ZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGx1
Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDkvb3ZlcnJpZGVAMTENCihYRU4pIGhhbmRsZSAvcGx1Z2lu
LW1hbmFnZXIvZnJhZ2VtZW50QDkvb3ZlcnJpZGVAMTEvX292ZXJsYXlfDQooWEVOKSBkdF9pcnFf
bnVtYmVyOiBkZXY9L3BsdWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEA5L292ZXJyaWRlQDExL19vdmVy
bGF5Xw0KKFhFTikgL3BsdWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEA5L292ZXJyaWRlQDExL19vdmVy
bGF5XyBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvcGx1Z2luLW1h
bmFnZXIvZnJhZ2VtZW50QDkvb3ZlcnJpZGVAMTEvX292ZXJsYXlfIGlzIGJlaGluZCB0aGUgSU9N
TVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9wbHVnaW4tbWFuYWdlci9m
cmFnZW1lbnRAOS9vdmVycmlkZUAxMS9fb3ZlcmxheV8NCihYRU4pIGhhbmRsZSAvcGx1Z2luLW1h
bmFnZXIvZnJhZ2VtZW50QDkvb3ZlcnJpZGVAMTINCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0v
cGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDkvb3ZlcnJpZGVAMTINCihYRU4pIC9wbHVnaW4tbWFu
YWdlci9mcmFnZW1lbnRAOS9vdmVycmlkZUAxMiBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQoo
WEVOKSBDaGVjayBpZiAvcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDkvb3ZlcnJpZGVAMTIgaXMg
YmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3Bs
dWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEA5L292ZXJyaWRlQDEyDQooWEVOKSBoYW5kbGUgL3BsdWdp
bi1tYW5hZ2VyL2ZyYWdlbWVudEA5L292ZXJyaWRlQDEyL19vdmVybGF5Xw0KKFhFTikgZHRfaXJx
X251bWJlcjogZGV2PS9wbHVnaW4tbWFuYWdlci9mcmFnZW1lbnRAOS9vdmVycmlkZUAxMi9fb3Zl
cmxheV8NCihYRU4pIC9wbHVnaW4tbWFuYWdlci9mcmFnZW1lbnRAOS9vdmVycmlkZUAxMi9fb3Zl
cmxheV8gcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3BsdWdpbi1t
YW5hZ2VyL2ZyYWdlbWVudEA5L292ZXJyaWRlQDEyL19vdmVybGF5XyBpcyBiZWhpbmQgdGhlIElP
TU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGx1Z2luLW1hbmFnZXIv
ZnJhZ2VtZW50QDkvb3ZlcnJpZGVAMTIvX292ZXJsYXlfDQooWEVOKSBoYW5kbGUgL3BsdWdpbi1t
YW5hZ2VyL2ZyYWdlbWVudEA5L292ZXJyaWRlQDEzDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9
L3BsdWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEA5L292ZXJyaWRlQDEzDQooWEVOKSAvcGx1Z2luLW1h
bmFnZXIvZnJhZ2VtZW50QDkvb3ZlcnJpZGVAMTMgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0K
KFhFTikgQ2hlY2sgaWYgL3BsdWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEA5L292ZXJyaWRlQDEzIGlz
IGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9w
bHVnaW4tbWFuYWdlci9mcmFnZW1lbnRAOS9vdmVycmlkZUAxMw0KKFhFTikgaGFuZGxlIC9wbHVn
aW4tbWFuYWdlci9mcmFnZW1lbnRAOS9vdmVycmlkZUAxMy9fb3ZlcmxheV8NCihYRU4pIGR0X2ly
cV9udW1iZXI6IGRldj0vcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDkvb3ZlcnJpZGVAMTMvX292
ZXJsYXlfDQooWEVOKSAvcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDkvb3ZlcnJpZGVAMTMvX292
ZXJsYXlfIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9wbHVnaW4t
bWFuYWdlci9mcmFnZW1lbnRAOS9vdmVycmlkZUAxMy9fb3ZlcmxheV8gaXMgYmVoaW5kIHRoZSBJ
T01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3BsdWdpbi1tYW5hZ2Vy
L2ZyYWdlbWVudEA5L292ZXJyaWRlQDEzL19vdmVybGF5Xw0KKFhFTikgaGFuZGxlIC9wbHVnaW4t
bWFuYWdlci9mcmFnZW1lbnRAOS9vdmVycmlkZUAxNA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2
PS9wbHVnaW4tbWFuYWdlci9mcmFnZW1lbnRAOS9vdmVycmlkZUAxNA0KKFhFTikgL3BsdWdpbi1t
YW5hZ2VyL2ZyYWdlbWVudEA5L292ZXJyaWRlQDE0IHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDAN
CihYRU4pIENoZWNrIGlmIC9wbHVnaW4tbWFuYWdlci9mcmFnZW1lbnRAOS9vdmVycmlkZUAxNCBp
cyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0v
cGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDkvb3ZlcnJpZGVAMTQNCihYRU4pIGhhbmRsZSAvcGx1
Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDkvb3ZlcnJpZGVAMTQvX292ZXJsYXlfDQooWEVOKSBkdF9p
cnFfbnVtYmVyOiBkZXY9L3BsdWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEA5L292ZXJyaWRlQDE0L19v
dmVybGF5Xw0KKFhFTikgL3BsdWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEA5L292ZXJyaWRlQDE0L19v
dmVybGF5XyBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvcGx1Z2lu
LW1hbmFnZXIvZnJhZ2VtZW50QDkvb3ZlcnJpZGVAMTQvX292ZXJsYXlfIGlzIGJlaGluZCB0aGUg
SU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9wbHVnaW4tbWFuYWdl
ci9mcmFnZW1lbnRAOS9vdmVycmlkZUAxNC9fb3ZlcmxheV8NCihYRU4pIGhhbmRsZSAvcGx1Z2lu
LW1hbmFnZXIvZnJhZ2VtZW50QDkvb3ZlcnJpZGVAMTUNCihYRU4pIGR0X2lycV9udW1iZXI6IGRl
dj0vcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDkvb3ZlcnJpZGVAMTUNCihYRU4pIC9wbHVnaW4t
bWFuYWdlci9mcmFnZW1lbnRAOS9vdmVycmlkZUAxNSBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAw
DQooWEVOKSBDaGVjayBpZiAvcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDkvb3ZlcnJpZGVAMTUg
aXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9
L3BsdWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEA5L292ZXJyaWRlQDE1DQooWEVOKSBoYW5kbGUgL3Bs
dWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEA5L292ZXJyaWRlQDE1L19vdmVybGF5Xw0KKFhFTikgZHRf
aXJxX251bWJlcjogZGV2PS9wbHVnaW4tbWFuYWdlci9mcmFnZW1lbnRAOS9vdmVycmlkZUAxNS9f
b3ZlcmxheV8NCihYRU4pIC9wbHVnaW4tbWFuYWdlci9mcmFnZW1lbnRAOS9vdmVycmlkZUAxNS9f
b3ZlcmxheV8gcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3BsdWdp
bi1tYW5hZ2VyL2ZyYWdlbWVudEA5L292ZXJyaWRlQDE1L19vdmVybGF5XyBpcyBiZWhpbmQgdGhl
IElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGx1Z2luLW1hbmFn
ZXIvZnJhZ2VtZW50QDkvb3ZlcnJpZGVAMTUvX292ZXJsYXlfDQooWEVOKSBoYW5kbGUgL3BsdWdp
bi1tYW5hZ2VyL2ZyYWdlbWVudEA5L292ZXJyaWRlQDE2DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBk
ZXY9L3BsdWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEA5L292ZXJyaWRlQDE2DQooWEVOKSAvcGx1Z2lu
LW1hbmFnZXIvZnJhZ2VtZW50QDkvb3ZlcnJpZGVAMTYgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0g
MA0KKFhFTikgQ2hlY2sgaWYgL3BsdWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEA5L292ZXJyaWRlQDE2
IGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2
PS9wbHVnaW4tbWFuYWdlci9mcmFnZW1lbnRAOS9vdmVycmlkZUAxNg0KKFhFTikgaGFuZGxlIC9w
bHVnaW4tbWFuYWdlci9mcmFnZW1lbnRAOS9vdmVycmlkZUAxNi9fb3ZlcmxheV8NCihYRU4pIGR0
X2lycV9udW1iZXI6IGRldj0vcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDkvb3ZlcnJpZGVAMTYv
X292ZXJsYXlfDQooWEVOKSAvcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDkvb3ZlcnJpZGVAMTYv
X292ZXJsYXlfIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9wbHVn
aW4tbWFuYWdlci9mcmFnZW1lbnRAOS9vdmVycmlkZUAxNi9fb3ZlcmxheV8gaXMgYmVoaW5kIHRo
ZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3BsdWdpbi1tYW5h
Z2VyL2ZyYWdlbWVudEA5L292ZXJyaWRlQDE2L19vdmVybGF5Xw0KKFhFTikgaGFuZGxlIC9wbHVn
aW4tbWFuYWdlci9mcmFnZW1lbnRAMTANCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGx1Z2lu
LW1hbmFnZXIvZnJhZ2VtZW50QDEwDQooWEVOKSAvcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDEw
IHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9wbHVnaW4tbWFuYWdl
ci9mcmFnZW1lbnRAMTAgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9p
cnFfbnVtYmVyOiBkZXY9L3BsdWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEAxMA0KKFhFTikgaGFuZGxl
IC9wbHVnaW4tbWFuYWdlci9mcmFnZW1lbnRAMTAvb3ZlcnJpZGVAMA0KKFhFTikgZHRfaXJxX251
bWJlcjogZGV2PS9wbHVnaW4tbWFuYWdlci9mcmFnZW1lbnRAMTAvb3ZlcnJpZGVAMA0KKFhFTikg
L3BsdWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEAxMC9vdmVycmlkZUAwIHBhc3N0aHJvdWdoID0gMSBu
YWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9wbHVnaW4tbWFuYWdlci9mcmFnZW1lbnRAMTAvb3Zl
cnJpZGVAMCBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1i
ZXI6IGRldj0vcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDEwL292ZXJyaWRlQDANCihYRU4pIGhh
bmRsZSAvcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDEwL292ZXJyaWRlQDAvX292ZXJsYXlfDQoo
WEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3BsdWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEAxMC9vdmVy
cmlkZUAwL19vdmVybGF5Xw0KKFhFTikgL3BsdWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEAxMC9vdmVy
cmlkZUAwL19vdmVybGF5XyBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBp
ZiAvcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDEwL292ZXJyaWRlQDAvX292ZXJsYXlfIGlzIGJl
aGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9wbHVn
aW4tbWFuYWdlci9mcmFnZW1lbnRAMTAvb3ZlcnJpZGVAMC9fb3ZlcmxheV8NCihYRU4pIGhhbmRs
ZSAvcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDEwL292ZXJyaWRlQDENCihYRU4pIGR0X2lycV9u
dW1iZXI6IGRldj0vcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDEwL292ZXJyaWRlQDENCihYRU4p
IC9wbHVnaW4tbWFuYWdlci9mcmFnZW1lbnRAMTAvb3ZlcnJpZGVAMSBwYXNzdGhyb3VnaCA9IDEg
bmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDEwL292
ZXJyaWRlQDEgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVt
YmVyOiBkZXY9L3BsdWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEAxMC9vdmVycmlkZUAxDQooWEVOKSBo
YW5kbGUgL3BsdWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEAxMC9vdmVycmlkZUAxL19vdmVybGF5Xw0K
KFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9wbHVnaW4tbWFuYWdlci9mcmFnZW1lbnRAMTAvb3Zl
cnJpZGVAMS9fb3ZlcmxheV8NCihYRU4pIC9wbHVnaW4tbWFuYWdlci9mcmFnZW1lbnRAMTAvb3Zl
cnJpZGVAMS9fb3ZlcmxheV8gcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sg
aWYgL3BsdWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEAxMC9vdmVycmlkZUAxL19vdmVybGF5XyBpcyBi
ZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGx1
Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDEwL292ZXJyaWRlQDEvX292ZXJsYXlfDQooWEVOKSBoYW5k
bGUgL3BsdWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEAxMC9vdmVycmlkZUAyDQooWEVOKSBkdF9pcnFf
bnVtYmVyOiBkZXY9L3BsdWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEAxMC9vdmVycmlkZUAyDQooWEVO
KSAvcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDEwL292ZXJyaWRlQDIgcGFzc3Rocm91Z2ggPSAx
IG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3BsdWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEAxMC9v
dmVycmlkZUAyIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251
bWJlcjogZGV2PS9wbHVnaW4tbWFuYWdlci9mcmFnZW1lbnRAMTAvb3ZlcnJpZGVAMg0KKFhFTikg
aGFuZGxlIC9wbHVnaW4tbWFuYWdlci9mcmFnZW1lbnRAMTAvb3ZlcnJpZGVAMi9fb3ZlcmxheV8N
CihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDEwL292
ZXJyaWRlQDIvX292ZXJsYXlfDQooWEVOKSAvcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDEwL292
ZXJyaWRlQDIvX292ZXJsYXlfIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNr
IGlmIC9wbHVnaW4tbWFuYWdlci9mcmFnZW1lbnRAMTAvb3ZlcnJpZGVAMi9fb3ZlcmxheV8gaXMg
YmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3Bs
dWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEAxMC9vdmVycmlkZUAyL19vdmVybGF5Xw0KKFhFTikgaGFu
ZGxlIC9wbHVnaW4tbWFuYWdlci9mcmFnZW1lbnRAMTAvb3ZlcnJpZGVAMw0KKFhFTikgZHRfaXJx
X251bWJlcjogZGV2PS9wbHVnaW4tbWFuYWdlci9mcmFnZW1lbnRAMTAvb3ZlcnJpZGVAMw0KKFhF
TikgL3BsdWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEAxMC9vdmVycmlkZUAzIHBhc3N0aHJvdWdoID0g
MSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9wbHVnaW4tbWFuYWdlci9mcmFnZW1lbnRAMTAv
b3ZlcnJpZGVAMyBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9u
dW1iZXI6IGRldj0vcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDEwL292ZXJyaWRlQDMNCihYRU4p
IGhhbmRsZSAvcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDEwL292ZXJyaWRlQDMvX292ZXJsYXlf
DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3BsdWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEAxMC9v
dmVycmlkZUAzL19vdmVybGF5Xw0KKFhFTikgL3BsdWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEAxMC9v
dmVycmlkZUAzL19vdmVybGF5XyBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVj
ayBpZiAvcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDEwL292ZXJyaWRlQDMvX292ZXJsYXlfIGlz
IGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9w
bHVnaW4tbWFuYWdlci9mcmFnZW1lbnRAMTAvb3ZlcnJpZGVAMy9fb3ZlcmxheV8NCihYRU4pIGhh
bmRsZSAvcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDEwL292ZXJyaWRlQDQNCihYRU4pIGR0X2ly
cV9udW1iZXI6IGRldj0vcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDEwL292ZXJyaWRlQDQNCihY
RU4pIC9wbHVnaW4tbWFuYWdlci9mcmFnZW1lbnRAMTAvb3ZlcnJpZGVANCBwYXNzdGhyb3VnaCA9
IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDEw
L292ZXJyaWRlQDQgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFf
bnVtYmVyOiBkZXY9L3BsdWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEAxMC9vdmVycmlkZUA0DQooWEVO
KSBoYW5kbGUgL3BsdWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEAxMC9vdmVycmlkZUA0L19vdmVybGF5
Xw0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9wbHVnaW4tbWFuYWdlci9mcmFnZW1lbnRAMTAv
b3ZlcnJpZGVANC9fb3ZlcmxheV8NCihYRU4pIC9wbHVnaW4tbWFuYWdlci9mcmFnZW1lbnRAMTAv
b3ZlcnJpZGVANC9fb3ZlcmxheV8gcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hl
Y2sgaWYgL3BsdWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEAxMC9vdmVycmlkZUA0L19vdmVybGF5XyBp
cyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0v
cGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDEwL292ZXJyaWRlQDQvX292ZXJsYXlfDQooWEVOKSBo
YW5kbGUgL3BsdWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEAxMC9vdmVycmlkZUA1DQooWEVOKSBkdF9p
cnFfbnVtYmVyOiBkZXY9L3BsdWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEAxMC9vdmVycmlkZUA1DQoo
WEVOKSAvcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDEwL292ZXJyaWRlQDUgcGFzc3Rocm91Z2gg
PSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3BsdWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEAx
MC9vdmVycmlkZUA1IGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJx
X251bWJlcjogZGV2PS9wbHVnaW4tbWFuYWdlci9mcmFnZW1lbnRAMTAvb3ZlcnJpZGVANQ0KKFhF
TikgaGFuZGxlIC9wbHVnaW4tbWFuYWdlci9mcmFnZW1lbnRAMTAvb3ZlcnJpZGVANS9fb3Zlcmxh
eV8NCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDEw
L292ZXJyaWRlQDUvX292ZXJsYXlfDQooWEVOKSAvcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDEw
L292ZXJyaWRlQDUvX292ZXJsYXlfIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENo
ZWNrIGlmIC9wbHVnaW4tbWFuYWdlci9mcmFnZW1lbnRAMTAvb3ZlcnJpZGVANS9fb3ZlcmxheV8g
aXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9
L3BsdWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEAxMC9vdmVycmlkZUA1L19vdmVybGF5Xw0KKFhFTikg
aGFuZGxlIC9wbHVnaW4tbWFuYWdlci9mcmFnZW1lbnRAMTAvb3ZlcnJpZGVANg0KKFhFTikgZHRf
aXJxX251bWJlcjogZGV2PS9wbHVnaW4tbWFuYWdlci9mcmFnZW1lbnRAMTAvb3ZlcnJpZGVANg0K
KFhFTikgL3BsdWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEAxMC9vdmVycmlkZUA2IHBhc3N0aHJvdWdo
ID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9wbHVnaW4tbWFuYWdlci9mcmFnZW1lbnRA
MTAvb3ZlcnJpZGVANiBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2ly
cV9udW1iZXI6IGRldj0vcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDEwL292ZXJyaWRlQDYNCihY
RU4pIGhhbmRsZSAvcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDEwL292ZXJyaWRlQDYvX292ZXJs
YXlfDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3BsdWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEAx
MC9vdmVycmlkZUA2L19vdmVybGF5Xw0KKFhFTikgL3BsdWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEAx
MC9vdmVycmlkZUA2L19vdmVybGF5XyBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBD
aGVjayBpZiAvcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDEwL292ZXJyaWRlQDYvX292ZXJsYXlf
IGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2
PS9wbHVnaW4tbWFuYWdlci9mcmFnZW1lbnRAMTAvb3ZlcnJpZGVANi9fb3ZlcmxheV8NCihYRU4p
IGhhbmRsZSAvcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDEwL292ZXJyaWRlQDcNCihYRU4pIGR0
X2lycV9udW1iZXI6IGRldj0vcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDEwL292ZXJyaWRlQDcN
CihYRU4pIC9wbHVnaW4tbWFuYWdlci9mcmFnZW1lbnRAMTAvb3ZlcnJpZGVANyBwYXNzdGhyb3Vn
aCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50
QDEwL292ZXJyaWRlQDcgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9p
cnFfbnVtYmVyOiBkZXY9L3BsdWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEAxMC9vdmVycmlkZUA3DQoo
WEVOKSBoYW5kbGUgL3BsdWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEAxMC9vdmVycmlkZUA3L19vdmVy
bGF5Xw0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9wbHVnaW4tbWFuYWdlci9mcmFnZW1lbnRA
MTAvb3ZlcnJpZGVANy9fb3ZlcmxheV8NCihYRU4pIC9wbHVnaW4tbWFuYWdlci9mcmFnZW1lbnRA
MTAvb3ZlcnJpZGVANy9fb3ZlcmxheV8gcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikg
Q2hlY2sgaWYgL3BsdWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEAxMC9vdmVycmlkZUA3L19vdmVybGF5
XyBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRl
dj0vcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDEwL292ZXJyaWRlQDcvX292ZXJsYXlfDQooWEVO
KSBoYW5kbGUgL3BsdWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEAxMC9vdmVycmlkZUA4DQooWEVOKSBk
dF9pcnFfbnVtYmVyOiBkZXY9L3BsdWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEAxMC9vdmVycmlkZUA4
DQooWEVOKSAvcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDEwL292ZXJyaWRlQDggcGFzc3Rocm91
Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3BsdWdpbi1tYW5hZ2VyL2ZyYWdlbWVu
dEAxMC9vdmVycmlkZUA4IGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRf
aXJxX251bWJlcjogZGV2PS9wbHVnaW4tbWFuYWdlci9mcmFnZW1lbnRAMTAvb3ZlcnJpZGVAOA0K
KFhFTikgaGFuZGxlIC9wbHVnaW4tbWFuYWdlci9mcmFnZW1lbnRAMTAvb3ZlcnJpZGVAOC9fb3Zl
cmxheV8NCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50
QDEwL292ZXJyaWRlQDgvX292ZXJsYXlfDQooWEVOKSAvcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50
QDEwL292ZXJyaWRlQDgvX292ZXJsYXlfIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4p
IENoZWNrIGlmIC9wbHVnaW4tbWFuYWdlci9mcmFnZW1lbnRAMTAvb3ZlcnJpZGVAOC9fb3Zlcmxh
eV8gaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBk
ZXY9L3BsdWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEAxMC9vdmVycmlkZUA4L19vdmVybGF5Xw0KKFhF
TikgaGFuZGxlIC9wbHVnaW4tbWFuYWdlci9mcmFnZW1lbnRAMTAvb3ZlcnJpZGVAOQ0KKFhFTikg
ZHRfaXJxX251bWJlcjogZGV2PS9wbHVnaW4tbWFuYWdlci9mcmFnZW1lbnRAMTAvb3ZlcnJpZGVA
OQ0KKFhFTikgL3BsdWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEAxMC9vdmVycmlkZUA5IHBhc3N0aHJv
dWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9wbHVnaW4tbWFuYWdlci9mcmFnZW1l
bnRAMTAvb3ZlcnJpZGVAOSBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0
X2lycV9udW1iZXI6IGRldj0vcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDEwL292ZXJyaWRlQDkN
CihYRU4pIGhhbmRsZSAvcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDEwL292ZXJyaWRlQDkvX292
ZXJsYXlfDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3BsdWdpbi1tYW5hZ2VyL2ZyYWdlbWVu
dEAxMC9vdmVycmlkZUA5L19vdmVybGF5Xw0KKFhFTikgL3BsdWdpbi1tYW5hZ2VyL2ZyYWdlbWVu
dEAxMC9vdmVycmlkZUA5L19vdmVybGF5XyBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVO
KSBDaGVjayBpZiAvcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDEwL292ZXJyaWRlQDkvX292ZXJs
YXlfIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjog
ZGV2PS9wbHVnaW4tbWFuYWdlci9mcmFnZW1lbnRAMTAvb3ZlcnJpZGVAOS9fb3ZlcmxheV8NCihY
RU4pIGhhbmRsZSAvcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDEwL292ZXJyaWRlQDEwDQooWEVO
KSBkdF9pcnFfbnVtYmVyOiBkZXY9L3BsdWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEAxMC9vdmVycmlk
ZUAxMA0KKFhFTikgL3BsdWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEAxMC9vdmVycmlkZUAxMCBwYXNz
dGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvcGx1Z2luLW1hbmFnZXIvZnJh
Z2VtZW50QDEwL292ZXJyaWRlQDEwIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhF
TikgZHRfaXJxX251bWJlcjogZGV2PS9wbHVnaW4tbWFuYWdlci9mcmFnZW1lbnRAMTAvb3ZlcnJp
ZGVAMTANCihYRU4pIGhhbmRsZSAvcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDEwL292ZXJyaWRl
QDEwL19vdmVybGF5Xw0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9wbHVnaW4tbWFuYWdlci9m
cmFnZW1lbnRAMTAvb3ZlcnJpZGVAMTAvX292ZXJsYXlfDQooWEVOKSAvcGx1Z2luLW1hbmFnZXIv
ZnJhZ2VtZW50QDEwL292ZXJyaWRlQDEwL19vdmVybGF5XyBwYXNzdGhyb3VnaCA9IDEgbmFkZHIg
PSAwDQooWEVOKSBDaGVjayBpZiAvcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDEwL292ZXJyaWRl
QDEwL19vdmVybGF5XyBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2ly
cV9udW1iZXI6IGRldj0vcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDEwL292ZXJyaWRlQDEwL19v
dmVybGF5Xw0KKFhFTikgaGFuZGxlIC9wbHVnaW4tbWFuYWdlci9mcmFnZW1lbnRAMTAvb3ZlcnJp
ZGVAMTENCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50
QDEwL292ZXJyaWRlQDExDQooWEVOKSAvcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDEwL292ZXJy
aWRlQDExIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9wbHVnaW4t
bWFuYWdlci9mcmFnZW1lbnRAMTAvb3ZlcnJpZGVAMTEgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQg
YWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3BsdWdpbi1tYW5hZ2VyL2ZyYWdlbWVu
dEAxMC9vdmVycmlkZUAxMQ0KKFhFTikgaGFuZGxlIC9wbHVnaW4tbWFuYWdlci9mcmFnZW1lbnRA
MTAvb3ZlcnJpZGVAMTEvX292ZXJsYXlfDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3BsdWdp
bi1tYW5hZ2VyL2ZyYWdlbWVudEAxMC9vdmVycmlkZUAxMS9fb3ZlcmxheV8NCihYRU4pIC9wbHVn
aW4tbWFuYWdlci9mcmFnZW1lbnRAMTAvb3ZlcnJpZGVAMTEvX292ZXJsYXlfIHBhc3N0aHJvdWdo
ID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9wbHVnaW4tbWFuYWdlci9mcmFnZW1lbnRA
MTAvb3ZlcnJpZGVAMTEvX292ZXJsYXlfIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0K
KFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9wbHVnaW4tbWFuYWdlci9mcmFnZW1lbnRAMTAvb3Zl
cnJpZGVAMTEvX292ZXJsYXlfDQooWEVOKSBoYW5kbGUgL3BsdWdpbi1tYW5hZ2VyL2ZyYWdlbWVu
dEAxMC9vdmVycmlkZUAxMg0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9wbHVnaW4tbWFuYWdl
ci9mcmFnZW1lbnRAMTAvb3ZlcnJpZGVAMTINCihYRU4pIC9wbHVnaW4tbWFuYWdlci9mcmFnZW1l
bnRAMTAvb3ZlcnJpZGVAMTIgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sg
aWYgL3BsdWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEAxMC9vdmVycmlkZUAxMiBpcyBiZWhpbmQgdGhl
IElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGx1Z2luLW1hbmFn
ZXIvZnJhZ2VtZW50QDEwL292ZXJyaWRlQDEyDQooWEVOKSBoYW5kbGUgL3BsdWdpbi1tYW5hZ2Vy
L2ZyYWdlbWVudEAxMC9vdmVycmlkZUAxMi9fb3ZlcmxheV8NCihYRU4pIGR0X2lycV9udW1iZXI6
IGRldj0vcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDEwL292ZXJyaWRlQDEyL19vdmVybGF5Xw0K
KFhFTikgL3BsdWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEAxMC9vdmVycmlkZUAxMi9fb3ZlcmxheV8g
cGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3BsdWdpbi1tYW5hZ2Vy
L2ZyYWdlbWVudEAxMC9vdmVycmlkZUAxMi9fb3ZlcmxheV8gaXMgYmVoaW5kIHRoZSBJT01NVSBh
bmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3BsdWdpbi1tYW5hZ2VyL2ZyYWdl
bWVudEAxMC9vdmVycmlkZUAxMi9fb3ZlcmxheV8NCihYRU4pIGhhbmRsZSAvcGx1Z2luLW1hbmFn
ZXIvZnJhZ2VtZW50QDEwL292ZXJyaWRlQDEzDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3Bs
dWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEAxMC9vdmVycmlkZUAxMw0KKFhFTikgL3BsdWdpbi1tYW5h
Z2VyL2ZyYWdlbWVudEAxMC9vdmVycmlkZUAxMyBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQoo
WEVOKSBDaGVjayBpZiAvcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDEwL292ZXJyaWRlQDEzIGlz
IGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9w
bHVnaW4tbWFuYWdlci9mcmFnZW1lbnRAMTAvb3ZlcnJpZGVAMTMNCihYRU4pIGhhbmRsZSAvcGx1
Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDEwL292ZXJyaWRlQDEzL19vdmVybGF5Xw0KKFhFTikgZHRf
aXJxX251bWJlcjogZGV2PS9wbHVnaW4tbWFuYWdlci9mcmFnZW1lbnRAMTAvb3ZlcnJpZGVAMTMv
X292ZXJsYXlfDQooWEVOKSAvcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDEwL292ZXJyaWRlQDEz
L19vdmVybGF5XyBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvcGx1
Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDEwL292ZXJyaWRlQDEzL19vdmVybGF5XyBpcyBiZWhpbmQg
dGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGx1Z2luLW1h
bmFnZXIvZnJhZ2VtZW50QDEwL292ZXJyaWRlQDEzL19vdmVybGF5Xw0KKFhFTikgaGFuZGxlIC9w
bHVnaW4tbWFuYWdlci9mcmFnZW1lbnRAMTAvb3ZlcnJpZGVAMTQNCihYRU4pIGR0X2lycV9udW1i
ZXI6IGRldj0vcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDEwL292ZXJyaWRlQDE0DQooWEVOKSAv
cGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDEwL292ZXJyaWRlQDE0IHBhc3N0aHJvdWdoID0gMSBu
YWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9wbHVnaW4tbWFuYWdlci9mcmFnZW1lbnRAMTAvb3Zl
cnJpZGVAMTQgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVt
YmVyOiBkZXY9L3BsdWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEAxMC9vdmVycmlkZUAxNA0KKFhFTikg
aGFuZGxlIC9wbHVnaW4tbWFuYWdlci9mcmFnZW1lbnRAMTAvb3ZlcnJpZGVAMTQvX292ZXJsYXlf
DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3BsdWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEAxMC9v
dmVycmlkZUAxNC9fb3ZlcmxheV8NCihYRU4pIC9wbHVnaW4tbWFuYWdlci9mcmFnZW1lbnRAMTAv
b3ZlcnJpZGVAMTQvX292ZXJsYXlfIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENo
ZWNrIGlmIC9wbHVnaW4tbWFuYWdlci9mcmFnZW1lbnRAMTAvb3ZlcnJpZGVAMTQvX292ZXJsYXlf
IGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2
PS9wbHVnaW4tbWFuYWdlci9mcmFnZW1lbnRAMTAvb3ZlcnJpZGVAMTQvX292ZXJsYXlfDQooWEVO
KSBoYW5kbGUgL3BsdWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEAxMC9vdmVycmlkZUAxNQ0KKFhFTikg
ZHRfaXJxX251bWJlcjogZGV2PS9wbHVnaW4tbWFuYWdlci9mcmFnZW1lbnRAMTAvb3ZlcnJpZGVA
MTUNCihYRU4pIC9wbHVnaW4tbWFuYWdlci9mcmFnZW1lbnRAMTAvb3ZlcnJpZGVAMTUgcGFzc3Ro
cm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3BsdWdpbi1tYW5hZ2VyL2ZyYWdl
bWVudEAxMC9vdmVycmlkZUAxNSBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4p
IGR0X2lycV9udW1iZXI6IGRldj0vcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDEwL292ZXJyaWRl
QDE1DQooWEVOKSBoYW5kbGUgL3BsdWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEAxMC9vdmVycmlkZUAx
NS9fb3ZlcmxheV8NCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGx1Z2luLW1hbmFnZXIvZnJh
Z2VtZW50QDEwL292ZXJyaWRlQDE1L19vdmVybGF5Xw0KKFhFTikgL3BsdWdpbi1tYW5hZ2VyL2Zy
YWdlbWVudEAxMC9vdmVycmlkZUAxNS9fb3ZlcmxheV8gcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0g
MA0KKFhFTikgQ2hlY2sgaWYgL3BsdWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEAxMC9vdmVycmlkZUAx
NS9fb3ZlcmxheV8gaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFf
bnVtYmVyOiBkZXY9L3BsdWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEAxMC9vdmVycmlkZUAxNS9fb3Zl
cmxheV8NCihYRU4pIGhhbmRsZSAvcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDEwL292ZXJyaWRl
QDE2DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3BsdWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEAx
MC9vdmVycmlkZUAxNg0KKFhFTikgL3BsdWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEAxMC9vdmVycmlk
ZUAxNiBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvcGx1Z2luLW1h
bmFnZXIvZnJhZ2VtZW50QDEwL292ZXJyaWRlQDE2IGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFk
ZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9wbHVnaW4tbWFuYWdlci9mcmFnZW1lbnRA
MTAvb3ZlcnJpZGVAMTYNCihYRU4pIGhhbmRsZSAvcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDEw
L292ZXJyaWRlQDE2L19vdmVybGF5Xw0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9wbHVnaW4t
bWFuYWdlci9mcmFnZW1lbnRAMTAvb3ZlcnJpZGVAMTYvX292ZXJsYXlfDQooWEVOKSAvcGx1Z2lu
LW1hbmFnZXIvZnJhZ2VtZW50QDEwL292ZXJyaWRlQDE2L19vdmVybGF5XyBwYXNzdGhyb3VnaCA9
IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDEw
L292ZXJyaWRlQDE2L19vdmVybGF5XyBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihY
RU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDEwL292ZXJy
aWRlQDE2L19vdmVybGF5Xw0KKFhFTikgaGFuZGxlIC9wbHVnaW4tbWFuYWdlci9mcmFnZW1lbnRA
MTAvb3ZlcnJpZGVAMTcNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGx1Z2luLW1hbmFnZXIv
ZnJhZ2VtZW50QDEwL292ZXJyaWRlQDE3DQooWEVOKSAvcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50
QDEwL292ZXJyaWRlQDE3IHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlm
IC9wbHVnaW4tbWFuYWdlci9mcmFnZW1lbnRAMTAvb3ZlcnJpZGVAMTcgaXMgYmVoaW5kIHRoZSBJ
T01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3BsdWdpbi1tYW5hZ2Vy
L2ZyYWdlbWVudEAxMC9vdmVycmlkZUAxNw0KKFhFTikgaGFuZGxlIC9wbHVnaW4tbWFuYWdlci9m
cmFnZW1lbnRAMTAvb3ZlcnJpZGVAMTcvX292ZXJsYXlfDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBk
ZXY9L3BsdWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEAxMC9vdmVycmlkZUAxNy9fb3ZlcmxheV8NCihY
RU4pIC9wbHVnaW4tbWFuYWdlci9mcmFnZW1lbnRAMTAvb3ZlcnJpZGVAMTcvX292ZXJsYXlfIHBh
c3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9wbHVnaW4tbWFuYWdlci9m
cmFnZW1lbnRAMTAvb3ZlcnJpZGVAMTcvX292ZXJsYXlfIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5k
IGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9wbHVnaW4tbWFuYWdlci9mcmFnZW1l
bnRAMTAvb3ZlcnJpZGVAMTcvX292ZXJsYXlfDQooWEVOKSBoYW5kbGUgL3BsdWdpbi1tYW5hZ2Vy
L2ZyYWdlbWVudEAxMC9vdmVycmlkZUAxOA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9wbHVn
aW4tbWFuYWdlci9mcmFnZW1lbnRAMTAvb3ZlcnJpZGVAMTgNCihYRU4pIC9wbHVnaW4tbWFuYWdl
ci9mcmFnZW1lbnRAMTAvb3ZlcnJpZGVAMTggcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhF
TikgQ2hlY2sgaWYgL3BsdWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEAxMC9vdmVycmlkZUAxOCBpcyBi
ZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGx1
Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDEwL292ZXJyaWRlQDE4DQooWEVOKSBoYW5kbGUgL3BsdWdp
bi1tYW5hZ2VyL2ZyYWdlbWVudEAxMC9vdmVycmlkZUAxOC9fb3ZlcmxheV8NCihYRU4pIGR0X2ly
cV9udW1iZXI6IGRldj0vcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDEwL292ZXJyaWRlQDE4L19v
dmVybGF5Xw0KKFhFTikgL3BsdWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEAxMC9vdmVycmlkZUAxOC9f
b3ZlcmxheV8gcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3BsdWdp
bi1tYW5hZ2VyL2ZyYWdlbWVudEAxMC9vdmVycmlkZUAxOC9fb3ZlcmxheV8gaXMgYmVoaW5kIHRo
ZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3BsdWdpbi1tYW5h
Z2VyL2ZyYWdlbWVudEAxMC9vdmVycmlkZUAxOC9fb3ZlcmxheV8NCihYRU4pIGhhbmRsZSAvcGx1
Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDEwL292ZXJyaWRlQDE5DQooWEVOKSBkdF9pcnFfbnVtYmVy
OiBkZXY9L3BsdWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEAxMC9vdmVycmlkZUAxOQ0KKFhFTikgL3Bs
dWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEAxMC9vdmVycmlkZUAxOSBwYXNzdGhyb3VnaCA9IDEgbmFk
ZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDEwL292ZXJy
aWRlQDE5IGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJl
cjogZGV2PS9wbHVnaW4tbWFuYWdlci9mcmFnZW1lbnRAMTAvb3ZlcnJpZGVAMTkNCihYRU4pIGhh
bmRsZSAvcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDEwL292ZXJyaWRlQDE5L19vdmVybGF5Xw0K
KFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9wbHVnaW4tbWFuYWdlci9mcmFnZW1lbnRAMTAvb3Zl
cnJpZGVAMTkvX292ZXJsYXlfDQooWEVOKSAvcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDEwL292
ZXJyaWRlQDE5L19vdmVybGF5XyBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVj
ayBpZiAvcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDEwL292ZXJyaWRlQDE5L19vdmVybGF5XyBp
cyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0v
cGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDEwL292ZXJyaWRlQDE5L19vdmVybGF5Xw0KKFhFTikg
aGFuZGxlIC9wbHVnaW4tbWFuYWdlci9mcmFnZW1lbnRAMTAvb3ZlcnJpZGVAMjANCihYRU4pIGR0
X2lycV9udW1iZXI6IGRldj0vcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDEwL292ZXJyaWRlQDIw
DQooWEVOKSAvcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDEwL292ZXJyaWRlQDIwIHBhc3N0aHJv
dWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9wbHVnaW4tbWFuYWdlci9mcmFnZW1l
bnRAMTAvb3ZlcnJpZGVAMjAgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBk
dF9pcnFfbnVtYmVyOiBkZXY9L3BsdWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEAxMC9vdmVycmlkZUAy
MA0KKFhFTikgaGFuZGxlIC9wbHVnaW4tbWFuYWdlci9mcmFnZW1lbnRAMTAvb3ZlcnJpZGVAMjAv
X292ZXJsYXlfDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3BsdWdpbi1tYW5hZ2VyL2ZyYWdl
bWVudEAxMC9vdmVycmlkZUAyMC9fb3ZlcmxheV8NCihYRU4pIC9wbHVnaW4tbWFuYWdlci9mcmFn
ZW1lbnRAMTAvb3ZlcnJpZGVAMjAvX292ZXJsYXlfIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDAN
CihYRU4pIENoZWNrIGlmIC9wbHVnaW4tbWFuYWdlci9mcmFnZW1lbnRAMTAvb3ZlcnJpZGVAMjAv
X292ZXJsYXlfIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251
bWJlcjogZGV2PS9wbHVnaW4tbWFuYWdlci9mcmFnZW1lbnRAMTAvb3ZlcnJpZGVAMjAvX292ZXJs
YXlfDQooWEVOKSBoYW5kbGUgL3BsdWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEAxMC9vdmVycmlkZUAy
MQ0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9wbHVnaW4tbWFuYWdlci9mcmFnZW1lbnRAMTAv
b3ZlcnJpZGVAMjENCihYRU4pIC9wbHVnaW4tbWFuYWdlci9mcmFnZW1lbnRAMTAvb3ZlcnJpZGVA
MjEgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3BsdWdpbi1tYW5h
Z2VyL2ZyYWdlbWVudEAxMC9vdmVycmlkZUAyMSBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQg
aXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDEw
L292ZXJyaWRlQDIxDQooWEVOKSBoYW5kbGUgL3BsdWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEAxMC9v
dmVycmlkZUAyMS9fb3ZlcmxheV8NCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGx1Z2luLW1h
bmFnZXIvZnJhZ2VtZW50QDEwL292ZXJyaWRlQDIxL19vdmVybGF5Xw0KKFhFTikgL3BsdWdpbi1t
YW5hZ2VyL2ZyYWdlbWVudEAxMC9vdmVycmlkZUAyMS9fb3ZlcmxheV8gcGFzc3Rocm91Z2ggPSAx
IG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3BsdWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEAxMC9v
dmVycmlkZUAyMS9fb3ZlcmxheV8gaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVO
KSBkdF9pcnFfbnVtYmVyOiBkZXY9L3BsdWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEAxMC9vdmVycmlk
ZUAyMS9fb3ZlcmxheV8NCihYRU4pIGhhbmRsZSAvcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDEw
L292ZXJyaWRlQDIyDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3BsdWdpbi1tYW5hZ2VyL2Zy
YWdlbWVudEAxMC9vdmVycmlkZUAyMg0KKFhFTikgL3BsdWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEAx
MC9vdmVycmlkZUAyMiBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAv
cGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDEwL292ZXJyaWRlQDIyIGlzIGJlaGluZCB0aGUgSU9N
TVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9wbHVnaW4tbWFuYWdlci9m
cmFnZW1lbnRAMTAvb3ZlcnJpZGVAMjINCihYRU4pIGhhbmRsZSAvcGx1Z2luLW1hbmFnZXIvZnJh
Z2VtZW50QDEwL292ZXJyaWRlQDIyL19vdmVybGF5Xw0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2
PS9wbHVnaW4tbWFuYWdlci9mcmFnZW1lbnRAMTAvb3ZlcnJpZGVAMjIvX292ZXJsYXlfDQooWEVO
KSAvcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDEwL292ZXJyaWRlQDIyL19vdmVybGF5XyBwYXNz
dGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvcGx1Z2luLW1hbmFnZXIvZnJh
Z2VtZW50QDEwL292ZXJyaWRlQDIyL19vdmVybGF5XyBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBh
ZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50
QDEwL292ZXJyaWRlQDIyL19vdmVybGF5Xw0KKFhFTikgaGFuZGxlIC9wbHVnaW4tbWFuYWdlci9m
cmFnZW1lbnRAMTAvb3ZlcnJpZGVAMjMNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGx1Z2lu
LW1hbmFnZXIvZnJhZ2VtZW50QDEwL292ZXJyaWRlQDIzDQooWEVOKSAvcGx1Z2luLW1hbmFnZXIv
ZnJhZ2VtZW50QDEwL292ZXJyaWRlQDIzIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4p
IENoZWNrIGlmIC9wbHVnaW4tbWFuYWdlci9mcmFnZW1lbnRAMTAvb3ZlcnJpZGVAMjMgaXMgYmVo
aW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3BsdWdp
bi1tYW5hZ2VyL2ZyYWdlbWVudEAxMC9vdmVycmlkZUAyMw0KKFhFTikgaGFuZGxlIC9wbHVnaW4t
bWFuYWdlci9mcmFnZW1lbnRAMTAvb3ZlcnJpZGVAMjMvX292ZXJsYXlfDQooWEVOKSBkdF9pcnFf
bnVtYmVyOiBkZXY9L3BsdWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEAxMC9vdmVycmlkZUAyMy9fb3Zl
cmxheV8NCihYRU4pIC9wbHVnaW4tbWFuYWdlci9mcmFnZW1lbnRAMTAvb3ZlcnJpZGVAMjMvX292
ZXJsYXlfIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9wbHVnaW4t
bWFuYWdlci9mcmFnZW1lbnRAMTAvb3ZlcnJpZGVAMjMvX292ZXJsYXlfIGlzIGJlaGluZCB0aGUg
SU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9wbHVnaW4tbWFuYWdl
ci9mcmFnZW1lbnRAMTAvb3ZlcnJpZGVAMjMvX292ZXJsYXlfDQooWEVOKSBoYW5kbGUgL3BsdWdp
bi1tYW5hZ2VyL2ZyYWdlbWVudEAxMC9vdmVycmlkZUAyNA0KKFhFTikgZHRfaXJxX251bWJlcjog
ZGV2PS9wbHVnaW4tbWFuYWdlci9mcmFnZW1lbnRAMTAvb3ZlcnJpZGVAMjQNCihYRU4pIC9wbHVn
aW4tbWFuYWdlci9mcmFnZW1lbnRAMTAvb3ZlcnJpZGVAMjQgcGFzc3Rocm91Z2ggPSAxIG5hZGRy
ID0gMA0KKFhFTikgQ2hlY2sgaWYgL3BsdWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEAxMC9vdmVycmlk
ZUAyNCBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6
IGRldj0vcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDEwL292ZXJyaWRlQDI0DQooWEVOKSBoYW5k
bGUgL3BsdWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEAxMC9vdmVycmlkZUAyNC9fb3ZlcmxheV8NCihY
RU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDEwL292ZXJy
aWRlQDI0L19vdmVybGF5Xw0KKFhFTikgL3BsdWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEAxMC9vdmVy
cmlkZUAyNC9fb3ZlcmxheV8gcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sg
aWYgL3BsdWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEAxMC9vdmVycmlkZUAyNC9fb3ZlcmxheV8gaXMg
YmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3Bs
dWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEAxMC9vdmVycmlkZUAyNC9fb3ZlcmxheV8NCihYRU4pIGhh
bmRsZSAvcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDEwL292ZXJyaWRlQDI1DQooWEVOKSBkdF9p
cnFfbnVtYmVyOiBkZXY9L3BsdWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEAxMC9vdmVycmlkZUAyNQ0K
KFhFTikgL3BsdWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEAxMC9vdmVycmlkZUAyNSBwYXNzdGhyb3Vn
aCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50
QDEwL292ZXJyaWRlQDI1IGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRf
aXJxX251bWJlcjogZGV2PS9wbHVnaW4tbWFuYWdlci9mcmFnZW1lbnRAMTAvb3ZlcnJpZGVAMjUN
CihYRU4pIGhhbmRsZSAvcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDEwL292ZXJyaWRlQDI1L19v
dmVybGF5Xw0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9wbHVnaW4tbWFuYWdlci9mcmFnZW1l
bnRAMTAvb3ZlcnJpZGVAMjUvX292ZXJsYXlfDQooWEVOKSAvcGx1Z2luLW1hbmFnZXIvZnJhZ2Vt
ZW50QDEwL292ZXJyaWRlQDI1L19vdmVybGF5XyBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQoo
WEVOKSBDaGVjayBpZiAvcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDEwL292ZXJyaWRlQDI1L19v
dmVybGF5XyBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1i
ZXI6IGRldj0vcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDEwL292ZXJyaWRlQDI1L19vdmVybGF5
Xw0KKFhFTikgaGFuZGxlIC9wbHVnaW4tbWFuYWdlci9mcmFnZW1lbnRAMTAvb3ZlcnJpZGVAMjYN
CihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDEwL292
ZXJyaWRlQDI2DQooWEVOKSAvcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDEwL292ZXJyaWRlQDI2
IHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9wbHVnaW4tbWFuYWdl
ci9mcmFnZW1lbnRAMTAvb3ZlcnJpZGVAMjYgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0
DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3BsdWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEAxMC9v
dmVycmlkZUAyNg0KKFhFTikgaGFuZGxlIC9wbHVnaW4tbWFuYWdlci9mcmFnZW1lbnRAMTAvb3Zl
cnJpZGVAMjYvX292ZXJsYXlfDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3BsdWdpbi1tYW5h
Z2VyL2ZyYWdlbWVudEAxMC9vdmVycmlkZUAyNi9fb3ZlcmxheV8NCihYRU4pIC9wbHVnaW4tbWFu
YWdlci9mcmFnZW1lbnRAMTAvb3ZlcnJpZGVAMjYvX292ZXJsYXlfIHBhc3N0aHJvdWdoID0gMSBu
YWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9wbHVnaW4tbWFuYWdlci9mcmFnZW1lbnRAMTAvb3Zl
cnJpZGVAMjYvX292ZXJsYXlfIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikg
ZHRfaXJxX251bWJlcjogZGV2PS9wbHVnaW4tbWFuYWdlci9mcmFnZW1lbnRAMTAvb3ZlcnJpZGVA
MjYvX292ZXJsYXlfDQooWEVOKSBoYW5kbGUgL3BsdWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEAxMC9v
dmVycmlkZUAyNw0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9wbHVnaW4tbWFuYWdlci9mcmFn
ZW1lbnRAMTAvb3ZlcnJpZGVAMjcNCihYRU4pIC9wbHVnaW4tbWFuYWdlci9mcmFnZW1lbnRAMTAv
b3ZlcnJpZGVAMjcgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3Bs
dWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEAxMC9vdmVycmlkZUAyNyBpcyBiZWhpbmQgdGhlIElPTU1V
IGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGx1Z2luLW1hbmFnZXIvZnJh
Z2VtZW50QDEwL292ZXJyaWRlQDI3DQooWEVOKSBoYW5kbGUgL3BsdWdpbi1tYW5hZ2VyL2ZyYWdl
bWVudEAxMC9vdmVycmlkZUAyNy9fb3ZlcmxheV8NCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0v
cGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDEwL292ZXJyaWRlQDI3L19vdmVybGF5Xw0KKFhFTikg
L3BsdWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEAxMC9vdmVycmlkZUAyNy9fb3ZlcmxheV8gcGFzc3Ro
cm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3BsdWdpbi1tYW5hZ2VyL2ZyYWdl
bWVudEAxMC9vdmVycmlkZUAyNy9fb3ZlcmxheV8gaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRk
IGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3BsdWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEAx
MC9vdmVycmlkZUAyNy9fb3ZlcmxheV8NCihYRU4pIGhhbmRsZSAvcGx1Z2luLW1hbmFnZXIvZnJh
Z2VtZW50QDEwL292ZXJyaWRlQDI4DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3BsdWdpbi1t
YW5hZ2VyL2ZyYWdlbWVudEAxMC9vdmVycmlkZUAyOA0KKFhFTikgL3BsdWdpbi1tYW5hZ2VyL2Zy
YWdlbWVudEAxMC9vdmVycmlkZUAyOCBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBD
aGVjayBpZiAvcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDEwL292ZXJyaWRlQDI4IGlzIGJlaGlu
ZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9wbHVnaW4t
bWFuYWdlci9mcmFnZW1lbnRAMTAvb3ZlcnJpZGVAMjgNCihYRU4pIGhhbmRsZSAvcGx1Z2luLW1h
bmFnZXIvZnJhZ2VtZW50QDEwL292ZXJyaWRlQDI4L19vdmVybGF5Xw0KKFhFTikgZHRfaXJxX251
bWJlcjogZGV2PS9wbHVnaW4tbWFuYWdlci9mcmFnZW1lbnRAMTAvb3ZlcnJpZGVAMjgvX292ZXJs
YXlfDQooWEVOKSAvcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDEwL292ZXJyaWRlQDI4L19vdmVy
bGF5XyBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvcGx1Z2luLW1h
bmFnZXIvZnJhZ2VtZW50QDEwL292ZXJyaWRlQDI4L19vdmVybGF5XyBpcyBiZWhpbmQgdGhlIElP
TU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGx1Z2luLW1hbmFnZXIv
ZnJhZ2VtZW50QDEwL292ZXJyaWRlQDI4L19vdmVybGF5Xw0KKFhFTikgaGFuZGxlIC9wbHVnaW4t
bWFuYWdlci9mcmFnZW1lbnRAMTAvb3ZlcnJpZGVAMjkNCihYRU4pIGR0X2lycV9udW1iZXI6IGRl
dj0vcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDEwL292ZXJyaWRlQDI5DQooWEVOKSAvcGx1Z2lu
LW1hbmFnZXIvZnJhZ2VtZW50QDEwL292ZXJyaWRlQDI5IHBhc3N0aHJvdWdoID0gMSBuYWRkciA9
IDANCihYRU4pIENoZWNrIGlmIC9wbHVnaW4tbWFuYWdlci9mcmFnZW1lbnRAMTAvb3ZlcnJpZGVA
MjkgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBk
ZXY9L3BsdWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEAxMC9vdmVycmlkZUAyOQ0KKFhFTikgaGFuZGxl
IC9wbHVnaW4tbWFuYWdlci9mcmFnZW1lbnRAMTAvb3ZlcnJpZGVAMjkvX292ZXJsYXlfDQooWEVO
KSBkdF9pcnFfbnVtYmVyOiBkZXY9L3BsdWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEAxMC9vdmVycmlk
ZUAyOS9fb3ZlcmxheV8NCihYRU4pIC9wbHVnaW4tbWFuYWdlci9mcmFnZW1lbnRAMTAvb3ZlcnJp
ZGVAMjkvX292ZXJsYXlfIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlm
IC9wbHVnaW4tbWFuYWdlci9mcmFnZW1lbnRAMTAvb3ZlcnJpZGVAMjkvX292ZXJsYXlfIGlzIGJl
aGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9wbHVn
aW4tbWFuYWdlci9mcmFnZW1lbnRAMTAvb3ZlcnJpZGVAMjkvX292ZXJsYXlfDQooWEVOKSBoYW5k
bGUgL3BsdWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEAxMC9vdmVycmlkZUAzMA0KKFhFTikgZHRfaXJx
X251bWJlcjogZGV2PS9wbHVnaW4tbWFuYWdlci9mcmFnZW1lbnRAMTAvb3ZlcnJpZGVAMzANCihY
RU4pIC9wbHVnaW4tbWFuYWdlci9mcmFnZW1lbnRAMTAvb3ZlcnJpZGVAMzAgcGFzc3Rocm91Z2gg
PSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3BsdWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEAx
MC9vdmVycmlkZUAzMCBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2ly
cV9udW1iZXI6IGRldj0vcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDEwL292ZXJyaWRlQDMwDQoo
WEVOKSBoYW5kbGUgL3BsdWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEAxMC9vdmVycmlkZUAzMC9fb3Zl
cmxheV8NCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50
QDEwL292ZXJyaWRlQDMwL19vdmVybGF5Xw0KKFhFTikgL3BsdWdpbi1tYW5hZ2VyL2ZyYWdlbWVu
dEAxMC9vdmVycmlkZUAzMC9fb3ZlcmxheV8gcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhF
TikgQ2hlY2sgaWYgL3BsdWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEAxMC9vdmVycmlkZUAzMC9fb3Zl
cmxheV8gaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVy
OiBkZXY9L3BsdWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEAxMC9vdmVycmlkZUAzMC9fb3ZlcmxheV8N
CihYRU4pIGhhbmRsZSAvcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDEwL292ZXJyaWRlQDMxDQoo
WEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3BsdWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEAxMC9vdmVy
cmlkZUAzMQ0KKFhFTikgL3BsdWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEAxMC9vdmVycmlkZUAzMSBw
YXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvcGx1Z2luLW1hbmFnZXIv
ZnJhZ2VtZW50QDEwL292ZXJyaWRlQDMxIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0K
KFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9wbHVnaW4tbWFuYWdlci9mcmFnZW1lbnRAMTAvb3Zl
cnJpZGVAMzENCihYRU4pIGhhbmRsZSAvcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDEwL292ZXJy
aWRlQDMxL19vdmVybGF5Xw0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9wbHVnaW4tbWFuYWdl
ci9mcmFnZW1lbnRAMTAvb3ZlcnJpZGVAMzEvX292ZXJsYXlfDQooWEVOKSAvcGx1Z2luLW1hbmFn
ZXIvZnJhZ2VtZW50QDEwL292ZXJyaWRlQDMxL19vdmVybGF5XyBwYXNzdGhyb3VnaCA9IDEgbmFk
ZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDEwL292ZXJy
aWRlQDMxL19vdmVybGF5XyBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0
X2lycV9udW1iZXI6IGRldj0vcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDEwL292ZXJyaWRlQDMx
L19vdmVybGF5Xw0KKFhFTikgaGFuZGxlIC9wbHVnaW4tbWFuYWdlci9mcmFnZW1lbnRAMTENCihY
RU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDExDQooWEVO
KSAvcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50QDExIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDAN
CihYRU4pIENoZWNrIGlmIC9wbHVnaW4tbWFuYWdlci9mcmFnZW1lbnRAMTEgaXMgYmVoaW5kIHRo
ZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3BsdWdpbi1tYW5h
Z2VyL2ZyYWdlbWVudEAxMQ0KKFhFTikgaGFuZGxlIC9wbHVnaW4tbWFuYWdlci9mcmFnZW1lbnRA
MTEvb3ZlcnJpZGVAMA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9wbHVnaW4tbWFuYWdlci9m
cmFnZW1lbnRAMTEvb3ZlcnJpZGVAMA0KKFhFTikgL3BsdWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEAx
MS9vdmVycmlkZUAwIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9w
bHVnaW4tbWFuYWdlci9mcmFnZW1lbnRAMTEvb3ZlcnJpZGVAMCBpcyBiZWhpbmQgdGhlIElPTU1V
IGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGx1Z2luLW1hbmFnZXIvZnJh
Z2VtZW50QDExL292ZXJyaWRlQDANCihYRU4pIGhhbmRsZSAvcGx1Z2luLW1hbmFnZXIvZnJhZ2Vt
ZW50QDExL292ZXJyaWRlQDAvX292ZXJsYXlfDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3Bs
dWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEAxMS9vdmVycmlkZUAwL19vdmVybGF5Xw0KKFhFTikgL3Bs
dWdpbi1tYW5hZ2VyL2ZyYWdlbWVudEAxMS9vdmVycmlkZUAwL19vdmVybGF5XyBwYXNzdGhyb3Vn
aCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvcGx1Z2luLW1hbmFnZXIvZnJhZ2VtZW50
QDExL292ZXJyaWRlQDAvX292ZXJsYXlfIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0K
KFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9wbHVnaW4tbWFuYWdlci9mcmFnZW1lbnRAMTEvb3Zl
cnJpZGVAMC9fb3ZlcmxheV8NCihYRU4pIGhhbmRsZSAvcGx1Z2luLW1hbmFnZXIvZnJhZ21lbnQt
ZTI2MTQtY29tbW9uQDANCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGx1Z2luLW1hbmFnZXIv
ZnJhZ21lbnQtZTI2MTQtY29tbW9uQDANCihYRU4pIC9wbHVnaW4tbWFuYWdlci9mcmFnbWVudC1l
MjYxNC1jb21tb25AMCBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAv
cGx1Z2luLW1hbmFnZXIvZnJhZ21lbnQtZTI2MTQtY29tbW9uQDAgaXMgYmVoaW5kIHRoZSBJT01N
VSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3BsdWdpbi1tYW5hZ2VyL2Zy
YWdtZW50LWUyNjE0LWNvbW1vbkAwDQooWEVOKSBoYW5kbGUgL3BsdWdpbi1tYW5hZ2VyL2ZyYWdt
ZW50LWUyNjE0LWNvbW1vbkAwL292ZXJyaWRlc0AwDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9
L3BsdWdpbi1tYW5hZ2VyL2ZyYWdtZW50LWUyNjE0LWNvbW1vbkAwL292ZXJyaWRlc0AwDQooWEVO
KSAvcGx1Z2luLW1hbmFnZXIvZnJhZ21lbnQtZTI2MTQtY29tbW9uQDAvb3ZlcnJpZGVzQDAgcGFz
c3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3BsdWdpbi1tYW5hZ2VyL2Zy
YWdtZW50LWUyNjE0LWNvbW1vbkAwL292ZXJyaWRlc0AwIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5k
IGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9wbHVnaW4tbWFuYWdlci9mcmFnbWVu
dC1lMjYxNC1jb21tb25AMC9vdmVycmlkZXNAMA0KKFhFTikgaGFuZGxlIC9wbHVnaW4tbWFuYWdl
ci9mcmFnbWVudC1lMjYxNC1jb21tb25AMC9vdmVycmlkZXNAMC9fb3ZlcmxheV8NCihYRU4pIGR0
X2lycV9udW1iZXI6IGRldj0vcGx1Z2luLW1hbmFnZXIvZnJhZ21lbnQtZTI2MTQtY29tbW9uQDAv
b3ZlcnJpZGVzQDAvX292ZXJsYXlfDQooWEVOKSAvcGx1Z2luLW1hbmFnZXIvZnJhZ21lbnQtZTI2
MTQtY29tbW9uQDAvb3ZlcnJpZGVzQDAvX292ZXJsYXlfIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9
IDANCihYRU4pIENoZWNrIGlmIC9wbHVnaW4tbWFuYWdlci9mcmFnbWVudC1lMjYxNC1jb21tb25A
MC9vdmVycmlkZXNAMC9fb3ZlcmxheV8gaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQoo
WEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3BsdWdpbi1tYW5hZ2VyL2ZyYWdtZW50LWUyNjE0LWNv
bW1vbkAwL292ZXJyaWRlc0AwL19vdmVybGF5Xw0KKFhFTikgaGFuZGxlIC9wbHVnaW4tbWFuYWdl
ci9mcmFnbWVudC1lMjYxNC1jb21tb25AMC9vdmVycmlkZXNAMQ0KKFhFTikgZHRfaXJxX251bWJl
cjogZGV2PS9wbHVnaW4tbWFuYWdlci9mcmFnbWVudC1lMjYxNC1jb21tb25AMC9vdmVycmlkZXNA
MQ0KKFhFTikgL3BsdWdpbi1tYW5hZ2VyL2ZyYWdtZW50LWUyNjE0LWNvbW1vbkAwL292ZXJyaWRl
c0AxIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9wbHVnaW4tbWFu
YWdlci9mcmFnbWVudC1lMjYxNC1jb21tb25AMC9vdmVycmlkZXNAMSBpcyBiZWhpbmQgdGhlIElP
TU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGx1Z2luLW1hbmFnZXIv
ZnJhZ21lbnQtZTI2MTQtY29tbW9uQDAvb3ZlcnJpZGVzQDENCihYRU4pIGhhbmRsZSAvcGx1Z2lu
LW1hbmFnZXIvZnJhZ21lbnQtZTI2MTQtY29tbW9uQDAvb3ZlcnJpZGVzQDEvX292ZXJsYXlfDQoo
WEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3BsdWdpbi1tYW5hZ2VyL2ZyYWdtZW50LWUyNjE0LWNv
bW1vbkAwL292ZXJyaWRlc0AxL19vdmVybGF5Xw0KKFhFTikgL3BsdWdpbi1tYW5hZ2VyL2ZyYWdt
ZW50LWUyNjE0LWNvbW1vbkAwL292ZXJyaWRlc0AxL19vdmVybGF5XyBwYXNzdGhyb3VnaCA9IDEg
bmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvcGx1Z2luLW1hbmFnZXIvZnJhZ21lbnQtZTI2MTQt
Y29tbW9uQDAvb3ZlcnJpZGVzQDEvX292ZXJsYXlfIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFk
ZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9wbHVnaW4tbWFuYWdlci9mcmFnbWVudC1l
MjYxNC1jb21tb25AMC9vdmVycmlkZXNAMS9fb3ZlcmxheV8NCihYRU4pIGhhbmRsZSAvcGx1Z2lu
LW1hbmFnZXIvZnJhZ21lbnQtZTI2MTQtY29tbW9uQDAvb3ZlcnJpZGVzQDINCihYRU4pIGR0X2ly
cV9udW1iZXI6IGRldj0vcGx1Z2luLW1hbmFnZXIvZnJhZ21lbnQtZTI2MTQtY29tbW9uQDAvb3Zl
cnJpZGVzQDINCihYRU4pIC9wbHVnaW4tbWFuYWdlci9mcmFnbWVudC1lMjYxNC1jb21tb25AMC9v
dmVycmlkZXNAMiBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvcGx1
Z2luLW1hbmFnZXIvZnJhZ21lbnQtZTI2MTQtY29tbW9uQDAvb3ZlcnJpZGVzQDIgaXMgYmVoaW5k
IHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3BsdWdpbi1t
YW5hZ2VyL2ZyYWdtZW50LWUyNjE0LWNvbW1vbkAwL292ZXJyaWRlc0AyDQooWEVOKSBoYW5kbGUg
L3BsdWdpbi1tYW5hZ2VyL2ZyYWdtZW50LWUyNjE0LWNvbW1vbkAwL292ZXJyaWRlc0AyL19vdmVy
bGF5Xw0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9wbHVnaW4tbWFuYWdlci9mcmFnbWVudC1l
MjYxNC1jb21tb25AMC9vdmVycmlkZXNAMi9fb3ZlcmxheV8NCihYRU4pIC9wbHVnaW4tbWFuYWdl
ci9mcmFnbWVudC1lMjYxNC1jb21tb25AMC9vdmVycmlkZXNAMi9fb3ZlcmxheV8gcGFzc3Rocm91
Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3BsdWdpbi1tYW5hZ2VyL2ZyYWdtZW50
LWUyNjE0LWNvbW1vbkAwL292ZXJyaWRlc0AyL19vdmVybGF5XyBpcyBiZWhpbmQgdGhlIElPTU1V
IGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGx1Z2luLW1hbmFnZXIvZnJh
Z21lbnQtZTI2MTQtY29tbW9uQDAvb3ZlcnJpZGVzQDIvX292ZXJsYXlfDQooWEVOKSBoYW5kbGUg
L3BsdWdpbi1tYW5hZ2VyL2ZyYWdtZW50LWUyNjE0LWNvbW1vbkAwL292ZXJyaWRlc0AzDQooWEVO
KSBkdF9pcnFfbnVtYmVyOiBkZXY9L3BsdWdpbi1tYW5hZ2VyL2ZyYWdtZW50LWUyNjE0LWNvbW1v
bkAwL292ZXJyaWRlc0AzDQooWEVOKSAvcGx1Z2luLW1hbmFnZXIvZnJhZ21lbnQtZTI2MTQtY29t
bW9uQDAvb3ZlcnJpZGVzQDMgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sg
aWYgL3BsdWdpbi1tYW5hZ2VyL2ZyYWdtZW50LWUyNjE0LWNvbW1vbkAwL292ZXJyaWRlc0AzIGlz
IGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9w
bHVnaW4tbWFuYWdlci9mcmFnbWVudC1lMjYxNC1jb21tb25AMC9vdmVycmlkZXNAMw0KKFhFTikg
aGFuZGxlIC9wbHVnaW4tbWFuYWdlci9mcmFnbWVudC1lMjYxNC1jb21tb25AMC9vdmVycmlkZXNA
My9fb3ZlcmxheV8NCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGx1Z2luLW1hbmFnZXIvZnJh
Z21lbnQtZTI2MTQtY29tbW9uQDAvb3ZlcnJpZGVzQDMvX292ZXJsYXlfDQooWEVOKSAvcGx1Z2lu
LW1hbmFnZXIvZnJhZ21lbnQtZTI2MTQtY29tbW9uQDAvb3ZlcnJpZGVzQDMvX292ZXJsYXlfIHBh
c3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9wbHVnaW4tbWFuYWdlci9m
cmFnbWVudC1lMjYxNC1jb21tb25AMC9vdmVycmlkZXNAMy9fb3ZlcmxheV8gaXMgYmVoaW5kIHRo
ZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3BsdWdpbi1tYW5h
Z2VyL2ZyYWdtZW50LWUyNjE0LWNvbW1vbkAwL292ZXJyaWRlc0AzL19vdmVybGF5Xw0KKFhFTikg
aGFuZGxlIC9wbHVnaW4tbWFuYWdlci9mcmFnbWVudC1lMjYxNC1jb21tb25AMC9vdmVycmlkZXNA
NA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9wbHVnaW4tbWFuYWdlci9mcmFnbWVudC1lMjYx
NC1jb21tb25AMC9vdmVycmlkZXNANA0KKFhFTikgL3BsdWdpbi1tYW5hZ2VyL2ZyYWdtZW50LWUy
NjE0LWNvbW1vbkAwL292ZXJyaWRlc0A0IHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4p
IENoZWNrIGlmIC9wbHVnaW4tbWFuYWdlci9mcmFnbWVudC1lMjYxNC1jb21tb25AMC9vdmVycmlk
ZXNANCBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6
IGRldj0vcGx1Z2luLW1hbmFnZXIvZnJhZ21lbnQtZTI2MTQtY29tbW9uQDAvb3ZlcnJpZGVzQDQN
CihYRU4pIGhhbmRsZSAvcGx1Z2luLW1hbmFnZXIvZnJhZ21lbnQtZTI2MTQtY29tbW9uQDAvb3Zl
cnJpZGVzQDQvX292ZXJsYXlfDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3BsdWdpbi1tYW5h
Z2VyL2ZyYWdtZW50LWUyNjE0LWNvbW1vbkAwL292ZXJyaWRlc0A0L19vdmVybGF5Xw0KKFhFTikg
L3BsdWdpbi1tYW5hZ2VyL2ZyYWdtZW50LWUyNjE0LWNvbW1vbkAwL292ZXJyaWRlc0A0L19vdmVy
bGF5XyBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvcGx1Z2luLW1h
bmFnZXIvZnJhZ21lbnQtZTI2MTQtY29tbW9uQDAvb3ZlcnJpZGVzQDQvX292ZXJsYXlfIGlzIGJl
aGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9wbHVn
aW4tbWFuYWdlci9mcmFnbWVudC1lMjYxNC1jb21tb25AMC9vdmVycmlkZXNANC9fb3ZlcmxheV8N
CihYRU4pIGhhbmRsZSAvcGx1Z2luLW1hbmFnZXIvZnJhZ21lbnQtZTI2MTQtY29tbW9uQDAvb3Zl
cnJpZGVzQDYNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGx1Z2luLW1hbmFnZXIvZnJhZ21l
bnQtZTI2MTQtY29tbW9uQDAvb3ZlcnJpZGVzQDYNCihYRU4pIC9wbHVnaW4tbWFuYWdlci9mcmFn
bWVudC1lMjYxNC1jb21tb25AMC9vdmVycmlkZXNANiBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAw
DQooWEVOKSBDaGVjayBpZiAvcGx1Z2luLW1hbmFnZXIvZnJhZ21lbnQtZTI2MTQtY29tbW9uQDAv
b3ZlcnJpZGVzQDYgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFf
bnVtYmVyOiBkZXY9L3BsdWdpbi1tYW5hZ2VyL2ZyYWdtZW50LWUyNjE0LWNvbW1vbkAwL292ZXJy
aWRlc0A2DQooWEVOKSBoYW5kbGUgL3BsdWdpbi1tYW5hZ2VyL2ZyYWdtZW50LWUyNjE0LWNvbW1v
bkAwL292ZXJyaWRlc0A2L19vdmVybGF5Xw0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9wbHVn
aW4tbWFuYWdlci9mcmFnbWVudC1lMjYxNC1jb21tb25AMC9vdmVycmlkZXNANi9fb3ZlcmxheV8N
CihYRU4pIC9wbHVnaW4tbWFuYWdlci9mcmFnbWVudC1lMjYxNC1jb21tb25AMC9vdmVycmlkZXNA
Ni9fb3ZlcmxheV8gcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3Bs
dWdpbi1tYW5hZ2VyL2ZyYWdtZW50LWUyNjE0LWNvbW1vbkAwL292ZXJyaWRlc0A2L19vdmVybGF5
XyBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRl
dj0vcGx1Z2luLW1hbmFnZXIvZnJhZ21lbnQtZTI2MTQtY29tbW9uQDAvb3ZlcnJpZGVzQDYvX292
ZXJsYXlfDQooWEVOKSBoYW5kbGUgL3BsdWdpbi1tYW5hZ2VyL2ZyYWdtZW50LWUyNjE0LWNvbW1v
bkAwL292ZXJyaWRlc0A3DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3BsdWdpbi1tYW5hZ2Vy
L2ZyYWdtZW50LWUyNjE0LWNvbW1vbkAwL292ZXJyaWRlc0A3DQooWEVOKSAvcGx1Z2luLW1hbmFn
ZXIvZnJhZ21lbnQtZTI2MTQtY29tbW9uQDAvb3ZlcnJpZGVzQDcgcGFzc3Rocm91Z2ggPSAxIG5h
ZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3BsdWdpbi1tYW5hZ2VyL2ZyYWdtZW50LWUyNjE0LWNv
bW1vbkAwL292ZXJyaWRlc0A3IGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikg
ZHRfaXJxX251bWJlcjogZGV2PS9wbHVnaW4tbWFuYWdlci9mcmFnbWVudC1lMjYxNC1jb21tb25A
MC9vdmVycmlkZXNANw0KKFhFTikgaGFuZGxlIC9wbHVnaW4tbWFuYWdlci9mcmFnbWVudC1lMjYx
NC1jb21tb25AMC9vdmVycmlkZXNANy9fb3ZlcmxheV8NCihYRU4pIGR0X2lycV9udW1iZXI6IGRl
dj0vcGx1Z2luLW1hbmFnZXIvZnJhZ21lbnQtZTI2MTQtY29tbW9uQDAvb3ZlcnJpZGVzQDcvX292
ZXJsYXlfDQooWEVOKSAvcGx1Z2luLW1hbmFnZXIvZnJhZ21lbnQtZTI2MTQtY29tbW9uQDAvb3Zl
cnJpZGVzQDcvX292ZXJsYXlfIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNr
IGlmIC9wbHVnaW4tbWFuYWdlci9mcmFnbWVudC1lMjYxNC1jb21tb25AMC9vdmVycmlkZXNANy9f
b3ZlcmxheV8gaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVt
YmVyOiBkZXY9L3BsdWdpbi1tYW5hZ2VyL2ZyYWdtZW50LWUyNjE0LWNvbW1vbkAwL292ZXJyaWRl
c0A3L19vdmVybGF5Xw0KKFhFTikgaGFuZGxlIC9wbHVnaW4tbWFuYWdlci9mcmFnbWVudC1lMjYx
NC1jb21tb25AMC9vdmVycmlkZXNAOA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9wbHVnaW4t
bWFuYWdlci9mcmFnbWVudC1lMjYxNC1jb21tb25AMC9vdmVycmlkZXNAOA0KKFhFTikgL3BsdWdp
bi1tYW5hZ2VyL2ZyYWdtZW50LWUyNjE0LWNvbW1vbkAwL292ZXJyaWRlc0A4IHBhc3N0aHJvdWdo
ID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9wbHVnaW4tbWFuYWdlci9mcmFnbWVudC1l
MjYxNC1jb21tb25AMC9vdmVycmlkZXNAOCBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQN
CihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGx1Z2luLW1hbmFnZXIvZnJhZ21lbnQtZTI2MTQt
Y29tbW9uQDAvb3ZlcnJpZGVzQDgNCihYRU4pIGhhbmRsZSAvcGx1Z2luLW1hbmFnZXIvZnJhZ21l
bnQtZTI2MTQtY29tbW9uQDAvb3ZlcnJpZGVzQDgvX292ZXJsYXlfDQooWEVOKSBkdF9pcnFfbnVt
YmVyOiBkZXY9L3BsdWdpbi1tYW5hZ2VyL2ZyYWdtZW50LWUyNjE0LWNvbW1vbkAwL292ZXJyaWRl
c0A4L19vdmVybGF5Xw0KKFhFTikgL3BsdWdpbi1tYW5hZ2VyL2ZyYWdtZW50LWUyNjE0LWNvbW1v
bkAwL292ZXJyaWRlc0A4L19vdmVybGF5XyBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVO
KSBDaGVjayBpZiAvcGx1Z2luLW1hbmFnZXIvZnJhZ21lbnQtZTI2MTQtY29tbW9uQDAvb3ZlcnJp
ZGVzQDgvX292ZXJsYXlfIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRf
aXJxX251bWJlcjogZGV2PS9wbHVnaW4tbWFuYWdlci9mcmFnbWVudC1lMjYxNC1jb21tb25AMC9v
dmVycmlkZXNAOC9fb3ZlcmxheV8NCihYRU4pIGhhbmRsZSAvcGx1Z2luLW1hbmFnZXIvZnJhZ21l
bnQtZTI2MTQtY29tbW9uQDAvb3ZlcnJpZGVzQDkNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0v
cGx1Z2luLW1hbmFnZXIvZnJhZ21lbnQtZTI2MTQtY29tbW9uQDAvb3ZlcnJpZGVzQDkNCihYRU4p
IC9wbHVnaW4tbWFuYWdlci9mcmFnbWVudC1lMjYxNC1jb21tb25AMC9vdmVycmlkZXNAOSBwYXNz
dGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvcGx1Z2luLW1hbmFnZXIvZnJh
Z21lbnQtZTI2MTQtY29tbW9uQDAvb3ZlcnJpZGVzQDkgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQg
YWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3BsdWdpbi1tYW5hZ2VyL2ZyYWdtZW50
LWUyNjE0LWNvbW1vbkAwL292ZXJyaWRlc0A5DQooWEVOKSBoYW5kbGUgL3BsdWdpbi1tYW5hZ2Vy
L2ZyYWdtZW50LWUyNjE0LWNvbW1vbkAwL292ZXJyaWRlc0A5L19vdmVybGF5Xw0KKFhFTikgZHRf
aXJxX251bWJlcjogZGV2PS9wbHVnaW4tbWFuYWdlci9mcmFnbWVudC1lMjYxNC1jb21tb25AMC9v
dmVycmlkZXNAOS9fb3ZlcmxheV8NCihYRU4pIC9wbHVnaW4tbWFuYWdlci9mcmFnbWVudC1lMjYx
NC1jb21tb25AMC9vdmVycmlkZXNAOS9fb3ZlcmxheV8gcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0g
MA0KKFhFTikgQ2hlY2sgaWYgL3BsdWdpbi1tYW5hZ2VyL2ZyYWdtZW50LWUyNjE0LWNvbW1vbkAw
L292ZXJyaWRlc0A5L19vdmVybGF5XyBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihY
RU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGx1Z2luLW1hbmFnZXIvZnJhZ21lbnQtZTI2MTQtY29t
bW9uQDAvb3ZlcnJpZGVzQDkvX292ZXJsYXlfDQooWEVOKSBoYW5kbGUgL3BsdWdpbi1tYW5hZ2Vy
L2ZyYWdtZW50LWUyNjE0LWNvbW1vbkAwL292ZXJyaWRlc0AxMA0KKFhFTikgZHRfaXJxX251bWJl
cjogZGV2PS9wbHVnaW4tbWFuYWdlci9mcmFnbWVudC1lMjYxNC1jb21tb25AMC9vdmVycmlkZXNA
MTANCihYRU4pIC9wbHVnaW4tbWFuYWdlci9mcmFnbWVudC1lMjYxNC1jb21tb25AMC9vdmVycmlk
ZXNAMTAgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3BsdWdpbi1t
YW5hZ2VyL2ZyYWdtZW50LWUyNjE0LWNvbW1vbkAwL292ZXJyaWRlc0AxMCBpcyBiZWhpbmQgdGhl
IElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGx1Z2luLW1hbmFn
ZXIvZnJhZ21lbnQtZTI2MTQtY29tbW9uQDAvb3ZlcnJpZGVzQDEwDQooWEVOKSBoYW5kbGUgL3Bs
dWdpbi1tYW5hZ2VyL2ZyYWdtZW50LWUyNjE0LWNvbW1vbkAwL292ZXJyaWRlc0AxMC9fb3Zlcmxh
eV8NCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGx1Z2luLW1hbmFnZXIvZnJhZ21lbnQtZTI2
MTQtY29tbW9uQDAvb3ZlcnJpZGVzQDEwL19vdmVybGF5Xw0KKFhFTikgL3BsdWdpbi1tYW5hZ2Vy
L2ZyYWdtZW50LWUyNjE0LWNvbW1vbkAwL292ZXJyaWRlc0AxMC9fb3ZlcmxheV8gcGFzc3Rocm91
Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3BsdWdpbi1tYW5hZ2VyL2ZyYWdtZW50
LWUyNjE0LWNvbW1vbkAwL292ZXJyaWRlc0AxMC9fb3ZlcmxheV8gaXMgYmVoaW5kIHRoZSBJT01N
VSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3BsdWdpbi1tYW5hZ2VyL2Zy
YWdtZW50LWUyNjE0LWNvbW1vbkAwL292ZXJyaWRlc0AxMC9fb3ZlcmxheV8NCihYRU4pIGhhbmRs
ZSAvcGx1Z2luLW1hbmFnZXIvZnJhZ21lbnQtZTI2MTQtY29tbW9uQDAvb3ZlcnJpZGVzQDExDQoo
WEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3BsdWdpbi1tYW5hZ2VyL2ZyYWdtZW50LWUyNjE0LWNv
bW1vbkAwL292ZXJyaWRlc0AxMQ0KKFhFTikgL3BsdWdpbi1tYW5hZ2VyL2ZyYWdtZW50LWUyNjE0
LWNvbW1vbkAwL292ZXJyaWRlc0AxMSBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBD
aGVjayBpZiAvcGx1Z2luLW1hbmFnZXIvZnJhZ21lbnQtZTI2MTQtY29tbW9uQDAvb3ZlcnJpZGVz
QDExIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjog
ZGV2PS9wbHVnaW4tbWFuYWdlci9mcmFnbWVudC1lMjYxNC1jb21tb25AMC9vdmVycmlkZXNAMTEN
CihYRU4pIGhhbmRsZSAvcGx1Z2luLW1hbmFnZXIvZnJhZ21lbnQtZTI2MTQtY29tbW9uQDAvb3Zl
cnJpZGVzQDExL19vdmVybGF5Xw0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9wbHVnaW4tbWFu
YWdlci9mcmFnbWVudC1lMjYxNC1jb21tb25AMC9vdmVycmlkZXNAMTEvX292ZXJsYXlfDQooWEVO
KSAvcGx1Z2luLW1hbmFnZXIvZnJhZ21lbnQtZTI2MTQtY29tbW9uQDAvb3ZlcnJpZGVzQDExL19v
dmVybGF5XyBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvcGx1Z2lu
LW1hbmFnZXIvZnJhZ21lbnQtZTI2MTQtY29tbW9uQDAvb3ZlcnJpZGVzQDExL19vdmVybGF5XyBp
cyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0v
cGx1Z2luLW1hbmFnZXIvZnJhZ21lbnQtZTI2MTQtY29tbW9uQDAvb3ZlcnJpZGVzQDExL19vdmVy
bGF5Xw0KKFhFTikgaGFuZGxlIC9wbHVnaW4tbWFuYWdlci9mcmFnbWVudC1lMjYxNC1jb21tb25A
MC9vdmVycmlkZUAxMg0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9wbHVnaW4tbWFuYWdlci9m
cmFnbWVudC1lMjYxNC1jb21tb25AMC9vdmVycmlkZUAxMg0KKFhFTikgL3BsdWdpbi1tYW5hZ2Vy
L2ZyYWdtZW50LWUyNjE0LWNvbW1vbkAwL292ZXJyaWRlQDEyIHBhc3N0aHJvdWdoID0gMSBuYWRk
ciA9IDANCihYRU4pIENoZWNrIGlmIC9wbHVnaW4tbWFuYWdlci9mcmFnbWVudC1lMjYxNC1jb21t
b25AMC9vdmVycmlkZUAxMiBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0
X2lycV9udW1iZXI6IGRldj0vcGx1Z2luLW1hbmFnZXIvZnJhZ21lbnQtZTI2MTQtY29tbW9uQDAv
b3ZlcnJpZGVAMTINCihYRU4pIGhhbmRsZSAvcGx1Z2luLW1hbmFnZXIvZnJhZ21lbnQtZTI2MTQt
Y29tbW9uQDAvb3ZlcnJpZGVAMTIvX292ZXJsYXlfDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9
L3BsdWdpbi1tYW5hZ2VyL2ZyYWdtZW50LWUyNjE0LWNvbW1vbkAwL292ZXJyaWRlQDEyL19vdmVy
bGF5Xw0KKFhFTikgL3BsdWdpbi1tYW5hZ2VyL2ZyYWdtZW50LWUyNjE0LWNvbW1vbkAwL292ZXJy
aWRlQDEyL19vdmVybGF5XyBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBp
ZiAvcGx1Z2luLW1hbmFnZXIvZnJhZ21lbnQtZTI2MTQtY29tbW9uQDAvb3ZlcnJpZGVAMTIvX292
ZXJsYXlfIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJl
cjogZGV2PS9wbHVnaW4tbWFuYWdlci9mcmFnbWVudC1lMjYxNC1jb21tb25AMC9vdmVycmlkZUAx
Mi9fb3ZlcmxheV8NCihYRU4pIGhhbmRsZSAvcGx1Z2luLW1hbmFnZXIvZnJhZ21lbnQtZTI2MTQt
YTAwQDENCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGx1Z2luLW1hbmFnZXIvZnJhZ21lbnQt
ZTI2MTQtYTAwQDENCihYRU4pIC9wbHVnaW4tbWFuYWdlci9mcmFnbWVudC1lMjYxNC1hMDBAMSBw
YXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvcGx1Z2luLW1hbmFnZXIv
ZnJhZ21lbnQtZTI2MTQtYTAwQDEgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVO
KSBkdF9pcnFfbnVtYmVyOiBkZXY9L3BsdWdpbi1tYW5hZ2VyL2ZyYWdtZW50LWUyNjE0LWEwMEAx
DQooWEVOKSBoYW5kbGUgL3BsdWdpbi1tYW5hZ2VyL2ZyYWdtZW50LWUyNjE0LWEwMEAxL292ZXJy
aWRlc0AwDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3BsdWdpbi1tYW5hZ2VyL2ZyYWdtZW50
LWUyNjE0LWEwMEAxL292ZXJyaWRlc0AwDQooWEVOKSAvcGx1Z2luLW1hbmFnZXIvZnJhZ21lbnQt
ZTI2MTQtYTAwQDEvb3ZlcnJpZGVzQDAgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikg
Q2hlY2sgaWYgL3BsdWdpbi1tYW5hZ2VyL2ZyYWdtZW50LWUyNjE0LWEwMEAxL292ZXJyaWRlc0Aw
IGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2
PS9wbHVnaW4tbWFuYWdlci9mcmFnbWVudC1lMjYxNC1hMDBAMS9vdmVycmlkZXNAMA0KKFhFTikg
aGFuZGxlIC9wbHVnaW4tbWFuYWdlci9mcmFnbWVudC1lMjYxNC1hMDBAMS9vdmVycmlkZXNAMC9f
b3ZlcmxheV8NCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGx1Z2luLW1hbmFnZXIvZnJhZ21l
bnQtZTI2MTQtYTAwQDEvb3ZlcnJpZGVzQDAvX292ZXJsYXlfDQooWEVOKSAvcGx1Z2luLW1hbmFn
ZXIvZnJhZ21lbnQtZTI2MTQtYTAwQDEvb3ZlcnJpZGVzQDAvX292ZXJsYXlfIHBhc3N0aHJvdWdo
ID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9wbHVnaW4tbWFuYWdlci9mcmFnbWVudC1l
MjYxNC1hMDBAMS9vdmVycmlkZXNAMC9fb3ZlcmxheV8gaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQg
YWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3BsdWdpbi1tYW5hZ2VyL2ZyYWdtZW50
LWUyNjE0LWEwMEAxL292ZXJyaWRlc0AwL19vdmVybGF5Xw0KKFhFTikgaGFuZGxlIC9wbHVnaW4t
bWFuYWdlci9mcmFnbWVudC1lMjYxNC1hMDBAMS9vdmVycmlkZUAxDQooWEVOKSBkdF9pcnFfbnVt
YmVyOiBkZXY9L3BsdWdpbi1tYW5hZ2VyL2ZyYWdtZW50LWUyNjE0LWEwMEAxL292ZXJyaWRlQDEN
CihYRU4pIC9wbHVnaW4tbWFuYWdlci9mcmFnbWVudC1lMjYxNC1hMDBAMS9vdmVycmlkZUAxIHBh
c3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9wbHVnaW4tbWFuYWdlci9m
cmFnbWVudC1lMjYxNC1hMDBAMS9vdmVycmlkZUAxIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFk
ZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9wbHVnaW4tbWFuYWdlci9mcmFnbWVudC1l
MjYxNC1hMDBAMS9vdmVycmlkZUAxDQooWEVOKSBoYW5kbGUgL3BsdWdpbi1tYW5hZ2VyL2ZyYWdt
ZW50LWUyNjE0LWEwMEAxL292ZXJyaWRlQDEvX292ZXJsYXlfDQooWEVOKSBkdF9pcnFfbnVtYmVy
OiBkZXY9L3BsdWdpbi1tYW5hZ2VyL2ZyYWdtZW50LWUyNjE0LWEwMEAxL292ZXJyaWRlQDEvX292
ZXJsYXlfDQooWEVOKSAvcGx1Z2luLW1hbmFnZXIvZnJhZ21lbnQtZTI2MTQtYTAwQDEvb3ZlcnJp
ZGVAMS9fb3ZlcmxheV8gcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYg
L3BsdWdpbi1tYW5hZ2VyL2ZyYWdtZW50LWUyNjE0LWEwMEAxL292ZXJyaWRlQDEvX292ZXJsYXlf
IGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2
PS9wbHVnaW4tbWFuYWdlci9mcmFnbWVudC1lMjYxNC1hMDBAMS9vdmVycmlkZUAxL19vdmVybGF5
Xw0KKFhFTikgaGFuZGxlIC9wbHVnaW4tbWFuYWdlci9mcmFnbWVudC1lMjYxNC1hMDBAMS9vdmVy
cmlkZUAxL19vdmVybGF5Xy9udmlkaWEsZGFpLWxpbmstMQ0KKFhFTikgZHRfaXJxX251bWJlcjog
ZGV2PS9wbHVnaW4tbWFuYWdlci9mcmFnbWVudC1lMjYxNC1hMDBAMS9vdmVycmlkZUAxL19vdmVy
bGF5Xy9udmlkaWEsZGFpLWxpbmstMQ0KKFhFTikgL3BsdWdpbi1tYW5hZ2VyL2ZyYWdtZW50LWUy
NjE0LWEwMEAxL292ZXJyaWRlQDEvX292ZXJsYXlfL252aWRpYSxkYWktbGluay0xIHBhc3N0aHJv
dWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9wbHVnaW4tbWFuYWdlci9mcmFnbWVu
dC1lMjYxNC1hMDBAMS9vdmVycmlkZUAxL19vdmVybGF5Xy9udmlkaWEsZGFpLWxpbmstMSBpcyBi
ZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGx1
Z2luLW1hbmFnZXIvZnJhZ21lbnQtZTI2MTQtYTAwQDEvb3ZlcnJpZGVAMS9fb3ZlcmxheV8vbnZp
ZGlhLGRhaS1saW5rLTENCihYRU4pIGhhbmRsZSAvcGx1Z2luLW1hbmFnZXIvZnJhZ21lbnQtZTI2
MTQtYjAwQDINCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGx1Z2luLW1hbmFnZXIvZnJhZ21l
bnQtZTI2MTQtYjAwQDINCihYRU4pIC9wbHVnaW4tbWFuYWdlci9mcmFnbWVudC1lMjYxNC1iMDBA
MiBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvcGx1Z2luLW1hbmFn
ZXIvZnJhZ21lbnQtZTI2MTQtYjAwQDIgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQoo
WEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3BsdWdpbi1tYW5hZ2VyL2ZyYWdtZW50LWUyNjE0LWIw
MEAyDQooWEVOKSBoYW5kbGUgL3BsdWdpbi1tYW5hZ2VyL2ZyYWdtZW50LWUyNjE0LWIwMEAyL292
ZXJyaWRlc0AwDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3BsdWdpbi1tYW5hZ2VyL2ZyYWdt
ZW50LWUyNjE0LWIwMEAyL292ZXJyaWRlc0AwDQooWEVOKSAvcGx1Z2luLW1hbmFnZXIvZnJhZ21l
bnQtZTI2MTQtYjAwQDIvb3ZlcnJpZGVzQDAgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhF
TikgQ2hlY2sgaWYgL3BsdWdpbi1tYW5hZ2VyL2ZyYWdtZW50LWUyNjE0LWIwMEAyL292ZXJyaWRl
c0AwIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjog
ZGV2PS9wbHVnaW4tbWFuYWdlci9mcmFnbWVudC1lMjYxNC1iMDBAMi9vdmVycmlkZXNAMA0KKFhF
TikgaGFuZGxlIC9wbHVnaW4tbWFuYWdlci9mcmFnbWVudC1lMjYxNC1iMDBAMi9vdmVycmlkZXNA
MC9fb3ZlcmxheV8NCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGx1Z2luLW1hbmFnZXIvZnJh
Z21lbnQtZTI2MTQtYjAwQDIvb3ZlcnJpZGVzQDAvX292ZXJsYXlfDQooWEVOKSAvcGx1Z2luLW1h
bmFnZXIvZnJhZ21lbnQtZTI2MTQtYjAwQDIvb3ZlcnJpZGVzQDAvX292ZXJsYXlfIHBhc3N0aHJv
dWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9wbHVnaW4tbWFuYWdlci9mcmFnbWVu
dC1lMjYxNC1iMDBAMi9vdmVycmlkZXNAMC9fb3ZlcmxheV8gaXMgYmVoaW5kIHRoZSBJT01NVSBh
bmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3BsdWdpbi1tYW5hZ2VyL2ZyYWdt
ZW50LWUyNjE0LWIwMEAyL292ZXJyaWRlc0AwL19vdmVybGF5Xw0KKFhFTikgaGFuZGxlIC9wbHVn
aW4tbWFuYWdlci9mcmFnbWVudC1lMjYxNC1iMDBAMi9vdmVycmlkZUAxDQooWEVOKSBkdF9pcnFf
bnVtYmVyOiBkZXY9L3BsdWdpbi1tYW5hZ2VyL2ZyYWdtZW50LWUyNjE0LWIwMEAyL292ZXJyaWRl
QDENCihYRU4pIC9wbHVnaW4tbWFuYWdlci9mcmFnbWVudC1lMjYxNC1iMDBAMi9vdmVycmlkZUAx
IHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9wbHVnaW4tbWFuYWdl
ci9mcmFnbWVudC1lMjYxNC1iMDBAMi9vdmVycmlkZUAxIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5k
IGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9wbHVnaW4tbWFuYWdlci9mcmFnbWVu
dC1lMjYxNC1iMDBAMi9vdmVycmlkZUAxDQooWEVOKSBoYW5kbGUgL3BsdWdpbi1tYW5hZ2VyL2Zy
YWdtZW50LWUyNjE0LWIwMEAyL292ZXJyaWRlQDEvX292ZXJsYXlfDQooWEVOKSBkdF9pcnFfbnVt
YmVyOiBkZXY9L3BsdWdpbi1tYW5hZ2VyL2ZyYWdtZW50LWUyNjE0LWIwMEAyL292ZXJyaWRlQDEv
X292ZXJsYXlfDQooWEVOKSAvcGx1Z2luLW1hbmFnZXIvZnJhZ21lbnQtZTI2MTQtYjAwQDIvb3Zl
cnJpZGVAMS9fb3ZlcmxheV8gcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sg
aWYgL3BsdWdpbi1tYW5hZ2VyL2ZyYWdtZW50LWUyNjE0LWIwMEAyL292ZXJyaWRlQDEvX292ZXJs
YXlfIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjog
ZGV2PS9wbHVnaW4tbWFuYWdlci9mcmFnbWVudC1lMjYxNC1iMDBAMi9vdmVycmlkZUAxL19vdmVy
bGF5Xw0KKFhFTikgaGFuZGxlIC9wbHVnaW4tbWFuYWdlci9mcmFnbWVudC1lMjYxNC1iMDBAMi9v
dmVycmlkZUAxL19vdmVybGF5Xy9udmlkaWEsZGFpLWxpbmstMQ0KKFhFTikgZHRfaXJxX251bWJl
cjogZGV2PS9wbHVnaW4tbWFuYWdlci9mcmFnbWVudC1lMjYxNC1iMDBAMi9vdmVycmlkZUAxL19v
dmVybGF5Xy9udmlkaWEsZGFpLWxpbmstMQ0KKFhFTikgL3BsdWdpbi1tYW5hZ2VyL2ZyYWdtZW50
LWUyNjE0LWIwMEAyL292ZXJyaWRlQDEvX292ZXJsYXlfL252aWRpYSxkYWktbGluay0xIHBhc3N0
aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9wbHVnaW4tbWFuYWdlci9mcmFn
bWVudC1lMjYxNC1iMDBAMi9vdmVycmlkZUAxL19vdmVybGF5Xy9udmlkaWEsZGFpLWxpbmstMSBp
cyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0v
cGx1Z2luLW1hbmFnZXIvZnJhZ21lbnQtZTI2MTQtYjAwQDIvb3ZlcnJpZGVAMS9fb3ZlcmxheV8v
bnZpZGlhLGRhaS1saW5rLTENCihYRU4pIGhhbmRsZSAvcGx1Z2luLW1hbmFnZXIvZnJhZ21lbnQt
ZTI2MTQtcGluc0AzDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3BsdWdpbi1tYW5hZ2VyL2Zy
YWdtZW50LWUyNjE0LXBpbnNAMw0KKFhFTikgL3BsdWdpbi1tYW5hZ2VyL2ZyYWdtZW50LWUyNjE0
LXBpbnNAMyBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvcGx1Z2lu
LW1hbmFnZXIvZnJhZ21lbnQtZTI2MTQtcGluc0AzIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFk
ZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9wbHVnaW4tbWFuYWdlci9mcmFnbWVudC1l
MjYxNC1waW5zQDMNCihYRU4pIGhhbmRsZSAvcGx1Z2luLW1hbmFnZXIvZnJhZ21lbnQtZTI2MTQt
cGluc0AzL292ZXJyaWRlc0AwDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3BsdWdpbi1tYW5h
Z2VyL2ZyYWdtZW50LWUyNjE0LXBpbnNAMy9vdmVycmlkZXNAMA0KKFhFTikgL3BsdWdpbi1tYW5h
Z2VyL2ZyYWdtZW50LWUyNjE0LXBpbnNAMy9vdmVycmlkZXNAMCBwYXNzdGhyb3VnaCA9IDEgbmFk
ZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvcGx1Z2luLW1hbmFnZXIvZnJhZ21lbnQtZTI2MTQtcGlu
c0AzL292ZXJyaWRlc0AwIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRf
aXJxX251bWJlcjogZGV2PS9wbHVnaW4tbWFuYWdlci9mcmFnbWVudC1lMjYxNC1waW5zQDMvb3Zl
cnJpZGVzQDANCihYRU4pIGhhbmRsZSAvcGx1Z2luLW1hbmFnZXIvZnJhZ21lbnQtZTI2MTQtcGlu
c0AzL292ZXJyaWRlc0AwL19vdmVybGF5Xw0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9wbHVn
aW4tbWFuYWdlci9mcmFnbWVudC1lMjYxNC1waW5zQDMvb3ZlcnJpZGVzQDAvX292ZXJsYXlfDQoo
WEVOKSAvcGx1Z2luLW1hbmFnZXIvZnJhZ21lbnQtZTI2MTQtcGluc0AzL292ZXJyaWRlc0AwL19v
dmVybGF5XyBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvcGx1Z2lu
LW1hbmFnZXIvZnJhZ21lbnQtZTI2MTQtcGluc0AzL292ZXJyaWRlc0AwL19vdmVybGF5XyBpcyBi
ZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGx1
Z2luLW1hbmFnZXIvZnJhZ21lbnQtZTI2MTQtcGluc0AzL292ZXJyaWRlc0AwL19vdmVybGF5Xw0K
KFhFTikgaGFuZGxlIC9tb2RzLXNpbXBsZS1idXMNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0v
bW9kcy1zaW1wbGUtYnVzDQooWEVOKSAvbW9kcy1zaW1wbGUtYnVzIHBhc3N0aHJvdWdoID0gMSBu
YWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9tb2RzLXNpbXBsZS1idXMgaXMgYmVoaW5kIHRoZSBJ
T01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L21vZHMtc2ltcGxlLWJ1
cw0KKFhFTikgaGFuZGxlIC9tb2RzLXNpbXBsZS1idXMvbW9kcy1jbG9ja3MNCihYRU4pIGR0X2ly
cV9udW1iZXI6IGRldj0vbW9kcy1zaW1wbGUtYnVzL21vZHMtY2xvY2tzDQooWEVOKSAvbW9kcy1z
aW1wbGUtYnVzL21vZHMtY2xvY2tzIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENo
ZWNrIGlmIC9tb2RzLXNpbXBsZS1idXMvbW9kcy1jbG9ja3MgaXMgYmVoaW5kIHRoZSBJT01NVSBh
bmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L21vZHMtc2ltcGxlLWJ1cy9tb2Rz
LWNsb2Nrcw0KKFhFTikgaGFuZGxlIC9ncHNfd2FrZQ0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2
PS9ncHNfd2FrZQ0KKFhFTikgL2dwc193YWtlIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihY
RU4pIENoZWNrIGlmIC9ncHNfd2FrZSBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihY
RU4pIGR0X2lycV9udW1iZXI6IGRldj0vZ3BzX3dha2UNCihYRU4pIGhhbmRsZSAvY2hvc2VuDQoo
WEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2Nob3Nlbg0KKFhFTikgL2Nob3NlbiBwYXNzdGhyb3Vn
aCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvY2hvc2VuIGlzIGJlaGluZCB0aGUgSU9N
TVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9jaG9zZW4NCihYRU4pIGhh
bmRsZSAvY2hvc2VuL3BtdS1ib2FyZA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9jaG9zZW4v
cG11LWJvYXJkDQooWEVOKSAvY2hvc2VuL3BtdS1ib2FyZCBwYXNzdGhyb3VnaCA9IDEgbmFkZHIg
PSAwDQooWEVOKSBDaGVjayBpZiAvY2hvc2VuL3BtdS1ib2FyZCBpcyBiZWhpbmQgdGhlIElPTU1V
IGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vY2hvc2VuL3BtdS1ib2FyZA0K
KFhFTikgaGFuZGxlIC9jaG9zZW4vcHJvYy1ib2FyZA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2
PS9jaG9zZW4vcHJvYy1ib2FyZA0KKFhFTikgL2Nob3Nlbi9wcm9jLWJvYXJkIHBhc3N0aHJvdWdo
ID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9jaG9zZW4vcHJvYy1ib2FyZCBpcyBiZWhp
bmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vY2hvc2Vu
L3Byb2MtYm9hcmQNCihYRU4pIGhhbmRsZSAvY2hvc2VuL2Rpc3BsYXktYm9hcmQNCihYRU4pIGR0
X2lycV9udW1iZXI6IGRldj0vY2hvc2VuL2Rpc3BsYXktYm9hcmQNCihYRU4pIC9jaG9zZW4vZGlz
cGxheS1ib2FyZCBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvY2hv
c2VuL2Rpc3BsYXktYm9hcmQgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBk
dF9pcnFfbnVtYmVyOiBkZXY9L2Nob3Nlbi9kaXNwbGF5LWJvYXJkDQooWEVOKSBoYW5kbGUgL2No
b3Nlbi9yZXNldA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9jaG9zZW4vcmVzZXQNCihYRU4p
IC9jaG9zZW4vcmVzZXQgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYg
L2Nob3Nlbi9yZXNldCBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2ly
cV9udW1iZXI6IGRldj0vY2hvc2VuL3Jlc2V0DQooWEVOKSBoYW5kbGUgL2Nob3Nlbi9wbHVnaW4t
bWFuYWdlcg0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9jaG9zZW4vcGx1Z2luLW1hbmFnZXIN
CihYRU4pIC9jaG9zZW4vcGx1Z2luLW1hbmFnZXIgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0K
KFhFTikgQ2hlY2sgaWYgL2Nob3Nlbi9wbHVnaW4tbWFuYWdlciBpcyBiZWhpbmQgdGhlIElPTU1V
IGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vY2hvc2VuL3BsdWdpbi1tYW5h
Z2VyDQooWEVOKSBoYW5kbGUgL2Nob3Nlbi9wbHVnaW4tbWFuYWdlci9pZHMNCihYRU4pIGR0X2ly
cV9udW1iZXI6IGRldj0vY2hvc2VuL3BsdWdpbi1tYW5hZ2VyL2lkcw0KKFhFTikgL2Nob3Nlbi9w
bHVnaW4tbWFuYWdlci9pZHMgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sg
aWYgL2Nob3Nlbi9wbHVnaW4tbWFuYWdlci9pZHMgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRk
IGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2Nob3Nlbi9wbHVnaW4tbWFuYWdlci9pZHMN
CihYRU4pIGhhbmRsZSAvY2hvc2VuL3BsdWdpbi1tYW5hZ2VyL2lkcy9jb25uZWN0aW9uDQooWEVO
KSBkdF9pcnFfbnVtYmVyOiBkZXY9L2Nob3Nlbi9wbHVnaW4tbWFuYWdlci9pZHMvY29ubmVjdGlv
bg0KKFhFTikgL2Nob3Nlbi9wbHVnaW4tbWFuYWdlci9pZHMvY29ubmVjdGlvbiBwYXNzdGhyb3Vn
aCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvY2hvc2VuL3BsdWdpbi1tYW5hZ2VyL2lk
cy9jb25uZWN0aW9uIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJx
X251bWJlcjogZGV2PS9jaG9zZW4vcGx1Z2luLW1hbmFnZXIvaWRzL2Nvbm5lY3Rpb24NCihYRU4p
IGhhbmRsZSAvY2hvc2VuL3BsdWdpbi1tYW5hZ2VyL2lkcy9jb25uZWN0aW9uL2kyY0A3MDAwYzUw
MA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9jaG9zZW4vcGx1Z2luLW1hbmFnZXIvaWRzL2Nv
bm5lY3Rpb24vaTJjQDcwMDBjNTAwDQooWEVOKSAvY2hvc2VuL3BsdWdpbi1tYW5hZ2VyL2lkcy9j
b25uZWN0aW9uL2kyY0A3MDAwYzUwMCBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBD
aGVjayBpZiAvY2hvc2VuL3BsdWdpbi1tYW5hZ2VyL2lkcy9jb25uZWN0aW9uL2kyY0A3MDAwYzUw
MCBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRl
dj0vY2hvc2VuL3BsdWdpbi1tYW5hZ2VyL2lkcy9jb25uZWN0aW9uL2kyY0A3MDAwYzUwMA0KKFhF
TikgaGFuZGxlIC9jaG9zZW4vcGx1Z2luLW1hbmFnZXIvaWRzL2Nvbm5lY3Rpb24vaTJjQDcwMDBj
NTAwL21vZHVsZUAweDUwDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2Nob3Nlbi9wbHVnaW4t
bWFuYWdlci9pZHMvY29ubmVjdGlvbi9pMmNANzAwMGM1MDAvbW9kdWxlQDB4NTANCihYRU4pIC9j
aG9zZW4vcGx1Z2luLW1hbmFnZXIvaWRzL2Nvbm5lY3Rpb24vaTJjQDcwMDBjNTAwL21vZHVsZUAw
eDUwIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9jaG9zZW4vcGx1
Z2luLW1hbmFnZXIvaWRzL2Nvbm5lY3Rpb24vaTJjQDcwMDBjNTAwL21vZHVsZUAweDUwIGlzIGJl
aGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9jaG9z
ZW4vcGx1Z2luLW1hbmFnZXIvaWRzL2Nvbm5lY3Rpb24vaTJjQDcwMDBjNTAwL21vZHVsZUAweDUw
DQooWEVOKSBoYW5kbGUgL2Nob3Nlbi9wbHVnaW4tbWFuYWdlci9pZHMvY29ubmVjdGlvbi9pMmNA
NzAwMGM1MDAvbW9kdWxlQDB4NTcNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vY2hvc2VuL3Bs
dWdpbi1tYW5hZ2VyL2lkcy9jb25uZWN0aW9uL2kyY0A3MDAwYzUwMC9tb2R1bGVAMHg1Nw0KKFhF
TikgL2Nob3Nlbi9wbHVnaW4tbWFuYWdlci9pZHMvY29ubmVjdGlvbi9pMmNANzAwMGM1MDAvbW9k
dWxlQDB4NTcgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL2Nob3Nl
bi9wbHVnaW4tbWFuYWdlci9pZHMvY29ubmVjdGlvbi9pMmNANzAwMGM1MDAvbW9kdWxlQDB4NTcg
aXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9
L2Nob3Nlbi9wbHVnaW4tbWFuYWdlci9pZHMvY29ubmVjdGlvbi9pMmNANzAwMGM1MDAvbW9kdWxl
QDB4NTcNCihYRU4pIGhhbmRsZSAvY2hvc2VuL3BsdWdpbi1tYW5hZ2VyL29kbS1kYXRhDQooWEVO
KSBkdF9pcnFfbnVtYmVyOiBkZXY9L2Nob3Nlbi9wbHVnaW4tbWFuYWdlci9vZG0tZGF0YQ0KKFhF
TikgL2Nob3Nlbi9wbHVnaW4tbWFuYWdlci9vZG0tZGF0YSBwYXNzdGhyb3VnaCA9IDEgbmFkZHIg
PSAwDQooWEVOKSBDaGVjayBpZiAvY2hvc2VuL3BsdWdpbi1tYW5hZ2VyL29kbS1kYXRhIGlzIGJl
aGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9jaG9z
ZW4vcGx1Z2luLW1hbmFnZXIvb2RtLWRhdGENCihYRU4pIGhhbmRsZSAvY2hvc2VuL21vZHVsZUAw
DQooWEVOKSAgIFNraXAgaXQgKG1hdGNoZWQpDQooWEVOKSBoYW5kbGUgL2Nob3Nlbi92ZXJpZmll
ZC1ib290DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2Nob3Nlbi92ZXJpZmllZC1ib290DQoo
WEVOKSAvY2hvc2VuL3ZlcmlmaWVkLWJvb3QgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhF
TikgQ2hlY2sgaWYgL2Nob3Nlbi92ZXJpZmllZC1ib290IGlzIGJlaGluZCB0aGUgSU9NTVUgYW5k
IGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9jaG9zZW4vdmVyaWZpZWQtYm9vdA0K
KFhFTikgaGFuZGxlIC9ncHUtZHZmcy1yZXdvcmsNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0v
Z3B1LWR2ZnMtcmV3b3JrDQooWEVOKSAvZ3B1LWR2ZnMtcmV3b3JrIHBhc3N0aHJvdWdoID0gMSBu
YWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9ncHUtZHZmcy1yZXdvcmsgaXMgYmVoaW5kIHRoZSBJ
T01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2dwdS1kdmZzLXJld29y
aw0KKFhFTikgaGFuZGxlIC9wd21fcmVndWxhdG9ycw0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2
PS9wd21fcmVndWxhdG9ycw0KKFhFTikgL3B3bV9yZWd1bGF0b3JzIHBhc3N0aHJvdWdoID0gMSBu
YWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9wd21fcmVndWxhdG9ycyBpcyBiZWhpbmQgdGhlIElP
TU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcHdtX3JlZ3VsYXRvcnMN
CihYRU4pIGhhbmRsZSAvcHdtX3JlZ3VsYXRvcnMvcHdtLXJlZ3VsYXRvckAwDQooWEVOKSBkdF9p
cnFfbnVtYmVyOiBkZXY9L3B3bV9yZWd1bGF0b3JzL3B3bS1yZWd1bGF0b3JAMA0KKFhFTikgL3B3
bV9yZWd1bGF0b3JzL3B3bS1yZWd1bGF0b3JAMCBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQoo
WEVOKSBDaGVjayBpZiAvcHdtX3JlZ3VsYXRvcnMvcHdtLXJlZ3VsYXRvckAwIGlzIGJlaGluZCB0
aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9wd21fcmVndWxh
dG9ycy9wd20tcmVndWxhdG9yQDANCihYRU4pIGhhbmRsZSAvcHdtX3JlZ3VsYXRvcnMvcHdtLXJl
Z3VsYXRvckAxDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3B3bV9yZWd1bGF0b3JzL3B3bS1y
ZWd1bGF0b3JAMQ0KKFhFTikgL3B3bV9yZWd1bGF0b3JzL3B3bS1yZWd1bGF0b3JAMSBwYXNzdGhy
b3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvcHdtX3JlZ3VsYXRvcnMvcHdtLXJl
Z3VsYXRvckAxIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251
bWJlcjogZGV2PS9wd21fcmVndWxhdG9ycy9wd20tcmVndWxhdG9yQDENCihYRU4pIGhhbmRsZSAv
ZGZsbC1tYXg3NzYyMUA3MDExMDAwMA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9kZmxsLW1h
eDc3NjIxQDcwMTEwMDAwDQooWEVOKSAvZGZsbC1tYXg3NzYyMUA3MDExMDAwMCBwYXNzdGhyb3Vn
aCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvZGZsbC1tYXg3NzYyMUA3MDExMDAwMCBp
cyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0v
ZGZsbC1tYXg3NzYyMUA3MDExMDAwMA0KKFhFTikgaGFuZGxlIC9kZmxsLW1heDc3NjIxQDcwMTEw
MDAwL2RmbGwtbWF4Nzc2MjEtaW50ZWdyYXRpb24NCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0v
ZGZsbC1tYXg3NzYyMUA3MDExMDAwMC9kZmxsLW1heDc3NjIxLWludGVncmF0aW9uDQooWEVOKSAv
ZGZsbC1tYXg3NzYyMUA3MDExMDAwMC9kZmxsLW1heDc3NjIxLWludGVncmF0aW9uIHBhc3N0aHJv
dWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9kZmxsLW1heDc3NjIxQDcwMTEwMDAw
L2RmbGwtbWF4Nzc2MjEtaW50ZWdyYXRpb24gaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0
DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2RmbGwtbWF4Nzc2MjFANzAxMTAwMDAvZGZsbC1t
YXg3NzYyMS1pbnRlZ3JhdGlvbg0KKFhFTikgaGFuZGxlIC9kZmxsLW1heDc3NjIxQDcwMTEwMDAw
L2RmbGwtbWF4Nzc2MjEtYm9hcmQtcGFyYW1zDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2Rm
bGwtbWF4Nzc2MjFANzAxMTAwMDAvZGZsbC1tYXg3NzYyMS1ib2FyZC1wYXJhbXMNCihYRU4pIC9k
ZmxsLW1heDc3NjIxQDcwMTEwMDAwL2RmbGwtbWF4Nzc2MjEtYm9hcmQtcGFyYW1zIHBhc3N0aHJv
dWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9kZmxsLW1heDc3NjIxQDcwMTEwMDAw
L2RmbGwtbWF4Nzc2MjEtYm9hcmQtcGFyYW1zIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBp
dA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9kZmxsLW1heDc3NjIxQDcwMTEwMDAwL2RmbGwt
bWF4Nzc2MjEtYm9hcmQtcGFyYW1zDQooWEVOKSBoYW5kbGUgL2RmbGwtY2Rldi1jYXANCihYRU4p
IGR0X2lycV9udW1iZXI6IGRldj0vZGZsbC1jZGV2LWNhcA0KKFhFTikgL2RmbGwtY2Rldi1jYXAg
cGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL2RmbGwtY2Rldi1jYXAg
aXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9
L2RmbGwtY2Rldi1jYXANCihYRU4pIGhhbmRsZSAvZGZsbC1jZGV2LWZsb29yDQooWEVOKSBkdF9p
cnFfbnVtYmVyOiBkZXY9L2RmbGwtY2Rldi1mbG9vcg0KKFhFTikgL2RmbGwtY2Rldi1mbG9vciBw
YXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvZGZsbC1jZGV2LWZsb29y
IGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2
PS9kZmxsLWNkZXYtZmxvb3INCihYRU4pIGhhbmRsZSAvZHZmcw0KKFhFTikgZHRfaXJxX251bWJl
cjogZGV2PS9kdmZzDQooWEVOKSAvZHZmcyBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVO
KSBDaGVjayBpZiAvZHZmcyBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0
X2lycV9udW1iZXI6IGRldj0vZHZmcw0KKFhFTikgaGFuZGxlIC9yODE2OA0KKFhFTikgZHRfaXJx
X251bWJlcjogZGV2PS9yODE2OA0KKFhFTikgL3I4MTY4IHBhc3N0aHJvdWdoID0gMSBuYWRkciA9
IDANCihYRU4pIENoZWNrIGlmIC9yODE2OCBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQN
CihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcjgxNjgNCihYRU4pIGhhbmRsZSAvdGVncmFfdWRy
bQ0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS90ZWdyYV91ZHJtDQooWEVOKSAvdGVncmFfdWRy
bSBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvdGVncmFfdWRybSBp
cyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0v
dGVncmFfdWRybQ0KKFhFTikgaGFuZGxlIC9zb2Z0X3dhdGNoZG9nDQooWEVOKSBkdF9pcnFfbnVt
YmVyOiBkZXY9L3NvZnRfd2F0Y2hkb2cNCihYRU4pIC9zb2Z0X3dhdGNoZG9nIHBhc3N0aHJvdWdo
ID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9zb2Z0X3dhdGNoZG9nIGlzIGJlaGluZCB0
aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9zb2Z0X3dhdGNo
ZG9nDQooWEVOKSBoYW5kbGUgL2xlZHMNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vbGVkcw0K
KFhFTikgL2xlZHMgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL2xl
ZHMgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBk
ZXY9L2xlZHMNCihYRU4pIGhhbmRsZSAvbGVkcy9wd3INCihYRU4pIGR0X2lycV9udW1iZXI6IGRl
dj0vbGVkcy9wd3INCihYRU4pIC9sZWRzL3B3ciBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQoo
WEVOKSBDaGVjayBpZiAvbGVkcy9wd3IgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQoo
WEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2xlZHMvcHdyDQooWEVOKSBoYW5kbGUgL21lbW9yeUA4
MDAwMDAwMA0KKFhFTikgICBTa2lwIGl0IChtYXRjaGVkKQ0KKFhFTikgaGFuZGxlIC9jcHVfZWRw
DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2NwdV9lZHANCihYRU4pIC9jcHVfZWRwIHBhc3N0
aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9jcHVfZWRwIGlzIGJlaGluZCB0
aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9jcHVfZWRwDQoo
WEVOKSBoYW5kbGUgL2dwdV9lZHANCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vZ3B1X2VkcA0K
KFhFTikgL2dwdV9lZHAgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYg
L2dwdV9lZHAgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVt
YmVyOiBkZXY9L2dwdV9lZHANCihYRU4pIGhhbmRsZSAvX19zeW1ib2xzX18NCihYRU4pIGR0X2ly
cV9udW1iZXI6IGRldj0vX19zeW1ib2xzX18NCihYRU4pIC9fX3N5bWJvbHNfXyBwYXNzdGhyb3Vn
aCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvX19zeW1ib2xzX18gaXMgYmVoaW5kIHRo
ZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L19fc3ltYm9sc19f
DQooWEVOKSBBbGxvY2F0aW5nIFBQSSAxNiBmb3IgZXZlbnQgY2hhbm5lbCBpbnRlcnJ1cHQNCihY
RU4pIENyZWF0ZSBoeXBlcnZpc29yIG5vZGUNCihYRU4pIENyZWF0ZSBQU0NJIG5vZGUNCihYRU4p
IENyZWF0ZSBjcHVzIG5vZGUNCihYRU4pIENyZWF0ZSBjcHVAMCAobG9naWNhbCBDUFVJRDogMCkg
bm9kZQ0KKFhFTikgQ3JlYXRlIGNwdUAxIChsb2dpY2FsIENQVUlEOiAxKSBub2RlDQooWEVOKSBD
cmVhdGUgY3B1QDIgKGxvZ2ljYWwgQ1BVSUQ6IDIpIG5vZGUNCihYRU4pIENyZWF0ZSBjcHVAMyAo
bG9naWNhbCBDUFVJRDogMykgbm9kZQ0KKFhFTikgQ3JlYXRlIG1lbW9yeSBub2RlIChyZWcgc2l6
ZSA0LCBuciBjZWxscyA0KQ0KKFhFTikgICBCYW5rIDA6IDB4ZTgwMDAwMDAtPjB4ZjAwMDAwMDAN
CihYRU4pIENyZWF0ZSBtZW1vcnkgbm9kZSAocmVnIHNpemUgNCwgbnIgY2VsbHMgOCkNCihYRU4p
ICAgQmFuayAwOiAweDQwMDAxMDAwLT4weDQwMDQwMDAwDQooWEVOKSAgIEJhbmsgMTogMHhiMDAw
MDAwMC0+MHhiMDIwMDAwMA0KKFhFTikgTG9hZGluZyB6SW1hZ2UgZnJvbSAwMDAwMDAwMGUxMDAw
MDAwIHRvIDAwMDAwMDAwZTgwODAwMDAtMDAwMDAwMDBlYTIzYzgwOA0KKFhFTikgDQooWEVOKSAq
KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqDQooWEVOKSBQYW5pYyBvbiBD
UFUgMDoNCihYRU4pIFVuYWJsZSB0byBjb3B5IHRoZSBrZXJuZWwgaW4gdGhlIGh3ZG9tIG1lbW9y
eQ0KKFhFTikgKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKg0KKFhFTikg
DQooWEVOKSBSZWJvb3QgaW4gZml2ZSBzZWNvbmRzLi4uDQpbMDAwMC4xNTddIFtMNFQgVGVncmFC
b290XSAodmVyc2lvbiAwMC4wMC4yMDE4LjAxLWw0dC04MGE0NjhkYSkNClswMDAwLjE2M10gUHJv
Y2Vzc2luZyBpbiBjb2xkIGJvb3QgbW9kZSBCb290bG9hZGVyIDINClswMDAwLjE2N10gQTAyIEJv
b3Ryb20gUGF0Y2ggcmV2ID0gMTAyMw0KDQpbMDAwMS45OTZdIE5DSyBjYXJ2ZW91dCBub3QgcHJl
c2VudA0KWzAwMDIuMDA2XSBGaW5kIC9pMmNANzAwMGMwMDAncyBhbGlhcyBpMmMwDQpbMDAwMi4w
MTBdIGdldCBlZXByb20gYXQgMS1hMCwgc2l6ZSAyNTYsIHR5cGUgMA0KWzAwMDIuMDE5XSBGaW5k
IC9pMmNANzAwMGM1MDAncyBhbGlhcyBpMmMyDQpbMDAwMi4wMjNdIGdldCBlZXByb20gYXQgMy1h
MCwgc2l6ZSAyNTYsIHR5cGUgMA0KWzAwMDIuMDI3XSBnZXQgZWVwcm9tIGF0IDMtYWUsIHNpemUg
MjU2LCB0eXBlIDANClswMDAyLjAzMV0gcG1faWRzX3VwZGF0ZTogVXBkYXRpbmcgMSxhMCwgc2l6
ZSAyNTYsIHR5cGUgMA0KWzAwMDIuMDM3XSBJMkMgc2xhdmUgbm90IHN0YXJ0ZWQNClswMDAyLjA0
MF0gSTJDIHdyaXRlIGZhaWxlZA0KWzAwMDIuMDQyXSBXcml0aW5nIG9mZnNldCBmYWlsZWQNClsw
MDAyLjA0NV0gZWVwcm9tX2luaXQ6IEVFUFJPTSByZWFkIGZhaWxlZA0KWzAwMDIuMDQ5XSBwbV9p
ZHNfdXBkYXRlOiBlZXByb20gaW5pdCBmYWlsZWQNClswMDAyLjA1M10gcG1faWRzX3VwZGF0ZTog
VXBkYXRpbmcgMyxhMCwgc2l6ZSAyNTYsIHR5cGUgMA0KWzAwMDIuMDgzXSBwbV9pZHNfdXBkYXRl
OiBUaGUgcG0gYm9hcmQgaWQgaXMgMzQ0OC0wMDAwLTIwMA0KWzAwMDIuMDkwXSBBZGRpbmcgcGx1
Z2luLW1hbmFnZXIvaWRzLzM0NDgtMDAwMC0yMDA9L2kyY0A3MDAwYzUwMDptb2R1bGVAMHg1MA0K
WzAwMDIuMDk4XSBwbV9pZHNfdXBkYXRlOiBwbSBpZCB1cGRhdGUgc3VjY2Vzc2Z1bA0KWzAwMDIu
MTAzXSBwbV9pZHNfdXBkYXRlOiBVcGRhdGluZyAzLGFlLCBzaXplIDI1NiwgdHlwZSAwDQpbMDAw
Mi4xMzNdIHBtX2lkc191cGRhdGU6IFRoZSBwbSBib2FyZCBpZCBpcyAzNDQ5LTAwMDAtMjAwDQpb
MDAwMi4xMzldIEFkZGluZyBwbHVnaW4tbWFuYWdlci9pZHMvMzQ0OS0wMDAwLTIwMD0vaTJjQDcw
MDBjNTAwOm1vZHVsZUAweDU3DQpbMDAwMi4xNDddIHBtX2lkc191cGRhdGU6IHBtIGlkIHVwZGF0
ZSBzdWNjZXNzZnVsDQpbMDAwMi4xNzhdIGVlcHJvbV9nZXRfbWFjOiBFRVBST00gaW52YWxpZCBN
QUMgYWRkcmVzcyAoYWxsIDB4ZmYpDQpbMDAwMi4xODRdIHNoaW1fZWVwcm9tX3VwZGF0ZV9tYWM6
MjY3OiBGYWlsZWQgdG8gdXBkYXRlIDAgTUFDIGFkZHJlc3MgaW4gRFRCDQpbMDAwMi4xOTJdIGVl
cHJvbV9nZXRfbWFjOiBFRVBST00gaW52YWxpZCBNQUMgYWRkcmVzcyAoYWxsIDB4ZmYpDQpbMDAw
Mi4xOThdIHNoaW1fZWVwcm9tX3VwZGF0ZV9tYWM6MjY3OiBGYWlsZWQgdG8gdXBkYXRlIDEgTUFD
IGFkZHJlc3MgaW4gRFRCDQpbMDAwMi4yMDZdIHVwZGF0aW5nIC9jaG9zZW4vbnZpZGlhLGV0aGVy
bmV0LW1hYyBub2RlIDAwOjA0OjRiOmU2OjY5OjgzDQpbMDAwMi4yMTNdIFBsdWdpbiBNYW5hZ2Vy
OiBQYXJzZSBPRE0gZGF0YSAweDAwMDk0MDAwDQpbMDAwMi4yMjNdIHNoaW1fY21kbGluZV9pbnN0
YWxsOiAvY2hvc2VuL2Jvb3RhcmdzOiBlYXJseWNvbj11YXJ0ODI1MCxtbWlvMzIsMHg3MDAwNjAw
MCANClswMDAyLjIzOF0gRmluZCAvaTJjQDcwMDBjMDAwJ3MgYWxpYXMgaTJjMA0KWzAwMDIuMjQx
XSBnZXQgZWVwcm9tIGF0IDEtYTAsIHNpemUgMjU2LCB0eXBlIDANClswMDAyLjI1MF0gRmluZCAv
aTJjQDcwMDBjNTAwJ3MgYWxpYXMgaTJjMg0KWzAwMDIuMjU0XSBnZXQgZWVwcm9tIGF0IDMtYTAs
IHNpemUgMjU2LCB0eXBlIDANClswMDAyLjI1OV0gZ2V0IGVlcHJvbSBhdCAzLWFlLCBzaXplIDI1
NiwgdHlwZSAwDQpbMDAwMi4yNjNdIHBtX2lkc191cGRhdGU6IFVwZGF0aW5nIDEsYTAsIHNpemUg
MjU2LCB0eXBlIDANClswMDAyLjI2OF0gSTJDIHNsYXZlIG5vdCBzdGFydGVkDQpbMDAwMi4yNzFd
IEkyQyB3cml0ZSBmYWlsZWQNClswMDAyLjI3NF0gV3JpdGluZyBvZmZzZXQgZmFpbGVkDQpbMDAw
Mi4yNzddIGVlcHJvbV9pbml0OiBFRVBST00gcmVhZCBmYWlsZWQNClswMDAyLjI4MV0gcG1faWRz
X3VwZGF0ZTogZWVwcm9tIGluaXQgZmFpbGVkDQpbMDAwMi4yODVdIHBtX2lkc191cGRhdGU6IFVw
ZGF0aW5nIDMsYTAsIHNpemUgMjU2LCB0eXBlIDANClswMDAyLjMxNV0gcG1faWRzX3VwZGF0ZTog
VGhlIHBtIGJvYXJkIGlkIGlzIDM0NDgtMDAwMC0yMDANClswMDAyLjMyMl0gQWRkaW5nIHBsdWdp
bi1tYW5hZ2VyL2lkcy8zNDQ4LTAwMDAtMjAwPS9pMmNANzAwMGM1MDA6bW9kdWxlQDB4NTANClsw
MDAyLjMyOV0gcG1faWRzX3VwZGF0ZTogcG0gaWQgdXBkYXRlIHN1Y2Nlc3NmdWwNClswMDAyLjMz
M10gcG1faWRzX3VwZGF0ZTogVXBkYXRpbmcgMyxhZSwgc2l6ZSAyNTYsIHR5cGUgMA0KWzAwMDIu
MzYzXSBwbV9pZHNfdXBkYXRlOiBUaGUgcG0gYm9hcmQgaWQgaXMgMzQ0OS0wMDAwLTIwMA0KWzAw
MDIuMzcwXSBBZGRpbmcgcGx1Z2luLW1hbmFnZXIvaWRzLzM0NDktMDAwMC0yMDA9L2kyY0A3MDAw
YzUwMDptb2R1bGVAMHg1Nw0KWzAwMDIuMzc3XSBwbV9pZHNfdXBkYXRlOiBwbSBpZCB1cGRhdGUg
c3VjY2Vzc2Z1bA0KWzAwMDIuNDA3XSBBZGQgc2VyaWFsIG51bWJlcjoxNDIyNDE5MTQ1Mjk2IGFz
IERUIHByb3BlcnR5DQpbMDAwMi40MTVdIEFwcGx5aW5nIHBsYXRmb3JtIGNvbmZpZ3MNClswMDAy
LjQyMl0gcGxhdGZvcm0taW5pdCBpcyBub3QgcHJlc2VudC4gU2tpcHBpbmcNClswMDAyLjQyNl0g
Y2FsbGluZyBhcHBzX2luaXQoKQ0KWzAwMDIuNDUyXSBGb3VuZCAxNCBHUFQgcGFydGl0aW9ucyBp
biAic2QwIg0KWzAwMDIuNDU1XSBQcm9jZWVkaW5nIHRvIENvbGQgQm9vdA0KWzAwMDIuNDU5XSBz
dGFydGluZyBhcHAgYW5kcm9pZF9ib290X2FwcA0KWzAwMDIuNDYzXSBEZXZpY2Ugc3RhdGU6IHVu
bG9ja2VkDQpbMDAwMi40NjZdIGRpc3BsYXkgY29uc29sZSBpbml0DQpbMDAwMi40NzRdIGNvdWxk
IG5vdCBmaW5kIHJlZ3VsYXRvcg0KWzAwMDIuNDk4XSBoZG1pIGNhYmxlIG5vdCBjb25uZWN0ZWQN
ClswMDAyLjUwMV0gaXNfaGRtaV9uZWVkZWQ6IEhETUkgbm90IGNvbm5lY3RlZCwgcmV0dXJuaW5n
IGZhbHNlDQpbMDAwMi41MDddIGhkbWkgaXMgbm90IGNvbm5EVCBlbnRyeSBmb3IgbGVkcy1wd20g
bm90IGZvdW5kDQplWzAwMDIuNTE1XSBjdGVkDQpbMDAwMi41MTddIHNvcjAgaXMgbm90IHN1cHBv
cnRlZA0KWzAwMDIuNTIwXSBkaXNwbGF5X2NvbnNvbGVfaW5pdDogbm8gdmFsaWQgZGlzcGxheSBv
dXRfdHlwZQ0KWzAwMDIuNTI4XSBzdWJub2RlIHZvbHVtZV91cCBpcyBub3QgZm91bmQgIQ0KWzAw
MDIuNTMyXSBzdWJub2RlIGJhY2sgaXMgbm90IGZvdW5kICENClswMDAyLjUzNV0gc3Vibm9kZSB2
b2x1bWVfZG93biBpcyBub3QgZm91bmQgIQ0KWzAwMDIuNTQwXSBzdWJub2RlIG1lbnUgaXMgbm90
IGZvdW5kICENClswMDAyLjU0M10gR3BpbyBrZXlib2FyZCBpbml0IHN1Y2Nlc3MNClswMDAyLjU4
N10gZm91bmQgZGVjb21wcmVzc29yIGhhbmRsZXI6IGx6NC1sZWdhY3kNClswMDAyLjYwMV0gZGVj
b21wcmVzc2luZyBibG9iICh0eXBlIDEpLi4uDQpbMDAwMi42MzVdIGRpc3BsYXlfcmVzb2x1dGlv
bjogTm8gZGlzcGxheSBpbml0DQpbMDAwMi42MzldIEZhaWxlZCB0byByZXRyaWV2ZSBkaXNwbGF5
IHJlc29sdXRpb24NClswMDAyLjY0NF0gQ291bGQgbm90IGxvYWQvaW5pdGlhbGl6ZSBCTVAgYmxv
Yi4uLmlnbm9yaW5nDQpbMDAwMi42OTVdIGRlY29tcHJlc3NvciBoYW5kbGVyIG5vdCBmb3VuZA0K
WzAwMDIuNjk5XSBsb2FkX2Zpcm13YXJlX2Jsb2I6IEZpcm13YXJlIGJsb2IgbG9hZGVkLCBlbnRy
aWVzPTINClswMDAyLjcwNF0gLS0tLS0tLT4gc2VfYWVzX3ZlcmlmeV9zYmtfY2xlYXI6IDc0Nw0K
WzAwMDIuNzA5XSBzZV9hZXNfdmVyaWZ5X3Nia19jbGVhcjogRXJyb3INClswMDAyLjcxM10gYmxf
YmF0dGVyeV9jaGFyZ2luZzogY29ubmVjdGVkIHRvIGV4dGVybmFsIHBvd2VyIHN1cHBseQ0KWzAw
MDIuNzIwXSB4dXNiIGlzIHN1cHBvcnRlZA0KWzAwMDIuNzI2XSBlcnJvciB3aGlsZSBmaW5kaW5n
IG52aWRpYSxwb3J0bWFwDQpbMDAwMy4yMzBdIHh1c2IgYmxvYiB2ZXJzaW9uIDAgc2l6ZSAxMjQ0
MTYNClswMDAzLjIzNF0gZmlybXdhcmUgc2l6ZSAxMjQ0MTYNClswMDAzLjIzOV0gRmlybXdhcmUg
dGltZXN0YW1wOiAweDVkYTg4ZmMzLCBWZXJzaW9uOiA1MC4yNSByZWxlYXNlDQpbMDAwMy4yNDld
IHhoY2kwOiA2NCBieXRlcyBjb250ZXh0IHNpemUsIDMyLWJpdCBETUENClswMDAzLjI4OV0gdXNi
dXMwOiA1LjBHYnBzIFN1cGVyIFNwZWVkIFVTQiB2My4wDQpbMDAwMy4zMDldIHVodWIwOiA8TnZp
ZGlhIFhIQ0kgcm9vdCBIVUIsIGNsYXNzIDkvMCwgcmV2IDMuMDAvMS4wMCwgYWRkciAxPiBvbiB1
c2J1czANClswMDAzLjk1OV0gdWh1YjA6IDkgcG9ydHMgd2l0aCA5IHJlbW92YWJsZSwgc2VsZiBw
b3dlcmVkDQpbMDAwNC45NTldIGZhaWxlZCB0byBnZXQgSElEIGRldmljZXMNClswMDA0Ljk2Ml0g
ZmFpbGVkIHRvIGluaXQgeGhjaSBvciBubyB1c2IgZGV2aWNlIGF0dGFjaGVkDQpbMDAwNC45NzBd
IGRpc3BsYXlfY29uc29sZV9pb2N0bDogTm8gZGlzcGxheSBpbml0DQpbMDAwNC45NzRdIHN3aXRj
aF9iYWNrbGlnaHQgZmFpbGVkDQpbMDAwNC45ODFdIGRldmljZV9xdWVyeV9wYXJ0aXRpb25fc2l6
ZTogZmFpbGVkIHRvIG9wZW4gcGFydGl0aW9uIHNkMDpNU0MgIQ0KWzAwMDQuOTg3XSBNU0MgUGFy
dGl0aW9uIG5vdCBmb3VuZA0KWzAwMDQuOTk0XSBkZXZpY2VfcXVlcnlfcGFydGl0aW9uX3NpemU6
IGZhaWxlZCB0byBvcGVuIHBhcnRpdGlvbiBzZDA6VVNQICENClswMDA1LjAwMF0gVVNQIHBhcnRp
dGlvbiByZWFkIGZhaWxlZCENClswMDA1LjAwNF0gYmxvYl9pbml0OiBibG9iLXBhcnRpdGlvbiBV
U1AgaGVhZGVyIHJlYWQgZmFpbGVkDQpbMDAwNS4wMDldIGFuZHJvaWRfYm9vdCBVbmFibGUgdG8g
dXBkYXRlIHJlY292ZXJ5IHBhcnRpdGlvbg0KWzAwMDUuMDE1XSBrZnNfZ2V0cGFydG5hbWU6IG5h
bWUgPSBMTlgNClswMDA1LjAxOF0gTG9hZGluZyBrZXJuZWwgZnJvbSBMTlgNClswMDA1LjExNl0g
bG9hZCBrZXJuZWwgZnJvbSBzdG9yYWdlDQpbMDAwNS4xMjhdIGRlY29tcHJlc3NvciBoYW5kbGVy
IG5vdCBmb3VuZA0KWzAwMDUuMTg1XSBTdWNjZXNzZnVsbHkgbG9hZGVkIGtlcm5lbCBhbmQgcmFt
ZGlzayBpbWFnZXMNClswMDA1LjE5MV0gZGlzcGxheV9yZXNvbHV0aW9uOiBObyBkaXNwbGF5IGlu
aXQNClswMDA1LjE5Nl0gRmFpbGVkIHRvIHJldHJpZXZlIGRpc3BsYXkgcmVzb2x1dGlvbg0KWzAw
MDUuMjAwXSBibXAgYmxvYiBpcyBub3QgbG9hZGVkIGFuZCBpbml0aWFsaXplZA0KWzAwMDUuMjA1
XSBGYWlsZWQgdG8gZGlzcGxheSBib290LWxvZ28NClswMDA1LjIwOF0gTkNLIGNhcnZlb3V0IG5v
dCBwcmVzZW50DQpbMDAwNS4yMTJdIFNraXBwaW5nIGR0c19vdmVycmlkZXMNClswMDA1LjIxNV0g
TkNLIGNhcnZlb3V0IG5vdCBwcmVzZW50DQpbMDAwNS4yMjVdIEZpbmQgL2kyY0A3MDAwYzAwMCdz
IGFsaWFzIGkyYzANClswMDA1LjIyOV0gZ2V0IGVlcHJvbSBhdCAxLWEwLCBzaXplIDI1NiwgdHlw
ZSAwDQpbMDAwNS4yMzddIEZpbmQgL2kyY0A3MDAwYzUwMCdzIGFsaWFzIGkyYzINClswMDA1LjI0
MV0gZ2V0IGVlcHJvbSBhdCAzLWEwLCBzaXplIDI1NiwgdHlwZSAwDQpbMDAwNS4yNDZdIGdldCBl
ZXByb20gYXQgMy1hZSwgc2l6ZSAyNTYsIHR5cGUgMA0KWzAwMDUuMjUwXSBwbV9pZHNfdXBkYXRl
OiBVcGRhdGluZyAxLGEwLCBzaXplIDI1NiwgdHlwZSAwDQpbMDAwNS4yNTVdIEkyQyBzbGF2ZSBu
b3Qgc3RhcnRlZA0KWzAwMDUuMjU4XSBJMkMgd3JpdGUgZmFpbGVkDQpbMDAwNS4yNjFdIFdyaXRp
bmcgb2Zmc2V0IGZhaWxlZA0KWzAwMDUuMjY0XSBlZXByb21faW5pdDogRUVQUk9NIHJlYWQgZmFp
bGVkDQpbMDAwNS4yNjhdIHBtX2lkc191cGRhdGU6IGVlcHJvbSBpbml0IGZhaWxlZA0KWzAwMDUu
MjcyXSBwbV9pZHNfdXBkYXRlOiBVcGRhdGluZyAzLGEwLCBzaXplIDI1NiwgdHlwZSAwDQpbMDAw
NS4zMDJdIHBtX2lkc191cGRhdGU6IFRoZSBwbSBib2FyZCBpZCBpcyAzNDQ4LTAwMDAtMjAwDQpb
MDAwNS4zMDldIEFkZGluZyBwbHVnaW4tbWFuYWdlci9pZHMvMzQ0OC0wMDAwLTIwMD0vaTJjQDcw
MDBjNTAwOm1vZHVsZUAweDUwDQpbMDAwNS4zMTddIHBtX2lkc191cGRhdGU6IHBtIGlkIHVwZGF0
ZSBzdWNjZXNzZnVsDQpbMDAwNS4zMjFdIHBtX2lkc191cGRhdGU6IFVwZGF0aW5nIDMsYWUsIHNp
emUgMjU2LCB0eXBlIDANClswMDA1LjM1MV0gcG1faWRzX3VwZGF0ZTogVGhlIHBtIGJvYXJkIGlk
IGlzIDM0NDktMDAwMC0yMDANClswMDA1LjM1OF0gQWRkaW5nIHBsdWdpbi1tYW5hZ2VyL2lkcy8z
NDQ5LTAwMDAtMjAwPS9pMmNANzAwMGM1MDA6bW9kdWxlQDB4NTcNClswMDA1LjM2Nl0gcG1faWRz
X3VwZGF0ZTogcG0gaWQgdXBkYXRlIHN1Y2Nlc3NmdWwNClswMDA1LjM5Nl0gZWVwcm9tX2dldF9t
YWM6IEVFUFJPTSBpbnZhbGlkIE1BQyBhZGRyZXNzIChhbGwgMHhmZikNClswMDA1LjQwMl0gc2hp
bV9lZXByb21fdXBkYXRlX21hYzoyNjc6IEZhaWxlZCB0byB1cGRhdGUgMCBNQUMgYWRkcmVzcyBp
biBEVEINClswMDA1LjQxMF0gZWVwcm9tX2dldF9tYWM6IEVFUFJPTSBpbnZhbGlkIE1BQyBhZGRy
ZXNzIChhbGwgMHhmZikNClswMDA1LjQxNl0gc2hpbV9lZXByb21fdXBkYXRlX21hYzoyNjc6IEZh
aWxlZCB0byB1cGRhdGUgMSBNQUMgYWRkcmVzcyBpbiBEVEINClswMDA1LjQyNV0gdXBkYXRpbmcg
L2Nob3Nlbi9udmlkaWEsZXRoZXJuZXQtbWFjIG5vZGUgMDA6MDQ6NGI6ZTY6Njk6ODMNClswMDA1
LjQzMV0gUGx1Z2luIE1hbmFnZXI6IFBhcnNlIE9ETSBkYXRhIDB4MDAwOTQwMDANClswMDA1LjQ0
Ml0gc2hpbV9jbWRsaW5lX2luc3RhbGw6IC9jaG9zZW4vYm9vdGFyZ3M6IGVhcmx5Y29uPXVhcnQ4
MjUwLG1taW8zMiwweDcwMDA2MDAwIA0KWzAwMDUuNDUwXSBBZGQgc2VyaWFsIG51bWJlcjoxNDIy
NDE5MTQ1Mjk2IGFzIERUIHByb3BlcnR5DQpbMDAwNS40NThdICJicG1wIiBkb2Vzbid0IGV4aXN0
LCBjcmVhdGluZyANClswMDA1LjQ2NF0gVXBkYXRlZCBicG1wIGluZm8gdG8gRFRCDQpbMDAwNS40
NjldIFVwZGF0ZWQgaW5pdHJkIGluZm8gdG8gRFRCDQpbMDAwNS40NzJdICJwcm9jLWJvYXJkIiBk
b2Vzbid0IGV4aXN0LCBjcmVhdGluZyANClswMDA1LjQ3OF0gVXBkYXRlZCBib2FyZCBpbmZvIHRv
IERUQg0KWzAwMDUuNDgxXSAicG11LWJvYXJkIiBkb2Vzbid0IGV4aXN0LCBjcmVhdGluZyANClsw
MDA1LjQ4N10gVXBkYXRlZCBib2FyZCBpbmZvIHRvIERUQg0KWzAwMDUuNDkwXSAiZGlzcGxheS1i
b2FyZCIgZG9lc24ndCBleGlzdCwgY3JlYXRpbmcgDQpbMDAwNS40OTZdIFVwZGF0ZWQgYm9hcmQg
aW5mbyB0byBEVEINClswMDA1LjQ5OV0gInJlc2V0IiBkb2Vzbid0IGV4aXN0LCBjcmVhdGluZyAN
ClswMDA1LjUwNF0gVXBkYXRlZCByZXNldCBpbmZvIHRvIERUQg0KWzAwMDUuNTA3XSBkaXNwbGF5
X2NvbnNvbGVfaW9jdGw6IE5vIGRpc3BsYXkgaW5pdA0KWzAwMDUuNTEyXSBkaXNwbGF5X2NvbnNv
bGVfaW9jdGw6IE5vIGRpc3BsYXkgaW5pdA0KWzAwMDUuNTE2XSBkaXNwbGF5X2NvbnNvbGVfaW9j
dGw6IE5vIGRpc3BsYXkgaW5pdA0KWzAwMDUuNTIxXSBDbWRsaW5lOiB0ZWdyYWlkPTIxLjEuMi4w
LjAgZGRyX2RpZT00MDk2TUAyMDQ4TSBzZWN0aW9uPTUxMk0gbWVtdHlwZT0wIHZwcl9yZXNpemUg
dXNiX3BvcnRfb3duZXJfaW5mbz0wIGxhbmVfb3duZXJfaW5mbz0wIGVtY19tYXhfZHZmcz0wIHRv
dWNoX2lkPTBANjMgdmlkZW89dGVncmFmYiBub19jb25zb2xlX3N1c3BlbmQ9MSBjb25zb2xlPXR0
eVMwLDExNTIwMG44IGRlYnVnX3VhcnRwb3J0PWxzcG9ydCwyIGVhcmx5cHJpbnRrPXVhcnQ4MjUw
LTMyYml0LDB4NzAwMDYwMDAgbWF4Y3B1cz00IHVzYmNvcmUub2xkX3NjaGVtZV9maXJzdD0xIGxw
MF92ZWM9MHgxMDAwQDB4ZmY3ODAwMDAgY29yZV9lZHBfbXY9MTA3NSBjb3JlX2VkcF9tYT00MDAw
IA0KWzAwMDUuNTU1XSBEVEIgY21kbGluZTogZWFybHljb249dWFydDgyNTAsbW1pbzMyLDB4NzAw
MDYwMDAgDQpbMDAwNS41NjBdIGJvb3QgaW1hZ2UgY21kbGluZTogcm9vdD0vZGV2L21tY2JsazBw
MSBydyByb290d2FpdCByb290ZnN0eXBlPWV4dDQgY29uc29sZT10dHlTMCwxMTUyMDBuOCBjb25z
b2xlPXR0eTAgZmJjb249bWFwOjAgbmV0LmlmbmFtZXM9MCANClswMDA1LjU3M10gVXBkYXRlZCBi
b290YXJnIGluZm8gdG8gRFRCDQpbMDAwNS41NzddIEFkZGluZyB1dWlkIDAwMDAwMDAxNjQ0NDk1
ODEwNDAwMDAwMDA4MDU4MTgwIHRvIERUDQpbMDAwNS41ODNdIEFkZGluZyBla3MgaW5mbyAwIHRv
IERUDQpbMDAwNS41ODhdIFdBUk5JTkc6IEZhaWxlZCB0byBwYXNzIE5TIERSQU0gcmFuZ2VzIHRv
IFRPUywgZXJyOiAtNw0KWzAwMDUuNTk0XSBVcGRhdGVkIG1lbW9yeSBpbmZvIHRvIERUQg0KWzAw
MDUuNjAwXSBVcGRhdGVkIHN5c3RlbS1scDAtZGlzYWJsZSBpbmZvIHRvIERUQg0KWzAwMDUuNjA4
XSBzZXQgdmRkX2NvcmUgdm9sdGFnZSB0byAxMDc1IG12DQpbMDAwNS42MTJdIHNldHRpbmcgJ3Zk
ZC1jb3JlJyByZWd1bGF0b3IgdG8gMTA3NTAwMCBtaWNybyB2b2x0cw0KWzAwMDUuNjI5XSBGb3Vu
ZCBzZWN1cmUtcG1jOyBkaXNhYmxlIEJQTVANCg0KDQpVLUJvb3QgMjAxNi4wNy1nZTZkYTA5M2Jl
MyAoSnVuIDI1IDIwMjAgLSAyMToxODowOCAtMDcwMCkNCg0KVEVHUkEyMTANCk1vZGVsOiBOVklE
SUEgUDM0NTAtUG9yZw0KQm9hcmQ6IE5WSURJQSBQMzQ1MC1QT1JHDQpEUkFNOiAgNCBHaUINCk1N
QzogICBUZWdyYSBTRC9NTUM6IDAsIFRlZ3JhIFNEL01NQzogMQ0KU0Y6IERldGVjdGVkIE1YMjVV
MzIzNUYgd2l0aCBwYWdlIHNpemUgMjU2IEJ5dGVzLCBlcmFzZSBzaXplIDQgS2lCLCB0b3RhbCA0
IE1pQg0KSW46ICAgIHNlcmlhbA0KT3V0OiAgIHNlcmlhbA0KRXJyOiAgIHNlcmlhbA0KTmV0OiAg
IE5vIGV0aGVybmV0IGZvdW5kLg0KSGl0IGFueSBrZXkgdG8gc3RvcCBhdXRvYm9vdDogIDEgIDAg
DQpUZWdyYTIxMCAoUDM0NTAtUG9yZykgIyA=
--=_48a69f2ecb1c59fcda7c31583f854280--


From xen-devel-bounces@lists.xenproject.org Thu Jul 30 04:32:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jul 2020 04:32:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k10Er-0003l4-LQ; Thu, 30 Jul 2020 04:32:17 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6uMI=BJ=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1k10Eq-0003ke-T0
 for xen-devel@lists.xenproject.org; Thu, 30 Jul 2020 04:32:16 +0000
X-Inumbo-ID: a2513fdc-d21d-11ea-aa9f-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a2513fdc-d21d-11ea-aa9f-12813bfff9fa;
 Thu, 30 Jul 2020 04:32:10 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=t6KGXJpP5EnxK53Cu8vWRZnx8sauOTKFm7wwf+jqqLQ=; b=HGFsW8mGTO8CD8p9WOTAFN1fV
 RgGGvj94O3o66r3b5oyaLNBA//oZr6SoXOglj9P7sSLG8a2Du0oIdSznSWLp7nYLiKQtbi0gjoji3
 xuc1GHw3+cDlnsTfxixtUOZn1bAdj7C44kdkKFMxemIJotE8VQF7MuTTc6wwkMAm4OxwA=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1k10Ej-0005Ls-Ut; Thu, 30 Jul 2020 04:32:09 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1k10Ej-00017K-J6; Thu, 30 Jul 2020 04:32:09 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1k10Ej-0001s6-IT; Thu, 30 Jul 2020 04:32:09 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-152291-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 152291: regressions - FAIL
X-Osstest-Failures: xen-unstable-smoke:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
 xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=64219fa179c3e48adad12bfce3f6b3f1596cccbf
X-Osstest-Versions-That: xen=b071ec25e85c4aacf3da59e5258cda0b1c4df45d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 30 Jul 2020 04:32:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 152291 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/152291/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-xl-xsm       7 xen-boot                 fail REGR. vs. 152269

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  64219fa179c3e48adad12bfce3f6b3f1596cccbf
baseline version:
 xen                  b071ec25e85c4aacf3da59e5258cda0b1c4df45d

Last test of basis   152269  2020-07-28 19:05:32 Z    1 days
Testing same since   152288  2020-07-29 19:01:00 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Fam Zheng <famzheng@amazon.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 64219fa179c3e48adad12bfce3f6b3f1596cccbf
Author: Fam Zheng <famzheng@amazon.com>
Date:   Wed Jul 29 18:51:45 2020 +0100

    x86/cpuid: Fix APIC bit clearing
    
    The bug is obvious here, other places in this function used
    "cpufeat_mask" correctly.
    
    Fixed: b648feff8ea2 ("xen/x86: Improvements to in-hypervisor cpuid sanity checks")
    Signed-off-by: Fam Zheng <famzheng@amazon.com>
    Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Thu Jul 30 07:34:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jul 2020 07:34:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k134v-000316-LO; Thu, 30 Jul 2020 07:34:13 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6uMI=BJ=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1k134u-00030m-GO
 for xen-devel@lists.xenproject.org; Thu, 30 Jul 2020 07:34:12 +0000
X-Inumbo-ID: 0bb981a0-d237-11ea-aaa8-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0bb981a0-d237-11ea-aaa8-12813bfff9fa;
 Thu, 30 Jul 2020 07:34:05 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=VfhikoPXQoy9WIT0IgPopx22cG6p5rTq1s2FVQMUddI=; b=NmyixKXNwcDmbo9Vq5UMWy8K/
 Md5Nw+7PFoSCmOs2LTGlwMuRcAjUpXCkr2L4PmGwBurjTdDobxwAV42KWUUpWNZCLZJLzKDjVSj6n
 APnQ147KrzmiOkSOhD9xumVhzUrvgbNKk4676yJRUmT5PHxq3NAnpG4Td4K/51uAxFfqw=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1k134m-0001fD-I0; Thu, 30 Jul 2020 07:34:04 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1k134m-0002FY-4R; Thu, 30 Jul 2020 07:34:04 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1k134m-0006W7-3x; Thu, 30 Jul 2020 07:34:04 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-152296-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 152296: regressions - FAIL
X-Osstest-Failures: libvirt:build-i386-libvirt:libvirt-build:fail:regression
 libvirt:build-arm64-libvirt:libvirt-build:fail:regression
 libvirt:build-amd64-libvirt:libvirt-build:fail:regression
 libvirt:build-armhf-libvirt:libvirt-build:fail:regression
 libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
 libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
X-Osstest-Versions-This: libvirt=ffa7fab4406617e33e30dd2f5246f6e3923da863
X-Osstest-Versions-That: libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 30 Jul 2020 07:34:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 152296 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/152296/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              ffa7fab4406617e33e30dd2f5246f6e3923da863
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z   20 days
Failing since        151818  2020-07-11 04:18:52 Z   19 days   20 attempts
Testing same since   152296  2020-07-30 04:20:20 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Bastien Orivel <bastien.orivel@diateam.net>
  Bihong Yu <yubihong@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  Ján Tomko <jtomko@redhat.com>
  Laine Stump <laine@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Michal Privoznik <mprivozn@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Ryan Schmidt <git@ryandesign.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Weblate <noreply@weblate.org>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 3081 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Jul 30 08:08:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jul 2020 08:08:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k13bV-0006AM-Pb; Thu, 30 Jul 2020 08:07:53 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Gz/s=BJ=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1k13bU-0006AH-Uq
 for xen-devel@lists.xenproject.org; Thu, 30 Jul 2020 08:07:53 +0000
X-Inumbo-ID: c35a98c2-d23b-11ea-8d1a-bc764e2007e4
Received: from mail-wm1-x32f.google.com (unknown [2a00:1450:4864:20::32f])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c35a98c2-d23b-11ea-8d1a-bc764e2007e4;
 Thu, 30 Jul 2020 08:07:51 +0000 (UTC)
Received: by mail-wm1-x32f.google.com with SMTP id q76so4122711wme.4
 for <xen-devel@lists.xenproject.org>; Thu, 30 Jul 2020 01:07:51 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
 :mime-version:content-transfer-encoding:content-language
 :thread-index; bh=djLqMmxDwR/RXrWm+o7+6bPfbtjq4nhOVRdani7IgtY=;
 b=bayaqW4vEnlo8RE0g507oIcfbLr+HepdiTbrh3dr8k5XN+Kkpyu7kAq5Iw7hlahCiK
 N2F02VRixxPXA/wpAnvK+QSW5X7C0wanr7x5JL7aKDgC7tGL+PGs4WEO90zwRThqA5jD
 uIfhQ4oCMwUFpEX3cVetk9qcDDbe8X6mPFCcSNoiy0ST2im6JdPLjH99ZGtqkqu7VU8z
 QvC6uxfv5nWWlutM3Nxe1bsaNCZLT8UyDhS9bXG31rLKy/WTnRkFfpHqzysT7Kcd9CrU
 /+yjUE35F4c9nwmVE39JpfMkLOJFuHrMX86RHIlUjXxNexF3IiKKVmeeIDheR2LRsP5R
 ZQpw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
 :subject:date:message-id:mime-version:content-transfer-encoding
 :content-language:thread-index;
 bh=djLqMmxDwR/RXrWm+o7+6bPfbtjq4nhOVRdani7IgtY=;
 b=Zhxz+RWeczQvRmKg20+Fxne1cJ8ObeCADaRBC37ufqDDGYLaMt0mVD2aSagQvFtUms
 EkoNfT1FjjNxBCMkvR51whCEORyTYnz9ZPE8XPi0ASg/h0buyzqRXVeFsGu81I0SHaCL
 fk1MJbKL3bTnziUHSmYVmCCiP7eLQurqHhEPIW+S/WLQTzIBAuXYqJdy2qZdu3PafoXZ
 v/8SiI9ardcKCTaaKdtGlAlIAaLaI+jdF8R4MfuGIRqbDUzhdTPATN2GT2Q9rPz2E2fL
 jmIcZYeFF/7R3ITyU2RdQGwm0gB553WGDHX3XB6fu4qmxosPu56ftoJYQpZz6OutMYns
 jaWg==
X-Gm-Message-State: AOAM532P3IilJ80wdbgnXqZUI1KSiFULaGC1jM4cbVl7pgRts2Efd5U8
 uc/JJGX3b5Dt/UBugQsQI+4=
X-Google-Smtp-Source: ABdhPJz/4FeUSr9s0fhPxd49EFFihOdxcEv79My23egivHLllJ3Rrv30e9ObXctY7n0ycetcMbyAaQ==
X-Received: by 2002:a05:600c:2209:: with SMTP id
 z9mr12012195wml.70.1596096470656; 
 Thu, 30 Jul 2020 01:07:50 -0700 (PDT)
Received: from CBGR90WXYV0 ([2a00:23c5:5785:9a01:ad9a:ab78:5748:a7ec])
 by smtp.gmail.com with ESMTPSA id k1sm8982272wrw.91.2020.07.30.01.07.49
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Thu, 30 Jul 2020 01:07:50 -0700 (PDT)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
To: "'Andrew Cooper'" <andrew.cooper3@citrix.com>,
 "'Xen-devel'" <xen-devel@lists.xenproject.org>
References: <20200728113712.22966-1-andrew.cooper3@citrix.com>
 <20200728113712.22966-2-andrew.cooper3@citrix.com>
In-Reply-To: <20200728113712.22966-2-andrew.cooper3@citrix.com>
Subject: RE: [PATCH 1/5] xen/memory: Introduce CONFIG_ARCH_ACQUIRE_RESOURCE
Date: Thu, 30 Jul 2020 09:02:37 +0100
Message-ID: <002601d66647$ca8567e0$5f9037a0$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="utf-8"
Content-Transfer-Encoding: quoted-printable
X-Mailer: Microsoft Outlook 16.0
Content-Language: en-gb
Thread-Index: AQF6ExwjYkgZ+OK4ROtT1vtJIEnjiAIw/yGXqcbg+tA=
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Reply-To: paul@xen.org
Cc: 'Stefano Stabellini' <sstabellini@kernel.org>,
 'Julien Grall' <julien@xen.org>, 'Wei Liu' <wl@xen.org>,
 =?utf-8?Q?'Micha=C5=82_Leszczy=C5=84ski'?= <michal.leszczynski@cert.pl>,
 'Jan Beulich' <JBeulich@suse.com>,
 'Hubert Jasudowicz' <hubert.jasudowicz@cert.pl>,
 'Volodymyr Babchuk' <Volodymyr_Babchuk@epam.com>,
 =?utf-8?Q?'Roger_Pau_Monn=C3=A9'?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> -----Original Message-----
> From: Andrew Cooper <andrew.cooper3@citrix.com>
> Sent: 28 July 2020 12:37
> To: Xen-devel <xen-devel@lists.xenproject.org>
> Cc: Andrew Cooper <andrew.cooper3@citrix.com>; Jan Beulich <JBeulich@suse.com>; Wei Liu <wl@xen.org>;
> Roger Pau Monné <roger.pau@citrix.com>; Stefano Stabellini <sstabellini@kernel.org>; Julien Grall
> <julien@xen.org>; Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>; Paul Durrant <paul@xen.org>; Michał
> Leszczyński <michal.leszczynski@cert.pl>; Hubert Jasudowicz <hubert.jasudowicz@cert.pl>
> Subject: [PATCH 1/5] xen/memory: Introduce CONFIG_ARCH_ACQUIRE_RESOURCE
>
> New architectures shouldn't be forced to implement no-op stubs for unused
> functionality.
>
> Introduce CONFIG_ARCH_ACQUIRE_RESOURCE which can be opted in to, and provide
> compatibility logic in xen/mm.h
>
> No functional change.

Code-wise, it looks fine, so...

Reviewed-by: Paul Durrant <paul@xen.org>

...but ...

>=20
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> ---
> CC: Jan Beulich <JBeulich@suse.com>
> CC: Wei Liu <wl@xen.org>
> CC: Roger Pau Monn=C3=A9 <roger.pau@citrix.com>
> CC: Stefano Stabellini <sstabellini@kernel.org>
> CC: Julien Grall <julien@xen.org>
> CC: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
> CC: Paul Durrant <paul@xen.org>
> CC: Michał Leszczyński <michal.leszczynski@cert.pl>
> CC: Hubert Jasudowicz <hubert.jasudowicz@cert.pl>
> ---
>  xen/arch/x86/Kconfig     | 1 +
>  xen/common/Kconfig       | 3 +++
>  xen/include/asm-arm/mm.h | 8 --------
>  xen/include/xen/mm.h     | 9 +++++++++
>  4 files changed, 13 insertions(+), 8 deletions(-)
>
> diff --git a/xen/arch/x86/Kconfig b/xen/arch/x86/Kconfig
> index a636a4bb1e..e7644a0a9d 100644
> --- a/xen/arch/x86/Kconfig
> +++ b/xen/arch/x86/Kconfig
> @@ -6,6 +6,7 @@ config X86
>  	select ACPI
>  	select ACPI_LEGACY_TABLES_LOOKUP
>  	select ARCH_SUPPORTS_INT128
> +	select ARCH_ACQUIRE_RESOURCE

... I do wonder whether 'HAS_ACQUIRE_RESOURCE' is a better and more descriptive name.

>  	select COMPAT
>  	select CORE_PARKING
>  	select HAS_ALTERNATIVE
> diff --git a/xen/common/Kconfig b/xen/common/Kconfig
> index 15e3b79ff5..593459ea6e 100644
> --- a/xen/common/Kconfig
> +++ b/xen/common/Kconfig
> @@ -22,6 +22,9 @@ config GRANT_TABLE
>
>  	  If unsure, say Y.
>
> +config ARCH_ACQUIRE_RESOURCE
> +	bool
> +
>  config HAS_ALTERNATIVE
>  	bool
>
> diff --git a/xen/include/asm-arm/mm.h b/xen/include/asm-arm/mm.h
> index f8ba49b118..0b7de3102e 100644
> --- a/xen/include/asm-arm/mm.h
> +++ b/xen/include/asm-arm/mm.h
> @@ -358,14 +358,6 @@ static inline void put_page_and_type(struct =
page_info *page)
>=20
>  void clear_and_clean_page(struct page_info *page);
>=20
> -static inline
> -int arch_acquire_resource(struct domain *d, unsigned int type, =
unsigned int id,
> -                          unsigned long frame, unsigned int =
nr_frames,
> -                          xen_pfn_t mfn_list[])
> -{
> -    return -EOPNOTSUPP;
> -}
> -
>  unsigned int arch_get_dma_bitsize(void);
>=20
>  #endif /*  __ARCH_ARM_MM__ */
> diff --git a/xen/include/xen/mm.h b/xen/include/xen/mm.h
> index 1061765bcd..1b2c1f6b32 100644
> --- a/xen/include/xen/mm.h
> +++ b/xen/include/xen/mm.h
> @@ -685,4 +685,13 @@ static inline void put_page_alloc_ref(struct page_info *page)
>      }
>  }
>
> +#ifndef CONFIG_ARCH_ACQUIRE_RESOURCE
> +static inline int arch_acquire_resource(
> +    struct domain *d, unsigned int type, unsigned int id, unsigned long frame,
> +    unsigned int nr_frames, xen_pfn_t mfn_list[])
> +{
> +    return -EOPNOTSUPP;
> +}
> +#endif /* !CONFIG_ARCH_ACQUIRE_RESOURCE */
> +
>  #endif /* __XEN_MM_H__ */
> --
> 2.11.0




From xen-devel-bounces@lists.xenproject.org Thu Jul 30 08:19:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jul 2020 08:19:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k13mf-00076X-Te; Thu, 30 Jul 2020 08:19:25 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Gz/s=BJ=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1k13me-00076S-Py
 for xen-devel@lists.xenproject.org; Thu, 30 Jul 2020 08:19:24 +0000
X-Inumbo-ID: 5fdd718d-d23d-11ea-8d1a-bc764e2007e4
Received: from mail-wr1-x443.google.com (unknown [2a00:1450:4864:20::443])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5fdd718d-d23d-11ea-8d1a-bc764e2007e4;
 Thu, 30 Jul 2020 08:19:23 +0000 (UTC)
Received: by mail-wr1-x443.google.com with SMTP id r2so18926659wrs.8
 for <xen-devel@lists.xenproject.org>; Thu, 30 Jul 2020 01:19:23 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
 :mime-version:content-transfer-encoding:content-language
 :thread-index; bh=KzFYpcmZnZRqz7KZaIRB3nffNHWL4lNaRYDakxYMnN0=;
 b=uCP8Gbl7rExrDDTtQS5fmLvJC6XnqiNnE2b3Qc+r2I3fcwZ6MFiy07wYkj9aHeXOF6
 8DiNctKwmDtkjbXWPRjVmmcci0N4ETl83IWWJDoUtgaZ+XArLtNlgjQwdfoT4QSn1oCy
 RHMa4wAs4bnAhkCRZ7RqzPibY8f42wh9h3zEttqTsRtAyNWPL480WTonR7hteLvcLXJr
 /Y3GxVBoEqmtZoj+IPyuY7woHEjs7YVDebJXDzGfvIB4sCqv+kgL7BvxYdwg3sVRvKJY
 D18/EQCn3aw387pfUAqL4bLKdfR2Bn2MSP8hOLHbNg2Dv+ndKY0gda/QtQ/HTi2QK2vT
 w1DQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
 :subject:date:message-id:mime-version:content-transfer-encoding
 :content-language:thread-index;
 bh=KzFYpcmZnZRqz7KZaIRB3nffNHWL4lNaRYDakxYMnN0=;
 b=H8gaIaF+BiQTNOIyOS3xgs7VEx5mOMzL/g+asuIGJLGgRak1s118lNdJsMMGT1y7Ku
 M0w4qrN3edHon91r/kQVkrlYADd4ZM9Lk8wTt5jrb/N4d0nt3dcg9+79gjxRV2gp8vLT
 /ClcvQi6IMFm58feo+SUhadV8VxVusrtc1t86M23/uzAKpoGWupkK3K4gxYm8WgqhFbl
 PmxktVDeQoA6tLsIrHI66lfLd6wp49XHy9U4d4j2UwZJCzpfJR+hpf/BYhE6Q7MKDozn
 YmaNYMmc2VqYAypJEZFvpEI7DFH6hN+K+0Tjy/eMIqfXFjKO3oK3pgkLKiGlHH3bSXfs
 hX1Q==
X-Gm-Message-State: AOAM532dQbNcrUP9UfIcWLLgKwqYPyUhA9xOkLBQC4m/K6xEU3hoK9HM
 lf7EZvcCz0TCr7e0bHhvr6s=
X-Google-Smtp-Source: ABdhPJwZ1HPqWX2xvE1POlsgMsUmeK7/jWnfAqCaJJdyH5EdXLKM/t5rHdgF7mtmcFN12r0LWCzxsA==
X-Received: by 2002:adf:e550:: with SMTP id z16mr16978855wrm.329.1596097162707; 
 Thu, 30 Jul 2020 01:19:22 -0700 (PDT)
Received: from CBGR90WXYV0 ([2a00:23c5:5785:9a01:ad9a:ab78:5748:a7ec])
 by smtp.gmail.com with ESMTPSA id g188sm9684768wma.5.2020.07.30.01.19.21
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Thu, 30 Jul 2020 01:19:22 -0700 (PDT)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
To: "'Andrew Cooper'" <andrew.cooper3@citrix.com>,
 "'Xen-devel'" <xen-devel@lists.xenproject.org>
References: <20200728113712.22966-1-andrew.cooper3@citrix.com>
 <20200728113712.22966-3-andrew.cooper3@citrix.com>
In-Reply-To: <20200728113712.22966-3-andrew.cooper3@citrix.com>
Subject: RE: [PATCH 2/5] xen/gnttab: Rework resource acquisition
Date: Thu, 30 Jul 2020 09:14:09 +0100
Message-ID: <002801d66649$67098050$351c80f0$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="utf-8"
Content-Transfer-Encoding: quoted-printable
X-Mailer: Microsoft Outlook 16.0
Content-Language: en-gb
Thread-Index: AQF6ExwjYkgZ+OK4ROtT1vtJIEnjiAJ9GaFOqcSAvZA=
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Reply-To: paul@xen.org
Cc: 'Hubert Jasudowicz' <hubert.jasudowicz@cert.pl>,
 'Stefano Stabellini' <sstabellini@kernel.org>, 'Julien Grall' <julien@xen.org>,
 'Wei Liu' <wl@xen.org>, 'Konrad Rzeszutek Wilk' <konrad.wilk@oracle.com>,
 'George Dunlap' <George.Dunlap@eu.citrix.com>,
 =?utf-8?Q?'Micha=C5=82_Leszczy=C5=84ski'?= <michal.leszczynski@cert.pl>,
 'Jan Beulich' <JBeulich@suse.com>, 'Ian Jackson' <ian.jackson@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> -----Original Message-----
> From: Andrew Cooper <andrew.cooper3@citrix.com>
> Sent: 28 July 2020 12:37
> To: Xen-devel <xen-devel@lists.xenproject.org>
> Cc: Andrew Cooper <andrew.cooper3@citrix.com>; George Dunlap <George.Dunlap@eu.citrix.com>; Ian
> Jackson <ian.jackson@citrix.com>; Jan Beulich <JBeulich@suse.com>; Konrad Rzeszutek Wilk
> <konrad.wilk@oracle.com>; Stefano Stabellini <sstabellini@kernel.org>; Wei Liu <wl@xen.org>; Julien
> Grall <julien@xen.org>; Paul Durrant <paul@xen.org>; Michał Leszczyński <michal.leszczynski@cert.pl>;
> Hubert Jasudowicz <hubert.jasudowicz@cert.pl>
> Subject: [PATCH 2/5] xen/gnttab: Rework resource acquisition
>
> The existing logic doesn't function in the general case for mapping a guest's
> grant table, due to the arbitrary 32 frame limit, and the default grant table
> limit being 64.
>
> In order to start addressing this, rework the existing grant table logic by
> implementing a single gnttab_acquire_resource().  This is far more efficient
> than the previous acquire_grant_table() in memory.c because it doesn't take
> the grant table write lock, and attempt to grow the table, for every single
> frame.
>

But that should not have happened before, because the code deliberately iterates backwards: by starting with the last frame, it grows the table at most once. (I agree that dropping and re-acquiring the lock every time was sub-optimal.)

> The new gnttab_acquire_resource() function subsumes the previous two
> gnttab_get_{shared,status}_frame() helpers.
>
> No functional change.
>
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> ---
> CC: George Dunlap <George.Dunlap@eu.citrix.com>
> CC: Ian Jackson <ian.jackson@citrix.com>
> CC: Jan Beulich <JBeulich@suse.com>
> CC: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> CC: Stefano Stabellini <sstabellini@kernel.org>
> CC: Wei Liu <wl@xen.org>
> CC: Julien Grall <julien@xen.org>
> CC: Paul Durrant <paul@xen.org>
> CC: Michał Leszczyński <michal.leszczynski@cert.pl>
> CC: Hubert Jasudowicz <hubert.jasudowicz@cert.pl>
> ---
>  xen/common/grant_table.c      | 93 ++++++++++++++++++++++++++++++-------------
>  xen/common/memory.c           | 42 +------------------
>  xen/include/xen/grant_table.h | 19 ++++-----
>  3 files changed, 75 insertions(+), 79 deletions(-)
>
> diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
> index 9f0cae52c0..122d1e7596 100644
> --- a/xen/common/grant_table.c
> +++ b/xen/common/grant_table.c
> @@ -4013,6 +4013,72 @@ static int gnttab_get_shared_frame_mfn(struct domain *d,
>      return 0;
>  }
>
> +int gnttab_acquire_resource(
> +    struct domain *d, unsigned int id, unsigned long frame,
> +    unsigned int nr_frames, xen_pfn_t mfn_list[])
> +{
> +    struct grant_table *gt = d->grant_table;
> +    unsigned int i = nr_frames, tot_frames;
> +    void **vaddrs;
> +    int rc = 0;
> +
> +    /* Input sanity. */
> +    if ( !nr_frames )
> +        return -EINVAL;

This was not an error before. Does mapping 0 frames really need to be a failure?

> +
> +    /* Overflow checks */
> +    if ( frame + nr_frames < frame )
> +        return -EINVAL;
> +
> +    tot_frames = frame + nr_frames;

That name is confusing. 'last_frame' perhaps (and then make the -1 implicit)?

> +    if ( tot_frames != frame + nr_frames )
> +        return -EINVAL;
> +
> +    /* Grow table if necessary. */
> +    grant_write_lock(gt);
> +    switch ( id )
> +    {
> +        mfn_t tmp;
> +
> +    case XENMEM_resource_grant_table_id_shared:
> +        rc = gnttab_get_shared_frame_mfn(d, tot_frames - 1, &tmp);
> +        break;
> +
> +    case XENMEM_resource_grant_table_id_status:
> +        if ( gt->gt_version != 2 )
> +        {
> +    default:
> +            rc = -EINVAL;
> +            break;
> +        }
> +        rc = gnttab_get_status_frame_mfn(d, tot_frames - 1, &tmp);
> +        break;
> +    }
> +
> +    /* Any errors from growing the table? */
> +    if ( rc )
> +        goto out;
> +
> +    switch ( id )
> +    {
> +    case XENMEM_resource_grant_table_id_shared:
> +        vaddrs = gt->shared_raw;
> +        break;
> +
> +    case XENMEM_resource_grant_table_id_status:
> +        vaddrs = (void **)gt->status;

Erk. Could we avoid this casting nastiness by putting a loop in each switch case? I know it could be considered code duplication, but this is a bit icky.

> +        break;
> +    }
> +
> +    for ( i = 0; i < nr_frames; ++i )
> +        mfn_list[i] = virt_to_mfn(vaddrs[frame + i]);
> +
> + out:
> +    grant_write_unlock(gt);

Since you deliberately grew the table first, could you not drop the write lock and acquire a read lock before looping over the frames?

  Paul

> +
> +    return rc;
> +}
> +
>  int gnttab_map_frame(struct domain *d, unsigned long idx, gfn_t gfn, mfn_t *mfn)
>  {
>      int rc = 0;
> @@ -4047,33 +4113,6 @@ int gnttab_map_frame(struct domain *d, unsigned long idx, gfn_t gfn, mfn_t
> *mfn)
>      return rc;
>  }
>
> -int gnttab_get_shared_frame(struct domain *d, unsigned long idx,
> -                            mfn_t *mfn)
> -{
> -    struct grant_table *gt = d->grant_table;
> -    int rc;
> -
> -    grant_write_lock(gt);
> -    rc = gnttab_get_shared_frame_mfn(d, idx, mfn);
> -    grant_write_unlock(gt);
> -
> -    return rc;
> -}
> -
> -int gnttab_get_status_frame(struct domain *d, unsigned long idx,
> -                            mfn_t *mfn)
> -{
> -    struct grant_table *gt = d->grant_table;
> -    int rc;
> -
> -    grant_write_lock(gt);
> -    rc = (gt->gt_version == 2) ?
> -        gnttab_get_status_frame_mfn(d, idx, mfn) : -EINVAL;
> -    grant_write_unlock(gt);
> -
> -    return rc;
> -}
> -
>  static void gnttab_usage_print(struct domain *rd)
>  {
>      int first = 1;
> diff --git a/xen/common/memory.c b/xen/common/memory.c
> index 714077c1e5..dc3a7248e3 100644
> --- a/xen/common/memory.c
> +++ b/xen/common/memory.c
> @@ -1007,44 +1007,6 @@ static long xatp_permission_check(struct domain *d, unsigned int space)
>      return xsm_add_to_physmap(XSM_TARGET, current->domain, d);
>  }
>
> -static int acquire_grant_table(struct domain *d, unsigned int id,
> -                               unsigned long frame,
> -                               unsigned int nr_frames,
> -                               xen_pfn_t mfn_list[])
> -{
> -    unsigned int i = nr_frames;
> -
> -    /* Iterate backwards in case table needs to grow */
> -    while ( i-- != 0 )
> -    {
> -        mfn_t mfn = INVALID_MFN;
> -        int rc;
> -
> -        switch ( id )
> -        {
> -        case XENMEM_resource_grant_table_id_shared:
> -            rc = gnttab_get_shared_frame(d, frame + i, &mfn);
> -            break;
> -
> -        case XENMEM_resource_grant_table_id_status:
> -            rc = gnttab_get_status_frame(d, frame + i, &mfn);
> -            break;
> -
> -        default:
> -            rc = -EINVAL;
> -            break;
> -        }
> -
> -        if ( rc )
> -            return rc;
> -
> -        ASSERT(!mfn_eq(mfn, INVALID_MFN));
> -        mfn_list[i] = mfn_x(mfn);
> -    }
> -
> -    return 0;
> -}
> -
>  static int acquire_resource(
>      XEN_GUEST_HANDLE_PARAM(xen_mem_acquire_resource_t) arg)
>  {
> @@ -1091,8 +1053,8 @@ static int acquire_resource(
>      switch ( xmar.type )
>      {
>      case XENMEM_resource_grant_table:
> -        rc = acquire_grant_table(d, xmar.id, xmar.frame, xmar.nr_frames,
> -                                 mfn_list);
> +        rc = gnttab_acquire_resource(d, xmar.id, xmar.frame, xmar.nr_frames,
> +                                     mfn_list);
>          break;
>
>      default:
> diff --git a/xen/include/xen/grant_table.h b/xen/include/xen/grant_table.h
> index 98603604b8..5a2c75b880 100644
> --- a/xen/include/xen/grant_table.h
> +++ b/xen/include/xen/grant_table.h
> @@ -56,10 +56,10 @@ int mem_sharing_gref_to_gfn(struct grant_table *gt, grant_ref_t ref,
>
>  int gnttab_map_frame(struct domain *d, unsigned long idx, gfn_t gfn,
>                       mfn_t *mfn);
> -int gnttab_get_shared_frame(struct domain *d, unsigned long idx,
> -                            mfn_t *mfn);
> -int gnttab_get_status_frame(struct domain *d, unsigned long idx,
> -                            mfn_t *mfn);
> +
> +int gnttab_acquire_resource(
> +    struct domain *d, unsigned int id, unsigned long frame,
> +    unsigned int nr_frames, xen_pfn_t mfn_list[]);
>=20
>  #else
>
> @@ -93,14 +93,9 @@ static inline int gnttab_map_frame(struct domain *d, unsigned long idx,
>      return -EINVAL;
>  }
>
> -static inline int gnttab_get_shared_frame(struct domain *d, unsigned long idx,
> -                                          mfn_t *mfn)
> -{
> -    return -EINVAL;
> -}
> -
> -static inline int gnttab_get_status_frame(struct domain *d, unsigned long idx,
> -                                          mfn_t *mfn)
> +static inline int gnttab_acquire_resource(
> +    struct domain *d, unsigned int id, unsigned long frame,
> +    unsigned int nr_frames, xen_pfn_t mfn_list[])
>  {
>      return -EINVAL;
>  }
> --
> 2.11.0




From xen-devel-bounces@lists.xenproject.org Thu Jul 30 08:25:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jul 2020 08:25:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k13s2-0007wf-Ml; Thu, 30 Jul 2020 08:24:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Gz/s=BJ=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1k13s1-0007wa-C3
 for xen-devel@lists.xenproject.org; Thu, 30 Jul 2020 08:24:57 +0000
X-Inumbo-ID: 263902b0-d23e-11ea-8d1a-bc764e2007e4
Received: from mail-wm1-x343.google.com (unknown [2a00:1450:4864:20::343])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 263902b0-d23e-11ea-8d1a-bc764e2007e4;
 Thu, 30 Jul 2020 08:24:56 +0000 (UTC)
Received: by mail-wm1-x343.google.com with SMTP id 184so5397644wmb.0
 for <xen-devel@lists.xenproject.org>; Thu, 30 Jul 2020 01:24:56 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
 :mime-version:content-transfer-encoding:content-language
 :thread-index; bh=gkmi/TXVs4ZmKvtSxhf+eynPuMipQNvbn+uYn5T7hX0=;
 b=mGPBjbxeVfIIOf9/V1Yn2FIMYIDBTMV1k45ZZcvJJTGvAVB94XRRsXKPOQCEsTMb/5
 /HU7hhf70aafn211z/rC3xrygc1Zr1K/knCZMEdXWVamLndWBXjacNBiZiFj0gpK1rYg
 HTaWLW1Ndccf5xxnH6+awGVC7MOCeB/8/px2eLQ/8azzppDUUP29FWZ8lTgsG5qACR8t
 BGYEM4GEj/L5eHY5s0ASe9iwmfJMV42svTqer06nVhYUcfPxVpzmuslF3L1ZXTZtL01R
 y9+rmS2g0vfmK9i+vGw587QiE9//Rl20x1GYEil3p68Xlx6qipl5Nx372jSCKEEbMux6
 fS3g==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
 :subject:date:message-id:mime-version:content-transfer-encoding
 :content-language:thread-index;
 bh=gkmi/TXVs4ZmKvtSxhf+eynPuMipQNvbn+uYn5T7hX0=;
 b=oohbu0H9OHSXgf4WZXxIcRm5ZkZj27FGU5hyPrEtAZ58xcYP7fHA6pRB4bYJVFG6hO
 60pIUApp2h2TAJd7dS8IaDRfuuRQ95tAw7C/xkzPyA6WmoBKvy+BwkIkygfXUZyoMFZc
 x6I5TiN/aIDfbxA1KomUl5ujac2SlP6YCpHY2YZuzPzVz3wvoW/bfcVKcuc84MuHR5dj
 Ec/sPaENvzsp3/haSo4LE/VYwXtCI+jo5Z464GEot48zLdkAG39VYzgg/rfzjB0FYcoe
 /74wwrBjRpwPqCsX5KquLL6nNsUqGfTWoR6adRbGU3ER5bfAu28GnypPPC6GHZv7u3xK
 w8ng==
X-Gm-Message-State: AOAM532G16eyKzn53LdL+e5U4Ya+VOauSvcQYgRxgUZvQlZOpfFuZ1LV
 GQmBhgT1PXb/KuuCZYnb2Q8=
X-Google-Smtp-Source: ABdhPJziACfL7t1vVPOuUiwQWcV38y2I8CeaCYe9vYcHrqlz6OymBOEMRcAAd5Q8S1Vj9yPaSt0reQ==
X-Received: by 2002:a1c:5459:: with SMTP id p25mr12679563wmi.85.1596097495637; 
 Thu, 30 Jul 2020 01:24:55 -0700 (PDT)
Received: from CBGR90WXYV0 ([2a00:23c5:5785:9a01:ad9a:ab78:5748:a7ec])
 by smtp.gmail.com with ESMTPSA id a123sm8884260wmd.28.2020.07.30.01.24.54
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Thu, 30 Jul 2020 01:24:55 -0700 (PDT)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
To: "'Andrew Cooper'" <andrew.cooper3@citrix.com>,
 "'Xen-devel'" <xen-devel@lists.xenproject.org>
References: <20200728113712.22966-1-andrew.cooper3@citrix.com>
 <20200728113712.22966-4-andrew.cooper3@citrix.com>
In-Reply-To: <20200728113712.22966-4-andrew.cooper3@citrix.com>
Subject: RE: [PATCH 3/5] xen/memory: Fix compat XENMEM_acquire_resource for
 size requests
Date: Thu, 30 Jul 2020 09:19:42 +0100
Message-ID: <002a01d6664a$2d6d28f0$88477ad0$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="utf-8"
Content-Transfer-Encoding: quoted-printable
X-Mailer: Microsoft Outlook 16.0
Content-Language: en-gb
Thread-Index: AQF6ExwjYkgZ+OK4ROtT1vtJIEnjiAJEb1e+qcZJK6A=
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Reply-To: paul@xen.org
Cc: 'Hubert Jasudowicz' <hubert.jasudowicz@cert.pl>,
 'Stefano Stabellini' <sstabellini@kernel.org>, 'Julien Grall' <julien@xen.org>,
 'Wei Liu' <wl@xen.org>, 'Konrad Rzeszutek Wilk' <konrad.wilk@oracle.com>,
 'George Dunlap' <George.Dunlap@eu.citrix.com>,
 =?utf-8?Q?'Micha=C5=82_Leszczy=C5=84ski'?= <michal.leszczynski@cert.pl>,
 'Jan Beulich' <JBeulich@suse.com>, 'Ian Jackson' <ian.jackson@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> -----Original Message-----
> From: Andrew Cooper <andrew.cooper3@citrix.com>
> Sent: 28 July 2020 12:37
> To: Xen-devel <xen-devel@lists.xenproject.org>
> Cc: Andrew Cooper <andrew.cooper3@citrix.com>; George Dunlap <George.Dunlap@eu.citrix.com>; Ian
> Jackson <ian.jackson@citrix.com>; Jan Beulich <JBeulich@suse.com>; Konrad Rzeszutek Wilk
> <konrad.wilk@oracle.com>; Stefano Stabellini <sstabellini@kernel.org>; Wei Liu <wl@xen.org>; Julien
> Grall <julien@xen.org>; Paul Durrant <paul@xen.org>; Michał Leszczyński <michal.leszczynski@cert.pl>;
> Hubert Jasudowicz <hubert.jasudowicz@cert.pl>
> Subject: [PATCH 3/5] xen/memory: Fix compat XENMEM_acquire_resource for size requests
>
> Copy the nr_frames from the correct structure, so the caller doesn't
> unconditionally receive 0.
>=20
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Paul Durrant <paul@xen.org>

> ---
> CC: George Dunlap <George.Dunlap@eu.citrix.com>
> CC: Ian Jackson <ian.jackson@citrix.com>
> CC: Jan Beulich <JBeulich@suse.com>
> CC: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> CC: Stefano Stabellini <sstabellini@kernel.org>
> CC: Wei Liu <wl@xen.org>
> CC: Julien Grall <julien@xen.org>
> CC: Paul Durrant <paul@xen.org>
> CC: Michał Leszczyński <michal.leszczynski@cert.pl>
> CC: Hubert Jasudowicz <hubert.jasudowicz@cert.pl>
> ---
>  xen/common/compat/memory.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/xen/common/compat/memory.c b/xen/common/compat/memory.c
> index 3851f756c7..ed92e05b08 100644
> --- a/xen/common/compat/memory.c
> +++ b/xen/common/compat/memory.c
> @@ -599,7 +599,7 @@ int compat_memory_op(unsigned int cmd, XEN_GUEST_HANDLE_PARAM(void) compat)
>                  if ( __copy_field_to_guest(
>                           guest_handle_cast(compat,
>                                             compat_mem_acquire_resource_t),
> -                         &cmp.mar, nr_frames) )
> +                         nat.mar, nr_frames) )
>                      return -EFAULT;
>              }
>              else
> --
> 2.11.0




From xen-devel-bounces@lists.xenproject.org Thu Jul 30 08:36:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jul 2020 08:36:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k143A-0000R3-PK; Thu, 30 Jul 2020 08:36:28 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Gz/s=BJ=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1k1438-0000Qy-O7
 for xen-devel@lists.xenproject.org; Thu, 30 Jul 2020 08:36:26 +0000
X-Inumbo-ID: c0ca213c-d23f-11ea-8d1c-bc764e2007e4
Received: from mail-wm1-x341.google.com (unknown [2a00:1450:4864:20::341])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c0ca213c-d23f-11ea-8d1c-bc764e2007e4;
 Thu, 30 Jul 2020 08:36:25 +0000 (UTC)
Received: by mail-wm1-x341.google.com with SMTP id q76so4198377wme.4
 for <xen-devel@lists.xenproject.org>; Thu, 30 Jul 2020 01:36:25 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
 :mime-version:content-transfer-encoding:content-language
 :thread-index; bh=lrWz9GohPCP+KkZeOmNoHCKzgKvkGS2JUwGp7tx/H9A=;
 b=dfS2F0rleBnQvMPbEbqZus0Aacym8oJFeszkDNu4jzclxOt84lrM1/IGUUBbw1MUlV
 L9qmssxHFon1KPzHunfQnhYecJU47d7C8av6RO/HaKFQFx+S/RYH4dYitapIPAtt/9cc
 8uF5UU50ZFLze6vFKcHjuaUbcruEvK7WCc+bT55wXdaLUBB0xsYb3N4Gtz17x9fR56Ff
 vhdAmj1OZgF6n5VquYE/0XTA0YLZ4eDAF1vEYyBmH40w3ChcuaxSBvyG6UMg4tDb6bo1
 uUO+owuqAgsEZNs0AxHzzKagsKYSIH7p5meU09un76u4xYhYJVf3PUXr30yEOZ7QSFEA
 Ga0Q==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
 :subject:date:message-id:mime-version:content-transfer-encoding
 :content-language:thread-index;
 bh=lrWz9GohPCP+KkZeOmNoHCKzgKvkGS2JUwGp7tx/H9A=;
 b=ocolpPSKNpcmQGpOj4Ssol3jBsqSicZ5+O9BTriGmFiGF6dB+hAcyVBbObXiTiqo06
 6HUsmJSn790KTEdB+Ib/194OWJs8TWWpJsihXn4D8RxWqp36bCGVcmuIumHCXVL/CmVL
 k1ZyAGH1QR1UsRkY41tKy6NS1j/CpfgotT73Tf2ONxKuNKGBZxbbEtlgBNzysoeJIgk2
 IHyp1ed9DQ0w7y7eLXEQ0nkdNoASkcTTZ8lVCapUQE6M+mgCbUsIrsNTRju0TQcgHddM
 wXwO6udgyeYCCa5NKMHF+B7kWnvMO5rRASrGXH5yiIqDPptWUhzh/V1ZearSQYYm3DAY
 FQLw==
X-Gm-Message-State: AOAM533bxkMN7S74ucrYHlFnPT9E9JbK/ArVJKc4vavAyKkKl0y0hOKr
 Lq09lW0oGTqrYSnLWwIwCgw=
X-Google-Smtp-Source: ABdhPJyqdbBGNiTqnqcTDhk+LJAVhAYv/+unGaOVBLbeGblJZJxOpu0KBL0D7lafmrAngSCN5n/nZQ==
X-Received: by 2002:a1c:b787:: with SMTP id h129mr4626243wmf.93.1596098184337; 
 Thu, 30 Jul 2020 01:36:24 -0700 (PDT)
Received: from CBGR90WXYV0 ([2a00:23c5:5785:9a01:ad9a:ab78:5748:a7ec])
 by smtp.gmail.com with ESMTPSA id 31sm8937764wrj.94.2020.07.30.01.36.23
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Thu, 30 Jul 2020 01:36:23 -0700 (PDT)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
To: "'Andrew Cooper'" <andrew.cooper3@citrix.com>,
 "'Xen-devel'" <xen-devel@lists.xenproject.org>
References: <20200728113712.22966-1-andrew.cooper3@citrix.com>
 <20200728113712.22966-5-andrew.cooper3@citrix.com>
In-Reply-To: <20200728113712.22966-5-andrew.cooper3@citrix.com>
Subject: RE: [PATCH 4/5] xen/memory: Fix acquire_resource size semantics
Date: Thu, 30 Jul 2020 09:31:10 +0100
Message-ID: <002b01d6664b$c7eb5f40$57c21dc0$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="utf-8"
Content-Transfer-Encoding: quoted-printable
X-Mailer: Microsoft Outlook 16.0
Content-Language: en-gb
Thread-Index: AQF6ExwjYkgZ+OK4ROtT1vtJIEnjiAEymWI6qc7ZiMA=
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Reply-To: paul@xen.org
Cc: 'Stefano Stabellini' <sstabellini@kernel.org>,
 'Julien Grall' <julien@xen.org>, 'Wei Liu' <wl@xen.org>,
 'Konrad Rzeszutek Wilk' <konrad.wilk@oracle.com>,
 'George Dunlap' <George.Dunlap@eu.citrix.com>,
 =?utf-8?Q?'Micha=C5=82_Leszczy=C5=84ski'?= <michal.leszczynski@cert.pl>,
 'Jan Beulich' <JBeulich@suse.com>, 'Ian Jackson' <ian.jackson@citrix.com>,
 'Hubert Jasudowicz' <hubert.jasudowicz@cert.pl>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> -----Original Message-----
> From: Xen-devel <xen-devel-bounces@lists.xenproject.org> On Behalf Of Andrew Cooper
> Sent: 28 July 2020 12:37
> To: Xen-devel <xen-devel@lists.xenproject.org>
> Cc: Hubert Jasudowicz <hubert.jasudowicz@cert.pl>; Stefano Stabellini <sstabellini@kernel.org>; Julien
> Grall <julien@xen.org>; Wei Liu <wl@xen.org>; Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>; George
> Dunlap <George.Dunlap@eu.citrix.com>; Andrew Cooper <andrew.cooper3@citrix.com>; Paul Durrant
> <paul@xen.org>; Jan Beulich <JBeulich@suse.com>; Michał Leszczyński <michal.leszczynski@cert.pl>; Ian
> Jackson <ian.jackson@citrix.com>
> Subject: [PATCH 4/5] xen/memory: Fix acquire_resource size semantics
>
> Calling XENMEM_acquire_resource with a NULL frame_list is a request for the
> size of the resource, but the returned 32 is bogus.
>
> If someone tries to follow it for XENMEM_resource_ioreq_server, the acquire
> call will fail as IOREQ servers currently top out at 2 frames, and it is only
> half the size of the default grant table limit for guests.
>
> Also, no users actually request a resource size, because it was never wired up
> in the sole implementation of resource acquisition in Linux.
>
> Introduce a new resource_max_frames() to calculate the size of a resource, and
> implement it in the IOREQ and grant subsystems.
>
> It is impossible to guarentee that a mapping call following a successful size

s/guarentee/guarantee

> call will succedd (e.g. The target IOREQ server gets destroyed, or the domain

s/succedd/succeed

> switches from grant v2 to v1).  Document the restriction, and use the
> flexibility to simplify the paths to be lockless.
>
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> ---
> CC: George Dunlap <George.Dunlap@eu.citrix.com>
> CC: Ian Jackson <ian.jackson@citrix.com>
> CC: Jan Beulich <JBeulich@suse.com>
> CC: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> CC: Stefano Stabellini <sstabellini@kernel.org>
> CC: Wei Liu <wl@xen.org>
> CC: Julien Grall <julien@xen.org>
> CC: Paul Durrant <paul@xen.org>
> CC: Michał Leszczyński <michal.leszczynski@cert.pl>
> CC: Hubert Jasudowicz <hubert.jasudowicz@cert.pl>
> ---
>  xen/arch/x86/mm.c             | 20 ++++++++++++++++
>  xen/common/grant_table.c      | 19 +++++++++++++++
>  xen/common/memory.c           | 55 +++++++++++++++++++++++++++++++++----------
>  xen/include/asm-x86/mm.h      |  3 +++
>  xen/include/public/memory.h   | 16 +++++++++----
>  xen/include/xen/grant_table.h |  8 +++++++
>  xen/include/xen/mm.h          |  6 +++++
>  7 files changed, 110 insertions(+), 17 deletions(-)
>
> diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
> index 82bc676553..f73a90a2ab 100644
> --- a/xen/arch/x86/mm.c
> +++ b/xen/arch/x86/mm.c
> @@ -4600,6 +4600,26 @@ int xenmem_add_to_physmap_one(
>      return rc;
>  }
>
> +unsigned int arch_resource_max_frames(
> +    struct domain *d, unsigned int type, unsigned int id)
> +{
> +    unsigned int nr = 0;
> +
> +    switch ( type )
> +    {
> +#ifdef CONFIG_HVM
> +    case XENMEM_resource_ioreq_server:
> +        if ( !is_hvm_domain(d) )
> +            break;
> +        /* One frame for the buf-ioreq ring, and one frame per 128 vcpus. */
> +        nr = 1 + DIV_ROUND_UP(d->max_vcpus * sizeof(struct ioreq), PAGE_SIZE);
> +        break;
> +#endif
> +    }
> +
> +    return nr;
> +}
> +
>  int arch_acquire_resource(struct domain *d, unsigned int type,
>                            unsigned int id, unsigned long frame,
>                            unsigned int nr_frames, xen_pfn_t mfn_list[])
> diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
> index 122d1e7596..0962fc7169 100644
> --- a/xen/common/grant_table.c
> +++ b/xen/common/grant_table.c
> @@ -4013,6 +4013,25 @@ static int gnttab_get_shared_frame_mfn(struct domain *d,
>      return 0;
>  }
> 
> +unsigned int gnttab_resource_max_frames(struct domain *d, unsigned int id)
> +{
> +    unsigned int nr = 0;
> +
> +    /* Don't need the grant lock.  This limit is fixed at domain create time. */
> +    switch ( id )
> +    {
> +    case XENMEM_resource_grant_table_id_shared:
> +        nr = d->grant_table->max_grant_frames;
> +        break;
> +
> +    case XENMEM_resource_grant_table_id_status:
> +        nr = grant_to_status_frames(d->grant_table->max_grant_frames);

Two uses of d->grant_table, so perhaps define a stack variable for it?
Also, should you not make sure 0 is returned in the case of a v1 table?

> +        break;
> +    }
> +
> +    return nr;
> +}
> +
>  int gnttab_acquire_resource(
>      struct domain *d, unsigned int id, unsigned long frame,
>      unsigned int nr_frames, xen_pfn_t mfn_list[])
> diff --git a/xen/common/memory.c b/xen/common/memory.c
> index dc3a7248e3..21edabf9cc 100644
> --- a/xen/common/memory.c
> +++ b/xen/common/memory.c
> @@ -1007,6 +1007,26 @@ static long xatp_permission_check(struct domain *d, unsigned int space)
>      return xsm_add_to_physmap(XSM_TARGET, current->domain, d);
>  }
> 
> +/*
> + * Return 0 on any kind of error.  Caller converts to -EINVAL.
> + *
> + * All nonzero values should be repeatable (i.e. derived from some fixed
> + * proerty of the domain), and describe the full resource (i.e. mapping the

s/proerty/property

> + * result of this call will be the entire resource).

This precludes dynamically adding a resource to a running domain. Do we
really want to bake in that restriction?

> + */
> +static unsigned int resource_max_frames(struct domain *d,
> +                                        unsigned int type, unsigned int id)
> +{
> +    switch ( type )
> +    {
> +    case XENMEM_resource_grant_table:
> +        return gnttab_resource_max_frames(d, id);
> +
> +    default:
> +        return arch_resource_max_frames(d, type, id);
> +    }
> +}
> +
> +
>  static int acquire_resource(
>      XEN_GUEST_HANDLE_PARAM(xen_mem_acquire_resource_t) arg)
>  {
> @@ -1018,6 +1038,7 @@ static int acquire_resource(
>       * use-cases then per-CPU arrays or heap allocations may be required.
>       */
>      xen_pfn_t mfn_list[32];
> +    unsigned int max_frames;
>      int rc;
> 
>      if ( copy_from_guest(&xmar, arg, 1) )
> @@ -1026,19 +1047,6 @@ static int acquire_resource(
>      if ( xmar.pad != 0 )
>          return -EINVAL;
> 
> -    if ( guest_handle_is_null(xmar.frame_list) )
> -    {
> -        if ( xmar.nr_frames )
> -            return -EINVAL;
> -
> -        xmar.nr_frames = ARRAY_SIZE(mfn_list);
> -
> -        if ( __copy_field_to_guest(arg, &xmar, nr_frames) )
> -            return -EFAULT;
> -
> -        return 0;
> -    }
> -
>      if ( xmar.nr_frames > ARRAY_SIZE(mfn_list) )
>          return -E2BIG;
> 
> @@ -1050,6 +1058,27 @@ static int acquire_resource(
>      if ( rc )
>          goto out;
> 
> +    max_frames = resource_max_frames(d, xmar.type, xmar.id);
> +
> +    rc = -EINVAL;
> +    if ( !max_frames )
> +        goto out;
> +
> +    if ( guest_handle_is_null(xmar.frame_list) )
> +    {
> +        if ( xmar.nr_frames )
> +            goto out;
> +
> +        xmar.nr_frames = max_frames;
> +
> +        rc = -EFAULT;
> +        if ( __copy_field_to_guest(arg, &xmar, nr_frames) )
> +            goto out;
> +
> +        rc = 0;
> +        goto out;
> +    }
> +
>      switch ( xmar.type )
>      {
>      case XENMEM_resource_grant_table:
> diff --git a/xen/include/asm-x86/mm.h b/xen/include/asm-x86/mm.h
> index 7e74996053..b0caf372a8 100644
> --- a/xen/include/asm-x86/mm.h
> +++ b/xen/include/asm-x86/mm.h
> @@ -649,6 +649,9 @@ static inline bool arch_mfn_in_directmap(unsigned long mfn)
>      return mfn <= (virt_to_mfn(eva - 1) + 1);
>  }
> 
> +unsigned int arch_resource_max_frames(struct domain *d,
> +                                      unsigned int type, unsigned int id);
> +
>  int arch_acquire_resource(struct domain *d, unsigned int type,
>                            unsigned int id, unsigned long frame,
>                            unsigned int nr_frames, xen_pfn_t mfn_list[]);
> diff --git a/xen/include/public/memory.h b/xen/include/public/memory.h
> index 21057ed78e..cea88cf40c 100644
> --- a/xen/include/public/memory.h
> +++ b/xen/include/public/memory.h
> @@ -639,10 +639,18 @@ struct xen_mem_acquire_resource {
>  #define XENMEM_resource_grant_table_id_status 1
> 
>      /*
> -     * IN/OUT - As an IN parameter number of frames of the resource
> -     *          to be mapped. However, if the specified value is 0 and
> -     *          frame_list is NULL then this field will be set to the
> -     *          maximum value supported by the implementation on return.
> +     * IN/OUT
> +     *
> +     * As an IN parameter number of frames of the resource to be mapped.
> +     *
> +     * When frame_list is NULL and nr_frames is 0, this is interpreted as a
> +     * request for the size of the resource, which shall be returned in the
> +     * nr_frames field.
> +     *
> +     * The size of a resource will never be zero, but a nonzero result doesn't
> +     * guarentee that a subsequent mapping request will be successful.  There

s/guarentee/guarantee

  Paul

> +     * are further type/id specific constraints which may change between the
> +     * two calls.
>       */
>      uint32_t nr_frames;
>      uint32_t pad;
> diff --git a/xen/include/xen/grant_table.h b/xen/include/xen/grant_table.h
> index 5a2c75b880..bae4d79623 100644
> --- a/xen/include/xen/grant_table.h
> +++ b/xen/include/xen/grant_table.h
> @@ -57,6 +57,8 @@ int mem_sharing_gref_to_gfn(struct grant_table *gt, grant_ref_t ref,
>  int gnttab_map_frame(struct domain *d, unsigned long idx, gfn_t gfn,
>                       mfn_t *mfn);
> 
> +unsigned int gnttab_resource_max_frames(struct domain *d, unsigned int id);
> +
>  int gnttab_acquire_resource(
>      struct domain *d, unsigned int id, unsigned long frame,
>      unsigned int nr_frames, xen_pfn_t mfn_list[]);
> @@ -93,6 +95,12 @@ static inline int gnttab_map_frame(struct domain *d, unsigned long idx,
>      return -EINVAL;
>  }
> 
> +static inline unsigned int gnttab_resource_max_frames(
> +    struct domain *d, unsigned int id)
> +{
> +    return 0;
> +}
> +
>  static inline int gnttab_acquire_resource(
>      struct domain *d, unsigned int id, unsigned long frame,
>      unsigned int nr_frames, xen_pfn_t mfn_list[])
> diff --git a/xen/include/xen/mm.h b/xen/include/xen/mm.h
> index 1b2c1f6b32..c184dc1db1 100644
> --- a/xen/include/xen/mm.h
> +++ b/xen/include/xen/mm.h
> @@ -686,6 +686,12 @@ static inline void put_page_alloc_ref(struct page_info *page)
>  }
> 
>  #ifndef CONFIG_ARCH_ACQUIRE_RESOURCE
> +static inline unsigned int arch_resource_max_frames(
> +    struct domain *d, unsigned int type, unsigned int id)
> +{
> +    return 0;
> +}
> +
>  static inline int arch_acquire_resource(
>      struct domain *d, unsigned int type, unsigned int id, unsigned long frame,
>      unsigned int nr_frames, xen_pfn_t mfn_list[])
> --
> 2.11.0
>=20




From xen-devel-bounces@lists.xenproject.org Thu Jul 30 08:44:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jul 2020 08:44:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k14BB-0001KK-M7; Thu, 30 Jul 2020 08:44:45 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Gz/s=BJ=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1k14BA-0001KF-S5
 for xen-devel@lists.xenproject.org; Thu, 30 Jul 2020 08:44:44 +0000
X-Inumbo-ID: e9ce6d62-d240-11ea-8d20-bc764e2007e4
Received: from mail-wm1-x341.google.com (unknown [2a00:1450:4864:20::341])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e9ce6d62-d240-11ea-8d20-bc764e2007e4;
 Thu, 30 Jul 2020 08:44:43 +0000 (UTC)
Received: by mail-wm1-x341.google.com with SMTP id f18so3832565wmc.0
 for <xen-devel@lists.xenproject.org>; Thu, 30 Jul 2020 01:44:43 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
 :mime-version:content-transfer-encoding:content-language
 :thread-index; bh=XWpXQHpy6evGpR7BHsqT2f6iSNpy/AsNc8nuYsMa8jk=;
 b=QCr7IJ0OaIHVOKkOwCDoL78ikAfRJ6ozFOYzWSZUURF6tyoCwPHkr5JmcaILmdJHXg
 GMA9RCquAfLImQ8r6MR7u/sTygPAgJJ9ZKVMUWNQdrOSQ3D25MBmJ/pE2IpxgLCdM9YI
 yJHHugHMp6CHzWexFcS2BEXlwx7nvHCe8P1o38EYRpN4sewWfVHVIMjpSXz8N//w1QaM
 qMRZORfUk0dFp0/HF5AdGO2KtHZa2zaT1IN6kp5v1AMEUketEePhWPfrXzAHb6j2VxMw
 kpfQ51Rs7GVbPNdZYSfgtSZAunhFxX+u+M87HfsqXSDiGmot9+N0gcV30W/4MN9Bc1fs
 3ZqA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
 :subject:date:message-id:mime-version:content-transfer-encoding
 :content-language:thread-index;
 bh=XWpXQHpy6evGpR7BHsqT2f6iSNpy/AsNc8nuYsMa8jk=;
 b=VuAEKg4oQ1NzZFeC8zoiH57xUS6SwAw+6Hyfihu1nzOSKg8FvQLRJDcOxzAPIG5Zdb
 cKhH+p+PKZxTVWmgsvvjyFUAkxNe2Imd8Chf94m7oO0oO1erbClfmup1iMavKxE9lzcs
 Y3353ajKkn2vALKzt6/zXSSfiCD71dMH4Tj8U8NT6ocrfRSZMX1ArPN10+ihRm0aKH0m
 wjnT4hUcnagHoWuE2VgtqEqlWEGUrJf68h3ciRklZA/LA95C0b/V5/wq8QX8SQhrpzdM
 3AuxE1xibbWLQpXjh8EpWxkqAZoyJvCpwxZNbyORzC6kOJipoFnUPCY68qDFcOsPOOPm
 UxeQ==
X-Gm-Message-State: AOAM5329IfRVvqalqQb0PqF8gHcaUSDl78We4UFruDFmNmRJMuma1s3+
 h3c2Vn6nPDDoZwx0I0kLuPo=
X-Google-Smtp-Source: ABdhPJzoqV6rEmLniZ4ER9AYfjIB0gnaocT6OrtqNZXQ9eXa7DWdbahN5SdwbtpiOIgtkpFUuhMkwQ==
X-Received: by 2002:a1c:1b93:: with SMTP id
 b141mr12669868wmb.150.1596098682705; 
 Thu, 30 Jul 2020 01:44:42 -0700 (PDT)
Received: from CBGR90WXYV0 ([2a00:23c5:5785:9a01:ad9a:ab78:5748:a7ec])
 by smtp.gmail.com with ESMTPSA id 128sm8337887wmz.43.2020.07.30.01.44.41
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Thu, 30 Jul 2020 01:44:42 -0700 (PDT)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
To: "'Andrew Cooper'" <andrew.cooper3@citrix.com>,
 "'Xen-devel'" <xen-devel@lists.xenproject.org>
References: <20200728113712.22966-1-andrew.cooper3@citrix.com>
 <20200728113712.22966-6-andrew.cooper3@citrix.com>
In-Reply-To: <20200728113712.22966-6-andrew.cooper3@citrix.com>
Subject: RE: [PATCH 5/5] tools/foreignmem: Support querying the size of a
 resource
Date: Thu, 30 Jul 2020 09:39:29 +0100
Message-ID: <002d01d6664c$f0fc69f0$d2f53dd0$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="utf-8"
Content-Transfer-Encoding: quoted-printable
X-Mailer: Microsoft Outlook 16.0
Content-Language: en-gb
Thread-Index: AQF6ExwjYkgZ+OK4ROtT1vtJIEnjiALNw9PgqcIDgBA=
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Reply-To: paul@xen.org
Cc: 'Ian Jackson' <Ian.Jackson@citrix.com>,
 =?utf-8?Q?'Micha=C5=82_Leszczy=C5=84ski'?= <michal.leszczynski@cert.pl>,
 'Hubert Jasudowicz' <hubert.jasudowicz@cert.pl>, 'Wei Liu' <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> -----Original Message-----
> From: Andrew Cooper <andrew.cooper3@citrix.com>
> Sent: 28 July 2020 12:37
> To: Xen-devel <xen-devel@lists.xenproject.org>
> Cc: Andrew Cooper <andrew.cooper3@citrix.com>; Ian Jackson <Ian.Jackson@citrix.com>; Wei Liu
> <wl@xen.org>; Paul Durrant <paul@xen.org>; Michał Leszczyński <michal.leszczynski@cert.pl>; Hubert
> Jasudowicz <hubert.jasudowicz@cert.pl>
> Subject: [PATCH 5/5] tools/foreignmem: Support querying the size of a resource
> 
> With the Xen side of this interface fixed to return real sizes, userspace
> needs to be able to make the query.
> 
> Introduce xenforeignmemory_resource_size() for the purpose, bumping the
> library minor version and providing compatibility for the non-Linux builds.
> 
> It's not possible to reuse the IOCTL_PRIVCMD_MMAP_RESOURCE infrastructure,
> because it depends on having already mmap()'d a suitably sized region before
> it will make an XENMEM_acquire_resource hypercall to Xen.
> 
> Instead, open a xencall handle and make an XENMEM_acquire_resource hypercall
> directly.

Shame we have to do that but, as you say, it's the only option.

> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Paul Durrant <paul@xen.org>

> ---
> CC: Ian Jackson <Ian.Jackson@citrix.com>
> CC: Wei Liu <wl@xen.org>
> CC: Paul Durrant <paul@xen.org>
> CC: Michał Leszczyński <michal.leszczynski@cert.pl>
> CC: Hubert Jasudowicz <hubert.jasudowicz@cert.pl>
> ---
>  tools/libs/foreignmemory/Makefile                  |  2 +-
>  tools/libs/foreignmemory/core.c                    | 14 +++++++++
>  .../libs/foreignmemory/include/xenforeignmemory.h  | 15 ++++++++++
>  tools/libs/foreignmemory/libxenforeignmemory.map   |  4 +++
>  tools/libs/foreignmemory/linux.c                   | 35 ++++++++++++++++++++++
>  tools/libs/foreignmemory/private.h                 | 14 +++++++++
>  6 files changed, 83 insertions(+), 1 deletion(-)
> 
> diff --git a/tools/libs/foreignmemory/Makefile b/tools/libs/foreignmemory/Makefile
> index 28f1bddc96..8e07f92c59 100644
> --- a/tools/libs/foreignmemory/Makefile
> +++ b/tools/libs/foreignmemory/Makefile
> @@ -2,7 +2,7 @@ XEN_ROOT = $(CURDIR)/../../..
>  include $(XEN_ROOT)/tools/Rules.mk
> 
>  MAJOR    = 1
> -MINOR    = 3
> +MINOR    = 4
>  LIBNAME  := foreignmemory
>  USELIBS  := toollog toolcore
> 
> diff --git a/tools/libs/foreignmemory/core.c b/tools/libs/foreignmemory/core.c
> index 63f12e2450..5d95c59c48 100644
> --- a/tools/libs/foreignmemory/core.c
> +++ b/tools/libs/foreignmemory/core.c
> @@ -53,6 +53,10 @@ xenforeignmemory_handle *xenforeignmemory_open(xentoollog_logger *logger,
>          if (!fmem->logger) goto err;
>      }
> 
> +    fmem->xcall = xencall_open(fmem->logger, 0);
> +    if ( !fmem->xcall )
> +        goto err;
> +
>      rc = osdep_xenforeignmemory_open(fmem);
>      if ( rc  < 0 ) goto err;
> 
> @@ -61,6 +65,7 @@ xenforeignmemory_handle *xenforeignmemory_open(xentoollog_logger *logger,
>  err:
>      xentoolcore__deregister_active_handle(&fmem->tc_ah);
>      osdep_xenforeignmemory_close(fmem);
> +    xencall_close(fmem->xcall);
>      xtl_logger_destroy(fmem->logger_tofree);
>      free(fmem);
>      return NULL;
> @@ -75,6 +80,7 @@ int xenforeignmemory_close(xenforeignmemory_handle *fmem)
> 
>      xentoolcore__deregister_active_handle(&fmem->tc_ah);
>      rc = osdep_xenforeignmemory_close(fmem);
> +    xencall_close(fmem->xcall);
>      xtl_logger_destroy(fmem->logger_tofree);
>      free(fmem);
>      return rc;
> @@ -188,6 +194,14 @@ int xenforeignmemory_unmap_resource(
>      return rc;
>  }
> 
> +int xenforeignmemory_resource_size(
> +    xenforeignmemory_handle *fmem, domid_t domid, unsigned int type,
> +    unsigned int id, unsigned long *nr_frames)
> +{
> +    return osdep_xenforeignmemory_resource_size(fmem, domid, type,
> +                                                id, nr_frames);
> +}
> +
>  /*
>   * Local variables:
>   * mode: C
> diff --git a/tools/libs/foreignmemory/include/xenforeignmemory.h b/tools/libs/foreignmemory/include/xenforeignmemory.h
> index d594be8df0..1ba2f5316b 100644
> --- a/tools/libs/foreignmemory/include/xenforeignmemory.h
> +++ b/tools/libs/foreignmemory/include/xenforeignmemory.h
> @@ -179,6 +179,21 @@ xenforeignmemory_resource_handle *xenforeignmemory_map_resource(
>  int xenforeignmemory_unmap_resource(
>      xenforeignmemory_handle *fmem, xenforeignmemory_resource_handle *fres);
> 
> +/**
> + * Determine the maximum size of a specific resource.
> + *
> + * @parm fmem handle to the open foreignmemory interface
> + * @parm domid the domain id
> + * @parm type the resource type
> + * @parm id the type-specific resource identifier
> + *
> + * Returns 0 on success and fills in *nr_frames.  Sets errno and returns -1
> + * on error.
> + */
> +int xenforeignmemory_resource_size(
> +    xenforeignmemory_handle *fmem, domid_t domid, unsigned int type,
> +    unsigned int id, unsigned long *nr_frames);
> +
>  #endif
> 
>  /*
> diff --git a/tools/libs/foreignmemory/libxenforeignmemory.map b/tools/libs/foreignmemory/libxenforeignmemory.map
> index d5323c87d9..8aca341b99 100644
> --- a/tools/libs/foreignmemory/libxenforeignmemory.map
> +++ b/tools/libs/foreignmemory/libxenforeignmemory.map
> @@ -19,3 +19,7 @@ VERS_1.3 {
>  		xenforeignmemory_map_resource;
>  		xenforeignmemory_unmap_resource;
>  } VERS_1.2;
> +VERS_1.4 {
> +	global:
> +		xenforeignmemory_resource_size;
> +} VERS_1.3;
> diff --git a/tools/libs/foreignmemory/linux.c b/tools/libs/foreignmemory/linux.c
> index 8daa5828e3..67e0ca1e83 100644
> --- a/tools/libs/foreignmemory/linux.c
> +++ b/tools/libs/foreignmemory/linux.c
> @@ -28,6 +28,8 @@
> 
>  #include "private.h"
> 
> +#include <xen/memory.h>
> +
>  #define ROUNDUP(_x,_w) (((unsigned long)(_x)+(1UL<<(_w))-1) & ~((1UL<<(_w))-1))
> 
>  #ifndef O_CLOEXEC
> @@ -340,6 +342,39 @@ int osdep_xenforeignmemory_map_resource(
>      return 0;
>  }
> 
> +int osdep_xenforeignmemory_resource_size(
> +    xenforeignmemory_handle *fmem, domid_t domid, unsigned int type,
> +    unsigned int id, unsigned long *nr_frames)
> +{
> +    int rc;
> +    struct xen_mem_acquire_resource *xmar =
> +        xencall_alloc_buffer(fmem->xcall, sizeof(*xmar));
> +
> +    if ( !xmar )
> +    {
> +        PERROR("Could not bounce memory for acquire_resource hypercall");
> +        return -1;
> +    }
> +
> +    *xmar = (struct xen_mem_acquire_resource){
> +        .domid = domid,
> +        .type = type,
> +        .id = id,
> +    };
> +
> +    rc = xencall2(fmem->xcall, __HYPERVISOR_memory_op,
> +                  XENMEM_acquire_resource, (uintptr_t)xmar);
> +    if ( rc )
> +        goto out;
> +
> +    *nr_frames = xmar->nr_frames;
> +
> + out:
> +    xencall_free_buffer(fmem->xcall, xmar);
> +
> +    return rc;
> +}
> +
>  /*
>   * Local variables:
>   * mode: C
> diff --git a/tools/libs/foreignmemory/private.h b/tools/libs/foreignmemory/private.h
> index 8f1bf081ed..1a6b685f45 100644
> --- a/tools/libs/foreignmemory/private.h
> +++ b/tools/libs/foreignmemory/private.h
> @@ -4,6 +4,7 @@
>  #include <xentoollog.h>
> 
>  #include <xenforeignmemory.h>
> +#include <xencall.h>
> 
>  #include <xentoolcore_internal.h>
> 
> @@ -20,6 +21,7 @@
> 
>  struct xenforeignmemory_handle {
>      xentoollog_logger *logger, *logger_tofree;
> +    xencall_handle *xcall;
>      unsigned flags;
>      int fd;
>      Xentoolcore__Active_Handle tc_ah;
> @@ -74,6 +76,15 @@ static inline int osdep_xenforeignmemory_unmap_resource(
>  {
>      return 0;
>  }
> +
> +static inline int osdep_xenforeignmemory_resource_size(
> +    xenforeignmemory_handle *fmem, domid_t domid, unsigned int type,
> +    unsigned int id, unsigned long *nr_frames)
> +{
> +    errno = EOPNOTSUPP;
> +    return -1;
> +}
> +
>  #else
>  int osdep_xenforeignmemory_restrict(xenforeignmemory_handle *fmem,
>                                      domid_t domid);
> @@ -81,6 +92,9 @@ int osdep_xenforeignmemory_map_resource(
>      xenforeignmemory_handle *fmem, xenforeignmemory_resource_handle *fres);
>  int osdep_xenforeignmemory_unmap_resource(
>      xenforeignmemory_handle *fmem, xenforeignmemory_resource_handle *fres);
> +int osdep_xenforeignmemory_resource_size(
> +    xenforeignmemory_handle *fmem, domid_t domid, unsigned int type,
> +    unsigned int id, unsigned long *nr_frames);
>  #endif
> 
>  #define PERROR(_f...) \
> --
> 2.11.0




From xen-devel-bounces@lists.xenproject.org Thu Jul 30 09:04:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jul 2020 09:04:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k14UL-00035Q-FM; Thu, 30 Jul 2020 09:04:33 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6uMI=BJ=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1k14UK-00035L-Ie
 for xen-devel@lists.xenproject.org; Thu, 30 Jul 2020 09:04:32 +0000
X-Inumbo-ID: ad7ab6d8-d243-11ea-aaab-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ad7ab6d8-d243-11ea-aaab-12813bfff9fa;
 Thu, 30 Jul 2020 09:04:30 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=aTGIKLBckonV9KK4TUwQkn0NQbi+hL1QlpZtJduBBiY=; b=JjoeHf6n9HTx4BWFhxNRMTfPm
 quP8HjBoCaDKN5h1gzCcsf1ULL8AJaJ+jYhnQXrFTHXUsTeeLaqKy33ZkmTfdyaCYu6YqQZwivW7S
 zKvHeTPOc//im7soZtGGlOgBnuJ/Q7Rmj+9y8OQJM+RB3tx5A8jEcBxK9Xa3c09Q38CtM=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1k14UH-00042k-Ox; Thu, 30 Jul 2020 09:04:29 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1k14UH-0005vA-DS; Thu, 30 Jul 2020 09:04:29 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1k14UH-0000Ls-Co; Thu, 30 Jul 2020 09:04:29 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-152297-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 152297: regressions - FAIL
X-Osstest-Failures: xen-unstable-smoke:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
 xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=64219fa179c3e48adad12bfce3f6b3f1596cccbf
X-Osstest-Versions-That: xen=b071ec25e85c4aacf3da59e5258cda0b1c4df45d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 30 Jul 2020 09:04:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 152297 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/152297/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-xl-xsm       7 xen-boot                 fail REGR. vs. 152269

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  64219fa179c3e48adad12bfce3f6b3f1596cccbf
baseline version:
 xen                  b071ec25e85c4aacf3da59e5258cda0b1c4df45d

Last test of basis   152269  2020-07-28 19:05:32 Z    1 days
Testing same since   152288  2020-07-29 19:01:00 Z    0 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Fam Zheng <famzheng@amazon.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 64219fa179c3e48adad12bfce3f6b3f1596cccbf
Author: Fam Zheng <famzheng@amazon.com>
Date:   Wed Jul 29 18:51:45 2020 +0100

    x86/cpuid: Fix APIC bit clearing
    
    The bug is obvious here, other places in this function used
    "cpufeat_mask" correctly.
    
    Fixed: b648feff8ea2 ("xen/x86: Improvements to in-hypervisor cpuid sanity checks")
    Signed-off-by: Fam Zheng <famzheng@amazon.com>
    Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Thu Jul 30 09:50:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jul 2020 09:50:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k15Cu-0007Cj-8F; Thu, 30 Jul 2020 09:50:36 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=IK5u=BJ=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1k15Cs-0007Ce-Bv
 for xen-devel@lists.xenproject.org; Thu, 30 Jul 2020 09:50:34 +0000
X-Inumbo-ID: 1c5ad276-d24a-11ea-aaae-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1c5ad276-d24a-11ea-aaae-12813bfff9fa;
 Thu, 30 Jul 2020 09:50:33 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=QEDspvaZ4NzL18hLntGfWJiuWx4AGvTAqKsfCz/TN1s=; b=aqS+8cehW0pyZBVb0HnUYLsVVr
 KlRuTdOdWKezTWH/tpcR5kQh+UzHHSdxHYDdr4QCsXsg+owXwkdUYg2WIun2PIBqdAXWcZku9yKxO
 Zk0A9mC/8tkB4RQt0+RNVoNrFdXHpSqSTIVBsd92uH8soU4vzaP4DBN1ziWsG7zwI5e8=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1k15Co-0004wq-C7; Thu, 30 Jul 2020 09:50:30 +0000
Received: from 54-240-197-234.amazon.com ([54.240.197.234]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1k15Co-0004hD-1F; Thu, 30 Jul 2020 09:50:30 +0000
Subject: Re: [PATCH 1/5] xen/memory: Introduce CONFIG_ARCH_ACQUIRE_RESOURCE
To: Andrew Cooper <andrew.cooper3@citrix.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20200728113712.22966-1-andrew.cooper3@citrix.com>
 <20200728113712.22966-2-andrew.cooper3@citrix.com>
From: Julien Grall <julien@xen.org>
Message-ID: <9b8397fc-f50e-ef2b-cbaa-2298294af2e3@xen.org>
Date: Thu, 30 Jul 2020 10:50:27 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:68.0)
 Gecko/20100101 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20200728113712.22966-2-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Hubert Jasudowicz <hubert.jasudowicz@cert.pl>, Wei Liu <wl@xen.org>,
 Paul Durrant <paul@xen.org>,
 =?UTF-8?Q?Micha=c5=82_Leszczy=c5=84ski?= <michal.leszczynski@cert.pl>,
 Jan Beulich <JBeulich@suse.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi Andrew,

On 28/07/2020 12:37, Andrew Cooper wrote:
> New architectures shouldn't be forced to implement no-op stubs for unused
> functionality.
> 
> Introduce CONFIG_ARCH_ACQUIRE_RESOURCE which can be opted in to, and provide
> compatibility logic in xen/mm.h
> 
> No functional change.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

With one question below:

Acked-by: Julien Grall <jgrall@amazon.com>


> diff --git a/xen/include/xen/mm.h b/xen/include/xen/mm.h
> index 1061765bcd..1b2c1f6b32 100644
> --- a/xen/include/xen/mm.h
> +++ b/xen/include/xen/mm.h
> @@ -685,4 +685,13 @@ static inline void put_page_alloc_ref(struct page_info *page)
>       }
>   }
>   
> +#ifndef CONFIG_ARCH_ACQUIRE_RESOURCE
> +static inline int arch_acquire_resource(
> +    struct domain *d, unsigned int type, unsigned int id, unsigned long frame,
> +    unsigned int nr_frames, xen_pfn_t mfn_list[])

Any reason to change the way we indent the arguments?

> +{
> +    return -EOPNOTSUPP;
> +}
> +#endif /* !CONFIG_ARCH_ACQUIRE_RESOURCE */
> +
>   #endif /* __XEN_MM_H__ */
> 

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Jul 30 10:08:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jul 2020 10:08:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k15Tz-0008KF-On; Thu, 30 Jul 2020 10:08:15 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mqfy=BJ=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1k15Tx-0008KA-Il
 for xen-devel@lists.xenproject.org; Thu, 30 Jul 2020 10:08:13 +0000
X-Inumbo-ID: 93593032-d24c-11ea-8d29-bc764e2007e4
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 93593032-d24c-11ea-8d29-bc764e2007e4;
 Thu, 30 Jul 2020 10:08:12 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1596103693;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:content-transfer-encoding:in-reply-to;
 bh=yzatxMaE63S7r7J7i/D4AKibgu4cOAE1pxAdd0hTzXI=;
 b=AHg26eu+ySocCvh0ziXkczWVgqkta3tOiu9X/VZV/atoHI5tfAHH5Z6r
 bwDLJZ9obnjdADgjSCrlqZagOKqBs97oTxLCxHmydLlk+sLkd8sIv2k8/
 6gu8MI4SmYqY6+i1p8MVhkhG5ruT85+WSffm5q67Yca9SC6F40zPAJBxO Q=;
Authentication-Results: esa1.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: GmJmumpcoXIrQKZQKbZASPbHoUjLZf9syUU+kPJbw1bvzqMUryZNXi7Aue6QGo3uy7kxPeU0Kx
 5ypbEJxg99F2ioRVN1UuMoN9C9LAFed2d4k3HVz0a1kVJbvJBeRTFk6k0fsoj7owQkrUliBMlQ
 xvNzmY0FETMDU56dzTvkoBOYHYeIvEbSKj3nN1z4qSnr1Fwe9zjI3qdIl2bLZvlKlWuFzBgCFy
 KHHPW4fI8dtRpg8SXM9hYjVdkMVUWrERKgVDHVw46SZdroZENZvEqarMu6A3Qel8lMAYatGfMi
 8Zc=
X-SBRS: 2.7
X-MesageID: 23853738
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,414,1589256000"; d="scan'208";a="23853738"
Date: Thu, 30 Jul 2020 12:08:01 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Subject: Re: [PATCH v3] print: introduce a format specifier for pci_sbdf_t
Message-ID: <20200730100801.GF7191@Air-de-Roger>
References: <20200727103136.53343-1-roger.pau@citrix.com>
 <ca6cd6a5-3221-4d34-08a0-8ea4b2dc92d0@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <ca6cd6a5-3221-4d34-08a0-8ea4b2dc92d0@suse.com>
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin Tian <kevin.tian@intel.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Paul
 Durrant <paul@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien.grall@arm.com>,
 xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Wed, Jul 29, 2020 at 09:28:53PM +0200, Jan Beulich wrote:
> On 27.07.2020 12:31, Roger Pau Monne wrote:
> > The new format specifier is '%pp', and prints a pci_sbdf_t using the
> > seg:bus:dev.func format. Replace all SBDFs printed using
> > '%04x:%02x:%02x.%u' to use the new format specifier.
> > 
> > No functional change intended.
> > 
> > Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> > Reviewed-by: Kevin Tian <kevin.tian@intel.com>
> > Acked-by: Julien Grall <julien.grall@arm.com>
> > For just the pieces where Jan is the only maintainer:
> > Acked-by: Jan Beulich <jbeulich@suse.com>
[...]
> In all reality, Roger, it looks to me as if you should have dropped
> my ack, as there seems to be nothing left at this point that I'm
> the only maintainer of.

Yes, just realized that now, I'm sorry. Your Ack happened before Paul
became a maintainer of vendor-independent IOMMU code and I completely
forgot about it.

I think the overall result of having a modifier for printing SBDFs is
a win for everyone. TBH I just revived the patch because I think it
will be helpful to the Arm folks doing the PCI work, if not I wouldn't
have sent it again.

Roger.


From xen-devel-bounces@lists.xenproject.org Thu Jul 30 10:22:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jul 2020 10:22:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k15hZ-0001V7-5G; Thu, 30 Jul 2020 10:22:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Gz/s=BJ=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1k15hY-0001V2-Aa
 for xen-devel@lists.xenproject.org; Thu, 30 Jul 2020 10:22:16 +0000
X-Inumbo-ID: 89c7b4a6-d24e-11ea-8d2d-bc764e2007e4
Received: from mail-wr1-x432.google.com (unknown [2a00:1450:4864:20::432])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 89c7b4a6-d24e-11ea-8d2d-bc764e2007e4;
 Thu, 30 Jul 2020 10:22:15 +0000 (UTC)
Received: by mail-wr1-x432.google.com with SMTP id r2so19276813wrs.8
 for <xen-devel@lists.xenproject.org>; Thu, 30 Jul 2020 03:22:15 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
 :mime-version:content-transfer-encoding:content-language
 :thread-index; bh=GQCznpWXV7owUTyAWrYHY9kejsOcOU+xkozwwKynylo=;
 b=Ws3Non746GAlPBS+cBvXRTDCXlW+xM22M28ZVqbDeQNSUtzXVRL7R2SvYqRZ+qJUWR
 iseF341LGQG5gosdY/1Xcjq6PmOg6vyyevEEGOcRCnkhYS1TNtxZE/qaFEDJ/Bzi72ct
 rbhReMUrUsT/BXPUDIiCRqXgDsqgSpDRU/d6n0e10T7KmL8Bpx/wDjpccu54Fc7vsFpk
 Llak+8PFxx9pwREbpWrFHtk5sQXb8ZmxkFGfaj+danS2qLmclwtK3xPtOFqIaWE5PSaq
 scWx4HRaY/QGV7YRhbUcGStouvXu2oAyzSg6vo5tQzP7Xw4XcvbOJ+jABeKrXg3bwFnG
 bDCQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
 :subject:date:message-id:mime-version:content-transfer-encoding
 :content-language:thread-index;
 bh=GQCznpWXV7owUTyAWrYHY9kejsOcOU+xkozwwKynylo=;
 b=cfDLbR4wpdFAYUCgNwxtFj4Oxl6fX/Zx36yHJ1LXslXVCYnH4yCAr5nEsjYYiX5mnd
 8ZOGFHHc+uIRX3crmzoBJtN+ymJie3Yi+QO8a4dCHvYUevpFEGbf6d+qqE0Meh3+JDQ4
 3/cZm1HPjxJGkUuhzWnU3pkJsCGdiT8WU8RnwEWaVRlRT0HOIYtZte/9Wz86byJk/o5k
 jCGNF1WYJKBYRvrIEzVkMsxsn/RyjXams2PlorgKGFY8XdkcllbtyS81PRYtdkGBfKnY
 y+V00RJLXEu6TGiguxxzx3jZL9T2iCIkQmzIcE5p3KEsB+MtpIAjNWvfKjBZU7gmdE+Z
 0UEQ==
X-Gm-Message-State: AOAM530hj8rpGo/hkvgUjGCO5x2MT1jmdgYonAx6vPdD+LFMgrq5nV7D
 hR572Z0aRvo7uH5qK/swQus=
X-Google-Smtp-Source: ABdhPJxKgBghgtk5bx0Mwbv7J2HXc8IRyl5xpSYueNC79khDCeL22SJ6Q0YATeX32zwiquLWvZrrdg==
X-Received: by 2002:adf:9ec1:: with SMTP id b1mr33283408wrf.171.1596104534574; 
 Thu, 30 Jul 2020 03:22:14 -0700 (PDT)
Received: from CBGR90WXYV0 ([2a00:23c5:5785:9a01:ad9a:ab78:5748:a7ec])
 by smtp.gmail.com with ESMTPSA id c14sm8952610wrw.85.2020.07.30.03.22.13
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Thu, 30 Jul 2020 03:22:13 -0700 (PDT)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
To: =?utf-8?Q?'Roger_Pau_Monn=C3=A9'?= <roger.pau@citrix.com>,
 "'Jan Beulich'" <jbeulich@suse.com>
References: <20200727103136.53343-1-roger.pau@citrix.com>
 <ca6cd6a5-3221-4d34-08a0-8ea4b2dc92d0@suse.com>
 <20200730100801.GF7191@Air-de-Roger>
In-Reply-To: <20200730100801.GF7191@Air-de-Roger>
Subject: RE: [PATCH v3] print: introduce a format specifier for pci_sbdf_t
Date: Thu, 30 Jul 2020 11:17:01 +0100
Message-ID: <002e01d6665a$90f84d90$b2e8e8b0$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="utf-8"
Content-Transfer-Encoding: quoted-printable
X-Mailer: Microsoft Outlook 16.0
Content-Language: en-gb
Thread-Index: AQKdtgt63/bf5agZO9PNbWLjrxWJngIf8KDWAhHTObynb7rGsA==
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Reply-To: paul@xen.org
Cc: 'Kevin Tian' <kevin.tian@intel.com>,
 'Stefano Stabellini' <sstabellini@kernel.org>, 'Julien Grall' <julien@xen.org>,
 'Wei Liu' <wl@xen.org>, 'Andrew Cooper' <andrew.cooper3@citrix.com>,
 'Ian Jackson' <ian.jackson@eu.citrix.com>,
 'George Dunlap' <george.dunlap@citrix.com>,
 'Julien Grall' <julien.grall@arm.com>, xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> -----Original Message-----
> From: Roger Pau Monné <roger.pau@citrix.com>
> Sent: 30 July 2020 11:08
> To: Jan Beulich <jbeulich@suse.com>
> Cc: Andrew Cooper <andrew.cooper3@citrix.com>; xen-devel@lists.xenproject.org; George Dunlap
> <george.dunlap@citrix.com>; Ian Jackson <ian.jackson@eu.citrix.com>; Julien Grall <julien@xen.org>;
> Stefano Stabellini <sstabellini@kernel.org>; Wei Liu <wl@xen.org>; Paul Durrant <paul@xen.org>; Kevin
> Tian <kevin.tian@intel.com>; Julien Grall <julien.grall@arm.com>
> Subject: Re: [PATCH v3] print: introduce a format specifier for pci_sbdf_t
>
> On Wed, Jul 29, 2020 at 09:28:53PM +0200, Jan Beulich wrote:
> > On 27.07.2020 12:31, Roger Pau Monne wrote:
> > > The new format specifier is '%pp', and prints a pci_sbdf_t using the
> > > seg:bus:dev.func format. Replace all SBDFs printed using
> > > '%04x:%02x:%02x.%u' to use the new format specifier.
> > >
> > > No functional change intended.
> > >
> > > Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> > > Reviewed-by: Kevin Tian <kevin.tian@intel.com>
> > > Acked-by: Julien Grall <julien.grall@arm.com>
> > > For just the pieces where Jan is the only maintainer:
> > > Acked-by: Jan Beulich <jbeulich@suse.com>
> [...]
> > In all reality, Roger, it looks to me as if you should have dropped
> > my ack, as there seems to be nothing left at this point that I'm
> > the only maintainer of.
>
> Yes, just realized that now, I'm sorry. Your Ack happened before Paul
> became a maintainer of vendor-independent IOMMU code and I completely
> forgot about it.
>
> I think the overall result of having a modifier for printing SBDFs is
> a win for everyone. TBH I just revived the patch because I think it
> will be helpful to the Arm folks doing the PCI work, if not I wouldn't
> have sent it again.

FWIW I am in favour of the change.

  Paul

>
> Roger.



From xen-devel-bounces@lists.xenproject.org Thu Jul 30 10:24:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jul 2020 10:24:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k15jm-0001cl-ID; Thu, 30 Jul 2020 10:24:34 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=5KiB=BJ=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1k15jl-0001cg-2z
 for xen-devel@lists.xenproject.org; Thu, 30 Jul 2020 10:24:33 +0000
X-Inumbo-ID: db3650c2-d24e-11ea-aab3-12813bfff9fa
Received: from EUR05-DB8-obe.outbound.protection.outlook.com (unknown
 [40.107.20.88]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id db3650c2-d24e-11ea-aab3-12813bfff9fa;
 Thu, 30 Jul 2020 10:24:32 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com; 
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Gw+A55Q5XlKPed6CCu/ZU+PndNruUEyCiF4VLgFcMrE=;
 b=GhPVxHNhpnc3e6R0x4ooJXAd71PeFSExA/vcLlfczfVtbbCEVzvSHFkB44AGBNrhIJS64ZgsO5ZcGgMXOQiyophrit4ZcY33R0983g3+rSzlePXU811SGNFnd0HEpKUHCSFVRYT4fMg1Q+SGeUAP/GCi2dO+sx3eEzQDBi/PT8c=
Received: from AM0PR10CA0077.EURPRD10.PROD.OUTLOOK.COM (2603:10a6:208:15::30)
 by AM5PR0802MB2465.eurprd08.prod.outlook.com (2603:10a6:203:9f::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3216.23; Thu, 30 Jul
 2020 10:24:30 +0000
Received: from AM5EUR03FT064.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:208:15:cafe::e7) by AM0PR10CA0077.outlook.office365.com
 (2603:10a6:208:15::30) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3239.18 via Frontend
 Transport; Thu, 30 Jul 2020 10:24:30 +0000
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org;
 dmarc=bestguesspass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT064.mail.protection.outlook.com (10.152.17.53) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3239.20 via Frontend Transport; Thu, 30 Jul 2020 10:24:29 +0000
Received: ("Tessian outbound 1dc58800d5dd:v62");
 Thu, 30 Jul 2020 10:24:29 +0000
X-CheckRecipientChecked: true
X-CR-MTA-CID: 5eede8b8d4dfd0cf
X-CR-MTA-TID: 64aa7808
Received: from 336425fe908a.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 930D3EB0-4A02-4455-8F61-4EBD24AC42A3.1; 
 Thu, 30 Jul 2020 10:24:23 +0000
Received: from EUR02-VE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 336425fe908a.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Thu, 30 Jul 2020 10:24:23 +0000
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=nhRRiAjb90mkV+NBc2kqD/7k5IFlfnzKUifqPtOZ4RfW3+/EC1FIYOs+0KuXfcaS1LwqFBkM3qd22PkK0DVyN/epRexZBwKCowqiQtnbrLwQKDet8hn6muZZjAbvHDpQtule6wwEKmsGDJMSjzzDFl7QEkxeGgag+5znurEp2TBI3hNbOHpxnvMsVdnUmSLZUo0fQRPnt8a8g2IkIWJIRLpIuZbY6VxNUyjctmqD/50xYmCkRS32qVD6n6jKCfzQBa0WBQVdX05fvSDgUYtaeXjVFYVHvr7MxrlSmz/pbJ2RSbfwZUYYGhRy0bc09Q69m4X7XwRh0ta0U86NgzrScQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; 
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Gw+A55Q5XlKPed6CCu/ZU+PndNruUEyCiF4VLgFcMrE=;
 b=dAnjwFZIcinu5RvoyBsSvvUgUlwiUu/33gQF2cOh2ZcYXtEubL7j8ZD16mAL+v3jvD3Utx6u13rtxqPfQjifXdbYUnZqbBb5jaohOB+rcw9NG6ujHbMWmIMmS9I2z3CvIV1hIa8MaBO+QG/E+CZRbdtrq2ivI18vc/UKVv01J66TDNWDcn9cfzg4yCe9hRFko4D9/RanoCvDz66pjJGBv5zq3TG8TR66xFuvIMtyoWP3oAPORxgG1JdQNm3LCeY8zx5P5nbRrF5zpqOJkxq5Mg/bd1sgNdswSHQ729D1Pm0oQf5rvZPg4DBM7P9CC/0y1bkEsz8ZvwopQ7w3BCouIg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com; 
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Gw+A55Q5XlKPed6CCu/ZU+PndNruUEyCiF4VLgFcMrE=;
 b=GhPVxHNhpnc3e6R0x4ooJXAd71PeFSExA/vcLlfczfVtbbCEVzvSHFkB44AGBNrhIJS64ZgsO5ZcGgMXOQiyophrit4ZcY33R0983g3+rSzlePXU811SGNFnd0HEpKUHCSFVRYT4fMg1Q+SGeUAP/GCi2dO+sx3eEzQDBi/PT8c=
Authentication-Results-Original: lists.xenproject.org; dkim=none (message not
 signed) header.d=none; lists.xenproject.org;
 dmarc=none action=none header.from=arm.com;
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DB7PR08MB3514.eurprd08.prod.outlook.com (2603:10a6:10:49::12) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3239.18; Thu, 30 Jul
 2020 10:24:20 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::7c65:30f9:4e87:f58a]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::7c65:30f9:4e87:f58a%3]) with mapi id 15.20.3216.033; Thu, 30 Jul 2020
 10:24:20 +0000
From: Bertrand Marquis <bertrand.marquis@arm.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v3] xen/arm: Convert runstate address during hypcall
Date: Thu, 30 Jul 2020 11:24:00 +0100
Message-Id: <3911d221ce9ed73611b93aa437b9ca227d6aa201.1596099067.git.bertrand.marquis@arm.com>
X-Mailer: git-send-email 2.17.1
Content-Type: text/plain
X-ClientProxiedBy: LO2P265CA0142.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:9f::34) To DB7PR08MB3689.eurprd08.prod.outlook.com
 (2603:10a6:10:79::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
Received: from e109506-lin.cambridge.arm.com (217.140.106.53) by
 LO2P265CA0142.GBRP265.PROD.OUTLOOK.COM (2603:10a6:600:9f::34) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3216.24 via Frontend Transport; Thu, 30 Jul 2020 10:24:19 +0000
X-Mailer: git-send-email 2.17.1
X-Originating-IP: [217.140.106.53]
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 8573940c-e0ab-4b9e-8f1b-08d83472be85
X-MS-TrafficTypeDiagnostic: DB7PR08MB3514:|AM5PR0802MB2465:
X-Microsoft-Antispam-PRVS: <AM5PR0802MB2465D00F12FEAF743BA48A7A9D710@AM5PR0802MB2465.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
NoDisclaimer: true
X-MS-Oob-TLC-OOBClassifiers: OLM:8882;OLM:8882;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original: bU92IIZ+OT3abzhCGapjo+0jbHKFUafkYszgOelAa4rIpOTCbsVWYqJhSygjtLfaR4NzRUnvvh/tR0NF8soEPhzPNVwhk7QKPAFWS2q0XisAUr81clT3vcveYwoWqoZfq576VpafiZzwxkC9OaG+i1JMahRR/V4mD+5XJoow1MKA5Myh1mX2x/bqG0BQW3Li0axoXod3CanQllxGBHEUYZnsftDg80inB8sKNypMzGhgn4Qg1/J7G27se1YoYTIhL5OERMuW2/Bv8YXZu480Epm7MclN6uLGzCc4YpWlC0O6kOLE3KAAnrOcQ1GQmeMsLoOohqRjgWkk618oaZMKLOq1TgnN7AjBJwVQIRQl/w8igAy9izNYlvWPVr1POKdB
X-Forefront-Antispam-Report-Untrusted: CIP:255.255.255.255; CTRY:; LANG:en;
 SCL:1; SRV:; IPV:NLI; SFV:NSPM; H:DB7PR08MB3689.eurprd08.prod.outlook.com;
 PTR:; CAT:NONE; SFTY:;
 SFS:(4636009)(136003)(39860400002)(346002)(376002)(396003)(366004)(6666004)(6916009)(5660300002)(86362001)(2616005)(7696005)(30864003)(52116002)(54906003)(8936002)(956004)(478600001)(316002)(26005)(66946007)(66476007)(83380400001)(66556008)(8676002)(4326008)(6486002)(7416002)(44832011)(16526019)(2906002)(36756003)(186003)(136400200001);
 DIR:OUT; SFP:1101; 
X-MS-Exchange-AntiSpam-MessageData: YFJpA93XmkUbDzHon2G4foi6ho99nqr8cZ8crWufk5kS9FAzJbhBiunPfkfxPUqTQNplbM7hsqGsWErrl9CBYtnD7xOho2vNstcEKWe5mOUIYaGWReJnTw6ROkdIezdj0O8YiBywa2zG7SZJIIzttxEjEEbEcLnI24+CYLr4fCmgAZBcIeMxFLbh3AeJh6k0dYt2VoGi9VkAMWApJT3fywqgpFp49hrs/Wh92vpB0hijnuuwodEfYmJoo/yx0KnZgWCS7HQLLVsNP4yM3iUPZTtP6dr1hOgmE/JBD7gPcgDTGSA3aCqpMsANdcMXUESgzxu7CtN67dpyT33ajVsAYaFaeSuassGtiGrbuEkuKOy7Z1FZbm42fuB7I/OCvVQAAH8O2G9e8I8Kb0ZPYqlNi+YTiVOM0sCCJMmrEI8wcz+Oczyhe9iWQyM2lJ4vK8IkG5NJm3hbNKUYmDtHVd6tI1eaQ7+LlWoY0+N8ILML7VtkhOQGug1KY82w5z1b/hC7icSElcGllQISQ2I2Cp+42mpGVtaHPX0O6ndeSx+C249J+Q8XUrHy7yYmCAb6YDzHb26GmBz13OtBx+mdQq5YfBkWdxl+MbPHxiWAEYohoWJXAshRltYKgCE2SzYd4OdlQ1B+XwDSPl8TuFEezJTAfQ==
X-MS-Exchange-Transport-Forked: True
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB7PR08MB3514
Original-Authentication-Results: lists.xenproject.org;
 dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=none action=none
 header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped: AM5EUR03FT064.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs: 547db8e0-8ea7-47d8-7090-08d83472b895
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: 8saRTYyHHJR/1UZRQ/DVlnVJFcEUBDiEv24ukb5Y/j0HEgISGOUHpv5uVppmDnnI59IIar+47wXSEE9RBHtbQEhtAD7ldmTJA5f8YSx0Rq8c1WonqK7YjtTn5j+BqaWg7Yn1jGmXFKm8vSVwSQplZNUgYC6btN6BFyahLdCHwM4gSqg2mISZaFcxsvvDkSlaAxgEnndoRvckW3839Mu+uskImvB8Xwj7Lc6PACBIZ0cb4AIvm33sAWtugHXMlucAvHUJ920rEQWJHylHTN46/fEcyZ74x+w7CDxmltuTKb5UwSzmBCHz7ujQwnxMUNa/0/sAfMPsFA1m6CUCaXRN+usNLXXr7Q/bc5jdwZXYRaYydQr56Xu9/0QgUhqjB1QfTS2OYCTLP7gw7jba/302BSzZP81/REOfCY22RoNg9YleAr9X89qVVUVU5PtDGh8l
X-Forefront-Antispam-Report: CIP:63.35.35.123; CTRY:IE; LANG:en; SCL:1; SRV:;
 IPV:CAL; SFV:NSPM; H:64aa7808-outbound-1.mta.getcheckrecipient.com;
 PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com; CAT:NONE; SFTY:;
 SFS:(4636009)(136003)(396003)(39860400002)(346002)(376002)(46966005)(186003)(2616005)(16526019)(26005)(30864003)(36756003)(7696005)(356005)(82310400002)(86362001)(47076004)(8676002)(82740400003)(956004)(8936002)(44832011)(81166007)(83380400001)(4326008)(70586007)(316002)(6916009)(478600001)(70206006)(54906003)(5660300002)(2906002)(6666004)(6486002)(36906005)(107886003)(336012)(136400200001);
 DIR:OUT; SFP:1101; 
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 30 Jul 2020 10:24:29.9515 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 8573940c-e0ab-4b9e-8f1b-08d83472be85
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d; Ip=[63.35.35.123];
 Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource: AM5EUR03FT064.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM5PR0802MB2465
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 nd@arm.com, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

At the moment on Arm, a Linux guest running with KPTI enabled will
cause the following error when a context switch happens in user mode:
(XEN) p2m.c:1890: d1v0: Failed to walk page-table va 0xffffff837ebe0cd0

The error occurs because, when KPTI is enabled, the virtual address of
the runstate area registered by the guest is only accessible while the
guest is running in kernel space.

To solve this issue, this patch translates the virtual address to a
physical address during the hypercall and maps the required pages using
vmap. This removes the virtual-to-physical conversion during the
context switch, which solves the problem with KPTI.

This is done only on the Arm architecture; the behaviour on x86 is not
modified by this patch, and the address conversion is still done during
each context switch.

This introduces several limitations compared to the previous behaviour
(on Arm only):
- if the guest remaps the area to a different physical address, Xen
will continue to update the area at the previous physical address. As
the area is in kernel space and usually defined as a global variable,
this is believed not to happen in practice. If a guest needs this, it
will have to invoke the hypercall again with the new area (even if it
is at the same virtual address).
- the area needs to be mapped during the hypercall, even if the area is
registered for a different vcpu. For the same reasons as in the
previous case, registering an area whose virtual address is unmapped is
believed not to be done in practice.

Inline functions in headers could not be used because the architecture
domain.h is included before the global domain.h, making it impossible
to use struct vcpu inside the architecture header.
This should not have any performance impact, as the hypercall is
usually only called once per vcpu.

Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>

---
  Changes in v2
    - use vmap to map the pages during the hypercall.
    - reintroduce initial copy during hypercall.

  Changes in v3
    - Fix Coding style
    - Fix vaddr printing on arm32
    - use write_atomic to modify state_entry_time update bit (only
    in guest structure as the bit is not used inside Xen copy)

---
 xen/arch/arm/domain.c        | 161 ++++++++++++++++++++++++++++++-----
 xen/arch/x86/domain.c        |  29 ++++++-
 xen/arch/x86/x86_64/domain.c |   4 +-
 xen/common/domain.c          |  19 ++---
 xen/include/asm-arm/domain.h |   9 ++
 xen/include/asm-x86/domain.h |  16 ++++
 xen/include/xen/domain.h     |   5 ++
 xen/include/xen/sched.h      |  16 +---
 8 files changed, 206 insertions(+), 53 deletions(-)

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 31169326b2..8b36946017 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -19,6 +19,7 @@
 #include <xen/sched.h>
 #include <xen/softirq.h>
 #include <xen/wait.h>
+#include <xen/vmap.h>
 
 #include <asm/alternative.h>
 #include <asm/cpuerrata.h>
@@ -275,36 +276,156 @@ static void ctxt_switch_to(struct vcpu *n)
     virt_timer_restore(n);
 }
 
-/* Update per-VCPU guest runstate shared memory area (if registered). */
-static void update_runstate_area(struct vcpu *v)
+static void cleanup_runstate_vcpu_locked(struct vcpu *v)
 {
-    void __user *guest_handle = NULL;
+    if ( v->arch.runstate_guest )
+    {
+        vunmap((void *)((unsigned long)v->arch.runstate_guest & PAGE_MASK));
+
+        put_page(v->arch.runstate_guest_page[0]);
+
+        if ( v->arch.runstate_guest_page[1] )
+            put_page(v->arch.runstate_guest_page[1]);
+
+        v->arch.runstate_guest = NULL;
+    }
+}
+
+void arch_vcpu_cleanup_runstate(struct vcpu *v)
+{
+    spin_lock(&v->arch.runstate_guest_lock);
+
+    cleanup_runstate_vcpu_locked(v);
+
+    spin_unlock(&v->arch.runstate_guest_lock);
+}
+
+static int setup_runstate_vcpu_locked(struct vcpu *v, vaddr_t vaddr)
+{
+    unsigned int offset;
+    mfn_t mfn[2];
+    struct page_info *page;
+    unsigned int numpages;
     struct vcpu_runstate_info runstate;
+    void *p;
 
-    if ( guest_handle_is_null(runstate_guest(v)) )
-        return;
+    /* user can pass a NULL address to unregister a previous area */
+    if ( vaddr == 0 )
+        return 0;
+
+    offset = vaddr & ~PAGE_MASK;
+
+    /* provided address must be aligned to a 64bit */
+    if ( offset % alignof(struct vcpu_runstate_info) )
+    {
+        gprintk(XENLOG_WARNING, "Cannot map runstate pointer at 0x%"PRIvaddr
+                ": Invalid offset\n", vaddr);
+        return -EINVAL;
+    }
+
+    page = get_page_from_gva(v, vaddr, GV2M_WRITE);
+    if ( !page )
+    {
+        gprintk(XENLOG_WARNING, "Cannot map runstate pointer at 0x%"PRIvaddr
+                ": Page is not mapped\n", vaddr);
+        return -EINVAL;
+    }
+
+    mfn[0] = page_to_mfn(page);
+    v->arch.runstate_guest_page[0] = page;
+
+    if ( offset > (PAGE_SIZE - sizeof(struct vcpu_runstate_info)) )
+    {
+        /* guest area is crossing pages */
+        page = get_page_from_gva(v, vaddr + PAGE_SIZE, GV2M_WRITE);
+        if ( !page )
+        {
+            put_page(v->arch.runstate_guest_page[0]);
+            gprintk(XENLOG_WARNING,
+                    "Cannot map runstate pointer at 0x%"PRIvaddr
+                    ": 2nd Page is not mapped\n", vaddr);
+            return -EINVAL;
+        }
+        mfn[1] = page_to_mfn(page);
+        v->arch.runstate_guest_page[1] = page;
+        numpages = 2;
+    }
+    else
+    {
+        v->arch.runstate_guest_page[1] = NULL;
+        numpages = 1;
+    }
 
-    memcpy(&runstate, &v->runstate, sizeof(runstate));
+    p = vmap(mfn, numpages);
+    if ( !p )
+    {
+        put_page(v->arch.runstate_guest_page[0]);
+        if ( numpages == 2 )
+            put_page(v->arch.runstate_guest_page[1]);
 
-    if ( VM_ASSIST(v->domain, runstate_update_flag) )
+        gprintk(XENLOG_WARNING, "Cannot map runstate pointer at 0x%"PRIvaddr
+                ": vmap error\n", vaddr);
+        return -EINVAL;
+    }
+
+    v->arch.runstate_guest = p + offset;
+
+    if (v == current)
+        memcpy(v->arch.runstate_guest, &v->runstate, sizeof(v->runstate));
+    else
     {
-        guest_handle = &v->runstate_guest.p->state_entry_time + 1;
-        guest_handle--;
-        runstate.state_entry_time |= XEN_RUNSTATE_UPDATE;
-        __raw_copy_to_guest(guest_handle,
-                            (void *)(&runstate.state_entry_time + 1) - 1, 1);
-        smp_wmb();
+        vcpu_runstate_get(v, &runstate);
+        memcpy(v->arch.runstate_guest, &runstate, sizeof(v->runstate));
     }
 
-    __copy_to_guest(runstate_guest(v), &runstate, 1);
+    return 0;
+}
+
+int arch_vcpu_setup_runstate(struct vcpu *v,
+                             struct vcpu_register_runstate_memory_area area)
+{
+    int rc;
+
+    spin_lock(&v->arch.runstate_guest_lock);
+
+    /* cleanup if we are recalled */
+    cleanup_runstate_vcpu_locked(v);
+
+    rc = setup_runstate_vcpu_locked(v, (vaddr_t)area.addr.v);
+
+    spin_unlock(&v->arch.runstate_guest_lock);
 
-    if ( guest_handle )
+    return rc;
+}
+
+
+/* Update per-VCPU guest runstate shared memory area (if registered). */
+static void update_runstate_area(struct vcpu *v)
+{
+    spin_lock(&v->arch.runstate_guest_lock);
+
+    if ( v->arch.runstate_guest )
     {
-        runstate.state_entry_time &= ~XEN_RUNSTATE_UPDATE;
-        smp_wmb();
-        __raw_copy_to_guest(guest_handle,
-                            (void *)(&runstate.state_entry_time + 1) - 1, 1);
+        if ( VM_ASSIST(v->domain, runstate_update_flag) )
+        {
+            v->runstate.state_entry_time |= XEN_RUNSTATE_UPDATE;
+            write_atomic(&(v->arch.runstate_guest->state_entry_time),
+                    v->runstate.state_entry_time);
+        }
+
+        memcpy(v->arch.runstate_guest, &v->runstate, sizeof(v->runstate));
+
+        if ( VM_ASSIST(v->domain, runstate_update_flag) )
+        {
+            /* copy must be done before switching the bit */
+            smp_wmb();
+            v->runstate.state_entry_time &= ~XEN_RUNSTATE_UPDATE;
+            write_atomic(&(v->arch.runstate_guest->state_entry_time),
+                    v->runstate.state_entry_time);
+        }
     }
+
+    spin_unlock(&v->arch.runstate_guest_lock);
 }
 
 static void schedule_tail(struct vcpu *prev)
@@ -560,6 +681,8 @@ int arch_vcpu_create(struct vcpu *v)
     v->arch.saved_context.sp = (register_t)v->arch.cpu_info;
     v->arch.saved_context.pc = (register_t)continue_new_vcpu;
 
+    spin_lock_init(&v->arch.runstate_guest_lock);
+
     /* Idle VCPUs don't need the rest of this setup */
     if ( is_idle_vcpu(v) )
         return rc;
diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index fee6c3931a..b9b81e94e5 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -1642,6 +1642,29 @@ void paravirt_ctxt_switch_to(struct vcpu *v)
         wrmsr_tsc_aux(v->arch.msrs->tsc_aux);
 }
 
+int arch_vcpu_setup_runstate(struct vcpu *v,
+                             struct vcpu_register_runstate_memory_area area)
+{
+    struct vcpu_runstate_info runstate;
+
+    runstate_guest(v) = area.addr.h;
+
+    if ( v == current )
+        __copy_to_guest(runstate_guest(v), &v->runstate, 1);
+    else
+    {
+        vcpu_runstate_get(v, &runstate);
+        __copy_to_guest(runstate_guest(v), &runstate, 1);
+    }
+
+    return 0;
+}
+
+void arch_vcpu_cleanup_runstate(struct vcpu *v)
+{
+    set_xen_guest_handle(runstate_guest(v), NULL);
+}
+
 /* Update per-VCPU guest runstate shared memory area (if registered). */
 bool update_runstate_area(struct vcpu *v)
 {
@@ -1660,8 +1683,8 @@ bool update_runstate_area(struct vcpu *v)
     if ( VM_ASSIST(v->domain, runstate_update_flag) )
     {
         guest_handle = has_32bit_shinfo(v->domain)
-            ? &v->runstate_guest.compat.p->state_entry_time + 1
-            : &v->runstate_guest.native.p->state_entry_time + 1;
+            ? &v->arch.runstate_guest.compat.p->state_entry_time + 1
+            : &v->arch.runstate_guest.native.p->state_entry_time + 1;
         guest_handle--;
         runstate.state_entry_time |= XEN_RUNSTATE_UPDATE;
         __raw_copy_to_guest(guest_handle,
@@ -1674,7 +1697,7 @@ bool update_runstate_area(struct vcpu *v)
         struct compat_vcpu_runstate_info info;
 
         XLAT_vcpu_runstate_info(&info, &runstate);
-        __copy_to_guest(v->runstate_guest.compat, &info, 1);
+        __copy_to_guest(v->arch.runstate_guest.compat, &info, 1);
         rc = true;
     }
     else
diff --git a/xen/arch/x86/x86_64/domain.c b/xen/arch/x86/x86_64/domain.c
index c46dccc25a..b879e8dd2c 100644
--- a/xen/arch/x86/x86_64/domain.c
+++ b/xen/arch/x86/x86_64/domain.c
@@ -36,7 +36,7 @@ arch_compat_vcpu_op(
             break;
 
         rc = 0;
-        guest_from_compat_handle(v->runstate_guest.compat, area.addr.h);
+        guest_from_compat_handle(v->arch.runstate_guest.compat, area.addr.h);
 
         if ( v == current )
         {
@@ -49,7 +49,7 @@ arch_compat_vcpu_op(
             vcpu_runstate_get(v, &runstate);
             XLAT_vcpu_runstate_info(&info, &runstate);
         }
-        __copy_to_guest(v->runstate_guest.compat, &info, 1);
+        __copy_to_guest(v->arch.runstate_guest.compat, &info, 1);
 
         break;
     }
diff --git a/xen/common/domain.c b/xen/common/domain.c
index f0f9c62feb..739c6b7b62 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -727,7 +727,10 @@ int domain_kill(struct domain *d)
         if ( cpupool_move_domain(d, cpupool0) )
             return -ERESTART;
         for_each_vcpu ( d, v )
+        {
+            arch_vcpu_cleanup_runstate(v);
             unmap_vcpu_info(v);
+        }
         d->is_dying = DOMDYING_dead;
         /* Mem event cleanup has to go here because the rings 
          * have to be put before we call put_domain. */
@@ -1167,7 +1170,7 @@ int domain_soft_reset(struct domain *d)
 
     for_each_vcpu ( d, v )
     {
-        set_xen_guest_handle(runstate_guest(v), NULL);
+        arch_vcpu_cleanup_runstate(v);
         unmap_vcpu_info(v);
     }
 
@@ -1494,7 +1497,6 @@ long do_vcpu_op(int cmd, unsigned int vcpuid, XEN_GUEST_HANDLE_PARAM(void) arg)
     case VCPUOP_register_runstate_memory_area:
     {
         struct vcpu_register_runstate_memory_area area;
-        struct vcpu_runstate_info runstate;
 
         rc = -EFAULT;
         if ( copy_from_guest(&area, arg, 1) )
@@ -1503,18 +1505,7 @@ long do_vcpu_op(int cmd, unsigned int vcpuid, XEN_GUEST_HANDLE_PARAM(void) arg)
         if ( !guest_handle_okay(area.addr.h, 1) )
             break;
 
-        rc = 0;
-        runstate_guest(v) = area.addr.h;
-
-        if ( v == current )
-        {
-            __copy_to_guest(runstate_guest(v), &v->runstate, 1);
-        }
-        else
-        {
-            vcpu_runstate_get(v, &runstate);
-            __copy_to_guest(runstate_guest(v), &runstate, 1);
-        }
+        rc = arch_vcpu_setup_runstate(v, area);
 
         break;
     }
diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
index 6819a3bf38..2f62c3e8f5 100644
--- a/xen/include/asm-arm/domain.h
+++ b/xen/include/asm-arm/domain.h
@@ -204,6 +204,15 @@ struct arch_vcpu
      */
     bool need_flush_to_ram;
 
+    /* runstate guest lock */
+    spinlock_t runstate_guest_lock;
+
+    /* runstate guest info */
+    struct vcpu_runstate_info *runstate_guest;
+
+    /* runstate pages mapped for runstate_guest */
+    struct page_info *runstate_guest_page[2];
+
 }  __cacheline_aligned;
 
 void vcpu_show_execution_state(struct vcpu *);
diff --git a/xen/include/asm-x86/domain.h b/xen/include/asm-x86/domain.h
index 635335634d..007ccfbf9f 100644
--- a/xen/include/asm-x86/domain.h
+++ b/xen/include/asm-x86/domain.h
@@ -11,6 +11,11 @@
 #include <asm/x86_emulate.h>
 #include <public/vcpu.h>
 #include <public/hvm/hvm_info_table.h>
+#ifdef CONFIG_COMPAT
+#include <compat/vcpu.h>
+DEFINE_XEN_GUEST_HANDLE(vcpu_runstate_info_compat_t);
+#endif
+
 
 #define has_32bit_shinfo(d)    ((d)->arch.has_32bit_shinfo)
 
@@ -638,6 +643,17 @@ struct arch_vcpu
     struct {
         bool next_interrupt_enabled;
     } monitor;
+
+#ifndef CONFIG_COMPAT
+# define runstate_guest(v) ((v)->arch.runstate_guest)
+    XEN_GUEST_HANDLE(vcpu_runstate_info_t) runstate_guest; /* guest address */
+#else
+# define runstate_guest(v) ((v)->arch.runstate_guest.native)
+    union {
+        XEN_GUEST_HANDLE(vcpu_runstate_info_t) native;
+        XEN_GUEST_HANDLE(vcpu_runstate_info_compat_t) compat;
+    } runstate_guest;
+#endif
 };
 
 struct guest_memory_policy
diff --git a/xen/include/xen/domain.h b/xen/include/xen/domain.h
index 7e51d361de..5e8cbba31d 100644
--- a/xen/include/xen/domain.h
+++ b/xen/include/xen/domain.h
@@ -5,6 +5,7 @@
 #include <xen/types.h>
 
 #include <public/xen.h>
+#include <public/vcpu.h>
 #include <asm/domain.h>
 #include <asm/numa.h>
 
@@ -63,6 +64,10 @@ void arch_vcpu_destroy(struct vcpu *v);
 int map_vcpu_info(struct vcpu *v, unsigned long gfn, unsigned offset);
 void unmap_vcpu_info(struct vcpu *v);
 
+int arch_vcpu_setup_runstate(struct vcpu *v,
+                             struct vcpu_register_runstate_memory_area area);
+void arch_vcpu_cleanup_runstate(struct vcpu *v);
+
 int arch_domain_create(struct domain *d,
                        struct xen_domctl_createdomain *config);
 
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index ac53519d7f..fac030fb83 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -29,11 +29,6 @@
 #include <public/vcpu.h>
 #include <public/event_channel.h>
 
-#ifdef CONFIG_COMPAT
-#include <compat/vcpu.h>
-DEFINE_XEN_GUEST_HANDLE(vcpu_runstate_info_compat_t);
-#endif
-
 /*
  * Stats
  *
@@ -166,16 +161,7 @@ struct vcpu
     struct sched_unit *sched_unit;
 
     struct vcpu_runstate_info runstate;
-#ifndef CONFIG_COMPAT
-# define runstate_guest(v) ((v)->runstate_guest)
-    XEN_GUEST_HANDLE(vcpu_runstate_info_t) runstate_guest; /* guest address */
-#else
-# define runstate_guest(v) ((v)->runstate_guest.native)
-    union {
-        XEN_GUEST_HANDLE(vcpu_runstate_info_t) native;
-        XEN_GUEST_HANDLE(vcpu_runstate_info_compat_t) compat;
-    } runstate_guest; /* guest address */
-#endif
+
     unsigned int     new_state;
 
     /* Has the FPU been initialised? */
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu Jul 30 10:56:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jul 2020 10:56:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k16En-0004E8-92; Thu, 30 Jul 2020 10:56:37 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=IK5u=BJ=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1k16Em-0004E3-CP
 for xen-devel@lists.xenproject.org; Thu, 30 Jul 2020 10:56:36 +0000
X-Inumbo-ID: 55e7c41e-d253-11ea-8d2f-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 55e7c41e-d253-11ea-8d2f-bc764e2007e4;
 Thu, 30 Jul 2020 10:56:35 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=6Ax80nf+EoQR3KIUG/ok9tKdr127h71Jeq4MlF2OY8g=; b=gbUGbRJsZlop4q1hUweM7gdU59
 9V1jcENTrb2+tmLS2NkVK+Gkzuk7GrEsnyXzpNmvPtMwEHrJ2hLNVqrDTiSW9TJXxR09Yay7skfYa
 /odVYVg88EeRhZ72SqJ3NxpWbjNc2SPbsODEdezoBahAD4nDLWJAa4NulkmfZwHuBsPM=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1k16Eg-0006OS-TM; Thu, 30 Jul 2020 10:56:30 +0000
Received: from 54-240-197-226.amazon.com ([54.240.197.226]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1k16Eg-0000ek-J9; Thu, 30 Jul 2020 10:56:30 +0000
Subject: Re: [PATCH 2/5] xen/gnttab: Rework resource acquisition
To: Andrew Cooper <andrew.cooper3@citrix.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20200728113712.22966-1-andrew.cooper3@citrix.com>
 <20200728113712.22966-3-andrew.cooper3@citrix.com>
From: Julien Grall <julien@xen.org>
Message-ID: <a3c57218-56b1-77fe-893f-b0269a6245e9@xen.org>
Date: Thu, 30 Jul 2020 11:56:28 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:68.0)
 Gecko/20100101 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20200728113712.22966-3-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Hubert Jasudowicz <hubert.jasudowicz@cert.pl>, Wei Liu <wl@xen.org>,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Paul Durrant <paul@xen.org>,
 Jan Beulich <JBeulich@suse.com>,
 =?UTF-8?Q?Micha=c5=82_Leszczy=c5=84ski?= <michal.leszczynski@cert.pl>,
 Ian Jackson <ian.jackson@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi Andrew,

On 28/07/2020 12:37, Andrew Cooper wrote:
> The existing logic doesn't function in the general case for mapping a guest's
> grant table, due to the arbitrary 32-frame limit and the default grant table
> limit being 64.
> 
> In order to start addressing this, rework the existing grant table logic by
> implementing a single gnttab_acquire_resource().  This is far more efficient
> than the previous acquire_grant_table() in memory.c because it doesn't take
> the grant table write lock, and attempt to grow the table, for every single
> frame.
> 
> The new gnttab_acquire_resource() function subsumes the previous two
> gnttab_get_{shared,status}_frame() helpers.
> 
> No functional change.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> ---
> CC: George Dunlap <George.Dunlap@eu.citrix.com>
> CC: Ian Jackson <ian.jackson@citrix.com>
> CC: Jan Beulich <JBeulich@suse.com>
> CC: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> CC: Stefano Stabellini <sstabellini@kernel.org>
> CC: Wei Liu <wl@xen.org>
> CC: Julien Grall <julien@xen.org>
> CC: Paul Durrant <paul@xen.org>
> CC: Michał Leszczyński <michal.leszczynski@cert.pl>
> CC: Hubert Jasudowicz <hubert.jasudowicz@cert.pl>
> ---
>   xen/common/grant_table.c      | 93 ++++++++++++++++++++++++++++++-------------
>   xen/common/memory.c           | 42 +------------------
>   xen/include/xen/grant_table.h | 19 ++++-----
>   3 files changed, 75 insertions(+), 79 deletions(-)
> 
> diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
> index 9f0cae52c0..122d1e7596 100644
> --- a/xen/common/grant_table.c
> +++ b/xen/common/grant_table.c
> @@ -4013,6 +4013,72 @@ static int gnttab_get_shared_frame_mfn(struct domain *d,
>       return 0;
>   }
>   
> +int gnttab_acquire_resource(
> +    struct domain *d, unsigned int id, unsigned long frame,
> +    unsigned int nr_frames, xen_pfn_t mfn_list[])

Any reason for changing the way we usually indent parameters?

> +{
> +    struct grant_table *gt = d->grant_table;
> +    unsigned int i = nr_frames, tot_frames;
> +    void **vaddrs;
> +    int rc = 0;

AFAICT rc will always be initialized before it is used. So is the 
initializer necessary?

> +
> +    /* Input sanity. */
> +    if ( !nr_frames )
> +        return -EINVAL;
> +
> +    /* Overflow checks */
> +    if ( frame + nr_frames < frame )
> +        return -EINVAL;
> +
> +    tot_frames = frame + nr_frames;
> +    if ( tot_frames != frame + nr_frames )
> +        return -EINVAL;
> +
> +    /* Grow table if necessary. */
> +    grant_write_lock(gt);
> +    switch ( id )
> +    {
> +        mfn_t tmp;
> +
> +    case XENMEM_resource_grant_table_id_shared:
> +        rc = gnttab_get_shared_frame_mfn(d, tot_frames - 1, &tmp);
> +        break;
> +
> +    case XENMEM_resource_grant_table_id_status:
> +        if ( gt->gt_version != 2 )
> +        {
> +    default:
> +            rc = -EINVAL;
> +            break;
I am aware this is valid C code, but I find this construct confusing. 
Could you just duplicate the two lines and have a separate default case?
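For anyone reading along, the two shapes being compared look roughly like 
this (a minimal standalone sketch, not the actual Xen code: the enum ids 
and the -22 error value standing in for -EINVAL are made up):

```c
#include <assert.h>

/* Hypothetical ids standing in for XENMEM_resource_grant_table_id_*. */
enum { ID_SHARED = 0, ID_STATUS = 1, ID_OTHER = 7 };

/* The construct under discussion: the default label sits inside the
 * if () body of the ID_STATUS case, so an unknown id jumps straight
 * into that block without evaluating the condition.  Valid C, but
 * easy to misread. */
static int classify_nested(int id, int gt_version)
{
    int rc = 0;

    switch ( id )
    {
    case ID_SHARED:
        break;

    case ID_STATUS:
        if ( gt_version != 2 )
        {
    default:
            rc = -22;
            break;
        }
        break;
    }

    return rc;
}

/* The suggested alternative: duplicate the two error lines in a
 * separate default case, so each path reads top to bottom. */
static int classify_flat(int id, int gt_version)
{
    int rc = 0;

    switch ( id )
    {
    case ID_SHARED:
        break;

    case ID_STATUS:
        if ( gt_version != 2 )
            rc = -22;
        break;

    default:
        rc = -22;
        break;
    }

    return rc;
}
```

Both variants behave identically for every (id, version) pair; the only 
difference is readability.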

> +        }
> +        rc = gnttab_get_status_frame_mfn(d, tot_frames - 1, &tmp);
> +        break;
> +    }
> +
> +    /* Any errors from growing the table? */
> +    if ( rc )
> +        goto out;
> +
> +    switch ( id )
> +    {
> +    case XENMEM_resource_grant_table_id_shared:
> +        vaddrs = gt->shared_raw;
> +        break;
> +
> +    case XENMEM_resource_grant_table_id_status:
> +        vaddrs = (void **)gt->status;
> +        break;
> +    }
> +

NIT: Would it make sense to check vaddrs? This would avoid dereferencing 
a NULL pointer if something went wrong.
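Something along these lines, perhaps (an illustrative sketch only: the 
function name, the plain-pointer types and the -22 value standing in for 
-EINVAL are all assumptions, and the cast is a stand-in for virt_to_mfn()):

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative stand-in for the frame lookup loop at the end of
 * gnttab_acquire_resource(). */
static long lookup_frames(void **vaddrs, unsigned long frame,
                          unsigned int nr_frames, unsigned long mfn_list[])
{
    unsigned int i;

    /* The suggested defensive check: fail cleanly instead of
     * dereferencing NULL if neither switch case populated vaddrs. */
    if ( !vaddrs )
        return -22;

    for ( i = 0; i < nr_frames; ++i )
        mfn_list[i] = (unsigned long)vaddrs[frame + i];

    return 0;
}
```

The guard only matters if the earlier switch can fall through without 
assigning vaddrs, but it is cheap insurance against that case.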

> +    for ( i = 0; i < nr_frames; ++i )
> +        mfn_list[i] = virt_to_mfn(vaddrs[frame + i]);
> +
> + out:
> +    grant_write_unlock(gt);
> +
> +    return rc;
> +}
> +

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Jul 30 11:05:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jul 2020 11:05:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k16NF-00058u-4Y; Thu, 30 Jul 2020 11:05:21 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6uMI=BJ=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1k16ND-00058p-QH
 for xen-devel@lists.xenproject.org; Thu, 30 Jul 2020 11:05:19 +0000
X-Inumbo-ID: 8d13b370-d254-11ea-8d30-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8d13b370-d254-11ea-8d30-bc764e2007e4;
 Thu, 30 Jul 2020 11:05:17 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=/OMLP0ytadLUV2PuFNh5mkutxlCcQ/Zn5TUltUGsqwc=; b=l9EtByNsz2XstBJussvQ22DWq
 JKeMN0lnY3jDtLOq4ekUG4OSZw/lO6ZaeCIhsXa24jXFKVayOdoUZUpo4hpAoiHaBcu+gDMC82KYd
 FQuFU85gWu7NPx17N3FiMLyVIXa0GP53L6XOUNOyFGQ1oHthcWs8sU2G+LVMwrP4Xtsdg=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1k16NB-0006Zp-1V; Thu, 30 Jul 2020 11:05:17 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1k16NA-0006D4-Lk; Thu, 30 Jul 2020 11:05:16 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1k16NA-00052a-L7; Thu, 30 Jul 2020 11:05:16 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-152287-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 152287: tolerable FAIL - PUSHED
X-Osstest-Failures: linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
 linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:heisenbug
 linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:heisenbug
 linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:heisenbug
 linux-linus:test-arm64-arm64-examine:reboot:fail:heisenbug
 linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:heisenbug
 linux-linus:test-arm64-arm64-xl:xen-boot:fail:heisenbug
 linux-linus:test-arm64-arm64-xl-thunderx:xen-boot:fail:heisenbug
 linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:heisenbug
 linux-linus:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:allowable
 linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: linux=6ba1b005ffc388c2aeaddae20da29e4810dea298
X-Osstest-Versions-That: linux=1b5044021070efa3259f3e9548dc35d1eb6aa844
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 30 Jul 2020 11:05:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 152287 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/152287/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 10 debian-hvm-install fail in 152272 pass in 152287
 test-arm64-arm64-xl-seattle   7 xen-boot                   fail pass in 152272
 test-arm64-arm64-xl-xsm       7 xen-boot                   fail pass in 152272
 test-arm64-arm64-libvirt-xsm  7 xen-boot                   fail pass in 152272
 test-arm64-arm64-examine      8 reboot                     fail pass in 152272
 test-arm64-arm64-xl-credit2   7 xen-boot                   fail pass in 152272
 test-arm64-arm64-xl           7 xen-boot                   fail pass in 152272
 test-arm64-arm64-xl-thunderx  7 xen-boot                   fail pass in 152272
 test-arm64-arm64-xl-credit1   7 xen-boot                   fail pass in 152272

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds    16 guest-start/debian.repeat fail REGR. vs. 151214

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-seattle 13 migrate-support-check fail in 152272 never pass
 test-arm64-arm64-xl-seattle 14 saverestore-support-check fail in 152272 never pass
 test-arm64-arm64-xl-xsm     13 migrate-support-check fail in 152272 never pass
 test-arm64-arm64-xl-xsm 14 saverestore-support-check fail in 152272 never pass
 test-arm64-arm64-xl-credit2 13 migrate-support-check fail in 152272 never pass
 test-arm64-arm64-xl-credit2 14 saverestore-support-check fail in 152272 never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check fail in 152272 never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check fail in 152272 never pass
 test-arm64-arm64-xl         13 migrate-support-check fail in 152272 never pass
 test-arm64-arm64-xl     14 saverestore-support-check fail in 152272 never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check fail in 152272 never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check fail in 152272 never pass
 test-arm64-arm64-xl-credit1 13 migrate-support-check fail in 152272 never pass
 test-arm64-arm64-xl-credit1 14 saverestore-support-check fail in 152272 never pass
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 151214
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 151214
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 151214
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 151214
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 151214
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 151214
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 151214
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 151214
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                6ba1b005ffc388c2aeaddae20da29e4810dea298
baseline version:
 linux                1b5044021070efa3259f3e9548dc35d1eb6aa844

Last test of basis   151214  2020-06-18 02:27:46 Z   42 days
Failing since        151236  2020-06-19 19:10:35 Z   40 days   63 attempts
Testing same since   152272  2020-07-28 22:11:26 Z    1 days    2 attempts

------------------------------------------------------------
938 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

hint: The 'hooks/update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-receive' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
To xenbits.xen.org:/home/xen/git/linux-pvops.git
   1b5044021070..6ba1b005ffc3  6ba1b005ffc388c2aeaddae20da29e4810dea298 -> tested/linux-linus


From xen-devel-bounces@lists.xenproject.org Thu Jul 30 11:13:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jul 2020 11:13:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k16Uu-00061b-W2; Thu, 30 Jul 2020 11:13:16 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=uTMv=BJ=citrix.com=george.dunlap@srs-us1.protection.inumbo.net>)
 id 1k16Ut-00061W-8b
 for xen-devel@lists.xenproject.org; Thu, 30 Jul 2020 11:13:15 +0000
X-Inumbo-ID: a8ec8026-d255-11ea-8d32-bc764e2007e4
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a8ec8026-d255-11ea-8d32-bc764e2007e4;
 Thu, 30 Jul 2020 11:13:14 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1596107594;
 h=from:to:cc:subject:date:message-id:references:
 in-reply-to:content-id:content-transfer-encoding: mime-version;
 bh=6F05LD0jNoU2E7I3JKYZPhwUR+cfE3s3Mv/BGlzf/cM=;
 b=Z0jqXWBIgvdhq9rkCi7/ENpH25k7rPZiHEX2C85X+claL6j3x7mHp+6U
 UFzNOIbWgfT7Jbn2LKAngfny3Fpw0o202UH3hpfmXTxp3jOuaqVtDOdsP
 jYLpiZATDRIKJowusOqc3y32ie3K4MRgWI2i9KOgPMcOXi58/9zeYWthL M=;
Authentication-Results: esa1.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: HLXkWDxgOvLKURf//whfEZ1QG0TsypSbBRPGutWNghswsf+WA/18+OIq4445vyO2rafVZYTEcK
 BEoizpzip0vrBz6Y7YVNDpYTdZEJtvDMEg0G/PkKtOS40kx+y5Grm8BeOKX3rKdigammz+hS2l
 YDtBwRPbZxDAWlaQgTZyQ1UTpCPM1+lgUVk3mZEv2H0Bx9lBhxltVaehorPYeodHzq4X8dUtcr
 OAt/dZOl0y7j+6bVoelLfy4Upc+0v32PwgjKSfe+PVYF8M82BSVcFYk4IKr6s6btMqRfspkVWc
 YuE=
X-SBRS: 2.7
X-MesageID: 23857264
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,414,1589256000"; d="scan'208";a="23857264"
From: George Dunlap <George.Dunlap@citrix.com>
To: Stefano Stabellini <sstabellini@kernel.org>
Subject: Re: kernel-doc and xen.git
Thread-Topic: kernel-doc and xen.git
Thread-Index: AQHWZhCpvoHj9qYsqUS/BmcPWnpp8Kkf1yeA
Date: Thu, 30 Jul 2020 11:13:09 +0000
Message-ID: <785FBD2D-A67C-4740-9C5B-2ECCD0AEBFFC@citrix.com>
References: <alpine.DEB.2.21.2007291644330.1767@sstabellini-ThinkPad-T480s>
In-Reply-To: <alpine.DEB.2.21.2007291644330.1767@sstabellini-ThinkPad-T480s>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-mailer: Apple Mail (2.3608.80.23.2.2)
x-ms-exchange-messagesentrepresentingtype: 1
x-ms-exchange-transport-fromentityheader: Hosted
Content-Type: text/plain; charset="utf-8"
Content-ID: <55DBF837FFC50D4686883587E26BF07B@citrix.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Committers <committers@xenproject.org>,
 "Bertrand.Marquis@arm.com" <Bertrand.Marquis@arm.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> On Jul 30, 2020, at 2:27 AM, Stefano Stabellini <sstabellini@kernel.org> wrote:
> 
> Hi all,
> 
> I would like to ask for your feedback on the adoption of the kernel-doc
> format for in-code comments.
> 
> In the FuSa SIG we have started looking into FuSa documents for Xen. One
> of the things we are investigating are ways to link these documents to
> in-code comments in xen.git and vice versa.
> 
> In this context, Andrew Cooper suggested to have a look at "kernel-doc"
> [1] during one of the virtual beer sessions at the last Xen Summit.
> 
> I did give a look at kernel-doc and it is very promising. kernel-doc is
> a script that can generate nice rst text documents from in-code
> comments. (The generated rst files can then be used as input for sphinx
> to generate html docs.) The comment syntax [2] is simple and similar to
> Doxygen:
> 
>    /**
>     * function_name() - Brief description of function.
>     * @arg1: Describe the first argument.
>     * @arg2: Describe the second argument.
>     *        One can provide multiple line descriptions
>     *        for arguments.
>     */
> 
> kernel-doc is actually better than Doxygen because it is a much simpler
> tool, one we could customize to our needs and with predictable output.
> Specifically, we could add the tagging, numbering, and referencing
> required by FuSa requirement documents.
> 
> I would like your feedback on whether it would be good to start
> converting xen.git in-code comments to the kernel-doc format so that
> proper documents can be generated out of them. One day we could import
> kernel-doc into xen.git/scripts and use it to generate a set of html
> documents via sphinx.

`git-grep '^/\*\*$' ` turns up loads of instances of kernel-doc-style comments in the tree already.  I think it makes complete sense to:

1. Start using tools to pull the existing ones into sphinx docs
2. Skim through the existing ones to make sure they're accurate / useful
3. Add such comments for elements of key importance to the FUSA SIG
4. Encourage people to include documentation for new features, &c

 -George
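The kernel-doc comment syntax discussed in this thread is simple enough that a toy parser fits in a few lines. Below is a minimal, hypothetical Python sketch of extracting the function name, brief description, and @arg descriptions from such a comment; it is an illustration only, not the real kernel-doc script (which is a far more capable Perl tool in Linux's scripts/ directory).

```python
import re

def parse_kernel_doc(comment: str) -> dict:
    """Toy parser for a kernel-doc style comment block.

    Illustrative only: the real kernel-doc script handles structs,
    enums, return sections, and much more.
    """
    # Strip the comment decoration: the '/**' and '*/' delimiter lines
    # and the leading '* ' on each remaining line.
    lines = []
    for raw in comment.splitlines():
        stripped = raw.strip()
        if stripped in ("/**", "*/"):
            continue
        lines.append(re.sub(r"^\*\s?", "", stripped))

    result = {"function": None, "brief": None, "args": {}}
    current_arg = None
    for line in lines:
        m = re.match(r"(\w+)\(\)\s*-\s*(.*)", line)
        if m and result["function"] is None:
            result["function"], result["brief"] = m.group(1), m.group(2)
            continue
        m = re.match(r"@(\w+):\s*(.*)", line)
        if m:
            current_arg = m.group(1)
            result["args"][current_arg] = m.group(2)
        elif current_arg and line.strip():
            # Continuation line of a multi-line argument description.
            result["args"][current_arg] += " " + line.strip()
    return result
```

Feeding it the example comment from the thread yields the function name, the brief description, and both argument descriptions, with arg2's two continuation lines joined into one string.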


From xen-devel-bounces@lists.xenproject.org Thu Jul 30 12:54:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jul 2020 12:54:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k184g-0005x5-1L; Thu, 30 Jul 2020 12:54:18 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=IK5u=BJ=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1k184f-0005x0-0w
 for xen-devel@lists.xenproject.org; Thu, 30 Jul 2020 12:54:17 +0000
X-Inumbo-ID: c5eb2534-d263-11ea-aac1-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c5eb2534-d263-11ea-aac1-12813bfff9fa;
 Thu, 30 Jul 2020 12:54:15 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=nfEsaqFJybXnm2uijpMuL0CFKltBDOFIQNfLG42L75M=; b=sHopNlrLCXtxKL7FqPXeXDkFtN
 5cuzhU8AbAOjvdX+f1vndFMN7JRDfb0gGl1cc+atAPCIobCnvoR9gp66nZMGDO0o9UxrYfntBzYUf
 RkiVF7oxCaT3ECi8KcGK0/fpnvdz/ncdNeVQr6/uoqLcN/oXJmPQSF2xBkPjXNx5cD5w=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1k184X-0000L2-Pc; Thu, 30 Jul 2020 12:54:09 +0000
Received: from 54-240-197-226.amazon.com ([54.240.197.226]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1k184X-0007JG-EM; Thu, 30 Jul 2020 12:54:09 +0000
Subject: Re: [PATCH 4/5] xen/memory: Fix acquire_resource size semantics
To: paul@xen.org, 'Andrew Cooper' <andrew.cooper3@citrix.com>,
 'Xen-devel' <xen-devel@lists.xenproject.org>
References: <20200728113712.22966-1-andrew.cooper3@citrix.com>
 <20200728113712.22966-5-andrew.cooper3@citrix.com>
 <002b01d6664b$c7eb5f40$57c21dc0$@xen.org>
From: Julien Grall <julien@xen.org>
Message-ID: <d0c00a30-2f72-036e-d574-a82e96ea79ea@xen.org>
Date: Thu, 30 Jul 2020 13:54:06 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:68.0)
 Gecko/20100101 Thunderbird/68.11.0
MIME-Version: 1.0
In-Reply-To: <002b01d6664b$c7eb5f40$57c21dc0$@xen.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: 'Stefano Stabellini' <sstabellini@kernel.org>,
 'Hubert Jasudowicz' <hubert.jasudowicz@cert.pl>, 'Wei Liu' <wl@xen.org>,
 'Konrad Rzeszutek Wilk' <konrad.wilk@oracle.com>,
 'George Dunlap' <George.Dunlap@eu.citrix.com>,
 =?UTF-8?B?J01pY2hhxYIgTGVzemN6ecWEc2tpJw==?= <michal.leszczynski@cert.pl>,
 'Jan Beulich' <JBeulich@suse.com>, 'Ian Jackson' <ian.jackson@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi Paul,

On 30/07/2020 09:31, Paul Durrant wrote:
>> diff --git a/xen/common/memory.c b/xen/common/memory.c
>> index dc3a7248e3..21edabf9cc 100644
>> --- a/xen/common/memory.c
>> +++ b/xen/common/memory.c
>> @@ -1007,6 +1007,26 @@ static long xatp_permission_check(struct domain *d, unsigned int space)
>>       return xsm_add_to_physmap(XSM_TARGET, current->domain, d);
>>   }
>>
>> +/*
>> + * Return 0 on any kind of error.  Caller converts to -EINVAL.
>> + *
>> + * All nonzero values should be repeatable (i.e. derived from some fixed
>> + * proerty of the domain), and describe the full resource (i.e. mapping the
> 
> s/proerty/property
> 
>> + * result of this call will be the entire resource).
> 
> This precludes dynamically adding a resource to a running domain. Do we really want to bake in that restriction?

AFAICT, this restriction is not documented in the ABI. In particular, it 
is written:

"
The size of a resource will never be zero, but a nonzero result doesn't
guarantee that a subsequent mapping request will be successful.  There
are further type/id specific constraints which may change between the
two calls.
"

So I think a domain couldn't rely on this behavior. That said, it might 
be good to clarify in the comment on top of resource_max_frames that 
this is an implementation decision and not part of the ABI.
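The error-handling contract under discussion (the size helper returns 0 on any kind of error, and the hypercall path converts that into -EINVAL) can be modelled in a few lines. This is a hypothetical Python sketch of the calling convention only; the names and values are illustrative and do not reflect Xen's actual implementation.

```python
# Hypothetical model of the contract: the internal helper reports
# failure as 0, and only the caller turns that into an errno value.
EINVAL = 22

def resource_max_frames(resources, resource_type):
    """Return the full size of a resource in frames, or 0 on any error."""
    size = resources.get(resource_type, 0)
    return size if size > 0 else 0

def acquire_resource_size(resources, resource_type):
    """Caller-side wrapper: convert the helper's 0 into -EINVAL."""
    size = resource_max_frames(resources, resource_type)
    if size == 0:
        return -EINVAL
    return size
```

Keeping the 0-means-error convention inside the helper lets every internal caller use a simple truth test, while the single hypercall entry point owns the errno mapping.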

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Jul 30 13:07:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jul 2020 13:07:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k18Hb-0006uZ-8h; Thu, 30 Jul 2020 13:07:39 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2OTA=BJ=citrix.com=ian.jackson@srs-us1.protection.inumbo.net>)
 id 1k18Ha-0006uU-Iy
 for xen-devel@lists.xenproject.org; Thu, 30 Jul 2020 13:07:38 +0000
X-Inumbo-ID: a3c25854-d265-11ea-aac4-12813bfff9fa
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a3c25854-d265-11ea-aac4-12813bfff9fa;
 Thu, 30 Jul 2020 13:07:37 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1596114457;
 h=from:mime-version:content-transfer-encoding:message-id:
 date:to:cc:subject:in-reply-to:references;
 bh=c62y2mGmmssXZL5kAsZahXphYgzKh4QuUrTBqYWdLO0=;
 b=U9bi8fyCr4WlmRgQLkyslq59ZcexrDoyF/AXxioYjN6NZNom2YzK0wm2
 tNd6wU/2fX6KlO+1mS5jyKtmQghDqpI83Sl09i9dz7JSYudlQL2WeiIwR
 gbkIEPwSQhr+460peQuCRPpL1QMNHa420qN8lH25h5HWUgZXF2w3dE10k A=;
Authentication-Results: esa4.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: uHTK7L7xVqQB6dYXFK0znwuYwCF5c/3Bl8Q32n0svYWRKvaJTB2EnB+n56biw/AMwcVSAt/Vcc
 ooAf10nVCJ7TX6On9HSjFl2DJK/x3+EOHly0W9OTdGkpTiAoi1xkmt44aYyV6AINjWluk6qDWc
 f/z5MBINW2MegbcOtj1e5MHtj65qzoMu6ejIv3HX98DDggCwpjMtSoujhgUvgNSQ7fzL8OrGwg
 RzYOfLKVYdl0HC8n4id/EVwtuGx1BUtvU6kF1D/7/9GDRhxYupYdzBV3TrmZ9/AKE6UgB9k/kJ
 IFo=
X-SBRS: 2.7
X-MesageID: 24408021
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,414,1589256000"; d="scan'208";a="24408021"
From: Ian Jackson <ian.jackson@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Message-ID: <24354.50708.138178.815210@mariner.uk.xensource.com>
Date: Thu, 30 Jul 2020 14:07:32 +0100
To: George Dunlap <George.Dunlap@citrix.com>
Subject: Re: kernel-doc and xen.git
In-Reply-To: <785FBD2D-A67C-4740-9C5B-2ECCD0AEBFFC@citrix.com>
References: <alpine.DEB.2.21.2007291644330.1767@sstabellini-ThinkPad-T480s>
 <785FBD2D-A67C-4740-9C5B-2ECCD0AEBFFC@citrix.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Committers <committers@xenproject.org>,
 "Bertrand.Marquis@arm.com" <Bertrand.Marquis@arm.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

George Dunlap writes ("Re: kernel-doc and xen.git"):
> > On Jul 30, 2020, at 2:27 AM, Stefano Stabellini <sstabellini@kernel.org> wrote:
...
> > I did give a look at kernel-doc and it is very promising. kernel-doc is
> > a script that can generate nice rst text documents from in-code
> > comments. (The generated rst files can then be used as input for sphinx
> > to generate html docs.) The comment syntax [2] is simple and similar to
> > Doxygen:
> > 
> >    /**
> >     * function_name() - Brief description of function.
> >     * @arg1: Describe the first argument.
> >     * @arg2: Describe the second argument.
> >     *        One can provide multiple line descriptions
> >     *        for arguments.
> >     */
> > 
> > kernel-doc is actually better than Doxygen because it is a much simpler
> > tool, one we could customize to our needs and with predictable output.
> > Specifically, we could add the tagging, numbering, and referencing
> > required by FuSa requirement documents.
> > 
> > I would like your feedback on whether it would be good to start
> > converting xen.git in-code comments to the kernel-doc format so that
> > proper documents can be generated out of them. One day we could import
> > kernel-doc into xen.git/scripts and use it to generate a set of html
> > documents via sphinx.
> 
> `git-grep '^/\*\*$' ` turns up loads of instances of kernel-doc-style comments in the tree already.  I think it makes complete sense to:
> 
> 1. Start using tools to pull the existing ones into sphinx docs
> 2. Skim through the existing ones to make sure they’re accurate / useful
> 3. Add such comments for elements of key importance to the FUSA SIG
> 4. Encourage people to include documentation for new features, &c

I have no objection to this.  Indeed switching to something the kernel
folks find useable is likely to be a good idea.

We should ideally convert the existing hypercall documentation, which
is parsed from a bespoke magic comment format by a script in xen.git.

Ian.


From xen-devel-bounces@lists.xenproject.org Thu Jul 30 13:12:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jul 2020 13:12:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k18Mb-0007mT-1A; Thu, 30 Jul 2020 13:12:49 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6uMI=BJ=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1k18Ma-0007m8-AA
 for xen-devel@lists.xenproject.org; Thu, 30 Jul 2020 13:12:48 +0000
X-Inumbo-ID: 593ad88c-d266-11ea-aac4-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 593ad88c-d266-11ea-aac4-12813bfff9fa;
 Thu, 30 Jul 2020 13:12:41 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=WNECkIXINkxosL/PwIwLZRYTLrltxPll5HssO8ErLFE=; b=twKjaNbDCy5P+KUfxCdzhWDb+
 iOeEB8Mr2nA19N1qhe0QZVnPPC/X9pJvC+sdXG4r7216vfkpwoH0WEZ7jaKTyb7Dm/B2HayhPdgTa
 5ADtkBo+0umTG3pUo/mit/m7rqk/l0BhYqw95efp5dH/knNnZwGCw0SF+vxbrIRTSBkFA=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1k18MT-0000km-6u; Thu, 30 Jul 2020 13:12:41 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1k18MS-0006fo-Pn; Thu, 30 Jul 2020 13:12:40 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1k18MS-000227-P9; Thu, 30 Jul 2020 13:12:40 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-152302-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 152302: regressions - FAIL
X-Osstest-Failures: xen-unstable-smoke:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
 xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=64219fa179c3e48adad12bfce3f6b3f1596cccbf
X-Osstest-Versions-That: xen=b071ec25e85c4aacf3da59e5258cda0b1c4df45d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 30 Jul 2020 13:12:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 152302 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/152302/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-xl-xsm       7 xen-boot                 fail REGR. vs. 152269

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  64219fa179c3e48adad12bfce3f6b3f1596cccbf
baseline version:
 xen                  b071ec25e85c4aacf3da59e5258cda0b1c4df45d

Last test of basis   152269  2020-07-28 19:05:32 Z    1 days
Testing same since   152288  2020-07-29 19:01:00 Z    0 days    4 attempts

------------------------------------------------------------
People who touched revisions under test:
  Fam Zheng <famzheng@amazon.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 64219fa179c3e48adad12bfce3f6b3f1596cccbf
Author: Fam Zheng <famzheng@amazon.com>
Date:   Wed Jul 29 18:51:45 2020 +0100

    x86/cpuid: Fix APIC bit clearing
    
    The bug is obvious here; other places in this function used
    "cpufeat_mask" correctly.
    
    Fixes: b648feff8ea2 ("xen/x86: Improvements to in-hypervisor cpuid sanity checks")
    Signed-off-by: Fam Zheng <famzheng@amazon.com>
    Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Thu Jul 30 14:03:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jul 2020 14:03:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k199Z-0003bq-UH; Thu, 30 Jul 2020 14:03:25 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mqfy=BJ=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1k199Y-0003be-5f
 for xen-devel@lists.xenproject.org; Thu, 30 Jul 2020 14:03:24 +0000
X-Inumbo-ID: 6c56ba25-d26d-11ea-8d64-bc764e2007e4
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6c56ba25-d26d-11ea-8d64-bc764e2007e4;
 Thu, 30 Jul 2020 14:03:21 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1596117802;
 h=from:to:cc:subject:date:message-id:mime-version:
 content-transfer-encoding;
 bh=4TMVIVMpX1FCyLw1mlF83kaGSqx+wZpyRfgFknW/4SM=;
 b=ifC2DomTsBdWu9wR10c1Q1WE8a/f78S4oyoojQdZBjIT/dVeBMHP2xpL
 lEftT6fnPecJPMbdTcds88MA6B9MQ+DZ/Vkpf5cb+sw+DIJSo7SCAMVqA
 eb3WR8TFn6LN3TewS+vO+c5/3/iIswOfGyV86bJY6AuapUnaLY1MShg1h w=;
Authentication-Results: esa2.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: tc0tkk4xUGO1z45vHMcikuX5ISOT9Bg2FOwespuxsEmUb5qMMGbkLqm4aGJA0SqSDJocDSjw7s
 xgIACT1pDgKQfDy8Og/2n/nQc0KfYKwbVDd+XCcEHMID/zTHf52IYbAeztd2rd78+PZfQzCsGi
 oGIbAW8ZFNJ68dsiP+JLAmZrhupzjaeE8SmAOrl+kMuyvI3417vS/SDGGiY/Q9+u9XH3v/xSrV
 M2IGcBSFnzi+1odhcnZB1kjD+nq56RUt1+ITYhaLvbaxBVp7igt4NhBBMFkw4VhOKRpXZg01b8
 qX4=
X-SBRS: 2.7
X-MesageID: 23557320
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,414,1589256000"; d="scan'208";a="23557320"
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xenproject.org>
Subject: [PATCH] x86/vmx: reorder code in vmx_deliver_posted_intr
Date: Thu, 30 Jul 2020 16:03:09 +0200
Message-ID: <20200730140309.59916-1-roger.pau@citrix.com>
X-Mailer: git-send-email 2.28.0
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin Tian <kevin.tian@intel.com>, Jun Nakajima <jun.nakajima@intel.com>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Jan Beulich <jbeulich@suse.com>, Roger Pau Monne <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Remove the unneeded else branch, which allows reducing the
indentation of a larger block of code and makes the flow of the
function more obvious.

No functional change intended.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
 xen/arch/x86/hvm/vmx/vmx.c | 55 ++++++++++++++++++--------------------
 1 file changed, 26 insertions(+), 29 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index eb54aadfba..7773dcae1b 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -2003,6 +2003,8 @@ static void __vmx_deliver_posted_interrupt(struct vcpu *v)
 
 static void vmx_deliver_posted_intr(struct vcpu *v, u8 vector)
 {
+    struct pi_desc old, new, prev;
+
     if ( pi_test_and_set_pir(vector, &v->arch.hvm.vmx.pi_desc) )
         return;
 
@@ -2014,41 +2016,36 @@ static void vmx_deliver_posted_intr(struct vcpu *v, u8 vector)
          * VMEntry as it used to be.
          */
         pi_set_on(&v->arch.hvm.vmx.pi_desc);
+        vcpu_kick(v);
+        return;
     }
-    else
-    {
-        struct pi_desc old, new, prev;
 
-        prev.control = v->arch.hvm.vmx.pi_desc.control;
+    prev.control = v->arch.hvm.vmx.pi_desc.control;
 
-        do {
-            /*
-             * Currently, we don't support urgent interrupt, all
-             * interrupts are recognized as non-urgent interrupt,
-             * Besides that, if 'ON' is already set, no need to
-             * sent posted-interrupts notification event as well,
-             * according to hardware behavior.
-             */
-            if ( pi_test_sn(&prev) || pi_test_on(&prev) )
-            {
-                vcpu_kick(v);
-                return;
-            }
-
-            old.control = v->arch.hvm.vmx.pi_desc.control &
-                          ~((1 << POSTED_INTR_ON) | (1 << POSTED_INTR_SN));
-            new.control = v->arch.hvm.vmx.pi_desc.control |
-                          (1 << POSTED_INTR_ON);
+    do {
+        /*
+         * Currently we don't support urgent interrupts; all
+         * interrupts are treated as non-urgent.  Besides that, if
+         * 'ON' is already set there is no need to send a
+         * posted-interrupt notification event, according to
+         * hardware behavior.
+         */
+        if ( pi_test_sn(&prev) || pi_test_on(&prev) )
+        {
+            vcpu_kick(v);
+            return;
+        }
 
-            prev.control = cmpxchg(&v->arch.hvm.vmx.pi_desc.control,
-                                   old.control, new.control);
-        } while ( prev.control != old.control );
+        old.control = v->arch.hvm.vmx.pi_desc.control &
+                      ~((1 << POSTED_INTR_ON) | (1 << POSTED_INTR_SN));
+        new.control = v->arch.hvm.vmx.pi_desc.control |
+                      (1 << POSTED_INTR_ON);
 
-        __vmx_deliver_posted_interrupt(v);
-        return;
-    }
+        prev.control = cmpxchg(&v->arch.hvm.vmx.pi_desc.control,
+                               old.control, new.control);
+    } while ( prev.control != old.control );
 
-    vcpu_kick(v);
+    __vmx_deliver_posted_interrupt(v);
 }
 
 static void vmx_sync_pir_to_irr(struct vcpu *v)
-- 
2.28.0



From xen-devel-bounces@lists.xenproject.org Thu Jul 30 14:29:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jul 2020 14:29:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k19Yu-0005Q9-4W; Thu, 30 Jul 2020 14:29:36 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=B5Vg=BJ=xen.org=paul@srs-us1.protection.inumbo.net>)
 id 1k19Ys-0005Pz-C8
 for xen-devel@lists.xenproject.org; Thu, 30 Jul 2020 14:29:34 +0000
X-Inumbo-ID: 16262758-d271-11ea-8d70-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 16262758-d271-11ea-8d70-bc764e2007e4;
 Thu, 30 Jul 2020 14:29:33 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:MIME-Version:
 References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=FMdbCGgYRHvs+c9rdNXmWW9PuSsgA3ySLWLmONq97uE=; b=YXgHjsRgjIcGDBnMAiGfS30OzY
 8f5bWnb6eFqRjEwfpyjGWcM0SodXLJSVbIetVDv3wy7TZ6o0I0pcRm176ldi1bcytDTH2b9V6x1KL
 1OpNy8YdXK/tdfm1BU8UUJZESH63zU4Soz72xcqawRNi8twhfUxVvaN5s5qU/rMJA0OQ=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1k19Yq-0002O7-Qe; Thu, 30 Jul 2020 14:29:32 +0000
Received: from host86-143-223-30.range86-143.btcentralplus.com
 ([86.143.223.30] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1k19Yq-0005aN-Ia; Thu, 30 Jul 2020 14:29:32 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v2 02/10] x86/iommu: add common page-table allocator
Date: Thu, 30 Jul 2020 15:29:18 +0100
Message-Id: <20200730142926.6051-3-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200730142926.6051-1-paul@xen.org>
References: <20200730142926.6051-1-paul@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 Paul Durrant <pdurrant@amazon.com>, Wei Liu <wl@xen.org>,
 Jan Beulich <jbeulich@suse.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Paul Durrant <pdurrant@amazon.com>

Instead of having separate page-table allocation functions in the VT-d
and AMD IOMMU code, use a common allocation function in the general x86
code.

This patch adds a new allocation function, iommu_alloc_pgtable(), for this
purpose. The function adds the page table pages to a list. The pages in this
list are then freed by iommu_free_pgtables(), which is called by
domain_relinquish_resources() after PCI devices have been de-assigned.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Wei Liu <wl@xen.org>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>

v2:
 - This is split out from a larger patch of the same name in v1
---
 xen/arch/x86/domain.c               |  9 +++++-
 xen/drivers/passthrough/x86/iommu.c | 50 +++++++++++++++++++++++++++++
 xen/include/asm-x86/iommu.h         |  7 ++++
 3 files changed, 65 insertions(+), 1 deletion(-)

diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index fee6c3931a..2bc49b1db4 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -2156,7 +2156,8 @@ int domain_relinquish_resources(struct domain *d)
         d->arch.rel_priv = PROG_ ## x; /* Fallthrough */ case PROG_ ## x
 
         enum {
-            PROG_paging = 1,
+            PROG_iommu_pagetables = 1,
+            PROG_paging,
             PROG_vcpu_pagetables,
             PROG_shared,
             PROG_xen,
@@ -2171,6 +2172,12 @@ int domain_relinquish_resources(struct domain *d)
         if ( ret )
             return ret;
 
+    PROGRESS(iommu_pagetables):
+
+        ret = iommu_free_pgtables(d);
+        if ( ret )
+            return ret;
+
     PROGRESS(paging):
 
         /* Tear down paging-assistance stuff. */
diff --git a/xen/drivers/passthrough/x86/iommu.c b/xen/drivers/passthrough/x86/iommu.c
index a12109a1de..c0d4865dd7 100644
--- a/xen/drivers/passthrough/x86/iommu.c
+++ b/xen/drivers/passthrough/x86/iommu.c
@@ -140,6 +140,9 @@ int arch_iommu_domain_init(struct domain *d)
 
     spin_lock_init(&hd->arch.mapping_lock);
 
+    INIT_PAGE_LIST_HEAD(&hd->arch.pgtables.list);
+    spin_lock_init(&hd->arch.pgtables.lock);
+
     return 0;
 }
 
@@ -257,6 +260,53 @@ void __hwdom_init arch_iommu_hwdom_init(struct domain *d)
         return;
 }
 
+int iommu_free_pgtables(struct domain *d)
+{
+    struct domain_iommu *hd = dom_iommu(d);
+    struct page_info *pg;
+
+    while ( (pg = page_list_remove_head(&hd->arch.pgtables.list)) )
+    {
+        free_domheap_page(pg);
+
+        if ( general_preempt_check() )
+            return -ERESTART;
+    }
+
+    return 0;
+}
+
+struct page_info *iommu_alloc_pgtable(struct domain *d)
+{
+    struct domain_iommu *hd = dom_iommu(d);
+    unsigned int memflags = 0;
+    struct page_info *pg;
+    void *p;
+
+#ifdef CONFIG_NUMA
+    if ( hd->node != NUMA_NO_NODE )
+        memflags = MEMF_node(hd->node);
+#endif
+
+    pg = alloc_domheap_page(NULL, memflags);
+    if ( !pg )
+        return NULL;
+
+    p = __map_domain_page(pg);
+    clear_page(p);
+
+    if ( hd->platform_ops->sync_cache )
+        iommu_vcall(hd->platform_ops, sync_cache, p, PAGE_SIZE);
+
+    unmap_domain_page(p);
+
+    spin_lock(&hd->arch.pgtables.lock);
+    page_list_add(pg, &hd->arch.pgtables.list);
+    spin_unlock(&hd->arch.pgtables.lock);
+
+    return pg;
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/include/asm-x86/iommu.h b/xen/include/asm-x86/iommu.h
index 8ce97c981f..31f6d4a8d8 100644
--- a/xen/include/asm-x86/iommu.h
+++ b/xen/include/asm-x86/iommu.h
@@ -46,6 +46,10 @@ typedef uint64_t daddr_t;
 struct arch_iommu
 {
     spinlock_t mapping_lock; /* io page table lock */
+    struct {
+        struct page_list_head list;
+        spinlock_t lock;
+    } pgtables;
 
     union {
         /* Intel VT-d */
@@ -131,6 +135,9 @@ int pi_update_irte(const struct pi_desc *pi_desc, const struct pirq *pirq,
         iommu_vcall(ops, sync_cache, addr, size);       \
 })
 
+int __must_check iommu_free_pgtables(struct domain *d);
+struct page_info * __must_check iommu_alloc_pgtable(struct domain *d);
+
 #endif /* !__ARCH_X86_IOMMU_H__ */
 /*
  * Local variables:
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Jul 30 14:29:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jul 2020 14:29:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k19Yz-0005R6-0h; Thu, 30 Jul 2020 14:29:41 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=B5Vg=BJ=xen.org=paul@srs-us1.protection.inumbo.net>)
 id 1k19Yx-0005Q0-FI
 for xen-devel@lists.xenproject.org; Thu, 30 Jul 2020 14:29:39 +0000
X-Inumbo-ID: 174a6568-d271-11ea-aad0-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 174a6568-d271-11ea-aad0-12813bfff9fa;
 Thu, 30 Jul 2020 14:29:35 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
 In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:Content-Type:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=OjYigD2p3SvkFolBbmgzZMPONZM8wJufvetu1fVmKFs=; b=NkIuc7H3PZvo7MGh/0RvFlqBl6
 4eX+/X6ZsHAecwbN1/dj5MtmcuJc78NKE6KYGXETUGm9BvG99rN/9UeQutdKyMrvjSnv6x2Tk/Db6
 h5+wChq4/HbVMZTs9JuYgjL8wd3B/RzXN5BvkpFESizoMmZoRxlP9TrfZHpD/6HeqTo4=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1k19Ys-0002OW-Sk; Thu, 30 Jul 2020 14:29:34 +0000
Received: from host86-143-223-30.range86-143.btcentralplus.com
 ([86.143.223.30] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1k19Ys-0005aN-Lx; Thu, 30 Jul 2020 14:29:34 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v2 05/10] iommu: remove unused iommu_ops method and tasklet
Date: Thu, 30 Jul 2020 15:29:21 +0100
Message-Id: <20200730142926.6051-6-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200730142926.6051-1-paul@xen.org>
References: <20200730142926.6051-1-paul@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Paul Durrant <pdurrant@amazon.com>, Jan Beulich <jbeulich@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Paul Durrant <pdurrant@amazon.com>

Both the VT-d and AMD IOMMU implementations now use the general x86
IOMMU page-table allocator, and ARM always shares page tables with the
CPU. Hence there is no need to retain the free_page_table() method or
the tasklet that invokes it.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Jan Beulich <jbeulich@suse.com>

v2:
  - New in v2 (split from "add common page-table allocator")
---
 xen/drivers/passthrough/iommu.c | 25 -------------------------
 xen/include/xen/iommu.h         |  2 --
 2 files changed, 27 deletions(-)

diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index 2b1db8022c..660dc5deb2 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -49,10 +49,6 @@ bool_t __read_mostly amd_iommu_perdev_intremap = 1;
 
 DEFINE_PER_CPU(bool_t, iommu_dont_flush_iotlb);
 
-DEFINE_SPINLOCK(iommu_pt_cleanup_lock);
-PAGE_LIST_HEAD(iommu_pt_cleanup_list);
-static struct tasklet iommu_pt_cleanup_tasklet;
-
 static int __init parse_iommu_param(const char *s)
 {
     const char *ss;
@@ -226,9 +222,6 @@ static void iommu_teardown(struct domain *d)
     struct domain_iommu *hd = dom_iommu(d);
 
     iommu_vcall(hd->platform_ops, teardown, d);
-
-    if ( hd->platform_ops->free_page_table )
-        tasklet_schedule(&iommu_pt_cleanup_tasklet);
 }
 
 void iommu_domain_destroy(struct domain *d)
@@ -368,23 +361,6 @@ int iommu_lookup_page(struct domain *d, dfn_t dfn, mfn_t *mfn,
     return iommu_call(hd->platform_ops, lookup_page, d, dfn, mfn, flags);
 }
 
-static void iommu_free_pagetables(void *unused)
-{
-    do {
-        struct page_info *pg;
-
-        spin_lock(&iommu_pt_cleanup_lock);
-        pg = page_list_remove_head(&iommu_pt_cleanup_list);
-        spin_unlock(&iommu_pt_cleanup_lock);
-        if ( !pg )
-            return;
-        iommu_vcall(iommu_get_ops(), free_page_table, pg);
-    } while ( !softirq_pending(smp_processor_id()) );
-
-    tasklet_schedule_on_cpu(&iommu_pt_cleanup_tasklet,
-                            cpumask_cycle(smp_processor_id(), &cpu_online_map));
-}
-
 int iommu_iotlb_flush(struct domain *d, dfn_t dfn, unsigned int page_count,
                       unsigned int flush_flags)
 {
@@ -508,7 +484,6 @@ int __init iommu_setup(void)
 #ifndef iommu_intremap
         printk("Interrupt remapping %sabled\n", iommu_intremap ? "en" : "dis");
 #endif
-        tasklet_init(&iommu_pt_cleanup_tasklet, iommu_free_pagetables, NULL);
     }
 
     return rc;
diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
index 3272874958..1831dc66b0 100644
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -263,8 +263,6 @@ struct iommu_ops {
     int __must_check (*lookup_page)(struct domain *d, dfn_t dfn, mfn_t *mfn,
                                     unsigned int *flags);
 
-    void (*free_page_table)(struct page_info *);
-
 #ifdef CONFIG_X86
     int (*enable_x2apic)(void);
     void (*disable_x2apic)(void);
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Jul 30 14:29:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jul 2020 14:29:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k19Yu-0005QF-Cf; Thu, 30 Jul 2020 14:29:36 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=B5Vg=BJ=xen.org=paul@srs-us1.protection.inumbo.net>)
 id 1k19Ys-0005Q0-JB
 for xen-devel@lists.xenproject.org; Thu, 30 Jul 2020 14:29:34 +0000
X-Inumbo-ID: 163bd49a-d271-11ea-aad0-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 163bd49a-d271-11ea-aad0-12813bfff9fa;
 Thu, 30 Jul 2020 14:29:33 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:MIME-Version:
 Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:Content-ID:
 Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc
 :Resent-Message-ID:In-Reply-To:References:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=LjpluPZoZankaP2gyhhW4lCQhJiecEHK14EhRGogexM=; b=IsSt/znRAhCy5OdzgtHlTnCxtD
 RQQHEAUTAlC3XcOn9YA/XuxHsVUNnntgIZWQ7AjzPvzDIFJ2xTTX29eE2PWUHl/9cDg4S3jlOti2e
 DpX2tlm77HlW328pKjgY2vrUSL8pIVT2lJQ5OeO653BXLTSp6FQpfaSItTBK6ZBu/KQI=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1k19Yo-0002O3-Re; Thu, 30 Jul 2020 14:29:30 +0000
Received: from host86-143-223-30.range86-143.btcentralplus.com
 ([86.143.223.30] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1k19Yo-0005aN-HR; Thu, 30 Jul 2020 14:29:30 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v2 00/10] IOMMU cleanup
Date: Thu, 30 Jul 2020 15:29:16 +0100
Message-Id: <20200730142926.6051-1-paul@xen.org>
X-Mailer: git-send-email 2.20.1
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin Tian <kevin.tian@intel.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Jun Nakajima <jun.nakajima@intel.com>, Wei Liu <wl@xen.org>,
 Paul Durrant <paul@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Paul Durrant <pdurrant@amazon.com>, Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Lukasz Hawrylko <lukasz.hawrylko@linux.intel.com>,
 Jan Beulich <jbeulich@suse.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Paul Durrant <pdurrant@amazon.com>

This is v2 of my original 6-patch series. The original patch #3 has
been dropped and the original patch #2 carved up into 4. There are
two additional patches that were not in v1 in any form.

Paul Durrant (10):
  x86/iommu: re-arrange arch_iommu to separate common fields...
  x86/iommu: add common page-table allocator
  x86/iommu: convert VT-d code to use new page table allocator
  x86/iommu: convert AMD IOMMU code to use new page table allocator
  iommu: remove unused iommu_ops method and tasklet
  iommu: flush I/O TLB if iommu_map() or iommu_unmap() fail
  iommu: make map, unmap and flush all take both an order and a count
  remove remaining uses of iommu_legacy_map/unmap
  iommu: remove the share_p2m operation
  iommu: stop calling IOMMU page tables 'p2m tables'

 xen/arch/arm/p2m.c                          |   2 +-
 xen/arch/x86/domain.c                       |   9 +-
 xen/arch/x86/mm.c                           |  21 +-
 xen/arch/x86/mm/p2m-ept.c                   |  20 +-
 xen/arch/x86/mm/p2m-pt.c                    |  15 +-
 xen/arch/x86/mm/p2m.c                       |  29 ++-
 xen/arch/x86/tboot.c                        |   4 +-
 xen/arch/x86/x86_64/mm.c                    |  27 ++-
 xen/common/grant_table.c                    |  34 ++-
 xen/common/memory.c                         |   9 +-
 xen/drivers/passthrough/amd/iommu.h         |  20 +-
 xen/drivers/passthrough/amd/iommu_guest.c   |   8 +-
 xen/drivers/passthrough/amd/iommu_map.c     |  26 +-
 xen/drivers/passthrough/amd/pci_amd_iommu.c | 110 +++------
 xen/drivers/passthrough/arm/ipmmu-vmsa.c    |   2 +-
 xen/drivers/passthrough/arm/smmu.c          |   2 +-
 xen/drivers/passthrough/iommu.c             | 118 ++--------
 xen/drivers/passthrough/vtd/iommu.c         | 248 +++++++++-----------
 xen/drivers/passthrough/x86/iommu.c         |  53 ++++-
 xen/include/asm-x86/iommu.h                 |  34 ++-
 xen/include/xen/iommu.h                     |  37 +--
 21 files changed, 391 insertions(+), 437 deletions(-)
---
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <george.dunlap@citrix.com>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Julien Grall <julien@xen.org>
Cc: Jun Nakajima <jun.nakajima@intel.com>
Cc: Kevin Tian <kevin.tian@intel.com>
Cc: Lukasz Hawrylko <lukasz.hawrylko@linux.intel.com>
Cc: Paul Durrant <paul@xen.org>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Cc: Wei Liu <wl@xen.org>
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Jul 30 14:29:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jul 2020 14:29:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k19Yy-0005Qy-OE; Thu, 30 Jul 2020 14:29:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=B5Vg=BJ=xen.org=paul@srs-us1.protection.inumbo.net>)
 id 1k19Yx-0005Pz-6U
 for xen-devel@lists.xenproject.org; Thu, 30 Jul 2020 14:29:39 +0000
X-Inumbo-ID: 16262759-d271-11ea-8d70-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 16262759-d271-11ea-8d70-bc764e2007e4;
 Thu, 30 Jul 2020 14:29:33 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:MIME-Version:
 References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=gp6KEn5GkdD039Rg54pQRN4h4pTnAudvyLFE5D9IDcA=; b=tbRengnDM+Us7q/By31r0l4Bd9
 111ZrfMT4kS1Z0DEcSTKcaibCz0izNxzuYDSioV1QO1VOWBC1mjBXiFBH1fpSlfisC+yvCXyvYREl
 1p3cBaleEHCqfxZQFN8zQZRX5qUBDfjNr40ftyXsFgvlxZ2OG2kzYG1jCcPSAMWwCYtw=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1k19Yp-0002O5-Sd; Thu, 30 Jul 2020 14:29:31 +0000
Received: from host86-143-223-30.range86-143.btcentralplus.com
 ([86.143.223.30] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1k19Yp-0005aN-Kj; Thu, 30 Jul 2020 14:29:31 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v2 01/10] x86/iommu: re-arrange arch_iommu to separate common
 fields...
Date: Thu, 30 Jul 2020 15:29:17 +0100
Message-Id: <20200730142926.6051-2-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200730142926.6051-1-paul@xen.org>
References: <20200730142926.6051-1-paul@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin Tian <kevin.tian@intel.com>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Paul Durrant <pdurrant@amazon.com>,
 Lukasz Hawrylko <lukasz.hawrylko@linux.intel.com>,
 Jan Beulich <jbeulich@suse.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Paul Durrant <pdurrant@amazon.com>

... from those specific to VT-d or AMD IOMMU, and put the latter in a union.

There is no functional change in this patch, although the initialization of
the 'mapped_rmrrs' list occurs slightly later in iommu_domain_init() since
it is now done (correctly) in VT-d specific code rather than in general x86
code.

NOTE: I have not combined the AMD IOMMU 'root_table' and VT-d 'pgd_maddr'
      fields even though they perform essentially the same function. The
      concept of 'root table' in the VT-d code is different from that in the
      AMD code so attempting to use a common name will probably only serve
      to confuse the reader.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Lukasz Hawrylko <lukasz.hawrylko@linux.intel.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Wei Liu <wl@xen.org>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>
Cc: Kevin Tian <kevin.tian@intel.com>

v2:
 - s/amd_iommu/amd
 - Definitions still left inline as re-arrangement into implementation
   headers is non-trivial
 - Also s/u64/uint64_t and s/int/unsigned int
---
 xen/arch/x86/tboot.c                        |  4 +-
 xen/drivers/passthrough/amd/iommu_guest.c   |  8 ++--
 xen/drivers/passthrough/amd/iommu_map.c     | 14 +++---
 xen/drivers/passthrough/amd/pci_amd_iommu.c | 35 +++++++-------
 xen/drivers/passthrough/vtd/iommu.c         | 53 +++++++++++----------
 xen/drivers/passthrough/x86/iommu.c         |  1 -
 xen/include/asm-x86/iommu.h                 | 27 +++++++----
 7 files changed, 78 insertions(+), 64 deletions(-)

diff --git a/xen/arch/x86/tboot.c b/xen/arch/x86/tboot.c
index 320e06f129..e66b0940c4 100644
--- a/xen/arch/x86/tboot.c
+++ b/xen/arch/x86/tboot.c
@@ -230,8 +230,8 @@ static void tboot_gen_domain_integrity(const uint8_t key[TB_KEY_SIZE],
         {
             const struct domain_iommu *dio = dom_iommu(d);
 
-            update_iommu_mac(&ctx, dio->arch.pgd_maddr,
-                             agaw_to_level(dio->arch.agaw));
+            update_iommu_mac(&ctx, dio->arch.vtd.pgd_maddr,
+                             agaw_to_level(dio->arch.vtd.agaw));
         }
     }
 
diff --git a/xen/drivers/passthrough/amd/iommu_guest.c b/xen/drivers/passthrough/amd/iommu_guest.c
index 014a72a54b..30b7353cd6 100644
--- a/xen/drivers/passthrough/amd/iommu_guest.c
+++ b/xen/drivers/passthrough/amd/iommu_guest.c
@@ -50,12 +50,12 @@ static uint16_t guest_bdf(struct domain *d, uint16_t machine_bdf)
 
 static inline struct guest_iommu *domain_iommu(struct domain *d)
 {
-    return dom_iommu(d)->arch.g_iommu;
+    return dom_iommu(d)->arch.amd.g_iommu;
 }
 
 static inline struct guest_iommu *vcpu_iommu(struct vcpu *v)
 {
-    return dom_iommu(v->domain)->arch.g_iommu;
+    return dom_iommu(v->domain)->arch.amd.g_iommu;
 }
 
 static void guest_iommu_enable(struct guest_iommu *iommu)
@@ -823,7 +823,7 @@ int guest_iommu_init(struct domain* d)
     guest_iommu_reg_init(iommu);
     iommu->mmio_base = ~0ULL;
     iommu->domain = d;
-    hd->arch.g_iommu = iommu;
+    hd->arch.amd.g_iommu = iommu;
 
     tasklet_init(&iommu->cmd_buffer_tasklet, guest_iommu_process_command, d);
 
@@ -845,5 +845,5 @@ void guest_iommu_destroy(struct domain *d)
     tasklet_kill(&iommu->cmd_buffer_tasklet);
     xfree(iommu);
 
-    dom_iommu(d)->arch.g_iommu = NULL;
+    dom_iommu(d)->arch.amd.g_iommu = NULL;
 }
diff --git a/xen/drivers/passthrough/amd/iommu_map.c b/xen/drivers/passthrough/amd/iommu_map.c
index 93e96cd69c..47b4472e8a 100644
--- a/xen/drivers/passthrough/amd/iommu_map.c
+++ b/xen/drivers/passthrough/amd/iommu_map.c
@@ -180,8 +180,8 @@ static int iommu_pde_from_dfn(struct domain *d, unsigned long dfn,
     struct page_info *table;
     const struct domain_iommu *hd = dom_iommu(d);
 
-    table = hd->arch.root_table;
-    level = hd->arch.paging_mode;
+    table = hd->arch.amd.root_table;
+    level = hd->arch.amd.paging_mode;
 
     BUG_ON( table == NULL || level < 1 || level > 6 );
 
@@ -325,7 +325,7 @@ int amd_iommu_unmap_page(struct domain *d, dfn_t dfn,
 
     spin_lock(&hd->arch.mapping_lock);
 
-    if ( !hd->arch.root_table )
+    if ( !hd->arch.amd.root_table )
     {
         spin_unlock(&hd->arch.mapping_lock);
         return 0;
@@ -450,7 +450,7 @@ int __init amd_iommu_quarantine_init(struct domain *d)
     unsigned int level = amd_iommu_get_paging_mode(end_gfn);
     struct amd_iommu_pte *table;
 
-    if ( hd->arch.root_table )
+    if ( hd->arch.amd.root_table )
     {
         ASSERT_UNREACHABLE();
         return 0;
@@ -458,11 +458,11 @@ int __init amd_iommu_quarantine_init(struct domain *d)
 
     spin_lock(&hd->arch.mapping_lock);
 
-    hd->arch.root_table = alloc_amd_iommu_pgtable();
-    if ( !hd->arch.root_table )
+    hd->arch.amd.root_table = alloc_amd_iommu_pgtable();
+    if ( !hd->arch.amd.root_table )
         goto out;
 
-    table = __map_domain_page(hd->arch.root_table);
+    table = __map_domain_page(hd->arch.amd.root_table);
     while ( level )
     {
         struct page_info *pg;
diff --git a/xen/drivers/passthrough/amd/pci_amd_iommu.c b/xen/drivers/passthrough/amd/pci_amd_iommu.c
index 5f5f4a2eac..c27bfbd48e 100644
--- a/xen/drivers/passthrough/amd/pci_amd_iommu.c
+++ b/xen/drivers/passthrough/amd/pci_amd_iommu.c
@@ -91,7 +91,8 @@ static void amd_iommu_setup_domain_device(
     u8 bus = pdev->bus;
     const struct domain_iommu *hd = dom_iommu(domain);
 
-    BUG_ON( !hd->arch.root_table || !hd->arch.paging_mode ||
+    BUG_ON( !hd->arch.amd.root_table ||
+            !hd->arch.amd.paging_mode ||
             !iommu->dev_table.buffer );
 
     if ( iommu_hwdom_passthrough && is_hardware_domain(domain) )
@@ -110,8 +111,8 @@ static void amd_iommu_setup_domain_device(
 
         /* bind DTE to domain page-tables */
         amd_iommu_set_root_page_table(
-            dte, page_to_maddr(hd->arch.root_table), domain->domain_id,
-            hd->arch.paging_mode, valid);
+            dte, page_to_maddr(hd->arch.amd.root_table),
+            domain->domain_id, hd->arch.amd.paging_mode, valid);
 
         /* Undo what amd_iommu_disable_domain_device() may have done. */
         ivrs_dev = &get_ivrs_mappings(iommu->seg)[req_id];
@@ -131,8 +132,8 @@ static void amd_iommu_setup_domain_device(
                         "root table = %#"PRIx64", "
                         "domain = %d, paging mode = %d\n",
                         req_id, pdev->type,
-                        page_to_maddr(hd->arch.root_table),
-                        domain->domain_id, hd->arch.paging_mode);
+                        page_to_maddr(hd->arch.amd.root_table),
+                        domain->domain_id, hd->arch.amd.paging_mode);
     }
 
     spin_unlock_irqrestore(&iommu->lock, flags);
@@ -206,10 +207,10 @@ static int iov_enable_xt(void)
 
 int amd_iommu_alloc_root(struct domain_iommu *hd)
 {
-    if ( unlikely(!hd->arch.root_table) )
+    if ( unlikely(!hd->arch.amd.root_table) )
     {
-        hd->arch.root_table = alloc_amd_iommu_pgtable();
-        if ( !hd->arch.root_table )
+        hd->arch.amd.root_table = alloc_amd_iommu_pgtable();
+        if ( !hd->arch.amd.root_table )
             return -ENOMEM;
     }
 
@@ -239,7 +240,7 @@ static int amd_iommu_domain_init(struct domain *d)
      *   physical address space we give it, but this isn't known yet so use 4
      *   unilaterally.
      */
-    hd->arch.paging_mode = amd_iommu_get_paging_mode(
+    hd->arch.amd.paging_mode = amd_iommu_get_paging_mode(
         is_hvm_domain(d)
         ? 1ul << (DEFAULT_DOMAIN_ADDRESS_WIDTH - PAGE_SHIFT)
         : get_upper_mfn_bound() + 1);
@@ -305,7 +306,7 @@ static void amd_iommu_disable_domain_device(const struct domain *domain,
         AMD_IOMMU_DEBUG("Disable: device id = %#x, "
                         "domain = %d, paging mode = %d\n",
                         req_id,  domain->domain_id,
-                        dom_iommu(domain)->arch.paging_mode);
+                        dom_iommu(domain)->arch.amd.paging_mode);
     }
     spin_unlock_irqrestore(&iommu->lock, flags);
 
@@ -420,10 +421,11 @@ static void deallocate_iommu_page_tables(struct domain *d)
     struct domain_iommu *hd = dom_iommu(d);
 
     spin_lock(&hd->arch.mapping_lock);
-    if ( hd->arch.root_table )
+    if ( hd->arch.amd.root_table )
     {
-        deallocate_next_page_table(hd->arch.root_table, hd->arch.paging_mode);
-        hd->arch.root_table = NULL;
+        deallocate_next_page_table(hd->arch.amd.root_table,
+                                   hd->arch.amd.paging_mode);
+        hd->arch.amd.root_table = NULL;
     }
     spin_unlock(&hd->arch.mapping_lock);
 }
@@ -598,11 +600,12 @@ static void amd_dump_p2m_table(struct domain *d)
 {
     const struct domain_iommu *hd = dom_iommu(d);
 
-    if ( !hd->arch.root_table )
+    if ( !hd->arch.amd.root_table )
         return;
 
-    printk("p2m table has %d levels\n", hd->arch.paging_mode);
-    amd_dump_p2m_table_level(hd->arch.root_table, hd->arch.paging_mode, 0, 0);
+    printk("p2m table has %d levels\n", hd->arch.amd.paging_mode);
+    amd_dump_p2m_table_level(hd->arch.amd.root_table,
+                             hd->arch.amd.paging_mode, 0, 0);
 }
 
 static const struct iommu_ops __initconstrel _iommu_ops = {
diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index deaeab095d..94e0455a4d 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -257,20 +257,20 @@ static u64 bus_to_context_maddr(struct vtd_iommu *iommu, u8 bus)
 static u64 addr_to_dma_page_maddr(struct domain *domain, u64 addr, int alloc)
 {
     struct domain_iommu *hd = dom_iommu(domain);
-    int addr_width = agaw_to_width(hd->arch.agaw);
+    int addr_width = agaw_to_width(hd->arch.vtd.agaw);
     struct dma_pte *parent, *pte = NULL;
-    int level = agaw_to_level(hd->arch.agaw);
+    int level = agaw_to_level(hd->arch.vtd.agaw);
     int offset;
     u64 pte_maddr = 0;
 
     addr &= (((u64)1) << addr_width) - 1;
     ASSERT(spin_is_locked(&hd->arch.mapping_lock));
-    if ( !hd->arch.pgd_maddr &&
+    if ( !hd->arch.vtd.pgd_maddr &&
          (!alloc ||
-          ((hd->arch.pgd_maddr = alloc_pgtable_maddr(1, hd->node)) == 0)) )
+          ((hd->arch.vtd.pgd_maddr = alloc_pgtable_maddr(1, hd->node)) == 0)) )
         goto out;
 
-    parent = (struct dma_pte *)map_vtd_domain_page(hd->arch.pgd_maddr);
+    parent = (struct dma_pte *)map_vtd_domain_page(hd->arch.vtd.pgd_maddr);
     while ( level > 1 )
     {
         offset = address_level_offset(addr, level);
@@ -593,7 +593,7 @@ static int __must_check iommu_flush_iotlb(struct domain *d, dfn_t dfn,
     {
         iommu = drhd->iommu;
 
-        if ( !test_bit(iommu->index, &hd->arch.iommu_bitmap) )
+        if ( !test_bit(iommu->index, &hd->arch.vtd.iommu_bitmap) )
             continue;
 
         flush_dev_iotlb = !!find_ats_dev_drhd(iommu);
@@ -1278,7 +1278,10 @@ void __init iommu_free(struct acpi_drhd_unit *drhd)
 
 static int intel_iommu_domain_init(struct domain *d)
 {
-    dom_iommu(d)->arch.agaw = width_to_agaw(DEFAULT_DOMAIN_ADDRESS_WIDTH);
+    struct domain_iommu *hd = dom_iommu(d);
+
+    hd->arch.vtd.agaw = width_to_agaw(DEFAULT_DOMAIN_ADDRESS_WIDTH);
+    INIT_LIST_HEAD(&hd->arch.vtd.mapped_rmrrs);
 
     return 0;
 }
@@ -1375,10 +1378,10 @@ int domain_context_mapping_one(
         spin_lock(&hd->arch.mapping_lock);
 
         /* Ensure we have pagetables allocated down to leaf PTE. */
-        if ( hd->arch.pgd_maddr == 0 )
+        if ( hd->arch.vtd.pgd_maddr == 0 )
         {
             addr_to_dma_page_maddr(domain, 0, 1);
-            if ( hd->arch.pgd_maddr == 0 )
+            if ( hd->arch.vtd.pgd_maddr == 0 )
             {
             nomem:
                 spin_unlock(&hd->arch.mapping_lock);
@@ -1389,7 +1392,7 @@ int domain_context_mapping_one(
         }
 
         /* Skip top levels of page tables for 2- and 3-level DRHDs. */
-        pgd_maddr = hd->arch.pgd_maddr;
+        pgd_maddr = hd->arch.vtd.pgd_maddr;
         for ( agaw = level_to_agaw(4);
               agaw != level_to_agaw(iommu->nr_pt_levels);
               agaw-- )
@@ -1443,7 +1446,7 @@ int domain_context_mapping_one(
     if ( rc > 0 )
         rc = 0;
 
-    set_bit(iommu->index, &hd->arch.iommu_bitmap);
+    set_bit(iommu->index, &hd->arch.vtd.iommu_bitmap);
 
     unmap_vtd_domain_page(context_entries);
 
@@ -1714,7 +1717,7 @@ static int domain_context_unmap(struct domain *domain, u8 devfn,
     {
         int iommu_domid;
 
-        clear_bit(iommu->index, &dom_iommu(domain)->arch.iommu_bitmap);
+        clear_bit(iommu->index, &dom_iommu(domain)->arch.vtd.iommu_bitmap);
 
         iommu_domid = domain_iommu_domid(domain, iommu);
         if ( iommu_domid == -1 )
@@ -1739,7 +1742,7 @@ static void iommu_domain_teardown(struct domain *d)
     if ( list_empty(&acpi_drhd_units) )
         return;
 
-    list_for_each_entry_safe ( mrmrr, tmp, &hd->arch.mapped_rmrrs, list )
+    list_for_each_entry_safe ( mrmrr, tmp, &hd->arch.vtd.mapped_rmrrs, list )
     {
         list_del(&mrmrr->list);
         xfree(mrmrr);
@@ -1751,8 +1754,9 @@ static void iommu_domain_teardown(struct domain *d)
         return;
 
     spin_lock(&hd->arch.mapping_lock);
-    iommu_free_pagetable(hd->arch.pgd_maddr, agaw_to_level(hd->arch.agaw));
-    hd->arch.pgd_maddr = 0;
+    iommu_free_pagetable(hd->arch.vtd.pgd_maddr,
+                         agaw_to_level(hd->arch.vtd.agaw));
+    hd->arch.vtd.pgd_maddr = 0;
     spin_unlock(&hd->arch.mapping_lock);
 }
 
@@ -1892,7 +1896,7 @@ static void iommu_set_pgd(struct domain *d)
     mfn_t pgd_mfn;
 
     pgd_mfn = pagetable_get_mfn(p2m_get_pagetable(p2m_get_hostp2m(d)));
-    dom_iommu(d)->arch.pgd_maddr =
+    dom_iommu(d)->arch.vtd.pgd_maddr =
         pagetable_get_paddr(pagetable_from_mfn(pgd_mfn));
 }
 
@@ -1912,7 +1916,7 @@ static int rmrr_identity_mapping(struct domain *d, bool_t map,
      * No need to acquire hd->arch.mapping_lock: Both insertion and removal
      * get done while holding pcidevs_lock.
      */
-    list_for_each_entry( mrmrr, &hd->arch.mapped_rmrrs, list )
+    list_for_each_entry( mrmrr, &hd->arch.vtd.mapped_rmrrs, list )
     {
         if ( mrmrr->base == rmrr->base_address &&
              mrmrr->end == rmrr->end_address )
@@ -1959,7 +1963,7 @@ static int rmrr_identity_mapping(struct domain *d, bool_t map,
     mrmrr->base = rmrr->base_address;
     mrmrr->end = rmrr->end_address;
     mrmrr->count = 1;
-    list_add_tail(&mrmrr->list, &hd->arch.mapped_rmrrs);
+    list_add_tail(&mrmrr->list, &hd->arch.vtd.mapped_rmrrs);
 
     return 0;
 }
@@ -2657,8 +2661,9 @@ static void vtd_dump_p2m_table(struct domain *d)
         return;
 
     hd = dom_iommu(d);
-    printk("p2m table has %d levels\n", agaw_to_level(hd->arch.agaw));
-    vtd_dump_p2m_table_level(hd->arch.pgd_maddr, agaw_to_level(hd->arch.agaw), 0, 0);
+    printk("p2m table has %d levels\n", agaw_to_level(hd->arch.vtd.agaw));
+    vtd_dump_p2m_table_level(hd->arch.vtd.pgd_maddr,
+                             agaw_to_level(hd->arch.vtd.agaw), 0, 0);
 }
 
 static int __init intel_iommu_quarantine_init(struct domain *d)
@@ -2669,7 +2674,7 @@ static int __init intel_iommu_quarantine_init(struct domain *d)
     unsigned int level = agaw_to_level(agaw);
     int rc;
 
-    if ( hd->arch.pgd_maddr )
+    if ( hd->arch.vtd.pgd_maddr )
     {
         ASSERT_UNREACHABLE();
         return 0;
@@ -2677,11 +2682,11 @@ static int __init intel_iommu_quarantine_init(struct domain *d)
 
     spin_lock(&hd->arch.mapping_lock);
 
-    hd->arch.pgd_maddr = alloc_pgtable_maddr(1, hd->node);
-    if ( !hd->arch.pgd_maddr )
+    hd->arch.vtd.pgd_maddr = alloc_pgtable_maddr(1, hd->node);
+    if ( !hd->arch.vtd.pgd_maddr )
         goto out;
 
-    parent = map_vtd_domain_page(hd->arch.pgd_maddr);
+    parent = map_vtd_domain_page(hd->arch.vtd.pgd_maddr);
     while ( level )
     {
         uint64_t maddr;
diff --git a/xen/drivers/passthrough/x86/iommu.c b/xen/drivers/passthrough/x86/iommu.c
index 3d7670e8c6..a12109a1de 100644
--- a/xen/drivers/passthrough/x86/iommu.c
+++ b/xen/drivers/passthrough/x86/iommu.c
@@ -139,7 +139,6 @@ int arch_iommu_domain_init(struct domain *d)
     struct domain_iommu *hd = dom_iommu(d);
 
     spin_lock_init(&hd->arch.mapping_lock);
-    INIT_LIST_HEAD(&hd->arch.mapped_rmrrs);
 
     return 0;
 }
diff --git a/xen/include/asm-x86/iommu.h b/xen/include/asm-x86/iommu.h
index 6c9d5e5632..8ce97c981f 100644
--- a/xen/include/asm-x86/iommu.h
+++ b/xen/include/asm-x86/iommu.h
@@ -45,16 +45,23 @@ typedef uint64_t daddr_t;
 
 struct arch_iommu
 {
-    u64 pgd_maddr;                 /* io page directory machine address */
-    spinlock_t mapping_lock;            /* io page table lock */
-    int agaw;     /* adjusted guest address width, 0 is level 2 30-bit */
-    u64 iommu_bitmap;              /* bitmap of iommu(s) that the domain uses */
-    struct list_head mapped_rmrrs;
-
-    /* amd iommu support */
-    int paging_mode;
-    struct page_info *root_table;
-    struct guest_iommu *g_iommu;
+    spinlock_t mapping_lock; /* io page table lock */
+
+    union {
+        /* Intel VT-d */
+        struct {
+            uint64_t pgd_maddr; /* io page directory machine address */
+            unsigned int agaw; /* adjusted guest address width, 0 is level 2 30-bit */
+            uint64_t iommu_bitmap; /* bitmap of iommu(s) that the domain uses */
+            struct list_head mapped_rmrrs;
+        } vtd;
+        /* AMD IOMMU */
+        struct {
+            unsigned int paging_mode;
+            struct page_info *root_table;
+            struct guest_iommu *g_iommu;
+        } amd;
+    };
 };
 
 extern struct iommu_ops iommu_ops;
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Jul 30 14:29:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jul 2020 14:29:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k19Z3-0005TD-9F; Thu, 30 Jul 2020 14:29:45 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=B5Vg=BJ=xen.org=paul@srs-us1.protection.inumbo.net>)
 id 1k19Z2-0005Pz-6b
 for xen-devel@lists.xenproject.org; Thu, 30 Jul 2020 14:29:44 +0000
X-Inumbo-ID: 1626275a-d271-11ea-8d70-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1626275a-d271-11ea-8d70-bc764e2007e4;
 Thu, 30 Jul 2020 14:29:34 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
 In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:Content-Type:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=c4JuIE359pliqogW7rVMUNX38I11naurubgFqG5WTDA=; b=b5pisPLstVNPcTdy2IIghU9Cp/
 kcjpyKxG8xz4JoI97ysXo3CY/tMbGjdZxvMIR6GMkWMwd+/LiRCQPGdl/tHrlFbHsYLEZ6s5AqilC
 kImi4L3BnNsSmmRmW422BhsdUKMW57lAzVexK1I4RDB31HYgK+g1g43BoPRawuKBolFQ=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1k19Ys-0002OK-8c; Thu, 30 Jul 2020 14:29:34 +0000
Received: from host86-143-223-30.range86-143.btcentralplus.com
 ([86.143.223.30] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1k19Ys-0005aN-1H; Thu, 30 Jul 2020 14:29:34 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v2 04/10] x86/iommu: convert AMD IOMMU code to use new page
 table allocator
Date: Thu, 30 Jul 2020 15:29:20 +0100
Message-Id: <20200730142926.6051-5-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200730142926.6051-1-paul@xen.org>
References: <20200730142926.6051-1-paul@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 Paul Durrant <pdurrant@amazon.com>, Jan Beulich <jbeulich@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Paul Durrant <pdurrant@amazon.com>

This patch converts the AMD IOMMU code to use the new page table allocator
function. This allows all the freeing code to be removed (since it is now
handled by the general x86 code), which reduces TLB and cache thrashing as
well as shortening the code.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>

v2:
  - New in v2 (split from "add common page-table allocator")
---
 xen/drivers/passthrough/amd/iommu.h         | 18 +----
 xen/drivers/passthrough/amd/iommu_map.c     | 10 +--
 xen/drivers/passthrough/amd/pci_amd_iommu.c | 75 +++------------------
 3 files changed, 16 insertions(+), 87 deletions(-)

diff --git a/xen/drivers/passthrough/amd/iommu.h b/xen/drivers/passthrough/amd/iommu.h
index 3489c2a015..e2d174f3b4 100644
--- a/xen/drivers/passthrough/amd/iommu.h
+++ b/xen/drivers/passthrough/amd/iommu.h
@@ -226,7 +226,7 @@ int __must_check amd_iommu_map_page(struct domain *d, dfn_t dfn,
                                     unsigned int *flush_flags);
 int __must_check amd_iommu_unmap_page(struct domain *d, dfn_t dfn,
                                       unsigned int *flush_flags);
-int __must_check amd_iommu_alloc_root(struct domain_iommu *hd);
+int __must_check amd_iommu_alloc_root(struct domain *d);
 int amd_iommu_reserve_domain_unity_map(struct domain *domain,
                                        paddr_t phys_addr, unsigned long size,
                                        int iw, int ir);
@@ -356,22 +356,6 @@ static inline int amd_iommu_get_paging_mode(unsigned long max_frames)
     return level;
 }
 
-static inline struct page_info *alloc_amd_iommu_pgtable(void)
-{
-    struct page_info *pg = alloc_domheap_page(NULL, 0);
-
-    if ( pg )
-        clear_domain_page(page_to_mfn(pg));
-
-    return pg;
-}
-
-static inline void free_amd_iommu_pgtable(struct page_info *pg)
-{
-    if ( pg )
-        free_domheap_page(pg);
-}
-
 static inline void *__alloc_amd_iommu_tables(unsigned int order)
 {
     return alloc_xenheap_pages(order, 0);
diff --git a/xen/drivers/passthrough/amd/iommu_map.c b/xen/drivers/passthrough/amd/iommu_map.c
index 47b4472e8a..54b991294a 100644
--- a/xen/drivers/passthrough/amd/iommu_map.c
+++ b/xen/drivers/passthrough/amd/iommu_map.c
@@ -217,7 +217,7 @@ static int iommu_pde_from_dfn(struct domain *d, unsigned long dfn,
             mfn = next_table_mfn;
 
             /* allocate lower level page table */
-            table = alloc_amd_iommu_pgtable();
+            table = iommu_alloc_pgtable(d);
             if ( table == NULL )
             {
                 AMD_IOMMU_DEBUG("Cannot allocate I/O page table\n");
@@ -248,7 +248,7 @@ static int iommu_pde_from_dfn(struct domain *d, unsigned long dfn,
 
             if ( next_table_mfn == 0 )
             {
-                table = alloc_amd_iommu_pgtable();
+                table = iommu_alloc_pgtable(d);
                 if ( table == NULL )
                 {
                     AMD_IOMMU_DEBUG("Cannot allocate I/O page table\n");
@@ -286,7 +286,7 @@ int amd_iommu_map_page(struct domain *d, dfn_t dfn, mfn_t mfn,
 
     spin_lock(&hd->arch.mapping_lock);
 
-    rc = amd_iommu_alloc_root(hd);
+    rc = amd_iommu_alloc_root(d);
     if ( rc )
     {
         spin_unlock(&hd->arch.mapping_lock);
@@ -458,7 +458,7 @@ int __init amd_iommu_quarantine_init(struct domain *d)
 
     spin_lock(&hd->arch.mapping_lock);
 
-    hd->arch.amd.root_table = alloc_amd_iommu_pgtable();
+    hd->arch.amd.root_table = iommu_alloc_pgtable(d);
     if ( !hd->arch.amd.root_table )
         goto out;
 
@@ -473,7 +473,7 @@ int __init amd_iommu_quarantine_init(struct domain *d)
          * page table pages, and the resulting allocations are always
          * zeroed.
          */
-        pg = alloc_amd_iommu_pgtable();
+        pg = iommu_alloc_pgtable(d);
         if ( !pg )
             break;
 
diff --git a/xen/drivers/passthrough/amd/pci_amd_iommu.c b/xen/drivers/passthrough/amd/pci_amd_iommu.c
index c27bfbd48e..d79668f948 100644
--- a/xen/drivers/passthrough/amd/pci_amd_iommu.c
+++ b/xen/drivers/passthrough/amd/pci_amd_iommu.c
@@ -205,11 +205,13 @@ static int iov_enable_xt(void)
     return 0;
 }
 
-int amd_iommu_alloc_root(struct domain_iommu *hd)
+int amd_iommu_alloc_root(struct domain *d)
 {
+    struct domain_iommu *hd = dom_iommu(d);
+
     if ( unlikely(!hd->arch.amd.root_table) )
     {
-        hd->arch.amd.root_table = alloc_amd_iommu_pgtable();
+        hd->arch.amd.root_table = iommu_alloc_pgtable(d);
         if ( !hd->arch.amd.root_table )
             return -ENOMEM;
     }
@@ -217,12 +219,13 @@ int amd_iommu_alloc_root(struct domain_iommu *hd)
     return 0;
 }
 
-static int __must_check allocate_domain_resources(struct domain_iommu *hd)
+static int __must_check allocate_domain_resources(struct domain *d)
 {
+    struct domain_iommu *hd = dom_iommu(d);
     int rc;
 
     spin_lock(&hd->arch.mapping_lock);
-    rc = amd_iommu_alloc_root(hd);
+    rc = amd_iommu_alloc_root(d);
     spin_unlock(&hd->arch.mapping_lock);
 
     return rc;
@@ -254,7 +257,7 @@ static void __hwdom_init amd_iommu_hwdom_init(struct domain *d)
 {
     const struct amd_iommu *iommu;
 
-    if ( allocate_domain_resources(dom_iommu(d)) )
+    if ( allocate_domain_resources(d) )
         BUG();
 
     for_each_amd_iommu ( iommu )
@@ -323,7 +326,6 @@ static int reassign_device(struct domain *source, struct domain *target,
 {
     struct amd_iommu *iommu;
     int bdf, rc;
-    struct domain_iommu *t = dom_iommu(target);
 
     bdf = PCI_BDF2(pdev->bus, pdev->devfn);
     iommu = find_iommu_for_device(pdev->seg, bdf);
@@ -344,7 +346,7 @@ static int reassign_device(struct domain *source, struct domain *target,
         pdev->domain = target;
     }
 
-    rc = allocate_domain_resources(t);
+    rc = allocate_domain_resources(target);
     if ( rc )
         return rc;
 
@@ -376,65 +378,9 @@ static int amd_iommu_assign_device(struct domain *d, u8 devfn,
     return reassign_device(pdev->domain, d, devfn, pdev);
 }
 
-static void deallocate_next_page_table(struct page_info *pg, int level)
-{
-    PFN_ORDER(pg) = level;
-    spin_lock(&iommu_pt_cleanup_lock);
-    page_list_add_tail(pg, &iommu_pt_cleanup_list);
-    spin_unlock(&iommu_pt_cleanup_lock);
-}
-
-static void deallocate_page_table(struct page_info *pg)
-{
-    struct amd_iommu_pte *table_vaddr;
-    unsigned int index, level = PFN_ORDER(pg);
-
-    PFN_ORDER(pg) = 0;
-
-    if ( level <= 1 )
-    {
-        free_amd_iommu_pgtable(pg);
-        return;
-    }
-
-    table_vaddr = __map_domain_page(pg);
-
-    for ( index = 0; index < PTE_PER_TABLE_SIZE; index++ )
-    {
-        struct amd_iommu_pte *pde = &table_vaddr[index];
-
-        if ( pde->mfn && pde->next_level && pde->pr )
-        {
-            /* We do not support skip levels yet */
-            ASSERT(pde->next_level == level - 1);
-            deallocate_next_page_table(mfn_to_page(_mfn(pde->mfn)),
-                                       pde->next_level);
-        }
-    }
-
-    unmap_domain_page(table_vaddr);
-    free_amd_iommu_pgtable(pg);
-}
-
-static void deallocate_iommu_page_tables(struct domain *d)
-{
-    struct domain_iommu *hd = dom_iommu(d);
-
-    spin_lock(&hd->arch.mapping_lock);
-    if ( hd->arch.amd.root_table )
-    {
-        deallocate_next_page_table(hd->arch.amd.root_table,
-                                   hd->arch.amd.paging_mode);
-        hd->arch.amd.root_table = NULL;
-    }
-    spin_unlock(&hd->arch.mapping_lock);
-}
-
-
 static void amd_iommu_domain_destroy(struct domain *d)
 {
-    deallocate_iommu_page_tables(d);
-    amd_iommu_flush_all_pages(d);
+    dom_iommu(d)->arch.amd.root_table = NULL;
 }
 
 static int amd_iommu_add_device(u8 devfn, struct pci_dev *pdev)
@@ -620,7 +566,6 @@ static const struct iommu_ops __initconstrel _iommu_ops = {
     .unmap_page = amd_iommu_unmap_page,
     .iotlb_flush = amd_iommu_flush_iotlb_pages,
     .iotlb_flush_all = amd_iommu_flush_iotlb_all,
-    .free_page_table = deallocate_page_table,
     .reassign_device = reassign_device,
     .get_device_group_id = amd_iommu_group_id,
     .enable_x2apic = iov_enable_xt,
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Jul 30 14:29:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jul 2020 14:29:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k19Z3-0005Tz-NJ; Thu, 30 Jul 2020 14:29:45 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=B5Vg=BJ=xen.org=paul@srs-us1.protection.inumbo.net>)
 id 1k19Z2-0005Q0-Fc
 for xen-devel@lists.xenproject.org; Thu, 30 Jul 2020 14:29:44 +0000
X-Inumbo-ID: 179a6d60-d271-11ea-aad0-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 179a6d60-d271-11ea-aad0-12813bfff9fa;
 Thu, 30 Jul 2020 14:29:35 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
 In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:Content-Type:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=tvDh3hWaZXavS9k/pMM3gTVqpPl5TsT49cdKXt8Ayaw=; b=TPxxcTmDcrEWA8lS2KVk27EsIQ
 PHUWpSkLsK3Iw5/NIZ/B9fTn2lUE3sTUyBWUhH8TKR9vXdLRDLRKe3raFs5Uv9XECTKVx2JDZnm+0
 r16dFmrE5oJbNWGnGNYipezq4uYuE3K6ylz0/k10m87AtovMl0KFFMZD++1oesVX14Vo=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1k19Yt-0002Oe-H8; Thu, 30 Jul 2020 14:29:35 +0000
Received: from host86-143-223-30.range86-143.btcentralplus.com
 ([86.143.223.30] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1k19Yt-0005aN-AO; Thu, 30 Jul 2020 14:29:35 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v2 06/10] iommu: flush I/O TLB if iommu_map() or iommu_unmap()
 fail
Date: Thu, 30 Jul 2020 15:29:22 +0100
Message-Id: <20200730142926.6051-7-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200730142926.6051-1-paul@xen.org>
References: <20200730142926.6051-1-paul@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Paul Durrant <pdurrant@amazon.com>, Jan Beulich <jbeulich@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Paul Durrant <pdurrant@amazon.com>

This patch adds a full I/O TLB flush to the error paths of iommu_map() and
iommu_unmap().

Without this change callers need constructs such as:

rc = iommu_map/unmap(...)
err = iommu_flush(...)
if ( !rc )
  rc = err;

With this change, it can be simplified to:

rc = iommu_map/unmap(...)
if ( !rc )
  rc = iommu_flush(...)

because, if the map or unmap fails, the flush will be unnecessary. This saves
a stack variable and generally makes the call sites tidier.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Jan Beulich <jbeulich@suse.com>

v2:
 - New in v2
---
 xen/drivers/passthrough/iommu.c | 28 ++++++++++++----------------
 1 file changed, 12 insertions(+), 16 deletions(-)

diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index 660dc5deb2..e2c0193a09 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -274,6 +274,10 @@ int iommu_map(struct domain *d, dfn_t dfn, mfn_t mfn,
         break;
     }
 
+    /* Something went wrong so flush everything and clear flush flags */
+    if ( unlikely(rc) && iommu_iotlb_flush_all(d, *flush_flags) )
+        *flush_flags = 0;
+
     return rc;
 }
 
@@ -283,14 +287,8 @@ int iommu_legacy_map(struct domain *d, dfn_t dfn, mfn_t mfn,
     unsigned int flush_flags = 0;
     int rc = iommu_map(d, dfn, mfn, page_order, flags, &flush_flags);
 
-    if ( !this_cpu(iommu_dont_flush_iotlb) )
-    {
-        int err = iommu_iotlb_flush(d, dfn, (1u << page_order),
-                                    flush_flags);
-
-        if ( !rc )
-            rc = err;
-    }
+    if ( !this_cpu(iommu_dont_flush_iotlb) && !rc )
+        rc = iommu_iotlb_flush(d, dfn, (1u << page_order), flush_flags);
 
     return rc;
 }
@@ -330,6 +328,10 @@ int iommu_unmap(struct domain *d, dfn_t dfn, unsigned int page_order,
         }
     }
 
+    /* Something went wrong so flush everything and clear flush flags */
+    if ( unlikely(rc) && iommu_iotlb_flush_all(d, *flush_flags) )
+        *flush_flags = 0;
+
     return rc;
 }
 
@@ -338,14 +340,8 @@ int iommu_legacy_unmap(struct domain *d, dfn_t dfn, unsigned int page_order)
     unsigned int flush_flags = 0;
     int rc = iommu_unmap(d, dfn, page_order, &flush_flags);
 
-    if ( !this_cpu(iommu_dont_flush_iotlb) )
-    {
-        int err = iommu_iotlb_flush(d, dfn, (1u << page_order),
-                                    flush_flags);
-
-        if ( !rc )
-            rc = err;
-    }
+    if ( !this_cpu(iommu_dont_flush_iotlb) && !rc )
+        rc = iommu_iotlb_flush(d, dfn, (1u << page_order), flush_flags);
 
     return rc;
 }
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Jul 30 14:29:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jul 2020 14:29:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k19Z8-0005WL-1D; Thu, 30 Jul 2020 14:29:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=B5Vg=BJ=xen.org=paul@srs-us1.protection.inumbo.net>)
 id 1k19Z7-0005Pz-6n
 for xen-devel@lists.xenproject.org; Thu, 30 Jul 2020 14:29:49 +0000
X-Inumbo-ID: 1626275b-d271-11ea-8d70-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1626275b-d271-11ea-8d70-bc764e2007e4;
 Thu, 30 Jul 2020 14:29:34 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
 In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:Content-Type:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=elpnTLLUZTjtNMvn+dphJvziiTX8WkV+tBvTSKCuAxs=; b=wWhWSskt7XjpDnx7pub4XxCYMz
 jRr6L2d2em9THtIrRpHdQIbYfWcht9BED4fB0ugXyYnc1W6RcA84Zn3NUoLW1bQFlH3wQjSEMM77S
 XoDrT6918XvFgliaRG+4LcCAq9AT/KdmJlEGbekWidlhuSwo0APkNWRJ+xLRO4hsdfAI=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1k19Yr-0002OD-Ht; Thu, 30 Jul 2020 14:29:33 +0000
Received: from host86-143-223-30.range86-143.btcentralplus.com
 ([86.143.223.30] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1k19Yr-0005aN-A0; Thu, 30 Jul 2020 14:29:33 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v2 03/10] x86/iommu: convert VT-d code to use new page table
 allocator
Date: Thu, 30 Jul 2020 15:29:19 +0100
Message-Id: <20200730142926.6051-4-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200730142926.6051-1-paul@xen.org>
References: <20200730142926.6051-1-paul@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Paul Durrant <pdurrant@amazon.com>, Kevin Tian <kevin.tian@intel.com>,
 Jan Beulich <jbeulich@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Paul Durrant <pdurrant@amazon.com>

This patch converts the VT-d code to use the new IOMMU page table allocator
function. This allows all the freeing code to be removed (since it is now
handled by the general x86 code), which reduces TLB and cache thrashing as
well as shortening the code.

The scope of the mapping_lock in intel_iommu_quarantine_init() has also been
increased slightly; it should have always covered accesses to
'arch.vtd.pgd_maddr'.

NOTE: The common IOMMU code needs a slight modification to avoid scheduling
      the cleanup tasklet if the free_page_table() method is not present
      (since the tasklet would unconditionally call it).

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Kevin Tian <kevin.tian@intel.com>

v2:
 - New in v2 (split from "add common page-table allocator")
---
 xen/drivers/passthrough/iommu.c     |   6 +-
 xen/drivers/passthrough/vtd/iommu.c | 101 ++++++++++------------------
 2 files changed, 39 insertions(+), 68 deletions(-)

diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index 1d644844ab..2b1db8022c 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -225,8 +225,10 @@ static void iommu_teardown(struct domain *d)
 {
     struct domain_iommu *hd = dom_iommu(d);
 
-    hd->platform_ops->teardown(d);
-    tasklet_schedule(&iommu_pt_cleanup_tasklet);
+    iommu_vcall(hd->platform_ops, teardown, d);
+
+    if ( hd->platform_ops->free_page_table )
+        tasklet_schedule(&iommu_pt_cleanup_tasklet);
 }
 
 void iommu_domain_destroy(struct domain *d)
diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index 94e0455a4d..607e8b5e65 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -265,10 +265,15 @@ static u64 addr_to_dma_page_maddr(struct domain *domain, u64 addr, int alloc)
 
     addr &= (((u64)1) << addr_width) - 1;
     ASSERT(spin_is_locked(&hd->arch.mapping_lock));
-    if ( !hd->arch.vtd.pgd_maddr &&
-         (!alloc ||
-          ((hd->arch.vtd.pgd_maddr = alloc_pgtable_maddr(1, hd->node)) == 0)) )
-        goto out;
+    if ( !hd->arch.vtd.pgd_maddr )
+    {
+        struct page_info *pg;
+
+        if ( !alloc || !(pg = iommu_alloc_pgtable(domain)) )
+            goto out;
+
+        hd->arch.vtd.pgd_maddr = page_to_maddr(pg);
+    }
 
     parent = (struct dma_pte *)map_vtd_domain_page(hd->arch.vtd.pgd_maddr);
     while ( level > 1 )
@@ -279,13 +284,16 @@ static u64 addr_to_dma_page_maddr(struct domain *domain, u64 addr, int alloc)
         pte_maddr = dma_pte_addr(*pte);
         if ( !pte_maddr )
         {
+            struct page_info *pg;
+
             if ( !alloc )
                 break;
 
-            pte_maddr = alloc_pgtable_maddr(1, hd->node);
-            if ( !pte_maddr )
+            pg = iommu_alloc_pgtable(domain);
+            if ( !pg )
                 break;
 
+            pte_maddr = page_to_maddr(pg);
             dma_set_pte_addr(*pte, pte_maddr);
 
             /*
@@ -675,45 +683,6 @@ static void dma_pte_clear_one(struct domain *domain, uint64_t addr,
     unmap_vtd_domain_page(page);
 }
 
-static void iommu_free_pagetable(u64 pt_maddr, int level)
-{
-    struct page_info *pg = maddr_to_page(pt_maddr);
-
-    if ( pt_maddr == 0 )
-        return;
-
-    PFN_ORDER(pg) = level;
-    spin_lock(&iommu_pt_cleanup_lock);
-    page_list_add_tail(pg, &iommu_pt_cleanup_list);
-    spin_unlock(&iommu_pt_cleanup_lock);
-}
-
-static void iommu_free_page_table(struct page_info *pg)
-{
-    unsigned int i, next_level = PFN_ORDER(pg) - 1;
-    u64 pt_maddr = page_to_maddr(pg);
-    struct dma_pte *pt_vaddr, *pte;
-
-    PFN_ORDER(pg) = 0;
-    pt_vaddr = (struct dma_pte *)map_vtd_domain_page(pt_maddr);
-
-    for ( i = 0; i < PTE_NUM; i++ )
-    {
-        pte = &pt_vaddr[i];
-        if ( !dma_pte_present(*pte) )
-            continue;
-
-        if ( next_level >= 1 )
-            iommu_free_pagetable(dma_pte_addr(*pte), next_level);
-
-        dma_clear_pte(*pte);
-        iommu_sync_cache(pte, sizeof(struct dma_pte));
-    }
-
-    unmap_vtd_domain_page(pt_vaddr);
-    free_pgtable_maddr(pt_maddr);
-}
-
 static int iommu_set_root_entry(struct vtd_iommu *iommu)
 {
     u32 sts;
@@ -1748,16 +1717,7 @@ static void iommu_domain_teardown(struct domain *d)
         xfree(mrmrr);
     }
 
-    ASSERT(is_iommu_enabled(d));
-
-    if ( iommu_use_hap_pt(d) )
-        return;
-
-    spin_lock(&hd->arch.mapping_lock);
-    iommu_free_pagetable(hd->arch.vtd.pgd_maddr,
-                         agaw_to_level(hd->arch.vtd.agaw));
     hd->arch.vtd.pgd_maddr = 0;
-    spin_unlock(&hd->arch.mapping_lock);
 }
 
 static int __must_check intel_iommu_map_page(struct domain *d, dfn_t dfn,
@@ -2669,23 +2629,28 @@ static void vtd_dump_p2m_table(struct domain *d)
 static int __init intel_iommu_quarantine_init(struct domain *d)
 {
     struct domain_iommu *hd = dom_iommu(d);
+    struct page_info *pg;
     struct dma_pte *parent;
     unsigned int agaw = width_to_agaw(DEFAULT_DOMAIN_ADDRESS_WIDTH);
     unsigned int level = agaw_to_level(agaw);
-    int rc;
+    int rc = 0;
+
+    spin_lock(&hd->arch.mapping_lock);
 
     if ( hd->arch.vtd.pgd_maddr )
     {
         ASSERT_UNREACHABLE();
-        return 0;
+        goto out;
     }
 
-    spin_lock(&hd->arch.mapping_lock);
+    pg = iommu_alloc_pgtable(d);
 
-    hd->arch.vtd.pgd_maddr = alloc_pgtable_maddr(1, hd->node);
-    if ( !hd->arch.vtd.pgd_maddr )
+    rc = -ENOMEM;
+    if ( !pg )
         goto out;
 
+    hd->arch.vtd.pgd_maddr = page_to_maddr(pg);
+
     parent = map_vtd_domain_page(hd->arch.vtd.pgd_maddr);
     while ( level )
     {
@@ -2697,10 +2662,12 @@ static int __init intel_iommu_quarantine_init(struct domain *d)
          * page table pages, and the resulting allocations are always
          * zeroed.
          */
-        maddr = alloc_pgtable_maddr(1, hd->node);
-        if ( !maddr )
-            break;
+        pg = iommu_alloc_pgtable(d);
+
+        if ( !pg )
+            goto out;
 
+        maddr = page_to_maddr(pg);
         for ( offset = 0; offset < PTE_NUM; offset++ )
         {
             struct dma_pte *pte = &parent[offset];
@@ -2716,13 +2683,16 @@ static int __init intel_iommu_quarantine_init(struct domain *d)
     }
     unmap_vtd_domain_page(parent);
 
+    rc = 0;
+
  out:
     spin_unlock(&hd->arch.mapping_lock);
 
-    rc = iommu_flush_iotlb_all(d);
+    if ( !rc )
+        rc = iommu_flush_iotlb_all(d);
 
-    /* Pages leaked in failure case */
-    return level ? -ENOMEM : rc;
+    /* Pages may be leaked in failure case */
+    return rc;
 }
 
 static struct iommu_ops __initdata vtd_ops = {
@@ -2737,7 +2707,6 @@ static struct iommu_ops __initdata vtd_ops = {
     .map_page = intel_iommu_map_page,
     .unmap_page = intel_iommu_unmap_page,
     .lookup_page = intel_iommu_lookup_page,
-    .free_page_table = iommu_free_page_table,
     .reassign_device = reassign_device_ownership,
     .get_device_group_id = intel_iommu_group_id,
     .enable_x2apic = intel_iommu_enable_eim,
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Jul 30 14:29:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jul 2020 14:29:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k19ZD-0005aI-BW; Thu, 30 Jul 2020 14:29:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=B5Vg=BJ=xen.org=paul@srs-us1.protection.inumbo.net>)
 id 1k19ZC-0005Pz-6t
 for xen-devel@lists.xenproject.org; Thu, 30 Jul 2020 14:29:54 +0000
X-Inumbo-ID: 196e48b4-d271-11ea-8d70-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 196e48b4-d271-11ea-8d70-bc764e2007e4;
 Thu, 30 Jul 2020 14:29:38 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:MIME-Version:
 References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=fFPZHQeR1imbV8GidBB6NY5g3/DPW1fb7q91uXermqU=; b=KXvwuhP3/bfaynsJkuo4yMD9aT
 0Yy9zwoPrxwclZFVd8fC4yP0R3iYoNvbxr8x5vfa/LYirm+CBwOweZMgtjcX+wkHRnCvekBKCr+Nr
 6hVrPxmEU1vDhWc+Pn22AxX0BQM6qq2/xl2Ax24H4yePFqvTVTRI+MNH/AwHmk6dNTEg=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1k19Yv-0002On-3M; Thu, 30 Jul 2020 14:29:37 +0000
Received: from host86-143-223-30.range86-143.btcentralplus.com
 ([86.143.223.30] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1k19Yu-0005aN-SL; Thu, 30 Jul 2020 14:29:37 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v2 07/10] iommu: make map,
 unmap and flush all take both an order and a count
Date: Thu, 30 Jul 2020 15:29:23 +0100
Message-Id: <20200730142926.6051-8-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200730142926.6051-1-paul@xen.org>
References: <20200730142926.6051-1-paul@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin Tian <kevin.tian@intel.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Jun Nakajima <jun.nakajima@intel.com>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Paul Durrant <pdurrant@amazon.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Paul Durrant <pdurrant@amazon.com>

At the moment iommu_map() and iommu_unmap() take a page order but not a
count, whereas iommu_iotlb_flush() takes a count but not a page order.
This patch simply makes them consistent with each other.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Jun Nakajima <jun.nakajima@intel.com>
Cc: Kevin Tian <kevin.tian@intel.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <george.dunlap@citrix.com>
Cc: Wei Liu <wl@xen.org>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Julien Grall <julien@xen.org>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>

v2:
 - New in v2
---
 xen/arch/arm/p2m.c                       |  2 +-
 xen/arch/x86/mm/p2m-ept.c                |  2 +-
 xen/common/memory.c                      |  4 +--
 xen/drivers/passthrough/amd/iommu.h      |  2 +-
 xen/drivers/passthrough/amd/iommu_map.c  |  4 +--
 xen/drivers/passthrough/arm/ipmmu-vmsa.c |  2 +-
 xen/drivers/passthrough/arm/smmu.c       |  2 +-
 xen/drivers/passthrough/iommu.c          | 31 ++++++++++++------------
 xen/drivers/passthrough/vtd/iommu.c      |  4 +--
 xen/drivers/passthrough/x86/iommu.c      |  2 +-
 xen/include/xen/iommu.h                  |  9 ++++---
 11 files changed, 33 insertions(+), 31 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index ce59f2b503..71f4a78425 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1061,7 +1061,7 @@ static int __p2m_set_entry(struct p2m_domain *p2m,
             flush_flags |= IOMMU_FLUSHF_added;
 
         rc = iommu_iotlb_flush(p2m->domain, _dfn(gfn_x(sgfn)),
-                               1UL << page_order, flush_flags);
+                               page_order, 1, flush_flags);
     }
     else
         rc = 0;
diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-ept.c
index b8154a7ecc..b2ac912cde 100644
--- a/xen/arch/x86/mm/p2m-ept.c
+++ b/xen/arch/x86/mm/p2m-ept.c
@@ -843,7 +843,7 @@ out:
          need_modify_vtd_table )
     {
         if ( iommu_use_hap_pt(d) )
-            rc = iommu_iotlb_flush(d, _dfn(gfn), (1u << order),
+            rc = iommu_iotlb_flush(d, _dfn(gfn), (1u << order), 1,
                                    (iommu_flags ? IOMMU_FLUSHF_added : 0) |
                                    (vtd_pte_present ? IOMMU_FLUSHF_modified
                                                     : 0));
diff --git a/xen/common/memory.c b/xen/common/memory.c
index 714077c1e5..8de334ff10 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -851,12 +851,12 @@ int xenmem_add_to_physmap(struct domain *d, struct xen_add_to_physmap *xatp,
 
         this_cpu(iommu_dont_flush_iotlb) = 0;
 
-        ret = iommu_iotlb_flush(d, _dfn(xatp->idx - done), done,
+        ret = iommu_iotlb_flush(d, _dfn(xatp->idx - done), 0, done,
                                 IOMMU_FLUSHF_added | IOMMU_FLUSHF_modified);
         if ( unlikely(ret) && rc >= 0 )
             rc = ret;
 
-        ret = iommu_iotlb_flush(d, _dfn(xatp->gpfn - done), done,
+        ret = iommu_iotlb_flush(d, _dfn(xatp->gpfn - done), 0, done,
                                 IOMMU_FLUSHF_added | IOMMU_FLUSHF_modified);
         if ( unlikely(ret) && rc >= 0 )
             rc = ret;
diff --git a/xen/drivers/passthrough/amd/iommu.h b/xen/drivers/passthrough/amd/iommu.h
index e2d174f3b4..f1f0415469 100644
--- a/xen/drivers/passthrough/amd/iommu.h
+++ b/xen/drivers/passthrough/amd/iommu.h
@@ -231,7 +231,7 @@ int amd_iommu_reserve_domain_unity_map(struct domain *domain,
                                        paddr_t phys_addr, unsigned long size,
                                        int iw, int ir);
 int __must_check amd_iommu_flush_iotlb_pages(struct domain *d, dfn_t dfn,
-                                             unsigned int page_count,
+                                             unsigned long page_count,
                                              unsigned int flush_flags);
 int __must_check amd_iommu_flush_iotlb_all(struct domain *d);
 
diff --git a/xen/drivers/passthrough/amd/iommu_map.c b/xen/drivers/passthrough/amd/iommu_map.c
index 54b991294a..0cb948d114 100644
--- a/xen/drivers/passthrough/amd/iommu_map.c
+++ b/xen/drivers/passthrough/amd/iommu_map.c
@@ -351,7 +351,7 @@ int amd_iommu_unmap_page(struct domain *d, dfn_t dfn,
     return 0;
 }
 
-static unsigned long flush_count(unsigned long dfn, unsigned int page_count,
+static unsigned long flush_count(unsigned long dfn, unsigned long page_count,
                                  unsigned int order)
 {
     unsigned long start = dfn >> order;
@@ -362,7 +362,7 @@ static unsigned long flush_count(unsigned long dfn, unsigned int page_count,
 }
 
 int amd_iommu_flush_iotlb_pages(struct domain *d, dfn_t dfn,
-                                unsigned int page_count,
+                                unsigned long page_count,
                                 unsigned int flush_flags)
 {
     unsigned long dfn_l = dfn_x(dfn);
diff --git a/xen/drivers/passthrough/arm/ipmmu-vmsa.c b/xen/drivers/passthrough/arm/ipmmu-vmsa.c
index b2a65dfaaf..346165c3fa 100644
--- a/xen/drivers/passthrough/arm/ipmmu-vmsa.c
+++ b/xen/drivers/passthrough/arm/ipmmu-vmsa.c
@@ -945,7 +945,7 @@ static int __must_check ipmmu_iotlb_flush_all(struct domain *d)
 }
 
 static int __must_check ipmmu_iotlb_flush(struct domain *d, dfn_t dfn,
-                                          unsigned int page_count,
+                                          unsigned long page_count,
                                           unsigned int flush_flags)
 {
     ASSERT(flush_flags);
diff --git a/xen/drivers/passthrough/arm/smmu.c b/xen/drivers/passthrough/arm/smmu.c
index 94662a8501..06f9bda47d 100644
--- a/xen/drivers/passthrough/arm/smmu.c
+++ b/xen/drivers/passthrough/arm/smmu.c
@@ -2534,7 +2534,7 @@ static int __must_check arm_smmu_iotlb_flush_all(struct domain *d)
 }
 
 static int __must_check arm_smmu_iotlb_flush(struct domain *d, dfn_t dfn,
-					     unsigned int page_count,
+					     unsigned long page_count,
 					     unsigned int flush_flags)
 {
 	ASSERT(flush_flags);
diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index e2c0193a09..568a4a5661 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -235,8 +235,8 @@ void iommu_domain_destroy(struct domain *d)
 }
 
 int iommu_map(struct domain *d, dfn_t dfn, mfn_t mfn,
-              unsigned int page_order, unsigned int flags,
-              unsigned int *flush_flags)
+              unsigned int page_order, unsigned int page_count,
+              unsigned int flags, unsigned int *flush_flags)
 {
     const struct domain_iommu *hd = dom_iommu(d);
     unsigned long i;
@@ -248,7 +248,7 @@ int iommu_map(struct domain *d, dfn_t dfn, mfn_t mfn,
     ASSERT(IS_ALIGNED(dfn_x(dfn), (1ul << page_order)));
     ASSERT(IS_ALIGNED(mfn_x(mfn), (1ul << page_order)));
 
-    for ( i = 0; i < (1ul << page_order); i++ )
+    for ( i = 0; i < ((unsigned long)page_count << page_order); i++ )
     {
         rc = iommu_call(hd->platform_ops, map_page, d, dfn_add(dfn, i),
                         mfn_add(mfn, i), flags, flush_flags);
@@ -285,16 +285,16 @@ int iommu_legacy_map(struct domain *d, dfn_t dfn, mfn_t mfn,
                      unsigned int page_order, unsigned int flags)
 {
     unsigned int flush_flags = 0;
-    int rc = iommu_map(d, dfn, mfn, page_order, flags, &flush_flags);
+    int rc = iommu_map(d, dfn, mfn, page_order, 1, flags, &flush_flags);
 
     if ( !this_cpu(iommu_dont_flush_iotlb) && !rc )
-        rc = iommu_iotlb_flush(d, dfn, (1u << page_order), flush_flags);
+        rc = iommu_iotlb_flush(d, dfn, (1u << page_order), 1, flush_flags);
 
     return rc;
 }
 
 int iommu_unmap(struct domain *d, dfn_t dfn, unsigned int page_order,
-                unsigned int *flush_flags)
+                unsigned int page_count, unsigned int *flush_flags)
 {
     const struct domain_iommu *hd = dom_iommu(d);
     unsigned long i;
@@ -305,7 +305,7 @@ int iommu_unmap(struct domain *d, dfn_t dfn, unsigned int page_order,
 
     ASSERT(IS_ALIGNED(dfn_x(dfn), (1ul << page_order)));
 
-    for ( i = 0; i < (1ul << page_order); i++ )
+    for ( i = 0; i < ((unsigned long)page_count << page_order); i++ )
     {
         int err = iommu_call(hd->platform_ops, unmap_page, d, dfn_add(dfn, i),
                              flush_flags);
@@ -338,10 +338,10 @@ int iommu_unmap(struct domain *d, dfn_t dfn, unsigned int page_order,
 int iommu_legacy_unmap(struct domain *d, dfn_t dfn, unsigned int page_order)
 {
     unsigned int flush_flags = 0;
-    int rc = iommu_unmap(d, dfn, page_order, &flush_flags);
+    int rc = iommu_unmap(d, dfn, page_order, 1, &flush_flags);
 
     if ( !this_cpu(iommu_dont_flush_iotlb) && !rc )
-        rc = iommu_iotlb_flush(d, dfn, (1u << page_order), flush_flags);
+        rc = iommu_iotlb_flush(d, dfn, (1u << page_order), 1, flush_flags);
 
     return rc;
 }
@@ -357,8 +357,8 @@ int iommu_lookup_page(struct domain *d, dfn_t dfn, mfn_t *mfn,
     return iommu_call(hd->platform_ops, lookup_page, d, dfn, mfn, flags);
 }
 
-int iommu_iotlb_flush(struct domain *d, dfn_t dfn, unsigned int page_count,
-                      unsigned int flush_flags)
+int iommu_iotlb_flush(struct domain *d, dfn_t dfn, unsigned int page_order,
+                      unsigned int page_count, unsigned int flush_flags)
 {
     const struct domain_iommu *hd = dom_iommu(d);
     int rc;
@@ -370,14 +370,15 @@ int iommu_iotlb_flush(struct domain *d, dfn_t dfn, unsigned int page_count,
     if ( dfn_eq(dfn, INVALID_DFN) )
         return -EINVAL;
 
-    rc = iommu_call(hd->platform_ops, iotlb_flush, d, dfn, page_count,
-                    flush_flags);
+    rc = iommu_call(hd->platform_ops, iotlb_flush, d, dfn,
+                    (unsigned long)page_count << page_order, flush_flags);
     if ( unlikely(rc) )
     {
         if ( !d->is_shutting_down && printk_ratelimit() )
             printk(XENLOG_ERR
-                   "d%d: IOMMU IOTLB flush failed: %d, dfn %"PRI_dfn", page count %u flags %x\n",
-                   d->domain_id, rc, dfn_x(dfn), page_count, flush_flags);
+                   "d%d: IOMMU IOTLB flush failed: %d, dfn %"PRI_dfn", page order %u, page count %u flags %x\n",
+                   d->domain_id, rc, dfn_x(dfn), page_order, page_count,
+                   flush_flags);
 
         if ( !is_hardware_domain(d) )
             domain_crash(d);
diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index 607e8b5e65..68cf0e535a 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -584,7 +584,7 @@ static int __must_check iommu_flush_all(void)
 
 static int __must_check iommu_flush_iotlb(struct domain *d, dfn_t dfn,
                                           bool_t dma_old_pte_present,
-                                          unsigned int page_count)
+                                          unsigned long page_count)
 {
     struct domain_iommu *hd = dom_iommu(d);
     struct acpi_drhd_unit *drhd;
@@ -632,7 +632,7 @@ static int __must_check iommu_flush_iotlb(struct domain *d, dfn_t dfn,
 
 static int __must_check iommu_flush_iotlb_pages(struct domain *d,
                                                 dfn_t dfn,
-                                                unsigned int page_count,
+                                                unsigned long page_count,
                                                 unsigned int flush_flags)
 {
     ASSERT(page_count && !dfn_eq(dfn, INVALID_DFN));
diff --git a/xen/drivers/passthrough/x86/iommu.c b/xen/drivers/passthrough/x86/iommu.c
index c0d4865dd7..5d1a7cb296 100644
--- a/xen/drivers/passthrough/x86/iommu.c
+++ b/xen/drivers/passthrough/x86/iommu.c
@@ -244,7 +244,7 @@ void __hwdom_init arch_iommu_hwdom_init(struct domain *d)
         else if ( paging_mode_translate(d) )
             rc = set_identity_p2m_entry(d, pfn, p2m_access_rw, 0);
         else
-            rc = iommu_map(d, _dfn(pfn), _mfn(pfn), PAGE_ORDER_4K,
+            rc = iommu_map(d, _dfn(pfn), _mfn(pfn), PAGE_ORDER_4K, 1,
                            IOMMUF_readable | IOMMUF_writable, &flush_flags);
 
         if ( rc )
diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
index 1831dc66b0..d9c2e764aa 100644
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -146,10 +146,10 @@ enum
 #define IOMMU_FLUSHF_modified (1u << _IOMMU_FLUSHF_modified)
 
 int __must_check iommu_map(struct domain *d, dfn_t dfn, mfn_t mfn,
-                           unsigned int page_order, unsigned int flags,
-                           unsigned int *flush_flags);
+                           unsigned int page_order, unsigned int page_count,
+                           unsigned int flags, unsigned int *flush_flags);
 int __must_check iommu_unmap(struct domain *d, dfn_t dfn,
-                             unsigned int page_order,
+                             unsigned int page_order, unsigned int page_count,
                              unsigned int *flush_flags);
 
 int __must_check iommu_legacy_map(struct domain *d, dfn_t dfn, mfn_t mfn,
@@ -162,6 +162,7 @@ int __must_check iommu_lookup_page(struct domain *d, dfn_t dfn, mfn_t *mfn,
                                    unsigned int *flags);
 
 int __must_check iommu_iotlb_flush(struct domain *d, dfn_t dfn,
+                                   unsigned int page_order,
                                    unsigned int page_count,
                                    unsigned int flush_flags);
 int __must_check iommu_iotlb_flush_all(struct domain *d,
@@ -281,7 +282,7 @@ struct iommu_ops {
     void (*share_p2m)(struct domain *d);
     void (*crash_shutdown)(void);
     int __must_check (*iotlb_flush)(struct domain *d, dfn_t dfn,
-                                    unsigned int page_count,
+                                    unsigned long page_count,
                                     unsigned int flush_flags);
     int __must_check (*iotlb_flush_all)(struct domain *d);
     int (*get_reserved_device_memory)(iommu_grdm_t *, void *);
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Jul 30 14:29:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jul 2020 14:29:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k19ZH-0005dm-ST; Thu, 30 Jul 2020 14:29:59 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=B5Vg=BJ=xen.org=paul@srs-us1.protection.inumbo.net>)
 id 1k19ZH-0005Pz-70
 for xen-devel@lists.xenproject.org; Thu, 30 Jul 2020 14:29:59 +0000
X-Inumbo-ID: 1a452e4c-d271-11ea-8d70-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1a452e4c-d271-11ea-8d70-bc764e2007e4;
 Thu, 30 Jul 2020 14:29:40 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:MIME-Version:
 References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=o+lwGz261z94OzG7Pbq+kthB5fb4mouB8CYMb0gpCAk=; b=bGb45gtRGl0qTWB53rk9Rpk5CS
 USsUdxwZwWo3uXFjAcjZukbUQyIU8Jm5UWvPp9eum8JwzdyNw5CX4kyusfNGj+UwtmSC5waShlZq1
 Wb1AXbaGmr0KBmTdDBaTdJFu/jagtrBauDnHU11eiozBUgGbe5eMI/6rhXM4ERP7Nrt8=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1k19Yw-0002Ot-IH; Thu, 30 Jul 2020 14:29:38 +0000
Received: from host86-143-223-30.range86-143.btcentralplus.com
 ([86.143.223.30] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1k19Yw-0005aN-BD; Thu, 30 Jul 2020 14:29:38 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v2 08/10] remove remaining uses of iommu_legacy_map/unmap
Date: Thu, 30 Jul 2020 15:29:24 +0100
Message-Id: <20200730142926.6051-9-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200730142926.6051-1-paul@xen.org>
References: <20200730142926.6051-1-paul@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin Tian <kevin.tian@intel.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Jun Nakajima <jun.nakajima@intel.com>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Paul Durrant <pdurrant@amazon.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Paul Durrant <pdurrant@amazon.com>

The 'legacy' functions do implicit flushing, so amend the callers to do the
appropriate flushing explicitly.

Unfortunately, because of the structure of the P2M code, we cannot remove
the per-CPU 'iommu_dont_flush_iotlb' global and the optimization it
facilitates. It is now checked directly in iommu_iotlb_flush(). Also, it is
now declared as bool (rather than bool_t) and setting/clearing it are no
longer pointlessly gated on is_iommu_enabled() returning true. (Arguably
it is also pointless to gate the call to iommu_iotlb_flush() on that
condition - since it is a no-op in that case - but the if clause allows
the scope of a stack variable to be restricted).

NOTE: The code in memory_add() now fails if the number of pages passed to
      a single call overflows an unsigned int. I don't believe this will
      ever happen in practice.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Wei Liu <wl@xen.org>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>
Cc: George Dunlap <george.dunlap@citrix.com>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Julien Grall <julien@xen.org>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Jun Nakajima <jun.nakajima@intel.com>
Cc: Kevin Tian <kevin.tian@intel.com>

v2:
 - Shorten the diff (mainly because of a prior patch introducing automatic
   flush-on-fail into iommu_map() and iommu_unmap())
---
 xen/arch/x86/mm.c               | 21 +++++++++++++++-----
 xen/arch/x86/mm/p2m-ept.c       | 20 +++++++++++--------
 xen/arch/x86/mm/p2m-pt.c        | 15 +++++++++++----
 xen/arch/x86/mm/p2m.c           | 26 ++++++++++++++++++-------
 xen/arch/x86/x86_64/mm.c        | 27 +++++++++++++-------------
 xen/common/grant_table.c        | 34 ++++++++++++++++++++++++---------
 xen/common/memory.c             |  5 +++--
 xen/drivers/passthrough/iommu.c | 25 +-----------------------
 xen/include/xen/iommu.h         | 21 +++++---------------
 9 files changed, 106 insertions(+), 88 deletions(-)

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 82bc676553..f7e84f12fa 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -2446,10 +2446,16 @@ static int cleanup_page_mappings(struct page_info *page)
 
         if ( d && unlikely(need_iommu_pt_sync(d)) && is_pv_domain(d) )
         {
-            int rc2 = iommu_legacy_unmap(d, _dfn(mfn), PAGE_ORDER_4K);
+            unsigned int flush_flags = 0;
+            int err;
 
+            err = iommu_unmap(d, _dfn(mfn), PAGE_ORDER_4K, 1, &flush_flags);
             if ( !rc )
-                rc = rc2;
+                rc = err;
+
+            err = iommu_iotlb_flush(d, _dfn(mfn), PAGE_ORDER_4K, 1, flush_flags);
+            if ( !rc )
+                rc = err;
         }
 
         if ( likely(!is_special_page(page)) )
@@ -2971,12 +2977,17 @@ static int _get_page_type(struct page_info *page, unsigned long type,
         if ( d && unlikely(need_iommu_pt_sync(d)) && is_pv_domain(d) )
         {
             mfn_t mfn = page_to_mfn(page);
+            dfn_t dfn = _dfn(mfn_x(mfn));
+            unsigned int flush_flags = 0;
 
             if ( (x & PGT_type_mask) == PGT_writable_page )
-                rc = iommu_legacy_unmap(d, _dfn(mfn_x(mfn)), PAGE_ORDER_4K);
+                rc = iommu_unmap(d, dfn, PAGE_ORDER_4K, 1, &flush_flags);
             else
-                rc = iommu_legacy_map(d, _dfn(mfn_x(mfn)), mfn, PAGE_ORDER_4K,
-                                      IOMMUF_readable | IOMMUF_writable);
+                rc = iommu_map(d, dfn, mfn, PAGE_ORDER_4K, 1,
+                               IOMMUF_readable | IOMMUF_writable, &flush_flags);
+
+            if ( !rc )
+                rc = iommu_iotlb_flush(d, dfn, PAGE_ORDER_4K, 1, flush_flags);
 
             if ( unlikely(rc) )
             {
diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-ept.c
index b2ac912cde..e38b0bf95c 100644
--- a/xen/arch/x86/mm/p2m-ept.c
+++ b/xen/arch/x86/mm/p2m-ept.c
@@ -842,15 +842,19 @@ out:
     if ( rc == 0 && p2m_is_hostp2m(p2m) &&
          need_modify_vtd_table )
     {
-        if ( iommu_use_hap_pt(d) )
-            rc = iommu_iotlb_flush(d, _dfn(gfn), (1u << order), 1,
-                                   (iommu_flags ? IOMMU_FLUSHF_added : 0) |
-                                   (vtd_pte_present ? IOMMU_FLUSHF_modified
-                                                    : 0));
-        else if ( need_iommu_pt_sync(d) )
+        unsigned int flush_flags = 0;
+
+        if ( need_iommu_pt_sync(d) )
             rc = iommu_flags ?
-                iommu_legacy_map(d, _dfn(gfn), mfn, order, iommu_flags) :
-                iommu_legacy_unmap(d, _dfn(gfn), order);
+                iommu_map(d, _dfn(gfn), mfn, order, 1, iommu_flags, &flush_flags) :
+                iommu_unmap(d, _dfn(gfn), order, 1, &flush_flags);
+        else if ( iommu_use_hap_pt(d) )
+            flush_flags =
+                (iommu_flags ? IOMMU_FLUSHF_added : 0) |
+                (vtd_pte_present ? IOMMU_FLUSHF_modified : 0);
+
+        if ( !rc )
+            rc = iommu_iotlb_flush(d, _dfn(gfn), order, 1, flush_flags);
     }
 
     unmap_domain_page(table);
diff --git a/xen/arch/x86/mm/p2m-pt.c b/xen/arch/x86/mm/p2m-pt.c
index badb26bc34..3c0901b56c 100644
--- a/xen/arch/x86/mm/p2m-pt.c
+++ b/xen/arch/x86/mm/p2m-pt.c
@@ -678,10 +678,17 @@ p2m_pt_set_entry(struct p2m_domain *p2m, gfn_t gfn_, mfn_t mfn,
 
     if ( need_iommu_pt_sync(p2m->domain) &&
          (iommu_old_flags != iommu_pte_flags || old_mfn != mfn_x(mfn)) )
-        rc = iommu_pte_flags
-             ? iommu_legacy_map(d, _dfn(gfn), mfn, page_order,
-                                iommu_pte_flags)
-             : iommu_legacy_unmap(d, _dfn(gfn), page_order);
+    {
+        unsigned int flush_flags = 0;
+
+        rc = iommu_pte_flags ?
+            iommu_map(d, _dfn(gfn), mfn, page_order, 1, iommu_pte_flags,
+                      &flush_flags) :
+            iommu_unmap(d, _dfn(gfn), page_order, 1, &flush_flags);
+
+        if ( !rc )
+            rc = iommu_iotlb_flush(d, _dfn(gfn), page_order, 1, flush_flags);
+    }
 
     /*
      * Free old intermediate tables if necessary.  This has to be the
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index db7bde0230..9f8b9bc5fd 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -1350,10 +1350,15 @@ int set_identity_p2m_entry(struct domain *d, unsigned long gfn_l,
 
     if ( !paging_mode_translate(p2m->domain) )
     {
-        if ( !is_iommu_enabled(d) )
-            return 0;
-        return iommu_legacy_map(d, _dfn(gfn_l), _mfn(gfn_l), PAGE_ORDER_4K,
-                                IOMMUF_readable | IOMMUF_writable);
+        unsigned int flush_flags = 0;
+
+        ret = iommu_map(d, _dfn(gfn_l), _mfn(gfn_l), PAGE_ORDER_4K, 1,
+                        IOMMUF_readable | IOMMUF_writable, &flush_flags);
+        if ( !ret )
+            ret = iommu_iotlb_flush(d, _dfn(gfn_l), PAGE_ORDER_4K, 1,
+                                    flush_flags);
+
+        return ret;
     }
 
     gfn_lock(p2m, gfn, 0);
@@ -1441,9 +1446,16 @@ int clear_identity_p2m_entry(struct domain *d, unsigned long gfn_l)
 
     if ( !paging_mode_translate(d) )
     {
-        if ( !is_iommu_enabled(d) )
-            return 0;
-        return iommu_legacy_unmap(d, _dfn(gfn_l), PAGE_ORDER_4K);
+        unsigned int flush_flags = 0;
+        int err;
+
+        ret = iommu_unmap(d, _dfn(gfn_l), PAGE_ORDER_4K, 1, &flush_flags);
+
+        err = iommu_iotlb_flush(d, _dfn(gfn_l), PAGE_ORDER_4K, 1, flush_flags);
+        if ( !ret )
+            ret = err;
+
+        return ret;
     }
 
     gfn_lock(p2m, gfn, 0);
diff --git a/xen/arch/x86/x86_64/mm.c b/xen/arch/x86/x86_64/mm.c
index 102079a801..02684bcf9d 100644
--- a/xen/arch/x86/x86_64/mm.c
+++ b/xen/arch/x86/x86_64/mm.c
@@ -1413,21 +1413,22 @@ int memory_add(unsigned long spfn, unsigned long epfn, unsigned int pxm)
          !iommu_use_hap_pt(hardware_domain) &&
          !need_iommu_pt_sync(hardware_domain) )
     {
-        for ( i = spfn; i < epfn; i++ )
-            if ( iommu_legacy_map(hardware_domain, _dfn(i), _mfn(i),
-                                  PAGE_ORDER_4K,
-                                  IOMMUF_readable | IOMMUF_writable) )
-                break;
-        if ( i != epfn )
-        {
-            while (i-- > old_max)
-                /* If statement to satisfy __must_check. */
-                if ( iommu_legacy_unmap(hardware_domain, _dfn(i),
-                                        PAGE_ORDER_4K) )
-                    continue;
+        unsigned int flush_flags = 0;
+        unsigned int n = epfn - spfn;
+        int rc;
 
+        ret = -EOVERFLOW;
+        if ( spfn + n != epfn )
+            goto destroy_m2p;
+
+        rc = iommu_map(hardware_domain, _dfn(i), _mfn(i),
+                       PAGE_ORDER_4K, n, IOMMUF_readable | IOMMUF_writable,
+                       &flush_flags);
+        if ( !rc )
+            rc = iommu_iotlb_flush(hardware_domain, _dfn(i), PAGE_ORDER_4K, n,
+                                       flush_flags);
+        if ( rc )
             goto destroy_m2p;
-        }
     }
 
     /* We can't revert any more */
diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
index 9f0cae52c0..d6526bca12 100644
--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -1225,11 +1225,23 @@ map_grant_ref(
             kind = IOMMUF_readable;
         else
             kind = 0;
-        if ( kind && iommu_legacy_map(ld, _dfn(mfn_x(mfn)), mfn, 0, kind) )
+        if ( kind )
         {
-            double_gt_unlock(lgt, rgt);
-            rc = GNTST_general_error;
-            goto undo_out;
+            dfn_t dfn = _dfn(mfn_x(mfn));
+            unsigned int flush_flags = 0;
+            int err;
+
+            err = iommu_map(ld, dfn, mfn, 0, 1, kind, &flush_flags);
+            if ( !err )
+                err = iommu_iotlb_flush(ld, dfn, 0, 1, flush_flags);
+            if ( err )
+                rc = GNTST_general_error;
+
+            if ( rc != GNTST_okay )
+            {
+                double_gt_unlock(lgt, rgt);
+                goto undo_out;
+            }
         }
     }
 
@@ -1473,21 +1485,25 @@ unmap_common(
     if ( rc == GNTST_okay && gnttab_need_iommu_mapping(ld) )
     {
         unsigned int kind;
+        dfn_t dfn = _dfn(mfn_x(op->mfn));
+        unsigned int flush_flags = 0;
         int err = 0;
 
         double_gt_lock(lgt, rgt);
 
         kind = mapkind(lgt, rd, op->mfn);
         if ( !kind )
-            err = iommu_legacy_unmap(ld, _dfn(mfn_x(op->mfn)), 0);
+            err = iommu_unmap(ld, dfn, 0, 1, &flush_flags);
         else if ( !(kind & MAPKIND_WRITE) )
-            err = iommu_legacy_map(ld, _dfn(mfn_x(op->mfn)), op->mfn, 0,
-                                   IOMMUF_readable);
-
-        double_gt_unlock(lgt, rgt);
+            err = iommu_map(ld, dfn, op->mfn, 0, 1, IOMMUF_readable,
+                            &flush_flags);
 
+        if ( !err )
+            err = iommu_iotlb_flush(ld, dfn, 0, 1, flush_flags);
         if ( err )
             rc = GNTST_general_error;
+
+        double_gt_unlock(lgt, rgt);
     }
 
     /* If just unmapped a writable mapping, mark as dirtied */
diff --git a/xen/common/memory.c b/xen/common/memory.c
index 8de334ff10..2891bef57b 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -824,8 +824,7 @@ int xenmem_add_to_physmap(struct domain *d, struct xen_add_to_physmap *xatp,
     xatp->gpfn += start;
     xatp->size -= start;
 
-    if ( is_iommu_enabled(d) )
-       this_cpu(iommu_dont_flush_iotlb) = 1;
+    this_cpu(iommu_dont_flush_iotlb) = true;
 
     while ( xatp->size > done )
     {
@@ -845,6 +844,8 @@ int xenmem_add_to_physmap(struct domain *d, struct xen_add_to_physmap *xatp,
         }
     }
 
+    this_cpu(iommu_dont_flush_iotlb) = false;
+
     if ( is_iommu_enabled(d) )
     {
         int ret;
diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index 568a4a5661..ab44c332bb 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -281,18 +281,6 @@ int iommu_map(struct domain *d, dfn_t dfn, mfn_t mfn,
     return rc;
 }
 
-int iommu_legacy_map(struct domain *d, dfn_t dfn, mfn_t mfn,
-                     unsigned int page_order, unsigned int flags)
-{
-    unsigned int flush_flags = 0;
-    int rc = iommu_map(d, dfn, mfn, page_order, 1, flags, &flush_flags);
-
-    if ( !this_cpu(iommu_dont_flush_iotlb) && !rc )
-        rc = iommu_iotlb_flush(d, dfn, (1u << page_order), 1, flush_flags);
-
-    return rc;
-}
-
 int iommu_unmap(struct domain *d, dfn_t dfn, unsigned int page_order,
                 unsigned int page_count, unsigned int *flush_flags)
 {
@@ -335,17 +323,6 @@ int iommu_unmap(struct domain *d, dfn_t dfn, unsigned int page_order,
     return rc;
 }
 
-int iommu_legacy_unmap(struct domain *d, dfn_t dfn, unsigned int page_order)
-{
-    unsigned int flush_flags = 0;
-    int rc = iommu_unmap(d, dfn, page_order, 1, &flush_flags);
-
-    if ( !this_cpu(iommu_dont_flush_iotlb) && ! rc )
-        rc = iommu_iotlb_flush(d, dfn, (1u << page_order), 1, flush_flags);
-
-    return rc;
-}
-
 int iommu_lookup_page(struct domain *d, dfn_t dfn, mfn_t *mfn,
                       unsigned int *flags)
 {
@@ -364,7 +341,7 @@ int iommu_iotlb_flush(struct domain *d, dfn_t dfn, unsigned int page_order,
     int rc;
 
     if ( !is_iommu_enabled(d) || !hd->platform_ops->iotlb_flush ||
-         !page_count || !flush_flags )
+         !page_count || !flush_flags || this_cpu(iommu_dont_flush_iotlb) )
         return 0;
 
     if ( dfn_eq(dfn, INVALID_DFN) )
diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
index d9c2e764aa..b7e5d3da09 100644
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -151,16 +151,8 @@ int __must_check iommu_map(struct domain *d, dfn_t dfn, mfn_t mfn,
 int __must_check iommu_unmap(struct domain *d, dfn_t dfn,
                              unsigned int page_order, unsigned int page_count,
                              unsigned int *flush_flags);
-
-int __must_check iommu_legacy_map(struct domain *d, dfn_t dfn, mfn_t mfn,
-                                  unsigned int page_order,
-                                  unsigned int flags);
-int __must_check iommu_legacy_unmap(struct domain *d, dfn_t dfn,
-                                    unsigned int page_order);
-
 int __must_check iommu_lookup_page(struct domain *d, dfn_t dfn, mfn_t *mfn,
                                    unsigned int *flags);
-
 int __must_check iommu_iotlb_flush(struct domain *d, dfn_t dfn,
                                    unsigned int page_order,
                                    unsigned int page_count,
@@ -370,15 +362,12 @@ void iommu_dev_iotlb_flush_timeout(struct domain *d, struct pci_dev *pdev);
 
 /*
  * The purpose of the iommu_dont_flush_iotlb optional cpu flag is to
- * avoid unecessary iotlb_flush in the low level IOMMU code.
- *
- * iommu_map_page/iommu_unmap_page must flush the iotlb but somethimes
- * this operation can be really expensive. This flag will be set by the
- * caller to notify the low level IOMMU code to avoid the iotlb flushes.
- * iommu_iotlb_flush/iommu_iotlb_flush_all will be explicitly called by
- * the caller.
+ * avoid unnecessary IOMMU flushing while updating the P2M.
+ * Setting the value to true will cause iommu_iotlb_flush() to return without
+ * actually performing a flush. A batch flush must therefore be done by the
+ * calling code after setting the value back to false.
  */
-DECLARE_PER_CPU(bool_t, iommu_dont_flush_iotlb);
+DECLARE_PER_CPU(bool, iommu_dont_flush_iotlb);
 
 extern struct spinlock iommu_pt_cleanup_lock;
 extern struct page_list_head iommu_pt_cleanup_list;
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Jul 30 14:30:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jul 2020 14:30:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k19ZN-00063Q-6Z; Thu, 30 Jul 2020 14:30:05 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=B5Vg=BJ=xen.org=paul@srs-us1.protection.inumbo.net>)
 id 1k19ZM-0005Pz-7I
 for xen-devel@lists.xenproject.org; Thu, 30 Jul 2020 14:30:04 +0000
X-Inumbo-ID: 1abd1826-d271-11ea-8d70-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1abd1826-d271-11ea-8d70-bc764e2007e4;
 Thu, 30 Jul 2020 14:29:41 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:MIME-Version:
 References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=JvpjW2yuqqsP9Nf9RqSvKwq47T2XEc66z0tnFgkOJgM=; b=qLJvLHRcxDGkFl+3jJ5yrdkPNU
 +i/+OMys406N7atfvxVxh2b/kuYYD+dijm+H0pI1Tq7HVuuffqCdfYIa5fo5/iRwgCTxBBpBi2I/H
 9rPwF1X8qeEZKODB/WSdWCDcm6SVgvIFpbq+B2eAJiOCdgSyOOBrmcrMhGosvTlOWJVs=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1k19Yx-0002P1-LG; Thu, 30 Jul 2020 14:29:39 +0000
Received: from host86-143-223-30.range86-143.btcentralplus.com
 ([86.143.223.30] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1k19Yx-0005aN-EP; Thu, 30 Jul 2020 14:29:39 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v2 09/10] iommu: remove the share_p2m operation
Date: Thu, 30 Jul 2020 15:29:25 +0100
Message-Id: <20200730142926.6051-10-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200730142926.6051-1-paul@xen.org>
References: <20200730142926.6051-1-paul@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin Tian <kevin.tian@intel.com>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Paul Durrant <pdurrant@amazon.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Paul Durrant <pdurrant@amazon.com>

Sharing of HAP tables is now VT-d specific, so the operation is no longer
defined for the AMD IOMMU. There's also no need to proactively set
vtd.pgd_maddr when using shared EPT, as it is straightforward to define a
helper function that returns the appropriate value in the shared and
non-shared cases.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <george.dunlap@citrix.com>
Cc: Wei Liu <wl@xen.org>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>
Cc: Kevin Tian <kevin.tian@intel.com>

v2:
  - Put the PGD level adjust into the helper function too, since it is
    irrelevant in the shared EPT case
---
 xen/arch/x86/mm/p2m.c               |  3 -
 xen/drivers/passthrough/iommu.c     |  8 ---
 xen/drivers/passthrough/vtd/iommu.c | 90 ++++++++++++++++-------------
 xen/include/xen/iommu.h             |  3 -
 4 files changed, 50 insertions(+), 54 deletions(-)

diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index 9f8b9bc5fd..3bd8d83d23 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -726,9 +726,6 @@ int p2m_alloc_table(struct p2m_domain *p2m)
 
     p2m->phys_table = pagetable_from_mfn(top_mfn);
 
-    if ( hap_enabled(d) )
-        iommu_share_p2m_table(d);
-
     p2m_unlock(p2m);
     return 0;
 }
diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index ab44c332bb..7464f10d1c 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -498,14 +498,6 @@ int iommu_do_domctl(
     return ret;
 }
 
-void iommu_share_p2m_table(struct domain* d)
-{
-    ASSERT(hap_enabled(d));
-
-    if ( iommu_use_hap_pt(d) )
-        iommu_get_ops()->share_p2m(d);
-}
-
 void iommu_crash_shutdown(void)
 {
     if ( !iommu_crash_disable )
diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index 68cf0e535a..a532d9e88c 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -318,6 +318,48 @@ static u64 addr_to_dma_page_maddr(struct domain *domain, u64 addr, int alloc)
     return pte_maddr;
 }
 
+static uint64_t domain_pgd_maddr(struct domain *d, struct vtd_iommu *iommu)
+{
+    struct domain_iommu *hd = dom_iommu(d);
+    uint64_t pgd_maddr;
+    unsigned int agaw;
+
+    ASSERT(spin_is_locked(&hd->arch.mapping_lock));
+
+    if ( iommu_use_hap_pt(d) )
+    {
+        mfn_t pgd_mfn =
+            pagetable_get_mfn(p2m_get_pagetable(p2m_get_hostp2m(d)));
+
+        return pagetable_get_paddr(pagetable_from_mfn(pgd_mfn));
+    }
+
+    if ( !hd->arch.vtd.pgd_maddr )
+    {
+        addr_to_dma_page_maddr(d, 0, 1);
+
+        if ( !hd->arch.vtd.pgd_maddr )
+            return 0;
+    }
+
+    pgd_maddr = hd->arch.vtd.pgd_maddr;
+
+    /* Skip top levels of page tables for 2- and 3-level DRHDs. */
+    for ( agaw = level_to_agaw(4);
+          agaw != level_to_agaw(iommu->nr_pt_levels);
+          agaw-- )
+    {
+        struct dma_pte *p = map_vtd_domain_page(pgd_maddr);
+
+        pgd_maddr = dma_pte_addr(*p);
+        unmap_vtd_domain_page(p);
+        if ( !pgd_maddr )
+            return 0;
+    }
+
+    return pgd_maddr;
+}
+
 static void iommu_flush_write_buffer(struct vtd_iommu *iommu)
 {
     u32 val;
@@ -1286,7 +1328,7 @@ int domain_context_mapping_one(
     struct context_entry *context, *context_entries;
     u64 maddr, pgd_maddr;
     u16 seg = iommu->drhd->segment;
-    int agaw, rc, ret;
+    int rc, ret;
     bool_t flush_dev_iotlb;
 
     ASSERT(pcidevs_locked());
@@ -1340,37 +1382,18 @@ int domain_context_mapping_one(
     if ( iommu_hwdom_passthrough && is_hardware_domain(domain) )
     {
         context_set_translation_type(*context, CONTEXT_TT_PASS_THRU);
-        agaw = level_to_agaw(iommu->nr_pt_levels);
     }
     else
     {
         spin_lock(&hd->arch.mapping_lock);
 
-        /* Ensure we have pagetables allocated down to leaf PTE. */
-        if ( hd->arch.vtd.pgd_maddr == 0 )
+        pgd_maddr = domain_pgd_maddr(domain, iommu);
+        if ( !pgd_maddr )
         {
-            addr_to_dma_page_maddr(domain, 0, 1);
-            if ( hd->arch.vtd.pgd_maddr == 0 )
-            {
-            nomem:
-                spin_unlock(&hd->arch.mapping_lock);
-                spin_unlock(&iommu->lock);
-                unmap_vtd_domain_page(context_entries);
-                return -ENOMEM;
-            }
-        }
-
-        /* Skip top levels of page tables for 2- and 3-level DRHDs. */
-        pgd_maddr = hd->arch.vtd.pgd_maddr;
-        for ( agaw = level_to_agaw(4);
-              agaw != level_to_agaw(iommu->nr_pt_levels);
-              agaw-- )
-        {
-            struct dma_pte *p = map_vtd_domain_page(pgd_maddr);
-            pgd_maddr = dma_pte_addr(*p);
-            unmap_vtd_domain_page(p);
-            if ( pgd_maddr == 0 )
-                goto nomem;
+            spin_unlock(&hd->arch.mapping_lock);
+            spin_unlock(&iommu->lock);
+            unmap_vtd_domain_page(context_entries);
+            return -ENOMEM;
         }
 
         context_set_address_root(*context, pgd_maddr);
@@ -1389,7 +1412,7 @@ int domain_context_mapping_one(
         return -EFAULT;
     }
 
-    context_set_address_width(*context, agaw);
+    context_set_address_width(*context, level_to_agaw(iommu->nr_pt_levels));
     context_set_fault_enable(*context);
     context_set_present(*context);
     iommu_sync_cache(context, sizeof(struct context_entry));
@@ -1848,18 +1871,6 @@ static int __init vtd_ept_page_compatible(struct vtd_iommu *iommu)
            (ept_has_1gb(ept_cap) && opt_hap_1gb) <= cap_sps_1gb(vtd_cap);
 }
 
-/*
- * set VT-d page table directory to EPT table if allowed
- */
-static void iommu_set_pgd(struct domain *d)
-{
-    mfn_t pgd_mfn;
-
-    pgd_mfn = pagetable_get_mfn(p2m_get_pagetable(p2m_get_hostp2m(d)));
-    dom_iommu(d)->arch.vtd.pgd_maddr =
-        pagetable_get_paddr(pagetable_from_mfn(pgd_mfn));
-}
-
 static int rmrr_identity_mapping(struct domain *d, bool_t map,
                                  const struct acpi_rmrr_unit *rmrr,
                                  u32 flag)
@@ -2719,7 +2730,6 @@ static struct iommu_ops __initdata vtd_ops = {
     .adjust_irq_affinities = adjust_vtd_irq_affinities,
     .suspend = vtd_suspend,
     .resume = vtd_resume,
-    .share_p2m = iommu_set_pgd,
     .crash_shutdown = vtd_crash_shutdown,
     .iotlb_flush = iommu_flush_iotlb_pages,
     .iotlb_flush_all = iommu_flush_iotlb_all,
diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
index b7e5d3da09..1f25d2082f 100644
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -271,7 +271,6 @@ struct iommu_ops {
 
     int __must_check (*suspend)(void);
     void (*resume)(void);
-    void (*share_p2m)(struct domain *d);
     void (*crash_shutdown)(void);
     int __must_check (*iotlb_flush)(struct domain *d, dfn_t dfn,
                                     unsigned long page_count,
@@ -348,8 +347,6 @@ void iommu_resume(void);
 void iommu_crash_shutdown(void);
 int iommu_get_reserved_device_memory(iommu_grdm_t *, void *);
 
-void iommu_share_p2m_table(struct domain *d);
-
 #ifdef CONFIG_HAS_PCI
 int iommu_do_pci_domctl(struct xen_domctl *, struct domain *d,
                         XEN_GUEST_HANDLE_PARAM(xen_domctl_t));
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Jul 30 14:40:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jul 2020 14:40:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k19it-0007AY-6s; Thu, 30 Jul 2020 14:39:55 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=B5Vg=BJ=xen.org=paul@srs-us1.protection.inumbo.net>)
 id 1k19is-0007AT-1j
 for xen-devel@lists.xenproject.org; Thu, 30 Jul 2020 14:39:54 +0000
X-Inumbo-ID: 874026c2-d272-11ea-aad6-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 874026c2-d272-11ea-aad6-12813bfff9fa;
 Thu, 30 Jul 2020 14:39:52 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
 In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:Content-Type:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=zIlKdkXsPhIQfdArjrBSxAJXwBuXmUMve699NvtwDOs=; b=oJD4lGR9uCFCfa9uc9Iv/KrlCc
 EcN1KOvdPZUHCDHxG0ja40qtUKbz7b72rUCfBJu5AGKovdpKH4amzjKyBlHuhTbOKX+awj+COhqA0
 fU3kdY5f/IGu9eqRAVDr1jNUXT1tvONehSklE0irPxxH8F3pAzWqQ1oOoqnCt/IBwpGg=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1k19io-0002df-RK; Thu, 30 Jul 2020 14:39:50 +0000
Received: from host86-143-223-30.range86-143.btcentralplus.com
 ([86.143.223.30] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1k19Yy-0005aN-Bd; Thu, 30 Jul 2020 14:29:40 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v2 10/10] iommu: stop calling IOMMU page tables 'p2m tables'
Date: Thu, 30 Jul 2020 15:29:26 +0100
Message-Id: <20200730142926.6051-11-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200730142926.6051-1-paul@xen.org>
References: <20200730142926.6051-1-paul@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 Paul Durrant <pdurrant@amazon.com>, Kevin Tian <kevin.tian@intel.com>,
 Jan Beulich <jbeulich@suse.com>, Paul Durrant <paul@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Paul Durrant <pdurrant@amazon.com>

The existing name is confusing and not consistent with the terminology introduced
with 'dfn_t'.
Just call them IOMMU page tables.

Also remove a pointless check of the 'acpi_drhd_units' list in
vtd_dump_page_table_level(). If the list is empty then IOMMU mappings would
not have been enabled for the domain in the first place.

NOTE: All calls to printk() have also been removed from
      iommu_dump_page_tables(); the implementation specific code is now
      responsible for all output.
      The check for the global 'iommu_enabled' has also been replaced by an
      ASSERT since iommu_dump_page_tables() is not registered as a key handler
      unless IOMMU mappings are enabled.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Paul Durrant <paul@xen.org>
Cc: Kevin Tian <kevin.tian@intel.com>

v2:
 - Moved all output into implementation specific code
---
 xen/drivers/passthrough/amd/pci_amd_iommu.c | 16 ++++++-------
 xen/drivers/passthrough/iommu.c             | 21 ++++-------------
 xen/drivers/passthrough/vtd/iommu.c         | 26 +++++++++++----------
 xen/include/xen/iommu.h                     |  2 +-
 4 files changed, 28 insertions(+), 37 deletions(-)

diff --git a/xen/drivers/passthrough/amd/pci_amd_iommu.c b/xen/drivers/passthrough/amd/pci_amd_iommu.c
index d79668f948..b3e95cf18e 100644
--- a/xen/drivers/passthrough/amd/pci_amd_iommu.c
+++ b/xen/drivers/passthrough/amd/pci_amd_iommu.c
@@ -491,8 +491,8 @@ static int amd_iommu_group_id(u16 seg, u8 bus, u8 devfn)
 
 #include <asm/io_apic.h>
 
-static void amd_dump_p2m_table_level(struct page_info* pg, int level, 
-                                     paddr_t gpa, int indent)
+static void amd_dump_page_table_level(struct page_info* pg, int level,
+                                      paddr_t gpa, int indent)
 {
     paddr_t address;
     struct amd_iommu_pte *table_vaddr;
@@ -529,7 +529,7 @@ static void amd_dump_p2m_table_level(struct page_info* pg, int level,
 
         address = gpa + amd_offset_level_address(index, level);
         if ( pde->next_level >= 1 )
-            amd_dump_p2m_table_level(
+            amd_dump_page_table_level(
                 mfn_to_page(_mfn(pde->mfn)), pde->next_level,
                 address, indent + 1);
         else
@@ -542,16 +542,16 @@ static void amd_dump_p2m_table_level(struct page_info* pg, int level,
     unmap_domain_page(table_vaddr);
 }
 
-static void amd_dump_p2m_table(struct domain *d)
+static void amd_dump_page_tables(struct domain *d)
 {
     const struct domain_iommu *hd = dom_iommu(d);
 
     if ( !hd->arch.amd.root_table )
         return;
 
-    printk("p2m table has %d levels\n", hd->arch.amd.paging_mode);
-    amd_dump_p2m_table_level(hd->arch.amd.root_table,
-                             hd->arch.amd.paging_mode, 0, 0);
+    printk("AMD IOMMU table has %d levels\n", hd->arch.amd.paging_mode);
+    amd_dump_page_table_level(hd->arch.amd.root_table,
+                              hd->arch.amd.paging_mode, 0, 0);
 }
 
 static const struct iommu_ops __initconstrel _iommu_ops = {
@@ -578,7 +578,7 @@ static const struct iommu_ops __initconstrel _iommu_ops = {
     .suspend = amd_iommu_suspend,
     .resume = amd_iommu_resume,
     .crash_shutdown = amd_iommu_crash_shutdown,
-    .dump_p2m_table = amd_dump_p2m_table,
+    .dump_page_tables = amd_dump_page_tables,
 };
 
 static const struct iommu_init_ops __initconstrel _iommu_init_ops = {
diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index 7464f10d1c..0f468379e1 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -22,7 +22,7 @@
 #include <xen/keyhandler.h>
 #include <xsm/xsm.h>
 
-static void iommu_dump_p2m_table(unsigned char key);
+static void iommu_dump_page_tables(unsigned char key);
 
 unsigned int __read_mostly iommu_dev_iotlb_timeout = 1000;
 integer_param("iommu_dev_iotlb_timeout", iommu_dev_iotlb_timeout);
@@ -212,7 +212,7 @@ void __hwdom_init iommu_hwdom_init(struct domain *d)
     if ( !is_iommu_enabled(d) )
         return;
 
-    register_keyhandler('o', &iommu_dump_p2m_table, "dump iommu p2m table", 0);
+    register_keyhandler('o', &iommu_dump_page_tables, "dump iommu page tables", 0);
 
     hd->platform_ops->hwdom_init(d);
 }
@@ -533,16 +533,12 @@ bool_t iommu_has_feature(struct domain *d, enum iommu_feature feature)
     return is_iommu_enabled(d) && test_bit(feature, dom_iommu(d)->features);
 }
 
-static void iommu_dump_p2m_table(unsigned char key)
+static void iommu_dump_page_tables(unsigned char key)
 {
     struct domain *d;
     const struct iommu_ops *ops;
 
-    if ( !iommu_enabled )
-    {
-        printk("IOMMU not enabled!\n");
-        return;
-    }
+    ASSERT(iommu_enabled);
 
     ops = iommu_get_ops();
 
@@ -553,14 +549,7 @@ static void iommu_dump_p2m_table(unsigned char key)
         if ( is_hardware_domain(d) || !is_iommu_enabled(d) )
             continue;
 
-        if ( iommu_use_hap_pt(d) )
-        {
-            printk("\ndomain%d IOMMU p2m table shared with MMU: \n", d->domain_id);
-            continue;
-        }
-
-        printk("\ndomain%d IOMMU p2m table: \n", d->domain_id);
-        ops->dump_p2m_table(d);
+        ops->dump_page_tables(d);
     }
 
     rcu_read_unlock(&domlist_read_lock);
diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index a532d9e88c..f8da4fe0e7 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -2582,8 +2582,8 @@ static void vtd_resume(void)
     }
 }
 
-static void vtd_dump_p2m_table_level(paddr_t pt_maddr, int level, paddr_t gpa, 
-                                     int indent)
+static void vtd_dump_page_table_level(paddr_t pt_maddr, int level, paddr_t gpa,
+                                      int indent)
 {
     paddr_t address;
     int i;
@@ -2612,8 +2612,8 @@ static void vtd_dump_p2m_table_level(paddr_t pt_maddr, int level, paddr_t gpa,
 
         address = gpa + offset_level_address(i, level);
         if ( next_level >= 1 ) 
-            vtd_dump_p2m_table_level(dma_pte_addr(*pte), next_level, 
-                                     address, indent + 1);
+            vtd_dump_page_table_level(dma_pte_addr(*pte), next_level,
+                                      address, indent + 1);
         else
             printk("%*sdfn: %08lx mfn: %08lx\n",
                    indent, "",
@@ -2624,17 +2624,19 @@ static void vtd_dump_p2m_table_level(paddr_t pt_maddr, int level, paddr_t gpa,
     unmap_vtd_domain_page(pt_vaddr);
 }
 
-static void vtd_dump_p2m_table(struct domain *d)
+static void vtd_dump_page_tables(struct domain *d)
 {
-    const struct domain_iommu *hd;
+    const struct domain_iommu *hd = dom_iommu(d);
 
-    if ( list_empty(&acpi_drhd_units) )
+    if ( iommu_use_hap_pt(d) )
+    {
+        printk("VT-D sharing EPT table\n");
         return;
+    }
 
-    hd = dom_iommu(d);
-    printk("p2m table has %d levels\n", agaw_to_level(hd->arch.vtd.agaw));
-    vtd_dump_p2m_table_level(hd->arch.vtd.pgd_maddr,
-                             agaw_to_level(hd->arch.vtd.agaw), 0, 0);
+    printk("VT-D table has %d levels\n", agaw_to_level(hd->arch.vtd.agaw));
+    vtd_dump_page_table_level(hd->arch.vtd.pgd_maddr,
+                              agaw_to_level(hd->arch.vtd.agaw), 0, 0);
 }
 
 static int __init intel_iommu_quarantine_init(struct domain *d)
@@ -2734,7 +2736,7 @@ static struct iommu_ops __initdata vtd_ops = {
     .iotlb_flush = iommu_flush_iotlb_pages,
     .iotlb_flush_all = iommu_flush_iotlb_all,
     .get_reserved_device_memory = intel_iommu_get_reserved_device_memory,
-    .dump_p2m_table = vtd_dump_p2m_table,
+    .dump_page_tables = vtd_dump_page_tables,
 };
 
 const struct iommu_init_ops __initconstrel intel_iommu_init_ops = {
diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
index 1f25d2082f..23e884f54b 100644
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -277,7 +277,7 @@ struct iommu_ops {
                                     unsigned int flush_flags);
     int __must_check (*iotlb_flush_all)(struct domain *d);
     int (*get_reserved_device_memory)(iommu_grdm_t *, void *);
-    void (*dump_p2m_table)(struct domain *d);
+    void (*dump_page_tables)(struct domain *d);
 
 #ifdef CONFIG_HAS_DEVICE_TREE
     /*
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Jul 30 15:18:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jul 2020 15:18:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1AJW-00026G-CX; Thu, 30 Jul 2020 15:17:46 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=uTMv=BJ=citrix.com=george.dunlap@srs-us1.protection.inumbo.net>)
 id 1k1AJV-00026B-Px
 for xen-devel@lists.xenproject.org; Thu, 30 Jul 2020 15:17:45 +0000
X-Inumbo-ID: d016cc02-d277-11ea-aade-12813bfff9fa
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d016cc02-d277-11ea-aade-12813bfff9fa;
 Thu, 30 Jul 2020 15:17:42 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1596122263;
 h=from:to:cc:subject:date:message-id:content-id:
 content-transfer-encoding:mime-version;
 bh=2UWeugfy3yHZRI0/TOl8DyLMdDVKj2LrqM1+xVgFZJE=;
 b=Z7qQPXPl+rAueJJdI3NXAYHm8S2GAoMzvQPh9ogmiHPc1s1ZOOclkAcN
 XabuSMEIA8D0ggCO7SkgNf2w3xRAekk5QPob0IIuX9y7EmJZTLzjr8WX0
 grJvjYJmcNZKEsxtk5IA7BKN/tSdC7EWEpOScv8vmVQcC49DRaS9tn1zg I=;
Authentication-Results: esa3.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: 7IrFBHSHhHcwoKZqdN/1wSRELn98nkgt3SEYN4mcF8nYCibAqhEh105GCcOReM1vha7AOBYCnf
 QrgT05hen46j6GkVakLVvp7lO2HELCJvSFO9aA6n8UiOooH3U3axY/ek7jv08wgITU30VbZmxy
 PEHbG2UYh/TVIBbJBJ7j3Lo9Ny8NzGJwca7R0GD5+Sk+u16gofwLpNisSaBpuy6qMc8+HOUE9e
 Kj4glk1r6xqEtk1gRp92JI9AmVP0LwWBae5IBv9XINQP72camv573Gfry3X+9wUhwFw6OCPRnL
 Wsg=
X-SBRS: 2.7
X-MesageID: 23543145
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,414,1589256000"; d="scan'208";a="23543145"
From: George Dunlap <George.Dunlap@citrix.com>
To: Tamas K Lengyel <tamas.k.lengyel@gmail.com>, "intel-xen@intel.com"
 <intel-xen@intel.com>, "daniel.kiper@oracle.com" <daniel.kiper@oracle.com>,
 Roger Pau Monne <roger.pau@citrix.com>, Sergey Dyasli
 <sergey.dyasli@citrix.com>, Christopher Clark
 <christopher.w.clark@gmail.com>, Rich Persaud <persaur@gmail.com>, "Kevin
 Pearson" <kevin.pearson@ortmanconsulting.com>, Juergen Gross
 <jgross@suse.com>, Paul Durrant <pdurrant@amazon.com>, "Ji, John"
 <john.ji@intel.com>, "Natarajan, Janakarajan" <jnataraj@amd.com>,
 "edgar.iglesias@xilinx.com" <edgar.iglesias@xilinx.com>,
 "robin.randhawa@arm.com" <robin.randhawa@arm.com>, Artem Mygaiev
 <Artem_Mygaiev@epam.com>, Matt Spencer <Matt.Spencer@arm.com>,
 "anastassios.nanos@onapp.com" <anastassios.nanos@onapp.com>, "Stewart
 Hildebrand" <Stewart.Hildebrand@dornerworks.com>, Volodymyr Babchuk
 <volodymyr_babchuk@epam.com>, "mirela.simonovic@aggios.com"
 <mirela.simonovic@aggios.com>, Jarvis Roach <Jarvis.Roach@dornerworks.com>,
 Jeff Kubascik <Jeff.Kubascik@dornerworks.com>, Stefano Stabellini
 <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Ian Jackson
 <Ian.Jackson@citrix.com>, Rian Quinn <rianquinn@gmail.com>, "Daniel P. Smith"
 <dpsmith@apertussolutions.com>,
 =?utf-8?B?4oCL4oCL4oCL4oCL4oCL4oCL4oCLRG91ZyBHb2xkc3RlaW4=?=
 <cardoe@cardoe.com>, George Dunlap <George.Dunlap@citrix.com>, "David
 Woodhouse" <dwmw@amazon.co.uk>,
 =?utf-8?B?4oCL4oCL4oCL4oCL4oCL4oCL4oCLQW1pdCBTaGFo?= <amit@infradead.org>,
 =?utf-8?B?4oCL4oCL4oCL4oCL4oCL4oCL4oCLVmFyYWQgR2F1dGFt?=
 <varadgautam@gmail.com>, Brian Woods <brian.woods@xilinx.com>, Robert Townley
 <rob.townley@gmail.com>, Bobby Eshleman <bobby.eshleman@gmail.com>, "Olivier
 Lambert" <olivier.lambert@vates.fr>, Andrew Cooper
 <Andrew.Cooper3@citrix.com>, Wei Liu <wl@xen.org>
Subject: RESCHEDULED Call for agenda items for Community Call, August 13 @
 15:00 UTC
Thread-Topic: RESCHEDULED Call for agenda items for Community Call, August 13
 @ 15:00 UTC
Thread-Index: AQHWZoSNRscNI50zO02VynkLplKjzQ==
Date: Thu, 30 Jul 2020 15:17:35 +0000
Message-ID: <1E023F6E-0E3C-4CD5-A074-7BF62635E123@citrix.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-mailer: Apple Mail (2.3608.80.23.2.2)
x-ms-exchange-messagesentrepresentingtype: 1
x-ms-exchange-transport-fromentityheader: Hosted
Content-Type: text/plain; charset="utf-8"
Content-ID: <1FB6EE734F640240A5CF592535F87624@citrix.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "open list:X86" <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

SGV5IGFsbCwNCg0KVGhlIGNvbW11bml0eSBjYWxsIGlzIHNjaGVkdWxlZCBmb3IgbmV4dCB3ZWVr
LCA2IEF1Z3VzdC4gIEksIGhvd2V2ZXIsIHdpbGwgYmUgb24gUFRPIHRoYXQgd2VlazsgSSBwcm9w
b3NlIHJlc2NoZWR1bGluZyBpdCBmb3IgdGhlIGZvbGxvd2luZyB3ZWVrLCAxMyBBdWd1c3QsIGF0
IHRoZSBzYW1lIHRpbWUuDQoNClRoZSBwcm9wb3NlZCBhZ2VuZGEgaXMgaW4gWlpaIGFuZCB5b3Ug
Y2FuIGVkaXQgdG8gYWRkIGl0ZW1zLiAgQWx0ZXJuYXRpdmVseSwgeW91IGNhbiByZXBseSB0byB0
aGlzIG1haWwgZGlyZWN0bHkuDQoNCkFnZW5kYSBpdGVtcyBhcHByZWNpYXRlZCBhIGZldyBkYXlz
IGJlZm9yZSB0aGUgY2FsbDogcGxlYXNlIHB1dCB5b3VyIG5hbWUgYmVzaWRlcyBpdGVtcyBpZiB5
b3UgZWRpdCB0aGUgZG9jdW1lbnQuDQoNCk5vdGUgdGhlIGZvbGxvd2luZyBhZG1pbmlzdHJhdGl2
ZSBjb252ZW50aW9ucyBmb3IgdGhlIGNhbGw6DQoqIFVubGVzcywgYWdyZWVkIGluIHRoZSBwZXJ2
aW91cyBtZWV0aW5nIG90aGVyd2lzZSwgdGhlIGNhbGwgaXMgb24gdGhlIDFzdCBUaHVyc2RheSBv
ZiBlYWNoIG1vbnRoIGF0IDE2MDAgQnJpdGlzaCBUaW1lIChlaXRoZXIgR01UIG9yIEJTVCkNCiog
SSB1c3VhbGx5IHNlbmQgb3V0IGEgbWVldGluZyByZW1pbmRlciBhIGZldyBkYXlzIGJlZm9yZSB3
aXRoIGEgcHJvdmlzaW9uYWwgYWdlbmRhDQoNCiogSWYgeW91IHdhbnQgdG8gYmUgQ0MnZWQgcGxl
YXNlIGFkZCBvciByZW1vdmUgeW91cnNlbGYgZnJvbSB0aGUgc2lnbi11cC1zaGVldCBhdCBodHRw
czovL2NyeXB0cGFkLmZyL3BhZC8jLzIvcGFkL2VkaXQvRDl2R3ppaFB4eEFPZTZSRlB6MHNSQ2Yr
Lw0KDQpCZXN0IFJlZ2FyZHMNCkdlb3JnZQ0KDQoNCg0KPT0gRGlhbC1pbiBJbmZvcm1hdGlvbiA9
PQ0KIyMgTWVldGluZyB0aW1lDQoxNTowMCAtIDE2OjAwIFVUQyAoZHVyaW5nIEJTVCkNCkZ1cnRo
ZXIgSW50ZXJuYXRpb25hbCBtZWV0aW5nIHRpbWVzOiBodHRwczovL3d3dy50aW1lYW5kZGF0ZS5j
b20vd29ybGRjbG9jay9tZWV0aW5nZGV0YWlscy5odG1sP3llYXI9MjAyMCZtb250aD01JmRheT03
JmhvdXI9MTUmbWluPTAmc2VjPTAmcDE9MTIzNCZwMj0zNyZwMz0yMjQmcDQ9MTc5DQoNCg0KIyMg
RGlhbCBpbiBkZXRhaWxzDQpXZWI6IGh0dHBzOi8vd3d3LmdvdG9tZWV0Lm1lL0dlb3JnZUR1bmxh
cA0KDQpZb3UgY2FuIGFsc28gZGlhbCBpbiB1c2luZyB5b3VyIHBob25lLg0KQWNjZXNzIENvZGU6
IDE2OC02ODItMTA5DQoNCkNoaW5hIChUb2xsIEZyZWUpOiA0MDA4IDgxMTA4NA0KR2VybWFueTog
KzQ5IDY5MiA1NzM2IDczMTcNClBvbGFuZCAoVG9sbCBGcmVlKTogMDAgODAwIDExMjQ3NTkNClVr
cmFpbmUgKFRvbGwgRnJlZSk6IDAgODAwIDUwIDE3MzMNClVuaXRlZCBLaW5nZG9tOiArNDQgMzMw
IDIyMSAwMDg4DQpVbml0ZWQgU3RhdGVzOiArMSAoNTcxKSAzMTctMzEyOQ0KU3BhaW46ICszNCA5
MzIgNzUgMjAwNA0KDQoNCk1vcmUgcGhvbmUgbnVtYmVycw0KQXVzdHJhbGlhOiArNjEgMiA5MDg3
IDM2MDQNCkF1c3RyaWE6ICs0MyA3IDIwODEgNTQyNw0KQXJnZW50aW5hIChUb2xsIEZyZWUpOiAw
IDgwMCA0NDQgMzM3NQ0KQmFocmFpbiAoVG9sbCBGcmVlKTogODAwIDgxIDExMQ0KQmVsYXJ1cyAo
VG9sbCBGcmVlKTogOCA4MjAgMDAxMSAwNDAwDQpCZWxnaXVtOiArMzIgMjggOTMgNzAxOA0KQnJh
emlsIChUb2xsIEZyZWUpOiAwIDgwMCAwNDcgNDkwNg0KQnVsZ2FyaWEgKFRvbGwgRnJlZSk6IDAw
ODAwIDEyMCA0NDE3DQpDYW5hZGE6ICsxICg2NDcpIDQ5Ny05MzkxDQpDaGlsZSAoVG9sbCBGcmVl
KTogODAwIDM5NSAxNTANCkNvbG9tYmlhIChUb2xsIEZyZWUpOiAwMSA4MDAgNTE4IDQ0ODMNCkN6
ZWNoIFJlcHVibGljIChUb2xsIEZyZWUpOiA4MDAgNTAwNDQ4DQpEZW5tYXJrOiArNDUgMzIgNzIg
MDMgODINCkZpbmxhbmQ6ICszNTggOTIzIDE3IDA1NjgNCkZyYW5jZTogKzMzIDE3MCA5NTAgNTk0
DQpHcmVlY2UgKFRvbGwgRnJlZSk6IDAwIDgwMCA0NDE0IDM4MzgNCkhvbmcgS29uZyAoVG9sbCBG
cmVlKTogMzA3MTMxNjk5MDYtODg2LTk2NQ0KSHVuZ2FyeSAoVG9sbCBGcmVlKTogKDA2KSA4MCA5
ODYgMjU1DQpJY2VsYW5kIChUb2xsIEZyZWUpOiA4MDAgNzIwNA0KSW5kaWEgKFRvbGwgRnJlZSk6
IDE4MDAyNjY5MjcyDQpJbmRvbmVzaWEgKFRvbGwgRnJlZSk6IDAwNyA4MDMgMDIwIDUzNzUNCkly
ZWxhbmQ6ICszNTMgMTUgMzYwIDcyOA0KSXNyYWVsIChUb2xsIEZyZWUpOiAxIDgwOSA0NTQgODMw
DQpJdGFseTogKzM5IDAgMjQ3IDkyIDEzIDAxDQpKYXBhbiAoVG9sbCBGcmVlKTogMCAxMjAgNjYz
IDgwMA0KS29yZWEsIFJlcHVibGljIG9mIChUb2xsIEZyZWUpOiAwMDc5OCAxNCAyMDcgNDkxNA0K
THV4ZW1ib3VyZyAoVG9sbCBGcmVlKTogODAwIDg1MTU4DQpNYWxheXNpYSAoVG9sbCBGcmVlKTog
MSA4MDAgODEgNjg1NA0KTWV4aWNvIChUb2xsIEZyZWUpOiAwMSA4MDAgNTIyIDExMzMNCk5ldGhl
cmxhbmRzOiArMzEgMjA3IDk0MSAzNzcNCk5ldyBaZWFsYW5kOiArNjQgOSAyODAgNjMwMg0KTm9y
d2F5OiArNDcgMjEgOTMgMzcgNTENClBhbmFtYSAoVG9sbCBGcmVlKTogMDAgODAwIDIyNiA3OTI4
DQpQZXJ1IChUb2xsIEZyZWUpOiAwIDgwMCA3NzAyMw0KUGhpbGlwcGluZXMgKFRvbGwgRnJlZSk6
IDEgODAwIDExMTAgMTY2MQ0KUG9ydHVnYWwgKFRvbGwgRnJlZSk6IDgwMCA4MTkgNTc1DQpSb21h
bmlhIChUb2xsIEZyZWUpOiAwIDgwMCA0MTAgMDI5DQpSdXNzaWFuIEZlZGVyYXRpb24gKFRvbGwg
RnJlZSk6IDggODAwIDEwMCA2MjAzDQpTYXVkaSBBcmFiaWEgKFRvbGwgRnJlZSk6IDgwMCA4NDQg
MzYzMw0KU2luZ2Fwb3JlIChUb2xsIEZyZWUpOiAxODAwNzIzMTMyMw0KU291dGggQWZyaWNhIChU
b2xsIEZyZWUpOiAwIDgwMCA1NTUgNDQ3DQpTd2VkZW46ICs0NiA4NTMgNTI3IDgyNw0KU3dpdHpl
cmxhbmQ6ICs0MSAyMjUgNDU5OSA3OA0KVGFpd2FuIChUb2xsIEZyZWUpOiAwIDgwMCA2NjYgODU0
DQpUaGFpbGFuZCAoVG9sbCBGcmVlKTogMDAxIDgwMCAwMTEgMDIzDQpUdXJrZXkgKFRvbGwgRnJl
ZSk6IDAwIDgwMCA0NDg4IDIzNjgzDQpVbml0ZWQgQXJhYiBFbWlyYXRlcyAoVG9sbCBGcmVlKTog
ODAwIDA0NCA0MDQzOQ0KVXJ1Z3VheSAoVG9sbCBGcmVlKTogMDAwNCAwMTkgMTAxOA0KVmlldCBO
YW0gKFRvbGwgRnJlZSk6IDEyMiA4MCA0ODENCuKAi+KAi+KAi+KAi+KAi+KAi+KAiw0KDQpGaXJz
dCBHb1RvTWVldGluZz8gTGV0J3MgZG8gYSBxdWljayBzeXN0ZW0gY2hlY2s6DQoNCmh0dHBzOi8v
bGluay5nb3RvbWVldGluZy5jb20vc3lzdGVtLWNoZWNrDQo=


From xen-devel-bounces@lists.xenproject.org Thu Jul 30 15:41:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jul 2020 15:41:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1AgZ-0004Zq-FZ; Thu, 30 Jul 2020 15:41:35 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=uTMv=BJ=citrix.com=george.dunlap@srs-us1.protection.inumbo.net>)
 id 1k1AgY-0004Zl-I5
 for xen-devel@lists.xenproject.org; Thu, 30 Jul 2020 15:41:34 +0000
X-Inumbo-ID: 24827126-d27b-11ea-8d98-bc764e2007e4
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 24827126-d27b-11ea-8d98-bc764e2007e4;
 Thu, 30 Jul 2020 15:41:33 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1596123694;
 h=from:to:cc:subject:date:message-id:references:
 in-reply-to:content-id:content-transfer-encoding: mime-version;
 bh=E2cl8ZcK0apEIOBVFe2bXnr32SKn5oC/68bk31hlAQo=;
 b=JaFOr4WdDlCAMtJo3OdXHGPjoH+YclG3V2YsD6ANYCGpBiJExZbBdXNE
 2Y8PakSZY/QzyAa9nFoVsNydcp3EJ17TqBm5r8i4spd4H6rrxSDvSBNx5
 miH7GUZGQMFNdKtH834LzEXgQkU008FoolYGBwHrqqZtmFZk8gpw9EGqF 8=;
Authentication-Results: esa3.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: GUh8wbltn/5QguZNqKP/VJEpybgE2keFaz8OG9moVWWzPNr5nsHQXPeE6carXVd1XdmRTIcJmD
 rQX3T/1XR7hJTPXhzaXWtOM5pjswLE3eoxOxxKkkQa44Tekskf2o0t9zhUk8XxwY3Pkp2orak+
 SPRpyyx10aLvhuD4A7K60U64Rpdx6AOUoUeCJISQomwrcaXexM0xZ6EsSAqQrU/Cj6g77PHbLz
 UF4eYwTngaIpjT2GRzlwK4XJkCWEuIr7wKiw+My5BrLvRK1Nb15Sep8cIFg2yfckANZx29jt6e
 cSg=
X-SBRS: 2.7
X-MesageID: 23546757
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,414,1589256000"; d="scan'208";a="23546757"
From: George Dunlap <George.Dunlap@citrix.com>
To: Tamas K Lengyel <tamas.k.lengyel@gmail.com>, "intel-xen@intel.com"
 <intel-xen@intel.com>, "daniel.kiper@oracle.com" <daniel.kiper@oracle.com>,
 Roger Pau Monne <roger.pau@citrix.com>, Sergey Dyasli
 <sergey.dyasli@citrix.com>, Christopher Clark
 <christopher.w.clark@gmail.com>, Rich Persaud <persaur@gmail.com>, "Kevin
 Pearson" <kevin.pearson@ortmanconsulting.com>, Juergen Gross
 <jgross@suse.com>, Paul Durrant <pdurrant@amazon.com>, "Ji, John"
 <john.ji@intel.com>, "Natarajan, Janakarajan" <jnataraj@amd.com>,
 "edgar.iglesias@xilinx.com" <edgar.iglesias@xilinx.com>,
 "robin.randhawa@arm.com" <robin.randhawa@arm.com>, Artem Mygaiev
 <Artem_Mygaiev@epam.com>, Matt Spencer <Matt.Spencer@arm.com>,
 "anastassios.nanos@onapp.com" <anastassios.nanos@onapp.com>, "Stewart
 Hildebrand" <Stewart.Hildebrand@dornerworks.com>, Volodymyr Babchuk
 <volodymyr_babchuk@epam.com>, "mirela.simonovic@aggios.com"
 <mirela.simonovic@aggios.com>, Jarvis Roach <Jarvis.Roach@dornerworks.com>,
 Jeff Kubascik <Jeff.Kubascik@dornerworks.com>, Stefano Stabellini
 <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Ian Jackson
 <Ian.Jackson@citrix.com>, Rian Quinn <rianquinn@gmail.com>, "Daniel P. Smith"
 <dpsmith@apertussolutions.com>,
 =?utf-8?B?4oCL4oCL4oCL4oCL4oCL4oCL4oCLRG91ZyBHb2xkc3RlaW4=?=
 <cardoe@cardoe.com>, George Dunlap <George.Dunlap@citrix.com>, "David
 Woodhouse" <dwmw@amazon.co.uk>,
 =?utf-8?B?4oCL4oCL4oCL4oCL4oCL4oCL4oCLQW1pdCBTaGFo?= <amit@infradead.org>,
 =?utf-8?B?4oCL4oCL4oCL4oCL4oCL4oCL4oCLVmFyYWQgR2F1dGFt?=
 <varadgautam@gmail.com>, Brian Woods <brian.woods@xilinx.com>, Robert Townley
 <rob.townley@gmail.com>, Bobby Eshleman <bobby.eshleman@gmail.com>, "Olivier
 Lambert" <olivier.lambert@vates.fr>, Andrew Cooper
 <Andrew.Cooper3@citrix.com>, Wei Liu <wl@xen.org>
Subject: Re: RESCHEDULED Call for agenda items for Community Call, August 13 @
 15:00 UTC
Thread-Topic: RESCHEDULED Call for agenda items for Community Call, August 13
 @ 15:00 UTC
Thread-Index: AQHWZoSNV69Oim50MkyA6CRuliqKJqkgITQA
Date: Thu, 30 Jul 2020 15:41:26 +0000
Message-ID: <40615946-FF55-48DB-91FB-58DD603FDD69@citrix.com>
References: <1E023F6E-0E3C-4CD5-A074-7BF62635E123@citrix.com>
In-Reply-To: <1E023F6E-0E3C-4CD5-A074-7BF62635E123@citrix.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-mailer: Apple Mail (2.3608.80.23.2.2)
x-ms-exchange-messagesentrepresentingtype: 1
x-ms-exchange-transport-fromentityheader: Hosted
Content-Type: text/plain; charset="utf-8"
Content-ID: <831A920F1F6F3A40B62003D1E0AC1CF7@citrix.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "open list:X86" <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

DQoNCj4gT24gSnVsIDMwLCAyMDIwLCBhdCA0OjE3IFBNLCBHZW9yZ2UgRHVubGFwIDxHZW9yZ2Uu
RHVubGFwQGNpdHJpeC5jb20+IHdyb3RlOg0KPiANCj4gSGV5IGFsbCwNCj4gDQo+IFRoZSBjb21t
dW5pdHkgY2FsbCBpcyBzY2hlZHVsZWQgZm9yIG5leHQgd2VlaywgNiBBdWd1c3QuICBJLCBob3dl
dmVyLCB3aWxsIGJlIG9uIFBUTyB0aGF0IHdlZWs7IEkgcHJvcG9zZSByZXNjaGVkdWxpbmcgaXQg
Zm9yIHRoZSBmb2xsb3dpbmcgd2VlaywgMTMgQXVndXN0LCBhdCB0aGUgc2FtZSB0aW1lLg0KPiAN
Cj4gVGhlIHByb3Bvc2VkIGFnZW5kYSBpcyBpbiBaWlogYW5kIHlvdSBjYW4gZWRpdCB0byBhZGQg
aXRlbXMuICBBbHRlcm5hdGl2ZWx5LCB5b3UgY2FuIHJlcGx5IHRvIHRoaXMgbWFpbCBkaXJlY3Rs
eS4NCg0KU29ycnksIGluIGFsbCBteSBtYW51YWwgdGVtcGxhdGluZyBJIHNlZW0gdG8gaGF2ZSBt
aXNzZWQgdGhpcyBvbmUuICBIZXJl4oCZcyB0aGUgVVJMOg0KDQpodHRwczovL2NyeXB0cGFkLmZy
L3BhZC8jLzMvcGFkL2VkaXQvOWM1ODk5M2EwOGZlOTc0NTFmMGE1YjZjOGJiOTA2YjEvDQoNCiAt
R2VvcmdlDQoNCg==


From xen-devel-bounces@lists.xenproject.org Thu Jul 30 16:34:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jul 2020 16:34:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1BVk-0000u6-6t; Thu, 30 Jul 2020 16:34:28 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=4any=BJ=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1k1BVi-0000u1-JU
 for xen-devel@lists.xenproject.org; Thu, 30 Jul 2020 16:34:27 +0000
X-Inumbo-ID: 869703fc-d282-11ea-8da6-bc764e2007e4
Received: from mo4-p00-ob.smtp.rzone.de (unknown [85.215.255.21])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 869703fc-d282-11ea-8da6-bc764e2007e4;
 Thu, 30 Jul 2020 16:34:24 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1596126863;
 s=strato-dkim-0002; d=aepfle.de;
 h=Message-Id:Date:Subject:Cc:To:From:X-RZG-CLASS-ID:X-RZG-AUTH:From:
 Subject:Sender;
 bh=cC1QXm1cXnUHioSQpBbWkyJanME2wgjyBOyCYaXYrrE=;
 b=Ts6qJ0rIqlis8OHITYUM3lhep0UJjrHXmFbn7VKtE7YETcwogZzm/V33k1rgvy46fQ
 1HSV7WeJXMqyM+DCGcDia36zuDtCqtV47UJIdQ5LWDQWJSHjeH0XlVuKgcXrlhRdwsF8
 YFVwM7CPMUn7QxT2EWlcBLZ1YeBQEXKmq19sqmLmhn7ZWrKAc2KPZOwhdDHnIVqmFdW7
 RiygJkRP5Sbh4WdLcMBPUA+jPFaCEHLQmQxGwAe1bzDR+l06u4w0RdXdGIH3pcPM1GBR
 uzyaLeNQq15y9QXZmmfjBqJ1ECNcm4YtyDDAHnoTl6mHIn18nWyR8YlNLac3imILZtC4
 w61A==
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QXkBR9MXjAuzBW/OdlBZQ4AHSS1QE="
X-RZG-CLASS-ID: mo00
Received: from sender by smtp.strato.de (RZmta 46.10.5 DYNA|AUTH)
 with ESMTPSA id m032cfw6UGYCDBv
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256 bits))
 (Client did not present a certificate);
 Thu, 30 Jul 2020 18:34:12 +0200 (CEST)
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v1] tools/xen-cpuid: show enqcmd
Date: Thu, 30 Jul 2020 18:34:06 +0200
Message-Id: <20200730163406.31020-1-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Olaf Hering <olaf@aepfle.de>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>, Jan Beulich <jbeulich@suse.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Translate bit 29, currently shown as <29>, into a feature string.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---

Neither compile nor runtime tested.

 tools/misc/xen-cpuid.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/misc/xen-cpuid.c b/tools/misc/xen-cpuid.c
index ac3548dcfe..2446941a47 100644
--- a/tools/misc/xen-cpuid.c
+++ b/tools/misc/xen-cpuid.c
@@ -133,7 +133,7 @@ static const char *const str_7c0[32] =
     [22] = "rdpid",
     /* 24 */                   [25] = "cldemote",
     /* 26 */                   [27] = "movdiri",
-    [28] = "movdir64b",
+    [28] = "movdir64b",        [29] = "enqcmd",
     [30] = "sgx-lc",
 };
 


From xen-devel-bounces@lists.xenproject.org Thu Jul 30 17:07:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jul 2020 17:07:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1C1h-0003Un-S4; Thu, 30 Jul 2020 17:07:29 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=IK5u=BJ=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1k1C1g-0003Ui-GY
 for xen-devel@lists.xenproject.org; Thu, 30 Jul 2020 17:07:28 +0000
X-Inumbo-ID: 25161424-d287-11ea-aaf6-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 25161424-d287-11ea-aaf6-12813bfff9fa;
 Thu, 30 Jul 2020 17:07:27 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
 MIME-Version:Content-Type:Content-Transfer-Encoding:Content-ID:
 Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc
 :Resent-Message-ID:In-Reply-To:References:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=F0zBWlFRrrkUpoa8qvY/9++vofGF25KtTzc2Y4E7dkU=; b=BrkwV+KkCZPJ9e0pbzZAHuD6l/
 WscBnrngt7t03XBAw4v16wIZj2MVbckFpiOQnr3gaOWqYkAotQ3sK/GHPNrSXIAdEV9UXgbsAIIxJ
 MFOAwfkvhYlk2jEGwVFt/azbUrQyWIk/Sv0cEnFT/xxKO0jHvNe9hcCtiSNot6ZNumIU=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1k1C1e-0006Eg-Pc; Thu, 30 Jul 2020 17:07:26 +0000
Received: from 54-240-197-227.amazon.com ([54.240.197.227]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1k1C1e-0002Kp-Ei; Thu, 30 Jul 2020 17:07:26 +0000
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [PATCH] xen/arm: cmpxchg: Add missing memory barriers in
 __cmpxchg_mb_timeout()
Date: Thu, 30 Jul 2020 18:07:21 +0100
Message-Id: <20200730170721.23393-1-julien@xen.org>
X-Mailer: git-send-email 2.17.1
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Julien Grall <jgrall@amazon.com>, Stefano Stabellini <sstabellini@kernel.org>,
 julien@xen.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Julien Grall <jgrall@amazon.com>

The function __cmpxchg_mb_timeout() was intended to have the same
semantics as __cmpxchg_mb(). Unfortunately, the memory barriers were
not added when first implemented.

There is no known issue with the existing callers, but the barriers are
added as these are the expected semantics in Xen.

The issue was introduced by XSA-295.

Backport: 4.8+
Fixes: 86b0bc958373 ("xen/arm: cmpxchg: Provide a new helper that can timeout")
Signed-off-by: Julien Grall <jgrall@amazon.com>
---
 xen/include/asm-arm/arm32/cmpxchg.h | 8 +++++++-
 xen/include/asm-arm/arm64/cmpxchg.h | 8 +++++++-
 2 files changed, 14 insertions(+), 2 deletions(-)

diff --git a/xen/include/asm-arm/arm32/cmpxchg.h b/xen/include/asm-arm/arm32/cmpxchg.h
index 49ca2a0d7ab1..0770f272ee99 100644
--- a/xen/include/asm-arm/arm32/cmpxchg.h
+++ b/xen/include/asm-arm/arm32/cmpxchg.h
@@ -147,7 +147,13 @@ static always_inline bool __cmpxchg_mb_timeout(volatile void *ptr,
 					       int size,
 					       unsigned int max_try)
 {
-	return __int_cmpxchg(ptr, old, new, size, true, max_try);
+	bool ret;
+
+	smp_mb();
+	ret = __int_cmpxchg(ptr, old, new, size, true, max_try);
+	smp_mb();
+
+	return ret;
 }
 
 #define cmpxchg(ptr,o,n)						\
diff --git a/xen/include/asm-arm/arm64/cmpxchg.h b/xen/include/asm-arm/arm64/cmpxchg.h
index 5bc2e1f78674..fc5c60f0bd74 100644
--- a/xen/include/asm-arm/arm64/cmpxchg.h
+++ b/xen/include/asm-arm/arm64/cmpxchg.h
@@ -160,7 +160,13 @@ static always_inline bool __cmpxchg_mb_timeout(volatile void *ptr,
 					       int size,
 					       unsigned int max_try)
 {
-	return __int_cmpxchg(ptr, old, new, size, true, max_try);
+	bool ret;
+
+	smp_mb();
+	ret = __int_cmpxchg(ptr, old, new, size, true, max_try);
+	smp_mb();
+
+	return ret;
 }
 
 #define cmpxchg(ptr, o, n) \
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu Jul 30 17:08:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jul 2020 17:08:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1C2p-0003aQ-Kn; Thu, 30 Jul 2020 17:08:39 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6uMI=BJ=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1k1C2o-0003Zy-EY
 for xen-devel@lists.xenproject.org; Thu, 30 Jul 2020 17:08:38 +0000
X-Inumbo-ID: 4dbf5eda-d287-11ea-8daf-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4dbf5eda-d287-11ea-8daf-bc764e2007e4;
 Thu, 30 Jul 2020 17:08:35 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=iQhCUM0QmwnW4nBkD5OJtVEmzGkr1BDSuyfh8P1OIKM=; b=5Tre4VFRkmrZrc62V4DK0Y5sX
 Y+4Csm/kMM7cA2B/rKIAHZsdFIuen6c3d3yT8h8zpujTYu+/ki3kXspkS4OXMwG4tQ7uq6GXnxshD
 t8BG5RoTc6Zjjmw108HwxT8EkSh20R13Y03fHV1mP+WXM4L8KcBVJMeXUYkkOgb/jvDYE=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1k1C2l-0006GI-6h; Thu, 30 Jul 2020 17:08:35 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1k1C2k-0004I9-Rg; Thu, 30 Jul 2020 17:08:34 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1k1C2k-0000HJ-R1; Thu, 30 Jul 2020 17:08:34 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-152293-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 152293: tolerable FAIL
X-Osstest-Failures: xen-unstable:test-arm64-arm64-xl-seattle:xen-boot:fail:heisenbug
 xen-unstable:test-arm64-arm64-xl:xen-boot:fail:heisenbug
 xen-unstable:test-arm64-arm64-libvirt-xsm:xen-boot:fail:heisenbug
 xen-unstable:test-arm64-arm64-xl-thunderx:xen-boot:fail:heisenbug
 xen-unstable:test-arm64-arm64-examine:reboot:fail:heisenbug
 xen-unstable:test-arm64-arm64-xl-credit2:xen-boot:fail:heisenbug
 xen-unstable:test-arm64-arm64-xl-credit1:xen-boot:fail:heisenbug
 xen-unstable:test-arm64-arm64-xl-xsm:xen-boot:fail:heisenbug
 xen-unstable:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:heisenbug
 xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
X-Osstest-Versions-This: xen=b071ec25e85c4aacf3da59e5258cda0b1c4df45d
X-Osstest-Versions-That: xen=b071ec25e85c4aacf3da59e5258cda0b1c4df45d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 30 Jul 2020 17:08:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 152293 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/152293/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-arm64-arm64-xl-seattle   7 xen-boot                   fail pass in 152275
 test-arm64-arm64-xl           7 xen-boot                   fail pass in 152275
 test-arm64-arm64-libvirt-xsm  7 xen-boot                   fail pass in 152275
 test-arm64-arm64-xl-thunderx  7 xen-boot                   fail pass in 152275
 test-arm64-arm64-examine      8 reboot                     fail pass in 152275
 test-arm64-arm64-xl-credit2   7 xen-boot                   fail pass in 152275
 test-arm64-arm64-xl-credit1   7 xen-boot                   fail pass in 152275
 test-arm64-arm64-xl-xsm       7 xen-boot                   fail pass in 152275
 test-armhf-armhf-xl-rtds     16 guest-start/debian.repeat  fail pass in 152275

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-seattle 13 migrate-support-check fail in 152275 never pass
 test-arm64-arm64-xl-seattle 14 saverestore-support-check fail in 152275 never pass
 test-arm64-arm64-xl-credit2 13 migrate-support-check fail in 152275 never pass
 test-arm64-arm64-xl-credit2 14 saverestore-support-check fail in 152275 never pass
 test-arm64-arm64-xl         13 migrate-support-check fail in 152275 never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check fail in 152275 never pass
 test-arm64-arm64-xl     14 saverestore-support-check fail in 152275 never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check fail in 152275 never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check fail in 152275 never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check fail in 152275 never pass
 test-arm64-arm64-xl-credit1 13 migrate-support-check fail in 152275 never pass
 test-arm64-arm64-xl-credit1 14 saverestore-support-check fail in 152275 never pass
 test-arm64-arm64-xl-xsm     13 migrate-support-check fail in 152275 never pass
 test-arm64-arm64-xl-xsm 14 saverestore-support-check fail in 152275 never pass
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 152275
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 152275
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 152275
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 152275
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 152275
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 152275
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 152275
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 152275
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop             fail like 152275
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass

version targeted for testing:
 xen                  b071ec25e85c4aacf3da59e5258cda0b1c4df45d
baseline version:
 xen                  b071ec25e85c4aacf3da59e5258cda0b1c4df45d

Last test of basis   152293  2020-07-30 01:51:37 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Thu Jul 30 17:28:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jul 2020 17:28:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1CMH-0005YS-Ge; Thu, 30 Jul 2020 17:28:45 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=HZLI=BJ=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1k1CMF-0005YN-QK
 for xen-devel@lists.xenproject.org; Thu, 30 Jul 2020 17:28:43 +0000
X-Inumbo-ID: 1c8f9d18-d28a-11ea-aaf9-12813bfff9fa
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1c8f9d18-d28a-11ea-aaf9-12813bfff9fa;
 Thu, 30 Jul 2020 17:28:42 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1596130123;
 h=subject:to:cc:references:from:message-id:date:
 mime-version:in-reply-to:content-transfer-encoding;
 bh=CPM+2EsHyf7nhKrWoMxtddKAcdCxQL8Z163DjoBydnk=;
 b=fzdw6GZL79lq8CmmlC64jTyMAm36ehDh5J2z1cQeGs0ZK8GC/nnN7Uhs
 pF6DZv+JhdyovCm9ZnQRynWKGhmyPPkFV1z4gcxCVcp8sUMYky82701A+
 mUI2nXRqecLOiZf7mMrSGljk2EPGpjVWX377SBJicjQKOR8L+KaiDMiat U=;
Authentication-Results: esa1.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: 1Z3hs6TXhhE+zlFVPqAjSot/mgyMQ018XFVxrCwlK315CFAzdwSIA5Ag4a/tKYdlwXlJHeBiMF
 5RNERWa2QEhpYuXw4FlmLkLDRPn4Kr33N6Nv/GXHbKgVyl1rpQo1GY66qRQ0ZyoMYrehGr9hNy
 IwhZcGSrH1kWBfU9bvmkkRWjWW4lZMB3g0MXLBRSn9pP4MG9ee8Sl4VsQLXdjKoeXl+8W37eC7
 l8pyDZPADa8u3KsQOd1apj3BidPKSkS+LqWjAjFHufU7A1ygm2saIpDMfwJwNfluu93I0lDlJK
 cOA=
X-SBRS: 2.7
X-MesageID: 23898713
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,415,1589256000"; d="scan'208";a="23898713"
Subject: Re: [PATCH 1/5] xen/memory: Introduce CONFIG_ARCH_ACQUIRE_RESOURCE
To: Julien Grall <julien@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20200728113712.22966-1-andrew.cooper3@citrix.com>
 <20200728113712.22966-2-andrew.cooper3@citrix.com>
 <9b8397fc-f50e-ef2b-cbaa-2298294af2e3@xen.org>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <0b02d7dd-ebf1-9210-a52f-8debbddddbaa@citrix.com>
Date: Thu, 30 Jul 2020 18:28:36 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <9b8397fc-f50e-ef2b-cbaa-2298294af2e3@xen.org>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Hubert Jasudowicz <hubert.jasudowicz@cert.pl>, Wei Liu <wl@xen.org>,
 Paul Durrant <paul@xen.org>,
 =?UTF-8?Q?Micha=c5=82_Leszczy=c5=84ski?= <michal.leszczynski@cert.pl>,
 Jan Beulich <JBeulich@suse.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 30/07/2020 10:50, Julien Grall wrote:
> Hi Andrew,
>
> On 28/07/2020 12:37, Andrew Cooper wrote:
>> New architectures shouldn't be forced to implement no-op stubs for
>> unused functionality.
>>
>> Introduce CONFIG_ARCH_ACQUIRE_RESOURCE which can be opted in to, and
>> provide compatibility logic in xen/mm.h
>>
>> No functional change.
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>
> With one question below:
>
> Acked-by: Julien Grall <jgrall@amazon.com>

Thanks,

>
>
>> diff --git a/xen/include/xen/mm.h b/xen/include/xen/mm.h
>> index 1061765bcd..1b2c1f6b32 100644
>> --- a/xen/include/xen/mm.h
>> +++ b/xen/include/xen/mm.h
>> @@ -685,4 +685,13 @@ static inline void put_page_alloc_ref(struct page_info *page)
>>       }
>>   }
>>   +#ifndef CONFIG_ARCH_ACQUIRE_RESOURCE
>> +static inline int arch_acquire_resource(
>> +    struct domain *d, unsigned int type, unsigned int id, unsigned long frame,
>> +    unsigned int nr_frames, xen_pfn_t mfn_list[])
>
> Any reason to change the way we indent the arguments?

So it's not all squashed on the right-hand side.

~Andrew
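For readers following the thread, the pattern under discussion is a Kconfig opt-in with a common-header fallback: architectures that implement the hook select the symbol, everyone else compiles against an inline no-op stub. A self-contained sketch (types are stand-ins and the -EOPNOTSUPP return is an assumption for illustration; the real signature is in the quoted hunk):

```c
#include <errno.h>

/* Stand-ins so the sketch compiles on its own; the real types live in
 * Xen headers. */
struct domain;
typedef unsigned long xen_pfn_t;

/*
 * Architectures providing a real arch_acquire_resource() would select
 * CONFIG_ARCH_ACQUIRE_RESOURCE; all others get this no-op fallback.
 */
#ifndef CONFIG_ARCH_ACQUIRE_RESOURCE
static inline int arch_acquire_resource(
    struct domain *d, unsigned int type, unsigned int id, unsigned long frame,
    unsigned int nr_frames, xen_pfn_t mfn_list[])
{
    return -EOPNOTSUPP;
}
#endif
```

Note the argument indentation matches the style Andrew defends above: one fixed indent level rather than aligning to the opening parenthesis.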


From xen-devel-bounces@lists.xenproject.org Thu Jul 30 17:34:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jul 2020 17:34:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1CRZ-0006O7-36; Thu, 30 Jul 2020 17:34:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6uMI=BJ=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1k1CRY-0006O2-4f
 for xen-devel@lists.xenproject.org; Thu, 30 Jul 2020 17:34:12 +0000
X-Inumbo-ID: e0e12c72-d28a-11ea-8daf-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e0e12c72-d28a-11ea-8daf-bc764e2007e4;
 Thu, 30 Jul 2020 17:34:10 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=IVizL9gpH7l4myFkI+VBAU8IzMqC/suIBBpMBO1XRdw=; b=Ksj9w9wFS2LX1rW76QuGapTEY
 jJIjHilJQeU82KKOSjnKFBZrCqS3g4IURpg8j1E7rAq4ubK1R0ir0z8KSfS3n473hkQJxd4TaaoGM
 Hc5cM1VFZzDWwLoENUXJijxeuvbnELVQnZuHnSLG82G+CcNfHk5bYS4ueo+venP6I73AQ=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1k1CRW-0006m5-M3; Thu, 30 Jul 2020 17:34:10 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1k1CRW-0006ix-Bp; Thu, 30 Jul 2020 17:34:10 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1k1CRW-00043O-8g; Thu, 30 Jul 2020 17:34:10 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-152305-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 152305: regressions - FAIL
X-Osstest-Failures: xen-unstable-smoke:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
 xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=64219fa179c3e48adad12bfce3f6b3f1596cccbf
X-Osstest-Versions-That: xen=b071ec25e85c4aacf3da59e5258cda0b1c4df45d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 30 Jul 2020 17:34:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 152305 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/152305/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-xl-xsm       7 xen-boot                 fail REGR. vs. 152269

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  64219fa179c3e48adad12bfce3f6b3f1596cccbf
baseline version:
 xen                  b071ec25e85c4aacf3da59e5258cda0b1c4df45d

Last test of basis   152269  2020-07-28 19:05:32 Z    1 days
Testing same since   152288  2020-07-29 19:01:00 Z    0 days    5 attempts

------------------------------------------------------------
People who touched revisions under test:
  Fam Zheng <famzheng@amazon.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 64219fa179c3e48adad12bfce3f6b3f1596cccbf
Author: Fam Zheng <famzheng@amazon.com>
Date:   Wed Jul 29 18:51:45 2020 +0100

    x86/cpuid: Fix APIC bit clearing
    
    The bug is obvious here; other places in this function use
    "cpufeat_mask" correctly.
    
    Fixes: b648feff8ea2 ("xen/x86: Improvements to in-hypervisor cpuid sanity checks")
    Signed-off-by: Fam Zheng <famzheng@amazon.com>
    Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Thu Jul 30 17:34:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jul 2020 17:34:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1CRn-0006PZ-Bk; Thu, 30 Jul 2020 17:34:27 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=HZLI=BJ=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1k1CRm-0006PQ-8U
 for xen-devel@lists.xenproject.org; Thu, 30 Jul 2020 17:34:26 +0000
X-Inumbo-ID: e8d54436-d28a-11ea-8daf-bc764e2007e4
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e8d54436-d28a-11ea-8daf-bc764e2007e4;
 Thu, 30 Jul 2020 17:34:24 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1596130465;
 h=subject:to:cc:references:from:message-id:date:
 mime-version:in-reply-to:content-transfer-encoding;
 bh=VTbuZSogmwTOJfRNbt0lBzRJjNPsdpDQGD6VXlfSoxU=;
 b=U9VH3cg2br43bdg/Q+klm3DDrNajNxMH4vI7FZXi4Ceiw/a/qGceRQCb
 Q4D+aEM4FZrUtVNar+so/sNc11rb3yVy+mK7r2X2upgTnWZv+UUnI94+T
 lsoODnrXUrPsnj8vBpJuz5EdrY1OoYhiEOrf6QfcpwrHMdeHum02A1bxF o=;
Authentication-Results: esa2.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: Xpr0JyzLJ3ItxrMuYNa8xQQBN+kmVldopfurADO8obZ6+x5Z5KehHHHDxJ7YFJYugJ248FQxDu
 1DGLfQ2mN6uloG+vTOJrtdUT35derC9A1wHSOpUmOI3SWNwpDt+jjE9HkqvEGh10Bdv9HmOxYe
 Os45/KR0Z5OP7HMTewI2cY9vcBzpc8JhEOl3RLFswapGsV8+QSEOcG6jFtvm/ibjZpOI5N/Nbw
 sxLOxDceDuDdCQNyjF2ZncWfhuoT5nn3Lf46yWql9ndp7pJ7DMpLs/JB2UGFq73GiT9Mh7UJHj
 tSY=
X-SBRS: 2.7
X-MesageID: 23582494
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,415,1589256000"; d="scan'208";a="23582494"
Subject: Re: [PATCH 1/5] xen/memory: Introduce CONFIG_ARCH_ACQUIRE_RESOURCE
To: <paul@xen.org>, 'Xen-devel' <xen-devel@lists.xenproject.org>
References: <20200728113712.22966-1-andrew.cooper3@citrix.com>
 <20200728113712.22966-2-andrew.cooper3@citrix.com>
 <002601d66647$ca8567e0$5f9037a0$@xen.org>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <33a10589-6890-b653-d8c2-7eb19a5e4929@citrix.com>
Date: Thu, 30 Jul 2020 18:34:19 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <002601d66647$ca8567e0$5f9037a0$@xen.org>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: 'Stefano
 Stabellini' <sstabellini@kernel.org>, 'Julien Grall' <julien@xen.org>,
 'Wei Liu' <wl@xen.org>,
 =?UTF-8?B?J01pY2hhxYIgTGVzemN6ecWEc2tpJw==?= <michal.leszczynski@cert.pl>,
 'Jan Beulich' <JBeulich@suse.com>,
 'Hubert Jasudowicz' <hubert.jasudowicz@cert.pl>,
 'Volodymyr Babchuk' <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?B?J1JvZ2VyIFBhdSBNb25uw6kn?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 30/07/2020 09:02, Paul Durrant wrote:
>> -----Original Message-----
>> From: Andrew Cooper <andrew.cooper3@citrix.com>
>> Sent: 28 July 2020 12:37
>> To: Xen-devel <xen-devel@lists.xenproject.org>
>> Cc: Andrew Cooper <andrew.cooper3@citrix.com>; Jan Beulich <JBeulich@suse.com>; Wei Liu <wl@xen.org>;
>> Roger Pau Monné <roger.pau@citrix.com>; Stefano Stabellini <sstabellini@kernel.org>; Julien Grall
>> <julien@xen.org>; Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>; Paul Durrant <paul@xen.org>; Michał
>> Leszczyński <michal.leszczynski@cert.pl>; Hubert Jasudowicz <hubert.jasudowicz@cert.pl>
>> Subject: [PATCH 1/5] xen/memory: Introduce CONFIG_ARCH_ACQUIRE_RESOURCE
>>
>> New architectures shouldn't be forced to implement no-op stubs for unused
>> functionality.
>>
>> Introduce CONFIG_ARCH_ACQUIRE_RESOURCE which can be opted in to, and provide
>> compatibility logic in xen/mm.h
>>
>> No functional change.
> Code-wise, it looks fine, so...
>
> Reviewed-by: Paul Durrant <paul@xen.org>

Thanks,

>
> ...but ...
>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>> ---
>> CC: Jan Beulich <JBeulich@suse.com>
>> CC: Wei Liu <wl@xen.org>
>> CC: Roger Pau Monné <roger.pau@citrix.com>
>> CC: Stefano Stabellini <sstabellini@kernel.org>
>> CC: Julien Grall <julien@xen.org>
>> CC: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
>> CC: Paul Durrant <paul@xen.org>
>> CC: Michał Leszczyński <michal.leszczynski@cert.pl>
>> CC: Hubert Jasudowicz <hubert.jasudowicz@cert.pl>
>> ---
>>  xen/arch/x86/Kconfig     | 1 +
>>  xen/common/Kconfig       | 3 +++
>>  xen/include/asm-arm/mm.h | 8 --------
>>  xen/include/xen/mm.h     | 9 +++++++++
>>  4 files changed, 13 insertions(+), 8 deletions(-)
>>
>> diff --git a/xen/arch/x86/Kconfig b/xen/arch/x86/Kconfig
>> index a636a4bb1e..e7644a0a9d 100644
>> --- a/xen/arch/x86/Kconfig
>> +++ b/xen/arch/x86/Kconfig
>> @@ -6,6 +6,7 @@ config X86
>>  	select ACPI
>>  	select ACPI_LEGACY_TABLES_LOOKUP
>>  	select ARCH_SUPPORTS_INT128
>> +	select ARCH_ACQUIRE_RESOURCE
> ... I do wonder whether 'HAS_ACQUIRE_RESOURCE' is a better and more descriptive name.

We don't have a coherent policy for how to categorise these things.  I
can change the name if you insist, but I'm not sure it makes a useful
difference.

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu Jul 30 17:52:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jul 2020 17:52:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1CjC-00089n-6q; Thu, 30 Jul 2020 17:52:26 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=IK5u=BJ=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1k1CjA-00089P-W1
 for xen-devel@lists.xenproject.org; Thu, 30 Jul 2020 17:52:25 +0000
X-Inumbo-ID: 6968361b-d28d-11ea-aaff-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6968361b-d28d-11ea-aaff-12813bfff9fa;
 Thu, 30 Jul 2020 17:52:19 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
 MIME-Version:Content-Type:Content-Transfer-Encoding:Content-ID:
 Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc
 :Resent-Message-ID:In-Reply-To:References:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=AgySXafvLKDrFHZwhVYC/CtvL9daRvYVrDlQBaY611s=; b=g1CPwU7EKozO36MiD2faC1WMy4
 I+v2tAqxkFHvaT1ICh99bgbrtPD+9HvV0wzwgZu6zk6TRcB1e4kuU8nkJU/gzbs62eppIByDEAWTt
 pWt+C9DmwoJxfTOOKenxFDczSndjdgohvt0zBXag/Ef5JEzTiLVk0UI90t80pM7RpmZ0=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1k1Cj3-000796-H9; Thu, 30 Jul 2020 17:52:17 +0000
Received: from 54-240-197-227.amazon.com ([54.240.197.227]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1k1Cj3-0004nV-1p; Thu, 30 Jul 2020 17:52:17 +0000
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v2 0/7] xen: Consolidate asm-*/guest_access.h in
 xen/guest_access.h
Date: Thu, 30 Jul 2020 18:52:00 +0100
Message-Id: <20200730175213.30679-1-julien@xen.org>
X-Mailer: git-send-email 2.17.1
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin Tian <kevin.tian@intel.com>,
 Stefano Stabellini <sstabellini@kernel.org>, julien@xen.org,
 Jun Nakajima <jun.nakajima@intel.com>, Wei Liu <wl@xen.org>,
 Paul Durrant <paul@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Julien Grall <jgrall@amazon.com>, Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Julien Grall <jgrall@amazon.com>

Hi all,

Many of the helpers in asm-*/guest_access.h are implemented the same
way on each architecture. This series removes the duplication by
implementing them only once in xen/guest_access.h.

Cheers,

Julien Grall (7):
  xen/guest_access: Add emacs magics
  xen/arm: kernel: Re-order the includes
  xen/arm: decode: Re-order the includes
  xen/arm: guestcopy: Re-order the includes
  xen: include xen/guest_access.h rather than asm/guest_access.h
  xen/guest_access: Consolidate guest access helpers in
    xen/guest_access.h
  xen/guest_access: Fix coding style in xen/guest_access.h

 xen/arch/arm/decode.c                |   7 +-
 xen/arch/arm/domain.c                |   2 +-
 xen/arch/arm/guest_walk.c            |   3 +-
 xen/arch/arm/guestcopy.c             |   5 +-
 xen/arch/arm/kernel.c                |  12 +--
 xen/arch/arm/vgic-v3-its.c           |   2 +-
 xen/arch/x86/hvm/svm/svm.c           |   2 +-
 xen/arch/x86/hvm/viridian/viridian.c |   2 +-
 xen/arch/x86/hvm/vmx/vmx.c           |   2 +-
 xen/common/libelf/libelf-loader.c    |   2 +-
 xen/include/asm-arm/guest_access.h   | 115 -----------------------
 xen/include/asm-x86/guest_access.h   | 116 ++----------------------
 xen/include/xen/guest_access.h       | 131 +++++++++++++++++++++++++++
 xen/lib/x86/private.h                |   2 +-
 14 files changed, 161 insertions(+), 242 deletions(-)

-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu Jul 30 17:52:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jul 2020 17:52:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1CjI-0008Ag-4K; Thu, 30 Jul 2020 17:52:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=IK5u=BJ=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1k1CjH-0008AL-Dr
 for xen-devel@lists.xenproject.org; Thu, 30 Jul 2020 17:52:31 +0000
X-Inumbo-ID: 6fc03d5b-d28d-11ea-8db0-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6fc03d5b-d28d-11ea-8db0-bc764e2007e4;
 Thu, 30 Jul 2020 17:52:29 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:
 Sender:Reply-To:MIME-Version:Content-Type:Content-Transfer-Encoding:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=iYWCfYUjsOmxaTuM6icRxDZYgMO7DF187Z293r7Hn8E=; b=ingw4Z4pScDYE3j2IreY1+IRC0
 icn/tMqmdO7H9xRdKMD2zQMj4r+XXK+/A7U0zrxuvGOciuNdI9PjKu8PqNB25PqoHiXXLHwsj+n7U
 19bLoF1qc6oM/a4snUwaJ+PolAD487gTOo71IPRWj0yH1ANLJOQMTeKELKCxuiitAqzo=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1k1CjF-0007AR-DV; Thu, 30 Jul 2020 17:52:29 +0000
Received: from 54-240-197-227.amazon.com ([54.240.197.227]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1k1CjF-0004nV-0B; Thu, 30 Jul 2020 17:52:29 +0000
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v2 5/6] xen/guest_access: Consolidate guest access helpers in
 xen/guest_access.h
Date: Thu, 30 Jul 2020 18:52:09 +0100
Message-Id: <20200730175213.30679-10-julien@xen.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20200730175213.30679-1-julien@xen.org>
References: <20200730175213.30679-1-julien@xen.org>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, julien@xen.org,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Julien Grall <jgrall@amazon.com>, Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Julien Grall <jgrall@amazon.com>

Most of the helpers to access guest memory are implemented the same way
on Arm and x86. The only differences are:
    - guest_handle_{from, to}_param(): while on x86 XEN_GUEST_HANDLE()
      and XEN_GUEST_HANDLE_PARAM() are the same, they are not on Arm. It
      is still fine to use the Arm implementation on x86.
    - __clear_guest_offset(): interestingly, the prototypes do not match
      between x86 and Arm. However, the Arm one is bogus, so the x86
      implementation can be used.
    - guest_handle{,_subrange}_okay(): they validly differ because Arm
      only supports auto-translated guests, and therefore handles are
      always valid.

In the past, the ia64 and ppc64 ports used a different model to access
guest parameters. They are long gone now.

Given Xen currently supports only 2 architectures, it is too soon to
introduce an asm-generic directory, as it would not be possible to
differentiate it from the existing xen/ directory. If/when there is a
3rd port, we can decide to create the new directory should that port
use a different way to access guest parameters.

For now, consolidate it in xen/guest_access.h.

While it would be possible to adjust the coding style at the same time,
this is left for a follow-up patch so that 'diff' can be used to check
that the consolidation was done correctly.

Signed-off-by: Julien Grall <jgrall@amazon.com>

---
    Changes in v2:
        - Expand the commit message explaining why asm-generic is not
        created.
---
 xen/include/asm-arm/guest_access.h | 114 ---------------------------
 xen/include/asm-x86/guest_access.h | 108 --------------------------
 xen/include/xen/guest_access.h     | 119 +++++++++++++++++++++++++++++
 3 files changed, 119 insertions(+), 222 deletions(-)

diff --git a/xen/include/asm-arm/guest_access.h b/xen/include/asm-arm/guest_access.h
index b9a89c495527..53766386d3d8 100644
--- a/xen/include/asm-arm/guest_access.h
+++ b/xen/include/asm-arm/guest_access.h
@@ -23,88 +23,6 @@ int access_guest_memory_by_ipa(struct domain *d, paddr_t ipa, void *buf,
 #define __raw_copy_from_guest raw_copy_from_guest
 #define __raw_clear_guest raw_clear_guest
 
-/* Remainder copied from x86 -- could be common? */
-
-/* Is the guest handle a NULL reference? */
-#define guest_handle_is_null(hnd)        ((hnd).p == NULL)
-
-/* Offset the given guest handle into the array it refers to. */
-#define guest_handle_add_offset(hnd, nr) ((hnd).p += (nr))
-#define guest_handle_subtract_offset(hnd, nr) ((hnd).p -= (nr))
-
-/* Cast a guest handle (either XEN_GUEST_HANDLE or XEN_GUEST_HANDLE_PARAM)
- * to the specified type of XEN_GUEST_HANDLE_PARAM. */
-#define guest_handle_cast(hnd, type) ({         \
-    type *_x = (hnd).p;                         \
-    (XEN_GUEST_HANDLE_PARAM(type)) { _x };            \
-})
-
-/* Convert a XEN_GUEST_HANDLE to XEN_GUEST_HANDLE_PARAM */
-#define guest_handle_to_param(hnd, type) ({                  \
-    typeof((hnd).p) _x = (hnd).p;                            \
-    XEN_GUEST_HANDLE_PARAM(type) _y = { _x };                \
-    /* type checking: make sure that the pointers inside     \
-     * XEN_GUEST_HANDLE and XEN_GUEST_HANDLE_PARAM are of    \
-     * the same type, then return hnd */                     \
-    (void)(&_x == &_y.p);                                    \
-    _y;                                                      \
-})
-
-#define guest_handle_for_field(hnd, type, fld)          \
-    ((XEN_GUEST_HANDLE(type)) { &(hnd).p->fld })
-
-#define guest_handle_from_ptr(ptr, type)        \
-    ((XEN_GUEST_HANDLE_PARAM(type)) { (type *)ptr })
-#define const_guest_handle_from_ptr(ptr, type)  \
-    ((XEN_GUEST_HANDLE_PARAM(const_##type)) { (const type *)ptr })
-
-/*
- * Copy an array of objects to guest context via a guest handle,
- * specifying an offset into the guest array.
- */
-#define copy_to_guest_offset(hnd, off, ptr, nr) ({      \
-    const typeof(*(ptr)) *_s = (ptr);                   \
-    char (*_d)[sizeof(*_s)] = (void *)(hnd).p;          \
-    /* Check that the handle is not for a const type */ \
-    void *__maybe_unused _t = (hnd).p;                  \
-    (void)((hnd).p == _s);                              \
-    raw_copy_to_guest(_d+(off), _s, sizeof(*_s)*(nr));  \
-})
-
-/*
- * Clear an array of objects in guest context via a guest handle,
- * specifying an offset into the guest array.
- */
-#define clear_guest_offset(hnd, off, nr) ({    \
-    void *_d = (hnd).p;                        \
-    raw_clear_guest(_d+(off), nr);             \
-})
-
-/*
- * Copy an array of objects from guest context via a guest handle,
- * specifying an offset into the guest array.
- */
-#define copy_from_guest_offset(ptr, hnd, off, nr) ({    \
-    const typeof(*(ptr)) *_s = (hnd).p;                 \
-    typeof(*(ptr)) *_d = (ptr);                         \
-    raw_copy_from_guest(_d, _s+(off), sizeof(*_d)*(nr));\
-})
-
-/* Copy sub-field of a structure to guest context via a guest handle. */
-#define copy_field_to_guest(hnd, ptr, field) ({         \
-    const typeof(&(ptr)->field) _s = &(ptr)->field;     \
-    void *_d = &(hnd).p->field;                         \
-    (void)(&(hnd).p->field == _s);                      \
-    raw_copy_to_guest(_d, _s, sizeof(*_s));             \
-})
-
-/* Copy sub-field of a structure from guest context via a guest handle. */
-#define copy_field_from_guest(ptr, hnd, field) ({       \
-    const typeof(&(ptr)->field) _s = &(hnd).p->field;   \
-    typeof(&(ptr)->field) _d = &(ptr)->field;           \
-    raw_copy_from_guest(_d, _s, sizeof(*_d));           \
-})
-
 /*
  * Pre-validate a guest handle.
  * Allows use of faster __copy_* functions.
@@ -113,38 +31,6 @@ int access_guest_memory_by_ipa(struct domain *d, paddr_t ipa, void *buf,
 #define guest_handle_okay(hnd, nr) (1)
 #define guest_handle_subrange_okay(hnd, first, last) (1)
 
-#define __copy_to_guest_offset(hnd, off, ptr, nr) ({    \
-    const typeof(*(ptr)) *_s = (ptr);                   \
-    char (*_d)[sizeof(*_s)] = (void *)(hnd).p;          \
-    /* Check that the handle is not for a const type */ \
-    void *__maybe_unused _t = (hnd).p;                  \
-    (void)((hnd).p == _s);                              \
-    __raw_copy_to_guest(_d+(off), _s, sizeof(*_s)*(nr));\
-})
-
-#define __clear_guest_offset(hnd, off, ptr, nr) ({      \
-    __raw_clear_guest(_d+(off), nr);  \
-})
-
-#define __copy_from_guest_offset(ptr, hnd, off, nr) ({  \
-    const typeof(*(ptr)) *_s = (hnd).p;                 \
-    typeof(*(ptr)) *_d = (ptr);                         \
-    __raw_copy_from_guest(_d, _s+(off), sizeof(*_d)*(nr));\
-})
-
-#define __copy_field_to_guest(hnd, ptr, field) ({       \
-    const typeof(&(ptr)->field) _s = &(ptr)->field;     \
-    void *_d = &(hnd).p->field;                         \
-    (void)(&(hnd).p->field == _s);                      \
-    __raw_copy_to_guest(_d, _s, sizeof(*_s));           \
-})
-
-#define __copy_field_from_guest(ptr, hnd, field) ({     \
-    const typeof(&(ptr)->field) _s = &(hnd).p->field;   \
-    typeof(&(ptr)->field) _d = &(ptr)->field;           \
-    __raw_copy_from_guest(_d, _s, sizeof(*_d));         \
-})
-
 #endif /* __ASM_ARM_GUEST_ACCESS_H__ */
 /*
  * Local variables:
diff --git a/xen/include/asm-x86/guest_access.h b/xen/include/asm-x86/guest_access.h
index 3ffde205f6a1..08c9fbbc78e1 100644
--- a/xen/include/asm-x86/guest_access.h
+++ b/xen/include/asm-x86/guest_access.h
@@ -38,81 +38,6 @@
      clear_user_hvm((dst), (len)) :             \
      clear_user((dst), (len)))
 
-/* Is the guest handle a NULL reference? */
-#define guest_handle_is_null(hnd)        ((hnd).p == NULL)
-
-/* Offset the given guest handle into the array it refers to. */
-#define guest_handle_add_offset(hnd, nr) ((hnd).p += (nr))
-#define guest_handle_subtract_offset(hnd, nr) ((hnd).p -= (nr))
-
-/* Cast a guest handle (either XEN_GUEST_HANDLE or XEN_GUEST_HANDLE_PARAM)
- * to the specified type of XEN_GUEST_HANDLE_PARAM. */
-#define guest_handle_cast(hnd, type) ({         \
-    type *_x = (hnd).p;                         \
-    (XEN_GUEST_HANDLE_PARAM(type)) { _x };            \
-})
-
-/* Convert a XEN_GUEST_HANDLE to XEN_GUEST_HANDLE_PARAM */
-#define guest_handle_to_param(hnd, type) ({                  \
-    /* type checking: make sure that the pointers inside     \
-     * XEN_GUEST_HANDLE and XEN_GUEST_HANDLE_PARAM are of    \
-     * the same type, then return hnd */                     \
-    (void)((typeof(&(hnd).p)) 0 ==                           \
-        (typeof(&((XEN_GUEST_HANDLE_PARAM(type)) {}).p)) 0); \
-    (hnd);                                                   \
-})
-
-#define guest_handle_for_field(hnd, type, fld)          \
-    ((XEN_GUEST_HANDLE(type)) { &(hnd).p->fld })
-
-#define guest_handle_from_ptr(ptr, type)        \
-    ((XEN_GUEST_HANDLE_PARAM(type)) { (type *)ptr })
-#define const_guest_handle_from_ptr(ptr, type)  \
-    ((XEN_GUEST_HANDLE_PARAM(const_##type)) { (const type *)ptr })
-
-/*
- * Copy an array of objects to guest context via a guest handle,
- * specifying an offset into the guest array.
- */
-#define copy_to_guest_offset(hnd, off, ptr, nr) ({      \
-    const typeof(*(ptr)) *_s = (ptr);                   \
-    char (*_d)[sizeof(*_s)] = (void *)(hnd).p;          \
-    /* Check that the handle is not for a const type */ \
-    void *__maybe_unused _t = (hnd).p;                  \
-    (void)((hnd).p == _s);                              \
-    raw_copy_to_guest(_d+(off), _s, sizeof(*_s)*(nr));  \
-})
-
-/*
- * Copy an array of objects from guest context via a guest handle,
- * specifying an offset into the guest array.
- */
-#define copy_from_guest_offset(ptr, hnd, off, nr) ({    \
-    const typeof(*(ptr)) *_s = (hnd).p;                 \
-    typeof(*(ptr)) *_d = (ptr);                         \
-    raw_copy_from_guest(_d, _s+(off), sizeof(*_d)*(nr));\
-})
-
-#define clear_guest_offset(hnd, off, nr) ({    \
-    void *_d = (hnd).p;                        \
-    raw_clear_guest(_d+(off), nr);             \
-})
-
-/* Copy sub-field of a structure to guest context via a guest handle. */
-#define copy_field_to_guest(hnd, ptr, field) ({         \
-    const typeof(&(ptr)->field) _s = &(ptr)->field;     \
-    void *_d = &(hnd).p->field;                         \
-    (void)(&(hnd).p->field == _s);                      \
-    raw_copy_to_guest(_d, _s, sizeof(*_s));             \
-})
-
-/* Copy sub-field of a structure from guest context via a guest handle. */
-#define copy_field_from_guest(ptr, hnd, field) ({       \
-    const typeof(&(ptr)->field) _s = &(hnd).p->field;   \
-    typeof(&(ptr)->field) _d = &(ptr)->field;           \
-    raw_copy_from_guest(_d, _s, sizeof(*_d));           \
-})
-
 /*
  * Pre-validate a guest handle.
  * Allows use of faster __copy_* functions.
@@ -126,39 +51,6 @@
                      (last)-(first)+1,                  \
                      sizeof(*(hnd).p)))
 
-#define __copy_to_guest_offset(hnd, off, ptr, nr) ({    \
-    const typeof(*(ptr)) *_s = (ptr);                   \
-    char (*_d)[sizeof(*_s)] = (void *)(hnd).p;          \
-    /* Check that the handle is not for a const type */ \
-    void *__maybe_unused _t = (hnd).p;                  \
-    (void)((hnd).p == _s);                              \
-    __raw_copy_to_guest(_d+(off), _s, sizeof(*_s)*(nr));\
-})
-
-#define __copy_from_guest_offset(ptr, hnd, off, nr) ({  \
-    const typeof(*(ptr)) *_s = (hnd).p;                 \
-    typeof(*(ptr)) *_d = (ptr);                         \
-    __raw_copy_from_guest(_d, _s+(off), sizeof(*_d)*(nr));\
-})
-
-#define __clear_guest_offset(hnd, off, nr) ({    \
-    void *_d = (hnd).p;                          \
-    __raw_clear_guest(_d+(off), nr);             \
-})
-
-#define __copy_field_to_guest(hnd, ptr, field) ({       \
-    const typeof(&(ptr)->field) _s = &(ptr)->field;     \
-    void *_d = &(hnd).p->field;                         \
-    (void)(&(hnd).p->field == _s);                      \
-    __raw_copy_to_guest(_d, _s, sizeof(*_s));           \
-})
-
-#define __copy_field_from_guest(ptr, hnd, field) ({     \
-    const typeof(&(ptr)->field) _s = &(hnd).p->field;   \
-    typeof(&(ptr)->field) _d = &(ptr)->field;           \
-    __raw_copy_from_guest(_d, _s, sizeof(*_d));         \
-})
-
 #endif /* __ASM_X86_GUEST_ACCESS_H__ */
 /*
  * Local variables:
diff --git a/xen/include/xen/guest_access.h b/xen/include/xen/guest_access.h
index ef9aaa3efcfe..4957b8d1f2b8 100644
--- a/xen/include/xen/guest_access.h
+++ b/xen/include/xen/guest_access.h
@@ -11,6 +11,86 @@
 #include <xen/types.h>
 #include <public/xen.h>
 
+/* Is the guest handle a NULL reference? */
+#define guest_handle_is_null(hnd)        ((hnd).p == NULL)
+
+/* Offset the given guest handle into the array it refers to. */
+#define guest_handle_add_offset(hnd, nr) ((hnd).p += (nr))
+#define guest_handle_subtract_offset(hnd, nr) ((hnd).p -= (nr))
+
+/* Cast a guest handle (either XEN_GUEST_HANDLE or XEN_GUEST_HANDLE_PARAM)
+ * to the specified type of XEN_GUEST_HANDLE_PARAM. */
+#define guest_handle_cast(hnd, type) ({         \
+    type *_x = (hnd).p;                         \
+    (XEN_GUEST_HANDLE_PARAM(type)) { _x };            \
+})
+
+/* Cast a XEN_GUEST_HANDLE to XEN_GUEST_HANDLE_PARAM */
+#define guest_handle_to_param(hnd, type) ({                  \
+    typeof((hnd).p) _x = (hnd).p;                            \
+    XEN_GUEST_HANDLE_PARAM(type) _y = { _x };                \
+    /* type checking: make sure that the pointers inside     \
+     * XEN_GUEST_HANDLE and XEN_GUEST_HANDLE_PARAM are of    \
+     * the same type, then return hnd */                     \
+    (void)(&_x == &_y.p);                                    \
+    _y;                                                      \
+})
+
+#define guest_handle_for_field(hnd, type, fld)          \
+    ((XEN_GUEST_HANDLE(type)) { &(hnd).p->fld })
+
+#define guest_handle_from_ptr(ptr, type)        \
+    ((XEN_GUEST_HANDLE_PARAM(type)) { (type *)ptr })
+#define const_guest_handle_from_ptr(ptr, type)  \
+    ((XEN_GUEST_HANDLE_PARAM(const_##type)) { (const type *)ptr })
+
+/*
+ * Copy an array of objects to guest context via a guest handle,
+ * specifying an offset into the guest array.
+ */
+#define copy_to_guest_offset(hnd, off, ptr, nr) ({      \
+    const typeof(*(ptr)) *_s = (ptr);                   \
+    char (*_d)[sizeof(*_s)] = (void *)(hnd).p;          \
+    /* Check that the handle is not for a const type */ \
+    void *__maybe_unused _t = (hnd).p;                  \
+    (void)((hnd).p == _s);                              \
+    raw_copy_to_guest(_d+(off), _s, sizeof(*_s)*(nr));  \
+})
+
+/*
+ * Clear an array of objects in guest context via a guest handle,
+ * specifying an offset into the guest array.
+ */
+#define clear_guest_offset(hnd, off, nr) ({    \
+    void *_d = (hnd).p;                        \
+    raw_clear_guest(_d+(off), nr);             \
+})
+
+/*
+ * Copy an array of objects from guest context via a guest handle,
+ * specifying an offset into the guest array.
+ */
+#define copy_from_guest_offset(ptr, hnd, off, nr) ({    \
+    const typeof(*(ptr)) *_s = (hnd).p;                 \
+    typeof(*(ptr)) *_d = (ptr);                         \
+    raw_copy_from_guest(_d, _s+(off), sizeof(*_d)*(nr));\
+})
+
+/* Copy sub-field of a structure to guest context via a guest handle. */
+#define copy_field_to_guest(hnd, ptr, field) ({         \
+    const typeof(&(ptr)->field) _s = &(ptr)->field;     \
+    void *_d = &(hnd).p->field;                         \
+    (void)(&(hnd).p->field == _s);                      \
+    raw_copy_to_guest(_d, _s, sizeof(*_s));             \
+})
+
+/* Copy sub-field of a structure from guest context via a guest handle. */
+#define copy_field_from_guest(ptr, hnd, field) ({       \
+    const typeof(&(ptr)->field) _s = &(hnd).p->field;   \
+    typeof(&(ptr)->field) _d = &(ptr)->field;           \
+    raw_copy_from_guest(_d, _s, sizeof(*_d));           \
+})
+
 #define copy_to_guest(hnd, ptr, nr)                     \
     copy_to_guest_offset(hnd, 0, ptr, nr)
 
@@ -20,6 +100,45 @@
 #define clear_guest(hnd, nr)                            \
     clear_guest_offset(hnd, 0, nr)
 
+/*
+ * The __copy_* functions should only be used after the guest handle has
+ * been pre-validated via guest_handle_okay() and
+ * guest_handle_subrange_okay().
+ */
+
+#define __copy_to_guest_offset(hnd, off, ptr, nr) ({    \
+    const typeof(*(ptr)) *_s = (ptr);                   \
+    char (*_d)[sizeof(*_s)] = (void *)(hnd).p;          \
+    /* Check that the handle is not for a const type */ \
+    void *__maybe_unused _t = (hnd).p;                  \
+    (void)((hnd).p == _s);                              \
+    __raw_copy_to_guest(_d+(off), _s, sizeof(*_s)*(nr));\
+})
+
+#define __clear_guest_offset(hnd, off, nr) ({    \
+    void *_d = (hnd).p;                          \
+    __raw_clear_guest(_d + (off), nr);           \
+})
+
+#define __copy_from_guest_offset(ptr, hnd, off, nr) ({  \
+    const typeof(*(ptr)) *_s = (hnd).p;                 \
+    typeof(*(ptr)) *_d = (ptr);                         \
+    __raw_copy_from_guest(_d, _s+(off), sizeof(*_d)*(nr));\
+})
+
+#define __copy_field_to_guest(hnd, ptr, field) ({       \
+    const typeof(&(ptr)->field) _s = &(ptr)->field;     \
+    void *_d = &(hnd).p->field;                         \
+    (void)(&(hnd).p->field == _s);                      \
+    __raw_copy_to_guest(_d, _s, sizeof(*_s));           \
+})
+
+#define __copy_field_from_guest(ptr, hnd, field) ({     \
+    const typeof(&(ptr)->field) _s = &(hnd).p->field;   \
+    typeof(&(ptr)->field) _d = &(ptr)->field;           \
+    __raw_copy_from_guest(_d, _s, sizeof(*_d));         \
+})
+
 #define __copy_to_guest(hnd, ptr, nr)                   \
     __copy_to_guest_offset(hnd, 0, ptr, nr)
 
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu Jul 30 17:52:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jul 2020 17:52:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1CjC-00089v-FR; Thu, 30 Jul 2020 17:52:26 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=IK5u=BJ=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1k1CjB-00089g-G1
 for xen-devel@lists.xenproject.org; Thu, 30 Jul 2020 17:52:25 +0000
X-Inumbo-ID: 6cd457a2-d28d-11ea-8db0-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6cd457a2-d28d-11ea-8db0-bc764e2007e4;
 Thu, 30 Jul 2020 17:52:24 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:
 Sender:Reply-To:MIME-Version:Content-Type:Content-Transfer-Encoding:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=uz1f4JaSlPa4wvD4/wRGUINT1oVECe/gDerhHtYEvBE=; b=Sy5oj9eXYQSRViI4imhGOKOsMe
 cW0inVI1B0gMPiQDKoenLOTJpmE8qI1Qn1b9bdy3MGRZJG9HLSQynX7eLWlHBhOZJXXzfaJRRql8V
 ViqP1HhNRcTuGQ5GsUVqrTxQUHeQl1vzCqdP2QQUaRHtGIbYVzoVwW76el4kS8udv1Yo=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1k1CjA-00079q-9f; Thu, 30 Jul 2020 17:52:24 +0000
Received: from 54-240-197-227.amazon.com ([54.240.197.227]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1k1CjA-0004nV-14; Thu, 30 Jul 2020 17:52:24 +0000
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v2 3/6] xen/arm: guestcopy: Re-order the includes
Date: Thu, 30 Jul 2020 18:52:06 +0100
Message-Id: <20200730175213.30679-7-julien@xen.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20200730175213.30679-1-julien@xen.org>
References: <20200730175213.30679-1-julien@xen.org>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Julien Grall <jgrall@amazon.com>, Stefano Stabellini <sstabellini@kernel.org>,
 julien@xen.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Julien Grall <jgrall@amazon.com>

We usually have xen/ includes first and then asm/. They are also ordered
alphabetically among themselves.

Signed-off-by: Julien Grall <jgrall@amazon.com>
---
 xen/arch/arm/guestcopy.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/guestcopy.c b/xen/arch/arm/guestcopy.c
index 7a0f3e9d5fc6..c8023e2bca5d 100644
--- a/xen/arch/arm/guestcopy.c
+++ b/xen/arch/arm/guestcopy.c
@@ -1,7 +1,8 @@
-#include <xen/lib.h>
 #include <xen/domain_page.h>
+#include <xen/lib.h>
 #include <xen/mm.h>
 #include <xen/sched.h>
+
 #include <asm/current.h>
 #include <asm/guest_access.h>
 
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu Jul 30 17:52:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jul 2020 17:52:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1CjM-0008Cl-Dz; Thu, 30 Jul 2020 17:52:36 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=IK5u=BJ=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1k1CjL-00089P-06
 for xen-devel@lists.xenproject.org; Thu, 30 Jul 2020 17:52:35 +0000
X-Inumbo-ID: 6aec02dc-d28d-11ea-aaff-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6aec02dc-d28d-11ea-aaff-12813bfff9fa;
 Thu, 30 Jul 2020 17:52:21 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:
 Sender:Reply-To:MIME-Version:Content-Type:Content-Transfer-Encoding:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=wKQvWLwb3vbFbaN60ANqWtb/FsamZy4DJptrPdgq66M=; b=RJJdGv2/Ln4j/f/bvPNc53qsTO
 HQgXPmfNqiP2RNI9xPcid5Nwq1t2LIFicZE29u1RrwzzyhMnBxuhj2Sv6QJnt792+AJTmKjJqWjJk
 khoysomvWkvhpCiIvb52VefNQ3UKNs6gBSO1Fydxt+1RYDy/z1E1fXP3OaiYWm3SgVSc=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1k1Cj7-00079P-3v; Thu, 30 Jul 2020 17:52:21 +0000
Received: from 54-240-197-227.amazon.com ([54.240.197.227]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1k1Cj6-0004nV-RI; Thu, 30 Jul 2020 17:52:21 +0000
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v2 2/6] xen/arm: decode: Re-order the includes
Date: Thu, 30 Jul 2020 18:52:03 +0100
Message-Id: <20200730175213.30679-4-julien@xen.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20200730175213.30679-1-julien@xen.org>
References: <20200730175213.30679-1-julien@xen.org>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Julien Grall <jgrall@amazon.com>, Stefano Stabellini <sstabellini@kernel.org>,
 julien@xen.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Julien Grall <jgrall@amazon.com>

We usually have xen/ includes first and then asm/. They are also ordered
alphabetically among themselves.

Signed-off-by: Julien Grall <jgrall@amazon.com>
---
 xen/arch/arm/decode.c | 5 +++--
 xen/arch/arm/kernel.c | 2 +-
 2 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/xen/arch/arm/decode.c b/xen/arch/arm/decode.c
index 8b1e15d11892..144793c8cea0 100644
--- a/xen/arch/arm/decode.c
+++ b/xen/arch/arm/decode.c
@@ -17,11 +17,12 @@
  * GNU General Public License for more details.
  */
 
-#include <xen/types.h>
+#include <xen/lib.h>
 #include <xen/sched.h>
+#include <xen/types.h>
+
 #include <asm/current.h>
 #include <asm/guest_access.h>
-#include <xen/lib.h>
 
 #include "decode.h"
 
diff --git a/xen/arch/arm/kernel.c b/xen/arch/arm/kernel.c
index f95fa392af44..032923853f2c 100644
--- a/xen/arch/arm/kernel.c
+++ b/xen/arch/arm/kernel.c
@@ -5,6 +5,7 @@
  */
 #include <xen/domain_page.h>
 #include <xen/errno.h>
+#include <xen/guest_access.h>
 #include <xen/gunzip.h>
 #include <xen/init.h>
 #include <xen/lib.h>
@@ -14,7 +15,6 @@
 #include <xen/vmap.h>
 
 #include <asm/byteorder.h>
-#include <asm/guest_access.h>
 #include <asm/kernel.h>
 #include <asm/setup.h>
 
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu Jul 30 17:52:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jul 2020 17:52:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1CjH-0008AQ-OH; Thu, 30 Jul 2020 17:52:31 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=IK5u=BJ=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1k1CjF-00089P-WB
 for xen-devel@lists.xenproject.org; Thu, 30 Jul 2020 17:52:30 +0000
X-Inumbo-ID: 6a6afc82-d28d-11ea-aaff-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6a6afc82-d28d-11ea-aaff-12813bfff9fa;
 Thu, 30 Jul 2020 17:52:20 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:
 Sender:Reply-To:MIME-Version:Content-Type:Content-Transfer-Encoding:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=YMp6yPrLfWKrTtFlQljFXWPvFCrq7jJMWfWyLnEct0g=; b=dFsiwvxroEX580uHUCUg7Aoyyg
 ViSgZYMP4Yd22r4r5I/adNrFQV99Z7R7hPdhrPN3XJi3b1PXMdebgY2LMz54V5M4fZpKqTmAsQ3Fd
 btY1QUvSzlC2TdVqy6APhnbrGt8BP+f095I/dfQzVmeeG+vMOZCS29arIJB06MLpTDkc=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1k1Cj6-00079I-2x; Thu, 30 Jul 2020 17:52:20 +0000
Received: from 54-240-197-227.amazon.com ([54.240.197.227]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1k1Cj5-0004nV-PA; Thu, 30 Jul 2020 17:52:20 +0000
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v2 1/7] xen/guest_access: Add emacs magics
Date: Thu, 30 Jul 2020 18:52:02 +0100
Message-Id: <20200730175213.30679-3-julien@xen.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20200730175213.30679-1-julien@xen.org>
References: <20200730175213.30679-1-julien@xen.org>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, julien@xen.org,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Julien Grall <jgrall@amazon.com>, Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Julien Grall <jgrall@amazon.com>

Add emacs magics for xen/guest_access.h and
asm-x86/guest_access.h.

Signed-off-by: Julien Grall <jgrall@amazon.com>
Acked-by: Jan Beulich <jbeulich@suse.com>

---
    Changes in v2:
        - Remove the word "missing"
---
 xen/include/asm-x86/guest_access.h | 8 ++++++++
 xen/include/xen/guest_access.h     | 8 ++++++++
 2 files changed, 16 insertions(+)

diff --git a/xen/include/asm-x86/guest_access.h b/xen/include/asm-x86/guest_access.h
index 2be3577bd340..3ffde205f6a1 100644
--- a/xen/include/asm-x86/guest_access.h
+++ b/xen/include/asm-x86/guest_access.h
@@ -160,3 +160,11 @@
 })
 
 #endif /* __ASM_X86_GUEST_ACCESS_H__ */
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/xen/guest_access.h b/xen/include/xen/guest_access.h
index 09989df819ce..ef9aaa3efcfe 100644
--- a/xen/include/xen/guest_access.h
+++ b/xen/include/xen/guest_access.h
@@ -33,3 +33,11 @@ char *safe_copy_string_from_guest(XEN_GUEST_HANDLE(char) u_buf,
                                   size_t size, size_t max_size);
 
 #endif /* __XEN_GUEST_ACCESS_H__ */
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu Jul 30 17:52:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jul 2020 17:52:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1Cj6-00089U-UP; Thu, 30 Jul 2020 17:52:20 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=IK5u=BJ=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1k1Cj6-00089P-4Z
 for xen-devel@lists.xenproject.org; Thu, 30 Jul 2020 17:52:20 +0000
X-Inumbo-ID: 6968361a-d28d-11ea-aaff-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6968361a-d28d-11ea-aaff-12813bfff9fa;
 Thu, 30 Jul 2020 17:52:19 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:
 Sender:Reply-To:MIME-Version:Content-Type:Content-Transfer-Encoding:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=ShgvcGkxdE4OnoXDwV1go3UlHBZgZS/RIoOwLBZvFGA=; b=3yYE6hNH7/Yi413rAoLlj7WaZE
 6WFLMIUCKYMPIcDQBdWAMgxD7pQRMwgummE0EsQYZVId82ittNXnFIprH5AkaaZbwjMVSIiGUIwdD
 ekNZ+2dAKDU674SKp5IuopBTRdjLmw+1QV9RBsL+JB0CeQHlsgSokI64JpVrN3ttYP4w=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1k1Cj4-000798-EE; Thu, 30 Jul 2020 17:52:18 +0000
Received: from 54-240-197-227.amazon.com ([54.240.197.227]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1k1Cj4-0004nV-43; Thu, 30 Jul 2020 17:52:18 +0000
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v2 1/6] xen/arm: kernel: Re-order the includes
Date: Thu, 30 Jul 2020 18:52:01 +0100
Message-Id: <20200730175213.30679-2-julien@xen.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20200730175213.30679-1-julien@xen.org>
References: <20200730175213.30679-1-julien@xen.org>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Julien Grall <jgrall@amazon.com>, Stefano Stabellini <sstabellini@kernel.org>,
 julien@xen.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Julien Grall <jgrall@amazon.com>

We usually have xen/ includes first and then asm/. They are also ordered
alphabetically among themselves.

Signed-off-by: Julien Grall <jgrall@amazon.com>
---
 xen/arch/arm/kernel.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/xen/arch/arm/kernel.c b/xen/arch/arm/kernel.c
index 8eff0748367d..f95fa392af44 100644
--- a/xen/arch/arm/kernel.c
+++ b/xen/arch/arm/kernel.c
@@ -3,20 +3,20 @@
  *
  * Copyright (C) 2011 Citrix Systems, Inc.
  */
+#include <xen/domain_page.h>
 #include <xen/errno.h>
+#include <xen/gunzip.h>
 #include <xen/init.h>
 #include <xen/lib.h>
+#include <xen/libfdt/libfdt.h>
 #include <xen/mm.h>
-#include <xen/domain_page.h>
 #include <xen/sched.h>
-#include <asm/byteorder.h>
-#include <asm/setup.h>
-#include <xen/libfdt/libfdt.h>
-#include <xen/gunzip.h>
 #include <xen/vmap.h>
 
+#include <asm/byteorder.h>
 #include <asm/guest_access.h>
 #include <asm/kernel.h>
+#include <asm/setup.h>
 
 #define UIMAGE_MAGIC          0x27051956
 #define UIMAGE_NMLEN          32
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu Jul 30 17:52:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jul 2020 17:52:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1CjR-0008FV-P6; Thu, 30 Jul 2020 17:52:41 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=IK5u=BJ=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1k1CjQ-00089P-07
 for xen-devel@lists.xenproject.org; Thu, 30 Jul 2020 17:52:40 +0000
X-Inumbo-ID: 6b914058-d28d-11ea-aaff-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6b914058-d28d-11ea-aaff-12813bfff9fa;
 Thu, 30 Jul 2020 17:52:22 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:
 Sender:Reply-To:MIME-Version:Content-Type:Content-Transfer-Encoding:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=ShgvcGkxdE4OnoXDwV1go3UlHBZgZS/RIoOwLBZvFGA=; b=RDzByCdshJW1Er+0IgfzylPy5n
 4CksJB+yJgm3fDSmBtpzBWNIMHaGppj0H2NfFZix3KvYNmR6ZYEPN9zAtvNu6q3X1PCPFIsBiVFYZ
 lZtf4jomHPAu6PuhOslzMMPbRPMF2aB7iN9AcMIoJ8+4zGUYHyKTEFoy9rQNhYuOZzFQ=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1k1Cj8-00079Z-5l; Thu, 30 Jul 2020 17:52:22 +0000
Received: from 54-240-197-227.amazon.com ([54.240.197.227]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1k1Cj7-0004nV-TG; Thu, 30 Jul 2020 17:52:22 +0000
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v2 2/7] xen/arm: kernel: Re-order the includes
Date: Thu, 30 Jul 2020 18:52:04 +0100
Message-Id: <20200730175213.30679-5-julien@xen.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20200730175213.30679-1-julien@xen.org>
References: <20200730175213.30679-1-julien@xen.org>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Julien Grall <jgrall@amazon.com>, Stefano Stabellini <sstabellini@kernel.org>,
 julien@xen.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Julien Grall <jgrall@amazon.com>

We usually have xen/ includes first and then asm/. They are also ordered
alphabetically among themselves.

Signed-off-by: Julien Grall <jgrall@amazon.com>
---
 xen/arch/arm/kernel.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/xen/arch/arm/kernel.c b/xen/arch/arm/kernel.c
index 8eff0748367d..f95fa392af44 100644
--- a/xen/arch/arm/kernel.c
+++ b/xen/arch/arm/kernel.c
@@ -3,20 +3,20 @@
  *
  * Copyright (C) 2011 Citrix Systems, Inc.
  */
+#include <xen/domain_page.h>
 #include <xen/errno.h>
+#include <xen/gunzip.h>
 #include <xen/init.h>
 #include <xen/lib.h>
+#include <xen/libfdt/libfdt.h>
 #include <xen/mm.h>
-#include <xen/domain_page.h>
 #include <xen/sched.h>
-#include <asm/byteorder.h>
-#include <asm/setup.h>
-#include <xen/libfdt/libfdt.h>
-#include <xen/gunzip.h>
 #include <xen/vmap.h>
 
+#include <asm/byteorder.h>
 #include <asm/guest_access.h>
 #include <asm/kernel.h>
+#include <asm/setup.h>
 
 #define UIMAGE_MAGIC          0x27051956
 #define UIMAGE_NMLEN          32
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu Jul 30 17:52:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jul 2020 17:52:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1CjX-0008IA-30; Thu, 30 Jul 2020 17:52:47 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=IK5u=BJ=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1k1CjV-00089P-0B
 for xen-devel@lists.xenproject.org; Thu, 30 Jul 2020 17:52:45 +0000
X-Inumbo-ID: 6c29e948-d28d-11ea-aaff-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6c29e948-d28d-11ea-aaff-12813bfff9fa;
 Thu, 30 Jul 2020 17:52:23 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:
 Sender:Reply-To:MIME-Version:Content-Type:Content-Transfer-Encoding:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=wKQvWLwb3vbFbaN60ANqWtb/FsamZy4DJptrPdgq66M=; b=QYgK3JxNc9pNbpXnViLPgxcRRN
 Yn79mzTi/xICc7L3eS5nahFQ4D0f8WDxm/mem2/w/kMTMhOOw8rtCEl4wfCAPwHn1+q8akaJfaN6e
 5F/rVQoytrqMNikKVXQ0lenWUlZXQkfBStk0lPQYlD0lth7khFY2iYvLs7uXdU2lB7bk=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1k1Cj9-00079i-8i; Thu, 30 Jul 2020 17:52:23 +0000
Received: from 54-240-197-227.amazon.com ([54.240.197.227]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1k1Cj8-0004nV-VM; Thu, 30 Jul 2020 17:52:23 +0000
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v2 3/7] xen/arm: decode: Re-order the includes
Date: Thu, 30 Jul 2020 18:52:05 +0100
Message-Id: <20200730175213.30679-6-julien@xen.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20200730175213.30679-1-julien@xen.org>
References: <20200730175213.30679-1-julien@xen.org>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Julien Grall <jgrall@amazon.com>, Stefano Stabellini <sstabellini@kernel.org>,
 julien@xen.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Julien Grall <jgrall@amazon.com>

We usually have xen/ includes first and then asm/. They are also ordered
alphabetically among themselves.

Signed-off-by: Julien Grall <jgrall@amazon.com>
---
 xen/arch/arm/decode.c | 5 +++--
 xen/arch/arm/kernel.c | 2 +-
 2 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/xen/arch/arm/decode.c b/xen/arch/arm/decode.c
index 8b1e15d11892..144793c8cea0 100644
--- a/xen/arch/arm/decode.c
+++ b/xen/arch/arm/decode.c
@@ -17,11 +17,12 @@
  * GNU General Public License for more details.
  */
 
-#include <xen/types.h>
+#include <xen/lib.h>
 #include <xen/sched.h>
+#include <xen/types.h>
+
 #include <asm/current.h>
 #include <asm/guest_access.h>
-#include <xen/lib.h>
 
 #include "decode.h"
 
diff --git a/xen/arch/arm/kernel.c b/xen/arch/arm/kernel.c
index f95fa392af44..032923853f2c 100644
--- a/xen/arch/arm/kernel.c
+++ b/xen/arch/arm/kernel.c
@@ -5,6 +5,7 @@
  */
 #include <xen/domain_page.h>
 #include <xen/errno.h>
+#include <xen/guest_access.h>
 #include <xen/gunzip.h>
 #include <xen/init.h>
 #include <xen/lib.h>
@@ -14,7 +15,6 @@
 #include <xen/vmap.h>
 
 #include <asm/byteorder.h>
-#include <asm/guest_access.h>
 #include <asm/kernel.h>
 #include <asm/setup.h>
 
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu Jul 30 17:52:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jul 2020 17:52:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1Cja-0008KR-Gy; Thu, 30 Jul 2020 17:52:50 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=IK5u=BJ=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1k1Cja-00089P-0G
 for xen-devel@lists.xenproject.org; Thu, 30 Jul 2020 17:52:50 +0000
X-Inumbo-ID: 6d683210-d28d-11ea-aaff-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6d683210-d28d-11ea-aaff-12813bfff9fa;
 Thu, 30 Jul 2020 17:52:25 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:
 Sender:Reply-To:MIME-Version:Content-Type:Content-Transfer-Encoding:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=uz1f4JaSlPa4wvD4/wRGUINT1oVECe/gDerhHtYEvBE=; b=VkdeK6gKU2GAWLM4+0LNe9lRPY
 OzYChfIbjn6R+t7jQwgOiDCuQBLel0khAdojYu7+IKshY5lg5BDgTl3a4R2hlipRZFdi65yPnMTLC
 eWsxsdNO+PJ/SyC2/sgV6FtqOuawJCpcw1dTYfCYg9eOsXouqh4f61dyzla1BOLFgK/w=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1k1CjB-00079x-Bs; Thu, 30 Jul 2020 17:52:25 +0000
Received: from 54-240-197-227.amazon.com ([54.240.197.227]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1k1CjB-0004nV-38; Thu, 30 Jul 2020 17:52:25 +0000
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v2 4/7] xen/arm: guestcopy: Re-order the includes
Date: Thu, 30 Jul 2020 18:52:07 +0100
Message-Id: <20200730175213.30679-8-julien@xen.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20200730175213.30679-1-julien@xen.org>
References: <20200730175213.30679-1-julien@xen.org>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Julien Grall <jgrall@amazon.com>, Stefano Stabellini <sstabellini@kernel.org>,
 julien@xen.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Julien Grall <jgrall@amazon.com>

We usually have xen/ includes first and then asm/. They are also ordered
alphabetically among themselves.

Signed-off-by: Julien Grall <jgrall@amazon.com>
---
 xen/arch/arm/guestcopy.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/guestcopy.c b/xen/arch/arm/guestcopy.c
index 7a0f3e9d5fc6..c8023e2bca5d 100644
--- a/xen/arch/arm/guestcopy.c
+++ b/xen/arch/arm/guestcopy.c
@@ -1,7 +1,8 @@
-#include <xen/lib.h>
 #include <xen/domain_page.h>
+#include <xen/lib.h>
 #include <xen/mm.h>
 #include <xen/sched.h>
+
 #include <asm/current.h>
 #include <asm/guest_access.h>
 
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu Jul 30 17:52:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jul 2020 17:52:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1Cjf-0008Op-RQ; Thu, 30 Jul 2020 17:52:55 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=IK5u=BJ=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1k1Cjf-00089P-0Y
 for xen-devel@lists.xenproject.org; Thu, 30 Jul 2020 17:52:55 +0000
X-Inumbo-ID: 6faa351e-d28d-11ea-aaff-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6faa351e-d28d-11ea-aaff-12813bfff9fa;
 Thu, 30 Jul 2020 17:52:29 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:
 Sender:Reply-To:MIME-Version:Content-Type:Content-Transfer-Encoding:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=JvdgUN4G+hJQeSgeZrRHkKlqRzJx599CQaDONH0J3Tw=; b=v8qSCiyGVTfxkTpFTI1hiNTT2i
 m2CiGGXqL3M+qtt7lAPHIcTyQ6PS2JYXBWQD0/lSU0Hqu3L1Lz9LvveBYKBBHrmVwyim0Uk49y7ti
 tEMKamxKVZU1XVSzHNzqcZGJ4z9MF5iUMLuzjKd8/D34LWKVt9WL1/G9TUbBveI/AzY8=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1k1CjD-0007A5-Fv; Thu, 30 Jul 2020 17:52:27 +0000
Received: from 54-240-197-227.amazon.com ([54.240.197.227]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1k1CjD-0004nV-7A; Thu, 30 Jul 2020 17:52:27 +0000
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v2 4/6] xen: include xen/guest_access.h rather than
 asm/guest_access.h
Date: Thu, 30 Jul 2020 18:52:08 +0100
Message-Id: <20200730175213.30679-9-julien@xen.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20200730175213.30679-1-julien@xen.org>
References: <20200730175213.30679-1-julien@xen.org>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin Tian <kevin.tian@intel.com>,
 Stefano Stabellini <sstabellini@kernel.org>, julien@xen.org,
 Jun Nakajima <jun.nakajima@intel.com>, Wei Liu <wl@xen.org>,
 Paul Durrant <paul@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Julien Grall <jgrall@amazon.com>, Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Julien Grall <jgrall@amazon.com>

Only a few places actually include asm/guest_access.h. While this is
fine today, a follow-up patch will move most of the helpers from
asm/guest_access.h to xen/guest_access.h.

To prepare for the move, everyone should include xen/guest_access.h
rather than asm/guest_access.h.

Interestingly, asm-arm/guest_access.h includes xen/guest_access.h. The
inclusion is now removed, as nothing but xen/guest_access.h should
include asm/guest_access.h.

Signed-off-by: Julien Grall <jgrall@amazon.com>

---
    Changes in v2:
        - Remove some changes that weren't meant to be here.
---
 xen/arch/arm/decode.c                | 2 +-
 xen/arch/arm/domain.c                | 2 +-
 xen/arch/arm/guest_walk.c            | 3 ++-
 xen/arch/arm/guestcopy.c             | 2 +-
 xen/arch/arm/vgic-v3-its.c           | 2 +-
 xen/arch/x86/hvm/svm/svm.c           | 2 +-
 xen/arch/x86/hvm/viridian/viridian.c | 2 +-
 xen/arch/x86/hvm/vmx/vmx.c           | 2 +-
 xen/common/libelf/libelf-loader.c    | 2 +-
 xen/include/asm-arm/guest_access.h   | 1 -
 xen/lib/x86/private.h                | 2 +-
 11 files changed, 11 insertions(+), 11 deletions(-)

diff --git a/xen/arch/arm/decode.c b/xen/arch/arm/decode.c
index 144793c8cea0..792c2e92a7eb 100644
--- a/xen/arch/arm/decode.c
+++ b/xen/arch/arm/decode.c
@@ -17,12 +17,12 @@
  * GNU General Public License for more details.
  */
 
+#include <xen/guest_access.h>
 #include <xen/lib.h>
 #include <xen/sched.h>
 #include <xen/types.h>
 
 #include <asm/current.h>
-#include <asm/guest_access.h>
 
 #include "decode.h"
 
diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 31169326b2e3..9258f6d3faa2 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -12,6 +12,7 @@
 #include <xen/bitops.h>
 #include <xen/errno.h>
 #include <xen/grant_table.h>
+#include <xen/guest_access.h>
 #include <xen/hypercall.h>
 #include <xen/init.h>
 #include <xen/lib.h>
@@ -26,7 +27,6 @@
 #include <asm/current.h>
 #include <asm/event.h>
 #include <asm/gic.h>
-#include <asm/guest_access.h>
 #include <asm/guest_atomics.h>
 #include <asm/irq.h>
 #include <asm/p2m.h>
diff --git a/xen/arch/arm/guest_walk.c b/xen/arch/arm/guest_walk.c
index a1cdd7f4afea..b4496c4c86c6 100644
--- a/xen/arch/arm/guest_walk.c
+++ b/xen/arch/arm/guest_walk.c
@@ -16,8 +16,9 @@
  */
 
 #include <xen/domain_page.h>
+#include <xen/guest_access.h>
 #include <xen/sched.h>
-#include <asm/guest_access.h>
+
 #include <asm/guest_walk.h>
 #include <asm/short-desc.h>
 
diff --git a/xen/arch/arm/guestcopy.c b/xen/arch/arm/guestcopy.c
index c8023e2bca5d..32681606d8fc 100644
--- a/xen/arch/arm/guestcopy.c
+++ b/xen/arch/arm/guestcopy.c
@@ -1,10 +1,10 @@
 #include <xen/domain_page.h>
+#include <xen/guest_access.h>
 #include <xen/lib.h>
 #include <xen/mm.h>
 #include <xen/sched.h>
 
 #include <asm/current.h>
-#include <asm/guest_access.h>
 
 #define COPY_flush_dcache   (1U << 0)
 #define COPY_from_guest     (0U << 1)
diff --git a/xen/arch/arm/vgic-v3-its.c b/xen/arch/arm/vgic-v3-its.c
index 6e153c698d56..58d939b85f92 100644
--- a/xen/arch/arm/vgic-v3-its.c
+++ b/xen/arch/arm/vgic-v3-its.c
@@ -32,6 +32,7 @@
 #include <xen/bitops.h>
 #include <xen/config.h>
 #include <xen/domain_page.h>
+#include <xen/guest_access.h>
 #include <xen/lib.h>
 #include <xen/init.h>
 #include <xen/softirq.h>
@@ -39,7 +40,6 @@
 #include <xen/sched.h>
 #include <xen/sizes.h>
 #include <asm/current.h>
-#include <asm/guest_access.h>
 #include <asm/mmio.h>
 #include <asm/gic_v3_defs.h>
 #include <asm/gic_v3_its.h>
diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
index ca3bbfcbb355..7301f3cd6004 100644
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -16,6 +16,7 @@
  * this program; If not, see <http://www.gnu.org/licenses/>.
  */
 
+#include <xen/guest_access.h>
 #include <xen/init.h>
 #include <xen/lib.h>
 #include <xen/trace.h>
@@ -34,7 +35,6 @@
 #include <asm/cpufeature.h>
 #include <asm/processor.h>
 #include <asm/amd.h>
-#include <asm/guest_access.h>
 #include <asm/debugreg.h>
 #include <asm/msr.h>
 #include <asm/i387.h>
diff --git a/xen/arch/x86/hvm/viridian/viridian.c b/xen/arch/x86/hvm/viridian/viridian.c
index 977c1bc54fad..dc7183a54627 100644
--- a/xen/arch/x86/hvm/viridian/viridian.c
+++ b/xen/arch/x86/hvm/viridian/viridian.c
@@ -5,12 +5,12 @@
  * Hypervisor Top Level Functional Specification for more information.
  */
 
+#include <xen/guest_access.h>
 #include <xen/sched.h>
 #include <xen/version.h>
 #include <xen/hypercall.h>
 #include <xen/domain_page.h>
 #include <xen/param.h>
-#include <asm/guest_access.h>
 #include <asm/guest/hyperv-tlfs.h>
 #include <asm/paging.h>
 #include <asm/p2m.h>
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index eb54aadfbafb..cb5df1e81c9c 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -15,6 +15,7 @@
  * this program; If not, see <http://www.gnu.org/licenses/>.
  */
 
+#include <xen/guest_access.h>
 #include <xen/init.h>
 #include <xen/lib.h>
 #include <xen/param.h>
@@ -31,7 +32,6 @@
 #include <asm/regs.h>
 #include <asm/cpufeature.h>
 #include <asm/processor.h>
-#include <asm/guest_access.h>
 #include <asm/debugreg.h>
 #include <asm/msr.h>
 #include <asm/p2m.h>
diff --git a/xen/common/libelf/libelf-loader.c b/xen/common/libelf/libelf-loader.c
index 0f468727d04a..629cc0d3e611 100644
--- a/xen/common/libelf/libelf-loader.c
+++ b/xen/common/libelf/libelf-loader.c
@@ -16,7 +16,7 @@
  */
 
 #ifdef __XEN__
-#include <asm/guest_access.h>
+#include <xen/guest_access.h>
 #endif
 
 #include "libelf-private.h"
diff --git a/xen/include/asm-arm/guest_access.h b/xen/include/asm-arm/guest_access.h
index 31b9f03f0015..b9a89c495527 100644
--- a/xen/include/asm-arm/guest_access.h
+++ b/xen/include/asm-arm/guest_access.h
@@ -1,7 +1,6 @@
 #ifndef __ASM_ARM_GUEST_ACCESS_H__
 #define __ASM_ARM_GUEST_ACCESS_H__
 
-#include <xen/guest_access.h>
 #include <xen/errno.h>
 #include <xen/sched.h>
 
diff --git a/xen/lib/x86/private.h b/xen/lib/x86/private.h
index b793181464f3..2d53bd3ced23 100644
--- a/xen/lib/x86/private.h
+++ b/xen/lib/x86/private.h
@@ -4,12 +4,12 @@
 #ifdef __XEN__
 
 #include <xen/bitops.h>
+#include <xen/guest_access.h>
 #include <xen/kernel.h>
 #include <xen/lib.h>
 #include <xen/nospec.h>
 #include <xen/types.h>
 
-#include <asm/guest_access.h>
 #include <asm/msr-index.h>
 
 #define copy_to_buffer_offset copy_to_guest_offset
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu Jul 30 18:09:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jul 2020 18:09:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1D05-0001eZ-B8; Thu, 30 Jul 2020 18:09:53 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=IK5u=BJ=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1k1D03-0001eP-Hw
 for xen-devel@lists.xenproject.org; Thu, 30 Jul 2020 18:09:51 +0000
X-Inumbo-ID: dc4aeb9e-d28f-11ea-ab01-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id dc4aeb9e-d28f-11ea-ab01-12813bfff9fa;
 Thu, 30 Jul 2020 18:09:50 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:
 Sender:Reply-To:MIME-Version:Content-Type:Content-Transfer-Encoding:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=HcrdvdMXYTnUwRBlS2DEneqnqQqoRuCUdn6F6d66Mpw=; b=RDw+IThYQ4uxz96MwiXAhvGrf8
 mzOvSStfyzSB45BDMJvDilp9dLxp30WFQ4jsixNFEccVE7difbf7T6mKX7ENovF8NcAuN26rivC+V
 voHIAT6MBz2EqIOwm0RaoZ/JrEu9jZYvq5vJqlZAGP9sgbQblUx/Oy0aY0k6CF+1046w=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1k1D02-0007ch-1R; Thu, 30 Jul 2020 18:09:50 +0000
Received: from 54-240-197-227.amazon.com ([54.240.197.227]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1k1CjM-0004nV-3c; Thu, 30 Jul 2020 17:52:36 +0000
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v2 7/7] xen/guest_access: Fix coding style in
 xen/guest_access.h
Date: Thu, 30 Jul 2020 18:52:13 +0100
Message-Id: <20200730175213.30679-14-julien@xen.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20200730175213.30679-1-julien@xen.org>
References: <20200730175213.30679-1-julien@xen.org>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, julien@xen.org,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Julien Grall <jgrall@amazon.com>, Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Julien Grall <jgrall@amazon.com>

    * Add spaces before and after operators
    * Align \
    * Format comments

No functional changes expected.

Signed-off-by: Julien Grall <jgrall@amazon.com>
---
 xen/include/xen/guest_access.h | 36 +++++++++++++++++++---------------
 1 file changed, 20 insertions(+), 16 deletions(-)

diff --git a/xen/include/xen/guest_access.h b/xen/include/xen/guest_access.h
index 4957b8d1f2b8..52fc7a063249 100644
--- a/xen/include/xen/guest_access.h
+++ b/xen/include/xen/guest_access.h
@@ -18,20 +18,24 @@
 #define guest_handle_add_offset(hnd, nr) ((hnd).p += (nr))
 #define guest_handle_subtract_offset(hnd, nr) ((hnd).p -= (nr))
 
-/* Cast a guest handle (either XEN_GUEST_HANDLE or XEN_GUEST_HANDLE_PARAM)
- * to the specified type of XEN_GUEST_HANDLE_PARAM. */
+/*
+ * Cast a guest handle (either XEN_GUEST_HANDLE or XEN_GUEST_HANDLE_PARAM)
+ * to the specified type of XEN_GUEST_HANDLE_PARAM.
+ */
 #define guest_handle_cast(hnd, type) ({         \
     type *_x = (hnd).p;                         \
-    (XEN_GUEST_HANDLE_PARAM(type)) { _x };            \
+    (XEN_GUEST_HANDLE_PARAM(type)) { _x };      \
 })
 
 /* Cast a XEN_GUEST_HANDLE to XEN_GUEST_HANDLE_PARAM */
 #define guest_handle_to_param(hnd, type) ({                  \
     typeof((hnd).p) _x = (hnd).p;                            \
     XEN_GUEST_HANDLE_PARAM(type) _y = { _x };                \
-    /* type checking: make sure that the pointers inside     \
+    /*                                                       \
+     * type checking: make sure that the pointers inside     \
      * XEN_GUEST_HANDLE and XEN_GUEST_HANDLE_PARAM are of    \
-     * the same type, then return hnd */                     \
+     * the same type, then return hnd.                       \
+     */                                                      \
     (void)(&_x == &_y.p);                                    \
     _y;                                                      \
 })
@@ -106,13 +110,13 @@
  * guest_handle_subrange_okay().
  */
 
-#define __copy_to_guest_offset(hnd, off, ptr, nr) ({    \
-    const typeof(*(ptr)) *_s = (ptr);                   \
-    char (*_d)[sizeof(*_s)] = (void *)(hnd).p;          \
-    /* Check that the handle is not for a const type */ \
-    void *__maybe_unused _t = (hnd).p;                  \
-    (void)((hnd).p == _s);                              \
-    __raw_copy_to_guest(_d+(off), _s, sizeof(*_s)*(nr));\
+#define __copy_to_guest_offset(hnd, off, ptr, nr) ({        \
+    const typeof(*(ptr)) *_s = (ptr);                       \
+    char (*_d)[sizeof(*_s)] = (void *)(hnd).p;              \
+    /* Check that the handle is not for a const type */     \
+    void *__maybe_unused _t = (hnd).p;                      \
+    (void)((hnd).p == _s);                                  \
+    __raw_copy_to_guest(_d + (off), _s, sizeof(*_s) * (nr));\
 })
 
 #define __clear_guest_offset(hnd, off, nr) ({    \
@@ -120,10 +124,10 @@
     __raw_clear_guest(_d + (off), nr);           \
 })
 
-#define __copy_from_guest_offset(ptr, hnd, off, nr) ({  \
-    const typeof(*(ptr)) *_s = (hnd).p;                 \
-    typeof(*(ptr)) *_d = (ptr);                         \
-    __raw_copy_from_guest(_d, _s+(off), sizeof(*_d)*(nr));\
+#define __copy_from_guest_offset(ptr, hnd, off, nr) ({          \
+    const typeof(*(ptr)) *_s = (hnd).p;                         \
+    typeof(*(ptr)) *_d = (ptr);                                 \
+    __raw_copy_from_guest(_d, _s + (off), sizeof (*_d) * (nr)); \
 })
 
 #define __copy_field_to_guest(hnd, ptr, field) ({       \
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu Jul 30 18:10:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jul 2020 18:10:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1D09-0001fF-1H; Thu, 30 Jul 2020 18:09:57 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=IK5u=BJ=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1k1D08-0001eP-A2
 for xen-devel@lists.xenproject.org; Thu, 30 Jul 2020 18:09:56 +0000
X-Inumbo-ID: dc0f8c5d-d28f-11ea-ab01-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id dc0f8c5d-d28f-11ea-ab01-12813bfff9fa;
 Thu, 30 Jul 2020 18:09:50 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:
 Sender:Reply-To:MIME-Version:Content-Type:Content-Transfer-Encoding:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=HcrdvdMXYTnUwRBlS2DEneqnqQqoRuCUdn6F6d66Mpw=; b=D1cBZ4spwPKtMsnON5Set8IhhG
 yQrUYCsw9pJV7txoCklBSd1q9DibcLOhEPMENetrg6IitzgsX72WC4dYROj+cXCY7oa1vIN37uWu2
 rKDQeWwc7uRoDNjPN/qd324RVWpKaMaDesgI5c57p+GB/YU/UZqeblmPUw9sdvAeZ8tk=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1k1D02-0007cl-7G; Thu, 30 Jul 2020 18:09:50 +0000
Received: from 54-240-197-227.amazon.com ([54.240.197.227]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1k1CjK-0004nV-II; Thu, 30 Jul 2020 17:52:34 +0000
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v2 6/6] xen/guest_access: Fix coding style in
 xen/guest_access.h
Date: Thu, 30 Jul 2020 18:52:12 +0100
Message-Id: <20200730175213.30679-13-julien@xen.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20200730175213.30679-1-julien@xen.org>
References: <20200730175213.30679-1-julien@xen.org>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, julien@xen.org,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Julien Grall <jgrall@amazon.com>, Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Julien Grall <jgrall@amazon.com>

    * Add spaces before and after operators
    * Align \
    * Format comments

No functional changes expected.

Signed-off-by: Julien Grall <jgrall@amazon.com>
---
 xen/include/xen/guest_access.h | 36 +++++++++++++++++++---------------
 1 file changed, 20 insertions(+), 16 deletions(-)

diff --git a/xen/include/xen/guest_access.h b/xen/include/xen/guest_access.h
index 4957b8d1f2b8..52fc7a063249 100644
--- a/xen/include/xen/guest_access.h
+++ b/xen/include/xen/guest_access.h
@@ -18,20 +18,24 @@
 #define guest_handle_add_offset(hnd, nr) ((hnd).p += (nr))
 #define guest_handle_subtract_offset(hnd, nr) ((hnd).p -= (nr))
 
-/* Cast a guest handle (either XEN_GUEST_HANDLE or XEN_GUEST_HANDLE_PARAM)
- * to the specified type of XEN_GUEST_HANDLE_PARAM. */
+/*
+ * Cast a guest handle (either XEN_GUEST_HANDLE or XEN_GUEST_HANDLE_PARAM)
+ * to the specified type of XEN_GUEST_HANDLE_PARAM.
+ */
 #define guest_handle_cast(hnd, type) ({         \
     type *_x = (hnd).p;                         \
-    (XEN_GUEST_HANDLE_PARAM(type)) { _x };            \
+    (XEN_GUEST_HANDLE_PARAM(type)) { _x };      \
 })
 
 /* Cast a XEN_GUEST_HANDLE to XEN_GUEST_HANDLE_PARAM */
 #define guest_handle_to_param(hnd, type) ({                  \
     typeof((hnd).p) _x = (hnd).p;                            \
     XEN_GUEST_HANDLE_PARAM(type) _y = { _x };                \
-    /* type checking: make sure that the pointers inside     \
+    /*                                                       \
+     * type checking: make sure that the pointers inside     \
      * XEN_GUEST_HANDLE and XEN_GUEST_HANDLE_PARAM are of    \
-     * the same type, then return hnd */                     \
+     * the same type, then return hnd.                       \
+     */                                                      \
     (void)(&_x == &_y.p);                                    \
     _y;                                                      \
 })
@@ -106,13 +110,13 @@
  * guest_handle_subrange_okay().
  */
 
-#define __copy_to_guest_offset(hnd, off, ptr, nr) ({    \
-    const typeof(*(ptr)) *_s = (ptr);                   \
-    char (*_d)[sizeof(*_s)] = (void *)(hnd).p;          \
-    /* Check that the handle is not for a const type */ \
-    void *__maybe_unused _t = (hnd).p;                  \
-    (void)((hnd).p == _s);                              \
-    __raw_copy_to_guest(_d+(off), _s, sizeof(*_s)*(nr));\
+#define __copy_to_guest_offset(hnd, off, ptr, nr) ({        \
+    const typeof(*(ptr)) *_s = (ptr);                       \
+    char (*_d)[sizeof(*_s)] = (void *)(hnd).p;              \
+    /* Check that the handle is not for a const type */     \
+    void *__maybe_unused _t = (hnd).p;                      \
+    (void)((hnd).p == _s);                                  \
+    __raw_copy_to_guest(_d + (off), _s, sizeof(*_s) * (nr));\
 })
 
 #define __clear_guest_offset(hnd, off, nr) ({    \
@@ -120,10 +124,10 @@
     __raw_clear_guest(_d + (off), nr);           \
 })
 
-#define __copy_from_guest_offset(ptr, hnd, off, nr) ({  \
-    const typeof(*(ptr)) *_s = (hnd).p;                 \
-    typeof(*(ptr)) *_d = (ptr);                         \
-    __raw_copy_from_guest(_d, _s+(off), sizeof(*_d)*(nr));\
+#define __copy_from_guest_offset(ptr, hnd, off, nr) ({          \
+    const typeof(*(ptr)) *_s = (hnd).p;                         \
+    typeof(*(ptr)) *_d = (ptr);                                 \
+    __raw_copy_from_guest(_d, _s + (off), sizeof (*_d) * (nr)); \
 })
 
 #define __copy_field_to_guest(hnd, ptr, field) ({       \
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu Jul 30 18:10:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jul 2020 18:10:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1D06-0001ef-K2; Thu, 30 Jul 2020 18:09:54 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=IK5u=BJ=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1k1D04-0001eU-NN
 for xen-devel@lists.xenproject.org; Thu, 30 Jul 2020 18:09:52 +0000
X-Inumbo-ID: dbb33562-d28f-11ea-8db1-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id dbb33562-d28f-11ea-8db1-bc764e2007e4;
 Thu, 30 Jul 2020 18:09:51 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:
 Sender:Reply-To:MIME-Version:Content-Type:Content-Transfer-Encoding:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=iYWCfYUjsOmxaTuM6icRxDZYgMO7DF187Z293r7Hn8E=; b=Gyl0rXk/oqEDDVrIffBibnI6nq
 VBYU1SMoqAkLW3KE23q4IH2ORVnU3dvigW+Gci2UGXRToRPMN6Zbdd37MFGQHWSRoRwuC8vxa89bP
 g9LL4iNaYdEB7bPti7CuqaPrT7EvLtTMOjJ2Yrd/MCD6Iq3j9o1CukPBca2/Ndw6Cbxs=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1k1D02-0007cj-5Y; Thu, 30 Jul 2020 18:09:50 +0000
Received: from 54-240-197-227.amazon.com ([54.240.197.227]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1k1CjJ-0004nV-0l; Thu, 30 Jul 2020 17:52:33 +0000
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v2 6/7] xen/guest_access: Consolidate guest access helpers in
 xen/guest_access.h
Date: Thu, 30 Jul 2020 18:52:11 +0100
Message-Id: <20200730175213.30679-12-julien@xen.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20200730175213.30679-1-julien@xen.org>
References: <20200730175213.30679-1-julien@xen.org>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, julien@xen.org,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Julien Grall <jgrall@amazon.com>, Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Julien Grall <jgrall@amazon.com>

Most of the helpers to access guest memory are implemented the same way
on Arm and x86. The only differences are:
    - guest_handle_{from, to}_param(): while on x86 XEN_GUEST_HANDLE()
      and XEN_GUEST_HANDLE_PARAM() are the same, they are not on Arm. It
      is still fine to use the Arm implementation on x86.
    - __clear_guest_offset(): Interestingly, the prototypes do not match
      between x86 and Arm. However, the Arm one is bogus, so the x86
      implementation can be used.
    - guest_handle{,_subrange}_okay(): They validly differ because Arm
      only supports auto-translated guests, and therefore handles are
      always valid.

In the past, the ia64 and ppc64 ports used a different model to access
guest parameters. They are long gone now.

Given that Xen currently only supports two architectures, it is too
soon to introduce an asm-generic directory, as it would not be possible
to differentiate it from the existing xen/ directory. If/when there is
a third port, we can decide to create the new directory if that port
uses a different way to access guest parameters.

For now, consolidate it in xen/guest_access.h.

While it would be possible to adjust the coding style at the same time,
this is left for a follow-up patch so 'diff' can be used to check that
the consolidation was done correctly.

Signed-off-by: Julien Grall <jgrall@amazon.com>

---
    Changes in v2:
        - Expand the commit message explaining why asm-generic is not
        created.
---
 xen/include/asm-arm/guest_access.h | 114 ---------------------------
 xen/include/asm-x86/guest_access.h | 108 --------------------------
 xen/include/xen/guest_access.h     | 119 +++++++++++++++++++++++++++++
 3 files changed, 119 insertions(+), 222 deletions(-)

diff --git a/xen/include/asm-arm/guest_access.h b/xen/include/asm-arm/guest_access.h
index b9a89c495527..53766386d3d8 100644
--- a/xen/include/asm-arm/guest_access.h
+++ b/xen/include/asm-arm/guest_access.h
@@ -23,88 +23,6 @@ int access_guest_memory_by_ipa(struct domain *d, paddr_t ipa, void *buf,
 #define __raw_copy_from_guest raw_copy_from_guest
 #define __raw_clear_guest raw_clear_guest
 
-/* Remainder copied from x86 -- could be common? */
-
-/* Is the guest handle a NULL reference? */
-#define guest_handle_is_null(hnd)        ((hnd).p == NULL)
-
-/* Offset the given guest handle into the array it refers to. */
-#define guest_handle_add_offset(hnd, nr) ((hnd).p += (nr))
-#define guest_handle_subtract_offset(hnd, nr) ((hnd).p -= (nr))
-
-/* Cast a guest handle (either XEN_GUEST_HANDLE or XEN_GUEST_HANDLE_PARAM)
- * to the specified type of XEN_GUEST_HANDLE_PARAM. */
-#define guest_handle_cast(hnd, type) ({         \
-    type *_x = (hnd).p;                         \
-    (XEN_GUEST_HANDLE_PARAM(type)) { _x };            \
-})
-
-/* Convert a XEN_GUEST_HANDLE to XEN_GUEST_HANDLE_PARAM */
-#define guest_handle_to_param(hnd, type) ({                  \
-    typeof((hnd).p) _x = (hnd).p;                            \
-    XEN_GUEST_HANDLE_PARAM(type) _y = { _x };                \
-    /* type checking: make sure that the pointers inside     \
-     * XEN_GUEST_HANDLE and XEN_GUEST_HANDLE_PARAM are of    \
-     * the same type, then return hnd */                     \
-    (void)(&_x == &_y.p);                                    \
-    _y;                                                      \
-})
-
-#define guest_handle_for_field(hnd, type, fld)          \
-    ((XEN_GUEST_HANDLE(type)) { &(hnd).p->fld })
-
-#define guest_handle_from_ptr(ptr, type)        \
-    ((XEN_GUEST_HANDLE_PARAM(type)) { (type *)ptr })
-#define const_guest_handle_from_ptr(ptr, type)  \
-    ((XEN_GUEST_HANDLE_PARAM(const_##type)) { (const type *)ptr })
-
-/*
- * Copy an array of objects to guest context via a guest handle,
- * specifying an offset into the guest array.
- */
-#define copy_to_guest_offset(hnd, off, ptr, nr) ({      \
-    const typeof(*(ptr)) *_s = (ptr);                   \
-    char (*_d)[sizeof(*_s)] = (void *)(hnd).p;          \
-    /* Check that the handle is not for a const type */ \
-    void *__maybe_unused _t = (hnd).p;                  \
-    (void)((hnd).p == _s);                              \
-    raw_copy_to_guest(_d+(off), _s, sizeof(*_s)*(nr));  \
-})
-
-/*
- * Clear an array of objects in guest context via a guest handle,
- * specifying an offset into the guest array.
- */
-#define clear_guest_offset(hnd, off, nr) ({    \
-    void *_d = (hnd).p;                        \
-    raw_clear_guest(_d+(off), nr);             \
-})
-
-/*
- * Copy an array of objects from guest context via a guest handle,
- * specifying an offset into the guest array.
- */
-#define copy_from_guest_offset(ptr, hnd, off, nr) ({    \
-    const typeof(*(ptr)) *_s = (hnd).p;                 \
-    typeof(*(ptr)) *_d = (ptr);                         \
-    raw_copy_from_guest(_d, _s+(off), sizeof(*_d)*(nr));\
-})
-
-/* Copy sub-field of a structure to guest context via a guest handle. */
-#define copy_field_to_guest(hnd, ptr, field) ({         \
-    const typeof(&(ptr)->field) _s = &(ptr)->field;     \
-    void *_d = &(hnd).p->field;                         \
-    (void)(&(hnd).p->field == _s);                      \
-    raw_copy_to_guest(_d, _s, sizeof(*_s));             \
-})
-
-/* Copy sub-field of a structure from guest context via a guest handle. */
-#define copy_field_from_guest(ptr, hnd, field) ({       \
-    const typeof(&(ptr)->field) _s = &(hnd).p->field;   \
-    typeof(&(ptr)->field) _d = &(ptr)->field;           \
-    raw_copy_from_guest(_d, _s, sizeof(*_d));           \
-})
-
 /*
  * Pre-validate a guest handle.
  * Allows use of faster __copy_* functions.
@@ -113,38 +31,6 @@ int access_guest_memory_by_ipa(struct domain *d, paddr_t ipa, void *buf,
 #define guest_handle_okay(hnd, nr) (1)
 #define guest_handle_subrange_okay(hnd, first, last) (1)
 
-#define __copy_to_guest_offset(hnd, off, ptr, nr) ({    \
-    const typeof(*(ptr)) *_s = (ptr);                   \
-    char (*_d)[sizeof(*_s)] = (void *)(hnd).p;          \
-    /* Check that the handle is not for a const type */ \
-    void *__maybe_unused _t = (hnd).p;                  \
-    (void)((hnd).p == _s);                              \
-    __raw_copy_to_guest(_d+(off), _s, sizeof(*_s)*(nr));\
-})
-
-#define __clear_guest_offset(hnd, off, ptr, nr) ({      \
-    __raw_clear_guest(_d+(off), nr);  \
-})
-
-#define __copy_from_guest_offset(ptr, hnd, off, nr) ({  \
-    const typeof(*(ptr)) *_s = (hnd).p;                 \
-    typeof(*(ptr)) *_d = (ptr);                         \
-    __raw_copy_from_guest(_d, _s+(off), sizeof(*_d)*(nr));\
-})
-
-#define __copy_field_to_guest(hnd, ptr, field) ({       \
-    const typeof(&(ptr)->field) _s = &(ptr)->field;     \
-    void *_d = &(hnd).p->field;                         \
-    (void)(&(hnd).p->field == _s);                      \
-    __raw_copy_to_guest(_d, _s, sizeof(*_s));           \
-})
-
-#define __copy_field_from_guest(ptr, hnd, field) ({     \
-    const typeof(&(ptr)->field) _s = &(hnd).p->field;   \
-    typeof(&(ptr)->field) _d = &(ptr)->field;           \
-    __raw_copy_from_guest(_d, _s, sizeof(*_d));         \
-})
-
 #endif /* __ASM_ARM_GUEST_ACCESS_H__ */
 /*
  * Local variables:
diff --git a/xen/include/asm-x86/guest_access.h b/xen/include/asm-x86/guest_access.h
index 3ffde205f6a1..08c9fbbc78e1 100644
--- a/xen/include/asm-x86/guest_access.h
+++ b/xen/include/asm-x86/guest_access.h
@@ -38,81 +38,6 @@
      clear_user_hvm((dst), (len)) :             \
      clear_user((dst), (len)))
 
-/* Is the guest handle a NULL reference? */
-#define guest_handle_is_null(hnd)        ((hnd).p == NULL)
-
-/* Offset the given guest handle into the array it refers to. */
-#define guest_handle_add_offset(hnd, nr) ((hnd).p += (nr))
-#define guest_handle_subtract_offset(hnd, nr) ((hnd).p -= (nr))
-
-/* Cast a guest handle (either XEN_GUEST_HANDLE or XEN_GUEST_HANDLE_PARAM)
- * to the specified type of XEN_GUEST_HANDLE_PARAM. */
-#define guest_handle_cast(hnd, type) ({         \
-    type *_x = (hnd).p;                         \
-    (XEN_GUEST_HANDLE_PARAM(type)) { _x };            \
-})
-
-/* Convert a XEN_GUEST_HANDLE to XEN_GUEST_HANDLE_PARAM */
-#define guest_handle_to_param(hnd, type) ({                  \
-    /* type checking: make sure that the pointers inside     \
-     * XEN_GUEST_HANDLE and XEN_GUEST_HANDLE_PARAM are of    \
-     * the same type, then return hnd */                     \
-    (void)((typeof(&(hnd).p)) 0 ==                           \
-        (typeof(&((XEN_GUEST_HANDLE_PARAM(type)) {}).p)) 0); \
-    (hnd);                                                   \
-})
-
-#define guest_handle_for_field(hnd, type, fld)          \
-    ((XEN_GUEST_HANDLE(type)) { &(hnd).p->fld })
-
-#define guest_handle_from_ptr(ptr, type)        \
-    ((XEN_GUEST_HANDLE_PARAM(type)) { (type *)ptr })
-#define const_guest_handle_from_ptr(ptr, type)  \
-    ((XEN_GUEST_HANDLE_PARAM(const_##type)) { (const type *)ptr })
-
-/*
- * Copy an array of objects to guest context via a guest handle,
- * specifying an offset into the guest array.
- */
-#define copy_to_guest_offset(hnd, off, ptr, nr) ({      \
-    const typeof(*(ptr)) *_s = (ptr);                   \
-    char (*_d)[sizeof(*_s)] = (void *)(hnd).p;          \
-    /* Check that the handle is not for a const type */ \
-    void *__maybe_unused _t = (hnd).p;                  \
-    (void)((hnd).p == _s);                              \
-    raw_copy_to_guest(_d+(off), _s, sizeof(*_s)*(nr));  \
-})
-
-/*
- * Copy an array of objects from guest context via a guest handle,
- * specifying an offset into the guest array.
- */
-#define copy_from_guest_offset(ptr, hnd, off, nr) ({    \
-    const typeof(*(ptr)) *_s = (hnd).p;                 \
-    typeof(*(ptr)) *_d = (ptr);                         \
-    raw_copy_from_guest(_d, _s+(off), sizeof(*_d)*(nr));\
-})
-
-#define clear_guest_offset(hnd, off, nr) ({    \
-    void *_d = (hnd).p;                        \
-    raw_clear_guest(_d+(off), nr);             \
-})
-
-/* Copy sub-field of a structure to guest context via a guest handle. */
-#define copy_field_to_guest(hnd, ptr, field) ({         \
-    const typeof(&(ptr)->field) _s = &(ptr)->field;     \
-    void *_d = &(hnd).p->field;                         \
-    (void)(&(hnd).p->field == _s);                      \
-    raw_copy_to_guest(_d, _s, sizeof(*_s));             \
-})
-
-/* Copy sub-field of a structure from guest context via a guest handle. */
-#define copy_field_from_guest(ptr, hnd, field) ({       \
-    const typeof(&(ptr)->field) _s = &(hnd).p->field;   \
-    typeof(&(ptr)->field) _d = &(ptr)->field;           \
-    raw_copy_from_guest(_d, _s, sizeof(*_d));           \
-})
-
 /*
  * Pre-validate a guest handle.
  * Allows use of faster __copy_* functions.
@@ -126,39 +51,6 @@
                      (last)-(first)+1,                  \
                      sizeof(*(hnd).p)))
 
-#define __copy_to_guest_offset(hnd, off, ptr, nr) ({    \
-    const typeof(*(ptr)) *_s = (ptr);                   \
-    char (*_d)[sizeof(*_s)] = (void *)(hnd).p;          \
-    /* Check that the handle is not for a const type */ \
-    void *__maybe_unused _t = (hnd).p;                  \
-    (void)((hnd).p == _s);                              \
-    __raw_copy_to_guest(_d+(off), _s, sizeof(*_s)*(nr));\
-})
-
-#define __copy_from_guest_offset(ptr, hnd, off, nr) ({  \
-    const typeof(*(ptr)) *_s = (hnd).p;                 \
-    typeof(*(ptr)) *_d = (ptr);                         \
-    __raw_copy_from_guest(_d, _s+(off), sizeof(*_d)*(nr));\
-})
-
-#define __clear_guest_offset(hnd, off, nr) ({    \
-    void *_d = (hnd).p;                          \
-    __raw_clear_guest(_d+(off), nr);             \
-})
-
-#define __copy_field_to_guest(hnd, ptr, field) ({       \
-    const typeof(&(ptr)->field) _s = &(ptr)->field;     \
-    void *_d = &(hnd).p->field;                         \
-    (void)(&(hnd).p->field == _s);                      \
-    __raw_copy_to_guest(_d, _s, sizeof(*_s));           \
-})
-
-#define __copy_field_from_guest(ptr, hnd, field) ({     \
-    const typeof(&(ptr)->field) _s = &(hnd).p->field;   \
-    typeof(&(ptr)->field) _d = &(ptr)->field;           \
-    __raw_copy_from_guest(_d, _s, sizeof(*_d));         \
-})
-
 #endif /* __ASM_X86_GUEST_ACCESS_H__ */
 /*
  * Local variables:
diff --git a/xen/include/xen/guest_access.h b/xen/include/xen/guest_access.h
index ef9aaa3efcfe..4957b8d1f2b8 100644
--- a/xen/include/xen/guest_access.h
+++ b/xen/include/xen/guest_access.h
@@ -11,6 +11,86 @@
 #include <xen/types.h>
 #include <public/xen.h>
 
+/* Is the guest handle a NULL reference? */
+#define guest_handle_is_null(hnd)        ((hnd).p == NULL)
+
+/* Offset the given guest handle into the array it refers to. */
+#define guest_handle_add_offset(hnd, nr) ((hnd).p += (nr))
+#define guest_handle_subtract_offset(hnd, nr) ((hnd).p -= (nr))
+
+/* Cast a guest handle (either XEN_GUEST_HANDLE or XEN_GUEST_HANDLE_PARAM)
+ * to the specified type of XEN_GUEST_HANDLE_PARAM. */
+#define guest_handle_cast(hnd, type) ({         \
+    type *_x = (hnd).p;                         \
+    (XEN_GUEST_HANDLE_PARAM(type)) { _x };            \
+})
+
+/* Cast a XEN_GUEST_HANDLE to XEN_GUEST_HANDLE_PARAM */
+#define guest_handle_to_param(hnd, type) ({                  \
+    typeof((hnd).p) _x = (hnd).p;                            \
+    XEN_GUEST_HANDLE_PARAM(type) _y = { _x };                \
+    /* type checking: make sure that the pointers inside     \
+     * XEN_GUEST_HANDLE and XEN_GUEST_HANDLE_PARAM are of    \
+     * the same type, then return hnd */                     \
+    (void)(&_x == &_y.p);                                    \
+    _y;                                                      \
+})
+
+#define guest_handle_for_field(hnd, type, fld)          \
+    ((XEN_GUEST_HANDLE(type)) { &(hnd).p->fld })
+
+#define guest_handle_from_ptr(ptr, type)        \
+    ((XEN_GUEST_HANDLE_PARAM(type)) { (type *)ptr })
+#define const_guest_handle_from_ptr(ptr, type)  \
+    ((XEN_GUEST_HANDLE_PARAM(const_##type)) { (const type *)ptr })
+
+/*
+ * Copy an array of objects to guest context via a guest handle,
+ * specifying an offset into the guest array.
+ */
+#define copy_to_guest_offset(hnd, off, ptr, nr) ({      \
+    const typeof(*(ptr)) *_s = (ptr);                   \
+    char (*_d)[sizeof(*_s)] = (void *)(hnd).p;          \
+    /* Check that the handle is not for a const type */ \
+    void *__maybe_unused _t = (hnd).p;                  \
+    (void)((hnd).p == _s);                              \
+    raw_copy_to_guest(_d+(off), _s, sizeof(*_s)*(nr));  \
+})
+
+/*
+ * Clear an array of objects in guest context via a guest handle,
+ * specifying an offset into the guest array.
+ */
+#define clear_guest_offset(hnd, off, nr) ({    \
+    void *_d = (hnd).p;                        \
+    raw_clear_guest(_d+(off), nr);             \
+})
+
+/*
+ * Copy an array of objects from guest context via a guest handle,
+ * specifying an offset into the guest array.
+ */
+#define copy_from_guest_offset(ptr, hnd, off, nr) ({    \
+    const typeof(*(ptr)) *_s = (hnd).p;                 \
+    typeof(*(ptr)) *_d = (ptr);                         \
+    raw_copy_from_guest(_d, _s+(off), sizeof(*_d)*(nr));\
+})
+
+/* Copy sub-field of a structure to guest context via a guest handle. */
+#define copy_field_to_guest(hnd, ptr, field) ({         \
+    const typeof(&(ptr)->field) _s = &(ptr)->field;     \
+    void *_d = &(hnd).p->field;                         \
+    (void)(&(hnd).p->field == _s);                      \
+    raw_copy_to_guest(_d, _s, sizeof(*_s));             \
+})
+
+/* Copy sub-field of a structure from guest context via a guest handle. */
+#define copy_field_from_guest(ptr, hnd, field) ({       \
+    const typeof(&(ptr)->field) _s = &(hnd).p->field;   \
+    typeof(&(ptr)->field) _d = &(ptr)->field;           \
+    raw_copy_from_guest(_d, _s, sizeof(*_d));           \
+})
+
 #define copy_to_guest(hnd, ptr, nr)                     \
     copy_to_guest_offset(hnd, 0, ptr, nr)
 
@@ -20,6 +100,45 @@
 #define clear_guest(hnd, nr)                            \
     clear_guest_offset(hnd, 0, nr)
 
+/*
+ * The __copy_* functions should only be used after the guest handle has
+ * been pre-validated via guest_handle_okay() and
+ * guest_handle_subrange_okay().
+ */
+
+#define __copy_to_guest_offset(hnd, off, ptr, nr) ({    \
+    const typeof(*(ptr)) *_s = (ptr);                   \
+    char (*_d)[sizeof(*_s)] = (void *)(hnd).p;          \
+    /* Check that the handle is not for a const type */ \
+    void *__maybe_unused _t = (hnd).p;                  \
+    (void)((hnd).p == _s);                              \
+    __raw_copy_to_guest(_d+(off), _s, sizeof(*_s)*(nr));\
+})
+
+#define __clear_guest_offset(hnd, off, nr) ({    \
+    void *_d = (hnd).p;                          \
+    __raw_clear_guest(_d + (off), nr);           \
+})
+
+#define __copy_from_guest_offset(ptr, hnd, off, nr) ({  \
+    const typeof(*(ptr)) *_s = (hnd).p;                 \
+    typeof(*(ptr)) *_d = (ptr);                         \
+    __raw_copy_from_guest(_d, _s+(off), sizeof(*_d)*(nr));\
+})
+
+#define __copy_field_to_guest(hnd, ptr, field) ({       \
+    const typeof(&(ptr)->field) _s = &(ptr)->field;     \
+    void *_d = &(hnd).p->field;                         \
+    (void)(&(hnd).p->field == _s);                      \
+    __raw_copy_to_guest(_d, _s, sizeof(*_s));           \
+})
+
+#define __copy_field_from_guest(ptr, hnd, field) ({     \
+    const typeof(&(ptr)->field) _s = &(hnd).p->field;   \
+    typeof(&(ptr)->field) _d = &(ptr)->field;           \
+    __raw_copy_from_guest(_d, _s, sizeof(*_d));         \
+})
+
 #define __copy_to_guest(hnd, ptr, nr)                   \
     __copy_to_guest_offset(hnd, 0, ptr, nr)
 
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu Jul 30 18:10:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jul 2020 18:10:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1D0E-0001p9-AA; Thu, 30 Jul 2020 18:10:02 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=IK5u=BJ=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1k1D0D-0001eP-AE
 for xen-devel@lists.xenproject.org; Thu, 30 Jul 2020 18:10:01 +0000
X-Inumbo-ID: dd9a0340-d28f-11ea-ab01-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id dd9a0340-d28f-11ea-ab01-12813bfff9fa;
 Thu, 30 Jul 2020 18:09:52 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:
 Sender:Reply-To:MIME-Version:Content-Type:Content-Transfer-Encoding:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=JvdgUN4G+hJQeSgeZrRHkKlqRzJx599CQaDONH0J3Tw=; b=G49/hj4voipbAEp54p6KOONjVL
 EBBtQPz9TM37BQARWen0u2rGIiCOM1lKpy75LRQzyev8VMWqqlUICDLopFXH0SngWA8QF4oU9bjWr
 ids1sQqsfB7kGAiv53cNyyeC0f6Q1iXu+IPIEECd1++ggXCiZHOgzVpjmI1hZYI/nAjw=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1k1D02-0007cn-95; Thu, 30 Jul 2020 18:09:50 +0000
Received: from 54-240-197-227.amazon.com ([54.240.197.227]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1k1CjH-0004nV-7m; Thu, 30 Jul 2020 17:52:31 +0000
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v2 5/7] xen: include xen/guest_access.h rather than
 asm/guest_access.h
Date: Thu, 30 Jul 2020 18:52:10 +0100
Message-Id: <20200730175213.30679-11-julien@xen.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20200730175213.30679-1-julien@xen.org>
References: <20200730175213.30679-1-julien@xen.org>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin Tian <kevin.tian@intel.com>,
 Stefano Stabellini <sstabellini@kernel.org>, julien@xen.org,
 Jun Nakajima <jun.nakajima@intel.com>, Wei Liu <wl@xen.org>,
 Paul Durrant <paul@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Julien Grall <jgrall@amazon.com>, Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Julien Grall <jgrall@amazon.com>

Only a few places actually include asm/guest_access.h. While this
is fine today, a follow-up patch will want to move most of the helpers
from asm/guest_access.h to xen/guest_access.h.

To prepare the move, everyone should include xen/guest_access.h rather
than asm/guest_access.h.

Interestingly, asm-arm/guest_access.h includes xen/guest_access.h. The
inclusion is now removed as no one but the latter should include the
former.

Signed-off-by: Julien Grall <jgrall@amazon.com>

---
    Changes in v2:
        - Remove some changes that weren't meant to be here.
---
 xen/arch/arm/decode.c                | 2 +-
 xen/arch/arm/domain.c                | 2 +-
 xen/arch/arm/guest_walk.c            | 3 ++-
 xen/arch/arm/guestcopy.c             | 2 +-
 xen/arch/arm/vgic-v3-its.c           | 2 +-
 xen/arch/x86/hvm/svm/svm.c           | 2 +-
 xen/arch/x86/hvm/viridian/viridian.c | 2 +-
 xen/arch/x86/hvm/vmx/vmx.c           | 2 +-
 xen/common/libelf/libelf-loader.c    | 2 +-
 xen/include/asm-arm/guest_access.h   | 1 -
 xen/lib/x86/private.h                | 2 +-
 11 files changed, 11 insertions(+), 11 deletions(-)

diff --git a/xen/arch/arm/decode.c b/xen/arch/arm/decode.c
index 144793c8cea0..792c2e92a7eb 100644
--- a/xen/arch/arm/decode.c
+++ b/xen/arch/arm/decode.c
@@ -17,12 +17,12 @@
  * GNU General Public License for more details.
  */
 
+#include <xen/guest_access.h>
 #include <xen/lib.h>
 #include <xen/sched.h>
 #include <xen/types.h>
 
 #include <asm/current.h>
-#include <asm/guest_access.h>
 
 #include "decode.h"
 
diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 31169326b2e3..9258f6d3faa2 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -12,6 +12,7 @@
 #include <xen/bitops.h>
 #include <xen/errno.h>
 #include <xen/grant_table.h>
+#include <xen/guest_access.h>
 #include <xen/hypercall.h>
 #include <xen/init.h>
 #include <xen/lib.h>
@@ -26,7 +27,6 @@
 #include <asm/current.h>
 #include <asm/event.h>
 #include <asm/gic.h>
-#include <asm/guest_access.h>
 #include <asm/guest_atomics.h>
 #include <asm/irq.h>
 #include <asm/p2m.h>
diff --git a/xen/arch/arm/guest_walk.c b/xen/arch/arm/guest_walk.c
index a1cdd7f4afea..b4496c4c86c6 100644
--- a/xen/arch/arm/guest_walk.c
+++ b/xen/arch/arm/guest_walk.c
@@ -16,8 +16,9 @@
  */
 
 #include <xen/domain_page.h>
+#include <xen/guest_access.h>
 #include <xen/sched.h>
-#include <asm/guest_access.h>
+
 #include <asm/guest_walk.h>
 #include <asm/short-desc.h>
 
diff --git a/xen/arch/arm/guestcopy.c b/xen/arch/arm/guestcopy.c
index c8023e2bca5d..32681606d8fc 100644
--- a/xen/arch/arm/guestcopy.c
+++ b/xen/arch/arm/guestcopy.c
@@ -1,10 +1,10 @@
 #include <xen/domain_page.h>
+#include <xen/guest_access.h>
 #include <xen/lib.h>
 #include <xen/mm.h>
 #include <xen/sched.h>
 
 #include <asm/current.h>
-#include <asm/guest_access.h>
 
 #define COPY_flush_dcache   (1U << 0)
 #define COPY_from_guest     (0U << 1)
diff --git a/xen/arch/arm/vgic-v3-its.c b/xen/arch/arm/vgic-v3-its.c
index 6e153c698d56..58d939b85f92 100644
--- a/xen/arch/arm/vgic-v3-its.c
+++ b/xen/arch/arm/vgic-v3-its.c
@@ -32,6 +32,7 @@
 #include <xen/bitops.h>
 #include <xen/config.h>
 #include <xen/domain_page.h>
+#include <xen/guest_access.h>
 #include <xen/lib.h>
 #include <xen/init.h>
 #include <xen/softirq.h>
@@ -39,7 +40,6 @@
 #include <xen/sched.h>
 #include <xen/sizes.h>
 #include <asm/current.h>
-#include <asm/guest_access.h>
 #include <asm/mmio.h>
 #include <asm/gic_v3_defs.h>
 #include <asm/gic_v3_its.h>
diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
index ca3bbfcbb355..7301f3cd6004 100644
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -16,6 +16,7 @@
  * this program; If not, see <http://www.gnu.org/licenses/>.
  */
 
+#include <xen/guest_access.h>
 #include <xen/init.h>
 #include <xen/lib.h>
 #include <xen/trace.h>
@@ -34,7 +35,6 @@
 #include <asm/cpufeature.h>
 #include <asm/processor.h>
 #include <asm/amd.h>
-#include <asm/guest_access.h>
 #include <asm/debugreg.h>
 #include <asm/msr.h>
 #include <asm/i387.h>
diff --git a/xen/arch/x86/hvm/viridian/viridian.c b/xen/arch/x86/hvm/viridian/viridian.c
index 977c1bc54fad..dc7183a54627 100644
--- a/xen/arch/x86/hvm/viridian/viridian.c
+++ b/xen/arch/x86/hvm/viridian/viridian.c
@@ -5,12 +5,12 @@
  * Hypervisor Top Level Functional Specification for more information.
  */
 
+#include <xen/guest_access.h>
 #include <xen/sched.h>
 #include <xen/version.h>
 #include <xen/hypercall.h>
 #include <xen/domain_page.h>
 #include <xen/param.h>
-#include <asm/guest_access.h>
 #include <asm/guest/hyperv-tlfs.h>
 #include <asm/paging.h>
 #include <asm/p2m.h>
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index eb54aadfbafb..cb5df1e81c9c 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -15,6 +15,7 @@
  * this program; If not, see <http://www.gnu.org/licenses/>.
  */
 
+#include <xen/guest_access.h>
 #include <xen/init.h>
 #include <xen/lib.h>
 #include <xen/param.h>
@@ -31,7 +32,6 @@
 #include <asm/regs.h>
 #include <asm/cpufeature.h>
 #include <asm/processor.h>
-#include <asm/guest_access.h>
 #include <asm/debugreg.h>
 #include <asm/msr.h>
 #include <asm/p2m.h>
diff --git a/xen/common/libelf/libelf-loader.c b/xen/common/libelf/libelf-loader.c
index 0f468727d04a..629cc0d3e611 100644
--- a/xen/common/libelf/libelf-loader.c
+++ b/xen/common/libelf/libelf-loader.c
@@ -16,7 +16,7 @@
  */
 
 #ifdef __XEN__
-#include <asm/guest_access.h>
+#include <xen/guest_access.h>
 #endif
 
 #include "libelf-private.h"
diff --git a/xen/include/asm-arm/guest_access.h b/xen/include/asm-arm/guest_access.h
index 31b9f03f0015..b9a89c495527 100644
--- a/xen/include/asm-arm/guest_access.h
+++ b/xen/include/asm-arm/guest_access.h
@@ -1,7 +1,6 @@
 #ifndef __ASM_ARM_GUEST_ACCESS_H__
 #define __ASM_ARM_GUEST_ACCESS_H__
 
-#include <xen/guest_access.h>
 #include <xen/errno.h>
 #include <xen/sched.h>
 
diff --git a/xen/lib/x86/private.h b/xen/lib/x86/private.h
index b793181464f3..2d53bd3ced23 100644
--- a/xen/lib/x86/private.h
+++ b/xen/lib/x86/private.h
@@ -4,12 +4,12 @@
 #ifdef __XEN__
 
 #include <xen/bitops.h>
+#include <xen/guest_access.h>
 #include <xen/kernel.h>
 #include <xen/lib.h>
 #include <xen/nospec.h>
 #include <xen/types.h>
 
-#include <asm/guest_access.h>
 #include <asm/msr-index.h>
 
 #define copy_to_buffer_offset copy_to_guest_offset
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu Jul 30 18:18:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jul 2020 18:18:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1D8W-0002pf-7c; Thu, 30 Jul 2020 18:18:36 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=IK5u=BJ=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1k1D8U-0002pV-LK
 for xen-devel@lists.xenproject.org; Thu, 30 Jul 2020 18:18:34 +0000
X-Inumbo-ID: 1423ce86-d291-11ea-8db3-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1423ce86-d291-11ea-8db3-bc764e2007e4;
 Thu, 30 Jul 2020 18:18:33 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:
 Sender:Reply-To:MIME-Version:Content-Type:Content-Transfer-Encoding:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=YMp6yPrLfWKrTtFlQljFXWPvFCrq7jJMWfWyLnEct0g=; b=uE0pJtyWHqtPs7wVr0qkuSn8Rl
 gdYFO9uCCvn20fWlhFQT/bVKL8Mw27xfQRp8o65w1xxXk+gcYRLEmVLXofjTGmwRePRJAFze89ip6
 6M8lJO7nUzmynUrMu0GZhhYp6Rv8Ub0/Do3SkAddsGvUYwg50GvoojExthBpMlPdaimM=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1k1D8T-0007p0-3b; Thu, 30 Jul 2020 18:18:33 +0000
Received: from 54-240-197-227.amazon.com ([54.240.197.227]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1k1D8S-0006Uf-Q2; Thu, 30 Jul 2020 18:18:33 +0000
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [RESEND][PATCH v2 1/7] xen/guest_access: Add emacs magics
Date: Thu, 30 Jul 2020 19:18:21 +0100
Message-Id: <20200730181827.1670-2-julien@xen.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20200730181827.1670-1-julien@xen.org>
References: <20200730181827.1670-1-julien@xen.org>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, julien@xen.org,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Julien Grall <jgrall@amazon.com>, Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Julien Grall <jgrall@amazon.com>

Add emacs magics for xen/guest_access.h and
asm-x86/guest_access.h.

Signed-off-by: Julien Grall <jgrall@amazon.com>
Acked-by: Jan Beulich <jbeulich@suse.com>

---
    Changes in v2:
        - Remove the word "missing"
---
 xen/include/asm-x86/guest_access.h | 8 ++++++++
 xen/include/xen/guest_access.h     | 8 ++++++++
 2 files changed, 16 insertions(+)

diff --git a/xen/include/asm-x86/guest_access.h b/xen/include/asm-x86/guest_access.h
index 2be3577bd340..3ffde205f6a1 100644
--- a/xen/include/asm-x86/guest_access.h
+++ b/xen/include/asm-x86/guest_access.h
@@ -160,3 +160,11 @@
 })
 
 #endif /* __ASM_X86_GUEST_ACCESS_H__ */
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/xen/guest_access.h b/xen/include/xen/guest_access.h
index 09989df819ce..ef9aaa3efcfe 100644
--- a/xen/include/xen/guest_access.h
+++ b/xen/include/xen/guest_access.h
@@ -33,3 +33,11 @@ char *safe_copy_string_from_guest(XEN_GUEST_HANDLE(char) u_buf,
                                   size_t size, size_t max_size);
 
 #endif /* __XEN_GUEST_ACCESS_H__ */
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu Jul 30 18:18:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jul 2020 18:18:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1D8W-0002pl-GB; Thu, 30 Jul 2020 18:18:36 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=IK5u=BJ=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1k1D8V-0002pa-9Y
 for xen-devel@lists.xenproject.org; Thu, 30 Jul 2020 18:18:35 +0000
X-Inumbo-ID: 14495f5c-d291-11ea-ab02-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 14495f5c-d291-11ea-ab02-12813bfff9fa;
 Thu, 30 Jul 2020 18:18:34 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
 MIME-Version:Content-Type:Content-Transfer-Encoding:Content-ID:
 Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc
 :Resent-Message-ID:In-Reply-To:References:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=AgySXafvLKDrFHZwhVYC/CtvL9daRvYVrDlQBaY611s=; b=zV84fcOaV5bkKlPSeIVe8F5SXr
 H8il4m3wdW5J2ffflm5PDTsP9jQYPZX8QIKHphRMhqAE6NtT3tMw44i0VqVFX/vqz+3MJrYVsK92W
 4lLA8MYbuVhAXG5MJQcVh2b/FWpfWAyLrly58wNoJMB7ESnsZ2TvutoUwraqxMXf2aCg=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1k1D8R-0007oy-MB; Thu, 30 Jul 2020 18:18:31 +0000
Received: from 54-240-197-227.amazon.com ([54.240.197.227]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1k1D8R-0006Uf-6t; Thu, 30 Jul 2020 18:18:31 +0000
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [RESEND][PATCH v2 0/7] xen: Consolidate asm-*/guest_access.h in
 xen/guest_access.h
Date: Thu, 30 Jul 2020 19:18:20 +0100
Message-Id: <20200730181827.1670-1-julien@xen.org>
X-Mailer: git-send-email 2.17.1
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin Tian <kevin.tian@intel.com>,
 Stefano Stabellini <sstabellini@kernel.org>, julien@xen.org,
 Jun Nakajima <jun.nakajima@intel.com>, Wei Liu <wl@xen.org>,
 Paul Durrant <paul@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Julien Grall <jgrall@amazon.com>, Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Julien Grall <jgrall@amazon.com>

Hi all,

Many of the helpers in asm-*/guest_access.h are implemented the same
way. This series aims to avoid the duplication and implement them only
once in xen/guest_access.h.

Cheers,

Julien Grall (7):
  xen/guest_access: Add emacs magics
  xen/arm: kernel: Re-order the includes
  xen/arm: decode: Re-order the includes
  xen/arm: guestcopy: Re-order the includes
  xen: include xen/guest_access.h rather than asm/guest_access.h
  xen/guest_access: Consolidate guest access helpers in
    xen/guest_access.h
  xen/guest_access: Fix coding style in xen/guest_access.h

 xen/arch/arm/decode.c                |   7 +-
 xen/arch/arm/domain.c                |   2 +-
 xen/arch/arm/guest_walk.c            |   3 +-
 xen/arch/arm/guestcopy.c             |   5 +-
 xen/arch/arm/kernel.c                |  12 +--
 xen/arch/arm/vgic-v3-its.c           |   2 +-
 xen/arch/x86/hvm/svm/svm.c           |   2 +-
 xen/arch/x86/hvm/viridian/viridian.c |   2 +-
 xen/arch/x86/hvm/vmx/vmx.c           |   2 +-
 xen/common/libelf/libelf-loader.c    |   2 +-
 xen/include/asm-arm/guest_access.h   | 115 -----------------------
 xen/include/asm-x86/guest_access.h   | 116 ++----------------------
 xen/include/xen/guest_access.h       | 131 +++++++++++++++++++++++++++
 xen/lib/x86/private.h                |   2 +-
 14 files changed, 161 insertions(+), 242 deletions(-)

-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu Jul 30 18:18:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jul 2020 18:18:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1D8a-0002qi-P0; Thu, 30 Jul 2020 18:18:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=IK5u=BJ=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1k1D8Z-0002pV-Gv
 for xen-devel@lists.xenproject.org; Thu, 30 Jul 2020 18:18:39 +0000
X-Inumbo-ID: 148584be-d291-11ea-8db3-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 148584be-d291-11ea-8db3-bc764e2007e4;
 Thu, 30 Jul 2020 18:18:34 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:
 Sender:Reply-To:MIME-Version:Content-Type:Content-Transfer-Encoding:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=ShgvcGkxdE4OnoXDwV1go3UlHBZgZS/RIoOwLBZvFGA=; b=E+IGhPDl7danxh4SjSMTZ/xLmg
 AFaMOSYRRWeUpDl2o2vXYQMkdOsxv+Sj0sjJBx6ibdDy614jf4APgOvpj9SWysnsv/iWfK+CjOe2J
 KTC50qvJWhRmiQ3RXayIcqjjvDu79tRw3DBF7+5ktqSjjFY050jpItqPN0KwVj2Otq8M=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1k1D8U-0007p6-2z; Thu, 30 Jul 2020 18:18:34 +0000
Received: from 54-240-197-227.amazon.com ([54.240.197.227]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1k1D8T-0006Uf-Qd; Thu, 30 Jul 2020 18:18:34 +0000
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [RESEND][PATCH v2 2/7] xen/arm: kernel: Re-order the includes
Date: Thu, 30 Jul 2020 19:18:22 +0100
Message-Id: <20200730181827.1670-3-julien@xen.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20200730181827.1670-1-julien@xen.org>
References: <20200730181827.1670-1-julien@xen.org>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Julien Grall <jgrall@amazon.com>, Stefano Stabellini <sstabellini@kernel.org>,
 julien@xen.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Julien Grall <jgrall@amazon.com>

We usually list xen/ includes first, followed by asm/ includes. Within
each group, they are ordered alphabetically.

Signed-off-by: Julien Grall <jgrall@amazon.com>
---
 xen/arch/arm/kernel.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/xen/arch/arm/kernel.c b/xen/arch/arm/kernel.c
index 8eff0748367d..f95fa392af44 100644
--- a/xen/arch/arm/kernel.c
+++ b/xen/arch/arm/kernel.c
@@ -3,20 +3,20 @@
  *
  * Copyright (C) 2011 Citrix Systems, Inc.
  */
+#include <xen/domain_page.h>
 #include <xen/errno.h>
+#include <xen/gunzip.h>
 #include <xen/init.h>
 #include <xen/lib.h>
+#include <xen/libfdt/libfdt.h>
 #include <xen/mm.h>
-#include <xen/domain_page.h>
 #include <xen/sched.h>
-#include <asm/byteorder.h>
-#include <asm/setup.h>
-#include <xen/libfdt/libfdt.h>
-#include <xen/gunzip.h>
 #include <xen/vmap.h>
 
+#include <asm/byteorder.h>
 #include <asm/guest_access.h>
 #include <asm/kernel.h>
+#include <asm/setup.h>
 
 #define UIMAGE_MAGIC          0x27051956
 #define UIMAGE_NMLEN          32
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu Jul 30 18:18:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jul 2020 18:18:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1D8b-0002qz-1U; Thu, 30 Jul 2020 18:18:41 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=IK5u=BJ=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1k1D8a-0002pa-77
 for xen-devel@lists.xenproject.org; Thu, 30 Jul 2020 18:18:40 +0000
X-Inumbo-ID: 14495f5d-d291-11ea-ab02-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 14495f5d-d291-11ea-ab02-12813bfff9fa;
 Thu, 30 Jul 2020 18:18:35 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:
 Sender:Reply-To:MIME-Version:Content-Type:Content-Transfer-Encoding:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=wKQvWLwb3vbFbaN60ANqWtb/FsamZy4DJptrPdgq66M=; b=pn7ZXaZcjQAibeYNZWFfcFsK2Z
 RTSjmcqOxWTHli9tTa3+olKdMLzo2aivKM04fwEspEyayvKOMG9QvFZ+w1SDINnz9QmHd+T8SdW4C
 5+kgLWol6p1elZYd5Hs0TKPKwoqJGy92WldClCZP7MzHBpPUGuXOxZA14eJ/qQYgooZU=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1k1D8V-0007pH-4K; Thu, 30 Jul 2020 18:18:35 +0000
Received: from 54-240-197-227.amazon.com ([54.240.197.227]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1k1D8U-0006Uf-RB; Thu, 30 Jul 2020 18:18:35 +0000
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [RESEND][PATCH v2 3/7] xen/arm: decode: Re-order the includes
Date: Thu, 30 Jul 2020 19:18:23 +0100
Message-Id: <20200730181827.1670-4-julien@xen.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20200730181827.1670-1-julien@xen.org>
References: <20200730181827.1670-1-julien@xen.org>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Julien Grall <jgrall@amazon.com>, Stefano Stabellini <sstabellini@kernel.org>,
 julien@xen.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Julien Grall <jgrall@amazon.com>

We usually list xen/ includes first, followed by asm/ includes. Within
each group, they are ordered alphabetically.

Signed-off-by: Julien Grall <jgrall@amazon.com>
---
 xen/arch/arm/decode.c | 5 +++--
 xen/arch/arm/kernel.c | 2 +-
 2 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/xen/arch/arm/decode.c b/xen/arch/arm/decode.c
index 8b1e15d11892..144793c8cea0 100644
--- a/xen/arch/arm/decode.c
+++ b/xen/arch/arm/decode.c
@@ -17,11 +17,12 @@
  * GNU General Public License for more details.
  */
 
-#include <xen/types.h>
+#include <xen/lib.h>
 #include <xen/sched.h>
+#include <xen/types.h>
+
 #include <asm/current.h>
 #include <asm/guest_access.h>
-#include <xen/lib.h>
 
 #include "decode.h"
 
diff --git a/xen/arch/arm/kernel.c b/xen/arch/arm/kernel.c
index f95fa392af44..032923853f2c 100644
--- a/xen/arch/arm/kernel.c
+++ b/xen/arch/arm/kernel.c
@@ -5,6 +5,7 @@
  */
 #include <xen/domain_page.h>
 #include <xen/errno.h>
+#include <xen/guest_access.h>
 #include <xen/gunzip.h>
 #include <xen/init.h>
 #include <xen/lib.h>
@@ -14,7 +15,6 @@
 #include <xen/vmap.h>
 
 #include <asm/byteorder.h>
-#include <asm/guest_access.h>
 #include <asm/kernel.h>
 #include <asm/setup.h>
 
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu Jul 30 18:18:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jul 2020 18:18:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1D8g-0002th-F9; Thu, 30 Jul 2020 18:18:46 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=IK5u=BJ=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1k1D8f-0002pa-7L
 for xen-devel@lists.xenproject.org; Thu, 30 Jul 2020 18:18:45 +0000
X-Inumbo-ID: 15c69868-d291-11ea-ab02-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 15c69868-d291-11ea-ab02-12813bfff9fa;
 Thu, 30 Jul 2020 18:18:36 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:
 Sender:Reply-To:MIME-Version:Content-Type:Content-Transfer-Encoding:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=uz1f4JaSlPa4wvD4/wRGUINT1oVECe/gDerhHtYEvBE=; b=5U3bDfJsH8AdjuVkDCDJxPYg+B
 pqIIILZqvSLWzG9t2M0gISSISjm/klOOptR7xOcu6YoC2cRO+ijoTlXkj5OyZqUDaC61dHuVPQ1S+
 g5h70zzTgjCJf8yRdgjePcr7UfLiv3lnC2C5Zw6xxg+mEs18bT/sux0gNQZw0otb5bzY=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1k1D8W-0007pO-3l; Thu, 30 Jul 2020 18:18:36 +0000
Received: from 54-240-197-227.amazon.com ([54.240.197.227]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1k1D8V-0006Uf-Rk; Thu, 30 Jul 2020 18:18:36 +0000
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [RESEND][PATCH v2 4/7] xen/arm: guestcopy: Re-order the includes
Date: Thu, 30 Jul 2020 19:18:24 +0100
Message-Id: <20200730181827.1670-5-julien@xen.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20200730181827.1670-1-julien@xen.org>
References: <20200730181827.1670-1-julien@xen.org>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Julien Grall <jgrall@amazon.com>, Stefano Stabellini <sstabellini@kernel.org>,
 julien@xen.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Julien Grall <jgrall@amazon.com>

We usually list xen/ includes first, followed by asm/ includes. Within
each group, they are ordered alphabetically.

Signed-off-by: Julien Grall <jgrall@amazon.com>
---
 xen/arch/arm/guestcopy.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/guestcopy.c b/xen/arch/arm/guestcopy.c
index 7a0f3e9d5fc6..c8023e2bca5d 100644
--- a/xen/arch/arm/guestcopy.c
+++ b/xen/arch/arm/guestcopy.c
@@ -1,7 +1,8 @@
-#include <xen/lib.h>
 #include <xen/domain_page.h>
+#include <xen/lib.h>
 #include <xen/mm.h>
 #include <xen/sched.h>
+
 #include <asm/current.h>
 #include <asm/guest_access.h>
 
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu Jul 30 18:18:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jul 2020 18:18:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1D8k-0002wC-Oq; Thu, 30 Jul 2020 18:18:50 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=IK5u=BJ=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1k1D8k-0002pa-7O
 for xen-devel@lists.xenproject.org; Thu, 30 Jul 2020 18:18:50 +0000
X-Inumbo-ID: 17dfd4ca-d291-11ea-ab02-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 17dfd4ca-d291-11ea-ab02-12813bfff9fa;
 Thu, 30 Jul 2020 18:18:40 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:
 Sender:Reply-To:MIME-Version:Content-Type:Content-Transfer-Encoding:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=JvdgUN4G+hJQeSgeZrRHkKlqRzJx599CQaDONH0J3Tw=; b=f4Z31qjwinMf/nJEBCCCADAjdO
 ctkAW08gja6ojbJWhfpou75A2MCzNtmDwghISiC8X4o2jNvj0SdkmGH2GQvXEJWMxRv/WtfDIwLWV
 YkG32n022oYn3DMSFQbThrLxY3FbMH2ue3JWlQLXJOGsOLjtgUSToWOUT3n1quxSs0DM=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1k1D8Y-0007pZ-5t; Thu, 30 Jul 2020 18:18:38 +0000
Received: from 54-240-197-227.amazon.com ([54.240.197.227]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1k1D8X-0006Uf-Sf; Thu, 30 Jul 2020 18:18:38 +0000
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [RESEND][PATCH v2 5/7] xen: include xen/guest_access.h rather than
 asm/guest_access.h
Date: Thu, 30 Jul 2020 19:18:25 +0100
Message-Id: <20200730181827.1670-6-julien@xen.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20200730181827.1670-1-julien@xen.org>
References: <20200730181827.1670-1-julien@xen.org>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin Tian <kevin.tian@intel.com>,
 Stefano Stabellini <sstabellini@kernel.org>, julien@xen.org,
 Jun Nakajima <jun.nakajima@intel.com>, Wei Liu <wl@xen.org>,
 Paul Durrant <paul@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Julien Grall <jgrall@amazon.com>, Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Julien Grall <jgrall@amazon.com>

Only a few places are actually including asm/guest_access.h. While this
is fine today, a follow-up patch will want to move most of the helpers
from asm/guest_access.h to xen/guest_access.h.

To prepare the move, everyone should include xen/guest_access.h rather
than asm/guest_access.h.

Interestingly, asm-arm/guest_access.h includes xen/guest_access.h. That
inclusion is now removed, as nothing but the latter should include the
former.

Signed-off-by: Julien Grall <jgrall@amazon.com>

---
    Changes in v2:
        - Remove some changes that weren't meant to be here.
---
 xen/arch/arm/decode.c                | 2 +-
 xen/arch/arm/domain.c                | 2 +-
 xen/arch/arm/guest_walk.c            | 3 ++-
 xen/arch/arm/guestcopy.c             | 2 +-
 xen/arch/arm/vgic-v3-its.c           | 2 +-
 xen/arch/x86/hvm/svm/svm.c           | 2 +-
 xen/arch/x86/hvm/viridian/viridian.c | 2 +-
 xen/arch/x86/hvm/vmx/vmx.c           | 2 +-
 xen/common/libelf/libelf-loader.c    | 2 +-
 xen/include/asm-arm/guest_access.h   | 1 -
 xen/lib/x86/private.h                | 2 +-
 11 files changed, 11 insertions(+), 11 deletions(-)

diff --git a/xen/arch/arm/decode.c b/xen/arch/arm/decode.c
index 144793c8cea0..792c2e92a7eb 100644
--- a/xen/arch/arm/decode.c
+++ b/xen/arch/arm/decode.c
@@ -17,12 +17,12 @@
  * GNU General Public License for more details.
  */
 
+#include <xen/guest_access.h>
 #include <xen/lib.h>
 #include <xen/sched.h>
 #include <xen/types.h>
 
 #include <asm/current.h>
-#include <asm/guest_access.h>
 
 #include "decode.h"
 
diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 31169326b2e3..9258f6d3faa2 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -12,6 +12,7 @@
 #include <xen/bitops.h>
 #include <xen/errno.h>
 #include <xen/grant_table.h>
+#include <xen/guest_access.h>
 #include <xen/hypercall.h>
 #include <xen/init.h>
 #include <xen/lib.h>
@@ -26,7 +27,6 @@
 #include <asm/current.h>
 #include <asm/event.h>
 #include <asm/gic.h>
-#include <asm/guest_access.h>
 #include <asm/guest_atomics.h>
 #include <asm/irq.h>
 #include <asm/p2m.h>
diff --git a/xen/arch/arm/guest_walk.c b/xen/arch/arm/guest_walk.c
index a1cdd7f4afea..b4496c4c86c6 100644
--- a/xen/arch/arm/guest_walk.c
+++ b/xen/arch/arm/guest_walk.c
@@ -16,8 +16,9 @@
  */
 
 #include <xen/domain_page.h>
+#include <xen/guest_access.h>
 #include <xen/sched.h>
-#include <asm/guest_access.h>
+
 #include <asm/guest_walk.h>
 #include <asm/short-desc.h>
 
diff --git a/xen/arch/arm/guestcopy.c b/xen/arch/arm/guestcopy.c
index c8023e2bca5d..32681606d8fc 100644
--- a/xen/arch/arm/guestcopy.c
+++ b/xen/arch/arm/guestcopy.c
@@ -1,10 +1,10 @@
 #include <xen/domain_page.h>
+#include <xen/guest_access.h>
 #include <xen/lib.h>
 #include <xen/mm.h>
 #include <xen/sched.h>
 
 #include <asm/current.h>
-#include <asm/guest_access.h>
 
 #define COPY_flush_dcache   (1U << 0)
 #define COPY_from_guest     (0U << 1)
diff --git a/xen/arch/arm/vgic-v3-its.c b/xen/arch/arm/vgic-v3-its.c
index 6e153c698d56..58d939b85f92 100644
--- a/xen/arch/arm/vgic-v3-its.c
+++ b/xen/arch/arm/vgic-v3-its.c
@@ -32,6 +32,7 @@
 #include <xen/bitops.h>
 #include <xen/config.h>
 #include <xen/domain_page.h>
+#include <xen/guest_access.h>
 #include <xen/lib.h>
 #include <xen/init.h>
 #include <xen/softirq.h>
@@ -39,7 +40,6 @@
 #include <xen/sched.h>
 #include <xen/sizes.h>
 #include <asm/current.h>
-#include <asm/guest_access.h>
 #include <asm/mmio.h>
 #include <asm/gic_v3_defs.h>
 #include <asm/gic_v3_its.h>
diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
index ca3bbfcbb355..7301f3cd6004 100644
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -16,6 +16,7 @@
  * this program; If not, see <http://www.gnu.org/licenses/>.
  */
 
+#include <xen/guest_access.h>
 #include <xen/init.h>
 #include <xen/lib.h>
 #include <xen/trace.h>
@@ -34,7 +35,6 @@
 #include <asm/cpufeature.h>
 #include <asm/processor.h>
 #include <asm/amd.h>
-#include <asm/guest_access.h>
 #include <asm/debugreg.h>
 #include <asm/msr.h>
 #include <asm/i387.h>
diff --git a/xen/arch/x86/hvm/viridian/viridian.c b/xen/arch/x86/hvm/viridian/viridian.c
index 977c1bc54fad..dc7183a54627 100644
--- a/xen/arch/x86/hvm/viridian/viridian.c
+++ b/xen/arch/x86/hvm/viridian/viridian.c
@@ -5,12 +5,12 @@
  * Hypervisor Top Level Functional Specification for more information.
  */
 
+#include <xen/guest_access.h>
 #include <xen/sched.h>
 #include <xen/version.h>
 #include <xen/hypercall.h>
 #include <xen/domain_page.h>
 #include <xen/param.h>
-#include <asm/guest_access.h>
 #include <asm/guest/hyperv-tlfs.h>
 #include <asm/paging.h>
 #include <asm/p2m.h>
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index eb54aadfbafb..cb5df1e81c9c 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -15,6 +15,7 @@
  * this program; If not, see <http://www.gnu.org/licenses/>.
  */
 
+#include <xen/guest_access.h>
 #include <xen/init.h>
 #include <xen/lib.h>
 #include <xen/param.h>
@@ -31,7 +32,6 @@
 #include <asm/regs.h>
 #include <asm/cpufeature.h>
 #include <asm/processor.h>
-#include <asm/guest_access.h>
 #include <asm/debugreg.h>
 #include <asm/msr.h>
 #include <asm/p2m.h>
diff --git a/xen/common/libelf/libelf-loader.c b/xen/common/libelf/libelf-loader.c
index 0f468727d04a..629cc0d3e611 100644
--- a/xen/common/libelf/libelf-loader.c
+++ b/xen/common/libelf/libelf-loader.c
@@ -16,7 +16,7 @@
  */
 
 #ifdef __XEN__
-#include <asm/guest_access.h>
+#include <xen/guest_access.h>
 #endif
 
 #include "libelf-private.h"
diff --git a/xen/include/asm-arm/guest_access.h b/xen/include/asm-arm/guest_access.h
index 31b9f03f0015..b9a89c495527 100644
--- a/xen/include/asm-arm/guest_access.h
+++ b/xen/include/asm-arm/guest_access.h
@@ -1,7 +1,6 @@
 #ifndef __ASM_ARM_GUEST_ACCESS_H__
 #define __ASM_ARM_GUEST_ACCESS_H__
 
-#include <xen/guest_access.h>
 #include <xen/errno.h>
 #include <xen/sched.h>
 
diff --git a/xen/lib/x86/private.h b/xen/lib/x86/private.h
index b793181464f3..2d53bd3ced23 100644
--- a/xen/lib/x86/private.h
+++ b/xen/lib/x86/private.h
@@ -4,12 +4,12 @@
 #ifdef __XEN__
 
 #include <xen/bitops.h>
+#include <xen/guest_access.h>
 #include <xen/kernel.h>
 #include <xen/lib.h>
 #include <xen/nospec.h>
 #include <xen/types.h>
 
-#include <asm/guest_access.h>
 #include <asm/msr-index.h>
 
 #define copy_to_buffer_offset copy_to_guest_offset
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu Jul 30 18:18:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jul 2020 18:18:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1D8q-0002zE-2A; Thu, 30 Jul 2020 18:18:56 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=IK5u=BJ=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1k1D8p-0002pa-7U
 for xen-devel@lists.xenproject.org; Thu, 30 Jul 2020 18:18:55 +0000
X-Inumbo-ID: 180bcc10-d291-11ea-ab02-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 180bcc10-d291-11ea-ab02-12813bfff9fa;
 Thu, 30 Jul 2020 18:18:40 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:
 Sender:Reply-To:MIME-Version:Content-Type:Content-Transfer-Encoding:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=iYWCfYUjsOmxaTuM6icRxDZYgMO7DF187Z293r7Hn8E=; b=5wqskYdYAkiLkrpXm4p+L07UPL
 BxbyB6xLswpW0FYwHGJ+C10kQqjavS580kMSscP3oTDpO1M/IqO3H2I0RdUw63W3etGrVQ9+3EMjq
 AsKEnCsnqbo1kzVCXL3a+Tv6O/3VT44V/zpdRm0k+h5eSmenRqC9HuECpkViloXztAlE=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1k1D8Z-0007pd-V6; Thu, 30 Jul 2020 18:18:39 +0000
Received: from 54-240-197-227.amazon.com ([54.240.197.227]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1k1D8Z-0006Uf-Ic; Thu, 30 Jul 2020 18:18:39 +0000
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [RESEND][PATCH v2 6/7] xen/guest_access: Consolidate guest access
 helpers in xen/guest_access.h
Date: Thu, 30 Jul 2020 19:18:26 +0100
Message-Id: <20200730181827.1670-7-julien@xen.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20200730181827.1670-1-julien@xen.org>
References: <20200730181827.1670-1-julien@xen.org>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, julien@xen.org,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Julien Grall <jgrall@amazon.com>, Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Julien Grall <jgrall@amazon.com>

Most of the helpers to access guest memory are implemented the same way
on Arm and x86. The only differences are:
    - guest_handle_{from, to}_param(): while on x86 XEN_GUEST_HANDLE()
      and XEN_GUEST_HANDLE_PARAM() are the same, they are not on Arm. It
      is still fine to use the Arm implementation on x86.
    - __clear_guest_offset(): Interestingly, the prototypes do not
      match between x86 and Arm. However, the Arm one is bogus, so the
      x86 implementation can be used.
    - guest_handle{,_subrange}_okay(): They legitimately differ because
      Arm only supports auto-translated guests, and therefore handles
      are always valid.

In the past, the ia64 and ppc64 ports used a different model to access
guest parameters. They are long gone now.

Given that Xen currently only supports two architectures, it is too
soon to introduce an asm-generic directory, as it would not be possible
to differentiate it from the existing xen/ directory. If/when there is
a third port, we can decide to create the new directory if that port
uses a different way to access guest parameters.

For now, consolidate it in xen/guest_access.h.

While it would be possible to adjust the coding style at the same time,
this is left for a follow-up patch so that 'diff' can be used to check
that the consolidation was done correctly.

Signed-off-by: Julien Grall <jgrall@amazon.com>

---
    Changes in v2:
        - Expand the commit message explaining why asm-generic is not
        created.
---
 xen/include/asm-arm/guest_access.h | 114 ---------------------------
 xen/include/asm-x86/guest_access.h | 108 --------------------------
 xen/include/xen/guest_access.h     | 119 +++++++++++++++++++++++++++++
 3 files changed, 119 insertions(+), 222 deletions(-)

diff --git a/xen/include/asm-arm/guest_access.h b/xen/include/asm-arm/guest_access.h
index b9a89c495527..53766386d3d8 100644
--- a/xen/include/asm-arm/guest_access.h
+++ b/xen/include/asm-arm/guest_access.h
@@ -23,88 +23,6 @@ int access_guest_memory_by_ipa(struct domain *d, paddr_t ipa, void *buf,
 #define __raw_copy_from_guest raw_copy_from_guest
 #define __raw_clear_guest raw_clear_guest
 
-/* Remainder copied from x86 -- could be common? */
-
-/* Is the guest handle a NULL reference? */
-#define guest_handle_is_null(hnd)        ((hnd).p == NULL)
-
-/* Offset the given guest handle into the array it refers to. */
-#define guest_handle_add_offset(hnd, nr) ((hnd).p += (nr))
-#define guest_handle_subtract_offset(hnd, nr) ((hnd).p -= (nr))
-
-/* Cast a guest handle (either XEN_GUEST_HANDLE or XEN_GUEST_HANDLE_PARAM)
- * to the specified type of XEN_GUEST_HANDLE_PARAM. */
-#define guest_handle_cast(hnd, type) ({         \
-    type *_x = (hnd).p;                         \
-    (XEN_GUEST_HANDLE_PARAM(type)) { _x };            \
-})
-
-/* Convert a XEN_GUEST_HANDLE to XEN_GUEST_HANDLE_PARAM */
-#define guest_handle_to_param(hnd, type) ({                  \
-    typeof((hnd).p) _x = (hnd).p;                            \
-    XEN_GUEST_HANDLE_PARAM(type) _y = { _x };                \
-    /* type checking: make sure that the pointers inside     \
-     * XEN_GUEST_HANDLE and XEN_GUEST_HANDLE_PARAM are of    \
-     * the same type, then return hnd */                     \
-    (void)(&_x == &_y.p);                                    \
-    _y;                                                      \
-})
-
-#define guest_handle_for_field(hnd, type, fld)          \
-    ((XEN_GUEST_HANDLE(type)) { &(hnd).p->fld })
-
-#define guest_handle_from_ptr(ptr, type)        \
-    ((XEN_GUEST_HANDLE_PARAM(type)) { (type *)ptr })
-#define const_guest_handle_from_ptr(ptr, type)  \
-    ((XEN_GUEST_HANDLE_PARAM(const_##type)) { (const type *)ptr })
-
-/*
- * Copy an array of objects to guest context via a guest handle,
- * specifying an offset into the guest array.
- */
-#define copy_to_guest_offset(hnd, off, ptr, nr) ({      \
-    const typeof(*(ptr)) *_s = (ptr);                   \
-    char (*_d)[sizeof(*_s)] = (void *)(hnd).p;          \
-    /* Check that the handle is not for a const type */ \
-    void *__maybe_unused _t = (hnd).p;                  \
-    (void)((hnd).p == _s);                              \
-    raw_copy_to_guest(_d+(off), _s, sizeof(*_s)*(nr));  \
-})
-
-/*
- * Clear an array of objects in guest context via a guest handle,
- * specifying an offset into the guest array.
- */
-#define clear_guest_offset(hnd, off, nr) ({    \
-    void *_d = (hnd).p;                        \
-    raw_clear_guest(_d+(off), nr);             \
-})
-
-/*
- * Copy an array of objects from guest context via a guest handle,
- * specifying an offset into the guest array.
- */
-#define copy_from_guest_offset(ptr, hnd, off, nr) ({    \
-    const typeof(*(ptr)) *_s = (hnd).p;                 \
-    typeof(*(ptr)) *_d = (ptr);                         \
-    raw_copy_from_guest(_d, _s+(off), sizeof(*_d)*(nr));\
-})
-
-/* Copy sub-field of a structure to guest context via a guest handle. */
-#define copy_field_to_guest(hnd, ptr, field) ({         \
-    const typeof(&(ptr)->field) _s = &(ptr)->field;     \
-    void *_d = &(hnd).p->field;                         \
-    (void)(&(hnd).p->field == _s);                      \
-    raw_copy_to_guest(_d, _s, sizeof(*_s));             \
-})
-
-/* Copy sub-field of a structure from guest context via a guest handle. */
-#define copy_field_from_guest(ptr, hnd, field) ({       \
-    const typeof(&(ptr)->field) _s = &(hnd).p->field;   \
-    typeof(&(ptr)->field) _d = &(ptr)->field;           \
-    raw_copy_from_guest(_d, _s, sizeof(*_d));           \
-})
-
 /*
  * Pre-validate a guest handle.
  * Allows use of faster __copy_* functions.
@@ -113,38 +31,6 @@ int access_guest_memory_by_ipa(struct domain *d, paddr_t ipa, void *buf,
 #define guest_handle_okay(hnd, nr) (1)
 #define guest_handle_subrange_okay(hnd, first, last) (1)
 
-#define __copy_to_guest_offset(hnd, off, ptr, nr) ({    \
-    const typeof(*(ptr)) *_s = (ptr);                   \
-    char (*_d)[sizeof(*_s)] = (void *)(hnd).p;          \
-    /* Check that the handle is not for a const type */ \
-    void *__maybe_unused _t = (hnd).p;                  \
-    (void)((hnd).p == _s);                              \
-    __raw_copy_to_guest(_d+(off), _s, sizeof(*_s)*(nr));\
-})
-
-#define __clear_guest_offset(hnd, off, ptr, nr) ({      \
-    __raw_clear_guest(_d+(off), nr);  \
-})
-
-#define __copy_from_guest_offset(ptr, hnd, off, nr) ({  \
-    const typeof(*(ptr)) *_s = (hnd).p;                 \
-    typeof(*(ptr)) *_d = (ptr);                         \
-    __raw_copy_from_guest(_d, _s+(off), sizeof(*_d)*(nr));\
-})
-
-#define __copy_field_to_guest(hnd, ptr, field) ({       \
-    const typeof(&(ptr)->field) _s = &(ptr)->field;     \
-    void *_d = &(hnd).p->field;                         \
-    (void)(&(hnd).p->field == _s);                      \
-    __raw_copy_to_guest(_d, _s, sizeof(*_s));           \
-})
-
-#define __copy_field_from_guest(ptr, hnd, field) ({     \
-    const typeof(&(ptr)->field) _s = &(hnd).p->field;   \
-    typeof(&(ptr)->field) _d = &(ptr)->field;           \
-    __raw_copy_from_guest(_d, _s, sizeof(*_d));         \
-})
-
 #endif /* __ASM_ARM_GUEST_ACCESS_H__ */
 /*
  * Local variables:
diff --git a/xen/include/asm-x86/guest_access.h b/xen/include/asm-x86/guest_access.h
index 3ffde205f6a1..08c9fbbc78e1 100644
--- a/xen/include/asm-x86/guest_access.h
+++ b/xen/include/asm-x86/guest_access.h
@@ -38,81 +38,6 @@
      clear_user_hvm((dst), (len)) :             \
      clear_user((dst), (len)))
 
-/* Is the guest handle a NULL reference? */
-#define guest_handle_is_null(hnd)        ((hnd).p == NULL)
-
-/* Offset the given guest handle into the array it refers to. */
-#define guest_handle_add_offset(hnd, nr) ((hnd).p += (nr))
-#define guest_handle_subtract_offset(hnd, nr) ((hnd).p -= (nr))
-
-/* Cast a guest handle (either XEN_GUEST_HANDLE or XEN_GUEST_HANDLE_PARAM)
- * to the specified type of XEN_GUEST_HANDLE_PARAM. */
-#define guest_handle_cast(hnd, type) ({         \
-    type *_x = (hnd).p;                         \
-    (XEN_GUEST_HANDLE_PARAM(type)) { _x };            \
-})
-
-/* Convert a XEN_GUEST_HANDLE to XEN_GUEST_HANDLE_PARAM */
-#define guest_handle_to_param(hnd, type) ({                  \
-    /* type checking: make sure that the pointers inside     \
-     * XEN_GUEST_HANDLE and XEN_GUEST_HANDLE_PARAM are of    \
-     * the same type, then return hnd */                     \
-    (void)((typeof(&(hnd).p)) 0 ==                           \
-        (typeof(&((XEN_GUEST_HANDLE_PARAM(type)) {}).p)) 0); \
-    (hnd);                                                   \
-})
-
-#define guest_handle_for_field(hnd, type, fld)          \
-    ((XEN_GUEST_HANDLE(type)) { &(hnd).p->fld })
-
-#define guest_handle_from_ptr(ptr, type)        \
-    ((XEN_GUEST_HANDLE_PARAM(type)) { (type *)ptr })
-#define const_guest_handle_from_ptr(ptr, type)  \
-    ((XEN_GUEST_HANDLE_PARAM(const_##type)) { (const type *)ptr })
-
-/*
- * Copy an array of objects to guest context via a guest handle,
- * specifying an offset into the guest array.
- */
-#define copy_to_guest_offset(hnd, off, ptr, nr) ({      \
-    const typeof(*(ptr)) *_s = (ptr);                   \
-    char (*_d)[sizeof(*_s)] = (void *)(hnd).p;          \
-    /* Check that the handle is not for a const type */ \
-    void *__maybe_unused _t = (hnd).p;                  \
-    (void)((hnd).p == _s);                              \
-    raw_copy_to_guest(_d+(off), _s, sizeof(*_s)*(nr));  \
-})
-
-/*
- * Copy an array of objects from guest context via a guest handle,
- * specifying an offset into the guest array.
- */
-#define copy_from_guest_offset(ptr, hnd, off, nr) ({    \
-    const typeof(*(ptr)) *_s = (hnd).p;                 \
-    typeof(*(ptr)) *_d = (ptr);                         \
-    raw_copy_from_guest(_d, _s+(off), sizeof(*_d)*(nr));\
-})
-
-#define clear_guest_offset(hnd, off, nr) ({    \
-    void *_d = (hnd).p;                        \
-    raw_clear_guest(_d+(off), nr);             \
-})
-
-/* Copy sub-field of a structure to guest context via a guest handle. */
-#define copy_field_to_guest(hnd, ptr, field) ({         \
-    const typeof(&(ptr)->field) _s = &(ptr)->field;     \
-    void *_d = &(hnd).p->field;                         \
-    (void)(&(hnd).p->field == _s);                      \
-    raw_copy_to_guest(_d, _s, sizeof(*_s));             \
-})
-
-/* Copy sub-field of a structure from guest context via a guest handle. */
-#define copy_field_from_guest(ptr, hnd, field) ({       \
-    const typeof(&(ptr)->field) _s = &(hnd).p->field;   \
-    typeof(&(ptr)->field) _d = &(ptr)->field;           \
-    raw_copy_from_guest(_d, _s, sizeof(*_d));           \
-})
-
 /*
  * Pre-validate a guest handle.
  * Allows use of faster __copy_* functions.
@@ -126,39 +51,6 @@
                      (last)-(first)+1,                  \
                      sizeof(*(hnd).p)))
 
-#define __copy_to_guest_offset(hnd, off, ptr, nr) ({    \
-    const typeof(*(ptr)) *_s = (ptr);                   \
-    char (*_d)[sizeof(*_s)] = (void *)(hnd).p;          \
-    /* Check that the handle is not for a const type */ \
-    void *__maybe_unused _t = (hnd).p;                  \
-    (void)((hnd).p == _s);                              \
-    __raw_copy_to_guest(_d+(off), _s, sizeof(*_s)*(nr));\
-})
-
-#define __copy_from_guest_offset(ptr, hnd, off, nr) ({  \
-    const typeof(*(ptr)) *_s = (hnd).p;                 \
-    typeof(*(ptr)) *_d = (ptr);                         \
-    __raw_copy_from_guest(_d, _s+(off), sizeof(*_d)*(nr));\
-})
-
-#define __clear_guest_offset(hnd, off, nr) ({    \
-    void *_d = (hnd).p;                          \
-    __raw_clear_guest(_d+(off), nr);             \
-})
-
-#define __copy_field_to_guest(hnd, ptr, field) ({       \
-    const typeof(&(ptr)->field) _s = &(ptr)->field;     \
-    void *_d = &(hnd).p->field;                         \
-    (void)(&(hnd).p->field == _s);                      \
-    __raw_copy_to_guest(_d, _s, sizeof(*_s));           \
-})
-
-#define __copy_field_from_guest(ptr, hnd, field) ({     \
-    const typeof(&(ptr)->field) _s = &(hnd).p->field;   \
-    typeof(&(ptr)->field) _d = &(ptr)->field;           \
-    __raw_copy_from_guest(_d, _s, sizeof(*_d));         \
-})
-
 #endif /* __ASM_X86_GUEST_ACCESS_H__ */
 /*
  * Local variables:
diff --git a/xen/include/xen/guest_access.h b/xen/include/xen/guest_access.h
index ef9aaa3efcfe..4957b8d1f2b8 100644
--- a/xen/include/xen/guest_access.h
+++ b/xen/include/xen/guest_access.h
@@ -11,6 +11,86 @@
 #include <xen/types.h>
 #include <public/xen.h>
 
+/* Is the guest handle a NULL reference? */
+#define guest_handle_is_null(hnd)        ((hnd).p == NULL)
+
+/* Offset the given guest handle into the array it refers to. */
+#define guest_handle_add_offset(hnd, nr) ((hnd).p += (nr))
+#define guest_handle_subtract_offset(hnd, nr) ((hnd).p -= (nr))
+
+/* Cast a guest handle (either XEN_GUEST_HANDLE or XEN_GUEST_HANDLE_PARAM)
+ * to the specified type of XEN_GUEST_HANDLE_PARAM. */
+#define guest_handle_cast(hnd, type) ({         \
+    type *_x = (hnd).p;                         \
+    (XEN_GUEST_HANDLE_PARAM(type)) { _x };            \
+})
+
+/* Cast a XEN_GUEST_HANDLE to XEN_GUEST_HANDLE_PARAM */
+#define guest_handle_to_param(hnd, type) ({                  \
+    typeof((hnd).p) _x = (hnd).p;                            \
+    XEN_GUEST_HANDLE_PARAM(type) _y = { _x };                \
+    /* type checking: make sure that the pointers inside     \
+     * XEN_GUEST_HANDLE and XEN_GUEST_HANDLE_PARAM are of    \
+     * the same type, then return hnd */                     \
+    (void)(&_x == &_y.p);                                    \
+    _y;                                                      \
+})
+
+#define guest_handle_for_field(hnd, type, fld)          \
+    ((XEN_GUEST_HANDLE(type)) { &(hnd).p->fld })
+
+#define guest_handle_from_ptr(ptr, type)        \
+    ((XEN_GUEST_HANDLE_PARAM(type)) { (type *)ptr })
+#define const_guest_handle_from_ptr(ptr, type)  \
+    ((XEN_GUEST_HANDLE_PARAM(const_##type)) { (const type *)ptr })
+
+/*
+ * Copy an array of objects to guest context via a guest handle,
+ * specifying an offset into the guest array.
+ */
+#define copy_to_guest_offset(hnd, off, ptr, nr) ({      \
+    const typeof(*(ptr)) *_s = (ptr);                   \
+    char (*_d)[sizeof(*_s)] = (void *)(hnd).p;          \
+    /* Check that the handle is not for a const type */ \
+    void *__maybe_unused _t = (hnd).p;                  \
+    (void)((hnd).p == _s);                              \
+    raw_copy_to_guest(_d+(off), _s, sizeof(*_s)*(nr));  \
+})
+
+/*
+ * Clear an array of objects in guest context via a guest handle,
+ * specifying an offset into the guest array.
+ */
+#define clear_guest_offset(hnd, off, nr) ({    \
+    void *_d = (hnd).p;                        \
+    raw_clear_guest(_d+(off), nr);             \
+})
+
+/*
+ * Copy an array of objects from guest context via a guest handle,
+ * specifying an offset into the guest array.
+ */
+#define copy_from_guest_offset(ptr, hnd, off, nr) ({    \
+    const typeof(*(ptr)) *_s = (hnd).p;                 \
+    typeof(*(ptr)) *_d = (ptr);                         \
+    raw_copy_from_guest(_d, _s+(off), sizeof(*_d)*(nr));\
+})
+
+/* Copy sub-field of a structure to guest context via a guest handle. */
+#define copy_field_to_guest(hnd, ptr, field) ({         \
+    const typeof(&(ptr)->field) _s = &(ptr)->field;     \
+    void *_d = &(hnd).p->field;                         \
+    (void)(&(hnd).p->field == _s);                      \
+    raw_copy_to_guest(_d, _s, sizeof(*_s));             \
+})
+
+/* Copy sub-field of a structure from guest context via a guest handle. */
+#define copy_field_from_guest(ptr, hnd, field) ({       \
+    const typeof(&(ptr)->field) _s = &(hnd).p->field;   \
+    typeof(&(ptr)->field) _d = &(ptr)->field;           \
+    raw_copy_from_guest(_d, _s, sizeof(*_d));           \
+})
+
 #define copy_to_guest(hnd, ptr, nr)                     \
     copy_to_guest_offset(hnd, 0, ptr, nr)
 
@@ -20,6 +100,45 @@
 #define clear_guest(hnd, nr)                            \
     clear_guest_offset(hnd, 0, nr)
 
+/*
+ * The __copy_* functions should only be used after the guest handle has
+ * been pre-validated via guest_handle_okay() and
+ * guest_handle_subrange_okay().
+ */
+
+#define __copy_to_guest_offset(hnd, off, ptr, nr) ({    \
+    const typeof(*(ptr)) *_s = (ptr);                   \
+    char (*_d)[sizeof(*_s)] = (void *)(hnd).p;          \
+    /* Check that the handle is not for a const type */ \
+    void *__maybe_unused _t = (hnd).p;                  \
+    (void)((hnd).p == _s);                              \
+    __raw_copy_to_guest(_d+(off), _s, sizeof(*_s)*(nr));\
+})
+
+#define __clear_guest_offset(hnd, off, nr) ({    \
+    void *_d = (hnd).p;                          \
+    __raw_clear_guest(_d + (off), nr);           \
+})
+
+#define __copy_from_guest_offset(ptr, hnd, off, nr) ({  \
+    const typeof(*(ptr)) *_s = (hnd).p;                 \
+    typeof(*(ptr)) *_d = (ptr);                         \
+    __raw_copy_from_guest(_d, _s+(off), sizeof(*_d)*(nr));\
+})
+
+#define __copy_field_to_guest(hnd, ptr, field) ({       \
+    const typeof(&(ptr)->field) _s = &(ptr)->field;     \
+    void *_d = &(hnd).p->field;                         \
+    (void)(&(hnd).p->field == _s);                      \
+    __raw_copy_to_guest(_d, _s, sizeof(*_s));           \
+})
+
+#define __copy_field_from_guest(ptr, hnd, field) ({     \
+    const typeof(&(ptr)->field) _s = &(hnd).p->field;   \
+    typeof(&(ptr)->field) _d = &(ptr)->field;           \
+    __raw_copy_from_guest(_d, _s, sizeof(*_d));         \
+})
+
 #define __copy_to_guest(hnd, ptr, nr)                   \
     __copy_to_guest_offset(hnd, 0, ptr, nr)
 
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu Jul 30 18:19:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jul 2020 18:19:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1D8v-00033P-FY; Thu, 30 Jul 2020 18:19:01 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=IK5u=BJ=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1k1D8u-0002pa-7j
 for xen-devel@lists.xenproject.org; Thu, 30 Jul 2020 18:19:00 +0000
X-Inumbo-ID: 17dfd4cb-d291-11ea-ab02-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 17dfd4cb-d291-11ea-ab02-12813bfff9fa;
 Thu, 30 Jul 2020 18:18:42 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:
 Sender:Reply-To:MIME-Version:Content-Type:Content-Transfer-Encoding:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=HcrdvdMXYTnUwRBlS2DEneqnqQqoRuCUdn6F6d66Mpw=; b=LaV146ZSoDMplKdOESbxSLKpU1
 VQ9LHyys0ORBcoNKhCoTrPEDdHNa98vKTkXrBTAp/4WLKrOn40YWbWCqnOdik7wuDt4pGlmIGJH5C
 HWwSGkYIkn8PmOW3X9DUD4wFizQPbTvqFplohi0FK/dZffjRiUjN2ZLE+RmZscE3uE5Q=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1k1D8b-0007pl-Di; Thu, 30 Jul 2020 18:18:41 +0000
Received: from 54-240-197-227.amazon.com ([54.240.197.227]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1k1D8b-0006Uf-5N; Thu, 30 Jul 2020 18:18:41 +0000
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [RESEND][PATCH v2 7/7] xen/guest_access: Fix coding style in
 xen/guest_access.h
Date: Thu, 30 Jul 2020 19:18:27 +0100
Message-Id: <20200730181827.1670-8-julien@xen.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20200730181827.1670-1-julien@xen.org>
References: <20200730181827.1670-1-julien@xen.org>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, julien@xen.org,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Julien Grall <jgrall@amazon.com>, Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Julien Grall <jgrall@amazon.com>

    * Add spaces before and after operators
    * Align \
    * Format comments

No functional changes expected.

Signed-off-by: Julien Grall <jgrall@amazon.com>
---
 xen/include/xen/guest_access.h | 36 +++++++++++++++++++---------------
 1 file changed, 20 insertions(+), 16 deletions(-)

diff --git a/xen/include/xen/guest_access.h b/xen/include/xen/guest_access.h
index 4957b8d1f2b8..52fc7a063249 100644
--- a/xen/include/xen/guest_access.h
+++ b/xen/include/xen/guest_access.h
@@ -18,20 +18,24 @@
 #define guest_handle_add_offset(hnd, nr) ((hnd).p += (nr))
 #define guest_handle_subtract_offset(hnd, nr) ((hnd).p -= (nr))
 
-/* Cast a guest handle (either XEN_GUEST_HANDLE or XEN_GUEST_HANDLE_PARAM)
- * to the specified type of XEN_GUEST_HANDLE_PARAM. */
+/*
+ * Cast a guest handle (either XEN_GUEST_HANDLE or XEN_GUEST_HANDLE_PARAM)
+ * to the specified type of XEN_GUEST_HANDLE_PARAM.
+ */
 #define guest_handle_cast(hnd, type) ({         \
     type *_x = (hnd).p;                         \
-    (XEN_GUEST_HANDLE_PARAM(type)) { _x };            \
+    (XEN_GUEST_HANDLE_PARAM(type)) { _x };      \
 })
 
 /* Cast a XEN_GUEST_HANDLE to XEN_GUEST_HANDLE_PARAM */
 #define guest_handle_to_param(hnd, type) ({                  \
     typeof((hnd).p) _x = (hnd).p;                            \
     XEN_GUEST_HANDLE_PARAM(type) _y = { _x };                \
-    /* type checking: make sure that the pointers inside     \
+    /*                                                       \
+     * type checking: make sure that the pointers inside     \
      * XEN_GUEST_HANDLE and XEN_GUEST_HANDLE_PARAM are of    \
-     * the same type, then return hnd */                     \
+     * the same type, then return hnd.                       \
+     */                                                      \
     (void)(&_x == &_y.p);                                    \
     _y;                                                      \
 })
@@ -106,13 +110,13 @@
  * guest_handle_subrange_okay().
  */
 
-#define __copy_to_guest_offset(hnd, off, ptr, nr) ({    \
-    const typeof(*(ptr)) *_s = (ptr);                   \
-    char (*_d)[sizeof(*_s)] = (void *)(hnd).p;          \
-    /* Check that the handle is not for a const type */ \
-    void *__maybe_unused _t = (hnd).p;                  \
-    (void)((hnd).p == _s);                              \
-    __raw_copy_to_guest(_d+(off), _s, sizeof(*_s)*(nr));\
+#define __copy_to_guest_offset(hnd, off, ptr, nr) ({        \
+    const typeof(*(ptr)) *_s = (ptr);                       \
+    char (*_d)[sizeof(*_s)] = (void *)(hnd).p;              \
+    /* Check that the handle is not for a const type */     \
+    void *__maybe_unused _t = (hnd).p;                      \
+    (void)((hnd).p == _s);                                  \
+    __raw_copy_to_guest(_d + (off), _s, sizeof(*_s) * (nr));\
 })
 
 #define __clear_guest_offset(hnd, off, nr) ({    \
@@ -120,10 +124,10 @@
     __raw_clear_guest(_d + (off), nr);           \
 })
 
-#define __copy_from_guest_offset(ptr, hnd, off, nr) ({  \
-    const typeof(*(ptr)) *_s = (hnd).p;                 \
-    typeof(*(ptr)) *_d = (ptr);                         \
-    __raw_copy_from_guest(_d, _s+(off), sizeof(*_d)*(nr));\
+#define __copy_from_guest_offset(ptr, hnd, off, nr) ({          \
+    const typeof(*(ptr)) *_s = (hnd).p;                         \
+    typeof(*(ptr)) *_d = (ptr);                                 \
+    __raw_copy_from_guest(_d, _s + (off), sizeof(*_d) * (nr));  \
 })
 
 #define __copy_field_to_guest(hnd, ptr, field) ({       \
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu Jul 30 18:19:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jul 2020 18:19:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1D97-0003Al-R8; Thu, 30 Jul 2020 18:19:13 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=IK5u=BJ=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1k1D97-00038A-F8
 for xen-devel@lists.xenproject.org; Thu, 30 Jul 2020 18:19:13 +0000
X-Inumbo-ID: 2b55c6cc-d291-11ea-ab02-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2b55c6cc-d291-11ea-ab02-12813bfff9fa;
 Thu, 30 Jul 2020 18:19:12 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=pmTU3PHEjg1Own64U+LvvF5TFRKUO3a3kTILLr6Uheo=; b=QkARhr3Ezw9GhgppPtmJYVUhfm
 Prupt2TkUO6oB4qkT/fH570pLkGMfOs3egfEblsZIce4N68hlAUDel+bmbwH7dC4OLxqpX5Rw1+wN
 FroKaqzLgdRS/0xXygLjjTPES/HZuTyNRcSlo8nNEiZnq49iWqvUkZj/Xl4WrkPEpYdE=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1k1D91-0007qP-4H; Thu, 30 Jul 2020 18:19:07 +0000
Received: from 54-240-197-226.amazon.com ([54.240.197.226]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1k1D90-0006Wu-Us; Thu, 30 Jul 2020 18:19:07 +0000
Subject: Re: [PATCH v2 0/7] xen: Consolidate asm-*/guest_access.h in
 xen/guest_access.h
To: xen-devel@lists.xenproject.org
References: <20200730175213.30679-1-julien@xen.org>
From: Julien Grall <julien@xen.org>
Message-ID: <921c0d8d-0d78-8e15-6218-8095be6a0839@xen.org>
Date: Thu, 30 Jul 2020 19:19:03 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:68.0)
 Gecko/20100101 Thunderbird/68.11.0
MIME-Version: 1.0
In-Reply-To: <20200730175213.30679-1-julien@xen.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin Tian <kevin.tian@intel.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Jun Nakajima <jun.nakajima@intel.com>, Wei Liu <wl@xen.org>,
 Paul Durrant <paul@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Julien Grall <jgrall@amazon.com>, Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi all,

I messed up this version, so I am resending a new one.

Please ignore this version.

Cheers,

On 30/07/2020 18:52, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> Hi all,
> 
> A lot of the helpers implemented in asm-*/guest_access.h are implemented
> the same way. This series aims to avoid the duplication and implement
> them only once in xen/guest_access.h.
> 
> Cheers,
> 
> Julien Grall (7):
>    xen/guest_access: Add emacs magics
>    xen/arm: kernel: Re-order the includes
>    xen/arm: decode: Re-order the includes
>    xen/arm: guestcopy: Re-order the includes
>    xen: include xen/guest_access.h rather than asm/guest_access.h
>    xen/guest_access: Consolidate guest access helpers in
>      xen/guest_access.h
>    xen/guest_access: Fix coding style in xen/guest_access.h
> 
>   xen/arch/arm/decode.c                |   7 +-
>   xen/arch/arm/domain.c                |   2 +-
>   xen/arch/arm/guest_walk.c            |   3 +-
>   xen/arch/arm/guestcopy.c             |   5 +-
>   xen/arch/arm/kernel.c                |  12 +--
>   xen/arch/arm/vgic-v3-its.c           |   2 +-
>   xen/arch/x86/hvm/svm/svm.c           |   2 +-
>   xen/arch/x86/hvm/viridian/viridian.c |   2 +-
>   xen/arch/x86/hvm/vmx/vmx.c           |   2 +-
>   xen/common/libelf/libelf-loader.c    |   2 +-
>   xen/include/asm-arm/guest_access.h   | 115 -----------------------
>   xen/include/asm-x86/guest_access.h   | 116 ++----------------------
>   xen/include/xen/guest_access.h       | 131 +++++++++++++++++++++++++++
>   xen/lib/x86/private.h                |   2 +-
>   14 files changed, 161 insertions(+), 242 deletions(-)
> 

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Jul 30 18:29:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jul 2020 18:29:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1DIm-0004Ua-RO; Thu, 30 Jul 2020 18:29:12 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=IK5u=BJ=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1k1DIl-0004UV-6A
 for xen-devel@lists.xenproject.org; Thu, 30 Jul 2020 18:29:11 +0000
X-Inumbo-ID: 8f80f134-d292-11ea-ab04-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 8f80f134-d292-11ea-ab04-12813bfff9fa;
 Thu, 30 Jul 2020 18:29:10 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:References:Cc:To:From:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=Q4jCjAO8LHhj2w0cFfnko4lzoObkmN5toQK4yl73xGc=; b=o8sSf9IbWuy2BII5b7rbhHlzEO
 FzSvAWdrcuSRp86jcdwedoZuqzgEBvoiZbbBLFgImofhiLvVgg8smP0J4xpZuw6hdoNeQ0HaQ/iEr
 vnxnGMpOIHjrn7J/kpKX64yCmDCMOMoNnBHpYYwTg46mReYhVYVcb1VrPj/evsQIiKTg=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1k1DIi-000834-D9; Thu, 30 Jul 2020 18:29:08 +0000
Received: from 54-240-197-226.amazon.com ([54.240.197.226]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1k1DIi-0007FT-2l; Thu, 30 Jul 2020 18:29:08 +0000
Subject: Re: [PATCH] xen/spinlock: move debug helpers inside the locked regions
From: Julien Grall <julien@xen.org>
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <20200729111330.64549-1-roger.pau@citrix.com>
 <16dd0f04-598b-8b84-8a25-6b89af9214d7@xen.org>
 <20200729135045.GD7191@Air-de-Roger>
 <bf6cdb76-e4ca-da72-182f-d61de3e92ccf@xen.org>
Message-ID: <1e48ae9c-c335-69cc-1d26-ac4a0b74c4e8@xen.org>
Date: Thu, 30 Jul 2020 19:29:05 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:68.0)
 Gecko/20100101 Thunderbird/68.11.0
MIME-Version: 1.0
In-Reply-To: <bf6cdb76-e4ca-da72-182f-d61de3e92ccf@xen.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Igor Druzhinin <igor.druzhinin@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>



On 29/07/2020 15:57, Julien Grall wrote:
> Hi Roger,
> 
> On 29/07/2020 14:50, Roger Pau Monné wrote:
>> On Wed, Jul 29, 2020 at 02:37:44PM +0100, Julien Grall wrote:
>>> Hi Roger,
>>>
>>> On 29/07/2020 12:13, Roger Pau Monne wrote:
>>>> Debug helpers such as lock profiling or the invariant pCPU assertions
>>>> must strictly be performed inside the exclusive locked region, or else
>>>> races might happen.
>>>>
>>>> Note the issue was not strictly introduced by the pointed commit in
>>>> the Fixes tag, since lock stats were already incremented before the
>>>> barrier, but that commit made it more apparent as manipulating the cpu
>>>> field could happen outside of the locked regions and thus trigger the
>>>> BUG_ON.
>>>
>>>  From the wording, it is not entirely clear which BUG_ON() you are 
>>> referring
>>> to. I am guessing, it is the one in rel_lock(). Am I correct?
>>
>> Yes, that's right. Expanding to:
>>
>> "...  and thus trigger the BUG_ON in rel_lock()." would be better.
> 
> Looks good to me. With that:
> 
> Reviewed-by: Julien Grall <jgrall@amazon.com>
> 
> I am happy to do the update on commit if there is no more comments.

Committed.

Thank you!

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Jul 30 18:29:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jul 2020 18:29:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1DJQ-0004X1-49; Thu, 30 Jul 2020 18:29:52 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Gz/s=BJ=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1k1DJO-0004Wm-JR
 for xen-devel@lists.xenproject.org; Thu, 30 Jul 2020 18:29:50 +0000
X-Inumbo-ID: a6b7ac09-d292-11ea-8db4-bc764e2007e4
Received: from mail-wm1-x341.google.com (unknown [2a00:1450:4864:20::341])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a6b7ac09-d292-11ea-8db4-bc764e2007e4;
 Thu, 30 Jul 2020 18:29:49 +0000 (UTC)
Received: by mail-wm1-x341.google.com with SMTP id p14so6568072wmg.1
 for <xen-devel@lists.xenproject.org>; Thu, 30 Jul 2020 11:29:49 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
 :mime-version:content-transfer-encoding:content-language
 :thread-index; bh=/oQ2B/nA5GJoZwORYhxmg1CGViXflM1e2jiWcV2alIE=;
 b=MEUeLJfQhbMl2ymOXA24n4PxsLgYpsVwSjLBNJTxUwcL/HSofd5QMX+f01esEH79U8
 jIMgEEidPlJQarh1xEKZX4WwANvU0GDN6ZnK2vPhYbBNiInbx0znkhwo+nZwb/3BYhKy
 dacx2oUSlXEXmoJyncT1WFtg36WiSTaC4aAt0YAhcC0Yqx5KiOyQdF5QhArW9HT8KF7U
 Xax+c6lkoUYbi4qLZs/8DWfVdOq8hEG+DQQ9d+jWB82pfxh8bPzKr1sQjeCufoKWlOIl
 6KVnyVgHg1ugIvkNMPwrZQAxKsZ1DJBRmJ3NXu6yn8aDJx15YElqjC4qirg0/hDQOj63
 hBWw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
 :subject:date:message-id:mime-version:content-transfer-encoding
 :content-language:thread-index;
 bh=/oQ2B/nA5GJoZwORYhxmg1CGViXflM1e2jiWcV2alIE=;
 b=TANALuklAkbxUcSlzP0R+bIXgLB5jIj+AzNOgDt+61TmG2cxcbhAYW0XIbIyA2YNiT
 tuEiwzAizZvMT9HSuHaiigQ52LaVTFiq1pFJraN/3WGQ+Tp7F6pQFerto2UMhTgyAoLB
 iuRJ72+YdKfAZhNr04dsC5zoemlEn4tMpfTsgTNwaGxotZLa9VaxTMxpajz8dM51IO56
 Bdm+LBK5IukH8cEd1wCkerfKExbOgY1dvMHHO4E10REn1FLN5l0twGcEh+xJ/Rwev+jj
 DxPVvY16iM7KZ5J5fyBe8InMjDXLjOBfEJZkqUdXrzV3jsVuStqpd1o+TGzzj3XWgBnV
 Ciug==
X-Gm-Message-State: AOAM531h0oNW45E4oulmLxxfa57DMGuF8Lk15S3mKDtlcVh79fnuz8VD
 8hETyBJaXjS1kGbhwbaeJr8=
X-Google-Smtp-Source: ABdhPJwJ5JfmPslQwuRXUb66cXv4PAirCb1OjMzodPGqKFQ/dxT4SOW6am0FgccGpgNJrPjJWA/esg==
X-Received: by 2002:a05:600c:284:: with SMTP id 4mr513155wmk.48.1596133788881; 
 Thu, 30 Jul 2020 11:29:48 -0700 (PDT)
Received: from CBGR90WXYV0 ([2a00:23c5:5785:9a01:ad9a:ab78:5748:a7ec])
 by smtp.gmail.com with ESMTPSA id 31sm10855810wrp.87.2020.07.30.11.29.47
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Thu, 30 Jul 2020 11:29:48 -0700 (PDT)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
To: "'Andrew Cooper'" <andrew.cooper3@citrix.com>,
 "'Xen-devel'" <xen-devel@lists.xenproject.org>
References: <20200728113712.22966-1-andrew.cooper3@citrix.com>
 <20200728113712.22966-2-andrew.cooper3@citrix.com>
 <002601d66647$ca8567e0$5f9037a0$@xen.org>
 <33a10589-6890-b653-d8c2-7eb19a5e4929@citrix.com>
In-Reply-To: <33a10589-6890-b653-d8c2-7eb19a5e4929@citrix.com>
Subject: RE: [PATCH 1/5] xen/memory: Introduce CONFIG_ARCH_ACQUIRE_RESOURCE
Date: Thu, 30 Jul 2020 19:24:35 +0100
Message-ID: <003301d6669e$addc2500$09946f00$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="utf-8"
Content-Transfer-Encoding: quoted-printable
X-Mailer: Microsoft Outlook 16.0
Content-Language: en-gb
Thread-Index: AQF6ExwjYkgZ+OK4ROtT1vtJIEnjiAIw/yGXAh+jJtsCDzS7HKmmGCIw
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Reply-To: paul@xen.org
Cc: 'Stefano Stabellini' <sstabellini@kernel.org>,
 'Julien Grall' <julien@xen.org>, 'Wei Liu' <wl@xen.org>,
 =?utf-8?Q?'Micha=C5=82_Leszczy=C5=84ski'?= <michal.leszczynski@cert.pl>,
 'Jan Beulich' <JBeulich@suse.com>,
 'Hubert Jasudowicz' <hubert.jasudowicz@cert.pl>,
 'Volodymyr Babchuk' <Volodymyr_Babchuk@epam.com>,
 =?utf-8?Q?'Roger_Pau_Monn=C3=A9'?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> -----Original Message-----
> From: Andrew Cooper <andrew.cooper3@citrix.com>
> Sent: 30 July 2020 18:34
> To: paul@xen.org; 'Xen-devel' <xen-devel@lists.xenproject.org>
> Cc: 'Jan Beulich' <JBeulich@suse.com>; 'Wei Liu' <wl@xen.org>; 'Roger Pau Monné'
> <roger.pau@citrix.com>; 'Stefano Stabellini' <sstabellini@kernel.org>; 'Julien Grall'
> <julien@xen.org>; 'Volodymyr Babchuk' <Volodymyr_Babchuk@epam.com>; 'Michał Leszczyński'
> <michal.leszczynski@cert.pl>; 'Hubert Jasudowicz' <hubert.jasudowicz@cert.pl>
> Subject: Re: [PATCH 1/5] xen/memory: Introduce CONFIG_ARCH_ACQUIRE_RESOURCE
> 
> On 30/07/2020 09:02, Paul Durrant wrote:
> >> -----Original Message-----
> >> From: Andrew Cooper <andrew.cooper3@citrix.com>
> >> Sent: 28 July 2020 12:37
> >> To: Xen-devel <xen-devel@lists.xenproject.org>
> >> Cc: Andrew Cooper <andrew.cooper3@citrix.com>; Jan Beulich <JBeulich@suse.com>; Wei Liu <wl@xen.org>;
> >> Roger Pau Monné <roger.pau@citrix.com>; Stefano Stabellini <sstabellini@kernel.org>; Julien Grall
> >> <julien@xen.org>; Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>; Paul Durrant <paul@xen.org>; Michał
> >> Leszczyński <michal.leszczynski@cert.pl>; Hubert Jasudowicz <hubert.jasudowicz@cert.pl>
> >> Subject: [PATCH 1/5] xen/memory: Introduce CONFIG_ARCH_ACQUIRE_RESOURCE
> >>
> >> New architectures shouldn't be forced to implement no-op stubs for unused
> >> functionality.
> >>
> >> Introduce CONFIG_ARCH_ACQUIRE_RESOURCE which can be opted in to, and provide
> >> compatibility logic in xen/mm.h
> >>
> >> No functional change.
> > Code-wise, it looks fine, so...
> >
> > Reviewed-by: Paul Durrant <paul@xen.org>
> 
> Thanks,
> 
> >
> > ...but ...
> >
> >> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> >> ---
> >> CC: Jan Beulich <JBeulich@suse.com>
> >> CC: Wei Liu <wl@xen.org>
> >> CC: Roger Pau Monné <roger.pau@citrix.com>
> >> CC: Stefano Stabellini <sstabellini@kernel.org>
> >> CC: Julien Grall <julien@xen.org>
> >> CC: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
> >> CC: Paul Durrant <paul@xen.org>
> >> CC: Michał Leszczyński <michal.leszczynski@cert.pl>
> >> CC: Hubert Jasudowicz <hubert.jasudowicz@cert.pl>
> >> ---
> >>  xen/arch/x86/Kconfig     | 1 +
> >>  xen/common/Kconfig       | 3 +++
> >>  xen/include/asm-arm/mm.h | 8 --------
> >>  xen/include/xen/mm.h     | 9 +++++++++
> >>  4 files changed, 13 insertions(+), 8 deletions(-)
> >>
> >> diff --git a/xen/arch/x86/Kconfig b/xen/arch/x86/Kconfig
> >> index a636a4bb1e..e7644a0a9d 100644
> >> --- a/xen/arch/x86/Kconfig
> >> +++ b/xen/arch/x86/Kconfig
> >> @@ -6,6 +6,7 @@ config X86
> >>  	select ACPI
> >>  	select ACPI_LEGACY_TABLES_LOOKUP
> >>  	select ARCH_SUPPORTS_INT128
> >> +	select ARCH_ACQUIRE_RESOURCE
> > ... I do wonder whether 'HAS_ACQUIRE_RESOURCE' is a better and more descriptive name.
> 
> We don't have a coherent policy for how to categorise these things.  I
> can change the name if you insist, but I'm not sure it makes a useful
> difference.
> 

Ok, it's fine. My R-b stands.

  Paul

> ~Andrew



From xen-devel-bounces@lists.xenproject.org Thu Jul 30 18:31:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jul 2020 18:31:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1DKV-0005Ij-Ej; Thu, 30 Jul 2020 18:30:59 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=IK5u=BJ=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1k1DKU-0005Ic-8M
 for xen-devel@lists.xenproject.org; Thu, 30 Jul 2020 18:30:58 +0000
X-Inumbo-ID: cf69b650-d292-11ea-ab07-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id cf69b650-d292-11ea-ab07-12813bfff9fa;
 Thu, 30 Jul 2020 18:30:57 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=hGnvvsl+jPnMAF+M9FDBgVeGkyKtb7kb+pt1WPQma4Y=; b=b5+AKboLC7ZNLumERi8mb2PwWV
 1rXuy8Gooyxs3zOXyfkQ9bySggempHAnHCZQSah/S/zpEuEgnbLTYalecHqA2RuUdM+OcBI3RfMhW
 FexKuChyfYqy8/CI7QYeHmHentk8NTUfAfDhqn22vGBP8WrfUxkZlFbGEk4ReFd4UhCY=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1k1DKQ-000852-TW; Thu, 30 Jul 2020 18:30:54 +0000
Received: from 54-240-197-234.amazon.com ([54.240.197.234]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1k1DKQ-0007JG-M3; Thu, 30 Jul 2020 18:30:54 +0000
Subject: Re: [PATCH 1/5] xen/memory: Introduce CONFIG_ARCH_ACQUIRE_RESOURCE
To: Andrew Cooper <andrew.cooper3@citrix.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20200728113712.22966-1-andrew.cooper3@citrix.com>
 <20200728113712.22966-2-andrew.cooper3@citrix.com>
 <9b8397fc-f50e-ef2b-cbaa-2298294af2e3@xen.org>
 <0b02d7dd-ebf1-9210-a52f-8debbddddbaa@citrix.com>
From: Julien Grall <julien@xen.org>
Message-ID: <954f6551-b5ae-f221-c2c9-5c28d5985bd0@xen.org>
Date: Thu, 30 Jul 2020 19:30:52 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:68.0)
 Gecko/20100101 Thunderbird/68.11.0
MIME-Version: 1.0
In-Reply-To: <0b02d7dd-ebf1-9210-a52f-8debbddddbaa@citrix.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Hubert Jasudowicz <hubert.jasudowicz@cert.pl>, Wei Liu <wl@xen.org>,
 Paul Durrant <paul@xen.org>,
 =?UTF-8?Q?Micha=c5=82_Leszczy=c5=84ski?= <michal.leszczynski@cert.pl>,
 Jan Beulich <JBeulich@suse.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>



On 30/07/2020 18:28, Andrew Cooper wrote:
> On 30/07/2020 10:50, Julien Grall wrote:
>> Hi Andrew,
>>
>> On 28/07/2020 12:37, Andrew Cooper wrote:
>>> New architectures shouldn't be forced to implement no-op stubs for
>>> unused
>>> functionality.
>>>
>>> Introduce CONFIG_ARCH_ACQUIRE_RESOURCE which can be opted in to, and
>>> provide
>>> compatibility logic in xen/mm.h
>>>
>>> No functional change.
>>>
>>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>>
>> With one question below:
>>
>> Acked-by: Julien Grall <jgrall@amazon.com>
> 
> Thanks,
> 
>>
>>
>>> diff --git a/xen/include/xen/mm.h b/xen/include/xen/mm.h
>>> index 1061765bcd..1b2c1f6b32 100644
>>> --- a/xen/include/xen/mm.h
>>> +++ b/xen/include/xen/mm.h
>>> @@ -685,4 +685,13 @@ static inline void put_page_alloc_ref(struct
>>> page_info *page)
>>>        }
>>>    }
>>>    +#ifndef CONFIG_ARCH_ACQUIRE_RESOURCE
>>> +static inline int arch_acquire_resource(
>>> +    struct domain *d, unsigned int type, unsigned int id, unsigned
>>> long frame,
>>> +    unsigned int nr_frames, xen_pfn_t mfn_list[])
>>
>> Any reason to change the way we indent the arguments?
> 
> So it's not all squashed on the right-hand side.

Fair enough. I have asked the same question on a follow-up patch. Feel 
free to ignore it :).

Cheers,

-- 
Julien Grall
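
[Archive note: for readers following along, the opt-in shape being reviewed can be sketched as below. This is not the committed hunk (that lives in xen/include/xen/mm.h); the `-EOPNOTSUPP` body is this sketch's assumption of what a "no-op stub" returns, following the usual Xen convention for unimplemented operations.]

```c
#include <errno.h>
#include <stddef.h>

struct domain;                      /* opaque here, as in a header sketch */
typedef unsigned long xen_pfn_t;

#ifndef CONFIG_ARCH_ACQUIRE_RESOURCE
/* Compatibility stub for architectures that do NOT select
 * CONFIG_ARCH_ACQUIRE_RESOURCE in Kconfig: the common code can call
 * arch_acquire_resource() unconditionally, and opted-out arches simply
 * report the operation as unsupported instead of carrying per-arch stubs. */
static inline int arch_acquire_resource(
    struct domain *d, unsigned int type, unsigned int id, unsigned long frame,
    unsigned int nr_frames, xen_pfn_t mfn_list[])
{
    return -EOPNOTSUPP;
}
#endif
```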


From xen-devel-bounces@lists.xenproject.org Thu Jul 30 18:59:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jul 2020 18:59:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1Dla-0007FQ-Q8; Thu, 30 Jul 2020 18:58:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=B2eQ=BJ=yujala.com=srini@srs-us1.protection.inumbo.net>)
 id 1k1DlZ-0007Eu-Kl
 for xen-devel@lists.xenproject.org; Thu, 30 Jul 2020 18:58:57 +0000
X-Inumbo-ID: b77728bc-d296-11ea-8dbb-bc764e2007e4
Received: from gproxy6-pub.mail.unifiedlayer.com (unknown [67.222.39.168])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b77728bc-d296-11ea-8dbb-bc764e2007e4;
 Thu, 30 Jul 2020 18:58:55 +0000 (UTC)
Received: from cmgw14.unifiedlayer.com (unknown [10.9.0.14])
 by gproxy6.mail.unifiedlayer.com (Postfix) with ESMTP id C22F91E062D
 for <xen-devel@lists.xenproject.org>; Thu, 30 Jul 2020 12:58:52 -0600 (MDT)
Received: from md-71.webhostbox.net ([204.11.58.143]) by cmsmtp with ESMTP
 id 1DlUktopYwNNl1DlUkOFaP; Thu, 30 Jul 2020 12:58:52 -0600
X-Authority-Reason: nr=8
X-Authority-Analysis: v=2.3 cv=MLZOZvRl c=1 sm=1 tr=0
 a=yS0qNmEK8ed8yKyeR8R6rg==:117 a=dLZJa+xiwSxG16/P+YVxDGlgEgI=:19
 a=kj9zAlcOel0A:10:nop_charset_1 a=_RQrkK6FrEwA:10:nop_rcvd_month_year
 a=o-A10e_uY_YA:10:endurance_base64_authed_username_1 a=0f1Y9JmXAAAA:8
 a=19Ub0thILXFMuuj48kkA:9 a=ABtCjNrQ3t8YFW9y:21 a=NGdGghtINY4PGyVl:21
 a=CjuIK1q_8ugA:10:nop_charset_2 a=It28mvvgxjsq2WIeNnUB:22
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=yujala.com; 
 s=default;
 h=Content-Transfer-Encoding:Content-Type:Message-ID:References:
 In-Reply-To:Subject:Cc:To:From:Date:MIME-Version:Sender:Reply-To:Content-ID:
 Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc
 :Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:List-Subscribe:
 List-Post:List-Owner:List-Archive;
 bh=OdAD0NdJtuLxZ7JxO9+aNL3a9/g0/jt4LU+dcJmWkpk=; b=bn5wXuWqHTwfe3ogBluD+PPso6
 n4TyaNor3B4AZNieyKPa6NaYcohYFLRnHKSKGyLt3Cjl/VcfmoSjnaCnmPV6V89SGEuNmgpSNiyJ9
 5JE2UkltkoWouSc5kASPRlsKpMcsmYOAmnQMzfAEMupApKmqLSZI0tB4EvEsIKOpQJ/9Jc9Bms8bI
 KgxIGHF5sAxSx8QDbFPyiR9PzE49P/crizT8KS9SM1TzGap0TZjFT/s4fydSbMaXn/5F2adu7UaSk
 mWOXQe14QfN72kROVDaxg399QbLJEuOknd9pRWxupwHG4Okij+A6uQfPYJ4DM2g5MBCo0ChSfhrZ6
 3k8BOJbg==;
Received: from md-71.webhostbox.net ([204.11.58.143]:37874)
 by md-71.webhostbox.net with esmtpa (Exim 4.93)
 (envelope-from <srini@yujala.com>)
 id 1k1DlT-000Omr-Oj; Thu, 30 Jul 2020 18:58:51 +0000
MIME-Version: 1.0
Date: Thu, 30 Jul 2020 18:58:51 +0000
From: srini@yujala.com
To: Stefano Stabellini <sstabellini@kernel.org>
Subject: Re: Porting Xen to Jetson Nano
In-Reply-To: <alpine.DEB.2.21.2007291756380.1767@sstabellini-ThinkPad-T480s>
References: <002801d66051$90fe2300$b2fa6900$@yujala.com>
 <9736680b-1c81-652b-552b-4103341bad50@xen.org>
 <000001d661cb$45cdaa10$d168fe30$@yujala.com>
 <5f985a6a-1bd6-9e68-f35f-b0b665688cee@xen.org>
 <67c102642b0932d88ab2f70e96742ef0@yujala.com>
 <alpine.DEB.2.21.2007291756380.1767@sstabellini-ThinkPad-T480s>
Message-ID: <bd49b460d390cd547ea0ca77e5a20f2d@yujala.com>
X-Sender: srini@yujala.com
User-Agent: Roundcube Webmail/1.3.13
Content-Type: text/plain; charset=US-ASCII;
 format=flowed
Content-Transfer-Encoding: 7bit
X-AntiAbuse: This header was added to track abuse,
 please include it with any abuse report
X-AntiAbuse: Primary Hostname - md-71.webhostbox.net
X-AntiAbuse: Original Domain - lists.xenproject.org
X-AntiAbuse: Originator/Caller UID/GID - [47 12] / [47 12]
X-AntiAbuse: Sender Address Domain - yujala.com
X-BWhitelist: no
X-Source-IP: 204.11.58.143
X-Source-L: Yes
X-Exim-ID: 1k1DlT-000Omr-Oj
X-Source: 
X-Source-Args: 
X-Source-Dir: 
X-Source-Sender: md-71.webhostbox.net [204.11.58.143]:37874
X-Source-Auth: srini@yujala.com
X-Email-Count: 3
X-Source-Cap: c3JpbmlxbGw7c3JpbmlxbGw7bWQtNzEud2ViaG9zdGJveC5uZXQ=
X-Local-Domain: yes
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, Julien Grall <julien@xen.org>,
 'Christopher Clark' <christopher.w.clark@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 2020-07-30 01:27, Stefano Stabellini wrote:
> On Wed, 29 Jul 2020, srini@yujala.com wrote:
>> Hi Julien,
>> 
>> On 2020-07-24 17:25, Julien Grall wrote:
>> > On 24/07/2020 16:01, Srinivas Bangalore wrote:
>> >
>> > I struggled to find your comment inline as your e-mail client doesn't
>> > quote my answer. Please configure your e-mail client to use some form
>> > of quoting (the usual is '>').
>> >
>> >
>> I have switched to a web-based email client, so I hope this is better 
>> now.
> 
> Seems better, thank you
> 
> 
>> > > (XEN) Freed 296kB init memory.
>> > > (XEN) dom0 IPA 0x0000000088080000
>> > > (XEN) P2M @ 0000000803fc3d40 mfn:0x17f0f5
>> > > (XEN) 0TH[0x0] = 0x004000017f0f377f
>> > > (XEN) 1ST[0x2] = 0x02c00000800006fd
>> > > (XEN) Mem access check
>> > > (XEN) dom0 IPA 0x0000000088080000
>> > > (XEN) P2M @ 0000000803fc3d40 mfn:0x17f0f5
>> > > (XEN) 0TH[0x0] = 0x004000017f0f377f
>> > > (XEN) 1ST[0x2] = 0x02c00000800006fd
>> > > (XEN) Mem access check
>> >
>> > The instruction abort issue looks normal as the mapping is marked as
>> > non-executable.
>> >
>> > Looking at the rest of the bits, bits 55:58 indicate the type of mapping
>> > used. The value suggests the mapping has been created as
>> > p2m_mmio_direct_c (RW cacheable MMIO). This looks wrong to me because
>> > RAM should be mapped using p2m_ram_rw.
>> >
>> > Looking at your DT, it looks like the region is marked as reserved. On
>> > Xen 4.8, reserved-memory regions are not correctly handled (IIRC this
>> > was only fixed in Xen 4.13). This should be possible to confirm by
>> > enabling CONFIG_DEVICE_TREE_DEBUG in your .config.
>> >
>> > The option will print more information about the mapping to dom0 on
>> > your console.
>> >
>> > However, given you are using an old release, you are at risk of
>> > repeatedly finding bugs that have been resolved in more recent releases.
>> > It would probably be better if you switched to Xen 4.14 and reported any
>> > bug you find there.
>> >
>> Ok. I applied the patch series to 4.14 and enabled EARLY_PRINTK, 
>> CONFIG_DEBUG and
>> DEVICE_TREE_DEBUG.
>> Here's the log...
>> 
>> ## Flattened Device Tree blob at e3500000
>>    Booting using the fdt blob at 0xe3500000
>>    reserving fdt memory region: addr=80000000 size=20000
>>    reserving fdt memory region: addr=e3500000 size=35000
>>    Loading Device Tree to 00000000fc7f8000, end 00000000fc82ffff ... 
>> OK
>> 
>> Starting kernel ...
>> 
>> - UART enabled -
>> - Boot CPU booting -
>> - Current EL 00000008 -
>> - Initialize CPU -
>> - Turning on paging -
>> - Zero BSS -
>> - Ready -
>> (XEN) Invalid size for reg
>> (XEN) fdt: node `reserved-memory': parsing failed
>> (XEN)
>> (XEN) MODULE[0]: 00000000e0000000 - 00000000e014b0c8 Xen
>> (XEN) MODULE[1]: 00000000fc7f8000 - 00000000fc82d000 Device Tree
>> (XEN)  RESVD[0]: 0000000080000000 - 0000000080020000
>> (XEN)  RESVD[1]: 00000000e3500000 - 00000000e3535000
>> (XEN)  RESVD[2]: 00000000fc7f8000 - 00000000fc82d000
>> (XEN)  RESVD[3]: 0000000040001000 - 000000004003ffff
>> (XEN)  RESVD[4]: 00000000b0000000 - 00000000b01fffff
>> (XEN)
>> (XEN)
>> (XEN) Command line: console=dtuart sync_console dom0_mem=128M 
>> loglvl=debug
>> guest_loglvl=debug console_to_ring
>> (XEN) Xen BUG at page_alloc.c:398
>> (XEN) ----[ Xen-4.14.0  arm64  debug=y   Not tainted ]----
>> (XEN) CPU:    0
>> (XEN) PC:     00000000002b7b90 alloc_boot_pages+0x38/0x9c
>> (XEN) LR:     00000000002cda04
>> (XEN) SP:     0000000000307d40
>> (XEN) CPSR:   a00003c9 MODE:64-bit EL2h (Hypervisor, handler)
>> (XEN)      X0: 000fc80000002000  X1: 0000000000002000  X2: 
>> 0000000000000000
>> (XEN)      X3: 000fffffffffffff  X4: ffffffffffffffff  X5: 
>> 0000000000000000
>> (XEN)      X6: 0000000000307df0  X7: 0000000000000003  X8: 
>> 0000000000000008
>> (XEN)      X9: fffffffffffffffb X10: 0101010101010101 X11: 
>> 0000000000000007
>> (XEN)     X12: 0000000000000004 X13: ffffffffffffffff X14: 
>> ffffffffff000000
>> (XEN)     X15: ffffffffffffffff X16: 0000000000000000 X17: 
>> 0000000000000000
>> (XEN)     X18: 00000000fc834dd0 X19: 00000000002b5000 X20: 
>> 00000000fc7f8000
>> (XEN)     X21: 00000000fc7f8000 X22: 0000000000000000 X23: 
>> fc80000000000038
>> (XEN)     X24: 00000000fed9de28 X25: ffffffffffffffff X26: 
>> fc80000002000000
>> (XEN)     X27: 0000000002000000 X28: 0000000000000000  FP: 
>> 0000000000307d40
>> (XEN)
>> (XEN)   VTCR_EL2: 80000000
>> (XEN)  VTTBR_EL2: 0000000000000000
>> (XEN)
>> (XEN)  SCTLR_EL2: 30cd183d
>> (XEN)    HCR_EL2: 0000000000000038
>> (XEN)  TTBR0_EL2: 00000000e0145000
>> (XEN)
>> (XEN)    ESR_EL2: f2000001
>> (XEN)  HPFAR_EL2: 0000000000000000
>> (XEN)    FAR_EL2: 0000000000000000
>> (XEN)
>> (XEN) Xen stack trace from sp=0000000000307d40:
>> (XEN)    0000000000307df0 00000000002cf114 0000000000000000 
>> 0000000000307d68
>> (XEN)    00000000fc7f8000 00000000002ceeb0 0000000000400000 
>> 00676e69725f6f74
>> (XEN)    ffffffffffffffff 0000000000000000 0000000000000000 
>> 0000000000307df0
>> (XEN)    0000000000307df0 00000000002cef58 000000003fffffff 
>> 00000000fc7f8000
>> (XEN)    00000000fc7f8000 000fc80000002000 0000000000400000 
>> 0080000000000000
>> (XEN)    0000000000000000 000000000003ffff 00000000fc831170 
>> 00000000002001b8
>> (XEN)    00000000e0000000 00000000dfe00000 00000000fc7f8000 
>> 0000000000000000
>> (XEN)    0000000000400000 00000000fed9de28 0000000000000000 
>> 0000000000000000
>> (XEN)    0000000000000000 0000000000000400 0000000000000000 
>> 0000000000035000
>> (XEN)    00000000fc7f8000 0000000000000000 0000000000000000 
>> 0000000000000000
>> (XEN)    0000000000000000 0000000300000000 0000000000000000 
>> 00000040ffffffff
>> (XEN)    00000000ffffffff 0000000000000000 0000000000000000 
>> 0000000000000000
>> (XEN)    0000000000000000 0000000000000000 0000000000000000 
>> 0000000000000000
>> (XEN)    0000000000000000 0000000000000000 0000000000000000 
>> 0000000000000000
>> (XEN)    0000000000000000 0000000000000000 0000000000000000 
>> 0000000000000000
>> (XEN)    0000000000000000 0000000000000000 0000000000000000 
>> 0000000000000000
>> (XEN)    0000000000000000 0000000000000000 0000000000000000 
>> 0000000000000000
>> (XEN)    0000000000000000 0000000000000000 0000000000000000 
>> 0000000000000000
>> (XEN)    0000000000000000 0000000000000000 0000000000000000 
>> 0000000000000000
>> (XEN)    0000000000000000 0000000000000000 0000000000000000 
>> 0000000000000000
>> (XEN)    0000000000000000 0000000000000000 0000000000000000 
>> 0000000000000000
>> (XEN)    0000000000000000 0000000000000000 0000000000000000 
>> 0000000000000000
>> (XEN) Xen call trace:
>> (XEN)    [<00000000002b7b90>] alloc_boot_pages+0x38/0x9c (PC)
>> (XEN)    [<00000000002cda04>] setup_frametable_mappings+0xb4/0x310 
>> (LR)
>> (XEN)    [<00000000002cf114>] start_xen+0x3a0/0xc48
>> (XEN)    [<00000000002001b8>] arm64/head.o#primary_switched+0x10/0x30
>> (XEN)
>> (XEN)
>> (XEN) ****************************************
>> (XEN) Panic on CPU 0:
>> (XEN) Xen BUG at page_alloc.c:398
>> (XEN) ****************************************
>> (XEN)
>> (XEN) Reboot in five seconds...
>> 
>> There seems to be a problem with the DT in the 'reserved-memory' node. 
>>  I
>> commented out the fb0-carveout, fb1-carveout sections, recompiled and 
>> tried to
>> boot again.
> 
> Yes, those reserved-memory nodes won't work correctly with Xen
> unfortunately: they either use "size" instead of "reg" (vpr-carveout) 
> or
> they specify "no-map". Only regular "reg" reserved memory regions
> without "no-map" are properly parsed by Xen at the moment.
> 
> 

I'll try to modify the nodes that use 'size' to use 'reg' instead.

> 
>> This time the log shows the device tree messages (see attached log
>> file), but Xen fails at this point...
>> 
>> (XEN) Allocating PPI 16 for event channel interrupt
>> (XEN) Create hypervisor node
>> (XEN) Create PSCI node
>> (XEN) Create cpus node
>> (XEN) Create cpu@0 (logical CPUID: 0) node
>> (XEN) Create cpu@1 (logical CPUID: 1) node
>> (XEN) Create cpu@2 (logical CPUID: 2) node
>> (XEN) Create cpu@3 (logical CPUID: 3) node
>> (XEN) Create memory node (reg size 4, nr cells 4)
>> (XEN)   Bank 0: 0xe8000000->0xf0000000
>> (XEN) Create memory node (reg size 4, nr cells 8)
>> (XEN)   Bank 0: 0x40001000->0x40040000
>> (XEN)   Bank 1: 0xb0000000->0xb0200000
>> (XEN) Loading zImage from 00000000e1000000 to
>> 00000000e8080000-00000000ea23c808
>> (XEN)
>> (XEN) ****************************************
>> (XEN) Panic on CPU 0:
>> (XEN) Unable to copy the kernel in the hwdom memory
>> (XEN) ****************************************
>> (XEN)
>> 
>> Device tree and log file attached. Is there an issue with the DT? Any 
>> pointers
>> on where I should be looking next?
> 
> Is it possible that the kernel image was loaded on a memory area not
> recognized as ram?
> 
> So xen/arch/arm/guestcopy.c:translate_get_page fails the check
> p2m_is_ram?
> 
The failure happens before p2m_is_ram is called.
This line:
page = get_page_from_gfn(info.gpa.d, paddr_to_pfn(addr), &p2mt, P2M_ALLOC);

returns a NULL pointer.

> That would happen for instance if a device or special node is also
> covering that address range.

Is there a way to check such conflicts?

Thanks,
Srini
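
[Archive note: there is no dedicated Xen tool for the conflict question above, but the boot log already prints every MODULE[]/RESVD[] range and dom0 memory bank, so the overlap can be checked offline. A hypothetical helper (not a Xen API; `struct region` and `find_conflict` are invented names for this sketch):]

```c
#include <stdbool.h>
#include <stdint.h>

/* Half-open [start, end) address range, matching how the Xen boot log
 * prints MODULE[]/RESVD[] entries and memory banks. */
struct region { uint64_t start, end; };

static bool ranges_overlap(uint64_t s1, uint64_t e1, uint64_t s2, uint64_t e2)
{
    return s1 < e2 && s2 < e1;   /* standard interval-overlap test */
}

/* Return the index of the first region conflicting with [start, end),
 * or -1 if the candidate load area is clear of all of them. */
static int find_conflict(const struct region *rs, int n,
                         uint64_t start, uint64_t end)
{
    for (int i = 0; i < n; i++)
        if (ranges_overlap(rs[i].start, rs[i].end, start, end))
            return i;
    return -1;
}
```

Feeding it the RESVD[] ranges from the log against the kernel load window (0xe8080000-0xea23c808) would show whether a reserved or device node covers the destination.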


From xen-devel-bounces@lists.xenproject.org Thu Jul 30 19:13:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jul 2020 19:13:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1Dz7-0000S6-6C; Thu, 30 Jul 2020 19:12:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=HZLI=BJ=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1k1Dz5-0000S1-L9
 for xen-devel@lists.xenproject.org; Thu, 30 Jul 2020 19:12:55 +0000
X-Inumbo-ID: ab34def8-d298-11ea-8dbe-bc764e2007e4
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ab34def8-d298-11ea-8dbe-bc764e2007e4;
 Thu, 30 Jul 2020 19:12:54 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1596136374;
 h=subject:to:cc:references:from:message-id:date:
 mime-version:in-reply-to:content-transfer-encoding;
 bh=BoaBywZ5Q7aeJleDVmT69vyw13PTTVnrTp+Lzs+SSjA=;
 b=U7bwUoXoLG9/SwOh0XYh4fKEKOhqpOCQ6JBYmTWe+NZNq/y8iOfTg0+f
 IxA6lZkRLz2yKilCbi+xzphIRtw2hB5E2DJELzVKzuXGPMLR3DeiSuIkm
 ucoUtfAnJeKjHfoHvSmm2BATRFBzK7onkhUeyToAw1hpZ9ZUQI7Bns9Gt o=;
Authentication-Results: esa1.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: 3NyI0uIfPXpMpFW3/ZGZJx9CWqmeiTDrCW+wvcWvzWwWcHNEFhkV2YlUGDO75dFgUxOp70LT2O
 3P/sNoWUNC2LEdowbuwqCmNcYE3sHA6qYR04fEKaPJNB0h+w9R+/u+VngBLfmS/FaYywSn1GEK
 ZDdyTZuCneBVObqPqD7HNH5gaZtC7Fbj2NWw0Dj7QmNiJhLWpiWAKrk0vsbugfzlS11EPebXKE
 6cYU16V7z9c+ex5oo/j207NqmHcsuEidkQUxvBSc1bCjIqs/rMeKOf9B8qcZzyIcVIRybFeJ2j
 /fY=
X-SBRS: 2.7
X-MesageID: 23907082
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,415,1589256000"; d="scan'208";a="23907082"
Subject: Re: [PATCH 3/5] xen/memory: Fix compat XENMEM_acquire_resource for
 size requests
To: Jan Beulich <jbeulich@suse.com>
References: <20200728113712.22966-1-andrew.cooper3@citrix.com>
 <20200728113712.22966-4-andrew.cooper3@citrix.com>
 <0c275cb5-55ec-b0b0-6ba8-cfa7ca23978b@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <d3c31bea-0c31-5822-15cb-226402c4ae75@citrix.com>
Date: Thu, 30 Jul 2020 20:12:48 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <0c275cb5-55ec-b0b0-6ba8-cfa7ca23978b@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Hubert Jasudowicz <hubert.jasudowicz@cert.pl>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Paul Durrant <paul@xen.org>,
 =?UTF-8?Q?Micha=c5=82_Leszczy=c5=84ski?= <michal.leszczynski@cert.pl>,
 Ian Jackson <ian.jackson@citrix.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 29/07/2020 21:09, Jan Beulich wrote:
> On 28.07.2020 13:37, Andrew Cooper wrote:
>> Copy the nr_frames from the correct structure, so the caller doesn't
>> unconditionally receive 0.
>
> Well, no - it does get copied from the correct structure. It's just
> that the field doesn't get set properly up front.

You appear to be objecting to my use of the term "correct".

There are two structures.  One contains the correct value, and one
contains the wrong value, which happens to always be 0.

I stand by the sentence as currently written.

> Otherwise you'll
> (a) build in an unchecked assumption that the native and compat
> fields match in type

Did you actually check?  Because I did before embarking on this course
of action.

In file included from /local/xen.git/xen/include/xen/guest_access.h:10:0,
                 from compat/memory.c:5:
/local/xen.git/xen/include/asm/guest_access.h:152:28: error: comparison
of distinct pointer types lacks a cast [-Werror]
     (void)(&(hnd).p->field == _s);                      \
                            ^
compat/memory.c:628:22: note: in expansion of macro ‘__copy_field_to_guest’
                 if ( __copy_field_to_guest(
                      ^~~~~~~~~~~~~~~~~~~~~

This is what the compiler thinks of the code, when nr_frames is changed
from uint32_t to unsigned long.


> and (b) set a bad example for people looking
> here

This entire function is a massive set of bad examples; the worst IMO
being the fact that there isn't a single useful comment anywhere in it
concerning how the higher level loop structure works.

I'm constantly annoyed that I need to reverse engineer it from scratch
every time I look at it, despite having a better-than-most understanding
of what it is trying to achieve, and how it is supposed to work.

I realise this is no one's fault in particular, but it is not
fair/reasonable to claim that this change is the thing setting a bad
example in this file.

> and then cloning this code in perhaps a case where (a) is not
> even true. If you agree, the alternative change of setting
> cmp.mar.nr_frames from nat.mar->nr_frames before the call is

Is there more to this sentence?

While this example could be implemented (at even higher overhead) by
copying nat back to cmp before passing it back to the guest, the same is
not true for the changes required to fix batching (which is another
series the same size as this).

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu Jul 30 19:37:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jul 2020 19:37:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1EMI-0002F6-87; Thu, 30 Jul 2020 19:36:54 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=fgvr=BJ=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1k1EMH-0002F1-1m
 for xen-devel@lists.xenproject.org; Thu, 30 Jul 2020 19:36:53 +0000
X-Inumbo-ID: 04942e56-d29c-11ea-8dc2-bc764e2007e4
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 04942e56-d29c-11ea-8dc2-bc764e2007e4;
 Thu, 30 Jul 2020 19:36:52 +0000 (UTC)
Received: from localhost (c-67-164-102-47.hsd1.ca.comcast.net [67.164.102.47])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256
 bits)) (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 9C7432072A;
 Thu, 30 Jul 2020 19:36:51 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
 s=default; t=1596137811;
 bh=nllV0vbNRPc7LF/hscfbvxGqVtZquYkchm/h8IciEas=;
 h=Date:From:To:cc:Subject:In-Reply-To:References:From;
 b=TnJCxH0jwgTuAwQlP/o3yYIfGYR/Yyl4oBKKIMQ8jbi2KANC8xc1NLxlGuFn2LngA
 kSg904YjzwTunNLsdTNHKbZduffPDrp7Wn1uYTLEEy8RZUPjvFtlY6Ef9woFC4YThd
 JUCjJ4PCxFuc6wnXbg2DJ4kBezeehOChNxL6fHw4=
Date: Thu, 30 Jul 2020 12:36:51 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien@xen.org>
Subject: Re: [RESEND][PATCH v2 2/7] xen/arm: kernel: Re-order the includes
In-Reply-To: <20200730181827.1670-3-julien@xen.org>
Message-ID: <alpine.DEB.2.21.2007301216060.1767@sstabellini-ThinkPad-T480s>
References: <20200730181827.1670-1-julien@xen.org>
 <20200730181827.1670-3-julien@xen.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, Julien Grall <jgrall@amazon.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Thu, 30 Jul 2020, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> We usually have xen/ includes first and then asm/. They are also ordered
> alphabetically among themselves.
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>

Acked-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
>  xen/arch/arm/kernel.c | 10 +++++-----
>  1 file changed, 5 insertions(+), 5 deletions(-)
> 
> diff --git a/xen/arch/arm/kernel.c b/xen/arch/arm/kernel.c
> index 8eff0748367d..f95fa392af44 100644
> --- a/xen/arch/arm/kernel.c
> +++ b/xen/arch/arm/kernel.c
> @@ -3,20 +3,20 @@
>   *
>   * Copyright (C) 2011 Citrix Systems, Inc.
>   */
> +#include <xen/domain_page.h>
>  #include <xen/errno.h>
> +#include <xen/gunzip.h>
>  #include <xen/init.h>
>  #include <xen/lib.h>
> +#include <xen/libfdt/libfdt.h>
>  #include <xen/mm.h>
> -#include <xen/domain_page.h>
>  #include <xen/sched.h>
> -#include <asm/byteorder.h>
> -#include <asm/setup.h>
> -#include <xen/libfdt/libfdt.h>
> -#include <xen/gunzip.h>
>  #include <xen/vmap.h>
>  
> +#include <asm/byteorder.h>
>  #include <asm/guest_access.h>
>  #include <asm/kernel.h>
> +#include <asm/setup.h>
>  
>  #define UIMAGE_MAGIC          0x27051956
>  #define UIMAGE_NMLEN          32
> -- 
> 2.17.1
> 


From xen-devel-bounces@lists.xenproject.org Thu Jul 30 19:37:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jul 2020 19:37:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1EN9-0002IJ-IL; Thu, 30 Jul 2020 19:37:47 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=fgvr=BJ=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1k1EN7-0002I6-QH
 for xen-devel@lists.xenproject.org; Thu, 30 Jul 2020 19:37:45 +0000
X-Inumbo-ID: 23c205c8-d29c-11ea-ab18-12813bfff9fa
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 23c205c8-d29c-11ea-ab18-12813bfff9fa;
 Thu, 30 Jul 2020 19:37:45 +0000 (UTC)
Received: from localhost (c-67-164-102-47.hsd1.ca.comcast.net [67.164.102.47])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256
 bits)) (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 10FC02072A;
 Thu, 30 Jul 2020 19:37:44 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
 s=default; t=1596137864;
 bh=3y9x+7BSRSZTvCMlMArJSQ3LD2BZ4Ov0frjO4ThCT98=;
 h=Date:From:To:cc:Subject:In-Reply-To:References:From;
 b=nosEZRlwIwD7zScZB/rHNn6sSGp8nnm0xOR+a6bZ/+OuHit1hl/9jLpmg8TVgaXeo
 TfSOIKD95XAdD6KQ+RDX68UXqPwbn+ls5GciGfiRYf4Zs4YpXhN7t9plz/bLGg7oRc
 zdOiwqowu0Ts+pSzksVNV/0OomDLmXPkt2nX5NGU=
Date: Thu, 30 Jul 2020 12:37:43 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien@xen.org>
Subject: Re: [RESEND][PATCH v2 3/7] xen/arm: decode: Re-order the includes
In-Reply-To: <20200730181827.1670-4-julien@xen.org>
Message-ID: <alpine.DEB.2.21.2007301219061.1767@sstabellini-ThinkPad-T480s>
References: <20200730181827.1670-1-julien@xen.org>
 <20200730181827.1670-4-julien@xen.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, Julien Grall <jgrall@amazon.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Thu, 30 Jul 2020, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> We usually have xen/ includes first and then asm/. They are also ordered
> alphabetically among themselves.
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>

Might wanna mention the change from asm/guest_access.h to
xen/guest_access.h. Anyway:

Acked-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
>  xen/arch/arm/decode.c | 5 +++--
>  xen/arch/arm/kernel.c | 2 +-
>  2 files changed, 4 insertions(+), 3 deletions(-)
> 
> diff --git a/xen/arch/arm/decode.c b/xen/arch/arm/decode.c
> index 8b1e15d11892..144793c8cea0 100644
> --- a/xen/arch/arm/decode.c
> +++ b/xen/arch/arm/decode.c
> @@ -17,11 +17,12 @@
>   * GNU General Public License for more details.
>   */
>  
> -#include <xen/types.h>
> +#include <xen/lib.h>
>  #include <xen/sched.h>
> +#include <xen/types.h>
> +
>  #include <asm/current.h>
>  #include <asm/guest_access.h>
> -#include <xen/lib.h>
>  
>  #include "decode.h"
>  
> diff --git a/xen/arch/arm/kernel.c b/xen/arch/arm/kernel.c
> index f95fa392af44..032923853f2c 100644
> --- a/xen/arch/arm/kernel.c
> +++ b/xen/arch/arm/kernel.c
> @@ -5,6 +5,7 @@
>   */
>  #include <xen/domain_page.h>
>  #include <xen/errno.h>
> +#include <xen/guest_access.h>
>  #include <xen/gunzip.h>
>  #include <xen/init.h>
>  #include <xen/lib.h>
> @@ -14,7 +15,6 @@
>  #include <xen/vmap.h>
>  
>  #include <asm/byteorder.h>
> -#include <asm/guest_access.h>
>  #include <asm/kernel.h>
>  #include <asm/setup.h>


From xen-devel-bounces@lists.xenproject.org Thu Jul 30 19:37:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jul 2020 19:37:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1ENC-0002JI-RW; Thu, 30 Jul 2020 19:37:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=fgvr=BJ=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1k1ENC-0002J6-9i
 for xen-devel@lists.xenproject.org; Thu, 30 Jul 2020 19:37:50 +0000
X-Inumbo-ID: 26b5209e-d29c-11ea-8dc2-bc764e2007e4
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 26b5209e-d29c-11ea-8dc2-bc764e2007e4;
 Thu, 30 Jul 2020 19:37:49 +0000 (UTC)
Received: from localhost (c-67-164-102-47.hsd1.ca.comcast.net [67.164.102.47])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256
 bits)) (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 0A5172072A;
 Thu, 30 Jul 2020 19:37:49 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
 s=default; t=1596137869;
 bh=RFAlICnEc9FrvRTqUwCKvIKJ1QviHL6rkJpZKNuh0aE=;
 h=Date:From:To:cc:Subject:In-Reply-To:References:From;
 b=MAC0+LiCnE908RDpLWh+/BmpkhA9BYWc6OsWHHUb9u7conPumofr1YvjMlkZN3xco
 7RoQLc2e0XsqjStt68IB/m+2P8F5VLvUCnqYQ71XoKr6Hi0L5Yjr87UGn/XscKAWO4
 yR+DU79/2t40j3VJSuI5XDflNva0X4DAV9O7L2tk=
Date: Thu, 30 Jul 2020 12:37:48 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien@xen.org>
Subject: Re: [RESEND][PATCH v2 4/7] xen/arm: guestcopy: Re-order the includes
In-Reply-To: <20200730181827.1670-5-julien@xen.org>
Message-ID: <alpine.DEB.2.21.2007301220000.1767@sstabellini-ThinkPad-T480s>
References: <20200730181827.1670-1-julien@xen.org>
 <20200730181827.1670-5-julien@xen.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, Julien Grall <jgrall@amazon.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Thu, 30 Jul 2020, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> We usually have xen/ includes first and then asm/. They are also ordered
> alphabetically among themselves.
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>

Acked-by: Stefano Stabellini <sstabellini@kernel.org>

> ---
>  xen/arch/arm/guestcopy.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/xen/arch/arm/guestcopy.c b/xen/arch/arm/guestcopy.c
> index 7a0f3e9d5fc6..c8023e2bca5d 100644
> --- a/xen/arch/arm/guestcopy.c
> +++ b/xen/arch/arm/guestcopy.c
> @@ -1,7 +1,8 @@
> -#include <xen/lib.h>
>  #include <xen/domain_page.h>
> +#include <xen/lib.h>
>  #include <xen/mm.h>
>  #include <xen/sched.h>
> +
>  #include <asm/current.h>
>  #include <asm/guest_access.h>
>  
> -- 
> 2.17.1
> 


From xen-devel-bounces@lists.xenproject.org Thu Jul 30 19:37:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jul 2020 19:37:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1ENI-0002Kv-40; Thu, 30 Jul 2020 19:37:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=fgvr=BJ=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1k1ENH-0002J6-A5
 for xen-devel@lists.xenproject.org; Thu, 30 Jul 2020 19:37:55 +0000
X-Inumbo-ID: 28d1c9fe-d29c-11ea-8dc2-bc764e2007e4
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 28d1c9fe-d29c-11ea-8dc2-bc764e2007e4;
 Thu, 30 Jul 2020 19:37:53 +0000 (UTC)
Received: from localhost (c-67-164-102-47.hsd1.ca.comcast.net [67.164.102.47])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256
 bits)) (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 48D182072A;
 Thu, 30 Jul 2020 19:37:52 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
 s=default; t=1596137872;
 bh=efS5Hvj4o0BdcoqIAX8jTqcMGbde/mR/Fp0tWDgF4Ng=;
 h=Date:From:To:cc:Subject:In-Reply-To:References:From;
 b=WLwL2IVQ0zt5fX9+Fhro/8ZGE5ZSIKL7DZyBTYMK/qckfl545ks4X0aIjTmQKJwr7
 OvdbzYWXUX5UGyIxeFuUV7cPMbM78Duft7oYlT/TNDokniYjvpnSNekohWhXta0vjp
 mmwjIvRqEorFV1ENfme6/t5Fw5B/jhuwAuoOb0Hs=
Date: Thu, 30 Jul 2020 12:37:51 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien@xen.org>
Subject: Re: [RESEND][PATCH v2 5/7] xen: include xen/guest_access.h rather
 than asm/guest_access.h
In-Reply-To: <20200730181827.1670-6-julien@xen.org>
Message-ID: <alpine.DEB.2.21.2007301222190.1767@sstabellini-ThinkPad-T480s>
References: <20200730181827.1670-1-julien@xen.org>
 <20200730181827.1670-6-julien@xen.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin Tian <kevin.tian@intel.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Jun Nakajima <jun.nakajima@intel.com>, Wei Liu <wl@xen.org>,
 Paul Durrant <paul@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Julien Grall <jgrall@amazon.com>, Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 xen-devel@lists.xenproject.org, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Thu, 30 Jul 2020, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> Only a few places are actually including asm/guest_access.h. While this
> is fine today, a follow-up patch will want to move most of the helpers
> from asm/guest_access.h to xen/guest_access.h.
> 
> To prepare the move, everyone should include xen/guest_access.h rather
> than asm/guest_access.h.
> 
> Interestingly, asm-arm/guest_access.h includes xen/guest_access.h. The
> inclusion is now removed as no-one but the latter should include the
> former.
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>

Acked-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
>     Changes in v2:
>         - Remove some changes that weren't meant to be here.
> ---
>  xen/arch/arm/decode.c                | 2 +-
>  xen/arch/arm/domain.c                | 2 +-
>  xen/arch/arm/guest_walk.c            | 3 ++-
>  xen/arch/arm/guestcopy.c             | 2 +-
>  xen/arch/arm/vgic-v3-its.c           | 2 +-
>  xen/arch/x86/hvm/svm/svm.c           | 2 +-
>  xen/arch/x86/hvm/viridian/viridian.c | 2 +-
>  xen/arch/x86/hvm/vmx/vmx.c           | 2 +-
>  xen/common/libelf/libelf-loader.c    | 2 +-
>  xen/include/asm-arm/guest_access.h   | 1 -
>  xen/lib/x86/private.h                | 2 +-
>  11 files changed, 11 insertions(+), 11 deletions(-)
> 
> diff --git a/xen/arch/arm/decode.c b/xen/arch/arm/decode.c
> index 144793c8cea0..792c2e92a7eb 100644
> --- a/xen/arch/arm/decode.c
> +++ b/xen/arch/arm/decode.c
> @@ -17,12 +17,12 @@
>   * GNU General Public License for more details.
>   */
>  
> +#include <xen/guest_access.h>
>  #include <xen/lib.h>
>  #include <xen/sched.h>
>  #include <xen/types.h>
>  
>  #include <asm/current.h>
> -#include <asm/guest_access.h>
>  
>  #include "decode.h"
>  
> diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
> index 31169326b2e3..9258f6d3faa2 100644
> --- a/xen/arch/arm/domain.c
> +++ b/xen/arch/arm/domain.c
> @@ -12,6 +12,7 @@
>  #include <xen/bitops.h>
>  #include <xen/errno.h>
>  #include <xen/grant_table.h>
> +#include <xen/guest_access.h>
>  #include <xen/hypercall.h>
>  #include <xen/init.h>
>  #include <xen/lib.h>
> @@ -26,7 +27,6 @@
>  #include <asm/current.h>
>  #include <asm/event.h>
>  #include <asm/gic.h>
> -#include <asm/guest_access.h>
>  #include <asm/guest_atomics.h>
>  #include <asm/irq.h>
>  #include <asm/p2m.h>
> diff --git a/xen/arch/arm/guest_walk.c b/xen/arch/arm/guest_walk.c
> index a1cdd7f4afea..b4496c4c86c6 100644
> --- a/xen/arch/arm/guest_walk.c
> +++ b/xen/arch/arm/guest_walk.c
> @@ -16,8 +16,9 @@
>   */
>  
>  #include <xen/domain_page.h>
> +#include <xen/guest_access.h>
>  #include <xen/sched.h>
> -#include <asm/guest_access.h>
> +
>  #include <asm/guest_walk.h>
>  #include <asm/short-desc.h>
>  
> diff --git a/xen/arch/arm/guestcopy.c b/xen/arch/arm/guestcopy.c
> index c8023e2bca5d..32681606d8fc 100644
> --- a/xen/arch/arm/guestcopy.c
> +++ b/xen/arch/arm/guestcopy.c
> @@ -1,10 +1,10 @@
>  #include <xen/domain_page.h>
> +#include <xen/guest_access.h>
>  #include <xen/lib.h>
>  #include <xen/mm.h>
>  #include <xen/sched.h>
>  
>  #include <asm/current.h>
> -#include <asm/guest_access.h>
>  
>  #define COPY_flush_dcache   (1U << 0)
>  #define COPY_from_guest     (0U << 1)
> diff --git a/xen/arch/arm/vgic-v3-its.c b/xen/arch/arm/vgic-v3-its.c
> index 6e153c698d56..58d939b85f92 100644
> --- a/xen/arch/arm/vgic-v3-its.c
> +++ b/xen/arch/arm/vgic-v3-its.c
> @@ -32,6 +32,7 @@
>  #include <xen/bitops.h>
>  #include <xen/config.h>
>  #include <xen/domain_page.h>
> +#include <xen/guest_access.h>
>  #include <xen/lib.h>
>  #include <xen/init.h>
>  #include <xen/softirq.h>
> @@ -39,7 +40,6 @@
>  #include <xen/sched.h>
>  #include <xen/sizes.h>
>  #include <asm/current.h>
> -#include <asm/guest_access.h>
>  #include <asm/mmio.h>
>  #include <asm/gic_v3_defs.h>
>  #include <asm/gic_v3_its.h>
> diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
> index ca3bbfcbb355..7301f3cd6004 100644
> --- a/xen/arch/x86/hvm/svm/svm.c
> +++ b/xen/arch/x86/hvm/svm/svm.c
> @@ -16,6 +16,7 @@
>   * this program; If not, see <http://www.gnu.org/licenses/>.
>   */
>  
> +#include <xen/guest_access.h>
>  #include <xen/init.h>
>  #include <xen/lib.h>
>  #include <xen/trace.h>
> @@ -34,7 +35,6 @@
>  #include <asm/cpufeature.h>
>  #include <asm/processor.h>
>  #include <asm/amd.h>
> -#include <asm/guest_access.h>
>  #include <asm/debugreg.h>
>  #include <asm/msr.h>
>  #include <asm/i387.h>
> diff --git a/xen/arch/x86/hvm/viridian/viridian.c b/xen/arch/x86/hvm/viridian/viridian.c
> index 977c1bc54fad..dc7183a54627 100644
> --- a/xen/arch/x86/hvm/viridian/viridian.c
> +++ b/xen/arch/x86/hvm/viridian/viridian.c
> @@ -5,12 +5,12 @@
>   * Hypervisor Top Level Functional Specification for more information.
>   */
>  
> +#include <xen/guest_access.h>
>  #include <xen/sched.h>
>  #include <xen/version.h>
>  #include <xen/hypercall.h>
>  #include <xen/domain_page.h>
>  #include <xen/param.h>
> -#include <asm/guest_access.h>
>  #include <asm/guest/hyperv-tlfs.h>
>  #include <asm/paging.h>
>  #include <asm/p2m.h>
> diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
> index eb54aadfbafb..cb5df1e81c9c 100644
> --- a/xen/arch/x86/hvm/vmx/vmx.c
> +++ b/xen/arch/x86/hvm/vmx/vmx.c
> @@ -15,6 +15,7 @@
>   * this program; If not, see <http://www.gnu.org/licenses/>.
>   */
>  
> +#include <xen/guest_access.h>
>  #include <xen/init.h>
>  #include <xen/lib.h>
>  #include <xen/param.h>
> @@ -31,7 +32,6 @@
>  #include <asm/regs.h>
>  #include <asm/cpufeature.h>
>  #include <asm/processor.h>
> -#include <asm/guest_access.h>
>  #include <asm/debugreg.h>
>  #include <asm/msr.h>
>  #include <asm/p2m.h>
> diff --git a/xen/common/libelf/libelf-loader.c b/xen/common/libelf/libelf-loader.c
> index 0f468727d04a..629cc0d3e611 100644
> --- a/xen/common/libelf/libelf-loader.c
> +++ b/xen/common/libelf/libelf-loader.c
> @@ -16,7 +16,7 @@
>   */
>  
>  #ifdef __XEN__
> -#include <asm/guest_access.h>
> +#include <xen/guest_access.h>
>  #endif
>  
>  #include "libelf-private.h"
> diff --git a/xen/include/asm-arm/guest_access.h b/xen/include/asm-arm/guest_access.h
> index 31b9f03f0015..b9a89c495527 100644
> --- a/xen/include/asm-arm/guest_access.h
> +++ b/xen/include/asm-arm/guest_access.h
> @@ -1,7 +1,6 @@
>  #ifndef __ASM_ARM_GUEST_ACCESS_H__
>  #define __ASM_ARM_GUEST_ACCESS_H__
>  
> -#include <xen/guest_access.h>
>  #include <xen/errno.h>
>  #include <xen/sched.h>
>  
> diff --git a/xen/lib/x86/private.h b/xen/lib/x86/private.h
> index b793181464f3..2d53bd3ced23 100644
> --- a/xen/lib/x86/private.h
> +++ b/xen/lib/x86/private.h
> @@ -4,12 +4,12 @@
>  #ifdef __XEN__
>  
>  #include <xen/bitops.h>
> +#include <xen/guest_access.h>
>  #include <xen/kernel.h>
>  #include <xen/lib.h>
>  #include <xen/nospec.h>
>  #include <xen/types.h>
>  
> -#include <asm/guest_access.h>
>  #include <asm/msr-index.h>
>  
>  #define copy_to_buffer_offset copy_to_guest_offset
> -- 
> 2.17.1
> 


From xen-devel-bounces@lists.xenproject.org Thu Jul 30 19:38:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jul 2020 19:38:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1ENO-0002P8-Ej; Thu, 30 Jul 2020 19:38:02 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=fgvr=BJ=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1k1ENM-0002Oe-OP
 for xen-devel@lists.xenproject.org; Thu, 30 Jul 2020 19:38:00 +0000
X-Inumbo-ID: 2cd9fe4a-d29c-11ea-8dc2-bc764e2007e4
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2cd9fe4a-d29c-11ea-8dc2-bc764e2007e4;
 Thu, 30 Jul 2020 19:38:00 +0000 (UTC)
Received: from localhost (c-67-164-102-47.hsd1.ca.comcast.net [67.164.102.47])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256
 bits)) (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 1E1492072A;
 Thu, 30 Jul 2020 19:37:59 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
 s=default; t=1596137879;
 bh=eZiWHySLaaa0Sz8CkDLe0qTVueWwx9lKZTEIYPYCLkQ=;
 h=Date:From:To:cc:Subject:In-Reply-To:References:From;
 b=wIz1waKdsZgq2ox8uZVczmghdzkIM22MkAuZu58g3TKQ+N7jz/o4+gU6GDX74UCGN
 rjymMwDew/VCFQQEJCFdps373u1241fB9loAOBf6reXHLxMRuQChZVOGUq3q2/KvDn
 K9NBBuVNbmU5zoVBx3/cx9Cv+5ITIyD4b02Y7o7c=
Date: Thu, 30 Jul 2020 12:37:58 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien@xen.org>
Subject: Re: [RESEND][PATCH v2 6/7] xen/guest_access: Consolidate guest access
 helpers in xen/guest_access.h
In-Reply-To: <20200730181827.1670-7-julien@xen.org>
Message-ID: <alpine.DEB.2.21.2007301234180.1767@sstabellini-ThinkPad-T480s>
References: <20200730181827.1670-1-julien@xen.org>
 <20200730181827.1670-7-julien@xen.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Julien Grall <jgrall@amazon.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 xen-devel@lists.xenproject.org, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Thu, 30 Jul 2020, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> Most of the helpers to access guest memory are implemented the same way
> on Arm and x86. The only differences are:
>     - guest_handle_{from, to}_param(): while on x86 XEN_GUEST_HANDLE()

It is actually just guest_handle_to_param()?


>       and XEN_GUEST_HANDLE_PARAM() are the same, they are not on Arm. It
>       is still fine to use the Arm implementation on x86.
>     - __clear_guest_offset(): Interestingly the prototype does not match
>       between the x86 and Arm. However, the Arm one is bogus. So the x86
>       implementation can be used.
>     - guest_handle{,_subrange}_okay(): They are validly differing
>       because Arm is only supporting auto-translated guest and therefore
>       handles are always valid.
> 
> In the past, the ia64 and ppc64 ports used a different model to access
> guest parameters. They are long gone now.
> 
> Given Xen currently only supports 2 architectures, it is too soon to have
> an asm-generic directory, as it is not possible to differentiate it from
> the existing xen/ directory. If/when there is a 3rd port, we can decide to
> create the new directory if that new port decides to use a different way
> to access guest parameters.
> 
> For now, consolidate it in xen/guest_access.h.
> 
> While it would be possible to adjust the coding style at the same time, this
> is left for a follow-up patch so 'diff' can be used to check the
> consolidation was done correctly.
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>

Looks good to me

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
>     Changes in v2:
>         - Expand the commit message explaining why asm-generic is not
>         created.
> ---
>  xen/include/asm-arm/guest_access.h | 114 ---------------------------
>  xen/include/asm-x86/guest_access.h | 108 --------------------------
>  xen/include/xen/guest_access.h     | 119 +++++++++++++++++++++++++++++
>  3 files changed, 119 insertions(+), 222 deletions(-)
> 
> diff --git a/xen/include/asm-arm/guest_access.h b/xen/include/asm-arm/guest_access.h
> index b9a89c495527..53766386d3d8 100644
> --- a/xen/include/asm-arm/guest_access.h
> +++ b/xen/include/asm-arm/guest_access.h
> @@ -23,88 +23,6 @@ int access_guest_memory_by_ipa(struct domain *d, paddr_t ipa, void *buf,
>  #define __raw_copy_from_guest raw_copy_from_guest
>  #define __raw_clear_guest raw_clear_guest
>  
> -/* Remainder copied from x86 -- could be common? */
> -
> -/* Is the guest handle a NULL reference? */
> -#define guest_handle_is_null(hnd)        ((hnd).p == NULL)
> -
> -/* Offset the given guest handle into the array it refers to. */
> -#define guest_handle_add_offset(hnd, nr) ((hnd).p += (nr))
> -#define guest_handle_subtract_offset(hnd, nr) ((hnd).p -= (nr))
> -
> -/* Cast a guest handle (either XEN_GUEST_HANDLE or XEN_GUEST_HANDLE_PARAM)
> - * to the specified type of XEN_GUEST_HANDLE_PARAM. */
> -#define guest_handle_cast(hnd, type) ({         \
> -    type *_x = (hnd).p;                         \
> -    (XEN_GUEST_HANDLE_PARAM(type)) { _x };            \
> -})
> -
> -/* Convert a XEN_GUEST_HANDLE to XEN_GUEST_HANDLE_PARAM */
> -#define guest_handle_to_param(hnd, type) ({                  \
> -    typeof((hnd).p) _x = (hnd).p;                            \
> -    XEN_GUEST_HANDLE_PARAM(type) _y = { _x };                \
> -    /* type checking: make sure that the pointers inside     \
> -     * XEN_GUEST_HANDLE and XEN_GUEST_HANDLE_PARAM are of    \
> -     * the same type, then return hnd */                     \
> -    (void)(&_x == &_y.p);                                    \
> -    _y;                                                      \
> -})
> -
> -#define guest_handle_for_field(hnd, type, fld)          \
> -    ((XEN_GUEST_HANDLE(type)) { &(hnd).p->fld })
> -
> -#define guest_handle_from_ptr(ptr, type)        \
> -    ((XEN_GUEST_HANDLE_PARAM(type)) { (type *)ptr })
> -#define const_guest_handle_from_ptr(ptr, type)  \
> -    ((XEN_GUEST_HANDLE_PARAM(const_##type)) { (const type *)ptr })
> -
> -/*
> - * Copy an array of objects to guest context via a guest handle,
> - * specifying an offset into the guest array.
> - */
> -#define copy_to_guest_offset(hnd, off, ptr, nr) ({      \
> -    const typeof(*(ptr)) *_s = (ptr);                   \
> -    char (*_d)[sizeof(*_s)] = (void *)(hnd).p;          \
> -    /* Check that the handle is not for a const type */ \
> -    void *__maybe_unused _t = (hnd).p;                  \
> -    (void)((hnd).p == _s);                              \
> -    raw_copy_to_guest(_d+(off), _s, sizeof(*_s)*(nr));  \
> -})
> -
> -/*
> - * Clear an array of objects in guest context via a guest handle,
> - * specifying an offset into the guest array.
> - */
> -#define clear_guest_offset(hnd, off, nr) ({    \
> -    void *_d = (hnd).p;                        \
> -    raw_clear_guest(_d+(off), nr);             \
> -})
> -
> -/*
> - * Copy an array of objects from guest context via a guest handle,
> - * specifying an offset into the guest array.
> - */
> -#define copy_from_guest_offset(ptr, hnd, off, nr) ({    \
> -    const typeof(*(ptr)) *_s = (hnd).p;                 \
> -    typeof(*(ptr)) *_d = (ptr);                         \
> -    raw_copy_from_guest(_d, _s+(off), sizeof(*_d)*(nr));\
> -})
> -
> -/* Copy sub-field of a structure to guest context via a guest handle. */
> -#define copy_field_to_guest(hnd, ptr, field) ({         \
> -    const typeof(&(ptr)->field) _s = &(ptr)->field;     \
> -    void *_d = &(hnd).p->field;                         \
> -    (void)(&(hnd).p->field == _s);                      \
> -    raw_copy_to_guest(_d, _s, sizeof(*_s));             \
> -})
> -
> -/* Copy sub-field of a structure from guest context via a guest handle. */
> -#define copy_field_from_guest(ptr, hnd, field) ({       \
> -    const typeof(&(ptr)->field) _s = &(hnd).p->field;   \
> -    typeof(&(ptr)->field) _d = &(ptr)->field;           \
> -    raw_copy_from_guest(_d, _s, sizeof(*_d));           \
> -})
> -
>  /*
>   * Pre-validate a guest handle.
>   * Allows use of faster __copy_* functions.
> @@ -113,38 +31,6 @@ int access_guest_memory_by_ipa(struct domain *d, paddr_t ipa, void *buf,
>  #define guest_handle_okay(hnd, nr) (1)
>  #define guest_handle_subrange_okay(hnd, first, last) (1)
>  
> -#define __copy_to_guest_offset(hnd, off, ptr, nr) ({    \
> -    const typeof(*(ptr)) *_s = (ptr);                   \
> -    char (*_d)[sizeof(*_s)] = (void *)(hnd).p;          \
> -    /* Check that the handle is not for a const type */ \
> -    void *__maybe_unused _t = (hnd).p;                  \
> -    (void)((hnd).p == _s);                              \
> -    __raw_copy_to_guest(_d+(off), _s, sizeof(*_s)*(nr));\
> -})
> -
> -#define __clear_guest_offset(hnd, off, ptr, nr) ({      \
> -    __raw_clear_guest(_d+(off), nr);  \
> -})
> -
> -#define __copy_from_guest_offset(ptr, hnd, off, nr) ({  \
> -    const typeof(*(ptr)) *_s = (hnd).p;                 \
> -    typeof(*(ptr)) *_d = (ptr);                         \
> -    __raw_copy_from_guest(_d, _s+(off), sizeof(*_d)*(nr));\
> -})
> -
> -#define __copy_field_to_guest(hnd, ptr, field) ({       \
> -    const typeof(&(ptr)->field) _s = &(ptr)->field;     \
> -    void *_d = &(hnd).p->field;                         \
> -    (void)(&(hnd).p->field == _s);                      \
> -    __raw_copy_to_guest(_d, _s, sizeof(*_s));           \
> -})
> -
> -#define __copy_field_from_guest(ptr, hnd, field) ({     \
> -    const typeof(&(ptr)->field) _s = &(hnd).p->field;   \
> -    typeof(&(ptr)->field) _d = &(ptr)->field;           \
> -    __raw_copy_from_guest(_d, _s, sizeof(*_d));         \
> -})
> -
>  #endif /* __ASM_ARM_GUEST_ACCESS_H__ */
>  /*
>   * Local variables:
> diff --git a/xen/include/asm-x86/guest_access.h b/xen/include/asm-x86/guest_access.h
> index 3ffde205f6a1..08c9fbbc78e1 100644
> --- a/xen/include/asm-x86/guest_access.h
> +++ b/xen/include/asm-x86/guest_access.h
> @@ -38,81 +38,6 @@
>       clear_user_hvm((dst), (len)) :             \
>       clear_user((dst), (len)))
>  
> -/* Is the guest handle a NULL reference? */
> -#define guest_handle_is_null(hnd)        ((hnd).p == NULL)
> -
> -/* Offset the given guest handle into the array it refers to. */
> -#define guest_handle_add_offset(hnd, nr) ((hnd).p += (nr))
> -#define guest_handle_subtract_offset(hnd, nr) ((hnd).p -= (nr))
> -
> -/* Cast a guest handle (either XEN_GUEST_HANDLE or XEN_GUEST_HANDLE_PARAM)
> - * to the specified type of XEN_GUEST_HANDLE_PARAM. */
> -#define guest_handle_cast(hnd, type) ({         \
> -    type *_x = (hnd).p;                         \
> -    (XEN_GUEST_HANDLE_PARAM(type)) { _x };            \
> -})
> -
> -/* Convert a XEN_GUEST_HANDLE to XEN_GUEST_HANDLE_PARAM */
> -#define guest_handle_to_param(hnd, type) ({                  \
> -    /* type checking: make sure that the pointers inside     \
> -     * XEN_GUEST_HANDLE and XEN_GUEST_HANDLE_PARAM are of    \
> -     * the same type, then return hnd */                     \
> -    (void)((typeof(&(hnd).p)) 0 ==                           \
> -        (typeof(&((XEN_GUEST_HANDLE_PARAM(type)) {}).p)) 0); \
> -    (hnd);                                                   \
> -})
> -
> -#define guest_handle_for_field(hnd, type, fld)          \
> -    ((XEN_GUEST_HANDLE(type)) { &(hnd).p->fld })
> -
> -#define guest_handle_from_ptr(ptr, type)        \
> -    ((XEN_GUEST_HANDLE_PARAM(type)) { (type *)ptr })
> -#define const_guest_handle_from_ptr(ptr, type)  \
> -    ((XEN_GUEST_HANDLE_PARAM(const_##type)) { (const type *)ptr })
> -
> -/*
> - * Copy an array of objects to guest context via a guest handle,
> - * specifying an offset into the guest array.
> - */
> -#define copy_to_guest_offset(hnd, off, ptr, nr) ({      \
> -    const typeof(*(ptr)) *_s = (ptr);                   \
> -    char (*_d)[sizeof(*_s)] = (void *)(hnd).p;          \
> -    /* Check that the handle is not for a const type */ \
> -    void *__maybe_unused _t = (hnd).p;                  \
> -    (void)((hnd).p == _s);                              \
> -    raw_copy_to_guest(_d+(off), _s, sizeof(*_s)*(nr));  \
> -})
> -
> -/*
> - * Copy an array of objects from guest context via a guest handle,
> - * specifying an offset into the guest array.
> - */
> -#define copy_from_guest_offset(ptr, hnd, off, nr) ({    \
> -    const typeof(*(ptr)) *_s = (hnd).p;                 \
> -    typeof(*(ptr)) *_d = (ptr);                         \
> -    raw_copy_from_guest(_d, _s+(off), sizeof(*_d)*(nr));\
> -})
> -
> -#define clear_guest_offset(hnd, off, nr) ({    \
> -    void *_d = (hnd).p;                        \
> -    raw_clear_guest(_d+(off), nr);             \
> -})
> -
> -/* Copy sub-field of a structure to guest context via a guest handle. */
> -#define copy_field_to_guest(hnd, ptr, field) ({         \
> -    const typeof(&(ptr)->field) _s = &(ptr)->field;     \
> -    void *_d = &(hnd).p->field;                         \
> -    (void)(&(hnd).p->field == _s);                      \
> -    raw_copy_to_guest(_d, _s, sizeof(*_s));             \
> -})
> -
> -/* Copy sub-field of a structure from guest context via a guest handle. */
> -#define copy_field_from_guest(ptr, hnd, field) ({       \
> -    const typeof(&(ptr)->field) _s = &(hnd).p->field;   \
> -    typeof(&(ptr)->field) _d = &(ptr)->field;           \
> -    raw_copy_from_guest(_d, _s, sizeof(*_d));           \
> -})
> -
>  /*
>   * Pre-validate a guest handle.
>   * Allows use of faster __copy_* functions.
> @@ -126,39 +51,6 @@
>                       (last)-(first)+1,                  \
>                       sizeof(*(hnd).p)))
>  
> -#define __copy_to_guest_offset(hnd, off, ptr, nr) ({    \
> -    const typeof(*(ptr)) *_s = (ptr);                   \
> -    char (*_d)[sizeof(*_s)] = (void *)(hnd).p;          \
> -    /* Check that the handle is not for a const type */ \
> -    void *__maybe_unused _t = (hnd).p;                  \
> -    (void)((hnd).p == _s);                              \
> -    __raw_copy_to_guest(_d+(off), _s, sizeof(*_s)*(nr));\
> -})
> -
> -#define __copy_from_guest_offset(ptr, hnd, off, nr) ({  \
> -    const typeof(*(ptr)) *_s = (hnd).p;                 \
> -    typeof(*(ptr)) *_d = (ptr);                         \
> -    __raw_copy_from_guest(_d, _s+(off), sizeof(*_d)*(nr));\
> -})
> -
> -#define __clear_guest_offset(hnd, off, nr) ({    \
> -    void *_d = (hnd).p;                          \
> -    __raw_clear_guest(_d+(off), nr);             \
> -})
> -
> -#define __copy_field_to_guest(hnd, ptr, field) ({       \
> -    const typeof(&(ptr)->field) _s = &(ptr)->field;     \
> -    void *_d = &(hnd).p->field;                         \
> -    (void)(&(hnd).p->field == _s);                      \
> -    __raw_copy_to_guest(_d, _s, sizeof(*_s));           \
> -})
> -
> -#define __copy_field_from_guest(ptr, hnd, field) ({     \
> -    const typeof(&(ptr)->field) _s = &(hnd).p->field;   \
> -    typeof(&(ptr)->field) _d = &(ptr)->field;           \
> -    __raw_copy_from_guest(_d, _s, sizeof(*_d));         \
> -})
> -
>  #endif /* __ASM_X86_GUEST_ACCESS_H__ */
>  /*
>   * Local variables:
> diff --git a/xen/include/xen/guest_access.h b/xen/include/xen/guest_access.h
> index ef9aaa3efcfe..4957b8d1f2b8 100644
> --- a/xen/include/xen/guest_access.h
> +++ b/xen/include/xen/guest_access.h
> @@ -11,6 +11,86 @@
>  #include <xen/types.h>
>  #include <public/xen.h>
>  
> +/* Is the guest handle a NULL reference? */
> +#define guest_handle_is_null(hnd)        ((hnd).p == NULL)
> +
> +/* Offset the given guest handle into the array it refers to. */
> +#define guest_handle_add_offset(hnd, nr) ((hnd).p += (nr))
> +#define guest_handle_subtract_offset(hnd, nr) ((hnd).p -= (nr))
> +
> +/* Cast a guest handle (either XEN_GUEST_HANDLE or XEN_GUEST_HANDLE_PARAM)
> + * to the specified type of XEN_GUEST_HANDLE_PARAM. */
> +#define guest_handle_cast(hnd, type) ({         \
> +    type *_x = (hnd).p;                         \
> +    (XEN_GUEST_HANDLE_PARAM(type)) { _x };            \
> +})
> +
> +/* Cast a XEN_GUEST_HANDLE to XEN_GUEST_HANDLE_PARAM */
> +#define guest_handle_to_param(hnd, type) ({                  \
> +    typeof((hnd).p) _x = (hnd).p;                            \
> +    XEN_GUEST_HANDLE_PARAM(type) _y = { _x };                \
> +    /* type checking: make sure that the pointers inside     \
> +     * XEN_GUEST_HANDLE and XEN_GUEST_HANDLE_PARAM are of    \
> +     * the same type, then return hnd */                     \
> +    (void)(&_x == &_y.p);                                    \
> +    _y;                                                      \
> +})
> +
> +#define guest_handle_for_field(hnd, type, fld)          \
> +    ((XEN_GUEST_HANDLE(type)) { &(hnd).p->fld })
> +
> +#define guest_handle_from_ptr(ptr, type)        \
> +    ((XEN_GUEST_HANDLE_PARAM(type)) { (type *)ptr })
> +#define const_guest_handle_from_ptr(ptr, type)  \
> +    ((XEN_GUEST_HANDLE_PARAM(const_##type)) { (const type *)ptr })
> +
> +/*
> + * Copy an array of objects to guest context via a guest handle,
> + * specifying an offset into the guest array.
> + */
> +#define copy_to_guest_offset(hnd, off, ptr, nr) ({      \
> +    const typeof(*(ptr)) *_s = (ptr);                   \
> +    char (*_d)[sizeof(*_s)] = (void *)(hnd).p;          \
> +    /* Check that the handle is not for a const type */ \
> +    void *__maybe_unused _t = (hnd).p;                  \
> +    (void)((hnd).p == _s);                              \
> +    raw_copy_to_guest(_d+(off), _s, sizeof(*_s)*(nr));  \
> +})
> +
> +/*
> + * Clear an array of objects in guest context via a guest handle,
> + * specifying an offset into the guest array.
> + */
> +#define clear_guest_offset(hnd, off, nr) ({    \
> +    void *_d = (hnd).p;                        \
> +    raw_clear_guest(_d+(off), nr);             \
> +})
> +
> +/*
> + * Copy an array of objects from guest context via a guest handle,
> + * specifying an offset into the guest array.
> + */
> +#define copy_from_guest_offset(ptr, hnd, off, nr) ({    \
> +    const typeof(*(ptr)) *_s = (hnd).p;                 \
> +    typeof(*(ptr)) *_d = (ptr);                         \
> +    raw_copy_from_guest(_d, _s+(off), sizeof(*_d)*(nr));\
> +})
> +
> +/* Copy sub-field of a structure to guest context via a guest handle. */
> +#define copy_field_to_guest(hnd, ptr, field) ({         \
> +    const typeof(&(ptr)->field) _s = &(ptr)->field;     \
> +    void *_d = &(hnd).p->field;                         \
> +    (void)(&(hnd).p->field == _s);                      \
> +    raw_copy_to_guest(_d, _s, sizeof(*_s));             \
> +})
> +
> +/* Copy sub-field of a structure from guest context via a guest handle. */
> +#define copy_field_from_guest(ptr, hnd, field) ({       \
> +    const typeof(&(ptr)->field) _s = &(hnd).p->field;   \
> +    typeof(&(ptr)->field) _d = &(ptr)->field;           \
> +    raw_copy_from_guest(_d, _s, sizeof(*_d));           \
> +})
> +
>  #define copy_to_guest(hnd, ptr, nr)                     \
>      copy_to_guest_offset(hnd, 0, ptr, nr)
>  
> @@ -20,6 +100,45 @@
>  #define clear_guest(hnd, nr)                            \
>      clear_guest_offset(hnd, 0, nr)
>  
> +/*
> + * The __copy_* functions should only be used after the guest handle has
> + * been pre-validated via guest_handle_okay() and
> + * guest_handle_subrange_okay().
> + */
> +
> +#define __copy_to_guest_offset(hnd, off, ptr, nr) ({    \
> +    const typeof(*(ptr)) *_s = (ptr);                   \
> +    char (*_d)[sizeof(*_s)] = (void *)(hnd).p;          \
> +    /* Check that the handle is not for a const type */ \
> +    void *__maybe_unused _t = (hnd).p;                  \
> +    (void)((hnd).p == _s);                              \
> +    __raw_copy_to_guest(_d+(off), _s, sizeof(*_s)*(nr));\
> +})
> +
> +#define __clear_guest_offset(hnd, off, nr) ({    \
> +    void *_d = (hnd).p;                          \
> +    __raw_clear_guest(_d + (off), nr);           \
> +})
> +
> +#define __copy_from_guest_offset(ptr, hnd, off, nr) ({  \
> +    const typeof(*(ptr)) *_s = (hnd).p;                 \
> +    typeof(*(ptr)) *_d = (ptr);                         \
> +    __raw_copy_from_guest(_d, _s+(off), sizeof(*_d)*(nr));\
> +})
> +
> +#define __copy_field_to_guest(hnd, ptr, field) ({       \
> +    const typeof(&(ptr)->field) _s = &(ptr)->field;     \
> +    void *_d = &(hnd).p->field;                         \
> +    (void)(&(hnd).p->field == _s);                      \
> +    __raw_copy_to_guest(_d, _s, sizeof(*_s));           \
> +})
> +
> +#define __copy_field_from_guest(ptr, hnd, field) ({     \
> +    const typeof(&(ptr)->field) _s = &(hnd).p->field;   \
> +    typeof(&(ptr)->field) _d = &(ptr)->field;           \
> +    __raw_copy_from_guest(_d, _s, sizeof(*_d));         \
> +})
> +
>  #define __copy_to_guest(hnd, ptr, nr)                   \
>      __copy_to_guest_offset(hnd, 0, ptr, nr)
>  
> -- 
> 2.17.1
> 


From xen-devel-bounces@lists.xenproject.org Thu Jul 30 19:38:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jul 2020 19:38:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1ENR-0002Qt-TA; Thu, 30 Jul 2020 19:38:05 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=fgvr=BJ=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1k1ENR-0002Oe-7v
 for xen-devel@lists.xenproject.org; Thu, 30 Jul 2020 19:38:05 +0000
X-Inumbo-ID: 2e796402-d29c-11ea-8dc2-bc764e2007e4
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2e796402-d29c-11ea-8dc2-bc764e2007e4;
 Thu, 30 Jul 2020 19:38:02 +0000 (UTC)
Received: from localhost (c-67-164-102-47.hsd1.ca.comcast.net [67.164.102.47])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256
 bits)) (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id E0DE220829;
 Thu, 30 Jul 2020 19:38:01 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
 s=default; t=1596137882;
 bh=YQnSwfl14qMvAUTFCT0b4jxdn2MycRpdn9p+9TgssIM=;
 h=Date:From:To:cc:Subject:In-Reply-To:References:From;
 b=XGyjnW6GqUoCS32aNoa3aVmOcytWvDyDDMm0kUeQAsC83Kydc56jniMkApnE5T/TY
 Eegz4q4CkygUrEEC+Cx4babsrgQqRDlKEFWeiwxyk8zhBvxFnawyEgHK3pJDvxIwZi
 I5stTo/2gP192bHn76Zid1PhIVQt/BW7M+nXZtK4=
Date: Thu, 30 Jul 2020 12:38:01 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien@xen.org>
Subject: Re: [RESEND][PATCH v2 7/7] xen/guest_access: Fix coding style in
 xen/guest_access.h
In-Reply-To: <20200730181827.1670-8-julien@xen.org>
Message-ID: <alpine.DEB.2.21.2007301235360.1767@sstabellini-ThinkPad-T480s>
References: <20200730181827.1670-1-julien@xen.org>
 <20200730181827.1670-8-julien@xen.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Julien Grall <jgrall@amazon.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Thu, 30 Jul 2020, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
>     * Add space before and after operator
>     * Align \
>     * Format comments
> 
> No functional changes expected.
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>

Acked-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
>  xen/include/xen/guest_access.h | 36 +++++++++++++++++++---------------
>  1 file changed, 20 insertions(+), 16 deletions(-)
> 
> diff --git a/xen/include/xen/guest_access.h b/xen/include/xen/guest_access.h
> index 4957b8d1f2b8..52fc7a063249 100644
> --- a/xen/include/xen/guest_access.h
> +++ b/xen/include/xen/guest_access.h
> @@ -18,20 +18,24 @@
>  #define guest_handle_add_offset(hnd, nr) ((hnd).p += (nr))
>  #define guest_handle_subtract_offset(hnd, nr) ((hnd).p -= (nr))
>  
> -/* Cast a guest handle (either XEN_GUEST_HANDLE or XEN_GUEST_HANDLE_PARAM)
> - * to the specified type of XEN_GUEST_HANDLE_PARAM. */
> +/*
> + * Cast a guest handle (either XEN_GUEST_HANDLE or XEN_GUEST_HANDLE_PARAM)
> + * to the specified type of XEN_GUEST_HANDLE_PARAM.
> + */
>  #define guest_handle_cast(hnd, type) ({         \
>      type *_x = (hnd).p;                         \
> -    (XEN_GUEST_HANDLE_PARAM(type)) { _x };            \
> +    (XEN_GUEST_HANDLE_PARAM(type)) { _x };      \
>  })
>  
>  /* Cast a XEN_GUEST_HANDLE to XEN_GUEST_HANDLE_PARAM */
>  #define guest_handle_to_param(hnd, type) ({                  \
>      typeof((hnd).p) _x = (hnd).p;                            \
>      XEN_GUEST_HANDLE_PARAM(type) _y = { _x };                \
> -    /* type checking: make sure that the pointers inside     \
> +    /*                                                       \
> +     * type checking: make sure that the pointers inside     \
>       * XEN_GUEST_HANDLE and XEN_GUEST_HANDLE_PARAM are of    \
> -     * the same type, then return hnd */                     \
> +     * the same type, then return hnd.                       \
> +     */                                                      \
>      (void)(&_x == &_y.p);                                    \
>      _y;                                                      \
>  })
> @@ -106,13 +110,13 @@
>   * guest_handle_subrange_okay().
>   */
>  
> -#define __copy_to_guest_offset(hnd, off, ptr, nr) ({    \
> -    const typeof(*(ptr)) *_s = (ptr);                   \
> -    char (*_d)[sizeof(*_s)] = (void *)(hnd).p;          \
> -    /* Check that the handle is not for a const type */ \
> -    void *__maybe_unused _t = (hnd).p;                  \
> -    (void)((hnd).p == _s);                              \
> -    __raw_copy_to_guest(_d+(off), _s, sizeof(*_s)*(nr));\
> +#define __copy_to_guest_offset(hnd, off, ptr, nr) ({        \
> +    const typeof(*(ptr)) *_s = (ptr);                       \
> +    char (*_d)[sizeof(*_s)] = (void *)(hnd).p;              \
> +    /* Check that the handle is not for a const type */     \
> +    void *__maybe_unused _t = (hnd).p;                      \
> +    (void)((hnd).p == _s);                                  \
> +    __raw_copy_to_guest(_d + (off), _s, sizeof(*_s) * (nr));\
>  })
>  
>  #define __clear_guest_offset(hnd, off, nr) ({    \
> @@ -120,10 +124,10 @@
>      __raw_clear_guest(_d + (off), nr);           \
>  })
>  
> -#define __copy_from_guest_offset(ptr, hnd, off, nr) ({  \
> -    const typeof(*(ptr)) *_s = (hnd).p;                 \
> -    typeof(*(ptr)) *_d = (ptr);                         \
> -    __raw_copy_from_guest(_d, _s+(off), sizeof(*_d)*(nr));\
> +#define __copy_from_guest_offset(ptr, hnd, off, nr) ({          \
> +    const typeof(*(ptr)) *_s = (hnd).p;                         \
> +    typeof(*(ptr)) *_d = (ptr);                                 \
> +    __raw_copy_from_guest(_d, _s + (off), sizeof (*_d) * (nr)); \
>  })
>  
>  #define __copy_field_to_guest(hnd, ptr, field) ({       \
> -- 
> 2.17.1
> 


From xen-devel-bounces@lists.xenproject.org Thu Jul 30 19:44:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jul 2020 19:44:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1ETW-0003Zn-Mi; Thu, 30 Jul 2020 19:44:22 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=fgvr=BJ=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1k1ETV-0003Zi-59
 for xen-devel@lists.xenproject.org; Thu, 30 Jul 2020 19:44:21 +0000
X-Inumbo-ID: 0f994682-d29d-11ea-ab18-12813bfff9fa
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0f994682-d29d-11ea-ab18-12813bfff9fa;
 Thu, 30 Jul 2020 19:44:20 +0000 (UTC)
Received: from localhost (c-67-164-102-47.hsd1.ca.comcast.net [67.164.102.47])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256
 bits)) (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id A01782072A;
 Thu, 30 Jul 2020 19:44:19 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
 s=default; t=1596138259;
 bh=ue3sDqBzxkRw8CCsEptpU2EkFEZDRufHMYpjO1K8rGI=;
 h=Date:From:To:cc:Subject:In-Reply-To:References:From;
 b=Qf656IJK+gwP3pvxj/fiIL9ZZcDybfp7toHjxMSHIVgpi07NHS9ApNobijP/GPglS
 BE8BPXiRO43m7M6NLsX0TOtnRJOlW90xTcANKw+OA8oCVpi4zretVF5+BUtLZLC1DM
 sbr0PKdHK0VdAcD7UrfnwEhEwsR3T7AHvme/Tdzc=
Date: Thu, 30 Jul 2020 12:44:19 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien@xen.org>
Subject: Re: [PATCH] xen/arm: cmpxchg: Add missing memory barriers in
 __cmpxchg_mb_timeout()
In-Reply-To: <20200730170721.23393-1-julien@xen.org>
Message-ID: <alpine.DEB.2.21.2007301115300.1767@sstabellini-ThinkPad-T480s>
References: <20200730170721.23393-1-julien@xen.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, Julien Grall <jgrall@amazon.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Thu, 30 Jul 2020, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> The function __cmpxchg_mb_timeout() was intended to have the same
> semantics as __cmpxchg_mb(). Unfortunately, the memory barriers were
> not added when first implemented.
> 
> There is no known issue with the existing callers, but the barriers are
> added given this is the expected semantics in Xen.
> 
> The issue was introduced by XSA-295.
> 
> Backport: 4.8+
> Fixes: 86b0bc958373 ("xen/arm: cmpxchg: Provide a new helper that can timeout")
> Signed-off-by: Julien Grall <jgrall@amazon.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
>  xen/include/asm-arm/arm32/cmpxchg.h | 8 +++++++-
>  xen/include/asm-arm/arm64/cmpxchg.h | 8 +++++++-
>  2 files changed, 14 insertions(+), 2 deletions(-)
> 
> diff --git a/xen/include/asm-arm/arm32/cmpxchg.h b/xen/include/asm-arm/arm32/cmpxchg.h
> index 49ca2a0d7ab1..0770f272ee99 100644
> --- a/xen/include/asm-arm/arm32/cmpxchg.h
> +++ b/xen/include/asm-arm/arm32/cmpxchg.h
> @@ -147,7 +147,13 @@ static always_inline bool __cmpxchg_mb_timeout(volatile void *ptr,
>  					       int size,
>  					       unsigned int max_try)
>  {
> -	return __int_cmpxchg(ptr, old, new, size, true, max_try);
> +	bool ret;
> +
> +	smp_mb();
> +	ret = __int_cmpxchg(ptr, old, new, size, true, max_try);
> +	smp_mb();
> +
> +	return ret;
>  }
>  
>  #define cmpxchg(ptr,o,n)						\
> diff --git a/xen/include/asm-arm/arm64/cmpxchg.h b/xen/include/asm-arm/arm64/cmpxchg.h
> index 5bc2e1f78674..fc5c60f0bd74 100644
> --- a/xen/include/asm-arm/arm64/cmpxchg.h
> +++ b/xen/include/asm-arm/arm64/cmpxchg.h
> @@ -160,7 +160,13 @@ static always_inline bool __cmpxchg_mb_timeout(volatile void *ptr,
>  					       int size,
>  					       unsigned int max_try)
>  {
> -	return __int_cmpxchg(ptr, old, new, size, true, max_try);
> +	bool ret;
> +
> +	smp_mb();
> +	ret = __int_cmpxchg(ptr, old, new, size, true, max_try);
> +	smp_mb();
> +
> +	return ret;
>  }
>  
>  #define cmpxchg(ptr, o, n) \
> -- 
> 2.17.1
> 


From xen-devel-bounces@lists.xenproject.org Thu Jul 30 19:46:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jul 2020 19:46:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1EVe-0003gO-3T; Thu, 30 Jul 2020 19:46:34 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=HZLI=BJ=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1k1EVc-0003gJ-ID
 for xen-devel@lists.xenproject.org; Thu, 30 Jul 2020 19:46:32 +0000
X-Inumbo-ID: 5d59a146-d29d-11ea-8dc5-bc764e2007e4
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5d59a146-d29d-11ea-8dc5-bc764e2007e4;
 Thu, 30 Jul 2020 19:46:31 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1596138390;
 h=subject:to:cc:references:from:message-id:date:
 mime-version:in-reply-to:content-transfer-encoding;
 bh=o7236mwxcBJcdUA4ijAWCDksAy0tIC0prIHV5JjaWok=;
 b=LYAcKxmsBiKaJigoaztryJpmpBdqdAApY1RlRiJOBM7Ndpl4rAFS0xui
 hOcPCl+B7F4aJhsH2TYAj4tGdS+AfPHprTCgmvVaRdrJpgKeq069Sd7/y
 SSU3tcIyBYGj97bBvT7WvDUAcBLIO30oezO5MOiDjBYRWSlE6gnB+xqBX g=;
Authentication-Results: esa6.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: y6sbKlEqBMtWssC+XHVuhAvCDVY9l0Vk2KNY8UIlEfqsRK8fZmgHfFqobjkiS2oJPrShPy6D3x
 2XJyKyN+dcCYapHa3xzAdF7ZNtvlaAJcIo+IsLgr8RqU25D+PxaYYGFkzLRCFyTFBFTxBaVz/F
 5AYN/iMUhSSiXZMAIGMY6aaInGwngsXH2+4ZjkE0szs8rByzCtryIz5uw3sc+31GNAltqD8YCe
 o6ff5UiT/JEq1soqlKgMzKc8VmLCxUaZhudEmYAXBvMyB0BpCgZPHs1YtZBvs3tPpQctMGSYJg
 NF8=
X-SBRS: 2.7
X-MesageID: 23902736
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,415,1589256000"; d="scan'208";a="23902736"
Subject: Re: [PATCH 4/5] xen/memory: Fix acquire_resource size semantics
To: <paul@xen.org>, 'Xen-devel' <xen-devel@lists.xenproject.org>
References: <20200728113712.22966-1-andrew.cooper3@citrix.com>
 <20200728113712.22966-5-andrew.cooper3@citrix.com>
 <002b01d6664b$c7eb5f40$57c21dc0$@xen.org>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <474ff131-83d8-deff-4e3a-32392ea092b3@citrix.com>
Date: Thu, 30 Jul 2020 20:46:25 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <002b01d6664b$c7eb5f40$57c21dc0$@xen.org>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: 'Stefano Stabellini' <sstabellini@kernel.org>,
 'Julien Grall' <julien@xen.org>, 'Wei Liu' <wl@xen.org>,
 'Konrad Rzeszutek Wilk' <konrad.wilk@oracle.com>, 'George
 Dunlap' <George.Dunlap@eu.citrix.com>,
 =?UTF-8?B?J01pY2hhxYIgTGVzemN6ecWEc2tpJw==?= <michal.leszczynski@cert.pl>,
 'Jan Beulich' <JBeulich@suse.com>, 'Ian Jackson' <ian.jackson@citrix.com>,
 'Hubert Jasudowicz' <hubert.jasudowicz@cert.pl>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 30/07/2020 09:31, Paul Durrant wrote:
>> -----Original Message-----
>> From: Xen-devel <xen-devel-bounces@lists.xenproject.org> On Behalf Of Andrew Cooper
>> Sent: 28 July 2020 12:37
>> To: Xen-devel <xen-devel@lists.xenproject.org>
>> Cc: Hubert Jasudowicz <hubert.jasudowicz@cert.pl>; Stefano Stabellini <sstabellini@kernel.org>; Julien
>> Grall <julien@xen.org>; Wei Liu <wl@xen.org>; Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>; George
>> Dunlap <George.Dunlap@eu.citrix.com>; Andrew Cooper <andrew.cooper3@citrix.com>; Paul Durrant
>> <paul@xen.org>; Jan Beulich <JBeulich@suse.com>; Michał Leszczyński <michal.leszczynski@cert.pl>; Ian
>> Jackson <ian.jackson@citrix.com>
>> Subject: [PATCH 4/5] xen/memory: Fix acquire_resource size semantics
>>
>> Calling XENMEM_acquire_resource with a NULL frame_list is a request for the
>> size of the resource, but the returned 32 is bogus.
>>
>> If someone tries to follow it for XENMEM_resource_ioreq_server, the acquire
>> call will fail as IOREQ servers currently top out at 2 frames, and it is only
>> half the size of the default grant table limit for guests.
>>
>> Also, no users actually request a resource size, because it was never wired up
>> in the sole implementation of resource acquisition in Linux.
>>
>> Introduce a new resource_max_frames() to calculate the size of a resource, and
>> implement it in the IOREQ and grant subsystems.
>>
>> It is impossible to guarentee that a mapping call following a successful size
> s/guarentee/guarantee
>
>> call will succedd (e.g. The target IOREQ server gets destroyed, or the domain
> s/succedd/succeed
>
>> switches from grant v2 to v1).  Document the restriction, and use the
>> flexibility to simplify the paths to be lockless.
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>> ---
>> CC: George Dunlap <George.Dunlap@eu.citrix.com>
>> CC: Ian Jackson <ian.jackson@citrix.com>
>> CC: Jan Beulich <JBeulich@suse.com>
>> CC: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
>> CC: Stefano Stabellini <sstabellini@kernel.org>
>> CC: Wei Liu <wl@xen.org>
>> CC: Julien Grall <julien@xen.org>
>> CC: Paul Durrant <paul@xen.org>
>> CC: Michał Leszczyński <michal.leszczynski@cert.pl>
>> CC: Hubert Jasudowicz <hubert.jasudowicz@cert.pl>
>> ---
>>  xen/arch/x86/mm.c             | 20 ++++++++++++++++
>>  xen/common/grant_table.c      | 19 +++++++++++++++
>>  xen/common/memory.c           | 55 +++++++++++++++++++++++++++++++++----------
>>  xen/include/asm-x86/mm.h      |  3 +++
>>  xen/include/public/memory.h   | 16 +++++++++----
>>  xen/include/xen/grant_table.h |  8 +++++++
>>  xen/include/xen/mm.h          |  6 +++++
>>  7 files changed, 110 insertions(+), 17 deletions(-)
>>
>> diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
>> index 82bc676553..f73a90a2ab 100644
>> --- a/xen/arch/x86/mm.c
>> +++ b/xen/arch/x86/mm.c
>> @@ -4600,6 +4600,26 @@ int xenmem_add_to_physmap_one(
>>      return rc;
>>  }
>>
>> +unsigned int arch_resource_max_frames(
>> +    struct domain *d, unsigned int type, unsigned int id)
>> +{
>> +    unsigned int nr = 0;
>> +
>> +    switch ( type )
>> +    {
>> +#ifdef CONFIG_HVM
>> +    case XENMEM_resource_ioreq_server:
>> +        if ( !is_hvm_domain(d) )
>> +            break;
>> +        /* One frame for the buf-ioreq ring, and one frame per 128 vcpus. */
>> +        nr = 1 + DIV_ROUND_UP(d->max_vcpus * sizeof(struct ioreq), PAGE_SIZE);
>> +        break;
>> +#endif
>> +    }
>> +
>> +    return nr;
>> +}
>> +
>>  int arch_acquire_resource(struct domain *d, unsigned int type,
>>                            unsigned int id, unsigned long frame,
>>                            unsigned int nr_frames, xen_pfn_t mfn_list[])
>> diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
>> index 122d1e7596..0962fc7169 100644
>> --- a/xen/common/grant_table.c
>> +++ b/xen/common/grant_table.c
>> @@ -4013,6 +4013,25 @@ static int gnttab_get_shared_frame_mfn(struct domain *d,
>>      return 0;
>>  }
>>
>> +unsigned int gnttab_resource_max_frames(struct domain *d, unsigned int id)
>> +{
>> +    unsigned int nr = 0;
>> +
>> +    /* Don't need the grant lock.  This limit is fixed at domain create time. */
>> +    switch ( id )
>> +    {
>> +    case XENMEM_resource_grant_table_id_shared:
>> +        nr = d->grant_table->max_grant_frames;
>> +        break;
>> +
>> +    case XENMEM_resource_grant_table_id_status:
>> +        nr = grant_to_status_frames(d->grant_table->max_grant_frames);
> Two uses of d->grant_table, so perhaps define a stack variable for it?

Can do.

>  Also, should you not make sure 0 is returned in the case of a v1 table?

This was the case specifically discussed in the commit message, but
perhaps it needs expanding.

Doing so would be buggy.

Some utility is going to query the resource size, and then try to map it
(if it doesn't blindly know the size and/or subset it cares about already).

In between these two hypercalls from the utility, the guest can do a
v1=>v2 or v2=>v1 switch and make the resource spontaneously appear or
disappear.

The only case where we can know for certain whether the resource is
available is when we're in the map hypercall.  Therefore, userspace has
to be able to get to the map call if there is potentially a resource
available.

The semantics of the size call are really "this resource might exist,
and if it does, this is how large it is".


As for the grant status frames specifically, I think making them a
mappable resource might have been a poor choice in hindsight.

Only the guest can switch between grant versions.  GNTTABOP_set_version
strictly operates on current, unlike most of the other grant hypercalls
which take a domid and let dom0 specify something other than DOMID_SELF.

There is GNTTABOP_get_version, but it is racy to use in the same way as
described above, and if some utility does successfully map the status
frames, what will happen in practice is that a guest attempting to
switch from v2 back to v1 will have the set_version hypercall fail due
to outstanding refs on the frames.

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu Jul 30 19:49:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jul 2020 19:49:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1EY3-0003pE-Gz; Thu, 30 Jul 2020 19:49:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=B5Vg=BJ=xen.org=paul@srs-us1.protection.inumbo.net>)
 id 1k1EY2-0003p9-44
 for xen-devel@lists.xenproject.org; Thu, 30 Jul 2020 19:49:02 +0000
X-Inumbo-ID: b73ccbb6-d29d-11ea-8dc5-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b73ccbb6-d29d-11ea-8dc5-bc764e2007e4;
 Thu, 30 Jul 2020 19:49:01 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:Message-Id:Date:
 Subject:Cc:To:From:Sender:Reply-To:Content-Type:Content-ID:
 Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc
 :Resent-Message-ID:In-Reply-To:References:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=PZ8lbk66NMZFhcqCNLy0A0GihyZxJjuJBPFA0eXIoLI=; b=WL+RBSoci0h3L6nuZjOJpCZ9GF
 uRiXJdh04YGDQYVyWuvL+9tiCI8I0GqbF2WdPTHCgOV0wlNzZfcPM1k7U1zxyhhzpJe2qKNLMABWN
 4wASWkQ+kybALla91Ww50QCxX74ak62qaaaLtKUKteSwN5mtwwcRWHYafhzbu1X3D/W0=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1k1EY1-0001Ka-7k; Thu, 30 Jul 2020 19:49:01 +0000
Received: from host86-143-223-30.range86-143.btcentralplus.com
 ([86.143.223.30] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1k1EY0-0004CS-V5; Thu, 30 Jul 2020 19:49:01 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [PATCH 0/4] tools: propagate bridge MTU to vif frontends
Date: Thu, 30 Jul 2020 20:48:54 +0100
Message-Id: <20200730194858.28523-1-paul@xen.org>
X-Mailer: git-send-email 2.20.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Paul Durrant <pdurrant@amazon.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Paul Durrant <pdurrant@amazon.com>

This is long-missing functionality.

Paul Durrant (4):
  tools/hotplug: add remove_from_bridge() and improve debug output
  tools/hotplug: combine add/online and remove/offline in vif-bridge...
  public/io/netif: specify MTU override node
  tools/hotplug: modify set_mtu() to inform the frontend via xenstore

 tools/hotplug/Linux/vif-bridge            | 20 +++------
 tools/hotplug/Linux/xen-network-common.sh | 51 +++++++++++++++++++----
 xen/include/public/io/netif.h             | 12 ++++++
 3 files changed, 60 insertions(+), 23 deletions(-)

-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Jul 30 19:49:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jul 2020 19:49:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1EY5-0003pV-Ok; Thu, 30 Jul 2020 19:49:05 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=B5Vg=BJ=xen.org=paul@srs-us1.protection.inumbo.net>)
 id 1k1EY4-0003pM-Qz
 for xen-devel@lists.xenproject.org; Thu, 30 Jul 2020 19:49:04 +0000
X-Inumbo-ID: b8789c8a-d29d-11ea-ab18-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b8789c8a-d29d-11ea-ab18-12813bfff9fa;
 Thu, 30 Jul 2020 19:49:03 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
 In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:Content-Type:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=8UnD14dYKCdyYi6YratGsYxZMEOmXUkZTQOXSnNy2ec=; b=qM1rwFgsvkvmFrUxckN5wRBlEU
 QVo9/7xQDlBETj2ZmEZsV/Ic5WufQEBs9w+8R4Bbt9P7YB1vwzGubUznHt6Od+KtzNnj5PnaeTOrj
 m/tNYypKBGzS8akFZFj43vmBx7YehsJ0s1+PygG/KASXi980e4mTT4iEwORXYL4m+KqQ=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1k1EY3-0001L1-9z; Thu, 30 Jul 2020 19:49:03 +0000
Received: from host86-143-223-30.range86-143.btcentralplus.com
 ([86.143.223.30] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1k1EY3-0004CS-2s; Thu, 30 Jul 2020 19:49:03 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [PATCH 3/4] public/io/netif: specify MTU override node
Date: Thu, 30 Jul 2020 20:48:57 +0100
Message-Id: <20200730194858.28523-4-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200730194858.28523-1-paul@xen.org>
References: <20200730194858.28523-1-paul@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>, Paul Durrant <pdurrant@amazon.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Paul Durrant <pdurrant@amazon.com>

There is currently no documentation stating what MTU a frontend should
advertise to its network stack. It has, however, long been assumed that the
default value of 1500 is correct.

This patch specifies a mechanism to allow the tools to set the MTU via a
xenstore node in the frontend area and states that the absence of that node
means the frontend should assume an MTU of 1500 octets.

NOTE: The Windows PV frontend has used an MTU sampled from the xenstore
      node specified in this patch.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Juergen Gross <jgross@suse.com>
---
 xen/include/public/io/netif.h | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/xen/include/public/io/netif.h b/xen/include/public/io/netif.h
index 9fcf91a2fe..00dd258712 100644
--- a/xen/include/public/io/netif.h
+++ b/xen/include/public/io/netif.h
@@ -204,6 +204,18 @@
  * present).
  */
 
+/*
+ * MTU
+ * ===
+ *
+ * The toolstack may set a value of MTU for the frontend by setting the
+ * /local/domain/<domid>/device/vif/<vif>/mtu node with the MTU value in
+ * octets. If this node is absent the frontend should assume an MTU value
+ * of 1500 octets. A frontend is also at liberty to ignore this value, so
+ * it is only suitable for informing the frontend that a packet payload
+ * >1500 octets is permitted.
+ */
+
 /*
  * Hash types
  * ==========
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Jul 30 19:49:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jul 2020 19:49:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1EY8-0003qJ-0Q; Thu, 30 Jul 2020 19:49:08 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=B5Vg=BJ=xen.org=paul@srs-us1.protection.inumbo.net>)
 id 1k1EY7-0003p9-2y
 for xen-devel@lists.xenproject.org; Thu, 30 Jul 2020 19:49:07 +0000
X-Inumbo-ID: b84e38be-d29d-11ea-8dc5-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b84e38be-d29d-11ea-8dc5-bc764e2007e4;
 Thu, 30 Jul 2020 19:49:03 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
 In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:Content-Type:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=kR8KOrJwsxDE/ONPFOt3nY9mFnEziImu/LnFUCfqeYk=; b=jNw6wNDiG67DDYLFxqmgL6wTMu
 uk20JibukfLKx343xnmcK4QeGDATQ3ngudLzGnF5ViL43KWpTfALxK/MlWmsnWZSR7CYFli64kgAW
 MY+qrPNP1eEbIvYlT2svb1yASIIsi+q9Xo0Rt+iIK1/NIe1K2tzAlbTr4dlM/94sP7cU=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1k1EY2-0001Kv-LW; Thu, 30 Jul 2020 19:49:02 +0000
Received: from host86-143-223-30.range86-143.btcentralplus.com
 ([86.143.223.30] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1k1EY2-0004CS-EG; Thu, 30 Jul 2020 19:49:02 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [PATCH 2/4] tools/hotplug: combine add/online and remove/offline in
 vif-bridge...
Date: Thu, 30 Jul 2020 20:48:56 +0100
Message-Id: <20200730194858.28523-3-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200730194858.28523-1-paul@xen.org>
References: <20200730194858.28523-1-paul@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Paul Durrant <pdurrant@amazon.com>, Ian Jackson <ian.jackson@eu.citrix.com>,
 Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Paul Durrant <pdurrant@amazon.com>

... as they are in vif-route.

The script is invoked with online/offline for vifs and add/remove for taps.
The operations that are necessary, however, are the same in both cases. This
patch therefore combines the cases.

The open-coded bridge removal code is also replaced with a call to
remove_from_bridge().

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Wei Liu <wl@xen.org>
---
 tools/hotplug/Linux/vif-bridge | 18 +++++-------------
 1 file changed, 5 insertions(+), 13 deletions(-)

diff --git a/tools/hotplug/Linux/vif-bridge b/tools/hotplug/Linux/vif-bridge
index e722090ca8..e1d7c49788 100644
--- a/tools/hotplug/Linux/vif-bridge
+++ b/tools/hotplug/Linux/vif-bridge
@@ -77,25 +77,17 @@ then
 fi
 
 case "$command" in
+    add)
+        ;&
     online)
         setup_virtual_bridge_port "$dev"
         set_mtu "$bridge" "$dev"
         add_to_bridge "$bridge" "$dev"
         ;;
-
+    remove)
+        ;&
     offline)
-        if which brctl >&/dev/null; then
-            do_without_error brctl delif "$bridge" "$dev"
-        else
-            do_without_error ip link set "$dev" nomaster
-        fi
-        do_without_error ifconfig "$dev" down
-        ;;
-
-    add)
-        setup_virtual_bridge_port "$dev"
-        set_mtu "$bridge" "$dev"
-        add_to_bridge "$bridge" "$dev"
+        remove_from_bridge "$bridge" "$dev"
         ;;
 esac
 
-- 
2.20.1
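The ';&' terminator used in the hunk above is a bash fall-through: control
continues into the next pattern's body without re-matching, which is what
lets add/online and remove/offline share one implementation. A minimal
standalone sketch of that construct (handle_command is hypothetical, not
part of the real script):

```shell
#!/usr/bin/env bash
# Sketch of the ';&' fall-through used in vif-bridge above (a bash-ism,
# not POSIX sh): 'add' falls through into the 'online' body, and
# 'remove' into the 'offline' body, so each pair shares one case arm.
handle_command() {
    case "$1" in
        add)
            ;&
        online)
            echo "setup $2"
            ;;
        remove)
            ;&
        offline)
            echo "teardown $2"
            ;;
    esac
}

handle_command add vif7.0
handle_command offline vif7.0
```

Note that ';&' requires bash (or ksh/zsh); scripts using it cannot be run
under a strictly POSIX /bin/sh.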



From xen-devel-bounces@lists.xenproject.org Thu Jul 30 19:49:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jul 2020 19:49:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1EYB-0003rq-DB; Thu, 30 Jul 2020 19:49:11 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=B5Vg=BJ=xen.org=paul@srs-us1.protection.inumbo.net>)
 id 1k1EY9-0003pM-PR
 for xen-devel@lists.xenproject.org; Thu, 30 Jul 2020 19:49:09 +0000
X-Inumbo-ID: b7d6e534-d29d-11ea-ab18-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b7d6e534-d29d-11ea-ab18-12813bfff9fa;
 Thu, 30 Jul 2020 19:49:02 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
 In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:Content-Type:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=fEecjANlWh3hBTz/ieGtixPqvrZoBJyua6jWfk3Vf10=; b=G5Qsw7KVQaPGTbUXp7+U+YIsx1
 icyS+WhS4A2kpWgA4LgOE29z4wLfrYOYNjehteBqZridgptx9qrXC5I2caHh7i88uZRfiAUdH7iJm
 bwY7HtldogY8HtDfNy8jLXwkVOcwvSwxR0ee6PiLh4UBr8RdPRNcR9Wtii8NPZMgZ/co=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1k1EY1-0001Kr-Vg; Thu, 30 Jul 2020 19:49:01 +0000
Received: from host86-143-223-30.range86-143.btcentralplus.com
 ([86.143.223.30] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1k1EY1-0004CS-Mi; Thu, 30 Jul 2020 19:49:01 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [PATCH 1/4] tools/hotplug: add remove_from_bridge() and improve debug
 output
Date: Thu, 30 Jul 2020 20:48:55 +0100
Message-Id: <20200730194858.28523-2-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200730194858.28523-1-paul@xen.org>
References: <20200730194858.28523-1-paul@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Paul Durrant <pdurrant@amazon.com>, Ian Jackson <ian.jackson@eu.citrix.com>,
 Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Paul Durrant <pdurrant@amazon.com>

This patch adds a remove_from_bridge() function to xen-network-common.sh
to partner with the existing add_to_bridge() function. The code in
add_to_bridge() is also slightly rearranged to avoid duplicate calls of
'ip link'.

Both add_to_bridge() and remove_from_bridge() will check if their bridge
manipulation operations are necessary and emit a log message if they are not.

NOTE: A call to remove_from_bridge() will be added by a subsequent patch.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Wei Liu <wl@xen.org>
---
 tools/hotplug/Linux/xen-network-common.sh | 37 ++++++++++++++++++-----
 1 file changed, 29 insertions(+), 8 deletions(-)

diff --git a/tools/hotplug/Linux/xen-network-common.sh b/tools/hotplug/Linux/xen-network-common.sh
index 8dd3a62068..37e71cfa9c 100644
--- a/tools/hotplug/Linux/xen-network-common.sh
+++ b/tools/hotplug/Linux/xen-network-common.sh
@@ -126,19 +126,40 @@ add_to_bridge () {
     local bridge=$1
     local dev=$2
 
-    # Don't add $dev to $bridge if it's already on a bridge.
-    if [ -e "/sys/class/net/${bridge}/brif/${dev}" ]; then
-	ip link set dev ${dev} up || true
-	return
-    fi
-    if which brctl >&/dev/null; then
-        brctl addif ${bridge} ${dev}
+    # Don't add $dev to $bridge if it's already on the bridge.
+    if [ ! -e "/sys/class/net/${bridge}/brif/${dev}" ]; then
+	log debug "adding $dev to bridge $bridge"
+	if which brctl >&/dev/null; then
+            brctl addif ${bridge} ${dev}
+	else
+            ip link set ${dev} master ${bridge}
+	fi
     else
-        ip link set ${dev} master ${bridge}
+	log debug "$dev already on bridge $bridge"
     fi
+
     ip link set dev ${dev} up
 }
 
+remove_from_bridge () {
+    local bridge=$1
+    local dev=$2
+
+    ip link set dev ${dev} down || :
+
+    # Don't remove $dev from $bridge if it's not on the bridge.
+    if [ -e "/sys/class/net/${bridge}/brif/${dev}" ]; then
+	log debug "removing $dev from bridge $bridge"
+	if which brctl >&/dev/null; then
+            brctl delif ${bridge} ${dev}
+	else
+            ip link set ${dev} nomaster
+	fi
+    else
+	log debug "$dev not on bridge $bridge"
+    fi
+}
+
 # Usage: set_mtu bridge dev
 set_mtu () {
     local bridge=$1
-- 
2.20.1
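The membership tests added in add_to_bridge() and remove_from_bridge()
above key off the kernel's sysfs view of bridge ports: a device enslaved
to a Linux bridge appears as /sys/class/net/<bridge>/brif/<dev>. A
minimal sketch of that check, using a scratch directory in place of
/sys/class/net so the logic can be exercised without a real bridge (the
path layout is the real one; SYSFS_ROOT is simulated):

```shell
#!/bin/sh
# A port enslaved to a Linux bridge shows up as
# /sys/class/net/<bridge>/brif/<dev>; the patch above tests for that
# path before adding or removing the port.  SYSFS_ROOT stands in for
# /sys/class/net here so this sketch runs anywhere.
SYSFS_ROOT="$(mktemp -d)"

on_bridge() {
    # on_bridge <bridge> <dev> -> success if dev is a port of bridge
    [ -e "${SYSFS_ROOT}/$1/brif/$2" ]
}

# Simulate vif7.0 already being a port of xenbr0.
mkdir -p "${SYSFS_ROOT}/xenbr0/brif/vif7.0"

if on_bridge xenbr0 vif7.0; then
    echo "vif7.0 already on bridge xenbr0"
fi
if ! on_bridge xenbr0 vif7.1; then
    echo "vif7.1 not on bridge xenbr0"
fi

rm -rf "${SYSFS_ROOT}"
```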



From xen-devel-bounces@lists.xenproject.org Thu Jul 30 19:49:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jul 2020 19:49:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1EYD-0003t9-Lc; Thu, 30 Jul 2020 19:49:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=B5Vg=BJ=xen.org=paul@srs-us1.protection.inumbo.net>)
 id 1k1EYC-0003p9-39
 for xen-devel@lists.xenproject.org; Thu, 30 Jul 2020 19:49:12 +0000
X-Inumbo-ID: b84e38bf-d29d-11ea-8dc5-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b84e38bf-d29d-11ea-8dc5-bc764e2007e4;
 Thu, 30 Jul 2020 19:49:04 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
 In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:Content-Type:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=TpRalkvkG38JtEdNhqPS3RNesiK5+8RoUBFVAeIBPhI=; b=WxT0hE7RoSokynNV8k06V/6cIF
 S9AOZWhblGoHd0merPmft9QPFy+61K5AEobk0g62atYnIFOBmhuWDaKghbpAWAsDxsmfUkCvLBhVw
 yI/yopP+8i7mzveNfASh1hdVRPFI9Wk76GZspJrgeCe74/W/u9Z07YMDel06MBn3onCA=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1k1EY4-0001L6-2O; Thu, 30 Jul 2020 19:49:04 +0000
Received: from host86-143-223-30.range86-143.btcentralplus.com
 ([86.143.223.30] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1k1EY3-0004CS-Qi; Thu, 30 Jul 2020 19:49:04 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [PATCH 4/4] tools/hotplug: modify set_mtu() to inform the frontend
 via xenstore
Date: Thu, 30 Jul 2020 20:48:58 +0100
Message-Id: <20200730194858.28523-5-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200730194858.28523-1-paul@xen.org>
References: <20200730194858.28523-1-paul@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Paul Durrant <pdurrant@amazon.com>, Ian Jackson <ian.jackson@eu.citrix.com>,
 Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Paul Durrant <pdurrant@amazon.com>

set_mtu() currently sets the backend vif MTU but does not inform the frontend
what it is. This patch adds code to write the MTU into a xenstore node. See
netif.h for a specification of the node.

NOTE: There is also a small modification replacing '$mtu' with '${mtu}'
      for style consistency.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Wei Liu <wl@xen.org>
---
 tools/hotplug/Linux/vif-bridge            |  2 +-
 tools/hotplug/Linux/xen-network-common.sh | 14 +++++++++++++-
 2 files changed, 14 insertions(+), 2 deletions(-)

diff --git a/tools/hotplug/Linux/vif-bridge b/tools/hotplug/Linux/vif-bridge
index e1d7c49788..b99cc82a21 100644
--- a/tools/hotplug/Linux/vif-bridge
+++ b/tools/hotplug/Linux/vif-bridge
@@ -81,7 +81,7 @@ case "$command" in
         ;&
     online)
         setup_virtual_bridge_port "$dev"
-        set_mtu "$bridge" "$dev"
+        set_mtu "$bridge" "$dev" "$type_if"
         add_to_bridge "$bridge" "$dev"
         ;;
     remove)
diff --git a/tools/hotplug/Linux/xen-network-common.sh b/tools/hotplug/Linux/xen-network-common.sh
index 37e71cfa9c..24fc42d9cf 100644
--- a/tools/hotplug/Linux/xen-network-common.sh
+++ b/tools/hotplug/Linux/xen-network-common.sh
@@ -164,9 +164,21 @@ remove_from_bridge () {
 set_mtu () {
     local bridge=$1
     local dev=$2
+    local type_if=$3
+
     mtu="`ip link show dev ${bridge}| awk '/mtu/ { print $5 }'`"
     if [ -n "$mtu" ] && [ "$mtu" -gt 0 ]
     then
-            ip link set dev ${dev} mtu $mtu || :
+            ip link set dev ${dev} mtu ${mtu} || :
+    fi
+
+    if [ ${type_if} = vif ]
+    then
+       dev_=${dev#vif}
+       domid=${dev_%.*}
+       devid=${dev_#*.}
+
+       XENBUS_PATH="/local/domain/$domid/device/vif/$devid"
+       xenstore_write "$XENBUS_PATH/mtu" ${mtu}
     fi
 }
-- 
2.20.1
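The xenstore path written by set_mtu() above is derived from the backend
device name, which for vifs has the form vif<domid>.<devid>. The
parameter-expansion steps from the patch, pulled out as a standalone
sketch (parse_vif is a hypothetical helper, not part of the real script):

```shell
#!/bin/sh
# Derive domid and devid from a backend vif name, exactly as set_mtu()
# above does: strip the "vif" prefix, then split on the first '.'.
parse_vif() {
    dev_=${1#vif}        # "vif7.0" -> "7.0"
    domid=${dev_%.*}     # "7.0"    -> "7"
    devid=${dev_#*.}     # "7.0"    -> "0"
    echo "${domid} ${devid}"
}

parse_vif vif7.0    # prints "7 0"
```

These expansions are plain POSIX parameter expansion, so no external
tools are needed to build the /local/domain/<domid>/device/vif/<devid>
path.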



From xen-devel-bounces@lists.xenproject.org Thu Jul 30 19:53:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jul 2020 19:53:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1EcA-00051D-7w; Thu, 30 Jul 2020 19:53:18 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=HZLI=BJ=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1k1Ec9-000516-0i
 for xen-devel@lists.xenproject.org; Thu, 30 Jul 2020 19:53:17 +0000
X-Inumbo-ID: 4e29fdaa-d29e-11ea-ab1a-12813bfff9fa
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4e29fdaa-d29e-11ea-ab1a-12813bfff9fa;
 Thu, 30 Jul 2020 19:53:15 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1596138794;
 h=subject:to:cc:references:from:message-id:date:
 mime-version:in-reply-to:content-transfer-encoding;
 bh=wf+fhMOVf1vvOkTyouZrdEt+26wjL/aBc/DrRpUDK0Y=;
 b=SYnhJ+ltlbHuyml+l2f+lUg5RFJOTpIyuDlNqIm+D/ZnNbThxxQ+xGgr
 f7iwbKzHnb+hWUyK7hg7UE/+jrrtuEAYF2KfKDdb3IYKAWGRa0DfB5H34
 EVTg9Jy87L7PJJ8p1y2j2P+H/laGCHZDcJPSR8gZMfnf0b9LMCmO76IbW I=;
Authentication-Results: esa5.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: nqnJxs+Xpx7NxZTh8AWiEtTmjTO27dOHGkZygQHHS0P+9CUccpL9tymdiTLvS2RxAmEKPWgHwW
 OR/ZZyOWQcfS1V13EAlyEiEJK6mqkNB3xNckX4+oOkqJQaKMXFduBL5YCOenhpW8UvlIWH/wOs
 XauWjqJ6VY3BQxDBtZfKN0FXVIQTyBejEbTWIBnmsKf0vzBaqWI3Ut0LwQUtbYBgTOQNuX62j/
 PtbyKXmyF4iQ6TZkWNCMa6A0oi6sM5T/sGvKjdH6V7hexG5BYffX5yMX5IwnIz8QIWxvUJA4uy
 Iek=
X-SBRS: 2.7
X-MesageID: 23761758
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,415,1589256000"; d="scan'208";a="23761758"
Subject: Re: [PATCH 4/5] xen/memory: Fix acquire_resource size semantics
To: Julien Grall <julien@xen.org>, <paul@xen.org>, 'Xen-devel'
 <xen-devel@lists.xenproject.org>
References: <20200728113712.22966-1-andrew.cooper3@citrix.com>
 <20200728113712.22966-5-andrew.cooper3@citrix.com>
 <002b01d6664b$c7eb5f40$57c21dc0$@xen.org>
 <d0c00a30-2f72-036e-d574-a82e96ea79ea@xen.org>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <7b7aae0d-35d8-8bfa-7352-8e3c58873964@citrix.com>
Date: Thu, 30 Jul 2020 20:53:10 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <d0c00a30-2f72-036e-d574-a82e96ea79ea@xen.org>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: 'Stefano Stabellini' <sstabellini@kernel.org>,
 'Hubert Jasudowicz' <hubert.jasudowicz@cert.pl>, 'Wei Liu' <wl@xen.org>,
 'Konrad Rzeszutek Wilk' <konrad.wilk@oracle.com>,
 'George Dunlap' <George.Dunlap@eu.citrix.com>,
 =?UTF-8?B?J01pY2hhxYIgTGVzemN6ecWEc2tpJw==?= <michal.leszczynski@cert.pl>,
 'Jan Beulich' <JBeulich@suse.com>, 'Ian Jackson' <ian.jackson@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 30/07/2020 13:54, Julien Grall wrote:
> Hi Paul,
>
> On 30/07/2020 09:31, Paul Durrant wrote:
>>> diff --git a/xen/common/memory.c b/xen/common/memory.c
>>> index dc3a7248e3..21edabf9cc 100644
>>> --- a/xen/common/memory.c
>>> +++ b/xen/common/memory.c
>>> @@ -1007,6 +1007,26 @@ static long xatp_permission_check(struct
>>> domain *d, unsigned int space)
>>>       return xsm_add_to_physmap(XSM_TARGET, current->domain, d);
>>>   }
>>>
>>> +/*
>>> + * Return 0 on any kind of error.  Caller converts to -EINVAL.
>>> + *
>>> + * All nonzero values should be repeatable (i.e. derived from some
>>> fixed
>>> + * proerty of the domain), and describe the full resource (i.e.
>>> mapping the
>>
>> s/proerty/property/
>>
>>> + * result of this call will be the entire resource).
>>
>> This precludes dynamically adding a resource to a running domain. Do
>> we really want to bake in that restriction?
>
> AFAICT, this restriction is not documented in the ABI. In particular,
> it is written:
>
> "
> The size of a resource will never be zero, but a nonzero result doesn't
> guarantee that a subsequent mapping request will be successful.  There
> are further type/id specific constraints which may change between the
> two calls.
> "
>
> So I think a domain couldn't rely on this behavior. Still, it might
> be good to clarify in the comment on top of resource_max_frames that
> this is an implementation decision and not part of the ABI.

There are two aspects here.

First, yes - I deliberately didn't state it in the ABI, just in case we
might want to use it in the future.  I could theoretically foresee using
-EBUSY for the purpose.

That said, we are currently deliberately taking dynamic resources out
of Xen, because they've proved to be unnecessary in practice and a
fertile source of complexity and security bugs.

I don't foresee accepting new dynamic resources, but that's not to say
that someone can't theoretically come up with a sufficiently compelling
counterexample.
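
For readers following along, the size-query convention under discussion can be sketched outside Xen. This is a hand-written model, not the real interface: the hypercall is mocked as `mock_acquire_resource`, and the fixed resource size is an assumption for illustration only.

```c
#include <stdint.h>

/* Simplified stand-in for the acquire_resource argument structure:
 * nr_frames == 0 is a size query, and the "hypervisor" reports the
 * size of the entire resource. */
struct mock_acquire_resource {
    uint32_t nr_frames;   /* in: frames requested; 0 == size query
                           * out (on query): total frames in resource */
    uint32_t frame;       /* in: starting frame of a mapping request */
};

/* Pretend the resource is always 4 frames: a fixed property of the
 * domain, so repeated queries return the same (repeatable) answer. */
#define MOCK_RESOURCE_FRAMES 4u

static int mock_acquire_resource(struct mock_acquire_resource *mar)
{
    if ( mar->nr_frames == 0 )
    {
        /* Size query: describe the full resource, so that mapping
         * [0, nr_frames) afterwards covers the entire thing. */
        mar->nr_frames = MOCK_RESOURCE_FRAMES;
        return 0;
    }

    /* Mapping request: reject ranges beyond the resource. */
    if ( mar->frame + mar->nr_frames > MOCK_RESOURCE_FRAMES )
        return -1; /* the real caller would see e.g. -EINVAL */

    return 0;
}
```

The property argued for above is visible in the model: because the reported size is repeatable and covers the whole resource, a follow-up mapping of exactly that many frames cannot race with a resource that grows or shrinks between the two calls.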

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu Jul 30 20:01:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jul 2020 20:01:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1EjT-0005x0-4I; Thu, 30 Jul 2020 20:00:51 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=IK5u=BJ=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1k1EjS-0005wu-2y
 for xen-devel@lists.xenproject.org; Thu, 30 Jul 2020 20:00:50 +0000
X-Inumbo-ID: 5c86ad53-d29f-11ea-8dc7-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5c86ad53-d29f-11ea-8dc7-bc764e2007e4;
 Thu, 30 Jul 2020 20:00:48 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=XOqBVUmSR6XbSqoKdv8Ee7IMVjkKW34Rs6RR9l/DbA0=; b=ASlhrZb4/vPTrI14J0edpJ4rPN
 gQupt3Wm6+qomnQ8/T36AAYCfR9H09QDwkS3v68uHdJgqBiTdhrcNyJW3lOwMOaqwmag/RdpwnbS2
 vmqCQK/6qgmIsx9wnV99JKc1cLYRNMaK7DieTVHzfi0nQgpqUIyjo57yKiigaawxOXOE=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1k1EjL-0001fi-E3; Thu, 30 Jul 2020 20:00:43 +0000
Received: from 54-240-197-226.amazon.com ([54.240.197.226]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1k1EjL-0004yG-5k; Thu, 30 Jul 2020 20:00:43 +0000
Subject: Re: [PATCH 4/5] xen/memory: Fix acquire_resource size semantics
To: Andrew Cooper <andrew.cooper3@citrix.com>, paul@xen.org,
 'Xen-devel' <xen-devel@lists.xenproject.org>
References: <20200728113712.22966-1-andrew.cooper3@citrix.com>
 <20200728113712.22966-5-andrew.cooper3@citrix.com>
 <002b01d6664b$c7eb5f40$57c21dc0$@xen.org>
 <474ff131-83d8-deff-4e3a-32392ea092b3@citrix.com>
From: Julien Grall <julien@xen.org>
Message-ID: <5c7d67c3-4731-caf0-cbe5-3849cd07bf98@xen.org>
Date: Thu, 30 Jul 2020 21:00:40 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:68.0)
 Gecko/20100101 Thunderbird/68.11.0
MIME-Version: 1.0
In-Reply-To: <474ff131-83d8-deff-4e3a-32392ea092b3@citrix.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: 'Stefano Stabellini' <sstabellini@kernel.org>,
 'Hubert Jasudowicz' <hubert.jasudowicz@cert.pl>, 'Wei Liu' <wl@xen.org>,
 'Konrad Rzeszutek Wilk' <konrad.wilk@oracle.com>,
 'George Dunlap' <George.Dunlap@eu.citrix.com>,
 =?UTF-8?B?J01pY2hhxYIgTGVzemN6ecWEc2tpJw==?= <michal.leszczynski@cert.pl>,
 'Jan Beulich' <JBeulich@suse.com>, 'Ian Jackson' <ian.jackson@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi Andrew,

On 30/07/2020 20:46, Andrew Cooper wrote:
> On 30/07/2020 09:31, Paul Durrant wrote:
> In between these two hypercalls from the utility, the guest can do a
> v1=>v2 or v2=>v1 switch and make the resource spontaneously appear or
> disappear.

This can only happen on platforms where grant-table v2 is enabled. Where 
this is not enabled (e.g. Arm), I think we want to return 0 as there 
is nothing to map.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Jul 30 20:50:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jul 2020 20:50:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1FVW-0001fU-VY; Thu, 30 Jul 2020 20:50:30 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=IK5u=BJ=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1k1FVV-0001fP-JP
 for xen-devel@lists.xenproject.org; Thu, 30 Jul 2020 20:50:29 +0000
X-Inumbo-ID: 4c06c55a-d2a6-11ea-ab30-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4c06c55a-d2a6-11ea-ab30-12813bfff9fa;
 Thu, 30 Jul 2020 20:50:27 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=Hx/PHRINzFmUMkk9n1BpAtmiK86SlTkkpXHqaNb2uaM=; b=nRc3Z8A1YpHRF8/NQU243/2OYT
 KqGhtxGDUW2fGYSV/gy9pHtygqwpFDdeWuaInS3Ey4tBguNFoMoxJw81p/RdBNqs8Bz+tzUUt6nOX
 k44n6LUWqSeYjChlXmTtdXDXmK/sZBq3LI26qlx0UbiMWHBRrjzHtG+kpG7bwbjqk7eo=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1k1FVO-0002h6-1y; Thu, 30 Jul 2020 20:50:22 +0000
Received: from 54-240-197-234.amazon.com ([54.240.197.234]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1k1FVN-0007sn-Gt; Thu, 30 Jul 2020 20:50:21 +0000
Subject: Re: [PATCH v3] xen/arm: Convert runstate address during hypcall
To: Bertrand Marquis <bertrand.marquis@arm.com>, xen-devel@lists.xenproject.org
References: <3911d221ce9ed73611b93aa437b9ca227d6aa201.1596099067.git.bertrand.marquis@arm.com>
From: Julien Grall <julien@xen.org>
Message-ID: <f48f81d5-589e-3f75-1044-583114bf497e@xen.org>
Date: Thu, 30 Jul 2020 21:50:18 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:68.0)
 Gecko/20100101 Thunderbird/68.11.0
MIME-Version: 1.0
In-Reply-To: <3911d221ce9ed73611b93aa437b9ca227d6aa201.1596099067.git.bertrand.marquis@arm.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 nd@arm.com, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi Bertrand,

To avoid extra work on your side, I would recommend waiting a bit before 
sending a new version. It would be good to at least settle the 
conversation on v2 regarding the approach taken.

On 30/07/2020 11:24, Bertrand Marquis wrote:
> At the moment on Arm, a Linux guest running with KTPI enabled will
> cause the following error when a context switch happens in user mode:
> (XEN) p2m.c:1890: d1v0: Failed to walk page-table va 0xffffff837ebe0cd0
> 
> The error is caused by the virtual address for the runstate area
> registered by the guest only being accessible when the guest is running
> in kernel space when KPTI is enabled.
> 
> To solve this issue, this patch does the translation from virtual
> address to physical address during the hypercall and maps the
> required pages using vmap. This removes the conversion from virtual
> to physical address during the context switch, which solves the
> problem with KPTI.

To echo what Jan said on the previous version, this is a change in a 
stable ABI and therefore may break existing guests. FAOD, I agree in 
principle with the idea. However, we want to explain why breaking the 
ABI is the *only* viable solution.

 From my understanding, it is not possible to fix this without an ABI 
breakage because the hypervisor doesn't know when the guest will switch 
back from userspace to kernel space. The risk is that the runstate area 
wouldn't contain accurate information, which could affect how the guest 
handles stolen time.

Additionally, there are a few issues with the current interface:
    1) It assumes the virtual address cannot be re-used by 
userspace. Thankfully, Linux has a split address space, but this may 
change with KPTI in place.
    2) When updating the page-tables, the guest has to go through an 
invalid mapping, so the translation may fail at any point.

IOW, the existing interface can lead to random memory corruption and 
inaccurate stolen time accounting.
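
To illustrate the shape of the interface at issue: the guest hands Xen a *virtual* address, typically of a static kernel variable, which Xen must translate later. This is a mocked, self-contained sketch: the structure layouts are abbreviated and the hypercall is the stub `stub_register_runstate`, not the real `HYPERVISOR_vcpu_op`. It only shows why the hypervisor ends up holding a guest VA that may not be walkable at context-switch time under KPTI.

```c
#include <stdint.h>

/* Abbreviated runstate structures, for illustration only. */
struct vcpu_runstate_info {
    int state;
    uint64_t state_entry_time;
    uint64_t time[4];
};

struct vcpu_register_runstate_memory_area {
    struct vcpu_runstate_info *v; /* guest VIRTUAL address */
};

/* All the "hypervisor" remembers after the hypercall is the VA; the
 * actual translation happens later, at context switch, when the
 * guest's current page-tables may not map it (the KPTI problem). */
static struct vcpu_runstate_info *registered_va;

/* Stub standing in for the registration hypercall. */
static int stub_register_runstate(struct vcpu_register_runstate_memory_area *a)
{
    registered_va = a->v;
    return 0;
}

/* Guest side: the area usually lives in kernel data, i.e. at a VA
 * that is only mapped while the kernel's page-tables are active. */
static struct vcpu_runstate_info my_runstate;

static int guest_register(void)
{
    struct vcpu_register_runstate_memory_area area = { .v = &my_runstate };
    return stub_register_runstate(&area);
}
```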

> 
> This is done only on arm architecture, the behaviour on x86 is not
> modified by this patch and the address conversion is done as before
> during each context switch.
> 
> This introduces several limitations in comparison to the previous
> behaviour (on arm only):
> - if the guest remaps the area at a different physical address, Xen
> will continue to update the area at the previous physical address. As
> the area is in kernel space and usually defined as a global variable,
> this is something which is believed not to happen. If a guest requires
> this, it will have to call the hypercall again with the new area (even
> if it is at the same virtual address).
> - the area needs to be mapped during the hypercall, for the same
> reasons as in the previous case, even if the area is registered for a
> different vcpu. It is believed that registering an area at an unmapped
> virtual address is not something guests do.

It is not clear whether the virtual address refers to the current vCPU 
or the vCPU you register the runstate for. From the past discussion, I 
think you mean the former. It would be good to clarify.

Additionally, all the new restrictions should be documented in the 
public interface, so an OS developer can find the differences between 
the architectures.

To answer Jan's concern, we certainly don't know all the existing guest 
OSes; however, we also need to balance that against the benefit for a 
large majority of users.

 From previous discussion, the current approach was deemed to be 
acceptable on Arm and, AFAICT, also x86 (see [1]).

TBH, I would rather see the approach be common. For that, we would need 
an agreement from Andrew and Jan on the approach here. Meanwhile, I 
think this is the best approach to address the concern from Arm users.

> 
> inline functions in headers could not be used, as the architecture
> domain.h is included before the global domain.h, making it impossible
> to use struct vcpu inside the architecture header.
> This should not have any performance impact, as the hypercall is
> usually only called once per vcpu.
> 
> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
> 
> ---
>    Changes in v2
>      - use vmap to map the pages during the hypercall.
>      - reintroduce initial copy during hypercall.
> 
>    Changes in v3
>      - Fix Coding style
>      - Fix vaddr printing on arm32
>      - use write_atomic to modify state_entry_time update bit (only
>      in guest structure as the bit is not used inside Xen copy)
> 
> ---
>   xen/arch/arm/domain.c        | 161 ++++++++++++++++++++++++++++++-----
>   xen/arch/x86/domain.c        |  29 ++++++-
>   xen/arch/x86/x86_64/domain.c |   4 +-
>   xen/common/domain.c          |  19 ++---
>   xen/include/asm-arm/domain.h |   9 ++
>   xen/include/asm-x86/domain.h |  16 ++++
>   xen/include/xen/domain.h     |   5 ++
>   xen/include/xen/sched.h      |  16 +---
>   8 files changed, 206 insertions(+), 53 deletions(-)
> 
> diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
> index 31169326b2..8b36946017 100644
> --- a/xen/arch/arm/domain.c
> +++ b/xen/arch/arm/domain.c
> @@ -19,6 +19,7 @@
>   #include <xen/sched.h>
>   #include <xen/softirq.h>
>   #include <xen/wait.h>
> +#include <xen/vmap.h>
>   
>   #include <asm/alternative.h>
>   #include <asm/cpuerrata.h>
> @@ -275,36 +276,156 @@ static void ctxt_switch_to(struct vcpu *n)
>       virt_timer_restore(n);
>   }
>   
> -/* Update per-VCPU guest runstate shared memory area (if registered). */
> -static void update_runstate_area(struct vcpu *v)
> +static void cleanup_runstate_vcpu_locked(struct vcpu *v)
>   {
> -    void __user *guest_handle = NULL;
> +    if ( v->arch.runstate_guest )
> +    {
> +        vunmap((void *)((unsigned long)v->arch.runstate_guest & PAGE_MASK));
> +
> +        put_page(v->arch.runstate_guest_page[0]);
> +
> +        if ( v->arch.runstate_guest_page[1] )
> +            put_page(v->arch.runstate_guest_page[1]);
> +
> +        v->arch.runstate_guest = NULL;
> +    }
> +}
> +
> +void arch_vcpu_cleanup_runstate(struct vcpu *v)
> +{
> +    spin_lock(&v->arch.runstate_guest_lock);
> +
> +    cleanup_runstate_vcpu_locked(v);
> +
> +    spin_unlock(&v->arch.runstate_guest_lock);
> +}
> +
> +static int setup_runstate_vcpu_locked(struct vcpu *v, vaddr_t vaddr)
> +{
> +    unsigned int offset;
> +    mfn_t mfn[2];
> +    struct page_info *page;
> +    unsigned int numpages;
>       struct vcpu_runstate_info runstate;
> +    void *p;
>   
> -    if ( guest_handle_is_null(runstate_guest(v)) )
> -        return;
> +    /* user can pass a NULL address to unregister a previous area */
> +    if ( vaddr == 0 )
> +        return 0;
> +
> +    offset = vaddr & ~PAGE_MASK;
> +
> +    /* provided address must be aligned to a 64bit */
> +    if ( offset % alignof(struct vcpu_runstate_info) )

This new restriction wants to be explained in the commit message and 
public header.
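
As an aside for readers, the offset arithmetic this patch relies on (the alignment check above, and the page-crossing check further down in the hunk) can be modelled standalone. The PAGE_SIZE value and the abbreviated struct are stand-ins, not Xen's definitions.

```c
#include <stdint.h>
#include <stdalign.h>

#define PAGE_SIZE 4096UL
#define PAGE_MASK (~(PAGE_SIZE - 1))

struct vcpu_runstate_info { /* abbreviated layout, illustration only */
    int state;
    uint64_t state_entry_time;
    uint64_t time[4];
};

/* Offset of the registered VA within its page. */
static unsigned long va_offset(unsigned long vaddr)
{
    return vaddr & ~PAGE_MASK;
}

/* The patch rejects addresses not suitably aligned for the struct... */
static int offset_ok(unsigned long vaddr)
{
    return (va_offset(vaddr) % alignof(struct vcpu_runstate_info)) == 0;
}

/* ...and maps a second page whenever the area crosses a page boundary,
 * i.e. when the struct does not fit between the offset and page end. */
static int crosses_page(unsigned long vaddr)
{
    return va_offset(vaddr) > PAGE_SIZE - sizeof(struct vcpu_runstate_info);
}
```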

> +    {
> +        gprintk(XENLOG_WARNING, "Cannot map runstate pointer at 0x%"PRIvaddr
> +                ": Invalid offset\n", vaddr);

We usually enforce 80 characters per line, except for format strings, 
so it is easier to grep them in the code.

> +        return -EINVAL;
> +    }
> +
> +    page = get_page_from_gva(v, vaddr, GV2M_WRITE);
> +    if ( !page )
> +    {
> +        gprintk(XENLOG_WARNING, "Cannot map runstate pointer at 0x%"PRIvaddr
> +                ": Page is not mapped\n", vaddr);
> +        return -EINVAL;
> +    }
> +
> +    mfn[0] = page_to_mfn(page);
> +    v->arch.runstate_guest_page[0] = page;
> +
> +    if ( offset > (PAGE_SIZE - sizeof(struct vcpu_runstate_info)) )
> +    {
> +        /* guest area is crossing pages */
> +        page = get_page_from_gva(v, vaddr + PAGE_SIZE, GV2M_WRITE);
> +        if ( !page )
> +        {
> +            put_page(v->arch.runstate_guest_page[0]);
> +            gprintk(XENLOG_WARNING,
> +                    "Cannot map runstate pointer at 0x%"PRIvaddr
> +                    ": 2nd Page is not mapped\n", vaddr);
> +            return -EINVAL;
> +        }
> +        mfn[1] = page_to_mfn(page);
> +        v->arch.runstate_guest_page[1] = page;
> +        numpages = 2;
> +    }
> +    else
> +    {
> +        v->arch.runstate_guest_page[1] = NULL;
> +        numpages = 1;
> +    }
>   
> -    memcpy(&runstate, &v->runstate, sizeof(runstate));
> +    p = vmap(mfn, numpages);
> +    if ( !p )
> +    {
> +        put_page(v->arch.runstate_guest_page[0]);
> +        if ( numpages == 2 )
> +            put_page(v->arch.runstate_guest_page[1]);
>   
> -    if ( VM_ASSIST(v->domain, runstate_update_flag) )
> +        gprintk(XENLOG_WARNING, "Cannot map runstate pointer at 0x%"PRIvaddr
> +                ": vmap error\n", vaddr);
> +        return -EINVAL;
> +    }
> +
> +    v->arch.runstate_guest = p + offset;
> +
> +    if (v == current)
> +        memcpy(v->arch.runstate_guest, &v->runstate, sizeof(v->runstate));
> +    else
>       {
> -        guest_handle = &v->runstate_guest.p->state_entry_time + 1;
> -        guest_handle--;
> -        runstate.state_entry_time |= XEN_RUNSTATE_UPDATE;
> -        __raw_copy_to_guest(guest_handle,
> -                            (void *)(&runstate.state_entry_time + 1) - 1, 1);
> -        smp_wmb();
> +        vcpu_runstate_get(v, &runstate);
> +        memcpy(v->arch.runstate_guest, &runstate, sizeof(v->runstate));
>       }
>   
> -    __copy_to_guest(runstate_guest(v), &runstate, 1);
> +    return 0;
> +}
> +
> +int arch_vcpu_setup_runstate(struct vcpu *v,
> +                             struct vcpu_register_runstate_memory_area area)
> +{
> +    int rc;
> +
> +    spin_lock(&v->arch.runstate_guest_lock);
> +
> +    /* cleanup if we are recalled */
> +    cleanup_runstate_vcpu_locked(v);
> +
> +    rc = setup_runstate_vcpu_locked(v, (vaddr_t)area.addr.v);
> +
> +    spin_unlock(&v->arch.runstate_guest_lock);
>   
> -    if ( guest_handle )
> +    return rc;
> +}
> +
> +
> +/* Update per-VCPU guest runstate shared memory area (if registered). */
> +static void update_runstate_area(struct vcpu *v)
> +{
> +    spin_lock(&v->arch.runstate_guest_lock);
> +
> +    if ( v->arch.runstate_guest )
>       {
> -        runstate.state_entry_time &= ~XEN_RUNSTATE_UPDATE;
> -        smp_wmb();
> -        __raw_copy_to_guest(guest_handle,
> -                            (void *)(&runstate.state_entry_time + 1) - 1, 1);
> +        if ( VM_ASSIST(v->domain, runstate_update_flag) )
> +        {
> +            v->runstate.state_entry_time |= XEN_RUNSTATE_UPDATE;
> +            write_atomic(&(v->arch.runstate_guest->state_entry_time),
> +                    v->runstate.state_entry_time);

NIT: You want to indent v-> at the same level as the argument from the 
first line.

Also, I think you are missing a smp_wmb() here.
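
The ordering being asked for here can be modelled outside Xen: the update bit must frame the copy, with a write barrier on each side, seqlock-style, so a reader that sees the bit clear knows it read a consistent snapshot. This is a minimal sketch under stated assumptions: plain memory stands in for the guest mapping, a local RUNSTATE_UPDATE mirrors XEN_RUNSTATE_UPDATE, and GCC's `__sync_synchronize()` stands in for `smp_wmb()`.

```c
#include <stdint.h>
#include <string.h>

#define RUNSTATE_UPDATE (1ULL << 63) /* mirrors XEN_RUNSTATE_UPDATE */

struct runstate {
    uint64_t state_entry_time;
    uint64_t time[4];
};

/* Publish src into the guest-visible dst with the update bit framing
 * the copy. The first barrier (the one noted as missing above) makes
 * the set bit visible before the payload changes; the second makes
 * the payload visible before the bit is cleared. */
static void publish_runstate(struct runstate *dst, const struct runstate *src)
{
    /* 1. Mark the area as being updated. */
    dst->state_entry_time = src->state_entry_time | RUNSTATE_UPDATE;
    __sync_synchronize();   /* bit must be visible before the copy */

    /* 2. Copy the payload. */
    memcpy(dst->time, src->time, sizeof(dst->time));
    __sync_synchronize();   /* copy must complete before clearing */

    /* 3. Clear the update bit, exposing a consistent snapshot. */
    dst->state_entry_time = src->state_entry_time & ~RUNSTATE_UPDATE;
}
```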

> +        }
> +
> +        memcpy(v->arch.runstate_guest, &v->runstate, sizeof(v->runstate));
> +
> +        if ( VM_ASSIST(v->domain, runstate_update_flag) )
> +        {
> +            /* copy must be done before switching the bit */
> +            smp_wmb();
> +            v->runstate.state_entry_time &= ~XEN_RUNSTATE_UPDATE;
> +            write_atomic(&(v->arch.runstate_guest->state_entry_time),
> +                    v->runstate.state_entry_time);

Same remark for the indentation.

> +        }
>       }
> +
> +    spin_unlock(&v->arch.runstate_guest_lock);
>   }
>   
>   static void schedule_tail(struct vcpu *prev)
> @@ -560,6 +681,8 @@ int arch_vcpu_create(struct vcpu *v)
>       v->arch.saved_context.sp = (register_t)v->arch.cpu_info;
>       v->arch.saved_context.pc = (register_t)continue_new_vcpu;
>   
> +    spin_lock_init(&v->arch.runstate_guest_lock);
> +
>       /* Idle VCPUs don't need the rest of this setup */
>       if ( is_idle_vcpu(v) )
>           return rc;
> diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
> index fee6c3931a..b9b81e94e5 100644
> --- a/xen/arch/x86/domain.c
> +++ b/xen/arch/x86/domain.c
> @@ -1642,6 +1642,29 @@ void paravirt_ctxt_switch_to(struct vcpu *v)
>           wrmsr_tsc_aux(v->arch.msrs->tsc_aux);
>   }
>   
> +int arch_vcpu_setup_runstate(struct vcpu *v,
> +                             struct vcpu_register_runstate_memory_area area)
> +{
> +    struct vcpu_runstate_info runstate;
> +
> +    runstate_guest(v) = area.addr.h;
> +
> +    if ( v == current )
> +        __copy_to_guest(runstate_guest(v), &v->runstate, 1);
> +    else
> +    {
> +        vcpu_runstate_get(v, &runstate);
> +        __copy_to_guest(runstate_guest(v), &runstate, 1);
> +    }
> +
> +    return 0;
> +}
> +
> +void arch_vcpu_cleanup_runstate(struct vcpu *v)
> +{
> +    set_xen_guest_handle(runstate_guest(v), NULL);
> +}
> +
>   /* Update per-VCPU guest runstate shared memory area (if registered). */
>   bool update_runstate_area(struct vcpu *v)
>   {
> @@ -1660,8 +1683,8 @@ bool update_runstate_area(struct vcpu *v)
>       if ( VM_ASSIST(v->domain, runstate_update_flag) )
>       {
>           guest_handle = has_32bit_shinfo(v->domain)
> -            ? &v->runstate_guest.compat.p->state_entry_time + 1
> -            : &v->runstate_guest.native.p->state_entry_time + 1;
> +            ? &v->arch.runstate_guest.compat.p->state_entry_time + 1
> +            : &v->arch.runstate_guest.native.p->state_entry_time + 1;
>           guest_handle--;
>           runstate.state_entry_time |= XEN_RUNSTATE_UPDATE;
>           __raw_copy_to_guest(guest_handle,
> @@ -1674,7 +1697,7 @@ bool update_runstate_area(struct vcpu *v)
>           struct compat_vcpu_runstate_info info;
>   
>           XLAT_vcpu_runstate_info(&info, &runstate);
> -        __copy_to_guest(v->runstate_guest.compat, &info, 1);
> +        __copy_to_guest(v->arch.runstate_guest.compat, &info, 1);
>           rc = true;
>       }
>       else
> diff --git a/xen/arch/x86/x86_64/domain.c b/xen/arch/x86/x86_64/domain.c
> index c46dccc25a..b879e8dd2c 100644
> --- a/xen/arch/x86/x86_64/domain.c
> +++ b/xen/arch/x86/x86_64/domain.c
> @@ -36,7 +36,7 @@ arch_compat_vcpu_op(
>               break;
>   
>           rc = 0;
> -        guest_from_compat_handle(v->runstate_guest.compat, area.addr.h);
> +        guest_from_compat_handle(v->arch.runstate_guest.compat, area.addr.h);
>   
>           if ( v == current )
>           {
> @@ -49,7 +49,7 @@ arch_compat_vcpu_op(
>               vcpu_runstate_get(v, &runstate);
>               XLAT_vcpu_runstate_info(&info, &runstate);
>           }
> -        __copy_to_guest(v->runstate_guest.compat, &info, 1);
> +        __copy_to_guest(v->arch.runstate_guest.compat, &info, 1);
>   
>           break;
>       }
> diff --git a/xen/common/domain.c b/xen/common/domain.c
> index f0f9c62feb..739c6b7b62 100644
> --- a/xen/common/domain.c
> +++ b/xen/common/domain.c
> @@ -727,7 +727,10 @@ int domain_kill(struct domain *d)
>           if ( cpupool_move_domain(d, cpupool0) )
>               return -ERESTART;
>           for_each_vcpu ( d, v )
> +        {
> +            arch_vcpu_cleanup_runstate(v);
>               unmap_vcpu_info(v);
> +        }
>           d->is_dying = DOMDYING_dead;
>           /* Mem event cleanup has to go here because the rings
>            * have to be put before we call put_domain. */
> @@ -1167,7 +1170,7 @@ int domain_soft_reset(struct domain *d)
>   
>       for_each_vcpu ( d, v )
>       {
> -        set_xen_guest_handle(runstate_guest(v), NULL);
> +        arch_vcpu_cleanup_runstate(v);
>           unmap_vcpu_info(v);
>       }
>   
> @@ -1494,7 +1497,6 @@ long do_vcpu_op(int cmd, unsigned int vcpuid, XEN_GUEST_HANDLE_PARAM(void) arg)
>       case VCPUOP_register_runstate_memory_area:
>       {
>           struct vcpu_register_runstate_memory_area area;
> -        struct vcpu_runstate_info runstate;
>   
>           rc = -EFAULT;
>           if ( copy_from_guest(&area, arg, 1) )
> @@ -1503,18 +1505,7 @@ long do_vcpu_op(int cmd, unsigned int vcpuid, XEN_GUEST_HANDLE_PARAM(void) arg)
>           if ( !guest_handle_okay(area.addr.h, 1) )
>               break;
>   
> -        rc = 0;
> -        runstate_guest(v) = area.addr.h;
> -
> -        if ( v == current )
> -        {
> -            __copy_to_guest(runstate_guest(v), &v->runstate, 1);
> -        }
> -        else
> -        {
> -            vcpu_runstate_get(v, &runstate);
> -            __copy_to_guest(runstate_guest(v), &runstate, 1);
> -        }
> +        rc = arch_vcpu_setup_runstate(v, area);
>   
>           break;
>       }
> diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
> index 6819a3bf38..2f62c3e8f5 100644
> --- a/xen/include/asm-arm/domain.h
> +++ b/xen/include/asm-arm/domain.h
> @@ -204,6 +204,15 @@ struct arch_vcpu
>        */
>       bool need_flush_to_ram;
>   
> +    /* runstate guest lock */
> +    spinlock_t runstate_guest_lock;
> +
> +    /* runstate guest info */
> +    struct vcpu_runstate_info *runstate_guest;
> +
> +    /* runstate pages mapped for runstate_guest */
> +    struct page_info *runstate_guest_page[2];
> +
>   }  __cacheline_aligned;
>   
>   void vcpu_show_execution_state(struct vcpu *);
> diff --git a/xen/include/asm-x86/domain.h b/xen/include/asm-x86/domain.h
> index 635335634d..007ccfbf9f 100644
> --- a/xen/include/asm-x86/domain.h
> +++ b/xen/include/asm-x86/domain.h
> @@ -11,6 +11,11 @@
>   #include <asm/x86_emulate.h>
>   #include <public/vcpu.h>
>   #include <public/hvm/hvm_info_table.h>
> +#ifdef CONFIG_COMPAT
> +#include <compat/vcpu.h>
> +DEFINE_XEN_GUEST_HANDLE(vcpu_runstate_info_compat_t);
> +#endif
> +
>   
>   #define has_32bit_shinfo(d)    ((d)->arch.has_32bit_shinfo)
>   
> @@ -638,6 +643,17 @@ struct arch_vcpu
>       struct {
>           bool next_interrupt_enabled;
>       } monitor;
> +
> +#ifndef CONFIG_COMPAT
> +# define runstate_guest(v) ((v)->arch.runstate_guest)
> +    XEN_GUEST_HANDLE(vcpu_runstate_info_t) runstate_guest; /* guest address */
> +#else
> +# define runstate_guest(v) ((v)->arch.runstate_guest.native)
> +    union {
> +        XEN_GUEST_HANDLE(vcpu_runstate_info_t) native;
> +        XEN_GUEST_HANDLE(vcpu_runstate_info_compat_t) compat;
> +    } runstate_guest;
> +#endif
>   };
>   
>   struct guest_memory_policy
> diff --git a/xen/include/xen/domain.h b/xen/include/xen/domain.h
> index 7e51d361de..5e8cbba31d 100644
> --- a/xen/include/xen/domain.h
> +++ b/xen/include/xen/domain.h
> @@ -5,6 +5,7 @@
>   #include <xen/types.h>
>   
>   #include <public/xen.h>
> +#include <public/vcpu.h>
>   #include <asm/domain.h>
>   #include <asm/numa.h>
>   
> @@ -63,6 +64,10 @@ void arch_vcpu_destroy(struct vcpu *v);
>   int map_vcpu_info(struct vcpu *v, unsigned long gfn, unsigned offset);
>   void unmap_vcpu_info(struct vcpu *v);
>   
> +int arch_vcpu_setup_runstate(struct vcpu *v,
> +                             struct vcpu_register_runstate_memory_area area);
> +void arch_vcpu_cleanup_runstate(struct vcpu *v);
> +
>   int arch_domain_create(struct domain *d,
>                          struct xen_domctl_createdomain *config);
>   
> diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
> index ac53519d7f..fac030fb83 100644
> --- a/xen/include/xen/sched.h
> +++ b/xen/include/xen/sched.h
> @@ -29,11 +29,6 @@
>   #include <public/vcpu.h>
>   #include <public/event_channel.h>
>   
> -#ifdef CONFIG_COMPAT
> -#include <compat/vcpu.h>
> -DEFINE_XEN_GUEST_HANDLE(vcpu_runstate_info_compat_t);
> -#endif
> -
>   /*
>    * Stats
>    *
> @@ -166,16 +161,7 @@ struct vcpu
>       struct sched_unit *sched_unit;
>   
>       struct vcpu_runstate_info runstate;
> -#ifndef CONFIG_COMPAT
> -# define runstate_guest(v) ((v)->runstate_guest)
> -    XEN_GUEST_HANDLE(vcpu_runstate_info_t) runstate_guest; /* guest address */
> -#else
> -# define runstate_guest(v) ((v)->runstate_guest.native)
> -    union {
> -        XEN_GUEST_HANDLE(vcpu_runstate_info_t) native;
> -        XEN_GUEST_HANDLE(vcpu_runstate_info_compat_t) compat;
> -    } runstate_guest; /* guest address */
> -#endif
> +
>       unsigned int     new_state;
>   
>       /* Has the FPU been initialised? */
> 

[1] <b6511a29-35a4-a1d0-dd29-7de4103ec98e@citrix.com>


-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Jul 30 21:33:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jul 2020 21:33:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1GBE-00057f-H9; Thu, 30 Jul 2020 21:33:36 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6uMI=BJ=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1k1GBD-00057a-L4
 for xen-devel@lists.xenproject.org; Thu, 30 Jul 2020 21:33:35 +0000
X-Inumbo-ID: 507af2ae-d2ac-11ea-ab38-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 507af2ae-d2ac-11ea-ab38-12813bfff9fa;
 Thu, 30 Jul 2020 21:33:31 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=2rZhoTUKHb0bDDbncrl25yXmvslvW4rXj3tz+nJ/9Ow=; b=G75lvyFx+EWdn4II6T/hxW8mg
 L2CiQXGQK4ywCA4YA23GbWstUY1noc0h4Ksp2a1jCwq9k47Xsh5N40XR1LvSBj4q63vrldAyiPpjZ
 pdVu0kHno+UJfLgpNimA1MMfMWXubd17fR+iS3kmRzaXA83E9ARARlGGdJt4SYKuHeyuk=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1k1GB9-0003a3-5A; Thu, 30 Jul 2020 21:33:31 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1k1GB8-0005bM-TB; Thu, 30 Jul 2020 21:33:30 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1k1GB8-0001Ni-Se; Thu, 30 Jul 2020 21:33:30 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-152295-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 152295: regressions - trouble: fail/pass/starved
X-Osstest-Failures: qemu-mainline:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
 qemu-mainline:test-amd64-amd64-qemuu-nested-intel:xen-boot/l1:fail:regression
 qemu-mainline:test-arm64-arm64-xl-thunderx:xen-boot:fail:regression
 qemu-mainline:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
 qemu-mainline:test-arm64-arm64-xl:xen-boot:fail:regression
 qemu-mainline:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
 qemu-mainline:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
 qemu-mainline:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-qemuu-nested-amd:xen-boot/l1:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:windows-install:fail:regression
 qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This: qemuu=5772f2b1fc5d00e7e04e01fa28e9081d6550440a
X-Osstest-Versions-That: qemuu=9e3903136d9acde2fb2dd9e967ba928050a6cb4a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 30 Jul 2020 21:33:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 152295 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/152295/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-xl-credit1   7 xen-boot                 fail REGR. vs. 151065
 test-amd64-amd64-qemuu-nested-intel 14 xen-boot/l1       fail REGR. vs. 151065
 test-arm64-arm64-xl-thunderx  7 xen-boot                 fail REGR. vs. 151065
 test-arm64-arm64-xl-credit2   7 xen-boot                 fail REGR. vs. 151065
 test-arm64-arm64-xl           7 xen-boot                 fail REGR. vs. 151065
 test-arm64-arm64-xl-xsm       7 xen-boot                 fail REGR. vs. 151065
 test-arm64-arm64-libvirt-xsm  7 xen-boot                 fail REGR. vs. 151065
 test-arm64-arm64-xl-seattle   7 xen-boot                 fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-qemuu-nested-amd 14 xen-boot/l1         fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-win7-amd64 10 windows-install  fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-ws16-amd64 10 windows-install  fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-win7-amd64 10 windows-install   fail REGR. vs. 151065

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 151065
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 151065
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemuu-ws16-amd64  2 hosts-allocate              starved n/a

version targeted for testing:
 qemuu                5772f2b1fc5d00e7e04e01fa28e9081d6550440a
baseline version:
 qemuu                9e3903136d9acde2fb2dd9e967ba928050a6cb4a

Last test of basis   151065  2020-06-12 22:27:51 Z   47 days
Failing since        151101  2020-06-14 08:32:51 Z   46 days   65 attempts
Testing same since   152284  2020-07-29 11:03:27 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Ahmed Karaman <ahmedkhaledkaraman@gmail.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.m.mail@gmail.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Richardson <Alexander.Richardson@cl.cam.ac.uk>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Boettcher <alexander.boettcher@genode-labs.com>
  Alexander Bulekov <alxndr@bu.edu>
  Alexander Duyck <alexander.h.duyck@linux.intel.com>
  Alexandre Mergnat <amergnat@baylibre.com>
  Alexey Kardashevskiy <aik@ozlabs.ru>
  Alexey Krasikov <alex-krasikov@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Allan Peramaki <aperamak@pp1.inet.fi>
  Andreas Schwab <schwab@suse.de>
  Andrew <andrew@daynix.com>
  Andrew Jones <drjones@redhat.com>
  Andrew Melnychenko <andrew@daynix.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani.sinha@nutanix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Antoine Damhet <antoine.damhet@blade-group.com>
  Ard Biesheuvel <ardb@kernel.org>
  Artyom Tarasenko <atar4qemu@gmail.com>
  Atish Patra <atish.patra@wdc.com>
  Aurelien Jarno <aurelien@aurel32.net>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Basil Salman <basil@daynix.com>
  Basil Salman <bsalman@redhat.com>
  Beata Michalska <beata.michalska@linaro.org>
  Bin Meng <bin.meng@windriver.com>
  Bin Meng <bmeng.cn@gmail.com>
  Bruce Rogers <brogers@suse.com>
  Cameron Esfahani <dirty@apple.com>
  Catherine A. Frederick <chocola@animebitch.es>
  Cathy Zhang <cathy.zhang@intel.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chenyi Qiang <chenyi.qiang@intel.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christophe de Dinechin <dinechin@redhat.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Cindy Lu <lulu@redhat.com>
  Claudio Fontana <cfontana@suse.de>
  Cleber Rosa <crosa@redhat.com>
  Colin Xu <colin.xu@intel.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniele Buono <dbuono@linux.vnet.ibm.com>
  David Carlier <devnexen@gmail.com>
  David Edmondson <david.edmondson@oracle.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Denis V. Lunev <den@openvz.org>
  Derek Su <dereksu@qnap.com>
  Dongjiu Geng <gengdongjiu@huawei.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Ed Robbins <E.J.C.Robbins@kent.ac.uk>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Emilio G. Cota <cota@braap.org>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  erik-smit <erik.lucas.smit@gmail.com>
  Evgeny Yakovlev <eyakovlev@virtuozzo.com>
  fangying <fangying1@huawei.com>
  Farhan Ali <alifm@linux.ibm.com>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frank Chang <frank.chang@sifive.com>
  Geoffrey McRae <geoff@hostfission.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Giuseppe Musacchio <thatlemon@gmail.com>
  Greg Kurz <groug@kaod.org>
  Guenter Roeck <linux@roeck-us.net>
  Gustavo Romero <gromero@linux.ibm.com>
  Halil Pasic <pasic@linux.ibm.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Hogan Wang <hogan.wang@huawei.com>
  Hogan Wang <king.wang@huawei.com>
  Howard Spoelstra <hsp.cat7@gmail.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Ian Jiang <ianjiang.ict@gmail.com>
  Igor Mammedov <imammedo@redhat.com>
  Jan Kiszka <jan.kiskza@siemens.com>
  Jan Kiszka <jan.kiszka@siemens.com>
  Janne Grunau <j@jannau.net>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason Wang <jasowang@redhat.com>
  Jay Zhou <jianjay.zhou@huawei.com>
  Jean-Christophe Dubois <jcd@tribudubois.net>
  Jessica Clarke <jrtc27@jrtc27.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jingqi Liu <jingqi.liu@intel.com>
  John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Joseph Myers <joseph@codesourcery.com>
  Josh Kunz <jkz@google.com>
  Juan Quintela <quintela@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Keqian Zhu <zhukeqian1@huawei.com>
  Kevin Wolf <kwolf@redhat.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  KONRAD Frederic <frederic.konrad@adacore.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Leif Lindholm <leif@nuviainc.com>
  Leonid Bloch <lb.workbox@gmail.com>
  Leonid Bloch <lbloch@janustech.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Qiang <liq3ea@gmail.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  lichun <lichun@ruijie.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  Like Xu <like.xu@linux.intel.com>
  Lingfeng Yang <lfy@google.com>
  Lingshan zhu <lingshan.zhu@intel.com>
  Liran Alon <liran.alon@oracle.com>
  Liu Yi L <yi.l.liu@intel.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Luc Michel <luc.michel@greensocs.com>
  Lukas Straub <lukasstraub2@web.de>
  Luwei Kang <luwei.kang@intel.com>
  Maciej S. Szmigiero <maciej.szmigiero@oracle.com>
  Magnus Damm <magnus.damm@gmail.com>
  Mao Zhongyi <maozhongyi@cmss.chinamobile.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marcelo Tosatti <mtosatti@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Mario Smarduch <msmarduch@digitalocean.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Masahiro Yamada <masahiroy@kernel.org>
  Matus Kysel <mkysel@tachyum.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Maxime Coquelin <maxime.coquelin@redhat.com>
  Menno Lageman <menno.lageman@oracle.com>
  Michael Rolnik <mrolnik@gmail.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michal Privoznik <mprivozn@redhat.com>
  Michele Denber <denber@mindspring.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Olaf Hering <olaf@aepfle.de>
  Pan Nengyuan <pannengyuan@huawei.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Turschmid <peter.turschm@nutanix.com>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Radoslaw Biernacki <rad@semihalf.com>
  Raphael Norwitz <raphael.norwitz@nutanix.com>
  Reza Arbab <arbab@linux.ibm.com>
  Richard Henderson <richard.henderson@linaro.org>
  Richard W.M. Jones <rjones@redhat.com>
  Riku Voipio <riku.voipio@linaro.org>
  Robert Foley <robert.foley@linaro.org>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Roman Kagan <rkagan@virtuozzo.com>
  Roman Kagan <rvkagan@yandex-team.ru>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Sarah Harris <S.E.Harris@kent.ac.uk>
  Sebastian Rasmussen <sebras@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Weil <sw@weilnetz.de>
  Sven Schnelle <svens@stackframe.org>
  Tao Xu <tao3.xu@intel.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tiwei Bie <tiwei.bie@intel.com>
  Tong Ho <tong.ho@xilinx.com>
  Viktor Mihajlovski <mihajlov@linux.ibm.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  WangBowen <bowen.wang@intel.com>
  Wei Huang <wei.huang2@amd.com>
  Wei Wang <wei.w.wang@intel.com>
  Wentong Wu <wentong.wu@intel.com>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xie Yongji <xieyongji@bytedance.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Ying Fang <fangying1@huawei.com>
  Yoshinori Sato <ysato@users.sourceforge.jp>
  Yuri Benditovich <yuri.benditovich@daynix.com>
  Zhang Chen <chen.zhang@intel.com>
  Zheng Chuan <zhengchuan@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          starved 
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 34292 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Jul 30 21:52:35 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jul 2020 21:52:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1GTT-0006o7-5w; Thu, 30 Jul 2020 21:52:27 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6uMI=BJ=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1k1GTR-0006o2-SL
 for xen-devel@lists.xenproject.org; Thu, 30 Jul 2020 21:52:25 +0000
X-Inumbo-ID: f3a981be-d2ae-11ea-8ddb-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f3a981be-d2ae-11ea-8ddb-bc764e2007e4;
 Thu, 30 Jul 2020 21:52:24 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=OsZl2ddpapf1/EZxGwq9uVj6LPHhdzwbSn9IvFIfuhY=; b=kPIeNEIV5usQNWN5TW+wU976s
 CobZLe6RECM0gvUpapMt5FnKKJPUsz36UXYrolSGFiM/Ymf/lIz2JZ0j3rHJz+dvC77rEPFqzbphx
 fMtSBLrzj6btHUchYUzKXhm8ID87nEUqKJ9rLX+UG7HMC6KM6uIPjGyhhQD7qdZC/t8Eo=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1k1GTP-0003xe-TB; Thu, 30 Jul 2020 21:52:23 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1k1GTP-0006CE-Ln; Thu, 30 Jul 2020 21:52:23 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1k1GTP-00055d-L0; Thu, 30 Jul 2020 21:52:23 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-152308-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 152308: regressions - FAIL
X-Osstest-Failures: xen-unstable-smoke:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
 xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=64219fa179c3e48adad12bfce3f6b3f1596cccbf
X-Osstest-Versions-That: xen=b071ec25e85c4aacf3da59e5258cda0b1c4df45d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 30 Jul 2020 21:52:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 152308 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/152308/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-xl-xsm       7 xen-boot                 fail REGR. vs. 152269

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  64219fa179c3e48adad12bfce3f6b3f1596cccbf
baseline version:
 xen                  b071ec25e85c4aacf3da59e5258cda0b1c4df45d

Last test of basis   152269  2020-07-28 19:05:32 Z    2 days
Testing same since   152288  2020-07-29 19:01:00 Z    1 days    6 attempts

------------------------------------------------------------
People who touched revisions under test:
  Fam Zheng <famzheng@amazon.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 64219fa179c3e48adad12bfce3f6b3f1596cccbf
Author: Fam Zheng <famzheng@amazon.com>
Date:   Wed Jul 29 18:51:45 2020 +0100

    x86/cpuid: Fix APIC bit clearing
    
    The bug is obvious here; other places in this function use
    "cpufeat_mask" correctly.
    
    Fixes: b648feff8ea2 ("xen/x86: Improvements to in-hypervisor cpuid sanity checks")
    Signed-off-by: Fam Zheng <famzheng@amazon.com>
    Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Thu Jul 30 23:07:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 30 Jul 2020 23:07:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1Hde-0004Je-OX; Thu, 30 Jul 2020 23:07:02 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3G63=BJ=amazon.com=prvs=473b2afd7=anchalag@srs-us1.protection.inumbo.net>)
 id 1k1Hdd-0004JZ-9l
 for xen-devel@lists.xenproject.org; Thu, 30 Jul 2020 23:07:01 +0000
X-Inumbo-ID: 5f2241ec-d2b9-11ea-8dee-bc764e2007e4
Received: from smtp-fw-9102.amazon.com (unknown [207.171.184.29])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5f2241ec-d2b9-11ea-8dee-bc764e2007e4;
 Thu, 30 Jul 2020 23:07:00 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=amazon.com; i=@amazon.com; q=dns/txt; s=amazon201209;
 t=1596150421; x=1627686421;
 h=date:from:to:cc:message-id:references:mime-version:
 in-reply-to:subject;
 bh=7vMrFvtUTBg/lhMoMFfi8NrSvzxD5ruFtgpTJwVUKIE=;
 b=Sy3nT18hAsrqiQ2zpjwTU5xG8Gfqb8e59l6yda9R0Hzsr7XoDr9m9nHD
 i3ELluJGFVhVBeD2iBNLK29S6Gs82hkerQnKxL9hzgd5Qfd0IcrOxxDKs
 vdy7k7C40AySq/0w22cGhPrBjTRJxr0WRIWiCA3EGd7hH0GNTEZ8JEiDn k=;
IronPort-SDR: QtZ+XaoPa1wpxqYDTq2G1f0sXs4cmexTY/A+g/lEhuRJZ3WCcxB5sOu3b78fnTjQ8rKbrbn3ky
 jsdilpsZgCJg==
X-IronPort-AV: E=Sophos;i="5.75,415,1589241600"; d="scan'208";a="64407396"
Subject: Re: [PATCH v2 01/11] xen/manage: keep track of the on-going suspend
 mode
Received: from sea32-co-svc-lb4-vlan3.sea.corp.amazon.com (HELO
 email-inbound-relay-1a-e34f1ddc.us-east-1.amazon.com) ([10.47.23.38])
 by smtp-border-fw-out-9102.sea19.amazon.com with ESMTP;
 30 Jul 2020 23:06:55 +0000
Received: from EX13MTAUEE002.ant.amazon.com
 (iad55-ws-svc-p15-lb9-vlan3.iad.amazon.com [10.40.159.166])
 by email-inbound-relay-1a-e34f1ddc.us-east-1.amazon.com (Postfix) with ESMTPS
 id 143F3A2573; Thu, 30 Jul 2020 23:06:47 +0000 (UTC)
Received: from EX13D08UEE001.ant.amazon.com (10.43.62.126) by
 EX13MTAUEE002.ant.amazon.com (10.43.62.24) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Thu, 30 Jul 2020 23:06:35 +0000
Received: from EX13MTAUEA001.ant.amazon.com (10.43.61.82) by
 EX13D08UEE001.ant.amazon.com (10.43.62.126) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Thu, 30 Jul 2020 23:06:35 +0000
Received: from dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com
 (172.22.96.68) by mail-relay.amazon.com (10.43.61.243) with Microsoft SMTP
 Server id 15.0.1497.2 via Frontend Transport; Thu, 30 Jul 2020 23:06:35 +0000
Received: by dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com (Postfix,
 from userid 4335130)
 id BFA4640384; Thu, 30 Jul 2020 23:06:34 +0000 (UTC)
Date: Thu, 30 Jul 2020 23:06:34 +0000
From: Anchal Agarwal <anchalag@amazon.com>
To: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Message-ID: <20200730230634.GA17221@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
References: <20200717191009.GA3387@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
 <5464f384-d4b4-73f0-d39e-60ba9800d804@oracle.com>
 <20200721000348.GA19610@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
 <408d3ce9-2510-2950-d28d-fdfe8ee41a54@oracle.com>
 <alpine.DEB.2.21.2007211640500.17562@sstabellini-ThinkPad-T480s>
 <20200722180229.GA32316@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
 <alpine.DEB.2.21.2007221645430.17562@sstabellini-ThinkPad-T480s>
 <20200723225745.GB32316@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
 <alpine.DEB.2.21.2007241431280.17562@sstabellini-ThinkPad-T480s>
 <66a9b838-70ed-0807-9260-f2c31343a081@oracle.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <66a9b838-70ed-0807-9260-f2c31343a081@oracle.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Precedence: Bulk
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: x86@kernel.org, len.brown@intel.com, peterz@infradead.org,
 benh@kernel.crashing.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
 pavel@ucw.cz, hpa@zytor.com, Stefano Stabellini <sstabellini@kernel.org>,
 eduval@amazon.com, mingo@redhat.com, xen-devel@lists.xenproject.org,
 sblbir@amazon.com, axboe@kernel.dk, konrad.wilk@oracle.com, bp@alien8.de,
 tglx@linutronix.de, jgross@suse.com, netdev@vger.kernel.org,
 linux-pm@vger.kernel.org, rjw@rjwysocki.net, kamatam@amazon.com,
 vkuznets@redhat.com, davem@davemloft.net, dwmw@amazon.co.uk,
 roger.pau@citrix.com
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Mon, Jul 27, 2020 at 06:08:29PM -0400, Boris Ostrovsky wrote:
> CAUTION: This email originated from outside of the organization. Do not click links or open attachments unless you can confirm the sender and know the content is safe.
> 
> 
> 
> On 7/24/20 7:01 PM, Stefano Stabellini wrote:
> > Yes, it does, thank you. I'd rather not introduce unknown regressions, so
> > I would recommend adding an arch-specific check when registering
> > freeze/thaw/restore handlers. Maybe something like the following:
> >
> > #ifdef CONFIG_X86
> >     .freeze = blkfront_freeze,
> >     .thaw = blkfront_restore,
> >     .restore = blkfront_restore
> > #endif
> >
> >
> > maybe Boris has a better suggestion on how to do it
> 
> 
> An alternative might be to still install the pm notifier in
> drivers/xen/manage.c (I think as a result of the latest discussions we
> decided we won't need it) and return -ENOTSUPP for ARM for
> PM_HIBERNATION_PREPARE and friends. Would that work?
>
I think the question here is about registering driver-specific freeze/thaw/restore
callbacks for x86 only. I have dropped the pm_notifier in v3 (still pending
testing), so I think just registering driver-specific callbacks for x86 only is a
good option. What do you think?

Anchal
> 
> -boris
> 
> 
> 


From xen-devel-bounces@lists.xenproject.org Fri Jul 31 00:40:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 00:40:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1J66-0004bi-9e; Fri, 31 Jul 2020 00:40:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=1Hgf=BK=knorrie.org=hans@srs-us1.protection.inumbo.net>)
 id 1k1J64-0004bd-IU
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 00:40:28 +0000
X-Inumbo-ID: 6cb3d5a2-d2c6-11ea-8df7-bc764e2007e4
Received: from syrinx.knorrie.org (unknown [2001:888:2177::4d])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6cb3d5a2-d2c6-11ea-8df7-bc764e2007e4;
 Fri, 31 Jul 2020 00:40:26 +0000 (UTC)
Received: from [IPv6:2a02:a213:2b80:f000::12] (unknown
 [IPv6:2a02:a213:2b80:f000::12])
 (using TLSv1.3 with cipher TLS_AES_128_GCM_SHA256 (128/128 bits)
 key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256)
 (No client certificate requested)
 by syrinx.knorrie.org (Postfix) with ESMTPSA id 61607609C2677
 for <xen-devel@lists.xenproject.org>; Fri, 31 Jul 2020 02:40:25 +0200 (CEST)
To: xen-devel <xen-devel@lists.xenproject.org>
From: Hans van Kranenburg <hans@knorrie.org>
Subject: =?UTF-8?Q?4=2e14=2e0_FTBFS_for_Debian_unstable=2c_libxlu=5fpci=2ec_?=
 =?UTF-8?B?KOKVr8Kw4pahwrDvvInila/vuLUg4pS74pSB4pS7?=
Message-ID: <dab05ef3-4ce8-2177-893d-61168d897821@knorrie.org>
Date: Fri, 31 Jul 2020 02:40:24 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi!

News from the Debian Xen team (well, that's still only Ian and me). We
still have Xen 4.11 in Debian unstable and stable (Buster), but at this
point I really want to start working on the preparations for the next
Debian release, which will happen in a little less than a year from now.

So, the 4.14.0 release is a good moment to kick it off. In February 2020
Ian and I already spent a day moving the Debian packaging to 4.13, and
the result has been lying around for a bit. Now I'm forwarding it to
4.14.0, and I really want to get this into Debian so users can start
playing around with it and have enough time to contribute new things
(like cross-building for the Raspberry Pi 4!).

All the yolo WIP stuff without anything cleaned up is here:
https://salsa.debian.org/xen-team/debian-xen/-/commits/knorrie/4.14

Unfortunately, it FTBFS in an unexpected way: I cannot relate the failure
to any of our additional patches.

This is the last part of the output with the failure:

---- >8 ----

gcc  -m64 -DBUILD_ID -fno-strict-aliasing -std=gnu99 -Wall
-Wstrict-prototypes -Wdeclaration-after-statement
-Wno-unused-but-set-variable -Wno-unused-local-typedefs   -O2
-fomit-frame-pointer
-D__XEN_INTERFACE_VERSION__=__XEN_LATEST_INTERFACE_VERSION__ -MMD -MP
-MF .libxlu_pci.o.d -D_LARGEFILE_SOURCE -D_LARGEFILE64_SOURCE  -g -O2
-fdebug-prefix-map=/home/knorrie/build/xen/debian-xen=.
-fstack-protector-strong -Wformat -Werror=format-security -Wdate-time
-D_FORTIFY_SOURCE=2 -Werror -Wno-format-zero-length
-Wmissing-declarations -Wno-declaration-after-statement
-Wformat-nonliteral -I. -fPIC -pthread
-I/home/knorrie/build/xen/debian-xen/tools/libxl/../../tools/libxc/include
-I/home/knorrie/build/xen/debian-xen/tools/libxl/../../tools/libs/toollog/include
-I/home/knorrie/build/xen/debian-xen/tools/libxl/../../tools/include
-I/home/knorrie/build/xen/debian-xen/tools/libxl/../../tools/libs/foreignmemory/include
-I/home/knorrie/build/xen/debian-xen/tools/libxl/../../tools/include
-I/home/knorrie/build/xen/debian-xen/tools/libxl/../../tools/libs/devicemodel/include
-I/home/knorrie/build/xen/debian-xen/tools/libxl/../../tools/include
-I/home/knorrie/build/xen/debian-xen/tools/libxl/../../tools/include
-D__XEN_TOOLS__   -c -o libxlu_pci.o libxlu_pci.c
libxlu_pci.c: In function 'xlu_pci_parse_bdf':
libxlu_pci.c:32:18: error: 'func' may be used uninitialized in this
function [-Werror=maybe-uninitialized]
   32 |     pcidev->func = func;
      |     ~~~~~~~~~~~~~^~~~~~
libxlu_pci.c:51:29: note: 'func' was declared here
   51 |     unsigned dom, bus, dev, func, vslot = 0;
      |                             ^~~~
libxlu_pci.c:31:17: error: 'dev' may be used uninitialized in this
function [-Werror=maybe-uninitialized]
   31 |     pcidev->dev = dev;
      |     ~~~~~~~~~~~~^~~~~
libxlu_pci.c:51:24: note: 'dev' was declared here
   51 |     unsigned dom, bus, dev, func, vslot = 0;
      |                        ^~~
libxlu_pci.c:30:17: error: 'bus' may be used uninitialized in this
function [-Werror=maybe-uninitialized]
   30 |     pcidev->bus = bus;
      |     ~~~~~~~~~~~~^~~~~
libxlu_pci.c:51:19: note: 'bus' was declared here
   51 |     unsigned dom, bus, dev, func, vslot = 0;
      |                   ^~~
libxlu_pci.c:29:20: error: 'dom' may be used uninitialized in this
function [-Werror=maybe-uninitialized]
   29 |     pcidev->domain = domain;
      |     ~~~~~~~~~~~~~~~^~~~~~~~
libxlu_pci.c:51:14: note: 'dom' was declared here
   51 |     unsigned dom, bus, dev, func, vslot = 0;
      |              ^~~
cc1: all warnings being treated as errors
make[6]: ***
[/home/knorrie/build/xen/debian-xen/tools/libxl/../../tools/Rules.mk:218:
libxlu_pci.o] Error 1
make[6]: Leaving directory '/home/knorrie/build/xen/debian-xen/tools/libxl'
make[5]: ***
[/home/knorrie/build/xen/debian-xen/tools/../tools/Rules.mk:242:
subdir-install-libxl] Error 2
make[5]: Leaving directory '/home/knorrie/build/xen/debian-xen/tools'
make[4]: ***
[/home/knorrie/build/xen/debian-xen/tools/../tools/Rules.mk:237:
subdirs-install] Error 2
make[4]: Leaving directory '/home/knorrie/build/xen/debian-xen/tools'
make[3]: *** [Makefile:72: install] Error 2
make[3]: Leaving directory '/home/knorrie/build/xen/debian-xen/tools'
make[2]: *** [Makefile:134: install-tools] Error 2
make[2]: Leaving directory '/home/knorrie/build/xen/debian-xen'
make[1]: *** [debian/rules:205: override_dh_auto_build] Error 2
make[1]: Leaving directory '/home/knorrie/build/xen/debian-xen'
make: *** [debian/rules:153: build] Error 2
dpkg-buildpackage: error: debian/rules build subprocess returned exit
status 2

---- >8 ----

I have already lost the rest of the output, but if needed I can reproduce
the failure and provide a full build log with the specific package
versions of the compilers etc. In any case, the build just used all the
current packages in Debian unstable.

Knorrie

P.S. The next big thing to fix in the packaging before it can go into
Debian unstable is removing all usage of Python 2. I might ask some
questions about that later.


From xen-devel-bounces@lists.xenproject.org Fri Jul 31 00:59:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 00:59:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1JOh-0005fG-Ud; Fri, 31 Jul 2020 00:59:43 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OD0g=BK=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1k1JOg-0005et-Uh
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 00:59:42 +0000
X-Inumbo-ID: 1a70f3e4-d2c9-11ea-ab5d-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1a70f3e4-d2c9-11ea-ab5d-12813bfff9fa;
 Fri, 31 Jul 2020 00:59:36 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=Xhfzle9vTAsczlLl3KpcQ3JU/lUbjCD79JAs4y5Teok=; b=4cj+Qhfl7ELtjFdoUxjUEULpA
 DgWU8m4EcW3e9mZ3J4aeAhw0rsYh9/4kJlknr+mps8nOA2uDsmlFeUwTAunK3URIrtXyWYphc7Q6H
 v3IKbqhUs3QO8Zj6lmcUq7UnZMOvRJki22U15Rbw86FrWiQqQBG2ccoalGl2sDL07/0xg=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1k1JOZ-0008LR-QW; Fri, 31 Jul 2020 00:59:35 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1k1JOZ-0005ah-4q; Fri, 31 Jul 2020 00:59:35 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1k1JOZ-0004d5-3v; Fri, 31 Jul 2020 00:59:35 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-152310-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 152310: tolerable all pass - PUSHED
X-Osstest-Failures: xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=98bed5de1de3352c63cfe29a00f17e8d9ce72689
X-Osstest-Versions-That: xen=b071ec25e85c4aacf3da59e5258cda0b1c4df45d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 31 Jul 2020 00:59:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 152310 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/152310/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  98bed5de1de3352c63cfe29a00f17e8d9ce72689
baseline version:
 xen                  b071ec25e85c4aacf3da59e5258cda0b1c4df45d

Last test of basis   152269  2020-07-28 19:05:32 Z    2 days
Failing since        152288  2020-07-29 19:01:00 Z    1 days    7 attempts
Testing same since   152310  2020-07-30 22:00:29 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Fam Zheng <famzheng@amazon.com>
  Roger Pau Monne <roger.pau@citrix.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   b071ec25e8..98bed5de1d  98bed5de1de3352c63cfe29a00f17e8d9ce72689 -> smoke


From xen-devel-bounces@lists.xenproject.org Fri Jul 31 01:16:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 01:16:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1Jee-0008QK-CF; Fri, 31 Jul 2020 01:16:12 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=F22U=BK=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1k1Jec-0008QF-OW
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 01:16:10 +0000
X-Inumbo-ID: 6aa3b138-d2cb-11ea-8df9-bc764e2007e4
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6aa3b138-d2cb-11ea-8df9-bc764e2007e4;
 Fri, 31 Jul 2020 01:16:10 +0000 (UTC)
Received: from localhost (c-67-164-102-47.hsd1.ca.comcast.net [67.164.102.47])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256
 bits)) (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 20744208E4;
 Fri, 31 Jul 2020 01:16:09 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
 s=default; t=1596158169;
 bh=+/WozuFjWE7vvJYgQCOsmMDkf3muLMjbRPBlc7Gn768=;
 h=Date:From:To:cc:Subject:In-Reply-To:References:From;
 b=P6bhlhRb86o/7urfxeC/miJn9H84X7Oh4EKBhkVJqbjc0d/Ec2RIiobcU2qvUjlHb
 oO8+OFwRnfuv+bvax3UVfFkT8JN+bEN6GlYENQwrTDwF/Wfk4ycQY+bKvHCClqjeT4
 cVWCl/ar73l26tW/JkXr15ENMOW7KGZId/xNSeOM=
Date: Thu, 30 Jul 2020 18:16:08 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Ian Jackson <ian.jackson@citrix.com>
Subject: Re: kernel-doc and xen.git
In-Reply-To: <24354.50708.138178.815210@mariner.uk.xensource.com>
Message-ID: <alpine.DEB.2.21.2007300956020.1767@sstabellini-ThinkPad-T480s>
References: <alpine.DEB.2.21.2007291644330.1767@sstabellini-ThinkPad-T480s>
 <785FBD2D-A67C-4740-9C5B-2ECCD0AEBFFC@citrix.com>
 <24354.50708.138178.815210@mariner.uk.xensource.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: multipart/mixed; BOUNDARY="8323329-1959419991-1596128184=:1767"
Content-ID: <alpine.DEB.2.21.2007300957310.1767@sstabellini-ThinkPad-T480s>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Committers <committers@xenproject.org>,
 George Dunlap <George.Dunlap@citrix.com>,
 "Bertrand.Marquis@arm.com" <Bertrand.Marquis@arm.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--8323329-1959419991-1596128184=:1767
Content-Type: text/plain; CHARSET=UTF-8
Content-Transfer-Encoding: 8BIT
Content-ID: <alpine.DEB.2.21.2007300957311.1767@sstabellini-ThinkPad-T480s>

On Thu, 30 Jul 2020, Ian Jackson wrote:
> George Dunlap writes ("Re: kernel-doc and xen.git"):
> > > On Jul 30, 2020, at 2:27 AM, Stefano Stabellini <sstabellini@kernel.org> wrote:
> ...
> > > I did give a look at kernel-doc and it is very promising. kernel-doc is
> > > a script that can generate nice rst text documents from in-code
> > > comments. (The generated rst files can then be used as input for sphinx
> > > to generate html docs.) The comment syntax [2] is simple and similar to
> > > Doxygen:
> > > 
> > >    /**
> > >     * function_name() - Brief description of function.
> > >     * @arg1: Describe the first argument.
> > >     * @arg2: Describe the second argument.
> > >     *        One can provide multiple line descriptions
> > >     *        for arguments.
> > >     */
> > > 
> > > kernel-doc is actually better than Doxygen because it is a much simpler
> > > tool, one we could customize to our needs and with predictable output.
> > > Specifically, we could add the tagging, numbering, and referencing
> > > required by FuSa requirement documents.
> > > 
> > > I would like your feedback on whether it would be good to start
> > > converting xen.git in-code comments to the kernel-doc format so that
> > > proper documents can be generated out of them. One day we could import
> > > kernel-doc into xen.git/scripts and use it to generate a set of html
> > > documents via sphinx.
> > 
> > `git-grep ‘^/\*\*$’ ` turns up loads of instances of kernel-doc-style comments in the tree already.  I think it makes complete sense to:
> > 
> > 1. Start using tools to pull the existing ones into sphinx docs
> > 2. Skim through the existing ones to make sure they’re accurate / useful
> > 3. Add such comments for elements of key importance to the FUSA SIG
> > 4. Encourage people to include documentation for new features, &c
> 
> I have no objection to this.  Indeed switching to something the kernel
> folks find useable is likely to be a good idea.
> 
> We should ideally convert the existing hypercall documentation, which
> is parsed from a bespoke magic comment format by a script in xen.git.

I agree.

Great, thank you both for the feedback!
--8323329-1959419991-1596128184=:1767--


From xen-devel-bounces@lists.xenproject.org Fri Jul 31 01:16:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 01:16:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1Jey-0008RF-KY; Fri, 31 Jul 2020 01:16:32 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=F22U=BK=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1k1Jex-0008R4-CO
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 01:16:31 +0000
X-Inumbo-ID: 76c934f6-d2cb-11ea-ab62-12813bfff9fa
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 76c934f6-d2cb-11ea-ab62-12813bfff9fa;
 Fri, 31 Jul 2020 01:16:30 +0000 (UTC)
Received: from localhost (c-67-164-102-47.hsd1.ca.comcast.net [67.164.102.47])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256
 bits)) (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id A10AC208E4;
 Fri, 31 Jul 2020 01:16:29 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
 s=default; t=1596158189;
 bh=oQnDEu+vx9knS9OTqiu3fPGLwbvblGwGetKQP0J6G4U=;
 h=Date:From:To:cc:Subject:In-Reply-To:References:From;
 b=f2OWWMyknAxPzQgwAzEXK5myl9fNMuw6eVOFGvRKygAr9RXWMbDoLW/6ZVdm1wALC
 t7xKBJtuwVYJ4e7yizrIoGI2SptfwLuN5UN3No6hvZdX8W+M0eUOvoGCLbxKAnILxZ
 QFXL9L73I68O2lORjwatHMsZgCOZtqK9wpg9+Mjc=
Date: Thu, 30 Jul 2020 18:16:29 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: srini@yujala.com
Subject: Re: Porting Xen to Jetson Nano
In-Reply-To: <bd49b460d390cd547ea0ca77e5a20f2d@yujala.com>
Message-ID: <alpine.DEB.2.21.2007301303340.1767@sstabellini-ThinkPad-T480s>
References: <002801d66051$90fe2300$b2fa6900$@yujala.com>
 <9736680b-1c81-652b-552b-4103341bad50@xen.org>
 <000001d661cb$45cdaa10$d168fe30$@yujala.com>
 <5f985a6a-1bd6-9e68-f35f-b0b665688cee@xen.org>
 <67c102642b0932d88ab2f70e96742ef0@yujala.com>
 <alpine.DEB.2.21.2007291756380.1767@sstabellini-ThinkPad-T480s>
 <bd49b460d390cd547ea0ca77e5a20f2d@yujala.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien@xen.org>,
 'Christopher Clark' <christopher.w.clark@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Thu, 30 Jul 2020, srini@yujala.com wrote:
> On 2020-07-30 01:27, Stefano Stabellini wrote:
> > On Wed, 29 Jul 2020, srini@yujala.com wrote:
> > > Hi Julien,
> > > 
> > > On 2020-07-24 17:25, Julien Grall wrote:
> > > > On 24/07/2020 16:01, Srinivas Bangalore wrote:
> > > >
> > > > I struggled to find your comment inline as your e-mail client doesn't
> > > > quote my answer. Please configure your e-mail client to use some form
> > > > of quoting (the usual is '>').
> > > >
> > > >
> > > I have switched to a web-based email client, so I hope this is better now.
> > 
> > Seems better, thank you
> > 
> > 
> > > > > (XEN) Freed 296kB init memory.
> > > > > (XEN) dom0 IPA 0x0000000088080000
> > > > > (XEN) P2M @ 0000000803fc3d40 mfn:0x17f0f5
> > > > > (XEN) 0TH[0x0] = 0x004000017f0f377f
> > > > > (XEN) 1ST[0x2] = 0x02c00000800006fd
> > > > > (XEN) Mem access check
> > > > > (XEN) dom0 IPA 0x0000000088080000
> > > > > (XEN) P2M @ 0000000803fc3d40 mfn:0x17f0f5
> > > > > (XEN) 0TH[0x0] = 0x004000017f0f377f
> > > > > (XEN) 1ST[0x2] = 0x02c00000800006fd
> > > > > (XEN) Mem access check
> > > >
> > > > The instruction abort issue looks normal as the mapping is marked as
> > > > non-executable.
> > > >
> > > > Looking at the rest of the bits, bits 55:58 indicate the type of mapping
> > > > used. The value suggests the mapping has been created as
> > > > p2m_mmio_direct_c (RW cacheable MMIO). This looks wrong to me because
> > > > RAM should be mapped using p2m_ram_rw.
> > > >
> > > > Looking at your DT, it looks like the region is marked as reserved. On
> > > > Xen 4.8, reserved-memory regions are not correctly handled (IIRC this
> > > > was only fixed in Xen 4.13). This should be possible to confirm by
> > > > enabling CONFIG_DEVICE_TREE_DEBUG in your .config.
> > > >
> > > > The option will print more debug information about the dom0 mappings on
> > > > your console.
> > > >
> > > > However, given you are using an old release, you are at risk of
> > > > repeatedly finding bugs that have been resolved in more recent releases.
> > > > It would probably be better if you switched to Xen 4.14 and reported any
> > > > bug you find there.
> > > >
> > > Ok. I applied the patch series to 4.14. Enabled EARLY_PRINTK,
> > > CONFIG_DEBUG and DEVICE_TREE_DEBUG.
> > > Here's the log...
> > > 
> > > ## Flattened Device Tree blob at e3500000
> > >    Booting using the fdt blob at 0xe3500000
> > >    reserving fdt memory region: addr=80000000 size=20000
> > >    reserving fdt memory region: addr=e3500000 size=35000
> > >    Loading Device Tree to 00000000fc7f8000, end 00000000fc82ffff ... OK
> > > 
> > > Starting kernel ...
> > > 
> > > - UART enabled -
> > > - Boot CPU booting -
> > > - Current EL 00000008 -
> > > - Initialize CPU -
> > > - Turning on paging -
> > > - Zero BSS -
> > > - Ready -
> > > (XEN) Invalid size for reg
> > > (XEN) fdt: node `reserved-memory': parsing failed
> > > (XEN)
> > > (XEN) MODULE[0]: 00000000e0000000 - 00000000e014b0c8 Xen
> > > (XEN) MODULE[1]: 00000000fc7f8000 - 00000000fc82d000 Device Tree
> > > (XEN)  RESVD[0]: 0000000080000000 - 0000000080020000
> > > (XEN)  RESVD[1]: 00000000e3500000 - 00000000e3535000
> > > (XEN)  RESVD[2]: 00000000fc7f8000 - 00000000fc82d000
> > > (XEN)  RESVD[3]: 0000000040001000 - 000000004003ffff
> > > (XEN)  RESVD[4]: 00000000b0000000 - 00000000b01fffff
> > > (XEN)
> > > (XEN)
> > > (XEN) Command line: console=dtuart sync_console dom0_mem=128M loglvl=debug
> > > guest_loglvl=debug console_to_ring
> > > (XEN) Xen BUG at page_alloc.c:398
> > > (XEN) ----[ Xen-4.14.0  arm64  debug=y   Not tainted ]----
> > > (XEN) CPU:    0
> > > (XEN) PC:     00000000002b7b90 alloc_boot_pages+0x38/0x9c
> > > (XEN) LR:     00000000002cda04
> > > (XEN) SP:     0000000000307d40
> > > (XEN) CPSR:   a00003c9 MODE:64-bit EL2h (Hypervisor, handler)
> > > (XEN)      X0: 000fc80000002000  X1: 0000000000002000  X2: 0000000000000000
> > > (XEN)      X3: 000fffffffffffff  X4: ffffffffffffffff  X5: 0000000000000000
> > > (XEN)      X6: 0000000000307df0  X7: 0000000000000003  X8: 0000000000000008
> > > (XEN)      X9: fffffffffffffffb X10: 0101010101010101 X11: 0000000000000007
> > > (XEN)     X12: 0000000000000004 X13: ffffffffffffffff X14: ffffffffff000000
> > > (XEN)     X15: ffffffffffffffff X16: 0000000000000000 X17: 0000000000000000
> > > (XEN)     X18: 00000000fc834dd0 X19: 00000000002b5000 X20: 00000000fc7f8000
> > > (XEN)     X21: 00000000fc7f8000 X22: 0000000000000000 X23: fc80000000000038
> > > (XEN)     X24: 00000000fed9de28 X25: ffffffffffffffff X26: fc80000002000000
> > > (XEN)     X27: 0000000002000000 X28: 0000000000000000  FP: 0000000000307d40
> > > (XEN)
> > > (XEN)   VTCR_EL2: 80000000
> > > (XEN)  VTTBR_EL2: 0000000000000000
> > > (XEN)
> > > (XEN)  SCTLR_EL2: 30cd183d
> > > (XEN)    HCR_EL2: 0000000000000038
> > > (XEN)  TTBR0_EL2: 00000000e0145000
> > > (XEN)
> > > (XEN)    ESR_EL2: f2000001
> > > (XEN)  HPFAR_EL2: 0000000000000000
> > > (XEN)    FAR_EL2: 0000000000000000
> > > (XEN)
> > > (XEN) Xen stack trace from sp=0000000000307d40:
> > > (XEN)    0000000000307df0 00000000002cf114 0000000000000000
> > > 0000000000307d68
> > > (XEN)    00000000fc7f8000 00000000002ceeb0 0000000000400000
> > > 00676e69725f6f74
> > > (XEN)    ffffffffffffffff 0000000000000000 0000000000000000
> > > 0000000000307df0
> > > (XEN)    0000000000307df0 00000000002cef58 000000003fffffff
> > > 00000000fc7f8000
> > > (XEN)    00000000fc7f8000 000fc80000002000 0000000000400000
> > > 0080000000000000
> > > (XEN)    0000000000000000 000000000003ffff 00000000fc831170
> > > 00000000002001b8
> > > (XEN)    00000000e0000000 00000000dfe00000 00000000fc7f8000
> > > 0000000000000000
> > > (XEN)    0000000000400000 00000000fed9de28 0000000000000000
> > > 0000000000000000
> > > (XEN)    0000000000000000 0000000000000400 0000000000000000
> > > 0000000000035000
> > > (XEN)    00000000fc7f8000 0000000000000000 0000000000000000
> > > 0000000000000000
> > > (XEN)    0000000000000000 0000000300000000 0000000000000000
> > > 00000040ffffffff
> > > (XEN)    00000000ffffffff 0000000000000000 0000000000000000
> > > 0000000000000000
> > > (XEN)    0000000000000000 0000000000000000 0000000000000000
> > > 0000000000000000
> > > (XEN)    0000000000000000 0000000000000000 0000000000000000
> > > 0000000000000000
> > > (XEN)    0000000000000000 0000000000000000 0000000000000000
> > > 0000000000000000
> > > (XEN)    0000000000000000 0000000000000000 0000000000000000
> > > 0000000000000000
> > > (XEN)    0000000000000000 0000000000000000 0000000000000000
> > > 0000000000000000
> > > (XEN)    0000000000000000 0000000000000000 0000000000000000
> > > 0000000000000000
> > > (XEN)    0000000000000000 0000000000000000 0000000000000000
> > > 0000000000000000
> > > (XEN)    0000000000000000 0000000000000000 0000000000000000
> > > 0000000000000000
> > > (XEN)    0000000000000000 0000000000000000 0000000000000000
> > > 0000000000000000
> > > (XEN)    0000000000000000 0000000000000000 0000000000000000
> > > 0000000000000000
> > > (XEN) Xen call trace:
> > > (XEN)    [<00000000002b7b90>] alloc_boot_pages+0x38/0x9c (PC)
> > > (XEN)    [<00000000002cda04>] setup_frametable_mappings+0xb4/0x310 (LR)
> > > (XEN)    [<00000000002cf114>] start_xen+0x3a0/0xc48
> > > (XEN)    [<00000000002001b8>] arm64/head.o#primary_switched+0x10/0x30
> > > (XEN)
> > > (XEN)
> > > (XEN) ****************************************
> > > (XEN) Panic on CPU 0:
> > > (XEN) Xen BUG at page_alloc.c:398
> > > (XEN) ****************************************
> > > (XEN)
> > > (XEN) Reboot in five seconds...
> > > 
> > > There seems to be a problem with the DT in the 'reserved-memory' node.
> > > I commented out the fb0-carveout and fb1-carveout sections, recompiled,
> > > and tried to boot again.
> > 
> > Yes, those reserved-memory nodes won't work correctly with Xen
> > unfortunately: they either use "size" instead of "reg" (vpr-carveout) or
> > they specify "no-map". Only regular "reg" reserved memory regions
> > without "no-map" are properly parsed by Xen at the moment.
> > 
> > 
> 
> I'll try to modify the nodes that use 'size' and replace it with 'reg'.
> 
> > 
> > > This time the log shows the device tree messages (see attached log
> > > file), but Xen fails at this point...
> > > 
> > > (XEN) Allocating PPI 16 for event channel interrupt
> > > (XEN) Create hypervisor node
> > > (XEN) Create PSCI node
> > > (XEN) Create cpus node
> > > (XEN) Create cpu@0 (logical CPUID: 0) node
> > > (XEN) Create cpu@1 (logical CPUID: 1) node
> > > (XEN) Create cpu@2 (logical CPUID: 2) node
> > > (XEN) Create cpu@3 (logical CPUID: 3) node
> > > (XEN) Create memory node (reg size 4, nr cells 4)
> > > (XEN)   Bank 0: 0xe8000000->0xf0000000
> > > (XEN) Create memory node (reg size 4, nr cells 8)
> > > (XEN)   Bank 0: 0x40001000->0x40040000
> > > (XEN)   Bank 1: 0xb0000000->0xb0200000
> > > (XEN) Loading zImage from 00000000e1000000 to 00000000e8080000-00000000ea23c808
> > > (XEN)
> > > (XEN) ****************************************
> > > (XEN) Panic on CPU 0:
> > > (XEN) Unable to copy the kernel in the hwdom memory
> > > (XEN) ****************************************
> > > (XEN)
> > > 
> > > Device tree and log file attached. Is there an issue with the DT? Any
> > > pointers
> > > on where I should be looking next?
> > 
> > Is it possible that the kernel image was loaded on a memory area not
> > recognized as ram?
> > 
> > So xen/arch/arm/guestcopy.c:translate_get_page fails the check
> > p2m_is_ram?
> > 
> The failure happens before p2m_is_ram is called.
> This line:
> page = get_page_from_gfn(info.gpa.d, paddr_to_pfn(addr), &p2mt, P2M_ALLOC);
> 
> returns a NULL pointer.

Could you add a couple of printks to find out exactly where it fails
inside get_page_from_gfn? I imagine it would be somewhere in
xen/arch/arm/p2m.c:p2m_get_page_from_gfn?


> > That would happen for instance if a device or special node is also
> > covering that address range.
> 
> Is there a way to check such conflicts?

Nothing automatic. I went through the DTS manually and couldn't spot
any conflicts unfortunately.


On a different note, I noticed that you gave 128MB to Dom0, which is
not much; I mention it because sometimes when you don't give enough
memory to dom0 you get similar errors. It doesn't look like this is the
source of the issue we are seeing here, but I thought it was worth
mentioning.
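For example, the Xen command line from the log above with a larger dom0 allocation (the 512M value is purely illustrative) would be:

```
console=dtuart sync_console dom0_mem=512M loglvl=debug guest_loglvl=debug console_to_ring
```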

Also to help with debugging, you might want to use an Image for Dom0,
rather than a zImage, just to eliminate another potential source of
issues.
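As a reference for converting the 'size' carveouts mentioned earlier: the only reserved-memory shape Xen currently parses is a static "reg" region without "no-map". A hedged sketch of such a node (the node name and addresses are illustrative, loosely modelled on the 0xb0000000 reserved region in your log):

```
reserved-memory {
    #address-cells = <2>;
    #size-cells = <2>;
    ranges;

    my-carveout@b0000000 {
        /* static "reg" region, no "no-map": handled by Xen */
        reg = <0x0 0xb0000000 0x0 0x200000>;
    };
};
```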


From xen-devel-bounces@lists.xenproject.org Fri Jul 31 01:18:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 01:18:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1Jgu-0000CC-6L; Fri, 31 Jul 2020 01:18:32 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=F22U=BK=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1k1Jgt-0000C6-2E
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 01:18:31 +0000
X-Inumbo-ID: be408bae-d2cb-11ea-ab62-12813bfff9fa
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id be408bae-d2cb-11ea-ab62-12813bfff9fa;
 Fri, 31 Jul 2020 01:18:30 +0000 (UTC)
Received: from localhost (c-67-164-102-47.hsd1.ca.comcast.net [67.164.102.47])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256
 bits)) (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 63562208E4;
 Fri, 31 Jul 2020 01:18:29 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
 s=default; t=1596158309;
 bh=NkC+gVIfv+TVu+bSb9zrp8Mjd4JMr0fGQvlKvKQx2R0=;
 h=Date:From:To:cc:Subject:In-Reply-To:References:From;
 b=s3cpMjRuDvkxuTue7PxQlzfcYpp7TsZJLRhH4xlHhU4sY4AnN41z0BYskKWlc+6qO
 vCLUvrjjK63nFexHO2dp57N7smEMtD1oFUa27FN2dbTE3Q6J/Ld78WwqcY+pqLxT46
 BSJbuHvoA/ppA2cTV3hPS5FIkZXxPylkPltNUFvM=
Date: Thu, 30 Jul 2020 18:18:28 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien@xen.org>
Subject: Re: [PATCH v3] xen/arm: Convert runstate address during hypcall
In-Reply-To: <f48f81d5-589e-3f75-1044-583114bf497e@xen.org>
Message-ID: <alpine.DEB.2.21.2007301422030.1767@sstabellini-ThinkPad-T480s>
References: <3911d221ce9ed73611b93aa437b9ca227d6aa201.1596099067.git.bertrand.marquis@arm.com>
 <f48f81d5-589e-3f75-1044-583114bf497e@xen.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Bertrand Marquis <bertrand.marquis@arm.com>, Jan Beulich <jbeulich@suse.com>,
 xen-devel@lists.xenproject.org, nd@arm.com,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Thu, 30 Jul 2020, Julien Grall wrote:
> Hi Bertrand,
> 
> To avoid extra work on your side, I would recommend waiting a bit before
> sending a new version. It would be good to at least settle the conversation
> in v2 regarding the approach taken.
> 
> On 30/07/2020 11:24, Bertrand Marquis wrote:
> > At the moment on Arm, a Linux guest running with KTPI enabled will
> > cause the following error when a context switch happens in user mode:
> > (XEN) p2m.c:1890: d1v0: Failed to walk page-table va 0xffffff837ebe0cd0
> > 
> > The error is caused by the virtual address for the runstate area
> > registered by the guest only being accessible when the guest is running
> > in kernel space when KPTI is enabled.
> > 
> > To solve this issue, this patch is doing the translation from virtual
> > address to physical address during the hypercall and mapping the
> > required pages using vmap. This is removing the conversion from virtual
> > to physical address during the context switch which is solving the
> > problem with KPTI.
> 
> To echo what Jan said on the previous version, this is a change in a stable
> ABI and therefore may break existing guests. FAOD, I agree in principle with
> the idea. However, we want to explain why breaking the ABI is the *only*
> viable solution.
> 
> From my understanding, it is not possible to fix this without an ABI breakage
> because the hypervisor doesn't know when the guest will switch back from
> userspace to kernel space. The risk is that the runstate wouldn't contain
> accurate information and could affect how the guest handles stolen time.
> 
> Additionally there are a few issues with the current interface:
>    1) It assumes the virtual address cannot be re-used by userspace.
> Thankfully Linux has a split address space, but this may change with KPTI
> in place.
>    2) When updating the page-tables, the guest has to go through an invalid
> mapping, so the translation may fail at any point.
> 
> IOW, the existing interface can lead to random memory corruption and
> inaccuracy of the stolen time.
> 
> > 
> > This is done only on arm architecture, the behaviour on x86 is not
> > modified by this patch and the address conversion is done as before
> > during each context switch.
> > 
> > This is introducing several limitations in comparison to the previous
> > behaviour (on arm only):
> > - if the guest is remapping the area at a different physical address Xen
> > will continue to update the area at the previous physical address. As
> > the area is in kernel space and usually defined as a global variable this
> > is something which is believed not to happen. If this is required by a
> > guest, it will have to call the hypercall with the new area (even if it
> > is at the same virtual address).
> > - the area needs to be mapped during the hypercall. For the same reasons
> > as for the previous case, even if the area is registered for a different
> > vcpu. It is believed that registering an area using a virtual address
> > unmapped is not something done.
> 
> It is not clear whether the virtual address refers to the current vCPU or
> the vCPU you register the runstate for. From the past discussion, I think
> you mean the former. It would be good to clarify.
> 
> Additionally, all the new restrictions should be documented in the public
> interface, so an OS developer can find the differences between the
> architectures.

Just to paraphrase what Julien wrote, it would be good to improve the
commit message with the points suggested and also write a note in the
header file about the changes to the interface.


> To answer Jan's concern, we certainly don't know all the guest OSes in
> existence; however, we also need to balance that against the benefit for the
> large majority of users.
> 
> From previous discussion, the current approach was deemed to be acceptable
> on Arm and, AFAICT, also on x86 (see [1]).
>
> TBH, I would rather see the approach be common. For that, we would need an
> agreement from Andrew and Jan on the approach here. Meanwhile, I think this
> is the best approach to address the concern from Arm users.

+1


> > inline functions in headers could not be used as the architecture
> > domain.h is included before the global domain.h, making it impossible
> > to use struct vcpu inside the architecture header.
> > This should not have any performance impact as the hypercall is
> > usually only called once per vcpu.
> > 
> > Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
> > 
> > ---
> >    Changes in v2
> >      - use vmap to map the pages during the hypercall.
> >      - reintroduce initial copy during hypercall.
> > 
> >    Changes in v3
> >      - Fix Coding style
> >      - Fix vaddr printing on arm32
> >      - use write_atomic to modify state_entry_time update bit (only
> >      in guest structure as the bit is not used inside Xen copy)
> > 
> > ---
> >   xen/arch/arm/domain.c        | 161 ++++++++++++++++++++++++++++++-----
> >   xen/arch/x86/domain.c        |  29 ++++++-
> >   xen/arch/x86/x86_64/domain.c |   4 +-
> >   xen/common/domain.c          |  19 ++---
> >   xen/include/asm-arm/domain.h |   9 ++
> >   xen/include/asm-x86/domain.h |  16 ++++
> >   xen/include/xen/domain.h     |   5 ++
> >   xen/include/xen/sched.h      |  16 +---
> >   8 files changed, 206 insertions(+), 53 deletions(-)
> > 
> > diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
> > index 31169326b2..8b36946017 100644
> > --- a/xen/arch/arm/domain.c
> > +++ b/xen/arch/arm/domain.c
> > @@ -19,6 +19,7 @@
> >   #include <xen/sched.h>
> >   #include <xen/softirq.h>
> >   #include <xen/wait.h>
> > +#include <xen/vmap.h>
> >     #include <asm/alternative.h>
> >   #include <asm/cpuerrata.h>
> > @@ -275,36 +276,156 @@ static void ctxt_switch_to(struct vcpu *n)
> >       virt_timer_restore(n);
> >   }
> >   -/* Update per-VCPU guest runstate shared memory area (if registered). */
> > -static void update_runstate_area(struct vcpu *v)
> > +static void cleanup_runstate_vcpu_locked(struct vcpu *v)
> >   {
> > -    void __user *guest_handle = NULL;
> > +    if ( v->arch.runstate_guest )
> > +    {
> > +        vunmap((void *)((unsigned long)v->arch.runstate_guest & PAGE_MASK));
> > +
> > +        put_page(v->arch.runstate_guest_page[0]);
> > +
> > +        if ( v->arch.runstate_guest_page[1] )
> > +            put_page(v->arch.runstate_guest_page[1]);
> > +
> > +        v->arch.runstate_guest = NULL;
> > +    }
> > +}
> > +
> > +void arch_vcpu_cleanup_runstate(struct vcpu *v)
> > +{
> > +    spin_lock(&v->arch.runstate_guest_lock);
> > +
> > +    cleanup_runstate_vcpu_locked(v);
> > +
> > +    spin_unlock(&v->arch.runstate_guest_lock);
> > +}
> > +
> > +static int setup_runstate_vcpu_locked(struct vcpu *v, vaddr_t vaddr)
> > +{
> > +    unsigned int offset;
> > +    mfn_t mfn[2];
> > +    struct page_info *page;
> > +    unsigned int numpages;
> >       struct vcpu_runstate_info runstate;
> > +    void *p;
> >   -    if ( guest_handle_is_null(runstate_guest(v)) )
> > -        return;
> > +    /* user can pass a NULL address to unregister a previous area */
> > +    if ( vaddr == 0 )
> > +        return 0;
> > +
> > +    offset = vaddr & ~PAGE_MASK;
> > +
> > +    /* provided address must be aligned to a 64bit */
> > +    if ( offset % alignof(struct vcpu_runstate_info) )
> 
> This new restriction wants to be explained in the commit message and public
> header.
> 
> > +    {
> > +        gprintk(XENLOG_WARNING, "Cannot map runstate pointer at 0x%"PRIvaddr
> > +                ": Invalid offset\n", vaddr);
> 
> We usually enforce 80 character per lines except for format string. So it is
> easier to grep them in the code.
> 
> > +        return -EINVAL;
> > +    }
> > +
> > +    page = get_page_from_gva(v, vaddr, GV2M_WRITE);
> > +    if ( !page )
> > +    {
> > +        gprintk(XENLOG_WARNING, "Cannot map runstate pointer at 0x%"PRIvaddr
> > +                ": Page is not mapped\n", vaddr);
> > +        return -EINVAL;
> > +    }
> > +
> > +    mfn[0] = page_to_mfn(page);
> > +    v->arch.runstate_guest_page[0] = page;
> > +
> > +    if ( offset > (PAGE_SIZE - sizeof(struct vcpu_runstate_info)) )
> > +    {
> > +        /* guest area is crossing pages */
> > +        page = get_page_from_gva(v, vaddr + PAGE_SIZE, GV2M_WRITE);
> > +        if ( !page )
> > +        {
> > +            put_page(v->arch.runstate_guest_page[0]);
> > +            gprintk(XENLOG_WARNING,
> > +                    "Cannot map runstate pointer at 0x%"PRIvaddr
> > +                    ": 2nd Page is not mapped\n", vaddr);
> > +            return -EINVAL;
> > +        }
> > +        mfn[1] = page_to_mfn(page);
> > +        v->arch.runstate_guest_page[1] = page;
> > +        numpages = 2;
> > +    }
> > +    else
> > +    {
> > +        v->arch.runstate_guest_page[1] = NULL;
> > +        numpages = 1;
> > +    }
> >   -    memcpy(&runstate, &v->runstate, sizeof(runstate));
> > +    p = vmap(mfn, numpages);
> > +    if ( !p )
> > +    {
> > +        put_page(v->arch.runstate_guest_page[0]);
> > +        if ( numpages == 2 )
> > +            put_page(v->arch.runstate_guest_page[1]);
> >   -    if ( VM_ASSIST(v->domain, runstate_update_flag) )
> > +        gprintk(XENLOG_WARNING, "Cannot map runstate pointer at 0x%"PRIvaddr
> > +                ": vmap error\n", vaddr);
> > +        return -EINVAL;
> > +    }
> > +
> > +    v->arch.runstate_guest = p + offset;
> > +
> > +    if (v == current)
> > +        memcpy(v->arch.runstate_guest, &v->runstate, sizeof(v->runstate));
> > +    else
> >       {
> > -        guest_handle = &v->runstate_guest.p->state_entry_time + 1;
> > -        guest_handle--;
> > -        runstate.state_entry_time |= XEN_RUNSTATE_UPDATE;
> > -        __raw_copy_to_guest(guest_handle,
> > -                            (void *)(&runstate.state_entry_time + 1) - 1, 1);
> > -        smp_wmb();
> > +        vcpu_runstate_get(v, &runstate);
> > +        memcpy(v->arch.runstate_guest, &runstate, sizeof(v->runstate));
> >       }
> >   -    __copy_to_guest(runstate_guest(v), &runstate, 1);
> > +    return 0;
> > +}
> > +
> > +int arch_vcpu_setup_runstate(struct vcpu *v,
> > +                             struct vcpu_register_runstate_memory_area area)
> > +{
> > +    int rc;
> > +
> > +    spin_lock(&v->arch.runstate_guest_lock);
> > +
> > +    /* cleanup if we are recalled */
> > +    cleanup_runstate_vcpu_locked(v);
> > +
> > +    rc = setup_runstate_vcpu_locked(v, (vaddr_t)area.addr.v);
> > +
> > +    spin_unlock(&v->arch.runstate_guest_lock);
> >   -    if ( guest_handle )
> > +    return rc;
> > +}
> > +
> > +
> > +/* Update per-VCPU guest runstate shared memory area (if registered). */
> > +static void update_runstate_area(struct vcpu *v)
> > +{
> > +    spin_lock(&v->arch.runstate_guest_lock);
> > +
> > +    if ( v->arch.runstate_guest )
> >       {
> > -        runstate.state_entry_time &= ~XEN_RUNSTATE_UPDATE;
> > -        smp_wmb();
> > -        __raw_copy_to_guest(guest_handle,
> > -                            (void *)(&runstate.state_entry_time + 1) - 1, 1);
> > +        if ( VM_ASSIST(v->domain, runstate_update_flag) )
> > +        {
> > +            v->runstate.state_entry_time |= XEN_RUNSTATE_UPDATE;
> > +            write_atomic(&(v->arch.runstate_guest->state_entry_time),
> > +                    v->runstate.state_entry_time);
> 
> NIT: You want to indent v-> at the same level as the argument from the first
> line.
> 
> Also, I think you are missing a smp_wmb() here.

I just wanted to add that I reviewed the patch and aside from the
smp_wmb (and the couple of code style NITs), there is no other issue in
the patch that I could find. No further comments from my side.


> > +        }
> > +
> > +        memcpy(v->arch.runstate_guest, &v->runstate, sizeof(v->runstate));
> > +
> > +        if ( VM_ASSIST(v->domain, runstate_update_flag) )
> > +        {
> > +            /* copy must be done before switching the bit */
> > +            smp_wmb();
> > +            v->runstate.state_entry_time &= ~XEN_RUNSTATE_UPDATE;
> > +            write_atomic(&(v->arch.runstate_guest->state_entry_time),
> > +                    v->runstate.state_entry_time);
> 
> Same remark for the indentation.


From xen-devel-bounces@lists.xenproject.org Fri Jul 31 02:26:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 02:26:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1KkS-0006Lv-8m; Fri, 31 Jul 2020 02:26:16 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OD0g=BK=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1k1KkR-0006Lq-EE
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 02:26:15 +0000
X-Inumbo-ID: 3404a268-d2d5-11ea-8e01-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3404a268-d2d5-11ea-8e01-bc764e2007e4;
 Fri, 31 Jul 2020 02:26:13 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=cY8ARxQbXgQIUX8MY4YfFvTH52di5SNVumDbM90hvX8=; b=k9d4xuo08sY60uJrNgQUQ9yFT
 i5324DoV9Um6RMg3mJkvK5Ab60LIGX0azJ3kz9nD8v3EzIZ+Cg3zScQ/nDFx+W3Rl6g8YInq6ahd3
 sL5f5AIQ5JJRR42xiGF/dWMyMSulLgARYMAOtWlfnpXaZTtrStQWCxRPEZIg1+5bGIqpc=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1k1KkO-0003Uw-K8; Fri, 31 Jul 2020 02:26:12 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1k1KkO-0001gx-8j; Fri, 31 Jul 2020 02:26:12 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1k1KkO-0005sw-89; Fri, 31 Jul 2020 02:26:12 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-152303-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 152303: regressions - FAIL
X-Osstest-Failures: linux-linus:build-arm64-pvops:kernel-build:fail:regression
 linux-linus:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
 linux-linus:test-arm64-arm64-examine:build-check(1):blocked:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: linux=83bdc7275e6206f560d247be856bceba3e1ed8f2
X-Osstest-Versions-That: linux=6ba1b005ffc388c2aeaddae20da29e4810dea298
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 31 Jul 2020 02:26:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 152303 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/152303/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-arm64-pvops             6 kernel-build             fail REGR. vs. 152287

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-examine      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 152287
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 152287
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 152287
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 152287
 test-armhf-armhf-xl-rtds     16 guest-start/debian.repeat    fail  like 152287
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 152287
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 152287
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 152287
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 152287
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                83bdc7275e6206f560d247be856bceba3e1ed8f2
baseline version:
 linux                6ba1b005ffc388c2aeaddae20da29e4810dea298

Last test of basis   152287  2020-07-29 17:11:28 Z    1 days
Testing same since   152303  2020-07-30 11:09:02 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ben Skeggs <bskeggs@redhat.com>
  Ben Skeggs <skeggsb@gmail.com>
  Biju Das <biju.das.jz@bp.renesas.com>
  Christoph Hellwig <hch@lst.de>
  Daniel Vetter <daniel.vetter@ffwll.ch>
  Dave Airlie <airlied@redhat.com>
  Dominique Martinet <asmadeus@codewreck.org>
  Douglas Anderson <dianders@chromium.org>
  Guido Günther <agx@sigxcpu.org>
  Jitao Shi <jitao.shi@mediatek.com>
  Laurentiu Palcu <laurentiu.palcu@nxp.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Linus Walleij <linus.walleij@linaro.org>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Paul Cercueil <paul@crapouillou.net>
  Paul Moore <paul@paul-moore.com>
  Sam Ravnborg <sam@ravnborg.org>
  Stephan Gerhold <stephan@gerhold.net>
  Steve Cohen <cohens@codeaurora.org>
  Thomas Zimmermann <tzimmermann@suse.de>
  Vinod Koul <vkoul@kernel.org> # tested on DragonBoard 410c
  Wang Hai <wanghai38@huawei.com>
  Willy Tarreau <w@1wt.eu>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            fail    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     blocked 
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 546 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Jul 31 06:26:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 06:26:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1OUN-0001w7-3o; Fri, 31 Jul 2020 06:25:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=S17i=BK=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1k1OUL-0001w2-OZ
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 06:25:53 +0000
X-Inumbo-ID: ae69a316-d2f6-11ea-8e0e-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ae69a316-d2f6-11ea-8e0e-bc764e2007e4;
 Fri, 31 Jul 2020 06:25:52 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 55B50AB8B;
 Fri, 31 Jul 2020 06:26:04 +0000 (UTC)
Subject: Re: [PATCH v3] print: introduce a format specifier for pci_sbdf_t
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <20200727103136.53343-1-roger.pau@citrix.com>
 <ca6cd6a5-3221-4d34-08a0-8ea4b2dc92d0@suse.com>
 <20200730100801.GF7191@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <e99a55dd-9b30-9469-b0e7-c16026012824@suse.com>
Date: Fri, 31 Jul 2020 08:25:50 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20200730100801.GF7191@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin Tian <kevin.tian@intel.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Paul Durrant <paul@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien.grall@arm.com>,
 xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 30.07.2020 12:08, Roger Pau Monné wrote:
> On Wed, Jul 29, 2020 at 09:28:53PM +0200, Jan Beulich wrote:
>> On 27.07.2020 12:31, Roger Pau Monne wrote:
>>> The new format specifier is '%pp', and prints a pci_sbdf_t using the
>>> seg:bus:dev.func format. Replace all SBDFs printed using
>>> '%04x:%02x:%02x.%u' with the new format specifier.
>>>
>>> No functional change intended.
>>>
>>> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
>>> Reviewed-by: Kevin Tian <kevin.tian@intel.com>
>>> Acked-by: Julien Grall <julien.grall@arm.com>
>>> For just the pieces where Jan is the only maintainer:
>>> Acked-by: Jan Beulich <jbeulich@suse.com>
> [...]
>> In all reality, Roger, it looks to me as if you should have dropped
>> my ack, as there seems to be nothing left at this point that I'm
>> the only maintainer of.
> 
> Yes, I just realized that now; I'm sorry. Your Ack happened before Paul
> became a maintainer of vendor-independent IOMMU code, and I completely
> forgot about it.
> 
> I think the overall result of having a modifier for printing SBDFs is
> a win for everyone.

No-one disagrees here, I think. It's the "how", not the "what", that
was controversial.

> TBH I just revived the patch because I think it
> will be helpful to the Arm folks doing the PCI work, if not I wouldn't
> have sent it again.

Yes, I understood this to be the case.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Jul 31 06:39:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 06:39:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1OhM-0002tq-B8; Fri, 31 Jul 2020 06:39:20 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=S17i=BK=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1k1OhK-0002tl-Sm
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 06:39:18 +0000
X-Inumbo-ID: 8e7eddda-d2f8-11ea-8e0f-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8e7eddda-d2f8-11ea-8e0f-bc764e2007e4;
 Fri, 31 Jul 2020 06:39:17 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id C3FEAAC5E;
 Fri, 31 Jul 2020 06:39:29 +0000 (UTC)
Subject: Re: [PATCH v2] xen/arm: Convert runstate address during hypcall
To: Stefano Stabellini <sstabellini@kernel.org>
References: <4647a019c7b42d40d3c2f5b0a3685954bea7f982.1595948219.git.bertrand.marquis@arm.com>
 <8d2d7f03-450c-d50c-630b-8608c6d42bb9@suse.com>
 <FCAB700B-4617-4323-BE1E-B80DDA1806C1@arm.com>
 <1b046f2c-05c8-9276-a91e-fd55ec098bed@suse.com>
 <alpine.DEB.2.21.2007291356060.1767@sstabellini-ThinkPad-T480s>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <1a8bbcc7-9d0c-9669-db7b-e837af279027@suse.com>
Date: Fri, 31 Jul 2020 08:39:16 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2007291356060.1767@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Julien Grall <julien@xen.org>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Xen-devel <xen-devel@lists.xenproject.org>, nd <nd@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 30.07.2020 03:30, Stefano Stabellini wrote:
> On Wed, 29 Jul 2020, Jan Beulich wrote:
>> On 29.07.2020 09:08, Bertrand Marquis wrote:
>>>> On 28 Jul 2020, at 21:54, Jan Beulich <jbeulich@suse.com> wrote:
>>>> On 28.07.2020 17:52, Bertrand Marquis wrote:
>>>>> At the moment on Arm, a Linux guest running with KPTI enabled will
>>>>> cause the following error when a context switch happens in user mode:
>>>>> (XEN) p2m.c:1890: d1v0: Failed to walk page-table va 0xffffff837ebe0cd0
>>>>> The error occurs because, with KPTI enabled, the virtual address of
>>>>> the runstate area registered by the guest is only accessible while
>>>>> the guest is running in kernel space.
>>>>> To solve this issue, this patch translates the virtual address to a
>>>>> physical address during the hypercall and maps the required pages
>>>>> using vmap. This removes the virtual-to-physical conversion from the
>>>>> context switch, which solves the problem with KPTI.
>>>>> This is done only on the Arm architecture; the behaviour on x86 is
>>>>> not modified by this patch, and the address conversion is still done
>>>>> during each context switch, as before.
>>>>> This introduces several limitations compared to the previous
>>>>> behaviour (on Arm only):
>>>>> - if the guest remaps the area at a different physical address, Xen
>>>>> will continue to update the area at the previous physical address.
>>>>> As the area is in kernel space and usually defined as a global
>>>>> variable, this is believed not to happen. If a guest requires this,
>>>>> it will have to issue the hypercall again with the new area (even
>>>>> if it is at the same virtual address).
>>>>> - the area needs to be mapped during the hypercall, for the same
>>>>> reasons as in the previous case, even if the area is registered for
>>>>> a different vcpu. Registering an area whose virtual address is
>>>>> unmapped is believed not to be done in practice.
>>>>
>>>> Beside me thinking that an in-use and stable ABI can't be changed like
>>>> this, no matter what is "believed" kernel code may or may not do, I
>>>> also don't think having arch-es diverge in behavior here is a good
>>>> idea. Use of commonly available interfaces shouldn't lead to
>>>> headaches or surprises when porting code from one arch to another. I'm
>>>> pretty sure it was suggested before: Why don't you simply introduce
>>>> a physical address based hypercall (and then also on x86 at the same
>>>> time, keeping functional parity)? I even seem to recall giving a
>>>> suggestion how to fit this into a future "physical addresses only"
>>>> model, as long as we can settle on the basic principles of that
>>>> conversion path that we want to go sooner or later anyway (as I
>>>> understand).
>>>
>>> I fully agree with the “physical address only” model and I think it
>>> must be done. Introducing a new hypercall taking a physical address as
>>> a parameter is the long-term solution (and I would even volunteer to
>>> do it in a new patchset).
>>> But this would not solve the issue here unless Linux is modified.
>>> So I do see this patch as a “bug fix”.
>>
>> Well, it is sort of implied by my previous reply that we won't get away
>> without an OS-side change here. The prerequisite for avoiding one would
>> be that it is okay to change the behavior of a hypercall like you do,
>> and that it is okay to make the behavior diverge between architectures.
>> I think I've made pretty clear that I don't think either is really an
>> option.
> 
> This is a difficult problem to solve and the current situation honestly
> sucks: there is no way to solve the problem without making compromises.
> 
> The new hypercall is good-to-have in any case (it is a better interface)
> but it is not a full solution.  If we introduce a new hypercall we fix
> new guests but don't fix existing guests. If we change Linux in any way,
> we are still going to have problems with all already-released kernel
> binaries. Leaving the issue unfixed is not an option either because the
> problem can't be ignored.

We're fixing other issues without breaking the ABI. Where's the
problem with backporting the kernel-side change (which I anticipate
will not be overly involved)?

If the plan remains to make an ABI-breaking change, then I think
this will need an explicit vote.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Jul 31 07:06:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 07:06:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1P7m-0005QN-Et; Fri, 31 Jul 2020 07:06:38 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=S17i=BK=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1k1P7l-0005QI-Kx
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 07:06:37 +0000
X-Inumbo-ID: 5f769cb8-d2fc-11ea-8e12-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5f769cb8-d2fc-11ea-8e12-bc764e2007e4;
 Fri, 31 Jul 2020 07:06:36 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id D48F4ACDB;
 Fri, 31 Jul 2020 07:06:48 +0000 (UTC)
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] x86emul: replace UB shifts
Message-ID: <bd679766-939d-3176-c913-e993dd48ef15@suse.com>
Date: Fri, 31 Jul 2020 09:06:35 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Displacement values can be negative, hence we shouldn't left-shift them.

While auditing shifts, I noticed a pair of missing parentheses, which
are also added here.

Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/x86_emulate/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate/x86_emulate.c
@@ -3370,7 +3370,7 @@ x86_decode(
         {
             generate_exception_if(d & vSIB, EXC_UD);
             modrm_rm |= ((rex_prefix & 1) << 3) |
-                        (evex_encoded() && !evex.x) << 4;
+                        ((evex_encoded() && !evex.x) << 4);
             ea.type = OP_REG;
         }
         else if ( ad_bytes == 2 )
@@ -3417,7 +3417,7 @@ x86_decode(
                     ea.mem.off = insn_fetch_type(int16_t);
                 break;
             case 1:
-                ea.mem.off += insn_fetch_type(int8_t) << disp8scale;
+                ea.mem.off += insn_fetch_type(int8_t) * (1 << disp8scale);
                 break;
             case 2:
                 ea.mem.off += insn_fetch_type(int16_t);
@@ -3479,7 +3479,7 @@ x86_decode(
                 pc_rel = mode_64bit();
                 break;
             case 1:
-                ea.mem.off += insn_fetch_type(int8_t) << disp8scale;
+                ea.mem.off += insn_fetch_type(int8_t) * (1 << disp8scale);
                 break;
             case 2:
                 ea.mem.off += insn_fetch_type(int32_t);
@@ -10028,7 +10028,8 @@ x86_emulate(
                 continue;
 
             rc = ops->write(ea.mem.seg,
-                            truncate_ea(ea.mem.off + (idx << state->sib_scale)),
+                            truncate_ea(ea.mem.off +
+                                        idx * (1 << state->sib_scale)),
                             (void *)mmvalp + i * op_bytes, op_bytes, ctxt);
             if ( rc != X86EMUL_OKAY )
             {
@@ -10146,7 +10147,7 @@ x86_emulate(
                   ? ops->write
                   : ops->read)(ea.mem.seg,
                                truncate_ea(ea.mem.off +
-                                           (idx << state->sib_scale)),
+                                           idx * (1 << state->sib_scale)),
                                NULL, 0, ctxt);
             if ( rc == X86EMUL_EXCEPTION )
             {


From xen-devel-bounces@lists.xenproject.org Fri Jul 31 07:52:35 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 07:52:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1Ppn-00015b-2v; Fri, 31 Jul 2020 07:52:07 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=S17i=BK=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1k1Ppl-00015W-FG
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 07:52:05 +0000
X-Inumbo-ID: b8f0bfe8-d302-11ea-ab8b-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b8f0bfe8-d302-11ea-ab8b-12813bfff9fa;
 Fri, 31 Jul 2020 07:52:04 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 0800FAE17;
 Fri, 31 Jul 2020 07:52:16 +0000 (UTC)
Subject: Re: [PATCH 3/5] xen/memory: Fix compat XENMEM_acquire_resource for
 size requests
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <20200728113712.22966-1-andrew.cooper3@citrix.com>
 <20200728113712.22966-4-andrew.cooper3@citrix.com>
 <0c275cb5-55ec-b0b0-6ba8-cfa7ca23978b@suse.com>
 <d3c31bea-0c31-5822-15cb-226402c4ae75@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <dd3a8e3f-fe09-6d71-1ef6-13e6e1a7ea00@suse.com>
Date: Fri, 31 Jul 2020 09:52:02 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <d3c31bea-0c31-5822-15cb-226402c4ae75@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Hubert Jasudowicz <hubert.jasudowicz@cert.pl>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Paul Durrant <paul@xen.org>,
 =?UTF-8?Q?Micha=c5=82_Leszczy=c5=84ski?= <michal.leszczynski@cert.pl>,
 Ian Jackson <ian.jackson@citrix.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 30.07.2020 21:12, Andrew Cooper wrote:
> On 29/07/2020 21:09, Jan Beulich wrote:
>> On 28.07.2020 13:37, Andrew Cooper wrote:
>>> Copy the nr_frames from the correct structure, so the caller doesn't
>>> unconditionally receive 0.
>>
>> Well, no - it does get copied from the correct structure. It's just
>> that the field doesn't get set properly up front.
> 
> You appear to be objecting to my use of the term "correct".
> 
> There are two structures.  One contains the correct value, and one
> contains the wrong value, which happens to always be 0.
> 
> I stand by the sentence as currently written.

At the risk of splitting hairs, what you copy from is a field holding
the correct value, but not the correct field. This only works
correctly because of the way __copy_field_{from,to}_guest() happen
to be implemented; there are possible alternative implementations
where this would break, despite ...

>> Otherwise you'll
>> (a) build in an unchecked assumption that the native and compat
>> fields match in type
> 
> Did you actually check?  Because I did before embarking on this course
> of action.
> 
> In file included from /local/xen.git/xen/include/xen/guest_access.h:10:0,
>                  from compat/memory.c:5:
> /local/xen.git/xen/include/asm/guest_access.h:152:28: error: comparison
> of distinct pointer types lacks a cast [-Werror]
>      (void)(&(hnd).p->field == _s);                      \
>                             ^
> compat/memory.c:628:22: note: in expansion of macro ‘__copy_field_to_guest’
>                  if ( __copy_field_to_guest(
>                       ^~~~~~~~~~~~~~~~~~~~~
> 
> This is what the compiler thinks of the code, when nr_frames is changed
> from uint32_t to unsigned long.

... this type safety check (which, I admit, I didn't consider when
writing my reply). I continue to think that handle and struct should
match up not just for {,__}copy_{from,to}_guest() but also for
{,__}copy_field_{from,to}_guest().

>> and (b) set a bad example for people looking
>> here
> 
> This entire function is a massive set of bad examples; the worst IMO
> being the fact that there isn't a single useful comment anywhere in it
> concerning how the higher level loop structure works.
> 
> I'm constantly annoyed that I need to reverse engineer it from scratch
> every time I look at it, despite having a better-than-most understanding
> of what it is trying to achieve, and how it is supposed to work.
> 
> I realise this is no one's fault in particular, but it is not
> fair/reasonable to claim that this change is the thing setting a bad
> example in this file.

I'd be happy to see "bad examples" corrected. As stated on various
occasions, at the time I first implemented the compat layer this seemed
like the most reasonable approach to me. If you see room for
improvement, then I'm all for it.

>> and then cloning this code in perhaps a case where (a) is not
>> even true. If you agree, the alternative change of setting
>> cmp.mar.nr_frames from nat.mar->nr_frames before the call is
> 
> Is there more to this sentence?

I guess I can't figure out what you mean here.

> While this example could be implemented (at even higher overhead) by
> copying nat back to cmp before passing it back to the guest, the same is
> not true for the changes required to fix batching (which is another
> series the same size as this).

I'll see when you post this, but I think we will want the principle
outlined above to continue to hold.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Jul 31 08:00:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 08:00:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1Pxe-0002On-8t; Fri, 31 Jul 2020 08:00:14 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=S17i=BK=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1k1Pxd-0002Oi-5Z
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 08:00:13 +0000
X-Inumbo-ID: dc29b6da-d303-11ea-ab8c-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id dc29b6da-d303-11ea-ab8c-12813bfff9fa;
 Fri, 31 Jul 2020 08:00:12 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id A1090AE17;
 Fri, 31 Jul 2020 08:00:24 +0000 (UTC)
Subject: Re: [PATCH 1/4] x86: replace __ASM_{CL,ST}AC
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <58b9211a-f6dd-85da-d0bd-c927ac537a5d@suse.com>
 <fc8e042e-fef8-ac38-34d8-16b13e4b0135@suse.com>
 <ea6eeb6d-7af2-97cb-4c11-6e0a81755961@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <7ead7983-f652-41d9-62e2-33104af3bba5@suse.com>
Date: Fri, 31 Jul 2020 10:00:11 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <ea6eeb6d-7af2-97cb-4c11-6e0a81755961@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 28.07.2020 15:55, Andrew Cooper wrote:
> On 15/07/2020 11:48, Jan Beulich wrote:
>> --- a/xen/arch/x86/arch.mk
>> +++ b/xen/arch/x86/arch.mk
>> @@ -20,6 +20,7 @@ $(call as-option-add,CFLAGS,CC,"rdrand %
>>  $(call as-option-add,CFLAGS,CC,"rdfsbase %rax",-DHAVE_AS_FSGSBASE)
>>  $(call as-option-add,CFLAGS,CC,"xsaveopt (%rax)",-DHAVE_AS_XSAVEOPT)
>>  $(call as-option-add,CFLAGS,CC,"rdseed %eax",-DHAVE_AS_RDSEED)
>> +$(call as-option-add,CFLAGS,CC,"clac",-DHAVE_AS_CLAC_STAC)
> 
> Kconfig please, rather than extending this legacy section.
> 
> That said, surely stac/clac support is old enough for us to start using
> unconditionally?

It's available in binutils 2.24 and newer.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Jul 31 08:05:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 08:05:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1Q2Q-0002hQ-Ta; Fri, 31 Jul 2020 08:05:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=S17i=BK=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1k1Q2P-0002hL-I6
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 08:05:09 +0000
X-Inumbo-ID: 8c1f8cba-d304-11ea-8e14-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8c1f8cba-d304-11ea-8e14-bc764e2007e4;
 Fri, 31 Jul 2020 08:05:08 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id B5A91AD2C;
 Fri, 31 Jul 2020 08:05:20 +0000 (UTC)
Subject: Re: [PATCH 1/4] x86: replace __ASM_{CL,ST}AC
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <58b9211a-f6dd-85da-d0bd-c927ac537a5d@suse.com>
 <fc8e042e-fef8-ac38-34d8-16b13e4b0135@suse.com>
 <20200727145526.GR7191@Air-de-Roger>
 <b29e4b17-8ec2-a0db-8426-94393e9eb2c0@suse.com>
 <20200728090618.GZ7191@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <32c79b37-a93c-7a72-7c0f-753cf603adfb@suse.com>
Date: Fri, 31 Jul 2020 10:05:07 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20200728090618.GZ7191@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 28.07.2020 11:06, Roger Pau Monné wrote:
> On Mon, Jul 27, 2020 at 09:47:52PM +0200, Jan Beulich wrote:
>> On 27.07.2020 16:55, Roger Pau Monné wrote:
>>> On Wed, Jul 15, 2020 at 12:48:14PM +0200, Jan Beulich wrote:
>>>> --- /dev/null
>>>> +++ b/xen/include/asm-x86/asm-defns.h
>>>
>>> Maybe this could be asm-insn.h or a different name? I find it
>>> confusing to have asm-defns.h and an asm_defs.h.
>>
>> While indeed I anticipated a reply to this effect, I don't consider
>> asm-insn.h or asm-macros.h suitable: We don't want to limit this
>> header to a more narrow purpose than "all sorts of definition", I
>> don't think. Hence I chose that name despite its similarity to the
>> C header's one.
> 
> I think it's confusing, but I also think the whole magic we do with
> asm includes is already confusing (me), so if you and Andrew agree
> this is the best name I'm certainly fine with it. FWIW:
> 
> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
> 
> Please quote the clac/stac instructions in order to match the other
> usages of ALTERNATIVE.

We're not consistently quoting when there's just a single word; see
in particular spec_ctrl_asm.h. And thinking about it again I also
don't see why we would want or need to enforce quotation when none
is needed. Therefore both here and in patch 2 I'll keep (or make,
when I touch a line anyway) things consistently unquoted where no
quotes are needed. Please let me know if your R-b holds without the
requested adjustment.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Jul 31 08:05:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 08:05:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1Q2b-0002im-5j; Fri, 31 Jul 2020 08:05:21 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OD0g=BK=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1k1Q2Z-0002iW-TV
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 08:05:19 +0000
X-Inumbo-ID: 91d51916-d304-11ea-8e14-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 91d51916-d304-11ea-8e14-bc764e2007e4;
 Fri, 31 Jul 2020 08:05:16 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=C5nQfdufVNvxAXAbhIquk6wxe3MWJLLeqVnHQ8lL95c=; b=IaWFIu0fxRYKP2p4YwO9ZC7Aq
 XJQeYqytuWeKyhHj3qmXDaP0llBY9tHqAw0WpaiBSYT/MU6DNT1jp9kRRp/aBjhyntA3LOXJf6zeq
 mfluA/66Wcy2+C+azHI3+kX2IsoMtOtdPPy4ClWOYSIxeOgCDYFjrhAoyuRi7WCFDf0QI=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1k1Q2W-0003eh-F0; Fri, 31 Jul 2020 08:05:16 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1k1Q2V-0000bL-SO; Fri, 31 Jul 2020 08:05:15 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1k1Q2V-0004g8-Ro; Fri, 31 Jul 2020 08:05:15 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-152309-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 152309: regressions - FAIL
X-Osstest-Failures: qemu-mainline:test-amd64-amd64-qemuu-nested-intel:xen-boot/l1:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
 qemu-mainline:test-amd64-amd64-qemuu-nested-amd:xen-boot/l1:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:windows-install:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:windows-install:fail:regression
 qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: qemuu=1448629751871c4924c234c2faaa968fc26890e1
X-Osstest-Versions-That: qemuu=9e3903136d9acde2fb2dd9e967ba928050a6cb4a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 31 Jul 2020 08:05:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 152309 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/152309/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-nested-intel 14 xen-boot/l1       fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 151065
 test-amd64-amd64-qemuu-nested-amd 14 xen-boot/l1         fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-win7-amd64 10 windows-install  fail REGR. vs. 151065
 test-amd64-amd64-xl-qemuu-ws16-amd64 10 windows-install  fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-ws16-amd64 10 windows-install   fail REGR. vs. 151065
 test-amd64-i386-xl-qemuu-win7-amd64 10 windows-install   fail REGR. vs. 151065

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 151065
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 151065
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                1448629751871c4924c234c2faaa968fc26890e1
baseline version:
 qemuu                9e3903136d9acde2fb2dd9e967ba928050a6cb4a

Last test of basis   151065  2020-06-12 22:27:51 Z   48 days
Failing since        151101  2020-06-14 08:32:51 Z   46 days   66 attempts
Testing same since   152309  2020-07-30 21:40:33 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Ahmed Karaman <ahmedkhaledkaraman@gmail.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.m.mail@gmail.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Richardson <Alexander.Richardson@cl.cam.ac.uk>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Boettcher <alexander.boettcher@genode-labs.com>
  Alexander Bulekov <alxndr@bu.edu>
  Alexander Duyck <alexander.h.duyck@linux.intel.com>
  Alexandre Mergnat <amergnat@baylibre.com>
  Alexey Kardashevskiy <aik@ozlabs.ru>
  Alexey Krasikov <alex-krasikov@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Allan Peramaki <aperamak@pp1.inet.fi>
  Andreas Schwab <schwab@suse.de>
  Andrew <andrew@daynix.com>
  Andrew Jones <drjones@redhat.com>
  Andrew Melnychenko <andrew@daynix.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani.sinha@nutanix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Antoine Damhet <antoine.damhet@blade-group.com>
  Ard Biesheuvel <ardb@kernel.org>
  Artyom Tarasenko <atar4qemu@gmail.com>
  Atish Patra <atish.patra@wdc.com>
  Aurelien Jarno <aurelien@aurel32.net>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Basil Salman <basil@daynix.com>
  Basil Salman <bsalman@redhat.com>
  Beata Michalska <beata.michalska@linaro.org>
  Bin Meng <bin.meng@windriver.com>
  Bin Meng <bmeng.cn@gmail.com>
  Bruce Rogers <brogers@suse.com>
  Cameron Esfahani <dirty@apple.com>
  Catherine A. Frederick <chocola@animebitch.es>
  Cathy Zhang <cathy.zhang@intel.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chenyi Qiang <chenyi.qiang@intel.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christophe de Dinechin <dinechin@redhat.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Cindy Lu <lulu@redhat.com>
  Claudio Fontana <cfontana@suse.de>
  Cleber Rosa <crosa@redhat.com>
  Colin Xu <colin.xu@intel.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniele Buono <dbuono@linux.vnet.ibm.com>
  David Carlier <devnexen@gmail.com>
  David Edmondson <david.edmondson@oracle.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Denis V. Lunev <den@openvz.org>
  Derek Su <dereksu@qnap.com>
  Dongjiu Geng <gengdongjiu@huawei.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Ed Robbins <E.J.C.Robbins@kent.ac.uk>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Emilio G. Cota <cota@braap.org>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  erik-smit <erik.lucas.smit@gmail.com>
  Evgeny Yakovlev <eyakovlev@virtuozzo.com>
  fangying <fangying1@huawei.com>
  Farhan Ali <alifm@linux.ibm.com>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frank Chang <frank.chang@sifive.com>
  Geoffrey McRae <geoff@hostfission.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Giuseppe Musacchio <thatlemon@gmail.com>
  Greg Kurz <groug@kaod.org>
  Guenter Roeck <linux@roeck-us.net>
  Gustavo Romero <gromero@linux.ibm.com>
  Halil Pasic <pasic@linux.ibm.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Hogan Wang <hogan.wang@huawei.com>
  Hogan Wang <king.wang@huawei.com>
  Howard Spoelstra <hsp.cat7@gmail.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Ian Jiang <ianjiang.ict@gmail.com>
  Igor Mammedov <imammedo@redhat.com>
  Jan Kiszka <jan.kiskza@siemens.com>
  Jan Kiszka <jan.kiszka@siemens.com>
  Janne Grunau <j@jannau.net>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason Wang <jasowang@redhat.com>
  Jay Zhou <jianjay.zhou@huawei.com>
  Jean-Christophe Dubois <jcd@tribudubois.net>
  Jessica Clarke <jrtc27@jrtc27.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Jingqi Liu <jingqi.liu@intel.com>
  John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Joseph Myers <joseph@codesourcery.com>
  Josh DuBois <duboisj@gmail.com>
  Josh DuBois <josh@joshdubois.com>
  Josh Kunz <jkz@google.com>
  Juan Quintela <quintela@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Keqian Zhu <zhukeqian1@huawei.com>
  Kevin Wolf <kwolf@redhat.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  KONRAD Frederic <frederic.konrad@adacore.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Leif Lindholm <leif@nuviainc.com>
  Leonid Bloch <lb.workbox@gmail.com>
  Leonid Bloch <lbloch@janustech.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Qiang <liq3ea@gmail.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  lichun <lichun@ruijie.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  Like Xu <like.xu@linux.intel.com>
  Lingfeng Yang <lfy@google.com>
  Lingshan zhu <lingshan.zhu@intel.com>
  Liran Alon <liran.alon@oracle.com>
  Liu Yi L <yi.l.liu@intel.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Luc Michel <luc.michel@greensocs.com>
  Lukas Straub <lukasstraub2@web.de>
  Luwei Kang <luwei.kang@intel.com>
  Maciej S. Szmigiero <maciej.szmigiero@oracle.com>
  Magnus Damm <magnus.damm@gmail.com>
  Mao Zhongyi <maozhongyi@cmss.chinamobile.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marcelo Tosatti <mtosatti@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Mario Smarduch <msmarduch@digitalocean.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Masahiro Yamada <masahiroy@kernel.org>
  Matus Kysel <mkysel@tachyum.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Maxime Coquelin <maxime.coquelin@redhat.com>
  Menno Lageman <menno.lageman@oracle.com>
  Michael Rolnik <mrolnik@gmail.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michal Privoznik <mprivozn@redhat.com>
  Michele Denber <denber@mindspring.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Olaf Hering <olaf@aepfle.de>
  Pan Nengyuan <pannengyuan@huawei.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Turschmid <peter.turschm@nutanix.com>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Radoslaw Biernacki <rad@semihalf.com>
  Raphael Norwitz <raphael.norwitz@nutanix.com>
  Reza Arbab <arbab@linux.ibm.com>
  Richard Henderson <richard.henderson@linaro.org>
  Richard W.M. Jones <rjones@redhat.com>
  Riku Voipio <riku.voipio@linaro.org>
  Robert Foley <robert.foley@linaro.org>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Roman Kagan <rkagan@virtuozzo.com>
  Roman Kagan <rvkagan@yandex-team.ru>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Sarah Harris <S.E.Harris@kent.ac.uk>
  Sebastian Rasmussen <sebras@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Weil <sw@weilnetz.de>
  Sven Schnelle <svens@stackframe.org>
  Tao Xu <tao3.xu@intel.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tiwei Bie <tiwei.bie@intel.com>
  Tong Ho <tong.ho@xilinx.com>
  Viktor Mihajlovski <mihajlov@linux.ibm.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  WangBowen <bowen.wang@intel.com>
  Wei Huang <wei.huang2@amd.com>
  Wei Wang <wei.w.wang@intel.com>
  Wentong Wu <wentong.wu@intel.com>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xie Yongji <xieyongji@bytedance.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Ying Fang <fangying1@huawei.com>
  Yoshinori Sato <ysato@users.sourceforge.jp>
  Yuri Benditovich <yuri.benditovich@daynix.com>
  Zhang Chen <chen.zhang@intel.com>
  Zheng Chuan <zhengchuan@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 34358 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Jul 31 08:12:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 08:12:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1Q9s-0003eY-5F; Fri, 31 Jul 2020 08:12:52 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=xDYK=BK=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1k1Q9r-0003eT-2z
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 08:12:51 +0000
X-Inumbo-ID: 9fbc8842-d305-11ea-8e14-bc764e2007e4
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9fbc8842-d305-11ea-8e14-bc764e2007e4;
 Fri, 31 Jul 2020 08:12:50 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1596183170;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:content-transfer-encoding:in-reply-to;
 bh=orBigK7HqKlYuFLBbF5hlm/uG7lMoFm/YSH2dLvDWT8=;
 b=SqNv6IY3Gk+K1hVuwa3SWi532gWiBXAnNLZE35rs9sJpGrqoC0PAsQPu
 G8R7kGHfcBdp4btagUnGh2dxFBNg4g9g3Q56Xmc6n2vrDJ+iCWATm3xDS
 xEl+AxXP6lYmbzvzd7t78qMvUoMtByg5y/QsRr0rZWgDIcZHBESRQPXDy k=;
Authentication-Results: esa4.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: aTKWflshIQYZiRGFowlSt8uYZHLOxGIFHVqPkTgbEUeY+d3imjLlTXY30yyXRe4XQ6PzRuNy4t
 H4N7Rr+bgVtqyPVEcJ2CmvItFwmC/jdKykQQaNd5jjdeIKxi2Pi2Qgx6ZzbgUNoEtgzkPVc1Xo
 4swGR/UCLnC7Xno15vG4gC+nOM8JaXVxe6gna3a/7Fh8H/rpU2nsUxyVk9Y/WDFzXtxqwzckjN
 SFvPj98a4cTjnuyuurE96iqeQWhrbYEaJxD1Y4MMCHiOgUOLwd4aa4ZfAvkL4ijMajV4gHXKsv
 uhs=
X-SBRS: 2.7
X-MesageID: 24486609
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,417,1589256000"; d="scan'208";a="24486609"
Date: Fri, 31 Jul 2020 10:12:40 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Subject: Re: [PATCH 1/4] x86: replace __ASM_{CL,ST}AC
Message-ID: <20200731081240.GA88772@Air-de-Roger>
References: <58b9211a-f6dd-85da-d0bd-c927ac537a5d@suse.com>
 <fc8e042e-fef8-ac38-34d8-16b13e4b0135@suse.com>
 <20200727145526.GR7191@Air-de-Roger>
 <b29e4b17-8ec2-a0db-8426-94393e9eb2c0@suse.com>
 <20200728090618.GZ7191@Air-de-Roger>
 <32c79b37-a93c-7a72-7c0f-753cf603adfb@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <32c79b37-a93c-7a72-7c0f-753cf603adfb@suse.com>
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Fri, Jul 31, 2020 at 10:05:07AM +0200, Jan Beulich wrote:
> On 28.07.2020 11:06, Roger Pau Monné wrote:
> > On Mon, Jul 27, 2020 at 09:47:52PM +0200, Jan Beulich wrote:
> >> On 27.07.2020 16:55, Roger Pau Monné wrote:
> >>> On Wed, Jul 15, 2020 at 12:48:14PM +0200, Jan Beulich wrote:
> >>>> --- /dev/null
> >>>> +++ b/xen/include/asm-x86/asm-defns.h
> >>>
> >>> Maybe this could be asm-insn.h or a different name? I find it
> >>> confusing to have asm-defns.h and an asm_defs.h.
> >>
> >> While indeed I anticipated a reply to this effect, I don't consider
> >> asm-insn.h or asm-macros.h suitable: We don't want to limit this
> >> header to a more narrow purpose than "all sorts of definition", I
> >> don't think. Hence I chose that name despite its similarity to the
> >> C header's one.
> > 
> > I think it's confusing, but I also think the whole magic we do with
> > asm includes is already confusing (me), so if you and Andrew agree
> > this is the best name I'm certainly fine with it. FWIW:
> > 
> > Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
> > 
> > Please quote the clac/stac instructions in order to match the other
> > usages of ALTERNATIVE.
> 
> We're not consistently quoting when there's just a single word, see
> in particular spec_ctrl_asm.h. And thinking about it again I also
> don't see why we would want or need to enforce quotation when none
> is needed. Therefore both here and in patch 2 I'll keep (or make,
> when I touch a line anyway) things consistently unquoted where no
> quotes are needed. Please let me know if your R-b holds without the
> requested adjustment.

Yes, I'm fine as long as we are consistent about quoting single-word
instructions. Ideally I would like us to quote both single- and
multi-word arguments for consistency, but you are the one doing the
work, so I'm not going to oppose leaving single words unquoted.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Fri Jul 31 08:39:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 08:39:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1QZ9-0005UD-63; Fri, 31 Jul 2020 08:38:59 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ALHA=BK=amazon.com=prvs=474dac838=elnikety@srs-us1.protection.inumbo.net>)
 id 1k1QZ7-0005U8-PO
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 08:38:57 +0000
X-Inumbo-ID: 45955296-d309-11ea-ab8f-12813bfff9fa
Received: from smtp-fw-9102.amazon.com (unknown [207.171.184.29])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 45955296-d309-11ea-ab8f-12813bfff9fa;
 Fri, 31 Jul 2020 08:38:56 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=amazon.com; i=@amazon.com; q=dns/txt; s=amazon201209;
 t=1596184737; x=1627720737;
 h=subject:to:cc:references:from:message-id:date:
 mime-version:in-reply-to:content-transfer-encoding;
 bh=dbgp+mgGDpt6s0BH/lBjYcctpiFZ1oBJvtaGhj7elSI=;
 b=WZrgQmAUBRdsroZSebACoVCI9cGqtIxCh2Hl2qDt3a5UOAcqLX+GDt8r
 aAK8DaT4aETZOuI6nLdz39ByiFCdoBsN55NugdE/txTAFRG5ldq8SaneF
 dknELemtUy+gD3VgncuTcERd6MU61VyW74ciH92lWx7d37iXMT+VK7e8g I=;
IronPort-SDR: eQz03h9VQxdct/SIk99QwxVxrYimMz/qsRmT86awONPy6YNdnUf0k1u+h8BUru/kbOYbc7wZGO
 Y3GKuMEcscpQ==
X-IronPort-AV: E=Sophos;i="5.75,417,1589241600"; d="scan'208";a="64481883"
Received: from sea32-co-svc-lb4-vlan3.sea.corp.amazon.com (HELO
 email-inbound-relay-1d-f273de60.us-east-1.amazon.com) ([10.47.23.38])
 by smtp-border-fw-out-9102.sea19.amazon.com with ESMTP;
 31 Jul 2020 08:38:51 +0000
Received: from EX13MTAUEA002.ant.amazon.com
 (iad55-ws-svc-p15-lb9-vlan3.iad.amazon.com [10.40.159.166])
 by email-inbound-relay-1d-f273de60.us-east-1.amazon.com (Postfix) with ESMTPS
 id 01285A216B; Fri, 31 Jul 2020 08:38:49 +0000 (UTC)
Received: from EX13D03EUA002.ant.amazon.com (10.43.165.166) by
 EX13MTAUEA002.ant.amazon.com (10.43.61.77) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Fri, 31 Jul 2020 08:38:49 +0000
Received: from a483e73f63b0.ant.amazon.com (10.43.161.203) by
 EX13D03EUA002.ant.amazon.com (10.43.165.166) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Fri, 31 Jul 2020 08:38:45 +0000
Subject: Re: [PATCH] x86/vhpet: Fix type size in timer_int_route_valid
To: Jan Beulich <jbeulich@suse.com>, Andrew Cooper
 <andrew.cooper3@citrix.com>, Eslam Elnikety <elnikety@amazon.com>
References: <20200728083357.77999-1-elnikety@amazon.com>
 <a55fba45-a008-059e-ea8c-b7300e2e8b7d@citrix.com>
 <278f0f31-619b-a392-6627-e75e65d0d14f@suse.com>
From: Eslam Elnikety <elnikety@amazon.com>
Message-ID: <076df48e-0010-bb8d-891f-dc89aa4b9439@amazon.com>
Date: Fri, 31 Jul 2020 10:38:40 +0200
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:68.0)
 Gecko/20100101 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <278f0f31-619b-a392-6627-e75e65d0d14f@suse.com>
Content-Type: text/plain; charset="utf-8"; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-Originating-IP: [10.43.161.203]
X-ClientProxiedBy: EX13D37UWA003.ant.amazon.com (10.43.160.25) To
 EX13D03EUA002.ant.amazon.com (10.43.165.166)
Precedence: Bulk
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, Paul Durrant <pdurrant@amazon.co.uk>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 28.07.20 19:51, Jan Beulich wrote:
> On 28.07.2020 11:26, Andrew Cooper wrote:
>> Does this work?
>>
>> diff --git a/xen/arch/x86/hvm/hpet.c b/xen/arch/x86/hvm/hpet.c
>> index ca94e8b453..638f6174de 100644
>> --- a/xen/arch/x86/hvm/hpet.c
>> +++ b/xen/arch/x86/hvm/hpet.c
>> @@ -62,8 +62,7 @@
>>   #define timer_int_route(h, n)    MASK_EXTR(timer_config(h, n),
>> HPET_TN_ROUTE)
>> -#define timer_int_route_cap(h, n) \
>> -    MASK_EXTR(timer_config(h, n), HPET_TN_INT_ROUTE_CAP)
>> +#define timer_int_route_cap(h, n) (h)->hpet.timers[(n)].route
> 
> Seeing that this is likely the route taken here, and hence to avoid
> an extra round trip, two remarks: Here I see no need for the
> parentheses inside the square brackets.
> 

Will take care of this in v2.

>> diff --git a/xen/include/asm-x86/hvm/vpt.h 
>> b/xen/include/asm-x86/hvm/vpt.h
>> index f0e0eaec83..a41fc443cc 100644
>> --- a/xen/include/asm-x86/hvm/vpt.h
>> +++ b/xen/include/asm-x86/hvm/vpt.h
>> @@ -73,7 +73,13 @@ struct hpet_registers {
>>       uint64_t isr;               /* interrupt status reg */
>>       uint64_t mc64;              /* main counter */
>>       struct {                    /* timers */
>> -        uint64_t config;        /* configuration/cap */
>> +        union {
>> +            uint64_t config;    /* configuration/cap */
>> +            struct {
>> +                uint32_t _;
>> +                uint32_t route;
>> +            };
>> +        };
> 
> So long as there are no static initializers for this construct
> that would then suffer the old-gcc problem, this is of course a
> fine arrangement to make.
> 

I have to admit that I have no clue what the "old-gcc" problem is. I am 
curious, and I would appreciate pointers to figure out if/how to 
resolve it. Is that an old, existing problem? Or a problem that was 
present in older versions of gcc? If the latter, is that a gcc version 
that we still care about? Thanks, Jan.

-- Eslam

> Jan
> 



From xen-devel-bounces@lists.xenproject.org Fri Jul 31 08:43:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 08:43:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1QdZ-0006Ir-PL; Fri, 31 Jul 2020 08:43:33 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=uPNw=BK=durham.ac.uk=m.a.young@srs-us1.protection.inumbo.net>)
 id 1k1QdY-0006Im-Id
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 08:43:32 +0000
X-Inumbo-ID: e8c3194e-d309-11ea-ab92-12813bfff9fa
Received: from GBR01-CWL-obe.outbound.protection.outlook.com (unknown
 [40.107.11.128]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e8c3194e-d309-11ea-ab92-12813bfff9fa;
 Fri, 31 Jul 2020 08:43:30 +0000 (UTC)
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=llZUv53o5b8jUJ0AmKPYkv+AJKoj4Ec40emHe+5s4EQLxX7Z2lLPoIfrbjZFdEhOs6K/NosTkNosH9cTS2bIJQyKhiPiGgK+JvsCGW2XvMv4iRg+DQRc9Z7o8Ke/ruf7jhaf0zooMZEhyfbuSJQhmkP0DUketsZZlupQJJpdTKXCk2ycVoro9SjZVgk7DwF1XQSJjQvctZVPWtWPbovHqSQa6HX3Xq/wdmG8OpqPlapOx9Fa1yF6rdadL/N3+xD/5voNUxQhKqCGnjtNiXfGmwuCal3BGNxCOPGJGR07WagIXYSPMxgvLQ85MT/owiDLpLTe23iBndyvMBbz6dd3Ig==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; 
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=iLoaLEe6EYFmh7sY8j42lkXdpWwKkOT+yOFnO8lzP2Y=;
 b=FKVO2xndzcLPSLHi+JqtETX71wfgN9lRKd7GsYfNNIZSVsYuS55v08uF0R9PSsuJkeG8twURfYXobalGtjH+3c5lmzFnDDvl3gOym5YkDMOXUgyMszpiL84FImatTRZbyHaRU3fdkoKzrufDIopEwxiGs3qVI01qCYEdaME1M7cfRAHoy8xEf6bJQNLavE2YQYka8NihSFEUZZ1Yxxdg2GkzsEA7rA8M2Q/c9NYDkhqR1hl9lULK4Mi66o9DPLF4+XdbCWvK129S5nOQcNiSzy0b90IhM3JEyxyCL8EvdMDRGTPeR6biEq9KbNzrkU6unzIQgdMXTeZh1u5kq+x4Tg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=durham.ac.uk; dmarc=pass action=none header.from=durham.ac.uk;
 dkim=pass header.d=durham.ac.uk; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=durhamuniversity.onmicrosoft.com;
 s=selector2-durhamuniversity-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=iLoaLEe6EYFmh7sY8j42lkXdpWwKkOT+yOFnO8lzP2Y=;
 b=QjQbwpktUMWHNlwqAtJILFUBlHbluqucaHqA0mWVnSHbdkViwfbqO2GuC3vGzu+AsHT3nhWrvmd05GWxW2JYoMRi1uG/zSFZrT1sg/lww7caEyDZgZnTf61RGgdHCODrvlaEscmo/kxRJ4NOZwsQJgvXkid/v4hTjdfZSD9XCAs=
Authentication-Results: knorrie.org; dkim=none (message not signed)
 header.d=none;knorrie.org; dmarc=none action=none header.from=durham.ac.uk;
Received: from CWLP265MB1634.GBRP265.PROD.OUTLOOK.COM (2603:10a6:401:32::19)
 by CWXP265MB0597.GBRP265.PROD.OUTLOOK.COM (2603:10a6:401::22) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3216.24; Fri, 31 Jul 2020 08:43:28 +0000
Received: from CWLP265MB1634.GBRP265.PROD.OUTLOOK.COM
 ([fe80::6114:a769:8565:1a70]) by CWLP265MB1634.GBRP265.PROD.OUTLOOK.COM
 ([fe80::6114:a769:8565:1a70%4]) with mapi id 15.20.3216.034; Fri, 31 Jul 2020
 08:43:28 +0000
Date: Fri, 31 Jul 2020 09:43:25 +0100 (BST)
From: Michael Young <m.a.young@durham.ac.uk>
X-X-Sender: michael@austen3.home
To: Hans van Kranenburg <hans@knorrie.org>
Subject: =?GB2312?Q?Re=3A_4=2E14=2E0_FTBFS_for_Debian_unstable=2C_libx?=
 =?GB2312?Q?lu=5Fpci=2Ec_=28=A8s=A1=E3=A1=F5=A1=E3=A3=A9=A8s=A6=E0_=A9?=
 =?GB2312?Q?=DF=A9=A5=A9=DF?=
In-Reply-To: <dab05ef3-4ce8-2177-893d-61168d897821@knorrie.org>
Message-ID: <alpine.LFD.2.23.451.2007310933040.2862@austen3.home>
References: <dab05ef3-4ce8-2177-893d-61168d897821@knorrie.org>
Content-Type: text/plain; charset=US-ASCII; format=flowed
X-ClientProxiedBy: LO2P265CA0269.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:a1::17) To CWLP265MB1634.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:401:32::19)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
Received: from broadband.bt.com (2a00:23c4:921a:2100:1097:224c:243b:f186) by
 LO2P265CA0269.GBRP265.PROD.OUTLOOK.COM (2603:10a6:600:a1::17) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3239.16 via Frontend Transport; Fri, 31 Jul 2020 08:43:27 +0000
X-X-Sender: michael@austen3.home
X-Originating-IP: [2a00:23c4:921a:2100:1097:224c:243b:f186]
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 366669bb-933e-482c-ec81-08d8352dcb5e
X-MS-TrafficTypeDiagnostic: CWXP265MB0597:
X-Microsoft-Antispam-PRVS: <CWXP265MB0597AF64D316812CBA5195D5874E0@CWXP265MB0597.GBRP265.PROD.OUTLOOK.COM>
X-MS-Oob-TLC-OOBClassifiers: OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: GM8fARhmYusFECbOhI0hQZuuuaUZdweSNP9GOTXvnNh+pCTiRKTxVSCq+6NWO/YEYng1FUcntMMpWhB2v/ZMt0de0AuSY8/IVy4rhLMIoTs4/Ezlngu+OzYsKwoLAFQgk2SA3D6MD/rWJTIzsEzRxzBr0TnerPnd66Wr99krZVIYxB6tAOPQcYguhFLaiHfUEuRooXBWxxEtDG9G5ECBxauTLun1NrQGHUZqtJAaaTrm9xDYPkkhRZsdc7PjopTw0c2Ye/C+H1zcXoPAHRGXjLRciny/xuSzMB2ZnYk6eLFcDmbl2/i3Zh/kASsj2ch9LEU9hSWAivB24pfgOLqfwzybUJZiBfAAcgPmGVaIwM/niPVenmDAbESHfPC2ek7H9jA2OleyibI1/Rkmur4kSw==
X-Forefront-Antispam-Report: CIP:255.255.255.255; CTRY:; LANG:en; SCL:1; SRV:;
 IPV:NLI; SFV:NSPM; H:CWLP265MB1634.GBRP265.PROD.OUTLOOK.COM; PTR:; CAT:NONE;
 SFTY:;
 SFS:(4636009)(376002)(346002)(39860400002)(396003)(366004)(136003)(8936002)(186003)(6506007)(966005)(16526019)(478600001)(83380400001)(66946007)(86362001)(36756003)(52116002)(6512007)(9686003)(5660300002)(786003)(6486002)(2906002)(6916009)(66476007)(4326008)(316002)(66556008);
 DIR:OUT; SFP:1102; 
X-MS-Exchange-AntiSpam-MessageData: CKs9E5Sg8uMOcuvE9YP48MLjHxcb30g5/IVa1E+r+WXTUniwfajorP0HS652DBZVKHJhzEqwR+VtyrNvXAdP1t+NO2fb7Exj+qJpV2oMt0DcHltcROMtFHcggk6G3OQ+ESZAK2Wu6xYGzcVxvFtZST3bEApS7ITdoP0LX6/nAvYfAwgKWHXNDHdbwfi4SaSY3xCx8ntOroQFvwjjLl6SvYOorDRio28qyfm9MlI4N105vaYBVU4uy/J46XtAglTYQFXT5MLhCcQvT/SJAx/iZUjafRBQhU+Wwq1rybQHESHh1gdiJGM/h2pjiH0dGqtTTs03RCIU50AJk+bafogB4VgXdnt9bt8BuaGA1q+nxRH4vPiHlBIGVlF1oH+WPb28slSaXJxadfXl+n1vS+2oM0A/WdjFLPtBkJXe5FSUPxkb39f9Om4x3h5yhdduK7sXWamzNwh4U5rxbErk1MN5+qSqe/EFP8pMMJr8RcVZGSwucEbo7/GlHZaJaEBtV+PBpklMgKGxo6rEKpXIAV9+Yo83ecV0bRNXmzVc8Zuk28o=
X-OriginatorOrg: durham.ac.uk
X-MS-Exchange-CrossTenant-Network-Message-Id: 366669bb-933e-482c-ec81-08d8352dcb5e
X-MS-Exchange-CrossTenant-AuthSource: CWLP265MB1634.GBRP265.PROD.OUTLOOK.COM
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 31 Jul 2020 08:43:28.0748 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 7250d88b-4b68-4529-be44-d59a2d8a6f94
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: mpR3ZvbhLVWxu0fzlYKjoOOPu7xQW+qj7RMWH3ZPWv4xtxY6KIzLUukugUDct4voieJ08kijzzPwXZ37KhE6uA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: CWXP265MB0597
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Fri, 31 Jul 2020, Hans van Kranenburg wrote:

> Hi!
>
> News from the Debian Xen team (well, that's still only Ian and me). We
> still have Xen 4.11 in Debian unstable and stable (Buster) now, but at
> this point I really want to start working on the preparations for the
> next Debian release, which will happen a little less than a year from
> now.
>
> So, the 4.14.0 release is a good moment to kick it off. In February 2020
> Ian and I already spent a day moving the Debian packaging to 4.13, and
> the result has been lying around for a bit. Now I'm forwarding it to
> 4.14.0, and I really want to get this into Debian so users can start
> playing around with it and have enough time to contribute new
> things (like cross-building for the Raspberry Pi 4!).
>
> All the yolo WIP stuff without anything cleaned up is here:
> https://salsa.debian.org/xen-team/debian-xen/-/commits/knorrie/4.14
>
> Unfortunately, it FTBFS in an unexpected way; I cannot relate the
> failure to any of our additional patches.
>
> This is the last part of the output with the failure:
>
> ---- >8 ----
>
> gcc  -m64 -DBUILD_ID -fno-strict-aliasing -std=gnu99 -Wall
> -Wstrict-prototypes -Wdeclaration-after-statement
> -Wno-unused-but-set-variable -Wno-unused-local-typedefs   -O2
> -fomit-frame-pointer
> -D__XEN_INTERFACE_VERSION__=__XEN_LATEST_INTERFACE_VERSION__ -MMD -MP
> -MF .libxlu_pci.o.d -D_LARGEFILE_SOURCE -D_LARGEFILE64_SOURCE  -g -O2
> -fdebug-prefix-map=/home/knorrie/build/xen/debian-xen=.
> -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time
> -D_FORTIFY_SOURCE=2 -Werror -Wno-format-zero-length
> -Wmissing-declarations -Wno-declaration-after-statement
> -Wformat-nonliteral -I. -fPIC -pthread
> -I/home/knorrie/build/xen/debian-xen/tools/libxl/../../tools/libxc/include
> -I/home/knorrie/build/xen/debian-xen/tools/libxl/../../tools/libs/toollog/include
> -I/home/knorrie/build/xen/debian-xen/tools/libxl/../../tools/include
> -I/home/knorrie/build/xen/debian-xen/tools/libxl/../../tools/libs/foreignmemory/include
> -I/home/knorrie/build/xen/debian-xen/tools/libxl/../../tools/include
> -I/home/knorrie/build/xen/debian-xen/tools/libxl/../../tools/libs/devicemodel/include
> -I/home/knorrie/build/xen/debian-xen/tools/libxl/../../tools/include
> -I/home/knorrie/build/xen/debian-xen/tools/libxl/../../tools/include
> -D__XEN_TOOLS__   -c -o libxlu_pci.o libxlu_pci.c
> libxlu_pci.c: In function 'xlu_pci_parse_bdf':
> libxlu_pci.c:32:18: error: 'func' may be used uninitialized in this
> function [-Werror=maybe-uninitialized]
>   32 |     pcidev->func = func;
>      |     ~~~~~~~~~~~~~^~~~~~
> libxlu_pci.c:51:29: note: 'func' was declared here
>   51 |     unsigned dom, bus, dev, func, vslot = 0;
>      |                             ^~~~
> libxlu_pci.c:31:17: error: 'dev' may be used uninitialized in this
> function [-Werror=maybe-uninitialized]
>   31 |     pcidev->dev = dev;
>      |     ~~~~~~~~~~~~^~~~~
> libxlu_pci.c:51:24: note: 'dev' was declared here
>   51 |     unsigned dom, bus, dev, func, vslot = 0;
>      |                        ^~~
> libxlu_pci.c:30:17: error: 'bus' may be used uninitialized in this
> function [-Werror=maybe-uninitialized]
>   30 |     pcidev->bus = bus;
>      |     ~~~~~~~~~~~~^~~~~
> libxlu_pci.c:51:19: note: 'bus' was declared here
>   51 |     unsigned dom, bus, dev, func, vslot = 0;
>      |                   ^~~
> libxlu_pci.c:29:20: error: 'dom' may be used uninitialized in this
> function [-Werror=maybe-uninitialized]
>   29 |     pcidev->domain = domain;
>      |     ~~~~~~~~~~~~~~~^~~~~~~~
> libxlu_pci.c:51:14: note: 'dom' was declared here
>   51 |     unsigned dom, bus, dev, func, vslot = 0;
>      |              ^~~

That looks like an issue I saw in Fedora, which I associated with the 
update to gcc 10. It is one of the things I fixed (or at least worked 
around) in the patch here:
https://src.fedoraproject.org/rpms/xen/blob/master/f/xen.gcc10.fixes.patch

 	Michael Young


From xen-devel-bounces@lists.xenproject.org Fri Jul 31 09:19:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 09:19:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1RBz-0000X0-Hx; Fri, 31 Jul 2020 09:19:07 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=XNXZ=BK=redhat.com=david@srs-us1.protection.inumbo.net>)
 id 1k1RBx-0000Wj-Tl
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 09:19:05 +0000
X-Inumbo-ID: dff4855a-d30e-11ea-ab95-12813bfff9fa
Received: from us-smtp-delivery-1.mimecast.com (unknown [207.211.31.120])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id dff4855a-d30e-11ea-ab95-12813bfff9fa;
 Fri, 31 Jul 2020 09:19:02 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
 s=mimecast20190719; t=1596187142;
 h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
 content-transfer-encoding:content-transfer-encoding:
 in-reply-to:in-reply-to:references:references;
 bh=GX38+opkWxBZfD2far/R2rRkc2X2qlfoUgt4pP5SXDg=;
 b=gwyJn3jU7N/REiKzRAhgTbpkjsF3/nUITttfsKZRuHB1p23+rMMRAwX0ywfGW4cRJyE5x9
 AobB+bvssCqavcJHHZaS6yRmHlBo7MzkztYsCI4ZKmtE8AddPcP19Y7FpfEnil8oK66xtL
 Wmn+xyRPyWX32GAtaulDphrYkhHmomY=
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-301-Z7A2oVdiNTiak_-Gh5Vfug-1; Fri, 31 Jul 2020 05:18:58 -0400
X-MC-Unique: Z7A2oVdiNTiak_-Gh5Vfug-1
Received: from smtp.corp.redhat.com (int-mx08.intmail.prod.int.phx2.redhat.com
 [10.5.11.23])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 4358480382D;
 Fri, 31 Jul 2020 09:18:55 +0000 (UTC)
Received: from t480s.redhat.com (ovpn-113-22.ams2.redhat.com [10.36.113.22])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 5D0681A835;
 Fri, 31 Jul 2020 09:18:50 +0000 (UTC)
From: David Hildenbrand <david@redhat.com>
To: linux-kernel@vger.kernel.org
Subject: [PATCH RFCv1 2/5] kernel/resource: merge_child_mem_resources() to
 merge memory resources after adding succeeded
Date: Fri, 31 Jul 2020 11:18:35 +0200
Message-Id: <20200731091838.7490-3-david@redhat.com>
In-Reply-To: <20200731091838.7490-1-david@redhat.com>
References: <20200731091838.7490-1-david@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 2.84 on 10.5.11.23
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Jason Gunthorpe <jgg@ziepe.ca>, linux-hyperv@vger.kernel.org,
 Michal Hocko <mhocko@suse.com>, Stephen Hemminger <sthemmin@microsoft.com>,
 Kees Cook <keescook@chromium.org>, David Hildenbrand <david@redhat.com>,
 Ard Biesheuvel <ardb@kernel.org>, Haiyang Zhang <haiyangz@microsoft.com>,
 Wei Liu <wei.liu@kernel.org>, virtualization@lists.linux-foundation.org,
 Juergen Gross <jgross@suse.com>, linux-mm@kvack.org,
 Stefano Stabellini <sstabellini@kernel.org>,
 Thomas Gleixner <tglx@linutronix.de>, Julien Grall <julien@xen.org>,
 xen-devel@lists.xenproject.org, Andrew Morton <akpm@linux-foundation.org>,
 "K. Y. Srinivasan" <kys@microsoft.com>,
 Dan Williams <dan.j.williams@intel.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Some add_memory*() users add memory in small, contiguous memory blocks.
Examples include virtio-mem, the Hyper-V balloon, and the Xen balloon.

This can quickly result in a lot of memory resources, whereby the actual
resource boundaries are not of interest (in contrast to, e.g., DIMMs,
where they are exposed via /proc/iomem to user space). We really want to
merge added resources in this scenario where possible.

Let's provide an interface to trigger merging of applicable child
resources. It will, for example, be used by virtio-mem to trigger
merging of the memory resources it added (via add_memory_driver_managed())
to its resource container.

Note: We really want to merge after the whole operation succeeded, not
directly when adding a resource to the resource tree (it would break
add_memory_resource() and require splitting resources again when the
operation failed - e.g., due to -ENOMEM).

Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Jason Gunthorpe <jgg@ziepe.ca>
Cc: Kees Cook <keescook@chromium.org>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "K. Y. Srinivasan" <kys@microsoft.com>
Cc: Haiyang Zhang <haiyangz@microsoft.com>
Cc: Stephen Hemminger <sthemmin@microsoft.com>
Cc: Wei Liu <wei.liu@kernel.org>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Roger Pau Monné <roger.pau@citrix.com>
Cc: Julien Grall <julien@xen.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
 include/linux/ioport.h |  3 +++
 kernel/resource.c      | 56 ++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 59 insertions(+)

diff --git a/include/linux/ioport.h b/include/linux/ioport.h
index 52a91f5fa1a36..743b87fe2205b 100644
--- a/include/linux/ioport.h
+++ b/include/linux/ioport.h
@@ -251,6 +251,9 @@ extern void __release_region(struct resource *, resource_size_t,
 extern void release_mem_region_adjustable(struct resource *, resource_size_t,
 					  resource_size_t);
 #endif
+#ifdef CONFIG_MEMORY_HOTPLUG
+extern void merge_child_mem_resources(struct resource *res, const char *name);
+#endif
 
 /* Wrappers for managed devices */
 struct device;
diff --git a/kernel/resource.c b/kernel/resource.c
index 249c6b54014de..01ecc5b7956f5 100644
--- a/kernel/resource.c
+++ b/kernel/resource.c
@@ -1360,6 +1360,62 @@ void release_mem_region_adjustable(struct resource *parent,
 }
 #endif	/* CONFIG_MEMORY_HOTREMOVE */
 
+#ifdef CONFIG_MEMORY_HOTPLUG
+static bool mem_resources_mergeable(struct resource *r1, struct resource *r2)
+{
+	return r1->end + 1 == r2->start &&
+	       r1->name == r2->name &&
+	       r1->flags == r2->flags &&
+	       (r1->flags & IORESOURCE_MEM) &&
+	       r1->desc == r2->desc &&
+	       !r1->child && !r2->child;
+}
+
+/*
+ * merge_child_mem_resources - try to merge contiguous child IORESOURCE_MEM
+ *                             resources with the given name that match all
+ *                             other properties
+ * @parent: parent resource descriptor
+ * @name: name of the child resources to consider for merging
+ *
+ * This interface is intended for memory hotplug, whereby lots of consecutive
+ * memory resources are added (e.g., via add_memory*()) by a driver, and the
+ * actual resource boundaries are not of interest (in contrast to, e.g.,
+ * DIMMs). Only immediate child resources are considered. All
+ * applicable child resources must be immutable during the request.
+ *
+ * Note:
+ * - The caller has to make sure that no pointers to resources that might
+ *   get merged are held anymore. Callers should only trigger merging of child
+ *   resources when they are the only one adding such resources to the parent.
+ *   E.g., if two mechanisms could add "System RAM" immediately below the
+ *   same parent, this function is not safe to use.
+ * - release_mem_region_adjustable() will split on demand on memory hotunplug
+ */
+void merge_child_mem_resources(struct resource *parent, const char *name)
+{
+	struct resource *cur, *next;
+
+	write_lock(&resource_lock);
+
+	cur = parent->child;
+	while (cur && cur->sibling) {
+		next = cur->sibling;
+		if (!strcmp(cur->name, name) &&
+		    mem_resources_mergeable(cur, next)) {
+			cur->end = next->end;
+			cur->sibling = next->sibling;
+			free_resource(next);
+			next = cur->sibling;
+		}
+		cur = next;
+	}
+
+	write_unlock(&resource_lock);
+}
+EXPORT_SYMBOL(merge_child_mem_resources);
+#endif	/* CONFIG_MEMORY_HOTPLUG */
+
 /*
  * Managed region resource
  */
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Fri Jul 31 09:19:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 09:19:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1RBt-0000W5-TC; Fri, 31 Jul 2020 09:19:01 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=XNXZ=BK=redhat.com=david@srs-us1.protection.inumbo.net>)
 id 1k1RBs-0000Vm-4g
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 09:19:00 +0000
X-Inumbo-ID: dab94e05-d30e-11ea-8e1f-bc764e2007e4
Received: from us-smtp-delivery-1.mimecast.com (unknown [207.211.31.81])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id dab94e05-d30e-11ea-8e1f-bc764e2007e4;
 Fri, 31 Jul 2020 09:18:54 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
 s=mimecast20190719; t=1596187134;
 h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
 to:to:cc:cc:mime-version:mime-version:
 content-transfer-encoding:content-transfer-encoding:
 in-reply-to:in-reply-to:references:references;
 bh=Nm9hdEOPm77EQVaUlUcV47hpK3bGj715U/FFPYWuGso=;
 b=UJSVVoXWWXWnPd07mj93j8NHefizfRRtku8wBHOJuMlX142lwynJHx32O4stSDZynh4sxs
 DxHxlGwM0vgm0x9fHuUCI+PF56uv27CrzoO/iXQi4KslGuZ4WqbupwPp5lYmZkZlro7zBW
 gW0s6HIIZScVS22Aj+wC2Fwnubi5Y40=
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-423-eybjEKR1P9C14MolFOWKEA-1; Fri, 31 Jul 2020 05:18:51 -0400
X-MC-Unique: eybjEKR1P9C14MolFOWKEA-1
Received: from smtp.corp.redhat.com (int-mx08.intmail.prod.int.phx2.redhat.com
 [10.5.11.23])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 0D2E6101C8A7;
 Fri, 31 Jul 2020 09:18:50 +0000 (UTC)
Received: from t480s.redhat.com (ovpn-113-22.ams2.redhat.com [10.36.113.22])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 564781A835;
 Fri, 31 Jul 2020 09:18:47 +0000 (UTC)
From: David Hildenbrand <david@redhat.com>
To: linux-kernel@vger.kernel.org
Subject: [PATCH RFCv1 1/5] kernel/resource: make
 release_mem_region_adjustable() never fail
Date: Fri, 31 Jul 2020 11:18:34 +0200
Message-Id: <20200731091838.7490-2-david@redhat.com>
In-Reply-To: <20200731091838.7490-1-david@redhat.com>
References: <20200731091838.7490-1-david@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 2.84 on 10.5.11.23
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Jason Gunthorpe <jgg@ziepe.ca>, linux-hyperv@vger.kernel.org,
 Michal Hocko <mhocko@suse.com>, Kees Cook <keescook@chromium.org>,
 David Hildenbrand <david@redhat.com>, Ard Biesheuvel <ardb@kernel.org>,
 virtualization@lists.linux-foundation.org, linux-mm@kvack.org,
 Wei Yang <richardw.yang@linux.intel.com>, xen-devel@lists.xenproject.org,
 Andrew Morton <akpm@linux-foundation.org>,
 Dan Williams <dan.j.williams@intel.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Let's make sure splitting a resource on memory hotunplug will never fail.
This will become more relevant once we merge selected System RAM
resources - then, we'll trigger that case more often on memory unplug.

In general, this function is already unlikely to fail. When we remove
memory, we free up quite a lot of metadata (memmap, page tables, memory
block device, etc.).

All other error cases inside release_mem_region_adjustable() seem to be
sanity checks in case the function is abused in a different context -
let's add WARN_ON_ONCE() for those.

Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Jason Gunthorpe <jgg@ziepe.ca>
Cc: Kees Cook <keescook@chromium.org>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: Wei Yang <richardw.yang@linux.intel.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
 include/linux/ioport.h |  4 ++--
 kernel/resource.c      | 49 ++++++++++++++++++++++++------------------
 mm/memory_hotplug.c    | 22 +------------------
 3 files changed, 31 insertions(+), 44 deletions(-)

diff --git a/include/linux/ioport.h b/include/linux/ioport.h
index 6c2b06fe8beb7..52a91f5fa1a36 100644
--- a/include/linux/ioport.h
+++ b/include/linux/ioport.h
@@ -248,8 +248,8 @@ extern struct resource * __request_region(struct resource *,
 extern void __release_region(struct resource *, resource_size_t,
 				resource_size_t);
 #ifdef CONFIG_MEMORY_HOTREMOVE
-extern int release_mem_region_adjustable(struct resource *, resource_size_t,
-				resource_size_t);
+extern void release_mem_region_adjustable(struct resource *, resource_size_t,
+					  resource_size_t);
 #endif
 
 /* Wrappers for managed devices */
diff --git a/kernel/resource.c b/kernel/resource.c
index 841737bbda9e5..249c6b54014de 100644
--- a/kernel/resource.c
+++ b/kernel/resource.c
@@ -1255,21 +1255,28 @@ EXPORT_SYMBOL(__release_region);
  *   assumes that all children remain in the lower address entry for
  *   simplicity.  Enhance this logic when necessary.
  */
-int release_mem_region_adjustable(struct resource *parent,
-				  resource_size_t start, resource_size_t size)
+void release_mem_region_adjustable(struct resource *parent,
+				   resource_size_t start, resource_size_t size)
 {
+	struct resource *new_res = NULL;
+	bool alloc_nofail = false;
 	struct resource **p;
 	struct resource *res;
-	struct resource *new_res;
 	resource_size_t end;
-	int ret = -EINVAL;
 
 	end = start + size - 1;
-	if ((start < parent->start) || (end > parent->end))
-		return ret;
+	if (WARN_ON_ONCE((start < parent->start) || (end > parent->end)))
+		return;
 
-	/* The alloc_resource() result gets checked later */
-	new_res = alloc_resource(GFP_KERNEL);
+	/*
+	 * We free up quite a lot of memory on memory hotunplug (esp. the
+	 * memmap), just before releasing the region. This is highly unlikely
+	 * to fail - let's play safe and make it never fail, as the caller
+	 * cannot perform any error handling (e.g., trying to re-add memory
+	 * will fail similarly).
+	 */
+retry:
+	new_res = alloc_resource(GFP_KERNEL | (alloc_nofail ? __GFP_NOFAIL : 0));
 
 	p = &parent->child;
 	write_lock(&resource_lock);
@@ -1295,7 +1302,6 @@ int release_mem_region_adjustable(struct resource *parent,
 		 * so if we are dealing with them, let us just back off here.
 		 */
 		if (!(res->flags & IORESOURCE_SYSRAM)) {
-			ret = 0;
 			break;
 		}
 
@@ -1312,20 +1318,23 @@ int release_mem_region_adjustable(struct resource *parent,
 			/* free the whole entry */
 			*p = res->sibling;
 			free_resource(res);
-			ret = 0;
 		} else if (res->start == start && res->end != end) {
 			/* adjust the start */
-			ret = __adjust_resource(res, end + 1,
-						res->end - end);
+			WARN_ON_ONCE(__adjust_resource(res, end + 1,
+						       res->end - end));
 		} else if (res->start != start && res->end == end) {
 			/* adjust the end */
-			ret = __adjust_resource(res, res->start,
-						start - res->start);
+			WARN_ON_ONCE(__adjust_resource(res, res->start,
+						       start - res->start));
 		} else {
-			/* split into two entries */
+			/* split into two entries - we need a new resource */
 			if (!new_res) {
-				ret = -ENOMEM;
-				break;
+				new_res = alloc_resource(GFP_ATOMIC);
+				if (!new_res) {
+					alloc_nofail = true;
+					write_unlock(&resource_lock);
+					goto retry;
+				}
 			}
 			new_res->name = res->name;
 			new_res->start = end + 1;
@@ -1336,9 +1345,8 @@ int release_mem_region_adjustable(struct resource *parent,
 			new_res->sibling = res->sibling;
 			new_res->child = NULL;
 
-			ret = __adjust_resource(res, res->start,
-						start - res->start);
-			if (ret)
+			if (WARN_ON_ONCE(__adjust_resource(res, res->start,
+							   start - res->start)))
 				break;
 			res->sibling = new_res;
 			new_res = NULL;
@@ -1349,7 +1357,6 @@ int release_mem_region_adjustable(struct resource *parent,
 
 	write_unlock(&resource_lock);
 	free_resource(new_res);
-	return ret;
 }
 #endif	/* CONFIG_MEMORY_HOTREMOVE */
 
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index da374cd3d45b3..258656b819dbe 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -1709,26 +1709,6 @@ void try_offline_node(int nid)
 }
 EXPORT_SYMBOL(try_offline_node);
 
-static void __release_memory_resource(resource_size_t start,
-				      resource_size_t size)
-{
-	int ret;
-
-	/*
-	 * When removing memory in the same granularity as it was added,
-	 * this function never fails. It might only fail if resources
-	 * have to be adjusted or split. We'll ignore the error, as
-	 * removing of memory cannot fail.
-	 */
-	ret = release_mem_region_adjustable(&iomem_resource, start, size);
-	if (ret) {
-		resource_size_t endres = start + size - 1;
-
-		pr_warn("Unable to release resource <%pa-%pa> (%d)\n",
-			&start, &endres, ret);
-	}
-}
-
 static int __ref try_remove_memory(int nid, u64 start, u64 size)
 {
 	int rc = 0;
@@ -1762,7 +1742,7 @@ static int __ref try_remove_memory(int nid, u64 start, u64 size)
 		memblock_remove(start, size);
 	}
 
-	__release_memory_resource(start, size);
+	release_mem_region_adjustable(&iomem_resource, start, size);
 
 	try_offline_node(nid);
 
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Fri Jul 31 09:19:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 09:19:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1RBy-0000Wp-9O; Fri, 31 Jul 2020 09:19:06 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=XNXZ=BK=redhat.com=david@srs-us1.protection.inumbo.net>)
 id 1k1RBx-0000Vm-4p
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 09:19:05 +0000
X-Inumbo-ID: de6814e0-d30e-11ea-8e1f-bc764e2007e4
Received: from us-smtp-1.mimecast.com (unknown [205.139.110.120])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id de6814e0-d30e-11ea-8e1f-bc764e2007e4;
 Fri, 31 Jul 2020 09:19:00 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
 s=mimecast20190719; t=1596187140;
 h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
 to:to:cc:cc:mime-version:mime-version:
 content-transfer-encoding:content-transfer-encoding:
 in-reply-to:in-reply-to:references:references;
 bh=nOON8+ajaLpXIcBxCYJ6eMATKXtAWnIlcMGpRLJHnBo=;
 b=bJTt0i+iPUoecHL6hE6yF2u6x9u/VnocEw77lYLaQeSQ8bQ+VJLfFs84wLWoqDbvlHAHCj
 xjILaBJeVmtLrchGYQpBeEA4rhve19vwAU4KmfiCJXixltInJfy184Vg9QIkmWuV5suksV
 c7wfg/ODoXhy2UfPQVf3lkGWXgDfmpA=
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-14-a6wjUmLxO62-OZabls5-1w-1; Fri, 31 Jul 2020 05:18:58 -0400
X-MC-Unique: a6wjUmLxO62-OZabls5-1w-1
Received: from smtp.corp.redhat.com (int-mx08.intmail.prod.int.phx2.redhat.com
 [10.5.11.23])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id DFFD618C63C0;
 Fri, 31 Jul 2020 09:18:56 +0000 (UTC)
Received: from t480s.redhat.com (ovpn-113-22.ams2.redhat.com [10.36.113.22])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 919091C92D;
 Fri, 31 Jul 2020 09:18:55 +0000 (UTC)
From: David Hildenbrand <david@redhat.com>
To: linux-kernel@vger.kernel.org
Subject: [PATCH RFCv1 3/5] virtio-mem: try to merge "System RAM (virtio_mem)"
 resources
Date: Fri, 31 Jul 2020 11:18:36 +0200
Message-Id: <20200731091838.7490-4-david@redhat.com>
In-Reply-To: <20200731091838.7490-1-david@redhat.com>
References: <20200731091838.7490-1-david@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 2.84 on 10.5.11.23
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: linux-mm@kvack.org, linux-hyperv@vger.kernel.org,
 David Hildenbrand <david@redhat.com>, xen-devel@lists.xenproject.org,
 virtualization@lists.linux-foundation.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

virtio-mem adds memory in memory block granularity, to be able to
remove it in the same granularity again later, and to grow slowly on
demand. This, however, results in quite a lot of resources when
adding a lot of memory. Resources are effectively stored in a list-based
tree. Having a lot of resources not only wastes memory, it also makes
traversing that tree more expensive, and makes /proc/iomem explode in
size (e.g., requiring kexec-tools to manually merge resources later
when, for example, trying to create a kdump header).

Before this patch, we get (/proc/iomem) when hotplugging 2G via virtio-mem
on x86-64:
        [...]
        100000000-13fffffff : System RAM
        140000000-33fffffff : virtio0
          140000000-147ffffff : System RAM (virtio_mem)
          148000000-14fffffff : System RAM (virtio_mem)
          150000000-157ffffff : System RAM (virtio_mem)
          158000000-15fffffff : System RAM (virtio_mem)
          160000000-167ffffff : System RAM (virtio_mem)
          168000000-16fffffff : System RAM (virtio_mem)
          170000000-177ffffff : System RAM (virtio_mem)
          178000000-17fffffff : System RAM (virtio_mem)
          180000000-187ffffff : System RAM (virtio_mem)
          188000000-18fffffff : System RAM (virtio_mem)
          190000000-197ffffff : System RAM (virtio_mem)
          198000000-19fffffff : System RAM (virtio_mem)
          1a0000000-1a7ffffff : System RAM (virtio_mem)
          1a8000000-1afffffff : System RAM (virtio_mem)
          1b0000000-1b7ffffff : System RAM (virtio_mem)
          1b8000000-1bfffffff : System RAM (virtio_mem)
        3280000000-32ffffffff : PCI Bus 0000:00

With this patch, we get (/proc/iomem):
        [...]
        fffc0000-ffffffff : Reserved
        100000000-13fffffff : System RAM
        140000000-33fffffff : virtio0
          140000000-1bfffffff : System RAM (virtio_mem)
        3280000000-32ffffffff : PCI Bus 0000:00

Of course, with more hotplugged memory, it gets worse. When unplugging
memory blocks again, try_remove_memory() (via
offline_and_remove_memory()) will properly split the resource up again.

Signed-off-by: David Hildenbrand <david@redhat.com>
---
 drivers/virtio/virtio_mem.c | 14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)

diff --git a/drivers/virtio/virtio_mem.c b/drivers/virtio/virtio_mem.c
index f26f5f64ae822..2396a8d67875e 100644
--- a/drivers/virtio/virtio_mem.c
+++ b/drivers/virtio/virtio_mem.c
@@ -415,6 +415,7 @@ static int virtio_mem_mb_add(struct virtio_mem *vm, unsigned long mb_id)
 {
 	const uint64_t addr = virtio_mem_mb_id_to_phys(mb_id);
 	int nid = vm->nid;
+	int rc;
 
 	if (nid == NUMA_NO_NODE)
 		nid = memory_add_physaddr_to_nid(addr);
@@ -431,8 +432,17 @@ static int virtio_mem_mb_add(struct virtio_mem *vm, unsigned long mb_id)
 	}
 
 	dev_dbg(&vm->vdev->dev, "adding memory block: %lu\n", mb_id);
-	return add_memory_driver_managed(nid, addr, memory_block_size_bytes(),
-					 vm->resource_name);
+	rc = add_memory_driver_managed(nid, addr, memory_block_size_bytes(),
+				       vm->resource_name);
+	if (!rc) {
+		/*
+		 * Try to reduce the number of resources by merging them. The
+		 * memory removal path will properly split them up again.
+		 */
+		merge_child_mem_resources(vm->parent_resource,
+					  vm->resource_name);
+	}
+	return rc;
 }
 
 /*
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Fri Jul 31 09:19:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 09:19:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1RBp-0000Vs-Kt; Fri, 31 Jul 2020 09:18:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=XNXZ=BK=redhat.com=david@srs-us1.protection.inumbo.net>)
 id 1k1RBn-0000Vm-BI
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 09:18:56 +0000
X-Inumbo-ID: dadd789c-d30e-11ea-8e1f-bc764e2007e4
Received: from us-smtp-1.mimecast.com (unknown [205.139.110.61])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id dadd789c-d30e-11ea-8e1f-bc764e2007e4;
 Fri, 31 Jul 2020 09:18:54 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
 s=mimecast20190719; t=1596187134;
 h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
 content-transfer-encoding:content-transfer-encoding;
 bh=7KJjJxh3LrCrOmiVZX7zoQLv4kMydPtkCS0ZPjF2DUk=;
 b=YRAc7jZAndJHzfvcACv97DJeo2rFUQwS1c0JV3aHgHUCRN2D5quQzwXbCozUdHxOAaYAC3
 dYjRg3RCvTGoc6TRnYfFbu2eCD1Nei29sdWTuG535vgOHmL5fdT8Jq7AbrUgfDu8+c5RSd
 LX1BIHXcLj0pV2cTdjX9IuuWKcydwTE=
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-70-SolxhgZkNXO5E6nSxUQ3oA-1; Fri, 31 Jul 2020 05:18:49 -0400
X-MC-Unique: SolxhgZkNXO5E6nSxUQ3oA-1
Received: from smtp.corp.redhat.com (int-mx08.intmail.prod.int.phx2.redhat.com
 [10.5.11.23])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 07FF759;
 Fri, 31 Jul 2020 09:18:47 +0000 (UTC)
Received: from t480s.redhat.com (ovpn-113-22.ams2.redhat.com [10.36.113.22])
 by smtp.corp.redhat.com (Postfix) with ESMTP id BC0631A835;
 Fri, 31 Jul 2020 09:18:39 +0000 (UTC)
From: David Hildenbrand <david@redhat.com>
To: linux-kernel@vger.kernel.org
Subject: [PATCH RFCv1 0/5] mm/memory_hotplug: selective merging of memory
 resources
Date: Fri, 31 Jul 2020 11:18:33 +0200
Message-Id: <20200731091838.7490-1-david@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 2.84 on 10.5.11.23
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: linux-hyperv@vger.kernel.org, Michal Hocko <mhocko@suse.com>,
 David Hildenbrand <david@redhat.com>,
 virtualization@lists.linux-foundation.org, linux-mm@kvack.org,
 "K. Y. Srinivasan" <kys@microsoft.com>,
 Dan Williams <dan.j.williams@intel.com>, Wei Liu <wei.liu@kernel.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Stephen Hemminger <sthemmin@microsoft.com>, Ard Biesheuvel <ardb@kernel.org>,
 Jason Gunthorpe <jgg@ziepe.ca>, xen-devel@lists.xenproject.org,
 Julien Grall <julien@xen.org>, Kees Cook <keescook@chromium.org>,
 Haiyang Zhang <haiyangz@microsoft.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>, Juergen Gross <jgross@suse.com>,
 Thomas Gleixner <tglx@linutronix.de>, Wei Yang <richardw.yang@linux.intel.com>,
 Andrew Morton <akpm@linux-foundation.org>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Some add_memory*() users add memory in small, contiguous memory blocks.
Examples include virtio-mem, the Hyper-V balloon, and the Xen balloon.

This can quickly result in a lot of memory resources whose individual
boundaries are not of interest (in contrast to, e.g., DIMMs, whose
boundaries are exposed via /proc/iomem and can matter to user space). In
this scenario we really want to merge added resources where possible.

Resources are effectively stored in a list-based tree. Having a lot of
resources not only wastes memory, it also makes traversing that tree more
expensive and makes /proc/iomem explode in size. For example, kexec-tools
has to manually merge resources when creating a kdump header, and its
current resource count limit does not allow more than ~100GB of memory
with a memory block size of 128MB on x86-64.

Let's allow resources to be merged selectively by specifying a parent
resource and a resource identifier string. The memory unplug path will
properly split up merged resources again.
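
As a rough user-space illustration of what such a selective merge does
(the struct and helper names here are made up for the example; the
kernel's struct resource keeps a parent's children in a sorted sibling
list), contiguous same-named neighbours simply get coalesced:

```c
#include <stdlib.h>
#include <string.h>

/* Toy stand-in for struct resource: children of one parent, kept as a
 * sorted singly-linked sibling list with inclusive [start, end] ranges. */
struct res {
    const char *name;
    unsigned long long start, end;
    struct res *sibling;
};

/* Merge any two neighbouring children that both carry the given name and
 * are physically contiguous, mimicking a selective merge of "System RAM"
 * child resources under one parent. */
static void merge_child_resources(struct res *head, const char *name)
{
    struct res *cur = head;

    while (cur && cur->sibling) {
        struct res *next = cur->sibling;

        if (!strcmp(cur->name, name) && !strcmp(next->name, name) &&
            cur->end + 1 == next->start) {
            cur->end = next->end;          /* absorb the neighbour */
            cur->sibling = next->sibling;
            free(next);
        } else {
            cur = next;
        }
    }
}
```

With three contiguous 128MB "System RAM" blocks this collapses the list
to a single range, which is exactly the /proc/iomem size reduction the
cover letter is after.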

Patch #3 contains a /proc/iomem example. Only tested with virtio-mem.

Note: This gets the job done and is comparatively simple. More complicated
approaches would require introducing IORESOURCE_MERGEABLE and extending our
add_memory*() interfaces with a flag specifying that merging after a
successful add is acceptable. I'd like to avoid that complexity and code
churn for now.

David Hildenbrand (5):
  kernel/resource: make release_mem_region_adjustable() never fail
  kernel/resource: merge_child_mem_resources() to merge memory resources
    after adding succeeded
  virtio-mem: try to merge "System RAM (virtio_mem)" resources
  xen/balloon: try to merge "System RAM" resources
  hv_balloon: try to merge "System RAM" resources

 drivers/hv/hv_balloon.c     |   3 ++
 drivers/virtio/virtio_mem.c |  14 ++++-
 drivers/xen/balloon.c       |   4 ++
 include/linux/ioport.h      |   7 ++-
 kernel/resource.c           | 105 ++++++++++++++++++++++++++++--------
 mm/memory_hotplug.c         |  22 +-------
 6 files changed, 109 insertions(+), 46 deletions(-)

-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Fri Jul 31 09:19:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 09:19:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1RC3-0000Xo-QH; Fri, 31 Jul 2020 09:19:11 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=XNXZ=BK=redhat.com=david@srs-us1.protection.inumbo.net>)
 id 1k1RC2-0000Wj-PZ
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 09:19:10 +0000
X-Inumbo-ID: e1ba0b76-d30e-11ea-ab95-12813bfff9fa
Received: from us-smtp-delivery-1.mimecast.com (unknown [207.211.31.81])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id e1ba0b76-d30e-11ea-ab95-12813bfff9fa;
 Fri, 31 Jul 2020 09:19:05 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
 s=mimecast20190719; t=1596187145;
 h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
 content-transfer-encoding:content-transfer-encoding:
 in-reply-to:in-reply-to:references:references;
 bh=WXo92L6xzVp7lbkKaGhQOH5PjMjS2UMyqhJq+SqZmjU=;
 b=eKz2GmRzt71ZPjgS2zsWodplxYWV8Uzf3VNCS/zSN/Ixo1Sf06HXasBEzxNvRSR6NI5iBj
 tYpCPWqZft+a7ljcRaO0FmaQrcFBIvLYbJPkPOnFrHjoTJSgq6YOthrjZn2m2ZFGiSD1FH
 TBP6Q25KFwaqg7pQ276qpbNqxVmDjLo=
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-403-Vj_nV24LO0GNaNZsmopdDQ-1; Fri, 31 Jul 2020 05:19:01 -0400
X-MC-Unique: Vj_nV24LO0GNaNZsmopdDQ-1
Received: from smtp.corp.redhat.com (int-mx08.intmail.prod.int.phx2.redhat.com
 [10.5.11.23])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id A93C718C63C0;
 Fri, 31 Jul 2020 09:18:59 +0000 (UTC)
Received: from t480s.redhat.com (ovpn-113-22.ams2.redhat.com [10.36.113.22])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 39C481A835;
 Fri, 31 Jul 2020 09:18:57 +0000 (UTC)
From: David Hildenbrand <david@redhat.com>
To: linux-kernel@vger.kernel.org
Subject: [PATCH RFCv1 4/5] xen/balloon: try to merge "System RAM" resources
Date: Fri, 31 Jul 2020 11:18:37 +0200
Message-Id: <20200731091838.7490-5-david@redhat.com>
In-Reply-To: <20200731091838.7490-1-david@redhat.com>
References: <20200731091838.7490-1-david@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 2.84 on 10.5.11.23
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>, linux-hyperv@vger.kernel.org,
 Michal Hocko <mhocko@suse.com>, Julien Grall <julien@xen.org>,
 David Hildenbrand <david@redhat.com>,
 virtualization@lists.linux-foundation.org, linux-mm@kvack.org,
 Stefano Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org,
 Andrew Morton <akpm@linux-foundation.org>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Let's reuse the new mechanism to merge "System RAM" resources below the
root. We are the only ones hotplugging "System RAM" and DIMMs don't apply,
so this is safe to use.

Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Roger Pau Monné <roger.pau@citrix.com>
Cc: Julien Grall <julien@xen.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
 drivers/xen/balloon.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/drivers/xen/balloon.c b/drivers/xen/balloon.c
index 77c57568e5d7f..644ae2e3798e2 100644
--- a/drivers/xen/balloon.c
+++ b/drivers/xen/balloon.c
@@ -353,6 +353,10 @@ static enum bp_state reserve_additional_memory(void)
 	if (rc) {
 		pr_warn("Cannot add additional memory (%i)\n", rc);
 		goto err;
+	} else {
+		resource = NULL;
+		/* Try to reduce the number of "System RAM" resources. */
+		merge_child_mem_resources(&iomem_resource, "System RAM");
 	}
 
 	balloon_stats.total_pages += balloon_hotplug;
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Fri Jul 31 09:19:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 09:19:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1RC9-0000al-2e; Fri, 31 Jul 2020 09:19:17 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=XNXZ=BK=redhat.com=david@srs-us1.protection.inumbo.net>)
 id 1k1RC7-0000Wj-Pr
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 09:19:15 +0000
X-Inumbo-ID: e5293f84-d30e-11ea-ab95-12813bfff9fa
Received: from us-smtp-1.mimecast.com (unknown [205.139.110.61])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id e5293f84-d30e-11ea-ab95-12813bfff9fa;
 Fri, 31 Jul 2020 09:19:11 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
 s=mimecast20190719; t=1596187151;
 h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
 to:to:cc:cc:mime-version:mime-version:
 content-transfer-encoding:content-transfer-encoding:
 in-reply-to:in-reply-to:references:references;
 bh=1ITjza8QZs334kZDawRKP4E/FGkviB+rOAxL++W/FFU=;
 b=D/HxuniBO2RN0kp9Zkm3YpHn+w9DEuPfw9K2pbvYWH1mKV4tntKnmq2yQC+y3q4a5W4M2A
 wyz2S6sSz0B4nIqvPCzpOee0ijPcp/TUvX96JrJtuytnVzA7uf6OuDlnZ2S+Sr0PE0ugtT
 jcPgO0U+4y+UWEBRrmiVTyTYQ/z92co=
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-352-w-XOsNLgPyeml39HbUP41A-1; Fri, 31 Jul 2020 05:19:06 -0400
X-MC-Unique: w-XOsNLgPyeml39HbUP41A-1
Received: from smtp.corp.redhat.com (int-mx08.intmail.prod.int.phx2.redhat.com
 [10.5.11.23])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 2F36F801504;
 Fri, 31 Jul 2020 09:19:05 +0000 (UTC)
Received: from t480s.redhat.com (ovpn-113-22.ams2.redhat.com [10.36.113.22])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 035EC1A835;
 Fri, 31 Jul 2020 09:18:59 +0000 (UTC)
From: David Hildenbrand <david@redhat.com>
To: linux-kernel@vger.kernel.org
Subject: [PATCH RFCv1 5/5] hv_balloon: try to merge "System RAM" resources
Date: Fri, 31 Jul 2020 11:18:38 +0200
Message-Id: <20200731091838.7490-6-david@redhat.com>
In-Reply-To: <20200731091838.7490-1-david@redhat.com>
References: <20200731091838.7490-1-david@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 2.84 on 10.5.11.23
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: linux-hyperv@vger.kernel.org, Michal Hocko <mhocko@suse.com>,
 Stephen Hemminger <sthemmin@microsoft.com>,
 David Hildenbrand <david@redhat.com>, Haiyang Zhang <haiyangz@microsoft.com>,
 Wei Liu <wei.liu@kernel.org>, virtualization@lists.linux-foundation.org,
 linux-mm@kvack.org, xen-devel@lists.xenproject.org,
 Andrew Morton <akpm@linux-foundation.org>,
 "K. Y. Srinivasan" <kys@microsoft.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Let's reuse the new mechanism to merge "System RAM" resources below the
root. We are the only ones hotplugging "System RAM" and DIMMs don't apply,
so this is safe to use.

Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: "K. Y. Srinivasan" <kys@microsoft.com>
Cc: Haiyang Zhang <haiyangz@microsoft.com>
Cc: Stephen Hemminger <sthemmin@microsoft.com>
Cc: Wei Liu <wei.liu@kernel.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
 drivers/hv/hv_balloon.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/drivers/hv/hv_balloon.c b/drivers/hv/hv_balloon.c
index 32e3bc0aa665a..0745f7cc1727b 100644
--- a/drivers/hv/hv_balloon.c
+++ b/drivers/hv/hv_balloon.c
@@ -745,6 +745,9 @@ static void hv_mem_hot_add(unsigned long start, unsigned long size,
 			has->covered_end_pfn -=  processed_pfn;
 			spin_unlock_irqrestore(&dm_device.ha_lock, flags);
 			break;
+		} else {
+			/* Try to reduce the number of "System RAM" resources. */
+			merge_child_mem_resources(&iomem_resource, "System RAM");
 		}
 
 		/*
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Fri Jul 31 09:22:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 09:22:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1REv-0001i7-Id; Fri, 31 Jul 2020 09:22:09 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=HXbV=BK=amazon.co.uk=prvs=4749be70b=pdurrant@srs-us1.protection.inumbo.net>)
 id 1k1REu-0001i0-N9
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 09:22:08 +0000
X-Inumbo-ID: 4dddf39e-d30f-11ea-8e1f-bc764e2007e4
Received: from smtp-fw-33001.amazon.com (unknown [207.171.190.10])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4dddf39e-d30f-11ea-8e1f-bc764e2007e4;
 Fri, 31 Jul 2020 09:22:07 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=amazon.co.uk; i=@amazon.co.uk; q=dns/txt;
 s=amazon201209; t=1596187328; x=1627723328;
 h=from:to:cc:date:message-id:references:in-reply-to:
 content-transfer-encoding:mime-version:subject;
 bh=hqb9Ru6P4OkJF5q16qJJmtAlLQTi/lbwbWAdxeSqxP8=;
 b=iXzafmxvz8gePJiJkK7XdpELjvlVUP7mGk7vpY359cWKVToS+klZS8hY
 iBROnoAJsUbJzU157hQWPYln32tKfH7mIxg7unQcsLeQjL5SgKE+lO/M0
 F3V19PXNyc5LKtlyMt4eM/r4w+lPJdNGs+Nl2hss1CD/0an8OttrF1VkA c=;
IronPort-SDR: QnSz88uG4O7oScxGzDs8yBhXe6YcDygdado8zqmzkRCsbyDMmZ4oiRp4lgRJe5LH1+MEna5tVF
 cQ4rkG5n2Atg==
X-IronPort-AV: E=Sophos;i="5.75,417,1589241600"; d="scan'208";a="63241284"
Subject: RE: [PATCH v2 08/10] remove remaining uses of iommu_legacy_map/unmap
Thread-Topic: [PATCH v2 08/10] remove remaining uses of iommu_legacy_map/unmap
Received: from sea32-co-svc-lb4-vlan3.sea.corp.amazon.com (HELO
 email-inbound-relay-2c-4e7c8266.us-west-2.amazon.com) ([10.47.23.38])
 by smtp-border-fw-out-33001.sea14.amazon.com with ESMTP;
 31 Jul 2020 09:22:02 +0000
Received: from EX13MTAUEA002.ant.amazon.com
 (pdx4-ws-svc-p6-lb7-vlan2.pdx.amazon.com [10.170.41.162])
 by email-inbound-relay-2c-4e7c8266.us-west-2.amazon.com (Postfix) with ESMTPS
 id A8351A1DDE; Fri, 31 Jul 2020 09:22:00 +0000 (UTC)
Received: from EX13D32EUC003.ant.amazon.com (10.43.164.24) by
 EX13MTAUEA002.ant.amazon.com (10.43.61.77) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Fri, 31 Jul 2020 09:22:00 +0000
Received: from EX13D32EUC003.ant.amazon.com (10.43.164.24) by
 EX13D32EUC003.ant.amazon.com (10.43.164.24) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Fri, 31 Jul 2020 09:21:59 +0000
Received: from EX13D32EUC003.ant.amazon.com ([10.43.164.24]) by
 EX13D32EUC003.ant.amazon.com ([10.43.164.24]) with mapi id 15.00.1497.006;
 Fri, 31 Jul 2020 09:21:59 +0000
From: "Durrant, Paul" <pdurrant@amazon.co.uk>
To: Paul Durrant <paul@xen.org>, "xen-devel@lists.xenproject.org"
 <xen-devel@lists.xenproject.org>
Thread-Index: AQEfdnyAAr67l4isBdEg8WCGHUeFkAH+vB2Yqn9TdzA=
Date: Fri, 31 Jul 2020 09:21:58 +0000
Message-ID: <ccd1e0d3ba334dc5b1ba37734f581b3b@EX13D32EUC003.ant.amazon.com>
References: <20200730142926.6051-1-paul@xen.org>
 <20200730142926.6051-9-paul@xen.org>
In-Reply-To: <20200730142926.6051-9-paul@xen.org>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-ms-exchange-transport-fromentityheader: Hosted
x-originating-ip: [10.43.164.90]
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Precedence: Bulk
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin Tian <kevin.tian@intel.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien
 Grall <julien@xen.org>, Jun
 Nakajima <jun.nakajima@intel.com>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 =?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> -----Original Message-----
> From: Paul Durrant <paul@xen.org>
> Sent: 30 July 2020 15:29
> To: xen-devel@lists.xenproject.org
> Cc: Durrant, Paul <pdurrant@amazon.co.uk>; Jan Beulich <jbeulich@suse.com>; Andrew Cooper
> <andrew.cooper3@citrix.com>; Wei Liu <wl@xen.org>; Roger Pau Monné <roger.pau@citrix.com>; George
> Dunlap <george.dunlap@citrix.com>; Ian Jackson <ian.jackson@eu.citrix.com>; Julien Grall
> <julien@xen.org>; Stefano Stabellini <sstabellini@kernel.org>; Jun Nakajima <jun.nakajima@intel.com>;
> Kevin Tian <kevin.tian@intel.com>
> Subject: [EXTERNAL] [PATCH v2 08/10] remove remaining uses of iommu_legacy_map/unmap
>
> CAUTION: This email originated from outside of the organization. Do not click links or open
> attachments unless you can confirm the sender and know the content is safe.
>
>
>
> From: Paul Durrant <pdurrant@amazon.com>
>
> The 'legacy' functions do implicit flushing so amend the callers to do the
> appropriate flushing.
>
> Unfortunately, because of the structure of the P2M code, we cannot remove
> the per-CPU 'iommu_dont_flush_iotlb' global and the optimization it
> facilitates. It is now checked directly iommu_iotlb_flush(). Also, it is
> now declared as bool (rather than bool_t) and setting/clearing it are no
> longer pointlessly gated on is_iommu_enabled() returning true. (Arguably
> it is also pointless to gate the call to iommu_iotlb_flush() on that
> condition - since it is a no-op in that case - but the if clause allows
> the scope of a stack variable to be restricted).
>
> NOTE: The code in memory_add() now fails if the number of pages passed to
>       a single call overflows an unsigned int. I don't believe this will
>       ever happen in practice.
>
> Signed-off-by: Paul Durrant <pdurrant@amazon.com>

I realise now that I completely forgot to address Jan's comments on grant table locking and flush batching, so there will be a v3 of at least this patch.

  Paul


From xen-devel-bounces@lists.xenproject.org Fri Jul 31 09:53:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 09:53:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1RjW-0004Jt-4T; Fri, 31 Jul 2020 09:53:46 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=S17i=BK=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1k1RjV-0004Jo-CD
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 09:53:45 +0000
X-Inumbo-ID: b78c4800-d313-11ea-ab96-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b78c4800-d313-11ea-ab96-12813bfff9fa;
 Fri, 31 Jul 2020 09:53:43 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 373A4AB9F;
 Fri, 31 Jul 2020 09:53:55 +0000 (UTC)
Subject: Re: [PATCH] x86/vhpet: Fix type size in timer_int_route_valid
To: Eslam Elnikety <elnikety@amazon.com>
References: <20200728083357.77999-1-elnikety@amazon.com>
 <a55fba45-a008-059e-ea8c-b7300e2e8b7d@citrix.com>
 <278f0f31-619b-a392-6627-e75e65d0d14f@suse.com>
 <076df48e-0010-bb8d-891f-dc89aa4b9439@amazon.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <cd9b283b-5c10-d186-93ef-8d8c07302e26@suse.com>
Date: Fri, 31 Jul 2020 11:53:41 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <076df48e-0010-bb8d-891f-dc89aa4b9439@amazon.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Paul Durrant <pdurrant@amazon.co.uk>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 31.07.2020 10:38, Eslam Elnikety wrote:
> On 28.07.20 19:51, Jan Beulich wrote:
>> On 28.07.2020 11:26, Andrew Cooper wrote:
>>> --- a/xen/include/asm-x86/hvm/vpt.h
>>> +++ b/xen/include/asm-x86/hvm/vpt.h
>>> @@ -73,7 +73,13 @@ struct hpet_registers {
>>>       uint64_t isr;               /* interrupt status reg */
>>>       uint64_t mc64;              /* main counter */
>>>       struct {                    /* timers */
>>> -        uint64_t config;        /* configuration/cap */
>>> +        union {
>>> +            uint64_t config;    /* configuration/cap */
>>> +            struct {
>>> +                uint32_t _;
>>> +                uint32_t route;
>>> +            };
>>> +        };
>>
>> So long as there are no static initializers for this construct
>> that would then suffer the old-gcc problem, this is of course a
>> fine arrangement to make.
> 
> I have to admit that I have no clue what the "old-gcc" problem is. I am 
> curious, and I would appreciate pointers to figure out if/how to 
> resolve. Is that an old, existing problem? Or a problem that was present 
> in older versions of gcc?

Well, as already said - the problem is with old gcc not dealing
well with initializers of structs/unions with unnamed fields.

> If the latter, is that a gcc version that we still care about?

Until someone makes a (justified) proposal what the new minimum
version(s) ought to be, I'm afraid we still have to care. This
topic came up very recently in another context, and I've proposed
to put it on the agenda of the next community call.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Jul 31 10:13:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 10:13:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1S21-00065V-Q8; Fri, 31 Jul 2020 10:12:53 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=PAbA=BK=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1k1S20-00065Q-MK
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 10:12:52 +0000
X-Inumbo-ID: 64463996-d316-11ea-ab98-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 64463996-d316-11ea-ab98-12813bfff9fa;
 Fri, 31 Jul 2020 10:12:51 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=2zWe36xSudEC1Qe3mBC6X1dlmXW/MjuDSaccc+cISKY=; b=CSo8bO6jTKzu0goHo+3zVZ+IvU
 pfOx++B5s2NBTn0+Xs2VInSYIcZxldLqo0C9PfHFJ1xvh/2+Cc5hDDoA33M4/z/RpOFqdvQsw72N6
 LHKxAmvyuTxFlZk1EZHhhsfOUCZVqvUriKfl5pP2xclx8rBteufgd4ljCQIefLJ6MBqY=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1k1S1w-0006Lx-BJ; Fri, 31 Jul 2020 10:12:48 +0000
Received: from 54-240-197-230.amazon.com ([54.240.197.230]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1k1S1w-0007A3-0X; Fri, 31 Jul 2020 10:12:48 +0000
Subject: Re: [PATCH v2] xen/arm: Convert runstate address during hypcall
To: Jan Beulich <jbeulich@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <4647a019c7b42d40d3c2f5b0a3685954bea7f982.1595948219.git.bertrand.marquis@arm.com>
 <8d2d7f03-450c-d50c-630b-8608c6d42bb9@suse.com>
 <FCAB700B-4617-4323-BE1E-B80DDA1806C1@arm.com>
 <1b046f2c-05c8-9276-a91e-fd55ec098bed@suse.com>
 <alpine.DEB.2.21.2007291356060.1767@sstabellini-ThinkPad-T480s>
 <1a8bbcc7-9d0c-9669-db7b-e837af279027@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <73c8ade5-36a3-cc13-80b6-bda89e175cbb@xen.org>
Date: Fri, 31 Jul 2020 11:12:45 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:68.0)
 Gecko/20100101 Thunderbird/68.11.0
MIME-Version: 1.0
In-Reply-To: <1a8bbcc7-9d0c-9669-db7b-e837af279027@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Xen-devel <xen-devel@lists.xenproject.org>, nd <nd@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi Jan,

On 31/07/2020 07:39, Jan Beulich wrote:
> We're fixing other issues without breaking the ABI. Where's the
> problem of backporting the kernel side change (which I anticipate
> to not be overly involved)?
This means you can't take advantage of the runstate on existing Linux
without any modification.

> If the plan remains to be to make an ABI breaking change,

From a theoretical PoV, this is an ABI breakage. However, I fail to see
how the restrictions added would affect OSes, at least on Arm.

In particular, you can't change the VA -> PA mapping on Arm without going
through an invalid mapping. So I wouldn't expect this to happen for the
runstate.

The only part that *may* be an issue is if the guest registers the
runstate with an initially invalid VA, although I have yet to see that
in practice. Maybe you know?

>  then I
> think this will need an explicit vote.

I was under the impression that the two Arm maintainers (Stefano and I) 
already agreed with the approach here. Therefore, given the ABI breakage 
is only affecting Arm, why would we need a vote?

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Jul 31 10:18:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 10:18:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1S7h-0006IB-GX; Fri, 31 Jul 2020 10:18:45 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=S17i=BK=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1k1S7f-0006I6-QE
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 10:18:43 +0000
X-Inumbo-ID: 357022b6-d317-11ea-8e22-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 357022b6-d317-11ea-8e22-bc764e2007e4;
 Fri, 31 Jul 2020 10:18:42 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id D7D68AF95;
 Fri, 31 Jul 2020 10:18:54 +0000 (UTC)
Subject: Re: [PATCH v2] xen/arm: Convert runstate address during hypcall
To: Julien Grall <julien@xen.org>
References: <4647a019c7b42d40d3c2f5b0a3685954bea7f982.1595948219.git.bertrand.marquis@arm.com>
 <8d2d7f03-450c-d50c-630b-8608c6d42bb9@suse.com>
 <FCAB700B-4617-4323-BE1E-B80DDA1806C1@arm.com>
 <1b046f2c-05c8-9276-a91e-fd55ec098bed@suse.com>
 <alpine.DEB.2.21.2007291356060.1767@sstabellini-ThinkPad-T480s>
 <1a8bbcc7-9d0c-9669-db7b-e837af279027@suse.com>
 <73c8ade5-36a3-cc13-80b6-bda89e175cbb@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <6066b507-f956-8e7a-89f3-b21428b66d65@suse.com>
Date: Fri, 31 Jul 2020 12:18:40 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <73c8ade5-36a3-cc13-80b6-bda89e175cbb@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Xen-devel <xen-devel@lists.xenproject.org>, nd <nd@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 31.07.2020 12:12, Julien Grall wrote:
> On 31/07/2020 07:39, Jan Beulich wrote:
>> We're fixing other issues without breaking the ABI. Where's the
>> problem of backporting the kernel side change (which I anticipate
>> to not be overly involved)?
> This means you can't take advantage of the runstate on existing Linux 
> without any modification.
> 
>> If the plan remains to be to make an ABI breaking change,
> 
> From a theoretical PoV, this is an ABI breakage. However, I fail to see 
> how the added restrictions would affect OSes, at least on Arm.

"OSes" covering what? Just Linux?

> In particular, you can't change a VA -> PA mapping on Arm without going 
> through an invalid mapping. So I wouldn't expect this to happen for the 
> runstate.
> 
> The only part that *may* be an issue is if the guest is registering the 
> runstate with an initially invalid VA. Although, I have yet to see that 
> in practice. Maybe you know?

I'm unaware of any such use, but this means close to nothing.

>>  then I
>> think this will need an explicit vote.
> 
> I was under the impression that the two Arm maintainers (Stefano and I) 
> already agreed with the approach here. Therefore, given the ABI breakage 
> is only affecting Arm, why would we need a vote?

The problem here is of conceptual nature: You're planning to
make the behavior of a common hypercall diverge between
architectures, and in a retroactive fashion. Imo that's nothing
we should do even for new hypercalls, if _at all_ avoidable. If
we allow this here, we'll have a precedent that people later
may (and based on my experience will, sooner or later) reference,
to get their own change justified.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Jul 31 10:40:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 10:40:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1SS6-00080V-BA; Fri, 31 Jul 2020 10:39:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=RrDm=BK=citrix.com=ian.jackson@srs-us1.protection.inumbo.net>)
 id 1k1SS5-00080Q-0J
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 10:39:49 +0000
X-Inumbo-ID: 274a646e-d31a-11ea-8e23-bc764e2007e4
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 274a646e-d31a-11ea-8e23-bc764e2007e4;
 Fri, 31 Jul 2020 10:39:47 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1596191987;
 h=from:mime-version:content-transfer-encoding:message-id:
 date:to:cc:subject:in-reply-to:references;
 bh=zM6KH5wyqkh3s6YsZsewwZhKYOgVuv5Vsro5/IwWIe0=;
 b=PPJT1erLB3YiJVwbitKsA7xuojH7f0oj+AiAQKfZ1havkL2FW2pyi1dr
 1OLPc0kZWemQT7wB+H5EeVhcP6LxM6OMRK1p6icxZ9TXlQxYRLliUM7AK
 f7Ee2DkGRnoW5/eJCj142s3A2EdaPodBnP/xRzlk2tz+1D0rS62bxx3CA o=;
Authentication-Results: esa4.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: be2iGoBLgkqyAJLEhvLkfCPrVhm3b9ahX3h9weEFiIU2Rrrdn3oHsYJ5lwDk841nOpvNF+aSoV
 aED+Da3PknT51FkzpqQ1y36F0GP/Ylx+QR1SiRKFFcUf0ECT6rcLS+uGuz1Y5S1zIfcKzjdCbH
 BJJx+/CxYNnaz7nOzjvAuDm+H1ngQI2OKz+SNzEbawMSWUjey9VvR1okyyZslCV96XmguCAIRV
 T5u7GR2LIp+yfgAMKNY4xrqNv54vbs4Sjv/NP7yAQtFr3WZjSltvenSMFMOwiN5kXh/uOpWEm8
 F3k=
X-SBRS: 3.7
X-MesageID: 24493179
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,418,1589256000"; d="scan'208";a="24493179"
From: Ian Jackson <ian.jackson@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Message-ID: <24355.62702.194666.338534@mariner.uk.xensource.com>
Date: Fri, 31 Jul 2020 11:39:42 +0100
To: George Dunlap <George.Dunlap@citrix.com>
Subject: Re: [OSSTEST PATCH 14/14] duration_estimator: Move duration query
 loop into database
In-Reply-To: <7A4B6786-4456-44E4-A85D-9CC83B522FBB@citrix.com>
References: <20200721184205.15232-1-ian.jackson@eu.citrix.com>
 <20200721184205.15232-15-ian.jackson@eu.citrix.com>
 <7A4B6786-4456-44E4-A85D-9CC83B522FBB@citrix.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

George Dunlap writes ("Re: [OSSTEST PATCH 14/14] duration_estimator: Move duration query loop into database"):
> > On Jul 21, 2020, at 7:42 PM, Ian Jackson <ian.jackson@eu.citrix.com> wrote:
...
> > Example queries before (from the debugging output):
> > 
> > Query A part I:
> > 
> >            SELECT f.flight AS flight,
> >                   j.job AS job,
> >                   f.started AS started,
> >                   j.status AS status
> >                     FROM flights f
> >                     JOIN jobs j USING (flight)
> >                     JOIN runvars r
> >                             ON  f.flight=r.flight
> >                            AND  r.name=?
> >                    WHERE  j.job=r.job
> 
> Did these last two get mixed up?  My limited experience w/ JOIN ON
> and WHERE would lead me to expect we’re joining on
> `f.flight=r.flight and r.job = j.job`, and having `r.name = ?` as
> part of the WHERE clause.  I see it’s the same in the combined query
> as well.

Well spotted.  However, actually, this makes no difference: with an
inner join, ON clauses are the same as WHERE clauses.  It does seem
stylistically poor though, so I will add a commit to change it.
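
The equivalence described here can be checked with a quick sketch (a
hypothetical mini-schema, not osstest's real tables, with SQLite standing
in for Postgres): for an inner join, moving a restriction between ON and
WHERE leaves the result set unchanged.

```python
# Toy demonstration that, for an INNER JOIN, a restriction placed in the
# ON clause yields the same rows as the same restriction placed in WHERE.
# Schema and data are invented for illustration only.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE flights (flight INTEGER, started INTEGER);
    CREATE TABLE jobs    (flight INTEGER, job TEXT, status TEXT);
    CREATE TABLE runvars (flight INTEGER, job TEXT, name TEXT);
    INSERT INTO flights VALUES (1, 100), (2, 200);
    INSERT INTO jobs    VALUES (1, 'build', 'pass'), (2, 'test', 'fail');
    INSERT INTO runvars VALUES (1, 'build', 'arch'), (2, 'test', 'arch'),
                               (2, 'test', 'host');
""")

# Restriction on r.name in ON, join condition in WHERE (as in the patch).
q_on = """SELECT f.flight, j.job FROM flights f
          JOIN jobs j USING (flight)
          JOIN runvars r ON f.flight = r.flight AND r.name = ?
          WHERE j.job = r.job"""
# The "stylistically better" form: join conditions in ON, filter in WHERE.
q_where = """SELECT f.flight, j.job FROM flights f
             JOIN jobs j USING (flight)
             JOIN runvars r ON f.flight = r.flight AND r.job = j.job
             WHERE r.name = ?"""

rows_on = sorted(con.execute(q_on, ("arch",)).fetchall())
rows_where = sorted(con.execute(q_where, ("arch",)).fetchall())
assert rows_on == rows_where == [(1, 'build'), (2, 'test')]
```

Both formulations return the same rows; only the reader's expectation of
where each restriction lives differs. (For outer joins the two are *not*
equivalent, which is why the stylistic convention matters.)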

> > Query common part II:
> > 
> >        WITH tsteps AS
> >        (
> >            SELECT *
> >              FROM steps
> >             WHERE flight=? AND job=?
> >        )
> >        , tsteps2 AS
> >        (
> >            SELECT *
> >              FROM tsteps
> >             WHERE finished <=
> >                     (SELECT finished
> >                        FROM tsteps
> >                       WHERE tsteps.testid = ?)
> >        )
> >        SELECT (
> >            SELECT max(finished)-min(started)
> >              FROM tsteps2
> >          ) - (
> >            SELECT sum(finished-started)
> >              FROM tsteps2
> >             WHERE step = 'ts-hosts-allocate'
> >          )
> >                AS duration
> 
> Er, wait — you were doing a separate `duration` query for each row of the previous query?  Yeah, that sounds like it could be a lot of round trips. :-)

I was doing, yes.  This code was not really very optimised.
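
The round-trip cost under discussion is the classic N+1 query pattern. A
sketch with a hypothetical steps table (not osstest's real schema) shows
the per-row loop and a single set-based query computing the same durations:

```python
# Illustration of the refactoring: replace one "duration" query per
# candidate row (N+1 round trips) with a single query computing every
# duration at once. Invented schema for illustration only.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE steps (flight INTEGER, job TEXT,
                        started INTEGER, finished INTEGER);
    INSERT INTO steps VALUES (1, 'test', 0, 10), (1, 'test', 10, 30),
                             (2, 'test', 5, 25);
""")

# Before: one round trip per (flight, job) pair.
pairs = [(1, 'test'), (2, 'test')]
slow = [con.execute(
            "SELECT max(finished) - min(started) FROM steps"
            " WHERE flight = ? AND job = ?", p).fetchone()[0]
        for p in pairs]

# After: one query, grouped, computing all durations in the database.
fast = dict(con.execute(
    "SELECT flight, max(finished) - min(started) FROM steps"
    " GROUP BY flight, job"))

assert slow == [30, 20]
assert fast == {1: 30, 2: 20}
```

Same answers either way; the difference is N client/server round trips
versus one.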

> I mean, in both queries (A and B), the transform should basically result in the same thing happening, as far as I can tell.

Good, thanks.

> I can try to analyze the duration query and see if I can come up with any suggestions, but that would be a different patch anyway.

It's fast enough now :-).

Thanks,
Ian.


From xen-devel-bounces@lists.xenproject.org Fri Jul 31 10:40:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 10:40:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1SSq-0000GF-Kw; Fri, 31 Jul 2020 10:40:36 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=XNXZ=BK=redhat.com=david@srs-us1.protection.inumbo.net>)
 id 1k1SSp-0000G5-OH
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 10:40:36 +0000
X-Inumbo-ID: 435e7f83-d31a-11ea-ab99-12813bfff9fa
Received: from us-smtp-delivery-1.mimecast.com (unknown [207.211.31.81])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 435e7f83-d31a-11ea-ab99-12813bfff9fa;
 Fri, 31 Jul 2020 10:40:34 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
 s=mimecast20190719; t=1596192034;
 h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
 content-transfer-encoding:content-transfer-encoding:
 in-reply-to:in-reply-to:references:references:autocrypt:autocrypt;
 bh=BL1oMIwxlQ+A/Gl0cpYp9Jl1YEPXaJY/V5WmDVexMn8=;
 b=GIEypfXW5JTktWxzZ/4KGAw1pw92VpkZovGnDlr7glAV7OAs8cmpY7bFO6D18KbCqPwkMV
 j+GI0+HLxbAOx5dUUrrZuDKu1lFtUTptdxan8NRl8BCZRMk1N6nWGwaj3xULErLNpRH5di
 IT1Rh2vPJah065ezDTtDUpR+5Yn4z5M=
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-137-dsjHfKRiNW-Oaap_X3aKdQ-1; Fri, 31 Jul 2020 06:40:29 -0400
X-MC-Unique: dsjHfKRiNW-Oaap_X3aKdQ-1
Received: from smtp.corp.redhat.com (int-mx02.intmail.prod.int.phx2.redhat.com
 [10.5.11.12])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 52150E918;
 Fri, 31 Jul 2020 10:40:28 +0000 (UTC)
Received: from [10.36.113.22] (ovpn-113-22.ams2.redhat.com [10.36.113.22])
 by smtp.corp.redhat.com (Postfix) with ESMTP id A8E0360BE2;
 Fri, 31 Jul 2020 10:40:20 +0000 (UTC)
Subject: Re: [PATCH RFCv1 3/5] virtio-mem: try to merge "System RAM
 (virtio_mem)" resources
To: linux-kernel@vger.kernel.org
References: <20200731091838.7490-1-david@redhat.com>
 <20200731091838.7490-4-david@redhat.com>
From: David Hildenbrand <david@redhat.com>
Autocrypt: addr=david@redhat.com; prefer-encrypt=mutual; keydata=
 mQINBFXLn5EBEAC+zYvAFJxCBY9Tr1xZgcESmxVNI/0ffzE/ZQOiHJl6mGkmA1R7/uUpiCjJ
 dBrn+lhhOYjjNefFQou6478faXE6o2AhmebqT4KiQoUQFV4R7y1KMEKoSyy8hQaK1umALTdL
 QZLQMzNE74ap+GDK0wnacPQFpcG1AE9RMq3aeErY5tujekBS32jfC/7AnH7I0v1v1TbbK3Gp
 XNeiN4QroO+5qaSr0ID2sz5jtBLRb15RMre27E1ImpaIv2Jw8NJgW0k/D1RyKCwaTsgRdwuK
 Kx/Y91XuSBdz0uOyU/S8kM1+ag0wvsGlpBVxRR/xw/E8M7TEwuCZQArqqTCmkG6HGcXFT0V9
 PXFNNgV5jXMQRwU0O/ztJIQqsE5LsUomE//bLwzj9IVsaQpKDqW6TAPjcdBDPLHvriq7kGjt
 WhVhdl0qEYB8lkBEU7V2Yb+SYhmhpDrti9Fq1EsmhiHSkxJcGREoMK/63r9WLZYI3+4W2rAc
 UucZa4OT27U5ZISjNg3Ev0rxU5UH2/pT4wJCfxwocmqaRr6UYmrtZmND89X0KigoFD/XSeVv
 jwBRNjPAubK9/k5NoRrYqztM9W6sJqrH8+UWZ1Idd/DdmogJh0gNC0+N42Za9yBRURfIdKSb
 B3JfpUqcWwE7vUaYrHG1nw54pLUoPG6sAA7Mehl3nd4pZUALHwARAQABtCREYXZpZCBIaWxk
 ZW5icmFuZCA8ZGF2aWRAcmVkaGF0LmNvbT6JAlgEEwEIAEICGwMGCwkIBwMCBhUIAgkKCwQW
 AgMBAh4BAheAAhkBFiEEG9nKrXNcTDpGDfzKTd4Q9wD/g1oFAl8Ox4kFCRKpKXgACgkQTd4Q
 9wD/g1oHcA//a6Tj7SBNjFNM1iNhWUo1lxAja0lpSodSnB2g4FCZ4R61SBR4l/psBL73xktp
 rDHrx4aSpwkRP6Epu6mLvhlfjmkRG4OynJ5HG1gfv7RJJfnUdUM1z5kdS8JBrOhMJS2c/gPf
 wv1TGRq2XdMPnfY2o0CxRqpcLkx4vBODvJGl2mQyJF/gPepdDfcT8/PY9BJ7FL6Hrq1gnAo4
 3Iv9qV0JiT2wmZciNyYQhmA1V6dyTRiQ4YAc31zOo2IM+xisPzeSHgw3ONY/XhYvfZ9r7W1l
 pNQdc2G+o4Di9NPFHQQhDw3YTRR1opJaTlRDzxYxzU6ZnUUBghxt9cwUWTpfCktkMZiPSDGd
 KgQBjnweV2jw9UOTxjb4LXqDjmSNkjDdQUOU69jGMUXgihvo4zhYcMX8F5gWdRtMR7DzW/YE
 BgVcyxNkMIXoY1aYj6npHYiNQesQlqjU6azjbH70/SXKM5tNRplgW8TNprMDuntdvV9wNkFs
 9TyM02V5aWxFfI42+aivc4KEw69SE9KXwC7FSf5wXzuTot97N9Phj/Z3+jx443jo2NR34XgF
 89cct7wJMjOF7bBefo0fPPZQuIma0Zym71cP61OP/i11ahNye6HGKfxGCOcs5wW9kRQEk8P9
 M/k2wt3mt/fCQnuP/mWutNPt95w9wSsUyATLmtNrwccz63W5Ag0EVcufkQEQAOfX3n0g0fZz
 Bgm/S2zF/kxQKCEKP8ID+Vz8sy2GpDvveBq4H2Y34XWsT1zLJdvqPI4af4ZSMxuerWjXbVWb
 T6d4odQIG0fKx4F8NccDqbgHeZRNajXeeJ3R7gAzvWvQNLz4piHrO/B4tf8svmRBL0ZB5P5A
 2uhdwLU3NZuK22zpNn4is87BPWF8HhY0L5fafgDMOqnf4guJVJPYNPhUFzXUbPqOKOkL8ojk
 CXxkOFHAbjstSK5Ca3fKquY3rdX3DNo+EL7FvAiw1mUtS+5GeYE+RMnDCsVFm/C7kY8c2d0G
 NWkB9pJM5+mnIoFNxy7YBcldYATVeOHoY4LyaUWNnAvFYWp08dHWfZo9WCiJMuTfgtH9tc75
 7QanMVdPt6fDK8UUXIBLQ2TWr/sQKE9xtFuEmoQGlE1l6bGaDnnMLcYu+Asp3kDT0w4zYGsx
 5r6XQVRH4+5N6eHZiaeYtFOujp5n+pjBaQK7wUUjDilPQ5QMzIuCL4YjVoylWiBNknvQWBXS
 lQCWmavOT9sttGQXdPCC5ynI+1ymZC1ORZKANLnRAb0NH/UCzcsstw2TAkFnMEbo9Zu9w7Kv
 AxBQXWeXhJI9XQssfrf4Gusdqx8nPEpfOqCtbbwJMATbHyqLt7/oz/5deGuwxgb65pWIzufa
 N7eop7uh+6bezi+rugUI+w6DABEBAAGJAjwEGAEIACYCGwwWIQQb2cqtc1xMOkYN/MpN3hD3
 AP+DWgUCXw7HsgUJEqkpoQAKCRBN3hD3AP+DWrrpD/4qS3dyVRxDcDHIlmguXjC1Q5tZTwNB
 boaBTPHSy/Nksu0eY7x6HfQJ3xajVH32Ms6t1trDQmPx2iP5+7iDsb7OKAb5eOS8h+BEBDeq
 3ecsQDv0fFJOA9ag5O3LLNk+3x3q7e0uo06XMaY7UHS341ozXUUI7wC7iKfoUTv03iO9El5f
 XpNMx/YrIMduZ2+nd9Di7o5+KIwlb2mAB9sTNHdMrXesX8eBL6T9b+MZJk+mZuPxKNVfEQMQ
 a5SxUEADIPQTPNvBewdeI80yeOCrN+Zzwy/Mrx9EPeu59Y5vSJOx/z6OUImD/GhX7Xvkt3kq
 Er5KTrJz3++B6SH9pum9PuoE/k+nntJkNMmQpR4MCBaV/J9gIOPGodDKnjdng+mXliF3Ptu6
 3oxc2RCyGzTlxyMwuc2U5Q7KtUNTdDe8T0uE+9b8BLMVQDDfJjqY0VVqSUwImzTDLX9S4g/8
 kC4HRcclk8hpyhY2jKGluZO0awwTIMgVEzmTyBphDg/Gx7dZU1Xf8HFuE+UZ5UDHDTnwgv7E
 th6RC9+WrhDNspZ9fJjKWRbveQgUFCpe1sa77LAw+XFrKmBHXp9ZVIe90RMe2tRL06BGiRZr
 jPrnvUsUUsjRoRNJjKKA/REq+sAnhkNPPZ/NNMjaZ5b8Tovi8C0tmxiCHaQYqj7G2rgnT0kt
 WNyWQQ==
Organization: Red Hat GmbH
Message-ID: <f79c78d7-3231-d33d-8814-4c5b8c966c50@redhat.com>
Date: Fri, 31 Jul 2020 12:40:19 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20200731091838.7490-4-david@redhat.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.12
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: linux-hyperv@vger.kernel.org, Michal Hocko <mhocko@suse.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 virtualization@lists.linux-foundation.org, linux-mm@kvack.org,
 xen-devel@lists.xenproject.org, Andrew Morton <akpm@linux-foundation.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 31.07.20 11:18, David Hildenbrand wrote:

Grml, forgot to add cc: list for this patch, ccing the right people.

> virtio-mem adds memory in memory block granularity, to be able to
> remove it in the same granularity again later, and to grow slowly on
> demand. This, however, results in quite a lot of resources when
> adding a lot of memory. Resources are effectively stored in a list-based
> tree. Having a lot of resources not only wastes memory, it also makes
> traversing that tree more expensive, and makes /proc/iomem explode in
> size (e.g., requiring kexec-tools to manually merge resources later
> when e.g., trying to create a kdump header).
> 
> Before this patch, we get (/proc/iomem) when hotplugging 2G via virtio-mem
> on x86-64:
>         [...]
>         100000000-13fffffff : System RAM
>         140000000-33fffffff : virtio0
>           140000000-147ffffff : System RAM (virtio_mem)
>           148000000-14fffffff : System RAM (virtio_mem)
>           150000000-157ffffff : System RAM (virtio_mem)
>           158000000-15fffffff : System RAM (virtio_mem)
>           160000000-167ffffff : System RAM (virtio_mem)
>           168000000-16fffffff : System RAM (virtio_mem)
>           170000000-177ffffff : System RAM (virtio_mem)
>           178000000-17fffffff : System RAM (virtio_mem)
>           180000000-187ffffff : System RAM (virtio_mem)
>           188000000-18fffffff : System RAM (virtio_mem)
>           190000000-197ffffff : System RAM (virtio_mem)
>           198000000-19fffffff : System RAM (virtio_mem)
>           1a0000000-1a7ffffff : System RAM (virtio_mem)
>           1a8000000-1afffffff : System RAM (virtio_mem)
>           1b0000000-1b7ffffff : System RAM (virtio_mem)
>           1b8000000-1bfffffff : System RAM (virtio_mem)
>         3280000000-32ffffffff : PCI Bus 0000:00
> 
> With this patch, we get (/proc/iomem):
>         [...]
>         fffc0000-ffffffff : Reserved
>         100000000-13fffffff : System RAM
>         140000000-33fffffff : virtio0
>           140000000-1bfffffff : System RAM (virtio_mem)
>         3280000000-32ffffffff : PCI Bus 0000:00
> 
> Of course, with more hotplugged memory, it gets worse. When unplugging
> memory blocks again, try_remove_memory() (via
> offline_and_remove_memory()) will properly split the resource up again.
> 
> Signed-off-by: David Hildenbrand <david@redhat.com>
> ---
>  drivers/virtio/virtio_mem.c | 14 ++++++++++++--
>  1 file changed, 12 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/virtio/virtio_mem.c b/drivers/virtio/virtio_mem.c
> index f26f5f64ae822..2396a8d67875e 100644
> --- a/drivers/virtio/virtio_mem.c
> +++ b/drivers/virtio/virtio_mem.c
> @@ -415,6 +415,7 @@ static int virtio_mem_mb_add(struct virtio_mem *vm, unsigned long mb_id)
>  {
>  	const uint64_t addr = virtio_mem_mb_id_to_phys(mb_id);
>  	int nid = vm->nid;
> +	int rc;
>  
>  	if (nid == NUMA_NO_NODE)
>  		nid = memory_add_physaddr_to_nid(addr);
> @@ -431,8 +432,17 @@ static int virtio_mem_mb_add(struct virtio_mem *vm, unsigned long mb_id)
>  	}
>  
>  	dev_dbg(&vm->vdev->dev, "adding memory block: %lu\n", mb_id);
> -	return add_memory_driver_managed(nid, addr, memory_block_size_bytes(),
> -					 vm->resource_name);
> +	rc = add_memory_driver_managed(nid, addr, memory_block_size_bytes(),
> +				       vm->resource_name);
> +	if (!rc) {
> +		/*
> +		 * Try to reduce the number of resources by merging them. The
> +		 * memory removal path will properly split them up again.
> +		 */
> +		merge_child_mem_resources(vm->parent_resource,
> +					  vm->resource_name);
> +	}
> +	return rc;
>  }
>  
>  /*
> 
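
As a rough userspace model of what the merging in the patch above achieves
(a toy only; the kernel actually walks struct resource trees, and the
helper name here is invented), adjacent same-named ranges collapse into one:

```python
# Toy model of merging adjacent child resources, as in the /proc/iomem
# output in the commit message. Ends are inclusive, as in /proc/iomem.
def merge_adjacent(ranges):
    """Merge (start, end, name) tuples where one range ends exactly
    where the next same-named range begins."""
    out = []
    for start, end, name in sorted(ranges):
        if out and out[-1][2] == name and out[-1][1] + 1 == start:
            out[-1] = (out[-1][0], end, name)  # extend previous range
        else:
            out.append((start, end, name))
    return out

blocks = [(0x140000000, 0x147ffffff, "System RAM (virtio_mem)"),
          (0x148000000, 0x14fffffff, "System RAM (virtio_mem)"),
          (0x150000000, 0x157ffffff, "System RAM (virtio_mem)")]
assert merge_adjacent(blocks) == \
    [(0x140000000, 0x157ffffff, "System RAM (virtio_mem)")]
```

Ranges with a gap between them (or different names) stay separate, which
mirrors why the removal path can split the merged resource up again.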


-- 
Thanks,

David / dhildenb



From xen-devel-bounces@lists.xenproject.org Fri Jul 31 10:41:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 10:41:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1STy-0000OI-0C; Fri, 31 Jul 2020 10:41:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=RrDm=BK=citrix.com=ian.jackson@srs-us1.protection.inumbo.net>)
 id 1k1STx-0000O8-1u
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 10:41:45 +0000
X-Inumbo-ID: 6c93086e-d31a-11ea-8e24-bc764e2007e4
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6c93086e-d31a-11ea-8e24-bc764e2007e4;
 Fri, 31 Jul 2020 10:41:43 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1596192103;
 h=from:mime-version:content-transfer-encoding:message-id:
 date:to:cc:subject:in-reply-to:references;
 bh=ueYWH7BG9nThAJwWNZA6VQQmfXwYi+S8pEP2TJzWFRU=;
 b=aDycFin7hKdz7TzTCr707td4I1dhqyEU5WcFJKovu3fElR8bjfzg4E45
 RgtN7ZSsJca29Iq4ZiEMzbP7UwPgmYX+8NoUTnTSDgdJBFzQjdTR/U5mJ
 /i0DPa38g8ZoroHxxU3lK8nCnARshdSPrGCHtSheWjqUzDxaCSXFlpdmC Q=;
Authentication-Results: esa6.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: k3QZCc8dLZYDj6xPz2waUH76gXHfrEwNjXZNj+BM+dNfiYnaYLukQojYUs+Vbw4N6zn+Wh1L8K
 7bToiAWkQb77i7AqU012UPYt+mz92Fser6ZPFme82UT9dDfUvwgUdqTahpTnu3ohjZFNB19IPE
 bG1pF84YEms39/HreLcxYQtIzfPiAsTtCVhDqSWEssdj1XyXwuw72+FhEsaQsa5tedsh8W0kVq
 jlidcNedDT+PxKWI8X3riLY/TnHGWFWe3JHVnHShcj+3WbouTc+XyLX4IAYHVk3oCT7mpWp9jq
 E2c=
X-SBRS: 3.7
X-MesageID: 23939388
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,418,1589256000"; d="scan'208";a="23939388"
From: Ian Jackson <ian.jackson@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Message-ID: <24355.62818.483922.426288@mariner.uk.xensource.com>
Date: Fri, 31 Jul 2020 11:41:38 +0100
To: George Dunlap <George.Dunlap@citrix.com>
Subject: Re: [OSSTEST PATCH 06/14] sg-report-flight: Use WITH clause to use
 index for $anypassq
In-Reply-To: <E1356BFA-1FDF-42B8-A4E1-47C45F93D036@citrix.com>
References: <20200721184205.15232-1-ian.jackson@eu.citrix.com>
 <20200721184205.15232-7-ian.jackson@eu.citrix.com>
 <E1356BFA-1FDF-42B8-A4E1-47C45F93D036@citrix.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

George Dunlap writes ("Re: [OSSTEST PATCH 06/14] sg-report-flight: Use WITH clause to use index for $anypassq"):
> > On Jul 21, 2020, at 7:41 PM, Ian Jackson <ian.jackson@eu.citrix.com> wrote:
> > +    # In psql 9.6 this WITH clause makes postgresql do the steps query
> > +    # first.  This is good because if this test never passed we can
> > +    # determine that really quickly using the new index, without
> > +    # having to scan the flights table.  (If the test passed we will
> > +    # probably not have to look at many flights to find one, so in
> > +    # that case this is not much worse.)
> 
> Seems a bit weird, but OK.  The SQL looks the same, so:
> 
> Reviewed-by: George Dunlap <george.dunlap@citrix.com>

Thanks.  This business with the WITH clause as an optimisation fence
is well-known in Postgres circles, it seems.
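
For context: a non-trivial CTE acted as an optimisation fence in PostgreSQL
up to version 11; PostgreSQL 12 inlines CTEs unless MATERIALIZED is
specified. A minimal sketch of the rewrite's shape (hypothetical
mini-schema; SQLite here only demonstrates that both formulations agree,
not the fence behaviour itself):

```python
# Shape of the rewrite: lift the cheap, indexed subquery into a WITH
# clause so (on PostgreSQL <= 11) it is evaluated first. Invented schema.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE flights (flight INTEGER PRIMARY KEY);
    CREATE TABLE steps (flight INTEGER, testid TEXT, status TEXT);
    CREATE INDEX steps_testid ON steps (testid, status);
    INSERT INTO flights VALUES (1), (2), (3);
    INSERT INTO steps VALUES (1, 'ts-guest-start', 'pass'),
                             (2, 'ts-guest-start', 'fail');
""")

plain = """SELECT flight FROM flights
           WHERE flight IN (SELECT flight FROM steps
                            WHERE testid = ? AND status = 'pass')"""
fenced = """WITH passes AS (SELECT flight FROM steps
                            WHERE testid = ? AND status = 'pass')
            SELECT flight FROM flights
            WHERE flight IN (SELECT flight FROM passes)"""

arg = ("ts-guest-start",)
assert con.execute(plain, arg).fetchall() == \
       con.execute(fenced, arg).fetchall() == [(1,)]
```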

Ian.



From xen-devel-bounces@lists.xenproject.org Fri Jul 31 10:45:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 10:45:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1SXm-0000aL-HL; Fri, 31 Jul 2020 10:45:42 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GLoN=BK=citrix.com=george.dunlap@srs-us1.protection.inumbo.net>)
 id 1k1SXl-0000aG-1D
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 10:45:41 +0000
X-Inumbo-ID: f8e07cf2-d31a-11ea-ab99-12813bfff9fa
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f8e07cf2-d31a-11ea-ab99-12813bfff9fa;
 Fri, 31 Jul 2020 10:45:39 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1596192338;
 h=from:to:cc:subject:date:message-id:references:
 in-reply-to:content-id:content-transfer-encoding: mime-version;
 bh=MAlUoJmK5jHLvdE12umdPb4+vbrcyWtC53izDasoRTc=;
 b=aUFyK06MQkl4wmd7HYdd2RsBeXrSSnmcYys3xOhpVCiqQycNV33f0MJH
 KEMuP5gbczNbVU4qiT1sSdNNbJl9yi1aK+LuTQw4xHEe2fHBtqQeuGowb
 RuNaxcUKe9xHiBBbL1e4TKcawgJ7m1HhV5Tsa0UvwlawxGtnA3tntO5EL c=;
Authentication-Results: esa6.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: 8xbihE64pm+UhQNHOCjB1DgdZc7BiInihMh6xsurBN2xDsPlWjeXJsZMWtx0GUq+NxWZ5aXcAr
 9kG4Bh39hGheKFh+PvPDo5YE1wLi9w50pmnuYQ3N47MY4HNqPbYSwdMWYzYa4jv/WAQYh0xHlS
 rzaCzhmjBqv4yYO8cAwozLnT6BDP7fvxRjRPwLRQ8tVI/etC5SgOeARheiLbBRXc1A5MGpaRDs
 3BLymNRd33ZhOaZ3pKT8wOLW5XFPVxPZSOGDah2qumV7Dpbj3gy0QxRkNbz3JPAKWJp1CELqOo
 1vo=
X-SBRS: 3.7
X-MesageID: 23939540
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,418,1589256000"; d="scan'208";a="23939540"
From: George Dunlap <George.Dunlap@citrix.com>
To: Ian Jackson <Ian.Jackson@citrix.com>
Subject: Re: [OSSTEST PATCH 14/14] duration_estimator: Move duration query
 loop into database
Thread-Topic: [OSSTEST PATCH 14/14] duration_estimator: Move duration query
 loop into database
Thread-Index: AQHWX5IXwUKXL54pjkKddmBXrxrv3qkbmlIAgAXSzgCAAAGlgA==
Date: Fri, 31 Jul 2020 10:45:35 +0000
Message-ID: <729C1D34-B3A6-49D0-9F6E-88B4934242F9@citrix.com>
References: <20200721184205.15232-1-ian.jackson@eu.citrix.com>
 <20200721184205.15232-15-ian.jackson@eu.citrix.com>
 <7A4B6786-4456-44E4-A85D-9CC83B522FBB@citrix.com>
 <24355.62702.194666.338534@mariner.uk.xensource.com>
In-Reply-To: <24355.62702.194666.338534@mariner.uk.xensource.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-mailer: Apple Mail (2.3608.80.23.2.2)
x-ms-exchange-messagesentrepresentingtype: 1
x-ms-exchange-transport-fromentityheader: Hosted
Content-Type: text/plain; charset="utf-8"
Content-ID: <2719371797AE6945B24720DF5B1C3399@citrix.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> On Jul 31, 2020, at 11:39 AM, Ian Jackson <ian.jackson@citrix.com> wrote:
> 
> George Dunlap writes ("Re: [OSSTEST PATCH 14/14] duration_estimator: Move duration query loop into database"):
>>> On Jul 21, 2020, at 7:42 PM, Ian Jackson <ian.jackson@eu.citrix.com> wrote:
> ...
>>> Example queries before (from the debugging output):
>>> 
>>> Query A part I:
>>> 
>>>           SELECT f.flight AS flight,
>>>                  j.job AS job,
>>>                  f.started AS started,
>>>                  j.status AS status
>>>                    FROM flights f
>>>                    JOIN jobs j USING (flight)
>>>                    JOIN runvars r
>>>                            ON  f.flight=r.flight
>>>                           AND  r.name=?
>>>                   WHERE  j.job=r.job
>> 
>> Did these last two get mixed up?  My limited experience w/ JOIN ON
>> and WHERE would lead me to expect we’re joining on
>> `f.flight=r.flight and r.job = j.job`, and having `r.name = ?` as
>> part of the WHERE clause.  I see it’s the same in the combined query
>> as well.
> 
> Well spotted.  However, actually, this makes no difference: with an
> inner join, ON clauses are the same as WHERE clauses.  It does seem
> stylistically poor though, so I will add a commit to change it.

Yeah, in my tiny amount of experience with SQLite, putting this sort of restriction in WHERE rather than ON didn’t seem to make a practical difference; no doubt the query planner is smart enough to DTRT.  But switching them should make it slightly easier for humans to parse, so is probably worth doing while you’re here, if you have a few spare cycles.

Thanks,
 -George


From xen-devel-bounces@lists.xenproject.org Fri Jul 31 10:46:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 10:46:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1SYt-0000gj-W8; Fri, 31 Jul 2020 10:46:51 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=S/xS=BK=xen.org=paul@srs-us1.protection.inumbo.net>)
 id 1k1SYs-0000gb-SR
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 10:46:50 +0000
X-Inumbo-ID: 232515c2-d31b-11ea-ab99-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 232515c2-d31b-11ea-ab99-12813bfff9fa;
 Fri, 31 Jul 2020 10:46:49 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:MIME-Version:
 Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:Content-ID:
 Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc
 :Resent-Message-ID:In-Reply-To:References:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=Ss8BcfguJ4eLaJkT8XAPmw+xnIY7iragrK2r4C0eMGM=; b=L8m+eq5ZuEINNtrHOcScU8c6GO
 vp+daTw0XVb7yu1/W0dsO30piz3TYaw17a7Bmy0wC0G0JXzNqNB4Yd39NiTtpjZuywMyifk/XX5M+
 0Iuupyy1bzW/k0JSI10P2i/m2TYwKMQ+QX7YSldfR6ysZsjXMYdOQpbkUX+wQikAiDpI=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1k1SYq-00074G-Pj; Fri, 31 Jul 2020 10:46:48 +0000
Received: from host86-143-223-30.range86-143.btcentralplus.com
 ([86.143.223.30] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1k1SYq-0000ZR-Bd; Fri, 31 Jul 2020 10:46:48 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [PATCH] x86/hvm: set 'ipat' in EPT for special pages
Date: Fri, 31 Jul 2020 11:46:44 +0100
Message-Id: <20200731104644.20906-1-paul@xen.org>
X-Mailer: git-send-email 2.20.1
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 Paul Durrant <pdurrant@amazon.com>, Wei Liu <wl@xen.org>,
 Jan Beulich <jbeulich@suse.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Paul Durrant <pdurrant@amazon.com>

All non-MMIO ranges (i.e. those not mapping real device MMIO regions) that
map valid MFNs are normally marked MTRR_TYPE_WRBACK and 'ipat' is set. Hence
when PV drivers running in a guest populate the BAR space of the Xen Platform
PCI Device with pages such as the Shared Info page or Grant Table pages,
accesses to these pages will be cachable.

However, should IOMMU mappings be enabled for the guest then these
accesses become uncachable. This has a substantial negative effect on I/O
throughput of PV devices. Arguably PV drivers should not be using BAR space to
host the Shared Info and Grant Table pages, but it is currently commonplace for
them to do this and so this problem needs mitigation. Hence this patch makes
sure the 'ipat' bit is set for any special page regardless of where in GFN
space it is mapped.

NOTE: Clearly this mitigation only applies to Intel EPT. It is not obvious
      that there is any similar mitigation possible for AMD NPT. Downstreams
      such as Citrix XenServer have been carrying a patch similar to this for
      several releases though.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Wei Liu <wl@xen.org>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>
---
 xen/arch/x86/hvm/mtrr.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/xen/arch/x86/hvm/mtrr.c b/xen/arch/x86/hvm/mtrr.c
index 511c3be1c8..3ad813ed15 100644
--- a/xen/arch/x86/hvm/mtrr.c
+++ b/xen/arch/x86/hvm/mtrr.c
@@ -830,7 +830,8 @@ int epte_get_entry_emt(struct domain *d, unsigned long gfn, mfn_t mfn,
         return MTRR_TYPE_UNCACHABLE;
     }
 
-    if ( !is_iommu_enabled(d) && !cache_flush_permitted(d) )
+    if ( (!is_iommu_enabled(d) && !cache_flush_permitted(d)) ||
+         is_special_page(mfn_to_page(mfn)) )
     {
         *ipat = 1;
         return MTRR_TYPE_WRBACK;
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri Jul 31 10:50:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 10:50:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1Sc8-0001VH-F8; Fri, 31 Jul 2020 10:50:12 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=oG5j=BK=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1k1Sc7-0001VB-Ny
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 10:50:11 +0000
X-Inumbo-ID: 9aca1492-d31b-11ea-8e24-bc764e2007e4
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9aca1492-d31b-11ea-8e24-bc764e2007e4;
 Fri, 31 Jul 2020 10:50:10 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1596192611;
 h=subject:to:cc:references:from:message-id:date:
 mime-version:in-reply-to:content-transfer-encoding;
 bh=tZx3PEJmOBHD6pru8XvPlLvbiHaBtjABhRbeqzn/zek=;
 b=cq4mkn7yKLXI8+n5P9oiGSnZHKGNxOKO0Sm02MgZLD0mIFxP56i8hqGb
 gpmHMRKuEd1frb309NaUnnrbjRE52rXo/GiK81A0lNNtAyBX+o1pvcgCn
 53JL0P8NRJwPP82qC49ek5/rKbw1Xdbz65DXC2sn5Jy6S4gzwy+T1KHx/ 0=;
Authentication-Results: esa2.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: U0rYzTZXmpNJz2TQ4Cf3cHdh0JXuKqzA2cK1XcU1UaqNGQtV0nJQNZGYPhi/fo38YIS7YQKztF
 9HapsaWQ32f3pXiZnux1M35iTU95aruXBLenpV+Ils+OY4yT0kCHTI/03wdxlig0h9nr4wz6Fa
 ENsU1wnhBAeqt07N93ehua0ziM5VNsXAYkX6GuXSp0gplbLbnQLFZ6h23h+0T68mmJeVKcVqES
 HS91pnISWu6JWpux8wLGoTiSDERhsJ1NMaqejD2Z0ADAvFDWtOleSBpRK8aHwEKxOcBknPXdi0
 vHA=
X-SBRS: 3.7
X-MesageID: 23632365
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,418,1589256000"; d="scan'208";a="23632365"
Subject: Re: [PATCH] x86emul: replace UB shifts
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
 <xen-devel@lists.xenproject.org>
References: <bd679766-939d-3176-c913-e993dd48ef15@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <564e3f51-335d-dcdf-900f-380886d01d6b@citrix.com>
Date: Fri, 31 Jul 2020 11:50:06 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <bd679766-939d-3176-c913-e993dd48ef15@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 31/07/2020 08:06, Jan Beulich wrote:
> Displacement values can be negative, hence we shouldn't left-shift them.
>
> While auditing shifts, I noticed a pair of missing parentheses, which
> also get added right here.
>
> Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

I'd suggest putting the UBSAN report into the commit message:

(XEN) UBSAN: Undefined behaviour in x86_emulate/x86_emulate.c:3482:55
(XEN) left shift of negative value -2

Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
Tested-by: Andrew Cooper <andrew.cooper3@citrix.com>


From xen-devel-bounces@lists.xenproject.org Fri Jul 31 10:53:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 10:53:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1SfH-0001g4-Ui; Fri, 31 Jul 2020 10:53:27 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/8Mk=BK=epam.com=prvs=64810d1384=oleksandr_andrushchenko@srs-us1.protection.inumbo.net>)
 id 1k1SfG-0001fz-0Q
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 10:53:26 +0000
X-Inumbo-ID: 0e21b648-d31c-11ea-8e24-bc764e2007e4
Received: from mx0a-0039f301.pphosted.com (unknown [148.163.133.242])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0e21b648-d31c-11ea-8e24-bc764e2007e4;
 Fri, 31 Jul 2020 10:53:24 +0000 (UTC)
Received: from pps.filterd (m0174679.ppops.net [127.0.0.1])
 by mx0a-0039f301.pphosted.com (8.16.0.42/8.16.0.42) with SMTP id
 06VAoQCH014636; Fri, 31 Jul 2020 10:53:22 GMT
Received: from eur05-vi1-obe.outbound.protection.outlook.com
 (mail-vi1eur05lp2175.outbound.protection.outlook.com [104.47.17.175])
 by mx0a-0039f301.pphosted.com with ESMTP id 32mghfr8g9-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Fri, 31 Jul 2020 10:53:21 +0000
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=avRRcM+kk3t+7Nz0AFApDfGx49v7ObILi84oYmh65Zx+x3X4gaVxWCtEDE29PJv9UPrGDW0HjcppVri6DagaK3+J/7LEKvYxEvJtr++tA6YvDODe9HniJ52KtyUgGvkb/VnX4Qk5Lr8Pd+fWjcDcrvKfU7kPtl2JVpfaF9uugSXJtqQfPq1hZtn21/SapwXJDPxcZwa4yAlylJHGMeJ7u2VVr4CjkJ/Nv1HO/fb3ppuvw2xbKryPvKKXAu5G+Rbm8OIVxtIjI1BjiDFyRZsgRIXAEhrhRIDwl9PHZglHAlg9QDazogV23XAeE5DA5D1/tIuQCy12G2lWyBi8I0bytg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; 
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=l4V0IV66q0LxwRuSg5WihLuEtmMVIozIiJEPU5RVXV0=;
 b=BGm3Nl1eGWAKRTF8dN1hnC5FzIDgZLOXkGixBdQg3ak5s6RLK0uNSAZqa8MV/bSsAHNU6GILlnB+TgCTN3i03X3OJ8htNlZSTxL56IclRC65TCn5QulNG/EXr7DKEfWT/6xcrwZ2/Ko22C96Ufeo0ubN6Gd9Qaf8IZkhgUtZKLlEZac4hAef8TQcu3AlWuxUygoyYOMyAvocV7lAq/3lRiFgJLs4EsDcoW0lxd5bomJ/u7Kt9lFOtOLkugNmaxVkhmSDFv4BZ2pEEzwMBLomLmbUnVVYYL157Ldq0ELJubSyJUPNFHY0Sz3xDUOQrpisZ9u+neoYW4sbpikn+pztFg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=epam.com; dmarc=pass action=none header.from=epam.com;
 dkim=pass header.d=epam.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=epam.com; s=selector1; 
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=l4V0IV66q0LxwRuSg5WihLuEtmMVIozIiJEPU5RVXV0=;
 b=1UDuG5YzE2qpCEUlqRqnI6YNF0znnzHc2qAyfDPnBilbBMqESWXkajzc/yvrANglpOqrrQWBgdNh7OPgeQBN12+be13GrTzfZy9bSOvT8dZnCHTmnUzxPUMf9F6hqVtF+y+4k6RZ4ZIE5F1NCG7qIY63tRAah4hKk29LcuFuduhb5l0lj7DZRJqubX5fFJyl4ABkCPy31ltMmpORNlGWgYBQkMlf7aGSVAuzttJEB4AkkecQC9mTsOUuNY8pqkMwenh/MZskgD2QFwiMk9CoxAY/Riy2TrVtRGQ3oz4eHKpuUg0au1hpt8BZWfWnDKPMBneR+KBPYiVVoYY4qHTCMw==
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com (2603:10a6:20b:153::17)
 by AM0PR03MB3714.eurprd03.prod.outlook.com (2603:10a6:208:44::25)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3239.18; Fri, 31 Jul
 2020 10:53:18 +0000
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::21e5:6d27:5ba0:f508]) by AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::21e5:6d27:5ba0:f508%9]) with mapi id 15.20.3239.020; Fri, 31 Jul 2020
 10:53:18 +0000
From: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 "ian.jackson@citrix.com" <ian.jackson@citrix.com>,
 "wl@xen.org" <wl@xen.org>
Subject: Re: [PATCH 2/2] libgnttab: Add support for Linux dma-buf offset
Thread-Topic: [PATCH 2/2] libgnttab: Add support for Linux dma-buf offset
Thread-Index: AQHWLoWvWdjq7ozlqkeaZNP0F/2zoKkh9IyA
Date: Fri, 31 Jul 2020 10:53:18 +0000
Message-ID: <ca641b0d-297d-1d1e-ecc7-0b35a2aa7c91@epam.com>
References: <20200520090425.28558-1-andr2000@gmail.com>
 <20200520090425.28558-3-andr2000@gmail.com>
In-Reply-To: <20200520090425.28558-3-andr2000@gmail.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
authentication-results: lists.xenproject.org; dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=none action=none
 header.from=epam.com;
x-originating-ip: [185.199.97.5]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 9b50ade6-1c22-48e4-c7df-08d8353fef30
x-ms-traffictypediagnostic: AM0PR03MB3714:
x-ms-exchange-transport-forked: True
x-microsoft-antispam-prvs: <AM0PR03MB3714456D406D36CD321E569EE74E0@AM0PR03MB3714.eurprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:10000;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info: 2evGiuokImme1kSJzNyytP3gjUObEGDJdwYBRRpInXZdB1K8ikpC++2kE3wyHMx/acKS5DDKUbd37qJeyj02TidEu9Orvjix6iH/rcdR0qWXRJLI21/y7YJnAS0avvcxdC+/r03I2QZnnnUFqgHTQFscSvYO64QnJGDplbPDHjb69iMWZWP6g1UcKVEotRWlwalKIQHRUiu2uDf+4X8aK5cTd49xVTpaTu0khLCE0+WMvaZCkMTmi4ha9Oo5WmezAsCSnMLW57Ty3lGLB/oeCZylHnGxGgQekEXMmeozBS3PZ0+/fro1s2ExN8DcaIEOX1U4U+6lMHgKZo3GTIQSMHJXNwjbp3L1isYHlLky3hMpiXKujswwLx2sykntA91b3tzg26B7ksVmlO4YrkyjfOYuFFGwA3wmAhtjgc4E93oJHgsnXW5f8h636Mvfo3uGm3pmUEX4B0I6JTatA3gZYg==
x-forefront-antispam-report: CIP:255.255.255.255; CTRY:; LANG:en; SCL:1; SRV:;
 IPV:NLI; SFV:NSPM; H:AM0PR03MB6324.eurprd03.prod.outlook.com; PTR:; CAT:NONE;
 SFTY:;
 SFS:(4636009)(376002)(136003)(346002)(39860400002)(396003)(366004)(4326008)(8676002)(6512007)(30864003)(26005)(5660300002)(31696002)(71200400001)(186003)(86362001)(316002)(31686004)(107886003)(66446008)(64756008)(66556008)(110136005)(2906002)(66476007)(53546011)(6506007)(966005)(76116006)(91956017)(478600001)(8936002)(6486002)(54906003)(66946007)(2616005)(36756003)(83380400001)(21314003);
 DIR:OUT; SFP:1101; 
x-ms-exchange-antispam-messagedata: 90KXslE73fsBTMLzOPMIhoRS9k2m46Emc7slAhuangujhYTZiidRcGnFqV1bkQS8hQyuhfDHMhpO3/xcdmP82+8H4iKpgPUOkDGWStG1JXQcR4WK3KpLPh5KaQdVZUtQsS8J6Boy4xPyFnHoUTPVHDYLQAmetkEe+k0UPbqzilNtnbJZE/ZZ6+19oL5dVnFsxu2kL61lbsOyKDodH0S+TeClE6Yvt+GNycZJlq5yFJmgTj0yYRu98Xet/UhhFc7kxQaOUuXHR7xOZcX7xNvqccZtk1WkknHVkaStwzc0gWBxxk2yblEUYEyE1wO2hjPg/M8Y3dyZwzTB11c/8U6hUHKU8vPIBpzE0zTxHykC8t6aGM3dIWw6fn5InYodAZZ0RL9OkZrcU0Yg0G7mrD0I6NbDMepB5JhZvJSA3mvkxivRuQuBl65wj0S9FiKO70c/YGJy4o9DLD3Aodn+xo62ndGoW4BOt0SMfbC6cdoGbeH3aH7GmmNqClV9G1Ovcs+xwoJcElbsR3wXdepuPMgpDfUokt1AejORRH0Ebxrg7xHB0KWvi+ZAmlepJxo2V+JY2Lei/lZOwUakpqYNq+qvYHmMOVKtK35Daq0A414yAM95pb6tFwecS0jlqR0D7c6j5d3xcI36dOQ8P+vux1Q4Ig==
Content-Type: text/plain; charset="utf-8"
Content-ID: <40357C1CF16E52419EE1EB1C48D94D61@eurprd03.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-OriginatorOrg: epam.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: AM0PR03MB6324.eurprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 9b50ade6-1c22-48e4-c7df-08d8353fef30
X-MS-Exchange-CrossTenant-originalarrivaltime: 31 Jul 2020 10:53:18.3814 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b41b72d0-4e9f-4c26-8a69-f949f367c91d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: r+7eFr6oSatm54ktjX4aJFGpsueAQ0Atsb7dcN10i1eEr0yUD2yYEU4nT4T4+TbqqWo2ecXj0a8t0T3FyltPtAk0tqPi7+LHoiUpL8VqU3x2NKLQ2wvQYRnDInjTWKSO
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR03MB3714
X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.235, 18.0.687
 definitions=2020-07-31_04:2020-07-31,
 2020-07-31 signatures=0
X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0
 suspectscore=0 malwarescore=0
 adultscore=0 lowpriorityscore=0 phishscore=0 mlxscore=0 mlxlogscore=999
 bulkscore=0 priorityscore=1501 clxscore=1011 spamscore=0 impostorscore=0
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2006250000
 definitions=main-2007310082
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Oleksandr Andrushchenko <andr2000@gmail.com>,
 Artem Mygaiev <Artem_Mygaiev@epam.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hello, Ian, Wei!

Initially I have sent this patch as a part of the series for the display
protocol changes, but later decided to split. At the moment the protocol
changes are already accepted, but this part is still missing feedback and
is still wanted.

I would really appreciate if you could have a look at the change below.

Thank you in advance,

Oleksandr Andrushchenko

On 5/20/20 12:04 PM, Oleksandr Andrushchenko wrote:
> From: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
>
> Add version 2 of the dma-buf ioctls which adds data_ofs parameter.
>
> dma-buf is backed by a scatter-gather table and has offset parameter
> which tells where the actual data starts. Relevant ioctls are extended
> to support that offset:
>    - when dma-buf is created (exported) from grant references then
>      data_ofs is used to set the offset field in the scatter list
>      of the new dma-buf
>    - when dma-buf is imported and grant references provided then
>      data_ofs is used to report that offset to user-space
>
> Signed-off-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
> ---
>   tools/include/xen-sys/Linux/gntdev.h  | 53 +++++++++++++++++
>   tools/libs/gnttab/Makefile            |  2 +-
>   tools/libs/gnttab/freebsd.c           | 15 +++++
>   tools/libs/gnttab/gnttab_core.c       | 17 ++++++
>   tools/libs/gnttab/include/xengnttab.h | 13 ++++
>   tools/libs/gnttab/libxengnttab.map    |  6 ++
>   tools/libs/gnttab/linux.c             | 86 ++++++++++++++++++++++++++++++
>   tools/libs/gnttab/minios.c            | 15 +++++
>   tools/libs/gnttab/private.h           |  9 +++
>   9 files changed, 215 insertions(+), 1 deletion(-)
>
> diff --git a/tools/include/xen-sys/Linux/gntdev.h b/tools/include/xen-sys/Linux/gntdev.h
> index d16076044c71..0c43393cbee5 100644
> --- a/tools/include/xen-sys/Linux/gntdev.h
> +++ b/tools/include/xen-sys/Linux/gntdev.h
> @@ -274,4 +274,57 @@ struct ioctl_gntdev_dmabuf_imp_release {
>       uint32_t reserved;
>   };
>   
> +/*
> + * Version 2 of the ioctls adds @data_ofs parameter.
> + *
> + * dma-buf is backed by a scatter-gather table and has offset
> + * parameter which tells where the actual data starts.
> + * Relevant ioctls are extended to support that offset:
> + *   - when dma-buf is created (exported) from grant references then
> + *     @data_ofs is used to set the offset field in the scatter list
> + *     of the new dma-buf
> + *   - when dma-buf is imported and grant references are provided then
> + *     @data_ofs is used to report that offset to user-space
> + */
> +#define IOCTL_GNTDEV_DMABUF_EXP_FROM_REFS_V2 \
> +    _IOC(_IOC_NONE, 'G', 13, \
> +         sizeof(struct ioctl_gntdev_dmabuf_exp_from_refs_v2))
> +struct ioctl_gntdev_dmabuf_exp_from_refs_v2 {
> +    /* IN parameters. */
> +    /* Specific options for this dma-buf: see GNTDEV_DMA_FLAG_XXX. */
> +    uint32_t flags;
> +    /* Number of grant references in @refs array. */
> +    uint32_t count;
> +    /* Offset of the data in the dma-buf. */
> +    uint32_t data_ofs;
> +    /* OUT parameters. */
> +    /* File descriptor of the dma-buf. */
> +    uint32_t fd;
> +    /* The domain ID of the grant references to be mapped. */
> +    uint32_t domid;
> +    /* Variable IN parameter. */
> +    /* Array of grant references of size @count. */
> +    uint32_t refs[1];
> +};
> +
> +#define IOCTL_GNTDEV_DMABUF_IMP_TO_REFS_V2 \
> +    _IOC(_IOC_NONE, 'G', 14, \
> +         sizeof(struct ioctl_gntdev_dmabuf_imp_to_refs_v2))
> +struct ioctl_gntdev_dmabuf_imp_to_refs_v2 {
> +    /* IN parameters. */
> +    /* File descriptor of the dma-buf. */
> +    uint32_t fd;
> +    /* Number of grant references in @refs array. */
> +    uint32_t count;
> +    /* The domain ID for which references to be granted. */
> +    uint32_t domid;
> +    /* Reserved - must be zero. */
> +    uint32_t reserved;
> +    /* OUT parameters. */
> +    /* Offset of the data in the dma-buf. */
> +    uint32_t data_ofs;
> +    /* Array of grant references of size @count. */
> +    uint32_t refs[1];
> +};
> +
>   #endif /* __LINUX_PUBLIC_GNTDEV_H__ */
> diff --git a/tools/libs/gnttab/Makefile b/tools/libs/gnttab/Makefile
> index 2da8fbbb7f6f..5ee2d965214f 100644
> --- a/tools/libs/gnttab/Makefile
> +++ b/tools/libs/gnttab/Makefile
> @@ -2,7 +2,7 @@ XEN_ROOT = $(CURDIR)/../../..
>   include $(XEN_ROOT)/tools/Rules.mk
>   
>   MAJOR    = 1
> -MINOR    = 2
> +MINOR    = 3
>   LIBNAME  := gnttab
>   USELIBS  := toollog toolcore
>   
> diff --git a/tools/libs/gnttab/freebsd.c b/tools/libs/gnttab/freebsd.c
> index 886b588303a0..baf0f60aa4d3 100644
> --- a/tools/libs/gnttab/freebsd.c
> +++ b/tools/libs/gnttab/freebsd.c
> @@ -319,6 +319,14 @@ int osdep_gnttab_dmabuf_exp_from_refs(xengnttab_handle *xgt, uint32_t domid,
>       abort();
>   }
>   
> +int osdep_gnttab_dmabuf_exp_from_refs_v2(xengnttab_handle *xgt, uint32_t domid,
> +                                         uint32_t flags, uint32_t count,
> +                                         const uint32_t *refs,
> +                                         uint32_t *dmabuf_fd, uint32_t data_ofs)
> +{
> +    abort();
> +}
> +
>   int osdep_gnttab_dmabuf_exp_wait_released(xengnttab_handle *xgt,
>                                             uint32_t fd, uint32_t wait_to_ms)
>   {
> @@ -331,6 +339,13 @@ int osdep_gnttab_dmabuf_imp_to_refs(xengnttab_handle *xgt, uint32_t domid,
>       abort();
>   }
>   
> +int osdep_gnttab_dmabuf_imp_to_refs_v2(xengnttab_handle *xgt, uint32_t domid,
> +                                       uint32_t fd, uint32_t count,
> +                                       uint32_t *refs, uint32_t *data_ofs)
> +{
> +    abort();
> +}
> +
>   int osdep_gnttab_dmabuf_imp_release(xengnttab_handle *xgt, uint32_t fd)
>   {
>       abort();
> diff --git a/tools/libs/gnttab/gnttab_core.c b/tools/libs/gnttab/gnttab_core.c
> index 92e7228a2671..3af3cec80045 100644
> --- a/tools/libs/gnttab/gnttab_core.c
> +++ b/tools/libs/gnttab/gnttab_core.c
> @@ -144,6 +144,15 @@ int xengnttab_dmabuf_exp_from_refs(xengnttab_handle *xgt, uint32_t domid,
>                                      refs, fd);
>   }
>   
> +int xengnttab_dmabuf_exp_from_refs_v2(xengnttab_handle *xgt, uint32_t domid,
> +                                      uint32_t flags, uint32_t count,
> +                                      const uint32_t *refs, uint32_t *fd,
> +                                      uint32_t data_ofs)
> +{
> +    return osdep_gnttab_dmabuf_exp_from_refs_v2(xgt, domid, flags, count,
> +                                                refs, fd, data_ofs);
> +}
> +
>   int xengnttab_dmabuf_exp_wait_released(xengnttab_handle *xgt, uint32_t fd,
>                                          uint32_t wait_to_ms)
>   {
> @@ -156,6 +165,14 @@ int xengnttab_dmabuf_imp_to_refs(xengnttab_handle *xgt, uint32_t domid,
>       return osdep_gnttab_dmabuf_imp_to_refs(xgt, domid, fd, count, refs);
>   }
>   
> +int xengnttab_dmabuf_imp_to_refs_v2(xengnttab_handle *xgt, uint32_t domid,
> +                                    uint32_t fd, uint32_t count, uint32_t *refs,
> +                                    uint32_t *data_ofs)
> +{
> +    return osdep_gnttab_dmabuf_imp_to_refs_v2(xgt, domid, fd, count, refs,
> +                                              data_ofs);
> +}
> +
>   int xengnttab_dmabuf_imp_release(xengnttab_handle *xgt, uint32_t fd)
>   {
>       return osdep_gnttab_dmabuf_imp_release(xgt, fd);
> diff --git a/tools/libs/gnttab/include/xengnttab.h b/tools/libs/gnttab/include/xengnttab.h
> index 111fc88caeb3..0956bd91e0df 100644
> --- a/tools/libs/gnttab/include/xengnttab.h
> +++ b/tools/libs/gnttab/include/xengnttab.h
> @@ -322,12 +322,19 @@ int xengnttab_grant_copy(xengnttab_handle *xgt,
>    * Returns 0 if dma-buf was successfully created and the corresponding
>    * dma-buf's file descriptor is returned in @fd.
>    *
> + * Version 2 also accepts @data_ofs offset of the data in the buffer.
> + *
>    * [1] https://elixir.bootlin.com/linux/latest/source/Documentation/driver-api/dma-buf.rst
>    */
>   int xengnttab_dmabuf_exp_from_refs(xengnttab_handle *xgt, uint32_t domid,
>                                      uint32_t flags, uint32_t count,
>                                      const uint32_t *refs, uint32_t *fd);
>   
> +int xengnttab_dmabuf_exp_from_refs_v2(xengnttab_handle *xgt, uint32_t domid,
> +                                      uint32_t flags, uint32_t count,
> +                                      const uint32_t *refs, uint32_t *fd,
> +                                      uint32_t data_ofs);
> +
>   /*
>    * This will block until the dma-buf with the file descriptor @fd is
>    * released. This is only valid for buffers created with
> @@ -345,10 +352,16 @@ int xengnttab_dmabuf_exp_wait_released(xengnttab_handle *xgt, uint32_t fd,
>   /*
>    * Import a dma-buf with file descriptor @fd and export granted references
>    * to the pages of that dma-buf into array @refs of size @count.
> + *
> + * Version 2 also provides @data_ofs offset of the data in the buffer.
>    */
>   int xengnttab_dmabuf_imp_to_refs(xengnttab_handle *xgt, uint32_t domid,
>                                    uint32_t fd, uint32_t count, uint32_t *refs);
>   
> +int xengnttab_dmabuf_imp_to_refs_v2(xengnttab_handle *xgt, uint32_t domid,
> +                                    uint32_t fd, uint32_t count, uint32_t *refs,
> +                                    uint32_t *data_ofs);
> +
>   /*
>    * This will close all references to an imported buffer, so it can be
>    * released by the owner. This is only valid for buffers created with
> diff --git a/tools/libs/gnttab/libxengnttab.map b/tools/libs/gnttab/libxengnttab.map
> index d2a9b7e18bea..ddf77e064b08 100644
> --- a/tools/libs/gnttab/libxengnttab.map
> +++ b/tools/libs/gnttab/libxengnttab.map
> @@ -36,3 +36,9 @@ VERS_1.2 {
>   		xengnttab_dmabuf_imp_to_refs;
>   		xengnttab_dmabuf_imp_release;
>   } VERS_1.1;
> +
> +VERS_1.3 {
> +    global:
> +		xengnttab_dmabuf_exp_from_refs_v2;
> +		xengnttab_dmabuf_imp_to_refs_v2;
> +} VERS_1.2;
> diff --git a/tools/libs/gnttab/linux.c b/tools/libs/gnttab/linux.c
> index a01bb6c698c6..75e249fb3202 100644
> --- a/tools/libs/gnttab/linux.c
> +++ b/tools/libs/gnttab/linux.c
> @@ -352,6 +352,51 @@ out:
>       return rc;
>   }
>   
> +int osdep_gnttab_dmabuf_exp_from_refs_v2(xengnttab_handle *xgt, uint32_t domid,
> +                                         uint32_t flags, uint32_t count,
> +                                         const uint32_t *refs,
> +                                         uint32_t *dmabuf_fd,
> +                                         uint32_t data_ofs)
> +{
> +    struct ioctl_gntdev_dmabuf_exp_from_refs_v2 *from_refs_v2 = NULL;
> +    int rc = -1;
> +
> +    if ( !count )
> +    {
> +        errno = EINVAL;
> +        goto out;
> +    }
> +
> +    from_refs_v2 = malloc(sizeof(*from_refs_v2) +
> +                          (count - 1) * sizeof(from_refs_v2->refs[0]));
> +    if ( !from_refs_v2 )
> +    {
> +        errno = ENOMEM;
> +        goto out;
> +    }
> +
> +    from_refs_v2->flags = flags;
> +    from_refs_v2->count = count;
> +    from_refs_v2->domid = domid;
> +    from_refs_v2->data_ofs = data_ofs;
> +
> +    memcpy(from_refs_v2->refs, refs, count * sizeof(from_refs_v2->refs[0]));
> +
> +    if ( (rc = ioctl(xgt->fd, IOCTL_GNTDEV_DMABUF_EXP_FROM_REFS_V2,
> +                     from_refs_v2)) )
> +    {
> +        GTERROR(xgt->logger, "ioctl DMABUF_EXP_FROM_REFS_V2 failed");
> +        goto out;
> +    }
> +
> +    *dmabuf_fd = from_refs_v2->fd;
> +    rc = 0;
> +
> +out:
> +    free(from_refs_v2);
> +    return rc;
> +}
> +
>   int osdep_gnttab_dmabuf_exp_wait_released(xengnttab_handle *xgt,
>                                             uint32_t fd, uint32_t wait_to_ms)
>   {
> @@ -413,6 +458,47 @@ out:
>       return rc;
>   }
>   
> +int osdep_gnttab_dmabuf_imp_to_refs_v2(xengnttab_handle *xgt, uint32_t domid,
> +                                       uint32_t fd, uint32_t count,
> +                                       uint32_t *refs,
> +                                       uint32_t *data_ofs)
> +{
> +    struct ioctl_gntdev_dmabuf_imp_to_refs_v2 *to_refs_v2 = NULL;
> +    int rc = -1;
> +
> +    if ( !count )
> +    {
> +    
ICAgIGVycm5vID0gRUlOVkFMOw0KPiArICAgICAgICBnb3RvIG91dDsNCj4gKyAgICB9DQo+ICsN
Cj4gKyAgICB0b19yZWZzX3YyID0gbWFsbG9jKHNpemVvZigqdG9fcmVmc192MikgKw0KPiArICAg
ICAgICAgICAgICAgICAgICAgICAgKGNvdW50IC0gMSkgKiBzaXplb2YodG9fcmVmc192Mi0+cmVm
c1swXSkpOw0KPiArICAgIGlmICggIXRvX3JlZnNfdjIgKQ0KPiArICAgIHsNCj4gKyAgICAgICAg
ZXJybm8gPSBFTk9NRU07DQo+ICsgICAgICAgIGdvdG8gb3V0Ow0KPiArICAgIH0NCj4gKw0KPiAr
ICAgIHRvX3JlZnNfdjItPmZkID0gZmQ7DQo+ICsgICAgdG9fcmVmc192Mi0+Y291bnQgPSBjb3Vu
dDsNCj4gKyAgICB0b19yZWZzX3YyLT5kb21pZCA9IGRvbWlkOw0KPiArDQo+ICsgICAgaWYgKCAo
cmMgPSBpb2N0bCh4Z3QtPmZkLCBJT0NUTF9HTlRERVZfRE1BQlVGX0lNUF9UT19SRUZTX1YyLCB0
b19yZWZzX3YyKSkgKQ0KPiArICAgIHsNCj4gKyAgICAgICAgR1RFUlJPUih4Z3QtPmxvZ2dlciwg
ImlvY3RsIERNQUJVRl9JTVBfVE9fUkVGU19WMiBmYWlsZWQiKTsNCj4gKyAgICAgICAgZ290byBv
dXQ7DQo+ICsgICAgfQ0KPiArDQo+ICsgICAgbWVtY3B5KHJlZnMsIHRvX3JlZnNfdjItPnJlZnMs
IGNvdW50ICogc2l6ZW9mKCpyZWZzKSk7DQo+ICsgICAgKmRhdGFfb2ZzID0gdG9fcmVmc192Mi0+
ZGF0YV9vZnM7DQo+ICsgICAgcmMgPSAwOw0KPiArDQo+ICtvdXQ6DQo+ICsgICAgZnJlZSh0b19y
ZWZzX3YyKTsNCj4gKyAgICByZXR1cm4gcmM7DQo+ICt9DQo+ICsNCj4gICBpbnQgb3NkZXBfZ250
dGFiX2RtYWJ1Zl9pbXBfcmVsZWFzZSh4ZW5nbnR0YWJfaGFuZGxlICp4Z3QsIHVpbnQzMl90IGZk
KQ0KPiAgIHsNCj4gICAgICAgc3RydWN0IGlvY3RsX2dudGRldl9kbWFidWZfaW1wX3JlbGVhc2Ug
cmVsZWFzZTsNCj4gZGlmZiAtLWdpdCBhL3Rvb2xzL2xpYnMvZ250dGFiL21pbmlvcy5jIGIvdG9v
bHMvbGlicy9nbnR0YWIvbWluaW9zLmMNCj4gaW5kZXggZjc4Y2FhZGQzMDQzLi4yOTg0MTZiMmE5
OGQgMTAwNjQ0DQo+IC0tLSBhL3Rvb2xzL2xpYnMvZ250dGFiL21pbmlvcy5jDQo+ICsrKyBiL3Rv
b2xzL2xpYnMvZ250dGFiL21pbmlvcy5jDQo+IEBAIC0xMjAsNiArMTIwLDE0IEBAIGludCBvc2Rl
cF9nbnR0YWJfZG1hYnVmX2V4cF9mcm9tX3JlZnMoeGVuZ250dGFiX2hhbmRsZSAqeGd0LCB1aW50
MzJfdCBkb21pZCwNCj4gICAgICAgcmV0dXJuIC0xOw0KPiAgIH0NCj4gICANCj4gK2ludCBvc2Rl
cF9nbnR0YWJfZG1hYnVmX2V4cF9mcm9tX3JlZnNfdjIoeGVuZ250dGFiX2hhbmRsZSAqeGd0LCB1
aW50MzJfdCBkb21pZCwNCj4gKyAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgdWludDMyX3QgZmxhZ3MsIHVpbnQzMl90IGNvdW50LA0KPiArICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICBjb25zdCB1aW50MzJfdCAqcmVmcywgdWludDMyX3QgKmZk
LA0KPiArICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICB1aW50MzJfdCBk
YXRhX29mcykNCj4gK3sNCj4gKyAgICByZXR1cm4gLTE7DQo+ICt9DQo+ICsNCj4gICBpbnQgb3Nk
ZXBfZ250dGFiX2RtYWJ1Zl9leHBfd2FpdF9yZWxlYXNlZCh4ZW5nbnR0YWJfaGFuZGxlICp4Z3Qs
DQo+ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgdWludDMyX3Qg
ZmQsIHVpbnQzMl90IHdhaXRfdG9fbXMpDQo+ICAgew0KPiBAQCAtMTMzLDYgKzE0MSwxMyBAQCBp
bnQgb3NkZXBfZ250dGFiX2RtYWJ1Zl9pbXBfdG9fcmVmcyh4ZW5nbnR0YWJfaGFuZGxlICp4Z3Qs
IHVpbnQzMl90IGRvbWlkLA0KPiAgICAgICByZXR1cm4gLTE7DQo+ICAgfQ0KPiAgIA0KPiAraW50
IG9zZGVwX2dudHRhYl9kbWFidWZfaW1wX3RvX3JlZnNfdjIoeGVuZ250dGFiX2hhbmRsZSAqeGd0
LCB1aW50MzJfdCBkb21pZCwNCj4gKyAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgIHVpbnQzMl90IGZkLCB1aW50MzJfdCBjb3VudCwNCj4gKyAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgIHVpbnQzMl90ICpyZWZzLCB1aW50MzJfdCAqZGF0YV9vZnMpDQo+
ICt7DQo+ICsgICAgcmV0dXJuIC0xOw0KPiArfQ0KPiArDQo+ICAgaW50IG9zZGVwX2dudHRhYl9k
bWFidWZfaW1wX3JlbGVhc2UoeGVuZ250dGFiX2hhbmRsZSAqeGd0LCB1aW50MzJfdCBmZCkNCj4g
ICB7DQo+ICAgICAgIHJldHVybiAtMTsNCj4gZGlmZiAtLWdpdCBhL3Rvb2xzL2xpYnMvZ250dGFi
L3ByaXZhdGUuaCBiL3Rvb2xzL2xpYnMvZ250dGFiL3ByaXZhdGUuaA0KPiBpbmRleCBjNWUyMzYz
OWIxNDEuLjA3MjcxNjM3ZjYwOSAxMDA2NDQNCj4gLS0tIGEvdG9vbHMvbGlicy9nbnR0YWIvcHJp
dmF0ZS5oDQo+ICsrKyBiL3Rvb2xzL2xpYnMvZ250dGFiL3ByaXZhdGUuaA0KPiBAQCAtMzksNiAr
MzksMTEgQEAgaW50IG9zZGVwX2dudHRhYl9kbWFidWZfZXhwX2Zyb21fcmVmcyh4ZW5nbnR0YWJf
aGFuZGxlICp4Z3QsIHVpbnQzMl90IGRvbWlkLA0KPiAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgdWludDMyX3QgZmxhZ3MsIHVpbnQzMl90IGNvdW50LA0KPiAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgY29uc3QgdWludDMyX3QgKnJlZnMsIHVp
bnQzMl90ICpmZCk7DQo+ICAgDQo+ICtpbnQgb3NkZXBfZ250dGFiX2RtYWJ1Zl9leHBfZnJvbV9y
ZWZzX3YyKHhlbmdudHRhYl9oYW5kbGUgKnhndCwgdWludDMyX3QgZG9taWQsDQo+ICsgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIHVpbnQzMl90IGZsYWdzLCB1aW50MzJf
dCBjb3VudCwNCj4gKyAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgY29u
c3QgdWludDMyX3QgKnJlZnMsIHVpbnQzMl90ICpmZCwNCj4gKyAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgdWludDMyX3QgZGF0YV9vZnMpOw0KPiArDQo+ICAgaW50IG9z
ZGVwX2dudHRhYl9kbWFidWZfZXhwX3dhaXRfcmVsZWFzZWQoeGVuZ250dGFiX2hhbmRsZSAqeGd0
LA0KPiAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIHVpbnQzMl90
IGZkLCB1aW50MzJfdCB3YWl0X3RvX21zKTsNCj4gICANCj4gQEAgLTQ2LDYgKzUxLDEwIEBAIGlu
dCBvc2RlcF9nbnR0YWJfZG1hYnVmX2ltcF90b19yZWZzKHhlbmdudHRhYl9oYW5kbGUgKnhndCwg
dWludDMyX3QgZG9taWQsDQo+ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
dWludDMyX3QgZmQsIHVpbnQzMl90IGNvdW50LA0KPiAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgIHVpbnQzMl90ICpyZWZzKTsNCj4gICANCj4gK2ludCBvc2RlcF9nbnR0YWJf
ZG1hYnVmX2ltcF90b19yZWZzX3YyKHhlbmdudHRhYl9oYW5kbGUgKnhndCwgdWludDMyX3QgZG9t
aWQsDQo+ICsgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICB1aW50MzJfdCBm
ZCwgdWludDMyX3QgY291bnQsDQo+ICsgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICB1aW50MzJfdCAqcmVmcywgdWludDMyX3QgKmRhdGFfb2ZzKTsNCj4gKw0KPiAgIGludCBv
c2RlcF9nbnR0YWJfZG1hYnVmX2ltcF9yZWxlYXNlKHhlbmdudHRhYl9oYW5kbGUgKnhndCwgdWlu
dDMyX3QgZmQpOw0KPiAgIA0KPiAgIGludCBvc2RlcF9nbnRzaHJfb3Blbih4ZW5nbnRzaHJfaGFu
ZGxlICp4Z3MpOw==


From xen-devel-bounces@lists.xenproject.org Fri Jul 31 11:14:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 11:14:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1Szj-0003Ri-RK; Fri, 31 Jul 2020 11:14:35 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=S17i=BK=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1k1Szh-0003Rd-O8
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 11:14:33 +0000
X-Inumbo-ID: 012ae8a9-d31f-11ea-ab9f-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 012ae8a9-d31f-11ea-ab9f-12813bfff9fa;
 Fri, 31 Jul 2020 11:14:32 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 79EC5AB55;
 Fri, 31 Jul 2020 11:14:44 +0000 (UTC)
Subject: Re: [PATCH] x86/hvm: set 'ipat' in EPT for special pages
To: Paul Durrant <paul@xen.org>
References: <20200731104644.20906-1-paul@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <317725c7-4db4-c844-ec97-6f677b047661@suse.com>
Date: Fri, 31 Jul 2020 13:14:30 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20200731104644.20906-1-paul@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, Paul Durrant <pdurrant@amazon.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 31.07.2020 12:46, Paul Durrant wrote:
> From: Paul Durrant <pdurrant@amazon.com>
> 
> All non-MMIO ranges (i.e. those not mapping real device MMIO regions) that
> map valid MFNs are normally marked MTRR_TYPE_WRBACK and 'ipat' is set. Hence
> when PV drivers running in a guest populate the BAR space of the Xen Platform
> PCI Device with pages such as the Shared Info page or Grant Table pages,
> accesses to these pages will be cachable.
> 
> However, should IOMMU mappings be enabled for the guest then these
> accesses become uncachable. This has a substantial negative effect on I/O
> throughput of PV devices. Arguably PV drivers should not be using BAR space to
> host the Shared Info and Grant Table pages but it is currently commonplace for
> them to do this and so this problem needs mitigation. Hence this patch makes
> sure the 'ipat' bit is set for any special page regardless of where in GFN
> space it is mapped.
> 
> NOTE: Clearly this mitigation only applies to Intel EPT. It is not obvious
>       that there is any similar mitigation possible for AMD NPT. Downstreams
>       such as Citrix XenServer have been carrying a patch similar to this for
>       several releases though.
> 
> Signed-off-by: Paul Durrant <pdurrant@amazon.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>

However, ...

> --- a/xen/arch/x86/hvm/mtrr.c
> +++ b/xen/arch/x86/hvm/mtrr.c
> @@ -830,7 +830,8 @@ int epte_get_entry_emt(struct domain *d, unsigned long gfn, mfn_t mfn,
>          return MTRR_TYPE_UNCACHABLE;
>      }
>  
> -    if ( !is_iommu_enabled(d) && !cache_flush_permitted(d) )
> +    if ( (!is_iommu_enabled(d) && !cache_flush_permitted(d)) ||
> +         is_special_page(mfn_to_page(mfn)) )
>      {
>          *ipat = 1;
>          return MTRR_TYPE_WRBACK;

... shouldn't we leverage this (right away?) to do away with the
APIC access page special case a few lines up from here? It is my
understanding that vmx_alloc_vlapic_mapping() uses
set_mmio_p2m_entry() just in order to get ipat set on the resulting
EPT entry. Yet with the allocation using MEMF_no_refcount, this will
now be the result even if no artificial MMIO mapping was created.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Jul 31 11:21:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 11:21:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1T6J-0004HE-Iu; Fri, 31 Jul 2020 11:21:23 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=oG5j=BK=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1k1T6I-0004H9-Ot
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 11:21:22 +0000
X-Inumbo-ID: f5d6ab58-d31f-11ea-8e24-bc764e2007e4
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f5d6ab58-d31f-11ea-8e24-bc764e2007e4;
 Fri, 31 Jul 2020 11:21:21 +0000 (UTC)
Subject: Re: [PATCH] x86/hvm: set 'ipat' in EPT for special pages
To: Paul Durrant <paul@xen.org>, <xen-devel@lists.xenproject.org>
References: <20200731104644.20906-1-paul@xen.org>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <dba8c4c4-dfdd-9935-2d59-7bcee7615361@citrix.com>
Date: Fri, 31 Jul 2020 12:21:16 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20200731104644.20906-1-paul@xen.org>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
Cc: Paul Durrant <pdurrant@amazon.com>, Wei
 Liu <wl@xen.org>, Jan Beulich <jbeulich@suse.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 31/07/2020 11:46, Paul Durrant wrote:
> From: Paul Durrant <pdurrant@amazon.com>
>
> All non-MMIO ranges (i.e. those not mapping real device MMIO regions) that
> map valid MFNs are normally marked MTRR_TYPE_WRBACK and 'ipat' is set. Hence
> when PV drivers running in a guest populate the BAR space of the Xen Platform
> PCI Device with pages such as the Shared Info page or Grant Table pages,
> accesses to these pages will be cachable.
>
> However, should IOMMU mappings be enabled for the guest then these
> accesses become uncachable. This has a substantial negative effect on I/O
> throughput of PV devices. Arguably PV drivers should not be using BAR space to
> host the Shared Info and Grant Table pages but it is currently commonplace for
> them to do this and so this problem needs mitigation. Hence this patch makes
> sure the 'ipat' bit is set for any special page regardless of where in GFN
> space it is mapped.
>
> NOTE: Clearly this mitigation only applies to Intel EPT. It is not obvious
>       that there is any similar mitigation possible for AMD NPT. Downstreams
>       such as Citrix XenServer have been carrying a patch similar to this for
>       several releases though.

https://github.com/xenserver/xen.pg/blob/XS-8.2.x/master/xen-override-caching-cp-26562.patch

(Yay for internal ticket references escaping into the wild.)


However, it is very important to be aware that this is just papering
over the problem, and it will cease to function as soon as we get MKTME
support.  When we hit that point, iPAT cannot be used, as it will cause
data corruption in guests.

The only correct way to fix this is to not (mis)use BAR space for RAM
mappings.

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri Jul 31 11:22:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 11:22:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1T76-0004Kv-Ud; Fri, 31 Jul 2020 11:22:12 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OD0g=BK=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1k1T76-0004K9-8v
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 11:22:12 +0000
X-Inumbo-ID: 106b08ec-d320-11ea-8e24-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 106b08ec-d320-11ea-8e24-bc764e2007e4;
 Fri, 31 Jul 2020 11:22:05 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1k1T6z-0007p8-58; Fri, 31 Jul 2020 11:22:05 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1k1T6y-0001vx-N6; Fri, 31 Jul 2020 11:22:04 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1k1T6y-0006ga-MK; Fri, 31 Jul 2020 11:22:04 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-152317-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 152317: regressions - FAIL
X-Osstest-Failures: libvirt:build-i386-libvirt:libvirt-build:fail:regression
 libvirt:build-arm64-libvirt:libvirt-build:fail:regression
 libvirt:build-amd64-libvirt:libvirt-build:fail:regression
 libvirt:build-armhf-libvirt:libvirt-build:fail:regression
 libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
 libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
X-Osstest-Versions-This: libvirt=f7f5b86be25d27915cc67a8b84fa9a2589df4ab8
X-Osstest-Versions-That: libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 31 Jul 2020 11:22:04 +0000
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 152317 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/152317/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a

version targeted for testing:
 libvirt              f7f5b86be25d27915cc67a8b84fa9a2589df4ab8
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z   21 days
Failing since        151818  2020-07-11 04:18:52 Z   20 days   21 attempts
Testing same since   152317  2020-07-31 04:19:03 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Bastien Orivel <bastien.orivel@diateam.net>
  Bihong Yu <yubihong@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  Ján Tomko <jtomko@redhat.com>
  Laine Stump <laine@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Michal Privoznik <mprivozn@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Ryan Schmidt <git@ryandesign.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Weblate <noreply@weblate.org>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 3102 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Jul 31 11:22:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 11:22:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1T7C-0004M2-7G; Fri, 31 Jul 2020 11:22:18 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=fae6=BK=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1k1T7B-0004K9-8v
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 11:22:17 +0000
X-Inumbo-ID: 152826c6-d320-11ea-8e24-bc764e2007e4
Received: from mail-wm1-x342.google.com (unknown [2a00:1450:4864:20::342])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 152826c6-d320-11ea-8e24-bc764e2007e4;
 Fri, 31 Jul 2020 11:22:14 +0000 (UTC)
Received: by mail-wm1-x342.google.com with SMTP id 184so8914882wmb.0
 for <xen-devel@lists.xenproject.org>; Fri, 31 Jul 2020 04:22:14 -0700 (PDT)
Received: from CBGR90WXYV0 ([2a00:23c5:5785:9a01:ad9a:ab78:5748:a7ec])
 by smtp.gmail.com with ESMTPSA id p22sm11294732wmc.38.2020.07.31.04.22.12
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Fri, 31 Jul 2020 04:22:12 -0700 (PDT)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
To: "'Jan Beulich'" <jbeulich@suse.com>
References: <20200731104644.20906-1-paul@xen.org>
 <317725c7-4db4-c844-ec97-6f677b047661@suse.com>
In-Reply-To: <317725c7-4db4-c844-ec97-6f677b047661@suse.com>
Subject: RE: [PATCH] x86/hvm: set 'ipat' in EPT for special pages
Date: Fri, 31 Jul 2020 12:17:00 +0100
Message-ID: <003a01d6672c$1ca0f1e0$55e2d5a0$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="utf-8"
Content-Transfer-Encoding: quoted-printable
X-Mailer: Microsoft Outlook 16.0
Content-Language: en-gb
Thread-Index: AQLI7tFygWYJCZY9nQSqDpuxgiPp+AJCrDHjpypktYA=
Reply-To: paul@xen.org
Cc: xen-devel@lists.xenproject.org, 'Paul Durrant' <pdurrant@amazon.com>,
 =?utf-8?Q?'Roger_Pau_Monn=C3=A9'?= <roger.pau@citrix.com>,
 'Wei Liu' <wl@xen.org>, 'Andrew Cooper' <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Sent: 31 July 2020 12:15
> To: Paul Durrant <paul@xen.org>
> Cc: xen-devel@lists.xenproject.org; Paul Durrant <pdurrant@amazon.com>; Andrew Cooper
> <andrew.cooper3@citrix.com>; Wei Liu <wl@xen.org>; Roger Pau Monné <roger.pau@citrix.com>
> Subject: Re: [PATCH] x86/hvm: set 'ipat' in EPT for special pages
> 
> On 31.07.2020 12:46, Paul Durrant wrote:
> > From: Paul Durrant <pdurrant@amazon.com>
> >
> > All non-MMIO ranges (i.e. those not mapping real device MMIO regions) that
> > map valid MFNs are normally marked MTRR_TYPE_WRBACK and 'ipat' is set. Hence
> > when PV drivers running in a guest populate the BAR space of the Xen Platform
> > PCI Device with pages such as the Shared Info page or Grant Table pages,
> > accesses to these pages will be cacheable.
> >
> > However, should IOMMU mappings be enabled for the guest then these
> > accesses become uncacheable. This has a substantial negative effect on I/O
> > throughput of PV devices. Arguably PV drivers should not be using BAR space to
> > host the Shared Info and Grant Table pages, but it is currently commonplace for
> > them to do this and so this problem needs mitigation. Hence this patch makes
> > sure the 'ipat' bit is set for any special page regardless of where in GFN
> > space it is mapped.
> >
> > NOTE: Clearly this mitigation only applies to Intel EPT. It is not obvious
> >       that there is any similar mitigation possible for AMD NPT. Downstreams
> >       such as Citrix XenServer have been carrying a patch similar to this for
> >       several releases though.
> >
> > Signed-off-by: Paul Durrant <pdurrant@amazon.com>
> 
> Reviewed-by: Jan Beulich <jbeulich@suse.com>
> 
> However, ...
> 
> > --- a/xen/arch/x86/hvm/mtrr.c
> > +++ b/xen/arch/x86/hvm/mtrr.c
> > @@ -830,7 +830,8 @@ int epte_get_entry_emt(struct domain *d, unsigned long gfn, mfn_t mfn,
> >          return MTRR_TYPE_UNCACHABLE;
> >      }
> >
> > -    if ( !is_iommu_enabled(d) && !cache_flush_permitted(d) )
> > +    if ( (!is_iommu_enabled(d) && !cache_flush_permitted(d)) ||
> > +         is_special_page(mfn_to_page(mfn)) )
> >      {
> >          *ipat = 1;
> >          return MTRR_TYPE_WRBACK;
> 
> ... shouldn't we leverage this (right away?) to do away with the
> APIC access page special case a few lines up from here? It is my
> understanding that vmx_alloc_vlapic_mapping() uses
> set_mmio_p2m_entry() just in order to get ipat set on the resulting
> EPT entry. Yet with the allocation using MEMF_no_refcount, this will
> now be the result even if no artificial MMIO mapping was created.

That's a good point. Best handled by a separate patch I think, so I'll
re-send this with your R-b plus a follow-up patch as a v2.

  Paul

> 
> Jan



From xen-devel-bounces@lists.xenproject.org Fri Jul 31 11:24:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 11:24:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1T9N-0004aj-MC; Fri, 31 Jul 2020 11:24:33 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=fae6=BK=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1k1T9M-0004ac-BH
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 11:24:32 +0000
X-Inumbo-ID: 6723048c-d320-11ea-8e24-bc764e2007e4
Received: from mail-wr1-x430.google.com (unknown [2a00:1450:4864:20::430])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6723048c-d320-11ea-8e24-bc764e2007e4;
 Fri, 31 Jul 2020 11:24:31 +0000 (UTC)
Received: by mail-wr1-x430.google.com with SMTP id f1so27113626wro.2
 for <xen-devel@lists.xenproject.org>; Fri, 31 Jul 2020 04:24:31 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
 :mime-version:content-transfer-encoding:content-language
 :thread-index; bh=GTbWCKy6JNu5/qi4072qGPDLgB/3dhBRBvdUoHZqR+o=;
 b=NJJWFgbVvJwFX/R661qppiZ5bBCV5SqVIcsPYNWhf0ZhKxCJ6BOFz0qzpwE8bkYdny
 Qe92qZqoMqDrZxvtXmTDGxB1e7Xc+dCVOVZ42ns1nxp3G0LAiA2hDMyQIgv6ViNCBGnx
 Ns2uYOV15c3V+BDYnBOZ99S6hRMRdg3J1Ka9w6BiurNq1XKc67PDS2o48HjQ3YPkclKX
 kf8uFURZg681gWkF3uUHtkxHbnQmch1I+06x2htvj4Be5rsTELmzPh04gNaONi0PWQ52
 Htwc3rvZOB5bggEHQERTx3tczkrxwoqiTx6vDqBT2EpmG1wHahlsuHjQGbzC1nIXHzC9
 bhvw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
 :subject:date:message-id:mime-version:content-transfer-encoding
 :content-language:thread-index;
 bh=GTbWCKy6JNu5/qi4072qGPDLgB/3dhBRBvdUoHZqR+o=;
 b=kDQlI+K6DymOuAJNspVtcRzvdj+XH7P0yDSQynCyACPz8MIxfIUPOCmUjzH5lolKbm
 YjVAaHvuOl4fzinzN6RmR7W+VPId8rhgwWtFF1JVh4BUQgZG3cQkefU3gXibpEbUOIkO
 6hQSu7oxsr7XHWjXaccbsM7pT04QypiB7wkzPEVrmo0XM47y4KrgoJmBE4Whe6NZw9oi
 hezypl4hEaKhPoEORTS6qGWgf/YbIWZgRCp3oWflZvpOVKL5PDF0N+PIUP4Gz5h1yixX
 9YWt6kWhsuT0p1KkhfRM/L0TaFAoGuEUzArKHZikl5E4vWBWmhiqN1zaMO+CdIgRYOCS
 5iWg==
X-Gm-Message-State: AOAM533BqYiVq14v/Ofbs6wX8o76pP8OwLh7BTUgLeXRWf0EYcvcfUN8
 TYjYiYTyQfLLRd58U1WZ2No=
X-Google-Smtp-Source: ABdhPJwQ4xrLakLXFsIFwQvrfm4gr44Yi6VlcPFlFRaPYwMhk1TP/wHA2+8NgHUnU+rYROHVPRaAEw==
X-Received: by 2002:a5d:4d87:: with SMTP id b7mr3375732wru.170.1596194670802; 
 Fri, 31 Jul 2020 04:24:30 -0700 (PDT)
Received: from CBGR90WXYV0 ([2a00:23c5:5785:9a01:ad9a:ab78:5748:a7ec])
 by smtp.gmail.com with ESMTPSA id x11sm12757822wrl.28.2020.07.31.04.24.29
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Fri, 31 Jul 2020 04:24:30 -0700 (PDT)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
To: "'Andrew Cooper'" <andrew.cooper3@citrix.com>,
 <xen-devel@lists.xenproject.org>
References: <20200731104644.20906-1-paul@xen.org>
 <dba8c4c4-dfdd-9935-2d59-7bcee7615361@citrix.com>
In-Reply-To: <dba8c4c4-dfdd-9935-2d59-7bcee7615361@citrix.com>
Subject: RE: [PATCH] x86/hvm: set 'ipat' in EPT for special pages
Date: Fri, 31 Jul 2020 12:19:17 +0100
Message-ID: <003b01d6672c$6e8ffb40$4baff1c0$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="utf-8"
Content-Transfer-Encoding: quoted-printable
X-Mailer: Microsoft Outlook 16.0
Content-Language: en-gb
Thread-Index: AQLI7tFygWYJCZY9nQSqDpuxgiPp+AIVoClxpyvNmOA=
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Reply-To: paul@xen.org
Cc: 'Paul Durrant' <pdurrant@amazon.com>, 'Wei Liu' <wl@xen.org>,
 'Jan Beulich' <jbeulich@suse.com>,
 =?utf-8?Q?'Roger_Pau_Monn=C3=A9'?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> -----Original Message-----
> From: Andrew Cooper <andrew.cooper3@citrix.com>
> Sent: 31 July 2020 12:21
> To: Paul Durrant <paul@xen.org>; xen-devel@lists.xenproject.org
> Cc: Paul Durrant <pdurrant@amazon.com>; Jan Beulich <jbeulich@suse.com>; Wei Liu <wl@xen.org>; Roger
> Pau Monné <roger.pau@citrix.com>
> Subject: Re: [PATCH] x86/hvm: set 'ipat' in EPT for special pages
> 
> On 31/07/2020 11:46, Paul Durrant wrote:
> > From: Paul Durrant <pdurrant@amazon.com>
> >
> > All non-MMIO ranges (i.e. those not mapping real device MMIO regions) that
> > map valid MFNs are normally marked MTRR_TYPE_WRBACK and 'ipat' is set. Hence
> > when PV drivers running in a guest populate the BAR space of the Xen Platform
> > PCI Device with pages such as the Shared Info page or Grant Table pages,
> > accesses to these pages will be cacheable.
> >
> > However, should IOMMU mappings be enabled for the guest then these
> > accesses become uncacheable. This has a substantial negative effect on I/O
> > throughput of PV devices. Arguably PV drivers should not be using BAR space to
> > host the Shared Info and Grant Table pages, but it is currently commonplace for
> > them to do this and so this problem needs mitigation. Hence this patch makes
> > sure the 'ipat' bit is set for any special page regardless of where in GFN
> > space it is mapped.
> >
> > NOTE: Clearly this mitigation only applies to Intel EPT. It is not obvious
> >       that there is any similar mitigation possible for AMD NPT. Downstreams
> >       such as Citrix XenServer have been carrying a patch similar to this for
> >       several releases though.
> 
> https://github.com/xenserver/xen.pg/blob/XS-8.2.x/master/xen-override-caching-cp-26562.patch
> 
> (Yay for internal ticket references escaping into the wild.)
> 

:-)

> 
> However, it is very important to be aware that this is just papering
> over the problem, and it will cease to function as soon as we get MKTME
> support.  When we hit that point, iPAT cannot be used, as it will cause
> data corruption in guests.
> 
> The only correct way to fix this is to not (mis)use BAR space for RAM
> mappings.
> 

Oh yes, this is only a mitigation. I believe Roger is working on a
mechanism for guests to query for non-populated RAM space, which would be
suitable for use by PV drivers.

  Paul

> ~Andrew



From xen-devel-bounces@lists.xenproject.org Fri Jul 31 11:30:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 11:30:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1TEa-0004n1-EW; Fri, 31 Jul 2020 11:29:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=S17i=BK=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1k1TEZ-0004mw-7f
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 11:29:55 +0000
X-Inumbo-ID: 2771b594-d321-11ea-8e26-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2771b594-d321-11ea-8e26-bc764e2007e4;
 Fri, 31 Jul 2020 11:29:54 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 711DEAB55;
 Fri, 31 Jul 2020 11:30:06 +0000 (UTC)
Subject: Re: kernel-doc and xen.git
To: Stefano Stabellini <sstabellini@kernel.org>
References: <alpine.DEB.2.21.2007291644330.1767@sstabellini-ThinkPad-T480s>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <9421ec73-1ec0-844f-0014-bd5a36a4036f@suse.com>
Date: Fri, 31 Jul 2020 13:29:51 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2007291644330.1767@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, committers@xenproject.org,
 Bertrand.Marquis@arm.com
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 30.07.2020 03:27, Stefano Stabellini wrote:
> Hi all,
> 
> I would like to ask for your feedback on the adoption of the kernel-doc
> format for in-code comments.
> 
> In the FuSa SIG we have started looking into FuSa documents for Xen. One
> of the things we are investigating are ways to link these documents to
> in-code comments in xen.git and vice versa.
> 
> In this context, Andrew Cooper suggested having a look at "kernel-doc"
> [1] during one of the virtual beer sessions at the last Xen Summit.
> 
> I had a look at kernel-doc and it is very promising. kernel-doc is
> a script that can generate nice rst text documents from in-code
> comments. (The generated rst files can then be used as input for sphinx
> to generate html docs.) The comment syntax [2] is simple and similar to
> Doxygen:
> 
>     /**
>      * function_name() - Brief description of function.
>      * @arg1: Describe the first argument.
>      * @arg2: Describe the second argument.
>      *        One can provide multiple line descriptions
>      *        for arguments.
>      */
> 
> kernel-doc is actually better than Doxygen because it is a much simpler
> tool, one we could customize to our needs and with predictable output.
> Specifically, we could add the tagging, numbering, and referencing
> required by FuSa requirement documents.
> 
> I would like your feedback on whether it would be good to start
> converting xen.git in-code comments to the kernel-doc format so that
> proper documents can be generated out of them. One day we could import
> kernel-doc into xen.git/scripts and use it to generate a set of html
> documents via sphinx.

How far is this intended to go? The example is a description of a
function's parameters, which is definitely fine (albeit I wonder
if there's a hidden implication then that _all_ functions
whatsoever are supposed to gain such comments). But the text just
says much more generally "in-code comments", which could mean all
of them. I'd consider the latter as likely going too far.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Jul 31 11:36:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 11:36:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1TL2-0005d6-6I; Fri, 31 Jul 2020 11:36:36 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=S17i=BK=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1k1TL0-0005d1-Rh
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 11:36:34 +0000
X-Inumbo-ID: 15660804-d322-11ea-aba1-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 15660804-d322-11ea-aba1-12813bfff9fa;
 Fri, 31 Jul 2020 11:36:33 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 9C3C1AC46;
 Fri, 31 Jul 2020 11:36:45 +0000 (UTC)
Subject: Re: [RESEND][PATCH v2 5/7] xen: include xen/guest_access.h rather
 than asm/guest_access.h
To: Julien Grall <julien@xen.org>
References: <20200730181827.1670-1-julien@xen.org>
 <20200730181827.1670-6-julien@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <0874b4c7-13d4-61c1-c076-c9d7cf3720c7@suse.com>
Date: Fri, 31 Jul 2020 13:36:30 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20200730181827.1670-6-julien@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin Tian <kevin.tian@intel.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Paul Durrant <paul@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Julien Grall <jgrall@amazon.com>, Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Jun Nakajima <jun.nakajima@intel.com>, xen-devel@lists.xenproject.org,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 30.07.2020 20:18, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> Only a few places are actually including asm/guest_access.h. While this
> is fine today, a follow-up patch will want to move most of the helpers
> from asm/guest_access.h to xen/guest_access.h.
> 
> To prepare the move, everyone should include xen/guest_access.h rather
> than asm/guest_access.h.
> 
> Interestingly, asm-arm/guest_access.h includes xen/guest_access.h. The
> inclusion is now removed as no-one but the latter should include the
> former.
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>

Acked-by: Jan Beulich <jbeulich@suse.com>

Is there any chance you could take measures to prevent new inclusions
of asm/guest_access.h from appearing?

Jan


From xen-devel-bounces@lists.xenproject.org Fri Jul 31 11:38:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 11:38:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1TMp-0005ke-IY; Fri, 31 Jul 2020 11:38:27 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nr0X=BK=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1k1TMo-0005kZ-2U
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 11:38:26 +0000
X-Inumbo-ID: 57e8ee26-d322-11ea-8e26-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 57e8ee26-d322-11ea-8e26-bc764e2007e4;
 Fri, 31 Jul 2020 11:38:25 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1k1TMm-0001W4-3J; Fri, 31 Jul 2020 12:38:24 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH v2 00/41] Performance work
Date: Fri, 31 Jul 2020 12:37:39 +0100
Message-Id: <20200731113820.5765-1-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <George.Dunlap@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

This is a combination of two series and some new work:
 * [OSSTEST PATCH 00/14] Flight report performance improvements
 * [OSSTEST PATCH 00/11] Improve performance of sg-report-host-history
 * New work to improve the performance of cs-bisection-step
 * Fixes to usage of SQL LIKE

Thanks to George for reviews of some of the most critical patches to
the regression analyser in sg-report-flight.  Where necessary I have
rebased those reviewed patches over the SQL LIKE fixes, and in those
cases I retained the reviewed-by.  I hope that's OK.

Outstanding in my perf programme is sg-report-job-history.

Ian Jackson (41):
  Add cperl-indent-level to .dir-locals.el
  SQL: Use "LIKE" rather than "like", etc.
  SQL: Fix incorrect LIKE pattern syntax (literals)
  SQL: Fix incorrect LIKE pattern syntax (program variables)
  sg-report-flight: Add a comment re same-flight search narrowing
  sg-report-flight: Sort failures by job name as last resort
  schema: Provide indices for sg-report-flight
  sg-report-flight: Ask the db for flights of interest
  sg-report-flight: Use WITH so the best index is used for $flightsq
  sg-report-flight: Use WITH clause to use index for $anypassq
  sg-report-flight: Use the job row from the initial query
  Executive: Use index for report__find_test
  duration_estimator: Ignore truncated jobs unless we know the step
  duration_estimator: Introduce some _qtxt variables
  duration_estimator: Explicitly provide null in general host q
  duration_estimator: Return job column in first query
  duration_estimator: Move $uptincl_testid to separate @x_params
  duration_estimator: Move duration query loop into database
  Executive: Drop redundant AND clause
  schema: Add index for quick lookup by host
  sg-report-host-history: Find flight limit by flight start date
  sg-report-host-history: Drop per-job debug etc.
  Executive: Export opendb_tests
  sg-report-host-history: Add a debug print after sorting jobs
  sg-report-host-history: Do the main query per host
  sg-report-host-history: Reorganisation: Make mainquery per-host
  sg-report-host-history: Reorganisation: Read old logs later
  sg-report-host-history: Reorganisation: Change loops
  sg-report-host-history: Drop a redundant AND clause
  sg-report-host-history: Fork
  schema: Add index to help cs-bisection-step
  adhoc-revtuple-generator: Fix an undef warning in a debug print
  cs-bisection-step: Generalise qtxt_common_rev_ok
  cs-bisection-step: Move an AND
  cs-bisection-step: Break out qtxt_common_ok
  cs-bisection-step: Use db_prepare a few times instead of ->do
  cs-bisection-step: temporary table: Insert only rows we care about
  SQL: Change LIKE E'...\\_...' to LIKE '...\_...'
  cs-bisection-step: Add a debug print when we run dot(1)
  cs-bisection-step: Lay out the revision tuple graph once
  duration_estimator: Clarify recentflights query a bit

 .dir-locals.el                    |   3 +-
 Osstest.pm                        |   8 +-
 Osstest/Executive.pm              |  79 +++++++++------
 Osstest/JobDB/Executive.pm        |   2 +-
 adhoc-revtuple-generator          |   2 +-
 cr-ensure-disk-space              |   4 +-
 cs-adjust-flight                  |   2 +-
 cs-bisection-step                 |  51 +++++++---
 mg-force-push                     |   2 +-
 mg-report-host-usage-collect      |  10 +-
 ms-planner                        |   2 +-
 schema/runvars-built-index.sql    |   7 ++
 schema/runvars-host-index.sql     |   8 ++
 schema/runvars-revision-index.sql |   7 ++
 schema/steps-broken-index.sql     |   7 ++
 schema/steps-job-index.sql        |   7 ++
 sg-report-flight                  | 129 ++++++++++++++++++++----
 sg-report-host-history            | 161 +++++++++++++++++-------------
 sg-report-job-history             |   4 +-
 ts-logs-capture                   |   2 +-
 20 files changed, 344 insertions(+), 153 deletions(-)
 create mode 100644 schema/runvars-built-index.sql
 create mode 100644 schema/runvars-host-index.sql
 create mode 100644 schema/runvars-revision-index.sql
 create mode 100644 schema/steps-broken-index.sql
 create mode 100644 schema/steps-job-index.sql

-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri Jul 31 11:38:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 11:38:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1TMu-0005lT-Qe; Fri, 31 Jul 2020 11:38:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nr0X=BK=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1k1TMt-0005kZ-08
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 11:38:31 +0000
X-Inumbo-ID: 59721e70-d322-11ea-8e26-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 59721e70-d322-11ea-8e26-bc764e2007e4;
 Fri, 31 Jul 2020 11:38:27 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1k1TMo-0001W4-Im; Fri, 31 Jul 2020 12:38:26 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH v2 01/41] Add cperl-indent-level to .dir-locals.el
Date: Fri, 31 Jul 2020 12:37:40 +0100
Message-Id: <20200731113820.5765-2-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200731113820.5765-1-ian.jackson@eu.citrix.com>
References: <20200731113820.5765-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

My personal config on my laptop has this set to 2 and that makes
editing osstest, which uses 4, quite annoying.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
New in v2.
---
 .dir-locals.el | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/.dir-locals.el b/.dir-locals.el
index d87916f7..ad4fa3dc 100644
--- a/.dir-locals.el
+++ b/.dir-locals.el
@@ -1 +1,2 @@
-((nil . ((indent-tabs-mode . t))))
+((nil . ((indent-tabs-mode . t)))
+ (cperl-mode . ((cperl-indent-level . 4))))
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri Jul 31 11:38:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 11:38:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1TMz-0005mh-3A; Fri, 31 Jul 2020 11:38:37 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nr0X=BK=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1k1TMy-0005kZ-0D
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 11:38:36 +0000
X-Inumbo-ID: 59a52144-d322-11ea-8e26-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 59a52144-d322-11ea-8e26-bc764e2007e4;
 Fri, 31 Jul 2020 11:38:27 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1k1TMp-0001W4-15; Fri, 31 Jul 2020 12:38:27 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH v2 02/41] SQL: Use "LIKE" rather than "like", etc.
Date: Fri, 31 Jul 2020 12:37:41 +0100
Message-Id: <20200731113820.5765-3-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200731113820.5765-1-ian.jackson@eu.citrix.com>
References: <20200731113820.5765-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

This is more consistent with the rest of the style.  It will also make it easier
to find instances of the mistaken LIKE syntax.

I found these with "git grep" and manually edited them.  I have
checked the before-and-after result of
   find * -type f | xargs perl -i~ -pe 's/\bLIKE\b/like/g'
and it has only the few expected changes to ANDs and ORs.

No functional change.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
New in v2.
---
 cr-ensure-disk-space         |  4 ++--
 cs-adjust-flight             |  2 +-
 mg-force-push                |  2 +-
 mg-report-host-usage-collect | 10 +++++-----
 ms-planner                   |  2 +-
 sg-report-flight             |  2 +-
 sg-report-host-history       |  4 ++--
 7 files changed, 13 insertions(+), 13 deletions(-)

diff --git a/cr-ensure-disk-space b/cr-ensure-disk-space
index 3e0288f9..11d801b0 100755
--- a/cr-ensure-disk-space
+++ b/cr-ensure-disk-space
@@ -99,8 +99,8 @@ my $chkq= db_prepare("SELECT * FROM flights WHERE flight=?");
 my $refq= db_prepare(<<END);
     SELECT flight, val
       FROM runvars
-     WHERE name like '%job'
-       AND val like '%.%'
+     WHERE name LIKE '%job'
+       AND val LIKE '%.%'
        AND flight >= ?
 END
 
diff --git a/cs-adjust-flight b/cs-adjust-flight
index 98d40891..d04a2fd7 100755
--- a/cs-adjust-flight
+++ b/cs-adjust-flight
@@ -526,7 +526,7 @@ sub change__repro_buildjobs {
 	}
     }
     my $testq = db_prepare(<<END);
-SELECT name, val FROM runvars WHERE flight=? AND job=? AND name like '%job';
+SELECT name, val FROM runvars WHERE flight=? AND job=? AND name LIKE '%job';
 END
     my $buildq_txt = <<END;
 SELECT name FROM runvars WHERE flight=? AND job=? AND ('f'
diff --git a/mg-force-push b/mg-force-push
index 1066a300..001e0c47 100755
--- a/mg-force-push
+++ b/mg-force-push
@@ -54,7 +54,7 @@ END
         FROM rv url
         JOIN rv built
              ON url.job    = built.job
-            AND url.name   like 'tree_%'
+            AND url.name   LIKE 'tree_%'
             AND built.name = 'built_revision_' || substring(url.name, 6)
        WHERE url.val = ?
 END
diff --git a/mg-report-host-usage-collect b/mg-report-host-usage-collect
index 160d295f..3fab490a 100755
--- a/mg-report-host-usage-collect
+++ b/mg-report-host-usage-collect
@@ -154,10 +154,10 @@ END
         SELECT finished    prep_finished,
                status      prep_status
           FROM steps prep
-         WHERE flight=? and job=?
+         WHERE flight=? AND job=?
            AND prep.finished IS NOT NULL
            AND (prep.step='ts-host-build-prep'
-            OR  prep.step like 'ts-host-install%')
+            OR  prep.step LIKE 'ts-host-install%')
       ORDER BY stepno DESC
          LIMIT 1
 END
@@ -165,14 +165,14 @@ END
     my $hostsq = db_prepare(<<END);
         SELECT val, synth
           FROM runvars
-         WHERE flight=? and job=?
-           AND (name like '%_host' or name='host')
+         WHERE flight=? AND job=?
+           AND (name LIKE '%_host' OR name='host')
 END
 
     my $finishq = db_prepare(<<END);
         SELECT max(finished) AS finished
           FROM steps
-         WHERE flight=? and job=?
+         WHERE flight=? AND job=?
 END
 
     progress1 "minflight $minflight executing...";
diff --git a/ms-planner b/ms-planner
index c70b46b0..11423404 100755
--- a/ms-planner
+++ b/ms-planner
@@ -72,7 +72,7 @@ sub allocations ($$) {
                        ON owntaskid = taskid
 		    WHERE NOT (tasks.type='magic' AND
                                tasks.refkey='allocatable')
-                      AND NOT (resources.restype like 'share-%'
+                      AND NOT (resources.restype LIKE 'share-%'
                            AND NOT EXISTS (
  SELECT 1 FROM resource_sharing sh
          WHERE sh.restype = substring(resources.restype from 7)
diff --git a/sg-report-flight b/sg-report-flight
index 6c481f6f..0edb6e1a 100755
--- a/sg-report-flight
+++ b/sg-report-flight
@@ -513,7 +513,7 @@ END
         my $revh= db_prepare(<<END);
             SELECT * FROM runvars
                 WHERE flight=$flight AND job='$j->{job}'
-                  AND name like 'built_revision_%'
+                  AND name LIKE 'built_revision_%'
                 ORDER BY name
 END
         # We report in jobtext revisions in non-main-revision jobs, too.
diff --git a/sg-report-host-history b/sg-report-host-history
index 54738e68..c22a1704 100755
--- a/sg-report-host-history
+++ b/sg-report-host-history
@@ -37,7 +37,7 @@ our @blessings;
 
 open DEBUG, ">/dev/null";
 
-my $namecond= "(name = 'host' or name like '%_host')";
+my $namecond= "(name = 'host' OR name LIKE '%_host')";
 csreadconfig();
 
 while (@ARGV && $ARGV[0] =~ m/^-/) {
@@ -456,7 +456,7 @@ foreach my $host (@ARGV) {
 	        SELECT DISTINCT val
 		  FROM runvars
 		 WHERE flight=?
-		   AND (name = 'host' or name like '%_host')
+		   AND (name = 'host' OR name LIKE '%_host')
 END
             $hostsinflightq->execute($flight);
 	    while (my $row = $hostsinflightq->fetchrow_hashref()) {
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri Jul 31 11:38:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 11:38:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1TN4-0005oL-At; Fri, 31 Jul 2020 11:38:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nr0X=BK=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1k1TN3-0005kZ-0H
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 11:38:41 +0000
X-Inumbo-ID: 59d8fa33-d322-11ea-8e26-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 59d8fa33-d322-11ea-8e26-bc764e2007e4;
 Fri, 31 Jul 2020 11:38:28 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1k1TMp-0001W4-Cw; Fri, 31 Jul 2020 12:38:27 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH v2 03/41] SQL: Fix incorrect LIKE pattern syntax
 (literals)
Date: Fri, 31 Jul 2020 12:37:42 +0100
Message-Id: <20200731113820.5765-4-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200731113820.5765-1-ian.jackson@eu.citrix.com>
References: <20200731113820.5765-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

LIKE takes a weird SQLish glob pattern, where % is like a glob *
and (relevantly, here) _ is like a glob ?.

Every _ in one of these LIKE patterns needs to be escaped with \.

Do that for all the literal LIKE patterns.

This fixes bugs: generally, bugs where the wrong rows might be
returned (although the data probably does not contain any such rows).
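
To see why the unescaped `_` is a bug, here is an illustrative sketch
(not part of the patch) using Python's sqlite3; note that SQLite needs
an explicit ESCAPE clause, whereas PostgreSQL treats backslash as the
LIKE escape character by default:

```python
import sqlite3

# In SQL LIKE patterns, % matches any sequence and _ matches any single
# character, so an unescaped '%_host' also matches e.g. 'xhost'.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE runvars (name TEXT)")
db.executemany("INSERT INTO runvars VALUES (?)",
               [("xen_host",), ("xhost",), ("host",)])

unescaped = [r[0] for r in db.execute(
    "SELECT name FROM runvars WHERE name LIKE '%_host'")]
# SQLite needs ESCAPE spelled out; PostgreSQL defaults to backslash.
escaped = [r[0] for r in db.execute(
    r"SELECT name FROM runvars WHERE name LIKE '%\_host' ESCAPE '\'")]

print(sorted(unescaped))  # ['xen_host', 'xhost'] - '_' matched the 'x'
print(sorted(escaped))    # ['xen_host'] - only a literal underscore
```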

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
New in v2.
---
 mg-force-push                | 2 +-
 mg-report-host-usage-collect | 2 +-
 sg-report-flight             | 2 +-
 sg-report-host-history       | 6 +++---
 ts-logs-capture              | 2 +-
 5 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/mg-force-push b/mg-force-push
index 001e0c47..3a701a11 100755
--- a/mg-force-push
+++ b/mg-force-push
@@ -54,7 +54,7 @@ END
         FROM rv url
         JOIN rv built
              ON url.job    = built.job
-            AND url.name   LIKE 'tree_%'
+            AND url.name   LIKE 'tree\_%'
             AND built.name = 'built_revision_' || substring(url.name, 6)
        WHERE url.val = ?
 END
diff --git a/mg-report-host-usage-collect b/mg-report-host-usage-collect
index 3fab490a..1944c8d7 100755
--- a/mg-report-host-usage-collect
+++ b/mg-report-host-usage-collect
@@ -166,7 +166,7 @@ END
         SELECT val, synth
           FROM runvars
          WHERE flight=? AND job=?
-           AND (name LIKE '%_host' OR name='host')
+           AND (name LIKE '%\_host' OR name='host')
 END
 
     my $finishq = db_prepare(<<END);
diff --git a/sg-report-flight b/sg-report-flight
index 0edb6e1a..831917a9 100755
--- a/sg-report-flight
+++ b/sg-report-flight
@@ -513,7 +513,7 @@ END
         my $revh= db_prepare(<<END);
             SELECT * FROM runvars
                 WHERE flight=$flight AND job='$j->{job}'
-                  AND name LIKE 'built_revision_%'
+                  AND name LIKE 'built\_revision\_%'
                 ORDER BY name
 END
         # We report in jobtext revisions in non-main-revision jobs, too.
diff --git a/sg-report-host-history b/sg-report-host-history
index c22a1704..7505b18b 100755
--- a/sg-report-host-history
+++ b/sg-report-host-history
@@ -37,7 +37,7 @@ our @blessings;
 
 open DEBUG, ">/dev/null";
 
-my $namecond= "(name = 'host' OR name LIKE '%_host')";
+my $namecond= "(name = 'host' OR name LIKE '%\_host')";
 csreadconfig();
 
 while (@ARGV && $ARGV[0] =~ m/^-/) {
@@ -256,7 +256,7 @@ END
 	  FROM runvars
 	 WHERE flight=? AND job=?
            AND (
-               name LIKE (? || '_power_%')
+               name LIKE (? || '\_power\_%')
            )
 END
 
@@ -456,7 +456,7 @@ foreach my $host (@ARGV) {
 	        SELECT DISTINCT val
 		  FROM runvars
 		 WHERE flight=?
-		   AND (name = 'host' OR name LIKE '%_host')
+		   AND (name = 'host' OR name LIKE '%\_host')
 END
             $hostsinflightq->execute($flight);
 	    while (my $row = $hostsinflightq->fetchrow_hashref()) {
diff --git a/ts-logs-capture b/ts-logs-capture
index d75a2fda..62c281b8 100755
--- a/ts-logs-capture
+++ b/ts-logs-capture
@@ -44,7 +44,7 @@ our (@allguests, @guests);
 sub find_guests () {
     my $sth= $dbh_tests->prepare(<<END);
         SELECT name FROM runvars WHERE flight=? AND job=?
-            AND name LIKE '%_domname'
+            AND name LIKE '%\_domname'
             ORDER BY name
 END
     $sth->execute($flight, $job);
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri Jul 31 11:38:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 11:38:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1TN9-0005q6-LI; Fri, 31 Jul 2020 11:38:47 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nr0X=BK=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1k1TN8-0005kZ-0K
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 11:38:46 +0000
X-Inumbo-ID: 5a418c50-d322-11ea-8e26-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5a418c50-d322-11ea-8e26-bc764e2007e4;
 Fri, 31 Jul 2020 11:38:28 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1k1TMp-0001W4-T6; Fri, 31 Jul 2020 12:38:28 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH v2 04/41] SQL: Fix incorrect LIKE pattern syntax
 (program variables)
Date: Fri, 31 Jul 2020 12:37:43 +0100
Message-Id: <20200731113820.5765-5-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200731113820.5765-1-ian.jackson@eu.citrix.com>
References: <20200731113820.5765-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

In two places the pattern for LIKE is constructed programmatically.
In this case, too, we need to escape % and _.

We pass the actual pattern (or pattern fragment) via ?, so we do not
need to worry about '.
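
A hypothetical Python counterpart of the new Perl helper
db_quote_like_pattern, for illustration only (the name and behaviour
mirror the patch; the Python version is not part of it):

```python
import re

# Backslash-escape the LIKE wildcards % and _, and backslash itself,
# so the resulting pattern matches the input string literally.
def quote_like_pattern(s: str) -> str:
    return re.sub(r'[_%\\]', lambda m: '\\' + m.group(0), s)

print(quote_like_pattern('built_revision_%'))  # built\_revision\_\%
```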

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
New in v2.
---
 Osstest.pm                 | 8 +++++++-
 Osstest/JobDB/Executive.pm | 2 +-
 sg-report-host-history     | 3 ++-
 3 files changed, 10 insertions(+), 3 deletions(-)

diff --git a/Osstest.pm b/Osstest.pm
index 63dddd95..b2b6b741 100644
--- a/Osstest.pm
+++ b/Osstest.pm
@@ -39,7 +39,7 @@ BEGIN {
                       main_revision_job_cond other_revision_job_suffix
                       $dbh_tests db_retry db_retry_retry db_retry_abort
 		      db_readonly_report
-                      db_begin_work db_prepare
+                      db_begin_work db_prepare db_quote_like_pattern
                       get_harness_rev blessing_must_not_modify_host
                       ensuredir get_filecontents_core_quiet system_checked
                       nonempty visible_undef show_abs_time
@@ -358,6 +358,12 @@ sub postfork () {
     $mjobdb->jobdb_postfork();
 }
 
+sub db_quote_like_pattern ($) {
+    local ($_) = @_;
+    s{[_%\\]}{\\$&}g;
+    $_;
+}
+
 #---------- script entrypoints ----------
 
 sub csreadconfig () {
diff --git a/Osstest/JobDB/Executive.pm b/Osstest/JobDB/Executive.pm
index be5588fc..39deb8a2 100644
--- a/Osstest/JobDB/Executive.pm
+++ b/Osstest/JobDB/Executive.pm
@@ -143,7 +143,7 @@ sub _check_testdbs ($) {
 	      AND live
 	      AND username LIKE (? || '@%')
 END
-    $sth->execute($c{Username});
+    $sth->execute(db_quote_like_pattern($c{Username}));
     my $allok = 1;
     while (my $row = $sth->fetchrow_hashref()) {
 	next if $row->{dbname} =~ m/^$re$/o;
diff --git a/sg-report-host-history b/sg-report-host-history
index 7505b18b..9730ae7a 100755
--- a/sg-report-host-history
+++ b/sg-report-host-history
@@ -380,7 +380,8 @@ END
 	    $runvarq_hits++;
 	} else {
 	    $runvarq_misses++;
-	    $jrunvarq->execute($jr->{flight}, $jr->{job}, $ident);
+	    $jrunvarq->execute($jr->{flight}, $jr->{job},
+			       db_quote_like_pattern($ident));
 	    my %runvars;
 	    while (my ($n, $v) = $jrunvarq->fetchrow_array()) {
 		$runvars{$n} = $v;
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri Jul 31 11:38:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 11:38:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1TNF-0005sq-1o; Fri, 31 Jul 2020 11:38:53 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nr0X=BK=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1k1TND-0005kZ-0M
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 11:38:51 +0000
X-Inumbo-ID: 59d8fa34-d322-11ea-8e26-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 59d8fa34-d322-11ea-8e26-bc764e2007e4;
 Fri, 31 Jul 2020 11:38:29 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1k1TMq-0001W4-Bf; Fri, 31 Jul 2020 12:38:28 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH v2 05/41] sg-report-flight: Add a comment re
 same-flight search narrowing
Date: Fri, 31 Jul 2020 12:37:44 +0100
Message-Id: <20200731113820.5765-6-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200731113820.5765-1-ian.jackson@eu.citrix.com>
References: <20200731113820.5765-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

In afe851ca1771e5da6395b596afa69e509dbbc278
  sg-report-flight: When justifying, disregard out-of-flight build jobs
we narrowed sg-report-flight's search algorithm.

An extensive justification is in that commit's message.  I think much
of this information belongs in-tree, so I have copied it (with slight
edits) here.

No code change.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 sg-report-flight | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

diff --git a/sg-report-flight b/sg-report-flight
index 831917a9..fc439495 100755
--- a/sg-report-flight
+++ b/sg-report-flight
@@ -242,9 +242,27 @@ END
 	# jobs.  We start with all jobs in $tflight, and for each job
 	# we also process any other jobs it refers to in *buildjob runvars.
 	#
+	# The real thing we want to check that the build jobs *in the
+	# same flight as the justifying job* used the right revisions.
+	# Build jobs from other flights were either (i) build jobs for
+	# components not being targed for testing by this branch, but
+	# which were necessary for the justifying job and for which we
+	# decided to reuse another build job (in which case we don't
+	# really care what versions they used, even if underlying it
+	# all there might be a different version of a tree we are
+	# actually interested in (ii) the kind of continuous update
+	# thing seen with freebsdbuildjob.
+	#
+	# (This is rather different to cs-bisection-step, which is
+	# less focused on changes in a particular set of trees.)
+	#
+	# So we limit the scope of our recursive descent into build
+	# jobs, to jobs in the same flight.
+	#
 	# We don't actually use a recursive algorithm because that
 	# would involve recursive use of the same sql query object;
 	# hence the @binfos_todo queue.
+
 	my @binfos_todo;
 	my $binfos_queue = sub {
 	    my ($inflight,$q,$why) = @_;
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri Jul 31 11:38:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 11:38:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1TNJ-0005vS-Bl; Fri, 31 Jul 2020 11:38:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nr0X=BK=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1k1TNI-0005kZ-0W
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 11:38:56 +0000
X-Inumbo-ID: 5aa0cf12-d322-11ea-8e26-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5aa0cf12-d322-11ea-8e26-bc764e2007e4;
 Fri, 31 Jul 2020 11:38:29 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1k1TMq-0001W4-Ki; Fri, 31 Jul 2020 12:38:28 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH v2 06/41] sg-report-flight: Sort failures by job name
 as last resort
Date: Fri, 31 Jul 2020 12:37:45 +0100
Message-Id: <20200731113820.5765-7-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200731113820.5765-1-ian.jackson@eu.citrix.com>
References: <20200731113820.5765-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

This removes some nondeterminism from the output.
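
The effect of such a last-resort tiebreaker can be sketched in Python
(illustrative only; field names follow the patch, the real ordering in
sg-report-flight has further keys before these):

```python
# Sort failures by finish time, then step number, then job name, so
# records that tie on the earlier keys still come out in a
# deterministic order.
failures = [
    {"Finished": 100, "Stepno": 2, "Job": "test-amd64-xl"},
    {"Finished": 100, "Stepno": 2, "Job": "test-armhf-libvirt"},
    {"Finished": 90,  "Stepno": 5, "Job": "build-amd64"},
]

failures.sort(key=lambda f: (f["Finished"], f["Stepno"], f["Job"]))
print([f["Job"] for f in failures])
# ['build-amd64', 'test-amd64-xl', 'test-armhf-libvirt']
```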

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 sg-report-flight | 1 +
 1 file changed, 1 insertion(+)

diff --git a/sg-report-flight b/sg-report-flight
index fc439495..7f2790ce 100755
--- a/sg-report-flight
+++ b/sg-report-flight
@@ -813,6 +813,7 @@ END
 	# they finished in the same second, we pick the lower-numbered
 	# step, which is the earlier one (if they are sequential at
 	# all).
+	or $a->{Job} cmp $b->{Job}
     }
         @failures;
 
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri Jul 31 11:39:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 11:39:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1TNO-0005yx-Mi; Fri, 31 Jul 2020 11:39:02 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nr0X=BK=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1k1TNN-0005kZ-0h
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 11:39:01 +0000
X-Inumbo-ID: 5aa0cf14-d322-11ea-8e26-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5aa0cf14-d322-11ea-8e26-bc764e2007e4;
 Fri, 31 Jul 2020 11:38:29 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1k1TMr-0001W4-0L; Fri, 31 Jul 2020 12:38:29 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH v2 07/41] schema: Provide indices for sg-report-flight
Date: Fri, 31 Jul 2020 12:37:46 +0100
Message-Id: <20200731113820.5765-8-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200731113820.5765-1-ian.jackson@eu.citrix.com>
References: <20200731113820.5765-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <George.Dunlap@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

These indexes allow very fast lookup of "relevant" flights eg when
trying to justify failures.

In my ad-hoc test case, these indices (along with the subsequent
changes to sg-report-flight and Executive.pm) reduce the runtime of
sg-report-flight from 2-3ks (unacceptably long!) to as little as
5-7 seconds, a speedup of about 500x.

(Getting the database snapshot may take a while first, but deploying
this code should help with that too by reducing long-running
transactions.  Quoted perf timings are from snapshot acquisition.)

Without these new indexes there may be a performance change from the
query changes.  I haven't benchmarked this so I am setting the schema
updates to be Preparatory/Needed (ie, "Schema first" as
schema/README.updates has it), to say that the index should be created
before the new code is deployed.
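
For illustration only (not part of the patch), a partial index of this
shape can be sketched self-containedly in SQLite, with a prefix range
on `name` standing in for PostgreSQL's LIKE 'built\_revision\_%'
condition ('`' is the character after '_' in ASCII):

```python
import sqlite3

# A partial index in the spirit of runvars_built_revision_idx: index
# val only for rows whose name starts with 'built_revision_'.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE runvars (flight INTEGER, name TEXT, val TEXT)")
db.execute("""
    CREATE INDEX runvars_built_revision_idx
        ON runvars (val)
     WHERE name >= 'built_revision_' AND name < 'built_revision`'
""")

# The planner can use the partial index when a query restates the
# index's WHERE clause and filters on the indexed column.
for row in db.execute("""
    EXPLAIN QUERY PLAN
    SELECT flight FROM runvars
     WHERE name >= 'built_revision_' AND name < 'built_revision`'
       AND val = '165f3afbfc3db70fcfdccad07085cde0a03c858b'
"""):
    print(row)
```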

Testing: I have tested this series by creating experimental indices
"trial_..." in the actual production instance.  (Transactional DDL was
very helpful with this.)  I have verified with \d that the schema
update instructions in this commit generate indexes equivalent to
the trial indices.

Deployment: After these schema updates are applied, the trial indices
are redundant duplicates and should be deleted.

CC: George Dunlap <George.Dunlap@citrix.com>
Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
v2: Use proper \ escaping for underscores in LIKE
---
 schema/runvars-built-index.sql    | 7 +++++++
 schema/runvars-revision-index.sql | 7 +++++++
 schema/steps-job-index.sql        | 7 +++++++
 3 files changed, 21 insertions(+)
 create mode 100644 schema/runvars-built-index.sql
 create mode 100644 schema/runvars-revision-index.sql
 create mode 100644 schema/steps-job-index.sql

diff --git a/schema/runvars-built-index.sql b/schema/runvars-built-index.sql
new file mode 100644
index 00000000..7108e0af
--- /dev/null
+++ b/schema/runvars-built-index.sql
@@ -0,0 +1,7 @@
+-- ##OSSTEST## 007 Preparatory
+--
+-- This index helps sg-report-flight find relevant flights.
+
+CREATE INDEX runvars_built_revision_idx
+    ON runvars (val)
+ WHERE name LIKE 'built\_revision\_%';
diff --git a/schema/runvars-revision-index.sql b/schema/runvars-revision-index.sql
new file mode 100644
index 00000000..8871b528
--- /dev/null
+++ b/schema/runvars-revision-index.sql
@@ -0,0 +1,7 @@
+-- ##OSSTEST## 008 Preparatory
+--
+-- This index helps Executive::report__find_test find relevant flights.
+
+CREATE INDEX runvars_revision_idx
+    ON runvars (val)
+ WHERE name LIKE 'revision\_%';
diff --git a/schema/steps-job-index.sql b/schema/steps-job-index.sql
new file mode 100644
index 00000000..07dc5a30
--- /dev/null
+++ b/schema/steps-job-index.sql
@@ -0,0 +1,7 @@
+-- ##OSSTEST## 006 Preparatory
+--
+-- This index helps sg-report-flight find if a test ever passed.
+
+CREATE INDEX steps_job_testid_status_idx
+    ON steps (job, testid, status);
+
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri Jul 31 11:39:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 11:39:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1TNU-00062m-0s; Fri, 31 Jul 2020 11:39:08 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nr0X=BK=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1k1TNS-0005kZ-0g
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 11:39:06 +0000
X-Inumbo-ID: 5aa0cf15-d322-11ea-8e26-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5aa0cf15-d322-11ea-8e26-bc764e2007e4;
 Fri, 31 Jul 2020 11:38:30 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1k1TMr-0001W4-BA; Fri, 31 Jul 2020 12:38:29 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH v2 08/41] sg-report-flight: Ask the db for flights of
 interest
Date: Fri, 31 Jul 2020 12:37:47 +0100
Message-Id: <20200731113820.5765-9-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200731113820.5765-1-ian.jackson@eu.citrix.com>
References: <20200731113820.5765-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <George.Dunlap@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Specifically, we narrow the initial query to flights which have at
least some job with the built_revision_foo we are looking for.

This condition is strictly broader than that implemented inside the
flight search loop, so there is no functional change.

Perf: runtime of my test case now ~300s-500s.

Example query before (from the Perl DBI trace):

      SELECT * FROM (
        SELECT flight, blessing FROM flights
            WHERE (branch='xen-unstable')
              AND                   EXISTS (SELECT 1
                            FROM jobs
                           WHERE jobs.flight = flights.flight
                             AND jobs.job = ?)

              AND ( (TRUE AND flight <= 151903) AND (blessing='real') )
            ORDER BY flight DESC
            LIMIT 1000
      ) AS sub
      ORDER BY blessing ASC, flight DESC

With these bind variables:

    "test-armhf-armhf-libvirt"

After:

      SELECT * FROM (
        SELECT DISTINCT flight, blessing
             FROM flights
             JOIN runvars r1 USING (flight)

            WHERE (branch='xen-unstable')
              AND ( (TRUE AND flight <= 151903) AND (blessing='real') )
                  AND EXISTS (SELECT 1
                            FROM jobs
                           WHERE jobs.flight = flights.flight
                             AND jobs.job = ?)

              AND r1.name LIKE 'built\_revision\_%'
              AND r1.name = ?
              AND r1.val= ?

            ORDER BY flight DESC
            LIMIT 1000
      ) AS sub
      ORDER BY blessing ASC, flight DESC

With these bind variables:

      "test-armhf-armhf-libvirt"
      'built_revision_xen'
      '165f3afbfc3db70fcfdccad07085cde0a03c858b'

Diff to the query:

      SELECT * FROM (
-        SELECT flight, blessing FROM flights
+        SELECT DISTINCT flight, blessing
+             FROM flights
+             JOIN runvars r1 USING (flight)
+
             WHERE (branch='xen-unstable')
+              AND ( (TRUE AND flight <= 151903) AND (blessing='real') )
               AND                   EXISTS (SELECT 1
                             FROM jobs
                            WHERE jobs.flight = flights.flight
                              AND jobs.job = ?)

-              AND ( (TRUE AND flight <= 151903) AND (blessing='real') )
+              AND r1.name LIKE 'built\_revision\_%'
+              AND r1.name = ?
+              AND r1.val= ?
+
             ORDER BY flight DESC
             LIMIT 1000
       ) AS sub

CC: George Dunlap <George.Dunlap@citrix.com>
Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
v2: Use proper \ escaping for underscores in LIKE
---
 schema/runvars-built-index.sql |  2 +-
 sg-report-flight               | 64 ++++++++++++++++++++++++++++++++--
 2 files changed, 62 insertions(+), 4 deletions(-)

diff --git a/schema/runvars-built-index.sql b/schema/runvars-built-index.sql
index 7108e0af..128e69e9 100644
--- a/schema/runvars-built-index.sql
+++ b/schema/runvars-built-index.sql
@@ -1,4 +1,4 @@
--- ##OSSTEST## 007 Preparatory
+-- ##OSSTEST## 007 Needed
 --
 -- This index helps sg-report-flight find relevant flights.
 
diff --git a/sg-report-flight b/sg-report-flight
index 7f2790ce..10127582 100755
--- a/sg-report-flight
+++ b/sg-report-flight
@@ -185,19 +185,77 @@ END
     if (defined $job) {
 	push @flightsq_params, $job;
 	$flightsq_jobcond = <<END;
-                  EXISTS (SELECT 1
+                  AND EXISTS (SELECT 1
 			    FROM jobs
 			   WHERE jobs.flight = flights.flight
 			     AND jobs.job = ?)
 END
     }
 
+    # We build a slightly complicated query to find possibly-relevant
+    # flights.  A "possibly-relevant" flight is one which the main
+    # flight categorisation algorithm below (the loop over $tflight)
+    # *might* decide is of interest.
+    #
+    # That algorithm produces a table of which revision(s) of what
+    # %specver trees the build jobs for the relevant test job used.
+    # And then it insists (amongst other things) that for each such
+    # tree the revision in question appears.
+    #
+    # It only looks at build jobs within the flight.  So any flight
+    # that the main algorithm finds interesting will have *some* job
+    # (in the same flight) mentioning that revision in a built
+    # revision runvar.  So we can search the runvars table by its
+    # index on the revision.
+    #
+    # So we look for flights that have an appropriate entry in runvars
+    # for each %specver tree.  We can do this by joining the runvar
+    # table once for each tree.
+    #
+    # The "osstest" tree is handled specially, as ever.  (We use
+    # "r$ri" there too for orthogonality of the code, not because
+    # there could be multiple specifications for the osstest revision.)
+    #
+    # This complex query is an optimisation: for correctness, we must
+    # still execute the full job-specific recursive examination, for
+    # each possibly-relevant flight - that's the $tflight loop body.
+
+    my $runvars_joins = '';
+    my $runvars_conds = '';
+    my $ri=0;
+    foreach my $tree (sort keys %{ $specver{$thisthat} }) {
+      $ri++;
+      if ($tree ne 'osstest') {
+	  $runvars_joins .= <<END;
+             JOIN runvars r$ri USING (flight)
+END
+	  $runvars_conds .= <<END;
+              AND r$ri.name LIKE 'built\_revision\_%' 
+              AND r$ri.name = ?
+              AND r$ri.val= ?
+END
+	  push @flightsq_params, "built_revision_$tree",
+	                     $specver{$thisthat}{$tree};
+      } else {
+	  $runvars_joins .= <<END;
+             JOIN flights_harness_touched r$ri USING (flight)
+END
+	  $runvars_conds .= <<END;
+              AND r$ri.harness= ?
+END
+	  push @flightsq_params, $specver{$thisthat}{$tree};
+      }
+    }
+
     my $flightsq= <<END;
       SELECT * FROM (
-        SELECT flight, blessing FROM flights
+        SELECT DISTINCT flight, blessing
+             FROM flights
+$runvars_joins
             WHERE $branches_cond_q
-              AND $flightsq_jobcond
               AND $blessingscond
+$flightsq_jobcond
+$runvars_conds
             ORDER BY flight DESC
             LIMIT 1000
       ) AS sub
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri Jul 31 11:39:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 11:39:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1TNX-00065Q-AR; Fri, 31 Jul 2020 11:39:11 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nr0X=BK=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1k1TNX-0005kZ-0j
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 11:39:11 +0000
X-Inumbo-ID: 5aa0cf16-d322-11ea-8e26-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5aa0cf16-d322-11ea-8e26-bc764e2007e4;
 Fri, 31 Jul 2020 11:38:30 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1k1TMr-0001W4-KU; Fri, 31 Jul 2020 12:38:29 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH v2 09/41] sg-report-flight: Use WITH to use best index
 use for $flightsq
Date: Fri, 31 Jul 2020 12:37:48 +0100
Message-Id: <20200731113820.5765-10-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200731113820.5765-1-ian.jackson@eu.citrix.com>
References: <20200731113820.5765-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <George.Dunlap@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

While we're here, convert this EXISTS subquery to a JOIN.

Perf: runtime of my test case now ~200-300s.
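
The shape of the transformation can be sanity-checked on a toy schema
(invented data; SQLite via Python's sqlite3 only demonstrates that the
two query shapes return the same rows — the planner effect of WITH
that motivates this patch is PostgreSQL-specific):

```python
import sqlite3

# Toy flights/jobs tables loosely modelled on osstest's; data invented.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE flights (flight INTEGER, blessing TEXT);
    CREATE TABLE jobs (flight INTEGER, job TEXT);
    INSERT INTO flights VALUES (1, 'real'), (2, 'real'), (3, 'play');
    INSERT INTO jobs VALUES (1, 'test-armhf-armhf-libvirt'),
                            (2, 'other-job'),
                            (3, 'test-armhf-armhf-libvirt');
""")
job = 'test-armhf-armhf-libvirt'

# Before: job filter as a correlated EXISTS inside the subquery.
before = db.execute("""
    SELECT * FROM (
      SELECT flight, blessing FROM flights
       WHERE blessing='real'
         AND EXISTS (SELECT 1 FROM jobs
                      WHERE jobs.flight = flights.flight
                        AND jobs.job = ?)
       ORDER BY flight DESC LIMIT 1000
    ) AS sub ORDER BY blessing ASC, flight DESC
""", (job,)).fetchall()

# After: flights selected first in a WITH clause, jobs joined outside.
after = db.execute("""
    WITH sub AS (
      SELECT DISTINCT flight, blessing FROM flights
       WHERE blessing='real'
       ORDER BY flight DESC LIMIT 1000
    )
    SELECT flight, blessing FROM sub
      JOIN jobs USING (flight)
     WHERE jobs.job = ?
     ORDER BY blessing ASC, flight DESC
""", (job,)).fetchall()

assert before == after == [(1, 'real')]
```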

Example query before (from the Perl DBI trace):

      SELECT * FROM (
        SELECT DISTINCT flight, blessing
             FROM flights
             JOIN runvars r1 USING (flight)

            WHERE (branch='xen-unstable')
              AND ( (TRUE AND flight <= 151903) AND (blessing='real') )
                  AND EXISTS (SELECT 1
                            FROM jobs
                           WHERE jobs.flight = flights.flight
                             AND jobs.job = ?)

              AND r1.name LIKE 'built_revision_%'
              AND r1.name = ?
              AND r1.val= ?

            ORDER BY flight DESC
            LIMIT 1000
      ) AS sub
      ORDER BY blessing ASC, flight DESC

With bind variables:

     "test-armhf-armhf-libvirt"
     'built_revision_xen'
     '165f3afbfc3db70fcfdccad07085cde0a03c858b'

After:

      WITH sub AS (
        SELECT DISTINCT flight, blessing
             FROM flights
             JOIN runvars r1 USING (flight)

            WHERE (branch='xen-unstable')
              AND ( (TRUE AND flight <= 151903) AND (blessing='real') )
              AND r1.name LIKE 'built_revision_%'
              AND r1.name = ?
              AND r1.val= ?

            ORDER BY flight DESC
            LIMIT 1000
      )
      SELECT *
        FROM sub
        JOIN jobs USING (flight)

       WHERE (1=1)
                  AND jobs.job = ?

      ORDER BY blessing ASC, flight DESC

With bind variables:

    'built_revision_xen'
    '165f3afbfc3db70fcfdccad07085cde0a03c858b'
    "test-armhf-armhf-libvirt"

Diff to the query:

-      SELECT * FROM (
+      WITH sub AS (
         SELECT DISTINCT flight, blessing
              FROM flights
              JOIN runvars r1 USING (flight)

             WHERE (branch='xen-unstable')
               AND ( (TRUE AND flight <= 151903) AND (blessing='real') )
-                  AND EXISTS (SELECT 1
-                            FROM jobs
-                           WHERE jobs.flight = flights.flight
-                             AND jobs.job = ?)
-
               AND r1.name LIKE 'built_revision_%'
               AND r1.name = ?
               AND r1.val= ?

             ORDER BY flight DESC
             LIMIT 1000
-      ) AS sub
+      )
+      SELECT *
+        FROM sub
+        JOIN jobs USING (flight)
+
+       WHERE (1=1)
+                  AND jobs.job = ?
+
       ORDER BY blessing ASC, flight DESC

Reviewed-by: George Dunlap <George.Dunlap@citrix.com>
Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 sg-report-flight | 39 ++++++++++++++++++++++++---------------
 1 file changed, 24 insertions(+), 15 deletions(-)

diff --git a/sg-report-flight b/sg-report-flight
index 10127582..d06be292 100755
--- a/sg-report-flight
+++ b/sg-report-flight
@@ -180,18 +180,6 @@ END
         return undef;
     }
 
-    my @flightsq_params;
-    my $flightsq_jobcond='(1=1)';
-    if (defined $job) {
-	push @flightsq_params, $job;
-	$flightsq_jobcond = <<END;
-                  AND EXISTS (SELECT 1
-			    FROM jobs
-			   WHERE jobs.flight = flights.flight
-			     AND jobs.job = ?)
-END
-    }
-
     # We build a slightly complicated query to find possibly-relevant
     # flights.  A "possibly-relevant" flight is one which the main
     # flight categorisation algorithm below (the loop over $tflight)
@@ -220,6 +208,7 @@ END
     # still execute the full job-specific recursive examination, for
     # each possibly-relevant flight - that's the $tflight loop body.
 
+    my @flightsq_params;
     my $runvars_joins = '';
     my $runvars_conds = '';
     my $ri=0;
@@ -247,18 +236,38 @@ END
       }
     }
 
+    my $flightsq_jobs_join = '';
+    my $flightsq_jobcond = '';
+    if (defined $job) {
+	push @flightsq_params, $job;
+	$flightsq_jobs_join = <<END;
+        JOIN jobs USING (flight)
+END
+	$flightsq_jobcond = <<END;
+                  AND jobs.job = ?
+END
+    }
+
+    # In psql 9.6 this WITH clause makes postgresql do the flights
+    # query first.  This is good because our built revision index finds
+    # relevant flights very quickly.  Without this, postgresql seems
+    # to like to scan the jobs table.
     my $flightsq= <<END;
-      SELECT * FROM (
+      WITH sub AS (
         SELECT DISTINCT flight, blessing
              FROM flights
 $runvars_joins
             WHERE $branches_cond_q
               AND $blessingscond
-$flightsq_jobcond
 $runvars_conds
             ORDER BY flight DESC
             LIMIT 1000
-      ) AS sub
+      )
+      SELECT *
+        FROM sub
+$flightsq_jobs_join
+       WHERE (1=1)
+$flightsq_jobcond
       ORDER BY blessing ASC, flight DESC
 END
     $flightsq= db_prepare($flightsq);
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri Jul 31 11:42:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 11:42:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1TQB-0007Jw-QK; Fri, 31 Jul 2020 11:41:55 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=S17i=BK=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1k1TQA-0007Jj-Uh
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 11:41:54 +0000
X-Inumbo-ID: d45e36fa-d322-11ea-aba1-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d45e36fa-d322-11ea-aba1-12813bfff9fa;
 Fri, 31 Jul 2020 11:41:53 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 0FA76B6A4;
 Fri, 31 Jul 2020 11:42:06 +0000 (UTC)
Subject: Re: [RESEND][PATCH v2 7/7] xen/guest_access: Fix coding style in
 xen/guest_access.h
To: Julien Grall <julien@xen.org>
References: <20200730181827.1670-1-julien@xen.org>
 <20200730181827.1670-8-julien@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <3bafb97f-45a3-7203-3e73-37e73c453de6@suse.com>
Date: Fri, 31 Jul 2020 13:41:52 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20200730181827.1670-8-julien@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Julien Grall <jgrall@amazon.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 30.07.2020 20:18, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
>     * Add space before and after operator
>     * Align \
>     * Format comments

How about also

    * remove/replace leading underscores

?

Jan


From xen-devel-bounces@lists.xenproject.org Fri Jul 31 11:45:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 11:45:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1TTb-0007UT-Ai; Fri, 31 Jul 2020 11:45:27 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=S17i=BK=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1k1TTZ-0007UM-QQ
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 11:45:25 +0000
X-Inumbo-ID: 520a346e-d323-11ea-aba1-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 520a346e-d323-11ea-aba1-12813bfff9fa;
 Fri, 31 Jul 2020 11:45:24 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id E18E7AB71;
 Fri, 31 Jul 2020 11:45:36 +0000 (UTC)
Subject: Re: [RESEND][PATCH v2 6/7] xen/guest_access: Consolidate guest access
 helpers in xen/guest_access.h
To: Julien Grall <julien@xen.org>
References: <20200730181827.1670-1-julien@xen.org>
 <20200730181827.1670-7-julien@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <17a7da1c-78eb-a86b-85f1-2372af93476e@suse.com>
Date: Fri, 31 Jul 2020 13:45:22 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20200730181827.1670-7-julien@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Julien Grall <jgrall@amazon.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, xen-devel@lists.xenproject.org,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 30.07.2020 20:18, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> Most of the helpers to access guest memory are implemented the same way
> on Arm and x86. The only differences are:
>     - guest_handle_{from, to}_param(): while on x86 XEN_GUEST_HANDLE()
>       and XEN_GUEST_HANDLE_PARAM() are the same, they are not on Arm. It
>       is still fine to use the Arm implementation on x86.

Is the description stale? I don't think there's any guest_handle_from_param()
anymore.

>     - __clear_guest_offset(): Interestingly, the prototypes do not match
>       between x86 and Arm. However, the Arm one is bogus, so the x86
>       implementation can be used.
>     - guest_handle{,_subrange}_okay(): They validly differ because Arm
>       only supports auto-translated guests and therefore handles are
>       always valid.
> 
> In the past, the ia64 and ppc64 ports used a different model to
> access guest parameters. They are long gone now.
> 
> Given Xen currently only supports 2 architectures, it is too soon to
> have an asm-generic directory, as it is not possible to differentiate
> it from the existing directory xen/. If/when there is a 3rd port, we
> can decide to create the new directory if that new port decides to use
> a different way to access guest parameters.
> 
> For now, consolidate it in xen/guest_access.h.
> 
> While it would be possible to adjust the coding style at the same time, this
> is left for a follow-up patch so 'diff' can be used to check the
> consolidation was done correctly.
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>

Apart from the above
Acked-by: Jan Beulich <jbeulich@suse.com>

Jan


From xen-devel-bounces@lists.xenproject.org Fri Jul 31 11:55:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 11:55:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1Tda-0008Po-F6; Fri, 31 Jul 2020 11:55:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nr0X=BK=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1k1TdY-0008Pj-VK
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 11:55:44 +0000
X-Inumbo-ID: c25a5cad-d324-11ea-8e29-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c25a5cad-d324-11ea-8e29-bc764e2007e4;
 Fri, 31 Jul 2020 11:55:43 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1k1TMv-0001W4-F1; Fri, 31 Jul 2020 12:38:33 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH v2 19/41] Executive: Drop redundant AND clause
Date: Fri, 31 Jul 2020 12:37:58 +0100
Message-Id: <20200731113820.5765-20-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200731113820.5765-1-ian.jackson@eu.citrix.com>
References: <20200731113820.5765-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <George.Dunlap@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

In "Executive: Use index for report__find_test" we changed an EXISTS
subquery into a JOIN.

Now, the condition r.flight=f.flight is redundant because this is the
join column (from USING).

No functional change.
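
The redundancy claim is easy to check on a toy schema (SQLite via
Python's sqlite3; table contents invented):

```python
import sqlite3

# With JOIN ... USING (flight), an explicit r.flight=f.flight term
# adds nothing, because USING already equates the two columns.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE flights (flight INTEGER, branch TEXT);
    CREATE TABLE runvars (flight INTEGER, name TEXT, val TEXT);
    INSERT INTO flights VALUES (1, 'xen-unstable'), (2, 'xen-unstable');
    INSERT INTO runvars VALUES (1, 'revision_xen', 'abc'),
                               (2, 'revision_xen', 'def');
""")

with_cond = db.execute("""
    SELECT flight FROM flights f JOIN runvars r USING (flight)
     WHERE r.name='revision_xen' AND r.val='abc'
       AND r.flight=f.flight            -- the redundant term
""").fetchall()

without = db.execute("""
    SELECT flight FROM flights f JOIN runvars r USING (flight)
     WHERE r.name='revision_xen' AND r.val='abc'
""").fetchall()

assert with_cond == without == [(1,)]
```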

CC: George Dunlap <George.Dunlap@citrix.com>
Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 Osstest/Executive.pm | 1 -
 1 file changed, 1 deletion(-)

diff --git a/Osstest/Executive.pm b/Osstest/Executive.pm
index 684cafc3..2f81e89d 100644
--- a/Osstest/Executive.pm
+++ b/Osstest/Executive.pm
@@ -433,7 +433,6 @@ END
 		   WHERE name=?
                      AND name LIKE 'revision\_%'
 		     AND val=?
-		     AND r.flight=f.flight
                      AND ${\ main_revision_job_cond('r.job') }
 END
             push @params, "revision_$tree", $revision;
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri Jul 31 11:55:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 11:55:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1Tdg-0008QM-NO; Fri, 31 Jul 2020 11:55:52 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nr0X=BK=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1k1Tdf-0008Q8-CS
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 11:55:51 +0000
X-Inumbo-ID: c723ccb4-d324-11ea-8e29-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c723ccb4-d324-11ea-8e29-bc764e2007e4;
 Fri, 31 Jul 2020 11:55:50 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1k1TMv-0001W4-Qw; Fri, 31 Jul 2020 12:38:34 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH v2 20/41] schema: Add index for quick lookup by host
Date: Fri, 31 Jul 2020 12:37:59 +0100
Message-Id: <20200731113820.5765-21-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200731113820.5765-1-ian.jackson@eu.citrix.com>
References: <20200731113820.5765-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
v2: Use proper \ escaping for underscores in LIKE
---
 schema/runvars-host-index.sql | 8 ++++++++
 1 file changed, 8 insertions(+)
 create mode 100644 schema/runvars-host-index.sql

diff --git a/schema/runvars-host-index.sql b/schema/runvars-host-index.sql
new file mode 100644
index 00000000..222a0a30
--- /dev/null
+++ b/schema/runvars-host-index.sql
@@ -0,0 +1,8 @@
+-- ##OSSTEST## 009 Preparatory
+--
+-- This index helps sg-report-host-history find relevant flights.
+
+CREATE INDEX runvars_host_idx
+    ON runvars (val, flight)
+ WHERE name ='host'
+    OR name LIKE '%\_host';
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri Jul 31 11:55:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 11:55:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1Tdn-0008RS-0O; Fri, 31 Jul 2020 11:55:59 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nr0X=BK=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1k1Tdl-0008RD-O1
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 11:55:57 +0000
X-Inumbo-ID: caff2b44-d324-11ea-8e29-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id caff2b44-d324-11ea-8e29-bc764e2007e4;
 Fri, 31 Jul 2020 11:55:57 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1k1TN1-0001W4-JE; Fri, 31 Jul 2020 12:38:39 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH v2 30/41] sg-report-host-history: Fork
Date: Fri, 31 Jul 2020 12:38:09 +0100
Message-Id: <20200731113820.5765-31-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200731113820.5765-1-ian.jackson@eu.citrix.com>
References: <20200731113820.5765-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Run each host's report in a separate child.  This is considerably
faster.
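
The parent/child bookkeeping added here can be sketched in Python
(hypothetical `report` callback; the real script also re-opens its
database handle in each child, which is omitted here):

```python
import os

def wait_for_max_children(children, lim):
    # Reap finished children until at most `lim` remain; track the
    # worst wait status seen (mirrors the Perl helper in the patch).
    worst = 0
    while len(children) > lim:
        pid, status = os.wait()        # blocks until some child exits
        children.pop(pid, None)
        worst = max(worst, status)
    return worst

def report_hosts(hosts, report, maxjobs=10):
    # Bounded parallelism: roughly maxjobs children at a time.
    children = {}
    worst = 0
    for host in sorted(hosts):
        worst = max(worst, wait_for_max_children(children, maxjobs))
        pid = os.fork()
        if pid == 0:
            report(host)               # child: run one host's report
            os._exit(0)
        children[pid] = host           # parent: note child, move on
    worst = max(worst, wait_for_max_children(children, 0))
    return worst
```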

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 sg-report-host-history | 47 +++++++++++++++++++++++++++++++++++-------
 1 file changed, 40 insertions(+), 7 deletions(-)

diff --git a/sg-report-host-history b/sg-report-host-history
index f4352fc3..dc694ebe 100755
--- a/sg-report-host-history
+++ b/sg-report-host-history
@@ -34,6 +34,7 @@ our $flightlimit;
 our $htmlout = ".";
 our $read_existing=1;
 our $doinstall=1;
+our $maxjobs=10;
 our @blessings;
 
 open DEBUG, ">/dev/null";
@@ -44,7 +45,7 @@ csreadconfig();
 while (@ARGV && $ARGV[0] =~ m/^-/) {
     $_= shift @ARGV;
     last if m/^--?$/;
-    if (m/^--(limit)\=([1-9]\d*)$/) {
+    if (m/^--(limit|maxjobs)\=([1-9]\d*)$/) {
         $$1= $2;
     } elsif (m/^--time-limit\=([1-9]\d*)$/) {
         $timelimit= $1;
@@ -469,12 +470,44 @@ db_retry($dbh_tests, [], sub {
     computeflightsrange();
 });
 
+undef $dbh_tests;
+
+our %children;
+our $worst = 0;
+
+sub wait_for_max_children ($) {
+    my ($lim) = @_;
+    while (keys(%children) > $lim) {
+	$!=0; $?=0; my $got = wait;
+	die "$! $got $?" unless exists $children{$got};
+	my $host = $children{$got};
+	delete $children{$got};
+	$worst = $? if $? > $worst;
+	if ($?) {
+	    print STDERR "sg-report-host-history: [$got] failed for $host: $?\n";
+	} else {
+	    print DEBUG "REAPED [$got] $host\n";
+	}
+    }
+}
+
 foreach my $host (sort keys %hosts) {
-    read_existing_logs($host);
-    db_retry($dbh_tests, [], sub {
-        mainquery($host);
-	reporthost $host;
-    });
+    wait_for_max_children($maxjobs);
+
+    my $pid = fork // die $!;
+    if (!$pid) {
+	opendb_tests();
+	read_existing_logs($host);
+	db_retry($dbh_tests, [], sub {
+            mainquery($host);
+	    reporthost $host;
+	});
+	print DEBUG "JQ CACHE ".($jqtotal-$jqcachemisses)." / $jqtotal\n";
+	exit(0);
+    }
+    print DEBUG "SPAWNED [$pid] $host\n";
+    $children{$pid} = $host;
 }
 
-print DEBUG "JQ CACHE ".($jqtotal-$jqcachemisses)." / $jqtotal\n";
+wait_for_max_children(0);
+exit $worst;
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri Jul 31 11:56:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 11:56:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1Tdp-0008SJ-9I; Fri, 31 Jul 2020 11:56:01 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nr0X=BK=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1k1Tdn-0008RD-RE
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 11:55:59 +0000
X-Inumbo-ID: cc2c9506-d324-11ea-8e29-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id cc2c9506-d324-11ea-8e29-bc764e2007e4;
 Fri, 31 Jul 2020 11:55:59 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1k1TMs-0001W4-It; Fri, 31 Jul 2020 12:38:30 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH v2 12/41] Executive: Use index for report__find_test
Date: Fri, 31 Jul 2020 12:37:51 +0100
Message-Id: <20200731113820.5765-13-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200731113820.5765-1-ian.jackson@eu.citrix.com>
References: <20200731113820.5765-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <George.Dunlap@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

After refactoring this query we can enable use of the index.
(Both changes are in this one commit because I haven't perf-tested
the version with just the refactoring.)

(We have provided an index that can answer this question really
quickly if a version is specified.  But the query planner couldn't see
that: it plans without seeing the bind variables, so it doesn't know
that the value of name is going to be suitable for this index.)

* Convert the two EXISTS subqueries into JOIN/AND with a DISTINCT
  clause naming the fields on flights, so as to replicate the previous
  result rows.  Then apply the $selection field list last.  The
  subquery is a convenient way to preserve the previous behaviour for
  all values of $selection (including, notably, *).

* Add the additional AND clause for r.name, which has no logical
  effect given the actual values of name, enabling the query planner
  to use this index.
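
The second point — that the extra LIKE term changes no results but lets
the planner prove the partial index applies — can be illustrated with
SQLite's analogous partial-index machinery (invented data; the
production database is PostgreSQL, whose planner reasons similarly):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript(r"""
    CREATE TABLE runvars (flight INTEGER, name TEXT, val TEXT);
    INSERT INTO runvars VALUES
        (1, 'revision_xen', 'abc'),
        (2, 'revision_xen', 'def'),
        (2, 'host',         'abc');
    -- Partial index analogous to the osstest revision index.
    CREATE INDEX runvars_rev_idx ON runvars (val, flight)
        WHERE name LIKE 'revision\_%' ESCAPE '\';
""")

q_plain = "SELECT flight FROM runvars WHERE name=? AND val=?"
q_hint = q_plain + r" AND name LIKE 'revision\_%' ESCAPE '\'"
args = ('revision_xen', 'abc')

# The extra LIKE term changes no results for these values of name...
assert db.execute(q_plain, args).fetchall() == \
       db.execute(q_hint, args).fetchall() == [(1,)]

# ...but only the hinted form's WHERE clause provably implies the
# index predicate at plan time (name=? alone is an opaque bind
# variable), so only it is eligible to use the partial index.
for q in (q_plain, q_hint):
    print([row[3] for row in db.execute("EXPLAIN QUERY PLAN " + q, args)])
```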

Perf: In my test case the sg-report-flight runtime is now ~8s.  I am
reasonably confident that this will not make other use cases of this
code worse.

Perf: runtime of my test case now ~11s

Example query before (from the Perl DBI trace):

        SELECT *
         FROM flights f
        WHERE
                EXISTS (
                   SELECT 1
                    FROM runvars r
                   WHERE name=?
                     AND val=?
                     AND r.flight=f.flight
                     AND (      (CASE
       WHEN (r.job) LIKE 'build-%-prev' THEN 'xprev'
       WHEN ((r.job) LIKE 'build-%-freebsd'
             AND 'x' = 'freebsdbuildjob') THEN 'DISCARD'
       ELSE                                      ''
       END)
 = '')
                 )
          AND ( (TRUE AND flight <= 151903) AND (blessing='real') )
          AND (branch=?)
        ORDER BY flight DESC
        LIMIT 1

After:

        SELECT *
          FROM ( SELECT DISTINCT
                      flight, started, blessing, branch, intended
                 FROM flights f
                    JOIN runvars r USING (flight)
                   WHERE name=?
                     AND name LIKE 'revision\_%'
                     AND val=?
                     AND r.flight=f.flight
                     AND (      (CASE
       WHEN (r.job) LIKE 'build-%-prev' THEN 'xprev'
       WHEN ((r.job) LIKE 'build-%-freebsd'
             AND 'x' = 'freebsdbuildjob') THEN 'DISCARD'
       ELSE                                      ''
       END)
 = '')
          AND ( (TRUE AND flight <= 151903) AND (blessing='real') )
          AND (branch=?)
) AS sub WHERE TRUE
        ORDER BY flight DESC
        LIMIT 1

In both cases with bind vars:

   'revision_xen'
   '165f3afbfc3db70fcfdccad07085cde0a03c858b'
   "xen-unstable"

Diff to the example query:

@@ -1,10 +1,10 @@
         SELECT *
+          FROM ( SELECT DISTINCT
+                      flight, started, blessing, branch, intended
          FROM flights f
-        WHERE
-                EXISTS (
-                   SELECT 1
-                    FROM runvars r
+                    JOIN runvars r USING (flight)
                    WHERE name=?
+                     AND name LIKE 'revision\_%'
                      AND val=?
                      AND r.flight=f.flight
                      AND (      (CASE
@@ -14,8 +14,8 @@
        ELSE                                      ''
        END)
  = '')
-                 )
           AND ( (TRUE AND flight <= 151903) AND (blessing='real') )
           AND (branch=?)
+) AS sub WHERE TRUE
         ORDER BY flight DESC
         LIMIT 1

Reviewed-by: George Dunlap <George.Dunlap@citrix.com>
Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
v2: Use proper \ escaping for underscores in LIKE
---
 Osstest/Executive.pm              | 20 ++++++++------------
 schema/runvars-revision-index.sql |  2 +-
 2 files changed, 9 insertions(+), 13 deletions(-)

diff --git a/Osstest/Executive.pm b/Osstest/Executive.pm
index c3dc1261..9208d8af 100644
--- a/Osstest/Executive.pm
+++ b/Osstest/Executive.pm
@@ -415,37 +415,32 @@ sub report__find_test ($$$$$$$) {
 
     my $querytext = <<END;
         SELECT $selection
-	 FROM flights f
-	WHERE
+          FROM ( SELECT DISTINCT
+                      flight, started, blessing, branch, intended
+   	         FROM flights f
 END
 
     if (defined $revision) {
 	if ($tree eq 'osstest') {
 	    $querytext .= <<END;
-		EXISTS (
-		   SELECT 1
-		    FROM flights_harness_touched t
+		    JOIN flights_harness_touched t USING (flight)
 		   WHERE t.harness=?
-		     AND t.flight=f.flight
-		 )
 END
             push @params, $revision;
 	} else {
 	    $querytext .= <<END;
-		EXISTS (
-		   SELECT 1
-		    FROM runvars r
+		    JOIN runvars r USING (flight)
 		   WHERE name=?
+                     AND name LIKE 'revision\_%'
 		     AND val=?
 		     AND r.flight=f.flight
                      AND ${\ main_revision_job_cond('r.job') }
-		 )
 END
             push @params, "revision_$tree", $revision;
         }
     } else {
 	$querytext .= <<END;
-	    TRUE
+	    WHERE TRUE
 END
     }
 
@@ -460,6 +455,7 @@ END
 END
     push @params, @$branches;
 
+    $querytext .= ") AS sub WHERE TRUE\n";
     $querytext .= $extracond;
     $querytext .= $sortlimit;
 
diff --git a/schema/runvars-revision-index.sql b/schema/runvars-revision-index.sql
index 8871b528..25306354 100644
--- a/schema/runvars-revision-index.sql
+++ b/schema/runvars-revision-index.sql
@@ -1,4 +1,4 @@
--- ##OSSTEST## 008 Preparatory
+-- ##OSSTEST## 008 Needed
 --
 -- This index helps Executive::report__find_test find relevant flights.
 
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri Jul 31 11:56:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 11:56:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1Tdu-0008Uj-IZ; Fri, 31 Jul 2020 11:56:06 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nr0X=BK=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1k1Tds-0008RD-Pc
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 11:56:04 +0000
X-Inumbo-ID: ce5a97ba-d324-11ea-8e29-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ce5a97ba-d324-11ea-8e29-bc764e2007e4;
 Fri, 31 Jul 2020 11:56:02 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1k1TN0-0001W4-4W; Fri, 31 Jul 2020 12:38:38 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH v2 27/41] sg-report-host-history: Reorganisation: Read
 old logs later
Date: Fri, 31 Jul 2020 12:38:06 +0100
Message-Id: <20200731113820.5765-28-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200731113820.5765-1-ian.jackson@eu.citrix.com>
References: <20200731113820.5765-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Perhaps at one point something read from these logs influenced the db
query for the flights range, but that is no longer the case and it
doesn't seem likely to need to come back.

We want to move the per-host stuff together.

No functional change.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 sg-report-host-history | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/sg-report-host-history b/sg-report-host-history
index 34216aa2..3f4670e5 100755
--- a/sg-report-host-history
+++ b/sg-report-host-history
@@ -466,14 +466,14 @@ END
 
 exit 0 unless %hosts;
 
-foreach (keys %hosts) {
-    read_existing_logs($_);
-}
-
 db_retry($dbh_tests, [], sub {
     computeflightsrange();
 });
 
+foreach (keys %hosts) {
+    read_existing_logs($_);
+}
+
 db_retry($dbh_tests, [], sub {
     foreach my $host (sort keys %hosts) {
 	mainquery($host);
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri Jul 31 11:56:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 11:56:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1Tdy-00005B-S1; Fri, 31 Jul 2020 11:56:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nr0X=BK=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1k1Tdx-0008RD-Td
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 11:56:09 +0000
X-Inumbo-ID: d21043bf-d324-11ea-8e29-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d21043bf-d324-11ea-8e29-bc764e2007e4;
 Fri, 31 Jul 2020 11:56:09 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1k1TMx-0001W4-0B; Fri, 31 Jul 2020 12:38:35 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH v2 21/41] sg-report-host-history: Find flight limit by
 flight start date
Date: Fri, 31 Jul 2020 12:38:00 +0100
Message-Id: <20200731113820.5765-22-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200731113820.5765-1-ian.jackson@eu.citrix.com>
References: <20200731113820.5765-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

By default we look for anything in (roughly) the last year.

This query is in fact quite fast because the flights table is small.

There is still the per-host limit of $limit (2000) recent runs.
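
The new date-based lower bound can be sketched as follows (SQLite
purely as a self-contained demo, with an illustrative flights table;
the real schema has more columns): pick the lowest-numbered flight
started within the window, which then bounds the per-host queries.

```python
import sqlite3, time

TIMELIMIT = 86400 * (366 + 14)  # roughly a year, as in the patch

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE flights (flight INTEGER PRIMARY KEY, started INTEGER)")
now = int(time.time())
con.executemany("INSERT INTO flights VALUES (?,?)",
                [(100, now - 2 * TIMELIMIT),   # too old, outside the window
                 (200, now - 1000),            # recent
                 (300, now - 10)])             # recent

# Lowest-numbered flight started inside the window
(minflight,) = con.execute(
    "SELECT flight FROM flights WHERE started >= ?"
    " ORDER BY flight ASC LIMIT 1",
    (now - TIMELIMIT,)).fetchone()
# minflight is 200: flight 100 falls outside the window
```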

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 sg-report-host-history | 56 ++++++++++++++++++++----------------------
 1 file changed, 27 insertions(+), 29 deletions(-)

diff --git a/sg-report-host-history b/sg-report-host-history
index 9730ae7a..a159df3e 100755
--- a/sg-report-host-history
+++ b/sg-report-host-history
@@ -29,6 +29,7 @@ use POSIX;
 use Osstest::Executive qw(:DEFAULT :colours);
 
 our $limit= 2000;
+our $timelimit= 86400 * (366 + 14);
 our $flightlimit;
 our $htmlout = ".";
 our $read_existing=1;
@@ -45,6 +46,8 @@ while (@ARGV && $ARGV[0] =~ m/^-/) {
     last if m/^--?$/;
     if (m/^--(limit)\=([1-9]\d*)$/) {
         $$1= $2;
+    } elsif (m/^--time-limit\=([1-9]\d*)$/) {
+        $timelimit= $1;
     } elsif (m/^--flight-limit\=([1-9]\d*)$/) {
 	$flightlimit= $1;
     } elsif (restrictflight_arg($_)) {
@@ -108,38 +111,33 @@ sub read_existing_logs ($) {
 }
 
 sub computeflightsrange () {
-    if (!$flightlimit) {
-	my $flagscond =
-	    '('.join(' OR ', map { "f.hostflag = 'blessed-$_'" } @blessings).')';
-	my $nhostsq = db_prepare(<<END);
-	    SELECT count(*)
-	      FROM resources r
-	     WHERE restype='host'
-	       AND EXISTS (SELECT 1
-			     FROM hostflags f
-			    WHERE f.hostname=r.resname
-			      AND $flagscond)
+    if ($flightlimit) {
+	my $minflightsq = db_prepare(<<END);
+	    SELECT flight
+	      FROM (
+		SELECT flight
+		  FROM flights
+		 WHERE $restrictflight_cond
+		 ORDER BY flight DESC
+		 LIMIT $flightlimit
+	      ) f
+	      ORDER BY flight ASC
+	      LIMIT 1
 END
-        $nhostsq->execute();
-	my ($nhosts) = $nhostsq->fetchrow_array();
-	print DEBUG "COUNTED $nhosts hosts\n";
-	$flightlimit = $nhosts * $limit * 2;
-    }
-
-    my $minflightsq = db_prepare(<<END);
-	SELECT flight
-	  FROM (
+	$minflightsq->execute();
+	($minflight,) = $minflightsq->fetchrow_array();
+    } else {
+	my $minflightsq = db_prepare(<<END);
 	    SELECT flight
-	      FROM flights
-             WHERE $restrictflight_cond
-	     ORDER BY flight DESC
-	     LIMIT $flightlimit
-	  ) f
-	  ORDER BY flight ASC
-	  LIMIT 1
+              FROM flights
+             WHERE started >= ?
+          ORDER BY flight ASC
+             LIMIT 1
 END
-    $minflightsq->execute();
-    ($minflight,) = $minflightsq->fetchrow_array();
+	my $now = time // die $!;
+        $minflightsq->execute($now - $timelimit);
+	($minflight,) = $minflightsq->fetchrow_array();
+    }
     $minflight //= 0;
 
     $flightcond = "(flight > $minflight)";
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri Jul 31 12:00:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 12:00:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1TiL-0001Jt-P9; Fri, 31 Jul 2020 12:00:41 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nr0X=BK=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1k1TiK-0001Jk-L8
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 12:00:40 +0000
X-Inumbo-ID: 737bffed-d325-11ea-8e2b-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 737bffed-d325-11ea-8e2b-bc764e2007e4;
 Fri, 31 Jul 2020 12:00:40 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1k1TN1-0001W4-UH; Fri, 31 Jul 2020 12:38:40 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH v2 31/41] schema: Add index to help cs-bisection-step
Date: Fri, 31 Jul 2020 12:38:10 +0100
Message-Id: <20200731113820.5765-32-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200731113820.5765-1-ian.jackson@eu.citrix.com>
References: <20200731113820.5765-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

cs-bisection-step's basis search involves looking for recent flights
that weren't broken.  A flight is broken if it has broken steps.
Make an index for this, to save it from scanning the steps table.
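
The effect of such a partial index can be sketched with SQLite
(purely as a self-contained demo; data and job names are
illustrative, and the real steps table has more columns): the
broken-step check becomes a search of a small index containing only
broken steps, instead of a scan of the whole table.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE steps (flight INTEGER, job TEXT, status TEXT)")
# Partial index mirroring the one added by this patch
con.execute("CREATE INDEX steps_broken_idx ON steps (flight)"
            " WHERE status='broken'")
con.executemany("INSERT INTO steps VALUES (?,?,?)",
                [(1, "build-amd64", "pass"),
                 (2, "test-amd64", "broken"),
                 (3, "test-i386", "pass")])

# The query's literal status='broken' lets the planner prove the
# partial index applies, so it searches the index rather than scanning.
plan = " ".join(r[-1] for r in con.execute(
    "EXPLAIN QUERY PLAN SELECT 1 FROM steps"
    " WHERE flight=? AND status='broken'", (2,)))
```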

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 schema/steps-broken-index.sql | 7 +++++++
 1 file changed, 7 insertions(+)
 create mode 100644 schema/steps-broken-index.sql

diff --git a/schema/steps-broken-index.sql b/schema/steps-broken-index.sql
new file mode 100644
index 00000000..770747cc
--- /dev/null
+++ b/schema/steps-broken-index.sql
@@ -0,0 +1,7 @@
+-- ##OSSTEST## 010 Harmless
+--
+-- This index helps cs-bisection-step check if flights are broken.
+
+CREATE INDEX steps_broken_idx
+    ON steps (flight)
+ WHERE status='broken';
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri Jul 31 12:00:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 12:00:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1TiR-0001KY-0p; Fri, 31 Jul 2020 12:00:47 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nr0X=BK=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1k1TiP-0001Jk-Jv
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 12:00:45 +0000
X-Inumbo-ID: 76268c7e-d325-11ea-8e2b-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 76268c7e-d325-11ea-8e2b-bc764e2007e4;
 Fri, 31 Jul 2020 12:00:44 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1k1TN4-0001W4-8W; Fri, 31 Jul 2020 12:38:42 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH v2 37/41] cs-bisection-step: temporary table: Insert
 only rows we care about
Date: Fri, 31 Jul 2020 12:38:16 +0100
Message-Id: <20200731113820.5765-38-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200731113820.5765-1-ian.jackson@eu.citrix.com>
References: <20200731113820.5765-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Every use of this table has a WHERE or ON which invokes at least one
of these conditions.  So put only those rows into the table.

This provides a significant speedup (which I haven't properly
measured).

No overall functional change.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
v2: New patch.
---
 cs-bisection-step | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/cs-bisection-step b/cs-bisection-step
index 1c165b78..718c87b0 100755
--- a/cs-bisection-step
+++ b/cs-bisection-step
@@ -219,7 +219,9 @@ END
     
            WHERE t.job = ?
 	     AND t.flight = ?
-	     AND t.name LIKE '%buildjob'
+	     AND t.name LIKE '%buildjob' AND
+(@{ $qtxt_common_rev_ok->('b') } OR
+ @{ $qtxt_common_tree_ok->('b') })
 	     AND b.flight = (CASE WHEN t.val NOT LIKE '%.%'
                                   THEN t.flight
                                   ELSE cast(split_part(t.val, '.', 1) AS int)
@@ -239,7 +241,9 @@ END
 	           job  AS job
 	      FROM runvars
 	     WHERE job = ?
-	       AND flight = ?
+	       AND flight = ? AND
+(@{ $qtxt_common_rev_ok->('runvars') } OR
+ @{ $qtxt_common_tree_ok->('runvars') })
 END
 
     my $qtxt_common_results = <<END;
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri Jul 31 12:00:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 12:00:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1TiV-0001M3-99; Fri, 31 Jul 2020 12:00:51 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nr0X=BK=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1k1TiU-0001Jk-KF
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 12:00:50 +0000
X-Inumbo-ID: 78ce6d3e-d325-11ea-8e2b-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 78ce6d3e-d325-11ea-8e2b-bc764e2007e4;
 Fri, 31 Jul 2020 12:00:48 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1k1TMt-0001W4-O5; Fri, 31 Jul 2020 12:38:31 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH v2 15/41] duration_estimator: Explicitly provide null
 in general host q
Date: Fri, 31 Jul 2020 12:37:54 +0100
Message-Id: <20200731113820.5765-16-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200731113820.5765-1-ian.jackson@eu.citrix.com>
References: <20200731113820.5765-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Our spec. says we return nulls for started and status if we don't find
a job matching the host spec.

The way this works right now is that we look up the nonexistent
entries in $refs->[0].  This is not really brilliant and is going to
be troublesome as we continue to refactor.

Provide these values explicitly.  No functional change.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 Osstest/Executive.pm | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/Osstest/Executive.pm b/Osstest/Executive.pm
index 4cb22cc9..d45d6557 100644
--- a/Osstest/Executive.pm
+++ b/Osstest/Executive.pm
@@ -1169,6 +1169,8 @@ END
 
     my $duration_anyref_qtxt= <<END;
             SELECT f.flight AS flight,
+                   NULL as started,
+                   NULL as status,
                    max(s.finished) AS max_finished
 		      FROM steps s JOIN flights f
 		        ON s.flight=f.flight
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri Jul 31 12:00:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 12:00:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1Tia-0001Nh-I8; Fri, 31 Jul 2020 12:00:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nr0X=BK=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1k1TiZ-0001Jk-KC
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 12:00:55 +0000
X-Inumbo-ID: 7a17f7c8-d325-11ea-8e2b-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7a17f7c8-d325-11ea-8e2b-bc764e2007e4;
 Fri, 31 Jul 2020 12:00:50 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1k1TN6-0001W4-Jg; Fri, 31 Jul 2020 12:38:44 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH v2 41/41] duration_estimator: Clarify recentflights
 query a bit
Date: Fri, 31 Jul 2020 12:38:20 +0100
Message-Id: <20200731113820.5765-42-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200731113820.5765-1-ian.jackson@eu.citrix.com>
References: <20200731113820.5765-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <George.Dunlap@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

The condition on r.job is more naturally thought of as a join
condition than a where condition.  (This is an inner join, so the
semantics are identical.)
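
This equivalence is easy to check; a minimal sketch (SQLite purely as
a self-contained demo; tables and rows are illustrative):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE jobs (flight INTEGER, job TEXT);
CREATE TABLE runvars (flight INTEGER, job TEXT, name TEXT);
INSERT INTO jobs VALUES (1,'build-amd64'),(2,'test-amd64');
INSERT INTO runvars VALUES (1,'build-amd64','revision_xen'),
                           (2,'other-job','revision_xen');
""")

# Job condition in the ON clause...
on_q = """SELECT j.flight FROM jobs j JOIN runvars r
            ON r.flight=j.flight AND r.job=j.job"""
# ...or in the WHERE clause: identical rows for an inner join
where_q = """SELECT j.flight FROM jobs j JOIN runvars r
               ON r.flight=j.flight
            WHERE r.job=j.job"""

# Both return [(1,)]: flight 2's runvars row has a non-matching job
assert (con.execute(on_q).fetchall()
        == con.execute(where_q).fetchall())
```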

Also, for clarity, swap the flight and job conditions round, so that
the ON clause is a series of r.thing = otherthing.

No functional change.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
CC: George Dunlap <George.Dunlap@citrix.com>
---
v2: New patch.
---
 Osstest/Executive.pm | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/Osstest/Executive.pm b/Osstest/Executive.pm
index 8e4c5b9a..a69c624f 100644
--- a/Osstest/Executive.pm
+++ b/Osstest/Executive.pm
@@ -1153,10 +1153,10 @@ sub duration_estimator ($$;$$) {
 		     FROM flights f
                      JOIN jobs j USING (flight)
                      JOIN runvars r
-                             ON  f.flight=r.flight
+                             ON  r.flight=f.flight
+                            AND  r.job=j.job
                             AND  r.name=?
-                    WHERE  j.job=r.job
-                      AND  f.blessing=?
+                    WHERE  f.blessing=?
                       AND  f.branch=?
                       AND  j.job=?
                       AND  r.val=?
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri Jul 31 12:01:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 12:01:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1Tin-0001Rg-S7; Fri, 31 Jul 2020 12:01:09 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nr0X=BK=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1k1Tim-0001R9-5A
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 12:01:08 +0000
X-Inumbo-ID: 83774e7d-d325-11ea-8e2b-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 83774e7d-d325-11ea-8e2b-bc764e2007e4;
 Fri, 31 Jul 2020 12:01:07 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1k1TMu-0001W4-4C; Fri, 31 Jul 2020 12:38:32 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH v2 16/41] duration_estimator: Return job column in
 first query
Date: Fri, 31 Jul 2020 12:37:55 +0100
Message-Id: <20200731113820.5765-17-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200731113820.5765-1-ian.jackson@eu.citrix.com>
References: <20200731113820.5765-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <George.Dunlap@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Right now this is pointless since the Perl code doesn't need it.  But
this query is going to be part of a WITH clause soon.

No functional change.

Diffs to two example queries (from the Perl DBI trace):

            SELECT f.flight AS flight,
+                   j.job AS job,
                   f.started AS started,
                    j.status AS status
                     FROM flights f
                     JOIN jobs j USING (flight)
                     JOIN runvars r
                             ON  f.flight=r.flight
                            AND  r.name=?
                    WHERE  j.job=r.job
                      AND  f.blessing=?
                      AND  f.branch=?
                      AND  j.job=?
                      AND  r.val=?
                      AND  (j.status='pass' OR j.status='fail'
                           OR j.status='truncated')
                      AND  f.started IS NOT NULL
                       AND  f.started >= ?
                  ORDER BY f.started DESC

            SELECT f.flight AS flight,
+                   s.job AS job,
                    NULL as started,
                    NULL as status,
                    max(s.finished) AS max_finished
                      FROM steps s JOIN flights f
                        ON s.flight=f.flight
                     WHERE s.job=? AND f.blessing=? AND f.branch=?
                        AND s.finished IS NOT NULL
                        AND f.started IS NOT NULL
                        AND f.started >= ?
-                     GROUP BY f.flight
+                     GROUP BY f.flight, s.job
                      ORDER BY max_finished DESC

CC: George Dunlap <George.Dunlap@citrix.com>
Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 Osstest/Executive.pm | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/Osstest/Executive.pm b/Osstest/Executive.pm
index d45d6557..359120c0 100644
--- a/Osstest/Executive.pm
+++ b/Osstest/Executive.pm
@@ -1148,6 +1148,7 @@ sub duration_estimator ($$;$$) {
     }
     my $recentflights_qtxt= <<END;
             SELECT f.flight AS flight,
+                   j.job AS job,
 		   f.started AS started,
                    j.status AS status
 		     FROM flights f
@@ -1169,6 +1170,7 @@ END
 
     my $duration_anyref_qtxt= <<END;
             SELECT f.flight AS flight,
+                   s.job AS job,
                    NULL as started,
                    NULL as status,
                    max(s.finished) AS max_finished
@@ -1178,7 +1180,7 @@ END
                        AND s.finished IS NOT NULL
                        AND f.started IS NOT NULL
                        AND f.started >= ?
-                     GROUP BY f.flight
+                     GROUP BY f.flight, s.job
                      ORDER BY max_finished DESC
 END
     # s J J J # fix perl-mode
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri Jul 31 12:01:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 12:01:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1Tis-0001TI-4l; Fri, 31 Jul 2020 12:01:14 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nr0X=BK=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1k1Tiq-0001R9-36
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 12:01:12 +0000
X-Inumbo-ID: 8663f32e-d325-11ea-8e2b-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8663f32e-d325-11ea-8e2b-bc764e2007e4;
 Fri, 31 Jul 2020 12:01:11 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1k1TMs-0001W4-Th; Fri, 31 Jul 2020 12:38:31 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH v2 13/41] duration_estimator: Ignore truncated jobs
 unless we know the step
Date: Fri, 31 Jul 2020 12:37:52 +0100
Message-Id: <20200731113820.5765-14-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200731113820.5765-1-ian.jackson@eu.citrix.com>
References: <20200731113820.5765-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

If we are looking for a particular step then we will ignore jobs
without that step, so any job which was truncated before it will be
ignored.

Otherwise we are looking for the whole job duration and a truncated
job is not a good representative.

This is a bugfix (to duration estimation), not a performance
improvement like the preceding and subsequent changes.
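
The effect on the estimate can be sketched with a toy example (illustrative
only: the schema below is a simplified stand-in for osstest's flights/jobs/
steps tables, not its real layout):

```python
# Sketch: why a truncated job is a poor reference for whole-job duration.
# Flight 1 ran to completion (900s); flight 2 was truncated after its
# first step (110s).  Including the truncated flight biases the sample.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE jobs  (flight INT, job TEXT, status TEXT);
    CREATE TABLE steps (flight INT, job TEXT, step TEXT,
                        started INT, finished INT);
""")
db.executemany("INSERT INTO jobs VALUES (?,?,?)",
               [(1, "test-job", "pass"), (2, "test-job", "truncated")])
db.executemany("INSERT INTO steps VALUES (?,?,?,?,?)",
               [(1, "test-job", "build",   0, 100),
                (1, "test-job", "test",  100, 900),
                (2, "test-job", "build",   0, 110)])

def whole_job_estimates(include_truncated):
    # Per-flight whole-job durations for the given set of job statuses.
    statuses = ("pass", "truncated") if include_truncated else ("pass",)
    placeholders = ",".join("?" * len(statuses))
    q = ("SELECT max(s.finished) - min(s.started) FROM steps s"
         " JOIN jobs j USING (flight, job)"
         " WHERE j.status IN (%s) GROUP BY s.flight" % placeholders)
    return sorted(r[0] for r in db.execute(q, statuses))
```

With the truncated flight included, the 110s partial run is counted against
a true duration of 900s; once a particular step is being estimated, the
step filter already excludes jobs truncated before that step.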

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 Osstest/Executive.pm | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/Osstest/Executive.pm b/Osstest/Executive.pm
index 9208d8af..f528edd0 100644
--- a/Osstest/Executive.pm
+++ b/Osstest/Executive.pm
@@ -1142,6 +1142,10 @@ sub duration_estimator ($$;$$) {
     # estimated (and only jobs which contained that step will be
     # considered).
 
+    my $or_status_truncated = '';
+    if ($will_uptoincl_testid) {
+	$or_status_truncated = "OR j.status='truncated'";
+    }
     my $recentflights_q= $dbh_tests->prepare(<<END);
             SELECT f.flight AS flight,
 		   f.started AS started,
@@ -1156,8 +1160,8 @@ sub duration_estimator ($$;$$) {
                       AND  f.branch=?
                       AND  j.job=?
                       AND  r.val=?
-		      AND  (j.status='pass' OR j.status='fail' OR
-                            j.status='truncated')
+		      AND  (j.status='pass' OR j.status='fail'
+                           $or_status_truncated)
                       AND  f.started IS NOT NULL
                       AND  f.started >= ?
                  ORDER BY f.started DESC
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri Jul 31 12:01:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 12:01:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1Tiu-0001Uc-ED; Fri, 31 Jul 2020 12:01:16 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nr0X=BK=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1k1Tit-0001R9-Mw
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 12:01:15 +0000
X-Inumbo-ID: 88960e84-d325-11ea-8e2b-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 88960e84-d325-11ea-8e2b-bc764e2007e4;
 Fri, 31 Jul 2020 12:01:15 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1k1TMt-0001W4-GK; Fri, 31 Jul 2020 12:38:31 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH v2 14/41] duration_estimator: Introduce some _qtxt
 variables
Date: Fri, 31 Jul 2020 12:37:53 +0100
Message-Id: <20200731113820.5765-15-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200731113820.5765-1-ian.jackson@eu.citrix.com>
References: <20200731113820.5765-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

No functional change.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 Osstest/Executive.pm | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/Osstest/Executive.pm b/Osstest/Executive.pm
index f528edd0..4cb22cc9 100644
--- a/Osstest/Executive.pm
+++ b/Osstest/Executive.pm
@@ -1146,7 +1146,7 @@ sub duration_estimator ($$;$$) {
     if ($will_uptoincl_testid) {
 	$or_status_truncated = "OR j.status='truncated'";
     }
-    my $recentflights_q= $dbh_tests->prepare(<<END);
+    my $recentflights_qtxt= <<END;
             SELECT f.flight AS flight,
 		   f.started AS started,
                    j.status AS status
@@ -1167,7 +1167,7 @@ sub duration_estimator ($$;$$) {
                  ORDER BY f.started DESC
 END
 
-    my $duration_anyref_q= $dbh_tests->prepare(<<END);
+    my $duration_anyref_qtxt= <<END;
             SELECT f.flight AS flight,
                    max(s.finished) AS max_finished
 		      FROM steps s JOIN flights f
@@ -1212,6 +1212,8 @@ END_UPTOINCL
                 AS duration
 END_ALWAYS
 	
+    my $recentflights_q= $dbh_tests->prepare($recentflights_qtxt);
+    my $duration_anyref_q= $dbh_tests->prepare($duration_anyref_qtxt);
     my $duration_duration_q = $dbh_tests->prepare($duration_duration_qtxt);
 
     return sub {
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri Jul 31 12:01:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 12:01:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1Tix-0001Wj-NT; Fri, 31 Jul 2020 12:01:19 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nr0X=BK=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1k1Tiw-0001R9-6M
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 12:01:18 +0000
X-Inumbo-ID: 8a0dea02-d325-11ea-8e2b-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8a0dea02-d325-11ea-8e2b-bc764e2007e4;
 Fri, 31 Jul 2020 12:01:17 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1k1TN4-0001W4-Jj; Fri, 31 Jul 2020 12:38:43 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH v2 38/41] SQL: Change LIKE E'...\\_...' to LIKE
 '...\_...'
Date: Fri, 31 Jul 2020 12:38:17 +0100
Message-Id: <20200731113820.5765-39-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200731113820.5765-1-ian.jackson@eu.citrix.com>
References: <20200731113820.5765-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

E'...' means to interpret \-escapes.  But we don't want them: without
E, we can avoid some toothpick-doubling.
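
The escaping being relied on can be sketched like this (illustrative only;
sqlite3 stands in for PostgreSQL here, so the escape character has to be
named with an explicit ESCAPE clause, whereas PostgreSQL's LIKE already
treats backslash as the default escape character):

```python
# Sketch: '_' in LIKE is a single-character wildcard unless escaped.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE runvars (name TEXT)")
db.executemany("INSERT INTO runvars VALUES (?)",
               [("built_revision_xen",), ("builtXrevisionXfoo",)])

# Unescaped underscores also match any single character, e.g. 'X':
loose = db.execute(
    "SELECT name FROM runvars WHERE name LIKE 'built_revision_%'").fetchall()

# Escaped underscores match only the literal 'built_revision_' prefix:
strict = db.execute(
    r"SELECT name FROM runvars WHERE name LIKE 'built\_revision\_%' ESCAPE '\'"
).fetchall()
```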

No functional change.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
v2: New patch.
---
 cs-bisection-step     | 8 ++++----
 sg-report-job-history | 4 ++--
 2 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/cs-bisection-step b/cs-bisection-step
index 718c87b0..a82cbfb8 100755
--- a/cs-bisection-step
+++ b/cs-bisection-step
@@ -185,15 +185,15 @@ sub flight_rmap ($$) {
     my $qtxt_common_rev_ok = sub {
 	my ($table) = @_;
 	[<<END];
-                 ($table.name LIKE E'built\\_revision\\_%' OR
-                  $table.name LIKE E'revision\\_%')
+                 ($table.name LIKE 'built\_revision\_%' OR
+                  $table.name LIKE 'revision\_%')
 END
     };
 
     my $qtxt_common_tree_ok = sub {
 	my ($table) = @_;
 	[<<END];
-  	      $table.name LIKE E'tree\\_%'
+  	      $table.name LIKE 'tree\_%'
 END
     };
 
@@ -1220,7 +1220,7 @@ sub preparejob ($$$$) {
             INTO TEMP  bisection_runvars
                  FROM  runvars
                 WHERE  flight=? AND job=? AND synth='f'
-                  AND  name NOT LIKE E'revision\\_%'
+                  AND  name NOT LIKE 'revision\_%'
                   AND  name NOT LIKE '%host'
 END
     my (@trevisions) = split / /, $choose->{Rtuple};
diff --git a/sg-report-job-history b/sg-report-job-history
index d5f91ff1..22a28627 100755
--- a/sg-report-job-history
+++ b/sg-report-job-history
@@ -92,7 +92,7 @@ if (defined($flight)) {
 our $revisionsq= db_prepare(<<END);
         SELECT * FROM runvars
          WHERE flight=? AND job=?
-           AND name LIKE E'built\\_revision\\_\%'
+           AND name LIKE 'built\_revision\_%'
 END
 # (We report on non-main-revision jobs just as for main-revision ones.)
 
@@ -109,7 +109,7 @@ sub add_revisions ($$$$) {
 our $buildsq= db_prepare(<<END);
         SELECT * FROM runvars
          WHERE flight=? AND job=?
-           AND name LIKE E'\%buildjob'
+           AND name LIKE '%buildjob'
 END
 
 sub processjobbranch ($$) {
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri Jul 31 12:01:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 12:01:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1Tj2-0001Z5-1Q; Fri, 31 Jul 2020 12:01:24 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nr0X=BK=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1k1Tj1-0001R9-3N
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 12:01:23 +0000
X-Inumbo-ID: 8c6e59e4-d325-11ea-8e2b-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8c6e59e4-d325-11ea-8e2b-bc764e2007e4;
 Fri, 31 Jul 2020 12:01:21 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1k1TN5-0001W4-Ch; Fri, 31 Jul 2020 12:38:43 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH v2 39/41] cs-bisection-step: Add a debug print when we
 run dot(1)
Date: Fri, 31 Jul 2020 12:38:18 +0100
Message-Id: <20200731113820.5765-40-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200731113820.5765-1-ian.jackson@eu.citrix.com>
References: <20200731113820.5765-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Amongst other things this was useful for perf investigation.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
v2: New patch.
---
 cs-bisection-step | 1 +
 1 file changed, 1 insertion(+)

diff --git a/cs-bisection-step b/cs-bisection-step
index a82cbfb8..027032a1 100755
--- a/cs-bisection-step
+++ b/cs-bisection-step
@@ -1114,6 +1114,7 @@ END
 
     if (eval {
         foreach my $fmt (qw(ps png svg)) {
+	    print DEBUG "RUNNING dot -T$fmt\n";
             system_checked("dot", "-T$fmt", "-o$graphfile.$fmt",
 			   "$graphfile.dot");
         }
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri Jul 31 12:01:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 12:01:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1Tj7-0001cQ-Ag; Fri, 31 Jul 2020 12:01:29 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nr0X=BK=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1k1Tj6-0001R9-3c
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 12:01:28 +0000
X-Inumbo-ID: 8fd671a2-d325-11ea-8e2b-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8fd671a2-d325-11ea-8e2b-bc764e2007e4;
 Fri, 31 Jul 2020 12:01:27 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1k1TMz-0001W4-DE; Fri, 31 Jul 2020 12:38:37 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH v2 25/41] sg-report-host-history: Do the main query
 per host
Date: Fri, 31 Jul 2020 12:38:04 +0100
Message-Id: <20200731113820.5765-26-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200731113820.5765-1-ian.jackson@eu.citrix.com>
References: <20200731113820.5765-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

In f6001d628c3b3fd42b10cd15351981a04bc02572 we combined these
queries into one:
  sg-report-host-history: Aggregate runvars query for all hosts

Now that we have an index, there is a faster way for the db to do this
query: via that index.  But it doesn't like to do that if we aggregate
the queries.  Experimentally, doing this query separately once per
host is significantly faster.

Also, later, it will allow us to parallelise this work.

So, we undo that.  (Not by reverting, though.)
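
The shape of the change can be sketched roughly as follows (a toy schema,
not osstest's; the point is one indexed, LIMITed query per host instead of
a single aggregated OR query over all hosts):

```python
# Sketch: per-host queries against a (val, flight) index, each with its
# own LIMIT, so the planner can walk the index backwards and stop early.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE runvars (flight INT, job TEXT, name TEXT, val TEXT)")
db.execute("CREATE INDEX runvars_val ON runvars (val, flight)")
db.executemany("INSERT INTO runvars VALUES (?,?,?,?)",
               [(f, "job", "host", h) for f in range(1, 6)
                for h in ("alpha", "bravo")])

limit = 3
hosts = {}
for host in ("alpha", "bravo"):
    # One query per host, rather than "val = ? OR val = ? OR ..." once.
    hosts[host] = db.execute(
        "SELECT flight FROM runvars WHERE name = 'host' AND val = ?"
        " ORDER BY flight DESC LIMIT ?", (host, limit)).fetchall()
```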

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
v2: Use proper \ escaping for underscores in LIKE
---
 schema/runvars-host-index.sql |  2 +-
 sg-report-host-history        | 27 +++++++++------------------
 2 files changed, 10 insertions(+), 19 deletions(-)

diff --git a/schema/runvars-host-index.sql b/schema/runvars-host-index.sql
index 222a0a30..6a3ef377 100644
--- a/schema/runvars-host-index.sql
+++ b/schema/runvars-host-index.sql
@@ -1,4 +1,4 @@
--- ##OSSTEST## 009 Preparatory
+-- ##OSSTEST## 009 Needed
 --
 -- This index helps sg-report-host-history find relevant flights.
 
diff --git a/sg-report-host-history b/sg-report-host-history
index 1c2d19ae..15866ab6 100755
--- a/sg-report-host-history
+++ b/sg-report-host-history
@@ -165,34 +165,25 @@ sub jobquery ($$$) {
 our %hosts;
 
 sub mainquery () {
-    our $valcond = join " OR ", map { "val = ?" } keys %hosts;
-    our @params = keys %hosts;
-
     our $runvarq //= db_prepare(<<END);
-	SELECT flight, job, name, val, status
+	SELECT flight, job, name, status
 	  FROM runvars
           JOIN jobs USING (flight, job)
-	 WHERE $namecond
-	   AND ($valcond)
+	 WHERE (name = 'host' OR name LIKE '%\_host')
+	   AND val = ?
 	   AND $flightcond
            AND $restrictflight_cond
            AND flight > ?
 	 ORDER BY flight DESC
-	 LIMIT ($limit * 3 + 100) * ?
+         LIMIT $limit * 2
 END
+    foreach my $host (sort keys %hosts) {
+	print DEBUG "MAINQUERY $host...\n";
+	$runvarq->execute($host, $minflight);
 
-    push @params, $minflight;
-    push @params, scalar keys %hosts;
-
-    print DEBUG "MAINQUERY...\n";
-    $runvarq->execute(@params);
-
-    print DEBUG "FIRST PASS\n";
-    while (my $jr= $runvarq->fetchrow_hashref()) {
-	print DEBUG " $jr->{flight}.$jr->{job} ";
-	push @{ $hosts{$jr->{val}} }, $jr;
+	$hosts{$host} = $runvarq->fetchall_arrayref({});
+	print DEBUG "MAINQUERY $host got ".(scalar @{ $hosts{$host} })."\n";
     }
-    print DEBUG "\n";
 }
 
 sub reporthost ($) {
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri Jul 31 12:01:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 12:01:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1TjB-0001ft-Pk; Fri, 31 Jul 2020 12:01:33 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nr0X=BK=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1k1TjB-0001R9-3l
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 12:01:33 +0000
X-Inumbo-ID: 9260ce9a-d325-11ea-8e2b-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9260ce9a-d325-11ea-8e2b-bc764e2007e4;
 Fri, 31 Jul 2020 12:01:31 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1k1TN3-0001W4-Dg; Fri, 31 Jul 2020 12:38:41 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH v2 35/41] cs-bisection-step: Break out qtxt_common_ok
Date: Fri, 31 Jul 2020 12:38:14 +0100
Message-Id: <20200731113820.5765-36-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200731113820.5765-1-ian.jackson@eu.citrix.com>
References: <20200731113820.5765-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Make this bit of query into a subref which takes a $table argument.

No change to the generated query.
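
The idea, transposed to Python for illustration (the original is Perl, and
these names are only stand-ins), is a small function returning the shared
query fragment for a given table alias, so the fragment is written once:

```python
# Sketch: generate the shared LIKE fragment from a table alias.
def qtxt_common_tree_ok(table):
    # Doubled backslash in the Python source yields a single SQL backslash.
    return f"{table}.name LIKE E'tree\\_%'"

# The fragment is then interpolated wherever the query needs it:
q = f"SELECT * FROM runvars url WHERE {qtxt_common_tree_ok('url')}"
```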

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
v2: New patch.
---
 cs-bisection-step | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/cs-bisection-step b/cs-bisection-step
index f11726aa..ba0c6424 100755
--- a/cs-bisection-step
+++ b/cs-bisection-step
@@ -190,6 +190,13 @@ sub flight_rmap ($$) {
 END
     };
 
+    my $qtxt_common_tree_ok = sub {
+	my ($table) = @_;
+	[<<END];
+  	      $table.name LIKE E'tree\\_%'
+END
+    };
+
     $dbh_tests->do(<<END, {});
           CREATE TEMP TABLE tmp_build_info (
               use varchar NOT NULL,
@@ -267,7 +274,7 @@ $qtxt_common_tables
 
            WHERE
 @{ $qtxt_common_rev_ok->('rev') } AND
-  	          url.name LIKE E'tree\\_%'
+@{ $qtxt_common_tree_ok->('url') }
 	     AND  url.use = rev.use
 	     AND  url.job = rev.job
 	     AND (rev.name = 'built_revision_' || substr(url.name,6) OR
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri Jul 31 12:01:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 12:01:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1TjL-0001mI-3V; Fri, 31 Jul 2020 12:01:43 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nr0X=BK=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1k1TjJ-0001lW-Sp
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 12:01:41 +0000
X-Inumbo-ID: 982be3e6-d325-11ea-8e2b-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 982be3e6-d325-11ea-8e2b-bc764e2007e4;
 Fri, 31 Jul 2020 12:01:41 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1k1TMy-0001W4-Vh; Fri, 31 Jul 2020 12:38:37 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH v2 24/41] sg-report-host-history: Add a debug print
 after sorting jobs
Date: Fri, 31 Jul 2020 12:38:03 +0100
Message-Id: <20200731113820.5765-25-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200731113820.5765-1-ian.jackson@eu.citrix.com>
References: <20200731113820.5765-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

This helps rule out this sort as a source of slowness.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 sg-report-host-history | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/sg-report-host-history b/sg-report-host-history
index a34458e0..1c2d19ae 100755
--- a/sg-report-host-history
+++ b/sg-report-host-history
@@ -318,6 +318,8 @@ END
 
     @rows = sort { $b->{finished} <=> $a->{finished} } @rows;
 
+    print DEBUG "SORTED\n";
+
     my $alternate = 0;
     my $wrote = 0;
     my $runvarq_hits = 0;
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri Jul 31 12:01:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 12:01:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1TjN-0001oE-DQ; Fri, 31 Jul 2020 12:01:45 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nr0X=BK=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1k1TjL-0001lW-Kc
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 12:01:43 +0000
X-Inumbo-ID: 9903baab-d325-11ea-8e2b-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9903baab-d325-11ea-8e2b-bc764e2007e4;
 Fri, 31 Jul 2020 12:01:43 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1k1TMu-0001W4-DT; Fri, 31 Jul 2020 12:38:32 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH v2 17/41] duration_estimator: Move $uptoincl_testid to
 separate @x_params
Date: Fri, 31 Jul 2020 12:37:56 +0100
Message-Id: <20200731113820.5765-18-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200731113820.5765-1-ian.jackson@eu.citrix.com>
References: <20200731113820.5765-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

This is going to be useful soon.

No functional change.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 Osstest/Executive.pm | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/Osstest/Executive.pm b/Osstest/Executive.pm
index 359120c0..fb975dac 100644
--- a/Osstest/Executive.pm
+++ b/Osstest/Executive.pm
@@ -1223,6 +1223,9 @@ END_ALWAYS
     return sub {
         my ($job, $hostidname, $onhost, $uptoincl_testid) = @_;
 
+	my @x_params;
+	push @x_params, $uptoincl_testid if $will_uptoincl_testid;
+
         my $dbg= $debug ? sub {
             $debug->("DUR $branch $blessing $job $hostidname $onhost @_");
         } : sub { };
@@ -1257,7 +1260,7 @@ END_ALWAYS
         my $duration_max= 0;
         foreach my $ref (@$refs) {
 	    my @d_d_args = ($ref->{flight}, $job);
-	    push @d_d_args, $uptoincl_testid if $will_uptoincl_testid;
+	    push @d_d_args, @x_params;
             $duration_duration_q->execute(@d_d_args);
             my ($duration) = $duration_duration_q->fetchrow_array();
             $duration_duration_q->finish();
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri Jul 31 12:02:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 12:02:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1Tjk-00022R-Nm; Fri, 31 Jul 2020 12:02:08 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nr0X=BK=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1k1Tjk-000224-5W
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 12:02:08 +0000
X-Inumbo-ID: a796ee7b-d325-11ea-8e2b-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a796ee7b-d325-11ea-8e2b-bc764e2007e4;
 Fri, 31 Jul 2020 12:02:07 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1k1TMs-0001W4-9R; Fri, 31 Jul 2020 12:38:30 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH v2 11/41] sg-report-flight: Use the job row from the
 initial query
Date: Fri, 31 Jul 2020 12:37:50 +0100
Message-Id: <20200731113820.5765-12-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200731113820.5765-1-ian.jackson@eu.citrix.com>
References: <20200731113820.5765-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

$jcheckq is redundant: we looked this up right at the start.

This is not expected to speed things up very much, but it makes things
somewhat cleaner and clearer.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 sg-report-flight | 11 +++--------
 1 file changed, 3 insertions(+), 8 deletions(-)

diff --git a/sg-report-flight b/sg-report-flight
index d218b24e..cb6b8174 100755
--- a/sg-report-flight
+++ b/sg-report-flight
@@ -160,10 +160,6 @@ sub findaflight ($$$$$) {
         return undef;
     }
 
-    my $jcheckq= db_prepare(<<END);
-        SELECT status FROM jobs WHERE flight=? AND job=?
-END
-
     my $checkq= db_prepare(<<END);
         SELECT status FROM steps WHERE flight=? AND job=? AND testid=?
                                    AND status!='skip'
@@ -263,7 +259,7 @@ $runvars_conds
             ORDER BY flight DESC
             LIMIT 1000
       )
-      SELECT *
+      SELECT flight, jobs.status
         FROM sub
 $flightsq_jobs_join
        WHERE (1=1)
@@ -304,7 +300,7 @@ END
                 WHERE flight=?
 END
 
-    while (my ($tflight) = $flightsq->fetchrow_array) {
+    while (my ($tflight, $tjstatus) = $flightsq->fetchrow_array) {
 	# Recurse from the starting flight looking for relevant build
 	# jobs.  We start with all jobs in $tflight, and for each job
 	# we also process any other jobs it refers to in *buildjob runvars.
@@ -407,8 +403,7 @@ END
             $checkq->execute($tflight, $job, $testid);
             ($chkst) = $checkq->fetchrow_array();
 	    if (!defined $chkst) {
-		$jcheckq->execute($tflight, $job);
-		my ($jchkst) = $jcheckq->fetchrow_array();
+		my $jchkst = $tjstatus;
 		$chkst = $jchkst if $jchkst eq 'starved';
 	    }
         }
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri Jul 31 12:02:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 12:02:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1Tjt-00027b-0S; Fri, 31 Jul 2020 12:02:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nr0X=BK=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1k1Tjr-00026u-HR
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 12:02:15 +0000
X-Inumbo-ID: ac328bec-d325-11ea-8e2b-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ac328bec-d325-11ea-8e2b-bc764e2007e4;
 Fri, 31 Jul 2020 12:02:14 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1k1TN2-0001W4-LZ; Fri, 31 Jul 2020 12:38:40 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH v2 33/41] cs-bisection-step: Generalise
 qtxt_common_rev_ok
Date: Fri, 31 Jul 2020 12:38:12 +0100
Message-Id: <20200731113820.5765-34-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200731113820.5765-1-ian.jackson@eu.citrix.com>
References: <20200731113820.5765-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

* Make it into a subref which takes a $table argument.
* Change the two references into function calls using the @{...} syntax.
* Move the definition earlier in the file.

No change to the generated query.
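(As a standalone illustration, not osstest code: the pattern above relies on a subref returning a one-element array ref, so the fragment can be spliced into a double-quoted here-doc or string with @{ ... } list interpolation. The names below are invented for the sketch.)

```perl
#!/usr/bin/perl -w
use strict;

# Hypothetical sketch of the technique (names invented): a SQL fragment
# parameterised by table alias, returned as a one-element array ref so
# it can be interpolated into a query string with @{ ... }.
my $qtxt_rev_ok = sub {
    my ($table) = @_;
    ["($table.name LIKE 'revision_%')"];
};

# @{ ... } interpolates the (single-element) array into the string.
my $query = "SELECT * FROM tmp_build_info AS rev WHERE @{ $qtxt_rev_ok->('rev') }";
print "$query\n";
```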

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
v2: New patch.
---
 cs-bisection-step | 16 ++++++++++------
 1 file changed, 10 insertions(+), 6 deletions(-)

diff --git a/cs-bisection-step b/cs-bisection-step
index 9a0fee39..5d4e179e 100755
--- a/cs-bisection-step
+++ b/cs-bisection-step
@@ -182,6 +182,14 @@ END
 sub flight_rmap ($$) {
     my ($flight, $need_urls) = @_;
 
+    my $qtxt_common_rev_ok = sub {
+	my ($table) = @_;
+	[<<END];
+                 ($table.name LIKE E'built\\_revision\\_%' OR
+                  $table.name LIKE E'revision\\_%')
+END
+    };
+
     $dbh_tests->do(<<END, {});
           CREATE TEMP TABLE tmp_build_info (
               use varchar NOT NULL,
@@ -236,10 +244,6 @@ END
     my $qtxt_common_tables = <<END;
 	    FROM tmp_build_info AS rev
 END
-    my $qtxt_common_rev_condition = <<END;
-                 (rev.name LIKE E'built\\_revision\\_%' OR
-                  rev.name LIKE E'revision\\_%')
-END
 
     my $sth= db_prepare(!$need_urls ? <<END_NOURLS : <<END_URLS);
         SELECT
@@ -249,7 +253,7 @@ $qtxt_common_results
 $qtxt_common_tables
 
            WHERE
-$qtxt_common_rev_condition
+@{ $qtxt_common_rev_ok->('rev') }
 
 	   ORDER by rev.name;
 
@@ -262,7 +266,7 @@ $qtxt_common_tables
       CROSS JOIN tmp_build_info AS url
 
            WHERE
-$qtxt_common_rev_condition
+@{ $qtxt_common_rev_ok->('rev') }
   	     AND  url.name LIKE E'tree\\_%'
 	     AND  url.use = rev.use
 	     AND  url.job = rev.job
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri Jul 31 12:02:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 12:02:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1Tjv-00029K-9j; Fri, 31 Jul 2020 12:02:19 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nr0X=BK=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1k1Tju-00026u-4q
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 12:02:18 +0000
X-Inumbo-ID: ad15deb1-d325-11ea-8e2b-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ad15deb1-d325-11ea-8e2b-bc764e2007e4;
 Fri, 31 Jul 2020 12:02:16 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1k1TMz-0001W4-QA; Fri, 31 Jul 2020 12:38:37 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH v2 26/41] sg-report-host-history: Reorganisation: Make
 mainquery per-host
Date: Fri, 31 Jul 2020 12:38:05 +0100
Message-Id: <20200731113820.5765-27-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200731113820.5765-1-ian.jackson@eu.citrix.com>
References: <20200731113820.5765-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

This moves the loop over hosts into the main program.  We are working
our way to a new code structure.

No functional change.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 sg-report-host-history | 19 +++++++++++--------
 1 file changed, 11 insertions(+), 8 deletions(-)

diff --git a/sg-report-host-history b/sg-report-host-history
index 15866ab6..34216aa2 100755
--- a/sg-report-host-history
+++ b/sg-report-host-history
@@ -164,7 +164,9 @@ sub jobquery ($$$) {
 
 our %hosts;
 
-sub mainquery () {
+sub mainquery ($) {
+    my ($host) = @_;
+
     our $runvarq //= db_prepare(<<END);
 	SELECT flight, job, name, status
 	  FROM runvars
@@ -177,13 +179,12 @@ sub mainquery () {
 	 ORDER BY flight DESC
          LIMIT $limit * 2
 END
-    foreach my $host (sort keys %hosts) {
-	print DEBUG "MAINQUERY $host...\n";
-	$runvarq->execute($host, $minflight);
 
-	$hosts{$host} = $runvarq->fetchall_arrayref({});
-	print DEBUG "MAINQUERY $host got ".(scalar @{ $hosts{$host} })."\n";
-    }
+    print DEBUG "MAINQUERY $host...\n";
+    $runvarq->execute($host, $minflight);
+
+    $hosts{$host} = $runvarq->fetchall_arrayref({});
+    print DEBUG "MAINQUERY $host got ".(scalar @{ $hosts{$host} })."\n";
 }
 
 sub reporthost ($) {
@@ -474,7 +475,9 @@ db_retry($dbh_tests, [], sub {
 });
 
 db_retry($dbh_tests, [], sub {
-    mainquery();
+    foreach my $host (sort keys %hosts) {
+	mainquery($host);
+    }
 });
 
 foreach my $host (sort keys %hosts) {
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri Jul 31 12:02:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 12:02:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1Tk0-0002Cz-IP; Fri, 31 Jul 2020 12:02:24 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nr0X=BK=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1k1Tjz-00026u-58
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 12:02:23 +0000
X-Inumbo-ID: af900828-d325-11ea-8e2b-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id af900828-d325-11ea-8e2b-bc764e2007e4;
 Fri, 31 Jul 2020 12:02:20 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1k1TN6-0001W4-2a; Fri, 31 Jul 2020 12:38:44 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH v2 40/41] cs-bisection-step: Lay out the revision
 tuple graph once
Date: Fri, 31 Jul 2020 12:38:19 +0100
Message-Id: <20200731113820.5765-41-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200731113820.5765-1-ian.jackson@eu.citrix.com>
References: <20200731113820.5765-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

The graph layout algorithm is not very fast, particularly if the
revision graph is big.  In my test case this saves about 10s.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
v2: New patch.
---
 cs-bisection-step | 11 ++++++++---
 1 file changed, 8 insertions(+), 3 deletions(-)

diff --git a/cs-bisection-step b/cs-bisection-step
index 027032a1..8544bac0 100755
--- a/cs-bisection-step
+++ b/cs-bisection-step
@@ -1113,10 +1113,15 @@ END
         or die "$!";
 
     if (eval {
+	print DEBUG "RUNNING dot -Txdot\n";
+	system_checked("dot", "-Txdot", "-o$graphfile.xdot",
+		       "$graphfile.dot");
         foreach my $fmt (qw(ps png svg)) {
-	    print DEBUG "RUNNING dot -T$fmt\n";
-            system_checked("dot", "-T$fmt", "-o$graphfile.$fmt",
-			   "$graphfile.dot");
+	    # neato rather than dot, because neato just uses positions
+	    # etc. in the input whereas dot does (re)calculation work.
+	    print DEBUG "RUNNING neato -n2 -T$fmt\n";
+            system_checked("neato", "-n2", "-T$fmt", "-o$graphfile.$fmt",
+			   "$graphfile.xdot");
         }
 	open SVGI, "$graphfile.svg" or die "$graphfile.svg $!";
 	open SVGO, ">", "$graphfile.svg.new" or die "$graphfile.svg.new $!";
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri Jul 31 12:02:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 12:02:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1Tk4-0002HY-Si; Fri, 31 Jul 2020 12:02:28 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nr0X=BK=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1k1Tk4-00026u-5E
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 12:02:28 +0000
X-Inumbo-ID: b261fa71-d325-11ea-8e2b-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b261fa71-d325-11ea-8e2b-bc764e2007e4;
 Fri, 31 Jul 2020 12:02:26 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1k1TN2-0001W4-AN; Fri, 31 Jul 2020 12:38:40 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH v2 32/41] adhoc-revtuple-generator: Fix an undef
 warning in a debug print
Date: Fri, 31 Jul 2020 12:38:11 +0100
Message-Id: <20200731113820.5765-33-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200731113820.5765-1-ian.jackson@eu.citrix.com>
References: <20200731113820.5765-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

$parents might be undef here.
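(A minimal standalone sketch of the guard being added; this is an invented example, not the osstest code. Dereferencing an undef array ref under strict refs dies, so the count is taken only when the ref is defined.)

```perl
#!/usr/bin/perl -w
use strict;

# Count elements of an array ref, printing "-" when the ref is undef,
# mirroring the ($parents ? scalar(@$parents) : "-") guard in the patch.
sub parent_count {
    my ($parents) = @_;                      # array ref, or undef
    return $parents ? scalar(@$parents) : "-";
}

print "x", parent_count(["a", "b"]), "\n";   # x2
print "x", parent_count(undef), "\n";        # x-
```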

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
New in v2.
---
 adhoc-revtuple-generator | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/adhoc-revtuple-generator b/adhoc-revtuple-generator
index c8d6f4ad..ec33305a 100755
--- a/adhoc-revtuple-generator
+++ b/adhoc-revtuple-generator
@@ -463,7 +463,7 @@ sub coalesce {
 	$out->{$node}{Date}= $explode_date;
 	my $parents= $graphs[$explode_i]{ $node[$explode_i] }{Parents};
 	print DEBUG "#$explode_i $explode_isearliest".
-            " $explode_date  x".scalar(@$parents)."\n";
+            " $explode_date  x".($parents ? scalar(@$parents) : "-")."\n";
 
 	foreach my $subparent (@$parents) {
 	    $node[$explode_i]= $subparent;
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri Jul 31 12:03:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 12:03:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1Tkv-0002k7-8k; Fri, 31 Jul 2020 12:03:21 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=S17i=BK=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1k1Tkt-0002jl-RH
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 12:03:19 +0000
X-Inumbo-ID: d27c4a0e-d325-11ea-8e2b-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d27c4a0e-d325-11ea-8e2b-bc764e2007e4;
 Fri, 31 Jul 2020 12:03:19 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 642C7ABD2;
 Fri, 31 Jul 2020 12:03:31 +0000 (UTC)
Subject: Re: [PATCH v1] tools/xen-cpuid: show enqcmd
To: Olaf Hering <olaf@aepfle.de>
References: <20200730163406.31020-1-olaf@aepfle.de>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <65317ac2-0dd0-b453-caec-e5529b423d95@suse.com>
Date: Fri, 31 Jul 2020 14:03:17 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20200730163406.31020-1-olaf@aepfle.de>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 30.07.2020 18:34, Olaf Hering wrote:
> Translate <29> into a feature string.
> 
> Signed-off-by: Olaf Hering <olaf@aepfle.de>

Acked-by: Jan Beulich <jbeulich@suse.com>

Albeit I'm pretty sure there are more missing than just this lone one.

Jan

> --- a/tools/misc/xen-cpuid.c
> +++ b/tools/misc/xen-cpuid.c
> @@ -133,7 +133,7 @@ static const char *const str_7c0[32] =
>      [22] = "rdpid",
>      /* 24 */                   [25] = "cldemote",
>      /* 26 */                   [27] = "movdiri",
> -    [28] = "movdir64b",
> +    [28] = "movdir64b",        [29] = "enqcmd",
>      [30] = "sgx-lc",
>  };
>  
> 



From xen-devel-bounces@lists.xenproject.org Fri Jul 31 12:03:35 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 12:03:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1Tl9-0002q7-I1; Fri, 31 Jul 2020 12:03:35 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OD0g=BK=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1k1Tl8-0002pQ-Ag
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 12:03:34 +0000
X-Inumbo-ID: da737b92-d325-11ea-8e2b-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id da737b92-d325-11ea-8e2b-bc764e2007e4;
 Fri, 31 Jul 2020 12:03:32 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=QmmkKWOaeBEFTkhy9NuTAlOi0R7nF9jHdE5Cn/wM76I=; b=tiOKnemT2oQqyykCnXviU9p5K
 8NXpU2cFg7SH/NPRBvzBeRBgw4c9k+ytin1FVmF/ckGGhwTCaZp46zVBv2FI+OrwR6HUNKNqNf3+D
 iq40Nne247pvmLQheLM1VVpiJ9btJdRi9Qjf7tWTV3B2cLGlEu+9X0DPoq6JzBnLt5edI=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1k1Tl5-0000Id-Pl; Fri, 31 Jul 2020 12:03:31 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1k1Tl5-0003OD-DS; Fri, 31 Jul 2020 12:03:31 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1k1Tl5-0003FL-Cq; Fri, 31 Jul 2020 12:03:31 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-152311-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 152311: tolerable FAIL - PUSHED
X-Osstest-Failures: xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
X-Osstest-Versions-This: xen=98bed5de1de3352c63cfe29a00f17e8d9ce72689
X-Osstest-Versions-That: xen=b071ec25e85c4aacf3da59e5258cda0b1c4df45d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 31 Jul 2020 12:03:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 152311 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/152311/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 152293
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 152293
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 152293
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 152293
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 152293
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 152293
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 152293
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 152293
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop             fail like 152293
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass

version targeted for testing:
 xen                  98bed5de1de3352c63cfe29a00f17e8d9ce72689
baseline version:
 xen                  b071ec25e85c4aacf3da59e5258cda0b1c4df45d

Last test of basis   152293  2020-07-30 01:51:37 Z    1 days
Testing same since   152311  2020-07-31 01:07:41 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Fam Zheng <famzheng@amazon.com>
  Roger Pau Monne <roger.pau@citrix.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   b071ec25e8..98bed5de1d  98bed5de1de3352c63cfe29a00f17e8d9ce72689 -> master


From xen-devel-bounces@lists.xenproject.org Fri Jul 31 12:05:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 12:05:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1Tmc-0003B9-55; Fri, 31 Jul 2020 12:05:06 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=oG5j=BK=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1k1Tmb-0003Ax-2A
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 12:05:05 +0000
X-Inumbo-ID: 10176509-d326-11ea-8e2b-bc764e2007e4
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 10176509-d326-11ea-8e2b-bc764e2007e4;
 Fri, 31 Jul 2020 12:05:04 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1596197105;
 h=subject:to:cc:references:from:message-id:date:
 mime-version:in-reply-to:content-transfer-encoding;
 bh=U8btkJG1pX5eVem4AvDe70lJNMz+0CIS6/m8SL4EwNI=;
 b=Ek7LFyg7/11pMLB58sXaZD97RTVDauZmz5Plh9A4G9odIys3MGwdbILx
 t3ZNeunXLZKNwDxKwUz42Nx0i4hr3jeeoHoy0PEjd8Eyd+vDcMVO5CVog
 knir/Y9Lk6JHhT6uKjDGm4WJj2at9lcKXPHkZM6AafbpzWIP3yXn9lUFn 0=;
Authentication-Results: esa2.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: 3qb8Cb4PLMZzWyLe90hE+yG4wiTFRR1BmmJvmN3ovKjye2sZBhP5kpK8CihGssZftir7aihwiS
 hC3fUTR4SLHZCVBrtmJgILZIhbC/Hq4YqUe3Z9TT1hIbgxfoyi38HqI2VGDRXxxcEIvOl7MZ1z
 Pp8fo9qKpojeLovlYwIALtbi+2pM7rtsfIElqPv+GeTDxlWhbTTmZmPgI2xrrOToHiXocI3BpL
 rv7e3tS/gDEQnFcbYKXiifE6BPAAnhL1fENmUSH/WYJS68SzqIpNrbk6bOQwD5yGfHQVhyGjRY
 y8U=
X-SBRS: 3.7
X-MesageID: 23636293
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,418,1589256000"; d="scan'208";a="23636293"
Subject: Re: [PATCH v1] tools/xen-cpuid: show enqcmd
To: Jan Beulich <jbeulich@suse.com>, Olaf Hering <olaf@aepfle.de>
References: <20200730163406.31020-1-olaf@aepfle.de>
 <65317ac2-0dd0-b453-caec-e5529b423d95@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <6e467a8f-d727-8511-da56-69901b6ada85@citrix.com>
Date: Fri, 31 Jul 2020 13:04:35 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <65317ac2-0dd0-b453-caec-e5529b423d95@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, Ian Jackson <ian.jackson@eu.citrix.com>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 31/07/2020 13:03, Jan Beulich wrote:
> On 30.07.2020 18:34, Olaf Hering wrote:
>> Translate <29> into a feature string.
>>
>> Signed-off-by: Olaf Hering <olaf@aepfle.de>
> Acked-by: Jan Beulich <jbeulich@suse.com>
>
> Albeit I'm pretty sure there are more missing than just this lone one.

And in particular, probably missing from libxl_cpuid.c, which I was
meaning to check when I've got a free moment.

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri Jul 31 12:12:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 12:12:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1Ttz-00047x-25; Fri, 31 Jul 2020 12:12:43 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nr0X=BK=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1k1Tty-00047s-0v
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 12:12:42 +0000
X-Inumbo-ID: 2140508a-d327-11ea-8e2c-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2140508a-d327-11ea-8e2c-bc764e2007e4;
 Fri, 31 Jul 2020 12:12:40 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1k1TMv-0001W4-2L; Fri, 31 Jul 2020 12:38:33 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH v2 18/41] duration_estimator: Move duration query loop
 into database
Date: Fri, 31 Jul 2020 12:37:57 +0100
Message-Id: <20200731113820.5765-19-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200731113820.5765-1-ian.jackson@eu.citrix.com>
References: <20200731113820.5765-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <George.Dunlap@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Stuff the two queries together: we use the first query as a WITH
clause.  This is significantly faster, perhaps because the query
optimiser does a better job, but probably just because it saves on
round trips.

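[Editorial aside: the folding described above can be sketched in a few
lines.  The real code is Perl DBI against PostgreSQL; this is a toy
Python/sqlite3 reduction, and the `flights`/`steps` tables here are
simplified stand-ins, not the real osstest schema.]

```python
import sqlite3

# Toy schema loosely modelled on osstest's flights/steps tables.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE flights (flight INTEGER, started INTEGER);
    CREATE TABLE steps (flight INTEGER, job TEXT,
                        started INTEGER, finished INTEGER);
    INSERT INTO flights VALUES (1, 100), (2, 200);
    INSERT INTO steps VALUES (1, 'job-a', 100, 150),
                             (1, 'job-a', 150, 180),
                             (2, 'job-a', 200, 260);
""")

# Before: one query to list flights, then a per-flight round trip to
# compute each duration.  After: the first query becomes a WITH clause
# and the duration becomes a correlated subquery -- one round trip.
combined = """
    WITH f AS (
        SELECT flight, started FROM flights ORDER BY started DESC
    )
    SELECT flight,
           (SELECT max(finished) - min(started)
              FROM steps s
             WHERE s.flight = f.flight AND s.job = ?) AS duration
      FROM f
"""
rows = conn.execute(combined, ("job-a",)).fetchall()
print(rows)  # [(2, 60), (1, 80)]
```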
No functional change.

Perf: subjectively this seemed to help when the cache was cold.  Now I
have a warm cache and it doesn't seem to make much difference.

Perf: runtime of my test case now ~5-7s.

Example queries before (from the debugging output):

 Query A part I:

            SELECT f.flight AS flight,
                   j.job AS job,
                   f.started AS started,
                   j.status AS status
                     FROM flights f
                     JOIN jobs j USING (flight)
                     JOIN runvars r
                             ON  f.flight=r.flight
                            AND  r.name=?
                    WHERE  j.job=r.job
                      AND  f.blessing=?
                      AND  f.branch=?
                      AND  j.job=?
                      AND  r.val=?
                      AND  (j.status='pass' OR j.status='fail'
                           OR j.status='truncated'!)
                      AND  f.started IS NOT NULL
                      AND  f.started >= ?
                 ORDER BY f.started DESC

 With bind variables:
     "test-amd64-i386-xl-pvshim"
     "guest-start"

 Query B part I:

            SELECT f.flight AS flight,
                   s.job AS job,
                   NULL as started,
                   NULL as status,
                   max(s.finished) AS max_finished
                      FROM steps s JOIN flights f
                        ON s.flight=f.flight
                     WHERE s.job=? AND f.blessing=? AND f.branch=?
                       AND s.finished IS NOT NULL
                       AND f.started IS NOT NULL
                       AND f.started >= ?
                     GROUP BY f.flight, s.job
                     ORDER BY max_finished DESC

 With bind variables:
    "test-armhf-armhf-libvirt"
    'real'
    "xen-unstable"
    1594144469

 Query common part II:

        WITH tsteps AS
        (
            SELECT *
              FROM steps
             WHERE flight=? AND job=?
        )
        , tsteps2 AS
        (
            SELECT *
              FROM tsteps
             WHERE finished <=
                     (SELECT finished
                        FROM tsteps
                       WHERE tsteps.testid = ?)
        )
        SELECT (
            SELECT max(finished)-min(started)
              FROM tsteps2
          ) - (
            SELECT sum(finished-started)
              FROM tsteps2
             WHERE step = 'ts-hosts-allocate'
          )
                AS duration

 With bind variables from previous query, eg:
     152045
     "test-armhf-armhf-libvirt"
     "guest-start.2"

After:

 Query A (combined):

            WITH f AS (
            SELECT f.flight AS flight,
                   j.job AS job,
                   f.started AS started,
                   j.status AS status
                     FROM flights f
                     JOIN jobs j USING (flight)
                     JOIN runvars r
                             ON  f.flight=r.flight
                            AND  r.name=?
                    WHERE  j.job=r.job
                      AND  f.blessing=?
                      AND  f.branch=?
                      AND  j.job=?
                      AND  r.val=?
                      AND  (j.status='pass' OR j.status='fail'
                           OR j.status='truncated'!)
                      AND  f.started IS NOT NULL
                      AND  f.started >= ?
                 ORDER BY f.started DESC

            )
            SELECT flight, max_finished, job, started, status,
            (
        WITH tsteps AS
        (
            SELECT *
              FROM steps
             WHERE flight=f.flight AND job=f.job
        )
        , tsteps2 AS
        (
            SELECT *
              FROM tsteps
             WHERE finished <=
                     (SELECT finished
                        FROM tsteps
                       WHERE tsteps.testid = ?)
        )
        SELECT (
            SELECT max(finished)-min(started)
              FROM tsteps2
          ) - (
            SELECT sum(finished-started)
              FROM tsteps2
             WHERE step = 'ts-hosts-allocate'
          )
                AS duration

            ) FROM f

 Query B (combined):

            WITH f AS (
            SELECT f.flight AS flight,
                   s.job AS job,
                   NULL as started,
                   NULL as status,
                   max(s.finished) AS max_finished
                      FROM steps s JOIN flights f
                        ON s.flight=f.flight
                     WHERE s.job=? AND f.blessing=? AND f.branch=?
                       AND s.finished IS NOT NULL
                       AND f.started IS NOT NULL
                       AND f.started >= ?
                     GROUP BY f.flight, s.job
                     ORDER BY max_finished DESC

            )
            SELECT flight, max_finished, job, started, status,
            (
        WITH tsteps AS
        (
            SELECT *
              FROM steps
             WHERE flight=f.flight AND job=f.job
        )
        , tsteps2 AS
        (
            SELECT *
              FROM tsteps
             WHERE finished <=
                     (SELECT finished
                        FROM tsteps
                       WHERE tsteps.testid = ?)
        )
        SELECT (
            SELECT max(finished)-min(started)
              FROM tsteps2
          ) - (
            SELECT sum(finished-started)
              FROM tsteps2
             WHERE step = 'ts-hosts-allocate'
          )
                AS duration

            ) FROM f

Diff for query A:

@@ -1,3 +1,4 @@
+            WITH f AS (
             SELECT f.flight AS flight,
                    j.job AS job,
                    f.started AS started,
@@ -18,11 +19,14 @@
                       AND  f.started >= ?
                  ORDER BY f.started DESC

+            )
+            SELECT flight, max_finished, job, started, status,
+            (
        WITH tsteps AS
         (
             SELECT *
               FROM steps
-             WHERE flight=? AND job=?
+             WHERE flight=f.flight AND job=f.job
         )
         , tsteps2 AS
         (
@@ -42,3 +46,5 @@
              WHERE step = 'ts-hosts-allocate'
           )
                 AS duration
+
+            ) FROM f

Diff for query B:

@@ -1,3 +1,4 @@
+            WITH f AS (
             SELECT f.flight AS flight,
                    s.job AS job,
                    NULL as started,
@@ -12,11 +13,14 @@
                      GROUP BY f.flight, s.job
                      ORDER BY max_finished DESC

+            )
+            SELECT flight, max_finished, job, started, status,
+            (
         WITH tsteps AS
         (
             SELECT *
               FROM steps
-             WHERE flight=? AND job=?
+             WHERE flight=f.flight AND job=f.job
         )
         , tsteps2 AS
         (
@@ -36,3 +40,5 @@
              WHERE step = 'ts-hosts-allocate'
           )
                 AS duration
+
+            ) FROM f

Reviewed-by: George Dunlap <George.Dunlap@citrix.com>
Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 Osstest/Executive.pm | 31 ++++++++++++++++++++-----------
 1 file changed, 20 insertions(+), 11 deletions(-)

diff --git a/Osstest/Executive.pm b/Osstest/Executive.pm
index fb975dac..684cafc3 100644
--- a/Osstest/Executive.pm
+++ b/Osstest/Executive.pm
@@ -1192,7 +1192,7 @@ END
         (
             SELECT *
               FROM steps
-             WHERE flight=? AND job=?
+             WHERE flight=f.flight AND job=f.job
         )
 END_ALWAYS
         , tsteps2 AS
@@ -1216,9 +1216,20 @@ END_UPTOINCL
                 AS duration
 END_ALWAYS
 	
-    my $recentflights_q= $dbh_tests->prepare($recentflights_qtxt);
-    my $duration_anyref_q= $dbh_tests->prepare($duration_anyref_qtxt);
-    my $duration_duration_q = $dbh_tests->prepare($duration_duration_qtxt);
+    my $prepare_combi = sub {
+	db_prepare(<<END);
+            WITH f AS (
+$_[0]
+            )
+            SELECT flight, max_finished, job, started, status,
+            (
+$duration_duration_qtxt
+            ) FROM f
+END
+    };
+
+    my $recentflights_q= $prepare_combi->($recentflights_qtxt);
+    my $duration_anyref_q= $prepare_combi->($duration_anyref_qtxt);
 
     return sub {
         my ($job, $hostidname, $onhost, $uptoincl_testid) = @_;
@@ -1239,14 +1250,16 @@ END_ALWAYS
                                       $branch,
                                       $job,
                                       $onhost,
-                                      $limit);
+                                      $limit,
+				      @x_params);
             $refs= $recentflights_q->fetchall_arrayref({});
             $recentflights_q->finish();
             $dbg->("SAME-HOST GOT ".scalar(@$refs));
         }
 
         if (!@$refs) {
-            $duration_anyref_q->execute($job, $blessing, $branch, $limit);
+            $duration_anyref_q->execute($job, $blessing, $branch, $limit,
+					@x_params);
             $refs= $duration_anyref_q->fetchall_arrayref({});
             $duration_anyref_q->finish();
             $dbg->("ANY-HOST GOT ".scalar(@$refs));
@@ -1259,11 +1272,7 @@ END_ALWAYS
 
         my $duration_max= 0;
         foreach my $ref (@$refs) {
-	    my @d_d_args = ($ref->{flight}, $job);
-	    push @d_d_args, @x_params;
-            $duration_duration_q->execute(@d_d_args);
-            my ($duration) = $duration_duration_q->fetchrow_array();
-            $duration_duration_q->finish();
+            my ($duration) = $ref->{duration};
             if ($duration) {
                 $dbg->("REF $ref->{flight} DURATION $duration ".
 		       ($ref->{status} // ''));
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri Jul 31 12:12:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 12:12:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1Tu4-00048N-9w; Fri, 31 Jul 2020 12:12:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nr0X=BK=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1k1Tu3-000489-Db
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 12:12:47 +0000
X-Inumbo-ID: 24dc3ab0-d327-11ea-8e2c-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 24dc3ab0-d327-11ea-8e2c-bc764e2007e4;
 Fri, 31 Jul 2020 12:12:46 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1k1TN3-0001W4-Pd; Fri, 31 Jul 2020 12:38:41 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH v2 36/41] cs-bisection-step: Use db_prepare a few
 times instead of ->do
Date: Fri, 31 Jul 2020 12:38:15 +0100
Message-Id: <20200731113820.5765-37-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200731113820.5765-1-ian.jackson@eu.citrix.com>
References: <20200731113820.5765-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

With $dbh_tests->do(...), we can only get a debug trace of the queries
by using DBI_TRACE, which produces voluminous output.  Using our own
db_prepare invokes our own debugging.

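[Editorial aside: the idea of routing every statement through your own
prepare wrapper so it hits your own debug hook, rather than the
driver's global trace, looks roughly like this.  Again a hypothetical
Python/sqlite3 reduction of the Perl db_prepare, not the real code.]

```python
import sqlite3

def db_prepare(conn, sql, debug=print):
    """Log the query through our own hook, then return an object
    supporting execute(), mimicking prepare-then-execute style."""
    debug("QUERY: " + " ".join(sql.split()))
    cur = conn.cursor()

    class Prepared:
        def execute(self, *params):
            # Forward the bind parameters to the underlying cursor.
            return cur.execute(sql, params)

    return Prepared()

conn = sqlite3.connect(":memory:")
# Every statement now goes through our wrapper, so each one is logged
# individually instead of relying on a global, verbose driver trace.
db_prepare(conn, "CREATE TABLE t (x INTEGER)").execute()
db_prepare(conn, "INSERT INTO t VALUES (?)").execute(42)
rows = conn.execute("SELECT x FROM t").fetchall()
print(rows)  # [(42,)]
```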
No functional change.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
v2: New patch.
---
 cs-bisection-step | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/cs-bisection-step b/cs-bisection-step
index ba0c6424..1c165b78 100755
--- a/cs-bisection-step
+++ b/cs-bisection-step
@@ -197,7 +197,7 @@ END
 END
     };
 
-    $dbh_tests->do(<<END, {});
+    db_prepare(<<END)->execute();
           CREATE TEMP TABLE tmp_build_info (
               use varchar NOT NULL,
               name varchar NOT NULL,
@@ -206,7 +206,7 @@ END
               )
 END
 
-        $dbh_tests->do(<<END, {}, $job, $flight);
+    db_prepare(<<END)->execute($job, $flight);
     
         INSERT INTO tmp_build_info
         SELECT t.name AS use,
@@ -230,7 +230,7 @@ END
                                END)
 END
 
-    $dbh_tests->do(<<END, {}, $job, $flight);
+    db_prepare(<<END)->execute($job, $flight);
 
         INSERT INTO tmp_build_info
 	    SELECT ''   AS use,
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri Jul 31 12:12:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 12:12:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1Tu9-00049E-IC; Fri, 31 Jul 2020 12:12:53 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nr0X=BK=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1k1Tu8-000489-8T
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 12:12:52 +0000
X-Inumbo-ID: 27b4b37a-d327-11ea-8e2c-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 27b4b37a-d327-11ea-8e2c-bc764e2007e4;
 Fri, 31 Jul 2020 12:12:51 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1k1TMr-0001W4-UZ; Fri, 31 Jul 2020 12:38:30 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH v2 10/41] sg-report-flight: Use WITH clause to use
 index for $anypassq
Date: Fri, 31 Jul 2020 12:37:49 +0100
Message-Id: <20200731113820.5765-11-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200731113820.5765-1-ian.jackson@eu.citrix.com>
References: <20200731113820.5765-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Perf: runtime of my test case now ~11s

Example query before (from the Perl DBI trace):

        SELECT * FROM flights JOIN steps USING (flight)
            WHERE (branch='xen-unstable')
              AND job=? and testid=? and status='pass'
              AND ( (TRUE AND flight <= 151903) AND (blessing='real') )
            LIMIT 1

After:

        WITH s AS
        (
        SELECT * FROM steps
         WHERE job=? and testid=? and status='pass'
        )
        SELECT * FROM flights JOIN s USING (flight)
            WHERE (branch='xen-unstable')
              AND ( (TRUE AND flight <= 151903) AND (blessing='real') )
            LIMIT 1

In both cases with bind vars:

   "test-amd64-i386-xl-pvshim"
   "guest-start"

Diff to the query:

-        SELECT * FROM flights JOIN steps USING (flight)
+        WITH s AS
+        (
+        SELECT * FROM steps
+         WHERE job=? and testid=? and status='pass'
+        )
+        SELECT * FROM flights JOIN s USING (flight)
             WHERE (branch='xen-unstable')
-              AND job=? and testid=? and status='pass'
               AND ( (TRUE AND flight <= 151903) AND (blessing='real') )
             LIMIT 1

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
Reviewed-by: George Dunlap <george.dunlap@citrix.com>
---
 schema/steps-job-index.sql |  2 +-
 sg-report-flight           | 14 ++++++++++++--
 2 files changed, 13 insertions(+), 3 deletions(-)

diff --git a/schema/steps-job-index.sql b/schema/steps-job-index.sql
index 07dc5a30..2c33af72 100644
--- a/schema/steps-job-index.sql
+++ b/schema/steps-job-index.sql
@@ -1,4 +1,4 @@
--- ##OSSTEST## 006 Preparatory
+-- ##OSSTEST## 006 Needed
 --
 -- This index helps sg-report-flight find if a test ever passed.
 
diff --git a/sg-report-flight b/sg-report-flight
index d06be292..d218b24e 100755
--- a/sg-report-flight
+++ b/sg-report-flight
@@ -849,10 +849,20 @@ sub justifyfailures ($;$) {
 
     my @failures= values %{ $fi->{Failures} };
 
+    # In PostgreSQL 9.6 this WITH clause makes the planner do the steps query

+    # first.  This is good because if this test never passed we can
+    # determine that really quickly using the new index, without
+    # having to scan the flights table.  (If the test passed we will
+    # probably not have to look at many flights to find one, so in
+    # that case this is not much worse.)
     my $anypassq= <<END;
-        SELECT * FROM flights JOIN steps USING (flight)
+        WITH s AS
+        (
+        SELECT * FROM steps
+         WHERE job=? and testid=? and status='pass'
+        )
+        SELECT * FROM flights JOIN s USING (flight)
             WHERE $branches_cond_q
-              AND job=? and testid=? and status='pass'
               AND $blessingscond
             LIMIT 1
 END
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri Jul 31 12:13:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 12:13:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1TuF-0004C7-QY; Fri, 31 Jul 2020 12:12:59 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nr0X=BK=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1k1TuD-0004Ac-Pm
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 12:12:57 +0000
X-Inumbo-ID: 2b067d9c-d327-11ea-8e2c-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2b067d9c-d327-11ea-8e2c-bc764e2007e4;
 Fri, 31 Jul 2020 12:12:57 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1k1TMx-0001W4-Jl; Fri, 31 Jul 2020 12:38:35 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH v2 22/41] sg-report-host-history: Drop per-job debug
 etc.
Date: Fri, 31 Jul 2020 12:38:01 +0100
Message-Id: <20200731113820.5765-23-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200731113820.5765-1-ian.jackson@eu.citrix.com>
References: <20200731113820.5765-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

This printing has a significant effect on the performance of this
program, at least after we optimise various other things.
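
The cost being removed is per-record formatted I/O inside hot loops. A toy Python sketch (hypothetical names, not osstest code) of the pattern:

```python
import io

DEBUG_ENABLED = False          # the patch effectively flips this off
debug_sink = io.StringIO()

def debug(msg):
    # Per-record debug logging: one write per call when enabled.
    if DEBUG_ENABLED:
        debug_sink.write(msg + "\n")

rows = [{"flight": 151900 + i, "job": f"job-{i}"} for i in range(1000)]

for jr in rows:
    debug(f"JOB {jr['flight']}.{jr['job']} ")   # near-free with debug off

assert debug_sink.getvalue() == ""   # nothing written when disabled
```

Note that a gated helper like the one above still pays for building the message string on every iteration; commenting the print statements out entirely, as this patch does, skips even that.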

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 sg-report-host-history | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/sg-report-host-history b/sg-report-host-history
index a159df3e..a34458e0 100755
--- a/sg-report-host-history
+++ b/sg-report-host-history
@@ -102,9 +102,9 @@ sub read_existing_logs ($) {
 	    my $k = $1;
 	    s{\%([0-9a-f]{2})}{ chr hex $1 }ge;
 	    $ch->{$k} = $_;
-	    print DEBUG "GOTCACHE $hostname $k\n";
+#	    print DEBUG "GOTCACHE $hostname $k\n";
 	}
-	print DEBUG "GOTCACHE $hostname \@ $jr->{flight} $jr->{job} $jr->{status},$jr->{name}\n";
+#	print DEBUG "GOTCACHE $hostname \@ $jr->{flight} $jr->{job} $jr->{status},$jr->{name}\n";
 	$tcache->{$jr->{flight},$jr->{job},$jr->{status},$jr->{name}} = $jr;
     }
     close H;
@@ -272,7 +272,7 @@ END
     my @rows;
     my $cachehits = 0;
     foreach my $jr (@$inrows) {
-	print DEBUG "JOB $jr->{flight}.$jr->{job} ";
+	#print DEBUG "JOB $jr->{flight}.$jr->{job} ";
 
 	my $cacherow =
 	    $tcache->{$jr->{flight},$jr->{job},$jr->{status},$jr->{name}};
@@ -283,11 +283,11 @@ END
 
 	my $endedrow = jobquery($endedq, $jr, 'e');
 	if (!$endedrow) {
-	    print DEBUG "no-finished\n";
+	    #print DEBUG "no-finished\n";
 	    next;
 	}
-	print DEBUG join " ", map { $endedrow->{$_} } sort keys %$endedrow;
-	print DEBUG ".\n";
+	#print DEBUG join " ", map { $endedrow->{$_} } sort keys %$endedrow;
+	#print DEBUG ".\n";
 
 	push @rows, { %$jr, %$endedrow };
     }
@@ -329,7 +329,7 @@ END
 	    next;
 	}
 
-        print DEBUG "JR $jr->{flight}.$jr->{job}\n";
+        #print DEBUG "JR $jr->{flight}.$jr->{job}\n";
 	my $ir = jobquery($infoq, $jr, 'i');
 	my $ar = jobquery($allocdq, $jr, 'a');
 	my $ident = $jr->{name};
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri Jul 31 12:13:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 12:13:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1TuJ-0004EJ-4s; Fri, 31 Jul 2020 12:13:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nr0X=BK=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1k1TuH-0004Ac-V0
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 12:13:01 +0000
X-Inumbo-ID: 2c6d87ca-d327-11ea-8e2c-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2c6d87ca-d327-11ea-8e2c-bc764e2007e4;
 Fri, 31 Jul 2020 12:12:59 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1k1TN0-0001W4-HO; Fri, 31 Jul 2020 12:38:38 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH v2 28/41] sg-report-host-history: Reorganisation:
 Change loops
Date: Fri, 31 Jul 2020 12:38:07 +0100
Message-Id: <20200731113820.5765-29-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200731113820.5765-1-ian.jackson@eu.citrix.com>
References: <20200731113820.5765-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Move the per-host code all into the same per-host loop.  One effect is
to transpose the db_retry and host loops for mainquery.

No functional change.
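
In miniature, the transposition looks like this (Python stand-ins with hypothetical names; db_retry here just runs its body once, whereas the real one retries on transaction failure):

```python
calls = []

def db_retry(body):
    # Stand-in for osstest's db_retry: no real retrying, runs body once.
    body()

hosts = ["host-a", "host-b"]

# Before: three separate passes over the hosts, with db_retry wrapped
# around the whole mainquery loop.
calls.clear()
for h in hosts:
    calls.append(("logs", h))
db_retry(lambda: [calls.append(("query", h)) for h in hosts])
for h in hosts:
    db_retry(lambda h=h: calls.append(("report", h)))
before = list(calls)

# After: one per-host loop; db_retry moves inside the host loop and
# wraps mainquery and reporthost together.
calls.clear()
for h in hosts:
    calls.append(("logs", h))
    db_retry(lambda h=h: (calls.append(("query", h)),
                          calls.append(("report", h))))
after = list(calls)

# Same per-host work items, just grouped per host now.
assert sorted(before) == sorted(after) and before != after
```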

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 sg-report-host-history | 12 ++----------
 1 file changed, 2 insertions(+), 10 deletions(-)

diff --git a/sg-report-host-history b/sg-report-host-history
index 3f4670e5..2ca0e235 100755
--- a/sg-report-host-history
+++ b/sg-report-host-history
@@ -470,18 +470,10 @@ db_retry($dbh_tests, [], sub {
     computeflightsrange();
 });
 
-foreach (keys %hosts) {
-    read_existing_logs($_);
-}
-
-db_retry($dbh_tests, [], sub {
-    foreach my $host (sort keys %hosts) {
-	mainquery($host);
-    }
-});
-
 foreach my $host (sort keys %hosts) {
+    read_existing_logs($host);
     db_retry($dbh_tests, [], sub {
+        mainquery($host);
 	reporthost $host;
     });
 }
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri Jul 31 12:13:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 12:13:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1TuO-0004GX-E4; Fri, 31 Jul 2020 12:13:08 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nr0X=BK=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1k1TuM-0004Ac-VB
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 12:13:06 +0000
X-Inumbo-ID: 2defe6f6-d327-11ea-8e2c-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2defe6f6-d327-11ea-8e2c-bc764e2007e4;
 Fri, 31 Jul 2020 12:13:02 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1k1TMy-0001W4-0G; Fri, 31 Jul 2020 12:38:36 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH v2 23/41] Executive: Export opendb_tests
Date: Fri, 31 Jul 2020 12:38:02 +0100
Message-Id: <20200731113820.5765-24-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200731113820.5765-1-ian.jackson@eu.citrix.com>
References: <20200731113820.5765-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

sg-report-host-history is going to want this in a moment.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 Osstest/Executive.pm | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/Osstest/Executive.pm b/Osstest/Executive.pm
index 2f81e89d..8e4c5b9a 100644
--- a/Osstest/Executive.pm
+++ b/Osstest/Executive.pm
@@ -49,7 +49,7 @@ BEGIN {
                       task_spec_desc findtask findtask_spec @all_lock_tables
                       restrictflight_arg restrictflight_cond
                       report_run_getinfo report_altcolour
-                      report_altchangecolour
+                      report_altchangecolour opendb_tests
                       report_blessingscond report_find_push_age_info
                       tcpconnect_queuedaemon plan_search
                       manual_allocation_base_jobinfo
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri Jul 31 12:13:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 12:13:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1TuT-0004Jk-Re; Fri, 31 Jul 2020 12:13:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nr0X=BK=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1k1TuR-0004Ac-VB
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 12:13:11 +0000
X-Inumbo-ID: 3007e43e-d327-11ea-8e2c-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3007e43e-d327-11ea-8e2c-bc764e2007e4;
 Fri, 31 Jul 2020 12:13:05 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1k1TN3-0001W4-1J; Fri, 31 Jul 2020 12:38:41 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH v2 34/41] cs-bisection-step: Move an AND
Date: Fri, 31 Jul 2020 12:38:13 +0100
Message-Id: <20200731113820.5765-35-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200731113820.5765-1-ian.jackson@eu.citrix.com>
References: <20200731113820.5765-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

This obviously-fine change makes the next commit easier to review.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
v2: New patch.
---
 cs-bisection-step | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/cs-bisection-step b/cs-bisection-step
index 5d4e179e..f11726aa 100755
--- a/cs-bisection-step
+++ b/cs-bisection-step
@@ -266,8 +266,8 @@ $qtxt_common_tables
       CROSS JOIN tmp_build_info AS url
 
            WHERE
-@{ $qtxt_common_rev_ok->('rev') }
-  	     AND  url.name LIKE E'tree\\_%'
+@{ $qtxt_common_rev_ok->('rev') } AND
+  	          url.name LIKE E'tree\\_%'
 	     AND  url.use = rev.use
 	     AND  url.job = rev.job
 	     AND (rev.name = 'built_revision_' || substr(url.name,6) OR
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri Jul 31 12:13:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 12:13:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1TuY-0004MN-5g; Fri, 31 Jul 2020 12:13:18 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nr0X=BK=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1k1TuW-0004Ac-VG
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 12:13:16 +0000
X-Inumbo-ID: 33015f95-d327-11ea-8e2c-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 33015f95-d327-11ea-8e2c-bc764e2007e4;
 Fri, 31 Jul 2020 12:13:10 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1k1TN0-0001W4-Th; Fri, 31 Jul 2020 12:38:39 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH v2 29/41] sg-report-host-history: Drop a redundant AND
 clause
Date: Fri, 31 Jul 2020 12:38:08 +0100
Message-Id: <20200731113820.5765-30-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200731113820.5765-1-ian.jackson@eu.citrix.com>
References: <20200731113820.5765-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

This condition is the same as $flightcond.  (This has no effect on the
db performance since the query planner figures it out, but it is
confusing.)
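
Since the dropped flight > ? restates part of $flightcond, removing it cannot change the result set. A minimal illustration with SQLite and invented rows (the real $flightcond is assembled dynamically, so the literal condition below is an assumption):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE runvars (flight INTEGER, val TEXT);
    INSERT INTO runvars VALUES (100, 'host-a'), (200, 'host-a'),
                               (300, 'host-b');
""")

minflight = 150
# Redundant form: "flight > ?" repeats what the flight condition
# already guarantees.
redundant = conn.execute(
    "SELECT * FROM runvars WHERE val=? AND flight > 150 AND flight > ?",
    ("host-a", minflight)).fetchall()

# Simplified form, as in the patch: the duplicate predicate is gone.
simplified = conn.execute(
    "SELECT * FROM runvars WHERE val=? AND flight > 150",
    ("host-a",)).fetchall()

assert redundant == simplified == [(200, "host-a")]
```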

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 sg-report-host-history | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/sg-report-host-history b/sg-report-host-history
index 2ca0e235..f4352fc3 100755
--- a/sg-report-host-history
+++ b/sg-report-host-history
@@ -175,13 +175,12 @@ sub mainquery ($) {
 	   AND val = ?
 	   AND $flightcond
            AND $restrictflight_cond
-           AND flight > ?
 	 ORDER BY flight DESC
          LIMIT $limit * 2
 END
 
     print DEBUG "MAINQUERY $host...\n";
-    $runvarq->execute($host, $minflight);
+    $runvarq->execute($host);
 
     $hosts{$host} = $runvarq->fetchall_arrayref({});
     print DEBUG "MAINQUERY $host got ".(scalar @{ $hosts{$host} })."\n";
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri Jul 31 12:16:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 12:16:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1TxH-0004tW-Lq; Fri, 31 Jul 2020 12:16:07 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=OevI=BK=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1k1TxF-0004tQ-RQ
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 12:16:06 +0000
X-Inumbo-ID: 9a438646-d327-11ea-abaa-12813bfff9fa
Received: from mo4-p00-ob.smtp.rzone.de (unknown [85.215.255.20])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 9a438646-d327-11ea-abaa-12813bfff9fa;
 Fri, 31 Jul 2020 12:16:04 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1596197763;
 s=strato-dkim-0002; d=aepfle.de;
 h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:
 X-RZG-CLASS-ID:X-RZG-AUTH:From:Subject:Sender;
 bh=oDJV2Te/0euogOizTXwWSecv1vZ4GMrGg4ZiS8EsREo=;
 b=KYLbhsH7kjTdDC568+VGQiE/yk60Q1FfUuwTSB+NTePepQ/dmELlf9t9E9z+O1ww8f
 +DbRk07X2nJZc//GhP99RFW7lBF1ikNeLpbWJibP044sNA6UX1/g6t7GfO0xafv4Vzwn
 NtsqGLnR27v71vq+5qG68N2AIVEadEbmeAhGVosuvWM2XDgoG1I5WBfs0NkPNUwtzWif
 hwowt4LYflbZFFTwqLHzlPUTPGVki3N+IQIzsSUQ9wfkcu5OTm5/ltDnpZJMCtXZIfmk
 vBYfXdF4lUZDgda2A9+KTubdOnXiXxwoxJxxAGHoAo9n7B3pFY8XzAwOWf4tyyRfgkTY
 Y1jg==
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QLpd5ylWvMDXdoX8l8pYAcz5OTUoY="
X-RZG-CLASS-ID: mo00
Received: from sender by smtp.strato.de (RZmta 46.10.5 DYNA|AUTH)
 with ESMTPSA id m032cfw6VCFuGc8
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Fri, 31 Jul 2020 14:15:56 +0200 (CEST)
Date: Fri, 31 Jul 2020 14:15:49 +0200
From: Olaf Hering <olaf@aepfle.de>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH v1] tools/xen-cpuid: show enqcmd
Message-ID: <20200731141549.476fa255.olaf@aepfle.de>
In-Reply-To: <6e467a8f-d727-8511-da56-69901b6ada85@citrix.com>
References: <20200730163406.31020-1-olaf@aepfle.de>
 <65317ac2-0dd0-b453-caec-e5529b423d95@suse.com>
 <6e467a8f-d727-8511-da56-69901b6ada85@citrix.com>
X-Mailer: Claws Mail 2020.07.13 (GTK+ 2.24.32; x86_64-suse-linux-gnu)
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
 boundary="Sig_/NjE1vwMdWZYH7_MxQQ==ZYN"; protocol="application/pgp-signature"
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, Ian Jackson <ian.jackson@eu.citrix.com>,
 Wei Liu <wl@xen.org>, Jan Beulich <jbeulich@suse.com>,
 Roger Pau =?UTF-8?B?TW9ubsOp?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

--Sig_/NjE1vwMdWZYH7_MxQQ==ZYN
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable

On Fri, 31 Jul 2020 13:04:35 +0100,
Andrew Cooper <andrew.cooper3@citrix.com> wrote:

> And in particular, probably missing from libxl_cpuid.c, which I was
> meaning to check when I've got a free moment.

Will a domU ever see this flag? I just spotted the <29> when comparing
'xen-cpuid' output between recent Xen releases. It shows up just in the
'Known' section at this point.


Olaf

--Sig_/NjE1vwMdWZYH7_MxQQ==ZYN
Content-Type: application/pgp-signature
Content-Description: Digitale Signatur von OpenPGP

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEE97o7Um30LT3B+5b/86SN7mm1DoAFAl8kC3UACgkQ86SN7mm1
DoBEChAAjZiHJJXd54LUe7zxcGGRVgnK027bclYSllWE6aKaqkcj4OiY1+Du0cVe
LS81x/7elDpixrco/svlPwkHJ561665eg8piyBfHMzyQ5Wa/5gLsCuviiDlkhx0j
lThje92/56YLgU0na6OHa3tFxfv7IpThLw7MPHnW9VHcrQbxcJMHJBtGwuxkES4F
jf2NoKG6FAgu4D2Ref32K1BATc/H/odd3Q1Uar45Y/Bwy3RSq77GXFlcjgALLE7U
1O1sgZJsInv/Uv0F2vh520LJ5j5tT71z4GnVjzhquSJ39SLsllEK+X2gCTAPlQps
KZRDsjWLqgO6AeS4k5tkCCwxqn2aWF9F1tUUxpN3ISMxdiSCDY9MB2bnbtxnqKNg
G3dmIgotuB6EFMeQ0H0+dnJFwKl9MtSnlH4aKNWtrq/ZwNaHlozDFUf5mx982B7H
3kBrAH1z43bTX8GxuJ0rGBzHx9kVHHNouQORzTJtVqykfHBPZhBHGZ2nwzO79Jm8
vGpK3gkT+Is04QRRwNWpkda2erve6LDgwcqO3vveTru79G2tM1m/I1ClguEvou8f
Xj9FM8vnFbWhesOHQURxpCBoK8rMsQxOmA6eE66LrF+EiazwT9r8A7yf5x9eC9FJ
uoasYS7y6/CeSvqrRQMZ6ngTpQhowOrZv1+bsHnCQMhcqrkdDi0=
=RrXN
-----END PGP SIGNATURE-----

--Sig_/NjE1vwMdWZYH7_MxQQ==ZYN--


From xen-devel-bounces@lists.xenproject.org Fri Jul 31 12:19:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 12:19:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1U0u-00053A-6F; Fri, 31 Jul 2020 12:19:52 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=S17i=BK=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1k1U0t-000535-2p
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 12:19:51 +0000
X-Inumbo-ID: 211537fa-d328-11ea-abaa-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 211537fa-d328-11ea-abaa-12813bfff9fa;
 Fri, 31 Jul 2020 12:19:50 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 3CE8CACE3;
 Fri, 31 Jul 2020 12:20:02 +0000 (UTC)
Subject: Re: [PATCH v3] xen/arm: Convert runstate address during hypcall
To: Julien Grall <julien@xen.org>, Bertrand Marquis <bertrand.marquis@arm.com>
References: <3911d221ce9ed73611b93aa437b9ca227d6aa201.1596099067.git.bertrand.marquis@arm.com>
 <f48f81d5-589e-3f75-1044-583114bf497e@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <d8eb8052-6370-7484-1c9a-f90d83396fa1@suse.com>
Date: Fri, 31 Jul 2020 14:19:47 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <f48f81d5-589e-3f75-1044-583114bf497e@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, xen-devel@lists.xenproject.org,
 nd@arm.com, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 30.07.2020 22:50, Julien Grall wrote:
> On 30/07/2020 11:24, Bertrand Marquis wrote:
>> At the moment on Arm, a Linux guest running with KTPI enabled will
>> cause the following error when a context switch happens in user mode:
>> (XEN) p2m.c:1890: d1v0: Failed to walk page-table va 0xffffff837ebe0cd0
>>
>> The error is caused by the virtual address for the runstate area
>> registered by the guest only being accessible when the guest is running
>> in kernel space when KPTI is enabled.
>>
>> To solve this issue, this patch does the translation from virtual
>> address to physical address during the hypercall and maps the
>> required pages using vmap. This removes the conversion from virtual
>> to physical address during the context switch, which solves the
>> problem with KPTI.
> 
> To echo what Jan said on the previous version, this is a change in a 
stable ABI and therefore may break existing guests. FAOD, I agree in 
> principle with the idea. However, we want to explain why breaking the 
> ABI is the *only* viable solution.
> 
>  From my understanding, it is not possible to fix without an ABI 
> breakage because the hypervisor doesn't know when the guest will switch 
> back from userspace to kernel space.

And there's also no way to know on Arm, by e.g. enabling a suitable
intercept?

> The risk is that the runstate information wouldn't be accurate and 
> could affect how the guest handles stolen time.
> 
> Additionally there are a few issues with the current interface:
>     1) It is assuming the virtual address cannot be re-used by the 
> userspace. Thankfully Linux has a split address space. But this may 
> change with KPTI in place.
>     2) When updating the page-tables, the guest has to go through an 
> invalid mapping. So the translation may fail at any point.
> 
> IOW, the existing interface can lead to random memory corruption and 
> inaccuracy of the stolen time.
> 
>>
>> This is done only on arm architecture, the behaviour on x86 is not
>> modified by this patch and the address conversion is done as before
>> during each context switch.
>>
>> This is introducing several limitations in comparison to the previous
>> behaviour (on arm only):
>> - if the guest is remapping the area at a different physical address Xen
>> will continue to update the area at the previous physical address. As
>> the area is in kernel space and usually defined as a global variable this
>> is something which is believed not to happen. If this is required by a
>> guest, it will have to call the hypercall with the new area (even if it
>> is at the same virtual address).
>> - the area needs to be mapped during the hypercall. For the same reasons
>> as for the previous case, even if the area is registered for a different
>> vcpu. It is believed that registering an area using an unmapped
>> virtual address is not something that is done.
> 
> It is not clear whether the virtual address refers to the current vCPU 
> or the vCPU you register the runstate for. From the past discussion, I 
> think you refer to the former. It would be good to clarify.
> 
> Additionally, all the new restrictions should be documented in the 
> public interface. So an OS developer can find the differences between 
> the architectures.
> 
> To answer Jan's concern, we certainly don't know all the guest OSes 
> in existence; however, we also need to balance the benefit for a large 
> majority of the users.
> 
>  From previous discussion, the current approach was deemed to be 
> acceptable on Arm and, AFAICT, also x86 (see [1]).
> 
> TBH, I would rather see a common approach. For that, we would need an 
> agreement from Andrew and Jan on the approach here. Meanwhile, I think 
> this is the best approach to address the concern from Arm users.

Just FTR: If x86 was to also change, VCPUOP_register_vcpu_time_memory_area
would need taking care of as well, as it's using the same underlying model
(including recovery logic when, while the guest is in user mode, the
update has failed).

>> @@ -275,36 +276,156 @@ static void ctxt_switch_to(struct vcpu *n)
>>       virt_timer_restore(n);
>>   }
>>   
>> -/* Update per-VCPU guest runstate shared memory area (if registered). */
>> -static void update_runstate_area(struct vcpu *v)
>> +static void cleanup_runstate_vcpu_locked(struct vcpu *v)
>>   {
>> -    void __user *guest_handle = NULL;
>> +    if ( v->arch.runstate_guest )
>> +    {
>> +        vunmap((void *)((unsigned long)v->arch.runstate_guest & PAGE_MASK));
>> +
>> +        put_page(v->arch.runstate_guest_page[0]);
>> +
>> +        if ( v->arch.runstate_guest_page[1] )
>> +            put_page(v->arch.runstate_guest_page[1]);
>> +
>> +        v->arch.runstate_guest = NULL;
>> +    }
>> +}
>> +
>> +void arch_vcpu_cleanup_runstate(struct vcpu *v)
>> +{
>> +    spin_lock(&v->arch.runstate_guest_lock);
>> +
>> +    cleanup_runstate_vcpu_locked(v);
>> +
>> +    spin_unlock(&v->arch.runstate_guest_lock);
>> +}
>> +
>> +static int setup_runstate_vcpu_locked(struct vcpu *v, vaddr_t vaddr)
>> +{
>> +    unsigned int offset;
>> +    mfn_t mfn[2];
>> +    struct page_info *page;
>> +    unsigned int numpages;
>>       struct vcpu_runstate_info runstate;
>> +    void *p;
>>   
>> -    if ( guest_handle_is_null(runstate_guest(v)) )
>> -        return;
>> +    /* user can pass a NULL address to unregister a previous area */
>> +    if ( vaddr == 0 )
>> +        return 0;
>> +
>> +    offset = vaddr & ~PAGE_MASK;
>> +
>> +    /* provided address must be 64-bit aligned */
>> +    if ( offset % alignof(struct vcpu_runstate_info) )
> 
> This new restriction wants to be explained in the commit message and 
> public header.

And the expression would imo also better use alignof(runstate).

Jan


From xen-devel-bounces@lists.xenproject.org Fri Jul 31 12:21:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 12:21:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1U2v-0005p3-JC; Fri, 31 Jul 2020 12:21:57 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=S17i=BK=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1k1U2u-0005ox-Gi
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 12:21:56 +0000
X-Inumbo-ID: 6bfbcdf6-d328-11ea-abaa-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6bfbcdf6-d328-11ea-abaa-12813bfff9fa;
 Fri, 31 Jul 2020 12:21:55 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id F0729B188;
 Fri, 31 Jul 2020 12:22:07 +0000 (UTC)
Subject: Re: [PATCH v1] tools/xen-cpuid: show enqcmd
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <20200730163406.31020-1-olaf@aepfle.de>
 <65317ac2-0dd0-b453-caec-e5529b423d95@suse.com>
 <6e467a8f-d727-8511-da56-69901b6ada85@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <92d27664-267a-4095-484d-90a07367c5c9@suse.com>
Date: Fri, 31 Jul 2020 14:21:53 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <6e467a8f-d727-8511-da56-69901b6ada85@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, Olaf Hering <olaf@aepfle.de>,
 Ian Jackson <ian.jackson@eu.citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 31.07.2020 14:04, Andrew Cooper wrote:
> On 31/07/2020 13:03, Jan Beulich wrote:
>> On 30.07.2020 18:34, Olaf Hering wrote:
>>> Translate <29> into a feature string.
>>>
>>> Signed-off-by: Olaf Hering <olaf@aepfle.de>
>> Acked-by: Jan Beulich <jbeulich@suse.com>
>>
>> Albeit I'm pretty sure there are more missing than just this lone one.
> 
> And in particular, probably missing from libxl_cpuid.c, which I was
> meaning to check when I've got a free moment.

As it's not just this one, but e.g. also the two movdir* ones,
I thought I'd not require the other side to be changed as well,
especially since enqcmd doesn't get exposed to guests at all
right now.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Jul 31 12:24:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 12:24:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1U5I-0005zK-12; Fri, 31 Jul 2020 12:24:24 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=oG5j=BK=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1k1U5H-0005zF-Gj
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 12:24:23 +0000
X-Inumbo-ID: c394fa1a-d328-11ea-abaa-12813bfff9fa
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c394fa1a-d328-11ea-abaa-12813bfff9fa;
 Fri, 31 Jul 2020 12:24:22 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1596198262;
 h=subject:to:cc:references:from:message-id:date:
 mime-version:in-reply-to:content-transfer-encoding;
 bh=ShpmB52Fr8d3Ksv94IPuEB5CRkYdnM/BGBhq/DKWbls=;
 b=icdLm2J3GxCIm6yZVkFi8HHbak9fahNuscMR+b+7cRZaRFx1BCGhymap
 TO0mDEn2pJf/DpFJ/UYpxcJtgfKEm9WfqgxeKkzhkoRf1W5Ee0Zw0JdZP
 w1Cu6r/oeDDkc9NlKPtovGTt7mL+SR2KX7jReGZv+69QFV+/cJA5f3IFU I=;
Authentication-Results: esa5.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: LqdMT3hqVq9SSVy5QExw9Q72tIyAQoLDIw9n7kJgmKacj1kSdRG2H+Qz1I0a2z58mS6TTUb0Uj
 Unc4hGAbFvq7usUnjPoIYdRViUkew/QXKIp2Ihw/+7rFNK9+R0x2lpFIVVPKcyIIVKdFmF9IDW
 C9rSWI/DkP5ZJmqxs8axNZFqyM9aJ0b+MIXcxsQvi2Og5a85wIWNiNENZmOOY5a835jCtcxtxG
 dZEGaSg7SaL3Wxgu0MaLl1iZ3nNp03vLSpPku5+jDdJj5Qxh3HgqIuH4bXi2RqPeNHF/Vz5A4S
 xIo=
X-SBRS: 3.7
X-MesageID: 23803642
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,418,1589256000"; d="scan'208";a="23803642"
Subject: Re: [PATCH v1] tools/xen-cpuid: show enqcmd
To: Olaf Hering <olaf@aepfle.de>
References: <20200730163406.31020-1-olaf@aepfle.de>
 <65317ac2-0dd0-b453-caec-e5529b423d95@suse.com>
 <6e467a8f-d727-8511-da56-69901b6ada85@citrix.com>
 <20200731141549.476fa255.olaf@aepfle.de>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <11ecaa2d-22a4-8b9f-d994-133e934e27c7@citrix.com>
Date: Fri, 31 Jul 2020 13:24:18 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20200731141549.476fa255.olaf@aepfle.de>
Content-Type: text/plain; charset="windows-1252"
Content-Transfer-Encoding: quoted-printable
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, Ian
 Jackson <ian.jackson@eu.citrix.com>, Wei Liu <wl@xen.org>,
 Jan Beulich <jbeulich@suse.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 31/07/2020 13:15, Olaf Hering wrote:
> Am Fri, 31 Jul 2020 13:04:35 +0100
> schrieb Andrew Cooper <andrew.cooper3@citrix.com>:
>
>> And in particular, probably missing from libxl_cpuid.c, which I was
>> meaning to check when I've got a free moment.
> Will a domU ever see this flag? I just spotted the <29> when comparing
> 'xen-cpuid' output between recent Xen releases. It shows up just in the
> 'Known' section at this point.

For future platforms supporting Compute eXpress Link, PCI passthrough
will be able to give various accelerators to VMs, and the ENQCMD{,S}
instructions are what the guest will use to talk to the hardware.

As a set of functionality, think DPDK, but with userspace (or the
guest) able to submit work to the hardware directly, through an interface
designed to be safe for this kind of thing.

~Andrew



From xen-devel-bounces@lists.xenproject.org Fri Jul 31 12:25:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 12:25:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1U6Z-00065R-Bx; Fri, 31 Jul 2020 12:25:43 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=S17i=BK=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1k1U6Y-00065K-16
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 12:25:42 +0000
X-Inumbo-ID: f222bf8f-d328-11ea-8e2c-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f222bf8f-d328-11ea-8e2c-bc764e2007e4;
 Fri, 31 Jul 2020 12:25:41 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 1469FACE3;
 Fri, 31 Jul 2020 12:25:53 +0000 (UTC)
Subject: Re: RESCHEDULED Call for agenda items for Community Call, August 13 @
 15:00 UTC
To: George Dunlap <George.Dunlap@citrix.com>,
 "robin.randhawa@arm.com" <robin.randhawa@arm.com>,
 Artem Mygaiev <Artem_Mygaiev@epam.com>, Matt Spencer <Matt.Spencer@arm.com>,
 "anastassios.nanos@onapp.com" <anastassios.nanos@onapp.com>,
 Stewart Hildebrand <Stewart.Hildebrand@dornerworks.com>,
 Volodymyr Babchuk <volodymyr_babchuk@epam.com>,
 "mirela.simonovic@aggios.com" <mirela.simonovic@aggios.com>,
 Jarvis Roach <Jarvis.Roach@dornerworks.com>,
 Jeff Kubascik <Jeff.Kubascik@dornerworks.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Ian Jackson <Ian.Jackson@citrix.com>, Rian Quinn <rianquinn@gmail.com>,
 "Daniel P. Smith" <dpsmith@apertussolutions.com>,
 Doug Goldstein <cardoe@cardoe.com>, David Woodhouse <dwmw@amazon.co.uk>,
 Amit Shah <amit@infradead.org>, Varad Gautam <varadgautam@gmail.com>,
 Brian Woods <brian.woods@xilinx.com>, Robert Townley
 <rob.townley@gmail.com>, Bobby Eshleman <bobby.eshleman@gmail.com>,
 Olivier Lambert <olivier.lambert@vates.fr>,
 Andrew Cooper <Andrew.Cooper3@citrix.com>, Wei Liu <wl@xen.org>
References: <1E023F6E-0E3C-4CD5-A074-7BF62635E123@citrix.com>
 <40615946-FF55-48DB-91FB-58DD603FDD69@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <9bfef1bf-31a7-1c95-60fa-2ca665942fda@suse.com>
Date: Fri, 31 Jul 2020 14:25:36 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <40615946-FF55-48DB-91FB-58DD603FDD69@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>, Sergey Dyasli <sergey.dyasli@citrix.com>,
 "edgar.iglesias@xilinx.com" <edgar.iglesias@xilinx.com>,
 Tamas K Lengyel <tamas.k.lengyel@gmail.com>,
 "daniel.kiper@oracle.com" <daniel.kiper@oracle.com>, "Natarajan,
 Janakarajan" <jnataraj@amd.com>,
 Christopher Clark <christopher.w.clark@gmail.com>, "Ji,
 John" <john.ji@intel.com>, Rich Persaud <persaur@gmail.com>,
 "open list:X86" <xen-devel@lists.xenproject.org>,
 Kevin Pearson <kevin.pearson@ortmanconsulting.com>,
 "intel-xen@intel.com" <intel-xen@intel.com>,
 Paul Durrant <pdurrant@amazon.com>, Roger Pau Monne <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 30.07.2020 17:41, George Dunlap wrote:
>> On Jul 30, 2020, at 4:17 PM, George Dunlap <George.Dunlap@citrix.com> wrote:
>>
>> Hey all,
>>
>> The community call is scheduled for next week, 6 August.  I, however, will be on PTO that week; I propose rescheduling it for the following week, 13 August, at the same time.
>>
>> The proposed agenda is in ZZZ and you can edit to add items.  Alternatively, you can reply to this mail directly.
> 
> Sorry, in all my manual templating I seem to have missed this one.  Here’s the URL:
> 
> https://cryptpad.fr/pad/#/3/pad/edit/9c58993a08fe97451f0a5b6c8bb906b1/

I get "This link does not give you access to the document". Maybe a
permissions problem? I've meant to add a "minimum toolchain versions"
topic ...

Jan


From xen-devel-bounces@lists.xenproject.org Fri Jul 31 12:27:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 12:27:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1U8B-0006DC-Qa; Fri, 31 Jul 2020 12:27:23 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GLoN=BK=citrix.com=george.dunlap@srs-us1.protection.inumbo.net>)
 id 1k1U8A-0006D5-U8
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 12:27:22 +0000
X-Inumbo-ID: 2e45436b-d329-11ea-8e2c-bc764e2007e4
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2e45436b-d329-11ea-8e2c-bc764e2007e4;
 Fri, 31 Jul 2020 12:27:22 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1596198442;
 h=from:to:cc:subject:date:message-id:references:
 in-reply-to:content-id:content-transfer-encoding: mime-version;
 bh=NBzcklOp4UfUAL66oBP6JbSlN2WSKWlVtlvvVAPKoPU=;
 b=BIln1sWuqpBlVUuZLclbPksjw90SUugTVgKf958GtLI7p5w2NA9Gfena
 Z8ud/LWazVTtP8yldRjp7ihPU6pQRKvRDwWP4/JBmtVUUqraHXZm1yZp2
 WOVj4l8KZVbtBFznIgEcVsd839bCP8pbzVrwdv3XEtIzGU3B8jz5/EkLH M=;
Authentication-Results: esa4.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: CNA55Zc8AGjFXv5PoNVPW/UxgArtZ/lQBHB5b+qjiKt13ztSuW06M2i/aTkrhRUAO4e5tqgoFV
 62FL+yOV3X1fd+f7x1RhbCsaztv55U7dOX6abzvkUgAjtJ5Fkhv6pdI0dowI7EGEAxgtgM1hOP
 yNmRvKIq8vHRu1mEF111SttKBsXG3OvhiIv0XJkc5WG1CvKZZdqMgFKP2fFYDscFKlp4leIvdQ
 /1EPF+dSPKRsH+RHIdYgYUoCaydgLFc165KmwLVb6Oz7B78RTPyU754IW3Bkf6pXjV1T3vrek2
 XO8=
X-SBRS: 3.7
X-MesageID: 24498915
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,418,1589256000"; d="scan'208";a="24498915"
From: George Dunlap <George.Dunlap@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Subject: Re: RESCHEDULED Call for agenda items for Community Call, August 13 @
 15:00 UTC
Thread-Topic: RESCHEDULED Call for agenda items for Community Call, August 13
 @ 15:00 UTC
Thread-Index: AQHWZoSNV69Oim50MkyA6CRuliqKJqkgITQAgAFbngCAAAB2gA==
Date: Fri, 31 Jul 2020 12:27:16 +0000
Message-ID: <047B12C2-71AA-459F-853C-DF1CD040D6C1@citrix.com>
References: <1E023F6E-0E3C-4CD5-A074-7BF62635E123@citrix.com>
 <40615946-FF55-48DB-91FB-58DD603FDD69@citrix.com>
 <9bfef1bf-31a7-1c95-60fa-2ca665942fda@suse.com>
In-Reply-To: <9bfef1bf-31a7-1c95-60fa-2ca665942fda@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-mailer: Apple Mail (2.3608.80.23.2.2)
x-ms-exchange-messagesentrepresentingtype: 1
x-ms-exchange-transport-fromentityheader: Hosted
Content-Type: text/plain; charset="utf-8"
Content-ID: <9032569B6F620A4FB03AA7F5B98ADDEA@citrix.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Amit Shah <amit@infradead.org>,
 "Daniel P. Smith" <dpsmith@apertussolutions.com>,
 Bobby Eshleman <bobby.eshleman@gmail.com>,
 Brian Woods <brian.woods@xilinx.com>, Rich Persaud <persaur@gmail.com>,
 "anastassios.nanos@onapp.com" <anastassios.nanos@onapp.com>,
 "mirela.simonovic@aggios.com" <mirela.simonovic@aggios.com>,
 "edgar.iglesias@xilinx.com" <edgar.iglesias@xilinx.com>, "Natarajan,
 Janakarajan" <jnataraj@amd.com>,
 "robin.randhawa@arm.com" <robin.randhawa@arm.com>, Olivier
 Lambert <olivier.lambert@vates.fr>, Matt Spencer <Matt.Spencer@arm.com>,
 Robert Townley <rob.townley@gmail.com>,
 "open list:X86" <xen-devel@lists.xenproject.org>,
 Rian Quinn <rianquinn@gmail.com>, Varad Gautam <varadgautam@gmail.com>,
 Julien Grall <julien@xen.org>, Juergen Gross <jgross@suse.com>,
 Doug Goldstein <cardoe@cardoe.com>,
 Christopher Clark <christopher.w.clark@gmail.com>,
 Tamas K Lengyel <tamas.k.lengyel@gmail.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Ian Jackson <Ian.Jackson@citrix.com>,
 Kevin Pearson <kevin.pearson@ortmanconsulting.com>,
 "intel-xen@intel.com" <intel-xen@intel.com>,
 Jarvis Roach <Jarvis.Roach@dornerworks.com>,
 Artem Mygaiev <Artem_Mygaiev@epam.com>,
 Sergey Dyasli <sergey.dyasli@citrix.com>, Wei Liu <wl@xen.org>,
 Andrew Cooper <Andrew.Cooper3@citrix.com>, Paul Durrant <pdurrant@amazon.com>,
 Jeff Kubascik <Jeff.Kubascik@dornerworks.com>, "Ji, John" <john.ji@intel.com>,
 Stewart Hildebrand <Stewart.Hildebrand@dornerworks.com>,
 Volodymyr Babchuk <volodymyr_babchuk@epam.com>,
 "daniel.kiper@oracle.com" <daniel.kiper@oracle.com>, David
 Woodhouse <dwmw@amazon.co.uk>, Roger Pau Monne <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>



> On Jul 31, 2020, at 1:25 PM, Jan Beulich <jbeulich@suse.com> wrote:
> 
> On 30.07.2020 17:41, George Dunlap wrote:
>>> On Jul 30, 2020, at 4:17 PM, George Dunlap <George.Dunlap@citrix.com> wrote:
>>>
>>> Hey all,
>>>
>>> The community call is scheduled for next week, 6 August.  I, however, will be on PTO that week; I propose rescheduling it for the following week, 13 August, at the same time.
>>>
>>> The proposed agenda is in ZZZ and you can edit to add items.  Alternatively, you can reply to this mail directly.
>> 
>> Sorry, in all my manual templating I seem to have missed this one.  Here’s the URL:
>> 
>> https://cryptpad.fr/pad/#/3/pad/edit/9c58993a08fe97451f0a5b6c8bb906b1/
> 
> I get "This link does not give you access to the document". Maybe a
> permissions problem? I've meant to add a "minimum toolchain versions"
> topic ...

Try this one?

https://cryptpad.fr/pad/#/2/pad/edit/VlLdjiw7iBm0R-efOMyCY+Ks/

That sounds like a good topic.

 -George


From xen-devel-bounces@lists.xenproject.org Fri Jul 31 12:35:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 12:35:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1UGC-0007B2-Ot; Fri, 31 Jul 2020 12:35:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=PAbA=BK=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1k1UGA-0007Ax-Pw
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 12:35:38 +0000
X-Inumbo-ID: 56476fd6-d32a-11ea-8e30-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 56476fd6-d32a-11ea-8e30-bc764e2007e4;
 Fri, 31 Jul 2020 12:35:37 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=7x0o0L0QJhcX08av8K+blTV8Kwsa/HFa1+pEqgQILCQ=; b=kite1M/YeKNZF0AT2nxZCg+Y2t
 u2ngiYG6AON6f5Vx+M9xAst2JbYhHzZpEK+CBPZLyI9FCzB37yMk4LLShRyCYI3FUnbyeZ6QJ4MVL
 w/94YtM3KK6NUqIvwZ4e8rlaSg9reeghNz+xQ6k0Wzl1xKLt0ToksfMMnGJ2q9vUT2w4=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1k1UG9-00012C-8C; Fri, 31 Jul 2020 12:35:37 +0000
Received: from 54-240-197-230.amazon.com ([54.240.197.230]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1k1UG8-0006yz-Ua; Fri, 31 Jul 2020 12:35:37 +0000
Subject: Re: [PATCH] x86/vhpet: Fix type size in timer_int_route_valid
To: Jan Beulich <jbeulich@suse.com>, Eslam Elnikety <elnikety@amazon.com>
References: <20200728083357.77999-1-elnikety@amazon.com>
 <a55fba45-a008-059e-ea8c-b7300e2e8b7d@citrix.com>
 <278f0f31-619b-a392-6627-e75e65d0d14f@suse.com>
 <076df48e-0010-bb8d-891f-dc89aa4b9439@amazon.com>
 <cd9b283b-5c10-d186-93ef-8d8c07302e26@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <534c831a-4635-45b0-2580-4bd5812f49e8@xen.org>
Date: Fri, 31 Jul 2020 13:35:34 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:68.0)
 Gecko/20100101 Thunderbird/68.11.0
MIME-Version: 1.0
In-Reply-To: <cd9b283b-5c10-d186-93ef-8d8c07302e26@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 Paul Durrant <pdurrant@amazon.co.uk>, xen-devel@lists.xenproject.org,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi Jan,

On 31/07/2020 10:53, Jan Beulich wrote:
> On 31.07.2020 10:38, Eslam Elnikety wrote:
>> On 28.07.20 19:51, Jan Beulich wrote:
>>> On 28.07.2020 11:26, Andrew Cooper wrote:
>>>> --- a/xen/include/asm-x86/hvm/vpt.h
>>>> +++ b/xen/include/asm-x86/hvm/vpt.h
>>>> @@ -73,7 +73,13 @@ struct hpet_registers {
>>>>        uint64_t isr;               /* interrupt status reg */
>>>>        uint64_t mc64;              /* main counter */
>>>>        struct {                    /* timers */
>>>> -        uint64_t config;        /* configuration/cap */
>>>> +        union {
>>>> +            uint64_t config;    /* configuration/cap */
>>>> +            struct {
>>>> +                uint32_t _;
>>>> +                uint32_t route;
>>>> +            };
>>>> +        };
>>>
>>> So long as there are no static initializers for this construct
>>> that would then suffer the old-gcc problem, this is of course a
>>> fine arrangement to make.
>>
>> I have to admit that I have no clue what the "old-gcc" problem is. I am
>> curious, and I would appreciate pointers to figure out if/how to
>> resolve. Is that an old, existing problem? Or a problem that was present
>> in older versions of gcc?
> 
> Well, as already said - the problem is with old gcc not dealing
> well with initializers of structs/unions with unnamed fields.

You seem to know the problem quite well. Would you mind giving us more 
details on which GCC versions are believed to be broken?

> 
>> If the latter, is that a gcc version that we still care about?
> 
> Until someone makes a (justified) proposal what the new minimum
> version(s) ought to be, I'm afraid we still have to care. This
> topic came up very recently in another context, and I've proposed
> to put it on the agenda of the next community call.

I don't think Eslam was requesting a change to the limits. He was just 
asking whether one of the compilers we support is affected.

Cheers,


-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Jul 31 12:36:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 12:36:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1UGW-0007CG-1G; Fri, 31 Jul 2020 12:36:00 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=S17i=BK=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1k1UGV-0007C9-Jr
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 12:35:59 +0000
X-Inumbo-ID: 6260317d-d32a-11ea-abab-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6260317d-d32a-11ea-abab-12813bfff9fa;
 Fri, 31 Jul 2020 12:35:58 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 9ECF1AC94;
 Fri, 31 Jul 2020 12:36:10 +0000 (UTC)
Subject: Re: RESCHEDULED Call for agenda items for Community Call, August 13 @
 15:00 UTC
To: George Dunlap <George.Dunlap@citrix.com>
References: <1E023F6E-0E3C-4CD5-A074-7BF62635E123@citrix.com>
 <40615946-FF55-48DB-91FB-58DD603FDD69@citrix.com>
 <9bfef1bf-31a7-1c95-60fa-2ca665942fda@suse.com>
 <047B12C2-71AA-459F-853C-DF1CD040D6C1@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <37d5e973-7645-d4eb-7bd6-f8d3226d7cb5@suse.com>
Date: Fri, 31 Jul 2020 14:35:55 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <047B12C2-71AA-459F-853C-DF1CD040D6C1@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Amit Shah <amit@infradead.org>,
 "Daniel P. Smith" <dpsmith@apertussolutions.com>,
 Bobby Eshleman <bobby.eshleman@gmail.com>,
 Brian Woods <brian.woods@xilinx.com>, Rich Persaud <persaur@gmail.com>,
 "anastassios.nanos@onapp.com" <anastassios.nanos@onapp.com>,
 "mirela.simonovic@aggios.com" <mirela.simonovic@aggios.com>,
 "edgar.iglesias@xilinx.com" <edgar.iglesias@xilinx.com>, "Natarajan,
 Janakarajan" <jnataraj@amd.com>,
 "robin.randhawa@arm.com" <robin.randhawa@arm.com>,
 Olivier Lambert <olivier.lambert@vates.fr>,
 Matt Spencer <Matt.Spencer@arm.com>, Robert Townley <rob.townley@gmail.com>,
 "open list:X86" <xen-devel@lists.xenproject.org>,
 Rian Quinn <rianquinn@gmail.com>, Varad Gautam <varadgautam@gmail.com>,
 Julien Grall <julien@xen.org>, Juergen Gross <jgross@suse.com>,
 Doug Goldstein <cardoe@cardoe.com>,
 Christopher Clark <christopher.w.clark@gmail.com>,
 Tamas K Lengyel <tamas.k.lengyel@gmail.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Ian Jackson <Ian.Jackson@citrix.com>,
 Kevin Pearson <kevin.pearson@ortmanconsulting.com>,
 "intel-xen@intel.com" <intel-xen@intel.com>,
 Jarvis Roach <Jarvis.Roach@dornerworks.com>,
 Artem Mygaiev <Artem_Mygaiev@epam.com>,
 Sergey Dyasli <sergey.dyasli@citrix.com>, Wei Liu <wl@xen.org>,
 Andrew Cooper <Andrew.Cooper3@citrix.com>, Paul Durrant <pdurrant@amazon.com>,
 Jeff Kubascik <Jeff.Kubascik@dornerworks.com>, "Ji, John" <john.ji@intel.com>,
 Stewart Hildebrand <Stewart.Hildebrand@dornerworks.com>,
 Volodymyr Babchuk <volodymyr_babchuk@epam.com>,
 "daniel.kiper@oracle.com" <daniel.kiper@oracle.com>,
 David Woodhouse <dwmw@amazon.co.uk>, Roger Pau Monne <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 31.07.2020 14:27, George Dunlap wrote:
>> On Jul 31, 2020, at 1:25 PM, Jan Beulich <jbeulich@suse.com> wrote:
>> On 30.07.2020 17:41, George Dunlap wrote:
>>>> On Jul 30, 2020, at 4:17 PM, George Dunlap <George.Dunlap@citrix.com> wrote:
>>>>
>>>> Hey all,
>>>>
>>>> The community call is scheduled for next week, 6 August.  I, however, will be on PTO that week; I propose rescheduling it for the following week, 13 August, at the same time.
>>>>
>>>> The proposed agenda is in ZZZ and you can edit to add items.  Alternatively, you can reply to this mail directly.
>>>
>>> Sorry, in all my manual templating I seem to have missed this one.  Here’s the URL:
>>>
>>> https://cryptpad.fr/pad/#/3/pad/edit/9c58993a08fe97451f0a5b6c8bb906b1/
>>
>> I get "This link does not give you access to the document". Maybe a
>> permissions problem? I've meant to add a "minimum toolchain versions"
>> topic ...
> 
> Try this one?
> 
> https://cryptpad.fr/pad/#/2/pad/edit/VlLdjiw7iBm0R-efOMyCY+Ks/

Ah yes, this one works. Thanks.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Jul 31 12:39:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 12:39:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1UJS-0007Oq-H4; Fri, 31 Jul 2020 12:39:02 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=S17i=BK=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1k1UJR-0007Ol-Ek
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 12:39:01 +0000
X-Inumbo-ID: ceedb35a-d32a-11ea-8e30-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ceedb35a-d32a-11ea-8e30-bc764e2007e4;
 Fri, 31 Jul 2020 12:39:00 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id F1035ACAF;
 Fri, 31 Jul 2020 12:39:12 +0000 (UTC)
Subject: Re: [PATCH] x86/vhpet: Fix type size in timer_int_route_valid
To: Julien Grall <julien@xen.org>
References: <20200728083357.77999-1-elnikety@amazon.com>
 <a55fba45-a008-059e-ea8c-b7300e2e8b7d@citrix.com>
 <278f0f31-619b-a392-6627-e75e65d0d14f@suse.com>
 <076df48e-0010-bb8d-891f-dc89aa4b9439@amazon.com>
 <cd9b283b-5c10-d186-93ef-8d8c07302e26@suse.com>
 <534c831a-4635-45b0-2580-4bd5812f49e8@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <cb42664a-fd35-f2bd-bab0-60dfda34fd81@suse.com>
Date: Fri, 31 Jul 2020 14:38:58 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <534c831a-4635-45b0-2580-4bd5812f49e8@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Eslam Elnikety <elnikety@amazon.com>, Paul Durrant <pdurrant@amazon.co.uk>,
 xen-devel@lists.xenproject.org,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 31.07.2020 14:35, Julien Grall wrote:
> Hi Jan,
> 
> On 31/07/2020 10:53, Jan Beulich wrote:
>> On 31.07.2020 10:38, Eslam Elnikety wrote:
>>> On 28.07.20 19:51, Jan Beulich wrote:
>>>> On 28.07.2020 11:26, Andrew Cooper wrote:
>>>>> --- a/xen/include/asm-x86/hvm/vpt.h
>>>>> +++ b/xen/include/asm-x86/hvm/vpt.h
>>>>> @@ -73,7 +73,13 @@ struct hpet_registers {
>>>>>        uint64_t isr;               /* interrupt status reg */
>>>>>        uint64_t mc64;              /* main counter */
>>>>>        struct {                    /* timers */
>>>>> -        uint64_t config;        /* configuration/cap */
>>>>> +        union {
>>>>> +            uint64_t config;    /* configuration/cap */
>>>>> +            struct {
>>>>> +                uint32_t _;
>>>>> +                uint32_t route;
>>>>> +            };
>>>>> +        };
>>>>
>>>> So long as there are no static initializers for this construct
>>>> that would then suffer the old-gcc problem, this is of course a
>>>> fine arrangement to make.
>>>
>>> I have to admit that I have no clue what the "old-gcc" problem is. I am
>>> curious, and I would appreciate pointers to figure out if/how to
>>> resolve. Is that an old, existing problem? Or a problem that was present
>>> in older versions of gcc?
>>
>> Well, as already said - the problem is with old gcc not dealing
>> well with initializers of structs/unions with unnamed fields.
> 
> You seem to know quite well the problem. So would you mind to give us 
> more details on which GCC version is believed to be broken?

I don't recall for sure, but iirc anything before 4.4.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Jul 31 12:39:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 12:39:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1UJv-0007SI-Ps; Fri, 31 Jul 2020 12:39:31 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=S/xS=BK=xen.org=paul@srs-us1.protection.inumbo.net>)
 id 1k1UJu-0007SA-PN
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 12:39:30 +0000
X-Inumbo-ID: e0c3c312-d32a-11ea-8e30-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e0c3c312-d32a-11ea-8e30-bc764e2007e4;
 Fri, 31 Jul 2020 12:39:30 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:MIME-Version:
 Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:Content-ID:
 Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc
 :Resent-Message-ID:In-Reply-To:References:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=UnYQohLRvUIJevD99SSi2RBZgyVusm/wpcD40eKewpI=; b=Q+Noto9sktNZ5I1JisPSceyP1/
 8D/RhbfGlusv59LN2lnp6MlPablUSAYzjUR5aeXo1SRJBoSCAnj2pdn8af2veBzPBGmDKyxdQKqB+
 csfF/p61OoRLgPrRJSSMT0kRzhXrKscv7MHbdGsK5zLe0ZX4RfHDgpclaHxa/WDgo0m0=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1k1UJt-0001A3-JY; Fri, 31 Jul 2020 12:39:29 +0000
Received: from host86-143-223-30.range86-143.btcentralplus.com
 ([86.143.223.30] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1k1UJt-0007EP-BJ; Fri, 31 Jul 2020 12:39:29 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v2 0/2] epte_get_entry_emt() modifications
Date: Fri, 31 Jul 2020 13:39:24 +0100
Message-Id: <20200731123926.28970-1-paul@xen.org>
X-Mailer: git-send-email 2.20.1
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 Paul Durrant <pdurrant@amazon.com>, Wei Liu <wl@xen.org>,
 Jan Beulich <jbeulich@suse.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Paul Durrant <pdurrant@amazon.com>

This series was originally a singleton, consisting of just patch #1.

Paul Durrant (2):
  x86/hvm: set 'ipat' in EPT for special pages
  x86/hvm: simplify 'mmio_direct' check in epte_get_entry_emt()

 xen/arch/x86/hvm/mtrr.c | 16 +++++-----------
 1 file changed, 5 insertions(+), 11 deletions(-)
---
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>
Cc: Wei Liu <wl@xen.org>
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri Jul 31 12:39:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 12:39:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1UJy-0007T5-1L; Fri, 31 Jul 2020 12:39:34 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=S/xS=BK=xen.org=paul@srs-us1.protection.inumbo.net>)
 id 1k1UJw-0007SZ-4K
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 12:39:32 +0000
X-Inumbo-ID: e13c6308-d32a-11ea-abab-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e13c6308-d32a-11ea-abab-12813bfff9fa;
 Fri, 31 Jul 2020 12:39:31 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:MIME-Version:
 References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=z2IY39FopPNWgEDc6LGpyAvx8fgxPmSJFUs3X+CwziI=; b=QgyL/2yvdHMIT/RvS5+x0seXrH
 jkEXLIHDeIgKtoc0gHI/Lx4mj+nsTsK3PZw4CkHP6Ih56vM5Q/RL6YROl6omiQfgGa6Dsein+2gEa
 9JtcTfIFllo6Xi0HJO8qh74OYSyMsN6aEqoiN1ISiyULsP0eN8bL2O1atcQePU9E0HHY=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1k1UJu-0001A8-Gz; Fri, 31 Jul 2020 12:39:30 +0000
Received: from host86-143-223-30.range86-143.btcentralplus.com
 ([86.143.223.30] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1k1UJu-0007EP-9l; Fri, 31 Jul 2020 12:39:30 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v2 1/2] x86/hvm: set 'ipat' in EPT for special pages
Date: Fri, 31 Jul 2020 13:39:25 +0100
Message-Id: <20200731123926.28970-2-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200731123926.28970-1-paul@xen.org>
References: <20200731123926.28970-1-paul@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 Paul Durrant <pdurrant@amazon.com>, Wei Liu <wl@xen.org>,
 Jan Beulich <jbeulich@suse.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Paul Durrant <pdurrant@amazon.com>

All non-MMIO ranges (i.e. those not mapping real device MMIO regions) that
map valid MFNs are normally marked MTRR_TYPE_WRBACK and have 'ipat' set. Hence,
when PV drivers running in a guest populate the BAR space of the Xen Platform
PCI Device with pages such as the Shared Info page or Grant Table pages,
accesses to these pages will be cacheable.

However, should IOMMU mappings be enabled for the guest, these accesses
become uncacheable. This has a substantial negative effect on the I/O
throughput of PV devices. Arguably PV drivers should not be using BAR space to
host the Shared Info and Grant Table pages, but it is currently commonplace for
them to do so, and hence this problem needs mitigation. This patch therefore
makes sure the 'ipat' bit is set for any special page, regardless of where in
GFN space it is mapped.

NOTE: Clearly this mitigation only applies to Intel EPT. It is not obvious
      that there is any similar mitigation possible for AMD NPT. Downstreams
      such as Citrix XenServer have been carrying a patch similar to this for
      several releases though.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Wei Liu <wl@xen.org>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>
---
 xen/arch/x86/hvm/mtrr.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/xen/arch/x86/hvm/mtrr.c b/xen/arch/x86/hvm/mtrr.c
index 511c3be1c8..3ad813ed15 100644
--- a/xen/arch/x86/hvm/mtrr.c
+++ b/xen/arch/x86/hvm/mtrr.c
@@ -830,7 +830,8 @@ int epte_get_entry_emt(struct domain *d, unsigned long gfn, mfn_t mfn,
         return MTRR_TYPE_UNCACHABLE;
     }
 
-    if ( !is_iommu_enabled(d) && !cache_flush_permitted(d) )
+    if ( (!is_iommu_enabled(d) && !cache_flush_permitted(d)) ||
+         is_special_page(mfn_to_page(mfn)) )
     {
         *ipat = 1;
         return MTRR_TYPE_WRBACK;
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri Jul 31 12:39:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 12:39:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1UK2-0007US-9P; Fri, 31 Jul 2020 12:39:38 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=S/xS=BK=xen.org=paul@srs-us1.protection.inumbo.net>)
 id 1k1UK1-0007SZ-2z
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 12:39:37 +0000
X-Inumbo-ID: e20d8a00-d32a-11ea-abab-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e20d8a00-d32a-11ea-abab-12813bfff9fa;
 Fri, 31 Jul 2020 12:39:32 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:MIME-Version:
 References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=33WYKKwbqm7tyWR7ZQeAbyiugZFV+VQYG8o2+fd9T2E=; b=AT7z8AETk7fkVZqKS7UO+hsCJU
 fzOgU5vqC5I6LkYpnZZnMSBHqHXb1wseDoos04sKdndp/JBUMB/D02qz9ZCZIsjMBJBaBog5LvWzE
 dnkSGk/CJrGcfCUyrrtd3iUoaHhbTfS9G3C5eL3zKBx7Ni37p05obmM0coNgCHuJGNS8=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1k1UJv-0001AE-Ey; Fri, 31 Jul 2020 12:39:31 +0000
Received: from host86-143-223-30.range86-143.btcentralplus.com
 ([86.143.223.30] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1k1UJv-0007EP-7I; Fri, 31 Jul 2020 12:39:31 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v2 2/2] x86/hvm: simplify 'mmio_direct' check in
 epte_get_entry_emt()
Date: Fri, 31 Jul 2020 13:39:26 +0100
Message-Id: <20200731123926.28970-3-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200731123926.28970-1-paul@xen.org>
References: <20200731123926.28970-1-paul@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 Paul Durrant <pdurrant@amazon.com>, Wei Liu <wl@xen.org>,
 Jan Beulich <jbeulich@suse.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Paul Durrant <pdurrant@amazon.com>

Re-factor the code to take advantage of the fact that the APIC access page is
a 'special' page.

Suggested-by: Jan Beulich <jbeulich@suse.com>
Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Wei Liu <wl@xen.org>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>
---
 xen/arch/x86/hvm/mtrr.c | 15 ++++-----------
 1 file changed, 4 insertions(+), 11 deletions(-)

diff --git a/xen/arch/x86/hvm/mtrr.c b/xen/arch/x86/hvm/mtrr.c
index 3ad813ed15..0992f05e8f 100644
--- a/xen/arch/x86/hvm/mtrr.c
+++ b/xen/arch/x86/hvm/mtrr.c
@@ -814,29 +814,22 @@ int epte_get_entry_emt(struct domain *d, unsigned long gfn, mfn_t mfn,
         return -1;
     }
 
-    if ( direct_mmio )
-    {
-        if ( (mfn_x(mfn) ^ mfn_x(d->arch.hvm.vmx.apic_access_mfn)) >> order )
-            return MTRR_TYPE_UNCACHABLE;
-        if ( order )
-            return -1;
-        *ipat = 1;
-        return MTRR_TYPE_WRBACK;
-    }
-
     if ( !mfn_valid(mfn) )
     {
         *ipat = 1;
         return MTRR_TYPE_UNCACHABLE;
     }
 
-    if ( (!is_iommu_enabled(d) && !cache_flush_permitted(d)) ||
+    if ( (!direct_mmio && !is_iommu_enabled(d) && !cache_flush_permitted(d)) ||
          is_special_page(mfn_to_page(mfn)) )
     {
         *ipat = 1;
         return MTRR_TYPE_WRBACK;
     }
 
+    if ( direct_mmio )
+        return MTRR_TYPE_UNCACHABLE;
+
     gmtrr_mtype = hvm_get_mem_pinned_cacheattr(d, _gfn(gfn), order);
     if ( gmtrr_mtype >= 0 )
     {
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri Jul 31 12:45:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 12:45:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1UPQ-000053-UE; Fri, 31 Jul 2020 12:45:12 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=wy6+=BK=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1k1UPQ-00004w-AM
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 12:45:12 +0000
X-Inumbo-ID: ab701f48-d32b-11ea-8e30-bc764e2007e4
Received: from EUR05-AM6-obe.outbound.protection.outlook.com (unknown
 [2a01:111:f400:7e1b::615])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ab701f48-d32b-11ea-8e30-bc764e2007e4;
 Fri, 31 Jul 2020 12:45:10 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com; 
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=8m4EYduCEPARetGLysx6gxGJAn3u1iVSamjefshBK2o=;
 b=c2wEwem+kipdArpzpltTF+8sLFrKbJfbUcW0s9ADJ642bznGIFGzopjbq3nOJ5cZGa3JrtaipW0/h8GwUu/B3XYPIm3cT7NB7qbRkqdGVpXQ90AvPxi1v1IEZcNqM5SJrBuYBUdViVItrsyJTsfX+f1HICkRAq/rfa6SYiBG3Y8=
Received: from AM6PR05CA0004.eurprd05.prod.outlook.com (2603:10a6:20b:2e::17)
 by DB7PR08MB4219.eurprd08.prod.outlook.com (2603:10a6:10:34::31) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3239.20; Fri, 31 Jul
 2020 12:45:09 +0000
Received: from VE1EUR03FT033.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:2e:cafe::c8) by AM6PR05CA0004.outlook.office365.com
 (2603:10a6:20b:2e::17) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3239.16 via Frontend
 Transport; Fri, 31 Jul 2020 12:45:08 +0000
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org;
 dmarc=bestguesspass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT033.mail.protection.outlook.com (10.152.18.147) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3239.20 via Frontend Transport; Fri, 31 Jul 2020 12:45:08 +0000
Received: ("Tessian outbound 2ae7cfbcc26c:v62");
 Fri, 31 Jul 2020 12:45:08 +0000
X-CheckRecipientChecked: true
X-CR-MTA-CID: fe9ad90011b12fde
X-CR-MTA-TID: 64aa7808
Received: from 201af4206fbb.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 59B4E25F-F342-4937-9B20-B1505AED559E.1; 
 Fri, 31 Jul 2020 12:45:02 +0000
Received: from EUR03-VE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 201af4206fbb.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 31 Jul 2020 12:45:02 +0000
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=WkAbiiDGMuDTw47XV46EqWA2eXOCPd/tyna9MsGQbRq8guW/45+cpzUnY566aAFa52EBLnEistYMzF8sQFTWoF9havYnlJ3rQ8ydCC8T8zyD2GPmscA+YWNBvbUjncDCAL+RgAhomFFFskyaaKPAQS4d0DYXrCqm7Kxq01lVUxhEkLU+E+mMEabUhOeN5Tv6pWzfeM4x9vGYZ38GEaV99BOZJiMbxvJLTSOiDxXgXp6yoW9Mnl3B0utGLZUMae7KCtAAlo+Hi3s9QyAqwykifgEA253puYEtEfy8gVxo79DBShbhWRE1NvRlW0rUyXIPvDBiTIW6w779Puw3cRBiaw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; 
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=8m4EYduCEPARetGLysx6gxGJAn3u1iVSamjefshBK2o=;
 b=hGfTsF+LuFBNC/Sc0+V1r+n8mGif+cUzzVlMpW9rUA5adRWO8/KAaxjreM356ruHEjbP0rWEIoPTTo86NRXawJgeKQKFm84zEeHBJ6i3LeDgOpmuWw/j/XXJHLQZqqq4fv0vbvHGk+TV+3xvW5JYpNdx6wMf9TQuAGCJe+uAr9RwV3I0KGcsUkXsZL+ntQZqwajYi05c967y2LbHEWzThJipMuTjiUSLdNlHDfR8/QRrHsLVqHomXODWGauvAYfiCWiZzthOQBUY2IWYBS2gFiaWiUsniqOq4/r2wSgj2OAqBSO1Aw3eFZmjOqzxOfgBbjOjGBtLeraXza5Je47RdQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com; 
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=8m4EYduCEPARetGLysx6gxGJAn3u1iVSamjefshBK2o=;
 b=c2wEwem+kipdArpzpltTF+8sLFrKbJfbUcW0s9ADJ642bznGIFGzopjbq3nOJ5cZGa3JrtaipW0/h8GwUu/B3XYPIm3cT7NB7qbRkqdGVpXQ90AvPxi1v1IEZcNqM5SJrBuYBUdViVItrsyJTsfX+f1HICkRAq/rfa6SYiBG3Y8=
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DB7PR08MB3850.eurprd08.prod.outlook.com (2603:10a6:10:7b::29) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3239.18; Fri, 31 Jul
 2020 12:45:00 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::7c65:30f9:4e87:f58a]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::7c65:30f9:4e87:f58a%3]) with mapi id 15.20.3239.020; Fri, 31 Jul 2020
 12:45:00 +0000
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Julien Grall <julien@xen.org>
Subject: Re: [RESEND][PATCH v2 5/7] xen: include xen/guest_access.h rather
 than asm/guest_access.h
Thread-Topic: [RESEND][PATCH v2 5/7] xen: include xen/guest_access.h rather
 than asm/guest_access.h
Thread-Index: AQHWZp4X4KUogf6E0E+1j1OVzKuR+6kho5CA
Date: Fri, 31 Jul 2020 12:45:00 +0000
Message-ID: <4111782C-4E92-442E-BAE3-A9F697FEA91B@arm.com>
References: <20200730181827.1670-1-julien@xen.org>
 <20200730181827.1670-6-julien@xen.org>
In-Reply-To: <20200730181827.1670-6-julien@xen.org>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
Authentication-Results-Original: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [217.140.99.251]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 5e19f2f3-77fb-4254-85c1-08d8354f8e95
x-ms-traffictypediagnostic: DB7PR08MB3850:|DB7PR08MB4219:
X-Microsoft-Antispam-PRVS: <DB7PR08MB4219369288A67DD21899E9079D4E0@DB7PR08MB4219.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:8882;OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original: OvEG+GdgwhyyLIOirQd2HxhdIfplGfBVw1Zr8AtSgzpg/zCQx1nDiKiU/F53wCdhcDQB3Iv8lZ8VTOrFA86a75lzinwTQu2VJOB7A++eG1+rrSEoQNWn0Sid3ioqZBqn4QURzE/JwG/HLkoKmal5xg0swu2Gb1Hbtpc0dxrlZg80+k2vEOjSu0FOUByn+FxBbu0oHPnycNz2GlIPm/fSw2BlR6ZP34edT7cRhazXA6+oIy7iUBVj/QuZE9ydgtn6nrR3UoDeo4E1QxfHUyVrhGKI5JNJB2wr8Ep6xDCVKEzNsZu1+WtYZBRLJnKkwGY4sXqg9qA0ke/SoyQYnesx6cRmfDjAsU0Om0muzSXHdjnpojjk6ytLewn/r/DUHOijr6kvknjVQUP7vP3BRS0zig==
X-Forefront-Antispam-Report-Untrusted: CIP:255.255.255.255; CTRY:; LANG:en;
 SCL:1; SRV:; IPV:NLI; SFV:NSPM; H:DB7PR08MB3689.eurprd08.prod.outlook.com;
 PTR:; CAT:NONE; SFTY:;
 SFS:(4636009)(39860400002)(136003)(366004)(376002)(346002)(396003)(6486002)(478600001)(54906003)(71200400001)(76116006)(66946007)(8936002)(316002)(53546011)(6506007)(91956017)(36756003)(7416002)(86362001)(8676002)(6512007)(5660300002)(83380400001)(2906002)(2616005)(33656002)(66476007)(66556008)(26005)(64756008)(66446008)(186003)(4326008)(6916009);
 DIR:OUT; SFP:1101; 
x-ms-exchange-antispam-messagedata: cjK2klzyuvwP9nLpWl2k02VADcke8MZjZXH78FF0yuBDwUXJVJj61JBGoKhdSrYQ+mQcaYlSh0syqiRwnWX1zzf6sHWnnh1z00RQvZdH5woY0jmjzeSfsjl91nvRGbkzNUBEo8b0oFEvr+43Bn+rvBcetCLTikr7Et18TpkVqDXeJT3YFXjqXJSaJ18gEjIu1Ue+GBg69V1LImXMroFWZ1EW3kngydruCMxDrOFR7C2yGagL2rC3cCCRamotXF4RoR4it2U7JryGg7eZMuv4t/SH+K0ckEKofR+znx4QutBxLbtnK47RP11qhVI14dFAINj3lDVggXYl+gPCJVfsXxslCLSW5Xh69rtXEpNHDc5aE3Pl5hxa4nqso7chamdYm3jNJQcPt8DDM2wM5aC773tXciFysrz4dS7wZnOUjemDjbIKDHufe46OggEBzzxgAD0j2R1itQhIpocID7+uABNZBUGwv9yL3mGlHYOyuotaKK0GPTGE3TLq26IcoWggKifUc11eiOjBT3kqKPHpc1BKuEoYJ5dFSfM1mRFfgPiJMywEuA3HAusm4UNbJPzqGcZDec65UCKs9+jhKo2YD5iaygg63GZZ+2pVVMesZZuRtF3qruiQhBE4yvOmGtPzlvyrGPONHzRPxTZ5bUtwfg==
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="iso-8859-1"
Content-ID: <A81B98BFB5592C4780B007BC3068CF7A@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB7PR08MB3850
Original-Authentication-Results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped: VE1EUR03FT033.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs: b21035fd-2552-4956-a98b-08d8354f89ad
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: lcX8cAn/I2TsYi1GL8ruI3MlvfJm1YbXTKZmnlVnrsWtkrdBYLvC7lGc14D9L7UNwws+oVdejrFr94pYR/NL1Jgrb0LSjQZKTgOurTYQ+YNe5ZaZAd+BxYGY88A+5Ny3OhG2h5uundL603PIxi1lovRMdWIdzDdr2FNhAMf8fLVdAiU2qNaBsM7wt3M409Yr3ZDopLswbbgXcR7RH0cLnN68vx01xntg3kbAyH2oRYGOg86lTWiwguo5jT6kTOXH/6QKWoYiGKmWVxd4vQq3X4WRK7O7hQ1uI4NidPj909a5hZ5MH0vA/23/wiaKI3tgrMZ/x8+PccYRMp7UStx5u8sisCYKJMgvmu2+U4opd4yQsEp2zdr6i7DU6J1P0dxehBt533U1vLYOTbcCptwhCmts7J/PTy18cSfUWK+Cv1c8x7wA6Zm+7Anwcu1dhpQOp5wQ/8Rj0gnaJlZ4pS5fZePdeH86cIaJRO21IdPg6Ok=
X-Forefront-Antispam-Report: CIP:63.35.35.123; CTRY:IE; LANG:en; SCL:1; SRV:;
 IPV:CAL; SFV:NSPM; H:64aa7808-outbound-1.mta.getcheckrecipient.com;
 PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com; CAT:NONE; SFTY:;
 SFS:(4636009)(39860400002)(346002)(376002)(136003)(396003)(46966005)(478600001)(81166007)(82740400003)(47076004)(2906002)(83380400001)(356005)(70586007)(2616005)(70206006)(82310400002)(86362001)(36756003)(33656002)(5660300002)(54906003)(6512007)(336012)(26005)(186003)(6506007)(53546011)(8676002)(8936002)(36906005)(6862004)(316002)(6486002)(107886003)(4326008);
 DIR:OUT; SFP:1101; 
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 31 Jul 2020 12:45:08.2225 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 5e19f2f3-77fb-4254-85c1-08d8354f8e95
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d; Ip=[63.35.35.123];
 Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource: VE1EUR03FT033.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB7PR08MB4219
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin Tian <kevin.tian@intel.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Jan Beulich <jbeulich@suse.com>,
 Wei Liu <wl@xen.org>, Paul Durrant <paul@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Julien Grall <jgrall@amazon.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Jun Nakajima <jun.nakajima@intel.com>,
 Xen-devel <xen-devel@lists.xenproject.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?iso-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>



> On 30 Jul 2020, at 20:18, Julien Grall <julien@xen.org> wrote:
>
> From: Julien Grall <jgrall@amazon.com>
>
> Only a few places are actually including asm/guest_access.h. While this
> is fine today, a follow-up patch will want to move most of the helpers
> from asm/guest_access.h to xen/guest_access.h.
>
> To prepare the move, everyone should include xen/guest_access.h rather
> than asm/guest_access.h.
>
> Interestingly, asm-arm/guest_access.h includes xen/guest_access.h. The
> inclusion is now removed as no-one but the latter should include the
> former.
>
> Signed-off-by: Julien Grall <jgrall@amazon.com>
Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

>
> ---
>    Changes in v2:
>        - Remove some changes that weren't meant to be here.
> ---
> xen/arch/arm/decode.c                | 2 +-
> xen/arch/arm/domain.c                | 2 +-
> xen/arch/arm/guest_walk.c            | 3 ++-
> xen/arch/arm/guestcopy.c             | 2 +-
> xen/arch/arm/vgic-v3-its.c           | 2 +-
> xen/arch/x86/hvm/svm/svm.c           | 2 +-
> xen/arch/x86/hvm/viridian/viridian.c | 2 +-
> xen/arch/x86/hvm/vmx/vmx.c           | 2 +-
> xen/common/libelf/libelf-loader.c    | 2 +-
> xen/include/asm-arm/guest_access.h   | 1 -
> xen/lib/x86/private.h                | 2 +-
> 11 files changed, 11 insertions(+), 11 deletions(-)
>
> diff --git a/xen/arch/arm/decode.c b/xen/arch/arm/decode.c
> index 144793c8cea0..792c2e92a7eb 100644
> --- a/xen/arch/arm/decode.c
> +++ b/xen/arch/arm/decode.c
> @@ -17,12 +17,12 @@
>  * GNU General Public License for more details.
>  */
>
> +#include <xen/guest_access.h>
> #include <xen/lib.h>
> #include <xen/sched.h>
> #include <xen/types.h>
>
> #include <asm/current.h>
> -#include <asm/guest_access.h>
>
> #include "decode.h"
>
> diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
> index 31169326b2e3..9258f6d3faa2 100644
> --- a/xen/arch/arm/domain.c
> +++ b/xen/arch/arm/domain.c
> @@ -12,6 +12,7 @@
> #include <xen/bitops.h>
> #include <xen/errno.h>
> #include <xen/grant_table.h>
> +#include <xen/guest_access.h>
> #include <xen/hypercall.h>
> #include <xen/init.h>
> #include <xen/lib.h>
> @@ -26,7 +27,6 @@
> #include <asm/current.h>
> #include <asm/event.h>
> #include <asm/gic.h>
> -#include <asm/guest_access.h>
> #include <asm/guest_atomics.h>
> #include <asm/irq.h>
> #include <asm/p2m.h>
> diff --git a/xen/arch/arm/guest_walk.c b/xen/arch/arm/guest_walk.c
> index a1cdd7f4afea..b4496c4c86c6 100644
> --- a/xen/arch/arm/guest_walk.c
> +++ b/xen/arch/arm/guest_walk.c
> @@ -16,8 +16,9 @@
>  */
>
> #include <xen/domain_page.h>
> +#include <xen/guest_access.h>
> #include <xen/sched.h>
> -#include <asm/guest_access.h>
> +
> #include <asm/guest_walk.h>
> #include <asm/short-desc.h>
>
> diff --git a/xen/arch/arm/guestcopy.c b/xen/arch/arm/guestcopy.c
> index c8023e2bca5d..32681606d8fc 100644
> --- a/xen/arch/arm/guestcopy.c
> +++ b/xen/arch/arm/guestcopy.c
> @@ -1,10 +1,10 @@
> #include <xen/domain_page.h>
> +#include <xen/guest_access.h>
> #include <xen/lib.h>
> #include <xen/mm.h>
> #include <xen/sched.h>
>
> #include <asm/current.h>
> -#include <asm/guest_access.h>
>
> #define COPY_flush_dcache   (1U << 0)
> #define COPY_from_guest     (0U << 1)
> diff --git a/xen/arch/arm/vgic-v3-its.c b/xen/arch/arm/vgic-v3-its.c
> index 6e153c698d56..58d939b85f92 100644
> --- a/xen/arch/arm/vgic-v3-its.c
> +++ b/xen/arch/arm/vgic-v3-its.c
> @@ -32,6 +32,7 @@
> #include <xen/bitops.h>
> #include <xen/config.h>
> #include <xen/domain_page.h>
> +#include <xen/guest_access.h>
> #include <xen/lib.h>
> #include <xen/init.h>
> #include <xen/softirq.h>
> @@ -39,7 +40,6 @@
> #include <xen/sched.h>
> #include <xen/sizes.h>
> #include <asm/current.h>
> -#include <asm/guest_access.h>
> #include <asm/mmio.h>
> #include <asm/gic_v3_defs.h>
> #include <asm/gic_v3_its.h>
> diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
> index ca3bbfcbb355..7301f3cd6004 100644
> --- a/xen/arch/x86/hvm/svm/svm.c
> +++ b/xen/arch/x86/hvm/svm/svm.c
> @@ -16,6 +16,7 @@
>  * this program; If not, see <http://www.gnu.org/licenses/>.
>  */
>
> +#include <xen/guest_access.h>
> #include <xen/init.h>
> #include <xen/lib.h>
> #include <xen/trace.h>
> @@ -34,7 +35,6 @@
> #include <asm/cpufeature.h>
> #include <asm/processor.h>
> #include <asm/amd.h>
> -#include <asm/guest_access.h>
> #include <asm/debugreg.h>
> #include <asm/msr.h>
> #include <asm/i387.h>
> diff --git a/xen/arch/x86/hvm/viridian/viridian.c b/xen/arch/x86/hvm/viridian/viridian.c
> index 977c1bc54fad..dc7183a54627 100644
> --- a/xen/arch/x86/hvm/viridian/viridian.c
> +++ b/xen/arch/x86/hvm/viridian/viridian.c
> @@ -5,12 +5,12 @@
>  * Hypervisor Top Level Functional Specification for more information.
>  */
>
> +#include <xen/guest_access.h>
> #include <xen/sched.h>
> #include <xen/version.h>
> #include <xen/hypercall.h>
> #include <xen/domain_page.h>
> #include <xen/param.h>
> -#include <asm/guest_access.h>
> #include <asm/guest/hyperv-tlfs.h>
> #include <asm/paging.h>
> #include <asm/p2m.h>
> diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
> index eb54aadfbafb..cb5df1e81c9c 100644
> --- a/xen/arch/x86/hvm/vmx/vmx.c
> +++ b/xen/arch/x86/hvm/vmx/vmx.c
> @@ -15,6 +15,7 @@
>  * this program; If not, see <http://www.gnu.org/licenses/>.
>  */
>
> +#include <xen/guest_access.h>
> #include <xen/init.h>
> #include <xen/lib.h>
> #include <xen/param.h>
> @@ -31,7 +32,6 @@
> #include <asm/regs.h>
> #include <asm/cpufeature.h>
> #include <asm/processor.h>
> -#include <asm/guest_access.h>
> #include <asm/debugreg.h>
> #include <asm/msr.h>
> #include <asm/p2m.h>
> diff --git a/xen/common/libelf/libelf-loader.c b/xen/common/libelf/libelf-loader.c
> index 0f468727d04a..629cc0d3e611 100644
> --- a/xen/common/libelf/libelf-loader.c
> +++ b/xen/common/libelf/libelf-loader.c
> @@ -16,7 +16,7 @@
>  */
>
> #ifdef __XEN__
> -#include <asm/guest_access.h>
> +#include <xen/guest_access.h>
> #endif
>
> #include "libelf-private.h"
> diff --git a/xen/include/asm-arm/guest_access.h b/xen/include/asm-arm/guest_access.h
> index 31b9f03f0015..b9a89c495527 100644
> --- a/xen/include/asm-arm/guest_access.h
> +++ b/xen/include/asm-arm/guest_access.h
> @@ -1,7 +1,6 @@
> #ifndef __ASM_ARM_GUEST_ACCESS_H__
> #define __ASM_ARM_GUEST_ACCESS_H__
>
> -#include <xen/guest_access.h>
> #include <xen/errno.h>
> #include <xen/sched.h>
>
> diff --git a/xen/lib/x86/private.h b/xen/lib/x86/private.h
> index b793181464f3..2d53bd3ced23 100644
> --- a/xen/lib/x86/private.h
> +++ b/xen/lib/x86/private.h
> @@ -4,12 +4,12 @@
> #ifdef __XEN__
>
> #include <xen/bitops.h>
> +#include <xen/guest_access.h>
> #include <xen/kernel.h>
> #include <xen/lib.h>
> #include <xen/nospec.h>
> #include <xen/types.h>
>
> -#include <asm/guest_access.h>
> #include <asm/msr-index.h>
>
> #define copy_to_buffer_offset copy_to_guest_offset
> --
> 2.17.1
>
>

IMPORTANT NOTICE: The contents of this email and any attachments are confidential and may also be privileged. If you are not the intended recipient, please notify the sender immediately and do not disclose the contents to any other person, use it for any purpose, or store or copy the information in any medium. Thank you.


From xen-devel-bounces@lists.xenproject.org Fri Jul 31 12:45:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 12:45:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1UQ4-00008I-B3; Fri, 31 Jul 2020 12:45:52 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=wy6+=BK=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1k1UQ3-00008C-Ld
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 12:45:51 +0000
X-Inumbo-ID: c30a8878-d32b-11ea-8e30-bc764e2007e4
Received: from EUR02-AM5-obe.outbound.protection.outlook.com (unknown
 [40.107.0.75]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c30a8878-d32b-11ea-8e30-bc764e2007e4;
 Fri, 31 Jul 2020 12:45:50 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com; 
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=IC0wdRqlD+X+NTgBdv+ZJU48FRWliKfhdryxchVH1vA=;
 b=qceycvUIXnCon2Jid+yCN/HP2TptC7qoDGlWhCRSd/G3vareln/NiOlA5tWSmeIQaO2dB0jbXBGSTS8MBZFN+sOvjypkG4J2sxalnZ0uIsooJ6oht4kdP8tAxkff3qCIhbNzsJFoM3L/8utAKpjHnXbhPUgkEcKA183GyO12jjI=
Received: from AM6PR0202CA0048.eurprd02.prod.outlook.com
 (2603:10a6:20b:3a::25) by VE1PR08MB4799.eurprd08.prod.outlook.com
 (2603:10a6:802:ad::18) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3239.20; Fri, 31 Jul
 2020 12:45:47 +0000
Received: from AM5EUR03FT027.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:3a:cafe::d5) by AM6PR0202CA0048.outlook.office365.com
 (2603:10a6:20b:3a::25) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3239.16 via Frontend
 Transport; Fri, 31 Jul 2020 12:45:47 +0000
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org;
 dmarc=bestguesspass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT027.mail.protection.outlook.com (10.152.16.138) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3239.17 via Frontend Transport; Fri, 31 Jul 2020 12:45:47 +0000
Received: ("Tessian outbound 1c27ecaec3d6:v62");
 Fri, 31 Jul 2020 12:45:47 +0000
X-CheckRecipientChecked: true
X-CR-MTA-CID: 64d9592e3f24aece
X-CR-MTA-TID: 64aa7808
Received: from 91b45e3dd659.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 123D7306-C593-4828-B77D-9C3B6A60C2B5.1; 
 Fri, 31 Jul 2020 12:45:41 +0000
Received: from EUR03-VE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 91b45e3dd659.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 31 Jul 2020 12:45:41 +0000
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=M7Pl4tMcdi3jdsUxi59YIUUxv4naKqlGlniVnzMQ8PRS1AjOsoepTW2WOU+JFkj+isZuYf9eN+oLHOqdKYecpE+HbJasEi6YVNL++2z+msUVdg8B2d4uYdS2IasO3tzjLGl/magTtHB3svqlxHZ5IQdbxvcwJlm0PR+gVsGWzme9hqHdtJwQhocrz2Ei9xB2+Pqyto8Wn032Iyu+U1rncCElhS4NoUC25mKj9jIOgrH0I4f9RrOkwTIW/daGkb6BTg6mkGjTx7Qh627gewylyBFOK1CvrraFP9BdNhU47zA9390Ob8ZiXwqXLt3T6H/Cb0CSQWjF0F+GPIOlS3HpMw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; 
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=IC0wdRqlD+X+NTgBdv+ZJU48FRWliKfhdryxchVH1vA=;
 b=Pgx06HUmIUpWOklOPf1J24HKbtwEtC5JFxnQNSFX2GYieuJQM1mF3ioIrtur0wG/IB5V9rMupgQzTyscfVwk3p2uvgAqSqAZ1xWOuP9vX1SkF0nDjpBVZ6WaHDrGVJlYDOcHMZhZkqQyAVrQ7HquYalR4MZP7M//Lt8AC0Ov6I2YKI94LCde5HP9xLcXttYUPOzB6GdZW0MEh+npzLv6xvWNoCD5HYJTqUcNFNWSNQIEDOJrDgQ7Ya6QYegWCIjBV8JumIEENqdYl48fmiS3RlKZG+Bab3rYYLPLVZXFPnAwNXkfr+J8YnzFvsnRy//9tiJduCeVtKhOPrz7oe4mCQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com; 
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=IC0wdRqlD+X+NTgBdv+ZJU48FRWliKfhdryxchVH1vA=;
 b=qceycvUIXnCon2Jid+yCN/HP2TptC7qoDGlWhCRSd/G3vareln/NiOlA5tWSmeIQaO2dB0jbXBGSTS8MBZFN+sOvjypkG4J2sxalnZ0uIsooJ6oht4kdP8tAxkff3qCIhbNzsJFoM3L/8utAKpjHnXbhPUgkEcKA183GyO12jjI=
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DB7PR08MB3850.eurprd08.prod.outlook.com (2603:10a6:10:7b::29) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3239.18; Fri, 31 Jul
 2020 12:45:39 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::7c65:30f9:4e87:f58a]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::7c65:30f9:4e87:f58a%3]) with mapi id 15.20.3239.020; Fri, 31 Jul 2020
 12:45:39 +0000
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Julien Grall <julien@xen.org>
Subject: Re: [RESEND][PATCH v2 5/7] xen: include xen/guest_access.h rather
 than asm/guest_access.h
Thread-Topic: [RESEND][PATCH v2 5/7] xen: include xen/guest_access.h rather
 than asm/guest_access.h
Thread-Index: AQHWZp4X4KUogf6E0E+1j1OVzKuR+6kho78A
Date: Fri, 31 Jul 2020 12:45:39 +0000
Message-ID: <63792F54-60FD-41C0-A28A-0C3673349CFB@arm.com>
References: <20200730181827.1670-1-julien@xen.org>
 <20200730181827.1670-6-julien@xen.org>
In-Reply-To: <20200730181827.1670-6-julien@xen.org>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
Authentication-Results-Original: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [90.126.203.125]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 94086750-2ea6-407f-bb10-08d8354fa5ab
x-ms-traffictypediagnostic: DB7PR08MB3850:|VE1PR08MB4799:
X-Microsoft-Antispam-PRVS: <VE1PR08MB47993881BC82A4ABF0586B5A9D4E0@VE1PR08MB4799.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:9508;OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original: DIW0nmLnpJBA6+Qc5ESTsE61xTU0VJvHlMv3WsLG1Znp/nrgujg7SWLsUZ4I2+cmrs265cZ5sgI909R+8so8cMZSsf0qOtboXIeOnWim6NAvPpHU4eGRGhT8/me2XKx2aZT5A0NseytjhcV1bMb7Gzdt/NFaaKbkCjOrsUmCaWLprvypUtL3X4gEGziVT9+OsM/td9MlbPZ0fe5aCmVgBjo+5iwhZSox/6VOkT5nKqGMOwngFwtTKyMU0z8Y6UrOphRdGGlcsMX5C+HEgxeGCd4Y7AeKwaZdU06DP1nBlqJIV8rLRle0kk8ws5itnGB/4xGMUYgHdLzhR+zWusufI3z4eHrTz0YNzSqPprIWiO1kRgSGW2It5P5DOkRQJ2LuqimmTKCqBkBt2qS5IWfl1Q==
X-Forefront-Antispam-Report-Untrusted: CIP:255.255.255.255; CTRY:; LANG:en;
 SCL:1; SRV:; IPV:NLI; SFV:NSPM; H:DB7PR08MB3689.eurprd08.prod.outlook.com;
 PTR:; CAT:NONE; SFTY:;
 SFS:(4636009)(39860400002)(136003)(366004)(376002)(346002)(396003)(6486002)(478600001)(54906003)(71200400001)(76116006)(66946007)(8936002)(316002)(53546011)(6506007)(91956017)(36756003)(7416002)(86362001)(8676002)(6512007)(5660300002)(83380400001)(2906002)(2616005)(33656002)(66476007)(66556008)(26005)(64756008)(66446008)(186003)(4326008)(6916009);
 DIR:OUT; SFP:1101; 
x-ms-exchange-antispam-messagedata: VlpwYGpnJw4uO41M+OPmPT6mrYOmO3b1hqjebSn470CRsra2X+q9OSUFqDdciPguHBUQu3O4+JoSy6GLSN2nH1PFPb49FfvOL/W5pTxGoVi9LFASD3EgRfJA2UibGxswli/oj/MzJ9a63upfzx1Yf+VwEG9b7tmExGM/CCuZrav648c8jKg7pzm2ChrqkuDjtYaPJ+eCQccL9pgyyG+yVBCj3ppjImK666fFm4ElS4283f01yADezH8r4gqpeDY+LVnvJLN9f2dazm+vnLS9gt6OuazI47744nUQKvBT3vY3dVgghb9K/HZYm2s2JRgphIkjNm8QIdNlMju54lgEBAs6+wwqcW+cNi0c1zaRs/5FdGVzmwmDfnzqzd9JoHD1w4selfAwkINKTXEUemwhsQSJMd6uwQkXxkv2DTbCGflPZiqP8cOa36ggFS3E/gZJxe7IGPe4bpH50l/p8xCxUixtndvi8BIAJ99vI9HwEmlRIqrWuEh+KM6LBFbQQ5/7sR/u0oQt3RZSrFowLhTHrFDnEwBG7Pl5Wy69bys6tKrlj9JG3Fd5oi2nAx3YzIdRMl9sayvmXxuxrdyNJRFAbaoovbWpYPmiT/koZNFlv5Wc8UT7TfHSNZGxt1pNrEsv2FIKRUjc5dmAxVrj3+2RYw==
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="iso-8859-1"
Content-ID: <68B2DDD0622C1D4BA3A8AA7212506F52@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB7PR08MB3850
Original-Authentication-Results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped: AM5EUR03FT027.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs: f7c30a50-b27f-4ccc-9842-08d8354fa0ed
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: ugIzJN0rvIIk35rHe2A7oUMQKhT890brSgeQkmqELSn2FvguF8nyNRHTLtOAOLxXi638TuxOLV575B9ntThNL0nYP6cOEksix8tpN8fzagbgifxRjwcaeuiWPzNq1rvfO4Ksi1KsuqvuiIKEA9xbQm2vEGVWgegtZu8CWMQIL6M40hIfts85iz13r8J/G42WiBuu1r332t7LRBN1gIqiEXvrhBmrLPd5KtJRCr6vVqQD+QjUs0fM8XSHwqqUDxUBz1I42dAEjDW9w2Oa+b+T/I2yPNd/9xmz2W+xsrvXonzRsXc1icnfPN+/jHwinAhhVXNGeyo+hAYn5zaYIF6/qNPrNRuHiO8T8YOHjEjA667jUZNWNsTedRVQsER3W5S/c5Vd03ZHu3x7xpAHguVu7vZQ7j3UUBfEIaBlgJtHDcSqIswEOdDq1J2ZMQLSg3jySOCaIKUWjMWYQ4rIEXPXEXbnAaixbsoeXF8eew2C5dg=
X-Forefront-Antispam-Report: CIP:63.35.35.123; CTRY:IE; LANG:en; SCL:1; SRV:;
 IPV:CAL; SFV:NSPM; H:64aa7808-outbound-1.mta.getcheckrecipient.com;
 PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com; CAT:NONE; SFTY:;
 SFS:(4636009)(396003)(136003)(376002)(346002)(39860400002)(46966005)(6862004)(82740400003)(47076004)(81166007)(36906005)(478600001)(83380400001)(36756003)(356005)(2906002)(4326008)(82310400002)(6486002)(316002)(70206006)(70586007)(6512007)(5660300002)(8676002)(86362001)(8936002)(2616005)(186003)(26005)(33656002)(54906003)(336012)(53546011)(6506007);
 DIR:OUT; SFP:1101; 
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 31 Jul 2020 12:45:47.0519 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 94086750-2ea6-407f-bb10-08d8354fa5ab
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d; Ip=[63.35.35.123];
 Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource: AM5EUR03FT027.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VE1PR08MB4799
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin Tian <kevin.tian@intel.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Jan Beulich <jbeulich@suse.com>,
 Wei Liu <wl@xen.org>, Paul Durrant <paul@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Julien Grall <jgrall@amazon.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Jun Nakajima <jun.nakajima@intel.com>,
 Xen-devel <xen-devel@lists.xenproject.org>, nd <nd@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?iso-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>



> On 30 Jul 2020, at 20:18, Julien Grall <julien@xen.org> wrote:
>
> From: Julien Grall <jgrall@amazon.com>
>
> Only a few places are actually including asm/guest_access.h. While this
> is fine today, a follow-up patch will want to move most of the helpers
> from asm/guest_access.h to xen/guest_access.h.
>
> To prepare the move, everyone should include xen/guest_access.h rather
> than asm/guest_access.h.
>
> Interestingly, asm-arm/guest_access.h includes xen/guest_access.h. The
> inclusion is now removed as no-one but the latter should include the
> former.
>
> Signed-off-by: Julien Grall <jgrall@amazon.com>
Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

(sorry forgot to remove the disclaimer in the previous one)
>
> ---
>    Changes in v2:
>        - Remove some changes that weren't meant to be here.
> ---
> xen/arch/arm/decode.c                | 2 +-
> xen/arch/arm/domain.c                | 2 +-
> xen/arch/arm/guest_walk.c            | 3 ++-
> xen/arch/arm/guestcopy.c             | 2 +-
> xen/arch/arm/vgic-v3-its.c           | 2 +-
> xen/arch/x86/hvm/svm/svm.c           | 2 +-
> xen/arch/x86/hvm/viridian/viridian.c | 2 +-
> xen/arch/x86/hvm/vmx/vmx.c           | 2 +-
> xen/common/libelf/libelf-loader.c    | 2 +-
> xen/include/asm-arm/guest_access.h   | 1 -
> xen/lib/x86/private.h                | 2 +-
> 11 files changed, 11 insertions(+), 11 deletions(-)
>
> diff --git a/xen/arch/arm/decode.c b/xen/arch/arm/decode.c
> index 144793c8cea0..792c2e92a7eb 100644
> --- a/xen/arch/arm/decode.c
> +++ b/xen/arch/arm/decode.c
> @@ -17,12 +17,12 @@
>  * GNU General Public License for more details.
>  */
>
> +#include <xen/guest_access.h>
> #include <xen/lib.h>
> #include <xen/sched.h>
> #include <xen/types.h>
>
> #include <asm/current.h>
> -#include <asm/guest_access.h>
>
> #include "decode.h"
>
> diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
> index 31169326b2e3..9258f6d3faa2 100644
> --- a/xen/arch/arm/domain.c
> +++ b/xen/arch/arm/domain.c
> @@ -12,6 +12,7 @@
> #include <xen/bitops.h>
> #include <xen/errno.h>
> #include <xen/grant_table.h>
> +#include <xen/guest_access.h>
> #include <xen/hypercall.h>
> #include <xen/init.h>
> #include <xen/lib.h>
> @@ -26,7 +27,6 @@
> #include <asm/current.h>
> #include <asm/event.h>
> #include <asm/gic.h>
> -#include <asm/guest_access.h>
> #include <asm/guest_atomics.h>
> #include <asm/irq.h>
> #include <asm/p2m.h>
> diff --git a/xen/arch/arm/guest_walk.c b/xen/arch/arm/guest_walk.c
> index a1cdd7f4afea..b4496c4c86c6 100644
> --- a/xen/arch/arm/guest_walk.c
> +++ b/xen/arch/arm/guest_walk.c
> @@ -16,8 +16,9 @@
>  */
>
> #include <xen/domain_page.h>
> +#include <xen/guest_access.h>
> #include <xen/sched.h>
> -#include <asm/guest_access.h>
> +
> #include <asm/guest_walk.h>
> #include <asm/short-desc.h>
>
> diff --git a/xen/arch/arm/guestcopy.c b/xen/arch/arm/guestcopy.c
> index c8023e2bca5d..32681606d8fc 100644
> --- a/xen/arch/arm/guestcopy.c
> +++ b/xen/arch/arm/guestcopy.c
> @@ -1,10 +1,10 @@
> #include <xen/domain_page.h>
> +#include <xen/guest_access.h>
> #include <xen/lib.h>
> #include <xen/mm.h>
> #include <xen/sched.h>
>
> #include <asm/current.h>
> -#include <asm/guest_access.h>
>
> #define COPY_flush_dcache   (1U << 0)
> #define COPY_from_guest     (0U << 1)
> diff --git a/xen/arch/arm/vgic-v3-its.c b/xen/arch/arm/vgic-v3-its.c
> index 6e153c698d56..58d939b85f92 100644
> --- a/xen/arch/arm/vgic-v3-its.c
> +++ b/xen/arch/arm/vgic-v3-its.c
> @@ -32,6 +32,7 @@
> #include <xen/bitops.h>
> #include <xen/config.h>
> #include <xen/domain_page.h>
> +#include <xen/guest_access.h>
> #include <xen/lib.h>
> #include <xen/init.h>
> #include <xen/softirq.h>
> @@ -39,7 +40,6 @@
> #include <xen/sched.h>
> #include <xen/sizes.h>
> #include <asm/current.h>
> -#include <asm/guest_access.h>
> #include <asm/mmio.h>
> #include <asm/gic_v3_defs.h>
> #include <asm/gic_v3_its.h>
> diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
> index ca3bbfcbb355..7301f3cd6004 100644
> --- a/xen/arch/x86/hvm/svm/svm.c
> +++ b/xen/arch/x86/hvm/svm/svm.c
> @@ -16,6 +16,7 @@
>  * this program; If not, see <http://www.gnu.org/licenses/>.
>  */
>
> +#include <xen/guest_access.h>
> #include <xen/init.h>
> #include <xen/lib.h>
> #include <xen/trace.h>
> @@ -34,7 +35,6 @@
> #include <asm/cpufeature.h>
> #include <asm/processor.h>
> #include <asm/amd.h>
> -#include <asm/guest_access.h>
> #include <asm/debugreg.h>
> #include <asm/msr.h>
> #include <asm/i387.h>
> diff --git a/xen/arch/x86/hvm/viridian/viridian.c b/xen/arch/x86/hvm/viridian/viridian.c
> index 977c1bc54fad..dc7183a54627 100644
> --- a/xen/arch/x86/hvm/viridian/viridian.c
> +++ b/xen/arch/x86/hvm/viridian/viridian.c
> @@ -5,12 +5,12 @@
>  * Hypervisor Top Level Functional Specification for more information.
>  */
>
> +#include <xen/guest_access.h>
> #include <xen/sched.h>
> #include <xen/version.h>
> #include <xen/hypercall.h>
> #include <xen/domain_page.h>
> #include <xen/param.h>
> -#include <asm/guest_access.h>
> #include <asm/guest/hyperv-tlfs.h>
> #include <asm/paging.h>
> #include <asm/p2m.h>
> diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
> index eb54aadfbafb..cb5df1e81c9c 100644
> --- a/xen/arch/x86/hvm/vmx/vmx.c
> +++ b/xen/arch/x86/hvm/vmx/vmx.c
> @@ -15,6 +15,7 @@
>  * this program; If not, see <http://www.gnu.org/licenses/>.
>  */
>
> +#include <xen/guest_access.h>
> #include <xen/init.h>
> #include <xen/lib.h>
> #include <xen/param.h>
> @@ -31,7 +32,6 @@
> #include <asm/regs.h>
> #include <asm/cpufeature.h>
> #include <asm/processor.h>
> -#include <asm/guest_access.h>
> #include <asm/debugreg.h>
> #include <asm/msr.h>
> #include <asm/p2m.h>
> diff --git a/xen/common/libelf/libelf-loader.c b/xen/common/libelf/libelf-loader.c
> index 0f468727d04a..629cc0d3e611 100644
> --- a/xen/common/libelf/libelf-loader.c
> +++ b/xen/common/libelf/libelf-loader.c
> @@ -16,7 +16,7 @@
>  */
>
> #ifdef __XEN__
> -#include <asm/guest_access.h>
> +#include <xen/guest_access.h>
> #endif
>
> #include "libelf-private.h"
> diff --git a/xen/include/asm-arm/guest_access.h b/xen/include/asm-arm/guest_access.h
> index 31b9f03f0015..b9a89c495527 100644
> --- a/xen/include/asm-arm/guest_access.h
> +++ b/xen/include/asm-arm/guest_access.h
> @@ -1,7 +1,6 @@
> #ifndef __ASM_ARM_GUEST_ACCESS_H__
> #define __ASM_ARM_GUEST_ACCESS_H__
>
> -#include <xen/guest_access.h>
> #include <xen/errno.h>
> #include <xen/sched.h>
>=20
> diff --git a/xen/lib/x86/private.h b/xen/lib/x86/private.h
> index b793181464f3..2d53bd3ced23 100644
> --- a/xen/lib/x86/private.h
> +++ b/xen/lib/x86/private.h
> @@ -4,12 +4,12 @@
> #ifdef __XEN__
>
> #include <xen/bitops.h>
> +#include <xen/guest_access.h>
> #include <xen/kernel.h>
> #include <xen/lib.h>
> #include <xen/nospec.h>
> #include <xen/types.h>
>
> -#include <asm/guest_access.h>
> #include <asm/msr-index.h>
>=20
> #define copy_to_buffer_offset copy_to_guest_offset
> --
> 2.17.1
>
>



From xen-devel-bounces@lists.xenproject.org Fri Jul 31 12:46:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 12:46:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1UQz-0000Ex-Lf; Fri, 31 Jul 2020 12:46:49 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=wy6+=BK=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1k1UQy-0000En-CA
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 12:46:48 +0000
X-Inumbo-ID: e4db022a-d32b-11ea-8e30-bc764e2007e4
Received: from EUR04-VI1-obe.outbound.protection.outlook.com (unknown
 [2a01:111:f400:fe0e::615])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e4db022a-d32b-11ea-8e30-bc764e2007e4;
 Fri, 31 Jul 2020 12:46:47 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com; 
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=eQl4wNDKcbTGDFmi+x4LzPFfB0nI4NfrbsFPh62FtZg=;
 b=RGbVCkbRMjS87Lz7Juf8G3oVMR4yiFB2KJXXtDOWyZlrTrJtYfNSD6vJ3Vr7Q6nm82fFem/HGx12j3I9JmriGlhPoXi6x5uzXBX5xT7iUtjDv3soc+Ok3rwVQqgF+WhkynlCWVyrYbldhmzilzSStdS9v9WvDmllO4TwMBiJqMw=
Received: from AM6PR05CA0020.eurprd05.prod.outlook.com (2603:10a6:20b:2e::33)
 by VE1PR08MB5134.eurprd08.prod.outlook.com (2603:10a6:803:110::19)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3216.21; Fri, 31 Jul
 2020 12:46:44 +0000
Received: from VE1EUR03FT055.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:2e:cafe::94) by AM6PR05CA0020.outlook.office365.com
 (2603:10a6:20b:2e::33) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3239.17 via Frontend
 Transport; Fri, 31 Jul 2020 12:46:43 +0000
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org;
 dmarc=bestguesspass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT055.mail.protection.outlook.com (10.152.19.158) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3239.20 via Frontend Transport; Fri, 31 Jul 2020 12:46:43 +0000
Received: ("Tessian outbound c4059ed8d7bf:v62");
 Fri, 31 Jul 2020 12:46:43 +0000
X-CheckRecipientChecked: true
X-CR-MTA-CID: ea6a28545ffd0d9b
X-CR-MTA-TID: 64aa7808
Received: from d457ff977885.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 70BFEE85-DD18-4456-AE45-4D817FFBD66B.1; 
 Fri, 31 Jul 2020 12:46:38 +0000
Received: from EUR03-DB5-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id d457ff977885.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 31 Jul 2020 12:46:38 +0000
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=ZHcRYos/s2YchWMhzLYHYwAl46TvA7YpzbpXnys2i5adjCAeOYzh9YMjR8V1oMI4BRnlW+jzwM3NqRDUyy/utd8RbCRz9buVz3B8sIsNelpKcuc5Y1hvxCUvTuo8Ons3Od71Wcn3U4tn/O3P6PhBDjz74hDVqOBGrzEkwC6sw0od7gqXnv1Cz38O+TbX47n2xJwyu4Jitlg4EDKLvkYzIElqGdcPd7YLbvr1T4Aq52jhyN+0JVd7Pcl9fv6SDg3sfWUFy63eOiRasxS7grF7xRl4JEcyV7587CrXM4xAGORu41zo2x0h2jtCBg23hqsOpKTeRXf6aso2Xn5RywSAZA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; 
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=eQl4wNDKcbTGDFmi+x4LzPFfB0nI4NfrbsFPh62FtZg=;
 b=BU9IdyMkfkuxC43+JNOW4Z1aOQDuchsA/ykGbHlg+MtWGdGzmH0xzOQ82I+MybFqmxSPpS0Gpo3+iDgYvO/hj8W6l4g+gkEzEtTsIu/g9r1NM2q4oblnptvD2l+71IAuY1JCCjFZZHulPYj7H0jOKgTfc1+SfIQfQ3R4Jvt66OvUeHK7DggCzItCfQa6iRNpeYtAJFCQP8Q6CkykNfPT4apI5Jfj3ZvdsryFFY43DzAesiAs/IvdNgTjrCnSFAUr8gN/DaV/R2T1nbsqh0pgeNlCwYykJM/fGLIifok3VknxQQkKvgq7wvcLJ+/ny0oUJMnKkeuAwcm8XP57ywNWxA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com; 
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=eQl4wNDKcbTGDFmi+x4LzPFfB0nI4NfrbsFPh62FtZg=;
 b=RGbVCkbRMjS87Lz7Juf8G3oVMR4yiFB2KJXXtDOWyZlrTrJtYfNSD6vJ3Vr7Q6nm82fFem/HGx12j3I9JmriGlhPoXi6x5uzXBX5xT7iUtjDv3soc+Ok3rwVQqgF+WhkynlCWVyrYbldhmzilzSStdS9v9WvDmllO4TwMBiJqMw=
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DB8PR08MB5323.eurprd08.prod.outlook.com (2603:10a6:10:fa::19) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3239.18; Fri, 31 Jul
 2020 12:46:37 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::7c65:30f9:4e87:f58a]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::7c65:30f9:4e87:f58a%3]) with mapi id 15.20.3239.020; Fri, 31 Jul 2020
 12:46:37 +0000
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Julien Grall <julien@xen.org>
Subject: Re: [RESEND][PATCH v2 7/7] xen/guest_access: Fix coding style in
 xen/guest_access.h
Thread-Topic: [RESEND][PATCH v2 7/7] xen/guest_access: Fix coding style in
 xen/guest_access.h
Thread-Index: AQHWZp4mnyA5b3r8N0qp4y2QMyc1KKkhpAQA
Date: Fri, 31 Jul 2020 12:46:37 +0000
Message-ID: <B4708E9E-BF20-4B02-803E-FDC7F97C55A1@arm.com>
References: <20200730181827.1670-1-julien@xen.org>
 <20200730181827.1670-8-julien@xen.org>
In-Reply-To: <20200730181827.1670-8-julien@xen.org>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
Authentication-Results-Original: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [90.126.203.125]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 4a71dec4-b7bf-44f0-0eb4-08d8354fc75b
x-ms-traffictypediagnostic: DB8PR08MB5323:|VE1PR08MB5134:
X-Microsoft-Antispam-PRVS: <VE1PR08MB51347BC92DEBE64F2D20A4A69D4E0@VE1PR08MB5134.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:972;OLM:972;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original: 7zOlsZzVS6lqiZ0u5PHSImPqOMKqUGFX6qrSuKXem5NyyXvUDgEqalI444YizNM2qRzqexZ4zQ0ve1Sh2ehKBFVfKrxeDQl87BtEA5Bdif5xQylqbSnd85xe8LNHf39Ptseg7TGKslypwOWLtZV0W5TCA8+yPNCJxRqB9jPCs+QY72kk24XevVF4SdiXUwpvBvrSZOvkb+uh1AivlpYsRHnJqn344pY1DoJbgovyxa9SaAIHUMQHqutTgbI1+qZ4HgVsQmKCFgiob71l88vy1nL5HloyuVEXQtJo6L+e0ciV1NzAKoXxFFesuF/U8btg
X-Forefront-Antispam-Report-Untrusted: CIP:255.255.255.255; CTRY:; LANG:en;
 SCL:1; SRV:; IPV:NLI; SFV:NSPM; H:DB7PR08MB3689.eurprd08.prod.outlook.com;
 PTR:; CAT:NONE; SFTY:;
 SFS:(4636009)(39860400002)(136003)(366004)(376002)(346002)(396003)(6486002)(478600001)(54906003)(71200400001)(76116006)(66946007)(8936002)(316002)(53546011)(6506007)(91956017)(36756003)(86362001)(8676002)(6512007)(5660300002)(83380400001)(2906002)(2616005)(33656002)(66476007)(66556008)(26005)(64756008)(66446008)(186003)(4326008)(6916009);
 DIR:OUT; SFP:1101; 
x-ms-exchange-antispam-messagedata: crnMAH6tvbyBaMqQ3xKoR2Mi0qC5NGbmkxe7xBQ9Xs1sVEFmAAK3/77STObQXy9bai64IobOTRHduv1xYZadYRJtjlzmChlfCaeWtAmL5lJbh0Wcx4o82E3w2plpBbFnKzw3NZtpiY1T1n/IBKWtL49QpGpdfNeXDOC4IQfhoz/d1V1JYnM3clanC8YF5GvsAvDymyShd/eQ0Rmx8ij4tNxSw2ve0E6YsdWJXWo+XQ/oZDTQ72fLJJJ+DWUfBAe935gRm2rOdn7sJEBLyz+wJMmpAB/osPsYHLeZHOEcCvXapUF0UDg5XTgyvyuIqZ+OUwJ4ZAfoMh+Ly8HFqMHm9ZnwOQsQRXpCODod5ab+Bh/e7yW56hcz/Etc9z4Qj25TPnHUxUg44ttbyaJI12lcTnHyGsnhEY87qy7Z9dGBJwM2f5oo8QD34bm1aAPojQCBeQtVPbI/fkKT/3HiSratvD0k1ClKZ5IYhiQFT+7SFCIhq+UJ0QKWx+M4Slf957ghGZBAfQyUDTdEdLEuagDHtboT+7ZHiO06v4swA/+8orGNw1X28zk/VA/GTF0u+rEfX1+PNnzFTKPEHh7LxkeShF7hlelbDkueLKJ2nP+p1RrqS6UNI8Nx8df8jD7YQVXdWwqHB78f5BP5Cc670qOi4A==
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="us-ascii"
Content-ID: <25AC9BD07B6B664AA7FA8C1080E4FAF4@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB8PR08MB5323
Original-Authentication-Results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped: VE1EUR03FT055.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs: fc886a2a-71b0-46d1-b52c-08d8354fc3ad
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: h/rEb/O8UJmy3EtPjFyvq6xji2HHOxpbAlHH0zmp6pTooRetioNvR1Z2/atZm6GHl+zzT/MsjInJCCBaBSdSvXUr5zvbSoGGCSdrktQflhSdfokldmA+hDf0S74uoyOWqSUXEmXKPSjrZ3l3UmlecdXJT9w9Umsu31AXhOODn4CwsE+y50XEZK5TsveAVm5oT+btAgdXcMcRqF5LSpLYnCNgYZb3M6LThKSZ3U1s89WoXOnrAumxT5r8QU6jf8nKgdWafxbh6uLQU0MP2CSzwKwNv9BtJYhcQr2+Hl1nVCX5iMSD8HB1P2fHL/D/4ZTts/7f5tr7f1sWxVVANM73930qZwiJ9qIs4zOOPLNN2MInCYRtMhKCVp+/hbUESHS8+pi9Wp7BMlfw1Rnxkb522A==
X-Forefront-Antispam-Report: CIP:63.35.35.123; CTRY:IE; LANG:en; SCL:1; SRV:;
 IPV:CAL; SFV:NSPM; H:64aa7808-outbound-1.mta.getcheckrecipient.com;
 PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com; CAT:NONE; SFTY:;
 SFS:(4636009)(396003)(136003)(376002)(39860400002)(346002)(46966005)(86362001)(2906002)(83380400001)(81166007)(26005)(186003)(70586007)(6506007)(70206006)(53546011)(82740400003)(54906003)(6486002)(356005)(82310400002)(47076004)(36756003)(336012)(36906005)(316002)(478600001)(33656002)(2616005)(5660300002)(8936002)(6512007)(8676002)(6862004)(4326008);
 DIR:OUT; SFP:1101; 
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 31 Jul 2020 12:46:43.5064 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 4a71dec4-b7bf-44f0-0eb4-08d8354fc75b
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d; Ip=[63.35.35.123];
 Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource: VE1EUR03FT055.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VE1PR08MB5134
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Julien Grall <jgrall@amazon.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Xen-devel <xen-devel@lists.xenproject.org>, nd <nd@arm.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>



> On 30 Jul 2020, at 20:18, Julien Grall <julien@xen.org> wrote:
> 
> From: Julien Grall <jgrall@amazon.com>
> 
>    * Add space before and after operator
>    * Align \
>    * Format comments
> 
> No functional changes expected.
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>
Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

> ---
> xen/include/xen/guest_access.h | 36 +++++++++++++++++++---------------
> 1 file changed, 20 insertions(+), 16 deletions(-)
> 
> diff --git a/xen/include/xen/guest_access.h b/xen/include/xen/guest_access.h
> index 4957b8d1f2b8..52fc7a063249 100644
> --- a/xen/include/xen/guest_access.h
> +++ b/xen/include/xen/guest_access.h
> @@ -18,20 +18,24 @@
> #define guest_handle_add_offset(hnd, nr) ((hnd).p += (nr))
> #define guest_handle_subtract_offset(hnd, nr) ((hnd).p -= (nr))
> 
> -/* Cast a guest handle (either XEN_GUEST_HANDLE or XEN_GUEST_HANDLE_PARAM)
> - * to the specified type of XEN_GUEST_HANDLE_PARAM. */
> +/*
> + * Cast a guest handle (either XEN_GUEST_HANDLE or XEN_GUEST_HANDLE_PARAM)
> + * to the specified type of XEN_GUEST_HANDLE_PARAM.
> + */
> #define guest_handle_cast(hnd, type) ({         \
>     type *_x = (hnd).p;                         \
> -    (XEN_GUEST_HANDLE_PARAM(type)) { _x };            \
> +    (XEN_GUEST_HANDLE_PARAM(type)) { _x };      \
> })
> 
> /* Cast a XEN_GUEST_HANDLE to XEN_GUEST_HANDLE_PARAM */
> #define guest_handle_to_param(hnd, type) ({                  \
>     typeof((hnd).p) _x = (hnd).p;                            \
>     XEN_GUEST_HANDLE_PARAM(type) _y = { _x };                \
> -    /* type checking: make sure that the pointers inside     \
> +    /*                                                       \
> +     * type checking: make sure that the pointers inside     \
>      * XEN_GUEST_HANDLE and XEN_GUEST_HANDLE_PARAM are of    \
> -     * the same type, then return hnd */                     \
> +     * the same type, then return hnd.                       \
> +     */                                                      \
>     (void)(&_x == &_y.p);                                    \
>     _y;                                                      \
> })
> @@ -106,13 +110,13 @@
>  * guest_handle_subrange_okay().
>  */
> 
> -#define __copy_to_guest_offset(hnd, off, ptr, nr) ({    \
> -    const typeof(*(ptr)) *_s = (ptr);                   \
> -    char (*_d)[sizeof(*_s)] = (void *)(hnd).p;          \
> -    /* Check that the handle is not for a const type */ \
> -    void *__maybe_unused _t = (hnd).p;                  \
> -    (void)((hnd).p == _s);                              \
> -    __raw_copy_to_guest(_d+(off), _s, sizeof(*_s)*(nr));\
> +#define __copy_to_guest_offset(hnd, off, ptr, nr) ({        \
> +    const typeof(*(ptr)) *_s = (ptr);                       \
> +    char (*_d)[sizeof(*_s)] = (void *)(hnd).p;              \
> +    /* Check that the handle is not for a const type */     \
> +    void *__maybe_unused _t = (hnd).p;                      \
> +    (void)((hnd).p == _s);                                  \
> +    __raw_copy_to_guest(_d + (off), _s, sizeof(*_s) * (nr));\
> })
> 
> #define __clear_guest_offset(hnd, off, nr) ({    \
>     __raw_clear_guest(_d + (off), nr);           \
> })
> 
> -#define __copy_from_guest_offset(ptr, hnd, off, nr) ({  \
> -    const typeof(*(ptr)) *_s = (hnd).p;                 \
> -    typeof(*(ptr)) *_d = (ptr);                         \
> -    __raw_copy_from_guest(_d, _s+(off), sizeof(*_d)*(nr));\
> +#define __copy_from_guest_offset(ptr, hnd, off, nr) ({          \
> +    const typeof(*(ptr)) *_s = (hnd).p;                         \
> +    typeof(*(ptr)) *_d = (ptr);                                 \
> +    __raw_copy_from_guest(_d, _s + (off), sizeof (*_d) * (nr)); \
> })
> 
> #define __copy_field_to_guest(hnd, ptr, field) ({       \
> -- 
> 2.17.1
> 
> 



From xen-devel-bounces@lists.xenproject.org Fri Jul 31 12:48:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 12:48:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1USL-0000Q1-4V; Fri, 31 Jul 2020 12:48:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=wy6+=BK=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1k1USJ-0000Pv-Ts
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 12:48:11 +0000
X-Inumbo-ID: 1625fb01-d32c-11ea-8e30-bc764e2007e4
Received: from EUR03-VE1-obe.outbound.protection.outlook.com (unknown
 [40.107.5.54]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1625fb01-d32c-11ea-8e30-bc764e2007e4;
 Fri, 31 Jul 2020 12:48:11 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com; 
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=63LxU5fX9gZp/NnEyliV3iSuEgzXGxcrXnHh99UrHXk=;
 b=NvLHOnQbhzSH6QGPGmpSxK0St2r4aMBidFxy0zU2vtoWXHxvgKhVDgdnIlZDrqPzjzpvVaAQ/2C1bbiSt/oWwqi7f/aiMozczJe323T8lOrXmRFL5dk5aEjznZib+z1d2ARDw8oOwRcSkNtnGQHmLGcpUVXVFLr5depsh1Ry6tc=
Received: from DB6PR07CA0062.eurprd07.prod.outlook.com (2603:10a6:6:2a::24) by
 HE1PR0801MB2107.eurprd08.prod.outlook.com (2603:10a6:3:4c::16) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3239.16; Fri, 31 Jul 2020 12:48:08 +0000
Received: from DB5EUR03FT021.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:6:2a:cafe::5f) by DB6PR07CA0062.outlook.office365.com
 (2603:10a6:6:2a::24) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3239.9 via Frontend
 Transport; Fri, 31 Jul 2020 12:48:07 +0000
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org;
 dmarc=bestguesspass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DB5EUR03FT021.mail.protection.outlook.com (10.152.20.238) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3239.17 via Frontend Transport; Fri, 31 Jul 2020 12:48:07 +0000
Received: ("Tessian outbound c4059ed8d7bf:v62");
 Fri, 31 Jul 2020 12:48:07 +0000
X-CheckRecipientChecked: true
X-CR-MTA-CID: 57b6d5c869fd5c42
X-CR-MTA-TID: 64aa7808
Received: from e0f6eabaecf5.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 67896373-3F65-4C53-BA77-52E665AA3336.1; 
 Fri, 31 Jul 2020 12:48:02 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id e0f6eabaecf5.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 31 Jul 2020 12:48:02 +0000
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=cAlryCAnvhgNvgpNd+u0VS2zd625GQEVcnkDdBxH1i7S85ksM1gtdCREsc7vJaStnS2b3iLWk8uyrxDVcwro2kZpXe+pyCcg/lCGTIFseOP7H0eR/YW20Szot0FDXvtLzXtTlRYcWGhpIYlBm1HcSr8BFx+DfhJiiiTmnkIlgwhm2x2j9q/unLFs/upml0T2IWoCzW6e8meDxmyvVxk1DTaWNEqZllwFdLgpk/DXhKRNURHOMK1VY5WcfsdpWqYSt/x04F8sxfafzrsXotjs2iLFt/oKPxzdSkGibodjn7Fc5l4dPJLbVLKHfqgQ+E8byz4s/bZWmCgutlBZSkxagQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; 
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=63LxU5fX9gZp/NnEyliV3iSuEgzXGxcrXnHh99UrHXk=;
 b=SjihB9uB9Bq/59yT9QYBQIWW9lHNYs2ExEbVkAM9Amm7iSX2MKUSeFzHdqoagzSghrXp95EXNUjuRdjZjGTYoBqzZ7OYyrlcfRHCQkpjevRqJYjU6Xn35VeMObEoVyEAy0vdfsl2XjT6wqZU5YE8qu9RAMaa8Ie4EPaD7mScQ7Y43zgpzC/KnkpZRAM9shNXqS7ghIY2tr3VuGikgzeXvzkJsw5TVs24l2fv/tK4TFeUqW7OKogde8ycM9Nk3TavX+mVd71bfzFjkrj3xgnPzAgRD/zy6/KNiBiWaIp5JfVo35VX0JdU58FZbR+4j9ZixECxv9o7/OfwdcDlmltUig==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com; 
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=63LxU5fX9gZp/NnEyliV3iSuEgzXGxcrXnHh99UrHXk=;
 b=NvLHOnQbhzSH6QGPGmpSxK0St2r4aMBidFxy0zU2vtoWXHxvgKhVDgdnIlZDrqPzjzpvVaAQ/2C1bbiSt/oWwqi7f/aiMozczJe323T8lOrXmRFL5dk5aEjznZib+z1d2ARDw8oOwRcSkNtnGQHmLGcpUVXVFLr5depsh1Ry6tc=
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DB8PR08MB5323.eurprd08.prod.outlook.com (2603:10a6:10:fa::19) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3239.18; Fri, 31 Jul
 2020 12:48:01 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::7c65:30f9:4e87:f58a]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::7c65:30f9:4e87:f58a%3]) with mapi id 15.20.3239.020; Fri, 31 Jul 2020
 12:48:01 +0000
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Jan Beulich <jbeulich@suse.com>
Subject: Re: kernel-doc and xen.git
Thread-Topic: kernel-doc and xen.git
Thread-Index: AQHWZhCqcVqKiEQuVU6MZpfTy49dJakhj62AgAAV1gA=
Date: Fri, 31 Jul 2020 12:48:01 +0000
Message-ID: <CB28175F-DCE6-426F-9F75-EBD4AE4B2BF6@arm.com>
References: <alpine.DEB.2.21.2007291644330.1767@sstabellini-ThinkPad-T480s>
 <9421ec73-1ec0-844f-0014-bd5a36a4036f@suse.com>
In-Reply-To: <9421ec73-1ec0-844f-0014-bd5a36a4036f@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
Authentication-Results-Original: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=arm.com;
x-originating-ip: [90.126.203.125]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 32007122-0bac-43f2-384d-08d8354ff993
x-ms-traffictypediagnostic: DB8PR08MB5323:|HE1PR0801MB2107:
X-Microsoft-Antispam-PRVS: <HE1PR0801MB210781D158C499E3A0063D8C9D4E0@HE1PR0801MB2107.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:10000;OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original: DaCQy1uocmK6FH1pGQsTZG1nHMIP1FZxAz0zccikM6DmN9COVx4NFyGYtR1xPHkOd4skg1tTLjbce6pzTiDgNlnSxO+HGjauw/lq/rfXZHNl43Ob1ws+cbb1RqBWcrtHyHflyXVIhkTrZKjw3ivMqM6YHd+B8cpcTycKLZ6xD+0y03MXBv81jHUyCpi0Y6xgGeJxifKQtHvrr58+QyZvdi9ODNbHnBRTyhmXtGQ3oM7+VxD5hjDcaOp3ae6GACZSHRDViKaApXLWt4FSqZDqVDT9eG/22oNRVYBW+XybtgwgV6bsAi/UxK0Yn1j7wWnxhzWqxJICNBXIzSorJE83Ow==
X-Forefront-Antispam-Report-Untrusted: CIP:255.255.255.255; CTRY:; LANG:en;
 SCL:1; SRV:; IPV:NLI; SFV:NSPM; H:DB7PR08MB3689.eurprd08.prod.outlook.com;
 PTR:; CAT:NONE; SFTY:;
 SFS:(4636009)(39860400002)(136003)(366004)(376002)(346002)(396003)(6486002)(478600001)(54906003)(71200400001)(76116006)(66946007)(8936002)(316002)(53546011)(6506007)(91956017)(36756003)(86362001)(8676002)(6512007)(5660300002)(3480700007)(2906002)(2616005)(33656002)(66476007)(66556008)(26005)(64756008)(66446008)(186003)(4326008)(6916009);
 DIR:OUT; SFP:1101; 
x-ms-exchange-antispam-messagedata: AxrqUiqhB1GI3cHFnuZWFIF3Lx2o1ZG7LMCGIW2hdKfwWqRIBdn4MIowyJBtBLWGJB3Im/7P11sptAW8xt7kLKZrWWBrrmCNz2aS6YXr9b2JpXTcV8DTkRhF0B/hJMFcFztFur/Fh93FhvVq5rhhgMP7gQEuWkGBdz7MKODV+fm6PIJBtUiB4R3HuOPS/1NpTAzEpR1pdU/PHbowod7R/tCoOla+Fqj4e9I6AjHjMMCSa5LRzjftEb4imsWn8I2PsGiYdfIa5P56sX2h1wHc7bpvQh8GOgvmb1zFI8i6rYEPCr6SG7ePqnMyg59LpxEGP+pN1MAjAAEZbIVQC7KB9jSUC3hr5drBt1e307sf+EDBSQAQP4V7MpLR0DiNsnVfmygG2loSrBNLurJCQXEM8IHJLGjmSzTxFY+RSxH3IF9ELxDwV947ATOL0AL4CCD/lcCXoeMQrCum+1wxfWso+IsPGThgcLN5EgrGYQIihlUtcdA/tpmEJ49vfQcloGqKmhckE1FomHc4VrqRB3NWoCdaA7dPPNXrBYG6diVqKCmnJdkVyS1ZgEU9/MuybRimkeWAcE/hZuksj81R239If6kX7KoWyTjJeZaQpKl/b+BhOI3LgjAfDMbHrTKD1N+M6jET+4iYDK24kVKb/2HRIQ==
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="us-ascii"
Content-ID: <3E2F6F798A0F5F43841836B28D6951AA@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB8PR08MB5323
Original-Authentication-Results: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped: DB5EUR03FT021.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs: ad8aea1b-37c4-4400-7e88-08d8354ff5aa
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: ML4hzMZIRaS6xHeC3GECyoBLVrHx/77B5V1EJC43Rb+tNt4GCnUK+ZYqCnrPUwtBpVdKZS6JakUUjx0Y0p9VN/oeWN7m25TvIj9TpGVeIUeZVVl758Bm2oeFocT3s8YRR1K6kr99Azlk/AX58XyfMsI4CvyYrlDGwVJ3y6odHZTtkFFOpxQ2lR0mUTYqBSOjlyoM+Z/kE2xjYyC0DAVEEHOsQoKgfio5fnTTTGsErRba1crqUxSEZsxGK0Ly75OKPDqCipkjTvG8MGUhngCsjRliHEkKSCSPbWdIhpDpiogpXWxlJ4t7Be98H8dyMAN898z0z4/K0zFU5jYl8MGEO+P2twyT9AGFeR0y9ZccVEkWDTnQc4ywg35FeKo9A2Z9zRTTB374ffNIsLd8E8CoRQ==
X-Forefront-Antispam-Report: CIP:63.35.35.123; CTRY:IE; LANG:en; SCL:1; SRV:;
 IPV:CAL; SFV:NSPM; H:64aa7808-outbound-1.mta.getcheckrecipient.com;
 PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com; CAT:NONE; SFTY:;
 SFS:(4636009)(396003)(376002)(136003)(346002)(39860400002)(46966005)(8936002)(82310400002)(336012)(86362001)(6862004)(33656002)(4326008)(5660300002)(478600001)(47076004)(356005)(2616005)(2906002)(81166007)(54906003)(8676002)(3480700007)(82740400003)(53546011)(6512007)(316002)(6506007)(6486002)(70206006)(186003)(36756003)(26005)(70586007);
 DIR:OUT; SFP:1101; 
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 31 Jul 2020 12:48:07.8718 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 32007122-0bac-43f2-384d-08d8354ff993
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d; Ip=[63.35.35.123];
 Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource: DB5EUR03FT021.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: HE1PR0801MB2107
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, nd <nd@arm.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 "committers@xenproject.org" <committers@xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>



> On 31 Jul 2020, at 13:29, Jan Beulich <jbeulich@suse.com> wrote:
> 
> On 30.07.2020 03:27, Stefano Stabellini wrote:
>> Hi all,
>> 
>> I would like to ask for your feedback on the adoption of the kernel-doc
>> format for in-code comments.
>> 
>> In the FuSa SIG we have started looking into FuSa documents for Xen. One
>> of the things we are investigating are ways to link these documents to
>> in-code comments in xen.git and vice versa.
>> 
>> In this context, Andrew Cooper suggested to have a look at "kernel-doc"
>> [1] during one of the virtual beer sessions at the last Xen Summit.
>> 
>> I did give a look at kernel-doc and it is very promising. kernel-doc is
>> a script that can generate nice rst text documents from in-code
>> comments. (The generated rst files can then be used as input for sphinx
>> to generate html docs.) The comment syntax [2] is simple and similar to
>> Doxygen:
>> 
>>    /**
>>     * function_name() - Brief description of function.
>>     * @arg1: Describe the first argument.
>>     * @arg2: Describe the second argument.
>>     *        One can provide multiple line descriptions
>>     *        for arguments.
>>     */
>> 
>> kernel-doc is actually better than Doxygen because it is a much simpler
>> tool, one we could customize to our needs and with predictable output.
>> Specifically, we could add the tagging, numbering, and referencing
>> required by FuSa requirement documents.
>> 
>> I would like your feedback on whether it would be good to start
>> converting xen.git in-code comments to the kernel-doc format so that
>> proper documents can be generated out of them. One day we could import
>> kernel-doc into xen.git/scripts and use it to generate a set of html
>> documents via sphinx.
> 
> How far is this intended to go? The example is description of a
> function's parameters, which is definitely fine (albeit I wonder
> if there's a hidden implication then that _all_ functions
> whatsoever are supposed to gain such comments). But the text just
> says much more generally "in-code comments", which could mean all
> of them. I'd consider the latter as likely going too far.

The idea in the FuSa project is rather to start with external interfaces such as hypercall definitions and public header elements.

Bertrand

> 
> Jan



From xen-devel-bounces@lists.xenproject.org Fri Jul 31 12:50:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 12:50:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1UUo-0001DO-JD; Fri, 31 Jul 2020 12:50:46 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=wy6+=BK=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1k1UUn-0001DJ-LE
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 12:50:45 +0000
X-Inumbo-ID: 724c2fa8-d32c-11ea-abad-12813bfff9fa
Received: from EUR04-HE1-obe.outbound.protection.outlook.com (unknown
 [40.107.7.48]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 724c2fa8-d32c-11ea-abad-12813bfff9fa;
 Fri, 31 Jul 2020 12:50:44 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com; 
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=A9MF/oavr0y5/X8dO6kF96zbmF0lsxqm7Y8/nZ2i0X8=;
 b=kTPrVbCBpA78hU6BaBDFf5zoZ3UVzEIGirpo092gDQSFepiJlRylIjgpOl+Zl+sl8tBNR587Lj5BxdKODX0MRvC796njVYCK/Um9Lmqq8NFgJVxV2JPADFSc/mTvMQAANwKoTNYWrEGAxwfQG50+9kmcXXXxBuYwvBNt1xs6Ms0=
Received: from AM6PR10CA0049.EURPRD10.PROD.OUTLOOK.COM (2603:10a6:209:80::26)
 by DB7PR08MB3436.eurprd08.prod.outlook.com (2603:10a6:10:44::15) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3216.26; Fri, 31 Jul
 2020 12:50:42 +0000
Received: from AM5EUR03FT025.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:209:80:cafe::69) by AM6PR10CA0049.outlook.office365.com
 (2603:10a6:209:80::26) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3239.17 via Frontend
 Transport; Fri, 31 Jul 2020 12:50:42 +0000
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org;
 dmarc=bestguesspass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT025.mail.protection.outlook.com (10.152.16.157) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3239.20 via Frontend Transport; Fri, 31 Jul 2020 12:50:41 +0000
Received: ("Tessian outbound 1c27ecaec3d6:v62");
 Fri, 31 Jul 2020 12:50:41 +0000
X-CheckRecipientChecked: true
X-CR-MTA-CID: 4a3a6ed7a05af89a
X-CR-MTA-TID: 64aa7808
Received: from 7b243da7026f.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 B286AB7A-7B52-4A9F-B5CE-977E2A6DD0BB.1; 
 Fri, 31 Jul 2020 12:50:36 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 7b243da7026f.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 31 Jul 2020 12:50:36 +0000
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=ImZboK/CfedxKZgBDzLijCuH6rPwmYg1/gbjOV6TKUjqxpsZ1U69U5BP8U7MT7rKPZIzZTMQOJKPa6eP8BWMiyEm4sHbRLnuFxQW2TLUL6/mhRKPs7rBdiI3KjF3sgPRE6qBhnc0CrgXgjHfI/Uzbf6Vq5bB/Woi9Bw4tIhWhHD6EC16JV32QEWCUkmJvOgQodrNhRG+jC3GffQuqBcyw9YnAYU12lXquMfFyFiNpgZJUxRTEyg6fB29GS6DFP6zjBufxxzaiJ2JjauCxuunjCTxEphA1cMdapJL4iM9fvkWPYb0px3D1gJMIJ64gz0gqrOeL/rgfpCHNk4lMmLpmA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; 
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=A9MF/oavr0y5/X8dO6kF96zbmF0lsxqm7Y8/nZ2i0X8=;
 b=cPXZcoam9aSWw/JOFi3ZqThr+aDwllKeroTbJ761xFSM/tZ+VsqK43c7UxH3+j16cTPRaV1AHrGQiWACpAxrlTxg25Ha0EL5CCdJSCidUiACAazWOAXy8iiTx5oba+0evTANMbyEqlAcbY6VZ5XUupG1bI2OIeF9urIeRfnUs1CiDxyikWukWAWFQlUVgZmOf2FIeJJzd0LivFaEFtZ3lV891J+fP6PDBmggAo8XujxA69HLoH65ahWHGaxy2Fw36t98bkj0LRV3EVzAXL8x0VsizITJMikE5BLOKeuqH3F3oQcspB/Mcp+xOnsiEUqsYhaJroQaeUnBdcA4WDR0bw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com; 
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=A9MF/oavr0y5/X8dO6kF96zbmF0lsxqm7Y8/nZ2i0X8=;
 b=kTPrVbCBpA78hU6BaBDFf5zoZ3UVzEIGirpo092gDQSFepiJlRylIjgpOl+Zl+sl8tBNR587Lj5BxdKODX0MRvC796njVYCK/Um9Lmqq8NFgJVxV2JPADFSc/mTvMQAANwKoTNYWrEGAxwfQG50+9kmcXXXxBuYwvBNt1xs6Ms0=
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DB7PR08MB3850.eurprd08.prod.outlook.com (2603:10a6:10:7b::29) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3239.18; Fri, 31 Jul
 2020 12:50:34 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::7c65:30f9:4e87:f58a]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::7c65:30f9:4e87:f58a%3]) with mapi id 15.20.3239.020; Fri, 31 Jul 2020
 12:50:34 +0000
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Julien Grall <julien@xen.org>
Subject: Re: [PATCH] xen/arm: cmpxchg: Add missing memory barriers in
 __cmpxchg_mb_timeout()
Thread-Topic: [PATCH] xen/arm: cmpxchg: Add missing memory barriers in
 __cmpxchg_mb_timeout()
Thread-Index: AQHWZpQy56S+RNuVHkeeRq+c5Nj1s6khpTQA
Date: Fri, 31 Jul 2020 12:50:34 +0000
Message-ID: <0A749DCC-C7A6-4E4F-BE90-E06C93CE8E91@arm.com>
References: <20200730170721.23393-1-julien@xen.org>
In-Reply-To: <20200730170721.23393-1-julien@xen.org>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
Authentication-Results-Original: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [90.126.203.125]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 10903d63-a5f4-4ae5-016a-08d835505558
x-ms-traffictypediagnostic: DB7PR08MB3850:|DB7PR08MB3436:
X-Microsoft-Antispam-PRVS: <DB7PR08MB3436DAC54F6C6CA694739C749D4E0@DB7PR08MB3436.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:8273;OLM:8273;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original: DAzUANyVmquk6vqHJ/bB+7cUI6EhxTBrrbZ+8ZxCOam7uSrhCgNv89/3oBc2TXNOFCX+mMqrVMYGaZkX/vAN4cevEqf14KVOQqaOSZbRQ2gWfnP1VQoSrWdZE4RGep1neph89MalRg/JxVAWNnbfH5NwuiU/+hNoVWw+8RuLtELw+joBq2tiCEVHtsZc47v02wpru2+unH6QLvzuayhtrs95kOyFfSiWLB5aXQyHtMkuXTqlaGK3FKuZ2fzH10fYRx7ZKCrJ3o08fAoY26kBPsGWDGDHONZ2MURadQNoz7eANciFJ37mYQFv2309IJ8uDXDPDRxD9+EWEhUHVMF7a/+c5z61JGfw5dxgDsPlKmOcOGgLY3JPan58dgeWxKXG
X-Forefront-Antispam-Report-Untrusted: CIP:255.255.255.255; CTRY:; LANG:en;
 SCL:1; SRV:; IPV:NLI; SFV:NSPM; H:DB7PR08MB3689.eurprd08.prod.outlook.com;
 PTR:; CAT:NONE; SFTY:;
 SFS:(4636009)(346002)(396003)(376002)(366004)(136003)(39860400002)(2906002)(2616005)(33656002)(5660300002)(83380400001)(186003)(6916009)(4326008)(66476007)(66556008)(26005)(66446008)(64756008)(8936002)(71200400001)(76116006)(66946007)(53546011)(6506007)(316002)(6486002)(478600001)(54906003)(86362001)(8676002)(6512007)(91956017)(36756003)(133343001);
 DIR:OUT; SFP:1101; 
x-ms-exchange-antispam-messagedata: 7WYOdE1QYgHueJfkGenuWIv/eCTW26CItAlPZu1A6Sdz3Lz0j0JV72aufzS7gIk1zZ7kQmOqV/ntTZxog8J2wJ1KIH24IyKhnIT7W95ySMsuNrduj016iqkYKwN+vUCje5oGvdm2PnxR1MqB/JESPb4+crODslSVcA7Wp9sKdMMFzPoz9h3d0N3/iJVeFt5zetpe3TnTixPFGH9tesG/wR89+TC3rqogRke5SO3Xm8dpvYu+iEMT6CKNWCDGGnoq7NUsFSCtkcI4d31SBcw0bswyb4evvBMMQw8aItNEdVSJtyCD5xUDZ03hbO6ZYv50leUp0jOIzMZVB547kCrvp8oJLoge2DKAbQT7n4gxkAVNbWZCNdMuo8CvMeEbSaM9mckArButDWrXkNxbkK757XFpH95nnx507e33vzeM9WPnE20rmifg7NLJSV6dkRmoL0TgobAthpcbjU2APAtDODDFXyNJdJHCxzx2DdKaVu01/zdyiTkIkofmCTV0TBgdO3fUwD0oOJcRvlPmB+dVvGlotYLl/uwfKHPFUdoDROm3iPWhr4e2k0jMG9UrAU45b7yJxfHIF3capcCZVhzfVQ9K9gD3UKlf8TcAFlTb7t/k80A9KehUDr23MNGsFu9Q9btTeHtYg0BS7hkouxyh+Q==
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="us-ascii"
Content-ID: <49A0913F5A683248BCC859DF04FBBE17@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB7PR08MB3850
Original-Authentication-Results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped: AM5EUR03FT025.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs: a74ecf65-c831-4c3b-7b7c-08d835505118
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: ltAD5OT5TfsVqdOWvXG6rybuPUjU9vt2d2SPpobEDFKYTilsYI/aepeQfrkPlzJIWfQNKc1IGeUcGxwOB5frkuq0ASEdJCoYeKjXgfAsy3P3k4EI/kJ+Urqvofmp/JpPU88gitYFZOLDruWU/2bo/dSegnPSNOmtCRcuJFcy5wRobhBSAhj+elRgya37ukiwiB0RE38Q2JnniTriYZeK8vmyl88lYJ6dSyzob3Nwky2g3UHrNqtZR1bO01+RBZ4+U4UmNRQb+lTtw6HRPs9ZtH0AtMgt3IcQjm4q+RSQzuD8p8mPJzf3+JGhW8dO0yX0HRJoj+UuQFCcByV3ZltQWymiw7bbCcG47OkdyW2nQPMCV9TVKcf+1Dr0oi4BEEbvxe3ZDrqLPuxe/QQjUSWXIMsHpqSlxZnchwMATlVuMnM=
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 31 Jul 2020 12:50:41.7822 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 10903d63-a5f4-4ae5-016a-08d835505558
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d; Ip=[63.35.35.123];
 Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource: AM5EUR03FT025.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB7PR08MB3436
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
 Julien Grall <jgrall@amazon.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, nd <nd@arm.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>



> On 30 Jul 2020, at 19:07, Julien Grall <julien@xen.org> wrote:
> 
> From: Julien Grall <jgrall@amazon.com>
> 
> The function __cmpxchg_mb_timeout() was intended to have the same
> semantics as __cmpxchg_mb(). Unfortunately, the memory barriers were
> not added when first implemented.
> 
> There is no known issue with the existing callers, but the barriers are
> added given this is the expected semantics in Xen.
> 
> The issue was introduced by XSA-295.
> 
> Backport: 4.8+
> Fixes: 86b0bc958373 ("xen/arm: cmpxchg: Provide a new helper that can timeout")
> Signed-off-by: Julien Grall <jgrall@amazon.com>
Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

> ---
> xen/include/asm-arm/arm32/cmpxchg.h | 8 +++++++-
> xen/include/asm-arm/arm64/cmpxchg.h | 8 +++++++-
> 2 files changed, 14 insertions(+), 2 deletions(-)
> 
> diff --git a/xen/include/asm-arm/arm32/cmpxchg.h b/xen/include/asm-arm/arm32/cmpxchg.h
> index 49ca2a0d7ab1..0770f272ee99 100644
> --- a/xen/include/asm-arm/arm32/cmpxchg.h
> +++ b/xen/include/asm-arm/arm32/cmpxchg.h
> @@ -147,7 +147,13 @@ static always_inline bool __cmpxchg_mb_timeout(volatile void *ptr,
> 					       int size,
> 					       unsigned int max_try)
> {
> -	return __int_cmpxchg(ptr, old, new, size, true, max_try);
> +	bool ret;
> +
> +	smp_mb();
> +	ret = __int_cmpxchg(ptr, old, new, size, true, max_try);
> +	smp_mb();
> +
> +	return ret;
> }
> 
> #define cmpxchg(ptr,o,n)						\
> diff --git a/xen/include/asm-arm/arm64/cmpxchg.h b/xen/include/asm-arm/arm64/cmpxchg.h
> index 5bc2e1f78674..fc5c60f0bd74 100644
> --- a/xen/include/asm-arm/arm64/cmpxchg.h
> +++ b/xen/include/asm-arm/arm64/cmpxchg.h
> @@ -160,7 +160,13 @@ static always_inline bool __cmpxchg_mb_timeout(volatile void *ptr,
> 					       int size,
> 					       unsigned int max_try)
> {
> -	return __int_cmpxchg(ptr, old, new, size, true, max_try);
> +	bool ret;
> +
> +	smp_mb();
> +	ret = __int_cmpxchg(ptr, old, new, size, true, max_try);
> +	smp_mb();
> +
> +	return ret;
> }
> 
> #define cmpxchg(ptr, o, n) \
> -- 
> 2.17.1
> 
> 



From xen-devel-bounces@lists.xenproject.org Fri Jul 31 12:51:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 12:51:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1UVB-0001Fy-Sx; Fri, 31 Jul 2020 12:51:09 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GLoN=BK=citrix.com=george.dunlap@srs-us1.protection.inumbo.net>)
 id 1k1UVA-0001Fk-LD
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 12:51:08 +0000
X-Inumbo-ID: 804b01ce-d32c-11ea-8e30-bc764e2007e4
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 804b01ce-d32c-11ea-8e30-bc764e2007e4;
 Fri, 31 Jul 2020 12:51:07 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1596199867;
 h=from:to:cc:subject:date:message-id:references:
 in-reply-to:content-id:content-transfer-encoding: mime-version;
 bh=AWD6VIBU2O2rySUu4GtSw+XVYuDNY3yDptplia+3xUU=;
 b=aoXL3ly1kBc7dGw/+YMUUMav/csBTCjOo+AWDNwbckyl8KekZa+s+rCJ
 TSGd4192OzsCvdhhK5mwWLFKWjGKpOqE2UZo1t2QWl+sMFw1kWaeCF3v+
 FKl3kcHd4IoEBxOnp/ReUZO2RPa7VI/Y2X9Rd5oCDbyIHyJ86fEv3jQ5L o=;
Authentication-Results: esa6.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
X-SBRS: 3.7
X-MesageID: 23946838
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,418,1589256000"; d="scan'208";a="23946838"
From: George Dunlap <George.Dunlap@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Subject: Re: kernel-doc and xen.git
Thread-Topic: kernel-doc and xen.git
Thread-Index: AQHWZhCpvoHj9qYsqUS/BmcPWnpp8KkhbiaAgAAWsIA=
Date: Fri, 31 Jul 2020 12:51:03 +0000
Message-ID: <F09D32F7-4826-421B-99A6-3E94756FFCEF@citrix.com>
References: <alpine.DEB.2.21.2007291644330.1767@sstabellini-ThinkPad-T480s>
 <9421ec73-1ec0-844f-0014-bd5a36a4036f@suse.com>
In-Reply-To: <9421ec73-1ec0-844f-0014-bd5a36a4036f@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-mailer: Apple Mail (2.3608.80.23.2.2)
x-ms-exchange-messagesentrepresentingtype: 1
x-ms-exchange-transport-fromentityheader: Hosted
Content-Type: text/plain; charset="utf-8"
Content-ID: <5C281D85B9670B4694BB3759D2D24023@citrix.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 "committers@xenproject.org" <committers@xenproject.org>,
 "Bertrand.Marquis@arm.com" <Bertrand.Marquis@arm.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>


> On Jul 31, 2020, at 12:29 PM, Jan Beulich <jbeulich@suse.com> wrote:
> 
> On 30.07.2020 03:27, Stefano Stabellini wrote:
>> Hi all,
>> 
>> I would like to ask for your feedback on the adoption of the kernel-doc
>> format for in-code comments.
>> 
>> In the FuSa SIG we have started looking into FuSa documents for Xen. One
>> of the things we are investigating are ways to link these documents to
>> in-code comments in xen.git and vice versa.
>> 
>> In this context, Andrew Cooper suggested to have a look at "kernel-doc"
>> [1] during one of the virtual beer sessions at the last Xen Summit.
>> 
>> I did give a look at kernel-doc and it is very promising. kernel-doc is
>> a script that can generate nice rst text documents from in-code
>> comments. (The generated rst files can then be used as input for sphinx
>> to generate html docs.) The comment syntax [2] is simple and similar to
>> Doxygen:
>> 
>>    /**
>>     * function_name() - Brief description of function.
>>     * @arg1: Describe the first argument.
>>     * @arg2: Describe the second argument.
>>     *        One can provide multiple line descriptions
>>     *        for arguments.
>>     */
>> 
>> kernel-doc is actually better than Doxygen because it is a much simpler
>> tool, one we could customize to our needs and with predictable output.
>> Specifically, we could add the tagging, numbering, and referencing
>> required by FuSa requirement documents.
>> 
>> I would like your feedback on whether it would be good to start
>> converting xen.git in-code comments to the kernel-doc format so that
>> proper documents can be generated out of them. One day we could import
>> kernel-doc into xen.git/scripts and use it to generate a set of html
>> documents via sphinx.
> 
> How far is this intended to go? The example is description of a
> function's parameters, which is definitely fine (albeit I wonder
> if there's a hidden implication then that _all_ functions
> whatsoever are supposed to gain such comments). But the text just
> says much more generally "in-code comments", which could mean all
> of them. I'd consider the latter as likely going too far.

I took him to mean comments in the code at the moment, which describe some interface, but aren’t in kernel-doc format.  Naturally we wouldn’t want *all* comments to be stuffed into a document somewhere.

 -George


From xen-devel-bounces@lists.xenproject.org Fri Jul 31 12:51:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 12:51:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1UVI-0001Hz-90; Fri, 31 Jul 2020 12:51:16 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YcXq=BK=gmail.com=andr2000@srs-us1.protection.inumbo.net>)
 id 1k1UVG-0001HZ-Kd
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 12:51:14 +0000
X-Inumbo-ID: 83bd0c26-d32c-11ea-8e30-bc764e2007e4
Received: from mail-lj1-x242.google.com (unknown [2a00:1450:4864:20::242])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 83bd0c26-d32c-11ea-8e30-bc764e2007e4;
 Fri, 31 Jul 2020 12:51:13 +0000 (UTC)
Received: by mail-lj1-x242.google.com with SMTP id t23so8799919ljc.3
 for <xen-devel@lists.xenproject.org>; Fri, 31 Jul 2020 05:51:13 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:to:cc:subject:date:message-id;
 bh=B/WYQkneNHHasyBibLNJc39P94On0q0Yf/0dGxBiCnM=;
 b=bH1oegfq4F1BjeNTK0Nmijn8MhoTBmIOgLZoaVfUGo0h2JoRT+aO14I78TeLvdBHYm
 Pt4W0qwPJ4s9jm03v4G0XX/uCdXRfRLPIyYfE7mgknlRuHgczule6rCWmicdbvvZkVR5
 FK1yg1ZEqij2/fftNtqZXSMUYpyr8RLHPKXEKIlnDfhXUcnkj7I+tcnqEu4OLCIdRvwU
 kdCVHuuI5NS6sIq8V5pwcioRFKnchzzF27Gus5xnrUP4lSfxpu9uXYL8eh3rwklF60Hw
 yDmB2W+Hz3BQ8SKUUY/ifYHZeaBqiQgP66TgtLdW15C+Z/XUEQZamtVRWmfGMyn6Brg4
 z6Jw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:to:cc:subject:date:message-id;
 bh=B/WYQkneNHHasyBibLNJc39P94On0q0Yf/0dGxBiCnM=;
 b=gjc7/A1TVz9DhXvzya6kfyEf1DWDRZamQpv63xihwdlpdvJ8HScRVsHr5X22pNfUfH
 FYVojepZL/YFXbQRxHli+pQpPouXeBKv/4XSQcL17nsCo6N3d5WZ8+/lpTGz5piUH0l4
 StYxCaj1B4ck6SL87xWaupKJ7xNr5Wy2ef1G8OlV2RKdk55132Z004eHZQdtOT9saLtf
 tUWFKYmmlv/i/C8Oug0yERMYmKlnQKSVi0DOV37UIc75kIK8r1QiHzgIDydsKEnSEgd9
 Z96T2cC5oDgUW7BI8e8s2KZU0ddsYxUMhrB6OQvFRII0ErLA+p4bhCFvliib9Bvb71wd
 mByw==
X-Gm-Message-State: AOAM531AOo9yCMgE/rjuYJ9OWR3jhckcG+4I0k01sLVrQweH/mjuJgfG
 9a1OqejHEHdPvcRRYWZvIFgw19kkA+4=
X-Google-Smtp-Source: ABdhPJwQKJDe178RRgB019PdVZcaJRxYEd+DG5Tp68DUOkwsyOEYd5yGBw9xFWQf08QnB4cwBtO9VA==
X-Received: by 2002:a2e:7a07:: with SMTP id v7mr1822750ljc.159.1596199872208; 
 Fri, 31 Jul 2020 05:51:12 -0700 (PDT)
Received: from a2klaptop.localdomain ([185.199.97.5])
 by smtp.gmail.com with ESMTPSA id s2sm1923362lfs.4.2020.07.31.05.51.10
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 31 Jul 2020 05:51:11 -0700 (PDT)
From: Oleksandr Andrushchenko <andr2000@gmail.com>
To: xen-devel@lists.xenproject.org, dri-devel@lists.freedesktop.org,
 linux-kernel@vger.kernel.org, boris.ostrovsky@oracle.com, jgross@suse.com,
 airlied@linux.ie, daniel@ffwll.ch
Subject: [PATCH 0/6] Fixes and improvements for Xen pvdrm
Date: Fri, 31 Jul 2020 15:51:03 +0300
Message-Id: <20200731125109.18666-1-andr2000@gmail.com>
X-Mailer: git-send-email 2.17.1
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: intel-gfx@lists.freedesktop.org, sstabellini@kernel.org,
 dan.carpenter@oracle.com,
 Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>

Hello,

This series contains an assorted set of fixes and improvements for
the Xen para-virtualized display driver and grant device driver which
I have collected over the last couple of months:

1. Minor fixes to grant device driver and drm/xen-front.

2. A new format (YUYV) added to the list of PV DRM supported formats,
which allows the driver to be used in zero-copy use-cases when
a camera device is the source of the dma-bufs.

3. Synchronization with the latest para-virtualized protocol definition
in Xen [1].

4. SGT offset is now propagated to the backend: while importing a dmabuf
it is possible that the buffer's data starts at a non-zero offset,
which is indicated by the SGT offset. This is needed for some GPUs
which have a non-zero offset.

5. Version 2 of the Xen displif protocol adds the XENDISPL_OP_GET_EDID
request, which allows frontends to request an EDID structure per
connector. This request is optional; if it is not supported by the
backend, the visible area is still defined by the relevant
XenStore "resolution" property.
If the backend provides an EDID via the XENDISPL_OP_GET_EDID request,
then its values must take precedence over the resolutions defined in
XenStore.

Thank you,
Oleksandr Andrushchenko

[1] https://xenbits.xen.org/gitweb/?p=xen.git;a=commit;h=c27a184225eab54d20435c8cab5ad0ef384dc2c0

Oleksandr Andrushchenko (6):
  xen/gntdev: Fix dmabuf import with non-zero sgt offset
  drm/xen-front: Fix misused IS_ERR_OR_NULL checks
  drm/xen-front: Add YUYV to supported formats
  xen: Sync up with the canonical protocol definition in Xen
  drm/xen-front: Pass dumb buffer data offset to the backend
  drm/xen-front: Add support for EDID based configuration

 drivers/gpu/drm/xen/xen_drm_front.c         | 72 +++++++++++++++-
 drivers/gpu/drm/xen/xen_drm_front.h         | 11 ++-
 drivers/gpu/drm/xen/xen_drm_front_cfg.c     | 82 +++++++++++++++++++
 drivers/gpu/drm/xen/xen_drm_front_cfg.h     |  7 ++
 drivers/gpu/drm/xen/xen_drm_front_conn.c    | 27 +++++-
 drivers/gpu/drm/xen/xen_drm_front_evtchnl.c |  3 +
 drivers/gpu/drm/xen/xen_drm_front_evtchnl.h |  3 +
 drivers/gpu/drm/xen/xen_drm_front_gem.c     | 11 +--
 drivers/gpu/drm/xen/xen_drm_front_kms.c     |  7 +-
 drivers/xen/gntdev-dmabuf.c                 |  8 ++
 include/xen/interface/io/displif.h          | 91 ++++++++++++++++++++-
 11 files changed, 305 insertions(+), 17 deletions(-)

-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Fri Jul 31 12:51:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 12:51:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1UVL-0001JL-IW; Fri, 31 Jul 2020 12:51:19 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YcXq=BK=gmail.com=andr2000@srs-us1.protection.inumbo.net>)
 id 1k1UVK-0001HZ-I5
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 12:51:18 +0000
X-Inumbo-ID: 84af1b9c-d32c-11ea-8e30-bc764e2007e4
Received: from mail-lf1-x142.google.com (unknown [2a00:1450:4864:20::142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 84af1b9c-d32c-11ea-8e30-bc764e2007e4;
 Fri, 31 Jul 2020 12:51:15 +0000 (UTC)
Received: by mail-lf1-x142.google.com with SMTP id j22so10896253lfm.2
 for <xen-devel@lists.xenproject.org>; Fri, 31 Jul 2020 05:51:15 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:to:cc:subject:date:message-id:in-reply-to:references;
 bh=P8zgpF044pgTW1uK5Q8AXrv/7bel3rEWKcAtjS7WDeo=;
 b=WgUyGPBfwmJ9bTlFZ1iykzmaC+/eyxO2mETTCyh0EN7DEuu6cfihmRQOfjN0UX17KJ
 1XK/aClkr4jouIfR2Uq33SHtLP56E4y8DgZ87NMJDRnE/5HMKnL0qQRmTaRqEurFsoxI
 D/R3+eHgdIAl42+OdJRKAhPJheLy6IATe2ljDZKS9Ir/9TnHfgeT9ND6Q078g/e5NeAs
 I2W7ZBR4JYFb7XWCyVQlSdl0Ydv93Ytn/B8SI+DrRwYKxIF7FMjE6FYcroqJuyHyrA1S
 e7Z785IPUZkU0urptiMhDbgonuN0KgLkpcNKcT+5pvixgh0YZDlTJ29FOCn97utl+t8+
 hahQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
 :references;
 bh=P8zgpF044pgTW1uK5Q8AXrv/7bel3rEWKcAtjS7WDeo=;
 b=rv3q0HIgph9RvopwE15MHDPbwNdcwxyW/8MD9y7kG0wdGxSvZyEq9LTD3oNsMmjBdK
 3nzYVXiRKX4aTVYg7aDg4YhjPrav2dZdMwOqs3mfIQLcuT7UqhE+6NDDk9PRuHVPpqy8
 a2MdvIUZdLX1fFXggRSXH2HjB2Jf9dhqXrOlR3JFCsNqJVYyze04ik+s9+qr5yaT1bHx
 zPpiQn9af63FnNBoM+1NNf+rIfNpgKZ1lURqAgYB9rMaJHAon4dpg0lsgvtEm2sDLRdV
 6lzXbe5buBhfmJqOxxm8xPgr9i1eqYnG5UqcYMTZk33b4x/ewuyeIukwpDESFpToDv8S
 cSTg==
X-Gm-Message-State: AOAM531M4tTvHi68nwPo4T0owXUZV02iETMwvwQpqfrkD7arbrhs7CwJ
 AXoHVCYoAWB0tuMgYr17A4j0guzb45Y=
X-Google-Smtp-Source: ABdhPJzATe0fCD2sbhssHSnUfNXc6kWFAY41kSspFrVQTiQcz0U9VH0isUKE3+W9ohrCIfVkdQXBSg==
X-Received: by 2002:ac2:5683:: with SMTP id 3mr1948307lfr.69.1596199873919;
 Fri, 31 Jul 2020 05:51:13 -0700 (PDT)
Received: from a2klaptop.localdomain ([185.199.97.5])
 by smtp.gmail.com with ESMTPSA id s2sm1923362lfs.4.2020.07.31.05.51.12
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 31 Jul 2020 05:51:13 -0700 (PDT)
From: Oleksandr Andrushchenko <andr2000@gmail.com>
To: xen-devel@lists.xenproject.org, dri-devel@lists.freedesktop.org,
 linux-kernel@vger.kernel.org, boris.ostrovsky@oracle.com, jgross@suse.com,
 airlied@linux.ie, daniel@ffwll.ch
Subject: [PATCH 1/6] xen/gntdev: Fix dmabuf import with non-zero sgt offset
Date: Fri, 31 Jul 2020 15:51:04 +0300
Message-Id: <20200731125109.18666-2-andr2000@gmail.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20200731125109.18666-1-andr2000@gmail.com>
References: <20200731125109.18666-1-andr2000@gmail.com>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: intel-gfx@lists.freedesktop.org, sstabellini@kernel.org,
 dan.carpenter@oracle.com,
 Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>

It is possible that the scatter-gather table has a non-zero data
offset during dmabuf import, but user-space doesn't expect that.
Fix this by failing the import, so user-space doesn't access the wrong data.

Fixes: 37ccb44d0b00 ("xen/gntdev: Implement dma-buf import functionality")

Signed-off-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
---
 drivers/xen/gntdev-dmabuf.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/drivers/xen/gntdev-dmabuf.c b/drivers/xen/gntdev-dmabuf.c
index 75d3bb948bf3..b1b6eebafd5d 100644
--- a/drivers/xen/gntdev-dmabuf.c
+++ b/drivers/xen/gntdev-dmabuf.c
@@ -613,6 +613,14 @@ dmabuf_imp_to_refs(struct gntdev_dmabuf_priv *priv, struct device *dev,
 		goto fail_detach;
 	}
 
+	/* Check that we have zero offset. */
+	if (sgt->sgl->offset) {
+		ret = ERR_PTR(-EINVAL);
+		pr_debug("DMA buffer has %d bytes offset, user-space expects 0\n",
+			 sgt->sgl->offset);
+		goto fail_unmap;
+	}
+
 	/* Check number of pages that imported buffer has. */
 	if (attach->dmabuf->size != gntdev_dmabuf->nr_pages << PAGE_SHIFT) {
 		ret = ERR_PTR(-EINVAL);
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Fri Jul 31 12:51:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 12:51:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1UVQ-0001LN-Rr; Fri, 31 Jul 2020 12:51:24 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YcXq=BK=gmail.com=andr2000@srs-us1.protection.inumbo.net>)
 id 1k1UVP-0001HZ-IN
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 12:51:23 +0000
X-Inumbo-ID: 85a1fed4-d32c-11ea-8e30-bc764e2007e4
Received: from mail-lj1-x242.google.com (unknown [2a00:1450:4864:20::242])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 85a1fed4-d32c-11ea-8e30-bc764e2007e4;
 Fri, 31 Jul 2020 12:51:16 +0000 (UTC)
Received: by mail-lj1-x242.google.com with SMTP id w14so2461070ljj.4
 for <xen-devel@lists.xenproject.org>; Fri, 31 Jul 2020 05:51:16 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:to:cc:subject:date:message-id:in-reply-to:references;
 bh=hf2m1CaACgkl3i0FS0MjmjyDubimZdTD8G//CB5QIMY=;
 b=WpFdItwLmE1h/jcPKXj3mRmyaF+TKN3FWkbAvN9i01/b/GBnQQrfqTYPD3KkCl+HER
 4a3yBnoP8xZ1+beeiIJ9vfMIpoL/Uq0aKjRgj9n3gDAB35PWSD+52cvI8qsbw9MMxnde
 Ogk+jE2mX/g7LFVkGpGWoSr7sD8Yog+cmIHDyOr5lEBZFzgPS2kU4Fuv8LCiEtAMJi4Y
 V/MPagzOAVhsCHI/ruYoFU5NgrrL0yQ5Bln8moUVlTyOMw3rTQ7cCkra3lYcATWDogf/
 jBRFwFf8eSKHB5WoChdB5jL9GonvtcReyID4iSV0w2pPa3lh5NAPcP/9Jan5UaU8YB8u
 zvVQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
 :references;
 bh=hf2m1CaACgkl3i0FS0MjmjyDubimZdTD8G//CB5QIMY=;
 b=RJZCsqxpB8QfWIPrCYKglcsmcc1xPHjmXRfzyP/iUY3y1jf8roz+bnPSwk6Sdd/Lyc
 aAe4+GaKQjHmkkIySG1ksaaptDbuUdGR3KUb5QOHU3XDHtU6xx4bLMfoo6OVHYOmkbD6
 GTxSsCznE2QB8ZEwhKxwnPUd/lUix0XcpMgXD2Q6zimusCOCiSXwCWNlDA+90uLTbHXR
 V+IlSYzGY7Bh9mzSKWyAItReYx/7J10Ou3TBWeICmFh0V2nLiqCHfRmOOqvJO5J0k4/O
 VQHcTD0b3ucIoL2og30CwIhOo+/qDAIxxSK/Eq3SIJQgNIfKmnE/P7aNZgQ6/5C+sJHa
 Wduw==
X-Gm-Message-State: AOAM533zcSdaJZqFa8ZBPpTF1z1MvMH7wxKZAeR1jfrlKU7tCjIuTd45
 1+juv/Kzql2JIqWfdXJCo1OlgSJy5pA=
X-Google-Smtp-Source: ABdhPJyAr3EgNy6m0bxN5SP4DJ54ompCgY6CBLlI16M8yY4FlCVLhiNPKf3qPH+txZ2PmAgUTW2prA==
X-Received: by 2002:a2e:9e43:: with SMTP id g3mr1932560ljk.309.1596199875447; 
 Fri, 31 Jul 2020 05:51:15 -0700 (PDT)
Received: from a2klaptop.localdomain ([185.199.97.5])
 by smtp.gmail.com with ESMTPSA id s2sm1923362lfs.4.2020.07.31.05.51.14
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 31 Jul 2020 05:51:14 -0700 (PDT)
From: Oleksandr Andrushchenko <andr2000@gmail.com>
To: xen-devel@lists.xenproject.org, dri-devel@lists.freedesktop.org,
 linux-kernel@vger.kernel.org, boris.ostrovsky@oracle.com, jgross@suse.com,
 airlied@linux.ie, daniel@ffwll.ch
Subject: [PATCH 2/6] drm/xen-front: Fix misused IS_ERR_OR_NULL checks
Date: Fri, 31 Jul 2020 15:51:05 +0300
Message-Id: <20200731125109.18666-3-andr2000@gmail.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20200731125109.18666-1-andr2000@gmail.com>
References: <20200731125109.18666-1-andr2000@gmail.com>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: intel-gfx@lists.freedesktop.org, sstabellini@kernel.org,
 dan.carpenter@oracle.com,
 Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>

The patch c575b7eeb89f: "drm/xen-front: Add support for Xen PV
display frontend" from Apr 3, 2018, leads to the following static
checker warning:

	drivers/gpu/drm/xen/xen_drm_front_gem.c:140 xen_drm_front_gem_create()
	warn: passing zero to 'ERR_CAST'

drivers/gpu/drm/xen/xen_drm_front_gem.c
   133  struct drm_gem_object *xen_drm_front_gem_create(struct drm_device *dev,
   134                                                  size_t size)
   135  {
   136          struct xen_gem_object *xen_obj;
   137
   138          xen_obj = gem_create(dev, size);
   139          if (IS_ERR_OR_NULL(xen_obj))
   140                  return ERR_CAST(xen_obj);

Fix this and the rest of the misused IS_ERR_OR_NULL checks in the
driver.

Fixes: c575b7eeb89f ("drm/xen-front: Add support for Xen PV display frontend")

Signed-off-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
---
 drivers/gpu/drm/xen/xen_drm_front.c     | 4 ++--
 drivers/gpu/drm/xen/xen_drm_front_gem.c | 8 ++++----
 drivers/gpu/drm/xen/xen_drm_front_kms.c | 2 +-
 3 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/drivers/gpu/drm/xen/xen_drm_front.c b/drivers/gpu/drm/xen/xen_drm_front.c
index 3e660fb111b3..88db2726e8ce 100644
--- a/drivers/gpu/drm/xen/xen_drm_front.c
+++ b/drivers/gpu/drm/xen/xen_drm_front.c
@@ -400,8 +400,8 @@ static int xen_drm_drv_dumb_create(struct drm_file *filp,
 	args->size = args->pitch * args->height;
 
 	obj = xen_drm_front_gem_create(dev, args->size);
-	if (IS_ERR_OR_NULL(obj)) {
-		ret = PTR_ERR_OR_ZERO(obj);
+	if (IS_ERR(obj)) {
+		ret = PTR_ERR(obj);
 		goto fail;
 	}
 
diff --git a/drivers/gpu/drm/xen/xen_drm_front_gem.c b/drivers/gpu/drm/xen/xen_drm_front_gem.c
index f0b85e094111..4ec8a49241e1 100644
--- a/drivers/gpu/drm/xen/xen_drm_front_gem.c
+++ b/drivers/gpu/drm/xen/xen_drm_front_gem.c
@@ -83,7 +83,7 @@ static struct xen_gem_object *gem_create(struct drm_device *dev, size_t size)
 
 	size = round_up(size, PAGE_SIZE);
 	xen_obj = gem_create_obj(dev, size);
-	if (IS_ERR_OR_NULL(xen_obj))
+	if (IS_ERR(xen_obj))
 		return xen_obj;
 
 	if (drm_info->front_info->cfg.be_alloc) {
@@ -117,7 +117,7 @@ static struct xen_gem_object *gem_create(struct drm_device *dev, size_t size)
 	 */
 	xen_obj->num_pages = DIV_ROUND_UP(size, PAGE_SIZE);
 	xen_obj->pages = drm_gem_get_pages(&xen_obj->base);
-	if (IS_ERR_OR_NULL(xen_obj->pages)) {
+	if (IS_ERR(xen_obj->pages)) {
 		ret = PTR_ERR(xen_obj->pages);
 		xen_obj->pages = NULL;
 		goto fail;
@@ -136,7 +136,7 @@ struct drm_gem_object *xen_drm_front_gem_create(struct drm_device *dev,
 	struct xen_gem_object *xen_obj;
 
 	xen_obj = gem_create(dev, size);
-	if (IS_ERR_OR_NULL(xen_obj))
+	if (IS_ERR(xen_obj))
 		return ERR_CAST(xen_obj);
 
 	return &xen_obj->base;
@@ -194,7 +194,7 @@ xen_drm_front_gem_import_sg_table(struct drm_device *dev,
 
 	size = attach->dmabuf->size;
 	xen_obj = gem_create_obj(dev, size);
-	if (IS_ERR_OR_NULL(xen_obj))
+	if (IS_ERR(xen_obj))
 		return ERR_CAST(xen_obj);
 
 	ret = gem_alloc_pages_array(xen_obj, size);
diff --git a/drivers/gpu/drm/xen/xen_drm_front_kms.c b/drivers/gpu/drm/xen/xen_drm_front_kms.c
index 78096bbcd226..ef11b1e4de39 100644
--- a/drivers/gpu/drm/xen/xen_drm_front_kms.c
+++ b/drivers/gpu/drm/xen/xen_drm_front_kms.c
@@ -60,7 +60,7 @@ fb_create(struct drm_device *dev, struct drm_file *filp,
 	int ret;
 
 	fb = drm_gem_fb_create_with_funcs(dev, filp, mode_cmd, &fb_funcs);
-	if (IS_ERR_OR_NULL(fb))
+	if (IS_ERR(fb))
 		return fb;
 
 	gem_obj = fb->obj[0];
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Fri Jul 31 12:51:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 12:51:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1UVW-0001OA-5M; Fri, 31 Jul 2020 12:51:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YcXq=BK=gmail.com=andr2000@srs-us1.protection.inumbo.net>)
 id 1k1UVU-0001HZ-IQ
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 12:51:28 +0000
X-Inumbo-ID: 861a5ecf-d32c-11ea-8e30-bc764e2007e4
Received: from mail-lf1-x142.google.com (unknown [2a00:1450:4864:20::142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 861a5ecf-d32c-11ea-8e30-bc764e2007e4;
 Fri, 31 Jul 2020 12:51:18 +0000 (UTC)
Received: by mail-lf1-x142.google.com with SMTP id k13so16822339lfo.0
 for <xen-devel@lists.xenproject.org>; Fri, 31 Jul 2020 05:51:18 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:to:cc:subject:date:message-id:in-reply-to:references;
 bh=in3e/rhynkLwVD5V9PwoaNI3UJD8A73TVa8u/14UsAg=;
 b=dvGhnctdpY5kxFxb0suRBtGNcio8aozQM9kdwS+Lnwi6/6de1sgC6u7OXXgupL3KX2
 CzjQ8fZkjfpjEM8Ct/Rl4YPy3RXEMyiAxnq6w0+TAqibQWiMtXMmRYiVdbWlDCLDExp0
 oRSTR1udpBecZD7hP4CuP4fsW1WUZOXMyRpiX52GsPu8zk/CAJpl0Ozg9mQi94llxQ9A
 YDcTUgBuVLAXdjVPehaqgATExkYwQHpVbPFgQcFFGFYUxffiNQmOCqYF1R34fiJLe/+4
 bu77en9ko2W4Qny1ym2XxbQtdj/47CBtRgk9E5QWu0Vo7/i4SQO/zCHjVP+UYAjeOzTP
 F4rw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
 :references;
 bh=in3e/rhynkLwVD5V9PwoaNI3UJD8A73TVa8u/14UsAg=;
 b=QmFLhNoBDjTQjqzzXUP/v74e5DP5cGalEsXx+iV7PGXHw3wYJ4okx07h2DfbpivTzv
 Znnufr1NR5eeAZ8yKL6WFug5Tbe3E84LzHinCQHj3s3p0lY/ZanhYc8D3Fg59Z0cw1fa
 ptDCakWG8cO1EMo5pa0Cq5Z/GqeTmDzetmfN86y7x2tPiDo86GmfcrL3xK0kpZ8aEFaR
 vyhXLsMP2l3RjadiLa0aZqgqVa/a6/0FqHzUPoanjrGeEvweqL1JpCQ+76k+G9DW9FtC
 +CCAv/2OMFtj47Y2i8pG+XF/a0YMVdB57j7hOpJE4z6SdFHoj/GUj3goMAMFrN6jk5/h
 gcpQ==
X-Gm-Message-State: AOAM530TTwuVQzhHyJb8GW5QsNCeeWWD7IEFpVsA44yOkK/5KK6MMrOB
 FU3GPS8yaQxWEQlX6+RR7QsBUeT/yIg=
X-Google-Smtp-Source: ABdhPJxYkcBz7mtQ/VNmuWNPBGMmzaqks8S8snreTQ/Yahk5mchXkjcy3WbaQqY4RS/Kt1wg3P3OVw==
X-Received: by 2002:a19:ae06:: with SMTP id f6mr1992770lfc.42.1596199876892;
 Fri, 31 Jul 2020 05:51:16 -0700 (PDT)
Received: from a2klaptop.localdomain ([185.199.97.5])
 by smtp.gmail.com with ESMTPSA id s2sm1923362lfs.4.2020.07.31.05.51.15
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 31 Jul 2020 05:51:16 -0700 (PDT)
From: Oleksandr Andrushchenko <andr2000@gmail.com>
To: xen-devel@lists.xenproject.org, dri-devel@lists.freedesktop.org,
 linux-kernel@vger.kernel.org, boris.ostrovsky@oracle.com, jgross@suse.com,
 airlied@linux.ie, daniel@ffwll.ch
Subject: [PATCH 3/6] drm/xen-front: Add YUYV to supported formats
Date: Fri, 31 Jul 2020 15:51:06 +0300
Message-Id: <20200731125109.18666-4-andr2000@gmail.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20200731125109.18666-1-andr2000@gmail.com>
References: <20200731125109.18666-1-andr2000@gmail.com>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: intel-gfx@lists.freedesktop.org, sstabellini@kernel.org,
 dan.carpenter@oracle.com,
 Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>

Add YUYV to the list of supported formats, so the frontend can work
with the formats used by cameras and other hardware.

Signed-off-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
---
 drivers/gpu/drm/xen/xen_drm_front_conn.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/gpu/drm/xen/xen_drm_front_conn.c b/drivers/gpu/drm/xen/xen_drm_front_conn.c
index 459702fa990e..44f1f70c0aed 100644
--- a/drivers/gpu/drm/xen/xen_drm_front_conn.c
+++ b/drivers/gpu/drm/xen/xen_drm_front_conn.c
@@ -33,6 +33,7 @@ static const u32 plane_formats[] = {
 	DRM_FORMAT_ARGB4444,
 	DRM_FORMAT_XRGB1555,
 	DRM_FORMAT_ARGB1555,
+	DRM_FORMAT_YUYV,
 };
 
 const u32 *xen_drm_front_conn_get_formats(int *format_count)
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Fri Jul 31 12:51:35 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 12:51:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1UVb-0001R2-FS; Fri, 31 Jul 2020 12:51:35 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YcXq=BK=gmail.com=andr2000@srs-us1.protection.inumbo.net>)
 id 1k1UVZ-0001HZ-IU
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 12:51:33 +0000
X-Inumbo-ID: 876e36d8-d32c-11ea-8e30-bc764e2007e4
Received: from mail-lf1-x143.google.com (unknown [2a00:1450:4864:20::143])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 876e36d8-d32c-11ea-8e30-bc764e2007e4;
 Fri, 31 Jul 2020 12:51:19 +0000 (UTC)
Received: by mail-lf1-x143.google.com with SMTP id j22so10896360lfm.2
 for <xen-devel@lists.xenproject.org>; Fri, 31 Jul 2020 05:51:19 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:to:cc:subject:date:message-id:in-reply-to:references;
 bh=OegNMCy7sd4YzoHmkSxRWimW6gQOJebTWr9iC4Yy25M=;
 b=oyEx3PKzdJIIXSy691f9LlHCTe0UkaNkvc6+33eXGtZ6qb6/BzMUd7PJCNP7erQpQf
 rAbeFcLX9M5eRkOqb/GN/c1CmRHOAOEOK5r7MRycTck1COq2fLmWvFmyp7oQFEyT/x5j
 vuUV/B5Ty8I62qWbpcz3LCBAk4v84PQL8J6u1lcQRA35bAkOUPttVTySf5KsWJikvCwr
 cNSzJTK0NrEEbCN+LFmTzx7MFTTsUCfkWiQ642E7tVODBEXIKg15u6sHbqtHIJyY9Lyw
 qo9/SrsLoigzUTcYp3I9HK92bM7fEFwbhQKbNXqoKrKVpqVP8iwYAdA7ReON8q1L41Ki
 5pFg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
 :references;
 bh=OegNMCy7sd4YzoHmkSxRWimW6gQOJebTWr9iC4Yy25M=;
 b=r6cxbrZdh2b5wzDpE5ZDfpqQ6IlORHrasIxUY+I1uIMc8dQHSVxTuSwC6V07yYcLKA
 oHvB79l2uV/QNQspHOsIjmfM07QnUu6PmKljEodVnj/wAA7tJcBKmiYrfccziP9Mud4O
 +AjkmSqsdt8v5D+PNLM/FIrRTn/3bPbxLnBsVmejgxinEIjOVeXjJSM9DUzD/e8PWW2P
 LHWFcqHFI3kOjFgQmjt0NsfKdnW0Hl0SYec3FD1mEj4YZHKdWjg87uI/aEEL3Rdq63Ua
 S+grVG7eJaOEOmIwKoP5OkJ6zWBJgTg2TTbwGNJ1QUAGcWvOKA2q0DwhMaf/jZPxWyXu
 6zZg==
X-Gm-Message-State: AOAM533dqH1Kk9oV4v2NUOzbQ4xWFRr2V57VtIvklWoRObF6ItrWp8Mt
 PqkhIcJOpgqkXde21iD5SsmQ2ZPBMKs=
X-Google-Smtp-Source: ABdhPJy9Wqj5+XdlvDNNN+TvI5vrQTMO+5R57ssPhVVE/1YOzNCRvq0qSuixu5RL8ILDT5KUXx2aKg==
X-Received: by 2002:a05:6512:3a5:: with SMTP id
 v5mr1944657lfp.138.1596199878393; 
 Fri, 31 Jul 2020 05:51:18 -0700 (PDT)
Received: from a2klaptop.localdomain ([185.199.97.5])
 by smtp.gmail.com with ESMTPSA id s2sm1923362lfs.4.2020.07.31.05.51.17
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 31 Jul 2020 05:51:17 -0700 (PDT)
From: Oleksandr Andrushchenko <andr2000@gmail.com>
To: xen-devel@lists.xenproject.org, dri-devel@lists.freedesktop.org,
 linux-kernel@vger.kernel.org, boris.ostrovsky@oracle.com, jgross@suse.com,
 airlied@linux.ie, daniel@ffwll.ch
Subject: [PATCH 4/6] xen: Sync up with the canonical protocol definition in Xen
Date: Fri, 31 Jul 2020 15:51:07 +0300
Message-Id: <20200731125109.18666-5-andr2000@gmail.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20200731125109.18666-1-andr2000@gmail.com>
References: <20200731125109.18666-1-andr2000@gmail.com>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: intel-gfx@lists.freedesktop.org, sstabellini@kernel.org,
 dan.carpenter@oracle.com,
 Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>

Sync up with the canonical definition of the
display protocol in Xen.

1. Add protocol version as an integer

The version string, which is in fact an integer, is hard to handle in
code that supports multiple protocol versions. To simplify this, also
provide the version as an integer.

2. Pass buffer offset with XENDISPL_OP_DBUF_CREATE

There are cases when a display data buffer is created with a non-zero
offset to the start of the data. Handle such cases by providing that
offset while creating a display buffer.

3. Add XENDISPL_OP_GET_EDID command

Add an optional request for reading the Extended Display Identification
Data (EDID) structure, which allows better configuration of the
display connectors than the configuration set in XenStore.
With this change connectors may have multiple resolutions defined,
with respect to the detailed timing definitions and additional
properties normally provided by displays.

If this request is not supported by the backend, then the visible area
is defined by the relevant XenStore "resolution" property.

If the backend provides extended display identification data (EDID) via
the XENDISPL_OP_GET_EDID request, then the EDID values must take
precedence over the resolutions defined in XenStore.

4. Bump protocol version to 2.

Signed-off-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
---
 include/xen/interface/io/displif.h | 91 +++++++++++++++++++++++++++++-
 1 file changed, 88 insertions(+), 3 deletions(-)

diff --git a/include/xen/interface/io/displif.h b/include/xen/interface/io/displif.h
index fdc279dc4a88..c2d900186883 100644
--- a/include/xen/interface/io/displif.h
+++ b/include/xen/interface/io/displif.h
@@ -38,7 +38,8 @@
  *                           Protocol version
  ******************************************************************************
  */
-#define XENDISPL_PROTOCOL_VERSION	"1"
+#define XENDISPL_PROTOCOL_VERSION	"2"
+#define XENDISPL_PROTOCOL_VERSION_INT	 2
 
 /*
  ******************************************************************************
@@ -202,6 +203,9 @@
  *      Width and height of the connector in pixels separated by
  *      XENDISPL_RESOLUTION_SEPARATOR. This defines visible area of the
  *      display.
+ *      If backend provides extended display identification data (EDID) with
+ *      XENDISPL_OP_GET_EDID request then EDID values must take precedence
+ *      over the resolutions defined here.
  *
  *------------------ Connector Request Transport Parameters -------------------
  *
@@ -349,6 +353,8 @@
 #define XENDISPL_OP_FB_DETACH		0x13
 #define XENDISPL_OP_SET_CONFIG		0x14
 #define XENDISPL_OP_PG_FLIP		0x15
+/* The below command is available in protocol version 2 and above. */
+#define XENDISPL_OP_GET_EDID		0x16
 
 /*
  ******************************************************************************
@@ -377,6 +383,10 @@
 #define XENDISPL_FIELD_BE_ALLOC		"be-alloc"
 #define XENDISPL_FIELD_UNIQUE_ID	"unique-id"
 
+#define XENDISPL_EDID_BLOCK_SIZE	128
+#define XENDISPL_EDID_BLOCK_COUNT	256
+#define XENDISPL_EDID_MAX_SIZE		(XENDISPL_EDID_BLOCK_SIZE * XENDISPL_EDID_BLOCK_COUNT)
+
 /*
  ******************************************************************************
  *                          STATUS RETURN CODES
@@ -451,7 +461,9 @@
  * +----------------+----------------+----------------+----------------+
  * |                           gref_directory                          | 40
  * +----------------+----------------+----------------+----------------+
- * |                             reserved                              | 44
+ * |                             data_ofs                              | 44
+ * +----------------+----------------+----------------+----------------+
+ * |                             reserved                              | 48
  * +----------------+----------------+----------------+----------------+
  * |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/|
  * +----------------+----------------+----------------+----------------+
@@ -494,6 +506,7 @@
  *   buffer size (buffer_sz) exceeds what can be addressed by this single page,
  *   then reference to the next page must be supplied (see gref_dir_next_page
  *   below)
+ * data_ofs - uint32_t, offset of the data in the buffer, octets
  */
 
 #define XENDISPL_DBUF_FLG_REQ_ALLOC	(1 << 0)
@@ -506,6 +519,7 @@ struct xendispl_dbuf_create_req {
 	uint32_t buffer_sz;
 	uint32_t flags;
 	grant_ref_t gref_directory;
+	uint32_t data_ofs;
 };
 
 /*
@@ -731,6 +745,44 @@ struct xendispl_page_flip_req {
 	uint64_t fb_cookie;
 };
 
+/*
+ * Request EDID - request EDID describing current connector:
+ *         0                1                 2               3        octet
+ * +----------------+----------------+----------------+----------------+
+ * |               id                | _OP_GET_EDID   |   reserved     | 4
+ * +----------------+----------------+----------------+----------------+
+ * |                             buffer_sz                             | 8
+ * +----------------+----------------+----------------+----------------+
+ * |                          gref_directory                           | 12
+ * +----------------+----------------+----------------+----------------+
+ * |                             reserved                              | 16
+ * +----------------+----------------+----------------+----------------+
+ * |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/|
+ * +----------------+----------------+----------------+----------------+
+ * |                             reserved                              | 64
+ * +----------------+----------------+----------------+----------------+
+ *
+ * Notes:
+ *   - This command is not available in protocol version 1 and should be
+ *     ignored.
+ *   - This request is optional and if not supported then visible area
+ *     is defined by the relevant XenStore's "resolution" property.
+ *   - Shared buffer, allocated for EDID storage, must not be less than
+ *     XENDISPL_EDID_MAX_SIZE octets.
+ *
+ * buffer_sz - uint32_t, buffer size to be allocated, octets
+ * gref_directory - grant_ref_t, a reference to the first shared page
+ *   describing EDID buffer references. See XENDISPL_OP_DBUF_CREATE for
+ *   grant page directory structure (struct xendispl_page_directory).
+ *
+ * See response format for this request.
+ */
+
+struct xendispl_get_edid_req {
+	uint32_t buffer_sz;
+	grant_ref_t gref_directory;
+};
+
 /*
  *---------------------------------- Responses --------------------------------
  *
@@ -753,6 +805,35 @@ struct xendispl_page_flip_req {
  * id - uint16_t, private guest value, echoed from request
  * status - int32_t, response status, zero on success and -XEN_EXX on failure
  *
+ *
+ * Get EDID response - response for XENDISPL_OP_GET_EDID:
+ *         0                1                 2               3        octet
+ * +----------------+----------------+----------------+----------------+
+ * |               id                |    operation   |    reserved    | 4
+ * +----------------+----------------+----------------+----------------+
+ * |                              status                               | 8
+ * +----------------+----------------+----------------+----------------+
+ * |                             edid_sz                               | 12
+ * +----------------+----------------+----------------+----------------+
+ * |                             reserved                              | 16
+ * +----------------+----------------+----------------+----------------+
+ * |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/|
+ * +----------------+----------------+----------------+----------------+
+ * |                             reserved                              | 64
+ * +----------------+----------------+----------------+----------------+
+ *
+ * Notes:
+ *   - This response is not available in protocol version 1 and should be
+ *     ignored.
+ *
+ * edid_sz - uint32_t, size of the EDID, octets
+ */
+
+struct xendispl_get_edid_resp {
+	uint32_t edid_sz;
+};
+
+/*
  *----------------------------------- Events ----------------------------------
  *
  * Events are sent via a shared page allocated by the front and propagated by
@@ -804,6 +885,7 @@ struct xendispl_req {
 		struct xendispl_fb_detach_req fb_detach;
 		struct xendispl_set_config_req set_config;
 		struct xendispl_page_flip_req pg_flip;
+		struct xendispl_get_edid_req get_edid;
 		uint8_t reserved[56];
 	} op;
 };
@@ -813,7 +895,10 @@ struct xendispl_resp {
 	uint8_t operation;
 	uint8_t reserved;
 	int32_t status;
-	uint8_t reserved1[56];
+	union {
+	    struct xendispl_get_edid_resp get_edid;
+	    uint8_t reserved1[56];
+	} op;
 };
 
 struct xendispl_evt {
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Fri Jul 31 12:51:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 12:51:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1UVf-0001UF-TH; Fri, 31 Jul 2020 12:51:39 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YcXq=BK=gmail.com=andr2000@srs-us1.protection.inumbo.net>)
 id 1k1UVe-0001HZ-Ic
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 12:51:38 +0000
X-Inumbo-ID: 884136dc-d32c-11ea-8e30-bc764e2007e4
Received: from mail-lj1-x244.google.com (unknown [2a00:1450:4864:20::244])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 884136dc-d32c-11ea-8e30-bc764e2007e4;
 Fri, 31 Jul 2020 12:51:21 +0000 (UTC)
Received: by mail-lj1-x244.google.com with SMTP id v12so1992406ljc.10
 for <xen-devel@lists.xenproject.org>; Fri, 31 Jul 2020 05:51:21 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:to:cc:subject:date:message-id:in-reply-to:references;
 bh=802wLPAiJOjYAF20z0XA9+ti0fbeGRnp8uxtEVqZ2cw=;
 b=XUpNekScbM7pVkILHss15SrZ149Lm5AtQfG+6vzsprcdirumI53ZN4t6VgSMIYCR07
 9skaHVlkX7NcNFlcoaIjMIYSSo7VYBaJxeFKKYA+1ThMAmgeCAorx2b8WQDzWH/EGcSL
 ZACMLyTvq9p49NUdDZQJ+C+6q75OV7U/dPqL0HqWZKTCJ09TgWEz8ZiW5LPhvz2Zln7Y
 qLR/h+WuI938pTe27pXqjPod/3HJqe4kl8HPpgANJKQL9Fx2Abd50/Qurj9uH4ULNMGa
 JFm0yWYg0IArPZxrohxFHuEZSdW3ZFfFWjH48CxRSNPSLbf6dahnKvy02/L1oQOkzkQl
 xcCg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
 :references;
 bh=802wLPAiJOjYAF20z0XA9+ti0fbeGRnp8uxtEVqZ2cw=;
 b=rR1bRC2ZHes9SAnaikUzkWsF0TrgSTMMOFNadGlBd3q/hjBB8Mjfrm5DxfhuEgMf0z
 j/AxJ2vK8zRcRPWiJk67D+Pb+P1vBTopgUit+a8iT9C3k/72C8guf29xL4m6d2oVdb8y
 /HIXbYlbLqT5nwhzXePlrIhugQbAdjJ2ae8BYRUnpEXCxW6FrAZr2V3nUAdoFhRZ4cLx
 aJCVgiUHsmWqPmfZAY8hD/tVAXR6vkSX87t7bPpd7WUimJH6xRQmV8AZKZRkC2zrWPXI
 S6QJ95/V5bZl5Z3XFj+WR6owbe4puLZHoZBjwyyQyNSSBINEfrLFX7cUBZyzEFtPrvLc
 o5wA==
X-Gm-Message-State: AOAM531EiwbUTuq58ppqHglJhW+Al888ihGtcTHZag1cktO21QJXuuib
 /kSYmEFTYZZWycSPl50som6iWU4Pcl8=
X-Google-Smtp-Source: ABdhPJxCBCQqGvpy9XgTgM7mttEVb5TQWPTSAYYjB8mKwaVorgp/Gu6H2+I5YwJsHP9dUUFce4OPrw==
X-Received: by 2002:a2e:b610:: with SMTP id r16mr1841475ljn.439.1596199879842; 
 Fri, 31 Jul 2020 05:51:19 -0700 (PDT)
Received: from a2klaptop.localdomain ([185.199.97.5])
 by smtp.gmail.com with ESMTPSA id s2sm1923362lfs.4.2020.07.31.05.51.18
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 31 Jul 2020 05:51:19 -0700 (PDT)
From: Oleksandr Andrushchenko <andr2000@gmail.com>
To: xen-devel@lists.xenproject.org, dri-devel@lists.freedesktop.org,
 linux-kernel@vger.kernel.org, boris.ostrovsky@oracle.com, jgross@suse.com,
 airlied@linux.ie, daniel@ffwll.ch
Subject: [PATCH 5/6] drm/xen-front: Pass dumb buffer data offset to the backend
Date: Fri, 31 Jul 2020 15:51:08 +0300
Message-Id: <20200731125109.18666-6-andr2000@gmail.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20200731125109.18666-1-andr2000@gmail.com>
References: <20200731125109.18666-1-andr2000@gmail.com>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: intel-gfx@lists.freedesktop.org, sstabellini@kernel.org,
 dan.carpenter@oracle.com,
 Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>

While importing a dmabuf, it is possible that the buffer's data starts
at a non-zero offset, which is indicated by the SGT offset.
Respect that offset value and forward it to the backend.

Signed-off-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
---
 drivers/gpu/drm/xen/xen_drm_front.c     | 6 ++++--
 drivers/gpu/drm/xen/xen_drm_front.h     | 2 +-
 drivers/gpu/drm/xen/xen_drm_front_gem.c | 3 ++-
 3 files changed, 7 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/xen/xen_drm_front.c b/drivers/gpu/drm/xen/xen_drm_front.c
index 88db2726e8ce..013c9e0e412c 100644
--- a/drivers/gpu/drm/xen/xen_drm_front.c
+++ b/drivers/gpu/drm/xen/xen_drm_front.c
@@ -157,7 +157,8 @@ int xen_drm_front_mode_set(struct xen_drm_front_drm_pipeline *pipeline,
 
 int xen_drm_front_dbuf_create(struct xen_drm_front_info *front_info,
 			      u64 dbuf_cookie, u32 width, u32 height,
-			      u32 bpp, u64 size, struct page **pages)
+			      u32 bpp, u64 size, u32 offset,
+			      struct page **pages)
 {
 	struct xen_drm_front_evtchnl *evtchnl;
 	struct xen_drm_front_dbuf *dbuf;
@@ -194,6 +195,7 @@ int xen_drm_front_dbuf_create(struct xen_drm_front_info *front_info,
 	req->op.dbuf_create.gref_directory =
 			xen_front_pgdir_shbuf_get_dir_start(&dbuf->shbuf);
 	req->op.dbuf_create.buffer_sz = size;
+	req->op.dbuf_create.data_ofs = offset;
 	req->op.dbuf_create.dbuf_cookie = dbuf_cookie;
 	req->op.dbuf_create.width = width;
 	req->op.dbuf_create.height = height;
@@ -408,7 +410,7 @@ static int xen_drm_drv_dumb_create(struct drm_file *filp,
 	ret = xen_drm_front_dbuf_create(drm_info->front_info,
 					xen_drm_front_dbuf_to_cookie(obj),
 					args->width, args->height, args->bpp,
-					args->size,
+					args->size, 0,
 					xen_drm_front_gem_get_pages(obj));
 	if (ret)
 		goto fail_backend;
diff --git a/drivers/gpu/drm/xen/xen_drm_front.h b/drivers/gpu/drm/xen/xen_drm_front.h
index f92c258350ca..54486d89650e 100644
--- a/drivers/gpu/drm/xen/xen_drm_front.h
+++ b/drivers/gpu/drm/xen/xen_drm_front.h
@@ -145,7 +145,7 @@ int xen_drm_front_mode_set(struct xen_drm_front_drm_pipeline *pipeline,
 
 int xen_drm_front_dbuf_create(struct xen_drm_front_info *front_info,
 			      u64 dbuf_cookie, u32 width, u32 height,
-			      u32 bpp, u64 size, struct page **pages);
+			      u32 bpp, u64 size, u32 offset, struct page **pages);
 
 int xen_drm_front_fb_attach(struct xen_drm_front_info *front_info,
 			    u64 dbuf_cookie, u64 fb_cookie, u32 width,
diff --git a/drivers/gpu/drm/xen/xen_drm_front_gem.c b/drivers/gpu/drm/xen/xen_drm_front_gem.c
index 4ec8a49241e1..39ff95b75357 100644
--- a/drivers/gpu/drm/xen/xen_drm_front_gem.c
+++ b/drivers/gpu/drm/xen/xen_drm_front_gem.c
@@ -210,7 +210,8 @@ xen_drm_front_gem_import_sg_table(struct drm_device *dev,
 
 	ret = xen_drm_front_dbuf_create(drm_info->front_info,
 					xen_drm_front_dbuf_to_cookie(&xen_obj->base),
-					0, 0, 0, size, xen_obj->pages);
+					0, 0, 0, size, sgt->sgl->offset,
+					xen_obj->pages);
 	if (ret < 0)
 		return ERR_PTR(ret);
 
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Fri Jul 31 12:51:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 12:51:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1UVl-0001Xg-5R; Fri, 31 Jul 2020 12:51:45 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YcXq=BK=gmail.com=andr2000@srs-us1.protection.inumbo.net>)
 id 1k1UVj-0001HZ-Im
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 12:51:43 +0000
X-Inumbo-ID: 8942e22e-d32c-11ea-8e30-bc764e2007e4
Received: from mail-lj1-x242.google.com (unknown [2a00:1450:4864:20::242])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8942e22e-d32c-11ea-8e30-bc764e2007e4;
 Fri, 31 Jul 2020 12:51:23 +0000 (UTC)
Received: by mail-lj1-x242.google.com with SMTP id h19so32295806ljg.13
 for <xen-devel@lists.xenproject.org>; Fri, 31 Jul 2020 05:51:22 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:to:cc:subject:date:message-id:in-reply-to:references;
 bh=7zQCi5vRKjnMMexLpqVO1+RGpDeDdG/krrqCxgZp1MI=;
 b=HbWhpWICAKhm7ouBRWBGEzVp2Ly1gNg+XaeQigbOa3I7JrEoNstXZD9boIMFRnQvp/
 NU+kKOCO8ZEkuhIpeiv19FHcfR+LRrWDdBo0BKaYl0w6PuwgkRt+X1aB9Ty3ZT9/RQKD
 VF0eO+0ytLMLEII2QEMeQrHh3rQQ7nNFNIjdOGFs+D1gtNPbaqMv4NkpMtcalx5ngOxb
 AVPvGT1YVb+CQmOcyVqg8MZbPt84wROetRpcP+4ExWdPSUcLWzfwHI16QZWgZznhFmFG
 KGeg8W5vZqv2pTaLQkE2dxca5QdflMYeeUMbxBjnyWO4T6n1vj3pztQ8sYZG5ist3k/Y
 bmxA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
 :references;
 bh=7zQCi5vRKjnMMexLpqVO1+RGpDeDdG/krrqCxgZp1MI=;
 b=RN5hLSZQVhWUXx8KO4KzZfdxu5wiGbRrJFoiFbCprfbkJ/xFNisJXLAS9aOQgCh01v
 4cYBQ+nCp3JjMmu0cMOueQMCp2pmrPsuNW+eRk0qqskKYKH0T5AZb9PseSJLpt/nan3n
 +Pwm8TQB0mVn5BwKOXDkL4dPEwLHEYR9BnyA1usiiYTjgL7wJeKZmlzBmAOkD677KjG9
 13P449KhG5M8wVZIT4iH1WszbOiF35AvjsHRTs4bCTlULtSQD/t47Yku14JyKPxnIhco
 okhvoHEFWG7hjQ87RZQvMPRol6Gg8ml91vYX+/RtV6Z89jB85SIZLzvl19mm+bgrKb1u
 it/w==
X-Gm-Message-State: AOAM530btYiYKHu/SfTjwmYnvx5IWW8hxClHx8hcW+2ZY3RkIh3BQUqP
 +z5ALtDBxDt1+KTXMYAISwdKsZhvByA=
X-Google-Smtp-Source: ABdhPJxMttTK1ZQScjW7rj1U9b/hfyPJlPrb8YWiyNlNA+Bo6ic4x233upQgVSRTdPt8SYImIwdtLg==
X-Received: by 2002:a2e:8689:: with SMTP id l9mr388219lji.467.1596199881377;
 Fri, 31 Jul 2020 05:51:21 -0700 (PDT)
Received: from a2klaptop.localdomain ([185.199.97.5])
 by smtp.gmail.com with ESMTPSA id s2sm1923362lfs.4.2020.07.31.05.51.19
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 31 Jul 2020 05:51:20 -0700 (PDT)
From: Oleksandr Andrushchenko <andr2000@gmail.com>
To: xen-devel@lists.xenproject.org, dri-devel@lists.freedesktop.org,
 linux-kernel@vger.kernel.org, boris.ostrovsky@oracle.com, jgross@suse.com,
 airlied@linux.ie, daniel@ffwll.ch
Subject: [PATCH 6/6] drm/xen-front: Add support for EDID based configuration
Date: Fri, 31 Jul 2020 15:51:09 +0300
Message-Id: <20200731125109.18666-7-andr2000@gmail.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20200731125109.18666-1-andr2000@gmail.com>
References: <20200731125109.18666-1-andr2000@gmail.com>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: intel-gfx@lists.freedesktop.org, sstabellini@kernel.org,
 dan.carpenter@oracle.com,
 Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>

Version 2 of the Xen displif protocol adds the XENDISPL_OP_GET_EDID
request, which allows frontends to request an EDID structure per
connector. This request is optional, and if it is not supported by
the backend then the visible area is still defined by the relevant
XenStore "resolution" property.
If the backend provides EDID via the XENDISPL_OP_GET_EDID request,
then its values must take precedence over the resolutions defined in
XenStore.

Signed-off-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
---
 drivers/gpu/drm/xen/xen_drm_front.c         | 62 ++++++++++++++++
 drivers/gpu/drm/xen/xen_drm_front.h         |  9 ++-
 drivers/gpu/drm/xen/xen_drm_front_cfg.c     | 82 +++++++++++++++++++++
 drivers/gpu/drm/xen/xen_drm_front_cfg.h     |  7 ++
 drivers/gpu/drm/xen/xen_drm_front_conn.c    | 26 ++++++-
 drivers/gpu/drm/xen/xen_drm_front_evtchnl.c |  3 +
 drivers/gpu/drm/xen/xen_drm_front_evtchnl.h |  3 +
 drivers/gpu/drm/xen/xen_drm_front_kms.c     |  5 ++
 8 files changed, 194 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/xen/xen_drm_front.c b/drivers/gpu/drm/xen/xen_drm_front.c
index 013c9e0e412c..cc5981bdbfb3 100644
--- a/drivers/gpu/drm/xen/xen_drm_front.c
+++ b/drivers/gpu/drm/xen/xen_drm_front.c
@@ -381,6 +381,59 @@ void xen_drm_front_on_frame_done(struct xen_drm_front_info *front_info,
 					fb_cookie);
 }
 
+int xen_drm_front_get_edid(struct xen_drm_front_info *front_info,
+			   int conn_idx, struct page **pages,
+			   u32 buffer_sz, u32 *edid_sz)
+{
+	struct xen_drm_front_evtchnl *evtchnl;
+	struct xen_front_pgdir_shbuf_cfg buf_cfg;
+	struct xen_front_pgdir_shbuf shbuf;
+	struct xendispl_req *req;
+	unsigned long flags;
+	int ret;
+
+	if (unlikely(conn_idx >= front_info->num_evt_pairs))
+		return -EINVAL;
+
+	memset(&buf_cfg, 0, sizeof(buf_cfg));
+	buf_cfg.xb_dev = front_info->xb_dev;
+	buf_cfg.num_pages = DIV_ROUND_UP(buffer_sz, PAGE_SIZE);
+	buf_cfg.pages = pages;
+	buf_cfg.pgdir = &shbuf;
+	buf_cfg.be_alloc = false;
+
+	ret = xen_front_pgdir_shbuf_alloc(&buf_cfg);
+	if (ret < 0)
+		return ret;
+
+	evtchnl = &front_info->evt_pairs[conn_idx].req;
+
+	mutex_lock(&evtchnl->u.req.req_io_lock);
+
+	spin_lock_irqsave(&front_info->io_lock, flags);
+	req = be_prepare_req(evtchnl, XENDISPL_OP_GET_EDID);
+	req->op.get_edid.gref_directory =
+		xen_front_pgdir_shbuf_get_dir_start(&shbuf);
+	req->op.get_edid.buffer_sz = buffer_sz;
+
+	ret = be_stream_do_io(evtchnl, req);
+	spin_unlock_irqrestore(&front_info->io_lock, flags);
+
+	if (ret < 0)
+		goto fail;
+
+	ret = be_stream_wait_io(evtchnl);
+	if (ret < 0)
+		goto fail;
+
+	*edid_sz = evtchnl->u.req.resp.get_edid.edid_sz;
+
+fail:
+	mutex_unlock(&evtchnl->u.req.req_io_lock);
+	xen_front_pgdir_shbuf_free(&shbuf);
+	return ret;
+}
+
 static int xen_drm_drv_dumb_create(struct drm_file *filp,
 				   struct drm_device *dev,
 				   struct drm_mode_create_dumb *args)
@@ -466,6 +519,7 @@ static void xen_drm_drv_release(struct drm_device *dev)
 		xenbus_switch_state(front_info->xb_dev,
 				    XenbusStateInitialising);
 
+	xen_drm_front_cfg_free(front_info, &front_info->cfg);
 	kfree(drm_info);
 }
 
@@ -562,6 +616,7 @@ static int xen_drm_drv_init(struct xen_drm_front_info *front_info)
 	drm_mode_config_cleanup(drm_dev);
 	drm_dev_put(drm_dev);
 fail:
+	xen_drm_front_cfg_free(front_info, &front_info->cfg);
 	kfree(drm_info);
 	return ret;
 }
@@ -622,7 +677,14 @@ static int displback_initwait(struct xen_drm_front_info *front_info)
 
 static int displback_connect(struct xen_drm_front_info *front_info)
 {
+	int ret;
+
 	xen_drm_front_evtchnl_set_state(front_info, EVTCHNL_STATE_CONNECTED);
+
+	/* We are all set to read additional configuration from the backend. */
+	ret = xen_drm_front_cfg_tail(front_info, &front_info->cfg);
+	if (ret < 0)
+		return ret;
 	return xen_drm_drv_init(front_info);
 }
 
diff --git a/drivers/gpu/drm/xen/xen_drm_front.h b/drivers/gpu/drm/xen/xen_drm_front.h
index 54486d89650e..be0c982f4d82 100644
--- a/drivers/gpu/drm/xen/xen_drm_front.h
+++ b/drivers/gpu/drm/xen/xen_drm_front.h
@@ -112,9 +112,12 @@ struct xen_drm_front_drm_pipeline {
 	struct drm_simple_display_pipe pipe;
 
 	struct drm_connector conn;
-	/* These are only for connector mode checking */
+	/* These are only for connector mode checking if no EDID is present */
 	int width, height;
 
+	/* Non-NULL if EDID is used for connector configuration. */
+	struct edid *edid;
+
 	struct drm_pending_vblank_event *pending_event;
 
 	struct delayed_work pflip_to_worker;
@@ -160,4 +163,8 @@ int xen_drm_front_page_flip(struct xen_drm_front_info *front_info,
 void xen_drm_front_on_frame_done(struct xen_drm_front_info *front_info,
 				 int conn_idx, u64 fb_cookie);
 
+int xen_drm_front_get_edid(struct xen_drm_front_info *front_info,
+			   int conn_idx, struct page **pages,
+			   u32 buffer_sz, u32 *edid_sz);
+
 #endif /* __XEN_DRM_FRONT_H_ */
diff --git a/drivers/gpu/drm/xen/xen_drm_front_cfg.c b/drivers/gpu/drm/xen/xen_drm_front_cfg.c
index ec53b9cc9e0e..f7c45a2fdab3 100644
--- a/drivers/gpu/drm/xen/xen_drm_front_cfg.c
+++ b/drivers/gpu/drm/xen/xen_drm_front_cfg.c
@@ -45,6 +45,64 @@ static int cfg_connector(struct xen_drm_front_info *front_info,
 	return 0;
 }
 
+static void
+cfg_connector_free_edid(struct xen_drm_front_cfg_connector *connector)
+{
+	vfree(connector->edid);
+	connector->edid = NULL;
+}
+
+static void cfg_connector_edid(struct xen_drm_front_info *front_info,
+			       struct xen_drm_front_cfg_connector *connector,
+			       int index)
+{
+	struct page **pages;
+	u32 edid_sz;
+	int i, npages, ret = -ENOMEM;
+
+	connector->edid = vmalloc(XENDISPL_EDID_MAX_SIZE);
+	if (!connector->edid)
+		goto fail;
+
+	npages = DIV_ROUND_UP(XENDISPL_EDID_MAX_SIZE, PAGE_SIZE);
+	pages = kvmalloc_array(npages, sizeof(struct page *), GFP_KERNEL);
+	if (!pages)
+		goto fail_free_edid;
+
+	for (i = 0; i < npages; i++)
+		pages[i] = vmalloc_to_page((u8 *)connector->edid +
+					   i * PAGE_SIZE);
+
+	ret = xen_drm_front_get_edid(front_info, index, pages,
+				     XENDISPL_EDID_MAX_SIZE, &edid_sz);
+
+	kvfree(pages);
+
+	if (ret < 0)
+		goto fail_free_edid;
+
+	ret = -EINVAL;
+	if (!edid_sz || (edid_sz % EDID_LENGTH))
+		goto fail_free_edid;
+
+	if (!drm_edid_is_valid(connector->edid))
+		goto fail_free_edid;
+
+	DRM_INFO("Connector %s: using EDID for configuration, size %u\n",
+		 connector->xenstore_path, edid_sz);
+	return;
+
+fail_free_edid:
+	cfg_connector_free_edid(connector);
+fail:
+	/*
+	 * Any error here is not critical: connector settings can still
+	 * be read from XenStore, so just warn.
+	 */
+	DRM_WARN("Connector %s: failed to read or invalid EDID: %d\n",
+		 connector->xenstore_path, ret);
+}
+
 int xen_drm_front_cfg_card(struct xen_drm_front_info *front_info,
 			   struct xen_drm_front_cfg *cfg)
 {
@@ -75,3 +133,27 @@ int xen_drm_front_cfg_card(struct xen_drm_front_info *front_info,
 	return 0;
 }
 
+int xen_drm_front_cfg_tail(struct xen_drm_front_info *front_info,
+			   struct xen_drm_front_cfg *cfg)
+{
+	int i;
+
+	/*
+	 * Try reading EDID(s) from the backend: it is not an error
+	 * if the backend doesn't support EDID or provides none.
+	 */
+	for (i = 0; i < cfg->num_connectors; i++)
+		cfg_connector_edid(front_info, &cfg->connectors[i], i);
+
+	return 0;
+}
+
+void xen_drm_front_cfg_free(struct xen_drm_front_info *front_info,
+			    struct xen_drm_front_cfg *cfg)
+{
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(cfg->connectors); i++)
+		cfg_connector_free_edid(&cfg->connectors[i]);
+}
+
diff --git a/drivers/gpu/drm/xen/xen_drm_front_cfg.h b/drivers/gpu/drm/xen/xen_drm_front_cfg.h
index aa8490ba9146..f80f47f14697 100644
--- a/drivers/gpu/drm/xen/xen_drm_front_cfg.h
+++ b/drivers/gpu/drm/xen/xen_drm_front_cfg.h
@@ -19,6 +19,7 @@ struct xen_drm_front_cfg_connector {
 	int width;
 	int height;
 	char *xenstore_path;
+	struct edid *edid;
 };
 
 struct xen_drm_front_cfg {
@@ -34,4 +35,10 @@ struct xen_drm_front_cfg {
 int xen_drm_front_cfg_card(struct xen_drm_front_info *front_info,
 			   struct xen_drm_front_cfg *cfg);
 
+int xen_drm_front_cfg_tail(struct xen_drm_front_info *front_info,
+			   struct xen_drm_front_cfg *cfg);
+
+void xen_drm_front_cfg_free(struct xen_drm_front_info *front_info,
+			    struct xen_drm_front_cfg *cfg);
+
 #endif /* __XEN_DRM_FRONT_CFG_H_ */
diff --git a/drivers/gpu/drm/xen/xen_drm_front_conn.c b/drivers/gpu/drm/xen/xen_drm_front_conn.c
index 44f1f70c0aed..c98d989a005f 100644
--- a/drivers/gpu/drm/xen/xen_drm_front_conn.c
+++ b/drivers/gpu/drm/xen/xen_drm_front_conn.c
@@ -66,6 +66,16 @@ static int connector_get_modes(struct drm_connector *connector)
 	struct videomode videomode;
 	int width, height;
 
+	if (pipeline->edid) {
+		int count;
+
+		drm_connector_update_edid_property(connector,
+						   pipeline->edid);
+		count = drm_add_edid_modes(connector, pipeline->edid);
+		if (count)
+			return count;
+	}
+
 	mode = drm_mode_create(connector->dev);
 	if (!mode)
 		return 0;
@@ -103,6 +113,7 @@ int xen_drm_front_conn_init(struct xen_drm_front_drm_info *drm_info,
 {
 	struct xen_drm_front_drm_pipeline *pipeline =
 			to_xen_drm_pipeline(connector);
+	int ret;
 
 	drm_connector_helper_add(connector, &connector_helper_funcs);
 
@@ -111,6 +122,17 @@ int xen_drm_front_conn_init(struct xen_drm_front_drm_info *drm_info,
 	connector->polled = DRM_CONNECTOR_POLL_CONNECT |
 			DRM_CONNECTOR_POLL_DISCONNECT;
 
-	return drm_connector_init(drm_info->drm_dev, connector,
-				  &connector_funcs, DRM_MODE_CONNECTOR_VIRTUAL);
+	ret = drm_connector_init(drm_info->drm_dev, connector,
+				 &connector_funcs, DRM_MODE_CONNECTOR_VIRTUAL);
+	if (ret < 0)
+		return ret;
+
+	/*
+	 * Virtual connectors have no EDID property by default, but we
+	 * may have EDID data, so attach the property manually if present.
+	 */
+	if (pipeline->edid)
+		drm_connector_attach_edid_property(connector);
+
+	return 0;
 }
diff --git a/drivers/gpu/drm/xen/xen_drm_front_evtchnl.c b/drivers/gpu/drm/xen/xen_drm_front_evtchnl.c
index e10d95dddb99..af574ef16d84 100644
--- a/drivers/gpu/drm/xen/xen_drm_front_evtchnl.c
+++ b/drivers/gpu/drm/xen/xen_drm_front_evtchnl.c
@@ -44,6 +44,10 @@ static irqreturn_t evtchnl_interrupt_ctrl(int irq, void *dev_id)
 			continue;
 
 		switch (resp->operation) {
+		case XENDISPL_OP_GET_EDID:
+			evtchnl->u.req.resp.get_edid =
+				resp->op.get_edid;
+			fallthrough;
 		case XENDISPL_OP_PG_FLIP:
 		case XENDISPL_OP_FB_ATTACH:
 		case XENDISPL_OP_FB_DETACH:
diff --git a/drivers/gpu/drm/xen/xen_drm_front_evtchnl.h b/drivers/gpu/drm/xen/xen_drm_front_evtchnl.h
index b0af6994332b..8267f40b6549 100644
--- a/drivers/gpu/drm/xen/xen_drm_front_evtchnl.h
+++ b/drivers/gpu/drm/xen/xen_drm_front_evtchnl.h
@@ -53,6 +53,9 @@ struct xen_drm_front_evtchnl {
 			struct completion completion;
 			/* latest response status */
 			int resp_status;
+			union {
+				struct xendispl_get_edid_resp get_edid;
+			} resp;
 			/* serializer for backend IO: request/response */
 			struct mutex req_io_lock;
 		} req;
diff --git a/drivers/gpu/drm/xen/xen_drm_front_kms.c b/drivers/gpu/drm/xen/xen_drm_front_kms.c
index ef11b1e4de39..d7ff1a656d40 100644
--- a/drivers/gpu/drm/xen/xen_drm_front_kms.c
+++ b/drivers/gpu/drm/xen/xen_drm_front_kms.c
@@ -288,6 +288,10 @@ display_mode_valid(struct drm_simple_display_pipe *pipe,
 			container_of(pipe, struct xen_drm_front_drm_pipeline,
 				     pipe);
 
+	/* Nothing to check here if EDID is present: modes come from the EDID. */
+	if (pipeline->edid)
+		return MODE_OK;
+
 	if (mode->hdisplay != pipeline->width)
 		return MODE_ERROR;
 
@@ -319,6 +323,7 @@ static int display_pipe_init(struct xen_drm_front_drm_info *drm_info,
 	pipeline->index = index;
 	pipeline->height = cfg->height;
 	pipeline->width = cfg->width;
+	pipeline->edid = cfg->edid;
 
 	INIT_DELAYED_WORK(&pipeline->pflip_to_worker, pflip_to_worker);
 
-- 
2.17.1
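For reference, the patch accepts an EDID only if its size is a non-zero multiple of EDID_LENGTH (128 bytes) and it passes drm_edid_is_valid(). A minimal user-space sketch of the kind of base-block checks that helper performs (fixed 8-byte header, zero checksum over the 128-byte block) is shown below; the helper names here are illustrative, not kernel API:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define EDID_LENGTH 128 /* size of one EDID block, as in drm/drm_edid.h */

/* Every EDID base block starts with this fixed 8-byte header. */
static const uint8_t edid_header[8] = {
	0x00, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0x00
};

/*
 * Minimal sanity check: size must be a non-zero multiple of
 * EDID_LENGTH, the header must match, and all 128 bytes of the
 * base block must sum to 0 (mod 256).
 */
static int edid_base_block_valid(const uint8_t *edid, size_t sz)
{
	uint8_t sum = 0;
	size_t i;

	if (!sz || sz % EDID_LENGTH)
		return 0;
	if (memcmp(edid, edid_header, sizeof(edid_header)) != 0)
		return 0;
	for (i = 0; i < EDID_LENGTH; i++)
		sum += edid[i];
	return sum == 0;
}

/* Build an otherwise empty base block with a correct checksum byte. */
static void edid_make_valid_block(uint8_t edid[EDID_LENGTH])
{
	uint8_t sum = 0;
	size_t i;

	memset(edid, 0, EDID_LENGTH);
	memcpy(edid, edid_header, sizeof(edid_header));
	for (i = 0; i < EDID_LENGTH - 1; i++)
		sum += edid[i];
	/* Last byte makes the whole block sum to zero mod 256. */
	edid[EDID_LENGTH - 1] = (uint8_t)(0u - sum);
}
```

The `edid_sz % EDID_LENGTH` test mirrors the check in cfg_connector_edid(); extension blocks (sizes 256, 384, ...) carry their own per-block checksums, which drm_edid_is_valid() also verifies.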



From xen-devel-bounces@lists.xenproject.org Fri Jul 31 12:53:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 12:53:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1UXd-00022X-Js; Fri, 31 Jul 2020 12:53:41 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=wy6+=BK=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1k1UXc-00022P-6a
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 12:53:40 +0000
X-Inumbo-ID: da8bb872-d32c-11ea-8e30-bc764e2007e4
Received: from FRA01-PR2-obe.outbound.protection.outlook.com (unknown
 [40.107.12.87]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id da8bb872-d32c-11ea-8e30-bc764e2007e4;
 Fri, 31 Jul 2020 12:53:39 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com; 
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=JBGgB3sBYTD44bnzZdQKadPAKk7WRa4mW5x1cnybI7U=;
 b=adgtTCrSCo9AwqtryZleebmUM0G30mTZNcZmOykXYbDmtbBSMr54mZlKKClaUjR+1qcqQyLcsDVIulOGEibHlCl5okT5tO/lAxGevpiSubUWHhIsFDe5Lsgs+RZkffamcuOUSnxKyuLAYrljBq20xM6jIZAemLNnyHpT7A3SrEk=
Received: from AM5PR0701CA0005.eurprd07.prod.outlook.com
 (2603:10a6:203:51::15) by PR2PR08MB4777.eurprd08.prod.outlook.com
 (2603:10a6:101:26::19) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3239.17; Fri, 31 Jul
 2020 12:53:37 +0000
Received: from VE1EUR03FT013.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:203:51:cafe::87) by AM5PR0701CA0005.outlook.office365.com
 (2603:10a6:203:51::15) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3239.10 via Frontend
 Transport; Fri, 31 Jul 2020 12:53:37 +0000
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org;
 dmarc=bestguesspass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT013.mail.protection.outlook.com (10.152.19.37) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3239.17 via Frontend Transport; Fri, 31 Jul 2020 12:53:37 +0000
Received: ("Tessian outbound 8f45de5545d6:v62");
 Fri, 31 Jul 2020 12:53:36 +0000
X-CheckRecipientChecked: true
X-CR-MTA-CID: 90ed80deb058d033
X-CR-MTA-TID: 64aa7808
Received: from cef4a5924d81.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 A52DD1DA-2B31-40C7-ADCC-C061F744E92E.1; 
 Fri, 31 Jul 2020 12:53:31 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id cef4a5924d81.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 31 Jul 2020 12:53:31 +0000
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=DDLWS/O8sz6OVOyZ5HDLlm34ShBA7H+KbaNy5oMNmWns5zK1+BR47WwCA8iwOT58S8tzHd/jYLrywoCW04z5Fzw1okEm1DEWs+p2kdqxCmccpWC6o6z6Fi6dearhspe8PjDr381MrRNEOm/zqJBEerQ+ohrB7bEXHFZCbAB2j+kWP3V16i4vnXiTB6SCn59AQPJc5IzQC0uibx+9m18rtrCLz7ODus0He+od6mHQWiK9CULiynoSijLN1hDewRU4nfa5sPq85QQ7QVxvuysR4mpGk6kvFEgs+xgVvmN2Gz+e234onqtahKWL5/B9WY8k+U5n7cAiYyMykhU7gwdYow==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; 
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=JBGgB3sBYTD44bnzZdQKadPAKk7WRa4mW5x1cnybI7U=;
 b=k0ODCChKsNn4bZLgtJnY7G0K1/M8MV7LcTV8MK2cwr6nQOlpoZVsZIYRaOrO//XkXFhf1JhIVCiDuTx7ImsNrPhryAXmoWn1ly1N9t8Y4UrlZRPERQrAN5lNzQp/X3b2jH61kZSistMhnM8GF+UBCyF2dQzQMMwKF66NDeaUwwr1/oIp5sBiylbv7tqGkgwoa8+LhCVF0upWpat5D9deb14co4CDi9Wzm/vA32kBly6+ycHvWK5UtQAmIWj99YlhmU8zCGBg0sfvbGsuvsLyzmRFbV0TlzhUsbWGCrlBDYOILsyKKBy8Y81Gy+8AB01PgoiYy1UMmhfxYdrjM+LZiw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com; 
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=JBGgB3sBYTD44bnzZdQKadPAKk7WRa4mW5x1cnybI7U=;
 b=adgtTCrSCo9AwqtryZleebmUM0G30mTZNcZmOykXYbDmtbBSMr54mZlKKClaUjR+1qcqQyLcsDVIulOGEibHlCl5okT5tO/lAxGevpiSubUWHhIsFDe5Lsgs+RZkffamcuOUSnxKyuLAYrljBq20xM6jIZAemLNnyHpT7A3SrEk=
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DBBPR08MB4776.eurprd08.prod.outlook.com (2603:10a6:10:f2::19) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3216.20; Fri, 31 Jul
 2020 12:53:29 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::7c65:30f9:4e87:f58a]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::7c65:30f9:4e87:f58a%3]) with mapi id 15.20.3239.020; Fri, 31 Jul 2020
 12:53:29 +0000
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Julien Grall <julien@xen.org>
Subject: Re: [RESEND][PATCH v2 4/7] xen/arm: guestcopy: Re-order the includes
Thread-Topic: [RESEND][PATCH v2 4/7] xen/arm: guestcopy: Re-order the includes
Thread-Index: AQHWZp4MdrEziqnOzkaNnp5sU1zHlakhpfCA
Date: Fri, 31 Jul 2020 12:53:29 +0000
Message-ID: <87E8FBB4-DFD2-4B10-9D90-D8628AB102F5@arm.com>
References: <20200730181827.1670-1-julien@xen.org>
 <20200730181827.1670-5-julien@xen.org>
In-Reply-To: <20200730181827.1670-5-julien@xen.org>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
Authentication-Results-Original: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [90.126.203.125]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 95c6bc2d-8e9d-42a5-46e4-08d83550bdd5
x-ms-traffictypediagnostic: DBBPR08MB4776:|PR2PR08MB4777:
X-Microsoft-Antispam-PRVS: <PR2PR08MB4777465A4B2CC697C77FBA319D4E0@PR2PR08MB4777.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:2089;OLM:10000;
X-MS-Exchange-SenderADCheck: 1
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="us-ascii"
Content-ID: <9C40F41B2267044F88C879F30610E339@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBBPR08MB4776
Original-Authentication-Results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped: VE1EUR03FT013.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs: 0a361eba-f13d-4fbf-cc93-08d83550b956
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 31 Jul 2020 12:53:37.0352 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 95c6bc2d-8e9d-42a5-46e4-08d83550bdd5
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d; Ip=[63.35.35.123];
 Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource: VE1EUR03FT013.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PR2PR08MB4777
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
 Julien Grall <jgrall@amazon.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>



> On 30 Jul 2020, at 20:18, Julien Grall <julien@xen.org> wrote:
>
> From: Julien Grall <jgrall@amazon.com>
>
> We usually have xen/ includes first and then asm/. They are also ordered
> alphabetically among themselves.
>
> Signed-off-by: Julien Grall <jgrall@amazon.com>

This could have been merged in patch 3.

But anyway:
Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>


> ---
> xen/arch/arm/guestcopy.c | 3 ++-
> 1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/xen/arch/arm/guestcopy.c b/xen/arch/arm/guestcopy.c
> index 7a0f3e9d5fc6..c8023e2bca5d 100644
> --- a/xen/arch/arm/guestcopy.c
> +++ b/xen/arch/arm/guestcopy.c
> @@ -1,7 +1,8 @@
> -#include <xen/lib.h>
> #include <xen/domain_page.h>
> +#include <xen/lib.h>
> #include <xen/mm.h>
> #include <xen/sched.h>
> +
> #include <asm/current.h>
> #include <asm/guest_access.h>
>
> --
> 2.17.1
>
>

IMPORTANT NOTICE: The contents of this email and any attachments are confidential and may also be privileged. If you are not the intended recipient, please notify the sender immediately and do not disclose the contents to any other person, use it for any purpose, or store or copy the information in any medium. Thank you.


From xen-devel-bounces@lists.xenproject.org Fri Jul 31 12:56:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 12:56:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1UaV-0002GH-6U; Fri, 31 Jul 2020 12:56:39 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=wy6+=BK=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1k1UaT-0002GA-9r
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 12:56:37 +0000
X-Inumbo-ID: 44019af6-d32d-11ea-8e30-bc764e2007e4
Received: from EUR05-VI1-obe.outbound.protection.outlook.com (unknown
 [40.107.21.47]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 44019af6-d32d-11ea-8e30-bc764e2007e4;
 Fri, 31 Jul 2020 12:56:36 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com; 
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=VuFG/yDYzYkDX54/MavEWcUrDRVXztJYTHmRMNh2AZM=;
 b=Z6eorV+yl7gyAYQ1OjjXz/C4MQmI9K24c79QB/KJKhThUaoUi4a/d6P2stcyjdszgZjYsQ68L0/0lSvXI4wnw6696u+tGuXORJ1ZXwY977S3REezea0oJWYQAmrmyJcNRSrEtLz7j44e9d/hb4cPCiXrmFaDkNYPOQbFM5b20GE=
Received: from AM6PR05CA0034.eurprd05.prod.outlook.com (2603:10a6:20b:2e::47)
 by AM0PR08MB4994.eurprd08.prod.outlook.com (2603:10a6:208:15d::23)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3239.19; Fri, 31 Jul
 2020 12:56:34 +0000
Received: from AM5EUR03FT055.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:2e:cafe::4) by AM6PR05CA0034.outlook.office365.com
 (2603:10a6:20b:2e::47) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3239.16 via Frontend
 Transport; Fri, 31 Jul 2020 12:56:34 +0000
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org;
 dmarc=bestguesspass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT055.mail.protection.outlook.com (10.152.17.214) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3239.20 via Frontend Transport; Fri, 31 Jul 2020 12:56:34 +0000
Received: ("Tessian outbound 2ae7cfbcc26c:v62");
 Fri, 31 Jul 2020 12:56:34 +0000
X-CheckRecipientChecked: true
X-CR-MTA-CID: 556d06a5833601a7
X-CR-MTA-TID: 64aa7808
Received: from 8eee29f2d773.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 4DFA96B6-B5CD-4275-8664-0E3931598F29.1; 
 Fri, 31 Jul 2020 12:56:28 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 8eee29f2d773.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 31 Jul 2020 12:56:28 +0000
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=X/n5tGSgz5Zgz6mdWtBOWMWUAxRRecW8HT+1VTW1MlaESgYX9atQvm3Gy5lz0cyBDxHHUE4pAzZro7Aa6JmMnz74HjRSp3HL4bfpS6OnnDsLREafd6Et2c9wKHPlDk3WW1qRquYC5H82u2LJGwKCnD3QrsjjSxildLMPthKnDaITBz9jgdLCfbk2tLJ9EG2ALK4pcVlbc90cUX926fDWAYAD76tIjT951C8fc63B7hR1IAUNJZH40shjbYBXXGOB6RGKi2bXJqCsi0XlyoJqrQD/7HV1FLN1gjGbgBOTYfwwxM+JdW1gYkixcpGvhVpESZf60IHxDNIuqdCqMhnArA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; 
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=VuFG/yDYzYkDX54/MavEWcUrDRVXztJYTHmRMNh2AZM=;
 b=mSL1oi04BQMkbSYI+9ddNtEe2T0s1EqbVs7E6XKRGGD1/T9wyUjDWlWYcd4z5L6RU0rMQVhFUUja4Uvdg2VUNRSFFb2qSBMgPsLHbnrSTrDGO8wrQLupoPULQKRzhmjIh71zyt8j0Di8Rbh8VOE4vx+Pacnjz2guDpRLjsGYsRFvGaOK/8ahgvZar92mLSBOq3iHvp3AxKFn+EOlY23+gazOzJOMbyeYOgMQYFqap2miHWztkg4wAmoTZHVMX7Lj34UGTYpqjn4E+FI9AnxL4dD0LKJ2N7q+5Sm4kqDOOPsULCzGC/iFRVTLYJ8EHCIap7cUXGmqCTUhV7V7rV8xGw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com; 
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=VuFG/yDYzYkDX54/MavEWcUrDRVXztJYTHmRMNh2AZM=;
 b=Z6eorV+yl7gyAYQ1OjjXz/C4MQmI9K24c79QB/KJKhThUaoUi4a/d6P2stcyjdszgZjYsQ68L0/0lSvXI4wnw6696u+tGuXORJ1ZXwY977S3REezea0oJWYQAmrmyJcNRSrEtLz7j44e9d/hb4cPCiXrmFaDkNYPOQbFM5b20GE=
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DB8PR08MB5177.eurprd08.prod.outlook.com (2603:10a6:10:e3::27) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3239.17; Fri, 31 Jul
 2020 12:56:27 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::7c65:30f9:4e87:f58a]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::7c65:30f9:4e87:f58a%3]) with mapi id 15.20.3239.020; Fri, 31 Jul 2020
 12:56:27 +0000
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Julien Grall <julien@xen.org>
Subject: Re: [RESEND][PATCH v2 4/7] xen/arm: guestcopy: Re-order the includes
Thread-Topic: [RESEND][PATCH v2 4/7] xen/arm: guestcopy: Re-order the includes
Thread-Index: AQHWZp4MdrEziqnOzkaNnp5sU1zHlakhpfCAgAAA0wA=
Date: Fri, 31 Jul 2020 12:56:27 +0000
Message-ID: <E6C3838C-EFBC-41B0-BE3D-852C1E1B65FB@arm.com>
References: <20200730181827.1670-1-julien@xen.org>
 <20200730181827.1670-5-julien@xen.org>
 <87E8FBB4-DFD2-4B10-9D90-D8628AB102F5@arm.com>
In-Reply-To: <87E8FBB4-DFD2-4B10-9D90-D8628AB102F5@arm.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
Authentication-Results-Original: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [90.126.203.125]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 52dde557-2703-4fab-0c09-08d835512776
x-ms-traffictypediagnostic: DB8PR08MB5177:|AM0PR08MB4994:
X-Microsoft-Antispam-PRVS: <AM0PR08MB49945BC356B95F51345A45089D4E0@AM0PR08MB4994.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:9508;OLM:9508;
X-MS-Exchange-SenderADCheck: 1
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="us-ascii"
Content-ID: <2E4187FC06B0524C97348DB3F4CC3121@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB8PR08MB5177
Original-Authentication-Results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped: AM5EUR03FT055.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs: d36c2feb-fa5d-4912-bcb4-08d83551232e
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 31 Jul 2020 12:56:34.2996 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 52dde557-2703-4fab-0c09-08d835512776
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d; Ip=[63.35.35.123];
 Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource: AM5EUR03FT055.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR08MB4994
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
 Julien Grall <jgrall@amazon.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, nd <nd@arm.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>



> On 31 Jul 2020, at 14:53, Bertrand Marquis <Bertrand.Marquis@arm.com> wrote:
>
>
>
>> On 30 Jul 2020, at 20:18, Julien Grall <julien@xen.org> wrote:
>>
>> From: Julien Grall <jgrall@amazon.com>
>>
>> We usually have xen/ includes first and then asm/. They are also ordered
>> alphabetically among themselves.
>>
>> Signed-off-by: Julien Grall <jgrall@amazon.com>
>
> This could have been merged in patch 3.
>
> But anyway:
> Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
>
>
>> ---
>> xen/arch/arm/guestcopy.c | 3 ++-
>> 1 file changed, 2 insertions(+), 1 deletion(-)
>>
>> diff --git a/xen/arch/arm/guestcopy.c b/xen/arch/arm/guestcopy.c
>> index 7a0f3e9d5fc6..c8023e2bca5d 100644
>> --- a/xen/arch/arm/guestcopy.c
>> +++ b/xen/arch/arm/guestcopy.c
>> @@ -1,7 +1,8 @@
>> -#include <xen/lib.h>
>> #include <xen/domain_page.h>
>> +#include <xen/lib.h>
>> #include <xen/mm.h>
>> #include <xen/sched.h>
>> +
>> #include <asm/current.h>
>> #include <asm/guest_access.h>
>>
>> --
>> 2.17.1
>>
>>
>
> IMPORTANT NOTICE: The contents of this email and any attachments are confidential and may also be privileged. If you are not the intended recipient, please notify the sender immediately and do not disclose the contents to any other person, use it for any purpose, or store or copy the information in any medium. Thank you.

sorry for the notice, i need to find a way to turn it off automatically :-)





From xen-devel-bounces@lists.xenproject.org Fri Jul 31 12:58:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 12:58:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1Ubw-0002N1-Hq; Fri, 31 Jul 2020 12:58:08 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=S17i=BK=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1k1Ubw-0002Mv-04
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 12:58:08 +0000
X-Inumbo-ID: 7985091b-d32d-11ea-abaf-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7985091b-d32d-11ea-abaf-12813bfff9fa;
 Fri, 31 Jul 2020 12:58:05 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 255FCAC94;
 Fri, 31 Jul 2020 12:58:18 +0000 (UTC)
Subject: Re: [PATCH v2 2/2] x86/hvm: simplify 'mmio_direct' check in
 epte_get_entry_emt()
To: Paul Durrant <paul@xen.org>
References: <20200731123926.28970-1-paul@xen.org>
 <20200731123926.28970-3-paul@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <a4856c33-8bb0-4afa-cc71-3af4c229bc27@suse.com>
Date: Fri, 31 Jul 2020 14:58:03 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20200731123926.28970-3-paul@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, Paul Durrant <pdurrant@amazon.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 31.07.2020 14:39, Paul Durrant wrote:
> From: Paul Durrant <pdurrant@amazon.com>
> 
> Re-factor the code to take advantage of the fact that the APIC access page is
> a 'special' page.

Hmm, that's not going quite as far as I was thinking to go: In
particular, you leave in place the set_mmio_p2m_entry() use
in vmx_alloc_vlapic_mapping(). With that replaced, the
re-ordering in epte_get_entry_emt() that you do shouldn't
be necessary; you'd simply drop the checking of the
specific MFN. However, ...

> --- a/xen/arch/x86/hvm/mtrr.c
> +++ b/xen/arch/x86/hvm/mtrr.c
> @@ -814,29 +814,22 @@ int epte_get_entry_emt(struct domain *d, unsigned long gfn, mfn_t mfn,
>          return -1;
>      }
>  
> -    if ( direct_mmio )
> -    {
> -        if ( (mfn_x(mfn) ^ mfn_x(d->arch.hvm.vmx.apic_access_mfn)) >> order )
> -            return MTRR_TYPE_UNCACHABLE;
> -        if ( order )
> -            return -1;

... this part of the logic wants retaining, I think, i.e.
reporting back to the guest that the mapping needs splitting.
I'm afraid I have to withdraw my R-b on patch 1 for this
reason, as the check needs to be added there already.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Jul 31 12:58:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 12:58:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1UcE-0002P0-Qg; Fri, 31 Jul 2020 12:58:26 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=wy6+=BK=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1k1UcD-0002Op-L7
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 12:58:25 +0000
X-Inumbo-ID: 84c404ca-d32d-11ea-8e30-bc764e2007e4
Received: from EUR02-AM5-obe.outbound.protection.outlook.com (unknown
 [40.107.0.72]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 84c404ca-d32d-11ea-8e30-bc764e2007e4;
 Fri, 31 Jul 2020 12:58:24 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com; 
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=hV3wLf96RYlKeb8gH691k9VjenLtEmtuxqc/HLtUFyg=;
 b=TYqv7LM4SmMSqxzTZ58McPqC/J3iJP/6FHTnlL2EAtQT/WyR+3waTLcK9pZGhKElQBiTpC+AGLDtueAhq8ETWr/kjvP2+j0154pXhieyIu7KF0L1AyLshqhxu3iz9VRmirF3/8hQcZaY8vth6e3CKiLvMP7MQASTmoumBGpN2/s=
Received: from DB6PR0501CA0016.eurprd05.prod.outlook.com (2603:10a6:4:8f::26)
 by AM4PR08MB2897.eurprd08.prod.outlook.com (2603:10a6:205:a::26) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3239.17; Fri, 31 Jul
 2020 12:58:22 +0000
Received: from DB5EUR03FT032.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:4:8f:cafe::ac) by DB6PR0501CA0016.outlook.office365.com
 (2603:10a6:4:8f::26) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3239.16 via Frontend
 Transport; Fri, 31 Jul 2020 12:58:22 +0000
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org;
 dmarc=bestguesspass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DB5EUR03FT032.mail.protection.outlook.com (10.152.20.162) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3239.20 via Frontend Transport; Fri, 31 Jul 2020 12:58:22 +0000
Received: ("Tessian outbound c4059ed8d7bf:v62");
 Fri, 31 Jul 2020 12:58:22 +0000
X-CheckRecipientChecked: true
X-CR-MTA-CID: 9d62cfa80a9dd849
X-CR-MTA-TID: 64aa7808
Received: from d95a03cd5078.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 298162C4-D54F-49FD-BCA1-FDD487DB344D.1; 
 Fri, 31 Jul 2020 12:58:12 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id d95a03cd5078.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 31 Jul 2020 12:58:12 +0000
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=lD4U3oJzEHmRzwmdKgOVr19gF97nFISzqDNH4OwxOx74Xu6w+wfVwegxtjHSr1EmyX35e0xB0OeT4f3QPdV3Bfx3hOYZM9PcLNyt2WniRxVZSFBLdveecCbuZSTGJEdBrNl/ZhOXH/4vaq+x+AZGX+lAPZZ2cUdABaKCRAUqh0/h/pSIbv6HyvdDSKz6rSkoU0zhhVu79TWQx7TWZsf2f82lzu9COfr+Kyu4xtTU21fXJ0Wt5Ct7tT1667xh+o8hwVKhOTVkHScPafC8/j3JhyFuB9A2WPA2JjrF2Yio77xMw2K+WwUWVWfa7j5KbPG7REIaxEiTDl1PtimfDGFUBQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; 
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=hV3wLf96RYlKeb8gH691k9VjenLtEmtuxqc/HLtUFyg=;
 b=M7f6h5KtvRLoaqur0x+/j9OWCQONchIHVo6xB8ARYHeyICK6jlu2z3CU1Co/7uBwSEihu2JET9QeZ8WHeFDQwejRC1kzQaL1JQJJAX0sf4fbfMvPiptfKYL7QGepcf6R2t9xoirv5abnCx+7G/3XkJk/WCFjEqIkcyZRmDs5tGrvp36wR1bBmAqfzoSsAWD/uaIOoqFzeAn/J0owxuwtEQA3ntDE3f0p0H6KzUu1+5Xf6pJGZ4WV9cTt5xKIKy3N8nJFR8Ys6EhKjIVjeEl0lAqdvNaYzcZite7ktQwnG9Xccgy1MG1wMwbvmMnr+e36dR3sI6IDgpL5mwYeARftbQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com; 
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=hV3wLf96RYlKeb8gH691k9VjenLtEmtuxqc/HLtUFyg=;
 b=TYqv7LM4SmMSqxzTZ58McPqC/J3iJP/6FHTnlL2EAtQT/WyR+3waTLcK9pZGhKElQBiTpC+AGLDtueAhq8ETWr/kjvP2+j0154pXhieyIu7KF0L1AyLshqhxu3iz9VRmirF3/8hQcZaY8vth6e3CKiLvMP7MQASTmoumBGpN2/s=
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DBBPR08MB4776.eurprd08.prod.outlook.com (2603:10a6:10:f2::19) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3216.20; Fri, 31 Jul
 2020 12:58:10 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::7c65:30f9:4e87:f58a]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::7c65:30f9:4e87:f58a%3]) with mapi id 15.20.3239.020; Fri, 31 Jul 2020
 12:58:10 +0000
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Julien Grall <julien@xen.org>
Subject: Re: [RESEND][PATCH v2 2/7] xen/arm: kernel: Re-order the includes
Thread-Topic: [RESEND][PATCH v2 2/7] xen/arm: kernel: Re-order the includes
Thread-Index: AQHWZp4EFvxlcjFwoE6yApIOnyEIgKkhpz6A
Date: Fri, 31 Jul 2020 12:58:10 +0000
Message-ID: <030C3314-A8E2-4FD6-8D1A-25C5734A6A80@arm.com>
References: <20200730181827.1670-1-julien@xen.org>
 <20200730181827.1670-3-julien@xen.org>
In-Reply-To: <20200730181827.1670-3-julien@xen.org>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
Authentication-Results-Original: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [90.126.203.125]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 90ad4fc0-f78f-4fb9-3e7d-08d8355167e4
x-ms-traffictypediagnostic: DBBPR08MB4776:|AM4PR08MB2897:
X-Microsoft-Antispam-PRVS: <AM4PR08MB2897AD331FC6CA6B0A70EC8F9D4E0@AM4PR08MB2897.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:345;OLM:345;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original: JKWT5NH443bySKCZM4uO7uIBnFVBYZ6SNi9dSVkri6epIK7PtjXDLsQo/YHT+JpUTVPXApJzQTHAdtprPgF5RIZwnelohxyamm016toGVozEebE6Pd08+4XK815ckSetqv2WTrzSakmO7W1JKJJGDnxYi947mLfBk4PNN6wjGtv5RZKGIkcdIW3lkTEqwxWcEsd2sxFH46ORPAh+nw+1Drc33cpmHnO8rBP9/etr/5O74ErGES8+9NllryfO4BefVfFsTBoDfaC1q0ZdjjqkUB7+PT01vWluwLUKLt8r3mMTk9aNTK3z8ql+FnMPvIkFYXS/3HG0pvvbQ/XsrUcvlRgis+dqcQY99DGLrBpnPQeZmv6xkYZDLYRr0mYHMKNJ
X-Forefront-Antispam-Report-Untrusted: CIP:255.255.255.255; CTRY:; LANG:en;
 SCL:1; SRV:; IPV:NLI; SFV:NSPM; H:DB7PR08MB3689.eurprd08.prod.outlook.com;
 PTR:; CAT:NONE; SFTY:;
 SFS:(4636009)(136003)(366004)(346002)(39860400002)(376002)(396003)(4326008)(478600001)(66446008)(54906003)(5660300002)(26005)(186003)(6506007)(6512007)(66556008)(53546011)(64756008)(8936002)(2616005)(4744005)(36756003)(71200400001)(86362001)(66476007)(6486002)(2906002)(316002)(66946007)(91956017)(76116006)(83380400001)(8676002)(33656002)(6916009)(169823001);
 DIR:OUT; SFP:1101; 
x-ms-exchange-antispam-messagedata: JRR+Hx2nd5JTURpb4nKwiEj91H2ia82XWKqFwcwAVQvoAXOkUhVzaPHDGdDtkQ2W7MLTIzpBz/jMmViYHK6+6+42YigfW3sF4OPLYItEakvdKWyHxxrd9aHvbixEHLUv0Ik8aJCeLw/eHjmqDuSPr7+UBUUOgQllm2SgXcD9vXJtloRHb1bSY/RYm+ARJ72tzrKJlv+flHwQrJncyQRol8m0WTN5som7rZX9qzN327GhQJbGGgo+jASB3Q43S3jogJEr1QCIIUpgEP+YduPnbNt5yq7Rf5h6C+X2iLk69wq2/DC1riLOWdriBQr7WDvnCNLl56CPyWDIrBh3BOxeXRHiFOTSDC9KrSnglTBmU84DPHRwwtnH0RB8KSJk9GkZsf/3cbwIfwPIw8ESRR+Yoca0p288Psgc7at8RLz9A+4XSqff+zbp0xxIpo14zZHeuunWKMCH/ZD9uLqjq//XiGpwgzPy2qcCQzTowSHEJ4ysyqV5D3EgNTp5I5uqsOtL
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="us-ascii"
Content-ID: <637DF1DEAB000D4B88D4AB6A873A8660@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBBPR08MB4776
Original-Authentication-Results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped: DB5EUR03FT032.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs: c7bbdc0f-041b-497a-e7d3-08d835516096
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: Si3ftA9N3Xt3NIHPHRfiz6/ShZS13EtP/pMpTqjl9xc7d9IOjj05KCDeosPjnnej8QX1s8WM+NRB53eSWZ96tUq7/vpBUs5sd/7IBPBub4J0IaJT/I2dvbYFApvSKdgVGC4zOHANRIOnFK0pjCBKyf47M1oOwuX8VkeCT6z7Q2RhyL58MJBeWN2+n41psI5qREQF7CXyUGE/h1CbU6xQ+oj7TvL7WU/uz3Kg0Zu1foYfVrpDPiZd/b2R4G+kqfG2r9IbOSZJ8Gg81NIii7K20zFd125rxtMt0lVs+dmrKb1AIPk7vSyx/PWBDqfFdP2Tl/VXV+dsBmCLM4+8XngLfUONIyG8hA7hQY56X5bVG9F9X6h6njWaVPluo+PVsEIxVvlytk6a9tnP+UmdWQP4xfFv6S+gYngh6L3E/sUnjIw=
X-Forefront-Antispam-Report: CIP:63.35.35.123; CTRY:IE; LANG:en; SCL:1; SRV:;
 IPV:CAL; SFV:NSPM; H:64aa7808-outbound-1.mta.getcheckrecipient.com;
 PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com; CAT:NONE; SFTY:;
 SFS:(4636009)(39860400002)(376002)(346002)(136003)(396003)(46966005)(26005)(4326008)(6486002)(86362001)(5660300002)(316002)(83380400001)(2906002)(70586007)(6512007)(478600001)(336012)(70206006)(6862004)(33656002)(82740400003)(186003)(82310400002)(36756003)(356005)(2616005)(53546011)(8676002)(6506007)(8936002)(54906003)(47076004)(81166007)(169823001);
 DIR:OUT; SFP:1101; 
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 31 Jul 2020 12:58:22.4499 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 90ad4fc0-f78f-4fb9-3e7d-08d8355167e4
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d; Ip=[63.35.35.123];
 Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource: DB5EUR03FT032.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM4PR08MB2897
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
 Julien Grall <jgrall@amazon.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, nd <nd@arm.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>



> On 30 Jul 2020, at 20:18, Julien Grall <julien@xen.org> wrote:
>
> From: Julien Grall <jgrall@amazon.com>
>
> We usually have xen/ includes first and then asm/. They are also ordered
> alphabetically among themselves.
>
> Signed-off-by: Julien Grall <jgrall@amazon.com>
Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

> ---
> xen/arch/arm/kernel.c | 10 +++++-----
> 1 file changed, 5 insertions(+), 5 deletions(-)
>
> diff --git a/xen/arch/arm/kernel.c b/xen/arch/arm/kernel.c
> index 8eff0748367d..f95fa392af44 100644
> --- a/xen/arch/arm/kernel.c
> +++ b/xen/arch/arm/kernel.c
> @@ -3,20 +3,20 @@
>  *
>  * Copyright (C) 2011 Citrix Systems, Inc.
>  */
> +#include <xen/domain_page.h>
> #include <xen/errno.h>
> +#include <xen/gunzip.h>
> #include <xen/init.h>
> #include <xen/lib.h>
> +#include <xen/libfdt/libfdt.h>
> #include <xen/mm.h>
> -#include <xen/domain_page.h>
> #include <xen/sched.h>
> -#include <asm/byteorder.h>
> -#include <asm/setup.h>
> -#include <xen/libfdt/libfdt.h>
> -#include <xen/gunzip.h>
> #include <xen/vmap.h>
>
> +#include <asm/byteorder.h>
> #include <asm/guest_access.h>
> #include <asm/kernel.h>
> +#include <asm/setup.h>
>
> #define UIMAGE_MAGIC          0x27051956
> #define UIMAGE_NMLEN          32
> --
> 2.17.1
>
>



From xen-devel-bounces@lists.xenproject.org Fri Jul 31 13:02:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 13:02:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1UgL-0003LN-CN; Fri, 31 Jul 2020 13:02:41 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=RrDm=BK=citrix.com=ian.jackson@srs-us1.protection.inumbo.net>)
 id 1k1UgJ-0003LI-KV
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 13:02:39 +0000
X-Inumbo-ID: 1bfed4ab-d32e-11ea-8e30-bc764e2007e4
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1bfed4ab-d32e-11ea-8e30-bc764e2007e4;
 Fri, 31 Jul 2020 13:02:38 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1596200559;
 h=from:mime-version:content-transfer-encoding:message-id:
 date:to:cc:subject:in-reply-to:references;
 bh=8+Apex/05OziTwQ96raHXebZalhR41lpWEOv9w28wcE=;
 b=a5DrfBRy1ox94m+FmVwoQGbvkONB+aY5Z9H7SPZ2rcVRIbx3QngWKMrj
 TbsdQxeNN8p2OE14RrcA1fMgvVK+3JJm4nzTddIz04g6A2LifI1rIS3zn
 P5CaH5uMR2+lcaZnyFTv5YLAp49gK1LPOaFwTRvHZ9Ox3yJ2SchD5cdTK 8=;
Authentication-Results: esa1.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: VhKDc1nssWaHK07zJyr8fZXuKnKq5k5LtTanFhFZWZvlhToQUVd7+jYQaUQkO7ySH+q1Su29sg
 PKoZZre+53oMUa1pCZSyueL8JAVDm8kcU8M5PMPNjmaPkJDgzye5+B2xpG6SJ46Oqk8lA0mako
 DatncznEtdeBd6qvpPUReyfy6v/5bYiOjRkl26LTrOo+wmyMtXwzxL/q1vHL0/c0XDKrlK33FV
 FDTZ8jtc3lwh2fcO9Xpagj0jeojdsM3VWMbUP4w8zCHfPjpPDSRtCQ7Nyr1M0YPDoraEEhIWYj
 BSs=
X-SBRS: 3.7
X-MesageID: 23956842
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,418,1589256000"; d="scan'208";a="23956842"
From: Ian Jackson <ian.jackson@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Message-ID: <24356.5736.297234.341867@mariner.uk.xensource.com>
Date: Fri, 31 Jul 2020 14:02:32 +0100
To: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich <jbeulich@suse.com>
Subject: Re: [PATCH] tools/configure: drop BASH configure variable [and 1 more
 messages]
In-Reply-To: <2c202733-cbff-74e0-30c6-4cba227e7969@suse.com>,
 <20200722113258.3673-1-andrew.cooper3@citrix.com>
References: <20200722113258.3673-1-andrew.cooper3@citrix.com>
 <20200626170038.27650-1-andrew.cooper3@citrix.com>
 <880fcc83-875c-c030-bfac-c64477aa3254@suse.com>
 <24313.55588.61431.336617@mariner.uk.xensource.com>
 <2c202733-cbff-74e0-30c6-4cba227e7969@suse.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>, Wei Liu <wl@xen.org>,
 Paul Durrant <paul@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Jan Beulich writes ("Re: [PATCH] tools/configure: drop BASH configure variable"):
> On 29.06.2020 14:05, Ian Jackson wrote:
> > Jan Beulich writes ("Re: [PATCH] tools/configure: drop BASH configure variable"):
> >> On 26.06.2020 19:00, Andrew Cooper wrote:
> >> ... this may or may not take effect on the file system the sources
> >> are stored on.
> > 
> > In what circumstances might this not take effect ?
> 
> When the file system is incapable of recording execute permissions?
> It has been a common workaround for this in various projects that
> I've worked with to use $(SHELL) to account for that, so the actual
> permissions from the fs don't matter. (There may be mount options
> to make everything executable on such file systems, but people may
> be hesitant to use them.)

I don't think we support building from sources which have been
unpacked onto such filesystems.  Other projects which might actually
need to build on Windows or something do do this $(SHELL) thing or an
equivalent, but I don't think that's us.

But obviously that opinion of mine is not a blocker for Andy's patch
since Andy is going in the right direction, regardless.

Andrew Cooper writes ("[PATCH v2] tools/configure: drop BASH configure variable"):
> This is a weird variable to have in the first place.  The only user of it is
> XSM's CONFIG_SHELL, which opencodes a fallback to sh, and the only two scripts
> run with this are shebang sh anyway, so don't need bash in the first place.
> 
> Make the mkflask.sh and mkaccess_vector.sh scripts executable, drop the
> CONFIG_SHELL, and drop the $BASH variable to prevent further use.

In response to this commit message, I wrote:

> > Andrew Cooper writes ("[PATCH] tools/configure: drop BASH configure variable"):
> > Thanks for this cleanup.  I agree with the basic idea.
> >
> > However, did you run these scripts with dash, or review them, to check
> > for bashisms ?

And you replied:

> Yes, to all of the above.
> 
> They are both very thin wrappers (doing some argument shuffling) around
> large AWK scripts.

Can you please put this information in the commit message where it
belongs ?  As a rule we should know in future what we were thinking
when a change was made, and as I say "are shebang sh anyway, so don't
need bash in the first place" is weak evidence.

With that change,

Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

Thanks,
Ian.


From xen-devel-bounces@lists.xenproject.org Fri Jul 31 13:04:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 13:04:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1Ui5-0003Tp-Rm; Fri, 31 Jul 2020 13:04:29 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=wy6+=BK=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1k1Ui4-0003Ti-9S
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 13:04:28 +0000
X-Inumbo-ID: 5cc41b9e-d32e-11ea-8e30-bc764e2007e4
Received: from EUR01-VE1-obe.outbound.protection.outlook.com (unknown
 [40.107.14.53]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5cc41b9e-d32e-11ea-8e30-bc764e2007e4;
 Fri, 31 Jul 2020 13:04:27 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com; 
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=fRMB0UR5j8DpixaHREdiqCaJRQTn1wXeZcoLQkXzj7I=;
 b=HlgN/ZuZ/ATXm8hxiGmHVuRKXkyTky2S+gJ5HvbhgZy2H/+mofskzEnrqw79XnUQpX1HWGZ3CnyoYAiMyt0ex5x60285APdn3iDGYKRc2F0k4DQucGVfX8mpoyMZDPUqXVOaOLrcvluvxhGW79GYpN4zCRCBwa/OiI1rMuZ3Ma8=
Received: from AM6PR05CA0033.eurprd05.prod.outlook.com (2603:10a6:20b:2e::46)
 by VI1PR0801MB1933.eurprd08.prod.outlook.com (2603:10a6:800:8d::19)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3216.23; Fri, 31 Jul
 2020 13:04:25 +0000
Received: from AM5EUR03FT006.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:2e:cafe::fa) by AM6PR05CA0033.outlook.office365.com
 (2603:10a6:20b:2e::46) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3239.17 via Frontend
 Transport; Fri, 31 Jul 2020 13:04:25 +0000
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org;
 dmarc=bestguesspass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT006.mail.protection.outlook.com (10.152.16.122) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3239.17 via Frontend Transport; Fri, 31 Jul 2020 13:04:25 +0000
Received: ("Tessian outbound 73b502bf693a:v62");
 Fri, 31 Jul 2020 13:04:25 +0000
X-CheckRecipientChecked: true
X-CR-MTA-CID: d4a38bd26bfdfdcc
X-CR-MTA-TID: 64aa7808
Received: from 496eed6af33a.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 0FE1641A-1070-4EB2-A6B0-F642C035AE34.1; 
 Fri, 31 Jul 2020 13:04:19 +0000
Received: from EUR03-VE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 496eed6af33a.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 31 Jul 2020 13:04:19 +0000
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=NI0VS157I9mAVK8QkcwYAO/Y3Hh8L9hKdD+8x/vp746bQvHSIXhKCD7H6v9k6Sqf0yaxfT/72mTeXL1llC43t4eI8RX/6v1/lcj1KVGDERPGdAcGeugRwOntMOpVspvmC2PzeFtNckRSJAZkEe4KsxT6WKVS18/uCjYUe63eFW/Q7jYO5kTOGwr9f0UUDTjRyTOmzRnFvTEs05ZBYBvEeIwta23mNiZSPCIc0V59xTR6kPVhr8Z4qq0R6Di4COGPR49JB3RMDuAJhgvgnO7ei9PZwCog7uq8z0BZIsCs1EF8LP5yf5wwFElxOvLmt+4vX3TaIE5L8K92fRz21AEYrA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; 
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=fRMB0UR5j8DpixaHREdiqCaJRQTn1wXeZcoLQkXzj7I=;
 b=Z8m14NpC6SlVKyp6FtbuongFnMaH79E5/J7l2rB+YKYEZDYbTRMkuaRYp+XUK6/CvPHG3/xmFTp2iCobBYzqku5PpMdk6zqj1NchPCRr/McTuNoCKSB92XOj/FOOfmGWVvwghIf5JUJOeNdiAW93b6LOgzigyvcpS8/2ukwVdmETBc6kba6H/3sScNhGjFioNdlxTNRJFZR8iyomAc53xqzH999hzszP5lh2J6Yw199AklJ8vBmiVfBuZMNSz8ipyEvfHY/S4SgoxIuzkMbaWnjx+Qs8iFfnbb1cG6AOvq2T/2fCop5aNLDkfedDRKqplQbEGCD+Br+mLUnd1Z6rLw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com; 
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=fRMB0UR5j8DpixaHREdiqCaJRQTn1wXeZcoLQkXzj7I=;
 b=HlgN/ZuZ/ATXm8hxiGmHVuRKXkyTky2S+gJ5HvbhgZy2H/+mofskzEnrqw79XnUQpX1HWGZ3CnyoYAiMyt0ex5x60285APdn3iDGYKRc2F0k4DQucGVfX8mpoyMZDPUqXVOaOLrcvluvxhGW79GYpN4zCRCBwa/OiI1rMuZ3Ma8=
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DB6PR08MB2694.eurprd08.prod.outlook.com (2603:10a6:6:1f::24) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3239.16; Fri, 31 Jul
 2020 13:04:17 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::7c65:30f9:4e87:f58a]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::7c65:30f9:4e87:f58a%3]) with mapi id 15.20.3239.020; Fri, 31 Jul 2020
 13:04:17 +0000
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Julien Grall <julien@xen.org>
Subject: Re: [RESEND][PATCH v2 1/7] xen/guest_access: Add emacs magics
Thread-Topic: [RESEND][PATCH v2 1/7] xen/guest_access: Add emacs magics
Thread-Index: AQHWZp4EjLyFAMhDcEetGvqId62N/akhqPWA
Date: Fri, 31 Jul 2020 13:04:17 +0000
Message-ID: <5DD7E100-D146-4853-A1CA-168DA1802C7A@arm.com>
References: <20200730181827.1670-1-julien@xen.org>
 <20200730181827.1670-2-julien@xen.org>
In-Reply-To: <20200730181827.1670-2-julien@xen.org>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
Authentication-Results-Original: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [90.126.203.125]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: da39a358-0ab7-4ae8-12ea-08d83552404b
x-ms-traffictypediagnostic: DB6PR08MB2694:|VI1PR0801MB1933:
X-Microsoft-Antispam-PRVS: <VI1PR0801MB193355787E3FF4F9E5AD83659D4E0@VI1PR0801MB1933.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:5516;OLM:5516;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original: ipYbNwJxfr9/S2rTwtnpct/+T81tTe1O/QYr+xrKUHPstzedcywww/j/5Dw2tidhwG5rlUH7G3HIOjwjD5hyKphLl8Ls27By8rgG7mXN4v092ZoLYK/jRwwHolm+7muoKRttFS32pU/y2necrp9lcC+y2CJW1YIueRaMYx1q16sToKQ0IfH6bPyKrGAF8UAdtkGyy1UJjX2mGJ4l0PjqvnlSTPYKB4N7yZi0xk5wf7P//JKlBBHcFSgkMp7iv5ojWBtZ9ZuAyAIcuw10YFyFLpvnWV45E9osP/ACp+UpeXjlEhDF0/4mFGLL0WNXFwxj
X-Forefront-Antispam-Report-Untrusted: CIP:255.255.255.255; CTRY:; LANG:en;
 SCL:1; SRV:; IPV:NLI; SFV:NSPM; H:DB7PR08MB3689.eurprd08.prod.outlook.com;
 PTR:; CAT:NONE; SFTY:;
 SFS:(4636009)(39860400002)(136003)(396003)(366004)(346002)(376002)(6512007)(4326008)(6486002)(8676002)(8936002)(33656002)(5660300002)(6916009)(86362001)(2906002)(54906003)(7416002)(64756008)(478600001)(316002)(186003)(36756003)(66476007)(6506007)(53546011)(91956017)(66556008)(2616005)(66946007)(71200400001)(83380400001)(66446008)(26005)(76116006);
 DIR:OUT; SFP:1101; 
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="utf-8"
Content-ID: <4E6953D5205C4442A4B26DC8AD5222A1@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB6PR08MB2694
Original-Authentication-Results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped: AM5EUR03FT006.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs: 09d77118-7aca-4978-a6fc-08d835523ba6
X-Microsoft-Antispam: BCL:0;
X-Forefront-Antispam-Report: CIP:63.35.35.123; CTRY:IE; LANG:en; SCL:1; SRV:;
 IPV:CAL; SFV:NSPM; H:64aa7808-outbound-1.mta.getcheckrecipient.com;
 PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com; CAT:NONE; SFTY:;
 SFS:(4636009)(396003)(136003)(39860400002)(376002)(346002)(46966005)(33656002)(8676002)(82740400003)(2906002)(70586007)(186003)(356005)(36756003)(8936002)(2616005)(5660300002)(81166007)(54906003)(82310400002)(316002)(6512007)(36906005)(336012)(70206006)(6506007)(6486002)(83380400001)(53546011)(6862004)(478600001)(4326008)(86362001)(47076004)(26005);
 DIR:OUT; SFP:1101; 
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 31 Jul 2020 13:04:25.4588 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: da39a358-0ab7-4ae8-12ea-08d83552404b
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d; Ip=[63.35.35.123];
 Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource: AM5EUR03FT006.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR0801MB1933
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Julien Grall <jgrall@amazon.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Xen-devel <xen-devel@lists.xenproject.org>, nd <nd@arm.com>,
 =?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>


> On 30 Jul 2020, at 20:18, Julien Grall <julien@xen.org> wrote:
> 
> From: Julien Grall <jgrall@amazon.com>
> 
> Add emacs magics for xen/guest_access.h and
> asm-x86/guest_access.h.
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>
> Acked-by: Jan Beulich <jbeulich@suse.com>

Most files in the Xen source code seem to have a blank line before the "emacs magics".
If this is something that should be enforced, it should be done here.

If not, the change seems ok :-)

> 
> ---
>    Changes in v2:
>        - Remove the word "missing"
> ---
> xen/include/asm-x86/guest_access.h | 8 ++++++++
> xen/include/xen/guest_access.h     | 8 ++++++++
> 2 files changed, 16 insertions(+)
> 
> diff --git a/xen/include/asm-x86/guest_access.h b/xen/include/asm-x86/guest_access.h
> index 2be3577bd340..3ffde205f6a1 100644
> --- a/xen/include/asm-x86/guest_access.h
> +++ b/xen/include/asm-x86/guest_access.h
> @@ -160,3 +160,11 @@
> })
> 
> #endif /* __ASM_X86_GUEST_ACCESS_H__ */
> +/*
> + * Local variables:
> + * mode: C
> + * c-file-style: "BSD"
> + * c-basic-offset: 4
> + * indent-tabs-mode: nil
> + * End:
> + */
> diff --git a/xen/include/xen/guest_access.h b/xen/include/xen/guest_access.h
> index 09989df819ce..ef9aaa3efcfe 100644
> --- a/xen/include/xen/guest_access.h
> +++ b/xen/include/xen/guest_access.h
> @@ -33,3 +33,11 @@ char *safe_copy_string_from_guest(XEN_GUEST_HANDLE(char) u_buf,
>                                   size_t size, size_t max_size);
> 
> #endif /* __XEN_GUEST_ACCESS_H__ */
> +/*
> + * Local variables:
> + * mode: C
> + * c-file-style: "BSD"
> + * c-basic-offset: 4
> + * indent-tabs-mode: nil
> + * End:
> + */
> -- 
> 2.17.1
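
For illustration only (not part of the patch), the convention Bertrand refers to would put a blank line between the closing include guard and the trailer, like so:

```
#endif /* __XEN_GUEST_ACCESS_H__ */

/*
 * Local variables:
 * mode: C
 * c-file-style: "BSD"
 * c-basic-offset: 4
 * indent-tabs-mode: nil
 * End:
 */
```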


From xen-devel-bounces@lists.xenproject.org Fri Jul 31 13:05:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 13:05:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1UjU-0003Zg-7i; Fri, 31 Jul 2020 13:05:56 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=S17i=BK=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1k1UjT-0003Za-Id
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 13:05:55 +0000
X-Inumbo-ID: 90e929c8-d32e-11ea-abaf-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 90e929c8-d32e-11ea-abaf-12813bfff9fa;
 Fri, 31 Jul 2020 13:05:54 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id DCC97B582;
 Fri, 31 Jul 2020 13:06:06 +0000 (UTC)
Subject: Re: [PATCH] x86/vmx: reorder code in vmx_deliver_posted_intr
To: Roger Pau Monne <roger.pau@citrix.com>
References: <20200730140309.59916-1-roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <505b30dc-e504-918e-e676-70d856b76899@suse.com>
Date: Fri, 31 Jul 2020 15:05:52 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20200730140309.59916-1-roger.pau@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, Kevin Tian <kevin.tian@intel.com>,
 Wei Liu <wl@xen.org>, Jun Nakajima <jun.nakajima@intel.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 30.07.2020 16:03, Roger Pau Monne wrote:
> Remove the unneeded else branch, which allows to reduce the
> indentation of a larger block of code, while making the flow of the
> function more obvious.
> 
> No functional change intended.
> 
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>

One minor request (could likely be taken care of while
committing):

> @@ -2014,41 +2016,36 @@ static void vmx_deliver_posted_intr(struct vcpu *v, u8 vector)
>           * VMEntry as it used to be.
>           */
>          pi_set_on(&v->arch.hvm.vmx.pi_desc);
> +        vcpu_kick(v);
> +        return;
>      }
> -    else
> -    {
> -        struct pi_desc old, new, prev;
>  
> -        prev.control = v->arch.hvm.vmx.pi_desc.control;
> +    prev.control = v->arch.hvm.vmx.pi_desc.control;
>  
> -        do {
> -            /*
> -             * Currently, we don't support urgent interrupt, all
> -             * interrupts are recognized as non-urgent interrupt,
> -             * Besides that, if 'ON' is already set, no need to
> -             * sent posted-interrupts notification event as well,
> -             * according to hardware behavior.
> -             */
> -            if ( pi_test_sn(&prev) || pi_test_on(&prev) )
> -            {
> -                vcpu_kick(v);
> -                return;
> -            }
> -
> -            old.control = v->arch.hvm.vmx.pi_desc.control &
> -                          ~((1 << POSTED_INTR_ON) | (1 << POSTED_INTR_SN));
> -            new.control = v->arch.hvm.vmx.pi_desc.control |
> -                          (1 << POSTED_INTR_ON);
> +    do {
> +        /*
> +         * Currently, we don't support urgent interrupt, all
> +         * interrupts are recognized as non-urgent interrupt,
> +         * Besides that, if 'ON' is already set, no need to
> +         * sent posted-interrupts notification event as well,
> +         * according to hardware behavior.
> +         */

Would be nice to s/sent/send/ here as you move it (maybe also
remove the plural from "posted-interrupts") and - if possible -
re-flow for the now increased space on the right side.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Jul 31 13:10:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 13:10:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1Una-0004P4-RM; Fri, 31 Jul 2020 13:10:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=wy6+=BK=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1k1UnZ-0004Oz-N9
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 13:10:09 +0000
X-Inumbo-ID: 283b3154-d32f-11ea-8e33-bc764e2007e4
Received: from EUR01-VE1-obe.outbound.protection.outlook.com (unknown
 [40.107.14.50]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 283b3154-d32f-11ea-8e33-bc764e2007e4;
 Fri, 31 Jul 2020 13:10:08 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com; 
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Y89ZBDd7zijBTKMXLvfE7XpuXPY7VUTkWAkQV7hDA88=;
 b=xXomj5dLhp7fdPvW11cYojVv2suWEhAX1VoYOD8+I/HmC2cFZX3laTkFrFqZt/5YEIcdPIxZky0PRlIDsj+EAIMsbvgzkRiXsIkiK3pJpS/gHnR47N/qesxI0mOuusdRxN6NBXr0dshsLj0c0zq7NzVAcDhNk1kGRAsZoAhdiTo=
Received: from DB7PR03CA0091.eurprd03.prod.outlook.com (2603:10a6:10:72::32)
 by DBBPR08MB4853.eurprd08.prod.outlook.com (2603:10a6:10:d5::11) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3239.16; Fri, 31 Jul
 2020 13:10:06 +0000
Received: from DB5EUR03FT009.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:72:cafe::35) by DB7PR03CA0091.outlook.office365.com
 (2603:10a6:10:72::32) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3239.17 via Frontend
 Transport; Fri, 31 Jul 2020 13:10:06 +0000
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org;
 dmarc=bestguesspass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DB5EUR03FT009.mail.protection.outlook.com (10.152.20.117) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3239.20 via Frontend Transport; Fri, 31 Jul 2020 13:10:06 +0000
Received: ("Tessian outbound 1dc58800d5dd:v62");
 Fri, 31 Jul 2020 13:10:06 +0000
X-CheckRecipientChecked: true
X-CR-MTA-CID: 2d869e824ba11d01
X-CR-MTA-TID: 64aa7808
Received: from 73fabe25d96c.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 9CEA4803-572D-4A12-A296-6B9A9A1B1DFC.1; 
 Fri, 31 Jul 2020 13:10:01 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 73fabe25d96c.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 31 Jul 2020 13:10:01 +0000
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=SyMB5dleEJz+St0wItmjUdD3PXtOI/4YypKTO/Ueie+roJRubismBNeDnkmYwPmYmlg/Ioi+grRc52N8f7UF71KlOWIxenMfgkzvk9a3sjoR+dnZ3WyBvqOF8GYa1IQEXdpo2jg9LcoDKhS4c9zk+LzosXGdJpwAuH1kPv+wqQgq29hShE3op7Cs9pTbARHUCzbiJQMdZemws5YFS2zyS9z0T5/dasmk2wR6jS/Fo2/EwhpRF2kAUx3fxyqAkX6vXchaGfzAk2y8S/cJzWSfEW6JSsFLRpL8udKKC37DjYDKf18O5IfI1ZGupAlzHcHaSeDMtAZ5J9be7/hbQCgfJg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; 
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Y89ZBDd7zijBTKMXLvfE7XpuXPY7VUTkWAkQV7hDA88=;
 b=ag/Tj64MrlQqVjql5r2Gd8yGQWWB82SkmGnRIetA1K7pL4G+G0PIREHGJGygxJZ4BwJXWCNJ+6aSo5zQHOsxCOak0d3sv0uVcQavzcKTEIVYjnEqz+em73gbPeySNdNTOexcZDIY+Y+K/4QllC2g2ZJ7J7UGAbAlo22mHnuXiFd8UoadjJ8Ky/Forg4qc4ghB8zJDdDd+cPUjMObSJBlMQb3UKOej69G4L5kSwJlcDBJLVc7COyz1ZRXoAZQD0d3i3KmT/hDuzUWnJwhbNMHThD+mo4o5BJigze/pdOwrG86rgIF9E3++WTescUiadNur0jjSkYqtruj9G4jKdBL/w==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com; 
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Y89ZBDd7zijBTKMXLvfE7XpuXPY7VUTkWAkQV7hDA88=;
 b=xXomj5dLhp7fdPvW11cYojVv2suWEhAX1VoYOD8+I/HmC2cFZX3laTkFrFqZt/5YEIcdPIxZky0PRlIDsj+EAIMsbvgzkRiXsIkiK3pJpS/gHnR47N/qesxI0mOuusdRxN6NBXr0dshsLj0c0zq7NzVAcDhNk1kGRAsZoAhdiTo=
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DB8PR08MB5082.eurprd08.prod.outlook.com (2603:10a6:10:ec::16) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3239.17; Fri, 31 Jul
 2020 13:09:59 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::7c65:30f9:4e87:f58a]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::7c65:30f9:4e87:f58a%3]) with mapi id 15.20.3239.020; Fri, 31 Jul 2020
 13:09:59 +0000
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Jan Beulich <jbeulich@suse.com>
Subject: Re: [PATCH v3] xen/arm: Convert runstate address during hypcall
Thread-Topic: [PATCH v3] xen/arm: Convert runstate address during hypcall
Thread-Index: AQHWZluWPRKsKyuZRkCQVDwO60tsvakgmVkAgAEDsoCAAA4FAA==
Date: Fri, 31 Jul 2020 13:09:58 +0000
Message-ID: <5301A49B-3404-4AC2-B04E-2BB969BABEED@arm.com>
References: <3911d221ce9ed73611b93aa437b9ca227d6aa201.1596099067.git.bertrand.marquis@arm.com>
 <f48f81d5-589e-3f75-1044-583114bf497e@xen.org>
 <d8eb8052-6370-7484-1c9a-f90d83396fa1@suse.com>
In-Reply-To: <d8eb8052-6370-7484-1c9a-f90d83396fa1@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
Authentication-Results-Original: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=arm.com;
x-originating-ip: [90.126.203.125]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: c523a70b-8190-4b9c-f022-08d835530ba8
x-ms-traffictypediagnostic: DB8PR08MB5082:|DBBPR08MB4853:
X-Microsoft-Antispam-PRVS: <DBBPR08MB4853CBD9639BD7B888BF40309D4E0@DBBPR08MB4853.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:10000;OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Forefront-Antispam-Report-Untrusted: CIP:255.255.255.255; CTRY:; LANG:en;
 SCL:1; SRV:; IPV:NLI; SFV:NSPM; H:DB7PR08MB3689.eurprd08.prod.outlook.com;
 PTR:; CAT:NONE; SFTY:;
 SFS:(4636009)(39860400002)(366004)(346002)(396003)(376002)(136003)(6486002)(83380400001)(66556008)(2616005)(64756008)(66446008)(316002)(36756003)(86362001)(6916009)(54906003)(8936002)(478600001)(2906002)(91956017)(6506007)(8676002)(53546011)(66946007)(186003)(76116006)(4326008)(6512007)(5660300002)(7416002)(66476007)(26005)(71200400001)(33656002);
 DIR:OUT; SFP:1101; 
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="iso-8859-1"
Content-ID: <95DA35772808D34E98AA6B01EFFC0FC9@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB8PR08MB5082
Original-Authentication-Results: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped: DB5EUR03FT009.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs: 6b39c16a-7b89-49f1-41a3-08d835530715
X-Microsoft-Antispam: BCL:0;
X-Forefront-Antispam-Report: CIP:63.35.35.123; CTRY:IE; LANG:en; SCL:1; SRV:;
 IPV:CAL; SFV:NSPM; H:64aa7808-outbound-1.mta.getcheckrecipient.com;
 PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com; CAT:NONE; SFTY:;
 SFS:(4636009)(136003)(376002)(396003)(346002)(39860400002)(46966005)(54906003)(5660300002)(26005)(6512007)(70586007)(36756003)(86362001)(2616005)(70206006)(336012)(316002)(186003)(8676002)(8936002)(478600001)(6862004)(4326008)(6486002)(82740400003)(82310400002)(2906002)(107886003)(6506007)(53546011)(33656002)(47076004)(356005)(81166007)(83380400001);
 DIR:OUT; SFP:1101; 
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 31 Jul 2020 13:10:06.7033 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: c523a70b-8190-4b9c-f022-08d835530ba8
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d; Ip=[63.35.35.123];
 Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource: DB5EUR03FT009.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBBPR08MB4853
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Xen-devel <xen-devel@lists.xenproject.org>, nd <nd@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?iso-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>



> On 31 Jul 2020, at 14:19, Jan Beulich <jbeulich@suse.com> wrote:
>
> On 30.07.2020 22:50, Julien Grall wrote:
>> On 30/07/2020 11:24, Bertrand Marquis wrote:
>>> At the moment on Arm, a Linux guest running with KPTI enabled will
>>> cause the following error when a context switch happens in user mode:
>>> (XEN) p2m.c:1890: d1v0: Failed to walk page-table va 0xffffff837ebe0cd0
>>>
>>> The error is caused by the virtual address for the runstate area
>>> registered by the guest only being accessible when the guest is running
>>> in kernel space when KPTI is enabled.
>>>
>>> To solve this issue, this patch does the translation from virtual
>>> address to physical address during the hypercall and maps the
>>> required pages using vmap. This removes the conversion from virtual
>>> to physical address during the context switch, which solves the
>>> problem with KPTI.
>>
>> To echo what Jan said on the previous version, this is a change in a
>> stable ABI and therefore may break existing guests. FAOD, I agree in
>> principle with the idea. However, we want to explain why breaking the
>> ABI is the *only* viable solution.
>>
>> From my understanding, it is not possible to fix without an ABI
>> breakage because the hypervisor doesn't know when the guest will switch
>> back from userspace to kernel space.
>
> And there's also no way to know on Arm, by e.g. enabling a suitable
> intercept?

An intercept would mean that Xen gets notified whenever a guest is switching
from kernel mode to user mode.
There is nothing in this process which could be intercepted by Xen, apart from
maybe trapping all accesses to MMU registers, which would be very complex and
slow.

>
>> The risk is that the information
>> provided by the runstate wouldn't be accurate and could
>> affect how the guest handles stolen time.
>>
>> Additionally there are a few issues with the current interface:
>>    1) It is assuming the virtual address cannot be re-used by the
>> userspace. Thankfully Linux has a split address space. But this may
>> change with KPTI in place.
>>    2) When updating the page-tables, the guest has to go through an
>> invalid mapping. So the translation may fail at any point.
>>
>> IOW, the existing interface can lead to random memory corruption and
>> inaccuracy of the stolen time.
>>
>>>
>>> This is done only on the Arm architecture; the behaviour on x86 is not
>>> modified by this patch and the address conversion is done as before
>>> during each context switch.
>>>
>>> This is introducing several limitations in comparison to the previous
>>> behaviour (on Arm only):
>>> - if the guest is remapping the area at a different physical address, Xen
>>> will continue to update the area at the previous physical address. As
>>> the area is in kernel space and usually defined as a global variable, this
>>> is something which is believed not to happen. If this is required by a
>>> guest, it will have to call the hypercall with the new area (even if it
>>> is at the same virtual address).
>>> - the area needs to be mapped during the hypercall. For the same reasons
>>> as for the previous case, even if the area is registered for a different
>>> vcpu. It is believed that registering an area using an unmapped virtual
>>> address is not something that is done.
>>
>> It is not clear whether the virtual address refers to the current vCPU
>> or the vCPU you register the runstate for. From the past discussion, I
>> think you refer to the former. It would be good to clarify.
>>
>> Additionally, all the new restrictions should be documented in the
>> public interface. So an OS developer can find the differences between
>> the architectures.
>>
>> To answer Jan's concern, we certainly don't know all the existing guest
>> OSes; however, we also need to balance the benefit for a large
>> majority of the users.
>>
>> From previous discussion, the current approach was deemed to be
>> acceptable on Arm and, AFAICT, also x86 (see [1]).
>>
>> TBH, I would rather see the approach be common. For that, we would need
>> an agreement from Andrew and Jan on the approach here. Meanwhile, I think
>> this is the best approach to address the concern from Arm users.
>
> Just FTR: If x86 was to also change, VCPUOP_register_vcpu_time_memory_area
> would need taking care of as well, as it's using the same underlying model
> (including recovery logic for when, while the guest is in user mode, the
> update has failed).
>
>>> @@ -275,36 +276,156 @@ static void ctxt_switch_to(struct vcpu *n)
>>>      virt_timer_restore(n);
>>>  }
>>>
>>> -/* Update per-VCPU guest runstate shared memory area (if registered). */
>>> -static void update_runstate_area(struct vcpu *v)
>>> +static void cleanup_runstate_vcpu_locked(struct vcpu *v)
>>>  {
>>> -    void __user *guest_handle = NULL;
>>> +    if ( v->arch.runstate_guest )
>>> +    {
>>> +        vunmap((void *)((unsigned long)v->arch.runstate_guest & PAGE_MASK));
>>> +
>>> +        put_page(v->arch.runstate_guest_page[0]);
>>> +
>>> +        if ( v->arch.runstate_guest_page[1] )
>>> +            put_page(v->arch.runstate_guest_page[1]);
>>> +
>>> +        v->arch.runstate_guest = NULL;
>>> +    }
>>> +}
>>> +
>>> +void arch_vcpu_cleanup_runstate(struct vcpu *v)
>>> +{
>>> +    spin_lock(&v->arch.runstate_guest_lock);
>>> +
>>> +    cleanup_runstate_vcpu_locked(v);
>>> +
>>> +    spin_unlock(&v->arch.runstate_guest_lock);
>>> +}
>>> +
>>> +static int setup_runstate_vcpu_locked(struct vcpu *v, vaddr_t vaddr)
>>> +{
>>> +    unsigned int offset;
>>> +    mfn_t mfn[2];
>>> +    struct page_info *page;
>>> +    unsigned int numpages;
>>>      struct vcpu_runstate_info runstate;
>>> +    void *p;
>>>
>>> -    if ( guest_handle_is_null(runstate_guest(v)) )
>>> -        return;
>>> +    /* user can pass a NULL address to unregister a previous area */
>>> +    if ( vaddr == 0 )
>>> +        return 0;
>>> +
>>> +    offset = vaddr & ~PAGE_MASK;
>>> +
>>> +    /* provided address must be aligned to a 64bit */
>>> +    if ( offset % alignof(struct vcpu_runstate_info) )
>>
>> This new restriction wants to be explained in the commit message and
>> public header.
>
> And the expression would imo also better use alignof(runstate).

Ok, I will fix that.

Bertrand



From xen-devel-bounces@lists.xenproject.org Fri Jul 31 13:13:11 2020
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
To: "'Jan Beulich'" <jbeulich@suse.com>
References: <20200731123926.28970-1-paul@xen.org>
 <20200731123926.28970-3-paul@xen.org>
 <a4856c33-8bb0-4afa-cc71-3af4c229bc27@suse.com>
In-Reply-To: <a4856c33-8bb0-4afa-cc71-3af4c229bc27@suse.com>
Subject: RE: [PATCH v2 2/2] x86/hvm: simplify 'mmio_direct' check in
 epte_get_entry_emt()
Date: Fri, 31 Jul 2020 14:07:54 +0100
Message-ID: <004501d6673b$9adffbf0$d09ff3d0$@xen.org>
Cc: xen-devel@lists.xenproject.org, 'Paul Durrant' <pdurrant@amazon.com>,
 'Roger Pau Monné' <roger.pau@citrix.com>,
 'Wei Liu' <wl@xen.org>, 'Andrew Cooper' <andrew.cooper3@citrix.com>

> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Sent: 31 July 2020 13:58
> To: Paul Durrant <paul@xen.org>
> Cc: xen-devel@lists.xenproject.org; Paul Durrant <pdurrant@amazon.com>; Andrew Cooper
> <andrew.cooper3@citrix.com>; Wei Liu <wl@xen.org>; Roger Pau Monné <roger.pau@citrix.com>
> Subject: Re: [PATCH v2 2/2] x86/hvm: simplify 'mmio_direct' check in epte_get_entry_emt()
>
> On 31.07.2020 14:39, Paul Durrant wrote:
> > From: Paul Durrant <pdurrant@amazon.com>
> >
> > Re-factor the code to take advantage of the fact that the APIC access page is
> > a 'special' page.
>
> Hmm, that's not going quite as far as I was thinking to go: In
> particular, you leave in place the set_mmio_p2m_entry() use
> in vmx_alloc_vlapic_mapping(). With that replaced, the
> re-ordering in epte_get_entry_emt() that you do shouldn't
> be necessary; you'd simply drop the checking of the
> specific MFN.

OK, it still needs to go in the p2m though, so are you suggesting just
calling p2m_set_entry() directly?

> However, ...
>
> > --- a/xen/arch/x86/hvm/mtrr.c
> > +++ b/xen/arch/x86/hvm/mtrr.c
> > @@ -814,29 +814,22 @@ int epte_get_entry_emt(struct domain *d, unsigned long gfn, mfn_t mfn,
> >          return -1;
> >      }
> >
> > -    if ( direct_mmio )
> > -    {
> > -        if ( (mfn_x(mfn) ^ mfn_x(d->arch.hvm.vmx.apic_access_mfn)) >> order )
> > -            return MTRR_TYPE_UNCACHABLE;
> > -        if ( order )
> > -            return -1;
>
> ... this part of the logic wants retaining, I think, i.e.
> reporting back to the guest that the mapping needs splitting.
> I'm afraid I have to withdraw my R-b on patch 1 for this
> reason, as the check needs to be added there already.

To be clear... You mean I need the:

if ( order )
  return -1;

there?

  Paul

>
> Jan



From xen-devel-bounces@lists.xenproject.org Fri Jul 31 13:16:42 2020
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Julien Grall <julien@xen.org>
Subject: Re: [PATCH v3] xen/arm: Convert runstate address during hypcall
Date: Fri, 31 Jul 2020 13:16:22 +0000
Message-ID: <DF7D0DF3-F494-4F1B-9877-E7B2A8BAAC3B@arm.com>
References: <3911d221ce9ed73611b93aa437b9ca227d6aa201.1596099067.git.bertrand.marquis@arm.com>
 <f48f81d5-589e-3f75-1044-583114bf497e@xen.org>
In-Reply-To: <f48f81d5-589e-3f75-1044-583114bf497e@xen.org>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Xen-devel <xen-devel@lists.xenproject.org>, nd <nd@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Roger Pau Monné <roger.pau@citrix.com>



> On 30 Jul 2020, at 22:50, Julien Grall <julien@xen.org> wrote:
>
> Hi Bertrand,
>
> To avoid extra work on your side, I would recommend waiting a bit before
> sending a new version. It would be good to at least settle the conversation
> in v2 regarding the approach taken.

Thanks for the advice.
I thought all discussions were done the first time we started the discussion
on the last thread, but there are apparently more points to go through.

>=20
> On 30/07/2020 11:24, Bertrand Marquis wrote:
>> At the moment on Arm, a Linux guest running with KPTI enabled will
>> cause the following error when a context switch happens in user mode:
>> (XEN) p2m.c:1890: d1v0: Failed to walk page-table va 0xffffff837ebe0cd0
>> The error is caused by the virtual address for the runstate area
>> registered by the guest only being accessible when the guest is running
>> in kernel space when KPTI is enabled.
>> To solve this issue, this patch does the translation from virtual
>> address to physical address during the hypercall and maps the
>> required pages using vmap. This removes the conversion from virtual
>> to physical address during the context switch, which solves the
>> problem with KPTI.
>
> To echo what Jan said on the previous version, this is a change in a stable
> ABI and therefore may break existing guests. FAOD, I agree in principle with
> the idea. However, we want to explain why breaking the ABI is the *only*
> viable solution.
>
> From my understanding, it is not possible to fix without an ABI breakage
> because the hypervisor doesn't know when the guest will switch back from
> userspace to kernel space. The risk is the information provided by the
> runstate wouldn't contain accurate information and could affect how the
> guest handles stolen time.
>
> Additionally there are a few issues with the current interface:
>   1) It is assuming the virtual address cannot be re-used by the userspace.
> Thankfully Linux has a split address space. But this may change with KPTI
> in place.
>   2) When updating the page-tables, the guest has to go through an invalid
> mapping. So the translation may fail at any point.
>
> IOW, the existing interface can lead to random memory corruption and
> inaccuracy of the stolen time.

I agree, but I am not sure what you want me to do here.
Should I add more details in the commit message?

>
>> This is done only on the Arm architecture; the behaviour on x86 is not
>> modified by this patch and the address conversion is done as before
>> during each context switch.
>> This introduces several limitations in comparison to the previous
>> behaviour (on Arm only):
>> - if the guest remaps the area at a different physical address, Xen
>> will continue to update the area at the previous physical address. As
>> the area is in kernel space and usually defined as a global variable, this
>> is something which is believed not to happen. If this is required by a
>> guest, it will have to call the hypercall with the new area (even if it
>> is at the same virtual address).
>> - the area needs to be mapped during the hypercall, for the same reasons
>> as in the previous case, even if the area is registered for a different
>> vcpu. It is believed that registering an area using an unmapped virtual
>> address is not something that is done.
>
> It is not clear whether the virtual address refers to the current vCPU or
> the vCPU you register the runstate for. From the past discussion, I think
> you refer to the former. It would be good to clarify.

OK, I will try to clarify.

>
> Additionally, all the new restrictions should be documented in the public
> interface, so an OS developer can find the differences between the
> architectures.
>
> To answer Jan's concern, we certainly don't know all the guest OSes in
> existence; however, we also need to balance the benefit for a large
> majority of the users.
>
> From previous discussion, the current approach was deemed to be acceptable
> on Arm and, AFAICT, also x86 (see [1]).
>
> TBH, I would rather see the approach be common. For that, we would need
> agreement from Andrew and Jan on the approach here. Meanwhile, I think this
> is the best approach to address the concern from Arm users.

From this I get that you want me to document the specific behaviour on Arm
in the public header describing the hypercall, right?

Bertrand

>
>> Inline functions in headers could not be used, as the architecture
>> domain.h is included before the global domain.h, making it impossible
>> to use struct vcpu inside the architecture header.
>> This should not have any performance impact, as the hypercall is
>> usually only called once per vcpu.
>> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
>> ---
>>   Changes in v2
>>     - use vmap to map the pages during the hypercall.
>>     - reintroduce initial copy during hypercall.
>>   Changes in v3
>>     - Fix Coding style
>>     - Fix vaddr printing on arm32
>>     - use write_atomic to modify state_entry_time update bit (only
>>     in guest structure as the bit is not used inside Xen copy)
>> ---
>>  xen/arch/arm/domain.c        | 161 ++++++++++++++++++++++++++++++-----
>>  xen/arch/x86/domain.c        |  29 ++++++-
>>  xen/arch/x86/x86_64/domain.c |   4 +-
>>  xen/common/domain.c          |  19 ++---
>>  xen/include/asm-arm/domain.h |   9 ++
>>  xen/include/asm-x86/domain.h |  16 ++++
>>  xen/include/xen/domain.h     |   5 ++
>>  xen/include/xen/sched.h      |  16 +---
>>  8 files changed, 206 insertions(+), 53 deletions(-)
>> diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
>> index 31169326b2..8b36946017 100644
>> --- a/xen/arch/arm/domain.c
>> +++ b/xen/arch/arm/domain.c
>> @@ -19,6 +19,7 @@
>>  #include <xen/sched.h>
>>  #include <xen/softirq.h>
>>  #include <xen/wait.h>
>> +#include <xen/vmap.h>
>>
>>  #include <asm/alternative.h>
>>  #include <asm/cpuerrata.h>
>> @@ -275,36 +276,156 @@ static void ctxt_switch_to(struct vcpu *n)
>>      virt_timer_restore(n);
>>  }
>>
>> -/* Update per-VCPU guest runstate shared memory area (if registered). */
>> -static void update_runstate_area(struct vcpu *v)
>> +static void cleanup_runstate_vcpu_locked(struct vcpu *v)
>>  {
>> -    void __user *guest_handle = NULL;
>> +    if ( v->arch.runstate_guest )
>> +    {
>> +        vunmap((void *)((unsigned long)v->arch.runstate_guest & PAGE_MASK));
>> +
>> +        put_page(v->arch.runstate_guest_page[0]);
>> +
>> +        if ( v->arch.runstate_guest_page[1] )
>> +            put_page(v->arch.runstate_guest_page[1]);
>> +
>> +        v->arch.runstate_guest = NULL;
>> +    }
>> +}
>> +
>> +void arch_vcpu_cleanup_runstate(struct vcpu *v)
>> +{
>> +    spin_lock(&v->arch.runstate_guest_lock);
>> +
>> +    cleanup_runstate_vcpu_locked(v);
>> +
>> +    spin_unlock(&v->arch.runstate_guest_lock);
>> +}
>> +
>> +static int setup_runstate_vcpu_locked(struct vcpu *v, vaddr_t vaddr)
>> +{
>> +    unsigned int offset;
>> +    mfn_t mfn[2];
>> +    struct page_info *page;
>> +    unsigned int numpages;
>>      struct vcpu_runstate_info runstate;
>> +    void *p;
>>
>> -    if ( guest_handle_is_null(runstate_guest(v)) )
>> -        return;
>> +    /* user can pass a NULL address to unregister a previous area */
>> +    if ( vaddr == 0 )
>> +        return 0;
>> +
>> +    offset = vaddr & ~PAGE_MASK;
>> +
>> +    /* provided address must be aligned to a 64bit */
>> +    if ( offset % alignof(struct vcpu_runstate_info) )
>
> This new restriction wants to be explained in the commit message and
> public header.
>
>> +    {
>> +        gprintk(XENLOG_WARNING, "Cannot map runstate pointer at 0x%"PRIvaddr
>> +                ": Invalid offset\n", vaddr);
>
> We usually enforce 80 characters per line except for format strings. So it
> is easier to grep them in the code.
>
>> +        return -EINVAL;
>> +    }
>> +
>> +    page = get_page_from_gva(v, vaddr, GV2M_WRITE);
>> +    if ( !page )
>> +    {
>> +        gprintk(XENLOG_WARNING, "Cannot map runstate pointer at 0x%"PRIvaddr
>> +                ": Page is not mapped\n", vaddr);
>> +        return -EINVAL;
>> +    }
>> +
>> +    mfn[0] = page_to_mfn(page);
>> +    v->arch.runstate_guest_page[0] = page;
>> +
>> +    if ( offset > (PAGE_SIZE - sizeof(struct vcpu_runstate_info)) )
>> +    {
>> +        /* guest area is crossing pages */
>> +        page = get_page_from_gva(v, vaddr + PAGE_SIZE, GV2M_WRITE);
>> +        if ( !page )
>> +        {
>> +            put_page(v->arch.runstate_guest_page[0]);
>> +            gprintk(XENLOG_WARNING,
>> +                    "Cannot map runstate pointer at 0x%"PRIvaddr
>> +                    ": 2nd Page is not mapped\n", vaddr);
>> +            return -EINVAL;
>> +        }
>> +        mfn[1] = page_to_mfn(page);
>> +        v->arch.runstate_guest_page[1] = page;
>> +        numpages = 2;
>> +    }
>> +    else
>> +    {
>> +        v->arch.runstate_guest_page[1] = NULL;
>> +        numpages = 1;
>> +    }
>>
>> -    memcpy(&runstate, &v->runstate, sizeof(runstate));
>> +    p = vmap(mfn, numpages);
>> +    if ( !p )
>> +    {
>> +        put_page(v->arch.runstate_guest_page[0]);
>> +        if ( numpages == 2 )
>> +            put_page(v->arch.runstate_guest_page[1]);
>>
>> -    if ( VM_ASSIST(v->domain, runstate_update_flag) )
>> +        gprintk(XENLOG_WARNING, "Cannot map runstate pointer at 0x%"PRIvaddr
>> +                ": vmap error\n", vaddr);
>> +        return -EINVAL;
>> +    }
>> +
>> +    v->arch.runstate_guest = p + offset;
>> +
>> +    if (v == current)
>> +        memcpy(v->arch.runstate_guest, &v->runstate, sizeof(v->runstate));
>> +    else
>>      {
>> -        guest_handle = &v->runstate_guest.p->state_entry_time + 1;
>> -        guest_handle--;
>> -        runstate.state_entry_time |= XEN_RUNSTATE_UPDATE;
>> -        __raw_copy_to_guest(guest_handle,
>> -                            (void *)(&runstate.state_entry_time + 1) - 1, 1);
>> -        smp_wmb();
>> +        vcpu_runstate_get(v, &runstate);
>> +        memcpy(v->arch.runstate_guest, &runstate, sizeof(v->runstate));
>>      }
>>
>> -    __copy_to_guest(runstate_guest(v), &runstate, 1);
>> +    return 0;
>> +}
>> +
>> +int arch_vcpu_setup_runstate(struct vcpu *v,
>> +                             struct vcpu_register_runstate_memory_area area)
>> +{
>> +    int rc;
>> +
>> +    spin_lock(&v->arch.runstate_guest_lock);
>> +
>> +    /* cleanup if we are recalled */
>> +    cleanup_runstate_vcpu_locked(v);
>> +
>> +    rc = setup_runstate_vcpu_locked(v, (vaddr_t)area.addr.v);
>> +
>> +    spin_unlock(&v->arch.runstate_guest_lock);
>>
>> -    if ( guest_handle )
>> +    return rc;
>> +}
>> +
>> +
>> +/* Update per-VCPU guest runstate shared memory area (if registered). */
>> +static void update_runstate_area(struct vcpu *v)
>> +{
>> +    spin_lock(&v->arch.runstate_guest_lock);
>> +
>> +    if ( v->arch.runstate_guest )
>>      {
>> -        runstate.state_entry_time &= ~XEN_RUNSTATE_UPDATE;
>> -        smp_wmb();
>> -        __raw_copy_to_guest(guest_handle,
>> -                            (void *)(&runstate.state_entry_time + 1) - 1, 1);
>> +        if ( VM_ASSIST(v->domain, runstate_update_flag) )
>> +        {
>> +            v->runstate.state_entry_time |= XEN_RUNSTATE_UPDATE;
>> +            write_atomic(&(v->arch.runstate_guest->state_entry_time),
>> +                    v->runstate.state_entry_time);
>
> NIT: You want to indent v-> at the same level as the argument from the
> first line.
>
> Also, I think you are missing a smp_wmb() here.
>
>> +        }
>> +
>> +        memcpy(v->arch.runstate_guest, &v->runstate, sizeof(v->runstate));
>> +
>> +        if ( VM_ASSIST(v->domain, runstate_update_flag) )
>> +        {
>> +            /* copy must be done before switching the bit */
>> +            smp_wmb();
>> +            v->runstate.state_entry_time &= ~XEN_RUNSTATE_UPDATE;
>> +            write_atomic(&(v->arch.runstate_guest->state_entry_time),
>> +                    v->runstate.state_entry_time);
>
> Same remark for the indentation.
>=20
>> +        }
>>      }
>> +
>> +    spin_unlock(&v->arch.runstate_guest_lock);
>>  }
>>
>>  static void schedule_tail(struct vcpu *prev)
>> @@ -560,6 +681,8 @@ int arch_vcpu_create(struct vcpu *v)
>>      v->arch.saved_context.sp = (register_t)v->arch.cpu_info;
>>      v->arch.saved_context.pc = (register_t)continue_new_vcpu;
>>
>> +    spin_lock_init(&v->arch.runstate_guest_lock);
>> +
>>      /* Idle VCPUs don't need the rest of this setup */
>>      if ( is_idle_vcpu(v) )
>>          return rc;
>> diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
>> index fee6c3931a..b9b81e94e5 100644
>> --- a/xen/arch/x86/domain.c
>> +++ b/xen/arch/x86/domain.c
>> @@ -1642,6 +1642,29 @@ void paravirt_ctxt_switch_to(struct vcpu *v)
>>          wrmsr_tsc_aux(v->arch.msrs->tsc_aux);
>>  }
>>
>> +int arch_vcpu_setup_runstate(struct vcpu *v,
>> +                             struct vcpu_register_runstate_memory_area area)
>> +{
>> +    struct vcpu_runstate_info runstate;
>> +
>> +    runstate_guest(v) = area.addr.h;
>> +
>> +    if ( v == current )
>> +        __copy_to_guest(runstate_guest(v), &v->runstate, 1);
>> +    else
>> +    {
>> +        vcpu_runstate_get(v, &runstate);
>> +        __copy_to_guest(runstate_guest(v), &runstate, 1);
>> +    }
>> +
>> +    return 0;
>> +}
>> +
>> +void arch_vcpu_cleanup_runstate(struct vcpu *v)
>> +{
>> +    set_xen_guest_handle(runstate_guest(v), NULL);
>> +}
>> +
>>  /* Update per-VCPU guest runstate shared memory area (if registered). */
>>  bool update_runstate_area(struct vcpu *v)
>>  {
>> @@ -1660,8 +1683,8 @@ bool update_runstate_area(struct vcpu *v)
>>      if ( VM_ASSIST(v->domain, runstate_update_flag) )
>>      {
>>          guest_handle = has_32bit_shinfo(v->domain)
>> -            ? &v->runstate_guest.compat.p->state_entry_time + 1
>> -            : &v->runstate_guest.native.p->state_entry_time + 1;
>> +            ? &v->arch.runstate_guest.compat.p->state_entry_time + 1
>> +            : &v->arch.runstate_guest.native.p->state_entry_time + 1;
>>          guest_handle--;
>>          runstate.state_entry_time |= XEN_RUNSTATE_UPDATE;
>>          __raw_copy_to_guest(guest_handle,
>> @@ -1674,7 +1697,7 @@ bool update_runstate_area(struct vcpu *v)
>>          struct compat_vcpu_runstate_info info;
>>
>>          XLAT_vcpu_runstate_info(&info, &runstate);
>> -        __copy_to_guest(v->runstate_guest.compat, &info, 1);
>> +        __copy_to_guest(v->arch.runstate_guest.compat, &info, 1);
>>          rc = true;
>>      }
>>      else
>> diff --git a/xen/arch/x86/x86_64/domain.c b/xen/arch/x86/x86_64/domain.c
>> index c46dccc25a..b879e8dd2c 100644
>> --- a/xen/arch/x86/x86_64/domain.c
>> +++ b/xen/arch/x86/x86_64/domain.c
>> @@ -36,7 +36,7 @@ arch_compat_vcpu_op(
>>              break;
>>
>>          rc = 0;
>> -        guest_from_compat_handle(v->runstate_guest.compat, area.addr.h);
>> +        guest_from_compat_handle(v->arch.runstate_guest.compat, area.addr.h);
>>
>>          if ( v == current )
>>          {
>> @@ -49,7 +49,7 @@ arch_compat_vcpu_op(
>>              vcpu_runstate_get(v, &runstate);
>>              XLAT_vcpu_runstate_info(&info, &runstate);
>>          }
>> -        __copy_to_guest(v->runstate_guest.compat, &info, 1);
>> +        __copy_to_guest(v->arch.runstate_guest.compat, &info, 1);
>>
>>          break;
>>      }
>> diff --git a/xen/common/domain.c b/xen/common/domain.c
>> index f0f9c62feb..739c6b7b62 100644
>> --- a/xen/common/domain.c
>> +++ b/xen/common/domain.c
>> @@ -727,7 +727,10 @@ int domain_kill(struct domain *d)
>>          if ( cpupool_move_domain(d, cpupool0) )
>>              return -ERESTART;
>>          for_each_vcpu ( d, v )
>> +        {
>> +            arch_vcpu_cleanup_runstate(v);
>>              unmap_vcpu_info(v);
>> +        }
>>          d->is_dying = DOMDYING_dead;
>>          /* Mem event cleanup has to go here because the rings
>>           * have to be put before we call put_domain. */
>> @@ -1167,7 +1170,7 @@ int domain_soft_reset(struct domain *d)
>>        for_each_vcpu ( d, v )
>>      {
>> -        set_xen_guest_handle(runstate_guest(v), NULL);
>> +        arch_vcpu_cleanup_runstate(v);
>>          unmap_vcpu_info(v);
>>      }
>>  @@ -1494,7 +1497,6 @@ long do_vcpu_op(int cmd, unsigned int vcpuid, XEN_GUEST_HANDLE_PARAM(void) arg)
>>      case VCPUOP_register_runstate_memory_area:
>>      {
>>          struct vcpu_register_runstate_memory_area area;
>> -        struct vcpu_runstate_info runstate;
>>            rc = -EFAULT;
>>          if ( copy_from_guest(&area, arg, 1) )
>> @@ -1503,18 +1505,7 @@ long do_vcpu_op(int cmd, unsigned int vcpuid, XEN_GUEST_HANDLE_PARAM(void) arg)
>>          if ( !guest_handle_okay(area.addr.h, 1) )
>>              break;
>>  -        rc = 0;
>> -        runstate_guest(v) = area.addr.h;
>> -
>> -        if ( v == current )
>> -        {
>> -            __copy_to_guest(runstate_guest(v), &v->runstate, 1);
>> -        }
>> -        else
>> -        {
>> -            vcpu_runstate_get(v, &runstate);
>> -            __copy_to_guest(runstate_guest(v), &runstate, 1);
>> -        }
>> +        rc = arch_vcpu_setup_runstate(v, area);
>>            break;
>>      }
>> diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
>> index 6819a3bf38..2f62c3e8f5 100644
>> --- a/xen/include/asm-arm/domain.h
>> +++ b/xen/include/asm-arm/domain.h
>> @@ -204,6 +204,15 @@ struct arch_vcpu
>>       */
>>      bool need_flush_to_ram;
>>  +    /* runstate guest lock */
>> +    spinlock_t runstate_guest_lock;
>> +
>> +    /* runstate guest info */
>> +    struct vcpu_runstate_info *runstate_guest;
>> +
>> +    /* runstate pages mapped for runstate_guest */
>> +    struct page_info *runstate_guest_page[2];
>> +
>>  }  __cacheline_aligned;
>>    void vcpu_show_execution_state(struct vcpu *);
>> diff --git a/xen/include/asm-x86/domain.h b/xen/include/asm-x86/domain.h
>> index 635335634d..007ccfbf9f 100644
>> --- a/xen/include/asm-x86/domain.h
>> +++ b/xen/include/asm-x86/domain.h
>> @@ -11,6 +11,11 @@
>>  #include <asm/x86_emulate.h>
>>  #include <public/vcpu.h>
>>  #include <public/hvm/hvm_info_table.h>
>> +#ifdef CONFIG_COMPAT
>> +#include <compat/vcpu.h>
>> +DEFINE_XEN_GUEST_HANDLE(vcpu_runstate_info_compat_t);
>> +#endif
>> +
>>    #define has_32bit_shinfo(d)    ((d)->arch.has_32bit_shinfo)
>>  @@ -638,6 +643,17 @@ struct arch_vcpu
>>      struct {
>>          bool next_interrupt_enabled;
>>      } monitor;
>> +
>> +#ifndef CONFIG_COMPAT
>> +# define runstate_guest(v) ((v)->arch.runstate_guest)
>> +    XEN_GUEST_HANDLE(vcpu_runstate_info_t) runstate_guest; /* guest address */
>> +#else
>> +# define runstate_guest(v) ((v)->arch.runstate_guest.native)
>> +    union {
>> +        XEN_GUEST_HANDLE(vcpu_runstate_info_t) native;
>> +        XEN_GUEST_HANDLE(vcpu_runstate_info_compat_t) compat;
>> +    } runstate_guest;
>> +#endif
>>  };
>>    struct guest_memory_policy
>> diff --git a/xen/include/xen/domain.h b/xen/include/xen/domain.h
>> index 7e51d361de..5e8cbba31d 100644
>> --- a/xen/include/xen/domain.h
>> +++ b/xen/include/xen/domain.h
>> @@ -5,6 +5,7 @@
>>  #include <xen/types.h>
>>    #include <public/xen.h>
>> +#include <public/vcpu.h>
>>  #include <asm/domain.h>
>>  #include <asm/numa.h>
>>  @@ -63,6 +64,10 @@ void arch_vcpu_destroy(struct vcpu *v);
>>  int map_vcpu_info(struct vcpu *v, unsigned long gfn, unsigned offset);
>>  void unmap_vcpu_info(struct vcpu *v);
>>  +int arch_vcpu_setup_runstate(struct vcpu *v,
>> +                             struct vcpu_register_runstate_memory_area area);
>> +void arch_vcpu_cleanup_runstate(struct vcpu *v);
>> +
>>  int arch_domain_create(struct domain *d,
>>                         struct xen_domctl_createdomain *config);
>>  diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
>> index ac53519d7f..fac030fb83 100644
>> --- a/xen/include/xen/sched.h
>> +++ b/xen/include/xen/sched.h
>> @@ -29,11 +29,6 @@
>>  #include <public/vcpu.h>
>>  #include <public/event_channel.h>
>>  -#ifdef CONFIG_COMPAT
>> -#include <compat/vcpu.h>
>> -DEFINE_XEN_GUEST_HANDLE(vcpu_runstate_info_compat_t);
>> -#endif
>> -
>>  /*
>>   * Stats
>>   *
>> @@ -166,16 +161,7 @@ struct vcpu
>>      struct sched_unit *sched_unit;
>>        struct vcpu_runstate_info runstate;
>> -#ifndef CONFIG_COMPAT
>> -# define runstate_guest(v) ((v)->runstate_guest)
>> -    XEN_GUEST_HANDLE(vcpu_runstate_info_t) runstate_guest; /* guest address */
>> -#else
>> -# define runstate_guest(v) ((v)->runstate_guest.native)
>> -    union {
>> -        XEN_GUEST_HANDLE(vcpu_runstate_info_t) native;
>> -        XEN_GUEST_HANDLE(vcpu_runstate_info_compat_t) compat;
>> -    } runstate_guest; /* guest address */
>> -#endif
>> +
>>      unsigned int     new_state;
>>        /* Has the FPU been initialised? */
>=20
> [1] <b6511a29-35a4-a1d0-dd29-7de4103ec98e@citrix.com>
>
>
> --
> Julien Grall



From xen-devel-bounces@lists.xenproject.org Fri Jul 31 13:17:21 2020
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Stefano Stabellini <sstabellini@kernel.org>
Subject: Re: [PATCH v3] xen/arm: Convert runstate address during hypcall
Thread-Topic: [PATCH v3] xen/arm: Convert runstate address during hypcall
Thread-Index: AQHWZluWPRKsKyuZRkCQVDwO60tsvakgmVkAgABK7QCAAMjLAA==
Date: Fri, 31 Jul 2020 13:17:09 +0000
Message-ID: <91E1094A-C03D-4DD7-AC4B-0A01330A043F@arm.com>
References: <3911d221ce9ed73611b93aa437b9ca227d6aa201.1596099067.git.bertrand.marquis@arm.com>
 <f48f81d5-589e-3f75-1044-583114bf497e@xen.org>
 <alpine.DEB.2.21.2007301422030.1767@sstabellini-ThinkPad-T480s>
In-Reply-To: <alpine.DEB.2.21.2007301422030.1767@sstabellini-ThinkPad-T480s>
Cc: Julien Grall <julien@xen.org>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Xen-devel <xen-devel@lists.xenproject.org>, nd <nd@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Roger Pau Monné <roger.pau@citrix.com>



> On 31 Jul 2020, at 03:18, Stefano Stabellini <sstabellini@kernel.org> wrote:
>
> On Thu, 30 Jul 2020, Julien Grall wrote:
>> Hi Bertrand,
>>
>> To avoid extra work on your side, I would recommend to wait a bit before
>> sending a new version. It would be good to at least settle the conversation in
>> v2 regarding the approach taken.
>>
>> On 30/07/2020 11:24, Bertrand Marquis wrote:
>>> At the moment on Arm, a Linux guest running with KTPI enabled will
>>> cause the following error when a context switch happens in user mode:
>>> (XEN) p2m.c:1890: d1v0: Failed to walk page-table va 0xffffff837ebe0cd0
>>>
>>> The error is caused by the virtual address for the runstate area
>>> registered by the guest only being accessible when the guest is running
>>> in kernel space when KPTI is enabled.
>>>=20
>>> To solve this issue, this patch is doing the translation from virtual
>>> address to physical address during the hypercall and mapping the
>>> required pages using vmap. This is removing the conversion from virtual
>>> to physical address during the context switch which is solving the
>>> problem with KPTI.
>>
>> To echo what Jan said on the previous version, this is a change in a stable
>> ABI and therefore may break existing guests. FAOD, I agree in principle with
>> the idea. However, we want to explain why breaking the ABI is the *only*
>> viable solution.
>>
>> From my understanding, it is not possible to fix without an ABI breakage
>> because the hypervisor doesn't know when the guest will switch back from
>> userspace to kernel space. The risk is the information provided by the
>> runstate wouldn't contain accurate information and could affect how the guest
>> handles stolen time.
>>
>> Additionally there are a few issues with the current interface:
>>   1) It is assuming the virtual address cannot be re-used by the userspace.
>> Thankfully Linux has a split address space, but this may change with KPTI in
>> place.
>>   2) When updating the page-tables, the guest has to go through an invalid
>> mapping. So the translation may fail at any point.
>>
>> IOW, the existing interface can lead to random memory corruption and
>> inaccuracy of the stolen time.
>>
>>>
>>> This is done only on arm architecture, the behaviour on x86 is not
>>> modified by this patch and the address conversion is done as before
>>> during each context switch.
>>>
>>> This is introducing several limitations in comparison to the previous
>>> behaviour (on arm only):
>>> - if the guest is remapping the area at a different physical address Xen
>>> will continue to update the area at the previous physical address. As
>>> the area is in kernel space and usually defined as a global variable this
>>> is something which is believed not to happen. If this is required by a
>>> guest, it will have to call the hypercall with the new area (even if it
>>> is at the same virtual address).
>>> - the area needs to be mapped during the hypercall. For the same reasons
>>> as for the previous case, even if the area is registered for a different
>>> vcpu. It is believed that registering an area using a virtual address
>>> unmapped is not something done.
>>
>> This is not clear whether the virtual address refers to the current vCPU or the
>> vCPU you register the runstate for. From the past discussion, I think you
>> refer to the former. It would be good to clarify.
>>
>> Additionally, all the new restrictions should be documented in the public
>> interface. So an OS developer can find the differences between the
>> architectures.
>
> Just to paraphrase what Julien wrote, it would be good to improve the
> commit message with the points suggested and also write a note in the
> header file about the changes to the interface.

Ok, I will do that.

>
>
>> To answer Jan's concern, we certainly don't know all the guest OSes existing,
>> however we also need to balance the benefit for a large majority of the users.
>>
>> From previous discussion, the current approach was deemed to be acceptable on
>> Arm and, AFAICT, also x86 (see [1]).
>>
>> TBH, I would rather see the approach to be common. For that, we would need an
>> agreement from Andrew and Jan on the approach here. Meanwhile, I think this is
>> the best approach to address the concern from Arm users.
>
> +1
>
>
>>> inline functions in headers could not be used as the architecture
>>> domain.h is included before the global domain.h making it impossible
>>> to use the struct vcpu inside the architecture header.
>>> This should not have any performance impact as the hypercall is only
>>> called once per vcpu usually.
>>>
>>> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
>>>
>>> ---
>>>   Changes in v2
>>>     - use vmap to map the pages during the hypercall.
>>>     - reintroduce initial copy during hypercall.
>>>
>>>   Changes in v3
>>>     - Fix Coding style
>>>     - Fix vaddr printing on arm32
>>>     - use write_atomic to modify state_entry_time update bit (only
>>>     in guest structure as the bit is not used inside Xen copy)
>>>
>>> ---
>>>  xen/arch/arm/domain.c        | 161 ++++++++++++++++++++++++++++++-----
>>>  xen/arch/x86/domain.c        |  29 ++++++-
>>>  xen/arch/x86/x86_64/domain.c |   4 +-
>>>  xen/common/domain.c          |  19 ++---
>>>  xen/include/asm-arm/domain.h |   9 ++
>>>  xen/include/asm-x86/domain.h |  16 ++++
>>>  xen/include/xen/domain.h     |   5 ++
>>>  xen/include/xen/sched.h      |  16 +---
>>>  8 files changed, 206 insertions(+), 53 deletions(-)
>>>
>>> diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
>>> index 31169326b2..8b36946017 100644
>>> --- a/xen/arch/arm/domain.c
>>> +++ b/xen/arch/arm/domain.c
>>> @@ -19,6 +19,7 @@
>>>  #include <xen/sched.h>
>>>  #include <xen/softirq.h>
>>>  #include <xen/wait.h>
>>> +#include <xen/vmap.h>
>>>    #include <asm/alternative.h>
>>>  #include <asm/cpuerrata.h>
>>> @@ -275,36 +276,156 @@ static void ctxt_switch_to(struct vcpu *n)
>>>      virt_timer_restore(n);
>>>  }
>>>  -/* Update per-VCPU guest runstate shared memory area (if registered). */
>>> -static void update_runstate_area(struct vcpu *v)
>>> +static void cleanup_runstate_vcpu_locked(struct vcpu *v)
>>>  {
>>> -    void __user *guest_handle = NULL;
>>> +    if ( v->arch.runstate_guest )
>>> +    {
>>> +        vunmap((void *)((unsigned long)v->arch.runstate_guest & PAGE_MASK));
>>> +
>>> +        put_page(v->arch.runstate_guest_page[0]);
>>> +
>>> +        if ( v->arch.runstate_guest_page[1] )
>>> +            put_page(v->arch.runstate_guest_page[1]);
>>> +
>>> +        v->arch.runstate_guest = NULL;
>>> +    }
>>> +}
>>> +
>>> +void arch_vcpu_cleanup_runstate(struct vcpu *v)
>>> +{
>>> +    spin_lock(&v->arch.runstate_guest_lock);
>>> +
>>> +    cleanup_runstate_vcpu_locked(v);
>>> +
>>> +    spin_unlock(&v->arch.runstate_guest_lock);
>>> +}
>>> +
>>> +static int setup_runstate_vcpu_locked(struct vcpu *v, vaddr_t vaddr)
>>> +{
>>> +    unsigned int offset;
>>> +    mfn_t mfn[2];
>>> +    struct page_info *page;
>>> +    unsigned int numpages;
>>>      struct vcpu_runstate_info runstate;
>>> +    void *p;
>>>  -    if ( guest_handle_is_null(runstate_guest(v)) )
>>> -        return;
>>> +    /* user can pass a NULL address to unregister a previous area */
>>> +    if ( vaddr == 0 )
>>> +        return 0;
>>> +
>>> +    offset = vaddr & ~PAGE_MASK;
>>> +
>>> +    /* provided address must be aligned to a 64bit */
>>> +    if ( offset % alignof(struct vcpu_runstate_info) )
>>
>> This new restriction wants to be explained in the commit message and public
>> header.
>>
>>> +    {
>>> +        gprintk(XENLOG_WARNING, "Cannot map runstate pointer at 0x%"PRIvaddr
>>> +                ": Invalid offset\n", vaddr);
>>
>> We usually enforce 80 characters per line except for format strings, so it is
>> easier to grep them in the code.
>>
>>> +        return -EINVAL;
>>> +    }
>>> +
>>> +    page = get_page_from_gva(v, vaddr, GV2M_WRITE);
>>> +    if ( !page )
>>> +    {
>>> +        gprintk(XENLOG_WARNING, "Cannot map runstate pointer at 0x%"PRIvaddr
>>> +                ": Page is not mapped\n", vaddr);
>>> +        return -EINVAL;
>>> +    }
>>> +
>>> +    mfn[0] = page_to_mfn(page);
>>> +    v->arch.runstate_guest_page[0] = page;
>>> +
>>> +    if ( offset > (PAGE_SIZE - sizeof(struct vcpu_runstate_info)) )
>>> +    {
>>> +        /* guest area is crossing pages */
>>> +        page = get_page_from_gva(v, vaddr + PAGE_SIZE, GV2M_WRITE);
>>> +        if ( !page )
>>> +        {
>>> +            put_page(v->arch.runstate_guest_page[0]);
>>> +            gprintk(XENLOG_WARNING,
>>> +                    "Cannot map runstate pointer at 0x%"PRIvaddr
>>> +                    ": 2nd Page is not mapped\n", vaddr);
>>> +            return -EINVAL;
>>> +        }
>>> +        mfn[1] = page_to_mfn(page);
>>> +        v->arch.runstate_guest_page[1] = page;
>>> +        numpages = 2;
>>> +    }
>>> +    else
>>> +    {
>>> +        v->arch.runstate_guest_page[1] = NULL;
>>> +        numpages = 1;
>>> +    }
>>>  -    memcpy(&runstate, &v->runstate, sizeof(runstate));
>>> +    p = vmap(mfn, numpages);
>>> +    if ( !p )
>>> +    {
>>> +        put_page(v->arch.runstate_guest_page[0]);
>>> +        if ( numpages == 2 )
>>> +            put_page(v->arch.runstate_guest_page[1]);
>>>  -    if ( VM_ASSIST(v->domain, runstate_update_flag) )
>>> +        gprintk(XENLOG_WARNING, "Cannot map runstate pointer at 0x%"PRIvaddr
>>> +                ": vmap error\n", vaddr);
>>> +        return -EINVAL;
>>> +    }
>>> +
>>> +    v->arch.runstate_guest = p + offset;
>>> +
>>> +    if (v == current)
>>> +        memcpy(v->arch.runstate_guest, &v->runstate, sizeof(v->runstate));
>>> +    else
>>>      {
>>> -        guest_handle = &v->runstate_guest.p->state_entry_time + 1;
>>> -        guest_handle--;
>>> -        runstate.state_entry_time |= XEN_RUNSTATE_UPDATE;
>>> -        __raw_copy_to_guest(guest_handle,
>>> -                            (void *)(&runstate.state_entry_time + 1) -=
 1,
>>> 1);
>>> -        smp_wmb();
>>> +        vcpu_runstate_get(v, &runstate);
>>> +        memcpy(v->arch.runstate_guest, &runstate, sizeof(v->runstate));
>>>      }
>>>  -    __copy_to_guest(runstate_guest(v), &runstate, 1);
>>> +    return 0;
>>> +}
>>> +
>>> +int arch_vcpu_setup_runstate(struct vcpu *v,
>>> +                             struct vcpu_register_runstate_memory_area
>>> area)
>>> +{
>>> +    int rc;
>>> +
>>> +    spin_lock(&v->arch.runstate_guest_lock);
>>> +
>>> +    /* cleanup if we are recalled */
>>> +    cleanup_runstate_vcpu_locked(v);
>>> +
>>> +    rc = setup_runstate_vcpu_locked(v, (vaddr_t)area.addr.v);
>>> +
>>> +    spin_unlock(&v->arch.runstate_guest_lock);
>>>  -    if ( guest_handle )
>>> +    return rc;
>>> +}
>>> +
>>> +
>>> +/* Update per-VCPU guest runstate shared memory area (if registered). */
>>> +static void update_runstate_area(struct vcpu *v)
>>> +{
>>> +    spin_lock(&v->arch.runstate_guest_lock);
>>> +
>>> +    if ( v->arch.runstate_guest )
>>>      {
>>> -        runstate.state_entry_time &= ~XEN_RUNSTATE_UPDATE;
>>> -        smp_wmb();
>>> -        __raw_copy_to_guest(guest_handle,
>>> -                            (void *)(&runstate.state_entry_time + 1) - 1, 1);
>>> +        if ( VM_ASSIST(v->domain, runstate_update_flag) )
>>> +        {
>>> +            v->runstate.state_entry_time |= XEN_RUNSTATE_UPDATE;
>>> +            write_atomic(&(v->arch.runstate_guest->state_entry_time),
>>> +                    v->runstate.state_entry_time);
>>
>> NIT: You want to indent v-> at the same level as the argument from the first
>> line.
>>
>> Also, I think you are missing a smp_wmb() here.
>
> I just wanted to add that I reviewed the patch and aside from the
> smp_wmb (and the couple of code style NITs), there is no other issue in
> the patch that I could find. No further comments from my side.
>
>
>>> +        }
>>> +
>>> +        memcpy(v->arch.runstate_guest, &v->runstate, sizeof(v->runstate));
>>> +
>>> +        if ( VM_ASSIST(v->domain, runstate_update_flag) )
>>> +        {
>>> +            /* copy must be done before switching the bit */
>>> +            smp_wmb();
>>> +            v->runstate.state_entry_time &= ~XEN_RUNSTATE_UPDATE;
>>> +            write_atomic(&(v->arch.runstate_guest->state_entry_time),
>>> +                    v->runstate.state_entry_time);
>>
>> Same remark for the indentation.



From xen-devel-bounces@lists.xenproject.org Fri Jul 31 13:26:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Delivery-date: Fri, 31 Jul 2020 13:26:48 +0000
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Julien Grall <julien@xen.org>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Xen-devel <xen-devel@lists.xenproject.org>, nd <nd@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Roger Pau Monné <roger.pau@citrix.com>
Subject: Re: [PATCH v3] xen/arm: Convert runstate address during hypcall
Date: Fri, 31 Jul 2020 13:26:24 +0000
Message-ID: <CB9F22FE-BEFF-4A36-BC81-A18F9E0F9D7C@arm.com>
In-Reply-To: <f48f81d5-589e-3f75-1044-583114bf497e@xen.org>
References: <3911d221ce9ed73611b93aa437b9ca227d6aa201.1596099067.git.bertrand.marquis@arm.com>
 <f48f81d5-589e-3f75-1044-583114bf497e@xen.org>
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Sorry, I missed some points in my previous answer.

> On 30 Jul 2020, at 22:50, Julien Grall <julien@xen.org> wrote:
>
> Hi Bertrand,
>
> To avoid extra work on your side, I would recommend waiting a bit before
> sending a new version. It would be good to at least settle the conversation
> in v2 regarding the approach taken.
>
> On 30/07/2020 11:24, Bertrand Marquis wrote:
>> At the moment on Arm, a Linux guest running with KTPI enabled will
>> cause the following error when a context switch happens in user mode:
>> (XEN) p2m.c:1890: d1v0: Failed to walk page-table va 0xffffff837ebe0cd0
>> The error is caused by the virtual address for the runstate area
>> registered by the guest only being accessible when the guest is running
>> in kernel space when KPTI is enabled.
>> To solve this issue, this patch is doing the translation from virtual
>> address to physical address during the hypercall and mapping the
>> required pages using vmap. This is removing the conversion from virtual
>> to physical address during the context switch which is solving the
>> problem with KPTI.
>
> To echo what Jan said on the previous version, this is a change in a stable
> ABI and therefore may break existing guests. FAOD, I agree in principle
> with the idea. However, we want to explain why breaking the ABI is the
> *only* viable solution.
>
> From my understanding, it is not possible to fix without an ABI breakage,
> because the hypervisor doesn't know when the guest will switch back from
> userspace to kernel space. The risk is that the information provided by
> the runstate wouldn't be accurate and could affect how the guest handles
> stolen time.
>
> Additionally there are a few issues with the current interface:
>   1) It assumes the virtual address cannot be re-used by userspace.
>      Thankfully Linux has a split address space, but this may change with
>      KPTI in place.
>   2) When updating the page-tables, the guest has to go through an
>      invalid mapping, so the translation may fail at any point.
>
> IOW, the existing interface can lead to random memory corruption and
> inaccuracy of the stolen time.
>
>> This is done only on the Arm architecture; the behaviour on x86 is not
>> modified by this patch and the address conversion is done as before
>> during each context switch.
>> This introduces several limitations in comparison to the previous
>> behaviour (on Arm only):
>> - if the guest remaps the area at a different physical address, Xen
>> will continue to update the area at the previous physical address. As
>> the area is in kernel space and usually defined as a global variable,
>> this is believed not to happen. If this is required by a guest, it will
>> have to call the hypercall with the new area (even if it is at the same
>> virtual address).
>> - the area needs to be mapped during the hypercall, for the same reasons
>> as the previous case, even if the area is registered for a different
>> vcpu. It is believed that registering an unmapped area by virtual
>> address is not something that is done.
>
> It is not clear whether the virtual address refers to the current vCPU or
> the vCPU you register the runstate for. From the past discussion, I think
> you refer to the former. It would be good to clarify.
>
> Additionally, all the new restrictions should be documented in the public
> interface, so an OS developer can find the differences between the
> architectures.
>
> To answer Jan's concern, we certainly don't know all the existing guest
> OSes; however, we also need to balance the benefit for a large majority
> of the users.
>
> From previous discussion, the current approach was deemed to be acceptable
> on Arm and, AFAICT, also on x86 (see [1]).
>
> TBH, I would rather see the approach be common. For that, we would need
> agreement from Andrew and Jan on the approach here. Meanwhile, I think
> this is the best approach to address the concern from Arm users.
>
>> Inline functions in headers could not be used, as the architecture
>> domain.h is included before the global domain.h, making it impossible
>> to use struct vcpu inside the architecture header.
>> This should not have any performance impact, as the hypercall is
>> usually only called once per vcpu.
>> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
>> ---
>>   Changes in v2
>>     - use vmap to map the pages during the hypercall.
>>     - reintroduce initial copy during hypercall.
>>   Changes in v3
>>     - Fix Coding style
>>     - Fix vaddr printing on arm32
>>     - use write_atomic to modify state_entry_time update bit (only
>>     in guest structure as the bit is not used inside Xen copy)
>> ---
>>  xen/arch/arm/domain.c        | 161 ++++++++++++++++++++++++++++++-----
>>  xen/arch/x86/domain.c        |  29 ++++++-
>>  xen/arch/x86/x86_64/domain.c |   4 +-
>>  xen/common/domain.c          |  19 ++---
>>  xen/include/asm-arm/domain.h |   9 ++
>>  xen/include/asm-x86/domain.h |  16 ++++
>>  xen/include/xen/domain.h     |   5 ++
>>  xen/include/xen/sched.h      |  16 +---
>>  8 files changed, 206 insertions(+), 53 deletions(-)
>> diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
>> index 31169326b2..8b36946017 100644
>> --- a/xen/arch/arm/domain.c
>> +++ b/xen/arch/arm/domain.c
>> @@ -19,6 +19,7 @@
>>  #include <xen/sched.h>
>>  #include <xen/softirq.h>
>>  #include <xen/wait.h>
>> +#include <xen/vmap.h>
>>    #include <asm/alternative.h>
>>  #include <asm/cpuerrata.h>
>> @@ -275,36 +276,156 @@ static void ctxt_switch_to(struct vcpu *n)
>>      virt_timer_restore(n);
>>  }
>>  -/* Update per-VCPU guest runstate shared memory area (if registered). */
>> -static void update_runstate_area(struct vcpu *v)
>> +static void cleanup_runstate_vcpu_locked(struct vcpu *v)
>>  {
>> -    void __user *guest_handle = NULL;
>> +    if ( v->arch.runstate_guest )
>> +    {
>> +        vunmap((void *)((unsigned long)v->arch.runstate_guest & PAGE_MASK));
>> +
>> +        put_page(v->arch.runstate_guest_page[0]);
>> +
>> +        if ( v->arch.runstate_guest_page[1] )
>> +            put_page(v->arch.runstate_guest_page[1]);
>> +
>> +        v->arch.runstate_guest = NULL;
>> +    }
>> +}
>> +
>> +void arch_vcpu_cleanup_runstate(struct vcpu *v)
>> +{
>> +    spin_lock(&v->arch.runstate_guest_lock);
>> +
>> +    cleanup_runstate_vcpu_locked(v);
>> +
>> +    spin_unlock(&v->arch.runstate_guest_lock);
>> +}
>> +
>> +static int setup_runstate_vcpu_locked(struct vcpu *v, vaddr_t vaddr)
>> +{
>> +    unsigned int offset;
>> +    mfn_t mfn[2];
>> +    struct page_info *page;
>> +    unsigned int numpages;
>>      struct vcpu_runstate_info runstate;
>> +    void *p;
>>
>> -    if ( guest_handle_is_null(runstate_guest(v)) )
>> -        return;
>> +    /* user can pass a NULL address to unregister a previous area */
>> +    if ( vaddr == 0 )
>> +        return 0;
>> +
>> +    offset = vaddr & ~PAGE_MASK;
>> +
>> +    /* provided address must be aligned to a 64bit */
>> +    if ( offset % alignof(struct vcpu_runstate_info) )
>
> This new restriction wants to be explained in the commit message and
> public header.

ok

>
>> +    {
>> +        gprintk(XENLOG_WARNING, "Cannot map runstate pointer at 0x%"PRIvaddr
>> +                ": Invalid offset\n", vaddr);
>
> We usually enforce 80 characters per line except for format strings, so it
> is easier to grep them in the code.

Ok, I will fix this one and the following ones.
But here PRIvaddr would in fact break any attempt to grep.

>
>> +        return -EINVAL;
>> +    }
>> +
>> +    page = get_page_from_gva(v, vaddr, GV2M_WRITE);
>> +    if ( !page )
>> +    {
>> +        gprintk(XENLOG_WARNING, "Cannot map runstate pointer at 0x%"PRIvaddr
>> +                ": Page is not mapped\n", vaddr);
>> +        return -EINVAL;
>> +    }
>> +
>> +    mfn[0] = page_to_mfn(page);
>> +    v->arch.runstate_guest_page[0] = page;
>> +
>> +    if ( offset > (PAGE_SIZE - sizeof(struct vcpu_runstate_info)) )
>> +    {
>> +        /* guest area is crossing pages */
>> +        page = get_page_from_gva(v, vaddr + PAGE_SIZE, GV2M_WRITE);
>> +        if ( !page )
>> +        {
>> +            put_page(v->arch.runstate_guest_page[0]);
>> +            gprintk(XENLOG_WARNING,
>> +                    "Cannot map runstate pointer at 0x%"PRIvaddr
>> +                    ": 2nd Page is not mapped\n", vaddr);
>> +            return -EINVAL;
>> +        }
>> +        mfn[1] = page_to_mfn(page);
>> +        v->arch.runstate_guest_page[1] = page;
>> +        numpages = 2;
>> +    }
>> +    else
>> +    {
>> +        v->arch.runstate_guest_page[1] = NULL;
>> +        numpages = 1;
>> +    }
>>
>> -    memcpy(&runstate, &v->runstate, sizeof(runstate));
>> +    p = vmap(mfn, numpages);
>> +    if ( !p )
>> +    {
>> +        put_page(v->arch.runstate_guest_page[0]);
>> +        if ( numpages == 2 )
>> +            put_page(v->arch.runstate_guest_page[1]);
>>
>> -    if ( VM_ASSIST(v->domain, runstate_update_flag) )
>> +        gprintk(XENLOG_WARNING, "Cannot map runstate pointer at 0x%"PRIvaddr
>> +                ": vmap error\n", vaddr);
>> +        return -EINVAL;
>> +    }
>> +
>> +    v->arch.runstate_guest = p + offset;
>> +
>> +    if (v == current)
>> +        memcpy(v->arch.runstate_guest, &v->runstate, sizeof(v->runstate));
>> +    else
>>      {
>> -        guest_handle = &v->runstate_guest.p->state_entry_time + 1;
>> -        guest_handle--;
>> -        runstate.state_entry_time |= XEN_RUNSTATE_UPDATE;
>> -        __raw_copy_to_guest(guest_handle,
>> -                            (void *)(&runstate.state_entry_time + 1) - 1, 1);
>> -        smp_wmb();
>> +        vcpu_runstate_get(v, &runstate);
>> +        memcpy(v->arch.runstate_guest, &runstate, sizeof(v->runstate));
>>      }
>>
>> -    __copy_to_guest(runstate_guest(v), &runstate, 1);
>> +    return 0;
>> +}
>> +
>> +int arch_vcpu_setup_runstate(struct vcpu *v,
>> +                             struct vcpu_register_runstate_memory_area area)
>> +{
>> +    int rc;
>> +
>> +    spin_lock(&v->arch.runstate_guest_lock);
>> +
>> +    /* cleanup if we are recalled */
>> +    cleanup_runstate_vcpu_locked(v);
>> +
>> +    rc = setup_runstate_vcpu_locked(v, (vaddr_t)area.addr.v);
>> +
>> +    spin_unlock(&v->arch.runstate_guest_lock);
>>
>> -    if ( guest_handle )
>> +    return rc;
>> +}
>> +
>> +
>> +/* Update per-VCPU guest runstate shared memory area (if registered). */
>> +static void update_runstate_area(struct vcpu *v)
>> +{
>> +    spin_lock(&v->arch.runstate_guest_lock);
>> +
>> +    if ( v->arch.runstate_guest )
>>      {
>> -        runstate.state_entry_time &= ~XEN_RUNSTATE_UPDATE;
>> -        smp_wmb();
>> -        __raw_copy_to_guest(guest_handle,
>> -                            (void *)(&runstate.state_entry_time + 1) - 1, 1);
>> +        if ( VM_ASSIST(v->domain, runstate_update_flag) )
>> +        {
>> +            v->runstate.state_entry_time |= XEN_RUNSTATE_UPDATE;
>> +            write_atomic(&(v->arch.runstate_guest->state_entry_time),
>> +                    v->runstate.state_entry_time);
>
> NIT: You want to indent v-> at the same level as the argument from the
> first line.

Ok

>
> Also, I think you are missing a smp_wmb() here.

The atomic operation itself would not need a barrier.
I do not see why you think a barrier is needed here.
For the internal structure?
>
>> +        }
>> +
>> +        memcpy(v->arch.runstate_guest, &v->runstate, sizeof(v->runstate));
>> +
>> +        if ( VM_ASSIST(v->domain, runstate_update_flag) )
>> +        {
>> +            /* copy must be done before switching the bit */
>> +            smp_wmb();
>> +            v->runstate.state_entry_time &= ~XEN_RUNSTATE_UPDATE;
>> +            write_atomic(&(v->arch.runstate_guest->state_entry_time),
>> +                    v->runstate.state_entry_time);
>
> Same remark for the indentation.
Ok

>
>> +        }
>>      }
>> +
>> +    spin_unlock(&v->arch.runstate_guest_lock);
>>  }
>>    static void schedule_tail(struct vcpu *prev)
>> @@ -560,6 +681,8 @@ int arch_vcpu_create(struct vcpu *v)
>>      v->arch.saved_context.sp = (register_t)v->arch.cpu_info;
>>      v->arch.saved_context.pc = (register_t)continue_new_vcpu;
>>
>> +    spin_lock_init(&v->arch.runstate_guest_lock);
>> +
>>      /* Idle VCPUs don't need the rest of this setup */
>>      if ( is_idle_vcpu(v) )
>>          return rc;
>> diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
>> index fee6c3931a..b9b81e94e5 100644
>> --- a/xen/arch/x86/domain.c
>> +++ b/xen/arch/x86/domain.c
>> @@ -1642,6 +1642,29 @@ void paravirt_ctxt_switch_to(struct vcpu *v)
>>          wrmsr_tsc_aux(v->arch.msrs->tsc_aux);
>>  }
>>
>> +int arch_vcpu_setup_runstate(struct vcpu *v,
>> +                             struct vcpu_register_runstate_memory_area area)
>> +{
>> +    struct vcpu_runstate_info runstate;
>> +
>> +    runstate_guest(v) = area.addr.h;
>> +
>> +    if ( v == current )
>> +        __copy_to_guest(runstate_guest(v), &v->runstate, 1);
>> +    else
>> +    {
>> +        vcpu_runstate_get(v, &runstate);
>> +        __copy_to_guest(runstate_guest(v), &runstate, 1);
>> +    }
>> +
>> +    return 0;
>> +}
>> +
>> +void arch_vcpu_cleanup_runstate(struct vcpu *v)
>> +{
>> +    set_xen_guest_handle(runstate_guest(v), NULL);
>> +}
>> +
>>  /* Update per-VCPU guest runstate shared memory area (if registered). */
>>  bool update_runstate_area(struct vcpu *v)
>>  {
>> @@ -1660,8 +1683,8 @@ bool update_runstate_area(struct vcpu *v)
>>      if ( VM_ASSIST(v->domain, runstate_update_flag) )
>>      {
>>          guest_handle = has_32bit_shinfo(v->domain)
>> -            ? &v->runstate_guest.compat.p->state_entry_time + 1
>> -            : &v->runstate_guest.native.p->state_entry_time + 1;
>> +            ? &v->arch.runstate_guest.compat.p->state_entry_time + 1
>> +            : &v->arch.runstate_guest.native.p->state_entry_time + 1;
>>          guest_handle--;
>>          runstate.state_entry_time |= XEN_RUNSTATE_UPDATE;
>>          __raw_copy_to_guest(guest_handle,
>> @@ -1674,7 +1697,7 @@ bool update_runstate_area(struct vcpu *v)
>>          struct compat_vcpu_runstate_info info;
>>            XLAT_vcpu_runstate_info(&info, &runstate);
>> -        __copy_to_guest(v->runstate_guest.compat, &info, 1);
>> +        __copy_to_guest(v->arch.runstate_guest.compat, &info, 1);
>>          rc = true;
>>      }
>>      else
>> diff --git a/xen/arch/x86/x86_64/domain.c b/xen/arch/x86/x86_64/domain.c
>> index c46dccc25a..b879e8dd2c 100644
>> --- a/xen/arch/x86/x86_64/domain.c
>> +++ b/xen/arch/x86/x86_64/domain.c
>> @@ -36,7 +36,7 @@ arch_compat_vcpu_op(
>>              break;
>>
>>          rc = 0;
>> -        guest_from_compat_handle(v->runstate_guest.compat, area.addr.h);
>> +        guest_from_compat_handle(v->arch.runstate_guest.compat, area.addr.h);
>>
>>          if ( v == current )
>>          {
>> @@ -49,7 +49,7 @@ arch_compat_vcpu_op(
>>              vcpu_runstate_get(v, &runstate);
>>              XLAT_vcpu_runstate_info(&info, &runstate);
>>          }
>> -        __copy_to_guest(v->runstate_guest.compat, &info, 1);
>> +        __copy_to_guest(v->arch.runstate_guest.compat, &info, 1);
>>            break;
>>      }
>> diff --git a/xen/common/domain.c b/xen/common/domain.c
>> index f0f9c62feb..739c6b7b62 100644
>> --- a/xen/common/domain.c
>> +++ b/xen/common/domain.c
>> @@ -727,7 +727,10 @@ int domain_kill(struct domain *d)
>>          if ( cpupool_move_domain(d, cpupool0) )
>>              return -ERESTART;
>>          for_each_vcpu ( d, v )
>> +        {
>> +            arch_vcpu_cleanup_runstate(v);
>>              unmap_vcpu_info(v);
>> +        }
>>          d->is_dying = DOMDYING_dead;
>>          /* Mem event cleanup has to go here because the rings
>>           * have to be put before we call put_domain. */
>> @@ -1167,7 +1170,7 @@ int domain_soft_reset(struct domain *d)
>>        for_each_vcpu ( d, v )
>>      {
>> -        set_xen_guest_handle(runstate_guest(v), NULL);
>> +        arch_vcpu_cleanup_runstate(v);
>>          unmap_vcpu_info(v);
>>      }
>>
>> @@ -1494,7 +1497,6 @@ long do_vcpu_op(int cmd, unsigned int vcpuid, XEN_GUEST_HANDLE_PARAM(void) arg)
>>      case VCPUOP_register_runstate_memory_area:
>>      {
>>          struct vcpu_register_runstate_memory_area area;
>> -        struct vcpu_runstate_info runstate;
>>
>>          rc = -EFAULT;
>>          if ( copy_from_guest(&area, arg, 1) )
>> @@ -1503,18 +1505,7 @@ long do_vcpu_op(int cmd, unsigned int vcpuid, XEN_GUEST_HANDLE_PARAM(void) arg)
>>          if ( !guest_handle_okay(area.addr.h, 1) )
>>              break;
>>
>> -        rc = 0;
>> -        runstate_guest(v) = area.addr.h;
>> -
>> -        if ( v == current )
>> -        {
>> -            __copy_to_guest(runstate_guest(v), &v->runstate, 1);
>> -        }
>> -        else
>> -        {
>> -            vcpu_runstate_get(v, &runstate);
>> -            __copy_to_guest(runstate_guest(v), &runstate, 1);
>> -        }
>> +        rc = arch_vcpu_setup_runstate(v, area);
>>
>>          break;
>>      }
>> diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
>> index 6819a3bf38..2f62c3e8f5 100644
>> --- a/xen/include/asm-arm/domain.h
>> +++ b/xen/include/asm-arm/domain.h
>> @@ -204,6 +204,15 @@ struct arch_vcpu
>>       */
>>      bool need_flush_to_ram;
>>
>> +    /* runstate guest lock */
>> +    spinlock_t runstate_guest_lock;
>> +
>> +    /* runstate guest info */
>> +    struct vcpu_runstate_info *runstate_guest;
>> +
>> +    /* runstate pages mapped for runstate_guest */
>> +    struct page_info *runstate_guest_page[2];
>> +
>>  }  __cacheline_aligned;
>>    void vcpu_show_execution_state(struct vcpu *);
>> diff --git a/xen/include/asm-x86/domain.h b/xen/include/asm-x86/domain.h
>> index 635335634d..007ccfbf9f 100644
>> --- a/xen/include/asm-x86/domain.h
>> +++ b/xen/include/asm-x86/domain.h
>> @@ -11,6 +11,11 @@
>>  #include <asm/x86_emulate.h>
>>  #include <public/vcpu.h>
>>  #include <public/hvm/hvm_info_table.h>
>> +#ifdef CONFIG_COMPAT
>> +#include <compat/vcpu.h>
>> +DEFINE_XEN_GUEST_HANDLE(vcpu_runstate_info_compat_t);
>> +#endif
>> +
>>    #define has_32bit_shinfo(d)    ((d)->arch.has_32bit_shinfo)
>>  @@ -638,6 +643,17 @@ struct arch_vcpu
>>      struct {
>>          bool next_interrupt_enabled;
>>      } monitor;
>> +
>> +#ifndef CONFIG_COMPAT
>> +# define runstate_guest(v) ((v)->arch.runstate_guest)
>> +    XEN_GUEST_HANDLE(vcpu_runstate_info_t) runstate_guest; /* guest address */
>> +#else
>> +# define runstate_guest(v) ((v)->arch.runstate_guest.native)
>> +    union {
>> +        XEN_GUEST_HANDLE(vcpu_runstate_info_t) native;
>> +        XEN_GUEST_HANDLE(vcpu_runstate_info_compat_t) compat;
>> +    } runstate_guest;
>> +#endif
>>  };
>>    struct guest_memory_policy
>> diff --git a/xen/include/xen/domain.h b/xen/include/xen/domain.h
>> index 7e51d361de..5e8cbba31d 100644
>> --- a/xen/include/xen/domain.h
>> +++ b/xen/include/xen/domain.h
>> @@ -5,6 +5,7 @@
>>  #include <xen/types.h>
>>    #include <public/xen.h>
>> +#include <public/vcpu.h>
>>  #include <asm/domain.h>
>>  #include <asm/numa.h>
>>  @@ -63,6 +64,10 @@ void arch_vcpu_destroy(struct vcpu *v);
>>  int map_vcpu_info(struct vcpu *v, unsigned long gfn, unsigned offset);
>>  void unmap_vcpu_info(struct vcpu *v);
>>
>> +int arch_vcpu_setup_runstate(struct vcpu *v,
>> +                             struct vcpu_register_runstate_memory_area area);
>> +void arch_vcpu_cleanup_runstate(struct vcpu *v);
>> +
>>  int arch_domain_create(struct domain *d,
>>                         struct xen_domctl_createdomain *config);
>>  diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
>> index ac53519d7f..fac030fb83 100644
>> --- a/xen/include/xen/sched.h
>> +++ b/xen/include/xen/sched.h
>> @@ -29,11 +29,6 @@
>>  #include <public/vcpu.h>
>>  #include <public/event_channel.h>
>>  -#ifdef CONFIG_COMPAT
>> -#include <compat/vcpu.h>
>> -DEFINE_XEN_GUEST_HANDLE(vcpu_runstate_info_compat_t);
>> -#endif
>> -
>>  /*
>>   * Stats
>>   *
>> @@ -166,16 +161,7 @@ struct vcpu
>>      struct sched_unit *sched_unit;
>>        struct vcpu_runstate_info runstate;
>> -#ifndef CONFIG_COMPAT
>> -# define runstate_guest(v) ((v)->runstate_guest)
>> -    XEN_GUEST_HANDLE(vcpu_runstate_info_t) runstate_guest; /* guest address */
>> -#else
>> -# define runstate_guest(v) ((v)->runstate_guest.native)
>> -    union {
>> -        XEN_GUEST_HANDLE(vcpu_runstate_info_t) native;
>> -        XEN_GUEST_HANDLE(vcpu_runstate_info_compat_t) compat;
>> -    } runstate_guest; /* guest address */
>> -#endif
>> +
>>      unsigned int     new_state;
>>        /* Has the FPU been initialised? */
>
> [1] <b6511a29-35a4-a1d0-dd29-7de4103ec98e@citrix.com>
>
>
> --
> Julien Grall
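
[Archive note: the XEN_RUNSTATE_UPDATE protocol debated in this thread implies a matching seqlock-style read loop on the guest side. As a hedged sketch under simplified assumptions — the structure and function names are illustrative, not Linux's or Xen's actual code — a guest wanting a consistent snapshot would spin while the bit is set and retry if the value changed during the copy:]

```c
#include <stdatomic.h>
#include <stdint.h>
#include <string.h>

/* Illustrative model only: cut-down shared runstate area.  As in the
 * public Xen ABI, the "update in progress" flag is the top bit of
 * state_entry_time. */
#define XEN_RUNSTATE_UPDATE (1ULL << 63)

struct runstate_model {
    uint64_t state_entry_time;
    uint64_t time[4];
};

/* Guest-side consistent snapshot of the shared runstate area: retry
 * while the hypervisor advertises an update in progress, or if the
 * entry time changed underneath the copy. */
static struct runstate_model
read_runstate_model(const struct runstate_model *shared)
{
    struct runstate_model snap;
    uint64_t before, after;

    do {
        before = shared->state_entry_time;
        atomic_thread_fence(memory_order_acquire); /* flag read before body */
        memcpy(&snap, shared, sizeof(snap));
        atomic_thread_fence(memory_order_acquire); /* body read before re-check */
        after = shared->state_entry_time;
    } while ( ((before | after) & XEN_RUNSTATE_UPDATE) || before != after );

    return snap;
}
```

This is why the hypervisor-side barrier placement matters: the guest loop above is only correct if the flag is visible before the body is rewritten and the body is visible before the flag is cleared.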



From xen-devel-bounces@lists.xenproject.org Fri Jul 31 13:30:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Delivery-date: Fri, 31 Jul 2020 13:30:19 +0000
From: Jan Beulich <jbeulich@suse.com>
To: Ian Jackson <ian.jackson@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>, Paul Durrant <paul@xen.org>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH] tools/configure: drop BASH configure variable [and 1 more
 messages]
Date: Fri, 31 Jul 2020 15:30:13 +0200
Message-ID: <d963d352-d6d6-393a-9fdf-9d6f46450309@suse.com>
In-Reply-To: <24356.5736.297234.341867@mariner.uk.xensource.com>
References: <20200722113258.3673-1-andrew.cooper3@citrix.com>
 <20200626170038.27650-1-andrew.cooper3@citrix.com>
 <880fcc83-875c-c030-bfac-c64477aa3254@suse.com>
 <24313.55588.61431.336617@mariner.uk.xensource.com>
 <2c202733-cbff-74e0-30c6-4cba227e7969@suse.com>
 <24356.5736.297234.341867@mariner.uk.xensource.com>
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 31.07.2020 15:02, Ian Jackson wrote:
> Jan Beulich writes ("Re: [PATCH] tools/configure: drop BASH configure variable"):
>> On 29.06.2020 14:05, Ian Jackson wrote:
>>> Jan Beulich writes ("Re: [PATCH] tools/configure: drop BASH configure variable"):
>>>> On 26.06.2020 19:00, Andrew Cooper wrote:
>>>> ... this may or may not take effect on the file system the sources
>>>> are stored on.
>>>
>>> In what circumstances might this not take effect ?
>>
>> When the file system is incapable of recording execute permissions?
>> It has been a common workaround for this in various projects that
>> I've worked with to use $(SHELL) to account for that, so the actual
>> permissions from the fs don't matter. (There may be mount options
>> to make everything executable on such file systems, but people may
>> be hesitant to use them.)
> 
> I don't think we support building from sources which have been
> unpacked onto such filesystems.  Other projects which might actually
> need to build on Windows or something do do this $(SHELL) thing or an
> equivalent, but I don't think that's us.

It's not unexpected that you think of Windows here, but my thoughts
were more towards building from sources on a CD or DVD, where iirc
execute permissions also don't exist. At the latest, once we have
out-of-tree builds fully working, this ought to be something that
people should be able to do, imo. (Even without out-of-tree builds,
my "next best" alternative of using a tree of symlinks to build in
would similarly have an issue with the links pointing at a mounted
CD/DVD, if the $(SHELL) wasn't present.)
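[A minimal make fragment illustrating the workaround under discussion; the
script name is hypothetical, not taken from xen.git:]

```make
# Invoking the script through $(SHELL) does not need an execute bit, so it
# also works when the sources live on a filesystem that cannot record one
# (ISO9660 CD/DVD, some network mounts).
SHELL := /bin/sh

headers.chk: check-headers.sh
	$(SHELL) ./check-headers.sh > $@

# By contrast, relying on the execute bit fails on such filesystems:
#	./check-headers.sh > $@
```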

Jan


From xen-devel-bounces@lists.xenproject.org Fri Jul 31 13:40:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 13:40:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1VHD-0007Z6-0R; Fri, 31 Jul 2020 13:40:47 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=S17i=BK=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1k1VHC-0007Z1-2k
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 13:40:46 +0000
X-Inumbo-ID: 6ef5c5ec-d333-11ea-abb3-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6ef5c5ec-d333-11ea-abb3-12813bfff9fa;
 Fri, 31 Jul 2020 13:40:45 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 69428AFAC;
 Fri, 31 Jul 2020 13:40:57 +0000 (UTC)
Subject: Re: [PATCH v2 2/2] x86/hvm: simplify 'mmio_direct' check in
 epte_get_entry_emt()
To: paul@xen.org
References: <20200731123926.28970-1-paul@xen.org>
 <20200731123926.28970-3-paul@xen.org>
 <a4856c33-8bb0-4afa-cc71-3af4c229bc27@suse.com>
 <004501d6673b$9adffbf0$d09ff3d0$@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <84cdd5b8-5149-a240-8bad-be8d67dca0d8@suse.com>
Date: Fri, 31 Jul 2020 15:40:42 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <004501d6673b$9adffbf0$d09ff3d0$@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, 'Paul Durrant' <pdurrant@amazon.com>,
 'Roger Pau Monné' <roger.pau@citrix.com>,
 'Wei Liu' <wl@xen.org>, 'Andrew Cooper' <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 31.07.2020 15:07, Paul Durrant wrote:
>> -----Original Message-----
>> From: Jan Beulich <jbeulich@suse.com>
>> Sent: 31 July 2020 13:58
>> To: Paul Durrant <paul@xen.org>
>> Cc: xen-devel@lists.xenproject.org; Paul Durrant <pdurrant@amazon.com>; Andrew Cooper
>> <andrew.cooper3@citrix.com>; Wei Liu <wl@xen.org>; Roger Pau Monné <roger.pau@citrix.com>
>> Subject: Re: [PATCH v2 2/2] x86/hvm: simplify 'mmio_direct' check in epte_get_entry_emt()
>>
>> On 31.07.2020 14:39, Paul Durrant wrote:
>>> From: Paul Durrant <pdurrant@amazon.com>
>>>
>>> Re-factor the code to take advantage of the fact that the APIC access page is
>>> a 'special' page.
>>
>> Hmm, that's not going quite as far as I was thinking to go: In
>> particular, you leave in place the set_mmio_p2m_entry() use
>> in vmx_alloc_vlapic_mapping(). With that replaced, the
>> re-ordering in epte_get_entry_emt() that you do shouldn't
>> be necessary; you'd simply drop the checking of the
>> specific MFN.
> 
> Ok, it still needs to go in the p2m though so are you suggesting
> just calling p2m_set_entry() directly?

Yes, if this works. The main question really is whether there are
any hidden assumptions elsewhere that this page gets mapped as an
MMIO one.

>>> --- a/xen/arch/x86/hvm/mtrr.c
>>> +++ b/xen/arch/x86/hvm/mtrr.c
>>> @@ -814,29 +814,22 @@ int epte_get_entry_emt(struct domain *d, unsigned long gfn, mfn_t mfn,
>>>          return -1;
>>>      }
>>>
>>> -    if ( direct_mmio )
>>> -    {
>>> -        if ( (mfn_x(mfn) ^ mfn_x(d->arch.hvm.vmx.apic_access_mfn)) >> order )
>>> -            return MTRR_TYPE_UNCACHABLE;
>>> -        if ( order )
>>> -            return -1;
>>
>> ... this part of the logic wants retaining, I think, i.e.
>> reporting back to the guest that the mapping needs splitting.
>> I'm afraid I have to withdraw my R-b on patch 1 for this
>> reason, as the check needs to be added there already.
> 
> To be clear... You mean I need the:
> 
> if ( order )
>   return -1;
> 
> there?

Not just - first of all you need to check whether the requested
range overlaps a special page.

Jan
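[To make the check under discussion concrete, a hedged, self-contained
sketch with simplified types and names — the "special page" reduced to a
single MFN; the real code in xen/arch/x86/hvm/mtrr.c uses mfn_t and the
APIC access page. A direct-MMIO range away from the special page stays
uncachable, a superpage mapping overlapping it is reported back for
splitting via -1, and only the exact order-0 mapping is cacheable:]

```c
#include <assert.h>
#include <stdbool.h>

/* MTRR memory types as used in the quoted diff. */
#define MTRR_TYPE_UNCACHABLE 0
#define MTRR_TYPE_WRBACK     6

/* Does the 2^order-aligned range at 'mfn' contain the special page?
 * This mirrors the (mfn ^ special_mfn) >> order test in the diff. */
static bool range_overlaps_special(unsigned long mfn, unsigned int order,
                                   unsigned long special_mfn)
{
    return ((mfn ^ special_mfn) >> order) == 0;
}

/* Direct-MMIO EMT decision: non-overlapping ranges are uncachable; an
 * overlapping superpage mapping must be split (signalled by -1); the
 * order-0 mapping of the special page itself gets a cacheable type. */
static int emt_for_direct_mmio(unsigned long mfn, unsigned int order,
                               unsigned long special_mfn)
{
    if ( !range_overlaps_special(mfn, order, special_mfn) )
        return MTRR_TYPE_UNCACHABLE;
    if ( order )        /* overlap: ask the caller to split the mapping */
        return -1;
    return MTRR_TYPE_WRBACK;
}
```

[The `if ( order ) return -1;` branch is the part Jan asks to retain, and
`range_overlaps_special()` stands in for the overlap check he says must
come first.]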


From xen-devel-bounces@lists.xenproject.org Fri Jul 31 13:45:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 13:45:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1VLs-0007kw-Ji; Fri, 31 Jul 2020 13:45:36 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=HXbV=BK=amazon.co.uk=prvs=4749be70b=pdurrant@srs-us1.protection.inumbo.net>)
 id 1k1VLr-0007kr-08
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 13:45:35 +0000
X-Inumbo-ID: 1afcdd58-d334-11ea-8e3e-bc764e2007e4
Received: from smtp-fw-33001.amazon.com (unknown [207.171.190.10])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1afcdd58-d334-11ea-8e3e-bc764e2007e4;
 Fri, 31 Jul 2020 13:45:33 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=amazon.co.uk; i=@amazon.co.uk; q=dns/txt;
 s=amazon201209; t=1596203134; x=1627739134;
 h=from:to:cc:date:message-id:references:in-reply-to:
 content-transfer-encoding:mime-version:subject;
 bh=ylkOaOK1WziisXgbBwS6H+PCGrgwG/8nlRD9DYjT1gI=;
 b=XmLEU614TXDcOQ87IwytWFVdqW5FjcGOKF6MYVfw/rtPACE46Y2IABEq
 2KiQaXIHetujD3UtRqtVST+Rz2Wps6BFcGHJlP0Icz6gcuDQaFu53+2bq
 PvlsNC9XqNo/A0UXZnzFVRg85oz6rnknFj7MpgyLnTsquWrumoXILH+DN 0=;
IronPort-SDR: 9Ihn7M7IBeyb4VNqhxnMNN3fwT7fMKIT/uLnH5qbkT1wv3QtyeEmdyujo5kyHkTojiYHT4WCOV
 t54+L4rFiqJw==
X-IronPort-AV: E=Sophos;i="5.75,418,1589241600"; d="scan'208";a="63299551"
Subject: RE: [PATCH v2 2/2] x86/hvm: simplify 'mmio_direct' check in
 epte_get_entry_emt()
Thread-Topic: [PATCH v2 2/2] x86/hvm: simplify 'mmio_direct' check in
 epte_get_entry_emt()
Received: from sea32-co-svc-lb4-vlan3.sea.corp.amazon.com (HELO
 email-inbound-relay-2a-22cc717f.us-west-2.amazon.com) ([10.47.23.38])
 by smtp-border-fw-out-33001.sea14.amazon.com with ESMTP;
 31 Jul 2020 13:45:20 +0000
Received: from EX13MTAUEA002.ant.amazon.com
 (pdx4-ws-svc-p6-lb7-vlan3.pdx.amazon.com [10.170.41.166])
 by email-inbound-relay-2a-22cc717f.us-west-2.amazon.com (Postfix) with ESMTPS
 id CDD1AA069E; Fri, 31 Jul 2020 13:45:19 +0000 (UTC)
Received: from EX13D32EUC003.ant.amazon.com (10.43.164.24) by
 EX13MTAUEA002.ant.amazon.com (10.43.61.77) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Fri, 31 Jul 2020 13:45:19 +0000
Received: from EX13D32EUC003.ant.amazon.com (10.43.164.24) by
 EX13D32EUC003.ant.amazon.com (10.43.164.24) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Fri, 31 Jul 2020 13:45:18 +0000
Received: from EX13D32EUC003.ant.amazon.com ([10.43.164.24]) by
 EX13D32EUC003.ant.amazon.com ([10.43.164.24]) with mapi id 15.00.1497.006;
 Fri, 31 Jul 2020 13:45:18 +0000
From: "Durrant, Paul" <pdurrant@amazon.co.uk>
To: Jan Beulich <jbeulich@suse.com>, "paul@xen.org" <paul@xen.org>
Thread-Index: AQHWZzfNu61as9hCEUu/A6GqtDYgc6khpgSAgAACwACAAAkqAP///z7A
Date: Fri, 31 Jul 2020 13:45:18 +0000
Message-ID: <9f38fdfdd6f2498d90c094c43de09a8e@EX13D32EUC003.ant.amazon.com>
References: <20200731123926.28970-1-paul@xen.org>
 <20200731123926.28970-3-paul@xen.org>
 <a4856c33-8bb0-4afa-cc71-3af4c229bc27@suse.com>
 <004501d6673b$9adffbf0$d09ff3d0$@xen.org>
 <84cdd5b8-5149-a240-8bad-be8d67dca0d8@suse.com>
In-Reply-To: <84cdd5b8-5149-a240-8bad-be8d67dca0d8@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-ms-exchange-transport-fromentityheader: Hosted
x-originating-ip: [10.43.164.90]
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Precedence: Bulk
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 'Roger Pau Monné' <roger.pau@citrix.com>,
 'Wei Liu' <wl@xen.org>, 'Andrew Cooper' <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Sent: 31 July 2020 14:41
> To: paul@xen.org
> Cc: xen-devel@lists.xenproject.org; Durrant, Paul <pdurrant@amazon.co.uk>; 'Andrew Cooper'
> <andrew.cooper3@citrix.com>; 'Wei Liu' <wl@xen.org>; 'Roger Pau Monné' <roger.pau@citrix.com>
> Subject: RE: [EXTERNAL] [PATCH v2 2/2] x86/hvm: simplify 'mmio_direct' check in epte_get_entry_emt()
> 
> CAUTION: This email originated from outside of the organization. Do not click links or open
> attachments unless you can confirm the sender and know the content is safe.
> 
> 
> 
> On 31.07.2020 15:07, Paul Durrant wrote:
> >> -----Original Message-----
> >> From: Jan Beulich <jbeulich@suse.com>
> >> Sent: 31 July 2020 13:58
> >> To: Paul Durrant <paul@xen.org>
> >> Cc: xen-devel@lists.xenproject.org; Paul Durrant <pdurrant@amazon.com>; Andrew Cooper
> >> <andrew.cooper3@citrix.com>; Wei Liu <wl@xen.org>; Roger Pau Monné <roger.pau@citrix.com>
> >> Subject: Re: [PATCH v2 2/2] x86/hvm: simplify 'mmio_direct' check in epte_get_entry_emt()
> >>
> >> On 31.07.2020 14:39, Paul Durrant wrote:
> >>> From: Paul Durrant <pdurrant@amazon.com>
> >>>
> >>> Re-factor the code to take advantage of the fact that the APIC access page is
> >>> a 'special' page.
> >>
> >> Hmm, that's going quite as far as I was thinking to go: In
> >> particular, you leave in place the set_mmio_p2m_entry() use
> >> in vmx_alloc_vlapic_mapping(). With that replaced, the
> >> re-ordering in epte_get_entry_emt() that you do shouldn't
> >> be necessary; you'd simple drop the checking of the
> >> specific MFN.
> >
> > Ok, it still needs to go in the p2m though so are you suggesting
> > just calling p2m_set_entry() directly?
> 
> Yes, if this works. The main question really is whether there are
> any hidden assumptions elsewhere that this page gets mapped as an
> MMIO one.
> 
> >>> --- a/xen/arch/x86/hvm/mtrr.c
> >>> +++ b/xen/arch/x86/hvm/mtrr.c
> >>> @@ -814,29 +814,22 @@ int epte_get_entry_emt(struct domain *d, unsigned long gfn, mfn_t mfn,
> >>>          return -1;
> >>>      }
> >>>
> >>> -    if ( direct_mmio )
> >>> -    {
> >>> -        if ( (mfn_x(mfn) ^ mfn_x(d->arch.hvm.vmx.apic_access_mfn)) >> order )
> >>> -            return MTRR_TYPE_UNCACHABLE;
> >>> -        if ( order )
> >>> -            return -1;
> >>
> >> ... this part of the logic wants retaining, I think, i.e.
> >> reporting back to the guest that the mapping needs splitting.
> >> I'm afraid I have to withdraw my R-b on patch 1 for this
> >> reason, as the check needs to be added there already.
> >
> > To be clear... You mean I need the:
> >
> > if ( order )
> >   return -1;
> >
> > there?
> 
> Not just - first of all you need to check whether the requested
> range overlaps a special page.

Ok. I'll add that.

  Paul


> 
> Jan


From xen-devel-bounces@lists.xenproject.org Fri Jul 31 13:46:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 13:46:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1VMW-0007nV-TK; Fri, 31 Jul 2020 13:46:16 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=oG5j=BK=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1k1VMV-0007nI-7Z
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 13:46:15 +0000
X-Inumbo-ID: 32cc8136-d334-11ea-abb4-12813bfff9fa
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 32cc8136-d334-11ea-abb4-12813bfff9fa;
 Fri, 31 Jul 2020 13:46:13 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1596203173;
 h=subject:to:cc:references:from:message-id:date:
 mime-version:in-reply-to:content-transfer-encoding;
 bh=pxLwBaKdI1Rl3eoAeiydUxHAipGDZU34ct9PFuZ7AW8=;
 b=EHOktpnPjZHgxB/BeetF7YjB7eqP8RKYeMnQeONQDIpCd/H5yXM9or3a
 jJSb1VyoJnX4Xji3AS7WxMfTWPOk+GdVocGTd3O1I4VW6vY+sktIyt9YG
 2mTo2jv77c09hXnoYvlvIOxibZdKFRYTMZcBdo0qINLMD1lE0mgIEPNlR I=;
Authentication-Results: esa5.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: qp+giJRsKHqCcTL+S4EWbw00jbbstkd2kJCwx3AI6t7t7MaB0Tubjdpo4k1u7Gu4SOep/lg5l/
 r6nl5cIsBd1hGXNXRSLnqrK06N5leGZhwthOsPzRmcmpXz0/aj1CvfWCO+sgCKsSU42YsvuN85
 cU4g4Bn+nl1okqu7spvFs3Ur79BEHewPy5w+cnuXjlQfbXvHmisUI+E0/IlEL3G4eh4ZxFgrl8
 AHS7bmCovLMp9xlf+N8sYfVnMu9XemwZFeJNOBqK1RXrvOHn5Edr4MmcAvXSK0IHn5/IyIK9zH
 KiY=
X-SBRS: 3.7
X-MesageID: 23810049
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,418,1589256000"; d="scan'208";a="23810049"
Subject: Re: [PATCH] tools/configure: drop BASH configure variable [and 1 more
 messages]
To: Jan Beulich <jbeulich@suse.com>, Ian Jackson <ian.jackson@citrix.com>
References: <20200722113258.3673-1-andrew.cooper3@citrix.com>
 <20200626170038.27650-1-andrew.cooper3@citrix.com>
 <880fcc83-875c-c030-bfac-c64477aa3254@suse.com>
 <24313.55588.61431.336617@mariner.uk.xensource.com>
 <2c202733-cbff-74e0-30c6-4cba227e7969@suse.com>
 <24356.5736.297234.341867@mariner.uk.xensource.com>
 <d963d352-d6d6-393a-9fdf-9d6f46450309@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <ff465b58-3547-ac52-8d4b-9159b45da613@citrix.com>
Date: Fri, 31 Jul 2020 14:46:09 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <d963d352-d6d6-393a-9fdf-9d6f46450309@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>, Wei Liu <wl@xen.org>,
 Paul Durrant <paul@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 31/07/2020 14:30, Jan Beulich wrote:
> On 31.07.2020 15:02, Ian Jackson wrote:
>> Jan Beulich writes ("Re: [PATCH] tools/configure: drop BASH configure variable"):
>>> On 29.06.2020 14:05, Ian Jackson wrote:
>>>> Jan Beulich writes ("Re: [PATCH] tools/configure: drop BASH configure variable"):
>>>>> On 26.06.2020 19:00, Andrew Cooper wrote:
>>>>> ... this may or may not take effect on the file system the sources
>>>>> are stored on.
>>>> In what circumstances might this not take effect ?
>>> When the file system is incapable of recording execute permissions?
>>> It has been a common workaround for this in various projects that
>>> I've worked with to use $(SHELL) to account for that, so the actual
>>> permissions from the fs don't matter. (There may be mount options
>>> to make everything executable on such file systems, but people may
>>> be hesitant to use them.)
>> I don't think we support building from sources which have been
>> unpacked onto such filesystems.  Other projects which might actually
>> need to build on Windows or something do do this $(SHELL) thing or an
>> equivalent, but I don't think that's us.
> It's not unexpected that you think of Windows here, but my thoughts
> were more towards building from sources on a CD or DVD, where iirc
> execute permissions also don't exist. The latest when we have
> out-of-tree builds fully working, this ought to be something that
> people should be able to do, imo. (Even without out-of-tree builds,
> my "next best" alternative of using a tree of symlinks to build in
> would similarly have an issue with the links pointing at a mounted
> CD/DVD, if the $(SHELL) wasn't present.)

See v2.  I put $(SHELL) in, because it isn't a worthwhile use of our
time to continue arguing over this point.

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri Jul 31 13:47:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 13:47:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1VNW-0007tr-78; Fri, 31 Jul 2020 13:47:18 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=S17i=BK=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1k1VNU-0007tf-OZ
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 13:47:16 +0000
X-Inumbo-ID: 57d3fff4-d334-11ea-abb4-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 57d3fff4-d334-11ea-abb4-12813bfff9fa;
 Fri, 31 Jul 2020 13:47:15 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 21EA1B013;
 Fri, 31 Jul 2020 13:47:28 +0000 (UTC)
Subject: Re: [PATCH] tools/configure: drop BASH configure variable [and 1 more
 messages]
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <20200722113258.3673-1-andrew.cooper3@citrix.com>
 <20200626170038.27650-1-andrew.cooper3@citrix.com>
 <880fcc83-875c-c030-bfac-c64477aa3254@suse.com>
 <24313.55588.61431.336617@mariner.uk.xensource.com>
 <2c202733-cbff-74e0-30c6-4cba227e7969@suse.com>
 <24356.5736.297234.341867@mariner.uk.xensource.com>
 <d963d352-d6d6-393a-9fdf-9d6f46450309@suse.com>
 <ff465b58-3547-ac52-8d4b-9159b45da613@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <2229b6be-bf37-d7d9-4b53-686e49f22eb7@suse.com>
Date: Fri, 31 Jul 2020 15:47:13 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <ff465b58-3547-ac52-8d4b-9159b45da613@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@citrix.com>,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>, Paul Durrant <paul@xen.org>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 31.07.2020 15:46, Andrew Cooper wrote:
> On 31/07/2020 14:30, Jan Beulich wrote:
>> On 31.07.2020 15:02, Ian Jackson wrote:
>>> Jan Beulich writes ("Re: [PATCH] tools/configure: drop BASH configure variable"):
>>>> On 29.06.2020 14:05, Ian Jackson wrote:
>>>>> Jan Beulich writes ("Re: [PATCH] tools/configure: drop BASH configure variable"):
>>>>>> On 26.06.2020 19:00, Andrew Cooper wrote:
>>>>>> ... this may or may not take effect on the file system the sources
>>>>>> are stored on.
>>>>> In what circumstances might this not take effect ?
>>>> When the file system is incapable of recording execute permissions?
>>>> It has been a common workaround for this in various projects that
>>>> I've worked with to use $(SHELL) to account for that, so the actual
>>>> permissions from the fs don't matter. (There may be mount options
>>>> to make everything executable on such file systems, but people may
>>>> be hesitant to use them.)
>>> I don't think we support building from sources which have been
>>> unpacked onto such filesystems.  Other projects which might actually
>>> need to build on Windows or something do do this $(SHELL) thing or an
>>> equivalent, but I don't think that's us.
>> It's not unexpected that you think of Windows here, but my thoughts
>> were more towards building from sources on a CD or DVD, where iirc
>> execute permissions also don't exist. The latest when we have
>> out-of-tree builds fully working, this ought to be something that
>> people should be able to do, imo. (Even without out-of-tree builds,
>> my "next best" alternative of using a tree of symlinks to build in
>> would similarly have an issue with the links pointing at a mounted
>> CD/DVD, if the $(SHELL) wasn't present.)
> 
> See v2.  I put $(SHELL) in, because it isn't a worthwhile use of our
> time to continue arguing over this point.

I had seen you did; I was merely replying back to Ian's comments.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Jul 31 13:48:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 13:48:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1VOJ-00081i-Hz; Fri, 31 Jul 2020 13:48:07 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=oG5j=BK=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1k1VOI-00081Y-Po
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 13:48:06 +0000
X-Inumbo-ID: 75bbd4f6-d334-11ea-abb4-12813bfff9fa
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 75bbd4f6-d334-11ea-abb4-12813bfff9fa;
 Fri, 31 Jul 2020 13:48:06 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1596203285;
 h=subject:to:cc:references:from:message-id:date:
 mime-version:in-reply-to:content-transfer-encoding;
 bh=URgNCkP+53pvhFNRR73gpEOWSMa9QiN9+jmEOCsKjls=;
 b=XIU4M/Zy8spUtALYuuQpitGKpjvvgg2r34Z9KXtaZQdNHrEjhl8U7HSR
 K/KOCsX85Eh9WDt8vYEJXt3INdZAZd4crVqNmSt1eudSPx8DpfSBFQP92
 S1OJiZIJhfGgBMIYkZWDg06tD8tfcnHVFq2M0dkGVuqDkUuqWtqd3MKOA 4=;
Authentication-Results: esa4.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: B4VCpL1nl5fLFGWGF+qWWrkzKoVZcwuZWTaproI+PoJHVgW5YOwIlixCMGDhttGBpqHQe4SZ2N
 Irg3RRmoHT5zhEJr/UruS8okjmFIzIPB38Hyoxce7d6q8LL+xqxamzK9xKnFlHlauLMOgi3llB
 5/YVDa1mdisgpdmWC1wXNse/Aj7DVvbD+gYcmQ+ofWJqf4koiA68Wdl2NUr0nw82fAYv0weosh
 miHPyZ0fdpwoSCTu7tEPKLUkXx6p4VRibQYMt3EohcKv3++chSJWZxzxRFPuMTUTZn6js/qREs
 668=
X-SBRS: 3.7
X-MesageID: 24505596
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,418,1589256000"; d="scan'208";a="24505596"
Subject: Re: kernel-doc and xen.git
To: Stefano Stabellini <sstabellini@kernel.org>, <committers@xenproject.org>
References: <alpine.DEB.2.21.2007291644330.1767@sstabellini-ThinkPad-T480s>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <dc42ea91-a876-0f85-3d99-739d4990d3eb@citrix.com>
Date: Fri, 31 Jul 2020 14:48:01 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2007291644330.1767@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, Bertrand.Marquis@arm.com
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 30/07/2020 02:27, Stefano Stabellini wrote:
> Hi all,
>
> I would like to ask for your feedback on the adoption of the kernel-doc
> format for in-code comments.
>
> In the FuSa SIG we have started looking into FuSa documents for Xen. One
> of the things we are investigating are ways to link these documents to
> in-code comments in xen.git and vice versa.
>
> In this context, Andrew Cooper suggested to have a look at "kernel-doc"
> [1] during one of the virtual beer sessions at the last Xen Summit.
>
> I did give a look at kernel-doc and it is very promising. kernel-doc is
> a script that can generate nice rst text documents from in-code
> comments. (The generated rst files can then be used as input for sphinx
> to generate html docs.) The comment syntax [2] is simple and similar to
> Doxygen:
>
>     /**
>      * function_name() - Brief description of function.
>      * @arg1: Describe the first argument.
>      * @arg2: Describe the second argument.
>      *        One can provide multiple line descriptions
>      *        for arguments.
>      */
>
> kernel-doc is actually better than Doxygen because it is a much simpler
> tool, one we could customize to our needs and with predictable output.
> Specifically, we could add the tagging, numbering, and referencing
> required by FuSa requirement documents.
>
> I would like your feedback on whether it would be good to start
> converting xen.git in-code comments to the kernel-doc format so that
> proper documents can be generated out of them. One day we could import
> kernel-doc into xen.git/scripts and use it to generate a set of html
> documents via sphinx.
>
> At a minimum we'll need to start the in-code comment blocks with two
> stars:
>
>     /**
>
> There could also be other small changes required to make sure the output
> is appropriate.
>
>
> Feedback is welcome!

I think it goes without saying that I'm +1 to this in principle.

We definitely have some /** comments already, but I have no idea if they
are valid kernel-doc or not.  It seems that kernel-doc has some
ability to report warnings, so it would be interesting to see what that
spits out.

I also think getting rid of our home-grown syntax for the public headers
will be a major improvement.  We actually have a reasonable amount of
ancillary documentation.

As with everything else in the Sphinx docs, I'd request that we don't
just blindly throw this in and call it done.  We need to curate
additions properly to avoid the docs turning into a mess.  I'm happy to
help out in my copious free time.

~Andrew
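For readers unfamiliar with the format, the comment layout quoted above is mechanical enough to extract with a short script. The following is a minimal Python sketch, not the real kernel-doc tool (which is a substantially larger Perl script); it only illustrates how the brief description and the @arg blocks in Stefano's example can be pulled apart:

```python
import re

def parse_kernel_doc(comment):
    """Extract the function name, brief description and @arg
    descriptions from a kernel-doc style /** ... */ comment."""
    lines = []
    for raw in comment.splitlines():
        stripped = raw.strip()
        if stripped in ("/**", "*/"):
            continue
        # Drop the leading " * " decoration.
        lines.append(re.sub(r"^\*\s?", "", stripped).strip())

    doc = {"name": None, "brief": None, "args": {}}
    current_arg = None
    for line in lines:
        m = re.match(r"(\w+)\(\)\s*-\s*(.*)", line)
        if m:
            doc["name"], doc["brief"] = m.group(1), m.group(2)
            current_arg = None
            continue
        m = re.match(r"@(\w+):\s*(.*)", line)
        if m:
            current_arg = m.group(1)
            doc["args"][current_arg] = m.group(2)
            continue
        if current_arg and line:
            # Continuation of a multi-line argument description.
            doc["args"][current_arg] += " " + line
    return doc

example = """/**
 * function_name() - Brief description of function.
 * @arg1: Describe the first argument.
 * @arg2: Describe the second argument.
 *        One can provide multiple line descriptions
 *        for arguments.
 */"""

print(parse_kernel_doc(example))
```

The real kernel-doc also understands Return:, Context: and free-form sections, which any converter for xen.git comments would need to handle as well.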


From xen-devel-bounces@lists.xenproject.org Fri Jul 31 13:48:50 2020
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
To: "'Jan Beulich'" <jbeulich@suse.com>
References: <20200731123926.28970-1-paul@xen.org>
 <20200731123926.28970-3-paul@xen.org>
 <a4856c33-8bb0-4afa-cc71-3af4c229bc27@suse.com>
 <004501d6673b$9adffbf0$d09ff3d0$@xen.org>
 <84cdd5b8-5149-a240-8bad-be8d67dca0d8@suse.com>
In-Reply-To: <84cdd5b8-5149-a240-8bad-be8d67dca0d8@suse.com>
Subject: RE: [PATCH v2 2/2] x86/hvm: simplify 'mmio_direct' check in
 epte_get_entry_emt()
Date: Fri, 31 Jul 2020 14:43:34 +0100
Message-ID: <004601d66740$9661ad80$c3250880$@xen.org>
Reply-To: paul@xen.org
Cc: xen-devel@lists.xenproject.org, 'Paul Durrant' <pdurrant@amazon.com>,
 =?utf-8?Q?'Roger_Pau_Monn=C3=A9'?= <roger.pau@citrix.com>,
 'Wei Liu' <wl@xen.org>, 'Andrew Cooper' <andrew.cooper3@citrix.com>

> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Sent: 31 July 2020 14:41
> To: paul@xen.org
> Cc: xen-devel@lists.xenproject.org; 'Paul Durrant' <pdurrant@amazon.com>; 'Andrew Cooper'
> <andrew.cooper3@citrix.com>; 'Wei Liu' <wl@xen.org>; 'Roger Pau Monné' <roger.pau@citrix.com>
> Subject: Re: [PATCH v2 2/2] x86/hvm: simplify 'mmio_direct' check in epte_get_entry_emt()
> 
> On 31.07.2020 15:07, Paul Durrant wrote:
> >> -----Original Message-----
> >> From: Jan Beulich <jbeulich@suse.com>
> >> Sent: 31 July 2020 13:58
> >> To: Paul Durrant <paul@xen.org>
> >> Cc: xen-devel@lists.xenproject.org; Paul Durrant <pdurrant@amazon.com>; Andrew Cooper
> >> <andrew.cooper3@citrix.com>; Wei Liu <wl@xen.org>; Roger Pau Monné <roger.pau@citrix.com>
> >> Subject: Re: [PATCH v2 2/2] x86/hvm: simplify 'mmio_direct' check in epte_get_entry_emt()
> >>
> >> On 31.07.2020 14:39, Paul Durrant wrote:
> >>> From: Paul Durrant <pdurrant@amazon.com>
> >>>
> >>> Re-factor the code to take advantage of the fact that the APIC access page is
> >>> a 'special' page.
> >>
> >> Hmm, that's not going quite as far as I was thinking to go: In
> >> particular, you leave in place the set_mmio_p2m_entry() use
> >> in vmx_alloc_vlapic_mapping(). With that replaced, the
> >> re-ordering in epte_get_entry_emt() that you do shouldn't
> >> be necessary; you'd simply drop the checking of the
> >> specific MFN.
> >
> > Ok, it still needs to go in the p2m though, so are you suggesting
> > just calling p2m_set_entry() directly?
> 
> Yes, if this works. The main question really is whether there are
> any hidden assumptions elsewhere that this page gets mapped as an
> MMIO one.
>

Actually, it occurs to me that logdirty is going to be an issue if I use
p2m_ram_rw. If I'm not going to use p2m_mmio_direct, then do you have
another suggestion?

  Paul



From xen-devel-bounces@lists.xenproject.org Fri Jul 31 13:50:27 2020
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: kernel-doc and xen.git
Thread-Topic: kernel-doc and xen.git
Thread-Index: AQHWZhCqcVqKiEQuVU6MZpfTy49dJakhtkiAgAAAnAA=
Date: Fri, 31 Jul 2020 13:50:12 +0000
Message-ID: <7C42E4DC-7887-43A9-8764-648AB8240548@arm.com>
References: <alpine.DEB.2.21.2007291644330.1767@sstabellini-ThinkPad-T480s>
 <dc42ea91-a876-0f85-3d99-739d4990d3eb@citrix.com>
In-Reply-To: <dc42ea91-a876-0f85-3d99-739d4990d3eb@citrix.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
Cc: Xen-devel <xen-devel@lists.xenproject.org>, nd <nd@arm.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 "committers@xenproject.org" <committers@xenproject.org>

> On 31 Jul 2020, at 15:48, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
> 
> I think it goes without saying that I'm +1 to this in principle.
> 
> We definitely have some /** comments already, but I have no idea if they
> are valid kernel-doc or not.  It seems that kernel-doc has some
> ability to report warnings, so it would be interesting to see what that
> spits out.

From my first crash test, not much is "kernel-doc" friendly, but the content
is there; it is only a matter of doing some formatting.

> 
> I also think getting rid of our home-grown syntax for the public headers
> will be a major improvement.  We actually have a reasonable amount of
> ancillary documentation.
> 
> As with everything else in the Sphinx docs, I'd request that we don't
> just blindly throw this in and call it done.  We need to curate
> additions properly to avoid the docs turning into a mess.  I'm happy to
> help out in my copious free time.

Thanks :-)

Bertrand


From xen-devel-bounces@lists.xenproject.org Fri Jul 31 13:57:24 2020
Date: Fri, 31 Jul 2020 15:56:56 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Subject: Re: [PATCH] x86/vmx: reorder code in vmx_deliver_posted_intr
Message-ID: <20200731135656.GB88772@Air-de-Roger>
References: <20200730140309.59916-1-roger.pau@citrix.com>
 <505b30dc-e504-918e-e676-70d856b76899@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <505b30dc-e504-918e-e676-70d856b76899@suse.com>
Cc: xen-devel@lists.xenproject.org, Kevin Tian <kevin.tian@intel.com>,
 Wei Liu <wl@xen.org>, Jun Nakajima <jun.nakajima@intel.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>

On Fri, Jul 31, 2020 at 03:05:52PM +0200, Jan Beulich wrote:
> On 30.07.2020 16:03, Roger Pau Monne wrote:
> > Remove the unneeded else branch, which allows reducing the
> > indentation of a larger block of code, while making the flow of the
> > function more obvious.
> > 
> > No functional change intended.
> > 
> > Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> 
> Reviewed-by: Jan Beulich <jbeulich@suse.com>
> 
> One minor request (could likely be taken care of while
> committing):
> 
> > @@ -2014,41 +2016,36 @@ static void vmx_deliver_posted_intr(struct vcpu *v, u8 vector)
> >           * VMEntry as it used to be.
> >           */
> >          pi_set_on(&v->arch.hvm.vmx.pi_desc);
> > +        vcpu_kick(v);
> > +        return;
> >      }
> > -    else
> > -    {
> > -        struct pi_desc old, new, prev;
> >  
> > -        prev.control = v->arch.hvm.vmx.pi_desc.control;
> > +    prev.control = v->arch.hvm.vmx.pi_desc.control;
> >  
> > -        do {
> > -            /*
> > -             * Currently, we don't support urgent interrupt, all
> > -             * interrupts are recognized as non-urgent interrupt,
> > -             * Besides that, if 'ON' is already set, no need to
> > -             * sent posted-interrupts notification event as well,
> > -             * according to hardware behavior.
> > -             */
> > -            if ( pi_test_sn(&prev) || pi_test_on(&prev) )
> > -            {
> > -                vcpu_kick(v);
> > -                return;
> > -            }
> > -
> > -            old.control = v->arch.hvm.vmx.pi_desc.control &
> > -                          ~((1 << POSTED_INTR_ON) | (1 << POSTED_INTR_SN));
> > -            new.control = v->arch.hvm.vmx.pi_desc.control |
> > -                          (1 << POSTED_INTR_ON);
> > +    do {
> > +        /*
> > +         * Currently, we don't support urgent interrupt, all
> > +         * interrupts are recognized as non-urgent interrupt,
> > +         * Besides that, if 'ON' is already set, no need to
> > +         * sent posted-interrupts notification event as well,
> > +         * according to hardware behavior.
> > +         */
> 
> Would be nice to s/sent/send/ here as you move it (maybe also
> remove the plural from "posted-interrupts") and - if possible -
> re-flow for the now increased space on the right side.

Oh, sure, I should have realized myself. Feel free to adjust at commit
if you don't mind. I would also adjust 'non-urgent interrupts'.

Thanks, Roger.
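To make the control flow of the refactored function easier to follow, here is a hypothetical Python model of the quoted hunk. The bit positions are illustrative only (the real POSTED_INTR_ON/POSTED_INTR_SN constants live in Xen's VMX headers), and the cmpxchg retry loop is reduced to a plain update since this model is single-threaded:

```python
# Illustrative bit positions only; the real constants are defined
# in Xen's VMX headers.
POSTED_INTR_ON = 0  # notification outstanding
POSTED_INTR_SN = 1  # suppress notification

def deliver_posted_intr(control):
    """Model of the reworked flow: if ON or SN is already set, only a
    vCPU kick is needed; otherwise set ON and send the notification.
    Returns the new control word and the action taken."""
    if control & ((1 << POSTED_INTR_ON) | (1 << POSTED_INTR_SN)):
        # Per the comment in the patch: no need to send a
        # posted-interrupt notification event; just kick the vCPU.
        return control, "kick"
    # In the real code this is a cmpxchg loop; collapsed here.
    return control | (1 << POSTED_INTR_ON), "notify"
```

With the else branch gone, the early "kick" exit is the only special case, which is the readability gain the commit message describes.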


From xen-devel-bounces@lists.xenproject.org Fri Jul 31 13:59:51 2020
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
To: <paul@xen.org>,
	"'Jan Beulich'" <jbeulich@suse.com>
References: <20200731123926.28970-1-paul@xen.org>
 <20200731123926.28970-3-paul@xen.org>
 <a4856c33-8bb0-4afa-cc71-3af4c229bc27@suse.com>
 <004501d6673b$9adffbf0$d09ff3d0$@xen.org>
 <84cdd5b8-5149-a240-8bad-be8d67dca0d8@suse.com>
 <004601d66740$9661ad80$c3250880$@xen.org>
In-Reply-To: <004601d66740$9661ad80$c3250880$@xen.org>
Subject: RE: [PATCH v2 2/2] x86/hvm: simplify 'mmio_direct' check in
 epte_get_entry_emt()
Date: Fri, 31 Jul 2020 14:54:33 +0100
Message-ID: <004701d66742$1ef678a0$5ce369e0$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="utf-8"
Content-Transfer-Encoding: quoted-printable
X-Mailer: Microsoft Outlook 16.0
Content-Language: en-gb
Thread-Index: AQLlfQu00pWfsofZfQkBPK3LKKgwygFbp5BqArgsBgQC0uj+iQMLOy2gApabDZmmn0USEA==
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Reply-To: paul@xen.org
Cc: xen-devel@lists.xenproject.org, 'Paul Durrant' <pdurrant@amazon.com>,
 =?utf-8?Q?'Roger_Pau_Monn=C3=A9'?= <roger.pau@citrix.com>,
 'Wei Liu' <wl@xen.org>, 'Andrew Cooper' <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> -----Original Message-----
> From: Paul Durrant <xadimgnik@gmail.com>
> Sent: 31 July 2020 14:44
> To: 'Jan Beulich' <jbeulich@suse.com>
> Cc: xen-devel@lists.xenproject.org; 'Paul Durrant' <pdurrant@amazon.com>;
> 'Andrew Cooper' <andrew.cooper3@citrix.com>; 'Wei Liu' <wl@xen.org>;
> 'Roger Pau Monné' <roger.pau@citrix.com>
> Subject: RE: [PATCH v2 2/2] x86/hvm: simplify 'mmio_direct' check in
> epte_get_entry_emt()
> 
> > -----Original Message-----
> > From: Jan Beulich <jbeulich@suse.com>
> > Sent: 31 July 2020 14:41
> > To: paul@xen.org
> > Cc: xen-devel@lists.xenproject.org; 'Paul Durrant' <pdurrant@amazon.com>;
> > 'Andrew Cooper' <andrew.cooper3@citrix.com>; 'Wei Liu' <wl@xen.org>;
> > 'Roger Pau Monné' <roger.pau@citrix.com>
> > Subject: Re: [PATCH v2 2/2] x86/hvm: simplify 'mmio_direct' check in
> > epte_get_entry_emt()
> >
> > On 31.07.2020 15:07, Paul Durrant wrote:
> > >> -----Original Message-----
> > >> From: Jan Beulich <jbeulich@suse.com>
> > >> Sent: 31 July 2020 13:58
> > >> To: Paul Durrant <paul@xen.org>
> > >> Cc: xen-devel@lists.xenproject.org; Paul Durrant <pdurrant@amazon.com>;
> > >> Andrew Cooper <andrew.cooper3@citrix.com>; Wei Liu <wl@xen.org>;
> > >> Roger Pau Monné <roger.pau@citrix.com>
> > >> Subject: Re: [PATCH v2 2/2] x86/hvm: simplify 'mmio_direct' check
> > >> in epte_get_entry_emt()
> > >>
> > >> On 31.07.2020 14:39, Paul Durrant wrote:
> > >>> From: Paul Durrant <pdurrant@amazon.com>
> > >>>
> > >>> Re-factor the code to take advantage of the fact that the APIC access
> > >>> page is a 'special' page.
> > >>
> > >> Hmm, that's not going quite as far as I was thinking to go: In
> > >> particular, you leave in place the set_mmio_p2m_entry() use
> > >> in vmx_alloc_vlapic_mapping(). With that replaced, the
> > >> re-ordering in epte_get_entry_emt() that you do shouldn't
> > >> be necessary; you'd simply drop the checking of the
> > >> specific MFN.
> > >
> > > Ok, it still needs to go in the p2m though, so are you suggesting
> > > just calling p2m_set_entry() directly?
> >
> > Yes, if this works. The main question really is whether there are
> > any hidden assumptions elsewhere that this page gets mapped as an
> > MMIO one.
> >
> 
> Actually, it occurs to me that logdirty is going to be an issue if I use
> p2m_ram_rw. If I'm not going to use p2m_mmio_direct then do you have
> another suggestion?
> 

Looking further, I'm uneasy about setting the APIC access page to anything
other than mmio_direct, so I'd rather leave the VMX code alone and
re-order things here.

  Paul




From xen-devel-bounces@lists.xenproject.org Fri Jul 31 14:04:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 14:04:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1Vdv-0001mT-5Q; Fri, 31 Jul 2020 14:04:15 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GLoN=BK=citrix.com=george.dunlap@srs-us1.protection.inumbo.net>)
 id 1k1Vdt-0001mL-M5
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 14:04:13 +0000
X-Inumbo-ID: b5eaaf96-d336-11ea-abbb-12813bfff9fa
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b5eaaf96-d336-11ea-abbb-12813bfff9fa;
 Fri, 31 Jul 2020 14:04:12 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1596204252;
 h=from:to:cc:subject:date:message-id:references:
 in-reply-to:content-id:content-transfer-encoding: mime-version;
 bh=JbvmWHPnHyFXzuWdV85iadLyRKIgc4icLLHssKEOtHI=;
 b=TlDwgyPZMOWCsx+HtOITZVXMOoomxgftjvvBJ9v+SKGd14G8ryTg5PyM
 VEF+ZgiZxfix9z+0qThVFQ+AuwUJdNOtu0sfAsAB9Pn1ulGN/Zbiwr/YI
 aZpvBHcxzfwrYzaNtPqlwF2Z/OK2XqGbefbzvfmwSPtePeB+trbnGq1lD o=;
Authentication-Results: esa6.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: JSRjncnjuG1XpqS77FBF4RGbiPcvqHM+AjXFI7YLCNBKDP2yyt2oSaM+izJkuD7PE7AJfSzAxN
 SKU0eKgC9oY9ruZnxgcwyflgufMzMGJIilWc8mUUiXRRHmI8ek1fcyFX2rTem1gH7cpkVJwEEB
 2ghtBbY3u1lS/H/4gYWTdsUlH9gNCNfvgk0voX/21ZeruZyLTK4Lqxgeu0GfE6w4DQ+El6kntV
 RKkcvKqyc9WqSws8nI45aM/3smaz/S6Ms4C5ejygsF7KesfjPhjDOOvd+QkT5HQbj99frSpeAv
 CWM=
X-SBRS: 3.7
X-MesageID: 23953025
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,418,1589256000"; d="scan'208";a="23953025"
From: George Dunlap <George.Dunlap@citrix.com>
To: Ian Jackson <Ian.Jackson@citrix.com>
Subject: Re: [OSSTEST PATCH v2 41/41] duration_estimator: Clarify
 recentflights query a bit
Thread-Topic: [OSSTEST PATCH v2 41/41] duration_estimator: Clarify
 recentflights query a bit
Thread-Index: AQHWZzI+XEyk28m8zUGjw7HVjV9ik6khlv4A
Date: Fri, 31 Jul 2020 14:04:09 +0000
Message-ID: <0E9B6793-7B13-418C-8E8B-7F5CA38520D3@citrix.com>
References: <20200731113820.5765-1-ian.jackson@eu.citrix.com>
 <20200731113820.5765-42-ian.jackson@eu.citrix.com>
In-Reply-To: <20200731113820.5765-42-ian.jackson@eu.citrix.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-mailer: Apple Mail (2.3608.80.23.2.2)
x-ms-exchange-messagesentrepresentingtype: 1
x-ms-exchange-transport-fromentityheader: Hosted
Content-Type: text/plain; charset="us-ascii"
Content-ID: <C35978B494756E40BBA332AC7BD54BDF@citrix.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>



> On Jul 31, 2020, at 12:38 PM, Ian Jackson <ian.jackson@eu.citrix.com> wro=
te:
>=20
> The condition on r.job is more naturally thought of as a join
> condition than a where condition.  (This is an inner join, so the
> semantics are identical.)
>=20
> Also, for clarity, swap the flight and job conditions round, so that
> the ON clause is a series of r.thing =3D otherthing.
>=20
> No functional change.
>=20
> Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>

Reviewed-by: George Dunlap <george.dunlap@citrix.com>
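Ian's point above — that a predicate on an inner join is equivalent whether it lives in the ON clause or the WHERE clause — can be sanity-checked with a small sqlite3 sketch (toy `flights`/`jobs` tables for illustration, not osstest's real schema):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE flights (flight INTEGER, blessing TEXT);
CREATE TABLE jobs (flight INTEGER, job TEXT);
INSERT INTO flights VALUES (151901, 'real'), (151902, 'real'), (151903, 'play');
INSERT INTO jobs VALUES (151901, 'test-amd64'), (151902, 'build-arm'),
                        (151903, 'test-amd64');
""")

# Predicate in the WHERE clause...
where_form = con.execute("""
    SELECT f.flight FROM flights f
    JOIN jobs j ON j.flight = f.flight
    WHERE j.job = 'test-amd64'
    ORDER BY f.flight
""").fetchall()

# ...versus the same predicate folded into the JOIN's ON clause.
on_form = con.execute("""
    SELECT f.flight FROM flights f
    JOIN jobs j ON j.flight = f.flight AND j.job = 'test-amd64'
    ORDER BY f.flight
""").fetchall()

# For an INNER join the two forms are semantically identical.
assert where_form == on_form
```

(The equivalence would not hold for an OUTER join, where moving a predicate between ON and WHERE changes which unmatched rows survive — which is why the commit message is careful to note this is an inner join.)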



From xen-devel-bounces@lists.xenproject.org Fri Jul 31 14:14:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 14:14:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1VnW-0002ix-5W; Fri, 31 Jul 2020 14:14:10 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=S17i=BK=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1k1VnV-0002is-6i
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 14:14:09 +0000
X-Inumbo-ID: 1819e82a-d338-11ea-abbb-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1819e82a-d338-11ea-abbb-12813bfff9fa;
 Fri, 31 Jul 2020 14:14:06 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 34D22AB8B;
 Fri, 31 Jul 2020 14:14:19 +0000 (UTC)
Subject: Re: [PATCH v2 2/2] x86/hvm: simplify 'mmio_direct' check in
 epte_get_entry_emt()
To: paul@xen.org
References: <20200731123926.28970-1-paul@xen.org>
 <20200731123926.28970-3-paul@xen.org>
 <a4856c33-8bb0-4afa-cc71-3af4c229bc27@suse.com>
 <004501d6673b$9adffbf0$d09ff3d0$@xen.org>
 <84cdd5b8-5149-a240-8bad-be8d67dca0d8@suse.com>
 <004601d66740$9661ad80$c3250880$@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <1386ff11-6bf4-2edb-3b44-354faa75cced@suse.com>
Date: Fri, 31 Jul 2020 16:14:04 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <004601d66740$9661ad80$c3250880$@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, 'Paul Durrant' <pdurrant@amazon.com>,
 =?UTF-8?B?J1JvZ2VyIFBhdSBNb25uw6kn?= <roger.pau@citrix.com>,
 'Wei Liu' <wl@xen.org>, 'Andrew Cooper' <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 31.07.2020 15:43, Paul Durrant wrote:
>> -----Original Message-----
>> From: Jan Beulich <jbeulich@suse.com>
>> Sent: 31 July 2020 14:41
>> To: paul@xen.org
>> Cc: xen-devel@lists.xenproject.org; 'Paul Durrant' <pdurrant@amazon.com>; 'Andrew Cooper'
>> <andrew.cooper3@citrix.com>; 'Wei Liu' <wl@xen.org>; 'Roger Pau Monné' <roger.pau@citrix.com>
>> Subject: Re: [PATCH v2 2/2] x86/hvm: simplify 'mmio_direct' check in epte_get_entry_emt()
>>
>> On 31.07.2020 15:07, Paul Durrant wrote:
>>>> -----Original Message-----
>>>> From: Jan Beulich <jbeulich@suse.com>
>>>> Sent: 31 July 2020 13:58
>>>> To: Paul Durrant <paul@xen.org>
>>>> Cc: xen-devel@lists.xenproject.org; Paul Durrant <pdurrant@amazon.com>; Andrew Cooper
>>>> <andrew.cooper3@citrix.com>; Wei Liu <wl@xen.org>; Roger Pau Monné <roger.pau@citrix.com>
>>>> Subject: Re: [PATCH v2 2/2] x86/hvm: simplify 'mmio_direct' check in epte_get_entry_emt()
>>>>
>>>> On 31.07.2020 14:39, Paul Durrant wrote:
>>>>> From: Paul Durrant <pdurrant@amazon.com>
>>>>>
>>>>> Re-factor the code to take advantage of the fact that the APIC access page is
>>>>> a 'special' page.
>>>>
>>>> Hmm, that's not going quite as far as I was thinking to go: In
>>>> particular, you leave in place the set_mmio_p2m_entry() use
>>>> in vmx_alloc_vlapic_mapping(). With that replaced, the
>>>> re-ordering in epte_get_entry_emt() that you do shouldn't
>>>> be necessary; you'd simply drop the checking of the
>>>> specific MFN.
>>>
>>> Ok, it still needs to go in the p2m though so are you suggesting
>>> just calling p2m_set_entry() directly?
>>
>> Yes, if this works. The main question really is whether there are
>> any hidden assumptions elsewhere that this page gets mapped as an
>> MMIO one.
>>
> 
> Actually, it occurs to me that logdirty is going to be an issue if I
> use p2m_ram_rw. If I'm not going to use p2m_mmio_direct then do you
> have another suggestion?

p2m_ram_rw is also not good because of allowing execution. If we don't
want to create a new type, how about (ab)using p2m_grant_map_rw (in
the sense of Xen granting the domain access to this page)? Possible
problems with this are
- replace_grant_p2m_mapping()
- hvm_translate_get_page()
All other uses of p2m_grant_map_rw and p2m_is_grant() look to be
compatible with this approach.

For replace_grant_p2m_mapping() I think there's no issue because the
grant table code won't find a suitable active entry.

For hvm_translate_get_page() the question is why it checks for the
two grant types in the first place; iow I wonder whether that check
couldn't be dropped.

Independent of this, btw, you may need to call set_gpfn_from_mfn()
alongside p2m_set_entry().

Jan
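To make the direction discussed above concrete, here is a hypothetical, untested pseudocode sketch of what replacing the set_mmio_p2m_entry() call in vmx_alloc_vlapic_mapping() might look like — the p2m type was still being debated in this thread, and the call shapes are only approximations, not the actual patch:

```
/* Pseudocode sketch only -- not the actual patch under discussion. */
    /* Was: set_mmio_p2m_entry(d, gfn, mfn, PAGE_ORDER_4K); */
    rc = p2m_set_entry(p2m_get_hostp2m(d), gfn, mfn, PAGE_ORDER_4K,
                       p2m_grant_map_rw /* type still under discussion */,
                       p2m_get_hostp2m(d)->default_access);
    if ( !rc )
        /* As noted above, the M2P may need updating alongside the p2m. */
        set_gpfn_from_mfn(mfn_x(mfn), gfn_x(gfn));
```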


From xen-devel-bounces@lists.xenproject.org Fri Jul 31 14:14:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 14:14:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1Vns-0002l6-EE; Fri, 31 Jul 2020 14:14:32 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=g5be=BK=oracle.com=boris.ostrovsky@srs-us1.protection.inumbo.net>)
 id 1k1Vnq-0002kx-ML
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 14:14:30 +0000
X-Inumbo-ID: 25d31ba8-d338-11ea-abbb-12813bfff9fa
Received: from aserp2120.oracle.com (unknown [141.146.126.78])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 25d31ba8-d338-11ea-abbb-12813bfff9fa;
 Fri, 31 Jul 2020 14:14:29 +0000 (UTC)
Received: from pps.filterd (aserp2120.oracle.com [127.0.0.1])
 by aserp2120.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 06VEBlvI012994;
 Fri, 31 Jul 2020 14:14:03 GMT
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com;
 h=subject : to : cc :
 references : from : message-id : date : mime-version : in-reply-to :
 content-type : content-transfer-encoding; s=corp-2020-01-29;
 bh=Qkoa98ueMEN0nIMXY0FuUYHUpYN7mcWFqQyjzv4PaLo=;
 b=k/uCLfgZ8X9eVxW4isO626hrKf3+/2IjrLyTvdA9YLfqY5sygPd5v+5X2HYb5Ei0gSkZ
 qTKLAw8RNOWa/bM0xOXHbYur22CTTRZhOAZeWgfRAJOb3S99aFDtHThi0Be10j4WgjX+
 3Z7X6KK7fFLq7dol6TKOMXZrz8HhbvtzyjFtjTAAxIrBGGz58+U0BU+TGnH8hElNFUjI
 5xl9NsWGf6whYw56/ly7ArxDz+rvEgywQrAEdcw9sw8wP5hMOMgHrD544jZq5WMFG5cb
 pBxhjZXFZpV3UEdfE4iyKuKx6LYHdxBeWBAw+bPYVValzgBgTnbdjU+ZbeNCBtBY8Zrv kA== 
Received: from aserp3030.oracle.com (aserp3030.oracle.com [141.146.126.71])
 by aserp2120.oracle.com with ESMTP id 32mf9g1kqj-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=FAIL);
 Fri, 31 Jul 2020 14:14:03 +0000
Received: from pps.filterd (aserp3030.oracle.com [127.0.0.1])
 by aserp3030.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 06VECseV171471;
 Fri, 31 Jul 2020 14:14:03 GMT
Received: from userv0121.oracle.com (userv0121.oracle.com [156.151.31.72])
 by aserp3030.oracle.com with ESMTP id 32hu5yy3ej-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Fri, 31 Jul 2020 14:14:03 +0000
Received: from abhmp0004.oracle.com (abhmp0004.oracle.com [141.146.116.10])
 by userv0121.oracle.com (8.14.4/8.13.8) with ESMTP id 06VEDq61018430;
 Fri, 31 Jul 2020 14:13:52 GMT
Received: from [10.39.217.162] (/10.39.217.162)
 by default (Oracle Beehive Gateway v4.0)
 with ESMTP ; Fri, 31 Jul 2020 07:13:52 -0700
Subject: Re: [PATCH v2 01/11] xen/manage: keep track of the on-going suspend
 mode
To: Anchal Agarwal <anchalag@amazon.com>
References: <20200717191009.GA3387@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
 <5464f384-d4b4-73f0-d39e-60ba9800d804@oracle.com>
 <20200721000348.GA19610@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
 <408d3ce9-2510-2950-d28d-fdfe8ee41a54@oracle.com>
 <alpine.DEB.2.21.2007211640500.17562@sstabellini-ThinkPad-T480s>
 <20200722180229.GA32316@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
 <alpine.DEB.2.21.2007221645430.17562@sstabellini-ThinkPad-T480s>
 <20200723225745.GB32316@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
 <alpine.DEB.2.21.2007241431280.17562@sstabellini-ThinkPad-T480s>
 <66a9b838-70ed-0807-9260-f2c31343a081@oracle.com>
 <20200730230634.GA17221@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Autocrypt: addr=boris.ostrovsky@oracle.com; keydata=
 xsFNBFH8CgsBEAC0KiOi9siOvlXatK2xX99e/J3OvApoYWjieVQ9232Eb7GzCWrItCzP8FUV
 PQg8rMsSd0OzIvvjbEAvaWLlbs8wa3MtVLysHY/DfqRK9Zvr/RgrsYC6ukOB7igy2PGqZd+M
 MDnSmVzik0sPvB6xPV7QyFsykEgpnHbvdZAUy/vyys8xgT0PVYR5hyvhyf6VIfGuvqIsvJw5
 C8+P71CHI+U/IhsKrLrsiYHpAhQkw+Zvyeml6XSi5w4LXDbF+3oholKYCkPwxmGdK8MUIdkM
 d7iYdKqiP4W6FKQou/lC3jvOceGupEoDV9botSWEIIlKdtm6C4GfL45RD8V4B9iy24JHPlom
 woVWc0xBZboQguhauQqrBFooHO3roEeM1pxXjLUbDtH4t3SAI3gt4dpSyT3EvzhyNQVVIxj2
 FXnIChrYxR6S0ijSqUKO0cAduenhBrpYbz9qFcB/GyxD+ZWY7OgQKHUZMWapx5bHGQ8bUZz2
 SfjZwK+GETGhfkvNMf6zXbZkDq4kKB/ywaKvVPodS1Poa44+B9sxbUp1jMfFtlOJ3AYB0WDS
 Op3d7F2ry20CIf1Ifh0nIxkQPkTX7aX5rI92oZeu5u038dHUu/dO2EcuCjl1eDMGm5PLHDSP
 0QUw5xzk1Y8MG1JQ56PtqReO33inBXG63yTIikJmUXFTw6lLJwARAQABzTNCb3JpcyBPc3Ry
 b3Zza3kgKFdvcmspIDxib3Jpcy5vc3Ryb3Zza3lAb3JhY2xlLmNvbT7CwXgEEwECACIFAlH8
 CgsCGwMGCwkIBwMCBhUIAgkKCwQWAgMBAh4BAheAAAoJEIredpCGysGyasEP/j5xApopUf4g
 9Fl3UxZuBx+oduuw3JHqgbGZ2siA3EA4bKwtKq8eT7ekpApn4c0HA8TWTDtgZtLSV5IdH+9z
 JimBDrhLkDI3Zsx2CafL4pMJvpUavhc5mEU8myp4dWCuIylHiWG65agvUeFZYK4P33fGqoaS
 VGx3tsQIAr7MsQxilMfRiTEoYH0WWthhE0YVQzV6kx4wj4yLGYPPBtFqnrapKKC8yFTpgjaK
 jImqWhU9CSUAXdNEs/oKVR1XlkDpMCFDl88vKAuJwugnixjbPFTVPyoC7+4Bm/FnL3iwlJVE
 qIGQRspt09r+datFzPqSbp5Fo/9m4JSvgtPp2X2+gIGgLPWp2ft1NXHHVWP19sPgEsEJXSr9
 tskM8ScxEkqAUuDs6+x/ISX8wa5Pvmo65drN+JWA8EqKOHQG6LUsUdJolFM2i4Z0k40BnFU/
 kjTARjrXW94LwokVy4x+ZYgImrnKWeKac6fMfMwH2aKpCQLlVxdO4qvJkv92SzZz4538az1T
 m+3ekJAimou89cXwXHCFb5WqJcyjDfdQF857vTn1z4qu7udYCuuV/4xDEhslUq1+GcNDjAhB
 nNYPzD+SvhWEsrjuXv+fDONdJtmLUpKs4Jtak3smGGhZsqpcNv8nQzUGDQZjuCSmDqW8vn2o
 hWwveNeRTkxh+2x1Qb3GT46uzsFNBFH8CgsBEADGC/yx5ctcLQlB9hbq7KNqCDyZNoYu1HAB
 Hal3MuxPfoGKObEktawQPQaSTB5vNlDxKihezLnlT/PKjcXC2R1OjSDinlu5XNGc6mnky03q
 yymUPyiMtWhBBftezTRxWRslPaFWlg/h/Y1iDuOcklhpr7K1h1jRPCrf1yIoxbIpDbffnuyz
 kuto4AahRvBU4Js4sU7f/btU+h+e0AcLVzIhTVPIz7PM+Gk2LNzZ3/on4dnEc/qd+ZZFlOQ4
 KDN/hPqlwA/YJsKzAPX51L6Vv344pqTm6Z0f9M7YALB/11FO2nBB7zw7HAUYqJeHutCwxm7i
 BDNt0g9fhviNcJzagqJ1R7aPjtjBoYvKkbwNu5sWDpQ4idnsnck4YT6ctzN4I+6lfkU8zMzC
 gM2R4qqUXmxFIS4Bee+gnJi0Pc3KcBYBZsDK44FtM//5Cp9DrxRQOh19kNHBlxkmEb8kL/pw
 XIDcEq8MXzPBbxwHKJ3QRWRe5jPNpf8HCjnZz0XyJV0/4M1JvOua7IZftOttQ6KnM4m6WNIZ
 2ydg7dBhDa6iv1oKdL7wdp/rCulVWn8R7+3cRK95SnWiJ0qKDlMbIN8oGMhHdin8cSRYdmHK
 kTnvSGJNlkis5a+048o0C6jI3LozQYD/W9wq7MvgChgVQw1iEOB4u/3FXDEGulRVko6xCBU4
 SQARAQABwsFfBBgBAgAJBQJR/AoLAhsMAAoJEIredpCGysGyfvMQAIywR6jTqix6/fL0Ip8G
 jpt3uk//QNxGJE3ZkUNLX6N786vnEJvc1beCu6EwqD1ezG9fJKMl7F3SEgpYaiKEcHfoKGdh
 30B3Hsq44vOoxR6zxw2B/giADjhmWTP5tWQ9548N4VhIZMYQMQCkdqaueSL+8asp8tBNP+TJ
 PAIIANYvJaD8xA7sYUXGTzOXDh2THWSvmEWWmzok8er/u6ZKdS1YmZkUy8cfzrll/9hiGCTj
 u3qcaOM6i/m4hqtvsI1cOORMVwjJF4+IkC5ZBoeRs/xW5zIBdSUoC8L+OCyj5JETWTt40+lu
 qoqAF/AEGsNZTrwHJYu9rbHH260C0KYCNqmxDdcROUqIzJdzDKOrDmebkEVnxVeLJBIhYZUd
 t3Iq9hdjpU50TA6sQ3mZxzBdfRgg+vaj2DsJqI5Xla9QGKD+xNT6v14cZuIMZzO7w0DoojM4
 ByrabFsOQxGvE0w9Dch2BDSI2Xyk1zjPKxG1VNBQVx3flH37QDWpL2zlJikW29Ws86PHdthh
 Fm5PY8YtX576DchSP6qJC57/eAAe/9ztZdVAdesQwGb9hZHJc75B+VNm4xrh/PJO6c1THqdQ
 19WVJ+7rDx3PhVncGlbAOiiiE3NOFPJ1OQYxPKtpBUukAlOTnkKE6QcA4zckFepUkfmBV1wM
 Jg6OxFYd01z+a+oL
Message-ID: <53b577a3-6af9-5587-7e47-485be38b3653@oracle.com>
Date: Fri, 31 Jul 2020 10:13:48 -0400
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20200730230634.GA17221@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
Content-Language: en-US
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9698
 signatures=668679
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 suspectscore=0
 malwarescore=0
 mlxscore=0 adultscore=0 spamscore=0 phishscore=0 mlxlogscore=999
 bulkscore=0 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2006250000 definitions=main-2007310105
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9698
 signatures=668679
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0
 lowpriorityscore=0 malwarescore=0
 bulkscore=0 adultscore=0 spamscore=0 mlxlogscore=999 priorityscore=1501
 suspectscore=0 clxscore=1015 mlxscore=0 phishscore=0 impostorscore=0
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2006250000
 definitions=main-2007310105
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: x86@kernel.org, len.brown@intel.com, peterz@infradead.org,
 benh@kernel.crashing.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
 pavel@ucw.cz, hpa@zytor.com, Stefano Stabellini <sstabellini@kernel.org>,
 eduval@amazon.com, mingo@redhat.com, xen-devel@lists.xenproject.org,
 sblbir@amazon.com, axboe@kernel.dk, konrad.wilk@oracle.com, bp@alien8.de,
 tglx@linutronix.de, jgross@suse.com, netdev@vger.kernel.org,
 linux-pm@vger.kernel.org, rjw@rjwysocki.net, kamatam@amazon.com,
 vkuznets@redhat.com, davem@davemloft.net, dwmw@amazon.co.uk,
 roger.pau@citrix.com
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 7/30/20 7:06 PM, Anchal Agarwal wrote:
> On Mon, Jul 27, 2020 at 06:08:29PM -0400, Boris Ostrovsky wrote:
>> CAUTION: This email originated from outside of the organization. Do not
>> click links or open attachments unless you can confirm the sender and
>> know the content is safe.
>>
>>
>>
>> On 7/24/20 7:01 PM, Stefano Stabellini wrote:
>>> Yes, it does, thank you. I'd rather not introduce unknown regressions so
>>> I would recommend to add an arch-specific check on registering
>>> freeze/thaw/restore handlers. Maybe something like the following:
>>>
>>> #ifdef CONFIG_X86
>>>     .freeze = blkfront_freeze,
>>>     .thaw = blkfront_restore,
>>>     .restore = blkfront_restore
>>> #endif
>>>
>>>
>>> maybe Boris has a better suggestion on how to do it
>>
>> An alternative might be to still install a pm notifier in
>> drivers/xen/manage.c (I think as a result of the latest discussions we
>> decided we won't need it) and return -ENOTSUPP for ARM for
>> PM_HIBERNATION_PREPARE and friends. Would that work?
>>
> I think the question here is for registering driver-specific
> freeze/thaw/restore callbacks for x86 only. I have dropped the pm_notifier
> in the v3, still pending testing. So I think just registering
> driver-specific callbacks for x86 only is a good option. What do you think?


I suggested using the notifier under the assumption that if it returns an
error then that will prevent the callbacks from being called, because
hibernation will be effectively disabled. But I haven't looked at the PM
code so I don't know whether this is actually the case.


The advantage of doing it in the notifier is that instead of adding ifdefs
to each driver you will be able to prevent callbacks from a single place.
Plus you can use this to disable hibernation for PVH dom0 as well.



-boris
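A rough kernel-style pseudocode sketch of the notifier approach described above (untested, and the `xen_hibernation_supported()` helper is made up for illustration): a PM notifier that vetoes hibernation early would keep per-driver callbacks from ever running, from a single place:

```
/* Pseudocode sketch only -- not a tested patch. */
static int xen_pm_notifier(struct notifier_block *nb,
                           unsigned long action, void *data)
{
        /* Veto hibernation where unsupported (e.g. ARM, PVH dom0). */
        if (action == PM_HIBERNATION_PREPARE &&
            !xen_hibernation_supported())   /* hypothetical helper */
                return notifier_from_errno(-ENOTSUPP);
        return NOTIFY_DONE;
}

static struct notifier_block xen_pm_nb = {
        .notifier_call = xen_pm_notifier,
};

/* In drivers/xen/manage.c setup: register_pm_notifier(&xen_pm_nb); */
```

Whether returning an error from PM_HIBERNATION_PREPARE actually aborts hibernation is exactly the open question raised in the message above.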





From xen-devel-bounces@lists.xenproject.org Fri Jul 31 14:17:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 14:17:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1VqQ-0002vI-SX; Fri, 31 Jul 2020 14:17:10 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GLoN=BK=citrix.com=george.dunlap@srs-us1.protection.inumbo.net>)
 id 1k1VqP-0002vC-Ek
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 14:17:09 +0000
X-Inumbo-ID: 8427b90c-d338-11ea-abbe-12813bfff9fa
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 8427b90c-d338-11ea-abbe-12813bfff9fa;
 Fri, 31 Jul 2020 14:17:08 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1596205028;
 h=from:to:cc:subject:date:message-id:references:
 in-reply-to:content-id:content-transfer-encoding: mime-version;
 bh=wnkgz3lJcyGV8tNUyDz2X/De2tL0/HgOXTDiF4Sf7Fc=;
 b=CPJ13pVB1QLHd1GZmTOtp7Qrn2cPJ2Jufuqt8X8Jik2DiXtFZOtuLbNd
 VNTPHSVZPz9f2GJysUMwx1nO1tsZ2dizItVYi/HS0KKfdrtx51e4hiKBm
 NT6HsXtCG9zc0W/yjHGi409ksbX0zCsBTli5AyHxA4swvxspDkXd+j6nc 0=;
Authentication-Results: esa2.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: uBPa7bx6y3uZSXAYbB89HsRQifuJopnCiFIHfLxk/6QlvC0Vg7W2oSayCYRMZtIZQn6u+B7sBG
 lLm/Cw78Cd8BAhyPZGC0tJZkJJK4W0jgnjfAfj3m0DBBgt9RoSOUvEjo3RtMLf81lYih/b0V44
 5SEKGOGqiuWD4twpKgDiwFJcgs3kgfLIR+HR5YaxhQdl9W0HKyYFpgG+tdb26QRZYIdEDySwjK
 QVuiDX30FHXGqxUatnk2ZzOFRr+HG0ra5L1AcjyP7wH3Z+WXqhobopD/ynnvNPRlLRhUOel9UY
 u5Q=
X-SBRS: 3.7
X-MesageID: 23646781
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,418,1589256000"; d="scan'208";a="23646781"
From: George Dunlap <George.Dunlap@citrix.com>
To: Ian Jackson <Ian.Jackson@citrix.com>
Subject: Re: [OSSTEST PATCH v2 08/41] sg-report-flight: Ask the db for flights
 of interest
Thread-Topic: [OSSTEST PATCH v2 08/41] sg-report-flight: Ask the db for
 flights of interest
Thread-Index: AQHWZy8f+6AlWw+J2k+zDE+BIUpw1akhmqIA
Date: Fri, 31 Jul 2020 14:17:04 +0000
Message-ID: <391CB71B-3587-40C1-BE6E-F01A6473141D@citrix.com>
References: <20200731113820.5765-1-ian.jackson@eu.citrix.com>
 <20200731113820.5765-9-ian.jackson@eu.citrix.com>
In-Reply-To: <20200731113820.5765-9-ian.jackson@eu.citrix.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-mailer: Apple Mail (2.3608.80.23.2.2)
x-ms-exchange-messagesentrepresentingtype: 1
x-ms-exchange-transport-fromentityheader: Hosted
Content-Type: text/plain; charset="utf-8"
Content-ID: <C8B6825660F566488DB0442233F58E21@citrix.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>



> On Jul 31, 2020, at 12:37 PM, Ian Jackson <ian.jackson@eu.citrix.com> wrote:
> 
> Specifically, we narrow the initial query to flights which have at
> least some job with the built_revision_foo we are looking for.
> 
> This condition is strictly broader than that implemented inside the
> flight search loop, so there is no functional change.

Assuming this is true, that job / runvar is filtered after extracting this information, then...

> 
> Perf: runtime of my test case now ~300s-500s.
> 
> Example query before (from the Perl DBI trace):
> 
>      SELECT * FROM (
>        SELECT flight, blessing FROM flights
>            WHERE (branch='xen-unstable')
>              AND                   EXISTS (SELECT 1
>                            FROM jobs
>                            WHERE jobs.flight = flights.flight
>                              AND jobs.job = ?)

>              AND ( (TRUE AND flight <= 151903) AND (blessing='real') )
>            ORDER BY flight DESC
>            LIMIT 1000
>      ) AS sub
>      ORDER BY blessing ASC, flight DESC
> 
> With these bind variables:
> 
>    "test-armhf-armhf-libvirt"
> 
> After:
> 
>      SELECT * FROM (
>        SELECT DISTINCT flight, blessing
>             FROM flights
>             JOIN runvars r1 USING (flight)
> 
>            WHERE (branch='xen-unstable')
>              AND ( (TRUE AND flight <= 151903) AND (blessing='real') )
>                  AND EXISTS (SELECT 1
>                            FROM jobs
>                            WHERE jobs.flight = flights.flight
>                              AND jobs.job = ?)
> 
>              AND r1.name LIKE 'built\_revision\_%'
>              AND r1.name = ?
>              AND r1.val = ?
> 
>            ORDER BY flight DESC
>            LIMIT 1000
>      ) AS sub
>      ORDER BY blessing ASC, flight DESC

…I agree that this should introduce no other changes.

Reviewed-by: George Dunlap <george.dunlap@citrix.com>


From xen-devel-bounces@lists.xenproject.org Fri Jul 31 14:17:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 14:17:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1Vqs-0002y6-5M; Fri, 31 Jul 2020 14:17:38 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GLoN=BK=citrix.com=george.dunlap@srs-us1.protection.inumbo.net>)
 id 1k1Vqr-0002xy-Cy
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 14:17:37 +0000
X-Inumbo-ID: 95360a0a-d338-11ea-abbe-12813bfff9fa
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 95360a0a-d338-11ea-abbe-12813bfff9fa;
 Fri, 31 Jul 2020 14:17:36 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1596205056;
 h=from:to:cc:subject:date:message-id:references:
 in-reply-to:content-id:content-transfer-encoding: mime-version;
 bh=QyIFJ9+6z5aSNXWn+RiFboyvev1mE5/Mf7OMP9avnX8=;
 b=K8nlNmVpep2lj64s/CQhioCh9wLig7dnoqjbnWmWkcFijka6B3X/noAW
 9KJL0skHeXUjQWl6nTaHfxf6xIXCSQeitfefsXiRNFMaJKMyof6dLvFuE
 RcOLQC0ySMb2PUMuwzpaHllV2YJzEc1tmet54i6hiyjj9Fx8lntpe2Fyz U=;
Authentication-Results: esa4.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: 1xKBZ4gKgD9gFH+1jHFxj73g8j/n9jStd65somPgvlCJWF5ZpVGjwvj2AX8LiFkxpvbqy10M/b
 eGdEFkM1NclGEKVAy1L9Cmnrj8U992a20cu9DiFHU67o7ayMgEgRteqmNXl7XGT/wV14oLcY8v
 utx8borbU9COM8qusOzO5daCp+SKUDUq1swT2Frh4mhsIskaSTiamGarDvWTl1cRGDq33bFQ5A
 3MancGKCCmQ8lieqnqR6mEfzab5mfUqCTMlesN++XBRO17ftUmVH9+s5GqvtCtyyCyVrOyypVE
 aH8=
X-SBRS: 3.7
X-MesageID: 24508538
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,418,1589256000"; d="scan'208";a="24508538"
From: George Dunlap <George.Dunlap@citrix.com>
To: Ian Jackson <Ian.Jackson@citrix.com>
Subject: Re: [OSSTEST PATCH v2 19/41] Executive: Drop redundant AND clause
Thread-Topic: [OSSTEST PATCH v2 19/41] Executive: Drop redundant AND clause
Thread-Index: AQHWZzGIiXhuM2ONcUuWJjbNKhR3S6khmr+A
Date: Fri, 31 Jul 2020 14:17:33 +0000
Message-ID: <6745953F-B208-49CF-BEA2-0956CC30891E@citrix.com>
References: <20200731113820.5765-1-ian.jackson@eu.citrix.com>
 <20200731113820.5765-20-ian.jackson@eu.citrix.com>
In-Reply-To: <20200731113820.5765-20-ian.jackson@eu.citrix.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-mailer: Apple Mail (2.3608.80.23.2.2)
x-ms-exchange-messagesentrepresentingtype: 1
x-ms-exchange-transport-fromentityheader: Hosted
Content-Type: text/plain; charset="us-ascii"
Content-ID: <1C793E6989D8B64D921AB80437CD3BD8@citrix.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>



> On Jul 31, 2020, at 12:37 PM, Ian Jackson <ian.jackson@eu.citrix.com> wrote:
> 
> In "Executive: Use index for report__find_test" we changed an EXISTS
> subquery into a JOIN.
> 
> Now, the condition r.flight=f.flight is redundant because this is the
> join column (from USING).
> 
> No functional change.
> 
> CC: George Dunlap <George.Dunlap@citrix.com>
> Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
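
The EXISTS-into-JOIN rewrite described above can be sanity-checked with a toy schema. The table names follow the quoted queries, but the rows are invented for illustration and are not osstest's real schema or data:

```python
import sqlite3

# Toy tables named after the ones in the quoted queries; the data is
# made up purely to exercise the two query shapes.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE flights (flight INTEGER PRIMARY KEY, blessing TEXT);
    CREATE TABLE jobs    (flight INTEGER, job TEXT);
    INSERT INTO flights VALUES (1, 'real'), (2, 'real'), (3, 'play');
    INSERT INTO jobs VALUES (1, 'test-armhf-armhf-libvirt'),
                            (2, 'other-job'),
                            (3, 'test-armhf-armhf-libvirt');
""")

# Original form: correlated EXISTS subquery.
exists_q = """
    SELECT flight FROM flights f
     WHERE EXISTS (SELECT 1 FROM jobs j
                    WHERE j.flight = f.flight AND j.job = ?)
     ORDER BY flight
"""

# Rewritten form: JOIN ... USING (flight).  USING already equates
# jobs.flight with flights.flight, which is why repeating that
# condition in the WHERE clause would be redundant.
join_q = """
    SELECT DISTINCT flight FROM flights
      JOIN jobs USING (flight)
     WHERE jobs.job = ?
     ORDER BY flight
"""

bind = ("test-armhf-armhf-libvirt",)
rows = con.execute(join_q, bind).fetchall()
assert rows == [(1,), (3,)]
assert con.execute(exists_q, bind).fetchall() == rows
```

Note that DISTINCT is needed in the JOIN form: a flight with several matching joined rows would otherwise be reported more than once, whereas EXISTS never duplicates.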

Reviewed-by: George Dunlap <george.dunlap@citrix.com>



From xen-devel-bounces@lists.xenproject.org Fri Jul 31 14:21:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 14:21:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1Vub-0003qs-MN; Fri, 31 Jul 2020 14:21:29 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GLoN=BK=citrix.com=george.dunlap@srs-us1.protection.inumbo.net>)
 id 1k1Vua-0003qM-7L
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 14:21:28 +0000
X-Inumbo-ID: 1e777d76-d339-11ea-8e4a-bc764e2007e4
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1e777d76-d339-11ea-8e4a-bc764e2007e4;
 Fri, 31 Jul 2020 14:21:27 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1596205286;
 h=from:to:cc:subject:date:message-id:references:
 in-reply-to:content-id:content-transfer-encoding: mime-version;
 bh=SafftgoayHAnHb15nG5Edak7dqbBFqff/CHbv1T2iLs=;
 b=hCDIny1mvTK3tzkElUyNVk9K3iTlf0H7JIUT7zYwrUqSOw5/h29lPCjL
 ZgvEu8xnHbuehv0E47Xv5I5Dmq4wGuRSSzDEB36nAe32Br5fOXj1Diexz
 YsOiaSmXkzMDida4iSQ5nHxXEQrwg5Nq6F8WlvDHN6X2gCcnXLMvy7Klu 8=;
Authentication-Results: esa4.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: rsUxuyQPNnXVk7MgZUD3od0Dw/CdEDO2LVUhHiV7bWM5rSRasc2Dla3iAP83zsmiJcYV259hyb
 Yxm9HB0H1g0+n3f7n+YsGtshAJzzF8aVWFjAvz8CSjO2VK5+uhoJ26jQ+EcrZotbloX96zmemH
 /x+EKBEsb3qa3gcsiZtf+BEzkIf8eOL5CVVVG8jX9/XoTJurU+HKxCYfdJSNgR5jbrSj8cu5vc
 aDdWJEE3QYgbe/kG1TDAdH8npKlUbuR3CXwIwqpJ0bTMbRSreEHnGUebeYgi+5mlb+ijJ5bAxH
 My8=
X-SBRS: 3.7
X-MesageID: 24508979
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,418,1589256000"; d="scan'208";a="24508979"
From: George Dunlap <George.Dunlap@citrix.com>
To: Ian Jackson <Ian.Jackson@citrix.com>
Subject: Re: [OSSTEST PATCH v2 07/41] schema: Provide indices for
 sg-report-flight
Thread-Topic: [OSSTEST PATCH v2 07/41] schema: Provide indices for
 sg-report-flight
Thread-Index: AQHWZy8fK8s8vZ3fmUmSnUsHUP7xn6khm9aA
Date: Fri, 31 Jul 2020 14:21:23 +0000
Message-ID: <05461545-D39A-4B98-BC27-3560C367FE25@citrix.com>
References: <20200731113820.5765-1-ian.jackson@eu.citrix.com>
 <20200731113820.5765-8-ian.jackson@eu.citrix.com>
In-Reply-To: <20200731113820.5765-8-ian.jackson@eu.citrix.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-mailer: Apple Mail (2.3608.80.23.2.2)
x-ms-exchange-messagesentrepresentingtype: 1
x-ms-exchange-transport-fromentityheader: Hosted
Content-Type: text/plain; charset="us-ascii"
Content-ID: <51CFC83B5620394FAEF50EE1006261FC@citrix.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>



> On Jul 31, 2020, at 12:37 PM, Ian Jackson <ian.jackson@eu.citrix.com> wrote:
> 
> These indexes allow very fast lookup of "relevant" flights eg when
> trying to justify failures.
> 
> In my ad-hoc test case, these indices (along with the subsequent
> changes to sg-report-flight and Executive.pm) reduce the runtime of
> sg-report-flight from 2-3ks (unacceptably long!) to as little as
> 5-7 seconds - a speedup of about 500x.
> 
> (Getting the database snapshot may take a while first, but deploying
> this code should help with that too by reducing long-running
> transactions.  Quoted perf timings are from snapshot acquisition.)
> 
> Without these new indexes there may be a performance change from the
> query changes.  I haven't benchmarked this so I am setting the schema
> updates to be Preparatory/Needed (ie, "Schema first" as
> schema/README.updates has it), to say that the index should be created
> before the new code is deployed.
> 
> Testing: I have tested this series by creating experimental indices
> "trial_..." in the actual production instance.  (Transactional DDL was
> very helpful with this.)  I have verified with \d that the schema update
> instructions in this commit generate indexes which are equivalent to
> the trial indices.
> 
> Deployment: After these schema updates are applied, the trial indices
> are redundant duplicates and should be deleted.
> 
> CC: George Dunlap <George.Dunlap@citrix.com>
> Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>

I have no idea if building an index on a LIKE is a good idea or not, but it certainly seems to be useful, so:

Reviewed-by: George Dunlap <george.dunlap@citrix.com>
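
On the LIKE question: a plain B-tree index can serve a fixed-prefix LIKE, because such a pattern is equivalent to a half-open range on the column (in PostgreSQL this is what the text_pattern_ops operator class, or a C-locale index, enables). A small sqlite3 sketch of the range-scan equivalence; the table and index names here are hypothetical, not osstest's:

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Hypothetical names for illustration; not the actual osstest schema.
con.execute("CREATE TABLE runvars (flight INTEGER, name TEXT, val TEXT)")
con.execute("CREATE INDEX runvars_name_idx ON runvars (name, val)")

# name LIKE 'built_revision_%' (underscores escaped) matches the same
# rows as this half-open range, which an ordinary B-tree can scan:
plan = con.execute("""
    EXPLAIN QUERY PLAN
    SELECT flight FROM runvars
     WHERE name >= 'built_revision_' AND name < 'built_revision`'
""").fetchall()

# The plan's detail column should show an index search, not a full scan.
assert any("runvars_name_idx" in row[-1] for row in plan)
```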



From xen-devel-bounces@lists.xenproject.org Fri Jul 31 14:25:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 14:25:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1VyQ-00041r-CD; Fri, 31 Jul 2020 14:25:26 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7AOU=BK=gmail.com=rjwysocki@srs-us1.protection.inumbo.net>)
 id 1k1VyP-00041l-62
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 14:25:25 +0000
X-Inumbo-ID: abda3475-d339-11ea-8e4a-bc764e2007e4
Received: from mail-oi1-f193.google.com (unknown [209.85.167.193])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id abda3475-d339-11ea-8e4a-bc764e2007e4;
 Fri, 31 Jul 2020 14:25:24 +0000 (UTC)
Received: by mail-oi1-f193.google.com with SMTP id e6so6627473oii.4
 for <xen-devel@lists.xenproject.org>; Fri, 31 Jul 2020 07:25:24 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc;
 bh=LUq1ijDD9SyO2ufTXKPW9//vrm+uL0JQvV8OQ7tDtyA=;
 b=pLnw3WxW6FrK4QQGOgLG+9xHM1D6GN0ldJ0iGIAeulZbMBM0uHOzcTMagxg/H8fAO4
 tW5YuAFhcW2TMHyiEsU/zmrntdTOzAYw2Sc2qtT1W6xU0AZUPWa+hifPtynqcPO9u1ec
 KiN42KGd2iGlkwSYN5FQf6NRI1bxWQ6WKsjNhvs9K65ZmkX1jES87xsrouVRHTjJJukl
 kbuV8AvYgsZcVFl2eDRfLeRCMzoWrHV62lfL17ASNAgY/wtTgvjT0IWFqS6/qF7gdyU8
 XgNNKcubet3B0TQd0cM91bHhMhC9Tx8/qSwP8Uz4WBNXksR23cofCttcCo0bfofs8q6B
 4E/Q==
X-Gm-Message-State: AOAM531BNEyH3M2OcY9HeGZKW0ImeE8GDk5/zBqeG9rE5iezMtekA9rz
 Jzv9dktsWtlCE4T6ThAHLOBp2AvqTUPe+jH4RuA=
X-Google-Smtp-Source: ABdhPJwXQdnG1otKOfzVxyL9Pv5KPx5Dv3MAzyWKLAJcQfKXU/RGMyj9PAedIsxXvXnE+QB4qD60adxhPMBESDi6fDY=
X-Received: by 2002:aca:a88e:: with SMTP id r136mr3259373oie.110.1596205523967; 
 Fri, 31 Jul 2020 07:25:23 -0700 (PDT)
MIME-Version: 1.0
References: <20200717191009.GA3387@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
 <5464f384-d4b4-73f0-d39e-60ba9800d804@oracle.com>
 <20200721000348.GA19610@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
 <408d3ce9-2510-2950-d28d-fdfe8ee41a54@oracle.com>
 <alpine.DEB.2.21.2007211640500.17562@sstabellini-ThinkPad-T480s>
 <20200722180229.GA32316@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
 <alpine.DEB.2.21.2007221645430.17562@sstabellini-ThinkPad-T480s>
 <20200723225745.GB32316@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
 <alpine.DEB.2.21.2007241431280.17562@sstabellini-ThinkPad-T480s>
 <66a9b838-70ed-0807-9260-f2c31343a081@oracle.com>
 <20200730230634.GA17221@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
 <53b577a3-6af9-5587-7e47-485be38b3653@oracle.com>
In-Reply-To: <53b577a3-6af9-5587-7e47-485be38b3653@oracle.com>
From: "Rafael J. Wysocki" <rafael@kernel.org>
Date: Fri, 31 Jul 2020 16:25:12 +0200
Message-ID: <CAJZ5v0j2kqgEfbiQchiA_USwGKC-UFkn2J3bUU2xCWU=+1p9Mw@mail.gmail.com>
Subject: Re: [PATCH v2 01/11] xen/manage: keep track of the on-going suspend
 mode
To: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Content-Type: text/plain; charset="UTF-8"
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: the arch/x86 maintainers <x86@kernel.org>, Len Brown <len.brown@intel.com>,
 Peter Zijlstra <peterz@infradead.org>,
 Benjamin Herrenschmidt <benh@kernel.crashing.org>,
 Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
 Linux Memory Management List <linux-mm@kvack.org>, Pavel Machek <pavel@ucw.cz>,
 "H. Peter Anvin" <hpa@zytor.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Eduardo Valentin <eduval@amazon.com>, Ingo Molnar <mingo@redhat.com>,
 xen-devel@lists.xenproject.org, "Singh, Balbir" <sblbir@amazon.com>,
 Jens Axboe <axboe@kernel.dk>, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 Anchal Agarwal <anchalag@amazon.com>, Borislav Petkov <bp@alien8.de>,
 Thomas Gleixner <tglx@linutronix.de>, Juergen Gross <jgross@suse.com>,
 netdev <netdev@vger.kernel.org>, Linux PM <linux-pm@vger.kernel.org>,
 "Rafael J. Wysocki" <rjw@rjwysocki.net>, "Kamata,
 Munehisa" <kamatam@amazon.com>, Vitaly Kuznetsov <vkuznets@redhat.com>,
 David Miller <davem@davemloft.net>, David Woodhouse <dwmw@amazon.co.uk>,
 roger.pau@citrix.com
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Fri, Jul 31, 2020 at 4:14 PM Boris Ostrovsky
<boris.ostrovsky@oracle.com> wrote:
>
> On 7/30/20 7:06 PM, Anchal Agarwal wrote:
> > On Mon, Jul 27, 2020 at 06:08:29PM -0400, Boris Ostrovsky wrote:
> >> CAUTION: This email originated from outside of the organization. Do not click links or open attachments unless you can confirm the sender and know the content is safe.
> >>
> >>
> >>
> >> On 7/24/20 7:01 PM, Stefano Stabellini wrote:
> >>> Yes, it does, thank you. I'd rather not introduce unknown regressions so
> >>> I would recommend to add an arch-specific check on registering
> >>> freeze/thaw/restore handlers. Maybe something like the following:
> >>>
> >>> #ifdef CONFIG_X86
> >>>     .freeze = blkfront_freeze,
> >>>     .thaw = blkfront_restore,
> >>>     .restore = blkfront_restore
> >>> #endif
> >>>
> >>>
> >>> maybe Boris has a better suggestion on how to do it
> >>
> >> An alternative might be to still install pm notifier in
> >> drivers/xen/manage.c (I think as result of latest discussions we decided
> >> we won't need it) and return -ENOTSUPP for ARM for
> >> PM_HIBERNATION_PREPARE and friends. Would that work?
> >>
> > I think the question here is for registering driver specific freeze/thaw/restore
> > callbacks for x86 only. I have dropped the pm_notifier in the v3 still pending
> > testing. So I think just registering driver specific callbacks for x86 only is a
> > good option. What do you think?
>
>
> I suggested using the notifier under assumption that if it returns an
> error then that will prevent callbacks to be called because hibernation
> will be effectively disabled.

That's correct.
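
The veto behaviour being discussed - a notifier error preventing the driver callbacks from ever running - can be modelled in a few lines. The names mirror the kernel's for readability, but this is an illustrative sketch, not kernel code:

```python
# Minimal model: if the PM notifier returns an error for
# PM_HIBERNATION_PREPARE, hibernation is aborted before any driver
# freeze/thaw/restore callbacks run.
ENOTSUPP = 524
PM_HIBERNATION_PREPARE = 1

def xen_pm_notifier(event, arch_is_arm):
    if arch_is_arm and event == PM_HIBERNATION_PREPARE:
        return -ENOTSUPP          # veto: hibernation unsupported on ARM
    return 0

def hibernate(arch_is_arm, log):
    err = xen_pm_notifier(PM_HIBERNATION_PREPARE, arch_is_arm)
    if err:
        return err                # driver callbacks never invoked
    log.append("freeze")          # blkfront_freeze etc. would run here
    return 0

log = []
assert hibernate(True, log) == -ENOTSUPP and log == []
assert hibernate(False, log) == 0 and log == ["freeze"]
```

This is why a notifier that errors out makes per-driver arch checks unnecessary: hibernation is disabled as a whole, so no callback is ever reached.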


From xen-devel-bounces@lists.xenproject.org Fri Jul 31 14:26:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 14:26:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1Vz7-00045U-NC; Fri, 31 Jul 2020 14:26:09 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=S/xS=BK=xen.org=paul@srs-us1.protection.inumbo.net>)
 id 1k1Vz6-00045J-Qb
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 14:26:08 +0000
X-Inumbo-ID: c63e343c-d339-11ea-abc3-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c63e343c-d339-11ea-abc3-12813bfff9fa;
 Fri, 31 Jul 2020 14:26:08 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:MIME-Version:
 Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:Content-ID:
 Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc
 :Resent-Message-ID:In-Reply-To:References:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=AP1vBeaCfx9AP7a24e34n/oJJkJNPvuMBU2kMllpL5g=; b=hqK3dbOBOpsHdb8/jm0f/wWaDY
 mUW4jWn0L+NpK83xAwILRtEFrpxQArNFr4MnDQXBSQtbDwYLZgkNdSZtnHPCCgVgXIU7EcpP8MVdq
 0bPnVsZfc92nkJ7JRylyLu6JeFCmK2rZ0hzyGF32gbKDZz3RiMKV2ssrJ/ZiDSSiERGk=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1k1Vz5-0003Zv-HQ; Fri, 31 Jul 2020 14:26:07 +0000
Received: from host86-143-223-30.range86-143.btcentralplus.com
 ([86.143.223.30] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1k1Vz5-0005vI-7r; Fri, 31 Jul 2020 14:26:07 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v3 0/2] epte_get_entry_emt() modifications
Date: Fri, 31 Jul 2020 15:26:02 +0100
Message-Id: <20200731142604.30149-1-paul@xen.org>
X-Mailer: git-send-email 2.20.1
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 Paul Durrant <pdurrant@amazon.com>, Wei Liu <wl@xen.org>,
 Jan Beulich <jbeulich@suse.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Paul Durrant <pdurrant@amazon.com>

Paul Durrant (2):
  x86/hvm: set 'ipat' in EPT for special pages
  x86/hvm: simplify 'mmio_direct' check in epte_get_entry_emt()

 xen/arch/x86/hvm/mtrr.c | 26 +++++++++++++++-----------
 1 file changed, 15 insertions(+), 11 deletions(-)
---
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>
Cc: Wei Liu <wl@xen.org>
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri Jul 31 14:26:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 14:26:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1VzC-00046v-Vt; Fri, 31 Jul 2020 14:26:14 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=S/xS=BK=xen.org=paul@srs-us1.protection.inumbo.net>)
 id 1k1VzB-00045J-PE
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 14:26:13 +0000
X-Inumbo-ID: c63e343d-d339-11ea-abc3-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c63e343d-d339-11ea-abc3-12813bfff9fa;
 Fri, 31 Jul 2020 14:26:08 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:MIME-Version:
 References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=0ELqKaNJTQk3GSaeAGwge8h9gTL67LFvh82JALfJSP0=; b=bMipD4w6/T3x8nE1XEd3g37i7P
 x3fT0nfXiTGnbgsZOkLavc7KNBcEbHJrr4n3T5aiU/H6ltMXfoo5hLqjiCe5WS1eBHat5B6Uu1NKA
 ye9Kq7Kb6SmmC70x1dhXS8tq64LvBDx+42ir3a3jQWkopBYcNKzJLHfuqxG5skJY/gqE=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1k1Vz6-0003Zz-E8; Fri, 31 Jul 2020 14:26:08 +0000
Received: from host86-143-223-30.range86-143.btcentralplus.com
 ([86.143.223.30] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1k1Vz6-0005vI-6D; Fri, 31 Jul 2020 14:26:08 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v3 1/2] x86/hvm: set 'ipat' in EPT for special pages
Date: Fri, 31 Jul 2020 15:26:03 +0100
Message-Id: <20200731142604.30149-2-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200731142604.30149-1-paul@xen.org>
References: <20200731142604.30149-1-paul@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 Paul Durrant <pdurrant@amazon.com>, Wei Liu <wl@xen.org>,
 Jan Beulich <jbeulich@suse.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Paul Durrant <pdurrant@amazon.com>

All non-MMIO ranges (i.e. those not mapping real device MMIO regions) that
map valid MFNs are normally marked MTRR_TYPE_WRBACK and 'ipat' is set. Hence
when PV drivers running in a guest populate the BAR space of the Xen Platform
PCI Device with pages such as the Shared Info page or Grant Table pages,
accesses to these pages will be cachable.

However, should IOMMU mappings be enabled for the guest then these
accesses become uncachable. This has a substantial negative effect on I/O
throughput of PV devices. Arguably PV drivers should not be using BAR space to
host the Shared Info and Grant Table pages but it is currently commonplace for
them to do this and so this problem needs mitigation. Hence this patch makes
sure the 'ipat' bit is set for any special page regardless of where in GFN
space it is mapped.

NOTE: Clearly this mitigation only applies to Intel EPT. It is not obvious
      that there is any similar mitigation possible for AMD NPT. Downstreams
      such as Citrix XenServer have been carrying a patch similar to this for
      several releases though.
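
The logic this patch adds can be paraphrased as follows. This is a simplified model: the constant value, the helper name and the MFN arithmetic are stand-ins, not Xen's actual code:

```python
# Model of the added epte_get_entry_emt() hunk: scan all 2**order MFNs
# in the range; if any is a "special" page, either ask the caller to
# splinter the superpage (order > 0 -> -1) or map the single page
# write-back with ipat set.
MTRR_TYPE_WRBACK = 6

def emt_for_range(mfn, order, special_mfns):
    ipat = False
    for i in range(1 << order):
        if (mfn + i) in special_mfns:      # models is_special_page()
            if order:
                return None, -1            # splinter the superpage
            return True, MTRR_TYPE_WRBACK  # ipat=1, cachable
    return ipat, None                      # fall through to MTRR logic

# A 2MB superpage (order 9) containing one special page gets splintered:
assert emt_for_range(0x1000, 9, {0x1005}) == (None, -1)
# A single special 4k page is mapped WB with ipat set:
assert emt_for_range(0x1005, 0, {0x1005}) == (True, MTRR_TYPE_WRBACK)
```

Returning -1 for order > 0 forces the mapping to be retried at a smaller order, so only the special pages themselves end up with the forced WB type.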

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Wei Liu <wl@xen.org>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>

v3:
 - dropping Jan's R-b
 - cope with order > 0
---
 xen/arch/x86/hvm/mtrr.c | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/xen/arch/x86/hvm/mtrr.c b/xen/arch/x86/hvm/mtrr.c
index 511c3be1c8..26721f6ee7 100644
--- a/xen/arch/x86/hvm/mtrr.c
+++ b/xen/arch/x86/hvm/mtrr.c
@@ -836,6 +836,17 @@ int epte_get_entry_emt(struct domain *d, unsigned long gfn, mfn_t mfn,
         return MTRR_TYPE_WRBACK;
     }
 
+    for ( i = 0; i < (1ul << order); i++ )
+    {
+        if ( is_special_page(mfn_to_page(mfn_add(mfn, i))) )
+        {
+            if ( order )
+                return -1;
+            *ipat = 1;
+            return MTRR_TYPE_WRBACK;
+        }
+    }
+
     gmtrr_mtype = hvm_get_mem_pinned_cacheattr(d, _gfn(gfn), order);
     if ( gmtrr_mtype >= 0 )
     {
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri Jul 31 14:26:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 14:26:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1VzI-00048Y-7m; Fri, 31 Jul 2020 14:26:20 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=S/xS=BK=xen.org=paul@srs-us1.protection.inumbo.net>)
 id 1k1VzG-00045J-PQ
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 14:26:18 +0000
X-Inumbo-ID: c63e343e-d339-11ea-abc3-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c63e343e-d339-11ea-abc3-12813bfff9fa;
 Fri, 31 Jul 2020 14:26:09 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:MIME-Version:
 References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=TJaEXcctXRgjH3OlX3A2Uvf+yHvHz4k83Q2YxZ3EH7g=; b=b7Gkm1LV1B6caEeTpvHPEmpFVb
 0pFIndce3oWA5jGg1QoCid1wIzUY9J+JVrP8t17/9ZHTpdaMHJAxMJi6g+S7HztB9vCpxTXFbmc/6
 9Yz6Gc0kfObG8fbuXodKghyGExJSeB1ksYLSG7U5jc4/19G4oU29LjWmQtDkayU6Xdfo=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1k1Vz7-0003a9-El; Fri, 31 Jul 2020 14:26:09 +0000
Received: from host86-143-223-30.range86-143.btcentralplus.com
 ([86.143.223.30] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1k1Vz7-0005vI-63; Fri, 31 Jul 2020 14:26:09 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v3 2/2] x86/hvm: simplify 'mmio_direct' check in
 epte_get_entry_emt()
Date: Fri, 31 Jul 2020 15:26:04 +0100
Message-Id: <20200731142604.30149-3-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200731142604.30149-1-paul@xen.org>
References: <20200731142604.30149-1-paul@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 Paul Durrant <pdurrant@amazon.com>, Wei Liu <wl@xen.org>,
 Jan Beulich <jbeulich@suse.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Paul Durrant <pdurrant@amazon.com>

Re-factor the code to take advantage of the fact that the APIC access page is
a 'special' page. The VMX code is left alone, so the APIC access page is still
inserted into the P2M with type p2m_mmio_direct: it is not obvious that there
is another suitable type to use, and the necessary re-ordering in
epte_get_entry_emt() is straightforward.
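
The resulting order of checks can be sketched like this. It is a simplified model of the decision flow after the diff below, with invented constants, not Xen's actual code:

```python
# After this patch, special pages are classified before the direct_mmio
# fallback, so the APIC access page (still p2m_mmio_direct in the P2M)
# reaches the special-page WB+ipat path instead of needing a dedicated
# branch inside the direct_mmio case.
UNCACHABLE, WRBACK = 0, 6

def emt(mfn_valid=True, direct_mmio=False, special=False, iommu=True):
    if not mfn_valid:
        return UNCACHABLE              # invalid MFN
    if not direct_mmio and not iommu:
        return WRBACK                  # no DMA possible: always cachable
    if special:
        return WRBACK                  # special-page check comes first now
    if direct_mmio:
        return UNCACHABLE              # real device MMIO
    return "mtrr"                      # defer to guest MTRR logic

# The APIC access page is direct_mmio *and* special: special wins now.
assert emt(direct_mmio=True, special=True) == WRBACK
assert emt(direct_mmio=True, special=False) == UNCACHABLE
```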

Suggested-by: Jan Beulich <jbeulich@suse.com>
Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Wei Liu <wl@xen.org>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>

v2:
 - New in v2

v3:
 - Re-base
 - Expand commit comment
---
 xen/arch/x86/hvm/mtrr.c | 15 ++++-----------
 1 file changed, 4 insertions(+), 11 deletions(-)

diff --git a/xen/arch/x86/hvm/mtrr.c b/xen/arch/x86/hvm/mtrr.c
index 26721f6ee7..cfdbcbfef1 100644
--- a/xen/arch/x86/hvm/mtrr.c
+++ b/xen/arch/x86/hvm/mtrr.c
@@ -814,23 +814,13 @@ int epte_get_entry_emt(struct domain *d, unsigned long gfn, mfn_t mfn,
         return -1;
     }
 
-    if ( direct_mmio )
-    {
-        if ( (mfn_x(mfn) ^ mfn_x(d->arch.hvm.vmx.apic_access_mfn)) >> order )
-            return MTRR_TYPE_UNCACHABLE;
-        if ( order )
-            return -1;
-        *ipat = 1;
-        return MTRR_TYPE_WRBACK;
-    }
-
     if ( !mfn_valid(mfn) )
     {
         *ipat = 1;
         return MTRR_TYPE_UNCACHABLE;
     }
 
-    if ( !is_iommu_enabled(d) && !cache_flush_permitted(d) )
+    if ( !direct_mmio && !is_iommu_enabled(d) && !cache_flush_permitted(d) )
     {
         *ipat = 1;
         return MTRR_TYPE_WRBACK;
@@ -847,6 +837,9 @@ int epte_get_entry_emt(struct domain *d, unsigned long gfn, mfn_t mfn,
         }
     }
 
+    if ( direct_mmio )
+        return MTRR_TYPE_UNCACHABLE;
+
     gmtrr_mtype = hvm_get_mem_pinned_cacheattr(d, _gfn(gfn), order);
     if ( gmtrr_mtype >= 0 )
     {
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri Jul 31 14:30:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 14:30:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1W3C-00056U-Q7; Fri, 31 Jul 2020 14:30:22 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=HXbV=BK=amazon.co.uk=prvs=4749be70b=pdurrant@srs-us1.protection.inumbo.net>)
 id 1k1W3B-00056M-In
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 14:30:21 +0000
X-Inumbo-ID: 5cea6fcc-d33a-11ea-abc4-12813bfff9fa
Received: from smtp-fw-2101.amazon.com (unknown [72.21.196.25])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 5cea6fcc-d33a-11ea-abc4-12813bfff9fa;
 Fri, 31 Jul 2020 14:30:20 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=amazon.co.uk; i=@amazon.co.uk; q=dns/txt;
 s=amazon201209; t=1596205821; x=1627741821;
 h=from:to:cc:date:message-id:references:in-reply-to:
 content-transfer-encoding:mime-version:subject;
 bh=+gXW+enlch3WbuRFmyKr9mq3d3S23+dK+/0Asq4lc7M=;
 b=tOaRKInb2REPQqpdTPSTPQscHpzDnj81cLlkwe9fbCv+hV/F9xbyr5HH
 bbX4UgB+I6mQOdBExu1WrhKYkd5nNzOo5fRrLewECzjzerxIn6cVSv4u9
 pJVrVVaBxvrfLpgjNAPAyFkVm8QzzYp04Dw9OHHATucXgFe3EZJe9CajR o=;
IronPort-SDR: t2AS02JopL4ieO79pk16JluAO2qAjuCx+8Ok9UXASbk3e715uNwRDq51ETjeBES9sCoWve3eve
 zTLxnsL7zPgQ==
X-IronPort-AV: E=Sophos;i="5.75,418,1589241600"; d="scan'208";a="45206227"
Subject: RE: [PATCH v3 1/2] x86/hvm: set 'ipat' in EPT for special pages
Thread-Topic: [PATCH v3 1/2] x86/hvm: set 'ipat' in EPT for special pages
Received: from iad12-co-svc-p1-lb1-vlan2.amazon.com (HELO
 email-inbound-relay-1a-7d76a15f.us-east-1.amazon.com) ([10.43.8.2])
 by smtp-border-fw-out-2101.iad2.amazon.com with ESMTP;
 31 Jul 2020 14:30:20 +0000
Received: from EX13MTAUEA002.ant.amazon.com
 (iad55-ws-svc-p15-lb9-vlan3.iad.amazon.com [10.40.159.166])
 by email-inbound-relay-1a-7d76a15f.us-east-1.amazon.com (Postfix) with ESMTPS
 id C3DB6A26F5; Fri, 31 Jul 2020 14:30:18 +0000 (UTC)
Received: from EX13D32EUC004.ant.amazon.com (10.43.164.121) by
 EX13MTAUEA002.ant.amazon.com (10.43.61.77) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Fri, 31 Jul 2020 14:30:18 +0000
Received: from EX13D32EUC003.ant.amazon.com (10.43.164.24) by
 EX13D32EUC004.ant.amazon.com (10.43.164.121) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Fri, 31 Jul 2020 14:30:16 +0000
Received: from EX13D32EUC003.ant.amazon.com ([10.43.164.24]) by
 EX13D32EUC003.ant.amazon.com ([10.43.164.24]) with mapi id 15.00.1497.006;
 Fri, 31 Jul 2020 14:30:16 +0000
From: "Durrant, Paul" <pdurrant@amazon.co.uk>
To: Paul Durrant <paul@xen.org>, "xen-devel@lists.xenproject.org"
 <xen-devel@lists.xenproject.org>
Thread-Index: AQHWZ0ag7yevQZOxg0qa3sMTQ3/Iq6khvhzw
Date: Fri, 31 Jul 2020 14:30:15 +0000
Message-ID: <409dc5e763f446b2be1df92b31e57d13@EX13D32EUC003.ant.amazon.com>
References: <20200731142604.30149-1-paul@xen.org>
 <20200731142604.30149-2-paul@xen.org>
In-Reply-To: <20200731142604.30149-2-paul@xen.org>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-ms-exchange-transport-fromentityheader: Hosted
x-originating-ip: [10.43.164.90]
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
MIME-Version: 1.0
Precedence: Bulk
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Jan Beulich <jbeulich@suse.com>,
 =?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> -----Original Message-----
> From: Paul Durrant <paul@xen.org>
> Sent: 31 July 2020 15:26
> To: xen-devel@lists.xenproject.org
> Cc: Durrant, Paul <pdurrant@amazon.co.uk>; Jan Beulich <jbeulich@suse.com>; Andrew Cooper
> <andrew.cooper3@citrix.com>; Wei Liu <wl@xen.org>; Roger Pau Monné <roger.pau@citrix.com>
> Subject: [EXTERNAL] [PATCH v3 1/2] x86/hvm: set 'ipat' in EPT for special pages
> 
> CAUTION: This email originated from outside of the organization. Do not click links or open
> attachments unless you can confirm the sender and know the content is safe.
> 
> 
> 
> From: Paul Durrant <pdurrant@amazon.com>
> 
> All non-MMIO ranges (i.e. those not mapping real device MMIO regions) that
> map valid MFNs are normally marked MTRR_TYPE_WRBACK and 'ipat' is set. Hence
> when PV drivers running in a guest populate the BAR space of the Xen Platform
> PCI Device with pages such as the Shared Info page or Grant Table pages,
> accesses to these pages will be cachable.
> 
> However, should IOMMU mappings be enabled for the guest then these
> accesses become uncachable. This has a substantial negative effect on I/O
> throughput of PV devices. Arguably PV drivers should not be using BAR space to
> host the Shared Info and Grant Table pages but it is currently commonplace for
> them to do this and so this problem needs mitigation. Hence this patch makes
> sure the 'ipat' bit is set for any special page regardless of where in GFN
> space it is mapped.
> 
> NOTE: Clearly this mitigation only applies to Intel EPT. It is not obvious
>       that there is any similar mitigation possible for AMD NPT. Downstreams
>       such as Citrix XenServer have been carrying a patch similar to this for
>       several releases though.
> 
> Signed-off-by: Paul Durrant <pdurrant@amazon.com>

This is missing a hunk. I'll send v4.

  Paul

> ---
> Cc: Jan Beulich <jbeulich@suse.com>
> Cc: Andrew Cooper <andrew.cooper3@citrix.com>
> Cc: Wei Liu <wl@xen.org>
> Cc: "Roger Pau Monné" <roger.pau@citrix.com>
> 
> v3:
>  - dropping Jan's R-b
>  - cope with order > 0
> ---
>  xen/arch/x86/hvm/mtrr.c | 11 +++++++++++
>  1 file changed, 11 insertions(+)
> 
> diff --git a/xen/arch/x86/hvm/mtrr.c b/xen/arch/x86/hvm/mtrr.c
> index 511c3be1c8..26721f6ee7 100644
> --- a/xen/arch/x86/hvm/mtrr.c
> +++ b/xen/arch/x86/hvm/mtrr.c
> @@ -836,6 +836,17 @@ int epte_get_entry_emt(struct domain *d, unsigned long gfn, mfn_t mfn,
>          return MTRR_TYPE_WRBACK;
>      }
> 
> +    for ( i = 0; i < (1ul << order); i++ )
> +    {
> +        if ( is_special_page(mfn_to_page(mfn_add(mfn, i))) )
> +        {
> +            if ( order )
> +                return -1;
> +            *ipat = 1;
> +            return MTRR_TYPE_WRBACK;
> +        }
> +    }
> +
>      gmtrr_mtype = hvm_get_mem_pinned_cacheattr(d, _gfn(gfn), order);
>      if ( gmtrr_mtype >= 0 )
>      {
> --
> 2.20.1


From xen-devel-bounces@lists.xenproject.org Fri Jul 31 14:31:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 14:31:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1W4M-0005D5-4Y; Fri, 31 Jul 2020 14:31:34 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=S17i=BK=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1k1W4K-0005Cz-Ik
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 14:31:32 +0000
X-Inumbo-ID: 86ae67fa-d33a-11ea-8e4b-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 86ae67fa-d33a-11ea-8e4b-bc764e2007e4;
 Fri, 31 Jul 2020 14:31:31 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id A848BAD4A;
 Fri, 31 Jul 2020 14:31:43 +0000 (UTC)
Subject: Re: [PATCH 4/5] xen/memory: Fix acquire_resource size semantics
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <20200728113712.22966-1-andrew.cooper3@citrix.com>
 <20200728113712.22966-5-andrew.cooper3@citrix.com>
 <002b01d6664b$c7eb5f40$57c21dc0$@xen.org>
 <474ff131-83d8-deff-4e3a-32392ea092b3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <0815a476-cee1-71a8-bac4-c1feb3c518e5@suse.com>
Date: Fri, 31 Jul 2020 16:31:28 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <474ff131-83d8-deff-4e3a-32392ea092b3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: 'Hubert Jasudowicz' <hubert.jasudowicz@cert.pl>,
 'Stefano Stabellini' <sstabellini@kernel.org>, 'Julien Grall' <julien@xen.org>,
 'Wei Liu' <wl@xen.org>, paul@xen.org,
 'George Dunlap' <George.Dunlap@eu.citrix.com>,
 'Konrad Rzeszutek Wilk' <konrad.wilk@oracle.com>,
 =?UTF-8?B?J01pY2hhxYIgTGVzemN6ecWEc2tpJw==?= <michal.leszczynski@cert.pl>,
 'Ian Jackson' <ian.jackson@citrix.com>,
 'Xen-devel' <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 30.07.2020 21:46, Andrew Cooper wrote:
> On 30/07/2020 09:31, Paul Durrant wrote:
>>> From: Xen-devel <xen-devel-bounces@lists.xenproject.org> On Behalf Of Andrew Cooper
>>> Sent: 28 July 2020 12:37
>>>
>>> --- a/xen/common/grant_table.c
>>> +++ b/xen/common/grant_table.c
>>> @@ -4013,6 +4013,25 @@ static int gnttab_get_shared_frame_mfn(struct domain *d,
>>>      return 0;
>>>  }
>>>
>>> +unsigned int gnttab_resource_max_frames(struct domain *d, unsigned int id)
>>> +{
>>> +    unsigned int nr = 0;
>>> +
>>> +    /* Don't need the grant lock.  This limit is fixed at domain create time. */
>>> +    switch ( id )
>>> +    {
>>> +    case XENMEM_resource_grant_table_id_shared:
>>> +        nr = d->grant_table->max_grant_frames;
>>> +        break;
>>> +
>>> +    case XENMEM_resource_grant_table_id_status:
>>> +        nr = grant_to_status_frames(d->grant_table->max_grant_frames);
>> Two uses of d->grant_table, so perhaps define a stack variable for it?
> 
> Can do.
> 
>>  Also, should you not make sure 0 is returned in the case of a v1 table?
> 
> This was the case specifically discussed in the commit message, but
> perhaps it needs expanding.
> 
> Doing so would be buggy.
> 
> Some utility is going to query the resource size, and then try to map it
> (if it doesn't blindly know the size and/or subset it cares about already).
> 
> In between these two hypercalls from the utility, the guest can do a
> v1=>v2 or v2=>v1 switch and make the resource spontaneously appear or
> disappear.
> 
> The only case where we can know for certain whether the resource is
> available is when we're in the map hypercall.  Therefore, userspace has
> to be able to get to the map call if there is potentially a resource
> available.
> 
> The semantics of the size call are really "this resource might exist,
> and if it does, this is how large it is".

With you deriving from d->grant_table->max_grant_frames, this approach
would imply that by obtaining a mapping the grant tables will get
grown to their permitted maximum, no matter whether as much is actually
needed by the guest. If this is indeed the intention, then we could as
well set up maximum grant structures right at domain creation. Not
something I would favor, but anyway...

Jan


From xen-devel-bounces@lists.xenproject.org Fri Jul 31 14:37:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 14:37:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1W9k-0005QK-Tb; Fri, 31 Jul 2020 14:37:08 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=wy6+=BK=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1k1W9j-0005QF-Ib
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 14:37:07 +0000
X-Inumbo-ID: 4e37c76d-d33b-11ea-8e4c-bc764e2007e4
Received: from EUR05-DB8-obe.outbound.protection.outlook.com (unknown
 [40.107.20.80]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4e37c76d-d33b-11ea-8e4c-bc764e2007e4;
 Fri, 31 Jul 2020 14:37:06 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com; 
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=F4fP4MDk9m2b6laCbcsqr8wMhisUSTZRFuFt5iuwl18=;
 b=sDHyhaG4SckR2SyJLhkuXvlSkXXf7RNkOaEODfYtXDgAf61zyIEMc7I/PIgn92K0sr3iFEWS0SFvIFClePwvVJJu9WWohIOHYJNWASqfknSeRHcycCk4qlItSnQ0+OUV0ymd6dE4i9DF/LmUk4n/MVSltyBRFASma3VIAF714a0=
Received: from AM6PR01CA0065.eurprd01.prod.exchangelabs.com
 (2603:10a6:20b:e0::42) by DB8PR08MB5482.eurprd08.prod.outlook.com
 (2603:10a6:10:116::9) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3239.17; Fri, 31 Jul
 2020 14:37:03 +0000
Received: from AM5EUR03FT060.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:e0:cafe::23) by AM6PR01CA0065.outlook.office365.com
 (2603:10a6:20b:e0::42) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3239.16 via Frontend
 Transport; Fri, 31 Jul 2020 14:37:03 +0000
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org;
 dmarc=bestguesspass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT060.mail.protection.outlook.com (10.152.16.160) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3239.17 via Frontend Transport; Fri, 31 Jul 2020 14:37:03 +0000
Received: ("Tessian outbound 73b502bf693a:v62");
 Fri, 31 Jul 2020 14:37:02 +0000
X-CheckRecipientChecked: true
X-CR-MTA-CID: 9209c0734e0e4dd7
X-CR-MTA-TID: 64aa7808
Received: from cba7433004ee.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 994671D5-6954-4B08-BA14-EF4A2A9CB728.1; 
 Fri, 31 Jul 2020 14:36:57 +0000
Received: from EUR03-DB5-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id cba7433004ee.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 31 Jul 2020 14:36:57 +0000
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=ofpeGe0t87w08m7G/fzwMFHLzVF4swy85L9sHnNdyCfOL1MxQD0FP99A5Q0RVT1eyK6p9l0e1GpQz3QM3iTtg1TQ76G/tIX+B/aylgs+flkLNGpna3GKA2s3NrbNFvClPofVpDuKLUcZtZHm9HtDqnbeLbKkabuHAVuo5T3i/5gB7WUhiqfyDF7fF5B3CYM1xtF0lbgBMXiJMJAIjiNmMFJjFIEpZ3pvnXCmTXQsWjIZPj5p5ErjS7Vga+MO+peOMHxlwGt3S6H9c8RcrvWCzaU+ivWVsXGMaXJ/Uu3DTB7/nCbdcP1IHe7SsUDp2weP9+fCloAEJRQ/rgsmV0SErQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; 
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=F4fP4MDk9m2b6laCbcsqr8wMhisUSTZRFuFt5iuwl18=;
 b=e1jcC5b/KXPx5B0HwapU8jN+3/EKKEI5k5FEec5Ddrl8Jb4Z1w4s/0h8CCJ8X/pQKjSkJWzrbOJhTbAp2ePh4yEomDjRVw8Ec6VsbgSRodLyjgUNV17Kor+oG6W/R555ibce2Vs8gammBODUWs0IVLfN4DP7xnZDn73GuS4EPq07PFLtxcmnmdQY3l3mLDvoEvaRlk3kdlWq2UWlJxQ8+pO9WUEGxyZK2GUoz8r09WddMqQTOt7TUacLJRDKpUxiAN683RnqhYTP/6LJOUrJnuUiKwPEx1ARvW1HSV4bdNETX4KHnITGVDp3I8fNgj+X0z4hcDGrSYv0/Sc+tQ5VlQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com; 
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=F4fP4MDk9m2b6laCbcsqr8wMhisUSTZRFuFt5iuwl18=;
 b=sDHyhaG4SckR2SyJLhkuXvlSkXXf7RNkOaEODfYtXDgAf61zyIEMc7I/PIgn92K0sr3iFEWS0SFvIFClePwvVJJu9WWohIOHYJNWASqfknSeRHcycCk4qlItSnQ0+OUV0ymd6dE4i9DF/LmUk4n/MVSltyBRFASma3VIAF714a0=
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DB8PR08MB5323.eurprd08.prod.outlook.com (2603:10a6:10:fa::19) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3239.18; Fri, 31 Jul
 2020 14:36:56 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::7c65:30f9:4e87:f58a]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::7c65:30f9:4e87:f58a%3]) with mapi id 15.20.3239.020; Fri, 31 Jul 2020
 14:36:56 +0000
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Jan Beulich <jbeulich@suse.com>
Subject: Re: [PATCH v2] xen/arm: Convert runstate address during hypcall
Thread-Topic: [PATCH v2] xen/arm: Convert runstate address during hypcall
Thread-Index: AQHWZPcpcJdIAbXhcEqLxJ7JubPXUKkdZ+eAgAC8aQCAAMFpAIAAcn6AgAHoeQCAADumgIAAAacAgABIKQA=
Date: Fri, 31 Jul 2020 14:36:56 +0000
Message-ID: <E39531EE-0265-4387-813D-22A57CD3F67B@arm.com>
References: <4647a019c7b42d40d3c2f5b0a3685954bea7f982.1595948219.git.bertrand.marquis@arm.com>
 <8d2d7f03-450c-d50c-630b-8608c6d42bb9@suse.com>
 <FCAB700B-4617-4323-BE1E-B80DDA1806C1@arm.com>
 <1b046f2c-05c8-9276-a91e-fd55ec098bed@suse.com>
 <alpine.DEB.2.21.2007291356060.1767@sstabellini-ThinkPad-T480s>
 <1a8bbcc7-9d0c-9669-db7b-e837af279027@suse.com>
 <73c8ade5-36a3-cc13-80b6-bda89e175cbb@xen.org>
 <6066b507-f956-8e7a-89f3-b21428b66d65@suse.com>
In-Reply-To: <6066b507-f956-8e7a-89f3-b21428b66d65@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
Authentication-Results-Original: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=arm.com;
x-originating-ip: [90.126.203.125]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: c0c3a65c-2c61-483d-18cb-08d8355f30db
x-ms-traffictypediagnostic: DB8PR08MB5323:|DB8PR08MB5482:
X-Microsoft-Antispam-PRVS: <DB8PR08MB5482DF6EE199251199BA60239D4E0@DB8PR08MB5482.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:9508;OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original: uTDLqnf4S6y3uOhB7oxMWosXXyFuH8IJkZLYqECLeV2fTeDuJpIPkiht41ik31sLpr6UtxS80FhwNFMKrdmw3BJGRYed/IcrzXheAeKV++tYf/KC0ItBTgY9k4LFqI3U325p8RpUS0hdBkUAkWIVae4P5BVPf6bxUtvAGedjl5msoONQ17Uesu2bnTXlduRS376CSAT3dgHtsVPYFWaNcAch3pa7KFeGMcql5wDlYGkkIGMa/tjhh71fwxTKJharGMVOMWav5biGJia0eFovjNHb/+teA8FAoCNff5wnfatANqHmAddoVG/zE1+Jp6LQQj7LlrNzKRD+tZjsGT0C3g==
X-Forefront-Antispam-Report-Untrusted: CIP:255.255.255.255; CTRY:; LANG:en;
 SCL:1; SRV:; IPV:NLI; SFV:NSPM; H:DB7PR08MB3689.eurprd08.prod.outlook.com;
 PTR:; CAT:NONE; SFTY:;
 SFS:(4636009)(136003)(366004)(39860400002)(396003)(376002)(346002)(6486002)(54906003)(478600001)(71200400001)(66946007)(76116006)(316002)(6506007)(53546011)(91956017)(36756003)(8936002)(7416002)(86362001)(8676002)(6512007)(83380400001)(2616005)(2906002)(33656002)(5660300002)(66476007)(66556008)(66446008)(64756008)(26005)(186003)(4326008)(6916009);
 DIR:OUT; SFP:1101; 
x-ms-exchange-antispam-messagedata: WtMOLnzM2IgstopKRYf9b1ZvfuCrfodmvBguiHAv9JdQKbr2+EtXZE8Y/Z75e2KGY6FfcuIzVg6T7cDu1mC3rQ/JYjfC6Sgdr/6TBhfNxqr805BqmdcqqZFBCR+pgGI1ZzxL4V262YOQ1PAXTvUdb1mba+gOKfd4pao9ZFhj4mGmiXSqU0UI29jyi9AtinUR7mKs9v3U5V0udPGArlRztof1OFEHheBQHn3jRplPSCfjXtrfCrqQIU+AQdgZNi43UlPxCtw3WhMj/CvlgQvBJKfTDLks0nWeRmlOirRFDNMkORPaBtQYFhUy39TAuxN2E3XRfbe7xel/mZ9vAPusyp5i+TsmhTXBRWqmCRshxKfaffN3AkSUyBg9CBQNnYYbvcmoX6BmSibZzM2twB/uovkT35FkTbLbIGI1hYMmyxSfD6GXmbZLKgoW9Y3pQegKMVzBGZyvzeJBOYMVsQ1yBWiBMzVptrH8IWVA0/Dpnpo6gCzI/PBO/3DkPXagki17G22fb6m9vt5Ba3ruHi8+W2DtQGFQEPZcg6Xo5ZDUVjCmWGlCYLqZYNh1UOej/Et0VhPoflNEVPmWAmcwKnjRez3TnL/Q/aVOJU7WAP2L3F7SSlFilfGceDxBvUfOQlCxbu/vpxMluQXi8N8fTgfTwg==
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="iso-8859-1"
Content-ID: <63D1AC863BF10C4F87D27D91394B0B7A@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB8PR08MB5323
Original-Authentication-Results: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped: AM5EUR03FT060.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs: a1128fd1-5154-407a-a558-08d8355f2d2c
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: TIOrp19BxsTfPDJZ43ACT9GWe/PaAt65EvoblZINoKW5j6WKAFQztwLoJZCChPVWhCL8iU0jgv4kO7y2G1jnMB74zt1+Sy2EuRzS0QU5V7BkFBMsgn9kBPxidrQ0ZSviTcdy69A7G7b70QLVpNe5j76uSuE6fUh2AidPK5e6aeMq7qPFl57Nwqmj74vHY+3kgHWvOXTBN7cdrdIW4RlV5HLy3xEZ+7hl5SiS+ZEBUsTV+I7X4U0cS4PVwBQ+qbaYWlEpjHXM8RdAlqXjMRzWDiF/mb1b1eBBB2HsZhdaxaYaujB+gnaukOM8BMAiTYdA0IHe3ZBnXGMU7e4wZiQm08o95gAw0PpBc5u2F9Ru21EPjoJeM0hd43jaN3YByMRRLl+gDDSYWZ6etAjwg5XLCA==
X-Forefront-Antispam-Report: CIP:63.35.35.123; CTRY:IE; LANG:en; SCL:1; SRV:;
 IPV:CAL; SFV:NSPM; H:64aa7808-outbound-1.mta.getcheckrecipient.com;
 PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com; CAT:NONE; SFTY:;
 SFS:(4636009)(136003)(376002)(396003)(39860400002)(346002)(46966005)(6512007)(53546011)(4326008)(2906002)(83380400001)(54906003)(6506007)(5660300002)(6486002)(82310400002)(26005)(82740400003)(8676002)(6862004)(70206006)(478600001)(356005)(47076004)(70586007)(316002)(336012)(36906005)(81166007)(186003)(2616005)(107886003)(36756003)(86362001)(33656002)(8936002);
 DIR:OUT; SFP:1101; 
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 31 Jul 2020 14:37:03.0125 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: c0c3a65c-2c61-483d-18cb-08d8355f30db
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d; Ip=[63.35.35.123];
 Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource: AM5EUR03FT060.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB8PR08MB5482
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Xen-devel <xen-devel@lists.xenproject.org>, nd <nd@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?iso-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>



> On 31 Jul 2020, at 12:18, Jan Beulich <jbeulich@suse.com> wrote:
> 
> On 31.07.2020 12:12, Julien Grall wrote:
>> On 31/07/2020 07:39, Jan Beulich wrote:
>>> We're fixing other issues without breaking the ABI. Where's the
>>> problem of backporting the kernel side change (which I anticipate
>>> to not be overly involved)?
>> This means you can't take advantage of the runstate on existing Linux
>> without any modification.
>> 
>>> If the plan remains to be to make an ABI breaking change,
>> 
>> From a theoretical PoV, this is an ABI breakage. However, I fail to see
>> how the restrictions added would affect OSes at least on Arm.
> 
> "OSes" covering what? Just Linux?
> 
>> In particular, you can't change the VA -> PA on Arm without going
>> through an invalid mapping. So I wouldn't expect this to happen for the
>> runstate.
>> 
>> The only part that *may* be an issue is if the guest is registering the
>> runstate with an initially invalid VA. Although, I have yet to see that
>> in practice. Maybe you know?
> 
> I'm unaware of any such use, but this means close to nothing.
> 
>>> then I
>>> think this will need an explicit vote.
>> 
>> I was under the impression that the two Arm maintainers (Stefano and I)
>> already agreed with the approach here. Therefore, given the ABI breakage
>> is only affecting Arm, why would we need a vote?
> 
> The problem here is of conceptual nature: You're planning to
> make the behavior of a common hypercall diverge between
> architectures, and in a retroactive fashion. Imo that's nothing
> we should do even for new hypercalls, if _at all_ avoidable. If
> we allow this here, we'll have a precedent that people later
> may (and based on my experience will, sooner or later) reference,
> to get their own change justified.

After a discussion with Jan, he is proposing to have a guest config setting to
turn on or off the translation of the address during the hypercall, and to add a
global Xen command line parameter to set the global default behaviour.
Once this is done on Arm it could also be done on x86; the current behaviour
would be kept by default but could be modified by configuration.

@Jan: please correct me if I said something wrong
@others: what is your view on this solution?

Bertrand



From xen-devel-bounces@lists.xenproject.org Fri Jul 31 14:41:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 14:41:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1WDl-0006FZ-El; Fri, 31 Jul 2020 14:41:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=S/xS=BK=xen.org=paul@srs-us1.protection.inumbo.net>)
 id 1k1WDj-0006FT-SX
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 14:41:15 +0000
X-Inumbo-ID: e2f15fc6-d33b-11ea-8e4e-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e2f15fc6-d33b-11ea-8e4e-bc764e2007e4;
 Fri, 31 Jul 2020 14:41:15 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:MIME-Version:
 Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:Content-ID:
 Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc
 :Resent-Message-ID:In-Reply-To:References:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=BwXkwyFi+LCrnTMuiLnJqRZtbRZNATdlvmHbygVdJHM=; b=YauQFl93uaE/o7bsM8OC9dGdFz
 f0VeM2nHwppFy0Q/xKR2ThE6kkGhpWQPR4tLWLBt515pL2nYzTajCQkHDKHSZQ87oLgepwtVsaDsa
 K2Mt+NOQ8v67FEu0dxUeYWmga0ZYAAsMSBerb05ftqbyKfH2Z0YYHzS/F1NF7qkKV3ls=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1k1WDi-0003vH-U1; Fri, 31 Jul 2020 14:41:14 +0000
Received: from host86-143-223-30.range86-143.btcentralplus.com
 ([86.143.223.30] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1k1WDi-0006lb-LI; Fri, 31 Jul 2020 14:41:14 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v4 0/2] epte_get_entry_emt() modifications
Date: Fri, 31 Jul 2020 15:41:10 +0100
Message-Id: <20200731144112.12516-1-paul@xen.org>
X-Mailer: git-send-email 2.20.1
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Paul Durrant <pdurrant@amazon.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Paul Durrant <pdurrant@amazon.com>

Paul Durrant (2):
  x86/hvm: set 'ipat' in EPT for special pages
  x86/hvm: simplify 'mmio_direct' check in epte_get_entry_emt()

 xen/arch/x86/hvm/mtrr.c | 27 ++++++++++++++++-----------
 1 file changed, 16 insertions(+), 11 deletions(-)

-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri Jul 31 14:41:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 14:41:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1WDm-0006Fu-Mp; Fri, 31 Jul 2020 14:41:18 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=S/xS=BK=xen.org=paul@srs-us1.protection.inumbo.net>)
 id 1k1WDl-0006FY-Fg
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 14:41:17 +0000
X-Inumbo-ID: e3a77dd8-d33b-11ea-abc5-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e3a77dd8-d33b-11ea-abc5-12813bfff9fa;
 Fri, 31 Jul 2020 14:41:16 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:MIME-Version:
 References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=F+vgTGWDObAevSQByeyw/+lv31YkNMVYZOrPyWpXIeA=; b=ZLpOEHJ+BtbCrYYkYntQsixiu4
 SyzK1qEal3meY6t1/Ol4ddHkSsfrkDvhR5a2VCdo8X3SexoGSV8bE81CQQ2jfqQD7Lj19y7By291x
 nLz8DVke/6iicZI8wGNKwoX+fs7f0eVjA4oPB3uM8C+wGcreqjIOvx+DNX070ZCPR3OQ=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1k1WDj-0003vN-S1; Fri, 31 Jul 2020 14:41:15 +0000
Received: from host86-143-223-30.range86-143.btcentralplus.com
 ([86.143.223.30] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1k1WDj-0006lb-Io; Fri, 31 Jul 2020 14:41:15 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v4 1/2] x86/hvm: set 'ipat' in EPT for special pages
Date: Fri, 31 Jul 2020 15:41:11 +0100
Message-Id: <20200731144112.12516-2-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200731144112.12516-1-paul@xen.org>
References: <20200731144112.12516-1-paul@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 Paul Durrant <pdurrant@amazon.com>, Wei Liu <wl@xen.org>,
 Jan Beulich <jbeulich@suse.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Paul Durrant <pdurrant@amazon.com>

All non-MMIO ranges (i.e. those not mapping real device MMIO regions) that
map valid MFNs are normally marked MTRR_TYPE_WRBACK and 'ipat' is set. Hence
when PV drivers running in a guest populate the BAR space of the Xen Platform
PCI Device with pages such as the Shared Info page or Grant Table pages,
accesses to these pages will be cachable.

However, should IOMMU mappings be enabled for the guest then these
accesses become uncachable. This has a substantial negative effect on I/O
throughput of PV devices. Arguably PV drivers should not be using BAR space to
host the Shared Info and Grant Table pages, but it is currently commonplace for
them to do this and so the problem needs mitigation. Hence this patch makes
sure the 'ipat' bit is set for any special page, regardless of where in GFN
space it is mapped.

NOTE: Clearly this mitigation only applies to Intel EPT. It is not obvious
      that there is any similar mitigation possible for AMD NPT. Downstreams
      such as Citrix XenServer have been carrying a patch similar to this for
      several releases though.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Wei Liu <wl@xen.org>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>

v3:
 - Dropping Jan's R-b
 - Cope with order > 0

v4:
 - Add missing hunk
---
 xen/arch/x86/hvm/mtrr.c | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/xen/arch/x86/hvm/mtrr.c b/xen/arch/x86/hvm/mtrr.c
index 511c3be1c8..2bd64e8025 100644
--- a/xen/arch/x86/hvm/mtrr.c
+++ b/xen/arch/x86/hvm/mtrr.c
@@ -794,6 +794,7 @@ int epte_get_entry_emt(struct domain *d, unsigned long gfn, mfn_t mfn,
 {
     int gmtrr_mtype, hmtrr_mtype;
     struct vcpu *v = current;
+    unsigned long i;
 
     *ipat = 0;
 
@@ -836,6 +837,17 @@ int epte_get_entry_emt(struct domain *d, unsigned long gfn, mfn_t mfn,
         return MTRR_TYPE_WRBACK;
     }
 
+    for ( i = 0; i < (1ul << order); i++ )
+    {
+        if ( is_special_page(mfn_to_page(mfn_add(mfn, i))) )
+        {
+            if ( order )
+                return -1;
+            *ipat = 1;
+            return MTRR_TYPE_WRBACK;
+        }
+    }
+
     gmtrr_mtype = hvm_get_mem_pinned_cacheattr(d, _gfn(gfn), order);
     if ( gmtrr_mtype >= 0 )
     {
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri Jul 31 14:41:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 14:41:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1WDs-0006HE-0N; Fri, 31 Jul 2020 14:41:24 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=S/xS=BK=xen.org=paul@srs-us1.protection.inumbo.net>)
 id 1k1WDq-0006FY-EC
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 14:41:22 +0000
X-Inumbo-ID: e3786e27-d33b-11ea-abc5-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e3786e27-d33b-11ea-abc5-12813bfff9fa;
 Fri, 31 Jul 2020 14:41:17 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:MIME-Version:
 References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=sRZ/84+z2kLbSeOy0XdjBV2qQku1iCduFJndgVP5MXY=; b=0LxHB8JSYw88f3jzZO6QHD4VOC
 qZWFFmlZuEr7nuZvOsT70QpeqFCcDq1bN1MMxZOPGh9nJNF+PIcO8sKA/HoPPux2OgIL+MY4eGv3F
 MHMnmTPGkAH62ccWYICnnETuVIN/oLCzNDJbvYlX5sMszXzxVAgU6kkGdIBE7nyFGRFQ=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1k1WDk-0003vR-Ns; Fri, 31 Jul 2020 14:41:16 +0000
Received: from host86-143-223-30.range86-143.btcentralplus.com
 ([86.143.223.30] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1k1WDk-0006lb-Ga; Fri, 31 Jul 2020 14:41:16 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v4 2/2] x86/hvm: simplify 'mmio_direct' check in
 epte_get_entry_emt()
Date: Fri, 31 Jul 2020 15:41:12 +0100
Message-Id: <20200731144112.12516-3-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200731144112.12516-1-paul@xen.org>
References: <20200731144112.12516-1-paul@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 Paul Durrant <pdurrant@amazon.com>, Wei Liu <wl@xen.org>,
 Jan Beulich <jbeulich@suse.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Paul Durrant <pdurrant@amazon.com>

Re-factor the code to take advantage of the fact that the APIC access page is
a 'special' page. The VMX code is left alone, and hence the APIC access page is
still inserted into the P2M with type p2m_mmio_direct: it is not obvious that
there is another suitable type to use, and the necessary re-ordering in
epte_get_entry_emt() is straightforward.

Suggested-by: Jan Beulich <jbeulich@suse.com>
Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Wei Liu <wl@xen.org>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>

v2:
 - New in v2

v3:
 - Re-base
 - Expand commit comment
---
 xen/arch/x86/hvm/mtrr.c | 15 ++++-----------
 1 file changed, 4 insertions(+), 11 deletions(-)

diff --git a/xen/arch/x86/hvm/mtrr.c b/xen/arch/x86/hvm/mtrr.c
index 2bd64e8025..fb051d59c3 100644
--- a/xen/arch/x86/hvm/mtrr.c
+++ b/xen/arch/x86/hvm/mtrr.c
@@ -815,23 +815,13 @@ int epte_get_entry_emt(struct domain *d, unsigned long gfn, mfn_t mfn,
         return -1;
     }
 
-    if ( direct_mmio )
-    {
-        if ( (mfn_x(mfn) ^ mfn_x(d->arch.hvm.vmx.apic_access_mfn)) >> order )
-            return MTRR_TYPE_UNCACHABLE;
-        if ( order )
-            return -1;
-        *ipat = 1;
-        return MTRR_TYPE_WRBACK;
-    }
-
     if ( !mfn_valid(mfn) )
     {
         *ipat = 1;
         return MTRR_TYPE_UNCACHABLE;
     }
 
-    if ( !is_iommu_enabled(d) && !cache_flush_permitted(d) )
+    if ( !direct_mmio && !is_iommu_enabled(d) && !cache_flush_permitted(d) )
     {
         *ipat = 1;
         return MTRR_TYPE_WRBACK;
@@ -848,6 +838,9 @@ int epte_get_entry_emt(struct domain *d, unsigned long gfn, mfn_t mfn,
         }
     }
 
+    if ( direct_mmio )
+        return MTRR_TYPE_UNCACHABLE;
+
     gmtrr_mtype = hvm_get_mem_pinned_cacheattr(d, _gfn(gfn), order);
     if ( gmtrr_mtype >= 0 )
     {
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri Jul 31 14:44:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 14:44:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1WH9-0006Zi-GY; Fri, 31 Jul 2020 14:44:47 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=S17i=BK=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1k1WH8-0006Zc-Dc
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 14:44:46 +0000
X-Inumbo-ID: 6019293e-d33c-11ea-abc5-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6019293e-d33c-11ea-abc5-12813bfff9fa;
 Fri, 31 Jul 2020 14:44:45 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 00986AB3D;
 Fri, 31 Jul 2020 14:44:57 +0000 (UTC)
Subject: Re: [PATCH 4/5] xen/memory: Fix acquire_resource size semantics
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <20200728113712.22966-1-andrew.cooper3@citrix.com>
 <20200728113712.22966-5-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <75a7761f-45c6-5642-ea46-1b92072914b1@suse.com>
Date: Fri, 31 Jul 2020 16:44:42 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20200728113712.22966-5-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Hubert Jasudowicz <hubert.jasudowicz@cert.pl>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Paul Durrant <paul@xen.org>,
 =?UTF-8?Q?Micha=c5=82_Leszczy=c5=84ski?= <michal.leszczynski@cert.pl>,
 Ian Jackson <ian.jackson@citrix.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 28.07.2020 13:37, Andrew Cooper wrote:
> @@ -1026,19 +1047,6 @@ static int acquire_resource(
>      if ( xmar.pad != 0 )
>          return -EINVAL;
>  
> -    if ( guest_handle_is_null(xmar.frame_list) )
> -    {
> -        if ( xmar.nr_frames )
> -            return -EINVAL;
> -
> -        xmar.nr_frames = ARRAY_SIZE(mfn_list);
> -
> -        if ( __copy_field_to_guest(arg, &xmar, nr_frames) )
> -            return -EFAULT;
> -
> -        return 0;
> -    }
> -
>      if ( xmar.nr_frames > ARRAY_SIZE(mfn_list) )
>          return -E2BIG;

While arguably minor, the error code in the null-handle case
would imo better be the same, no matter how big xmar.nr_frames
is.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Jul 31 14:50:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 14:50:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1WMq-0007Ok-6b; Fri, 31 Jul 2020 14:50:40 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=S17i=BK=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1k1WMo-0007Of-Nq
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 14:50:38 +0000
X-Inumbo-ID: 32105fe8-d33d-11ea-abc9-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 32105fe8-d33d-11ea-abc9-12813bfff9fa;
 Fri, 31 Jul 2020 14:50:37 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 439F5AB3D;
 Fri, 31 Jul 2020 14:50:50 +0000 (UTC)
Subject: Re: [PATCH v4 0/2] epte_get_entry_emt() modifications
To: Paul Durrant <paul@xen.org>
References: <20200731144112.12516-1-paul@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <bcf6deeb-e451-8929-e034-45b41ab1500a@suse.com>
Date: Fri, 31 Jul 2020 16:50:35 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20200731144112.12516-1-paul@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, Paul Durrant <pdurrant@amazon.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 31.07.2020 16:41, Paul Durrant wrote:
> From: Paul Durrant <pdurrant@amazon.com>
> 
> Paul Durrant (2):
>   x86/hvm: set 'ipat' in EPT for special pages
>   x86/hvm: simplify 'mmio_direct' check in epte_get_entry_emt()

Reviewed-by: Jan Beulich <jbeulich@suse.com>


From xen-devel-bounces@lists.xenproject.org Fri Jul 31 14:53:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 14:53:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1WPu-0007Zs-Qx; Fri, 31 Jul 2020 14:53:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=oG5j=BK=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1k1WPt-0007Zn-8h
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 14:53:49 +0000
X-Inumbo-ID: a380f05c-d33d-11ea-8e51-bc764e2007e4
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a380f05c-d33d-11ea-8e51-bc764e2007e4;
 Fri, 31 Jul 2020 14:53:48 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1596207228;
 h=subject:to:cc:references:from:message-id:date:
 mime-version:in-reply-to:content-transfer-encoding;
 bh=mAMItb8aga74v8Fv2WlGk8PQj935jkqVfiNNiANhB4k=;
 b=brbW3z3PLqdE5U6DM7Th5COw1HHWftFm4nMHPBU/nAPAfezvrBlEauRJ
 5JkxBtB3STesZqO5d6/V4YxtuuZt4o+TZnKGEMP70KHP9x3So/41Mc5eT
 1Tzl0mEPpusk+rSPZmg6fnkSGvZJDeep+3bAvsCSiQ8vBC0Sj853vWZ6U A=;
Authentication-Results: esa4.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: 6uSnCPgpnA/PDsCuMV8EmK9MVc+NGDfnY8CUcSl1E59f0Llt10PDCUFSqBClS+slO59mmUamwg
 XWKPFR1wtLXBrHwYkBTRmd/qkeIz/2zrRb1IyVm6JAES/xeJTOt6NSP4k9kID5D+o0hkXVItPc
 1Z7U5Rlt/WwPkHpDdb8W3hDUCWfbZYOIXRgIc4+M77ZlxEKWULjNqncTdZqUfZYdVgBBDH0Fbj
 y3GXQWNgw1ncw9AvQhH5j34mDMsoEJBP4gYwPyur7sCSIPk07EFmVnU8dsBa5cLzr7k84Jz4VN
 ptI=
X-SBRS: 3.7
X-MesageID: 24511857
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,418,1589256000"; d="scan'208";a="24511857"
Subject: Re: [PATCH 4/5] xen/memory: Fix acquire_resource size semantics
To: Jan Beulich <jbeulich@suse.com>
References: <20200728113712.22966-1-andrew.cooper3@citrix.com>
 <20200728113712.22966-5-andrew.cooper3@citrix.com>
 <75a7761f-45c6-5642-ea46-1b92072914b1@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <bbe456ce-9fcb-9934-6526-9e968c2ea24e@citrix.com>
Date: Fri, 31 Jul 2020 15:53:43 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <75a7761f-45c6-5642-ea46-1b92072914b1@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Hubert Jasudowicz <hubert.jasudowicz@cert.pl>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Paul Durrant <paul@xen.org>,
 =?UTF-8?Q?Micha=c5=82_Leszczy=c5=84ski?= <michal.leszczynski@cert.pl>,
 Ian Jackson <ian.jackson@citrix.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 31/07/2020 15:44, Jan Beulich wrote:
> On 28.07.2020 13:37, Andrew Cooper wrote:
>> @@ -1026,19 +1047,6 @@ static int acquire_resource(
>>      if ( xmar.pad != 0 )
>>          return -EINVAL;
>>  
>> -    if ( guest_handle_is_null(xmar.frame_list) )
>> -    {
>> -        if ( xmar.nr_frames )
>> -            return -EINVAL;
>> -
>> -        xmar.nr_frames = ARRAY_SIZE(mfn_list);
>> -
>> -        if ( __copy_field_to_guest(arg, &xmar, nr_frames) )
>> -            return -EFAULT;
>> -
>> -        return 0;
>> -    }
>> -
>>      if ( xmar.nr_frames > ARRAY_SIZE(mfn_list) )
>>          return -E2BIG;
> While arguably minor, the error code in the null-handle case
> would imo better be the same, no matter how big xmar.nr_frames
> is.

This clause doesn't survive the fixes to batching.

Given how broken this infrastructure is, I'm not concerned with
transient differences in error codes for users which will ultimately
fail anyway.

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri Jul 31 14:54:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 14:54:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1WQg-0007dW-4Z; Fri, 31 Jul 2020 14:54:38 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=S17i=BK=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1k1WQe-0007dM-L9
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 14:54:36 +0000
X-Inumbo-ID: bfd90942-d33d-11ea-abc9-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id bfd90942-d33d-11ea-abc9-12813bfff9fa;
 Fri, 31 Jul 2020 14:54:35 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 33658AE64;
 Fri, 31 Jul 2020 14:54:48 +0000 (UTC)
Subject: Ping: [PATCH v2] x86emul: avoid assembler warning about .type not
 taking effect in test harness
From: Jan Beulich <jbeulich@suse.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <42875d48-10e4-cc88-70ac-8979fea2493c@suse.com>
Message-ID: <bf92faf4-b323-d4be-ca31-5e065c576b9a@suse.com>
Date: Fri, 31 Jul 2020 16:54:34 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <42875d48-10e4-cc88-70ac-8979fea2493c@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 14.07.2020 10:06, Jan Beulich wrote:
> gcc re-orders top level blocks by default when optimizing. This
> re-ordering results in all our .type directives to get emitted to the
> assembly file first, followed by gcc's. The assembler warns about
> attempts to change the type of a symbol when it was already set (and
> when there's no intervening setting to "notype").
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> v2: Refine description to no longer claim a gcc change to be the reason.
> 
> --- a/tools/tests/x86_emulator/Makefile
> +++ b/tools/tests/x86_emulator/Makefile
> @@ -295,4 +295,9 @@ x86-emulate.o cpuid.o test_x86_emulator.
>  x86-emulate.o: x86_emulate/x86_emulate.c
>  x86-emulate.o: HOSTCFLAGS += -D__XEN_TOOLS__
>  
> +# In order for our custom .type assembler directives to reliably land after
> +# gcc's, we need to keep it from re-ordering top-level constructs.
> +$(call cc-option-add,HOSTCFLAGS-toplevel,HOSTCC,-fno-toplevel-reorder)
> +test_x86_emulator.o: HOSTCFLAGS += $(HOSTCFLAGS-toplevel)
> +
>  test_x86_emulator.o: $(addsuffix .h,$(TESTCASES)) $(addsuffix -opmask.h,$(OPMASK))
> 



From xen-devel-bounces@lists.xenproject.org Fri Jul 31 14:55:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 14:55:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1WRE-0007ib-EL; Fri, 31 Jul 2020 14:55:12 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=S17i=BK=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1k1WRE-0007iU-0f
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 14:55:12 +0000
X-Inumbo-ID: d500a2c6-d33d-11ea-8e51-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d500a2c6-d33d-11ea-8e51-bc764e2007e4;
 Fri, 31 Jul 2020 14:55:11 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id A977EAEBE;
 Fri, 31 Jul 2020 14:55:23 +0000 (UTC)
Subject: Ping: [PATCH] x86/CPUID: move some static masks into .init
From: Jan Beulich <jbeulich@suse.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <2e3dfe1a-bc8b-6774-ef7e-efb565343c52@suse.com>
Message-ID: <ed96af1b-62ba-a7ca-913f-74e454ca9e2f@suse.com>
Date: Fri, 31 Jul 2020 16:55:09 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <2e3dfe1a-bc8b-6774-ef7e-efb565343c52@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 15.07.2020 09:45, Jan Beulich wrote:
> Except for hvm_shadow_max_featuremask and deep_features they're
> referenced by __init functions only.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> --- a/xen/arch/x86/cpuid.c
> +++ b/xen/arch/x86/cpuid.c
> @@ -16,12 +16,15 @@
>  const uint32_t known_features[] = INIT_KNOWN_FEATURES;
>  const uint32_t special_features[] = INIT_SPECIAL_FEATURES;
>  
> -static const uint32_t pv_max_featuremask[] = INIT_PV_MAX_FEATURES;
> +static const uint32_t __initconst pv_max_featuremask[] = INIT_PV_MAX_FEATURES;
>  static const uint32_t hvm_shadow_max_featuremask[] = INIT_HVM_SHADOW_MAX_FEATURES;
> -static const uint32_t hvm_hap_max_featuremask[] = INIT_HVM_HAP_MAX_FEATURES;
> -static const uint32_t pv_def_featuremask[] = INIT_PV_DEF_FEATURES;
> -static const uint32_t hvm_shadow_def_featuremask[] = INIT_HVM_SHADOW_DEF_FEATURES;
> -static const uint32_t hvm_hap_def_featuremask[] = INIT_HVM_HAP_DEF_FEATURES;
> +static const uint32_t __initconst hvm_hap_max_featuremask[] =
> +    INIT_HVM_HAP_MAX_FEATURES;
> +static const uint32_t __initconst pv_def_featuremask[] = INIT_PV_DEF_FEATURES;
> +static const uint32_t __initconst hvm_shadow_def_featuremask[] =
> +    INIT_HVM_SHADOW_DEF_FEATURES;
> +static const uint32_t __initconst hvm_hap_def_featuremask[] =
> +    INIT_HVM_HAP_DEF_FEATURES;
>  static const uint32_t deep_features[] = INIT_DEEP_FEATURES;
>  
>  static int __init parse_xen_cpuid(const char *s)
> 



From xen-devel-bounces@lists.xenproject.org Fri Jul 31 14:55:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 14:55:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1WRo-0007nf-Op; Fri, 31 Jul 2020 14:55:48 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=RrDm=BK=citrix.com=ian.jackson@srs-us1.protection.inumbo.net>)
 id 1k1WRn-0007nK-4b
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 14:55:47 +0000
X-Inumbo-ID: e9cc33dc-d33d-11ea-abc9-12813bfff9fa
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e9cc33dc-d33d-11ea-abc9-12813bfff9fa;
 Fri, 31 Jul 2020 14:55:46 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1596207345;
 h=from:mime-version:content-transfer-encoding:message-id:
 date:to:cc:subject:in-reply-to:references;
 bh=aHHPVD0dLbX3i8fEpJSmc/8YKtmbngH0igI0RVS0cZw=;
 b=TBN/KxTWdkZibVpIPT7T4DzWKirYOyxpzrt9tJsq8bz0ZixVt+4jW2U9
 DSVFEqoUI7Ow9YXZOCi56FTybJBXHwlyd6OWQrzAB37Z2Zt+MHBFfm885
 WMyVmH4OvtPVFa8HzmhDhl7Uz6s9E1NL2ZPLkHv1Ehz+isAfNc1f9pYtI k=;
Authentication-Results: esa6.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: gpR/ISQ8ds6LbISq4cCnf58nJBgwEhMtH8axr28zq/D7hbUqsHSLA6pcCajURCbg6K+dEcQQEN
 wB8WdN8WOq+QW1yqUXqN37SkXBc50Nm7mdrYgnFXF3f75p54pyHD+2hFlJVkf+l87CesQYP7TL
 ugGJDbuGmy5cWLqyP7GMMeuv0q8MKuFKrAVZa8zV2bmZohhJaGy4Mdj+bEkHOJRhnaA83uN0AU
 tIT7Ar4ZuacdWUYZ+r4bNUnK1YzEaUaEYfOUynZZySCt+9jPP+AbP1XgkGHkVdG9hMUMjnceAi
 cOs=
X-SBRS: 3.7
X-MesageID: 23957421
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,418,1589256000"; d="scan'208";a="23957421"
From: Ian Jackson <ian.jackson@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Message-ID: <24356.12524.794794.651517@mariner.uk.xensource.com>
Date: Fri, 31 Jul 2020 15:55:40 +0100
To: George Dunlap <George.Dunlap@citrix.com>
Subject: Re: [OSSTEST PATCH v2 07/41] schema: Provide indices for
 sg-report-flight
In-Reply-To: <05461545-D39A-4B98-BC27-3560C367FE25@citrix.com>
References: <20200731113820.5765-1-ian.jackson@eu.citrix.com>
 <20200731113820.5765-8-ian.jackson@eu.citrix.com>
 <05461545-D39A-4B98-BC27-3560C367FE25@citrix.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

George Dunlap writes ("Re: [OSSTEST PATCH v2 07/41] schema: Provide indices for sg-report-flight"):
> 
> 
> > On Jul 31, 2020, at 12:37 PM, Ian Jackson <ian.jackson@eu.citrix.com> wrote:
> > 
> > These indexes allow very fast lookup of "relevant" flights eg when
> > trying to justify failures.
> > 
> > In my ad-hoc test case, these indices (along with the subsequent
> > changes to sg-report-flight and Executive.pm) reduce the runtime of
> > sg-report-flight from 2-3ks (unacceptably long!) to as little as
> > 5-7 seconds - a speedup of about 500x.
> > 
> > (Getting the database snapshot may take a while first, but deploying
> > this code should help with that too by reducing long-running
> > transactions.  Quoted perf timings are from snapshot acquisition.)
> > 
> > Without these new indexes there may be a performance change from the
> > query changes.  I haven't benchmarked this so I am setting the schema
> > updates to be Preparatory/Needed (ie, "Schema first" as
> > schema/README.updates has it), to say that the index should be created
> > before the new code is deployed.
> > 
> > Testing: I have tested this series by creating experimental indices
> > "trial_..." in the actual production instance.  (Transactional DDL was
> > very helpful with this.)  I have verified with \d that schema update
> > instructions in this commit generate indexes which are equivalent to
> > the trial indices.
> > 
> > Deployment: After these schema updates are applied, the trial indices
> > are redundant duplicates and should be deleted.
...
> 
> I have no idea if building an index on a LIKE is a good idea or not, but it certainly seems to be useful, so:
> 
> Reviewed-by: George Dunlap <george.dunlap@citrix.com>

Thanks.

This is a thing called a "partial index", where the index only covers
some subset of the rows.  The subset is determined by a condition on
the row contents.

Such an index can be a lot smaller than an index on the whole table
and also avoids slowing down updates that don't match the index
condition.

The idea is that when the query contains a condition that matches the
index condition, the query planner can use this small on-topic index
instead of wading through something large and irrelevant.

The query planner is not always very bright about what conditions are
subsets of what other conditions, and it runs without seeing the
contents of bind variables.  So with LIKE, for example, it's generally
necessary to precisely replicate the index condition in the queries.
That's why some of the queries in this series have things like this:

              AND r$ri.name LIKE 'built\_revision\_%'
              AND r$ri.name = ?

where the Perl code passes in 'built_revision_something'.
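To make this concrete, here is a small self-contained sketch using SQLite, whose partial indexes behave much like PostgreSQL's.  The table and column names below are illustrative stand-ins, not the real osstest schema:

```python
import sqlite3

# Illustrative schema loosely inspired by the runvars discussed in the
# mail; names here are made up for the sketch.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE runvars (flight INTEGER, job TEXT, name TEXT, val TEXT)")

# A partial index: only rows whose name matches the LIKE pattern are
# indexed, so the index stays small and updates to unrelated rows do
# not have to maintain it.
db.execute(r"""
    CREATE INDEX runvars_built_revision
        ON runvars (flight, name)
        WHERE name LIKE 'built\_revision\_%' ESCAPE '\'
""")

db.executemany("INSERT INTO runvars VALUES (?, ?, ?, ?)",
               [(1, "build-amd64", "built_revision_xen", "abc123"),
                (1, "build-amd64", "toolstack", "xl")])

# For the planner to consider the partial index, the query repeats the
# index condition verbatim alongside the bound-variable equality, just
# as the queries in the series do.
rows = db.execute(r"""
    SELECT flight, val FROM runvars
     WHERE name LIKE 'built\_revision\_%' ESCAPE '\'
       AND name = ?
""", ("built_revision_xen",)).fetchall()
print(rows)  # [(1, 'abc123')]
```

The duplicated LIKE clause looks redundant next to the equality test, but it is what lets the planner prove the query's rows are a subset of the rows the partial index covers.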

I hope this explanation was interesting :-).

Ian.


From xen-devel-bounces@lists.xenproject.org Fri Jul 31 14:58:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 14:58:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1WUW-00081I-8N; Fri, 31 Jul 2020 14:58:36 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=S17i=BK=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1k1WUU-00081A-QJ
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 14:58:34 +0000
X-Inumbo-ID: 4dbf3d12-d33e-11ea-8e51-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4dbf3d12-d33e-11ea-8e51-bc764e2007e4;
 Fri, 31 Jul 2020 14:58:33 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 37517AD76;
 Fri, 31 Jul 2020 14:58:46 +0000 (UTC)
Subject: Ping: [PATCH 3/5] x86/PV: drop a few misleading
 paging_mode_refcounts() checks
From: Jan Beulich <jbeulich@suse.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <a4dc8db4-0388-a922-838e-42c6f4635639@suse.com>
 <9f8d0c4d-dec2-0175-09df-51d5e11c88e1@suse.com>
Message-ID: <bc2c4ec4-8703-c7a7-76b6-b79e55bca49e@suse.com>
Date: Fri, 31 Jul 2020 16:58:30 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <9f8d0c4d-dec2-0175-09df-51d5e11c88e1@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Tim Deegan <tim@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 15.07.2020 11:59, Jan Beulich wrote:
> The filling and cleaning up of v->arch.guest_table in new_guest_cr3()
> was apparently inconsistent so far: There was a type ref acquired
> unconditionally for the new top level page table, but the dropping of
> the old type ref was conditional upon !paging_mode_refcounts(). Mirror
> this also to arch_set_info_guest().
> 
> Also move new_guest_cr3()'s #ifdef to around the function - both callers
> now get built only when CONFIG_PV, i.e. no need to retain a stub.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

While I've got an ack from Tim, I think I need either an ack from
Andrew or someone's R-b in order to commit this.

Thanks, Jan

> --- a/xen/arch/x86/domain.c
> +++ b/xen/arch/x86/domain.c
> @@ -1122,8 +1122,6 @@ int arch_set_info_guest(
>  
>      if ( !cr3_page )
>          rc = -EINVAL;
> -    else if ( paging_mode_refcounts(d) )
> -        /* nothing */;
>      else if ( cr3_page == v->arch.old_guest_table )
>      {
>          v->arch.old_guest_table = NULL;
> @@ -1144,8 +1142,7 @@ int arch_set_info_guest(
>          case -ERESTART:
>              break;
>          case 0:
> -            if ( !compat && !VM_ASSIST(d, m2p_strict) &&
> -                 !paging_mode_refcounts(d) )
> +            if ( !compat && !VM_ASSIST(d, m2p_strict) )
>                  fill_ro_mpt(cr3_mfn);
>              break;
>          default:
> @@ -1166,7 +1163,7 @@ int arch_set_info_guest(
>  
>              if ( !cr3_page )
>                  rc = -EINVAL;
> -            else if ( !paging_mode_refcounts(d) )
> +            else
>              {
>                  rc = get_page_type_preemptible(cr3_page, PGT_root_page_table);
>                  switch ( rc )
> --- a/xen/arch/x86/mm.c
> +++ b/xen/arch/x86/mm.c
> @@ -3149,9 +3149,9 @@ int vcpu_destroy_pagetables(struct vcpu
>      return rc;
>  }
>  
> +#ifdef CONFIG_PV
>  int new_guest_cr3(mfn_t mfn)
>  {
> -#ifdef CONFIG_PV
>      struct vcpu *curr = current;
>      struct domain *d = curr->domain;
>      int rc;
> @@ -3220,7 +3220,7 @@ int new_guest_cr3(mfn_t mfn)
>  
>      pv_destroy_ldt(curr); /* Unconditional TLB flush later. */
>  
> -    if ( !VM_ASSIST(d, m2p_strict) && !paging_mode_refcounts(d) )
> +    if ( !VM_ASSIST(d, m2p_strict) )
>          fill_ro_mpt(mfn);
>      curr->arch.guest_table = pagetable_from_mfn(mfn);
>      update_cr3(curr);
> @@ -3231,30 +3231,24 @@ int new_guest_cr3(mfn_t mfn)
>      {
>          struct page_info *page = mfn_to_page(old_base_mfn);
>  
> -        if ( paging_mode_refcounts(d) )
> -            put_page(page);
> -        else
> -            switch ( rc = put_page_and_type_preemptible(page) )
> -            {
> -            case -EINTR:
> -            case -ERESTART:
> -                curr->arch.old_guest_ptpg = NULL;
> -                curr->arch.old_guest_table = page;
> -                curr->arch.old_guest_table_partial = (rc == -ERESTART);
> -                rc = -ERESTART;
> -                break;
> -            default:
> -                BUG_ON(rc);
> -                break;
> -            }
> +        switch ( rc = put_page_and_type_preemptible(page) )
> +        {
> +        case -EINTR:
> +        case -ERESTART:
> +            curr->arch.old_guest_ptpg = NULL;
> +            curr->arch.old_guest_table = page;
> +            curr->arch.old_guest_table_partial = (rc == -ERESTART);
> +            rc = -ERESTART;
> +            break;
> +        default:
> +            BUG_ON(rc);
> +            break;
> +        }
>      }
>  
>      return rc;
> -#else
> -    ASSERT_UNREACHABLE();
> -    return -EINVAL;
> -#endif
>  }
> +#endif
>  
>  #ifdef CONFIG_PV
>  static int vcpumask_to_pcpumask(
> 
> 



From xen-devel-bounces@lists.xenproject.org Fri Jul 31 15:06:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 15:06:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1Wbs-0000Uk-0M; Fri, 31 Jul 2020 15:06:12 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=PAbA=BK=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1k1Wbq-0000Uf-72
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 15:06:10 +0000
X-Inumbo-ID: 5d2ef908-d33f-11ea-abd0-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 5d2ef908-d33f-11ea-abd0-12813bfff9fa;
 Fri, 31 Jul 2020 15:06:09 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=c01nQpnuMNKDRMqpXstX0sq2XzDK1emqrSyXjhCI+7Q=; b=gV77bL5olyvAGxLEE0MDSf3+Xy
 p8npeRT6AgeqoaI3J8za1Vghb262jeBlYU0nMv7UjPPza+e5/ewzqEOnN7wvxwUiPF4p7E6uBROIC
 BmMXeUNmbb95Y7mn/SzHqu7Z7nUysgs10GaBAcD8mHqjCSACjssDiuTqGMl+DzqasDPM=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1k1Wbm-0004Vd-D0; Fri, 31 Jul 2020 15:06:06 +0000
Received: from 54-240-197-230.amazon.com ([54.240.197.230]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1k1Wbl-0008TT-Vu; Fri, 31 Jul 2020 15:06:06 +0000
Subject: Re: [PATCH v3] xen/arm: Convert runstate address during hypcall
To: Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Jan Beulich <jbeulich@suse.com>
References: <3911d221ce9ed73611b93aa437b9ca227d6aa201.1596099067.git.bertrand.marquis@arm.com>
 <f48f81d5-589e-3f75-1044-583114bf497e@xen.org>
 <d8eb8052-6370-7484-1c9a-f90d83396fa1@suse.com>
 <5301A49B-3404-4AC2-B04E-2BB969BABEED@arm.com>
From: Julien Grall <julien@xen.org>
Message-ID: <b59494b5-866e-30d9-7dfc-a4aa6366a91e@xen.org>
Date: Fri, 31 Jul 2020 16:06:03 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:68.0)
 Gecko/20100101 Thunderbird/68.11.0
MIME-Version: 1.0
In-Reply-To: <5301A49B-3404-4AC2-B04E-2BB969BABEED@arm.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Xen-devel <xen-devel@lists.xenproject.org>, nd <nd@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi Bertrand,

On 31/07/2020 14:09, Bertrand Marquis wrote:
> 
> 
>> On 31 Jul 2020, at 14:19, Jan Beulich <jbeulich@suse.com> wrote:
>>
>> On 30.07.2020 22:50, Julien Grall wrote:
>>> On 30/07/2020 11:24, Bertrand Marquis wrote:
>>>> At the moment on Arm, a Linux guest running with KTPI enabled will
>>>> cause the following error when a context switch happens in user mode:
>>>> (XEN) p2m.c:1890: d1v0: Failed to walk page-table va 0xffffff837ebe0cd0
>>>>
>>>> The error is caused by the virtual address for the runstate area
>>>> registered by the guest only being accessible when the guest is running
>>>> in kernel space when KPTI is enabled.
>>>>
>>>> To solve this issue, this patch is doing the translation from virtual
>>>> address to physical address during the hypercall and mapping the
>>>> required pages using vmap. This is removing the conversion from virtual
>>>> to physical address during the context switch which is solving the
>>>> problem with KPTI.
>>>
>>> To echo what Jan said on the previous version, this is a change in a
>>> stable ABI and therefore may break existing guest. FAOD, I agree in
>>> principle with the idea. However, we want to explain why breaking the
>>> ABI is the *only* viable solution.
>>>
>>>  From my understanding, it is not possible to fix without an ABI
>>> breakage because the hypervisor doesn't know when the guest will switch
>>> back from userspace to kernel space.
>>
>> And there's also no way to know on Arm, by e.g. enabling a suitable
>> intercept?

There is no easy way to do it. You might be able to route all EL0
exceptions to EL2 using HCR_EL2.TGE, but this basically disables EL1
(kernel space). The amount of work required and the overhead are
likely not worth it.

> 
> An intercept would mean that Xen gets a notice whenever a guest is switching
> from kernel mode to user mode.
> There is nothing in this process which could be intercepted by Xen, apart
> from maybe trapping all accesses to MMU registers, which would be very
> complex and slow.

I agree. Although, even if it wasn't slow, there is no guarantee that 
any of those registers would be accessed during the switch.

You could implement a "dumb" KPTI by just removing the mappings from the 
page-tables.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Jul 31 15:13:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 15:13:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1WiW-0001OR-Oa; Fri, 31 Jul 2020 15:13:04 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OD0g=BK=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1k1WiW-0001OM-9O
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 15:13:04 +0000
X-Inumbo-ID: 5301964c-d340-11ea-abd1-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 5301964c-d340-11ea-abd1-12813bfff9fa;
 Fri, 31 Jul 2020 15:13:01 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=IrLi+5Px2HB18wWvqXBckJFDGIUmwNnZbJhKZZfk/lc=; b=aDkbPeUgo1mqYm8nllN5guZaO
 YACkfmcOb75itdpi2v8j4RiyZ8H1OjI6H4g0O5UAPp7K2jJyrWKOa2JAexGVDOdE4cNvmV9yWgEp5
 ZPPX/aJkFDx5dw8vmS0s/WZIETcz4oRRgnoeZyDOEVRKn7xPoIwkCfqt2fMywLgWrn3CQ=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1k1WiS-0004ez-Ra; Fri, 31 Jul 2020 15:13:00 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1k1WiS-0007Ds-Gj; Fri, 31 Jul 2020 15:13:00 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1k1WiS-0005bv-G3; Fri, 31 Jul 2020 15:13:00 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-152315-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 152315: all pass - PUSHED
X-Osstest-Versions-This: ovmf=7f79b736b0a57da71d87c987357db0227cd16ac6
X-Osstest-Versions-That: ovmf=e848b58d7c85293cd4121287abcea2d22a4f0620
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 31 Jul 2020 15:13:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 152315 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/152315/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 7f79b736b0a57da71d87c987357db0227cd16ac6
baseline version:
 ovmf                 e848b58d7c85293cd4121287abcea2d22a4f0620

Last test of basis   152277  2020-07-29 04:16:17 Z    2 days
Testing same since   152315  2020-07-31 03:12:00 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Shenglei Zhang <shenglei.zhang@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   e848b58d7c..7f79b736b0  7f79b736b0a57da71d87c987357db0227cd16ac6 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Fri Jul 31 15:17:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 15:17:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1Wmm-0001YA-HV; Fri, 31 Jul 2020 15:17:28 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=oG5j=BK=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1k1Wml-0001Y5-F8
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 15:17:27 +0000
X-Inumbo-ID: f0b8a51a-d340-11ea-8e58-bc764e2007e4
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f0b8a51a-d340-11ea-8e58-bc764e2007e4;
 Fri, 31 Jul 2020 15:17:26 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1596208646;
 h=subject:to:cc:references:from:message-id:date:
 mime-version:in-reply-to:content-transfer-encoding;
 bh=/w9QrzdbXq3+QWS8jsdjJCrxPabXgxbpAuteD8IFlvE=;
 b=ayJL8bueIaDtInrMEEAEnj2A54OMhJB3443eNUKL51j7LKatuuPWhk8+
 udfLXM9JhHncPQPjT6QwBNqmo6dslPZuZ0aPvE2TU1Xe3uRYnzcuaz4Pq
 G9fGDRn1Mf5DW5wwTEhpep/I4BK/arTas8T7TtYgUuGNmqDj+YvsdBwc3 M=;
Authentication-Results: esa1.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: DTdIo7pTNrRNl3JCjI6HmcJqhFibRl2T7DVEAxyTEZGAeMx/KlkyNb5TVTddiEgBrg9k0vBdaM
 4np50gDy1jtv9XajAV5yafuywsG9UmAt670gRe80LGSmsTjhpnF5vxUm2deTwBWV1zd9Bq4MPi
 zaHiniOMW55mi7N2vYJD5ZMPn/Lh5XFNOGar5RlG61VGRrSVRD+rvYddli9+0zWqvtDksWr6xv
 mNXW0fr56qsg2GMCYzKPm8y9Yc/Az6zq5VgFrrLmvcNu3PM0qpUjFxfEHqaOYiWjf0SoM3m4Wo
 kkI=
X-SBRS: 3.7
X-MesageID: 23969009
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,418,1589256000"; d="scan'208";a="23969009"
Subject: Re: Ping: [PATCH 3/5] x86/PV: drop a few misleading
 paging_mode_refcounts() checks
To: Jan Beulich <jbeulich@suse.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <a4dc8db4-0388-a922-838e-42c6f4635639@suse.com>
 <9f8d0c4d-dec2-0175-09df-51d5e11c88e1@suse.com>
 <bc2c4ec4-8703-c7a7-76b6-b79e55bca49e@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <bd8e8dd1-ea1d-039b-d96a-69a4d5443b65@citrix.com>
Date: Fri, 31 Jul 2020 16:17:21 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <bc2c4ec4-8703-c7a7-76b6-b79e55bca49e@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Tim Deegan <tim@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 31/07/2020 15:58, Jan Beulich wrote:
> On 15.07.2020 11:59, Jan Beulich wrote:
>> The filling and cleaning up of v->arch.guest_table in new_guest_cr3()
>> was apparently inconsistent so far: There was a type ref acquired
>> unconditionally for the new top level page table, but the dropping of
>> the old type ref was conditional upon !paging_mode_refcounts(). Mirror
>> this also to arch_set_info_guest().
>>
>> Also move new_guest_cr3()'s #ifdef to around the function - both callers
>> now get built only when CONFIG_PV, i.e. no need to retain a stub.
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> While I've got an ack from Tim, I think I need either an ack from
> Andrew or someone's R-b in order to commit this.

Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>



From xen-devel-bounces@lists.xenproject.org Fri Jul 31 15:44:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 15:44:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1XCH-00044X-Mm; Fri, 31 Jul 2020 15:43:49 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=RrDm=BK=citrix.com=ian.jackson@srs-us1.protection.inumbo.net>)
 id 1k1XCG-00044S-DZ
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 15:43:48 +0000
X-Inumbo-ID: 9e27f1f0-d344-11ea-abdc-12813bfff9fa
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 9e27f1f0-d344-11ea-abdc-12813bfff9fa;
 Fri, 31 Jul 2020 15:43:46 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1596210226;
 h=from:mime-version:content-transfer-encoding:message-id:
 date:to:cc:subject:in-reply-to:references;
 bh=ju2I/12vZhDvJEYkrAvMnqF4YhBzZ/juOIuVKzFJITc=;
 b=cGH4Cwezuzu2445bVl9kOkKNYCW4Ww1goWCyRBEFqQvLxqyrHjWHVu08
 R434/QZc3DU5MJherO9JDtEc4dZr2/rVjsAmdRnfbQtwUY4DNlX8KkDUh
 Z4Ez5qKDPnDLGSz+m4d9F5SpSBAplOQSU0yZV7FTOHVn88V5oa3Rz4ROg g=;
Authentication-Results: esa4.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: wPtrGNpzJe90TRtXKcvePH2T7zpctB+mpZoXOMyU4AgL7niD4giJ6WlNMjpUbkHL7R+5Qw/c/Y
 nbeUnsUt9DlufPzjPLDhogIJ6vHta/VrWKX+Kgq3o51HoAe+BRNX/aiHu22/jbC21xbUoAVtjC
 H24FS1bhKInRDIHEHpdPT3o1NSVzB+AmpsvrN1F2pvjx8cqoWaozm0q5EKyJyJL9prLaMF6qrK
 urpWk/iEIJLSWS5ZVC0jFp1s2r9tdaIL32h3JinuPVyioIMqeBApZDJxU/feSC3jc3ZHzDuKAA
 Vec=
X-SBRS: 3.7
X-MesageID: 24517364
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,418,1589256000"; d="scan'208";a="24517364"
From: Ian Jackson <ian.jackson@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Message-ID: <24356.15406.68578.77965@mariner.uk.xensource.com>
Date: Fri, 31 Jul 2020 16:43:42 +0100
To: George Dunlap <George.Dunlap@citrix.com>
Subject: Re: [OSSTEST PATCH v2 08/41] sg-report-flight: Ask the db for flights
 of interest
In-Reply-To: <391CB71B-3587-40C1-BE6E-F01A6473141D@citrix.com>
References: <20200731113820.5765-1-ian.jackson@eu.citrix.com>
 <20200731113820.5765-9-ian.jackson@eu.citrix.com>
 <391CB71B-3587-40C1-BE6E-F01A6473141D@citrix.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

George Dunlap writes ("Re: [OSSTEST PATCH v2 08/41] sg-report-flight: Ask the db for flights of interest"):
> > On Jul 31, 2020, at 12:37 PM, Ian Jackson <ian.jackson@eu.citrix.com> wrote:
> > Specifically, we narrow the initial query to flights which have at
> > least some job with the built_revision_foo we are looking for.
> > 
> > This condition is strictly broader than that implemented inside the
> > flight search loop, so there is no functional change.
> 
> Assuming this is true, that job / runvar is filtered after extracting this information, then...
...
> …I agree that this should introduce no other changes.
> 
> Reviewed-by: George Dunlap <george.dunlap@citrix.com>

Thanks.

Just to convince myself, I ran through the argument based on the perl
code.  I found a lacuna.

1. The job of findaflight is to find a flight, and it doesn't have
   significant side effects - just a return value.

2. If it returns a flight from the loop, $whynot must have been
   undef.  $whynot is never unset.

Consider some tree in %{ $specver{$thisthat} }.

3. If @revisions is 0 for that tree, $whynot is set.  Since it wasn't,
   one of the two queries $revisionsq or $revisionsosstestq must have
   returned some rows.

4. Furthermore, none of those rows can have matched the $wronginfo
   grep; if any had, $whynot would have been set.  Any row whose val
   doesn't contain a colon, and which doesn't end up in $wronginfo,
   had a val equal to the requested specver.

5. Colons in this field appear only in mercurial revisions.  These are
   now obsolete - we have no mercurial trees.  A consequence of this
   commit is actually that we should explicitly abolish mercurial
   support, at least pending a change to osstest to arrange for the
   val column to contain only the hash part and not the number part.

6. Together, these conditions mean that if $whynot wasn't set,
   there must have been some row whose val matched the specver.

7. Both the $revisionsq and $revisionsosstestq queries take a flight
   bound variable condition.  This is bound by a value that came out
   of @binfos.  @binfos is made from %binfos, where the flight number
   is the key.  %binfos is populated by the @binfos_todo loop, where
   it gets the flight number from a @binfos_todos entry - but it
   filters them for $bflight == $tflight.

8. So some row must have matched the flight, and the specver, and
   of course the name.  This is precisely the new condition.

I think this means I should put a commit earlier in this series which
disables mercurial support until the colon version situation is
rationalised.

Ian.


From xen-devel-bounces@lists.xenproject.org Fri Jul 31 15:55:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 15:55:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1XMz-00055z-2k; Fri, 31 Jul 2020 15:54:53 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nr0X=BK=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1k1XMy-00055u-F3
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 15:54:52 +0000
X-Inumbo-ID: 2afeaa3a-d346-11ea-8e65-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2afeaa3a-d346-11ea-8e65-bc764e2007e4;
 Fri, 31 Jul 2020 15:54:51 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1k1XMw-0003MB-LC; Fri, 31 Jul 2020 16:54:50 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH] Disable mercurial support
Date: Fri, 31 Jul 2020 16:54:44 +0100
Message-Id: <20200731155444.2767-1-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: committers@xenproject.org, Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

This is in order that we can substantially simplify forthcoming
database changes.  If mercurial support were still desired, the right
thing to do would be to rework it now along the lines of this request.
But we haven't used it for some years.

It could be reenabled later, if this work were done then.  (Of course
there might be other bitrot already that we don't know about.)

CC: committers@xenproject.org
Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 Osstest/TestSupport.pm |  5 +++++
 sg-report-flight       | 11 ++++++++++-
 2 files changed, 15 insertions(+), 1 deletion(-)

diff --git a/Osstest/TestSupport.pm b/Osstest/TestSupport.pm
index 7eeac49f..faac106f 100644
--- a/Osstest/TestSupport.pm
+++ b/Osstest/TestSupport.pm
@@ -1661,6 +1661,11 @@ sub build_url_vcs ($) {
 	$tree = git_massage_url($tree);
     }
 
+    if ($vcs eq 'hg') {
+	die "mercurial support has rotted";
+	# to reinstate, git grep for "mercurial" and fix everything
+    }
+
     return ($tree, $vcs);
 }
 
diff --git a/sg-report-flight b/sg-report-flight
index 831917a9..49f7ba6a 100755
--- a/sg-report-flight
+++ b/sg-report-flight
@@ -299,7 +299,16 @@ END
                 last;
             }
             my ($wronginfo) = grep {
-                $_->[1]{val} !~ m/^(?: .*: )? $v /x;
+                $_->[1]{val} ne $v;
+                # Was once   $_->[1]{val} !~ m/^(?: .*: )? $v /x;
+		# to support stripping (local) changeset numbers from
+		# mercurial revisions in the val column.  But this
+		# does not work with our index query strategy.  To
+		# reinstate mercurial support, it will be necessary to
+		# either make the index query more complicated (eg an
+		# index on a substring of val) or to arrange for all
+		# the code to not ever store these revision counts in
+		# the db.  The latter is probably more correct.
             } @revisions;
 
             if (defined $wronginfo) {
-- 
2.20.1
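To make the replaced condition concrete, the difference between the old pattern and the new exact comparison can be sketched in isolation.  These are illustrative helpers only, not osstest code:

```perl
#!/usr/bin/perl -w
use strict;

# The old pattern tolerated an optional "<changeset-number>:" prefix,
# as mercurial revision vals carried; the new test demands an exact,
# index-friendly match.
sub val_ok_old {
    my ($val, $v) = @_;
    return $val =~ m/^(?: .*: )? \Q$v\E/x ? 1 : 0;
}

sub val_ok_new {
    my ($val, $v) = @_;
    return $val eq $v ? 1 : 0;
}
```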



From xen-devel-bounces@lists.xenproject.org Fri Jul 31 15:59:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 15:59:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1XR4-0005Fq-KL; Fri, 31 Jul 2020 15:59:06 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=oG5j=BK=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1k1XR3-0005Fl-BT
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 15:59:05 +0000
X-Inumbo-ID: c1d34971-d346-11ea-8e65-bc764e2007e4
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c1d34971-d346-11ea-8e65-bc764e2007e4;
 Fri, 31 Jul 2020 15:59:04 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1596211144;
 h=subject:to:cc:references:from:message-id:date:
 mime-version:in-reply-to:content-transfer-encoding;
 bh=Rj+yjfDtynwRGMOHPflgAH/QmhH8AaC6v2zvrjV9jC4=;
 b=bHfMGwGaPBg67AlLgwqL2dIRgGW6G+xyrNFIZOZCufPzz+t50Qt65qhG
 +yE8raeKBKr8T6MUeViXZEQkeCG3volPr4w69S0cD17lt2hshEpWzlW35
 p2NAe+mAbzl6mJ5zgegwwoo/zs6jYnMaebFclFjQQgP7GyklwMpjD9NaM o=;
Authentication-Results: esa6.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: lB2fu8k/XhNyde5cP8VBfCws3qdrOrigCS6I7rjMSRU5/MJg5cb9rheJTikGicG0tVE0NKrEbp
 hJm2qu5I6WLMMWzRU1OJILI//ixFAmNFdlfhYhmZgEjewWIJq+FgeVnEnJAa2kQSKWbo0R/2JY
 q4GjH1Yynx6O3vJho8bvubYbzPKaHYmrUsOT6xXaHv4my+gHWRyB1E54Sv57TD98QEvsOhVcOa
 PCVYUU2zIP5Lt/8RD+CqTqDrAnajIzYBVdHspHQdxQhH1bN1OUGDfagSRu+Pl/81qcDB/DeVhJ
 1Ms=
X-SBRS: 3.7
X-MesageID: 23963620
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.75,418,1589256000"; d="scan'208";a="23963620"
Subject: Re: Ping: [PATCH v2] x86emul: avoid assembler warning about .type not
 taking effect in test harness
To: Jan Beulich <jbeulich@suse.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <42875d48-10e4-cc88-70ac-8979fea2493c@suse.com>
 <bf92faf4-b323-d4be-ca31-5e065c576b9a@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <6178d558-1826-120d-51fd-4daee8712fa8@citrix.com>
Date: Fri, 31 Jul 2020 16:59:00 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <bf92faf4-b323-d4be-ca31-5e065c576b9a@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 31/07/2020 15:54, Jan Beulich wrote:
> On 14.07.2020 10:06, Jan Beulich wrote:
>> gcc re-orders top level blocks by default when optimizing. This
>> re-ordering results in all our .type directives getting emitted to the
>> assembly file first, followed by gcc's. The assembler warns about
>> attempts to change the type of a symbol when it was already set (and
>> when there's no intervening setting to "notype").
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>


From xen-devel-bounces@lists.xenproject.org Fri Jul 31 16:19:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 16:19:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1XkE-0007Xq-BS; Fri, 31 Jul 2020 16:18:54 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=F22U=BK=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1k1XkD-0007Xl-CK
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 16:18:53 +0000
X-Inumbo-ID: 8613ed60-d349-11ea-8e66-bc764e2007e4
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8613ed60-d349-11ea-8e66-bc764e2007e4;
 Fri, 31 Jul 2020 16:18:52 +0000 (UTC)
Received: from localhost (c-67-164-102-47.hsd1.ca.comcast.net [67.164.102.47])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256
 bits)) (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id BF86522B3F;
 Fri, 31 Jul 2020 16:18:51 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
 s=default; t=1596212332;
 bh=wGcVS68wxyzd53wI3zCnPCufYyT7AXC3KRChuAG9S88=;
 h=Date:From:To:cc:Subject:In-Reply-To:References:From;
 b=GRD67L9XqoJZFQ9gPcQwYZxo7jiVioPpnDygvv/5xQUDtmAP4vJlb70r83pFf0wK4
 CfZKmShJgoNRJLtBdkhRYCYdD+f1IhnQVMW2R/RkwWVFWCJ5q8aSH9hwrHq3A7jylC
 vU8Kp282ex1WsDdBRwtair4LDusvBHbR9l8688PE=
Date: Fri, 31 Jul 2020 09:18:51 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: George Dunlap <George.Dunlap@citrix.com>
Subject: Re: kernel-doc and xen.git
In-Reply-To: <F09D32F7-4826-421B-99A6-3E94756FFCEF@citrix.com>
Message-ID: <alpine.DEB.2.21.2007310918360.1767@sstabellini-ThinkPad-T480s>
References: <alpine.DEB.2.21.2007291644330.1767@sstabellini-ThinkPad-T480s>
 <9421ec73-1ec0-844f-0014-bd5a36a4036f@suse.com>
 <F09D32F7-4826-421B-99A6-3E94756FFCEF@citrix.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="8323329-1155002912-1596212332=:1767"
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 "committers@xenproject.org" <committers@xenproject.org>,
 Jan Beulich <jbeulich@suse.com>,
 "Bertrand.Marquis@arm.com" <Bertrand.Marquis@arm.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--8323329-1155002912-1596212332=:1767
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8BIT

On Fri, 31 Jul 2020, George Dunlap wrote:
> > On Jul 31, 2020, at 12:29 PM, Jan Beulich <jbeulich@suse.com> wrote:
> > 
> > On 30.07.2020 03:27, Stefano Stabellini wrote:
> >> Hi all,
> >> 
> >> I would like to ask for your feedback on the adoption of the kernel-doc
> >> format for in-code comments.
> >> 
> >> In the FuSa SIG we have started looking into FuSa documents for Xen. One
> >> of the things we are investigating are ways to link these documents to
> >> in-code comments in xen.git and vice versa.
> >> 
> >> In this context, Andrew Cooper suggested to have a look at "kernel-doc"
> >> [1] during one of the virtual beer sessions at the last Xen Summit.
> >> 
> >> I did give a look at kernel-doc and it is very promising. kernel-doc is
> >> a script that can generate nice rst text documents from in-code
> >> comments. (The generated rst files can then be used as input for sphinx
> >> to generate html docs.) The comment syntax [2] is simple and similar to
> >> Doxygen:
> >> 
> >>    /**
> >>     * function_name() - Brief description of function.
> >>     * @arg1: Describe the first argument.
> >>     * @arg2: Describe the second argument.
> >>     *        One can provide multiple line descriptions
> >>     *        for arguments.
> >>     */
> >> 
> >> kernel-doc is actually better than Doxygen because it is a much simpler
> >> tool, one we could customize to our needs and with predictable output.
> >> Specifically, we could add the tagging, numbering, and referencing
> >> required by FuSa requirement documents.
> >> 
> >> I would like your feedback on whether it would be good to start
> >> converting xen.git in-code comments to the kernel-doc format so that
> >> proper documents can be generated out of them. One day we could import
> >> kernel-doc into xen.git/scripts and use it to generate a set of html
> >> documents via sphinx.
> > 
> > How far is this intended to go? The example is a description of a
> > function's parameters, which is definitely fine (albeit I wonder
> > if there's a hidden implication then that _all_ functions
> > whatsoever are supposed to gain such comments). But the text just
> > says much more generally "in-code comments", which could mean all
> > of them. I'd consider the latter as likely going too far.
> 
> I took him to mean comments in the code at the moment, which describe some interface, but aren’t in kernel-doc format.  Naturally we wouldn’t want *all* comments to be stuffed into a document somewhere.

+1
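The quoted syntax is simple enough that extracting its fields can be sketched in a few lines of Perl.  This is a toy illustration, not the actual kernel-doc script, which handles continuation lines, sections and much more:

```perl
#!/usr/bin/perl -w
use strict;

# Toy parser for a kernel-doc-style comment.
# Simplification: multi-line argument descriptions are ignored.
sub parse_kdoc {
    my ($comment) = @_;
    my %doc = (args => {});
    for my $line (split /\n/, $comment) {
        if ($line =~ m/^\s*\*\s*(\w+)\(\)\s*-\s*(.*)$/) {
            # " * function_name() - Brief description."
            $doc{function} = $1;
            $doc{brief}    = $2;
        } elsif ($line =~ m/^\s*\*\s*\@(\w+):\s*(.*)$/) {
            # " * @arg: Description of the argument."
            $doc{args}{$1} = $2;
        }
    }
    return \%doc;
}
```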
--8323329-1155002912-1596212332=:1767--


From xen-devel-bounces@lists.xenproject.org Fri Jul 31 19:05:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 19:05:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1aKb-0004us-An; Fri, 31 Jul 2020 19:04:37 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OD0g=BK=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1k1aKa-0004ts-FJ
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 19:04:36 +0000
X-Inumbo-ID: a859d437-d360-11ea-ac23-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a859d437-d360-11ea-ac23-12813bfff9fa;
 Fri, 31 Jul 2020 19:04:29 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=4OoWHo8lZYt8XBlmil8gtutC7XJ6GhaPaUGbG8pwebY=; b=J6zFn3n1HrTbkZnXuhrKG0qGD
 TEMJ+r/fdjSi/QiyD82L9snptvjqT59tWnpYhRFcPM2iYO2SNsAJtHpHgNXtTPOArPsscgnFxFMXs
 4vgGDQfzQ1otMT/4QBRT1kaNYIPO2EtTXCOvB6f/Lz5s4zDCQnxpn2W/AJAu16yu7Bg6A=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1k1aKS-0001Yi-P6; Fri, 31 Jul 2020 19:04:28 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1k1aKS-0003wl-8t; Fri, 31 Jul 2020 19:04:28 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1k1aKS-0002ff-8C; Fri, 31 Jul 2020 19:04:28 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-152327-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 152327: tolerable all pass - PUSHED
X-Osstest-Failures: xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=a85f67b2658ed8032586b3a3e7cd78814d20aa4b
X-Osstest-Versions-That: xen=98bed5de1de3352c63cfe29a00f17e8d9ce72689
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 31 Jul 2020 19:04:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 152327 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/152327/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  a85f67b2658ed8032586b3a3e7cd78814d20aa4b
baseline version:
 xen                  98bed5de1de3352c63cfe29a00f17e8d9ce72689

Last test of basis   152310  2020-07-30 22:00:29 Z    0 days
Testing same since   152327  2020-07-31 15:02:14 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   98bed5de1d..a85f67b265  a85f67b2658ed8032586b3a3e7cd78814d20aa4b -> smoke


From xen-devel-bounces@lists.xenproject.org Fri Jul 31 19:27:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 19:27:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1agt-0006hr-7E; Fri, 31 Jul 2020 19:27:39 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OD0g=BK=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1k1agr-0006h6-Ga
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 19:27:37 +0000
X-Inumbo-ID: dfe00d64-d363-11ea-8e8d-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id dfe00d64-d363-11ea-8e8d-bc764e2007e4;
 Fri, 31 Jul 2020 19:27:30 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=VBhGWje2qvC0bvJykJ/91ZgcMk2qKZRxOXbXocEVjwU=; b=gPLyA9tOVeHjQztyo8a0hYQlb
 D4sfRv4XNe+YGNeoU+H3OGyiJ1dAcIC0Xo5bXR/5hnjsgzovQsZ8vHmUTbrPZH6JjHMS9j/0fQRXS
 h9o9xlqAdoAUCJKtw8iNigG9uwLWdB2OsRmS0YcNzD8RXkbW2vSwAr5LRHLTvw+P8xbIw=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1k1agj-00021V-Dc; Fri, 31 Jul 2020 19:27:29 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1k1agi-0005QP-VC; Fri, 31 Jul 2020 19:27:29 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1k1agi-0000CY-UU; Fri, 31 Jul 2020 19:27:28 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-152313-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 152313: regressions - FAIL
X-Osstest-Failures: linux-linus:test-amd64-i386-freebsd10-i386:xen-boot:fail:regression
 linux-linus:test-amd64-amd64-amd64-pvgrub:guest-localmigrate/x10:fail:regression
 linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
X-Osstest-Versions-This: linux=417385c47ef7ee0d4f48f63f70cca6c1ed6355f4
X-Osstest-Versions-That: linux=6ba1b005ffc388c2aeaddae20da29e4810dea298
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 31 Jul 2020 19:27:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 152313 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/152313/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-freebsd10-i386  7 xen-boot               fail REGR. vs. 152287
 test-amd64-amd64-amd64-pvgrub 17 guest-localmigrate/x10  fail REGR. vs. 152287

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 152287
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 152287
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 152287
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 152287
 test-armhf-armhf-xl-rtds     16 guest-start/debian.repeat    fail  like 152287
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 152287
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 152287
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 152287
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 152287
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass

version targeted for testing:
 linux                417385c47ef7ee0d4f48f63f70cca6c1ed6355f4
baseline version:
 linux                6ba1b005ffc388c2aeaddae20da29e4810dea298

Last test of basis   152287  2020-07-29 17:11:28 Z    2 days
Failing since        152303  2020-07-30 11:09:02 Z    1 days    2 attempts
Testing same since   152313  2020-07-31 02:28:59 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alexander Duyck <alexander.h.duyck@linux.intel.com>
  Ben Skeggs <bskeggs@redhat.com>
  Ben Skeggs <skeggsb@gmail.com>
  Biju Das <biju.das.jz@bp.renesas.com>
  Bjorn Helgaas <bhelgaas@google.com>
  Christoph Hellwig <hch@lst.de>
  Daniel Vetter <daniel.vetter@ffwll.ch>
  Dave Airlie <airlied@redhat.com>
  David Hildenbrand <david@redhat.com>
  Dominique Martinet <asmadeus@codewreck.org>
  Douglas Anderson <dianders@chromium.org>
  Grygorii Strashko <grygorii.strashko@ti.com>
  Guido Günther <agx@sigxcpu.org>
  Ingo Brunberg <ingo_brunberg@web.de>
  Jason Wang <jasowang@redhat.com>
  Jens Axboe <axboe@kernel.dk>
  Jitao Shi <jitao.shi@mediatek.com>
  Kai-Heng Feng <kai.heng.feng@canonical.com>
  Laurentiu Palcu <laurentiu.palcu@nxp.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Linus Walleij <linus.walleij@linaro.org>
  Marc Zyngier <maz@kernel.org>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Michael S. Tsirkin <mst@redhat.com>
  Paul Cercueil <paul@crapouillou.net>
  Paul Moore <paul@paul-moore.com>
  Pavel Begunkov <asml.silence@gmail.com>
  Qiushi Wu <wu000273@umn.edu>
  Randy Dunlap <rdunlap@infradead.org> # build-tested
  Robert Hancock <hancockrwd@gmail.com>
  Sagi Grimberg <sagi@grimberg.me>
  Sam Ravnborg <sam@ravnborg.org>
  Stephan Gerhold <stephan@gerhold.net>
  Steve Cohen <cohens@codeaurora.org>
  Thomas Zimmermann <tzimmermann@suse.de>
  Vinod Koul <vkoul@kernel.org> # tested on DragonBoard 410c
  Wang Hai <wanghai38@huawei.com>
  Weilong Chen <chenweilong@huawei.com>
  Willy Tarreau <w@1wt.eu>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 907 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Jul 31 23:03:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 23:03:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1e41-00083e-NH; Fri, 31 Jul 2020 23:03:45 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=F22U=BK=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1k1e41-00083Z-45
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 23:03:45 +0000
X-Inumbo-ID: 14f3bcd0-d382-11ea-8eb6-bc764e2007e4
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 14f3bcd0-d382-11ea-8eb6-bc764e2007e4;
 Fri, 31 Jul 2020 23:03:44 +0000 (UTC)
Received: from localhost (c-67-164-102-47.hsd1.ca.comcast.net [67.164.102.47])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256
 bits)) (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 437282072A;
 Fri, 31 Jul 2020 23:03:43 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
 s=default; t=1596236623;
 bh=MRghBlBbcTjwoN4MJ43/JXA2Ns4QiM5984Rn2Ro3UIM=;
 h=Date:From:To:cc:Subject:In-Reply-To:References:From;
 b=uUgqb4+eUZ4dXFmm9gz2orey5biYf0ntD7LlSJIVkK2RteFj5uP7xBGB7sCWbXcMr
 uQwbHKUx1D9zUkiKfQDKW8Gn+dEjr0N9xecE66Olrw/OZm5yOAa0mHodggFw5L3JBL
 r//1MYknjKm3gQFtW32oYwqg6qfbL31/FSitBwbI=
Date: Fri, 31 Jul 2020 16:03:42 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
Subject: Re: [PATCH v2] xen/arm: Convert runstate address during hypcall
In-Reply-To: <E39531EE-0265-4387-813D-22A57CD3F67B@arm.com>
Message-ID: <alpine.DEB.2.21.2007310935350.1767@sstabellini-ThinkPad-T480s>
References: <4647a019c7b42d40d3c2f5b0a3685954bea7f982.1595948219.git.bertrand.marquis@arm.com>
 <8d2d7f03-450c-d50c-630b-8608c6d42bb9@suse.com>
 <FCAB700B-4617-4323-BE1E-B80DDA1806C1@arm.com>
 <1b046f2c-05c8-9276-a91e-fd55ec098bed@suse.com>
 <alpine.DEB.2.21.2007291356060.1767@sstabellini-ThinkPad-T480s>
 <1a8bbcc7-9d0c-9669-db7b-e837af279027@suse.com>
 <73c8ade5-36a3-cc13-80b6-bda89e175cbb@xen.org>
 <6066b507-f956-8e7a-89f3-b21428b66d65@suse.com>
 <E39531EE-0265-4387-813D-22A57CD3F67B@arm.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Xen-devel <xen-devel@lists.xenproject.org>, nd <nd@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Fri, 31 Jul 2020, Bertrand Marquis wrote:
> > On 31 Jul 2020, at 12:18, Jan Beulich <jbeulich@suse.com> wrote:
> > 
> > On 31.07.2020 12:12, Julien Grall wrote:
> >> On 31/07/2020 07:39, Jan Beulich wrote:
> >>> We're fixing other issues without breaking the ABI. Where's the
> >>> problem of backporting the kernel side change (which I anticipate
> >>> to not be overly involved)?
> >> This means you can't take advantage of the runstate on existing Linux
> >> without any modification.
> >> 
> >>> If the plan remains to be to make an ABI breaking change,
> >> 
> >> From a theoretical PoV, this is an ABI breakage. However, I fail to see 
> >> how the added restrictions would affect OSes, at least on Arm.
> > 
> > "OSes" covering what? Just Linux?
> > 
> >> In particular, you can't change the VA -> PA on Arm without going 
> >> through an invalid mapping. So I wouldn't expect this to happen for the 
> >> runstate.
> >> 
> >> The only part that *may* be an issue is if the guest is registering the 
> >> runstate with an initially invalid VA. Although, I have yet to see that 
> >> in practice. Maybe you know?
> > 
> > I'm unaware of any such use, but this means close to nothing.
> > 
> >>> then I
> >>> think this will need an explicit vote.
> >> 
> >> I was under the impression that the two Arm maintainers (Stefano and I) 
> >> already agreed with the approach here. Therefore, given the ABI breakage 
> >> is only affecting Arm, why would we need a vote?
> > 
> > The problem here is of conceptual nature: You're planning to
> > make the behavior of a common hypercall diverge between
> > architectures, and in a retroactive fashion. Imo that's nothing
> > we should do even for new hypercalls, if _at all_ avoidable. If
> > we allow this here, we'll have a precedent that people later
> > may (and based on my experience will, sooner or later) reference,
> > to get their own change justified.

Please let's avoid "slippery slope" arguments
(https://en.wikipedia.org/wiki/Slippery_slope)

We shouldn't consider this instance as the first in a long series of bad
decisions on hypercall compatibility. Each new case, if there are any,
will have to be considered on its own merits. Also, let's
keep in mind that there have been no other cases in the last 8 years. (I
would like to repeat my support for hypercall ABI compatibility.)


I would also kindly ask not to put the discussion on a "conceptual"
level: there is no way to fix all guests and also keep compatibility.
From a conceptual point of view, it is already game over :-)


> After a discussion with Jan, he is proposing to have a guest config setting to
> turn the translation of the address during the hypercall on or off, and to add
> a global Xen command line parameter to set the global default behaviour.
> If this was done on arm it could also be done on x86: the current behaviour
> would be kept by default but could be modified by configuration.
> 
> @Jan: please correct me if I said something wrong.
> @others: what is your view on this solution?

Having an option to turn the new behavior on or off would be good to have
if we find a guest that actually requires the old behavior. Today we
don't know of any such cases. We have strong reasons to believe that
there aren't any on ARM (see Julien's explanation regarding the
temporarily invalid mappings). In fact, it is one of the factors that led
us to think this patch is the right approach.

That said, I am also OK with adding such a parameter now, but we need to
choose the default value carefully.


We need the new behavior as default on ARM because we need the fix to
work for all guests. I don't think we want to explain how you always
need to set config_foobar otherwise things don't work. It has to work
out of the box.

It would be nice if we had the same default on x86 too, although I
understand if Jan and Andrew don't want to make the same change on x86,
at least initially.



From xen-devel-bounces@lists.xenproject.org Fri Jul 31 23:03:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 31 Jul 2020 23:03:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1k1e3p-00082s-EC; Fri, 31 Jul 2020 23:03:33 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=F22U=BK=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1k1e3n-00082n-OC
 for xen-devel@lists.xenproject.org; Fri, 31 Jul 2020 23:03:31 +0000
X-Inumbo-ID: 0cd2032c-d382-11ea-ac72-12813bfff9fa
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0cd2032c-d382-11ea-ac72-12813bfff9fa;
 Fri, 31 Jul 2020 23:03:30 +0000 (UTC)
Received: from localhost (c-67-164-102-47.hsd1.ca.comcast.net [67.164.102.47])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256
 bits)) (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 76EFF2072A;
 Fri, 31 Jul 2020 23:03:29 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
 s=default; t=1596236609;
 bh=gS8LaeQahirFzsum+tAEXBlS4nE/w36LX8Jon4RvuHQ=;
 h=Date:From:To:cc:Subject:In-Reply-To:References:From;
 b=1gZUPV7mvNbmvy0wnFgKj8uDmxjqhNi0P9yAk3ZZZxXBHKlNalyIBQUDK//uiRWVT
 gUu/idUsf3g2X2/2c2Rw5yQBy9qQS4LC1R2Gt1GfF8Im+Fzue2lBO1/vtuihua78nF
 QJRwZX6MlLK3UYavQXv+iHLWddbS+KedjuAdvf/s=
Date: Fri, 31 Jul 2020 16:03:29 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
Subject: Re: [PATCH v3] xen/arm: Convert runstate address during hypcall
In-Reply-To: <CB9F22FE-BEFF-4A36-BC81-A18F9E0F9D7C@arm.com>
Message-ID: <alpine.DEB.2.21.2007311018330.1767@sstabellini-ThinkPad-T480s>
References: <3911d221ce9ed73611b93aa437b9ca227d6aa201.1596099067.git.bertrand.marquis@arm.com>
 <f48f81d5-589e-3f75-1044-583114bf497e@xen.org>
 <CB9F22FE-BEFF-4A36-BC81-A18F9E0F9D7C@arm.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Xen-devel <xen-devel@lists.xenproject.org>, nd <nd@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Fri, 31 Jul 2020, Bertrand Marquis wrote:
> Sorry, I missed some points in my previous answer.
> 
> > On 30 Jul 2020, at 22:50, Julien Grall <julien@xen.org> wrote:
> > 
> > Hi Bertrand,
> > 
> > To avoid extra work on your side, I would recommend waiting a bit before sending a new version. It would be good to at least settle the conversation on v2 regarding the approach taken.
> > 
> > On 30/07/2020 11:24, Bertrand Marquis wrote:
> >> At the moment on Arm, a Linux guest running with KPTI enabled will
> >> cause the following error when a context switch happens in user mode:
> >> (XEN) p2m.c:1890: d1v0: Failed to walk page-table va 0xffffff837ebe0cd0
> >> The error is caused by the virtual address for the runstate area
> >> registered by the guest only being accessible when the guest is running
> >> in kernel space when KPTI is enabled.
> >> To solve this issue, this patch is doing the translation from virtual
> >> address to physical address during the hypercall and mapping the
> >> required pages using vmap. This is removing the conversion from virtual
> >> to physical address during the context switch which is solving the
> >> problem with KPTI.
> > 
> > To echo what Jan said on the previous version, this is a change in a stable ABI and therefore may break existing guests. FAOD, I agree in principle with the idea. However, we want to explain why breaking the ABI is the *only* viable solution.
> > 
> > From my understanding, it is not possible to fix without an ABI breakage because the hypervisor doesn't know when the guest will switch back from userspace to kernel space. The risk is that the information provided by the runstate wouldn't be accurate, and could affect how the guest handles stolen time.
> > 
> > Additionally there are a few issues with the current interface:
> >   1) It is assuming the virtual address cannot be re-used by the userspace. Thankfully Linux has a split address space, but this may change with KPTI in place.
> >   2) When updating the page-tables, the guest has to go through an invalid mapping, so the translation may fail at any point.
> > 
> > IOW, the existing interface can lead to random memory corruption and inaccuracy of the stolen time.
> > 
> >> This is done only on the arm architecture; the behaviour on x86 is not
> >> modified by this patch and the address conversion is done as before
> >> during each context switch.
> >> This is introducing several limitations in comparison to the previous
> >> behaviour (on arm only):
> >> - if the guest is remapping the area at a different physical address, Xen
> >> will continue to update the area at the previous physical address. As
> >> the area is in kernel space and usually defined as a global variable, this
> >> is something which is believed not to happen. If this is required by a
> >> guest, it will have to call the hypercall with the new area (even if it
> >> is at the same virtual address).
> >> - the area needs to be mapped during the hypercall. For the same reasons
> >> as for the previous case, even if the area is registered for a different
> >> vcpu. It is believed that registering an area using an unmapped virtual
> >> address is not something that is done.
> > 
> > It is not clear whether the virtual address refers to the current vCPU or to the vCPU you register the runstate for. From the past discussion, I think you mean the former. It would be good to clarify.
> > 
> > Additionally, all the new restrictions should be documented in the public interface, so an OS developer can find the differences between the architectures.
> > 
> > To answer Jan's concern, we certainly don't know all the existing guest OSes; however, we also need to balance the benefit for the large majority of users.
> > 
> > From previous discussion, the current approach was deemed to be acceptable on Arm and, AFAICT, also x86 (see [1]).
> > 
> > TBH, I would rather see the approach be common. For that, we would need an agreement from Andrew and Jan on the approach here. Meanwhile, I think this is the best approach to address the concern from Arm users.
> > 
> >> inline functions in headers could not be used as the architecture
> >> domain.h is included before the global domain.h making it impossible
> >> to use the struct vcpu inside the architecture header.
> >> This should not have any performance impact as the hypercall is only
> >> called once per vcpu usually.
> >> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
> >> ---
> >>   Changes in v2
> >>     - use vmap to map the pages during the hypercall.
> >>     - reintroduce initial copy during hypercall.
> >>   Changes in v3
> >>     - Fix Coding style
> >>     - Fix vaddr printing on arm32
> >>     - use write_atomic to modify state_entry_time update bit (only
> >>     in guest structure as the bit is not used inside Xen copy)
> >> ---
> >>  xen/arch/arm/domain.c        | 161 ++++++++++++++++++++++++++++++-----
> >>  xen/arch/x86/domain.c        |  29 ++++++-
> >>  xen/arch/x86/x86_64/domain.c |   4 +-
> >>  xen/common/domain.c          |  19 ++---
> >>  xen/include/asm-arm/domain.h |   9 ++
> >>  xen/include/asm-x86/domain.h |  16 ++++
> >>  xen/include/xen/domain.h     |   5 ++
> >>  xen/include/xen/sched.h      |  16 +---
> >>  8 files changed, 206 insertions(+), 53 deletions(-)
> >> diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
> >> index 31169326b2..8b36946017 100644
> >> --- a/xen/arch/arm/domain.c
> >> +++ b/xen/arch/arm/domain.c
> >> @@ -19,6 +19,7 @@
> >>  #include <xen/sched.h>
> >>  #include <xen/softirq.h>
> >>  #include <xen/wait.h>
> >> +#include <xen/vmap.h>
> >>    #include <asm/alternative.h>
> >>  #include <asm/cpuerrata.h>
> >> @@ -275,36 +276,156 @@ static void ctxt_switch_to(struct vcpu *n)
> >>      virt_timer_restore(n);
> >>  }
> >>  -/* Update per-VCPU guest runstate shared memory area (if registered). */
> >> -static void update_runstate_area(struct vcpu *v)
> >> +static void cleanup_runstate_vcpu_locked(struct vcpu *v)
> >>  {
> >> -    void __user *guest_handle = NULL;
> >> +    if ( v->arch.runstate_guest )
> >> +    {
> >> +        vunmap((void *)((unsigned long)v->arch.runstate_guest & PAGE_MASK));
> >> +
> >> +        put_page(v->arch.runstate_guest_page[0]);
> >> +
> >> +        if ( v->arch.runstate_guest_page[1] )
> >> +            put_page(v->arch.runstate_guest_page[1]);
> >> +
> >> +        v->arch.runstate_guest = NULL;
> >> +    }
> >> +}
> >> +
> >> +void arch_vcpu_cleanup_runstate(struct vcpu *v)
> >> +{
> >> +    spin_lock(&v->arch.runstate_guest_lock);
> >> +
> >> +    cleanup_runstate_vcpu_locked(v);
> >> +
> >> +    spin_unlock(&v->arch.runstate_guest_lock);
> >> +}
> >> +
> >> +static int setup_runstate_vcpu_locked(struct vcpu *v, vaddr_t vaddr)
> >> +{
> >> +    unsigned int offset;
> >> +    mfn_t mfn[2];
> >> +    struct page_info *page;
> >> +    unsigned int numpages;
> >>      struct vcpu_runstate_info runstate;
> >> +    void *p;
> >>  -    if ( guest_handle_is_null(runstate_guest(v)) )
> >> -        return;
> >> +    /* user can pass a NULL address to unregister a previous area */
> >> +    if ( vaddr == 0 )
> >> +        return 0;
> >> +
> >> +    offset = vaddr & ~PAGE_MASK;
> >> +
> >> +    /* provided address must be aligned to a 64bit */
> >> +    if ( offset % alignof(struct vcpu_runstate_info) )
> > 
> > This new restriction wants to be explained in the commit message and public header.
> 
> ok
> 
> > 
> >> +    {
> >> +        gprintk(XENLOG_WARNING, "Cannot map runstate pointer at 0x%"PRIvaddr
> >> +                ": Invalid offset\n", vaddr);
> > 
> > We usually enforce 80 characters per line except for format strings, so it is easier to grep them in the code.
> 
> Ok, I will fix this one and the following ones.
> But here PRIvaddr would in fact break any attempt to grep something.
> 
> > 
> >> +        return -EINVAL;
> >> +    }
> >> +
> >> +    page = get_page_from_gva(v, vaddr, GV2M_WRITE);
> >> +    if ( !page )
> >> +    {
> >> +        gprintk(XENLOG_WARNING, "Cannot map runstate pointer at 0x%"PRIvaddr
> >> +                ": Page is not mapped\n", vaddr);
> >> +        return -EINVAL;
> >> +    }
> >> +
> >> +    mfn[0] = page_to_mfn(page);
> >> +    v->arch.runstate_guest_page[0] = page;
> >> +
> >> +    if ( offset > (PAGE_SIZE - sizeof(struct vcpu_runstate_info)) )
> >> +    {
> >> +        /* guest area is crossing pages */
> >> +        page = get_page_from_gva(v, vaddr + PAGE_SIZE, GV2M_WRITE);
> >> +        if ( !page )
> >> +        {
> >> +            put_page(v->arch.runstate_guest_page[0]);
> >> +            gprintk(XENLOG_WARNING,
> >> +                    "Cannot map runstate pointer at 0x%"PRIvaddr
> >> +                    ": 2nd Page is not mapped\n", vaddr);
> >> +            return -EINVAL;
> >> +        }
> >> +        mfn[1] = page_to_mfn(page);
> >> +        v->arch.runstate_guest_page[1] = page;
> >> +        numpages = 2;
> >> +    }
> >> +    else
> >> +    {
> >> +        v->arch.runstate_guest_page[1] = NULL;
> >> +        numpages = 1;
> >> +    }
> >>  -    memcpy(&runstate, &v->runstate, sizeof(runstate));
> >> +    p = vmap(mfn, numpages);
> >> +    if ( !p )
> >> +    {
> >> +        put_page(v->arch.runstate_guest_page[0]);
> >> +        if ( numpages == 2 )
> >> +            put_page(v->arch.runstate_guest_page[1]);
> >>  -    if ( VM_ASSIST(v->domain, runstate_update_flag) )
> >> +        gprintk(XENLOG_WARNING, "Cannot map runstate pointer at 0x%"PRIvaddr
> >> +                ": vmap error\n", vaddr);
> >> +        return -EINVAL;
> >> +    }
> >> +
> >> +    v->arch.runstate_guest = p + offset;
> >> +
> >> +    if (v == current)
> >> +        memcpy(v->arch.runstate_guest, &v->runstate, sizeof(v->runstate));
> >> +    else
> >>      {
> >> -        guest_handle = &v->runstate_guest.p->state_entry_time + 1;
> >> -        guest_handle--;
> >> -        runstate.state_entry_time |= XEN_RUNSTATE_UPDATE;
> >> -        __raw_copy_to_guest(guest_handle,
> >> -                            (void *)(&runstate.state_entry_time + 1) - 1, 1);
> >> -        smp_wmb();
> >> +        vcpu_runstate_get(v, &runstate);
> >> +        memcpy(v->arch.runstate_guest, &runstate, sizeof(v->runstate));
> >>      }
> >>  -    __copy_to_guest(runstate_guest(v), &runstate, 1);
> >> +    return 0;
> >> +}
> >> +
> >> +int arch_vcpu_setup_runstate(struct vcpu *v,
> >> +                             struct vcpu_register_runstate_memory_area area)
> >> +{
> >> +    int rc;
> >> +
> >> +    spin_lock(&v->arch.runstate_guest_lock);
> >> +
> >> +    /* cleanup if we are recalled */
> >> +    cleanup_runstate_vcpu_locked(v);
> >> +
> >> +    rc = setup_runstate_vcpu_locked(v, (vaddr_t)area.addr.v);
> >> +
> >> +    spin_unlock(&v->arch.runstate_guest_lock);
> >>  -    if ( guest_handle )
> >> +    return rc;
> >> +}
> >> +
> >> +
> >> +/* Update per-VCPU guest runstate shared memory area (if registered). */
> >> +static void update_runstate_area(struct vcpu *v)
> >> +{
> >> +    spin_lock(&v->arch.runstate_guest_lock);
> >> +
> >> +    if ( v->arch.runstate_guest )
> >>      {
> >> -        runstate.state_entry_time &= ~XEN_RUNSTATE_UPDATE;
> >> -        smp_wmb();
> >> -        __raw_copy_to_guest(guest_handle,
> >> -                            (void *)(&runstate.state_entry_time + 1) - 1, 1);
> >> +        if ( VM_ASSIST(v->domain, runstate_update_flag) )
> >> +        {
> >> +            v->runstate.state_entry_time |= XEN_RUNSTATE_UPDATE;
> >> +            write_atomic(&(v->arch.runstate_guest->state_entry_time),
> >> +                    v->runstate.state_entry_time);
> > 
> > NIT: You want to indent v-> at the same level as the argument from the first line.
> 
> Ok
> 
> > 
> > Also, I think you are missing a smp_wmb() here.
> 
> The atomic operation itself would not need a barrier.
> I do not see why you think a barrier is needed here.
> For the internal structure ?

We need to make sure the other end sees the XEN_RUNSTATE_UPDATE change
before the other changes. Otherwise, due to CPU write reordering, the
stores could be observed in the reverse order. (Technically the reader
would also need a read barrier, but that's a separate topic.)
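To make the ordering argument concrete, here is a minimal sketch of the writer-side protocol being discussed: raise the XEN_RUNSTATE_UPDATE bit, barrier, write the payload, barrier, then clear the bit. The struct layout and flag name mirror the Xen public ABI quoted above, but publish_runstate() is a hypothetical helper, and Xen's smp_wmb() is modeled here with C11 release fences purely for illustration.

```c
#include <stdatomic.h>
#include <stdint.h>

#define XEN_RUNSTATE_UPDATE (1ULL << 63)

/* Simplified stand-in for the shared vcpu_runstate_info area. */
struct vcpu_runstate_info {
    uint64_t state_entry_time;  /* top bit doubles as update-in-progress flag */
    uint64_t time[4];
};

/*
 * Writer side: publish a runstate snapshot so a concurrent reader can
 * detect a torn read by re-checking the UPDATE bit around its copy.
 */
static void publish_runstate(struct vcpu_runstate_info *shared,
                             const struct vcpu_runstate_info *snap)
{
    /* 1. Raise the UPDATE bit so the guest knows the area is in flux. */
    shared->state_entry_time |= XEN_RUNSTATE_UPDATE;
    atomic_thread_fence(memory_order_release);   /* models smp_wmb() */

    /* 2. Copy the payload; readers seeing the bit will retry. */
    for (int i = 0; i < 4; i++)
        shared->time[i] = snap->time[i];
    atomic_thread_fence(memory_order_release);   /* models smp_wmb() */

    /* 3. Publish the new entry time with the UPDATE bit cleared. */
    shared->state_entry_time = snap->state_entry_time & ~XEN_RUNSTATE_UPDATE;
}
```

Without the first fence, the payload stores in step 2 could become visible before the flag store in step 1, which is exactly the reordering the review comment warns about.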


From xen-devel-bounces@lists.xenproject.org Fri Jul 31 23:13:15 2020
From: osstest service owner <osstest-admin@xenproject.org>
To: xen-devel@lists.xenproject.org
Date: Fri, 31 Jul 2020 23:13:04 +0000
Message-ID: <osstest-152334-mainreport@xen.org>
Subject: [xen-unstable-smoke test] 152334: tolerable all pass - PUSHED

flight 152334 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/152334/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  81fd0d3ca4b2cd309403c6e8da662c325dd35750
baseline version:
 xen                  a85f67b2658ed8032586b3a3e7cd78814d20aa4b

Last test of basis   152327  2020-07-31 15:02:14 Z    0 days
Testing same since   152334  2020-07-31 20:01:29 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Olaf Hering <olaf@aepfle.de>
  Paul Durrant <pdurrant@amazon.com>
  Tim Deegan <tim@xen.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   a85f67b265..81fd0d3ca4  81fd0d3ca4b2cd309403c6e8da662c325dd35750 -> smoke


